DPDK patches and discussions
* [PATCH 00/21] add support for cpfl PMD in DPDK
@ 2022-12-23  1:55 Mingxia Liu
  2022-12-23  1:55 ` [PATCH 01/21] net/cpfl: support device initialization Mingxia Liu
                   ` (21 more replies)
  0 siblings, 22 replies; 263+ messages in thread
From: Mingxia Liu @ 2022-12-23  1:55 UTC (permalink / raw)
  To: dev; +Cc: jingjing.wu, beilei.xing, qi.z.zhang, Mingxia Liu

This patchset introduces the cpfl (Control Plane Function Library) PMD
for the Intel® IPU E2100's Configure Physical Function (Device ID: 0x1453).

The cpfl PMD inherits all the features of the idpf PMD, which follows
the ongoing standard data plane function spec:
https://www.oasis-open.org/committees/tc_home.php?wg_abbrev=idpf
In addition, it will support more device-specific hardware offload
features through DPDK's control path (e.g. hairpin, rte_flow …).

This patchset mainly focuses on feature parity with the idpf PMD.
To avoid duplicated code, it depends on the patchsets below, which
move the common part of net/idpf into common/idpf as a shared library.

This patchset is based on the idpf PMD code:
http://patches.dpdk.org/project/dpdk/list/?submitter=410&q=&delegate=&archive=&series=&state=*
http://patches.dpdk.org/project/dpdk/list/?submitter=2083&q=&delegate=&archive=&series=&state=*
http://patches.dpdk.org/project/dpdk/list/?submitter=2514&q=&delegate=&archive=&series=&state=

Mingxia Liu (21):
  net/cpfl: support device initialization
  net/cpfl: add Tx queue setup
  net/cpfl: add Rx queue setup
  net/cpfl: support device start and stop
  net/cpfl: support queue start
  net/cpfl: support queue stop
  net/cpfl: support queue release
  net/cpfl: support MTU configuration
  net/cpfl: support basic Rx data path
  net/cpfl: support basic Tx data path
  net/cpfl: support write back based on ITR expire
  net/cpfl: support RSS
  net/cpfl: support Rx offloading
  net/cpfl: support Tx offloading
  net/cpfl: add AVX512 data path for single queue model
  net/cpfl: support timestamp offload
  net/cpfl: add AVX512 data path for split queue model
  net/cpfl: add hw statistics
  net/cpfl: add RSS set/get ops
  net/cpfl: support single q scatter RX datapath
  net/cpfl: add xstats ops

 MAINTAINERS                             |    9 +
 doc/guides/nics/cpfl.rst                |   88 ++
 doc/guides/nics/features/cpfl.ini       |   17 +
 doc/guides/rel_notes/release_23_03.rst  |    5 +
 drivers/net/cpfl/cpfl_ethdev.c          | 1481 +++++++++++++++++++++++
 drivers/net/cpfl/cpfl_ethdev.h          |   94 ++
 drivers/net/cpfl/cpfl_logs.h            |   32 +
 drivers/net/cpfl/cpfl_rxtx.c            |  900 ++++++++++++++
 drivers/net/cpfl/cpfl_rxtx.h            |   44 +
 drivers/net/cpfl/cpfl_rxtx_vec_common.h |  111 ++
 drivers/net/cpfl/meson.build            |   38 +
 drivers/net/meson.build                 |    1 +
 12 files changed, 2820 insertions(+)
 create mode 100644 doc/guides/nics/cpfl.rst
 create mode 100644 doc/guides/nics/features/cpfl.ini
 create mode 100644 drivers/net/cpfl/cpfl_ethdev.c
 create mode 100644 drivers/net/cpfl/cpfl_ethdev.h
 create mode 100644 drivers/net/cpfl/cpfl_logs.h
 create mode 100644 drivers/net/cpfl/cpfl_rxtx.c
 create mode 100644 drivers/net/cpfl/cpfl_rxtx.h
 create mode 100644 drivers/net/cpfl/cpfl_rxtx_vec_common.h
 create mode 100644 drivers/net/cpfl/meson.build

-- 
2.25.1



* [PATCH 01/21] net/cpfl: support device initialization
  2022-12-23  1:55 [PATCH 00/21] add support for cpfl PMD in DPDK Mingxia Liu
@ 2022-12-23  1:55 ` Mingxia Liu
  2022-12-23  1:55 ` [PATCH 02/21] net/cpfl: add Tx queue setup Mingxia Liu
                   ` (20 subsequent siblings)
  21 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2022-12-23  1:55 UTC (permalink / raw)
  To: dev; +Cc: jingjing.wu, beilei.xing, qi.z.zhang, Mingxia Liu

Support device init and add the following dev ops (a short usage
sketch follows the list):
 - dev_configure
 - dev_close
 - dev_infos_get
 - link_update
 - dev_supported_ptypes_get
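
Once probed, each vport registers as a regular ethdev, so the new ops
are exercised through the standard ethdev API. A minimal sketch
(public rte_ethdev calls; assumes the vport probed as port 0, error
handling omitted):

    struct rte_eth_dev_info info;
    struct rte_eth_link link;
    uint16_t port_id = 0;                     /* assumed port number */

    rte_eth_dev_info_get(port_id, &info);     /* -> cpfl_dev_info_get */
    rte_eth_link_get_nowait(port_id, &link);  /* -> cpfl_dev_link_update */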

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 MAINTAINERS                            |   9 +
 doc/guides/nics/cpfl.rst               |  66 +++
 doc/guides/nics/features/cpfl.ini      |  12 +
 doc/guides/rel_notes/release_23_03.rst |   5 +
 drivers/net/cpfl/cpfl_ethdev.c         | 768 +++++++++++++++++++++++++
 drivers/net/cpfl/cpfl_ethdev.h         |  77 +++
 drivers/net/cpfl/cpfl_logs.h           |  32 ++
 drivers/net/cpfl/meson.build           |  14 +
 drivers/net/meson.build                |   1 +
 9 files changed, 984 insertions(+)
 create mode 100644 doc/guides/nics/cpfl.rst
 create mode 100644 doc/guides/nics/features/cpfl.ini
 create mode 100644 drivers/net/cpfl/cpfl_ethdev.c
 create mode 100644 drivers/net/cpfl/cpfl_ethdev.h
 create mode 100644 drivers/net/cpfl/cpfl_logs.h
 create mode 100644 drivers/net/cpfl/meson.build

diff --git a/MAINTAINERS b/MAINTAINERS
index 22ef2ea4b9..970acc5751 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -780,6 +780,15 @@ F: drivers/common/idpf/
 F: doc/guides/nics/idpf.rst
 F: doc/guides/nics/features/idpf.ini
 
+Intel cpfl
+M: Qi Zhang <qi.z.zhang@intel.com>
+M: Jingjing Wu <jingjing.wu@intel.com>
+M: Beilei Xing <beilei.xing@intel.com>
+T: git://dpdk.org/next/dpdk-next-net-intel
+F: drivers/net/cpfl/
+F: doc/guides/nics/cpfl.rst
+F: doc/guides/nics/features/cpfl.ini
+
 Intel igc
 M: Junfeng Guo <junfeng.guo@intel.com>
 M: Simei Su <simei.su@intel.com>
diff --git a/doc/guides/nics/cpfl.rst b/doc/guides/nics/cpfl.rst
new file mode 100644
index 0000000000..064c69ba7d
--- /dev/null
+++ b/doc/guides/nics/cpfl.rst
@@ -0,0 +1,66 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+   Copyright(c) 2022 Intel Corporation.
+
+.. include:: <isonum.txt>
+
+CPFL Poll Mode Driver
+=====================
+
+The [*EXPERIMENTAL*] cpfl PMD (**librte_net_cpfl**) provides poll mode driver support
+for Intel\ |reg| Infrastructure Processing Unit (Intel\ |reg| IPU) E2100.
+
+
+Linux Prerequisites
+-------------------
+
+Follow the DPDK :doc:`../linux_gsg/index` to set up the basic DPDK environment.
+
+To get better performance on Intel platforms,
+please follow the :doc:`../linux_gsg/nic_perf_intel_platform`.
+
+
+Pre-Installation Configuration
+------------------------------
+
+Runtime Config Options
+~~~~~~~~~~~~~~~~~~~~~~
+
+- ``vport`` (default ``0``)
+
+  The PMD supports the creation of multiple vports for one PCI device;
+  each vport corresponds to a single ethdev.
+  The user can specify which vport IDs to create, for example::
+
+    -a ca:00.0,vport=[0,2,3]
+
+  Then the PMD will create 3 vports (ethdevs) for device ``ca:00.0``.
+
+  If the parameter is not provided, vport 0 will be created by default.
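+
+  A hypothetical testpmd invocation requesting two vports
+  (illustrative only)::
+
+    dpdk-testpmd -a ca:00.0,vport=[0,1] -- -i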
+
+- ``rx_single`` (default ``0``)
+
+  The Intel\ |reg| IPU E2100 supports two Rx queue modes:
+  single queue mode and split queue mode.
+  The user can choose the Rx queue mode, for example::
+
+    -a ca:00.0,rx_single=1
+
+  Then the PMD will configure the Rx queues in single queue mode.
+  Otherwise, split queue mode is used by default.
+
+- ``tx_single`` (default ``0``)
+
+  The Intel\ |reg| IPU E2100 supports two Tx queue modes:
+  single queue mode and split queue mode.
+  The user can choose the Tx queue mode, for example::
+
+    -a ca:00.0,tx_single=1
+
+  Then the PMD will configure the Tx queues in single queue mode.
+  Otherwise, split queue mode is used by default.
+
+
+Driver compilation and testing
+------------------------------
+
+Refer to the document :doc:`build_and_test` for details.
\ No newline at end of file
diff --git a/doc/guides/nics/features/cpfl.ini b/doc/guides/nics/features/cpfl.ini
new file mode 100644
index 0000000000..a2d1ca9e15
--- /dev/null
+++ b/doc/guides/nics/features/cpfl.ini
@@ -0,0 +1,12 @@
+;
+; Supported features of the 'cpfl' network poll mode driver.
+;
+; Refer to default.ini for the full list of available PMD features.
+;
+; A feature with "P" indicates that it is only supported when the
+; non-vector path is selected.
+;
+[Features]
+Linux                = Y
+x86-32               = Y
+x86-64               = Y
diff --git a/doc/guides/rel_notes/release_23_03.rst b/doc/guides/rel_notes/release_23_03.rst
index b8c5b68d6c..465a25e91e 100644
--- a/doc/guides/rel_notes/release_23_03.rst
+++ b/doc/guides/rel_notes/release_23_03.rst
@@ -55,6 +55,11 @@ New Features
      Also, make sure to start the actual text at the margin.
      =======================================================
 
+* **Added Intel cpfl driver.**
+
+  Added the new ``cpfl`` net driver
+  for Intel\ |reg| Infrastructure Processing Unit (Intel\ |reg| IPU) E2100.
+  See the :doc:`../nics/cpfl` NIC guide for more details on this new driver.
 
 Removed Items
 -------------
diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
new file mode 100644
index 0000000000..7c3bc945e0
--- /dev/null
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -0,0 +1,768 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2022 Intel Corporation
+ */
+
+#include <rte_atomic.h>
+#include <rte_eal.h>
+#include <rte_ether.h>
+#include <rte_malloc.h>
+#include <rte_memzone.h>
+#include <rte_dev.h>
+#include <errno.h>
+#include <rte_alarm.h>
+
+#include "cpfl_ethdev.h"
+
+#define CPFL_TX_SINGLE_Q	"tx_single"
+#define CPFL_RX_SINGLE_Q	"rx_single"
+#define CPFL_VPORT		"vport"
+
+rte_spinlock_t cpfl_adapter_lock;
+/* A list for all adapters, one adapter matches one PCI device */
+struct cpfl_adapter_list cpfl_adapter_list;
+bool cpfl_adapter_list_init;
+
+static const char * const cpfl_valid_args[] = {
+	CPFL_TX_SINGLE_Q,
+	CPFL_RX_SINGLE_Q,
+	CPFL_VPORT,
+	NULL
+};
+
+static int
+cpfl_dev_link_update(struct rte_eth_dev *dev,
+		     __rte_unused int wait_to_complete)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct rte_eth_link new_link;
+
+	memset(&new_link, 0, sizeof(new_link));
+
+	switch (vport->link_speed) {
+	case 10:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_10M;
+		break;
+	case 100:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_100M;
+		break;
+	case 1000:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_1G;
+		break;
+	case 10000:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_10G;
+		break;
+	case 20000:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_20G;
+		break;
+	case 25000:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_25G;
+		break;
+	case 40000:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_40G;
+		break;
+	case 50000:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_50G;
+		break;
+	case 100000:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_100G;
+		break;
+	case 200000:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_200G;
+		break;
+	default:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+	}
+
+	new_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+	new_link.link_status = vport->link_up ? RTE_ETH_LINK_UP :
+		RTE_ETH_LINK_DOWN;
+	new_link.link_autoneg = !(dev->data->dev_conf.link_speeds &
+				  RTE_ETH_LINK_SPEED_FIXED);
+
+	return rte_eth_linkstatus_set(dev, &new_link);
+}
+
+static int
+cpfl_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_adapter *adapter = vport->adapter;
+
+	dev_info->max_rx_queues = adapter->caps.max_rx_q;
+	dev_info->max_tx_queues = adapter->caps.max_tx_q;
+	dev_info->min_rx_bufsize = CPFL_MIN_BUF_SIZE;
+	dev_info->max_rx_pktlen = CPFL_MAX_FRAME_SIZE;
+
+	dev_info->max_mtu = dev_info->max_rx_pktlen - CPFL_ETH_OVERHEAD;
+	dev_info->min_mtu = RTE_ETHER_MIN_MTU;
+
+	return 0;
+}
+
+static const uint32_t *
+cpfl_dev_supported_ptypes_get(struct rte_eth_dev *dev __rte_unused)
+{
+	static const uint32_t ptypes[] = {
+		RTE_PTYPE_L2_ETHER,
+		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN,
+		RTE_PTYPE_L3_IPV6_EXT_UNKNOWN,
+		RTE_PTYPE_L4_FRAG,
+		RTE_PTYPE_L4_UDP,
+		RTE_PTYPE_L4_TCP,
+		RTE_PTYPE_L4_SCTP,
+		RTE_PTYPE_L4_ICMP,
+		RTE_PTYPE_UNKNOWN
+	};
+
+	return ptypes;
+}
+
+static int
+cpfl_dev_configure(struct rte_eth_dev *dev)
+{
+	struct rte_eth_conf *conf = &dev->data->dev_conf;
+
+	if (conf->link_speeds & RTE_ETH_LINK_SPEED_FIXED) {
+		PMD_INIT_LOG(ERR, "Setting link speed is not supported");
+		return -ENOTSUP;
+	}
+
+	if (conf->txmode.mq_mode != RTE_ETH_MQ_TX_NONE) {
+		PMD_INIT_LOG(ERR, "Multi-queue TX mode %d is not supported",
+			     conf->txmode.mq_mode);
+		return -ENOTSUP;
+	}
+
+	if (conf->lpbk_mode != 0) {
+		PMD_INIT_LOG(ERR, "Loopback operation mode %d is not supported",
+			     conf->lpbk_mode);
+		return -ENOTSUP;
+	}
+
+	if (conf->dcb_capability_en != 0) {
+		PMD_INIT_LOG(ERR, "Priority Flow Control (PFC) is not supported");
+		return -ENOTSUP;
+	}
+
+	if (conf->intr_conf.lsc != 0) {
+		PMD_INIT_LOG(ERR, "LSC interrupt is not supported");
+		return -ENOTSUP;
+	}
+
+	if (conf->intr_conf.rxq != 0) {
+		PMD_INIT_LOG(ERR, "RXQ interrupt is not supported");
+		return -ENOTSUP;
+	}
+
+	if (conf->intr_conf.rmv != 0) {
+		PMD_INIT_LOG(ERR, "RMV interrupt is not supported");
+		return -ENOTSUP;
+	}
+
+	return 0;
+}
+
+static int
+cpfl_dev_close(struct rte_eth_dev *dev)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct cpfl_adapter_ext *adapter = CPFL_ADAPTER_TO_EXT(vport->adapter);
+
+	idpf_vport_deinit(vport);
+
+	adapter->cur_vports &= ~RTE_BIT32(vport->devarg_id);
+	adapter->cur_vport_nb--;
+	dev->data->dev_private = NULL;
+	adapter->vports[vport->sw_idx] = NULL;
+	rte_free(vport);
+
+	return 0;
+}
+
+static int
+insert_value(struct cpfl_devargs *devargs, uint16_t id)
+{
+	uint16_t i;
+
+	/* ignore duplicate */
+	for (i = 0; i < devargs->req_vport_nb; i++) {
+		if (devargs->req_vports[i] == id)
+			return 0;
+	}
+
+	if (devargs->req_vport_nb >= RTE_DIM(devargs->req_vports)) {
+		PMD_INIT_LOG(ERR, "Total vport number can't be > %d",
+			     CPFL_MAX_VPORT_NUM);
+		return -EINVAL;
+	}
+
+	devargs->req_vports[devargs->req_vport_nb] = id;
+	devargs->req_vport_nb++;
+
+	return 0;
+}
+
+static const char *
+parse_range(const char *value, struct cpfl_devargs *devargs)
+{
+	uint16_t lo, hi, i;
+	int n = 0;
+	int result;
+	const char *pos = value;
+
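+	/* Accept either a single id ("3") or an inclusive range ("3-5");
+	 * %n records how many characters were consumed so the caller can
+	 * resume scanning right after the matched token.
+	 */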
+	result = sscanf(value, "%hu%n-%hu%n", &lo, &n, &hi, &n);
+	if (result == 1) {
+		if (lo >= CPFL_MAX_VPORT_NUM)
+			return NULL;
+		if (insert_value(devargs, lo) != 0)
+			return NULL;
+	} else if (result == 2) {
+		if (lo > hi || hi >= CPFL_MAX_VPORT_NUM)
+			return NULL;
+		for (i = lo; i <= hi; i++) {
+			if (insert_value(devargs, i) != 0)
+				return NULL;
+		}
+	} else {
+		return NULL;
+	}
+
+	return pos + n;
+}
+
+static int
+parse_vport(const char *key, const char *value, void *args)
+{
+	struct cpfl_devargs *devargs = args;
+	const char *pos = value;
+
+	devargs->req_vport_nb = 0;
+
+	if (*pos == '[')
+		pos++;
+
+	while (1) {
+		pos = parse_range(pos, devargs);
+		if (pos == NULL) {
+			PMD_INIT_LOG(ERR, "invalid value:\"%s\" for key:\"%s\"",
+				     value, key);
+			return -EINVAL;
+		}
+		if (*pos != ',')
+			break;
+		pos++;
+	}
+
+	if (*value == '[' && *pos != ']') {
+		PMD_INIT_LOG(ERR, "invalid value:\"%s\" for key:\"%s\"",
+			     value, key);
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static int
+parse_bool(const char *key, const char *value, void *args)
+{
+	int *i = args;
+	char *end;
+	int num;
+
+	errno = 0;
+
+	num = strtoul(value, &end, 10);
+
+	if (errno == ERANGE || (num != 0 && num != 1)) {
+		PMD_INIT_LOG(ERR, "invalid value:\"%s\" for key:\"%s\", value must be 0 or 1",
+			value, key);
+		return -EINVAL;
+	}
+
+	*i = num;
+	return 0;
+}
+
+static int
+cpfl_parse_devargs(struct rte_pci_device *pci_dev, struct cpfl_adapter_ext *adapter,
+		   struct cpfl_devargs *cpfl_args)
+{
+	struct rte_devargs *devargs = pci_dev->device.devargs;
+	struct rte_kvargs *kvlist;
+	int i, ret;
+
+	cpfl_args->req_vport_nb = 0;
+
+	if (devargs == NULL)
+		return 0;
+
+	kvlist = rte_kvargs_parse(devargs->args, cpfl_valid_args);
+	if (kvlist == NULL) {
+		PMD_INIT_LOG(ERR, "invalid kvargs key");
+		return -EINVAL;
+	}
+
+	ret = rte_kvargs_process(kvlist, CPFL_VPORT, &parse_vport,
+				 cpfl_args);
+	if (ret != 0)
+		goto bail;
+
+	/* Check the parsed devargs (must run after the vport list is parsed). */
+	if (adapter->cur_vport_nb + cpfl_args->req_vport_nb >
+	    CPFL_MAX_VPORT_NUM) {
+		PMD_INIT_LOG(ERR, "Total vport number can't be > %d",
+			     CPFL_MAX_VPORT_NUM);
+		ret = -EINVAL;
+		goto bail;
+	}
+
+	for (i = 0; i < cpfl_args->req_vport_nb; i++) {
+		if (adapter->cur_vports & RTE_BIT32(cpfl_args->req_vports[i])) {
+			PMD_INIT_LOG(ERR, "Vport %d has been created",
+				     cpfl_args->req_vports[i]);
+			ret = -EINVAL;
+			goto bail;
+		}
+	}
+
+	ret = rte_kvargs_process(kvlist, CPFL_TX_SINGLE_Q, &parse_bool,
+				 &adapter->base.txq_model);
+	if (ret != 0)
+		goto bail;
+
+	ret = rte_kvargs_process(kvlist, CPFL_RX_SINGLE_Q, &parse_bool,
+				 &adapter->base.rxq_model);
+	if (ret != 0)
+		goto bail;
+
+bail:
+	rte_kvargs_free(kvlist);
+	return ret;
+}
+
+static struct idpf_vport *
+cpfl_find_vport(struct cpfl_adapter_ext *adapter, uint32_t vport_id)
+{
+	struct idpf_vport *vport;
+	int i;
+
+	for (i = 0; i < adapter->cur_vport_nb; i++) {
+		vport = adapter->vports[i];
+		if (vport->vport_id == vport_id)
+			return vport;
+	}
+
+	/* Not found: return NULL instead of the last vport inspected. */
+	return NULL;
+}
+
+static void
+cpfl_handle_event_msg(struct idpf_vport *vport, uint8_t *msg, uint16_t msglen)
+{
+	struct virtchnl2_event *vc_event = (struct virtchnl2_event *)msg;
+	struct rte_eth_dev *dev = (struct rte_eth_dev *)vport->dev;
+
+	if (msglen < sizeof(struct virtchnl2_event)) {
+		PMD_DRV_LOG(ERR, "Error event");
+		return;
+	}
+
+	switch (vc_event->event) {
+	case VIRTCHNL2_EVENT_LINK_CHANGE:
+		PMD_DRV_LOG(DEBUG, "VIRTCHNL2_EVENT_LINK_CHANGE");
+		vport->link_up = vc_event->link_status;
+		vport->link_speed = vc_event->link_speed;
+		cpfl_dev_link_update(dev, 0);
+		break;
+	default:
+		PMD_DRV_LOG(ERR, "unknown event received %u", vc_event->event);
+		break;
+	}
+}
+
+static void
+cpfl_handle_virtchnl_msg(struct cpfl_adapter_ext *adapter_ex)
+{
+	struct idpf_adapter *adapter = &adapter_ex->base;
+	struct idpf_dma_mem *dma_mem = NULL;
+	struct idpf_hw *hw = &adapter->hw;
+	struct virtchnl2_event *vc_event;
+	struct idpf_ctlq_msg ctlq_msg;
+	enum idpf_mbx_opc mbx_op;
+	struct idpf_vport *vport;
+	enum virtchnl_ops vc_op;
+	uint16_t pending = 1;
+	int ret;
+
+	while (pending) {
+		ret = idpf_ctlq_recv(hw->arq, &pending, &ctlq_msg);
+		if (ret) {
+			PMD_DRV_LOG(INFO, "Failed to read msg from virtual channel, ret: %d", ret);
+			return;
+		}
+
+		rte_memcpy(adapter->mbx_resp, ctlq_msg.ctx.indirect.payload->va,
+			   IDPF_DFLT_MBX_BUF_SIZE);
+
+		mbx_op = rte_le_to_cpu_16(ctlq_msg.opcode);
+		vc_op = rte_le_to_cpu_32(ctlq_msg.cookie.mbx.chnl_opcode);
+		adapter->cmd_retval = rte_le_to_cpu_32(ctlq_msg.cookie.mbx.chnl_retval);
+
+		switch (mbx_op) {
+		case idpf_mbq_opc_send_msg_to_peer_pf:
+			if (vc_op == VIRTCHNL2_OP_EVENT) {
+				if (ctlq_msg.data_len < sizeof(struct virtchnl2_event)) {
+					PMD_DRV_LOG(ERR, "Error event");
+					return;
+				}
+				vc_event = (struct virtchnl2_event *)adapter->mbx_resp;
+				vport = cpfl_find_vport(adapter_ex, vc_event->vport_id);
+				if (!vport) {
+					PMD_DRV_LOG(ERR, "Can't find vport.");
+					return;
+				}
+				cpfl_handle_event_msg(vport, adapter->mbx_resp,
+						      ctlq_msg.data_len);
+			} else {
+				if (vc_op == adapter->pend_cmd)
+					notify_cmd(adapter, adapter->cmd_retval);
+				else
+					PMD_DRV_LOG(ERR, "command mismatch, expect %u, get %u",
+						    adapter->pend_cmd, vc_op);
+
+				PMD_DRV_LOG(DEBUG, "Virtual channel response is received, "
+					    "opcode = %d", vc_op);
+			}
+			goto post_buf;
+		default:
+			PMD_DRV_LOG(DEBUG, "Request %u is not supported yet", mbx_op);
+		}
+	}
+
+post_buf:
+	if (ctlq_msg.data_len)
+		dma_mem = ctlq_msg.ctx.indirect.payload;
+	else
+		pending = 0;
+
+	ret = idpf_ctlq_post_rx_buffs(hw, hw->arq, &pending, &dma_mem);
+	if (ret && dma_mem)
+		idpf_free_dma_mem(hw, dma_mem);
+}
+
+static void
+cpfl_dev_alarm_handler(void *param)
+{
+	struct cpfl_adapter_ext *adapter = param;
+
+	cpfl_handle_virtchnl_msg(adapter);
+
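+	/* Re-arm so the mailbox is polled every CPFL_ALARM_INTERVAL (50 ms). */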
+	rte_eal_alarm_set(CPFL_ALARM_INTERVAL, cpfl_dev_alarm_handler, adapter);
+}
+
+static int
+cpfl_adapter_ext_init(struct rte_pci_device *pci_dev, struct cpfl_adapter_ext *adapter)
+{
+	struct idpf_adapter *base = &adapter->base;
+	struct idpf_hw *hw = &base->hw;
+	int ret = 0;
+
+	hw->hw_addr = (void *)pci_dev->mem_resource[0].addr;
+	hw->hw_addr_len = pci_dev->mem_resource[0].len;
+	hw->back = base;
+	hw->vendor_id = pci_dev->id.vendor_id;
+	hw->device_id = pci_dev->id.device_id;
+	hw->subsystem_vendor_id = pci_dev->id.subsystem_vendor_id;
+
+	strncpy(adapter->name, pci_dev->device.name, PCI_PRI_STR_SIZE);
+
+	ret = idpf_adapter_init(base);
+	if (ret != 0) {
+		PMD_INIT_LOG(ERR, "Failed to init adapter");
+		goto err_adapter_init;
+	}
+
+	rte_eal_alarm_set(CPFL_ALARM_INTERVAL, cpfl_dev_alarm_handler, adapter);
+
+	adapter->max_vport_nb = adapter->base.caps.max_vports;
+
+	adapter->vports = rte_zmalloc("vports",
+				      adapter->max_vport_nb *
+				      sizeof(*adapter->vports),
+				      0);
+	if (adapter->vports == NULL) {
+		PMD_INIT_LOG(ERR, "Failed to allocate vports memory");
+		ret = -ENOMEM;
+		goto err_get_ptype;
+	}
+
+	adapter->cur_vports = 0;
+	adapter->cur_vport_nb = 0;
+
+	adapter->used_vecs_num = 0;
+
+	return ret;
+
+err_get_ptype:
+	idpf_adapter_deinit(base);
+err_adapter_init:
+	return ret;
+}
+
+static const struct eth_dev_ops cpfl_eth_dev_ops = {
+	.dev_configure			= cpfl_dev_configure,
+	.dev_close			= cpfl_dev_close,
+	.dev_infos_get			= cpfl_dev_info_get,
+	.link_update			= cpfl_dev_link_update,
+	.dev_supported_ptypes_get	= cpfl_dev_supported_ptypes_get,
+};
+
+static uint16_t
+cpfl_vport_idx_alloc(struct cpfl_adapter_ext *ad)
+{
+	uint16_t vport_idx;
+	uint16_t i;
+
+	for (i = 0; i < ad->max_vport_nb; i++) {
+		if (ad->vports[i] == NULL)
+			break;
+	}
+
+	if (i == ad->max_vport_nb)
+		vport_idx = CPFL_INVALID_VPORT_IDX;
+	else
+		vport_idx = i;
+
+	return vport_idx;
+}
+
+static int
+cpfl_dev_vport_init(struct rte_eth_dev *dev, void *init_params)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct cpfl_vport_param *param = init_params;
+	struct cpfl_adapter_ext *adapter = param->adapter;
+	/* used to prepare the create vport virtchnl msg */
+	struct virtchnl2_create_vport create_vport_info;
+	int ret = 0;
+
+	dev->dev_ops = &cpfl_eth_dev_ops;
+	vport->adapter = &adapter->base;
+	vport->sw_idx = param->idx;
+	vport->devarg_id = param->devarg_id;
+	vport->dev = dev;
+
+	memset(&create_vport_info, 0, sizeof(create_vport_info));
+	ret = idpf_create_vport_info_init(vport, &create_vport_info);
+	if (ret != 0) {
+		PMD_INIT_LOG(ERR, "Failed to init vport req_info.");
+		goto err;
+	}
+
+	ret = idpf_vport_init(vport, &create_vport_info, dev->data);
+	if (ret != 0) {
+		PMD_INIT_LOG(ERR, "Failed to init vports.");
+		goto err;
+	}
+
+	adapter->vports[param->idx] = vport;
+	adapter->cur_vports |= RTE_BIT32(param->devarg_id);
+	adapter->cur_vport_nb++;
+
+	dev->data->mac_addrs = rte_zmalloc(NULL, RTE_ETHER_ADDR_LEN, 0);
+	if (dev->data->mac_addrs == NULL) {
+		PMD_INIT_LOG(ERR, "Cannot allocate mac_addr memory.");
+		ret = -ENOMEM;
+		goto err_mac_addrs;
+	}
+
+	rte_ether_addr_copy((struct rte_ether_addr *)vport->default_mac_addr,
+			    &dev->data->mac_addrs[0]);
+
+	return 0;
+
+err_mac_addrs:
+	adapter->vports[param->idx] = NULL;  /* reset */
+	idpf_vport_deinit(vport);
+err:
+	return ret;
+}
+
+static const struct rte_pci_id pci_id_cpfl_map[] = {
+	{ RTE_PCI_DEVICE(IDPF_INTEL_VENDOR_ID, IDPF_DEV_ID_CPF) },
+	{ .vendor_id = 0, /* sentinel */ },
+};
+
+static struct cpfl_adapter_ext *
+cpfl_find_adapter_ext(struct rte_pci_device *pci_dev)
+{
+	struct cpfl_adapter_ext *adapter;
+	int found = 0;
+
+	if (pci_dev == NULL)
+		return NULL;
+
+	rte_spinlock_lock(&cpfl_adapter_lock);
+	TAILQ_FOREACH(adapter, &cpfl_adapter_list, next) {
+		if (strncmp(adapter->name, pci_dev->device.name, PCI_PRI_STR_SIZE) == 0) {
+			found = 1;
+			break;
+		}
+	}
+	rte_spinlock_unlock(&cpfl_adapter_lock);
+
+	if (found == 0)
+		return NULL;
+
+	return adapter;
+}
+
+static void
+cpfl_adapter_ext_deinit(struct cpfl_adapter_ext *adapter)
+{
+	rte_eal_alarm_cancel(cpfl_dev_alarm_handler, adapter);
+	idpf_adapter_deinit(&adapter->base);
+
+	rte_free(adapter->vports);
+	adapter->vports = NULL;
+}
+
+static int
+cpfl_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
+	       struct rte_pci_device *pci_dev)
+{
+	struct cpfl_vport_param vport_param;
+	struct cpfl_adapter_ext *adapter;
+	struct cpfl_devargs devargs;
+	char name[RTE_ETH_NAME_MAX_LEN];
+	int i, retval;
+	bool first_probe = false;
+
+	if (!cpfl_adapter_list_init) {
+		rte_spinlock_init(&cpfl_adapter_lock);
+		TAILQ_INIT(&cpfl_adapter_list);
+		cpfl_adapter_list_init = true;
+	}
+
+	adapter = cpfl_find_adapter_ext(pci_dev);
+	if (adapter == NULL) {
+		first_probe = true;
+		adapter = rte_zmalloc("cpfl_adapter_ext",
+						sizeof(struct cpfl_adapter_ext), 0);
+		if (adapter == NULL) {
+			PMD_INIT_LOG(ERR, "Failed to allocate adapter.");
+			return -ENOMEM;
+		}
+
+		retval = cpfl_adapter_ext_init(pci_dev, adapter);
+		if (retval != 0) {
+			PMD_INIT_LOG(ERR, "Failed to init adapter.");
+			return retval;
+		}
+
+		rte_spinlock_lock(&cpfl_adapter_lock);
+		TAILQ_INSERT_TAIL(&cpfl_adapter_list, adapter, next);
+		rte_spinlock_unlock(&cpfl_adapter_lock);
+	}
+
+	retval = cpfl_parse_devargs(pci_dev, adapter, &devargs);
+	if (retval != 0) {
+		PMD_INIT_LOG(ERR, "Failed to parse private devargs");
+		goto err;
+	}
+
+	if (devargs.req_vport_nb == 0) {
+		/* If no vport devarg, create vport 0 by default. */
+		vport_param.adapter = adapter;
+		vport_param.devarg_id = 0;
+		vport_param.idx = cpfl_vport_idx_alloc(adapter);
+		if (vport_param.idx == CPFL_INVALID_VPORT_IDX) {
+			PMD_INIT_LOG(ERR, "No space for vport %u", vport_param.devarg_id);
+			return 0;
+		}
+		snprintf(name, sizeof(name), "cpfl_%s_vport_0",
+			 pci_dev->device.name);
+		retval = rte_eth_dev_create(&pci_dev->device, name,
+					    sizeof(struct idpf_vport),
+					    NULL, NULL, cpfl_dev_vport_init,
+					    &vport_param);
+		if (retval != 0)
+			PMD_DRV_LOG(ERR, "Failed to create default vport 0");
+	} else {
+		for (i = 0; i < devargs.req_vport_nb; i++) {
+			vport_param.adapter = adapter;
+			vport_param.devarg_id = devargs.req_vports[i];
+			vport_param.idx = cpfl_vport_idx_alloc(adapter);
+			if (vport_param.idx == CPFL_INVALID_VPORT_IDX) {
+				PMD_INIT_LOG(ERR, "No space for vport %u", vport_param.devarg_id);
+				break;
+			}
+			snprintf(name, sizeof(name), "cpfl_%s_vport_%d",
+				 pci_dev->device.name,
+				 devargs.req_vports[i]);
+			retval = rte_eth_dev_create(&pci_dev->device, name,
+						    sizeof(struct idpf_vport),
+						    NULL, NULL, cpfl_dev_vport_init,
+						    &vport_param);
+			if (retval != 0)
+				PMD_DRV_LOG(ERR, "Failed to create vport %d",
+					    vport_param.devarg_id);
+		}
+	}
+
+	return 0;
+
+err:
+	if (first_probe) {
+		rte_spinlock_lock(&cpfl_adapter_lock);
+		TAILQ_REMOVE(&cpfl_adapter_list, adapter, next);
+		rte_spinlock_unlock(&cpfl_adapter_lock);
+		cpfl_adapter_ext_deinit(adapter);
+		rte_free(adapter);
+	}
+	return retval;
+}
+
+static int
+cpfl_pci_remove(struct rte_pci_device *pci_dev)
+{
+	struct cpfl_adapter_ext *adapter = cpfl_find_adapter_ext(pci_dev);
+	uint16_t port_id;
+
+	/* Close all the ethdevs created on this rte_device,
+	 * found via RTE_ETH_FOREACH_DEV_OF.
+	 */
+	RTE_ETH_FOREACH_DEV_OF(port_id, &pci_dev->device) {
+		rte_eth_dev_close(port_id);
+	}
+
+	rte_spinlock_lock(&cpfl_adapter_lock);
+	TAILQ_REMOVE(&cpfl_adapter_list, adapter, next);
+	rte_spinlock_unlock(&cpfl_adapter_lock);
+	cpfl_adapter_ext_deinit(adapter);
+	rte_free(adapter);
+
+	return 0;
+}
+
+static struct rte_pci_driver rte_cpfl_pmd = {
+	.id_table	= pci_id_cpfl_map,
+	.drv_flags	= RTE_PCI_DRV_NEED_MAPPING,
+	.probe		= cpfl_pci_probe,
+	.remove		= cpfl_pci_remove,
+};
+
+/**
+ * Driver initialization routine.
+ * Invoked once at EAL init time.
+ * Register itself as the [Poll Mode] Driver of PCI devices.
+ */
+RTE_PMD_REGISTER_PCI(net_cpfl, rte_cpfl_pmd);
+RTE_PMD_REGISTER_PCI_TABLE(net_cpfl, pci_id_cpfl_map);
+RTE_PMD_REGISTER_KMOD_DEP(net_cpfl, "* igb_uio | vfio-pci");
+RTE_PMD_REGISTER_PARAM_STRING(net_cpfl,
+			      CPFL_TX_SINGLE_Q "=<0|1> "
+			      CPFL_RX_SINGLE_Q "=<0|1> "
+			      CPFL_VPORT "=[vport_set0,[vport_set1],...]");
+
+RTE_LOG_REGISTER_SUFFIX(cpfl_logtype_init, init, NOTICE);
+RTE_LOG_REGISTER_SUFFIX(cpfl_logtype_driver, driver, NOTICE);
diff --git a/drivers/net/cpfl/cpfl_ethdev.h b/drivers/net/cpfl/cpfl_ethdev.h
new file mode 100644
index 0000000000..e24ecb614b
--- /dev/null
+++ b/drivers/net/cpfl/cpfl_ethdev.h
@@ -0,0 +1,77 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2022 Intel Corporation
+ */
+
+#ifndef _CPFL_ETHDEV_H_
+#define _CPFL_ETHDEV_H_
+
+#include <stdint.h>
+#include <rte_malloc.h>
+#include <rte_spinlock.h>
+#include <rte_ethdev.h>
+#include <rte_kvargs.h>
+#include <ethdev_driver.h>
+#include <ethdev_pci.h>
+
+#include "cpfl_logs.h"
+
+#include <idpf_common_device.h>
+#include <idpf_common_virtchnl.h>
+#include <base/idpf_prototype.h>
+#include <base/virtchnl2.h>
+
+#define CPFL_MAX_VPORT_NUM	8
+
+#define CPFL_INVALID_VPORT_IDX	0xffff
+
+#define CPFL_MIN_BUF_SIZE	1024
+#define CPFL_MAX_FRAME_SIZE	9728
+
+#define CPFL_NUM_MACADDR_MAX	64
+
+#define CPFL_VLAN_TAG_SIZE	4
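+/* Worst-case L2 overhead: 14 B Ethernet header + 4 B CRC +
+ * two VLAN tags (QinQ) of 4 B each = 26 B in total.
+ */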
+#define CPFL_ETH_OVERHEAD \
+	(RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN + CPFL_VLAN_TAG_SIZE * 2)
+
+#define CPFL_ADAPTER_NAME_LEN	(PCI_PRI_STR_SIZE + 1)
+
+#define CPFL_ALARM_INTERVAL	50000 /* us */
+
+/* Device IDs */
+#define IDPF_DEV_ID_CPF			0x1453
+
+struct cpfl_vport_param {
+	struct cpfl_adapter_ext *adapter;
+	uint16_t devarg_id; /* arg id from user */
+	uint16_t idx;       /* index in adapter->vports[] */
+};
+
+/* Struct used when parsing driver-specific devargs */
+struct cpfl_devargs {
+	uint16_t req_vports[CPFL_MAX_VPORT_NUM];
+	uint16_t req_vport_nb;
+};
+
+struct cpfl_adapter_ext {
+	TAILQ_ENTRY(cpfl_adapter_ext) next;
+	struct idpf_adapter base;
+
+	char name[CPFL_ADAPTER_NAME_LEN];
+
+	struct idpf_vport **vports;
+	uint16_t max_vport_nb;
+
+	uint16_t cur_vports; /* bit mask of created vport */
+	uint16_t cur_vport_nb;
+
+	uint16_t used_vecs_num;
+};
+
+TAILQ_HEAD(cpfl_adapter_list, cpfl_adapter_ext);
+
+#define CPFL_DEV_TO_PCI(eth_dev)		\
+	RTE_DEV_TO_PCI((eth_dev)->device)
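+/* The idpf_adapter base is embedded in cpfl_adapter_ext, so the
+ * extended struct can be recovered from it with container_of().
+ */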
+#define CPFL_ADAPTER_TO_EXT(p)					\
+	container_of((p), struct cpfl_adapter_ext, base)
+
+#endif /* _CPFL_ETHDEV_H_ */
diff --git a/drivers/net/cpfl/cpfl_logs.h b/drivers/net/cpfl/cpfl_logs.h
new file mode 100644
index 0000000000..451bdfbd1d
--- /dev/null
+++ b/drivers/net/cpfl/cpfl_logs.h
@@ -0,0 +1,32 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2022 Intel Corporation
+ */
+
+#ifndef _CPFL_LOGS_H_
+#define _CPFL_LOGS_H_
+
+#include <rte_log.h>
+
+extern int cpfl_logtype_init;
+extern int cpfl_logtype_driver;
+
+#define PMD_INIT_LOG(level, ...) \
+	rte_log(RTE_LOG_ ## level, \
+		cpfl_logtype_init, \
+		RTE_FMT("%s(): " \
+			RTE_FMT_HEAD(__VA_ARGS__,) "\n", \
+			__func__, \
+			RTE_FMT_TAIL(__VA_ARGS__,)))
+
+#define PMD_DRV_LOG_RAW(level, ...) \
+	rte_log(RTE_LOG_ ## level, \
+		cpfl_logtype_driver, \
+		RTE_FMT("%s(): " \
+			RTE_FMT_HEAD(__VA_ARGS__,) "\n", \
+			__func__, \
+			RTE_FMT_TAIL(__VA_ARGS__,)))
+
+#define PMD_DRV_LOG(level, fmt, args...) \
+	PMD_DRV_LOG_RAW(level, fmt, ## args)
+
+#endif /* _CPFL_LOGS_H_ */
diff --git a/drivers/net/cpfl/meson.build b/drivers/net/cpfl/meson.build
new file mode 100644
index 0000000000..106cc97e60
--- /dev/null
+++ b/drivers/net/cpfl/meson.build
@@ -0,0 +1,14 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2022 Intel Corporation
+
+if is_windows
+    build = false
+    reason = 'not supported on Windows'
+    subdir_done()
+endif
+
+deps += ['common_idpf']
+
+sources = files(
+        'cpfl_ethdev.c',
+)
\ No newline at end of file
diff --git a/drivers/net/meson.build b/drivers/net/meson.build
index 6470bf3636..a8ca338875 100644
--- a/drivers/net/meson.build
+++ b/drivers/net/meson.build
@@ -13,6 +13,7 @@ drivers = [
         'bnxt',
         'bonding',
         'cnxk',
+        'cpfl',
         'cxgbe',
         'dpaa',
         'dpaa2',
-- 
2.25.1



* [PATCH 02/21] net/cpfl: add Tx queue setup
  2022-12-23  1:55 [PATCH 00/21] add support for cpfl PMD in DPDK Mingxia Liu
  2022-12-23  1:55 ` [PATCH 01/21] net/cpfl: support device initialization Mingxia Liu
@ 2022-12-23  1:55 ` Mingxia Liu
  2022-12-23  1:55 ` [PATCH 03/21] net/cpfl: add Rx " Mingxia Liu
                   ` (19 subsequent siblings)
  21 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2022-12-23  1:55 UTC (permalink / raw)
  To: dev; +Cc: jingjing.wu, beilei.xing, qi.z.zhang, Mingxia Liu

Add support for tx_queue_setup ops.

In the single queue model, the same descriptor queue is used by SW to
post buffer descriptors to HW and by HW to post completed descriptors
to SW.

In the split queue model, "RX buffer queues" are used to pass
descriptor buffers from SW to HW while Rx queues are used only to
pass the descriptor completions, that is, descriptors that point
to completed buffers, from HW to SW. This is contrary to the single
queue model in which Rx queues are used for both purposes.
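
For illustration, a minimal sketch (not part of the patch) of how the
Tx setup path below branches on the queue model; names follow this
patch, error handling omitted:

    is_splitq = (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT);
    if (is_splitq) {
            /* Split model: descriptor queue plus a paired completion
             * queue; the SW ring tracks up to 2 * nb_desc entries.
             */
            txq->sw_nb_desc = 2 * nb_desc;
            ret = cpfl_tx_complq_setup(dev, txq, queue_idx,
                                       2 * nb_desc, socket_id);
    } else {
            /* Single model: one ring serves both SW posting and HW
             * completion write-back.
             */
            txq->sw_nb_desc = nb_desc;
    }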

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/net/cpfl/cpfl_ethdev.c |  13 ++
 drivers/net/cpfl/cpfl_rxtx.c   | 244 +++++++++++++++++++++++++++++++++
 drivers/net/cpfl/cpfl_rxtx.h   |  25 ++++
 drivers/net/cpfl/meson.build   |   1 +
 4 files changed, 283 insertions(+)
 create mode 100644 drivers/net/cpfl/cpfl_rxtx.c
 create mode 100644 drivers/net/cpfl/cpfl_rxtx.h

diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index 7c3bc945e0..10d2387b66 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -12,6 +12,7 @@
 #include <rte_alarm.h>
 
 #include "cpfl_ethdev.h"
+#include "cpfl_rxtx.h"
 
 #define CPFL_TX_SINGLE_Q	"tx_single"
 #define CPFL_RX_SINGLE_Q	"rx_single"
@@ -96,6 +97,17 @@ cpfl_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->max_mtu = dev_info->max_rx_pktlen - CPFL_ETH_OVERHEAD;
 	dev_info->min_mtu = RTE_ETHER_MIN_MTU;
 
+	dev_info->default_txconf = (struct rte_eth_txconf) {
+		.tx_free_thresh = CPFL_DEFAULT_TX_FREE_THRESH,
+		.tx_rs_thresh = CPFL_DEFAULT_TX_RS_THRESH,
+	};
+
+	dev_info->tx_desc_lim = (struct rte_eth_desc_lim) {
+		.nb_max = CPFL_MAX_RING_DESC,
+		.nb_min = CPFL_MIN_RING_DESC,
+		.nb_align = CPFL_ALIGN_RING_DESC,
+	};
+
 	return 0;
 }
 
@@ -513,6 +525,7 @@ cpfl_adapter_ext_init(struct rte_pci_device *pci_dev, struct cpfl_adapter_ext *a
 static const struct eth_dev_ops cpfl_eth_dev_ops = {
 	.dev_configure			= cpfl_dev_configure,
 	.dev_close			= cpfl_dev_close,
+	.tx_queue_setup			= cpfl_tx_queue_setup,
 	.dev_infos_get			= cpfl_dev_info_get,
 	.link_update			= cpfl_dev_link_update,
 	.dev_supported_ptypes_get	= cpfl_dev_supported_ptypes_get,
diff --git a/drivers/net/cpfl/cpfl_rxtx.c b/drivers/net/cpfl/cpfl_rxtx.c
new file mode 100644
index 0000000000..ea4a2002bf
--- /dev/null
+++ b/drivers/net/cpfl/cpfl_rxtx.c
@@ -0,0 +1,244 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2022 Intel Corporation
+ */
+
+#include <ethdev_driver.h>
+#include <rte_net.h>
+#include <rte_vect.h>
+
+#include "cpfl_ethdev.h"
+#include "cpfl_rxtx.h"
+
+static uint64_t
+cpfl_tx_offload_convert(uint64_t offload)
+{
+	uint64_t ol = 0;
+
+	if ((offload & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM) != 0)
+		ol |= IDPF_TX_OFFLOAD_IPV4_CKSUM;
+	if ((offload & RTE_ETH_TX_OFFLOAD_UDP_CKSUM) != 0)
+		ol |= IDPF_TX_OFFLOAD_UDP_CKSUM;
+	if ((offload & RTE_ETH_TX_OFFLOAD_TCP_CKSUM) != 0)
+		ol |= IDPF_TX_OFFLOAD_TCP_CKSUM;
+	if ((offload & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM) != 0)
+		ol |= IDPF_TX_OFFLOAD_SCTP_CKSUM;
+	if ((offload & RTE_ETH_TX_OFFLOAD_MULTI_SEGS) != 0)
+		ol |= IDPF_TX_OFFLOAD_MULTI_SEGS;
+	if ((offload & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) != 0)
+		ol |= IDPF_TX_OFFLOAD_MBUF_FAST_FREE;
+
+	return ol;
+}
+
+static const struct rte_memzone *
+cpfl_dma_zone_reserve(struct rte_eth_dev *dev, uint16_t queue_idx,
+		      uint16_t len, uint16_t queue_type,
+		      unsigned int socket_id, bool splitq)
+{
+	char ring_name[RTE_MEMZONE_NAMESIZE];
+	const struct rte_memzone *mz;
+	uint32_t ring_size;
+
+	memset(ring_name, 0, RTE_MEMZONE_NAMESIZE);
+	switch (queue_type) {
+	case VIRTCHNL2_QUEUE_TYPE_TX:
+		if (splitq)
+			ring_size = RTE_ALIGN(len * sizeof(struct idpf_flex_tx_sched_desc),
+					      CPFL_DMA_MEM_ALIGN);
+		else
+			ring_size = RTE_ALIGN(len * sizeof(struct idpf_flex_tx_desc),
+					      CPFL_DMA_MEM_ALIGN);
+		rte_memcpy(ring_name, "cpfl Tx ring", sizeof("cpfl Tx ring"));
+		break;
+	case VIRTCHNL2_QUEUE_TYPE_RX:
+		if (splitq)
+			ring_size = RTE_ALIGN(len * sizeof(struct virtchnl2_rx_flex_desc_adv_nic_3),
+					      CPFL_DMA_MEM_ALIGN);
+		else
+			ring_size = RTE_ALIGN(len * sizeof(struct virtchnl2_singleq_rx_buf_desc),
+					      CPFL_DMA_MEM_ALIGN);
+		rte_memcpy(ring_name, "cpfl Rx ring", sizeof("cpfl Rx ring"));
+		break;
+	case VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION:
+		ring_size = RTE_ALIGN(len * sizeof(struct idpf_splitq_tx_compl_desc),
+				      CPFL_DMA_MEM_ALIGN);
+		rte_memcpy(ring_name, "cpfl Tx compl ring", sizeof("cpfl Tx compl ring"));
+		break;
+	case VIRTCHNL2_QUEUE_TYPE_RX_BUFFER:
+		ring_size = RTE_ALIGN(len * sizeof(struct virtchnl2_splitq_rx_buf_desc),
+				      CPFL_DMA_MEM_ALIGN);
+		rte_memcpy(ring_name, "cpfl Rx buf ring", sizeof("cpfl Rx buf ring"));
+		break;
+	default:
+		PMD_INIT_LOG(ERR, "Invalid queue type");
+		return NULL;
+	}
+
+	mz = rte_eth_dma_zone_reserve(dev, ring_name, queue_idx,
+				      ring_size, CPFL_RING_BASE_ALIGN,
+				      socket_id);
+	if (mz == NULL) {
+		PMD_INIT_LOG(ERR, "Failed to reserve DMA memory for ring");
+		return NULL;
+	}
+
+	/* Zero all the descriptors in the ring. */
+	memset(mz->addr, 0, ring_size);
+
+	return mz;
+}
+
+static void
+cpfl_dma_zone_release(const struct rte_memzone *mz)
+{
+	rte_memzone_free(mz);
+}
+
+static int
+cpfl_tx_complq_setup(struct rte_eth_dev *dev, struct idpf_tx_queue *txq,
+		     uint16_t queue_idx, uint16_t nb_desc,
+		     unsigned int socket_id)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	const struct rte_memzone *mz;
+	struct idpf_tx_queue *cq;
+	int ret;
+
+	cq = rte_zmalloc_socket("cpfl splitq cq",
+				sizeof(struct idpf_tx_queue),
+				RTE_CACHE_LINE_SIZE,
+				socket_id);
+	if (cq == NULL) {
+		PMD_INIT_LOG(ERR, "Failed to allocate memory for Tx compl queue");
+		ret = -ENOMEM;
+		goto err_cq_alloc;
+	}
+
+	cq->nb_tx_desc = nb_desc;
+	cq->queue_id = vport->chunks_info.tx_compl_start_qid + queue_idx;
+	cq->port_id = dev->data->port_id;
+	cq->txqs = dev->data->tx_queues;
+	cq->tx_start_qid = vport->chunks_info.tx_start_qid;
+
+	mz = cpfl_dma_zone_reserve(dev, queue_idx, nb_desc,
+				   VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION,
+				   socket_id, true);
+	if (mz == NULL) {
+		ret = -ENOMEM;
+		goto err_mz_reserve;
+	}
+	cq->tx_ring_phys_addr = mz->iova;
+	cq->compl_ring = mz->addr;
+	cq->mz = mz;
+	reset_split_tx_complq(cq);
+
+	txq->complq = cq;
+
+	return 0;
+
+err_mz_reserve:
+	rte_free(cq);
+err_cq_alloc:
+	return ret;
+}
+
+int
+cpfl_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
+		    uint16_t nb_desc, unsigned int socket_id,
+		    const struct rte_eth_txconf *tx_conf)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_adapter *adapter = vport->adapter;
+	uint16_t tx_rs_thresh, tx_free_thresh;
+	struct idpf_hw *hw = &adapter->hw;
+	const struct rte_memzone *mz;
+	struct idpf_tx_queue *txq;
+	uint64_t offloads;
+	uint16_t len;
+	bool is_splitq;
+	int ret;
+
+	offloads = tx_conf->offloads | dev->data->dev_conf.txmode.offloads;
+
+	tx_rs_thresh = (uint16_t)((tx_conf->tx_rs_thresh > 0) ?
+		tx_conf->tx_rs_thresh : CPFL_DEFAULT_TX_RS_THRESH);
+	tx_free_thresh = (uint16_t)((tx_conf->tx_free_thresh > 0) ?
+		tx_conf->tx_free_thresh : CPFL_DEFAULT_TX_FREE_THRESH);
+	if (check_tx_thresh(nb_desc, tx_rs_thresh, tx_free_thresh) != 0)
+		return -EINVAL;
+
+	/* Allocate the TX queue data structure. */
+	txq = rte_zmalloc_socket("cpfl txq",
+				 sizeof(struct idpf_tx_queue),
+				 RTE_CACHE_LINE_SIZE,
+				 socket_id);
+	if (txq == NULL) {
+		PMD_INIT_LOG(ERR, "Failed to allocate memory for tx queue structure");
+		ret = -ENOMEM;
+		goto err_txq_alloc;
+	}
+
+	is_splitq = !!(vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT);
+
+	txq->nb_tx_desc = nb_desc;
+	txq->rs_thresh = tx_rs_thresh;
+	txq->free_thresh = tx_free_thresh;
+	txq->queue_id = vport->chunks_info.tx_start_qid + queue_idx;
+	txq->port_id = dev->data->port_id;
+	txq->offloads = cpfl_tx_offload_convert(offloads);
+	txq->tx_deferred_start = tx_conf->tx_deferred_start;
+
+	if (is_splitq)
+		len = 2 * nb_desc;
+	else
+		len = nb_desc;
+	txq->sw_nb_desc = len;
+
+	/* Allocate TX hardware ring descriptors. */
+	mz = cpfl_dma_zone_reserve(dev, queue_idx, nb_desc, VIRTCHNL2_QUEUE_TYPE_TX,
+				   socket_id, is_splitq);
+	if (mz == NULL) {
+		ret = -ENOMEM;
+		goto err_mz_reserve;
+	}
+	txq->tx_ring_phys_addr = mz->iova;
+	txq->mz = mz;
+
+	txq->sw_ring = rte_zmalloc_socket("cpfl tx sw ring",
+					  sizeof(struct idpf_tx_entry) * len,
+					  RTE_CACHE_LINE_SIZE, socket_id);
+	if (txq->sw_ring == NULL) {
+		PMD_INIT_LOG(ERR, "Failed to allocate memory for SW TX ring");
+		ret = -ENOMEM;
+		goto err_sw_ring_alloc;
+	}
+
+	if (!is_splitq) {
+		txq->tx_ring = mz->addr;
+		reset_single_tx_queue(txq);
+	} else {
+		txq->desc_ring = mz->addr;
+		reset_split_tx_descq(txq);
+
+		/* Setup tx completion queue if split model */
+		ret = cpfl_tx_complq_setup(dev, txq, queue_idx,
+					   2 * nb_desc, socket_id);
+		if (ret != 0)
+			goto err_complq_setup;
+	}
+
+	txq->qtx_tail = hw->hw_addr + (vport->chunks_info.tx_qtail_start +
+			queue_idx * vport->chunks_info.tx_qtail_spacing);
+	txq->q_set = true;
+	dev->data->tx_queues[queue_idx] = txq;
+
+	return 0;
+
+err_complq_setup:
+err_sw_ring_alloc:
+	cpfl_dma_zone_release(mz);
+err_mz_reserve:
+	rte_free(txq);
+err_txq_alloc:
+	return ret;
+}
diff --git a/drivers/net/cpfl/cpfl_rxtx.h b/drivers/net/cpfl/cpfl_rxtx.h
new file mode 100644
index 0000000000..ec42478393
--- /dev/null
+++ b/drivers/net/cpfl/cpfl_rxtx.h
@@ -0,0 +1,25 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2022 Intel Corporation
+ */
+
+#ifndef _CPFL_RXTX_H_
+#define _CPFL_RXTX_H_
+
+#include <idpf_common_rxtx.h>
+#include "cpfl_ethdev.h"
+
+/* The queue length must be a whole multiple of 32 descriptors. */
+#define CPFL_ALIGN_RING_DESC	32
+#define CPFL_MIN_RING_DESC	32
+#define CPFL_MAX_RING_DESC	4096
+#define CPFL_DMA_MEM_ALIGN	4096
+/* Base address of the HW descriptor ring should be 128B aligned. */
+#define CPFL_RING_BASE_ALIGN	128
+
+#define CPFL_DEFAULT_TX_RS_THRESH	32
+#define CPFL_DEFAULT_TX_FREE_THRESH	32
+
+int cpfl_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
+			uint16_t nb_desc, unsigned int socket_id,
+			const struct rte_eth_txconf *tx_conf);
+#endif /* _CPFL_RXTX_H_ */
diff --git a/drivers/net/cpfl/meson.build b/drivers/net/cpfl/meson.build
index 106cc97e60..3ccee15703 100644
--- a/drivers/net/cpfl/meson.build
+++ b/drivers/net/cpfl/meson.build
@@ -11,4 +11,5 @@ deps += ['common_idpf']
 
 sources = files(
         'cpfl_ethdev.c',
+        'cpfl_rxtx.c',
 )
\ No newline at end of file
-- 
2.25.1



* [PATCH 03/21] net/cpfl: add Rx queue setup
  2022-12-23  1:55 [PATCH 00/21] add support for cpfl PMD in DPDK Mingxia Liu
  2022-12-23  1:55 ` [PATCH 01/21] net/cpfl: support device initialization Mingxia Liu
  2022-12-23  1:55 ` [PATCH 02/21] net/cpfl: add Tx queue setup Mingxia Liu
@ 2022-12-23  1:55 ` Mingxia Liu
  2022-12-23  1:55 ` [PATCH 04/21] net/cpfl: support device start and stop Mingxia Liu
                   ` (18 subsequent siblings)
  21 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2022-12-23  1:55 UTC (permalink / raw)
  To: dev; +Cc: jingjing.wu, beilei.xing, qi.z.zhang, Mingxia Liu

Add support for rx_queue_setup ops. In the split queue model, each Rx
queue is backed by two buffer queues, which are set up together with
the Rx queue itself.
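
From the application side the new op is reached through the standard
ethdev API; an illustrative call (assumes a mempool "mb_pool" created
beforehand):

    ret = rte_eth_rx_queue_setup(port_id, 0 /* queue id */,
                                 1024 /* descriptors */,
                                 rte_eth_dev_socket_id(port_id),
                                 NULL /* default rxconf */, mb_pool);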

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/net/cpfl/cpfl_ethdev.c |  11 ++
 drivers/net/cpfl/cpfl_rxtx.c   | 232 +++++++++++++++++++++++++++++++++
 drivers/net/cpfl/cpfl_rxtx.h   |   6 +
 3 files changed, 249 insertions(+)

diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index 10d2387b66..6d1992dd6e 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -102,12 +102,22 @@ cpfl_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 		.tx_rs_thresh = CPFL_DEFAULT_TX_RS_THRESH,
 	};
 
+	dev_info->default_rxconf = (struct rte_eth_rxconf) {
+		.rx_free_thresh = CPFL_DEFAULT_RX_FREE_THRESH,
+	};
+
 	dev_info->tx_desc_lim = (struct rte_eth_desc_lim) {
 		.nb_max = CPFL_MAX_RING_DESC,
 		.nb_min = CPFL_MIN_RING_DESC,
 		.nb_align = CPFL_ALIGN_RING_DESC,
 	};
 
+	dev_info->rx_desc_lim = (struct rte_eth_desc_lim) {
+		.nb_max = CPFL_MAX_RING_DESC,
+		.nb_min = CPFL_MIN_RING_DESC,
+		.nb_align = CPFL_ALIGN_RING_DESC,
+	};
+
 	return 0;
 }
 
@@ -525,6 +535,7 @@ cpfl_adapter_ext_init(struct rte_pci_device *pci_dev, struct cpfl_adapter_ext *a
 static const struct eth_dev_ops cpfl_eth_dev_ops = {
 	.dev_configure			= cpfl_dev_configure,
 	.dev_close			= cpfl_dev_close,
+	.rx_queue_setup			= cpfl_rx_queue_setup,
 	.tx_queue_setup			= cpfl_tx_queue_setup,
 	.dev_infos_get			= cpfl_dev_info_get,
 	.link_update			= cpfl_dev_link_update,
diff --git a/drivers/net/cpfl/cpfl_rxtx.c b/drivers/net/cpfl/cpfl_rxtx.c
index ea4a2002bf..695c79e1db 100644
--- a/drivers/net/cpfl/cpfl_rxtx.c
+++ b/drivers/net/cpfl/cpfl_rxtx.c
@@ -9,6 +9,25 @@
 #include "cpfl_ethdev.h"
 #include "cpfl_rxtx.h"
 
+static uint64_t
+cpfl_rx_offload_convert(uint64_t offload)
+{
+	uint64_t ol = 0;
+
+	if ((offload & RTE_ETH_RX_OFFLOAD_IPV4_CKSUM) != 0)
+		ol |= IDPF_RX_OFFLOAD_IPV4_CKSUM;
+	if ((offload & RTE_ETH_RX_OFFLOAD_UDP_CKSUM) != 0)
+		ol |= IDPF_RX_OFFLOAD_UDP_CKSUM;
+	if ((offload & RTE_ETH_RX_OFFLOAD_TCP_CKSUM) != 0)
+		ol |= IDPF_RX_OFFLOAD_TCP_CKSUM;
+	if ((offload & RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM) != 0)
+		ol |= IDPF_RX_OFFLOAD_OUTER_IPV4_CKSUM;
+	if ((offload & RTE_ETH_RX_OFFLOAD_TIMESTAMP) != 0)
+		ol |= IDPF_RX_OFFLOAD_TIMESTAMP;
+
+	return ol;
+}
+
 static uint64_t
 cpfl_tx_offload_convert(uint64_t offload)
 {
@@ -94,6 +113,219 @@ cpfl_dma_zone_release(const struct rte_memzone *mz)
 	rte_memzone_free(mz);
 }
 
+static int
+cpfl_rx_split_bufq_setup(struct rte_eth_dev *dev, struct idpf_rx_queue *rxq,
+			 uint16_t queue_idx, uint16_t rx_free_thresh,
+			 uint16_t nb_desc, unsigned int socket_id,
+			 struct rte_mempool *mp, uint8_t bufq_id)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_adapter *adapter = vport->adapter;
+	struct idpf_hw *hw = &adapter->hw;
+	const struct rte_memzone *mz;
+	struct idpf_rx_queue *bufq;
+	uint16_t len;
+	int ret;
+
+	bufq = rte_zmalloc_socket("cpfl bufq",
+				   sizeof(struct idpf_rx_queue),
+				   RTE_CACHE_LINE_SIZE,
+				   socket_id);
+	if (bufq == NULL) {
+		PMD_INIT_LOG(ERR, "Failed to allocate memory for rx buffer queue.");
+		ret = -ENOMEM;
+		goto err_bufq1_alloc;
+	}
+
+	bufq->mp = mp;
+	bufq->nb_rx_desc = nb_desc;
+	bufq->rx_free_thresh = rx_free_thresh;
+	bufq->queue_id = vport->chunks_info.rx_buf_start_qid + queue_idx;
+	bufq->port_id = dev->data->port_id;
+	bufq->rx_hdr_len = 0;
+	bufq->adapter = adapter;
+
+	len = rte_pktmbuf_data_room_size(bufq->mp) - RTE_PKTMBUF_HEADROOM;
+	bufq->rx_buf_len = len;
+
+	/* Allocate a little more to support bulk allocate. */
+	len = nb_desc + IDPF_RX_MAX_BURST;
+
+	mz = cpfl_dma_zone_reserve(dev, queue_idx, len,
+				   VIRTCHNL2_QUEUE_TYPE_RX_BUFFER,
+				   socket_id, true);
+	if (mz == NULL) {
+		ret = -ENOMEM;
+		goto err_mz_reserve;
+	}
+
+	bufq->rx_ring_phys_addr = mz->iova;
+	bufq->rx_ring = mz->addr;
+	bufq->mz = mz;
+
+	bufq->sw_ring =
+		rte_zmalloc_socket("cpfl rx bufq sw ring",
+				   sizeof(struct rte_mbuf *) * len,
+				   RTE_CACHE_LINE_SIZE,
+				   socket_id);
+	if (bufq->sw_ring == NULL) {
+		PMD_INIT_LOG(ERR, "Failed to allocate memory for SW ring");
+		ret = -ENOMEM;
+		goto err_sw_ring_alloc;
+	}
+
+	reset_split_rx_bufq(bufq);
+	bufq->qrx_tail = hw->hw_addr + (vport->chunks_info.rx_buf_qtail_start +
+			 queue_idx * vport->chunks_info.rx_buf_qtail_spacing);
+	bufq->q_set = true;
+
+	if (bufq_id == 1) {
+		rxq->bufq1 = bufq;
+	} else if (bufq_id == 2) {
+		rxq->bufq2 = bufq;
+	} else {
+		PMD_INIT_LOG(ERR, "Invalid buffer queue index.");
+		ret = -EINVAL;
+		goto err_bufq_id;
+	}
+
+	return 0;
+
+err_bufq_id:
+	rte_free(bufq->sw_ring);
+err_sw_ring_alloc:
+	cpfl_dma_zone_release(mz);
+err_mz_reserve:
+	rte_free(bufq);
+err_bufq1_alloc:
+	return ret;
+}
+
+static void
+cpfl_rx_split_bufq_release(struct idpf_rx_queue *bufq)
+{
+	rte_free(bufq->sw_ring);
+	cpfl_dma_zone_release(bufq->mz);
+	rte_free(bufq);
+}
+
+int
+cpfl_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
+		    uint16_t nb_desc, unsigned int socket_id,
+		    const struct rte_eth_rxconf *rx_conf,
+		    struct rte_mempool *mp)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_adapter *adapter = vport->adapter;
+	struct idpf_hw *hw = &adapter->hw;
+	const struct rte_memzone *mz;
+	struct idpf_rx_queue *rxq;
+	uint16_t rx_free_thresh;
+	uint64_t offloads;
+	bool is_splitq;
+	uint16_t len;
+	int ret;
+
+	offloads = rx_conf->offloads | dev->data->dev_conf.rxmode.offloads;
+
+	/* Check free threshold */
+	rx_free_thresh = (rx_conf->rx_free_thresh == 0) ?
+		CPFL_DEFAULT_RX_FREE_THRESH :
+		rx_conf->rx_free_thresh;
+	if (check_rx_thresh(nb_desc, rx_free_thresh) != 0)
+		return -EINVAL;
+
+	/* Setup Rx queue */
+	rxq = rte_zmalloc_socket("cpfl rxq",
+				 sizeof(struct idpf_rx_queue),
+				 RTE_CACHE_LINE_SIZE,
+				 socket_id);
+	if (rxq == NULL) {
+		PMD_INIT_LOG(ERR, "Failed to allocate memory for rx queue data structure");
+		ret = -ENOMEM;
+		goto err_rxq_alloc;
+	}
+
+	is_splitq = !!(vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT);
+
+	rxq->mp = mp;
+	rxq->nb_rx_desc = nb_desc;
+	rxq->rx_free_thresh = rx_free_thresh;
+	rxq->queue_id = vport->chunks_info.rx_start_qid + queue_idx;
+	rxq->port_id = dev->data->port_id;
+	rxq->rx_deferred_start = rx_conf->rx_deferred_start;
+	rxq->rx_hdr_len = 0;
+	rxq->adapter = adapter;
+	rxq->offloads = cpfl_rx_offload_convert(offloads);
+
+	len = rte_pktmbuf_data_room_size(rxq->mp) - RTE_PKTMBUF_HEADROOM;
+	rxq->rx_buf_len = len;
+
+	/* Allocate a little more to support bulk allocate. */
+	len = nb_desc + IDPF_RX_MAX_BURST;
+	mz = cpfl_dma_zone_reserve(dev, queue_idx, len, VIRTCHNL2_QUEUE_TYPE_RX,
+				   socket_id, is_splitq);
+	if (mz == NULL) {
+		ret = -ENOMEM;
+		goto err_mz_reserve;
+	}
+	rxq->rx_ring_phys_addr = mz->iova;
+	rxq->rx_ring = mz->addr;
+	rxq->mz = mz;
+
+	if (!is_splitq) {
+		rxq->sw_ring = rte_zmalloc_socket("cpfl rxq sw ring",
+						  sizeof(struct rte_mbuf *) * len,
+						  RTE_CACHE_LINE_SIZE,
+						  socket_id);
+		if (rxq->sw_ring == NULL) {
+			PMD_INIT_LOG(ERR, "Failed to allocate memory for SW ring");
+			ret = -ENOMEM;
+			goto err_sw_ring_alloc;
+		}
+
+		reset_single_rx_queue(rxq);
+		rxq->qrx_tail = hw->hw_addr + (vport->chunks_info.rx_qtail_start +
+				queue_idx * vport->chunks_info.rx_qtail_spacing);
+	} else {
+		reset_split_rx_descq(rxq);
+
+		/* Setup Rx buffer queues */
+		ret = cpfl_rx_split_bufq_setup(dev, rxq, 2 * queue_idx,
+					       rx_free_thresh, nb_desc,
+					       socket_id, mp, 1);
+		if (ret != 0) {
+			PMD_INIT_LOG(ERR, "Failed to setup buffer queue 1");
+			ret = -EINVAL;
+			goto err_bufq1_setup;
+		}
+
+		ret = cpfl_rx_split_bufq_setup(dev, rxq, 2 * queue_idx + 1,
+					       rx_free_thresh, nb_desc,
+					       socket_id, mp, 2);
+		if (ret != 0) {
+			PMD_INIT_LOG(ERR, "Failed to setup buffer queue 2");
+			ret = -EINVAL;
+			goto err_bufq2_setup;
+		}
+	}
+
+	rxq->q_set = true;
+	dev->data->rx_queues[queue_idx] = rxq;
+
+	return 0;
+
+err_bufq2_setup:
+	cpfl_rx_split_bufq_release(rxq->bufq1);
+err_bufq1_setup:
+err_sw_ring_alloc:
+	cpfl_dma_zone_release(mz);
+err_mz_reserve:
+	rte_free(rxq);
+err_rxq_alloc:
+	return ret;
+}
+
 static int
 cpfl_tx_complq_setup(struct rte_eth_dev *dev, struct idpf_tx_queue *txq,
 		     uint16_t queue_idx, uint16_t nb_desc,
diff --git a/drivers/net/cpfl/cpfl_rxtx.h b/drivers/net/cpfl/cpfl_rxtx.h
index ec42478393..fd838d3f07 100644
--- a/drivers/net/cpfl/cpfl_rxtx.h
+++ b/drivers/net/cpfl/cpfl_rxtx.h
@@ -16,10 +16,16 @@
 /* Base address of the HW descriptor ring should be 128B aligned. */
 #define CPFL_RING_BASE_ALIGN	128
 
+#define CPFL_DEFAULT_RX_FREE_THRESH	32
+
 #define CPFL_DEFAULT_TX_RS_THRESH	32
 #define CPFL_DEFAULT_TX_FREE_THRESH	32
 
 int cpfl_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 			uint16_t nb_desc, unsigned int socket_id,
 			const struct rte_eth_txconf *tx_conf);
+int cpfl_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
+			uint16_t nb_desc, unsigned int socket_id,
+			const struct rte_eth_rxconf *rx_conf,
+			struct rte_mempool *mp);
 #endif /* _CPFL_RXTX_H_ */
-- 
2.25.1



* [PATCH 04/21] net/cpfl: support device start and stop
  2022-12-23  1:55 [PATCH 00/21] add support for cpfl PMD in DPDK Mingxia Liu
                   ` (2 preceding siblings ...)
  2022-12-23  1:55 ` [PATCH 03/21] net/cpfl: add Rx " Mingxia Liu
@ 2022-12-23  1:55 ` Mingxia Liu
  2022-12-23  1:55 ` [PATCH 05/21] net/cpfl: support queue start Mingxia Liu
                   ` (17 subsequent siblings)
  21 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2022-12-23  1:55 UTC (permalink / raw)
  To: dev; +Cc: jingjing.wu, beilei.xing, qi.z.zhang, Mingxia Liu

Add dev ops dev_start and dev_stop.

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/net/cpfl/cpfl_ethdev.c | 45 ++++++++++++++++++++++++++++++++++
 1 file changed, 45 insertions(+)

diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index 6d1992dd6e..4c259f24e8 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -184,12 +184,55 @@ cpfl_dev_configure(struct rte_eth_dev *dev)
 	return 0;
 }
 
+static int
+cpfl_dev_start(struct rte_eth_dev *dev)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	int ret;
+
+	vport->stopped = 0;
+
+	if (dev->data->mtu > vport->max_mtu) {
+		PMD_DRV_LOG(ERR, "MTU must not exceed %d", vport->max_mtu);
+		ret = -EINVAL;
+		goto err_mtu;
+	}
+
+	vport->max_pkt_len = dev->data->mtu + CPFL_ETH_OVERHEAD;
+
+	ret = idpf_vc_ena_dis_vport(vport, true);
+	if (ret != 0) {
+		PMD_DRV_LOG(ERR, "Failed to enable vport");
+		return ret;
+	}
+
+	return 0;
+err_mtu:
+	return ret;
+}
+
+static int
+cpfl_dev_stop(struct rte_eth_dev *dev)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+
+	if (vport->stopped == 1)
+		return 0;
+
+	idpf_vc_ena_dis_vport(vport, false);
+
+	vport->stopped = 1;
+
+	return 0;
+}
+
 static int
 cpfl_dev_close(struct rte_eth_dev *dev)
 {
 	struct idpf_vport *vport = dev->data->dev_private;
 	struct cpfl_adapter_ext *adapter = CPFL_ADAPTER_TO_EXT(vport->adapter);
 
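+	/* Stop the vport before releasing its resources. */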
+	cpfl_dev_stop(dev);
 	idpf_vport_deinit(vport);
 
 	adapter->cur_vports &= ~RTE_BIT32(vport->devarg_id);
@@ -538,6 +581,8 @@ static const struct eth_dev_ops cpfl_eth_dev_ops = {
 	.rx_queue_setup			= cpfl_rx_queue_setup,
 	.tx_queue_setup			= cpfl_tx_queue_setup,
 	.dev_infos_get			= cpfl_dev_info_get,
+	.dev_start			= cpfl_dev_start,
+	.dev_stop			= cpfl_dev_stop,
 	.link_update			= cpfl_dev_link_update,
 	.dev_supported_ptypes_get	= cpfl_dev_supported_ptypes_get,
 };
-- 
2.25.1



* [PATCH 05/21] net/cpfl: support queue start
  2022-12-23  1:55 [PATCH 00/21] add support for cpfl PMD in DPDK Mingxia Liu
                   ` (3 preceding siblings ...)
  2022-12-23  1:55 ` [PATCH 04/21] net/cpfl: support device start and stop Mingxia Liu
@ 2022-12-23  1:55 ` Mingxia Liu
  2022-12-23  1:55 ` [PATCH 06/21] net/cpfl: support queue stop Mingxia Liu
                   ` (16 subsequent siblings)
  21 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2022-12-23  1:55 UTC (permalink / raw)
  To: dev; +Cc: jingjing.wu, beilei.xing, qi.z.zhang, Mingxia Liu

Add support for these device ops:
 - rx_queue_start
 - tx_queue_start

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/net/cpfl/cpfl_ethdev.c |  41 ++++++++++
 drivers/net/cpfl/cpfl_rxtx.c   | 138 +++++++++++++++++++++++++++++++++
 drivers/net/cpfl/cpfl_rxtx.h   |   4 +
 3 files changed, 183 insertions(+)

diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index 4c259f24e8..d939dcb005 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -184,6 +184,39 @@ cpfl_dev_configure(struct rte_eth_dev *dev)
 	return 0;
 }
 
+static int
+cpfl_start_queues(struct rte_eth_dev *dev)
+{
+	struct idpf_rx_queue *rxq;
+	struct idpf_tx_queue *txq;
+	int err = 0;
+	int i;
+
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		txq = dev->data->tx_queues[i];
+		if (txq == NULL || txq->tx_deferred_start)
+			continue;
+		err = cpfl_tx_queue_start(dev, i);
+		if (err != 0) {
+			PMD_DRV_LOG(ERR, "Fail to start Tx queue %u", i);
+			return err;
+		}
+	}
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		rxq = dev->data->rx_queues[i];
+		if (rxq == NULL || rxq->rx_deferred_start)
+			continue;
+		err = cpfl_rx_queue_start(dev, i);
+		if (err != 0) {
+			PMD_DRV_LOG(ERR, "Fail to start Rx queue %u", i);
+			return err;
+		}
+	}
+
+	return err;
+}
+
 static int
 cpfl_dev_start(struct rte_eth_dev *dev)
 {
@@ -200,6 +233,12 @@ cpfl_dev_start(struct rte_eth_dev *dev)
 
 	vport->max_pkt_len = dev->data->mtu + CPFL_ETH_OVERHEAD;
 
+	ret = cpfl_start_queues(dev);
+	if (ret != 0) {
+		PMD_DRV_LOG(ERR, "Failed to start queues");
+		goto err_mtu;
+	}
+
 	ret = idpf_vc_ena_dis_vport(vport, true);
 	if (ret != 0) {
 		PMD_DRV_LOG(ERR, "Failed to enable vport");
@@ -584,6 +623,8 @@ static const struct eth_dev_ops cpfl_eth_dev_ops = {
 	.dev_start			= cpfl_dev_start,
 	.dev_stop			= cpfl_dev_stop,
 	.link_update			= cpfl_dev_link_update,
+	.rx_queue_start			= cpfl_rx_queue_start,
+	.tx_queue_start			= cpfl_tx_queue_start,
 	.dev_supported_ptypes_get	= cpfl_dev_supported_ptypes_get,
 };
 
diff --git a/drivers/net/cpfl/cpfl_rxtx.c b/drivers/net/cpfl/cpfl_rxtx.c
index 695c79e1db..aa67db1e92 100644
--- a/drivers/net/cpfl/cpfl_rxtx.c
+++ b/drivers/net/cpfl/cpfl_rxtx.c
@@ -474,3 +474,141 @@ cpfl_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 err_txq_alloc:
 	return ret;
 }
+
+int
+cpfl_rx_queue_init(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+{
+	struct idpf_rx_queue *rxq;
+	int err;
+
+	if (rx_queue_id >= dev->data->nb_rx_queues)
+		return -EINVAL;
+
+	rxq = dev->data->rx_queues[rx_queue_id];
+
+	if (rxq == NULL || !rxq->q_set) {
+		PMD_DRV_LOG(ERR, "RX queue %u is not available or not set up",
+					rx_queue_id);
+		return -EINVAL;
+	}
+
+	if (rxq->bufq1 == NULL) {
+		/* Single queue */
+		err = idpf_alloc_single_rxq_mbufs(rxq);
+		if (err != 0) {
+			PMD_DRV_LOG(ERR, "Failed to allocate RX queue mbuf");
+			return err;
+		}
+
+		rte_wmb();
+
+		/* Init the RX tail register. */
+		IDPF_PCI_REG_WRITE(rxq->qrx_tail, rxq->nb_rx_desc - 1);
+	} else {
+		/* Split queue */
+		err = idpf_alloc_split_rxq_mbufs(rxq->bufq1);
+		if (err != 0) {
+			PMD_DRV_LOG(ERR, "Failed to allocate RX buffer queue mbuf");
+			return err;
+		}
+		err = idpf_alloc_split_rxq_mbufs(rxq->bufq2);
+		if (err != 0) {
+			PMD_DRV_LOG(ERR, "Failed to allocate RX buffer queue mbuf");
+			return err;
+		}
+
+		rte_wmb();
+
+		/* Init the RX tail register. */
+		IDPF_PCI_REG_WRITE(rxq->bufq1->qrx_tail, rxq->bufq1->rx_tail);
+		IDPF_PCI_REG_WRITE(rxq->bufq2->qrx_tail, rxq->bufq2->rx_tail);
+	}
+
+	return err;
+}
+
+int
+cpfl_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_rx_queue *rxq =
+		dev->data->rx_queues[rx_queue_id];
+	int err = 0;
+
+	err = idpf_vc_config_rxq(vport, rxq);
+	if (err != 0) {
+		PMD_DRV_LOG(ERR, "Fail to configure Rx queue %u", rx_queue_id);
+		return err;
+	}
+
+	err = cpfl_rx_queue_init(dev, rx_queue_id);
+	if (err != 0) {
+		PMD_DRV_LOG(ERR, "Failed to init RX queue %u",
+			    rx_queue_id);
+		return err;
+	}
+
+	/* Ready to switch the queue on */
+	err = idpf_switch_queue(vport, rx_queue_id, true, true);
+	if (err != 0) {
+		PMD_DRV_LOG(ERR, "Failed to switch RX queue %u on",
+			    rx_queue_id);
+	} else {
+		rxq->q_started = true;
+		dev->data->rx_queue_state[rx_queue_id] =
+			RTE_ETH_QUEUE_STATE_STARTED;
+	}
+
+	return err;
+}
+
+int
+cpfl_tx_queue_init(struct rte_eth_dev *dev, uint16_t tx_queue_id)
+{
+	struct idpf_tx_queue *txq;
+
+	if (tx_queue_id >= dev->data->nb_tx_queues)
+		return -EINVAL;
+
+	txq = dev->data->tx_queues[tx_queue_id];
+
+	/* Init the TX tail register. */
+	IDPF_PCI_REG_WRITE(txq->qtx_tail, 0);
+
+	return 0;
+}
+
+int
+cpfl_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_tx_queue *txq =
+		dev->data->tx_queues[tx_queue_id];
+	int err = 0;
+
+	err = idpf_vc_config_txq(vport, txq);
+	if (err != 0) {
+		PMD_DRV_LOG(ERR, "Fail to configure Tx queue %u", tx_queue_id);
+		return err;
+	}
+
+	err = cpfl_tx_queue_init(dev, tx_queue_id);
+	if (err != 0) {
+		PMD_DRV_LOG(ERR, "Failed to init TX queue %u",
+			    tx_queue_id);
+		return err;
+	}
+
+	/* Ready to switch the queue on */
+	err = idpf_switch_queue(vport, tx_queue_id, false, true);
+	if (err != 0) {
+		PMD_DRV_LOG(ERR, "Failed to switch TX queue %u on",
+			    tx_queue_id);
+	} else {
+		txq->q_started = true;
+		dev->data->tx_queue_state[tx_queue_id] =
+			RTE_ETH_QUEUE_STATE_STARTED;
+	}
+
+	return err;
+}
diff --git a/drivers/net/cpfl/cpfl_rxtx.h b/drivers/net/cpfl/cpfl_rxtx.h
index fd838d3f07..2fa7950775 100644
--- a/drivers/net/cpfl/cpfl_rxtx.h
+++ b/drivers/net/cpfl/cpfl_rxtx.h
@@ -28,4 +28,8 @@ int cpfl_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 			uint16_t nb_desc, unsigned int socket_id,
 			const struct rte_eth_rxconf *rx_conf,
 			struct rte_mempool *mp);
+int cpfl_rx_queue_init(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+int cpfl_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+int cpfl_tx_queue_init(struct rte_eth_dev *dev, uint16_t tx_queue_id);
+int cpfl_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id);
 #endif /* _CPFL_RXTX_H_ */
-- 
2.25.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH 06/21] net/cpfl: support queue stop
  2022-12-23  1:55 [PATCH 00/21] add support for cpfl PMD in DPDK Mingxia Liu
                   ` (4 preceding siblings ...)
  2022-12-23  1:55 ` [PATCH 05/21] net/cpfl: support queue start Mingxia Liu
@ 2022-12-23  1:55 ` Mingxia Liu
  2022-12-23  1:55 ` [PATCH 07/21] net/cpfl: support queue release Mingxia Liu
                   ` (15 subsequent siblings)
  21 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2022-12-23  1:55 UTC (permalink / raw)
  To: dev; +Cc: jingjing.wu, beilei.xing, qi.z.zhang, Mingxia Liu

Add support for these device ops (see the usage sketch below):
 - rx_queue_stop
 - tx_queue_stop
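
These are dispatched from rte_eth_dev_rx_queue_stop() and
rte_eth_dev_tx_queue_stop(), and cpfl_stop_queues() also walks all
queues on device stop. A hedged per-queue sketch (port 0 and queue 0
are placeholders):

	#include <rte_ethdev.h>

	int ret = rte_eth_dev_rx_queue_stop(0, 0); /* -> cpfl_rx_queue_stop */

	if (ret == 0)
		ret = rte_eth_dev_tx_queue_stop(0, 0); /* -> cpfl_tx_queue_stop */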

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/net/cpfl/cpfl_ethdev.c |  9 +++-
 drivers/net/cpfl/cpfl_rxtx.c   | 87 ++++++++++++++++++++++++++++++++++
 drivers/net/cpfl/cpfl_rxtx.h   |  3 ++
 3 files changed, 98 insertions(+), 1 deletion(-)

diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index d939dcb005..4332f66ed6 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -242,10 +242,13 @@ cpfl_dev_start(struct rte_eth_dev *dev)
 	ret = idpf_vc_ena_dis_vport(vport, true);
 	if (ret != 0) {
 		PMD_DRV_LOG(ERR, "Failed to enable vport");
-		return ret;
+		goto err_vport;
 	}
 
 	return 0;
+
+err_vport:
+	cpfl_stop_queues(dev);
 err_mtu:
 	return ret;
 }
@@ -260,6 +263,8 @@ cpfl_dev_stop(struct rte_eth_dev *dev)
 
 	idpf_vc_ena_dis_vport(vport, false);
 
+	cpfl_stop_queues(dev);
+
 	vport->stopped = 1;
 
 	return 0;
@@ -625,6 +630,8 @@ static const struct eth_dev_ops cpfl_eth_dev_ops = {
 	.link_update			= cpfl_dev_link_update,
 	.rx_queue_start			= cpfl_rx_queue_start,
 	.tx_queue_start			= cpfl_tx_queue_start,
+	.rx_queue_stop			= cpfl_rx_queue_stop,
+	.tx_queue_stop			= cpfl_tx_queue_stop,
 	.dev_supported_ptypes_get	= cpfl_dev_supported_ptypes_get,
 };
 
diff --git a/drivers/net/cpfl/cpfl_rxtx.c b/drivers/net/cpfl/cpfl_rxtx.c
index aa67db1e92..b7d616de4f 100644
--- a/drivers/net/cpfl/cpfl_rxtx.c
+++ b/drivers/net/cpfl/cpfl_rxtx.c
@@ -612,3 +612,90 @@ cpfl_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
 
 	return err;
 }
+
+int
+cpfl_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_rx_queue *rxq;
+	int err;
+
+	if (rx_queue_id >= dev->data->nb_rx_queues)
+		return -EINVAL;
+
+	err = idpf_switch_queue(vport, rx_queue_id, true, false);
+	if (err != 0) {
+		PMD_DRV_LOG(ERR, "Failed to switch RX queue %u off",
+			    rx_queue_id);
+		return err;
+	}
+
+	rxq = dev->data->rx_queues[rx_queue_id];
+	if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE) {
+		rxq->ops->release_mbufs(rxq);
+		reset_single_rx_queue(rxq);
+	} else {
+		rxq->bufq1->ops->release_mbufs(rxq->bufq1);
+		rxq->bufq2->ops->release_mbufs(rxq->bufq2);
+		reset_split_rx_queue(rxq);
+	}
+	dev->data->rx_queue_state[rx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
+
+	return 0;
+}
+
+int
+cpfl_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_tx_queue *txq;
+	int err;
+
+	if (tx_queue_id >= dev->data->nb_tx_queues)
+		return -EINVAL;
+
+	err = idpf_switch_queue(vport, tx_queue_id, false, false);
+	if (err != 0) {
+		PMD_DRV_LOG(ERR, "Failed to switch TX queue %u off",
+			    tx_queue_id);
+		return err;
+	}
+
+	txq = dev->data->tx_queues[tx_queue_id];
+	txq->ops->release_mbufs(txq);
+	if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE) {
+		reset_single_tx_queue(txq);
+	} else {
+		reset_split_tx_descq(txq);
+		reset_split_tx_complq(txq->complq);
+	}
+	dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
+
+	return 0;
+}
+
+void
+cpfl_stop_queues(struct rte_eth_dev *dev)
+{
+	struct idpf_rx_queue *rxq;
+	struct idpf_tx_queue *txq;
+	int i;
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		rxq = dev->data->rx_queues[i];
+		if (rxq == NULL)
+			continue;
+
+		if (cpfl_rx_queue_stop(dev, i) != 0)
+			PMD_DRV_LOG(WARNING, "Fail to stop Rx queue %d", i);
+	}
+
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		txq = dev->data->tx_queues[i];
+		if (txq == NULL)
+			continue;
+
+		if (cpfl_tx_queue_stop(dev, i) != 0)
+			PMD_DRV_LOG(WARNING, "Fail to stop Tx queue %d", i);
+	}
+}
diff --git a/drivers/net/cpfl/cpfl_rxtx.h b/drivers/net/cpfl/cpfl_rxtx.h
index 2fa7950775..6b63137d5c 100644
--- a/drivers/net/cpfl/cpfl_rxtx.h
+++ b/drivers/net/cpfl/cpfl_rxtx.h
@@ -32,4 +32,7 @@ int cpfl_rx_queue_init(struct rte_eth_dev *dev, uint16_t rx_queue_id);
 int cpfl_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id);
 int cpfl_tx_queue_init(struct rte_eth_dev *dev, uint16_t tx_queue_id);
 int cpfl_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id);
+void cpfl_stop_queues(struct rte_eth_dev *dev);
+int cpfl_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id);
+int cpfl_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id);
 #endif /* _CPFL_RXTX_H_ */
-- 
2.25.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH 07/21] net/cpfl: support queue release
  2022-12-23  1:55 [PATCH 00/21] add support for cpfl PMD in DPDK Mingxia Liu
                   ` (5 preceding siblings ...)
  2022-12-23  1:55 ` [PATCH 06/21] net/cpfl: support queue stop Mingxia Liu
@ 2022-12-23  1:55 ` Mingxia Liu
  2022-12-23  1:55 ` [PATCH 08/21] net/cpfl: support MTU configuration Mingxia Liu
                   ` (14 subsequent siblings)
  21 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2022-12-23  1:55 UTC (permalink / raw)
  To: dev; +Cc: jingjing.wu, beilei.xing, qi.z.zhang, Mingxia Liu

Add support for queue operations:
 - rx_queue_release
 - tx_queue_release

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/net/cpfl/cpfl_ethdev.c |  2 ++
 drivers/net/cpfl/cpfl_rxtx.c   | 35 ++++++++++++++++++++++++++++++++++
 drivers/net/cpfl/cpfl_rxtx.h   |  2 ++
 3 files changed, 39 insertions(+)

diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index 4332f66ed6..be3cac3b27 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -632,6 +632,8 @@ static const struct eth_dev_ops cpfl_eth_dev_ops = {
 	.tx_queue_start			= cpfl_tx_queue_start,
 	.rx_queue_stop			= cpfl_rx_queue_stop,
 	.tx_queue_stop			= cpfl_tx_queue_stop,
+	.rx_queue_release		= cpfl_dev_rx_queue_release,
+	.tx_queue_release		= cpfl_dev_tx_queue_release,
 	.dev_supported_ptypes_get	= cpfl_dev_supported_ptypes_get,
 };
 
diff --git a/drivers/net/cpfl/cpfl_rxtx.c b/drivers/net/cpfl/cpfl_rxtx.c
index b7d616de4f..a10deb6c96 100644
--- a/drivers/net/cpfl/cpfl_rxtx.c
+++ b/drivers/net/cpfl/cpfl_rxtx.c
@@ -49,6 +49,14 @@ cpfl_tx_offload_convert(uint64_t offload)
 	return ol;
 }
 
+static const struct idpf_rxq_ops def_rxq_ops = {
+	.release_mbufs = release_rxq_mbufs,
+};
+
+static const struct idpf_txq_ops def_txq_ops = {
+	.release_mbufs = release_txq_mbufs,
+};
+
 static const struct rte_memzone *
 cpfl_dma_zone_reserve(struct rte_eth_dev *dev, uint16_t queue_idx,
 		      uint16_t len, uint16_t queue_type,
@@ -177,6 +185,7 @@ cpfl_rx_split_bufq_setup(struct rte_eth_dev *dev, struct idpf_rx_queue *rxq,
 	reset_split_rx_bufq(bufq);
 	bufq->qrx_tail = hw->hw_addr + (vport->chunks_info.rx_buf_qtail_start +
 			 queue_idx * vport->chunks_info.rx_buf_qtail_spacing);
+	bufq->ops = &def_rxq_ops;
 	bufq->q_set = true;
 
 	if (bufq_id == 1) {
@@ -235,6 +244,12 @@ cpfl_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 	if (check_rx_thresh(nb_desc, rx_free_thresh) != 0)
 		return -EINVAL;
 
+	/* Free memory if needed */
+	if (dev->data->rx_queues[queue_idx] != NULL) {
+		idpf_rx_queue_release(dev->data->rx_queues[queue_idx]);
+		dev->data->rx_queues[queue_idx] = NULL;
+	}
+
 	/* Setup Rx queue */
 	rxq = rte_zmalloc_socket("cpfl rxq",
 				 sizeof(struct idpf_rx_queue),
@@ -287,6 +302,7 @@ cpfl_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 		reset_single_rx_queue(rxq);
 		rxq->qrx_tail = hw->hw_addr + (vport->chunks_info.rx_qtail_start +
 				queue_idx * vport->chunks_info.rx_qtail_spacing);
+		rxq->ops = &def_rxq_ops;
 	} else {
 		reset_split_rx_descq(rxq);
 
@@ -399,6 +415,12 @@ cpfl_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 	if (check_tx_thresh(nb_desc, tx_rs_thresh, tx_free_thresh) != 0)
 		return -EINVAL;
 
+	/* Free memory if needed. */
+	if (dev->data->tx_queues[queue_idx] != NULL) {
+		idpf_tx_queue_release(dev->data->tx_queues[queue_idx]);
+		dev->data->tx_queues[queue_idx] = NULL;
+	}
+
 	/* Allocate the TX queue data structure. */
 	txq = rte_zmalloc_socket("cpfl txq",
 				 sizeof(struct idpf_tx_queue),
@@ -461,6 +483,7 @@ cpfl_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 
 	txq->qtx_tail = hw->hw_addr + (vport->chunks_info.tx_qtail_start +
 			queue_idx * vport->chunks_info.tx_qtail_spacing);
+	txq->ops = &def_txq_ops;
 	txq->q_set = true;
 	dev->data->tx_queues[queue_idx] = txq;
 
@@ -674,6 +697,18 @@ cpfl_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
 	return 0;
 }
 
+void
+cpfl_dev_rx_queue_release(struct rte_eth_dev *dev, uint16_t qid)
+{
+	idpf_rx_queue_release(dev->data->rx_queues[qid]);
+}
+
+void
+cpfl_dev_tx_queue_release(struct rte_eth_dev *dev, uint16_t qid)
+{
+	idpf_tx_queue_release(dev->data->tx_queues[qid]);
+}
+
 void
 cpfl_stop_queues(struct rte_eth_dev *dev)
 {
diff --git a/drivers/net/cpfl/cpfl_rxtx.h b/drivers/net/cpfl/cpfl_rxtx.h
index 6b63137d5c..037d479d56 100644
--- a/drivers/net/cpfl/cpfl_rxtx.h
+++ b/drivers/net/cpfl/cpfl_rxtx.h
@@ -35,4 +35,6 @@ int cpfl_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id);
 void cpfl_stop_queues(struct rte_eth_dev *dev);
 int cpfl_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id);
 int cpfl_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+void cpfl_dev_tx_queue_release(struct rte_eth_dev *dev, uint16_t qid);
+void cpfl_dev_rx_queue_release(struct rte_eth_dev *dev, uint16_t qid);
 #endif /* _CPFL_RXTX_H_ */
-- 
2.25.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH 08/21] net/cpfl: support MTU configuration
  2022-12-23  1:55 [PATCH 00/21] add support for cpfl PMD in DPDK Mingxia Liu
                   ` (6 preceding siblings ...)
  2022-12-23  1:55 ` [PATCH 07/21] net/cpfl: support queue release Mingxia Liu
@ 2022-12-23  1:55 ` Mingxia Liu
  2022-12-23  1:55 ` [PATCH 09/21] net/cpfl: support basic Rx data path Mingxia Liu
                   ` (13 subsequent siblings)
  21 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2022-12-23  1:55 UTC (permalink / raw)
  To: dev; +Cc: jingjing.wu, beilei.xing, qi.z.zhang, Mingxia Liu

Add dev ops mtu_set.
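
Since cpfl_dev_mtu_set() rejects changes on a started port with -EBUSY,
a hedged usage sketch sets the MTU before starting the port (port 0 and
the MTU value are placeholders):

	#include <rte_ethdev.h>

	int ret = rte_eth_dev_set_mtu(0, 1500); /* before rte_eth_dev_start() */

	if (ret == -EBUSY) {
		/* port already started: stop it, set the MTU, start again */
	}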

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 doc/guides/nics/features/cpfl.ini |  1 +
 drivers/net/cpfl/cpfl_ethdev.c    | 13 +++++++++++++
 2 files changed, 14 insertions(+)

diff --git a/doc/guides/nics/features/cpfl.ini b/doc/guides/nics/features/cpfl.ini
index a2d1ca9e15..470ba81579 100644
--- a/doc/guides/nics/features/cpfl.ini
+++ b/doc/guides/nics/features/cpfl.ini
@@ -7,6 +7,7 @@
 ; is selected.
 ;
 [Features]
+MTU update           = Y
 Linux                = Y
 x86-32               = Y
 x86-64               = Y
diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index be3cac3b27..5c487e5511 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -121,6 +121,18 @@ cpfl_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	return 0;
 }
 
+static int
+cpfl_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu __rte_unused)
+{
+	/* MTU setting is forbidden if the port has been started */
+	if (dev->data->dev_started) {
+		PMD_DRV_LOG(ERR, "port must be stopped before configuration");
+		return -EBUSY;
+	}
+
+	return 0;
+}
+
 static const uint32_t *
 cpfl_dev_supported_ptypes_get(struct rte_eth_dev *dev __rte_unused)
 {
@@ -634,6 +646,7 @@ static const struct eth_dev_ops cpfl_eth_dev_ops = {
 	.tx_queue_stop			= cpfl_tx_queue_stop,
 	.rx_queue_release		= cpfl_dev_rx_queue_release,
 	.tx_queue_release		= cpfl_dev_tx_queue_release,
+	.mtu_set			= cpfl_dev_mtu_set,
 	.dev_supported_ptypes_get	= cpfl_dev_supported_ptypes_get,
 };
 
-- 
2.25.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH 09/21] net/cpfl: support basic Rx data path
  2022-12-23  1:55 [PATCH 00/21] add support for cpfl PMD in DPDK Mingxia Liu
                   ` (7 preceding siblings ...)
  2022-12-23  1:55 ` [PATCH 08/21] net/cpfl: support MTU configuration Mingxia Liu
@ 2022-12-23  1:55 ` Mingxia Liu
  2022-12-23  1:55 ` [PATCH 10/21] net/cpfl: support basic Tx " Mingxia Liu
                   ` (12 subsequent siblings)
  21 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2022-12-23  1:55 UTC (permalink / raw)
  To: dev; +Cc: jingjing.wu, beilei.xing, qi.z.zhang, Mingxia Liu

Add basic Rx support in split queue mode and single queue mode.
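
Whichever burst function cpfl_set_rx_function() installs, applications
consume it through rte_eth_rx_burst(). A hedged polling sketch (port 0
and queue 0 are placeholders):

	#include <rte_ethdev.h>
	#include <rte_mbuf.h>

	#define BURST_SZ 32

	struct rte_mbuf *pkts[BURST_SZ];
	uint16_t nb, i;

	nb = rte_eth_rx_burst(0, 0, pkts, BURST_SZ);
	for (i = 0; i < nb; i++) {
		/* ... process pkts[i] ... */
		rte_pktmbuf_free(pkts[i]);
	}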

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/net/cpfl/cpfl_ethdev.c |  2 ++
 drivers/net/cpfl/cpfl_rxtx.c   | 11 +++++++++++
 drivers/net/cpfl/cpfl_rxtx.h   |  1 +
 3 files changed, 14 insertions(+)

diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index 5c487e5511..02594e1455 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -251,6 +251,8 @@ cpfl_dev_start(struct rte_eth_dev *dev)
 		goto err_mtu;
 	}
 
+	cpfl_set_rx_function(dev);
+
 	ret = idpf_vc_ena_dis_vport(vport, true);
 	if (ret != 0) {
 		PMD_DRV_LOG(ERR, "Failed to enable vport");
diff --git a/drivers/net/cpfl/cpfl_rxtx.c b/drivers/net/cpfl/cpfl_rxtx.c
index a10deb6c96..30df129a19 100644
--- a/drivers/net/cpfl/cpfl_rxtx.c
+++ b/drivers/net/cpfl/cpfl_rxtx.c
@@ -734,3 +734,14 @@ cpfl_stop_queues(struct rte_eth_dev *dev)
 			PMD_DRV_LOG(WARNING, "Fail to stop Tx queue %d", i);
 	}
 }
+
+void
+cpfl_set_rx_function(struct rte_eth_dev *dev)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+
+	if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT)
+		dev->rx_pkt_burst = idpf_splitq_recv_pkts;
+	else
+		dev->rx_pkt_burst = idpf_singleq_recv_pkts;
+}
diff --git a/drivers/net/cpfl/cpfl_rxtx.h b/drivers/net/cpfl/cpfl_rxtx.h
index 037d479d56..c29c30c7a3 100644
--- a/drivers/net/cpfl/cpfl_rxtx.h
+++ b/drivers/net/cpfl/cpfl_rxtx.h
@@ -37,4 +37,5 @@ int cpfl_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id);
 int cpfl_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id);
 void cpfl_dev_tx_queue_release(struct rte_eth_dev *dev, uint16_t qid);
 void cpfl_dev_rx_queue_release(struct rte_eth_dev *dev, uint16_t qid);
+void cpfl_set_rx_function(struct rte_eth_dev *dev);
 #endif /* _CPFL_RXTX_H_ */
-- 
2.25.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH 10/21] net/cpfl: support basic Tx data path
  2022-12-23  1:55 [PATCH 00/21] add support for cpfl PMD in DPDK Mingxia Liu
                   ` (8 preceding siblings ...)
  2022-12-23  1:55 ` [PATCH 09/21] net/cpfl: support basic Rx data path Mingxia Liu
@ 2022-12-23  1:55 ` Mingxia Liu
  2022-12-23  1:55 ` [PATCH 11/21] net/cpfl: support write back based on ITR expire Mingxia Liu
                   ` (11 subsequent siblings)
  21 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2022-12-23  1:55 UTC (permalink / raw)
  To: dev; +Cc: jingjing.wu, beilei.xing, qi.z.zhang, Mingxia Liu

Add basic Tx support in split queue mode and single queue mode.
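
Because tx_pkt_prepare is installed alongside the burst function, a
hedged transmit sketch runs rte_eth_tx_prepare() first (port 0, queue 0
and the mbuf array "pkts"/"nb" are placeholders):

	#include <rte_ethdev.h>

	uint16_t nb_prep, nb_tx;

	nb_prep = rte_eth_tx_prepare(0, 0, pkts, nb);  /* -> idpf_prep_pkts */
	nb_tx = rte_eth_tx_burst(0, 0, pkts, nb_prep); /* -> *_xmit_pkts */
	/* mbufs not accepted (nb_tx < nb_prep) must be freed or retried */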

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/net/cpfl/cpfl_ethdev.c |  3 +++
 drivers/net/cpfl/cpfl_rxtx.c   | 14 ++++++++++++++
 drivers/net/cpfl/cpfl_rxtx.h   |  1 +
 3 files changed, 18 insertions(+)

diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index 02594e1455..d9ac3983aa 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -97,6 +97,8 @@ cpfl_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->max_mtu = dev_info->max_rx_pktlen - CPFL_ETH_OVERHEAD;
 	dev_info->min_mtu = RTE_ETHER_MIN_MTU;
 
+	dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
+
 	dev_info->default_txconf = (struct rte_eth_txconf) {
 		.tx_free_thresh = CPFL_DEFAULT_TX_FREE_THRESH,
 		.tx_rs_thresh = CPFL_DEFAULT_TX_RS_THRESH,
@@ -252,6 +254,7 @@ cpfl_dev_start(struct rte_eth_dev *dev)
 	}
 
 	cpfl_set_rx_function(dev);
+	cpfl_set_tx_function(dev);
 
 	ret = idpf_vc_ena_dis_vport(vport, true);
 	if (ret != 0) {
diff --git a/drivers/net/cpfl/cpfl_rxtx.c b/drivers/net/cpfl/cpfl_rxtx.c
index 30df129a19..0e053f4434 100644
--- a/drivers/net/cpfl/cpfl_rxtx.c
+++ b/drivers/net/cpfl/cpfl_rxtx.c
@@ -745,3 +745,17 @@ cpfl_set_rx_function(struct rte_eth_dev *dev)
 	else
 		dev->rx_pkt_burst = idpf_singleq_recv_pkts;
 }
+
+void
+cpfl_set_tx_function(struct rte_eth_dev *dev)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+
+	if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
+		dev->tx_pkt_burst = idpf_splitq_xmit_pkts;
+		dev->tx_pkt_prepare = idpf_prep_pkts;
+	} else {
+		dev->tx_pkt_burst = idpf_singleq_xmit_pkts;
+		dev->tx_pkt_prepare = idpf_prep_pkts;
+	}
+}
diff --git a/drivers/net/cpfl/cpfl_rxtx.h b/drivers/net/cpfl/cpfl_rxtx.h
index c29c30c7a3..021db5bf8a 100644
--- a/drivers/net/cpfl/cpfl_rxtx.h
+++ b/drivers/net/cpfl/cpfl_rxtx.h
@@ -38,4 +38,5 @@ int cpfl_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id);
 void cpfl_dev_tx_queue_release(struct rte_eth_dev *dev, uint16_t qid);
 void cpfl_dev_rx_queue_release(struct rte_eth_dev *dev, uint16_t qid);
 void cpfl_set_rx_function(struct rte_eth_dev *dev);
+void cpfl_set_tx_function(struct rte_eth_dev *dev);
 #endif /* _CPFL_RXTX_H_ */
-- 
2.25.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH 11/21] net/cpfl: support write back based on ITR expire
  2022-12-23  1:55 [PATCH 00/21] add support for cpfl PMD in DPDK Mingxia Liu
                   ` (9 preceding siblings ...)
  2022-12-23  1:55 ` [PATCH 10/21] net/cpfl: support basic Tx " Mingxia Liu
@ 2022-12-23  1:55 ` Mingxia Liu
  2022-12-23  1:55 ` [PATCH 12/21] net/cpfl: support RSS Mingxia Liu
                   ` (10 subsequent siblings)
  21 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2022-12-23  1:55 UTC (permalink / raw)
  To: dev; +Cc: jingjing.wu, beilei.xing, qi.z.zhang, Mingxia Liu

Enable write back on ITR expire, then packets can be received one by one.

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/net/cpfl/cpfl_ethdev.c | 38 ++++++++++++++++++++++++++++++++++
 drivers/net/cpfl/cpfl_ethdev.h |  2 ++
 2 files changed, 40 insertions(+)

diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index d9ac3983aa..ccd9783f5c 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -198,6 +198,15 @@ cpfl_dev_configure(struct rte_eth_dev *dev)
 	return 0;
 }
 
+static int
+cpfl_config_rx_queues_irqs(struct rte_eth_dev *dev)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	uint16_t nb_rx_queues = dev->data->nb_rx_queues;
+
+	return idpf_config_irq_map(vport, nb_rx_queues);
+}
+
 static int
 cpfl_start_queues(struct rte_eth_dev *dev)
 {
@@ -235,6 +244,10 @@ static int
 cpfl_dev_start(struct rte_eth_dev *dev)
 {
 	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_adapter *base = vport->adapter;
+	struct cpfl_adapter_ext *adapter = CPFL_ADAPTER_TO_EXT(base);
+	uint16_t num_allocated_vectors = base->caps.num_allocated_vectors;
+	uint16_t req_vecs_num;
 	int ret;
 
 	vport->stopped = 0;
@@ -247,6 +260,27 @@ cpfl_dev_start(struct rte_eth_dev *dev)
 
 	vport->max_pkt_len = dev->data->mtu + CPFL_ETH_OVERHEAD;
 
+	req_vecs_num = CPFL_DFLT_Q_VEC_NUM;
+	if (req_vecs_num + adapter->used_vecs_num > num_allocated_vectors) {
+		PMD_DRV_LOG(ERR, "The accumulated number of requested vectors must not exceed %d",
+			    num_allocated_vectors);
+		ret = -EINVAL;
+		goto err_mtu;
+	}
+
+	ret = idpf_vc_alloc_vectors(vport, req_vecs_num);
+	if (ret != 0) {
+		PMD_DRV_LOG(ERR, "Failed to allocate interrupt vectors");
+		goto err_mtu;
+	}
+	adapter->used_vecs_num += req_vecs_num;
+
+	ret = cpfl_config_rx_queues_irqs(dev);
+	if (ret != 0) {
+		PMD_DRV_LOG(ERR, "Failed to configure irqs");
+		goto err_mtu;
+	}
+
 	ret = cpfl_start_queues(dev);
 	if (ret != 0) {
 		PMD_DRV_LOG(ERR, "Failed to start queues");
@@ -282,6 +316,10 @@ cpfl_dev_stop(struct rte_eth_dev *dev)
 
 	cpfl_stop_queues(dev);
 
+	idpf_config_irq_unmap(vport, dev->data->nb_rx_queues);
+
+	idpf_vc_dealloc_vectors(vport);
+
 	vport->stopped = 1;
 
 	return 0;
diff --git a/drivers/net/cpfl/cpfl_ethdev.h b/drivers/net/cpfl/cpfl_ethdev.h
index e24ecb614b..714149df32 100644
--- a/drivers/net/cpfl/cpfl_ethdev.h
+++ b/drivers/net/cpfl/cpfl_ethdev.h
@@ -24,6 +24,8 @@
 
 #define CPFL_INVALID_VPORT_IDX	0xffff
 
+#define CPFL_DFLT_Q_VEC_NUM	1
+
 #define CPFL_MIN_BUF_SIZE	1024
 #define CPFL_MAX_FRAME_SIZE	9728
 
-- 
2.25.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH 12/21] net/cpfl: support RSS
  2022-12-23  1:55 [PATCH 00/21] add support for cpfl PMD in DPDK Mingxia Liu
                   ` (10 preceding siblings ...)
  2022-12-23  1:55 ` [PATCH 11/21] net/cpfl: support write back based on ITR expire Mingxia Liu
@ 2022-12-23  1:55 ` Mingxia Liu
  2022-12-23  1:55 ` [PATCH 13/21] net/cpfl: support Rx offloading Mingxia Liu
                   ` (9 subsequent siblings)
  21 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2022-12-23  1:55 UTC (permalink / raw)
  To: dev; +Cc: jingjing.wu, beilei.xing, qi.z.zhang, Mingxia Liu

Add RSS support.
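
cpfl_init_rss() is driven by the RSS fields of struct rte_eth_conf at
configure time. A hedged configuration sketch (port 0, the queue counts
and the hash-type set are placeholders, not the full supported list):

	#include <rte_ethdev.h>

	struct rte_eth_conf conf = {
		.rxmode = { .mq_mode = RTE_ETH_MQ_RX_RSS },
		.rx_adv_conf = {
			.rss_conf = {
				.rss_key = NULL, /* PMD picks a random key */
				.rss_hf = RTE_ETH_RSS_NONFRAG_IPV4_TCP |
					  RTE_ETH_RSS_NONFRAG_IPV4_UDP,
			},
		},
	};

	int ret = rte_eth_dev_configure(0, nb_rxq, nb_txq, &conf);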

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/net/cpfl/cpfl_ethdev.c | 52 ++++++++++++++++++++++++++++++++++
 drivers/net/cpfl/cpfl_ethdev.h | 15 ++++++++++
 2 files changed, 67 insertions(+)

diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index ccd9783f5c..8e1d60e2d0 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -97,6 +97,8 @@ cpfl_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->max_mtu = dev_info->max_rx_pktlen - CPFL_ETH_OVERHEAD;
 	dev_info->min_mtu = RTE_ETHER_MIN_MTU;
 
+	dev_info->flow_type_rss_offloads = CPFL_RSS_OFFLOAD_ALL;
+
 	dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 	dev_info->default_txconf = (struct rte_eth_txconf) {
@@ -153,10 +155,49 @@ cpfl_dev_supported_ptypes_get(struct rte_eth_dev *dev __rte_unused)
 	return ptypes;
 }
 
+static int
+cpfl_init_rss(struct idpf_vport *vport)
+{
+	struct rte_eth_rss_conf *rss_conf;
+	struct rte_eth_dev_data *dev_data;
+	uint16_t i, nb_q;
+	int ret = 0;
+
+	dev_data = vport->dev_data;
+	rss_conf = &dev_data->dev_conf.rx_adv_conf.rss_conf;
+	nb_q = dev_data->nb_rx_queues;
+
+	if (rss_conf->rss_key == NULL) {
+		for (i = 0; i < vport->rss_key_size; i++)
+			vport->rss_key[i] = (uint8_t)rte_rand();
+	} else if (rss_conf->rss_key_len != vport->rss_key_size) {
+		PMD_INIT_LOG(ERR, "Invalid RSS key length in RSS configuration, should be %d",
+			     vport->rss_key_size);
+		return -EINVAL;
+	} else {
+		rte_memcpy(vport->rss_key, rss_conf->rss_key,
+			   vport->rss_key_size);
+	}
+
+	for (i = 0; i < vport->rss_lut_size; i++)
+		vport->rss_lut[i] = i % nb_q;
+
+	vport->rss_hf = IDPF_DEFAULT_RSS_HASH_EXPANDED;
+
+	ret = idpf_config_rss(vport);
+	if (ret != 0)
+		PMD_INIT_LOG(ERR, "Failed to configure RSS");
+
+	return ret;
+}
+
 static int
 cpfl_dev_configure(struct rte_eth_dev *dev)
 {
+	struct idpf_vport *vport = dev->data->dev_private;
 	struct rte_eth_conf *conf = &dev->data->dev_conf;
+	struct idpf_adapter *adapter = vport->adapter;
+	int ret;
 
 	if (conf->link_speeds & RTE_ETH_LINK_SPEED_FIXED) {
 		PMD_INIT_LOG(ERR, "Setting link speed is not supported");
@@ -195,6 +236,17 @@ cpfl_dev_configure(struct rte_eth_dev *dev)
 		return -ENOTSUP;
 	}
 
+	if (adapter->caps.rss_caps != 0 && dev->data->nb_rx_queues != 0) {
+		ret = cpfl_init_rss(vport);
+		if (ret != 0) {
+			PMD_INIT_LOG(ERR, "Failed to init rss");
+			return ret;
+		}
+	} else {
+		PMD_INIT_LOG(ERR, "RSS is not supported.");
+		return -ENOTSUP;
+	}
+
 	return 0;
 }
 
diff --git a/drivers/net/cpfl/cpfl_ethdev.h b/drivers/net/cpfl/cpfl_ethdev.h
index 714149df32..03b87a9976 100644
--- a/drivers/net/cpfl/cpfl_ethdev.h
+++ b/drivers/net/cpfl/cpfl_ethdev.h
@@ -35,6 +35,21 @@
 #define CPFL_ETH_OVERHEAD \
 	(RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN + CPFL_VLAN_TAG_SIZE * 2)
 
+#define CPFL_RSS_OFFLOAD_ALL (				\
+		RTE_ETH_RSS_IPV4                |	\
+		RTE_ETH_RSS_FRAG_IPV4           |	\
+		RTE_ETH_RSS_NONFRAG_IPV4_TCP    |	\
+		RTE_ETH_RSS_NONFRAG_IPV4_UDP    |	\
+		RTE_ETH_RSS_NONFRAG_IPV4_SCTP   |	\
+		RTE_ETH_RSS_NONFRAG_IPV4_OTHER  |	\
+		RTE_ETH_RSS_IPV6                |	\
+		RTE_ETH_RSS_FRAG_IPV6           |	\
+		RTE_ETH_RSS_NONFRAG_IPV6_TCP    |	\
+		RTE_ETH_RSS_NONFRAG_IPV6_UDP    |	\
+		RTE_ETH_RSS_NONFRAG_IPV6_SCTP   |	\
+		RTE_ETH_RSS_NONFRAG_IPV6_OTHER  |	\
+		RTE_ETH_RSS_L2_PAYLOAD)
+
 #define CPFL_ADAPTER_NAME_LEN	(PCI_PRI_STR_SIZE + 1)
 
 #define CPFL_ALARM_INTERVAL	50000 /* us */
-- 
2.25.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH 13/21] net/cpfl: support Rx offloading
  2022-12-23  1:55 [PATCH 00/21] add support for cpfl PMD in DPDK Mingxia Liu
                   ` (11 preceding siblings ...)
  2022-12-23  1:55 ` [PATCH 12/21] net/cpfl: support RSS Mingxia Liu
@ 2022-12-23  1:55 ` Mingxia Liu
  2022-12-23  1:55 ` [PATCH 14/21] net/cpfl: support Tx offloading Mingxia Liu
                   ` (8 subsequent siblings)
  21 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2022-12-23  1:55 UTC (permalink / raw)
  To: dev; +Cc: jingjing.wu, beilei.xing, qi.z.zhang, Mingxia Liu

Add Rx offloading support (see the usage sketch below):
 - support CHKSUM and RSS offload for split queue model
 - support CHKSUM offload for single queue model
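
A hedged sketch of enabling the checksum offloads at configure time and
reading the per-packet verdict afterwards ("m" is a received mbuf,
port 0 and the queue counts are placeholders):

	#include <rte_ethdev.h>
	#include <rte_mbuf.h>

	struct rte_eth_conf conf = {0};

	conf.rxmode.offloads = RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
			       RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
			       RTE_ETH_RX_OFFLOAD_TCP_CKSUM;
	/* ... rte_eth_dev_configure(0, nb_rxq, nb_txq, &conf) ... */

	/* after rte_eth_rx_burst(), m->ol_flags carries the result: */
	if ((m->ol_flags & RTE_MBUF_F_RX_IP_CKSUM_MASK) ==
	    RTE_MBUF_F_RX_IP_CKSUM_BAD)
		/* drop or count the bad-checksum packet */;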

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 doc/guides/nics/features/cpfl.ini | 2 ++
 drivers/net/cpfl/cpfl_ethdev.c    | 6 ++++++
 2 files changed, 8 insertions(+)

diff --git a/doc/guides/nics/features/cpfl.ini b/doc/guides/nics/features/cpfl.ini
index 470ba81579..ee5948f444 100644
--- a/doc/guides/nics/features/cpfl.ini
+++ b/doc/guides/nics/features/cpfl.ini
@@ -8,6 +8,8 @@
 ;
 [Features]
 MTU update           = Y
+L3 checksum offload  = P
+L4 checksum offload  = P
 Linux                = Y
 x86-32               = Y
 x86-64               = Y
diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index 8e1d60e2d0..043e24675e 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -99,6 +99,12 @@ cpfl_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 
 	dev_info->flow_type_rss_offloads = CPFL_RSS_OFFLOAD_ALL;
 
+	dev_info->rx_offload_capa =
+		RTE_ETH_RX_OFFLOAD_IPV4_CKSUM           |
+		RTE_ETH_RX_OFFLOAD_UDP_CKSUM            |
+		RTE_ETH_RX_OFFLOAD_TCP_CKSUM            |
+		RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM;
+
 	dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 	dev_info->default_txconf = (struct rte_eth_txconf) {
-- 
2.25.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH 14/21] net/cpfl: support Tx offloading
  2022-12-23  1:55 [PATCH 00/21] add support for cpfl PMD in DPDK Mingxia Liu
                   ` (12 preceding siblings ...)
  2022-12-23  1:55 ` [PATCH 13/21] net/cpfl: support Rx offloading Mingxia Liu
@ 2022-12-23  1:55 ` Mingxia Liu
  2022-12-23  1:55 ` [PATCH 15/21] net/cpfl: add AVX512 data path for single queue model Mingxia Liu
                   ` (7 subsequent siblings)
  21 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2022-12-23  1:55 UTC (permalink / raw)
  To: dev; +Cc: jingjing.wu, beilei.xing, qi.z.zhang, Mingxia Liu

Add Tx offloading support (see the usage sketch below):
 - support TSO for single queue model and split queue model.
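
A hedged per-mbuf sketch of requesting TSO on an IPv4/TCP packet ("m"
is an assembled mbuf; the header lengths and MSS are placeholders):

	#include <rte_mbuf.h>
	#include <rte_ether.h>
	#include <rte_ip.h>
	#include <rte_tcp.h>

	m->l2_len = sizeof(struct rte_ether_hdr);
	m->l3_len = sizeof(struct rte_ipv4_hdr);
	m->l4_len = sizeof(struct rte_tcp_hdr);
	m->tso_segsz = 1460;
	m->ol_flags |= RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM |
		       RTE_MBUF_F_TX_TCP_CKSUM | RTE_MBUF_F_TX_TCP_SEG;
	/* then hand m to rte_eth_tx_prepare()/rte_eth_tx_burst() as usual */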

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 doc/guides/nics/features/cpfl.ini | 1 +
 drivers/net/cpfl/cpfl_ethdev.c    | 8 +++++++-
 2 files changed, 8 insertions(+), 1 deletion(-)

diff --git a/doc/guides/nics/features/cpfl.ini b/doc/guides/nics/features/cpfl.ini
index ee5948f444..f4e45c7c68 100644
--- a/doc/guides/nics/features/cpfl.ini
+++ b/doc/guides/nics/features/cpfl.ini
@@ -8,6 +8,7 @@
 ;
 [Features]
 MTU update           = Y
+TSO                  = P
 L3 checksum offload  = P
 L4 checksum offload  = P
 Linux                = Y
diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index 043e24675e..f684d7cff5 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -105,7 +105,13 @@ cpfl_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 		RTE_ETH_RX_OFFLOAD_TCP_CKSUM            |
 		RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM;
 
-	dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
+	dev_info->tx_offload_capa =
+		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM		|
+		RTE_ETH_TX_OFFLOAD_UDP_CKSUM		|
+		RTE_ETH_TX_OFFLOAD_TCP_CKSUM		|
+		RTE_ETH_TX_OFFLOAD_SCTP_CKSUM		|
+		RTE_ETH_TX_OFFLOAD_TCP_TSO		|
+		RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 	dev_info->default_txconf = (struct rte_eth_txconf) {
 		.tx_free_thresh = CPFL_DEFAULT_TX_FREE_THRESH,
-- 
2.25.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH 15/21] net/cpfl: add AVX512 data path for single queue model
  2022-12-23  1:55 [PATCH 00/21] add support for cpfl PMD in DPDK Mingxia Liu
                   ` (13 preceding siblings ...)
  2022-12-23  1:55 ` [PATCH 14/21] net/cpfl: support Tx offloading Mingxia Liu
@ 2022-12-23  1:55 ` Mingxia Liu
  2022-12-23  1:55 ` [PATCH 16/21] net/cpfl: support timestamp offload Mingxia Liu
                   ` (6 subsequent siblings)
  21 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2022-12-23  1:55 UTC (permalink / raw)
  To: dev; +Cc: jingjing.wu, beilei.xing, qi.z.zhang, Mingxia Liu, Wenjun Wu

Add support for the AVX512 vector data path for single queue model.
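
One hedged way to exercise the AVX512 path is to raise the EAL SIMD
limit explicitly, e.g. with testpmd (the PCI address is a placeholder):

	dpdk-testpmd -a ca:00.0 --force-max-simd-bitwidth=512 -- -i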

Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 doc/guides/nics/cpfl.rst                | 24 +++++-
 drivers/net/cpfl/cpfl_ethdev.c          |  3 +-
 drivers/net/cpfl/cpfl_rxtx.c            | 85 +++++++++++++++++++++
 drivers/net/cpfl/cpfl_rxtx_vec_common.h | 99 +++++++++++++++++++++++++
 drivers/net/cpfl/meson.build            | 25 ++++++-
 5 files changed, 233 insertions(+), 3 deletions(-)
 create mode 100644 drivers/net/cpfl/cpfl_rxtx_vec_common.h

diff --git a/doc/guides/nics/cpfl.rst b/doc/guides/nics/cpfl.rst
index 064c69ba7d..489a2d6153 100644
--- a/doc/guides/nics/cpfl.rst
+++ b/doc/guides/nics/cpfl.rst
@@ -63,4 +63,26 @@ Runtime Config Options
 Driver compilation and testing
 ------------------------------
 
-Refer to the document :doc:`build_and_test` for details.
\ No newline at end of file
+Refer to the document :doc:`build_and_test` for details.
+
+Features
+--------
+
+Vector PMD
+~~~~~~~~~~
+
+Vector paths for Rx and Tx are selected automatically.
+The paths are chosen based on two conditions:
+
+- ``CPU``
+
+  On the x86 platform, the driver checks if the CPU supports AVX512.
+  If the CPU supports AVX512 and EAL argument ``--force-max-simd-bitwidth``
+  is set to 512, AVX512 paths will be chosen.
+
+- ``Offload features``
+
+  The supported HW offload features are described in the document cpfl.ini.
+  A value "P" means the offload feature is not supported by the vector path.
+  If any unsupported features are used, the cpfl vector PMD is disabled
+  and the scalar paths are chosen.
diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index f684d7cff5..5fe800f27c 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -111,7 +111,8 @@ cpfl_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 		RTE_ETH_TX_OFFLOAD_TCP_CKSUM		|
 		RTE_ETH_TX_OFFLOAD_SCTP_CKSUM		|
 		RTE_ETH_TX_OFFLOAD_TCP_TSO		|
-		RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
+		RTE_ETH_TX_OFFLOAD_MULTI_SEGS		|
+		RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 	dev_info->default_txconf = (struct rte_eth_txconf) {
 		.tx_free_thresh = CPFL_DEFAULT_TX_FREE_THRESH,
diff --git a/drivers/net/cpfl/cpfl_rxtx.c b/drivers/net/cpfl/cpfl_rxtx.c
index 0e053f4434..63f474a79b 100644
--- a/drivers/net/cpfl/cpfl_rxtx.c
+++ b/drivers/net/cpfl/cpfl_rxtx.c
@@ -8,6 +8,7 @@
 
 #include "cpfl_ethdev.h"
 #include "cpfl_rxtx.h"
+#include "cpfl_rxtx_vec_common.h"
 
 static uint64_t
 cpfl_rx_offload_convert(uint64_t offload)
@@ -739,22 +740,106 @@ void
 cpfl_set_rx_function(struct rte_eth_dev *dev)
 {
 	struct idpf_vport *vport = dev->data->dev_private;
+#ifdef RTE_ARCH_X86
+	struct idpf_rx_queue *rxq;
+	int i;
+
+	if (cpfl_rx_vec_dev_check_default(dev) == CPFL_VECTOR_PATH &&
+	    rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_128) {
+		vport->rx_vec_allowed = true;
+
+		if (rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_512)
+#ifdef CC_AVX512_SUPPORT
+			if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512F) == 1 &&
+			    rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512BW) == 1 &&
+			    rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512DQ))
+				vport->rx_use_avx512 = true;
+#else
+		PMD_DRV_LOG(NOTICE,
+			    "AVX512 is not supported in build env");
+#endif /* CC_AVX512_SUPPORT */
+	} else {
+		vport->rx_vec_allowed = false;
+	}
+#endif /* RTE_ARCH_X86 */
+
+#ifdef RTE_ARCH_X86
+	if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
+		dev->rx_pkt_burst = idpf_splitq_recv_pkts;
+	} else {
+		if (vport->rx_vec_allowed) {
+			for (i = 0; i < dev->data->nb_rx_queues; i++) {
+				rxq = dev->data->rx_queues[i];
+				(void)idpf_singleq_rx_vec_setup(rxq);
+			}
+#ifdef CC_AVX512_SUPPORT
+			if (vport->rx_use_avx512) {
+				dev->rx_pkt_burst = idpf_singleq_recv_pkts_avx512;
+				return;
+			}
+#endif /* CC_AVX512_SUPPORT */
+		}
 
+		dev->rx_pkt_burst = idpf_singleq_recv_pkts;
+	}
+#else
 	if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT)
 		dev->rx_pkt_burst = idpf_splitq_recv_pkts;
 	else
 		dev->rx_pkt_burst = idpf_singleq_recv_pkts;
+#endif /* RTE_ARCH_X86 */
 }
 
 void
 cpfl_set_tx_function(struct rte_eth_dev *dev)
 {
 	struct idpf_vport *vport = dev->data->dev_private;
+#ifdef RTE_ARCH_X86
+#ifdef CC_AVX512_SUPPORT
+	struct idpf_tx_queue *txq;
+	int i;
+#endif /* CC_AVX512_SUPPORT */
+
+	if (cpfl_tx_vec_dev_check_default(dev) == CPFL_VECTOR_PATH &&
+	    rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_128) {
+		vport->tx_vec_allowed = true;
+		if (rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_512)
+#ifdef CC_AVX512_SUPPORT
+		{
+			if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512F) == 1 &&
+			    rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512BW) == 1)
+				vport->tx_use_avx512 = true;
+			if (vport->tx_use_avx512) {
+				for (i = 0; i < dev->data->nb_tx_queues; i++) {
+					txq = dev->data->tx_queues[i];
+					idpf_tx_vec_setup_avx512(txq);
+				}
+			}
+		}
+#else
+		PMD_DRV_LOG(NOTICE,
+			    "AVX512 is not supported in build env");
+#endif /* CC_AVX512_SUPPORT */
+	} else {
+		vport->tx_vec_allowed = false;
+	}
+#endif /* RTE_ARCH_X86 */
 
 	if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
 		dev->tx_pkt_burst = idpf_splitq_xmit_pkts;
 		dev->tx_pkt_prepare = idpf_prep_pkts;
 	} else {
+#ifdef RTE_ARCH_X86
+		if (vport->tx_vec_allowed) {
+#ifdef CC_AVX512_SUPPORT
+			if (vport->tx_use_avx512) {
+				dev->tx_pkt_burst = idpf_singleq_xmit_pkts_avx512;
+				dev->tx_pkt_prepare = idpf_prep_pkts;
+				return;
+			}
+#endif /* CC_AVX512_SUPPORT */
+		}
+#endif /* RTE_ARCH_X86 */
 		dev->tx_pkt_burst = idpf_singleq_xmit_pkts;
 		dev->tx_pkt_prepare = idpf_prep_pkts;
 	}
diff --git a/drivers/net/cpfl/cpfl_rxtx_vec_common.h b/drivers/net/cpfl/cpfl_rxtx_vec_common.h
new file mode 100644
index 0000000000..a411cf6a32
--- /dev/null
+++ b/drivers/net/cpfl/cpfl_rxtx_vec_common.h
@@ -0,0 +1,99 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2022 Intel Corporation
+ */
+
+#ifndef _CPFL_RXTX_VEC_COMMON_H_
+#define _CPFL_RXTX_VEC_COMMON_H_
+#include <stdint.h>
+#include <ethdev_driver.h>
+#include <rte_malloc.h>
+
+#include "cpfl_ethdev.h"
+#include "cpfl_rxtx.h"
+
+#ifndef __INTEL_COMPILER
+#pragma GCC diagnostic ignored "-Wcast-qual"
+#endif
+
+#define CPFL_VECTOR_PATH		0
+#define CPFL_RX_NO_VECTOR_FLAGS (		\
+		RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |	\
+		RTE_ETH_RX_OFFLOAD_UDP_CKSUM |	\
+		RTE_ETH_RX_OFFLOAD_TCP_CKSUM |	\
+		RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |	\
+		RTE_ETH_RX_OFFLOAD_TIMESTAMP)
+#define CPFL_TX_NO_VECTOR_FLAGS (		\
+		RTE_ETH_TX_OFFLOAD_TCP_TSO |	\
+		RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
+
+static inline int
+cpfl_rx_vec_queue_default(struct idpf_rx_queue *rxq)
+{
+	if (rxq == NULL)
+		return -1;
+
+	if (rte_is_power_of_2(rxq->nb_rx_desc) == 0)
+		return -1;
+
+	if (rxq->rx_free_thresh < IDPF_VPMD_RX_MAX_BURST)
+		return -1;
+
+	if ((rxq->nb_rx_desc % rxq->rx_free_thresh) != 0)
+		return -1;
+
+	if ((rxq->offloads & CPFL_RX_NO_VECTOR_FLAGS) != 0)
+		return -1;
+
+	return CPFL_VECTOR_PATH;
+}
+
+static inline int
+cpfl_tx_vec_queue_default(struct idpf_tx_queue *txq)
+{
+	if (txq == NULL)
+		return -1;
+
+	if (txq->rs_thresh < IDPF_VPMD_TX_MAX_BURST ||
+	    (txq->rs_thresh & 3) != 0)
+		return -1;
+
+	if ((txq->offloads & CPFL_TX_NO_VECTOR_FLAGS) != 0)
+		return -1;
+
+	return CPFL_VECTOR_PATH;
+}
+
+static inline int
+cpfl_rx_vec_dev_check_default(struct rte_eth_dev *dev)
+{
+	struct idpf_rx_queue *rxq;
+	int i, ret = 0;
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		rxq = dev->data->rx_queues[i];
+		ret = (cpfl_rx_vec_queue_default(rxq));
+		if (ret < 0)
+			return -1;
+	}
+
+	return CPFL_VECTOR_PATH;
+}
+
+static inline int
+cpfl_tx_vec_dev_check_default(struct rte_eth_dev *dev)
+{
+	int i;
+	struct idpf_tx_queue *txq;
+	int ret = 0;
+
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		txq = dev->data->tx_queues[i];
+		ret = cpfl_tx_vec_queue_default(txq);
+		if (ret < 0)
+			return -1;
+	}
+
+	return CPFL_VECTOR_PATH;
+}
+
+#endif /*_CPFL_RXTX_VEC_COMMON_H_*/
diff --git a/drivers/net/cpfl/meson.build b/drivers/net/cpfl/meson.build
index 3ccee15703..40ed8dbb7b 100644
--- a/drivers/net/cpfl/meson.build
+++ b/drivers/net/cpfl/meson.build
@@ -7,9 +7,32 @@ if is_windows
     subdir_done()
 endif
 
+if dpdk_conf.get('RTE_IOVA_AS_PA') == 0
+    build = false
+    reason = 'driver does not support disabling IOVA as PA mode'
+    subdir_done()
+endif
+
 deps += ['common_idpf']
 
 sources = files(
         'cpfl_ethdev.c',
         'cpfl_rxtx.c',
-)
\ No newline at end of file
+)
+
+if arch_subdir == 'x86'
+    cpfl_avx512_cpu_support = (
+        cc.get_define('__AVX512F__', args: machine_args) != '' and
+        cc.get_define('__AVX512BW__', args: machine_args) != ''
+    )
+
+    cpfl_avx512_cc_support = (
+        not machine_args.contains('-mno-avx512f') and
+        cc.has_argument('-mavx512f') and
+        cc.has_argument('-mavx512bw')
+    )
+
+    if cpfl_avx512_cpu_support == true or cpfl_avx512_cc_support == true
+        cflags += ['-DCC_AVX512_SUPPORT']
+    endif
+endif
-- 
2.25.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH 16/21] net/cpfl: support timestamp offload
  2022-12-23  1:55 [PATCH 00/21] add support for cpfl PMD in DPDK Mingxia Liu
                   ` (14 preceding siblings ...)
  2022-12-23  1:55 ` [PATCH 15/21] net/cpfl: add AVX512 data path for single queue model Mingxia Liu
@ 2022-12-23  1:55 ` Mingxia Liu
  2022-12-23  1:55 ` [PATCH 17/21] net/cpfl: add AVX512 data path for split queue model Mingxia Liu
                   ` (5 subsequent siblings)
  21 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2022-12-23  1:55 UTC (permalink / raw)
  To: dev; +Cc: jingjing.wu, beilei.xing, qi.z.zhang, Mingxia Liu

Add support for timestamp offload.
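
The timestamp is delivered in a dynamic mbuf field, assuming the common
idpf code registers the standard timestamp dynfield (which the call to
idpf_register_ts_mbuf() suggests). A hedged read-side sketch ("m" is a
received mbuf; lookup error checks are omitted):

	#include <rte_mbuf.h>
	#include <rte_mbuf_dyn.h>
	#include <rte_bitops.h>

	int ts_off = rte_mbuf_dynfield_lookup(
			RTE_MBUF_DYNFIELD_TIMESTAMP_NAME, NULL);
	uint64_t ts_flag = RTE_BIT64(rte_mbuf_dynflag_lookup(
			RTE_MBUF_DYNFLAG_RX_TIMESTAMP_NAME, NULL));

	if ((m->ol_flags & ts_flag) != 0) {
		rte_mbuf_timestamp_t ts =
			*RTE_MBUF_DYNFIELD(m, ts_off, rte_mbuf_timestamp_t *);
		/* ... use ts ... */
	}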

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 doc/guides/nics/features/cpfl.ini | 1 +
 drivers/net/cpfl/cpfl_ethdev.c    | 3 ++-
 drivers/net/cpfl/cpfl_rxtx.c      | 7 +++++++
 3 files changed, 10 insertions(+), 1 deletion(-)

diff --git a/doc/guides/nics/features/cpfl.ini b/doc/guides/nics/features/cpfl.ini
index f4e45c7c68..c1209df3e5 100644
--- a/doc/guides/nics/features/cpfl.ini
+++ b/doc/guides/nics/features/cpfl.ini
@@ -11,6 +11,7 @@ MTU update           = Y
 TSO                  = P
 L3 checksum offload  = P
 L4 checksum offload  = P
+Timestamp offload    = P
 Linux                = Y
 x86-32               = Y
 x86-64               = Y
diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index 5fe800f27c..4e5d4e124a 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -103,7 +103,8 @@ cpfl_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 		RTE_ETH_RX_OFFLOAD_IPV4_CKSUM           |
 		RTE_ETH_RX_OFFLOAD_UDP_CKSUM            |
 		RTE_ETH_RX_OFFLOAD_TCP_CKSUM            |
-		RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM;
+		RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM     |
+		RTE_ETH_RX_OFFLOAD_TIMESTAMP;
 
 	dev_info->tx_offload_capa =
 		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM		|
diff --git a/drivers/net/cpfl/cpfl_rxtx.c b/drivers/net/cpfl/cpfl_rxtx.c
index 63f474a79b..efe13775e6 100644
--- a/drivers/net/cpfl/cpfl_rxtx.c
+++ b/drivers/net/cpfl/cpfl_rxtx.c
@@ -516,6 +516,13 @@ cpfl_rx_queue_init(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 		return -EINVAL;
 	}
 
+	err = idpf_register_ts_mbuf(rxq);
+	if (err != 0) {
+		PMD_DRV_LOG(ERR, "fail to register timestamp mbuf %u",
+					rx_queue_id);
+		return -EIO;
+	}
+
 	if (rxq->bufq1 == NULL) {
 		/* Single queue */
 		err = idpf_alloc_single_rxq_mbufs(rxq);
-- 
2.25.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH 17/21] net/cpfl: add AVX512 data path for split queue model
  2022-12-23  1:55 [PATCH 00/21] add support for cpfl PMD in DPDK Mingxia Liu
                   ` (15 preceding siblings ...)
  2022-12-23  1:55 ` [PATCH 16/21] net/cpfl: support timestamp offload Mingxia Liu
@ 2022-12-23  1:55 ` Mingxia Liu
  2022-12-23  1:55 ` [PATCH 18/21] net/cpfl: add hw statistics Mingxia Liu
                   ` (4 subsequent siblings)
  21 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2022-12-23  1:55 UTC (permalink / raw)
  To: dev; +Cc: jingjing.wu, beilei.xing, qi.z.zhang, Mingxia Liu, Wenjun Wu

Add support for the AVX512 data path for split queue model.

Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/net/cpfl/cpfl_rxtx.c            | 25 +++++++++++++++++++++++++
 drivers/net/cpfl/cpfl_rxtx_vec_common.h | 14 +++++++++++++-
 2 files changed, 38 insertions(+), 1 deletion(-)

diff --git a/drivers/net/cpfl/cpfl_rxtx.c b/drivers/net/cpfl/cpfl_rxtx.c
index efe13775e6..9277249704 100644
--- a/drivers/net/cpfl/cpfl_rxtx.c
+++ b/drivers/net/cpfl/cpfl_rxtx.c
@@ -772,6 +772,20 @@ cpfl_set_rx_function(struct rte_eth_dev *dev)
 
 #ifdef RTE_ARCH_X86
 	if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
+#ifdef RTE_ARCH_X86
+		if (vport->rx_vec_allowed) {
+			for (i = 0; i < dev->data->nb_rx_queues; i++) {
+				rxq = dev->data->rx_queues[i];
+				(void)idpf_splitq_rx_vec_setup(rxq);
+			}
+#ifdef CC_AVX512_SUPPORT
+			if (vport->rx_use_avx512) {
+				dev->rx_pkt_burst = idpf_splitq_recv_pkts_avx512;
+				return;
+			}
+#endif
+		}
+#endif
 		dev->rx_pkt_burst = idpf_splitq_recv_pkts;
 	} else {
 		if (vport->rx_vec_allowed) {
@@ -833,6 +847,17 @@ cpfl_set_tx_function(struct rte_eth_dev *dev)
 #endif /* RTE_ARCH_X86 */
 
 	if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
+#ifdef RTE_ARCH_X86
+		if (vport->tx_vec_allowed) {
+#ifdef CC_AVX512_SUPPORT
+			if (vport->tx_use_avx512) {
+				dev->tx_pkt_burst = idpf_splitq_xmit_pkts_avx512;
+				dev->tx_pkt_prepare = idpf_prep_pkts;
+				return;
+			}
+#endif
+		}
+#endif
 		dev->tx_pkt_burst = idpf_splitq_xmit_pkts;
 		dev->tx_pkt_prepare = idpf_prep_pkts;
 	} else {
diff --git a/drivers/net/cpfl/cpfl_rxtx_vec_common.h b/drivers/net/cpfl/cpfl_rxtx_vec_common.h
index a411cf6a32..fc3ace89dd 100644
--- a/drivers/net/cpfl/cpfl_rxtx_vec_common.h
+++ b/drivers/net/cpfl/cpfl_rxtx_vec_common.h
@@ -63,15 +63,27 @@ cpfl_tx_vec_queue_default(struct idpf_tx_queue *txq)
 	return CPFL_VECTOR_PATH;
 }
 
+static inline int
+cpfl_rx_splitq_vec_default(struct idpf_rx_queue *rxq)
+{
+	if (rxq->bufq2->rx_buf_len < rxq->max_pkt_len)
+		return -1;
+
+	return CPFL_VECTOR_PATH;
+}
+
 static inline int
 cpfl_rx_vec_dev_check_default(struct rte_eth_dev *dev)
 {
+	struct idpf_vport *vport = dev->data->dev_private;
 	struct idpf_rx_queue *rxq;
 	int i, ret = 0;
 
 	for (i = 0; i < dev->data->nb_rx_queues; i++) {
 		rxq = dev->data->rx_queues[i];
-		ret = (cpfl_rx_vec_queue_default(rxq));
+		ret = cpfl_rx_vec_queue_default(rxq);
+		if (ret >= 0 && vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT)
+			ret = cpfl_rx_splitq_vec_default(rxq);
 		if (ret < 0)
 			return -1;
 	}
-- 
2.25.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH 18/21] net/cpfl: add hw statistics
  2022-12-23  1:55 [PATCH 00/21] add support for cpfl PMD in DPDK Mingxia Liu
                   ` (16 preceding siblings ...)
  2022-12-23  1:55 ` [PATCH 17/21] net/cpfl: add AVX512 data path for split queue model Mingxia Liu
@ 2022-12-23  1:55 ` Mingxia Liu
  2022-12-23  1:55 ` [PATCH 19/21] net/cpfl: add RSS set/get ops Mingxia Liu
                   ` (3 subsequent siblings)
  21 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2022-12-23  1:55 UTC (permalink / raw)
  To: dev; +Cc: jingjing.wu, beilei.xing, qi.z.zhang, Mingxia Liu

This patch adds hardware packet/byte statistics.
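
The counters surface through the standard ethdev stats API. A hedged
sketch (port 0 is a placeholder):

	#include <stdio.h>
	#include <inttypes.h>
	#include <rte_ethdev.h>

	struct rte_eth_stats st;

	if (rte_eth_stats_get(0, &st) == 0) /* -> cpfl_dev_stats_get */
		printf("rx %" PRIu64 " pkts, %" PRIu64 " bytes, %" PRIu64 " missed\n",
		       st.ipackets, st.ibytes, st.imissed);
	rte_eth_stats_reset(0);             /* -> cpfl_dev_stats_reset */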

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/net/cpfl/cpfl_ethdev.c | 88 ++++++++++++++++++++++++++++++++++
 1 file changed, 88 insertions(+)

diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index 4e5d4e124a..026ac52997 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -169,6 +169,87 @@ cpfl_dev_supported_ptypes_get(struct rte_eth_dev *dev __rte_unused)
 	return ptypes;
 }
 
+static uint64_t
+cpfl_get_mbuf_alloc_failed_stats(struct rte_eth_dev *dev)
+{
+	uint64_t mbuf_alloc_failed = 0;
+	struct idpf_rx_queue *rxq;
+	int i = 0;
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		rxq = dev->data->rx_queues[i];
+		mbuf_alloc_failed +=
+		    rte_atomic64_read(&(rxq->rx_stats.mbuf_alloc_failed));
+	}
+
+	return mbuf_alloc_failed;
+}
+
+static int
+cpfl_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
+{
+	struct idpf_vport *vport =
+		(struct idpf_vport *)dev->data->dev_private;
+	struct virtchnl2_vport_stats *pstats = NULL;
+	int ret;
+
+	ret = idpf_query_stats(vport, &pstats);
+	if (ret == 0) {
+		uint8_t crc_stats_len = (dev->data->dev_conf.rxmode.offloads &
+					 RTE_ETH_RX_OFFLOAD_KEEP_CRC) ? 0 :
+					 RTE_ETHER_CRC_LEN;
+
+		idpf_update_stats(&vport->eth_stats_offset, pstats);
+		stats->ipackets = pstats->rx_unicast + pstats->rx_multicast +
+				pstats->rx_broadcast - pstats->rx_discards;
+		stats->opackets = pstats->tx_broadcast + pstats->tx_multicast +
+						pstats->tx_unicast;
+		stats->imissed = pstats->rx_discards;
+		stats->oerrors = pstats->tx_errors + pstats->tx_discards;
+		stats->ibytes = pstats->rx_bytes;
+		stats->ibytes -= stats->ipackets * crc_stats_len;
+		stats->obytes = pstats->tx_bytes;
+
+		dev->data->rx_mbuf_alloc_failed = cpfl_get_mbuf_alloc_failed_stats(dev);
+		stats->rx_nombuf = dev->data->rx_mbuf_alloc_failed;
+	} else {
+		PMD_DRV_LOG(ERR, "Get statistics failed");
+	}
+	return ret;
+}
+
+static void
+cpfl_reset_mbuf_alloc_failed_stats(struct rte_eth_dev *dev)
+{
+	struct idpf_rx_queue *rxq;
+	int i;
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		rxq = dev->data->rx_queues[i];
+		rte_atomic64_set(&(rxq->rx_stats.mbuf_alloc_failed), 0);
+	}
+}
+
+static int
+cpfl_dev_stats_reset(struct rte_eth_dev *dev)
+{
+	struct idpf_vport *vport =
+		(struct idpf_vport *)dev->data->dev_private;
+	struct virtchnl2_vport_stats *pstats = NULL;
+	int ret;
+
+	ret = idpf_query_stats(vport, &pstats);
+	if (ret != 0)
+		return ret;
+
+	/* set stats offset based on current values */
+	vport->eth_stats_offset = *pstats;
+
+	cpfl_reset_mbuf_alloc_failed_stats(dev);
+
+	return 0;
+}
+
 static int
 cpfl_init_rss(struct idpf_vport *vport)
 {
@@ -362,6 +443,11 @@ cpfl_dev_start(struct rte_eth_dev *dev)
 		goto err_vport;
 	}
 
+	if (cpfl_dev_stats_reset(dev)) {
+		PMD_DRV_LOG(ERR, "Failed to reset stats");
+		goto err_vport;
+	}
+
 	return 0;
 
 err_vport:
@@ -757,6 +843,8 @@ static const struct eth_dev_ops cpfl_eth_dev_ops = {
 	.tx_queue_release		= cpfl_dev_tx_queue_release,
 	.mtu_set			= cpfl_dev_mtu_set,
 	.dev_supported_ptypes_get	= cpfl_dev_supported_ptypes_get,
+	.stats_get			= cpfl_dev_stats_get,
+	.stats_reset			= cpfl_dev_stats_reset,
 };
 
 static uint16_t
-- 
2.25.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH 19/21] net/cpfl: add RSS set/get ops
  2022-12-23  1:55 [PATCH 00/21] add support for cpfl PMD in DPDK Mingxia Liu
                   ` (17 preceding siblings ...)
  2022-12-23  1:55 ` [PATCH 18/21] net/cpfl: add hw statistics Mingxia Liu
@ 2022-12-23  1:55 ` Mingxia Liu
  2022-12-23  1:55 ` [PATCH 20/21] net/cpfl: support single q scatter RX datapath Mingxia Liu
                   ` (2 subsequent siblings)
  21 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2022-12-23  1:55 UTC (permalink / raw)
  To: dev; +Cc: jingjing.wu, beilei.xing, qi.z.zhang, Mingxia Liu

Add support for these device ops (see the usage sketch below):
- rss_reta_update
- rss_reta_query
- rss_hash_update
- rss_hash_conf_get
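
These surface through the standard ethdev RSS calls. A hedged sketch
that spreads the redirection table over the first two queues (port 0 is
a placeholder, and the reta array is sized on the assumption that the
LUT has at most 512 entries):

	#include <string.h>
	#include <rte_ethdev.h>
	#include <rte_bitops.h>

	struct rte_eth_dev_info info;
	struct rte_eth_rss_reta_entry64 reta[8]; /* 8 * 64 = 512 entries */
	uint16_t i;

	rte_eth_dev_info_get(0, &info);
	memset(reta, 0, sizeof(reta));
	for (i = 0; i < info.reta_size; i++) {
		uint16_t idx = i / RTE_ETH_RETA_GROUP_SIZE;
		uint16_t shift = i % RTE_ETH_RETA_GROUP_SIZE;

		reta[idx].mask |= RTE_BIT64(shift);
		reta[idx].reta[shift] = i % 2; /* alternate queues 0 and 1 */
	}
	rte_eth_dev_rss_reta_update(0, reta, info.reta_size); /* -> cpfl_rss_reta_update */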

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/net/cpfl/cpfl_ethdev.c | 303 +++++++++++++++++++++++++++++++++
 1 file changed, 303 insertions(+)

diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index 026ac52997..578137dca0 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -30,6 +30,56 @@ static const char * const cpfl_valid_args[] = {
 	NULL
 };
 
+static const uint64_t cpfl_map_hena_rss[] = {
+	[IDPF_HASH_NONF_UNICAST_IPV4_UDP] =
+			RTE_ETH_RSS_NONFRAG_IPV4_UDP,
+	[IDPF_HASH_NONF_MULTICAST_IPV4_UDP] =
+			RTE_ETH_RSS_NONFRAG_IPV4_UDP,
+	[IDPF_HASH_NONF_IPV4_UDP] =
+			RTE_ETH_RSS_NONFRAG_IPV4_UDP,
+	[IDPF_HASH_NONF_IPV4_TCP_SYN_NO_ACK] =
+			RTE_ETH_RSS_NONFRAG_IPV4_TCP,
+	[IDPF_HASH_NONF_IPV4_TCP] =
+			RTE_ETH_RSS_NONFRAG_IPV4_TCP,
+	[IDPF_HASH_NONF_IPV4_SCTP] =
+			RTE_ETH_RSS_NONFRAG_IPV4_SCTP,
+	[IDPF_HASH_NONF_IPV4_OTHER] =
+			RTE_ETH_RSS_NONFRAG_IPV4_OTHER,
+	[IDPF_HASH_FRAG_IPV4] = RTE_ETH_RSS_FRAG_IPV4,
+
+	/* IPv6 */
+	[IDPF_HASH_NONF_UNICAST_IPV6_UDP] =
+			RTE_ETH_RSS_NONFRAG_IPV6_UDP,
+	[IDPF_HASH_NONF_MULTICAST_IPV6_UDP] =
+			RTE_ETH_RSS_NONFRAG_IPV6_UDP,
+	[IDPF_HASH_NONF_IPV6_UDP] =
+			RTE_ETH_RSS_NONFRAG_IPV6_UDP,
+	[IDPF_HASH_NONF_IPV6_TCP_SYN_NO_ACK] =
+			RTE_ETH_RSS_NONFRAG_IPV6_TCP,
+	[IDPF_HASH_NONF_IPV6_TCP] =
+			RTE_ETH_RSS_NONFRAG_IPV6_TCP,
+	[IDPF_HASH_NONF_IPV6_SCTP] =
+			RTE_ETH_RSS_NONFRAG_IPV6_SCTP,
+	[IDPF_HASH_NONF_IPV6_OTHER] =
+			RTE_ETH_RSS_NONFRAG_IPV6_OTHER,
+	[IDPF_HASH_FRAG_IPV6] = RTE_ETH_RSS_FRAG_IPV6,
+
+	/* L2 Payload */
+	[IDPF_HASH_L2_PAYLOAD] = RTE_ETH_RSS_L2_PAYLOAD
+};
+
+static const uint64_t cpfl_ipv4_rss = RTE_ETH_RSS_NONFRAG_IPV4_UDP |
+			  RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+			  RTE_ETH_RSS_NONFRAG_IPV4_SCTP |
+			  RTE_ETH_RSS_NONFRAG_IPV4_OTHER |
+			  RTE_ETH_RSS_FRAG_IPV4;
+
+static const uint64_t cpfl_ipv6_rss = RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+			  RTE_ETH_RSS_NONFRAG_IPV6_TCP |
+			  RTE_ETH_RSS_NONFRAG_IPV6_SCTP |
+			  RTE_ETH_RSS_NONFRAG_IPV6_OTHER |
+			  RTE_ETH_RSS_FRAG_IPV6;
+
 static int
 cpfl_dev_link_update(struct rte_eth_dev *dev,
 		     __rte_unused int wait_to_complete)
@@ -97,6 +147,9 @@ cpfl_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->max_mtu = dev_info->max_rx_pktlen - CPFL_ETH_OVERHEAD;
 	dev_info->min_mtu = RTE_ETHER_MIN_MTU;
 
+	dev_info->hash_key_size = vport->rss_key_size;
+	dev_info->reta_size = vport->rss_lut_size;
+
 	dev_info->flow_type_rss_offloads = CPFL_RSS_OFFLOAD_ALL;
 
 	dev_info->rx_offload_capa =
@@ -250,6 +303,54 @@ cpfl_dev_stats_reset(struct rte_eth_dev *dev)
 	return 0;
 }
 
+static int cpfl_config_rss_hf(struct idpf_vport *vport, uint64_t rss_hf)
+{
+	uint64_t hena = 0, valid_rss_hf = 0;
+	int ret = 0;
+	uint16_t i;
+
+	/*
+	 * RTE_ETH_RSS_IPV4 and RTE_ETH_RSS_IPV6 can be considered as two
+	 * generalizations of all other IPv4 and IPv6 RSS types.
+	 */
+	if (rss_hf & RTE_ETH_RSS_IPV4)
+		rss_hf |= cpfl_ipv4_rss;
+
+	if (rss_hf & RTE_ETH_RSS_IPV6)
+		rss_hf |= cpfl_ipv6_rss;
+
+	for (i = 0; i < RTE_DIM(cpfl_map_hena_rss); i++) {
+		uint64_t bit = BIT_ULL(i);
+
+		if (cpfl_map_hena_rss[i] & rss_hf) {
+			valid_rss_hf |= cpfl_map_hena_rss[i];
+			hena |= bit;
+		}
+	}
+
+	vport->rss_hf = hena;
+
+	ret = idpf_vc_set_rss_hash(vport);
+	if (ret != 0) {
+		PMD_DRV_LOG(WARNING,
+			    "fail to set RSS offload types, ret: %d", ret);
+		return ret;
+	}
+
+	if (valid_rss_hf & cpfl_ipv4_rss)
+		valid_rss_hf |= rss_hf & RTE_ETH_RSS_IPV4;
+
+	if (valid_rss_hf & cpfl_ipv6_rss)
+		valid_rss_hf |= rss_hf & RTE_ETH_RSS_IPV6;
+
+	if (rss_hf & ~valid_rss_hf)
+		PMD_DRV_LOG(WARNING, "Unsupported rss_hf 0x%" PRIx64,
+			    rss_hf & ~valid_rss_hf);
+	vport->last_general_rss_hf = valid_rss_hf;
+
+	return ret;
+}
+
 static int
 cpfl_init_rss(struct idpf_vport *vport)
 {
@@ -286,6 +387,204 @@ cpfl_init_rss(struct idpf_vport *vport)
 	return ret;
 }
 
+static int
+cpfl_rss_reta_update(struct rte_eth_dev *dev,
+		     struct rte_eth_rss_reta_entry64 *reta_conf,
+		     uint16_t reta_size)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_adapter *adapter = vport->adapter;
+	uint16_t idx, shift;
+	uint32_t *lut;
+	int ret = 0;
+	uint16_t i;
+
+	if (adapter->caps.rss_caps == 0 || dev->data->nb_rx_queues == 0) {
+		PMD_DRV_LOG(DEBUG, "RSS is not supported");
+		return -ENOTSUP;
+	}
+
+	if (reta_size != vport->rss_lut_size) {
+		PMD_DRV_LOG(ERR, "The size of hash lookup table configured "
+				 "(%d) doesn't match the number of hardware can "
+				 "support (%d)",
+			    reta_size, vport->rss_lut_size);
+		return -EINVAL;
+	}
+
+	/* It MUST use the current LUT size to get the RSS lookup table,
+	 * otherwise it will fail with a -100 error code.
+	 */
+	lut = rte_zmalloc(NULL, reta_size * sizeof(uint32_t), 0);
+	if (!lut) {
+		PMD_DRV_LOG(ERR, "No memory can be allocated");
+		return -ENOMEM;
+	}
+	/* store the old lut table temporarily */
+	rte_memcpy(lut, vport->rss_lut, reta_size * sizeof(uint32_t));
+
+	for (i = 0; i < reta_size; i++) {
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
+		if (reta_conf[idx].mask & (1ULL << shift))
+			lut[i] = reta_conf[idx].reta[shift];
+	}
+
+	rte_memcpy(vport->rss_lut, lut, reta_size * sizeof(uint32_t));
+	/* send virtchnl ops to configure RSS */
+	ret = idpf_vc_set_rss_lut(vport);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to configure RSS lut");
+		goto out;
+	}
+out:
+	rte_free(lut);
+
+	return ret;
+}
+
+static int
+cpfl_rss_reta_query(struct rte_eth_dev *dev,
+		    struct rte_eth_rss_reta_entry64 *reta_conf,
+		    uint16_t reta_size)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_adapter *adapter = vport->adapter;
+	uint16_t idx, shift;
+	int ret = 0;
+	uint16_t i;
+
+	if (adapter->caps.rss_caps == 0 || dev->data->nb_rx_queues == 0) {
+		PMD_DRV_LOG(DEBUG, "RSS is not supported");
+		return -ENOTSUP;
+	}
+
+	if (reta_size != vport->rss_lut_size) {
+		PMD_DRV_LOG(ERR, "The size of hash lookup table configured "
+			"(%d) doesn't match the number of hardware can "
+			"support (%d)", reta_size, vport->rss_lut_size);
+		return -EINVAL;
+	}
+
+	ret = idpf_vc_get_rss_lut(vport);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "Failed to get RSS LUT");
+		return ret;
+	}
+
+	for (i = 0; i < reta_size; i++) {
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
+		if (reta_conf[idx].mask & (1ULL << shift))
+			reta_conf[idx].reta[shift] = vport->rss_lut[i];
+	}
+
+	return 0;
+}
+
+static int
+cpfl_rss_hash_update(struct rte_eth_dev *dev,
+		     struct rte_eth_rss_conf *rss_conf)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_adapter *adapter = vport->adapter;
+	int ret = 0;
+
+	if (adapter->caps.rss_caps == 0 || dev->data->nb_rx_queues == 0) {
+		PMD_DRV_LOG(DEBUG, "RSS is not supported");
+		return -ENOTSUP;
+	}
+
+	if (!rss_conf->rss_key || rss_conf->rss_key_len == 0) {
+		PMD_DRV_LOG(DEBUG, "No key to be configured");
+		goto skip_rss_key;
+	} else if (rss_conf->rss_key_len != vport->rss_key_size) {
+		PMD_DRV_LOG(ERR, "The size of hash key configured "
+				 "(%d) doesn't match the size of hardware can "
+				 "support (%d)",
+			    rss_conf->rss_key_len,
+			    vport->rss_key_size);
+		return -EINVAL;
+	}
+
+	rte_memcpy(vport->rss_key, rss_conf->rss_key,
+		   vport->rss_key_size);
+	ret = idpf_vc_set_rss_key(vport);
+	if (ret != 0) {
+		PMD_INIT_LOG(ERR, "Failed to configure RSS key");
+		return ret;
+	}
+
+skip_rss_key:
+	ret = cpfl_config_rss_hf(vport, rss_conf->rss_hf);
+	if (ret != 0) {
+		PMD_INIT_LOG(ERR, "Failed to configure RSS hash");
+		return ret;
+	}
+
+	return 0;
+}
+
+static uint64_t
+cpfl_map_general_rss_hf(uint64_t config_rss_hf, uint64_t last_general_rss_hf)
+{
+	uint64_t valid_rss_hf = 0;
+	uint16_t i;
+
+	for (i = 0; i < RTE_DIM(cpfl_map_hena_rss); i++) {
+		uint64_t bit = BIT_ULL(i);
+
+		if (bit & config_rss_hf)
+			valid_rss_hf |= cpfl_map_hena_rss[i];
+	}
+
+	if (valid_rss_hf & cpfl_ipv4_rss)
+		valid_rss_hf |= last_general_rss_hf & RTE_ETH_RSS_IPV4;
+
+	if (valid_rss_hf & cpfl_ipv6_rss)
+		valid_rss_hf |= last_general_rss_hf & RTE_ETH_RSS_IPV6;
+
+	return valid_rss_hf;
+}
+
+static int
+cpfl_rss_hash_conf_get(struct rte_eth_dev *dev,
+		       struct rte_eth_rss_conf *rss_conf)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_adapter *adapter = vport->adapter;
+	int ret = 0;
+
+	if (adapter->caps.rss_caps == 0 || dev->data->nb_rx_queues == 0) {
+		PMD_DRV_LOG(DEBUG, "RSS is not supported");
+		return -ENOTSUP;
+	}
+
+	ret = idpf_vc_get_rss_hash(vport);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "Failed to get RSS hf");
+		return ret;
+	}
+
+	rss_conf->rss_hf = cpfl_map_general_rss_hf(vport->rss_hf, vport->last_general_rss_hf);
+
+	if (!rss_conf->rss_key)
+		return 0;
+
+	ret = idpf_vc_get_rss_key(vport);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "Failed to get RSS key");
+		return ret;
+	}
+
+	if (rss_conf->rss_key_len > vport->rss_key_size)
+		rss_conf->rss_key_len = vport->rss_key_size;
+
+	rte_memcpy(rss_conf->rss_key, vport->rss_key, rss_conf->rss_key_len);
+
+	return 0;
+}
+
 static int
 cpfl_dev_configure(struct rte_eth_dev *dev)
 {
@@ -845,6 +1144,10 @@ static const struct eth_dev_ops cpfl_eth_dev_ops = {
 	.dev_supported_ptypes_get	= cpfl_dev_supported_ptypes_get,
 	.stats_get			= cpfl_dev_stats_get,
 	.stats_reset			= cpfl_dev_stats_reset,
+	.reta_update			= cpfl_rss_reta_update,
+	.reta_query			= cpfl_rss_reta_query,
+	.rss_hash_update		= cpfl_rss_hash_update,
+	.rss_hash_conf_get		= cpfl_rss_hash_conf_get,
 };
 
 static uint16_t
-- 
2.25.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH 20/21] net/cpfl: support single q scatter RX datapath
  2022-12-23  1:55 [PATCH 00/21] add support for cpfl PMD in DPDK Mingxia Liu
                   ` (18 preceding siblings ...)
  2022-12-23  1:55 ` [PATCH 19/21] net/cpfl: add RSS set/get ops Mingxia Liu
@ 2022-12-23  1:55 ` Mingxia Liu
  2022-12-23  1:55 ` [PATCH 21/21] net/cpfl: add xstats ops Mingxia Liu
  2023-01-13  8:19 ` [PATCH v2 00/21] add support for cpfl PMD in DPDK Mingxia Liu
  21 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2022-12-23  1:55 UTC (permalink / raw)
  To: dev; +Cc: jingjing.wu, beilei.xing, qi.z.zhang, Mingxia Liu, Wenjun Wu

This patch adds a scatter Rx function for the single queue model.
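
As a hedged illustration (not part of the patch), an application opts
in to this path through the standard scatter offload at configure
time; enable_scatter_rx and its parameters are made-up names:

#include <string.h>
#include <rte_ethdev.h>

/* Request scattered Rx so frames larger than one mbuf's data room
 * are chained across several mbufs.
 */
static int
enable_scatter_rx(uint16_t port_id, uint16_t nb_rxq, uint16_t nb_txq)
{
	struct rte_eth_conf port_conf;

	memset(&port_conf, 0, sizeof(port_conf));
	port_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_SCATTER;

	return rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &port_conf);
}

The driver also picks the scatter path on its own when the configured
MTU does not fit in a single Rx buffer, as cpfl_rx_queue_init() below
shows.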

Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/net/cpfl/cpfl_ethdev.c |  3 ++-
 drivers/net/cpfl/cpfl_rxtx.c   | 26 ++++++++++++++++++++++++--
 drivers/net/cpfl/cpfl_rxtx.h   |  2 ++
 3 files changed, 28 insertions(+), 3 deletions(-)

diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index 578137dca0..07f616835c 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -157,7 +157,8 @@ cpfl_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 		RTE_ETH_RX_OFFLOAD_UDP_CKSUM            |
 		RTE_ETH_RX_OFFLOAD_TCP_CKSUM            |
 		RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM     |
-		RTE_ETH_RX_OFFLOAD_TIMESTAMP;
+		RTE_ETH_RX_OFFLOAD_TIMESTAMP		|
+		RTE_ETH_RX_OFFLOAD_SCATTER;
 
 	dev_info->tx_offload_capa =
 		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM		|
diff --git a/drivers/net/cpfl/cpfl_rxtx.c b/drivers/net/cpfl/cpfl_rxtx.c
index 9277249704..3d768f1e30 100644
--- a/drivers/net/cpfl/cpfl_rxtx.c
+++ b/drivers/net/cpfl/cpfl_rxtx.c
@@ -503,6 +503,8 @@ int
 cpfl_rx_queue_init(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 {
 	struct idpf_rx_queue *rxq;
+	uint16_t max_pkt_len;
+	uint32_t frame_size;
 	int err;
 
 	if (rx_queue_id >= dev->data->nb_rx_queues)
@@ -516,6 +518,17 @@ cpfl_rx_queue_init(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 		return -EINVAL;
 	}
 
+	frame_size = dev->data->mtu + CPFL_ETH_OVERHEAD;
+
+	max_pkt_len =
+	    RTE_MIN((uint32_t)CPFL_SUPPORT_CHAIN_NUM * rxq->rx_buf_len,
+		    frame_size);
+
+	rxq->max_pkt_len = max_pkt_len;
+	if ((dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_SCATTER) ||
+	    frame_size > rxq->rx_buf_len)
+		dev->data->scattered_rx = 1;
+
 	err = idpf_register_ts_mbuf(rxq);
 	if (err != 0) {
 		PMD_DRV_LOG(ERR, "fail to register timestamp mbuf %u",
@@ -801,13 +814,22 @@ cpfl_set_rx_function(struct rte_eth_dev *dev)
 #endif /* CC_AVX512_SUPPORT */
 		}
 
+		if (dev->data->scattered_rx) {
+			dev->rx_pkt_burst = idpf_singleq_recv_scatter_pkts;
+			return;
+		}
 		dev->rx_pkt_burst = idpf_singleq_recv_pkts;
 	}
 #else
-	if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT)
+	if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
 		dev->rx_pkt_burst = idpf_splitq_recv_pkts;
-	else
+	} else {
+		if (dev->data->scattered_rx) {
+			dev->rx_pkt_burst = idpf_singleq_recv_scatter_pkts;
+			return;
+		}
 		dev->rx_pkt_burst = idpf_singleq_recv_pkts;
+	}
 #endif /* RTE_ARCH_X86 */
 }
 
diff --git a/drivers/net/cpfl/cpfl_rxtx.h b/drivers/net/cpfl/cpfl_rxtx.h
index 021db5bf8a..2d55f58455 100644
--- a/drivers/net/cpfl/cpfl_rxtx.h
+++ b/drivers/net/cpfl/cpfl_rxtx.h
@@ -21,6 +21,8 @@
 #define CPFL_DEFAULT_TX_RS_THRESH	32
 #define CPFL_DEFAULT_TX_FREE_THRESH	32
 
+#define CPFL_SUPPORT_CHAIN_NUM 5
+
 int cpfl_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 			uint16_t nb_desc, unsigned int socket_id,
 			const struct rte_eth_txconf *tx_conf);
-- 
2.25.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH 21/21] net/cpfl: add xstats ops
  2022-12-23  1:55 [PATCH 00/21] add support for cpfl PMD in DPDK Mingxia Liu
                   ` (19 preceding siblings ...)
  2022-12-23  1:55 ` [PATCH 20/21] net/cpfl: support single q scatter RX datapath Mingxia Liu
@ 2022-12-23  1:55 ` Mingxia Liu
  2023-01-13  8:19 ` [PATCH v2 00/21] add support for cpfl PMD in DPDK Mingxia Liu
  21 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2022-12-23  1:55 UTC (permalink / raw)
  To: dev; +Cc: jingjing.wu, beilei.xing, qi.z.zhang, Mingxia Liu

Add support for these device ops:
- cpfl_dev_xstats_get
- cpfl_dev_xstats_get_names
- cpfl_dev_xstats_reset
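
For reference, a rough application-side sketch of reading the
extended stats via the generic API; dump_xstats is a made-up helper
name:

#include <inttypes.h>
#include <stdio.h>
#include <stdlib.h>
#include <rte_ethdev.h>

/* Print every extended statistic the port exposes. */
static void
dump_xstats(uint16_t port_id)
{
	struct rte_eth_xstat_name *names = NULL;
	struct rte_eth_xstat *xstats = NULL;
	int n, i;

	/* A first call with a NULL array returns the required size. */
	n = rte_eth_xstats_get(port_id, NULL, 0);
	if (n <= 0)
		return;

	xstats = calloc(n, sizeof(*xstats));
	names = calloc(n, sizeof(*names));
	if (xstats == NULL || names == NULL)
		goto out;

	if (rte_eth_xstats_get(port_id, xstats, n) != n ||
	    rte_eth_xstats_get_names(port_id, names, n) != n)
		goto out;

	for (i = 0; i < n; i++)
		printf("%s: %" PRIu64 "\n", names[i].name, xstats[i].value);
out:
	free(xstats);
	free(names);
}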

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/net/cpfl/cpfl_ethdev.c | 80 ++++++++++++++++++++++++++++++++++
 1 file changed, 80 insertions(+)

diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index 07f616835c..bc2e6507d2 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -80,6 +80,30 @@ static const uint64_t cpfl_ipv6_rss = RTE_ETH_RSS_NONFRAG_IPV6_UDP |
 			  RTE_ETH_RSS_NONFRAG_IPV6_OTHER |
 			  RTE_ETH_RSS_FRAG_IPV6;
 
+struct rte_cpfl_xstats_name_off {
+	char name[RTE_ETH_XSTATS_NAME_SIZE];
+	unsigned int offset;
+};
+
+static const struct rte_cpfl_xstats_name_off rte_cpfl_stats_strings[] = {
+	{"rx_bytes", offsetof(struct virtchnl2_vport_stats, rx_bytes)},
+	{"rx_unicast_packets", offsetof(struct virtchnl2_vport_stats, rx_unicast)},
+	{"rx_multicast_packets", offsetof(struct virtchnl2_vport_stats, rx_multicast)},
+	{"rx_broadcast_packets", offsetof(struct virtchnl2_vport_stats, rx_broadcast)},
+	{"rx_dropped_packets", offsetof(struct virtchnl2_vport_stats, rx_discards)},
+	{"rx_errors", offsetof(struct virtchnl2_vport_stats, rx_errors)},
+	{"rx_unknown_protocol_packets", offsetof(struct virtchnl2_vport_stats,
+						 rx_unknown_protocol)},
+	{"tx_bytes", offsetof(struct virtchnl2_vport_stats, tx_bytes)},
+	{"tx_unicast_packets", offsetof(struct virtchnl2_vport_stats, tx_unicast)},
+	{"tx_multicast_packets", offsetof(struct virtchnl2_vport_stats, tx_multicast)},
+	{"tx_broadcast_packets", offsetof(struct virtchnl2_vport_stats, tx_broadcast)},
+	{"tx_dropped_packets", offsetof(struct virtchnl2_vport_stats, tx_discards)},
+	{"tx_error_packets", offsetof(struct virtchnl2_vport_stats, tx_errors)}};
+
+#define CPFL_NB_XSTATS (sizeof(rte_cpfl_stats_strings) / \
+		sizeof(rte_cpfl_stats_strings[0]))
+
 static int
 cpfl_dev_link_update(struct rte_eth_dev *dev,
 		     __rte_unused int wait_to_complete)
@@ -304,6 +328,59 @@ cpfl_dev_stats_reset(struct rte_eth_dev *dev)
 	return 0;
 }
 
+static int cpfl_dev_xstats_reset(struct rte_eth_dev *dev)
+{
+	cpfl_dev_stats_reset(dev);
+	return 0;
+}
+
+static int cpfl_dev_xstats_get(struct rte_eth_dev *dev,
+			       struct rte_eth_xstat *xstats, unsigned int n)
+{
+	struct idpf_vport *vport =
+		(struct idpf_vport *)dev->data->dev_private;
+	struct virtchnl2_vport_stats *pstats = NULL;
+	unsigned int i;
+	int ret;
+
+	if (n < CPFL_NB_XSTATS)
+		return CPFL_NB_XSTATS;
+
+	if (!xstats)
+		return 0;
+
+	ret = idpf_query_stats(vport, &pstats);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "Get statistics failed");
+		return 0;
+	}
+
+	idpf_update_stats(&vport->eth_stats_offset, pstats);
+
+	/* loop over xstats array and values from pstats */
+	for (i = 0; i < CPFL_NB_XSTATS; i++) {
+		xstats[i].id = i;
+		xstats[i].value = *(uint64_t *)(((char *)pstats) +
+			rte_cpfl_stats_strings[i].offset);
+	}
+	return CPFL_NB_XSTATS;
+}
+
+static int cpfl_dev_xstats_get_names(__rte_unused struct rte_eth_dev *dev,
+				     struct rte_eth_xstat_name *xstats_names,
+				     __rte_unused unsigned int limit)
+{
+	unsigned int i;
+
+	if (xstats_names)
+		for (i = 0; i < CPFL_NB_XSTATS; i++) {
+			snprintf(xstats_names[i].name,
+				 sizeof(xstats_names[i].name),
+				 "%s", rte_cpfl_stats_strings[i].name);
+		}
+	return CPFL_NB_XSTATS;
+}
+
 static int cpfl_config_rss_hf(struct idpf_vport *vport, uint64_t rss_hf)
 {
 	uint64_t hena = 0, valid_rss_hf = 0;
@@ -1149,6 +1226,9 @@ static const struct eth_dev_ops cpfl_eth_dev_ops = {
 	.reta_query			= cpfl_rss_reta_query,
 	.rss_hash_update		= cpfl_rss_hash_update,
 	.rss_hash_conf_get		= cpfl_rss_hash_conf_get,
+	.xstats_get			= cpfl_dev_xstats_get,
+	.xstats_get_names		= cpfl_dev_xstats_get_names,
+	.xstats_reset			= cpfl_dev_xstats_reset,
 };
 
 static uint16_t
-- 
2.25.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v2 00/21] add support for cpfl PMD in DPDK
  2022-12-23  1:55 [PATCH 00/21] add support for cpfl PMD in DPDK Mingxia Liu
                   ` (20 preceding siblings ...)
  2022-12-23  1:55 ` [PATCH 21/21] net/cpfl: add xstats ops Mingxia Liu
@ 2023-01-13  8:19 ` Mingxia Liu
  2023-01-13  8:19   ` [PATCH v2 01/21] net/cpfl: support device initialization Mingxia Liu
                     ` (24 more replies)
  21 siblings, 25 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-01-13  8:19 UTC (permalink / raw)
  To: dev, qi.z.zhang, jingjing.wu, beilei.xing; +Cc: wenjun1.wu, Mingxia Liu

The patchset introduces the cpfl (Control Plane Function Library) PMD
for the Intel® IPU E2100’s Configure Physical Function (Device ID: 0x1453).

The cpfl PMD inherits all the features from the idpf PMD, which follows
the ongoing standard data plane function spec:
https://www.oasis-open.org/committees/tc_home.php?wg_abbrev=idpf
Besides, it will also support more device-specific hardware offloading
features through DPDK’s control path (e.g. hairpin, rte_flow …).

This patchset mainly focuses on the idpf PMD’s equivalent features.
To avoid duplicated code, the patchset depends on the patchsets below,
which move the common part from net/idpf into common/idpf as a shared
library.

This patchset is based on the idpf PMD code:
http://patches.dpdk.org/project/dpdk/cover/20230106090501.9106-1-beilei.xing@intel.com/
http://patches.dpdk.org/project/dpdk/cover/20230106091627.13530-1-beilei.xing@intel.com/
http://patches.dpdk.org/project/dpdk/patch/20230113015119.3279019-2-wenjun1.wu@intel.com/
http://patches.dpdk.org/project/dpdk/cover/20230111071545.504706-1-mingxia.liu@intel.com/

v2 changes:
 - Rebase to the new baseline.
 - Fix the RSS LUT config issue.

Mingxia Liu (21):
  net/cpfl: support device initialization
  net/cpfl: add Tx queue setup
  net/cpfl: add Rx queue setup
  net/cpfl: support device start and stop
  net/cpfl: support queue start
  net/cpfl: support queue stop
  net/cpfl: support queue release
  net/cpfl: support MTU configuration
  net/cpfl: support basic Rx data path
  net/cpfl: support basic Tx data path
  net/cpfl: support write back based on ITR expire
  net/cpfl: support RSS
  net/cpfl: support Rx offloading
  net/cpfl: support Tx offloading
  net/cpfl: add AVX512 data path for single queue model
  net/cpfl: support timestamp offload
  net/cpfl: add AVX512 data path for split queue model
  net/cpfl: add hw statistics
  net/cpfl: add RSS set/get ops
  net/cpfl: support single q scatter RX datapath
  net/cpfl: add xstats ops

 MAINTAINERS                             |    9 +
 doc/guides/nics/cpfl.rst                |   88 ++
 doc/guides/nics/features/cpfl.ini       |   17 +
 doc/guides/rel_notes/release_23_03.rst  |    5 +
 drivers/net/cpfl/cpfl_ethdev.c          | 1490 +++++++++++++++++++++++
 drivers/net/cpfl/cpfl_ethdev.h          |   95 ++
 drivers/net/cpfl/cpfl_logs.h            |   32 +
 drivers/net/cpfl/cpfl_rxtx.c            |  900 ++++++++++++++
 drivers/net/cpfl/cpfl_rxtx.h            |   44 +
 drivers/net/cpfl/cpfl_rxtx_vec_common.h |  113 ++
 drivers/net/cpfl/meson.build            |   38 +
 drivers/net/idpf/idpf_ethdev.c          |    3 +-
 drivers/net/meson.build                 |    1 +
 13 files changed, 2834 insertions(+), 1 deletion(-)
 create mode 100644 doc/guides/nics/cpfl.rst
 create mode 100644 doc/guides/nics/features/cpfl.ini
 create mode 100644 drivers/net/cpfl/cpfl_ethdev.c
 create mode 100644 drivers/net/cpfl/cpfl_ethdev.h
 create mode 100644 drivers/net/cpfl/cpfl_logs.h
 create mode 100644 drivers/net/cpfl/cpfl_rxtx.c
 create mode 100644 drivers/net/cpfl/cpfl_rxtx.h
 create mode 100644 drivers/net/cpfl/cpfl_rxtx_vec_common.h
 create mode 100644 drivers/net/cpfl/meson.build

-- 
2.25.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v2 01/21] net/cpfl: support device initialization
  2023-01-13  8:19 ` [PATCH v2 00/21] add support for cpfl PMD in DPDK Mingxia Liu
@ 2023-01-13  8:19   ` Mingxia Liu
  2023-01-13 13:32     ` Zhang, Helin
  2023-01-13  8:19   ` [PATCH v2 02/21] net/cpfl: add Tx queue setup Mingxia Liu
                     ` (23 subsequent siblings)
  24 siblings, 1 reply; 263+ messages in thread
From: Mingxia Liu @ 2023-01-13  8:19 UTC (permalink / raw)
  To: dev, qi.z.zhang, jingjing.wu, beilei.xing; +Cc: wenjun1.wu, Mingxia Liu

Support device init and add the following dev ops:
 - dev_configure
 - dev_close
 - dev_infos_get
 - link_update
 - dev_supported_ptypes_get
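
For reference, a minimal application-side sketch exercising a couple
of these ops through the ethdev API; show_port_basics is a made-up
name:

#include <stdio.h>
#include <rte_ethdev.h>

/* Query device capabilities and the current link state. */
static void
show_port_basics(uint16_t port_id)
{
	struct rte_eth_dev_info dev_info;
	struct rte_eth_link link;

	if (rte_eth_dev_info_get(port_id, &dev_info) != 0) /* dev_infos_get */
		return;
	if (rte_eth_link_get_nowait(port_id, &link) != 0)  /* link_update */
		return;

	printf("driver %s, max rxq %u, link %s\n",
	       dev_info.driver_name, (unsigned int)dev_info.max_rx_queues,
	       link.link_status == RTE_ETH_LINK_UP ? "up" : "down");
}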

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 MAINTAINERS                            |   9 +
 doc/guides/nics/cpfl.rst               |  66 +++
 doc/guides/nics/features/cpfl.ini      |  12 +
 doc/guides/rel_notes/release_23_03.rst |   5 +
 drivers/net/cpfl/cpfl_ethdev.c         | 769 +++++++++++++++++++++++++
 drivers/net/cpfl/cpfl_ethdev.h         |  78 +++
 drivers/net/cpfl/cpfl_logs.h           |  32 +
 drivers/net/cpfl/cpfl_rxtx.c           | 244 ++++++++
 drivers/net/cpfl/cpfl_rxtx.h           |  25 +
 drivers/net/cpfl/meson.build           |  14 +
 drivers/net/meson.build                |   1 +
 11 files changed, 1255 insertions(+)
 create mode 100644 doc/guides/nics/cpfl.rst
 create mode 100644 doc/guides/nics/features/cpfl.ini
 create mode 100644 drivers/net/cpfl/cpfl_ethdev.c
 create mode 100644 drivers/net/cpfl/cpfl_ethdev.h
 create mode 100644 drivers/net/cpfl/cpfl_logs.h
 create mode 100644 drivers/net/cpfl/cpfl_rxtx.c
 create mode 100644 drivers/net/cpfl/cpfl_rxtx.h
 create mode 100644 drivers/net/cpfl/meson.build

diff --git a/MAINTAINERS b/MAINTAINERS
index 22ef2ea4b9..970acc5751 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -780,6 +780,15 @@ F: drivers/common/idpf/
 F: doc/guides/nics/idpf.rst
 F: doc/guides/nics/features/idpf.ini
 
+Intel cpfl
+M: Qi Zhang <qi.z.zhang@intel.com>
+M: Jingjing Wu <jingjing.wu@intel.com>
+M: Beilei Xing <beilei.xing@intel.com>
+T: git://dpdk.org/next/dpdk-next-net-intel
+F: drivers/net/cpfl/
+F: doc/guides/nics/cpfl.rst
+F: doc/guides/nics/features/cpfl.ini
+
 Intel igc
 M: Junfeng Guo <junfeng.guo@intel.com>
 M: Simei Su <simei.su@intel.com>
diff --git a/doc/guides/nics/cpfl.rst b/doc/guides/nics/cpfl.rst
new file mode 100644
index 0000000000..064c69ba7d
--- /dev/null
+++ b/doc/guides/nics/cpfl.rst
@@ -0,0 +1,66 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+   Copyright(c) 2022 Intel Corporation.
+
+.. include:: <isonum.txt>
+
+CPFL Poll Mode Driver
+=====================
+
+The [*EXPERIMENTAL*] cpfl PMD (**librte_net_cpfl**) provides poll mode driver support
+for Intel\ |reg| Infrastructure Processing Unit (Intel\ |reg| IPU) E2100.
+
+
+Linux Prerequisites
+-------------------
+
+Follow the DPDK :doc:`../linux_gsg/index` to setup the basic DPDK environment.
+
+To get better performance on Intel platforms,
+please follow the :doc:`../linux_gsg/nic_perf_intel_platform`.
+
+
+Pre-Installation Configuration
+------------------------------
+
+Runtime Config Options
+~~~~~~~~~~~~~~~~~~~~~~
+
+- ``vport`` (default ``0``)
+
+  The PMD supports creation of multiple vports for one PCI device,
+  each vport corresponds to a single ethdev.
+  The user can specify the vports with specific IDs to be created, for example::
+
+    -a ca:00.0,vport=[0,2,3]
+
+  Then the PMD will create 3 vports (ethdevs) for device ``ca:00.0``.
+
+  If the parameter is not provided, the vport 0 will be created by default.
+
+- ``rx_single`` (default ``0``)
+
+  There are two queue modes supported by Intel\ |reg| IPU Ethernet ES2000 Series,
+  single queue mode and split queue mode for Rx queue.
+  The user can choose the Rx queue mode, for example::
+
+    -a ca:00.0,rx_single=1
+
+  Then the PMD will configure Rx queue with single queue mode.
+  Otherwise, split queue mode is chosen by default.
+
+- ``tx_single`` (default ``0``)
+
+  There are two queue modes supported by Intel\ |reg| IPU Ethernet ES2000 Series,
+  single queue mode and split queue mode for Tx queue.
+  The user can choose the Tx queue mode, for example::
+
+    -a ca:00.0,tx_single=1
+
+  Then the PMD will configure Tx queue with single queue mode.
+  Otherwise, split queue mode is chosen by default.
+
+
+Driver compilation and testing
+------------------------------
+
+Refer to the document :doc:`build_and_test` for details.
\ No newline at end of file
diff --git a/doc/guides/nics/features/cpfl.ini b/doc/guides/nics/features/cpfl.ini
new file mode 100644
index 0000000000..a2d1ca9e15
--- /dev/null
+++ b/doc/guides/nics/features/cpfl.ini
@@ -0,0 +1,12 @@
+;
+; Supported features of the 'cpfl' network poll mode driver.
+;
+; Refer to default.ini for the full list of available PMD features.
+;
+; A feature with "P" indicates it is only supported when the non-vector
+; path is selected.
+;
+[Features]
+Linux                = Y
+x86-32               = Y
+x86-64               = Y
diff --git a/doc/guides/rel_notes/release_23_03.rst b/doc/guides/rel_notes/release_23_03.rst
index b8c5b68d6c..465a25e91e 100644
--- a/doc/guides/rel_notes/release_23_03.rst
+++ b/doc/guides/rel_notes/release_23_03.rst
@@ -55,6 +55,11 @@ New Features
      Also, make sure to start the actual text at the margin.
      =======================================================
 
+* **Added Intel cpfl driver.**
+
+  Added the new ``cpfl`` net driver
+  for Intel\ |reg| Infrastructure Processing Unit (Intel\ |reg| IPU) E2100.
+  See the :doc:`../nics/cpfl` NIC guide for more details on this new driver.
 
 Removed Items
 -------------
diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
new file mode 100644
index 0000000000..2d79ba2098
--- /dev/null
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -0,0 +1,769 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2022 Intel Corporation
+ */
+
+#include <rte_atomic.h>
+#include <rte_eal.h>
+#include <rte_ether.h>
+#include <rte_malloc.h>
+#include <rte_memzone.h>
+#include <rte_dev.h>
+#include <errno.h>
+#include <rte_alarm.h>
+
+#include "cpfl_ethdev.h"
+
+#define CPFL_TX_SINGLE_Q	"tx_single"
+#define CPFL_RX_SINGLE_Q	"rx_single"
+#define CPFL_VPORT		"vport"
+
+rte_spinlock_t cpfl_adapter_lock;
+/* A list for all adapters, one adapter matches one PCI device */
+struct cpfl_adapter_list cpfl_adapter_list;
+bool cpfl_adapter_list_init;
+
+static const char * const cpfl_valid_args[] = {
+	CPFL_TX_SINGLE_Q,
+	CPFL_RX_SINGLE_Q,
+	CPFL_VPORT,
+	NULL
+};
+
+static int
+cpfl_dev_link_update(struct rte_eth_dev *dev,
+		     __rte_unused int wait_to_complete)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct rte_eth_link new_link;
+
+	memset(&new_link, 0, sizeof(new_link));
+
+	switch (vport->link_speed) {
+	case 10:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_10M;
+		break;
+	case 100:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_100M;
+		break;
+	case 1000:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_1G;
+		break;
+	case 10000:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_10G;
+		break;
+	case 20000:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_20G;
+		break;
+	case 25000:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_25G;
+		break;
+	case 40000:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_40G;
+		break;
+	case 50000:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_50G;
+		break;
+	case 100000:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_100G;
+		break;
+	case 200000:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_200G;
+		break;
+	default:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+	}
+
+	new_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+	new_link.link_status = vport->link_up ? RTE_ETH_LINK_UP :
+		RTE_ETH_LINK_DOWN;
+	new_link.link_autoneg = !(dev->data->dev_conf.link_speeds &
+				  RTE_ETH_LINK_SPEED_FIXED);
+
+	return rte_eth_linkstatus_set(dev, &new_link);
+}
+
+static int
+cpfl_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_adapter *adapter = vport->adapter;
+
+	dev_info->max_rx_queues = adapter->caps.max_rx_q;
+	dev_info->max_tx_queues = adapter->caps.max_tx_q;
+	dev_info->min_rx_bufsize = CPFL_MIN_BUF_SIZE;
+	dev_info->max_rx_pktlen = vport->max_mtu + CPFL_ETH_OVERHEAD;
+
+	dev_info->max_mtu = vport->max_mtu;
+	dev_info->min_mtu = RTE_ETHER_MIN_MTU;
+
+	return 0;
+}
+
+static const uint32_t *
+cpfl_dev_supported_ptypes_get(struct rte_eth_dev *dev __rte_unused)
+{
+	static const uint32_t ptypes[] = {
+		RTE_PTYPE_L2_ETHER,
+		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN,
+		RTE_PTYPE_L3_IPV6_EXT_UNKNOWN,
+		RTE_PTYPE_L4_FRAG,
+		RTE_PTYPE_L4_UDP,
+		RTE_PTYPE_L4_TCP,
+		RTE_PTYPE_L4_SCTP,
+		RTE_PTYPE_L4_ICMP,
+		RTE_PTYPE_UNKNOWN
+	};
+
+	return ptypes;
+}
+
+static int
+cpfl_dev_configure(struct rte_eth_dev *dev)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct rte_eth_conf *conf = &dev->data->dev_conf;
+
+	if (conf->link_speeds & RTE_ETH_LINK_SPEED_FIXED) {
+		PMD_INIT_LOG(ERR, "Setting link speed is not supported");
+		return -ENOTSUP;
+	}
+
+	if (conf->txmode.mq_mode != RTE_ETH_MQ_TX_NONE) {
+		PMD_INIT_LOG(ERR, "Multi-queue TX mode %d is not supported",
+			     conf->txmode.mq_mode);
+		return -ENOTSUP;
+	}
+
+	if (conf->lpbk_mode != 0) {
+		PMD_INIT_LOG(ERR, "Loopback operation mode %d is not supported",
+			     conf->lpbk_mode);
+		return -ENOTSUP;
+	}
+
+	if (conf->dcb_capability_en != 0) {
+		PMD_INIT_LOG(ERR, "Priority Flow Control(PFC) if not supported");
+		return -ENOTSUP;
+	}
+
+	if (conf->intr_conf.lsc != 0) {
+		PMD_INIT_LOG(ERR, "LSC interrupt is not supported");
+		return -ENOTSUP;
+	}
+
+	if (conf->intr_conf.rxq != 0) {
+		PMD_INIT_LOG(ERR, "RXQ interrupt is not supported");
+		return -ENOTSUP;
+	}
+
+	if (conf->intr_conf.rmv != 0) {
+		PMD_INIT_LOG(ERR, "RMV interrupt is not supported");
+		return -ENOTSUP;
+	}
+
+	return 0;
+}
+
+static int
+cpfl_dev_close(struct rte_eth_dev *dev)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct cpfl_adapter_ext *adapter = CPFL_ADAPTER_TO_EXT(vport->adapter);
+
+	idpf_vport_deinit(vport);
+
+	adapter->cur_vports &= ~RTE_BIT32(vport->devarg_id);
+	adapter->cur_vport_nb--;
+	dev->data->dev_private = NULL;
+	adapter->vports[vport->sw_idx] = NULL;
+	rte_free(vport);
+
+	return 0;
+}
+
+static int
+insert_value(struct cpfl_devargs *devargs, uint16_t id)
+{
+	uint16_t i;
+
+	/* ignore duplicate */
+	for (i = 0; i < devargs->req_vport_nb; i++) {
+		if (devargs->req_vports[i] == id)
+			return 0;
+	}
+
+	if (devargs->req_vport_nb >= RTE_DIM(devargs->req_vports)) {
+		PMD_INIT_LOG(ERR, "Total vport number can't be > %d",
+			     CPFL_MAX_VPORT_NUM);
+		return -EINVAL;
+	}
+
+	devargs->req_vports[devargs->req_vport_nb] = id;
+	devargs->req_vport_nb++;
+
+	return 0;
+}
+
+static const char *
+parse_range(const char *value, struct cpfl_devargs *devargs)
+{
+	uint16_t lo, hi, i;
+	int n = 0;
+	int result;
+	const char *pos = value;
+
+	result = sscanf(value, "%hu%n-%hu%n", &lo, &n, &hi, &n);
+	if (result == 1) {
+		if (lo >= CPFL_MAX_VPORT_NUM)
+			return NULL;
+		if (insert_value(devargs, lo) != 0)
+			return NULL;
+	} else if (result == 2) {
+		if (lo > hi || hi >= CPFL_MAX_VPORT_NUM)
+			return NULL;
+		for (i = lo; i <= hi; i++) {
+			if (insert_value(devargs, i) != 0)
+				return NULL;
+		}
+	} else {
+		return NULL;
+	}
+
+	return pos + n;
+}
+
+static int
+parse_vport(const char *key, const char *value, void *args)
+{
+	struct cpfl_devargs *devargs = args;
+	const char *pos = value;
+
+	devargs->req_vport_nb = 0;
+
+	if (*pos == '[')
+		pos++;
+
+	while (1) {
+		pos = parse_range(pos, devargs);
+		if (pos == NULL) {
+			PMD_INIT_LOG(ERR, "invalid value:\"%s\" for key:\"%s\", ",
+				     value, key);
+			return -EINVAL;
+		}
+		if (*pos != ',')
+			break;
+		pos++;
+	}
+
+	if (*value == '[' && *pos != ']') {
+		PMD_INIT_LOG(ERR, "invalid value:\"%s\" for key:\"%s\", ",
+			     value, key);
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static int
+parse_bool(const char *key, const char *value, void *args)
+{
+	int *i = args;
+	char *end;
+	int num;
+
+	errno = 0;
+
+	num = strtoul(value, &end, 10);
+
+	if (errno == ERANGE || (num != 0 && num != 1)) {
+		PMD_INIT_LOG(ERR, "invalid value:\"%s\" for key:\"%s\", value must be 0 or 1",
+			value, key);
+		return -EINVAL;
+	}
+
+	*i = num;
+	return 0;
+}
+
+static int
+cpfl_parse_devargs(struct rte_pci_device *pci_dev, struct cpfl_adapter_ext *adapter,
+		   struct cpfl_devargs *cpfl_args)
+{
+	struct rte_devargs *devargs = pci_dev->device.devargs;
+	struct rte_kvargs *kvlist;
+	int i, ret;
+
+	cpfl_args->req_vport_nb = 0;
+
+	if (devargs == NULL)
+		return 0;
+
+	kvlist = rte_kvargs_parse(devargs->args, cpfl_valid_args);
+	if (kvlist == NULL) {
+		PMD_INIT_LOG(ERR, "invalid kvargs key");
+		return -EINVAL;
+	}
+
+	/* check parsed devargs */
+	if (adapter->cur_vport_nb + cpfl_args->req_vport_nb >
+	    CPFL_MAX_VPORT_NUM) {
+		PMD_INIT_LOG(ERR, "Total vport number can't be > %d",
+			     CPFL_MAX_VPORT_NUM);
+		ret = -EINVAL;
+		goto bail;
+	}
+
+	for (i = 0; i < cpfl_args->req_vport_nb; i++) {
+		if (adapter->cur_vports & RTE_BIT32(cpfl_args->req_vports[i])) {
+			PMD_INIT_LOG(ERR, "Vport %d has been created",
+				     cpfl_args->req_vports[i]);
+			ret = -EINVAL;
+			goto bail;
+		}
+	}
+
+	ret = rte_kvargs_process(kvlist, CPFL_VPORT, &parse_vport,
+				 cpfl_args);
+	if (ret != 0)
+		goto bail;
+
+	ret = rte_kvargs_process(kvlist, CPFL_TX_SINGLE_Q, &parse_bool,
+				 &adapter->base.txq_model);
+	if (ret != 0)
+		goto bail;
+
+	ret = rte_kvargs_process(kvlist, CPFL_RX_SINGLE_Q, &parse_bool,
+				 &adapter->base.rxq_model);
+	if (ret != 0)
+		goto bail;
+
+bail:
+	rte_kvargs_free(kvlist);
+	return ret;
+}
+
+static struct idpf_vport *
+cpfl_find_vport(struct cpfl_adapter_ext *adapter, uint32_t vport_id)
+{
+	struct idpf_vport *vport = NULL;
+	int i;
+
+	/* walk the created vports and return the one matching vport_id */
+	for (i = 0; i < adapter->cur_vport_nb; i++) {
+		vport = adapter->vports[i];
+		if (vport->vport_id == vport_id)
+			return vport;
+	}
+
+	/* no vport matched */
+	return NULL;
+}
+
+static void
+cpfl_handle_event_msg(struct idpf_vport *vport, uint8_t *msg, uint16_t msglen)
+{
+	struct virtchnl2_event *vc_event = (struct virtchnl2_event *)msg;
+	struct rte_eth_dev *dev = (struct rte_eth_dev *)vport->dev;
+
+	if (msglen < sizeof(struct virtchnl2_event)) {
+		PMD_DRV_LOG(ERR, "Error event");
+		return;
+	}
+
+	switch (vc_event->event) {
+	case VIRTCHNL2_EVENT_LINK_CHANGE:
+		PMD_DRV_LOG(DEBUG, "VIRTCHNL2_EVENT_LINK_CHANGE");
+		vport->link_up = vc_event->link_status;
+		vport->link_speed = vc_event->link_speed;
+		cpfl_dev_link_update(dev, 0);
+		break;
+	default:
+		PMD_DRV_LOG(ERR, " unknown event received %u", vc_event->event);
+		break;
+	}
+}
+
+static void
+cpfl_handle_virtchnl_msg(struct cpfl_adapter_ext *adapter_ex)
+{
+	struct idpf_adapter *adapter = &adapter_ex->base;
+	struct idpf_dma_mem *dma_mem = NULL;
+	struct idpf_hw *hw = &adapter->hw;
+	struct virtchnl2_event *vc_event;
+	struct idpf_ctlq_msg ctlq_msg;
+	enum idpf_mbx_opc mbx_op;
+	struct idpf_vport *vport;
+	enum virtchnl_ops vc_op;
+	uint16_t pending = 1;
+	int ret;
+
+	while (pending) {
+		ret = idpf_ctlq_recv(hw->arq, &pending, &ctlq_msg);
+		if (ret) {
+			PMD_DRV_LOG(INFO, "Failed to read msg from virtual channel, ret: %d", ret);
+			return;
+		}
+
+		rte_memcpy(adapter->mbx_resp, ctlq_msg.ctx.indirect.payload->va,
+			   IDPF_DFLT_MBX_BUF_SIZE);
+
+		mbx_op = rte_le_to_cpu_16(ctlq_msg.opcode);
+		vc_op = rte_le_to_cpu_32(ctlq_msg.cookie.mbx.chnl_opcode);
+		adapter->cmd_retval = rte_le_to_cpu_32(ctlq_msg.cookie.mbx.chnl_retval);
+
+		switch (mbx_op) {
+		case idpf_mbq_opc_send_msg_to_peer_pf:
+			if (vc_op == VIRTCHNL2_OP_EVENT) {
+				if (ctlq_msg.data_len < sizeof(struct virtchnl2_event)) {
+					PMD_DRV_LOG(ERR, "Error event");
+					return;
+				}
+				vc_event = (struct virtchnl2_event *)adapter->mbx_resp;
+				vport = cpfl_find_vport(adapter_ex, vc_event->vport_id);
+				if (!vport) {
+					PMD_DRV_LOG(ERR, "Can't find vport.");
+					return;
+				}
+				cpfl_handle_event_msg(vport, adapter->mbx_resp,
+						      ctlq_msg.data_len);
+			} else {
+				if (vc_op == adapter->pend_cmd)
+					notify_cmd(adapter, adapter->cmd_retval);
+				else
+					PMD_DRV_LOG(ERR, "command mismatch, expect %u, get %u",
+						    adapter->pend_cmd, vc_op);
+
+				PMD_DRV_LOG(DEBUG, " Virtual channel response is received,"
+					    "opcode = %d", vc_op);
+			}
+			goto post_buf;
+		default:
+			PMD_DRV_LOG(DEBUG, "Request %u is not supported yet", mbx_op);
+		}
+	}
+
+post_buf:
+	if (ctlq_msg.data_len)
+		dma_mem = ctlq_msg.ctx.indirect.payload;
+	else
+		pending = 0;
+
+	ret = idpf_ctlq_post_rx_buffs(hw, hw->arq, &pending, &dma_mem);
+	if (ret && dma_mem)
+		idpf_free_dma_mem(hw, dma_mem);
+}
+
+static void
+cpfl_dev_alarm_handler(void *param)
+{
+	struct cpfl_adapter_ext *adapter = param;
+
+	cpfl_handle_virtchnl_msg(adapter);
+
+	rte_eal_alarm_set(CPFL_ALARM_INTERVAL, cpfl_dev_alarm_handler, adapter);
+}
+
+static int
+cpfl_adapter_ext_init(struct rte_pci_device *pci_dev, struct cpfl_adapter_ext *adapter)
+{
+	struct idpf_adapter *base = &adapter->base;
+	struct idpf_hw *hw = &base->hw;
+	int ret = 0;
+
+	hw->hw_addr = (void *)pci_dev->mem_resource[0].addr;
+	hw->hw_addr_len = pci_dev->mem_resource[0].len;
+	hw->back = base;
+	hw->vendor_id = pci_dev->id.vendor_id;
+	hw->device_id = pci_dev->id.device_id;
+	hw->subsystem_vendor_id = pci_dev->id.subsystem_vendor_id;
+
+	strncpy(adapter->name, pci_dev->device.name, PCI_PRI_STR_SIZE);
+
+	ret = idpf_adapter_init(base);
+	if (ret != 0) {
+		PMD_INIT_LOG(ERR, "Failed to init adapter");
+		goto err_adapter_init;
+	}
+
+	rte_eal_alarm_set(CPFL_ALARM_INTERVAL, cpfl_dev_alarm_handler, adapter);
+
+	adapter->max_vport_nb = adapter->base.caps.max_vports;
+
+	adapter->vports = rte_zmalloc("vports",
+				      adapter->max_vport_nb *
+				      sizeof(*adapter->vports),
+				      0);
+	if (adapter->vports == NULL) {
+		PMD_INIT_LOG(ERR, "Failed to allocate vports memory");
+		ret = -ENOMEM;
+		goto err_get_ptype;
+	}
+
+	adapter->cur_vports = 0;
+	adapter->cur_vport_nb = 0;
+
+	adapter->used_vecs_num = 0;
+
+	return ret;
+
+err_get_ptype:
+	idpf_adapter_deinit(base);
+err_adapter_init:
+	return ret;
+}
+
+static const struct eth_dev_ops cpfl_eth_dev_ops = {
+	.dev_configure			= cpfl_dev_configure,
+	.dev_close			= cpfl_dev_close,
+	.dev_infos_get			= cpfl_dev_info_get,
+	.link_update			= cpfl_dev_link_update,
+	.dev_supported_ptypes_get	= cpfl_dev_supported_ptypes_get,
+};
+
+static uint16_t
+cpfl_vport_idx_alloc(struct cpfl_adapter_ext *ad)
+{
+	uint16_t vport_idx;
+	uint16_t i;
+
+	for (i = 0; i < ad->max_vport_nb; i++) {
+		if (ad->vports[i] == NULL)
+			break;
+	}
+
+	if (i == ad->max_vport_nb)
+		vport_idx = CPFL_INVALID_VPORT_IDX;
+	else
+		vport_idx = i;
+
+	return vport_idx;
+}
+
+static int
+cpfl_dev_vport_init(struct rte_eth_dev *dev, void *init_params)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct cpfl_vport_param *param = init_params;
+	struct cpfl_adapter_ext *adapter = param->adapter;
+	/* for sending create vport virtchnl msg prepare */
+	struct virtchnl2_create_vport create_vport_info;
+	int ret = 0;
+
+	dev->dev_ops = &cpfl_eth_dev_ops;
+	vport->adapter = &adapter->base;
+	vport->sw_idx = param->idx;
+	vport->devarg_id = param->devarg_id;
+	vport->dev = dev;
+
+	memset(&create_vport_info, 0, sizeof(create_vport_info));
+	ret = idpf_create_vport_info_init(vport, &create_vport_info);
+	if (ret != 0) {
+		PMD_INIT_LOG(ERR, "Failed to init vport req_info.");
+		goto err;
+	}
+
+	ret = idpf_vport_init(vport, &create_vport_info, dev->data);
+	if (ret != 0) {
+		PMD_INIT_LOG(ERR, "Failed to init vports.");
+		goto err;
+	}
+
+	adapter->vports[param->idx] = vport;
+	adapter->cur_vports |= RTE_BIT32(param->devarg_id);
+	adapter->cur_vport_nb++;
+
+	dev->data->mac_addrs = rte_zmalloc(NULL, RTE_ETHER_ADDR_LEN, 0);
+	if (dev->data->mac_addrs == NULL) {
+		PMD_INIT_LOG(ERR, "Cannot allocate mac_addr memory.");
+		ret = -ENOMEM;
+		goto err_mac_addrs;
+	}
+
+	rte_ether_addr_copy((struct rte_ether_addr *)vport->default_mac_addr,
+			    &dev->data->mac_addrs[0]);
+
+	return 0;
+
+err_mac_addrs:
+	adapter->vports[param->idx] = NULL;  /* reset */
+	idpf_vport_deinit(vport);
+err:
+	return ret;
+}
+
+static const struct rte_pci_id pci_id_cpfl_map[] = {
+	{ RTE_PCI_DEVICE(IDPF_INTEL_VENDOR_ID, IDPF_DEV_ID_CPF) },
+	{ .vendor_id = 0, /* sentinel */ },
+};
+
+static struct cpfl_adapter_ext *
+cpfl_find_adapter_ext(struct rte_pci_device *pci_dev)
+{
+	struct cpfl_adapter_ext *adapter;
+	int found = 0;
+
+	if (pci_dev == NULL)
+		return NULL;
+
+	rte_spinlock_lock(&cpfl_adapter_lock);
+	TAILQ_FOREACH(adapter, &cpfl_adapter_list, next) {
+		if (strncmp(adapter->name, pci_dev->device.name, PCI_PRI_STR_SIZE) == 0) {
+			found = 1;
+			break;
+		}
+	}
+	rte_spinlock_unlock(&cpfl_adapter_lock);
+
+	if (found == 0)
+		return NULL;
+
+	return adapter;
+}
+
+static void
+cpfl_adapter_ext_deinit(struct cpfl_adapter_ext *adapter)
+{
+	rte_eal_alarm_cancel(cpfl_dev_alarm_handler, adapter);
+	idpf_adapter_deinit(&adapter->base);
+
+	rte_free(adapter->vports);
+	adapter->vports = NULL;
+}
+
+static int
+cpfl_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
+	       struct rte_pci_device *pci_dev)
+{
+	struct cpfl_vport_param vport_param;
+	struct cpfl_adapter_ext *adapter;
+	struct cpfl_devargs devargs;
+	char name[RTE_ETH_NAME_MAX_LEN];
+	int i, retval;
+	bool first_probe = false;
+
+	if (!cpfl_adapter_list_init) {
+		rte_spinlock_init(&cpfl_adapter_lock);
+		TAILQ_INIT(&cpfl_adapter_list);
+		cpfl_adapter_list_init = true;
+	}
+
+	adapter = cpfl_find_adapter_ext(pci_dev);
+	if (adapter == NULL) {
+		first_probe = true;
+		adapter = rte_zmalloc("cpfl_adapter_ext",
+						sizeof(struct cpfl_adapter_ext), 0);
+		if (adapter == NULL) {
+			PMD_INIT_LOG(ERR, "Failed to allocate adapter.");
+			return -ENOMEM;
+		}
+
+		retval = cpfl_adapter_ext_init(pci_dev, adapter);
+		if (retval != 0) {
+			PMD_INIT_LOG(ERR, "Failed to init adapter.");
+			return retval;
+		}
+
+		rte_spinlock_lock(&cpfl_adapter_lock);
+		TAILQ_INSERT_TAIL(&cpfl_adapter_list, adapter, next);
+		rte_spinlock_unlock(&cpfl_adapter_lock);
+	}
+
+	retval = cpfl_parse_devargs(pci_dev, adapter, &devargs);
+	if (retval != 0) {
+		PMD_INIT_LOG(ERR, "Failed to parse private devargs");
+		goto err;
+	}
+
+	if (devargs.req_vport_nb == 0) {
+		/* If no vport devarg, create vport 0 by default. */
+		vport_param.adapter = adapter;
+		vport_param.devarg_id = 0;
+		vport_param.idx = cpfl_vport_idx_alloc(adapter);
+		if (vport_param.idx == CPFL_INVALID_VPORT_IDX) {
+			PMD_INIT_LOG(ERR, "No space for vport %u", vport_param.devarg_id);
+			return 0;
+		}
+		snprintf(name, sizeof(name), "cpfl_%s_vport_0",
+			 pci_dev->device.name);
+		retval = rte_eth_dev_create(&pci_dev->device, name,
+					    sizeof(struct idpf_vport),
+					    NULL, NULL, cpfl_dev_vport_init,
+					    &vport_param);
+		if (retval != 0)
+			PMD_DRV_LOG(ERR, "Failed to create default vport 0");
+	} else {
+		for (i = 0; i < devargs.req_vport_nb; i++) {
+			vport_param.adapter = adapter;
+			vport_param.devarg_id = devargs.req_vports[i];
+			vport_param.idx = cpfl_vport_idx_alloc(adapter);
+			if (vport_param.idx == CPFL_INVALID_VPORT_IDX) {
+				PMD_INIT_LOG(ERR, "No space for vport %u", vport_param.devarg_id);
+				break;
+			}
+			snprintf(name, sizeof(name), "cpfl_%s_vport_%d",
+				 pci_dev->device.name,
+				 devargs.req_vports[i]);
+			retval = rte_eth_dev_create(&pci_dev->device, name,
+						    sizeof(struct idpf_vport),
+						    NULL, NULL, cpfl_dev_vport_init,
+						    &vport_param);
+			if (retval != 0)
+				PMD_DRV_LOG(ERR, "Failed to create vport %d",
+					    vport_param.devarg_id);
+		}
+	}
+
+	return 0;
+
+err:
+	if (first_probe) {
+		rte_spinlock_lock(&cpfl_adapter_lock);
+		TAILQ_REMOVE(&cpfl_adapter_list, adapter, next);
+		rte_spinlock_unlock(&cpfl_adapter_lock);
+		cpfl_adapter_ext_deinit(adapter);
+		rte_free(adapter);
+	}
+	return retval;
+}
+
+static int
+cpfl_pci_remove(struct rte_pci_device *pci_dev)
+{
+	struct cpfl_adapter_ext *adapter = cpfl_find_adapter_ext(pci_dev);
+	uint16_t port_id;
+
+	/* Ethdevs created for this device can be found via RTE_ETH_FOREACH_DEV_OF */
+	RTE_ETH_FOREACH_DEV_OF(port_id, &pci_dev->device) {
+		rte_eth_dev_close(port_id);
+	}
+
+	rte_spinlock_lock(&cpfl_adapter_lock);
+	TAILQ_REMOVE(&cpfl_adapter_list, adapter, next);
+	rte_spinlock_unlock(&cpfl_adapter_lock);
+	cpfl_adapter_ext_deinit(adapter);
+	rte_free(adapter);
+
+	return 0;
+}
+
+static struct rte_pci_driver rte_cpfl_pmd = {
+	.id_table	= pci_id_cpfl_map,
+	.drv_flags	= RTE_PCI_DRV_NEED_MAPPING,
+	.probe		= cpfl_pci_probe,
+	.remove		= cpfl_pci_remove,
+};
+
+/**
+ * Driver initialization routine.
+ * Invoked once at EAL init time.
+ * Register itself as the [Poll Mode] Driver of PCI devices.
+ */
+RTE_PMD_REGISTER_PCI(net_cpfl, rte_cpfl_pmd);
+RTE_PMD_REGISTER_PCI_TABLE(net_cpfl, pci_id_cpfl_map);
+RTE_PMD_REGISTER_KMOD_DEP(net_cpfl, "* igb_uio | vfio-pci");
+RTE_PMD_REGISTER_PARAM_STRING(net_cpfl,
+			      CPFL_TX_SINGLE_Q "=<0|1> "
+			      CPFL_RX_SINGLE_Q "=<0|1> "
+			      CPFL_VPORT "=[vport_set0,[vport_set1],...]");
+
+RTE_LOG_REGISTER_SUFFIX(cpfl_logtype_init, init, NOTICE);
+RTE_LOG_REGISTER_SUFFIX(cpfl_logtype_driver, driver, NOTICE);
diff --git a/drivers/net/cpfl/cpfl_ethdev.h b/drivers/net/cpfl/cpfl_ethdev.h
new file mode 100644
index 0000000000..83459b9c91
--- /dev/null
+++ b/drivers/net/cpfl/cpfl_ethdev.h
@@ -0,0 +1,78 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2022 Intel Corporation
+ */
+
+#ifndef _CPFL_ETHDEV_H_
+#define _CPFL_ETHDEV_H_
+
+#include <stdint.h>
+#include <rte_malloc.h>
+#include <rte_spinlock.h>
+#include <rte_ethdev.h>
+#include <rte_kvargs.h>
+#include <ethdev_driver.h>
+#include <ethdev_pci.h>
+
+#include "cpfl_logs.h"
+
+#include <idpf_common_device.h>
+#include <idpf_common_virtchnl.h>
+#include <base/idpf_prototype.h>
+#include <base/virtchnl2.h>
+
+#define CPFL_MAX_VPORT_NUM	8
+
+#define CPFL_INVALID_VPORT_IDX	0xffff
+
+#define CPFL_MIN_BUF_SIZE	1024
+#define CPFL_MAX_FRAME_SIZE	9728
+#define CPFL_DEFAULT_MTU	RTE_ETHER_MTU
+
+#define CPFL_NUM_MACADDR_MAX	64
+
+#define CPFL_VLAN_TAG_SIZE	4
+#define CPFL_ETH_OVERHEAD \
+	(RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN + CPFL_VLAN_TAG_SIZE * 2)
+
+#define CPFL_ADAPTER_NAME_LEN	(PCI_PRI_STR_SIZE + 1)
+
+#define CPFL_ALARM_INTERVAL	50000 /* us */
+
+/* Device IDs */
+#define IDPF_DEV_ID_CPF			0x1453
+
+struct cpfl_vport_param {
+	struct cpfl_adapter_ext *adapter;
+	uint16_t devarg_id; /* arg id from user */
+	uint16_t idx;       /* index in adapter->vports[]*/
+};
+
+/* Struct used when parse driver specific devargs */
+struct cpfl_devargs {
+	uint16_t req_vports[CPFL_MAX_VPORT_NUM];
+	uint16_t req_vport_nb;
+};
+
+struct cpfl_adapter_ext {
+	TAILQ_ENTRY(cpfl_adapter_ext) next;
+	struct idpf_adapter base;
+
+	char name[CPFL_ADAPTER_NAME_LEN];
+
+	struct idpf_vport **vports;
+	uint16_t max_vport_nb;
+
+	uint16_t cur_vports; /* bit mask of created vport */
+	uint16_t cur_vport_nb;
+
+	uint16_t used_vecs_num;
+};
+
+TAILQ_HEAD(cpfl_adapter_list, cpfl_adapter_ext);
+
+#define CPFL_DEV_TO_PCI(eth_dev)		\
+	RTE_DEV_TO_PCI((eth_dev)->device)
+#define CPFL_ADAPTER_TO_EXT(p)					\
+	container_of((p), struct cpfl_adapter_ext, base)
+
+#endif /* _CPFL_ETHDEV_H_ */
diff --git a/drivers/net/cpfl/cpfl_logs.h b/drivers/net/cpfl/cpfl_logs.h
new file mode 100644
index 0000000000..451bdfbd1d
--- /dev/null
+++ b/drivers/net/cpfl/cpfl_logs.h
@@ -0,0 +1,32 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2022 Intel Corporation
+ */
+
+#ifndef _CPFL_LOGS_H_
+#define _CPFL_LOGS_H_
+
+#include <rte_log.h>
+
+extern int cpfl_logtype_init;
+extern int cpfl_logtype_driver;
+
+#define PMD_INIT_LOG(level, ...) \
+	rte_log(RTE_LOG_ ## level, \
+		cpfl_logtype_init, \
+		RTE_FMT("%s(): " \
+			RTE_FMT_HEAD(__VA_ARGS__,) "\n", \
+			__func__, \
+			RTE_FMT_TAIL(__VA_ARGS__,)))
+
+#define PMD_DRV_LOG_RAW(level, ...) \
+	rte_log(RTE_LOG_ ## level, \
+		cpfl_logtype_driver, \
+		RTE_FMT("%s(): " \
+			RTE_FMT_HEAD(__VA_ARGS__,) "\n", \
+			__func__, \
+			RTE_FMT_TAIL(__VA_ARGS__,)))
+
+#define PMD_DRV_LOG(level, fmt, args...) \
+	PMD_DRV_LOG_RAW(level, fmt, ## args)
+
+#endif /* _CPFL_LOGS_H_ */
diff --git a/drivers/net/cpfl/cpfl_rxtx.c b/drivers/net/cpfl/cpfl_rxtx.c
new file mode 100644
index 0000000000..ea4a2002bf
--- /dev/null
+++ b/drivers/net/cpfl/cpfl_rxtx.c
@@ -0,0 +1,244 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2022 Intel Corporation
+ */
+
+#include <ethdev_driver.h>
+#include <rte_net.h>
+#include <rte_vect.h>
+
+#include "cpfl_ethdev.h"
+#include "cpfl_rxtx.h"
+
+static uint64_t
+cpfl_tx_offload_convert(uint64_t offload)
+{
+	uint64_t ol = 0;
+
+	if ((offload & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM) != 0)
+		ol |= IDPF_TX_OFFLOAD_IPV4_CKSUM;
+	if ((offload & RTE_ETH_TX_OFFLOAD_UDP_CKSUM) != 0)
+		ol |= IDPF_TX_OFFLOAD_UDP_CKSUM;
+	if ((offload & RTE_ETH_TX_OFFLOAD_TCP_CKSUM) != 0)
+		ol |= IDPF_TX_OFFLOAD_TCP_CKSUM;
+	if ((offload & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM) != 0)
+		ol |= IDPF_TX_OFFLOAD_SCTP_CKSUM;
+	if ((offload & RTE_ETH_TX_OFFLOAD_MULTI_SEGS) != 0)
+		ol |= IDPF_TX_OFFLOAD_MULTI_SEGS;
+	if ((offload & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) != 0)
+		ol |= IDPF_TX_OFFLOAD_MBUF_FAST_FREE;
+
+	return ol;
+}
+
+static const struct rte_memzone *
+cpfl_dma_zone_reserve(struct rte_eth_dev *dev, uint16_t queue_idx,
+		      uint16_t len, uint16_t queue_type,
+		      unsigned int socket_id, bool splitq)
+{
+	char ring_name[RTE_MEMZONE_NAMESIZE];
+	const struct rte_memzone *mz;
+	uint32_t ring_size;
+
+	memset(ring_name, 0, RTE_MEMZONE_NAMESIZE);
+	switch (queue_type) {
+	case VIRTCHNL2_QUEUE_TYPE_TX:
+		if (splitq)
+			ring_size = RTE_ALIGN(len * sizeof(struct idpf_flex_tx_sched_desc),
+					      CPFL_DMA_MEM_ALIGN);
+		else
+			ring_size = RTE_ALIGN(len * sizeof(struct idpf_flex_tx_desc),
+					      CPFL_DMA_MEM_ALIGN);
+		rte_memcpy(ring_name, "cpfl Tx ring", sizeof("cpfl Tx ring"));
+		break;
+	case VIRTCHNL2_QUEUE_TYPE_RX:
+		if (splitq)
+			ring_size = RTE_ALIGN(len * sizeof(struct virtchnl2_rx_flex_desc_adv_nic_3),
+					      CPFL_DMA_MEM_ALIGN);
+		else
+			ring_size = RTE_ALIGN(len * sizeof(struct virtchnl2_singleq_rx_buf_desc),
+					      CPFL_DMA_MEM_ALIGN);
+		rte_memcpy(ring_name, "cpfl Rx ring", sizeof("cpfl Rx ring"));
+		break;
+	case VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION:
+		ring_size = RTE_ALIGN(len * sizeof(struct idpf_splitq_tx_compl_desc),
+				      CPFL_DMA_MEM_ALIGN);
+		rte_memcpy(ring_name, "cpfl Tx compl ring", sizeof("cpfl Tx compl ring"));
+		break;
+	case VIRTCHNL2_QUEUE_TYPE_RX_BUFFER:
+		ring_size = RTE_ALIGN(len * sizeof(struct virtchnl2_splitq_rx_buf_desc),
+				      CPFL_DMA_MEM_ALIGN);
+		rte_memcpy(ring_name, "cpfl Rx buf ring", sizeof("cpfl Rx buf ring"));
+		break;
+	default:
+		PMD_INIT_LOG(ERR, "Invalid queue type");
+		return NULL;
+	}
+
+	mz = rte_eth_dma_zone_reserve(dev, ring_name, queue_idx,
+				      ring_size, CPFL_RING_BASE_ALIGN,
+				      socket_id);
+	if (mz == NULL) {
+		PMD_INIT_LOG(ERR, "Failed to reserve DMA memory for ring");
+		return NULL;
+	}
+
+	/* Zero all the descriptors in the ring. */
+	memset(mz->addr, 0, ring_size);
+
+	return mz;
+}
+
+static void
+cpfl_dma_zone_release(const struct rte_memzone *mz)
+{
+	rte_memzone_free(mz);
+}
+
+static int
+cpfl_tx_complq_setup(struct rte_eth_dev *dev, struct idpf_tx_queue *txq,
+		     uint16_t queue_idx, uint16_t nb_desc,
+		     unsigned int socket_id)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	const struct rte_memzone *mz;
+	struct idpf_tx_queue *cq;
+	int ret;
+
+	cq = rte_zmalloc_socket("cpfl splitq cq",
+				sizeof(struct idpf_tx_queue),
+				RTE_CACHE_LINE_SIZE,
+				socket_id);
+	if (cq == NULL) {
+		PMD_INIT_LOG(ERR, "Failed to allocate memory for Tx compl queue");
+		ret = -ENOMEM;
+		goto err_cq_alloc;
+	}
+
+	cq->nb_tx_desc = nb_desc;
+	cq->queue_id = vport->chunks_info.tx_compl_start_qid + queue_idx;
+	cq->port_id = dev->data->port_id;
+	cq->txqs = dev->data->tx_queues;
+	cq->tx_start_qid = vport->chunks_info.tx_start_qid;
+
+	mz = cpfl_dma_zone_reserve(dev, queue_idx, nb_desc,
+				   VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION,
+				   socket_id, true);
+	if (mz == NULL) {
+		ret = -ENOMEM;
+		goto err_mz_reserve;
+	}
+	cq->tx_ring_phys_addr = mz->iova;
+	cq->compl_ring = mz->addr;
+	cq->mz = mz;
+	reset_split_tx_complq(cq);
+
+	txq->complq = cq;
+
+	return 0;
+
+err_mz_reserve:
+	rte_free(cq);
+err_cq_alloc:
+	return ret;
+}
+
+int
+cpfl_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
+		    uint16_t nb_desc, unsigned int socket_id,
+		    const struct rte_eth_txconf *tx_conf)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_adapter *adapter = vport->adapter;
+	uint16_t tx_rs_thresh, tx_free_thresh;
+	struct idpf_hw *hw = &adapter->hw;
+	const struct rte_memzone *mz;
+	struct idpf_tx_queue *txq;
+	uint64_t offloads;
+	uint16_t len;
+	bool is_splitq;
+	int ret;
+
+	offloads = tx_conf->offloads | dev->data->dev_conf.txmode.offloads;
+
+	tx_rs_thresh = (uint16_t)((tx_conf->tx_rs_thresh > 0) ?
+		tx_conf->tx_rs_thresh : CPFL_DEFAULT_TX_RS_THRESH);
+	tx_free_thresh = (uint16_t)((tx_conf->tx_free_thresh > 0) ?
+		tx_conf->tx_free_thresh : CPFL_DEFAULT_TX_FREE_THRESH);
+	if (check_tx_thresh(nb_desc, tx_rs_thresh, tx_free_thresh) != 0)
+		return -EINVAL;
+
+	/* Allocate the TX queue data structure. */
+	txq = rte_zmalloc_socket("cpfl txq",
+				 sizeof(struct idpf_tx_queue),
+				 RTE_CACHE_LINE_SIZE,
+				 socket_id);
+	if (txq == NULL) {
+		PMD_INIT_LOG(ERR, "Failed to allocate memory for tx queue structure");
+		ret = -ENOMEM;
+		goto err_txq_alloc;
+	}
+
+	is_splitq = !!(vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT);
+
+	txq->nb_tx_desc = nb_desc;
+	txq->rs_thresh = tx_rs_thresh;
+	txq->free_thresh = tx_free_thresh;
+	txq->queue_id = vport->chunks_info.tx_start_qid + queue_idx;
+	txq->port_id = dev->data->port_id;
+	txq->offloads = cpfl_tx_offload_convert(offloads);
+	txq->tx_deferred_start = tx_conf->tx_deferred_start;
+
+	if (is_splitq)
+		len = 2 * nb_desc;
+	else
+		len = nb_desc;
+	txq->sw_nb_desc = len;
+
+	/* Allocate TX hardware ring descriptors. */
+	mz = cpfl_dma_zone_reserve(dev, queue_idx, nb_desc, VIRTCHNL2_QUEUE_TYPE_TX,
+				   socket_id, is_splitq);
+	if (mz == NULL) {
+		ret = -ENOMEM;
+		goto err_mz_reserve;
+	}
+	txq->tx_ring_phys_addr = mz->iova;
+	txq->mz = mz;
+
+	txq->sw_ring = rte_zmalloc_socket("cpfl tx sw ring",
+					  sizeof(struct idpf_tx_entry) * len,
+					  RTE_CACHE_LINE_SIZE, socket_id);
+	if (txq->sw_ring == NULL) {
+		PMD_INIT_LOG(ERR, "Failed to allocate memory for SW TX ring");
+		ret = -ENOMEM;
+		goto err_sw_ring_alloc;
+	}
+
+	if (!is_splitq) {
+		txq->tx_ring = mz->addr;
+		reset_single_tx_queue(txq);
+	} else {
+		txq->desc_ring = mz->addr;
+		reset_split_tx_descq(txq);
+
+		/* Setup tx completion queue if split model */
+		ret = cpfl_tx_complq_setup(dev, txq, queue_idx,
+					   2 * nb_desc, socket_id);
+		if (ret != 0)
+			goto err_complq_setup;
+	}
+
+	txq->qtx_tail = hw->hw_addr + (vport->chunks_info.tx_qtail_start +
+			queue_idx * vport->chunks_info.tx_qtail_spacing);
+	txq->q_set = true;
+	dev->data->tx_queues[queue_idx] = txq;
+
+	return 0;
+
+err_complq_setup:
+err_sw_ring_alloc:
+	cpfl_dma_zone_release(mz);
+err_mz_reserve:
+	rte_free(txq);
+err_txq_alloc:
+	return ret;
+}
diff --git a/drivers/net/cpfl/cpfl_rxtx.h b/drivers/net/cpfl/cpfl_rxtx.h
new file mode 100644
index 0000000000..ec42478393
--- /dev/null
+++ b/drivers/net/cpfl/cpfl_rxtx.h
@@ -0,0 +1,25 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2022 Intel Corporation
+ */
+
+#ifndef _CPFL_RXTX_H_
+#define _CPFL_RXTX_H_
+
+#include <idpf_common_rxtx.h>
+#include "cpfl_ethdev.h"
+
+/* Queue length must be a whole multiple of 32 descriptors. */
+#define CPFL_ALIGN_RING_DESC	32
+#define CPFL_MIN_RING_DESC	32
+#define CPFL_MAX_RING_DESC	4096
+#define CPFL_DMA_MEM_ALIGN	4096
+/* Base address of the HW descriptor ring should be 128B aligned. */
+#define CPFL_RING_BASE_ALIGN	128
+
+#define CPFL_DEFAULT_TX_RS_THRESH	32
+#define CPFL_DEFAULT_TX_FREE_THRESH	32
+
+int cpfl_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
+			uint16_t nb_desc, unsigned int socket_id,
+			const struct rte_eth_txconf *tx_conf);
+#endif /* _CPFL_RXTX_H_ */
diff --git a/drivers/net/cpfl/meson.build b/drivers/net/cpfl/meson.build
new file mode 100644
index 0000000000..106cc97e60
--- /dev/null
+++ b/drivers/net/cpfl/meson.build
@@ -0,0 +1,14 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2022 Intel Corporation
+
+if is_windows
+    build = false
+    reason = 'not supported on Windows'
+    subdir_done()
+endif
+
+deps += ['common_idpf']
+
+sources = files(
+        'cpfl_ethdev.c',
+)
\ No newline at end of file
diff --git a/drivers/net/meson.build b/drivers/net/meson.build
index 6470bf3636..a8ca338875 100644
--- a/drivers/net/meson.build
+++ b/drivers/net/meson.build
@@ -13,6 +13,7 @@ drivers = [
         'bnxt',
         'bonding',
         'cnxk',
+        'cpfl',
         'cxgbe',
         'dpaa',
         'dpaa2',
-- 
2.25.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v2 02/21] net/cpfl: add Tx queue setup
  2023-01-13  8:19 ` [PATCH v2 00/21] add support for cpfl PMD in DPDK Mingxia Liu
  2023-01-13  8:19   ` [PATCH v2 01/21] net/cpfl: support device initialization Mingxia Liu
@ 2023-01-13  8:19   ` Mingxia Liu
  2023-01-13  8:19   ` [PATCH v2 03/21] net/cpfl: add Rx " Mingxia Liu
                     ` (22 subsequent siblings)
  24 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-01-13  8:19 UTC (permalink / raw)
  To: dev, qi.z.zhang, jingjing.wu, beilei.xing; +Cc: wenjun1.wu, Mingxia Liu

Add support for tx_queue_setup ops.

In the single queue model, the same descriptor queue is used by SW to
post buffer descriptors to HW and by HW to post completed descriptors
to SW.

In the split queue model, "RX buffer queues" are used to pass
descriptor buffers from SW to HW while Rx queues are used only to
pass the descriptor completions, that is, descriptors that point
to completed buffers, from HW to SW. This is contrary to the single
queue model in which Rx queues are used for both purposes.
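
A usage illustration only (not part of this patch; the port id, queue id
and ring size are assumed values), showing how the new ops is reached
through the generic ethdev API:

    #include <rte_ethdev.h>
    #include <rte_lcore.h>

    /* Hypothetical sketch: set up one Tx queue on an assumed port 0. */
    static int
    setup_one_txq(void)
    {
            struct rte_eth_txconf txconf = {
                    .tx_rs_thresh = 32,   /* mirrors CPFL_DEFAULT_TX_RS_THRESH */
                    .tx_free_thresh = 32, /* mirrors CPFL_DEFAULT_TX_FREE_THRESH */
            };

            /* nb_desc must respect the tx_desc_lim reported below. */
            return rte_eth_tx_queue_setup(0, 0, 512, rte_socket_id(), &txconf);
    }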

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/net/cpfl/cpfl_ethdev.c | 13 +++++++++++++
 drivers/net/cpfl/meson.build   |  1 +
 2 files changed, 14 insertions(+)

diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index 2d79ba2098..f07e2f97a2 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -12,6 +12,7 @@
 #include <rte_alarm.h>
 
 #include "cpfl_ethdev.h"
+#include "cpfl_rxtx.h"
 
 #define CPFL_TX_SINGLE_Q	"tx_single"
 #define CPFL_RX_SINGLE_Q	"rx_single"
@@ -96,6 +97,17 @@ cpfl_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->max_mtu = vport->max_mtu;
 	dev_info->min_mtu = RTE_ETHER_MIN_MTU;
 
+	dev_info->default_txconf = (struct rte_eth_txconf) {
+		.tx_free_thresh = CPFL_DEFAULT_TX_FREE_THRESH,
+		.tx_rs_thresh = CPFL_DEFAULT_TX_RS_THRESH,
+	};
+
+	dev_info->tx_desc_lim = (struct rte_eth_desc_lim) {
+		.nb_max = CPFL_MAX_RING_DESC,
+		.nb_min = CPFL_MIN_RING_DESC,
+		.nb_align = CPFL_ALIGN_RING_DESC,
+	};
+
 	return 0;
 }
 
@@ -514,6 +526,7 @@ cpfl_adapter_ext_init(struct rte_pci_device *pci_dev, struct cpfl_adapter_ext *a
 static const struct eth_dev_ops cpfl_eth_dev_ops = {
 	.dev_configure			= cpfl_dev_configure,
 	.dev_close			= cpfl_dev_close,
+	.tx_queue_setup			= cpfl_tx_queue_setup,
 	.dev_infos_get			= cpfl_dev_info_get,
 	.link_update			= cpfl_dev_link_update,
 	.dev_supported_ptypes_get	= cpfl_dev_supported_ptypes_get,
diff --git a/drivers/net/cpfl/meson.build b/drivers/net/cpfl/meson.build
index 106cc97e60..3ccee15703 100644
--- a/drivers/net/cpfl/meson.build
+++ b/drivers/net/cpfl/meson.build
@@ -11,4 +11,5 @@ deps += ['common_idpf']
 
 sources = files(
         'cpfl_ethdev.c',
+        'cpfl_rxtx.c',
 )
\ No newline at end of file
-- 
2.25.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v2 03/21] net/cpfl: add Rx queue setup
  2023-01-13  8:19 ` [PATCH v2 00/21] add support for cpfl PMD in DPDK Mingxia Liu
  2023-01-13  8:19   ` [PATCH v2 01/21] net/cpfl: support device initialization Mingxia Liu
  2023-01-13  8:19   ` [PATCH v2 02/21] net/cpfl: add Tx queue setup Mingxia Liu
@ 2023-01-13  8:19   ` Mingxia Liu
  2023-01-13  8:19   ` [PATCH v2 04/21] net/cpfl: support device start and stop Mingxia Liu
                     ` (21 subsequent siblings)
  24 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-01-13  8:19 UTC (permalink / raw)
  To: dev, qi.z.zhang, jingjing.wu, beilei.xing; +Cc: wenjun1.wu, Mingxia Liu

Add support for rx_queue_setup ops.
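
For illustration only (pool and ring sizes are assumptions), the ops is
reached through the generic rte_eth_rx_queue_setup() call:

    #include <rte_ethdev.h>
    #include <rte_lcore.h>
    #include <rte_mbuf.h>

    /* Hypothetical sketch: one Rx queue on an assumed port 0. */
    static int
    setup_one_rxq(void)
    {
            struct rte_mempool *mp;

            mp = rte_pktmbuf_pool_create("rx_pool", 4096, 256, 0,
                                         RTE_MBUF_DEFAULT_BUF_SIZE,
                                         rte_socket_id());
            if (mp == NULL)
                    return -1;

            /* A NULL rxconf selects the defaults reported in dev_info. */
            return rte_eth_rx_queue_setup(0, 0, 512, rte_socket_id(), NULL, mp);
    }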

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/net/cpfl/cpfl_ethdev.c |  11 ++
 drivers/net/cpfl/cpfl_rxtx.c   | 232 +++++++++++++++++++++++++++++++++
 drivers/net/cpfl/cpfl_rxtx.h   |   6 +
 3 files changed, 249 insertions(+)

diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index f07e2f97a2..a113ed0de0 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -102,12 +102,22 @@ cpfl_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 		.tx_rs_thresh = CPFL_DEFAULT_TX_RS_THRESH,
 	};
 
+	dev_info->default_rxconf = (struct rte_eth_rxconf) {
+		.rx_free_thresh = CPFL_DEFAULT_RX_FREE_THRESH,
+	};
+
 	dev_info->tx_desc_lim = (struct rte_eth_desc_lim) {
 		.nb_max = CPFL_MAX_RING_DESC,
 		.nb_min = CPFL_MIN_RING_DESC,
 		.nb_align = CPFL_ALIGN_RING_DESC,
 	};
 
+	dev_info->rx_desc_lim = (struct rte_eth_desc_lim) {
+		.nb_max = CPFL_MAX_RING_DESC,
+		.nb_min = CPFL_MIN_RING_DESC,
+		.nb_align = CPFL_ALIGN_RING_DESC,
+	};
+
 	return 0;
 }
 
@@ -526,6 +536,7 @@ cpfl_adapter_ext_init(struct rte_pci_device *pci_dev, struct cpfl_adapter_ext *a
 static const struct eth_dev_ops cpfl_eth_dev_ops = {
 	.dev_configure			= cpfl_dev_configure,
 	.dev_close			= cpfl_dev_close,
+	.rx_queue_setup			= cpfl_rx_queue_setup,
 	.tx_queue_setup			= cpfl_tx_queue_setup,
 	.dev_infos_get			= cpfl_dev_info_get,
 	.link_update			= cpfl_dev_link_update,
diff --git a/drivers/net/cpfl/cpfl_rxtx.c b/drivers/net/cpfl/cpfl_rxtx.c
index ea4a2002bf..695c79e1db 100644
--- a/drivers/net/cpfl/cpfl_rxtx.c
+++ b/drivers/net/cpfl/cpfl_rxtx.c
@@ -9,6 +9,25 @@
 #include "cpfl_ethdev.h"
 #include "cpfl_rxtx.h"
 
+static uint64_t
+cpfl_rx_offload_convert(uint64_t offload)
+{
+	uint64_t ol = 0;
+
+	if ((offload & RTE_ETH_RX_OFFLOAD_IPV4_CKSUM) != 0)
+		ol |= IDPF_RX_OFFLOAD_IPV4_CKSUM;
+	if ((offload & RTE_ETH_RX_OFFLOAD_UDP_CKSUM) != 0)
+		ol |= IDPF_RX_OFFLOAD_UDP_CKSUM;
+	if ((offload & RTE_ETH_RX_OFFLOAD_TCP_CKSUM) != 0)
+		ol |= IDPF_RX_OFFLOAD_TCP_CKSUM;
+	if ((offload & RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM) != 0)
+		ol |= IDPF_RX_OFFLOAD_OUTER_IPV4_CKSUM;
+	if ((offload & RTE_ETH_RX_OFFLOAD_TIMESTAMP) != 0)
+		ol |= IDPF_RX_OFFLOAD_TIMESTAMP;
+
+	return ol;
+}
+
 static uint64_t
 cpfl_tx_offload_convert(uint64_t offload)
 {
@@ -94,6 +113,219 @@ cpfl_dma_zone_release(const struct rte_memzone *mz)
 	rte_memzone_free(mz);
 }
 
+static int
+cpfl_rx_split_bufq_setup(struct rte_eth_dev *dev, struct idpf_rx_queue *rxq,
+			 uint16_t queue_idx, uint16_t rx_free_thresh,
+			 uint16_t nb_desc, unsigned int socket_id,
+			 struct rte_mempool *mp, uint8_t bufq_id)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_adapter *adapter = vport->adapter;
+	struct idpf_hw *hw = &adapter->hw;
+	const struct rte_memzone *mz;
+	struct idpf_rx_queue *bufq;
+	uint16_t len;
+	int ret;
+
+	bufq = rte_zmalloc_socket("cpfl bufq",
+				   sizeof(struct idpf_rx_queue),
+				   RTE_CACHE_LINE_SIZE,
+				   socket_id);
+	if (bufq == NULL) {
+		PMD_INIT_LOG(ERR, "Failed to allocate memory for rx buffer queue.");
+		ret = -ENOMEM;
+		goto err_bufq1_alloc;
+	}
+
+	bufq->mp = mp;
+	bufq->nb_rx_desc = nb_desc;
+	bufq->rx_free_thresh = rx_free_thresh;
+	bufq->queue_id = vport->chunks_info.rx_buf_start_qid + queue_idx;
+	bufq->port_id = dev->data->port_id;
+	bufq->rx_hdr_len = 0;
+	bufq->adapter = adapter;
+
+	len = rte_pktmbuf_data_room_size(bufq->mp) - RTE_PKTMBUF_HEADROOM;
+	bufq->rx_buf_len = len;
+
+	/* Allocate a little more to support bulk allocation. */
+	len = nb_desc + IDPF_RX_MAX_BURST;
+
+	mz = cpfl_dma_zone_reserve(dev, queue_idx, len,
+				   VIRTCHNL2_QUEUE_TYPE_RX_BUFFER,
+				   socket_id, true);
+	if (mz == NULL) {
+		ret = -ENOMEM;
+		goto err_mz_reserve;
+	}
+
+	bufq->rx_ring_phys_addr = mz->iova;
+	bufq->rx_ring = mz->addr;
+	bufq->mz = mz;
+
+	bufq->sw_ring =
+		rte_zmalloc_socket("cpfl rx bufq sw ring",
+				   sizeof(struct rte_mbuf *) * len,
+				   RTE_CACHE_LINE_SIZE,
+				   socket_id);
+	if (bufq->sw_ring == NULL) {
+		PMD_INIT_LOG(ERR, "Failed to allocate memory for SW ring");
+		ret = -ENOMEM;
+		goto err_sw_ring_alloc;
+	}
+
+	reset_split_rx_bufq(bufq);
+	bufq->qrx_tail = hw->hw_addr + (vport->chunks_info.rx_buf_qtail_start +
+			 queue_idx * vport->chunks_info.rx_buf_qtail_spacing);
+	bufq->q_set = true;
+
+	if (bufq_id == 1) {
+		rxq->bufq1 = bufq;
+	} else if (bufq_id == 2) {
+		rxq->bufq2 = bufq;
+	} else {
+		PMD_INIT_LOG(ERR, "Invalid buffer queue index.");
+		ret = -EINVAL;
+		goto err_bufq_id;
+	}
+
+	return 0;
+
+err_bufq_id:
+	rte_free(bufq->sw_ring);
+err_sw_ring_alloc:
+	cpfl_dma_zone_release(mz);
+err_mz_reserve:
+	rte_free(bufq);
+err_bufq1_alloc:
+	return ret;
+}
+
+static void
+cpfl_rx_split_bufq_release(struct idpf_rx_queue *bufq)
+{
+	rte_free(bufq->sw_ring);
+	cpfl_dma_zone_release(bufq->mz);
+	rte_free(bufq);
+}
+
+int
+cpfl_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
+		    uint16_t nb_desc, unsigned int socket_id,
+		    const struct rte_eth_rxconf *rx_conf,
+		    struct rte_mempool *mp)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_adapter *adapter = vport->adapter;
+	struct idpf_hw *hw = &adapter->hw;
+	const struct rte_memzone *mz;
+	struct idpf_rx_queue *rxq;
+	uint16_t rx_free_thresh;
+	uint64_t offloads;
+	bool is_splitq;
+	uint16_t len;
+	int ret;
+
+	offloads = rx_conf->offloads | dev->data->dev_conf.rxmode.offloads;
+
+	/* Check free threshold */
+	rx_free_thresh = (rx_conf->rx_free_thresh == 0) ?
+		CPFL_DEFAULT_RX_FREE_THRESH :
+		rx_conf->rx_free_thresh;
+	if (check_rx_thresh(nb_desc, rx_free_thresh) != 0)
+		return -EINVAL;
+
+	/* Setup Rx queue */
+	rxq = rte_zmalloc_socket("cpfl rxq",
+				 sizeof(struct idpf_rx_queue),
+				 RTE_CACHE_LINE_SIZE,
+				 socket_id);
+	if (rxq == NULL) {
+		PMD_INIT_LOG(ERR, "Failed to allocate memory for rx queue data structure");
+		ret = -ENOMEM;
+		goto err_rxq_alloc;
+	}
+
+	is_splitq = !!(vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT);
+
+	rxq->mp = mp;
+	rxq->nb_rx_desc = nb_desc;
+	rxq->rx_free_thresh = rx_free_thresh;
+	rxq->queue_id = vport->chunks_info.rx_start_qid + queue_idx;
+	rxq->port_id = dev->data->port_id;
+	rxq->rx_deferred_start = rx_conf->rx_deferred_start;
+	rxq->rx_hdr_len = 0;
+	rxq->adapter = adapter;
+	rxq->offloads = cpfl_rx_offload_convert(offloads);
+
+	len = rte_pktmbuf_data_room_size(rxq->mp) - RTE_PKTMBUF_HEADROOM;
+	rxq->rx_buf_len = len;
+
+	/* Allocate a little more to support bulk allocation. */
+	len = nb_desc + IDPF_RX_MAX_BURST;
+	mz = cpfl_dma_zone_reserve(dev, queue_idx, len, VIRTCHNL2_QUEUE_TYPE_RX,
+				   socket_id, is_splitq);
+	if (mz == NULL) {
+		ret = -ENOMEM;
+		goto err_mz_reserve;
+	}
+	rxq->rx_ring_phys_addr = mz->iova;
+	rxq->rx_ring = mz->addr;
+	rxq->mz = mz;
+
+	if (!is_splitq) {
+		rxq->sw_ring = rte_zmalloc_socket("cpfl rxq sw ring",
+						  sizeof(struct rte_mbuf *) * len,
+						  RTE_CACHE_LINE_SIZE,
+						  socket_id);
+		if (rxq->sw_ring == NULL) {
+			PMD_INIT_LOG(ERR, "Failed to allocate memory for SW ring");
+			ret = -ENOMEM;
+			goto err_sw_ring_alloc;
+		}
+
+		reset_single_rx_queue(rxq);
+		rxq->qrx_tail = hw->hw_addr + (vport->chunks_info.rx_qtail_start +
+				queue_idx * vport->chunks_info.rx_qtail_spacing);
+	} else {
+		reset_split_rx_descq(rxq);
+
+		/* Setup Rx buffer queues */
+		ret = cpfl_rx_split_bufq_setup(dev, rxq, 2 * queue_idx,
+					       rx_free_thresh, nb_desc,
+					       socket_id, mp, 1);
+		if (ret != 0) {
+			PMD_INIT_LOG(ERR, "Failed to setup buffer queue 1");
+			ret = -EINVAL;
+			goto err_bufq1_setup;
+		}
+
+		ret = cpfl_rx_split_bufq_setup(dev, rxq, 2 * queue_idx + 1,
+					       rx_free_thresh, nb_desc,
+					       socket_id, mp, 2);
+		if (ret != 0) {
+			PMD_INIT_LOG(ERR, "Failed to setup buffer queue 2");
+			ret = -EINVAL;
+			goto err_bufq2_setup;
+		}
+	}
+
+	rxq->q_set = true;
+	dev->data->rx_queues[queue_idx] = rxq;
+
+	return 0;
+
+err_bufq2_setup:
+	cpfl_rx_split_bufq_release(rxq->bufq1);
+err_bufq1_setup:
+err_sw_ring_alloc:
+	cpfl_dma_zone_release(mz);
+err_mz_reserve:
+	rte_free(rxq);
+err_rxq_alloc:
+	return ret;
+}
+
 static int
 cpfl_tx_complq_setup(struct rte_eth_dev *dev, struct idpf_tx_queue *txq,
 		     uint16_t queue_idx, uint16_t nb_desc,
diff --git a/drivers/net/cpfl/cpfl_rxtx.h b/drivers/net/cpfl/cpfl_rxtx.h
index ec42478393..fd838d3f07 100644
--- a/drivers/net/cpfl/cpfl_rxtx.h
+++ b/drivers/net/cpfl/cpfl_rxtx.h
@@ -16,10 +16,16 @@
 /* Base address of the HW descriptor ring should be 128B aligned. */
 #define CPFL_RING_BASE_ALIGN	128
 
+#define CPFL_DEFAULT_RX_FREE_THRESH	32
+
 #define CPFL_DEFAULT_TX_RS_THRESH	32
 #define CPFL_DEFAULT_TX_FREE_THRESH	32
 
 int cpfl_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 			uint16_t nb_desc, unsigned int socket_id,
 			const struct rte_eth_txconf *tx_conf);
+int cpfl_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
+			uint16_t nb_desc, unsigned int socket_id,
+			const struct rte_eth_rxconf *rx_conf,
+			struct rte_mempool *mp);
 #endif /* _CPFL_RXTX_H_ */
-- 
2.25.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v2 04/21] net/cpfl: support device start and stop
  2023-01-13  8:19 ` [PATCH v2 00/21] add support for cpfl PMD in DPDK Mingxia Liu
                     ` (2 preceding siblings ...)
  2023-01-13  8:19   ` [PATCH v2 03/21] net/cpfl: add Rx " Mingxia Liu
@ 2023-01-13  8:19   ` Mingxia Liu
  2023-01-13  8:19   ` [PATCH v2 05/21] net/cpfl: support queue start Mingxia Liu
                     ` (20 subsequent siblings)
  24 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-01-13  8:19 UTC (permalink / raw)
  To: dev, qi.z.zhang, jingjing.wu, beilei.xing; +Cc: wenjun1.wu, Mingxia Liu

Add dev ops dev_start, dev_stop and link_update.
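
A minimal sketch of the application-side sequence these ops serve (the
port id is an assumption):

    #include <rte_ethdev.h>

    /* Hypothetical sketch: start, run traffic, stop. */
    static void
    port_cycle(uint16_t port_id)
    {
            if (rte_eth_dev_start(port_id) != 0)    /* -> cpfl_dev_start */
                    return;
            /* ... rx/tx bursts ... */
            (void)rte_eth_dev_stop(port_id);        /* -> cpfl_dev_stop */
    }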

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/net/cpfl/cpfl_ethdev.c | 35 ++++++++++++++++++++++++++++++++++
 1 file changed, 35 insertions(+)

diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index a113ed0de0..05c3ad1a9c 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -185,12 +185,45 @@ cpfl_dev_configure(struct rte_eth_dev *dev)
 	return 0;
 }
 
+static int
+cpfl_dev_start(struct rte_eth_dev *dev)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	int ret;
+
+	ret = idpf_vc_ena_dis_vport(vport, true);
+	if (ret != 0) {
+		PMD_DRV_LOG(ERR, "Failed to enable vport");
+		return ret;
+	}
+
+	vport->stopped = 0;
+
+	return 0;
+}
+
+static int
+cpfl_dev_stop(struct rte_eth_dev *dev)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+
+	if (vport->stopped == 1)
+		return 0;
+
+	idpf_vc_ena_dis_vport(vport, false);
+
+	vport->stopped = 1;
+
+	return 0;
+}
+
 static int
 cpfl_dev_close(struct rte_eth_dev *dev)
 {
 	struct idpf_vport *vport = dev->data->dev_private;
 	struct cpfl_adapter_ext *adapter = CPFL_ADAPTER_TO_EXT(vport->adapter);
 
+	cpfl_dev_stop(dev);
 	idpf_vport_deinit(vport);
 
 	adapter->cur_vports &= ~RTE_BIT32(vport->devarg_id);
@@ -539,6 +572,8 @@ static const struct eth_dev_ops cpfl_eth_dev_ops = {
 	.rx_queue_setup			= cpfl_rx_queue_setup,
 	.tx_queue_setup			= cpfl_tx_queue_setup,
 	.dev_infos_get			= cpfl_dev_info_get,
+	.dev_start			= cpfl_dev_start,
+	.dev_stop			= cpfl_dev_stop,
 	.link_update			= cpfl_dev_link_update,
 	.dev_supported_ptypes_get	= cpfl_dev_supported_ptypes_get,
 };
-- 
2.25.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v2 05/21] net/cpfl: support queue start
  2023-01-13  8:19 ` [PATCH v2 00/21] add support for cpfl PMD in DPDK Mingxia Liu
                     ` (3 preceding siblings ...)
  2023-01-13  8:19   ` [PATCH v2 04/21] net/cpfl: support device start and stop Mingxia Liu
@ 2023-01-13  8:19   ` Mingxia Liu
  2023-01-13  8:19   ` [PATCH v2 06/21] net/cpfl: support queue stop Mingxia Liu
                     ` (19 subsequent siblings)
  24 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-01-13  8:19 UTC (permalink / raw)
  To: dev, qi.z.zhang, jingjing.wu, beilei.xing; +Cc: wenjun1.wu, Mingxia Liu

Add support for these device ops (usage sketch after the list):
 - rx_queue_start
 - tx_queue_start
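
A minimal sketch of the deferred-start flow these ops enable (port 0 is
assumed to be configured already; mp is an existing mbuf pool):

    #include <rte_ethdev.h>
    #include <rte_lcore.h>

    /* Hypothetical sketch: defer a queue at setup, start it explicitly. */
    static int
    deferred_rxq_start(struct rte_mempool *mp)
    {
            struct rte_eth_rxconf rxconf = { .rx_deferred_start = 1 };

            (void)rte_eth_rx_queue_setup(0, 0, 512, rte_socket_id(), &rxconf, mp);
            (void)rte_eth_dev_start(0);     /* deferred queues stay stopped */
            return rte_eth_dev_rx_queue_start(0, 0); /* -> cpfl_rx_queue_start */
    }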

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/net/cpfl/cpfl_ethdev.c |  41 ++++++++++
 drivers/net/cpfl/cpfl_rxtx.c   | 138 +++++++++++++++++++++++++++++++++
 drivers/net/cpfl/cpfl_rxtx.h   |   4 +
 3 files changed, 183 insertions(+)

diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index 05c3ad1a9c..51d6243028 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -185,12 +185,51 @@ cpfl_dev_configure(struct rte_eth_dev *dev)
 	return 0;
 }
 
+static int
+cpfl_start_queues(struct rte_eth_dev *dev)
+{
+	struct idpf_rx_queue *rxq;
+	struct idpf_tx_queue *txq;
+	int err = 0;
+	int i;
+
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		txq = dev->data->tx_queues[i];
+		if (txq == NULL || txq->tx_deferred_start)
+			continue;
+		err = cpfl_tx_queue_start(dev, i);
+		if (err != 0) {
+			PMD_DRV_LOG(ERR, "Fail to start Tx queue %u", i);
+			return err;
+		}
+	}
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		rxq = dev->data->rx_queues[i];
+		if (rxq == NULL || rxq->rx_deferred_start)
+			continue;
+		err = cpfl_rx_queue_start(dev, i);
+		if (err != 0) {
+			PMD_DRV_LOG(ERR, "Fail to start Rx queue %u", i);
+			return err;
+		}
+	}
+
+	return err;
+}
+
 static int
 cpfl_dev_start(struct rte_eth_dev *dev)
 {
 	struct idpf_vport *vport = dev->data->dev_private;
 	int ret;
 
+	ret = cpfl_start_queues(dev);
+	if (ret != 0) {
+		PMD_DRV_LOG(ERR, "Failed to start queues");
+		return ret;
+	}
+
 	ret = idpf_vc_ena_dis_vport(vport, true);
 	if (ret != 0) {
 		PMD_DRV_LOG(ERR, "Failed to enable vport");
@@ -575,6 +614,8 @@ static const struct eth_dev_ops cpfl_eth_dev_ops = {
 	.dev_start			= cpfl_dev_start,
 	.dev_stop			= cpfl_dev_stop,
 	.link_update			= cpfl_dev_link_update,
+	.rx_queue_start			= cpfl_rx_queue_start,
+	.tx_queue_start			= cpfl_tx_queue_start,
 	.dev_supported_ptypes_get	= cpfl_dev_supported_ptypes_get,
 };
 
diff --git a/drivers/net/cpfl/cpfl_rxtx.c b/drivers/net/cpfl/cpfl_rxtx.c
index 695c79e1db..aa67db1e92 100644
--- a/drivers/net/cpfl/cpfl_rxtx.c
+++ b/drivers/net/cpfl/cpfl_rxtx.c
@@ -474,3 +474,141 @@ cpfl_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 err_txq_alloc:
 	return ret;
 }
+
+int
+cpfl_rx_queue_init(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+{
+	struct idpf_rx_queue *rxq;
+	int err;
+
+	if (rx_queue_id >= dev->data->nb_rx_queues)
+		return -EINVAL;
+
+	rxq = dev->data->rx_queues[rx_queue_id];
+
+	if (rxq == NULL || !rxq->q_set) {
+		PMD_DRV_LOG(ERR, "RX queue %u not available or setup",
+					rx_queue_id);
+		return -EINVAL;
+	}
+
+	if (rxq->bufq1 == NULL) {
+		/* Single queue */
+		err = idpf_alloc_single_rxq_mbufs(rxq);
+		if (err != 0) {
+			PMD_DRV_LOG(ERR, "Failed to allocate RX queue mbuf");
+			return err;
+		}
+
+		rte_wmb();
+
+		/* Init the RX tail register. */
+		IDPF_PCI_REG_WRITE(rxq->qrx_tail, rxq->nb_rx_desc - 1);
+	} else {
+		/* Split queue */
+		err = idpf_alloc_split_rxq_mbufs(rxq->bufq1);
+		if (err != 0) {
+			PMD_DRV_LOG(ERR, "Failed to allocate RX buffer queue mbuf");
+			return err;
+		}
+		err = idpf_alloc_split_rxq_mbufs(rxq->bufq2);
+		if (err != 0) {
+			PMD_DRV_LOG(ERR, "Failed to allocate RX buffer queue mbuf");
+			return err;
+		}
+
+		rte_wmb();
+
+		/* Init the RX tail register. */
+		IDPF_PCI_REG_WRITE(rxq->bufq1->qrx_tail, rxq->bufq1->rx_tail);
+		IDPF_PCI_REG_WRITE(rxq->bufq2->qrx_tail, rxq->bufq2->rx_tail);
+	}
+
+	return err;
+}
+
+int
+cpfl_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_rx_queue *rxq =
+		dev->data->rx_queues[rx_queue_id];
+	int err = 0;
+
+	err = idpf_vc_config_rxq(vport, rxq);
+	if (err != 0) {
+		PMD_DRV_LOG(ERR, "Fail to configure Rx queue %u", rx_queue_id);
+		return err;
+	}
+
+	err = cpfl_rx_queue_init(dev, rx_queue_id);
+	if (err != 0) {
+		PMD_DRV_LOG(ERR, "Failed to init RX queue %u",
+			    rx_queue_id);
+		return err;
+	}
+
+	/* Ready to switch the queue on */
+	err = idpf_switch_queue(vport, rx_queue_id, true, true);
+	if (err != 0) {
+		PMD_DRV_LOG(ERR, "Failed to switch RX queue %u on",
+			    rx_queue_id);
+	} else {
+		rxq->q_started = true;
+		dev->data->rx_queue_state[rx_queue_id] =
+			RTE_ETH_QUEUE_STATE_STARTED;
+	}
+
+	return err;
+}
+
+int
+cpfl_tx_queue_init(struct rte_eth_dev *dev, uint16_t tx_queue_id)
+{
+	struct idpf_tx_queue *txq;
+
+	if (tx_queue_id >= dev->data->nb_tx_queues)
+		return -EINVAL;
+
+	txq = dev->data->tx_queues[tx_queue_id];
+
+	/* Init the TX tail register. */
+	IDPF_PCI_REG_WRITE(txq->qtx_tail, 0);
+
+	return 0;
+}
+
+int
+cpfl_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_tx_queue *txq =
+		dev->data->tx_queues[tx_queue_id];
+	int err = 0;
+
+	err = idpf_vc_config_txq(vport, txq);
+	if (err != 0) {
+		PMD_DRV_LOG(ERR, "Fail to configure Tx queue %u", tx_queue_id);
+		return err;
+	}
+
+	err = cpfl_tx_queue_init(dev, tx_queue_id);
+	if (err != 0) {
+		PMD_DRV_LOG(ERR, "Failed to init TX queue %u",
+			    tx_queue_id);
+		return err;
+	}
+
+	/* Ready to switch the queue on */
+	err = idpf_switch_queue(vport, tx_queue_id, false, true);
+	if (err != 0) {
+		PMD_DRV_LOG(ERR, "Failed to switch TX queue %u on",
+			    tx_queue_id);
+	} else {
+		txq->q_started = true;
+		dev->data->tx_queue_state[tx_queue_id] =
+			RTE_ETH_QUEUE_STATE_STARTED;
+	}
+
+	return err;
+}
diff --git a/drivers/net/cpfl/cpfl_rxtx.h b/drivers/net/cpfl/cpfl_rxtx.h
index fd838d3f07..2fa7950775 100644
--- a/drivers/net/cpfl/cpfl_rxtx.h
+++ b/drivers/net/cpfl/cpfl_rxtx.h
@@ -28,4 +28,8 @@ int cpfl_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 			uint16_t nb_desc, unsigned int socket_id,
 			const struct rte_eth_rxconf *rx_conf,
 			struct rte_mempool *mp);
+int cpfl_rx_queue_init(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+int cpfl_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+int cpfl_tx_queue_init(struct rte_eth_dev *dev, uint16_t tx_queue_id);
+int cpfl_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id);
 #endif /* _CPFL_RXTX_H_ */
-- 
2.25.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v2 06/21] net/cpfl: support queue stop
  2023-01-13  8:19 ` [PATCH v2 00/21] add support for cpfl PMD in DPDK Mingxia Liu
                     ` (4 preceding siblings ...)
  2023-01-13  8:19   ` [PATCH v2 05/21] net/cpfl: support queue start Mingxia Liu
@ 2023-01-13  8:19   ` Mingxia Liu
  2023-01-13  8:19   ` [PATCH v2 07/21] net/cpfl: support queue release Mingxia Liu
                     ` (18 subsequent siblings)
  24 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-01-13  8:19 UTC (permalink / raw)
  To: dev, qi.z.zhang, jingjing.wu, beilei.xing; +Cc: wenjun1.wu, Mingxia Liu

Add support for these device ops (usage sketch after the list):
 - rx_queue_stop
 - tx_queue_stop
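
A sketch of per-queue stop from the application side (port and queue ids
are assumptions):

    #include <rte_ethdev.h>

    /* Hypothetical sketch: stop one queue pair without stopping the port. */
    static void
    stop_queue_pair(void)
    {
            (void)rte_eth_dev_rx_queue_stop(0, 0); /* -> cpfl_rx_queue_stop */
            (void)rte_eth_dev_tx_queue_stop(0, 0); /* -> cpfl_tx_queue_stop */
    }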

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/net/cpfl/cpfl_ethdev.c | 10 +++-
 drivers/net/cpfl/cpfl_rxtx.c   | 87 ++++++++++++++++++++++++++++++++++
 drivers/net/cpfl/cpfl_rxtx.h   |  3 ++
 3 files changed, 99 insertions(+), 1 deletion(-)

diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index 51d6243028..a80e916ae4 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -233,12 +233,16 @@ cpfl_dev_start(struct rte_eth_dev *dev)
 	ret = idpf_vc_ena_dis_vport(vport, true);
 	if (ret != 0) {
 		PMD_DRV_LOG(ERR, "Failed to enable vport");
-		return ret;
+		goto err_vport;
 	}
 
 	vport->stopped = 0;
 
 	return 0;
+
+err_vport:
+	cpfl_stop_queues(dev);
+	return ret;
 }
 
 static int
@@ -251,6 +255,8 @@ cpfl_dev_stop(struct rte_eth_dev *dev)
 
 	idpf_vc_ena_dis_vport(vport, false);
 
+	cpfl_stop_queues(dev);
+
 	vport->stopped = 1;
 
 	return 0;
@@ -616,6 +622,8 @@ static const struct eth_dev_ops cpfl_eth_dev_ops = {
 	.link_update			= cpfl_dev_link_update,
 	.rx_queue_start			= cpfl_rx_queue_start,
 	.tx_queue_start			= cpfl_tx_queue_start,
+	.rx_queue_stop			= cpfl_rx_queue_stop,
+	.tx_queue_stop			= cpfl_tx_queue_stop,
 	.dev_supported_ptypes_get	= cpfl_dev_supported_ptypes_get,
 };
 
diff --git a/drivers/net/cpfl/cpfl_rxtx.c b/drivers/net/cpfl/cpfl_rxtx.c
index aa67db1e92..b7d616de4f 100644
--- a/drivers/net/cpfl/cpfl_rxtx.c
+++ b/drivers/net/cpfl/cpfl_rxtx.c
@@ -612,3 +612,90 @@ cpfl_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
 
 	return err;
 }
+
+int
+cpfl_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_rx_queue *rxq;
+	int err;
+
+	if (rx_queue_id >= dev->data->nb_rx_queues)
+		return -EINVAL;
+
+	err = idpf_switch_queue(vport, rx_queue_id, true, false);
+	if (err != 0) {
+		PMD_DRV_LOG(ERR, "Failed to switch RX queue %u off",
+			    rx_queue_id);
+		return err;
+	}
+
+	rxq = dev->data->rx_queues[rx_queue_id];
+	if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE) {
+		rxq->ops->release_mbufs(rxq);
+		reset_single_rx_queue(rxq);
+	} else {
+		rxq->bufq1->ops->release_mbufs(rxq->bufq1);
+		rxq->bufq2->ops->release_mbufs(rxq->bufq2);
+		reset_split_rx_queue(rxq);
+	}
+	dev->data->rx_queue_state[rx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
+
+	return 0;
+}
+
+int
+cpfl_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_tx_queue *txq;
+	int err;
+
+	if (tx_queue_id >= dev->data->nb_tx_queues)
+		return -EINVAL;
+
+	err = idpf_switch_queue(vport, tx_queue_id, false, false);
+	if (err != 0) {
+		PMD_DRV_LOG(ERR, "Failed to switch TX queue %u off",
+			    tx_queue_id);
+		return err;
+	}
+
+	txq = dev->data->tx_queues[tx_queue_id];
+	txq->ops->release_mbufs(txq);
+	if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE) {
+		reset_single_tx_queue(txq);
+	} else {
+		reset_split_tx_descq(txq);
+		reset_split_tx_complq(txq->complq);
+	}
+	dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
+
+	return 0;
+}
+
+void
+cpfl_stop_queues(struct rte_eth_dev *dev)
+{
+	struct idpf_rx_queue *rxq;
+	struct idpf_tx_queue *txq;
+	int i;
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		rxq = dev->data->rx_queues[i];
+		if (rxq == NULL)
+			continue;
+
+		if (cpfl_rx_queue_stop(dev, i) != 0)
+			PMD_DRV_LOG(WARNING, "Fail to stop Rx queue %d", i);
+	}
+
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		txq = dev->data->tx_queues[i];
+		if (txq == NULL)
+			continue;
+
+		if (cpfl_tx_queue_stop(dev, i) != 0)
+			PMD_DRV_LOG(WARNING, "Fail to stop Tx queue %d", i);
+	}
+}
diff --git a/drivers/net/cpfl/cpfl_rxtx.h b/drivers/net/cpfl/cpfl_rxtx.h
index 2fa7950775..6b63137d5c 100644
--- a/drivers/net/cpfl/cpfl_rxtx.h
+++ b/drivers/net/cpfl/cpfl_rxtx.h
@@ -32,4 +32,7 @@ int cpfl_rx_queue_init(struct rte_eth_dev *dev, uint16_t rx_queue_id);
 int cpfl_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id);
 int cpfl_tx_queue_init(struct rte_eth_dev *dev, uint16_t tx_queue_id);
 int cpfl_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id);
+void cpfl_stop_queues(struct rte_eth_dev *dev);
+int cpfl_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id);
+int cpfl_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id);
 #endif /* _CPFL_RXTX_H_ */
-- 
2.25.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v2 07/21] net/cpfl: support queue release
  2023-01-13  8:19 ` [PATCH v2 00/21] add support for cpfl PMD in DPDK Mingxia Liu
                     ` (5 preceding siblings ...)
  2023-01-13  8:19   ` [PATCH v2 06/21] net/cpfl: support queue stop Mingxia Liu
@ 2023-01-13  8:19   ` Mingxia Liu
  2023-01-13  8:19   ` [PATCH v2 08/21] net/cpfl: support MTU configuration Mingxia Liu
                     ` (17 subsequent siblings)
  24 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-01-13  8:19 UTC (permalink / raw)
  To: dev, qi.z.zhang, jingjing.wu, beilei.xing; +Cc: wenjun1.wu, Mingxia Liu

Add support for queue operations:
 - rx_queue_release
 - tx_queue_release

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/net/cpfl/cpfl_ethdev.c |  2 ++
 drivers/net/cpfl/cpfl_rxtx.c   | 35 ++++++++++++++++++++++++++++++++++
 drivers/net/cpfl/cpfl_rxtx.h   |  2 ++
 3 files changed, 39 insertions(+)

diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index a80e916ae4..922f1acc59 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -624,6 +624,8 @@ static const struct eth_dev_ops cpfl_eth_dev_ops = {
 	.tx_queue_start			= cpfl_tx_queue_start,
 	.rx_queue_stop			= cpfl_rx_queue_stop,
 	.tx_queue_stop			= cpfl_tx_queue_stop,
+	.rx_queue_release		= cpfl_dev_rx_queue_release,
+	.tx_queue_release		= cpfl_dev_tx_queue_release,
 	.dev_supported_ptypes_get	= cpfl_dev_supported_ptypes_get,
 };
 
diff --git a/drivers/net/cpfl/cpfl_rxtx.c b/drivers/net/cpfl/cpfl_rxtx.c
index b7d616de4f..a10deb6c96 100644
--- a/drivers/net/cpfl/cpfl_rxtx.c
+++ b/drivers/net/cpfl/cpfl_rxtx.c
@@ -49,6 +49,14 @@ cpfl_tx_offload_convert(uint64_t offload)
 	return ol;
 }
 
+static const struct idpf_rxq_ops def_rxq_ops = {
+	.release_mbufs = release_rxq_mbufs,
+};
+
+static const struct idpf_txq_ops def_txq_ops = {
+	.release_mbufs = release_txq_mbufs,
+};
+
 static const struct rte_memzone *
 cpfl_dma_zone_reserve(struct rte_eth_dev *dev, uint16_t queue_idx,
 		      uint16_t len, uint16_t queue_type,
@@ -177,6 +185,7 @@ cpfl_rx_split_bufq_setup(struct rte_eth_dev *dev, struct idpf_rx_queue *rxq,
 	reset_split_rx_bufq(bufq);
 	bufq->qrx_tail = hw->hw_addr + (vport->chunks_info.rx_buf_qtail_start +
 			 queue_idx * vport->chunks_info.rx_buf_qtail_spacing);
+	bufq->ops = &def_rxq_ops;
 	bufq->q_set = true;
 
 	if (bufq_id == 1) {
@@ -235,6 +244,12 @@ cpfl_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 	if (check_rx_thresh(nb_desc, rx_free_thresh) != 0)
 		return -EINVAL;
 
+	/* Free memory if needed */
+	if (dev->data->rx_queues[queue_idx] != NULL) {
+		idpf_rx_queue_release(dev->data->rx_queues[queue_idx]);
+		dev->data->rx_queues[queue_idx] = NULL;
+	}
+
 	/* Setup Rx queue */
 	rxq = rte_zmalloc_socket("cpfl rxq",
 				 sizeof(struct idpf_rx_queue),
@@ -287,6 +302,7 @@ cpfl_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 		reset_single_rx_queue(rxq);
 		rxq->qrx_tail = hw->hw_addr + (vport->chunks_info.rx_qtail_start +
 				queue_idx * vport->chunks_info.rx_qtail_spacing);
+		rxq->ops = &def_rxq_ops;
 	} else {
 		reset_split_rx_descq(rxq);
 
@@ -399,6 +415,12 @@ cpfl_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 	if (check_tx_thresh(nb_desc, tx_rs_thresh, tx_free_thresh) != 0)
 		return -EINVAL;
 
+	/* Free memory if needed. */
+	if (dev->data->tx_queues[queue_idx] != NULL) {
+		idpf_tx_queue_release(dev->data->tx_queues[queue_idx]);
+		dev->data->tx_queues[queue_idx] = NULL;
+	}
+
 	/* Allocate the TX queue data structure. */
 	txq = rte_zmalloc_socket("cpfl txq",
 				 sizeof(struct idpf_tx_queue),
@@ -461,6 +483,7 @@ cpfl_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 
 	txq->qtx_tail = hw->hw_addr + (vport->chunks_info.tx_qtail_start +
 			queue_idx * vport->chunks_info.tx_qtail_spacing);
+	txq->ops = &def_txq_ops;
 	txq->q_set = true;
 	dev->data->tx_queues[queue_idx] = txq;
 
@@ -674,6 +697,18 @@ cpfl_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
 	return 0;
 }
 
+void
+cpfl_dev_rx_queue_release(struct rte_eth_dev *dev, uint16_t qid)
+{
+	idpf_rx_queue_release(dev->data->rx_queues[qid]);
+}
+
+void
+cpfl_dev_tx_queue_release(struct rte_eth_dev *dev, uint16_t qid)
+{
+	idpf_tx_queue_release(dev->data->tx_queues[qid]);
+}
+
 void
 cpfl_stop_queues(struct rte_eth_dev *dev)
 {
diff --git a/drivers/net/cpfl/cpfl_rxtx.h b/drivers/net/cpfl/cpfl_rxtx.h
index 6b63137d5c..037d479d56 100644
--- a/drivers/net/cpfl/cpfl_rxtx.h
+++ b/drivers/net/cpfl/cpfl_rxtx.h
@@ -35,4 +35,6 @@ int cpfl_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id);
 void cpfl_stop_queues(struct rte_eth_dev *dev);
 int cpfl_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id);
 int cpfl_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+void cpfl_dev_tx_queue_release(struct rte_eth_dev *dev, uint16_t qid);
+void cpfl_dev_rx_queue_release(struct rte_eth_dev *dev, uint16_t qid);
 #endif /* _CPFL_RXTX_H_ */
-- 
2.25.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v2 08/21] net/cpfl: support MTU configuration
  2023-01-13  8:19 ` [PATCH v2 00/21] add support for cpfl PMD in DPDK Mingxia Liu
                     ` (6 preceding siblings ...)
  2023-01-13  8:19   ` [PATCH v2 07/21] net/cpfl: support queue release Mingxia Liu
@ 2023-01-13  8:19   ` Mingxia Liu
  2023-01-13  8:19   ` [PATCH v2 09/21] net/cpfl: support basic Rx data path Mingxia Liu
                     ` (16 subsequent siblings)
  24 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-01-13  8:19 UTC (permalink / raw)
  To: dev, qi.z.zhang, jingjing.wu, beilei.xing; +Cc: wenjun1.wu, Mingxia Liu

Add dev ops mtu_set.
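
For illustration (port id and MTU value are assumptions), the ops is
reached through the generic call and, per the check below, only while the
port is stopped:

    #include <stdio.h>
    #include <rte_ethdev.h>

    /* Hypothetical sketch: update the MTU before rte_eth_dev_start. */
    static void
    set_port_mtu(void)
    {
            int ret = rte_eth_dev_set_mtu(0, 1500);

            if (ret != 0) /* e.g. -EBUSY if started, -EINVAL if above max */
                    printf("MTU update rejected: %d\n", ret);
    }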

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 doc/guides/nics/features/cpfl.ini |  1 +
 drivers/net/cpfl/cpfl_ethdev.c    | 26 ++++++++++++++++++++++++++
 2 files changed, 27 insertions(+)

diff --git a/doc/guides/nics/features/cpfl.ini b/doc/guides/nics/features/cpfl.ini
index a2d1ca9e15..470ba81579 100644
--- a/doc/guides/nics/features/cpfl.ini
+++ b/doc/guides/nics/features/cpfl.ini
@@ -7,6 +7,7 @@
 ; is selected.
 ;
 [Features]
+MTU update           = Y
 Linux                = Y
 x86-32               = Y
 x86-64               = Y
diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index 922f1acc59..5ef9c902f7 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -121,6 +121,27 @@ cpfl_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	return 0;
 }
 
+static int
+cpfl_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+
+	/* MTU setting is forbidden while the port is started */
+	if (dev->data->dev_started) {
+		PMD_DRV_LOG(ERR, "port must be stopped before configuration");
+		return -EBUSY;
+	}
+
+	if (mtu > vport->max_mtu) {
+		PMD_DRV_LOG(ERR, "MTU should be less than %d", vport->max_mtu);
+		return -EINVAL;
+	}
+
+	vport->max_pkt_len = mtu + CPFL_ETH_OVERHEAD;
+
+	return 0;
+}
+
 static const uint32_t *
 cpfl_dev_supported_ptypes_get(struct rte_eth_dev *dev __rte_unused)
 {
@@ -182,6 +203,10 @@ cpfl_dev_configure(struct rte_eth_dev *dev)
 		return -ENOTSUP;
 	}
 
+	vport->max_pkt_len =
+		(dev->data->mtu == 0) ? CPFL_DEFAULT_MTU : dev->data->mtu +
+		CPFL_ETH_OVERHEAD;
+
 	return 0;
 }
 
@@ -626,6 +651,7 @@ static const struct eth_dev_ops cpfl_eth_dev_ops = {
 	.tx_queue_stop			= cpfl_tx_queue_stop,
 	.rx_queue_release		= cpfl_dev_rx_queue_release,
 	.tx_queue_release		= cpfl_dev_tx_queue_release,
+	.mtu_set			= cpfl_dev_mtu_set,
 	.dev_supported_ptypes_get	= cpfl_dev_supported_ptypes_get,
 };
 
-- 
2.25.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v2 09/21] net/cpfl: support basic Rx data path
  2023-01-13  8:19 ` [PATCH v2 00/21] add support for cpfl PMD in DPDK Mingxia Liu
                     ` (7 preceding siblings ...)
  2023-01-13  8:19   ` [PATCH v2 08/21] net/cpfl: support MTU configuration Mingxia Liu
@ 2023-01-13  8:19   ` Mingxia Liu
  2023-01-13  8:19   ` [PATCH v2 10/21] net/cpfl: support basic Tx " Mingxia Liu
                     ` (15 subsequent siblings)
  24 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-01-13  8:19 UTC (permalink / raw)
  To: dev, qi.z.zhang, jingjing.wu, beilei.xing; +Cc: wenjun1.wu, Mingxia Liu

Add basic Rx support in split queue mode and single queue mode.
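
A sketch of how the burst function selected below is invoked (port, queue
and burst size are assumptions):

    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    /* Hypothetical sketch: rte_eth_rx_burst calls the selected rx_pkt_burst. */
    static uint16_t
    rx_once(void)
    {
            struct rte_mbuf *pkts[32];

            return rte_eth_rx_burst(0, 0, pkts, 32);
    }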

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/net/cpfl/cpfl_ethdev.c |  2 ++
 drivers/net/cpfl/cpfl_rxtx.c   | 11 +++++++++++
 drivers/net/cpfl/cpfl_rxtx.h   |  1 +
 3 files changed, 14 insertions(+)

diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index 5ef9c902f7..716b9e3807 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -255,6 +255,8 @@ cpfl_dev_start(struct rte_eth_dev *dev)
 		return ret;
 	}
 
+	cpfl_set_rx_function(dev);
+
 	ret = idpf_vc_ena_dis_vport(vport, true);
 	if (ret != 0) {
 		PMD_DRV_LOG(ERR, "Failed to enable vport");
diff --git a/drivers/net/cpfl/cpfl_rxtx.c b/drivers/net/cpfl/cpfl_rxtx.c
index a10deb6c96..30df129a19 100644
--- a/drivers/net/cpfl/cpfl_rxtx.c
+++ b/drivers/net/cpfl/cpfl_rxtx.c
@@ -734,3 +734,14 @@ cpfl_stop_queues(struct rte_eth_dev *dev)
 			PMD_DRV_LOG(WARNING, "Fail to stop Tx queue %d", i);
 	}
 }
+
+void
+cpfl_set_rx_function(struct rte_eth_dev *dev)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+
+	if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT)
+		dev->rx_pkt_burst = idpf_splitq_recv_pkts;
+	else
+		dev->rx_pkt_burst = idpf_singleq_recv_pkts;
+}
diff --git a/drivers/net/cpfl/cpfl_rxtx.h b/drivers/net/cpfl/cpfl_rxtx.h
index 037d479d56..c29c30c7a3 100644
--- a/drivers/net/cpfl/cpfl_rxtx.h
+++ b/drivers/net/cpfl/cpfl_rxtx.h
@@ -37,4 +37,5 @@ int cpfl_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id);
 int cpfl_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id);
 void cpfl_dev_tx_queue_release(struct rte_eth_dev *dev, uint16_t qid);
 void cpfl_dev_rx_queue_release(struct rte_eth_dev *dev, uint16_t qid);
+void cpfl_set_rx_function(struct rte_eth_dev *dev);
 #endif /* _CPFL_RXTX_H_ */
-- 
2.25.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v2 10/21] net/cpfl: support basic Tx data path
  2023-01-13  8:19 ` [PATCH v2 00/21] add support for cpfl PMD in DPDK Mingxia Liu
                     ` (8 preceding siblings ...)
  2023-01-13  8:19   ` [PATCH v2 09/21] net/cpfl: support basic Rx data path Mingxia Liu
@ 2023-01-13  8:19   ` Mingxia Liu
  2023-01-13  8:19   ` [PATCH v2 11/21] net/cpfl: support write back based on ITR expire Mingxia Liu
                     ` (14 subsequent siblings)
  24 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-01-13  8:19 UTC (permalink / raw)
  To: dev, qi.z.zhang, jingjing.wu, beilei.xing; +Cc: wenjun1.wu, Mingxia Liu

Add basic Tx support in split queue mode and single queue mode.
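
Correspondingly, a transmit-side sketch (pkts and nb are assumed to come
from a receive loop such as the one in the previous patch):

    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    /* Hypothetical sketch: tx_pkt_prepare, then the selected tx_pkt_burst. */
    static uint16_t
    tx_once(struct rte_mbuf **pkts, uint16_t nb)
    {
            uint16_t nb_prep = rte_eth_tx_prepare(0, 0, pkts, nb);

            return rte_eth_tx_burst(0, 0, pkts, nb_prep);
    }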

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/net/cpfl/cpfl_ethdev.c |  3 +++
 drivers/net/cpfl/cpfl_rxtx.c   | 14 ++++++++++++++
 drivers/net/cpfl/cpfl_rxtx.h   |  1 +
 3 files changed, 18 insertions(+)

diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index 716b9e3807..f94af81d95 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -97,6 +97,8 @@ cpfl_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->max_mtu = vport->max_mtu;
 	dev_info->min_mtu = RTE_ETHER_MIN_MTU;
 
+	dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
+
 	dev_info->default_txconf = (struct rte_eth_txconf) {
 		.tx_free_thresh = CPFL_DEFAULT_TX_FREE_THRESH,
 		.tx_rs_thresh = CPFL_DEFAULT_TX_RS_THRESH,
@@ -256,6 +258,7 @@ cpfl_dev_start(struct rte_eth_dev *dev)
 	}
 
 	cpfl_set_rx_function(dev);
+	cpfl_set_tx_function(dev);
 
 	ret = idpf_vc_ena_dis_vport(vport, true);
 	if (ret != 0) {
diff --git a/drivers/net/cpfl/cpfl_rxtx.c b/drivers/net/cpfl/cpfl_rxtx.c
index 30df129a19..0e053f4434 100644
--- a/drivers/net/cpfl/cpfl_rxtx.c
+++ b/drivers/net/cpfl/cpfl_rxtx.c
@@ -745,3 +745,17 @@ cpfl_set_rx_function(struct rte_eth_dev *dev)
 	else
 		dev->rx_pkt_burst = idpf_singleq_recv_pkts;
 }
+
+void
+cpfl_set_tx_function(struct rte_eth_dev *dev)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+
+	if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
+		dev->tx_pkt_burst = idpf_splitq_xmit_pkts;
+		dev->tx_pkt_prepare = idpf_prep_pkts;
+	} else {
+		dev->tx_pkt_burst = idpf_singleq_xmit_pkts;
+		dev->tx_pkt_prepare = idpf_prep_pkts;
+	}
+}
diff --git a/drivers/net/cpfl/cpfl_rxtx.h b/drivers/net/cpfl/cpfl_rxtx.h
index c29c30c7a3..021db5bf8a 100644
--- a/drivers/net/cpfl/cpfl_rxtx.h
+++ b/drivers/net/cpfl/cpfl_rxtx.h
@@ -38,4 +38,5 @@ int cpfl_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id);
 void cpfl_dev_tx_queue_release(struct rte_eth_dev *dev, uint16_t qid);
 void cpfl_dev_rx_queue_release(struct rte_eth_dev *dev, uint16_t qid);
 void cpfl_set_rx_function(struct rte_eth_dev *dev);
+void cpfl_set_tx_function(struct rte_eth_dev *dev);
 #endif /* _CPFL_RXTX_H_ */
-- 
2.25.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v2 11/21] net/cpfl: support write back based on ITR expire
  2023-01-13  8:19 ` [PATCH v2 00/21] add support for cpfl PMD in DPDK Mingxia Liu
                     ` (9 preceding siblings ...)
  2023-01-13  8:19   ` [PATCH v2 10/21] net/cpfl: support basic Tx " Mingxia Liu
@ 2023-01-13  8:19   ` Mingxia Liu
  2023-01-13  8:19   ` [PATCH v2 12/21] net/cpfl: support RSS Mingxia Liu
                     ` (13 subsequent siblings)
  24 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-01-13  8:19 UTC (permalink / raw)
  To: dev, qi.z.zhang, jingjing.wu, beilei.xing; +Cc: wenjun1.wu, Mingxia Liu

Enable write back on ITR expire, then packets can be received one by one.

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/net/cpfl/cpfl_ethdev.c | 45 +++++++++++++++++++++++++++++++++-
 drivers/net/cpfl/cpfl_ethdev.h |  2 ++
 2 files changed, 46 insertions(+), 1 deletion(-)

diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index f94af81d95..8c968a8eeb 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -212,6 +212,15 @@ cpfl_dev_configure(struct rte_eth_dev *dev)
 	return 0;
 }
 
+static int
+cpfl_config_rx_queues_irqs(struct rte_eth_dev *dev)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	uint16_t nb_rx_queues = dev->data->nb_rx_queues;
+
+	return idpf_config_irq_map(vport, nb_rx_queues);
+}
+
 static int
 cpfl_start_queues(struct rte_eth_dev *dev)
 {
@@ -249,12 +258,37 @@ static int
 cpfl_dev_start(struct rte_eth_dev *dev)
 {
 	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_adapter *base = vport->adapter;
+	struct cpfl_adapter_ext *adapter = CPFL_ADAPTER_TO_EXT(base);
+	uint16_t num_allocated_vectors = base->caps.num_allocated_vectors;
+	uint16_t req_vecs_num;
 	int ret;
 
+	req_vecs_num = CPFL_DFLT_Q_VEC_NUM;
+	if (req_vecs_num + adapter->used_vecs_num > num_allocated_vectors) {
+		PMD_DRV_LOG(ERR, "The accumulated request vectors' number should be less than %d",
+			    num_allocated_vectors);
+		ret = -EINVAL;
+		goto err_vec;
+	}
+
+	ret = idpf_vc_alloc_vectors(vport, req_vecs_num);
+	if (ret != 0) {
+		PMD_DRV_LOG(ERR, "Failed to allocate interrupt vectors");
+		goto err_vec;
+	}
+	adapter->used_vecs_num += req_vecs_num;
+
+	ret = cpfl_config_rx_queues_irqs(dev);
+	if (ret != 0) {
+		PMD_DRV_LOG(ERR, "Failed to configure irqs");
+		goto err_irq;
+	}
+
 	ret = cpfl_start_queues(dev);
 	if (ret != 0) {
 		PMD_DRV_LOG(ERR, "Failed to start queues");
-		return ret;
+		goto err_startq;
 	}
 
 	cpfl_set_rx_function(dev);
@@ -272,6 +306,11 @@ cpfl_dev_start(struct rte_eth_dev *dev)
 
 err_vport:
 	cpfl_stop_queues(dev);
+err_startq:
+	idpf_config_irq_unmap(vport, dev->data->nb_rx_queues);
+err_irq:
+	idpf_vc_dealloc_vectors(vport);
+err_vec:
 	return ret;
 }
 
@@ -287,6 +326,10 @@ cpfl_dev_stop(struct rte_eth_dev *dev)
 
 	cpfl_stop_queues(dev);
 
+	idpf_config_irq_unmap(vport, dev->data->nb_rx_queues);
+
+	idpf_vc_dealloc_vectors(vport);
+
 	vport->stopped = 1;
 
 	return 0;
diff --git a/drivers/net/cpfl/cpfl_ethdev.h b/drivers/net/cpfl/cpfl_ethdev.h
index 83459b9c91..9ae543c2ad 100644
--- a/drivers/net/cpfl/cpfl_ethdev.h
+++ b/drivers/net/cpfl/cpfl_ethdev.h
@@ -24,6 +24,8 @@
 
 #define CPFL_INVALID_VPORT_IDX	0xffff
 
+#define CPFL_DFLT_Q_VEC_NUM	1
+
 #define CPFL_MIN_BUF_SIZE	1024
 #define CPFL_MAX_FRAME_SIZE	9728
 #define CPFL_DEFAULT_MTU	RTE_ETHER_MTU
-- 
2.25.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v2 12/21] net/cpfl: support RSS
  2023-01-13  8:19 ` [PATCH v2 00/21] add support for cpfl PMD in DPDK Mingxia Liu
                     ` (10 preceding siblings ...)
  2023-01-13  8:19   ` [PATCH v2 11/21] net/cpfl: support write back based on ITR expire Mingxia Liu
@ 2023-01-13  8:19   ` Mingxia Liu
  2023-01-13  8:19   ` [PATCH v2 13/21] net/cpfl: support Rx offloading Mingxia Liu
                     ` (12 subsequent siblings)
  24 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-01-13  8:19 UTC (permalink / raw)
  To: dev, qi.z.zhang, jingjing.wu, beilei.xing; +Cc: wenjun1.wu, Mingxia Liu

Add RSS support.
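
A sketch of the application-side configuration this enables (port, queue
counts and hash types are assumptions):

    #include <rte_ethdev.h>

    /* Hypothetical sketch: request RSS spreading at configure time. */
    static int
    configure_rss(void)
    {
            struct rte_eth_conf conf = {
                    .rxmode = { .mq_mode = RTE_ETH_MQ_RX_RSS },
                    .rx_adv_conf = {
                            .rss_conf = {
                                    .rss_key = NULL, /* PMD picks a random key */
                                    .rss_hf  = RTE_ETH_RSS_IP | RTE_ETH_RSS_TCP,
                            },
                    },
            };

            return rte_eth_dev_configure(0, 4, 4, &conf);
    }

The NULL-key case corresponds to cpfl_init_rss() below filling
vport->rss_key from rte_rand().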

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/net/cpfl/cpfl_ethdev.c | 51 ++++++++++++++++++++++++++++++++++
 drivers/net/cpfl/cpfl_ethdev.h | 15 ++++++++++
 2 files changed, 66 insertions(+)

diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index 8c968a8eeb..2be1f841e0 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -97,6 +97,8 @@ cpfl_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->max_mtu = vport->max_mtu;
 	dev_info->min_mtu = RTE_ETHER_MIN_MTU;
 
+	dev_info->flow_type_rss_offloads = CPFL_RSS_OFFLOAD_ALL;
+
 	dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 	dev_info->default_txconf = (struct rte_eth_txconf) {
@@ -162,11 +164,49 @@ cpfl_dev_supported_ptypes_get(struct rte_eth_dev *dev __rte_unused)
 	return ptypes;
 }
 
+static int
+cpfl_init_rss(struct idpf_vport *vport)
+{
+	struct rte_eth_rss_conf *rss_conf;
+	struct rte_eth_dev_data *dev_data;
+	uint16_t i, nb_q;
+	int ret = 0;
+
+	dev_data = vport->dev_data;
+	rss_conf = &dev_data->dev_conf.rx_adv_conf.rss_conf;
+	nb_q = dev_data->nb_rx_queues;
+
+	if (rss_conf->rss_key == NULL) {
+		for (i = 0; i < vport->rss_key_size; i++)
+			vport->rss_key[i] = (uint8_t)rte_rand();
+	} else if (rss_conf->rss_key_len != vport->rss_key_size) {
+		PMD_INIT_LOG(ERR, "Invalid RSS key length in RSS configuration, should be %d",
+			     vport->rss_key_size);
+		return -EINVAL;
+	} else {
+		rte_memcpy(vport->rss_key, rss_conf->rss_key,
+			   vport->rss_key_size);
+	}
+
+	for (i = 0; i < vport->rss_lut_size; i++)
+		vport->rss_lut[i] = i % nb_q;
+
+	vport->rss_hf = IDPF_DEFAULT_RSS_HASH_EXPANDED;
+
+	ret = idpf_config_rss(vport);
+	if (ret != 0)
+		PMD_INIT_LOG(ERR, "Failed to configure RSS");
+
+	return ret;
+}
+
 static int
 cpfl_dev_configure(struct rte_eth_dev *dev)
 {
 	struct idpf_vport *vport = dev->data->dev_private;
 	struct rte_eth_conf *conf = &dev->data->dev_conf;
+	struct idpf_adapter *adapter = vport->adapter;
+	int ret;
 
 	if (conf->link_speeds & RTE_ETH_LINK_SPEED_FIXED) {
 		PMD_INIT_LOG(ERR, "Setting link speed is not supported");
@@ -205,6 +245,17 @@ cpfl_dev_configure(struct rte_eth_dev *dev)
 		return -ENOTSUP;
 	}
 
+	if (adapter->caps.rss_caps != 0 && dev->data->nb_rx_queues != 0) {
+		ret = cpfl_init_rss(vport);
+		if (ret != 0) {
+			PMD_INIT_LOG(ERR, "Failed to init rss");
+			return ret;
+		}
+	} else {
+		PMD_INIT_LOG(ERR, "RSS is not supported.");
+		return -ENOTSUP;
+	}
+
 	vport->max_pkt_len =
 		(dev->data->mtu == 0) ? CPFL_DEFAULT_MTU : dev->data->mtu +
 		CPFL_ETH_OVERHEAD;
diff --git a/drivers/net/cpfl/cpfl_ethdev.h b/drivers/net/cpfl/cpfl_ethdev.h
index 9ae543c2ad..0d60ee3aed 100644
--- a/drivers/net/cpfl/cpfl_ethdev.h
+++ b/drivers/net/cpfl/cpfl_ethdev.h
@@ -36,6 +36,21 @@
 #define CPFL_ETH_OVERHEAD \
 	(RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN + CPFL_VLAN_TAG_SIZE * 2)
 
+#define CPFL_RSS_OFFLOAD_ALL (				\
+		RTE_ETH_RSS_IPV4                |	\
+		RTE_ETH_RSS_FRAG_IPV4           |	\
+		RTE_ETH_RSS_NONFRAG_IPV4_TCP    |	\
+		RTE_ETH_RSS_NONFRAG_IPV4_UDP    |	\
+		RTE_ETH_RSS_NONFRAG_IPV4_SCTP   |	\
+		RTE_ETH_RSS_NONFRAG_IPV4_OTHER  |	\
+		RTE_ETH_RSS_IPV6                |	\
+		RTE_ETH_RSS_FRAG_IPV6           |	\
+		RTE_ETH_RSS_NONFRAG_IPV6_TCP    |	\
+		RTE_ETH_RSS_NONFRAG_IPV6_UDP    |	\
+		RTE_ETH_RSS_NONFRAG_IPV6_SCTP   |	\
+		RTE_ETH_RSS_NONFRAG_IPV6_OTHER  |	\
+		RTE_ETH_RSS_L2_PAYLOAD)
+
 #define CPFL_ADAPTER_NAME_LEN	(PCI_PRI_STR_SIZE + 1)
 
 #define CPFL_ALARM_INTERVAL	50000 /* us */
-- 
2.25.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v2 13/21] net/cpfl: support Rx offloading
  2023-01-13  8:19 ` [PATCH v2 00/21] add support for cpfl PMD in DPDK Mingxia Liu
                     ` (11 preceding siblings ...)
  2023-01-13  8:19   ` [PATCH v2 12/21] net/cpfl: support RSS Mingxia Liu
@ 2023-01-13  8:19   ` Mingxia Liu
  2023-01-13  8:19   ` [PATCH v2 14/21] net/cpfl: support Tx offloading Mingxia Liu
                     ` (11 subsequent siblings)
  24 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-01-13  8:19 UTC (permalink / raw)
  To: dev, qi.z.zhang, jingjing.wu, beilei.xing; +Cc: wenjun1.wu, Mingxia Liu

Add Rx offloading support (usage sketch after the list):
 - support CHKSUM and RSS offload for split queue model
 - support CHKSUM offload for single queue model
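
A sketch of requesting these offloads port-wide (port and flag set are
assumptions):

    #include <rte_ethdev.h>

    /* Hypothetical sketch: enable Rx checksum offloads at configure time. */
    static int
    configure_rx_cksum(void)
    {
            struct rte_eth_conf conf = {
                    .rxmode = {
                            .offloads = RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
                                        RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
                                        RTE_ETH_RX_OFFLOAD_TCP_CKSUM,
                    },
            };

            /* Per packet, check ol_flags, e.g. RTE_MBUF_F_RX_IP_CKSUM_GOOD. */
            return rte_eth_dev_configure(0, 1, 1, &conf);
    }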

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 doc/guides/nics/features/cpfl.ini | 2 ++
 drivers/net/cpfl/cpfl_ethdev.c    | 6 ++++++
 2 files changed, 8 insertions(+)

diff --git a/doc/guides/nics/features/cpfl.ini b/doc/guides/nics/features/cpfl.ini
index 470ba81579..ee5948f444 100644
--- a/doc/guides/nics/features/cpfl.ini
+++ b/doc/guides/nics/features/cpfl.ini
@@ -8,6 +8,8 @@
 ;
 [Features]
 MTU update           = Y
+L3 checksum offload  = P
+L4 checksum offload  = P
 Linux                = Y
 x86-32               = Y
 x86-64               = Y
diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index 2be1f841e0..569f74197f 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -99,6 +99,12 @@ cpfl_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 
 	dev_info->flow_type_rss_offloads = CPFL_RSS_OFFLOAD_ALL;
 
+	dev_info->rx_offload_capa =
+		RTE_ETH_RX_OFFLOAD_IPV4_CKSUM           |
+		RTE_ETH_RX_OFFLOAD_UDP_CKSUM            |
+		RTE_ETH_RX_OFFLOAD_TCP_CKSUM            |
+		RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM;
+
 	dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 	dev_info->default_txconf = (struct rte_eth_txconf) {
-- 
2.25.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v2 14/21] net/cpfl: support Tx offloading
  2023-01-13  8:19 ` [PATCH v2 00/21] add support for cpfl PMD in DPDK Mingxia Liu
                     ` (12 preceding siblings ...)
  2023-01-13  8:19   ` [PATCH v2 13/21] net/cpfl: support Rx offloading Mingxia Liu
@ 2023-01-13  8:19   ` Mingxia Liu
  2023-01-13  8:19   ` [PATCH v2 15/21] net/cpfl: add AVX512 data path for single queue model Mingxia Liu
                     ` (10 subsequent siblings)
  24 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-01-13  8:19 UTC (permalink / raw)
  To: dev, qi.z.zhang, jingjing.wu, beilei.xing; +Cc: wenjun1.wu, Mingxia Liu

Add Tx offloading support (usage sketch after the list):
 - support TSO for single queue model and split queue model.
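
A sketch of the per-packet TSO request an application issues (untagged
IPv4/TCP and the segment size are assumptions):

    #include <rte_mbuf.h>
    #include <rte_ether.h>
    #include <rte_ip.h>
    #include <rte_tcp.h>

    /* Hypothetical sketch: mark one mbuf for TSO before tx_burst. */
    static void
    request_tso(struct rte_mbuf *m)
    {
            m->l2_len = sizeof(struct rte_ether_hdr);
            m->l3_len = sizeof(struct rte_ipv4_hdr);
            m->l4_len = sizeof(struct rte_tcp_hdr);
            m->tso_segsz = 1448;
            m->ol_flags |= RTE_MBUF_F_TX_TCP_SEG | RTE_MBUF_F_TX_IPV4 |
                           RTE_MBUF_F_TX_IP_CKSUM;
    }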

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 doc/guides/nics/features/cpfl.ini | 1 +
 drivers/net/cpfl/cpfl_ethdev.c    | 8 +++++++-
 2 files changed, 8 insertions(+), 1 deletion(-)

diff --git a/doc/guides/nics/features/cpfl.ini b/doc/guides/nics/features/cpfl.ini
index ee5948f444..f4e45c7c68 100644
--- a/doc/guides/nics/features/cpfl.ini
+++ b/doc/guides/nics/features/cpfl.ini
@@ -8,6 +8,7 @@
 ;
 [Features]
 MTU update           = Y
+TSO                  = P
 L3 checksum offload  = P
 L4 checksum offload  = P
 Linux                = Y
diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index 569f74197f..01a730716e 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -105,7 +105,13 @@ cpfl_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 		RTE_ETH_RX_OFFLOAD_TCP_CKSUM            |
 		RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM;
 
-	dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
+	dev_info->tx_offload_capa =
+		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM		|
+		RTE_ETH_TX_OFFLOAD_UDP_CKSUM		|
+		RTE_ETH_TX_OFFLOAD_TCP_CKSUM		|
+		RTE_ETH_TX_OFFLOAD_SCTP_CKSUM		|
+		RTE_ETH_TX_OFFLOAD_TCP_TSO		|
+		RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 	dev_info->default_txconf = (struct rte_eth_txconf) {
 		.tx_free_thresh = CPFL_DEFAULT_TX_FREE_THRESH,
-- 
2.25.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v2 15/21] net/cpfl: add AVX512 data path for single queue model
  2023-01-13  8:19 ` [PATCH v2 00/21] add support for cpfl PMD in DPDK Mingxia Liu
                     ` (13 preceding siblings ...)
  2023-01-13  8:19   ` [PATCH v2 14/21] net/cpfl: support Tx offloading Mingxia Liu
@ 2023-01-13  8:19   ` Mingxia Liu
  2023-01-13  8:19   ` [PATCH v2 16/21] net/cpfl: support timestamp offload Mingxia Liu
                     ` (9 subsequent siblings)
  24 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-01-13  8:19 UTC (permalink / raw)
  To: dev, qi.z.zhang, jingjing.wu, beilei.xing; +Cc: wenjun1.wu, Mingxia Liu

Add support for AVX512 vector data path for single queue model.
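
A sketch of the same runtime eligibility test from application code
(mirrors the checks the driver applies below):

    #include <stdio.h>
    #include <rte_cpuflags.h>
    #include <rte_vect.h>

    /* Hypothetical sketch: report whether the AVX512 path can be taken. */
    static void
    report_avx512(void)
    {
            if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512F) &&
                rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_512)
                    printf("AVX512 single queue data path eligible\n");
    }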

Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 doc/guides/nics/cpfl.rst                |  24 +++++-
 drivers/net/cpfl/cpfl_ethdev.c          |   3 +-
 drivers/net/cpfl/cpfl_rxtx.c            |  85 ++++++++++++++++++++
 drivers/net/cpfl/cpfl_rxtx_vec_common.h | 100 ++++++++++++++++++++++++
 drivers/net/cpfl/meson.build            |  25 +++++-
 5 files changed, 234 insertions(+), 3 deletions(-)
 create mode 100644 drivers/net/cpfl/cpfl_rxtx_vec_common.h

diff --git a/doc/guides/nics/cpfl.rst b/doc/guides/nics/cpfl.rst
index 064c69ba7d..489a2d6153 100644
--- a/doc/guides/nics/cpfl.rst
+++ b/doc/guides/nics/cpfl.rst
@@ -63,4 +63,26 @@ Runtime Config Options
 Driver compilation and testing
 ------------------------------
 
-Refer to the document :doc:`build_and_test` for details.
\ No newline at end of file
+Refer to the document :doc:`build_and_test` for details.
+
+Features
+--------
+
+Vector PMD
+~~~~~~~~~~
+
+Vector paths for Rx and Tx are selected automatically.
+The paths are chosen based on two conditions:
+
+- ``CPU``
+
+  On the x86 platform, the driver checks if the CPU supports AVX512.
+  If the CPU supports AVX512 and EAL argument ``--force-max-simd-bitwidth``
+  is set to 512, AVX512 paths will be chosen.
+
+- ``Offload features``
+
+  The supported HW offload features are described in the document cpfl.ini.
+  A value "P" means the offload feature is not supported by the vector path.
+  If any unsupported features are used, the cpfl vector PMD is disabled
+  and the scalar paths are chosen.
diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index 01a730716e..a4ebbb9821 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -111,7 +111,8 @@ cpfl_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 		RTE_ETH_TX_OFFLOAD_TCP_CKSUM		|
 		RTE_ETH_TX_OFFLOAD_SCTP_CKSUM		|
 		RTE_ETH_TX_OFFLOAD_TCP_TSO		|
-		RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
+		RTE_ETH_TX_OFFLOAD_MULTI_SEGS		|
+		RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 	dev_info->default_txconf = (struct rte_eth_txconf) {
 		.tx_free_thresh = CPFL_DEFAULT_TX_FREE_THRESH,
diff --git a/drivers/net/cpfl/cpfl_rxtx.c b/drivers/net/cpfl/cpfl_rxtx.c
index 0e053f4434..a5bb3c728b 100644
--- a/drivers/net/cpfl/cpfl_rxtx.c
+++ b/drivers/net/cpfl/cpfl_rxtx.c
@@ -8,6 +8,7 @@
 
 #include "cpfl_ethdev.h"
 #include "cpfl_rxtx.h"
+#include "cpfl_rxtx_vec_common.h"
 
 static uint64_t
 cpfl_rx_offload_convert(uint64_t offload)
@@ -739,22 +740,106 @@ void
 cpfl_set_rx_function(struct rte_eth_dev *dev)
 {
 	struct idpf_vport *vport = dev->data->dev_private;
+#ifdef RTE_ARCH_X86
+	struct idpf_rx_queue *rxq;
+	int i;
+
+	if (cpfl_rx_vec_dev_check_default(dev) == CPFL_VECTOR_PATH &&
+	    rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_128) {
+		vport->rx_vec_allowed = true;
+
+		if (rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_512)
+#ifdef CC_AVX512_SUPPORT
+			if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512F) == 1 &&
+			    rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512BW) == 1 &&
+			    rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512DQ))
+				vport->rx_use_avx512 = true;
+#else
+		PMD_DRV_LOG(NOTICE,
+			    "AVX512 is not supported in build env");
+#endif /* CC_AVX512_SUPPORT */
+	} else {
+		vport->rx_vec_allowed = false;
+	}
+#endif /* RTE_ARCH_X86 */
+
+#ifdef RTE_ARCH_X86
+	if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
+		dev->rx_pkt_burst = idpf_splitq_recv_pkts;
+	} else {
+		if (vport->rx_vec_allowed) {
+			for (i = 0; i < dev->data->nb_rx_queues; i++) {
+				rxq = dev->data->rx_queues[i];
+				(void)idpf_singleq_rx_vec_setup(rxq);
+			}
+#ifdef CC_AVX512_SUPPORT
+			if (vport->rx_use_avx512) {
+				dev->rx_pkt_burst = idpf_singleq_recv_pkts_avx512;
+				return;
+			}
+#endif /* CC_AVX512_SUPPORT */
+		}
 
+		dev->rx_pkt_burst = idpf_singleq_recv_pkts;
+	}
+#else
 	if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT)
 		dev->rx_pkt_burst = idpf_splitq_recv_pkts;
 	else
 		dev->rx_pkt_burst = idpf_singleq_recv_pkts;
+#endif /* RTE_ARCH_X86 */
 }
 
 void
 cpfl_set_tx_function(struct rte_eth_dev *dev)
 {
 	struct idpf_vport *vport = dev->data->dev_private;
+#ifdef RTE_ARCH_X86
+#ifdef CC_AVX512_SUPPORT
+	struct idpf_tx_queue *txq;
+	int i;
+#endif /* CC_AVX512_SUPPORT */
+
+	if (cpfl_tx_vec_dev_check_default(dev) == CPFL_VECTOR_PATH &&
+	    rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_128) {
+		vport->tx_vec_allowed = true;
+		if (rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_512)
+#ifdef CC_AVX512_SUPPORT
+		{
+			if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512F) == 1 &&
+			    rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512BW) == 1)
+				vport->tx_use_avx512 = true;
+			if (vport->tx_use_avx512) {
+				for (i = 0; i < dev->data->nb_tx_queues; i++) {
+					txq = dev->data->tx_queues[i];
+					idpf_tx_vec_setup_avx512(txq);
+				}
+			}
+		}
+#else
+		PMD_DRV_LOG(NOTICE,
+			    "AVX512 is not supported in build env");
+#endif /* CC_AVX512_SUPPORT */
+	} else {
+		vport->tx_vec_allowed = false;
+	}
+#endif /* RTE_ARCH_X86 */
 
 	if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
 		dev->tx_pkt_burst = idpf_splitq_xmit_pkts;
 		dev->tx_pkt_prepare = idpf_prep_pkts;
 	} else {
+#ifdef RTE_ARCH_X86
+		if (vport->tx_vec_allowed) {
+#ifdef CC_AVX512_SUPPORT
+			if (vport->tx_use_avx512) {
+				dev->tx_pkt_burst = idpf_singleq_xmit_pkts_avx512;
+				dev->tx_pkt_prepare = idpf_prep_pkts;
+				return;
+			}
+#endif /* CC_AVX512_SUPPORT */
+		}
+#endif /* RTE_ARCH_X86 */
 		dev->tx_pkt_burst = idpf_singleq_xmit_pkts;
 		dev->tx_pkt_prepare = idpf_prep_pkts;
 	}
diff --git a/drivers/net/cpfl/cpfl_rxtx_vec_common.h b/drivers/net/cpfl/cpfl_rxtx_vec_common.h
new file mode 100644
index 0000000000..503bc87f21
--- /dev/null
+++ b/drivers/net/cpfl/cpfl_rxtx_vec_common.h
@@ -0,0 +1,100 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2022 Intel Corporation
+ */
+
+#ifndef _CPFL_RXTX_VEC_COMMON_H_
+#define _CPFL_RXTX_VEC_COMMON_H_
+#include <stdint.h>
+#include <ethdev_driver.h>
+#include <rte_malloc.h>
+
+#include "cpfl_ethdev.h"
+#include "cpfl_rxtx.h"
+
+#ifndef __INTEL_COMPILER
+#pragma GCC diagnostic ignored "-Wcast-qual"
+#endif
+
+#define CPFL_SCALAR_PATH		0
+#define CPFL_VECTOR_PATH		1
+#define CPFL_RX_NO_VECTOR_FLAGS (		\
+		RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |	\
+		RTE_ETH_RX_OFFLOAD_UDP_CKSUM |	\
+		RTE_ETH_RX_OFFLOAD_TCP_CKSUM |	\
+		RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |	\
+		RTE_ETH_RX_OFFLOAD_TIMESTAMP)
+#define CPFL_TX_NO_VECTOR_FLAGS (		\
+		RTE_ETH_TX_OFFLOAD_TCP_TSO |	\
+		RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
+
+static inline int
+cpfl_rx_vec_queue_default(struct idpf_rx_queue *rxq)
+{
+	if (rxq == NULL)
+		return CPFL_SCALAR_PATH;
+
+	if (rte_is_power_of_2(rxq->nb_rx_desc) == 0)
+		return CPFL_SCALAR_PATH;
+
+	if (rxq->rx_free_thresh < IDPF_VPMD_RX_MAX_BURST)
+		return CPFL_SCALAR_PATH;
+
+	if ((rxq->nb_rx_desc % rxq->rx_free_thresh) != 0)
+		return CPFL_SCALAR_PATH;
+
+	if ((rxq->offloads & CPFL_RX_NO_VECTOR_FLAGS) != 0)
+		return CPFL_SCALAR_PATH;
+
+	return CPFL_VECTOR_PATH;
+}
+
+static inline int
+cpfl_tx_vec_queue_default(struct idpf_tx_queue *txq)
+{
+	if (txq == NULL)
+		return CPFL_SCALAR_PATH;
+
+	if (txq->rs_thresh < IDPF_VPMD_TX_MAX_BURST ||
+	    (txq->rs_thresh & 3) != 0)
+		return CPFL_SCALAR_PATH;
+
+	if ((txq->offloads & CPFL_TX_NO_VECTOR_FLAGS) != 0)
+		return CPFL_SCALAR_PATH;
+
+	return CPFL_VECTOR_PATH;
+}
+
+static inline int
+cpfl_rx_vec_dev_check_default(struct rte_eth_dev *dev)
+{
+	struct idpf_rx_queue *rxq;
+	int i, ret = 0;
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		rxq = dev->data->rx_queues[i];
+		ret = (cpfl_rx_vec_queue_default(rxq));
+		if (ret == CPFL_SCALAR_PATH)
+			return CPFL_SCALAR_PATH;
+	}
+
+	return CPFL_VECTOR_PATH;
+}
+
+static inline int
+cpfl_tx_vec_dev_check_default(struct rte_eth_dev *dev)
+{
+	int i;
+	struct idpf_tx_queue *txq;
+	int ret = 0;
+
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		txq = dev->data->tx_queues[i];
+		ret = cpfl_tx_vec_queue_default(txq);
+		if (ret == CPFL_SCALAR_PATH)
+			return CPFL_SCALAR_PATH;
+	}
+
+	return CPFL_VECTOR_PATH;
+}
+
+#endif /*_CPFL_RXTX_VEC_COMMON_H_*/
diff --git a/drivers/net/cpfl/meson.build b/drivers/net/cpfl/meson.build
index 3ccee15703..40ed8dbb7b 100644
--- a/drivers/net/cpfl/meson.build
+++ b/drivers/net/cpfl/meson.build
@@ -7,9 +7,32 @@ if is_windows
     subdir_done()
 endif
 
+if dpdk_conf.get('RTE_IOVA_AS_PA') == 0
+    build = false
+    reason = 'driver does not support disabling IOVA as PA mode'
+    subdir_done()
+endif
+
 deps += ['common_idpf']
 
 sources = files(
         'cpfl_ethdev.c',
         'cpfl_rxtx.c',
-)
\ No newline at end of file
+)
+
+if arch_subdir == 'x86'
+    cpfl_avx512_cpu_support = (
+        cc.get_define('__AVX512F__', args: machine_args) != '' and
+        cc.get_define('__AVX512BW__', args: machine_args) != ''
+    )
+
+    cpfl_avx512_cc_support = (
+        not machine_args.contains('-mno-avx512f') and
+        cc.has_argument('-mavx512f') and
+        cc.has_argument('-mavx512bw')
+    )
+
+    if cpfl_avx512_cpu_support == true or cpfl_avx512_cc_support == true
+        cflags += ['-DCC_AVX512_SUPPORT']
+    endif
+endif
-- 
2.25.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v2 16/21] net/cpfl: support timestamp offload
  2023-01-13  8:19 ` [PATCH v2 00/21] add support for cpfl PMD in DPDK Mingxia Liu
                     ` (14 preceding siblings ...)
  2023-01-13  8:19   ` [PATCH v2 15/21] net/cpfl: add AVX512 data path for single queue model Mingxia Liu
@ 2023-01-13  8:19   ` Mingxia Liu
  2023-01-13  8:19   ` [PATCH v2 17/21] net/cpfl: add AVX512 data path for split queue model Mingxia Liu
                     ` (8 subsequent siblings)
  24 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-01-13  8:19 UTC (permalink / raw)
  To: dev, qi.z.zhang, jingjing.wu, beilei.xing; +Cc: wenjun1.wu, Mingxia Liu

Add support for timestamp offload.

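[Editor's note] For context, a hedged application-side sketch (not part of the patch; helper names are hypothetical): enable RTE_ETH_RX_OFFLOAD_TIMESTAMP at configure time, then look up the dynamic mbuf field/flag that the PMD registers and read the timestamp per received packet.

#include <rte_bitops.h>
#include <rte_mbuf.h>
#include <rte_mbuf_dyn.h>

static int cpfl_ts_off = -1;  /* dynamic field offset */
static uint64_t cpfl_ts_flag; /* dynamic flag mask */

static int
rx_timestamp_setup(void)
{
	int off, bit;

	/* The field/flag are registered once the offload is enabled, so
	 * this must run after rte_eth_dev_configure() with
	 * RTE_ETH_RX_OFFLOAD_TIMESTAMP set in rxmode.offloads.
	 */
	off = rte_mbuf_dynfield_lookup(RTE_MBUF_DYNFIELD_TIMESTAMP_NAME, NULL);
	bit = rte_mbuf_dynflag_lookup(RTE_MBUF_DYNFLAG_RX_TIMESTAMP_NAME, NULL);
	if (off < 0 || bit < 0)
		return -1;
	cpfl_ts_off = off;
	cpfl_ts_flag = RTE_BIT64(bit);
	return 0;
}

static uint64_t
rx_timestamp_read(const struct rte_mbuf *m)
{
	if ((m->ol_flags & cpfl_ts_flag) == 0)
		return 0; /* no timestamp attached to this packet */
	return *RTE_MBUF_DYNFIELD(m, cpfl_ts_off, rte_mbuf_timestamp_t *);
}
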
Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 doc/guides/nics/features/cpfl.ini | 1 +
 drivers/net/cpfl/cpfl_ethdev.c    | 3 ++-
 drivers/net/cpfl/cpfl_rxtx.c      | 7 +++++++
 3 files changed, 10 insertions(+), 1 deletion(-)

diff --git a/doc/guides/nics/features/cpfl.ini b/doc/guides/nics/features/cpfl.ini
index f4e45c7c68..c1209df3e5 100644
--- a/doc/guides/nics/features/cpfl.ini
+++ b/doc/guides/nics/features/cpfl.ini
@@ -11,6 +11,7 @@ MTU update           = Y
 TSO                  = P
 L3 checksum offload  = P
 L4 checksum offload  = P
+Timestamp offload    = P
 Linux                = Y
 x86-32               = Y
 x86-64               = Y
diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index a4ebbb9821..5fc40f8298 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -103,7 +103,8 @@ cpfl_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 		RTE_ETH_RX_OFFLOAD_IPV4_CKSUM           |
 		RTE_ETH_RX_OFFLOAD_UDP_CKSUM            |
 		RTE_ETH_RX_OFFLOAD_TCP_CKSUM            |
-		RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM;
+		RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM     |
+		RTE_ETH_RX_OFFLOAD_TIMESTAMP;
 
 	dev_info->tx_offload_capa =
 		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM		|
diff --git a/drivers/net/cpfl/cpfl_rxtx.c b/drivers/net/cpfl/cpfl_rxtx.c
index a5bb3c728b..3101e59ee1 100644
--- a/drivers/net/cpfl/cpfl_rxtx.c
+++ b/drivers/net/cpfl/cpfl_rxtx.c
@@ -516,6 +516,13 @@ cpfl_rx_queue_init(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 		return -EINVAL;
 	}
 
+	err = idpf_register_ts_mbuf(rxq);
+	if (err != 0) {
+		PMD_DRV_LOG(ERR, "fail to register timestamp mbuf %u",
+					rx_queue_id);
+		return -EIO;
+	}
+
 	if (rxq->bufq1 == NULL) {
 		/* Single queue */
 		err = idpf_alloc_single_rxq_mbufs(rxq);
-- 
2.25.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v2 17/21] net/cpfl: add AVX512 data path for split queue model
  2023-01-13  8:19 ` [PATCH v2 00/21] add support for cpfl PMD in DPDK Mingxia Liu
                     ` (15 preceding siblings ...)
  2023-01-13  8:19   ` [PATCH v2 16/21] net/cpfl: support timestamp offload Mingxia Liu
@ 2023-01-13  8:19   ` Mingxia Liu
  2023-01-13  8:19   ` [PATCH v2 18/21] net/cpfl: add hw statistics Mingxia Liu
                     ` (7 subsequent siblings)
  24 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-01-13  8:19 UTC (permalink / raw)
  To: dev, qi.z.zhang, jingjing.wu, beilei.xing; +Cc: wenjun1.wu, Mingxia Liu

Add support for the AVX512 data path for the split queue model.

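[Editor's note] The split queue model adds one constraint on top of the common vector checks: the vector path cannot chain mbufs, so every packet must fit into a single buffer from the second buffer queue. A minimal sketch of the combined decision (illustrative only; it mirrors the logic added to cpfl_rx_vec_dev_check_default() below):

#include <stdint.h>

#define CPFL_SCALAR_PATH 0
#define CPFL_VECTOR_PATH 1

/* Illustrative combination of the common and split-queue-specific
 * checks: a split queue additionally requires rx_buf_len >= max_pkt_len.
 */
static int
rx_vec_path(int is_splitq, int common_ok, uint32_t rx_buf_len,
	    uint32_t max_pkt_len)
{
	int splitq_ok = rx_buf_len >= max_pkt_len ?
			CPFL_VECTOR_PATH : CPFL_SCALAR_PATH;

	return is_splitq ? (splitq_ok && common_ok) : common_ok;
}
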
Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/net/cpfl/cpfl_rxtx.c            | 25 +++++++++++++++++++++++++
 drivers/net/cpfl/cpfl_rxtx_vec_common.h | 17 +++++++++++++++--
 2 files changed, 40 insertions(+), 2 deletions(-)

diff --git a/drivers/net/cpfl/cpfl_rxtx.c b/drivers/net/cpfl/cpfl_rxtx.c
index 3101e59ee1..c797e09b52 100644
--- a/drivers/net/cpfl/cpfl_rxtx.c
+++ b/drivers/net/cpfl/cpfl_rxtx.c
@@ -772,6 +772,20 @@ cpfl_set_rx_function(struct rte_eth_dev *dev)
 
 #ifdef RTE_ARCH_X86
 	if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
+#ifdef RTE_ARCH_X86
+		if (vport->rx_vec_allowed) {
+			for (i = 0; i < dev->data->nb_rx_queues; i++) {
+				rxq = dev->data->rx_queues[i];
+				(void)idpf_splitq_rx_vec_setup(rxq);
+			}
+#ifdef CC_AVX512_SUPPORT
+			if (vport->rx_use_avx512) {
+				dev->rx_pkt_burst = idpf_splitq_recv_pkts_avx512;
+				return;
+			}
+#endif
+		}
+#endif
 		dev->rx_pkt_burst = idpf_splitq_recv_pkts;
 	} else {
 		if (vport->rx_vec_allowed) {
@@ -833,6 +847,17 @@ cpfl_set_tx_function(struct rte_eth_dev *dev)
 #endif /* RTE_ARCH_X86 */
 
 	if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
+#ifdef RTE_ARCH_X86
+		if (vport->tx_vec_allowed) {
+#ifdef CC_AVX512_SUPPORT
+			if (vport->tx_use_avx512) {
+				dev->tx_pkt_burst = idpf_splitq_xmit_pkts_avx512;
+				dev->tx_pkt_prepare = idpf_prep_pkts;
+				return;
+			}
+#endif
+		}
+#endif
 		dev->tx_pkt_burst = idpf_splitq_xmit_pkts;
 		dev->tx_pkt_prepare = idpf_prep_pkts;
 	} else {
diff --git a/drivers/net/cpfl/cpfl_rxtx_vec_common.h b/drivers/net/cpfl/cpfl_rxtx_vec_common.h
index 503bc87f21..63e52dd937 100644
--- a/drivers/net/cpfl/cpfl_rxtx_vec_common.h
+++ b/drivers/net/cpfl/cpfl_rxtx_vec_common.h
@@ -64,15 +64,28 @@ cpfl_tx_vec_queue_default(struct idpf_tx_queue *txq)
 	return CPFL_VECTOR_PATH;
 }
 
+static inline int
+cpfl_rx_splitq_vec_default(struct idpf_rx_queue *rxq)
+{
+	if (rxq->bufq2->rx_buf_len < rxq->max_pkt_len)
+		return CPFL_SCALAR_PATH;
+
+	return CPFL_VECTOR_PATH;
+}
+
 static inline int
 cpfl_rx_vec_dev_check_default(struct rte_eth_dev *dev)
 {
+	struct idpf_vport *vport = dev->data->dev_private;
 	struct idpf_rx_queue *rxq;
-	int i, ret = 0;
+	int i, default_ret, splitq_ret, ret = CPFL_SCALAR_PATH;
 
 	for (i = 0; i < dev->data->nb_rx_queues; i++) {
 		rxq = dev->data->rx_queues[i];
-		ret = (cpfl_rx_vec_queue_default(rxq));
+		splitq_ret = cpfl_rx_splitq_vec_default(rxq);
+		default_ret = cpfl_rx_vec_queue_default(rxq);
+		ret = (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) ?
+		      splitq_ret && default_ret : default_ret;
 		if (ret == CPFL_SCALAR_PATH)
 			return CPFL_SCALAR_PATH;
 	}
-- 
2.25.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v2 18/21] net/cpfl: add hw statistics
  2023-01-13  8:19 ` [PATCH v2 00/21] add support for cpfl PMD in DPDK Mingxia Liu
                     ` (16 preceding siblings ...)
  2023-01-13  8:19   ` [PATCH v2 17/21] net/cpfl: add AVX512 data path for split queue model Mingxia Liu
@ 2023-01-13  8:19   ` Mingxia Liu
  2023-01-13  8:19   ` [PATCH v2 19/21] net/cpfl: add RSS set/get ops Mingxia Liu
                     ` (6 subsequent siblings)
  24 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-01-13  8:19 UTC (permalink / raw)
  To: dev, qi.z.zhang, jingjing.wu, beilei.xing; +Cc: wenjun1.wu, Mingxia Liu

This patch adds hardware packet and byte statistics.

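[Editor's note] As a usage sketch (not part of the patch), the new ops are reached through the generic ethdev stats API; "port_id" is a hypothetical variable:

#include <inttypes.h>
#include <stdio.h>
#include <rte_ethdev.h>

/* Illustrative only: retrieve and reset the basic stats exposed by the
 * new stats_get/stats_reset ops.
 */
static void
print_basic_stats(uint16_t port_id)
{
	struct rte_eth_stats stats;

	if (rte_eth_stats_get(port_id, &stats) != 0)
		return;

	printf("rx: packets=%" PRIu64 " bytes=%" PRIu64 " missed=%" PRIu64 "\n",
	       stats.ipackets, stats.ibytes, stats.imissed);
	printf("tx: packets=%" PRIu64 " bytes=%" PRIu64 " errors=%" PRIu64 "\n",
	       stats.opackets, stats.obytes, stats.oerrors);

	rte_eth_stats_reset(port_id); /* exercises the new stats_reset op */
}
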
Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/net/cpfl/cpfl_ethdev.c | 88 ++++++++++++++++++++++++++++++++++
 drivers/net/idpf/idpf_ethdev.c |  3 +-
 2 files changed, 90 insertions(+), 1 deletion(-)

diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index 5fc40f8298..9ae653b99a 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -178,6 +178,87 @@ cpfl_dev_supported_ptypes_get(struct rte_eth_dev *dev __rte_unused)
 	return ptypes;
 }
 
+static uint64_t
+cpfl_get_mbuf_alloc_failed_stats(struct rte_eth_dev *dev)
+{
+	uint64_t mbuf_alloc_failed = 0;
+	struct idpf_rx_queue *rxq;
+	int i = 0;
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		rxq = dev->data->rx_queues[i];
+		mbuf_alloc_failed +=
+		    rte_atomic64_read(&(rxq->rx_stats.mbuf_alloc_failed));
+	}
+
+	return mbuf_alloc_failed;
+}
+
+static int
+cpfl_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
+{
+	struct idpf_vport *vport =
+		(struct idpf_vport *)dev->data->dev_private;
+	struct virtchnl2_vport_stats *pstats = NULL;
+	int ret;
+
+	ret = idpf_query_stats(vport, &pstats);
+	if (ret == 0) {
+		uint8_t crc_stats_len = (dev->data->dev_conf.rxmode.offloads &
+					 RTE_ETH_RX_OFFLOAD_KEEP_CRC) ? 0 :
+					 RTE_ETHER_CRC_LEN;
+
+		idpf_update_stats(&vport->eth_stats_offset, pstats);
+		stats->ipackets = pstats->rx_unicast + pstats->rx_multicast +
+				pstats->rx_broadcast - pstats->rx_discards;
+		stats->opackets = pstats->tx_broadcast + pstats->tx_multicast +
+						pstats->tx_unicast;
+		stats->imissed = pstats->rx_discards;
+		stats->oerrors = pstats->tx_errors + pstats->tx_discards;
+		stats->ibytes = pstats->rx_bytes;
+		stats->ibytes -= stats->ipackets * crc_stats_len;
+		stats->obytes = pstats->tx_bytes;
+
+		dev->data->rx_mbuf_alloc_failed = cpfl_get_mbuf_alloc_failed_stats(dev);
+		stats->rx_nombuf = dev->data->rx_mbuf_alloc_failed;
+	} else {
+		PMD_DRV_LOG(ERR, "Get statistics failed");
+	}
+	return ret;
+}
+
+static void
+cpfl_reset_mbuf_alloc_failed_stats(struct rte_eth_dev *dev)
+{
+	struct idpf_rx_queue *rxq;
+	int i;
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		rxq = dev->data->rx_queues[i];
+		rte_atomic64_set(&(rxq->rx_stats.mbuf_alloc_failed), 0);
+	}
+}
+
+static int
+cpfl_dev_stats_reset(struct rte_eth_dev *dev)
+{
+	struct idpf_vport *vport =
+		(struct idpf_vport *)dev->data->dev_private;
+	struct virtchnl2_vport_stats *pstats = NULL;
+	int ret;
+
+	ret = idpf_query_stats(vport, &pstats);
+	if (ret != 0)
+		return ret;
+
+	/* set stats offset based on current values */
+	vport->eth_stats_offset = *pstats;
+
+	cpfl_reset_mbuf_alloc_failed_stats(dev);
+
+	return 0;
+}
+
 static int
 cpfl_init_rss(struct idpf_vport *vport)
 {
@@ -365,6 +446,11 @@ cpfl_dev_start(struct rte_eth_dev *dev)
 		goto err_vport;
 	}
 
+	if (cpfl_dev_stats_reset(dev)) {
+		PMD_DRV_LOG(ERR, "Failed to reset stats");
+		goto err_vport;
+	}
+
 	vport->stopped = 0;
 
 	return 0;
@@ -766,6 +852,8 @@ static const struct eth_dev_ops cpfl_eth_dev_ops = {
 	.tx_queue_release		= cpfl_dev_tx_queue_release,
 	.mtu_set			= cpfl_dev_mtu_set,
 	.dev_supported_ptypes_get	= cpfl_dev_supported_ptypes_get,
+	.stats_get			= cpfl_dev_stats_get,
+	.stats_reset			= cpfl_dev_stats_reset,
 };
 
 static uint16_t
diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index bcd15db3c5..b2cf959ee7 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -824,13 +824,14 @@ idpf_dev_start(struct rte_eth_dev *dev)
 
 	if (idpf_dev_stats_reset(dev)) {
 		PMD_DRV_LOG(ERR, "Failed to reset stats");
-		goto err_vport;
+		goto err_stats_reset;
 	}
 
 	vport->stopped = 0;
 
 	return 0;
 
+err_stats_reset:
 err_vport:
 	idpf_stop_queues(dev);
 err_startq:
-- 
2.25.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v2 19/21] net/cpfl: add RSS set/get ops
  2023-01-13  8:19 ` [PATCH v2 00/21] add support for cpfl PMD in DPDK Mingxia Liu
                     ` (17 preceding siblings ...)
  2023-01-13  8:19   ` [PATCH v2 18/21] net/cpfl: add hw statistics Mingxia Liu
@ 2023-01-13  8:19   ` Mingxia Liu
  2023-01-13  8:19   ` [PATCH v2 20/21] net/cpfl: support single q scatter RX datapath Mingxia Liu
                     ` (5 subsequent siblings)
  24 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-01-13  8:19 UTC (permalink / raw)
  To: dev, qi.z.zhang, jingjing.wu, beilei.xing; +Cc: wenjun1.wu, Mingxia Liu

Add support for these device ops:
- rss_reta_update
- rss_reta_query
- rss_hash_update
- rss_hash_conf_get

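[Editor's note] A hedged application-side sketch (not part of the patch) exercising these ops through the generic ethdev API; "port_id" is hypothetical, and the key size comes from the hash_key_size value this patch reports in dev_info:

#include <string.h>
#include <rte_ethdev.h>

/* Illustrative only: query the RSS hash config, then update the hash
 * types, using the key size advertised in dev_info.
 */
static int
rss_ops_example(uint16_t port_id)
{
	struct rte_eth_dev_info info;
	struct rte_eth_rss_conf rss_conf;
	uint8_t key[128];
	int ret;

	ret = rte_eth_dev_info_get(port_id, &info);
	if (ret != 0)
		return ret;
	if (info.hash_key_size > sizeof(key))
		return -1;

	memset(&rss_conf, 0, sizeof(rss_conf));
	rss_conf.rss_key = key;
	rss_conf.rss_key_len = info.hash_key_size;
	ret = rte_eth_dev_rss_hash_conf_get(port_id, &rss_conf); /* rss_hash_conf_get */
	if (ret != 0)
		return ret;

	rss_conf.rss_hf = RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_NONFRAG_IPV4_TCP;
	return rte_eth_dev_rss_hash_update(port_id, &rss_conf); /* rss_hash_update */
}
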
Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/net/cpfl/cpfl_ethdev.c | 303 +++++++++++++++++++++++++++++++++
 1 file changed, 303 insertions(+)

diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index 9ae653b99a..b35224e1ec 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -30,6 +30,56 @@ static const char * const cpfl_valid_args[] = {
 	NULL
 };
 
+static const uint64_t cpfl_map_hena_rss[] = {
+	[IDPF_HASH_NONF_UNICAST_IPV4_UDP] =
+			RTE_ETH_RSS_NONFRAG_IPV4_UDP,
+	[IDPF_HASH_NONF_MULTICAST_IPV4_UDP] =
+			RTE_ETH_RSS_NONFRAG_IPV4_UDP,
+	[IDPF_HASH_NONF_IPV4_UDP] =
+			RTE_ETH_RSS_NONFRAG_IPV4_UDP,
+	[IDPF_HASH_NONF_IPV4_TCP_SYN_NO_ACK] =
+			RTE_ETH_RSS_NONFRAG_IPV4_TCP,
+	[IDPF_HASH_NONF_IPV4_TCP] =
+			RTE_ETH_RSS_NONFRAG_IPV4_TCP,
+	[IDPF_HASH_NONF_IPV4_SCTP] =
+			RTE_ETH_RSS_NONFRAG_IPV4_SCTP,
+	[IDPF_HASH_NONF_IPV4_OTHER] =
+			RTE_ETH_RSS_NONFRAG_IPV4_OTHER,
+	[IDPF_HASH_FRAG_IPV4] = RTE_ETH_RSS_FRAG_IPV4,
+
+	/* IPv6 */
+	[IDPF_HASH_NONF_UNICAST_IPV6_UDP] =
+			RTE_ETH_RSS_NONFRAG_IPV6_UDP,
+	[IDPF_HASH_NONF_MULTICAST_IPV6_UDP] =
+			RTE_ETH_RSS_NONFRAG_IPV6_UDP,
+	[IDPF_HASH_NONF_IPV6_UDP] =
+			RTE_ETH_RSS_NONFRAG_IPV6_UDP,
+	[IDPF_HASH_NONF_IPV6_TCP_SYN_NO_ACK] =
+			RTE_ETH_RSS_NONFRAG_IPV6_TCP,
+	[IDPF_HASH_NONF_IPV6_TCP] =
+			RTE_ETH_RSS_NONFRAG_IPV6_TCP,
+	[IDPF_HASH_NONF_IPV6_SCTP] =
+			RTE_ETH_RSS_NONFRAG_IPV6_SCTP,
+	[IDPF_HASH_NONF_IPV6_OTHER] =
+			RTE_ETH_RSS_NONFRAG_IPV6_OTHER,
+	[IDPF_HASH_FRAG_IPV6] = RTE_ETH_RSS_FRAG_IPV6,
+
+	/* L2 Payload */
+	[IDPF_HASH_L2_PAYLOAD] = RTE_ETH_RSS_L2_PAYLOAD
+};
+
+static const uint64_t cpfl_ipv4_rss = RTE_ETH_RSS_NONFRAG_IPV4_UDP |
+			  RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+			  RTE_ETH_RSS_NONFRAG_IPV4_SCTP |
+			  RTE_ETH_RSS_NONFRAG_IPV4_OTHER |
+			  RTE_ETH_RSS_FRAG_IPV4;
+
+static const uint64_t cpfl_ipv6_rss = RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+			  RTE_ETH_RSS_NONFRAG_IPV6_TCP |
+			  RTE_ETH_RSS_NONFRAG_IPV6_SCTP |
+			  RTE_ETH_RSS_NONFRAG_IPV6_OTHER |
+			  RTE_ETH_RSS_FRAG_IPV6;
+
 static int
 cpfl_dev_link_update(struct rte_eth_dev *dev,
 		     __rte_unused int wait_to_complete)
@@ -97,6 +147,9 @@ cpfl_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->max_mtu = vport->max_mtu;
 	dev_info->min_mtu = RTE_ETHER_MIN_MTU;
 
+	dev_info->hash_key_size = vport->rss_key_size;
+	dev_info->reta_size = vport->rss_lut_size;
+
 	dev_info->flow_type_rss_offloads = CPFL_RSS_OFFLOAD_ALL;
 
 	dev_info->rx_offload_capa =
@@ -259,6 +312,54 @@ cpfl_dev_stats_reset(struct rte_eth_dev *dev)
 	return 0;
 }
 
+static int cpfl_config_rss_hf(struct idpf_vport *vport, uint64_t rss_hf)
+{
+	uint64_t hena = 0, valid_rss_hf = 0;
+	int ret = 0;
+	uint16_t i;
+
+	/**
+	 * RTE_ETH_RSS_IPV4 and RTE_ETH_RSS_IPV6 can be considered as 2
+	 * generalizations of all other IPv4 and IPv6 RSS types.
+	 */
+	if (rss_hf & RTE_ETH_RSS_IPV4)
+		rss_hf |= cpfl_ipv4_rss;
+
+	if (rss_hf & RTE_ETH_RSS_IPV6)
+		rss_hf |= cpfl_ipv6_rss;
+
+	for (i = 0; i < RTE_DIM(cpfl_map_hena_rss); i++) {
+		uint64_t bit = BIT_ULL(i);
+
+		if (cpfl_map_hena_rss[i] & rss_hf) {
+			valid_rss_hf |= cpfl_map_hena_rss[i];
+			hena |= bit;
+		}
+	}
+
+	vport->rss_hf = hena;
+
+	ret = idpf_vc_set_rss_hash(vport);
+	if (ret != 0) {
+		PMD_DRV_LOG(WARNING,
+			    "fail to set RSS offload types, ret: %d", ret);
+		return ret;
+	}
+
+	if (valid_rss_hf & cpfl_ipv4_rss)
+		valid_rss_hf |= rss_hf & RTE_ETH_RSS_IPV4;
+
+	if (valid_rss_hf & cpfl_ipv6_rss)
+		valid_rss_hf |= rss_hf & RTE_ETH_RSS_IPV6;
+
+	if (rss_hf & ~valid_rss_hf)
+		PMD_DRV_LOG(WARNING, "Unsupported rss_hf 0x%" PRIx64,
+			    rss_hf & ~valid_rss_hf);
+	vport->last_general_rss_hf = valid_rss_hf;
+
+	return ret;
+}
+
 static int
 cpfl_init_rss(struct idpf_vport *vport)
 {
@@ -295,6 +396,204 @@ cpfl_init_rss(struct idpf_vport *vport)
 	return ret;
 }
 
+static int
+cpfl_rss_reta_update(struct rte_eth_dev *dev,
+		     struct rte_eth_rss_reta_entry64 *reta_conf,
+		     uint16_t reta_size)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_adapter *adapter = vport->adapter;
+	uint16_t idx, shift;
+	uint32_t *lut;
+	int ret = 0;
+	uint16_t i;
+
+	if (adapter->caps.rss_caps == 0 || dev->data->nb_rx_queues == 0) {
+		PMD_DRV_LOG(DEBUG, "RSS is not supported");
+		return -ENOTSUP;
+	}
+
+	if (reta_size != vport->rss_lut_size) {
+		PMD_DRV_LOG(ERR, "The size of the configured hash lookup "
+				 "table (%d) doesn't match the number the "
+				 "hardware can support (%d)",
+			    reta_size, vport->rss_lut_size);
+		return -EINVAL;
+	}
+
+	/* It MUST use the current LUT size to get the RSS lookup table,
+	 * otherwise it will fail with a -100 error code.
+	 */
+	lut = rte_zmalloc(NULL, reta_size * sizeof(uint32_t), 0);
+	if (!lut) {
+		PMD_DRV_LOG(ERR, "No memory can be allocated");
+		return -ENOMEM;
+	}
+	/* store the old lut table temporarily */
+	rte_memcpy(lut, vport->rss_lut, reta_size * sizeof(uint32_t));
+
+	for (i = 0; i < reta_size; i++) {
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
+		if (reta_conf[idx].mask & (1ULL << shift))
+			lut[i] = reta_conf[idx].reta[shift];
+	}
+
+	rte_memcpy(vport->rss_lut, lut, reta_size * sizeof(uint32_t));
+	/* send virtchnl ops to configure RSS */
+	ret = idpf_vc_set_rss_lut(vport);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to configure RSS lut");
+		goto out;
+	}
+out:
+	rte_free(lut);
+
+	return ret;
+}
+
+static int
+cpfl_rss_reta_query(struct rte_eth_dev *dev,
+		    struct rte_eth_rss_reta_entry64 *reta_conf,
+		    uint16_t reta_size)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_adapter *adapter = vport->adapter;
+	uint16_t idx, shift;
+	int ret = 0;
+	uint16_t i;
+
+	if (adapter->caps.rss_caps == 0 || dev->data->nb_rx_queues == 0) {
+		PMD_DRV_LOG(DEBUG, "RSS is not supported");
+		return -ENOTSUP;
+	}
+
+	if (reta_size != vport->rss_lut_size) {
+		PMD_DRV_LOG(ERR, "The size of the configured hash lookup "
+			"table (%d) doesn't match the number the hardware "
+			"can support (%d)", reta_size, vport->rss_lut_size);
+		return -EINVAL;
+	}
+
+	ret = idpf_vc_get_rss_lut(vport);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "Failed to get RSS LUT");
+		return ret;
+	}
+
+	for (i = 0; i < reta_size; i++) {
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
+		if (reta_conf[idx].mask & (1ULL << shift))
+			reta_conf[idx].reta[shift] = vport->rss_lut[i];
+	}
+
+	return 0;
+}
+
+static int
+cpfl_rss_hash_update(struct rte_eth_dev *dev,
+		     struct rte_eth_rss_conf *rss_conf)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_adapter *adapter = vport->adapter;
+	int ret = 0;
+
+	if (adapter->caps.rss_caps == 0 || dev->data->nb_rx_queues == 0) {
+		PMD_DRV_LOG(DEBUG, "RSS is not supported");
+		return -ENOTSUP;
+	}
+
+	if (!rss_conf->rss_key || rss_conf->rss_key_len == 0) {
+		PMD_DRV_LOG(DEBUG, "No key to be configured");
+		goto skip_rss_key;
+	} else if (rss_conf->rss_key_len != vport->rss_key_size) {
+		PMD_DRV_LOG(ERR, "The size of the configured hash key "
+				 "(%d) doesn't match the size the hardware "
+				 "can support (%d)",
+			    rss_conf->rss_key_len,
+			    vport->rss_key_size);
+		return -EINVAL;
+	}
+
+	rte_memcpy(vport->rss_key, rss_conf->rss_key,
+		   vport->rss_key_size);
+	ret = idpf_vc_set_rss_key(vport);
+	if (ret != 0) {
+		PMD_INIT_LOG(ERR, "Failed to configure RSS key");
+		return ret;
+	}
+
+skip_rss_key:
+	ret = cpfl_config_rss_hf(vport, rss_conf->rss_hf);
+	if (ret != 0) {
+		PMD_INIT_LOG(ERR, "Failed to configure RSS hash");
+		return ret;
+	}
+
+	return 0;
+}
+
+static uint64_t
+cpfl_map_general_rss_hf(uint64_t config_rss_hf, uint64_t last_general_rss_hf)
+{
+	uint64_t valid_rss_hf = 0;
+	uint16_t i;
+
+	for (i = 0; i < RTE_DIM(cpfl_map_hena_rss); i++) {
+		uint64_t bit = BIT_ULL(i);
+
+		if (bit & config_rss_hf)
+			valid_rss_hf |= cpfl_map_hena_rss[i];
+	}
+
+	if (valid_rss_hf & cpfl_ipv4_rss)
+		valid_rss_hf |= last_general_rss_hf & RTE_ETH_RSS_IPV4;
+
+	if (valid_rss_hf & cpfl_ipv6_rss)
+		valid_rss_hf |= last_general_rss_hf & RTE_ETH_RSS_IPV6;
+
+	return valid_rss_hf;
+}
+
+static int
+cpfl_rss_hash_conf_get(struct rte_eth_dev *dev,
+		       struct rte_eth_rss_conf *rss_conf)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_adapter *adapter = vport->adapter;
+	int ret = 0;
+
+	if (adapter->caps.rss_caps == 0 || dev->data->nb_rx_queues == 0) {
+		PMD_DRV_LOG(DEBUG, "RSS is not supported");
+		return -ENOTSUP;
+	}
+
+	ret = idpf_vc_get_rss_hash(vport);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "Failed to get RSS hf");
+		return ret;
+	}
+
+	rss_conf->rss_hf = cpfl_map_general_rss_hf(vport->rss_hf, vport->last_general_rss_hf);
+
+	if (!rss_conf->rss_key)
+		return 0;
+
+	ret = idpf_vc_get_rss_key(vport);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "Failed to get RSS key");
+		return ret;
+	}
+
+	if (rss_conf->rss_key_len > vport->rss_key_size)
+		rss_conf->rss_key_len = vport->rss_key_size;
+
+	rte_memcpy(rss_conf->rss_key, vport->rss_key, rss_conf->rss_key_len);
+
+	return 0;
+}
+
 static int
 cpfl_dev_configure(struct rte_eth_dev *dev)
 {
@@ -854,6 +1153,10 @@ static const struct eth_dev_ops cpfl_eth_dev_ops = {
 	.dev_supported_ptypes_get	= cpfl_dev_supported_ptypes_get,
 	.stats_get			= cpfl_dev_stats_get,
 	.stats_reset			= cpfl_dev_stats_reset,
+	.reta_update			= cpfl_rss_reta_update,
+	.reta_query			= cpfl_rss_reta_query,
+	.rss_hash_update		= cpfl_rss_hash_update,
+	.rss_hash_conf_get		= cpfl_rss_hash_conf_get,
 };
 
 static uint16_t
-- 
2.25.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v2 20/21] net/cpfl: support single q scatter RX datapath
  2023-01-13  8:19 ` [PATCH v2 00/21] add support for cpfl PMD in DPDK Mingxia Liu
                     ` (18 preceding siblings ...)
  2023-01-13  8:19   ` [PATCH v2 19/21] net/cpfl: add RSS set/get ops Mingxia Liu
@ 2023-01-13  8:19   ` Mingxia Liu
  2023-01-13  8:19   ` [PATCH v2 21/21] net/cpfl: add xstats ops Mingxia Liu
                     ` (4 subsequent siblings)
  24 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-01-13  8:19 UTC (permalink / raw)
  To: dev, qi.z.zhang, jingjing.wu, beilei.xing; +Cc: wenjun1.wu, Mingxia Liu

This patch adds the single queue scatter Rx receive function.

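[Editor's note] A rough sketch of the sizing rule this patch introduces (illustrative, not driver code): a scattered packet may span at most CPFL_SUPPORT_CHAIN_NUM buffers, so the accepted packet length is clamped accordingly, and scattered Rx is engaged when the frame exceeds one buffer.

#include <stdint.h>
#include <rte_common.h>

#define CHAIN_NUM 5 /* mirrors CPFL_SUPPORT_CHAIN_NUM in the patch */

/* Illustrative only: compute the max packet length and whether the
 * scatter Rx path is needed, as cpfl_rx_queue_init() below does.
 */
static uint32_t
scatter_max_pkt_len(uint32_t rx_buf_len, uint32_t frame_size,
		    int *need_scatter)
{
	*need_scatter = frame_size > rx_buf_len;
	return RTE_MIN((uint32_t)CHAIN_NUM * rx_buf_len, frame_size);
}
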
Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/net/cpfl/cpfl_ethdev.c |  3 ++-
 drivers/net/cpfl/cpfl_rxtx.c   | 26 ++++++++++++++++++++++++--
 drivers/net/cpfl/cpfl_rxtx.h   |  2 ++
 3 files changed, 28 insertions(+), 3 deletions(-)

diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index b35224e1ec..e7034c4e22 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -157,7 +157,8 @@ cpfl_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 		RTE_ETH_RX_OFFLOAD_UDP_CKSUM            |
 		RTE_ETH_RX_OFFLOAD_TCP_CKSUM            |
 		RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM     |
-		RTE_ETH_RX_OFFLOAD_TIMESTAMP;
+		RTE_ETH_RX_OFFLOAD_TIMESTAMP		|
+		RTE_ETH_RX_OFFLOAD_SCATTER;
 
 	dev_info->tx_offload_capa =
 		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM		|
diff --git a/drivers/net/cpfl/cpfl_rxtx.c b/drivers/net/cpfl/cpfl_rxtx.c
index c797e09b52..bedbaa9de0 100644
--- a/drivers/net/cpfl/cpfl_rxtx.c
+++ b/drivers/net/cpfl/cpfl_rxtx.c
@@ -503,6 +503,8 @@ int
 cpfl_rx_queue_init(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 {
 	struct idpf_rx_queue *rxq;
+	uint16_t max_pkt_len;
+	uint32_t frame_size;
 	int err;
 
 	if (rx_queue_id >= dev->data->nb_rx_queues)
@@ -516,6 +518,17 @@ cpfl_rx_queue_init(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 		return -EINVAL;
 	}
 
+	frame_size = dev->data->mtu + CPFL_ETH_OVERHEAD;
+
+	max_pkt_len =
+	    RTE_MIN((uint32_t)CPFL_SUPPORT_CHAIN_NUM * rxq->rx_buf_len,
+		    frame_size);
+
+	rxq->max_pkt_len = max_pkt_len;
+	if ((dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_SCATTER) ||
+	    frame_size > rxq->rx_buf_len)
+		dev->data->scattered_rx = 1;
+
 	err = idpf_register_ts_mbuf(rxq);
 	if (err != 0) {
 		PMD_DRV_LOG(ERR, "fail to register timestamp mbuf %u",
@@ -801,13 +814,22 @@ cpfl_set_rx_function(struct rte_eth_dev *dev)
 #endif /* CC_AVX512_SUPPORT */
 		}
 
+		if (dev->data->scattered_rx) {
+			dev->rx_pkt_burst = idpf_singleq_recv_scatter_pkts;
+			return;
+		}
 		dev->rx_pkt_burst = idpf_singleq_recv_pkts;
 	}
 #else
-	if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT)
+	if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
 		dev->rx_pkt_burst = idpf_splitq_recv_pkts;
-	else
+	} else {
+		if (dev->data->scattered_rx) {
+			dev->rx_pkt_burst = idpf_singleq_recv_scatter_pkts;
+			return;
+		}
 		dev->rx_pkt_burst = idpf_singleq_recv_pkts;
+	}
 #endif /* RTE_ARCH_X86 */
 }
 
diff --git a/drivers/net/cpfl/cpfl_rxtx.h b/drivers/net/cpfl/cpfl_rxtx.h
index 021db5bf8a..2d55f58455 100644
--- a/drivers/net/cpfl/cpfl_rxtx.h
+++ b/drivers/net/cpfl/cpfl_rxtx.h
@@ -21,6 +21,8 @@
 #define CPFL_DEFAULT_TX_RS_THRESH	32
 #define CPFL_DEFAULT_TX_FREE_THRESH	32
 
+#define CPFL_SUPPORT_CHAIN_NUM 5
+
 int cpfl_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 			uint16_t nb_desc, unsigned int socket_id,
 			const struct rte_eth_txconf *tx_conf);
-- 
2.25.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v2 21/21] net/cpfl: add xstats ops
  2023-01-13  8:19 ` [PATCH v2 00/21] add support for cpfl PMD in DPDK Mingxia Liu
                     ` (19 preceding siblings ...)
  2023-01-13  8:19   ` [PATCH v2 20/21] net/cpfl: support single q scatter RX datapath Mingxia Liu
@ 2023-01-13  8:19   ` Mingxia Liu
  2023-01-13 12:49   ` [PATCH v2 00/21] add support for cpfl PMD in DPDK Zhang, Helin
                     ` (3 subsequent siblings)
  24 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-01-13  8:19 UTC (permalink / raw)
  To: dev, qi.z.zhang, jingjing.wu, beilei.xing; +Cc: wenjun1.wu, Mingxia Liu

Add support for these device ops:
- dev_xstats_get
- dev_xstats_get_names
- dev_xstats_reset

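[Editor's note] A hedged usage sketch (not part of the patch) reading these vport-level xstats through the generic ethdev API; "port_id" is a hypothetical variable:

#include <inttypes.h>
#include <stdio.h>
#include <stdlib.h>
#include <rte_ethdev.h>

/* Illustrative only: query the xstats count, then fetch names and
 * values through the new xstats_get/xstats_get_names ops.
 */
static void
dump_xstats(uint16_t port_id)
{
	struct rte_eth_xstat *vals = NULL;
	struct rte_eth_xstat_name *names = NULL;
	int n, i;

	n = rte_eth_xstats_get(port_id, NULL, 0); /* query required count */
	if (n <= 0)
		return;

	vals = malloc(n * sizeof(*vals));
	names = malloc(n * sizeof(*names));
	if (vals == NULL || names == NULL)
		goto out;

	if (rte_eth_xstats_get(port_id, vals, n) != n ||
	    rte_eth_xstats_get_names(port_id, names, n) != n)
		goto out;

	for (i = 0; i < n; i++)
		printf("%s: %" PRIu64 "\n", names[i].name, vals[i].value);
out:
	free(vals);
	free(names);
}
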
Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/net/cpfl/cpfl_ethdev.c | 80 ++++++++++++++++++++++++++++++++++
 1 file changed, 80 insertions(+)

diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index e7034c4e22..7b0ca81cd9 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -80,6 +80,30 @@ static const uint64_t cpfl_ipv6_rss = RTE_ETH_RSS_NONFRAG_IPV6_UDP |
 			  RTE_ETH_RSS_NONFRAG_IPV6_OTHER |
 			  RTE_ETH_RSS_FRAG_IPV6;
 
+struct rte_cpfl_xstats_name_off {
+	char name[RTE_ETH_XSTATS_NAME_SIZE];
+	unsigned int offset;
+};
+
+static const struct rte_cpfl_xstats_name_off rte_cpfl_stats_strings[] = {
+	{"rx_bytes", offsetof(struct virtchnl2_vport_stats, rx_bytes)},
+	{"rx_unicast_packets", offsetof(struct virtchnl2_vport_stats, rx_unicast)},
+	{"rx_multicast_packets", offsetof(struct virtchnl2_vport_stats, rx_multicast)},
+	{"rx_broadcast_packets", offsetof(struct virtchnl2_vport_stats, rx_broadcast)},
+	{"rx_dropped_packets", offsetof(struct virtchnl2_vport_stats, rx_discards)},
+	{"rx_errors", offsetof(struct virtchnl2_vport_stats, rx_errors)},
+	{"rx_unknown_protocol_packets", offsetof(struct virtchnl2_vport_stats,
+						 rx_unknown_protocol)},
+	{"tx_bytes", offsetof(struct virtchnl2_vport_stats, tx_bytes)},
+	{"tx_unicast_packets", offsetof(struct virtchnl2_vport_stats, tx_unicast)},
+	{"tx_multicast_packets", offsetof(struct virtchnl2_vport_stats, tx_multicast)},
+	{"tx_broadcast_packets", offsetof(struct virtchnl2_vport_stats, tx_broadcast)},
+	{"tx_dropped_packets", offsetof(struct virtchnl2_vport_stats, tx_discards)},
+	{"tx_error_packets", offsetof(struct virtchnl2_vport_stats, tx_errors)}};
+
+#define CPFL_NB_XSTATS (sizeof(rte_cpfl_stats_strings) / \
+		sizeof(rte_cpfl_stats_strings[0]))
+
 static int
 cpfl_dev_link_update(struct rte_eth_dev *dev,
 		     __rte_unused int wait_to_complete)
@@ -313,6 +337,59 @@ cpfl_dev_stats_reset(struct rte_eth_dev *dev)
 	return 0;
 }
 
+static int cpfl_dev_xstats_reset(struct rte_eth_dev *dev)
+{
+	cpfl_dev_stats_reset(dev);
+	return 0;
+}
+
+static int cpfl_dev_xstats_get(struct rte_eth_dev *dev,
+			       struct rte_eth_xstat *xstats, unsigned int n)
+{
+	struct idpf_vport *vport =
+		(struct idpf_vport *)dev->data->dev_private;
+	struct virtchnl2_vport_stats *pstats = NULL;
+	unsigned int i;
+	int ret;
+
+	if (n < CPFL_NB_XSTATS)
+		return CPFL_NB_XSTATS;
+
+	if (!xstats)
+		return 0;
+
+	ret = idpf_query_stats(vport, &pstats);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "Get statistics failed");
+		return 0;
+	}
+
+	idpf_update_stats(&vport->eth_stats_offset, pstats);
+
+	/* loop over xstats array and values from pstats */
+	for (i = 0; i < CPFL_NB_XSTATS; i++) {
+		xstats[i].id = i;
+		xstats[i].value = *(uint64_t *)(((char *)pstats) +
+			rte_cpfl_stats_strings[i].offset);
+	}
+	return CPFL_NB_XSTATS;
+}
+
+static int cpfl_dev_xstats_get_names(__rte_unused struct rte_eth_dev *dev,
+				     struct rte_eth_xstat_name *xstats_names,
+				     __rte_unused unsigned int limit)
+{
+	unsigned int i;
+
+	if (xstats_names)
+		for (i = 0; i < CPFL_NB_XSTATS; i++) {
+			snprintf(xstats_names[i].name,
+				 sizeof(xstats_names[i].name),
+				 "%s", rte_cpfl_stats_strings[i].name);
+		}
+	return CPFL_NB_XSTATS;
+}
+
 static int cpfl_config_rss_hf(struct idpf_vport *vport, uint64_t rss_hf)
 {
 	uint64_t hena = 0, valid_rss_hf = 0;
@@ -1158,6 +1235,9 @@ static const struct eth_dev_ops cpfl_eth_dev_ops = {
 	.reta_query			= cpfl_rss_reta_query,
 	.rss_hash_update		= cpfl_rss_hash_update,
 	.rss_hash_conf_get		= cpfl_rss_hash_conf_get,
+	.xstats_get			= cpfl_dev_xstats_get,
+	.xstats_get_names		= cpfl_dev_xstats_get_names,
+	.xstats_reset			= cpfl_dev_xstats_reset,
 };
 
 static uint16_t
-- 
2.25.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* RE: [PATCH v2 00/21] add support for cpfl PMD in DPDK
  2023-01-13  8:19 ` [PATCH v2 00/21] add support for cpfl PMD in DPDK Mingxia Liu
                     ` (20 preceding siblings ...)
  2023-01-13  8:19   ` [PATCH v2 21/21] net/cpfl: add xstats ops Mingxia Liu
@ 2023-01-13 12:49   ` Zhang, Helin
  2023-01-18  7:31   ` [PATCH v3 " Mingxia Liu
                     ` (2 subsequent siblings)
  24 siblings, 0 replies; 263+ messages in thread
From: Zhang, Helin @ 2023-01-13 12:49 UTC (permalink / raw)
  To: Liu, Mingxia, dev, Zhang, Qi Z, Wu, Jingjing, Xing, Beilei
  Cc: Wu, Wenjun1, Liu, Mingxia



> -----Original Message-----
> From: Mingxia Liu <mingxia.liu@intel.com>
> Sent: Friday, January 13, 2023 4:19 PM
> To: dev@dpdk.org; Zhang, Qi Z <qi.z.zhang@intel.com>; Wu, Jingjing
> <jingjing.wu@intel.com>; Xing, Beilei <beilei.xing@intel.com>
> Cc: Wu, Wenjun1 <wenjun1.wu@intel.com>; Liu, Mingxia
> <mingxia.liu@intel.com>
> Subject: [PATCH v2 00/21] add support for cpfl PMD in DPDK
> 
> The patchset introduced the cpfl (Control Plane Function Library) PMD for
> Intel® IPU E2100’s Configure Physical Function (Device ID: 0x1453)
> 
> The cpfl PMD inherits all the features from idpf PMD which will follow an
> ongoing standard data plan function spec https://www.oasis-
> open.org/committees/tc_home.php?wg_abbrev=idpf
> Besides, it will also support more device specific hardware offloading
> features from DPDK’s control path (e.g.: hairpin, rte_flow …)
Can you explain a bit why idpf cannot be used for this new PCIe function? Why do you need to create a new PMD?

Thanks,
Helin
> 
> This patchset mainly focuses on idpf PMD’s equivalent features.
> To avoid duplicated code, the patchset depends on below patchsets which
> move the common part from net/idpf into common/idpf as a shared library.
> 
> This patchset is based on the idpf PMD code:
> http://patches.dpdk.org/project/dpdk/cover/20230106090501.9106-1-
> beilei.xing@intel.com/
> http://patches.dpdk.org/project/dpdk/cover/20230106091627.13530-1-
> beilei.xing@intel.com/
> http://patches.dpdk.org/project/dpdk/patch/20230113015119.3279019-2-
> wenjun1.wu@intel.com/
> http://patches.dpdk.org/project/dpdk/cover/20230111071545.504706-1-
> mingxia.liu@intel.com/
> 
> v2 changes:
>  - rebase to the new baseline.
>  - Fix rss lut config issue.
> 
> Mingxia Liu (21):
>   net/cpfl: support device initialization
>   net/cpfl: add Tx queue setup
>   net/cpfl: add Rx queue setup
>   net/cpfl: support device start and stop
>   net/cpfl: support queue start
>   net/cpfl: support queue stop
>   net/cpfl: support queue release
>   net/cpfl: support MTU configuration
>   net/cpfl: support basic Rx data path
>   net/cpfl: support basic Tx data path
>   net/cpfl: support write back based on ITR expire
>   net/cpfl: support RSS
>   net/cpfl: support Rx offloading
>   net/cpfl: support Tx offloading
>   net/cpfl: add AVX512 data path for single queue model
>   net/cpfl: support timestamp offload
>   net/cpfl: add AVX512 data path for split queue model
>   net/cpfl: add hw statistics
>   net/cpfl: add RSS set/get ops
>   net/cpfl: support single q scatter RX datapath
>   net/cpfl: add xstats ops
> 
>  MAINTAINERS                             |    9 +
>  doc/guides/nics/cpfl.rst                |   88 ++
>  doc/guides/nics/features/cpfl.ini       |   17 +
>  doc/guides/rel_notes/release_23_03.rst  |    5 +
>  drivers/net/cpfl/cpfl_ethdev.c          | 1490 +++++++++++++++++++++++
>  drivers/net/cpfl/cpfl_ethdev.h          |   95 ++
>  drivers/net/cpfl/cpfl_logs.h            |   32 +
>  drivers/net/cpfl/cpfl_rxtx.c            |  900 ++++++++++++++
>  drivers/net/cpfl/cpfl_rxtx.h            |   44 +
>  drivers/net/cpfl/cpfl_rxtx_vec_common.h |  113 ++
>  drivers/net/cpfl/meson.build            |   38 +
>  drivers/net/idpf/idpf_ethdev.c          |    3 +-
>  drivers/net/meson.build                 |    1 +
>  13 files changed, 2834 insertions(+), 1 deletion(-)  create mode 100644
> doc/guides/nics/cpfl.rst  create mode 100644
> doc/guides/nics/features/cpfl.ini  create mode 100644
> drivers/net/cpfl/cpfl_ethdev.c  create mode 100644
> drivers/net/cpfl/cpfl_ethdev.h  create mode 100644
> drivers/net/cpfl/cpfl_logs.h  create mode 100644 drivers/net/cpfl/cpfl_rxtx.c
> create mode 100644 drivers/net/cpfl/cpfl_rxtx.h  create mode 100644
> drivers/net/cpfl/cpfl_rxtx_vec_common.h
>  create mode 100644 drivers/net/cpfl/meson.build
> 
> --
> 2.25.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* RE: [PATCH v2 01/21] net/cpfl: support device initialization
  2023-01-13  8:19   ` [PATCH v2 01/21] net/cpfl: support device initialization Mingxia Liu
@ 2023-01-13 13:32     ` Zhang, Helin
  0 siblings, 0 replies; 263+ messages in thread
From: Zhang, Helin @ 2023-01-13 13:32 UTC (permalink / raw)
  To: Liu, Mingxia, dev, Zhang, Qi Z, Wu, Jingjing, Xing, Beilei
  Cc: Wu, Wenjun1, Liu, Mingxia



> -----Original Message-----
> From: Mingxia Liu <mingxia.liu@intel.com>
> Sent: Friday, January 13, 2023 4:19 PM
> To: dev@dpdk.org; Zhang, Qi Z <qi.z.zhang@intel.com>; Wu, Jingjing
> <jingjing.wu@intel.com>; Xing, Beilei <beilei.xing@intel.com>
> Cc: Wu, Wenjun1 <wenjun1.wu@intel.com>; Liu, Mingxia
> <mingxia.liu@intel.com>
> Subject: [PATCH v2 01/21] net/cpfl: support device initialization
> 
> Support device init and add the following dev ops:
>  - dev_configure
>  - dev_close
>  - dev_infos_get
>  - link_update
>  - cpfl_dev_supported_ptypes_get
> 
> Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
> ---
>  MAINTAINERS                            |   9 +
>  doc/guides/nics/cpfl.rst               |  66 +++
>  doc/guides/nics/features/cpfl.ini      |  12 +
>  doc/guides/rel_notes/release_23_03.rst |   5 +
>  drivers/net/cpfl/cpfl_ethdev.c         | 769 +++++++++++++++++++++++++
>  drivers/net/cpfl/cpfl_ethdev.h         |  78 +++
>  drivers/net/cpfl/cpfl_logs.h           |  32 +
>  drivers/net/cpfl/cpfl_rxtx.c           | 244 ++++++++
>  drivers/net/cpfl/cpfl_rxtx.h           |  25 +
>  drivers/net/cpfl/meson.build           |  14 +
>  drivers/net/meson.build                |   1 +
>  11 files changed, 1255 insertions(+)
>  create mode 100644 doc/guides/nics/cpfl.rst  create mode 100644
> doc/guides/nics/features/cpfl.ini  create mode 100644
> drivers/net/cpfl/cpfl_ethdev.c  create mode 100644
> drivers/net/cpfl/cpfl_ethdev.h  create mode 100644
> drivers/net/cpfl/cpfl_logs.h  create mode 100644 drivers/net/cpfl/cpfl_rxtx.c
> create mode 100644 drivers/net/cpfl/cpfl_rxtx.h  create mode 100644
> drivers/net/cpfl/meson.build
> 
> diff --git a/MAINTAINERS b/MAINTAINERS
> index 22ef2ea4b9..970acc5751 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -780,6 +780,15 @@ F: drivers/common/idpf/
>  F: doc/guides/nics/idpf.rst
>  F: doc/guides/nics/features/idpf.ini
> 
> +Intel cpfl
> +M: Qi Zhang <qi.z.zhang@intel.com>
> +M: Jingjing Wu <jingjing.wu@intel.com>
> +M: Beilei Xing <beilei.xing@intel.com>
> +T: git://dpdk.org/next/dpdk-next-net-intel
> +F: drivers/net/cpfl/
> +F: doc/guides/nics/cpfl.rst
> +F: doc/guides/nics/features/cpfl.ini
> +
>  Intel igc
>  M: Junfeng Guo <junfeng.guo@intel.com>
>  M: Simei Su <simei.su@intel.com>
> diff --git a/doc/guides/nics/cpfl.rst b/doc/guides/nics/cpfl.rst new file mode
> 100644 index 0000000000..064c69ba7d
> --- /dev/null
> +++ b/doc/guides/nics/cpfl.rst
> @@ -0,0 +1,66 @@
> +.. SPDX-License-Identifier: BSD-3-Clause
> +   Copyright(c) 2022 Intel Corporation.
> +
> +.. include:: <isonum.txt>
> +
> +CPFL Poll Mode Driver
> +=====================
> +
> +The [*EXPERIMENTAL*] cpfl PMD (**librte_net_cpfl**) provides poll mode
> +driver support for Intel\ |reg| Infrastructure Processing Unit (Intel\ |reg|
> IPU) E2100.
> +
> +
> +Linux Prerequisites
> +-------------------
> +
> +Follow the DPDK :doc:`../linux_gsg/index` to setup the basic DPDK
> environment.
> +
> +To get better performance on Intel platforms, please follow the
> +:doc:`../linux_gsg/nic_perf_intel_platform`.
> +
> +
> +Pre-Installation Configuration
> +------------------------------
> +
> +Runtime Config Options
> +~~~~~~~~~~~~~~~~~~~~~~
> +
> +- ``vport`` (default ``0``)
> +
> +  The PMD supports creation of multiple vports for one PCI device,
> + each vport corresponds to a single ethdev.
> +  The user can specify the vports with specific ID to be created, for example::
> +
> +    -a ca:00.0,vport=[0,2,3]
> +
> +  Then the PMD will create 3 vports (ethdevs) for device ``ca:00.0``.
> +
> +  If the parameter is not provided, the vport 0 will be created by default.
> +
> +- ``rx_single`` (default ``0``)
> +
> +  There are two queue modes supported by Intel\ |reg| IPU Ethernet
> + ES2000 Series,  single queue mode and split queue mode for Rx queue.
What is the relationship with IPU E2100? Is IPU ethernet ES2000 a new product?

Thanks,
Helin

> +  User can choose Rx queue mode, example::
> +
> +    -a ca:00.0,rx_single=1
> +
> +  Then the PMD will configure Rx queue with single queue mode.
> +  Otherwise, split queue mode is chosen by default.
> +
> +- ``tx_single`` (default ``0``)
> +
> +  There are two queue modes supported by Intel\ |reg| IPU Ethernet
> + ES2000 Series,  single queue mode and split queue mode for Tx queue.
> +  User can choose Tx queue mode, example::
> +
> +    -a ca:00.0,tx_single=1
> +
> +  Then the PMD will configure Tx queue with single queue mode.
> +  Otherwise, split queue mode is chosen by default.
> +
> +
> +Driver compilation and testing
> +------------------------------
> +
> +Refer to the document :doc:`build_and_test` for details.
> \ No newline at end of file
> diff --git a/doc/guides/nics/features/cpfl.ini
> b/doc/guides/nics/features/cpfl.ini
> new file mode 100644
> index 0000000000..a2d1ca9e15
> --- /dev/null
> +++ b/doc/guides/nics/features/cpfl.ini
> @@ -0,0 +1,12 @@
> +;
> +; Supported features of the 'cpfl' network poll mode driver.
> +;
> +; Refer to default.ini for the full list of available PMD features.
> +;
> +; A feature with "P" indicates only be supported when non-vector path ;
> +is selected.
> +;
> +[Features]
> +Linux                = Y
> +x86-32               = Y
> +x86-64               = Y
> diff --git a/doc/guides/rel_notes/release_23_03.rst
> b/doc/guides/rel_notes/release_23_03.rst
> index b8c5b68d6c..465a25e91e 100644
> --- a/doc/guides/rel_notes/release_23_03.rst
> +++ b/doc/guides/rel_notes/release_23_03.rst
> @@ -55,6 +55,11 @@ New Features
>       Also, make sure to start the actual text at the margin.
>       =======================================================
> 
> +* **Added Intel cpfl driver.**
> +
> +  Added the new ``cpfl`` net driver
> +  for Intel\ |reg| Infrastructure Processing Unit (Intel\ |reg| IPU) E2100.
> +  See the :doc:`../nics/cpfl` NIC guide for more details on this new driver.
> 
>  Removed Items
>  -------------
> diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
> new file mode 100644 index 0000000000..2d79ba2098
> --- /dev/null
> +++ b/drivers/net/cpfl/cpfl_ethdev.c
> @@ -0,0 +1,769 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2022 Intel Corporation
> + */
> +
> +#include <rte_atomic.h>
> +#include <rte_eal.h>
> +#include <rte_ether.h>
> +#include <rte_malloc.h>
> +#include <rte_memzone.h>
> +#include <rte_dev.h>
> +#include <errno.h>
> +#include <rte_alarm.h>
> +
> +#include "cpfl_ethdev.h"
> +
> +#define CPFL_TX_SINGLE_Q	"tx_single"
> +#define CPFL_RX_SINGLE_Q	"rx_single"
> +#define CPFL_VPORT		"vport"
> +
> +rte_spinlock_t cpfl_adapter_lock;
> +/* A list for all adapters, one adapter matches one PCI device */
> +struct cpfl_adapter_list cpfl_adapter_list; bool
> +cpfl_adapter_list_init;
> +
> +static const char * const cpfl_valid_args[] = {
> +	CPFL_TX_SINGLE_Q,
> +	CPFL_RX_SINGLE_Q,
> +	CPFL_VPORT,
> +	NULL
> +};
> +
> +static int
> +cpfl_dev_link_update(struct rte_eth_dev *dev,
> +		     __rte_unused int wait_to_complete) {
> +	struct idpf_vport *vport = dev->data->dev_private;
> +	struct rte_eth_link new_link;
> +
> +	memset(&new_link, 0, sizeof(new_link));
> +
> +	switch (vport->link_speed) {
> +	case 10:
Is it better to replace '10' with a meaningful macro?
The same comment applies to the ~20 lines of code below.

Thanks,
Helin

> +		new_link.link_speed = RTE_ETH_SPEED_NUM_10M;
> +		break;
> +	case 100:
> +		new_link.link_speed = RTE_ETH_SPEED_NUM_100M;
> +		break;
> +	case 1000:
> +		new_link.link_speed = RTE_ETH_SPEED_NUM_1G;
> +		break;
> +	case 10000:
> +		new_link.link_speed = RTE_ETH_SPEED_NUM_10G;
> +		break;
> +	case 20000:
> +		new_link.link_speed = RTE_ETH_SPEED_NUM_20G;
> +		break;
> +	case 25000:
> +		new_link.link_speed = RTE_ETH_SPEED_NUM_25G;
> +		break;
> +	case 40000:
> +		new_link.link_speed = RTE_ETH_SPEED_NUM_40G;
> +		break;
> +	case 50000:
> +		new_link.link_speed = RTE_ETH_SPEED_NUM_50G;
> +		break;
> +	case 100000:
> +		new_link.link_speed = RTE_ETH_SPEED_NUM_100G;
> +		break;
> +	case 200000:
> +		new_link.link_speed = RTE_ETH_SPEED_NUM_200G;
> +		break;
> +	default:
> +		new_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
> +	}
> +
> +	new_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
> +	new_link.link_status = vport->link_up ? RTE_ETH_LINK_UP :
> +		RTE_ETH_LINK_DOWN;
> +	new_link.link_autoneg = !(dev->data->dev_conf.link_speeds &
> +				  RTE_ETH_LINK_SPEED_FIXED);
> +
> +	return rte_eth_linkstatus_set(dev, &new_link); }
> +
> +static int
> +cpfl_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info
> +*dev_info) {
> +	struct idpf_vport *vport = dev->data->dev_private;
> +	struct idpf_adapter *adapter = vport->adapter;
> +
> +	dev_info->max_rx_queues = adapter->caps.max_rx_q;
> +	dev_info->max_tx_queues = adapter->caps.max_tx_q;
> +	dev_info->min_rx_bufsize = CPFL_MIN_BUF_SIZE;
> +	dev_info->max_rx_pktlen = vport->max_mtu +
> CPFL_ETH_OVERHEAD;
> +
> +	dev_info->max_mtu = vport->max_mtu;
> +	dev_info->min_mtu = RTE_ETHER_MIN_MTU;
> +
> +	return 0;
> +}
> +
> +static const uint32_t *
> +cpfl_dev_supported_ptypes_get(struct rte_eth_dev *dev __rte_unused) {
> +	static const uint32_t ptypes[] = {
> +		RTE_PTYPE_L2_ETHER,
> +		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN,
> +		RTE_PTYPE_L3_IPV6_EXT_UNKNOWN,
> +		RTE_PTYPE_L4_FRAG,
> +		RTE_PTYPE_L4_UDP,
> +		RTE_PTYPE_L4_TCP,
> +		RTE_PTYPE_L4_SCTP,
> +		RTE_PTYPE_L4_ICMP,
> +		RTE_PTYPE_UNKNOWN
> +	};
> +
> +	return ptypes;
> +}
> +
> +static int
> +cpfl_dev_configure(struct rte_eth_dev *dev) {
> +	struct idpf_vport *vport = dev->data->dev_private;
> +	struct rte_eth_conf *conf = &dev->data->dev_conf;
> +
> +	if (conf->link_speeds & RTE_ETH_LINK_SPEED_FIXED) {
> +		PMD_INIT_LOG(ERR, "Setting link speed is not supported");
> +		return -ENOTSUP;
> +	}
> +
> +	if (conf->txmode.mq_mode != RTE_ETH_MQ_TX_NONE) {
> +		PMD_INIT_LOG(ERR, "Multi-queue TX mode %d is not
> supported",
> +			     conf->txmode.mq_mode);
> +		return -ENOTSUP;
> +	}
> +
> +	if (conf->lpbk_mode != 0) {
> +		PMD_INIT_LOG(ERR, "Loopback operation mode %d is not
> supported",
> +			     conf->lpbk_mode);
> +		return -ENOTSUP;
> +	}
> +
> +	if (conf->dcb_capability_en != 0) {
> +		PMD_INIT_LOG(ERR, "Priority Flow Control(PFC) if not
> supported");
> +		return -ENOTSUP;
> +	}
> +
> +	if (conf->intr_conf.lsc != 0) {
> +		PMD_INIT_LOG(ERR, "LSC interrupt is not supported");
> +		return -ENOTSUP;
> +	}
> +
> +	if (conf->intr_conf.rxq != 0) {
> +		PMD_INIT_LOG(ERR, "RXQ interrupt is not supported");
> +		return -ENOTSUP;
> +	}
> +
> +	if (conf->intr_conf.rmv != 0) {
> +		PMD_INIT_LOG(ERR, "RMV interrupt is not supported");
> +		return -ENOTSUP;
> +	}
> +
> +	return 0;
> +}
> +
> +static int
> +cpfl_dev_close(struct rte_eth_dev *dev) {
> +	struct idpf_vport *vport = dev->data->dev_private;
> +	struct cpfl_adapter_ext *adapter =
> +CPFL_ADAPTER_TO_EXT(vport->adapter);
> +
> +	idpf_vport_deinit(vport);
> +
> +	adapter->cur_vports &= ~RTE_BIT32(vport->devarg_id);
> +	adapter->cur_vport_nb--;
> +	dev->data->dev_private = NULL;
> +	adapter->vports[vport->sw_idx] = NULL;
> +	rte_free(vport);
> +
> +	return 0;
> +}
> +
> +static int
> +insert_value(struct cpfl_devargs *devargs, uint16_t id) {
> +	uint16_t i;
> +
> +	/* ignore duplicate */
> +	for (i = 0; i < devargs->req_vport_nb; i++) {
> +		if (devargs->req_vports[i] == id)
> +			return 0;
> +	}
> +
> +	if (devargs->req_vport_nb >= RTE_DIM(devargs->req_vports)) {
> +		PMD_INIT_LOG(ERR, "Total vport number can't be > %d",
> +			     CPFL_MAX_VPORT_NUM);
> +		return -EINVAL;
> +	}
> +
> +	devargs->req_vports[devargs->req_vport_nb] = id;
> +	devargs->req_vport_nb++;
> +
> +	return 0;
> +}
> +
> +static const char *
> +parse_range(const char *value, struct cpfl_devargs *devargs) {
> +	uint16_t lo, hi, i;
> +	int n = 0;
> +	int result;
> +	const char *pos = value;
> +
> +	result = sscanf(value, "%hu%n-%hu%n", &lo, &n, &hi, &n);
> +	if (result == 1) {
What does "1" mean here?
I would suggest replacing it with a meaningful macro.

Thanks,
Helin

> +		if (lo >= CPFL_MAX_VPORT_NUM)
> +			return NULL;
> +		if (insert_value(devargs, lo) != 0)
> +			return NULL;
> +	} else if (result == 2) {
> +		if (lo > hi || hi >= CPFL_MAX_VPORT_NUM)
> +			return NULL;
> +		for (i = lo; i <= hi; i++) {
> +			if (insert_value(devargs, i) != 0)
> +				return NULL;
> +		}
> +	} else {
> +		return NULL;
> +	}
> +
> +	return pos + n;
> +}
> +
> +static int
> +parse_vport(const char *key, const char *value, void *args) {
> +	struct cpfl_devargs *devargs = args;
> +	const char *pos = value;
> +
> +	devargs->req_vport_nb = 0;
> +
> +	if (*pos == '[')
> +		pos++;
> +
> +	while (1) {
> +		pos = parse_range(pos, devargs);
> +		if (pos == NULL) {
> +			PMD_INIT_LOG(ERR, "invalid value:\"%s\" for
> key:\"%s\", ",
> +				     value, key);
> +			return -EINVAL;
> +		}
> +		if (*pos != ',')
> +			break;
> +		pos++;
> +	}
> +
> +	if (*value == '[' && *pos != ']') {
> +		PMD_INIT_LOG(ERR, "invalid value:\"%s\" for key:\"%s\", ",
> +			     value, key);
> +		return -EINVAL;
> +	}
> +
> +	return 0;
> +}
> +
> +static int
> +parse_bool(const char *key, const char *value, void *args) {
> +	int *i = args;
> +	char *end;
> +	int num;
> +
> +	errno = 0;
> +
> +	num = strtoul(value, &end, 10);
> +
> +	if (errno == ERANGE || (num != 0 && num != 1)) {
> +		PMD_INIT_LOG(ERR, "invalid value:\"%s\" for key:\"%s\",
> value must be 0 or 1",
> +			value, key);
> +		return -EINVAL;
> +	}
> +
> +	*i = num;
> +	return 0;
> +}
> +
> +static int
> +cpfl_parse_devargs(struct rte_pci_device *pci_dev, struct
> cpfl_adapter_ext *adapter,
> +		   struct cpfl_devargs *cpfl_args)
> +{
> +	struct rte_devargs *devargs = pci_dev->device.devargs;
> +	struct rte_kvargs *kvlist;
> +	int i, ret;
> +
> +	cpfl_args->req_vport_nb = 0;
> +
> +	if (devargs == NULL)
Need a log message here for debugging purposes?

> +		return 0;
> +
> +	kvlist = rte_kvargs_parse(devargs->args, cpfl_valid_args);
> +	if (kvlist == NULL) {
> +		PMD_INIT_LOG(ERR, "invalid kvargs key");
> +		return -EINVAL;
> +	}
> +
> +	/* check parsed devargs */
> +	if (adapter->cur_vport_nb + cpfl_args->req_vport_nb >
> +	    CPFL_MAX_VPORT_NUM) {
> +		PMD_INIT_LOG(ERR, "Total vport number can't be > %d",
> +			     CPFL_MAX_VPORT_NUM);
> +		ret = -EINVAL;
> +		goto bail;
> +	}
> +
> +	for (i = 0; i < cpfl_args->req_vport_nb; i++) {
> +		if (adapter->cur_vports & RTE_BIT32(cpfl_args-
> >req_vports[i])) {
> +			PMD_INIT_LOG(ERR, "Vport %d has been created",
> +				     cpfl_args->req_vports[i]);
> +			ret = -EINVAL;
> +			goto bail;
> +		}
> +	}
> +
> +	ret = rte_kvargs_process(kvlist, CPFL_VPORT, &parse_vport,
> +				 cpfl_args);
> +	if (ret != 0)
> +		goto bail;
> +
> +	ret = rte_kvargs_process(kvlist, CPFL_TX_SINGLE_Q, &parse_bool,
> +				 &adapter->base.txq_model);
> +	if (ret != 0)
> +		goto bail;
> +
> +	ret = rte_kvargs_process(kvlist, CPFL_RX_SINGLE_Q, &parse_bool,
> +				 &adapter->base.rxq_model);
> +	if (ret != 0)
> +		goto bail;
Is the above line dead code?

> +
> +bail:
> +	rte_kvargs_free(kvlist);
> +	return ret;
> +}
> +
> +static struct idpf_vport *
> +cpfl_find_vport(struct cpfl_adapter_ext *adapter, uint32_t vport_id) {
> +	struct idpf_vport *vport = NULL;
> +	int i;
> +
> +	for (i = 0; i < adapter->cur_vport_nb; i++) {
> +		vport = adapter->vports[i];
> +		if (vport->vport_id != vport_id)
> +			continue;
> +		else
> +			return vport;
Likely you just need to check whether vport->vport_id equals vport_id, right?

> +	}
> +
> +	return vport;
> +}
> +
> +static void
> +cpfl_handle_event_msg(struct idpf_vport *vport, uint8_t *msg, uint16_t
> +msglen) {
> +	struct virtchnl2_event *vc_event = (struct virtchnl2_event *)msg;
> +	struct rte_eth_dev *dev = (struct rte_eth_dev *)vport->dev;
> +
> +	if (msglen < sizeof(struct virtchnl2_event)) {
> +		PMD_DRV_LOG(ERR, "Error event");
> +		return;
> +	}
> +
> +	switch (vc_event->event) {
> +	case VIRTCHNL2_EVENT_LINK_CHANGE:
> +		PMD_DRV_LOG(DEBUG,
> "VIRTCHNL2_EVENT_LINK_CHANGE");
> +		vport->link_up = vc_event->link_status;
> +		vport->link_speed = vc_event->link_speed;
> +		cpfl_dev_link_update(dev, 0);
> +		break;
> +	default:
> +		PMD_DRV_LOG(ERR, " unknown event received %u", vc_event->event);
> +		break;
> +	}
> +}
> +
> +static void
> +cpfl_handle_virtchnl_msg(struct cpfl_adapter_ext *adapter_ex)
> +{
> +	struct idpf_adapter *adapter = &adapter_ex->base;
> +	struct idpf_dma_mem *dma_mem = NULL;
> +	struct idpf_hw *hw = &adapter->hw;
> +	struct virtchnl2_event *vc_event;
> +	struct idpf_ctlq_msg ctlq_msg;
> +	enum idpf_mbx_opc mbx_op;
> +	struct idpf_vport *vport;
> +	enum virtchnl_ops vc_op;
> +	uint16_t pending = 1;
> +	int ret;
> +
> +	while (pending) {
> +		ret = idpf_ctlq_recv(hw->arq, &pending, &ctlq_msg);
> +		if (ret) {
> +			PMD_DRV_LOG(INFO, "Failed to read msg from virtual channel, ret: %d", ret);
> +			return;
> +		}
> +
> +		rte_memcpy(adapter->mbx_resp, ctlq_msg.ctx.indirect.payload->va,
> +			   IDPF_DFLT_MBX_BUF_SIZE);
> +
> +		mbx_op = rte_le_to_cpu_16(ctlq_msg.opcode);
> +		vc_op = rte_le_to_cpu_32(ctlq_msg.cookie.mbx.chnl_opcode);
> +		adapter->cmd_retval = rte_le_to_cpu_32(ctlq_msg.cookie.mbx.chnl_retval);
> +
> +		switch (mbx_op) {
> +		case idpf_mbq_opc_send_msg_to_peer_pf:
> +			if (vc_op == VIRTCHNL2_OP_EVENT) {
> +				if (ctlq_msg.data_len < sizeof(struct virtchnl2_event)) {
> +					PMD_DRV_LOG(ERR, "Error event");
> +					return;
> +				}
> +				vc_event = (struct virtchnl2_event *)adapter->mbx_resp;
> +				vport = cpfl_find_vport(adapter_ex, vc_event->vport_id);
> +				if (!vport) {
> +					PMD_DRV_LOG(ERR, "Can't find vport.");
> +					return;
> +				}
> +				cpfl_handle_event_msg(vport, adapter->mbx_resp,
> +						      ctlq_msg.data_len);
> +			} else {
> +				if (vc_op == adapter->pend_cmd)
> +					notify_cmd(adapter, adapter->cmd_retval);
> +				else
> +					PMD_DRV_LOG(ERR, "command mismatch, expect %u, get %u",
> +						    adapter->pend_cmd, vc_op);
> +
> +				PMD_DRV_LOG(DEBUG, " Virtual channel response is received,"
> +					    "opcode = %d", vc_op);
> +			}
> +			goto post_buf;
> +		default:
> +			PMD_DRV_LOG(DEBUG, "Request %u is not supported yet", mbx_op);
> +		}
> +	}
> +
> +post_buf:
> +	if (ctlq_msg.data_len)
> +		dma_mem = ctlq_msg.ctx.indirect.payload;
> +	else
> +		pending = 0;
> +
> +	ret = idpf_ctlq_post_rx_buffs(hw, hw->arq, &pending, &dma_mem);
> +	if (ret && dma_mem)
> +		idpf_free_dma_mem(hw, dma_mem);
> +}
> +
> +static void
> +cpfl_dev_alarm_handler(void *param)
> +{
> +	struct cpfl_adapter_ext *adapter = param;
> +
> +	cpfl_handle_virtchnl_msg(adapter);
> +
> +	rte_eal_alarm_set(CPFL_ALARM_INTERVAL, cpfl_dev_alarm_handler, adapter);
> +}
> +
> +static int
> +cpfl_adapter_ext_init(struct rte_pci_device *pci_dev, struct cpfl_adapter_ext *adapter)
> +{
> +	struct idpf_adapter *base = &adapter->base;
> +	struct idpf_hw *hw = &base->hw;
> +	int ret = 0;
> +
> +	hw->hw_addr = (void *)pci_dev->mem_resource[0].addr;
> +	hw->hw_addr_len = pci_dev->mem_resource[0].len;
> +	hw->back = base;
> +	hw->vendor_id = pci_dev->id.vendor_id;
> +	hw->device_id = pci_dev->id.device_id;
> +	hw->subsystem_vendor_id = pci_dev->id.subsystem_vendor_id;
> +
> +	strncpy(adapter->name, pci_dev->device.name, PCI_PRI_STR_SIZE);
> +
> +	ret = idpf_adapter_init(base);
> +	if (ret != 0) {
> +		PMD_INIT_LOG(ERR, "Failed to init adapter");
> +		goto err_adapter_init;
> +	}
> +
> +	rte_eal_alarm_set(CPFL_ALARM_INTERVAL, cpfl_dev_alarm_handler, adapter);
> +
> +	adapter->max_vport_nb = adapter->base.caps.max_vports;
> +
> +	adapter->vports = rte_zmalloc("vports",
> +				      adapter->max_vport_nb *
> +				      sizeof(*adapter->vports),
> +				      0);
> +	if (adapter->vports == NULL) {
> +		PMD_INIT_LOG(ERR, "Failed to allocate vports memory");
> +		ret = -ENOMEM;
> +		goto err_get_ptype;
> +	}
> +
> +	adapter->cur_vports = 0;
> +	adapter->cur_vport_nb = 0;
> +
> +	adapter->used_vecs_num = 0;
> +
> +	return ret;
> +
> +err_get_ptype:
> +	idpf_adapter_deinit(base);
> +err_adapter_init:
> +	return ret;
> +}
> +
> +static const struct eth_dev_ops cpfl_eth_dev_ops = {
> +	.dev_configure			= cpfl_dev_configure,
> +	.dev_close			= cpfl_dev_close,
> +	.dev_infos_get			= cpfl_dev_info_get,
> +	.link_update			= cpfl_dev_link_update,
> +	.dev_supported_ptypes_get	= cpfl_dev_supported_ptypes_get,
> +};
> +
> +static uint16_t
> +cpfl_vport_idx_alloc(struct cpfl_adapter_ext *ad)
> +{
> +	uint16_t vport_idx;
> +	uint16_t i;
> +
> +	for (i = 0; i < ad->max_vport_nb; i++) {
> +		if (ad->vports[i] == NULL)
> +			break;
> +	}
> +
> +	if (i == ad->max_vport_nb)
> +		vport_idx = CPFL_INVALID_VPORT_IDX;
Why not initialize vport_idx directly?

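For example, a minimal sketch of that suggestion:

	uint16_t vport_idx = CPFL_INVALID_VPORT_IDX;
	uint16_t i;

	for (i = 0; i < ad->max_vport_nb; i++) {
		if (ad->vports[i] == NULL) {
			vport_idx = i;
			break;
		}
	}

	return vport_idx;
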
> +	else
> +		vport_idx = i;
> +
> +	return vport_idx;
> +}
> +
> +static int
> +cpfl_dev_vport_init(struct rte_eth_dev *dev, void *init_params)
> +{
> +	struct idpf_vport *vport = dev->data->dev_private;
> +	struct cpfl_vport_param *param = init_params;
> +	struct cpfl_adapter_ext *adapter = param->adapter;
> +	/* for sending create vport virtchnl msg prepare */
> +	struct virtchnl2_create_vport create_vport_info;
> +	int ret = 0;
> +
> +	dev->dev_ops = &cpfl_eth_dev_ops;
> +	vport->adapter = &adapter->base;
> +	vport->sw_idx = param->idx;
> +	vport->devarg_id = param->devarg_id;
> +	vport->dev = dev;
> +
> +	memset(&create_vport_info, 0, sizeof(create_vport_info));
> +	ret = idpf_create_vport_info_init(vport, &create_vport_info);
> +	if (ret != 0) {
> +		PMD_INIT_LOG(ERR, "Failed to init vport req_info.");
> +		goto err;
> +	}
> +
> +	ret = idpf_vport_init(vport, &create_vport_info, dev->data);
> +	if (ret != 0) {
> +		PMD_INIT_LOG(ERR, "Failed to init vports.");
> +		goto err;
> +	}
> +
> +	adapter->vports[param->idx] = vport;
> +	adapter->cur_vports |= RTE_BIT32(param->devarg_id);
> +	adapter->cur_vport_nb++;
> +
> +	dev->data->mac_addrs = rte_zmalloc(NULL, RTE_ETHER_ADDR_LEN, 0);
> +	if (dev->data->mac_addrs == NULL) {
> +		PMD_INIT_LOG(ERR, "Cannot allocate mac_addr memory.");
> +		ret = -ENOMEM;
> +		goto err_mac_addrs;
> +	}
> +
> +	rte_ether_addr_copy((struct rte_ether_addr *)vport->default_mac_addr,
> +			    &dev->data->mac_addrs[0]);
> +
> +	return 0;
> +
> +err_mac_addrs:
> +	adapter->vports[param->idx] = NULL;  /* reset */
> +	idpf_vport_deinit(vport);
> +err:
> +	return ret;
> +}
> +
> +static const struct rte_pci_id pci_id_cpfl_map[] = {
> +	{ RTE_PCI_DEVICE(IDPF_INTEL_VENDOR_ID, IDPF_DEV_ID_CPF) },
> +	{ .vendor_id = 0, /* sentinel */ },
> +};
> +
> +static struct cpfl_adapter_ext *
> +cpfl_find_adapter_ext(struct rte_pci_device *pci_dev)
> +{
> +	struct cpfl_adapter_ext *adapter;
> +	int found = 0;
> +
> +	if (pci_dev == NULL)
> +		return NULL;
> +
> +	rte_spinlock_lock(&cpfl_adapter_lock);
> +	TAILQ_FOREACH(adapter, &cpfl_adapter_list, next) {
> +		if (strncmp(adapter->name, pci_dev->device.name, PCI_PRI_STR_SIZE) == 0) {
> +			found = 1;
> +			break;
> +		}
> +	}
> +	rte_spinlock_unlock(&cpfl_adapter_lock);
> +
> +	if (found == 0)
> +		return NULL;
> +
> +	return adapter;
> +}
> +
> +static void
> +cpfl_adapter_ext_deinit(struct cpfl_adapter_ext *adapter)
> +{
> +	rte_eal_alarm_cancel(cpfl_dev_alarm_handler, adapter);
> +	idpf_adapter_deinit(&adapter->base);
> +
> +	rte_free(adapter->vports);
> +	adapter->vports = NULL;
> +}
> +
> +static int
> +cpfl_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
> +	       struct rte_pci_device *pci_dev)
> +{
> +	struct cpfl_vport_param vport_param;
> +	struct cpfl_adapter_ext *adapter;
> +	struct cpfl_devargs devargs;
> +	char name[RTE_ETH_NAME_MAX_LEN];
> +	int i, retval;
> +	bool first_probe = false;
> +
> +	if (!cpfl_adapter_list_init) {
> +		rte_spinlock_init(&cpfl_adapter_lock);
> +		TAILQ_INIT(&cpfl_adapter_list);
> +		cpfl_adapter_list_init = true;
> +	}
> +
> +	adapter = cpfl_find_adapter_ext(pci_dev);
> +	if (adapter == NULL) {
> +		first_probe = true;
> +		adapter = rte_zmalloc("cpfl_adapter_ext",
> +						sizeof(struct cpfl_adapter_ext), 0);
> +		if (adapter == NULL) {
> +			PMD_INIT_LOG(ERR, "Failed to allocate adapter.");
> +			return -ENOMEM;
> +		}
> +
> +		retval = cpfl_adapter_ext_init(pci_dev, adapter);
> +		if (retval != 0) {
> +			PMD_INIT_LOG(ERR, "Failed to init adapter.");
> +			return retval;
> +		}
> +
> +		rte_spinlock_lock(&cpfl_adapter_lock);
> +		TAILQ_INSERT_TAIL(&cpfl_adapter_list, adapter, next);
> +		rte_spinlock_unlock(&cpfl_adapter_lock);
> +	}
> +
> +	retval = cpfl_parse_devargs(pci_dev, adapter, &devargs);
> +	if (retval != 0) {
> +		PMD_INIT_LOG(ERR, "Failed to parse private devargs");
> +		goto err;
> +	}
> +
> +	if (devargs.req_vport_nb == 0) {
> +		/* If no vport devarg, create vport 0 by default. */
> +		vport_param.adapter = adapter;
> +		vport_param.devarg_id = 0;
> +		vport_param.idx = cpfl_vport_idx_alloc(adapter);
> +		if (vport_param.idx == CPFL_INVALID_VPORT_IDX) {
> +			PMD_INIT_LOG(ERR, "No space for vport %u", vport_param.devarg_id);
> +			return 0;
> +		}
> +		snprintf(name, sizeof(name), "cpfl_%s_vport_0",
> +			 pci_dev->device.name);
> +		retval = rte_eth_dev_create(&pci_dev->device, name,
> +					    sizeof(struct idpf_vport),
> +					    NULL, NULL, cpfl_dev_vport_init,
> +					    &vport_param);
> +		if (retval != 0)
> +			PMD_DRV_LOG(ERR, "Failed to create default vport 0");
> +	} else {
> +		for (i = 0; i < devargs.req_vport_nb; i++) {
> +			vport_param.adapter = adapter;
> +			vport_param.devarg_id = devargs.req_vports[i];
> +			vport_param.idx = cpfl_vport_idx_alloc(adapter);
> +			if (vport_param.idx == CPFL_INVALID_VPORT_IDX) {
> +				PMD_INIT_LOG(ERR, "No space for vport %u", vport_param.devarg_id);
> +				break;
> +			}
> +			snprintf(name, sizeof(name), "cpfl_%s_vport_%d",
> +				 pci_dev->device.name,
> +				 devargs.req_vports[i]);
> +			retval = rte_eth_dev_create(&pci_dev->device, name,
> +						    sizeof(struct idpf_vport),
> +						    NULL, NULL, cpfl_dev_vport_init,
> +						    &vport_param);
> +			if (retval != 0)
> +				PMD_DRV_LOG(ERR, "Failed to create vport %d",
> +					    vport_param.devarg_id);
> +		}
> +	}
> +
> +	return 0;
> +
> +err:
> +	if (first_probe) {
> +		rte_spinlock_lock(&cpfl_adapter_lock);
> +		TAILQ_REMOVE(&cpfl_adapter_list, adapter, next);
> +		rte_spinlock_unlock(&cpfl_adapter_lock);
> +		cpfl_adapter_ext_deinit(adapter);
> +		rte_free(adapter);
> +	}
> +	return retval;
> +}
> +
> +static int
> +cpfl_pci_remove(struct rte_pci_device *pci_dev)
> +{
> +	struct cpfl_adapter_ext *adapter = cpfl_find_adapter_ext(pci_dev);
> +	uint16_t port_id;
> +
> +	/* Ethdev created can be found RTE_ETH_FOREACH_DEV_OF through rte_device */
> +	RTE_ETH_FOREACH_DEV_OF(port_id, &pci_dev->device) {
> +			rte_eth_dev_close(port_id);
> +	}
> +
> +	rte_spinlock_lock(&cpfl_adapter_lock);
> +	TAILQ_REMOVE(&cpfl_adapter_list, adapter, next);
> +	rte_spinlock_unlock(&cpfl_adapter_lock);
> +	cpfl_adapter_ext_deinit(adapter);
> +	rte_free(adapter);
> +
> +	return 0;
> +}
> +
> +static struct rte_pci_driver rte_cpfl_pmd = {
> +	.id_table	= pci_id_cpfl_map,
> +	.drv_flags	= RTE_PCI_DRV_NEED_MAPPING,
> +	.probe		= cpfl_pci_probe,
> +	.remove		= cpfl_pci_remove,
> +};
> +
> +/**
> + * Driver initialization routine.
> + * Invoked once at EAL init time.
> + * Register itself as the [Poll Mode] Driver of PCI devices.
> + */
> +RTE_PMD_REGISTER_PCI(net_cpfl, rte_cpfl_pmd);
> +RTE_PMD_REGISTER_PCI_TABLE(net_cpfl, pci_id_cpfl_map);
> +RTE_PMD_REGISTER_KMOD_DEP(net_cpfl, "* igb_uio | vfio-pci");
> +RTE_PMD_REGISTER_PARAM_STRING(net_cpfl,
> +			      CPFL_TX_SINGLE_Q "=<0|1> "
> +			      CPFL_RX_SINGLE_Q "=<0|1> "
> +			      CPFL_VPORT "=[vport_set0,[vport_set1],...]");
> +
> +RTE_LOG_REGISTER_SUFFIX(cpfl_logtype_init, init, NOTICE);
> +RTE_LOG_REGISTER_SUFFIX(cpfl_logtype_driver, driver, NOTICE);
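As a usage note, the devargs registered above are appended to the PCI device
on the EAL allow list, and the registered log types can be raised at runtime;
e.g. (the BDF is only illustrative, and the "pmd.net.cpfl" logtype prefix is
assumed from the usual DPDK driver convention, not stated in this patch):

	dpdk-testpmd -a ca:00.0,vport=[0,2-3],tx_single=1 \
		--log-level='pmd.net.cpfl.*:debug' -- -i
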
> diff --git a/drivers/net/cpfl/cpfl_ethdev.h b/drivers/net/cpfl/cpfl_ethdev.h
> new file mode 100644
> index 0000000000..83459b9c91
> --- /dev/null
> +++ b/drivers/net/cpfl/cpfl_ethdev.h
> @@ -0,0 +1,78 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2022 Intel Corporation
> + */
> +
> +#ifndef _CPFL_ETHDEV_H_
> +#define _CPFL_ETHDEV_H_
> +
> +#include <stdint.h>
> +#include <rte_malloc.h>
> +#include <rte_spinlock.h>
> +#include <rte_ethdev.h>
> +#include <rte_kvargs.h>
> +#include <ethdev_driver.h>
> +#include <ethdev_pci.h>
> +
> +#include "cpfl_logs.h"
> +
> +#include <idpf_common_device.h>
> +#include <idpf_common_virtchnl.h>
> +#include <base/idpf_prototype.h>
> +#include <base/virtchnl2.h>
> +
> +#define CPFL_MAX_VPORT_NUM	8
> +
> +#define CPFL_INVALID_VPORT_IDX	0xffff
> +
> +#define CPFL_MIN_BUF_SIZE	1024
> +#define CPFL_MAX_FRAME_SIZE	9728
> +#define CPFL_DEFAULT_MTU	RTE_ETHER_MTU
> +
> +#define CPFL_NUM_MACADDR_MAX	64
> +
> +#define CPFL_VLAN_TAG_SIZE	4
> +#define CPFL_ETH_OVERHEAD \
> +	(RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN + CPFL_VLAN_TAG_SIZE * 2)
> +
> +#define CPFL_ADAPTER_NAME_LEN	(PCI_PRI_STR_SIZE + 1)
> +
> +#define CPFL_ALARM_INTERVAL	50000 /* us */
> +
> +/* Device IDs */
> +#define IDPF_DEV_ID_CPF			0x1453
> +
> +struct cpfl_vport_param {
> +	struct cpfl_adapter_ext *adapter;
> +	uint16_t devarg_id; /* arg id from user */
> +	uint16_t idx;       /* index in adapter->vports[]*/
> +};
> +
> +/* Struct used when parse driver specific devargs */
> +struct cpfl_devargs {
> +	uint16_t req_vports[CPFL_MAX_VPORT_NUM];
> +	uint16_t req_vport_nb;
> +};
> +
> +struct cpfl_adapter_ext {
> +	TAILQ_ENTRY(cpfl_adapter_ext) next;
> +	struct idpf_adapter base;
> +
> +	char name[CPFL_ADAPTER_NAME_LEN];
> +
> +	struct idpf_vport **vports;
> +	uint16_t max_vport_nb;
> +
> +	uint16_t cur_vports; /* bit mask of created vport */
> +	uint16_t cur_vport_nb;
> +
> +	uint16_t used_vecs_num;
> +};
> +
> +TAILQ_HEAD(cpfl_adapter_list, cpfl_adapter_ext);
> +
> +#define CPFL_DEV_TO_PCI(eth_dev)		\
> +	RTE_DEV_TO_PCI((eth_dev)->device)
> +#define CPFL_ADAPTER_TO_EXT(p)					\
> +	container_of((p), struct cpfl_adapter_ext, base)
> +
> +#endif /* _CPFL_ETHDEV_H_ */
> diff --git a/drivers/net/cpfl/cpfl_logs.h b/drivers/net/cpfl/cpfl_logs.h
> new file mode 100644
> index 0000000000..451bdfbd1d
> --- /dev/null
> +++ b/drivers/net/cpfl/cpfl_logs.h
> @@ -0,0 +1,32 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2022 Intel Corporation
> + */
> +
> +#ifndef _CPFL_LOGS_H_
> +#define _CPFL_LOGS_H_
> +
> +#include <rte_log.h>
> +
> +extern int cpfl_logtype_init;
> +extern int cpfl_logtype_driver;
> +
> +#define PMD_INIT_LOG(level, ...) \
> +	rte_log(RTE_LOG_ ## level, \
> +		cpfl_logtype_init, \
> +		RTE_FMT("%s(): " \
> +			RTE_FMT_HEAD(__VA_ARGS__,) "\n", \
> +			__func__, \
> +			RTE_FMT_TAIL(__VA_ARGS__,)))
> +
> +#define PMD_DRV_LOG_RAW(level, ...) \
> +	rte_log(RTE_LOG_ ## level, \
> +		cpfl_logtype_driver, \
> +		RTE_FMT("%s(): " \
> +			RTE_FMT_HEAD(__VA_ARGS__,) "\n", \
> +			__func__, \
> +			RTE_FMT_TAIL(__VA_ARGS__,)))
> +
> +#define PMD_DRV_LOG(level, fmt, args...) \
> +	PMD_DRV_LOG_RAW(level, fmt "\n", ## args)
> +
> +#endif /* _CPFL_LOGS_H_ */
> diff --git a/drivers/net/cpfl/cpfl_rxtx.c b/drivers/net/cpfl/cpfl_rxtx.c
> new file mode 100644
> index 0000000000..ea4a2002bf
> --- /dev/null
> +++ b/drivers/net/cpfl/cpfl_rxtx.c
> @@ -0,0 +1,244 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2022 Intel Corporation
> + */
> +
> +#include <ethdev_driver.h>
> +#include <rte_net.h>
> +#include <rte_vect.h>
> +
> +#include "cpfl_ethdev.h"
> +#include "cpfl_rxtx.h"
> +
> +static uint64_t
> +cpfl_tx_offload_convert(uint64_t offload)
> +{
> +	uint64_t ol = 0;
> +
> +	if ((offload & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM) != 0)
> +		ol |= IDPF_TX_OFFLOAD_IPV4_CKSUM;
> +	if ((offload & RTE_ETH_TX_OFFLOAD_UDP_CKSUM) != 0)
> +		ol |= IDPF_TX_OFFLOAD_UDP_CKSUM;
> +	if ((offload & RTE_ETH_TX_OFFLOAD_TCP_CKSUM) != 0)
> +		ol |= IDPF_TX_OFFLOAD_TCP_CKSUM;
> +	if ((offload & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM) != 0)
> +		ol |= IDPF_TX_OFFLOAD_SCTP_CKSUM;
> +	if ((offload & RTE_ETH_TX_OFFLOAD_MULTI_SEGS) != 0)
> +		ol |= IDPF_TX_OFFLOAD_MULTI_SEGS;
> +	if ((offload & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) != 0)
> +		ol |= IDPF_TX_OFFLOAD_MBUF_FAST_FREE;
> +
> +	return ol;
> +}
> +
> +static const struct rte_memzone *
> +cpfl_dma_zone_reserve(struct rte_eth_dev *dev, uint16_t queue_idx,
> +		      uint16_t len, uint16_t queue_type,
> +		      unsigned int socket_id, bool splitq)
> +{
> +	char ring_name[RTE_MEMZONE_NAMESIZE];
> +	const struct rte_memzone *mz;
> +	uint32_t ring_size;
> +
> +	memset(ring_name, 0, RTE_MEMZONE_NAMESIZE);
> +	switch (queue_type) {
> +	case VIRTCHNL2_QUEUE_TYPE_TX:
> +		if (splitq)
> +			ring_size = RTE_ALIGN(len * sizeof(struct idpf_flex_tx_sched_desc),
> +					      CPFL_DMA_MEM_ALIGN);
> +		else
> +			ring_size = RTE_ALIGN(len * sizeof(struct idpf_flex_tx_desc),
> +					      CPFL_DMA_MEM_ALIGN);
> +		rte_memcpy(ring_name, "cpfl Tx ring", sizeof("cpfl Tx ring"));
> +		break;
> +	case VIRTCHNL2_QUEUE_TYPE_RX:
> +		if (splitq)
> +			ring_size = RTE_ALIGN(len * sizeof(struct virtchnl2_rx_flex_desc_adv_nic_3),
> +					      CPFL_DMA_MEM_ALIGN);
> +		else
> +			ring_size = RTE_ALIGN(len * sizeof(struct virtchnl2_singleq_rx_buf_desc),
> +					      CPFL_DMA_MEM_ALIGN);
> +		rte_memcpy(ring_name, "cpfl Rx ring", sizeof("cpfl Rx ring"));
> +		break;
> +	case VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION:
> +		ring_size = RTE_ALIGN(len * sizeof(struct idpf_splitq_tx_compl_desc),
> +				      CPFL_DMA_MEM_ALIGN);
> +		rte_memcpy(ring_name, "cpfl Tx compl ring", sizeof("cpfl Tx compl ring"));
> +		break;
> +	case VIRTCHNL2_QUEUE_TYPE_RX_BUFFER:
> +		ring_size = RTE_ALIGN(len * sizeof(struct virtchnl2_splitq_rx_buf_desc),
> +				      CPFL_DMA_MEM_ALIGN);
> +		rte_memcpy(ring_name, "cpfl Rx buf ring", sizeof("cpfl Rx buf ring"));
> +		break;
> +	default:
> +		PMD_INIT_LOG(ERR, "Invalid queue type");
> +		return NULL;
> +	}
> +
> +	mz = rte_eth_dma_zone_reserve(dev, ring_name, queue_idx,
> +				      ring_size, CPFL_RING_BASE_ALIGN,
> +				      socket_id);
> +	if (mz == NULL) {
> +		PMD_INIT_LOG(ERR, "Failed to reserve DMA memory for ring");
> +		return NULL;
> +	}
> +
> +	/* Zero all the descriptors in the ring. */
> +	memset(mz->addr, 0, ring_size);
> +
> +	return mz;
> +}
> +
> +static void
> +cpfl_dma_zone_release(const struct rte_memzone *mz)
> +{
> +	rte_memzone_free(mz);
> +}
> +
> +static int
> +cpfl_tx_complq_setup(struct rte_eth_dev *dev, struct idpf_tx_queue *txq,
> +		     uint16_t queue_idx, uint16_t nb_desc,
> +		     unsigned int socket_id)
> +{
> +	struct idpf_vport *vport = dev->data->dev_private;
> +	const struct rte_memzone *mz;
> +	struct idpf_tx_queue *cq;
> +	int ret;
> +
> +	cq = rte_zmalloc_socket("cpfl splitq cq",
> +				sizeof(struct idpf_tx_queue),
> +				RTE_CACHE_LINE_SIZE,
> +				socket_id);
> +	if (cq == NULL) {
> +		PMD_INIT_LOG(ERR, "Failed to allocate memory for Tx compl queue");
> +		ret = -ENOMEM;
> +		goto err_cq_alloc;
> +	}
> +
> +	cq->nb_tx_desc = nb_desc;
> +	cq->queue_id = vport->chunks_info.tx_compl_start_qid + queue_idx;
> +	cq->port_id = dev->data->port_id;
> +	cq->txqs = dev->data->tx_queues;
> +	cq->tx_start_qid = vport->chunks_info.tx_start_qid;
> +
> +	mz = cpfl_dma_zone_reserve(dev, queue_idx, nb_desc,
> +				   VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION,
> +				   socket_id, true);
> +	if (mz == NULL) {
Need an error log here?
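E.g., something like (message text illustrative only):

	PMD_INIT_LOG(ERR, "Failed to reserve DMA memory for Tx compl ring");
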
> +		ret = -ENOMEM;
> +		goto err_mz_reserve;
> +	}
> +	cq->tx_ring_phys_addr = mz->iova;
> +	cq->compl_ring = mz->addr;
> +	cq->mz = mz;
> +	reset_split_tx_complq(cq);
> +
> +	txq->complq = cq;
> +
> +	return 0;
> +
> +err_mz_reserve:
> +	rte_free(cq);
> +err_cq_alloc:
> +	return ret;
> +}
> +
> +int
> +cpfl_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
> +		    uint16_t nb_desc, unsigned int socket_id,
> +		    const struct rte_eth_txconf *tx_conf)
> +{
> +	struct idpf_vport *vport = dev->data->dev_private;
> +	struct idpf_adapter *adapter = vport->adapter;
> +	uint16_t tx_rs_thresh, tx_free_thresh;
> +	struct idpf_hw *hw = &adapter->hw;
> +	const struct rte_memzone *mz;
> +	struct idpf_tx_queue *txq;
> +	uint64_t offloads;
> +	uint16_t len;
> +	bool is_splitq;
> +	int ret;
> +
> +	offloads = tx_conf->offloads | dev->data->dev_conf.txmode.offloads;
> +
> +	tx_rs_thresh = (uint16_t)((tx_conf->tx_rs_thresh > 0) ?
> +		tx_conf->tx_rs_thresh : CPFL_DEFAULT_TX_RS_THRESH);
> +	tx_free_thresh = (uint16_t)((tx_conf->tx_free_thresh > 0) ?
> +		tx_conf->tx_free_thresh : CPFL_DEFAULT_TX_FREE_THRESH);
> +	if (check_tx_thresh(nb_desc, tx_rs_thresh, tx_free_thresh) != 0)
> +		return -EINVAL;
> +
> +	/* Allocate the TX queue data structure. */
> +	txq = rte_zmalloc_socket("cpfl txq",
> +				 sizeof(struct idpf_tx_queue),
> +				 RTE_CACHE_LINE_SIZE,
> +				 socket_id);
> +	if (txq == NULL) {
> +		PMD_INIT_LOG(ERR, "Failed to allocate memory for tx queue structure");
> +		ret = -ENOMEM;
> +		goto err_txq_alloc;
> +	}
> +
> +	is_splitq = !!(vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT);
> +
> +	txq->nb_tx_desc = nb_desc;
> +	txq->rs_thresh = tx_rs_thresh;
> +	txq->free_thresh = tx_free_thresh;
> +	txq->queue_id = vport->chunks_info.tx_start_qid + queue_idx;
> +	txq->port_id = dev->data->port_id;
> +	txq->offloads = cpfl_tx_offload_convert(offloads);
> +	txq->tx_deferred_start = tx_conf->tx_deferred_start;
> +
> +	if (is_splitq)
> +		len = 2 * nb_desc;
> +	else
> +		len = nb_desc;
> +	txq->sw_nb_desc = len;
> +
> +	/* Allocate TX hardware ring descriptors. */
> +	mz = cpfl_dma_zone_reserve(dev, queue_idx, nb_desc, VIRTCHNL2_QUEUE_TYPE_TX,
> +				   socket_id, is_splitq);
> +	if (mz == NULL) {
> +		ret = -ENOMEM;
> +		goto err_mz_reserve;
> +	}
> +	txq->tx_ring_phys_addr = mz->iova;
> +	txq->mz = mz;
> +
> +	txq->sw_ring = rte_zmalloc_socket("cpfl tx sw ring",
> +					  sizeof(struct idpf_tx_entry) * len,
> +					  RTE_CACHE_LINE_SIZE, socket_id);
> +	if (txq->sw_ring == NULL) {
> +		PMD_INIT_LOG(ERR, "Failed to allocate memory for SW TX ring");
> +		ret = -ENOMEM;
> +		goto err_sw_ring_alloc;
> +	}
> +
> +	if (!is_splitq) {
> +		txq->tx_ring = mz->addr;
> +		reset_single_tx_queue(txq);
> +	} else {
> +		txq->desc_ring = mz->addr;
> +		reset_split_tx_descq(txq);
> +
> +		/* Setup tx completion queue if split model */
> +		ret = cpfl_tx_complq_setup(dev, txq, queue_idx,
> +					   2 * nb_desc, socket_id);
> +		if (ret != 0)
> +			goto err_complq_setup;
> +	}
> +
> +	txq->qtx_tail = hw->hw_addr + (vport->chunks_info.tx_qtail_start +
> +			queue_idx * vport->chunks_info.tx_qtail_spacing);
> +	txq->q_set = true;
> +	dev->data->tx_queues[queue_idx] = txq;
> +
> +	return 0;
> +
> +err_complq_setup:
> +err_sw_ring_alloc:
> +	cpfl_dma_zone_release(mz);
> +err_mz_reserve:
> +	rte_free(txq);
> +err_txq_alloc:
> +	return ret;
> +}
> diff --git a/drivers/net/cpfl/cpfl_rxtx.h b/drivers/net/cpfl/cpfl_rxtx.h
> new file mode 100644
> index 0000000000..ec42478393
> --- /dev/null
> +++ b/drivers/net/cpfl/cpfl_rxtx.h
> @@ -0,0 +1,25 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2022 Intel Corporation
> + */
> +
> +#ifndef _CPFL_RXTX_H_
> +#define _CPFL_RXTX_H_
> +
> +#include <idpf_common_rxtx.h>
> +#include "cpfl_ethdev.h"
> +
> +/* In QLEN must be whole number of 32 descriptors. */
> +#define CPFL_ALIGN_RING_DESC	32
> +#define CPFL_MIN_RING_DESC	32
> +#define CPFL_MAX_RING_DESC	4096
> +#define CPFL_DMA_MEM_ALIGN	4096
> +/* Base address of the HW descriptor ring should be 128B aligned. */
> +#define CPFL_RING_BASE_ALIGN	128
> +
> +#define CPFL_DEFAULT_TX_RS_THRESH	32
> +#define CPFL_DEFAULT_TX_FREE_THRESH	32
> +
> +int cpfl_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
> +			uint16_t nb_desc, unsigned int socket_id,
> +			const struct rte_eth_txconf *tx_conf);
> +#endif /* _CPFL_RXTX_H_ */
> diff --git a/drivers/net/cpfl/meson.build b/drivers/net/cpfl/meson.build
> new file mode 100644
> index 0000000000..106cc97e60
> --- /dev/null
> +++ b/drivers/net/cpfl/meson.build
> @@ -0,0 +1,14 @@
> +# SPDX-License-Identifier: BSD-3-Clause
> +# Copyright(c) 2022 Intel Corporation
> +
> +if is_windows
> +    build = false
> +    reason = 'not supported on Windows'
> +    subdir_done()
> +endif
> +
> +deps += ['common_idpf']
> +
> +sources = files(
> +        'cpfl_ethdev.c',
> +)
> \ No newline at end of file
> diff --git a/drivers/net/meson.build b/drivers/net/meson.build
> index 6470bf3636..a8ca338875 100644
> --- a/drivers/net/meson.build
> +++ b/drivers/net/meson.build
> @@ -13,6 +13,7 @@ drivers = [
>          'bnxt',
>          'bonding',
>          'cnxk',
> +        'cpfl',
>          'cxgbe',
>          'dpaa',
>          'dpaa2',
> --
> 2.25.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v3 00/21] add support for cpfl PMD in DPDK
  2023-01-13  8:19 ` [PATCH v2 00/21] add support for cpfl PMD in DPDK Mingxia Liu
                     ` (21 preceding siblings ...)
  2023-01-13 12:49   ` [PATCH v2 00/21] add support for cpfl PMD in DPDK Zhang, Helin
@ 2023-01-18  7:31   ` Mingxia Liu
  2023-01-18  7:31     ` [PATCH v3 01/21] net/cpfl: support device initialization Mingxia Liu
                       ` (17 more replies)
  2023-01-18  7:33   ` [PATCH v3 15/21] net/cpfl: add AVX512 data path for single queue model Mingxia Liu
  2023-01-18  7:57   ` [PATCH v4 00/21] add support for cpfl PMD in DPDK Mingxia Liu
  24 siblings, 18 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-01-18  7:31 UTC (permalink / raw)
  To: dev, qi.z.zhang, jingjing.wu, beilei.xing; +Cc: wenjun1.wu, Mingxia Liu

This patchset introduces the cpfl (Control Plane Function Library) PMD
for the Intel® IPU E2100's Configure Physical Function (Device ID: 0x1453).

The cpfl PMD inherits all the features from the idpf PMD, which follows
the ongoing standard data plane function spec:
https://www.oasis-open.org/committees/tc_home.php?wg_abbrev=idpf
Besides, it will also support more device-specific hardware offloading
features through DPDK's control path (e.g. hairpin, rte_flow ...), which is
different from the idpf PMD, and that's why we need a new cpfl PMD.

This patchset mainly focuses on features equivalent to those of the idpf PMD.
To avoid duplicated code, the patchset depends on the patchsets below, which
move the common part from net/idpf into common/idpf as a shared library.

This patchset is based on the idpf PMD code:
http://patches.dpdk.org/project/dpdk/cover/20230106090501.9106-1-beilei.xing@intel.com/
http://patches.dpdk.org/project/dpdk/cover/20230117080622.105657-1-beilei.xing@intel.com/
http://patches.dpdk.org/project/dpdk/cover/20230118035139.485060-1-wenjun1.wu@intel.com/
http://patches.dpdk.org/project/dpdk/cover/20230118071440.902155-1-mingxia.liu@intel.com/

v2 changes:
 - rebase to the new baseline.
 - Fix rss lut config issue.
v3 changes:
 - rebase to the new baseline.

Mingxia Liu (21):
  net/cpfl: support device initialization
  net/cpfl: add Tx queue setup
  net/cpfl: add Rx queue setup
  net/cpfl: support device start and stop
  net/cpfl: support queue start
  net/cpfl: support queue stop
  net/cpfl: support queue release
  net/cpfl: support MTU configuration
  net/cpfl: support basic Rx data path
  net/cpfl: support basic Tx data path
  net/cpfl: support write back based on ITR expire
  net/cpfl: support RSS
  net/cpfl: support Rx offloading
  net/cpfl: support Tx offloading
  net/cpfl: add AVX512 data path for single queue model
  net/cpfl: support timestamp offload
  net/cpfl: add AVX512 data path for split queue model
  net/cpfl: add hw statistics
  net/cpfl: add RSS set/get ops
  net/cpfl: support single q scatter RX datapath
  net/cpfl: add xstats ops

 MAINTAINERS                             |    9 +
 doc/guides/nics/cpfl.rst                |   88 ++
 doc/guides/nics/features/cpfl.ini       |   17 +
 doc/guides/rel_notes/release_23_03.rst  |    5 +
 drivers/net/cpfl/cpfl_ethdev.c          | 1489 +++++++++++++++++++++++
 drivers/net/cpfl/cpfl_ethdev.h          |   95 ++
 drivers/net/cpfl/cpfl_logs.h            |   32 +
 drivers/net/cpfl/cpfl_rxtx.c            |  900 ++++++++++++++
 drivers/net/cpfl/cpfl_rxtx.h            |   44 +
 drivers/net/cpfl/cpfl_rxtx_vec_common.h |  115 ++
 drivers/net/cpfl/meson.build            |   38 +
 drivers/net/idpf/idpf_ethdev.c          |    3 +-
 drivers/net/meson.build                 |    1 +
 13 files changed, 2835 insertions(+), 1 deletion(-)
 create mode 100644 doc/guides/nics/cpfl.rst
 create mode 100644 doc/guides/nics/features/cpfl.ini
 create mode 100644 drivers/net/cpfl/cpfl_ethdev.c
 create mode 100644 drivers/net/cpfl/cpfl_ethdev.h
 create mode 100644 drivers/net/cpfl/cpfl_logs.h
 create mode 100644 drivers/net/cpfl/cpfl_rxtx.c
 create mode 100644 drivers/net/cpfl/cpfl_rxtx.h
 create mode 100644 drivers/net/cpfl/cpfl_rxtx_vec_common.h
 create mode 100644 drivers/net/cpfl/meson.build

-- 
2.25.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v3 01/21] net/cpfl: support device initialization
  2023-01-18  7:31   ` [PATCH v3 " Mingxia Liu
@ 2023-01-18  7:31     ` Mingxia Liu
  2023-01-18  7:31     ` [PATCH v3 02/21] net/cpfl: add Tx queue setup Mingxia Liu
                       ` (16 subsequent siblings)
  17 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-01-18  7:31 UTC (permalink / raw)
  To: dev, qi.z.zhang, jingjing.wu, beilei.xing; +Cc: wenjun1.wu, Mingxia Liu

Support device init and add the following dev ops:
 - dev_configure
 - dev_close
 - dev_infos_get
 - link_update
 - cpfl_dev_supported_ptypes_get

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 MAINTAINERS                            |   9 +
 doc/guides/nics/cpfl.rst               |  66 +++
 doc/guides/nics/features/cpfl.ini      |  12 +
 doc/guides/rel_notes/release_23_03.rst |   5 +
 drivers/net/cpfl/cpfl_ethdev.c         | 768 +++++++++++++++++++++++++
 drivers/net/cpfl/cpfl_ethdev.h         |  78 +++
 drivers/net/cpfl/cpfl_logs.h           |  32 ++
 drivers/net/cpfl/cpfl_rxtx.c           | 244 ++++++++
 drivers/net/cpfl/cpfl_rxtx.h           |  25 +
 drivers/net/cpfl/meson.build           |  14 +
 drivers/net/meson.build                |   1 +
 11 files changed, 1254 insertions(+)
 create mode 100644 doc/guides/nics/cpfl.rst
 create mode 100644 doc/guides/nics/features/cpfl.ini
 create mode 100644 drivers/net/cpfl/cpfl_ethdev.c
 create mode 100644 drivers/net/cpfl/cpfl_ethdev.h
 create mode 100644 drivers/net/cpfl/cpfl_logs.h
 create mode 100644 drivers/net/cpfl/cpfl_rxtx.c
 create mode 100644 drivers/net/cpfl/cpfl_rxtx.h
 create mode 100644 drivers/net/cpfl/meson.build

diff --git a/MAINTAINERS b/MAINTAINERS
index 22ef2ea4b9..970acc5751 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -780,6 +780,15 @@ F: drivers/common/idpf/
 F: doc/guides/nics/idpf.rst
 F: doc/guides/nics/features/idpf.ini
 
+Intel cpfl
+M: Qi Zhang <qi.z.zhang@intel.com>
+M: Jingjing Wu <jingjing.wu@intel.com>
+M: Beilei Xing <beilei.xing@intel.com>
+T: git://dpdk.org/next/dpdk-next-net-intel
+F: drivers/net/cpfl/
+F: doc/guides/nics/cpfl.rst
+F: doc/guides/nics/features/cpfl.ini
+
 Intel igc
 M: Junfeng Guo <junfeng.guo@intel.com>
 M: Simei Su <simei.su@intel.com>
diff --git a/doc/guides/nics/cpfl.rst b/doc/guides/nics/cpfl.rst
new file mode 100644
index 0000000000..35a4bd44c6
--- /dev/null
+++ b/doc/guides/nics/cpfl.rst
@@ -0,0 +1,66 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+   Copyright(c) 2022 Intel Corporation.
+
+.. include:: <isonum.txt>
+
+CPFL Poll Mode Driver
+=====================
+
+The [*EXPERIMENTAL*] cpfl PMD (**librte_net_cpfl**) provides poll mode driver support
+for Intel\ |reg| Infrastructure Processing Unit (Intel\ |reg| IPU) E2100.
+
+
+Linux Prerequisites
+-------------------
+
+Follow the DPDK :doc:`../linux_gsg/index` to setup the basic DPDK environment.
+
+To get better performance on Intel platforms,
+please follow the :doc:`../linux_gsg/nic_perf_intel_platform`.
+
+
+Pre-Installation Configuration
+------------------------------
+
+Runtime Config Options
+~~~~~~~~~~~~~~~~~~~~~~
+
+- ``vport`` (default ``0``)
+
+  The PMD supports creation of multiple vports for one PCI device,
+  each vport corresponds to a single ethdev.
+  The user can specify the vports with specific ID to be created, for example::
+
+    -a ca:00.0,vport=[0,2,3]
+
+  Then the PMD will create 3 vports (ethdevs) for device ``ca:00.0``.
+
+  If the parameter is not provided, the vport 0 will be created by default.
+
+- ``rx_single`` (default ``0``)
+
+  There are two queue modes supported by Intel\ |reg| IPU Ethernet E21000 Series,
+  single queue mode and split queue mode for Rx queue.
+  User can choose Rx queue mode, example::
+
+    -a ca:00.0,rx_single=1
+
+  Then the PMD will configure Rx queue with single queue mode.
+  Otherwise, split queue mode is chosen by default.
+
+- ``tx_single`` (default ``0``)
+
+  There are two queue modes supported by Intel\ |reg| IPU Ethernet E21000 Series,
+  single queue mode and split queue mode for Tx queue.
+  User can choose Tx queue mode, example::
+
+    -a ca:00.0,tx_single=1
+
+  Then the PMD will configure Tx queue with single queue mode.
+  Otherwise, split queue mode is chosen by default.
+
+
+Driver compilation and testing
+------------------------------
+
+Refer to the document :doc:`build_and_test` for details.
\ No newline at end of file
diff --git a/doc/guides/nics/features/cpfl.ini b/doc/guides/nics/features/cpfl.ini
new file mode 100644
index 0000000000..a2d1ca9e15
--- /dev/null
+++ b/doc/guides/nics/features/cpfl.ini
@@ -0,0 +1,12 @@
+;
+; Supported features of the 'cpfl' network poll mode driver.
+;
+; Refer to default.ini for the full list of available PMD features.
+;
+; A feature with "P" indicates only be supported when non-vector path
+; is selected.
+;
+[Features]
+Linux                = Y
+x86-32               = Y
+x86-64               = Y
diff --git a/doc/guides/rel_notes/release_23_03.rst b/doc/guides/rel_notes/release_23_03.rst
index b8c5b68d6c..465a25e91e 100644
--- a/doc/guides/rel_notes/release_23_03.rst
+++ b/doc/guides/rel_notes/release_23_03.rst
@@ -55,6 +55,11 @@ New Features
      Also, make sure to start the actual text at the margin.
      =======================================================
 
+* **Added Intel cpfl driver.**
+
+  Added the new ``cpfl`` net driver
+  for Intel\ |reg| Infrastructure Processing Unit (Intel\ |reg| IPU) E2100.
+  See the :doc:`../nics/cpfl` NIC guide for more details on this new driver.
 
 Removed Items
 -------------
diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
new file mode 100644
index 0000000000..2ac53bc5b0
--- /dev/null
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -0,0 +1,768 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2022 Intel Corporation
+ */
+
+#include <rte_atomic.h>
+#include <rte_eal.h>
+#include <rte_ether.h>
+#include <rte_malloc.h>
+#include <rte_memzone.h>
+#include <rte_dev.h>
+#include <errno.h>
+#include <rte_alarm.h>
+
+#include "cpfl_ethdev.h"
+
+#define CPFL_TX_SINGLE_Q	"tx_single"
+#define CPFL_RX_SINGLE_Q	"rx_single"
+#define CPFL_VPORT		"vport"
+
+rte_spinlock_t cpfl_adapter_lock;
+/* A list for all adapters, one adapter matches one PCI device */
+struct cpfl_adapter_list cpfl_adapter_list;
+bool cpfl_adapter_list_init;
+
+static const char * const cpfl_valid_args[] = {
+	CPFL_TX_SINGLE_Q,
+	CPFL_RX_SINGLE_Q,
+	CPFL_VPORT,
+	NULL
+};
+
+static int
+cpfl_dev_link_update(struct rte_eth_dev *dev,
+		     __rte_unused int wait_to_complete)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct rte_eth_link new_link;
+
+	memset(&new_link, 0, sizeof(new_link));
+
+	switch (vport->link_speed) {
+	case RTE_ETH_SPEED_NUM_10M:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_10M;
+		break;
+	case RTE_ETH_SPEED_NUM_100M:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_100M;
+		break;
+	case RTE_ETH_SPEED_NUM_1G:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_1G;
+		break;
+	case RTE_ETH_SPEED_NUM_10G:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_10G;
+		break;
+	case RTE_ETH_SPEED_NUM_20G:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_20G;
+		break;
+	case RTE_ETH_SPEED_NUM_25G:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_25G;
+		break;
+	case RTE_ETH_SPEED_NUM_40G:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_40G;
+		break;
+	case RTE_ETH_SPEED_NUM_50G:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_50G;
+		break;
+	case RTE_ETH_SPEED_NUM_100G:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_100G;
+		break;
+	case RTE_ETH_SPEED_NUM_200G:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_200G;
+		break;
+	default:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+	}
+
+	new_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+	new_link.link_status = vport->link_up ? RTE_ETH_LINK_UP :
+		RTE_ETH_LINK_DOWN;
+	new_link.link_autoneg = !(dev->data->dev_conf.link_speeds &
+				  RTE_ETH_LINK_SPEED_FIXED);
+
+	return rte_eth_linkstatus_set(dev, &new_link);
+}
+
+static int
+cpfl_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_adapter *adapter = vport->adapter;
+
+	dev_info->max_rx_queues = adapter->caps.max_rx_q;
+	dev_info->max_tx_queues = adapter->caps.max_tx_q;
+	dev_info->min_rx_bufsize = CPFL_MIN_BUF_SIZE;
+	dev_info->max_rx_pktlen = vport->max_mtu + CPFL_ETH_OVERHEAD;
+
+	dev_info->max_mtu = vport->max_mtu;
+	dev_info->min_mtu = RTE_ETHER_MIN_MTU;
+
+	return 0;
+}
+
+static const uint32_t *
+cpfl_dev_supported_ptypes_get(struct rte_eth_dev *dev __rte_unused)
+{
+	static const uint32_t ptypes[] = {
+		RTE_PTYPE_L2_ETHER,
+		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN,
+		RTE_PTYPE_L3_IPV6_EXT_UNKNOWN,
+		RTE_PTYPE_L4_FRAG,
+		RTE_PTYPE_L4_UDP,
+		RTE_PTYPE_L4_TCP,
+		RTE_PTYPE_L4_SCTP,
+		RTE_PTYPE_L4_ICMP,
+		RTE_PTYPE_UNKNOWN
+	};
+
+	return ptypes;
+}
+
+static int
+cpfl_dev_configure(struct rte_eth_dev *dev)
+{
+	struct rte_eth_conf *conf = &dev->data->dev_conf;
+
+	if (conf->link_speeds & RTE_ETH_LINK_SPEED_FIXED) {
+		PMD_INIT_LOG(ERR, "Setting link speed is not supported");
+		return -ENOTSUP;
+	}
+
+	if (conf->txmode.mq_mode != RTE_ETH_MQ_TX_NONE) {
+		PMD_INIT_LOG(ERR, "Multi-queue TX mode %d is not supported",
+			     conf->txmode.mq_mode);
+		return -ENOTSUP;
+	}
+
+	if (conf->lpbk_mode != 0) {
+		PMD_INIT_LOG(ERR, "Loopback operation mode %d is not supported",
+			     conf->lpbk_mode);
+		return -ENOTSUP;
+	}
+
+	if (conf->dcb_capability_en != 0) {
+		PMD_INIT_LOG(ERR, "Priority Flow Control(PFC) if not supported");
+		return -ENOTSUP;
+	}
+
+	if (conf->intr_conf.lsc != 0) {
+		PMD_INIT_LOG(ERR, "LSC interrupt is not supported");
+		return -ENOTSUP;
+	}
+
+	if (conf->intr_conf.rxq != 0) {
+		PMD_INIT_LOG(ERR, "RXQ interrupt is not supported");
+		return -ENOTSUP;
+	}
+
+	if (conf->intr_conf.rmv != 0) {
+		PMD_INIT_LOG(ERR, "RMV interrupt is not supported");
+		return -ENOTSUP;
+	}
+
+	return 0;
+}
+
+static int
+cpfl_dev_close(struct rte_eth_dev *dev)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct cpfl_adapter_ext *adapter = CPFL_ADAPTER_TO_EXT(vport->adapter);
+
+	idpf_vport_deinit(vport);
+
+	adapter->cur_vports &= ~RTE_BIT32(vport->devarg_id);
+	adapter->cur_vport_nb--;
+	dev->data->dev_private = NULL;
+	adapter->vports[vport->sw_idx] = NULL;
+	rte_free(vport);
+
+	return 0;
+}
+
+static int
+insert_value(struct cpfl_devargs *devargs, uint16_t id)
+{
+	uint16_t i;
+
+	/* ignore duplicate */
+	for (i = 0; i < devargs->req_vport_nb; i++) {
+		if (devargs->req_vports[i] == id)
+			return 0;
+	}
+
+	if (devargs->req_vport_nb >= RTE_DIM(devargs->req_vports)) {
+		PMD_INIT_LOG(ERR, "Total vport number can't be > %d",
+			     CPFL_MAX_VPORT_NUM);
+		return -EINVAL;
+	}
+
+	devargs->req_vports[devargs->req_vport_nb] = id;
+	devargs->req_vport_nb++;
+
+	return 0;
+}
+
+static const char *
+parse_range(const char *value, struct cpfl_devargs *devargs)
+{
+	uint16_t lo, hi, i;
+	int n = 0;
+	int result;
+	const char *pos = value;
+
+	result = sscanf(value, "%hu%n-%hu%n", &lo, &n, &hi, &n);
+	if (result == 1) {
+		if (lo >= CPFL_MAX_VPORT_NUM)
+			return NULL;
+		if (insert_value(devargs, lo) != 0)
+			return NULL;
+	} else if (result == 2) {
+		if (lo > hi || hi >= CPFL_MAX_VPORT_NUM)
+			return NULL;
+		for (i = lo; i <= hi; i++) {
+			if (insert_value(devargs, i) != 0)
+				return NULL;
+		}
+	} else {
+		return NULL;
+	}
+
+	return pos + n;
+}
+
+static int
+parse_vport(const char *key, const char *value, void *args)
+{
+	struct cpfl_devargs *devargs = args;
+	const char *pos = value;
+
+	devargs->req_vport_nb = 0;
+
+	if (*pos == '[')
+		pos++;
+
+	while (1) {
+		pos = parse_range(pos, devargs);
+		if (pos == NULL) {
+			PMD_INIT_LOG(ERR, "invalid value:\"%s\" for key:\"%s\", ",
+				     value, key);
+			return -EINVAL;
+		}
+		if (*pos != ',')
+			break;
+		pos++;
+	}
+
+	if (*value == '[' && *pos != ']') {
+		PMD_INIT_LOG(ERR, "invalid value:\"%s\" for key:\"%s\", ",
+			     value, key);
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static int
+parse_bool(const char *key, const char *value, void *args)
+{
+	int *i = args;
+	char *end;
+	int num;
+
+	errno = 0;
+
+	num = strtoul(value, &end, 10);
+
+	if (errno == ERANGE || (num != 0 && num != 1)) {
+		PMD_INIT_LOG(ERR, "invalid value:\"%s\" for key:\"%s\", value must be 0 or 1",
+			value, key);
+		return -EINVAL;
+	}
+
+	*i = num;
+	return 0;
+}
+
+static int
+cpfl_parse_devargs(struct rte_pci_device *pci_dev, struct cpfl_adapter_ext *adapter,
+		   struct cpfl_devargs *cpfl_args)
+{
+	struct rte_devargs *devargs = pci_dev->device.devargs;
+	struct rte_kvargs *kvlist;
+	int i, ret;
+
+	cpfl_args->req_vport_nb = 0;
+
+	if (devargs == NULL)
+		return 0;
+
+	kvlist = rte_kvargs_parse(devargs->args, cpfl_valid_args);
+	if (kvlist == NULL) {
+		PMD_INIT_LOG(ERR, "invalid kvargs key");
+		return -EINVAL;
+	}
+
+	/* check parsed devargs */
+	if (adapter->cur_vport_nb + cpfl_args->req_vport_nb >
+	    CPFL_MAX_VPORT_NUM) {
+		PMD_INIT_LOG(ERR, "Total vport number can't be > %d",
+			     CPFL_MAX_VPORT_NUM);
+		ret = -EINVAL;
+		goto bail;
+	}
+
+	for (i = 0; i < cpfl_args->req_vport_nb; i++) {
+		if (adapter->cur_vports & RTE_BIT32(cpfl_args->req_vports[i])) {
+			PMD_INIT_LOG(ERR, "Vport %d has been created",
+				     cpfl_args->req_vports[i]);
+			ret = -EINVAL;
+			goto bail;
+		}
+	}
+
+	ret = rte_kvargs_process(kvlist, CPFL_VPORT, &parse_vport,
+				 cpfl_args);
+	if (ret != 0)
+		goto bail;
+
+	ret = rte_kvargs_process(kvlist, CPFL_TX_SINGLE_Q, &parse_bool,
+				 &adapter->base.txq_model);
+	if (ret != 0)
+		goto bail;
+
+	ret = rte_kvargs_process(kvlist, CPFL_RX_SINGLE_Q, &parse_bool,
+				 &adapter->base.rxq_model);
+	if (ret != 0)
+		goto bail;
+
+bail:
+	rte_kvargs_free(kvlist);
+	return ret;
+}
+
+static struct idpf_vport *
+cpfl_find_vport(struct cpfl_adapter_ext *adapter, uint32_t vport_id)
+{
+	struct idpf_vport *vport = NULL;
+	int i;
+
+	for (i = 0; i < adapter->cur_vport_nb; i++) {
+		vport = adapter->vports[i];
+		if (vport->vport_id != vport_id)
+			continue;
+		else
+			return vport;
+	}
+
+	return vport;
+}
+
+static void
+cpfl_handle_event_msg(struct idpf_vport *vport, uint8_t *msg, uint16_t msglen)
+{
+	struct virtchnl2_event *vc_event = (struct virtchnl2_event *)msg;
+	struct rte_eth_dev *dev = (struct rte_eth_dev *)vport->dev;
+
+	if (msglen < sizeof(struct virtchnl2_event)) {
+		PMD_DRV_LOG(ERR, "Error event");
+		return;
+	}
+
+	switch (vc_event->event) {
+	case VIRTCHNL2_EVENT_LINK_CHANGE:
+		PMD_DRV_LOG(DEBUG, "VIRTCHNL2_EVENT_LINK_CHANGE");
+		vport->link_up = vc_event->link_status;
+		vport->link_speed = vc_event->link_speed;
+		cpfl_dev_link_update(dev, 0);
+		break;
+	default:
+		PMD_DRV_LOG(ERR, " unknown event received %u", vc_event->event);
+		break;
+	}
+}
+
+static void
+cpfl_handle_virtchnl_msg(struct cpfl_adapter_ext *adapter_ex)
+{
+	struct idpf_adapter *adapter = &adapter_ex->base;
+	struct idpf_dma_mem *dma_mem = NULL;
+	struct idpf_hw *hw = &adapter->hw;
+	struct virtchnl2_event *vc_event;
+	struct idpf_ctlq_msg ctlq_msg;
+	enum idpf_mbx_opc mbx_op;
+	struct idpf_vport *vport;
+	enum virtchnl_ops vc_op;
+	uint16_t pending = 1;
+	int ret;
+
+	while (pending) {
+		ret = idpf_ctlq_recv(hw->arq, &pending, &ctlq_msg);
+		if (ret) {
+			PMD_DRV_LOG(INFO, "Failed to read msg from virtual channel, ret: %d", ret);
+			return;
+		}
+
+		rte_memcpy(adapter->mbx_resp, ctlq_msg.ctx.indirect.payload->va,
+			   IDPF_DFLT_MBX_BUF_SIZE);
+
+		mbx_op = rte_le_to_cpu_16(ctlq_msg.opcode);
+		vc_op = rte_le_to_cpu_32(ctlq_msg.cookie.mbx.chnl_opcode);
+		adapter->cmd_retval = rte_le_to_cpu_32(ctlq_msg.cookie.mbx.chnl_retval);
+
+		switch (mbx_op) {
+		case idpf_mbq_opc_send_msg_to_peer_pf:
+			if (vc_op == VIRTCHNL2_OP_EVENT) {
+				if (ctlq_msg.data_len < sizeof(struct virtchnl2_event)) {
+					PMD_DRV_LOG(ERR, "Error event");
+					return;
+				}
+				vc_event = (struct virtchnl2_event *)adapter->mbx_resp;
+				vport = cpfl_find_vport(adapter_ex, vc_event->vport_id);
+				if (!vport) {
+					PMD_DRV_LOG(ERR, "Can't find vport.");
+					return;
+				}
+				cpfl_handle_event_msg(vport, adapter->mbx_resp,
+						      ctlq_msg.data_len);
+			} else {
+				if (vc_op == adapter->pend_cmd)
+					notify_cmd(adapter, adapter->cmd_retval);
+				else
+					PMD_DRV_LOG(ERR, "command mismatch, expect %u, get %u",
+						    adapter->pend_cmd, vc_op);
+
+				PMD_DRV_LOG(DEBUG, " Virtual channel response is received,"
+					    "opcode = %d", vc_op);
+			}
+			goto post_buf;
+		default:
+			PMD_DRV_LOG(DEBUG, "Request %u is not supported yet", mbx_op);
+		}
+	}
+
+post_buf:
+	if (ctlq_msg.data_len)
+		dma_mem = ctlq_msg.ctx.indirect.payload;
+	else
+		pending = 0;
+
+	ret = idpf_ctlq_post_rx_buffs(hw, hw->arq, &pending, &dma_mem);
+	if (ret && dma_mem)
+		idpf_free_dma_mem(hw, dma_mem);
+}
+
+static void
+cpfl_dev_alarm_handler(void *param)
+{
+	struct cpfl_adapter_ext *adapter = param;
+
+	cpfl_handle_virtchnl_msg(adapter);
+
+	rte_eal_alarm_set(CPFL_ALARM_INTERVAL, cpfl_dev_alarm_handler, adapter);
+}
+
+static int
+cpfl_adapter_ext_init(struct rte_pci_device *pci_dev, struct cpfl_adapter_ext *adapter)
+{
+	struct idpf_adapter *base = &adapter->base;
+	struct idpf_hw *hw = &base->hw;
+	int ret = 0;
+
+	hw->hw_addr = (void *)pci_dev->mem_resource[0].addr;
+	hw->hw_addr_len = pci_dev->mem_resource[0].len;
+	hw->back = base;
+	hw->vendor_id = pci_dev->id.vendor_id;
+	hw->device_id = pci_dev->id.device_id;
+	hw->subsystem_vendor_id = pci_dev->id.subsystem_vendor_id;
+
+	strncpy(adapter->name, pci_dev->device.name, PCI_PRI_STR_SIZE);
+
+	ret = idpf_adapter_init(base);
+	if (ret != 0) {
+		PMD_INIT_LOG(ERR, "Failed to init adapter");
+		goto err_adapter_init;
+	}
+
+	rte_eal_alarm_set(CPFL_ALARM_INTERVAL, cpfl_dev_alarm_handler, adapter);
+
+	adapter->max_vport_nb = adapter->base.caps.max_vports;
+
+	adapter->vports = rte_zmalloc("vports",
+				      adapter->max_vport_nb *
+				      sizeof(*adapter->vports),
+				      0);
+	if (adapter->vports == NULL) {
+		PMD_INIT_LOG(ERR, "Failed to allocate vports memory");
+		ret = -ENOMEM;
+		goto err_get_ptype;
+	}
+
+	adapter->cur_vports = 0;
+	adapter->cur_vport_nb = 0;
+
+	adapter->used_vecs_num = 0;
+
+	return ret;
+
+err_get_ptype:
+	idpf_adapter_deinit(base);
+err_adapter_init:
+	return ret;
+}
+
+static const struct eth_dev_ops cpfl_eth_dev_ops = {
+	.dev_configure			= cpfl_dev_configure,
+	.dev_close			= cpfl_dev_close,
+	.dev_infos_get			= cpfl_dev_info_get,
+	.link_update			= cpfl_dev_link_update,
+	.dev_supported_ptypes_get	= cpfl_dev_supported_ptypes_get,
+};
+
+static uint16_t
+cpfl_vport_idx_alloc(struct cpfl_adapter_ext *ad)
+{
+	uint16_t vport_idx;
+	uint16_t i;
+
+	for (i = 0; i < ad->max_vport_nb; i++) {
+		if (ad->vports[i] == NULL)
+			break;
+	}
+
+	if (i == ad->max_vport_nb)
+		vport_idx = CPFL_INVALID_VPORT_IDX;
+	else
+		vport_idx = i;
+
+	return vport_idx;
+}
+
+static int
+cpfl_dev_vport_init(struct rte_eth_dev *dev, void *init_params)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct cpfl_vport_param *param = init_params;
+	struct cpfl_adapter_ext *adapter = param->adapter;
+	/* for sending create vport virtchnl msg prepare */
+	struct virtchnl2_create_vport create_vport_info;
+	int ret = 0;
+
+	dev->dev_ops = &cpfl_eth_dev_ops;
+	vport->adapter = &adapter->base;
+	vport->sw_idx = param->idx;
+	vport->devarg_id = param->devarg_id;
+	vport->dev = dev;
+
+	memset(&create_vport_info, 0, sizeof(create_vport_info));
+	ret = idpf_create_vport_info_init(vport, &create_vport_info);
+	if (ret != 0) {
+		PMD_INIT_LOG(ERR, "Failed to init vport req_info.");
+		goto err;
+	}
+
+	ret = idpf_vport_init(vport, &create_vport_info, dev->data);
+	if (ret != 0) {
+		PMD_INIT_LOG(ERR, "Failed to init vports.");
+		goto err;
+	}
+
+	adapter->vports[param->idx] = vport;
+	adapter->cur_vports |= RTE_BIT32(param->devarg_id);
+	adapter->cur_vport_nb++;
+
+	dev->data->mac_addrs = rte_zmalloc(NULL, RTE_ETHER_ADDR_LEN, 0);
+	if (dev->data->mac_addrs == NULL) {
+		PMD_INIT_LOG(ERR, "Cannot allocate mac_addr memory.");
+		ret = -ENOMEM;
+		goto err_mac_addrs;
+	}
+
+	rte_ether_addr_copy((struct rte_ether_addr *)vport->default_mac_addr,
+			    &dev->data->mac_addrs[0]);
+
+	return 0;
+
+err_mac_addrs:
+	adapter->vports[param->idx] = NULL;  /* reset */
+	idpf_vport_deinit(vport);
+err:
+	return ret;
+}
+
+static const struct rte_pci_id pci_id_cpfl_map[] = {
+	{ RTE_PCI_DEVICE(IDPF_INTEL_VENDOR_ID, IDPF_DEV_ID_CPF) },
+	{ .vendor_id = 0, /* sentinel */ },
+};
+
+static struct cpfl_adapter_ext *
+cpfl_find_adapter_ext(struct rte_pci_device *pci_dev)
+{
+	struct cpfl_adapter_ext *adapter;
+	int found = 0;
+
+	if (pci_dev == NULL)
+		return NULL;
+
+	rte_spinlock_lock(&cpfl_adapter_lock);
+	TAILQ_FOREACH(adapter, &cpfl_adapter_list, next) {
+		if (strncmp(adapter->name, pci_dev->device.name, PCI_PRI_STR_SIZE) == 0) {
+			found = 1;
+			break;
+		}
+	}
+	rte_spinlock_unlock(&cpfl_adapter_lock);
+
+	if (found == 0)
+		return NULL;
+
+	return adapter;
+}
+
+static void
+cpfl_adapter_ext_deinit(struct cpfl_adapter_ext *adapter)
+{
+	rte_eal_alarm_cancel(cpfl_dev_alarm_handler, adapter);
+	idpf_adapter_deinit(&adapter->base);
+
+	rte_free(adapter->vports);
+	adapter->vports = NULL;
+}
+
+static int
+cpfl_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
+	       struct rte_pci_device *pci_dev)
+{
+	struct cpfl_vport_param vport_param;
+	struct cpfl_adapter_ext *adapter;
+	struct cpfl_devargs devargs;
+	char name[RTE_ETH_NAME_MAX_LEN];
+	int i, retval;
+	bool first_probe = false;
+
+	if (!cpfl_adapter_list_init) {
+		rte_spinlock_init(&cpfl_adapter_lock);
+		TAILQ_INIT(&cpfl_adapter_list);
+		cpfl_adapter_list_init = true;
+	}
+
+	adapter = cpfl_find_adapter_ext(pci_dev);
+	if (adapter == NULL) {
+		first_probe = true;
+		adapter = rte_zmalloc("cpfl_adapter_ext",
+						sizeof(struct cpfl_adapter_ext), 0);
+		if (adapter == NULL) {
+			PMD_INIT_LOG(ERR, "Failed to allocate adapter.");
+			return -ENOMEM;
+		}
+
+		retval = cpfl_adapter_ext_init(pci_dev, adapter);
+		if (retval != 0) {
+			PMD_INIT_LOG(ERR, "Failed to init adapter.");
+			return retval;
+		}
+
+		rte_spinlock_lock(&cpfl_adapter_lock);
+		TAILQ_INSERT_TAIL(&cpfl_adapter_list, adapter, next);
+		rte_spinlock_unlock(&cpfl_adapter_lock);
+	}
+
+	retval = cpfl_parse_devargs(pci_dev, adapter, &devargs);
+	if (retval != 0) {
+		PMD_INIT_LOG(ERR, "Failed to parse private devargs");
+		goto err;
+	}
+
+	if (devargs.req_vport_nb == 0) {
+		/* If no vport devarg, create vport 0 by default. */
+		vport_param.adapter = adapter;
+		vport_param.devarg_id = 0;
+		vport_param.idx = cpfl_vport_idx_alloc(adapter);
+		if (vport_param.idx == CPFL_INVALID_VPORT_IDX) {
+			PMD_INIT_LOG(ERR, "No space for vport %u", vport_param.devarg_id);
+			return 0;
+		}
+		snprintf(name, sizeof(name), "cpfl_%s_vport_0",
+			 pci_dev->device.name);
+		retval = rte_eth_dev_create(&pci_dev->device, name,
+					    sizeof(struct idpf_vport),
+					    NULL, NULL, cpfl_dev_vport_init,
+					    &vport_param);
+		if (retval != 0)
+			PMD_DRV_LOG(ERR, "Failed to create default vport 0");
+	} else {
+		for (i = 0; i < devargs.req_vport_nb; i++) {
+			vport_param.adapter = adapter;
+			vport_param.devarg_id = devargs.req_vports[i];
+			vport_param.idx = cpfl_vport_idx_alloc(adapter);
+			if (vport_param.idx == CPFL_INVALID_VPORT_IDX) {
+				PMD_INIT_LOG(ERR, "No space for vport %u", vport_param.devarg_id);
+				break;
+			}
+			snprintf(name, sizeof(name), "cpfl_%s_vport_%d",
+				 pci_dev->device.name,
+				 devargs.req_vports[i]);
+			retval = rte_eth_dev_create(&pci_dev->device, name,
+						    sizeof(struct idpf_vport),
+						    NULL, NULL, cpfl_dev_vport_init,
+						    &vport_param);
+			if (retval != 0)
+				PMD_DRV_LOG(ERR, "Failed to create vport %d",
+					    vport_param.devarg_id);
+		}
+	}
+
+	return 0;
+
+err:
+	if (first_probe) {
+		rte_spinlock_lock(&cpfl_adapter_lock);
+		TAILQ_REMOVE(&cpfl_adapter_list, adapter, next);
+		rte_spinlock_unlock(&cpfl_adapter_lock);
+		cpfl_adapter_ext_deinit(adapter);
+		rte_free(adapter);
+	}
+	return retval;
+}
+
+static int
+cpfl_pci_remove(struct rte_pci_device *pci_dev)
+{
+	struct cpfl_adapter_ext *adapter = cpfl_find_adapter_ext(pci_dev);
+	uint16_t port_id;
+
+	/* Ethdev created can be found RTE_ETH_FOREACH_DEV_OF through rte_device */
+	RTE_ETH_FOREACH_DEV_OF(port_id, &pci_dev->device) {
+			rte_eth_dev_close(port_id);
+	}
+
+	rte_spinlock_lock(&cpfl_adapter_lock);
+	TAILQ_REMOVE(&cpfl_adapter_list, adapter, next);
+	rte_spinlock_unlock(&cpfl_adapter_lock);
+	cpfl_adapter_ext_deinit(adapter);
+	rte_free(adapter);
+
+	return 0;
+}
+
+static struct rte_pci_driver rte_cpfl_pmd = {
+	.id_table	= pci_id_cpfl_map,
+	.drv_flags	= RTE_PCI_DRV_NEED_MAPPING,
+	.probe		= cpfl_pci_probe,
+	.remove		= cpfl_pci_remove,
+};
+
+/**
+ * Driver initialization routine.
+ * Invoked once at EAL init time.
+ * Register itself as the [Poll Mode] Driver of PCI devices.
+ */
+RTE_PMD_REGISTER_PCI(net_cpfl, rte_cpfl_pmd);
+RTE_PMD_REGISTER_PCI_TABLE(net_cpfl, pci_id_cpfl_map);
+RTE_PMD_REGISTER_KMOD_DEP(net_cpfl, "* igb_uio | vfio-pci");
+RTE_PMD_REGISTER_PARAM_STRING(net_cpfl,
+			      CPFL_TX_SINGLE_Q "=<0|1> "
+			      CPFL_RX_SINGLE_Q "=<0|1> "
+			      CPFL_VPORT "=[vport_set0,[vport_set1],...]");
+
+RTE_LOG_REGISTER_SUFFIX(cpfl_logtype_init, init, NOTICE);
+RTE_LOG_REGISTER_SUFFIX(cpfl_logtype_driver, driver, NOTICE);
diff --git a/drivers/net/cpfl/cpfl_ethdev.h b/drivers/net/cpfl/cpfl_ethdev.h
new file mode 100644
index 0000000000..83459b9c91
--- /dev/null
+++ b/drivers/net/cpfl/cpfl_ethdev.h
@@ -0,0 +1,78 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2022 Intel Corporation
+ */
+
+#ifndef _CPFL_ETHDEV_H_
+#define _CPFL_ETHDEV_H_
+
+#include <stdint.h>
+#include <rte_malloc.h>
+#include <rte_spinlock.h>
+#include <rte_ethdev.h>
+#include <rte_kvargs.h>
+#include <ethdev_driver.h>
+#include <ethdev_pci.h>
+
+#include "cpfl_logs.h"
+
+#include <idpf_common_device.h>
+#include <idpf_common_virtchnl.h>
+#include <base/idpf_prototype.h>
+#include <base/virtchnl2.h>
+
+#define CPFL_MAX_VPORT_NUM	8
+
+#define CPFL_INVALID_VPORT_IDX	0xffff
+
+#define CPFL_MIN_BUF_SIZE	1024
+#define CPFL_MAX_FRAME_SIZE	9728
+#define CPFL_DEFAULT_MTU	RTE_ETHER_MTU
+
+#define CPFL_NUM_MACADDR_MAX	64
+
+#define CPFL_VLAN_TAG_SIZE	4
+#define CPFL_ETH_OVERHEAD \
+	(RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN + CPFL_VLAN_TAG_SIZE * 2)
+
+#define CPFL_ADAPTER_NAME_LEN	(PCI_PRI_STR_SIZE + 1)
+
+#define CPFL_ALARM_INTERVAL	50000 /* us */
+
+/* Device IDs */
+#define IDPF_DEV_ID_CPF			0x1453
+
+struct cpfl_vport_param {
+	struct cpfl_adapter_ext *adapter;
+	uint16_t devarg_id; /* arg id from user */
+	uint16_t idx;       /* index in adapter->vports[]*/
+};
+
+/* Struct used when parse driver specific devargs */
+struct cpfl_devargs {
+	uint16_t req_vports[CPFL_MAX_VPORT_NUM];
+	uint16_t req_vport_nb;
+};
+
+struct cpfl_adapter_ext {
+	TAILQ_ENTRY(cpfl_adapter_ext) next;
+	struct idpf_adapter base;
+
+	char name[CPFL_ADAPTER_NAME_LEN];
+
+	struct idpf_vport **vports;
+	uint16_t max_vport_nb;
+
+	uint16_t cur_vports; /* bit mask of created vport */
+	uint16_t cur_vport_nb;
+
+	uint16_t used_vecs_num;
+};
+
+TAILQ_HEAD(cpfl_adapter_list, cpfl_adapter_ext);
+
+#define CPFL_DEV_TO_PCI(eth_dev)		\
+	RTE_DEV_TO_PCI((eth_dev)->device)
+#define CPFL_ADAPTER_TO_EXT(p)					\
+	container_of((p), struct cpfl_adapter_ext, base)
+
+#endif /* _CPFL_ETHDEV_H_ */
diff --git a/drivers/net/cpfl/cpfl_logs.h b/drivers/net/cpfl/cpfl_logs.h
new file mode 100644
index 0000000000..451bdfbd1d
--- /dev/null
+++ b/drivers/net/cpfl/cpfl_logs.h
@@ -0,0 +1,32 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2022 Intel Corporation
+ */
+
+#ifndef _CPFL_LOGS_H_
+#define _CPFL_LOGS_H_
+
+#include <rte_log.h>
+
+extern int cpfl_logtype_init;
+extern int cpfl_logtype_driver;
+
+#define PMD_INIT_LOG(level, ...) \
+	rte_log(RTE_LOG_ ## level, \
+		cpfl_logtype_init, \
+		RTE_FMT("%s(): " \
+			RTE_FMT_HEAD(__VA_ARGS__,) "\n", \
+			__func__, \
+			RTE_FMT_TAIL(__VA_ARGS__,)))
+
+#define PMD_DRV_LOG_RAW(level, ...) \
+	rte_log(RTE_LOG_ ## level, \
+		cpfl_logtype_driver, \
+		RTE_FMT("%s(): " \
+			RTE_FMT_HEAD(__VA_ARGS__,) "\n", \
+			__func__, \
+			RTE_FMT_TAIL(__VA_ARGS__,)))
+
+#define PMD_DRV_LOG(level, fmt, args...) \
+	PMD_DRV_LOG_RAW(level, fmt, ## args)
+
+#endif /* _CPFL_LOGS_H_ */
diff --git a/drivers/net/cpfl/cpfl_rxtx.c b/drivers/net/cpfl/cpfl_rxtx.c
new file mode 100644
index 0000000000..ea4a2002bf
--- /dev/null
+++ b/drivers/net/cpfl/cpfl_rxtx.c
@@ -0,0 +1,244 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2022 Intel Corporation
+ */
+
+#include <ethdev_driver.h>
+#include <rte_net.h>
+#include <rte_vect.h>
+
+#include "cpfl_ethdev.h"
+#include "cpfl_rxtx.h"
+
+static uint64_t
+cpfl_tx_offload_convert(uint64_t offload)
+{
+	uint64_t ol = 0;
+
+	if ((offload & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM) != 0)
+		ol |= IDPF_TX_OFFLOAD_IPV4_CKSUM;
+	if ((offload & RTE_ETH_TX_OFFLOAD_UDP_CKSUM) != 0)
+		ol |= IDPF_TX_OFFLOAD_UDP_CKSUM;
+	if ((offload & RTE_ETH_TX_OFFLOAD_TCP_CKSUM) != 0)
+		ol |= IDPF_TX_OFFLOAD_TCP_CKSUM;
+	if ((offload & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM) != 0)
+		ol |= IDPF_TX_OFFLOAD_SCTP_CKSUM;
+	if ((offload & RTE_ETH_TX_OFFLOAD_MULTI_SEGS) != 0)
+		ol |= IDPF_TX_OFFLOAD_MULTI_SEGS;
+	if ((offload & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) != 0)
+		ol |= IDPF_TX_OFFLOAD_MBUF_FAST_FREE;
+
+	return ol;
+}
+
+static const struct rte_memzone *
+cpfl_dma_zone_reserve(struct rte_eth_dev *dev, uint16_t queue_idx,
+		      uint16_t len, uint16_t queue_type,
+		      unsigned int socket_id, bool splitq)
+{
+	char ring_name[RTE_MEMZONE_NAMESIZE];
+	const struct rte_memzone *mz;
+	uint32_t ring_size;
+
+	memset(ring_name, 0, RTE_MEMZONE_NAMESIZE);
+	switch (queue_type) {
+	case VIRTCHNL2_QUEUE_TYPE_TX:
+		if (splitq)
+			ring_size = RTE_ALIGN(len * sizeof(struct idpf_flex_tx_sched_desc),
+					      CPFL_DMA_MEM_ALIGN);
+		else
+			ring_size = RTE_ALIGN(len * sizeof(struct idpf_flex_tx_desc),
+					      CPFL_DMA_MEM_ALIGN);
+		rte_memcpy(ring_name, "cpfl Tx ring", sizeof("cpfl Tx ring"));
+		break;
+	case VIRTCHNL2_QUEUE_TYPE_RX:
+		if (splitq)
+			ring_size = RTE_ALIGN(len * sizeof(struct virtchnl2_rx_flex_desc_adv_nic_3),
+					      CPFL_DMA_MEM_ALIGN);
+		else
+			ring_size = RTE_ALIGN(len * sizeof(struct virtchnl2_singleq_rx_buf_desc),
+					      CPFL_DMA_MEM_ALIGN);
+		rte_memcpy(ring_name, "cpfl Rx ring", sizeof("cpfl Rx ring"));
+		break;
+	case VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION:
+		ring_size = RTE_ALIGN(len * sizeof(struct idpf_splitq_tx_compl_desc),
+				      CPFL_DMA_MEM_ALIGN);
+		rte_memcpy(ring_name, "cpfl Tx compl ring", sizeof("cpfl Tx compl ring"));
+		break;
+	case VIRTCHNL2_QUEUE_TYPE_RX_BUFFER:
+		ring_size = RTE_ALIGN(len * sizeof(struct virtchnl2_splitq_rx_buf_desc),
+				      CPFL_DMA_MEM_ALIGN);
+		rte_memcpy(ring_name, "cpfl Rx buf ring", sizeof("cpfl Rx buf ring"));
+		break;
+	default:
+		PMD_INIT_LOG(ERR, "Invalid queue type");
+		return NULL;
+	}
+
+	mz = rte_eth_dma_zone_reserve(dev, ring_name, queue_idx,
+				      ring_size, CPFL_RING_BASE_ALIGN,
+				      socket_id);
+	if (mz == NULL) {
+		PMD_INIT_LOG(ERR, "Failed to reserve DMA memory for ring");
+		return NULL;
+	}
+
+	/* Zero all the descriptors in the ring. */
+	memset(mz->addr, 0, ring_size);
+
+	return mz;
+}
+
+static void
+cpfl_dma_zone_release(const struct rte_memzone *mz)
+{
+	rte_memzone_free(mz);
+}
+
+static int
+cpfl_tx_complq_setup(struct rte_eth_dev *dev, struct idpf_tx_queue *txq,
+		     uint16_t queue_idx, uint16_t nb_desc,
+		     unsigned int socket_id)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	const struct rte_memzone *mz;
+	struct idpf_tx_queue *cq;
+	int ret;
+
+	cq = rte_zmalloc_socket("cpfl splitq cq",
+				sizeof(struct idpf_tx_queue),
+				RTE_CACHE_LINE_SIZE,
+				socket_id);
+	if (cq == NULL) {
+		PMD_INIT_LOG(ERR, "Failed to allocate memory for Tx compl queue");
+		ret = -ENOMEM;
+		goto err_cq_alloc;
+	}
+
+	cq->nb_tx_desc = nb_desc;
+	cq->queue_id = vport->chunks_info.tx_compl_start_qid + queue_idx;
+	cq->port_id = dev->data->port_id;
+	cq->txqs = dev->data->tx_queues;
+	cq->tx_start_qid = vport->chunks_info.tx_start_qid;
+
+	mz = cpfl_dma_zone_reserve(dev, queue_idx, nb_desc,
+				   VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION,
+				   socket_id, true);
+	if (mz == NULL) {
+		ret = -ENOMEM;
+		goto err_mz_reserve;
+	}
+	cq->tx_ring_phys_addr = mz->iova;
+	cq->compl_ring = mz->addr;
+	cq->mz = mz;
+	reset_split_tx_complq(cq);
+
+	txq->complq = cq;
+
+	return 0;
+
+err_mz_reserve:
+	rte_free(cq);
+err_cq_alloc:
+	return ret;
+}
+
+int
+cpfl_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
+		    uint16_t nb_desc, unsigned int socket_id,
+		    const struct rte_eth_txconf *tx_conf)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_adapter *adapter = vport->adapter;
+	uint16_t tx_rs_thresh, tx_free_thresh;
+	struct idpf_hw *hw = &adapter->hw;
+	const struct rte_memzone *mz;
+	struct idpf_tx_queue *txq;
+	uint64_t offloads;
+	uint16_t len;
+	bool is_splitq;
+	int ret;
+
+	offloads = tx_conf->offloads | dev->data->dev_conf.txmode.offloads;
+
+	tx_rs_thresh = (uint16_t)((tx_conf->tx_rs_thresh > 0) ?
+		tx_conf->tx_rs_thresh : CPFL_DEFAULT_TX_RS_THRESH);
+	tx_free_thresh = (uint16_t)((tx_conf->tx_free_thresh > 0) ?
+		tx_conf->tx_free_thresh : CPFL_DEFAULT_TX_FREE_THRESH);
+	if (check_tx_thresh(nb_desc, tx_rs_thresh, tx_free_thresh) != 0)
+		return -EINVAL;
+
+	/* Allocate the TX queue data structure. */
+	txq = rte_zmalloc_socket("cpfl txq",
+				 sizeof(struct idpf_tx_queue),
+				 RTE_CACHE_LINE_SIZE,
+				 socket_id);
+	if (txq == NULL) {
+		PMD_INIT_LOG(ERR, "Failed to allocate memory for tx queue structure");
+		ret = -ENOMEM;
+		goto err_txq_alloc;
+	}
+
+	is_splitq = !!(vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT);
+
+	txq->nb_tx_desc = nb_desc;
+	txq->rs_thresh = tx_rs_thresh;
+	txq->free_thresh = tx_free_thresh;
+	txq->queue_id = vport->chunks_info.tx_start_qid + queue_idx;
+	txq->port_id = dev->data->port_id;
+	txq->offloads = cpfl_tx_offload_convert(offloads);
+	txq->tx_deferred_start = tx_conf->tx_deferred_start;
+
+	if (is_splitq)
+		len = 2 * nb_desc;
+	else
+		len = nb_desc;
+	txq->sw_nb_desc = len;
+
+	/* Allocate TX hardware ring descriptors. */
+	mz = cpfl_dma_zone_reserve(dev, queue_idx, nb_desc, VIRTCHNL2_QUEUE_TYPE_TX,
+				   socket_id, is_splitq);
+	if (mz == NULL) {
+		ret = -ENOMEM;
+		goto err_mz_reserve;
+	}
+	txq->tx_ring_phys_addr = mz->iova;
+	txq->mz = mz;
+
+	txq->sw_ring = rte_zmalloc_socket("cpfl tx sw ring",
+					  sizeof(struct idpf_tx_entry) * len,
+					  RTE_CACHE_LINE_SIZE, socket_id);
+	if (txq->sw_ring == NULL) {
+		PMD_INIT_LOG(ERR, "Failed to allocate memory for SW TX ring");
+		ret = -ENOMEM;
+		goto err_sw_ring_alloc;
+	}
+
+	if (!is_splitq) {
+		txq->tx_ring = mz->addr;
+		reset_single_tx_queue(txq);
+	} else {
+		txq->desc_ring = mz->addr;
+		reset_split_tx_descq(txq);
+
+		/* Setup tx completion queue if split model */
+		ret = cpfl_tx_complq_setup(dev, txq, queue_idx,
+					   2 * nb_desc, socket_id);
+		if (ret != 0)
+			goto err_complq_setup;
+	}
+
+	txq->qtx_tail = hw->hw_addr + (vport->chunks_info.tx_qtail_start +
+			queue_idx * vport->chunks_info.tx_qtail_spacing);
+	txq->q_set = true;
+	dev->data->tx_queues[queue_idx] = txq;
+
+	return 0;
+
+err_complq_setup:
+err_sw_ring_alloc:
+	cpfl_dma_zone_release(mz);
+err_mz_reserve:
+	rte_free(txq);
+err_txq_alloc:
+	return ret;
+}
diff --git a/drivers/net/cpfl/cpfl_rxtx.h b/drivers/net/cpfl/cpfl_rxtx.h
new file mode 100644
index 0000000000..ec42478393
--- /dev/null
+++ b/drivers/net/cpfl/cpfl_rxtx.h
@@ -0,0 +1,25 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2022 Intel Corporation
+ */
+
+#ifndef _CPFL_RXTX_H_
+#define _CPFL_RXTX_H_
+
+#include <idpf_common_rxtx.h>
+#include "cpfl_ethdev.h"
+
+/* Queue length must be a whole multiple of 32 descriptors. */
+#define CPFL_ALIGN_RING_DESC	32
+#define CPFL_MIN_RING_DESC	32
+#define CPFL_MAX_RING_DESC	4096
+#define CPFL_DMA_MEM_ALIGN	4096
+/* Base address of the HW descriptor ring should be 128B aligned. */
+#define CPFL_RING_BASE_ALIGN	128
+
+#define CPFL_DEFAULT_TX_RS_THRESH	32
+#define CPFL_DEFAULT_TX_FREE_THRESH	32
+
+int cpfl_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
+			uint16_t nb_desc, unsigned int socket_id,
+			const struct rte_eth_txconf *tx_conf);
+#endif /* _CPFL_RXTX_H_ */
diff --git a/drivers/net/cpfl/meson.build b/drivers/net/cpfl/meson.build
new file mode 100644
index 0000000000..106cc97e60
--- /dev/null
+++ b/drivers/net/cpfl/meson.build
@@ -0,0 +1,14 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2022 Intel Corporation
+
+if is_windows
+    build = false
+    reason = 'not supported on Windows'
+    subdir_done()
+endif
+
+deps += ['common_idpf']
+
+sources = files(
+        'cpfl_ethdev.c',
+)
\ No newline at end of file
diff --git a/drivers/net/meson.build b/drivers/net/meson.build
index 6470bf3636..a8ca338875 100644
--- a/drivers/net/meson.build
+++ b/drivers/net/meson.build
@@ -13,6 +13,7 @@ drivers = [
         'bnxt',
         'bonding',
         'cnxk',
+        'cpfl',
         'cxgbe',
         'dpaa',
         'dpaa2',
-- 
2.25.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v3 02/21] net/cpfl: add Tx queue setup
  2023-01-18  7:31   ` [PATCH v3 " Mingxia Liu
  2023-01-18  7:31     ` [PATCH v3 01/21] net/cpfl: support device initialization Mingxia Liu
@ 2023-01-18  7:31     ` Mingxia Liu
  2023-01-18  7:31     ` [PATCH v3 03/21] net/cpfl: add Rx " Mingxia Liu
                       ` (15 subsequent siblings)
  17 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-01-18  7:31 UTC (permalink / raw)
  To: dev, qi.z.zhang, jingjing.wu, beilei.xing; +Cc: wenjun1.wu, Mingxia Liu

Add support for tx_queue_setup ops.

In the single queue model, the same descriptor queue is used by SW to
post buffer descriptors to HW and by HW to post completed descriptors
to SW.

In the split queue model, "RX buffer queues" are used to pass
descriptor buffers from SW to HW while Rx queues are used only to
pass the descriptor completions, that is, descriptors that point
to completed buffers, from HW to SW. This is in contrast to the
single queue model, in which Rx queues are used for both purposes.

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
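A minimal sketch of how an application exercises this op through the
generic ethdev API (illustrative only; the queue index, descriptor
count and thresholds are example values, not driver requirements):

#include <rte_ethdev.h>

static int
setup_tx_queue(uint16_t port_id)
{
	/* Thresholds mirror the driver defaults (rs/free = 32/32).
	 * Single vs. split queue model is resolved inside the PMD
	 * from the vport's txq_model, not by the application. */
	struct rte_eth_txconf txconf = {
		.tx_rs_thresh = 32,
		.tx_free_thresh = 32,
	};

	return rte_eth_tx_queue_setup(port_id, 0, 512,
				      rte_eth_dev_socket_id(port_id),
				      &txconf);
}
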
 drivers/net/cpfl/cpfl_ethdev.c | 13 +++++++++++++
 drivers/net/cpfl/cpfl_rxtx.c   |  8 ++++----
 drivers/net/cpfl/meson.build   |  1 +
 3 files changed, 18 insertions(+), 4 deletions(-)

diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index 2ac53bc5b0..4a569c2f7e 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -12,6 +12,7 @@
 #include <rte_alarm.h>
 
 #include "cpfl_ethdev.h"
+#include "cpfl_rxtx.h"
 
 #define CPFL_TX_SINGLE_Q	"tx_single"
 #define CPFL_RX_SINGLE_Q	"rx_single"
@@ -96,6 +97,17 @@ cpfl_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->max_mtu = vport->max_mtu;
 	dev_info->min_mtu = RTE_ETHER_MIN_MTU;
 
+	dev_info->default_txconf = (struct rte_eth_txconf) {
+		.tx_free_thresh = CPFL_DEFAULT_TX_FREE_THRESH,
+		.tx_rs_thresh = CPFL_DEFAULT_TX_RS_THRESH,
+	};
+
+	dev_info->tx_desc_lim = (struct rte_eth_desc_lim) {
+		.nb_max = CPFL_MAX_RING_DESC,
+		.nb_min = CPFL_MIN_RING_DESC,
+		.nb_align = CPFL_ALIGN_RING_DESC,
+	};
+
 	return 0;
 }
 
@@ -513,6 +525,7 @@ cpfl_adapter_ext_init(struct rte_pci_device *pci_dev, struct cpfl_adapter_ext *a
 static const struct eth_dev_ops cpfl_eth_dev_ops = {
 	.dev_configure			= cpfl_dev_configure,
 	.dev_close			= cpfl_dev_close,
+	.tx_queue_setup			= cpfl_tx_queue_setup,
 	.dev_infos_get			= cpfl_dev_info_get,
 	.link_update			= cpfl_dev_link_update,
 	.dev_supported_ptypes_get	= cpfl_dev_supported_ptypes_get,
diff --git a/drivers/net/cpfl/cpfl_rxtx.c b/drivers/net/cpfl/cpfl_rxtx.c
index ea4a2002bf..a9742379db 100644
--- a/drivers/net/cpfl/cpfl_rxtx.c
+++ b/drivers/net/cpfl/cpfl_rxtx.c
@@ -130,7 +130,7 @@ cpfl_tx_complq_setup(struct rte_eth_dev *dev, struct idpf_tx_queue *txq,
 	cq->tx_ring_phys_addr = mz->iova;
 	cq->compl_ring = mz->addr;
 	cq->mz = mz;
-	reset_split_tx_complq(cq);
+	idpf_reset_split_tx_complq(cq);
 
 	txq->complq = cq;
 
@@ -164,7 +164,7 @@ cpfl_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 		tx_conf->tx_rs_thresh : CPFL_DEFAULT_TX_RS_THRESH);
 	tx_free_thresh = (uint16_t)((tx_conf->tx_free_thresh > 0) ?
 		tx_conf->tx_free_thresh : CPFL_DEFAULT_TX_FREE_THRESH);
-	if (check_tx_thresh(nb_desc, tx_rs_thresh, tx_free_thresh) != 0)
+	if (idpf_check_tx_thresh(nb_desc, tx_rs_thresh, tx_free_thresh) != 0)
 		return -EINVAL;
 
 	/* Allocate the TX queue data structure. */
@@ -215,10 +215,10 @@ cpfl_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 
 	if (!is_splitq) {
 		txq->tx_ring = mz->addr;
-		reset_single_tx_queue(txq);
+		idpf_reset_single_tx_queue(txq);
 	} else {
 		txq->desc_ring = mz->addr;
-		reset_split_tx_descq(txq);
+		idpf_reset_split_tx_descq(txq);
 
 		/* Setup tx completion queue if split model */
 		ret = cpfl_tx_complq_setup(dev, txq, queue_idx,
diff --git a/drivers/net/cpfl/meson.build b/drivers/net/cpfl/meson.build
index 106cc97e60..3ccee15703 100644
--- a/drivers/net/cpfl/meson.build
+++ b/drivers/net/cpfl/meson.build
@@ -11,4 +11,5 @@ deps += ['common_idpf']
 
 sources = files(
         'cpfl_ethdev.c',
+        'cpfl_rxtx.c',
 )
\ No newline at end of file
-- 
2.25.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v3 03/21] net/cpfl: add Rx queue setup
  2023-01-18  7:31   ` [PATCH v3 " Mingxia Liu
  2023-01-18  7:31     ` [PATCH v3 01/21] net/cpfl: support device initialization Mingxia Liu
  2023-01-18  7:31     ` [PATCH v3 02/21] net/cpfl: add Tx queue setup Mingxia Liu
@ 2023-01-18  7:31     ` Mingxia Liu
  2023-01-18  7:31     ` [PATCH v3 04/21] net/cpfl: support device start and stop Mingxia Liu
                       ` (14 subsequent siblings)
  17 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-01-18  7:31 UTC (permalink / raw)
  To: dev, qi.z.zhang, jingjing.wu, beilei.xing; +Cc: wenjun1.wu, Mingxia Liu

Add support for rx_queue_setup ops.

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
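A usage sketch from the application side (mempool sizing is an
assumption for illustration; in the split queue model the PMD
internally creates the two buffer queues per Rx queue shown in the
diff below):

#include <rte_errno.h>
#include <rte_lcore.h>
#include <rte_mbuf.h>
#include <rte_ethdev.h>

static int
setup_rx_queue(uint16_t port_id)
{
	/* RTE_MBUF_DEFAULT_BUF_SIZE leaves 2048B of data room after
	 * headroom, above the driver's CPFL_MIN_BUF_SIZE (1024B). */
	struct rte_mempool *mp =
		rte_pktmbuf_pool_create("cpfl_rx_pool", 8192, 256, 0,
					RTE_MBUF_DEFAULT_BUF_SIZE,
					rte_socket_id());
	if (mp == NULL)
		return -rte_errno;

	return rte_eth_rx_queue_setup(port_id, 0, 512,
				      rte_eth_dev_socket_id(port_id),
				      NULL /* default rxconf */, mp);
}
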
 drivers/net/cpfl/cpfl_ethdev.c |  11 ++
 drivers/net/cpfl/cpfl_rxtx.c   | 232 +++++++++++++++++++++++++++++++++
 drivers/net/cpfl/cpfl_rxtx.h   |   6 +
 3 files changed, 249 insertions(+)

diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index 4a569c2f7e..0a4bf59220 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -102,12 +102,22 @@ cpfl_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 		.tx_rs_thresh = CPFL_DEFAULT_TX_RS_THRESH,
 	};
 
+	dev_info->default_rxconf = (struct rte_eth_rxconf) {
+		.rx_free_thresh = CPFL_DEFAULT_RX_FREE_THRESH,
+	};
+
 	dev_info->tx_desc_lim = (struct rte_eth_desc_lim) {
 		.nb_max = CPFL_MAX_RING_DESC,
 		.nb_min = CPFL_MIN_RING_DESC,
 		.nb_align = CPFL_ALIGN_RING_DESC,
 	};
 
+	dev_info->rx_desc_lim = (struct rte_eth_desc_lim) {
+		.nb_max = CPFL_MAX_RING_DESC,
+		.nb_min = CPFL_MIN_RING_DESC,
+		.nb_align = CPFL_ALIGN_RING_DESC,
+	};
+
 	return 0;
 }
 
@@ -525,6 +535,7 @@ cpfl_adapter_ext_init(struct rte_pci_device *pci_dev, struct cpfl_adapter_ext *a
 static const struct eth_dev_ops cpfl_eth_dev_ops = {
 	.dev_configure			= cpfl_dev_configure,
 	.dev_close			= cpfl_dev_close,
+	.rx_queue_setup			= cpfl_rx_queue_setup,
 	.tx_queue_setup			= cpfl_tx_queue_setup,
 	.dev_infos_get			= cpfl_dev_info_get,
 	.link_update			= cpfl_dev_link_update,
diff --git a/drivers/net/cpfl/cpfl_rxtx.c b/drivers/net/cpfl/cpfl_rxtx.c
index a9742379db..71ebbf440b 100644
--- a/drivers/net/cpfl/cpfl_rxtx.c
+++ b/drivers/net/cpfl/cpfl_rxtx.c
@@ -9,6 +9,25 @@
 #include "cpfl_ethdev.h"
 #include "cpfl_rxtx.h"
 
+static uint64_t
+cpfl_rx_offload_convert(uint64_t offload)
+{
+	uint64_t ol = 0;
+
+	if ((offload & RTE_ETH_RX_OFFLOAD_IPV4_CKSUM) != 0)
+		ol |= IDPF_RX_OFFLOAD_IPV4_CKSUM;
+	if ((offload & RTE_ETH_RX_OFFLOAD_UDP_CKSUM) != 0)
+		ol |= IDPF_RX_OFFLOAD_UDP_CKSUM;
+	if ((offload & RTE_ETH_RX_OFFLOAD_TCP_CKSUM) != 0)
+		ol |= IDPF_RX_OFFLOAD_TCP_CKSUM;
+	if ((offload & RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM) != 0)
+		ol |= IDPF_RX_OFFLOAD_OUTER_IPV4_CKSUM;
+	if ((offload & RTE_ETH_RX_OFFLOAD_TIMESTAMP) != 0)
+		ol |= IDPF_RX_OFFLOAD_TIMESTAMP;
+
+	return ol;
+}
+
 static uint64_t
 cpfl_tx_offload_convert(uint64_t offload)
 {
@@ -94,6 +113,219 @@ cpfl_dma_zone_release(const struct rte_memzone *mz)
 	rte_memzone_free(mz);
 }
 
+static int
+cpfl_rx_split_bufq_setup(struct rte_eth_dev *dev, struct idpf_rx_queue *rxq,
+			 uint16_t queue_idx, uint16_t rx_free_thresh,
+			 uint16_t nb_desc, unsigned int socket_id,
+			 struct rte_mempool *mp, uint8_t bufq_id)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_adapter *adapter = vport->adapter;
+	struct idpf_hw *hw = &adapter->hw;
+	const struct rte_memzone *mz;
+	struct idpf_rx_queue *bufq;
+	uint16_t len;
+	int ret;
+
+	bufq = rte_zmalloc_socket("cpfl bufq",
+				   sizeof(struct idpf_rx_queue),
+				   RTE_CACHE_LINE_SIZE,
+				   socket_id);
+	if (bufq == NULL) {
+		PMD_INIT_LOG(ERR, "Failed to allocate memory for rx buffer queue.");
+		ret = -ENOMEM;
+		goto err_bufq1_alloc;
+	}
+
+	bufq->mp = mp;
+	bufq->nb_rx_desc = nb_desc;
+	bufq->rx_free_thresh = rx_free_thresh;
+	bufq->queue_id = vport->chunks_info.rx_buf_start_qid + queue_idx;
+	bufq->port_id = dev->data->port_id;
+	bufq->rx_hdr_len = 0;
+	bufq->adapter = adapter;
+
+	len = rte_pktmbuf_data_room_size(bufq->mp) - RTE_PKTMBUF_HEADROOM;
+	bufq->rx_buf_len = len;
+
+	/* Allocate a little more to support bulk allocate. */
+	len = nb_desc + IDPF_RX_MAX_BURST;
+
+	mz = cpfl_dma_zone_reserve(dev, queue_idx, len,
+				   VIRTCHNL2_QUEUE_TYPE_RX_BUFFER,
+				   socket_id, true);
+	if (mz == NULL) {
+		ret = -ENOMEM;
+		goto err_mz_reserve;
+	}
+
+	bufq->rx_ring_phys_addr = mz->iova;
+	bufq->rx_ring = mz->addr;
+	bufq->mz = mz;
+
+	bufq->sw_ring =
+		rte_zmalloc_socket("cpfl rx bufq sw ring",
+				   sizeof(struct rte_mbuf *) * len,
+				   RTE_CACHE_LINE_SIZE,
+				   socket_id);
+	if (bufq->sw_ring == NULL) {
+		PMD_INIT_LOG(ERR, "Failed to allocate memory for SW ring");
+		ret = -ENOMEM;
+		goto err_sw_ring_alloc;
+	}
+
+	idpf_reset_split_rx_bufq(bufq);
+	bufq->qrx_tail = hw->hw_addr + (vport->chunks_info.rx_buf_qtail_start +
+			 queue_idx * vport->chunks_info.rx_buf_qtail_spacing);
+	bufq->q_set = true;
+
+	if (bufq_id == 1) {
+		rxq->bufq1 = bufq;
+	} else if (bufq_id == 2) {
+		rxq->bufq2 = bufq;
+	} else {
+		PMD_INIT_LOG(ERR, "Invalid buffer queue index.");
+		ret = -EINVAL;
+		goto err_bufq_id;
+	}
+
+	return 0;
+
+err_bufq_id:
+	rte_free(bufq->sw_ring);
+err_sw_ring_alloc:
+	cpfl_dma_zone_release(mz);
+err_mz_reserve:
+	rte_free(bufq);
+err_bufq1_alloc:
+	return ret;
+}
+
+static void
+cpfl_rx_split_bufq_release(struct idpf_rx_queue *bufq)
+{
+	rte_free(bufq->sw_ring);
+	cpfl_dma_zone_release(bufq->mz);
+	rte_free(bufq);
+}
+
+int
+cpfl_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
+		    uint16_t nb_desc, unsigned int socket_id,
+		    const struct rte_eth_rxconf *rx_conf,
+		    struct rte_mempool *mp)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_adapter *adapter = vport->adapter;
+	struct idpf_hw *hw = &adapter->hw;
+	const struct rte_memzone *mz;
+	struct idpf_rx_queue *rxq;
+	uint16_t rx_free_thresh;
+	uint64_t offloads;
+	bool is_splitq;
+	uint16_t len;
+	int ret;
+
+	offloads = rx_conf->offloads | dev->data->dev_conf.rxmode.offloads;
+
+	/* Check free threshold */
+	rx_free_thresh = (rx_conf->rx_free_thresh == 0) ?
+		CPFL_DEFAULT_RX_FREE_THRESH :
+		rx_conf->rx_free_thresh;
+	if (idpf_check_rx_thresh(nb_desc, rx_free_thresh) != 0)
+		return -EINVAL;
+
+	/* Setup Rx queue */
+	rxq = rte_zmalloc_socket("cpfl rxq",
+				 sizeof(struct idpf_rx_queue),
+				 RTE_CACHE_LINE_SIZE,
+				 socket_id);
+	if (rxq == NULL) {
+		PMD_INIT_LOG(ERR, "Failed to allocate memory for rx queue data structure");
+		ret = -ENOMEM;
+		goto err_rxq_alloc;
+	}
+
+	is_splitq = !!(vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT);
+
+	rxq->mp = mp;
+	rxq->nb_rx_desc = nb_desc;
+	rxq->rx_free_thresh = rx_free_thresh;
+	rxq->queue_id = vport->chunks_info.rx_start_qid + queue_idx;
+	rxq->port_id = dev->data->port_id;
+	rxq->rx_deferred_start = rx_conf->rx_deferred_start;
+	rxq->rx_hdr_len = 0;
+	rxq->adapter = adapter;
+	rxq->offloads = cpfl_rx_offload_convert(offloads);
+
+	len = rte_pktmbuf_data_room_size(rxq->mp) - RTE_PKTMBUF_HEADROOM;
+	rxq->rx_buf_len = len;
+
+	/* Allocate a little more to support bulk allocate. */
+	len = nb_desc + IDPF_RX_MAX_BURST;
+	mz = cpfl_dma_zone_reserve(dev, queue_idx, len, VIRTCHNL2_QUEUE_TYPE_RX,
+				   socket_id, is_splitq);
+	if (mz == NULL) {
+		ret = -ENOMEM;
+		goto err_mz_reserve;
+	}
+	rxq->rx_ring_phys_addr = mz->iova;
+	rxq->rx_ring = mz->addr;
+	rxq->mz = mz;
+
+	if (!is_splitq) {
+		rxq->sw_ring = rte_zmalloc_socket("cpfl rxq sw ring",
+						  sizeof(struct rte_mbuf *) * len,
+						  RTE_CACHE_LINE_SIZE,
+						  socket_id);
+		if (rxq->sw_ring == NULL) {
+			PMD_INIT_LOG(ERR, "Failed to allocate memory for SW ring");
+			ret = -ENOMEM;
+			goto err_sw_ring_alloc;
+		}
+
+		idpf_reset_single_rx_queue(rxq);
+		rxq->qrx_tail = hw->hw_addr + (vport->chunks_info.rx_qtail_start +
+				queue_idx * vport->chunks_info.rx_qtail_spacing);
+	} else {
+		idpf_reset_split_rx_descq(rxq);
+
+		/* Setup Rx buffer queues */
+		ret = cpfl_rx_split_bufq_setup(dev, rxq, 2 * queue_idx,
+					       rx_free_thresh, nb_desc,
+					       socket_id, mp, 1);
+		if (ret != 0) {
+			PMD_INIT_LOG(ERR, "Failed to setup buffer queue 1");
+			ret = -EINVAL;
+			goto err_bufq1_setup;
+		}
+
+		ret = cpfl_rx_split_bufq_setup(dev, rxq, 2 * queue_idx + 1,
+					       rx_free_thresh, nb_desc,
+					       socket_id, mp, 2);
+		if (ret != 0) {
+			PMD_INIT_LOG(ERR, "Failed to setup buffer queue 2");
+			ret = -EINVAL;
+			goto err_bufq2_setup;
+		}
+	}
+
+	rxq->q_set = true;
+	dev->data->rx_queues[queue_idx] = rxq;
+
+	return 0;
+
+err_bufq2_setup:
+	cpfl_rx_split_bufq_release(rxq->bufq1);
+err_bufq1_setup:
+err_sw_ring_alloc:
+	cpfl_dma_zone_release(mz);
+err_mz_reserve:
+	rte_free(rxq);
+err_rxq_alloc:
+	return ret;
+}
+
 static int
 cpfl_tx_complq_setup(struct rte_eth_dev *dev, struct idpf_tx_queue *txq,
 		     uint16_t queue_idx, uint16_t nb_desc,
diff --git a/drivers/net/cpfl/cpfl_rxtx.h b/drivers/net/cpfl/cpfl_rxtx.h
index ec42478393..fd838d3f07 100644
--- a/drivers/net/cpfl/cpfl_rxtx.h
+++ b/drivers/net/cpfl/cpfl_rxtx.h
@@ -16,10 +16,16 @@
 /* Base address of the HW descriptor ring should be 128B aligned. */
 #define CPFL_RING_BASE_ALIGN	128
 
+#define CPFL_DEFAULT_RX_FREE_THRESH	32
+
 #define CPFL_DEFAULT_TX_RS_THRESH	32
 #define CPFL_DEFAULT_TX_FREE_THRESH	32
 
 int cpfl_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 			uint16_t nb_desc, unsigned int socket_id,
 			const struct rte_eth_txconf *tx_conf);
+int cpfl_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
+			uint16_t nb_desc, unsigned int socket_id,
+			const struct rte_eth_rxconf *rx_conf,
+			struct rte_mempool *mp);
 #endif /* _CPFL_RXTX_H_ */
-- 
2.25.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v3 04/21] net/cpfl: support device start and stop
  2023-01-18  7:31   ` [PATCH v3 " Mingxia Liu
                       ` (2 preceding siblings ...)
  2023-01-18  7:31     ` [PATCH v3 03/21] net/cpfl: add Rx " Mingxia Liu
@ 2023-01-18  7:31     ` Mingxia Liu
  2023-01-18  7:31     ` [PATCH v3 05/21] net/cpfl: support queue start Mingxia Liu
                       ` (13 subsequent siblings)
  17 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-01-18  7:31 UTC (permalink / raw)
  To: dev, qi.z.zhang, jingjing.wu, beilei.xing; +Cc: wenjun1.wu, Mingxia Liu

Add dev ops dev_start, dev_stop and link_update.

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
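From the application's point of view this completes the usual
bring-up/tear-down sequence; a sketch (assumes the queues were
already set up as in the previous patches):

#include <rte_ethdev.h>

static int
port_up_down(uint16_t port_id)
{
	/* -> cpfl_dev_start(): enables the vport via virtchnl */
	int ret = rte_eth_dev_start(port_id);

	if (ret != 0)
		return ret;

	/* ... rx/tx traffic ... */

	/* -> cpfl_dev_stop(): disables the vport again */
	return rte_eth_dev_stop(port_id);
}
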
 drivers/net/cpfl/cpfl_ethdev.c | 35 ++++++++++++++++++++++++++++++++++
 1 file changed, 35 insertions(+)

diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index 0a4bf59220..f98d10ffd4 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -184,12 +184,45 @@ cpfl_dev_configure(struct rte_eth_dev *dev)
 	return 0;
 }
 
+static int
+cpfl_dev_start(struct rte_eth_dev *dev)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	int ret;
+
+	ret = idpf_vc_ena_dis_vport(vport, true);
+	if (ret != 0) {
+		PMD_DRV_LOG(ERR, "Failed to enable vport");
+		return ret;
+	}
+
+	vport->stopped = 0;
+
+	return 0;
+}
+
+static int
+cpfl_dev_stop(struct rte_eth_dev *dev)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+
+	if (vport->stopped == 1)
+		return 0;
+
+	idpf_vc_ena_dis_vport(vport, false);
+
+	vport->stopped = 1;
+
+	return 0;
+}
+
 static int
 cpfl_dev_close(struct rte_eth_dev *dev)
 {
 	struct idpf_vport *vport = dev->data->dev_private;
 	struct cpfl_adapter_ext *adapter = CPFL_ADAPTER_TO_EXT(vport->adapter);
 
+	cpfl_dev_stop(dev);
 	idpf_vport_deinit(vport);
 
 	adapter->cur_vports &= ~RTE_BIT32(vport->devarg_id);
@@ -538,6 +571,8 @@ static const struct eth_dev_ops cpfl_eth_dev_ops = {
 	.rx_queue_setup			= cpfl_rx_queue_setup,
 	.tx_queue_setup			= cpfl_tx_queue_setup,
 	.dev_infos_get			= cpfl_dev_info_get,
+	.dev_start			= cpfl_dev_start,
+	.dev_stop			= cpfl_dev_stop,
 	.link_update			= cpfl_dev_link_update,
 	.dev_supported_ptypes_get	= cpfl_dev_supported_ptypes_get,
 };
-- 
2.25.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v3 05/21] net/cpfl: support queue start
  2023-01-18  7:31   ` [PATCH v3 " Mingxia Liu
                       ` (3 preceding siblings ...)
  2023-01-18  7:31     ` [PATCH v3 04/21] net/cpfl: support device start and stop Mingxia Liu
@ 2023-01-18  7:31     ` Mingxia Liu
  2023-01-18  7:31     ` [PATCH v3 06/21] net/cpfl: support queue stop Mingxia Liu
                       ` (12 subsequent siblings)
  17 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-01-18  7:31 UTC (permalink / raw)
  To: dev, qi.z.zhang, jingjing.wu, beilei.xing; +Cc: wenjun1.wu, Mingxia Liu

Add support for these device ops:
 - rx_queue_start
 - tx_queue_start

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
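Per-queue start matters mostly together with deferred start; a sketch
(the deferred-start flag is generic ethdev configuration, not cpfl
specific):

#include <rte_ethdev.h>

static int
deferred_tx_start(uint16_t port_id)
{
	struct rte_eth_txconf txconf = { .tx_deferred_start = 1 };
	int ret;

	/* This queue is skipped by cpfl_start_queues() ... */
	ret = rte_eth_tx_queue_setup(port_id, 0, 512,
				     rte_eth_dev_socket_id(port_id),
				     &txconf);
	if (ret != 0)
		return ret;

	ret = rte_eth_dev_start(port_id);
	if (ret != 0)
		return ret;

	/* ... so the application starts it explicitly afterwards. */
	return rte_eth_dev_tx_queue_start(port_id, 0);
}
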
 drivers/net/cpfl/cpfl_ethdev.c |  41 ++++++++++
 drivers/net/cpfl/cpfl_rxtx.c   | 138 +++++++++++++++++++++++++++++++++
 drivers/net/cpfl/cpfl_rxtx.h   |   4 +
 3 files changed, 183 insertions(+)

diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index f98d10ffd4..51792c648e 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -184,12 +184,51 @@ cpfl_dev_configure(struct rte_eth_dev *dev)
 	return 0;
 }
 
+static int
+cpfl_start_queues(struct rte_eth_dev *dev)
+{
+	struct idpf_rx_queue *rxq;
+	struct idpf_tx_queue *txq;
+	int err = 0;
+	int i;
+
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		txq = dev->data->tx_queues[i];
+		if (txq == NULL || txq->tx_deferred_start)
+			continue;
+		err = cpfl_tx_queue_start(dev, i);
+		if (err != 0) {
+			PMD_DRV_LOG(ERR, "Failed to start Tx queue %u", i);
+			return err;
+		}
+	}
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		rxq = dev->data->rx_queues[i];
+		if (rxq == NULL || rxq->rx_deferred_start)
+			continue;
+		err = cpfl_rx_queue_start(dev, i);
+		if (err != 0) {
+			PMD_DRV_LOG(ERR, "Failed to start Rx queue %u", i);
+			return err;
+		}
+	}
+
+	return err;
+}
+
 static int
 cpfl_dev_start(struct rte_eth_dev *dev)
 {
 	struct idpf_vport *vport = dev->data->dev_private;
 	int ret;
 
+	ret = cpfl_start_queues(dev);
+	if (ret != 0) {
+		PMD_DRV_LOG(ERR, "Failed to start queues");
+		return ret;
+	}
+
 	ret = idpf_vc_ena_dis_vport(vport, true);
 	if (ret != 0) {
 		PMD_DRV_LOG(ERR, "Failed to enable vport");
@@ -574,6 +613,8 @@ static const struct eth_dev_ops cpfl_eth_dev_ops = {
 	.dev_start			= cpfl_dev_start,
 	.dev_stop			= cpfl_dev_stop,
 	.link_update			= cpfl_dev_link_update,
+	.rx_queue_start			= cpfl_rx_queue_start,
+	.tx_queue_start			= cpfl_tx_queue_start,
 	.dev_supported_ptypes_get	= cpfl_dev_supported_ptypes_get,
 };
 
diff --git a/drivers/net/cpfl/cpfl_rxtx.c b/drivers/net/cpfl/cpfl_rxtx.c
index 71ebbf440b..fb6becc99f 100644
--- a/drivers/net/cpfl/cpfl_rxtx.c
+++ b/drivers/net/cpfl/cpfl_rxtx.c
@@ -474,3 +474,141 @@ cpfl_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 err_txq_alloc:
 	return ret;
 }
+
+int
+cpfl_rx_queue_init(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+{
+	struct idpf_rx_queue *rxq;
+	int err;
+
+	if (rx_queue_id >= dev->data->nb_rx_queues)
+		return -EINVAL;
+
+	rxq = dev->data->rx_queues[rx_queue_id];
+
+	if (rxq == NULL || !rxq->q_set) {
+		PMD_DRV_LOG(ERR, "RX queue %u not available or not set up",
+					rx_queue_id);
+		return -EINVAL;
+	}
+
+	if (rxq->bufq1 == NULL) {
+		/* Single queue */
+		err = idpf_alloc_single_rxq_mbufs(rxq);
+		if (err != 0) {
+			PMD_DRV_LOG(ERR, "Failed to allocate RX queue mbuf");
+			return err;
+		}
+
+		rte_wmb();
+
+		/* Init the RX tail register. */
+		IDPF_PCI_REG_WRITE(rxq->qrx_tail, rxq->nb_rx_desc - 1);
+	} else {
+		/* Split queue */
+		err = idpf_alloc_split_rxq_mbufs(rxq->bufq1);
+		if (err != 0) {
+			PMD_DRV_LOG(ERR, "Failed to allocate RX buffer queue mbuf");
+			return err;
+		}
+		err = idpf_alloc_split_rxq_mbufs(rxq->bufq2);
+		if (err != 0) {
+			PMD_DRV_LOG(ERR, "Failed to allocate RX buffer queue mbuf");
+			return err;
+		}
+
+		rte_wmb();
+
+		/* Init the RX tail register. */
+		IDPF_PCI_REG_WRITE(rxq->bufq1->qrx_tail, rxq->bufq1->rx_tail);
+		IDPF_PCI_REG_WRITE(rxq->bufq2->qrx_tail, rxq->bufq2->rx_tail);
+	}
+
+	return err;
+}
+
+int
+cpfl_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_rx_queue *rxq =
+		dev->data->rx_queues[rx_queue_id];
+	int err = 0;
+
+	err = idpf_vc_config_rxq(vport, rxq);
+	if (err != 0) {
+		PMD_DRV_LOG(ERR, "Failed to configure Rx queue %u", rx_queue_id);
+		return err;
+	}
+
+	err = cpfl_rx_queue_init(dev, rx_queue_id);
+	if (err != 0) {
+		PMD_DRV_LOG(ERR, "Failed to init RX queue %u",
+			    rx_queue_id);
+		return err;
+	}
+
+	/* Ready to switch the queue on */
+	err = idpf_switch_queue(vport, rx_queue_id, true, true);
+	if (err != 0) {
+		PMD_DRV_LOG(ERR, "Failed to switch RX queue %u on",
+			    rx_queue_id);
+	} else {
+		rxq->q_started = true;
+		dev->data->rx_queue_state[rx_queue_id] =
+			RTE_ETH_QUEUE_STATE_STARTED;
+	}
+
+	return err;
+}
+
+int
+cpfl_tx_queue_init(struct rte_eth_dev *dev, uint16_t tx_queue_id)
+{
+	struct idpf_tx_queue *txq;
+
+	if (tx_queue_id >= dev->data->nb_tx_queues)
+		return -EINVAL;
+
+	txq = dev->data->tx_queues[tx_queue_id];
+
+	/* Init the TX tail register. */
+	IDPF_PCI_REG_WRITE(txq->qtx_tail, 0);
+
+	return 0;
+}
+
+int
+cpfl_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_tx_queue *txq =
+		dev->data->tx_queues[tx_queue_id];
+	int err = 0;
+
+	err = idpf_vc_config_txq(vport, txq);
+	if (err != 0) {
+		PMD_DRV_LOG(ERR, "Failed to configure Tx queue %u", tx_queue_id);
+		return err;
+	}
+
+	err = cpfl_tx_queue_init(dev, tx_queue_id);
+	if (err != 0) {
+		PMD_DRV_LOG(ERR, "Failed to init TX queue %u",
+			    tx_queue_id);
+		return err;
+	}
+
+	/* Ready to switch the queue on */
+	err = idpf_switch_queue(vport, tx_queue_id, false, true);
+	if (err != 0) {
+		PMD_DRV_LOG(ERR, "Failed to switch TX queue %u on",
+			    tx_queue_id);
+	} else {
+		txq->q_started = true;
+		dev->data->tx_queue_state[tx_queue_id] =
+			RTE_ETH_QUEUE_STATE_STARTED;
+	}
+
+	return err;
+}
diff --git a/drivers/net/cpfl/cpfl_rxtx.h b/drivers/net/cpfl/cpfl_rxtx.h
index fd838d3f07..2fa7950775 100644
--- a/drivers/net/cpfl/cpfl_rxtx.h
+++ b/drivers/net/cpfl/cpfl_rxtx.h
@@ -28,4 +28,8 @@ int cpfl_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 			uint16_t nb_desc, unsigned int socket_id,
 			const struct rte_eth_rxconf *rx_conf,
 			struct rte_mempool *mp);
+int cpfl_rx_queue_init(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+int cpfl_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+int cpfl_tx_queue_init(struct rte_eth_dev *dev, uint16_t tx_queue_id);
+int cpfl_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id);
 #endif /* _CPFL_RXTX_H_ */
-- 
2.25.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v3 06/21] net/cpfl: support queue stop
  2023-01-18  7:31   ` [PATCH v3 " Mingxia Liu
                       ` (4 preceding siblings ...)
  2023-01-18  7:31     ` [PATCH v3 05/21] net/cpfl: support queue start Mingxia Liu
@ 2023-01-18  7:31     ` Mingxia Liu
  2023-01-18  7:31     ` [PATCH v3 07/21] net/cpfl: support queue release Mingxia Liu
                       ` (11 subsequent siblings)
  17 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-01-18  7:31 UTC (permalink / raw)
  To: dev, qi.z.zhang, jingjing.wu, beilei.xing; +Cc: wenjun1.wu, Mingxia Liu

Add support for these device ops:
 - rx_queue_stop
 - tx_queue_stop

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
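A short sketch of stopping individual queues while the port stays up
(queue 0 is an arbitrary example index):

#include <rte_ethdev.h>

static void
pause_queue0(uint16_t port_id)
{
	/* The PMD releases the mbufs on the rings and resets the
	 * queues, see release_mbufs/idpf_reset_* in the diff below. */
	(void)rte_eth_dev_rx_queue_stop(port_id, 0);
	(void)rte_eth_dev_tx_queue_stop(port_id, 0);
}
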
 drivers/net/cpfl/cpfl_ethdev.c | 10 +++-
 drivers/net/cpfl/cpfl_rxtx.c   | 87 ++++++++++++++++++++++++++++++++++
 drivers/net/cpfl/cpfl_rxtx.h   |  3 ++
 3 files changed, 99 insertions(+), 1 deletion(-)

diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index 51792c648e..b15ecab840 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -232,12 +232,16 @@ cpfl_dev_start(struct rte_eth_dev *dev)
 	ret = idpf_vc_ena_dis_vport(vport, true);
 	if (ret != 0) {
 		PMD_DRV_LOG(ERR, "Failed to enable vport");
-		return ret;
+		goto err_vport;
 	}
 
 	vport->stopped = 0;
 
 	return 0;
+
+err_vport:
+	cpfl_stop_queues(dev);
+	return ret;
 }
 
 static int
@@ -250,6 +254,8 @@ cpfl_dev_stop(struct rte_eth_dev *dev)
 
 	idpf_vc_ena_dis_vport(vport, false);
 
+	cpfl_stop_queues(dev);
+
 	vport->stopped = 1;
 
 	return 0;
@@ -615,6 +621,8 @@ static const struct eth_dev_ops cpfl_eth_dev_ops = {
 	.link_update			= cpfl_dev_link_update,
 	.rx_queue_start			= cpfl_rx_queue_start,
 	.tx_queue_start			= cpfl_tx_queue_start,
+	.rx_queue_stop			= cpfl_rx_queue_stop,
+	.tx_queue_stop			= cpfl_tx_queue_stop,
 	.dev_supported_ptypes_get	= cpfl_dev_supported_ptypes_get,
 };
 
diff --git a/drivers/net/cpfl/cpfl_rxtx.c b/drivers/net/cpfl/cpfl_rxtx.c
index fb6becc99f..648b1f1e03 100644
--- a/drivers/net/cpfl/cpfl_rxtx.c
+++ b/drivers/net/cpfl/cpfl_rxtx.c
@@ -612,3 +612,90 @@ cpfl_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
 
 	return err;
 }
+
+int
+cpfl_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_rx_queue *rxq;
+	int err;
+
+	if (rx_queue_id >= dev->data->nb_rx_queues)
+		return -EINVAL;
+
+	err = idpf_switch_queue(vport, rx_queue_id, true, false);
+	if (err != 0) {
+		PMD_DRV_LOG(ERR, "Failed to switch RX queue %u off",
+			    rx_queue_id);
+		return err;
+	}
+
+	rxq = dev->data->rx_queues[rx_queue_id];
+	if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE) {
+		rxq->ops->release_mbufs(rxq);
+		idpf_reset_single_rx_queue(rxq);
+	} else {
+		rxq->bufq1->ops->release_mbufs(rxq->bufq1);
+		rxq->bufq2->ops->release_mbufs(rxq->bufq2);
+		idpf_reset_split_rx_queue(rxq);
+	}
+	dev->data->rx_queue_state[rx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
+
+	return 0;
+}
+
+int
+cpfl_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_tx_queue *txq;
+	int err;
+
+	if (tx_queue_id >= dev->data->nb_tx_queues)
+		return -EINVAL;
+
+	err = idpf_switch_queue(vport, tx_queue_id, false, false);
+	if (err != 0) {
+		PMD_DRV_LOG(ERR, "Failed to switch TX queue %u off",
+			    tx_queue_id);
+		return err;
+	}
+
+	txq = dev->data->tx_queues[tx_queue_id];
+	txq->ops->release_mbufs(txq);
+	if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE) {
+		idpf_reset_single_tx_queue(txq);
+	} else {
+		idpf_reset_split_tx_descq(txq);
+		idpf_reset_split_tx_complq(txq->complq);
+	}
+	dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
+
+	return 0;
+}
+
+void
+cpfl_stop_queues(struct rte_eth_dev *dev)
+{
+	struct idpf_rx_queue *rxq;
+	struct idpf_tx_queue *txq;
+	int i;
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		rxq = dev->data->rx_queues[i];
+		if (rxq == NULL)
+			continue;
+
+		if (cpfl_rx_queue_stop(dev, i) != 0)
+			PMD_DRV_LOG(WARNING, "Failed to stop Rx queue %d", i);
+	}
+
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		txq = dev->data->tx_queues[i];
+		if (txq == NULL)
+			continue;
+
+		if (cpfl_tx_queue_stop(dev, i) != 0)
+			PMD_DRV_LOG(WARNING, "Failed to stop Tx queue %d", i);
+	}
+}
diff --git a/drivers/net/cpfl/cpfl_rxtx.h b/drivers/net/cpfl/cpfl_rxtx.h
index 2fa7950775..6b63137d5c 100644
--- a/drivers/net/cpfl/cpfl_rxtx.h
+++ b/drivers/net/cpfl/cpfl_rxtx.h
@@ -32,4 +32,7 @@ int cpfl_rx_queue_init(struct rte_eth_dev *dev, uint16_t rx_queue_id);
 int cpfl_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id);
 int cpfl_tx_queue_init(struct rte_eth_dev *dev, uint16_t tx_queue_id);
 int cpfl_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id);
+void cpfl_stop_queues(struct rte_eth_dev *dev);
+int cpfl_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id);
+int cpfl_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id);
 #endif /* _CPFL_RXTX_H_ */
-- 
2.25.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v3 07/21] net/cpfl: support queue release
  2023-01-18  7:31   ` [PATCH v3 " Mingxia Liu
                       ` (5 preceding siblings ...)
  2023-01-18  7:31     ` [PATCH v3 06/21] net/cpfl: support queue stop Mingxia Liu
@ 2023-01-18  7:31     ` Mingxia Liu
  2023-01-18  7:31     ` [PATCH v3 08/21] net/cpfl: support MTU configuration Mingxia Liu
                       ` (10 subsequent siblings)
  17 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-01-18  7:31 UTC (permalink / raw)
  To: dev, qi.z.zhang, jingjing.wu, beilei.xing; +Cc: wenjun1.wu, Mingxia Liu

Add support for queue operations:
 - rx_queue_release
 - tx_queue_release

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
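With the release ops wired up, re-running queue setup on the same
index is safe; as the diff below shows, the old queue is freed first.
A sketch (1024 is an arbitrary new ring size; the port must be
stopped):

#include <rte_ethdev.h>

static int
resize_tx_queue(uint16_t port_id)
{
	/* Any previous queue 0 is released inside
	 * cpfl_tx_queue_setup() before the new one is created. */
	return rte_eth_tx_queue_setup(port_id, 0, 1024,
				      rte_eth_dev_socket_id(port_id),
				      NULL /* default txconf */);
}
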
 drivers/net/cpfl/cpfl_ethdev.c |  2 ++
 drivers/net/cpfl/cpfl_rxtx.c   | 35 ++++++++++++++++++++++++++++++++++
 drivers/net/cpfl/cpfl_rxtx.h   |  2 ++
 3 files changed, 39 insertions(+)

diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index b15ecab840..33a0b9ba60 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -623,6 +623,8 @@ static const struct eth_dev_ops cpfl_eth_dev_ops = {
 	.tx_queue_start			= cpfl_tx_queue_start,
 	.rx_queue_stop			= cpfl_rx_queue_stop,
 	.tx_queue_stop			= cpfl_tx_queue_stop,
+	.rx_queue_release		= cpfl_dev_rx_queue_release,
+	.tx_queue_release		= cpfl_dev_tx_queue_release,
 	.dev_supported_ptypes_get	= cpfl_dev_supported_ptypes_get,
 };
 
diff --git a/drivers/net/cpfl/cpfl_rxtx.c b/drivers/net/cpfl/cpfl_rxtx.c
index 648b1f1e03..4ed15ef7f4 100644
--- a/drivers/net/cpfl/cpfl_rxtx.c
+++ b/drivers/net/cpfl/cpfl_rxtx.c
@@ -49,6 +49,14 @@ cpfl_tx_offload_convert(uint64_t offload)
 	return ol;
 }
 
+static const struct idpf_rxq_ops def_rxq_ops = {
+	.release_mbufs = idpf_release_rxq_mbufs,
+};
+
+static const struct idpf_txq_ops def_txq_ops = {
+	.release_mbufs = idpf_release_txq_mbufs,
+};
+
 static const struct rte_memzone *
 cpfl_dma_zone_reserve(struct rte_eth_dev *dev, uint16_t queue_idx,
 		      uint16_t len, uint16_t queue_type,
@@ -177,6 +185,7 @@ cpfl_rx_split_bufq_setup(struct rte_eth_dev *dev, struct idpf_rx_queue *rxq,
 	idpf_reset_split_rx_bufq(bufq);
 	bufq->qrx_tail = hw->hw_addr + (vport->chunks_info.rx_buf_qtail_start +
 			 queue_idx * vport->chunks_info.rx_buf_qtail_spacing);
+	bufq->ops = &def_rxq_ops;
 	bufq->q_set = true;
 
 	if (bufq_id == 1) {
@@ -235,6 +244,12 @@ cpfl_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 	if (idpf_check_rx_thresh(nb_desc, rx_free_thresh) != 0)
 		return -EINVAL;
 
+	/* Free memory if needed */
+	if (dev->data->rx_queues[queue_idx] != NULL) {
+		idpf_rx_queue_release(dev->data->rx_queues[queue_idx]);
+		dev->data->rx_queues[queue_idx] = NULL;
+	}
+
 	/* Setup Rx queue */
 	rxq = rte_zmalloc_socket("cpfl rxq",
 				 sizeof(struct idpf_rx_queue),
@@ -287,6 +302,7 @@ cpfl_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 		idpf_reset_single_rx_queue(rxq);
 		rxq->qrx_tail = hw->hw_addr + (vport->chunks_info.rx_qtail_start +
 				queue_idx * vport->chunks_info.rx_qtail_spacing);
+		rxq->ops = &def_rxq_ops;
 	} else {
 		idpf_reset_split_rx_descq(rxq);
 
@@ -399,6 +415,12 @@ cpfl_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 	if (idpf_check_tx_thresh(nb_desc, tx_rs_thresh, tx_free_thresh) != 0)
 		return -EINVAL;
 
+	/* Free memory if needed. */
+	if (dev->data->tx_queues[queue_idx] != NULL) {
+		idpf_tx_queue_release(dev->data->tx_queues[queue_idx]);
+		dev->data->tx_queues[queue_idx] = NULL;
+	}
+
 	/* Allocate the TX queue data structure. */
 	txq = rte_zmalloc_socket("cpfl txq",
 				 sizeof(struct idpf_tx_queue),
@@ -461,6 +483,7 @@ cpfl_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 
 	txq->qtx_tail = hw->hw_addr + (vport->chunks_info.tx_qtail_start +
 			queue_idx * vport->chunks_info.tx_qtail_spacing);
+	txq->ops = &def_txq_ops;
 	txq->q_set = true;
 	dev->data->tx_queues[queue_idx] = txq;
 
@@ -674,6 +697,18 @@ cpfl_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
 	return 0;
 }
 
+void
+cpfl_dev_rx_queue_release(struct rte_eth_dev *dev, uint16_t qid)
+{
+	idpf_rx_queue_release(dev->data->rx_queues[qid]);
+}
+
+void
+cpfl_dev_tx_queue_release(struct rte_eth_dev *dev, uint16_t qid)
+{
+	idpf_tx_queue_release(dev->data->tx_queues[qid]);
+}
+
 void
 cpfl_stop_queues(struct rte_eth_dev *dev)
 {
diff --git a/drivers/net/cpfl/cpfl_rxtx.h b/drivers/net/cpfl/cpfl_rxtx.h
index 6b63137d5c..037d479d56 100644
--- a/drivers/net/cpfl/cpfl_rxtx.h
+++ b/drivers/net/cpfl/cpfl_rxtx.h
@@ -35,4 +35,6 @@ int cpfl_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id);
 void cpfl_stop_queues(struct rte_eth_dev *dev);
 int cpfl_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id);
 int cpfl_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+void cpfl_dev_tx_queue_release(struct rte_eth_dev *dev, uint16_t qid);
+void cpfl_dev_rx_queue_release(struct rte_eth_dev *dev, uint16_t qid);
 #endif /* _CPFL_RXTX_H_ */
-- 
2.25.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v3 08/21] net/cpfl: support MTU configuration
  2023-01-18  7:31   ` [PATCH v3 " Mingxia Liu
                       ` (6 preceding siblings ...)
  2023-01-18  7:31     ` [PATCH v3 07/21] net/cpfl: support queue release Mingxia Liu
@ 2023-01-18  7:31     ` Mingxia Liu
  2023-01-18  7:31     ` [PATCH v3 09/21] net/cpfl: support basic Rx data path Mingxia Liu
                       ` (9 subsequent siblings)
  17 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-01-18  7:31 UTC (permalink / raw)
  To: dev, qi.z.zhang, jingjing.wu, beilei.xing; +Cc: wenjun1.wu, Mingxia Liu

Add dev ops mtu_set.

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
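A sketch of the application-side call (error codes taken from
cpfl_dev_mtu_set() below):

#include <rte_ethdev.h>

static int
set_port_mtu(uint16_t port_id, uint16_t mtu)
{
	/* Returns -EBUSY if the port is already started and -EINVAL
	 * if mtu exceeds vport->max_mtu. */
	return rte_eth_dev_set_mtu(port_id, mtu);
}
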
 doc/guides/nics/features/cpfl.ini |  1 +
 drivers/net/cpfl/cpfl_ethdev.c    | 27 +++++++++++++++++++++++++++
 2 files changed, 28 insertions(+)

diff --git a/doc/guides/nics/features/cpfl.ini b/doc/guides/nics/features/cpfl.ini
index a2d1ca9e15..470ba81579 100644
--- a/doc/guides/nics/features/cpfl.ini
+++ b/doc/guides/nics/features/cpfl.ini
@@ -7,6 +7,7 @@
 ; is selected.
 ;
 [Features]
+MTU update           = Y
 Linux                = Y
 x86-32               = Y
 x86-64               = Y
diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index 33a0b9ba60..1f40f1749e 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -121,6 +121,27 @@ cpfl_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	return 0;
 }
 
+static int
+cpfl_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+
+	/* MTU setting is forbidden while the port is started */
+	if (dev->data->dev_started) {
+		PMD_DRV_LOG(ERR, "port must be stopped before configuration");
+		return -EBUSY;
+	}
+
+	if (mtu > vport->max_mtu) {
+		PMD_DRV_LOG(ERR, "MTU should be less than %d", vport->max_mtu);
+		return -EINVAL;
+	}
+
+	vport->max_pkt_len = mtu + CPFL_ETH_OVERHEAD;
+
+	return 0;
+}
+
 static const uint32_t *
 cpfl_dev_supported_ptypes_get(struct rte_eth_dev *dev __rte_unused)
 {
@@ -142,6 +163,7 @@ cpfl_dev_supported_ptypes_get(struct rte_eth_dev *dev __rte_unused)
 static int
 cpfl_dev_configure(struct rte_eth_dev *dev)
 {
+	struct idpf_vport *vport = dev->data->dev_private;
 	struct rte_eth_conf *conf = &dev->data->dev_conf;
 
 	if (conf->link_speeds & RTE_ETH_LINK_SPEED_FIXED) {
@@ -181,6 +203,10 @@ cpfl_dev_configure(struct rte_eth_dev *dev)
 		return -ENOTSUP;
 	}
 
+	vport->max_pkt_len =
+		(dev->data->mtu == 0) ? CPFL_DEFAULT_MTU : dev->data->mtu +
+		CPFL_ETH_OVERHEAD;
+
 	return 0;
 }
 
@@ -625,6 +651,7 @@ static const struct eth_dev_ops cpfl_eth_dev_ops = {
 	.tx_queue_stop			= cpfl_tx_queue_stop,
 	.rx_queue_release		= cpfl_dev_rx_queue_release,
 	.tx_queue_release		= cpfl_dev_tx_queue_release,
+	.mtu_set			= cpfl_dev_mtu_set,
 	.dev_supported_ptypes_get	= cpfl_dev_supported_ptypes_get,
 };
 
-- 
2.25.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v3 09/21] net/cpfl: support basic Rx data path
  2023-01-18  7:31   ` [PATCH v3 " Mingxia Liu
                       ` (7 preceding siblings ...)
  2023-01-18  7:31     ` [PATCH v3 08/21] net/cpfl: support MTU configuration Mingxia Liu
@ 2023-01-18  7:31     ` Mingxia Liu
  2023-01-18  7:31     ` [PATCH v3 10/21] net/cpfl: support basic Tx " Mingxia Liu
                       ` (8 subsequent siblings)
  17 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-01-18  7:31 UTC (permalink / raw)
  To: dev, qi.z.zhang, jingjing.wu, beilei.xing; +Cc: wenjun1.wu, Mingxia Liu

Add basic Rx support in split queue mode and single queue mode.

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
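Receive then works through the standard burst API; a polling sketch
(burst size 32 and queue 0 are arbitrary choices):

#include <rte_ethdev.h>
#include <rte_mbuf.h>

static void
rx_poll_once(uint16_t port_id)
{
	struct rte_mbuf *pkts[32];
	uint16_t i, nb;

	/* Dispatches to idpf_splitq_recv_pkts() or
	 * idpf_singleq_recv_pkts() depending on the queue model. */
	nb = rte_eth_rx_burst(port_id, 0, pkts, 32);
	for (i = 0; i < nb; i++)
		rte_pktmbuf_free(pkts[i]); /* application work here */
}
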
 drivers/net/cpfl/cpfl_ethdev.c |  2 ++
 drivers/net/cpfl/cpfl_rxtx.c   | 11 +++++++++++
 drivers/net/cpfl/cpfl_rxtx.h   |  1 +
 3 files changed, 14 insertions(+)

diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index 1f40f1749e..ca5b7f952e 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -255,6 +255,8 @@ cpfl_dev_start(struct rte_eth_dev *dev)
 		return ret;
 	}
 
+	cpfl_set_rx_function(dev);
+
 	ret = idpf_vc_ena_dis_vport(vport, true);
 	if (ret != 0) {
 		PMD_DRV_LOG(ERR, "Failed to enable vport");
diff --git a/drivers/net/cpfl/cpfl_rxtx.c b/drivers/net/cpfl/cpfl_rxtx.c
index 4ed15ef7f4..8d1d31e186 100644
--- a/drivers/net/cpfl/cpfl_rxtx.c
+++ b/drivers/net/cpfl/cpfl_rxtx.c
@@ -734,3 +734,14 @@ cpfl_stop_queues(struct rte_eth_dev *dev)
 			PMD_DRV_LOG(WARNING, "Fail to stop Tx queue %d", i);
 	}
 }
+
+void
+cpfl_set_rx_function(struct rte_eth_dev *dev)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+
+	if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT)
+		dev->rx_pkt_burst = idpf_splitq_recv_pkts;
+	else
+		dev->rx_pkt_burst = idpf_singleq_recv_pkts;
+}
diff --git a/drivers/net/cpfl/cpfl_rxtx.h b/drivers/net/cpfl/cpfl_rxtx.h
index 037d479d56..c29c30c7a3 100644
--- a/drivers/net/cpfl/cpfl_rxtx.h
+++ b/drivers/net/cpfl/cpfl_rxtx.h
@@ -37,4 +37,5 @@ int cpfl_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id);
 int cpfl_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id);
 void cpfl_dev_tx_queue_release(struct rte_eth_dev *dev, uint16_t qid);
 void cpfl_dev_rx_queue_release(struct rte_eth_dev *dev, uint16_t qid);
+void cpfl_set_rx_function(struct rte_eth_dev *dev);
 #endif /* _CPFL_RXTX_H_ */
-- 
2.25.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v3 10/21] net/cpfl: support basic Tx data path
  2023-01-18  7:31   ` [PATCH v3 " Mingxia Liu
                       ` (8 preceding siblings ...)
  2023-01-18  7:31     ` [PATCH v3 09/21] net/cpfl: support basic Rx data path Mingxia Liu
@ 2023-01-18  7:31     ` Mingxia Liu
  2023-01-18  7:31     ` [PATCH v3 11/21] net/cpfl: support write back based on ITR expire Mingxia Liu
                       ` (7 subsequent siblings)
  17 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-01-18  7:31 UTC (permalink / raw)
  To: dev, qi.z.zhang, jingjing.wu, beilei.xing; +Cc: wenjun1.wu, Mingxia Liu

Add basic Tx support in split queue mode and single queue mode.

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
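Transmit-side sketch; rte_eth_tx_prepare() maps to the tx_pkt_prepare
callback (idpf_prep_pkts) wired up below:

#include <rte_ethdev.h>
#include <rte_mbuf.h>

static uint16_t
tx_send(uint16_t port_id, struct rte_mbuf **pkts, uint16_t nb)
{
	/* Validate offload flags and descriptor limits first ... */
	uint16_t nb_prep = rte_eth_tx_prepare(port_id, 0, pkts, nb);

	/* ... then hand the validated packets to the PMD. */
	return rte_eth_tx_burst(port_id, 0, pkts, nb_prep);
}
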
 drivers/net/cpfl/cpfl_ethdev.c |  3 +++
 drivers/net/cpfl/cpfl_rxtx.c   | 14 ++++++++++++++
 drivers/net/cpfl/cpfl_rxtx.h   |  1 +
 3 files changed, 18 insertions(+)

diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index ca5b7f952e..94ff0d94bb 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -97,6 +97,8 @@ cpfl_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->max_mtu = vport->max_mtu;
 	dev_info->min_mtu = RTE_ETHER_MIN_MTU;
 
+	dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
+
 	dev_info->default_txconf = (struct rte_eth_txconf) {
 		.tx_free_thresh = CPFL_DEFAULT_TX_FREE_THRESH,
 		.tx_rs_thresh = CPFL_DEFAULT_TX_RS_THRESH,
@@ -256,6 +258,7 @@ cpfl_dev_start(struct rte_eth_dev *dev)
 	}
 
 	cpfl_set_rx_function(dev);
+	cpfl_set_tx_function(dev);
 
 	ret = idpf_vc_ena_dis_vport(vport, true);
 	if (ret != 0) {
diff --git a/drivers/net/cpfl/cpfl_rxtx.c b/drivers/net/cpfl/cpfl_rxtx.c
index 8d1d31e186..8724d391ad 100644
--- a/drivers/net/cpfl/cpfl_rxtx.c
+++ b/drivers/net/cpfl/cpfl_rxtx.c
@@ -745,3 +745,17 @@ cpfl_set_rx_function(struct rte_eth_dev *dev)
 	else
 		dev->rx_pkt_burst = idpf_singleq_recv_pkts;
 }
+
+void
+cpfl_set_tx_function(struct rte_eth_dev *dev)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+
+	if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
+		dev->tx_pkt_burst = idpf_splitq_xmit_pkts;
+		dev->tx_pkt_prepare = idpf_prep_pkts;
+	} else {
+		dev->tx_pkt_burst = idpf_singleq_xmit_pkts;
+		dev->tx_pkt_prepare = idpf_prep_pkts;
+	}
+}
diff --git a/drivers/net/cpfl/cpfl_rxtx.h b/drivers/net/cpfl/cpfl_rxtx.h
index c29c30c7a3..021db5bf8a 100644
--- a/drivers/net/cpfl/cpfl_rxtx.h
+++ b/drivers/net/cpfl/cpfl_rxtx.h
@@ -38,4 +38,5 @@ int cpfl_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id);
 void cpfl_dev_tx_queue_release(struct rte_eth_dev *dev, uint16_t qid);
 void cpfl_dev_rx_queue_release(struct rte_eth_dev *dev, uint16_t qid);
 void cpfl_set_rx_function(struct rte_eth_dev *dev);
+void cpfl_set_tx_function(struct rte_eth_dev *dev);
 #endif /* _CPFL_RXTX_H_ */
-- 
2.25.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v3 11/21] net/cpfl: support write back based on ITR expire
  2023-01-18  7:31   ` [PATCH v3 " Mingxia Liu
                       ` (9 preceding siblings ...)
  2023-01-18  7:31     ` [PATCH v3 10/21] net/cpfl: support basic Tx " Mingxia Liu
@ 2023-01-18  7:31     ` Mingxia Liu
  2023-01-18  7:31     ` [PATCH v3 12/21] net/cpfl: support RSS Mingxia Liu
                       ` (6 subsequent siblings)
  17 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-01-18  7:31 UTC (permalink / raw)
  To: dev, qi.z.zhang, jingjing.wu, beilei.xing; +Cc: wenjun1.wu, Mingxia Liu

Enable write back on ITR expire, so that packets can be received one
by one.

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/net/cpfl/cpfl_ethdev.c | 45 +++++++++++++++++++++++++++++++++-
 drivers/net/cpfl/cpfl_ethdev.h |  2 ++
 2 files changed, 46 insertions(+), 1 deletion(-)

diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index 94ff0d94bb..c678fe0354 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -212,6 +212,15 @@ cpfl_dev_configure(struct rte_eth_dev *dev)
 	return 0;
 }
 
+static int
+cpfl_config_rx_queues_irqs(struct rte_eth_dev *dev)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	uint16_t nb_rx_queues = dev->data->nb_rx_queues;
+
+	return idpf_config_irq_map(vport, nb_rx_queues);
+}
+
 static int
 cpfl_start_queues(struct rte_eth_dev *dev)
 {
@@ -249,12 +258,37 @@ static int
 cpfl_dev_start(struct rte_eth_dev *dev)
 {
 	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_adapter *base = vport->adapter;
+	struct cpfl_adapter_ext *adapter = CPFL_ADAPTER_TO_EXT(base);
+	uint16_t num_allocated_vectors = base->caps.num_allocated_vectors;
+	uint16_t req_vecs_num;
 	int ret;
 
+	req_vecs_num = CPFL_DFLT_Q_VEC_NUM;
+	if (req_vecs_num + adapter->used_vecs_num > num_allocated_vectors) {
+		PMD_DRV_LOG(ERR, "The accumulated number of requested vectors should not exceed %d",
+			    num_allocated_vectors);
+		ret = -EINVAL;
+		goto err_vec;
+	}
+
+	ret = idpf_vc_alloc_vectors(vport, req_vecs_num);
+	if (ret != 0) {
+		PMD_DRV_LOG(ERR, "Failed to allocate interrupt vectors");
+		goto err_vec;
+	}
+	adapter->used_vecs_num += req_vecs_num;
+
+	ret = cpfl_config_rx_queues_irqs(dev);
+	if (ret != 0) {
+		PMD_DRV_LOG(ERR, "Failed to configure irqs");
+		goto err_irq;
+	}
+
 	ret = cpfl_start_queues(dev);
 	if (ret != 0) {
 		PMD_DRV_LOG(ERR, "Failed to start queues");
-		return ret;
+		goto err_startq;
 	}
 
 	cpfl_set_rx_function(dev);
@@ -272,6 +306,11 @@ cpfl_dev_start(struct rte_eth_dev *dev)
 
 err_vport:
 	cpfl_stop_queues(dev);
+err_startq:
+	idpf_config_irq_unmap(vport, dev->data->nb_rx_queues);
+err_irq:
+	idpf_vc_dealloc_vectors(vport);
+err_vec:
 	return ret;
 }
 
@@ -287,6 +326,10 @@ cpfl_dev_stop(struct rte_eth_dev *dev)
 
 	cpfl_stop_queues(dev);
 
+	idpf_config_irq_unmap(vport, dev->data->nb_rx_queues);
+
+	idpf_vc_dealloc_vectors(vport);
+
 	vport->stopped = 1;
 
 	return 0;
diff --git a/drivers/net/cpfl/cpfl_ethdev.h b/drivers/net/cpfl/cpfl_ethdev.h
index 83459b9c91..9ae543c2ad 100644
--- a/drivers/net/cpfl/cpfl_ethdev.h
+++ b/drivers/net/cpfl/cpfl_ethdev.h
@@ -24,6 +24,8 @@
 
 #define CPFL_INVALID_VPORT_IDX	0xffff
 
+#define CPFL_DFLT_Q_VEC_NUM	1
+
 #define CPFL_MIN_BUF_SIZE	1024
 #define CPFL_MAX_FRAME_SIZE	9728
 #define CPFL_DEFAULT_MTU	RTE_ETHER_MTU
-- 
2.25.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v3 12/21] net/cpfl: support RSS
  2023-01-18  7:31   ` [PATCH v3 " Mingxia Liu
                       ` (10 preceding siblings ...)
  2023-01-18  7:31     ` [PATCH v3 11/21] net/cpfl: support write back based on ITR expire Mingxia Liu
@ 2023-01-18  7:31     ` Mingxia Liu
  2023-01-18  7:31     ` [PATCH v3 13/21] net/cpfl: support Rx offloading Mingxia Liu
                       ` (5 subsequent siblings)
  17 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-01-18  7:31 UTC (permalink / raw)
  To: dev, qi.z.zhang, jingjing.wu, beilei.xing; +Cc: wenjun1.wu, Mingxia Liu

Add RSS support.

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
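A configuration sketch; the hash types chosen here are assumptions
for illustration, the driver advertises CPFL_RSS_OFFLOAD_ALL (added
below) via flow_type_rss_offloads:

#include <rte_ethdev.h>

static int
configure_rss(uint16_t port_id, uint16_t nb_rxq, uint16_t nb_txq)
{
	struct rte_eth_conf conf = {
		.rxmode.mq_mode = RTE_ETH_MQ_RX_RSS,
		.rx_adv_conf.rss_conf = {
			/* NULL key: cpfl_init_rss() draws a random one */
			.rss_key = NULL,
			.rss_hf = RTE_ETH_RSS_IPV4 |
				  RTE_ETH_RSS_NONFRAG_IPV4_TCP,
		},
	};

	return rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &conf);
}
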
 drivers/net/cpfl/cpfl_ethdev.c | 51 ++++++++++++++++++++++++++++++++++
 drivers/net/cpfl/cpfl_ethdev.h | 15 ++++++++++
 2 files changed, 66 insertions(+)

diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index c678fe0354..49b8861df0 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -97,6 +97,8 @@ cpfl_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->max_mtu = vport->max_mtu;
 	dev_info->min_mtu = RTE_ETHER_MIN_MTU;
 
+	dev_info->flow_type_rss_offloads = CPFL_RSS_OFFLOAD_ALL;
+
 	dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 	dev_info->default_txconf = (struct rte_eth_txconf) {
@@ -162,11 +164,49 @@ cpfl_dev_supported_ptypes_get(struct rte_eth_dev *dev __rte_unused)
 	return ptypes;
 }
 
+static int
+cpfl_init_rss(struct idpf_vport *vport)
+{
+	struct rte_eth_rss_conf *rss_conf;
+	struct rte_eth_dev_data *dev_data;
+	uint16_t i, nb_q;
+	int ret = 0;
+
+	dev_data = vport->dev_data;
+	rss_conf = &dev_data->dev_conf.rx_adv_conf.rss_conf;
+	nb_q = dev_data->nb_rx_queues;
+
+	if (rss_conf->rss_key == NULL) {
+		for (i = 0; i < vport->rss_key_size; i++)
+			vport->rss_key[i] = (uint8_t)rte_rand();
+	} else if (rss_conf->rss_key_len != vport->rss_key_size) {
+		PMD_INIT_LOG(ERR, "Invalid RSS key length in RSS configuration, should be %d",
+			     vport->rss_key_size);
+		return -EINVAL;
+	} else {
+		rte_memcpy(vport->rss_key, rss_conf->rss_key,
+			   vport->rss_key_size);
+	}
+
+	for (i = 0; i < vport->rss_lut_size; i++)
+		vport->rss_lut[i] = i % nb_q;
+
+	vport->rss_hf = IDPF_DEFAULT_RSS_HASH_EXPANDED;
+
+	ret = idpf_config_rss(vport);
+	if (ret != 0)
+		PMD_INIT_LOG(ERR, "Failed to configure RSS");
+
+	return ret;
+}
+
 static int
 cpfl_dev_configure(struct rte_eth_dev *dev)
 {
 	struct idpf_vport *vport = dev->data->dev_private;
 	struct rte_eth_conf *conf = &dev->data->dev_conf;
+	struct idpf_adapter *adapter = vport->adapter;
+	int ret;
 
 	if (conf->link_speeds & RTE_ETH_LINK_SPEED_FIXED) {
 		PMD_INIT_LOG(ERR, "Setting link speed is not supported");
@@ -205,6 +245,17 @@ cpfl_dev_configure(struct rte_eth_dev *dev)
 		return -ENOTSUP;
 	}
 
+	if (adapter->caps.rss_caps != 0 && dev->data->nb_rx_queues != 0) {
+		ret = cpfl_init_rss(vport);
+		if (ret != 0) {
+			PMD_INIT_LOG(ERR, "Failed to init rss");
+			return ret;
+		}
+	} else {
+		PMD_INIT_LOG(ERR, "RSS is not supported.");
+		return -1;
+	}
+
 	vport->max_pkt_len =
 		(dev->data->mtu == 0) ? CPFL_DEFAULT_MTU : dev->data->mtu +
 		CPFL_ETH_OVERHEAD;
diff --git a/drivers/net/cpfl/cpfl_ethdev.h b/drivers/net/cpfl/cpfl_ethdev.h
index 9ae543c2ad..0d60ee3aed 100644
--- a/drivers/net/cpfl/cpfl_ethdev.h
+++ b/drivers/net/cpfl/cpfl_ethdev.h
@@ -36,6 +36,21 @@
 #define CPFL_ETH_OVERHEAD \
 	(RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN + CPFL_VLAN_TAG_SIZE * 2)
 
+#define CPFL_RSS_OFFLOAD_ALL (				\
+		RTE_ETH_RSS_IPV4                |	\
+		RTE_ETH_RSS_FRAG_IPV4           |	\
+		RTE_ETH_RSS_NONFRAG_IPV4_TCP    |	\
+		RTE_ETH_RSS_NONFRAG_IPV4_UDP    |	\
+		RTE_ETH_RSS_NONFRAG_IPV4_SCTP   |	\
+		RTE_ETH_RSS_NONFRAG_IPV4_OTHER  |	\
+		RTE_ETH_RSS_IPV6                |	\
+		RTE_ETH_RSS_FRAG_IPV6           |	\
+		RTE_ETH_RSS_NONFRAG_IPV6_TCP    |	\
+		RTE_ETH_RSS_NONFRAG_IPV6_UDP    |	\
+		RTE_ETH_RSS_NONFRAG_IPV6_SCTP   |	\
+		RTE_ETH_RSS_NONFRAG_IPV6_OTHER  |	\
+		RTE_ETH_RSS_L2_PAYLOAD)
+
 #define CPFL_ADAPTER_NAME_LEN	(PCI_PRI_STR_SIZE + 1)
 
 #define CPFL_ALARM_INTERVAL	50000 /* us */
-- 
2.25.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v3 13/21] net/cpfl: support Rx offloading
  2023-01-18  7:31   ` [PATCH v3 " Mingxia Liu
                       ` (11 preceding siblings ...)
  2023-01-18  7:31     ` [PATCH v3 12/21] net/cpfl: support RSS Mingxia Liu
@ 2023-01-18  7:31     ` Mingxia Liu
  2023-01-18  7:31     ` [PATCH v3 14/21] net/cpfl: support Tx offloading Mingxia Liu
                       ` (4 subsequent siblings)
  17 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-01-18  7:31 UTC (permalink / raw)
  To: dev, qi.z.zhang, jingjing.wu, beilei.xing; +Cc: wenjun1.wu, Mingxia Liu

Add Rx offloading support:
 - support CHKSUM and RSS offload for split queue model
 - support CHKSUM offload for single queue model

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 doc/guides/nics/features/cpfl.ini | 2 ++
 drivers/net/cpfl/cpfl_ethdev.c    | 6 ++++++
 2 files changed, 8 insertions(+)
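
As a usage note, a short hypothetical sketch of requesting these Rx
offloads and consuming the per-packet result flags (the helper and the
offload selection are illustrative, not part of the patch):

#include <stdbool.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>

/* Hypothetical example: the offload bits below go into
 * conf.rxmode.offloads at configure time; the PMD then reports
 * per-packet checksum status in the mbuf ol_flags.
 */
static const uint64_t example_rx_offloads =
	RTE_ETH_RX_OFFLOAD_IPV4_CKSUM | RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
	RTE_ETH_RX_OFFLOAD_TCP_CKSUM;

static bool
rx_cksum_bad(const struct rte_mbuf *m)
{
	return (m->ol_flags & RTE_MBUF_F_RX_IP_CKSUM_MASK) ==
			RTE_MBUF_F_RX_IP_CKSUM_BAD ||
	       (m->ol_flags & RTE_MBUF_F_RX_L4_CKSUM_MASK) ==
			RTE_MBUF_F_RX_L4_CKSUM_BAD;
}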

diff --git a/doc/guides/nics/features/cpfl.ini b/doc/guides/nics/features/cpfl.ini
index 470ba81579..ee5948f444 100644
--- a/doc/guides/nics/features/cpfl.ini
+++ b/doc/guides/nics/features/cpfl.ini
@@ -8,6 +8,8 @@
 ;
 [Features]
 MTU update           = Y
+L3 checksum offload  = P
+L4 checksum offload  = P
 Linux                = Y
 x86-32               = Y
 x86-64               = Y
diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index 49b8861df0..28c4721c24 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -99,6 +99,12 @@ cpfl_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 
 	dev_info->flow_type_rss_offloads = CPFL_RSS_OFFLOAD_ALL;
 
+	dev_info->rx_offload_capa =
+		RTE_ETH_RX_OFFLOAD_IPV4_CKSUM           |
+		RTE_ETH_RX_OFFLOAD_UDP_CKSUM            |
+		RTE_ETH_RX_OFFLOAD_TCP_CKSUM            |
+		RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM;
+
 	dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 	dev_info->default_txconf = (struct rte_eth_txconf) {
-- 
2.25.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v3 14/21] net/cpfl: support Tx offloading
  2023-01-18  7:31   ` [PATCH v3 " Mingxia Liu
                       ` (12 preceding siblings ...)
  2023-01-18  7:31     ` [PATCH v3 13/21] net/cpfl: support Rx offloading Mingxia Liu
@ 2023-01-18  7:31     ` Mingxia Liu
  2023-01-18  7:31     ` [PATCH v3 16/21] net/cpfl: support timestamp offload Mingxia Liu
                       ` (3 subsequent siblings)
  17 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-01-18  7:31 UTC (permalink / raw)
  To: dev, qi.z.zhang, jingjing.wu, beilei.xing; +Cc: wenjun1.wu, Mingxia Liu

Add Tx offloading support:
 - support TSO for single queue model and split queue model.

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 doc/guides/nics/features/cpfl.ini | 1 +
 drivers/net/cpfl/cpfl_ethdev.c    | 8 +++++++-
 2 files changed, 8 insertions(+), 1 deletion(-)
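
For illustration, a hypothetical sketch of preparing an IPv4/TCP mbuf for
the TSO offload advertised here (the header lengths and flags follow the
standard mbuf TSO contract; the helper itself is not part of the patch,
and rte_eth_tx_prepare() should still run before the Tx burst):

#include <rte_ether.h>
#include <rte_ip.h>
#include <rte_tcp.h>
#include <rte_mbuf.h>

/* Hypothetical example: mark an IPv4/TCP mbuf for segmentation into
 * chunks of `mss` payload bytes per output packet.
 */
static void
prepare_tso(struct rte_mbuf *m, uint16_t mss)
{
	m->l2_len = RTE_ETHER_HDR_LEN;
	m->l3_len = sizeof(struct rte_ipv4_hdr);
	m->l4_len = sizeof(struct rte_tcp_hdr);
	m->tso_segsz = mss;
	m->ol_flags |= RTE_MBUF_F_TX_TCP_SEG | RTE_MBUF_F_TX_IPV4 |
		       RTE_MBUF_F_TX_IP_CKSUM;
}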

diff --git a/doc/guides/nics/features/cpfl.ini b/doc/guides/nics/features/cpfl.ini
index ee5948f444..f4e45c7c68 100644
--- a/doc/guides/nics/features/cpfl.ini
+++ b/doc/guides/nics/features/cpfl.ini
@@ -8,6 +8,7 @@
 ;
 [Features]
 MTU update           = Y
+TSO                  = P
 L3 checksum offload  = P
 L4 checksum offload  = P
 Linux                = Y
diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index 28c4721c24..20e16c815a 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -105,7 +105,13 @@ cpfl_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 		RTE_ETH_RX_OFFLOAD_TCP_CKSUM            |
 		RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM;
 
-	dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
+	dev_info->tx_offload_capa =
+		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM		|
+		RTE_ETH_TX_OFFLOAD_UDP_CKSUM		|
+		RTE_ETH_TX_OFFLOAD_TCP_CKSUM		|
+		RTE_ETH_TX_OFFLOAD_SCTP_CKSUM		|
+		RTE_ETH_TX_OFFLOAD_TCP_TSO		|
+		RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 	dev_info->default_txconf = (struct rte_eth_txconf) {
 		.tx_free_thresh = CPFL_DEFAULT_TX_FREE_THRESH,
-- 
2.25.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v3 16/21] net/cpfl: support timestamp offload
  2023-01-18  7:31   ` [PATCH v3 " Mingxia Liu
                       ` (13 preceding siblings ...)
  2023-01-18  7:31     ` [PATCH v3 14/21] net/cpfl: support Tx offloading Mingxia Liu
@ 2023-01-18  7:31     ` Mingxia Liu
  2023-01-18  7:31     ` [PATCH v3 18/21] net/cpfl: add hw statistics Mingxia Liu
                       ` (2 subsequent siblings)
  17 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-01-18  7:31 UTC (permalink / raw)
  To: dev, qi.z.zhang, jingjing.wu, beilei.xing; +Cc: wenjun1.wu, Mingxia Liu

Add support for timestamp offload.

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 doc/guides/nics/features/cpfl.ini | 1 +
 drivers/net/cpfl/cpfl_ethdev.c    | 3 ++-
 drivers/net/cpfl/cpfl_rxtx.c      | 7 +++++++
 3 files changed, 10 insertions(+), 1 deletion(-)
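
For context, the Rx timestamp lands in the dynamic mbuf field that
idpf_register_ts_mbuf() registers; a hypothetical reader sketch (the
offset caching scheme and the output are illustrative):

#include <stdio.h>
#include <inttypes.h>
#include <rte_mbuf.h>
#include <rte_mbuf_dyn.h>

/* Hypothetical example: read the Rx timestamp from a received mbuf,
 * assuming RTE_ETH_RX_OFFLOAD_TIMESTAMP was enabled at configure time.
 */
static void
print_rx_timestamp(struct rte_mbuf *m)
{
	static int offset = -2;	/* -2 means "not looked up yet" */

	if (offset == -2)
		offset = rte_mbuf_dynfield_lookup(
				RTE_MBUF_DYNFIELD_TIMESTAMP_NAME, NULL);
	if (offset < 0)
		return;	/* field not registered: offload not enabled */

	printf("rx ts: %" PRIu64 "\n", (uint64_t)
	       *RTE_MBUF_DYNFIELD(m, offset, rte_mbuf_timestamp_t *));
}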

diff --git a/doc/guides/nics/features/cpfl.ini b/doc/guides/nics/features/cpfl.ini
index f4e45c7c68..c1209df3e5 100644
--- a/doc/guides/nics/features/cpfl.ini
+++ b/doc/guides/nics/features/cpfl.ini
@@ -11,6 +11,7 @@ MTU update           = Y
 TSO                  = P
 L3 checksum offload  = P
 L4 checksum offload  = P
+Timestamp offload    = P
 Linux                = Y
 x86-32               = Y
 x86-64               = Y
diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index dcff55e5b5..7b28571cfc 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -103,7 +103,8 @@ cpfl_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 		RTE_ETH_RX_OFFLOAD_IPV4_CKSUM           |
 		RTE_ETH_RX_OFFLOAD_UDP_CKSUM            |
 		RTE_ETH_RX_OFFLOAD_TCP_CKSUM            |
-		RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM;
+		RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM     |
+		RTE_ETH_RX_OFFLOAD_TIMESTAMP;
 
 	dev_info->tx_offload_capa =
 		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM		|
diff --git a/drivers/net/cpfl/cpfl_rxtx.c b/drivers/net/cpfl/cpfl_rxtx.c
index 6b5ea46a7b..559b10cb85 100644
--- a/drivers/net/cpfl/cpfl_rxtx.c
+++ b/drivers/net/cpfl/cpfl_rxtx.c
@@ -516,6 +516,13 @@ cpfl_rx_queue_init(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 		return -EINVAL;
 	}
 
+	err = idpf_register_ts_mbuf(rxq);
+	if (err != 0) {
+		PMD_DRV_LOG(ERR, "Failed to register timestamp mbuf %u",
+					rx_queue_id);
+		return -EIO;
+	}
+
 	if (rxq->bufq1 == NULL) {
 		/* Single queue */
 		err = idpf_alloc_single_rxq_mbufs(rxq);
-- 
2.25.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v3 18/21] net/cpfl: add hw statistics
  2023-01-18  7:31   ` [PATCH v3 " Mingxia Liu
                       ` (14 preceding siblings ...)
  2023-01-18  7:31     ` [PATCH v3 16/21] net/cpfl: support timestamp offload Mingxia Liu
@ 2023-01-18  7:31     ` Mingxia Liu
  2023-01-18  7:31     ` [PATCH v3 19/21] net/cpfl: add RSS set/get ops Mingxia Liu
  2023-01-18  7:31     ` [PATCH v3 21/21] net/cpfl: add xstats ops Mingxia Liu
  17 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-01-18  7:31 UTC (permalink / raw)
  To: dev, qi.z.zhang, jingjing.wu, beilei.xing; +Cc: wenjun1.wu, Mingxia Liu

This patch adds hardware packet/byte statistics.

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/net/cpfl/cpfl_ethdev.c | 87 ++++++++++++++++++++++++++++++++++
 drivers/net/idpf/idpf_ethdev.c |  3 +-
 2 files changed, 89 insertions(+), 1 deletion(-)
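
A short hypothetical usage sketch of the new ops through the generic
ethdev API (the port id and output format are illustrative):

#include <stdio.h>
#include <inttypes.h>
#include <rte_ethdev.h>

/* Hypothetical example: read the vport counters exposed by
 * cpfl_dev_stats_get() and rebase them via cpfl_dev_stats_reset().
 */
static void
dump_and_clear_stats(uint16_t port_id)
{
	struct rte_eth_stats stats;

	if (rte_eth_stats_get(port_id, &stats) != 0)
		return;

	printf("ipackets=%" PRIu64 " imissed=%" PRIu64 " rx_nombuf=%" PRIu64 "\n",
	       stats.ipackets, stats.imissed, stats.rx_nombuf);

	rte_eth_stats_reset(port_id);
}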

diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index 7b28571cfc..70c7d5daba 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -178,6 +178,86 @@ cpfl_dev_supported_ptypes_get(struct rte_eth_dev *dev __rte_unused)
 	return ptypes;
 }
 
+static uint64_t
+cpfl_get_mbuf_alloc_failed_stats(struct rte_eth_dev *dev)
+{
+	uint64_t mbuf_alloc_failed = 0;
+	struct idpf_rx_queue *rxq;
+	int i = 0;
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		rxq = dev->data->rx_queues[i];
+		mbuf_alloc_failed += rte_atomic64_read(&(rxq->rx_stats.mbuf_alloc_failed));
+	}
+
+	return mbuf_alloc_failed;
+}
+
+static int
+cpfl_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
+{
+	struct idpf_vport *vport =
+		(struct idpf_vport *)dev->data->dev_private;
+	struct virtchnl2_vport_stats *pstats = NULL;
+	int ret;
+
+	ret = idpf_query_stats(vport, &pstats);
+	if (ret == 0) {
+		uint8_t crc_stats_len = (dev->data->dev_conf.rxmode.offloads &
+					 RTE_ETH_RX_OFFLOAD_KEEP_CRC) ? 0 :
+					 RTE_ETHER_CRC_LEN;
+
+		idpf_update_stats(&vport->eth_stats_offset, pstats);
+		stats->ipackets = pstats->rx_unicast + pstats->rx_multicast +
+				pstats->rx_broadcast - pstats->rx_discards;
+		stats->opackets = pstats->tx_broadcast + pstats->tx_multicast +
+						pstats->tx_unicast;
+		stats->imissed = pstats->rx_discards;
+		stats->oerrors = pstats->tx_errors + pstats->tx_discards;
+		stats->ibytes = pstats->rx_bytes;
+		stats->ibytes -= stats->ipackets * crc_stats_len;
+		stats->obytes = pstats->tx_bytes;
+
+		dev->data->rx_mbuf_alloc_failed = cpfl_get_mbuf_alloc_failed_stats(dev);
+		stats->rx_nombuf = dev->data->rx_mbuf_alloc_failed;
+	} else {
+		PMD_DRV_LOG(ERR, "Get statistics failed");
+	}
+	return ret;
+}
+
+static void
+cpfl_reset_mbuf_alloc_failed_stats(struct rte_eth_dev *dev)
+{
+	struct idpf_rx_queue *rxq;
+	int i;
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		rxq = dev->data->rx_queues[i];
+		rte_atomic64_set(&(rxq->rx_stats.mbuf_alloc_failed), 0);
+	}
+}
+
+static int
+cpfl_dev_stats_reset(struct rte_eth_dev *dev)
+{
+	struct idpf_vport *vport =
+		(struct idpf_vport *)dev->data->dev_private;
+	struct virtchnl2_vport_stats *pstats = NULL;
+	int ret;
+
+	ret = idpf_query_stats(vport, &pstats);
+	if (ret != 0)
+		return ret;
+
+	/* set stats offset based on current values */
+	vport->eth_stats_offset = *pstats;
+
+	cpfl_reset_mbuf_alloc_failed_stats(dev);
+
+	return 0;
+}
+
 static int
 cpfl_init_rss(struct idpf_vport *vport)
 {
@@ -365,6 +445,11 @@ cpfl_dev_start(struct rte_eth_dev *dev)
 		goto err_vport;
 	}
 
+	if (cpfl_dev_stats_reset(dev)) {
+		PMD_DRV_LOG(ERR, "Failed to reset stats");
+		goto err_vport;
+	}
+
 	vport->stopped = 0;
 
 	return 0;
@@ -766,6 +851,8 @@ static const struct eth_dev_ops cpfl_eth_dev_ops = {
 	.tx_queue_release		= cpfl_dev_tx_queue_release,
 	.mtu_set			= cpfl_dev_mtu_set,
 	.dev_supported_ptypes_get	= cpfl_dev_supported_ptypes_get,
+	.stats_get			= cpfl_dev_stats_get,
+	.stats_reset			= cpfl_dev_stats_reset,
 };
 
 static uint16_t
diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index bcd15db3c5..b2cf959ee7 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -824,13 +824,14 @@ idpf_dev_start(struct rte_eth_dev *dev)
 
 	if (idpf_dev_stats_reset(dev)) {
 		PMD_DRV_LOG(ERR, "Failed to reset stats");
-		goto err_vport;
+		goto err_stats_reset;
 	}
 
 	vport->stopped = 0;
 
 	return 0;
 
+err_stats_reset:
 err_vport:
 	idpf_stop_queues(dev);
 err_startq:
-- 
2.25.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v3 19/21] net/cpfl: add RSS set/get ops
  2023-01-18  7:31   ` [PATCH v3 " Mingxia Liu
                       ` (15 preceding siblings ...)
  2023-01-18  7:31     ` [PATCH v3 18/21] net/cpfl: add hw statistics Mingxia Liu
@ 2023-01-18  7:31     ` Mingxia Liu
  2023-01-18  7:31     ` [PATCH v3 21/21] net/cpfl: add xstats ops Mingxia Liu
  17 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-01-18  7:31 UTC (permalink / raw)
  To: dev, qi.z.zhang, jingjing.wu, beilei.xing; +Cc: wenjun1.wu, Mingxia Liu

Add support for these device ops:
- rss_reta_update
- rss_reta_query
- rss_hash_update
- rss_hash_conf_get

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/net/cpfl/cpfl_ethdev.c | 303 +++++++++++++++++++++++++++++++++
 1 file changed, 303 insertions(+)
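
For illustration, a hypothetical sketch of driving the new reta_update op.
Per the checks added below, reta_size must equal dev_info.reta_size
(i.e. vport->rss_lut_size), so the table is sized from dev_info; the
512-entry cap in this helper is an assumption of the example:

#include <rte_ethdev.h>

/* Hypothetical example: distribute the RSS lookup table round-robin
 * over nb_rxq queues.
 */
static int
spread_reta(uint16_t port_id, uint16_t nb_rxq)
{
	struct rte_eth_rss_reta_entry64 reta[8] = {0};	/* 8 * 64 entries */
	struct rte_eth_dev_info info;
	uint16_t i;

	if (rte_eth_dev_info_get(port_id, &info) != 0 ||
	    info.reta_size > RTE_DIM(reta) * RTE_ETH_RETA_GROUP_SIZE)
		return -1;

	for (i = 0; i < info.reta_size; i++) {
		reta[i / RTE_ETH_RETA_GROUP_SIZE].mask |=
			1ULL << (i % RTE_ETH_RETA_GROUP_SIZE);
		reta[i / RTE_ETH_RETA_GROUP_SIZE].reta[i % RTE_ETH_RETA_GROUP_SIZE] =
			i % nb_rxq;
	}

	return rte_eth_dev_rss_reta_update(port_id, reta, info.reta_size);
}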

diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index 70c7d5daba..1d6902a3bd 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -30,6 +30,56 @@ static const char * const cpfl_valid_args[] = {
 	NULL
 };
 
+static const uint64_t cpfl_map_hena_rss[] = {
+	[IDPF_HASH_NONF_UNICAST_IPV4_UDP] =
+			RTE_ETH_RSS_NONFRAG_IPV4_UDP,
+	[IDPF_HASH_NONF_MULTICAST_IPV4_UDP] =
+			RTE_ETH_RSS_NONFRAG_IPV4_UDP,
+	[IDPF_HASH_NONF_IPV4_UDP] =
+			RTE_ETH_RSS_NONFRAG_IPV4_UDP,
+	[IDPF_HASH_NONF_IPV4_TCP_SYN_NO_ACK] =
+			RTE_ETH_RSS_NONFRAG_IPV4_TCP,
+	[IDPF_HASH_NONF_IPV4_TCP] =
+			RTE_ETH_RSS_NONFRAG_IPV4_TCP,
+	[IDPF_HASH_NONF_IPV4_SCTP] =
+			RTE_ETH_RSS_NONFRAG_IPV4_SCTP,
+	[IDPF_HASH_NONF_IPV4_OTHER] =
+			RTE_ETH_RSS_NONFRAG_IPV4_OTHER,
+	[IDPF_HASH_FRAG_IPV4] = RTE_ETH_RSS_FRAG_IPV4,
+
+	/* IPv6 */
+	[IDPF_HASH_NONF_UNICAST_IPV6_UDP] =
+			RTE_ETH_RSS_NONFRAG_IPV6_UDP,
+	[IDPF_HASH_NONF_MULTICAST_IPV6_UDP] =
+			RTE_ETH_RSS_NONFRAG_IPV6_UDP,
+	[IDPF_HASH_NONF_IPV6_UDP] =
+			RTE_ETH_RSS_NONFRAG_IPV6_UDP,
+	[IDPF_HASH_NONF_IPV6_TCP_SYN_NO_ACK] =
+			RTE_ETH_RSS_NONFRAG_IPV6_TCP,
+	[IDPF_HASH_NONF_IPV6_TCP] =
+			RTE_ETH_RSS_NONFRAG_IPV6_TCP,
+	[IDPF_HASH_NONF_IPV6_SCTP] =
+			RTE_ETH_RSS_NONFRAG_IPV6_SCTP,
+	[IDPF_HASH_NONF_IPV6_OTHER] =
+			RTE_ETH_RSS_NONFRAG_IPV6_OTHER,
+	[IDPF_HASH_FRAG_IPV6] = RTE_ETH_RSS_FRAG_IPV6,
+
+	/* L2 Payload */
+	[IDPF_HASH_L2_PAYLOAD] = RTE_ETH_RSS_L2_PAYLOAD
+};
+
+static const uint64_t cpfl_ipv4_rss = RTE_ETH_RSS_NONFRAG_IPV4_UDP |
+			  RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+			  RTE_ETH_RSS_NONFRAG_IPV4_SCTP |
+			  RTE_ETH_RSS_NONFRAG_IPV4_OTHER |
+			  RTE_ETH_RSS_FRAG_IPV4;
+
+static const uint64_t cpfl_ipv6_rss = RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+			  RTE_ETH_RSS_NONFRAG_IPV6_TCP |
+			  RTE_ETH_RSS_NONFRAG_IPV6_SCTP |
+			  RTE_ETH_RSS_NONFRAG_IPV6_OTHER |
+			  RTE_ETH_RSS_FRAG_IPV6;
+
 static int
 cpfl_dev_link_update(struct rte_eth_dev *dev,
 		     __rte_unused int wait_to_complete)
@@ -97,6 +147,9 @@ cpfl_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->max_mtu = vport->max_mtu;
 	dev_info->min_mtu = RTE_ETHER_MIN_MTU;
 
+	dev_info->hash_key_size = vport->rss_key_size;
+	dev_info->reta_size = vport->rss_lut_size;
+
 	dev_info->flow_type_rss_offloads = CPFL_RSS_OFFLOAD_ALL;
 
 	dev_info->rx_offload_capa =
@@ -258,6 +311,54 @@ cpfl_dev_stats_reset(struct rte_eth_dev *dev)
 	return 0;
 }
 
+static int cpfl_config_rss_hf(struct idpf_vport *vport, uint64_t rss_hf)
+{
+	uint64_t hena = 0, valid_rss_hf = 0;
+	int ret = 0;
+	uint16_t i;
+
+	/**
+	 * RTE_ETH_RSS_IPV4 and RTE_ETH_RSS_IPV6 can be considered as 2
+	 * generalizations of all other IPv4 and IPv6 RSS types.
+	 */
+	if (rss_hf & RTE_ETH_RSS_IPV4)
+		rss_hf |= cpfl_ipv4_rss;
+
+	if (rss_hf & RTE_ETH_RSS_IPV6)
+		rss_hf |= cpfl_ipv6_rss;
+
+	for (i = 0; i < RTE_DIM(cpfl_map_hena_rss); i++) {
+		uint64_t bit = BIT_ULL(i);
+
+		if (cpfl_map_hena_rss[i] & rss_hf) {
+			valid_rss_hf |= cpfl_map_hena_rss[i];
+			hena |= bit;
+		}
+	}
+
+	vport->rss_hf = hena;
+
+	ret = idpf_vc_set_rss_hash(vport);
+	if (ret != 0) {
+		PMD_DRV_LOG(WARNING,
+			    "fail to set RSS offload types, ret: %d", ret);
+		return ret;
+	}
+
+	if (valid_rss_hf & cpfl_ipv4_rss)
+		valid_rss_hf |= rss_hf & RTE_ETH_RSS_IPV4;
+
+	if (valid_rss_hf & cpfl_ipv6_rss)
+		valid_rss_hf |= rss_hf & RTE_ETH_RSS_IPV6;
+
+	if (rss_hf & ~valid_rss_hf)
+		PMD_DRV_LOG(WARNING, "Unsupported rss_hf 0x%" PRIx64,
+			    rss_hf & ~valid_rss_hf);
+	vport->last_general_rss_hf = valid_rss_hf;
+
+	return ret;
+}
+
 static int
 cpfl_init_rss(struct idpf_vport *vport)
 {
@@ -294,6 +395,204 @@ cpfl_init_rss(struct idpf_vport *vport)
 	return ret;
 }
 
+static int
+cpfl_rss_reta_update(struct rte_eth_dev *dev,
+		     struct rte_eth_rss_reta_entry64 *reta_conf,
+		     uint16_t reta_size)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_adapter *adapter = vport->adapter;
+	uint16_t idx, shift;
+	uint32_t *lut;
+	int ret = 0;
+	uint16_t i;
+
+	if (adapter->caps.rss_caps == 0 || dev->data->nb_rx_queues == 0) {
+		PMD_DRV_LOG(DEBUG, "RSS is not supported");
+		return -ENOTSUP;
+	}
+
+	if (reta_size != vport->rss_lut_size) {
+		PMD_DRV_LOG(ERR, "The size of hash lookup table configured "
+				 "(%d) doesn't match the number the hardware can "
+				 "support (%d)",
+			    reta_size, vport->rss_lut_size);
+		return -EINVAL;
+	}
+
+	/* It MUST use the current LUT size to get the RSS lookup table,
+	 * otherwise it will fail with a -100 error code.
+	 */
+	lut = rte_zmalloc(NULL, reta_size * sizeof(uint32_t), 0);
+	if (!lut) {
+		PMD_DRV_LOG(ERR, "No memory can be allocated");
+		return -ENOMEM;
+	}
+	/* store the old lut table temporarily */
+	rte_memcpy(lut, vport->rss_lut, reta_size * sizeof(uint32_t));
+
+	for (i = 0; i < reta_size; i++) {
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
+		if (reta_conf[idx].mask & (1ULL << shift))
+			lut[i] = reta_conf[idx].reta[shift];
+	}
+
+	rte_memcpy(vport->rss_lut, lut, reta_size * sizeof(uint32_t));
+	/* send virtchnl ops to configure RSS */
+	ret = idpf_vc_set_rss_lut(vport);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to configure RSS lut");
+		goto out;
+	}
+out:
+	rte_free(lut);
+
+	return ret;
+}
+
+static int
+cpfl_rss_reta_query(struct rte_eth_dev *dev,
+		    struct rte_eth_rss_reta_entry64 *reta_conf,
+		    uint16_t reta_size)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_adapter *adapter = vport->adapter;
+	uint16_t idx, shift;
+	int ret = 0;
+	uint16_t i;
+
+	if (adapter->caps.rss_caps == 0 || dev->data->nb_rx_queues == 0) {
+		PMD_DRV_LOG(DEBUG, "RSS is not supported");
+		return -ENOTSUP;
+	}
+
+	if (reta_size != vport->rss_lut_size) {
+		PMD_DRV_LOG(ERR, "The size of hash lookup table configured "
+			"(%d) doesn't match the number the hardware can "
+			"support (%d)", reta_size, vport->rss_lut_size);
+		return -EINVAL;
+	}
+
+	ret = idpf_vc_get_rss_lut(vport);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "Failed to get RSS LUT");
+		return ret;
+	}
+
+	for (i = 0; i < reta_size; i++) {
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
+		if (reta_conf[idx].mask & (1ULL << shift))
+			reta_conf[idx].reta[shift] = vport->rss_lut[i];
+	}
+
+	return 0;
+}
+
+static int
+cpfl_rss_hash_update(struct rte_eth_dev *dev,
+		     struct rte_eth_rss_conf *rss_conf)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_adapter *adapter = vport->adapter;
+	int ret = 0;
+
+	if (adapter->caps.rss_caps == 0 || dev->data->nb_rx_queues == 0) {
+		PMD_DRV_LOG(DEBUG, "RSS is not supported");
+		return -ENOTSUP;
+	}
+
+	if (!rss_conf->rss_key || rss_conf->rss_key_len == 0) {
+		PMD_DRV_LOG(DEBUG, "No key to be configured");
+		goto skip_rss_key;
+	} else if (rss_conf->rss_key_len != vport->rss_key_size) {
+		PMD_DRV_LOG(ERR, "The size of hash key configured "
+				 "(%d) doesn't match the size the hardware can "
+				 "support (%d)",
+			    rss_conf->rss_key_len,
+			    vport->rss_key_size);
+		return -EINVAL;
+	}
+
+	rte_memcpy(vport->rss_key, rss_conf->rss_key,
+		   vport->rss_key_size);
+	ret = idpf_vc_set_rss_key(vport);
+	if (ret != 0) {
+		PMD_INIT_LOG(ERR, "Failed to configure RSS key");
+		return ret;
+	}
+
+skip_rss_key:
+	ret = cpfl_config_rss_hf(vport, rss_conf->rss_hf);
+	if (ret != 0) {
+		PMD_INIT_LOG(ERR, "Failed to configure RSS hash");
+		return ret;
+	}
+
+	return 0;
+}
+
+static uint64_t
+cpfl_map_general_rss_hf(uint64_t config_rss_hf, uint64_t last_general_rss_hf)
+{
+	uint64_t valid_rss_hf = 0;
+	uint16_t i;
+
+	for (i = 0; i < RTE_DIM(cpfl_map_hena_rss); i++) {
+		uint64_t bit = BIT_ULL(i);
+
+		if (bit & config_rss_hf)
+			valid_rss_hf |= cpfl_map_hena_rss[i];
+	}
+
+	if (valid_rss_hf & cpfl_ipv4_rss)
+		valid_rss_hf |= last_general_rss_hf & RTE_ETH_RSS_IPV4;
+
+	if (valid_rss_hf & cpfl_ipv6_rss)
+		valid_rss_hf |= last_general_rss_hf & RTE_ETH_RSS_IPV6;
+
+	return valid_rss_hf;
+}
+
+static int
+cpfl_rss_hash_conf_get(struct rte_eth_dev *dev,
+		       struct rte_eth_rss_conf *rss_conf)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_adapter *adapter = vport->adapter;
+	int ret = 0;
+
+	if (adapter->caps.rss_caps == 0 || dev->data->nb_rx_queues == 0) {
+		PMD_DRV_LOG(DEBUG, "RSS is not supported");
+		return -ENOTSUP;
+	}
+
+	ret = idpf_vc_get_rss_hash(vport);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "Failed to get RSS hf");
+		return ret;
+	}
+
+	rss_conf->rss_hf = cpfl_map_general_rss_hf(vport->rss_hf, vport->last_general_rss_hf);
+
+	if (!rss_conf->rss_key)
+		return 0;
+
+	ret = idpf_vc_get_rss_key(vport);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "Failed to get RSS key");
+		return ret;
+	}
+
+	if (rss_conf->rss_key_len > vport->rss_key_size)
+		rss_conf->rss_key_len = vport->rss_key_size;
+
+	rte_memcpy(rss_conf->rss_key, vport->rss_key, rss_conf->rss_key_len);
+
+	return 0;
+}
+
 static int
 cpfl_dev_configure(struct rte_eth_dev *dev)
 {
@@ -853,6 +1152,10 @@ static const struct eth_dev_ops cpfl_eth_dev_ops = {
 	.dev_supported_ptypes_get	= cpfl_dev_supported_ptypes_get,
 	.stats_get			= cpfl_dev_stats_get,
 	.stats_reset			= cpfl_dev_stats_reset,
+	.reta_update			= cpfl_rss_reta_update,
+	.reta_query			= cpfl_rss_reta_query,
+	.rss_hash_update		= cpfl_rss_hash_update,
+	.rss_hash_conf_get		= cpfl_rss_hash_conf_get,
 };
 
 static uint16_t
-- 
2.25.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v3 21/21] net/cpfl: add xstats ops
  2023-01-18  7:31   ` [PATCH v3 " Mingxia Liu
                       ` (16 preceding siblings ...)
  2023-01-18  7:31     ` [PATCH v3 19/21] net/cpfl: add RSS set/get ops Mingxia Liu
@ 2023-01-18  7:31     ` Mingxia Liu
  17 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-01-18  7:31 UTC (permalink / raw)
  To: dev, qi.z.zhang, jingjing.wu, beilei.xing; +Cc: wenjun1.wu, Mingxia Liu

Add support for these device ops:
- dev_xstats_get
- dev_xstats_get_names
- dev_xstats_reset

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/net/cpfl/cpfl_ethdev.c | 80 ++++++++++++++++++++++++++++++++++
 1 file changed, 80 insertions(+)
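
As a usage note, the standard two-call xstats pattern against these ops;
the first call with a NULL array returns the required count, which is what
the `n < CPFL_NB_XSTATS` branch below serves (the helper is hypothetical):

#include <stdio.h>
#include <stdlib.h>
#include <inttypes.h>
#include <rte_ethdev.h>

/* Hypothetical example: query names and values of all xstats. */
static void
dump_xstats(uint16_t port_id)
{
	struct rte_eth_xstat *vals;
	struct rte_eth_xstat_name *names;
	int i, n;

	n = rte_eth_xstats_get(port_id, NULL, 0);
	if (n <= 0)
		return;

	vals = malloc(sizeof(*vals) * n);
	names = malloc(sizeof(*names) * n);
	if (vals != NULL && names != NULL &&
	    rte_eth_xstats_get_names(port_id, names, n) == n &&
	    rte_eth_xstats_get(port_id, vals, n) == n) {
		for (i = 0; i < n; i++)
			printf("%s: %" PRIu64 "\n",
			       names[vals[i].id].name, vals[i].value);
	}
	free(vals);
	free(names);
}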

diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index ab427fcc0b..f178f3fbb8 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -80,6 +80,30 @@ static const uint64_t cpfl_ipv6_rss = RTE_ETH_RSS_NONFRAG_IPV6_UDP |
 			  RTE_ETH_RSS_NONFRAG_IPV6_OTHER |
 			  RTE_ETH_RSS_FRAG_IPV6;
 
+struct rte_cpfl_xstats_name_off {
+	char name[RTE_ETH_XSTATS_NAME_SIZE];
+	unsigned int offset;
+};
+
+static const struct rte_cpfl_xstats_name_off rte_cpfl_stats_strings[] = {
+	{"rx_bytes", offsetof(struct virtchnl2_vport_stats, rx_bytes)},
+	{"rx_unicast_packets", offsetof(struct virtchnl2_vport_stats, rx_unicast)},
+	{"rx_multicast_packets", offsetof(struct virtchnl2_vport_stats, rx_multicast)},
+	{"rx_broadcast_packets", offsetof(struct virtchnl2_vport_stats, rx_broadcast)},
+	{"rx_dropped_packets", offsetof(struct virtchnl2_vport_stats, rx_discards)},
+	{"rx_errors", offsetof(struct virtchnl2_vport_stats, rx_errors)},
+	{"rx_unknown_protocol_packets", offsetof(struct virtchnl2_vport_stats,
+						 rx_unknown_protocol)},
+	{"tx_bytes", offsetof(struct virtchnl2_vport_stats, tx_bytes)},
+	{"tx_unicast_packets", offsetof(struct virtchnl2_vport_stats, tx_unicast)},
+	{"tx_multicast_packets", offsetof(struct virtchnl2_vport_stats, tx_multicast)},
+	{"tx_broadcast_packets", offsetof(struct virtchnl2_vport_stats, tx_broadcast)},
+	{"tx_dropped_packets", offsetof(struct virtchnl2_vport_stats, tx_discards)},
+	{"tx_error_packets", offsetof(struct virtchnl2_vport_stats, tx_errors)}};
+
+#define CPFL_NB_XSTATS (sizeof(rte_cpfl_stats_strings) / \
+		sizeof(rte_cpfl_stats_strings[0]))
+
 static int
 cpfl_dev_link_update(struct rte_eth_dev *dev,
 		     __rte_unused int wait_to_complete)
@@ -312,6 +336,59 @@ cpfl_dev_stats_reset(struct rte_eth_dev *dev)
 	return 0;
 }
 
+static int cpfl_dev_xstats_reset(struct rte_eth_dev *dev)
+{
+	cpfl_dev_stats_reset(dev);
+	return 0;
+}
+
+static int cpfl_dev_xstats_get(struct rte_eth_dev *dev,
+			       struct rte_eth_xstat *xstats, unsigned int n)
+{
+	struct idpf_vport *vport =
+		(struct idpf_vport *)dev->data->dev_private;
+	struct virtchnl2_vport_stats *pstats = NULL;
+	unsigned int i;
+	int ret;
+
+	if (n < CPFL_NB_XSTATS)
+		return CPFL_NB_XSTATS;
+
+	if (!xstats)
+		return 0;
+
+	ret = idpf_query_stats(vport, &pstats);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "Get statistics failed");
+		return 0;
+	}
+
+	idpf_update_stats(&vport->eth_stats_offset, pstats);
+
+	/* loop over xstats array and values from pstats */
+	for (i = 0; i < CPFL_NB_XSTATS; i++) {
+		xstats[i].id = i;
+		xstats[i].value = *(uint64_t *)(((char *)pstats) +
+			rte_cpfl_stats_strings[i].offset);
+	}
+	return CPFL_NB_XSTATS;
+}
+
+static int cpfl_dev_xstats_get_names(__rte_unused struct rte_eth_dev *dev,
+				     struct rte_eth_xstat_name *xstats_names,
+				     __rte_unused unsigned int limit)
+{
+	unsigned int i;
+
+	if (xstats_names)
+		for (i = 0; i < CPFL_NB_XSTATS; i++) {
+			snprintf(xstats_names[i].name,
+				 sizeof(xstats_names[i].name),
+				 "%s", rte_cpfl_stats_strings[i].name);
+		}
+	return CPFL_NB_XSTATS;
+}
+
 static int cpfl_config_rss_hf(struct idpf_vport *vport, uint64_t rss_hf)
 {
 	uint64_t hena = 0, valid_rss_hf = 0;
@@ -1157,6 +1234,9 @@ static const struct eth_dev_ops cpfl_eth_dev_ops = {
 	.reta_query			= cpfl_rss_reta_query,
 	.rss_hash_update		= cpfl_rss_hash_update,
 	.rss_hash_conf_get		= cpfl_rss_hash_conf_get,
+	.xstats_get			= cpfl_dev_xstats_get,
+	.xstats_get_names		= cpfl_dev_xstats_get_names,
+	.xstats_reset			= cpfl_dev_xstats_reset,
 };
 
 static uint16_t
-- 
2.25.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v3 15/21] net/cpfl: add AVX512 data path for single queue model
  2023-01-13  8:19 ` [PATCH v2 00/21] add support for cpfl PMD in DPDK Mingxia Liu
                     ` (22 preceding siblings ...)
  2023-01-18  7:31   ` [PATCH v3 " Mingxia Liu
@ 2023-01-18  7:33   ` Mingxia Liu
  2023-01-18  7:33     ` [PATCH v3 17/21] net/cpfl: add AVX512 data path for split " Mingxia Liu
  2023-01-18  7:33     ` [PATCH v3 20/21] net/cpfl: support single q scatter RX datapath Mingxia Liu
  2023-01-18  7:57   ` [PATCH v4 00/21] add support for cpfl PMD in DPDK Mingxia Liu
  24 siblings, 2 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-01-18  7:33 UTC (permalink / raw)
  To: dev, qi.z.zhang, jingjing.wu, beilei.xing; +Cc: wenjun1.wu, Mingxia Liu

Add support for the AVX512 vector data path for the single queue model.

Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 doc/guides/nics/cpfl.rst                |  24 +++++-
 drivers/net/cpfl/cpfl_ethdev.c          |   3 +-
 drivers/net/cpfl/cpfl_rxtx.c            |  85 ++++++++++++++++++++
 drivers/net/cpfl/cpfl_rxtx_vec_common.h | 100 ++++++++++++++++++++++++
 drivers/net/cpfl/meson.build            |  25 +++++-
 5 files changed, 234 insertions(+), 3 deletions(-)
 create mode 100644 drivers/net/cpfl/cpfl_rxtx_vec_common.h
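
For context, the AVX512 paths added below are taken only when the CPU
flags are present and the effective max SIMD bitwidth is at least 512.
Besides the EAL option --force-max-simd-bitwidth=512, an application can
raise the cap programmatically; a hypothetical sketch (it must run before
the device is started, which is when the burst functions are selected):

#include <rte_vect.h>

/* Hypothetical example: allow 512-bit SIMD so that
 * cpfl_set_rx_function()/cpfl_set_tx_function() may pick the AVX512
 * burst functions. Fails with -EPERM if the width was forced via EAL.
 */
static int
request_avx512_paths(void)
{
	if (rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_512)
		return 0;	/* already allowed */
	return rte_vect_set_max_simd_bitwidth(RTE_VECT_SIMD_512);
}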

diff --git a/doc/guides/nics/cpfl.rst b/doc/guides/nics/cpfl.rst
index 35a4bd44c6..1b275eb166 100644
--- a/doc/guides/nics/cpfl.rst
+++ b/doc/guides/nics/cpfl.rst
@@ -63,4 +63,26 @@ Runtime Config Options
 Driver compilation and testing
 ------------------------------
 
-Refer to the document :doc:`build_and_test` for details.
\ No newline at end of file
+Refer to the document :doc:`build_and_test` for details.
+
+Features
+--------
+
+Vector PMD
+~~~~~~~~~~
+
+Vector paths for Rx and Tx are selected automatically.
+The paths are chosen based on two conditions:
+
+- ``CPU``
+
+  On the x86 platform, the driver checks if the CPU supports AVX512.
+  If the CPU supports AVX512 and EAL argument ``--force-max-simd-bitwidth``
+  is set to 512, AVX512 paths will be chosen.
+
+- ``Offload features``
+
+  The supported HW offload features are described in the document cpfl.ini.
+  A value "P" means the offload feature is not supported by the vector path.
+  If any unsupported features are used, the cpfl vector PMD is disabled
+  and the scalar paths are chosen.
diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index 20e16c815a..dcff55e5b5 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -111,7 +111,8 @@ cpfl_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 		RTE_ETH_TX_OFFLOAD_TCP_CKSUM		|
 		RTE_ETH_TX_OFFLOAD_SCTP_CKSUM		|
 		RTE_ETH_TX_OFFLOAD_TCP_TSO		|
-		RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
+		RTE_ETH_TX_OFFLOAD_MULTI_SEGS		|
+		RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 	dev_info->default_txconf = (struct rte_eth_txconf) {
 		.tx_free_thresh = CPFL_DEFAULT_TX_FREE_THRESH,
diff --git a/drivers/net/cpfl/cpfl_rxtx.c b/drivers/net/cpfl/cpfl_rxtx.c
index 8724d391ad..6b5ea46a7b 100644
--- a/drivers/net/cpfl/cpfl_rxtx.c
+++ b/drivers/net/cpfl/cpfl_rxtx.c
@@ -8,6 +8,7 @@
 
 #include "cpfl_ethdev.h"
 #include "cpfl_rxtx.h"
+#include "cpfl_rxtx_vec_common.h"
 
 static uint64_t
 cpfl_rx_offload_convert(uint64_t offload)
@@ -739,22 +740,106 @@ void
 cpfl_set_rx_function(struct rte_eth_dev *dev)
 {
 	struct idpf_vport *vport = dev->data->dev_private;
+#ifdef RTE_ARCH_X86
+	struct idpf_rx_queue *rxq;
+	int i;
+
+	if (cpfl_rx_vec_dev_check_default(dev) == CPFL_VECTOR_PATH &&
+	    rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_128) {
+		vport->rx_vec_allowed = true;
+
+		if (rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_512)
+#ifdef CC_AVX512_SUPPORT
+			if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512F) == 1 &&
+			    rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512BW) == 1 &&
+			    rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512DQ))
+				vport->rx_use_avx512 = true;
+#else
+		PMD_DRV_LOG(NOTICE,
+			    "AVX512 is not supported in build env");
+#endif /* CC_AVX512_SUPPORT */
+	} else {
+		vport->rx_vec_allowed = false;
+	}
+#endif /* RTE_ARCH_X86 */
+
+#ifdef RTE_ARCH_X86
+	if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
+		dev->rx_pkt_burst = idpf_splitq_recv_pkts;
+	} else {
+		if (vport->rx_vec_allowed) {
+			for (i = 0; i < dev->data->nb_rx_queues; i++) {
+				rxq = dev->data->rx_queues[i];
+				(void)idpf_singleq_rx_vec_setup(rxq);
+			}
+#ifdef CC_AVX512_SUPPORT
+			if (vport->rx_use_avx512) {
+				dev->rx_pkt_burst = idpf_singleq_recv_pkts_avx512;
+				return;
+			}
+#endif /* CC_AVX512_SUPPORT */
+		}
 
+		dev->rx_pkt_burst = idpf_singleq_recv_pkts;
+	}
+#else
 	if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT)
 		dev->rx_pkt_burst = idpf_splitq_recv_pkts;
 	else
 		dev->rx_pkt_burst = idpf_singleq_recv_pkts;
+#endif /* RTE_ARCH_X86 */
 }
 
 void
 cpfl_set_tx_function(struct rte_eth_dev *dev)
 {
 	struct idpf_vport *vport = dev->data->dev_private;
+#ifdef RTE_ARCH_X86
+#ifdef CC_AVX512_SUPPORT
+	struct idpf_tx_queue *txq;
+	int i;
+#endif /* CC_AVX512_SUPPORT */
+
+	if (cpfl_tx_vec_dev_check_default(dev) == CPFL_VECTOR_PATH &&
+	    rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_128) {
+		vport->tx_vec_allowed = true;
+		if (rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_512)
+#ifdef CC_AVX512_SUPPORT
+		{
+			if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512F) == 1 &&
+			    rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512BW) == 1)
+				vport->tx_use_avx512 = true;
+			if (vport->tx_use_avx512) {
+				for (i = 0; i < dev->data->nb_tx_queues; i++) {
+					txq = dev->data->tx_queues[i];
+					idpf_tx_vec_setup_avx512(txq);
+				}
+			}
+		}
+#else
+		PMD_DRV_LOG(NOTICE,
+			    "AVX512 is not supported in build env");
+#endif /* CC_AVX512_SUPPORT */
+	} else {
+		vport->tx_vec_allowed = false;
+	}
+#endif /* RTE_ARCH_X86 */
 
 	if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
 		dev->tx_pkt_burst = idpf_splitq_xmit_pkts;
 		dev->tx_pkt_prepare = idpf_prep_pkts;
 	} else {
+#ifdef RTE_ARCH_X86
+		if (vport->tx_vec_allowed) {
+#ifdef CC_AVX512_SUPPORT
+			if (vport->tx_use_avx512) {
+				dev->tx_pkt_burst = idpf_singleq_xmit_pkts_avx512;
+				dev->tx_pkt_prepare = idpf_prep_pkts;
+				return;
+			}
+#endif /* CC_AVX512_SUPPORT */
+		}
+#endif /* RTE_ARCH_X86 */
 		dev->tx_pkt_burst = idpf_singleq_xmit_pkts;
 		dev->tx_pkt_prepare = idpf_prep_pkts;
 	}
diff --git a/drivers/net/cpfl/cpfl_rxtx_vec_common.h b/drivers/net/cpfl/cpfl_rxtx_vec_common.h
new file mode 100644
index 0000000000..503bc87f21
--- /dev/null
+++ b/drivers/net/cpfl/cpfl_rxtx_vec_common.h
@@ -0,0 +1,100 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2022 Intel Corporation
+ */
+
+#ifndef _CPFL_RXTX_VEC_COMMON_H_
+#define _CPFL_RXTX_VEC_COMMON_H_
+#include <stdint.h>
+#include <ethdev_driver.h>
+#include <rte_malloc.h>
+
+#include "cpfl_ethdev.h"
+#include "cpfl_rxtx.h"
+
+#ifndef __INTEL_COMPILER
+#pragma GCC diagnostic ignored "-Wcast-qual"
+#endif
+
+#define CPFL_SCALAR_PATH		0
+#define CPFL_VECTOR_PATH		1
+#define CPFL_RX_NO_VECTOR_FLAGS (		\
+		RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |	\
+		RTE_ETH_RX_OFFLOAD_UDP_CKSUM |	\
+		RTE_ETH_RX_OFFLOAD_TCP_CKSUM |	\
+		RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |	\
+		RTE_ETH_RX_OFFLOAD_TIMESTAMP)
+#define CPFL_TX_NO_VECTOR_FLAGS (		\
+		RTE_ETH_TX_OFFLOAD_TCP_TSO |	\
+		RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
+
+static inline int
+cpfl_rx_vec_queue_default(struct idpf_rx_queue *rxq)
+{
+	if (rxq == NULL)
+		return CPFL_SCALAR_PATH;
+
+	if (rte_is_power_of_2(rxq->nb_rx_desc) == 0)
+		return CPFL_SCALAR_PATH;
+
+	if (rxq->rx_free_thresh < IDPF_VPMD_RX_MAX_BURST)
+		return CPFL_SCALAR_PATH;
+
+	if ((rxq->nb_rx_desc % rxq->rx_free_thresh) != 0)
+		return CPFL_SCALAR_PATH;
+
+	if ((rxq->offloads & CPFL_RX_NO_VECTOR_FLAGS) != 0)
+		return CPFL_SCALAR_PATH;
+
+	return CPFL_VECTOR_PATH;
+}
+
+static inline int
+cpfl_tx_vec_queue_default(struct idpf_tx_queue *txq)
+{
+	if (txq == NULL)
+		return CPFL_SCALAR_PATH;
+
+	if (txq->rs_thresh < IDPF_VPMD_TX_MAX_BURST ||
+	    (txq->rs_thresh & 3) != 0)
+		return CPFL_SCALAR_PATH;
+
+	if ((txq->offloads & CPFL_TX_NO_VECTOR_FLAGS) != 0)
+		return CPFL_SCALAR_PATH;
+
+	return CPFL_VECTOR_PATH;
+}
+
+static inline int
+cpfl_rx_vec_dev_check_default(struct rte_eth_dev *dev)
+{
+	struct idpf_rx_queue *rxq;
+	int i, ret = 0;
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		rxq = dev->data->rx_queues[i];
+		ret = (cpfl_rx_vec_queue_default(rxq));
+		if (ret == CPFL_SCALAR_PATH)
+			return CPFL_SCALAR_PATH;
+	}
+
+	return CPFL_VECTOR_PATH;
+}
+
+static inline int
+cpfl_tx_vec_dev_check_default(struct rte_eth_dev *dev)
+{
+	int i;
+	struct idpf_tx_queue *txq;
+	int ret = 0;
+
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		txq = dev->data->tx_queues[i];
+		ret = cpfl_tx_vec_queue_default(txq);
+		if (ret == CPFL_SCALAR_PATH)
+			return CPFL_SCALAR_PATH;
+	}
+
+	return CPFL_VECTOR_PATH;
+}
+
+#endif /*_CPFL_RXTX_VEC_COMMON_H_*/
diff --git a/drivers/net/cpfl/meson.build b/drivers/net/cpfl/meson.build
index 3ccee15703..40ed8dbb7b 100644
--- a/drivers/net/cpfl/meson.build
+++ b/drivers/net/cpfl/meson.build
@@ -7,9 +7,32 @@ if is_windows
     subdir_done()
 endif
 
+if dpdk_conf.get('RTE_IOVA_AS_PA') == 0
+    build = false
+    reason = 'driver does not support disabling IOVA as PA mode'
+    subdir_done()
+endif
+
 deps += ['common_idpf']
 
 sources = files(
         'cpfl_ethdev.c',
         'cpfl_rxtx.c',
-)
\ No newline at end of file
+)
+
+if arch_subdir == 'x86'
+    cpfl_avx512_cpu_support = (
+        cc.get_define('__AVX512F__', args: machine_args) != '' and
+        cc.get_define('__AVX512BW__', args: machine_args) != ''
+    )
+
+    cpfl_avx512_cc_support = (
+        not machine_args.contains('-mno-avx512f') and
+        cc.has_argument('-mavx512f') and
+        cc.has_argument('-mavx512bw')
+    )
+
+    if cpfl_avx512_cpu_support == true or cpfl_avx512_cc_support == true
+        cflags += ['-DCC_AVX512_SUPPORT']
+    endif
+endif
-- 
2.25.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v3 17/21] net/cpfl: add AVX512 data path for split queue model
  2023-01-18  7:33   ` [PATCH v3 15/21] net/cpfl: add AVX512 data path for single queue model Mingxia Liu
@ 2023-01-18  7:33     ` Mingxia Liu
  2023-01-18  7:33     ` [PATCH v3 20/21] net/cpfl: support single q scatter RX datapath Mingxia Liu
  1 sibling, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-01-18  7:33 UTC (permalink / raw)
  To: dev, qi.z.zhang, jingjing.wu, beilei.xing; +Cc: wenjun1.wu, Mingxia Liu

Add support for the AVX512 data path for the split queue model.

Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/net/cpfl/cpfl_rxtx.c            | 25 +++++++++++++++++++++++++
 drivers/net/cpfl/cpfl_rxtx_vec_common.h | 19 +++++++++++++++++--
 2 files changed, 42 insertions(+), 2 deletions(-)

diff --git a/drivers/net/cpfl/cpfl_rxtx.c b/drivers/net/cpfl/cpfl_rxtx.c
index 559b10cb85..5cd25278d6 100644
--- a/drivers/net/cpfl/cpfl_rxtx.c
+++ b/drivers/net/cpfl/cpfl_rxtx.c
@@ -772,6 +772,20 @@ cpfl_set_rx_function(struct rte_eth_dev *dev)
 
 #ifdef RTE_ARCH_X86
 	if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
+#ifdef RTE_ARCH_X86
+		if (vport->rx_vec_allowed) {
+			for (i = 0; i < dev->data->nb_rx_queues; i++) {
+				rxq = dev->data->rx_queues[i];
+				(void)idpf_splitq_rx_vec_setup(rxq);
+			}
+#ifdef CC_AVX512_SUPPORT
+			if (vport->rx_use_avx512) {
+				dev->rx_pkt_burst = idpf_splitq_recv_pkts_avx512;
+				return;
+			}
+#endif
+		}
+#endif
 		dev->rx_pkt_burst = idpf_splitq_recv_pkts;
 	} else {
 		if (vport->rx_vec_allowed) {
@@ -833,6 +847,17 @@ cpfl_set_tx_function(struct rte_eth_dev *dev)
 #endif /* RTE_ARCH_X86 */
 
 	if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
+#ifdef RTE_ARCH_X86
+		if (vport->tx_vec_allowed) {
+#ifdef CC_AVX512_SUPPORT
+			if (vport->tx_use_avx512) {
+				dev->tx_pkt_burst = idpf_splitq_xmit_pkts_avx512;
+				dev->tx_pkt_prepare = idpf_prep_pkts;
+				return;
+			}
+#endif
+		}
+#endif
 		dev->tx_pkt_burst = idpf_splitq_xmit_pkts;
 		dev->tx_pkt_prepare = idpf_prep_pkts;
 	} else {
diff --git a/drivers/net/cpfl/cpfl_rxtx_vec_common.h b/drivers/net/cpfl/cpfl_rxtx_vec_common.h
index 503bc87f21..1f01cd40c5 100644
--- a/drivers/net/cpfl/cpfl_rxtx_vec_common.h
+++ b/drivers/net/cpfl/cpfl_rxtx_vec_common.h
@@ -64,15 +64,30 @@ cpfl_tx_vec_queue_default(struct idpf_tx_queue *txq)
 	return CPFL_VECTOR_PATH;
 }
 
+static inline int
+cpfl_rx_splitq_vec_default(struct idpf_rx_queue *rxq)
+{
+	if (rxq->bufq2->rx_buf_len < rxq->max_pkt_len)
+		return CPFL_SCALAR_PATH;
+
+	return CPFL_VECTOR_PATH;
+}
+
 static inline int
 cpfl_rx_vec_dev_check_default(struct rte_eth_dev *dev)
 {
+	struct idpf_vport *vport = dev->data->dev_private;
 	struct idpf_rx_queue *rxq;
-	int i, ret = 0;
+	int i, default_ret, splitq_ret, ret = CPFL_SCALAR_PATH;
 
 	for (i = 0; i < dev->data->nb_rx_queues; i++) {
 		rxq = dev->data->rx_queues[i];
-		ret = (cpfl_rx_vec_queue_default(rxq));
+		default_ret = cpfl_rx_vec_queue_default(rxq);
+		if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
+			splitq_ret = cpfl_rx_splitq_vec_default(rxq);
+			ret = splitq_ret && default_ret;
+		} else
+			ret = default_ret;
 		if (ret == CPFL_SCALAR_PATH)
 			return CPFL_SCALAR_PATH;
 	}
-- 
2.25.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v3 20/21] net/cpfl: support single q scatter RX datapath
  2023-01-18  7:33   ` [PATCH v3 15/21] net/cpfl: add AVX512 data path for single queue model Mingxia Liu
  2023-01-18  7:33     ` [PATCH v3 17/21] net/cpfl: add AVX512 data path for split " Mingxia Liu
@ 2023-01-18  7:33     ` Mingxia Liu
  1 sibling, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-01-18  7:33 UTC (permalink / raw)
  To: dev, qi.z.zhang, jingjing.wu, beilei.xing; +Cc: wenjun1.wu, Mingxia Liu

This patch adds a single queue scatter Rx receive function.

Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/net/cpfl/cpfl_ethdev.c |  3 ++-
 drivers/net/cpfl/cpfl_rxtx.c   | 26 ++++++++++++++++++++++++--
 drivers/net/cpfl/cpfl_rxtx.h   |  2 ++
 3 files changed, 28 insertions(+), 3 deletions(-)
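
A hypothetical application-side sketch of enabling this path: with
RTE_ETH_RX_OFFLOAD_SCATTER set (or an MTU exceeding one mbuf's data
room), cpfl_rx_queue_init() below flags scattered_rx, and in single
queue mode cpfl_set_rx_function() then picks
idpf_singleq_recv_scatter_pkts:

#include <string.h>
#include <rte_ethdev.h>

/* Hypothetical example: configure one Rx/Tx queue with scattered Rx,
 * so oversized frames are chained across several mbufs.
 */
static int
configure_scatter(uint16_t port_id, uint16_t mtu)
{
	struct rte_eth_conf conf;

	memset(&conf, 0, sizeof(conf));
	conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_SCATTER;
	conf.rxmode.mtu = mtu;	/* may exceed a single mbuf's data room */

	return rte_eth_dev_configure(port_id, 1, 1, &conf);
}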

diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index 1d6902a3bd..ab427fcc0b 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -157,7 +157,8 @@ cpfl_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 		RTE_ETH_RX_OFFLOAD_UDP_CKSUM            |
 		RTE_ETH_RX_OFFLOAD_TCP_CKSUM            |
 		RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM     |
-		RTE_ETH_RX_OFFLOAD_TIMESTAMP;
+		RTE_ETH_RX_OFFLOAD_TIMESTAMP		|
+		RTE_ETH_RX_OFFLOAD_SCATTER;
 
 	dev_info->tx_offload_capa =
 		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM		|
diff --git a/drivers/net/cpfl/cpfl_rxtx.c b/drivers/net/cpfl/cpfl_rxtx.c
index 5cd25278d6..b15323a4f4 100644
--- a/drivers/net/cpfl/cpfl_rxtx.c
+++ b/drivers/net/cpfl/cpfl_rxtx.c
@@ -503,6 +503,8 @@ int
 cpfl_rx_queue_init(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 {
 	struct idpf_rx_queue *rxq;
+	uint16_t max_pkt_len;
+	uint32_t frame_size;
 	int err;
 
 	if (rx_queue_id >= dev->data->nb_rx_queues)
@@ -516,6 +518,17 @@ cpfl_rx_queue_init(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 		return -EINVAL;
 	}
 
+	frame_size = dev->data->mtu + CPFL_ETH_OVERHEAD;
+
+	max_pkt_len =
+	    RTE_MIN((uint32_t)CPFL_SUPPORT_CHAIN_NUM * rxq->rx_buf_len,
+		    frame_size);
+
+	rxq->max_pkt_len = max_pkt_len;
+	if ((dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_SCATTER) ||
+	    frame_size > rxq->rx_buf_len)
+		dev->data->scattered_rx = 1;
+
 	err = idpf_register_ts_mbuf(rxq);
 	if (err != 0) {
 		PMD_DRV_LOG(ERR, "fail to register timestamp mbuf %u",
@@ -801,13 +814,22 @@ cpfl_set_rx_function(struct rte_eth_dev *dev)
 #endif /* CC_AVX512_SUPPORT */
 		}
 
+		if (dev->data->scattered_rx) {
+			dev->rx_pkt_burst = idpf_singleq_recv_scatter_pkts;
+			return;
+		}
 		dev->rx_pkt_burst = idpf_singleq_recv_pkts;
 	}
 #else
-	if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT)
+	if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
 		dev->rx_pkt_burst = idpf_splitq_recv_pkts;
-	else
+	} else {
+		if (dev->data->scattered_rx) {
+			dev->rx_pkt_burst = idpf_singleq_recv_scatter_pkts;
+			return;
+		}
 		dev->rx_pkt_burst = idpf_singleq_recv_pkts;
+	}
 #endif /* RTE_ARCH_X86 */
 }
 
diff --git a/drivers/net/cpfl/cpfl_rxtx.h b/drivers/net/cpfl/cpfl_rxtx.h
index 021db5bf8a..2d55f58455 100644
--- a/drivers/net/cpfl/cpfl_rxtx.h
+++ b/drivers/net/cpfl/cpfl_rxtx.h
@@ -21,6 +21,8 @@
 #define CPFL_DEFAULT_TX_RS_THRESH	32
 #define CPFL_DEFAULT_TX_FREE_THRESH	32
 
+#define CPFL_SUPPORT_CHAIN_NUM 5
+
 int cpfl_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 			uint16_t nb_desc, unsigned int socket_id,
 			const struct rte_eth_txconf *tx_conf);
-- 
2.25.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v4 00/21] add support for cpfl PMD in DPDK
  2023-01-13  8:19 ` [PATCH v2 00/21] add support for cpfl PMD in DPDK Mingxia Liu
                     ` (23 preceding siblings ...)
  2023-01-18  7:33   ` [PATCH v3 15/21] net/cpfl: add AVX512 data path for single queue model Mingxia Liu
@ 2023-01-18  7:57   ` Mingxia Liu
  2023-01-18  7:57     ` [PATCH v4 01/21] net/cpfl: support device initialization Mingxia Liu
                       ` (21 more replies)
  24 siblings, 22 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-01-18  7:57 UTC (permalink / raw)
  To: dev, qi.z.zhang, jingjing.wu, beilei.xing; +Cc: Mingxia Liu

The patchset introduced the cpfl (Control Plane Function Library) PMD
for Intel® IPU E2100’s Configure Physical Function (Device ID: 0x1453)

The cpfl PMD inherits all the features from the idpf PMD, which follows
an ongoing standard data plane function spec:
https://www.oasis-open.org/committees/tc_home.php?wg_abbrev=idpf
Besides, it will also support more device-specific hardware offloading
features from DPDK’s control path (e.g. hairpin, rte_flow …), which is
different from the idpf PMD, and that's why a new cpfl PMD is needed.

This patchset mainly focuses on idpf PMD’s equivalent features.
To avoid duplicated code, the patchset depends on below patchsets which
move the common part from net/idpf into common/idpf as a shared library.

This patchset is based on the idpf PMD code:
http://patches.dpdk.org/project/dpdk/cover/20230106090501.9106-1-beilei.xing@intel.com/
http://patches.dpdk.org/project/dpdk/cover/20230117080622.105657-1-beilei.xing@intel.com/
http://patches.dpdk.org/project/dpdk/cover/20230118035139.485060-1-wenjun1.wu@intel.com/
http://patches.dpdk.org/project/dpdk/cover/20230118071440.902155-1-mingxia.liu@intel.com/

v2 changes:
 - rebase to the new baseline.
 - Fix rss lut config issue.
v3 changes:
 - rebase to the new baseline.
v4 changes:
 - Resend v3. No code changed.

Mingxia Liu (21):
  net/cpfl: support device initialization
  net/cpfl: add Tx queue setup
  net/cpfl: add Rx queue setup
  net/cpfl: support device start and stop
  net/cpfl: support queue start
  net/cpfl: support queue stop
  net/cpfl: support queue release
  net/cpfl: support MTU configuration
  net/cpfl: support basic Rx data path
  net/cpfl: support basic Tx data path
  net/cpfl: support write back based on ITR expire
  net/cpfl: support RSS
  net/cpfl: support Rx offloading
  net/cpfl: support Tx offloading
  net/cpfl: add AVX512 data path for single queue model
  net/cpfl: support timestamp offload
  net/cpfl: add AVX512 data path for split queue model
  net/cpfl: add hw statistics
  net/cpfl: add RSS set/get ops
  net/cpfl: support single q scatter RX datapath
  net/cpfl: add xstats ops

 MAINTAINERS                             |    9 +
 doc/guides/nics/cpfl.rst                |   88 ++
 doc/guides/nics/features/cpfl.ini       |   17 +
 doc/guides/rel_notes/release_23_03.rst  |    5 +
 drivers/net/cpfl/cpfl_ethdev.c          | 1489 +++++++++++++++++++++++
 drivers/net/cpfl/cpfl_ethdev.h          |   95 ++
 drivers/net/cpfl/cpfl_logs.h            |   32 +
 drivers/net/cpfl/cpfl_rxtx.c            |  900 ++++++++++++++
 drivers/net/cpfl/cpfl_rxtx.h            |   44 +
 drivers/net/cpfl/cpfl_rxtx_vec_common.h |  115 ++
 drivers/net/cpfl/meson.build            |   38 +
 drivers/net/idpf/idpf_ethdev.c          |    3 +-
 drivers/net/meson.build                 |    1 +
 13 files changed, 2835 insertions(+), 1 deletion(-)
 create mode 100644 doc/guides/nics/cpfl.rst
 create mode 100644 doc/guides/nics/features/cpfl.ini
 create mode 100644 drivers/net/cpfl/cpfl_ethdev.c
 create mode 100644 drivers/net/cpfl/cpfl_ethdev.h
 create mode 100644 drivers/net/cpfl/cpfl_logs.h
 create mode 100644 drivers/net/cpfl/cpfl_rxtx.c
 create mode 100644 drivers/net/cpfl/cpfl_rxtx.h
 create mode 100644 drivers/net/cpfl/cpfl_rxtx_vec_common.h
 create mode 100644 drivers/net/cpfl/meson.build

-- 
2.25.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v4 01/21] net/cpfl: support device initialization
  2023-01-18  7:57   ` [PATCH v4 00/21] add support for cpfl PMD in DPDK Mingxia Liu
@ 2023-01-18  7:57     ` Mingxia Liu
  2023-01-18  7:57     ` [PATCH v4 02/21] net/cpfl: add Tx queue setup Mingxia Liu
                       ` (20 subsequent siblings)
  21 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-01-18  7:57 UTC (permalink / raw)
  To: dev, qi.z.zhang, jingjing.wu, beilei.xing; +Cc: Mingxia Liu

Support device init and add the following dev ops:
 - dev_configure
 - dev_close
 - dev_infos_get
 - link_update
 - cpfl_dev_supported_ptypes_get

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 MAINTAINERS                            |   9 +
 doc/guides/nics/cpfl.rst               |  66 +++
 doc/guides/nics/features/cpfl.ini      |  12 +
 doc/guides/rel_notes/release_23_03.rst |   5 +
 drivers/net/cpfl/cpfl_ethdev.c         | 768 +++++++++++++++++++++++++
 drivers/net/cpfl/cpfl_ethdev.h         |  78 +++
 drivers/net/cpfl/cpfl_logs.h           |  32 ++
 drivers/net/cpfl/cpfl_rxtx.c           | 244 ++++++++
 drivers/net/cpfl/cpfl_rxtx.h           |  25 +
 drivers/net/cpfl/meson.build           |  14 +
 drivers/net/meson.build                |   1 +
 11 files changed, 1254 insertions(+)
 create mode 100644 doc/guides/nics/cpfl.rst
 create mode 100644 doc/guides/nics/features/cpfl.ini
 create mode 100644 drivers/net/cpfl/cpfl_ethdev.c
 create mode 100644 drivers/net/cpfl/cpfl_ethdev.h
 create mode 100644 drivers/net/cpfl/cpfl_logs.h
 create mode 100644 drivers/net/cpfl/cpfl_rxtx.c
 create mode 100644 drivers/net/cpfl/cpfl_rxtx.h
 create mode 100644 drivers/net/cpfl/meson.build

diff --git a/MAINTAINERS b/MAINTAINERS
index 22ef2ea4b9..970acc5751 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -780,6 +780,15 @@ F: drivers/common/idpf/
 F: doc/guides/nics/idpf.rst
 F: doc/guides/nics/features/idpf.ini
 
+Intel cpfl
+M: Qi Zhang <qi.z.zhang@intel.com>
+M: Jingjing Wu <jingjing.wu@intel.com>
+M: Beilei Xing <beilei.xing@intel.com>
+T: git://dpdk.org/next/dpdk-next-net-intel
+F: drivers/net/cpfl/
+F: doc/guides/nics/cpfl.rst
+F: doc/guides/nics/features/cpfl.ini
+
 Intel igc
 M: Junfeng Guo <junfeng.guo@intel.com>
 M: Simei Su <simei.su@intel.com>
diff --git a/doc/guides/nics/cpfl.rst b/doc/guides/nics/cpfl.rst
new file mode 100644
index 0000000000..35a4bd44c6
--- /dev/null
+++ b/doc/guides/nics/cpfl.rst
@@ -0,0 +1,66 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+   Copyright(c) 2022 Intel Corporation.
+
+.. include:: <isonum.txt>
+
+CPFL Poll Mode Driver
+=====================
+
+The [*EXPERIMENTAL*] cpfl PMD (**librte_net_cpfl**) provides poll mode driver support
+for Intel\ |reg| Infrastructure Processing Unit (Intel\ |reg| IPU) E2100.
+
+
+Linux Prerequisites
+-------------------
+
+Follow the DPDK :doc:`../linux_gsg/index` to setup the basic DPDK environment.
+
+To get better performance on Intel platforms,
+please follow the :doc:`../linux_gsg/nic_perf_intel_platform`.
+
+
+Pre-Installation Configuration
+------------------------------
+
+Runtime Config Options
+~~~~~~~~~~~~~~~~~~~~~~
+
+- ``vport`` (default ``0``)
+
+  The PMD supports the creation of multiple vports for one PCI device;
+  each vport corresponds to a single ethdev.
+  The user can specify the vports to be created by ID, for example::
+
+    -a ca:00.0,vport=[0,2,3]
+
+  Then the PMD will create 3 vports (ethdevs) for device ``ca:00.0``.
+
+  If the parameter is not provided, vport 0 will be created by default.
+
+- ``rx_single`` (default ``0``)
+
+  There are two queue modes supported by Intel\ |reg| IPU Ethernet E21000 Series,
+  single queue mode and split queue mode for Rx queue.
+  User can choose Rx queue mode, example::
+
+    -a ca:00.0,rx_single=1
+
+  Then the PMD will configure Rx queue with single queue mode.
+  Otherwise, split queue mode is chosen by default.
+
+- ``tx_single`` (default ``0``)
+
+  There are two queue modes supported by Intel\ |reg| IPU Ethernet E21000 Series,
+  single queue mode and split queue mode for Tx queue.
+  User can choose Tx queue mode, example::
+
+    -a ca:00.0,tx_single=1
+
+  Then the PMD will configure Tx queue with single queue mode.
+  Otherwise, split queue mode is chosen by default.
+
+
+Driver compilation and testing
+------------------------------
+
+Refer to the document :doc:`build_and_test` for details.
\ No newline at end of file
diff --git a/doc/guides/nics/features/cpfl.ini b/doc/guides/nics/features/cpfl.ini
new file mode 100644
index 0000000000..a2d1ca9e15
--- /dev/null
+++ b/doc/guides/nics/features/cpfl.ini
@@ -0,0 +1,12 @@
+;
+; Supported features of the 'cpfl' network poll mode driver.
+;
+; Refer to default.ini for the full list of available PMD features.
+;
+; A feature with "P" indicates the feature is only supported when the
+; non-vector path is selected.
+;
+[Features]
+Linux                = Y
+x86-32               = Y
+x86-64               = Y
diff --git a/doc/guides/rel_notes/release_23_03.rst b/doc/guides/rel_notes/release_23_03.rst
index b8c5b68d6c..465a25e91e 100644
--- a/doc/guides/rel_notes/release_23_03.rst
+++ b/doc/guides/rel_notes/release_23_03.rst
@@ -55,6 +55,11 @@ New Features
      Also, make sure to start the actual text at the margin.
      =======================================================
 
+* **Added Intel cpfl driver.**
+
+  Added the new ``cpfl`` net driver
+  for Intel\ |reg| Infrastructure Processing Unit (Intel\ |reg| IPU) E2100.
+  See the :doc:`../nics/cpfl` NIC guide for more details on this new driver.
 
 Removed Items
 -------------
diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
new file mode 100644
index 0000000000..2ac53bc5b0
--- /dev/null
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -0,0 +1,768 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2022 Intel Corporation
+ */
+
+#include <rte_atomic.h>
+#include <rte_eal.h>
+#include <rte_ether.h>
+#include <rte_malloc.h>
+#include <rte_memzone.h>
+#include <rte_dev.h>
+#include <errno.h>
+#include <rte_alarm.h>
+
+#include "cpfl_ethdev.h"
+
+#define CPFL_TX_SINGLE_Q	"tx_single"
+#define CPFL_RX_SINGLE_Q	"rx_single"
+#define CPFL_VPORT		"vport"
+
+rte_spinlock_t cpfl_adapter_lock;
+/* A list for all adapters, one adapter matches one PCI device */
+struct cpfl_adapter_list cpfl_adapter_list;
+bool cpfl_adapter_list_init;
+
+static const char * const cpfl_valid_args[] = {
+	CPFL_TX_SINGLE_Q,
+	CPFL_RX_SINGLE_Q,
+	CPFL_VPORT,
+	NULL
+};
+
+static int
+cpfl_dev_link_update(struct rte_eth_dev *dev,
+		     __rte_unused int wait_to_complete)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct rte_eth_link new_link;
+
+	memset(&new_link, 0, sizeof(new_link));
+
+	switch (vport->link_speed) {
+	case RTE_ETH_SPEED_NUM_10M:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_10M;
+		break;
+	case RTE_ETH_SPEED_NUM_100M:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_100M;
+		break;
+	case RTE_ETH_SPEED_NUM_1G:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_1G;
+		break;
+	case RTE_ETH_SPEED_NUM_10G:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_10G;
+		break;
+	case RTE_ETH_SPEED_NUM_20G:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_20G;
+		break;
+	case RTE_ETH_SPEED_NUM_25G:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_25G;
+		break;
+	case RTE_ETH_SPEED_NUM_40G:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_40G;
+		break;
+	case RTE_ETH_SPEED_NUM_50G:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_50G;
+		break;
+	case RTE_ETH_SPEED_NUM_100G:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_100G;
+		break;
+	case RTE_ETH_SPEED_NUM_200G:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_200G;
+		break;
+	default:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+	}
+
+	new_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+	new_link.link_status = vport->link_up ? RTE_ETH_LINK_UP :
+		RTE_ETH_LINK_DOWN;
+	new_link.link_autoneg = !(dev->data->dev_conf.link_speeds &
+				  RTE_ETH_LINK_SPEED_FIXED);
+
+	return rte_eth_linkstatus_set(dev, &new_link);
+}
+
+static int
+cpfl_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_adapter *adapter = vport->adapter;
+
+	dev_info->max_rx_queues = adapter->caps.max_rx_q;
+	dev_info->max_tx_queues = adapter->caps.max_tx_q;
+	dev_info->min_rx_bufsize = CPFL_MIN_BUF_SIZE;
+	dev_info->max_rx_pktlen = vport->max_mtu + CPFL_ETH_OVERHEAD;
+
+	dev_info->max_mtu = vport->max_mtu;
+	dev_info->min_mtu = RTE_ETHER_MIN_MTU;
+
+	return 0;
+}
+
+static const uint32_t *
+cpfl_dev_supported_ptypes_get(struct rte_eth_dev *dev __rte_unused)
+{
+	static const uint32_t ptypes[] = {
+		RTE_PTYPE_L2_ETHER,
+		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN,
+		RTE_PTYPE_L3_IPV6_EXT_UNKNOWN,
+		RTE_PTYPE_L4_FRAG,
+		RTE_PTYPE_L4_UDP,
+		RTE_PTYPE_L4_TCP,
+		RTE_PTYPE_L4_SCTP,
+		RTE_PTYPE_L4_ICMP,
+		RTE_PTYPE_UNKNOWN
+	};
+
+	return ptypes;
+}
+
+static int
+cpfl_dev_configure(struct rte_eth_dev *dev)
+{
+	struct rte_eth_conf *conf = &dev->data->dev_conf;
+
+	if (conf->link_speeds & RTE_ETH_LINK_SPEED_FIXED) {
+		PMD_INIT_LOG(ERR, "Setting link speed is not supported");
+		return -ENOTSUP;
+	}
+
+	if (conf->txmode.mq_mode != RTE_ETH_MQ_TX_NONE) {
+		PMD_INIT_LOG(ERR, "Multi-queue TX mode %d is not supported",
+			     conf->txmode.mq_mode);
+		return -ENOTSUP;
+	}
+
+	if (conf->lpbk_mode != 0) {
+		PMD_INIT_LOG(ERR, "Loopback operation mode %d is not supported",
+			     conf->lpbk_mode);
+		return -ENOTSUP;
+	}
+
+	if (conf->dcb_capability_en != 0) {
+		PMD_INIT_LOG(ERR, "Priority Flow Control(PFC) if not supported");
+		return -ENOTSUP;
+	}
+
+	if (conf->intr_conf.lsc != 0) {
+		PMD_INIT_LOG(ERR, "LSC interrupt is not supported");
+		return -ENOTSUP;
+	}
+
+	if (conf->intr_conf.rxq != 0) {
+		PMD_INIT_LOG(ERR, "RXQ interrupt is not supported");
+		return -ENOTSUP;
+	}
+
+	if (conf->intr_conf.rmv != 0) {
+		PMD_INIT_LOG(ERR, "RMV interrupt is not supported");
+		return -ENOTSUP;
+	}
+
+	return 0;
+}
+
+static int
+cpfl_dev_close(struct rte_eth_dev *dev)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct cpfl_adapter_ext *adapter = CPFL_ADAPTER_TO_EXT(vport->adapter);
+
+	idpf_vport_deinit(vport);
+
+	adapter->cur_vports &= ~RTE_BIT32(vport->devarg_id);
+	adapter->cur_vport_nb--;
+	dev->data->dev_private = NULL;
+	adapter->vports[vport->sw_idx] = NULL;
+	rte_free(vport);
+
+	return 0;
+}
+
+static int
+insert_value(struct cpfl_devargs *devargs, uint16_t id)
+{
+	uint16_t i;
+
+	/* ignore duplicate */
+	for (i = 0; i < devargs->req_vport_nb; i++) {
+		if (devargs->req_vports[i] == id)
+			return 0;
+	}
+
+	if (devargs->req_vport_nb >= RTE_DIM(devargs->req_vports)) {
+		PMD_INIT_LOG(ERR, "Total vport number can't be > %d",
+			     CPFL_MAX_VPORT_NUM);
+		return -EINVAL;
+	}
+
+	devargs->req_vports[devargs->req_vport_nb] = id;
+	devargs->req_vport_nb++;
+
+	return 0;
+}
+
+static const char *
+parse_range(const char *value, struct cpfl_devargs *devargs)
+{
+	uint16_t lo, hi, i;
+	int n = 0;
+	int result;
+	const char *pos = value;
+
+	result = sscanf(value, "%hu%n-%hu%n", &lo, &n, &hi, &n);
+	if (result == 1) {
+		if (lo >= CPFL_MAX_VPORT_NUM)
+			return NULL;
+		if (insert_value(devargs, lo) != 0)
+			return NULL;
+	} else if (result == 2) {
+		if (lo > hi || hi >= CPFL_MAX_VPORT_NUM)
+			return NULL;
+		for (i = lo; i <= hi; i++) {
+			if (insert_value(devargs, i) != 0)
+				return NULL;
+		}
+	} else {
+		return NULL;
+	}
+
+	return pos + n;
+}
+
+static int
+parse_vport(const char *key, const char *value, void *args)
+{
+	struct cpfl_devargs *devargs = args;
+	const char *pos = value;
+
+	devargs->req_vport_nb = 0;
+
+	if (*pos == '[')
+		pos++;
+
+	while (1) {
+		pos = parse_range(pos, devargs);
+		if (pos == NULL) {
+			PMD_INIT_LOG(ERR, "invalid value:\"%s\" for key:\"%s\", ",
+				     value, key);
+			return -EINVAL;
+		}
+		if (*pos != ',')
+			break;
+		pos++;
+	}
+
+	if (*value == '[' && *pos != ']') {
+		PMD_INIT_LOG(ERR, "invalid value:\"%s\" for key:\"%s\", ",
+			     value, key);
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static int
+parse_bool(const char *key, const char *value, void *args)
+{
+	int *i = args;
+	char *end;
+	int num;
+
+	errno = 0;
+
+	num = strtoul(value, &end, 10);
+
+	if (errno == ERANGE || (num != 0 && num != 1)) {
+		PMD_INIT_LOG(ERR, "invalid value:\"%s\" for key:\"%s\", value must be 0 or 1",
+			value, key);
+		return -EINVAL;
+	}
+
+	*i = num;
+	return 0;
+}
+
+static int
+cpfl_parse_devargs(struct rte_pci_device *pci_dev, struct cpfl_adapter_ext *adapter,
+		   struct cpfl_devargs *cpfl_args)
+{
+	struct rte_devargs *devargs = pci_dev->device.devargs;
+	struct rte_kvargs *kvlist;
+	int i, ret;
+
+	cpfl_args->req_vport_nb = 0;
+
+	if (devargs == NULL)
+		return 0;
+
+	kvlist = rte_kvargs_parse(devargs->args, cpfl_valid_args);
+	if (kvlist == NULL) {
+		PMD_INIT_LOG(ERR, "invalid kvargs key");
+		return -EINVAL;
+	}
+
+	/* check parsed devargs */
+	if (adapter->cur_vport_nb + cpfl_args->req_vport_nb >
+	    CPFL_MAX_VPORT_NUM) {
+		PMD_INIT_LOG(ERR, "Total vport number can't be > %d",
+			     CPFL_MAX_VPORT_NUM);
+		ret = -EINVAL;
+		goto bail;
+	}
+
+	for (i = 0; i < cpfl_args->req_vport_nb; i++) {
+		if (adapter->cur_vports & RTE_BIT32(cpfl_args->req_vports[i])) {
+			PMD_INIT_LOG(ERR, "Vport %d has been created",
+				     cpfl_args->req_vports[i]);
+			ret = -EINVAL;
+			goto bail;
+		}
+	}
+
+	ret = rte_kvargs_process(kvlist, CPFL_VPORT, &parse_vport,
+				 cpfl_args);
+	if (ret != 0)
+		goto bail;
+
+	ret = rte_kvargs_process(kvlist, CPFL_TX_SINGLE_Q, &parse_bool,
+				 &adapter->base.txq_model);
+	if (ret != 0)
+		goto bail;
+
+	ret = rte_kvargs_process(kvlist, CPFL_RX_SINGLE_Q, &parse_bool,
+				 &adapter->base.rxq_model);
+	if (ret != 0)
+		goto bail;
+
+bail:
+	rte_kvargs_free(kvlist);
+	return ret;
+}
+
+static struct idpf_vport *
+cpfl_find_vport(struct cpfl_adapter_ext *adapter, uint32_t vport_id)
+{
+	struct idpf_vport *vport = NULL;
+	int i;
+
+	for (i = 0; i < adapter->cur_vport_nb; i++) {
+		vport = adapter->vports[i];
+		if (vport->vport_id != vport_id)
+			continue;
+		else
+			return vport;
+	}
+
+	return vport;
+}
+
+static void
+cpfl_handle_event_msg(struct idpf_vport *vport, uint8_t *msg, uint16_t msglen)
+{
+	struct virtchnl2_event *vc_event = (struct virtchnl2_event *)msg;
+	struct rte_eth_dev *dev = (struct rte_eth_dev *)vport->dev;
+
+	if (msglen < sizeof(struct virtchnl2_event)) {
+		PMD_DRV_LOG(ERR, "Error event");
+		return;
+	}
+
+	switch (vc_event->event) {
+	case VIRTCHNL2_EVENT_LINK_CHANGE:
+		PMD_DRV_LOG(DEBUG, "VIRTCHNL2_EVENT_LINK_CHANGE");
+		vport->link_up = vc_event->link_status;
+		vport->link_speed = vc_event->link_speed;
+		cpfl_dev_link_update(dev, 0);
+		break;
+	default:
+		PMD_DRV_LOG(ERR, " unknown event received %u", vc_event->event);
+		break;
+	}
+}
+
+static void
+cpfl_handle_virtchnl_msg(struct cpfl_adapter_ext *adapter_ex)
+{
+	struct idpf_adapter *adapter = &adapter_ex->base;
+	struct idpf_dma_mem *dma_mem = NULL;
+	struct idpf_hw *hw = &adapter->hw;
+	struct virtchnl2_event *vc_event;
+	struct idpf_ctlq_msg ctlq_msg;
+	enum idpf_mbx_opc mbx_op;
+	struct idpf_vport *vport;
+	enum virtchnl_ops vc_op;
+	uint16_t pending = 1;
+	int ret;
+
+	while (pending) {
+		ret = idpf_ctlq_recv(hw->arq, &pending, &ctlq_msg);
+		if (ret) {
+			PMD_DRV_LOG(INFO, "Failed to read msg from virtual channel, ret: %d", ret);
+			return;
+		}
+
+		rte_memcpy(adapter->mbx_resp, ctlq_msg.ctx.indirect.payload->va,
+			   IDPF_DFLT_MBX_BUF_SIZE);
+
+		mbx_op = rte_le_to_cpu_16(ctlq_msg.opcode);
+		vc_op = rte_le_to_cpu_32(ctlq_msg.cookie.mbx.chnl_opcode);
+		adapter->cmd_retval = rte_le_to_cpu_32(ctlq_msg.cookie.mbx.chnl_retval);
+
+		switch (mbx_op) {
+		case idpf_mbq_opc_send_msg_to_peer_pf:
+			if (vc_op == VIRTCHNL2_OP_EVENT) {
+				if (ctlq_msg.data_len < sizeof(struct virtchnl2_event)) {
+					PMD_DRV_LOG(ERR, "Error event");
+					return;
+				}
+				vc_event = (struct virtchnl2_event *)adapter->mbx_resp;
+				vport = cpfl_find_vport(adapter_ex, vc_event->vport_id);
+				if (!vport) {
+					PMD_DRV_LOG(ERR, "Can't find vport.");
+					return;
+				}
+				cpfl_handle_event_msg(vport, adapter->mbx_resp,
+						      ctlq_msg.data_len);
+			} else {
+				if (vc_op == adapter->pend_cmd)
+					notify_cmd(adapter, adapter->cmd_retval);
+				else
+					PMD_DRV_LOG(ERR, "command mismatch, expect %u, get %u",
+						    adapter->pend_cmd, vc_op);
+
+				PMD_DRV_LOG(DEBUG, "Virtual channel response is received, "
+					    "opcode = %d", vc_op);
+			}
+			goto post_buf;
+		default:
+			PMD_DRV_LOG(DEBUG, "Request %u is not supported yet", mbx_op);
+		}
+	}
+
+post_buf:
+	if (ctlq_msg.data_len)
+		dma_mem = ctlq_msg.ctx.indirect.payload;
+	else
+		pending = 0;
+
+	ret = idpf_ctlq_post_rx_buffs(hw, hw->arq, &pending, &dma_mem);
+	if (ret && dma_mem)
+		idpf_free_dma_mem(hw, dma_mem);
+}
+
+static void
+cpfl_dev_alarm_handler(void *param)
+{
+	struct cpfl_adapter_ext *adapter = param;
+
+	cpfl_handle_virtchnl_msg(adapter);
+
+	rte_eal_alarm_set(CPFL_ALARM_INTERVAL, cpfl_dev_alarm_handler, adapter);
+}
+
+static int
+cpfl_adapter_ext_init(struct rte_pci_device *pci_dev, struct cpfl_adapter_ext *adapter)
+{
+	struct idpf_adapter *base = &adapter->base;
+	struct idpf_hw *hw = &base->hw;
+	int ret = 0;
+
+	hw->hw_addr = (void *)pci_dev->mem_resource[0].addr;
+	hw->hw_addr_len = pci_dev->mem_resource[0].len;
+	hw->back = base;
+	hw->vendor_id = pci_dev->id.vendor_id;
+	hw->device_id = pci_dev->id.device_id;
+	hw->subsystem_vendor_id = pci_dev->id.subsystem_vendor_id;
+
+	strncpy(adapter->name, pci_dev->device.name, PCI_PRI_STR_SIZE);
+
+	ret = idpf_adapter_init(base);
+	if (ret != 0) {
+		PMD_INIT_LOG(ERR, "Failed to init adapter");
+		goto err_adapter_init;
+	}
+
+	rte_eal_alarm_set(CPFL_ALARM_INTERVAL, cpfl_dev_alarm_handler, adapter);
+
+	adapter->max_vport_nb = adapter->base.caps.max_vports;
+
+	adapter->vports = rte_zmalloc("vports",
+				      adapter->max_vport_nb *
+				      sizeof(*adapter->vports),
+				      0);
+	if (adapter->vports == NULL) {
+		PMD_INIT_LOG(ERR, "Failed to allocate vports memory");
+		ret = -ENOMEM;
+		goto err_get_ptype;
+	}
+
+	adapter->cur_vports = 0;
+	adapter->cur_vport_nb = 0;
+
+	adapter->used_vecs_num = 0;
+
+	return ret;
+
+err_get_ptype:
+	idpf_adapter_deinit(base);
+err_adapter_init:
+	return ret;
+}
+
+static const struct eth_dev_ops cpfl_eth_dev_ops = {
+	.dev_configure			= cpfl_dev_configure,
+	.dev_close			= cpfl_dev_close,
+	.dev_infos_get			= cpfl_dev_info_get,
+	.link_update			= cpfl_dev_link_update,
+	.dev_supported_ptypes_get	= cpfl_dev_supported_ptypes_get,
+};
+
+static uint16_t
+cpfl_vport_idx_alloc(struct cpfl_adapter_ext *ad)
+{
+	uint16_t vport_idx;
+	uint16_t i;
+
+	for (i = 0; i < ad->max_vport_nb; i++) {
+		if (ad->vports[i] == NULL)
+			break;
+	}
+
+	if (i == ad->max_vport_nb)
+		vport_idx = CPFL_INVALID_VPORT_IDX;
+	else
+		vport_idx = i;
+
+	return vport_idx;
+}
+
+static int
+cpfl_dev_vport_init(struct rte_eth_dev *dev, void *init_params)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct cpfl_vport_param *param = init_params;
+	struct cpfl_adapter_ext *adapter = param->adapter;
+	/* info used to prepare the create vport virtchnl msg */
+	struct virtchnl2_create_vport create_vport_info;
+	int ret = 0;
+
+	dev->dev_ops = &cpfl_eth_dev_ops;
+	vport->adapter = &adapter->base;
+	vport->sw_idx = param->idx;
+	vport->devarg_id = param->devarg_id;
+	vport->dev = dev;
+
+	memset(&create_vport_info, 0, sizeof(create_vport_info));
+	ret = idpf_create_vport_info_init(vport, &create_vport_info);
+	if (ret != 0) {
+		PMD_INIT_LOG(ERR, "Failed to init vport req_info.");
+		goto err;
+	}
+
+	ret = idpf_vport_init(vport, &create_vport_info, dev->data);
+	if (ret != 0) {
+		PMD_INIT_LOG(ERR, "Failed to init vports.");
+		goto err;
+	}
+
+	adapter->vports[param->idx] = vport;
+	adapter->cur_vports |= RTE_BIT32(param->devarg_id);
+	adapter->cur_vport_nb++;
+
+	dev->data->mac_addrs = rte_zmalloc(NULL, RTE_ETHER_ADDR_LEN, 0);
+	if (dev->data->mac_addrs == NULL) {
+		PMD_INIT_LOG(ERR, "Cannot allocate mac_addr memory.");
+		ret = -ENOMEM;
+		goto err_mac_addrs;
+	}
+
+	rte_ether_addr_copy((struct rte_ether_addr *)vport->default_mac_addr,
+			    &dev->data->mac_addrs[0]);
+
+	return 0;
+
+err_mac_addrs:
+	adapter->vports[param->idx] = NULL;  /* reset */
+	idpf_vport_deinit(vport);
+err:
+	return ret;
+}
+
+static const struct rte_pci_id pci_id_cpfl_map[] = {
+	{ RTE_PCI_DEVICE(IDPF_INTEL_VENDOR_ID, IDPF_DEV_ID_CPF) },
+	{ .vendor_id = 0, /* sentinel */ },
+};
+
+static struct cpfl_adapter_ext *
+cpfl_find_adapter_ext(struct rte_pci_device *pci_dev)
+{
+	struct cpfl_adapter_ext *adapter;
+	int found = 0;
+
+	if (pci_dev == NULL)
+		return NULL;
+
+	rte_spinlock_lock(&cpfl_adapter_lock);
+	TAILQ_FOREACH(adapter, &cpfl_adapter_list, next) {
+		if (strncmp(adapter->name, pci_dev->device.name, PCI_PRI_STR_SIZE) == 0) {
+			found = 1;
+			break;
+		}
+	}
+	rte_spinlock_unlock(&cpfl_adapter_lock);
+
+	if (found == 0)
+		return NULL;
+
+	return adapter;
+}
+
+static void
+cpfl_adapter_ext_deinit(struct cpfl_adapter_ext *adapter)
+{
+	rte_eal_alarm_cancel(cpfl_dev_alarm_handler, adapter);
+	idpf_adapter_deinit(&adapter->base);
+
+	rte_free(adapter->vports);
+	adapter->vports = NULL;
+}
+
+static int
+cpfl_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
+	       struct rte_pci_device *pci_dev)
+{
+	struct cpfl_vport_param vport_param;
+	struct cpfl_adapter_ext *adapter;
+	struct cpfl_devargs devargs;
+	char name[RTE_ETH_NAME_MAX_LEN];
+	int i, retval;
+	bool first_probe = false;
+
+	if (!cpfl_adapter_list_init) {
+		rte_spinlock_init(&cpfl_adapter_lock);
+		TAILQ_INIT(&cpfl_adapter_list);
+		cpfl_adapter_list_init = true;
+	}
+
+	adapter = cpfl_find_adapter_ext(pci_dev);
+	if (adapter == NULL) {
+		first_probe = true;
+		adapter = rte_zmalloc("cpfl_adapter_ext",
+						sizeof(struct cpfl_adapter_ext), 0);
+		if (adapter == NULL) {
+			PMD_INIT_LOG(ERR, "Failed to allocate adapter.");
+			return -ENOMEM;
+		}
+
+		retval = cpfl_adapter_ext_init(pci_dev, adapter);
+		if (retval != 0) {
+			PMD_INIT_LOG(ERR, "Failed to init adapter.");
+			return retval;
+		}
+
+		rte_spinlock_lock(&cpfl_adapter_lock);
+		TAILQ_INSERT_TAIL(&cpfl_adapter_list, adapter, next);
+		rte_spinlock_unlock(&cpfl_adapter_lock);
+	}
+
+	retval = cpfl_parse_devargs(pci_dev, adapter, &devargs);
+	if (retval != 0) {
+		PMD_INIT_LOG(ERR, "Failed to parse private devargs");
+		goto err;
+	}
+
+	if (devargs.req_vport_nb == 0) {
+		/* If no vport devarg, create vport 0 by default. */
+		vport_param.adapter = adapter;
+		vport_param.devarg_id = 0;
+		vport_param.idx = cpfl_vport_idx_alloc(adapter);
+		if (vport_param.idx == CPFL_INVALID_VPORT_IDX) {
+			PMD_INIT_LOG(ERR, "No space for vport %u", vport_param.devarg_id);
+			return 0;
+		}
+		snprintf(name, sizeof(name), "cpfl_%s_vport_0",
+			 pci_dev->device.name);
+		retval = rte_eth_dev_create(&pci_dev->device, name,
+					    sizeof(struct idpf_vport),
+					    NULL, NULL, cpfl_dev_vport_init,
+					    &vport_param);
+		if (retval != 0)
+			PMD_DRV_LOG(ERR, "Failed to create default vport 0");
+	} else {
+		for (i = 0; i < devargs.req_vport_nb; i++) {
+			vport_param.adapter = adapter;
+			vport_param.devarg_id = devargs.req_vports[i];
+			vport_param.idx = cpfl_vport_idx_alloc(adapter);
+			if (vport_param.idx == CPFL_INVALID_VPORT_IDX) {
+				PMD_INIT_LOG(ERR, "No space for vport %u", vport_param.devarg_id);
+				break;
+			}
+			snprintf(name, sizeof(name), "cpfl_%s_vport_%d",
+				 pci_dev->device.name,
+				 devargs.req_vports[i]);
+			retval = rte_eth_dev_create(&pci_dev->device, name,
+						    sizeof(struct idpf_vport),
+						    NULL, NULL, cpfl_dev_vport_init,
+						    &vport_param);
+			if (retval != 0)
+				PMD_DRV_LOG(ERR, "Failed to create vport %d",
+					    vport_param.devarg_id);
+		}
+	}
+
+	return 0;
+
+err:
+	if (first_probe) {
+		rte_spinlock_lock(&cpfl_adapter_lock);
+		TAILQ_REMOVE(&cpfl_adapter_list, adapter, next);
+		rte_spinlock_unlock(&cpfl_adapter_lock);
+		cpfl_adapter_ext_deinit(adapter);
+		rte_free(adapter);
+	}
+	return retval;
+}
+
+static int
+cpfl_pci_remove(struct rte_pci_device *pci_dev)
+{
+	struct cpfl_adapter_ext *adapter = cpfl_find_adapter_ext(pci_dev);
+	uint16_t port_id;
+
+	/* Ethdevs created on this device are found via RTE_ETH_FOREACH_DEV_OF */
+	RTE_ETH_FOREACH_DEV_OF(port_id, &pci_dev->device) {
+		rte_eth_dev_close(port_id);
+	}
+
+	rte_spinlock_lock(&cpfl_adapter_lock);
+	TAILQ_REMOVE(&cpfl_adapter_list, adapter, next);
+	rte_spinlock_unlock(&cpfl_adapter_lock);
+	cpfl_adapter_ext_deinit(adapter);
+	rte_free(adapter);
+
+	return 0;
+}
+
+static struct rte_pci_driver rte_cpfl_pmd = {
+	.id_table	= pci_id_cpfl_map,
+	.drv_flags	= RTE_PCI_DRV_NEED_MAPPING,
+	.probe		= cpfl_pci_probe,
+	.remove		= cpfl_pci_remove,
+};
+
+/**
+ * Driver initialization routine.
+ * Invoked once at EAL init time.
+ * Register itself as the [Poll Mode] Driver of PCI devices.
+ */
+RTE_PMD_REGISTER_PCI(net_cpfl, rte_cpfl_pmd);
+RTE_PMD_REGISTER_PCI_TABLE(net_cpfl, pci_id_cpfl_map);
+RTE_PMD_REGISTER_KMOD_DEP(net_cpfl, "* igb_uio | vfio-pci");
+RTE_PMD_REGISTER_PARAM_STRING(net_cpfl,
+			      CPFL_TX_SINGLE_Q "=<0|1> "
+			      CPFL_RX_SINGLE_Q "=<0|1> "
+			      CPFL_VPORT "=[vport_set0,[vport_set1],...]");
+
+RTE_LOG_REGISTER_SUFFIX(cpfl_logtype_init, init, NOTICE);
+RTE_LOG_REGISTER_SUFFIX(cpfl_logtype_driver, driver, NOTICE);
diff --git a/drivers/net/cpfl/cpfl_ethdev.h b/drivers/net/cpfl/cpfl_ethdev.h
new file mode 100644
index 0000000000..83459b9c91
--- /dev/null
+++ b/drivers/net/cpfl/cpfl_ethdev.h
@@ -0,0 +1,78 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2022 Intel Corporation
+ */
+
+#ifndef _CPFL_ETHDEV_H_
+#define _CPFL_ETHDEV_H_
+
+#include <stdint.h>
+#include <rte_malloc.h>
+#include <rte_spinlock.h>
+#include <rte_ethdev.h>
+#include <rte_kvargs.h>
+#include <ethdev_driver.h>
+#include <ethdev_pci.h>
+
+#include "cpfl_logs.h"
+
+#include <idpf_common_device.h>
+#include <idpf_common_virtchnl.h>
+#include <base/idpf_prototype.h>
+#include <base/virtchnl2.h>
+
+#define CPFL_MAX_VPORT_NUM	8
+
+#define CPFL_INVALID_VPORT_IDX	0xffff
+
+#define CPFL_MIN_BUF_SIZE	1024
+#define CPFL_MAX_FRAME_SIZE	9728
+#define CPFL_DEFAULT_MTU	RTE_ETHER_MTU
+
+#define CPFL_NUM_MACADDR_MAX	64
+
+#define CPFL_VLAN_TAG_SIZE	4
+#define CPFL_ETH_OVERHEAD \
+	(RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN + CPFL_VLAN_TAG_SIZE * 2)
+
+#define CPFL_ADAPTER_NAME_LEN	(PCI_PRI_STR_SIZE + 1)
+
+#define CPFL_ALARM_INTERVAL	50000 /* us */
+
+/* Device IDs */
+#define IDPF_DEV_ID_CPF			0x1453
+
+struct cpfl_vport_param {
+	struct cpfl_adapter_ext *adapter;
+	uint16_t devarg_id; /* arg id from user */
+	uint16_t idx;       /* index in adapter->vports[] */
+};
+
+/* Struct used when parsing driver specific devargs */
+struct cpfl_devargs {
+	uint16_t req_vports[CPFL_MAX_VPORT_NUM];
+	uint16_t req_vport_nb;
+};
+
+struct cpfl_adapter_ext {
+	TAILQ_ENTRY(cpfl_adapter_ext) next;
+	struct idpf_adapter base;
+
+	char name[CPFL_ADAPTER_NAME_LEN];
+
+	struct idpf_vport **vports;
+	uint16_t max_vport_nb;
+
+	uint16_t cur_vports; /* bit mask of created vports */
+	uint16_t cur_vport_nb;
+
+	uint16_t used_vecs_num;
+};
+
+TAILQ_HEAD(cpfl_adapter_list, cpfl_adapter_ext);
+
+#define CPFL_DEV_TO_PCI(eth_dev)		\
+	RTE_DEV_TO_PCI((eth_dev)->device)
+#define CPFL_ADAPTER_TO_EXT(p)					\
+	container_of((p), struct cpfl_adapter_ext, base)
+
+#endif /* _CPFL_ETHDEV_H_ */
diff --git a/drivers/net/cpfl/cpfl_logs.h b/drivers/net/cpfl/cpfl_logs.h
new file mode 100644
index 0000000000..451bdfbd1d
--- /dev/null
+++ b/drivers/net/cpfl/cpfl_logs.h
@@ -0,0 +1,32 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2022 Intel Corporation
+ */
+
+#ifndef _CPFL_LOGS_H_
+#define _CPFL_LOGS_H_
+
+#include <rte_log.h>
+
+extern int cpfl_logtype_init;
+extern int cpfl_logtype_driver;
+
+#define PMD_INIT_LOG(level, ...) \
+	rte_log(RTE_LOG_ ## level, \
+		cpfl_logtype_init, \
+		RTE_FMT("%s(): " \
+			RTE_FMT_HEAD(__VA_ARGS__,) "\n", \
+			__func__, \
+			RTE_FMT_TAIL(__VA_ARGS__,)))
+
+#define PMD_DRV_LOG_RAW(level, ...) \
+	rte_log(RTE_LOG_ ## level, \
+		cpfl_logtype_driver, \
+		RTE_FMT("%s(): " \
+			RTE_FMT_HEAD(__VA_ARGS__,) "\n", \
+			__func__, \
+			RTE_FMT_TAIL(__VA_ARGS__,)))
+
+#define PMD_DRV_LOG(level, fmt, args...) \
+	PMD_DRV_LOG_RAW(level, fmt, ## args)
+
+#endif /* _CPFL_LOGS_H_ */
diff --git a/drivers/net/cpfl/cpfl_rxtx.c b/drivers/net/cpfl/cpfl_rxtx.c
new file mode 100644
index 0000000000..ea4a2002bf
--- /dev/null
+++ b/drivers/net/cpfl/cpfl_rxtx.c
@@ -0,0 +1,244 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2022 Intel Corporation
+ */
+
+#include <ethdev_driver.h>
+#include <rte_net.h>
+#include <rte_vect.h>
+
+#include "cpfl_ethdev.h"
+#include "cpfl_rxtx.h"
+
+static uint64_t
+cpfl_tx_offload_convert(uint64_t offload)
+{
+	uint64_t ol = 0;
+
+	if ((offload & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM) != 0)
+		ol |= IDPF_TX_OFFLOAD_IPV4_CKSUM;
+	if ((offload & RTE_ETH_TX_OFFLOAD_UDP_CKSUM) != 0)
+		ol |= IDPF_TX_OFFLOAD_UDP_CKSUM;
+	if ((offload & RTE_ETH_TX_OFFLOAD_TCP_CKSUM) != 0)
+		ol |= IDPF_TX_OFFLOAD_TCP_CKSUM;
+	if ((offload & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM) != 0)
+		ol |= IDPF_TX_OFFLOAD_SCTP_CKSUM;
+	if ((offload & RTE_ETH_TX_OFFLOAD_MULTI_SEGS) != 0)
+		ol |= IDPF_TX_OFFLOAD_MULTI_SEGS;
+	if ((offload & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) != 0)
+		ol |= IDPF_TX_OFFLOAD_MBUF_FAST_FREE;
+
+	return ol;
+}
+
+static const struct rte_memzone *
+cpfl_dma_zone_reserve(struct rte_eth_dev *dev, uint16_t queue_idx,
+		      uint16_t len, uint16_t queue_type,
+		      unsigned int socket_id, bool splitq)
+{
+	char ring_name[RTE_MEMZONE_NAMESIZE];
+	const struct rte_memzone *mz;
+	uint32_t ring_size;
+
+	memset(ring_name, 0, RTE_MEMZONE_NAMESIZE);
+	switch (queue_type) {
+	case VIRTCHNL2_QUEUE_TYPE_TX:
+		if (splitq)
+			ring_size = RTE_ALIGN(len * sizeof(struct idpf_flex_tx_sched_desc),
+					      CPFL_DMA_MEM_ALIGN);
+		else
+			ring_size = RTE_ALIGN(len * sizeof(struct idpf_flex_tx_desc),
+					      CPFL_DMA_MEM_ALIGN);
+		rte_memcpy(ring_name, "cpfl Tx ring", sizeof("cpfl Tx ring"));
+		break;
+	case VIRTCHNL2_QUEUE_TYPE_RX:
+		if (splitq)
+			ring_size = RTE_ALIGN(len * sizeof(struct virtchnl2_rx_flex_desc_adv_nic_3),
+					      CPFL_DMA_MEM_ALIGN);
+		else
+			ring_size = RTE_ALIGN(len * sizeof(struct virtchnl2_singleq_rx_buf_desc),
+					      CPFL_DMA_MEM_ALIGN);
+		rte_memcpy(ring_name, "cpfl Rx ring", sizeof("cpfl Rx ring"));
+		break;
+	case VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION:
+		ring_size = RTE_ALIGN(len * sizeof(struct idpf_splitq_tx_compl_desc),
+				      CPFL_DMA_MEM_ALIGN);
+		rte_memcpy(ring_name, "cpfl Tx compl ring", sizeof("cpfl Tx compl ring"));
+		break;
+	case VIRTCHNL2_QUEUE_TYPE_RX_BUFFER:
+		ring_size = RTE_ALIGN(len * sizeof(struct virtchnl2_splitq_rx_buf_desc),
+				      CPFL_DMA_MEM_ALIGN);
+		rte_memcpy(ring_name, "cpfl Rx buf ring", sizeof("cpfl Rx buf ring"));
+		break;
+	default:
+		PMD_INIT_LOG(ERR, "Invalid queue type");
+		return NULL;
+	}
+
+	mz = rte_eth_dma_zone_reserve(dev, ring_name, queue_idx,
+				      ring_size, CPFL_RING_BASE_ALIGN,
+				      socket_id);
+	if (mz == NULL) {
+		PMD_INIT_LOG(ERR, "Failed to reserve DMA memory for ring");
+		return NULL;
+	}
+
+	/* Zero all the descriptors in the ring. */
+	memset(mz->addr, 0, ring_size);
+
+	return mz;
+}
+
+static void
+cpfl_dma_zone_release(const struct rte_memzone *mz)
+{
+	rte_memzone_free(mz);
+}
+
+static int
+cpfl_tx_complq_setup(struct rte_eth_dev *dev, struct idpf_tx_queue *txq,
+		     uint16_t queue_idx, uint16_t nb_desc,
+		     unsigned int socket_id)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	const struct rte_memzone *mz;
+	struct idpf_tx_queue *cq;
+	int ret;
+
+	cq = rte_zmalloc_socket("cpfl splitq cq",
+				sizeof(struct idpf_tx_queue),
+				RTE_CACHE_LINE_SIZE,
+				socket_id);
+	if (cq == NULL) {
+		PMD_INIT_LOG(ERR, "Failed to allocate memory for Tx compl queue");
+		ret = -ENOMEM;
+		goto err_cq_alloc;
+	}
+
+	cq->nb_tx_desc = nb_desc;
+	cq->queue_id = vport->chunks_info.tx_compl_start_qid + queue_idx;
+	cq->port_id = dev->data->port_id;
+	cq->txqs = dev->data->tx_queues;
+	cq->tx_start_qid = vport->chunks_info.tx_start_qid;
+
+	mz = cpfl_dma_zone_reserve(dev, queue_idx, nb_desc,
+				   VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION,
+				   socket_id, true);
+	if (mz == NULL) {
+		ret = -ENOMEM;
+		goto err_mz_reserve;
+	}
+	cq->tx_ring_phys_addr = mz->iova;
+	cq->compl_ring = mz->addr;
+	cq->mz = mz;
+	reset_split_tx_complq(cq);
+
+	txq->complq = cq;
+
+	return 0;
+
+err_mz_reserve:
+	rte_free(cq);
+err_cq_alloc:
+	return ret;
+}
+
+int
+cpfl_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
+		    uint16_t nb_desc, unsigned int socket_id,
+		    const struct rte_eth_txconf *tx_conf)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_adapter *adapter = vport->adapter;
+	uint16_t tx_rs_thresh, tx_free_thresh;
+	struct idpf_hw *hw = &adapter->hw;
+	const struct rte_memzone *mz;
+	struct idpf_tx_queue *txq;
+	uint64_t offloads;
+	uint16_t len;
+	bool is_splitq;
+	int ret;
+
+	offloads = tx_conf->offloads | dev->data->dev_conf.txmode.offloads;
+
+	tx_rs_thresh = (uint16_t)((tx_conf->tx_rs_thresh > 0) ?
+		tx_conf->tx_rs_thresh : CPFL_DEFAULT_TX_RS_THRESH);
+	tx_free_thresh = (uint16_t)((tx_conf->tx_free_thresh > 0) ?
+		tx_conf->tx_free_thresh : CPFL_DEFAULT_TX_FREE_THRESH);
+	if (check_tx_thresh(nb_desc, tx_rs_thresh, tx_free_thresh) != 0)
+		return -EINVAL;
+
+	/* Allocate the TX queue data structure. */
+	txq = rte_zmalloc_socket("cpfl txq",
+				 sizeof(struct idpf_tx_queue),
+				 RTE_CACHE_LINE_SIZE,
+				 socket_id);
+	if (txq == NULL) {
+		PMD_INIT_LOG(ERR, "Failed to allocate memory for tx queue structure");
+		ret = -ENOMEM;
+		goto err_txq_alloc;
+	}
+
+	is_splitq = !!(vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT);
+
+	txq->nb_tx_desc = nb_desc;
+	txq->rs_thresh = tx_rs_thresh;
+	txq->free_thresh = tx_free_thresh;
+	txq->queue_id = vport->chunks_info.tx_start_qid + queue_idx;
+	txq->port_id = dev->data->port_id;
+	txq->offloads = cpfl_tx_offload_convert(offloads);
+	txq->tx_deferred_start = tx_conf->tx_deferred_start;
+
+	if (is_splitq)
+		len = 2 * nb_desc;
+	else
+		len = nb_desc;
+	txq->sw_nb_desc = len;
+
+	/* Allocate TX hardware ring descriptors. */
+	mz = cpfl_dma_zone_reserve(dev, queue_idx, nb_desc, VIRTCHNL2_QUEUE_TYPE_TX,
+				   socket_id, is_splitq);
+	if (mz == NULL) {
+		ret = -ENOMEM;
+		goto err_mz_reserve;
+	}
+	txq->tx_ring_phys_addr = mz->iova;
+	txq->mz = mz;
+
+	txq->sw_ring = rte_zmalloc_socket("cpfl tx sw ring",
+					  sizeof(struct idpf_tx_entry) * len,
+					  RTE_CACHE_LINE_SIZE, socket_id);
+	if (txq->sw_ring == NULL) {
+		PMD_INIT_LOG(ERR, "Failed to allocate memory for SW TX ring");
+		ret = -ENOMEM;
+		goto err_sw_ring_alloc;
+	}
+
+	if (!is_splitq) {
+		txq->tx_ring = mz->addr;
+		reset_single_tx_queue(txq);
+	} else {
+		txq->desc_ring = mz->addr;
+		reset_split_tx_descq(txq);
+
+		/* Setup tx completion queue if split model */
+		ret = cpfl_tx_complq_setup(dev, txq, queue_idx,
+					   2 * nb_desc, socket_id);
+		if (ret != 0)
+			goto err_complq_setup;
+	}
+
+	txq->qtx_tail = hw->hw_addr + (vport->chunks_info.tx_qtail_start +
+			queue_idx * vport->chunks_info.tx_qtail_spacing);
+	txq->q_set = true;
+	dev->data->tx_queues[queue_idx] = txq;
+
+	return 0;
+
+err_complq_setup:
+err_sw_ring_alloc:
+	cpfl_dma_zone_release(mz);
+err_mz_reserve:
+	rte_free(txq);
+err_txq_alloc:
+	return ret;
+}
diff --git a/drivers/net/cpfl/cpfl_rxtx.h b/drivers/net/cpfl/cpfl_rxtx.h
new file mode 100644
index 0000000000..ec42478393
--- /dev/null
+++ b/drivers/net/cpfl/cpfl_rxtx.h
@@ -0,0 +1,25 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2022 Intel Corporation
+ */
+
+#ifndef _CPFL_RXTX_H_
+#define _CPFL_RXTX_H_
+
+#include <idpf_common_rxtx.h>
+#include "cpfl_ethdev.h"
+
+/* The queue length (QLEN) must be a whole multiple of 32 descriptors. */
+#define CPFL_ALIGN_RING_DESC	32
+#define CPFL_MIN_RING_DESC	32
+#define CPFL_MAX_RING_DESC	4096
+#define CPFL_DMA_MEM_ALIGN	4096
+/* Base address of the HW descriptor ring should be 128B aligned. */
+#define CPFL_RING_BASE_ALIGN	128
+
+#define CPFL_DEFAULT_TX_RS_THRESH	32
+#define CPFL_DEFAULT_TX_FREE_THRESH	32
+
+int cpfl_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
+			uint16_t nb_desc, unsigned int socket_id,
+			const struct rte_eth_txconf *tx_conf);
+#endif /* _CPFL_RXTX_H_ */
diff --git a/drivers/net/cpfl/meson.build b/drivers/net/cpfl/meson.build
new file mode 100644
index 0000000000..106cc97e60
--- /dev/null
+++ b/drivers/net/cpfl/meson.build
@@ -0,0 +1,14 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2022 Intel Corporation
+
+if is_windows
+    build = false
+    reason = 'not supported on Windows'
+    subdir_done()
+endif
+
+deps += ['common_idpf']
+
+sources = files(
+        'cpfl_ethdev.c',
+)
\ No newline at end of file
diff --git a/drivers/net/meson.build b/drivers/net/meson.build
index 6470bf3636..a8ca338875 100644
--- a/drivers/net/meson.build
+++ b/drivers/net/meson.build
@@ -13,6 +13,7 @@ drivers = [
         'bnxt',
         'bonding',
         'cnxk',
+        'cpfl',
         'cxgbe',
         'dpaa',
         'dpaa2',
-- 
2.25.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v4 02/21] net/cpfl: add Tx queue setup
  2023-01-18  7:57   ` [PATCH v4 00/21] add support for cpfl PMD in DPDK Mingxia Liu
  2023-01-18  7:57     ` [PATCH v4 01/21] net/cpfl: support device initialization Mingxia Liu
@ 2023-01-18  7:57     ` Mingxia Liu
  2023-01-18  7:57     ` [PATCH v4 03/21] net/cpfl: add Rx " Mingxia Liu
                       ` (19 subsequent siblings)
  21 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-01-18  7:57 UTC (permalink / raw)
  To: dev, qi.z.zhang, jingjing.wu, beilei.xing; +Cc: Mingxia Liu

Add support for tx_queue_setup ops.

In the single queue model, the same descriptor queue is used by SW to
post buffer descriptors to HW and by HW to post completed descriptors
to SW.

In the split queue model, "RX buffer queues" are used to pass
descriptor buffers from SW to HW while Rx queues are used only to
pass the descriptor completions, that is, descriptors that point
to completed buffers, from HW to SW. This is in contrast to the single
queue model, in which Rx queues serve both purposes.

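A minimal sketch of how an application reaches this op through the
generic ethdev API (port_id, the descriptor count and the helper name
are illustrative assumptions, not part of the patch):

#include <rte_ethdev.h>
#include <rte_lcore.h>

static int
setup_one_txq(uint16_t port_id)
{
	struct rte_eth_txconf txconf = {
		.tx_rs_thresh = 32,	/* CPFL_DEFAULT_TX_RS_THRESH */
		.tx_free_thresh = 32,	/* CPFL_DEFAULT_TX_FREE_THRESH */
	};

	/* 512 descriptors: a multiple of CPFL_ALIGN_RING_DESC (32) */
	return rte_eth_tx_queue_setup(port_id, 0, 512, rte_socket_id(),
				      &txconf);
}

The thresholds shown match the defaults this patch advertises via
default_txconf, so passing a zeroed txconf behaves the same.
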
Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/net/cpfl/cpfl_ethdev.c | 13 +++++++++++++
 drivers/net/cpfl/cpfl_rxtx.c   |  8 ++++----
 drivers/net/cpfl/meson.build   |  1 +
 3 files changed, 18 insertions(+), 4 deletions(-)

diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index 2ac53bc5b0..4a569c2f7e 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -12,6 +12,7 @@
 #include <rte_alarm.h>
 
 #include "cpfl_ethdev.h"
+#include "cpfl_rxtx.h"
 
 #define CPFL_TX_SINGLE_Q	"tx_single"
 #define CPFL_RX_SINGLE_Q	"rx_single"
@@ -96,6 +97,17 @@ cpfl_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->max_mtu = vport->max_mtu;
 	dev_info->min_mtu = RTE_ETHER_MIN_MTU;
 
+	dev_info->default_txconf = (struct rte_eth_txconf) {
+		.tx_free_thresh = CPFL_DEFAULT_TX_FREE_THRESH,
+		.tx_rs_thresh = CPFL_DEFAULT_TX_RS_THRESH,
+	};
+
+	dev_info->tx_desc_lim = (struct rte_eth_desc_lim) {
+		.nb_max = CPFL_MAX_RING_DESC,
+		.nb_min = CPFL_MIN_RING_DESC,
+		.nb_align = CPFL_ALIGN_RING_DESC,
+	};
+
 	return 0;
 }
 
@@ -513,6 +525,7 @@ cpfl_adapter_ext_init(struct rte_pci_device *pci_dev, struct cpfl_adapter_ext *a
 static const struct eth_dev_ops cpfl_eth_dev_ops = {
 	.dev_configure			= cpfl_dev_configure,
 	.dev_close			= cpfl_dev_close,
+	.tx_queue_setup			= cpfl_tx_queue_setup,
 	.dev_infos_get			= cpfl_dev_info_get,
 	.link_update			= cpfl_dev_link_update,
 	.dev_supported_ptypes_get	= cpfl_dev_supported_ptypes_get,
diff --git a/drivers/net/cpfl/cpfl_rxtx.c b/drivers/net/cpfl/cpfl_rxtx.c
index ea4a2002bf..a9742379db 100644
--- a/drivers/net/cpfl/cpfl_rxtx.c
+++ b/drivers/net/cpfl/cpfl_rxtx.c
@@ -130,7 +130,7 @@ cpfl_tx_complq_setup(struct rte_eth_dev *dev, struct idpf_tx_queue *txq,
 	cq->tx_ring_phys_addr = mz->iova;
 	cq->compl_ring = mz->addr;
 	cq->mz = mz;
-	reset_split_tx_complq(cq);
+	idpf_reset_split_tx_complq(cq);
 
 	txq->complq = cq;
 
@@ -164,7 +164,7 @@ cpfl_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 		tx_conf->tx_rs_thresh : CPFL_DEFAULT_TX_RS_THRESH);
 	tx_free_thresh = (uint16_t)((tx_conf->tx_free_thresh > 0) ?
 		tx_conf->tx_free_thresh : CPFL_DEFAULT_TX_FREE_THRESH);
-	if (check_tx_thresh(nb_desc, tx_rs_thresh, tx_free_thresh) != 0)
+	if (idpf_check_tx_thresh(nb_desc, tx_rs_thresh, tx_free_thresh) != 0)
 		return -EINVAL;
 
 	/* Allocate the TX queue data structure. */
@@ -215,10 +215,10 @@ cpfl_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 
 	if (!is_splitq) {
 		txq->tx_ring = mz->addr;
-		reset_single_tx_queue(txq);
+		idpf_reset_single_tx_queue(txq);
 	} else {
 		txq->desc_ring = mz->addr;
-		reset_split_tx_descq(txq);
+		idpf_reset_split_tx_descq(txq);
 
 		/* Setup tx completion queue if split model */
 		ret = cpfl_tx_complq_setup(dev, txq, queue_idx,
diff --git a/drivers/net/cpfl/meson.build b/drivers/net/cpfl/meson.build
index 106cc97e60..3ccee15703 100644
--- a/drivers/net/cpfl/meson.build
+++ b/drivers/net/cpfl/meson.build
@@ -11,4 +11,5 @@ deps += ['common_idpf']
 
 sources = files(
         'cpfl_ethdev.c',
+        'cpfl_rxtx.c',
 )
\ No newline at end of file
-- 
2.25.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v4 03/21] net/cpfl: add Rx queue setup
  2023-01-18  7:57   ` [PATCH v4 00/21] add support for cpfl PMD in DPDK Mingxia Liu
  2023-01-18  7:57     ` [PATCH v4 01/21] net/cpfl: support device initialization Mingxia Liu
  2023-01-18  7:57     ` [PATCH v4 02/21] net/cpfl: add Tx queue setup Mingxia Liu
@ 2023-01-18  7:57     ` Mingxia Liu
  2023-01-18  7:57     ` [PATCH v4 04/21] net/cpfl: support device start and stop Mingxia Liu
                       ` (18 subsequent siblings)
  21 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-01-18  7:57 UTC (permalink / raw)
  To: dev, qi.z.zhang, jingjing.wu, beilei.xing; +Cc: Mingxia Liu

Add support for rx_queue_setup ops.

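As a minimal sketch, the application-facing call this op serves
(port_id and mp are assumptions; mp would come from
rte_pktmbuf_pool_create()). In the split queue model the driver
allocates the two Rx buffer queues internally from this same mempool:

#include <rte_ethdev.h>
#include <rte_lcore.h>
#include <rte_mempool.h>

static int
setup_one_rxq(uint16_t port_id, struct rte_mempool *mp)
{
	struct rte_eth_rxconf rxconf = {
		.rx_free_thresh = 32,	/* CPFL_DEFAULT_RX_FREE_THRESH */
	};

	return rte_eth_rx_queue_setup(port_id, 0, 512, rte_socket_id(),
				      &rxconf, mp);
}
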
Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/net/cpfl/cpfl_ethdev.c |  11 ++
 drivers/net/cpfl/cpfl_rxtx.c   | 232 +++++++++++++++++++++++++++++++++
 drivers/net/cpfl/cpfl_rxtx.h   |   6 +
 3 files changed, 249 insertions(+)

diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index 4a569c2f7e..0a4bf59220 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -102,12 +102,22 @@ cpfl_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 		.tx_rs_thresh = CPFL_DEFAULT_TX_RS_THRESH,
 	};
 
+	dev_info->default_rxconf = (struct rte_eth_rxconf) {
+		.rx_free_thresh = CPFL_DEFAULT_RX_FREE_THRESH,
+	};
+
 	dev_info->tx_desc_lim = (struct rte_eth_desc_lim) {
 		.nb_max = CPFL_MAX_RING_DESC,
 		.nb_min = CPFL_MIN_RING_DESC,
 		.nb_align = CPFL_ALIGN_RING_DESC,
 	};
 
+	dev_info->rx_desc_lim = (struct rte_eth_desc_lim) {
+		.nb_max = CPFL_MAX_RING_DESC,
+		.nb_min = CPFL_MIN_RING_DESC,
+		.nb_align = CPFL_ALIGN_RING_DESC,
+	};
+
 	return 0;
 }
 
@@ -525,6 +535,7 @@ cpfl_adapter_ext_init(struct rte_pci_device *pci_dev, struct cpfl_adapter_ext *a
 static const struct eth_dev_ops cpfl_eth_dev_ops = {
 	.dev_configure			= cpfl_dev_configure,
 	.dev_close			= cpfl_dev_close,
+	.rx_queue_setup			= cpfl_rx_queue_setup,
 	.tx_queue_setup			= cpfl_tx_queue_setup,
 	.dev_infos_get			= cpfl_dev_info_get,
 	.link_update			= cpfl_dev_link_update,
diff --git a/drivers/net/cpfl/cpfl_rxtx.c b/drivers/net/cpfl/cpfl_rxtx.c
index a9742379db..71ebbf440b 100644
--- a/drivers/net/cpfl/cpfl_rxtx.c
+++ b/drivers/net/cpfl/cpfl_rxtx.c
@@ -9,6 +9,25 @@
 #include "cpfl_ethdev.h"
 #include "cpfl_rxtx.h"
 
+static uint64_t
+cpfl_rx_offload_convert(uint64_t offload)
+{
+	uint64_t ol = 0;
+
+	if ((offload & RTE_ETH_RX_OFFLOAD_IPV4_CKSUM) != 0)
+		ol |= IDPF_RX_OFFLOAD_IPV4_CKSUM;
+	if ((offload & RTE_ETH_RX_OFFLOAD_UDP_CKSUM) != 0)
+		ol |= IDPF_RX_OFFLOAD_UDP_CKSUM;
+	if ((offload & RTE_ETH_RX_OFFLOAD_TCP_CKSUM) != 0)
+		ol |= IDPF_RX_OFFLOAD_TCP_CKSUM;
+	if ((offload & RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM) != 0)
+		ol |= IDPF_RX_OFFLOAD_OUTER_IPV4_CKSUM;
+	if ((offload & RTE_ETH_RX_OFFLOAD_TIMESTAMP) != 0)
+		ol |= IDPF_RX_OFFLOAD_TIMESTAMP;
+
+	return ol;
+}
+
 static uint64_t
 cpfl_tx_offload_convert(uint64_t offload)
 {
@@ -94,6 +113,219 @@ cpfl_dma_zone_release(const struct rte_memzone *mz)
 	rte_memzone_free(mz);
 }
 
+static int
+cpfl_rx_split_bufq_setup(struct rte_eth_dev *dev, struct idpf_rx_queue *rxq,
+			 uint16_t queue_idx, uint16_t rx_free_thresh,
+			 uint16_t nb_desc, unsigned int socket_id,
+			 struct rte_mempool *mp, uint8_t bufq_id)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_adapter *adapter = vport->adapter;
+	struct idpf_hw *hw = &adapter->hw;
+	const struct rte_memzone *mz;
+	struct idpf_rx_queue *bufq;
+	uint16_t len;
+	int ret;
+
+	bufq = rte_zmalloc_socket("cpfl bufq",
+				   sizeof(struct idpf_rx_queue),
+				   RTE_CACHE_LINE_SIZE,
+				   socket_id);
+	if (bufq == NULL) {
+		PMD_INIT_LOG(ERR, "Failed to allocate memory for rx buffer queue.");
+		ret = -ENOMEM;
+		goto err_bufq1_alloc;
+	}
+
+	bufq->mp = mp;
+	bufq->nb_rx_desc = nb_desc;
+	bufq->rx_free_thresh = rx_free_thresh;
+	bufq->queue_id = vport->chunks_info.rx_buf_start_qid + queue_idx;
+	bufq->port_id = dev->data->port_id;
+	bufq->rx_hdr_len = 0;
+	bufq->adapter = adapter;
+
+	len = rte_pktmbuf_data_room_size(bufq->mp) - RTE_PKTMBUF_HEADROOM;
+	bufq->rx_buf_len = len;
+
+	/* Allocate a little more to support bulk allocation. */
+	len = nb_desc + IDPF_RX_MAX_BURST;
+
+	mz = cpfl_dma_zone_reserve(dev, queue_idx, len,
+				   VIRTCHNL2_QUEUE_TYPE_RX_BUFFER,
+				   socket_id, true);
+	if (mz == NULL) {
+		ret = -ENOMEM;
+		goto err_mz_reserve;
+	}
+
+	bufq->rx_ring_phys_addr = mz->iova;
+	bufq->rx_ring = mz->addr;
+	bufq->mz = mz;
+
+	bufq->sw_ring =
+		rte_zmalloc_socket("cpfl rx bufq sw ring",
+				   sizeof(struct rte_mbuf *) * len,
+				   RTE_CACHE_LINE_SIZE,
+				   socket_id);
+	if (bufq->sw_ring == NULL) {
+		PMD_INIT_LOG(ERR, "Failed to allocate memory for SW ring");
+		ret = -ENOMEM;
+		goto err_sw_ring_alloc;
+	}
+
+	idpf_reset_split_rx_bufq(bufq);
+	bufq->qrx_tail = hw->hw_addr + (vport->chunks_info.rx_buf_qtail_start +
+			 queue_idx * vport->chunks_info.rx_buf_qtail_spacing);
+	bufq->q_set = true;
+
+	if (bufq_id == 1) {
+		rxq->bufq1 = bufq;
+	} else if (bufq_id == 2) {
+		rxq->bufq2 = bufq;
+	} else {
+		PMD_INIT_LOG(ERR, "Invalid buffer queue index.");
+		ret = -EINVAL;
+		goto err_bufq_id;
+	}
+
+	return 0;
+
+err_bufq_id:
+	rte_free(bufq->sw_ring);
+err_sw_ring_alloc:
+	cpfl_dma_zone_release(mz);
+err_mz_reserve:
+	rte_free(bufq);
+err_bufq1_alloc:
+	return ret;
+}
+
+static void
+cpfl_rx_split_bufq_release(struct idpf_rx_queue *bufq)
+{
+	rte_free(bufq->sw_ring);
+	cpfl_dma_zone_release(bufq->mz);
+	rte_free(bufq);
+}
+
+int
+cpfl_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
+		    uint16_t nb_desc, unsigned int socket_id,
+		    const struct rte_eth_rxconf *rx_conf,
+		    struct rte_mempool *mp)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_adapter *adapter = vport->adapter;
+	struct idpf_hw *hw = &adapter->hw;
+	const struct rte_memzone *mz;
+	struct idpf_rx_queue *rxq;
+	uint16_t rx_free_thresh;
+	uint64_t offloads;
+	bool is_splitq;
+	uint16_t len;
+	int ret;
+
+	offloads = rx_conf->offloads | dev->data->dev_conf.rxmode.offloads;
+
+	/* Check free threshold */
+	rx_free_thresh = (rx_conf->rx_free_thresh == 0) ?
+		CPFL_DEFAULT_RX_FREE_THRESH :
+		rx_conf->rx_free_thresh;
+	if (idpf_check_rx_thresh(nb_desc, rx_free_thresh) != 0)
+		return -EINVAL;
+
+	/* Setup Rx queue */
+	rxq = rte_zmalloc_socket("cpfl rxq",
+				 sizeof(struct idpf_rx_queue),
+				 RTE_CACHE_LINE_SIZE,
+				 socket_id);
+	if (rxq == NULL) {
+		PMD_INIT_LOG(ERR, "Failed to allocate memory for rx queue data structure");
+		ret = -ENOMEM;
+		goto err_rxq_alloc;
+	}
+
+	is_splitq = !!(vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT);
+
+	rxq->mp = mp;
+	rxq->nb_rx_desc = nb_desc;
+	rxq->rx_free_thresh = rx_free_thresh;
+	rxq->queue_id = vport->chunks_info.rx_start_qid + queue_idx;
+	rxq->port_id = dev->data->port_id;
+	rxq->rx_deferred_start = rx_conf->rx_deferred_start;
+	rxq->rx_hdr_len = 0;
+	rxq->adapter = adapter;
+	rxq->offloads = cpfl_rx_offload_convert(offloads);
+
+	len = rte_pktmbuf_data_room_size(rxq->mp) - RTE_PKTMBUF_HEADROOM;
+	rxq->rx_buf_len = len;
+
+	/* Allocate a little more to support bulk allocation. */
+	len = nb_desc + IDPF_RX_MAX_BURST;
+	mz = cpfl_dma_zone_reserve(dev, queue_idx, len, VIRTCHNL2_QUEUE_TYPE_RX,
+				   socket_id, is_splitq);
+	if (mz == NULL) {
+		ret = -ENOMEM;
+		goto err_mz_reserve;
+	}
+	rxq->rx_ring_phys_addr = mz->iova;
+	rxq->rx_ring = mz->addr;
+	rxq->mz = mz;
+
+	if (!is_splitq) {
+		rxq->sw_ring = rte_zmalloc_socket("cpfl rxq sw ring",
+						  sizeof(struct rte_mbuf *) * len,
+						  RTE_CACHE_LINE_SIZE,
+						  socket_id);
+		if (rxq->sw_ring == NULL) {
+			PMD_INIT_LOG(ERR, "Failed to allocate memory for SW ring");
+			ret = -ENOMEM;
+			goto err_sw_ring_alloc;
+		}
+
+		idpf_reset_single_rx_queue(rxq);
+		rxq->qrx_tail = hw->hw_addr + (vport->chunks_info.rx_qtail_start +
+				queue_idx * vport->chunks_info.rx_qtail_spacing);
+	} else {
+		idpf_reset_split_rx_descq(rxq);
+
+		/* Setup Rx buffer queues */
+		ret = cpfl_rx_split_bufq_setup(dev, rxq, 2 * queue_idx,
+					       rx_free_thresh, nb_desc,
+					       socket_id, mp, 1);
+		if (ret != 0) {
+			PMD_INIT_LOG(ERR, "Failed to setup buffer queue 1");
+			ret = -EINVAL;
+			goto err_bufq1_setup;
+		}
+
+		ret = cpfl_rx_split_bufq_setup(dev, rxq, 2 * queue_idx + 1,
+					       rx_free_thresh, nb_desc,
+					       socket_id, mp, 2);
+		if (ret != 0) {
+			PMD_INIT_LOG(ERR, "Failed to setup buffer queue 2");
+			ret = -EINVAL;
+			goto err_bufq2_setup;
+		}
+	}
+
+	rxq->q_set = true;
+	dev->data->rx_queues[queue_idx] = rxq;
+
+	return 0;
+
+err_bufq2_setup:
+	cpfl_rx_split_bufq_release(rxq->bufq1);
+err_bufq1_setup:
+err_sw_ring_alloc:
+	cpfl_dma_zone_release(mz);
+err_mz_reserve:
+	rte_free(rxq);
+err_rxq_alloc:
+	return ret;
+}
+
 static int
 cpfl_tx_complq_setup(struct rte_eth_dev *dev, struct idpf_tx_queue *txq,
 		     uint16_t queue_idx, uint16_t nb_desc,
diff --git a/drivers/net/cpfl/cpfl_rxtx.h b/drivers/net/cpfl/cpfl_rxtx.h
index ec42478393..fd838d3f07 100644
--- a/drivers/net/cpfl/cpfl_rxtx.h
+++ b/drivers/net/cpfl/cpfl_rxtx.h
@@ -16,10 +16,16 @@
 /* Base address of the HW descriptor ring should be 128B aligned. */
 #define CPFL_RING_BASE_ALIGN	128
 
+#define CPFL_DEFAULT_RX_FREE_THRESH	32
+
 #define CPFL_DEFAULT_TX_RS_THRESH	32
 #define CPFL_DEFAULT_TX_FREE_THRESH	32
 
 int cpfl_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 			uint16_t nb_desc, unsigned int socket_id,
 			const struct rte_eth_txconf *tx_conf);
+int cpfl_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
+			uint16_t nb_desc, unsigned int socket_id,
+			const struct rte_eth_rxconf *rx_conf,
+			struct rte_mempool *mp);
 #endif /* _CPFL_RXTX_H_ */
-- 
2.25.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v4 04/21] net/cpfl: support device start and stop
  2023-01-18  7:57   ` [PATCH v4 00/21] add support for cpfl PMD in DPDK Mingxia Liu
                       ` (2 preceding siblings ...)
  2023-01-18  7:57     ` [PATCH v4 03/21] net/cpfl: add Rx " Mingxia Liu
@ 2023-01-18  7:57     ` Mingxia Liu
  2023-01-18  7:57     ` [PATCH v4 05/21] net/cpfl: support queue start Mingxia Liu
                       ` (17 subsequent siblings)
  21 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-01-18  7:57 UTC (permalink / raw)
  To: dev, qi.z.zhang, jingjing.wu, beilei.xing; +Cc: Mingxia Liu

Add dev ops dev_start, dev_stop and link_update.

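A minimal sketch of the application-side sequence these ops enable
(port_id and the helper name are assumptions, not part of the patch):

#include <stdio.h>
#include <rte_ethdev.h>

static int
start_query_stop(uint16_t port_id)
{
	struct rte_eth_link link;
	int ret = rte_eth_dev_start(port_id);

	if (ret != 0)
		return ret;

	/* Triggers the PMD's link_update op. */
	rte_eth_link_get_nowait(port_id, &link);
	printf("link %s, speed %u Mbps\n",
	       link.link_status == RTE_ETH_LINK_UP ? "up" : "down",
	       link.link_speed);

	return rte_eth_dev_stop(port_id);
}
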
Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/net/cpfl/cpfl_ethdev.c | 35 ++++++++++++++++++++++++++++++++++
 1 file changed, 35 insertions(+)

diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index 0a4bf59220..f98d10ffd4 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -184,12 +184,45 @@ cpfl_dev_configure(struct rte_eth_dev *dev)
 	return 0;
 }
 
+static int
+cpfl_dev_start(struct rte_eth_dev *dev)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	int ret;
+
+	ret = idpf_vc_ena_dis_vport(vport, true);
+	if (ret != 0) {
+		PMD_DRV_LOG(ERR, "Failed to enable vport");
+		return ret;
+	}
+
+	vport->stopped = 0;
+
+	return 0;
+}
+
+static int
+cpfl_dev_stop(struct rte_eth_dev *dev)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+
+	if (vport->stopped == 1)
+		return 0;
+
+	idpf_vc_ena_dis_vport(vport, false);
+
+	vport->stopped = 1;
+
+	return 0;
+}
+
 static int
 cpfl_dev_close(struct rte_eth_dev *dev)
 {
 	struct idpf_vport *vport = dev->data->dev_private;
 	struct cpfl_adapter_ext *adapter = CPFL_ADAPTER_TO_EXT(vport->adapter);
 
+	cpfl_dev_stop(dev);
 	idpf_vport_deinit(vport);
 
 	adapter->cur_vports &= ~RTE_BIT32(vport->devarg_id);
@@ -538,6 +571,8 @@ static const struct eth_dev_ops cpfl_eth_dev_ops = {
 	.rx_queue_setup			= cpfl_rx_queue_setup,
 	.tx_queue_setup			= cpfl_tx_queue_setup,
 	.dev_infos_get			= cpfl_dev_info_get,
+	.dev_start			= cpfl_dev_start,
+	.dev_stop			= cpfl_dev_stop,
 	.link_update			= cpfl_dev_link_update,
 	.dev_supported_ptypes_get	= cpfl_dev_supported_ptypes_get,
 };
-- 
2.25.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v4 05/21] net/cpfl: support queue start
  2023-01-18  7:57   ` [PATCH v4 00/21] add support for cpfl PMD in DPDK Mingxia Liu
                       ` (3 preceding siblings ...)
  2023-01-18  7:57     ` [PATCH v4 04/21] net/cpfl: support device start and stop Mingxia Liu
@ 2023-01-18  7:57     ` Mingxia Liu
  2023-01-18  7:57     ` [PATCH v4 06/21] net/cpfl: support queue stop Mingxia Liu
                       ` (16 subsequent siblings)
  21 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-01-18  7:57 UTC (permalink / raw)
  To: dev, qi.z.zhang, jingjing.wu, beilei.xing; +Cc: Mingxia Liu

Add support for these device ops:
 - rx_queue_start
 - tx_queue_start

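These ops also back per-queue deferred start. A minimal sketch,
assuming port_id and mp as in the earlier examples:

#include <rte_ethdev.h>
#include <rte_lcore.h>
#include <rte_mempool.h>

static int
deferred_rxq_start(uint16_t port_id, struct rte_mempool *mp)
{
	struct rte_eth_rxconf rxconf = { .rx_deferred_start = 1 };
	int ret;

	ret = rte_eth_rx_queue_setup(port_id, 0, 512, rte_socket_id(),
				     &rxconf, mp);
	if (ret != 0)
		return ret;

	ret = rte_eth_dev_start(port_id);	/* deferred queue stays stopped */
	if (ret != 0)
		return ret;

	return rte_eth_dev_rx_queue_start(port_id, 0);
}
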
Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/net/cpfl/cpfl_ethdev.c |  41 ++++++++++
 drivers/net/cpfl/cpfl_rxtx.c   | 138 +++++++++++++++++++++++++++++++++
 drivers/net/cpfl/cpfl_rxtx.h   |   4 +
 3 files changed, 183 insertions(+)

diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index f98d10ffd4..51792c648e 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -184,12 +184,51 @@ cpfl_dev_configure(struct rte_eth_dev *dev)
 	return 0;
 }
 
+static int
+cpfl_start_queues(struct rte_eth_dev *dev)
+{
+	struct idpf_rx_queue *rxq;
+	struct idpf_tx_queue *txq;
+	int err = 0;
+	int i;
+
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		txq = dev->data->tx_queues[i];
+		if (txq == NULL || txq->tx_deferred_start)
+			continue;
+		err = cpfl_tx_queue_start(dev, i);
+		if (err != 0) {
+			PMD_DRV_LOG(ERR, "Fail to start Tx queue %u", i);
+			return err;
+		}
+	}
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		rxq = dev->data->rx_queues[i];
+		if (rxq == NULL || rxq->rx_deferred_start)
+			continue;
+		err = cpfl_rx_queue_start(dev, i);
+		if (err != 0) {
+			PMD_DRV_LOG(ERR, "Fail to start Rx queue %u", i);
+			return err;
+		}
+	}
+
+	return err;
+}
+
 static int
 cpfl_dev_start(struct rte_eth_dev *dev)
 {
 	struct idpf_vport *vport = dev->data->dev_private;
 	int ret;
 
+	ret = cpfl_start_queues(dev);
+	if (ret != 0) {
+		PMD_DRV_LOG(ERR, "Failed to start queues");
+		return ret;
+	}
+
 	ret = idpf_vc_ena_dis_vport(vport, true);
 	if (ret != 0) {
 		PMD_DRV_LOG(ERR, "Failed to enable vport");
@@ -574,6 +613,8 @@ static const struct eth_dev_ops cpfl_eth_dev_ops = {
 	.dev_start			= cpfl_dev_start,
 	.dev_stop			= cpfl_dev_stop,
 	.link_update			= cpfl_dev_link_update,
+	.rx_queue_start			= cpfl_rx_queue_start,
+	.tx_queue_start			= cpfl_tx_queue_start,
 	.dev_supported_ptypes_get	= cpfl_dev_supported_ptypes_get,
 };
 
diff --git a/drivers/net/cpfl/cpfl_rxtx.c b/drivers/net/cpfl/cpfl_rxtx.c
index 71ebbf440b..fb6becc99f 100644
--- a/drivers/net/cpfl/cpfl_rxtx.c
+++ b/drivers/net/cpfl/cpfl_rxtx.c
@@ -474,3 +474,141 @@ cpfl_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 err_txq_alloc:
 	return ret;
 }
+
+int
+cpfl_rx_queue_init(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+{
+	struct idpf_rx_queue *rxq;
+	int err;
+
+	if (rx_queue_id >= dev->data->nb_rx_queues)
+		return -EINVAL;
+
+	rxq = dev->data->rx_queues[rx_queue_id];
+
+	if (rxq == NULL || !rxq->q_set) {
+		PMD_DRV_LOG(ERR, "RX queue %u not available or setup",
+					rx_queue_id);
+		return -EINVAL;
+	}
+
+	if (rxq->bufq1 == NULL) {
+		/* Single queue */
+		err = idpf_alloc_single_rxq_mbufs(rxq);
+		if (err != 0) {
+			PMD_DRV_LOG(ERR, "Failed to allocate RX queue mbuf");
+			return err;
+		}
+
+		rte_wmb();
+
+		/* Init the RX tail register. */
+		IDPF_PCI_REG_WRITE(rxq->qrx_tail, rxq->nb_rx_desc - 1);
+	} else {
+		/* Split queue */
+		err = idpf_alloc_split_rxq_mbufs(rxq->bufq1);
+		if (err != 0) {
+			PMD_DRV_LOG(ERR, "Failed to allocate RX buffer queue mbuf");
+			return err;
+		}
+		err = idpf_alloc_split_rxq_mbufs(rxq->bufq2);
+		if (err != 0) {
+			PMD_DRV_LOG(ERR, "Failed to allocate RX buffer queue mbuf");
+			return err;
+		}
+
+		rte_wmb();
+
+		/* Init the RX tail register. */
+		IDPF_PCI_REG_WRITE(rxq->bufq1->qrx_tail, rxq->bufq1->rx_tail);
+		IDPF_PCI_REG_WRITE(rxq->bufq2->qrx_tail, rxq->bufq2->rx_tail);
+	}
+
+	return err;
+}
+
+int
+cpfl_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_rx_queue *rxq =
+		dev->data->rx_queues[rx_queue_id];
+	int err = 0;
+
+	err = idpf_vc_config_rxq(vport, rxq);
+	if (err != 0) {
+		PMD_DRV_LOG(ERR, "Fail to configure Rx queue %u", rx_queue_id);
+		return err;
+	}
+
+	err = cpfl_rx_queue_init(dev, rx_queue_id);
+	if (err != 0) {
+		PMD_DRV_LOG(ERR, "Failed to init RX queue %u",
+			    rx_queue_id);
+		return err;
+	}
+
+	/* Ready to switch the queue on */
+	err = idpf_switch_queue(vport, rx_queue_id, true, true);
+	if (err != 0) {
+		PMD_DRV_LOG(ERR, "Failed to switch RX queue %u on",
+			    rx_queue_id);
+	} else {
+		rxq->q_started = true;
+		dev->data->rx_queue_state[rx_queue_id] =
+			RTE_ETH_QUEUE_STATE_STARTED;
+	}
+
+	return err;
+}
+
+int
+cpfl_tx_queue_init(struct rte_eth_dev *dev, uint16_t tx_queue_id)
+{
+	struct idpf_tx_queue *txq;
+
+	if (tx_queue_id >= dev->data->nb_tx_queues)
+		return -EINVAL;
+
+	txq = dev->data->tx_queues[tx_queue_id];
+
+	/* Init the TX tail register. */
+	IDPF_PCI_REG_WRITE(txq->qtx_tail, 0);
+
+	return 0;
+}
+
+int
+cpfl_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_tx_queue *txq =
+		dev->data->tx_queues[tx_queue_id];
+	int err = 0;
+
+	err = idpf_vc_config_txq(vport, txq);
+	if (err != 0) {
+		PMD_DRV_LOG(ERR, "Fail to configure Tx queue %u", tx_queue_id);
+		return err;
+	}
+
+	err = cpfl_tx_queue_init(dev, tx_queue_id);
+	if (err != 0) {
+		PMD_DRV_LOG(ERR, "Failed to init TX queue %u",
+			    tx_queue_id);
+		return err;
+	}
+
+	/* Ready to switch the queue on */
+	err = idpf_switch_queue(vport, tx_queue_id, false, true);
+	if (err != 0) {
+		PMD_DRV_LOG(ERR, "Failed to switch TX queue %u on",
+			    tx_queue_id);
+	} else {
+		txq->q_started = true;
+		dev->data->tx_queue_state[tx_queue_id] =
+			RTE_ETH_QUEUE_STATE_STARTED;
+	}
+
+	return err;
+}
diff --git a/drivers/net/cpfl/cpfl_rxtx.h b/drivers/net/cpfl/cpfl_rxtx.h
index fd838d3f07..2fa7950775 100644
--- a/drivers/net/cpfl/cpfl_rxtx.h
+++ b/drivers/net/cpfl/cpfl_rxtx.h
@@ -28,4 +28,8 @@ int cpfl_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 			uint16_t nb_desc, unsigned int socket_id,
 			const struct rte_eth_rxconf *rx_conf,
 			struct rte_mempool *mp);
+int cpfl_rx_queue_init(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+int cpfl_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+int cpfl_tx_queue_init(struct rte_eth_dev *dev, uint16_t tx_queue_id);
+int cpfl_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id);
 #endif /* _CPFL_RXTX_H_ */
-- 
2.25.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v4 06/21] net/cpfl: support queue stop
  2023-01-18  7:57   ` [PATCH v4 00/21] add support for cpfl PMD in DPDK Mingxia Liu
                       ` (4 preceding siblings ...)
  2023-01-18  7:57     ` [PATCH v4 05/21] net/cpfl: support queue start Mingxia Liu
@ 2023-01-18  7:57     ` Mingxia Liu
  2023-01-18  7:57     ` [PATCH v4 07/21] net/cpfl: support queue release Mingxia Liu
                       ` (15 subsequent siblings)
  21 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-01-18  7:57 UTC (permalink / raw)
  To: dev, qi.z.zhang, jingjing.wu, beilei.xing; +Cc: Mingxia Liu

Add support for these device ops:
 - rx_queue_stop
 - tx_queue_stop

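A minimal sketch of quiescing one queue pair at runtime without
stopping the whole port (port_id, qid and the helper are illustrative):

#include <rte_ethdev.h>

static int
pause_queue_pair(uint16_t port_id, uint16_t qid)
{
	/* Stopping a queue frees its mbufs and resets the ring. */
	int ret = rte_eth_dev_rx_queue_stop(port_id, qid);

	if (ret == 0)
		ret = rte_eth_dev_tx_queue_stop(port_id, qid);
	return ret;
}
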
Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/net/cpfl/cpfl_ethdev.c | 10 +++-
 drivers/net/cpfl/cpfl_rxtx.c   | 87 ++++++++++++++++++++++++++++++++++
 drivers/net/cpfl/cpfl_rxtx.h   |  3 ++
 3 files changed, 99 insertions(+), 1 deletion(-)

diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index 51792c648e..b15ecab840 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -232,12 +232,16 @@ cpfl_dev_start(struct rte_eth_dev *dev)
 	ret = idpf_vc_ena_dis_vport(vport, true);
 	if (ret != 0) {
 		PMD_DRV_LOG(ERR, "Failed to enable vport");
-		return ret;
+		goto err_vport;
 	}
 
 	vport->stopped = 0;
 
 	return 0;
+
+err_vport:
+	cpfl_stop_queues(dev);
+	return ret;
 }
 
 static int
@@ -250,6 +254,8 @@ cpfl_dev_stop(struct rte_eth_dev *dev)
 
 	idpf_vc_ena_dis_vport(vport, false);
 
+	cpfl_stop_queues(dev);
+
 	vport->stopped = 1;
 
 	return 0;
@@ -615,6 +621,8 @@ static const struct eth_dev_ops cpfl_eth_dev_ops = {
 	.link_update			= cpfl_dev_link_update,
 	.rx_queue_start			= cpfl_rx_queue_start,
 	.tx_queue_start			= cpfl_tx_queue_start,
+	.rx_queue_stop			= cpfl_rx_queue_stop,
+	.tx_queue_stop			= cpfl_tx_queue_stop,
 	.dev_supported_ptypes_get	= cpfl_dev_supported_ptypes_get,
 };
 
diff --git a/drivers/net/cpfl/cpfl_rxtx.c b/drivers/net/cpfl/cpfl_rxtx.c
index fb6becc99f..648b1f1e03 100644
--- a/drivers/net/cpfl/cpfl_rxtx.c
+++ b/drivers/net/cpfl/cpfl_rxtx.c
@@ -612,3 +612,90 @@ cpfl_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
 
 	return err;
 }
+
+int
+cpfl_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_rx_queue *rxq;
+	int err;
+
+	if (rx_queue_id >= dev->data->nb_rx_queues)
+		return -EINVAL;
+
+	err = idpf_switch_queue(vport, rx_queue_id, true, false);
+	if (err != 0) {
+		PMD_DRV_LOG(ERR, "Failed to switch RX queue %u off",
+			    rx_queue_id);
+		return err;
+	}
+
+	rxq = dev->data->rx_queues[rx_queue_id];
+	if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE) {
+		rxq->ops->release_mbufs(rxq);
+		idpf_reset_single_rx_queue(rxq);
+	} else {
+		rxq->bufq1->ops->release_mbufs(rxq->bufq1);
+		rxq->bufq2->ops->release_mbufs(rxq->bufq2);
+		idpf_reset_split_rx_queue(rxq);
+	}
+	dev->data->rx_queue_state[rx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
+
+	return 0;
+}
+
+int
+cpfl_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_tx_queue *txq;
+	int err;
+
+	if (tx_queue_id >= dev->data->nb_tx_queues)
+		return -EINVAL;
+
+	err = idpf_switch_queue(vport, tx_queue_id, false, false);
+	if (err != 0) {
+		PMD_DRV_LOG(ERR, "Failed to switch TX queue %u off",
+			    tx_queue_id);
+		return err;
+	}
+
+	txq = dev->data->tx_queues[tx_queue_id];
+	txq->ops->release_mbufs(txq);
+	if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE) {
+		idpf_reset_single_tx_queue(txq);
+	} else {
+		idpf_reset_split_tx_descq(txq);
+		idpf_reset_split_tx_complq(txq->complq);
+	}
+	dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
+
+	return 0;
+}
+
+void
+cpfl_stop_queues(struct rte_eth_dev *dev)
+{
+	struct idpf_rx_queue *rxq;
+	struct idpf_tx_queue *txq;
+	int i;
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		rxq = dev->data->rx_queues[i];
+		if (rxq == NULL)
+			continue;
+
+		if (cpfl_rx_queue_stop(dev, i) != 0)
+			PMD_DRV_LOG(WARNING, "Fail to stop Rx queue %d", i);
+	}
+
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		txq = dev->data->tx_queues[i];
+		if (txq == NULL)
+			continue;
+
+		if (cpfl_tx_queue_stop(dev, i) != 0)
+			PMD_DRV_LOG(WARNING, "Fail to stop Tx queue %d", i);
+	}
+}
diff --git a/drivers/net/cpfl/cpfl_rxtx.h b/drivers/net/cpfl/cpfl_rxtx.h
index 2fa7950775..6b63137d5c 100644
--- a/drivers/net/cpfl/cpfl_rxtx.h
+++ b/drivers/net/cpfl/cpfl_rxtx.h
@@ -32,4 +32,7 @@ int cpfl_rx_queue_init(struct rte_eth_dev *dev, uint16_t rx_queue_id);
 int cpfl_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id);
 int cpfl_tx_queue_init(struct rte_eth_dev *dev, uint16_t tx_queue_id);
 int cpfl_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id);
+void cpfl_stop_queues(struct rte_eth_dev *dev);
+int cpfl_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id);
+int cpfl_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id);
 #endif /* _CPFL_RXTX_H_ */
-- 
2.25.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v4 07/21] net/cpfl: support queue release
  2023-01-18  7:57   ` [PATCH v4 00/21] add support for cpfl PMD in DPDK Mingxia Liu
                       ` (5 preceding siblings ...)
  2023-01-18  7:57     ` [PATCH v4 06/21] net/cpfl: support queue stop Mingxia Liu
@ 2023-01-18  7:57     ` Mingxia Liu
  2023-01-18  7:57     ` [PATCH v4 08/21] net/cpfl: support MTU configuration Mingxia Liu
                       ` (14 subsequent siblings)
  21 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-01-18  7:57 UTC (permalink / raw)
  To: dev, qi.z.zhang, jingjing.wu, beilei.xing; +Cc: Mingxia Liu

Add support for queue operations:
 - rx_queue_release
 - tx_queue_release
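
As context (illustrative, not part of the diff): the ethdev layer invokes
these release ops when a queue index is set up again or when the device is
closed. A minimal sketch, with a hypothetical helper name:

	#include <rte_ethdev.h>
	#include <rte_lcore.h>

	/* Re-running setup on an existing queue index makes ethdev call
	 * rx_queue_release for the old queue before creating the new one.
	 */
	static int
	resize_rx_queue(uint16_t port_id, uint16_t qid, uint16_t nb_desc,
			struct rte_mempool *mp)
	{
		return rte_eth_rx_queue_setup(port_id, qid, nb_desc,
					      rte_socket_id(), NULL, mp);
	}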

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/net/cpfl/cpfl_ethdev.c |  2 ++
 drivers/net/cpfl/cpfl_rxtx.c   | 35 ++++++++++++++++++++++++++++++++++
 drivers/net/cpfl/cpfl_rxtx.h   |  2 ++
 3 files changed, 39 insertions(+)

diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index b15ecab840..33a0b9ba60 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -623,6 +623,8 @@ static const struct eth_dev_ops cpfl_eth_dev_ops = {
 	.tx_queue_start			= cpfl_tx_queue_start,
 	.rx_queue_stop			= cpfl_rx_queue_stop,
 	.tx_queue_stop			= cpfl_tx_queue_stop,
+	.rx_queue_release		= cpfl_dev_rx_queue_release,
+	.tx_queue_release		= cpfl_dev_tx_queue_release,
 	.dev_supported_ptypes_get	= cpfl_dev_supported_ptypes_get,
 };
 
diff --git a/drivers/net/cpfl/cpfl_rxtx.c b/drivers/net/cpfl/cpfl_rxtx.c
index 648b1f1e03..4ed15ef7f4 100644
--- a/drivers/net/cpfl/cpfl_rxtx.c
+++ b/drivers/net/cpfl/cpfl_rxtx.c
@@ -49,6 +49,14 @@ cpfl_tx_offload_convert(uint64_t offload)
 	return ol;
 }
 
+static const struct idpf_rxq_ops def_rxq_ops = {
+	.release_mbufs = idpf_release_rxq_mbufs,
+};
+
+static const struct idpf_txq_ops def_txq_ops = {
+	.release_mbufs = idpf_release_txq_mbufs,
+};
+
 static const struct rte_memzone *
 cpfl_dma_zone_reserve(struct rte_eth_dev *dev, uint16_t queue_idx,
 		      uint16_t len, uint16_t queue_type,
@@ -177,6 +185,7 @@ cpfl_rx_split_bufq_setup(struct rte_eth_dev *dev, struct idpf_rx_queue *rxq,
 	idpf_reset_split_rx_bufq(bufq);
 	bufq->qrx_tail = hw->hw_addr + (vport->chunks_info.rx_buf_qtail_start +
 			 queue_idx * vport->chunks_info.rx_buf_qtail_spacing);
+	bufq->ops = &def_rxq_ops;
 	bufq->q_set = true;
 
 	if (bufq_id == 1) {
@@ -235,6 +244,12 @@ cpfl_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 	if (idpf_check_rx_thresh(nb_desc, rx_free_thresh) != 0)
 		return -EINVAL;
 
+	/* Free memory if needed */
+	if (dev->data->rx_queues[queue_idx] != NULL) {
+		idpf_rx_queue_release(dev->data->rx_queues[queue_idx]);
+		dev->data->rx_queues[queue_idx] = NULL;
+	}
+
 	/* Setup Rx queue */
 	rxq = rte_zmalloc_socket("cpfl rxq",
 				 sizeof(struct idpf_rx_queue),
@@ -287,6 +302,7 @@ cpfl_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 		idpf_reset_single_rx_queue(rxq);
 		rxq->qrx_tail = hw->hw_addr + (vport->chunks_info.rx_qtail_start +
 				queue_idx * vport->chunks_info.rx_qtail_spacing);
+		rxq->ops = &def_rxq_ops;
 	} else {
 		idpf_reset_split_rx_descq(rxq);
 
@@ -399,6 +415,12 @@ cpfl_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 	if (idpf_check_tx_thresh(nb_desc, tx_rs_thresh, tx_free_thresh) != 0)
 		return -EINVAL;
 
+	/* Free memory if needed. */
+	if (dev->data->tx_queues[queue_idx] != NULL) {
+		idpf_tx_queue_release(dev->data->tx_queues[queue_idx]);
+		dev->data->tx_queues[queue_idx] = NULL;
+	}
+
 	/* Allocate the TX queue data structure. */
 	txq = rte_zmalloc_socket("cpfl txq",
 				 sizeof(struct idpf_tx_queue),
@@ -461,6 +483,7 @@ cpfl_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 
 	txq->qtx_tail = hw->hw_addr + (vport->chunks_info.tx_qtail_start +
 			queue_idx * vport->chunks_info.tx_qtail_spacing);
+	txq->ops = &def_txq_ops;
 	txq->q_set = true;
 	dev->data->tx_queues[queue_idx] = txq;
 
@@ -674,6 +697,18 @@ cpfl_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
 	return 0;
 }
 
+void
+cpfl_dev_rx_queue_release(struct rte_eth_dev *dev, uint16_t qid)
+{
+	idpf_rx_queue_release(dev->data->rx_queues[qid]);
+}
+
+void
+cpfl_dev_tx_queue_release(struct rte_eth_dev *dev, uint16_t qid)
+{
+	idpf_tx_queue_release(dev->data->tx_queues[qid]);
+}
+
 void
 cpfl_stop_queues(struct rte_eth_dev *dev)
 {
diff --git a/drivers/net/cpfl/cpfl_rxtx.h b/drivers/net/cpfl/cpfl_rxtx.h
index 6b63137d5c..037d479d56 100644
--- a/drivers/net/cpfl/cpfl_rxtx.h
+++ b/drivers/net/cpfl/cpfl_rxtx.h
@@ -35,4 +35,6 @@ int cpfl_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id);
 void cpfl_stop_queues(struct rte_eth_dev *dev);
 int cpfl_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id);
 int cpfl_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+void cpfl_dev_tx_queue_release(struct rte_eth_dev *dev, uint16_t qid);
+void cpfl_dev_rx_queue_release(struct rte_eth_dev *dev, uint16_t qid);
 #endif /* _CPFL_RXTX_H_ */
-- 
2.25.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v4 08/21] net/cpfl: support MTU configuration
  2023-01-18  7:57   ` [PATCH v4 00/21] add support for cpfl PMD in DPDK Mingxia Liu
                       ` (6 preceding siblings ...)
  2023-01-18  7:57     ` [PATCH v4 07/21] net/cpfl: support queue release Mingxia Liu
@ 2023-01-18  7:57     ` Mingxia Liu
  2023-01-18  7:57     ` [PATCH v4 09/21] net/cpfl: support basic Rx data path Mingxia Liu
                       ` (13 subsequent siblings)
  21 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-01-18  7:57 UTC (permalink / raw)
  To: dev, qi.z.zhang, jingjing.wu, beilei.xing; +Cc: Mingxia Liu

Add the mtu_set device op.
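
A usage sketch (illustrative only; the helper is hypothetical). The op is
reached via rte_eth_dev_set_mtu() and, as enforced in the diff below, only
while the port is stopped:

	#include <rte_ethdev.h>

	/* Hypothetical helper: the PMD rejects MTU changes on a started
	 * port, so stop, set, and restart.
	 */
	static int
	set_port_mtu(uint16_t port_id, uint16_t mtu)
	{
		int ret = rte_eth_dev_stop(port_id);

		if (ret != 0)
			return ret;
		ret = rte_eth_dev_set_mtu(port_id, mtu);
		if (ret != 0)
			return ret;
		return rte_eth_dev_start(port_id);
	}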

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 doc/guides/nics/features/cpfl.ini |  1 +
 drivers/net/cpfl/cpfl_ethdev.c    | 27 +++++++++++++++++++++++++++
 2 files changed, 28 insertions(+)

diff --git a/doc/guides/nics/features/cpfl.ini b/doc/guides/nics/features/cpfl.ini
index a2d1ca9e15..470ba81579 100644
--- a/doc/guides/nics/features/cpfl.ini
+++ b/doc/guides/nics/features/cpfl.ini
@@ -7,6 +7,7 @@
 ; is selected.
 ;
 [Features]
+MTU update           = Y
 Linux                = Y
 x86-32               = Y
 x86-64               = Y
diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index 33a0b9ba60..1f40f1749e 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -121,6 +121,27 @@ cpfl_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	return 0;
 }
 
+static int
+cpfl_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+
+	/* MTU setting is forbidden if the port is started */
+	if (dev->data->dev_started) {
+		PMD_DRV_LOG(ERR, "port must be stopped before configuration");
+		return -EBUSY;
+	}
+
+	if (mtu > vport->max_mtu) {
+		PMD_DRV_LOG(ERR, "MTU should be less than %d", vport->max_mtu);
+		return -EINVAL;
+	}
+
+	vport->max_pkt_len = mtu + CPFL_ETH_OVERHEAD;
+
+	return 0;
+}
+
 static const uint32_t *
 cpfl_dev_supported_ptypes_get(struct rte_eth_dev *dev __rte_unused)
 {
@@ -142,6 +163,7 @@ cpfl_dev_supported_ptypes_get(struct rte_eth_dev *dev __rte_unused)
 static int
 cpfl_dev_configure(struct rte_eth_dev *dev)
 {
+	struct idpf_vport *vport = dev->data->dev_private;
 	struct rte_eth_conf *conf = &dev->data->dev_conf;
 
 	if (conf->link_speeds & RTE_ETH_LINK_SPEED_FIXED) {
@@ -181,6 +203,10 @@ cpfl_dev_configure(struct rte_eth_dev *dev)
 		return -ENOTSUP;
 	}
 
+	vport->max_pkt_len =
+		(dev->data->mtu == 0) ? CPFL_DEFAULT_MTU : dev->data->mtu +
+		CPFL_ETH_OVERHEAD;
+
 	return 0;
 }
 
@@ -625,6 +651,7 @@ static const struct eth_dev_ops cpfl_eth_dev_ops = {
 	.tx_queue_stop			= cpfl_tx_queue_stop,
 	.rx_queue_release		= cpfl_dev_rx_queue_release,
 	.tx_queue_release		= cpfl_dev_tx_queue_release,
+	.mtu_set			= cpfl_dev_mtu_set,
 	.dev_supported_ptypes_get	= cpfl_dev_supported_ptypes_get,
 };
 
-- 
2.25.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v4 09/21] net/cpfl: support basic Rx data path
  2023-01-18  7:57   ` [PATCH v4 00/21] add support for cpfl PMD in DPDK Mingxia Liu
                       ` (7 preceding siblings ...)
  2023-01-18  7:57     ` [PATCH v4 08/21] net/cpfl: support MTU configuration Mingxia Liu
@ 2023-01-18  7:57     ` Mingxia Liu
  2023-01-18  7:57     ` [PATCH v4 10/21] net/cpfl: support basic Tx " Mingxia Liu
                       ` (12 subsequent siblings)
  21 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-01-18  7:57 UTC (permalink / raw)
  To: dev, qi.z.zhang, jingjing.wu, beilei.xing; +Cc: Mingxia Liu

Add basic Rx support in split queue mode and single queue mode.
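
For illustration (not part of the diff), a minimal polling sketch; the
helper name and burst size are arbitrary:

	#include <rte_ethdev.h>
	#include <rte_mbuf.h>

	#define BURST_SZ 32	/* arbitrary burst size */

	/* rte_eth_rx_burst() dispatches to idpf_splitq_recv_pkts or
	 * idpf_singleq_recv_pkts depending on the queue model.
	 */
	static void
	poll_rx_once(uint16_t port_id, uint16_t qid)
	{
		struct rte_mbuf *bufs[BURST_SZ];
		uint16_t i, nb;

		nb = rte_eth_rx_burst(port_id, qid, bufs, BURST_SZ);
		for (i = 0; i < nb; i++)
			rte_pktmbuf_free(bufs[i]);	/* consume/drop */
	}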

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/net/cpfl/cpfl_ethdev.c |  2 ++
 drivers/net/cpfl/cpfl_rxtx.c   | 11 +++++++++++
 drivers/net/cpfl/cpfl_rxtx.h   |  1 +
 3 files changed, 14 insertions(+)

diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index 1f40f1749e..ca5b7f952e 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -255,6 +255,8 @@ cpfl_dev_start(struct rte_eth_dev *dev)
 		return ret;
 	}
 
+	cpfl_set_rx_function(dev);
+
 	ret = idpf_vc_ena_dis_vport(vport, true);
 	if (ret != 0) {
 		PMD_DRV_LOG(ERR, "Failed to enable vport");
diff --git a/drivers/net/cpfl/cpfl_rxtx.c b/drivers/net/cpfl/cpfl_rxtx.c
index 4ed15ef7f4..8d1d31e186 100644
--- a/drivers/net/cpfl/cpfl_rxtx.c
+++ b/drivers/net/cpfl/cpfl_rxtx.c
@@ -734,3 +734,14 @@ cpfl_stop_queues(struct rte_eth_dev *dev)
 			PMD_DRV_LOG(WARNING, "Fail to stop Tx queue %d", i);
 	}
 }
+
+void
+cpfl_set_rx_function(struct rte_eth_dev *dev)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+
+	if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT)
+		dev->rx_pkt_burst = idpf_splitq_recv_pkts;
+	else
+		dev->rx_pkt_burst = idpf_singleq_recv_pkts;
+}
diff --git a/drivers/net/cpfl/cpfl_rxtx.h b/drivers/net/cpfl/cpfl_rxtx.h
index 037d479d56..c29c30c7a3 100644
--- a/drivers/net/cpfl/cpfl_rxtx.h
+++ b/drivers/net/cpfl/cpfl_rxtx.h
@@ -37,4 +37,5 @@ int cpfl_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id);
 int cpfl_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id);
 void cpfl_dev_tx_queue_release(struct rte_eth_dev *dev, uint16_t qid);
 void cpfl_dev_rx_queue_release(struct rte_eth_dev *dev, uint16_t qid);
+void cpfl_set_rx_function(struct rte_eth_dev *dev);
 #endif /* _CPFL_RXTX_H_ */
-- 
2.25.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v4 10/21] net/cpfl: support basic Tx data path
  2023-01-18  7:57   ` [PATCH v4 00/21] add support for cpfl PMD in DPDK Mingxia Liu
                       ` (8 preceding siblings ...)
  2023-01-18  7:57     ` [PATCH v4 09/21] net/cpfl: support basic Rx data path Mingxia Liu
@ 2023-01-18  7:57     ` Mingxia Liu
  2023-01-18  7:57     ` [PATCH v4 11/21] net/cpfl: support write back based on ITR expire Mingxia Liu
                       ` (11 subsequent siblings)
  21 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-01-18  7:57 UTC (permalink / raw)
  To: dev, qi.z.zhang, jingjing.wu, beilei.xing; +Cc: Mingxia Liu

Add basic Tx support in split queue mode and single queue mode.
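
For illustration (not part of the diff), a minimal transmit sketch with a
hypothetical helper; tx_pkt_prepare (idpf_prep_pkts) is exercised via
rte_eth_tx_prepare():

	#include <rte_ethdev.h>
	#include <rte_mbuf.h>

	/* Validate and fix up offload metadata, then hand the burst to
	 * the single or split queue transmit path.
	 */
	static uint16_t
	send_burst(uint16_t port_id, uint16_t qid,
		   struct rte_mbuf **pkts, uint16_t nb)
	{
		uint16_t nb_prep = rte_eth_tx_prepare(port_id, qid, pkts, nb);

		return rte_eth_tx_burst(port_id, qid, pkts, nb_prep);
	}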

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/net/cpfl/cpfl_ethdev.c |  3 +++
 drivers/net/cpfl/cpfl_rxtx.c   | 14 ++++++++++++++
 drivers/net/cpfl/cpfl_rxtx.h   |  1 +
 3 files changed, 18 insertions(+)

diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index ca5b7f952e..94ff0d94bb 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -97,6 +97,8 @@ cpfl_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->max_mtu = vport->max_mtu;
 	dev_info->min_mtu = RTE_ETHER_MIN_MTU;
 
+	dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
+
 	dev_info->default_txconf = (struct rte_eth_txconf) {
 		.tx_free_thresh = CPFL_DEFAULT_TX_FREE_THRESH,
 		.tx_rs_thresh = CPFL_DEFAULT_TX_RS_THRESH,
@@ -256,6 +258,7 @@ cpfl_dev_start(struct rte_eth_dev *dev)
 	}
 
 	cpfl_set_rx_function(dev);
+	cpfl_set_tx_function(dev);
 
 	ret = idpf_vc_ena_dis_vport(vport, true);
 	if (ret != 0) {
diff --git a/drivers/net/cpfl/cpfl_rxtx.c b/drivers/net/cpfl/cpfl_rxtx.c
index 8d1d31e186..8724d391ad 100644
--- a/drivers/net/cpfl/cpfl_rxtx.c
+++ b/drivers/net/cpfl/cpfl_rxtx.c
@@ -745,3 +745,17 @@ cpfl_set_rx_function(struct rte_eth_dev *dev)
 	else
 		dev->rx_pkt_burst = idpf_singleq_recv_pkts;
 }
+
+void
+cpfl_set_tx_function(struct rte_eth_dev *dev)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+
+	if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
+		dev->tx_pkt_burst = idpf_splitq_xmit_pkts;
+		dev->tx_pkt_prepare = idpf_prep_pkts;
+	} else {
+		dev->tx_pkt_burst = idpf_singleq_xmit_pkts;
+		dev->tx_pkt_prepare = idpf_prep_pkts;
+	}
+}
diff --git a/drivers/net/cpfl/cpfl_rxtx.h b/drivers/net/cpfl/cpfl_rxtx.h
index c29c30c7a3..021db5bf8a 100644
--- a/drivers/net/cpfl/cpfl_rxtx.h
+++ b/drivers/net/cpfl/cpfl_rxtx.h
@@ -38,4 +38,5 @@ int cpfl_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id);
 void cpfl_dev_tx_queue_release(struct rte_eth_dev *dev, uint16_t qid);
 void cpfl_dev_rx_queue_release(struct rte_eth_dev *dev, uint16_t qid);
 void cpfl_set_rx_function(struct rte_eth_dev *dev);
+void cpfl_set_tx_function(struct rte_eth_dev *dev);
 #endif /* _CPFL_RXTX_H_ */
-- 
2.25.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v4 11/21] net/cpfl: support write back based on ITR expire
  2023-01-18  7:57   ` [PATCH v4 00/21] add support for cpfl PMD in DPDK Mingxia Liu
                       ` (9 preceding siblings ...)
  2023-01-18  7:57     ` [PATCH v4 10/21] net/cpfl: support basic Tx " Mingxia Liu
@ 2023-01-18  7:57     ` Mingxia Liu
  2023-01-18  7:57     ` [PATCH v4 12/21] net/cpfl: support RSS Mingxia Liu
                       ` (10 subsequent siblings)
  21 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-01-18  7:57 UTC (permalink / raw)
  To: dev, qi.z.zhang, jingjing.wu, beilei.xing; +Cc: Mingxia Liu

Enable write back on ITR expire, then packets can be received one by one.

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/net/cpfl/cpfl_ethdev.c | 45 +++++++++++++++++++++++++++++++++-
 drivers/net/cpfl/cpfl_ethdev.h |  2 ++
 2 files changed, 46 insertions(+), 1 deletion(-)

diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index 94ff0d94bb..c678fe0354 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -212,6 +212,15 @@ cpfl_dev_configure(struct rte_eth_dev *dev)
 	return 0;
 }
 
+static int
+cpfl_config_rx_queues_irqs(struct rte_eth_dev *dev)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	uint16_t nb_rx_queues = dev->data->nb_rx_queues;
+
+	return idpf_config_irq_map(vport, nb_rx_queues);
+}
+
 static int
 cpfl_start_queues(struct rte_eth_dev *dev)
 {
@@ -249,12 +258,37 @@ static int
 cpfl_dev_start(struct rte_eth_dev *dev)
 {
 	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_adapter *base = vport->adapter;
+	struct cpfl_adapter_ext *adapter = CPFL_ADAPTER_TO_EXT(base);
+	uint16_t num_allocated_vectors = base->caps.num_allocated_vectors;
+	uint16_t req_vecs_num;
 	int ret;
 
+	req_vecs_num = CPFL_DFLT_Q_VEC_NUM;
+	if (req_vecs_num + adapter->used_vecs_num > num_allocated_vectors) {
+		PMD_DRV_LOG(ERR, "The accumulated request vectors' number should be less than %d",
+			    num_allocated_vectors);
+		ret = -EINVAL;
+		goto err_vec;
+	}
+
+	ret = idpf_vc_alloc_vectors(vport, req_vecs_num);
+	if (ret != 0) {
+		PMD_DRV_LOG(ERR, "Failed to allocate interrupt vectors");
+		goto err_vec;
+	}
+	adapter->used_vecs_num += req_vecs_num;
+
+	ret = cpfl_config_rx_queues_irqs(dev);
+	if (ret != 0) {
+		PMD_DRV_LOG(ERR, "Failed to configure irqs");
+		goto err_irq;
+	}
+
 	ret = cpfl_start_queues(dev);
 	if (ret != 0) {
 		PMD_DRV_LOG(ERR, "Failed to start queues");
-		return ret;
+		goto err_startq;
 	}
 
 	cpfl_set_rx_function(dev);
@@ -272,6 +306,11 @@ cpfl_dev_start(struct rte_eth_dev *dev)
 
 err_vport:
 	cpfl_stop_queues(dev);
+err_startq:
+	idpf_config_irq_unmap(vport, dev->data->nb_rx_queues);
+err_irq:
+	idpf_vc_dealloc_vectors(vport);
+err_vec:
 	return ret;
 }
 
@@ -287,6 +326,10 @@ cpfl_dev_stop(struct rte_eth_dev *dev)
 
 	cpfl_stop_queues(dev);
 
+	idpf_config_irq_unmap(vport, dev->data->nb_rx_queues);
+
+	idpf_vc_dealloc_vectors(vport);
+
 	vport->stopped = 1;
 
 	return 0;
diff --git a/drivers/net/cpfl/cpfl_ethdev.h b/drivers/net/cpfl/cpfl_ethdev.h
index 83459b9c91..9ae543c2ad 100644
--- a/drivers/net/cpfl/cpfl_ethdev.h
+++ b/drivers/net/cpfl/cpfl_ethdev.h
@@ -24,6 +24,8 @@
 
 #define CPFL_INVALID_VPORT_IDX	0xffff
 
+#define CPFL_DFLT_Q_VEC_NUM	1
+
 #define CPFL_MIN_BUF_SIZE	1024
 #define CPFL_MAX_FRAME_SIZE	9728
 #define CPFL_DEFAULT_MTU	RTE_ETHER_MTU
-- 
2.25.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v4 12/21] net/cpfl: support RSS
  2023-01-18  7:57   ` [PATCH v4 00/21] add support for cpfl PMD in DPDK Mingxia Liu
                       ` (10 preceding siblings ...)
  2023-01-18  7:57     ` [PATCH v4 11/21] net/cpfl: support write back based on ITR expire Mingxia Liu
@ 2023-01-18  7:57     ` Mingxia Liu
  2023-01-18  7:57     ` [PATCH v4 13/21] net/cpfl: support Rx offloading Mingxia Liu
                       ` (9 subsequent siblings)
  21 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-01-18  7:57 UTC (permalink / raw)
  To: dev, qi.z.zhang, jingjing.wu, beilei.xing; +Cc: Mingxia Liu

Add RSS support.
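
For illustration (not part of the diff), the configuration side that
cpfl_init_rss() consumes; the values here are examples only:

	#include <rte_ethdev.h>

	/* Passed to rte_eth_dev_configure(); with rss_key == NULL the
	 * PMD generates a random key of vport->rss_key_size bytes.
	 */
	static const struct rte_eth_conf port_conf = {
		.rxmode = {
			.mq_mode = RTE_ETH_MQ_RX_RSS,
		},
		.rx_adv_conf = {
			.rss_conf = {
				.rss_key = NULL,
				.rss_hf = RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_IPV6,
			},
		},
	};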

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/net/cpfl/cpfl_ethdev.c | 51 ++++++++++++++++++++++++++++++++++
 drivers/net/cpfl/cpfl_ethdev.h | 15 ++++++++++
 2 files changed, 66 insertions(+)

diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index c678fe0354..49b8861df0 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -97,6 +97,8 @@ cpfl_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->max_mtu = vport->max_mtu;
 	dev_info->min_mtu = RTE_ETHER_MIN_MTU;
 
+	dev_info->flow_type_rss_offloads = CPFL_RSS_OFFLOAD_ALL;
+
 	dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 	dev_info->default_txconf = (struct rte_eth_txconf) {
@@ -162,11 +164,49 @@ cpfl_dev_supported_ptypes_get(struct rte_eth_dev *dev __rte_unused)
 	return ptypes;
 }
 
+static int
+cpfl_init_rss(struct idpf_vport *vport)
+{
+	struct rte_eth_rss_conf *rss_conf;
+	struct rte_eth_dev_data *dev_data;
+	uint16_t i, nb_q;
+	int ret = 0;
+
+	dev_data = vport->dev_data;
+	rss_conf = &dev_data->dev_conf.rx_adv_conf.rss_conf;
+	nb_q = dev_data->nb_rx_queues;
+
+	if (rss_conf->rss_key == NULL) {
+		for (i = 0; i < vport->rss_key_size; i++)
+			vport->rss_key[i] = (uint8_t)rte_rand();
+	} else if (rss_conf->rss_key_len != vport->rss_key_size) {
+		PMD_INIT_LOG(ERR, "Invalid RSS key length in RSS configuration, should be %d",
+			     vport->rss_key_size);
+		return -EINVAL;
+	} else {
+		rte_memcpy(vport->rss_key, rss_conf->rss_key,
+			   vport->rss_key_size);
+	}
+
+	for (i = 0; i < vport->rss_lut_size; i++)
+		vport->rss_lut[i] = i % nb_q;
+
+	vport->rss_hf = IDPF_DEFAULT_RSS_HASH_EXPANDED;
+
+	ret = idpf_config_rss(vport);
+	if (ret != 0)
+		PMD_INIT_LOG(ERR, "Failed to configure RSS");
+
+	return ret;
+}
+
 static int
 cpfl_dev_configure(struct rte_eth_dev *dev)
 {
 	struct idpf_vport *vport = dev->data->dev_private;
 	struct rte_eth_conf *conf = &dev->data->dev_conf;
+	struct idpf_adapter *adapter = vport->adapter;
+	int ret;
 
 	if (conf->link_speeds & RTE_ETH_LINK_SPEED_FIXED) {
 		PMD_INIT_LOG(ERR, "Setting link speed is not supported");
@@ -205,6 +245,17 @@ cpfl_dev_configure(struct rte_eth_dev *dev)
 		return -ENOTSUP;
 	}
 
+	if (adapter->caps.rss_caps != 0 && dev->data->nb_rx_queues != 0) {
+		ret = cpfl_init_rss(vport);
+		if (ret != 0) {
+			PMD_INIT_LOG(ERR, "Failed to init rss");
+			return ret;
+		}
+	} else {
+		PMD_INIT_LOG(ERR, "RSS is not supported.");
+		return -1;
+	}
+
 	vport->max_pkt_len =
 		(dev->data->mtu == 0) ? CPFL_DEFAULT_MTU : dev->data->mtu +
 		CPFL_ETH_OVERHEAD;
diff --git a/drivers/net/cpfl/cpfl_ethdev.h b/drivers/net/cpfl/cpfl_ethdev.h
index 9ae543c2ad..0d60ee3aed 100644
--- a/drivers/net/cpfl/cpfl_ethdev.h
+++ b/drivers/net/cpfl/cpfl_ethdev.h
@@ -36,6 +36,21 @@
 #define CPFL_ETH_OVERHEAD \
 	(RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN + CPFL_VLAN_TAG_SIZE * 2)
 
+#define CPFL_RSS_OFFLOAD_ALL (				\
+		RTE_ETH_RSS_IPV4                |	\
+		RTE_ETH_RSS_FRAG_IPV4           |	\
+		RTE_ETH_RSS_NONFRAG_IPV4_TCP    |	\
+		RTE_ETH_RSS_NONFRAG_IPV4_UDP    |	\
+		RTE_ETH_RSS_NONFRAG_IPV4_SCTP   |	\
+		RTE_ETH_RSS_NONFRAG_IPV4_OTHER  |	\
+		RTE_ETH_RSS_IPV6                |	\
+		RTE_ETH_RSS_FRAG_IPV6           |	\
+		RTE_ETH_RSS_NONFRAG_IPV6_TCP    |	\
+		RTE_ETH_RSS_NONFRAG_IPV6_UDP    |	\
+		RTE_ETH_RSS_NONFRAG_IPV6_SCTP   |	\
+		RTE_ETH_RSS_NONFRAG_IPV6_OTHER  |	\
+		RTE_ETH_RSS_L2_PAYLOAD)
+
 #define CPFL_ADAPTER_NAME_LEN	(PCI_PRI_STR_SIZE + 1)
 
 #define CPFL_ALARM_INTERVAL	50000 /* us */
-- 
2.25.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v4 13/21] net/cpfl: support Rx offloading
  2023-01-18  7:57   ` [PATCH v4 00/21] add support for cpfl PMD in DPDK Mingxia Liu
                       ` (11 preceding siblings ...)
  2023-01-18  7:57     ` [PATCH v4 12/21] net/cpfl: support RSS Mingxia Liu
@ 2023-01-18  7:57     ` Mingxia Liu
  2023-01-18  7:57     ` [PATCH v4 14/21] net/cpfl: support Tx offloading Mingxia Liu
                       ` (8 subsequent siblings)
  21 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-01-18  7:57 UTC (permalink / raw)
  To: dev, qi.z.zhang, jingjing.wu, beilei.xing; +Cc: Mingxia Liu

Add Rx offloading support:
 - support CHKSUM and RSS offload for split queue model
 - support CHKSUM offload for single queue model
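
For illustration (not part of the diff), how an application consumes the
checksum verdicts after enabling the offloads in rxmode.offloads; the
helper is hypothetical:

	#include <stdbool.h>
	#include <rte_ethdev.h>
	#include <rte_mbuf.h>

	/* After rx_burst, the L4 checksum verdict is reported in
	 * ol_flags when RTE_ETH_RX_OFFLOAD_*_CKSUM is enabled.
	 */
	static bool
	l4_cksum_ok(const struct rte_mbuf *mb)
	{
		return (mb->ol_flags & RTE_MBUF_F_RX_L4_CKSUM_MASK) ==
		       RTE_MBUF_F_RX_L4_CKSUM_GOOD;
	}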

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 doc/guides/nics/features/cpfl.ini | 2 ++
 drivers/net/cpfl/cpfl_ethdev.c    | 6 ++++++
 2 files changed, 8 insertions(+)

diff --git a/doc/guides/nics/features/cpfl.ini b/doc/guides/nics/features/cpfl.ini
index 470ba81579..ee5948f444 100644
--- a/doc/guides/nics/features/cpfl.ini
+++ b/doc/guides/nics/features/cpfl.ini
@@ -8,6 +8,8 @@
 ;
 [Features]
 MTU update           = Y
+L3 checksum offload  = P
+L4 checksum offload  = P
 Linux                = Y
 x86-32               = Y
 x86-64               = Y
diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index 49b8861df0..28c4721c24 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -99,6 +99,12 @@ cpfl_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 
 	dev_info->flow_type_rss_offloads = CPFL_RSS_OFFLOAD_ALL;
 
+	dev_info->rx_offload_capa =
+		RTE_ETH_RX_OFFLOAD_IPV4_CKSUM           |
+		RTE_ETH_RX_OFFLOAD_UDP_CKSUM            |
+		RTE_ETH_RX_OFFLOAD_TCP_CKSUM            |
+		RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM;
+
 	dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 	dev_info->default_txconf = (struct rte_eth_txconf) {
-- 
2.25.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v4 14/21] net/cpfl: support Tx offloading
  2023-01-18  7:57   ` [PATCH v4 00/21] add support for cpfl PMD in DPDK Mingxia Liu
                       ` (12 preceding siblings ...)
  2023-01-18  7:57     ` [PATCH v4 13/21] net/cpfl: support Rx offloading Mingxia Liu
@ 2023-01-18  7:57     ` Mingxia Liu
  2023-01-18  7:57     ` [PATCH v4 15/21] net/cpfl: add AVX512 data path for single queue model Mingxia Liu
                       ` (7 subsequent siblings)
  21 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-01-18  7:57 UTC (permalink / raw)
  To: dev, qi.z.zhang, jingjing.wu, beilei.xing; +Cc: Mingxia Liu

Add Tx offloading support:
 - support TSO for single queue model and split queue model.
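
For illustration (not part of the diff), the per-mbuf metadata a TSO
request needs besides RTE_ETH_TX_OFFLOAD_TCP_TSO in txmode.offloads; the
helper is hypothetical and the header lengths must match the actual packet:

	#include <rte_ether.h>
	#include <rte_ip.h>
	#include <rte_tcp.h>
	#include <rte_mbuf.h>

	static void
	request_tso(struct rte_mbuf *mb, uint16_t mss)
	{
		mb->ol_flags |= RTE_MBUF_F_TX_TCP_SEG | RTE_MBUF_F_TX_IPV4 |
				RTE_MBUF_F_TX_IP_CKSUM;
		mb->l2_len = RTE_ETHER_HDR_LEN;
		mb->l3_len = sizeof(struct rte_ipv4_hdr);
		mb->l4_len = sizeof(struct rte_tcp_hdr);	/* no options */
		mb->tso_segsz = mss;
	}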

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 doc/guides/nics/features/cpfl.ini | 1 +
 drivers/net/cpfl/cpfl_ethdev.c    | 8 +++++++-
 2 files changed, 8 insertions(+), 1 deletion(-)

diff --git a/doc/guides/nics/features/cpfl.ini b/doc/guides/nics/features/cpfl.ini
index ee5948f444..f4e45c7c68 100644
--- a/doc/guides/nics/features/cpfl.ini
+++ b/doc/guides/nics/features/cpfl.ini
@@ -8,6 +8,7 @@
 ;
 [Features]
 MTU update           = Y
+TSO                  = P
 L3 checksum offload  = P
 L4 checksum offload  = P
 Linux                = Y
diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index 28c4721c24..20e16c815a 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -105,7 +105,13 @@ cpfl_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 		RTE_ETH_RX_OFFLOAD_TCP_CKSUM            |
 		RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM;
 
-	dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
+	dev_info->tx_offload_capa =
+		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM		|
+		RTE_ETH_TX_OFFLOAD_UDP_CKSUM		|
+		RTE_ETH_TX_OFFLOAD_TCP_CKSUM		|
+		RTE_ETH_TX_OFFLOAD_SCTP_CKSUM		|
+		RTE_ETH_TX_OFFLOAD_TCP_TSO		|
+		RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 	dev_info->default_txconf = (struct rte_eth_txconf) {
 		.tx_free_thresh = CPFL_DEFAULT_TX_FREE_THRESH,
-- 
2.25.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v4 15/21] net/cpfl: add AVX512 data path for single queue model
  2023-01-18  7:57   ` [PATCH v4 00/21] add support for cpfl PMD in DPDK Mingxia Liu
                       ` (13 preceding siblings ...)
  2023-01-18  7:57     ` [PATCH v4 14/21] net/cpfl: support Tx offloading Mingxia Liu
@ 2023-01-18  7:57     ` Mingxia Liu
  2023-01-18  7:57     ` [PATCH v4 16/21] net/cpfl: support timestamp offload Mingxia Liu
                       ` (6 subsequent siblings)
  21 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-01-18  7:57 UTC (permalink / raw)
  To: dev, qi.z.zhang, jingjing.wu, beilei.xing; +Cc: Mingxia Liu, Wenjun Wu

Add AVX512 vector data path support for the single queue model.
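
For illustration (not part of the diff), a sketch mirroring the selection
conditions described in the documentation below; it assumes an x86 build:

	#include <rte_vect.h>
	#include <rte_cpuflags.h>

	/* AVX512 paths need both CPU support and an EAL SIMD limit
	 * (--force-max-simd-bitwidth) that allows 512-bit vectors.
	 */
	static int
	avx512_path_possible(void)
	{
		return rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512F) == 1 &&
		       rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512BW) == 1 &&
		       rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_512;
	}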

Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 doc/guides/nics/cpfl.rst                |  24 +++++-
 drivers/net/cpfl/cpfl_ethdev.c          |   3 +-
 drivers/net/cpfl/cpfl_rxtx.c            |  85 ++++++++++++++++++++
 drivers/net/cpfl/cpfl_rxtx_vec_common.h | 100 ++++++++++++++++++++++++
 drivers/net/cpfl/meson.build            |  25 +++++-
 5 files changed, 234 insertions(+), 3 deletions(-)
 create mode 100644 drivers/net/cpfl/cpfl_rxtx_vec_common.h

diff --git a/doc/guides/nics/cpfl.rst b/doc/guides/nics/cpfl.rst
index 35a4bd44c6..1b275eb166 100644
--- a/doc/guides/nics/cpfl.rst
+++ b/doc/guides/nics/cpfl.rst
@@ -63,4 +63,26 @@ Runtime Config Options
 Driver compilation and testing
 ------------------------------
 
-Refer to the document :doc:`build_and_test` for details.
\ No newline at end of file
+Refer to the document :doc:`build_and_test` for details.
+
+Features
+--------
+
+Vector PMD
+~~~~~~~~~~
+
+Vector paths for Rx and Tx are selected automatically.
+The paths are chosen based on two conditions:
+
+- ``CPU``
+
+  On the x86 platform, the driver checks if the CPU supports AVX512.
+  If the CPU supports AVX512 and the EAL argument ``--force-max-simd-bitwidth``
+  is set to 512, AVX512 paths will be chosen.
+
+- ``Offload features``
+
+  The supported HW offload features are described in the document cpfl.ini.
+  A value "P" means the offload feature is not supported by the vector path.
+  If any unsupported feature is used, the cpfl vector PMD is disabled
+  and the scalar paths are chosen.
diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index 20e16c815a..dcff55e5b5 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -111,7 +111,8 @@ cpfl_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 		RTE_ETH_TX_OFFLOAD_TCP_CKSUM		|
 		RTE_ETH_TX_OFFLOAD_SCTP_CKSUM		|
 		RTE_ETH_TX_OFFLOAD_TCP_TSO		|
-		RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
+		RTE_ETH_TX_OFFLOAD_MULTI_SEGS		|
+		RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 	dev_info->default_txconf = (struct rte_eth_txconf) {
 		.tx_free_thresh = CPFL_DEFAULT_TX_FREE_THRESH,
diff --git a/drivers/net/cpfl/cpfl_rxtx.c b/drivers/net/cpfl/cpfl_rxtx.c
index 8724d391ad..6b5ea46a7b 100644
--- a/drivers/net/cpfl/cpfl_rxtx.c
+++ b/drivers/net/cpfl/cpfl_rxtx.c
@@ -8,6 +8,7 @@
 
 #include "cpfl_ethdev.h"
 #include "cpfl_rxtx.h"
+#include "cpfl_rxtx_vec_common.h"
 
 static uint64_t
 cpfl_rx_offload_convert(uint64_t offload)
@@ -739,22 +740,106 @@ void
 cpfl_set_rx_function(struct rte_eth_dev *dev)
 {
 	struct idpf_vport *vport = dev->data->dev_private;
+#ifdef RTE_ARCH_X86
+	struct idpf_rx_queue *rxq;
+	int i;
+
+	if (cpfl_rx_vec_dev_check_default(dev) == CPFL_VECTOR_PATH &&
+	    rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_128) {
+		vport->rx_vec_allowed = true;
+
+		if (rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_512)
+#ifdef CC_AVX512_SUPPORT
+			if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512F) == 1 &&
+			    rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512BW) == 1 &&
+			    rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512DQ))
+				vport->rx_use_avx512 = true;
+#else
+		PMD_DRV_LOG(NOTICE,
+			    "AVX512 is not supported in build env");
+#endif /* CC_AVX512_SUPPORT */
+	} else {
+		vport->rx_vec_allowed = false;
+	}
+#endif /* RTE_ARCH_X86 */
+
+#ifdef RTE_ARCH_X86
+	if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
+		dev->rx_pkt_burst = idpf_splitq_recv_pkts;
+	} else {
+		if (vport->rx_vec_allowed) {
+			for (i = 0; i < dev->data->nb_rx_queues; i++) {
+				rxq = dev->data->rx_queues[i];
+				(void)idpf_singleq_rx_vec_setup(rxq);
+			}
+#ifdef CC_AVX512_SUPPORT
+			if (vport->rx_use_avx512) {
+				dev->rx_pkt_burst = idpf_singleq_recv_pkts_avx512;
+				return;
+			}
+#endif /* CC_AVX512_SUPPORT */
+		}
 
+		dev->rx_pkt_burst = idpf_singleq_recv_pkts;
+	}
+#else
 	if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT)
 		dev->rx_pkt_burst = idpf_splitq_recv_pkts;
 	else
 		dev->rx_pkt_burst = idpf_singleq_recv_pkts;
+#endif /* RTE_ARCH_X86 */
 }
 
 void
 cpfl_set_tx_function(struct rte_eth_dev *dev)
 {
 	struct idpf_vport *vport = dev->data->dev_private;
+#ifdef RTE_ARCH_X86
+#ifdef CC_AVX512_SUPPORT
+	struct idpf_tx_queue *txq;
+	int i;
+#endif /* CC_AVX512_SUPPORT */
+
+	if (cpfl_tx_vec_dev_check_default(dev) == CPFL_VECTOR_PATH &&
+	    rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_128) {
+		vport->tx_vec_allowed = true;
+		if (rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_512)
+#ifdef CC_AVX512_SUPPORT
+		{
+			if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512F) == 1 &&
+			    rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512BW) == 1)
+				vport->tx_use_avx512 = true;
+			if (vport->tx_use_avx512) {
+				for (i = 0; i < dev->data->nb_tx_queues; i++) {
+					txq = dev->data->tx_queues[i];
+					idpf_tx_vec_setup_avx512(txq);
+				}
+			}
+		}
+#else
+		PMD_DRV_LOG(NOTICE,
+			    "AVX512 is not supported in build env");
+#endif /* CC_AVX512_SUPPORT */
+	} else {
+		vport->tx_vec_allowed = false;
+	}
+#endif /* RTE_ARCH_X86 */
 
 	if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
 		dev->tx_pkt_burst = idpf_splitq_xmit_pkts;
 		dev->tx_pkt_prepare = idpf_prep_pkts;
 	} else {
+#ifdef RTE_ARCH_X86
+		if (vport->tx_vec_allowed) {
+#ifdef CC_AVX512_SUPPORT
+			if (vport->tx_use_avx512) {
+				dev->tx_pkt_burst = idpf_singleq_xmit_pkts_avx512;
+				dev->tx_pkt_prepare = idpf_prep_pkts;
+				return;
+			}
+#endif /* CC_AVX512_SUPPORT */
+		}
+#endif /* RTE_ARCH_X86 */
 		dev->tx_pkt_burst = idpf_singleq_xmit_pkts;
 		dev->tx_pkt_prepare = idpf_prep_pkts;
 	}
diff --git a/drivers/net/cpfl/cpfl_rxtx_vec_common.h b/drivers/net/cpfl/cpfl_rxtx_vec_common.h
new file mode 100644
index 0000000000..503bc87f21
--- /dev/null
+++ b/drivers/net/cpfl/cpfl_rxtx_vec_common.h
@@ -0,0 +1,100 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2022 Intel Corporation
+ */
+
+#ifndef _CPFL_RXTX_VEC_COMMON_H_
+#define _CPFL_RXTX_VEC_COMMON_H_
+#include <stdint.h>
+#include <ethdev_driver.h>
+#include <rte_malloc.h>
+
+#include "cpfl_ethdev.h"
+#include "cpfl_rxtx.h"
+
+#ifndef __INTEL_COMPILER
+#pragma GCC diagnostic ignored "-Wcast-qual"
+#endif
+
+#define CPFL_SCALAR_PATH		0
+#define CPFL_VECTOR_PATH		1
+#define CPFL_RX_NO_VECTOR_FLAGS (		\
+		RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |	\
+		RTE_ETH_RX_OFFLOAD_UDP_CKSUM |	\
+		RTE_ETH_RX_OFFLOAD_TCP_CKSUM |	\
+		RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |	\
+		RTE_ETH_RX_OFFLOAD_TIMESTAMP)
+#define CPFL_TX_NO_VECTOR_FLAGS (		\
+		RTE_ETH_TX_OFFLOAD_TCP_TSO |	\
+		RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
+
+static inline int
+cpfl_rx_vec_queue_default(struct idpf_rx_queue *rxq)
+{
+	if (rxq == NULL)
+		return CPFL_SCALAR_PATH;
+
+	if (rte_is_power_of_2(rxq->nb_rx_desc) == 0)
+		return CPFL_SCALAR_PATH;
+
+	if (rxq->rx_free_thresh < IDPF_VPMD_RX_MAX_BURST)
+		return CPFL_SCALAR_PATH;
+
+	if ((rxq->nb_rx_desc % rxq->rx_free_thresh) != 0)
+		return CPFL_SCALAR_PATH;
+
+	if ((rxq->offloads & CPFL_RX_NO_VECTOR_FLAGS) != 0)
+		return CPFL_SCALAR_PATH;
+
+	return CPFL_VECTOR_PATH;
+}
+
+static inline int
+cpfl_tx_vec_queue_default(struct idpf_tx_queue *txq)
+{
+	if (txq == NULL)
+		return CPFL_SCALAR_PATH;
+
+	if (txq->rs_thresh < IDPF_VPMD_TX_MAX_BURST ||
+	    (txq->rs_thresh & 3) != 0)
+		return CPFL_SCALAR_PATH;
+
+	if ((txq->offloads & CPFL_TX_NO_VECTOR_FLAGS) != 0)
+		return CPFL_SCALAR_PATH;
+
+	return CPFL_VECTOR_PATH;
+}
+
+static inline int
+cpfl_rx_vec_dev_check_default(struct rte_eth_dev *dev)
+{
+	struct idpf_rx_queue *rxq;
+	int i, ret = 0;
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		rxq = dev->data->rx_queues[i];
+		ret = (cpfl_rx_vec_queue_default(rxq));
+		if (ret == CPFL_SCALAR_PATH)
+			return CPFL_SCALAR_PATH;
+	}
+
+	return CPFL_VECTOR_PATH;
+}
+
+static inline int
+cpfl_tx_vec_dev_check_default(struct rte_eth_dev *dev)
+{
+	int i;
+	struct idpf_tx_queue *txq;
+	int ret = 0;
+
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		txq = dev->data->tx_queues[i];
+		ret = cpfl_tx_vec_queue_default(txq);
+		if (ret == CPFL_SCALAR_PATH)
+			return CPFL_SCALAR_PATH;
+	}
+
+	return CPFL_VECTOR_PATH;
+}
+
+#endif /*_CPFL_RXTX_VEC_COMMON_H_*/
diff --git a/drivers/net/cpfl/meson.build b/drivers/net/cpfl/meson.build
index 3ccee15703..40ed8dbb7b 100644
--- a/drivers/net/cpfl/meson.build
+++ b/drivers/net/cpfl/meson.build
@@ -7,9 +7,32 @@ if is_windows
     subdir_done()
 endif
 
+if dpdk_conf.get('RTE_IOVA_AS_PA') == 0
+    build = false
+    reason = 'driver does not support disabling IOVA as PA mode'
+    subdir_done()
+endif
+
 deps += ['common_idpf']
 
 sources = files(
         'cpfl_ethdev.c',
         'cpfl_rxtx.c',
-)
\ No newline at end of file
+)
+
+if arch_subdir == 'x86'
+    cpfl_avx512_cpu_support = (
+        cc.get_define('__AVX512F__', args: machine_args) != '' and
+        cc.get_define('__AVX512BW__', args: machine_args) != ''
+    )
+
+    cpfl_avx512_cc_support = (
+        not machine_args.contains('-mno-avx512f') and
+        cc.has_argument('-mavx512f') and
+        cc.has_argument('-mavx512bw')
+    )
+
+    if cpfl_avx512_cpu_support == true or cpfl_avx512_cc_support == true
+        cflags += ['-DCC_AVX512_SUPPORT']
+    endif
+endif
-- 
2.25.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v4 16/21] net/cpfl: support timestamp offload
  2023-01-18  7:57   ` [PATCH v4 00/21] add support for cpfl PMD in DPDK Mingxia Liu
                       ` (14 preceding siblings ...)
  2023-01-18  7:57     ` [PATCH v4 15/21] net/cpfl: add AVX512 data path for single queue model Mingxia Liu
@ 2023-01-18  7:57     ` Mingxia Liu
  2023-01-18  7:57     ` [PATCH v4 17/21] net/cpfl: add AVX512 data path for split queue model Mingxia Liu
                       ` (5 subsequent siblings)
  21 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-01-18  7:57 UTC (permalink / raw)
  To: dev, qi.z.zhang, jingjing.wu, beilei.xing; +Cc: Mingxia Liu

Add support for timestamp offload.
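
For illustration (not part of the diff), reading the Rx timestamp from the
dynamic mbuf field; the helper names are hypothetical:

	#include <rte_mbuf.h>
	#include <rte_mbuf_dyn.h>

	static int ts_offset;
	static uint64_t ts_flag;

	/* Register the shared Rx timestamp dynfield/dynflag once. */
	static int
	ts_init(void)
	{
		return rte_mbuf_dyn_rx_timestamp_register(&ts_offset, &ts_flag);
	}

	static uint64_t
	ts_read(struct rte_mbuf *mb)
	{
		if ((mb->ol_flags & ts_flag) == 0)
			return 0;
		return *RTE_MBUF_DYNFIELD(mb, ts_offset, rte_mbuf_timestamp_t *);
	}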

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 doc/guides/nics/features/cpfl.ini | 1 +
 drivers/net/cpfl/cpfl_ethdev.c    | 3 ++-
 drivers/net/cpfl/cpfl_rxtx.c      | 7 +++++++
 3 files changed, 10 insertions(+), 1 deletion(-)

diff --git a/doc/guides/nics/features/cpfl.ini b/doc/guides/nics/features/cpfl.ini
index f4e45c7c68..c1209df3e5 100644
--- a/doc/guides/nics/features/cpfl.ini
+++ b/doc/guides/nics/features/cpfl.ini
@@ -11,6 +11,7 @@ MTU update           = Y
 TSO                  = P
 L3 checksum offload  = P
 L4 checksum offload  = P
+Timestamp offload    = P
 Linux                = Y
 x86-32               = Y
 x86-64               = Y
diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index dcff55e5b5..7b28571cfc 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -103,7 +103,8 @@ cpfl_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 		RTE_ETH_RX_OFFLOAD_IPV4_CKSUM           |
 		RTE_ETH_RX_OFFLOAD_UDP_CKSUM            |
 		RTE_ETH_RX_OFFLOAD_TCP_CKSUM            |
-		RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM;
+		RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM     |
+		RTE_ETH_RX_OFFLOAD_TIMESTAMP;
 
 	dev_info->tx_offload_capa =
 		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM		|
diff --git a/drivers/net/cpfl/cpfl_rxtx.c b/drivers/net/cpfl/cpfl_rxtx.c
index 6b5ea46a7b..559b10cb85 100644
--- a/drivers/net/cpfl/cpfl_rxtx.c
+++ b/drivers/net/cpfl/cpfl_rxtx.c
@@ -516,6 +516,13 @@ cpfl_rx_queue_init(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 		return -EINVAL;
 	}
 
+	err = idpf_register_ts_mbuf(rxq);
+	if (err != 0) {
+		PMD_DRV_LOG(ERR, "fail to register timestamp mbuf %u",
+					rx_queue_id);
+		return -EIO;
+	}
+
 	if (rxq->bufq1 == NULL) {
 		/* Single queue */
 		err = idpf_alloc_single_rxq_mbufs(rxq);
-- 
2.25.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v4 17/21] net/cpfl: add AVX512 data path for split queue model
  2023-01-18  7:57   ` [PATCH v4 00/21] add support for cpfl PMD in DPDK Mingxia Liu
                       ` (15 preceding siblings ...)
  2023-01-18  7:57     ` [PATCH v4 16/21] net/cpfl: support timestamp offload Mingxia Liu
@ 2023-01-18  7:57     ` Mingxia Liu
  2023-01-18  7:57     ` [PATCH v4 18/21] net/cpfl: add hw statistics Mingxia Liu
                       ` (4 subsequent siblings)
  21 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-01-18  7:57 UTC (permalink / raw)
  To: dev, qi.z.zhang, jingjing.wu, beilei.xing; +Cc: Mingxia Liu, Wenjun Wu

Add AVX512 data path support for the split queue model.

Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/net/cpfl/cpfl_rxtx.c            | 25 +++++++++++++++++++++++++
 drivers/net/cpfl/cpfl_rxtx_vec_common.h | 19 +++++++++++++++++--
 2 files changed, 42 insertions(+), 2 deletions(-)

diff --git a/drivers/net/cpfl/cpfl_rxtx.c b/drivers/net/cpfl/cpfl_rxtx.c
index 559b10cb85..5cd25278d6 100644
--- a/drivers/net/cpfl/cpfl_rxtx.c
+++ b/drivers/net/cpfl/cpfl_rxtx.c
@@ -772,6 +772,20 @@ cpfl_set_rx_function(struct rte_eth_dev *dev)
 
 #ifdef RTE_ARCH_X86
 	if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
+#ifdef RTE_ARCH_X86
+		if (vport->rx_vec_allowed) {
+			for (i = 0; i < dev->data->nb_rx_queues; i++) {
+				rxq = dev->data->rx_queues[i];
+				(void)idpf_splitq_rx_vec_setup(rxq);
+			}
+#ifdef CC_AVX512_SUPPORT
+			if (vport->rx_use_avx512) {
+				dev->rx_pkt_burst = idpf_splitq_recv_pkts_avx512;
+				return;
+			}
+#endif
+		}
+#endif
 		dev->rx_pkt_burst = idpf_splitq_recv_pkts;
 	} else {
 		if (vport->rx_vec_allowed) {
@@ -833,6 +847,17 @@ cpfl_set_tx_function(struct rte_eth_dev *dev)
 #endif /* RTE_ARCH_X86 */
 
 	if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
+#ifdef RTE_ARCH_X86
+		if (vport->tx_vec_allowed) {
+#ifdef CC_AVX512_SUPPORT
+			if (vport->tx_use_avx512) {
+				dev->tx_pkt_burst = idpf_splitq_xmit_pkts_avx512;
+				dev->tx_pkt_prepare = idpf_prep_pkts;
+				return;
+			}
+#endif
+		}
+#endif
 		dev->tx_pkt_burst = idpf_splitq_xmit_pkts;
 		dev->tx_pkt_prepare = idpf_prep_pkts;
 	} else {
diff --git a/drivers/net/cpfl/cpfl_rxtx_vec_common.h b/drivers/net/cpfl/cpfl_rxtx_vec_common.h
index 503bc87f21..1f01cd40c5 100644
--- a/drivers/net/cpfl/cpfl_rxtx_vec_common.h
+++ b/drivers/net/cpfl/cpfl_rxtx_vec_common.h
@@ -64,15 +64,30 @@ cpfl_tx_vec_queue_default(struct idpf_tx_queue *txq)
 	return CPFL_VECTOR_PATH;
 }
 
+static inline int
+cpfl_rx_splitq_vec_default(struct idpf_rx_queue *rxq)
+{
+	if (rxq->bufq2->rx_buf_len < rxq->max_pkt_len)
+		return CPFL_SCALAR_PATH;
+
+	return CPFL_VECTOR_PATH;
+}
+
 static inline int
 cpfl_rx_vec_dev_check_default(struct rte_eth_dev *dev)
 {
+	struct idpf_vport *vport = dev->data->dev_private;
 	struct idpf_rx_queue *rxq;
-	int i, ret = 0;
+	int i, default_ret, splitq_ret, ret = CPFL_SCALAR_PATH;
 
 	for (i = 0; i < dev->data->nb_rx_queues; i++) {
 		rxq = dev->data->rx_queues[i];
-		ret = (cpfl_rx_vec_queue_default(rxq));
+		default_ret = cpfl_rx_vec_queue_default(rxq);
+		if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
+			splitq_ret = cpfl_rx_splitq_vec_default(rxq);
+			ret = splitq_ret && default_ret;
+		} else
+			ret = default_ret;
 		if (ret == CPFL_SCALAR_PATH)
 			return CPFL_SCALAR_PATH;
 	}
-- 
2.25.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v4 18/21] net/cpfl: add hw statistics
  2023-01-18  7:57   ` [PATCH v4 00/21] add support for cpfl PMD in DPDK Mingxia Liu
                       ` (16 preceding siblings ...)
  2023-01-18  7:57     ` [PATCH v4 17/21] net/cpfl: add AVX512 data path for split queue model Mingxia Liu
@ 2023-01-18  7:57     ` Mingxia Liu
  2023-01-18  7:57     ` [PATCH v4 19/21] net/cpfl: add RSS set/get ops Mingxia Liu
                       ` (3 subsequent siblings)
  21 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-01-18  7:57 UTC (permalink / raw)
  To: dev, qi.z.zhang, jingjing.wu, beilei.xing; +Cc: Mingxia Liu

This patch adds hardware packet/byte statistics.
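
For illustration (not part of the diff), the counters surface through the
generic stats API; the helper is hypothetical:

	#include <stdio.h>
	#include <inttypes.h>
	#include <rte_ethdev.h>

	static int
	dump_and_clear_stats(uint16_t port_id)
	{
		struct rte_eth_stats st;
		int ret = rte_eth_stats_get(port_id, &st);	/* stats_get */

		if (ret != 0)
			return ret;
		printf("ipackets=%" PRIu64 " imissed=%" PRIu64 "\n",
		       st.ipackets, st.imissed);
		return rte_eth_stats_reset(port_id);		/* stats_reset */
	}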

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/net/cpfl/cpfl_ethdev.c | 87 ++++++++++++++++++++++++++++++++++
 drivers/net/idpf/idpf_ethdev.c |  3 +-
 2 files changed, 89 insertions(+), 1 deletion(-)

diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index 7b28571cfc..70c7d5daba 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -178,6 +178,86 @@ cpfl_dev_supported_ptypes_get(struct rte_eth_dev *dev __rte_unused)
 	return ptypes;
 }
 
+static uint64_t
+cpfl_get_mbuf_alloc_failed_stats(struct rte_eth_dev *dev)
+{
+	uint64_t mbuf_alloc_failed = 0;
+	struct idpf_rx_queue *rxq;
+	int i = 0;
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		rxq = dev->data->rx_queues[i];
+		mbuf_alloc_failed += rte_atomic64_read(&(rxq->rx_stats.mbuf_alloc_failed));
+	}
+
+	return mbuf_alloc_failed;
+}
+
+static int
+cpfl_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
+{
+	struct idpf_vport *vport =
+		(struct idpf_vport *)dev->data->dev_private;
+	struct virtchnl2_vport_stats *pstats = NULL;
+	int ret;
+
+	ret = idpf_query_stats(vport, &pstats);
+	if (ret == 0) {
+		uint8_t crc_stats_len = (dev->data->dev_conf.rxmode.offloads &
+					 RTE_ETH_RX_OFFLOAD_KEEP_CRC) ? 0 :
+					 RTE_ETHER_CRC_LEN;
+
+		idpf_update_stats(&vport->eth_stats_offset, pstats);
+		stats->ipackets = pstats->rx_unicast + pstats->rx_multicast +
+				pstats->rx_broadcast - pstats->rx_discards;
+		stats->opackets = pstats->tx_broadcast + pstats->tx_multicast +
+						pstats->tx_unicast;
+		stats->imissed = pstats->rx_discards;
+		stats->oerrors = pstats->tx_errors + pstats->tx_discards;
+		stats->ibytes = pstats->rx_bytes;
+		stats->ibytes -= stats->ipackets * crc_stats_len;
+		stats->obytes = pstats->tx_bytes;
+
+		dev->data->rx_mbuf_alloc_failed = cpfl_get_mbuf_alloc_failed_stats(dev);
+		stats->rx_nombuf = dev->data->rx_mbuf_alloc_failed;
+	} else {
+		PMD_DRV_LOG(ERR, "Get statistics failed");
+	}
+	return ret;
+}
+
+static void
+cpfl_reset_mbuf_alloc_failed_stats(struct rte_eth_dev *dev)
+{
+	struct idpf_rx_queue *rxq;
+	int i;
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		rxq = dev->data->rx_queues[i];
+		rte_atomic64_set(&(rxq->rx_stats.mbuf_alloc_failed), 0);
+	}
+}
+
+static int
+cpfl_dev_stats_reset(struct rte_eth_dev *dev)
+{
+	struct idpf_vport *vport =
+		(struct idpf_vport *)dev->data->dev_private;
+	struct virtchnl2_vport_stats *pstats = NULL;
+	int ret;
+
+	ret = idpf_query_stats(vport, &pstats);
+	if (ret != 0)
+		return ret;
+
+	/* set stats offset base on current values */
+	vport->eth_stats_offset = *pstats;
+
+	cpfl_reset_mbuf_alloc_failed_stats(dev);
+
+	return 0;
+}
+
 static int
 cpfl_init_rss(struct idpf_vport *vport)
 {
@@ -365,6 +445,11 @@ cpfl_dev_start(struct rte_eth_dev *dev)
 		goto err_vport;
 	}
 
+	if (cpfl_dev_stats_reset(dev)) {
+		PMD_DRV_LOG(ERR, "Failed to reset stats");
+		goto err_vport;
+	}
+
 	vport->stopped = 0;
 
 	return 0;
@@ -766,6 +851,8 @@ static const struct eth_dev_ops cpfl_eth_dev_ops = {
 	.tx_queue_release		= cpfl_dev_tx_queue_release,
 	.mtu_set			= cpfl_dev_mtu_set,
 	.dev_supported_ptypes_get	= cpfl_dev_supported_ptypes_get,
+	.stats_get			= cpfl_dev_stats_get,
+	.stats_reset			= cpfl_dev_stats_reset,
 };
 
 static uint16_t
diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index bcd15db3c5..b2cf959ee7 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -824,13 +824,14 @@ idpf_dev_start(struct rte_eth_dev *dev)
 
 	if (idpf_dev_stats_reset(dev)) {
 		PMD_DRV_LOG(ERR, "Failed to reset stats");
-		goto err_vport;
+		goto err_stats_reset;
 	}
 
 	vport->stopped = 0;
 
 	return 0;
 
+err_stats_reset:
 err_vport:
 	idpf_stop_queues(dev);
 err_startq:
-- 
2.25.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v4 19/21] net/cpfl: add RSS set/get ops
  2023-01-18  7:57   ` [PATCH v4 00/21] add support for cpfl PMD in DPDK Mingxia Liu
                       ` (17 preceding siblings ...)
  2023-01-18  7:57     ` [PATCH v4 18/21] net/cpfl: add hw statistics Mingxia Liu
@ 2023-01-18  7:57     ` Mingxia Liu
  2023-01-18  7:57     ` [PATCH v4 20/21] net/cpfl: support single q scatter RX datapath Mingxia Liu
                       ` (2 subsequent siblings)
  21 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-01-18  7:57 UTC (permalink / raw)
  To: dev, qi.z.zhang, jingjing.wu, beilei.xing; +Cc: Mingxia Liu

Add support for these device ops:
- rss_reta_update
- rss_reta_query
- rss_hash_update
- rss_hash_conf_get
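
For illustration (not part of the diff), a round-robin RETA update via the
generic API; the helper is hypothetical and assumes reta_size is a
multiple of RTE_ETH_RETA_GROUP_SIZE, as reported in dev_info.reta_size:

	#include <string.h>
	#include <rte_ethdev.h>

	static int
	spread_reta(uint16_t port_id, uint16_t reta_size, uint16_t nb_q)
	{
		struct rte_eth_rss_reta_entry64 reta[reta_size /
						     RTE_ETH_RETA_GROUP_SIZE];
		uint16_t i;

		memset(reta, 0, sizeof(reta));
		for (i = 0; i < reta_size; i++) {
			uint16_t g = i / RTE_ETH_RETA_GROUP_SIZE;
			uint16_t b = i % RTE_ETH_RETA_GROUP_SIZE;

			reta[g].mask |= 1ULL << b;
			reta[g].reta[b] = i % nb_q;	/* spread over queues */
		}
		return rte_eth_dev_rss_reta_update(port_id, reta, reta_size);
	}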

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/net/cpfl/cpfl_ethdev.c | 303 +++++++++++++++++++++++++++++++++
 1 file changed, 303 insertions(+)

diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index 70c7d5daba..1d6902a3bd 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -30,6 +30,56 @@ static const char * const cpfl_valid_args[] = {
 	NULL
 };
 
+static const uint64_t cpfl_map_hena_rss[] = {
+	[IDPF_HASH_NONF_UNICAST_IPV4_UDP] =
+			RTE_ETH_RSS_NONFRAG_IPV4_UDP,
+	[IDPF_HASH_NONF_MULTICAST_IPV4_UDP] =
+			RTE_ETH_RSS_NONFRAG_IPV4_UDP,
+	[IDPF_HASH_NONF_IPV4_UDP] =
+			RTE_ETH_RSS_NONFRAG_IPV4_UDP,
+	[IDPF_HASH_NONF_IPV4_TCP_SYN_NO_ACK] =
+			RTE_ETH_RSS_NONFRAG_IPV4_TCP,
+	[IDPF_HASH_NONF_IPV4_TCP] =
+			RTE_ETH_RSS_NONFRAG_IPV4_TCP,
+	[IDPF_HASH_NONF_IPV4_SCTP] =
+			RTE_ETH_RSS_NONFRAG_IPV4_SCTP,
+	[IDPF_HASH_NONF_IPV4_OTHER] =
+			RTE_ETH_RSS_NONFRAG_IPV4_OTHER,
+	[IDPF_HASH_FRAG_IPV4] = RTE_ETH_RSS_FRAG_IPV4,
+
+	/* IPv6 */
+	[IDPF_HASH_NONF_UNICAST_IPV6_UDP] =
+			RTE_ETH_RSS_NONFRAG_IPV6_UDP,
+	[IDPF_HASH_NONF_MULTICAST_IPV6_UDP] =
+			RTE_ETH_RSS_NONFRAG_IPV6_UDP,
+	[IDPF_HASH_NONF_IPV6_UDP] =
+			RTE_ETH_RSS_NONFRAG_IPV6_UDP,
+	[IDPF_HASH_NONF_IPV6_TCP_SYN_NO_ACK] =
+			RTE_ETH_RSS_NONFRAG_IPV6_TCP,
+	[IDPF_HASH_NONF_IPV6_TCP] =
+			RTE_ETH_RSS_NONFRAG_IPV6_TCP,
+	[IDPF_HASH_NONF_IPV6_SCTP] =
+			RTE_ETH_RSS_NONFRAG_IPV6_SCTP,
+	[IDPF_HASH_NONF_IPV6_OTHER] =
+			RTE_ETH_RSS_NONFRAG_IPV6_OTHER,
+	[IDPF_HASH_FRAG_IPV6] = RTE_ETH_RSS_FRAG_IPV6,
+
+	/* L2 Payload */
+	[IDPF_HASH_L2_PAYLOAD] = RTE_ETH_RSS_L2_PAYLOAD
+};
+
+static const uint64_t cpfl_ipv4_rss = RTE_ETH_RSS_NONFRAG_IPV4_UDP |
+			  RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+			  RTE_ETH_RSS_NONFRAG_IPV4_SCTP |
+			  RTE_ETH_RSS_NONFRAG_IPV4_OTHER |
+			  RTE_ETH_RSS_FRAG_IPV4;
+
+static const uint64_t cpfl_ipv6_rss = RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+			  RTE_ETH_RSS_NONFRAG_IPV6_TCP |
+			  RTE_ETH_RSS_NONFRAG_IPV6_SCTP |
+			  RTE_ETH_RSS_NONFRAG_IPV6_OTHER |
+			  RTE_ETH_RSS_FRAG_IPV6;
+
 static int
 cpfl_dev_link_update(struct rte_eth_dev *dev,
 		     __rte_unused int wait_to_complete)
@@ -97,6 +147,9 @@ cpfl_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->max_mtu = vport->max_mtu;
 	dev_info->min_mtu = RTE_ETHER_MIN_MTU;
 
+	dev_info->hash_key_size = vport->rss_key_size;
+	dev_info->reta_size = vport->rss_lut_size;
+
 	dev_info->flow_type_rss_offloads = CPFL_RSS_OFFLOAD_ALL;
 
 	dev_info->rx_offload_capa =
@@ -258,6 +311,54 @@ cpfl_dev_stats_reset(struct rte_eth_dev *dev)
 	return 0;
 }
 
+static int cpfl_config_rss_hf(struct idpf_vport *vport, uint64_t rss_hf)
+{
+	uint64_t hena = 0, valid_rss_hf = 0;
+	int ret = 0;
+	uint16_t i;
+
+	/**
+	 * RTE_ETH_RSS_IPV4 and RTE_ETH_RSS_IPV6 can be considered as 2
+	 * generalizations of all other IPv4 and IPv6 RSS types.
+	 */
+	if (rss_hf & RTE_ETH_RSS_IPV4)
+		rss_hf |= cpfl_ipv4_rss;
+
+	if (rss_hf & RTE_ETH_RSS_IPV6)
+		rss_hf |= cpfl_ipv6_rss;
+
+	for (i = 0; i < RTE_DIM(cpfl_map_hena_rss); i++) {
+		uint64_t bit = BIT_ULL(i);
+
+		if (cpfl_map_hena_rss[i] & rss_hf) {
+			valid_rss_hf |= cpfl_map_hena_rss[i];
+			hena |= bit;
+		}
+	}
+
+	vport->rss_hf = hena;
+
+	ret = idpf_vc_set_rss_hash(vport);
+	if (ret != 0) {
+		PMD_DRV_LOG(WARNING,
+			    "fail to set RSS offload types, ret: %d", ret);
+		return ret;
+	}
+
+	if (valid_rss_hf & cpfl_ipv4_rss)
+		valid_rss_hf |= rss_hf & RTE_ETH_RSS_IPV4;
+
+	if (valid_rss_hf & cpfl_ipv6_rss)
+		valid_rss_hf |= rss_hf & RTE_ETH_RSS_IPV6;
+
+	if (rss_hf & ~valid_rss_hf)
+		PMD_DRV_LOG(WARNING, "Unsupported rss_hf 0x%" PRIx64,
+			    rss_hf & ~valid_rss_hf);
+	vport->last_general_rss_hf = valid_rss_hf;
+
+	return ret;
+}
+
 static int
 cpfl_init_rss(struct idpf_vport *vport)
 {
@@ -294,6 +395,204 @@ cpfl_init_rss(struct idpf_vport *vport)
 	return ret;
 }
 
+static int
+cpfl_rss_reta_update(struct rte_eth_dev *dev,
+		     struct rte_eth_rss_reta_entry64 *reta_conf,
+		     uint16_t reta_size)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_adapter *adapter = vport->adapter;
+	uint16_t idx, shift;
+	uint32_t *lut;
+	int ret = 0;
+	uint16_t i;
+
+	if (adapter->caps.rss_caps == 0 || dev->data->nb_rx_queues == 0) {
+		PMD_DRV_LOG(DEBUG, "RSS is not supported");
+		return -ENOTSUP;
+	}
+
+	if (reta_size != vport->rss_lut_size) {
+		PMD_DRV_LOG(ERR, "The size of hash lookup table configured "
+				 "(%d) doesn't match the number of hardware can "
+				 "support (%d)",
+			    reta_size, vport->rss_lut_size);
+		return -EINVAL;
+	}
+
+	/* It MUST use the current LUT size to get the RSS lookup table,
+	 * otherwise it will fail with error code -100.
+	 */
+	lut = rte_zmalloc(NULL, reta_size * sizeof(uint32_t), 0);
+	if (!lut) {
+		PMD_DRV_LOG(ERR, "No memory can be allocated");
+		return -ENOMEM;
+	}
+	/* store the old lut table temporarily */
+	rte_memcpy(lut, vport->rss_lut, reta_size * sizeof(uint32_t));
+
+	for (i = 0; i < reta_size; i++) {
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
+		if (reta_conf[idx].mask & (1ULL << shift))
+			lut[i] = reta_conf[idx].reta[shift];
+	}
+
+	rte_memcpy(vport->rss_lut, lut, reta_size * sizeof(uint32_t));
+	/* send virtchnl ops to configure RSS */
+	ret = idpf_vc_set_rss_lut(vport);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to configure RSS lut");
+		goto out;
+	}
+out:
+	rte_free(lut);
+
+	return ret;
+}
+
+static int
+cpfl_rss_reta_query(struct rte_eth_dev *dev,
+		    struct rte_eth_rss_reta_entry64 *reta_conf,
+		    uint16_t reta_size)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_adapter *adapter = vport->adapter;
+	uint16_t idx, shift;
+	int ret = 0;
+	uint16_t i;
+
+	if (adapter->caps.rss_caps == 0 || dev->data->nb_rx_queues == 0) {
+		PMD_DRV_LOG(DEBUG, "RSS is not supported");
+		return -ENOTSUP;
+	}
+
+	if (reta_size != vport->rss_lut_size) {
+		PMD_DRV_LOG(ERR, "The size of hash lookup table configured "
+			"(%d) doesn't match the number of hardware can "
+			"support (%d)", reta_size, vport->rss_lut_size);
+		return -EINVAL;
+	}
+
+	ret = idpf_vc_get_rss_lut(vport);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "Failed to get RSS LUT");
+		return ret;
+	}
+
+	for (i = 0; i < reta_size; i++) {
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
+		if (reta_conf[idx].mask & (1ULL << shift))
+			reta_conf[idx].reta[shift] = vport->rss_lut[i];
+	}
+
+	return 0;
+}
+
+static int
+cpfl_rss_hash_update(struct rte_eth_dev *dev,
+		     struct rte_eth_rss_conf *rss_conf)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_adapter *adapter = vport->adapter;
+	int ret = 0;
+
+	if (adapter->caps.rss_caps == 0 || dev->data->nb_rx_queues == 0) {
+		PMD_DRV_LOG(DEBUG, "RSS is not supported");
+		return -ENOTSUP;
+	}
+
+	if (!rss_conf->rss_key || rss_conf->rss_key_len == 0) {
+		PMD_DRV_LOG(DEBUG, "No key to be configured");
+		goto skip_rss_key;
+	} else if (rss_conf->rss_key_len != vport->rss_key_size) {
+		PMD_DRV_LOG(ERR, "The size of hash key configured "
+				 "(%d) doesn't match the size of hardware can "
+				 "support (%d)",
+			    rss_conf->rss_key_len,
+			    vport->rss_key_size);
+		return -EINVAL;
+	}
+
+	rte_memcpy(vport->rss_key, rss_conf->rss_key,
+		   vport->rss_key_size);
+	ret = idpf_vc_set_rss_key(vport);
+	if (ret != 0) {
+		PMD_INIT_LOG(ERR, "Failed to configure RSS key");
+		return ret;
+	}
+
+skip_rss_key:
+	ret = cpfl_config_rss_hf(vport, rss_conf->rss_hf);
+	if (ret != 0) {
+		PMD_INIT_LOG(ERR, "Failed to configure RSS hash");
+		return ret;
+	}
+
+	return 0;
+}
+
+static uint64_t
+cpfl_map_general_rss_hf(uint64_t config_rss_hf, uint64_t last_general_rss_hf)
+{
+	uint64_t valid_rss_hf = 0;
+	uint16_t i;
+
+	for (i = 0; i < RTE_DIM(cpfl_map_hena_rss); i++) {
+		uint64_t bit = BIT_ULL(i);
+
+		if (bit & config_rss_hf)
+			valid_rss_hf |= cpfl_map_hena_rss[i];
+	}
+
+	if (valid_rss_hf & cpfl_ipv4_rss)
+		valid_rss_hf |= last_general_rss_hf & RTE_ETH_RSS_IPV4;
+
+	if (valid_rss_hf & cpfl_ipv6_rss)
+		valid_rss_hf |= last_general_rss_hf & RTE_ETH_RSS_IPV6;
+
+	return valid_rss_hf;
+}
+
+static int
+cpfl_rss_hash_conf_get(struct rte_eth_dev *dev,
+		       struct rte_eth_rss_conf *rss_conf)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_adapter *adapter = vport->adapter;
+	int ret = 0;
+
+	if (adapter->caps.rss_caps == 0 || dev->data->nb_rx_queues == 0) {
+		PMD_DRV_LOG(DEBUG, "RSS is not supported");
+		return -ENOTSUP;
+	}
+
+	ret = idpf_vc_get_rss_hash(vport);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "Failed to get RSS hf");
+		return ret;
+	}
+
+	rss_conf->rss_hf = cpfl_map_general_rss_hf(vport->rss_hf, vport->last_general_rss_hf);
+
+	if (!rss_conf->rss_key)
+		return 0;
+
+	ret = idpf_vc_get_rss_key(vport);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "Failed to get RSS key");
+		return ret;
+	}
+
+	if (rss_conf->rss_key_len > vport->rss_key_size)
+		rss_conf->rss_key_len = vport->rss_key_size;
+
+	rte_memcpy(rss_conf->rss_key, vport->rss_key, rss_conf->rss_key_len);
+
+	return 0;
+}
+
 static int
 cpfl_dev_configure(struct rte_eth_dev *dev)
 {
@@ -853,6 +1152,10 @@ static const struct eth_dev_ops cpfl_eth_dev_ops = {
 	.dev_supported_ptypes_get	= cpfl_dev_supported_ptypes_get,
 	.stats_get			= cpfl_dev_stats_get,
 	.stats_reset			= cpfl_dev_stats_reset,
+	.reta_update			= cpfl_rss_reta_update,
+	.reta_query			= cpfl_rss_reta_query,
+	.rss_hash_update		= cpfl_rss_hash_update,
+	.rss_hash_conf_get		= cpfl_rss_hash_conf_get,
 };
 
 static uint16_t
-- 
2.25.1

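For reference, a minimal application-side sketch that drives the RETA ops
added above through the generic ethdev API; the helper name, port id and
table sizing are illustrative assumptions, only the rte_eth_* calls are
the real API:

    #include <rte_ethdev.h>

    /* Rewrite the whole redirection table so flows are spread evenly over
     * nb_rxq queues.  Assumes reta_size <= 512 and a configured port. */
    static int
    set_even_reta(uint16_t port_id, uint16_t nb_rxq)
    {
        struct rte_eth_rss_reta_entry64 reta[8] = {0}; /* 8 * 64 entries */
        struct rte_eth_dev_info info;
        uint16_t i;
        int ret;

        ret = rte_eth_dev_info_get(port_id, &info);
        if (ret != 0)
            return ret;
        if (info.reta_size > RTE_DIM(reta) * RTE_ETH_RETA_GROUP_SIZE)
            return -EINVAL;

        for (i = 0; i < info.reta_size; i++) {
            reta[i / RTE_ETH_RETA_GROUP_SIZE].mask |=
                1ULL << (i % RTE_ETH_RETA_GROUP_SIZE);
            reta[i / RTE_ETH_RETA_GROUP_SIZE].reta[i % RTE_ETH_RETA_GROUP_SIZE] =
                i % nb_rxq;
        }

        return rte_eth_dev_rss_reta_update(port_id, reta, info.reta_size);
    }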

^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v4 20/21] net/cpfl: support single q scatter RX datapath
  2023-01-18  7:57   ` [PATCH v4 00/21] add support for cpfl PMD in DPDK Mingxia Liu
                       ` (18 preceding siblings ...)
  2023-01-18  7:57     ` [PATCH v4 19/21] net/cpfl: add RSS set/get ops Mingxia Liu
@ 2023-01-18  7:57     ` Mingxia Liu
  2023-01-18  7:57     ` [PATCH v4 21/21] net/cpfl: add xstats ops Mingxia Liu
  2023-02-09  8:45     ` [PATCH v5 00/21] add support for cpfl PMD in DPDK Mingxia Liu
  21 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-01-18  7:57 UTC (permalink / raw)
  To: dev, qi.z.zhang, jingjing.wu, beilei.xing; +Cc: Mingxia Liu, Wenjun Wu

This patch adds the single queue scatter Rx receive function, used when
one packet spans multiple mbufs.

Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/net/cpfl/cpfl_ethdev.c |  3 ++-
 drivers/net/cpfl/cpfl_rxtx.c   | 26 ++++++++++++++++++++++++--
 drivers/net/cpfl/cpfl_rxtx.h   |  2 ++
 3 files changed, 28 insertions(+), 3 deletions(-)

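Before the diff, a standalone restatement of the new scatter decision made
in cpfl_rx_queue_init(); the constant and the checks are taken from this
patch, while the helper names are made up for illustration:

    #include <stdbool.h>
    #include <stdint.h>

    #define CPFL_SUPPORT_CHAIN_NUM 5	/* from cpfl_rxtx.h in this patch */

    /* Scatter Rx is enabled when the application requests the offload
     * explicitly or when a full frame (MTU plus L2 overhead) no longer
     * fits into a single mbuf data buffer. */
    static bool
    cpfl_needs_scatter(uint32_t frame_size, uint16_t rx_buf_len, bool scatter_req)
    {
        return scatter_req || frame_size > rx_buf_len;
    }

    /* The queue accepts at most min(5 * rx_buf_len, frame_size) bytes,
     * i.e. a packet may span no more than CPFL_SUPPORT_CHAIN_NUM mbufs. */
    static uint16_t
    cpfl_calc_max_pkt_len(uint32_t frame_size, uint16_t rx_buf_len)
    {
        uint32_t chain_cap = (uint32_t)CPFL_SUPPORT_CHAIN_NUM * rx_buf_len;

        return (uint16_t)(chain_cap < frame_size ? chain_cap : frame_size);
    }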
diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index 1d6902a3bd..ab427fcc0b 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -157,7 +157,8 @@ cpfl_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 		RTE_ETH_RX_OFFLOAD_UDP_CKSUM            |
 		RTE_ETH_RX_OFFLOAD_TCP_CKSUM            |
 		RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM     |
-		RTE_ETH_RX_OFFLOAD_TIMESTAMP;
+		RTE_ETH_RX_OFFLOAD_TIMESTAMP		|
+		RTE_ETH_RX_OFFLOAD_SCATTER;
 
 	dev_info->tx_offload_capa =
 		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM		|
diff --git a/drivers/net/cpfl/cpfl_rxtx.c b/drivers/net/cpfl/cpfl_rxtx.c
index 5cd25278d6..b15323a4f4 100644
--- a/drivers/net/cpfl/cpfl_rxtx.c
+++ b/drivers/net/cpfl/cpfl_rxtx.c
@@ -503,6 +503,8 @@ int
 cpfl_rx_queue_init(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 {
 	struct idpf_rx_queue *rxq;
+	uint16_t max_pkt_len;
+	uint32_t frame_size;
 	int err;
 
 	if (rx_queue_id >= dev->data->nb_rx_queues)
@@ -516,6 +518,17 @@ cpfl_rx_queue_init(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 		return -EINVAL;
 	}
 
+	frame_size = dev->data->mtu + CPFL_ETH_OVERHEAD;
+
+	max_pkt_len =
+	    RTE_MIN((uint32_t)CPFL_SUPPORT_CHAIN_NUM * rxq->rx_buf_len,
+		    frame_size);
+
+	rxq->max_pkt_len = max_pkt_len;
+	if ((dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_SCATTER) ||
+	    frame_size > rxq->rx_buf_len)
+		dev->data->scattered_rx = 1;
+
 	err = idpf_register_ts_mbuf(rxq);
 	if (err != 0) {
 		PMD_DRV_LOG(ERR, "fail to register timestamp mbuf %u",
@@ -801,13 +814,22 @@ cpfl_set_rx_function(struct rte_eth_dev *dev)
 #endif /* CC_AVX512_SUPPORT */
 		}
 
+		if (dev->data->scattered_rx) {
+			dev->rx_pkt_burst = idpf_singleq_recv_scatter_pkts;
+			return;
+		}
 		dev->rx_pkt_burst = idpf_singleq_recv_pkts;
 	}
 #else
-	if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT)
+	if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
 		dev->rx_pkt_burst = idpf_splitq_recv_pkts;
-	else
+	} else {
+		if (dev->data->scattered_rx) {
+			dev->rx_pkt_burst = idpf_singleq_recv_scatter_pkts;
+			return;
+		}
 		dev->rx_pkt_burst = idpf_singleq_recv_pkts;
+	}
 #endif /* RTE_ARCH_X86 */
 }
 
diff --git a/drivers/net/cpfl/cpfl_rxtx.h b/drivers/net/cpfl/cpfl_rxtx.h
index 021db5bf8a..2d55f58455 100644
--- a/drivers/net/cpfl/cpfl_rxtx.h
+++ b/drivers/net/cpfl/cpfl_rxtx.h
@@ -21,6 +21,8 @@
 #define CPFL_DEFAULT_TX_RS_THRESH	32
 #define CPFL_DEFAULT_TX_FREE_THRESH	32
 
+#define CPFL_SUPPORT_CHAIN_NUM 5
+
 int cpfl_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 			uint16_t nb_desc, unsigned int socket_id,
 			const struct rte_eth_txconf *tx_conf);
-- 
2.25.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v4 21/21] net/cpfl: add xstats ops
  2023-01-18  7:57   ` [PATCH v4 00/21] add support for cpfl PMD in DPDK Mingxia Liu
                       ` (19 preceding siblings ...)
  2023-01-18  7:57     ` [PATCH v4 20/21] net/cpfl: support single q scatter RX datapath Mingxia Liu
@ 2023-01-18  7:57     ` Mingxia Liu
  2023-02-09  8:45     ` [PATCH v5 00/21] add support for cpfl PMD in DPDK Mingxia Liu
  21 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-01-18  7:57 UTC (permalink / raw)
  To: dev, qi.z.zhang, jingjing.wu, beilei.xing; +Cc: Mingxia Liu

Add support for these device ops:
- dev_xstats_get
- dev_xstats_get_names
- dev_xstats_reset

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/net/cpfl/cpfl_ethdev.c | 80 ++++++++++++++++++++++++++++++++++
 1 file changed, 80 insertions(+)

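Before the diff, a minimal consumer-side sketch of these ops through the
generic ethdev API; the function and its error handling are illustrative,
only the rte_eth_xstats_* calls are the real API:

    #include <inttypes.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <rte_ethdev.h>

    /* Print every extended statistic exposed by a port (for a cpfl port,
     * the 13 counters registered in this patch). */
    static void
    dump_xstats(uint16_t port_id)
    {
        struct rte_eth_xstat_name *names;
        struct rte_eth_xstat *vals;
        int i, n;

        n = rte_eth_xstats_get_names(port_id, NULL, 0); /* query count */
        if (n <= 0)
            return;

        names = calloc(n, sizeof(*names));
        vals = calloc(n, sizeof(*vals));
        if (names != NULL && vals != NULL &&
            rte_eth_xstats_get_names(port_id, names, n) == n &&
            rte_eth_xstats_get(port_id, vals, n) == n) {
            for (i = 0; i < n; i++)
                printf("%s: %" PRIu64 "\n",
                       names[vals[i].id].name, vals[i].value);
        }
        free(names);
        free(vals);
    }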
diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index ab427fcc0b..f178f3fbb8 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -80,6 +80,30 @@ static const uint64_t cpfl_ipv6_rss = RTE_ETH_RSS_NONFRAG_IPV6_UDP |
 			  RTE_ETH_RSS_NONFRAG_IPV6_OTHER |
 			  RTE_ETH_RSS_FRAG_IPV6;
 
+struct rte_cpfl_xstats_name_off {
+	char name[RTE_ETH_XSTATS_NAME_SIZE];
+	unsigned int offset;
+};
+
+static const struct rte_cpfl_xstats_name_off rte_cpfl_stats_strings[] = {
+	{"rx_bytes", offsetof(struct virtchnl2_vport_stats, rx_bytes)},
+	{"rx_unicast_packets", offsetof(struct virtchnl2_vport_stats, rx_unicast)},
+	{"rx_multicast_packets", offsetof(struct virtchnl2_vport_stats, rx_multicast)},
+	{"rx_broadcast_packets", offsetof(struct virtchnl2_vport_stats, rx_broadcast)},
+	{"rx_dropped_packets", offsetof(struct virtchnl2_vport_stats, rx_discards)},
+	{"rx_errors", offsetof(struct virtchnl2_vport_stats, rx_errors)},
+	{"rx_unknown_protocol_packets", offsetof(struct virtchnl2_vport_stats,
+						 rx_unknown_protocol)},
+	{"tx_bytes", offsetof(struct virtchnl2_vport_stats, tx_bytes)},
+	{"tx_unicast_packets", offsetof(struct virtchnl2_vport_stats, tx_unicast)},
+	{"tx_multicast_packets", offsetof(struct virtchnl2_vport_stats, tx_multicast)},
+	{"tx_broadcast_packets", offsetof(struct virtchnl2_vport_stats, tx_broadcast)},
+	{"tx_dropped_packets", offsetof(struct virtchnl2_vport_stats, tx_discards)},
+	{"tx_error_packets", offsetof(struct virtchnl2_vport_stats, tx_errors)}};
+
+#define CPFL_NB_XSTATS (sizeof(rte_cpfl_stats_strings) / \
+		sizeof(rte_cpfl_stats_strings[0]))
+
 static int
 cpfl_dev_link_update(struct rte_eth_dev *dev,
 		     __rte_unused int wait_to_complete)
@@ -312,6 +336,59 @@ cpfl_dev_stats_reset(struct rte_eth_dev *dev)
 	return 0;
 }
 
+static int cpfl_dev_xstats_reset(struct rte_eth_dev *dev)
+{
+	cpfl_dev_stats_reset(dev);
+	return 0;
+}
+
+static int cpfl_dev_xstats_get(struct rte_eth_dev *dev,
+			       struct rte_eth_xstat *xstats, unsigned int n)
+{
+	struct idpf_vport *vport =
+		(struct idpf_vport *)dev->data->dev_private;
+	struct virtchnl2_vport_stats *pstats = NULL;
+	unsigned int i;
+	int ret;
+
+	if (n < CPFL_NB_XSTATS)
+		return CPFL_NB_XSTATS;
+
+	if (!xstats)
+		return 0;
+
+	ret = idpf_query_stats(vport, &pstats);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "Get statistics failed");
+		return 0;
+	}
+
+	idpf_update_stats(&vport->eth_stats_offset, pstats);
+
+	/* loop over xstats array and values from pstats */
+	for (i = 0; i < CPFL_NB_XSTATS; i++) {
+		xstats[i].id = i;
+		xstats[i].value = *(uint64_t *)(((char *)pstats) +
+			rte_cpfl_stats_strings[i].offset);
+	}
+	return CPFL_NB_XSTATS;
+}
+
+static int cpfl_dev_xstats_get_names(__rte_unused struct rte_eth_dev *dev,
+				     struct rte_eth_xstat_name *xstats_names,
+				     __rte_unused unsigned int limit)
+{
+	unsigned int i;
+
+	if (xstats_names)
+		for (i = 0; i < CPFL_NB_XSTATS; i++) {
+			snprintf(xstats_names[i].name,
+				 sizeof(xstats_names[i].name),
+				 "%s", rte_cpfl_stats_strings[i].name);
+		}
+	return CPFL_NB_XSTATS;
+}
+
 static int cpfl_config_rss_hf(struct idpf_vport *vport, uint64_t rss_hf)
 {
 	uint64_t hena = 0, valid_rss_hf = 0;
@@ -1157,6 +1234,9 @@ static const struct eth_dev_ops cpfl_eth_dev_ops = {
 	.reta_query			= cpfl_rss_reta_query,
 	.rss_hash_update		= cpfl_rss_hash_update,
 	.rss_hash_conf_get		= cpfl_rss_hash_conf_get,
+	.xstats_get			= cpfl_dev_xstats_get,
+	.xstats_get_names		= cpfl_dev_xstats_get_names,
+	.xstats_reset			= cpfl_dev_xstats_reset,
 };
 
 static uint16_t
-- 
2.25.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v5 00/21] add support for cpfl PMD in DPDK
  2023-01-18  7:57   ` [PATCH v4 00/21] add support for cpfl PMD in DPDK Mingxia Liu
                       ` (20 preceding siblings ...)
  2023-01-18  7:57     ` [PATCH v4 21/21] net/cpfl: add xstats ops Mingxia Liu
@ 2023-02-09  8:45     ` Mingxia Liu
  2023-02-09  8:45       ` [PATCH v5 01/21] net/cpfl: support device initialization Mingxia Liu
                         ` (22 more replies)
  21 siblings, 23 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-02-09  8:45 UTC (permalink / raw)
  To: dev, qi.z.zhang, jingjing.wu, beilei.xing; +Cc: Mingxia Liu

This patchset introduces the cpfl (Control Plane Function Library) PMD
for Intel® IPU E2100’s Configure Physical Function (Device ID: 0x1453)

The cpfl PMD inherits all the features from the idpf PMD, which follows
an ongoing standard data plane function spec:
https://www.oasis-open.org/committees/tc_home.php?wg_abbrev=idpf
Besides, it will also support more device-specific hardware offloading
features through DPDK’s control path (e.g. hairpin, rte_flow …), which
the idpf PMD does not; that is why a new cpfl PMD is needed.

This patchset mainly focuses on idpf PMD’s equivalent features.
To avoid duplicated code, this patchset depends on the idpf common-library
patchsets, which move the common part from net/idpf into common/idpf as a
shared library.

v2 changes:
 - rebase to the new baseline.
 - Fix RSS LUT config issue.
v3 changes:
 - rebase to the new baseline.
v4 changes:
 - Resend v3. No code changed.
v5 changes:
 - rebase to the new baseline.
 - optimize some code
 - give "not supported" tips when user want to config rss hash type
 - if stats reset fails at initialization time, don't rollback, just
   print ERROR info

Mingxia Liu (21):
  net/cpfl: support device initialization
  net/cpfl: add Tx queue setup
  net/cpfl: add Rx queue setup
  net/cpfl: support device start and stop
  net/cpfl: support queue start
  net/cpfl: support queue stop
  net/cpfl: support queue release
  net/cpfl: support MTU configuration
  net/cpfl: support basic Rx data path
  net/cpfl: support basic Tx data path
  net/cpfl: support write back based on ITR expire
  net/cpfl: support RSS
  net/cpfl: support Rx offloading
  net/cpfl: support Tx offloading
  net/cpfl: add AVX512 data path for single queue model
  net/cpfl: support timestamp offload
  net/cpfl: add AVX512 data path for split queue model
  net/cpfl: add HW statistics
  net/cpfl: add RSS set/get ops
  net/cpfl: support scalar scatter Rx datapath for single queue model
  net/cpfl: add xstats ops

 MAINTAINERS                             |    9 +
 doc/guides/nics/cpfl.rst                |   88 ++
 doc/guides/nics/features/cpfl.ini       |   17 +
 doc/guides/rel_notes/release_23_03.rst  |    6 +
 drivers/net/cpfl/cpfl_ethdev.c          | 1453 +++++++++++++++++++++++
 drivers/net/cpfl/cpfl_ethdev.h          |   95 ++
 drivers/net/cpfl/cpfl_logs.h            |   32 +
 drivers/net/cpfl/cpfl_rxtx.c            |  952 +++++++++++++++
 drivers/net/cpfl/cpfl_rxtx.h            |   44 +
 drivers/net/cpfl/cpfl_rxtx_vec_common.h |  116 ++
 drivers/net/cpfl/meson.build            |   38 +
 drivers/net/meson.build                 |    1 +
 12 files changed, 2851 insertions(+)
 create mode 100644 doc/guides/nics/cpfl.rst
 create mode 100644 doc/guides/nics/features/cpfl.ini
 create mode 100644 drivers/net/cpfl/cpfl_ethdev.c
 create mode 100644 drivers/net/cpfl/cpfl_ethdev.h
 create mode 100644 drivers/net/cpfl/cpfl_logs.h
 create mode 100644 drivers/net/cpfl/cpfl_rxtx.c
 create mode 100644 drivers/net/cpfl/cpfl_rxtx.h
 create mode 100644 drivers/net/cpfl/cpfl_rxtx_vec_common.h
 create mode 100644 drivers/net/cpfl/meson.build

-- 
2.25.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v5 01/21] net/cpfl: support device initialization
  2023-02-09  8:45     ` [PATCH v5 00/21] add support for cpfl PMD in DPDK Mingxia Liu
@ 2023-02-09  8:45       ` Mingxia Liu
  2023-02-09  8:45       ` [PATCH v5 02/21] net/cpfl: add Tx queue setup Mingxia Liu
                         ` (21 subsequent siblings)
  22 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-02-09  8:45 UTC (permalink / raw)
  To: dev, qi.z.zhang, jingjing.wu, beilei.xing; +Cc: Mingxia Liu

Support device init and add the following dev ops:
 - dev_configure
 - dev_close
 - dev_infos_get
 - link_update
 - dev_supported_ptypes_get

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 MAINTAINERS                            |   9 +
 doc/guides/nics/cpfl.rst               |  66 +++
 doc/guides/nics/features/cpfl.ini      |  12 +
 doc/guides/rel_notes/release_23_03.rst |   6 +
 drivers/net/cpfl/cpfl_ethdev.c         | 768 +++++++++++++++++++++++++
 drivers/net/cpfl/cpfl_ethdev.h         |  78 +++
 drivers/net/cpfl/cpfl_logs.h           |  32 ++
 drivers/net/cpfl/cpfl_rxtx.c           | 244 ++++++++
 drivers/net/cpfl/cpfl_rxtx.h           |  25 +
 drivers/net/cpfl/meson.build           |  14 +
 drivers/net/meson.build                |   1 +
 11 files changed, 1255 insertions(+)
 create mode 100644 doc/guides/nics/cpfl.rst
 create mode 100644 doc/guides/nics/features/cpfl.ini
 create mode 100644 drivers/net/cpfl/cpfl_ethdev.c
 create mode 100644 drivers/net/cpfl/cpfl_ethdev.h
 create mode 100644 drivers/net/cpfl/cpfl_logs.h
 create mode 100644 drivers/net/cpfl/cpfl_rxtx.c
 create mode 100644 drivers/net/cpfl/cpfl_rxtx.h
 create mode 100644 drivers/net/cpfl/meson.build

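One implementation note before the diff: mailbox and event handling runs
without a dedicated thread, using the self-rearming EAL alarm pattern
sketched below (the names and interval here are illustrative; the diff
uses cpfl_dev_alarm_handler() and CPFL_ALARM_INTERVAL):

    #include <rte_alarm.h>

    #define POLL_INTERVAL_US 50000	/* the patch uses CPFL_ALARM_INTERVAL */

    /* Armed once at adapter init; the callback re-arms itself, giving a
     * periodic mailbox poll.  It is stopped with rte_eal_alarm_cancel()
     * at adapter teardown. */
    static void
    poll_mailbox(void *arg)
    {
        /* ... receive and dispatch control-queue messages for 'arg' ... */
        rte_eal_alarm_set(POLL_INTERVAL_US, poll_mailbox, arg);
    }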
diff --git a/MAINTAINERS b/MAINTAINERS
index 9a0f416d2e..cf044c478b 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -783,6 +783,15 @@ F: drivers/common/idpf/
 F: doc/guides/nics/idpf.rst
 F: doc/guides/nics/features/idpf.ini
 
+Intel cpfl
+M: Qi Zhang <qi.z.zhang@intel.com>
+M: Jingjing Wu <jingjing.wu@intel.com>
+M: Beilei Xing <beilei.xing@intel.com>
+T: git://dpdk.org/next/dpdk-next-net-intel
+F: drivers/net/cpfl/
+F: doc/guides/nics/cpfl.rst
+F: doc/guides/nics/features/cpfl.ini
+
 Intel igc
 M: Junfeng Guo <junfeng.guo@intel.com>
 M: Simei Su <simei.su@intel.com>
diff --git a/doc/guides/nics/cpfl.rst b/doc/guides/nics/cpfl.rst
new file mode 100644
index 0000000000..7c5aff0789
--- /dev/null
+++ b/doc/guides/nics/cpfl.rst
@@ -0,0 +1,66 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+   Copyright(c) 2022 Intel Corporation.
+
+.. include:: <isonum.txt>
+
+CPFL Poll Mode Driver
+=====================
+
+The [*EXPERIMENTAL*] cpfl PMD (**librte_net_cpfl**) provides poll mode driver support
+for Intel\ |reg| Infrastructure Processing Unit (Intel\ |reg| IPU) E2100.
+
+
+Linux Prerequisites
+-------------------
+
+Follow the DPDK :doc:`../linux_gsg/index` to setup the basic DPDK environment.
+
+To get better performance on Intel platforms,
+please follow the :doc:`../linux_gsg/nic_perf_intel_platform`.
+
+
+Pre-Installation Configuration
+------------------------------
+
+Runtime Config Options
+~~~~~~~~~~~~~~~~~~~~~~
+
+- ``vport`` (default ``0``)
+
+  The PMD supports creation of multiple vports for one PCI device,
+  each vport corresponds to a single ethdev.
+  The user can specify the vports with specific ID to be created, for example::
+
+    -a ca:00.0,vport=[0,2,3]
+
+  Then the PMD will create 3 vports (ethdevs) for device ``ca:00.0``.
+
+  If the parameter is not provided, the vport 0 will be created by default.
+
+- ``rx_single`` (default ``0``)
+
+  There are two queue modes supported by Intel\ |reg| IPU Ethernet E2100 Series,
+  single queue mode and split queue mode for Rx queue.
+  User can choose Rx queue mode, example::
+
+    -a ca:00.0,rx_single=1
+
+  Then the PMD will configure Rx queue with single queue mode.
+  Otherwise, split queue mode is chosen by default.
+
+- ``tx_single`` (default ``0``)
+
+  There are two queue modes supported by Intel\ |reg| IPU Ethernet E2100 Series,
+  single queue mode and split queue mode for Tx queue.
+  User can choose Tx queue mode, example::
+
+    -a ca:00.0,tx_single=1
+
+  Then the PMD will configure Tx queue with single queue mode.
+  Otherwise, split queue mode is chosen by default.
+
+
+Driver compilation and testing
+------------------------------
+
+Refer to the document :doc:`build_and_test` for details.
\ No newline at end of file
diff --git a/doc/guides/nics/features/cpfl.ini b/doc/guides/nics/features/cpfl.ini
new file mode 100644
index 0000000000..a2d1ca9e15
--- /dev/null
+++ b/doc/guides/nics/features/cpfl.ini
@@ -0,0 +1,12 @@
+;
+; Supported features of the 'cpfl' network poll mode driver.
+;
+; Refer to default.ini for the full list of available PMD features.
+;
+; A feature with "P" indicates it is only supported when the non-vector path
+; is selected.
+;
+[Features]
+Linux                = Y
+x86-32               = Y
+x86-64               = Y
diff --git a/doc/guides/rel_notes/release_23_03.rst b/doc/guides/rel_notes/release_23_03.rst
index 07914170a7..b0b23d1a44 100644
--- a/doc/guides/rel_notes/release_23_03.rst
+++ b/doc/guides/rel_notes/release_23_03.rst
@@ -88,6 +88,12 @@ New Features
   * Added timesync API support.
   * Added packet pacing(launch time offloading) support.
 
+* **Added Intel cpfl driver.**
+
+  Added the new ``cpfl`` net driver
+  for Intel\ |reg| Infrastructure Processing Unit (Intel\ |reg| IPU) E2100.
+  See the :doc:`../nics/cpfl` NIC guide for more details on this new driver.
+
 Removed Items
 -------------
 
diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
new file mode 100644
index 0000000000..e10c6346ba
--- /dev/null
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -0,0 +1,768 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Intel Corporation
+ */
+
+#include <rte_atomic.h>
+#include <rte_eal.h>
+#include <rte_ether.h>
+#include <rte_malloc.h>
+#include <rte_memzone.h>
+#include <rte_dev.h>
+#include <errno.h>
+#include <rte_alarm.h>
+
+#include "cpfl_ethdev.h"
+
+#define CPFL_TX_SINGLE_Q	"tx_single"
+#define CPFL_RX_SINGLE_Q	"rx_single"
+#define CPFL_VPORT		"vport"
+
+rte_spinlock_t cpfl_adapter_lock;
+/* A list for all adapters, one adapter matches one PCI device */
+struct cpfl_adapter_list cpfl_adapter_list;
+bool cpfl_adapter_list_init;
+
+static const char * const cpfl_valid_args[] = {
+	CPFL_TX_SINGLE_Q,
+	CPFL_RX_SINGLE_Q,
+	CPFL_VPORT,
+	NULL
+};
+
+static int
+cpfl_dev_link_update(struct rte_eth_dev *dev,
+		     __rte_unused int wait_to_complete)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct rte_eth_link new_link;
+
+	memset(&new_link, 0, sizeof(new_link));
+
+	switch (vport->link_speed) {
+	case RTE_ETH_SPEED_NUM_10M:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_10M;
+		break;
+	case RTE_ETH_SPEED_NUM_100M:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_100M;
+		break;
+	case RTE_ETH_SPEED_NUM_1G:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_1G;
+		break;
+	case RTE_ETH_SPEED_NUM_10G:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_10G;
+		break;
+	case RTE_ETH_SPEED_NUM_20G:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_20G;
+		break;
+	case RTE_ETH_SPEED_NUM_25G:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_25G;
+		break;
+	case RTE_ETH_SPEED_NUM_40G:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_40G;
+		break;
+	case RTE_ETH_SPEED_NUM_50G:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_50G;
+		break;
+	case RTE_ETH_SPEED_NUM_100G:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_100G;
+		break;
+	case RTE_ETH_SPEED_NUM_200G:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_200G;
+		break;
+	default:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+	}
+
+	new_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+	new_link.link_status = vport->link_up ? RTE_ETH_LINK_UP :
+		RTE_ETH_LINK_DOWN;
+	new_link.link_autoneg = (dev->data->dev_conf.link_speeds & RTE_ETH_LINK_SPEED_FIXED) ?
+				 RTE_ETH_LINK_FIXED : RTE_ETH_LINK_AUTONEG;
+
+	return rte_eth_linkstatus_set(dev, &new_link);
+}
+
+static int
+cpfl_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_adapter *adapter = vport->adapter;
+
+	dev_info->max_rx_queues = adapter->caps.max_rx_q;
+	dev_info->max_tx_queues = adapter->caps.max_tx_q;
+	dev_info->min_rx_bufsize = CPFL_MIN_BUF_SIZE;
+	dev_info->max_rx_pktlen = vport->max_mtu + CPFL_ETH_OVERHEAD;
+
+	dev_info->max_mtu = vport->max_mtu;
+	dev_info->min_mtu = RTE_ETHER_MIN_MTU;
+
+	return 0;
+}
+
+static const uint32_t *
+cpfl_dev_supported_ptypes_get(struct rte_eth_dev *dev __rte_unused)
+{
+	static const uint32_t ptypes[] = {
+		RTE_PTYPE_L2_ETHER,
+		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN,
+		RTE_PTYPE_L3_IPV6_EXT_UNKNOWN,
+		RTE_PTYPE_L4_FRAG,
+		RTE_PTYPE_L4_UDP,
+		RTE_PTYPE_L4_TCP,
+		RTE_PTYPE_L4_SCTP,
+		RTE_PTYPE_L4_ICMP,
+		RTE_PTYPE_UNKNOWN
+	};
+
+	return ptypes;
+}
+
+static int
+cpfl_dev_configure(struct rte_eth_dev *dev)
+{
+	struct rte_eth_conf *conf = &dev->data->dev_conf;
+
+	if (conf->link_speeds & RTE_ETH_LINK_SPEED_FIXED) {
+		PMD_INIT_LOG(ERR, "Setting link speed is not supported");
+		return -ENOTSUP;
+	}
+
+	if (conf->txmode.mq_mode != RTE_ETH_MQ_TX_NONE) {
+		PMD_INIT_LOG(ERR, "Multi-queue TX mode %d is not supported",
+			     conf->txmode.mq_mode);
+		return -ENOTSUP;
+	}
+
+	if (conf->lpbk_mode != 0) {
+		PMD_INIT_LOG(ERR, "Loopback operation mode %d is not supported",
+			     conf->lpbk_mode);
+		return -ENOTSUP;
+	}
+
+	if (conf->dcb_capability_en != 0) {
+		PMD_INIT_LOG(ERR, "Priority Flow Control (PFC) is not supported");
+		return -ENOTSUP;
+	}
+
+	if (conf->intr_conf.lsc != 0) {
+		PMD_INIT_LOG(ERR, "LSC interrupt is not supported");
+		return -ENOTSUP;
+	}
+
+	if (conf->intr_conf.rxq != 0) {
+		PMD_INIT_LOG(ERR, "RXQ interrupt is not supported");
+		return -ENOTSUP;
+	}
+
+	if (conf->intr_conf.rmv != 0) {
+		PMD_INIT_LOG(ERR, "RMV interrupt is not supported");
+		return -ENOTSUP;
+	}
+
+	return 0;
+}
+
+static int
+cpfl_dev_close(struct rte_eth_dev *dev)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct cpfl_adapter_ext *adapter = CPFL_ADAPTER_TO_EXT(vport->adapter);
+
+	idpf_vport_deinit(vport);
+
+	adapter->cur_vports &= ~RTE_BIT32(vport->devarg_id);
+	adapter->cur_vport_nb--;
+	dev->data->dev_private = NULL;
+	adapter->vports[vport->sw_idx] = NULL;
+	rte_free(vport);
+
+	return 0;
+}
+
+static int
+insert_value(struct cpfl_devargs *devargs, uint16_t id)
+{
+	uint16_t i;
+
+	/* ignore duplicate */
+	for (i = 0; i < devargs->req_vport_nb; i++) {
+		if (devargs->req_vports[i] == id)
+			return 0;
+	}
+
+	if (devargs->req_vport_nb >= RTE_DIM(devargs->req_vports)) {
+		PMD_INIT_LOG(ERR, "Total vport number can't be > %d",
+			     CPFL_MAX_VPORT_NUM);
+		return -EINVAL;
+	}
+
+	devargs->req_vports[devargs->req_vport_nb] = id;
+	devargs->req_vport_nb++;
+
+	return 0;
+}
+
+static const char *
+parse_range(const char *value, struct cpfl_devargs *devargs)
+{
+	uint16_t lo, hi, i;
+	int n = 0;
+	int result;
+	const char *pos = value;
+
+	result = sscanf(value, "%hu%n-%hu%n", &lo, &n, &hi, &n);
+	if (result == 1) {
+		if (lo >= CPFL_MAX_VPORT_NUM)
+			return NULL;
+		if (insert_value(devargs, lo) != 0)
+			return NULL;
+	} else if (result == 2) {
+		if (lo > hi || hi >= CPFL_MAX_VPORT_NUM)
+			return NULL;
+		for (i = lo; i <= hi; i++) {
+			if (insert_value(devargs, i) != 0)
+				return NULL;
+		}
+	} else {
+		return NULL;
+	}
+
+	return pos + n;
+}
+
+static int
+parse_vport(const char *key, const char *value, void *args)
+{
+	struct cpfl_devargs *devargs = args;
+	const char *pos = value;
+
+	devargs->req_vport_nb = 0;
+
+	if (*pos == '[')
+		pos++;
+
+	while (1) {
+		pos = parse_range(pos, devargs);
+		if (pos == NULL) {
+			PMD_INIT_LOG(ERR, "invalid value:\"%s\" for key:\"%s\", ",
+				     value, key);
+			return -EINVAL;
+		}
+		if (*pos != ',')
+			break;
+		pos++;
+	}
+
+	if (*value == '[' && *pos != ']') {
+		PMD_INIT_LOG(ERR, "invalid value:\"%s\" for key:\"%s\", ",
+			     value, key);
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static int
+parse_bool(const char *key, const char *value, void *args)
+{
+	int *i = args;
+	char *end;
+	int num;
+
+	errno = 0;
+
+	num = strtoul(value, &end, 10);
+
+	if (errno == ERANGE || (num != 0 && num != 1)) {
+		PMD_INIT_LOG(ERR, "invalid value:\"%s\" for key:\"%s\", value must be 0 or 1",
+			value, key);
+		return -EINVAL;
+	}
+
+	*i = num;
+	return 0;
+}
+
+static int
+cpfl_parse_devargs(struct rte_pci_device *pci_dev, struct cpfl_adapter_ext *adapter,
+		   struct cpfl_devargs *cpfl_args)
+{
+	struct rte_devargs *devargs = pci_dev->device.devargs;
+	struct rte_kvargs *kvlist;
+	int i, ret;
+
+	cpfl_args->req_vport_nb = 0;
+
+	if (devargs == NULL)
+		return 0;
+
+	kvlist = rte_kvargs_parse(devargs->args, cpfl_valid_args);
+	if (kvlist == NULL) {
+		PMD_INIT_LOG(ERR, "invalid kvargs key");
+		return -EINVAL;
+	}
+
+	ret = rte_kvargs_process(kvlist, CPFL_VPORT, &parse_vport,
+				 cpfl_args);
+	if (ret != 0)
+		goto bail;
+
+	/* check parsed devargs (after the vport list has been filled in) */
+	if (adapter->cur_vport_nb + cpfl_args->req_vport_nb >
+	    CPFL_MAX_VPORT_NUM) {
+		PMD_INIT_LOG(ERR, "Total vport number can't be > %d",
+			     CPFL_MAX_VPORT_NUM);
+		ret = -EINVAL;
+		goto bail;
+	}
+
+	for (i = 0; i < cpfl_args->req_vport_nb; i++) {
+		if (adapter->cur_vports & RTE_BIT32(cpfl_args->req_vports[i])) {
+			PMD_INIT_LOG(ERR, "Vport %d has been created",
+				     cpfl_args->req_vports[i]);
+			ret = -EINVAL;
+			goto bail;
+		}
+	}
+
+	ret = rte_kvargs_process(kvlist, CPFL_TX_SINGLE_Q, &parse_bool,
+				 &adapter->base.txq_model);
+	if (ret != 0)
+		goto bail;
+
+	ret = rte_kvargs_process(kvlist, CPFL_RX_SINGLE_Q, &parse_bool,
+				 &adapter->base.rxq_model);
+	if (ret != 0)
+		goto bail;
+
+bail:
+	rte_kvargs_free(kvlist);
+	return ret;
+}
+
+static struct idpf_vport *
+cpfl_find_vport(struct cpfl_adapter_ext *adapter, uint32_t vport_id)
+{
+	struct idpf_vport *vport = NULL;
+	int i;
+
+	for (i = 0; i < adapter->cur_vport_nb; i++) {
+		vport = adapter->vports[i];
+		if (vport->vport_id == vport_id)
+			return vport;
+	}
+
+	/* Return NULL (not the last vport visited) so callers can
+	 * reliably detect a missing vport. */
+	return NULL;
+}
+
+static void
+cpfl_handle_event_msg(struct idpf_vport *vport, uint8_t *msg, uint16_t msglen)
+{
+	struct virtchnl2_event *vc_event = (struct virtchnl2_event *)msg;
+	struct rte_eth_dev *dev = (struct rte_eth_dev *)vport->dev;
+
+	if (msglen < sizeof(struct virtchnl2_event)) {
+		PMD_DRV_LOG(ERR, "Error event");
+		return;
+	}
+
+	switch (vc_event->event) {
+	case VIRTCHNL2_EVENT_LINK_CHANGE:
+		PMD_DRV_LOG(DEBUG, "VIRTCHNL2_EVENT_LINK_CHANGE");
+		vport->link_up = !!(vc_event->link_status);
+		vport->link_speed = vc_event->link_speed;
+		cpfl_dev_link_update(dev, 0);
+		break;
+	default:
+		PMD_DRV_LOG(ERR, "Unknown event received %u", vc_event->event);
+		break;
+	}
+}
+
+static void
+cpfl_handle_virtchnl_msg(struct cpfl_adapter_ext *adapter_ex)
+{
+	struct idpf_adapter *adapter = &adapter_ex->base;
+	struct idpf_dma_mem *dma_mem = NULL;
+	struct idpf_hw *hw = &adapter->hw;
+	struct virtchnl2_event *vc_event;
+	struct idpf_ctlq_msg ctlq_msg;
+	enum idpf_mbx_opc mbx_op;
+	struct idpf_vport *vport;
+	enum virtchnl_ops vc_op;
+	uint16_t pending = 1;
+	int ret;
+
+	while (pending) {
+		ret = idpf_vc_ctlq_recv(hw->arq, &pending, &ctlq_msg);
+		if (ret) {
+			PMD_DRV_LOG(INFO, "Failed to read msg from virtual channel, ret: %d", ret);
+			return;
+		}
+
+		rte_memcpy(adapter->mbx_resp, ctlq_msg.ctx.indirect.payload->va,
+			   IDPF_DFLT_MBX_BUF_SIZE);
+
+		mbx_op = rte_le_to_cpu_16(ctlq_msg.opcode);
+		vc_op = rte_le_to_cpu_32(ctlq_msg.cookie.mbx.chnl_opcode);
+		adapter->cmd_retval = rte_le_to_cpu_32(ctlq_msg.cookie.mbx.chnl_retval);
+
+		switch (mbx_op) {
+		case idpf_mbq_opc_send_msg_to_peer_pf:
+			if (vc_op == VIRTCHNL2_OP_EVENT) {
+				if (ctlq_msg.data_len < sizeof(struct virtchnl2_event)) {
+					PMD_DRV_LOG(ERR, "Error event");
+					return;
+				}
+				vc_event = (struct virtchnl2_event *)adapter->mbx_resp;
+				vport = cpfl_find_vport(adapter_ex, vc_event->vport_id);
+				if (!vport) {
+					PMD_DRV_LOG(ERR, "Can't find vport.");
+					return;
+				}
+				cpfl_handle_event_msg(vport, adapter->mbx_resp,
+						      ctlq_msg.data_len);
+			} else {
+				if (vc_op == adapter->pend_cmd)
+					notify_cmd(adapter, adapter->cmd_retval);
+				else
+					PMD_DRV_LOG(ERR, "command mismatch, expect %u, get %u",
+						    adapter->pend_cmd, vc_op);
+
+				PMD_DRV_LOG(DEBUG, "Virtual channel response is received, "
+					    "opcode = %d", vc_op);
+			}
+			goto post_buf;
+		default:
+			PMD_DRV_LOG(DEBUG, "Request %u is not supported yet", mbx_op);
+		}
+	}
+
+post_buf:
+	if (ctlq_msg.data_len)
+		dma_mem = ctlq_msg.ctx.indirect.payload;
+	else
+		pending = 0;
+
+	ret = idpf_vc_ctlq_post_rx_buffs(hw, hw->arq, &pending, &dma_mem);
+	if (ret && dma_mem)
+		idpf_free_dma_mem(hw, dma_mem);
+}
+
+static void
+cpfl_dev_alarm_handler(void *param)
+{
+	struct cpfl_adapter_ext *adapter = param;
+
+	cpfl_handle_virtchnl_msg(adapter);
+
+	rte_eal_alarm_set(CPFL_ALARM_INTERVAL, cpfl_dev_alarm_handler, adapter);
+}
+
+static int
+cpfl_adapter_ext_init(struct rte_pci_device *pci_dev, struct cpfl_adapter_ext *adapter)
+{
+	struct idpf_adapter *base = &adapter->base;
+	struct idpf_hw *hw = &base->hw;
+	int ret = 0;
+
+	hw->hw_addr = (void *)pci_dev->mem_resource[0].addr;
+	hw->hw_addr_len = pci_dev->mem_resource[0].len;
+	hw->back = base;
+	hw->vendor_id = pci_dev->id.vendor_id;
+	hw->device_id = pci_dev->id.device_id;
+	hw->subsystem_vendor_id = pci_dev->id.subsystem_vendor_id;
+
+	strncpy(adapter->name, pci_dev->device.name, PCI_PRI_STR_SIZE);
+
+	ret = idpf_adapter_init(base);
+	if (ret != 0) {
+		PMD_INIT_LOG(ERR, "Failed to init adapter");
+		goto err_adapter_init;
+	}
+
+	rte_eal_alarm_set(CPFL_ALARM_INTERVAL, cpfl_dev_alarm_handler, adapter);
+
+	adapter->max_vport_nb = adapter->base.caps.max_vports;
+
+	adapter->vports = rte_zmalloc("vports",
+				      adapter->max_vport_nb *
+				      sizeof(*adapter->vports),
+				      0);
+	if (adapter->vports == NULL) {
+		PMD_INIT_LOG(ERR, "Failed to allocate vports memory");
+		ret = -ENOMEM;
+		goto err_get_ptype;
+	}
+
+	adapter->cur_vports = 0;
+	adapter->cur_vport_nb = 0;
+
+	adapter->used_vecs_num = 0;
+
+	return ret;
+
+err_get_ptype:
+	idpf_adapter_deinit(base);
+err_adapter_init:
+	return ret;
+}
+
+static const struct eth_dev_ops cpfl_eth_dev_ops = {
+	.dev_configure			= cpfl_dev_configure,
+	.dev_close			= cpfl_dev_close,
+	.dev_infos_get			= cpfl_dev_info_get,
+	.link_update			= cpfl_dev_link_update,
+	.dev_supported_ptypes_get	= cpfl_dev_supported_ptypes_get,
+};
+
+static uint16_t
+cpfl_vport_idx_alloc(struct cpfl_adapter_ext *ad)
+{
+	uint16_t vport_idx;
+	uint16_t i;
+
+	for (i = 0; i < ad->max_vport_nb; i++) {
+		if (ad->vports[i] == NULL)
+			break;
+	}
+
+	if (i == ad->max_vport_nb)
+		vport_idx = CPFL_INVALID_VPORT_IDX;
+	else
+		vport_idx = i;
+
+	return vport_idx;
+}
+
+static int
+cpfl_dev_vport_init(struct rte_eth_dev *dev, void *init_params)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct cpfl_vport_param *param = init_params;
+	struct cpfl_adapter_ext *adapter = param->adapter;
+	/* info used to build the CREATE_VPORT virtchnl message */
+	struct virtchnl2_create_vport create_vport_info;
+	int ret = 0;
+
+	dev->dev_ops = &cpfl_eth_dev_ops;
+	vport->adapter = &adapter->base;
+	vport->sw_idx = param->idx;
+	vport->devarg_id = param->devarg_id;
+	vport->dev = dev;
+
+	memset(&create_vport_info, 0, sizeof(create_vport_info));
+	ret = idpf_vport_info_init(vport, &create_vport_info);
+	if (ret != 0) {
+		PMD_INIT_LOG(ERR, "Failed to init vport req_info.");
+		goto err;
+	}
+
+	ret = idpf_vport_init(vport, &create_vport_info, dev->data);
+	if (ret != 0) {
+		PMD_INIT_LOG(ERR, "Failed to init vports.");
+		goto err;
+	}
+
+	adapter->vports[param->idx] = vport;
+	adapter->cur_vports |= RTE_BIT32(param->devarg_id);
+	adapter->cur_vport_nb++;
+
+	dev->data->mac_addrs = rte_zmalloc(NULL, RTE_ETHER_ADDR_LEN, 0);
+	if (dev->data->mac_addrs == NULL) {
+		PMD_INIT_LOG(ERR, "Cannot allocate mac_addr memory.");
+		ret = -ENOMEM;
+		goto err_mac_addrs;
+	}
+
+	rte_ether_addr_copy((struct rte_ether_addr *)vport->default_mac_addr,
+			    &dev->data->mac_addrs[0]);
+
+	return 0;
+
+err_mac_addrs:
+	adapter->vports[param->idx] = NULL;  /* reset */
+	idpf_vport_deinit(vport);
+err:
+	return ret;
+}
+
+static const struct rte_pci_id pci_id_cpfl_map[] = {
+	{ RTE_PCI_DEVICE(IDPF_INTEL_VENDOR_ID, IDPF_DEV_ID_CPF) },
+	{ .vendor_id = 0, /* sentinel */ },
+};
+
+static struct cpfl_adapter_ext *
+cpfl_find_adapter_ext(struct rte_pci_device *pci_dev)
+{
+	struct cpfl_adapter_ext *adapter;
+	int found = 0;
+
+	if (pci_dev == NULL)
+		return NULL;
+
+	rte_spinlock_lock(&cpfl_adapter_lock);
+	TAILQ_FOREACH(adapter, &cpfl_adapter_list, next) {
+		if (strncmp(adapter->name, pci_dev->device.name, PCI_PRI_STR_SIZE) == 0) {
+			found = 1;
+			break;
+		}
+	}
+	rte_spinlock_unlock(&cpfl_adapter_lock);
+
+	if (found == 0)
+		return NULL;
+
+	return adapter;
+}
+
+static void
+cpfl_adapter_ext_deinit(struct cpfl_adapter_ext *adapter)
+{
+	rte_eal_alarm_cancel(cpfl_dev_alarm_handler, adapter);
+	idpf_adapter_deinit(&adapter->base);
+
+	rte_free(adapter->vports);
+	adapter->vports = NULL;
+}
+
+static int
+cpfl_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
+	       struct rte_pci_device *pci_dev)
+{
+	struct cpfl_vport_param vport_param;
+	struct cpfl_adapter_ext *adapter;
+	struct cpfl_devargs devargs;
+	char name[RTE_ETH_NAME_MAX_LEN];
+	int i, retval;
+	bool first_probe = false;
+
+	if (!cpfl_adapter_list_init) {
+		rte_spinlock_init(&cpfl_adapter_lock);
+		TAILQ_INIT(&cpfl_adapter_list);
+		cpfl_adapter_list_init = true;
+	}
+
+	adapter = cpfl_find_adapter_ext(pci_dev);
+	if (adapter == NULL) {
+		first_probe = true;
+		adapter = rte_zmalloc("cpfl_adapter_ext",
+				      sizeof(struct cpfl_adapter_ext), 0);
+		if (adapter == NULL) {
+			PMD_INIT_LOG(ERR, "Failed to allocate adapter.");
+			return -ENOMEM;
+		}
+
+		retval = cpfl_adapter_ext_init(pci_dev, adapter);
+		if (retval != 0) {
+			PMD_INIT_LOG(ERR, "Failed to init adapter.");
+			return retval;
+		}
+
+		rte_spinlock_lock(&cpfl_adapter_lock);
+		TAILQ_INSERT_TAIL(&cpfl_adapter_list, adapter, next);
+		rte_spinlock_unlock(&cpfl_adapter_lock);
+	}
+
+	retval = cpfl_parse_devargs(pci_dev, adapter, &devargs);
+	if (retval != 0) {
+		PMD_INIT_LOG(ERR, "Failed to parse private devargs");
+		goto err;
+	}
+
+	if (devargs.req_vport_nb == 0) {
+		/* If no vport devarg, create vport 0 by default. */
+		vport_param.adapter = adapter;
+		vport_param.devarg_id = 0;
+		vport_param.idx = cpfl_vport_idx_alloc(adapter);
+		if (vport_param.idx == CPFL_INVALID_VPORT_IDX) {
+			PMD_INIT_LOG(ERR, "No space for vport %u", vport_param.devarg_id);
+			return 0;
+		}
+		snprintf(name, sizeof(name), "cpfl_%s_vport_0",
+			 pci_dev->device.name);
+		retval = rte_eth_dev_create(&pci_dev->device, name,
+					    sizeof(struct idpf_vport),
+					    NULL, NULL, cpfl_dev_vport_init,
+					    &vport_param);
+		if (retval != 0)
+			PMD_DRV_LOG(ERR, "Failed to create default vport 0");
+	} else {
+		for (i = 0; i < devargs.req_vport_nb; i++) {
+			vport_param.adapter = adapter;
+			vport_param.devarg_id = devargs.req_vports[i];
+			vport_param.idx = cpfl_vport_idx_alloc(adapter);
+			if (vport_param.idx == CPFL_INVALID_VPORT_IDX) {
+				PMD_INIT_LOG(ERR, "No space for vport %u", vport_param.devarg_id);
+				break;
+			}
+			snprintf(name, sizeof(name), "cpfl_%s_vport_%d",
+				 pci_dev->device.name,
+				 devargs.req_vports[i]);
+			retval = rte_eth_dev_create(&pci_dev->device, name,
+						    sizeof(struct idpf_vport),
+						    NULL, NULL, cpfl_dev_vport_init,
+						    &vport_param);
+			if (retval != 0)
+				PMD_DRV_LOG(ERR, "Failed to create vport %d",
+					    vport_param.devarg_id);
+		}
+	}
+
+	return 0;
+
+err:
+	if (first_probe) {
+		rte_spinlock_lock(&cpfl_adapter_lock);
+		TAILQ_REMOVE(&cpfl_adapter_list, adapter, next);
+		rte_spinlock_unlock(&cpfl_adapter_lock);
+		cpfl_adapter_ext_deinit(adapter);
+		rte_free(adapter);
+	}
+	return retval;
+}
+
+static int
+cpfl_pci_remove(struct rte_pci_device *pci_dev)
+{
+	struct cpfl_adapter_ext *adapter = cpfl_find_adapter_ext(pci_dev);
+	uint16_t port_id;
+
+	/* Close every ethdev created on this device; they can be iterated with RTE_ETH_FOREACH_DEV_OF */
+	RTE_ETH_FOREACH_DEV_OF(port_id, &pci_dev->device) {
+		rte_eth_dev_close(port_id);
+	}
+
+	rte_spinlock_lock(&cpfl_adapter_lock);
+	TAILQ_REMOVE(&cpfl_adapter_list, adapter, next);
+	rte_spinlock_unlock(&cpfl_adapter_lock);
+	cpfl_adapter_ext_deinit(adapter);
+	rte_free(adapter);
+
+	return 0;
+}
+
+static struct rte_pci_driver rte_cpfl_pmd = {
+	.id_table	= pci_id_cpfl_map,
+	.drv_flags	= RTE_PCI_DRV_NEED_MAPPING,
+	.probe		= cpfl_pci_probe,
+	.remove		= cpfl_pci_remove,
+};
+
+/**
+ * Driver initialization routine.
+ * Invoked once at EAL init time.
+ * Register itself as the [Poll Mode] Driver of PCI devices.
+ */
+RTE_PMD_REGISTER_PCI(net_cpfl, rte_cpfl_pmd);
+RTE_PMD_REGISTER_PCI_TABLE(net_cpfl, pci_id_cpfl_map);
+RTE_PMD_REGISTER_KMOD_DEP(net_cpfl, "* igb_uio | vfio-pci");
+RTE_PMD_REGISTER_PARAM_STRING(net_cpfl,
+			      CPFL_TX_SINGLE_Q "=<0|1> "
+			      CPFL_RX_SINGLE_Q "=<0|1> "
+			      CPFL_VPORT "=[vport_set0,[vport_set1],...]");
+
+RTE_LOG_REGISTER_SUFFIX(cpfl_logtype_init, init, NOTICE);
+RTE_LOG_REGISTER_SUFFIX(cpfl_logtype_driver, driver, NOTICE);
diff --git a/drivers/net/cpfl/cpfl_ethdev.h b/drivers/net/cpfl/cpfl_ethdev.h
new file mode 100644
index 0000000000..9ca39b4558
--- /dev/null
+++ b/drivers/net/cpfl/cpfl_ethdev.h
@@ -0,0 +1,78 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Intel Corporation
+ */
+
+#ifndef _CPFL_ETHDEV_H_
+#define _CPFL_ETHDEV_H_
+
+#include <stdint.h>
+#include <rte_malloc.h>
+#include <rte_spinlock.h>
+#include <rte_ethdev.h>
+#include <rte_kvargs.h>
+#include <ethdev_driver.h>
+#include <ethdev_pci.h>
+
+#include "cpfl_logs.h"
+
+#include <idpf_common_device.h>
+#include <idpf_common_virtchnl.h>
+#include <base/idpf_prototype.h>
+#include <base/virtchnl2.h>
+
+#define CPFL_MAX_VPORT_NUM	8
+
+#define CPFL_INVALID_VPORT_IDX	0xffff
+
+#define CPFL_MIN_BUF_SIZE	1024
+#define CPFL_MAX_FRAME_SIZE	9728
+#define CPFL_DEFAULT_MTU	RTE_ETHER_MTU
+
+#define CPFL_NUM_MACADDR_MAX	64
+
+#define CPFL_VLAN_TAG_SIZE	4
+#define CPFL_ETH_OVERHEAD \
+	(RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN + CPFL_VLAN_TAG_SIZE * 2)
+
+#define CPFL_ADAPTER_NAME_LEN	(PCI_PRI_STR_SIZE + 1)
+
+#define CPFL_ALARM_INTERVAL	50000 /* us */
+
+/* Device IDs */
+#define IDPF_DEV_ID_CPF			0x1453
+
+struct cpfl_vport_param {
+	struct cpfl_adapter_ext *adapter;
+	uint16_t devarg_id; /* arg id from user */
+	uint16_t idx;       /* index in adapter->vports[] */
+};
+
+/* Struct used when parse driver specific devargs */
+struct cpfl_devargs {
+	uint16_t req_vports[CPFL_MAX_VPORT_NUM];
+	uint16_t req_vport_nb;
+};
+
+struct cpfl_adapter_ext {
+	TAILQ_ENTRY(cpfl_adapter_ext) next;
+	struct idpf_adapter base;
+
+	char name[CPFL_ADAPTER_NAME_LEN];
+
+	struct idpf_vport **vports;
+	uint16_t max_vport_nb;
+
+	uint16_t cur_vports; /* bit mask of created vport */
+	uint16_t cur_vport_nb;
+
+	uint16_t used_vecs_num;
+};
+
+TAILQ_HEAD(cpfl_adapter_list, cpfl_adapter_ext);
+
+#define CPFL_DEV_TO_PCI(eth_dev)		\
+	RTE_DEV_TO_PCI((eth_dev)->device)
+#define CPFL_ADAPTER_TO_EXT(p)					\
+	container_of((p), struct cpfl_adapter_ext, base)
+
+#endif /* _CPFL_ETHDEV_H_ */
diff --git a/drivers/net/cpfl/cpfl_logs.h b/drivers/net/cpfl/cpfl_logs.h
new file mode 100644
index 0000000000..365b53e8b3
--- /dev/null
+++ b/drivers/net/cpfl/cpfl_logs.h
@@ -0,0 +1,32 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Intel Corporation
+ */
+
+#ifndef _CPFL_LOGS_H_
+#define _CPFL_LOGS_H_
+
+#include <rte_log.h>
+
+extern int cpfl_logtype_init;
+extern int cpfl_logtype_driver;
+
+#define PMD_INIT_LOG(level, ...) \
+	rte_log(RTE_LOG_ ## level, \
+		cpfl_logtype_init, \
+		RTE_FMT("%s(): " \
+			RTE_FMT_HEAD(__VA_ARGS__,) "\n", \
+			__func__, \
+			RTE_FMT_TAIL(__VA_ARGS__,)))
+
+#define PMD_DRV_LOG_RAW(level, ...) \
+	rte_log(RTE_LOG_ ## level, \
+		cpfl_logtype_driver, \
+		RTE_FMT("%s(): " \
+			RTE_FMT_HEAD(__VA_ARGS__,) "\n", \
+			__func__, \
+			RTE_FMT_TAIL(__VA_ARGS__,)))
+
+#define PMD_DRV_LOG(level, fmt, args...) \
+	PMD_DRV_LOG_RAW(level, fmt, ## args)
+
+#endif /* _CPFL_LOGS_H_ */
diff --git a/drivers/net/cpfl/cpfl_rxtx.c b/drivers/net/cpfl/cpfl_rxtx.c
new file mode 100644
index 0000000000..53ba2770de
--- /dev/null
+++ b/drivers/net/cpfl/cpfl_rxtx.c
@@ -0,0 +1,244 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Intel Corporation
+ */
+
+#include <ethdev_driver.h>
+#include <rte_net.h>
+#include <rte_vect.h>
+
+#include "cpfl_ethdev.h"
+#include "cpfl_rxtx.h"
+
+static uint64_t
+cpfl_tx_offload_convert(uint64_t offload)
+{
+	uint64_t ol = 0;
+
+	if ((offload & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM) != 0)
+		ol |= IDPF_TX_OFFLOAD_IPV4_CKSUM;
+	if ((offload & RTE_ETH_TX_OFFLOAD_UDP_CKSUM) != 0)
+		ol |= IDPF_TX_OFFLOAD_UDP_CKSUM;
+	if ((offload & RTE_ETH_TX_OFFLOAD_TCP_CKSUM) != 0)
+		ol |= IDPF_TX_OFFLOAD_TCP_CKSUM;
+	if ((offload & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM) != 0)
+		ol |= IDPF_TX_OFFLOAD_SCTP_CKSUM;
+	if ((offload & RTE_ETH_TX_OFFLOAD_MULTI_SEGS) != 0)
+		ol |= IDPF_TX_OFFLOAD_MULTI_SEGS;
+	if ((offload & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) != 0)
+		ol |= IDPF_TX_OFFLOAD_MBUF_FAST_FREE;
+
+	return ol;
+}
+
+static const struct rte_memzone *
+cpfl_dma_zone_reserve(struct rte_eth_dev *dev, uint16_t queue_idx,
+		      uint16_t len, uint16_t queue_type,
+		      unsigned int socket_id, bool splitq)
+{
+	char ring_name[RTE_MEMZONE_NAMESIZE];
+	const struct rte_memzone *mz;
+	uint32_t ring_size;
+
+	memset(ring_name, 0, RTE_MEMZONE_NAMESIZE);
+	switch (queue_type) {
+	case VIRTCHNL2_QUEUE_TYPE_TX:
+		if (splitq)
+			ring_size = RTE_ALIGN(len * sizeof(struct idpf_flex_tx_sched_desc),
+					      CPFL_DMA_MEM_ALIGN);
+		else
+			ring_size = RTE_ALIGN(len * sizeof(struct idpf_flex_tx_desc),
+					      CPFL_DMA_MEM_ALIGN);
+		rte_memcpy(ring_name, "cpfl Tx ring", sizeof("cpfl Tx ring"));
+		break;
+	case VIRTCHNL2_QUEUE_TYPE_RX:
+		if (splitq)
+			ring_size = RTE_ALIGN(len * sizeof(struct virtchnl2_rx_flex_desc_adv_nic_3),
+					      CPFL_DMA_MEM_ALIGN);
+		else
+			ring_size = RTE_ALIGN(len * sizeof(struct virtchnl2_singleq_rx_buf_desc),
+					      CPFL_DMA_MEM_ALIGN);
+		rte_memcpy(ring_name, "cpfl Rx ring", sizeof("cpfl Rx ring"));
+		break;
+	case VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION:
+		ring_size = RTE_ALIGN(len * sizeof(struct idpf_splitq_tx_compl_desc),
+				      CPFL_DMA_MEM_ALIGN);
+		rte_memcpy(ring_name, "cpfl Tx compl ring", sizeof("cpfl Tx compl ring"));
+		break;
+	case VIRTCHNL2_QUEUE_TYPE_RX_BUFFER:
+		ring_size = RTE_ALIGN(len * sizeof(struct virtchnl2_splitq_rx_buf_desc),
+				      CPFL_DMA_MEM_ALIGN);
+		rte_memcpy(ring_name, "cpfl Rx buf ring", sizeof("cpfl Rx buf ring"));
+		break;
+	default:
+		PMD_INIT_LOG(ERR, "Invalid queue type");
+		return NULL;
+	}
+
+	mz = rte_eth_dma_zone_reserve(dev, ring_name, queue_idx,
+				      ring_size, CPFL_RING_BASE_ALIGN,
+				      socket_id);
+	if (mz == NULL) {
+		PMD_INIT_LOG(ERR, "Failed to reserve DMA memory for ring");
+		return NULL;
+	}
+
+	/* Zero all the descriptors in the ring. */
+	memset(mz->addr, 0, ring_size);
+
+	return mz;
+}
+
+static void
+cpfl_dma_zone_release(const struct rte_memzone *mz)
+{
+	rte_memzone_free(mz);
+}
+
+static int
+cpfl_tx_complq_setup(struct rte_eth_dev *dev, struct idpf_tx_queue *txq,
+		     uint16_t queue_idx, uint16_t nb_desc,
+		     unsigned int socket_id)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	const struct rte_memzone *mz;
+	struct idpf_tx_queue *cq;
+	int ret;
+
+	cq = rte_zmalloc_socket("cpfl splitq cq",
+				sizeof(struct idpf_tx_queue),
+				RTE_CACHE_LINE_SIZE,
+				socket_id);
+	if (cq == NULL) {
+		PMD_INIT_LOG(ERR, "Failed to allocate memory for Tx compl queue");
+		ret = -ENOMEM;
+		goto err_cq_alloc;
+	}
+
+	cq->nb_tx_desc = nb_desc;
+	cq->queue_id = vport->chunks_info.tx_compl_start_qid + queue_idx;
+	cq->port_id = dev->data->port_id;
+	cq->txqs = dev->data->tx_queues;
+	cq->tx_start_qid = vport->chunks_info.tx_start_qid;
+
+	mz = cpfl_dma_zone_reserve(dev, queue_idx, nb_desc,
+				   VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION,
+				   socket_id, true);
+	if (mz == NULL) {
+		ret = -ENOMEM;
+		goto err_mz_reserve;
+	}
+	cq->tx_ring_phys_addr = mz->iova;
+	cq->compl_ring = mz->addr;
+	cq->mz = mz;
+	reset_split_tx_complq(cq);
+
+	txq->complq = cq;
+
+	return 0;
+
+err_mz_reserve:
+	rte_free(cq);
+err_cq_alloc:
+	return ret;
+}
+
+int
+cpfl_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
+		    uint16_t nb_desc, unsigned int socket_id,
+		    const struct rte_eth_txconf *tx_conf)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_adapter *adapter = vport->adapter;
+	uint16_t tx_rs_thresh, tx_free_thresh;
+	struct idpf_hw *hw = &adapter->hw;
+	const struct rte_memzone *mz;
+	struct idpf_tx_queue *txq;
+	uint64_t offloads;
+	uint16_t len;
+	bool is_splitq;
+	int ret;
+
+	offloads = tx_conf->offloads | dev->data->dev_conf.txmode.offloads;
+
+	tx_rs_thresh = (uint16_t)((tx_conf->tx_rs_thresh > 0) ?
+		tx_conf->tx_rs_thresh : CPFL_DEFAULT_TX_RS_THRESH);
+	tx_free_thresh = (uint16_t)((tx_conf->tx_free_thresh > 0) ?
+		tx_conf->tx_free_thresh : CPFL_DEFAULT_TX_FREE_THRESH);
+	if (check_tx_thresh(nb_desc, tx_rs_thresh, tx_free_thresh) != 0)
+		return -EINVAL;
+
+	/* Allocate the TX queue data structure. */
+	txq = rte_zmalloc_socket("cpfl txq",
+				 sizeof(struct idpf_tx_queue),
+				 RTE_CACHE_LINE_SIZE,
+				 socket_id);
+	if (txq == NULL) {
+		PMD_INIT_LOG(ERR, "Failed to allocate memory for tx queue structure");
+		ret = -ENOMEM;
+		goto err_txq_alloc;
+	}
+
+	is_splitq = !!(vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT);
+
+	txq->nb_tx_desc = nb_desc;
+	txq->rs_thresh = tx_rs_thresh;
+	txq->free_thresh = tx_free_thresh;
+	txq->queue_id = vport->chunks_info.tx_start_qid + queue_idx;
+	txq->port_id = dev->data->port_id;
+	txq->offloads = cpfl_tx_offload_convert(offloads);
+	txq->tx_deferred_start = tx_conf->tx_deferred_start;
+
+	if (is_splitq)
+		len = 2 * nb_desc;
+	else
+		len = nb_desc;
+	txq->sw_nb_desc = len;
+
+	/* Allocate TX hardware ring descriptors. */
+	mz = cpfl_dma_zone_reserve(dev, queue_idx, nb_desc, VIRTCHNL2_QUEUE_TYPE_TX,
+				   socket_id, is_splitq);
+	if (mz == NULL) {
+		ret = -ENOMEM;
+		goto err_mz_reserve;
+	}
+	txq->tx_ring_phys_addr = mz->iova;
+	txq->mz = mz;
+
+	txq->sw_ring = rte_zmalloc_socket("cpfl tx sw ring",
+					  sizeof(struct idpf_tx_entry) * len,
+					  RTE_CACHE_LINE_SIZE, socket_id);
+	if (txq->sw_ring == NULL) {
+		PMD_INIT_LOG(ERR, "Failed to allocate memory for SW TX ring");
+		ret = -ENOMEM;
+		goto err_sw_ring_alloc;
+	}
+
+	if (!is_splitq) {
+		txq->tx_ring = mz->addr;
+		reset_single_tx_queue(txq);
+	} else {
+		txq->desc_ring = mz->addr;
+		reset_split_tx_descq(txq);
+
+		/* Setup tx completion queue if split model */
+		ret = cpfl_tx_complq_setup(dev, txq, queue_idx,
+					   2 * nb_desc, socket_id);
+		if (ret != 0)
+			goto err_complq_setup;
+	}
+
+	txq->qtx_tail = hw->hw_addr + (vport->chunks_info.tx_qtail_start +
+			queue_idx * vport->chunks_info.tx_qtail_spacing);
+	txq->q_set = true;
+	dev->data->tx_queues[queue_idx] = txq;
+
+	return 0;
+
+err_complq_setup:
+err_sw_ring_alloc:
+	cpfl_dma_zone_release(mz);
+err_mz_reserve:
+	rte_free(txq);
+err_txq_alloc:
+	return ret;
+}
diff --git a/drivers/net/cpfl/cpfl_rxtx.h b/drivers/net/cpfl/cpfl_rxtx.h
new file mode 100644
index 0000000000..232630c5e9
--- /dev/null
+++ b/drivers/net/cpfl/cpfl_rxtx.h
@@ -0,0 +1,25 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Intel Corporation
+ */
+
+#ifndef _CPFL_RXTX_H_
+#define _CPFL_RXTX_H_
+
+#include <idpf_common_rxtx.h>
+#include "cpfl_ethdev.h"
+
+/* Queue length must be a whole multiple of 32 descriptors. */
+#define CPFL_ALIGN_RING_DESC	32
+#define CPFL_MIN_RING_DESC	32
+#define CPFL_MAX_RING_DESC	4096
+#define CPFL_DMA_MEM_ALIGN	4096
+/* Base address of the HW descriptor ring should be 128B aligned. */
+#define CPFL_RING_BASE_ALIGN	128
+
+#define CPFL_DEFAULT_TX_RS_THRESH	32
+#define CPFL_DEFAULT_TX_FREE_THRESH	32
+
+int cpfl_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
+			uint16_t nb_desc, unsigned int socket_id,
+			const struct rte_eth_txconf *tx_conf);
+#endif /* _CPFL_RXTX_H_ */
diff --git a/drivers/net/cpfl/meson.build b/drivers/net/cpfl/meson.build
new file mode 100644
index 0000000000..c721732b50
--- /dev/null
+++ b/drivers/net/cpfl/meson.build
@@ -0,0 +1,14 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2023 Intel Corporation
+
+if is_windows
+    build = false
+    reason = 'not supported on Windows'
+    subdir_done()
+endif
+
+deps += ['common_idpf']
+
+sources = files(
+        'cpfl_ethdev.c',
+)
\ No newline at end of file
diff --git a/drivers/net/meson.build b/drivers/net/meson.build
index 6470bf3636..a8ca338875 100644
--- a/drivers/net/meson.build
+++ b/drivers/net/meson.build
@@ -13,6 +13,7 @@ drivers = [
         'bnxt',
         'bonding',
         'cnxk',
+        'cpfl',
         'cxgbe',
         'dpaa',
         'dpaa2',
-- 
2.25.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v5 02/21] net/cpfl: add Tx queue setup
  2023-02-09  8:45     ` [PATCH v5 00/21] add support for cpfl PMD in DPDK Mingxia Liu
  2023-02-09  8:45       ` [PATCH v5 01/21] net/cpfl: support device initialization Mingxia Liu
@ 2023-02-09  8:45       ` Mingxia Liu
  2023-02-09  8:45       ` [PATCH v5 03/21] net/cpfl: add Rx " Mingxia Liu
                         ` (20 subsequent siblings)
  22 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-02-09  8:45 UTC (permalink / raw)
  To: dev, qi.z.zhang, jingjing.wu, beilei.xing; +Cc: Mingxia Liu

Add support for tx_queue_setup ops.

In the single queue model, the same descriptor queue is used by SW to
post buffer descriptors to HW and by HW to post completed descriptors
to SW.

In the split queue model, "RX buffer queues" are used to pass
descriptor buffers from SW to HW while Rx queues are used only to
pass the descriptor completions, that is, descriptors that point
to completed buffers, from HW to SW. This is contrary to the single
queue model in which Rx queues are used for both purposes.

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/net/cpfl/cpfl_ethdev.c | 13 +++++++++++++
 drivers/net/cpfl/cpfl_rxtx.c   |  8 ++++----
 drivers/net/cpfl/meson.build   |  1 +
 3 files changed, 18 insertions(+), 4 deletions(-)

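A rough sketch of the two layouts described above; in the split model the
diff below allocates the extra completion ring in cpfl_tx_complq_setup():

    single queue model:
        SW --(descriptors)--> [ one Tx ring ] --> HW
        HW --(completions)--> [ same ring   ] --> SW

    split queue model:
        SW --(descriptors)--> [ Tx descriptor queue ] --> HW
        HW --(completions)--> [ Tx completion queue ] --> SW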
diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index e10c6346ba..abb9f8d617 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -12,6 +12,7 @@
 #include <rte_alarm.h>
 
 #include "cpfl_ethdev.h"
+#include "cpfl_rxtx.h"
 
 #define CPFL_TX_SINGLE_Q	"tx_single"
 #define CPFL_RX_SINGLE_Q	"rx_single"
@@ -96,6 +97,17 @@ cpfl_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->max_mtu = vport->max_mtu;
 	dev_info->min_mtu = RTE_ETHER_MIN_MTU;
 
+	dev_info->default_txconf = (struct rte_eth_txconf) {
+		.tx_free_thresh = CPFL_DEFAULT_TX_FREE_THRESH,
+		.tx_rs_thresh = CPFL_DEFAULT_TX_RS_THRESH,
+	};
+
+	dev_info->tx_desc_lim = (struct rte_eth_desc_lim) {
+		.nb_max = CPFL_MAX_RING_DESC,
+		.nb_min = CPFL_MIN_RING_DESC,
+		.nb_align = CPFL_ALIGN_RING_DESC,
+	};
+
 	return 0;
 }
 
@@ -513,6 +525,7 @@ cpfl_adapter_ext_init(struct rte_pci_device *pci_dev, struct cpfl_adapter_ext *a
 static const struct eth_dev_ops cpfl_eth_dev_ops = {
 	.dev_configure			= cpfl_dev_configure,
 	.dev_close			= cpfl_dev_close,
+	.tx_queue_setup			= cpfl_tx_queue_setup,
 	.dev_infos_get			= cpfl_dev_info_get,
 	.link_update			= cpfl_dev_link_update,
 	.dev_supported_ptypes_get	= cpfl_dev_supported_ptypes_get,
diff --git a/drivers/net/cpfl/cpfl_rxtx.c b/drivers/net/cpfl/cpfl_rxtx.c
index 53ba2770de..e0f8484b19 100644
--- a/drivers/net/cpfl/cpfl_rxtx.c
+++ b/drivers/net/cpfl/cpfl_rxtx.c
@@ -130,7 +130,7 @@ cpfl_tx_complq_setup(struct rte_eth_dev *dev, struct idpf_tx_queue *txq,
 	cq->tx_ring_phys_addr = mz->iova;
 	cq->compl_ring = mz->addr;
 	cq->mz = mz;
-	reset_split_tx_complq(cq);
+	idpf_qc_split_tx_complq_reset(cq);
 
 	txq->complq = cq;
 
@@ -164,7 +164,7 @@ cpfl_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 		tx_conf->tx_rs_thresh : CPFL_DEFAULT_TX_RS_THRESH);
 	tx_free_thresh = (uint16_t)((tx_conf->tx_free_thresh > 0) ?
 		tx_conf->tx_free_thresh : CPFL_DEFAULT_TX_FREE_THRESH);
-	if (check_tx_thresh(nb_desc, tx_rs_thresh, tx_free_thresh) != 0)
+	if (idpf_qc_tx_thresh_check(nb_desc, tx_rs_thresh, tx_free_thresh) != 0)
 		return -EINVAL;
 
 	/* Allocate the TX queue data structure. */
@@ -215,10 +215,10 @@ cpfl_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 
 	if (!is_splitq) {
 		txq->tx_ring = mz->addr;
-		reset_single_tx_queue(txq);
+		idpf_qc_single_tx_queue_reset(txq);
 	} else {
 		txq->desc_ring = mz->addr;
-		reset_split_tx_descq(txq);
+		idpf_qc_split_tx_descq_reset(txq);
 
 		/* Setup tx completion queue if split model */
 		ret = cpfl_tx_complq_setup(dev, txq, queue_idx,
diff --git a/drivers/net/cpfl/meson.build b/drivers/net/cpfl/meson.build
index c721732b50..1894423689 100644
--- a/drivers/net/cpfl/meson.build
+++ b/drivers/net/cpfl/meson.build
@@ -11,4 +11,5 @@ deps += ['common_idpf']
 
 sources = files(
         'cpfl_ethdev.c',
+        'cpfl_rxtx.c',
 )
\ No newline at end of file
-- 
2.25.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v5 03/21] net/cpfl: add Rx queue setup
  2023-02-09  8:45     ` [PATCH v5 00/21] add support for cpfl PMD in DPDK Mingxia Liu
  2023-02-09  8:45       ` [PATCH v5 01/21] net/cpfl: support device initialization Mingxia Liu
  2023-02-09  8:45       ` [PATCH v5 02/21] net/cpfl: add Tx queue setup Mingxia Liu
@ 2023-02-09  8:45       ` Mingxia Liu
  2023-02-09  8:45       ` [PATCH v5 04/21] net/cpfl: support device start and stop Mingxia Liu
                         ` (19 subsequent siblings)
  22 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-02-09  8:45 UTC (permalink / raw)
  To: dev, qi.z.zhang, jingjing.wu, beilei.xing; +Cc: Mingxia Liu

Add support for rx_queue_setup ops.

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/net/cpfl/cpfl_ethdev.c |  11 ++
 drivers/net/cpfl/cpfl_rxtx.c   | 232 +++++++++++++++++++++++++++++++++
 drivers/net/cpfl/cpfl_rxtx.h   |   6 +
 3 files changed, 249 insertions(+)

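For orientation, a sketch of the split queue Rx layout that
cpfl_rx_split_bufq_setup() below builds, with two buffer queues feeding
one Rx (completion) queue:

      [ Rx buffer queue 1 ] --\
       (SW posts buffers)      >-- HW --> [ Rx queue ] --> SW
      [ Rx buffer queue 2 ] --/           (completed packets)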
diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index abb9f8d617..fb530c7690 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -102,12 +102,22 @@ cpfl_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 		.tx_rs_thresh = CPFL_DEFAULT_TX_RS_THRESH,
 	};
 
+	dev_info->default_rxconf = (struct rte_eth_rxconf) {
+		.rx_free_thresh = CPFL_DEFAULT_RX_FREE_THRESH,
+	};
+
 	dev_info->tx_desc_lim = (struct rte_eth_desc_lim) {
 		.nb_max = CPFL_MAX_RING_DESC,
 		.nb_min = CPFL_MIN_RING_DESC,
 		.nb_align = CPFL_ALIGN_RING_DESC,
 	};
 
+	dev_info->rx_desc_lim = (struct rte_eth_desc_lim) {
+		.nb_max = CPFL_MAX_RING_DESC,
+		.nb_min = CPFL_MIN_RING_DESC,
+		.nb_align = CPFL_ALIGN_RING_DESC,
+	};
+
 	return 0;
 }
 
@@ -525,6 +535,7 @@ cpfl_adapter_ext_init(struct rte_pci_device *pci_dev, struct cpfl_adapter_ext *a
 static const struct eth_dev_ops cpfl_eth_dev_ops = {
 	.dev_configure			= cpfl_dev_configure,
 	.dev_close			= cpfl_dev_close,
+	.rx_queue_setup			= cpfl_rx_queue_setup,
 	.tx_queue_setup			= cpfl_tx_queue_setup,
 	.dev_infos_get			= cpfl_dev_info_get,
 	.link_update			= cpfl_dev_link_update,
diff --git a/drivers/net/cpfl/cpfl_rxtx.c b/drivers/net/cpfl/cpfl_rxtx.c
index e0f8484b19..4083e8c3b6 100644
--- a/drivers/net/cpfl/cpfl_rxtx.c
+++ b/drivers/net/cpfl/cpfl_rxtx.c
@@ -9,6 +9,25 @@
 #include "cpfl_ethdev.h"
 #include "cpfl_rxtx.h"
 
+static uint64_t
+cpfl_rx_offload_convert(uint64_t offload)
+{
+	uint64_t ol = 0;
+
+	if ((offload & RTE_ETH_RX_OFFLOAD_IPV4_CKSUM) != 0)
+		ol |= IDPF_RX_OFFLOAD_IPV4_CKSUM;
+	if ((offload & RTE_ETH_RX_OFFLOAD_UDP_CKSUM) != 0)
+		ol |= IDPF_RX_OFFLOAD_UDP_CKSUM;
+	if ((offload & RTE_ETH_RX_OFFLOAD_TCP_CKSUM) != 0)
+		ol |= IDPF_RX_OFFLOAD_TCP_CKSUM;
+	if ((offload & RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM) != 0)
+		ol |= IDPF_RX_OFFLOAD_OUTER_IPV4_CKSUM;
+	if ((offload & RTE_ETH_RX_OFFLOAD_TIMESTAMP) != 0)
+		ol |= IDPF_RX_OFFLOAD_TIMESTAMP;
+
+	return ol;
+}
+
 static uint64_t
 cpfl_tx_offload_convert(uint64_t offload)
 {
@@ -94,6 +113,219 @@ cpfl_dma_zone_release(const struct rte_memzone *mz)
 	rte_memzone_free(mz);
 }
 
+static int
+cpfl_rx_split_bufq_setup(struct rte_eth_dev *dev, struct idpf_rx_queue *rxq,
+			 uint16_t queue_idx, uint16_t rx_free_thresh,
+			 uint16_t nb_desc, unsigned int socket_id,
+			 struct rte_mempool *mp, uint8_t bufq_id)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_adapter *adapter = vport->adapter;
+	struct idpf_hw *hw = &adapter->hw;
+	const struct rte_memzone *mz;
+	struct idpf_rx_queue *bufq;
+	uint16_t len;
+	int ret;
+
+	bufq = rte_zmalloc_socket("cpfl bufq",
+				   sizeof(struct idpf_rx_queue),
+				   RTE_CACHE_LINE_SIZE,
+				   socket_id);
+	if (bufq == NULL) {
+		PMD_INIT_LOG(ERR, "Failed to allocate memory for rx buffer queue.");
+		ret = -ENOMEM;
+		goto err_bufq1_alloc;
+	}
+
+	bufq->mp = mp;
+	bufq->nb_rx_desc = nb_desc;
+	bufq->rx_free_thresh = rx_free_thresh;
+	bufq->queue_id = vport->chunks_info.rx_buf_start_qid + queue_idx;
+	bufq->port_id = dev->data->port_id;
+	bufq->rx_hdr_len = 0;
+	bufq->adapter = adapter;
+
+	len = rte_pktmbuf_data_room_size(bufq->mp) - RTE_PKTMBUF_HEADROOM;
+	bufq->rx_buf_len = len;
+
+	/* Allocate a little more to support bulk allocate. */
+	len = nb_desc + IDPF_RX_MAX_BURST;
+
+	mz = cpfl_dma_zone_reserve(dev, queue_idx, len,
+				   VIRTCHNL2_QUEUE_TYPE_RX_BUFFER,
+				   socket_id, true);
+	if (mz == NULL) {
+		ret = -ENOMEM;
+		goto err_mz_reserve;
+	}
+
+	bufq->rx_ring_phys_addr = mz->iova;
+	bufq->rx_ring = mz->addr;
+	bufq->mz = mz;
+
+	bufq->sw_ring =
+		rte_zmalloc_socket("cpfl rx bufq sw ring",
+				   sizeof(struct rte_mbuf *) * len,
+				   RTE_CACHE_LINE_SIZE,
+				   socket_id);
+	if (bufq->sw_ring == NULL) {
+		PMD_INIT_LOG(ERR, "Failed to allocate memory for SW ring");
+		ret = -ENOMEM;
+		goto err_sw_ring_alloc;
+	}
+
+	idpf_qc_split_rx_bufq_reset(bufq);
+	bufq->qrx_tail = hw->hw_addr + (vport->chunks_info.rx_buf_qtail_start +
+			 queue_idx * vport->chunks_info.rx_buf_qtail_spacing);
+	bufq->q_set = true;
+
+	if (bufq_id == 1) {
+		rxq->bufq1 = bufq;
+	} else if (bufq_id == 2) {
+		rxq->bufq2 = bufq;
+	} else {
+		PMD_INIT_LOG(ERR, "Invalid buffer queue index.");
+		ret = -EINVAL;
+		goto err_bufq_id;
+	}
+
+	return 0;
+
+err_bufq_id:
+	rte_free(bufq->sw_ring);
+err_sw_ring_alloc:
+	cpfl_dma_zone_release(mz);
+err_mz_reserve:
+	rte_free(bufq);
+err_bufq1_alloc:
+	return ret;
+}
+
+static void
+cpfl_rx_split_bufq_release(struct idpf_rx_queue *bufq)
+{
+	rte_free(bufq->sw_ring);
+	cpfl_dma_zone_release(bufq->mz);
+	rte_free(bufq);
+}
+
+int
+cpfl_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
+		    uint16_t nb_desc, unsigned int socket_id,
+		    const struct rte_eth_rxconf *rx_conf,
+		    struct rte_mempool *mp)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_adapter *adapter = vport->adapter;
+	struct idpf_hw *hw = &adapter->hw;
+	const struct rte_memzone *mz;
+	struct idpf_rx_queue *rxq;
+	uint16_t rx_free_thresh;
+	uint64_t offloads;
+	bool is_splitq;
+	uint16_t len;
+	int ret;
+
+	offloads = rx_conf->offloads | dev->data->dev_conf.rxmode.offloads;
+
+	/* Check free threshold */
+	rx_free_thresh = (rx_conf->rx_free_thresh == 0) ?
+		CPFL_DEFAULT_RX_FREE_THRESH :
+		rx_conf->rx_free_thresh;
+	if (idpf_qc_rx_thresh_check(nb_desc, rx_free_thresh) != 0)
+		return -EINVAL;
+
+	/* Setup Rx queue */
+	rxq = rte_zmalloc_socket("cpfl rxq",
+				 sizeof(struct idpf_rx_queue),
+				 RTE_CACHE_LINE_SIZE,
+				 socket_id);
+	if (rxq == NULL) {
+		PMD_INIT_LOG(ERR, "Failed to allocate memory for rx queue data structure");
+		ret = -ENOMEM;
+		goto err_rxq_alloc;
+	}
+
+	is_splitq = !!(vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT);
+
+	rxq->mp = mp;
+	rxq->nb_rx_desc = nb_desc;
+	rxq->rx_free_thresh = rx_free_thresh;
+	rxq->queue_id = vport->chunks_info.rx_start_qid + queue_idx;
+	rxq->port_id = dev->data->port_id;
+	rxq->rx_deferred_start = rx_conf->rx_deferred_start;
+	rxq->rx_hdr_len = 0;
+	rxq->adapter = adapter;
+	rxq->offloads = cpfl_rx_offload_convert(offloads);
+
+	len = rte_pktmbuf_data_room_size(rxq->mp) - RTE_PKTMBUF_HEADROOM;
+	rxq->rx_buf_len = len;
+
+	/* Allocate a little more to support bulk allocate. */
+	len = nb_desc + IDPF_RX_MAX_BURST;
+	mz = cpfl_dma_zone_reserve(dev, queue_idx, len, VIRTCHNL2_QUEUE_TYPE_RX,
+				   socket_id, is_splitq);
+	if (mz == NULL) {
+		ret = -ENOMEM;
+		goto err_mz_reserve;
+	}
+	rxq->rx_ring_phys_addr = mz->iova;
+	rxq->rx_ring = mz->addr;
+	rxq->mz = mz;
+
+	if (!is_splitq) {
+		rxq->sw_ring = rte_zmalloc_socket("cpfl rxq sw ring",
+						  sizeof(struct rte_mbuf *) * len,
+						  RTE_CACHE_LINE_SIZE,
+						  socket_id);
+		if (rxq->sw_ring == NULL) {
+			PMD_INIT_LOG(ERR, "Failed to allocate memory for SW ring");
+			ret = -ENOMEM;
+			goto err_sw_ring_alloc;
+		}
+
+		idpf_qc_single_rx_queue_reset(rxq);
+		rxq->qrx_tail = hw->hw_addr + (vport->chunks_info.rx_qtail_start +
+				queue_idx * vport->chunks_info.rx_qtail_spacing);
+	} else {
+		idpf_qc_split_rx_descq_reset(rxq);
+
+		/* Setup Rx buffer queues */
+		ret = cpfl_rx_split_bufq_setup(dev, rxq, 2 * queue_idx,
+					       rx_free_thresh, nb_desc,
+					       socket_id, mp, 1);
+		if (ret != 0) {
+			PMD_INIT_LOG(ERR, "Failed to setup buffer queue 1");
+			ret = -EINVAL;
+			goto err_bufq1_setup;
+		}
+
+		ret = cpfl_rx_split_bufq_setup(dev, rxq, 2 * queue_idx + 1,
+					       rx_free_thresh, nb_desc,
+					       socket_id, mp, 2);
+		if (ret != 0) {
+			PMD_INIT_LOG(ERR, "Failed to setup buffer queue 2");
+			ret = -EINVAL;
+			goto err_bufq2_setup;
+		}
+	}
+
+	rxq->q_set = true;
+	dev->data->rx_queues[queue_idx] = rxq;
+
+	return 0;
+
+err_bufq2_setup:
+	cpfl_rx_split_bufq_release(rxq->bufq1);
+err_bufq1_setup:
+err_sw_ring_alloc:
+	cpfl_dma_zone_release(mz);
+err_mz_reserve:
+	rte_free(rxq);
+err_rxq_alloc:
+	return ret;
+}
+
 static int
 cpfl_tx_complq_setup(struct rte_eth_dev *dev, struct idpf_tx_queue *txq,
 		     uint16_t queue_idx, uint16_t nb_desc,
diff --git a/drivers/net/cpfl/cpfl_rxtx.h b/drivers/net/cpfl/cpfl_rxtx.h
index 232630c5e9..e0221abfa3 100644
--- a/drivers/net/cpfl/cpfl_rxtx.h
+++ b/drivers/net/cpfl/cpfl_rxtx.h
@@ -16,10 +16,16 @@
 /* Base address of the HW descriptor ring should be 128B aligned. */
 #define CPFL_RING_BASE_ALIGN	128
 
+#define CPFL_DEFAULT_RX_FREE_THRESH	32
+
 #define CPFL_DEFAULT_TX_RS_THRESH	32
 #define CPFL_DEFAULT_TX_FREE_THRESH	32
 
 int cpfl_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 			uint16_t nb_desc, unsigned int socket_id,
 			const struct rte_eth_txconf *tx_conf);
+int cpfl_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
+			uint16_t nb_desc, unsigned int socket_id,
+			const struct rte_eth_rxconf *rx_conf,
+			struct rte_mempool *mp);
 #endif /* _CPFL_RXTX_H_ */
-- 
2.25.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v5 04/21] net/cpfl: support device start and stop
  2023-02-09  8:45     ` [PATCH v5 00/21] add support for cpfl PMD in DPDK Mingxia Liu
                         ` (2 preceding siblings ...)
  2023-02-09  8:45       ` [PATCH v5 03/21] net/cpfl: add Rx " Mingxia Liu
@ 2023-02-09  8:45       ` Mingxia Liu
  2023-02-09  8:45       ` [PATCH v5 05/21] net/cpfl: support queue start Mingxia Liu
                         ` (18 subsequent siblings)
  22 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-02-09  8:45 UTC (permalink / raw)
  To: dev, qi.z.zhang, jingjing.wu, beilei.xing; +Cc: Mingxia Liu

Add dev ops dev_start, dev_stop and link_update.
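
A minimal sketch of driving the new ops from an application (port id and
the printed output are illustrative):

    #include <stdio.h>
    #include <rte_ethdev.h>

    static int
    start_check_stop(uint16_t port_id)
    {
        struct rte_eth_link link;
        int ret;

        ret = rte_eth_dev_start(port_id);        /* -> cpfl_dev_start() */
        if (ret != 0)
            return ret;

        /* -> cpfl_dev_link_update() */
        ret = rte_eth_link_get_nowait(port_id, &link);
        if (ret == 0 && link.link_status == RTE_ETH_LINK_UP)
            printf("port %u up, %u Mbps\n", port_id, link.link_speed);

        return rte_eth_dev_stop(port_id);        /* -> cpfl_dev_stop() */
    }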

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/net/cpfl/cpfl_ethdev.c | 35 ++++++++++++++++++++++++++++++++++
 1 file changed, 35 insertions(+)

diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index fb530c7690..423a8dcdcd 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -184,12 +184,45 @@ cpfl_dev_configure(struct rte_eth_dev *dev)
 	return 0;
 }
 
+static int
+cpfl_dev_start(struct rte_eth_dev *dev)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	int ret;
+
+	ret = idpf_vc_vport_ena_dis(vport, true);
+	if (ret != 0) {
+		PMD_DRV_LOG(ERR, "Failed to enable vport");
+		return ret;
+	}
+
+	vport->stopped = 0;
+
+	return 0;
+}
+
+static int
+cpfl_dev_stop(struct rte_eth_dev *dev)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+
+	if (vport->stopped == 1)
+		return 0;
+
+	idpf_vc_vport_ena_dis(vport, false);
+
+	vport->stopped = 1;
+
+	return 0;
+}
+
 static int
 cpfl_dev_close(struct rte_eth_dev *dev)
 {
 	struct idpf_vport *vport = dev->data->dev_private;
 	struct cpfl_adapter_ext *adapter = CPFL_ADAPTER_TO_EXT(vport->adapter);
 
+	cpfl_dev_stop(dev);
 	idpf_vport_deinit(vport);
 
 	adapter->cur_vports &= ~RTE_BIT32(vport->devarg_id);
@@ -538,6 +571,8 @@ static const struct eth_dev_ops cpfl_eth_dev_ops = {
 	.rx_queue_setup			= cpfl_rx_queue_setup,
 	.tx_queue_setup			= cpfl_tx_queue_setup,
 	.dev_infos_get			= cpfl_dev_info_get,
+	.dev_start			= cpfl_dev_start,
+	.dev_stop			= cpfl_dev_stop,
 	.link_update			= cpfl_dev_link_update,
 	.dev_supported_ptypes_get	= cpfl_dev_supported_ptypes_get,
 };
-- 
2.25.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v5 05/21] net/cpfl: support queue start
  2023-02-09  8:45     ` [PATCH v5 00/21] add support for cpfl PMD in DPDK Mingxia Liu
                         ` (3 preceding siblings ...)
  2023-02-09  8:45       ` [PATCH v5 04/21] net/cpfl: support device start and stop Mingxia Liu
@ 2023-02-09  8:45       ` Mingxia Liu
  2023-02-09  8:45       ` [PATCH v5 06/21] net/cpfl: support queue stop Mingxia Liu
                         ` (17 subsequent siblings)
  22 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-02-09  8:45 UTC (permalink / raw)
  To: dev, qi.z.zhang, jingjing.wu, beilei.xing; +Cc: Mingxia Liu

Add support for these device ops:
 - rx_queue_start
 - tx_queue_start
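
A minimal sketch, assuming the queues were set up with deferred start so
that rte_eth_dev_start() leaves them stopped and they are kicked
individually (queue id is illustrative):

    #include <rte_ethdev.h>

    static int
    start_deferred_queues(uint16_t port_id, uint16_t qid)
    {
        int ret;

        ret = rte_eth_dev_rx_queue_start(port_id, qid); /* -> cpfl_rx_queue_start() */
        if (ret != 0)
            return ret;

        return rte_eth_dev_tx_queue_start(port_id, qid); /* -> cpfl_tx_queue_start() */
    }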

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/net/cpfl/cpfl_ethdev.c |  41 ++++++++++
 drivers/net/cpfl/cpfl_rxtx.c   | 138 +++++++++++++++++++++++++++++++++
 drivers/net/cpfl/cpfl_rxtx.h   |   4 +
 3 files changed, 183 insertions(+)

diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index 423a8dcdcd..60339c836d 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -184,12 +184,51 @@ cpfl_dev_configure(struct rte_eth_dev *dev)
 	return 0;
 }
 
+static int
+cpfl_start_queues(struct rte_eth_dev *dev)
+{
+	struct idpf_rx_queue *rxq;
+	struct idpf_tx_queue *txq;
+	int err = 0;
+	int i;
+
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		txq = dev->data->tx_queues[i];
+		if (txq == NULL || txq->tx_deferred_start)
+			continue;
+		err = cpfl_tx_queue_start(dev, i);
+		if (err != 0) {
+			PMD_DRV_LOG(ERR, "Fail to start Tx queue %u", i);
+			return err;
+		}
+	}
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		rxq = dev->data->rx_queues[i];
+		if (rxq == NULL || rxq->rx_deferred_start)
+			continue;
+		err = cpfl_rx_queue_start(dev, i);
+		if (err != 0) {
+			PMD_DRV_LOG(ERR, "Fail to start Rx queue %u", i);
+			return err;
+		}
+	}
+
+	return err;
+}
+
 static int
 cpfl_dev_start(struct rte_eth_dev *dev)
 {
 	struct idpf_vport *vport = dev->data->dev_private;
 	int ret;
 
+	ret = cpfl_start_queues(dev);
+	if (ret != 0) {
+		PMD_DRV_LOG(ERR, "Failed to start queues");
+		return ret;
+	}
+
 	ret = idpf_vc_vport_ena_dis(vport, true);
 	if (ret != 0) {
 		PMD_DRV_LOG(ERR, "Failed to enable vport");
@@ -574,6 +613,8 @@ static const struct eth_dev_ops cpfl_eth_dev_ops = {
 	.dev_start			= cpfl_dev_start,
 	.dev_stop			= cpfl_dev_stop,
 	.link_update			= cpfl_dev_link_update,
+	.rx_queue_start			= cpfl_rx_queue_start,
+	.tx_queue_start			= cpfl_tx_queue_start,
 	.dev_supported_ptypes_get	= cpfl_dev_supported_ptypes_get,
 };
 
diff --git a/drivers/net/cpfl/cpfl_rxtx.c b/drivers/net/cpfl/cpfl_rxtx.c
index 4083e8c3b6..2813e83a67 100644
--- a/drivers/net/cpfl/cpfl_rxtx.c
+++ b/drivers/net/cpfl/cpfl_rxtx.c
@@ -474,3 +474,141 @@ cpfl_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 err_txq_alloc:
 	return ret;
 }
+
+int
+cpfl_rx_queue_init(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+{
+	struct idpf_rx_queue *rxq;
+	int err;
+
+	if (rx_queue_id >= dev->data->nb_rx_queues)
+		return -EINVAL;
+
+	rxq = dev->data->rx_queues[rx_queue_id];
+
+	if (rxq == NULL || !rxq->q_set) {
+		PMD_DRV_LOG(ERR, "RX queue %u not available or setup",
+					rx_queue_id);
+		return -EINVAL;
+	}
+
+	if (rxq->bufq1 == NULL) {
+		/* Single queue */
+		err = idpf_qc_single_rxq_mbufs_alloc(rxq);
+		if (err != 0) {
+			PMD_DRV_LOG(ERR, "Failed to allocate RX queue mbuf");
+			return err;
+		}
+
+		rte_wmb();
+
+		/* Init the RX tail register. */
+		IDPF_PCI_REG_WRITE(rxq->qrx_tail, rxq->nb_rx_desc - 1);
+	} else {
+		/* Split queue */
+		err = idpf_qc_split_rxq_mbufs_alloc(rxq->bufq1);
+		if (err != 0) {
+			PMD_DRV_LOG(ERR, "Failed to allocate RX buffer queue mbuf");
+			return err;
+		}
+		err = idpf_qc_split_rxq_mbufs_alloc(rxq->bufq2);
+		if (err != 0) {
+			PMD_DRV_LOG(ERR, "Failed to allocate RX buffer queue mbuf");
+			return err;
+		}
+
+		rte_wmb();
+
+		/* Init the RX tail register. */
+		IDPF_PCI_REG_WRITE(rxq->bufq1->qrx_tail, rxq->bufq1->rx_tail);
+		IDPF_PCI_REG_WRITE(rxq->bufq2->qrx_tail, rxq->bufq2->rx_tail);
+	}
+
+	return err;
+}
+
+int
+cpfl_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_rx_queue *rxq =
+		dev->data->rx_queues[rx_queue_id];
+	int err = 0;
+
+	err = idpf_vc_rxq_config(vport, rxq);
+	if (err != 0) {
+		PMD_DRV_LOG(ERR, "Fail to configure Rx queue %u", rx_queue_id);
+		return err;
+	}
+
+	err = cpfl_rx_queue_init(dev, rx_queue_id);
+	if (err != 0) {
+		PMD_DRV_LOG(ERR, "Failed to init RX queue %u",
+			    rx_queue_id);
+		return err;
+	}
+
+	/* Ready to switch the queue on */
+	err = idpf_vc_queue_switch(vport, rx_queue_id, true, true);
+	if (err != 0) {
+		PMD_DRV_LOG(ERR, "Failed to switch RX queue %u on",
+			    rx_queue_id);
+	} else {
+		rxq->q_started = true;
+		dev->data->rx_queue_state[rx_queue_id] =
+			RTE_ETH_QUEUE_STATE_STARTED;
+	}
+
+	return err;
+}
+
+int
+cpfl_tx_queue_init(struct rte_eth_dev *dev, uint16_t tx_queue_id)
+{
+	struct idpf_tx_queue *txq;
+
+	if (tx_queue_id >= dev->data->nb_tx_queues)
+		return -EINVAL;
+
+	txq = dev->data->tx_queues[tx_queue_id];
+
+	/* Init the TX tail register. */
+	IDPF_PCI_REG_WRITE(txq->qtx_tail, 0);
+
+	return 0;
+}
+
+int
+cpfl_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_tx_queue *txq =
+		dev->data->tx_queues[tx_queue_id];
+	int err = 0;
+
+	err = idpf_vc_txq_config(vport, txq);
+	if (err != 0) {
+		PMD_DRV_LOG(ERR, "Fail to configure Tx queue %u", tx_queue_id);
+		return err;
+	}
+
+	err = cpfl_tx_queue_init(dev, tx_queue_id);
+	if (err != 0) {
+		PMD_DRV_LOG(ERR, "Failed to init TX queue %u",
+			    tx_queue_id);
+		return err;
+	}
+
+	/* Ready to switch the queue on */
+	err = idpf_vc_queue_switch(vport, tx_queue_id, false, true);
+	if (err != 0) {
+		PMD_DRV_LOG(ERR, "Failed to switch TX queue %u on",
+			    tx_queue_id);
+	} else {
+		txq->q_started = true;
+		dev->data->tx_queue_state[tx_queue_id] =
+			RTE_ETH_QUEUE_STATE_STARTED;
+	}
+
+	return err;
+}
diff --git a/drivers/net/cpfl/cpfl_rxtx.h b/drivers/net/cpfl/cpfl_rxtx.h
index e0221abfa3..716b2fefa4 100644
--- a/drivers/net/cpfl/cpfl_rxtx.h
+++ b/drivers/net/cpfl/cpfl_rxtx.h
@@ -28,4 +28,8 @@ int cpfl_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 			uint16_t nb_desc, unsigned int socket_id,
 			const struct rte_eth_rxconf *rx_conf,
 			struct rte_mempool *mp);
+int cpfl_rx_queue_init(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+int cpfl_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+int cpfl_tx_queue_init(struct rte_eth_dev *dev, uint16_t tx_queue_id);
+int cpfl_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id);
 #endif /* _CPFL_RXTX_H_ */
-- 
2.25.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v5 06/21] net/cpfl: support queue stop
  2023-02-09  8:45     ` [PATCH v5 00/21] add support for cpfl PMD in DPDK Mingxia Liu
                         ` (4 preceding siblings ...)
  2023-02-09  8:45       ` [PATCH v5 05/21] net/cpfl: support queue start Mingxia Liu
@ 2023-02-09  8:45       ` Mingxia Liu
  2023-02-09  8:45       ` [PATCH v5 07/21] net/cpfl: support queue release Mingxia Liu
                         ` (16 subsequent siblings)
  22 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-02-09  8:45 UTC (permalink / raw)
  To: dev, qi.z.zhang, jingjing.wu, beilei.xing; +Cc: Mingxia Liu

Add support for these device ops:
 - rx_queue_stop
 - tx_queue_stop
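
A minimal sketch of quiescing one queue pair at runtime; behind these
calls the PMD switches the queues off over virtchnl and releases their
mbufs (queue id is illustrative):

    #include <rte_ethdev.h>

    static int
    stop_queue_pair(uint16_t port_id, uint16_t qid)
    {
        int ret;

        ret = rte_eth_dev_rx_queue_stop(port_id, qid);  /* -> cpfl_rx_queue_stop() */
        if (ret != 0)
            return ret;

        return rte_eth_dev_tx_queue_stop(port_id, qid); /* -> cpfl_tx_queue_stop() */
    }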

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/net/cpfl/cpfl_ethdev.c | 10 +++-
 drivers/net/cpfl/cpfl_rxtx.c   | 87 ++++++++++++++++++++++++++++++++++
 drivers/net/cpfl/cpfl_rxtx.h   |  3 ++
 3 files changed, 99 insertions(+), 1 deletion(-)

diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index 60339c836d..8ce7329b78 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -232,12 +232,16 @@ cpfl_dev_start(struct rte_eth_dev *dev)
 	ret = idpf_vc_vport_ena_dis(vport, true);
 	if (ret != 0) {
 		PMD_DRV_LOG(ERR, "Failed to enable vport");
-		return ret;
+		goto err_vport;
 	}
 
 	vport->stopped = 0;
 
 	return 0;
+
+err_vport:
+	cpfl_stop_queues(dev);
+	return ret;
 }
 
 static int
@@ -250,6 +254,8 @@ cpfl_dev_stop(struct rte_eth_dev *dev)
 
 	idpf_vc_vport_ena_dis(vport, false);
 
+	cpfl_stop_queues(dev);
+
 	vport->stopped = 1;
 
 	return 0;
@@ -615,6 +621,8 @@ static const struct eth_dev_ops cpfl_eth_dev_ops = {
 	.link_update			= cpfl_dev_link_update,
 	.rx_queue_start			= cpfl_rx_queue_start,
 	.tx_queue_start			= cpfl_tx_queue_start,
+	.rx_queue_stop			= cpfl_rx_queue_stop,
+	.tx_queue_stop			= cpfl_tx_queue_stop,
 	.dev_supported_ptypes_get	= cpfl_dev_supported_ptypes_get,
 };
 
diff --git a/drivers/net/cpfl/cpfl_rxtx.c b/drivers/net/cpfl/cpfl_rxtx.c
index 2813e83a67..ab5383a635 100644
--- a/drivers/net/cpfl/cpfl_rxtx.c
+++ b/drivers/net/cpfl/cpfl_rxtx.c
@@ -612,3 +612,90 @@ cpfl_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
 
 	return err;
 }
+
+int
+cpfl_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_rx_queue *rxq;
+	int err;
+
+	if (rx_queue_id >= dev->data->nb_rx_queues)
+		return -EINVAL;
+
+	err = idpf_vc_queue_switch(vport, rx_queue_id, true, false);
+	if (err != 0) {
+		PMD_DRV_LOG(ERR, "Failed to switch RX queue %u off",
+			    rx_queue_id);
+		return err;
+	}
+
+	rxq = dev->data->rx_queues[rx_queue_id];
+	if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE) {
+		rxq->ops->release_mbufs(rxq);
+		idpf_qc_single_rx_queue_reset(rxq);
+	} else {
+		rxq->bufq1->ops->release_mbufs(rxq->bufq1);
+		rxq->bufq2->ops->release_mbufs(rxq->bufq2);
+		idpf_qc_split_rx_queue_reset(rxq);
+	}
+	dev->data->rx_queue_state[rx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
+
+	return 0;
+}
+
+int
+cpfl_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_tx_queue *txq;
+	int err;
+
+	if (tx_queue_id >= dev->data->nb_tx_queues)
+		return -EINVAL;
+
+	err = idpf_vc_queue_switch(vport, tx_queue_id, false, false);
+	if (err != 0) {
+		PMD_DRV_LOG(ERR, "Failed to switch TX queue %u off",
+			    tx_queue_id);
+		return err;
+	}
+
+	txq = dev->data->tx_queues[tx_queue_id];
+	txq->ops->release_mbufs(txq);
+	if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE) {
+		idpf_qc_single_tx_queue_reset(txq);
+	} else {
+		idpf_qc_split_tx_descq_reset(txq);
+		idpf_qc_split_tx_complq_reset(txq->complq);
+	}
+	dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
+
+	return 0;
+}
+
+void
+cpfl_stop_queues(struct rte_eth_dev *dev)
+{
+	struct idpf_rx_queue *rxq;
+	struct idpf_tx_queue *txq;
+	int i;
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		rxq = dev->data->rx_queues[i];
+		if (rxq == NULL)
+			continue;
+
+		if (cpfl_rx_queue_stop(dev, i) != 0)
+			PMD_DRV_LOG(WARNING, "Fail to stop Rx queue %d", i);
+	}
+
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		txq = dev->data->tx_queues[i];
+		if (txq == NULL)
+			continue;
+
+		if (cpfl_tx_queue_stop(dev, i) != 0)
+			PMD_DRV_LOG(WARNING, "Fail to stop Tx queue %d", i);
+	}
+}
diff --git a/drivers/net/cpfl/cpfl_rxtx.h b/drivers/net/cpfl/cpfl_rxtx.h
index 716b2fefa4..e9b810deaa 100644
--- a/drivers/net/cpfl/cpfl_rxtx.h
+++ b/drivers/net/cpfl/cpfl_rxtx.h
@@ -32,4 +32,7 @@ int cpfl_rx_queue_init(struct rte_eth_dev *dev, uint16_t rx_queue_id);
 int cpfl_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id);
 int cpfl_tx_queue_init(struct rte_eth_dev *dev, uint16_t tx_queue_id);
 int cpfl_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id);
+void cpfl_stop_queues(struct rte_eth_dev *dev);
+int cpfl_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id);
+int cpfl_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id);
 #endif /* _CPFL_RXTX_H_ */
-- 
2.25.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v5 07/21] net/cpfl: support queue release
  2023-02-09  8:45     ` [PATCH v5 00/21] add support for cpfl PMD in DPDK Mingxia Liu
                         ` (5 preceding siblings ...)
  2023-02-09  8:45       ` [PATCH v5 06/21] net/cpfl: support queue stop Mingxia Liu
@ 2023-02-09  8:45       ` Mingxia Liu
  2023-02-09  8:45       ` [PATCH v5 08/21] net/cpfl: support MTU configuration Mingxia Liu
                         ` (15 subsequent siblings)
  22 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-02-09  8:45 UTC (permalink / raw)
  To: dev, qi.z.zhang, jingjing.wu, beilei.xing; +Cc: Mingxia Liu

Add support for queue operations:
 - rx_queue_release
 - tx_queue_release
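
With the release ops wired up, calling the setup function again on an
already configured queue index becomes safe: the patch frees the old
queue first ("Free memory if needed" below). A sketch, with the
descriptor count illustrative and the port assumed stopped:

    #include <rte_ethdev.h>

    static int
    resize_rxq(uint16_t port_id, uint16_t qid, struct rte_mempool *mp)
    {
        /* the old queue at qid is released, then rebuilt */
        return rte_eth_rx_queue_setup(port_id, qid, 2048,
                                      rte_eth_dev_socket_id(port_id),
                                      NULL, mp);
    }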

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/net/cpfl/cpfl_ethdev.c |  2 ++
 drivers/net/cpfl/cpfl_rxtx.c   | 35 ++++++++++++++++++++++++++++++++++
 drivers/net/cpfl/cpfl_rxtx.h   |  2 ++
 3 files changed, 39 insertions(+)

diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index 8ce7329b78..f59ad56db2 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -623,6 +623,8 @@ static const struct eth_dev_ops cpfl_eth_dev_ops = {
 	.tx_queue_start			= cpfl_tx_queue_start,
 	.rx_queue_stop			= cpfl_rx_queue_stop,
 	.tx_queue_stop			= cpfl_tx_queue_stop,
+	.rx_queue_release		= cpfl_dev_rx_queue_release,
+	.tx_queue_release		= cpfl_dev_tx_queue_release,
 	.dev_supported_ptypes_get	= cpfl_dev_supported_ptypes_get,
 };
 
diff --git a/drivers/net/cpfl/cpfl_rxtx.c b/drivers/net/cpfl/cpfl_rxtx.c
index ab5383a635..aa0f6bd792 100644
--- a/drivers/net/cpfl/cpfl_rxtx.c
+++ b/drivers/net/cpfl/cpfl_rxtx.c
@@ -49,6 +49,14 @@ cpfl_tx_offload_convert(uint64_t offload)
 	return ol;
 }
 
+static const struct idpf_rxq_ops def_rxq_ops = {
+	.release_mbufs = idpf_qc_rxq_mbufs_release,
+};
+
+static const struct idpf_txq_ops def_txq_ops = {
+	.release_mbufs = idpf_qc_txq_mbufs_release,
+};
+
 static const struct rte_memzone *
 cpfl_dma_zone_reserve(struct rte_eth_dev *dev, uint16_t queue_idx,
 		      uint16_t len, uint16_t queue_type,
@@ -177,6 +185,7 @@ cpfl_rx_split_bufq_setup(struct rte_eth_dev *dev, struct idpf_rx_queue *rxq,
 	idpf_qc_split_rx_bufq_reset(bufq);
 	bufq->qrx_tail = hw->hw_addr + (vport->chunks_info.rx_buf_qtail_start +
 			 queue_idx * vport->chunks_info.rx_buf_qtail_spacing);
+	bufq->ops = &def_rxq_ops;
 	bufq->q_set = true;
 
 	if (bufq_id == 1) {
@@ -235,6 +244,12 @@ cpfl_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 	if (idpf_qc_rx_thresh_check(nb_desc, rx_free_thresh) != 0)
 		return -EINVAL;
 
+	/* Free memory if needed */
+	if (dev->data->rx_queues[queue_idx] != NULL) {
+		idpf_qc_rx_queue_release(dev->data->rx_queues[queue_idx]);
+		dev->data->rx_queues[queue_idx] = NULL;
+	}
+
 	/* Setup Rx queue */
 	rxq = rte_zmalloc_socket("cpfl rxq",
 				 sizeof(struct idpf_rx_queue),
@@ -287,6 +302,7 @@ cpfl_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 		idpf_qc_single_rx_queue_reset(rxq);
 		rxq->qrx_tail = hw->hw_addr + (vport->chunks_info.rx_qtail_start +
 				queue_idx * vport->chunks_info.rx_qtail_spacing);
+		rxq->ops = &def_rxq_ops;
 	} else {
 		idpf_qc_split_rx_descq_reset(rxq);
 
@@ -399,6 +415,12 @@ cpfl_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 	if (idpf_qc_tx_thresh_check(nb_desc, tx_rs_thresh, tx_free_thresh) != 0)
 		return -EINVAL;
 
+	/* Free memory if needed. */
+	if (dev->data->tx_queues[queue_idx] != NULL) {
+		idpf_qc_tx_queue_release(dev->data->tx_queues[queue_idx]);
+		dev->data->tx_queues[queue_idx] = NULL;
+	}
+
 	/* Allocate the TX queue data structure. */
 	txq = rte_zmalloc_socket("cpfl txq",
 				 sizeof(struct idpf_tx_queue),
@@ -461,6 +483,7 @@ cpfl_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 
 	txq->qtx_tail = hw->hw_addr + (vport->chunks_info.tx_qtail_start +
 			queue_idx * vport->chunks_info.tx_qtail_spacing);
+	txq->ops = &def_txq_ops;
 	txq->q_set = true;
 	dev->data->tx_queues[queue_idx] = txq;
 
@@ -674,6 +697,18 @@ cpfl_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
 	return 0;
 }
 
+void
+cpfl_dev_rx_queue_release(struct rte_eth_dev *dev, uint16_t qid)
+{
+	idpf_qc_rx_queue_release(dev->data->rx_queues[qid]);
+}
+
+void
+cpfl_dev_tx_queue_release(struct rte_eth_dev *dev, uint16_t qid)
+{
+	idpf_qc_tx_queue_release(dev->data->tx_queues[qid]);
+}
+
 void
 cpfl_stop_queues(struct rte_eth_dev *dev)
 {
diff --git a/drivers/net/cpfl/cpfl_rxtx.h b/drivers/net/cpfl/cpfl_rxtx.h
index e9b810deaa..f5882401dc 100644
--- a/drivers/net/cpfl/cpfl_rxtx.h
+++ b/drivers/net/cpfl/cpfl_rxtx.h
@@ -35,4 +35,6 @@ int cpfl_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id);
 void cpfl_stop_queues(struct rte_eth_dev *dev);
 int cpfl_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id);
 int cpfl_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+void cpfl_dev_tx_queue_release(struct rte_eth_dev *dev, uint16_t qid);
+void cpfl_dev_rx_queue_release(struct rte_eth_dev *dev, uint16_t qid);
 #endif /* _CPFL_RXTX_H_ */
-- 
2.25.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v5 08/21] net/cpfl: support MTU configuration
  2023-02-09  8:45     ` [PATCH v5 00/21] add support for cpfl PMD in DPDK Mingxia Liu
                         ` (6 preceding siblings ...)
  2023-02-09  8:45       ` [PATCH v5 07/21] net/cpfl: support queue release Mingxia Liu
@ 2023-02-09  8:45       ` Mingxia Liu
  2023-02-09  8:45       ` [PATCH v5 09/21] net/cpfl: support basic Rx data path Mingxia Liu
                         ` (14 subsequent siblings)
  22 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-02-09  8:45 UTC (permalink / raw)
  To: dev, qi.z.zhang, jingjing.wu, beilei.xing; +Cc: Mingxia Liu

Add dev ops mtu_set.
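
A minimal sketch; per the check added below, the port must be stopped
first, and the value (9000 here, illustrative) must not exceed the
vport's max_mtu:

    #include <rte_ethdev.h>

    static int
    set_jumbo_mtu(uint16_t port_id)
    {
        int ret;

        ret = rte_eth_dev_stop(port_id);
        if (ret != 0)
            return ret;

        ret = rte_eth_dev_set_mtu(port_id, 9000);   /* -> cpfl_dev_mtu_set() */
        if (ret != 0)
            return ret;

        return rte_eth_dev_start(port_id);
    }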

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 doc/guides/nics/features/cpfl.ini |  1 +
 drivers/net/cpfl/cpfl_ethdev.c    | 27 +++++++++++++++++++++++++++
 2 files changed, 28 insertions(+)

diff --git a/doc/guides/nics/features/cpfl.ini b/doc/guides/nics/features/cpfl.ini
index a2d1ca9e15..470ba81579 100644
--- a/doc/guides/nics/features/cpfl.ini
+++ b/doc/guides/nics/features/cpfl.ini
@@ -7,6 +7,7 @@
 ; is selected.
 ;
 [Features]
+MTU update           = Y
 Linux                = Y
 x86-32               = Y
 x86-64               = Y
diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index f59ad56db2..19b5234ef4 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -121,6 +121,27 @@ cpfl_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	return 0;
 }
 
+static int
+cpfl_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+
+	/* MTU setting is forbidden if the port is started */
+	if (dev->data->dev_started) {
+		PMD_DRV_LOG(ERR, "port must be stopped before configuration");
+		return -EBUSY;
+	}
+
+	if (mtu > vport->max_mtu) {
+		PMD_DRV_LOG(ERR, "MTU should be less than %d", vport->max_mtu);
+		return -EINVAL;
+	}
+
+	vport->max_pkt_len = mtu + CPFL_ETH_OVERHEAD;
+
+	return 0;
+}
+
 static const uint32_t *
 cpfl_dev_supported_ptypes_get(struct rte_eth_dev *dev __rte_unused)
 {
@@ -142,6 +163,7 @@ cpfl_dev_supported_ptypes_get(struct rte_eth_dev *dev __rte_unused)
 static int
 cpfl_dev_configure(struct rte_eth_dev *dev)
 {
+	struct idpf_vport *vport = dev->data->dev_private;
 	struct rte_eth_conf *conf = &dev->data->dev_conf;
 
 	if (conf->link_speeds & RTE_ETH_LINK_SPEED_FIXED) {
@@ -181,6 +203,10 @@ cpfl_dev_configure(struct rte_eth_dev *dev)
 		return -ENOTSUP;
 	}
 
+	vport->max_pkt_len =
+		(dev->data->mtu == 0) ? CPFL_DEFAULT_MTU : dev->data->mtu +
+		CPFL_ETH_OVERHEAD;
+
 	return 0;
 }
 
@@ -625,6 +651,7 @@ static const struct eth_dev_ops cpfl_eth_dev_ops = {
 	.tx_queue_stop			= cpfl_tx_queue_stop,
 	.rx_queue_release		= cpfl_dev_rx_queue_release,
 	.tx_queue_release		= cpfl_dev_tx_queue_release,
+	.mtu_set			= cpfl_dev_mtu_set,
 	.dev_supported_ptypes_get	= cpfl_dev_supported_ptypes_get,
 };
 
-- 
2.25.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v5 09/21] net/cpfl: support basic Rx data path
  2023-02-09  8:45     ` [PATCH v5 00/21] add support for cpfl PMD in DPDK Mingxia Liu
                         ` (7 preceding siblings ...)
  2023-02-09  8:45       ` [PATCH v5 08/21] net/cpfl: support MTU configuration Mingxia Liu
@ 2023-02-09  8:45       ` Mingxia Liu
  2023-02-09  8:45       ` [PATCH v5 10/21] net/cpfl: support basic Tx " Mingxia Liu
                         ` (13 subsequent siblings)
  22 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-02-09  8:45 UTC (permalink / raw)
  To: dev, qi.z.zhang, jingjing.wu, beilei.xing; +Cc: Mingxia Liu

Add basic Rx support in split queue mode and single queue mode.
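
Both queue models are reached through the same burst API; the PMD picks
idpf_dp_splitq_recv_pkts or idpf_dp_singleq_recv_pkts internally. A
minimal polling sketch (burst size illustrative):

    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    #define BURST 32

    static void
    poll_rxq(uint16_t port_id, uint16_t qid)
    {
        struct rte_mbuf *pkts[BURST];
        uint16_t i, nb;

        nb = rte_eth_rx_burst(port_id, qid, pkts, BURST);
        for (i = 0; i < nb; i++)
            rte_pktmbuf_free(pkts[i]);  /* real code would process the packet */
    }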

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/net/cpfl/cpfl_ethdev.c |  2 ++
 drivers/net/cpfl/cpfl_rxtx.c   | 18 ++++++++++++++++++
 drivers/net/cpfl/cpfl_rxtx.h   |  1 +
 3 files changed, 21 insertions(+)

diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index 19b5234ef4..cdbe0eede2 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -255,6 +255,8 @@ cpfl_dev_start(struct rte_eth_dev *dev)
 		return ret;
 	}
 
+	cpfl_set_rx_function(dev);
+
 	ret = idpf_vc_vport_ena_dis(vport, true);
 	if (ret != 0) {
 		PMD_DRV_LOG(ERR, "Failed to enable vport");
diff --git a/drivers/net/cpfl/cpfl_rxtx.c b/drivers/net/cpfl/cpfl_rxtx.c
index aa0f6bd792..d583079fb6 100644
--- a/drivers/net/cpfl/cpfl_rxtx.c
+++ b/drivers/net/cpfl/cpfl_rxtx.c
@@ -734,3 +734,21 @@ cpfl_stop_queues(struct rte_eth_dev *dev)
 			PMD_DRV_LOG(WARNING, "Fail to stop Tx queue %d", i);
 	}
 }
+
+void
+cpfl_set_rx_function(struct rte_eth_dev *dev)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+
+	if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
+		PMD_DRV_LOG(NOTICE,
+			    "Using Split Scalar Rx (port %d).",
+			    dev->data->port_id);
+		dev->rx_pkt_burst = idpf_dp_splitq_recv_pkts;
+	} else {
+		PMD_DRV_LOG(NOTICE,
+			    "Using Single Scalar Rx (port %d).",
+			    dev->data->port_id);
+		dev->rx_pkt_burst = idpf_dp_singleq_recv_pkts;
+	}
+}
diff --git a/drivers/net/cpfl/cpfl_rxtx.h b/drivers/net/cpfl/cpfl_rxtx.h
index f5882401dc..a5dd388e1f 100644
--- a/drivers/net/cpfl/cpfl_rxtx.h
+++ b/drivers/net/cpfl/cpfl_rxtx.h
@@ -37,4 +37,5 @@ int cpfl_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id);
 int cpfl_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id);
 void cpfl_dev_tx_queue_release(struct rte_eth_dev *dev, uint16_t qid);
 void cpfl_dev_rx_queue_release(struct rte_eth_dev *dev, uint16_t qid);
+void cpfl_set_rx_function(struct rte_eth_dev *dev);
 #endif /* _CPFL_RXTX_H_ */
-- 
2.25.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v5 10/21] net/cpfl: support basic Tx data path
  2023-02-09  8:45     ` [PATCH v5 00/21] add support for cpfl PMD in DPDK Mingxia Liu
                         ` (8 preceding siblings ...)
  2023-02-09  8:45       ` [PATCH v5 09/21] net/cpfl: support basic Rx data path Mingxia Liu
@ 2023-02-09  8:45       ` Mingxia Liu
  2023-02-09  8:45       ` [PATCH v5 11/21] net/cpfl: support write back based on ITR expire Mingxia Liu
                         ` (12 subsequent siblings)
  22 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-02-09  8:45 UTC (permalink / raw)
  To: dev, qi.z.zhang, jingjing.wu, beilei.xing; +Cc: Mingxia Liu

Add basic Tx support in split queue mode and single queue mode.
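
A minimal transmit sketch; mbufs the PMD could not enqueue stay owned by
the caller and must be retried or freed:

    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    static void
    xmit_burst(uint16_t port_id, uint16_t qid,
               struct rte_mbuf **pkts, uint16_t nb)
    {
        uint16_t sent = rte_eth_tx_burst(port_id, qid, pkts, nb);

        while (sent < nb)
            rte_pktmbuf_free(pkts[sent++]); /* drop what was not enqueued */
    }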

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/net/cpfl/cpfl_ethdev.c |  3 +++
 drivers/net/cpfl/cpfl_rxtx.c   | 20 ++++++++++++++++++++
 drivers/net/cpfl/cpfl_rxtx.h   |  1 +
 3 files changed, 24 insertions(+)

diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index cdbe0eede2..b24fae8f3f 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -97,6 +97,8 @@ cpfl_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->max_mtu = vport->max_mtu;
 	dev_info->min_mtu = RTE_ETHER_MIN_MTU;
 
+	dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
+
 	dev_info->default_txconf = (struct rte_eth_txconf) {
 		.tx_free_thresh = CPFL_DEFAULT_TX_FREE_THRESH,
 		.tx_rs_thresh = CPFL_DEFAULT_TX_RS_THRESH,
@@ -256,6 +258,7 @@ cpfl_dev_start(struct rte_eth_dev *dev)
 	}
 
 	cpfl_set_rx_function(dev);
+	cpfl_set_tx_function(dev);
 
 	ret = idpf_vc_vport_ena_dis(vport, true);
 	if (ret != 0) {
diff --git a/drivers/net/cpfl/cpfl_rxtx.c b/drivers/net/cpfl/cpfl_rxtx.c
index d583079fb6..9c59b74c90 100644
--- a/drivers/net/cpfl/cpfl_rxtx.c
+++ b/drivers/net/cpfl/cpfl_rxtx.c
@@ -752,3 +752,23 @@ cpfl_set_rx_function(struct rte_eth_dev *dev)
 		dev->rx_pkt_burst = idpf_dp_singleq_recv_pkts;
 	}
 }
+
+void
+cpfl_set_tx_function(struct rte_eth_dev *dev)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+
+	if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
+		PMD_DRV_LOG(NOTICE,
+			    "Using Split Scalar Tx (port %d).",
+			    dev->data->port_id);
+		dev->tx_pkt_burst = idpf_dp_splitq_xmit_pkts;
+		dev->tx_pkt_prepare = idpf_dp_prep_pkts;
+	} else {
+		PMD_DRV_LOG(NOTICE,
+			    "Using Single Scalar Tx (port %d).",
+			    dev->data->port_id);
+		dev->tx_pkt_burst = idpf_dp_singleq_xmit_pkts;
+		dev->tx_pkt_prepare = idpf_dp_prep_pkts;
+	}
+}
diff --git a/drivers/net/cpfl/cpfl_rxtx.h b/drivers/net/cpfl/cpfl_rxtx.h
index a5dd388e1f..5f8144e55f 100644
--- a/drivers/net/cpfl/cpfl_rxtx.h
+++ b/drivers/net/cpfl/cpfl_rxtx.h
@@ -38,4 +38,5 @@ int cpfl_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id);
 void cpfl_dev_tx_queue_release(struct rte_eth_dev *dev, uint16_t qid);
 void cpfl_dev_rx_queue_release(struct rte_eth_dev *dev, uint16_t qid);
 void cpfl_set_rx_function(struct rte_eth_dev *dev);
+void cpfl_set_tx_function(struct rte_eth_dev *dev);
 #endif /* _CPFL_RXTX_H_ */
-- 
2.25.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v5 11/21] net/cpfl: support write back based on ITR expire
  2023-02-09  8:45     ` [PATCH v5 00/21] add support for cpfl PMD in DPDK Mingxia Liu
                         ` (9 preceding siblings ...)
  2023-02-09  8:45       ` [PATCH v5 10/21] net/cpfl: support basic Tx " Mingxia Liu
@ 2023-02-09  8:45       ` Mingxia Liu
  2023-02-09  8:45       ` [PATCH v5 12/21] net/cpfl: support RSS Mingxia Liu
                         ` (11 subsequent siblings)
  22 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-02-09  8:45 UTC (permalink / raw)
  To: dev, qi.z.zhang, jingjing.wu, beilei.xing; +Cc: Mingxia Liu

Enable write back on ITR expire, then packets can be received one by one.

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/net/cpfl/cpfl_ethdev.c | 45 +++++++++++++++++++++++++++++++++-
 drivers/net/cpfl/cpfl_ethdev.h |  2 ++
 2 files changed, 46 insertions(+), 1 deletion(-)

diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index b24fae8f3f..c02e6c8e58 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -212,6 +212,15 @@ cpfl_dev_configure(struct rte_eth_dev *dev)
 	return 0;
 }
 
+static int
+cpfl_config_rx_queues_irqs(struct rte_eth_dev *dev)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	uint16_t nb_rx_queues = dev->data->nb_rx_queues;
+
+	return idpf_vport_irq_map_config(vport, nb_rx_queues);
+}
+
 static int
 cpfl_start_queues(struct rte_eth_dev *dev)
 {
@@ -249,12 +258,37 @@ static int
 cpfl_dev_start(struct rte_eth_dev *dev)
 {
 	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_adapter *base = vport->adapter;
+	struct cpfl_adapter_ext *adapter = CPFL_ADAPTER_TO_EXT(base);
+	uint16_t num_allocated_vectors = base->caps.num_allocated_vectors;
+	uint16_t req_vecs_num;
 	int ret;
 
+	req_vecs_num = CPFL_DFLT_Q_VEC_NUM;
+	if (req_vecs_num + adapter->used_vecs_num > num_allocated_vectors) {
+		PMD_DRV_LOG(ERR, "The accumulated request vectors' number should be less than %d",
+			    num_allocated_vectors);
+		ret = -EINVAL;
+		goto err_vec;
+	}
+
+	ret = idpf_vc_vectors_alloc(vport, req_vecs_num);
+	if (ret != 0) {
+		PMD_DRV_LOG(ERR, "Failed to allocate interrupt vectors");
+		goto err_vec;
+	}
+	adapter->used_vecs_num += req_vecs_num;
+
+	ret = cpfl_config_rx_queues_irqs(dev);
+	if (ret != 0) {
+		PMD_DRV_LOG(ERR, "Failed to configure irqs");
+		goto err_irq;
+	}
+
 	ret = cpfl_start_queues(dev);
 	if (ret != 0) {
 		PMD_DRV_LOG(ERR, "Failed to start queues");
-		return ret;
+		goto err_startq;
 	}
 
 	cpfl_set_rx_function(dev);
@@ -272,6 +306,11 @@ cpfl_dev_start(struct rte_eth_dev *dev)
 
 err_vport:
 	cpfl_stop_queues(dev);
+err_startq:
+	idpf_vport_irq_unmap_config(vport, dev->data->nb_rx_queues);
+err_irq:
+	idpf_vc_vectors_dealloc(vport);
+err_vec:
 	return ret;
 }
 
@@ -287,6 +326,10 @@ cpfl_dev_stop(struct rte_eth_dev *dev)
 
 	cpfl_stop_queues(dev);
 
+	idpf_vport_irq_unmap_config(vport, dev->data->nb_rx_queues);
+
+	idpf_vc_vectors_dealloc(vport);
+
 	vport->stopped = 1;
 
 	return 0;
diff --git a/drivers/net/cpfl/cpfl_ethdev.h b/drivers/net/cpfl/cpfl_ethdev.h
index 9ca39b4558..cd7f560d19 100644
--- a/drivers/net/cpfl/cpfl_ethdev.h
+++ b/drivers/net/cpfl/cpfl_ethdev.h
@@ -24,6 +24,8 @@
 
 #define CPFL_INVALID_VPORT_IDX	0xffff
 
+#define CPFL_DFLT_Q_VEC_NUM	1
+
 #define CPFL_MIN_BUF_SIZE	1024
 #define CPFL_MAX_FRAME_SIZE	9728
 #define CPFL_DEFAULT_MTU	RTE_ETHER_MTU
-- 
2.25.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v5 12/21] net/cpfl: support RSS
  2023-02-09  8:45     ` [PATCH v5 00/21] add support for cpfl PMD in DPDK Mingxia Liu
                         ` (10 preceding siblings ...)
  2023-02-09  8:45       ` [PATCH v5 11/21] net/cpfl: support write back based on ITR expire Mingxia Liu
@ 2023-02-09  8:45       ` Mingxia Liu
  2023-02-09  8:45       ` [PATCH v5 13/21] net/cpfl: support Rx offloading Mingxia Liu
                         ` (10 subsequent siblings)
  22 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-02-09  8:45 UTC (permalink / raw)
  To: dev, qi.z.zhang, jingjing.wu, beilei.xing; +Cc: Mingxia Liu

Add RSS support.
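
A minimal sketch of enabling RSS at configure time; with rss_key left
NULL the PMD generates a random key (see cpfl_init_rss() below) and
fills the LUT round-robin over the Rx queues. Queue counts and hash
types are illustrative:

    #include <rte_ethdev.h>

    static int
    configure_rss(uint16_t port_id)
    {
        struct rte_eth_conf conf = {
            .rxmode.mq_mode = RTE_ETH_MQ_RX_RSS,
            .rx_adv_conf.rss_conf = {
                .rss_key = NULL,    /* PMD picks a random key */
                .rss_hf = RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_NONFRAG_IPV4_TCP,
            },
        };

        return rte_eth_dev_configure(port_id, 4, 4, &conf);
    }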

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/net/cpfl/cpfl_ethdev.c | 51 ++++++++++++++++++++++++++++++++++
 drivers/net/cpfl/cpfl_ethdev.h | 15 ++++++++++
 2 files changed, 66 insertions(+)

diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index c02e6c8e58..cf5a968cad 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -97,6 +97,8 @@ cpfl_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->max_mtu = vport->max_mtu;
 	dev_info->min_mtu = RTE_ETHER_MIN_MTU;
 
+	dev_info->flow_type_rss_offloads = CPFL_RSS_OFFLOAD_ALL;
+
 	dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 	dev_info->default_txconf = (struct rte_eth_txconf) {
@@ -162,11 +164,49 @@ cpfl_dev_supported_ptypes_get(struct rte_eth_dev *dev __rte_unused)
 	return ptypes;
 }
 
+static int
+cpfl_init_rss(struct idpf_vport *vport)
+{
+	struct rte_eth_rss_conf *rss_conf;
+	struct rte_eth_dev_data *dev_data;
+	uint16_t i, nb_q;
+	int ret = 0;
+
+	dev_data = vport->dev_data;
+	rss_conf = &dev_data->dev_conf.rx_adv_conf.rss_conf;
+	nb_q = dev_data->nb_rx_queues;
+
+	if (rss_conf->rss_key == NULL) {
+		for (i = 0; i < vport->rss_key_size; i++)
+			vport->rss_key[i] = (uint8_t)rte_rand();
+	} else if (rss_conf->rss_key_len != vport->rss_key_size) {
+		PMD_INIT_LOG(ERR, "Invalid RSS key length in RSS configuration, should be %d",
+			     vport->rss_key_size);
+		return -EINVAL;
+	} else {
+		rte_memcpy(vport->rss_key, rss_conf->rss_key,
+			   vport->rss_key_size);
+	}
+
+	for (i = 0; i < vport->rss_lut_size; i++)
+		vport->rss_lut[i] = i % nb_q;
+
+	vport->rss_hf = IDPF_DEFAULT_RSS_HASH_EXPANDED;
+
+	ret = idpf_vport_rss_config(vport);
+	if (ret != 0)
+		PMD_INIT_LOG(ERR, "Failed to configure RSS");
+
+	return ret;
+}
+
 static int
 cpfl_dev_configure(struct rte_eth_dev *dev)
 {
 	struct idpf_vport *vport = dev->data->dev_private;
 	struct rte_eth_conf *conf = &dev->data->dev_conf;
+	struct idpf_adapter *adapter = vport->adapter;
+	int ret;
 
 	if (conf->link_speeds & RTE_ETH_LINK_SPEED_FIXED) {
 		PMD_INIT_LOG(ERR, "Setting link speed is not supported");
@@ -205,6 +245,17 @@ cpfl_dev_configure(struct rte_eth_dev *dev)
 		return -ENOTSUP;
 	}
 
+	if (adapter->caps.rss_caps != 0 && dev->data->nb_rx_queues != 0) {
+		ret = cpfl_init_rss(vport);
+		if (ret != 0) {
+			PMD_INIT_LOG(ERR, "Failed to init rss");
+			return ret;
+		}
+	} else {
+		PMD_INIT_LOG(ERR, "RSS is not supported.");
+		return -1;
+	}
+
 	vport->max_pkt_len =
 		(dev->data->mtu == 0) ? CPFL_DEFAULT_MTU : dev->data->mtu +
 		CPFL_ETH_OVERHEAD;
diff --git a/drivers/net/cpfl/cpfl_ethdev.h b/drivers/net/cpfl/cpfl_ethdev.h
index cd7f560d19..e00dff4bf0 100644
--- a/drivers/net/cpfl/cpfl_ethdev.h
+++ b/drivers/net/cpfl/cpfl_ethdev.h
@@ -36,6 +36,21 @@
 #define CPFL_ETH_OVERHEAD \
 	(RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN + CPFL_VLAN_TAG_SIZE * 2)
 
+#define CPFL_RSS_OFFLOAD_ALL (				\
+		RTE_ETH_RSS_IPV4                |	\
+		RTE_ETH_RSS_FRAG_IPV4           |	\
+		RTE_ETH_RSS_NONFRAG_IPV4_TCP    |	\
+		RTE_ETH_RSS_NONFRAG_IPV4_UDP    |	\
+		RTE_ETH_RSS_NONFRAG_IPV4_SCTP   |	\
+		RTE_ETH_RSS_NONFRAG_IPV4_OTHER  |	\
+		RTE_ETH_RSS_IPV6                |	\
+		RTE_ETH_RSS_FRAG_IPV6           |	\
+		RTE_ETH_RSS_NONFRAG_IPV6_TCP    |	\
+		RTE_ETH_RSS_NONFRAG_IPV6_UDP    |	\
+		RTE_ETH_RSS_NONFRAG_IPV6_SCTP   |	\
+		RTE_ETH_RSS_NONFRAG_IPV6_OTHER  |	\
+		RTE_ETH_RSS_L2_PAYLOAD)
+
 #define CPFL_ADAPTER_NAME_LEN	(PCI_PRI_STR_SIZE + 1)
 
 #define CPFL_ALARM_INTERVAL	50000 /* us */
-- 
2.25.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v5 13/21] net/cpfl: support Rx offloading
  2023-02-09  8:45     ` [PATCH v5 00/21] add support for cpfl PMD in DPDK Mingxia Liu
                         ` (11 preceding siblings ...)
  2023-02-09  8:45       ` [PATCH v5 12/21] net/cpfl: support RSS Mingxia Liu
@ 2023-02-09  8:45       ` Mingxia Liu
  2023-02-09  8:45       ` [PATCH v5 14/21] net/cpfl: support Tx offloading Mingxia Liu
                         ` (9 subsequent siblings)
  22 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-02-09  8:45 UTC (permalink / raw)
  To: dev, qi.z.zhang, jingjing.wu, beilei.xing; +Cc: Mingxia Liu

Add Rx offloading support:
 - support CHKSUM and RSS offload for split queue model
 - support CHKSUM offload for single queue model
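
A minimal sketch of requesting the new Rx checksum offloads after
checking the capability advertised through cpfl_dev_info_get() (queue
counts are illustrative):

    #include <errno.h>
    #include <rte_ethdev.h>

    static int
    configure_rx_cksum(uint16_t port_id, struct rte_eth_conf *conf)
    {
        struct rte_eth_dev_info info;
        int ret;

        ret = rte_eth_dev_info_get(port_id, &info);
        if (ret != 0)
            return ret;

        if ((info.rx_offload_capa & RTE_ETH_RX_OFFLOAD_IPV4_CKSUM) == 0)
            return -ENOTSUP;

        conf->rxmode.offloads |= RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
                                 RTE_ETH_RX_OFFLOAD_TCP_CKSUM;
        return rte_eth_dev_configure(port_id, 1, 1, conf);
    }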

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 doc/guides/nics/features/cpfl.ini | 2 ++
 drivers/net/cpfl/cpfl_ethdev.c    | 6 ++++++
 2 files changed, 8 insertions(+)

diff --git a/doc/guides/nics/features/cpfl.ini b/doc/guides/nics/features/cpfl.ini
index 470ba81579..ee5948f444 100644
--- a/doc/guides/nics/features/cpfl.ini
+++ b/doc/guides/nics/features/cpfl.ini
@@ -8,6 +8,8 @@
 ;
 [Features]
 MTU update           = Y
+L3 checksum offload  = P
+L4 checksum offload  = P
 Linux                = Y
 x86-32               = Y
 x86-64               = Y
diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index cf5a968cad..3c0145303e 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -99,6 +99,12 @@ cpfl_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 
 	dev_info->flow_type_rss_offloads = CPFL_RSS_OFFLOAD_ALL;
 
+	dev_info->rx_offload_capa =
+		RTE_ETH_RX_OFFLOAD_IPV4_CKSUM           |
+		RTE_ETH_RX_OFFLOAD_UDP_CKSUM            |
+		RTE_ETH_RX_OFFLOAD_TCP_CKSUM            |
+		RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM;
+
 	dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 	dev_info->default_txconf = (struct rte_eth_txconf) {
-- 
2.25.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v5 14/21] net/cpfl: support Tx offloading
  2023-02-09  8:45     ` [PATCH v5 00/21] add support for cpfl PMD in DPDK Mingxia Liu
                         ` (12 preceding siblings ...)
  2023-02-09  8:45       ` [PATCH v5 13/21] net/cpfl: support Rx offloading Mingxia Liu
@ 2023-02-09  8:45       ` Mingxia Liu
  2023-02-09  8:45       ` [PATCH v5 15/21] net/cpfl: add AVX512 data path for single queue model Mingxia Liu
                         ` (8 subsequent siblings)
  22 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-02-09  8:45 UTC (permalink / raw)
  To: dev, qi.z.zhang, jingjing.wu, beilei.xing; +Cc: Mingxia Liu

Add Tx offloading support:
 - support TSO for single queue model and split queue model.
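
A minimal per-packet TSO sketch; header lengths and MSS are
illustrative, and RTE_ETH_TX_OFFLOAD_TCP_TSO must have been enabled on
the port for the prepare/transmit path to accept the flag:

    #include <rte_mbuf.h>

    static void
    request_tso(struct rte_mbuf *m)
    {
        m->l2_len = 14;      /* Ethernet */
        m->l3_len = 20;      /* IPv4 */
        m->l4_len = 20;      /* TCP */
        m->tso_segsz = 1448; /* MSS */
        m->ol_flags |= RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM |
                       RTE_MBUF_F_TX_TCP_SEG;
    }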

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 doc/guides/nics/features/cpfl.ini | 1 +
 drivers/net/cpfl/cpfl_ethdev.c    | 8 +++++++-
 2 files changed, 8 insertions(+), 1 deletion(-)

diff --git a/doc/guides/nics/features/cpfl.ini b/doc/guides/nics/features/cpfl.ini
index ee5948f444..f4e45c7c68 100644
--- a/doc/guides/nics/features/cpfl.ini
+++ b/doc/guides/nics/features/cpfl.ini
@@ -8,6 +8,7 @@
 ;
 [Features]
 MTU update           = Y
+TSO                  = P
 L3 checksum offload  = P
 L4 checksum offload  = P
 Linux                = Y
diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index 3c0145303e..a0bdfb5ca4 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -105,7 +105,13 @@ cpfl_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 		RTE_ETH_RX_OFFLOAD_TCP_CKSUM            |
 		RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM;
 
-	dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
+	dev_info->tx_offload_capa =
+		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM		|
+		RTE_ETH_TX_OFFLOAD_UDP_CKSUM		|
+		RTE_ETH_TX_OFFLOAD_TCP_CKSUM		|
+		RTE_ETH_TX_OFFLOAD_SCTP_CKSUM		|
+		RTE_ETH_TX_OFFLOAD_TCP_TSO		|
+		RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 	dev_info->default_txconf = (struct rte_eth_txconf) {
 		.tx_free_thresh = CPFL_DEFAULT_TX_FREE_THRESH,
-- 
2.25.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v5 15/21] net/cpfl: add AVX512 data path for single queue model
  2023-02-09  8:45     ` [PATCH v5 00/21] add support for cpfl PMD in DPDK Mingxia Liu
                         ` (13 preceding siblings ...)
  2023-02-09  8:45       ` [PATCH v5 14/21] net/cpfl: support Tx offloading Mingxia Liu
@ 2023-02-09  8:45       ` Mingxia Liu
  2023-02-09  8:45       ` [PATCH v5 16/21] net/cpfl: support timestamp offload Mingxia Liu
                         ` (7 subsequent siblings)
  22 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-02-09  8:45 UTC (permalink / raw)
  To: dev, qi.z.zhang, jingjing.wu, beilei.xing; +Cc: Mingxia Liu, Wenjun Wu

Add support for AVX512 vector data path for single queue model.
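
A sketch of the runtime preconditions the driver checks before taking
the AVX512 path (it mirrors the logic added to cpfl_set_rx_function();
the EAL option --force-max-simd-bitwidth=512 raises the permitted
bitwidth):

    #include <stdbool.h>
    #include <rte_vect.h>
    #include <rte_cpuflags.h>

    static bool
    avx512_path_possible(void)
    {
        return rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_512 &&
               rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512F) == 1 &&
               rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512BW) == 1;
    }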

Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 doc/guides/nics/cpfl.rst                |  24 +++++-
 drivers/net/cpfl/cpfl_ethdev.c          |   3 +-
 drivers/net/cpfl/cpfl_rxtx.c            |  94 ++++++++++++++++++++++
 drivers/net/cpfl/cpfl_rxtx_vec_common.h | 100 ++++++++++++++++++++++++
 drivers/net/cpfl/meson.build            |  25 +++++-
 5 files changed, 243 insertions(+), 3 deletions(-)
 create mode 100644 drivers/net/cpfl/cpfl_rxtx_vec_common.h

diff --git a/doc/guides/nics/cpfl.rst b/doc/guides/nics/cpfl.rst
index 7c5aff0789..f0018b41df 100644
--- a/doc/guides/nics/cpfl.rst
+++ b/doc/guides/nics/cpfl.rst
@@ -63,4 +63,26 @@ Runtime Config Options
 Driver compilation and testing
 ------------------------------
 
-Refer to the document :doc:`build_and_test` for details.
\ No newline at end of file
+Refer to the document :doc:`build_and_test` for details.
+
+Features
+--------
+
+Vector PMD
+~~~~~~~~~~
+
+Vector paths for Rx and Tx are selected automatically.
+The paths are chosen based on two conditions:
+
+- ``CPU``
+
+  On the x86 platform, the driver checks if the CPU supports AVX512.
+  If the CPU supports AVX512 and EAL argument ``--force-max-simd-bitwidth``
+  is set to 512, AVX512 paths will be chosen.
+
+- ``Offload features``
+
+  The supported HW offload features are described in the document cpfl.ini.
+  A value "P" means the offload feature is not supported by the vector path.
+  If any unsupported features are used, the cpfl vector PMD is disabled
+  and the scalar paths are chosen.
diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index a0bdfb5ca4..9d921b4355 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -111,7 +111,8 @@ cpfl_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 		RTE_ETH_TX_OFFLOAD_TCP_CKSUM		|
 		RTE_ETH_TX_OFFLOAD_SCTP_CKSUM		|
 		RTE_ETH_TX_OFFLOAD_TCP_TSO		|
-		RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
+		RTE_ETH_TX_OFFLOAD_MULTI_SEGS		|
+		RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 	dev_info->default_txconf = (struct rte_eth_txconf) {
 		.tx_free_thresh = CPFL_DEFAULT_TX_FREE_THRESH,
diff --git a/drivers/net/cpfl/cpfl_rxtx.c b/drivers/net/cpfl/cpfl_rxtx.c
index 9c59b74c90..f1119b27e1 100644
--- a/drivers/net/cpfl/cpfl_rxtx.c
+++ b/drivers/net/cpfl/cpfl_rxtx.c
@@ -8,6 +8,7 @@
 
 #include "cpfl_ethdev.h"
 #include "cpfl_rxtx.h"
+#include "cpfl_rxtx_vec_common.h"
 
 static uint64_t
 cpfl_rx_offload_convert(uint64_t offload)
@@ -735,11 +736,61 @@ cpfl_stop_queues(struct rte_eth_dev *dev)
 	}
 }
 
+
 void
 cpfl_set_rx_function(struct rte_eth_dev *dev)
 {
 	struct idpf_vport *vport = dev->data->dev_private;
+#ifdef RTE_ARCH_X86
+	struct idpf_rx_queue *rxq;
+	int i;
+
+	if (cpfl_rx_vec_dev_check_default(dev) == CPFL_VECTOR_PATH &&
+	    rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_128) {
+		vport->rx_vec_allowed = true;
 
+		if (rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_512)
+#ifdef CC_AVX512_SUPPORT
+			if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512F) == 1 &&
+			    rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512BW) == 1)
+				vport->rx_use_avx512 = true;
+#else
+		PMD_DRV_LOG(NOTICE,
+			    "AVX512 is not supported in build env");
+#endif /* CC_AVX512_SUPPORT */
+	} else {
+		vport->rx_vec_allowed = false;
+	}
+#endif /* RTE_ARCH_X86 */
+
+#ifdef RTE_ARCH_X86
+	if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
+		PMD_DRV_LOG(NOTICE,
+			    "Using Split Scalar Rx (port %d).",
+			    dev->data->port_id);
+		dev->rx_pkt_burst = idpf_dp_splitq_recv_pkts;
+	} else {
+		if (vport->rx_vec_allowed) {
+			for (i = 0; i < dev->data->nb_rx_queues; i++) {
+				rxq = dev->data->rx_queues[i];
+				(void)idpf_qc_singleq_rx_vec_setup(rxq);
+			}
+#ifdef CC_AVX512_SUPPORT
+			if (vport->rx_use_avx512) {
+				PMD_DRV_LOG(NOTICE,
+					    "Using Single AVX512 Vector Rx (port %d).",
+					    dev->data->port_id);
+				dev->rx_pkt_burst = idpf_dp_singleq_recv_pkts_avx512;
+				return;
+			}
+#endif /* CC_AVX512_SUPPORT */
+		}
+		PMD_DRV_LOG(NOTICE,
+			    "Using Single Scalar Rx (port %d).",
+			    dev->data->port_id);
+		dev->rx_pkt_burst = idpf_dp_singleq_recv_pkts;
+	}
+#else
 	if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
 		PMD_DRV_LOG(NOTICE,
 			    "Using Split Scalar Rx (port %d).",
@@ -751,12 +802,35 @@ cpfl_set_rx_function(struct rte_eth_dev *dev)
 			    dev->data->port_id);
 		dev->rx_pkt_burst = idpf_dp_singleq_recv_pkts;
 	}
+#endif /* RTE_ARCH_X86 */
 }
 
 void
 cpfl_set_tx_function(struct rte_eth_dev *dev)
 {
 	struct idpf_vport *vport = dev->data->dev_private;
+#ifdef RTE_ARCH_X86
+#ifdef CC_AVX512_SUPPORT
+	struct idpf_tx_queue *txq;
+	int i;
+#endif /* CC_AVX512_SUPPORT */
+
+	if (cpfl_tx_vec_dev_check_default(dev) == CPFL_VECTOR_PATH &&
+	    rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_128) {
+		vport->tx_vec_allowed = true;
+		if (rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_512)
+#ifdef CC_AVX512_SUPPORT
+			if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512F) == 1 &&
+			    rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512BW) == 1)
+				vport->tx_use_avx512 = true;
+#else
+		PMD_DRV_LOG(NOTICE,
+			    "AVX512 is not supported in build env");
+#endif /* CC_AVX512_SUPPORT */
+	} else {
+		vport->tx_vec_allowed = false;
+	}
+#endif /* RTE_ARCH_X86 */
 
 	if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
 		PMD_DRV_LOG(NOTICE,
@@ -765,6 +839,26 @@ cpfl_set_tx_function(struct rte_eth_dev *dev)
 		dev->tx_pkt_burst = idpf_dp_splitq_xmit_pkts;
 		dev->tx_pkt_prepare = idpf_dp_prep_pkts;
 	} else {
+#ifdef RTE_ARCH_X86
+		if (vport->tx_vec_allowed) {
+#ifdef CC_AVX512_SUPPORT
+			if (vport->tx_use_avx512) {
+				for (i = 0; i < dev->data->nb_tx_queues; i++) {
+					txq = dev->data->tx_queues[i];
+					if (txq == NULL)
+						continue;
+					idpf_qc_tx_vec_avx512_setup(txq);
+				}
+				PMD_DRV_LOG(NOTICE,
+					    "Using Single AVX512 Vector Tx (port %d).",
+					    dev->data->port_id);
+				dev->tx_pkt_burst = idpf_dp_singleq_xmit_pkts_avx512;
+				dev->tx_pkt_prepare = idpf_dp_prep_pkts;
+				return;
+			}
+#endif /* CC_AVX512_SUPPORT */
+		}
+#endif /* RTE_ARCH_X86 */
 		PMD_DRV_LOG(NOTICE,
 			    "Using Single Scalar Tx (port %d).",
 			    dev->data->port_id);
diff --git a/drivers/net/cpfl/cpfl_rxtx_vec_common.h b/drivers/net/cpfl/cpfl_rxtx_vec_common.h
new file mode 100644
index 0000000000..2d4c6a0ef3
--- /dev/null
+++ b/drivers/net/cpfl/cpfl_rxtx_vec_common.h
@@ -0,0 +1,100 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Intel Corporation
+ */
+
+#ifndef _CPFL_RXTX_VEC_COMMON_H_
+#define _CPFL_RXTX_VEC_COMMON_H_
+#include <stdint.h>
+#include <ethdev_driver.h>
+#include <rte_malloc.h>
+
+#include "cpfl_ethdev.h"
+#include "cpfl_rxtx.h"
+
+#ifndef __INTEL_COMPILER
+#pragma GCC diagnostic ignored "-Wcast-qual"
+#endif
+
+#define CPFL_SCALAR_PATH		0
+#define CPFL_VECTOR_PATH		1
+#define CPFL_RX_NO_VECTOR_FLAGS (		\
+		RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |	\
+		RTE_ETH_RX_OFFLOAD_UDP_CKSUM |	\
+		RTE_ETH_RX_OFFLOAD_TCP_CKSUM |	\
+		RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |	\
+		RTE_ETH_RX_OFFLOAD_TIMESTAMP)
+#define CPFL_TX_NO_VECTOR_FLAGS (		\
+		RTE_ETH_TX_OFFLOAD_TCP_TSO |	\
+		RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
+
+static inline int
+cpfl_rx_vec_queue_default(struct idpf_rx_queue *rxq)
+{
+	if (rxq == NULL)
+		return CPFL_SCALAR_PATH;
+
+	if (rte_is_power_of_2(rxq->nb_rx_desc) == 0)
+		return CPFL_SCALAR_PATH;
+
+	if (rxq->rx_free_thresh < IDPF_VPMD_RX_MAX_BURST)
+		return CPFL_SCALAR_PATH;
+
+	if ((rxq->nb_rx_desc % rxq->rx_free_thresh) != 0)
+		return CPFL_SCALAR_PATH;
+
+	if ((rxq->offloads & CPFL_RX_NO_VECTOR_FLAGS) != 0)
+		return CPFL_SCALAR_PATH;
+
+	return CPFL_VECTOR_PATH;
+}
+
+static inline int
+cpfl_tx_vec_queue_default(struct idpf_tx_queue *txq)
+{
+	if (txq == NULL)
+		return CPFL_SCALAR_PATH;
+
+	if (txq->rs_thresh < IDPF_VPMD_TX_MAX_BURST ||
+	    (txq->rs_thresh & 3) != 0)
+		return CPFL_SCALAR_PATH;
+
+	if ((txq->offloads & CPFL_TX_NO_VECTOR_FLAGS) != 0)
+		return CPFL_SCALAR_PATH;
+
+	return CPFL_VECTOR_PATH;
+}
+
+static inline int
+cpfl_rx_vec_dev_check_default(struct rte_eth_dev *dev)
+{
+	struct idpf_rx_queue *rxq;
+	int i, ret = 0;
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		rxq = dev->data->rx_queues[i];
+		ret = (cpfl_rx_vec_queue_default(rxq));
+		if (ret == CPFL_SCALAR_PATH)
+			return CPFL_SCALAR_PATH;
+	}
+
+	return CPFL_VECTOR_PATH;
+}
+
+static inline int
+cpfl_tx_vec_dev_check_default(struct rte_eth_dev *dev)
+{
+	int i;
+	struct idpf_tx_queue *txq;
+	int ret = 0;
+
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		txq = dev->data->tx_queues[i];
+		ret = cpfl_tx_vec_queue_default(txq);
+		if (ret == CPFL_SCALAR_PATH)
+			return CPFL_SCALAR_PATH;
+	}
+
+	return CPFL_VECTOR_PATH;
+}
+
+#endif /*_CPFL_RXTX_VEC_COMMON_H_*/
diff --git a/drivers/net/cpfl/meson.build b/drivers/net/cpfl/meson.build
index 1894423689..fbe6500826 100644
--- a/drivers/net/cpfl/meson.build
+++ b/drivers/net/cpfl/meson.build
@@ -7,9 +7,32 @@ if is_windows
     subdir_done()
 endif
 
+if dpdk_conf.get('RTE_IOVA_AS_PA') == 0
+    build = false
+    reason = 'driver does not support disabling IOVA as PA mode'
+    subdir_done()
+endif
+
 deps += ['common_idpf']
 
 sources = files(
         'cpfl_ethdev.c',
         'cpfl_rxtx.c',
-)
\ No newline at end of file
+)
+
+if arch_subdir == 'x86'
+    cpfl_avx512_cpu_support = (
+        cc.get_define('__AVX512F__', args: machine_args) != '' and
+        cc.get_define('__AVX512BW__', args: machine_args) != ''
+    )
+
+    cpfl_avx512_cc_support = (
+        not machine_args.contains('-mno-avx512f') and
+        cc.has_argument('-mavx512f') and
+        cc.has_argument('-mavx512bw')
+    )
+
+    if cpfl_avx512_cpu_support == true or cpfl_avx512_cc_support == true
+        cflags += ['-DCC_AVX512_SUPPORT']
+    endif
+endif
-- 
2.25.1


* [PATCH v5 16/21] net/cpfl: support timestamp offload
  2023-02-09  8:45     ` [PATCH v5 00/21] add support for cpfl PMD in DPDK Mingxia Liu
                         ` (14 preceding siblings ...)
  2023-02-09  8:45       ` [PATCH v5 15/21] net/cpfl: add AVX512 data path for single queue model Mingxia Liu
@ 2023-02-09  8:45       ` Mingxia Liu
  2023-02-09  8:45       ` [PATCH v5 17/21] net/cpfl: add AVX512 data path for split queue model Mingxia Liu
                         ` (6 subsequent siblings)
  22 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-02-09  8:45 UTC (permalink / raw)
  To: dev, qi.z.zhang, jingjing.wu, beilei.xing; +Cc: Mingxia Liu

Add support for timestamp offload.
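
A minimal application-side sketch of using the offload (port_id and the
queue counts are placeholders; the dynamic mbuf field is the generic
ethdev timestamp mechanism, not anything cpfl-specific):

    #include <rte_ethdev.h>
    #include <rte_mbuf_dyn.h>

    /* Request Rx timestamping when configuring the port. */
    struct rte_eth_conf conf = {0};
    conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_TIMESTAMP;
    rte_eth_dev_configure(port_id, 1, 1, &conf);

    /* After rte_eth_rx_burst(), read the timestamp back through the
     * registered dynamic mbuf field. */
    int ts_off = rte_mbuf_dynfield_lookup(RTE_MBUF_DYNFIELD_TIMESTAMP_NAME, NULL);
    rte_mbuf_timestamp_t ts =
            *RTE_MBUF_DYNFIELD(mbuf, ts_off, rte_mbuf_timestamp_t *);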

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 doc/guides/nics/features/cpfl.ini | 1 +
 drivers/net/cpfl/cpfl_ethdev.c    | 3 ++-
 drivers/net/cpfl/cpfl_rxtx.c      | 7 +++++++
 3 files changed, 10 insertions(+), 1 deletion(-)

diff --git a/doc/guides/nics/features/cpfl.ini b/doc/guides/nics/features/cpfl.ini
index f4e45c7c68..c1209df3e5 100644
--- a/doc/guides/nics/features/cpfl.ini
+++ b/doc/guides/nics/features/cpfl.ini
@@ -11,6 +11,7 @@ MTU update           = Y
 TSO                  = P
 L3 checksum offload  = P
 L4 checksum offload  = P
+Timestamp offload    = P
 Linux                = Y
 x86-32               = Y
 x86-64               = Y
diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index 9d921b4355..5393b32922 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -103,7 +103,8 @@ cpfl_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 		RTE_ETH_RX_OFFLOAD_IPV4_CKSUM           |
 		RTE_ETH_RX_OFFLOAD_UDP_CKSUM            |
 		RTE_ETH_RX_OFFLOAD_TCP_CKSUM            |
-		RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM;
+		RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM     |
+		RTE_ETH_RX_OFFLOAD_TIMESTAMP;
 
 	dev_info->tx_offload_capa =
 		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM		|
diff --git a/drivers/net/cpfl/cpfl_rxtx.c b/drivers/net/cpfl/cpfl_rxtx.c
index f1119b27e1..c81e830c6a 100644
--- a/drivers/net/cpfl/cpfl_rxtx.c
+++ b/drivers/net/cpfl/cpfl_rxtx.c
@@ -516,6 +516,13 @@ cpfl_rx_queue_init(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 		return -EINVAL;
 	}
 
+	err = idpf_qc_ts_mbuf_register(rxq);
+	if (err != 0) {
+		PMD_DRV_LOG(ERR, "fail to register timestamp mbuf %u",
+					rx_queue_id);
+		return -EIO;
+	}
+
 	if (rxq->bufq1 == NULL) {
 		/* Single queue */
 		err = idpf_qc_single_rxq_mbufs_alloc(rxq);
-- 
2.25.1


* [PATCH v5 17/21] net/cpfl: add AVX512 data path for split queue model
  2023-02-09  8:45     ` [PATCH v5 00/21] add support for cpfl PMD in DPDK Mingxia Liu
                         ` (15 preceding siblings ...)
  2023-02-09  8:45       ` [PATCH v5 16/21] net/cpfl: support timestamp offload Mingxia Liu
@ 2023-02-09  8:45       ` Mingxia Liu
  2023-02-09  8:45       ` [PATCH v5 18/21] net/cpfl: add HW statistics Mingxia Liu
                         ` (5 subsequent siblings)
  22 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-02-09  8:45 UTC (permalink / raw)
  To: dev, qi.z.zhang, jingjing.wu, beilei.xing; +Cc: Mingxia Liu, Wenjun Wu

Add support for the AVX512 data path for the split queue model.

Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/net/cpfl/cpfl_rxtx.c            | 56 +++++++++++++++++++++++--
 drivers/net/cpfl/cpfl_rxtx_vec_common.h | 20 ++++++++-
 2 files changed, 71 insertions(+), 5 deletions(-)

diff --git a/drivers/net/cpfl/cpfl_rxtx.c b/drivers/net/cpfl/cpfl_rxtx.c
index c81e830c6a..d55ce9696d 100644
--- a/drivers/net/cpfl/cpfl_rxtx.c
+++ b/drivers/net/cpfl/cpfl_rxtx.c
@@ -759,7 +759,8 @@ cpfl_set_rx_function(struct rte_eth_dev *dev)
 		if (rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_512)
 #ifdef CC_AVX512_SUPPORT
 			if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512F) == 1 &&
-			    rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512BW) == 1)
+			    rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512BW) == 1 &&
+			    rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512DQ))
 				vport->rx_use_avx512 = true;
 #else
 		PMD_DRV_LOG(NOTICE,
@@ -772,6 +773,21 @@ cpfl_set_rx_function(struct rte_eth_dev *dev)
 
 #ifdef RTE_ARCH_X86
 	if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
+		if (vport->rx_vec_allowed) {
+			for (i = 0; i < dev->data->nb_rx_queues; i++) {
+				rxq = dev->data->rx_queues[i];
+				(void)idpf_qc_splitq_rx_vec_setup(rxq);
+			}
+#ifdef CC_AVX512_SUPPORT
+			if (vport->rx_use_avx512) {
+				PMD_DRV_LOG(NOTICE,
+					    "Using Split AVX512 Vector Rx (port %d).",
+					    dev->data->port_id);
+				dev->rx_pkt_burst = idpf_dp_splitq_recv_pkts_avx512;
+				return;
+			}
+#endif /* CC_AVX512_SUPPORT */
+		}
 		PMD_DRV_LOG(NOTICE,
 			    "Using Split Scalar Rx (port %d).",
 			    dev->data->port_id);
@@ -827,9 +843,17 @@ cpfl_set_tx_function(struct rte_eth_dev *dev)
 		vport->tx_vec_allowed = true;
 		if (rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_512)
 #ifdef CC_AVX512_SUPPORT
+		{
 			if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512F) == 1 &&
 			    rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512BW) == 1)
 				vport->tx_use_avx512 = true;
+			if (vport->tx_use_avx512) {
+				for (i = 0; i < dev->data->nb_tx_queues; i++) {
+					txq = dev->data->tx_queues[i];
+					idpf_qc_tx_vec_avx512_setup(txq);
+				}
+			}
+		}
 #else
 		PMD_DRV_LOG(NOTICE,
 			    "AVX512 is not supported in build env");
@@ -839,14 +863,26 @@ cpfl_set_tx_function(struct rte_eth_dev *dev)
 	}
 #endif /* RTE_ARCH_X86 */
 
+#ifdef RTE_ARCH_X86
 	if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
+		if (vport->tx_vec_allowed) {
+#ifdef CC_AVX512_SUPPORT
+			if (vport->tx_use_avx512) {
+				PMD_DRV_LOG(NOTICE,
+					    "Using Split AVX512 Vector Tx (port %d).",
+					    dev->data->port_id);
+				dev->tx_pkt_burst = idpf_dp_splitq_xmit_pkts_avx512;
+				dev->tx_pkt_prepare = idpf_dp_prep_pkts;
+				return;
+			}
+#endif /* CC_AVX512_SUPPORT */
+		}
 		PMD_DRV_LOG(NOTICE,
 			    "Using Split Scalar Tx (port %d).",
 			    dev->data->port_id);
 		dev->tx_pkt_burst = idpf_dp_splitq_xmit_pkts;
 		dev->tx_pkt_prepare = idpf_dp_prep_pkts;
 	} else {
-#ifdef RTE_ARCH_X86
 		if (vport->tx_vec_allowed) {
 #ifdef CC_AVX512_SUPPORT
 			if (vport->tx_use_avx512) {
@@ -865,11 +901,25 @@ cpfl_set_tx_function(struct rte_eth_dev *dev)
 			}
 #endif /* CC_AVX512_SUPPORT */
 		}
-#endif /* RTE_ARCH_X86 */
 		PMD_DRV_LOG(NOTICE,
 			    "Using Single Scalar Tx (port %d).",
 			    dev->data->port_id);
 		dev->tx_pkt_burst = idpf_dp_singleq_xmit_pkts;
 		dev->tx_pkt_prepare = idpf_dp_prep_pkts;
 	}
+#else
+	if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
+		PMD_DRV_LOG(NOTICE,
+			    "Using Split Scalar Tx (port %d).",
+			    dev->data->port_id);
+		dev->tx_pkt_burst = idpf_dp_splitq_xmit_pkts;
+		dev->tx_pkt_prepare = idpf_dp_prep_pkts;
+	} else {
+		PMD_DRV_LOG(NOTICE,
+			    "Using Single Scalar Tx (port %d).",
+			    dev->data->port_id);
+		dev->tx_pkt_burst = idpf_dp_singleq_xmit_pkts;
+		dev->tx_pkt_prepare = idpf_dp_prep_pkts;
+	}
+#endif /* RTE_ARCH_X86 */
 }
diff --git a/drivers/net/cpfl/cpfl_rxtx_vec_common.h b/drivers/net/cpfl/cpfl_rxtx_vec_common.h
index 2d4c6a0ef3..665418d27d 100644
--- a/drivers/net/cpfl/cpfl_rxtx_vec_common.h
+++ b/drivers/net/cpfl/cpfl_rxtx_vec_common.h
@@ -64,15 +64,31 @@ cpfl_tx_vec_queue_default(struct idpf_tx_queue *txq)
 	return CPFL_VECTOR_PATH;
 }
 
+static inline int
+cpfl_rx_splitq_vec_default(struct idpf_rx_queue *rxq)
+{
+	if (rxq->bufq2->rx_buf_len < rxq->max_pkt_len)
+		return CPFL_SCALAR_PATH;
+
+	return CPFL_VECTOR_PATH;
+}
+
 static inline int
 cpfl_rx_vec_dev_check_default(struct rte_eth_dev *dev)
 {
+	struct idpf_vport *vport = dev->data->dev_private;
 	struct idpf_rx_queue *rxq;
-	int i, ret = 0;
+	int i, default_ret, splitq_ret, ret = CPFL_SCALAR_PATH;
 
 	for (i = 0; i < dev->data->nb_rx_queues; i++) {
 		rxq = dev->data->rx_queues[i];
-		ret = (cpfl_rx_vec_queue_default(rxq));
+		default_ret = cpfl_rx_vec_queue_default(rxq);
+		if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
+			splitq_ret = cpfl_rx_splitq_vec_default(rxq);
+			ret = splitq_ret && default_ret;
+		} else {
+			ret = default_ret;
+		}
 		if (ret == CPFL_SCALAR_PATH)
 			return CPFL_SCALAR_PATH;
 	}
-- 
2.25.1


* [PATCH v5 18/21] net/cpfl: add HW statistics
  2023-02-09  8:45     ` [PATCH v5 00/21] add support for cpfl PMD in DPDK Mingxia Liu
                         ` (16 preceding siblings ...)
  2023-02-09  8:45       ` [PATCH v5 17/21] net/cpfl: add AVX512 data path for split queue model Mingxia Liu
@ 2023-02-09  8:45       ` Mingxia Liu
  2023-02-09  8:45       ` [PATCH v5 19/21] net/cpfl: add RSS set/get ops Mingxia Liu
                         ` (4 subsequent siblings)
  22 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-02-09  8:45 UTC (permalink / raw)
  To: dev, qi.z.zhang, jingjing.wu, beilei.xing; +Cc: Mingxia Liu

This patch adds hardware packet/byte statistics.
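
As a usage sketch (port_id is a placeholder), the counters are read and
cleared through the standard ethdev stats API:

    #include <stdio.h>
    #include <inttypes.h>
    #include <rte_ethdev.h>

    struct rte_eth_stats stats;

    /* Read the vport counters accumulated since the last reset. */
    if (rte_eth_stats_get(port_id, &stats) == 0)
            printf("ipackets=%" PRIu64 " ibytes=%" PRIu64 " imissed=%" PRIu64 "\n",
                   stats.ipackets, stats.ibytes, stats.imissed);

    /* Snapshot the current hardware values as the new offsets. */
    rte_eth_stats_reset(port_id);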

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/net/cpfl/cpfl_ethdev.c | 86 ++++++++++++++++++++++++++++++++++
 1 file changed, 86 insertions(+)

diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index 5393b32922..c6ae8039fb 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -178,6 +178,87 @@ cpfl_dev_supported_ptypes_get(struct rte_eth_dev *dev __rte_unused)
 	return ptypes;
 }
 
+static uint64_t
+cpfl_get_mbuf_alloc_failed_stats(struct rte_eth_dev *dev)
+{
+	uint64_t mbuf_alloc_failed = 0;
+	struct idpf_rx_queue *rxq;
+	int i = 0;
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		rxq = dev->data->rx_queues[i];
+		mbuf_alloc_failed += __atomic_load_n(&rxq->rx_stats.mbuf_alloc_failed,
+						     __ATOMIC_RELAXED);
+	}
+
+	return mbuf_alloc_failed;
+}
+
+static int
+cpfl_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
+{
+	struct idpf_vport *vport =
+		(struct idpf_vport *)dev->data->dev_private;
+	struct virtchnl2_vport_stats *pstats = NULL;
+	int ret;
+
+	ret = idpf_vc_stats_query(vport, &pstats);
+	if (ret == 0) {
+		uint8_t crc_stats_len = (dev->data->dev_conf.rxmode.offloads &
+					 RTE_ETH_RX_OFFLOAD_KEEP_CRC) ? 0 :
+					 RTE_ETHER_CRC_LEN;
+
+		idpf_vport_stats_update(&vport->eth_stats_offset, pstats);
+		stats->ipackets = pstats->rx_unicast + pstats->rx_multicast +
+				pstats->rx_broadcast - pstats->rx_discards;
+		stats->opackets = pstats->tx_broadcast + pstats->tx_multicast +
+						pstats->tx_unicast;
+		stats->imissed = pstats->rx_discards;
+		stats->oerrors = pstats->tx_errors + pstats->tx_discards;
+		stats->ibytes = pstats->rx_bytes;
+		stats->ibytes -= stats->ipackets * crc_stats_len;
+		stats->obytes = pstats->tx_bytes;
+
+		dev->data->rx_mbuf_alloc_failed = cpfl_get_mbuf_alloc_failed_stats(dev);
+		stats->rx_nombuf = dev->data->rx_mbuf_alloc_failed;
+	} else {
+		PMD_DRV_LOG(ERR, "Get statistics failed");
+	}
+	return ret;
+}
+
+static void
+cpfl_reset_mbuf_alloc_failed_stats(struct rte_eth_dev *dev)
+{
+	struct idpf_rx_queue *rxq;
+	int i;
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		rxq = dev->data->rx_queues[i];
+		__atomic_store_n(&rxq->rx_stats.mbuf_alloc_failed, 0, __ATOMIC_RELAXED);
+	}
+}
+
+static int
+cpfl_dev_stats_reset(struct rte_eth_dev *dev)
+{
+	struct idpf_vport *vport =
+		(struct idpf_vport *)dev->data->dev_private;
+	struct virtchnl2_vport_stats *pstats = NULL;
+	int ret;
+
+	ret = idpf_vc_stats_query(vport, &pstats);
+	if (ret != 0)
+		return ret;
+
+	/* set stats offset base on current values */
+	vport->eth_stats_offset = *pstats;
+
+	cpfl_reset_mbuf_alloc_failed_stats(dev);
+
+	return 0;
+}
+
 static int
 cpfl_init_rss(struct idpf_vport *vport)
 {
@@ -365,6 +446,9 @@ cpfl_dev_start(struct rte_eth_dev *dev)
 		goto err_vport;
 	}
 
+	if (cpfl_dev_stats_reset(dev))
+		PMD_DRV_LOG(ERR, "Failed to reset stats");
+
 	vport->stopped = 0;
 
 	return 0;
@@ -766,6 +850,8 @@ static const struct eth_dev_ops cpfl_eth_dev_ops = {
 	.tx_queue_release		= cpfl_dev_tx_queue_release,
 	.mtu_set			= cpfl_dev_mtu_set,
 	.dev_supported_ptypes_get	= cpfl_dev_supported_ptypes_get,
+	.stats_get			= cpfl_dev_stats_get,
+	.stats_reset			= cpfl_dev_stats_reset,
 };
 
 static uint16_t
-- 
2.25.1


* [PATCH v5 19/21] net/cpfl: add RSS set/get ops
  2023-02-09  8:45     ` [PATCH v5 00/21] add support for cpfl PMD in DPDK Mingxia Liu
                         ` (17 preceding siblings ...)
  2023-02-09  8:45       ` [PATCH v5 18/21] net/cpfl: add HW statistics Mingxia Liu
@ 2023-02-09  8:45       ` Mingxia Liu
  2023-02-09  8:45       ` [PATCH v5 20/21] net/cpfl: support scalar scatter Rx datapath for single queue model Mingxia Liu
                         ` (3 subsequent siblings)
  22 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-02-09  8:45 UTC (permalink / raw)
  To: dev, qi.z.zhang, jingjing.wu, beilei.xing; +Cc: Mingxia Liu

Add support for these device ops:
- rss_reta_update
- rss_reta_query
- rss_hash_update
- rss_hash_conf_get
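
A hedged application-side sketch of exercising these ops through the
generic ethdev API (port_id is a placeholder, and the 52-byte key length
is only an assumption for illustration; the real size is reported by the
driver in rte_eth_dev_info.hash_key_size):

    #include <rte_ethdev.h>

    uint8_t key[52] = {0}; /* filled with real key material by the app */
    struct rte_eth_rss_conf rss_conf = {
            .rss_key = key,
            .rss_key_len = sizeof(key),
            .rss_hf = RTE_ETH_RSS_NONFRAG_IPV4_TCP |
                      RTE_ETH_RSS_NONFRAG_IPV4_UDP,
    };

    /* Ends up in cpfl_rss_hash_update()/cpfl_rss_hash_conf_get(). */
    rte_eth_dev_rss_hash_update(port_id, &rss_conf);
    rte_eth_dev_rss_hash_conf_get(port_id, &rss_conf);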

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/net/cpfl/cpfl_ethdev.c | 268 +++++++++++++++++++++++++++++++++
 1 file changed, 268 insertions(+)

diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index c6ae8039fb..c657f9c7cc 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -30,6 +30,56 @@ static const char * const cpfl_valid_args[] = {
 	NULL
 };
 
+static const uint64_t cpfl_map_hena_rss[] = {
+	[IDPF_HASH_NONF_UNICAST_IPV4_UDP] =
+			RTE_ETH_RSS_NONFRAG_IPV4_UDP,
+	[IDPF_HASH_NONF_MULTICAST_IPV4_UDP] =
+			RTE_ETH_RSS_NONFRAG_IPV4_UDP,
+	[IDPF_HASH_NONF_IPV4_UDP] =
+			RTE_ETH_RSS_NONFRAG_IPV4_UDP,
+	[IDPF_HASH_NONF_IPV4_TCP_SYN_NO_ACK] =
+			RTE_ETH_RSS_NONFRAG_IPV4_TCP,
+	[IDPF_HASH_NONF_IPV4_TCP] =
+			RTE_ETH_RSS_NONFRAG_IPV4_TCP,
+	[IDPF_HASH_NONF_IPV4_SCTP] =
+			RTE_ETH_RSS_NONFRAG_IPV4_SCTP,
+	[IDPF_HASH_NONF_IPV4_OTHER] =
+			RTE_ETH_RSS_NONFRAG_IPV4_OTHER,
+	[IDPF_HASH_FRAG_IPV4] = RTE_ETH_RSS_FRAG_IPV4,
+
+	/* IPv6 */
+	[IDPF_HASH_NONF_UNICAST_IPV6_UDP] =
+			RTE_ETH_RSS_NONFRAG_IPV6_UDP,
+	[IDPF_HASH_NONF_MULTICAST_IPV6_UDP] =
+			RTE_ETH_RSS_NONFRAG_IPV6_UDP,
+	[IDPF_HASH_NONF_IPV6_UDP] =
+			RTE_ETH_RSS_NONFRAG_IPV6_UDP,
+	[IDPF_HASH_NONF_IPV6_TCP_SYN_NO_ACK] =
+			RTE_ETH_RSS_NONFRAG_IPV6_TCP,
+	[IDPF_HASH_NONF_IPV6_TCP] =
+			RTE_ETH_RSS_NONFRAG_IPV6_TCP,
+	[IDPF_HASH_NONF_IPV6_SCTP] =
+			RTE_ETH_RSS_NONFRAG_IPV6_SCTP,
+	[IDPF_HASH_NONF_IPV6_OTHER] =
+			RTE_ETH_RSS_NONFRAG_IPV6_OTHER,
+	[IDPF_HASH_FRAG_IPV6] = RTE_ETH_RSS_FRAG_IPV6,
+
+	/* L2 Payload */
+	[IDPF_HASH_L2_PAYLOAD] = RTE_ETH_RSS_L2_PAYLOAD
+};
+
+static const uint64_t cpfl_ipv4_rss = RTE_ETH_RSS_NONFRAG_IPV4_UDP |
+			  RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+			  RTE_ETH_RSS_NONFRAG_IPV4_SCTP |
+			  RTE_ETH_RSS_NONFRAG_IPV4_OTHER |
+			  RTE_ETH_RSS_FRAG_IPV4;
+
+static const uint64_t cpfl_ipv6_rss = RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+			  RTE_ETH_RSS_NONFRAG_IPV6_TCP |
+			  RTE_ETH_RSS_NONFRAG_IPV6_SCTP |
+			  RTE_ETH_RSS_NONFRAG_IPV6_OTHER |
+			  RTE_ETH_RSS_FRAG_IPV6;
+
 static int
 cpfl_dev_link_update(struct rte_eth_dev *dev,
 		     __rte_unused int wait_to_complete)
@@ -97,6 +147,9 @@ cpfl_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->max_mtu = vport->max_mtu;
 	dev_info->min_mtu = RTE_ETHER_MIN_MTU;
 
+	dev_info->hash_key_size = vport->rss_key_size;
+	dev_info->reta_size = vport->rss_lut_size;
+
 	dev_info->flow_type_rss_offloads = CPFL_RSS_OFFLOAD_ALL;
 
 	dev_info->rx_offload_capa =
@@ -259,6 +312,36 @@ cpfl_dev_stats_reset(struct rte_eth_dev *dev)
 	return 0;
 }
 
+static int cpfl_config_rss_hf(struct idpf_vport *vport, uint64_t rss_hf)
+{
+	uint64_t hena = 0;
+	uint16_t i;
+
+	/*
+	 * RTE_ETH_RSS_IPV4 and RTE_ETH_RSS_IPV6 can be considered as two
+	 * generalizations of all other IPv4 and IPv6 RSS types.
+	 */
+	if (rss_hf & RTE_ETH_RSS_IPV4)
+		rss_hf |= cpfl_ipv4_rss;
+
+	if (rss_hf & RTE_ETH_RSS_IPV6)
+		rss_hf |= cpfl_ipv6_rss;
+
+	for (i = 0; i < RTE_DIM(cpfl_map_hena_rss); i++) {
+		if (cpfl_map_hena_rss[i] & rss_hf)
+			hena |= BIT_ULL(i);
+	}
+
+	/*
+	 * At present, the CP doesn't process the virtual channel message for
+	 * rss_hf configuration, so only a warning is given below.
+	 */
+	if (hena != vport->rss_hf)
+		PMD_DRV_LOG(WARNING, "Updating RSS Hash Function is not supported at present.");
+
+	return 0;
+}
+
 static int
 cpfl_init_rss(struct idpf_vport *vport)
 {
@@ -295,6 +378,187 @@ cpfl_init_rss(struct idpf_vport *vport)
 	return ret;
 }
 
+static int
+cpfl_rss_reta_update(struct rte_eth_dev *dev,
+		     struct rte_eth_rss_reta_entry64 *reta_conf,
+		     uint16_t reta_size)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_adapter *adapter = vport->adapter;
+	uint16_t idx, shift;
+	int ret = 0;
+	uint16_t i;
+
+	if (adapter->caps.rss_caps == 0 || dev->data->nb_rx_queues == 0) {
+		PMD_DRV_LOG(DEBUG, "RSS is not supported");
+		return -ENOTSUP;
+	}
+
+	if (reta_size != vport->rss_lut_size) {
+		PMD_DRV_LOG(ERR, "The size of hash lookup table configured "
+				 "(%d) doesn't match the number of hardware can "
+				 "support (%d)",
+			    reta_size, vport->rss_lut_size);
+		return -EINVAL;
+	}
+
+	for (i = 0; i < reta_size; i++) {
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
+		if (reta_conf[idx].mask & (1ULL << shift))
+			vport->rss_lut[i] = reta_conf[idx].reta[shift];
+	}
+
+	/* send virtchnl ops to configure RSS */
+	ret = idpf_vc_rss_lut_set(vport);
+	if (ret)
+		PMD_INIT_LOG(ERR, "Failed to configure RSS lut");
+
+	return ret;
+}
+
+static int
+cpfl_rss_reta_query(struct rte_eth_dev *dev,
+		    struct rte_eth_rss_reta_entry64 *reta_conf,
+		    uint16_t reta_size)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_adapter *adapter = vport->adapter;
+	uint16_t idx, shift;
+	int ret = 0;
+	uint16_t i;
+
+	if (adapter->caps.rss_caps == 0 || dev->data->nb_rx_queues == 0) {
+		PMD_DRV_LOG(DEBUG, "RSS is not supported");
+		return -ENOTSUP;
+	}
+
+	if (reta_size != vport->rss_lut_size) {
+		PMD_DRV_LOG(ERR, "The size of hash lookup table configured "
+			"(%d) doesn't match the number of hardware can "
+			"support (%d)", reta_size, vport->rss_lut_size);
+		return -EINVAL;
+	}
+
+	ret = idpf_vc_rss_lut_get(vport);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "Failed to get RSS LUT");
+		return ret;
+	}
+
+	for (i = 0; i < reta_size; i++) {
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
+		if (reta_conf[idx].mask & (1ULL << shift))
+			reta_conf[idx].reta[shift] = vport->rss_lut[i];
+	}
+
+	return 0;
+}
+
+static int
+cpfl_rss_hash_update(struct rte_eth_dev *dev,
+		     struct rte_eth_rss_conf *rss_conf)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_adapter *adapter = vport->adapter;
+	int ret = 0;
+
+	if (adapter->caps.rss_caps == 0 || dev->data->nb_rx_queues == 0) {
+		PMD_DRV_LOG(DEBUG, "RSS is not supported");
+		return -ENOTSUP;
+	}
+
+	if (!rss_conf->rss_key || rss_conf->rss_key_len == 0) {
+		PMD_DRV_LOG(DEBUG, "No key to be configured");
+		goto skip_rss_key;
+	} else if (rss_conf->rss_key_len != vport->rss_key_size) {
+		PMD_DRV_LOG(ERR, "The size of hash key configured "
+				 "(%d) doesn't match the size of hardware can "
+				 "support (%d)",
+			    rss_conf->rss_key_len,
+			    vport->rss_key_size);
+		return -EINVAL;
+	}
+
+	rte_memcpy(vport->rss_key, rss_conf->rss_key,
+		   vport->rss_key_size);
+	ret = idpf_vc_rss_key_set(vport);
+	if (ret != 0) {
+		PMD_INIT_LOG(ERR, "Failed to configure RSS key");
+		return ret;
+	}
+
+skip_rss_key:
+	ret = cpfl_config_rss_hf(vport, rss_conf->rss_hf);
+	if (ret != 0) {
+		PMD_INIT_LOG(ERR, "Failed to configure RSS hash");
+		return ret;
+	}
+
+	return 0;
+}
+
+static uint64_t
+cpfl_map_general_rss_hf(uint64_t config_rss_hf, uint64_t last_general_rss_hf)
+{
+	uint64_t valid_rss_hf = 0;
+	uint16_t i;
+
+	for (i = 0; i < RTE_DIM(cpfl_map_hena_rss); i++) {
+		uint64_t bit = BIT_ULL(i);
+
+		if (bit & config_rss_hf)
+			valid_rss_hf |= cpfl_map_hena_rss[i];
+	}
+
+	if (valid_rss_hf & cpfl_ipv4_rss)
+		valid_rss_hf |= last_general_rss_hf & RTE_ETH_RSS_IPV4;
+
+	if (valid_rss_hf & cpfl_ipv6_rss)
+		valid_rss_hf |= last_general_rss_hf & RTE_ETH_RSS_IPV6;
+
+	return valid_rss_hf;
+}
+
+static int
+cpfl_rss_hash_conf_get(struct rte_eth_dev *dev,
+		       struct rte_eth_rss_conf *rss_conf)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_adapter *adapter = vport->adapter;
+	int ret = 0;
+
+	if (adapter->caps.rss_caps == 0 || dev->data->nb_rx_queues == 0) {
+		PMD_DRV_LOG(DEBUG, "RSS is not supported");
+		return -ENOTSUP;
+	}
+
+	ret = idpf_vc_rss_hash_get(vport);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "Failed to get RSS hf");
+		return ret;
+	}
+
+	rss_conf->rss_hf = cpfl_map_general_rss_hf(vport->rss_hf, vport->last_general_rss_hf);
+
+	if (!rss_conf->rss_key)
+		return 0;
+
+	ret = idpf_vc_rss_key_get(vport);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "Failed to get RSS key");
+		return ret;
+	}
+
+	if (rss_conf->rss_key_len > vport->rss_key_size)
+		rss_conf->rss_key_len = vport->rss_key_size;
+
+	rte_memcpy(rss_conf->rss_key, vport->rss_key, rss_conf->rss_key_len);
+
+	return 0;
+}
+
 static int
 cpfl_dev_configure(struct rte_eth_dev *dev)
 {
@@ -852,6 +1116,10 @@ static const struct eth_dev_ops cpfl_eth_dev_ops = {
 	.dev_supported_ptypes_get	= cpfl_dev_supported_ptypes_get,
 	.stats_get			= cpfl_dev_stats_get,
 	.stats_reset			= cpfl_dev_stats_reset,
+	.reta_update			= cpfl_rss_reta_update,
+	.reta_query			= cpfl_rss_reta_query,
+	.rss_hash_update		= cpfl_rss_hash_update,
+	.rss_hash_conf_get		= cpfl_rss_hash_conf_get,
 };
 
 static uint16_t
-- 
2.25.1


* [PATCH v5 20/21] net/cpfl: support scalar scatter Rx datapath for single queue model
  2023-02-09  8:45     ` [PATCH v5 00/21] add support for cpfl PMD in DPDK Mingxia Liu
                         ` (18 preceding siblings ...)
  2023-02-09  8:45       ` [PATCH v5 19/21] net/cpfl: add RSS set/get ops Mingxia Liu
@ 2023-02-09  8:45       ` Mingxia Liu
  2023-02-09  8:45       ` [PATCH v5 21/21] net/cpfl: add xstats ops Mingxia Liu
                         ` (2 subsequent siblings)
  22 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-02-09  8:45 UTC (permalink / raw)
  To: dev, qi.z.zhang, jingjing.wu, beilei.xing; +Cc: Mingxia Liu, Wenjun Wu

This patch adds the scalar scatter Rx function for the single queue model.
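
A minimal sketch of enabling the scattered path from an application
(port_id and the MTU value are placeholders):

    #include <rte_ethdev.h>

    /* Enable Rx scatter so a frame larger than one Rx buffer may be
     * delivered as a chain of several mbufs. */
    struct rte_eth_conf conf = {0};
    conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_SCATTER;
    conf.rxmode.mtu = 9000;
    rte_eth_dev_configure(port_id, 1, 1, &conf);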

Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/net/cpfl/cpfl_ethdev.c |  3 ++-
 drivers/net/cpfl/cpfl_rxtx.c   | 27 +++++++++++++++++++++++++++
 drivers/net/cpfl/cpfl_rxtx.h   |  2 ++
 3 files changed, 31 insertions(+), 1 deletion(-)

diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index c657f9c7cc..a97d9b4494 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -157,7 +157,8 @@ cpfl_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 		RTE_ETH_RX_OFFLOAD_UDP_CKSUM            |
 		RTE_ETH_RX_OFFLOAD_TCP_CKSUM            |
 		RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM     |
-		RTE_ETH_RX_OFFLOAD_TIMESTAMP;
+		RTE_ETH_RX_OFFLOAD_TIMESTAMP		|
+		RTE_ETH_RX_OFFLOAD_SCATTER;
 
 	dev_info->tx_offload_capa =
 		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM		|
diff --git a/drivers/net/cpfl/cpfl_rxtx.c b/drivers/net/cpfl/cpfl_rxtx.c
index d55ce9696d..f4b76e0f90 100644
--- a/drivers/net/cpfl/cpfl_rxtx.c
+++ b/drivers/net/cpfl/cpfl_rxtx.c
@@ -503,6 +503,8 @@ int
 cpfl_rx_queue_init(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 {
 	struct idpf_rx_queue *rxq;
+	uint16_t max_pkt_len;
+	uint32_t frame_size;
 	int err;
 
 	if (rx_queue_id >= dev->data->nb_rx_queues)
@@ -516,6 +518,17 @@ cpfl_rx_queue_init(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 		return -EINVAL;
 	}
 
+	frame_size = dev->data->mtu + CPFL_ETH_OVERHEAD;
+
+	max_pkt_len =
+	    RTE_MIN((uint32_t)CPFL_SUPPORT_CHAIN_NUM * rxq->rx_buf_len,
+		    frame_size);
+
+	rxq->max_pkt_len = max_pkt_len;
+	if ((dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_SCATTER) ||
+	    frame_size > rxq->rx_buf_len)
+		dev->data->scattered_rx = 1;
+
 	err = idpf_qc_ts_mbuf_register(rxq);
 	if (err != 0) {
 		PMD_DRV_LOG(ERR, "fail to register timestamp mbuf %u",
@@ -808,6 +821,13 @@ cpfl_set_rx_function(struct rte_eth_dev *dev)
 			}
 #endif /* CC_AVX512_SUPPORT */
 		}
+		if (dev->data->scattered_rx) {
+			PMD_DRV_LOG(NOTICE,
+				    "Using Single Scalar Scatterd Rx (port %d).",
+				    dev->data->port_id);
+			dev->rx_pkt_burst = idpf_dp_singleq_recv_scatter_pkts;
+			return;
+		}
 		PMD_DRV_LOG(NOTICE,
 			    "Using Single Scalar Rx (port %d).",
 			    dev->data->port_id);
@@ -820,6 +840,13 @@ cpfl_set_rx_function(struct rte_eth_dev *dev)
 			    dev->data->port_id);
 		dev->rx_pkt_burst = idpf_dp_splitq_recv_pkts;
 	} else {
+		if (dev->data->scattered_rx) {
+			PMD_DRV_LOG(NOTICE,
+				    "Using Single Scalar Scatterd Rx (port %d).",
+				    dev->data->port_id);
+			dev->rx_pkt_burst = idpf_dp_singleq_recv_scatter_pkts;
+			return;
+		}
 		PMD_DRV_LOG(NOTICE,
 			    "Using Single Scalar Rx (port %d).",
 			    dev->data->port_id);
diff --git a/drivers/net/cpfl/cpfl_rxtx.h b/drivers/net/cpfl/cpfl_rxtx.h
index 5f8144e55f..fb267d38c8 100644
--- a/drivers/net/cpfl/cpfl_rxtx.h
+++ b/drivers/net/cpfl/cpfl_rxtx.h
@@ -21,6 +21,8 @@
 #define CPFL_DEFAULT_TX_RS_THRESH	32
 #define CPFL_DEFAULT_TX_FREE_THRESH	32
 
+#define CPFL_SUPPORT_CHAIN_NUM 5
+
 int cpfl_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 			uint16_t nb_desc, unsigned int socket_id,
 			const struct rte_eth_txconf *tx_conf);
-- 
2.25.1


* [PATCH v5 21/21] net/cpfl: add xstats ops
  2023-02-09  8:45     ` [PATCH v5 00/21] add support for cpfl PMD in DPDK Mingxia Liu
                         ` (19 preceding siblings ...)
  2023-02-09  8:45       ` [PATCH v5 20/21] net/cpfl: support scalar scatter Rx datapath for single queue model Mingxia Liu
@ 2023-02-09  8:45       ` Mingxia Liu
  2023-02-09 16:47       ` [PATCH v5 00/21] add support for cpfl PMD in DPDK Stephen Hemminger
  2023-02-13  2:19       ` [PATCH v6 " Mingxia Liu
  22 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-02-09  8:45 UTC (permalink / raw)
  To: dev, qi.z.zhang, jingjing.wu, beilei.xing; +Cc: Mingxia Liu

Add support for these device ops:
- dev_xstats_get
- dev_xstats_get_names
- dev_xstats_reset
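
A hedged sketch of reading the extended stats from an application
(port_id is a placeholder; error checks are omitted for brevity):

    #include <stdio.h>
    #include <stdlib.h>
    #include <inttypes.h>
    #include <rte_ethdev.h>

    /* First call with NULL returns the number of xstats. */
    int n = rte_eth_xstats_get_names(port_id, NULL, 0);
    struct rte_eth_xstat_name *names = calloc(n, sizeof(*names));
    struct rte_eth_xstat *vals = calloc(n, sizeof(*vals));

    rte_eth_xstats_get_names(port_id, names, n);
    rte_eth_xstats_get(port_id, vals, n);
    for (int i = 0; i < n; i++)
            printf("%s: %" PRIu64 "\n", names[vals[i].id].name, vals[i].value);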

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/net/cpfl/cpfl_ethdev.c | 80 ++++++++++++++++++++++++++++++++++
 1 file changed, 80 insertions(+)

diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index a97d9b4494..1f6c9aa248 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -80,6 +80,30 @@ static const uint64_t cpfl_ipv6_rss = RTE_ETH_RSS_NONFRAG_IPV6_UDP |
 			  RTE_ETH_RSS_NONFRAG_IPV6_OTHER |
 			  RTE_ETH_RSS_FRAG_IPV6;
 
+struct rte_cpfl_xstats_name_off {
+	char name[RTE_ETH_XSTATS_NAME_SIZE];
+	unsigned int offset;
+};
+
+static const struct rte_cpfl_xstats_name_off rte_cpfl_stats_strings[] = {
+	{"rx_bytes", offsetof(struct virtchnl2_vport_stats, rx_bytes)},
+	{"rx_unicast_packets", offsetof(struct virtchnl2_vport_stats, rx_unicast)},
+	{"rx_multicast_packets", offsetof(struct virtchnl2_vport_stats, rx_multicast)},
+	{"rx_broadcast_packets", offsetof(struct virtchnl2_vport_stats, rx_broadcast)},
+	{"rx_dropped_packets", offsetof(struct virtchnl2_vport_stats, rx_discards)},
+	{"rx_errors", offsetof(struct virtchnl2_vport_stats, rx_errors)},
+	{"rx_unknown_protocol_packets", offsetof(struct virtchnl2_vport_stats,
+						 rx_unknown_protocol)},
+	{"tx_bytes", offsetof(struct virtchnl2_vport_stats, tx_bytes)},
+	{"tx_unicast_packets", offsetof(struct virtchnl2_vport_stats, tx_unicast)},
+	{"tx_multicast_packets", offsetof(struct virtchnl2_vport_stats, tx_multicast)},
+	{"tx_broadcast_packets", offsetof(struct virtchnl2_vport_stats, tx_broadcast)},
+	{"tx_dropped_packets", offsetof(struct virtchnl2_vport_stats, tx_discards)},
+	{"tx_error_packets", offsetof(struct virtchnl2_vport_stats, tx_errors)}};
+
+#define CPFL_NB_XSTATS (sizeof(rte_cpfl_stats_strings) / \
+		sizeof(rte_cpfl_stats_strings[0]))
+
 static int
 cpfl_dev_link_update(struct rte_eth_dev *dev,
 		     __rte_unused int wait_to_complete)
@@ -313,6 +337,59 @@ cpfl_dev_stats_reset(struct rte_eth_dev *dev)
 	return 0;
 }
 
+static int cpfl_dev_xstats_reset(struct rte_eth_dev *dev)
+{
+	cpfl_dev_stats_reset(dev);
+	return 0;
+}
+
+static int cpfl_dev_xstats_get(struct rte_eth_dev *dev,
+			       struct rte_eth_xstat *xstats, unsigned int n)
+{
+	struct idpf_vport *vport =
+		(struct idpf_vport *)dev->data->dev_private;
+	struct virtchnl2_vport_stats *pstats = NULL;
+	unsigned int i;
+	int ret;
+
+	if (n < CPFL_NB_XSTATS)
+		return CPFL_NB_XSTATS;
+
+	if (!xstats)
+		return 0;
+
+	ret = idpf_vc_stats_query(vport, &pstats);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "Get statistics failed");
+		return 0;
+	}
+
+	idpf_vport_stats_update(&vport->eth_stats_offset, pstats);
+
+	/* loop over xstats array and values from pstats */
+	for (i = 0; i < CPFL_NB_XSTATS; i++) {
+		xstats[i].id = i;
+		xstats[i].value = *(uint64_t *)(((char *)pstats) +
+			rte_cpfl_stats_strings[i].offset);
+	}
+	return CPFL_NB_XSTATS;
+}
+
+static int cpfl_dev_xstats_get_names(__rte_unused struct rte_eth_dev *dev,
+				     struct rte_eth_xstat_name *xstats_names,
+				     __rte_unused unsigned int limit)
+{
+	unsigned int i;
+
+	if (xstats_names)
+		for (i = 0; i < CPFL_NB_XSTATS; i++) {
+			snprintf(xstats_names[i].name,
+				 sizeof(xstats_names[i].name),
+				 "%s", rte_cpfl_stats_strings[i].name);
+		}
+	return CPFL_NB_XSTATS;
+}
+
 static int cpfl_config_rss_hf(struct idpf_vport *vport, uint64_t rss_hf)
 {
 	uint64_t hena = 0;
@@ -1121,6 +1198,9 @@ static const struct eth_dev_ops cpfl_eth_dev_ops = {
 	.reta_query			= cpfl_rss_reta_query,
 	.rss_hash_update		= cpfl_rss_hash_update,
 	.rss_hash_conf_get		= cpfl_rss_hash_conf_get,
+	.xstats_get			= cpfl_dev_xstats_get,
+	.xstats_get_names		= cpfl_dev_xstats_get_names,
+	.xstats_reset			= cpfl_dev_xstats_reset,
 };
 
 static uint16_t
-- 
2.25.1


* Re: [PATCH v5 00/21] add support for cpfl PMD in DPDK
  2023-02-09  8:45     ` [PATCH v5 00/21] add support for cpfl PMD in DPDK Mingxia Liu
                         ` (20 preceding siblings ...)
  2023-02-09  8:45       ` [PATCH v5 21/21] net/cpfl: add xstats ops Mingxia Liu
@ 2023-02-09 16:47       ` Stephen Hemminger
  2023-02-13  1:37         ` Liu, Mingxia
  2023-02-13  2:19       ` [PATCH v6 " Mingxia Liu
  22 siblings, 1 reply; 263+ messages in thread
From: Stephen Hemminger @ 2023-02-09 16:47 UTC (permalink / raw)
  To: Mingxia Liu; +Cc: dev, qi.z.zhang, jingjing.wu, beilei.xing

On Thu,  9 Feb 2023 08:45:20 +0000
Mingxia Liu <mingxia.liu@intel.com> wrote:

> The patchset introduced the cpfl (Control Plane Function Library) PMD
> for Intel® IPU E2100's Configure Physical Function (Device ID: 0x1453)
> 
> The cpfl PMD inherits all the features from idpf PMD which will follow
> an ongoing standard data plan function spec
> https://www.oasis-open.org/committees/tc_home.php?wg_abbrev=idpf
> Besides, it will also support more device specific hardware offloading
> features from DPDK's control path (e.g.: hairpin, rte_flow …). which is
> different from idpf PMD, and that's why we need a new cpfl PMD.
> 
> This patchset mainly focuses on idpf PMD's equivalent features.
> To avoid duplicated code, the patchset depends on below patchsets which
> move the common part from net/idpf into common/idpf as a shared library.
> 
> v2 changes:
>  - rebase to the new baseline.
>  - Fix rss lut config issue.
> v3 changes:
>  - rebase to the new baseline.
> v4 changes:
>  - Resend v3. No code changed.
> v3 changes:
>  - rebase to the new baseline.
>  - optimize some code
>  - give "not supported" tips when user want to config rss hash type
>  - if stats reset fails at initialization time, don't rollback, just
>    print ERROR info
> 
> Mingxia Liu (21):
>   net/cpfl: support device initialization
>   net/cpfl: add Tx queue setup
>   net/cpfl: add Rx queue setup
>   net/cpfl: support device start and stop
>   net/cpfl: support queue start
>   net/cpfl: support queue stop
>   net/cpfl: support queue release
>   net/cpfl: support MTU configuration
>   net/cpfl: support basic Rx data path
>   net/cpfl: support basic Tx data path
>   net/cpfl: support write back based on ITR expire
>   net/cpfl: support RSS
>   net/cpfl: support Rx offloading
>   net/cpfl: support Tx offloading
>   net/cpfl: add AVX512 data path for single queue model
>   net/cpfl: support timestamp offload
>   net/cpfl: add AVX512 data path for split queue model
>   net/cpfl: add HW statistics
>   net/cpfl: add RSS set/get ops
>   net/cpfl: support scalar scatter Rx datapath for single queue model
>   net/cpfl: add xstats ops
> 
>  MAINTAINERS                             |    9 +
>  doc/guides/nics/cpfl.rst                |   88 ++
>  doc/guides/nics/features/cpfl.ini       |   17 +
>  doc/guides/rel_notes/release_23_03.rst  |    6 +
>  drivers/net/cpfl/cpfl_ethdev.c          | 1453 +++++++++++++++++++++++
>  drivers/net/cpfl/cpfl_ethdev.h          |   95 ++
>  drivers/net/cpfl/cpfl_logs.h            |   32 +
>  drivers/net/cpfl/cpfl_rxtx.c            |  952 +++++++++++++++
>  drivers/net/cpfl/cpfl_rxtx.h            |   44 +
>  drivers/net/cpfl/cpfl_rxtx_vec_common.h |  116 ++
>  drivers/net/cpfl/meson.build            |   38 +
>  drivers/net/meson.build                 |    1 +
>  12 files changed, 2851 insertions(+)
>  create mode 100644 doc/guides/nics/cpfl.rst
>  create mode 100644 doc/guides/nics/features/cpfl.ini
>  create mode 100644 drivers/net/cpfl/cpfl_ethdev.c
>  create mode 100644 drivers/net/cpfl/cpfl_ethdev.h
>  create mode 100644 drivers/net/cpfl/cpfl_logs.h
>  create mode 100644 drivers/net/cpfl/cpfl_rxtx.c
>  create mode 100644 drivers/net/cpfl/cpfl_rxtx.h
>  create mode 100644 drivers/net/cpfl/cpfl_rxtx_vec_common.h
>  create mode 100644 drivers/net/cpfl/meson.build
> 

Overall, the driver looks good. One recommendation would be to not
use rte_memcpy for small fixed-size structures.  Regular memcpy() will
be as fast or faster and gets more checking from analyzers.

Examples:
		rte_memcpy(adapter->mbx_resp, ctlq_msg.ctx.indirect.payload->va,

		rte_memcpy(ring_name, "cpfl Tx ring", sizeof("cpfl Tx ring"));

		rte_memcpy(ring_name, "cpfl Rx ring", sizeof("cpfl Rx ring"));
		rte_memcpy(ring_name, "cpfl Tx compl ring", sizeof("cpfl Tx compl ring"));
		rte_memcpy(ring_name, "cpfl Rx buf ring", sizeof("cpfl Rx buf ring"));
		rte_memcpy(vport->rss_key, rss_conf->rss_key,
		rte_memcpy(vport->rss_key, rss_conf->rss_key,
		rte_memcpy(rss_conf->rss_key, vport->rss_key, rss_conf->rss_key_len);
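
For instance, the rss_key copy above could become (a sketch of the
suggested change, not the final patch):

		/* #include <string.h> -- small fixed-size copy, plain
		 * memcpy() is enough and analyzers can check it */
		memcpy(vport->rss_key, rss_conf->rss_key,
		       vport->rss_key_size);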


* RE: [PATCH v5 00/21] add support for cpfl PMD in DPDK
  2023-02-09 16:47       ` [PATCH v5 00/21] add support for cpfl PMD in DPDK Stephen Hemminger
@ 2023-02-13  1:37         ` Liu, Mingxia
  0 siblings, 0 replies; 263+ messages in thread
From: Liu, Mingxia @ 2023-02-13  1:37 UTC (permalink / raw)
  To: Stephen Hemminger; +Cc: dev, Zhang, Qi Z, Wu,  Jingjing, Xing, Beilei

Ok, thanks, I'll update this in the next version.

> -----Original Message-----
> From: Stephen Hemminger <stephen@networkplumber.org>
> Sent: Friday, February 10, 2023 12:47 AM
> To: Liu, Mingxia <mingxia.liu@intel.com>
> Cc: dev@dpdk.org; Zhang, Qi Z <qi.z.zhang@intel.com>; Wu, Jingjing
> <jingjing.wu@intel.com>; Xing, Beilei <beilei.xing@intel.com>
> Subject: Re: [PATCH v5 00/21] add support for cpfl PMD in DPDK
> 
> On Thu,  9 Feb 2023 08:45:20 +0000
> Mingxia Liu <mingxia.liu@intel.com> wrote:
> 
> > The patchset introduced the cpfl (Control Plane Function Library) PMD
> > for Intel® IPU E2100's Configure Physical Function (Device ID: 0x1453)
> >
> > The cpfl PMD inherits all the features from idpf PMD which will follow
> > an ongoing standard data plan function spec
> > https://www.oasis-open.org/committees/tc_home.php?wg_abbrev=idpf
> > Besides, it will also support more device specific hardware offloading
> > features from DPDK's control path (e.g.: hairpin, rte_flow …). which
> > is different from idpf PMD, and that's why we need a new cpfl PMD.
> >
> > This patchset mainly focuses on idpf PMD's equivalent features.
> > To avoid duplicated code, the patchset depends on below patchsets
> > which move the common part from net/idpf into common/idpf as a shared
> library.
> >
> > v2 changes:
> >  - rebase to the new baseline.
> >  - Fix rss lut config issue.
> > v3 changes:
> >  - rebase to the new baseline.
> > v4 changes:
> >  - Resend v3. No code changed.
> > v3 changes:
> >  - rebase to the new baseline.
> >  - optimize some code
> >  - give "not supported" tips when user want to config rss hash type
> >  - if stats reset fails at initialization time, don't rollback, just
> >    print ERROR info
> >
> > Mingxia Liu (21):
> >   net/cpfl: support device initialization
> >   net/cpfl: add Tx queue setup
> >   net/cpfl: add Rx queue setup
> >   net/cpfl: support device start and stop
> >   net/cpfl: support queue start
> >   net/cpfl: support queue stop
> >   net/cpfl: support queue release
> >   net/cpfl: support MTU configuration
> >   net/cpfl: support basic Rx data path
> >   net/cpfl: support basic Tx data path
> >   net/cpfl: support write back based on ITR expire
> >   net/cpfl: support RSS
> >   net/cpfl: support Rx offloading
> >   net/cpfl: support Tx offloading
> >   net/cpfl: add AVX512 data path for single queue model
> >   net/cpfl: support timestamp offload
> >   net/cpfl: add AVX512 data path for split queue model
> >   net/cpfl: add HW statistics
> >   net/cpfl: add RSS set/get ops
> >   net/cpfl: support scalar scatter Rx datapath for single queue model
> >   net/cpfl: add xstats ops
> >
> >  MAINTAINERS                             |    9 +
> >  doc/guides/nics/cpfl.rst                |   88 ++
> >  doc/guides/nics/features/cpfl.ini       |   17 +
> >  doc/guides/rel_notes/release_23_03.rst  |    6 +
> >  drivers/net/cpfl/cpfl_ethdev.c          | 1453 +++++++++++++++++++++++
> >  drivers/net/cpfl/cpfl_ethdev.h          |   95 ++
> >  drivers/net/cpfl/cpfl_logs.h            |   32 +
> >  drivers/net/cpfl/cpfl_rxtx.c            |  952 +++++++++++++++
> >  drivers/net/cpfl/cpfl_rxtx.h            |   44 +
> >  drivers/net/cpfl/cpfl_rxtx_vec_common.h |  116 ++
> >  drivers/net/cpfl/meson.build            |   38 +
> >  drivers/net/meson.build                 |    1 +
> >  12 files changed, 2851 insertions(+)
> >  create mode 100644 doc/guides/nics/cpfl.rst  create mode 100644
> > doc/guides/nics/features/cpfl.ini  create mode 100644
> > drivers/net/cpfl/cpfl_ethdev.c  create mode 100644
> > drivers/net/cpfl/cpfl_ethdev.h  create mode 100644
> > drivers/net/cpfl/cpfl_logs.h  create mode 100644
> > drivers/net/cpfl/cpfl_rxtx.c  create mode 100644
> > drivers/net/cpfl/cpfl_rxtx.h  create mode 100644
> > drivers/net/cpfl/cpfl_rxtx_vec_common.h
> >  create mode 100644 drivers/net/cpfl/meson.build
> >
> 
> Overall, the driver looks good. One recommendation would be to not use
> rte_memcpy for small fixed size structure.  Regular memcpy() will be as fast
> or faster and get more checking from analyzers.
> 
> Examples:
> 		rte_memcpy(adapter->mbx_resp,
> ctlq_msg.ctx.indirect.payload->va,
> 
> 		rte_memcpy(ring_name, "cpfl Tx ring", sizeof("cpfl Tx ring"));
> 
> 		rte_memcpy(ring_name, "cpfl Rx ring", sizeof("cpfl Rx ring"));
> 		rte_memcpy(ring_name, "cpfl Tx compl ring", sizeof("cpfl Tx
> compl ring"));
> 		rte_memcpy(ring_name, "cpfl Rx buf ring", sizeof("cpfl Rx
> buf ring"));
> 		rte_memcpy(vport->rss_key, rss_conf->rss_key,
> 		rte_memcpy(vport->rss_key, rss_conf->rss_key,
> 		rte_memcpy(rss_conf->rss_key, vport->rss_key, rss_conf-
> >rss_key_len);


* [PATCH v6 00/21] add support for cpfl PMD in DPDK
  2023-02-09  8:45     ` [PATCH v5 00/21] add support for cpfl PMD in DPDK Mingxia Liu
                         ` (21 preceding siblings ...)
  2023-02-09 16:47       ` [PATCH v5 00/21] add support for cpfl PMD in DPDK Stephen Hemminger
@ 2023-02-13  2:19       ` Mingxia Liu
  2023-02-13  2:19         ` [PATCH v6 01/21] net/cpfl: support device initialization Mingxia Liu
                           ` (22 more replies)
  22 siblings, 23 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-02-13  2:19 UTC (permalink / raw)
  To: dev, beilei.xing, yuying.zhang; +Cc: Mingxia Liu

This patchset introduces the cpfl (Control Plane Function Library) PMD
for the Intel® IPU E2100's Configure Physical Function (Device ID: 0x1453).

The cpfl PMD inherits all the features of the idpf PMD, which follows
an ongoing standard data plane function spec:
https://www.oasis-open.org/committees/tc_home.php?wg_abbrev=idpf
Besides, it will also support more device-specific hardware offloading
features from DPDK's control path (e.g. hairpin, rte_flow …), which is
different from the idpf PMD, and that's why we need a new cpfl PMD.

This patchset mainly focuses on the idpf PMD's equivalent features.
To avoid duplicated code, the patchset depends on the patchsets below,
which move the common part from net/idpf into common/idpf as a shared library.

v2 changes:
 - rebase to the new baseline.
 - Fix rss lut config issue.
v3 changes:
 - rebase to the new baseline.
v4 changes:
 - Resend v3. No code changed.
v5 changes:
 - rebase to the new baseline.
 - optimize some code
 - give "not supported" tips when user want to config rss hash type
 - if stats reset fails at initialization time, don't rollback, just
   print ERROR info
v6 changes:
 - for small fixed-size structures, change rte_memcpy() to memcpy()
 - fix compilation for AVX512DQ
 - update cpfl maintainers

Mingxia Liu (21):
  net/cpfl: support device initialization
  net/cpfl: add Tx queue setup
  net/cpfl: add Rx queue setup
  net/cpfl: support device start and stop
  net/cpfl: support queue start
  net/cpfl: support queue stop
  net/cpfl: support queue release
  net/cpfl: support MTU configuration
  net/cpfl: support basic Rx data path
  net/cpfl: support basic Tx data path
  net/cpfl: support write back based on ITR expire
  net/cpfl: support RSS
  net/cpfl: support Rx offloading
  net/cpfl: support Tx offloading
  net/cpfl: add AVX512 data path for single queue model
  net/cpfl: support timestamp offload
  net/cpfl: add AVX512 data path for split queue model
  net/cpfl: add HW statistics
  net/cpfl: add RSS set/get ops
  net/cpfl: support scalar scatter Rx datapath for single queue model
  net/cpfl: add xstats ops

 MAINTAINERS                             |    8 +
 doc/guides/nics/cpfl.rst                |   88 ++
 doc/guides/nics/features/cpfl.ini       |   17 +
 doc/guides/rel_notes/release_23_03.rst  |    6 +
 drivers/net/cpfl/cpfl_ethdev.c          | 1453 +++++++++++++++++++++++
 drivers/net/cpfl/cpfl_ethdev.h          |   95 ++
 drivers/net/cpfl/cpfl_logs.h            |   32 +
 drivers/net/cpfl/cpfl_rxtx.c            |  952 +++++++++++++++
 drivers/net/cpfl/cpfl_rxtx.h            |   44 +
 drivers/net/cpfl/cpfl_rxtx_vec_common.h |  116 ++
 drivers/net/cpfl/meson.build            |   40 +
 drivers/net/meson.build                 |    1 +
 12 files changed, 2852 insertions(+)
 create mode 100644 doc/guides/nics/cpfl.rst
 create mode 100644 doc/guides/nics/features/cpfl.ini
 create mode 100644 drivers/net/cpfl/cpfl_ethdev.c
 create mode 100644 drivers/net/cpfl/cpfl_ethdev.h
 create mode 100644 drivers/net/cpfl/cpfl_logs.h
 create mode 100644 drivers/net/cpfl/cpfl_rxtx.c
 create mode 100644 drivers/net/cpfl/cpfl_rxtx.h
 create mode 100644 drivers/net/cpfl/cpfl_rxtx_vec_common.h
 create mode 100644 drivers/net/cpfl/meson.build

-- 
2.25.1


* [PATCH v6 01/21] net/cpfl: support device initialization
  2023-02-13  2:19       ` [PATCH v6 " Mingxia Liu
@ 2023-02-13  2:19         ` Mingxia Liu
  2023-02-13  2:19         ` [PATCH v6 02/21] net/cpfl: add Tx queue setup Mingxia Liu
                           ` (21 subsequent siblings)
  22 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-02-13  2:19 UTC (permalink / raw)
  To: dev, beilei.xing, yuying.zhang; +Cc: Mingxia Liu

Support device init and add the following dev ops:
 - dev_configure
 - dev_close
 - dev_infos_get
 - link_update
 - dev_supported_ptypes_get

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 MAINTAINERS                            |   8 +
 doc/guides/nics/cpfl.rst               |  66 +++
 doc/guides/nics/features/cpfl.ini      |  12 +
 doc/guides/rel_notes/release_23_03.rst |   6 +
 drivers/net/cpfl/cpfl_ethdev.c         | 768 +++++++++++++++++++++++++
 drivers/net/cpfl/cpfl_ethdev.h         |  78 +++
 drivers/net/cpfl/cpfl_logs.h           |  32 ++
 drivers/net/cpfl/cpfl_rxtx.c           | 244 ++++++++
 drivers/net/cpfl/cpfl_rxtx.h           |  25 +
 drivers/net/cpfl/meson.build           |  14 +
 drivers/net/meson.build                |   1 +
 11 files changed, 1254 insertions(+)
 create mode 100644 doc/guides/nics/cpfl.rst
 create mode 100644 doc/guides/nics/features/cpfl.ini
 create mode 100644 drivers/net/cpfl/cpfl_ethdev.c
 create mode 100644 drivers/net/cpfl/cpfl_ethdev.h
 create mode 100644 drivers/net/cpfl/cpfl_logs.h
 create mode 100644 drivers/net/cpfl/cpfl_rxtx.c
 create mode 100644 drivers/net/cpfl/cpfl_rxtx.h
 create mode 100644 drivers/net/cpfl/meson.build

diff --git a/MAINTAINERS b/MAINTAINERS
index 9a0f416d2e..af80edaf6e 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -783,6 +783,14 @@ F: drivers/common/idpf/
 F: doc/guides/nics/idpf.rst
 F: doc/guides/nics/features/idpf.ini
 
+Intel cpfl
+M: Yuying Zhang <yuying.zhang@intel.com>
+M: Beilei Xing <beilei.xing@intel.com>
+T: git://dpdk.org/next/dpdk-next-net-intel
+F: drivers/net/cpfl/
+F: doc/guides/nics/cpfl.rst
+F: doc/guides/nics/features/cpfl.ini
+
 Intel igc
 M: Junfeng Guo <junfeng.guo@intel.com>
 M: Simei Su <simei.su@intel.com>
diff --git a/doc/guides/nics/cpfl.rst b/doc/guides/nics/cpfl.rst
new file mode 100644
index 0000000000..7c5aff0789
--- /dev/null
+++ b/doc/guides/nics/cpfl.rst
@@ -0,0 +1,66 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+   Copyright(c) 2022 Intel Corporation.
+
+.. include:: <isonum.txt>
+
+CPFL Poll Mode Driver
+=====================
+
+The [*EXPERIMENTAL*] cpfl PMD (**librte_net_cpfl**) provides poll mode driver support
+for Intel\ |reg| Infrastructure Processing Unit (Intel\ |reg| IPU) E2100.
+
+
+Linux Prerequisites
+-------------------
+
+Follow the DPDK :doc:`../linux_gsg/index` to set up the basic DPDK environment.
+
+To get better performance on Intel platforms,
+please follow the :doc:`../linux_gsg/nic_perf_intel_platform`.
+
+
+Pre-Installation Configuration
+------------------------------
+
+Runtime Config Options
+~~~~~~~~~~~~~~~~~~~~~~
+
+- ``vport`` (default ``0``)
+
+  The PMD supports creation of multiple vports for one PCI device,
+  each vport corresponds to a single ethdev.
+  The user can specify the vports with specific IDs to be created, for example::
+
+    -a ca:00.0,vport=[0,2,3]
+
+  Then the PMD will create 3 vports (ethdevs) for device ``ca:00.0``.
+
+  If the parameter is not provided, the vport 0 will be created by default.
+
+- ``rx_single`` (default ``0``)
+
+  There are two queue modes supported by Intel\ |reg| IPU Ethernet E2100 Series,
+  single queue mode and split queue mode for Rx queue.
+  User can choose Rx queue mode, example::
+
+    -a ca:00.0,rx_single=1
+
+  Then the PMD will configure Rx queue with single queue mode.
+  Otherwise, split queue mode is chosen by default.
+
+- ``tx_single`` (default ``0``)
+
+  There are two queue modes supported by Intel\ |reg| IPU Ethernet E2100 Series,
+  single queue mode and split queue mode for Tx queue.
+  User can choose Tx queue mode, example::
+
+    -a ca:00.0,tx_single=1
+
+  Then the PMD will configure Tx queue with single queue mode.
+  Otherwise, split queue mode is chosen by default.
+
+
+Driver compilation and testing
+------------------------------
+
+Refer to the document :doc:`build_and_test` for details.
\ No newline at end of file
diff --git a/doc/guides/nics/features/cpfl.ini b/doc/guides/nics/features/cpfl.ini
new file mode 100644
index 0000000000..a2d1ca9e15
--- /dev/null
+++ b/doc/guides/nics/features/cpfl.ini
@@ -0,0 +1,12 @@
+;
+; Supported features of the 'cpfl' network poll mode driver.
+;
+; Refer to default.ini for the full list of available PMD features.
+;
+; A feature with "P" indicates only be supported when non-vector path
+; is selected.
+;
+[Features]
+Linux                = Y
+x86-32               = Y
+x86-64               = Y
diff --git a/doc/guides/rel_notes/release_23_03.rst b/doc/guides/rel_notes/release_23_03.rst
index 07914170a7..b0b23d1a44 100644
--- a/doc/guides/rel_notes/release_23_03.rst
+++ b/doc/guides/rel_notes/release_23_03.rst
@@ -88,6 +88,12 @@ New Features
   * Added timesync API support.
   * Added packet pacing(launch time offloading) support.
 
+* **Added Intel cpfl driver.**
+
+  Added the new ``cpfl`` net driver
+  for Intel\ |reg| Infrastructure Processing Unit (Intel\ |reg| IPU) E2100.
+  See the :doc:`../nics/cpfl` NIC guide for more details on this new driver.
+
 Removed Items
 -------------
 
diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
new file mode 100644
index 0000000000..fe0061133c
--- /dev/null
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -0,0 +1,768 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Intel Corporation
+ */
+
+#include <rte_atomic.h>
+#include <rte_eal.h>
+#include <rte_ether.h>
+#include <rte_malloc.h>
+#include <rte_memzone.h>
+#include <rte_dev.h>
+#include <errno.h>
+#include <rte_alarm.h>
+
+#include "cpfl_ethdev.h"
+
+#define CPFL_TX_SINGLE_Q	"tx_single"
+#define CPFL_RX_SINGLE_Q	"rx_single"
+#define CPFL_VPORT		"vport"
+
+rte_spinlock_t cpfl_adapter_lock;
+/* A list for all adapters, one adapter matches one PCI device */
+struct cpfl_adapter_list cpfl_adapter_list;
+bool cpfl_adapter_list_init;
+
+static const char * const cpfl_valid_args[] = {
+	CPFL_TX_SINGLE_Q,
+	CPFL_RX_SINGLE_Q,
+	CPFL_VPORT,
+	NULL
+};
+
+static int
+cpfl_dev_link_update(struct rte_eth_dev *dev,
+		     __rte_unused int wait_to_complete)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct rte_eth_link new_link;
+
+	memset(&new_link, 0, sizeof(new_link));
+
+	switch (vport->link_speed) {
+	case RTE_ETH_SPEED_NUM_10M:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_10M;
+		break;
+	case RTE_ETH_SPEED_NUM_100M:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_100M;
+		break;
+	case RTE_ETH_SPEED_NUM_1G:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_1G;
+		break;
+	case RTE_ETH_SPEED_NUM_10G:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_10G;
+		break;
+	case RTE_ETH_SPEED_NUM_20G:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_20G;
+		break;
+	case RTE_ETH_SPEED_NUM_25G:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_25G;
+		break;
+	case RTE_ETH_SPEED_NUM_40G:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_40G;
+		break;
+	case RTE_ETH_SPEED_NUM_50G:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_50G;
+		break;
+	case RTE_ETH_SPEED_NUM_100G:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_100G;
+		break;
+	case RTE_ETH_SPEED_NUM_200G:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_200G;
+		break;
+	default:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+	}
+
+	new_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+	new_link.link_status = vport->link_up ? RTE_ETH_LINK_UP :
+		RTE_ETH_LINK_DOWN;
+	new_link.link_autoneg = (dev->data->dev_conf.link_speeds & RTE_ETH_LINK_SPEED_FIXED) ?
+				 RTE_ETH_LINK_FIXED : RTE_ETH_LINK_AUTONEG;
+
+	return rte_eth_linkstatus_set(dev, &new_link);
+}
+
+static int
+cpfl_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_adapter *adapter = vport->adapter;
+
+	dev_info->max_rx_queues = adapter->caps.max_rx_q;
+	dev_info->max_tx_queues = adapter->caps.max_tx_q;
+	dev_info->min_rx_bufsize = CPFL_MIN_BUF_SIZE;
+	dev_info->max_rx_pktlen = vport->max_mtu + CPFL_ETH_OVERHEAD;
+
+	dev_info->max_mtu = vport->max_mtu;
+	dev_info->min_mtu = RTE_ETHER_MIN_MTU;
+
+	return 0;
+}
+
+static const uint32_t *
+cpfl_dev_supported_ptypes_get(struct rte_eth_dev *dev __rte_unused)
+{
+	static const uint32_t ptypes[] = {
+		RTE_PTYPE_L2_ETHER,
+		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN,
+		RTE_PTYPE_L3_IPV6_EXT_UNKNOWN,
+		RTE_PTYPE_L4_FRAG,
+		RTE_PTYPE_L4_UDP,
+		RTE_PTYPE_L4_TCP,
+		RTE_PTYPE_L4_SCTP,
+		RTE_PTYPE_L4_ICMP,
+		RTE_PTYPE_UNKNOWN
+	};
+
+	return ptypes;
+}
+
+static int
+cpfl_dev_configure(struct rte_eth_dev *dev)
+{
+	struct rte_eth_conf *conf = &dev->data->dev_conf;
+
+	if (conf->link_speeds & RTE_ETH_LINK_SPEED_FIXED) {
+		PMD_INIT_LOG(ERR, "Setting link speed is not supported");
+		return -ENOTSUP;
+	}
+
+	if (conf->txmode.mq_mode != RTE_ETH_MQ_TX_NONE) {
+		PMD_INIT_LOG(ERR, "Multi-queue TX mode %d is not supported",
+			     conf->txmode.mq_mode);
+		return -ENOTSUP;
+	}
+
+	if (conf->lpbk_mode != 0) {
+		PMD_INIT_LOG(ERR, "Loopback operation mode %d is not supported",
+			     conf->lpbk_mode);
+		return -ENOTSUP;
+	}
+
+	if (conf->dcb_capability_en != 0) {
+		PMD_INIT_LOG(ERR, "Priority Flow Control(PFC) if not supported");
+		return -ENOTSUP;
+	}
+
+	if (conf->intr_conf.lsc != 0) {
+		PMD_INIT_LOG(ERR, "LSC interrupt is not supported");
+		return -ENOTSUP;
+	}
+
+	if (conf->intr_conf.rxq != 0) {
+		PMD_INIT_LOG(ERR, "RXQ interrupt is not supported");
+		return -ENOTSUP;
+	}
+
+	if (conf->intr_conf.rmv != 0) {
+		PMD_INIT_LOG(ERR, "RMV interrupt is not supported");
+		return -ENOTSUP;
+	}
+
+	return 0;
+}
+
+static int
+cpfl_dev_close(struct rte_eth_dev *dev)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct cpfl_adapter_ext *adapter = CPFL_ADAPTER_TO_EXT(vport->adapter);
+
+	idpf_vport_deinit(vport);
+
+	adapter->cur_vports &= ~RTE_BIT32(vport->devarg_id);
+	adapter->cur_vport_nb--;
+	dev->data->dev_private = NULL;
+	adapter->vports[vport->sw_idx] = NULL;
+	rte_free(vport);
+
+	return 0;
+}
+
+static int
+insert_value(struct cpfl_devargs *devargs, uint16_t id)
+{
+	uint16_t i;
+
+	/* ignore duplicate */
+	for (i = 0; i < devargs->req_vport_nb; i++) {
+		if (devargs->req_vports[i] == id)
+			return 0;
+	}
+
+	if (devargs->req_vport_nb >= RTE_DIM(devargs->req_vports)) {
+		PMD_INIT_LOG(ERR, "Total vport number can't be > %d",
+			     CPFL_MAX_VPORT_NUM);
+		return -EINVAL;
+	}
+
+	devargs->req_vports[devargs->req_vport_nb] = id;
+	devargs->req_vport_nb++;
+
+	return 0;
+}
+
+static const char *
+parse_range(const char *value, struct cpfl_devargs *devargs)
+{
+	uint16_t lo, hi, i;
+	int n = 0;
+	int result;
+	const char *pos = value;
+
+	result = sscanf(value, "%hu%n-%hu%n", &lo, &n, &hi, &n);
+	if (result == 1) {
+		if (lo >= CPFL_MAX_VPORT_NUM)
+			return NULL;
+		if (insert_value(devargs, lo) != 0)
+			return NULL;
+	} else if (result == 2) {
+		if (lo > hi || hi >= CPFL_MAX_VPORT_NUM)
+			return NULL;
+		for (i = lo; i <= hi; i++) {
+			if (insert_value(devargs, i) != 0)
+				return NULL;
+		}
+	} else {
+		return NULL;
+	}
+
+	return pos + n;
+}
+
+static int
+parse_vport(const char *key, const char *value, void *args)
+{
+	struct cpfl_devargs *devargs = args;
+	const char *pos = value;
+
+	devargs->req_vport_nb = 0;
+
+	if (*pos == '[')
+		pos++;
+
+	while (1) {
+		pos = parse_range(pos, devargs);
+		if (pos == NULL) {
+			PMD_INIT_LOG(ERR, "invalid value:\"%s\" for key:\"%s\", ",
+				     value, key);
+			return -EINVAL;
+		}
+		if (*pos != ',')
+			break;
+		pos++;
+	}
+
+	if (*value == '[' && *pos != ']') {
+		PMD_INIT_LOG(ERR, "invalid value:\"%s\" for key:\"%s\", ",
+			     value, key);
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static int
+parse_bool(const char *key, const char *value, void *args)
+{
+	int *i = args;
+	char *end;
+	int num;
+
+	errno = 0;
+
+	num = strtoul(value, &end, 10);
+
+	if (errno == ERANGE || *end != '\0' || (num != 0 && num != 1)) {
+		PMD_INIT_LOG(ERR, "invalid value:\"%s\" for key:\"%s\", value must be 0 or 1",
+			value, key);
+		return -EINVAL;
+	}
+
+	*i = num;
+	return 0;
+}
+
+static int
+cpfl_parse_devargs(struct rte_pci_device *pci_dev, struct cpfl_adapter_ext *adapter,
+		   struct cpfl_devargs *cpfl_args)
+{
+	struct rte_devargs *devargs = pci_dev->device.devargs;
+	struct rte_kvargs *kvlist;
+	int i, ret;
+
+	cpfl_args->req_vport_nb = 0;
+
+	if (devargs == NULL)
+		return 0;
+
+	kvlist = rte_kvargs_parse(devargs->args, cpfl_valid_args);
+	if (kvlist == NULL) {
+		PMD_INIT_LOG(ERR, "invalid kvargs key");
+		return -EINVAL;
+	}
+
+	ret = rte_kvargs_process(kvlist, CPFL_VPORT, &parse_vport,
+				 cpfl_args);
+	if (ret != 0)
+		goto bail;
+
+	/* check parsed devargs */
+	if (adapter->cur_vport_nb + cpfl_args->req_vport_nb >
+	    CPFL_MAX_VPORT_NUM) {
+		PMD_INIT_LOG(ERR, "Total vport number can't be > %d",
+			     CPFL_MAX_VPORT_NUM);
+		ret = -EINVAL;
+		goto bail;
+	}
+
+	for (i = 0; i < cpfl_args->req_vport_nb; i++) {
+		if (adapter->cur_vports & RTE_BIT32(cpfl_args->req_vports[i])) {
+			PMD_INIT_LOG(ERR, "Vport %d has been created",
+				     cpfl_args->req_vports[i]);
+			ret = -EINVAL;
+			goto bail;
+		}
+	}
+
+	ret = rte_kvargs_process(kvlist, CPFL_TX_SINGLE_Q, &parse_bool,
+				 &adapter->base.txq_model);
+	if (ret != 0)
+		goto bail;
+
+	ret = rte_kvargs_process(kvlist, CPFL_RX_SINGLE_Q, &parse_bool,
+				 &adapter->base.rxq_model);
+	if (ret != 0)
+		goto bail;
+
+bail:
+	rte_kvargs_free(kvlist);
+	return ret;
+}
+
+static struct idpf_vport *
+cpfl_find_vport(struct cpfl_adapter_ext *adapter, uint32_t vport_id)
+{
+	struct idpf_vport *vport = NULL;
+	int i;
+
+	for (i = 0; i < adapter->cur_vport_nb; i++) {
+		vport = adapter->vports[i];
+		if (vport->vport_id == vport_id)
+			return vport;
+	}
+
+	/* no vport matched the requested id */
+	return NULL;
+}
+
+static void
+cpfl_handle_event_msg(struct idpf_vport *vport, uint8_t *msg, uint16_t msglen)
+{
+	struct virtchnl2_event *vc_event = (struct virtchnl2_event *)msg;
+	struct rte_eth_dev *dev = (struct rte_eth_dev *)vport->dev;
+
+	if (msglen < sizeof(struct virtchnl2_event)) {
+		PMD_DRV_LOG(ERR, "Error event");
+		return;
+	}
+
+	switch (vc_event->event) {
+	case VIRTCHNL2_EVENT_LINK_CHANGE:
+		PMD_DRV_LOG(DEBUG, "VIRTCHNL2_EVENT_LINK_CHANGE");
+		vport->link_up = !!(vc_event->link_status);
+		vport->link_speed = vc_event->link_speed;
+		cpfl_dev_link_update(dev, 0);
+		break;
+	default:
+		PMD_DRV_LOG(ERR, " unknown event received %u", vc_event->event);
+		break;
+	}
+}
+
+static void
+cpfl_handle_virtchnl_msg(struct cpfl_adapter_ext *adapter_ex)
+{
+	struct idpf_adapter *adapter = &adapter_ex->base;
+	struct idpf_dma_mem *dma_mem = NULL;
+	struct idpf_hw *hw = &adapter->hw;
+	struct virtchnl2_event *vc_event;
+	struct idpf_ctlq_msg ctlq_msg;
+	enum idpf_mbx_opc mbx_op;
+	struct idpf_vport *vport;
+	enum virtchnl_ops vc_op;
+	uint16_t pending = 1;
+	int ret;
+
+	while (pending) {
+		ret = idpf_vc_ctlq_recv(hw->arq, &pending, &ctlq_msg);
+		if (ret) {
+			PMD_DRV_LOG(INFO, "Failed to read msg from virtual channel, ret: %d", ret);
+			return;
+		}
+
+		memcpy(adapter->mbx_resp, ctlq_msg.ctx.indirect.payload->va,
+			   IDPF_DFLT_MBX_BUF_SIZE);
+
+		mbx_op = rte_le_to_cpu_16(ctlq_msg.opcode);
+		vc_op = rte_le_to_cpu_32(ctlq_msg.cookie.mbx.chnl_opcode);
+		adapter->cmd_retval = rte_le_to_cpu_32(ctlq_msg.cookie.mbx.chnl_retval);
+
+		switch (mbx_op) {
+		case idpf_mbq_opc_send_msg_to_peer_pf:
+			if (vc_op == VIRTCHNL2_OP_EVENT) {
+				if (ctlq_msg.data_len < sizeof(struct virtchnl2_event)) {
+					PMD_DRV_LOG(ERR, "Error event");
+					return;
+				}
+				vc_event = (struct virtchnl2_event *)adapter->mbx_resp;
+				vport = cpfl_find_vport(adapter_ex, vc_event->vport_id);
+				if (!vport) {
+					PMD_DRV_LOG(ERR, "Can't find vport.");
+					return;
+				}
+				cpfl_handle_event_msg(vport, adapter->mbx_resp,
+						      ctlq_msg.data_len);
+			} else {
+				if (vc_op == adapter->pend_cmd)
+					notify_cmd(adapter, adapter->cmd_retval);
+				else
+					PMD_DRV_LOG(ERR, "command mismatch, expect %u, get %u",
+						    adapter->pend_cmd, vc_op);
+
+				PMD_DRV_LOG(DEBUG, " Virtual channel response is received,"
+					    "opcode = %d", vc_op);
+			}
+			goto post_buf;
+		default:
+			PMD_DRV_LOG(DEBUG, "Request %u is not supported yet", mbx_op);
+		}
+	}
+
+post_buf:
+	if (ctlq_msg.data_len)
+		dma_mem = ctlq_msg.ctx.indirect.payload;
+	else
+		pending = 0;
+
+	ret = idpf_vc_ctlq_post_rx_buffs(hw, hw->arq, &pending, &dma_mem);
+	if (ret && dma_mem)
+		idpf_free_dma_mem(hw, dma_mem);
+}
+
+static void
+cpfl_dev_alarm_handler(void *param)
+{
+	struct cpfl_adapter_ext *adapter = param;
+
+	cpfl_handle_virtchnl_msg(adapter);
+
+	rte_eal_alarm_set(CPFL_ALARM_INTERVAL, cpfl_dev_alarm_handler, adapter);
+}
+
+static int
+cpfl_adapter_ext_init(struct rte_pci_device *pci_dev, struct cpfl_adapter_ext *adapter)
+{
+	struct idpf_adapter *base = &adapter->base;
+	struct idpf_hw *hw = &base->hw;
+	int ret = 0;
+
+	hw->hw_addr = (void *)pci_dev->mem_resource[0].addr;
+	hw->hw_addr_len = pci_dev->mem_resource[0].len;
+	hw->back = base;
+	hw->vendor_id = pci_dev->id.vendor_id;
+	hw->device_id = pci_dev->id.device_id;
+	hw->subsystem_vendor_id = pci_dev->id.subsystem_vendor_id;
+
+	strncpy(adapter->name, pci_dev->device.name, PCI_PRI_STR_SIZE);
+
+	ret = idpf_adapter_init(base);
+	if (ret != 0) {
+		PMD_INIT_LOG(ERR, "Failed to init adapter");
+		goto err_adapter_init;
+	}
+
+	rte_eal_alarm_set(CPFL_ALARM_INTERVAL, cpfl_dev_alarm_handler, adapter);
+
+	adapter->max_vport_nb = adapter->base.caps.max_vports;
+
+	adapter->vports = rte_zmalloc("vports",
+				      adapter->max_vport_nb *
+				      sizeof(*adapter->vports),
+				      0);
+	if (adapter->vports == NULL) {
+		PMD_INIT_LOG(ERR, "Failed to allocate vports memory");
+		ret = -ENOMEM;
+		goto err_vports_alloc;
+	}
+
+	adapter->cur_vports = 0;
+	adapter->cur_vport_nb = 0;
+
+	adapter->used_vecs_num = 0;
+
+	return ret;
+
+err_vports_alloc:
+	idpf_adapter_deinit(base);
+err_adapter_init:
+	return ret;
+}
+
+static const struct eth_dev_ops cpfl_eth_dev_ops = {
+	.dev_configure			= cpfl_dev_configure,
+	.dev_close			= cpfl_dev_close,
+	.dev_infos_get			= cpfl_dev_info_get,
+	.link_update			= cpfl_dev_link_update,
+	.dev_supported_ptypes_get	= cpfl_dev_supported_ptypes_get,
+};
+
+static uint16_t
+cpfl_vport_idx_alloc(struct cpfl_adapter_ext *ad)
+{
+	uint16_t vport_idx;
+	uint16_t i;
+
+	for (i = 0; i < ad->max_vport_nb; i++) {
+		if (ad->vports[i] == NULL)
+			break;
+	}
+
+	if (i == ad->max_vport_nb)
+		vport_idx = CPFL_INVALID_VPORT_IDX;
+	else
+		vport_idx = i;
+
+	return vport_idx;
+}
+
+static int
+cpfl_dev_vport_init(struct rte_eth_dev *dev, void *init_params)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct cpfl_vport_param *param = init_params;
+	struct cpfl_adapter_ext *adapter = param->adapter;
+	/* prepare the create vport virtchnl msg to be sent */
+	struct virtchnl2_create_vport create_vport_info;
+	int ret = 0;
+
+	dev->dev_ops = &cpfl_eth_dev_ops;
+	vport->adapter = &adapter->base;
+	vport->sw_idx = param->idx;
+	vport->devarg_id = param->devarg_id;
+	vport->dev = dev;
+
+	memset(&create_vport_info, 0, sizeof(create_vport_info));
+	ret = idpf_vport_info_init(vport, &create_vport_info);
+	if (ret != 0) {
+		PMD_INIT_LOG(ERR, "Failed to init vport req_info.");
+		goto err;
+	}
+
+	ret = idpf_vport_init(vport, &create_vport_info, dev->data);
+	if (ret != 0) {
+		PMD_INIT_LOG(ERR, "Failed to init vports.");
+		goto err;
+	}
+
+	adapter->vports[param->idx] = vport;
+	adapter->cur_vports |= RTE_BIT32(param->devarg_id);
+	adapter->cur_vport_nb++;
+
+	dev->data->mac_addrs = rte_zmalloc(NULL, RTE_ETHER_ADDR_LEN, 0);
+	if (dev->data->mac_addrs == NULL) {
+		PMD_INIT_LOG(ERR, "Cannot allocate mac_addr memory.");
+		ret = -ENOMEM;
+		goto err_mac_addrs;
+	}
+
+	rte_ether_addr_copy((struct rte_ether_addr *)vport->default_mac_addr,
+			    &dev->data->mac_addrs[0]);
+
+	return 0;
+
+err_mac_addrs:
+	adapter->vports[param->idx] = NULL;  /* reset */
+	idpf_vport_deinit(vport);
+err:
+	return ret;
+}
+
+static const struct rte_pci_id pci_id_cpfl_map[] = {
+	{ RTE_PCI_DEVICE(IDPF_INTEL_VENDOR_ID, IDPF_DEV_ID_CPF) },
+	{ .vendor_id = 0, /* sentinel */ },
+};
+
+static struct cpfl_adapter_ext *
+cpfl_find_adapter_ext(struct rte_pci_device *pci_dev)
+{
+	struct cpfl_adapter_ext *adapter;
+	int found = 0;
+
+	if (pci_dev == NULL)
+		return NULL;
+
+	rte_spinlock_lock(&cpfl_adapter_lock);
+	TAILQ_FOREACH(adapter, &cpfl_adapter_list, next) {
+		if (strncmp(adapter->name, pci_dev->device.name, PCI_PRI_STR_SIZE) == 0) {
+			found = 1;
+			break;
+		}
+	}
+	rte_spinlock_unlock(&cpfl_adapter_lock);
+
+	if (found == 0)
+		return NULL;
+
+	return adapter;
+}
+
+static void
+cpfl_adapter_ext_deinit(struct cpfl_adapter_ext *adapter)
+{
+	rte_eal_alarm_cancel(cpfl_dev_alarm_handler, adapter);
+	idpf_adapter_deinit(&adapter->base);
+
+	rte_free(adapter->vports);
+	adapter->vports = NULL;
+}
+
+static int
+cpfl_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
+	       struct rte_pci_device *pci_dev)
+{
+	struct cpfl_vport_param vport_param;
+	struct cpfl_adapter_ext *adapter;
+	struct cpfl_devargs devargs;
+	char name[RTE_ETH_NAME_MAX_LEN];
+	int i, retval;
+	bool first_probe = false;
+
+	if (!cpfl_adapter_list_init) {
+		rte_spinlock_init(&cpfl_adapter_lock);
+		TAILQ_INIT(&cpfl_adapter_list);
+		cpfl_adapter_list_init = true;
+	}
+
+	adapter = cpfl_find_adapter_ext(pci_dev);
+	if (adapter == NULL) {
+		first_probe = true;
+		adapter = rte_zmalloc("cpfl_adapter_ext",
+				      sizeof(struct cpfl_adapter_ext), 0);
+		if (adapter == NULL) {
+			PMD_INIT_LOG(ERR, "Failed to allocate adapter.");
+			return -ENOMEM;
+		}
+
+		retval = cpfl_adapter_ext_init(pci_dev, adapter);
+		if (retval != 0) {
+			PMD_INIT_LOG(ERR, "Failed to init adapter.");
+			return retval;
+		}
+
+		rte_spinlock_lock(&cpfl_adapter_lock);
+		TAILQ_INSERT_TAIL(&cpfl_adapter_list, adapter, next);
+		rte_spinlock_unlock(&cpfl_adapter_lock);
+	}
+
+	retval = cpfl_parse_devargs(pci_dev, adapter, &devargs);
+	if (retval != 0) {
+		PMD_INIT_LOG(ERR, "Failed to parse private devargs");
+		goto err;
+	}
+
+	if (devargs.req_vport_nb == 0) {
+		/* If no vport devarg, create vport 0 by default. */
+		vport_param.adapter = adapter;
+		vport_param.devarg_id = 0;
+		vport_param.idx = cpfl_vport_idx_alloc(adapter);
+		if (vport_param.idx == CPFL_INVALID_VPORT_IDX) {
+			PMD_INIT_LOG(ERR, "No space for vport %u", vport_param.devarg_id);
+			return 0;
+		}
+		snprintf(name, sizeof(name), "cpfl_%s_vport_0",
+			 pci_dev->device.name);
+		retval = rte_eth_dev_create(&pci_dev->device, name,
+					    sizeof(struct idpf_vport),
+					    NULL, NULL, cpfl_dev_vport_init,
+					    &vport_param);
+		if (retval != 0)
+			PMD_DRV_LOG(ERR, "Failed to create default vport 0");
+	} else {
+		for (i = 0; i < devargs.req_vport_nb; i++) {
+			vport_param.adapter = adapter;
+			vport_param.devarg_id = devargs.req_vports[i];
+			vport_param.idx = cpfl_vport_idx_alloc(adapter);
+			if (vport_param.idx == CPFL_INVALID_VPORT_IDX) {
+				PMD_INIT_LOG(ERR, "No space for vport %u", vport_param.devarg_id);
+				break;
+			}
+			snprintf(name, sizeof(name), "cpfl_%s_vport_%d",
+				 pci_dev->device.name,
+				 devargs.req_vports[i]);
+			retval = rte_eth_dev_create(&pci_dev->device, name,
+						    sizeof(struct idpf_vport),
+						    NULL, NULL, cpfl_dev_vport_init,
+						    &vport_param);
+			if (retval != 0)
+				PMD_DRV_LOG(ERR, "Failed to create vport %d",
+					    vport_param.devarg_id);
+		}
+	}
+
+	return 0;
+
+err:
+	if (first_probe) {
+		rte_spinlock_lock(&cpfl_adapter_lock);
+		TAILQ_REMOVE(&cpfl_adapter_list, adapter, next);
+		rte_spinlock_unlock(&cpfl_adapter_lock);
+		cpfl_adapter_ext_deinit(adapter);
+		rte_free(adapter);
+	}
+	return retval;
+}
+
+static int
+cpfl_pci_remove(struct rte_pci_device *pci_dev)
+{
+	struct cpfl_adapter_ext *adapter = cpfl_find_adapter_ext(pci_dev);
+	uint16_t port_id;
+
+	/* Close all the ethdevs created on this rte_device; they can be
+	 * iterated with RTE_ETH_FOREACH_DEV_OF.
+	 */
+	RTE_ETH_FOREACH_DEV_OF(port_id, &pci_dev->device) {
+		rte_eth_dev_close(port_id);
+	}
+
+	rte_spinlock_lock(&cpfl_adapter_lock);
+	TAILQ_REMOVE(&cpfl_adapter_list, adapter, next);
+	rte_spinlock_unlock(&cpfl_adapter_lock);
+	cpfl_adapter_ext_deinit(adapter);
+	rte_free(adapter);
+
+	return 0;
+}
+
+static struct rte_pci_driver rte_cpfl_pmd = {
+	.id_table	= pci_id_cpfl_map,
+	.drv_flags	= RTE_PCI_DRV_NEED_MAPPING,
+	.probe		= cpfl_pci_probe,
+	.remove		= cpfl_pci_remove,
+};
+
+/**
+ * Driver initialization routine.
+ * Invoked once at EAL init time.
+ * Register itself as the [Poll Mode] Driver of PCI devices.
+ */
+RTE_PMD_REGISTER_PCI(net_cpfl, rte_cpfl_pmd);
+RTE_PMD_REGISTER_PCI_TABLE(net_cpfl, pci_id_cpfl_map);
+RTE_PMD_REGISTER_KMOD_DEP(net_cpfl, "* igb_uio | vfio-pci");
+RTE_PMD_REGISTER_PARAM_STRING(net_cpfl,
+			      CPFL_TX_SINGLE_Q "=<0|1> "
+			      CPFL_RX_SINGLE_Q "=<0|1> "
+			      CPFL_VPORT "=[vport_set0,[vport_set1],...]");
+
+RTE_LOG_REGISTER_SUFFIX(cpfl_logtype_init, init, NOTICE);
+RTE_LOG_REGISTER_SUFFIX(cpfl_logtype_driver, driver, NOTICE);
diff --git a/drivers/net/cpfl/cpfl_ethdev.h b/drivers/net/cpfl/cpfl_ethdev.h
new file mode 100644
index 0000000000..9ca39b4558
--- /dev/null
+++ b/drivers/net/cpfl/cpfl_ethdev.h
@@ -0,0 +1,78 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Intel Corporation
+ */
+
+#ifndef _CPFL_ETHDEV_H_
+#define _CPFL_ETHDEV_H_
+
+#include <stdint.h>
+#include <rte_malloc.h>
+#include <rte_spinlock.h>
+#include <rte_ethdev.h>
+#include <rte_kvargs.h>
+#include <ethdev_driver.h>
+#include <ethdev_pci.h>
+
+#include "cpfl_logs.h"
+
+#include <idpf_common_device.h>
+#include <idpf_common_virtchnl.h>
+#include <base/idpf_prototype.h>
+#include <base/virtchnl2.h>
+
+#define CPFL_MAX_VPORT_NUM	8
+
+#define CPFL_INVALID_VPORT_IDX	0xffff
+
+#define CPFL_MIN_BUF_SIZE	1024
+#define CPFL_MAX_FRAME_SIZE	9728
+#define CPFL_DEFAULT_MTU	RTE_ETHER_MTU
+
+#define CPFL_NUM_MACADDR_MAX	64
+
+#define CPFL_VLAN_TAG_SIZE	4
+#define CPFL_ETH_OVERHEAD \
+	(RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN + CPFL_VLAN_TAG_SIZE * 2)
+
+#define CPFL_ADAPTER_NAME_LEN	(PCI_PRI_STR_SIZE + 1)
+
+#define CPFL_ALARM_INTERVAL	50000 /* us */
+
+/* Device IDs */
+#define IDPF_DEV_ID_CPF			0x1453
+
+struct cpfl_vport_param {
+	struct cpfl_adapter_ext *adapter;
+	uint16_t devarg_id; /* arg id from user */
+	uint16_t idx;       /* index in adapter->vports[] */
+};
+
+/* Struct used when parsing driver-specific devargs */
+struct cpfl_devargs {
+	uint16_t req_vports[CPFL_MAX_VPORT_NUM];
+	uint16_t req_vport_nb;
+};
+
+struct cpfl_adapter_ext {
+	TAILQ_ENTRY(cpfl_adapter_ext) next;
+	struct idpf_adapter base;
+
+	char name[CPFL_ADAPTER_NAME_LEN];
+
+	struct idpf_vport **vports;
+	uint16_t max_vport_nb;
+
+	uint16_t cur_vports; /* bit mask of created vport */
+	uint16_t cur_vport_nb;
+
+	uint16_t used_vecs_num;
+};
+
+TAILQ_HEAD(cpfl_adapter_list, cpfl_adapter_ext);
+
+#define CPFL_DEV_TO_PCI(eth_dev)		\
+	RTE_DEV_TO_PCI((eth_dev)->device)
+#define CPFL_ADAPTER_TO_EXT(p)					\
+	container_of((p), struct cpfl_adapter_ext, base)
+
+#endif /* _CPFL_ETHDEV_H_ */
diff --git a/drivers/net/cpfl/cpfl_logs.h b/drivers/net/cpfl/cpfl_logs.h
new file mode 100644
index 0000000000..365b53e8b3
--- /dev/null
+++ b/drivers/net/cpfl/cpfl_logs.h
@@ -0,0 +1,32 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Intel Corporation
+ */
+
+#ifndef _CPFL_LOGS_H_
+#define _CPFL_LOGS_H_
+
+#include <rte_log.h>
+
+extern int cpfl_logtype_init;
+extern int cpfl_logtype_driver;
+
+#define PMD_INIT_LOG(level, ...) \
+	rte_log(RTE_LOG_ ## level, \
+		cpfl_logtype_init, \
+		RTE_FMT("%s(): " \
+			RTE_FMT_HEAD(__VA_ARGS__,) "\n", \
+			__func__, \
+			RTE_FMT_TAIL(__VA_ARGS__,)))
+
+#define PMD_DRV_LOG_RAW(level, ...) \
+	rte_log(RTE_LOG_ ## level, \
+		cpfl_logtype_driver, \
+		RTE_FMT("%s(): " \
+			RTE_FMT_HEAD(__VA_ARGS__,) "\n", \
+			__func__, \
+			RTE_FMT_TAIL(__VA_ARGS__,)))
+
+#define PMD_DRV_LOG(level, fmt, args...) \
+	PMD_DRV_LOG_RAW(level, fmt, ## args)
+
+#endif /* _CPFL_LOGS_H_ */
diff --git a/drivers/net/cpfl/cpfl_rxtx.c b/drivers/net/cpfl/cpfl_rxtx.c
new file mode 100644
index 0000000000..2b9c20928b
--- /dev/null
+++ b/drivers/net/cpfl/cpfl_rxtx.c
@@ -0,0 +1,244 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Intel Corporation
+ */
+
+#include <ethdev_driver.h>
+#include <rte_net.h>
+#include <rte_vect.h>
+
+#include "cpfl_ethdev.h"
+#include "cpfl_rxtx.h"
+
+static uint64_t
+cpfl_tx_offload_convert(uint64_t offload)
+{
+	uint64_t ol = 0;
+
+	if ((offload & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM) != 0)
+		ol |= IDPF_TX_OFFLOAD_IPV4_CKSUM;
+	if ((offload & RTE_ETH_TX_OFFLOAD_UDP_CKSUM) != 0)
+		ol |= IDPF_TX_OFFLOAD_UDP_CKSUM;
+	if ((offload & RTE_ETH_TX_OFFLOAD_TCP_CKSUM) != 0)
+		ol |= IDPF_TX_OFFLOAD_TCP_CKSUM;
+	if ((offload & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM) != 0)
+		ol |= IDPF_TX_OFFLOAD_SCTP_CKSUM;
+	if ((offload & RTE_ETH_TX_OFFLOAD_MULTI_SEGS) != 0)
+		ol |= IDPF_TX_OFFLOAD_MULTI_SEGS;
+	if ((offload & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) != 0)
+		ol |= IDPF_TX_OFFLOAD_MBUF_FAST_FREE;
+
+	return ol;
+}
+
+static const struct rte_memzone *
+cpfl_dma_zone_reserve(struct rte_eth_dev *dev, uint16_t queue_idx,
+		      uint16_t len, uint16_t queue_type,
+		      unsigned int socket_id, bool splitq)
+{
+	char ring_name[RTE_MEMZONE_NAMESIZE];
+	const struct rte_memzone *mz;
+	uint32_t ring_size;
+
+	memset(ring_name, 0, RTE_MEMZONE_NAMESIZE);
+	switch (queue_type) {
+	case VIRTCHNL2_QUEUE_TYPE_TX:
+		if (splitq)
+			ring_size = RTE_ALIGN(len * sizeof(struct idpf_flex_tx_sched_desc),
+					      CPFL_DMA_MEM_ALIGN);
+		else
+			ring_size = RTE_ALIGN(len * sizeof(struct idpf_flex_tx_desc),
+					      CPFL_DMA_MEM_ALIGN);
+		memcpy(ring_name, "cpfl Tx ring", sizeof("cpfl Tx ring"));
+		break;
+	case VIRTCHNL2_QUEUE_TYPE_RX:
+		if (splitq)
+			ring_size = RTE_ALIGN(len * sizeof(struct virtchnl2_rx_flex_desc_adv_nic_3),
+					      CPFL_DMA_MEM_ALIGN);
+		else
+			ring_size = RTE_ALIGN(len * sizeof(struct virtchnl2_singleq_rx_buf_desc),
+					      CPFL_DMA_MEM_ALIGN);
+		memcpy(ring_name, "cpfl Rx ring", sizeof("cpfl Rx ring"));
+		break;
+	case VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION:
+		ring_size = RTE_ALIGN(len * sizeof(struct idpf_splitq_tx_compl_desc),
+				      CPFL_DMA_MEM_ALIGN);
+		memcpy(ring_name, "cpfl Tx compl ring", sizeof("cpfl Tx compl ring"));
+		break;
+	case VIRTCHNL2_QUEUE_TYPE_RX_BUFFER:
+		ring_size = RTE_ALIGN(len * sizeof(struct virtchnl2_splitq_rx_buf_desc),
+				      CPFL_DMA_MEM_ALIGN);
+		memcpy(ring_name, "cpfl Rx buf ring", sizeof("cpfl Rx buf ring"));
+		break;
+	default:
+		PMD_INIT_LOG(ERR, "Invalid queue type");
+		return NULL;
+	}
+
+	mz = rte_eth_dma_zone_reserve(dev, ring_name, queue_idx,
+				      ring_size, CPFL_RING_BASE_ALIGN,
+				      socket_id);
+	if (mz == NULL) {
+		PMD_INIT_LOG(ERR, "Failed to reserve DMA memory for ring");
+		return NULL;
+	}
+
+	/* Zero all the descriptors in the ring. */
+	memset(mz->addr, 0, ring_size);
+
+	return mz;
+}
+
+static void
+cpfl_dma_zone_release(const struct rte_memzone *mz)
+{
+	rte_memzone_free(mz);
+}
+
+static int
+cpfl_tx_complq_setup(struct rte_eth_dev *dev, struct idpf_tx_queue *txq,
+		     uint16_t queue_idx, uint16_t nb_desc,
+		     unsigned int socket_id)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	const struct rte_memzone *mz;
+	struct idpf_tx_queue *cq;
+	int ret;
+
+	cq = rte_zmalloc_socket("cpfl splitq cq",
+				sizeof(struct idpf_tx_queue),
+				RTE_CACHE_LINE_SIZE,
+				socket_id);
+	if (cq == NULL) {
+		PMD_INIT_LOG(ERR, "Failed to allocate memory for Tx compl queue");
+		ret = -ENOMEM;
+		goto err_cq_alloc;
+	}
+
+	cq->nb_tx_desc = nb_desc;
+	cq->queue_id = vport->chunks_info.tx_compl_start_qid + queue_idx;
+	cq->port_id = dev->data->port_id;
+	cq->txqs = dev->data->tx_queues;
+	cq->tx_start_qid = vport->chunks_info.tx_start_qid;
+
+	mz = cpfl_dma_zone_reserve(dev, queue_idx, nb_desc,
+				   VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION,
+				   socket_id, true);
+	if (mz == NULL) {
+		ret = -ENOMEM;
+		goto err_mz_reserve;
+	}
+	cq->tx_ring_phys_addr = mz->iova;
+	cq->compl_ring = mz->addr;
+	cq->mz = mz;
+	reset_split_tx_complq(cq);
+
+	txq->complq = cq;
+
+	return 0;
+
+err_mz_reserve:
+	rte_free(cq);
+err_cq_alloc:
+	return ret;
+}
+
+int
+cpfl_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
+		    uint16_t nb_desc, unsigned int socket_id,
+		    const struct rte_eth_txconf *tx_conf)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_adapter *adapter = vport->adapter;
+	uint16_t tx_rs_thresh, tx_free_thresh;
+	struct idpf_hw *hw = &adapter->hw;
+	const struct rte_memzone *mz;
+	struct idpf_tx_queue *txq;
+	uint64_t offloads;
+	uint16_t len;
+	bool is_splitq;
+	int ret;
+
+	offloads = tx_conf->offloads | dev->data->dev_conf.txmode.offloads;
+
+	tx_rs_thresh = (uint16_t)((tx_conf->tx_rs_thresh > 0) ?
+		tx_conf->tx_rs_thresh : CPFL_DEFAULT_TX_RS_THRESH);
+	tx_free_thresh = (uint16_t)((tx_conf->tx_free_thresh > 0) ?
+		tx_conf->tx_free_thresh : CPFL_DEFAULT_TX_FREE_THRESH);
+	if (check_tx_thresh(nb_desc, tx_rs_thresh, tx_free_thresh) != 0)
+		return -EINVAL;
+
+	/* Allocate the TX queue data structure. */
+	txq = rte_zmalloc_socket("cpfl txq",
+				 sizeof(struct idpf_tx_queue),
+				 RTE_CACHE_LINE_SIZE,
+				 socket_id);
+	if (txq == NULL) {
+		PMD_INIT_LOG(ERR, "Failed to allocate memory for tx queue structure");
+		ret = -ENOMEM;
+		goto err_txq_alloc;
+	}
+
+	is_splitq = !!(vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT);
+
+	txq->nb_tx_desc = nb_desc;
+	txq->rs_thresh = tx_rs_thresh;
+	txq->free_thresh = tx_free_thresh;
+	txq->queue_id = vport->chunks_info.tx_start_qid + queue_idx;
+	txq->port_id = dev->data->port_id;
+	txq->offloads = cpfl_tx_offload_convert(offloads);
+	txq->tx_deferred_start = tx_conf->tx_deferred_start;
+
+	if (is_splitq)
+		len = 2 * nb_desc;
+	else
+		len = nb_desc;
+	txq->sw_nb_desc = len;
+
+	/* Allocate TX hardware ring descriptors. */
+	mz = cpfl_dma_zone_reserve(dev, queue_idx, nb_desc, VIRTCHNL2_QUEUE_TYPE_TX,
+				   socket_id, is_splitq);
+	if (mz == NULL) {
+		ret = -ENOMEM;
+		goto err_mz_reserve;
+	}
+	txq->tx_ring_phys_addr = mz->iova;
+	txq->mz = mz;
+
+	txq->sw_ring = rte_zmalloc_socket("cpfl tx sw ring",
+					  sizeof(struct idpf_tx_entry) * len,
+					  RTE_CACHE_LINE_SIZE, socket_id);
+	if (txq->sw_ring == NULL) {
+		PMD_INIT_LOG(ERR, "Failed to allocate memory for SW TX ring");
+		ret = -ENOMEM;
+		goto err_sw_ring_alloc;
+	}
+
+	if (!is_splitq) {
+		txq->tx_ring = mz->addr;
+		reset_single_tx_queue(txq);
+	} else {
+		txq->desc_ring = mz->addr;
+		reset_split_tx_descq(txq);
+
+		/* Setup tx completion queue if split model */
+		ret = cpfl_tx_complq_setup(dev, txq, queue_idx,
+					   2 * nb_desc, socket_id);
+		if (ret != 0)
+			goto err_complq_setup;
+	}
+
+	txq->qtx_tail = hw->hw_addr + (vport->chunks_info.tx_qtail_start +
+			queue_idx * vport->chunks_info.tx_qtail_spacing);
+	txq->q_set = true;
+	dev->data->tx_queues[queue_idx] = txq;
+
+	return 0;
+
+err_complq_setup:
+err_sw_ring_alloc:
+	cpfl_dma_zone_release(mz);
+err_mz_reserve:
+	rte_free(txq);
+err_txq_alloc:
+	return ret;
+}
diff --git a/drivers/net/cpfl/cpfl_rxtx.h b/drivers/net/cpfl/cpfl_rxtx.h
new file mode 100644
index 0000000000..232630c5e9
--- /dev/null
+++ b/drivers/net/cpfl/cpfl_rxtx.h
@@ -0,0 +1,25 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Intel Corporation
+ */
+
+#ifndef _CPFL_RXTX_H_
+#define _CPFL_RXTX_H_
+
+#include <idpf_common_rxtx.h>
+#include "cpfl_ethdev.h"
+
+/* Queue length (QLEN) must be a whole multiple of 32 descriptors. */
+#define CPFL_ALIGN_RING_DESC	32
+#define CPFL_MIN_RING_DESC	32
+#define CPFL_MAX_RING_DESC	4096
+#define CPFL_DMA_MEM_ALIGN	4096
+/* Base address of the HW descriptor ring should be 128B aligned. */
+#define CPFL_RING_BASE_ALIGN	128
+
+#define CPFL_DEFAULT_TX_RS_THRESH	32
+#define CPFL_DEFAULT_TX_FREE_THRESH	32
+
+int cpfl_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
+			uint16_t nb_desc, unsigned int socket_id,
+			const struct rte_eth_txconf *tx_conf);
+#endif /* _CPFL_RXTX_H_ */
diff --git a/drivers/net/cpfl/meson.build b/drivers/net/cpfl/meson.build
new file mode 100644
index 0000000000..c721732b50
--- /dev/null
+++ b/drivers/net/cpfl/meson.build
@@ -0,0 +1,14 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2023 Intel Corporation
+
+if is_windows
+    build = false
+    reason = 'not supported on Windows'
+    subdir_done()
+endif
+
+deps += ['common_idpf']
+
+sources = files(
+        'cpfl_ethdev.c',
+)
\ No newline at end of file
diff --git a/drivers/net/meson.build b/drivers/net/meson.build
index 6470bf3636..a8ca338875 100644
--- a/drivers/net/meson.build
+++ b/drivers/net/meson.build
@@ -13,6 +13,7 @@ drivers = [
         'bnxt',
         'bonding',
         'cnxk',
+        'cpfl',
         'cxgbe',
         'dpaa',
         'dpaa2',
-- 
2.25.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v6 02/21] net/cpfl: add Tx queue setup
  2023-02-13  2:19       ` [PATCH v6 " Mingxia Liu
  2023-02-13  2:19         ` [PATCH v6 01/21] net/cpfl: support device initialization Mingxia Liu
@ 2023-02-13  2:19         ` Mingxia Liu
  2023-02-13  2:19         ` [PATCH v6 03/21] net/cpfl: add Rx " Mingxia Liu
                           ` (20 subsequent siblings)
  22 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-02-13  2:19 UTC (permalink / raw)
  To: dev, beilei.xing, yuying.zhang; +Cc: Mingxia Liu

Add support for tx_queue_setup ops.

In the single queue model, the same descriptor queue is used by SW to
post buffer descriptors to HW and by HW to post completed descriptors
to SW.

In the split queue model, "RX buffer queues" are used to pass
descriptor buffers from SW to HW while Rx queues are used only to
pass the descriptor completions, that is, descriptors that point
to completed buffers, from HW to SW. This is contrary to the single
queue model in which Rx queues are used for both purposes.

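For illustration only (not part of the patch), here is a minimal C sketch
of the difference between the two models; the struct and field names are
invented for this example and are not cpfl/idpf API:

  /* hypothetical sketch, names are illustrative */
  struct singleq_rx_sketch {
          void *ring;     /* SW posts buffer descriptors here, and HW
                           * writes completed descriptors back to the
                           * same ring */
  };

  struct splitq_rx_sketch {
          void *bufq[2];  /* SW -> HW: buffer descriptors only */
          void *complq;   /* HW -> SW: completed descriptors only */
  };
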
Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/net/cpfl/cpfl_ethdev.c | 13 +++++++++++++
 drivers/net/cpfl/cpfl_rxtx.c   |  8 ++++----
 drivers/net/cpfl/meson.build   |  1 +
 3 files changed, 18 insertions(+), 4 deletions(-)

diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index fe0061133c..5ca21c9772 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -12,6 +12,7 @@
 #include <rte_alarm.h>
 
 #include "cpfl_ethdev.h"
+#include "cpfl_rxtx.h"
 
 #define CPFL_TX_SINGLE_Q	"tx_single"
 #define CPFL_RX_SINGLE_Q	"rx_single"
@@ -96,6 +97,17 @@ cpfl_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->max_mtu = vport->max_mtu;
 	dev_info->min_mtu = RTE_ETHER_MIN_MTU;
 
+	dev_info->default_txconf = (struct rte_eth_txconf) {
+		.tx_free_thresh = CPFL_DEFAULT_TX_FREE_THRESH,
+		.tx_rs_thresh = CPFL_DEFAULT_TX_RS_THRESH,
+	};
+
+	dev_info->tx_desc_lim = (struct rte_eth_desc_lim) {
+		.nb_max = CPFL_MAX_RING_DESC,
+		.nb_min = CPFL_MIN_RING_DESC,
+		.nb_align = CPFL_ALIGN_RING_DESC,
+	};
+
 	return 0;
 }
 
@@ -513,6 +525,7 @@ cpfl_adapter_ext_init(struct rte_pci_device *pci_dev, struct cpfl_adapter_ext *a
 static const struct eth_dev_ops cpfl_eth_dev_ops = {
 	.dev_configure			= cpfl_dev_configure,
 	.dev_close			= cpfl_dev_close,
+	.tx_queue_setup			= cpfl_tx_queue_setup,
 	.dev_infos_get			= cpfl_dev_info_get,
 	.link_update			= cpfl_dev_link_update,
 	.dev_supported_ptypes_get	= cpfl_dev_supported_ptypes_get,
diff --git a/drivers/net/cpfl/cpfl_rxtx.c b/drivers/net/cpfl/cpfl_rxtx.c
index 2b9c20928b..5b69ac0009 100644
--- a/drivers/net/cpfl/cpfl_rxtx.c
+++ b/drivers/net/cpfl/cpfl_rxtx.c
@@ -130,7 +130,7 @@ cpfl_tx_complq_setup(struct rte_eth_dev *dev, struct idpf_tx_queue *txq,
 	cq->tx_ring_phys_addr = mz->iova;
 	cq->compl_ring = mz->addr;
 	cq->mz = mz;
-	reset_split_tx_complq(cq);
+	idpf_qc_split_tx_complq_reset(cq);
 
 	txq->complq = cq;
 
@@ -164,7 +164,7 @@ cpfl_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 		tx_conf->tx_rs_thresh : CPFL_DEFAULT_TX_RS_THRESH);
 	tx_free_thresh = (uint16_t)((tx_conf->tx_free_thresh > 0) ?
 		tx_conf->tx_free_thresh : CPFL_DEFAULT_TX_FREE_THRESH);
-	if (check_tx_thresh(nb_desc, tx_rs_thresh, tx_free_thresh) != 0)
+	if (idpf_qc_tx_thresh_check(nb_desc, tx_rs_thresh, tx_free_thresh) != 0)
 		return -EINVAL;
 
 	/* Allocate the TX queue data structure. */
@@ -215,10 +215,10 @@ cpfl_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 
 	if (!is_splitq) {
 		txq->tx_ring = mz->addr;
-		reset_single_tx_queue(txq);
+		idpf_qc_single_tx_queue_reset(txq);
 	} else {
 		txq->desc_ring = mz->addr;
-		reset_split_tx_descq(txq);
+		idpf_qc_split_tx_descq_reset(txq);
 
 		/* Setup tx completion queue if split model */
 		ret = cpfl_tx_complq_setup(dev, txq, queue_idx,
diff --git a/drivers/net/cpfl/meson.build b/drivers/net/cpfl/meson.build
index c721732b50..1894423689 100644
--- a/drivers/net/cpfl/meson.build
+++ b/drivers/net/cpfl/meson.build
@@ -11,4 +11,5 @@ deps += ['common_idpf']
 
 sources = files(
         'cpfl_ethdev.c',
+        'cpfl_rxtx.c',
 )
\ No newline at end of file
-- 
2.25.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v6 03/21] net/cpfl: add Rx queue setup
  2023-02-13  2:19       ` [PATCH v6 " Mingxia Liu
  2023-02-13  2:19         ` [PATCH v6 01/21] net/cpfl: support device initialization Mingxia Liu
  2023-02-13  2:19         ` [PATCH v6 02/21] net/cpfl: add Tx queue setup Mingxia Liu
@ 2023-02-13  2:19         ` Mingxia Liu
  2023-02-13  2:19         ` [PATCH v6 04/21] net/cpfl: support device start and stop Mingxia Liu
                           ` (19 subsequent siblings)
  22 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-02-13  2:19 UTC (permalink / raw)
  To: dev, beilei.xing, yuying.zhang; +Cc: Mingxia Liu

Add support for rx_queue_setup ops.

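For illustration only (application side, not part of the patch), a setup
call through the generic ethdev API dispatches to this new rx_queue_setup
op; the pool name, sizes and queue id below are made-up example values:

  #include <rte_ethdev.h>
  #include <rte_mbuf.h>
  #include <rte_lcore.h>

  static int setup_one_rxq(uint16_t port_id)
  {
          struct rte_mempool *mp;

          mp = rte_pktmbuf_pool_create("rx_pool", 4096, 256, 0,
                                       RTE_MBUF_DEFAULT_BUF_SIZE,
                                       rte_socket_id());
          if (mp == NULL)
                  return -1;
          /* ends up in the PMD's rx_queue_setup op (cpfl_rx_queue_setup) */
          return rte_eth_rx_queue_setup(port_id, 0, 1024, rte_socket_id(),
                                        NULL /* default rxconf */, mp);
  }
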
Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/net/cpfl/cpfl_ethdev.c |  11 ++
 drivers/net/cpfl/cpfl_rxtx.c   | 232 +++++++++++++++++++++++++++++++++
 drivers/net/cpfl/cpfl_rxtx.h   |   6 +
 3 files changed, 249 insertions(+)

diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index 5ca21c9772..3029f03d02 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -102,12 +102,22 @@ cpfl_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 		.tx_rs_thresh = CPFL_DEFAULT_TX_RS_THRESH,
 	};
 
+	dev_info->default_rxconf = (struct rte_eth_rxconf) {
+		.rx_free_thresh = CPFL_DEFAULT_RX_FREE_THRESH,
+	};
+
 	dev_info->tx_desc_lim = (struct rte_eth_desc_lim) {
 		.nb_max = CPFL_MAX_RING_DESC,
 		.nb_min = CPFL_MIN_RING_DESC,
 		.nb_align = CPFL_ALIGN_RING_DESC,
 	};
 
+	dev_info->rx_desc_lim = (struct rte_eth_desc_lim) {
+		.nb_max = CPFL_MAX_RING_DESC,
+		.nb_min = CPFL_MIN_RING_DESC,
+		.nb_align = CPFL_ALIGN_RING_DESC,
+	};
+
 	return 0;
 }
 
@@ -525,6 +535,7 @@ cpfl_adapter_ext_init(struct rte_pci_device *pci_dev, struct cpfl_adapter_ext *a
 static const struct eth_dev_ops cpfl_eth_dev_ops = {
 	.dev_configure			= cpfl_dev_configure,
 	.dev_close			= cpfl_dev_close,
+	.rx_queue_setup			= cpfl_rx_queue_setup,
 	.tx_queue_setup			= cpfl_tx_queue_setup,
 	.dev_infos_get			= cpfl_dev_info_get,
 	.link_update			= cpfl_dev_link_update,
diff --git a/drivers/net/cpfl/cpfl_rxtx.c b/drivers/net/cpfl/cpfl_rxtx.c
index 5b69ac0009..042b848ce2 100644
--- a/drivers/net/cpfl/cpfl_rxtx.c
+++ b/drivers/net/cpfl/cpfl_rxtx.c
@@ -9,6 +9,25 @@
 #include "cpfl_ethdev.h"
 #include "cpfl_rxtx.h"
 
+static uint64_t
+cpfl_rx_offload_convert(uint64_t offload)
+{
+	uint64_t ol = 0;
+
+	if ((offload & RTE_ETH_RX_OFFLOAD_IPV4_CKSUM) != 0)
+		ol |= IDPF_RX_OFFLOAD_IPV4_CKSUM;
+	if ((offload & RTE_ETH_RX_OFFLOAD_UDP_CKSUM) != 0)
+		ol |= IDPF_RX_OFFLOAD_UDP_CKSUM;
+	if ((offload & RTE_ETH_RX_OFFLOAD_TCP_CKSUM) != 0)
+		ol |= IDPF_RX_OFFLOAD_TCP_CKSUM;
+	if ((offload & RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM) != 0)
+		ol |= IDPF_RX_OFFLOAD_OUTER_IPV4_CKSUM;
+	if ((offload & RTE_ETH_RX_OFFLOAD_TIMESTAMP) != 0)
+		ol |= IDPF_RX_OFFLOAD_TIMESTAMP;
+
+	return ol;
+}
+
 static uint64_t
 cpfl_tx_offload_convert(uint64_t offload)
 {
@@ -94,6 +113,219 @@ cpfl_dma_zone_release(const struct rte_memzone *mz)
 	rte_memzone_free(mz);
 }
 
+static int
+cpfl_rx_split_bufq_setup(struct rte_eth_dev *dev, struct idpf_rx_queue *rxq,
+			 uint16_t queue_idx, uint16_t rx_free_thresh,
+			 uint16_t nb_desc, unsigned int socket_id,
+			 struct rte_mempool *mp, uint8_t bufq_id)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_adapter *adapter = vport->adapter;
+	struct idpf_hw *hw = &adapter->hw;
+	const struct rte_memzone *mz;
+	struct idpf_rx_queue *bufq;
+	uint16_t len;
+	int ret;
+
+	bufq = rte_zmalloc_socket("cpfl bufq",
+				   sizeof(struct idpf_rx_queue),
+				   RTE_CACHE_LINE_SIZE,
+				   socket_id);
+	if (bufq == NULL) {
+		PMD_INIT_LOG(ERR, "Failed to allocate memory for rx buffer queue.");
+		ret = -ENOMEM;
+		goto err_bufq1_alloc;
+	}
+
+	bufq->mp = mp;
+	bufq->nb_rx_desc = nb_desc;
+	bufq->rx_free_thresh = rx_free_thresh;
+	bufq->queue_id = vport->chunks_info.rx_buf_start_qid + queue_idx;
+	bufq->port_id = dev->data->port_id;
+	bufq->rx_hdr_len = 0;
+	bufq->adapter = adapter;
+
+	len = rte_pktmbuf_data_room_size(bufq->mp) - RTE_PKTMBUF_HEADROOM;
+	bufq->rx_buf_len = len;
+
+	/* Allocate a little more to support bulk allocation. */
+	len = nb_desc + IDPF_RX_MAX_BURST;
+
+	mz = cpfl_dma_zone_reserve(dev, queue_idx, len,
+				   VIRTCHNL2_QUEUE_TYPE_RX_BUFFER,
+				   socket_id, true);
+	if (mz == NULL) {
+		ret = -ENOMEM;
+		goto err_mz_reserve;
+	}
+
+	bufq->rx_ring_phys_addr = mz->iova;
+	bufq->rx_ring = mz->addr;
+	bufq->mz = mz;
+
+	bufq->sw_ring =
+		rte_zmalloc_socket("cpfl rx bufq sw ring",
+				   sizeof(struct rte_mbuf *) * len,
+				   RTE_CACHE_LINE_SIZE,
+				   socket_id);
+	if (bufq->sw_ring == NULL) {
+		PMD_INIT_LOG(ERR, "Failed to allocate memory for SW ring");
+		ret = -ENOMEM;
+		goto err_sw_ring_alloc;
+	}
+
+	idpf_qc_split_rx_bufq_reset(bufq);
+	bufq->qrx_tail = hw->hw_addr + (vport->chunks_info.rx_buf_qtail_start +
+			 queue_idx * vport->chunks_info.rx_buf_qtail_spacing);
+	bufq->q_set = true;
+
+	if (bufq_id == 1) {
+		rxq->bufq1 = bufq;
+	} else if (bufq_id == 2) {
+		rxq->bufq2 = bufq;
+	} else {
+		PMD_INIT_LOG(ERR, "Invalid buffer queue index.");
+		ret = -EINVAL;
+		goto err_bufq_id;
+	}
+
+	return 0;
+
+err_bufq_id:
+	rte_free(bufq->sw_ring);
+err_sw_ring_alloc:
+	cpfl_dma_zone_release(mz);
+err_mz_reserve:
+	rte_free(bufq);
+err_bufq1_alloc:
+	return ret;
+}
+
+static void
+cpfl_rx_split_bufq_release(struct idpf_rx_queue *bufq)
+{
+	rte_free(bufq->sw_ring);
+	cpfl_dma_zone_release(bufq->mz);
+	rte_free(bufq);
+}
+
+int
+cpfl_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
+		    uint16_t nb_desc, unsigned int socket_id,
+		    const struct rte_eth_rxconf *rx_conf,
+		    struct rte_mempool *mp)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_adapter *adapter = vport->adapter;
+	struct idpf_hw *hw = &adapter->hw;
+	const struct rte_memzone *mz;
+	struct idpf_rx_queue *rxq;
+	uint16_t rx_free_thresh;
+	uint64_t offloads;
+	bool is_splitq;
+	uint16_t len;
+	int ret;
+
+	offloads = rx_conf->offloads | dev->data->dev_conf.rxmode.offloads;
+
+	/* Check free threshold */
+	rx_free_thresh = (rx_conf->rx_free_thresh == 0) ?
+		CPFL_DEFAULT_RX_FREE_THRESH :
+		rx_conf->rx_free_thresh;
+	if (idpf_qc_rx_thresh_check(nb_desc, rx_free_thresh) != 0)
+		return -EINVAL;
+
+	/* Setup Rx queue */
+	rxq = rte_zmalloc_socket("cpfl rxq",
+				 sizeof(struct idpf_rx_queue),
+				 RTE_CACHE_LINE_SIZE,
+				 socket_id);
+	if (rxq == NULL) {
+		PMD_INIT_LOG(ERR, "Failed to allocate memory for rx queue data structure");
+		ret = -ENOMEM;
+		goto err_rxq_alloc;
+	}
+
+	is_splitq = !!(vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT);
+
+	rxq->mp = mp;
+	rxq->nb_rx_desc = nb_desc;
+	rxq->rx_free_thresh = rx_free_thresh;
+	rxq->queue_id = vport->chunks_info.rx_start_qid + queue_idx;
+	rxq->port_id = dev->data->port_id;
+	rxq->rx_deferred_start = rx_conf->rx_deferred_start;
+	rxq->rx_hdr_len = 0;
+	rxq->adapter = adapter;
+	rxq->offloads = cpfl_rx_offload_convert(offloads);
+
+	len = rte_pktmbuf_data_room_size(rxq->mp) - RTE_PKTMBUF_HEADROOM;
+	rxq->rx_buf_len = len;
+
+	/* Allocate a little more to support bulk allocation. */
+	len = nb_desc + IDPF_RX_MAX_BURST;
+	mz = cpfl_dma_zone_reserve(dev, queue_idx, len, VIRTCHNL2_QUEUE_TYPE_RX,
+				   socket_id, is_splitq);
+	if (mz == NULL) {
+		ret = -ENOMEM;
+		goto err_mz_reserve;
+	}
+	rxq->rx_ring_phys_addr = mz->iova;
+	rxq->rx_ring = mz->addr;
+	rxq->mz = mz;
+
+	if (!is_splitq) {
+		rxq->sw_ring = rte_zmalloc_socket("cpfl rxq sw ring",
+						  sizeof(struct rte_mbuf *) * len,
+						  RTE_CACHE_LINE_SIZE,
+						  socket_id);
+		if (rxq->sw_ring == NULL) {
+			PMD_INIT_LOG(ERR, "Failed to allocate memory for SW ring");
+			ret = -ENOMEM;
+			goto err_sw_ring_alloc;
+		}
+
+		idpf_qc_single_rx_queue_reset(rxq);
+		rxq->qrx_tail = hw->hw_addr + (vport->chunks_info.rx_qtail_start +
+				queue_idx * vport->chunks_info.rx_qtail_spacing);
+	} else {
+		idpf_qc_split_rx_descq_reset(rxq);
+
+		/* Setup Rx buffer queues */
+		ret = cpfl_rx_split_bufq_setup(dev, rxq, 2 * queue_idx,
+					       rx_free_thresh, nb_desc,
+					       socket_id, mp, 1);
+		if (ret != 0) {
+			PMD_INIT_LOG(ERR, "Failed to setup buffer queue 1");
+			ret = -EINVAL;
+			goto err_bufq1_setup;
+		}
+
+		ret = cpfl_rx_split_bufq_setup(dev, rxq, 2 * queue_idx + 1,
+					       rx_free_thresh, nb_desc,
+					       socket_id, mp, 2);
+		if (ret != 0) {
+			PMD_INIT_LOG(ERR, "Failed to setup buffer queue 2");
+			ret = -EINVAL;
+			goto err_bufq2_setup;
+		}
+	}
+
+	rxq->q_set = true;
+	dev->data->rx_queues[queue_idx] = rxq;
+
+	return 0;
+
+err_bufq2_setup:
+	cpfl_rx_split_bufq_release(rxq->bufq1);
+err_bufq1_setup:
+err_sw_ring_alloc:
+	cpfl_dma_zone_release(mz);
+err_mz_reserve:
+	rte_free(rxq);
+err_rxq_alloc:
+	return ret;
+}
+
 static int
 cpfl_tx_complq_setup(struct rte_eth_dev *dev, struct idpf_tx_queue *txq,
 		     uint16_t queue_idx, uint16_t nb_desc,
diff --git a/drivers/net/cpfl/cpfl_rxtx.h b/drivers/net/cpfl/cpfl_rxtx.h
index 232630c5e9..e0221abfa3 100644
--- a/drivers/net/cpfl/cpfl_rxtx.h
+++ b/drivers/net/cpfl/cpfl_rxtx.h
@@ -16,10 +16,16 @@
 /* Base address of the HW descriptor ring should be 128B aligned. */
 #define CPFL_RING_BASE_ALIGN	128
 
+#define CPFL_DEFAULT_RX_FREE_THRESH	32
+
 #define CPFL_DEFAULT_TX_RS_THRESH	32
 #define CPFL_DEFAULT_TX_FREE_THRESH	32
 
 int cpfl_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 			uint16_t nb_desc, unsigned int socket_id,
 			const struct rte_eth_txconf *tx_conf);
+int cpfl_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
+			uint16_t nb_desc, unsigned int socket_id,
+			const struct rte_eth_rxconf *rx_conf,
+			struct rte_mempool *mp);
 #endif /* _CPFL_RXTX_H_ */
-- 
2.25.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v6 04/21] net/cpfl: support device start and stop
  2023-02-13  2:19       ` [PATCH v6 " Mingxia Liu
                           ` (2 preceding siblings ...)
  2023-02-13  2:19         ` [PATCH v6 03/21] net/cpfl: add Rx " Mingxia Liu
@ 2023-02-13  2:19         ` Mingxia Liu
  2023-02-13  2:19         ` [PATCH v6 05/21] net/cpfl: support queue start Mingxia Liu
                           ` (18 subsequent siblings)
  22 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-02-13  2:19 UTC (permalink / raw)
  To: dev, beilei.xing, yuying.zhang; +Cc: Mingxia Liu

Add dev ops dev_start, dev_stop and link_update.

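For illustration only (application side, not part of the patch), the
usual port lifecycle that exercises these ops once the port and queues
are configured; error handling is trimmed:

  #include <rte_ethdev.h>

  static void run_port(uint16_t port_id)
  {
          struct rte_eth_link link;

          rte_eth_dev_start(port_id);              /* -> cpfl_dev_start */
          rte_eth_link_get_nowait(port_id, &link); /* -> link_update op */
          /* ... datapath runs here ... */
          rte_eth_dev_stop(port_id);               /* -> cpfl_dev_stop */
  }
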
Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/net/cpfl/cpfl_ethdev.c | 35 ++++++++++++++++++++++++++++++++++
 1 file changed, 35 insertions(+)

diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index 3029f03d02..d1dfcfff9b 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -184,12 +184,45 @@ cpfl_dev_configure(struct rte_eth_dev *dev)
 	return 0;
 }
 
+static int
+cpfl_dev_start(struct rte_eth_dev *dev)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	int ret;
+
+	ret = idpf_vc_vport_ena_dis(vport, true);
+	if (ret != 0) {
+		PMD_DRV_LOG(ERR, "Failed to enable vport");
+		return ret;
+	}
+
+	vport->stopped = 0;
+
+	return 0;
+}
+
+static int
+cpfl_dev_stop(struct rte_eth_dev *dev)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+
+	if (vport->stopped == 1)
+		return 0;
+
+	idpf_vc_vport_ena_dis(vport, false);
+
+	vport->stopped = 1;
+
+	return 0;
+}
+
 static int
 cpfl_dev_close(struct rte_eth_dev *dev)
 {
 	struct idpf_vport *vport = dev->data->dev_private;
 	struct cpfl_adapter_ext *adapter = CPFL_ADAPTER_TO_EXT(vport->adapter);
 
+	cpfl_dev_stop(dev);
 	idpf_vport_deinit(vport);
 
 	adapter->cur_vports &= ~RTE_BIT32(vport->devarg_id);
@@ -538,6 +571,8 @@ static const struct eth_dev_ops cpfl_eth_dev_ops = {
 	.rx_queue_setup			= cpfl_rx_queue_setup,
 	.tx_queue_setup			= cpfl_tx_queue_setup,
 	.dev_infos_get			= cpfl_dev_info_get,
+	.dev_start			= cpfl_dev_start,
+	.dev_stop			= cpfl_dev_stop,
 	.link_update			= cpfl_dev_link_update,
 	.dev_supported_ptypes_get	= cpfl_dev_supported_ptypes_get,
 };
-- 
2.25.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v6 05/21] net/cpfl: support queue start
  2023-02-13  2:19       ` [PATCH v6 " Mingxia Liu
                           ` (3 preceding siblings ...)
  2023-02-13  2:19         ` [PATCH v6 04/21] net/cpfl: support device start and stop Mingxia Liu
@ 2023-02-13  2:19         ` Mingxia Liu
  2023-02-13  2:19         ` [PATCH v6 06/21] net/cpfl: support queue stop Mingxia Liu
                           ` (17 subsequent siblings)
  22 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-02-13  2:19 UTC (permalink / raw)
  To: dev, beilei.xing, yuying.zhang; +Cc: Mingxia Liu

Add support for these device ops:
 - rx_queue_start
 - tx_queue_start

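For illustration only (application side, not part of the patch), a queue
marked deferred at setup time is skipped by dev_start and is brought up
explicitly through the new op; 'mp' is a pre-created mbuf pool and the
ids/sizes are example values:

  #include <rte_ethdev.h>
  #include <rte_lcore.h>

  static int start_deferred_rxq(uint16_t port_id, struct rte_mempool *mp)
  {
          struct rte_eth_rxconf rxconf = { .rx_deferred_start = 1 };

          rte_eth_rx_queue_setup(port_id, 0, 1024, rte_socket_id(),
                                 &rxconf, mp);
          rte_eth_dev_start(port_id); /* deferred queue stays stopped */
          /* dispatches to the new rx_queue_start op (cpfl_rx_queue_start) */
          return rte_eth_dev_rx_queue_start(port_id, 0);
  }
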
Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/net/cpfl/cpfl_ethdev.c |  41 ++++++++++
 drivers/net/cpfl/cpfl_rxtx.c   | 138 +++++++++++++++++++++++++++++++++
 drivers/net/cpfl/cpfl_rxtx.h   |   4 +
 3 files changed, 183 insertions(+)

diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index d1dfcfff9b..c4565e687b 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -184,12 +184,51 @@ cpfl_dev_configure(struct rte_eth_dev *dev)
 	return 0;
 }
 
+static int
+cpfl_start_queues(struct rte_eth_dev *dev)
+{
+	struct idpf_rx_queue *rxq;
+	struct idpf_tx_queue *txq;
+	int err = 0;
+	int i;
+
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		txq = dev->data->tx_queues[i];
+		if (txq == NULL || txq->tx_deferred_start)
+			continue;
+		err = cpfl_tx_queue_start(dev, i);
+		if (err != 0) {
+			PMD_DRV_LOG(ERR, "Fail to start Tx queue %u", i);
+			return err;
+		}
+	}
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		rxq = dev->data->rx_queues[i];
+		if (rxq == NULL || rxq->rx_deferred_start)
+			continue;
+		err = cpfl_rx_queue_start(dev, i);
+		if (err != 0) {
+			PMD_DRV_LOG(ERR, "Fail to start Rx queue %u", i);
+			return err;
+		}
+	}
+
+	return err;
+}
+
 static int
 cpfl_dev_start(struct rte_eth_dev *dev)
 {
 	struct idpf_vport *vport = dev->data->dev_private;
 	int ret;
 
+	ret = cpfl_start_queues(dev);
+	if (ret != 0) {
+		PMD_DRV_LOG(ERR, "Failed to start queues");
+		return ret;
+	}
+
 	ret = idpf_vc_vport_ena_dis(vport, true);
 	if (ret != 0) {
 		PMD_DRV_LOG(ERR, "Failed to enable vport");
@@ -574,6 +613,8 @@ static const struct eth_dev_ops cpfl_eth_dev_ops = {
 	.dev_start			= cpfl_dev_start,
 	.dev_stop			= cpfl_dev_stop,
 	.link_update			= cpfl_dev_link_update,
+	.rx_queue_start			= cpfl_rx_queue_start,
+	.tx_queue_start			= cpfl_tx_queue_start,
 	.dev_supported_ptypes_get	= cpfl_dev_supported_ptypes_get,
 };
 
diff --git a/drivers/net/cpfl/cpfl_rxtx.c b/drivers/net/cpfl/cpfl_rxtx.c
index 042b848ce2..e306a52b31 100644
--- a/drivers/net/cpfl/cpfl_rxtx.c
+++ b/drivers/net/cpfl/cpfl_rxtx.c
@@ -474,3 +474,141 @@ cpfl_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 err_txq_alloc:
 	return ret;
 }
+
+int
+cpfl_rx_queue_init(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+{
+	struct idpf_rx_queue *rxq;
+	int err;
+
+	if (rx_queue_id >= dev->data->nb_rx_queues)
+		return -EINVAL;
+
+	rxq = dev->data->rx_queues[rx_queue_id];
+
+	if (rxq == NULL || !rxq->q_set) {
+		PMD_DRV_LOG(ERR, "RX queue %u not available or setup",
+					rx_queue_id);
+		return -EINVAL;
+	}
+
+	if (rxq->bufq1 == NULL) {
+		/* Single queue */
+		err = idpf_qc_single_rxq_mbufs_alloc(rxq);
+		if (err != 0) {
+			PMD_DRV_LOG(ERR, "Failed to allocate RX queue mbuf");
+			return err;
+		}
+
+		rte_wmb();
+
+		/* Init the RX tail register. */
+		IDPF_PCI_REG_WRITE(rxq->qrx_tail, rxq->nb_rx_desc - 1);
+	} else {
+		/* Split queue */
+		err = idpf_qc_split_rxq_mbufs_alloc(rxq->bufq1);
+		if (err != 0) {
+			PMD_DRV_LOG(ERR, "Failed to allocate RX buffer queue mbuf");
+			return err;
+		}
+		err = idpf_qc_split_rxq_mbufs_alloc(rxq->bufq2);
+		if (err != 0) {
+			PMD_DRV_LOG(ERR, "Failed to allocate RX buffer queue mbuf");
+			return err;
+		}
+
+		rte_wmb();
+
+		/* Init the RX tail register. */
+		IDPF_PCI_REG_WRITE(rxq->bufq1->qrx_tail, rxq->bufq1->rx_tail);
+		IDPF_PCI_REG_WRITE(rxq->bufq2->qrx_tail, rxq->bufq2->rx_tail);
+	}
+
+	return err;
+}
+
+int
+cpfl_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_rx_queue *rxq =
+		dev->data->rx_queues[rx_queue_id];
+	int err = 0;
+
+	err = idpf_vc_rxq_config(vport, rxq);
+	if (err != 0) {
+		PMD_DRV_LOG(ERR, "Fail to configure Rx queue %u", rx_queue_id);
+		return err;
+	}
+
+	err = cpfl_rx_queue_init(dev, rx_queue_id);
+	if (err != 0) {
+		PMD_DRV_LOG(ERR, "Failed to init RX queue %u",
+			    rx_queue_id);
+		return err;
+	}
+
+	/* Ready to switch the queue on */
+	err = idpf_vc_queue_switch(vport, rx_queue_id, true, true);
+	if (err != 0) {
+		PMD_DRV_LOG(ERR, "Failed to switch RX queue %u on",
+			    rx_queue_id);
+	} else {
+		rxq->q_started = true;
+		dev->data->rx_queue_state[rx_queue_id] =
+			RTE_ETH_QUEUE_STATE_STARTED;
+	}
+
+	return err;
+}
+
+int
+cpfl_tx_queue_init(struct rte_eth_dev *dev, uint16_t tx_queue_id)
+{
+	struct idpf_tx_queue *txq;
+
+	if (tx_queue_id >= dev->data->nb_tx_queues)
+		return -EINVAL;
+
+	txq = dev->data->tx_queues[tx_queue_id];
+
+	/* Init the TX tail register. */
+	IDPF_PCI_REG_WRITE(txq->qtx_tail, 0);
+
+	return 0;
+}
+
+int
+cpfl_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_tx_queue *txq =
+		dev->data->tx_queues[tx_queue_id];
+	int err = 0;
+
+	err = idpf_vc_txq_config(vport, txq);
+	if (err != 0) {
+		PMD_DRV_LOG(ERR, "Fail to configure Tx queue %u", tx_queue_id);
+		return err;
+	}
+
+	err = cpfl_tx_queue_init(dev, tx_queue_id);
+	if (err != 0) {
+		PMD_DRV_LOG(ERR, "Failed to init TX queue %u",
+			    tx_queue_id);
+		return err;
+	}
+
+	/* Ready to switch the queue on */
+	err = idpf_vc_queue_switch(vport, tx_queue_id, false, true);
+	if (err != 0) {
+		PMD_DRV_LOG(ERR, "Failed to switch TX queue %u on",
+			    tx_queue_id);
+	} else {
+		txq->q_started = true;
+		dev->data->tx_queue_state[tx_queue_id] =
+			RTE_ETH_QUEUE_STATE_STARTED;
+	}
+
+	return err;
+}
diff --git a/drivers/net/cpfl/cpfl_rxtx.h b/drivers/net/cpfl/cpfl_rxtx.h
index e0221abfa3..716b2fefa4 100644
--- a/drivers/net/cpfl/cpfl_rxtx.h
+++ b/drivers/net/cpfl/cpfl_rxtx.h
@@ -28,4 +28,8 @@ int cpfl_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 			uint16_t nb_desc, unsigned int socket_id,
 			const struct rte_eth_rxconf *rx_conf,
 			struct rte_mempool *mp);
+int cpfl_rx_queue_init(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+int cpfl_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+int cpfl_tx_queue_init(struct rte_eth_dev *dev, uint16_t tx_queue_id);
+int cpfl_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id);
 #endif /* _CPFL_RXTX_H_ */
-- 
2.25.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v6 06/21] net/cpfl: support queue stop
  2023-02-13  2:19       ` [PATCH v6 " Mingxia Liu
                           ` (4 preceding siblings ...)
  2023-02-13  2:19         ` [PATCH v6 05/21] net/cpfl: support queue start Mingxia Liu
@ 2023-02-13  2:19         ` Mingxia Liu
  2023-02-13  2:19         ` [PATCH v6 07/21] net/cpfl: support queue release Mingxia Liu
                           ` (16 subsequent siblings)
  22 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-02-13  2:19 UTC (permalink / raw)
  To: dev, beilei.xing, yuying.zhang; +Cc: Mingxia Liu

Add support for these device ops:
 - rx_queue_stop
 - tx_queue_stop

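For illustration only (application side, not part of the patch), an
individual queue can now be quiesced and restarted at runtime without
taking the whole port down; port_id and queue id 0 are placeholders:

  /* assumes the port is already started */
  rte_eth_dev_rx_queue_stop(port_id, 0);   /* -> cpfl_rx_queue_stop */
  /* ... e.g. drain or reconfigure application state ... */
  rte_eth_dev_rx_queue_start(port_id, 0);  /* -> cpfl_rx_queue_start */
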
Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/net/cpfl/cpfl_ethdev.c | 10 +++-
 drivers/net/cpfl/cpfl_rxtx.c   | 87 ++++++++++++++++++++++++++++++++++
 drivers/net/cpfl/cpfl_rxtx.h   |  3 ++
 3 files changed, 99 insertions(+), 1 deletion(-)

diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index c4565e687b..f757fea530 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -232,12 +232,16 @@ cpfl_dev_start(struct rte_eth_dev *dev)
 	ret = idpf_vc_vport_ena_dis(vport, true);
 	if (ret != 0) {
 		PMD_DRV_LOG(ERR, "Failed to enable vport");
-		return ret;
+		goto err_vport;
 	}
 
 	vport->stopped = 0;
 
 	return 0;
+
+err_vport:
+	cpfl_stop_queues(dev);
+	return ret;
 }
 
 static int
@@ -250,6 +254,8 @@ cpfl_dev_stop(struct rte_eth_dev *dev)
 
 	idpf_vc_vport_ena_dis(vport, false);
 
+	cpfl_stop_queues(dev);
+
 	vport->stopped = 1;
 
 	return 0;
@@ -615,6 +621,8 @@ static const struct eth_dev_ops cpfl_eth_dev_ops = {
 	.link_update			= cpfl_dev_link_update,
 	.rx_queue_start			= cpfl_rx_queue_start,
 	.tx_queue_start			= cpfl_tx_queue_start,
+	.rx_queue_stop			= cpfl_rx_queue_stop,
+	.tx_queue_stop			= cpfl_tx_queue_stop,
 	.dev_supported_ptypes_get	= cpfl_dev_supported_ptypes_get,
 };
 
diff --git a/drivers/net/cpfl/cpfl_rxtx.c b/drivers/net/cpfl/cpfl_rxtx.c
index e306a52b31..de0f2a5723 100644
--- a/drivers/net/cpfl/cpfl_rxtx.c
+++ b/drivers/net/cpfl/cpfl_rxtx.c
@@ -612,3 +612,90 @@ cpfl_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
 
 	return err;
 }
+
+int
+cpfl_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_rx_queue *rxq;
+	int err;
+
+	if (rx_queue_id >= dev->data->nb_rx_queues)
+		return -EINVAL;
+
+	err = idpf_vc_queue_switch(vport, rx_queue_id, true, false);
+	if (err != 0) {
+		PMD_DRV_LOG(ERR, "Failed to switch RX queue %u off",
+			    rx_queue_id);
+		return err;
+	}
+
+	rxq = dev->data->rx_queues[rx_queue_id];
+	if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE) {
+		rxq->ops->release_mbufs(rxq);
+		idpf_qc_single_rx_queue_reset(rxq);
+	} else {
+		rxq->bufq1->ops->release_mbufs(rxq->bufq1);
+		rxq->bufq2->ops->release_mbufs(rxq->bufq2);
+		idpf_qc_split_rx_queue_reset(rxq);
+	}
+	dev->data->rx_queue_state[rx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
+
+	return 0;
+}
+
+int
+cpfl_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_tx_queue *txq;
+	int err;
+
+	if (tx_queue_id >= dev->data->nb_tx_queues)
+		return -EINVAL;
+
+	err = idpf_vc_queue_switch(vport, tx_queue_id, false, false);
+	if (err != 0) {
+		PMD_DRV_LOG(ERR, "Failed to switch TX queue %u off",
+			    tx_queue_id);
+		return err;
+	}
+
+	txq = dev->data->tx_queues[tx_queue_id];
+	txq->ops->release_mbufs(txq);
+	if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE) {
+		idpf_qc_single_tx_queue_reset(txq);
+	} else {
+		idpf_qc_split_tx_descq_reset(txq);
+		idpf_qc_split_tx_complq_reset(txq->complq);
+	}
+	dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
+
+	return 0;
+}
+
+void
+cpfl_stop_queues(struct rte_eth_dev *dev)
+{
+	struct idpf_rx_queue *rxq;
+	struct idpf_tx_queue *txq;
+	int i;
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		rxq = dev->data->rx_queues[i];
+		if (rxq == NULL)
+			continue;
+
+		if (cpfl_rx_queue_stop(dev, i) != 0)
+			PMD_DRV_LOG(WARNING, "Fail to stop Rx queue %d", i);
+	}
+
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		txq = dev->data->tx_queues[i];
+		if (txq == NULL)
+			continue;
+
+		if (cpfl_tx_queue_stop(dev, i) != 0)
+			PMD_DRV_LOG(WARNING, "Fail to stop Tx queue %d", i);
+	}
+}
diff --git a/drivers/net/cpfl/cpfl_rxtx.h b/drivers/net/cpfl/cpfl_rxtx.h
index 716b2fefa4..e9b810deaa 100644
--- a/drivers/net/cpfl/cpfl_rxtx.h
+++ b/drivers/net/cpfl/cpfl_rxtx.h
@@ -32,4 +32,7 @@ int cpfl_rx_queue_init(struct rte_eth_dev *dev, uint16_t rx_queue_id);
 int cpfl_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id);
 int cpfl_tx_queue_init(struct rte_eth_dev *dev, uint16_t tx_queue_id);
 int cpfl_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id);
+void cpfl_stop_queues(struct rte_eth_dev *dev);
+int cpfl_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id);
+int cpfl_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id);
 #endif /* _CPFL_RXTX_H_ */
-- 
2.25.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v6 07/21] net/cpfl: support queue release
  2023-02-13  2:19       ` [PATCH v6 " Mingxia Liu
                           ` (5 preceding siblings ...)
  2023-02-13  2:19         ` [PATCH v6 06/21] net/cpfl: support queue stop Mingxia Liu
@ 2023-02-13  2:19         ` Mingxia Liu
  2023-02-13  2:19         ` [PATCH v6 08/21] net/cpfl: support MTU configuration Mingxia Liu
                           ` (15 subsequent siblings)
  22 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-02-13  2:19 UTC (permalink / raw)
  To: dev, beilei.xing, yuying.zhang; +Cc: Mingxia Liu

Add support for queue operations:
 - rx_queue_release
 - tx_queue_release

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/net/cpfl/cpfl_ethdev.c |  2 ++
 drivers/net/cpfl/cpfl_rxtx.c   | 35 ++++++++++++++++++++++++++++++++++
 drivers/net/cpfl/cpfl_rxtx.h   |  2 ++
 3 files changed, 39 insertions(+)

diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index f757fea530..2e5bfac1c0 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -623,6 +623,8 @@ static const struct eth_dev_ops cpfl_eth_dev_ops = {
 	.tx_queue_start			= cpfl_tx_queue_start,
 	.rx_queue_stop			= cpfl_rx_queue_stop,
 	.tx_queue_stop			= cpfl_tx_queue_stop,
+	.rx_queue_release		= cpfl_dev_rx_queue_release,
+	.tx_queue_release		= cpfl_dev_tx_queue_release,
 	.dev_supported_ptypes_get	= cpfl_dev_supported_ptypes_get,
 };
 
diff --git a/drivers/net/cpfl/cpfl_rxtx.c b/drivers/net/cpfl/cpfl_rxtx.c
index de0f2a5723..3edba70b16 100644
--- a/drivers/net/cpfl/cpfl_rxtx.c
+++ b/drivers/net/cpfl/cpfl_rxtx.c
@@ -49,6 +49,14 @@ cpfl_tx_offload_convert(uint64_t offload)
 	return ol;
 }
 
+static const struct idpf_rxq_ops def_rxq_ops = {
+	.release_mbufs = idpf_qc_rxq_mbufs_release,
+};
+
+static const struct idpf_txq_ops def_txq_ops = {
+	.release_mbufs = idpf_qc_txq_mbufs_release,
+};
+
 static const struct rte_memzone *
 cpfl_dma_zone_reserve(struct rte_eth_dev *dev, uint16_t queue_idx,
 		      uint16_t len, uint16_t queue_type,
@@ -177,6 +185,7 @@ cpfl_rx_split_bufq_setup(struct rte_eth_dev *dev, struct idpf_rx_queue *rxq,
 	idpf_qc_split_rx_bufq_reset(bufq);
 	bufq->qrx_tail = hw->hw_addr + (vport->chunks_info.rx_buf_qtail_start +
 			 queue_idx * vport->chunks_info.rx_buf_qtail_spacing);
+	bufq->ops = &def_rxq_ops;
 	bufq->q_set = true;
 
 	if (bufq_id == 1) {
@@ -235,6 +244,12 @@ cpfl_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 	if (idpf_qc_rx_thresh_check(nb_desc, rx_free_thresh) != 0)
 		return -EINVAL;
 
+	/* Free memory if needed */
+	if (dev->data->rx_queues[queue_idx] != NULL) {
+		idpf_qc_rx_queue_release(dev->data->rx_queues[queue_idx]);
+		dev->data->rx_queues[queue_idx] = NULL;
+	}
+
 	/* Setup Rx queue */
 	rxq = rte_zmalloc_socket("cpfl rxq",
 				 sizeof(struct idpf_rx_queue),
@@ -287,6 +302,7 @@ cpfl_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 		idpf_qc_single_rx_queue_reset(rxq);
 		rxq->qrx_tail = hw->hw_addr + (vport->chunks_info.rx_qtail_start +
 				queue_idx * vport->chunks_info.rx_qtail_spacing);
+		rxq->ops = &def_rxq_ops;
 	} else {
 		idpf_qc_split_rx_descq_reset(rxq);
 
@@ -399,6 +415,12 @@ cpfl_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 	if (idpf_qc_tx_thresh_check(nb_desc, tx_rs_thresh, tx_free_thresh) != 0)
 		return -EINVAL;
 
+	/* Free memory if needed. */
+	if (dev->data->tx_queues[queue_idx] != NULL) {
+		idpf_qc_tx_queue_release(dev->data->tx_queues[queue_idx]);
+		dev->data->tx_queues[queue_idx] = NULL;
+	}
+
 	/* Allocate the TX queue data structure. */
 	txq = rte_zmalloc_socket("cpfl txq",
 				 sizeof(struct idpf_tx_queue),
@@ -461,6 +483,7 @@ cpfl_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 
 	txq->qtx_tail = hw->hw_addr + (vport->chunks_info.tx_qtail_start +
 			queue_idx * vport->chunks_info.tx_qtail_spacing);
+	txq->ops = &def_txq_ops;
 	txq->q_set = true;
 	dev->data->tx_queues[queue_idx] = txq;
 
@@ -674,6 +697,18 @@ cpfl_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
 	return 0;
 }
 
+void
+cpfl_dev_rx_queue_release(struct rte_eth_dev *dev, uint16_t qid)
+{
+	idpf_qc_rx_queue_release(dev->data->rx_queues[qid]);
+}
+
+void
+cpfl_dev_tx_queue_release(struct rte_eth_dev *dev, uint16_t qid)
+{
+	idpf_qc_tx_queue_release(dev->data->tx_queues[qid]);
+}
+
 void
 cpfl_stop_queues(struct rte_eth_dev *dev)
 {
diff --git a/drivers/net/cpfl/cpfl_rxtx.h b/drivers/net/cpfl/cpfl_rxtx.h
index e9b810deaa..f5882401dc 100644
--- a/drivers/net/cpfl/cpfl_rxtx.h
+++ b/drivers/net/cpfl/cpfl_rxtx.h
@@ -35,4 +35,6 @@ int cpfl_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id);
 void cpfl_stop_queues(struct rte_eth_dev *dev);
 int cpfl_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id);
 int cpfl_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+void cpfl_dev_tx_queue_release(struct rte_eth_dev *dev, uint16_t qid);
+void cpfl_dev_rx_queue_release(struct rte_eth_dev *dev, uint16_t qid);
 #endif /* _CPFL_RXTX_H_ */
-- 
2.25.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v6 08/21] net/cpfl: support MTU configuration
  2023-02-13  2:19       ` [PATCH v6 " Mingxia Liu
                           ` (6 preceding siblings ...)
  2023-02-13  2:19         ` [PATCH v6 07/21] net/cpfl: support queue release Mingxia Liu
@ 2023-02-13  2:19         ` Mingxia Liu
  2023-02-13  2:19         ` [PATCH v6 09/21] net/cpfl: support basic Rx data path Mingxia Liu
                           ` (14 subsequent siblings)
  22 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-02-13  2:19 UTC (permalink / raw)
  To: dev, beilei.xing, yuying.zhang; +Cc: Mingxia Liu

Add dev ops mtu_set.
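
A usage sketch (port_id and the value 9000 are illustrative assumptions);
the op is reached through the generic ethdev API and, as implemented here,
only succeeds while the port is stopped and the MTU fits vport->max_mtu:

  #include <stdio.h>
  #include <rte_ethdev.h>

  /* Must be called before rte_eth_dev_start(); cpfl_dev_mtu_set()
   * returns -EBUSY on a started port and -EINVAL on an oversized MTU.
   */
  int ret = rte_eth_dev_set_mtu(port_id, 9000);

  if (ret != 0)
      printf("MTU update failed: %d\n", ret);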

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 doc/guides/nics/features/cpfl.ini |  1 +
 drivers/net/cpfl/cpfl_ethdev.c    | 27 +++++++++++++++++++++++++++
 2 files changed, 28 insertions(+)

diff --git a/doc/guides/nics/features/cpfl.ini b/doc/guides/nics/features/cpfl.ini
index a2d1ca9e15..470ba81579 100644
--- a/doc/guides/nics/features/cpfl.ini
+++ b/doc/guides/nics/features/cpfl.ini
@@ -7,6 +7,7 @@
 ; is selected.
 ;
 [Features]
+MTU update           = Y
 Linux                = Y
 x86-32               = Y
 x86-64               = Y
diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index 2e5bfac1c0..e2eb92c738 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -121,6 +121,27 @@ cpfl_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	return 0;
 }
 
+static int
+cpfl_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+
+	/* MTU setting is forbidden if the port is started */
+	if (dev->data->dev_started) {
+		PMD_DRV_LOG(ERR, "port must be stopped before configuration");
+		return -EBUSY;
+	}
+
+	if (mtu > vport->max_mtu) {
+		PMD_DRV_LOG(ERR, "MTU should be less than %d", vport->max_mtu);
+		return -EINVAL;
+	}
+
+	vport->max_pkt_len = mtu + CPFL_ETH_OVERHEAD;
+
+	return 0;
+}
+
 static const uint32_t *
 cpfl_dev_supported_ptypes_get(struct rte_eth_dev *dev __rte_unused)
 {
@@ -142,6 +163,7 @@ cpfl_dev_supported_ptypes_get(struct rte_eth_dev *dev __rte_unused)
 static int
 cpfl_dev_configure(struct rte_eth_dev *dev)
 {
+	struct idpf_vport *vport = dev->data->dev_private;
 	struct rte_eth_conf *conf = &dev->data->dev_conf;
 
 	if (conf->link_speeds & RTE_ETH_LINK_SPEED_FIXED) {
@@ -181,6 +203,10 @@ cpfl_dev_configure(struct rte_eth_dev *dev)
 		return -ENOTSUP;
 	}
 
+	vport->max_pkt_len =
+		(dev->data->mtu == 0) ? CPFL_DEFAULT_MTU : dev->data->mtu +
+		CPFL_ETH_OVERHEAD;
+
 	return 0;
 }
 
@@ -625,6 +651,7 @@ static const struct eth_dev_ops cpfl_eth_dev_ops = {
 	.tx_queue_stop			= cpfl_tx_queue_stop,
 	.rx_queue_release		= cpfl_dev_rx_queue_release,
 	.tx_queue_release		= cpfl_dev_tx_queue_release,
+	.mtu_set			= cpfl_dev_mtu_set,
 	.dev_supported_ptypes_get	= cpfl_dev_supported_ptypes_get,
 };
 
-- 
2.25.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v6 09/21] net/cpfl: support basic Rx data path
  2023-02-13  2:19       ` [PATCH v6 " Mingxia Liu
                           ` (7 preceding siblings ...)
  2023-02-13  2:19         ` [PATCH v6 08/21] net/cpfl: support MTU configuration Mingxia Liu
@ 2023-02-13  2:19         ` Mingxia Liu
  2023-02-13  2:19         ` [PATCH v6 10/21] net/cpfl: support basic Tx " Mingxia Liu
                           ` (13 subsequent siblings)
  22 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-02-13  2:19 UTC (permalink / raw)
  To: dev, beilei.xing, yuying.zhang; +Cc: Mingxia Liu

Add basic Rx support in split queue mode and single queue mode.
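
Both modes are reached through the normal burst API; a minimal polling
sketch (port_id and queue 0 are assumptions):

  #include <rte_ethdev.h>
  #include <rte_mbuf.h>

  #define BURST_SZ 32

  struct rte_mbuf *pkts[BURST_SZ];
  /* cpfl_set_rx_function() has installed either the split queue or the
   * single queue scalar receive routine behind this call.
   */
  uint16_t nb_rx = rte_eth_rx_burst(port_id, 0, pkts, BURST_SZ);
  uint16_t i;

  for (i = 0; i < nb_rx; i++)
      rte_pktmbuf_free(pkts[i]); /* application processing goes here */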

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/net/cpfl/cpfl_ethdev.c |  2 ++
 drivers/net/cpfl/cpfl_rxtx.c   | 18 ++++++++++++++++++
 drivers/net/cpfl/cpfl_rxtx.h   |  1 +
 3 files changed, 21 insertions(+)

diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index e2eb92c738..ba66b284cc 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -255,6 +255,8 @@ cpfl_dev_start(struct rte_eth_dev *dev)
 		return ret;
 	}
 
+	cpfl_set_rx_function(dev);
+
 	ret = idpf_vc_vport_ena_dis(vport, true);
 	if (ret != 0) {
 		PMD_DRV_LOG(ERR, "Failed to enable vport");
diff --git a/drivers/net/cpfl/cpfl_rxtx.c b/drivers/net/cpfl/cpfl_rxtx.c
index 3edba70b16..568ae3ec68 100644
--- a/drivers/net/cpfl/cpfl_rxtx.c
+++ b/drivers/net/cpfl/cpfl_rxtx.c
@@ -734,3 +734,21 @@ cpfl_stop_queues(struct rte_eth_dev *dev)
 			PMD_DRV_LOG(WARNING, "Fail to stop Tx queue %d", i);
 	}
 }
+
+void
+cpfl_set_rx_function(struct rte_eth_dev *dev)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+
+	if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
+		PMD_DRV_LOG(NOTICE,
+			    "Using Split Scalar Rx (port %d).",
+			    dev->data->port_id);
+		dev->rx_pkt_burst = idpf_dp_splitq_recv_pkts;
+	} else {
+		PMD_DRV_LOG(NOTICE,
+			    "Using Single Scalar Rx (port %d).",
+			    dev->data->port_id);
+		dev->rx_pkt_burst = idpf_dp_singleq_recv_pkts;
+	}
+}
diff --git a/drivers/net/cpfl/cpfl_rxtx.h b/drivers/net/cpfl/cpfl_rxtx.h
index f5882401dc..a5dd388e1f 100644
--- a/drivers/net/cpfl/cpfl_rxtx.h
+++ b/drivers/net/cpfl/cpfl_rxtx.h
@@ -37,4 +37,5 @@ int cpfl_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id);
 int cpfl_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id);
 void cpfl_dev_tx_queue_release(struct rte_eth_dev *dev, uint16_t qid);
 void cpfl_dev_rx_queue_release(struct rte_eth_dev *dev, uint16_t qid);
+void cpfl_set_rx_function(struct rte_eth_dev *dev);
 #endif /* _CPFL_RXTX_H_ */
-- 
2.25.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v6 10/21] net/cpfl: support basic Tx data path
  2023-02-13  2:19       ` [PATCH v6 " Mingxia Liu
                           ` (8 preceding siblings ...)
  2023-02-13  2:19         ` [PATCH v6 09/21] net/cpfl: support basic Rx data path Mingxia Liu
@ 2023-02-13  2:19         ` Mingxia Liu
  2023-02-13  2:19         ` [PATCH v6 11/21] net/cpfl: support write back based on ITR expire Mingxia Liu
                           ` (12 subsequent siblings)
  22 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-02-13  2:19 UTC (permalink / raw)
  To: dev, beilei.xing, yuying.zhang; +Cc: Mingxia Liu

Add basic Tx support in split queue mode and single queue mode.
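
A matching transmit sketch (port_id, queue 0, pkts and nb_pkts are
assumptions); since tx_pkt_prepare is wired up, rte_eth_tx_prepare() can
be used to validate offload requests before bursting:

  #include <rte_ethdev.h>
  #include <rte_mbuf.h>

  uint16_t nb_prep = rte_eth_tx_prepare(port_id, 0, pkts, nb_pkts);
  uint16_t nb_tx = rte_eth_tx_burst(port_id, 0, pkts, nb_prep);

  /* Free whatever failed preparation or was not accepted by the ring. */
  while (nb_tx < nb_pkts)
      rte_pktmbuf_free(pkts[nb_tx++]);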

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/net/cpfl/cpfl_ethdev.c |  3 +++
 drivers/net/cpfl/cpfl_rxtx.c   | 20 ++++++++++++++++++++
 drivers/net/cpfl/cpfl_rxtx.h   |  1 +
 3 files changed, 24 insertions(+)

diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index ba66b284cc..f02cbd08d9 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -97,6 +97,8 @@ cpfl_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->max_mtu = vport->max_mtu;
 	dev_info->min_mtu = RTE_ETHER_MIN_MTU;
 
+	dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
+
 	dev_info->default_txconf = (struct rte_eth_txconf) {
 		.tx_free_thresh = CPFL_DEFAULT_TX_FREE_THRESH,
 		.tx_rs_thresh = CPFL_DEFAULT_TX_RS_THRESH,
@@ -256,6 +258,7 @@ cpfl_dev_start(struct rte_eth_dev *dev)
 	}
 
 	cpfl_set_rx_function(dev);
+	cpfl_set_tx_function(dev);
 
 	ret = idpf_vc_vport_ena_dis(vport, true);
 	if (ret != 0) {
diff --git a/drivers/net/cpfl/cpfl_rxtx.c b/drivers/net/cpfl/cpfl_rxtx.c
index 568ae3ec68..c250642719 100644
--- a/drivers/net/cpfl/cpfl_rxtx.c
+++ b/drivers/net/cpfl/cpfl_rxtx.c
@@ -752,3 +752,23 @@ cpfl_set_rx_function(struct rte_eth_dev *dev)
 		dev->rx_pkt_burst = idpf_dp_singleq_recv_pkts;
 	}
 }
+
+void
+cpfl_set_tx_function(struct rte_eth_dev *dev)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+
+	if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
+		PMD_DRV_LOG(NOTICE,
+			    "Using Split Scalar Tx (port %d).",
+			    dev->data->port_id);
+		dev->tx_pkt_burst = idpf_dp_splitq_xmit_pkts;
+		dev->tx_pkt_prepare = idpf_dp_prep_pkts;
+	} else {
+		PMD_DRV_LOG(NOTICE,
+			    "Using Single Scalar Tx (port %d).",
+			    dev->data->port_id);
+		dev->tx_pkt_burst = idpf_dp_singleq_xmit_pkts;
+		dev->tx_pkt_prepare = idpf_dp_prep_pkts;
+	}
+}
diff --git a/drivers/net/cpfl/cpfl_rxtx.h b/drivers/net/cpfl/cpfl_rxtx.h
index a5dd388e1f..5f8144e55f 100644
--- a/drivers/net/cpfl/cpfl_rxtx.h
+++ b/drivers/net/cpfl/cpfl_rxtx.h
@@ -38,4 +38,5 @@ int cpfl_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id);
 void cpfl_dev_tx_queue_release(struct rte_eth_dev *dev, uint16_t qid);
 void cpfl_dev_rx_queue_release(struct rte_eth_dev *dev, uint16_t qid);
 void cpfl_set_rx_function(struct rte_eth_dev *dev);
+void cpfl_set_tx_function(struct rte_eth_dev *dev);
 #endif /* _CPFL_RXTX_H_ */
-- 
2.25.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v6 11/21] net/cpfl: support write back based on ITR expire
  2023-02-13  2:19       ` [PATCH v6 " Mingxia Liu
                           ` (9 preceding siblings ...)
  2023-02-13  2:19         ` [PATCH v6 10/21] net/cpfl: support basic Tx " Mingxia Liu
@ 2023-02-13  2:19         ` Mingxia Liu
  2023-02-13  2:19         ` [PATCH v6 12/21] net/cpfl: support RSS Mingxia Liu
                           ` (11 subsequent siblings)
  22 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-02-13  2:19 UTC (permalink / raw)
  To: dev, beilei.xing, yuying.zhang; +Cc: Mingxia Liu

Enable write back on ITR expire, then packets can be received one by one.

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/net/cpfl/cpfl_ethdev.c | 45 +++++++++++++++++++++++++++++++++-
 drivers/net/cpfl/cpfl_ethdev.h |  2 ++
 2 files changed, 46 insertions(+), 1 deletion(-)

diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index f02cbd08d9..7e0630c605 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -212,6 +212,15 @@ cpfl_dev_configure(struct rte_eth_dev *dev)
 	return 0;
 }
 
+static int
+cpfl_config_rx_queues_irqs(struct rte_eth_dev *dev)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	uint16_t nb_rx_queues = dev->data->nb_rx_queues;
+
+	return idpf_vport_irq_map_config(vport, nb_rx_queues);
+}
+
 static int
 cpfl_start_queues(struct rte_eth_dev *dev)
 {
@@ -249,12 +258,37 @@ static int
 cpfl_dev_start(struct rte_eth_dev *dev)
 {
 	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_adapter *base = vport->adapter;
+	struct cpfl_adapter_ext *adapter = CPFL_ADAPTER_TO_EXT(base);
+	uint16_t num_allocated_vectors = base->caps.num_allocated_vectors;
+	uint16_t req_vecs_num;
 	int ret;
 
+	req_vecs_num = CPFL_DFLT_Q_VEC_NUM;
+	if (req_vecs_num + adapter->used_vecs_num > num_allocated_vectors) {
+		PMD_DRV_LOG(ERR, "The accumulated request vectors' number should be less than %d",
+			    num_allocated_vectors);
+		ret = -EINVAL;
+		goto err_vec;
+	}
+
+	ret = idpf_vc_vectors_alloc(vport, req_vecs_num);
+	if (ret != 0) {
+		PMD_DRV_LOG(ERR, "Failed to allocate interrupt vectors");
+		goto err_vec;
+	}
+	adapter->used_vecs_num += req_vecs_num;
+
+	ret = cpfl_config_rx_queues_irqs(dev);
+	if (ret != 0) {
+		PMD_DRV_LOG(ERR, "Failed to configure irqs");
+		goto err_irq;
+	}
+
 	ret = cpfl_start_queues(dev);
 	if (ret != 0) {
 		PMD_DRV_LOG(ERR, "Failed to start queues");
-		return ret;
+		goto err_startq;
 	}
 
 	cpfl_set_rx_function(dev);
@@ -272,6 +306,11 @@ cpfl_dev_start(struct rte_eth_dev *dev)
 
 err_vport:
 	cpfl_stop_queues(dev);
+err_startq:
+	idpf_vport_irq_unmap_config(vport, dev->data->nb_rx_queues);
+err_irq:
+	idpf_vc_vectors_dealloc(vport);
+err_vec:
 	return ret;
 }
 
@@ -287,6 +326,10 @@ cpfl_dev_stop(struct rte_eth_dev *dev)
 
 	cpfl_stop_queues(dev);
 
+	idpf_vport_irq_unmap_config(vport, dev->data->nb_rx_queues);
+
+	idpf_vc_vectors_dealloc(vport);
+
 	vport->stopped = 1;
 
 	return 0;
diff --git a/drivers/net/cpfl/cpfl_ethdev.h b/drivers/net/cpfl/cpfl_ethdev.h
index 9ca39b4558..cd7f560d19 100644
--- a/drivers/net/cpfl/cpfl_ethdev.h
+++ b/drivers/net/cpfl/cpfl_ethdev.h
@@ -24,6 +24,8 @@
 
 #define CPFL_INVALID_VPORT_IDX	0xffff
 
+#define CPFL_DFLT_Q_VEC_NUM	1
+
 #define CPFL_MIN_BUF_SIZE	1024
 #define CPFL_MAX_FRAME_SIZE	9728
 #define CPFL_DEFAULT_MTU	RTE_ETHER_MTU
-- 
2.25.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v6 12/21] net/cpfl: support RSS
  2023-02-13  2:19       ` [PATCH v6 " Mingxia Liu
                           ` (10 preceding siblings ...)
  2023-02-13  2:19         ` [PATCH v6 11/21] net/cpfl: support write back based on ITR expire Mingxia Liu
@ 2023-02-13  2:19         ` Mingxia Liu
  2023-02-13  2:19         ` [PATCH v6 13/21] net/cpfl: support Rx offloading Mingxia Liu
                           ` (10 subsequent siblings)
  22 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-02-13  2:19 UTC (permalink / raw)
  To: dev, beilei.xing, yuying.zhang; +Cc: Mingxia Liu

Add RSS support.
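
The PMD takes its key and hash types from the standard ethdev
configuration; a minimal sketch (port_id, nb_rxq and nb_txq are
assumptions). A NULL key makes cpfl_init_rss() generate a random one:

  #include <rte_ethdev.h>

  struct rte_eth_conf port_conf = {
      .rxmode = { .mq_mode = RTE_ETH_MQ_RX_RSS },
      .rx_adv_conf = {
          .rss_conf = {
              .rss_key = NULL, /* let the PMD pick a random key */
              .rss_hf = RTE_ETH_RSS_IP | RTE_ETH_RSS_TCP |
                        RTE_ETH_RSS_UDP,
          },
      },
  };

  int ret = rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &port_conf);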

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/net/cpfl/cpfl_ethdev.c | 51 ++++++++++++++++++++++++++++++++++
 drivers/net/cpfl/cpfl_ethdev.h | 15 ++++++++++
 2 files changed, 66 insertions(+)

diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index 7e0630c605..fb15004e48 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -97,6 +97,8 @@ cpfl_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->max_mtu = vport->max_mtu;
 	dev_info->min_mtu = RTE_ETHER_MIN_MTU;
 
+	dev_info->flow_type_rss_offloads = CPFL_RSS_OFFLOAD_ALL;
+
 	dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 	dev_info->default_txconf = (struct rte_eth_txconf) {
@@ -162,11 +164,49 @@ cpfl_dev_supported_ptypes_get(struct rte_eth_dev *dev __rte_unused)
 	return ptypes;
 }
 
+static int
+cpfl_init_rss(struct idpf_vport *vport)
+{
+	struct rte_eth_rss_conf *rss_conf;
+	struct rte_eth_dev_data *dev_data;
+	uint16_t i, nb_q;
+	int ret = 0;
+
+	dev_data = vport->dev_data;
+	rss_conf = &dev_data->dev_conf.rx_adv_conf.rss_conf;
+	nb_q = dev_data->nb_rx_queues;
+
+	if (rss_conf->rss_key == NULL) {
+		for (i = 0; i < vport->rss_key_size; i++)
+			vport->rss_key[i] = (uint8_t)rte_rand();
+	} else if (rss_conf->rss_key_len != vport->rss_key_size) {
+		PMD_INIT_LOG(ERR, "Invalid RSS key length in RSS configuration, should be %d",
+			     vport->rss_key_size);
+		return -EINVAL;
+	} else {
+		rte_memcpy(vport->rss_key, rss_conf->rss_key,
+			   vport->rss_key_size);
+	}
+
+	for (i = 0; i < vport->rss_lut_size; i++)
+		vport->rss_lut[i] = i % nb_q;
+
+	vport->rss_hf = IDPF_DEFAULT_RSS_HASH_EXPANDED;
+
+	ret = idpf_vport_rss_config(vport);
+	if (ret != 0)
+		PMD_INIT_LOG(ERR, "Failed to configure RSS");
+
+	return ret;
+}
+
 static int
 cpfl_dev_configure(struct rte_eth_dev *dev)
 {
 	struct idpf_vport *vport = dev->data->dev_private;
 	struct rte_eth_conf *conf = &dev->data->dev_conf;
+	struct idpf_adapter *adapter = vport->adapter;
+	int ret;
 
 	if (conf->link_speeds & RTE_ETH_LINK_SPEED_FIXED) {
 		PMD_INIT_LOG(ERR, "Setting link speed is not supported");
@@ -205,6 +245,17 @@ cpfl_dev_configure(struct rte_eth_dev *dev)
 		return -ENOTSUP;
 	}
 
+	if (adapter->caps.rss_caps != 0 && dev->data->nb_rx_queues != 0) {
+		ret = cpfl_init_rss(vport);
+		if (ret != 0) {
+			PMD_INIT_LOG(ERR, "Failed to init rss");
+			return ret;
+		}
+	} else {
+		PMD_INIT_LOG(ERR, "RSS is not supported.");
+		return -1;
+	}
+
 	vport->max_pkt_len =
 		(dev->data->mtu == 0) ? CPFL_DEFAULT_MTU : dev->data->mtu +
 		CPFL_ETH_OVERHEAD;
diff --git a/drivers/net/cpfl/cpfl_ethdev.h b/drivers/net/cpfl/cpfl_ethdev.h
index cd7f560d19..e00dff4bf0 100644
--- a/drivers/net/cpfl/cpfl_ethdev.h
+++ b/drivers/net/cpfl/cpfl_ethdev.h
@@ -36,6 +36,21 @@
 #define CPFL_ETH_OVERHEAD \
 	(RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN + CPFL_VLAN_TAG_SIZE * 2)
 
+#define CPFL_RSS_OFFLOAD_ALL (				\
+		RTE_ETH_RSS_IPV4                |	\
+		RTE_ETH_RSS_FRAG_IPV4           |	\
+		RTE_ETH_RSS_NONFRAG_IPV4_TCP    |	\
+		RTE_ETH_RSS_NONFRAG_IPV4_UDP    |	\
+		RTE_ETH_RSS_NONFRAG_IPV4_SCTP   |	\
+		RTE_ETH_RSS_NONFRAG_IPV4_OTHER  |	\
+		RTE_ETH_RSS_IPV6                |	\
+		RTE_ETH_RSS_FRAG_IPV6           |	\
+		RTE_ETH_RSS_NONFRAG_IPV6_TCP    |	\
+		RTE_ETH_RSS_NONFRAG_IPV6_UDP    |	\
+		RTE_ETH_RSS_NONFRAG_IPV6_SCTP   |	\
+		RTE_ETH_RSS_NONFRAG_IPV6_OTHER  |	\
+		RTE_ETH_RSS_L2_PAYLOAD)
+
 #define CPFL_ADAPTER_NAME_LEN	(PCI_PRI_STR_SIZE + 1)
 
 #define CPFL_ALARM_INTERVAL	50000 /* us */
-- 
2.25.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v6 13/21] net/cpfl: support Rx offloading
  2023-02-13  2:19       ` [PATCH v6 " Mingxia Liu
                           ` (11 preceding siblings ...)
  2023-02-13  2:19         ` [PATCH v6 12/21] net/cpfl: support RSS Mingxia Liu
@ 2023-02-13  2:19         ` Mingxia Liu
  2023-02-13  2:19         ` [PATCH v6 14/21] net/cpfl: support Tx offloading Mingxia Liu
                           ` (9 subsequent siblings)
  22 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-02-13  2:19 UTC (permalink / raw)
  To: dev, beilei.xing, yuying.zhang; +Cc: Mingxia Liu

Add Rx offloading support:
 - support CHKSUM and RSS offload for split queue model
 - support CHKSUM offload for single queue model
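
On the receive side the checksum verdict lands in the mbuf flags; a
consuming sketch (m is an assumed received mbuf):

  #include <rte_mbuf.h>

  if ((m->ol_flags & RTE_MBUF_F_RX_IP_CKSUM_MASK) ==
          RTE_MBUF_F_RX_IP_CKSUM_BAD ||
      (m->ol_flags & RTE_MBUF_F_RX_L4_CKSUM_MASK) ==
          RTE_MBUF_F_RX_L4_CKSUM_BAD) {
      /* hardware flagged a bad L3/L4 checksum */
      rte_pktmbuf_free(m);
  }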

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 doc/guides/nics/features/cpfl.ini | 2 ++
 drivers/net/cpfl/cpfl_ethdev.c    | 6 ++++++
 2 files changed, 8 insertions(+)

diff --git a/doc/guides/nics/features/cpfl.ini b/doc/guides/nics/features/cpfl.ini
index 470ba81579..ee5948f444 100644
--- a/doc/guides/nics/features/cpfl.ini
+++ b/doc/guides/nics/features/cpfl.ini
@@ -8,6 +8,8 @@
 ;
 [Features]
 MTU update           = Y
+L3 checksum offload  = P
+L4 checksum offload  = P
 Linux                = Y
 x86-32               = Y
 x86-64               = Y
diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index fb15004e48..d0f90b7d2c 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -99,6 +99,12 @@ cpfl_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 
 	dev_info->flow_type_rss_offloads = CPFL_RSS_OFFLOAD_ALL;
 
+	dev_info->rx_offload_capa =
+		RTE_ETH_RX_OFFLOAD_IPV4_CKSUM           |
+		RTE_ETH_RX_OFFLOAD_UDP_CKSUM            |
+		RTE_ETH_RX_OFFLOAD_TCP_CKSUM            |
+		RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM;
+
 	dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 	dev_info->default_txconf = (struct rte_eth_txconf) {
-- 
2.25.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v6 14/21] net/cpfl: support Tx offloading
  2023-02-13  2:19       ` [PATCH v6 " Mingxia Liu
                           ` (12 preceding siblings ...)
  2023-02-13  2:19         ` [PATCH v6 13/21] net/cpfl: support Rx offloading Mingxia Liu
@ 2023-02-13  2:19         ` Mingxia Liu
  2023-02-13  2:19         ` [PATCH v6 15/21] net/cpfl: add AVX512 data path for single queue model Mingxia Liu
                           ` (8 subsequent siblings)
  22 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-02-13  2:19 UTC (permalink / raw)
  To: dev, beilei.xing, yuying.zhang; +Cc: Mingxia Liu

Add Tx offloading support:
 - support TSO for single queue model and split queue model.
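
To request TSO the application marks each mbuf before transmit; a sketch
under the usual assumptions (plain IPv4/TCP frame, headers already known):

  #include <rte_ether.h>
  #include <rte_ip.h>
  #include <rte_tcp.h>
  #include <rte_mbuf.h>

  m->ol_flags |= RTE_MBUF_F_TX_TCP_SEG | RTE_MBUF_F_TX_IPV4 |
                 RTE_MBUF_F_TX_IP_CKSUM;
  m->l2_len = sizeof(struct rte_ether_hdr);
  m->l3_len = sizeof(struct rte_ipv4_hdr);  /* no IP options assumed */
  m->l4_len = sizeof(struct rte_tcp_hdr);   /* no TCP options assumed */
  m->tso_segsz = 1460;                      /* MSS for each segment */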

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 doc/guides/nics/features/cpfl.ini | 1 +
 drivers/net/cpfl/cpfl_ethdev.c    | 8 +++++++-
 2 files changed, 8 insertions(+), 1 deletion(-)

diff --git a/doc/guides/nics/features/cpfl.ini b/doc/guides/nics/features/cpfl.ini
index ee5948f444..f4e45c7c68 100644
--- a/doc/guides/nics/features/cpfl.ini
+++ b/doc/guides/nics/features/cpfl.ini
@@ -8,6 +8,7 @@
 ;
 [Features]
 MTU update           = Y
+TSO                  = P
 L3 checksum offload  = P
 L4 checksum offload  = P
 Linux                = Y
diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index d0f90b7d2c..fda945d34b 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -105,7 +105,13 @@ cpfl_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 		RTE_ETH_RX_OFFLOAD_TCP_CKSUM            |
 		RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM;
 
-	dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
+	dev_info->tx_offload_capa =
+		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM		|
+		RTE_ETH_TX_OFFLOAD_UDP_CKSUM		|
+		RTE_ETH_TX_OFFLOAD_TCP_CKSUM		|
+		RTE_ETH_TX_OFFLOAD_SCTP_CKSUM		|
+		RTE_ETH_TX_OFFLOAD_TCP_TSO		|
+		RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 	dev_info->default_txconf = (struct rte_eth_txconf) {
 		.tx_free_thresh = CPFL_DEFAULT_TX_FREE_THRESH,
-- 
2.25.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v6 15/21] net/cpfl: add AVX512 data path for single queue model
  2023-02-13  2:19       ` [PATCH v6 " Mingxia Liu
                           ` (13 preceding siblings ...)
  2023-02-13  2:19         ` [PATCH v6 14/21] net/cpfl: support Tx offloading Mingxia Liu
@ 2023-02-13  2:19         ` Mingxia Liu
  2023-02-13  2:19         ` [PATCH v6 16/21] net/cpfl: support timestamp offload Mingxia Liu
                           ` (7 subsequent siblings)
  22 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-02-13  2:19 UTC (permalink / raw)
  To: dev, beilei.xing, yuying.zhang; +Cc: Mingxia Liu, Wenjun Wu

Add support for the AVX512 vector data path for the single queue model.
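
The AVX512 path is only taken when the CPU, the build and the runtime SIMD
limit all allow it. Besides the --force-max-simd-bitwidth EAL argument, the
limit can be raised programmatically; a sketch (assuming the call is made
after EAL init and before the port starts, when the Rx/Tx functions are
chosen, and that EAL permits changing the limit at that point):

  #include <stdio.h>
  #include <rte_vect.h>

  /* Equivalent to passing --force-max-simd-bitwidth=512 to EAL. */
  if (rte_vect_set_max_simd_bitwidth(RTE_VECT_SIMD_512) != 0)
      printf("could not raise the SIMD bitwidth limit\n");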

Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 doc/guides/nics/cpfl.rst                |  24 +++++-
 drivers/net/cpfl/cpfl_ethdev.c          |   3 +-
 drivers/net/cpfl/cpfl_rxtx.c            |  94 ++++++++++++++++++++++
 drivers/net/cpfl/cpfl_rxtx_vec_common.h | 100 ++++++++++++++++++++++++
 drivers/net/cpfl/meson.build            |  25 +++++-
 5 files changed, 243 insertions(+), 3 deletions(-)
 create mode 100644 drivers/net/cpfl/cpfl_rxtx_vec_common.h

diff --git a/doc/guides/nics/cpfl.rst b/doc/guides/nics/cpfl.rst
index 7c5aff0789..f0018b41df 100644
--- a/doc/guides/nics/cpfl.rst
+++ b/doc/guides/nics/cpfl.rst
@@ -63,4 +63,26 @@ Runtime Config Options
 Driver compilation and testing
 ------------------------------
 
-Refer to the document :doc:`build_and_test` for details.
\ No newline at end of file
+Refer to the document :doc:`build_and_test` for details.
+
+Features
+--------
+
+Vector PMD
+~~~~~~~~~~
+
+Vector paths for Rx and Tx are selected automatically.
+The paths are chosen based on 2 conditions:
+
+- ``CPU``
+
+  On the x86 platform, the driver checks if the CPU supports AVX512.
+  If the CPU supports AVX512 and EAL argument ``--force-max-simd-bitwidth``
+  is set to 512, AVX512 paths will be chosen.
+
+- ``Offload features``
+
+  The supported HW offload features are described in the document cpfl.ini.
+  A value "P" means the offload feature is not supported by the vector path.
+  If any unsupported features are used, the cpfl vector PMD is disabled
+  and the scalar paths are chosen.
diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index fda945d34b..346af055cf 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -111,7 +111,8 @@ cpfl_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 		RTE_ETH_TX_OFFLOAD_TCP_CKSUM		|
 		RTE_ETH_TX_OFFLOAD_SCTP_CKSUM		|
 		RTE_ETH_TX_OFFLOAD_TCP_TSO		|
-		RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
+		RTE_ETH_TX_OFFLOAD_MULTI_SEGS		|
+		RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 	dev_info->default_txconf = (struct rte_eth_txconf) {
 		.tx_free_thresh = CPFL_DEFAULT_TX_FREE_THRESH,
diff --git a/drivers/net/cpfl/cpfl_rxtx.c b/drivers/net/cpfl/cpfl_rxtx.c
index c250642719..cb7bbddb16 100644
--- a/drivers/net/cpfl/cpfl_rxtx.c
+++ b/drivers/net/cpfl/cpfl_rxtx.c
@@ -8,6 +8,7 @@
 
 #include "cpfl_ethdev.h"
 #include "cpfl_rxtx.h"
+#include "cpfl_rxtx_vec_common.h"
 
 static uint64_t
 cpfl_rx_offload_convert(uint64_t offload)
@@ -735,11 +736,61 @@ cpfl_stop_queues(struct rte_eth_dev *dev)
 	}
 }
 
+
 void
 cpfl_set_rx_function(struct rte_eth_dev *dev)
 {
 	struct idpf_vport *vport = dev->data->dev_private;
+#ifdef RTE_ARCH_X86
+	struct idpf_rx_queue *rxq;
+	int i;
+
+	if (cpfl_rx_vec_dev_check_default(dev) == CPFL_VECTOR_PATH &&
+	    rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_128) {
+		vport->rx_vec_allowed = true;
 
+		if (rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_512)
+#ifdef CC_AVX512_SUPPORT
+			if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512F) == 1 &&
+			    rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512BW) == 1)
+				vport->rx_use_avx512 = true;
+#else
+		PMD_DRV_LOG(NOTICE,
+			    "AVX512 is not supported in build env");
+#endif /* CC_AVX512_SUPPORT */
+	} else {
+		vport->rx_vec_allowed = false;
+	}
+#endif /* RTE_ARCH_X86 */
+
+#ifdef RTE_ARCH_X86
+	if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
+		PMD_DRV_LOG(NOTICE,
+			    "Using Split Scalar Rx (port %d).",
+			    dev->data->port_id);
+		dev->rx_pkt_burst = idpf_dp_splitq_recv_pkts;
+	} else {
+		if (vport->rx_vec_allowed) {
+			for (i = 0; i < dev->data->nb_rx_queues; i++) {
+				rxq = dev->data->rx_queues[i];
+				(void)idpf_qc_singleq_rx_vec_setup(rxq);
+			}
+#ifdef CC_AVX512_SUPPORT
+			if (vport->rx_use_avx512) {
+				PMD_DRV_LOG(NOTICE,
+					    "Using Single AVX512 Vector Rx (port %d).",
+					    dev->data->port_id);
+				dev->rx_pkt_burst = idpf_dp_singleq_recv_pkts_avx512;
+				return;
+			}
+#endif /* CC_AVX512_SUPPORT */
+		}
+		PMD_DRV_LOG(NOTICE,
+			    "Using Single Scalar Rx (port %d).",
+			    dev->data->port_id);
+		dev->rx_pkt_burst = idpf_dp_singleq_recv_pkts;
+	}
+#else
 	if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
 		PMD_DRV_LOG(NOTICE,
 			    "Using Split Scalar Rx (port %d).",
@@ -751,12 +802,35 @@ cpfl_set_rx_function(struct rte_eth_dev *dev)
 			    dev->data->port_id);
 		dev->rx_pkt_burst = idpf_dp_singleq_recv_pkts;
 	}
+#endif /* RTE_ARCH_X86 */
 }
 
 void
 cpfl_set_tx_function(struct rte_eth_dev *dev)
 {
 	struct idpf_vport *vport = dev->data->dev_private;
+#ifdef RTE_ARCH_X86
+#ifdef CC_AVX512_SUPPORT
+	struct idpf_tx_queue *txq;
+	int i;
+#endif /* CC_AVX512_SUPPORT */
+
+	if (cpfl_tx_vec_dev_check_default(dev) == CPFL_VECTOR_PATH &&
+	    rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_128) {
+		vport->tx_vec_allowed = true;
+		if (rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_512)
+#ifdef CC_AVX512_SUPPORT
+			if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512F) == 1 &&
+			    rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512BW) == 1)
+				vport->tx_use_avx512 = true;
+#else
+		PMD_DRV_LOG(NOTICE,
+			    "AVX512 is not supported in build env");
+#endif /* CC_AVX512_SUPPORT */
+	} else {
+		vport->tx_vec_allowed = false;
+	}
+#endif /* RTE_ARCH_X86 */
 
 	if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
 		PMD_DRV_LOG(NOTICE,
@@ -765,6 +839,26 @@ cpfl_set_tx_function(struct rte_eth_dev *dev)
 		dev->tx_pkt_burst = idpf_dp_splitq_xmit_pkts;
 		dev->tx_pkt_prepare = idpf_dp_prep_pkts;
 	} else {
+#ifdef RTE_ARCH_X86
+		if (vport->tx_vec_allowed) {
+#ifdef CC_AVX512_SUPPORT
+			if (vport->tx_use_avx512) {
+				for (i = 0; i < dev->data->nb_tx_queues; i++) {
+					txq = dev->data->tx_queues[i];
+					if (txq == NULL)
+						continue;
+					idpf_qc_tx_vec_avx512_setup(txq);
+				}
+				PMD_DRV_LOG(NOTICE,
+					    "Using Single AVX512 Vector Tx (port %d).",
+					    dev->data->port_id);
+				dev->tx_pkt_burst = idpf_dp_singleq_xmit_pkts_avx512;
+				dev->tx_pkt_prepare = idpf_dp_prep_pkts;
+				return;
+			}
+#endif /* CC_AVX512_SUPPORT */
+		}
+#endif /* RTE_ARCH_X86 */
 		PMD_DRV_LOG(NOTICE,
 			    "Using Single Scalar Tx (port %d).",
 			    dev->data->port_id);
diff --git a/drivers/net/cpfl/cpfl_rxtx_vec_common.h b/drivers/net/cpfl/cpfl_rxtx_vec_common.h
new file mode 100644
index 0000000000..2d4c6a0ef3
--- /dev/null
+++ b/drivers/net/cpfl/cpfl_rxtx_vec_common.h
@@ -0,0 +1,100 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Intel Corporation
+ */
+
+#ifndef _CPFL_RXTX_VEC_COMMON_H_
+#define _CPFL_RXTX_VEC_COMMON_H_
+#include <stdint.h>
+#include <ethdev_driver.h>
+#include <rte_malloc.h>
+
+#include "cpfl_ethdev.h"
+#include "cpfl_rxtx.h"
+
+#ifndef __INTEL_COMPILER
+#pragma GCC diagnostic ignored "-Wcast-qual"
+#endif
+
+#define CPFL_SCALAR_PATH		0
+#define CPFL_VECTOR_PATH		1
+#define CPFL_RX_NO_VECTOR_FLAGS (		\
+		RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |	\
+		RTE_ETH_RX_OFFLOAD_UDP_CKSUM |	\
+		RTE_ETH_RX_OFFLOAD_TCP_CKSUM |	\
+		RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |	\
+		RTE_ETH_RX_OFFLOAD_TIMESTAMP)
+#define CPFL_TX_NO_VECTOR_FLAGS (		\
+		RTE_ETH_TX_OFFLOAD_TCP_TSO |	\
+		RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
+
+static inline int
+cpfl_rx_vec_queue_default(struct idpf_rx_queue *rxq)
+{
+	if (rxq == NULL)
+		return CPFL_SCALAR_PATH;
+
+	if (rte_is_power_of_2(rxq->nb_rx_desc) == 0)
+		return CPFL_SCALAR_PATH;
+
+	if (rxq->rx_free_thresh < IDPF_VPMD_RX_MAX_BURST)
+		return CPFL_SCALAR_PATH;
+
+	if ((rxq->nb_rx_desc % rxq->rx_free_thresh) != 0)
+		return CPFL_SCALAR_PATH;
+
+	if ((rxq->offloads & CPFL_RX_NO_VECTOR_FLAGS) != 0)
+		return CPFL_SCALAR_PATH;
+
+	return CPFL_VECTOR_PATH;
+}
+
+static inline int
+cpfl_tx_vec_queue_default(struct idpf_tx_queue *txq)
+{
+	if (txq == NULL)
+		return CPFL_SCALAR_PATH;
+
+	if (txq->rs_thresh < IDPF_VPMD_TX_MAX_BURST ||
+	    (txq->rs_thresh & 3) != 0)
+		return CPFL_SCALAR_PATH;
+
+	if ((txq->offloads & CPFL_TX_NO_VECTOR_FLAGS) != 0)
+		return CPFL_SCALAR_PATH;
+
+	return CPFL_VECTOR_PATH;
+}
+
+static inline int
+cpfl_rx_vec_dev_check_default(struct rte_eth_dev *dev)
+{
+	struct idpf_rx_queue *rxq;
+	int i, ret = 0;
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		rxq = dev->data->rx_queues[i];
+		ret = (cpfl_rx_vec_queue_default(rxq));
+		if (ret == CPFL_SCALAR_PATH)
+			return CPFL_SCALAR_PATH;
+	}
+
+	return CPFL_VECTOR_PATH;
+}
+
+static inline int
+cpfl_tx_vec_dev_check_default(struct rte_eth_dev *dev)
+{
+	int i;
+	struct idpf_tx_queue *txq;
+	int ret = 0;
+
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		txq = dev->data->tx_queues[i];
+		ret = cpfl_tx_vec_queue_default(txq);
+		if (ret == CPFL_SCALAR_PATH)
+			return CPFL_SCALAR_PATH;
+	}
+
+	return CPFL_VECTOR_PATH;
+}
+
+#endif /*_CPFL_RXTX_VEC_COMMON_H_*/
diff --git a/drivers/net/cpfl/meson.build b/drivers/net/cpfl/meson.build
index 1894423689..fbe6500826 100644
--- a/drivers/net/cpfl/meson.build
+++ b/drivers/net/cpfl/meson.build
@@ -7,9 +7,32 @@ if is_windows
     subdir_done()
 endif
 
+if dpdk_conf.get('RTE_IOVA_AS_PA') == 0
+    build = false
+    reason = 'driver does not support disabling IOVA as PA mode'
+    subdir_done()
+endif
+
 deps += ['common_idpf']
 
 sources = files(
         'cpfl_ethdev.c',
         'cpfl_rxtx.c',
-)
\ No newline at end of file
+)
+
+if arch_subdir == 'x86'
+    cpfl_avx512_cpu_support = (
+        cc.get_define('__AVX512F__', args: machine_args) != '' and
+        cc.get_define('__AVX512BW__', args: machine_args) != ''
+    )
+
+    cpfl_avx512_cc_support = (
+        not machine_args.contains('-mno-avx512f') and
+        cc.has_argument('-mavx512f') and
+        cc.has_argument('-mavx512bw')
+    )
+
+    if cpfl_avx512_cpu_support == true or cpfl_avx512_cc_support == true
+        cflags += ['-DCC_AVX512_SUPPORT']
+    endif
+endif
-- 
2.25.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v6 16/21] net/cpfl: support timestamp offload
  2023-02-13  2:19       ` [PATCH v6 " Mingxia Liu
                           ` (14 preceding siblings ...)
  2023-02-13  2:19         ` [PATCH v6 15/21] net/cpfl: add AVX512 data path for single queue model Mingxia Liu
@ 2023-02-13  2:19         ` Mingxia Liu
  2023-02-13  2:19         ` [PATCH v6 17/21] net/cpfl: add AVX512 data path for split queue model Mingxia Liu
                           ` (6 subsequent siblings)
  22 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-02-13  2:19 UTC (permalink / raw)
  To: dev, beilei.xing, yuying.zhang; +Cc: Mingxia Liu

Add support for timestamp offload.
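
The timestamp is delivered through the dynamic mbuf field registered by
idpf_qc_ts_mbuf_register(); a read-side sketch (m is an assumed received
mbuf, and the dynfield/dynflag lookups are assumed to succeed):

  #include <rte_bitops.h>
  #include <rte_mbuf_dyn.h>

  static int ts_off = -1;
  static uint64_t ts_flag;

  if (ts_off < 0) {
      ts_off = rte_mbuf_dynfield_lookup(RTE_MBUF_DYNFIELD_TIMESTAMP_NAME,
                                        NULL);
      ts_flag = RTE_BIT64(rte_mbuf_dynflag_lookup(
                          RTE_MBUF_DYNFLAG_RX_TIMESTAMP_NAME, NULL));
  }

  if ((m->ol_flags & ts_flag) != 0) {
      rte_mbuf_timestamp_t ts =
          *RTE_MBUF_DYNFIELD(m, ts_off, rte_mbuf_timestamp_t *);
      /* ts now holds the raw Rx timestamp from the NIC */
  }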

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 doc/guides/nics/features/cpfl.ini | 1 +
 drivers/net/cpfl/cpfl_ethdev.c    | 3 ++-
 drivers/net/cpfl/cpfl_rxtx.c      | 7 +++++++
 3 files changed, 10 insertions(+), 1 deletion(-)

diff --git a/doc/guides/nics/features/cpfl.ini b/doc/guides/nics/features/cpfl.ini
index f4e45c7c68..c1209df3e5 100644
--- a/doc/guides/nics/features/cpfl.ini
+++ b/doc/guides/nics/features/cpfl.ini
@@ -11,6 +11,7 @@ MTU update           = Y
 TSO                  = P
 L3 checksum offload  = P
 L4 checksum offload  = P
+Timestamp offload    = P
 Linux                = Y
 x86-32               = Y
 x86-64               = Y
diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index 346af055cf..1e40f3e55c 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -103,7 +103,8 @@ cpfl_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 		RTE_ETH_RX_OFFLOAD_IPV4_CKSUM           |
 		RTE_ETH_RX_OFFLOAD_UDP_CKSUM            |
 		RTE_ETH_RX_OFFLOAD_TCP_CKSUM            |
-		RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM;
+		RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM     |
+		RTE_ETH_RX_OFFLOAD_TIMESTAMP;
 
 	dev_info->tx_offload_capa =
 		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM		|
diff --git a/drivers/net/cpfl/cpfl_rxtx.c b/drivers/net/cpfl/cpfl_rxtx.c
index cb7bbddb16..7b12c80f1c 100644
--- a/drivers/net/cpfl/cpfl_rxtx.c
+++ b/drivers/net/cpfl/cpfl_rxtx.c
@@ -516,6 +516,13 @@ cpfl_rx_queue_init(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 		return -EINVAL;
 	}
 
+	err = idpf_qc_ts_mbuf_register(rxq);
+	if (err != 0) {
+		PMD_DRV_LOG(ERR, "Failed to register timestamp mbuf %u",
+					rx_queue_id);
+		return -EIO;
+	}
+
 	if (rxq->bufq1 == NULL) {
 		/* Single queue */
 		err = idpf_qc_single_rxq_mbufs_alloc(rxq);
-- 
2.25.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v6 17/21] net/cpfl: add AVX512 data path for split queue model
  2023-02-13  2:19       ` [PATCH v6 " Mingxia Liu
                           ` (15 preceding siblings ...)
  2023-02-13  2:19         ` [PATCH v6 16/21] net/cpfl: support timestamp offload Mingxia Liu
@ 2023-02-13  2:19         ` Mingxia Liu
  2023-02-13  2:19         ` [PATCH v6 18/21] net/cpfl: add HW statistics Mingxia Liu
                           ` (5 subsequent siblings)
  22 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-02-13  2:19 UTC (permalink / raw)
  To: dev, beilei.xing, yuying.zhang; +Cc: Mingxia Liu, Wenjun Wu

Add support for the AVX512 data path for the split queue model.

Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/net/cpfl/cpfl_rxtx.c            | 56 +++++++++++++++++++++++--
 drivers/net/cpfl/cpfl_rxtx_vec_common.h | 20 ++++++++-
 drivers/net/cpfl/meson.build            |  6 ++-
 3 files changed, 75 insertions(+), 7 deletions(-)

diff --git a/drivers/net/cpfl/cpfl_rxtx.c b/drivers/net/cpfl/cpfl_rxtx.c
index 7b12c80f1c..0d5bfb901d 100644
--- a/drivers/net/cpfl/cpfl_rxtx.c
+++ b/drivers/net/cpfl/cpfl_rxtx.c
@@ -759,7 +759,8 @@ cpfl_set_rx_function(struct rte_eth_dev *dev)
 		if (rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_512)
 #ifdef CC_AVX512_SUPPORT
 			if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512F) == 1 &&
-			    rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512BW) == 1)
+			    rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512BW) == 1 &&
+			    rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512DQ))
 				vport->rx_use_avx512 = true;
 #else
 		PMD_DRV_LOG(NOTICE,
@@ -772,6 +773,21 @@ cpfl_set_rx_function(struct rte_eth_dev *dev)
 
 #ifdef RTE_ARCH_X86
 	if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
+		if (vport->rx_vec_allowed) {
+			for (i = 0; i < dev->data->nb_rx_queues; i++) {
+				rxq = dev->data->rx_queues[i];
+				(void)idpf_qc_splitq_rx_vec_setup(rxq);
+			}
+#ifdef CC_AVX512_SUPPORT
+			if (vport->rx_use_avx512) {
+				PMD_DRV_LOG(NOTICE,
+					    "Using Split AVX512 Vector Rx (port %d).",
+					    dev->data->port_id);
+				dev->rx_pkt_burst = idpf_dp_splitq_recv_pkts_avx512;
+				return;
+			}
+#endif /* CC_AVX512_SUPPORT */
+		}
 		PMD_DRV_LOG(NOTICE,
 			    "Using Split Scalar Rx (port %d).",
 			    dev->data->port_id);
@@ -827,9 +843,17 @@ cpfl_set_tx_function(struct rte_eth_dev *dev)
 		vport->tx_vec_allowed = true;
 		if (rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_512)
 #ifdef CC_AVX512_SUPPORT
+		{
 			if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512F) == 1 &&
 			    rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512BW) == 1)
 				vport->tx_use_avx512 = true;
+			if (vport->tx_use_avx512) {
+				for (i = 0; i < dev->data->nb_tx_queues; i++) {
+					txq = dev->data->tx_queues[i];
+					idpf_qc_tx_vec_avx512_setup(txq);
+				}
+			}
+		}
 #else
 		PMD_DRV_LOG(NOTICE,
 			    "AVX512 is not supported in build env");
@@ -839,14 +863,26 @@ cpfl_set_tx_function(struct rte_eth_dev *dev)
 	}
 #endif /* RTE_ARCH_X86 */
 
+#ifdef RTE_ARCH_X86
 	if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
+		if (vport->tx_vec_allowed) {
+#ifdef CC_AVX512_SUPPORT
+			if (vport->tx_use_avx512) {
+				PMD_DRV_LOG(NOTICE,
+					    "Using Split AVX512 Vector Tx (port %d).",
+					    dev->data->port_id);
+				dev->tx_pkt_burst = idpf_dp_splitq_xmit_pkts_avx512;
+				dev->tx_pkt_prepare = idpf_dp_prep_pkts;
+				return;
+			}
+#endif /* CC_AVX512_SUPPORT */
+		}
 		PMD_DRV_LOG(NOTICE,
 			    "Using Split Scalar Tx (port %d).",
 			    dev->data->port_id);
 		dev->tx_pkt_burst = idpf_dp_splitq_xmit_pkts;
 		dev->tx_pkt_prepare = idpf_dp_prep_pkts;
 	} else {
-#ifdef RTE_ARCH_X86
 		if (vport->tx_vec_allowed) {
 #ifdef CC_AVX512_SUPPORT
 			if (vport->tx_use_avx512) {
@@ -865,11 +901,25 @@ cpfl_set_tx_function(struct rte_eth_dev *dev)
 			}
 #endif /* CC_AVX512_SUPPORT */
 		}
-#endif /* RTE_ARCH_X86 */
 		PMD_DRV_LOG(NOTICE,
 			    "Using Single Scalar Tx (port %d).",
 			    dev->data->port_id);
 		dev->tx_pkt_burst = idpf_dp_singleq_xmit_pkts;
 		dev->tx_pkt_prepare = idpf_dp_prep_pkts;
 	}
+#else
+	if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
+		PMD_DRV_LOG(NOTICE,
+			    "Using Split Scalar Tx (port %d).",
+			    dev->data->port_id);
+		dev->tx_pkt_burst = idpf_dp_splitq_xmit_pkts;
+		dev->tx_pkt_prepare = idpf_dp_prep_pkts;
+	} else {
+		PMD_DRV_LOG(NOTICE,
+			    "Using Single Scalar Tx (port %d).",
+			    dev->data->port_id);
+		dev->tx_pkt_burst = idpf_dp_singleq_xmit_pkts;
+		dev->tx_pkt_prepare = idpf_dp_prep_pkts;
+	}
+#endif /* RTE_ARCH_X86 */
 }
diff --git a/drivers/net/cpfl/cpfl_rxtx_vec_common.h b/drivers/net/cpfl/cpfl_rxtx_vec_common.h
index 2d4c6a0ef3..665418d27d 100644
--- a/drivers/net/cpfl/cpfl_rxtx_vec_common.h
+++ b/drivers/net/cpfl/cpfl_rxtx_vec_common.h
@@ -64,15 +64,31 @@ cpfl_tx_vec_queue_default(struct idpf_tx_queue *txq)
 	return CPFL_VECTOR_PATH;
 }
 
+static inline int
+cpfl_rx_splitq_vec_default(struct idpf_rx_queue *rxq)
+{
+	if (rxq->bufq2->rx_buf_len < rxq->max_pkt_len)
+		return CPFL_SCALAR_PATH;
+
+	return CPFL_VECTOR_PATH;
+}
+
 static inline int
 cpfl_rx_vec_dev_check_default(struct rte_eth_dev *dev)
 {
+	struct idpf_vport *vport = dev->data->dev_private;
 	struct idpf_rx_queue *rxq;
-	int i, ret = 0;
+	int i, default_ret, splitq_ret, ret = CPFL_SCALAR_PATH;
 
 	for (i = 0; i < dev->data->nb_rx_queues; i++) {
 		rxq = dev->data->rx_queues[i];
-		ret = (cpfl_rx_vec_queue_default(rxq));
+		default_ret = cpfl_rx_vec_queue_default(rxq);
+		if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
+			splitq_ret = cpfl_rx_splitq_vec_default(rxq);
+			ret = splitq_ret && default_ret;
+		} else {
+			ret = default_ret;
+		}
 		if (ret == CPFL_SCALAR_PATH)
 			return CPFL_SCALAR_PATH;
 	}
diff --git a/drivers/net/cpfl/meson.build b/drivers/net/cpfl/meson.build
index fbe6500826..2cf69258e2 100644
--- a/drivers/net/cpfl/meson.build
+++ b/drivers/net/cpfl/meson.build
@@ -23,13 +23,15 @@ sources = files(
 if arch_subdir == 'x86'
     cpfl_avx512_cpu_support = (
         cc.get_define('__AVX512F__', args: machine_args) != '' and
-        cc.get_define('__AVX512BW__', args: machine_args) != ''
+        cc.get_define('__AVX512BW__', args: machine_args) != '' and
+        cc.get_define('__AVX512DQ__', args: machine_args) != ''
     )
 
     cpfl_avx512_cc_support = (
         not machine_args.contains('-mno-avx512f') and
         cc.has_argument('-mavx512f') and
-        cc.has_argument('-mavx512bw')
+        cc.has_argument('-mavx512bw') and
+        cc.has_argument('-mavx512dq')
     )
 
     if cpfl_avx512_cpu_support == true or cpfl_avx512_cc_support == true
-- 
2.25.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v6 18/21] net/cpfl: add HW statistics
  2023-02-13  2:19       ` [PATCH v6 " Mingxia Liu
                           ` (16 preceding siblings ...)
  2023-02-13  2:19         ` [PATCH v6 17/21] net/cpfl: add AVX512 data path for split queue model Mingxia Liu
@ 2023-02-13  2:19         ` Mingxia Liu
  2023-02-13  2:19         ` [PATCH v6 19/21] net/cpfl: add RSS set/get ops Mingxia Liu
                           ` (4 subsequent siblings)
  22 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-02-13  2:19 UTC (permalink / raw)
  To: dev, beilei.xing, yuying.zhang; +Cc: Mingxia Liu

This patch adds hardware packet/byte statistics.
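
The counters are exposed through the standard ethdev stats API; a usage
sketch (port_id is an assumption):

  #include <inttypes.h>
  #include <stdio.h>
  #include <rte_ethdev.h>

  struct rte_eth_stats st;

  if (rte_eth_stats_get(port_id, &st) == 0)
      printf("rx %" PRIu64 " pkts, missed %" PRIu64 ", no-mbuf %" PRIu64 "\n",
             st.ipackets, st.imissed, st.rx_nombuf);
  rte_eth_stats_reset(port_id); /* re-bases vport->eth_stats_offset */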

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/net/cpfl/cpfl_ethdev.c | 86 ++++++++++++++++++++++++++++++++++
 1 file changed, 86 insertions(+)

diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index 1e40f3e55c..0fb9f0455b 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -178,6 +178,87 @@ cpfl_dev_supported_ptypes_get(struct rte_eth_dev *dev __rte_unused)
 	return ptypes;
 }
 
+static uint64_t
+cpfl_get_mbuf_alloc_failed_stats(struct rte_eth_dev *dev)
+{
+	uint64_t mbuf_alloc_failed = 0;
+	struct idpf_rx_queue *rxq;
+	int i = 0;
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		rxq = dev->data->rx_queues[i];
+		mbuf_alloc_failed += __atomic_load_n(&rxq->rx_stats.mbuf_alloc_failed,
+						     __ATOMIC_RELAXED);
+	}
+
+	return mbuf_alloc_failed;
+}
+
+static int
+cpfl_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
+{
+	struct idpf_vport *vport =
+		(struct idpf_vport *)dev->data->dev_private;
+	struct virtchnl2_vport_stats *pstats = NULL;
+	int ret;
+
+	ret = idpf_vc_stats_query(vport, &pstats);
+	if (ret == 0) {
+		uint8_t crc_stats_len = (dev->data->dev_conf.rxmode.offloads &
+					 RTE_ETH_RX_OFFLOAD_KEEP_CRC) ? 0 :
+					 RTE_ETHER_CRC_LEN;
+
+		idpf_vport_stats_update(&vport->eth_stats_offset, pstats);
+		stats->ipackets = pstats->rx_unicast + pstats->rx_multicast +
+				pstats->rx_broadcast - pstats->rx_discards;
+		stats->opackets = pstats->tx_broadcast + pstats->tx_multicast +
+						pstats->tx_unicast;
+		stats->imissed = pstats->rx_discards;
+		stats->oerrors = pstats->tx_errors + pstats->tx_discards;
+		stats->ibytes = pstats->rx_bytes;
+		stats->ibytes -= stats->ipackets * crc_stats_len;
+		stats->obytes = pstats->tx_bytes;
+
+		dev->data->rx_mbuf_alloc_failed = cpfl_get_mbuf_alloc_failed_stats(dev);
+		stats->rx_nombuf = dev->data->rx_mbuf_alloc_failed;
+	} else {
+		PMD_DRV_LOG(ERR, "Get statistics failed");
+	}
+	return ret;
+}
+
+static void
+cpfl_reset_mbuf_alloc_failed_stats(struct rte_eth_dev *dev)
+{
+	struct idpf_rx_queue *rxq;
+	int i;
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		rxq = dev->data->rx_queues[i];
+		__atomic_store_n(&rxq->rx_stats.mbuf_alloc_failed, 0, __ATOMIC_RELAXED);
+	}
+}
+
+static int
+cpfl_dev_stats_reset(struct rte_eth_dev *dev)
+{
+	struct idpf_vport *vport =
+		(struct idpf_vport *)dev->data->dev_private;
+	struct virtchnl2_vport_stats *pstats = NULL;
+	int ret;
+
+	ret = idpf_vc_stats_query(vport, &pstats);
+	if (ret != 0)
+		return ret;
+
+	/* set stats offset base on current values */
+	vport->eth_stats_offset = *pstats;
+
+	cpfl_reset_mbuf_alloc_failed_stats(dev);
+
+	return 0;
+}
+
 static int
 cpfl_init_rss(struct idpf_vport *vport)
 {
@@ -365,6 +446,9 @@ cpfl_dev_start(struct rte_eth_dev *dev)
 		goto err_vport;
 	}
 
+	if (cpfl_dev_stats_reset(dev))
+		PMD_DRV_LOG(ERR, "Failed to reset stats");
+
 	vport->stopped = 0;
 
 	return 0;
@@ -766,6 +850,8 @@ static const struct eth_dev_ops cpfl_eth_dev_ops = {
 	.tx_queue_release		= cpfl_dev_tx_queue_release,
 	.mtu_set			= cpfl_dev_mtu_set,
 	.dev_supported_ptypes_get	= cpfl_dev_supported_ptypes_get,
+	.stats_get			= cpfl_dev_stats_get,
+	.stats_reset			= cpfl_dev_stats_reset,
 };
 
 static uint16_t
-- 
2.25.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v6 19/21] net/cpfl: add RSS set/get ops
  2023-02-13  2:19       ` [PATCH v6 " Mingxia Liu
                           ` (17 preceding siblings ...)
  2023-02-13  2:19         ` [PATCH v6 18/21] net/cpfl: add HW statistics Mingxia Liu
@ 2023-02-13  2:19         ` Mingxia Liu
  2023-02-13  2:19         ` [PATCH v6 20/21] net/cpfl: support scalar scatter Rx datapath for single queue model Mingxia Liu
                           ` (3 subsequent siblings)
  22 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-02-13  2:19 UTC (permalink / raw)
  To: dev, beilei.xing, yuying.zhang; +Cc: Mingxia Liu

Add support for these device ops:
- rss_reta_update
- rss_reta_query
- rss_hash_update
- rss_hash_conf_get
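
For example, an application can spread the redirection table round-robin
over its Rx queues. A sketch follows (port_id and nb_rxq are assumptions;
the 128-entry LUT size is illustrative, the real size is reported in
dev_info.reta_size):

  #include <stdio.h>
  #include <string.h>
  #include <rte_ethdev.h>

  #define LUT_SIZE 128 /* illustration only; use dev_info.reta_size */

  struct rte_eth_rss_reta_entry64 reta[LUT_SIZE / RTE_ETH_RETA_GROUP_SIZE];
  uint16_t i;

  memset(reta, 0, sizeof(reta));
  for (i = 0; i < LUT_SIZE; i++) {
      reta[i / RTE_ETH_RETA_GROUP_SIZE].mask |=
          1ULL << (i % RTE_ETH_RETA_GROUP_SIZE);
      reta[i / RTE_ETH_RETA_GROUP_SIZE].reta[i % RTE_ETH_RETA_GROUP_SIZE] =
          i % nb_rxq;
  }
  if (rte_eth_dev_rss_reta_update(port_id, reta, LUT_SIZE) != 0)
      printf("RETA update failed\n");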

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/net/cpfl/cpfl_ethdev.c | 270 ++++++++++++++++++++++++++++++++-
 1 file changed, 269 insertions(+), 1 deletion(-)

diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index 0fb9f0455b..d2387b9a39 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -30,6 +30,56 @@ static const char * const cpfl_valid_args[] = {
 	NULL
 };
 
+static const uint64_t cpfl_map_hena_rss[] = {
+	[IDPF_HASH_NONF_UNICAST_IPV4_UDP] =
+			RTE_ETH_RSS_NONFRAG_IPV4_UDP,
+	[IDPF_HASH_NONF_MULTICAST_IPV4_UDP] =
+			RTE_ETH_RSS_NONFRAG_IPV4_UDP,
+	[IDPF_HASH_NONF_IPV4_UDP] =
+			RTE_ETH_RSS_NONFRAG_IPV4_UDP,
+	[IDPF_HASH_NONF_IPV4_TCP_SYN_NO_ACK] =
+			RTE_ETH_RSS_NONFRAG_IPV4_TCP,
+	[IDPF_HASH_NONF_IPV4_TCP] =
+			RTE_ETH_RSS_NONFRAG_IPV4_TCP,
+	[IDPF_HASH_NONF_IPV4_SCTP] =
+			RTE_ETH_RSS_NONFRAG_IPV4_SCTP,
+	[IDPF_HASH_NONF_IPV4_OTHER] =
+			RTE_ETH_RSS_NONFRAG_IPV4_OTHER,
+	[IDPF_HASH_FRAG_IPV4] = RTE_ETH_RSS_FRAG_IPV4,
+
+	/* IPv6 */
+	[IDPF_HASH_NONF_UNICAST_IPV6_UDP] =
+			RTE_ETH_RSS_NONFRAG_IPV6_UDP,
+	[IDPF_HASH_NONF_MULTICAST_IPV6_UDP] =
+			RTE_ETH_RSS_NONFRAG_IPV6_UDP,
+	[IDPF_HASH_NONF_IPV6_UDP] =
+			RTE_ETH_RSS_NONFRAG_IPV6_UDP,
+	[IDPF_HASH_NONF_IPV6_TCP_SYN_NO_ACK] =
+			RTE_ETH_RSS_NONFRAG_IPV6_TCP,
+	[IDPF_HASH_NONF_IPV6_TCP] =
+			RTE_ETH_RSS_NONFRAG_IPV6_TCP,
+	[IDPF_HASH_NONF_IPV6_SCTP] =
+			RTE_ETH_RSS_NONFRAG_IPV6_SCTP,
+	[IDPF_HASH_NONF_IPV6_OTHER] =
+			RTE_ETH_RSS_NONFRAG_IPV6_OTHER,
+	[IDPF_HASH_FRAG_IPV6] = RTE_ETH_RSS_FRAG_IPV6,
+
+	/* L2 Payload */
+	[IDPF_HASH_L2_PAYLOAD] = RTE_ETH_RSS_L2_PAYLOAD
+};
+
+static const uint64_t cpfl_ipv4_rss = RTE_ETH_RSS_NONFRAG_IPV4_UDP |
+			  RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+			  RTE_ETH_RSS_NONFRAG_IPV4_SCTP |
+			  RTE_ETH_RSS_NONFRAG_IPV4_OTHER |
+			  RTE_ETH_RSS_FRAG_IPV4;
+
+static const uint64_t cpfl_ipv6_rss = RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+			  RTE_ETH_RSS_NONFRAG_IPV6_TCP |
+			  RTE_ETH_RSS_NONFRAG_IPV6_SCTP |
+			  RTE_ETH_RSS_NONFRAG_IPV6_OTHER |
+			  RTE_ETH_RSS_FRAG_IPV6;
+
 static int
 cpfl_dev_link_update(struct rte_eth_dev *dev,
 		     __rte_unused int wait_to_complete)
@@ -97,6 +147,9 @@ cpfl_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->max_mtu = vport->max_mtu;
 	dev_info->min_mtu = RTE_ETHER_MIN_MTU;
 
+	dev_info->hash_key_size = vport->rss_key_size;
+	dev_info->reta_size = vport->rss_lut_size;
+
 	dev_info->flow_type_rss_offloads = CPFL_RSS_OFFLOAD_ALL;
 
 	dev_info->rx_offload_capa =
@@ -259,6 +312,36 @@ cpfl_dev_stats_reset(struct rte_eth_dev *dev)
 	return 0;
 }
 
+static int cpfl_config_rss_hf(struct idpf_vport *vport, uint64_t rss_hf)
+{
+	uint64_t hena = 0;
+	uint16_t i;
+
+	/**
+	 * RTE_ETH_RSS_IPV4 and RTE_ETH_RSS_IPV6 can be considered as 2
+	 * generalizations of all other IPv4 and IPv6 RSS types.
+	 */
+	if (rss_hf & RTE_ETH_RSS_IPV4)
+		rss_hf |= cpfl_ipv4_rss;
+
+	if (rss_hf & RTE_ETH_RSS_IPV6)
+		rss_hf |= cpfl_ipv6_rss;
+
+	for (i = 0; i < RTE_DIM(cpfl_map_hena_rss); i++) {
+		if (cpfl_map_hena_rss[i] & rss_hf)
+			hena |= BIT_ULL(i);
+	}
+
+	/**
+	 * At present, cp doesn't process the virtual channel msg of rss_hf configuration,
+	 * tips are given below.
+	 */
+	if (hena != vport->rss_hf)
+		PMD_DRV_LOG(WARNING, "Updating RSS Hash Function is not supported at present.");
+
+	return 0;
+}
+
 static int
 cpfl_init_rss(struct idpf_vport *vport)
 {
@@ -279,7 +362,7 @@ cpfl_init_rss(struct idpf_vport *vport)
 			     vport->rss_key_size);
 		return -EINVAL;
 	} else {
-		rte_memcpy(vport->rss_key, rss_conf->rss_key,
+		memcpy(vport->rss_key, rss_conf->rss_key,
 			   vport->rss_key_size);
 	}
 
@@ -295,6 +378,187 @@ cpfl_init_rss(struct idpf_vport *vport)
 	return ret;
 }
 
+static int
+cpfl_rss_reta_update(struct rte_eth_dev *dev,
+		     struct rte_eth_rss_reta_entry64 *reta_conf,
+		     uint16_t reta_size)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_adapter *adapter = vport->adapter;
+	uint16_t idx, shift;
+	int ret = 0;
+	uint16_t i;
+
+	if (adapter->caps.rss_caps == 0 || dev->data->nb_rx_queues == 0) {
+		PMD_DRV_LOG(DEBUG, "RSS is not supported");
+		return -ENOTSUP;
+	}
+
+	if (reta_size != vport->rss_lut_size) {
+		PMD_DRV_LOG(ERR, "The size of hash lookup table configured "
+				 "(%d) doesn't match the number the hardware can "
+				 "support (%d)",
+			    reta_size, vport->rss_lut_size);
+		return -EINVAL;
+	}
+
+	for (i = 0; i < reta_size; i++) {
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
+		if (reta_conf[idx].mask & (1ULL << shift))
+			vport->rss_lut[i] = reta_conf[idx].reta[shift];
+	}
+
+	/* send virtchnl ops to configure RSS */
+	ret = idpf_vc_rss_lut_set(vport);
+	if (ret)
+		PMD_INIT_LOG(ERR, "Failed to configure RSS lut");
+
+	return ret;
+}
+
+static int
+cpfl_rss_reta_query(struct rte_eth_dev *dev,
+		    struct rte_eth_rss_reta_entry64 *reta_conf,
+		    uint16_t reta_size)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_adapter *adapter = vport->adapter;
+	uint16_t idx, shift;
+	int ret = 0;
+	uint16_t i;
+
+	if (adapter->caps.rss_caps == 0 || dev->data->nb_rx_queues == 0) {
+		PMD_DRV_LOG(DEBUG, "RSS is not supported");
+		return -ENOTSUP;
+	}
+
+	if (reta_size != vport->rss_lut_size) {
+		PMD_DRV_LOG(ERR, "The size of hash lookup table configured "
+			"(%d) doesn't match the number the hardware can "
+			"support (%d)", reta_size, vport->rss_lut_size);
+		return -EINVAL;
+	}
+
+	ret = idpf_vc_rss_lut_get(vport);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "Failed to get RSS LUT");
+		return ret;
+	}
+
+	for (i = 0; i < reta_size; i++) {
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
+		if (reta_conf[idx].mask & (1ULL << shift))
+			reta_conf[idx].reta[shift] = vport->rss_lut[i];
+	}
+
+	return 0;
+}
+
+static int
+cpfl_rss_hash_update(struct rte_eth_dev *dev,
+		     struct rte_eth_rss_conf *rss_conf)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_adapter *adapter = vport->adapter;
+	int ret = 0;
+
+	if (adapter->caps.rss_caps == 0 || dev->data->nb_rx_queues == 0) {
+		PMD_DRV_LOG(DEBUG, "RSS is not supported");
+		return -ENOTSUP;
+	}
+
+	if (!rss_conf->rss_key || rss_conf->rss_key_len == 0) {
+		PMD_DRV_LOG(DEBUG, "No key to be configured");
+		goto skip_rss_key;
+	} else if (rss_conf->rss_key_len != vport->rss_key_size) {
+		PMD_DRV_LOG(ERR, "The size of hash key configured "
+				 "(%d) doesn't match the size the hardware can "
+				 "support (%d)",
+			    rss_conf->rss_key_len,
+			    vport->rss_key_size);
+		return -EINVAL;
+	}
+
+	memcpy(vport->rss_key, rss_conf->rss_key,
+		   vport->rss_key_size);
+	ret = idpf_vc_rss_key_set(vport);
+	if (ret != 0) {
+		PMD_INIT_LOG(ERR, "Failed to configure RSS key");
+		return ret;
+	}
+
+skip_rss_key:
+	ret = cpfl_config_rss_hf(vport, rss_conf->rss_hf);
+	if (ret != 0) {
+		PMD_INIT_LOG(ERR, "Failed to configure RSS hash");
+		return ret;
+	}
+
+	return 0;
+}
+
+static uint64_t
+cpfl_map_general_rss_hf(uint64_t config_rss_hf, uint64_t last_general_rss_hf)
+{
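+	/* Translate device hena bits (indices into cpfl_map_hena_rss[])
+	 * back into the generic RTE_ETH_RSS_* flag set.
+	 */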
+	uint64_t valid_rss_hf = 0;
+	uint16_t i;
+
+	for (i = 0; i < RTE_DIM(cpfl_map_hena_rss); i++) {
+		uint64_t bit = BIT_ULL(i);
+
+		if (bit & config_rss_hf)
+			valid_rss_hf |= cpfl_map_hena_rss[i];
+	}
+
+	if (valid_rss_hf & cpfl_ipv4_rss)
+		valid_rss_hf |= last_general_rss_hf & RTE_ETH_RSS_IPV4;
+
+	if (valid_rss_hf & cpfl_ipv6_rss)
+		valid_rss_hf |= last_general_rss_hf & RTE_ETH_RSS_IPV6;
+
+	return valid_rss_hf;
+}
+
+static int
+cpfl_rss_hash_conf_get(struct rte_eth_dev *dev,
+		       struct rte_eth_rss_conf *rss_conf)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_adapter *adapter = vport->adapter;
+	int ret = 0;
+
+	if (adapter->caps.rss_caps == 0 || dev->data->nb_rx_queues == 0) {
+		PMD_DRV_LOG(DEBUG, "RSS is not supported");
+		return -ENOTSUP;
+	}
+
+	ret = idpf_vc_rss_hash_get(vport);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "Failed to get RSS hf");
+		return ret;
+	}
+
+	rss_conf->rss_hf = cpfl_map_general_rss_hf(vport->rss_hf, vport->last_general_rss_hf);
+
+	if (!rss_conf->rss_key)
+		return 0;
+
+	ret = idpf_vc_rss_key_get(vport);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "Failed to get RSS key");
+		return ret;
+	}
+
+	if (rss_conf->rss_key_len > vport->rss_key_size)
+		rss_conf->rss_key_len = vport->rss_key_size;
+
+	memcpy(rss_conf->rss_key, vport->rss_key, rss_conf->rss_key_len);
+
+	return 0;
+}
+
 static int
 cpfl_dev_configure(struct rte_eth_dev *dev)
 {
@@ -852,6 +1116,10 @@ static const struct eth_dev_ops cpfl_eth_dev_ops = {
 	.dev_supported_ptypes_get	= cpfl_dev_supported_ptypes_get,
 	.stats_get			= cpfl_dev_stats_get,
 	.stats_reset			= cpfl_dev_stats_reset,
+	.reta_update			= cpfl_rss_reta_update,
+	.reta_query			= cpfl_rss_reta_query,
+	.rss_hash_update		= cpfl_rss_hash_update,
+	.rss_hash_conf_get		= cpfl_rss_hash_conf_get,
 };
 
 static uint16_t
-- 
2.25.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v6 20/21] net/cpfl: support scalar scatter Rx datapath for single queue model
  2023-02-13  2:19       ` [PATCH v6 " Mingxia Liu
                           ` (18 preceding siblings ...)
  2023-02-13  2:19         ` [PATCH v6 19/21] net/cpfl: add RSS set/get ops Mingxia Liu
@ 2023-02-13  2:19         ` Mingxia Liu
  2023-02-13  2:19         ` [PATCH v6 21/21] net/cpfl: add xstats ops Mingxia Liu
                           ` (2 subsequent siblings)
  22 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-02-13  2:19 UTC (permalink / raw)
  To: dev, beilei.xing, yuying.zhang; +Cc: Mingxia Liu, Wenjun Wu

This patch adds the scatter Rx receive function for the single queue model.
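
As background, a minimal application-side sketch of enabling this path
(the port id and queue counts below are illustrative assumptions, not
part of this patch); with the offload set, cpfl_set_rx_function() picks
the scatter-capable burst function:

	struct rte_eth_conf conf = {
		.rxmode = { .offloads = RTE_ETH_RX_OFFLOAD_SCATTER },
	};

	/* 1 Rx queue and 1 Tx queue; illustrative values */
	if (rte_eth_dev_configure(port_id, 1, 1, &conf) != 0)
		rte_exit(EXIT_FAILURE, "configure failed\n");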

Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/net/cpfl/cpfl_ethdev.c |  3 ++-
 drivers/net/cpfl/cpfl_rxtx.c   | 27 +++++++++++++++++++++++++++
 drivers/net/cpfl/cpfl_rxtx.h   |  2 ++
 3 files changed, 31 insertions(+), 1 deletion(-)

diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index d2387b9a39..f959a2911d 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -157,7 +157,8 @@ cpfl_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 		RTE_ETH_RX_OFFLOAD_UDP_CKSUM            |
 		RTE_ETH_RX_OFFLOAD_TCP_CKSUM            |
 		RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM     |
-		RTE_ETH_RX_OFFLOAD_TIMESTAMP;
+		RTE_ETH_RX_OFFLOAD_TIMESTAMP		|
+		RTE_ETH_RX_OFFLOAD_SCATTER;
 
 	dev_info->tx_offload_capa =
 		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM		|
diff --git a/drivers/net/cpfl/cpfl_rxtx.c b/drivers/net/cpfl/cpfl_rxtx.c
index 0d5bfb901d..6226b02301 100644
--- a/drivers/net/cpfl/cpfl_rxtx.c
+++ b/drivers/net/cpfl/cpfl_rxtx.c
@@ -503,6 +503,8 @@ int
 cpfl_rx_queue_init(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 {
 	struct idpf_rx_queue *rxq;
+	uint16_t max_pkt_len;
+	uint32_t frame_size;
 	int err;
 
 	if (rx_queue_id >= dev->data->nb_rx_queues)
@@ -516,6 +518,17 @@ cpfl_rx_queue_init(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 		return -EINVAL;
 	}
 
+	frame_size = dev->data->mtu + CPFL_ETH_OVERHEAD;
+
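+	/* Cap max_pkt_len at what CPFL_SUPPORT_CHAIN_NUM chained Rx buffers
+	 * can hold; frames larger than rx_buf_len rely on the scatter path.
+	 */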
+	max_pkt_len =
+	    RTE_MIN((uint32_t)CPFL_SUPPORT_CHAIN_NUM * rxq->rx_buf_len,
+		    frame_size);
+
+	rxq->max_pkt_len = max_pkt_len;
+	if ((dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_SCATTER) ||
+	    frame_size > rxq->rx_buf_len)
+		dev->data->scattered_rx = 1;
+
 	err = idpf_qc_ts_mbuf_register(rxq);
 	if (err != 0) {
 		PMD_DRV_LOG(ERR, "fail to register timestamp mbuf %u",
@@ -808,6 +821,13 @@ cpfl_set_rx_function(struct rte_eth_dev *dev)
 			}
 #endif /* CC_AVX512_SUPPORT */
 		}
+		if (dev->data->scattered_rx) {
+			PMD_DRV_LOG(NOTICE,
+				    "Using Single Scalar Scattered Rx (port %d).",
+				    dev->data->port_id);
+			dev->rx_pkt_burst = idpf_dp_singleq_recv_scatter_pkts;
+			return;
+		}
 		PMD_DRV_LOG(NOTICE,
 			    "Using Single Scalar Rx (port %d).",
 			    dev->data->port_id);
@@ -820,6 +840,13 @@ cpfl_set_rx_function(struct rte_eth_dev *dev)
 			    dev->data->port_id);
 		dev->rx_pkt_burst = idpf_dp_splitq_recv_pkts;
 	} else {
+		if (dev->data->scattered_rx) {
+			PMD_DRV_LOG(NOTICE,
+				    "Using Single Scalar Scattered Rx (port %d).",
+				    dev->data->port_id);
+			dev->rx_pkt_burst = idpf_dp_singleq_recv_scatter_pkts;
+			return;
+		}
 		PMD_DRV_LOG(NOTICE,
 			    "Using Single Scalar Rx (port %d).",
 			    dev->data->port_id);
diff --git a/drivers/net/cpfl/cpfl_rxtx.h b/drivers/net/cpfl/cpfl_rxtx.h
index 5f8144e55f..fb267d38c8 100644
--- a/drivers/net/cpfl/cpfl_rxtx.h
+++ b/drivers/net/cpfl/cpfl_rxtx.h
@@ -21,6 +21,8 @@
 #define CPFL_DEFAULT_TX_RS_THRESH	32
 #define CPFL_DEFAULT_TX_FREE_THRESH	32
 
+#define CPFL_SUPPORT_CHAIN_NUM 5
+
 int cpfl_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 			uint16_t nb_desc, unsigned int socket_id,
 			const struct rte_eth_txconf *tx_conf);
-- 
2.25.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v6 21/21] net/cpfl: add xstats ops
  2023-02-13  2:19       ` [PATCH v6 " Mingxia Liu
                           ` (19 preceding siblings ...)
  2023-02-13  2:19         ` [PATCH v6 20/21] net/cpfl: support scalar scatter Rx datapath for single queue model Mingxia Liu
@ 2023-02-13  2:19         ` Mingxia Liu
  2023-02-15 14:04         ` [PATCH v6 00/21] add support for cpfl PMD in DPDK Ferruh Yigit
  2023-02-16  0:29         ` [PATCH v7 " Mingxia Liu
  22 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-02-13  2:19 UTC (permalink / raw)
  To: dev, beilei.xing, yuying.zhang; +Cc: Mingxia Liu

Add support for these device ops:
- dev_xstats_get
- dev_xstats_get_names
- dev_xstats_reset
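
For reference, these ops are exercised through the standard ethdev
xstats API; a minimal sketch (the port id is an illustrative
assumption, and error handling is elided):

	int i, nb = rte_eth_xstats_get_names(port_id, NULL, 0);
	struct rte_eth_xstat_name *names = calloc(nb, sizeof(*names));
	struct rte_eth_xstat *vals = calloc(nb, sizeof(*vals));

	rte_eth_xstats_get_names(port_id, names, nb);
	rte_eth_xstats_get(port_id, vals, nb);
	for (i = 0; i < nb; i++)
		printf("%s: %" PRIu64 "\n", names[i].name, vals[i].value);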

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/net/cpfl/cpfl_ethdev.c | 80 ++++++++++++++++++++++++++++++++++
 1 file changed, 80 insertions(+)

diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index f959a2911d..543dbd60f0 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -80,6 +80,30 @@ static const uint64_t cpfl_ipv6_rss = RTE_ETH_RSS_NONFRAG_IPV6_UDP |
 			  RTE_ETH_RSS_NONFRAG_IPV6_OTHER |
 			  RTE_ETH_RSS_FRAG_IPV6;
 
+struct rte_cpfl_xstats_name_off {
+	char name[RTE_ETH_XSTATS_NAME_SIZE];
+	unsigned int offset;
+};
+
+static const struct rte_cpfl_xstats_name_off rte_cpfl_stats_strings[] = {
+	{"rx_bytes", offsetof(struct virtchnl2_vport_stats, rx_bytes)},
+	{"rx_unicast_packets", offsetof(struct virtchnl2_vport_stats, rx_unicast)},
+	{"rx_multicast_packets", offsetof(struct virtchnl2_vport_stats, rx_multicast)},
+	{"rx_broadcast_packets", offsetof(struct virtchnl2_vport_stats, rx_broadcast)},
+	{"rx_dropped_packets", offsetof(struct virtchnl2_vport_stats, rx_discards)},
+	{"rx_errors", offsetof(struct virtchnl2_vport_stats, rx_errors)},
+	{"rx_unknown_protocol_packets", offsetof(struct virtchnl2_vport_stats,
+						 rx_unknown_protocol)},
+	{"tx_bytes", offsetof(struct virtchnl2_vport_stats, tx_bytes)},
+	{"tx_unicast_packets", offsetof(struct virtchnl2_vport_stats, tx_unicast)},
+	{"tx_multicast_packets", offsetof(struct virtchnl2_vport_stats, tx_multicast)},
+	{"tx_broadcast_packets", offsetof(struct virtchnl2_vport_stats, tx_broadcast)},
+	{"tx_dropped_packets", offsetof(struct virtchnl2_vport_stats, tx_discards)},
+	{"tx_error_packets", offsetof(struct virtchnl2_vport_stats, tx_errors)}};
+
+#define CPFL_NB_XSTATS (sizeof(rte_cpfl_stats_strings) / \
+		sizeof(rte_cpfl_stats_strings[0]))
+
 static int
 cpfl_dev_link_update(struct rte_eth_dev *dev,
 		     __rte_unused int wait_to_complete)
@@ -313,6 +337,59 @@ cpfl_dev_stats_reset(struct rte_eth_dev *dev)
 	return 0;
 }
 
+static int cpfl_dev_xstats_reset(struct rte_eth_dev *dev)
+{
+	cpfl_dev_stats_reset(dev);
+	return 0;
+}
+
+static int cpfl_dev_xstats_get(struct rte_eth_dev *dev,
+			       struct rte_eth_xstat *xstats, unsigned int n)
+{
+	struct idpf_vport *vport =
+		(struct idpf_vport *)dev->data->dev_private;
+	struct virtchnl2_vport_stats *pstats = NULL;
+	unsigned int i;
+	int ret;
+
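+	/* ethdev xstats contract: if the caller's array is too small,
+	 * return the number of entries required instead of filling it.
+	 */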
+	if (n < CPFL_NB_XSTATS)
+		return CPFL_NB_XSTATS;
+
+	if (!xstats)
+		return 0;
+
+	ret = idpf_vc_stats_query(vport, &pstats);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "Get statistics failed");
+		return 0;
+	}
+
+	idpf_vport_stats_update(&vport->eth_stats_offset, pstats);
+
+	/* loop over xstats array and values from pstats */
+	for (i = 0; i < CPFL_NB_XSTATS; i++) {
+		xstats[i].id = i;
+		xstats[i].value = *(uint64_t *)(((char *)pstats) +
+			rte_cpfl_stats_strings[i].offset);
+	}
+	return CPFL_NB_XSTATS;
+}
+
+static int cpfl_dev_xstats_get_names(__rte_unused struct rte_eth_dev *dev,
+				     struct rte_eth_xstat_name *xstats_names,
+				     __rte_unused unsigned int limit)
+{
+	unsigned int i;
+
+	if (xstats_names)
+		for (i = 0; i < CPFL_NB_XSTATS; i++) {
+			snprintf(xstats_names[i].name,
+				 sizeof(xstats_names[i].name),
+				 "%s", rte_cpfl_stats_strings[i].name);
+		}
+	return CPFL_NB_XSTATS;
+}
+
 static int cpfl_config_rss_hf(struct idpf_vport *vport, uint64_t rss_hf)
 {
 	uint64_t hena = 0;
@@ -1121,6 +1198,9 @@ static const struct eth_dev_ops cpfl_eth_dev_ops = {
 	.reta_query			= cpfl_rss_reta_query,
 	.rss_hash_update		= cpfl_rss_hash_update,
 	.rss_hash_conf_get		= cpfl_rss_hash_conf_get,
+	.xstats_get			= cpfl_dev_xstats_get,
+	.xstats_get_names		= cpfl_dev_xstats_get_names,
+	.xstats_reset			= cpfl_dev_xstats_reset,
 };
 
 static uint16_t
-- 
2.25.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* Re: [PATCH v6 00/21] add support for cpfl PMD in DPDK
  2023-02-13  2:19       ` [PATCH v6 " Mingxia Liu
                           ` (20 preceding siblings ...)
  2023-02-13  2:19         ` [PATCH v6 21/21] net/cpfl: add xstats ops Mingxia Liu
@ 2023-02-15 14:04         ` Ferruh Yigit
  2023-02-16  1:16           ` Liu, Mingxia
  2023-02-16  0:29         ` [PATCH v7 " Mingxia Liu
  22 siblings, 1 reply; 263+ messages in thread
From: Ferruh Yigit @ 2023-02-15 14:04 UTC (permalink / raw)
  To: Mingxia Liu, dev, beilei.xing, yuying.zhang

On 2/13/2023 2:19 AM, Mingxia Liu wrote:
> The patchset introduced the cpfl (Control Plane Function Library) PMD
> for Intel® IPU E2100’s Configure Physical Function (Device ID: 0x1453)
> 
> The cpfl PMD inherits all the features from idpf PMD which will follow
> an ongoing standard data plan function spec
> https://www.oasis-open.org/committees/tc_home.php?wg_abbrev=idpf
> Besides, it will also support more device specific hardware offloading
> features from DPDK’s control path (e.g.: hairpin, rte_flow …). which is
> different from idpf PMD, and that's why we need a new cpfl PMD.
> 
> This patchset mainly focuses on idpf PMD’s equivalent features.
> To avoid duplicated code, the patchset depends on below patchsets which
> move the common part from net/idpf into common/idpf as a shared library.
> 
> v2 changes:
>  - rebase to the new baseline.
>  - Fix rss lut config issue.
> v3 changes:
>  - rebase to the new baseline.
> v4 changes:
>  - Resend v3. No code changed.
> v5 changes:
>  - rebase to the new baseline.
>  - optimize some code
>  - give "not supported" tips when user want to config rss hash type
>  - if stats reset fails at initialization time, don't rollback, just
>    print ERROR info
> v6 changes:
>  - for small fixed size structure, change rte_memcpy to memcpy()
>  - fix compilation for AVX512DQ
>  - update cpfl maintainers
> 
> Mingxia Liu (21):
>   net/cpfl: support device initialization
>   net/cpfl: add Tx queue setup
>   net/cpfl: add Rx queue setup
>   net/cpfl: support device start and stop
>   net/cpfl: support queue start
>   net/cpfl: support queue stop
>   net/cpfl: support queue release
>   net/cpfl: support MTU configuration
>   net/cpfl: support basic Rx data path
>   net/cpfl: support basic Tx data path
>   net/cpfl: support write back based on ITR expire
>   net/cpfl: support RSS
>   net/cpfl: support Rx offloading
>   net/cpfl: support Tx offloading
>   net/cpfl: add AVX512 data path for single queue model
>   net/cpfl: support timestamp offload
>   net/cpfl: add AVX512 data path for split queue model
>   net/cpfl: add HW statistics
>   net/cpfl: add RSS set/get ops
>   net/cpfl: support scalar scatter Rx datapath for single queue model
>   net/cpfl: add xstats ops

Hi Mingxia, Beilei,

Is there any missing dependency at this point?

^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v7 00/21] add support for cpfl PMD in DPDK
  2023-02-13  2:19       ` [PATCH v6 " Mingxia Liu
                           ` (21 preceding siblings ...)
  2023-02-15 14:04         ` [PATCH v6 00/21] add support for cpfl PMD in DPDK Ferruh Yigit
@ 2023-02-16  0:29         ` Mingxia Liu
  2023-02-16  0:29           ` [PATCH v7 01/21] net/cpfl: support device initialization Mingxia Liu
                             ` (22 more replies)
  22 siblings, 23 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-02-16  0:29 UTC (permalink / raw)
  To: dev, beilei.xing, yuying.zhang; +Cc: Mingxia Liu


The patchset introduced the cpfl (Control Plane Function Library) PMD
for Intel® IPU E2100’s Configure Physical Function (Device ID: 0x1453)

The cpfl PMD inherits all the features from the idpf PMD, which follows
an ongoing standard data plane function spec:
https://www.oasis-open.org/committees/tc_home.php?wg_abbrev=idpf
Besides, it will also support more device-specific hardware offloading
features from DPDK’s control path (e.g. hairpin, rte_flow …), which is
different from the idpf PMD, and that's why a new cpfl PMD is needed.

This patchset mainly focuses on the idpf PMD’s equivalent features.
To avoid duplicated code, the patchset depends on the patchsets below,
which move the common part from net/idpf into common/idpf as a shared library.

v2 changes:
 - rebase to the new baseline.
 - Fix rss lut config issue.
v3 changes:
 - rebase to the new baseline.
v4 changes:
 - Resend v3. No code changed.
v5 changes:
 - rebase to the new baseline.
 - optimize some code
 - give a "not supported" hint when the user tries to configure an
   unsupported RSS hash type
 - if stats reset fails at initialization time, don't roll back, just
   print an ERROR log
v6 changes:
 - for small fixed-size structures, change rte_memcpy() to memcpy()
 - fix compilation for AVX512DQ
 - update cpfl maintainers
v7 changes:
 - add dependency in cover-letter

This patchset is based on the idpf PMD code:
http://patches.dpdk.org/project/dpdk/cover/20230206054618.40975-1-beilei.xing@intel.com/
http://patches.dpdk.org/project/dpdk/cover/20230106090501.9106-1-beilei.xing@intel.com/
http://patches.dpdk.org/project/dpdk/cover/20230207084549.2225214-1-wenjun1.wu@intel.com/
http://patches.dpdk.org/project/dpdk/cover/20230208073401.2468579-1-mingxia.liu@intel.com/


Mingxia Liu (21):
  net/cpfl: support device initialization
  net/cpfl: add Tx queue setup
  net/cpfl: add Rx queue setup
  net/cpfl: support device start and stop
  net/cpfl: support queue start
  net/cpfl: support queue stop
  net/cpfl: support queue release
  net/cpfl: support MTU configuration
  net/cpfl: support basic Rx data path
  net/cpfl: support basic Tx data path
  net/cpfl: support write back based on ITR expire
  net/cpfl: support RSS
  net/cpfl: support Rx offloading
  net/cpfl: support Tx offloading
  net/cpfl: add AVX512 data path for single queue model
  net/cpfl: support timestamp offload
  net/cpfl: add AVX512 data path for split queue model
  net/cpfl: add HW statistics
  net/cpfl: add RSS set/get ops
  net/cpfl: support scalar scatter Rx datapath for single queue model
  net/cpfl: add xstats ops

 MAINTAINERS                             |    8 +
 doc/guides/nics/cpfl.rst                |   88 ++
 doc/guides/nics/features/cpfl.ini       |   17 +
 doc/guides/rel_notes/release_23_03.rst  |    6 +
 drivers/net/cpfl/cpfl_ethdev.c          | 1453 +++++++++++++++++++++++
 drivers/net/cpfl/cpfl_ethdev.h          |   95 ++
 drivers/net/cpfl/cpfl_logs.h            |   32 +
 drivers/net/cpfl/cpfl_rxtx.c            |  952 +++++++++++++++
 drivers/net/cpfl/cpfl_rxtx.h            |   44 +
 drivers/net/cpfl/cpfl_rxtx_vec_common.h |  116 ++
 drivers/net/cpfl/meson.build            |   40 +
 drivers/net/meson.build                 |    1 +
 12 files changed, 2852 insertions(+)
 create mode 100644 doc/guides/nics/cpfl.rst
 create mode 100644 doc/guides/nics/features/cpfl.ini
 create mode 100644 drivers/net/cpfl/cpfl_ethdev.c
 create mode 100644 drivers/net/cpfl/cpfl_ethdev.h
 create mode 100644 drivers/net/cpfl/cpfl_logs.h
 create mode 100644 drivers/net/cpfl/cpfl_rxtx.c
 create mode 100644 drivers/net/cpfl/cpfl_rxtx.h
 create mode 100644 drivers/net/cpfl/cpfl_rxtx_vec_common.h
 create mode 100644 drivers/net/cpfl/meson.build

-- 
2.25.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v7 01/21] net/cpfl: support device initialization
  2023-02-16  0:29         ` [PATCH v7 " Mingxia Liu
@ 2023-02-16  0:29           ` Mingxia Liu
  2023-02-27 13:46             ` Ferruh Yigit
  2023-02-27 21:43             ` Ferruh Yigit
  2023-02-16  0:29           ` [PATCH v7 02/21] net/cpfl: add Tx queue setup Mingxia Liu
                             ` (21 subsequent siblings)
  22 siblings, 2 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-02-16  0:29 UTC (permalink / raw)
  To: dev, beilei.xing, yuying.zhang; +Cc: Mingxia Liu

Support device init and add the following dev ops:
 - dev_configure
 - dev_close
 - dev_infos_get
 - link_update
 - dev_supported_ptypes_get
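
With these ops in place the device can already be probed and inspected,
e.g. with testpmd (the PCI address and devargs are illustrative):

	dpdk-testpmd -a ca:00.0,vport=[0] -- -i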

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 MAINTAINERS                            |   8 +
 doc/guides/nics/cpfl.rst               |  66 +++
 doc/guides/nics/features/cpfl.ini      |  12 +
 doc/guides/rel_notes/release_23_03.rst |   6 +
 drivers/net/cpfl/cpfl_ethdev.c         | 768 +++++++++++++++++++++++++
 drivers/net/cpfl/cpfl_ethdev.h         |  78 +++
 drivers/net/cpfl/cpfl_logs.h           |  32 ++
 drivers/net/cpfl/cpfl_rxtx.c           | 244 ++++++++
 drivers/net/cpfl/cpfl_rxtx.h           |  25 +
 drivers/net/cpfl/meson.build           |  14 +
 drivers/net/meson.build                |   1 +
 11 files changed, 1254 insertions(+)
 create mode 100644 doc/guides/nics/cpfl.rst
 create mode 100644 doc/guides/nics/features/cpfl.ini
 create mode 100644 drivers/net/cpfl/cpfl_ethdev.c
 create mode 100644 drivers/net/cpfl/cpfl_ethdev.h
 create mode 100644 drivers/net/cpfl/cpfl_logs.h
 create mode 100644 drivers/net/cpfl/cpfl_rxtx.c
 create mode 100644 drivers/net/cpfl/cpfl_rxtx.h
 create mode 100644 drivers/net/cpfl/meson.build

diff --git a/MAINTAINERS b/MAINTAINERS
index 9a0f416d2e..af80edaf6e 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -783,6 +783,14 @@ F: drivers/common/idpf/
 F: doc/guides/nics/idpf.rst
 F: doc/guides/nics/features/idpf.ini
 
+Intel cpfl
+M: Yuying Zhang <yuying.zhang@intel.com>
+M: Beilei Xing <beilei.xing@intel.com>
+T: git://dpdk.org/next/dpdk-next-net-intel
+F: drivers/net/cpfl/
+F: doc/guides/nics/cpfl.rst
+F: doc/guides/nics/features/cpfl.ini
+
 Intel igc
 M: Junfeng Guo <junfeng.guo@intel.com>
 M: Simei Su <simei.su@intel.com>
diff --git a/doc/guides/nics/cpfl.rst b/doc/guides/nics/cpfl.rst
new file mode 100644
index 0000000000..7c5aff0789
--- /dev/null
+++ b/doc/guides/nics/cpfl.rst
@@ -0,0 +1,66 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+   Copyright(c) 2022 Intel Corporation.
+
+.. include:: <isonum.txt>
+
+CPFL Poll Mode Driver
+=====================
+
+The [*EXPERIMENTAL*] cpfl PMD (**librte_net_cpfl**) provides poll mode driver support
+for Intel\ |reg| Infrastructure Processing Unit (Intel\ |reg| IPU) E2100.
+
+
+Linux Prerequisites
+-------------------
+
+Follow the DPDK :doc:`../linux_gsg/index` to setup the basic DPDK environment.
+
+To get better performance on Intel platforms,
+please follow the :doc:`../linux_gsg/nic_perf_intel_platform`.
+
+
+Pre-Installation Configuration
+------------------------------
+
+Runtime Config Options
+~~~~~~~~~~~~~~~~~~~~~~
+
+- ``vport`` (default ``0``)
+
+  The PMD supports creating multiple vports for one PCI device;
+  each vport corresponds to a single ethdev.
+  The user can specify which vport IDs to create, for example::
+
+    -a ca:00.0,vport=[0,2,3]
+
+  Then the PMD will create 3 vports (ethdevs) for device ``ca:00.0``.
+
+  If the parameter is not provided, vport 0 will be created by default.
+
+- ``rx_single`` (default ``0``)
+
+  There are two queue modes supported by the Intel\ |reg| IPU Ethernet E2100 Series
+  for the Rx queue: single queue mode and split queue mode.
+  The user can choose the Rx queue mode, for example::
+
+    -a ca:00.0,rx_single=1
+
+  Then the PMD will configure the Rx queue in single queue mode.
+  Otherwise, split queue mode is chosen by default.
+
+- ``tx_single`` (default ``0``)
+
+  There are two queue modes supported by the Intel\ |reg| IPU Ethernet E2100 Series
+  for the Tx queue: single queue mode and split queue mode.
+  The user can choose the Tx queue mode, for example::
+
+    -a ca:00.0,tx_single=1
+
+  Then the PMD will configure the Tx queue in single queue mode.
+  Otherwise, split queue mode is chosen by default.
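+
+The devargs above can be combined in a single allow-list entry, for
+example (the PCI address is illustrative)::
+
+    -a ca:00.0,vport=[0,2,3],rx_single=1,tx_single=1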
+
+
+Driver compilation and testing
+------------------------------
+
+Refer to the document :doc:`build_and_test` for details.
\ No newline at end of file
diff --git a/doc/guides/nics/features/cpfl.ini b/doc/guides/nics/features/cpfl.ini
new file mode 100644
index 0000000000..a2d1ca9e15
--- /dev/null
+++ b/doc/guides/nics/features/cpfl.ini
@@ -0,0 +1,12 @@
+;
+; Supported features of the 'cpfl' network poll mode driver.
+;
+; Refer to default.ini for the full list of available PMD features.
+;
+; A feature with "P" indicates it is only supported when the non-vector
+; path is selected.
+;
+[Features]
+Linux                = Y
+x86-32               = Y
+x86-64               = Y
diff --git a/doc/guides/rel_notes/release_23_03.rst b/doc/guides/rel_notes/release_23_03.rst
index 07914170a7..b0b23d1a44 100644
--- a/doc/guides/rel_notes/release_23_03.rst
+++ b/doc/guides/rel_notes/release_23_03.rst
@@ -88,6 +88,12 @@ New Features
   * Added timesync API support.
   * Added packet pacing(launch time offloading) support.
 
+* **Added Intel cpfl driver.**
+
+  Added the new ``cpfl`` net driver
+  for Intel\ |reg| Infrastructure Processing Unit (Intel\ |reg| IPU) E2100.
+  See the :doc:`../nics/cpfl` NIC guide for more details on this new driver.
+
 Removed Items
 -------------
 
diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
new file mode 100644
index 0000000000..fe0061133c
--- /dev/null
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -0,0 +1,768 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Intel Corporation
+ */
+
+#include <rte_atomic.h>
+#include <rte_eal.h>
+#include <rte_ether.h>
+#include <rte_malloc.h>
+#include <rte_memzone.h>
+#include <rte_dev.h>
+#include <errno.h>
+#include <rte_alarm.h>
+
+#include "cpfl_ethdev.h"
+
+#define CPFL_TX_SINGLE_Q	"tx_single"
+#define CPFL_RX_SINGLE_Q	"rx_single"
+#define CPFL_VPORT		"vport"
+
+rte_spinlock_t cpfl_adapter_lock;
+/* A list for all adapters, one adapter matches one PCI device */
+struct cpfl_adapter_list cpfl_adapter_list;
+bool cpfl_adapter_list_init;
+
+static const char * const cpfl_valid_args[] = {
+	CPFL_TX_SINGLE_Q,
+	CPFL_RX_SINGLE_Q,
+	CPFL_VPORT,
+	NULL
+};
+
+static int
+cpfl_dev_link_update(struct rte_eth_dev *dev,
+		     __rte_unused int wait_to_complete)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct rte_eth_link new_link;
+
+	memset(&new_link, 0, sizeof(new_link));
+
+	switch (vport->link_speed) {
+	case RTE_ETH_SPEED_NUM_10M:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_10M;
+		break;
+	case RTE_ETH_SPEED_NUM_100M:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_100M;
+		break;
+	case RTE_ETH_SPEED_NUM_1G:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_1G;
+		break;
+	case RTE_ETH_SPEED_NUM_10G:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_10G;
+		break;
+	case RTE_ETH_SPEED_NUM_20G:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_20G;
+		break;
+	case RTE_ETH_SPEED_NUM_25G:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_25G;
+		break;
+	case RTE_ETH_SPEED_NUM_40G:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_40G;
+		break;
+	case RTE_ETH_SPEED_NUM_50G:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_50G;
+		break;
+	case RTE_ETH_SPEED_NUM_100G:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_100G;
+		break;
+	case RTE_ETH_SPEED_NUM_200G:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_200G;
+		break;
+	default:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+	}
+
+	new_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+	new_link.link_status = vport->link_up ? RTE_ETH_LINK_UP :
+		RTE_ETH_LINK_DOWN;
+	new_link.link_autoneg = (dev->data->dev_conf.link_speeds & RTE_ETH_LINK_SPEED_FIXED) ?
+				 RTE_ETH_LINK_FIXED : RTE_ETH_LINK_AUTONEG;
+
+	return rte_eth_linkstatus_set(dev, &new_link);
+}
+
+static int
+cpfl_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_adapter *adapter = vport->adapter;
+
+	dev_info->max_rx_queues = adapter->caps.max_rx_q;
+	dev_info->max_tx_queues = adapter->caps.max_tx_q;
+	dev_info->min_rx_bufsize = CPFL_MIN_BUF_SIZE;
+	dev_info->max_rx_pktlen = vport->max_mtu + CPFL_ETH_OVERHEAD;
+
+	dev_info->max_mtu = vport->max_mtu;
+	dev_info->min_mtu = RTE_ETHER_MIN_MTU;
+
+	return 0;
+}
+
+static const uint32_t *
+cpfl_dev_supported_ptypes_get(struct rte_eth_dev *dev __rte_unused)
+{
+	static const uint32_t ptypes[] = {
+		RTE_PTYPE_L2_ETHER,
+		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN,
+		RTE_PTYPE_L3_IPV6_EXT_UNKNOWN,
+		RTE_PTYPE_L4_FRAG,
+		RTE_PTYPE_L4_UDP,
+		RTE_PTYPE_L4_TCP,
+		RTE_PTYPE_L4_SCTP,
+		RTE_PTYPE_L4_ICMP,
+		RTE_PTYPE_UNKNOWN
+	};
+
+	return ptypes;
+}
+
+static int
+cpfl_dev_configure(struct rte_eth_dev *dev)
+{
+	struct rte_eth_conf *conf = &dev->data->dev_conf;
+
+	if (conf->link_speeds & RTE_ETH_LINK_SPEED_FIXED) {
+		PMD_INIT_LOG(ERR, "Setting link speed is not supported");
+		return -ENOTSUP;
+	}
+
+	if (conf->txmode.mq_mode != RTE_ETH_MQ_TX_NONE) {
+		PMD_INIT_LOG(ERR, "Multi-queue TX mode %d is not supported",
+			     conf->txmode.mq_mode);
+		return -ENOTSUP;
+	}
+
+	if (conf->lpbk_mode != 0) {
+		PMD_INIT_LOG(ERR, "Loopback operation mode %d is not supported",
+			     conf->lpbk_mode);
+		return -ENOTSUP;
+	}
+
+	if (conf->dcb_capability_en != 0) {
+		PMD_INIT_LOG(ERR, "Priority Flow Control (PFC) is not supported");
+		return -ENOTSUP;
+	}
+
+	if (conf->intr_conf.lsc != 0) {
+		PMD_INIT_LOG(ERR, "LSC interrupt is not supported");
+		return -ENOTSUP;
+	}
+
+	if (conf->intr_conf.rxq != 0) {
+		PMD_INIT_LOG(ERR, "RXQ interrupt is not supported");
+		return -ENOTSUP;
+	}
+
+	if (conf->intr_conf.rmv != 0) {
+		PMD_INIT_LOG(ERR, "RMV interrupt is not supported");
+		return -ENOTSUP;
+	}
+
+	return 0;
+}
+
+static int
+cpfl_dev_close(struct rte_eth_dev *dev)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct cpfl_adapter_ext *adapter = CPFL_ADAPTER_TO_EXT(vport->adapter);
+
+	idpf_vport_deinit(vport);
+
+	adapter->cur_vports &= ~RTE_BIT32(vport->devarg_id);
+	adapter->cur_vport_nb--;
+	dev->data->dev_private = NULL;
+	adapter->vports[vport->sw_idx] = NULL;
+	rte_free(vport);
+
+	return 0;
+}
+
+static int
+insert_value(struct cpfl_devargs *devargs, uint16_t id)
+{
+	uint16_t i;
+
+	/* ignore duplicate */
+	for (i = 0; i < devargs->req_vport_nb; i++) {
+		if (devargs->req_vports[i] == id)
+			return 0;
+	}
+
+	if (devargs->req_vport_nb >= RTE_DIM(devargs->req_vports)) {
+		PMD_INIT_LOG(ERR, "Total vport number can't be > %d",
+			     CPFL_MAX_VPORT_NUM);
+		return -EINVAL;
+	}
+
+	devargs->req_vports[devargs->req_vport_nb] = id;
+	devargs->req_vport_nb++;
+
+	return 0;
+}
+
+static const char *
+parse_range(const char *value, struct cpfl_devargs *devargs)
+{
+	uint16_t lo, hi, i;
+	int n = 0;
+	int result;
+	const char *pos = value;
+
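+	/* Accept a single id ("3") or an inclusive range ("3-5");
+	 * %n records how many characters sscanf consumed.
+	 */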
+	result = sscanf(value, "%hu%n-%hu%n", &lo, &n, &hi, &n);
+	if (result == 1) {
+		if (lo >= CPFL_MAX_VPORT_NUM)
+			return NULL;
+		if (insert_value(devargs, lo) != 0)
+			return NULL;
+	} else if (result == 2) {
+		if (lo > hi || hi >= CPFL_MAX_VPORT_NUM)
+			return NULL;
+		for (i = lo; i <= hi; i++) {
+			if (insert_value(devargs, i) != 0)
+				return NULL;
+		}
+	} else {
+		return NULL;
+	}
+
+	return pos + n;
+}
+
+static int
+parse_vport(const char *key, const char *value, void *args)
+{
+	struct cpfl_devargs *devargs = args;
+	const char *pos = value;
+
+	devargs->req_vport_nb = 0;
+
+	if (*pos == '[')
+		pos++;
+
+	while (1) {
+		pos = parse_range(pos, devargs);
+		if (pos == NULL) {
+			PMD_INIT_LOG(ERR, "invalid value:\"%s\" for key:\"%s\", ",
+				     value, key);
+			return -EINVAL;
+		}
+		if (*pos != ',')
+			break;
+		pos++;
+	}
+
+	if (*value == '[' && *pos != ']') {
+		PMD_INIT_LOG(ERR, "invalid value:\"%s\" for key:\"%s\", ",
+			     value, key);
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static int
+parse_bool(const char *key, const char *value, void *args)
+{
+	int *i = args;
+	char *end;
+	int num;
+
+	errno = 0;
+
+	num = strtoul(value, &end, 10);
+
+	if (errno == ERANGE || (num != 0 && num != 1)) {
+		PMD_INIT_LOG(ERR, "invalid value:\"%s\" for key:\"%s\", value must be 0 or 1",
+			value, key);
+		return -EINVAL;
+	}
+
+	*i = num;
+	return 0;
+}
+
+static int
+cpfl_parse_devargs(struct rte_pci_device *pci_dev, struct cpfl_adapter_ext *adapter,
+		   struct cpfl_devargs *cpfl_args)
+{
+	struct rte_devargs *devargs = pci_dev->device.devargs;
+	struct rte_kvargs *kvlist;
+	int i, ret;
+
+	cpfl_args->req_vport_nb = 0;
+
+	if (devargs == NULL)
+		return 0;
+
+	kvlist = rte_kvargs_parse(devargs->args, cpfl_valid_args);
+	if (kvlist == NULL) {
+		PMD_INIT_LOG(ERR, "invalid kvargs key");
+		return -EINVAL;
+	}
+
+	/* check parsed devargs */
+	if (adapter->cur_vport_nb + cpfl_args->req_vport_nb >
+	    CPFL_MAX_VPORT_NUM) {
+		PMD_INIT_LOG(ERR, "Total vport number can't be > %d",
+			     CPFL_MAX_VPORT_NUM);
+		ret = -EINVAL;
+		goto bail;
+	}
+
+	for (i = 0; i < cpfl_args->req_vport_nb; i++) {
+		if (adapter->cur_vports & RTE_BIT32(cpfl_args->req_vports[i])) {
+			PMD_INIT_LOG(ERR, "Vport %d has been created",
+				     cpfl_args->req_vports[i]);
+			ret = -EINVAL;
+			goto bail;
+		}
+	}
+
+	ret = rte_kvargs_process(kvlist, CPFL_VPORT, &parse_vport,
+				 cpfl_args);
+	if (ret != 0)
+		goto bail;
+
+	ret = rte_kvargs_process(kvlist, CPFL_TX_SINGLE_Q, &parse_bool,
+				 &adapter->base.txq_model);
+	if (ret != 0)
+		goto bail;
+
+	ret = rte_kvargs_process(kvlist, CPFL_RX_SINGLE_Q, &parse_bool,
+				 &adapter->base.rxq_model);
+	if (ret != 0)
+		goto bail;
+
+bail:
+	rte_kvargs_free(kvlist);
+	return ret;
+}
+
+static struct idpf_vport *
+cpfl_find_vport(struct cpfl_adapter_ext *adapter, uint32_t vport_id)
+{
+	struct idpf_vport *vport = NULL;
+	int i;
+
+	for (i = 0; i < adapter->cur_vport_nb; i++) {
+		vport = adapter->vports[i];
+		if (vport->vport_id != vport_id)
+			continue;
+		else
+			return vport;
+	}
+
+	return vport;
+}
+
+static void
+cpfl_handle_event_msg(struct idpf_vport *vport, uint8_t *msg, uint16_t msglen)
+{
+	struct virtchnl2_event *vc_event = (struct virtchnl2_event *)msg;
+	struct rte_eth_dev *dev = (struct rte_eth_dev *)vport->dev;
+
+	if (msglen < sizeof(struct virtchnl2_event)) {
+		PMD_DRV_LOG(ERR, "Error event");
+		return;
+	}
+
+	switch (vc_event->event) {
+	case VIRTCHNL2_EVENT_LINK_CHANGE:
+		PMD_DRV_LOG(DEBUG, "VIRTCHNL2_EVENT_LINK_CHANGE");
+		vport->link_up = !!(vc_event->link_status);
+		vport->link_speed = vc_event->link_speed;
+		cpfl_dev_link_update(dev, 0);
+		break;
+	default:
+		PMD_DRV_LOG(ERR, "unknown event received %u", vc_event->event);
+		break;
+	}
+}
+
+static void
+cpfl_handle_virtchnl_msg(struct cpfl_adapter_ext *adapter_ex)
+{
+	struct idpf_adapter *adapter = &adapter_ex->base;
+	struct idpf_dma_mem *dma_mem = NULL;
+	struct idpf_hw *hw = &adapter->hw;
+	struct virtchnl2_event *vc_event;
+	struct idpf_ctlq_msg ctlq_msg;
+	enum idpf_mbx_opc mbx_op;
+	struct idpf_vport *vport;
+	enum virtchnl_ops vc_op;
+	uint16_t pending = 1;
+	int ret;
+
+	while (pending) {
+		ret = idpf_vc_ctlq_recv(hw->arq, &pending, &ctlq_msg);
+		if (ret) {
+			PMD_DRV_LOG(INFO, "Failed to read msg from virtual channel, ret: %d", ret);
+			return;
+		}
+
+		memcpy(adapter->mbx_resp, ctlq_msg.ctx.indirect.payload->va,
+			   IDPF_DFLT_MBX_BUF_SIZE);
+
+		mbx_op = rte_le_to_cpu_16(ctlq_msg.opcode);
+		vc_op = rte_le_to_cpu_32(ctlq_msg.cookie.mbx.chnl_opcode);
+		adapter->cmd_retval = rte_le_to_cpu_32(ctlq_msg.cookie.mbx.chnl_retval);
+
+		switch (mbx_op) {
+		case idpf_mbq_opc_send_msg_to_peer_pf:
+			if (vc_op == VIRTCHNL2_OP_EVENT) {
+				if (ctlq_msg.data_len < sizeof(struct virtchnl2_event)) {
+					PMD_DRV_LOG(ERR, "Error event");
+					return;
+				}
+				vc_event = (struct virtchnl2_event *)adapter->mbx_resp;
+				vport = cpfl_find_vport(adapter_ex, vc_event->vport_id);
+				if (!vport) {
+					PMD_DRV_LOG(ERR, "Can't find vport.");
+					return;
+				}
+				cpfl_handle_event_msg(vport, adapter->mbx_resp,
+						      ctlq_msg.data_len);
+			} else {
+				if (vc_op == adapter->pend_cmd)
+					notify_cmd(adapter, adapter->cmd_retval);
+				else
+					PMD_DRV_LOG(ERR, "command mismatch, expect %u, get %u",
+						    adapter->pend_cmd, vc_op);
+
+				PMD_DRV_LOG(DEBUG, "Virtual channel response is received, "
+					    "opcode = %d", vc_op);
+			}
+			goto post_buf;
+		default:
+			PMD_DRV_LOG(DEBUG, "Request %u is not supported yet", mbx_op);
+		}
+	}
+
+post_buf:
+	if (ctlq_msg.data_len)
+		dma_mem = ctlq_msg.ctx.indirect.payload;
+	else
+		pending = 0;
+
+	ret = idpf_vc_ctlq_post_rx_buffs(hw, hw->arq, &pending, &dma_mem);
+	if (ret && dma_mem)
+		idpf_free_dma_mem(hw, dma_mem);
+}
+
+static void
+cpfl_dev_alarm_handler(void *param)
+{
+	struct cpfl_adapter_ext *adapter = param;
+
+	cpfl_handle_virtchnl_msg(adapter);
+
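+	/* Re-arm the alarm so the mailbox is polled every
+	 * CPFL_ALARM_INTERVAL microseconds.
+	 */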
+	rte_eal_alarm_set(CPFL_ALARM_INTERVAL, cpfl_dev_alarm_handler, adapter);
+}
+
+static int
+cpfl_adapter_ext_init(struct rte_pci_device *pci_dev, struct cpfl_adapter_ext *adapter)
+{
+	struct idpf_adapter *base = &adapter->base;
+	struct idpf_hw *hw = &base->hw;
+	int ret = 0;
+
+	hw->hw_addr = (void *)pci_dev->mem_resource[0].addr;
+	hw->hw_addr_len = pci_dev->mem_resource[0].len;
+	hw->back = base;
+	hw->vendor_id = pci_dev->id.vendor_id;
+	hw->device_id = pci_dev->id.device_id;
+	hw->subsystem_vendor_id = pci_dev->id.subsystem_vendor_id;
+
+	strncpy(adapter->name, pci_dev->device.name, PCI_PRI_STR_SIZE);
+
+	ret = idpf_adapter_init(base);
+	if (ret != 0) {
+		PMD_INIT_LOG(ERR, "Failed to init adapter");
+		goto err_adapter_init;
+	}
+
+	rte_eal_alarm_set(CPFL_ALARM_INTERVAL, cpfl_dev_alarm_handler, adapter);
+
+	adapter->max_vport_nb = adapter->base.caps.max_vports;
+
+	adapter->vports = rte_zmalloc("vports",
+				      adapter->max_vport_nb *
+				      sizeof(*adapter->vports),
+				      0);
+	if (adapter->vports == NULL) {
+		PMD_INIT_LOG(ERR, "Failed to allocate vports memory");
+		ret = -ENOMEM;
+		goto err_get_ptype;
+	}
+
+	adapter->cur_vports = 0;
+	adapter->cur_vport_nb = 0;
+
+	adapter->used_vecs_num = 0;
+
+	return ret;
+
+err_get_ptype:
+	idpf_adapter_deinit(base);
+err_adapter_init:
+	return ret;
+}
+
+static const struct eth_dev_ops cpfl_eth_dev_ops = {
+	.dev_configure			= cpfl_dev_configure,
+	.dev_close			= cpfl_dev_close,
+	.dev_infos_get			= cpfl_dev_info_get,
+	.link_update			= cpfl_dev_link_update,
+	.dev_supported_ptypes_get	= cpfl_dev_supported_ptypes_get,
+};
+
+static uint16_t
+cpfl_vport_idx_alloc(struct cpfl_adapter_ext *ad)
+{
+	uint16_t vport_idx;
+	uint16_t i;
+
+	for (i = 0; i < ad->max_vport_nb; i++) {
+		if (ad->vports[i] == NULL)
+			break;
+	}
+
+	if (i == ad->max_vport_nb)
+		vport_idx = CPFL_INVALID_VPORT_IDX;
+	else
+		vport_idx = i;
+
+	return vport_idx;
+}
+
+static int
+cpfl_dev_vport_init(struct rte_eth_dev *dev, void *init_params)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct cpfl_vport_param *param = init_params;
+	struct cpfl_adapter_ext *adapter = param->adapter;
+	/* for sending create vport virtchnl msg prepare */
+	struct virtchnl2_create_vport create_vport_info;
+	int ret = 0;
+
+	dev->dev_ops = &cpfl_eth_dev_ops;
+	vport->adapter = &adapter->base;
+	vport->sw_idx = param->idx;
+	vport->devarg_id = param->devarg_id;
+	vport->dev = dev;
+
+	memset(&create_vport_info, 0, sizeof(create_vport_info));
+	ret = idpf_vport_info_init(vport, &create_vport_info);
+	if (ret != 0) {
+		PMD_INIT_LOG(ERR, "Failed to init vport req_info.");
+		goto err;
+	}
+
+	ret = idpf_vport_init(vport, &create_vport_info, dev->data);
+	if (ret != 0) {
+		PMD_INIT_LOG(ERR, "Failed to init vports.");
+		goto err;
+	}
+
+	adapter->vports[param->idx] = vport;
+	adapter->cur_vports |= RTE_BIT32(param->devarg_id);
+	adapter->cur_vport_nb++;
+
+	dev->data->mac_addrs = rte_zmalloc(NULL, RTE_ETHER_ADDR_LEN, 0);
+	if (dev->data->mac_addrs == NULL) {
+		PMD_INIT_LOG(ERR, "Cannot allocate mac_addr memory.");
+		ret = -ENOMEM;
+		goto err_mac_addrs;
+	}
+
+	rte_ether_addr_copy((struct rte_ether_addr *)vport->default_mac_addr,
+			    &dev->data->mac_addrs[0]);
+
+	return 0;
+
+err_mac_addrs:
+	adapter->vports[param->idx] = NULL;  /* reset */
+	idpf_vport_deinit(vport);
+err:
+	return ret;
+}
+
+static const struct rte_pci_id pci_id_cpfl_map[] = {
+	{ RTE_PCI_DEVICE(IDPF_INTEL_VENDOR_ID, IDPF_DEV_ID_CPF) },
+	{ .vendor_id = 0, /* sentinel */ },
+};
+
+static struct cpfl_adapter_ext *
+cpfl_find_adapter_ext(struct rte_pci_device *pci_dev)
+{
+	struct cpfl_adapter_ext *adapter;
+	int found = 0;
+
+	if (pci_dev == NULL)
+		return NULL;
+
+	rte_spinlock_lock(&cpfl_adapter_lock);
+	TAILQ_FOREACH(adapter, &cpfl_adapter_list, next) {
+		if (strncmp(adapter->name, pci_dev->device.name, PCI_PRI_STR_SIZE) == 0) {
+			found = 1;
+			break;
+		}
+	}
+	rte_spinlock_unlock(&cpfl_adapter_lock);
+
+	if (found == 0)
+		return NULL;
+
+	return adapter;
+}
+
+static void
+cpfl_adapter_ext_deinit(struct cpfl_adapter_ext *adapter)
+{
+	rte_eal_alarm_cancel(cpfl_dev_alarm_handler, adapter);
+	idpf_adapter_deinit(&adapter->base);
+
+	rte_free(adapter->vports);
+	adapter->vports = NULL;
+}
+
+static int
+cpfl_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
+	       struct rte_pci_device *pci_dev)
+{
+	struct cpfl_vport_param vport_param;
+	struct cpfl_adapter_ext *adapter;
+	struct cpfl_devargs devargs;
+	char name[RTE_ETH_NAME_MAX_LEN];
+	int i, retval;
+	bool first_probe = false;
+
+	if (!cpfl_adapter_list_init) {
+		rte_spinlock_init(&cpfl_adapter_lock);
+		TAILQ_INIT(&cpfl_adapter_list);
+		cpfl_adapter_list_init = true;
+	}
+
+	adapter = cpfl_find_adapter_ext(pci_dev);
+	if (adapter == NULL) {
+		first_probe = true;
+		adapter = rte_zmalloc("cpfl_adapter_ext",
+				      sizeof(struct cpfl_adapter_ext), 0);
+		if (adapter == NULL) {
+			PMD_INIT_LOG(ERR, "Failed to allocate adapter.");
+			return -ENOMEM;
+		}
+
+		retval = cpfl_adapter_ext_init(pci_dev, adapter);
+		if (retval != 0) {
+			PMD_INIT_LOG(ERR, "Failed to init adapter.");
+			return retval;
+		}
+
+		rte_spinlock_lock(&cpfl_adapter_lock);
+		TAILQ_INSERT_TAIL(&cpfl_adapter_list, adapter, next);
+		rte_spinlock_unlock(&cpfl_adapter_lock);
+	}
+
+	retval = cpfl_parse_devargs(pci_dev, adapter, &devargs);
+	if (retval != 0) {
+		PMD_INIT_LOG(ERR, "Failed to parse private devargs");
+		goto err;
+	}
+
+	if (devargs.req_vport_nb == 0) {
+		/* If no vport devarg, create vport 0 by default. */
+		vport_param.adapter = adapter;
+		vport_param.devarg_id = 0;
+		vport_param.idx = cpfl_vport_idx_alloc(adapter);
+		if (vport_param.idx == CPFL_INVALID_VPORT_IDX) {
+			PMD_INIT_LOG(ERR, "No space for vport %u", vport_param.devarg_id);
+			return 0;
+		}
+		snprintf(name, sizeof(name), "cpfl_%s_vport_0",
+			 pci_dev->device.name);
+		retval = rte_eth_dev_create(&pci_dev->device, name,
+					    sizeof(struct idpf_vport),
+					    NULL, NULL, cpfl_dev_vport_init,
+					    &vport_param);
+		if (retval != 0)
+			PMD_DRV_LOG(ERR, "Failed to create default vport 0");
+	} else {
+		for (i = 0; i < devargs.req_vport_nb; i++) {
+			vport_param.adapter = adapter;
+			vport_param.devarg_id = devargs.req_vports[i];
+			vport_param.idx = cpfl_vport_idx_alloc(adapter);
+			if (vport_param.idx == CPFL_INVALID_VPORT_IDX) {
+				PMD_INIT_LOG(ERR, "No space for vport %u", vport_param.devarg_id);
+				break;
+			}
+			snprintf(name, sizeof(name), "cpfl_%s_vport_%d",
+				 pci_dev->device.name,
+				 devargs.req_vports[i]);
+			retval = rte_eth_dev_create(&pci_dev->device, name,
+						    sizeof(struct idpf_vport),
+						    NULL, NULL, cpfl_dev_vport_init,
+						    &vport_param);
+			if (retval != 0)
+				PMD_DRV_LOG(ERR, "Failed to create vport %d",
+					    vport_param.devarg_id);
+		}
+	}
+
+	return 0;
+
+err:
+	if (first_probe) {
+		rte_spinlock_lock(&cpfl_adapter_lock);
+		TAILQ_REMOVE(&cpfl_adapter_list, adapter, next);
+		rte_spinlock_unlock(&cpfl_adapter_lock);
+		cpfl_adapter_ext_deinit(adapter);
+		rte_free(adapter);
+	}
+	return retval;
+}
+
+static int
+cpfl_pci_remove(struct rte_pci_device *pci_dev)
+{
+	struct cpfl_adapter_ext *adapter = cpfl_find_adapter_ext(pci_dev);
+	uint16_t port_id;
+
+	/* Close all ethdevs created on this rte_device; they can be
+	 * enumerated with RTE_ETH_FOREACH_DEV_OF.
+	 */
+	RTE_ETH_FOREACH_DEV_OF(port_id, &pci_dev->device) {
+		rte_eth_dev_close(port_id);
+	}
+
+	rte_spinlock_lock(&cpfl_adapter_lock);
+	TAILQ_REMOVE(&cpfl_adapter_list, adapter, next);
+	rte_spinlock_unlock(&cpfl_adapter_lock);
+	cpfl_adapter_ext_deinit(adapter);
+	rte_free(adapter);
+
+	return 0;
+}
+
+static struct rte_pci_driver rte_cpfl_pmd = {
+	.id_table	= pci_id_cpfl_map,
+	.drv_flags	= RTE_PCI_DRV_NEED_MAPPING,
+	.probe		= cpfl_pci_probe,
+	.remove		= cpfl_pci_remove,
+};
+
+/**
+ * Driver initialization routine.
+ * Invoked once at EAL init time.
+ * Register itself as the [Poll Mode] Driver of PCI devices.
+ */
+RTE_PMD_REGISTER_PCI(net_cpfl, rte_cpfl_pmd);
+RTE_PMD_REGISTER_PCI_TABLE(net_cpfl, pci_id_cpfl_map);
+RTE_PMD_REGISTER_KMOD_DEP(net_cpfl, "* igb_uio | vfio-pci");
+RTE_PMD_REGISTER_PARAM_STRING(net_cpfl,
+			      CPFL_TX_SINGLE_Q "=<0|1> "
+			      CPFL_RX_SINGLE_Q "=<0|1> "
+			      CPFL_VPORT "=[vport_set0,[vport_set1],...]");
+
+RTE_LOG_REGISTER_SUFFIX(cpfl_logtype_init, init, NOTICE);
+RTE_LOG_REGISTER_SUFFIX(cpfl_logtype_driver, driver, NOTICE);
diff --git a/drivers/net/cpfl/cpfl_ethdev.h b/drivers/net/cpfl/cpfl_ethdev.h
new file mode 100644
index 0000000000..9ca39b4558
--- /dev/null
+++ b/drivers/net/cpfl/cpfl_ethdev.h
@@ -0,0 +1,78 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Intel Corporation
+ */
+
+#ifndef _CPFL_ETHDEV_H_
+#define _CPFL_ETHDEV_H_
+
+#include <stdint.h>
+#include <rte_malloc.h>
+#include <rte_spinlock.h>
+#include <rte_ethdev.h>
+#include <rte_kvargs.h>
+#include <ethdev_driver.h>
+#include <ethdev_pci.h>
+
+#include "cpfl_logs.h"
+
+#include <idpf_common_device.h>
+#include <idpf_common_virtchnl.h>
+#include <base/idpf_prototype.h>
+#include <base/virtchnl2.h>
+
+#define CPFL_MAX_VPORT_NUM	8
+
+#define CPFL_INVALID_VPORT_IDX	0xffff
+
+#define CPFL_MIN_BUF_SIZE	1024
+#define CPFL_MAX_FRAME_SIZE	9728
+#define CPFL_DEFAULT_MTU	RTE_ETHER_MTU
+
+#define CPFL_NUM_MACADDR_MAX	64
+
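+/* L2 overhead added on top of the MTU: Ethernet header, CRC and room
+ * for two VLAN tags (QinQ).
+ */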
+#define CPFL_VLAN_TAG_SIZE	4
+#define CPFL_ETH_OVERHEAD \
+	(RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN + CPFL_VLAN_TAG_SIZE * 2)
+
+#define CPFL_ADAPTER_NAME_LEN	(PCI_PRI_STR_SIZE + 1)
+
+#define CPFL_ALARM_INTERVAL	50000 /* us */
+
+/* Device IDs */
+#define IDPF_DEV_ID_CPF			0x1453
+
+struct cpfl_vport_param {
+	struct cpfl_adapter_ext *adapter;
+	uint16_t devarg_id; /* arg id from user */
+	uint16_t idx;       /* index in adapter->vports[] */
+};
+
+/* Struct used when parsing driver-specific devargs */
+struct cpfl_devargs {
+	uint16_t req_vports[CPFL_MAX_VPORT_NUM];
+	uint16_t req_vport_nb;
+};
+
+struct cpfl_adapter_ext {
+	TAILQ_ENTRY(cpfl_adapter_ext) next;
+	struct idpf_adapter base;
+
+	char name[CPFL_ADAPTER_NAME_LEN];
+
+	struct idpf_vport **vports;
+	uint16_t max_vport_nb;
+
+	uint16_t cur_vports; /* bit mask of created vport */
+	uint16_t cur_vport_nb;
+
+	uint16_t used_vecs_num;
+};
+
+TAILQ_HEAD(cpfl_adapter_list, cpfl_adapter_ext);
+
+#define CPFL_DEV_TO_PCI(eth_dev)		\
+	RTE_DEV_TO_PCI((eth_dev)->device)
+#define CPFL_ADAPTER_TO_EXT(p)					\
+	container_of((p), struct cpfl_adapter_ext, base)
+
+#endif /* _CPFL_ETHDEV_H_ */
diff --git a/drivers/net/cpfl/cpfl_logs.h b/drivers/net/cpfl/cpfl_logs.h
new file mode 100644
index 0000000000..365b53e8b3
--- /dev/null
+++ b/drivers/net/cpfl/cpfl_logs.h
@@ -0,0 +1,32 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Intel Corporation
+ */
+
+#ifndef _CPFL_LOGS_H_
+#define _CPFL_LOGS_H_
+
+#include <rte_log.h>
+
+extern int cpfl_logtype_init;
+extern int cpfl_logtype_driver;
+
+#define PMD_INIT_LOG(level, ...) \
+	rte_log(RTE_LOG_ ## level, \
+		cpfl_logtype_init, \
+		RTE_FMT("%s(): " \
+			RTE_FMT_HEAD(__VA_ARGS__,) "\n", \
+			__func__, \
+			RTE_FMT_TAIL(__VA_ARGS__,)))
+
+#define PMD_DRV_LOG_RAW(level, ...) \
+	rte_log(RTE_LOG_ ## level, \
+		cpfl_logtype_driver, \
+		RTE_FMT("%s(): " \
+			RTE_FMT_HEAD(__VA_ARGS__,) "\n", \
+			__func__, \
+			RTE_FMT_TAIL(__VA_ARGS__,)))
+
+#define PMD_DRV_LOG(level, fmt, args...) \
+	PMD_DRV_LOG_RAW(level, fmt, ## args)
+
+#endif /* _CPFL_LOGS_H_ */
diff --git a/drivers/net/cpfl/cpfl_rxtx.c b/drivers/net/cpfl/cpfl_rxtx.c
new file mode 100644
index 0000000000..2b9c20928b
--- /dev/null
+++ b/drivers/net/cpfl/cpfl_rxtx.c
@@ -0,0 +1,244 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Intel Corporation
+ */
+
+#include <ethdev_driver.h>
+#include <rte_net.h>
+#include <rte_vect.h>
+
+#include "cpfl_ethdev.h"
+#include "cpfl_rxtx.h"
+
+static uint64_t
+cpfl_tx_offload_convert(uint64_t offload)
+{
+	uint64_t ol = 0;
+
+	if ((offload & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM) != 0)
+		ol |= IDPF_TX_OFFLOAD_IPV4_CKSUM;
+	if ((offload & RTE_ETH_TX_OFFLOAD_UDP_CKSUM) != 0)
+		ol |= IDPF_TX_OFFLOAD_UDP_CKSUM;
+	if ((offload & RTE_ETH_TX_OFFLOAD_TCP_CKSUM) != 0)
+		ol |= IDPF_TX_OFFLOAD_TCP_CKSUM;
+	if ((offload & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM) != 0)
+		ol |= IDPF_TX_OFFLOAD_SCTP_CKSUM;
+	if ((offload & RTE_ETH_TX_OFFLOAD_MULTI_SEGS) != 0)
+		ol |= IDPF_TX_OFFLOAD_MULTI_SEGS;
+	if ((offload & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) != 0)
+		ol |= IDPF_TX_OFFLOAD_MBUF_FAST_FREE;
+
+	return ol;
+}
+
+static const struct rte_memzone *
+cpfl_dma_zone_reserve(struct rte_eth_dev *dev, uint16_t queue_idx,
+		      uint16_t len, uint16_t queue_type,
+		      unsigned int socket_id, bool splitq)
+{
+	char ring_name[RTE_MEMZONE_NAMESIZE];
+	const struct rte_memzone *mz;
+	uint32_t ring_size;
+
+	memset(ring_name, 0, RTE_MEMZONE_NAMESIZE);
+	switch (queue_type) {
+	case VIRTCHNL2_QUEUE_TYPE_TX:
+		if (splitq)
+			ring_size = RTE_ALIGN(len * sizeof(struct idpf_flex_tx_sched_desc),
+					      CPFL_DMA_MEM_ALIGN);
+		else
+			ring_size = RTE_ALIGN(len * sizeof(struct idpf_flex_tx_desc),
+					      CPFL_DMA_MEM_ALIGN);
+		memcpy(ring_name, "cpfl Tx ring", sizeof("cpfl Tx ring"));
+		break;
+	case VIRTCHNL2_QUEUE_TYPE_RX:
+		if (splitq)
+			ring_size = RTE_ALIGN(len * sizeof(struct virtchnl2_rx_flex_desc_adv_nic_3),
+					      CPFL_DMA_MEM_ALIGN);
+		else
+			ring_size = RTE_ALIGN(len * sizeof(struct virtchnl2_singleq_rx_buf_desc),
+					      CPFL_DMA_MEM_ALIGN);
+		memcpy(ring_name, "cpfl Rx ring", sizeof("cpfl Rx ring"));
+		break;
+	case VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION:
+		ring_size = RTE_ALIGN(len * sizeof(struct idpf_splitq_tx_compl_desc),
+				      CPFL_DMA_MEM_ALIGN);
+		memcpy(ring_name, "cpfl Tx compl ring", sizeof("cpfl Tx compl ring"));
+		break;
+	case VIRTCHNL2_QUEUE_TYPE_RX_BUFFER:
+		ring_size = RTE_ALIGN(len * sizeof(struct virtchnl2_splitq_rx_buf_desc),
+				      CPFL_DMA_MEM_ALIGN);
+		memcpy(ring_name, "cpfl Rx buf ring", sizeof("cpfl Rx buf ring"));
+		break;
+	default:
+		PMD_INIT_LOG(ERR, "Invalid queue type");
+		return NULL;
+	}
+
+	mz = rte_eth_dma_zone_reserve(dev, ring_name, queue_idx,
+				      ring_size, CPFL_RING_BASE_ALIGN,
+				      socket_id);
+	if (mz == NULL) {
+		PMD_INIT_LOG(ERR, "Failed to reserve DMA memory for ring");
+		return NULL;
+	}
+
+	/* Zero all the descriptors in the ring. */
+	memset(mz->addr, 0, ring_size);
+
+	return mz;
+}
+
+static void
+cpfl_dma_zone_release(const struct rte_memzone *mz)
+{
+	rte_memzone_free(mz);
+}
+
+static int
+cpfl_tx_complq_setup(struct rte_eth_dev *dev, struct idpf_tx_queue *txq,
+		     uint16_t queue_idx, uint16_t nb_desc,
+		     unsigned int socket_id)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	const struct rte_memzone *mz;
+	struct idpf_tx_queue *cq;
+	int ret;
+
+	cq = rte_zmalloc_socket("cpfl splitq cq",
+				sizeof(struct idpf_tx_queue),
+				RTE_CACHE_LINE_SIZE,
+				socket_id);
+	if (cq == NULL) {
+		PMD_INIT_LOG(ERR, "Failed to allocate memory for Tx compl queue");
+		ret = -ENOMEM;
+		goto err_cq_alloc;
+	}
+
+	cq->nb_tx_desc = nb_desc;
+	cq->queue_id = vport->chunks_info.tx_compl_start_qid + queue_idx;
+	cq->port_id = dev->data->port_id;
+	cq->txqs = dev->data->tx_queues;
+	cq->tx_start_qid = vport->chunks_info.tx_start_qid;
+
+	mz = cpfl_dma_zone_reserve(dev, queue_idx, nb_desc,
+				   VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION,
+				   socket_id, true);
+	if (mz == NULL) {
+		ret = -ENOMEM;
+		goto err_mz_reserve;
+	}
+	cq->tx_ring_phys_addr = mz->iova;
+	cq->compl_ring = mz->addr;
+	cq->mz = mz;
+	reset_split_tx_complq(cq);
+
+	txq->complq = cq;
+
+	return 0;
+
+err_mz_reserve:
+	rte_free(cq);
+err_cq_alloc:
+	return ret;
+}
+
+int
+cpfl_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
+		    uint16_t nb_desc, unsigned int socket_id,
+		    const struct rte_eth_txconf *tx_conf)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_adapter *adapter = vport->adapter;
+	uint16_t tx_rs_thresh, tx_free_thresh;
+	struct idpf_hw *hw = &adapter->hw;
+	const struct rte_memzone *mz;
+	struct idpf_tx_queue *txq;
+	uint64_t offloads;
+	uint16_t len;
+	bool is_splitq;
+	int ret;
+
+	offloads = tx_conf->offloads | dev->data->dev_conf.txmode.offloads;
+
+	tx_rs_thresh = (uint16_t)((tx_conf->tx_rs_thresh > 0) ?
+		tx_conf->tx_rs_thresh : CPFL_DEFAULT_TX_RS_THRESH);
+	tx_free_thresh = (uint16_t)((tx_conf->tx_free_thresh > 0) ?
+		tx_conf->tx_free_thresh : CPFL_DEFAULT_TX_FREE_THRESH);
+	if (check_tx_thresh(nb_desc, tx_rs_thresh, tx_free_thresh) != 0)
+		return -EINVAL;
+
+	/* Allocate the TX queue data structure. */
+	txq = rte_zmalloc_socket("cpfl txq",
+				 sizeof(struct idpf_tx_queue),
+				 RTE_CACHE_LINE_SIZE,
+				 socket_id);
+	if (txq == NULL) {
+		PMD_INIT_LOG(ERR, "Failed to allocate memory for tx queue structure");
+		ret = -ENOMEM;
+		goto err_txq_alloc;
+	}
+
+	is_splitq = !!(vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT);
+
+	txq->nb_tx_desc = nb_desc;
+	txq->rs_thresh = tx_rs_thresh;
+	txq->free_thresh = tx_free_thresh;
+	txq->queue_id = vport->chunks_info.tx_start_qid + queue_idx;
+	txq->port_id = dev->data->port_id;
+	txq->offloads = cpfl_tx_offload_convert(offloads);
+	txq->tx_deferred_start = tx_conf->tx_deferred_start;
+
+	if (is_splitq)
+		len = 2 * nb_desc;
+	else
+		len = nb_desc;
+	txq->sw_nb_desc = len;
+
+	/* Allocate TX hardware ring descriptors. */
+	mz = cpfl_dma_zone_reserve(dev, queue_idx, nb_desc, VIRTCHNL2_QUEUE_TYPE_TX,
+				   socket_id, is_splitq);
+	if (mz == NULL) {
+		ret = -ENOMEM;
+		goto err_mz_reserve;
+	}
+	txq->tx_ring_phys_addr = mz->iova;
+	txq->mz = mz;
+
+	txq->sw_ring = rte_zmalloc_socket("cpfl tx sw ring",
+					  sizeof(struct idpf_tx_entry) * len,
+					  RTE_CACHE_LINE_SIZE, socket_id);
+	if (txq->sw_ring == NULL) {
+		PMD_INIT_LOG(ERR, "Failed to allocate memory for SW TX ring");
+		ret = -ENOMEM;
+		goto err_sw_ring_alloc;
+	}
+
+	if (!is_splitq) {
+		txq->tx_ring = mz->addr;
+		reset_single_tx_queue(txq);
+	} else {
+		txq->desc_ring = mz->addr;
+		reset_split_tx_descq(txq);
+
+		/* Setup tx completion queue if split model */
+		ret = cpfl_tx_complq_setup(dev, txq, queue_idx,
+					   2 * nb_desc, socket_id);
+		if (ret != 0)
+			goto err_complq_setup;
+	}
+
+	txq->qtx_tail = hw->hw_addr + (vport->chunks_info.tx_qtail_start +
+			queue_idx * vport->chunks_info.tx_qtail_spacing);
+	txq->q_set = true;
+	dev->data->tx_queues[queue_idx] = txq;
+
+	return 0;
+
+err_complq_setup:
+err_sw_ring_alloc:
+	cpfl_dma_zone_release(mz);
+err_mz_reserve:
+	rte_free(txq);
+err_txq_alloc:
+	return ret;
+}
diff --git a/drivers/net/cpfl/cpfl_rxtx.h b/drivers/net/cpfl/cpfl_rxtx.h
new file mode 100644
index 0000000000..232630c5e9
--- /dev/null
+++ b/drivers/net/cpfl/cpfl_rxtx.h
@@ -0,0 +1,25 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Intel Corporation
+ */
+
+#ifndef _CPFL_RXTX_H_
+#define _CPFL_RXTX_H_
+
+#include <idpf_common_rxtx.h>
+#include "cpfl_ethdev.h"
+
+/* Queue length (QLEN) must be a whole multiple of 32 descriptors. */
+#define CPFL_ALIGN_RING_DESC	32
+#define CPFL_MIN_RING_DESC	32
+#define CPFL_MAX_RING_DESC	4096
+#define CPFL_DMA_MEM_ALIGN	4096
+/* Base address of the HW descriptor ring should be 128B aligned. */
+#define CPFL_RING_BASE_ALIGN	128
+
+#define CPFL_DEFAULT_TX_RS_THRESH	32
+#define CPFL_DEFAULT_TX_FREE_THRESH	32
+
+int cpfl_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
+			uint16_t nb_desc, unsigned int socket_id,
+			const struct rte_eth_txconf *tx_conf);
+#endif /* _CPFL_RXTX_H_ */
diff --git a/drivers/net/cpfl/meson.build b/drivers/net/cpfl/meson.build
new file mode 100644
index 0000000000..c721732b50
--- /dev/null
+++ b/drivers/net/cpfl/meson.build
@@ -0,0 +1,14 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2023 Intel Corporation
+
+if is_windows
+    build = false
+    reason = 'not supported on Windows'
+    subdir_done()
+endif
+
+deps += ['common_idpf']
+
+sources = files(
+        'cpfl_ethdev.c',
+)
\ No newline at end of file
diff --git a/drivers/net/meson.build b/drivers/net/meson.build
index 6470bf3636..a8ca338875 100644
--- a/drivers/net/meson.build
+++ b/drivers/net/meson.build
@@ -13,6 +13,7 @@ drivers = [
         'bnxt',
         'bonding',
         'cnxk',
+        'cpfl',
         'cxgbe',
         'dpaa',
         'dpaa2',
-- 
2.25.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v7 02/21] net/cpfl: add Tx queue setup
  2023-02-16  0:29         ` [PATCH v7 " Mingxia Liu
  2023-02-16  0:29           ` [PATCH v7 01/21] net/cpfl: support device initialization Mingxia Liu
@ 2023-02-16  0:29           ` Mingxia Liu
  2023-02-27 21:44             ` Ferruh Yigit
  2023-02-16  0:29           ` [PATCH v7 03/21] net/cpfl: add Rx " Mingxia Liu
                             ` (20 subsequent siblings)
  22 siblings, 1 reply; 263+ messages in thread
From: Mingxia Liu @ 2023-02-16  0:29 UTC (permalink / raw)
  To: dev, beilei.xing, yuying.zhang; +Cc: Mingxia Liu

Add support for tx_queue_setup ops.

In the single queue model, the same descriptor queue is used by SW to
post buffer descriptors to HW and by HW to post completed descriptors
to SW.

In the split queue model, "RX buffer queues" are used to pass
descriptor buffers from SW to HW while Rx queues are used only to
pass the descriptor completions, that is, descriptors that point
to completed buffers, from HW to SW. This is contrary to the single
queue model in which Rx queues are used for both purposes.
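
For reference, a minimal application-side use of the new op (port id,
queue id, ring size and NUMA node are illustrative; a NULL txconf
selects the defaults advertised via dev_infos_get):

	int ret;

	ret = rte_eth_tx_queue_setup(port_id, 0, 512, rte_socket_id(), NULL);
	if (ret != 0)
		rte_exit(EXIT_FAILURE, "Tx queue setup failed\n");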

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/net/cpfl/cpfl_ethdev.c | 13 +++++++++++++
 drivers/net/cpfl/cpfl_rxtx.c   |  8 ++++----
 drivers/net/cpfl/meson.build   |  1 +
 3 files changed, 18 insertions(+), 4 deletions(-)

diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index fe0061133c..5ca21c9772 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -12,6 +12,7 @@
 #include <rte_alarm.h>
 
 #include "cpfl_ethdev.h"
+#include "cpfl_rxtx.h"
 
 #define CPFL_TX_SINGLE_Q	"tx_single"
 #define CPFL_RX_SINGLE_Q	"rx_single"
@@ -96,6 +97,17 @@ cpfl_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->max_mtu = vport->max_mtu;
 	dev_info->min_mtu = RTE_ETHER_MIN_MTU;
 
+	dev_info->default_txconf = (struct rte_eth_txconf) {
+		.tx_free_thresh = CPFL_DEFAULT_TX_FREE_THRESH,
+		.tx_rs_thresh = CPFL_DEFAULT_TX_RS_THRESH,
+	};
+
+	dev_info->tx_desc_lim = (struct rte_eth_desc_lim) {
+		.nb_max = CPFL_MAX_RING_DESC,
+		.nb_min = CPFL_MIN_RING_DESC,
+		.nb_align = CPFL_ALIGN_RING_DESC,
+	};
+
 	return 0;
 }
 
@@ -513,6 +525,7 @@ cpfl_adapter_ext_init(struct rte_pci_device *pci_dev, struct cpfl_adapter_ext *a
 static const struct eth_dev_ops cpfl_eth_dev_ops = {
 	.dev_configure			= cpfl_dev_configure,
 	.dev_close			= cpfl_dev_close,
+	.tx_queue_setup			= cpfl_tx_queue_setup,
 	.dev_infos_get			= cpfl_dev_info_get,
 	.link_update			= cpfl_dev_link_update,
 	.dev_supported_ptypes_get	= cpfl_dev_supported_ptypes_get,
diff --git a/drivers/net/cpfl/cpfl_rxtx.c b/drivers/net/cpfl/cpfl_rxtx.c
index 2b9c20928b..5b69ac0009 100644
--- a/drivers/net/cpfl/cpfl_rxtx.c
+++ b/drivers/net/cpfl/cpfl_rxtx.c
@@ -130,7 +130,7 @@ cpfl_tx_complq_setup(struct rte_eth_dev *dev, struct idpf_tx_queue *txq,
 	cq->tx_ring_phys_addr = mz->iova;
 	cq->compl_ring = mz->addr;
 	cq->mz = mz;
-	reset_split_tx_complq(cq);
+	idpf_qc_split_tx_complq_reset(cq);
 
 	txq->complq = cq;
 
@@ -164,7 +164,7 @@ cpfl_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 		tx_conf->tx_rs_thresh : CPFL_DEFAULT_TX_RS_THRESH);
 	tx_free_thresh = (uint16_t)((tx_conf->tx_free_thresh > 0) ?
 		tx_conf->tx_free_thresh : CPFL_DEFAULT_TX_FREE_THRESH);
-	if (check_tx_thresh(nb_desc, tx_rs_thresh, tx_free_thresh) != 0)
+	if (idpf_qc_tx_thresh_check(nb_desc, tx_rs_thresh, tx_free_thresh) != 0)
 		return -EINVAL;
 
 	/* Allocate the TX queue data structure. */
@@ -215,10 +215,10 @@ cpfl_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 
 	if (!is_splitq) {
 		txq->tx_ring = mz->addr;
-		reset_single_tx_queue(txq);
+		idpf_qc_single_tx_queue_reset(txq);
 	} else {
 		txq->desc_ring = mz->addr;
-		reset_split_tx_descq(txq);
+		idpf_qc_split_tx_descq_reset(txq);
 
 		/* Setup tx completion queue if split model */
 		ret = cpfl_tx_complq_setup(dev, txq, queue_idx,
diff --git a/drivers/net/cpfl/meson.build b/drivers/net/cpfl/meson.build
index c721732b50..1894423689 100644
--- a/drivers/net/cpfl/meson.build
+++ b/drivers/net/cpfl/meson.build
@@ -11,4 +11,5 @@ deps += ['common_idpf']
 
 sources = files(
         'cpfl_ethdev.c',
+        'cpfl_rxtx.c',
 )
\ No newline at end of file
-- 
2.25.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v7 03/21] net/cpfl: add Rx queue setup
  2023-02-16  0:29         ` [PATCH v7 " Mingxia Liu
  2023-02-16  0:29           ` [PATCH v7 01/21] net/cpfl: support device initialization Mingxia Liu
  2023-02-16  0:29           ` [PATCH v7 02/21] net/cpfl: add Tx queue setup Mingxia Liu
@ 2023-02-16  0:29           ` Mingxia Liu
  2023-02-27 21:46             ` Ferruh Yigit
  2023-02-16  0:29           ` [PATCH v7 04/21] net/cpfl: support device start and stop Mingxia Liu
                             ` (19 subsequent siblings)
  22 siblings, 1 reply; 263+ messages in thread
From: Mingxia Liu @ 2023-02-16  0:29 UTC (permalink / raw)
  To: dev, beilei.xing, yuying.zhang; +Cc: Mingxia Liu

Add support for rx_queue_setup ops.
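
For reference, a minimal application-side sketch (names and sizes are
illustrative; assumes EAL init and <rte_ethdev.h>/<rte_mbuf.h>):

    /* One Rx queue backed by an mbuf pool; in the split queue model
     * the driver internally creates the two buffer queues per Rx
     * queue, as implemented in this patch.
     */
    struct rte_mempool *mp = rte_pktmbuf_pool_create("rx_pool", 4096,
                                256, 0, RTE_MBUF_DEFAULT_BUF_SIZE,
                                rte_socket_id());
    if (mp == NULL)
        rte_exit(EXIT_FAILURE, "mbuf pool creation failed\n");
    if (rte_eth_rx_queue_setup(0, 0, 1024, rte_eth_dev_socket_id(0),
                               NULL, mp) != 0)
        rte_exit(EXIT_FAILURE, "rx_queue_setup failed\n");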

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/net/cpfl/cpfl_ethdev.c |  11 ++
 drivers/net/cpfl/cpfl_rxtx.c   | 232 +++++++++++++++++++++++++++++++++
 drivers/net/cpfl/cpfl_rxtx.h   |   6 +
 3 files changed, 249 insertions(+)

diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index 5ca21c9772..3029f03d02 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -102,12 +102,22 @@ cpfl_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 		.tx_rs_thresh = CPFL_DEFAULT_TX_RS_THRESH,
 	};
 
+	dev_info->default_rxconf = (struct rte_eth_rxconf) {
+		.rx_free_thresh = CPFL_DEFAULT_RX_FREE_THRESH,
+	};
+
 	dev_info->tx_desc_lim = (struct rte_eth_desc_lim) {
 		.nb_max = CPFL_MAX_RING_DESC,
 		.nb_min = CPFL_MIN_RING_DESC,
 		.nb_align = CPFL_ALIGN_RING_DESC,
 	};
 
+	dev_info->rx_desc_lim = (struct rte_eth_desc_lim) {
+		.nb_max = CPFL_MAX_RING_DESC,
+		.nb_min = CPFL_MIN_RING_DESC,
+		.nb_align = CPFL_ALIGN_RING_DESC,
+	};
+
 	return 0;
 }
 
@@ -525,6 +535,7 @@ cpfl_adapter_ext_init(struct rte_pci_device *pci_dev, struct cpfl_adapter_ext *a
 static const struct eth_dev_ops cpfl_eth_dev_ops = {
 	.dev_configure			= cpfl_dev_configure,
 	.dev_close			= cpfl_dev_close,
+	.rx_queue_setup			= cpfl_rx_queue_setup,
 	.tx_queue_setup			= cpfl_tx_queue_setup,
 	.dev_infos_get			= cpfl_dev_info_get,
 	.link_update			= cpfl_dev_link_update,
diff --git a/drivers/net/cpfl/cpfl_rxtx.c b/drivers/net/cpfl/cpfl_rxtx.c
index 5b69ac0009..042b848ce2 100644
--- a/drivers/net/cpfl/cpfl_rxtx.c
+++ b/drivers/net/cpfl/cpfl_rxtx.c
@@ -9,6 +9,25 @@
 #include "cpfl_ethdev.h"
 #include "cpfl_rxtx.h"
 
+static uint64_t
+cpfl_rx_offload_convert(uint64_t offload)
+{
+	uint64_t ol = 0;
+
+	if ((offload & RTE_ETH_RX_OFFLOAD_IPV4_CKSUM) != 0)
+		ol |= IDPF_RX_OFFLOAD_IPV4_CKSUM;
+	if ((offload & RTE_ETH_RX_OFFLOAD_UDP_CKSUM) != 0)
+		ol |= IDPF_RX_OFFLOAD_UDP_CKSUM;
+	if ((offload & RTE_ETH_RX_OFFLOAD_TCP_CKSUM) != 0)
+		ol |= IDPF_RX_OFFLOAD_TCP_CKSUM;
+	if ((offload & RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM) != 0)
+		ol |= IDPF_RX_OFFLOAD_OUTER_IPV4_CKSUM;
+	if ((offload & RTE_ETH_RX_OFFLOAD_TIMESTAMP) != 0)
+		ol |= IDPF_RX_OFFLOAD_TIMESTAMP;
+
+	return ol;
+}
+
 static uint64_t
 cpfl_tx_offload_convert(uint64_t offload)
 {
@@ -94,6 +113,219 @@ cpfl_dma_zone_release(const struct rte_memzone *mz)
 	rte_memzone_free(mz);
 }
 
+static int
+cpfl_rx_split_bufq_setup(struct rte_eth_dev *dev, struct idpf_rx_queue *rxq,
+			 uint16_t queue_idx, uint16_t rx_free_thresh,
+			 uint16_t nb_desc, unsigned int socket_id,
+			 struct rte_mempool *mp, uint8_t bufq_id)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_adapter *adapter = vport->adapter;
+	struct idpf_hw *hw = &adapter->hw;
+	const struct rte_memzone *mz;
+	struct idpf_rx_queue *bufq;
+	uint16_t len;
+	int ret;
+
+	bufq = rte_zmalloc_socket("cpfl bufq",
+				   sizeof(struct idpf_rx_queue),
+				   RTE_CACHE_LINE_SIZE,
+				   socket_id);
+	if (bufq == NULL) {
+		PMD_INIT_LOG(ERR, "Failed to allocate memory for rx buffer queue.");
+		ret = -ENOMEM;
+		goto err_bufq1_alloc;
+	}
+
+	bufq->mp = mp;
+	bufq->nb_rx_desc = nb_desc;
+	bufq->rx_free_thresh = rx_free_thresh;
+	bufq->queue_id = vport->chunks_info.rx_buf_start_qid + queue_idx;
+	bufq->port_id = dev->data->port_id;
+	bufq->rx_hdr_len = 0;
+	bufq->adapter = adapter;
+
+	len = rte_pktmbuf_data_room_size(bufq->mp) - RTE_PKTMBUF_HEADROOM;
+	bufq->rx_buf_len = len;
+
+	/* Allocate a little more to support bulk allocate. */
+	len = nb_desc + IDPF_RX_MAX_BURST;
+
+	mz = cpfl_dma_zone_reserve(dev, queue_idx, len,
+				   VIRTCHNL2_QUEUE_TYPE_RX_BUFFER,
+				   socket_id, true);
+	if (mz == NULL) {
+		ret = -ENOMEM;
+		goto err_mz_reserve;
+	}
+
+	bufq->rx_ring_phys_addr = mz->iova;
+	bufq->rx_ring = mz->addr;
+	bufq->mz = mz;
+
+	bufq->sw_ring =
+		rte_zmalloc_socket("cpfl rx bufq sw ring",
+				   sizeof(struct rte_mbuf *) * len,
+				   RTE_CACHE_LINE_SIZE,
+				   socket_id);
+	if (bufq->sw_ring == NULL) {
+		PMD_INIT_LOG(ERR, "Failed to allocate memory for SW ring");
+		ret = -ENOMEM;
+		goto err_sw_ring_alloc;
+	}
+
+	idpf_qc_split_rx_bufq_reset(bufq);
+	bufq->qrx_tail = hw->hw_addr + (vport->chunks_info.rx_buf_qtail_start +
+			 queue_idx * vport->chunks_info.rx_buf_qtail_spacing);
+	bufq->q_set = true;
+
+	if (bufq_id == 1) {
+		rxq->bufq1 = bufq;
+	} else if (bufq_id == 2) {
+		rxq->bufq2 = bufq;
+	} else {
+		PMD_INIT_LOG(ERR, "Invalid buffer queue index.");
+		ret = -EINVAL;
+		goto err_bufq_id;
+	}
+
+	return 0;
+
+err_bufq_id:
+	rte_free(bufq->sw_ring);
+err_sw_ring_alloc:
+	cpfl_dma_zone_release(mz);
+err_mz_reserve:
+	rte_free(bufq);
+err_bufq1_alloc:
+	return ret;
+}
+
+static void
+cpfl_rx_split_bufq_release(struct idpf_rx_queue *bufq)
+{
+	rte_free(bufq->sw_ring);
+	cpfl_dma_zone_release(bufq->mz);
+	rte_free(bufq);
+}
+
+int
+cpfl_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
+		    uint16_t nb_desc, unsigned int socket_id,
+		    const struct rte_eth_rxconf *rx_conf,
+		    struct rte_mempool *mp)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_adapter *adapter = vport->adapter;
+	struct idpf_hw *hw = &adapter->hw;
+	const struct rte_memzone *mz;
+	struct idpf_rx_queue *rxq;
+	uint16_t rx_free_thresh;
+	uint64_t offloads;
+	bool is_splitq;
+	uint16_t len;
+	int ret;
+
+	offloads = rx_conf->offloads | dev->data->dev_conf.rxmode.offloads;
+
+	/* Check free threshold */
+	rx_free_thresh = (rx_conf->rx_free_thresh == 0) ?
+		CPFL_DEFAULT_RX_FREE_THRESH :
+		rx_conf->rx_free_thresh;
+	if (idpf_qc_rx_thresh_check(nb_desc, rx_free_thresh) != 0)
+		return -EINVAL;
+
+	/* Setup Rx queue */
+	rxq = rte_zmalloc_socket("cpfl rxq",
+				 sizeof(struct idpf_rx_queue),
+				 RTE_CACHE_LINE_SIZE,
+				 socket_id);
+	if (rxq == NULL) {
+		PMD_INIT_LOG(ERR, "Failed to allocate memory for rx queue data structure");
+		ret = -ENOMEM;
+		goto err_rxq_alloc;
+	}
+
+	is_splitq = !!(vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT);
+
+	rxq->mp = mp;
+	rxq->nb_rx_desc = nb_desc;
+	rxq->rx_free_thresh = rx_free_thresh;
+	rxq->queue_id = vport->chunks_info.rx_start_qid + queue_idx;
+	rxq->port_id = dev->data->port_id;
+	rxq->rx_deferred_start = rx_conf->rx_deferred_start;
+	rxq->rx_hdr_len = 0;
+	rxq->adapter = adapter;
+	rxq->offloads = cpfl_rx_offload_convert(offloads);
+
+	len = rte_pktmbuf_data_room_size(rxq->mp) - RTE_PKTMBUF_HEADROOM;
+	rxq->rx_buf_len = len;
+
+	/* Allocate a little more to support bulk allocate. */
+	len = nb_desc + IDPF_RX_MAX_BURST;
+	mz = cpfl_dma_zone_reserve(dev, queue_idx, len, VIRTCHNL2_QUEUE_TYPE_RX,
+				   socket_id, is_splitq);
+	if (mz == NULL) {
+		ret = -ENOMEM;
+		goto err_mz_reserve;
+	}
+	rxq->rx_ring_phys_addr = mz->iova;
+	rxq->rx_ring = mz->addr;
+	rxq->mz = mz;
+
+	if (!is_splitq) {
+		rxq->sw_ring = rte_zmalloc_socket("cpfl rxq sw ring",
+						  sizeof(struct rte_mbuf *) * len,
+						  RTE_CACHE_LINE_SIZE,
+						  socket_id);
+		if (rxq->sw_ring == NULL) {
+			PMD_INIT_LOG(ERR, "Failed to allocate memory for SW ring");
+			ret = -ENOMEM;
+			goto err_sw_ring_alloc;
+		}
+
+		idpf_qc_single_rx_queue_reset(rxq);
+		rxq->qrx_tail = hw->hw_addr + (vport->chunks_info.rx_qtail_start +
+				queue_idx * vport->chunks_info.rx_qtail_spacing);
+	} else {
+		idpf_qc_split_rx_descq_reset(rxq);
+
+		/* Setup Rx buffer queues */
+		ret = cpfl_rx_split_bufq_setup(dev, rxq, 2 * queue_idx,
+					       rx_free_thresh, nb_desc,
+					       socket_id, mp, 1);
+		if (ret != 0) {
+			PMD_INIT_LOG(ERR, "Failed to setup buffer queue 1");
+			ret = -EINVAL;
+			goto err_bufq1_setup;
+		}
+
+		ret = cpfl_rx_split_bufq_setup(dev, rxq, 2 * queue_idx + 1,
+					       rx_free_thresh, nb_desc,
+					       socket_id, mp, 2);
+		if (ret != 0) {
+			PMD_INIT_LOG(ERR, "Failed to setup buffer queue 2");
+			ret = -EINVAL;
+			goto err_bufq2_setup;
+		}
+	}
+
+	rxq->q_set = true;
+	dev->data->rx_queues[queue_idx] = rxq;
+
+	return 0;
+
+err_bufq2_setup:
+	cpfl_rx_split_bufq_release(rxq->bufq1);
+err_bufq1_setup:
+err_sw_ring_alloc:
+	cpfl_dma_zone_release(mz);
+err_mz_reserve:
+	rte_free(rxq);
+err_rxq_alloc:
+	return ret;
+}
+
 static int
 cpfl_tx_complq_setup(struct rte_eth_dev *dev, struct idpf_tx_queue *txq,
 		     uint16_t queue_idx, uint16_t nb_desc,
diff --git a/drivers/net/cpfl/cpfl_rxtx.h b/drivers/net/cpfl/cpfl_rxtx.h
index 232630c5e9..e0221abfa3 100644
--- a/drivers/net/cpfl/cpfl_rxtx.h
+++ b/drivers/net/cpfl/cpfl_rxtx.h
@@ -16,10 +16,16 @@
 /* Base address of the HW descriptor ring should be 128B aligned. */
 #define CPFL_RING_BASE_ALIGN	128
 
+#define CPFL_DEFAULT_RX_FREE_THRESH	32
+
 #define CPFL_DEFAULT_TX_RS_THRESH	32
 #define CPFL_DEFAULT_TX_FREE_THRESH	32
 
 int cpfl_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 			uint16_t nb_desc, unsigned int socket_id,
 			const struct rte_eth_txconf *tx_conf);
+int cpfl_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
+			uint16_t nb_desc, unsigned int socket_id,
+			const struct rte_eth_rxconf *rx_conf,
+			struct rte_mempool *mp);
 #endif /* _CPFL_RXTX_H_ */
-- 
2.25.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v7 04/21] net/cpfl: support device start and stop
  2023-02-16  0:29         ` [PATCH v7 " Mingxia Liu
                             ` (2 preceding siblings ...)
  2023-02-16  0:29           ` [PATCH v7 03/21] net/cpfl: add Rx " Mingxia Liu
@ 2023-02-16  0:29           ` Mingxia Liu
  2023-02-16  0:29           ` [PATCH v7 05/21] net/cpfl: support queue start Mingxia Liu
                             ` (18 subsequent siblings)
  22 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-02-16  0:29 UTC (permalink / raw)
  To: dev, beilei.xing, yuying.zhang; +Cc: Mingxia Liu

Add dev ops dev_start, dev_stop and link_update.
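
A minimal usage sketch of the three ops from the application side
(port id illustrative; assumes the port is already configured and its
queues set up):

    struct rte_eth_link link;

    if (rte_eth_dev_start(0) != 0)        /* -> cpfl_dev_start */
        rte_exit(EXIT_FAILURE, "dev_start failed\n");
    rte_eth_link_get_nowait(0, &link);    /* -> cpfl_dev_link_update */
    printf("link is %s\n", link.link_status ? "up" : "down");
    rte_eth_dev_stop(0);                  /* -> cpfl_dev_stop */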

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/net/cpfl/cpfl_ethdev.c | 35 ++++++++++++++++++++++++++++++++++
 1 file changed, 35 insertions(+)

diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index 3029f03d02..d1dfcfff9b 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -184,12 +184,45 @@ cpfl_dev_configure(struct rte_eth_dev *dev)
 	return 0;
 }
 
+static int
+cpfl_dev_start(struct rte_eth_dev *dev)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	int ret;
+
+	ret = idpf_vc_vport_ena_dis(vport, true);
+	if (ret != 0) {
+		PMD_DRV_LOG(ERR, "Failed to enable vport");
+		return ret;
+	}
+
+	vport->stopped = 0;
+
+	return 0;
+}
+
+static int
+cpfl_dev_stop(struct rte_eth_dev *dev)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+
+	if (vport->stopped == 1)
+		return 0;
+
+	idpf_vc_vport_ena_dis(vport, false);
+
+	vport->stopped = 1;
+
+	return 0;
+}
+
 static int
 cpfl_dev_close(struct rte_eth_dev *dev)
 {
 	struct idpf_vport *vport = dev->data->dev_private;
 	struct cpfl_adapter_ext *adapter = CPFL_ADAPTER_TO_EXT(vport->adapter);
 
+	cpfl_dev_stop(dev);
 	idpf_vport_deinit(vport);
 
 	adapter->cur_vports &= ~RTE_BIT32(vport->devarg_id);
@@ -538,6 +571,8 @@ static const struct eth_dev_ops cpfl_eth_dev_ops = {
 	.rx_queue_setup			= cpfl_rx_queue_setup,
 	.tx_queue_setup			= cpfl_tx_queue_setup,
 	.dev_infos_get			= cpfl_dev_info_get,
+	.dev_start			= cpfl_dev_start,
+	.dev_stop			= cpfl_dev_stop,
 	.link_update			= cpfl_dev_link_update,
 	.dev_supported_ptypes_get	= cpfl_dev_supported_ptypes_get,
 };
-- 
2.25.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v7 05/21] net/cpfl: support queue start
  2023-02-16  0:29         ` [PATCH v7 " Mingxia Liu
                             ` (3 preceding siblings ...)
  2023-02-16  0:29           ` [PATCH v7 04/21] net/cpfl: support device start and stop Mingxia Liu
@ 2023-02-16  0:29           ` Mingxia Liu
  2023-02-27 21:47             ` Ferruh Yigit
  2023-02-16  0:29           ` [PATCH v7 06/21] net/cpfl: support queue stop Mingxia Liu
                             ` (17 subsequent siblings)
  22 siblings, 1 reply; 263+ messages in thread
From: Mingxia Liu @ 2023-02-16  0:29 UTC (permalink / raw)
  To: dev, beilei.xing, yuying.zhang; +Cc: Mingxia Liu

Add support for these device ops:
 - rx_queue_start
 - tx_queue_start
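
A minimal sketch of driving these ops directly, using deferred start
(assumes an already configured port and an existing mbuf pool mp):

    struct rte_eth_rxconf rxconf = {
        .rx_deferred_start = 1,  /* dev_start will skip this queue */
    };

    rte_eth_rx_queue_setup(0, 0, 1024, rte_eth_dev_socket_id(0),
                           &rxconf, mp);
    rte_eth_dev_start(0);
    /* bring the deferred queue up explicitly: -> cpfl_rx_queue_start */
    rte_eth_dev_rx_queue_start(0, 0);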

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/net/cpfl/cpfl_ethdev.c |  41 ++++++++++
 drivers/net/cpfl/cpfl_rxtx.c   | 138 +++++++++++++++++++++++++++++++++
 drivers/net/cpfl/cpfl_rxtx.h   |   4 +
 3 files changed, 183 insertions(+)

diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index d1dfcfff9b..c4565e687b 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -184,12 +184,51 @@ cpfl_dev_configure(struct rte_eth_dev *dev)
 	return 0;
 }
 
+static int
+cpfl_start_queues(struct rte_eth_dev *dev)
+{
+	struct idpf_rx_queue *rxq;
+	struct idpf_tx_queue *txq;
+	int err = 0;
+	int i;
+
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		txq = dev->data->tx_queues[i];
+		if (txq == NULL || txq->tx_deferred_start)
+			continue;
+		err = cpfl_tx_queue_start(dev, i);
+		if (err != 0) {
+			PMD_DRV_LOG(ERR, "Fail to start Tx queue %u", i);
+			return err;
+		}
+	}
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		rxq = dev->data->rx_queues[i];
+		if (rxq == NULL || rxq->rx_deferred_start)
+			continue;
+		err = cpfl_rx_queue_start(dev, i);
+		if (err != 0) {
+			PMD_DRV_LOG(ERR, "Fail to start Rx queue %u", i);
+			return err;
+		}
+	}
+
+	return err;
+}
+
 static int
 cpfl_dev_start(struct rte_eth_dev *dev)
 {
 	struct idpf_vport *vport = dev->data->dev_private;
 	int ret;
 
+	ret = cpfl_start_queues(dev);
+	if (ret != 0) {
+		PMD_DRV_LOG(ERR, "Failed to start queues");
+		return ret;
+	}
+
 	ret = idpf_vc_vport_ena_dis(vport, true);
 	if (ret != 0) {
 		PMD_DRV_LOG(ERR, "Failed to enable vport");
@@ -574,6 +613,8 @@ static const struct eth_dev_ops cpfl_eth_dev_ops = {
 	.dev_start			= cpfl_dev_start,
 	.dev_stop			= cpfl_dev_stop,
 	.link_update			= cpfl_dev_link_update,
+	.rx_queue_start			= cpfl_rx_queue_start,
+	.tx_queue_start			= cpfl_tx_queue_start,
 	.dev_supported_ptypes_get	= cpfl_dev_supported_ptypes_get,
 };
 
diff --git a/drivers/net/cpfl/cpfl_rxtx.c b/drivers/net/cpfl/cpfl_rxtx.c
index 042b848ce2..e306a52b31 100644
--- a/drivers/net/cpfl/cpfl_rxtx.c
+++ b/drivers/net/cpfl/cpfl_rxtx.c
@@ -474,3 +474,141 @@ cpfl_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 err_txq_alloc:
 	return ret;
 }
+
+int
+cpfl_rx_queue_init(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+{
+	struct idpf_rx_queue *rxq;
+	int err;
+
+	if (rx_queue_id >= dev->data->nb_rx_queues)
+		return -EINVAL;
+
+	rxq = dev->data->rx_queues[rx_queue_id];
+
+	if (rxq == NULL || !rxq->q_set) {
+		PMD_DRV_LOG(ERR, "RX queue %u not available or setup",
+					rx_queue_id);
+		return -EINVAL;
+	}
+
+	if (rxq->bufq1 == NULL) {
+		/* Single queue */
+		err = idpf_qc_single_rxq_mbufs_alloc(rxq);
+		if (err != 0) {
+			PMD_DRV_LOG(ERR, "Failed to allocate RX queue mbuf");
+			return err;
+		}
+
+		rte_wmb();
+
+		/* Init the RX tail register. */
+		IDPF_PCI_REG_WRITE(rxq->qrx_tail, rxq->nb_rx_desc - 1);
+	} else {
+		/* Split queue */
+		err = idpf_qc_split_rxq_mbufs_alloc(rxq->bufq1);
+		if (err != 0) {
+			PMD_DRV_LOG(ERR, "Failed to allocate RX buffer queue mbuf");
+			return err;
+		}
+		err = idpf_qc_split_rxq_mbufs_alloc(rxq->bufq2);
+		if (err != 0) {
+			PMD_DRV_LOG(ERR, "Failed to allocate RX buffer queue mbuf");
+			return err;
+		}
+
+		rte_wmb();
+
+		/* Init the RX tail register. */
+		IDPF_PCI_REG_WRITE(rxq->bufq1->qrx_tail, rxq->bufq1->rx_tail);
+		IDPF_PCI_REG_WRITE(rxq->bufq2->qrx_tail, rxq->bufq2->rx_tail);
+	}
+
+	return err;
+}
+
+int
+cpfl_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_rx_queue *rxq =
+		dev->data->rx_queues[rx_queue_id];
+	int err = 0;
+
+	err = idpf_vc_rxq_config(vport, rxq);
+	if (err != 0) {
+		PMD_DRV_LOG(ERR, "Fail to configure Rx queue %u", rx_queue_id);
+		return err;
+	}
+
+	err = cpfl_rx_queue_init(dev, rx_queue_id);
+	if (err != 0) {
+		PMD_DRV_LOG(ERR, "Failed to init RX queue %u",
+			    rx_queue_id);
+		return err;
+	}
+
+	/* Ready to switch the queue on */
+	err = idpf_vc_queue_switch(vport, rx_queue_id, true, true);
+	if (err != 0) {
+		PMD_DRV_LOG(ERR, "Failed to switch RX queue %u on",
+			    rx_queue_id);
+	} else {
+		rxq->q_started = true;
+		dev->data->rx_queue_state[rx_queue_id] =
+			RTE_ETH_QUEUE_STATE_STARTED;
+	}
+
+	return err;
+}
+
+int
+cpfl_tx_queue_init(struct rte_eth_dev *dev, uint16_t tx_queue_id)
+{
+	struct idpf_tx_queue *txq;
+
+	if (tx_queue_id >= dev->data->nb_tx_queues)
+		return -EINVAL;
+
+	txq = dev->data->tx_queues[tx_queue_id];
+
+	/* Init the TX tail register. */
+	IDPF_PCI_REG_WRITE(txq->qtx_tail, 0);
+
+	return 0;
+}
+
+int
+cpfl_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_tx_queue *txq =
+		dev->data->tx_queues[tx_queue_id];
+	int err = 0;
+
+	err = idpf_vc_txq_config(vport, txq);
+	if (err != 0) {
+		PMD_DRV_LOG(ERR, "Fail to configure Tx queue %u", tx_queue_id);
+		return err;
+	}
+
+	err = cpfl_tx_queue_init(dev, tx_queue_id);
+	if (err != 0) {
+		PMD_DRV_LOG(ERR, "Failed to init TX queue %u",
+			    tx_queue_id);
+		return err;
+	}
+
+	/* Ready to switch the queue on */
+	err = idpf_vc_queue_switch(vport, tx_queue_id, false, true);
+	if (err != 0) {
+		PMD_DRV_LOG(ERR, "Failed to switch TX queue %u on",
+			    tx_queue_id);
+	} else {
+		txq->q_started = true;
+		dev->data->tx_queue_state[tx_queue_id] =
+			RTE_ETH_QUEUE_STATE_STARTED;
+	}
+
+	return err;
+}
diff --git a/drivers/net/cpfl/cpfl_rxtx.h b/drivers/net/cpfl/cpfl_rxtx.h
index e0221abfa3..716b2fefa4 100644
--- a/drivers/net/cpfl/cpfl_rxtx.h
+++ b/drivers/net/cpfl/cpfl_rxtx.h
@@ -28,4 +28,8 @@ int cpfl_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 			uint16_t nb_desc, unsigned int socket_id,
 			const struct rte_eth_rxconf *rx_conf,
 			struct rte_mempool *mp);
+int cpfl_rx_queue_init(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+int cpfl_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+int cpfl_tx_queue_init(struct rte_eth_dev *dev, uint16_t tx_queue_id);
+int cpfl_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id);
 #endif /* _CPFL_RXTX_H_ */
-- 
2.25.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v7 06/21] net/cpfl: support queue stop
  2023-02-16  0:29         ` [PATCH v7 " Mingxia Liu
                             ` (4 preceding siblings ...)
  2023-02-16  0:29           ` [PATCH v7 05/21] net/cpfl: support queue start Mingxia Liu
@ 2023-02-16  0:29           ` Mingxia Liu
  2023-02-27 21:48             ` Ferruh Yigit
  2023-02-16  0:29           ` [PATCH v7 07/21] net/cpfl: support queue release Mingxia Liu
                             ` (16 subsequent siblings)
  22 siblings, 1 reply; 263+ messages in thread
From: Mingxia Liu @ 2023-02-16  0:29 UTC (permalink / raw)
  To: dev, beilei.xing, yuying.zhang; +Cc: Mingxia Liu

Add support for these device ops:
 - rx_queue_stop
 - tx_queue_stop

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/net/cpfl/cpfl_ethdev.c | 10 +++-
 drivers/net/cpfl/cpfl_rxtx.c   | 87 ++++++++++++++++++++++++++++++++++
 drivers/net/cpfl/cpfl_rxtx.h   |  3 ++
 3 files changed, 99 insertions(+), 1 deletion(-)

diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index c4565e687b..f757fea530 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -232,12 +232,16 @@ cpfl_dev_start(struct rte_eth_dev *dev)
 	ret = idpf_vc_vport_ena_dis(vport, true);
 	if (ret != 0) {
 		PMD_DRV_LOG(ERR, "Failed to enable vport");
-		return ret;
+		goto err_vport;
 	}
 
 	vport->stopped = 0;
 
 	return 0;
+
+err_vport:
+	cpfl_stop_queues(dev);
+	return ret;
 }
 
 static int
@@ -250,6 +254,8 @@ cpfl_dev_stop(struct rte_eth_dev *dev)
 
 	idpf_vc_vport_ena_dis(vport, false);
 
+	cpfl_stop_queues(dev);
+
 	vport->stopped = 1;
 
 	return 0;
@@ -615,6 +621,8 @@ static const struct eth_dev_ops cpfl_eth_dev_ops = {
 	.link_update			= cpfl_dev_link_update,
 	.rx_queue_start			= cpfl_rx_queue_start,
 	.tx_queue_start			= cpfl_tx_queue_start,
+	.rx_queue_stop			= cpfl_rx_queue_stop,
+	.tx_queue_stop			= cpfl_tx_queue_stop,
 	.dev_supported_ptypes_get	= cpfl_dev_supported_ptypes_get,
 };
 
diff --git a/drivers/net/cpfl/cpfl_rxtx.c b/drivers/net/cpfl/cpfl_rxtx.c
index e306a52b31..de0f2a5723 100644
--- a/drivers/net/cpfl/cpfl_rxtx.c
+++ b/drivers/net/cpfl/cpfl_rxtx.c
@@ -612,3 +612,90 @@ cpfl_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
 
 	return err;
 }
+
+int
+cpfl_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_rx_queue *rxq;
+	int err;
+
+	if (rx_queue_id >= dev->data->nb_rx_queues)
+		return -EINVAL;
+
+	err = idpf_vc_queue_switch(vport, rx_queue_id, true, false);
+	if (err != 0) {
+		PMD_DRV_LOG(ERR, "Failed to switch RX queue %u off",
+			    rx_queue_id);
+		return err;
+	}
+
+	rxq = dev->data->rx_queues[rx_queue_id];
+	if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE) {
+		rxq->ops->release_mbufs(rxq);
+		idpf_qc_single_rx_queue_reset(rxq);
+	} else {
+		rxq->bufq1->ops->release_mbufs(rxq->bufq1);
+		rxq->bufq2->ops->release_mbufs(rxq->bufq2);
+		idpf_qc_split_rx_queue_reset(rxq);
+	}
+	dev->data->rx_queue_state[rx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
+
+	return 0;
+}
+
+int
+cpfl_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_tx_queue *txq;
+	int err;
+
+	if (tx_queue_id >= dev->data->nb_tx_queues)
+		return -EINVAL;
+
+	err = idpf_vc_queue_switch(vport, tx_queue_id, false, false);
+	if (err != 0) {
+		PMD_DRV_LOG(ERR, "Failed to switch TX queue %u off",
+			    tx_queue_id);
+		return err;
+	}
+
+	txq = dev->data->tx_queues[tx_queue_id];
+	txq->ops->release_mbufs(txq);
+	if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE) {
+		idpf_qc_single_tx_queue_reset(txq);
+	} else {
+		idpf_qc_split_tx_descq_reset(txq);
+		idpf_qc_split_tx_complq_reset(txq->complq);
+	}
+	dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
+
+	return 0;
+}
+
+void
+cpfl_stop_queues(struct rte_eth_dev *dev)
+{
+	struct idpf_rx_queue *rxq;
+	struct idpf_tx_queue *txq;
+	int i;
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		rxq = dev->data->rx_queues[i];
+		if (rxq == NULL)
+			continue;
+
+		if (cpfl_rx_queue_stop(dev, i) != 0)
+			PMD_DRV_LOG(WARNING, "Fail to stop Rx queue %d", i);
+	}
+
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		txq = dev->data->tx_queues[i];
+		if (txq == NULL)
+			continue;
+
+		if (cpfl_tx_queue_stop(dev, i) != 0)
+			PMD_DRV_LOG(WARNING, "Fail to stop Tx queue %d", i);
+	}
+}
diff --git a/drivers/net/cpfl/cpfl_rxtx.h b/drivers/net/cpfl/cpfl_rxtx.h
index 716b2fefa4..e9b810deaa 100644
--- a/drivers/net/cpfl/cpfl_rxtx.h
+++ b/drivers/net/cpfl/cpfl_rxtx.h
@@ -32,4 +32,7 @@ int cpfl_rx_queue_init(struct rte_eth_dev *dev, uint16_t rx_queue_id);
 int cpfl_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id);
 int cpfl_tx_queue_init(struct rte_eth_dev *dev, uint16_t tx_queue_id);
 int cpfl_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id);
+void cpfl_stop_queues(struct rte_eth_dev *dev);
+int cpfl_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id);
+int cpfl_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id);
 #endif /* _CPFL_RXTX_H_ */
-- 
2.25.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v7 07/21] net/cpfl: support queue release
  2023-02-16  0:29         ` [PATCH v7 " Mingxia Liu
                             ` (5 preceding siblings ...)
  2023-02-16  0:29           ` [PATCH v7 06/21] net/cpfl: support queue stop Mingxia Liu
@ 2023-02-16  0:29           ` Mingxia Liu
  2023-02-16  0:29           ` [PATCH v7 08/21] net/cpfl: support MTU configuration Mingxia Liu
                             ` (15 subsequent siblings)
  22 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-02-16  0:29 UTC (permalink / raw)
  To: dev, beilei.xing, yuying.zhang; +Cc: Mingxia Liu

Add support for queue operations:
 - rx_queue_release
 - tx_queue_release

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/net/cpfl/cpfl_ethdev.c |  2 ++
 drivers/net/cpfl/cpfl_rxtx.c   | 35 ++++++++++++++++++++++++++++++++++
 drivers/net/cpfl/cpfl_rxtx.h   |  2 ++
 3 files changed, 39 insertions(+)

diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index f757fea530..2e5bfac1c0 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -623,6 +623,8 @@ static const struct eth_dev_ops cpfl_eth_dev_ops = {
 	.tx_queue_start			= cpfl_tx_queue_start,
 	.rx_queue_stop			= cpfl_rx_queue_stop,
 	.tx_queue_stop			= cpfl_tx_queue_stop,
+	.rx_queue_release		= cpfl_dev_rx_queue_release,
+	.tx_queue_release		= cpfl_dev_tx_queue_release,
 	.dev_supported_ptypes_get	= cpfl_dev_supported_ptypes_get,
 };
 
diff --git a/drivers/net/cpfl/cpfl_rxtx.c b/drivers/net/cpfl/cpfl_rxtx.c
index de0f2a5723..3edba70b16 100644
--- a/drivers/net/cpfl/cpfl_rxtx.c
+++ b/drivers/net/cpfl/cpfl_rxtx.c
@@ -49,6 +49,14 @@ cpfl_tx_offload_convert(uint64_t offload)
 	return ol;
 }
 
+static const struct idpf_rxq_ops def_rxq_ops = {
+	.release_mbufs = idpf_qc_rxq_mbufs_release,
+};
+
+static const struct idpf_txq_ops def_txq_ops = {
+	.release_mbufs = idpf_qc_txq_mbufs_release,
+};
+
 static const struct rte_memzone *
 cpfl_dma_zone_reserve(struct rte_eth_dev *dev, uint16_t queue_idx,
 		      uint16_t len, uint16_t queue_type,
@@ -177,6 +185,7 @@ cpfl_rx_split_bufq_setup(struct rte_eth_dev *dev, struct idpf_rx_queue *rxq,
 	idpf_qc_split_rx_bufq_reset(bufq);
 	bufq->qrx_tail = hw->hw_addr + (vport->chunks_info.rx_buf_qtail_start +
 			 queue_idx * vport->chunks_info.rx_buf_qtail_spacing);
+	bufq->ops = &def_rxq_ops;
 	bufq->q_set = true;
 
 	if (bufq_id == 1) {
@@ -235,6 +244,12 @@ cpfl_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 	if (idpf_qc_rx_thresh_check(nb_desc, rx_free_thresh) != 0)
 		return -EINVAL;
 
+	/* Free memory if needed */
+	if (dev->data->rx_queues[queue_idx] != NULL) {
+		idpf_qc_rx_queue_release(dev->data->rx_queues[queue_idx]);
+		dev->data->rx_queues[queue_idx] = NULL;
+	}
+
 	/* Setup Rx queue */
 	rxq = rte_zmalloc_socket("cpfl rxq",
 				 sizeof(struct idpf_rx_queue),
@@ -287,6 +302,7 @@ cpfl_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 		idpf_qc_single_rx_queue_reset(rxq);
 		rxq->qrx_tail = hw->hw_addr + (vport->chunks_info.rx_qtail_start +
 				queue_idx * vport->chunks_info.rx_qtail_spacing);
+		rxq->ops = &def_rxq_ops;
 	} else {
 		idpf_qc_split_rx_descq_reset(rxq);
 
@@ -399,6 +415,12 @@ cpfl_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 	if (idpf_qc_tx_thresh_check(nb_desc, tx_rs_thresh, tx_free_thresh) != 0)
 		return -EINVAL;
 
+	/* Free memory if needed. */
+	if (dev->data->tx_queues[queue_idx] != NULL) {
+		idpf_qc_tx_queue_release(dev->data->tx_queues[queue_idx]);
+		dev->data->tx_queues[queue_idx] = NULL;
+	}
+
 	/* Allocate the TX queue data structure. */
 	txq = rte_zmalloc_socket("cpfl txq",
 				 sizeof(struct idpf_tx_queue),
@@ -461,6 +483,7 @@ cpfl_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 
 	txq->qtx_tail = hw->hw_addr + (vport->chunks_info.tx_qtail_start +
 			queue_idx * vport->chunks_info.tx_qtail_spacing);
+	txq->ops = &def_txq_ops;
 	txq->q_set = true;
 	dev->data->tx_queues[queue_idx] = txq;
 
@@ -674,6 +697,18 @@ cpfl_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
 	return 0;
 }
 
+void
+cpfl_dev_rx_queue_release(struct rte_eth_dev *dev, uint16_t qid)
+{
+	idpf_qc_rx_queue_release(dev->data->rx_queues[qid]);
+}
+
+void
+cpfl_dev_tx_queue_release(struct rte_eth_dev *dev, uint16_t qid)
+{
+	idpf_qc_tx_queue_release(dev->data->tx_queues[qid]);
+}
+
 void
 cpfl_stop_queues(struct rte_eth_dev *dev)
 {
diff --git a/drivers/net/cpfl/cpfl_rxtx.h b/drivers/net/cpfl/cpfl_rxtx.h
index e9b810deaa..f5882401dc 100644
--- a/drivers/net/cpfl/cpfl_rxtx.h
+++ b/drivers/net/cpfl/cpfl_rxtx.h
@@ -35,4 +35,6 @@ int cpfl_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id);
 void cpfl_stop_queues(struct rte_eth_dev *dev);
 int cpfl_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id);
 int cpfl_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+void cpfl_dev_tx_queue_release(struct rte_eth_dev *dev, uint16_t qid);
+void cpfl_dev_rx_queue_release(struct rte_eth_dev *dev, uint16_t qid);
 #endif /* _CPFL_RXTX_H_ */
-- 
2.25.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v7 08/21] net/cpfl: support MTU configuration
  2023-02-16  0:29         ` [PATCH v7 " Mingxia Liu
                             ` (6 preceding siblings ...)
  2023-02-16  0:29           ` [PATCH v7 07/21] net/cpfl: support queue release Mingxia Liu
@ 2023-02-16  0:29           ` Mingxia Liu
  2023-02-16  0:29           ` [PATCH v7 09/21] net/cpfl: support basic Rx data path Mingxia Liu
                             ` (14 subsequent siblings)
  22 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-02-16  0:29 UTC (permalink / raw)
  To: dev, beilei.xing, yuying.zhang; +Cc: Mingxia Liu

Add dev ops mtu_set.
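
A minimal usage sketch; as the implementation below requires, the port
must be stopped first (values illustrative):

    rte_eth_dev_stop(0);
    if (rte_eth_dev_set_mtu(0, 9000) != 0)   /* -> cpfl_dev_mtu_set */
        printf("MTU rejected; it must not exceed the vport max MTU\n");
    rte_eth_dev_start(0);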

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 doc/guides/nics/features/cpfl.ini |  1 +
 drivers/net/cpfl/cpfl_ethdev.c    | 27 +++++++++++++++++++++++++++
 2 files changed, 28 insertions(+)

diff --git a/doc/guides/nics/features/cpfl.ini b/doc/guides/nics/features/cpfl.ini
index a2d1ca9e15..470ba81579 100644
--- a/doc/guides/nics/features/cpfl.ini
+++ b/doc/guides/nics/features/cpfl.ini
@@ -7,6 +7,7 @@
 ; is selected.
 ;
 [Features]
+MTU update           = Y
 Linux                = Y
 x86-32               = Y
 x86-64               = Y
diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index 2e5bfac1c0..e2eb92c738 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -121,6 +121,27 @@ cpfl_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	return 0;
 }
 
+static int
+cpfl_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+
+	/* MTU setting is forbidden while the port is started */
+	if (dev->data->dev_started) {
+		PMD_DRV_LOG(ERR, "port must be stopped before configuration");
+		return -EBUSY;
+	}
+
+	if (mtu > vport->max_mtu) {
+		PMD_DRV_LOG(ERR, "MTU should be less than %d", vport->max_mtu);
+		return -EINVAL;
+	}
+
+	vport->max_pkt_len = mtu + CPFL_ETH_OVERHEAD;
+
+	return 0;
+}
+
 static const uint32_t *
 cpfl_dev_supported_ptypes_get(struct rte_eth_dev *dev __rte_unused)
 {
@@ -142,6 +163,7 @@ cpfl_dev_supported_ptypes_get(struct rte_eth_dev *dev __rte_unused)
 static int
 cpfl_dev_configure(struct rte_eth_dev *dev)
 {
+	struct idpf_vport *vport = dev->data->dev_private;
 	struct rte_eth_conf *conf = &dev->data->dev_conf;
 
 	if (conf->link_speeds & RTE_ETH_LINK_SPEED_FIXED) {
@@ -181,6 +203,10 @@ cpfl_dev_configure(struct rte_eth_dev *dev)
 		return -ENOTSUP;
 	}
 
+	vport->max_pkt_len =
+		(dev->data->mtu == 0) ? CPFL_DEFAULT_MTU : dev->data->mtu +
+		CPFL_ETH_OVERHEAD;
+
 	return 0;
 }
 
@@ -625,6 +651,7 @@ static const struct eth_dev_ops cpfl_eth_dev_ops = {
 	.tx_queue_stop			= cpfl_tx_queue_stop,
 	.rx_queue_release		= cpfl_dev_rx_queue_release,
 	.tx_queue_release		= cpfl_dev_tx_queue_release,
+	.mtu_set			= cpfl_dev_mtu_set,
 	.dev_supported_ptypes_get	= cpfl_dev_supported_ptypes_get,
 };
 
-- 
2.25.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v7 09/21] net/cpfl: support basic Rx data path
  2023-02-16  0:29         ` [PATCH v7 " Mingxia Liu
                             ` (7 preceding siblings ...)
  2023-02-16  0:29           ` [PATCH v7 08/21] net/cpfl: support MTU configuration Mingxia Liu
@ 2023-02-16  0:29           ` Mingxia Liu
  2023-02-16  0:29           ` [PATCH v7 10/21] net/cpfl: support basic Tx " Mingxia Liu
                             ` (13 subsequent siblings)
  22 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-02-16  0:29 UTC (permalink / raw)
  To: dev, beilei.xing, yuying.zhang; +Cc: Mingxia Liu

Add basic Rx support in split queue mode and single queue mode.
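
A minimal polling-loop sketch; rte_eth_rx_burst() dispatches to the
rx_pkt_burst function selected below (port/queue ids illustrative):

    struct rte_mbuf *bufs[32];

    for (;;) {
        uint16_t nb = rte_eth_rx_burst(0, 0, bufs, 32);

        /* process the packets here, then release them */
        for (uint16_t i = 0; i < nb; i++)
            rte_pktmbuf_free(bufs[i]);
    }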

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/net/cpfl/cpfl_ethdev.c |  2 ++
 drivers/net/cpfl/cpfl_rxtx.c   | 18 ++++++++++++++++++
 drivers/net/cpfl/cpfl_rxtx.h   |  1 +
 3 files changed, 21 insertions(+)

diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index e2eb92c738..ba66b284cc 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -255,6 +255,8 @@ cpfl_dev_start(struct rte_eth_dev *dev)
 		return ret;
 	}
 
+	cpfl_set_rx_function(dev);
+
 	ret = idpf_vc_vport_ena_dis(vport, true);
 	if (ret != 0) {
 		PMD_DRV_LOG(ERR, "Failed to enable vport");
diff --git a/drivers/net/cpfl/cpfl_rxtx.c b/drivers/net/cpfl/cpfl_rxtx.c
index 3edba70b16..568ae3ec68 100644
--- a/drivers/net/cpfl/cpfl_rxtx.c
+++ b/drivers/net/cpfl/cpfl_rxtx.c
@@ -734,3 +734,21 @@ cpfl_stop_queues(struct rte_eth_dev *dev)
 			PMD_DRV_LOG(WARNING, "Fail to stop Tx queue %d", i);
 	}
 }
+
+void
+cpfl_set_rx_function(struct rte_eth_dev *dev)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+
+	if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
+		PMD_DRV_LOG(NOTICE,
+			    "Using Split Scalar Rx (port %d).",
+			    dev->data->port_id);
+		dev->rx_pkt_burst = idpf_dp_splitq_recv_pkts;
+	} else {
+		PMD_DRV_LOG(NOTICE,
+			    "Using Single Scalar Rx (port %d).",
+			    dev->data->port_id);
+		dev->rx_pkt_burst = idpf_dp_singleq_recv_pkts;
+	}
+}
diff --git a/drivers/net/cpfl/cpfl_rxtx.h b/drivers/net/cpfl/cpfl_rxtx.h
index f5882401dc..a5dd388e1f 100644
--- a/drivers/net/cpfl/cpfl_rxtx.h
+++ b/drivers/net/cpfl/cpfl_rxtx.h
@@ -37,4 +37,5 @@ int cpfl_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id);
 int cpfl_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id);
 void cpfl_dev_tx_queue_release(struct rte_eth_dev *dev, uint16_t qid);
 void cpfl_dev_rx_queue_release(struct rte_eth_dev *dev, uint16_t qid);
+void cpfl_set_rx_function(struct rte_eth_dev *dev);
 #endif /* _CPFL_RXTX_H_ */
-- 
2.25.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v7 10/21] net/cpfl: support basic Tx data path
  2023-02-16  0:29         ` [PATCH v7 " Mingxia Liu
                             ` (8 preceding siblings ...)
  2023-02-16  0:29           ` [PATCH v7 09/21] net/cpfl: support basic Rx data path Mingxia Liu
@ 2023-02-16  0:29           ` Mingxia Liu
  2023-02-16  0:30           ` [PATCH v7 11/21] net/cpfl: support write back based on ITR expire Mingxia Liu
                             ` (12 subsequent siblings)
  22 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-02-16  0:29 UTC (permalink / raw)
  To: dev, beilei.xing, yuying.zhang; +Cc: Mingxia Liu

Add basic Tx support in split queue mode and single queue mode.
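
A minimal transmit sketch; rte_eth_tx_prepare() maps to the
tx_pkt_prepare hook (idpf_dp_prep_pkts) and rte_eth_tx_burst() to the
selected tx_pkt_burst (assumes bufs/nb hold mbufs ready to send):

    uint16_t nb_prep = rte_eth_tx_prepare(0, 0, bufs, nb);
    uint16_t nb_tx = rte_eth_tx_burst(0, 0, bufs, nb_prep);

    /* free whatever the HW did not accept */
    while (nb_tx < nb_prep)
        rte_pktmbuf_free(bufs[nb_tx++]);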

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/net/cpfl/cpfl_ethdev.c |  3 +++
 drivers/net/cpfl/cpfl_rxtx.c   | 20 ++++++++++++++++++++
 drivers/net/cpfl/cpfl_rxtx.h   |  1 +
 3 files changed, 24 insertions(+)

diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index ba66b284cc..f02cbd08d9 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -97,6 +97,8 @@ cpfl_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->max_mtu = vport->max_mtu;
 	dev_info->min_mtu = RTE_ETHER_MIN_MTU;
 
+	dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
+
 	dev_info->default_txconf = (struct rte_eth_txconf) {
 		.tx_free_thresh = CPFL_DEFAULT_TX_FREE_THRESH,
 		.tx_rs_thresh = CPFL_DEFAULT_TX_RS_THRESH,
@@ -256,6 +258,7 @@ cpfl_dev_start(struct rte_eth_dev *dev)
 	}
 
 	cpfl_set_rx_function(dev);
+	cpfl_set_tx_function(dev);
 
 	ret = idpf_vc_vport_ena_dis(vport, true);
 	if (ret != 0) {
diff --git a/drivers/net/cpfl/cpfl_rxtx.c b/drivers/net/cpfl/cpfl_rxtx.c
index 568ae3ec68..c250642719 100644
--- a/drivers/net/cpfl/cpfl_rxtx.c
+++ b/drivers/net/cpfl/cpfl_rxtx.c
@@ -752,3 +752,23 @@ cpfl_set_rx_function(struct rte_eth_dev *dev)
 		dev->rx_pkt_burst = idpf_dp_singleq_recv_pkts;
 	}
 }
+
+void
+cpfl_set_tx_function(struct rte_eth_dev *dev)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+
+	if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
+		PMD_DRV_LOG(NOTICE,
+			    "Using Split Scalar Tx (port %d).",
+			    dev->data->port_id);
+		dev->tx_pkt_burst = idpf_dp_splitq_xmit_pkts;
+		dev->tx_pkt_prepare = idpf_dp_prep_pkts;
+	} else {
+		PMD_DRV_LOG(NOTICE,
+			    "Using Single Scalar Tx (port %d).",
+			    dev->data->port_id);
+		dev->tx_pkt_burst = idpf_dp_singleq_xmit_pkts;
+		dev->tx_pkt_prepare = idpf_dp_prep_pkts;
+	}
+}
diff --git a/drivers/net/cpfl/cpfl_rxtx.h b/drivers/net/cpfl/cpfl_rxtx.h
index a5dd388e1f..5f8144e55f 100644
--- a/drivers/net/cpfl/cpfl_rxtx.h
+++ b/drivers/net/cpfl/cpfl_rxtx.h
@@ -38,4 +38,5 @@ int cpfl_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id);
 void cpfl_dev_tx_queue_release(struct rte_eth_dev *dev, uint16_t qid);
 void cpfl_dev_rx_queue_release(struct rte_eth_dev *dev, uint16_t qid);
 void cpfl_set_rx_function(struct rte_eth_dev *dev);
+void cpfl_set_tx_function(struct rte_eth_dev *dev);
 #endif /* _CPFL_RXTX_H_ */
-- 
2.25.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v7 11/21] net/cpfl: support write back based on ITR expire
  2023-02-16  0:29         ` [PATCH v7 " Mingxia Liu
                             ` (9 preceding siblings ...)
  2023-02-16  0:29           ` [PATCH v7 10/21] net/cpfl: support basic Tx " Mingxia Liu
@ 2023-02-16  0:30           ` Mingxia Liu
  2023-02-27 21:49             ` Ferruh Yigit
  2023-02-16  0:30           ` [PATCH v7 12/21] net/cpfl: support RSS Mingxia Liu
                             ` (11 subsequent siblings)
  22 siblings, 1 reply; 263+ messages in thread
From: Mingxia Liu @ 2023-02-16  0:30 UTC (permalink / raw)
  To: dev, beilei.xing, yuying.zhang; +Cc: Mingxia Liu

Enable write back on ITR expire, so that packets can be received one
by one.

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/net/cpfl/cpfl_ethdev.c | 45 +++++++++++++++++++++++++++++++++-
 drivers/net/cpfl/cpfl_ethdev.h |  2 ++
 2 files changed, 46 insertions(+), 1 deletion(-)

diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index f02cbd08d9..7e0630c605 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -212,6 +212,15 @@ cpfl_dev_configure(struct rte_eth_dev *dev)
 	return 0;
 }
 
+static int
+cpfl_config_rx_queues_irqs(struct rte_eth_dev *dev)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	uint16_t nb_rx_queues = dev->data->nb_rx_queues;
+
+	return idpf_vport_irq_map_config(vport, nb_rx_queues);
+}
+
 static int
 cpfl_start_queues(struct rte_eth_dev *dev)
 {
@@ -249,12 +258,37 @@ static int
 cpfl_dev_start(struct rte_eth_dev *dev)
 {
 	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_adapter *base = vport->adapter;
+	struct cpfl_adapter_ext *adapter = CPFL_ADAPTER_TO_EXT(base);
+	uint16_t num_allocated_vectors = base->caps.num_allocated_vectors;
+	uint16_t req_vecs_num;
 	int ret;
 
+	req_vecs_num = CPFL_DFLT_Q_VEC_NUM;
+	if (req_vecs_num + adapter->used_vecs_num > num_allocated_vectors) {
+		PMD_DRV_LOG(ERR, "The accumulated request vectors' number should be less than %d",
+			    num_allocated_vectors);
+		ret = -EINVAL;
+		goto err_vec;
+	}
+
+	ret = idpf_vc_vectors_alloc(vport, req_vecs_num);
+	if (ret != 0) {
+		PMD_DRV_LOG(ERR, "Failed to allocate interrupt vectors");
+		goto err_vec;
+	}
+	adapter->used_vecs_num += req_vecs_num;
+
+	ret = cpfl_config_rx_queues_irqs(dev);
+	if (ret != 0) {
+		PMD_DRV_LOG(ERR, "Failed to configure irqs");
+		goto err_irq;
+	}
+
 	ret = cpfl_start_queues(dev);
 	if (ret != 0) {
 		PMD_DRV_LOG(ERR, "Failed to start queues");
-		return ret;
+		goto err_startq;
 	}
 
 	cpfl_set_rx_function(dev);
@@ -272,6 +306,11 @@ cpfl_dev_start(struct rte_eth_dev *dev)
 
 err_vport:
 	cpfl_stop_queues(dev);
+err_startq:
+	idpf_vport_irq_unmap_config(vport, dev->data->nb_rx_queues);
+err_irq:
+	idpf_vc_vectors_dealloc(vport);
+err_vec:
 	return ret;
 }
 
@@ -287,6 +326,10 @@ cpfl_dev_stop(struct rte_eth_dev *dev)
 
 	cpfl_stop_queues(dev);
 
+	idpf_vport_irq_unmap_config(vport, dev->data->nb_rx_queues);
+
+	idpf_vc_vectors_dealloc(vport);
+
 	vport->stopped = 1;
 
 	return 0;
diff --git a/drivers/net/cpfl/cpfl_ethdev.h b/drivers/net/cpfl/cpfl_ethdev.h
index 9ca39b4558..cd7f560d19 100644
--- a/drivers/net/cpfl/cpfl_ethdev.h
+++ b/drivers/net/cpfl/cpfl_ethdev.h
@@ -24,6 +24,8 @@
 
 #define CPFL_INVALID_VPORT_IDX	0xffff
 
+#define CPFL_DFLT_Q_VEC_NUM	1
+
 #define CPFL_MIN_BUF_SIZE	1024
 #define CPFL_MAX_FRAME_SIZE	9728
 #define CPFL_DEFAULT_MTU	RTE_ETHER_MTU
-- 
2.25.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v7 12/21] net/cpfl: support RSS
  2023-02-16  0:29         ` [PATCH v7 " Mingxia Liu
                             ` (10 preceding siblings ...)
  2023-02-16  0:30           ` [PATCH v7 11/21] net/cpfl: support write back based on ITR expire Mingxia Liu
@ 2023-02-16  0:30           ` Mingxia Liu
  2023-02-27 21:50             ` Ferruh Yigit
  2023-02-16  0:30           ` [PATCH v7 13/21] net/cpfl: support Rx offloading Mingxia Liu
                             ` (10 subsequent siblings)
  22 siblings, 1 reply; 263+ messages in thread
From: Mingxia Liu @ 2023-02-16  0:30 UTC (permalink / raw)
  To: dev, beilei.xing, yuying.zhang; +Cc: Mingxia Liu

Add RSS support.
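
A minimal configure-time sketch from the application side (queue
counts illustrative; the requested hash types must fall within the
CPFL_RSS_OFFLOAD_ALL mask introduced below):

    struct rte_eth_conf conf = {
        .rxmode = { .mq_mode = RTE_ETH_MQ_RX_RSS },
        .rx_adv_conf = {
            .rss_conf = {
                .rss_key = NULL,  /* NULL key: the PMD picks a random one */
                .rss_hf = RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_NONFRAG_IPV4_TCP,
            },
        },
    };

    rte_eth_dev_configure(0, 4 /* Rx queues */, 4 /* Tx queues */, &conf);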

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/net/cpfl/cpfl_ethdev.c | 51 ++++++++++++++++++++++++++++++++++
 drivers/net/cpfl/cpfl_ethdev.h | 15 ++++++++++
 2 files changed, 66 insertions(+)

diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index 7e0630c605..fb15004e48 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -97,6 +97,8 @@ cpfl_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->max_mtu = vport->max_mtu;
 	dev_info->min_mtu = RTE_ETHER_MIN_MTU;
 
+	dev_info->flow_type_rss_offloads = CPFL_RSS_OFFLOAD_ALL;
+
 	dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 	dev_info->default_txconf = (struct rte_eth_txconf) {
@@ -162,11 +164,49 @@ cpfl_dev_supported_ptypes_get(struct rte_eth_dev *dev __rte_unused)
 	return ptypes;
 }
 
+static int
+cpfl_init_rss(struct idpf_vport *vport)
+{
+	struct rte_eth_rss_conf *rss_conf;
+	struct rte_eth_dev_data *dev_data;
+	uint16_t i, nb_q;
+	int ret = 0;
+
+	dev_data = vport->dev_data;
+	rss_conf = &dev_data->dev_conf.rx_adv_conf.rss_conf;
+	nb_q = dev_data->nb_rx_queues;
+
+	if (rss_conf->rss_key == NULL) {
+		for (i = 0; i < vport->rss_key_size; i++)
+			vport->rss_key[i] = (uint8_t)rte_rand();
+	} else if (rss_conf->rss_key_len != vport->rss_key_size) {
+		PMD_INIT_LOG(ERR, "Invalid RSS key length in RSS configuration, should be %d",
+			     vport->rss_key_size);
+		return -EINVAL;
+	} else {
+		rte_memcpy(vport->rss_key, rss_conf->rss_key,
+			   vport->rss_key_size);
+	}
+
+	for (i = 0; i < vport->rss_lut_size; i++)
+		vport->rss_lut[i] = i % nb_q;
+
+	vport->rss_hf = IDPF_DEFAULT_RSS_HASH_EXPANDED;
+
+	ret = idpf_vport_rss_config(vport);
+	if (ret != 0)
+		PMD_INIT_LOG(ERR, "Failed to configure RSS");
+
+	return ret;
+}
+
 static int
 cpfl_dev_configure(struct rte_eth_dev *dev)
 {
 	struct idpf_vport *vport = dev->data->dev_private;
 	struct rte_eth_conf *conf = &dev->data->dev_conf;
+	struct idpf_adapter *adapter = vport->adapter;
+	int ret;
 
 	if (conf->link_speeds & RTE_ETH_LINK_SPEED_FIXED) {
 		PMD_INIT_LOG(ERR, "Setting link speed is not supported");
@@ -205,6 +245,17 @@ cpfl_dev_configure(struct rte_eth_dev *dev)
 		return -ENOTSUP;
 	}
 
+	if (adapter->caps.rss_caps != 0 && dev->data->nb_rx_queues != 0) {
+		ret = cpfl_init_rss(vport);
+		if (ret != 0) {
+			PMD_INIT_LOG(ERR, "Failed to init rss");
+			return ret;
+		}
+	} else {
+		PMD_INIT_LOG(ERR, "RSS is not supported.");
+		return -1;
+	}
+
 	vport->max_pkt_len =
 		(dev->data->mtu == 0) ? CPFL_DEFAULT_MTU : dev->data->mtu +
 		CPFL_ETH_OVERHEAD;
diff --git a/drivers/net/cpfl/cpfl_ethdev.h b/drivers/net/cpfl/cpfl_ethdev.h
index cd7f560d19..e00dff4bf0 100644
--- a/drivers/net/cpfl/cpfl_ethdev.h
+++ b/drivers/net/cpfl/cpfl_ethdev.h
@@ -36,6 +36,21 @@
 #define CPFL_ETH_OVERHEAD \
 	(RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN + CPFL_VLAN_TAG_SIZE * 2)
 
+#define CPFL_RSS_OFFLOAD_ALL (				\
+		RTE_ETH_RSS_IPV4                |	\
+		RTE_ETH_RSS_FRAG_IPV4           |	\
+		RTE_ETH_RSS_NONFRAG_IPV4_TCP    |	\
+		RTE_ETH_RSS_NONFRAG_IPV4_UDP    |	\
+		RTE_ETH_RSS_NONFRAG_IPV4_SCTP   |	\
+		RTE_ETH_RSS_NONFRAG_IPV4_OTHER  |	\
+		RTE_ETH_RSS_IPV6                |	\
+		RTE_ETH_RSS_FRAG_IPV6           |	\
+		RTE_ETH_RSS_NONFRAG_IPV6_TCP    |	\
+		RTE_ETH_RSS_NONFRAG_IPV6_UDP    |	\
+		RTE_ETH_RSS_NONFRAG_IPV6_SCTP   |	\
+		RTE_ETH_RSS_NONFRAG_IPV6_OTHER  |	\
+		RTE_ETH_RSS_L2_PAYLOAD)
+
 #define CPFL_ADAPTER_NAME_LEN	(PCI_PRI_STR_SIZE + 1)
 
 #define CPFL_ALARM_INTERVAL	50000 /* us */
-- 
2.25.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v7 13/21] net/cpfl: support Rx offloading
  2023-02-16  0:29         ` [PATCH v7 " Mingxia Liu
                             ` (11 preceding siblings ...)
  2023-02-16  0:30           ` [PATCH v7 12/21] net/cpfl: support RSS Mingxia Liu
@ 2023-02-16  0:30           ` Mingxia Liu
  2023-02-27 21:50             ` Ferruh Yigit
  2023-02-16  0:30           ` [PATCH v7 14/21] net/cpfl: support Tx offloading Mingxia Liu
                             ` (9 subsequent siblings)
  22 siblings, 1 reply; 263+ messages in thread
From: Mingxia Liu @ 2023-02-16  0:30 UTC (permalink / raw)
  To: dev, beilei.xing, yuying.zhang; +Cc: Mingxia Liu

Add Rx offloading support:
 - support CHKSUM and RSS offload for split queue model
 - support CHKSUM offload for single queue model
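
A minimal sketch of consuming the offload: request checksum offload at
configure time, then read the per-packet verdict from the standard mbuf
flags (conf and m are assumed from the surrounding setup and Rx loop):

    /* at configure time */
    conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
                            RTE_ETH_RX_OFFLOAD_TCP_CKSUM;

    /* per received mbuf m, after rte_eth_rx_burst() */
    if ((m->ol_flags & RTE_MBUF_F_RX_IP_CKSUM_MASK) ==
        RTE_MBUF_F_RX_IP_CKSUM_BAD)
        rte_pktmbuf_free(m);  /* drop packets failing IP checksum */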

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 doc/guides/nics/features/cpfl.ini | 2 ++
 drivers/net/cpfl/cpfl_ethdev.c    | 6 ++++++
 2 files changed, 8 insertions(+)

diff --git a/doc/guides/nics/features/cpfl.ini b/doc/guides/nics/features/cpfl.ini
index 470ba81579..ee5948f444 100644
--- a/doc/guides/nics/features/cpfl.ini
+++ b/doc/guides/nics/features/cpfl.ini
@@ -8,6 +8,8 @@
 ;
 [Features]
 MTU update           = Y
+L3 checksum offload  = P
+L4 checksum offload  = P
 Linux                = Y
 x86-32               = Y
 x86-64               = Y
diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index fb15004e48..d0f90b7d2c 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -99,6 +99,12 @@ cpfl_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 
 	dev_info->flow_type_rss_offloads = CPFL_RSS_OFFLOAD_ALL;
 
+	dev_info->rx_offload_capa =
+		RTE_ETH_RX_OFFLOAD_IPV4_CKSUM           |
+		RTE_ETH_RX_OFFLOAD_UDP_CKSUM            |
+		RTE_ETH_RX_OFFLOAD_TCP_CKSUM            |
+		RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM;
+
 	dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 	dev_info->default_txconf = (struct rte_eth_txconf) {
-- 
2.25.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v7 14/21] net/cpfl: support Tx offloading
  2023-02-16  0:29         ` [PATCH v7 " Mingxia Liu
                             ` (12 preceding siblings ...)
  2023-02-16  0:30           ` [PATCH v7 13/21] net/cpfl: support Rx offloading Mingxia Liu
@ 2023-02-16  0:30           ` Mingxia Liu
  2023-02-16  0:30           ` [PATCH v7 15/21] net/cpfl: add AVX512 data path for single queue model Mingxia Liu
                             ` (8 subsequent siblings)
  22 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-02-16  0:30 UTC (permalink / raw)
  To: dev, beilei.xing, yuying.zhang; +Cc: Mingxia Liu

Add Tx offloading support:
 - support TSO for single queue model and split queue model.
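
A minimal per-mbuf TSO sketch (segment size illustrative); the header
lengths must be filled in so the HW can segment:

    m->l2_len = sizeof(struct rte_ether_hdr);
    m->l3_len = sizeof(struct rte_ipv4_hdr);
    m->l4_len = sizeof(struct rte_tcp_hdr);
    m->tso_segsz = 1448;  /* TCP payload bytes per segment */
    m->ol_flags |= RTE_MBUF_F_TX_TCP_SEG | RTE_MBUF_F_TX_IPV4 |
                   RTE_MBUF_F_TX_IP_CKSUM | RTE_MBUF_F_TX_TCP_CKSUM;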

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 doc/guides/nics/features/cpfl.ini | 1 +
 drivers/net/cpfl/cpfl_ethdev.c    | 8 +++++++-
 2 files changed, 8 insertions(+), 1 deletion(-)

diff --git a/doc/guides/nics/features/cpfl.ini b/doc/guides/nics/features/cpfl.ini
index ee5948f444..f4e45c7c68 100644
--- a/doc/guides/nics/features/cpfl.ini
+++ b/doc/guides/nics/features/cpfl.ini
@@ -8,6 +8,7 @@
 ;
 [Features]
 MTU update           = Y
+TSO                  = P
 L3 checksum offload  = P
 L4 checksum offload  = P
 Linux                = Y
diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index d0f90b7d2c..fda945d34b 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -105,7 +105,13 @@ cpfl_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 		RTE_ETH_RX_OFFLOAD_TCP_CKSUM            |
 		RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM;
 
-	dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
+	dev_info->tx_offload_capa =
+		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM		|
+		RTE_ETH_TX_OFFLOAD_UDP_CKSUM		|
+		RTE_ETH_TX_OFFLOAD_TCP_CKSUM		|
+		RTE_ETH_TX_OFFLOAD_SCTP_CKSUM		|
+		RTE_ETH_TX_OFFLOAD_TCP_TSO		|
+		RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 	dev_info->default_txconf = (struct rte_eth_txconf) {
 		.tx_free_thresh = CPFL_DEFAULT_TX_FREE_THRESH,
-- 
2.25.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v7 15/21] net/cpfl: add AVX512 data path for single queue model
  2023-02-16  0:29         ` [PATCH v7 " Mingxia Liu
                             ` (13 preceding siblings ...)
  2023-02-16  0:30           ` [PATCH v7 14/21] net/cpfl: support Tx offloading Mingxia Liu
@ 2023-02-16  0:30           ` Mingxia Liu
  2023-02-27 21:51             ` Ferruh Yigit
  2023-02-16  0:30           ` [PATCH v7 16/21] net/cpfl: support timestamp offload Mingxia Liu
                             ` (7 subsequent siblings)
  22 siblings, 1 reply; 263+ messages in thread
From: Mingxia Liu @ 2023-02-16  0:30 UTC (permalink / raw)
  To: dev, beilei.xing, yuying.zhang; +Cc: Mingxia Liu, Wenjun Wu

Add support of AVX512 vector data path for single queue model.
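
Besides the EAL flag --force-max-simd-bitwidth=512 documented below,
an application can raise the limit programmatically before device
start, so the checks in cpfl_set_rx/tx_function() see it (a sketch
using the standard rte_vect API):

    #include <rte_vect.h>

    /* must run after EAL init and before rte_eth_dev_start() */
    if (rte_vect_set_max_simd_bitwidth(RTE_VECT_SIMD_512) != 0)
        printf("could not raise the max SIMD bitwidth\n");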

Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 doc/guides/nics/cpfl.rst                |  24 +++++-
 drivers/net/cpfl/cpfl_ethdev.c          |   3 +-
 drivers/net/cpfl/cpfl_rxtx.c            |  94 ++++++++++++++++++++++
 drivers/net/cpfl/cpfl_rxtx_vec_common.h | 100 ++++++++++++++++++++++++
 drivers/net/cpfl/meson.build            |  25 +++++-
 5 files changed, 243 insertions(+), 3 deletions(-)
 create mode 100644 drivers/net/cpfl/cpfl_rxtx_vec_common.h

diff --git a/doc/guides/nics/cpfl.rst b/doc/guides/nics/cpfl.rst
index 7c5aff0789..f0018b41df 100644
--- a/doc/guides/nics/cpfl.rst
+++ b/doc/guides/nics/cpfl.rst
@@ -63,4 +63,26 @@ Runtime Config Options
 Driver compilation and testing
 ------------------------------
 
-Refer to the document :doc:`build_and_test` for details.
\ No newline at end of file
+Refer to the document :doc:`build_and_test` for details.
+
+Features
+--------
+
+Vector PMD
+~~~~~~~~~~
+
+Vector paths for Rx and Tx are selected automatically.
+The paths are chosen based on two conditions:
+
+- ``CPU``
+
+  On the x86 platform, the driver checks if the CPU supports AVX512.
+  If the CPU supports AVX512 and the EAL argument ``--force-max-simd-bitwidth``
+  is set to 512, AVX512 paths will be chosen.
+
+- ``Offload features``
+
+  The supported HW offload features are described in the document cpfl.ini.
+  A value of "P" means the offload feature is not supported by the vector path.
+  If any unsupported features are used, the cpfl vector PMD is disabled
+  and the scalar paths are chosen.
diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index fda945d34b..346af055cf 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -111,7 +111,8 @@ cpfl_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 		RTE_ETH_TX_OFFLOAD_TCP_CKSUM		|
 		RTE_ETH_TX_OFFLOAD_SCTP_CKSUM		|
 		RTE_ETH_TX_OFFLOAD_TCP_TSO		|
-		RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
+		RTE_ETH_TX_OFFLOAD_MULTI_SEGS		|
+		RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 	dev_info->default_txconf = (struct rte_eth_txconf) {
 		.tx_free_thresh = CPFL_DEFAULT_TX_FREE_THRESH,
diff --git a/drivers/net/cpfl/cpfl_rxtx.c b/drivers/net/cpfl/cpfl_rxtx.c
index c250642719..cb7bbddb16 100644
--- a/drivers/net/cpfl/cpfl_rxtx.c
+++ b/drivers/net/cpfl/cpfl_rxtx.c
@@ -8,6 +8,7 @@
 
 #include "cpfl_ethdev.h"
 #include "cpfl_rxtx.h"
+#include "cpfl_rxtx_vec_common.h"
 
 static uint64_t
 cpfl_rx_offload_convert(uint64_t offload)
@@ -735,11 +736,61 @@ cpfl_stop_queues(struct rte_eth_dev *dev)
 	}
 }
 
+
 void
 cpfl_set_rx_function(struct rte_eth_dev *dev)
 {
 	struct idpf_vport *vport = dev->data->dev_private;
+#ifdef RTE_ARCH_X86
+	struct idpf_rx_queue *rxq;
+	int i;
+
+	if (cpfl_rx_vec_dev_check_default(dev) == CPFL_VECTOR_PATH &&
+	    rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_128) {
+		vport->rx_vec_allowed = true;
 
+		if (rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_512)
+#ifdef CC_AVX512_SUPPORT
+			if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512F) == 1 &&
+			    rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512BW) == 1)
+				vport->rx_use_avx512 = true;
+#else
+		PMD_DRV_LOG(NOTICE,
+			    "AVX512 is not supported in build env");
+#endif /* CC_AVX512_SUPPORT */
+	} else {
+		vport->rx_vec_allowed = false;
+	}
+#endif /* RTE_ARCH_X86 */
+
+#ifdef RTE_ARCH_X86
+	if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
+		PMD_DRV_LOG(NOTICE,
+			    "Using Split Scalar Rx (port %d).",
+			    dev->data->port_id);
+		dev->rx_pkt_burst = idpf_dp_splitq_recv_pkts;
+	} else {
+		if (vport->rx_vec_allowed) {
+			for (i = 0; i < dev->data->nb_rx_queues; i++) {
+				rxq = dev->data->rx_queues[i];
+				(void)idpf_qc_singleq_rx_vec_setup(rxq);
+			}
+#ifdef CC_AVX512_SUPPORT
+			if (vport->rx_use_avx512) {
+				PMD_DRV_LOG(NOTICE,
+					    "Using Single AVX512 Vector Rx (port %d).",
+					    dev->data->port_id);
+				dev->rx_pkt_burst = idpf_dp_singleq_recv_pkts_avx512;
+				return;
+			}
+#endif /* CC_AVX512_SUPPORT */
+		}
+		PMD_DRV_LOG(NOTICE,
+			    "Using Single Scalar Rx (port %d).",
+			    dev->data->port_id);
+		dev->rx_pkt_burst = idpf_dp_singleq_recv_pkts;
+	}
+#else
 	if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
 		PMD_DRV_LOG(NOTICE,
 			    "Using Split Scalar Rx (port %d).",
@@ -751,12 +802,35 @@ cpfl_set_rx_function(struct rte_eth_dev *dev)
 			    dev->data->port_id);
 		dev->rx_pkt_burst = idpf_dp_singleq_recv_pkts;
 	}
+#endif /* RTE_ARCH_X86 */
 }
 
 void
 cpfl_set_tx_function(struct rte_eth_dev *dev)
 {
 	struct idpf_vport *vport = dev->data->dev_private;
+#ifdef RTE_ARCH_X86
+#ifdef CC_AVX512_SUPPORT
+	struct idpf_tx_queue *txq;
+	int i;
+#endif /* CC_AVX512_SUPPORT */
+
+	if (cpfl_tx_vec_dev_check_default(dev) == CPFL_VECTOR_PATH &&
+	    rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_128) {
+		vport->tx_vec_allowed = true;
+		if (rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_512)
+#ifdef CC_AVX512_SUPPORT
+			if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512F) == 1 &&
+			    rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512BW) == 1)
+				vport->tx_use_avx512 = true;
+#else
+		PMD_DRV_LOG(NOTICE,
+			    "AVX512 is not supported in build env");
+#endif /* CC_AVX512_SUPPORT */
+	} else {
+		vport->tx_vec_allowed = false;
+	}
+#endif /* RTE_ARCH_X86 */
 
 	if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
 		PMD_DRV_LOG(NOTICE,
@@ -765,6 +839,26 @@ cpfl_set_tx_function(struct rte_eth_dev *dev)
 		dev->tx_pkt_burst = idpf_dp_splitq_xmit_pkts;
 		dev->tx_pkt_prepare = idpf_dp_prep_pkts;
 	} else {
+#ifdef RTE_ARCH_X86
+		if (vport->tx_vec_allowed) {
+#ifdef CC_AVX512_SUPPORT
+			if (vport->tx_use_avx512) {
+				for (i = 0; i < dev->data->nb_tx_queues; i++) {
+					txq = dev->data->tx_queues[i];
+					if (txq == NULL)
+						continue;
+					idpf_qc_tx_vec_avx512_setup(txq);
+				}
+				PMD_DRV_LOG(NOTICE,
+					    "Using Single AVX512 Vector Tx (port %d).",
+					    dev->data->port_id);
+				dev->tx_pkt_burst = idpf_dp_singleq_xmit_pkts_avx512;
+				dev->tx_pkt_prepare = idpf_dp_prep_pkts;
+				return;
+			}
+#endif /* CC_AVX512_SUPPORT */
+		}
+#endif /* RTE_ARCH_X86 */
 		PMD_DRV_LOG(NOTICE,
 			    "Using Single Scalar Tx (port %d).",
 			    dev->data->port_id);
diff --git a/drivers/net/cpfl/cpfl_rxtx_vec_common.h b/drivers/net/cpfl/cpfl_rxtx_vec_common.h
new file mode 100644
index 0000000000..2d4c6a0ef3
--- /dev/null
+++ b/drivers/net/cpfl/cpfl_rxtx_vec_common.h
@@ -0,0 +1,100 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Intel Corporation
+ */
+
+#ifndef _CPFL_RXTX_VEC_COMMON_H_
+#define _CPFL_RXTX_VEC_COMMON_H_
+#include <stdint.h>
+#include <ethdev_driver.h>
+#include <rte_malloc.h>
+
+#include "cpfl_ethdev.h"
+#include "cpfl_rxtx.h"
+
+#ifndef __INTEL_COMPILER
+#pragma GCC diagnostic ignored "-Wcast-qual"
+#endif
+
+#define CPFL_SCALAR_PATH		0
+#define CPFL_VECTOR_PATH		1
+#define CPFL_RX_NO_VECTOR_FLAGS (		\
+		RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |	\
+		RTE_ETH_RX_OFFLOAD_UDP_CKSUM |	\
+		RTE_ETH_RX_OFFLOAD_TCP_CKSUM |	\
+		RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |	\
+		RTE_ETH_RX_OFFLOAD_TIMESTAMP)
+#define CPFL_TX_NO_VECTOR_FLAGS (		\
+		RTE_ETH_TX_OFFLOAD_TCP_TSO |	\
+		RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
+
+static inline int
+cpfl_rx_vec_queue_default(struct idpf_rx_queue *rxq)
+{
+	if (rxq == NULL)
+		return CPFL_SCALAR_PATH;
+
+	if (rte_is_power_of_2(rxq->nb_rx_desc) == 0)
+		return CPFL_SCALAR_PATH;
+
+	if (rxq->rx_free_thresh < IDPF_VPMD_RX_MAX_BURST)
+		return CPFL_SCALAR_PATH;
+
+	if ((rxq->nb_rx_desc % rxq->rx_free_thresh) != 0)
+		return CPFL_SCALAR_PATH;
+
+	if ((rxq->offloads & CPFL_RX_NO_VECTOR_FLAGS) != 0)
+		return CPFL_SCALAR_PATH;
+
+	return CPFL_VECTOR_PATH;
+}
+
+static inline int
+cpfl_tx_vec_queue_default(struct idpf_tx_queue *txq)
+{
+	if (txq == NULL)
+		return CPFL_SCALAR_PATH;
+
+	if (txq->rs_thresh < IDPF_VPMD_TX_MAX_BURST ||
+	    (txq->rs_thresh & 3) != 0)
+		return CPFL_SCALAR_PATH;
+
+	if ((txq->offloads & CPFL_TX_NO_VECTOR_FLAGS) != 0)
+		return CPFL_SCALAR_PATH;
+
+	return CPFL_VECTOR_PATH;
+}
+
+static inline int
+cpfl_rx_vec_dev_check_default(struct rte_eth_dev *dev)
+{
+	struct idpf_rx_queue *rxq;
+	int i, ret = 0;
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		rxq = dev->data->rx_queues[i];
+		ret = (cpfl_rx_vec_queue_default(rxq));
+		if (ret == CPFL_SCALAR_PATH)
+			return CPFL_SCALAR_PATH;
+	}
+
+	return CPFL_VECTOR_PATH;
+}
+
+static inline int
+cpfl_tx_vec_dev_check_default(struct rte_eth_dev *dev)
+{
+	int i;
+	struct idpf_tx_queue *txq;
+	int ret = 0;
+
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		txq = dev->data->tx_queues[i];
+		ret = cpfl_tx_vec_queue_default(txq);
+		if (ret == CPFL_SCALAR_PATH)
+			return CPFL_SCALAR_PATH;
+	}
+
+	return CPFL_VECTOR_PATH;
+}
+
+#endif /*_CPFL_RXTX_VEC_COMMON_H_*/
diff --git a/drivers/net/cpfl/meson.build b/drivers/net/cpfl/meson.build
index 1894423689..fbe6500826 100644
--- a/drivers/net/cpfl/meson.build
+++ b/drivers/net/cpfl/meson.build
@@ -7,9 +7,32 @@ if is_windows
     subdir_done()
 endif
 
+if dpdk_conf.get('RTE_IOVA_AS_PA') == 0
+    build = false
+    reason = 'driver does not support disabling IOVA as PA mode'
+    subdir_done()
+endif
+
 deps += ['common_idpf']
 
 sources = files(
         'cpfl_ethdev.c',
         'cpfl_rxtx.c',
-)
\ No newline at end of file
+)
+
+if arch_subdir == 'x86'
+    cpfl_avx512_cpu_support = (
+        cc.get_define('__AVX512F__', args: machine_args) != '' and
+        cc.get_define('__AVX512BW__', args: machine_args) != ''
+    )
+
+    cpfl_avx512_cc_support = (
+        not machine_args.contains('-mno-avx512f') and
+        cc.has_argument('-mavx512f') and
+        cc.has_argument('-mavx512bw')
+    )
+
+    if cpfl_avx512_cpu_support == true or cpfl_avx512_cc_support == true
+        cflags += ['-DCC_AVX512_SUPPORT']
+    endif
+endif
-- 
2.25.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v7 16/21] net/cpfl: support timestamp offload
  2023-02-16  0:29         ` [PATCH v7 " Mingxia Liu
                             ` (14 preceding siblings ...)
  2023-02-16  0:30           ` [PATCH v7 15/21] net/cpfl: add AVX512 data path for single queue model Mingxia Liu
@ 2023-02-16  0:30           ` Mingxia Liu
  2023-02-16  0:30           ` [PATCH v7 17/21] net/cpfl: add AVX512 data path for split queue model Mingxia Liu
                             ` (6 subsequent siblings)
  22 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-02-16  0:30 UTC (permalink / raw)
  To: dev, beilei.xing, yuying.zhang; +Cc: Mingxia Liu

Add support for timestamp offload.
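
As an application-side usage sketch (not part of this patch; error handling
omitted), the offload is enabled at configure time and the timestamp is then
read from the dynamic mbuf field:

    #include <rte_ethdev.h>
    #include <rte_mbuf_dyn.h>

    static int ts_off;       /* dynamic field offset */
    static uint64_t ts_flag; /* ol_flags bit: timestamp is valid */

    static void
    enable_rx_timestamp(uint16_t port_id, uint16_t nb_rxq, uint16_t nb_txq)
    {
        struct rte_eth_conf conf = {
            .rxmode = { .offloads = RTE_ETH_RX_OFFLOAD_TIMESTAMP },
        };

        rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &conf);
        rte_mbuf_dyn_rx_timestamp_register(&ts_off, &ts_flag);
    }

    /* Per received mbuf; returns 0 when no timestamp was taken. */
    static uint64_t
    mbuf_rx_timestamp(const struct rte_mbuf *m)
    {
        if ((m->ol_flags & ts_flag) == 0)
            return 0;
        return *RTE_MBUF_DYNFIELD(m, ts_off, const rte_mbuf_timestamp_t *);
    }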

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 doc/guides/nics/features/cpfl.ini | 1 +
 drivers/net/cpfl/cpfl_ethdev.c    | 3 ++-
 drivers/net/cpfl/cpfl_rxtx.c      | 7 +++++++
 3 files changed, 10 insertions(+), 1 deletion(-)

diff --git a/doc/guides/nics/features/cpfl.ini b/doc/guides/nics/features/cpfl.ini
index f4e45c7c68..c1209df3e5 100644
--- a/doc/guides/nics/features/cpfl.ini
+++ b/doc/guides/nics/features/cpfl.ini
@@ -11,6 +11,7 @@ MTU update           = Y
 TSO                  = P
 L3 checksum offload  = P
 L4 checksum offload  = P
+Timestamp offload    = P
 Linux                = Y
 x86-32               = Y
 x86-64               = Y
diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index 346af055cf..1e40f3e55c 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -103,7 +103,8 @@ cpfl_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 		RTE_ETH_RX_OFFLOAD_IPV4_CKSUM           |
 		RTE_ETH_RX_OFFLOAD_UDP_CKSUM            |
 		RTE_ETH_RX_OFFLOAD_TCP_CKSUM            |
-		RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM;
+		RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM     |
+		RTE_ETH_RX_OFFLOAD_TIMESTAMP;
 
 	dev_info->tx_offload_capa =
 		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM		|
diff --git a/drivers/net/cpfl/cpfl_rxtx.c b/drivers/net/cpfl/cpfl_rxtx.c
index cb7bbddb16..7b12c80f1c 100644
--- a/drivers/net/cpfl/cpfl_rxtx.c
+++ b/drivers/net/cpfl/cpfl_rxtx.c
@@ -516,6 +516,13 @@ cpfl_rx_queue_init(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 		return -EINVAL;
 	}
 
+	err = idpf_qc_ts_mbuf_register(rxq);
+	if (err != 0) {
+		PMD_DRV_LOG(ERR, "fail to register timestamp mbuf %u",
+					rx_queue_id);
+		return -EIO;
+	}
+
 	if (rxq->bufq1 == NULL) {
 		/* Single queue */
 		err = idpf_qc_single_rxq_mbufs_alloc(rxq);
-- 
2.25.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v7 17/21] net/cpfl: add AVX512 data path for split queue model
  2023-02-16  0:29         ` [PATCH v7 " Mingxia Liu
                             ` (15 preceding siblings ...)
  2023-02-16  0:30           ` [PATCH v7 16/21] net/cpfl: support timestamp offload Mingxia Liu
@ 2023-02-16  0:30           ` Mingxia Liu
  2023-02-27 21:52             ` Ferruh Yigit
  2023-02-16  0:30           ` [PATCH v7 18/21] net/cpfl: add HW statistics Mingxia Liu
                             ` (5 subsequent siblings)
  22 siblings, 1 reply; 263+ messages in thread
From: Mingxia Liu @ 2023-02-16  0:30 UTC (permalink / raw)
  To: dev, beilei.xing, yuying.zhang; +Cc: Mingxia Liu, Wenjun Wu

Add support for the AVX512 data path for the split queue model.
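
A usage note, as an application-side sketch rather than part of this patch:
the AVX512 split queue paths are only eligible when the EAL max SIMD
bitwidth allows 512 bits, which can be requested with the EAL option
--force-max-simd-bitwidth=512 or programmatically before configuring the
port:

    #include <rte_vect.h>

    /* Sketch: raise the SIMD bitwidth cap so the AVX512 paths become
     * eligible; rte_vect_set_max_simd_bitwidth() fails if the width was
     * already locked by EAL arguments.
     */
    static void
    allow_avx512_paths(void)
    {
        if (rte_vect_get_max_simd_bitwidth() < RTE_VECT_SIMD_512)
            (void)rte_vect_set_max_simd_bitwidth(RTE_VECT_SIMD_512);
    }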

Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/net/cpfl/cpfl_rxtx.c            | 56 +++++++++++++++++++++++--
 drivers/net/cpfl/cpfl_rxtx_vec_common.h | 20 ++++++++-
 drivers/net/cpfl/meson.build            |  6 ++-
 3 files changed, 75 insertions(+), 7 deletions(-)

diff --git a/drivers/net/cpfl/cpfl_rxtx.c b/drivers/net/cpfl/cpfl_rxtx.c
index 7b12c80f1c..0d5bfb901d 100644
--- a/drivers/net/cpfl/cpfl_rxtx.c
+++ b/drivers/net/cpfl/cpfl_rxtx.c
@@ -759,7 +759,8 @@ cpfl_set_rx_function(struct rte_eth_dev *dev)
 		if (rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_512)
 #ifdef CC_AVX512_SUPPORT
 			if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512F) == 1 &&
-			    rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512BW) == 1)
+			    rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512BW) == 1 &&
+			    rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512DQ))
 				vport->rx_use_avx512 = true;
 #else
 		PMD_DRV_LOG(NOTICE,
@@ -772,6 +773,21 @@ cpfl_set_rx_function(struct rte_eth_dev *dev)
 
 #ifdef RTE_ARCH_X86
 	if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
+		if (vport->rx_vec_allowed) {
+			for (i = 0; i < dev->data->nb_rx_queues; i++) {
+				rxq = dev->data->rx_queues[i];
+				(void)idpf_qc_splitq_rx_vec_setup(rxq);
+			}
+#ifdef CC_AVX512_SUPPORT
+			if (vport->rx_use_avx512) {
+				PMD_DRV_LOG(NOTICE,
+					    "Using Split AVX512 Vector Rx (port %d).",
+					    dev->data->port_id);
+				dev->rx_pkt_burst = idpf_dp_splitq_recv_pkts_avx512;
+				return;
+			}
+#endif /* CC_AVX512_SUPPORT */
+		}
 		PMD_DRV_LOG(NOTICE,
 			    "Using Split Scalar Rx (port %d).",
 			    dev->data->port_id);
@@ -827,9 +843,17 @@ cpfl_set_tx_function(struct rte_eth_dev *dev)
 		vport->tx_vec_allowed = true;
 		if (rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_512)
 #ifdef CC_AVX512_SUPPORT
+		{
 			if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512F) == 1 &&
 			    rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512BW) == 1)
 				vport->tx_use_avx512 = true;
+			if (vport->tx_use_avx512) {
+				for (i = 0; i < dev->data->nb_tx_queues; i++) {
+					txq = dev->data->tx_queues[i];
+					idpf_qc_tx_vec_avx512_setup(txq);
+				}
+			}
+		}
 #else
 		PMD_DRV_LOG(NOTICE,
 			    "AVX512 is not supported in build env");
@@ -839,14 +863,26 @@ cpfl_set_tx_function(struct rte_eth_dev *dev)
 	}
 #endif /* RTE_ARCH_X86 */
 
+#ifdef RTE_ARCH_X86
 	if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
+		if (vport->tx_vec_allowed) {
+#ifdef CC_AVX512_SUPPORT
+			if (vport->tx_use_avx512) {
+				PMD_DRV_LOG(NOTICE,
+					    "Using Split AVX512 Vector Tx (port %d).",
+					    dev->data->port_id);
+				dev->tx_pkt_burst = idpf_dp_splitq_xmit_pkts_avx512;
+				dev->tx_pkt_prepare = idpf_dp_prep_pkts;
+				return;
+			}
+#endif /* CC_AVX512_SUPPORT */
+		}
 		PMD_DRV_LOG(NOTICE,
 			    "Using Split Scalar Tx (port %d).",
 			    dev->data->port_id);
 		dev->tx_pkt_burst = idpf_dp_splitq_xmit_pkts;
 		dev->tx_pkt_prepare = idpf_dp_prep_pkts;
 	} else {
-#ifdef RTE_ARCH_X86
 		if (vport->tx_vec_allowed) {
 #ifdef CC_AVX512_SUPPORT
 			if (vport->tx_use_avx512) {
@@ -865,11 +901,25 @@ cpfl_set_tx_function(struct rte_eth_dev *dev)
 			}
 #endif /* CC_AVX512_SUPPORT */
 		}
-#endif /* RTE_ARCH_X86 */
 		PMD_DRV_LOG(NOTICE,
 			    "Using Single Scalar Tx (port %d).",
 			    dev->data->port_id);
 		dev->tx_pkt_burst = idpf_dp_singleq_xmit_pkts;
 		dev->tx_pkt_prepare = idpf_dp_prep_pkts;
 	}
+#else
+	if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
+		PMD_DRV_LOG(NOTICE,
+			    "Using Split Scalar Tx (port %d).",
+			    dev->data->port_id);
+		dev->tx_pkt_burst = idpf_dp_splitq_xmit_pkts;
+		dev->tx_pkt_prepare = idpf_dp_prep_pkts;
+	} else {
+		PMD_DRV_LOG(NOTICE,
+			    "Using Single Scalar Tx (port %d).",
+			    dev->data->port_id);
+		dev->tx_pkt_burst = idpf_dp_singleq_xmit_pkts;
+		dev->tx_pkt_prepare = idpf_dp_prep_pkts;
+	}
+#endif /* RTE_ARCH_X86 */
 }
diff --git a/drivers/net/cpfl/cpfl_rxtx_vec_common.h b/drivers/net/cpfl/cpfl_rxtx_vec_common.h
index 2d4c6a0ef3..665418d27d 100644
--- a/drivers/net/cpfl/cpfl_rxtx_vec_common.h
+++ b/drivers/net/cpfl/cpfl_rxtx_vec_common.h
@@ -64,15 +64,31 @@ cpfl_tx_vec_queue_default(struct idpf_tx_queue *txq)
 	return CPFL_VECTOR_PATH;
 }
 
+static inline int
+cpfl_rx_splitq_vec_default(struct idpf_rx_queue *rxq)
+{
+	if (rxq->bufq2->rx_buf_len < rxq->max_pkt_len)
+		return CPFL_SCALAR_PATH;
+
+	return CPFL_VECTOR_PATH;
+}
+
 static inline int
 cpfl_rx_vec_dev_check_default(struct rte_eth_dev *dev)
 {
+	struct idpf_vport *vport = dev->data->dev_private;
 	struct idpf_rx_queue *rxq;
-	int i, ret = 0;
+	int i, default_ret, splitq_ret, ret = CPFL_SCALAR_PATH;
 
 	for (i = 0; i < dev->data->nb_rx_queues; i++) {
 		rxq = dev->data->rx_queues[i];
-		ret = (cpfl_rx_vec_queue_default(rxq));
+		default_ret = cpfl_rx_vec_queue_default(rxq);
+		if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
+			splitq_ret = cpfl_rx_splitq_vec_default(rxq);
+			ret = splitq_ret && default_ret;
+		} else {
+			ret = default_ret;
+		}
 		if (ret == CPFL_SCALAR_PATH)
 			return CPFL_SCALAR_PATH;
 	}
diff --git a/drivers/net/cpfl/meson.build b/drivers/net/cpfl/meson.build
index fbe6500826..2cf69258e2 100644
--- a/drivers/net/cpfl/meson.build
+++ b/drivers/net/cpfl/meson.build
@@ -23,13 +23,15 @@ sources = files(
 if arch_subdir == 'x86'
     cpfl_avx512_cpu_support = (
         cc.get_define('__AVX512F__', args: machine_args) != '' and
-        cc.get_define('__AVX512BW__', args: machine_args) != ''
+        cc.get_define('__AVX512BW__', args: machine_args) != '' and
+        cc.get_define('__AVX512DQ__', args: machine_args) != ''
     )
 
     cpfl_avx512_cc_support = (
         not machine_args.contains('-mno-avx512f') and
         cc.has_argument('-mavx512f') and
-        cc.has_argument('-mavx512bw')
+        cc.has_argument('-mavx512bw') and
+        cc.has_argument('-mavx512dq')
     )
 
     if cpfl_avx512_cpu_support == true or cpfl_avx512_cc_support == true
-- 
2.25.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v7 18/21] net/cpfl: add HW statistics
  2023-02-16  0:29         ` [PATCH v7 " Mingxia Liu
                             ` (16 preceding siblings ...)
  2023-02-16  0:30           ` [PATCH v7 17/21] net/cpfl: add AVX512 data path for split queue model Mingxia Liu
@ 2023-02-16  0:30           ` Mingxia Liu
  2023-02-27 21:52             ` Ferruh Yigit
  2023-02-16  0:30           ` [PATCH v7 19/21] net/cpfl: add RSS set/get ops Mingxia Liu
                             ` (4 subsequent siblings)
  22 siblings, 1 reply; 263+ messages in thread
From: Mingxia Liu @ 2023-02-16  0:30 UTC (permalink / raw)
  To: dev, beilei.xing, yuying.zhang; +Cc: Mingxia Liu

This patch adds hardware packet/byte statistics.
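
For reference, a minimal application-side sketch exercising the new ops
through the standard ethdev API (error handling trimmed):

    #include <inttypes.h>
    #include <stdio.h>
    #include <rte_ethdev.h>

    static void
    show_and_clear_stats(uint16_t port_id)
    {
        struct rte_eth_stats stats;

        if (rte_eth_stats_get(port_id, &stats) == 0) /* maps to .stats_get */
            printf("port %u: ipackets=%" PRIu64 " imissed=%" PRIu64
                   " rx_nombuf=%" PRIu64 "\n", port_id,
                   stats.ipackets, stats.imissed, stats.rx_nombuf);
        (void)rte_eth_stats_reset(port_id); /* maps to .stats_reset */
    }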

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/net/cpfl/cpfl_ethdev.c | 86 ++++++++++++++++++++++++++++++++++
 1 file changed, 86 insertions(+)

diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index 1e40f3e55c..0fb9f0455b 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -178,6 +178,87 @@ cpfl_dev_supported_ptypes_get(struct rte_eth_dev *dev __rte_unused)
 	return ptypes;
 }
 
+static uint64_t
+cpfl_get_mbuf_alloc_failed_stats(struct rte_eth_dev *dev)
+{
+	uint64_t mbuf_alloc_failed = 0;
+	struct idpf_rx_queue *rxq;
+	int i = 0;
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		rxq = dev->data->rx_queues[i];
+		mbuf_alloc_failed += __atomic_load_n(&rxq->rx_stats.mbuf_alloc_failed,
+						     __ATOMIC_RELAXED);
+	}
+
+	return mbuf_alloc_failed;
+}
+
+static int
+cpfl_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
+{
+	struct idpf_vport *vport =
+		(struct idpf_vport *)dev->data->dev_private;
+	struct virtchnl2_vport_stats *pstats = NULL;
+	int ret;
+
+	ret = idpf_vc_stats_query(vport, &pstats);
+	if (ret == 0) {
+		uint8_t crc_stats_len = (dev->data->dev_conf.rxmode.offloads &
+					 RTE_ETH_RX_OFFLOAD_KEEP_CRC) ? 0 :
+					 RTE_ETHER_CRC_LEN;
+
+		idpf_vport_stats_update(&vport->eth_stats_offset, pstats);
+		stats->ipackets = pstats->rx_unicast + pstats->rx_multicast +
+				pstats->rx_broadcast - pstats->rx_discards;
+		stats->opackets = pstats->tx_broadcast + pstats->tx_multicast +
+						pstats->tx_unicast;
+		stats->imissed = pstats->rx_discards;
+		stats->oerrors = pstats->tx_errors + pstats->tx_discards;
+		stats->ibytes = pstats->rx_bytes;
+		stats->ibytes -= stats->ipackets * crc_stats_len;
+		stats->obytes = pstats->tx_bytes;
+
+		dev->data->rx_mbuf_alloc_failed = cpfl_get_mbuf_alloc_failed_stats(dev);
+		stats->rx_nombuf = dev->data->rx_mbuf_alloc_failed;
+	} else {
+		PMD_DRV_LOG(ERR, "Get statistics failed");
+	}
+	return ret;
+}
+
+static void
+cpfl_reset_mbuf_alloc_failed_stats(struct rte_eth_dev *dev)
+{
+	struct idpf_rx_queue *rxq;
+	int i;
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		rxq = dev->data->rx_queues[i];
+		__atomic_store_n(&rxq->rx_stats.mbuf_alloc_failed, 0, __ATOMIC_RELAXED);
+	}
+}
+
+static int
+cpfl_dev_stats_reset(struct rte_eth_dev *dev)
+{
+	struct idpf_vport *vport =
+		(struct idpf_vport *)dev->data->dev_private;
+	struct virtchnl2_vport_stats *pstats = NULL;
+	int ret;
+
+	ret = idpf_vc_stats_query(vport, &pstats);
+	if (ret != 0)
+		return ret;
+
+	/* set stats offset base on current values */
+	vport->eth_stats_offset = *pstats;
+
+	cpfl_reset_mbuf_alloc_failed_stats(dev);
+
+	return 0;
+}
+
 static int
 cpfl_init_rss(struct idpf_vport *vport)
 {
@@ -365,6 +446,9 @@ cpfl_dev_start(struct rte_eth_dev *dev)
 		goto err_vport;
 	}
 
+	if (cpfl_dev_stats_reset(dev))
+		PMD_DRV_LOG(ERR, "Failed to reset stats");
+
 	vport->stopped = 0;
 
 	return 0;
@@ -766,6 +850,8 @@ static const struct eth_dev_ops cpfl_eth_dev_ops = {
 	.tx_queue_release		= cpfl_dev_tx_queue_release,
 	.mtu_set			= cpfl_dev_mtu_set,
 	.dev_supported_ptypes_get	= cpfl_dev_supported_ptypes_get,
+	.stats_get			= cpfl_dev_stats_get,
+	.stats_reset			= cpfl_dev_stats_reset,
 };
 
 static uint16_t
-- 
2.25.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v7 19/21] net/cpfl: add RSS set/get ops
  2023-02-16  0:29         ` [PATCH v7 " Mingxia Liu
                             ` (17 preceding siblings ...)
  2023-02-16  0:30           ` [PATCH v7 18/21] net/cpfl: add HW statistics Mingxia Liu
@ 2023-02-16  0:30           ` Mingxia Liu
  2023-02-16  0:30           ` [PATCH v7 20/21] net/cpfl: support scalar scatter Rx datapath for single queue model Mingxia Liu
                             ` (3 subsequent siblings)
  22 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-02-16  0:30 UTC (permalink / raw)
  To: dev, beilei.xing, yuying.zhang; +Cc: Mingxia Liu

Add support for these device ops:
- rss_reta_update
- rss_reta_query
- rss_hash_update
- rss_hash_conf_get
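
A minimal application-side sketch of the RETA ops through the standard
ethdev API (it assumes a LUT of at most 512 entries and spreads traffic
across the first two Rx queues):

    #include <string.h>
    #include <rte_ethdev.h>

    static int
    spread_over_two_queues(uint16_t port_id)
    {
        struct rte_eth_dev_info info;
        struct rte_eth_rss_reta_entry64 reta[512 / RTE_ETH_RETA_GROUP_SIZE];
        uint16_t i;

        rte_eth_dev_info_get(port_id, &info); /* reta_size from the PMD */
        memset(reta, 0, sizeof(reta));
        for (i = 0; i < info.reta_size; i++) {
            reta[i / RTE_ETH_RETA_GROUP_SIZE].mask |=
                1ULL << (i % RTE_ETH_RETA_GROUP_SIZE);
            reta[i / RTE_ETH_RETA_GROUP_SIZE].reta[i % RTE_ETH_RETA_GROUP_SIZE] =
                i % 2;
        }
        return rte_eth_dev_rss_reta_update(port_id, reta, info.reta_size);
    }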

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/net/cpfl/cpfl_ethdev.c | 270 ++++++++++++++++++++++++++++++++-
 1 file changed, 269 insertions(+), 1 deletion(-)

diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index 0fb9f0455b..d2387b9a39 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -30,6 +30,56 @@ static const char * const cpfl_valid_args[] = {
 	NULL
 };
 
+static const uint64_t cpfl_map_hena_rss[] = {
+	[IDPF_HASH_NONF_UNICAST_IPV4_UDP] =
+			RTE_ETH_RSS_NONFRAG_IPV4_UDP,
+	[IDPF_HASH_NONF_MULTICAST_IPV4_UDP] =
+			RTE_ETH_RSS_NONFRAG_IPV4_UDP,
+	[IDPF_HASH_NONF_IPV4_UDP] =
+			RTE_ETH_RSS_NONFRAG_IPV4_UDP,
+	[IDPF_HASH_NONF_IPV4_TCP_SYN_NO_ACK] =
+			RTE_ETH_RSS_NONFRAG_IPV4_TCP,
+	[IDPF_HASH_NONF_IPV4_TCP] =
+			RTE_ETH_RSS_NONFRAG_IPV4_TCP,
+	[IDPF_HASH_NONF_IPV4_SCTP] =
+			RTE_ETH_RSS_NONFRAG_IPV4_SCTP,
+	[IDPF_HASH_NONF_IPV4_OTHER] =
+			RTE_ETH_RSS_NONFRAG_IPV4_OTHER,
+	[IDPF_HASH_FRAG_IPV4] = RTE_ETH_RSS_FRAG_IPV4,
+
+	/* IPv6 */
+	[IDPF_HASH_NONF_UNICAST_IPV6_UDP] =
+			RTE_ETH_RSS_NONFRAG_IPV6_UDP,
+	[IDPF_HASH_NONF_MULTICAST_IPV6_UDP] =
+			RTE_ETH_RSS_NONFRAG_IPV6_UDP,
+	[IDPF_HASH_NONF_IPV6_UDP] =
+			RTE_ETH_RSS_NONFRAG_IPV6_UDP,
+	[IDPF_HASH_NONF_IPV6_TCP_SYN_NO_ACK] =
+			RTE_ETH_RSS_NONFRAG_IPV6_TCP,
+	[IDPF_HASH_NONF_IPV6_TCP] =
+			RTE_ETH_RSS_NONFRAG_IPV6_TCP,
+	[IDPF_HASH_NONF_IPV6_SCTP] =
+			RTE_ETH_RSS_NONFRAG_IPV6_SCTP,
+	[IDPF_HASH_NONF_IPV6_OTHER] =
+			RTE_ETH_RSS_NONFRAG_IPV6_OTHER,
+	[IDPF_HASH_FRAG_IPV6] = RTE_ETH_RSS_FRAG_IPV6,
+
+	/* L2 Payload */
+	[IDPF_HASH_L2_PAYLOAD] = RTE_ETH_RSS_L2_PAYLOAD
+};
+
+static const uint64_t cpfl_ipv4_rss = RTE_ETH_RSS_NONFRAG_IPV4_UDP |
+			  RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+			  RTE_ETH_RSS_NONFRAG_IPV4_SCTP |
+			  RTE_ETH_RSS_NONFRAG_IPV4_OTHER |
+			  RTE_ETH_RSS_FRAG_IPV4;
+
+static const uint64_t cpfl_ipv6_rss = RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+			  RTE_ETH_RSS_NONFRAG_IPV6_TCP |
+			  RTE_ETH_RSS_NONFRAG_IPV6_SCTP |
+			  RTE_ETH_RSS_NONFRAG_IPV6_OTHER |
+			  RTE_ETH_RSS_FRAG_IPV6;
+
 static int
 cpfl_dev_link_update(struct rte_eth_dev *dev,
 		     __rte_unused int wait_to_complete)
@@ -97,6 +147,9 @@ cpfl_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->max_mtu = vport->max_mtu;
 	dev_info->min_mtu = RTE_ETHER_MIN_MTU;
 
+	dev_info->hash_key_size = vport->rss_key_size;
+	dev_info->reta_size = vport->rss_lut_size;
+
 	dev_info->flow_type_rss_offloads = CPFL_RSS_OFFLOAD_ALL;
 
 	dev_info->rx_offload_capa =
@@ -259,6 +312,36 @@ cpfl_dev_stats_reset(struct rte_eth_dev *dev)
 	return 0;
 }
 
+static int cpfl_config_rss_hf(struct idpf_vport *vport, uint64_t rss_hf)
+{
+	uint64_t hena = 0;
+	uint16_t i;
+
+	/**
+	 * RTE_ETH_RSS_IPV4 and RTE_ETH_RSS_IPV6 can be considered as 2
+	 * generalizations of all other IPv4 and IPv6 RSS types.
+	 */
+	if (rss_hf & RTE_ETH_RSS_IPV4)
+		rss_hf |= cpfl_ipv4_rss;
+
+	if (rss_hf & RTE_ETH_RSS_IPV6)
+		rss_hf |= cpfl_ipv6_rss;
+
+	for (i = 0; i < RTE_DIM(cpfl_map_hena_rss); i++) {
+		if (cpfl_map_hena_rss[i] & rss_hf)
+			hena |= BIT_ULL(i);
+	}
+
+	/**
+	 * At present, cp doesn't process the virtual channel msg of rss_hf configuration,
+	 * tips are given below.
+	 */
+	if (hena != vport->rss_hf)
+		PMD_DRV_LOG(WARNING, "Updating RSS Hash Function is not supported at present.");
+
+	return 0;
+}
+
 static int
 cpfl_init_rss(struct idpf_vport *vport)
 {
@@ -279,7 +362,7 @@ cpfl_init_rss(struct idpf_vport *vport)
 			     vport->rss_key_size);
 		return -EINVAL;
 	} else {
-		rte_memcpy(vport->rss_key, rss_conf->rss_key,
+		memcpy(vport->rss_key, rss_conf->rss_key,
 			   vport->rss_key_size);
 	}
 
@@ -295,6 +378,187 @@ cpfl_init_rss(struct idpf_vport *vport)
 	return ret;
 }
 
+static int
+cpfl_rss_reta_update(struct rte_eth_dev *dev,
+		     struct rte_eth_rss_reta_entry64 *reta_conf,
+		     uint16_t reta_size)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_adapter *adapter = vport->adapter;
+	uint16_t idx, shift;
+	int ret = 0;
+	uint16_t i;
+
+	if (adapter->caps.rss_caps == 0 || dev->data->nb_rx_queues == 0) {
+		PMD_DRV_LOG(DEBUG, "RSS is not supported");
+		return -ENOTSUP;
+	}
+
+	if (reta_size != vport->rss_lut_size) {
+		PMD_DRV_LOG(ERR, "The size of hash lookup table configured "
+				 "(%d) doesn't match the number of hardware can "
+				 "support (%d)",
+			    reta_size, vport->rss_lut_size);
+		return -EINVAL;
+	}
+
+	for (i = 0; i < reta_size; i++) {
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
+		if (reta_conf[idx].mask & (1ULL << shift))
+			vport->rss_lut[i] = reta_conf[idx].reta[shift];
+	}
+
+	/* send virtchnl ops to configure RSS */
+	ret = idpf_vc_rss_lut_set(vport);
+	if (ret)
+		PMD_INIT_LOG(ERR, "Failed to configure RSS lut");
+
+	return ret;
+}
+
+static int
+cpfl_rss_reta_query(struct rte_eth_dev *dev,
+		    struct rte_eth_rss_reta_entry64 *reta_conf,
+		    uint16_t reta_size)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_adapter *adapter = vport->adapter;
+	uint16_t idx, shift;
+	int ret = 0;
+	uint16_t i;
+
+	if (adapter->caps.rss_caps == 0 || dev->data->nb_rx_queues == 0) {
+		PMD_DRV_LOG(DEBUG, "RSS is not supported");
+		return -ENOTSUP;
+	}
+
+	if (reta_size != vport->rss_lut_size) {
+		PMD_DRV_LOG(ERR, "The size of hash lookup table configured "
+			"(%d) doesn't match the number of hardware can "
+			"support (%d)", reta_size, vport->rss_lut_size);
+		return -EINVAL;
+	}
+
+	ret = idpf_vc_rss_lut_get(vport);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "Failed to get RSS LUT");
+		return ret;
+	}
+
+	for (i = 0; i < reta_size; i++) {
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
+		if (reta_conf[idx].mask & (1ULL << shift))
+			reta_conf[idx].reta[shift] = vport->rss_lut[i];
+	}
+
+	return 0;
+}
+
+static int
+cpfl_rss_hash_update(struct rte_eth_dev *dev,
+		     struct rte_eth_rss_conf *rss_conf)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_adapter *adapter = vport->adapter;
+	int ret = 0;
+
+	if (adapter->caps.rss_caps == 0 || dev->data->nb_rx_queues == 0) {
+		PMD_DRV_LOG(DEBUG, "RSS is not supported");
+		return -ENOTSUP;
+	}
+
+	if (!rss_conf->rss_key || rss_conf->rss_key_len == 0) {
+		PMD_DRV_LOG(DEBUG, "No key to be configured");
+		goto skip_rss_key;
+	} else if (rss_conf->rss_key_len != vport->rss_key_size) {
+		PMD_DRV_LOG(ERR, "The size of hash key configured "
+				 "(%d) doesn't match the size of hardware can "
+				 "support (%d)",
+			    rss_conf->rss_key_len,
+			    vport->rss_key_size);
+		return -EINVAL;
+	}
+
+	memcpy(vport->rss_key, rss_conf->rss_key,
+		   vport->rss_key_size);
+	ret = idpf_vc_rss_key_set(vport);
+	if (ret != 0) {
+		PMD_INIT_LOG(ERR, "Failed to configure RSS key");
+		return ret;
+	}
+
+skip_rss_key:
+	ret = cpfl_config_rss_hf(vport, rss_conf->rss_hf);
+	if (ret != 0) {
+		PMD_INIT_LOG(ERR, "Failed to configure RSS hash");
+		return ret;
+	}
+
+	return 0;
+}
+
+static uint64_t
+cpfl_map_general_rss_hf(uint64_t config_rss_hf, uint64_t last_general_rss_hf)
+{
+	uint64_t valid_rss_hf = 0;
+	uint16_t i;
+
+	for (i = 0; i < RTE_DIM(cpfl_map_hena_rss); i++) {
+		uint64_t bit = BIT_ULL(i);
+
+		if (bit & config_rss_hf)
+			valid_rss_hf |= cpfl_map_hena_rss[i];
+	}
+
+	if (valid_rss_hf & cpfl_ipv4_rss)
+		valid_rss_hf |= last_general_rss_hf & RTE_ETH_RSS_IPV4;
+
+	if (valid_rss_hf & cpfl_ipv6_rss)
+		valid_rss_hf |= last_general_rss_hf & RTE_ETH_RSS_IPV6;
+
+	return valid_rss_hf;
+}
+
+static int
+cpfl_rss_hash_conf_get(struct rte_eth_dev *dev,
+		       struct rte_eth_rss_conf *rss_conf)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_adapter *adapter = vport->adapter;
+	int ret = 0;
+
+	if (adapter->caps.rss_caps == 0 || dev->data->nb_rx_queues == 0) {
+		PMD_DRV_LOG(DEBUG, "RSS is not supported");
+		return -ENOTSUP;
+	}
+
+	ret = idpf_vc_rss_hash_get(vport);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "Failed to get RSS hf");
+		return ret;
+	}
+
+	rss_conf->rss_hf = cpfl_map_general_rss_hf(vport->rss_hf, vport->last_general_rss_hf);
+
+	if (!rss_conf->rss_key)
+		return 0;
+
+	ret = idpf_vc_rss_key_get(vport);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "Failed to get RSS key");
+		return ret;
+	}
+
+	if (rss_conf->rss_key_len > vport->rss_key_size)
+		rss_conf->rss_key_len = vport->rss_key_size;
+
+	memcpy(rss_conf->rss_key, vport->rss_key, rss_conf->rss_key_len);
+
+	return 0;
+}
+
 static int
 cpfl_dev_configure(struct rte_eth_dev *dev)
 {
@@ -852,6 +1116,10 @@ static const struct eth_dev_ops cpfl_eth_dev_ops = {
 	.dev_supported_ptypes_get	= cpfl_dev_supported_ptypes_get,
 	.stats_get			= cpfl_dev_stats_get,
 	.stats_reset			= cpfl_dev_stats_reset,
+	.reta_update			= cpfl_rss_reta_update,
+	.reta_query			= cpfl_rss_reta_query,
+	.rss_hash_update		= cpfl_rss_hash_update,
+	.rss_hash_conf_get		= cpfl_rss_hash_conf_get,
 };
 
 static uint16_t
-- 
2.25.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v7 20/21] net/cpfl: support scalar scatter Rx datapath for single queue model
  2023-02-16  0:29         ` [PATCH v7 " Mingxia Liu
                             ` (18 preceding siblings ...)
  2023-02-16  0:30           ` [PATCH v7 19/21] net/cpfl: add RSS set/get ops Mingxia Liu
@ 2023-02-16  0:30           ` Mingxia Liu
  2023-02-16  0:30           ` [PATCH v7 21/21] net/cpfl: add xstats ops Mingxia Liu
                             ` (2 subsequent siblings)
  22 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-02-16  0:30 UTC (permalink / raw)
  To: dev, beilei.xing, yuying.zhang; +Cc: Mingxia Liu, Wenjun Wu

This patch adds a scalar scatter Rx function for the single queue model.
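
On the application side, the scatter path is selected either explicitly via
the offload flag or implicitly when MTU plus overhead exceeds the Rx buffer
size; a minimal sketch of the explicit case:

    #include <rte_ethdev.h>

    static int
    configure_with_scatter(uint16_t port_id, uint16_t nb_rxq, uint16_t nb_txq)
    {
        struct rte_eth_conf conf = {
            .rxmode = { .offloads = RTE_ETH_RX_OFFLOAD_SCATTER },
        };

        return rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &conf);
    }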

Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/net/cpfl/cpfl_ethdev.c |  3 ++-
 drivers/net/cpfl/cpfl_rxtx.c   | 27 +++++++++++++++++++++++++++
 drivers/net/cpfl/cpfl_rxtx.h   |  2 ++
 3 files changed, 31 insertions(+), 1 deletion(-)

diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index d2387b9a39..f959a2911d 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -157,7 +157,8 @@ cpfl_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 		RTE_ETH_RX_OFFLOAD_UDP_CKSUM            |
 		RTE_ETH_RX_OFFLOAD_TCP_CKSUM            |
 		RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM     |
-		RTE_ETH_RX_OFFLOAD_TIMESTAMP;
+		RTE_ETH_RX_OFFLOAD_TIMESTAMP		|
+		RTE_ETH_RX_OFFLOAD_SCATTER;
 
 	dev_info->tx_offload_capa =
 		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM		|
diff --git a/drivers/net/cpfl/cpfl_rxtx.c b/drivers/net/cpfl/cpfl_rxtx.c
index 0d5bfb901d..6226b02301 100644
--- a/drivers/net/cpfl/cpfl_rxtx.c
+++ b/drivers/net/cpfl/cpfl_rxtx.c
@@ -503,6 +503,8 @@ int
 cpfl_rx_queue_init(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 {
 	struct idpf_rx_queue *rxq;
+	uint16_t max_pkt_len;
+	uint32_t frame_size;
 	int err;
 
 	if (rx_queue_id >= dev->data->nb_rx_queues)
@@ -516,6 +518,17 @@ cpfl_rx_queue_init(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 		return -EINVAL;
 	}
 
+	frame_size = dev->data->mtu + CPFL_ETH_OVERHEAD;
+
+	max_pkt_len =
+	    RTE_MIN((uint32_t)CPFL_SUPPORT_CHAIN_NUM * rxq->rx_buf_len,
+		    frame_size);
+
+	rxq->max_pkt_len = max_pkt_len;
+	if ((dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_SCATTER) ||
+	    frame_size > rxq->rx_buf_len)
+		dev->data->scattered_rx = 1;
+
 	err = idpf_qc_ts_mbuf_register(rxq);
 	if (err != 0) {
 		PMD_DRV_LOG(ERR, "fail to register timestamp mbuf %u",
@@ -808,6 +821,13 @@ cpfl_set_rx_function(struct rte_eth_dev *dev)
 			}
 #endif /* CC_AVX512_SUPPORT */
 		}
+		if (dev->data->scattered_rx) {
+			PMD_DRV_LOG(NOTICE,
+				    "Using Single Scalar Scattered Rx (port %d).",
+				    dev->data->port_id);
+			dev->rx_pkt_burst = idpf_dp_singleq_recv_scatter_pkts;
+			return;
+		}
 		PMD_DRV_LOG(NOTICE,
 			    "Using Single Scalar Rx (port %d).",
 			    dev->data->port_id);
@@ -820,6 +840,13 @@ cpfl_set_rx_function(struct rte_eth_dev *dev)
 			    dev->data->port_id);
 		dev->rx_pkt_burst = idpf_dp_splitq_recv_pkts;
 	} else {
+		if (dev->data->scattered_rx) {
+			PMD_DRV_LOG(NOTICE,
+				    "Using Single Scalar Scattered Rx (port %d).",
+				    dev->data->port_id);
+			dev->rx_pkt_burst = idpf_dp_singleq_recv_scatter_pkts;
+			return;
+		}
 		PMD_DRV_LOG(NOTICE,
 			    "Using Single Scalar Rx (port %d).",
 			    dev->data->port_id);
diff --git a/drivers/net/cpfl/cpfl_rxtx.h b/drivers/net/cpfl/cpfl_rxtx.h
index 5f8144e55f..fb267d38c8 100644
--- a/drivers/net/cpfl/cpfl_rxtx.h
+++ b/drivers/net/cpfl/cpfl_rxtx.h
@@ -21,6 +21,8 @@
 #define CPFL_DEFAULT_TX_RS_THRESH	32
 #define CPFL_DEFAULT_TX_FREE_THRESH	32
 
+#define CPFL_SUPPORT_CHAIN_NUM 5
+
 int cpfl_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 			uint16_t nb_desc, unsigned int socket_id,
 			const struct rte_eth_txconf *tx_conf);
-- 
2.25.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v7 21/21] net/cpfl: add xstats ops
  2023-02-16  0:29         ` [PATCH v7 " Mingxia Liu
                             ` (19 preceding siblings ...)
  2023-02-16  0:30           ` [PATCH v7 20/21] net/cpfl: support scalar scatter Rx datapath for single queue model Mingxia Liu
@ 2023-02-16  0:30           ` Mingxia Liu
  2023-02-27 21:52             ` Ferruh Yigit
  2023-02-27 21:43           ` [PATCH v7 00/21] add support for cpfl PMD in DPDK Ferruh Yigit
  2023-03-02 10:35           ` [PATCH v8 " Mingxia Liu
  22 siblings, 1 reply; 263+ messages in thread
From: Mingxia Liu @ 2023-02-16  0:30 UTC (permalink / raw)
  To: dev, beilei.xing, yuying.zhang; +Cc: Mingxia Liu

Add support for these device ops:
- dev_xstats_get
- dev_xstats_get_names
- dev_xstats_reset
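
A minimal application-side sketch of retrieving them through the standard
ethdev xstats API (it assumes the initial count query returns a positive
value):

    #include <inttypes.h>
    #include <stdio.h>
    #include <rte_ethdev.h>

    static void
    dump_xstats(uint16_t port_id)
    {
        int i, n = rte_eth_xstats_get(port_id, NULL, 0); /* count query */
        struct rte_eth_xstat values[n];
        struct rte_eth_xstat_name names[n];

        rte_eth_xstats_get_names(port_id, names, n);
        rte_eth_xstats_get(port_id, values, n);
        for (i = 0; i < n; i++)
            printf("%s: %" PRIu64 "\n", names[i].name, values[i].value);
    }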

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/net/cpfl/cpfl_ethdev.c | 80 ++++++++++++++++++++++++++++++++++
 1 file changed, 80 insertions(+)

diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index f959a2911d..543dbd60f0 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -80,6 +80,30 @@ static const uint64_t cpfl_ipv6_rss = RTE_ETH_RSS_NONFRAG_IPV6_UDP |
 			  RTE_ETH_RSS_NONFRAG_IPV6_OTHER |
 			  RTE_ETH_RSS_FRAG_IPV6;
 
+struct rte_cpfl_xstats_name_off {
+	char name[RTE_ETH_XSTATS_NAME_SIZE];
+	unsigned int offset;
+};
+
+static const struct rte_cpfl_xstats_name_off rte_cpfl_stats_strings[] = {
+	{"rx_bytes", offsetof(struct virtchnl2_vport_stats, rx_bytes)},
+	{"rx_unicast_packets", offsetof(struct virtchnl2_vport_stats, rx_unicast)},
+	{"rx_multicast_packets", offsetof(struct virtchnl2_vport_stats, rx_multicast)},
+	{"rx_broadcast_packets", offsetof(struct virtchnl2_vport_stats, rx_broadcast)},
+	{"rx_dropped_packets", offsetof(struct virtchnl2_vport_stats, rx_discards)},
+	{"rx_errors", offsetof(struct virtchnl2_vport_stats, rx_errors)},
+	{"rx_unknown_protocol_packets", offsetof(struct virtchnl2_vport_stats,
+						 rx_unknown_protocol)},
+	{"tx_bytes", offsetof(struct virtchnl2_vport_stats, tx_bytes)},
+	{"tx_unicast_packets", offsetof(struct virtchnl2_vport_stats, tx_unicast)},
+	{"tx_multicast_packets", offsetof(struct virtchnl2_vport_stats, tx_multicast)},
+	{"tx_broadcast_packets", offsetof(struct virtchnl2_vport_stats, tx_broadcast)},
+	{"tx_dropped_packets", offsetof(struct virtchnl2_vport_stats, tx_discards)},
+	{"tx_error_packets", offsetof(struct virtchnl2_vport_stats, tx_errors)}};
+
+#define CPFL_NB_XSTATS (sizeof(rte_cpfl_stats_strings) / \
+		sizeof(rte_cpfl_stats_strings[0]))
+
 static int
 cpfl_dev_link_update(struct rte_eth_dev *dev,
 		     __rte_unused int wait_to_complete)
@@ -313,6 +337,59 @@ cpfl_dev_stats_reset(struct rte_eth_dev *dev)
 	return 0;
 }
 
+static int cpfl_dev_xstats_reset(struct rte_eth_dev *dev)
+{
+	cpfl_dev_stats_reset(dev);
+	return 0;
+}
+
+static int cpfl_dev_xstats_get(struct rte_eth_dev *dev,
+			       struct rte_eth_xstat *xstats, unsigned int n)
+{
+	struct idpf_vport *vport =
+		(struct idpf_vport *)dev->data->dev_private;
+	struct virtchnl2_vport_stats *pstats = NULL;
+	unsigned int i;
+	int ret;
+
+	if (n < CPFL_NB_XSTATS)
+		return CPFL_NB_XSTATS;
+
+	if (!xstats)
+		return 0;
+
+	ret = idpf_vc_stats_query(vport, &pstats);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "Get statistics failed");
+		return 0;
+	}
+
+	idpf_vport_stats_update(&vport->eth_stats_offset, pstats);
+
+	/* loop over xstats array and values from pstats */
+	for (i = 0; i < CPFL_NB_XSTATS; i++) {
+		xstats[i].id = i;
+		xstats[i].value = *(uint64_t *)(((char *)pstats) +
+			rte_cpfl_stats_strings[i].offset);
+	}
+	return CPFL_NB_XSTATS;
+}
+
+static int cpfl_dev_xstats_get_names(__rte_unused struct rte_eth_dev *dev,
+				     struct rte_eth_xstat_name *xstats_names,
+				     __rte_unused unsigned int limit)
+{
+	unsigned int i;
+
+	if (xstats_names)
+		for (i = 0; i < CPFL_NB_XSTATS; i++) {
+			snprintf(xstats_names[i].name,
+				 sizeof(xstats_names[i].name),
+				 "%s", rte_cpfl_stats_strings[i].name);
+		}
+	return CPFL_NB_XSTATS;
+}
+
 static int cpfl_config_rss_hf(struct idpf_vport *vport, uint64_t rss_hf)
 {
 	uint64_t hena = 0;
@@ -1121,6 +1198,9 @@ static const struct eth_dev_ops cpfl_eth_dev_ops = {
 	.reta_query			= cpfl_rss_reta_query,
 	.rss_hash_update		= cpfl_rss_hash_update,
 	.rss_hash_conf_get		= cpfl_rss_hash_conf_get,
+	.xstats_get			= cpfl_dev_xstats_get,
+	.xstats_get_names		= cpfl_dev_xstats_get_names,
+	.xstats_reset			= cpfl_dev_xstats_reset,
 };
 
 static uint16_t
-- 
2.25.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* RE: [PATCH v6 00/21] add support for cpfl PMD in DPDK
  2023-02-15 14:04         ` [PATCH v6 00/21] add support for cpfl PMD in DPDK Ferruh Yigit
@ 2023-02-16  1:16           ` Liu, Mingxia
  0 siblings, 0 replies; 263+ messages in thread
From: Liu, Mingxia @ 2023-02-16  1:16 UTC (permalink / raw)
  To: Ferruh Yigit, dev, Xing,  Beilei, Zhang, Yuying

Yes, this patchset is based on the idpf PMD code:
http://patches.dpdk.org/project/dpdk/cover/20230206054618.40975-1-beilei.xing@intel.com/
http://patches.dpdk.org/project/dpdk/cover/20230106090501.9106-1-beilei.xing@intel.com/
http://patches.dpdk.org/project/dpdk/cover/20230207084549.2225214-1-wenjun1.wu@intel.com/
http://patches.dpdk.org/project/dpdk/cover/20230208073401.2468579-1-mingxia.liu@intel.com/

But as these dependencies have been merged into networking.dataplane.dpdk.next-net-intel,
I deleted the dependency description.

If necessary, I'll send a new version.

BR
mingxia
> -----Original Message-----
> From: Ferruh Yigit <ferruh.yigit@amd.com>
> Sent: Wednesday, February 15, 2023 10:05 PM
> To: Liu, Mingxia <mingxia.liu@intel.com>; dev@dpdk.org; Xing, Beilei
> <beilei.xing@intel.com>; Zhang, Yuying <yuying.zhang@intel.com>
> Subject: Re: [PATCH v6 00/21] add support for cpfl PMD in DPDK
> 
> On 2/13/2023 2:19 AM, Mingxia Liu wrote:
> > The patchset introduced the cpfl (Control Plane Function Library) PMD
> > for Intel® IPU E2100’s Configure Physical Function (Device ID: 0x1453)
> >
> > The cpfl PMD inherits all the features from idpf PMD which will follow
> > an ongoing standard data plan function spec
> > https://www.oasis-open.org/committees/tc_home.php?wg_abbrev=idpf
> > Besides, it will also support more device specific hardware offloading
> > features from DPDK’s control path (e.g.: hairpin, rte_flow …). which
> > is different from idpf PMD, and that's why we need a new cpfl PMD.
> >
> > This patchset mainly focuses on idpf PMD’s equivalent features.
> > To avoid duplicated code, the patchset depends on below patchsets
> > which move the common part from net/idpf into common/idpf as a shared
> library.
> >
> > v2 changes:
> >  - rebase to the new baseline.
> >  - Fix rss lut config issue.
> > v3 changes:
> >  - rebase to the new baseline.
> > v4 changes:
> >  - Resend v3. No code changed.
> > v5 changes:
> >  - rebase to the new baseline.
> >  - optimize some code
> >  - give "not supported" tips when user want to config rss hash type
> >  - if stats reset fails at initialization time, don't rollback, just
> >    print ERROR info
> > v6 changes:
> >  - for small fixed size structure, change rte_memcpy to memcpy()
> >  - fix compilation for AVX512DQ
> >  - update cpfl maintainers
> >
> > Mingxia Liu (21):
> >   net/cpfl: support device initialization
> >   net/cpfl: add Tx queue setup
> >   net/cpfl: add Rx queue setup
> >   net/cpfl: support device start and stop
> >   net/cpfl: support queue start
> >   net/cpfl: support queue stop
> >   net/cpfl: support queue release
> >   net/cpfl: support MTU configuration
> >   net/cpfl: support basic Rx data path
> >   net/cpfl: support basic Tx data path
> >   net/cpfl: support write back based on ITR expire
> >   net/cpfl: support RSS
> >   net/cpfl: support Rx offloading
> >   net/cpfl: support Tx offloading
> >   net/cpfl: add AVX512 data path for single queue model
> >   net/cpfl: support timestamp offload
> >   net/cpfl: add AVX512 data path for split queue model
> >   net/cpfl: add HW statistics
> >   net/cpfl: add RSS set/get ops
> >   net/cpfl: support scalar scatter Rx datapath for single queue model
> >   net/cpfl: add xstats ops
> 
> Hi Mingxia, Beilei,
> 
> Is there any missing dependency at this point?

^ permalink raw reply	[flat|nested] 263+ messages in thread

* Re: [PATCH v7 01/21] net/cpfl: support device initialization
  2023-02-16  0:29           ` [PATCH v7 01/21] net/cpfl: support device initialization Mingxia Liu
@ 2023-02-27 13:46             ` Ferruh Yigit
  2023-02-27 15:45               ` Thomas Monjalon
  2023-02-27 21:43             ` Ferruh Yigit
  1 sibling, 1 reply; 263+ messages in thread
From: Ferruh Yigit @ 2023-02-27 13:46 UTC (permalink / raw)
  To: Thomas Monjalon, Andrew Rybchenko, Jerin Jacob Kollanukkaran,
	Qi Z Zhang, David Marchand
  Cc: dev, Mingxia Liu, yuying.zhang, beilei.xing, techboard

On 2/16/2023 12:29 AM, Mingxia Liu wrote:
> +static int
> +cpfl_dev_configure(struct rte_eth_dev *dev)
> +{
> +	struct rte_eth_conf *conf = &dev->data->dev_conf;
> +
> +	if (conf->link_speeds & RTE_ETH_LINK_SPEED_FIXED) {
> +		PMD_INIT_LOG(ERR, "Setting link speed is not supported");
> +		return -ENOTSUP;
> +	}
> +
> +	if (conf->txmode.mq_mode != RTE_ETH_MQ_TX_NONE) {
> +		PMD_INIT_LOG(ERR, "Multi-queue TX mode %d is not supported",
> +			     conf->txmode.mq_mode);
> +		return -ENOTSUP;
> +	}
> +
> +	if (conf->lpbk_mode != 0) {
> +		PMD_INIT_LOG(ERR, "Loopback operation mode %d is not supported",
> +			     conf->lpbk_mode);
> +		return -ENOTSUP;
> +	}
> +
> +	if (conf->dcb_capability_en != 0) {
> +		PMD_INIT_LOG(ERR, "Priority Flow Control(PFC) if not supported");
> +		return -ENOTSUP;
> +	}
> +
> +	if (conf->intr_conf.lsc != 0) {
> +		PMD_INIT_LOG(ERR, "LSC interrupt is not supported");
> +		return -ENOTSUP;
> +	}
> +
> +	if (conf->intr_conf.rxq != 0) {
> +		PMD_INIT_LOG(ERR, "RXQ interrupt is not supported");
> +		return -ENOTSUP;
> +	}
> +
> +	if (conf->intr_conf.rmv != 0) {
> +		PMD_INIT_LOG(ERR, "RMV interrupt is not supported");
> +		return -ENOTSUP;
> +	}
> +
> +	return 0;

This is the '.dev_configure()' dev op of a driver; there is nothing wrong
with the function, but it is a good example to highlight a point.


'rte_eth_dev_configure()' can fail for various reasons; what can an
application do in this case?
It is not clear why the configuration failed, and there is no way to
figure out the failing config option dynamically.

An application developer can read the log and find out what caused the
failure, but what can they do next? Put a conditional check for the
particular device, assuming the application supports multiple devices,
before configuration?

I think we need a better error value to help the application detect what
went wrong and adapt dynamically, perhaps a bitmask of errors, one per
config option. What do you think?
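
Just to illustrate the idea, with a completely hypothetical API (neither
'rte_eth_dev_configure_ex()' nor these error bits exist in ethdev today):

    /* Hypothetical per-option failure bits plus an extended configure,
     * so the application can adapt instead of parsing logs:
     */
    #define RTE_ETH_CONF_ERR_LINK_SPEEDS RTE_BIT32(0)
    #define RTE_ETH_CONF_ERR_LSC_INTR    RTE_BIT32(1)

    uint32_t errs;
    if (rte_eth_dev_configure_ex(port_id, nq, nq, &conf, &errs) != 0 &&
        (errs & RTE_ETH_CONF_ERR_LSC_INTR) != 0) {
        conf.intr_conf.lsc = 0; /* retry without the LSC interrupt */
        rte_eth_dev_configure_ex(port_id, nq, nq, &conf, &errs);
    }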



And I think this is another reason why we should not make a single API
too overloaded and complex.


^ permalink raw reply	[flat|nested] 263+ messages in thread

* Re: [PATCH v7 01/21] net/cpfl: support device initialization
  2023-02-27 13:46             ` Ferruh Yigit
@ 2023-02-27 15:45               ` Thomas Monjalon
  2023-02-27 23:38                 ` Ferruh Yigit
  2023-02-28  2:06                 ` Liu, Mingxia
  0 siblings, 2 replies; 263+ messages in thread
From: Thomas Monjalon @ 2023-02-27 15:45 UTC (permalink / raw)
  To: Andrew Rybchenko, Jerin Jacob Kollanukkaran, Qi Z Zhang,
	David Marchand, Ferruh Yigit
  Cc: dev, Mingxia Liu, yuying.zhang, beilei.xing, techboard

27/02/2023 14:46, Ferruh Yigit:
> On 2/16/2023 12:29 AM, Mingxia Liu wrote:
> > +static int
> > +cpfl_dev_configure(struct rte_eth_dev *dev)
> > +{
> > +	struct rte_eth_conf *conf = &dev->data->dev_conf;
> > +
> > +	if (conf->link_speeds & RTE_ETH_LINK_SPEED_FIXED) {
> > +		PMD_INIT_LOG(ERR, "Setting link speed is not supported");
> > +		return -ENOTSUP;
> > +	}
> > +
> > +	if (conf->txmode.mq_mode != RTE_ETH_MQ_TX_NONE) {
> > +		PMD_INIT_LOG(ERR, "Multi-queue TX mode %d is not supported",
> > +			     conf->txmode.mq_mode);
> > +		return -ENOTSUP;
> > +	}
> > +
> > +	if (conf->lpbk_mode != 0) {
> > +		PMD_INIT_LOG(ERR, "Loopback operation mode %d is not supported",
> > +			     conf->lpbk_mode);
> > +		return -ENOTSUP;
> > +	}
> > +
> > +	if (conf->dcb_capability_en != 0) {
> > +		PMD_INIT_LOG(ERR, "Priority Flow Control(PFC) if not supported");
> > +		return -ENOTSUP;
> > +	}
> > +
> > +	if (conf->intr_conf.lsc != 0) {
> > +		PMD_INIT_LOG(ERR, "LSC interrupt is not supported");
> > +		return -ENOTSUP;
> > +	}
> > +
> > +	if (conf->intr_conf.rxq != 0) {
> > +		PMD_INIT_LOG(ERR, "RXQ interrupt is not supported");
> > +		return -ENOTSUP;
> > +	}
> > +
> > +	if (conf->intr_conf.rmv != 0) {
> > +		PMD_INIT_LOG(ERR, "RMV interrupt is not supported");
> > +		return -ENOTSUP;
> > +	}
> > +
> > +	return 0;
> 
> This is the '.dev_configure()' dev op of a driver; there is nothing wrong
> with the function, but it is a good example to highlight a point.
> 
> 
> 'rte_eth_dev_configure()' can fail for various reasons; what can an
> application do in this case?
> It is not clear why the configuration failed, and there is no way to
> figure out the failing config option dynamically.

There are some capabilities to read before calling "configure".

> An application developer can read the log and find out what caused the
> failure, but what can they do next? Put a conditional check for the
> particular device, assuming the application supports multiple devices,
> before configuration?

Which failures cannot be guessed with capability flags?

> I think we need a better error value to help the application detect what
> went wrong and adapt dynamically, perhaps a bitmask of errors, one per
> config option. What do you think?

I am not sure we can change such an old API.

> And I think this is another reason why we should not make a single API
> too overloaded and complex.

Right, and I would support work to make some of those "configure" features
available as small functions.
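
For example, with hypothetical names just to sketch the direction (none of
these functions exist today):

    /* Hypothetical API: each knob becomes its own call, so each can
     * fail independently with a precise reason.
     */
    ret = rte_eth_dev_set_link_speeds(port_id, RTE_ETH_LINK_SPEED_AUTONEG);
    ret = rte_eth_dev_set_loopback_mode(port_id, 0);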



^ permalink raw reply	[flat|nested] 263+ messages in thread

* Re: [PATCH v7 01/21] net/cpfl: support device initialization
  2023-02-16  0:29           ` [PATCH v7 01/21] net/cpfl: support device initialization Mingxia Liu
  2023-02-27 13:46             ` Ferruh Yigit
@ 2023-02-27 21:43             ` Ferruh Yigit
  2023-02-28 11:12               ` Liu, Mingxia
  1 sibling, 1 reply; 263+ messages in thread
From: Ferruh Yigit @ 2023-02-27 21:43 UTC (permalink / raw)
  To: Mingxia Liu, beilei.xing, yuying.zhang; +Cc: dev, Yigit, Ferruh

On 2/16/2023 12:29 AM, Mingxia Liu wrote:
> Support device init and add the following dev ops:
>  - dev_configure
>  - dev_close
>  - dev_infos_get
>  - link_update
>  - dev_supported_ptypes_get
> 
> Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
> ---
>  MAINTAINERS                            |   8 +
>  doc/guides/nics/cpfl.rst               |  66 +++

Need to add the file to the toctree (doc/guides/nics/index.rst) to make it visible.

>  doc/guides/nics/features/cpfl.ini      |  12 +
>  doc/guides/rel_notes/release_23_03.rst |   6 +
>  drivers/net/cpfl/cpfl_ethdev.c         | 768 +++++++++++++++++++++++++
>  drivers/net/cpfl/cpfl_ethdev.h         |  78 +++
>  drivers/net/cpfl/cpfl_logs.h           |  32 ++
>  drivers/net/cpfl/cpfl_rxtx.c           | 244 ++++++++
>  drivers/net/cpfl/cpfl_rxtx.h           |  25 +

cpfl_rxtx.[ch] are not used at all in this patch;
'cpfl_tx_queue_setup()' is added in this patch, and the next patch (2/21)
looks like a better place for it.

>  drivers/net/cpfl/meson.build           |  14 +
>  drivers/net/meson.build                |   1 +
>  11 files changed, 1254 insertions(+)
>  create mode 100644 doc/guides/nics/cpfl.rst
>  create mode 100644 doc/guides/nics/features/cpfl.ini
>  create mode 100644 drivers/net/cpfl/cpfl_ethdev.c
>  create mode 100644 drivers/net/cpfl/cpfl_ethdev.h
>  create mode 100644 drivers/net/cpfl/cpfl_logs.h
>  create mode 100644 drivers/net/cpfl/cpfl_rxtx.c
>  create mode 100644 drivers/net/cpfl/cpfl_rxtx.h
>  create mode 100644 drivers/net/cpfl/meson.build
> 
> diff --git a/MAINTAINERS b/MAINTAINERS
> index 9a0f416d2e..af80edaf6e 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -783,6 +783,14 @@ F: drivers/common/idpf/
>  F: doc/guides/nics/idpf.rst
>  F: doc/guides/nics/features/idpf.ini
>  
> +Intel cpfl
> +M: Yuying Zhang <yuying.zhang@intel.com>
> +M: Beilei Xing <beilei.xing@intel.com>
> +T: git://dpdk.org/next/dpdk-next-net-intel
> +F: drivers/net/cpfl/
> +F: doc/guides/nics/cpfl.rst
> +F: doc/guides/nics/features/cpfl.ini
> +

The documentation mentions the driver is experimental; can you please
highlight this in the MAINTAINERS file too, as:
Intel cpfl - EXPERIMENTAL

>  Intel igc
>  M: Junfeng Guo <junfeng.guo@intel.com>
>  M: Simei Su <simei.su@intel.com>
> diff --git a/doc/guides/nics/cpfl.rst b/doc/guides/nics/cpfl.rst
> new file mode 100644
> index 0000000000..7c5aff0789
> --- /dev/null
> +++ b/doc/guides/nics/cpfl.rst
> @@ -0,0 +1,66 @@
> +.. SPDX-License-Identifier: BSD-3-Clause
> +   Copyright(c) 2022 Intel Corporation.
> +

s/2022/2023/

> +.. include:: <isonum.txt>
> +
> +CPFL Poll Mode Driver
> +=====================
> +
> +The [*EXPERIMENTAL*] cpfl PMD (**librte_net_cpfl**) provides poll mode driver support
> +for Intel\ |reg| Infrastructure Processing Unit (Intel\ |reg| IPU) E2100.
> +

Can you please provide a link for the mentioned device?

That way, interested users can evaluate it and learn more about the hardware.


> +
> +Linux Prerequisites
> +-------------------
> +
> +Follow the DPDK :doc:`../linux_gsg/index` to setup the basic DPDK environment.
> +
> +To get better performance on Intel platforms,
> +please follow the :doc:`../linux_gsg/nic_perf_intel_platform`.
> +
> +
> +Pre-Installation Configuration
> +------------------------------
> +
> +Runtime Config Options
> +~~~~~~~~~~~~~~~~~~~~~~

Is "Runtime Config Options" a subsection of "Pre-Installation
Configuration"?

> +
> +- ``vport`` (default ``0``)
> +
> +  The PMD supports creation of multiple vports for one PCI device,
> +  each vport corresponds to a single ethdev.
> +  The user can specify the vports with specific ID to be created, for example::
> +
> +    -a ca:00.0,vport=[0,2,3]
> +
> +  Then the PMD will create 3 vports (ethdevs) for device ``ca:00.0``.
> +

Why are specific IDs needed?

Another option is to just provide the number of requested vports and let
them get sequential IDs, but since the vport IDs are taken from the user
instead, there must be some significance to them; can you please briefly
document why the IDs matter?

> +  If the parameter is not provided, the vport 0 will be created by default.
> +
> +- ``rx_single`` (default ``0``)
> +
> +  There are two queue modes supported by Intel\ |reg| IPU Ethernet E2100 Series,
> +  single queue mode and split queue mode for Rx queue.

Can you please describe in the documentation what 'split queue' and
'single queue' are and what is the difference between them?

<...>

> index 07914170a7..b0b23d1a44 100644
> --- a/doc/guides/rel_notes/release_23_03.rst
> +++ b/doc/guides/rel_notes/release_23_03.rst
> @@ -88,6 +88,12 @@ New Features
>    * Added timesync API support.
>    * Added packet pacing(launch time offloading) support.
>  
> +* **Added Intel cpfl driver.**
> +
> +  Added the new ``cpfl`` net driver
> +  for Intel\ |reg| Infrastructure Processing Unit (Intel\ |reg| IPU) E2100.
> +  See the :doc:`../nics/cpfl` NIC guide for more details on this new driver.
> +

The "New Features" section is grouped, and that grouping is documented in
the section comment.

Can you please move the update to the proper location in the section?

<...>

> +static int
> +cpfl_dev_link_update(struct rte_eth_dev *dev,
> +		     __rte_unused int wait_to_complete)
> +{
> +	struct idpf_vport *vport = dev->data->dev_private;
> +	struct rte_eth_link new_link;
> +
> +	memset(&new_link, 0, sizeof(new_link));
> +
> +	switch (vport->link_speed) {
> +	case RTE_ETH_SPEED_NUM_10M:
> +		new_link.link_speed = RTE_ETH_SPEED_NUM_10M;
> +		break;
> +	case RTE_ETH_SPEED_NUM_100M:
> +		new_link.link_speed = RTE_ETH_SPEED_NUM_100M;
> +		break;
> +	case RTE_ETH_SPEED_NUM_1G:
> +		new_link.link_speed = RTE_ETH_SPEED_NUM_1G;
> +		break;
> +	case RTE_ETH_SPEED_NUM_10G:
> +		new_link.link_speed = RTE_ETH_SPEED_NUM_10G;
> +		break;
> +	case RTE_ETH_SPEED_NUM_20G:
> +		new_link.link_speed = RTE_ETH_SPEED_NUM_20G;
> +		break;
> +	case RTE_ETH_SPEED_NUM_25G:
> +		new_link.link_speed = RTE_ETH_SPEED_NUM_25G;
> +		break;
> +	case RTE_ETH_SPEED_NUM_40G:
> +		new_link.link_speed = RTE_ETH_SPEED_NUM_40G;
> +		break;
> +	case RTE_ETH_SPEED_NUM_50G:
> +		new_link.link_speed = RTE_ETH_SPEED_NUM_50G;
> +		break;
> +	case RTE_ETH_SPEED_NUM_100G:
> +		new_link.link_speed = RTE_ETH_SPEED_NUM_100G;
> +		break;
> +	case RTE_ETH_SPEED_NUM_200G:
> +		new_link.link_speed = RTE_ETH_SPEED_NUM_200G;
> +		break;

What about:
```
switch (vport->link_speed) {
case RTE_ETH_SPEED_NUM_10M:
case RTE_ETH_SPEED_NUM_100M:
...
case RTE_ETH_SPEED_NUM_200G:
	new_link.link_speed = vport->link_speed;
	break;
default:
	new_link.link_speed = RTE_ETH_SPEED_NUM_UNKNOWN;
```

OR

```
for (i = 0; i < RTE_DIM(supported_speeds); i++) {
	if (vport->link_speed == supported_speeds[i]) {
		new_link.link_speed = vport->link_speed;
		break;
	}
}

if (i == RTE_DIM(supported_speeds))
	new_link.link_speed = RTE_ETH_SPEED_NUM_UNKNOWN;
```

> +	default:
> +		new_link.link_speed = RTE_ETH_SPEED_NUM_NONE;

I think this should be:

if (link_up)
	new_link.link_speed = RTE_ETH_SPEED_NUM_UNKNOWN;
else
	new_link.link_speed = RTE_ETH_SPEED_NUM_NONE;

<...>

> +static int
> +insert_value(struct cpfl_devargs *devargs, uint16_t id)
> +{
> +	uint16_t i;
> +
> +	/* ignore duplicate */
> +	for (i = 0; i < devargs->req_vport_nb; i++) {
> +		if (devargs->req_vports[i] == id)
> +			return 0;
> +	}
> +
> +	if (devargs->req_vport_nb >= RTE_DIM(devargs->req_vports)) {
> +		PMD_INIT_LOG(ERR, "Total vport number can't be > %d",
> +			     CPFL_MAX_VPORT_NUM);

The check uses 'RTE_DIM(devargs->req_vports)' while the log uses
'CPFL_MAX_VPORT_NUM'; they are the same value, but it is better to stick
to one of them.

<...>

> +static int
> +parse_vport(const char *key, const char *value, void *args)
> +{
> +	struct cpfl_devargs *devargs = args;
> +	const char *pos = value;
> +
> +	devargs->req_vport_nb = 0;
> +

If "vport" can be provided multiple times, the above assignment is wrong,
e.g. for: "vport=1,vport=3-5"

<...>

> +static int
> +cpfl_parse_devargs(struct rte_pci_device *pci_dev, struct cpfl_adapter_ext *adapter,
> +		   struct cpfl_devargs *cpfl_args)
> +{
> +	struct rte_devargs *devargs = pci_dev->device.devargs;
> +	struct rte_kvargs *kvlist;
> +	int i, ret;
> +
> +	cpfl_args->req_vport_nb = 0;
> +
> +	if (devargs == NULL)
> +		return 0;
> +
> +	kvlist = rte_kvargs_parse(devargs->args, cpfl_valid_args);
> +	if (kvlist == NULL) {
> +		PMD_INIT_LOG(ERR, "invalid kvargs key");
> +		return -EINVAL;
> +	}
> +
> +	/* check parsed devargs */
> +	if (adapter->cur_vport_nb + cpfl_args->req_vport_nb >
> +	    CPFL_MAX_VPORT_NUM) {

At this stage 'cpfl_args->req_vport_nb' is 0 since CPFL_VPORT is not
parsed yet, is the intention to do this check after 'rte_kvargs_process()'?

> +		PMD_INIT_LOG(ERR, "Total vport number can't be > %d",
> +			     CPFL_MAX_VPORT_NUM);
> +		ret = -EINVAL;
> +		goto bail;
> +	}
> +
> +	for (i = 0; i < cpfl_args->req_vport_nb; i++) {
> +		if (adapter->cur_vports & RTE_BIT32(cpfl_args->req_vports[i])) {
> +			PMD_INIT_LOG(ERR, "Vport %d has been created",
> +				     cpfl_args->req_vports[i]);

This is just argument parsing, nothing is created yet; I suggest updating
the log accordingly.

> +			ret = -EINVAL;
> +			goto bail;
> +		}
> +	}

Same here for both 'cpfl_args->req_vport_nb' and
'cpfl_args->req_vports[]': they are not updated yet.

<...>

> +static void
> +cpfl_handle_virtchnl_msg(struct cpfl_adapter_ext *adapter_ex)
> +{
> +	struct idpf_adapter *adapter = &adapter_ex->base;

Everywhere else the 'struct cpfl_adapter_ext' variable is named
'adapter'; here it is 'adapter_ex', and the 'struct idpf_adapter'
variable is 'adapter'.

As far as I understand 'struct cpfl_adapter_ext' is something like
"extended adapter" and extended version of 'struct idpf_adapter', so in
the context of this driver what do you think to refer:
'struct cpfl_adapter_ext' as 'adapter'
'struct idpf_adapter'     as 'base' (or 'adapter_base'), consistently.
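
i.e. something like:

```
static void
cpfl_handle_virtchnl_msg(struct cpfl_adapter_ext *adapter)
{
	struct idpf_adapter *base = &adapter->base;
	...
}
```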

<...>

> +static const struct eth_dev_ops cpfl_eth_dev_ops = {
> +	.dev_configure			= cpfl_dev_configure,
> +	.dev_close			= cpfl_dev_close,
> +	.dev_infos_get			= cpfl_dev_info_get,
> +	.link_update			= cpfl_dev_link_update,
> +	.dev_supported_ptypes_get	= cpfl_dev_supported_ptypes_get,
> +};

Can you please move the block just after 'cpfl_dev_close()', to group
dev_ops related code together.

<...>

> +
> +static int
> +cpfl_dev_vport_init(struct rte_eth_dev *dev, void *init_params)
> +{
> +	struct idpf_vport *vport = dev->data->dev_private;
> +	struct cpfl_vport_param *param = init_params;
> +	struct cpfl_adapter_ext *adapter = param->adapter;
> +	/* for sending create vport virtchnl msg prepare */
> +	struct virtchnl2_create_vport create_vport_info;
> +	int ret = 0;
> +
> +	dev->dev_ops = &cpfl_eth_dev_ops;
> +	vport->adapter = &adapter->base;
> +	vport->sw_idx = param->idx;
> +	vport->devarg_id = param->devarg_id;
> +	vport->dev = dev;
> +
> +	memset(&create_vport_info, 0, sizeof(create_vport_info));
> +	ret = idpf_vport_info_init(vport, &create_vport_info);
> +	if (ret != 0) {
> +		PMD_INIT_LOG(ERR, "Failed to init vport req_info.");
> +		goto err;
> +	}
> +
> +	ret = idpf_vport_init(vport, &create_vport_info, dev->data);
> +	if (ret != 0) {
> +		PMD_INIT_LOG(ERR, "Failed to init vports.");
> +		goto err;
> +	}
> +
> +	adapter->vports[param->idx] = vport;
> +	adapter->cur_vports |= RTE_BIT32(param->devarg_id);
> +	adapter->cur_vport_nb++;
> +
> +	dev->data->mac_addrs = rte_zmalloc(NULL, RTE_ETHER_ADDR_LEN, 0);
> +	if (dev->data->mac_addrs == NULL) {
> +		PMD_INIT_LOG(ERR, "Cannot allocate mac_addr memory.");
> +		ret = -ENOMEM;
> +		goto err_mac_addrs;
> +	}
> +
> +	rte_ether_addr_copy((struct rte_ether_addr *)vport->default_mac_addr,
> +			    &dev->data->mac_addrs[0]);
> +
> +	return 0;
> +
> +err_mac_addrs:
> +	adapter->vports[param->idx] = NULL;  /* reset */

Shouldn't 'cur_vports' & 'cur_vport_nb' be updated too in this error path?
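
i.e. (sketch):

```
err_mac_addrs:
	adapter->vports[param->idx] = NULL;  /* reset */
	adapter->cur_vports &= ~RTE_BIT32(param->devarg_id);
	adapter->cur_vport_nb--;
```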

<...>

> +
> +err:
> +	if (first_probe) {
> +		rte_spinlock_lock(&cpfl_adapter_lock);
> +		TAILQ_REMOVE(&cpfl_adapter_list, adapter, next);
> +		rte_spinlock_unlock(&cpfl_adapter_lock);
> +		cpfl_adapter_ext_deinit(adapter);
> +		rte_free(adapter);
> +	}


Why is 'first_probe' needed? It looks like it is for the case when
probe() is called multiple times for the same pci_dev; can this happen?

<...>

> +RTE_PMD_REGISTER_PCI(net_cpfl, rte_cpfl_pmd);
> +RTE_PMD_REGISTER_PCI_TABLE(net_cpfl, pci_id_cpfl_map);
> +RTE_PMD_REGISTER_KMOD_DEP(net_cpfl, "* igb_uio | vfio-pci");
> +RTE_PMD_REGISTER_PARAM_STRING(net_cpfl,
> +			      CPFL_TX_SINGLE_Q "=<0|1> "
> +			      CPFL_RX_SINGLE_Q "=<0|1> "
> +			      CPFL_VPORT "=[vport_set0,[vport_set1],...]");

What about:
"\[vport0_begin[-vport0_end][,vport1_begin[-vport1_end][,..]\]"

<...>

> +
> +#define CPFL_MAX_VPORT_NUM	8
> +
It looks like there is a dynamic max vport number
(adapter->base.caps.max_vports), and there is the above hardcoded define
for the requested (devargs) vports.

The dynamic max is received via 'cpfl_adapter_ext_init()' before the
devargs are parsed, so would it be possible to remove this hardcoded max
completely?
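
e.g. the devargs check could use the negotiated capability instead
(sketch, assuming 'caps.max_vports' holds the device limit):

```
if (adapter->cur_vport_nb + cpfl_args->req_vport_nb >
    adapter->base.caps.max_vports) {
	PMD_INIT_LOG(ERR, "Total vport number can't be > %u",
		     adapter->base.caps.max_vports);
	ret = -EINVAL;
	goto bail;
}
```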


> +#define CPFL_INVALID_VPORT_IDX	0xffff
> +
> +#define CPFL_MIN_BUF_SIZE	1024
> +#define CPFL_MAX_FRAME_SIZE	9728
> +#define CPFL_DEFAULT_MTU	RTE_ETHER_MTU
> +
> +#define CPFL_NUM_MACADDR_MAX	64

The macro is not used; can you please add such macros only when they are used?

<...>

> @@ -0,0 +1,32 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2023 Intel Corporation
> + */
> +
> +#ifndef _CPFL_LOGS_H_
> +#define _CPFL_LOGS_H_
> +
> +#include <rte_log.h>
> +
> +extern int cpfl_logtype_init;
> +extern int cpfl_logtype_driver;
> +
> +#define PMD_INIT_LOG(level, ...) \
> +	rte_log(RTE_LOG_ ## level, \
> +		cpfl_logtype_init, \
> +		RTE_FMT("%s(): " \
> +			RTE_FMT_HEAD(__VA_ARGS__,) "\n", \
> +			__func__, \
> +			RTE_FMT_TAIL(__VA_ARGS__,)))
> +
> +#define PMD_DRV_LOG_RAW(level, ...) \
> +	rte_log(RTE_LOG_ ## level, \
> +		cpfl_logtype_driver, \
> +		RTE_FMT("%s(): " \
> +			RTE_FMT_HEAD(__VA_ARGS__,) "\n", \
> +			__func__, \
> +			RTE_FMT_TAIL(__VA_ARGS__,)))
> +
> +#define PMD_DRV_LOG(level, fmt, args...) \
> +	PMD_DRV_LOG_RAW(level, fmt "\n", ## args)
> +

Is 'PMD_DRV_LOG_RAW' required at all? Why not define 'PMD_DRV_LOG'
directly, as is done with 'PMD_INIT_LOG'?

Btw, 'PMD_DRV_LOG' seems to add a double '\n': one appended to 'fmt' in
'PMD_DRV_LOG' itself, the other in 'PMD_DRV_LOG_RAW' via rte_log().
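
i.e. 'PMD_DRV_LOG' could be defined directly, mirroring 'PMD_INIT_LOG':

```
#define PMD_DRV_LOG(level, ...) \
	rte_log(RTE_LOG_ ## level, \
		cpfl_logtype_driver, \
		RTE_FMT("%s(): " \
			RTE_FMT_HEAD(__VA_ARGS__,) "\n", \
			__func__, \
			RTE_FMT_TAIL(__VA_ARGS__,)))
```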


^ permalink raw reply	[flat|nested] 263+ messages in thread

* Re: [PATCH v7 00/21] add support for cpfl PMD in DPDK
  2023-02-16  0:29         ` [PATCH v7 " Mingxia Liu
                             ` (20 preceding siblings ...)
  2023-02-16  0:30           ` [PATCH v7 21/21] net/cpfl: add xstats ops Mingxia Liu
@ 2023-02-27 21:43           ` Ferruh Yigit
  2023-02-28  1:44             ` Zhang, Qi Z
  2023-03-02 10:35           ` [PATCH v8 " Mingxia Liu
  22 siblings, 1 reply; 263+ messages in thread
From: Ferruh Yigit @ 2023-02-27 21:43 UTC (permalink / raw)
  To: Mingxia Liu, beilei.xing, yuying.zhang; +Cc: dev, Mcnamara, John

On 2/16/2023 12:29 AM, Mingxia Liu wrote:
> The patchset introduced the cpfl (Control Plane Function Library) PMD
> for Intel® IPU E2100’s Configure Physical Function (Device ID: 0x1453)
> 
> The cpfl PMD inherits all the features from idpf PMD which will follow
> an ongoing standard data plan function spec
> https://www.oasis-open.org/committees/tc_home.php?wg_abbrev=idpf
> Besides, it will also support more device specific hardware offloading
> features from DPDK’s control path (e.g.: hairpin, rte_flow …). which is
> different from idpf PMD, and that's why we need a new cpfl PMD.
> 

Hi Mingxia, Beilei, Yuying,

Do you know if there is any effort to make device-specific offloads part
of the IDPF spec?

Overall, there is work on a standard interface and an upstream driver
for it (idpf), but the next product is diverging from the standard and
requiring a dedicated driver. Do you know if 'cpfl' is a temporary
solution while the standard gets the required updates, or is it a long
term divergence from the standard?


> This patchset mainly focuses on idpf PMD’s equivalent features.
> To avoid duplicated code, the patchset depends on below patchsets which
> move the common part from net/idpf into common/idpf as a shared library.
> 
> v2 changes:
>  - rebase to the new baseline.
>  - Fix rss lut config issue.
> v3 changes:
>  - rebase to the new baseline.
> v4 changes:
>  - Resend v3. No code changed.
> v5 changes:
>  - rebase to the new baseline.
>  - optimize some code
>  - give "not supported" tips when user want to config rss hash type
>  - if stats reset fails at initialization time, don't rollback, just
>    print ERROR info
> v6 changes:
>  - for small fixed size structure, change rte_memcpy to memcpy()
>  - fix compilation for AVX512DQ
>  - update cpfl maintainers
> v7 changes:
>  - add dependency in cover-letter
> 
> This patchset is based on the idpf PMD code:
> http://patches.dpdk.org/project/dpdk/cover/20230206054618.40975-1-beilei.xing@intel.com/
> http://patches.dpdk.org/project/dpdk/cover/20230106090501.9106-1-beilei.xing@intel.com/
> http://patches.dpdk.org/project/dpdk/cover/20230207084549.2225214-1-wenjun1.wu@intel.com/
> http://patches.dpdk.org/project/dpdk/cover/20230208073401.2468579-1-mingxia.liu@intel.com/
> 
> 
> Mingxia Liu (21):
>   net/cpfl: support device initialization
>   net/cpfl: add Tx queue setup
>   net/cpfl: add Rx queue setup
>   net/cpfl: support device start and stop
>   net/cpfl: support queue start
>   net/cpfl: support queue stop
>   net/cpfl: support queue release
>   net/cpfl: support MTU configuration
>   net/cpfl: support basic Rx data path
>   net/cpfl: support basic Tx data path
>   net/cpfl: support write back based on ITR expire
>   net/cpfl: support RSS
>   net/cpfl: support Rx offloading
>   net/cpfl: support Tx offloading
>   net/cpfl: add AVX512 data path for single queue model
>   net/cpfl: support timestamp offload
>   net/cpfl: add AVX512 data path for split queue model
>   net/cpfl: add HW statistics
>   net/cpfl: add RSS set/get ops
>   net/cpfl: support scalar scatter Rx datapath for single queue model
>   net/cpfl: add xstats ops
> 
>  MAINTAINERS                             |    8 +
>  doc/guides/nics/cpfl.rst                |   88 ++
>  doc/guides/nics/features/cpfl.ini       |   17 +
>  doc/guides/rel_notes/release_23_03.rst  |    6 +
>  drivers/net/cpfl/cpfl_ethdev.c          | 1453 +++++++++++++++++++++++
>  drivers/net/cpfl/cpfl_ethdev.h          |   95 ++
>  drivers/net/cpfl/cpfl_logs.h            |   32 +
>  drivers/net/cpfl/cpfl_rxtx.c            |  952 +++++++++++++++
>  drivers/net/cpfl/cpfl_rxtx.h            |   44 +
>  drivers/net/cpfl/cpfl_rxtx_vec_common.h |  116 ++
>  drivers/net/cpfl/meson.build            |   40 +
>  drivers/net/meson.build                 |    1 +
>  12 files changed, 2852 insertions(+)
>  create mode 100644 doc/guides/nics/cpfl.rst
>  create mode 100644 doc/guides/nics/features/cpfl.ini
>  create mode 100644 drivers/net/cpfl/cpfl_ethdev.c
>  create mode 100644 drivers/net/cpfl/cpfl_ethdev.h
>  create mode 100644 drivers/net/cpfl/cpfl_logs.h
>  create mode 100644 drivers/net/cpfl/cpfl_rxtx.c
>  create mode 100644 drivers/net/cpfl/cpfl_rxtx.h
>  create mode 100644 drivers/net/cpfl/cpfl_rxtx_vec_common.h
>  create mode 100644 drivers/net/cpfl/meson.build
> 


^ permalink raw reply	[flat|nested] 263+ messages in thread

* Re: [PATCH v7 02/21] net/cpfl: add Tx queue setup
  2023-02-16  0:29           ` [PATCH v7 02/21] net/cpfl: add Tx queue setup Mingxia Liu
@ 2023-02-27 21:44             ` Ferruh Yigit
  2023-02-28  2:40               ` Liu, Mingxia
  0 siblings, 1 reply; 263+ messages in thread
From: Ferruh Yigit @ 2023-02-27 21:44 UTC (permalink / raw)
  To: Mingxia Liu, dev, beilei.xing, yuying.zhang

On 2/16/2023 12:29 AM, Mingxia Liu wrote:
> Add support for tx_queue_setup ops.
> 
> In the single queue model, the same descriptor queue is used by SW to
> post buffer descriptors to HW and by HW to post completed descriptors
> to SW.
> 
> In the split queue model, "RX buffer queues" are used to pass
> descriptor buffers from SW to HW while Rx queues are used only to
> pass the descriptor completions, that is, descriptors that point
> to completed buffers, from HW to SW. This is contrary to the single
> queue model in which Rx queues are used for both purposes.
> 

This patch is related to Tx and the above description seems related to
Rx; would the next patch be a better place for the above paragraph? Or
please revise it for Tx if it applies to this patch too.


^ permalink raw reply	[flat|nested] 263+ messages in thread

* Re: [PATCH v7 03/21] net/cpfl: add Rx queue setup
  2023-02-16  0:29           ` [PATCH v7 03/21] net/cpfl: add Rx " Mingxia Liu
@ 2023-02-27 21:46             ` Ferruh Yigit
  2023-02-28  3:03               ` Liu, Mingxia
  0 siblings, 1 reply; 263+ messages in thread
From: Ferruh Yigit @ 2023-02-27 21:46 UTC (permalink / raw)
  To: Mingxia Liu, dev, beilei.xing, yuying.zhang

On 2/16/2023 12:29 AM, Mingxia Liu wrote:
> Add support for rx_queue_setup ops.
> 
> Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>

<...>

> +
> +	if (bufq_id == 1) {
> +		rxq->bufq1 = bufq;
> +	} else if (bufq_id == 2) {
> +		rxq->bufq2 = bufq;

For readability it is better to use enums to differentiate queues,
instead of using 1 and 2 as parameters to the function.

Also I wonder if queue variable names can be improved too, from 'bufq1'
& 'bufq2' to something more descriptive.
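
For example (hypothetical names, just to illustrate):

```
enum cpfl_rx_bufq_id {
	CPFL_RX_BUFQ1 = 1,
	CPFL_RX_BUFQ2 = 2,
};

if (bufq_id == CPFL_RX_BUFQ1)
	rxq->bufq1 = bufq;
else if (bufq_id == CPFL_RX_BUFQ2)
	rxq->bufq2 = bufq;
```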

^ permalink raw reply	[flat|nested] 263+ messages in thread

* Re: [PATCH v7 05/21] net/cpfl: support queue start
  2023-02-16  0:29           ` [PATCH v7 05/21] net/cpfl: support queue start Mingxia Liu
@ 2023-02-27 21:47             ` Ferruh Yigit
  2023-02-28  3:14               ` Liu, Mingxia
  0 siblings, 1 reply; 263+ messages in thread
From: Ferruh Yigit @ 2023-02-27 21:47 UTC (permalink / raw)
  To: Mingxia Liu, dev, beilei.xing, yuying.zhang

On 2/16/2023 12:29 AM, Mingxia Liu wrote:
> Add support for these device ops:
>  - rx_queue_start
>  - tx_queue_start
> 
> Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>

<...>

> +int
> +cpfl_rx_queue_init(struct rte_eth_dev *dev, uint16_t rx_queue_id)
> +{
> +	struct idpf_rx_queue *rxq;
> +	int err;
> +
> +	if (rx_queue_id >= dev->data->nb_rx_queues)
> +		return -EINVAL;
> +
> +	rxq = dev->data->rx_queues[rx_queue_id];
> +
> +	if (rxq == NULL || !rxq->q_set) {
> +		PMD_DRV_LOG(ERR, "RX queue %u not available or setup",
> +					rx_queue_id);
> +		return -EINVAL;
> +	}
> +
> +	if (rxq->bufq1 == NULL) {
> +		/* Single queue */

What do you think about keeping the queue type explicitly in the queue
struct, instead of deducing it from pointer values?
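
e.g. a sketch reusing the existing virtchnl2 model values, with a
hypothetical 'q_model' member in the queue struct:

```
/* at queue setup: */
rxq->q_model = vport->rxq_model;

/* then the check becomes: */
if (rxq->q_model == VIRTCHNL2_QUEUE_MODEL_SINGLE) {
	/* Single queue */
	...
}
```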



^ permalink raw reply	[flat|nested] 263+ messages in thread

* Re: [PATCH v7 06/21] net/cpfl: support queue stop
  2023-02-16  0:29           ` [PATCH v7 06/21] net/cpfl: support queue stop Mingxia Liu
@ 2023-02-27 21:48             ` Ferruh Yigit
  0 siblings, 0 replies; 263+ messages in thread
From: Ferruh Yigit @ 2023-02-27 21:48 UTC (permalink / raw)
  To: Mingxia Liu, dev, beilei.xing, yuying.zhang

On 2/16/2023 12:29 AM, Mingxia Liu wrote:
> Add support for these device ops:
>  - rx_queue_stop
>  - tx_queue_stop
> 
> Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>

<...>

> +int
> +cpfl_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id)
> +{
> +	struct idpf_vport *vport = dev->data->dev_private;
> +	struct idpf_rx_queue *rxq;
> +	int err;
> +
> +	if (rx_queue_id >= dev->data->nb_rx_queues)
> +		return -EINVAL;
> +
> +	err = idpf_vc_queue_switch(vport, rx_queue_id, true, false);
> +	if (err != 0) {
> +		PMD_DRV_LOG(ERR, "Failed to switch RX queue %u off",
> +			    rx_queue_id);
> +		return err;
> +	}
> +
> +	rxq = dev->data->rx_queues[rx_queue_id];
> +	if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE) {
> +		rxq->ops->release_mbufs(rxq);
> +		idpf_qc_single_rx_queue_reset(rxq);
> +	} else {
> +		rxq->bufq1->ops->release_mbufs(rxq->bufq1);
> +		rxq->bufq2->ops->release_mbufs(rxq->bufq2);

In this patch, the queue ops ('bufq1->ops') are not set yet; they are set
in the next patch. Switching the order with the next one may help.


^ permalink raw reply	[flat|nested] 263+ messages in thread

* Re: [PATCH v7 11/21] net/cpfl: support write back based on ITR expire
  2023-02-16  0:30           ` [PATCH v7 11/21] net/cpfl: support write back based on ITR expire Mingxia Liu
@ 2023-02-27 21:49             ` Ferruh Yigit
  2023-02-28 11:31               ` Liu, Mingxia
  0 siblings, 1 reply; 263+ messages in thread
From: Ferruh Yigit @ 2023-02-27 21:49 UTC (permalink / raw)
  To: Mingxia Liu, beilei.xing, yuying.zhang; +Cc: dev

On 2/16/2023 12:30 AM, Mingxia Liu wrote:
> Enable write back on ITR expire, then packets can be received one by
> 

Can you please describe this commit more?

I can see a wrapper to 'idpf_vport_irq_map_config()' is called; what is
configured related to the IRQ? What does ITR stand for, etc.?


^ permalink raw reply	[flat|nested] 263+ messages in thread

* Re: [PATCH v7 12/21] net/cpfl: support RSS
  2023-02-16  0:30           ` [PATCH v7 12/21] net/cpfl: support RSS Mingxia Liu
@ 2023-02-27 21:50             ` Ferruh Yigit
  2023-02-28 11:28               ` Liu, Mingxia
  0 siblings, 1 reply; 263+ messages in thread
From: Ferruh Yigit @ 2023-02-27 21:50 UTC (permalink / raw)
  To: Mingxia Liu, dev, beilei.xing, yuying.zhang

On 2/16/2023 12:30 AM, Mingxia Liu wrote:
> Add RSS support.
> 
> Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>

<...>

>  static int
>  cpfl_dev_configure(struct rte_eth_dev *dev)
>  {
>  	struct idpf_vport *vport = dev->data->dev_private;
>  	struct rte_eth_conf *conf = &dev->data->dev_conf;
> +	struct idpf_adapter *adapter = vport->adapter;
> +	int ret;
>  
>  	if (conf->link_speeds & RTE_ETH_LINK_SPEED_FIXED) {
>  		PMD_INIT_LOG(ERR, "Setting link speed is not supported");
> @@ -205,6 +245,17 @@ cpfl_dev_configure(struct rte_eth_dev *dev)
>  		return -ENOTSUP;
>  	}
>  
> +	if (adapter->caps.rss_caps != 0 && dev->data->nb_rx_queues != 0) {
> +		ret = cpfl_init_rss(vport);
> +		if (ret != 0) {
> +			PMD_INIT_LOG(ERR, "Failed to init rss");
> +			return ret;
> +		}
> +	} else {
> +		PMD_INIT_LOG(ERR, "RSS is not supported.");
> +		return -1;
> +	}


Shouldn't the driver take into account 'conf->rxmode.mq_mode' and
'conf->rx_adv_conf.rss_conf'?
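
A sketch of what the mq_mode handling could look like:

```
if (conf->rxmode.mq_mode != RTE_ETH_MQ_RX_NONE &&
    conf->rxmode.mq_mode != RTE_ETH_MQ_RX_RSS) {
	PMD_INIT_LOG(ERR, "Rx multi-queue mode %d is not supported",
		     conf->rxmode.mq_mode);
	return -ENOTSUP;
}
```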


^ permalink raw reply	[flat|nested] 263+ messages in thread

* Re: [PATCH v7 13/21] net/cpfl: support Rx offloading
  2023-02-16  0:30           ` [PATCH v7 13/21] net/cpfl: support Rx offloading Mingxia Liu
@ 2023-02-27 21:50             ` Ferruh Yigit
  2023-02-28  5:48               ` Liu, Mingxia
  0 siblings, 1 reply; 263+ messages in thread
From: Ferruh Yigit @ 2023-02-27 21:50 UTC (permalink / raw)
  To: Mingxia Liu, dev, beilei.xing, yuying.zhang

On 2/16/2023 12:30 AM, Mingxia Liu wrote:
> Add Rx offloading support:
>  - support CHKSUM and RSS offload for split queue model
>  - support CHKSUM offload for single queue model
> 
> Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
> ---
>  doc/guides/nics/features/cpfl.ini | 2 ++
>  drivers/net/cpfl/cpfl_ethdev.c    | 6 ++++++
>  2 files changed, 8 insertions(+)
> 
> diff --git a/doc/guides/nics/features/cpfl.ini b/doc/guides/nics/features/cpfl.ini
> index 470ba81579..ee5948f444 100644
> --- a/doc/guides/nics/features/cpfl.ini
> +++ b/doc/guides/nics/features/cpfl.ini
> @@ -8,6 +8,8 @@
>  ;
>  [Features]
>  MTU update           = Y
> +L3 checksum offload  = P
> +L4 checksum offload  = P
>  Linux                = Y
>  x86-32               = Y
>  x86-64               = Y
> diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
> index fb15004e48..d0f90b7d2c 100644
> --- a/drivers/net/cpfl/cpfl_ethdev.c
> +++ b/drivers/net/cpfl/cpfl_ethdev.c
> @@ -99,6 +99,12 @@ cpfl_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
>  
>  	dev_info->flow_type_rss_offloads = CPFL_RSS_OFFLOAD_ALL;
>  
> +	dev_info->rx_offload_capa =
> +		RTE_ETH_RX_OFFLOAD_IPV4_CKSUM           |
> +		RTE_ETH_RX_OFFLOAD_UDP_CKSUM            |
> +		RTE_ETH_RX_OFFLOAD_TCP_CKSUM            |
> +		RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM;
> +

Just to confirm, are these capabilities already supported in the
data path functions?

Same for the Tx ones in the next patch.

^ permalink raw reply	[flat|nested] 263+ messages in thread

* Re: [PATCH v7 15/21] net/cpfl: add AVX512 data path for single queue model
  2023-02-16  0:30           ` [PATCH v7 15/21] net/cpfl: add AVX512 data path for single queue model Mingxia Liu
@ 2023-02-27 21:51             ` Ferruh Yigit
  2023-02-28  3:19               ` Liu, Mingxia
  0 siblings, 1 reply; 263+ messages in thread
From: Ferruh Yigit @ 2023-02-27 21:51 UTC (permalink / raw)
  To: Mingxia Liu, dev, beilei.xing, yuying.zhang; +Cc: Wenjun Wu

On 2/16/2023 12:30 AM, Mingxia Liu wrote:
> Add support of AVX512 vector data path for single queue model.
> 
> Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
> Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>

<...>

> diff --git a/drivers/net/cpfl/cpfl_rxtx.c b/drivers/net/cpfl/cpfl_rxtx.c
> index c250642719..cb7bbddb16 100644
> --- a/drivers/net/cpfl/cpfl_rxtx.c
> +++ b/drivers/net/cpfl/cpfl_rxtx.c
> @@ -8,6 +8,7 @@
>  
>  #include "cpfl_ethdev.h"
>  #include "cpfl_rxtx.h"
> +#include "cpfl_rxtx_vec_common.h"
>  
>  static uint64_t
>  cpfl_rx_offload_convert(uint64_t offload)
> @@ -735,11 +736,61 @@ cpfl_stop_queues(struct rte_eth_dev *dev)
>  	}
>  }
>  
> +

Extra empty line.


^ permalink raw reply	[flat|nested] 263+ messages in thread

* Re: [PATCH v7 17/21] net/cpfl: add AVX512 data path for split queue model
  2023-02-16  0:30           ` [PATCH v7 17/21] net/cpfl: add AVX512 data path for split queue model Mingxia Liu
@ 2023-02-27 21:52             ` Ferruh Yigit
  0 siblings, 0 replies; 263+ messages in thread
From: Ferruh Yigit @ 2023-02-27 21:52 UTC (permalink / raw)
  To: Mingxia Liu, dev, beilei.xing, yuying.zhang, Bruce Richardson; +Cc: Wenjun Wu

On 2/16/2023 12:30 AM, Mingxia Liu wrote:
> Add support of AVX512 data path for split queue model.
> 
> Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
> Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>

<...>

> diff --git a/drivers/net/cpfl/meson.build b/drivers/net/cpfl/meson.build
> index fbe6500826..2cf69258e2 100644
> --- a/drivers/net/cpfl/meson.build
> +++ b/drivers/net/cpfl/meson.build
> @@ -23,13 +23,15 @@ sources = files(
>  if arch_subdir == 'x86'
>      cpfl_avx512_cpu_support = (
>          cc.get_define('__AVX512F__', args: machine_args) != '' and
> -        cc.get_define('__AVX512BW__', args: machine_args) != ''
> +        cc.get_define('__AVX512BW__', args: machine_args) != '' and
> +        cc.get_define('__AVX512DQ__', args: machine_args) != ''
>      )
>  
>      cpfl_avx512_cc_support = (
>          not machine_args.contains('-mno-avx512f') and
>          cc.has_argument('-mavx512f') and
> -        cc.has_argument('-mavx512bw')
> +        cc.has_argument('-mavx512bw') and
> +        cc.has_argument('-mavx512dq')
>      )

+Bruce

Does it make sense to have a common 'avx512_cc_support' meson function
that all the drivers and libraries that need it can use?
Detection is getting more complex and it is required by multiple
components.


^ permalink raw reply	[flat|nested] 263+ messages in thread

* Re: [PATCH v7 18/21] net/cpfl: add HW statistics
  2023-02-16  0:30           ` [PATCH v7 18/21] net/cpfl: add HW statistics Mingxia Liu
@ 2023-02-27 21:52             ` Ferruh Yigit
  2023-02-28  6:46               ` Liu, Mingxia
  0 siblings, 1 reply; 263+ messages in thread
From: Ferruh Yigit @ 2023-02-27 21:52 UTC (permalink / raw)
  To: Mingxia Liu, dev, beilei.xing, yuying.zhang

On 2/16/2023 12:30 AM, Mingxia Liu wrote:
> This patch add hardware packets/bytes statistics.
> 
> Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>

<...>

> +static int
> +cpfl_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
> +{
> +	struct idpf_vport *vport =
> +		(struct idpf_vport *)dev->data->dev_private;
> +	struct virtchnl2_vport_stats *pstats = NULL;
> +	int ret;
> +
> +	ret = idpf_vc_stats_query(vport, &pstats);
> +	if (ret == 0) {
> +		uint8_t crc_stats_len = (dev->data->dev_conf.rxmode.offloads &
> +					 RTE_ETH_RX_OFFLOAD_KEEP_CRC) ? 0 :
> +					 RTE_ETHER_CRC_LEN;
> +
> +		idpf_vport_stats_update(&vport->eth_stats_offset, pstats);
> +		stats->ipackets = pstats->rx_unicast + pstats->rx_multicast +
> +				pstats->rx_broadcast - pstats->rx_discards;
> +		stats->opackets = pstats->tx_broadcast + pstats->tx_multicast +
> +						pstats->tx_unicast;
> +		stats->imissed = pstats->rx_discards;
> +		stats->oerrors = pstats->tx_errors + pstats->tx_discards;
> +		stats->ibytes = pstats->rx_bytes;
> +		stats->ibytes -= stats->ipackets * crc_stats_len;
> +		stats->obytes = pstats->tx_bytes;
> +
> +		dev->data->rx_mbuf_alloc_failed = cpfl_get_mbuf_alloc_failed_stats(dev);

'dev->data->rx_mbuf_alloc_failed' is also used by telemetry; updating it
only here in stats_get() will make it wrong for telemetry.

Is it possible to update 'dev->data->rx_mbuf_alloc_failed' whenever an
allocation fails (alongside 'rxq->rx_stats.mbuf_alloc_failed')?
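
A sketch of the idea; the exact way the Rx queue reaches the ethdev data
(here via 'rxq->port_id' and the 'rte_eth_devices' array) is an
assumption:

```
nmb = rte_mbuf_raw_alloc(rxq->mp);
if (unlikely(nmb == NULL)) {
	rxq->rx_stats.mbuf_alloc_failed++;
	/* keep the ethdev-level counter in sync for telemetry */
	rte_eth_devices[rxq->port_id].data->rx_mbuf_alloc_failed++;
	break;
}
```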


^ permalink raw reply	[flat|nested] 263+ messages in thread

* Re: [PATCH v7 21/21] net/cpfl: add xstats ops
  2023-02-16  0:30           ` [PATCH v7 21/21] net/cpfl: add xstats ops Mingxia Liu
@ 2023-02-27 21:52             ` Ferruh Yigit
  2023-02-28  5:28               ` Liu, Mingxia
  2023-02-28  5:54               ` Liu, Mingxia
  0 siblings, 2 replies; 263+ messages in thread
From: Ferruh Yigit @ 2023-02-27 21:52 UTC (permalink / raw)
  To: Mingxia Liu, dev, beilei.xing, yuying.zhang

On 2/16/2023 12:30 AM, Mingxia Liu wrote:
> Add support for these device ops:
> - dev_xstats_get
> - dev_xstats_get_names
> - dev_xstats_reset
> 
> Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
> ---
>  drivers/net/cpfl/cpfl_ethdev.c | 80 ++++++++++++++++++++++++++++++++++
>  1 file changed, 80 insertions(+)
> 
> diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
> index f959a2911d..543dbd60f0 100644
> --- a/drivers/net/cpfl/cpfl_ethdev.c
> +++ b/drivers/net/cpfl/cpfl_ethdev.c
> @@ -80,6 +80,30 @@ static const uint64_t cpfl_ipv6_rss = RTE_ETH_RSS_NONFRAG_IPV6_UDP |
>  			  RTE_ETH_RSS_NONFRAG_IPV6_OTHER |
>  			  RTE_ETH_RSS_FRAG_IPV6;
>  
> +struct rte_cpfl_xstats_name_off {
> +	char name[RTE_ETH_XSTATS_NAME_SIZE];
> +	unsigned int offset;
> +};
> +
> +static const struct rte_cpfl_xstats_name_off rte_cpfl_stats_strings[] = {
> +	{"rx_bytes", offsetof(struct virtchnl2_vport_stats, rx_bytes)},
> +	{"rx_unicast_packets", offsetof(struct virtchnl2_vport_stats, rx_unicast)},
> +	{"rx_multicast_packets", offsetof(struct virtchnl2_vport_stats, rx_multicast)},
> +	{"rx_broadcast_packets", offsetof(struct virtchnl2_vport_stats, rx_broadcast)},
> +	{"rx_dropped_packets", offsetof(struct virtchnl2_vport_stats, rx_discards)},
> +	{"rx_errors", offsetof(struct virtchnl2_vport_stats, rx_errors)},
> +	{"rx_unknown_protocol_packets", offsetof(struct virtchnl2_vport_stats,
> +						 rx_unknown_protocol)},
> +	{"tx_bytes", offsetof(struct virtchnl2_vport_stats, tx_bytes)},
> +	{"tx_unicast_packets", offsetof(struct virtchnl2_vport_stats, tx_unicast)},
> +	{"tx_multicast_packets", offsetof(struct virtchnl2_vport_stats, tx_multicast)},
> +	{"tx_broadcast_packets", offsetof(struct virtchnl2_vport_stats, tx_broadcast)},
> +	{"tx_dropped_packets", offsetof(struct virtchnl2_vport_stats, tx_discards)},
> +	{"tx_error_packets", offsetof(struct virtchnl2_vport_stats, tx_errors)}};
> +
> +#define CPFL_NB_XSTATS (sizeof(rte_cpfl_stats_strings) / \
> +		sizeof(rte_cpfl_stats_strings[0]))
> +

Can use RTE_DIM here.
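
i.e.:

```
#define CPFL_NB_XSTATS RTE_DIM(rte_cpfl_stats_strings)
```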

>  static int
>  cpfl_dev_link_update(struct rte_eth_dev *dev,
>  		     __rte_unused int wait_to_complete)
> @@ -313,6 +337,59 @@ cpfl_dev_stats_reset(struct rte_eth_dev *dev)
>  	return 0;
>  }
>  
> +static int cpfl_dev_xstats_reset(struct rte_eth_dev *dev)
> +{
> +	cpfl_dev_stats_reset(dev);
> +	return 0;
> +}
> +
> +static int cpfl_dev_xstats_get(struct rte_eth_dev *dev,
> +			       struct rte_eth_xstat *xstats, unsigned int n)
> +{
> +	struct idpf_vport *vport =
> +		(struct idpf_vport *)dev->data->dev_private;
> +	struct virtchnl2_vport_stats *pstats = NULL;
> +	unsigned int i;
> +	int ret;
> +
> +	if (n < CPFL_NB_XSTATS)
> +		return CPFL_NB_XSTATS;
> +
> +	if (!xstats)
> +		return 0;
> +

if 'xstats' is NULL, it should return 'CPFL_NB_XSTATS'.
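
i.e. (sketch):

```
if (xstats == NULL || n < CPFL_NB_XSTATS)
	return CPFL_NB_XSTATS;
```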


^ permalink raw reply	[flat|nested] 263+ messages in thread

* Re: [PATCH v7 01/21] net/cpfl: support device initialization
  2023-02-27 15:45               ` Thomas Monjalon
@ 2023-02-27 23:38                 ` Ferruh Yigit
  2023-02-28  2:06                 ` Liu, Mingxia
  1 sibling, 0 replies; 263+ messages in thread
From: Ferruh Yigit @ 2023-02-27 23:38 UTC (permalink / raw)
  To: Thomas Monjalon, Andrew Rybchenko, Jerin Jacob Kollanukkaran,
	Qi Z Zhang, David Marchand
  Cc: dev, Mingxia Liu, yuying.zhang, beilei.xing, techboard

On 2/27/2023 3:45 PM, Thomas Monjalon wrote:
> 27/02/2023 14:46, Ferruh Yigit:
>> On 2/16/2023 12:29 AM, Mingxia Liu wrote:
>>> +static int
>>> +cpfl_dev_configure(struct rte_eth_dev *dev)
>>> +{
>>> +	struct rte_eth_conf *conf = &dev->data->dev_conf;
>>> +
>>> +	if (conf->link_speeds & RTE_ETH_LINK_SPEED_FIXED) {
>>> +		PMD_INIT_LOG(ERR, "Setting link speed is not supported");
>>> +		return -ENOTSUP;
>>> +	}
>>> +
>>> +	if (conf->txmode.mq_mode != RTE_ETH_MQ_TX_NONE) {
>>> +		PMD_INIT_LOG(ERR, "Multi-queue TX mode %d is not supported",
>>> +			     conf->txmode.mq_mode);
>>> +		return -ENOTSUP;
>>> +	}
>>> +
>>> +	if (conf->lpbk_mode != 0) {
>>> +		PMD_INIT_LOG(ERR, "Loopback operation mode %d is not supported",
>>> +			     conf->lpbk_mode);
>>> +		return -ENOTSUP;
>>> +	}
>>> +
>>> +	if (conf->dcb_capability_en != 0) {
>>> +		PMD_INIT_LOG(ERR, "Priority Flow Control(PFC) if not supported");
>>> +		return -ENOTSUP;
>>> +	}
>>> +
>>> +	if (conf->intr_conf.lsc != 0) {
>>> +		PMD_INIT_LOG(ERR, "LSC interrupt is not supported");
>>> +		return -ENOTSUP;
>>> +	}
>>> +
>>> +	if (conf->intr_conf.rxq != 0) {
>>> +		PMD_INIT_LOG(ERR, "RXQ interrupt is not supported");
>>> +		return -ENOTSUP;
>>> +	}
>>> +
>>> +	if (conf->intr_conf.rmv != 0) {
>>> +		PMD_INIT_LOG(ERR, "RMV interrupt is not supported");
>>> +		return -ENOTSUP;
>>> +	}
>>> +
>>> +	return 0;
>>
>> This is '.dev_configure()' dev ops of a driver, there is nothing wrong
>> with the function but it is a good example to highlight a point.
>>
>>
>> 'rte_eth_dev_configure()' can fail from various reasons, what can an
>> application do in this case?
>> It is not clear why configuration failed, there is no way to figure out
>> failed config option dynamically.
> 
> There are some capabilities to read before calling "configure".
> 

Yes, but there are some PMD-specific cases as well, like the above
SPEED_FIXED not being supported. How can an app manage this?

Mainly "struct rte_eth_dev_info" is used for capabilities (although it
is a mixed bag), that is not symmetric with config/setup functions, I
mean for a config/setup function there is no clear matching capability
struct/function.

>> Application developer can read the log and find out what caused the
>> failure, but what can do next? Put a conditional check for the
>> particular device, assuming application supports multiple devices,
>> before configuration?
> 
> Which failures cannot be guessed with capability flags?
> 

At least for the above sample, as far as I can see, capability flags
are missing for:
- txmode.mq_mode
- rxmode.mq_mode
- lpbk_mode
- intr_conf.rxq

We can go through the whole list to detect gaps if we plan to take action.

>> I think we need better error value, to help application detect what went
>> wrong and adapt dynamically, perhaps a bitmask of errors one per each
>> config option, what do you think?
> 
> I am not sure we can change such an old API.
> 

Yes, that is hard, but if we keep the return value negative, it can
still be backward compatible.

Or the API can keep the interface the same but set a global 'reason'
variable, similar to 'errno', so that new application code can
optionally get it with a new API and investigate it.

>> And I think this is another reason why we should not make a single API
>> too overloaded and complex.
> 
> Right, and I would support a work to have some of those "configure" features
> available as small functions.
> 

If there is enough appetite we can put something in the deprecation
notice for the next ABI release.


^ permalink raw reply	[flat|nested] 263+ messages in thread

* RE: [PATCH v7 00/21] add support for cpfl PMD in DPDK
  2023-02-27 21:43           ` [PATCH v7 00/21] add support for cpfl PMD in DPDK Ferruh Yigit
@ 2023-02-28  1:44             ` Zhang, Qi Z
  0 siblings, 0 replies; 263+ messages in thread
From: Zhang, Qi Z @ 2023-02-28  1:44 UTC (permalink / raw)
  To: Ferruh Yigit, Liu, Mingxia, Xing, Beilei, Zhang, Yuying
  Cc: dev, Mcnamara, John



> -----Original Message-----
> From: Ferruh Yigit <ferruh.yigit@amd.com>
> Sent: Tuesday, February 28, 2023 5:44 AM
> To: Liu, Mingxia <mingxia.liu@intel.com>; Xing, Beilei
> <beilei.xing@intel.com>; Zhang, Yuying <yuying.zhang@intel.com>
> Cc: dev@dpdk.org; Mcnamara, John <john.mcnamara@intel.com>
> Subject: Re: [PATCH v7 00/21] add support for cpfl PMD in DPDK
> 
> On 2/16/2023 12:29 AM, Mingxia Liu wrote:
> > The patchset introduced the cpfl (Control Plane Function Library) PMD
> > for Intel® IPU E2100’s Configure Physical Function (Device ID: 0x1453)
> >
> > The cpfl PMD inherits all the features from idpf PMD which will follow
> > an ongoing standard data plan function spec
> > https://www.oasis-open.org/committees/tc_home.php?wg_abbrev=idpf
> > Besides, it will also support more device specific hardware offloading
> > features from DPDK’s control path (e.g.: hairpin, rte_flow …). which
> > is different from idpf PMD, and that's why we need a new cpfl PMD.
> >
> 
> Hi Mingxia, Beilei, Yuying,
> 
> Do you know if is there any effort to make device specific offloads part of
> IDPF spec?

Currently, there is no related effort for standard idpf, but vendors can
enable their own hardware offloading features based on the idpf spec
(like cpfl), and common/idpf can be leveraged.

Regards
Qi

> 
> Overall, working on a standard interface and upstreaming driver for it
> (idpf) but next product is diverging from the standard and requiring a
> dedicated driver, do you know if 'cpfl' is a temporary solution while standard
> gets required update or is it a long term difference between standard?


> 
> 
> > This patchset mainly focuses on idpf PMD’s equivalent features.
> > To avoid duplicated code, the patchset depends on below patchsets
> > which move the common part from net/idpf into common/idpf as a shared
> library.
> >
> > v2 changes:
> >  - rebase to the new baseline.
> >  - Fix rss lut config issue.
> > v3 changes:
> >  - rebase to the new baseline.
> > v4 changes:
> >  - Resend v3. No code changed.
> > v5 changes:
> >  - rebase to the new baseline.
> >  - optimize some code
> >  - give "not supported" tips when user want to config rss hash type
> >  - if stats reset fails at initialization time, don't rollback, just
> >    print ERROR info
> > v6 changes:
> >  - for small fixed size structure, change rte_memcpy to memcpy()
> >  - fix compilation for AVX512DQ
> >  - update cpfl maintainers
> > v7 changes:
> >  - add dependency in cover-letter
> >
> > This patchset is based on the idpf PMD code:
> > http://patches.dpdk.org/project/dpdk/cover/20230206054618.40975-1-
> beil
> > ei.xing@intel.com/
> > http://patches.dpdk.org/project/dpdk/cover/20230106090501.9106-1-
> beile
> > i.xing@intel.com/
> > http://patches.dpdk.org/project/dpdk/cover/20230207084549.2225214-1-
> we
> > njun1.wu@intel.com/
> > http://patches.dpdk.org/project/dpdk/cover/20230208073401.2468579-1-
> mi
> > ngxia.liu@intel.com/
> >
> >
> > Mingxia Liu (21):
> >   net/cpfl: support device initialization
> >   net/cpfl: add Tx queue setup
> >   net/cpfl: add Rx queue setup
> >   net/cpfl: support device start and stop
> >   net/cpfl: support queue start
> >   net/cpfl: support queue stop
> >   net/cpfl: support queue release
> >   net/cpfl: support MTU configuration
> >   net/cpfl: support basic Rx data path
> >   net/cpfl: support basic Tx data path
> >   net/cpfl: support write back based on ITR expire
> >   net/cpfl: support RSS
> >   net/cpfl: support Rx offloading
> >   net/cpfl: support Tx offloading
> >   net/cpfl: add AVX512 data path for single queue model
> >   net/cpfl: support timestamp offload
> >   net/cpfl: add AVX512 data path for split queue model
> >   net/cpfl: add HW statistics
> >   net/cpfl: add RSS set/get ops
> >   net/cpfl: support scalar scatter Rx datapath for single queue model
> >   net/cpfl: add xstats ops
> >
> >  MAINTAINERS                             |    8 +
> >  doc/guides/nics/cpfl.rst                |   88 ++
> >  doc/guides/nics/features/cpfl.ini       |   17 +
> >  doc/guides/rel_notes/release_23_03.rst  |    6 +
> >  drivers/net/cpfl/cpfl_ethdev.c          | 1453 +++++++++++++++++++++++
> >  drivers/net/cpfl/cpfl_ethdev.h          |   95 ++
> >  drivers/net/cpfl/cpfl_logs.h            |   32 +
> >  drivers/net/cpfl/cpfl_rxtx.c            |  952 +++++++++++++++
> >  drivers/net/cpfl/cpfl_rxtx.h            |   44 +
> >  drivers/net/cpfl/cpfl_rxtx_vec_common.h |  116 ++
> >  drivers/net/cpfl/meson.build            |   40 +
> >  drivers/net/meson.build                 |    1 +
> >  12 files changed, 2852 insertions(+)
> >  create mode 100644 doc/guides/nics/cpfl.rst  create mode 100644
> > doc/guides/nics/features/cpfl.ini  create mode 100644
> > drivers/net/cpfl/cpfl_ethdev.c  create mode 100644
> > drivers/net/cpfl/cpfl_ethdev.h  create mode 100644
> > drivers/net/cpfl/cpfl_logs.h  create mode 100644
> > drivers/net/cpfl/cpfl_rxtx.c  create mode 100644
> > drivers/net/cpfl/cpfl_rxtx.h  create mode 100644
> > drivers/net/cpfl/cpfl_rxtx_vec_common.h
> >  create mode 100644 drivers/net/cpfl/meson.build
> >


^ permalink raw reply	[flat|nested] 263+ messages in thread

* RE: [PATCH v7 01/21] net/cpfl: support device initialization
  2023-02-27 15:45               ` Thomas Monjalon
  2023-02-27 23:38                 ` Ferruh Yigit
@ 2023-02-28  2:06                 ` Liu, Mingxia
  2023-02-28  9:53                   ` Ferruh Yigit
  1 sibling, 1 reply; 263+ messages in thread
From: Liu, Mingxia @ 2023-02-28  2:06 UTC (permalink / raw)
  To: Thomas Monjalon, Andrew Rybchenko, Jerin Jacob Kollanukkaran,
	Zhang, Qi Z, David Marchand, Ferruh Yigit
  Cc: dev, Zhang, Yuying, Xing, Beilei, techboard

Thank you all!
It's a good question, but as this is an experimental version and rc2 is
approaching, we won't optimize this function now; we will do it at the
time of the official product release.

> -----Original Message-----
> From: Thomas Monjalon <thomas@monjalon.net>
> Sent: Monday, February 27, 2023 11:46 PM
> To: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>; Jerin Jacob
> Kollanukkaran <jerinj@marvell.com>; Zhang, Qi Z <qi.z.zhang@intel.com>;
> David Marchand <david.marchand@redhat.com>; Ferruh Yigit
> <ferruh.yigit@amd.com>
> Cc: dev@dpdk.org; Liu, Mingxia <mingxia.liu@intel.com>; Zhang, Yuying
> <yuying.zhang@intel.com>; Xing, Beilei <beilei.xing@intel.com>;
> techboard@dpdk.org
> Subject: Re: [PATCH v7 01/21] net/cpfl: support device initialization
> 
> 27/02/2023 14:46, Ferruh Yigit:
> > On 2/16/2023 12:29 AM, Mingxia Liu wrote:
> > > +static int
> > > +cpfl_dev_configure(struct rte_eth_dev *dev) {
> > > +	struct rte_eth_conf *conf = &dev->data->dev_conf;
> > > +
> > > +	if (conf->link_speeds & RTE_ETH_LINK_SPEED_FIXED) {
> > > +		PMD_INIT_LOG(ERR, "Setting link speed is not supported");
> > > +		return -ENOTSUP;
> > > +	}
> > > +
> > > +	if (conf->txmode.mq_mode != RTE_ETH_MQ_TX_NONE) {
> > > +		PMD_INIT_LOG(ERR, "Multi-queue TX mode %d is not
> supported",
> > > +			     conf->txmode.mq_mode);
> > > +		return -ENOTSUP;
> > > +	}
> > > +
> > > +	if (conf->lpbk_mode != 0) {
> > > +		PMD_INIT_LOG(ERR, "Loopback operation mode %d is not
> supported",
> > > +			     conf->lpbk_mode);
> > > +		return -ENOTSUP;
> > > +	}
> > > +
> > > +	if (conf->dcb_capability_en != 0) {
> > > +		PMD_INIT_LOG(ERR, "Priority Flow Control(PFC) if not
> supported");
> > > +		return -ENOTSUP;
> > > +	}
> > > +
> > > +	if (conf->intr_conf.lsc != 0) {
> > > +		PMD_INIT_LOG(ERR, "LSC interrupt is not supported");
> > > +		return -ENOTSUP;
> > > +	}
> > > +
> > > +	if (conf->intr_conf.rxq != 0) {
> > > +		PMD_INIT_LOG(ERR, "RXQ interrupt is not supported");
> > > +		return -ENOTSUP;
> > > +	}
> > > +
> > > +	if (conf->intr_conf.rmv != 0) {
> > > +		PMD_INIT_LOG(ERR, "RMV interrupt is not supported");
> > > +		return -ENOTSUP;
> > > +	}
> > > +
> > > +	return 0;
> >
> > This is '.dev_configure()' dev ops of a driver, there is nothing wrong
> > with the function but it is a good example to highlight a point.
> >
> >
> > 'rte_eth_dev_configure()' can fail from various reasons, what can an
> > application do in this case?
> > It is not clear why configuration failed, there is no way to figure
> > out failed config option dynamically.
> 
> There are some capabilities to read before calling "configure".
> 
> > Application developer can read the log and find out what caused the
> > failure, but what can do next? Put a conditional check for the
> > particular device, assuming application supports multiple devices,
> > before configuration?
> 
> Which failures cannot be guessed with capability flags?
> 
> > I think we need better error value, to help application detect what
> > went wrong and adapt dynamically, perhaps a bitmask of errors one per
> > each config option, what do you think?
> 
> I am not sure we can change such an old API.
> 
> > And I think this is another reason why we should not make a single API
> > too overloaded and complex.
> 
> Right, and I would support a work to have some of those "configure"
> features available as small functions.
> 


^ permalink raw reply	[flat|nested] 263+ messages in thread

* RE: [PATCH v7 02/21] net/cpfl: add Tx queue setup
  2023-02-27 21:44             ` Ferruh Yigit
@ 2023-02-28  2:40               ` Liu, Mingxia
  0 siblings, 0 replies; 263+ messages in thread
From: Liu, Mingxia @ 2023-02-28  2:40 UTC (permalink / raw)
  To: Ferruh Yigit, dev, Xing,  Beilei, Zhang, Yuying

Ok,thanks!

> -----Original Message-----
> From: Ferruh Yigit <ferruh.yigit@amd.com>
> Sent: Tuesday, February 28, 2023 5:45 AM
> To: Liu, Mingxia <mingxia.liu@intel.com>; dev@dpdk.org; Xing, Beilei
> <beilei.xing@intel.com>; Zhang, Yuying <yuying.zhang@intel.com>
> Subject: Re: [PATCH v7 02/21] net/cpfl: add Tx queue setup
> 
> On 2/16/2023 12:29 AM, Mingxia Liu wrote:
> > Add support for tx_queue_setup ops.
> >
> > In the single queue model, the same descriptor queue is used by SW to
> > post buffer descriptors to HW and by HW to post completed descriptors
> > to SW.
> >
> > In the split queue model, "RX buffer queues" are used to pass
> > descriptor buffers from SW to HW while Rx queues are used only to pass
> > the descriptor completions, that is, descriptors that point to
> > completed buffers, from HW to SW. This is contrary to the single queue
> > model in which Rx queues are used for both purposes.
> >
> 
> This patch is related to the Tx and above description seems related Rx, can
> next patch be a better place for above paragraph? Or please revise it for Tx
> if it applies to this patch too.


^ permalink raw reply	[flat|nested] 263+ messages in thread

* RE: [PATCH v7 03/21] net/cpfl: add Rx queue setup
  2023-02-27 21:46             ` Ferruh Yigit
@ 2023-02-28  3:03               ` Liu, Mingxia
  2023-02-28 10:02                 ` Ferruh Yigit
  0 siblings, 1 reply; 263+ messages in thread
From: Liu, Mingxia @ 2023-02-28  3:03 UTC (permalink / raw)
  To: Ferruh Yigit, dev, Xing,  Beilei, Zhang, Yuying

Thanks for your comments, we will use enums to differentiate queues.

As for 'bufq1' & 'bufq2', they are members of struct idpf_rx_queue, defined in the idpf common module,
and it involves idpf PMD code, so it's better to improve them in a later fix patch.

> -----Original Message-----
> From: Ferruh Yigit <ferruh.yigit@amd.com>
> Sent: Tuesday, February 28, 2023 5:46 AM
> To: Liu, Mingxia <mingxia.liu@intel.com>; dev@dpdk.org; Xing, Beilei
> <beilei.xing@intel.com>; Zhang, Yuying <yuying.zhang@intel.com>
> Subject: Re: [PATCH v7 03/21] net/cpfl: add Rx queue setup
> 
> On 2/16/2023 12:29 AM, Mingxia Liu wrote:
> > Add support for rx_queue_setup ops.
> >
> > Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
> 
> <...>
> 
> > +
> > +	if (bufq_id == 1) {
> > +		rxq->bufq1 = bufq;
> > +	} else if (bufq_id == 2) {
> > +		rxq->bufq2 = bufq;
> 
> For readability better to use enums to diffrentiate queues, instead of using
> 1 and 2 as paramter to function.
> 
> Also I wonder if queue variable names can be improved too, from 'bufq1'
> & 'bufq2' to something more descriptive.

^ permalink raw reply	[flat|nested] 263+ messages in thread

* RE: [PATCH v7 05/21] net/cpfl: support queue start
  2023-02-27 21:47             ` Ferruh Yigit
@ 2023-02-28  3:14               ` Liu, Mingxia
  2023-02-28  3:28                 ` Liu, Mingxia
  0 siblings, 1 reply; 263+ messages in thread
From: Liu, Mingxia @ 2023-02-28  3:14 UTC (permalink / raw)
  To: Ferruh Yigit, dev, Xing,  Beilei, Zhang, Yuying

Thanks, Ferruh!
It's a good idea!

As it involves a wide range of changes, including the idpf common module
and idpf PMD code, we'd better update it in a later fix patch after rc2.

> -----Original Message-----
> From: Ferruh Yigit <ferruh.yigit@amd.com>
> Sent: Tuesday, February 28, 2023 5:47 AM
> To: Liu, Mingxia <mingxia.liu@intel.com>; dev@dpdk.org; Xing, Beilei
> <beilei.xing@intel.com>; Zhang, Yuying <yuying.zhang@intel.com>
> Subject: Re: [PATCH v7 05/21] net/cpfl: support queue start
> 
> On 2/16/2023 12:29 AM, Mingxia Liu wrote:
> > Add support for these device ops:
> >  - rx_queue_start
> >  - tx_queue_start
> >
> > Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
> 
> <...>
> 
> > +int
> > +cpfl_rx_queue_init(struct rte_eth_dev *dev, uint16_t rx_queue_id) {
> > +	struct idpf_rx_queue *rxq;
> > +	int err;
> > +
> > +	if (rx_queue_id >= dev->data->nb_rx_queues)
> > +		return -EINVAL;
> > +
> > +	rxq = dev->data->rx_queues[rx_queue_id];
> > +
> > +	if (rxq == NULL || !rxq->q_set) {
> > +		PMD_DRV_LOG(ERR, "RX queue %u not available or setup",
> > +					rx_queue_id);
> > +		return -EINVAL;
> > +	}
> > +
> > +	if (rxq->bufq1 == NULL) {
> > +		/* Single queue */
> 
> What do you think to keep the queue type explicitly in the queue struct,
> instead of deducing it from pointer values?
> 


^ permalink raw reply	[flat|nested] 263+ messages in thread

* RE: [PATCH v7 15/21] net/cpfl: add AVX512 data path for single queue model
  2023-02-27 21:51             ` Ferruh Yigit
@ 2023-02-28  3:19               ` Liu, Mingxia
  0 siblings, 0 replies; 263+ messages in thread
From: Liu, Mingxia @ 2023-02-28  3:19 UTC (permalink / raw)
  To: Ferruh Yigit, dev, Xing,  Beilei, Zhang, Yuying; +Cc: Wu, Wenjun1

Thanks, will delete it.

> -----Original Message-----
> From: Ferruh Yigit <ferruh.yigit@amd.com>
> Sent: Tuesday, February 28, 2023 5:51 AM
> To: Liu, Mingxia <mingxia.liu@intel.com>; dev@dpdk.org; Xing, Beilei
> <beilei.xing@intel.com>; Zhang, Yuying <yuying.zhang@intel.com>
> Cc: Wu, Wenjun1 <wenjun1.wu@intel.com>
> Subject: Re: [PATCH v7 15/21] net/cpfl: add AVX512 data path for single
> queue model
> 
> On 2/16/2023 12:30 AM, Mingxia Liu wrote:
> > Add support of AVX512 vector data path for single queue model.
> >
> > Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
> > Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
> 
> <...>
> 
> > diff --git a/drivers/net/cpfl/cpfl_rxtx.c
> > b/drivers/net/cpfl/cpfl_rxtx.c index c250642719..cb7bbddb16 100644
> > --- a/drivers/net/cpfl/cpfl_rxtx.c
> > +++ b/drivers/net/cpfl/cpfl_rxtx.c
> > @@ -8,6 +8,7 @@
> >
> >  #include "cpfl_ethdev.h"
> >  #include "cpfl_rxtx.h"
> > +#include "cpfl_rxtx_vec_common.h"
> >
> >  static uint64_t
> >  cpfl_rx_offload_convert(uint64_t offload) @@ -735,11 +736,61 @@
> > cpfl_stop_queues(struct rte_eth_dev *dev)
> >  	}
> >  }
> >
> > +
> 
> Extra empty line.


^ permalink raw reply	[flat|nested] 263+ messages in thread

* RE: [PATCH v7 05/21] net/cpfl: support queue start
  2023-02-28  3:14               ` Liu, Mingxia
@ 2023-02-28  3:28                 ` Liu, Mingxia
  0 siblings, 0 replies; 263+ messages in thread
From: Liu, Mingxia @ 2023-02-28  3:28 UTC (permalink / raw)
  To: Ferruh Yigit, dev, Xing,  Beilei, Zhang, Yuying

If time permits, we will first submit a fix patch to add a queue type member in the rx_queue struct.

> -----Original Message-----
> From: Liu, Mingxia
> Sent: Tuesday, February 28, 2023 11:15 AM
> To: Ferruh Yigit <ferruh.yigit@amd.com>; dev@dpdk.org; Xing, Beilei
> <beilei.xing@intel.com>; Zhang, Yuying <Yuying.Zhang@intel.com>
> Subject: RE: [PATCH v7 05/21] net/cpfl: support queue start
> 
> Thanks, Ferruh !
> It's a good idea!
> 
> As for it involves a wide range of change, including idpf common module
> and idpf pmd code, So we'd better update it in the later fix patch after rc2.
> 
> > -----Original Message-----
> > From: Ferruh Yigit <ferruh.yigit@amd.com>
> > Sent: Tuesday, February 28, 2023 5:47 AM
> > To: Liu, Mingxia <mingxia.liu@intel.com>; dev@dpdk.org; Xing, Beilei
> > <beilei.xing@intel.com>; Zhang, Yuying <yuying.zhang@intel.com>
> > Subject: Re: [PATCH v7 05/21] net/cpfl: support queue start
> >
> > On 2/16/2023 12:29 AM, Mingxia Liu wrote:
> > > Add support for these device ops:
> > >  - rx_queue_start
> > >  - tx_queue_start
> > >
> > > Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
> >
> > <...>
> >
> > > +int
> > > +cpfl_rx_queue_init(struct rte_eth_dev *dev, uint16_t rx_queue_id) {
> > > +	struct idpf_rx_queue *rxq;
> > > +	int err;
> > > +
> > > +	if (rx_queue_id >= dev->data->nb_rx_queues)
> > > +		return -EINVAL;
> > > +
> > > +	rxq = dev->data->rx_queues[rx_queue_id];
> > > +
> > > +	if (rxq == NULL || !rxq->q_set) {
> > > +		PMD_DRV_LOG(ERR, "RX queue %u not available or setup",
> > > +					rx_queue_id);
> > > +		return -EINVAL;
> > > +	}
> > > +
> > > +	if (rxq->bufq1 == NULL) {
> > > +		/* Single queue */
> >
> > What do you think to keep the queue type explicitly in the queue
> > struct, instead of deducing it from pointer values?
> >


^ permalink raw reply	[flat|nested] 263+ messages in thread

* RE: [PATCH v7 21/21] net/cpfl: add xstats ops
  2023-02-27 21:52             ` Ferruh Yigit
@ 2023-02-28  5:28               ` Liu, Mingxia
  2023-02-28  5:54               ` Liu, Mingxia
  1 sibling, 0 replies; 263+ messages in thread
From: Liu, Mingxia @ 2023-02-28  5:28 UTC (permalink / raw)
  To: Ferruh Yigit, dev, Xing,  Beilei, Zhang, Yuying



> -----Original Message-----
> From: Ferruh Yigit <ferruh.yigit@amd.com>
> Sent: Tuesday, February 28, 2023 5:53 AM
> To: Liu, Mingxia <mingxia.liu@intel.com>; dev@dpdk.org; Xing, Beilei
> <beilei.xing@intel.com>; Zhang, Yuying <yuying.zhang@intel.com>
> Subject: Re: [PATCH v7 21/21] net/cpfl: add xstats ops
> 
> On 2/16/2023 12:30 AM, Mingxia Liu wrote:
> > Add support for these device ops:
> > - dev_xstats_get
> > - dev_xstats_get_names
> > - dev_xstats_reset
> >
> > Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
> > ---
> >  drivers/net/cpfl/cpfl_ethdev.c | 80
> > ++++++++++++++++++++++++++++++++++
> >  1 file changed, 80 insertions(+)
> >
> > diff --git a/drivers/net/cpfl/cpfl_ethdev.c
> > b/drivers/net/cpfl/cpfl_ethdev.c index f959a2911d..543dbd60f0 100644
> > --- a/drivers/net/cpfl/cpfl_ethdev.c
> > +++ b/drivers/net/cpfl/cpfl_ethdev.c
> > @@ -80,6 +80,30 @@ static const uint64_t cpfl_ipv6_rss =
> RTE_ETH_RSS_NONFRAG_IPV6_UDP |
> >  			  RTE_ETH_RSS_NONFRAG_IPV6_OTHER |
> >  			  RTE_ETH_RSS_FRAG_IPV6;
> >
> > +struct rte_cpfl_xstats_name_off {
> > +	char name[RTE_ETH_XSTATS_NAME_SIZE];
> > +	unsigned int offset;
> > +};
> > +
> > +static const struct rte_cpfl_xstats_name_off rte_cpfl_stats_strings[] = {
> > +	{"rx_bytes", offsetof(struct virtchnl2_vport_stats, rx_bytes)},
> > +	{"rx_unicast_packets", offsetof(struct virtchnl2_vport_stats,
> rx_unicast)},
> > +	{"rx_multicast_packets", offsetof(struct virtchnl2_vport_stats,
> rx_multicast)},
> > +	{"rx_broadcast_packets", offsetof(struct virtchnl2_vport_stats,
> rx_broadcast)},
> > +	{"rx_dropped_packets", offsetof(struct virtchnl2_vport_stats,
> rx_discards)},
> > +	{"rx_errors", offsetof(struct virtchnl2_vport_stats, rx_errors)},
> > +	{"rx_unknown_protocol_packets", offsetof(struct
> virtchnl2_vport_stats,
> > +						 rx_unknown_protocol)},
> > +	{"tx_bytes", offsetof(struct virtchnl2_vport_stats, tx_bytes)},
> > +	{"tx_unicast_packets", offsetof(struct virtchnl2_vport_stats,
> tx_unicast)},
> > +	{"tx_multicast_packets", offsetof(struct virtchnl2_vport_stats,
> tx_multicast)},
> > +	{"tx_broadcast_packets", offsetof(struct virtchnl2_vport_stats,
> tx_broadcast)},
> > +	{"tx_dropped_packets", offsetof(struct virtchnl2_vport_stats,
> tx_discards)},
> > +	{"tx_error_packets", offsetof(struct virtchnl2_vport_stats,
> > +tx_errors)}};
> > +
> > +#define CPFL_NB_XSTATS (sizeof(rte_cpfl_stats_strings) / \
> > +		sizeof(rte_cpfl_stats_strings[0]))
> > +
> 
> Can use RTE_DIM here.
> 
> >  static int
> >  cpfl_dev_link_update(struct rte_eth_dev *dev,
> >  		     __rte_unused int wait_to_complete) @@ -313,6
> +337,59 @@
> > cpfl_dev_stats_reset(struct rte_eth_dev *dev)
> >  	return 0;
> >  }
> >
> > +static int cpfl_dev_xstats_reset(struct rte_eth_dev *dev) {
> > +	cpfl_dev_stats_reset(dev);
> > +	return 0;
> > +}
> > +
> > +static int cpfl_dev_xstats_get(struct rte_eth_dev *dev,
> > +			       struct rte_eth_xstat *xstats, unsigned int n) {
> > +	struct idpf_vport *vport =
> > +		(struct idpf_vport *)dev->data->dev_private;
> > +	struct virtchnl2_vport_stats *pstats = NULL;
> > +	unsigned int i;
> > +	int ret;
> > +
> > +	if (n < CPFL_NB_XSTATS)
> > +		return CPFL_NB_XSTATS;
> > +
> > +	if (!xstats)
> > +		return 0;
> > +
> 
> if 'xstats' is NULL, it should return 'CPFL_NB_XSTATS'.
[Liu, Mingxia] ok thanks!


^ permalink raw reply	[flat|nested] 263+ messages in thread

* RE: [PATCH v7 13/21] net/cpfl: support Rx offloading
  2023-02-27 21:50             ` Ferruh Yigit
@ 2023-02-28  5:48               ` Liu, Mingxia
  0 siblings, 0 replies; 263+ messages in thread
From: Liu, Mingxia @ 2023-02-28  5:48 UTC (permalink / raw)
  To: Ferruh Yigit, dev, Xing,  Beilei, Zhang, Yuying



> -----Original Message-----
> From: Ferruh Yigit <ferruh.yigit@amd.com>
> Sent: Tuesday, February 28, 2023 5:50 AM
> To: Liu, Mingxia <mingxia.liu@intel.com>; dev@dpdk.org; Xing, Beilei
> <beilei.xing@intel.com>; Zhang, Yuying <yuying.zhang@intel.com>
> Subject: Re: [PATCH v7 13/21] net/cpfl: support Rx offloading
> 
> On 2/16/2023 12:30 AM, Mingxia Liu wrote:
> > Add Rx offloading support:
> >  - support CHKSUM and RSS offload for split queue model
> >  - support CHKSUM offload for single queue model
> >
> > Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
> > ---
> >  doc/guides/nics/features/cpfl.ini | 2 ++
> >  drivers/net/cpfl/cpfl_ethdev.c    | 6 ++++++
> >  2 files changed, 8 insertions(+)
> >
> > diff --git a/doc/guides/nics/features/cpfl.ini
> > b/doc/guides/nics/features/cpfl.ini
> > index 470ba81579..ee5948f444 100644
> > --- a/doc/guides/nics/features/cpfl.ini
> > +++ b/doc/guides/nics/features/cpfl.ini
> > @@ -8,6 +8,8 @@
> >  ;
> >  [Features]
> >  MTU update           = Y
> > +L3 checksum offload  = P
> > +L4 checksum offload  = P
> >  Linux                = Y
> >  x86-32               = Y
> >  x86-64               = Y
> > diff --git a/drivers/net/cpfl/cpfl_ethdev.c
> > b/drivers/net/cpfl/cpfl_ethdev.c index fb15004e48..d0f90b7d2c 100644
> > --- a/drivers/net/cpfl/cpfl_ethdev.c
> > +++ b/drivers/net/cpfl/cpfl_ethdev.c
> > @@ -99,6 +99,12 @@ cpfl_dev_info_get(struct rte_eth_dev *dev, struct
> > rte_eth_dev_info *dev_info)
> >
> >  	dev_info->flow_type_rss_offloads = CPFL_RSS_OFFLOAD_ALL;
> >
> > +	dev_info->rx_offload_capa =
> > +		RTE_ETH_RX_OFFLOAD_IPV4_CKSUM           |
> > +		RTE_ETH_RX_OFFLOAD_UDP_CKSUM            |
> > +		RTE_ETH_RX_OFFLOAD_TCP_CKSUM            |
> > +		RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM;
> > +
> 
> Just to confirm, are these capabilities are already supported in the data
> path functions?
> 
> Same for Tx ones in next patch.
[Liu, Mingxia] sure, they are all already supported.

^ permalink raw reply	[flat|nested] 263+ messages in thread

* RE: [PATCH v7 21/21] net/cpfl: add xstats ops
  2023-02-27 21:52             ` Ferruh Yigit
  2023-02-28  5:28               ` Liu, Mingxia
@ 2023-02-28  5:54               ` Liu, Mingxia
  1 sibling, 0 replies; 263+ messages in thread
From: Liu, Mingxia @ 2023-02-28  5:54 UTC (permalink / raw)
  To: Ferruh Yigit, dev, Xing,  Beilei, Zhang, Yuying



> -----Original Message-----
> From: Ferruh Yigit <ferruh.yigit@amd.com>
> Sent: Tuesday, February 28, 2023 5:53 AM
> To: Liu, Mingxia <mingxia.liu@intel.com>; dev@dpdk.org; Xing, Beilei
> <beilei.xing@intel.com>; Zhang, Yuying <yuying.zhang@intel.com>
> Subject: Re: [PATCH v7 21/21] net/cpfl: add xstats ops
> 
> On 2/16/2023 12:30 AM, Mingxia Liu wrote:
> > Add support for these device ops:
> > - dev_xstats_get
> > - dev_xstats_get_names
> > - dev_xstats_reset
> >
> > Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
> > ---
> >  drivers/net/cpfl/cpfl_ethdev.c | 80
> > ++++++++++++++++++++++++++++++++++
> >  1 file changed, 80 insertions(+)
> >
> > diff --git a/drivers/net/cpfl/cpfl_ethdev.c
> > b/drivers/net/cpfl/cpfl_ethdev.c index f959a2911d..543dbd60f0 100644
> > --- a/drivers/net/cpfl/cpfl_ethdev.c
> > +++ b/drivers/net/cpfl/cpfl_ethdev.c
> > @@ -80,6 +80,30 @@ static const uint64_t cpfl_ipv6_rss =
> RTE_ETH_RSS_NONFRAG_IPV6_UDP |
> >  			  RTE_ETH_RSS_NONFRAG_IPV6_OTHER |
> >  			  RTE_ETH_RSS_FRAG_IPV6;
> >
> > +struct rte_cpfl_xstats_name_off {
> > +	char name[RTE_ETH_XSTATS_NAME_SIZE];
> > +	unsigned int offset;
> > +};
> > +
> > +static const struct rte_cpfl_xstats_name_off rte_cpfl_stats_strings[] = {
> > +	{"rx_bytes", offsetof(struct virtchnl2_vport_stats, rx_bytes)},
> > +	{"rx_unicast_packets", offsetof(struct virtchnl2_vport_stats,
> rx_unicast)},
> > +	{"rx_multicast_packets", offsetof(struct virtchnl2_vport_stats,
> rx_multicast)},
> > +	{"rx_broadcast_packets", offsetof(struct virtchnl2_vport_stats,
> rx_broadcast)},
> > +	{"rx_dropped_packets", offsetof(struct virtchnl2_vport_stats,
> rx_discards)},
> > +	{"rx_errors", offsetof(struct virtchnl2_vport_stats, rx_errors)},
> > +	{"rx_unknown_protocol_packets", offsetof(struct
> virtchnl2_vport_stats,
> > +						 rx_unknown_protocol)},
> > +	{"tx_bytes", offsetof(struct virtchnl2_vport_stats, tx_bytes)},
> > +	{"tx_unicast_packets", offsetof(struct virtchnl2_vport_stats,
> tx_unicast)},
> > +	{"tx_multicast_packets", offsetof(struct virtchnl2_vport_stats,
> tx_multicast)},
> > +	{"tx_broadcast_packets", offsetof(struct virtchnl2_vport_stats,
> tx_broadcast)},
> > +	{"tx_dropped_packets", offsetof(struct virtchnl2_vport_stats,
> tx_discards)},
> > +	{"tx_error_packets", offsetof(struct virtchnl2_vport_stats,
> > +tx_errors)}};
> > +
> > +#define CPFL_NB_XSTATS (sizeof(rte_cpfl_stats_strings) / \
> > +		sizeof(rte_cpfl_stats_strings[0]))
> > +
> 
> Can use RTE_DIM here.
> 
[Liu, Mingxia] Ok, it's better, thanks.
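
With that, the define becomes a one-liner (RTE_DIM() from rte_common.h
yields the element count of an array):

```
#define CPFL_NB_XSTATS RTE_DIM(rte_cpfl_stats_strings)
```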

> >  static int
> >  cpfl_dev_link_update(struct rte_eth_dev *dev,
> >  		     __rte_unused int wait_to_complete)
> > @@ -313,6 +337,59 @@ cpfl_dev_stats_reset(struct rte_eth_dev *dev)
> >  	return 0;
> >  }
> >
> > +static int cpfl_dev_xstats_reset(struct rte_eth_dev *dev) {
> > +	cpfl_dev_stats_reset(dev);
> > +	return 0;
> > +}
> > +
> > +static int cpfl_dev_xstats_get(struct rte_eth_dev *dev,
> > +			       struct rte_eth_xstat *xstats, unsigned int n) {
> > +	struct idpf_vport *vport =
> > +		(struct idpf_vport *)dev->data->dev_private;
> > +	struct virtchnl2_vport_stats *pstats = NULL;
> > +	unsigned int i;
> > +	int ret;
> > +
> > +	if (n < CPFL_NB_XSTATS)
> > +		return CPFL_NB_XSTATS;
> > +
> > +	if (!xstats)
> > +		return 0;
> > +
> 
> if 'xstats' is NULL, it should return 'CPFL_NB_XSTATS'.


^ permalink raw reply	[flat|nested] 263+ messages in thread

* RE: [PATCH v7 18/21] net/cpfl: add HW statistics
  2023-02-27 21:52             ` Ferruh Yigit
@ 2023-02-28  6:46               ` Liu, Mingxia
  2023-02-28 10:01                 ` Ferruh Yigit
  0 siblings, 1 reply; 263+ messages in thread
From: Liu, Mingxia @ 2023-02-28  6:46 UTC (permalink / raw)
  To: Ferruh Yigit, dev, Xing,  Beilei, Zhang, Yuying



> -----Original Message-----
> From: Ferruh Yigit <ferruh.yigit@amd.com>
> Sent: Tuesday, February 28, 2023 5:52 AM
> To: Liu, Mingxia <mingxia.liu@intel.com>; dev@dpdk.org; Xing, Beilei
> <beilei.xing@intel.com>; Zhang, Yuying <yuying.zhang@intel.com>
> Subject: Re: [PATCH v7 18/21] net/cpfl: add HW statistics
> 
> On 2/16/2023 12:30 AM, Mingxia Liu wrote:
> > This patch add hardware packets/bytes statistics.
> >
> > Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
> 
> <...>
> 
> > +static int
> > +cpfl_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats
> > +*stats) {
> > +	struct idpf_vport *vport =
> > +		(struct idpf_vport *)dev->data->dev_private;
> > +	struct virtchnl2_vport_stats *pstats = NULL;
> > +	int ret;
> > +
> > +	ret = idpf_vc_stats_query(vport, &pstats);
> > +	if (ret == 0) {
> > +		uint8_t crc_stats_len = (dev->data->dev_conf.rxmode.offloads &
> > +					 RTE_ETH_RX_OFFLOAD_KEEP_CRC) ? 0 :
> > +					 RTE_ETHER_CRC_LEN;
> > +
> > +		idpf_vport_stats_update(&vport->eth_stats_offset, pstats);
> > +		stats->ipackets = pstats->rx_unicast + pstats->rx_multicast +
> > +				pstats->rx_broadcast - pstats->rx_discards;
> > +		stats->opackets = pstats->tx_broadcast + pstats->tx_multicast +
> > +						pstats->tx_unicast;
> > +		stats->imissed = pstats->rx_discards;
> > +		stats->oerrors = pstats->tx_errors + pstats->tx_discards;
> > +		stats->ibytes = pstats->rx_bytes;
> > +		stats->ibytes -= stats->ipackets * crc_stats_len;
> > +		stats->obytes = pstats->tx_bytes;
> > +
> > +		dev->data->rx_mbuf_alloc_failed =
> > +cpfl_get_mbuf_alloc_failed_stats(dev);
> 
> 'dev->data->rx_mbuf_alloc_failed' is also used by telemetry, updating here
> only in stats_get() will make it wrong for telemetry.
> 
> Is it possible to update 'dev->data->rx_mbuf_alloc_failed' whenever alloc
> failed? (alongside 'rxq->rx_stats.mbuf_alloc_failed').
[Liu, Mingxia] As far as I know, rte_eth_dev_data is not a public structure exposed to users; they need to access it through the rte_ethdev APIs.
We have already moved the Rx and Tx burst functions into common/idpf, which has no dependency on the ethdev library. If I update "dev->data->rx_mbuf_alloc_failed"
whenever mbuf allocation fails, it will break the design of the common/idpf interface towards net/cpfl and net/idpf.

And I didn't find any reference to 'dev->data->rx_mbuf_alloc_failed' in the lib code.
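
For reference, the helper only aggregates the per-queue counters at
stats_get() time, roughly like this (a sketch; the field names follow
the patch, the atomic load is an assumption):

```
static uint64_t
cpfl_get_mbuf_alloc_failed_stats(struct rte_eth_dev *dev)
{
	struct idpf_rx_queue *rxq;
	uint64_t failed = 0;
	uint16_t i;

	/* sum the counter kept per queue by the common/idpf Rx path */
	for (i = 0; i < dev->data->nb_rx_queues; i++) {
		rxq = dev->data->rx_queues[i];
		if (rxq == NULL)
			continue;
		failed += __atomic_load_n(&rxq->rx_stats.mbuf_alloc_failed,
					  __ATOMIC_RELAXED);
	}

	return failed;
}
```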


^ permalink raw reply	[flat|nested] 263+ messages in thread

* Re: [PATCH v7 01/21] net/cpfl: support device initialization
  2023-02-28  2:06                 ` Liu, Mingxia
@ 2023-02-28  9:53                   ` Ferruh Yigit
  0 siblings, 0 replies; 263+ messages in thread
From: Ferruh Yigit @ 2023-02-28  9:53 UTC (permalink / raw)
  To: Liu, Mingxia, Thomas Monjalon, Andrew Rybchenko,
	Jerin Jacob Kollanukkaran, Zhang, Qi Z, David Marchand
  Cc: dev, Zhang, Yuying, Xing, Beilei, techboard

On 2/28/2023 2:06 AM, Liu, Mingxia wrote:
> Thank you all!
> It's a good question, but as this is an experimental version and rc2 is approaching,
> we won't optimize this function now; we will do it at the time of the official product release.
> 

Hi Mingxia,

The discussion is not specific to the driver, it is for ethdev API and
for long term.


>> -----Original Message-----
>> From: Thomas Monjalon <thomas@monjalon.net>
>> Sent: Monday, February 27, 2023 11:46 PM
>> To: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>; Jerin Jacob
>> Kollanukkaran <jerinj@marvell.com>; Zhang, Qi Z <qi.z.zhang@intel.com>;
>> David Marchand <david.marchand@redhat.com>; Ferruh Yigit
>> <ferruh.yigit@amd.com>
>> Cc: dev@dpdk.org; Liu, Mingxia <mingxia.liu@intel.com>; Zhang, Yuying
>> <yuying.zhang@intel.com>; Xing, Beilei <beilei.xing@intel.com>;
>> techboard@dpdk.org
>> Subject: Re: [PATCH v7 01/21] net/cpfl: support device initialization
>>
>> 27/02/2023 14:46, Ferruh Yigit:
>>> On 2/16/2023 12:29 AM, Mingxia Liu wrote:
>>>> +static int
>>>> +cpfl_dev_configure(struct rte_eth_dev *dev) {
>>>> +	struct rte_eth_conf *conf = &dev->data->dev_conf;
>>>> +
>>>> +	if (conf->link_speeds & RTE_ETH_LINK_SPEED_FIXED) {
>>>> +		PMD_INIT_LOG(ERR, "Setting link speed is not supported");
>>>> +		return -ENOTSUP;
>>>> +	}
>>>> +
>>>> +	if (conf->txmode.mq_mode != RTE_ETH_MQ_TX_NONE) {
>>>> +		PMD_INIT_LOG(ERR, "Multi-queue TX mode %d is not
>> supported",
>>>> +			     conf->txmode.mq_mode);
>>>> +		return -ENOTSUP;
>>>> +	}
>>>> +
>>>> +	if (conf->lpbk_mode != 0) {
>>>> +		PMD_INIT_LOG(ERR, "Loopback operation mode %d is not
>> supported",
>>>> +			     conf->lpbk_mode);
>>>> +		return -ENOTSUP;
>>>> +	}
>>>> +
>>>> +	if (conf->dcb_capability_en != 0) {
>>>> +		PMD_INIT_LOG(ERR, "Priority Flow Control(PFC) if not
>> supported");
>>>> +		return -ENOTSUP;
>>>> +	}
>>>> +
>>>> +	if (conf->intr_conf.lsc != 0) {
>>>> +		PMD_INIT_LOG(ERR, "LSC interrupt is not supported");
>>>> +		return -ENOTSUP;
>>>> +	}
>>>> +
>>>> +	if (conf->intr_conf.rxq != 0) {
>>>> +		PMD_INIT_LOG(ERR, "RXQ interrupt is not supported");
>>>> +		return -ENOTSUP;
>>>> +	}
>>>> +
>>>> +	if (conf->intr_conf.rmv != 0) {
>>>> +		PMD_INIT_LOG(ERR, "RMV interrupt is not supported");
>>>> +		return -ENOTSUP;
>>>> +	}
>>>> +
>>>> +	return 0;
>>>
>>> This is '.dev_configure()' dev ops of a driver, there is nothing wrong
>>> with the function but it is a good example to highlight a point.
>>>
>>>
>>> 'rte_eth_dev_configure()' can fail from various reasons, what can an
>>> application do in this case?
>>> It is not clear why configuration failed, there is no way to figure
>>> out failed config option dynamically.
>>
>> There are some capabilities to read before calling "configure".
>>
>>> Application developer can read the log and find out what caused the
>>> failure, but what can do next? Put a conditional check for the
>>> particular device, assuming application supports multiple devices,
>>> before configuration?
>>
>> Which failures cannot be guessed with capability flags?
>>
>>> I think we need better error value, to help application detect what
>>> went wrong and adapt dynamically, perhaps a bitmask of errors one per
>>> each config option, what do you think?
>>
>> I am not sure we can change such an old API.
>>
>>> And I think this is another reason why we should not make a single API
>>> too overloaded and complex.
>>
>> Right, and I would support a work to have some of those "configure"
>> features available as small functions.
>>
> 


^ permalink raw reply	[flat|nested] 263+ messages in thread

* Re: [PATCH v7 18/21] net/cpfl: add HW statistics
  2023-02-28  6:46               ` Liu, Mingxia
@ 2023-02-28 10:01                 ` Ferruh Yigit
  2023-02-28 11:47                   ` Liu, Mingxia
  0 siblings, 1 reply; 263+ messages in thread
From: Ferruh Yigit @ 2023-02-28 10:01 UTC (permalink / raw)
  To: Liu, Mingxia, dev, Xing, Beilei, Zhang, Yuying

On 2/28/2023 6:46 AM, Liu, Mingxia wrote:
> 
> 
>> -----Original Message-----
>> From: Ferruh Yigit <ferruh.yigit@amd.com>
>> Sent: Tuesday, February 28, 2023 5:52 AM
>> To: Liu, Mingxia <mingxia.liu@intel.com>; dev@dpdk.org; Xing, Beilei
>> <beilei.xing@intel.com>; Zhang, Yuying <yuying.zhang@intel.com>
>> Subject: Re: [PATCH v7 18/21] net/cpfl: add HW statistics
>>
>> On 2/16/2023 12:30 AM, Mingxia Liu wrote:
>>> This patch add hardware packets/bytes statistics.
>>>
>>> Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
>>
>> <...>
>>
>>> +static int
>>> +cpfl_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats
>>> +*stats) {
>>> +	struct idpf_vport *vport =
>>> +		(struct idpf_vport *)dev->data->dev_private;
>>> +	struct virtchnl2_vport_stats *pstats = NULL;
>>> +	int ret;
>>> +
>>> +	ret = idpf_vc_stats_query(vport, &pstats);
>>> +	if (ret == 0) {
>>> +		uint8_t crc_stats_len = (dev->data->dev_conf.rxmode.offloads &
>>> +					 RTE_ETH_RX_OFFLOAD_KEEP_CRC) ? 0 :
>>> +					 RTE_ETHER_CRC_LEN;
>>> +
>>> +		idpf_vport_stats_update(&vport->eth_stats_offset, pstats);
>>> +		stats->ipackets = pstats->rx_unicast + pstats->rx_multicast +
>>> +				pstats->rx_broadcast - pstats->rx_discards;
>>> +		stats->opackets = pstats->tx_broadcast + pstats->tx_multicast +
>>> +						pstats->tx_unicast;
>>> +		stats->imissed = pstats->rx_discards;
>>> +		stats->oerrors = pstats->tx_errors + pstats->tx_discards;
>>> +		stats->ibytes = pstats->rx_bytes;
>>> +		stats->ibytes -= stats->ipackets * crc_stats_len;
>>> +		stats->obytes = pstats->tx_bytes;
>>> +
>>> +		dev->data->rx_mbuf_alloc_failed =
>>> +cpfl_get_mbuf_alloc_failed_stats(dev);
>>
>> 'dev->data->rx_mbuf_alloc_failed' is also used by telemetry, updating here
>> only in stats_get() will make it wrong for telemetry.
>>
>> Is it possible to update 'dev->data->rx_mbuf_alloc_failed' whenever alloc
>> failed? (alongside 'rxq->rx_stats.mbuf_alloc_failed').
> [Liu, Mingxia] As I know, rte_eth_dev_data is not a public structure provided to user, user need to access through rte_ethdev APIs.
> Because we already put rx and tx burst func to common/idpf which has no dependcy with ethdev lib. If I update "dev->data->rx_mbuf_alloc_failed" 
> when allocate mbuf fails, it will break the design of our common/idpf interface to net/cpfl or net.idpf.
> 
> And I didn't find any reference of 'dev->data->rx_mbuf_alloc_failed' in lib code.
> 

Please check 'eth_dev_handle_port_info()' function.
As I said this is used by telemetry, not directly exposed to the user.

I got the design concern, perhaps you can put a brief limitation to the
driver documentation.



^ permalink raw reply	[flat|nested] 263+ messages in thread

* Re: [PATCH v7 03/21] net/cpfl: add Rx queue setup
  2023-02-28  3:03               ` Liu, Mingxia
@ 2023-02-28 10:02                 ` Ferruh Yigit
  0 siblings, 0 replies; 263+ messages in thread
From: Ferruh Yigit @ 2023-02-28 10:02 UTC (permalink / raw)
  To: Liu, Mingxia, dev, Xing, Beilei, Zhang, Yuying

On 2/28/2023 3:03 AM, Liu, Mingxia wrote:
> Thanks for your comments, we will use enums to differentiate queues.
> 
> As for 'bufq1' & 'bufq2', they are members of struct idpf_rx_queue, defined in the idpf common module,
> and they involve idpf PMD code, so it's better to improve them in a later fix patch.
> 

OK

>> -----Original Message-----
>> From: Ferruh Yigit <ferruh.yigit@amd.com>
>> Sent: Tuesday, February 28, 2023 5:46 AM
>> To: Liu, Mingxia <mingxia.liu@intel.com>; dev@dpdk.org; Xing, Beilei
>> <beilei.xing@intel.com>; Zhang, Yuying <yuying.zhang@intel.com>
>> Subject: Re: [PATCH v7 03/21] net/cpfl: add Rx queue setup
>>
>> On 2/16/2023 12:29 AM, Mingxia Liu wrote:
>>> Add support for rx_queue_setup ops.
>>>
>>> Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
>>
>> <...>
>>
>>> +
>>> +	if (bufq_id == 1) {
>>> +		rxq->bufq1 = bufq;
>>> +	} else if (bufq_id == 2) {
>>> +		rxq->bufq2 = bufq;
>>
>> For readability better to use enums to diffrentiate queues, instead of using
>> 1 and 2 as paramter to function.
>>
>> Also I wonder if queue variable names can be improved too, from 'bufq1'
>> & 'bufq2' to something more descriptive.


^ permalink raw reply	[flat|nested] 263+ messages in thread

* RE: [PATCH v7 01/21] net/cpfl: support device initialization
  2023-02-27 21:43             ` Ferruh Yigit
@ 2023-02-28 11:12               ` Liu, Mingxia
  2023-02-28 11:34                 ` Ferruh Yigit
  0 siblings, 1 reply; 263+ messages in thread
From: Liu, Mingxia @ 2023-02-28 11:12 UTC (permalink / raw)
  To: Ferruh Yigit, Xing, Beilei, Zhang, Yuying; +Cc: dev



> -----Original Message-----
> From: Ferruh Yigit <ferruh.yigit@amd.com>
> Sent: Tuesday, February 28, 2023 5:44 AM
> To: Liu, Mingxia <mingxia.liu@intel.com>; Xing, Beilei <beilei.xing@intel.com>;
> Zhang, Yuying <yuying.zhang@intel.com>
> Cc: dev@dpdk.org; Yigit, Ferruh <ferruh.yigit@amd.com>
> Subject: Re: [PATCH v7 01/21] net/cpfl: support device initialization
> 
> On 2/16/2023 12:29 AM, Mingxia Liu wrote:
> > Support device init and add the following dev ops:
> >  - dev_configure
> >  - dev_close
> >  - dev_infos_get
> >  - link_update
> >  - dev_supported_ptypes_get
> >
> > Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
> > ---
> >  MAINTAINERS                            |   8 +
> >  doc/guides/nics/cpfl.rst               |  66 +++
> 
> Need to add file to toctree (doc/guides/nics/index.rst) to make it visible.
> 
[Liu, Mingxia] ok, got it! Thanks!

> >  doc/guides/nics/features/cpfl.ini      |  12 +
> >  doc/guides/rel_notes/release_23_03.rst |   6 +
> >  drivers/net/cpfl/cpfl_ethdev.c         | 768 +++++++++++++++++++++++++
> >  drivers/net/cpfl/cpfl_ethdev.h         |  78 +++
> >  drivers/net/cpfl/cpfl_logs.h           |  32 ++
> >  drivers/net/cpfl/cpfl_rxtx.c           | 244 ++++++++
> >  drivers/net/cpfl/cpfl_rxtx.h           |  25 +
> 
> cpfl_rxtx.[ch] are not used at all in this patch; 'cpfl_tx_queue_setup()' is added in
> this patch, and the next patch (2/21) looks like a better place for it.
> 
[Liu, Mingxia] ok, will update it, thanks!

> >  drivers/net/cpfl/meson.build           |  14 +
> >  drivers/net/meson.build                |   1 +
> >  11 files changed, 1254 insertions(+)
> >  create mode 100644 doc/guides/nics/cpfl.rst  create mode 100644
> > doc/guides/nics/features/cpfl.ini  create mode 100644
> > drivers/net/cpfl/cpfl_ethdev.c  create mode 100644
> > drivers/net/cpfl/cpfl_ethdev.h  create mode 100644
> > drivers/net/cpfl/cpfl_logs.h  create mode 100644
> > drivers/net/cpfl/cpfl_rxtx.c  create mode 100644
> > drivers/net/cpfl/cpfl_rxtx.h  create mode 100644
> > drivers/net/cpfl/meson.build
> >
> > diff --git a/MAINTAINERS b/MAINTAINERS index 9a0f416d2e..af80edaf6e
> > 100644
> > --- a/MAINTAINERS
> > +++ b/MAINTAINERS
> > @@ -783,6 +783,14 @@ F: drivers/common/idpf/
> >  F: doc/guides/nics/idpf.rst
> >  F: doc/guides/nics/features/idpf.ini
> >
> > +Intel cpfl
> > +M: Yuying Zhang <yuying.zhang@intel.com>
> > +M: Beilei Xing <beilei.xing@intel.com>
> > +T: git://dpdk.org/next/dpdk-next-net-intel
> > +F: drivers/net/cpfl/
> > +F: doc/guides/nics/cpfl.rst
> > +F: doc/guides/nics/features/cpfl.ini
> > +
> 
> Documentation mentions driver is experimental, can you please highlight this
> in the maintainers file too, as:
> Intel cpfl - EXPERIMENTAL
> 
[Liu, Mingxia] ok, will update it, thanks!

> >  Intel igc
> >  M: Junfeng Guo <junfeng.guo@intel.com>
> >  M: Simei Su <simei.su@intel.com>
> > diff --git a/doc/guides/nics/cpfl.rst b/doc/guides/nics/cpfl.rst new
> > file mode 100644 index 0000000000..7c5aff0789
> > --- /dev/null
> > +++ b/doc/guides/nics/cpfl.rst
> > @@ -0,0 +1,66 @@
> > +.. SPDX-License-Identifier: BSD-3-Clause
> > +   Copyright(c) 2022 Intel Corporation.
> > +
> 
> s/2022/2023/
> 
> > +.. include:: <isonum.txt>
> > +
> > +CPFL Poll Mode Driver
> > +=====================
> > +
> > +The [*EXPERIMENTAL*] cpfl PMD (**librte_net_cpfl**) provides poll
> > +mode driver support for Intel\ |reg| Infrastructure Processing Unit (Intel\
> |reg| IPU) E2100.
> > +
> 
> Can you please provide a link for the mentioned device?
> 
> So, interested users can evaluate, learn more about the mentioned hardware.
> 
[Liu, Mingxia] ok, will add it, thanks!
> 
> > +
> > +Linux Prerequisites
> > +-------------------
> > +
> > +Follow the DPDK :doc:`../linux_gsg/index` to setup the basic DPDK
> environment.
> > +
> > +To get better performance on Intel platforms, please follow the
> > +:doc:`../linux_gsg/nic_perf_intel_platform`.
> > +
> > +
> > +Pre-Installation Configuration
> > +------------------------------
> > +
> > +Runtime Config Options
> > +~~~~~~~~~~~~~~~~~~~~~~
> 
> Is "Runtime Config Options", a sub section of "Pre-Installation Configuration"?
> 
[Liu, Mingxia] Yes, this follows the ice and i40e .rst files.

> > +
> > +- ``vport`` (default ``0``)
> > +
> > +  The PMD supports creation of multiple vports for one PCI device,
> > + each vport corresponds to a single ethdev.
> > +  The user can specify the vports with specific ID to be created, for
> example::
> > +
> > +    -a ca:00.0,vport=[0,2,3]
> > +
> > +  Then the PMD will create 3 vports (ethdevs) for device ``ca:00.0``.
> > +
> 
> Why need specific IDs?
> 
[beilei] We're using this ID to map the physical port to a vport as the default receive and transmit interface.

> other option is just provide number of requested vports and they get
> sequential ids, but since vport ids are got from user instead there must be
> some significance of them, can you please briefly document why ids matter.
> 
> > +  If the parameter is not provided, the vport 0 will be created by default.
> > +
> > +- ``rx_single`` (default ``0``)
> > +
> > +  There are two queue modes supported by Intel\ |reg| IPU Ethernet
> > + E2100 Series,  single queue mode and split queue mode for Rx queue.
> 
> Can you please describe in the documentation what 'split queue' and 'single
> queue' are and what is the difference between them?
>
[beilei] Sure, will add the description.

> <...>
> 
> > index 07914170a7..b0b23d1a44 100644
> > --- a/doc/guides/rel_notes/release_23_03.rst
> > +++ b/doc/guides/rel_notes/release_23_03.rst
> > @@ -88,6 +88,12 @@ New Features
> >    * Added timesync API support.
> >    * Added packet pacing(launch time offloading) support.
> >
> > +* **Added Intel cpfl driver.**
> > +
> > +  Added the new ``cpfl`` net driver
> > +  for Intel\ |reg| Infrastructure Processing Unit (Intel\ |reg| IPU) E2100.
> > +  See the :doc:`../nics/cpfl` NIC guide for more details on this new driver.
> > +
> 
> "New Features" section is grouped, an that grouping is documented in the
> section comment.
> 
> Can you please move the update to the proper location in the section.
> 
[Liu, Mingxia] ok, thanks, will update it.
> <...>
> 
> > +static int
> > +cpfl_dev_link_update(struct rte_eth_dev *dev,
> > +		     __rte_unused int wait_to_complete) {
> > +	struct idpf_vport *vport = dev->data->dev_private;
> > +	struct rte_eth_link new_link;
> > +
> > +	memset(&new_link, 0, sizeof(new_link));
> > +
> > +	switch (vport->link_speed) {
> > +	case RTE_ETH_SPEED_NUM_10M:
> > +		new_link.link_speed = RTE_ETH_SPEED_NUM_10M;
> > +		break;
> > +	case RTE_ETH_SPEED_NUM_100M:
> > +		new_link.link_speed = RTE_ETH_SPEED_NUM_100M;
> > +		break;
> > +	case RTE_ETH_SPEED_NUM_1G:
> > +		new_link.link_speed = RTE_ETH_SPEED_NUM_1G;
> > +		break;
> > +	case RTE_ETH_SPEED_NUM_10G:
> > +		new_link.link_speed = RTE_ETH_SPEED_NUM_10G;
> > +		break;
> > +	case RTE_ETH_SPEED_NUM_20G:
> > +		new_link.link_speed = RTE_ETH_SPEED_NUM_20G;
> > +		break;
> > +	case RTE_ETH_SPEED_NUM_25G:
> > +		new_link.link_speed = RTE_ETH_SPEED_NUM_25G;
> > +		break;
> > +	case RTE_ETH_SPEED_NUM_40G:
> > +		new_link.link_speed = RTE_ETH_SPEED_NUM_40G;
> > +		break;
> > +	case RTE_ETH_SPEED_NUM_50G:
> > +		new_link.link_speed = RTE_ETH_SPEED_NUM_50G;
> > +		break;
> > +	case RTE_ETH_SPEED_NUM_100G:
> > +		new_link.link_speed = RTE_ETH_SPEED_NUM_100G;
> > +		break;
> > +	case RTE_ETH_SPEED_NUM_200G:
> > +		new_link.link_speed = RTE_ETH_SPEED_NUM_200G;
> > +		break;
> 
> What about:
> ```
> switch (vport->link_speed) {
> case RTE_ETH_SPEED_NUM_10M:
> case RTE_ETH_SPEED_NUM_100M:
> ...
> case RTE_ETH_SPEED_NUM_200G:
> 	new_link.link_speed = vport->link_speed;
> 	break;
> default:
> 	new_link.link_speed = RTE_ETH_SPEED_NUM_UNKNOWN;
> ```
> 
> OR
> 
> ```
> for (i = 0; i < RTE_DIM(supported_speeds); i++) {
> 	if (vport->link_speed == supported_speeds[i]) {
> 		new_link.link_speed = vport->link_speed;
> 		break;
> 	}
> }
> 
> if (i == RTE_DIM(supported_speeds))
> 	new_link.link_speed = RTE_ETH_SPEED_NUM_UNKNOWN;
> ```
> 
> > +	default:
> > +		new_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
> 
> I think this should be:
> 
> if (link_up)
> 	new_link.link_speed = RTE_ETH_SPEED_NUM_UNKNOWN;
> else
> 	new_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
>
[Liu, Mingxia] Thanks, good idea, will update. 
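
Combined, the update could look like this (a sketch; 'supported_speeds'
would be a new table, and we assume the vport tracks link_up):

```
	static const uint32_t supported_speeds[] = {
		RTE_ETH_SPEED_NUM_10M,  RTE_ETH_SPEED_NUM_100M,
		RTE_ETH_SPEED_NUM_1G,   RTE_ETH_SPEED_NUM_10G,
		RTE_ETH_SPEED_NUM_20G,  RTE_ETH_SPEED_NUM_25G,
		RTE_ETH_SPEED_NUM_40G,  RTE_ETH_SPEED_NUM_50G,
		RTE_ETH_SPEED_NUM_100G, RTE_ETH_SPEED_NUM_200G,
	};
	uint32_t i;

	/* default first, then accept any known speed verbatim */
	new_link.link_speed = vport->link_up ?
		RTE_ETH_SPEED_NUM_UNKNOWN : RTE_ETH_SPEED_NUM_NONE;
	for (i = 0; i < RTE_DIM(supported_speeds); i++) {
		if (vport->link_speed == supported_speeds[i]) {
			new_link.link_speed = vport->link_speed;
			break;
		}
	}
```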

> <...>
> 
> > +static int
> > +insert_value(struct cpfl_devargs *devargs, uint16_t id) {
> > +	uint16_t i;
> > +
> > +	/* ignore duplicate */
> > +	for (i = 0; i < devargs->req_vport_nb; i++) {
> > +		if (devargs->req_vports[i] == id)
> > +			return 0;
> > +	}
> > +
> > +	if (devargs->req_vport_nb >= RTE_DIM(devargs->req_vports)) {
> > +		PMD_INIT_LOG(ERR, "Total vport number can't be > %d",
> > +			     CPFL_MAX_VPORT_NUM);
> 
> Check is using 'RTE_DIM(devargs->req_vports)' and log is using
> 'CPFL_MAX_VPORT_NUM', they are same value but better to stick to one of
> them.
> 
[Liu, Mingxia] Thanks, good idea, will update.
> <...>
> 
> > +static int
> > +parse_vport(const char *key, const char *value, void *args) {
> > +	struct cpfl_devargs *devargs = args;
> > +	const char *pos = value;
> > +
> > +	devargs->req_vport_nb = 0;
> > +
> 
> if "vport" can be provided multiple times, above assignment is wrong, like:
> "vport=1,vport=3-5"
> 
[beilei] We won't support this case. Will add a check in idpf_parse_devargs.
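
For example (a sketch of the planned check; rte_kvargs keeps every
occurrence of a key, so the count can be tested directly):

```
	/* reject "vport=...,vport=..." instead of silently resetting */
	if (rte_kvargs_count(kvlist, CPFL_VPORT) > 1) {
		PMD_INIT_LOG(ERR, "devarg vport supplied more than once");
		ret = -EINVAL;
		goto bail;
	}
```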

> <...>
> 
> > +static int
> > +cpfl_parse_devargs(struct rte_pci_device *pci_dev, struct
> cpfl_adapter_ext *adapter,
> > +		   struct cpfl_devargs *cpfl_args)
> > +{
> > +	struct rte_devargs *devargs = pci_dev->device.devargs;
> > +	struct rte_kvargs *kvlist;
> > +	int i, ret;
> > +
> > +	cpfl_args->req_vport_nb = 0;
> > +
> > +	if (devargs == NULL)
> > +		return 0;
> > +
> > +	kvlist = rte_kvargs_parse(devargs->args, cpfl_valid_args);
> > +	if (kvlist == NULL) {
> > +		PMD_INIT_LOG(ERR, "invalid kvargs key");
> > +		return -EINVAL;
> > +	}
> > +
> > +	/* check parsed devargs */
> > +	if (adapter->cur_vport_nb + cpfl_args->req_vport_nb >
> > +	    CPFL_MAX_VPORT_NUM) {
> 
> At this stage 'cpfl_args->req_vport_nb' is 0 since CPFL_VPORT is not parsed
> yet, is the intention to do this check after 'rte_kvargs_processs()'?
> 
[beilei] Yes, thanks for the catch, will fix it in next version.
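
That is, rte_kvargs_process() will run first so that 'req_vport_nb' is
populated before the range check, roughly:

```
	ret = rte_kvargs_process(kvlist, CPFL_VPORT, &parse_vport, cpfl_args);
	if (ret != 0)
		goto bail;

	/* now req_vport_nb reflects the parsed devargs */
	if (adapter->cur_vport_nb + cpfl_args->req_vport_nb >
	    CPFL_MAX_VPORT_NUM) {
		PMD_INIT_LOG(ERR, "Total vport number can't be > %d",
			     CPFL_MAX_VPORT_NUM);
		ret = -EINVAL;
		goto bail;
	}
```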

> > +		PMD_INIT_LOG(ERR, "Total vport number can't be > %d",
> > +			     CPFL_MAX_VPORT_NUM);
> > +		ret = -EINVAL;
> > +		goto bail;
> > +	}
> > +
> > +	for (i = 0; i < cpfl_args->req_vport_nb; i++) {
> > +		if (adapter->cur_vports & RTE_BIT32(cpfl_args->req_vports[i])) {
> > +			PMD_INIT_LOG(ERR, "Vport %d has been created",
> > +				     cpfl_args->req_vports[i]);
> 
> This is just argument parsing, nothing created yet, I suggest updating log
> accordingly.
> 
[beilei] OK, will update the log in the next version.

> > +			ret = -EINVAL;
> > +			goto bail;
> > +		}
> > +	}
> 
> same here, both for 'cpfl_args->req_vport_nb' & 'cpfl_args->req_vports[]',
> they are not updated yet.
>
[beilei] Yes, thanks for the catch, will fix it in next version.

> <...>
> 
> > +static void
> > +cpfl_handle_virtchnl_msg(struct cpfl_adapter_ext *adapter_ex) {
> > +	struct idpf_adapter *adapter = &adapter_ex->base;
> 
> Everywhere else, 'struct cpfl_adapter_ext' type variable name is 'adapter',
> here it is 'adapter_ex' and 'struct idpf_adapter' type is 'adapter'.
> 
> As far as I understand 'struct cpfl_adapter_ext' is something like "extended
> adapter" and extended version of 'struct idpf_adapter', so in the context of
> this driver what do you think to refer:
> 'struct cpfl_adapter_ext' as 'adapter'
> 'struct idpf_adapter'     as 'base' (or 'adapter_base'), consistently.
> 
[Liu, Mingxia] ok, got it, I'll update them per your comments.

> <...>
> 
> > +static const struct eth_dev_ops cpfl_eth_dev_ops = {
> > +	.dev_configure			= cpfl_dev_configure,
> > +	.dev_close			= cpfl_dev_close,
> > +	.dev_infos_get			= cpfl_dev_info_get,
> > +	.link_update			= cpfl_dev_link_update,
> > +	.dev_supported_ptypes_get	= cpfl_dev_supported_ptypes_get,
> > +};
> 
> Can you please move the block just after 'cpfl_dev_close()', to group dev_ops
> related code together.
> 
[Liu, Mingxia] ok, thanks!

> <...>
> 
> > +
> > +static int
> > +cpfl_dev_vport_init(struct rte_eth_dev *dev, void *init_params) {
> > +	struct idpf_vport *vport = dev->data->dev_private;
> > +	struct cpfl_vport_param *param = init_params;
> > +	struct cpfl_adapter_ext *adapter = param->adapter;
> > +	/* for sending create vport virtchnl msg prepare */
> > +	struct virtchnl2_create_vport create_vport_info;
> > +	int ret = 0;
> > +
> > +	dev->dev_ops = &cpfl_eth_dev_ops;
> > +	vport->adapter = &adapter->base;
> > +	vport->sw_idx = param->idx;
> > +	vport->devarg_id = param->devarg_id;
> > +	vport->dev = dev;
> > +
> > +	memset(&create_vport_info, 0, sizeof(create_vport_info));
> > +	ret = idpf_vport_info_init(vport, &create_vport_info);
> > +	if (ret != 0) {
> > +		PMD_INIT_LOG(ERR, "Failed to init vport req_info.");
> > +		goto err;
> > +	}
> > +
> > +	ret = idpf_vport_init(vport, &create_vport_info, dev->data);
> > +	if (ret != 0) {
> > +		PMD_INIT_LOG(ERR, "Failed to init vports.");
> > +		goto err;
> > +	}
> > +
> > +	adapter->vports[param->idx] = vport;
> > +	adapter->cur_vports |= RTE_BIT32(param->devarg_id);
> > +	adapter->cur_vport_nb++;
> > +
> > +	dev->data->mac_addrs = rte_zmalloc(NULL, RTE_ETHER_ADDR_LEN,
> 0);
> > +	if (dev->data->mac_addrs == NULL) {
> > +		PMD_INIT_LOG(ERR, "Cannot allocate mac_addr memory.");
> > +		ret = -ENOMEM;
> > +		goto err_mac_addrs;
> > +	}
> > +
> > +	rte_ether_addr_copy((struct rte_ether_addr *)vport-
> >default_mac_addr,
> > +			    &dev->data->mac_addrs[0]);
> > +
> > +	return 0;
> > +
> > +err_mac_addrs:
> > +	adapter->vports[param->idx] = NULL;  /* reset */
> 
> shouldn't we update 'cur_vports' & 'cur_vport_nb' too in this error path.
> 
[beilei] Yes, need to update the two fields.
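
The unwind will become (sketch):

```
err_mac_addrs:
	adapter->vports[param->idx] = NULL;  /* reset */
	adapter->cur_vports &= ~RTE_BIT32(param->devarg_id);
	adapter->cur_vport_nb--;
```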
> <...>
> 
> > +
> > +err:
> > +	if (first_probe) {
> > +		rte_spinlock_lock(&cpfl_adapter_lock);
> > +		TAILQ_REMOVE(&cpfl_adapter_list, adapter, next);
> > +		rte_spinlock_unlock(&cpfl_adapter_lock);
> > +		cpfl_adapter_ext_deinit(adapter);
> > +		rte_free(adapter);
> > +	}
> 
> 
> Why 'first_probe' is needed, it looks like it is for the case when
> probe() called multiple time for same pci_dev, can this happen?
> 
[Liu, Mingxia] It's related to creating vports at runtime in the future, but the current version doesn't support runtime creation.
> <...>
> 
> > +RTE_PMD_REGISTER_PCI(net_cpfl, rte_cpfl_pmd);
> > +RTE_PMD_REGISTER_PCI_TABLE(net_cpfl, pci_id_cpfl_map);
> > +RTE_PMD_REGISTER_KMOD_DEP(net_cpfl, "* igb_uio | vfio-pci");
> > +RTE_PMD_REGISTER_PARAM_STRING(net_cpfl,
> > +			      CPFL_TX_SINGLE_Q "=<0|1> "
> > +			      CPFL_RX_SINGLE_Q "=<0|1> "
> > +			      CPFL_VPORT "=[vport_set0,[vport_set1],...]");
> 
> What about:
> "\[vport0_begin[-vport0_end][,vport1_begin[-vport1_end][,..]\]"
> 
[Liu, Mingxia] Good idea, thanks!

> <...>
> 
> > +
> > +#define CPFL_MAX_VPORT_NUM	8
> > +
> It looks like there is a dynamic max vport number
> (adapter->base.caps.max_vports), and there is the above hardcoded define
> for requested (devargs) vports.
> 
> The dynamic max is received via 'cpfl_adapter_ext_init()' before parsing
> dev_arg, so can it be possible to remove this hardcoded max completely?
> 
> 
[Liu, Mingxia] yes, we'll try.

> > +#define CPFL_INVALID_VPORT_IDX	0xffff
> > +
> > +#define CPFL_MIN_BUF_SIZE	1024
> > +#define CPFL_MAX_FRAME_SIZE	9728
> > +#define CPFL_DEFAULT_MTU	RTE_ETHER_MTU
> > +
> > +#define CPFL_NUM_MACADDR_MAX	64
> 
> The macro is not used, can you please add them when they are used.
> 
[Liu, Mingxia] ok, thanks! Will delete it.

> <...>
> 
> > @@ -0,0 +1,32 @@
> > +/* SPDX-License-Identifier: BSD-3-Clause
> > + * Copyright(c) 2023 Intel Corporation  */
> > +
> > +#ifndef _CPFL_LOGS_H_
> > +#define _CPFL_LOGS_H_
> > +
> > +#include <rte_log.h>
> > +
> > +extern int cpfl_logtype_init;
> > +extern int cpfl_logtype_driver;
> > +
> > +#define PMD_INIT_LOG(level, ...) \
> > +	rte_log(RTE_LOG_ ## level, \
> > +		cpfl_logtype_init, \
> > +		RTE_FMT("%s(): " \
> > +			RTE_FMT_HEAD(__VA_ARGS__,) "\n", \
> > +			__func__, \
> > +			RTE_FMT_TAIL(__VA_ARGS__,)))
> > +
> > +#define PMD_DRV_LOG_RAW(level, ...) \
> > +	rte_log(RTE_LOG_ ## level, \
> > +		cpfl_logtype_driver, \
> > +		RTE_FMT("%s(): " \
> > +			RTE_FMT_HEAD(__VA_ARGS__,) "\n", \
> > +			__func__, \
> > +			RTE_FMT_TAIL(__VA_ARGS__,)))
> > +
> > +#define PMD_DRV_LOG(level, fmt, args...) \
> > +	PMD_DRV_LOG_RAW(level, fmt "\n", ## args)
> > +
> 
> Is 'PMD_DRV_LOG_RAW' required at all, why not define 'PMD_DRV_LOG'
> directly as it is done with 'PMD_INIT_LOG'?
>
[Liu, Mingxia] Good idea, I'll simplify the code.

> Btw, 'PMD_DRV_LOG' seems to add a double '\n', one as part of
> 'fmt', the other in rte_log().
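
[Liu, Mingxia] Will drop the _RAW level and keep a single trailing
newline, something like this (sketch):

```
#define PMD_DRV_LOG(level, ...) \
	rte_log(RTE_LOG_ ## level, \
		cpfl_logtype_driver, \
		RTE_FMT("%s(): " \
			RTE_FMT_HEAD(__VA_ARGS__,) "\n", \
			__func__, \
			RTE_FMT_TAIL(__VA_ARGS__,)))
```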



^ permalink raw reply	[flat|nested] 263+ messages in thread

* RE: [PATCH v7 12/21] net/cpfl: support RSS
  2023-02-27 21:50             ` Ferruh Yigit
@ 2023-02-28 11:28               ` Liu, Mingxia
  2023-02-28 11:34                 ` Ferruh Yigit
  0 siblings, 1 reply; 263+ messages in thread
From: Liu, Mingxia @ 2023-02-28 11:28 UTC (permalink / raw)
  To: Ferruh Yigit, dev, Xing,  Beilei, Zhang, Yuying



> -----Original Message-----
> From: Ferruh Yigit <ferruh.yigit@amd.com>
> Sent: Tuesday, February 28, 2023 5:50 AM
> To: Liu, Mingxia <mingxia.liu@intel.com>; dev@dpdk.org; Xing, Beilei
> <beilei.xing@intel.com>; Zhang, Yuying <yuying.zhang@intel.com>
> Subject: Re: [PATCH v7 12/21] net/cpfl: support RSS
> 
> On 2/16/2023 12:30 AM, Mingxia Liu wrote:
> > Add RSS support.
> >
> > Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
> 
> <...>
> 
> >  static int
> >  cpfl_dev_configure(struct rte_eth_dev *dev)  {
> >  	struct idpf_vport *vport = dev->data->dev_private;
> >  	struct rte_eth_conf *conf = &dev->data->dev_conf;
> > +	struct idpf_adapter *adapter = vport->adapter;
> > +	int ret;
> >
> >  	if (conf->link_speeds & RTE_ETH_LINK_SPEED_FIXED) {
> >  		PMD_INIT_LOG(ERR, "Setting link speed is not supported");
> @@ -205,6
> > +245,17 @@ cpfl_dev_configure(struct rte_eth_dev *dev)
> >  		return -ENOTSUP;
> >  	}
> >
> > +	if (adapter->caps.rss_caps != 0 && dev->data->nb_rx_queues != 0) {
> > +		ret = cpfl_init_rss(vport);
> > +		if (ret != 0) {
> > +			PMD_INIT_LOG(ERR, "Failed to init rss");
> > +			return ret;
> > +		}
> > +	} else {
> > +		PMD_INIT_LOG(ERR, "RSS is not supported.");
> > +		return -1;
> > +	}
> 
> 
> Shouldn't driver take into account 'conf->rxmode->mq_mode' and 'conf-
> >rx_adv_conf->rss_conf->*' ?
[Liu, Mingxia] Thanks for your comments, we will add a check of 'conf->rxmode.mq_mode'.
As for 'conf->rx_adv_conf->rss_conf->*', we check rss_conf->rss_key_len and rss_conf->rss_key in cpfl_dev_configure() -> cpfl_init_rss().

But for now the PMD only supports the default rss_hf according to the package, so conf->rx_adv_conf->rss_conf->rss_hf is ignored.
In the future, it will support configuring rss_hf.
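
The added mq_mode check will be along these lines (a sketch):

```
	if (conf->rxmode.mq_mode != RTE_ETH_MQ_RX_RSS &&
	    conf->rxmode.mq_mode != RTE_ETH_MQ_RX_NONE) {
		PMD_INIT_LOG(ERR, "Multi-queue RX mode %d is not supported",
			     conf->rxmode.mq_mode);
		return -ENOTSUP;
	}
```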




^ permalink raw reply	[flat|nested] 263+ messages in thread

* RE: [PATCH v7 11/21] net/cpfl: support write back based on ITR expire
  2023-02-27 21:49             ` Ferruh Yigit
@ 2023-02-28 11:31               ` Liu, Mingxia
  0 siblings, 0 replies; 263+ messages in thread
From: Liu, Mingxia @ 2023-02-28 11:31 UTC (permalink / raw)
  To: Ferruh Yigit, Xing, Beilei, Zhang, Yuying; +Cc: dev

Ok, a more detailed commit message will be provided in the next version.

> -----Original Message-----
> From: Ferruh Yigit <ferruh.yigit@amd.com>
> Sent: Tuesday, February 28, 2023 5:50 AM
> To: Liu, Mingxia <mingxia.liu@intel.com>; Xing, Beilei <beilei.xing@intel.com>;
> Zhang, Yuying <yuying.zhang@intel.com>
> Cc: dev@dpdk.org
> Subject: Re: [PATCH v7 11/21] net/cpfl: support write back based on ITR expire
> 
> On 2/16/2023 12:30 AM, Mingxia Liu wrote:
> > Enable write back on ITR expire, then packets can be received one by
> >
> 
> Can you please describe this commit more?
> 
> I can see a wrapper to 'idpf_vport_irq_map_config()' is called, what is
> configured related to the IRQ? What ITR stands for, etc...


^ permalink raw reply	[flat|nested] 263+ messages in thread

* Re: [PATCH v7 01/21] net/cpfl: support device initialization
  2023-02-28 11:12               ` Liu, Mingxia
@ 2023-02-28 11:34                 ` Ferruh Yigit
  0 siblings, 0 replies; 263+ messages in thread
From: Ferruh Yigit @ 2023-02-28 11:34 UTC (permalink / raw)
  To: Liu, Mingxia, Xing, Beilei, Zhang, Yuying, Mcnamara, John; +Cc: dev

On 2/28/2023 11:12 AM, Liu, Mingxia wrote:
>>> +
>>> +To get better performance on Intel platforms, please follow the
>>> +:doc:`../linux_gsg/nic_perf_intel_platform`.
>>> +
>>> +
>>> +Pre-Installation Configuration
>>> +------------------------------
>>> +
>>> +Runtime Config Options
>>> +~~~~~~~~~~~~~~~~~~~~~~
>> Is "Runtime Config Options", a sub section of "Pre-Installation Configuration"?
>>
> [Liu, Mingxia] Yes, refer to ice and i40e .rst.
> 

You are right it has been used in a few other drivers too, but what
exactly "Pre-Installation Configuration" means?

I think it is historical, remaining from times that device options was
compile time options and needs to be configured before build.

But these are dynamic runtime configurations, and I think shouldn't be
under "Pre-Installation Configuration", instead can have its own section.


@John, what do you think, a few existing samples:
- https://doc.dpdk.org/guides/nics/i40e.html#pre-installation-configuration
- https://doc.dpdk.org/guides/nics/idpf.html#pre-installation-configuration

^ permalink raw reply	[flat|nested] 263+ messages in thread

* Re: [PATCH v7 12/21] net/cpfl: support RSS
  2023-02-28 11:28               ` Liu, Mingxia
@ 2023-02-28 11:34                 ` Ferruh Yigit
  0 siblings, 0 replies; 263+ messages in thread
From: Ferruh Yigit @ 2023-02-28 11:34 UTC (permalink / raw)
  To: Liu, Mingxia, dev, Xing, Beilei, Zhang, Yuying

On 2/28/2023 11:28 AM, Liu, Mingxia wrote:
> 
> 
>> -----Original Message-----
>> From: Ferruh Yigit <ferruh.yigit@amd.com>
>> Sent: Tuesday, February 28, 2023 5:50 AM
>> To: Liu, Mingxia <mingxia.liu@intel.com>; dev@dpdk.org; Xing, Beilei
>> <beilei.xing@intel.com>; Zhang, Yuying <yuying.zhang@intel.com>
>> Subject: Re: [PATCH v7 12/21] net/cpfl: support RSS
>>
>> On 2/16/2023 12:30 AM, Mingxia Liu wrote:
>>> Add RSS support.
>>>
>>> Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
>>
>> <...>
>>
>>>  static int
>>>  cpfl_dev_configure(struct rte_eth_dev *dev)  {
>>>  	struct idpf_vport *vport = dev->data->dev_private;
>>>  	struct rte_eth_conf *conf = &dev->data->dev_conf;
>>> +	struct idpf_adapter *adapter = vport->adapter;
>>> +	int ret;
>>>
>>>  	if (conf->link_speeds & RTE_ETH_LINK_SPEED_FIXED) {
>>>  		PMD_INIT_LOG(ERR, "Setting link speed is not supported");
>> @@ -205,6
>>> +245,17 @@ cpfl_dev_configure(struct rte_eth_dev *dev)
>>>  		return -ENOTSUP;
>>>  	}
>>>
>>> +	if (adapter->caps.rss_caps != 0 && dev->data->nb_rx_queues != 0) {
>>> +		ret = cpfl_init_rss(vport);
>>> +		if (ret != 0) {
>>> +			PMD_INIT_LOG(ERR, "Failed to init rss");
>>> +			return ret;
>>> +		}
>>> +	} else {
>>> +		PMD_INIT_LOG(ERR, "RSS is not supported.");
>>> +		return -1;
>>> +	}
>>
>>
>> Shouldn't driver take into account 'conf->rxmode->mq_mode' and 'conf-
>>> rx_adv_conf->rss_conf->*' ?
> [Liu, Mingxia] Thanks for your comments, we will add checking of 'conf->rxmode->mq_mode'.
> As for 'conf- >rx_adv_conf->rss_conf->*', we checked rss_conf->rss_key_len and rss_conf->rss_key in cpfl_dev_configure()-> cpfl_init_rss().
> 
> But for now pmd only support default rss_hf according to packge, so ignore the conf->rx_adv_conf->rss_conf->rss_hf.
> In the future, it will support configuring rss_hf.
> 

ack, thanks.


^ permalink raw reply	[flat|nested] 263+ messages in thread

* RE: [PATCH v7 18/21] net/cpfl: add HW statistics
  2023-02-28 10:01                 ` Ferruh Yigit
@ 2023-02-28 11:47                   ` Liu, Mingxia
  2023-02-28 12:04                     ` Ferruh Yigit
  0 siblings, 1 reply; 263+ messages in thread
From: Liu, Mingxia @ 2023-02-28 11:47 UTC (permalink / raw)
  To: Ferruh Yigit, dev, Xing,  Beilei, Zhang, Yuying; +Cc: Wu, Jingjing

OK, got it.

Our previous design did have flaws.
If we don't want to affect the correctness of telemetry, we have to redesign the idpf common module code,
which means a lot of work, so can we lower the priority of this issue?

Thanks,
BR,
mingxia
> -----Original Message-----
> From: Ferruh Yigit <ferruh.yigit@amd.com>
> Sent: Tuesday, February 28, 2023 6:02 PM
> To: Liu, Mingxia <mingxia.liu@intel.com>; dev@dpdk.org; Xing, Beilei
> <beilei.xing@intel.com>; Zhang, Yuying <yuying.zhang@intel.com>
> Subject: Re: [PATCH v7 18/21] net/cpfl: add HW statistics
> 
> On 2/28/2023 6:46 AM, Liu, Mingxia wrote:
> >
> >
> >> -----Original Message-----
> >> From: Ferruh Yigit <ferruh.yigit@amd.com>
> >> Sent: Tuesday, February 28, 2023 5:52 AM
> >> To: Liu, Mingxia <mingxia.liu@intel.com>; dev@dpdk.org; Xing, Beilei
> >> <beilei.xing@intel.com>; Zhang, Yuying <yuying.zhang@intel.com>
> >> Subject: Re: [PATCH v7 18/21] net/cpfl: add HW statistics
> >>
> >> On 2/16/2023 12:30 AM, Mingxia Liu wrote:
> >>> This patch add hardware packets/bytes statistics.
> >>>
> >>> Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
> >>
> >> <...>
> >>
> >>> +static int
> >>> +cpfl_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats
> >>> +*stats) {
> >>> +	struct idpf_vport *vport =
> >>> +		(struct idpf_vport *)dev->data->dev_private;
> >>> +	struct virtchnl2_vport_stats *pstats = NULL;
> >>> +	int ret;
> >>> +
> >>> +	ret = idpf_vc_stats_query(vport, &pstats);
> >>> +	if (ret == 0) {
> >>> +		uint8_t crc_stats_len = (dev->data->dev_conf.rxmode.offloads &
> >>> +					 RTE_ETH_RX_OFFLOAD_KEEP_CRC) ? 0 :
> >>> +					 RTE_ETHER_CRC_LEN;
> >>> +
> >>> +		idpf_vport_stats_update(&vport->eth_stats_offset, pstats);
> >>> +		stats->ipackets = pstats->rx_unicast + pstats->rx_multicast +
> >>> +				pstats->rx_broadcast - pstats->rx_discards;
> >>> +		stats->opackets = pstats->tx_broadcast + pstats->tx_multicast +
> >>> +						pstats->tx_unicast;
> >>> +		stats->imissed = pstats->rx_discards;
> >>> +		stats->oerrors = pstats->tx_errors + pstats->tx_discards;
> >>> +		stats->ibytes = pstats->rx_bytes;
> >>> +		stats->ibytes -= stats->ipackets * crc_stats_len;
> >>> +		stats->obytes = pstats->tx_bytes;
> >>> +
> >>> +		dev->data->rx_mbuf_alloc_failed =
> >>> +cpfl_get_mbuf_alloc_failed_stats(dev);
> >>
> >> 'dev->data->rx_mbuf_alloc_failed' is also used by telemetry, updating
> >> here only in stats_get() will make it wrong for telemetry.
> >>
> >> Is it possible to update 'dev->data->rx_mbuf_alloc_failed' whenever
> >> alloc failed? (alongside 'rxq->rx_stats.mbuf_alloc_failed').
> > [Liu, Mingxia] As I know, rte_eth_dev_data is not a public structure provided
> to user, user need to access through rte_ethdev APIs.
> > Because we already put rx and tx burst func to common/idpf which has no
> dependcy with ethdev lib. If I update "dev->data->rx_mbuf_alloc_failed"
> > when allocate mbuf fails, it will break the design of our common/idpf
> interface to net/cpfl or net.idpf.
> >
> > And I didn't find any reference of 'dev->data->rx_mbuf_alloc_failed' in lib
> code.
> >
> 
> Please check 'eth_dev_handle_port_info()' function.
> As I said this is used by telemetry, not directly exposed to the user.
> 
> I got the design concern, perhaps you can put a brief limitation to the driver
> documentation.
> 


^ permalink raw reply	[flat|nested] 263+ messages in thread

* Re: [PATCH v7 18/21] net/cpfl: add HW statistics
  2023-02-28 11:47                   ` Liu, Mingxia
@ 2023-02-28 12:04                     ` Ferruh Yigit
  2023-02-28 12:12                       ` Bruce Richardson
  0 siblings, 1 reply; 263+ messages in thread
From: Ferruh Yigit @ 2023-02-28 12:04 UTC (permalink / raw)
  To: Liu, Mingxia, dev, Xing, Beilei, Zhang, Yuying, Bruce Richardson
  Cc: Wu, Jingjing

On 2/28/2023 11:47 AM, Liu, Mingxia wrote:

Comment moved down, please don't top post, it makes very hard to follow
discussion.

>> -----Original Message-----
>> From: Ferruh Yigit <ferruh.yigit@amd.com>
>> Sent: Tuesday, February 28, 2023 6:02 PM
>> To: Liu, Mingxia <mingxia.liu@intel.com>; dev@dpdk.org; Xing, Beilei
>> <beilei.xing@intel.com>; Zhang, Yuying <yuying.zhang@intel.com>
>> Subject: Re: [PATCH v7 18/21] net/cpfl: add HW statistics
>>
>> On 2/28/2023 6:46 AM, Liu, Mingxia wrote:
>>>
>>>
>>>> -----Original Message-----
>>>> From: Ferruh Yigit <ferruh.yigit@amd.com>
>>>> Sent: Tuesday, February 28, 2023 5:52 AM
>>>> To: Liu, Mingxia <mingxia.liu@intel.com>; dev@dpdk.org; Xing, Beilei
>>>> <beilei.xing@intel.com>; Zhang, Yuying <yuying.zhang@intel.com>
>>>> Subject: Re: [PATCH v7 18/21] net/cpfl: add HW statistics
>>>>
>>>> On 2/16/2023 12:30 AM, Mingxia Liu wrote:
>>>>> This patch add hardware packets/bytes statistics.
>>>>>
>>>>> Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
>>>>
>>>> <...>
>>>>
>>>>> +static int
>>>>> +cpfl_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats
>>>>> +*stats) {
>>>>> +	struct idpf_vport *vport =
>>>>> +		(struct idpf_vport *)dev->data->dev_private;
>>>>> +	struct virtchnl2_vport_stats *pstats = NULL;
>>>>> +	int ret;
>>>>> +
>>>>> +	ret = idpf_vc_stats_query(vport, &pstats);
>>>>> +	if (ret == 0) {
>>>>> +		uint8_t crc_stats_len = (dev->data->dev_conf.rxmode.offloads &
>>>>> +					 RTE_ETH_RX_OFFLOAD_KEEP_CRC) ? 0 :
>>>>> +					 RTE_ETHER_CRC_LEN;
>>>>> +
>>>>> +		idpf_vport_stats_update(&vport->eth_stats_offset, pstats);
>>>>> +		stats->ipackets = pstats->rx_unicast + pstats->rx_multicast +
>>>>> +				pstats->rx_broadcast - pstats->rx_discards;
>>>>> +		stats->opackets = pstats->tx_broadcast + pstats->tx_multicast +
>>>>> +						pstats->tx_unicast;
>>>>> +		stats->imissed = pstats->rx_discards;
>>>>> +		stats->oerrors = pstats->tx_errors + pstats->tx_discards;
>>>>> +		stats->ibytes = pstats->rx_bytes;
>>>>> +		stats->ibytes -= stats->ipackets * crc_stats_len;
>>>>> +		stats->obytes = pstats->tx_bytes;
>>>>> +
>>>>> +		dev->data->rx_mbuf_alloc_failed =
>>>>> +cpfl_get_mbuf_alloc_failed_stats(dev);
>>>>
>>>> 'dev->data->rx_mbuf_alloc_failed' is also used by telemetry, updating
>>>> here only in stats_get() will make it wrong for telemetry.
>>>>
>>>> Is it possible to update 'dev->data->rx_mbuf_alloc_failed' whenever
>>>> alloc failed? (alongside 'rxq->rx_stats.mbuf_alloc_failed').
>>> [Liu, Mingxia] As I know, rte_eth_dev_data is not a public structure provided
>> to user, user need to access through rte_ethdev APIs.
>>> Because we already put rx and tx burst func to common/idpf which has no
>> dependcy with ethdev lib. If I update "dev->data->rx_mbuf_alloc_failed"
>>> when allocate mbuf fails, it will break the design of our common/idpf
>> interface to net/cpfl or net.idpf.
>>>
>>> And I didn't find any reference of 'dev->data->rx_mbuf_alloc_failed' in lib
>> code.
>>>
>>
>> Please check 'eth_dev_handle_port_info()' function.
>> As I said this is used by telemetry, not directly exposed to the user.
>>
>> I got the design concern, perhaps you can put a brief limitation to the driver
>> documentation.
>>
> OK, got it.
> 
> As our previous design did have flaws.
> And if we don't want to affect correctness of telemetry, we have to redesign the idpf common module code,
> which means a lot of work to do, so can we lower the priority of this issue?
> 
I don't believe this is urgent, can you put a one-line limitation in the
documentation for now, and fix it later?

And for the fix, updating 'dev->data->rx_mbuf_alloc_failed' wherever
'rxq->rx_stats.mbuf_alloc_failed' is updated is easy, although you may need
to store 'dev->data' in the rxq struct for this, e.g.:
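
```
	/* at Rx queue setup (ethdev layer), keep a back-pointer
	 * ('dev_data' is a hypothetical new field): */
	rxq->dev_data = dev->data;

	/* in the Rx path, at the mbuf allocation failure site: */
	rxq->rx_stats.mbuf_alloc_failed++;
	rxq->dev_data->rx_mbuf_alloc_failed++;
```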

But,
I think it is also fair to question the assumption telemetry has that
'rx_mbuf_alloc_fail' is always available data, and consider moving it to
the 'eth_dev_handle_port_stats()' handler.
+Bruce for comment.



^ permalink raw reply	[flat|nested] 263+ messages in thread

* Re: [PATCH v7 18/21] net/cpfl: add HW statistics
  2023-02-28 12:04                     ` Ferruh Yigit
@ 2023-02-28 12:12                       ` Bruce Richardson
  2023-02-28 12:24                         ` Ferruh Yigit
  0 siblings, 1 reply; 263+ messages in thread
From: Bruce Richardson @ 2023-02-28 12:12 UTC (permalink / raw)
  To: Ferruh Yigit; +Cc: Liu, Mingxia, dev, Xing, Beilei, Zhang, Yuying, Wu, Jingjing

On Tue, Feb 28, 2023 at 12:04:53PM +0000, Ferruh Yigit wrote:
> On 2/28/2023 11:47 AM, Liu, Mingxia wrote:
> 
> Comment moved down, please don't top post, it makes very hard to follow
> discussion.
> 
> >> -----Original Message----- From: Ferruh Yigit <ferruh.yigit@amd.com>
> >> Sent: Tuesday, February 28, 2023 6:02 PM To: Liu, Mingxia
> >> <mingxia.liu@intel.com>; dev@dpdk.org; Xing, Beilei
> >> <beilei.xing@intel.com>; Zhang, Yuying <yuying.zhang@intel.com>
> >> Subject: Re: [PATCH v7 18/21] net/cpfl: add HW statistics
> >>
> >> On 2/28/2023 6:46 AM, Liu, Mingxia wrote:
> >>>
> >>>
> >>>> -----Original Message----- From: Ferruh Yigit <ferruh.yigit@amd.com>
> >>>> Sent: Tuesday, February 28, 2023 5:52 AM To: Liu, Mingxia
> >>>> <mingxia.liu@intel.com>; dev@dpdk.org; Xing, Beilei
> >>>> <beilei.xing@intel.com>; Zhang, Yuying <yuying.zhang@intel.com>
> >>>> Subject: Re: [PATCH v7 18/21] net/cpfl: add HW statistics
> >>>>
> >>>> On 2/16/2023 12:30 AM, Mingxia Liu wrote:
> >>>>> This patch add hardware packets/bytes statistics.
> >>>>>
> >>>>> Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
> >>>>
> >>>> <...>
> >>>>
> >>>>> +static int
> >>>>> +cpfl_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats
> >>>>> +*stats) {
> >>>>> +	struct idpf_vport *vport =
> >>>>> +		(struct idpf_vport *)dev->data->dev_private;
> >>>>> +	struct virtchnl2_vport_stats *pstats = NULL;
> >>>>> +	int ret;
> >>>>> +
> >>>>> +	ret = idpf_vc_stats_query(vport, &pstats);
> >>>>> +	if (ret == 0) {
> >>>>> +		uint8_t crc_stats_len = (dev->data->dev_conf.rxmode.offloads &
> >>>>> +					 RTE_ETH_RX_OFFLOAD_KEEP_CRC) ? 0 :
> >>>>> +					 RTE_ETHER_CRC_LEN;
> >>>>> +
> >>>>> +		idpf_vport_stats_update(&vport->eth_stats_offset, pstats);
> >>>>> +		stats->ipackets = pstats->rx_unicast + pstats->rx_multicast +
> >>>>> +				pstats->rx_broadcast - pstats->rx_discards;
> >>>>> +		stats->opackets = pstats->tx_broadcast + pstats->tx_multicast +
> >>>>> +						pstats->tx_unicast;
> >>>>> +		stats->imissed = pstats->rx_discards;
> >>>>> +		stats->oerrors = pstats->tx_errors + pstats->tx_discards;
> >>>>> +		stats->ibytes = pstats->rx_bytes;
> >>>>> +		stats->ibytes -= stats->ipackets * crc_stats_len;
> >>>>> +		stats->obytes = pstats->tx_bytes;
> >>>>> +
> >>>>> +		dev->data->rx_mbuf_alloc_failed = cpfl_get_mbuf_alloc_failed_stats(dev);
> >>>>
> >>>> 'dev->data->rx_mbuf_alloc_failed' is also used by telemetry,
> >>>> updating here only in stats_get() will make it wrong for telemetry.
> >>>>
> >>>> Is it possible to update 'dev->data->rx_mbuf_alloc_failed' whenever
> >>>> alloc failed? (alongside 'rxq->rx_stats.mbuf_alloc_failed').
> >>> [Liu, Mingxia] As I know, rte_eth_dev_data is not a public structure
> >>> provided
> >> to user, user need to access through rte_ethdev APIs.
> >>> Because we already put rx and tx burst func to common/idpf which has
> >>> no
> >> dependcy with ethdev lib. If I update
> >> "dev->data->rx_mbuf_alloc_failed"
> >>> when allocate mbuf fails, it will break the design of our common/idpf
> >> interface to net/cpfl or net.idpf.
> >>>
> >>> And I didn't find any reference of 'dev->data->rx_mbuf_alloc_failed'
> >>> in lib
> >> code.
> >>>
> >>
> >> Please check 'eth_dev_handle_port_info()' function.  As I said this is
> >> used by telemetry, not directly exposed to the user.
> >>
> >> I got the design concern, perhaps you can put a brief limitation to
> >> the driver documentation.
> >>
> > OK, got it.
> > 
> > As our previous design did have flaws.  And if we don't want to affect
> > correctness of telemetry, we have to redesign the idpf common module
> > code, which means a lot of work to do, so can we lower the priority of
> > this issue?
> > 
> I don't believe this is urgent, can you put a one-line limitation in the
> documentation for now, and fix it later?
> 
> And for the fix, updating 'dev->data->rx_mbuf_alloc_failed' where ever
> 'rxq->rx_stats.mbuf_alloc_failed' updated is easy, although you may need
> to store 'dev->data' in rxq struct for this.
> 
> But, I think it is also fair to question the assumption telemetry has
> that 'rx_mbuf_alloc_fail' is always available data, and consider moving
> it to the 'eth_dev_handle_port_stats()' handler.  +Bruce for comment.
> 

That's not really a telemetry assumption, it's one from the stats
structure. Telemetry just outputs the contents of data reported by ethdev
stats, and rx_nombuf is just one of those fields.

/Bruce

^ permalink raw reply	[flat|nested] 263+ messages in thread

* Re: [PATCH v7 18/21] net/cpfl: add HW statistics
  2023-02-28 12:12                       ` Bruce Richardson
@ 2023-02-28 12:24                         ` Ferruh Yigit
  2023-02-28 12:33                           ` Ferruh Yigit
  0 siblings, 1 reply; 263+ messages in thread
From: Ferruh Yigit @ 2023-02-28 12:24 UTC (permalink / raw)
  To: Bruce Richardson
  Cc: Liu, Mingxia, dev, Xing, Beilei, Zhang, Yuying, Wu, Jingjing

On 2/28/2023 12:12 PM, Bruce Richardson wrote:
> On Tue, Feb 28, 2023 at 12:04:53PM +0000, Ferruh Yigit wrote:
>> On 2/28/2023 11:47 AM, Liu, Mingxia wrote:
>>
>> Comment moved down, please don't top post, it makes very hard to follow
>> discussion.
>>
>>>> -----Original Message----- From: Ferruh Yigit <ferruh.yigit@amd.com>
>>>> Sent: Tuesday, February 28, 2023 6:02 PM To: Liu, Mingxia
>>>> <mingxia.liu@intel.com>; dev@dpdk.org; Xing, Beilei
>>>> <beilei.xing@intel.com>; Zhang, Yuying <yuying.zhang@intel.com>
>>>> Subject: Re: [PATCH v7 18/21] net/cpfl: add HW statistics
>>>>
>>>> On 2/28/2023 6:46 AM, Liu, Mingxia wrote:
>>>>>
>>>>>
>>>>>> -----Original Message----- From: Ferruh Yigit <ferruh.yigit@amd.com>
>>>>>> Sent: Tuesday, February 28, 2023 5:52 AM To: Liu, Mingxia
>>>>>> <mingxia.liu@intel.com>; dev@dpdk.org; Xing, Beilei
>>>>>> <beilei.xing@intel.com>; Zhang, Yuying <yuying.zhang@intel.com>
>>>>>> Subject: Re: [PATCH v7 18/21] net/cpfl: add HW statistics
>>>>>>
>>>>>> On 2/16/2023 12:30 AM, Mingxia Liu wrote:
>>>>>>> This patch add hardware packets/bytes statistics.
>>>>>>>
>>>>>>> Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
>>>>>>
>>>>>> <...>
>>>>>>
>>>>>>>> +static int
>>>>>>>> +cpfl_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats
>>>>>>>> +*stats) {
>>>>>>>> +	struct idpf_vport *vport =
>>>>>>>> +		(struct idpf_vport *)dev->data->dev_private;
>>>>>>>> +	struct virtchnl2_vport_stats *pstats = NULL;
>>>>>>>> +	int ret;
>>>>>>>> +
>>>>>>>> +	ret = idpf_vc_stats_query(vport, &pstats);
>>>>>>>> +	if (ret == 0) {
>>>>>>>> +		uint8_t crc_stats_len = (dev->data->dev_conf.rxmode.offloads &
>>>>>>>> +					 RTE_ETH_RX_OFFLOAD_KEEP_CRC) ? 0 :
>>>>>>>> +					 RTE_ETHER_CRC_LEN;
>>>>>>>> +
>>>>>>>> +		idpf_vport_stats_update(&vport->eth_stats_offset, pstats);
>>>>>>>> +		stats->ipackets = pstats->rx_unicast + pstats->rx_multicast +
>>>>>>>> +				pstats->rx_broadcast - pstats->rx_discards;
>>>>>>>> +		stats->opackets = pstats->tx_broadcast + pstats->tx_multicast +
>>>>>>>> +						pstats->tx_unicast;
>>>>>>>> +		stats->imissed = pstats->rx_discards;
>>>>>>>> +		stats->oerrors = pstats->tx_errors + pstats->tx_discards;
>>>>>>>> +		stats->ibytes = pstats->rx_bytes;
>>>>>>>> +		stats->ibytes -= stats->ipackets * crc_stats_len;
>>>>>>>> +		stats->obytes = pstats->tx_bytes;
>>>>>>>> +
>>>>>>>> +		dev->data->rx_mbuf_alloc_failed = cpfl_get_mbuf_alloc_failed_stats(dev);
>>>>>>
>>>>>> 'dev->data->rx_mbuf_alloc_failed' is also used by telemetry,
>>>>>> updating here only in stats_get() will make it wrong for telemetry.
>>>>>>
>>>>>> Is it possible to update 'dev->data->rx_mbuf_alloc_failed' whenever
>>>>>> alloc failed? (alongside 'rxq->rx_stats.mbuf_alloc_failed').
>>>>> [Liu, Mingxia] As I know, rte_eth_dev_data is not a public structure
>>>>> provided
>>>> to user, user need to access through rte_ethdev APIs.
>>>>> Because we already put rx and tx burst func to common/idpf which has
>>>>> no
>>>> dependcy with ethdev lib. If I update
>>>> "dev->data->rx_mbuf_alloc_failed"
>>>>> when allocate mbuf fails, it will break the design of our common/idpf
>>>> interface to net/cpfl or net.idpf.
>>>>>
>>>>> And I didn't find any reference of 'dev->data->rx_mbuf_alloc_failed'
>>>>> in lib
>>>> code.
>>>>>
>>>>
>>>> Please check 'eth_dev_handle_port_info()' function.  As I said this is
>>>> used by telemetry, not directly exposed to the user.
>>>>
>>>> I got the design concern, perhaps you can put a brief limitation to
>>>> the driver documentation.
>>>>
>>> OK, got it.
>>>
>>> As our previous design did have flaws.  And if we don't want to affect
>>> correctness of telemetry, we have to redesign the idpf common module
>>> code, which means a lot of work to do, so can we lower the priority of
>>> this issue?
>>>
>> I don't believe this is urgent, can you put a one-line limitation in the
>> documentation for now, and fix it later?
>>
>> And for the fix, updating 'dev->data->rx_mbuf_alloc_failed' where ever
>> 'rxq->rx_stats.mbuf_alloc_failed' updated is easy, although you may need
>> to store 'dev->data' in rxq struct for this.
>>
>> But, I think it is also fair to question the assumption telemetry has
>> that 'rx_mbuf_alloc_fail' is always available data, and consider moving
>> it to the 'eth_dev_handle_port_stats()' handler.  +Bruce for comment.
>>
> 
> That's not really a telemetry assumption, it's one from the stats
> structure. Telemetry just outputs the contents of data reported by ethdev
> stats, and rx_nombuf is just one of those fields.
> 

I am not talking about 'rx_nombuf' in 'eth_dev_handle_port_stats()',
but about 'rx_mbuf_alloc_fail' in 'eth_dev_handle_port_info()':

should telemetry return an interim 'eth_dev->data->rx_mbuf_alloc_failed'
value, especially when 'rx_nombuf' is available?

Because, at least for this driver, the returned 'rx_mbuf_alloc_fail' value
will be wrong; I believe the same is true for the 'idpf' driver.
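
For illustration, a minimal sketch of the in-driver fix suggested above,
bumping the ethdev-level counter at the point of failure. It assumes a
hypothetical 'dev_data' back-pointer added to the Rx queue structure at
queue-setup time; the names are illustrative, not the actual common/idpf
layout:

	/* at queue setup (hypothetical): rxq->dev_data = dev->data; */
	nmb = rte_mbuf_raw_alloc(rxq->mp);
	if (unlikely(nmb == NULL)) {
		/* per-queue counter kept by the common module */
		rxq->rx_stats.mbuf_alloc_failed++;
		/* mirror into the ethdev counter so telemetry sees an
		 * up-to-date value between stats_get() calls
		 */
		rxq->dev_data->rx_mbuf_alloc_failed++;
		break;
	}

Whether storing an ethdev structure pointer inside the common module is
acceptable is exactly the layering question raised above.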



^ permalink raw reply	[flat|nested] 263+ messages in thread

* Re: [PATCH v7 18/21] net/cpfl: add HW statistics
  2023-02-28 12:24                         ` Ferruh Yigit
@ 2023-02-28 12:33                           ` Ferruh Yigit
  2023-02-28 13:29                             ` Zhang, Qi Z
  0 siblings, 1 reply; 263+ messages in thread
From: Ferruh Yigit @ 2023-02-28 12:33 UTC (permalink / raw)
  To: Bruce Richardson
  Cc: Liu, Mingxia, dev, Xing, Beilei, Zhang, Yuying, Wu, Jingjing

On 2/28/2023 12:24 PM, Ferruh Yigit wrote:
> On 2/28/2023 12:12 PM, Bruce Richardson wrote:
>> On Tue, Feb 28, 2023 at 12:04:53PM +0000, Ferruh Yigit wrote:
>>> On 2/28/2023 11:47 AM, Liu, Mingxia wrote:
>>>
>>> Comment moved down, please don't top post, it makes it very hard to follow
>>> the discussion.
>>>
>>>>> -----Original Message----- From: Ferruh Yigit <ferruh.yigit@amd.com>
>>>>> Sent: Tuesday, February 28, 2023 6:02 PM To: Liu, Mingxia
>>>>> <mingxia.liu@intel.com>; dev@dpdk.org; Xing, Beilei
>>>>> <beilei.xing@intel.com>; Zhang, Yuying <yuying.zhang@intel.com>
>>>>> Subject: Re: [PATCH v7 18/21] net/cpfl: add HW statistics
>>>>>
>>>>> On 2/28/2023 6:46 AM, Liu, Mingxia wrote:
>>>>>>
>>>>>>
>>>>>>> -----Original Message----- From: Ferruh Yigit <ferruh.yigit@amd.com>
>>>>>>> Sent: Tuesday, February 28, 2023 5:52 AM To: Liu, Mingxia
>>>>>>> <mingxia.liu@intel.com>; dev@dpdk.org; Xing, Beilei
>>>>>>> <beilei.xing@intel.com>; Zhang, Yuying <yuying.zhang@intel.com>
>>>>>>> Subject: Re: [PATCH v7 18/21] net/cpfl: add HW statistics
>>>>>>>
>>>>>>> On 2/16/2023 12:30 AM, Mingxia Liu wrote:
>>>>>>>> This patch add hardware packets/bytes statistics.
>>>>>>>>
>>>>>>>> Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
>>>>>>>
>>>>>>> <...>
>>>>>>>
>>>>>>>> +static int
>>>>>>>> +cpfl_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
>>>>>>>> +{
>>>>>>>> +	struct idpf_vport *vport =
>>>>>>>> +		(struct idpf_vport *)dev->data->dev_private;
>>>>>>>> +	struct virtchnl2_vport_stats *pstats = NULL;
>>>>>>>> +	int ret;
>>>>>>>> +
>>>>>>>> +	ret = idpf_vc_stats_query(vport, &pstats);
>>>>>>>> +	if (ret == 0) {
>>>>>>>> +		uint8_t crc_stats_len = (dev->data->dev_conf.rxmode.offloads &
>>>>>>>> +					 RTE_ETH_RX_OFFLOAD_KEEP_CRC) ? 0 :
>>>>>>>> +					 RTE_ETHER_CRC_LEN;
>>>>>>>> +
>>>>>>>> +		idpf_vport_stats_update(&vport->eth_stats_offset, pstats);
>>>>>>>> +		stats->ipackets = pstats->rx_unicast + pstats->rx_multicast +
>>>>>>>> +				  pstats->rx_broadcast - pstats->rx_discards;
>>>>>>>> +		stats->opackets = pstats->tx_broadcast + pstats->tx_multicast +
>>>>>>>> +				  pstats->tx_unicast;
>>>>>>>> +		stats->imissed = pstats->rx_discards;
>>>>>>>> +		stats->oerrors = pstats->tx_errors + pstats->tx_discards;
>>>>>>>> +		stats->ibytes = pstats->rx_bytes;
>>>>>>>> +		stats->ibytes -= stats->ipackets * crc_stats_len;
>>>>>>>> +		stats->obytes = pstats->tx_bytes;
>>>>>>>> +
>>>>>>>> +		dev->data->rx_mbuf_alloc_failed = cpfl_get_mbuf_alloc_failed_stats(dev);
>>>>>>>
>>>>>>> 'dev->data->rx_mbuf_alloc_failed' is also used by telemetry,
>>>>>>> updating here only in stats_get() will make it wrong for telemetry.
>>>>>>>
>>>>>>> Is it possible to update 'dev->data->rx_mbuf_alloc_failed' whenever
>>>>>>> alloc failed? (alongside 'rxq->rx_stats.mbuf_alloc_failed').
>>>>>> [Liu, Mingxia] As I know, rte_eth_dev_data is not a public structure
>>>>>> provided
>>>>> to user, user need to access through rte_ethdev APIs.
>>>>>> Because we already put rx and tx burst func to common/idpf which has
>>>>>> no
>>>>> dependcy with ethdev lib. If I update
>>>>> "dev->data->rx_mbuf_alloc_failed"
>>>>>> when allocate mbuf fails, it will break the design of our common/idpf
>>>>> interface to net/cpfl or net.idpf.
>>>>>>
>>>>>> And I didn't find any reference of 'dev->data->rx_mbuf_alloc_failed'
>>>>>> in lib
>>>>> code.
>>>>>>
>>>>>
>>>>> Please check 'eth_dev_handle_port_info()' function.  As I said this is
>>>>> used by telemetry, not directly exposed to the user.
>>>>>
>>>>> I got the design concern, perhaps you can put a brief limitation to
>>>>> the driver documentation.
>>>>>
>>>> OK, got it.
>>>>
>>>> As our previous design did have flaws.  And if we don't want to affect
>>>> correctness of telemetry, we have to redesign the idpf common module
>>>> code, which means a lot of work to do, so can we lower the priority of
>>>> this issue?
>>>>
>>> I don't believe this is urgent, can you but a one line limitation to the
>>> documentation for now, and fix it later?
>>>
>>> And for the fix, updating 'dev->data->rx_mbuf_alloc_failed' where ever
>>> 'rxq->rx_stats.mbuf_alloc_failed' updated is easy, although you may need
>>> to store 'dev->data' in rxq struct for this.
>>>
>>> But, I think it is also fair to question the assumption telemetry has
>>> that 'rx_mbuf_alloc_fail' is always available data, and consider moving
>>> it to the 'eth_dev_handle_port_stats()' handler.  +Bruce for comment.
>>>
>>
>> That's not really a telemetry assumption, it's one from the stats,
>> structure. Telemetry just outputs the contents of data reported by ethdev
>> stats, and rx_nombuf is just one of those fields.
>>
> 
> Not talking about 'rx_nombuf' in 'eth_dev_handle_port_stats()',
> but talking about 'rx_mbuf_alloc_fail' in 'eth_dev_handle_port_info()',
> 
> should telemetry return interim 'eth_dev->data->rx_mbuf_alloc_failed'
> value, specially when 'rx_nombuf' is available?
> 
> Because at least for this driver returned 'rx_mbuf_alloc_fail' value
> will be wrong, I believe that is same for 'idpf' driver.
> 
> 

Or, let me rephrase it like this:
'eth_dev->data->rx_mbuf_alloc_failed' is not returned to the user directly
via the ethdev APIs, but it is via telemetry.

I think it is not guaranteed that this value will be correct at any
given time, as telemetry assumes, so should we remove it from telemetry?
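
For context, the duplication being discussed, paraphrased from lib/ethdev
(a sketch, not verbatim): rte_eth_stats_get() already folds this field into
'rx_nombuf', while the '/ethdev/info' telemetry handler exports the raw
field a second time:

	/* in rte_eth_stats_get(): */
	stats->rx_nombuf = dev->data->rx_mbuf_alloc_failed;

	/* in eth_dev_handle_port_info(), roughly: */
	rte_tel_data_add_dict_u64(d, "rx_mbuf_alloc_fail",
				  eth_dev->data->rx_mbuf_alloc_failed);

So a driver that only refreshes the field inside its stats_get() callback
gives '/ethdev/stats' a correct 'rx_nombuf', but leaves '/ethdev/info'
with a stale value in between calls.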


^ permalink raw reply	[flat|nested] 263+ messages in thread

* RE: [PATCH v7 18/21] net/cpfl: add HW statistics
  2023-02-28 12:33                           ` Ferruh Yigit
@ 2023-02-28 13:29                             ` Zhang, Qi Z
  2023-02-28 13:34                               ` Ferruh Yigit
  0 siblings, 1 reply; 263+ messages in thread
From: Zhang, Qi Z @ 2023-02-28 13:29 UTC (permalink / raw)
  To: Ferruh Yigit, Richardson, Bruce
  Cc: Liu, Mingxia, dev, Xing, Beilei, Zhang, Yuying, Wu, Jingjing



> -----Original Message-----
> From: Ferruh Yigit <ferruh.yigit@amd.com>
> Sent: Tuesday, February 28, 2023 8:33 PM
> To: Richardson, Bruce <bruce.richardson@intel.com>
> Cc: Liu, Mingxia <mingxia.liu@intel.com>; dev@dpdk.org; Xing, Beilei
> <beilei.xing@intel.com>; Zhang, Yuying <yuying.zhang@intel.com>; Wu,
> Jingjing <jingjing.wu@intel.com>
> Subject: Re: [PATCH v7 18/21] net/cpfl: add HW statistics
> 
> On 2/28/2023 12:24 PM, Ferruh Yigit wrote:
> > On 2/28/2023 12:12 PM, Bruce Richardson wrote:
> >> On Tue, Feb 28, 2023 at 12:04:53PM +0000, Ferruh Yigit wrote:
> >>> On 2/28/2023 11:47 AM, Liu, Mingxia wrote:
> >>>
> >>> Comment moved down, please don't top post, it makes very hard to
> >>> follow discussion.
> >>>
> >>>>> -----Original Message----- From: Ferruh Yigit
> >>>>> <ferruh.yigit@amd.com>
> >>>>> Sent: Tuesday, February 28, 2023 6:02 PM To: Liu, Mingxia
> >>>>> <mingxia.liu@intel.com>; dev@dpdk.org; Xing, Beilei
> >>>>> <beilei.xing@intel.com>; Zhang, Yuying <yuying.zhang@intel.com>
> >>>>> Subject: Re: [PATCH v7 18/21] net/cpfl: add HW statistics
> >>>>>
> >>>>> On 2/28/2023 6:46 AM, Liu, Mingxia wrote:
> >>>>>>
> >>>>>>
> >>>>>>> -----Original Message----- From: Ferruh Yigit
> >>>>>>> <ferruh.yigit@amd.com>
> >>>>>>> Sent: Tuesday, February 28, 2023 5:52 AM To: Liu, Mingxia
> >>>>>>> <mingxia.liu@intel.com>; dev@dpdk.org; Xing, Beilei
> >>>>>>> <beilei.xing@intel.com>; Zhang, Yuying <yuying.zhang@intel.com>
> >>>>>>> Subject: Re: [PATCH v7 18/21] net/cpfl: add HW statistics
> >>>>>>>
> >>>>>>> On 2/16/2023 12:30 AM, Mingxia Liu wrote:
> >>>>>>>> This patch add hardware packets/bytes statistics.
> >>>>>>>>
> >>>>>>>> Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
> >>>>>>>
> >>>>>>> <...>
> >>>>>>>
> >>>>>>>> +static int
> >>>>>>>> +cpfl_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
> >>>>>>>> +{
> >>>>>>>> +	struct idpf_vport *vport =
> >>>>>>>> +		(struct idpf_vport *)dev->data->dev_private;
> >>>>>>>> +	struct virtchnl2_vport_stats *pstats = NULL;
> >>>>>>>> +	int ret;
> >>>>>>>> +
> >>>>>>>> +	ret = idpf_vc_stats_query(vport, &pstats);
> >>>>>>>> +	if (ret == 0) {
> >>>>>>>> +		uint8_t crc_stats_len = (dev->data->dev_conf.rxmode.offloads &
> >>>>>>>> +					 RTE_ETH_RX_OFFLOAD_KEEP_CRC) ? 0 :
> >>>>>>>> +					 RTE_ETHER_CRC_LEN;
> >>>>>>>> +
> >>>>>>>> +		idpf_vport_stats_update(&vport->eth_stats_offset, pstats);
> >>>>>>>> +		stats->ipackets = pstats->rx_unicast + pstats->rx_multicast +
> >>>>>>>> +				  pstats->rx_broadcast - pstats->rx_discards;
> >>>>>>>> +		stats->opackets = pstats->tx_broadcast + pstats->tx_multicast +
> >>>>>>>> +				  pstats->tx_unicast;
> >>>>>>>> +		stats->imissed = pstats->rx_discards;
> >>>>>>>> +		stats->oerrors = pstats->tx_errors + pstats->tx_discards;
> >>>>>>>> +		stats->ibytes = pstats->rx_bytes;
> >>>>>>>> +		stats->ibytes -= stats->ipackets * crc_stats_len;
> >>>>>>>> +		stats->obytes = pstats->tx_bytes;
> >>>>>>>> +
> >>>>>>>> +		dev->data->rx_mbuf_alloc_failed = cpfl_get_mbuf_alloc_failed_stats(dev);
> >>>>>>>
> >>>>>>> 'dev->data->rx_mbuf_alloc_failed' is also used by telemetry,
> >>>>>>> updating here only in stats_get() will make it wrong for telemetry.
> >>>>>>>
> >>>>>>> Is it possible to update 'dev->data->rx_mbuf_alloc_failed'
> >>>>>>> whenever alloc failed? (alongside 'rxq->rx_stats.mbuf_alloc_failed').
> >>>>>> [Liu, Mingxia] As I know, rte_eth_dev_data is not a public
> >>>>>> structure provided
> >>>>> to user, user need to access through rte_ethdev APIs.
> >>>>>> Because we already put rx and tx burst func to common/idpf which
> >>>>>> has no
> >>>>> dependcy with ethdev lib. If I update
> >>>>> "dev->data->rx_mbuf_alloc_failed"
> >>>>>> when allocate mbuf fails, it will break the design of our
> >>>>>> common/idpf
> >>>>> interface to net/cpfl or net.idpf.
> >>>>>>
> >>>>>> And I didn't find any reference of 'dev->data->rx_mbuf_alloc_failed'
> >>>>>> in lib
> >>>>> code.
> >>>>>>
> >>>>>
> >>>>> Please check 'eth_dev_handle_port_info()' function.  As I said
> >>>>> this is used by telemetry, not directly exposed to the user.
> >>>>>
> >>>>> I got the design concern, perhaps you can put a brief limitation
> >>>>> to the driver documentation.
> >>>>>
> >>>> OK, got it.
> >>>>
> >>>> As our previous design did have flaws.  And if we don't want to
> >>>> affect correctness of telemetry, we have to redesign the idpf
> >>>> common module code, which means a lot of work to do, so can we
> >>>> lower the priority of this issue?
> >>>>
> >>> I don't believe this is urgent, can you but a one line limitation to
> >>> the documentation for now, and fix it later?
> >>>
> >>> And for the fix, updating 'dev->data->rx_mbuf_alloc_failed' where
> >>> ever 'rxq->rx_stats.mbuf_alloc_failed' updated is easy, although you
> >>> may need to store 'dev->data' in rxq struct for this.
> >>>
> >>> But, I think it is also fair to question the assumption telemetry
> >>> has that 'rx_mbuf_alloc_fail' is always available data, and consider
> >>> moving it to the 'eth_dev_handle_port_stats()' handler.  +Bruce for
> comment.
> >>>
> >>
> >> That's not really a telemetry assumption, it's one from the stats,
> >> structure. Telemetry just outputs the contents of data reported by
> >> ethdev stats, and rx_nombuf is just one of those fields.
> >>
> >
> > Not talking about 'rx_nombuf' in 'eth_dev_handle_port_stats()', but
> > talking about 'rx_mbuf_alloc_fail' in 'eth_dev_handle_port_info()',
> >
> > should telemetry return interim 'eth_dev->data->rx_mbuf_alloc_failed'
> > value, specially when 'rx_nombuf' is available?
> >
> > Because at least for this driver returned 'rx_mbuf_alloc_fail' value
> > will be wrong, I believe that is same for 'idpf' driver.
> >
> >
> 
> Or, let me rephrase like this,
> 'eth_dev->data->rx_mbuf_alloc_failed' is not returned to user directly via
> ethdev APIs, but it is via telemetry.
> 
> I think it is not guaranteed that this value will be correct at any given time as
> telemetry assumes, so should we remove it from telemetry?

That may not be necessary; the PMD should be able to give the right number. This is something we can fix in the idpf and cpfl PMDs to align with the other PMDs.

^ permalink raw reply	[flat|nested] 263+ messages in thread

* Re: [PATCH v7 18/21] net/cpfl: add HW statistics
  2023-02-28 13:29                             ` Zhang, Qi Z
@ 2023-02-28 13:34                               ` Ferruh Yigit
  2023-02-28 14:04                                 ` Zhang, Qi Z
  2023-02-28 14:24                                 ` Bruce Richardson
  0 siblings, 2 replies; 263+ messages in thread
From: Ferruh Yigit @ 2023-02-28 13:34 UTC (permalink / raw)
  To: Zhang, Qi Z, Richardson, Bruce
  Cc: Liu, Mingxia, dev, Xing, Beilei, Zhang, Yuying, Wu, Jingjing

On 2/28/2023 1:29 PM, Zhang, Qi Z wrote:
> 
> 
>> -----Original Message-----
>> From: Ferruh Yigit <ferruh.yigit@amd.com>
>> Sent: Tuesday, February 28, 2023 8:33 PM
>> To: Richardson, Bruce <bruce.richardson@intel.com>
>> Cc: Liu, Mingxia <mingxia.liu@intel.com>; dev@dpdk.org; Xing, Beilei
>> <beilei.xing@intel.com>; Zhang, Yuying <yuying.zhang@intel.com>; Wu,
>> Jingjing <jingjing.wu@intel.com>
>> Subject: Re: [PATCH v7 18/21] net/cpfl: add HW statistics
>>
>> On 2/28/2023 12:24 PM, Ferruh Yigit wrote:
>>> On 2/28/2023 12:12 PM, Bruce Richardson wrote:
>>>> On Tue, Feb 28, 2023 at 12:04:53PM +0000, Ferruh Yigit wrote:
>>>>> On 2/28/2023 11:47 AM, Liu, Mingxia wrote:
>>>>>
>>>>> Comment moved down, please don't top post, it makes very hard to
>>>>> follow discussion.
>>>>>
>>>>>>> -----Original Message----- From: Ferruh Yigit
>>>>>>> <ferruh.yigit@amd.com>
>>>>>>> Sent: Tuesday, February 28, 2023 6:02 PM To: Liu, Mingxia
>>>>>>> <mingxia.liu@intel.com>; dev@dpdk.org; Xing, Beilei
>>>>>>> <beilei.xing@intel.com>; Zhang, Yuying <yuying.zhang@intel.com>
>>>>>>> Subject: Re: [PATCH v7 18/21] net/cpfl: add HW statistics
>>>>>>>
>>>>>>> On 2/28/2023 6:46 AM, Liu, Mingxia wrote:
>>>>>>>>
>>>>>>>>
>>>>>>>>> -----Original Message----- From: Ferruh Yigit
>>>>>>>>> <ferruh.yigit@amd.com>
>>>>>>>>> Sent: Tuesday, February 28, 2023 5:52 AM To: Liu, Mingxia
>>>>>>>>> <mingxia.liu@intel.com>; dev@dpdk.org; Xing, Beilei
>>>>>>>>> <beilei.xing@intel.com>; Zhang, Yuying <yuying.zhang@intel.com>
>>>>>>>>> Subject: Re: [PATCH v7 18/21] net/cpfl: add HW statistics
>>>>>>>>>
>>>>>>>>> On 2/16/2023 12:30 AM, Mingxia Liu wrote:
>>>>>>>>>> This patch add hardware packets/bytes statistics.
>>>>>>>>>>
>>>>>>>>>> Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
>>>>>>>>>
>>>>>>>>> <...>
>>>>>>>>>
>>>>>>>>>> +static int
>>>>>>>>>> +cpfl_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
>>>>>>>>>> +{
>>>>>>>>>> +	struct idpf_vport *vport =
>>>>>>>>>> +		(struct idpf_vport *)dev->data->dev_private;
>>>>>>>>>> +	struct virtchnl2_vport_stats *pstats = NULL;
>>>>>>>>>> +	int ret;
>>>>>>>>>> +
>>>>>>>>>> +	ret = idpf_vc_stats_query(vport, &pstats);
>>>>>>>>>> +	if (ret == 0) {
>>>>>>>>>> +		uint8_t crc_stats_len = (dev->data->dev_conf.rxmode.offloads &
>>>>>>>>>> +					 RTE_ETH_RX_OFFLOAD_KEEP_CRC) ? 0 :
>>>>>>>>>> +					 RTE_ETHER_CRC_LEN;
>>>>>>>>>> +
>>>>>>>>>> +		idpf_vport_stats_update(&vport->eth_stats_offset, pstats);
>>>>>>>>>> +		stats->ipackets = pstats->rx_unicast + pstats->rx_multicast +
>>>>>>>>>> +				  pstats->rx_broadcast - pstats->rx_discards;
>>>>>>>>>> +		stats->opackets = pstats->tx_broadcast + pstats->tx_multicast +
>>>>>>>>>> +				  pstats->tx_unicast;
>>>>>>>>>> +		stats->imissed = pstats->rx_discards;
>>>>>>>>>> +		stats->oerrors = pstats->tx_errors + pstats->tx_discards;
>>>>>>>>>> +		stats->ibytes = pstats->rx_bytes;
>>>>>>>>>> +		stats->ibytes -= stats->ipackets * crc_stats_len;
>>>>>>>>>> +		stats->obytes = pstats->tx_bytes;
>>>>>>>>>> +
>>>>>>>>>> +		dev->data->rx_mbuf_alloc_failed = cpfl_get_mbuf_alloc_failed_stats(dev);
>>>>>>>>>
>>>>>>>>> 'dev->data->rx_mbuf_alloc_failed' is also used by telemetry,
>>>>>>>>> updating here only in stats_get() will make it wrong for telemetry.
>>>>>>>>>
>>>>>>>>> Is it possible to update 'dev->data->rx_mbuf_alloc_failed'
>>>>>>>>> whenever alloc failed? (alongside 'rxq->rx_stats.mbuf_alloc_failed').
>>>>>>>> [Liu, Mingxia] As I know, rte_eth_dev_data is not a public
>>>>>>>> structure provided
>>>>>>> to user, user need to access through rte_ethdev APIs.
>>>>>>>> Because we already put rx and tx burst func to common/idpf which
>>>>>>>> has no
>>>>>>> dependcy with ethdev lib. If I update
>>>>>>> "dev->data->rx_mbuf_alloc_failed"
>>>>>>>> when allocate mbuf fails, it will break the design of our
>>>>>>>> common/idpf
>>>>>>> interface to net/cpfl or net.idpf.
>>>>>>>>
>>>>>>>> And I didn't find any reference of 'dev->data->rx_mbuf_alloc_failed'
>>>>>>>> in lib
>>>>>>> code.
>>>>>>>>
>>>>>>>
>>>>>>> Please check 'eth_dev_handle_port_info()' function.  As I said
>>>>>>> this is used by telemetry, not directly exposed to the user.
>>>>>>>
>>>>>>> I got the design concern, perhaps you can put a brief limitation
>>>>>>> to the driver documentation.
>>>>>>>
>>>>>> OK, got it.
>>>>>>
>>>>>> As our previous design did have flaws.  And if we don't want to
>>>>>> affect correctness of telemetry, we have to redesign the idpf
>>>>>> common module code, which means a lot of work to do, so can we
>>>>>> lower the priority of this issue?
>>>>>>
>>>>> I don't believe this is urgent, can you but a one line limitation to
>>>>> the documentation for now, and fix it later?
>>>>>
>>>>> And for the fix, updating 'dev->data->rx_mbuf_alloc_failed' where
>>>>> ever 'rxq->rx_stats.mbuf_alloc_failed' updated is easy, although you
>>>>> may need to store 'dev->data' in rxq struct for this.
>>>>>
>>>>> But, I think it is also fair to question the assumption telemetry
>>>>> has that 'rx_mbuf_alloc_fail' is always available data, and consider
>>>>> moving it to the 'eth_dev_handle_port_stats()' handler.  +Bruce for
>> comment.
>>>>>
>>>>
>>>> That's not really a telemetry assumption, it's one from the stats,
>>>> structure. Telemetry just outputs the contents of data reported by
>>>> ethdev stats, and rx_nombuf is just one of those fields.
>>>>
>>>
>>> Not talking about 'rx_nombuf' in 'eth_dev_handle_port_stats()', but
>>> talking about 'rx_mbuf_alloc_fail' in 'eth_dev_handle_port_info()',
>>>
>>> should telemetry return interim 'eth_dev->data->rx_mbuf_alloc_failed'
>>> value, specially when 'rx_nombuf' is available?
>>>
>>> Because at least for this driver returned 'rx_mbuf_alloc_fail' value
>>> will be wrong, I believe that is same for 'idpf' driver.
>>>
>>>
>>
>> Or, let me rephrase like this,
>> 'eth_dev->data->rx_mbuf_alloc_failed' is not returned to user directly via
>> ethdev APIs, but it is via telemetry.
>>
>> I think it is not guaranteed that this value will be correct at any given time as
>> telemetry assumes, so should we remove it from telemetry?
> 
> May not be necessary, PMD should be able to give the right number, this is something we can fix in idpf and cpfl PMD, to align with other PMD.

Thanks Qi, OK to have the drivers aligned with common usage.

Still, for telemetry we can consider removing 'rx_mbuf_alloc_fail'; the
user can get that information from 'rx_nombuf'.

^ permalink raw reply	[flat|nested] 263+ messages in thread

* RE: [PATCH v7 18/21] net/cpfl: add HW statistics
  2023-02-28 13:34                               ` Ferruh Yigit
@ 2023-02-28 14:04                                 ` Zhang, Qi Z
  2023-02-28 14:24                                 ` Bruce Richardson
  1 sibling, 0 replies; 263+ messages in thread
From: Zhang, Qi Z @ 2023-02-28 14:04 UTC (permalink / raw)
  To: Ferruh Yigit, Richardson, Bruce
  Cc: Liu, Mingxia, dev, Xing, Beilei, Zhang, Yuying, Wu, Jingjing



> -----Original Message-----
> From: Ferruh Yigit <ferruh.yigit@amd.com>
> Sent: Tuesday, February 28, 2023 9:35 PM
> To: Zhang, Qi Z <qi.z.zhang@intel.com>; Richardson, Bruce
> <bruce.richardson@intel.com>
> Cc: Liu, Mingxia <mingxia.liu@intel.com>; dev@dpdk.org; Xing, Beilei
> <beilei.xing@intel.com>; Zhang, Yuying <yuying.zhang@intel.com>; Wu,
> Jingjing <jingjing.wu@intel.com>
> Subject: Re: [PATCH v7 18/21] net/cpfl: add HW statistics
> 
> On 2/28/2023 1:29 PM, Zhang, Qi Z wrote:
> >
> >
> >> -----Original Message-----
> >> From: Ferruh Yigit <ferruh.yigit@amd.com>
> >> Sent: Tuesday, February 28, 2023 8:33 PM
> >> To: Richardson, Bruce <bruce.richardson@intel.com>
> >> Cc: Liu, Mingxia <mingxia.liu@intel.com>; dev@dpdk.org; Xing, Beilei
> >> <beilei.xing@intel.com>; Zhang, Yuying <yuying.zhang@intel.com>; Wu,
> >> Jingjing <jingjing.wu@intel.com>
> >> Subject: Re: [PATCH v7 18/21] net/cpfl: add HW statistics
> >>
> >> On 2/28/2023 12:24 PM, Ferruh Yigit wrote:
> >>> On 2/28/2023 12:12 PM, Bruce Richardson wrote:
> >>>> On Tue, Feb 28, 2023 at 12:04:53PM +0000, Ferruh Yigit wrote:
> >>>>> On 2/28/2023 11:47 AM, Liu, Mingxia wrote:
> >>>>>
> >>>>> Comment moved down, please don't top post, it makes very hard to
> >>>>> follow discussion.
> >>>>>
> >>>>>>> -----Original Message----- From: Ferruh Yigit
> >>>>>>> <ferruh.yigit@amd.com>
> >>>>>>> Sent: Tuesday, February 28, 2023 6:02 PM To: Liu, Mingxia
> >>>>>>> <mingxia.liu@intel.com>; dev@dpdk.org; Xing, Beilei
> >>>>>>> <beilei.xing@intel.com>; Zhang, Yuying <yuying.zhang@intel.com>
> >>>>>>> Subject: Re: [PATCH v7 18/21] net/cpfl: add HW statistics
> >>>>>>>
> >>>>>>> On 2/28/2023 6:46 AM, Liu, Mingxia wrote:
> >>>>>>>>
> >>>>>>>>
> >>>>>>>>> -----Original Message----- From: Ferruh Yigit
> >>>>>>>>> <ferruh.yigit@amd.com>
> >>>>>>>>> Sent: Tuesday, February 28, 2023 5:52 AM To: Liu, Mingxia
> >>>>>>>>> <mingxia.liu@intel.com>; dev@dpdk.org; Xing, Beilei
> >>>>>>>>> <beilei.xing@intel.com>; Zhang, Yuying
> >>>>>>>>> <yuying.zhang@intel.com>
> >>>>>>>>> Subject: Re: [PATCH v7 18/21] net/cpfl: add HW statistics
> >>>>>>>>>
> >>>>>>>>> On 2/16/2023 12:30 AM, Mingxia Liu wrote:
> >>>>>>>>>> This patch add hardware packets/bytes statistics.
> >>>>>>>>>>
> >>>>>>>>>> Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
> >>>>>>>>>
> >>>>>>>>> <...>
> >>>>>>>>>
> >>>>>>>>>> +static int
> >>>>>>>>>> +cpfl_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
> >>>>>>>>>> +{
> >>>>>>>>>> +	struct idpf_vport *vport =
> >>>>>>>>>> +		(struct idpf_vport *)dev->data->dev_private;
> >>>>>>>>>> +	struct virtchnl2_vport_stats *pstats = NULL;
> >>>>>>>>>> +	int ret;
> >>>>>>>>>> +
> >>>>>>>>>> +	ret = idpf_vc_stats_query(vport, &pstats);
> >>>>>>>>>> +	if (ret == 0) {
> >>>>>>>>>> +		uint8_t crc_stats_len = (dev->data->dev_conf.rxmode.offloads &
> >>>>>>>>>> +					 RTE_ETH_RX_OFFLOAD_KEEP_CRC) ? 0 :
> >>>>>>>>>> +					 RTE_ETHER_CRC_LEN;
> >>>>>>>>>> +
> >>>>>>>>>> +		idpf_vport_stats_update(&vport->eth_stats_offset, pstats);
> >>>>>>>>>> +		stats->ipackets = pstats->rx_unicast + pstats->rx_multicast +
> >>>>>>>>>> +				  pstats->rx_broadcast - pstats->rx_discards;
> >>>>>>>>>> +		stats->opackets = pstats->tx_broadcast + pstats->tx_multicast +
> >>>>>>>>>> +				  pstats->tx_unicast;
> >>>>>>>>>> +		stats->imissed = pstats->rx_discards;
> >>>>>>>>>> +		stats->oerrors = pstats->tx_errors + pstats->tx_discards;
> >>>>>>>>>> +		stats->ibytes = pstats->rx_bytes;
> >>>>>>>>>> +		stats->ibytes -= stats->ipackets * crc_stats_len;
> >>>>>>>>>> +		stats->obytes = pstats->tx_bytes;
> >>>>>>>>>> +
> >>>>>>>>>> +		dev->data->rx_mbuf_alloc_failed = cpfl_get_mbuf_alloc_failed_stats(dev);
> >>>>>>>>>
> >>>>>>>>> 'dev->data->rx_mbuf_alloc_failed' is also used by telemetry,
> >>>>>>>>> updating here only in stats_get() will make it wrong for telemetry.
> >>>>>>>>>
> >>>>>>>>> Is it possible to update 'dev->data->rx_mbuf_alloc_failed'
> >>>>>>>>> whenever alloc failed? (alongside 'rxq-
> >rx_stats.mbuf_alloc_failed').
> >>>>>>>> [Liu, Mingxia] As I know, rte_eth_dev_data is not a public
> >>>>>>>> structure provided
> >>>>>>> to user, user need to access through rte_ethdev APIs.
> >>>>>>>> Because we already put rx and tx burst func to common/idpf
> >>>>>>>> which has no
> >>>>>>> dependcy with ethdev lib. If I update
> >>>>>>> "dev->data->rx_mbuf_alloc_failed"
> >>>>>>>> when allocate mbuf fails, it will break the design of our
> >>>>>>>> common/idpf
> >>>>>>> interface to net/cpfl or net.idpf.
> >>>>>>>>
> >>>>>>>> And I didn't find any reference of 'dev->data-
> >rx_mbuf_alloc_failed'
> >>>>>>>> in lib
> >>>>>>> code.
> >>>>>>>>
> >>>>>>>
> >>>>>>> Please check 'eth_dev_handle_port_info()' function.  As I said
> >>>>>>> this is used by telemetry, not directly exposed to the user.
> >>>>>>>
> >>>>>>> I got the design concern, perhaps you can put a brief limitation
> >>>>>>> to the driver documentation.
> >>>>>>>
> >>>>>> OK, got it.
> >>>>>>
> >>>>>> As our previous design did have flaws.  And if we don't want to
> >>>>>> affect correctness of telemetry, we have to redesign the idpf
> >>>>>> common module code, which means a lot of work to do, so can we
> >>>>>> lower the priority of this issue?
> >>>>>>
> >>>>> I don't believe this is urgent, can you but a one line limitation
> >>>>> to the documentation for now, and fix it later?
> >>>>>
> >>>>> And for the fix, updating 'dev->data->rx_mbuf_alloc_failed' where
> >>>>> ever 'rxq->rx_stats.mbuf_alloc_failed' updated is easy, although
> >>>>> you may need to store 'dev->data' in rxq struct for this.
> >>>>>
> >>>>> But, I think it is also fair to question the assumption telemetry
> >>>>> has that 'rx_mbuf_alloc_fail' is always available data, and
> >>>>> consider moving it to the 'eth_dev_handle_port_stats()' handler.
> >>>>> +Bruce for
> >> comment.
> >>>>>
> >>>>
> >>>> That's not really a telemetry assumption, it's one from the stats,
> >>>> structure. Telemetry just outputs the contents of data reported by
> >>>> ethdev stats, and rx_nombuf is just one of those fields.
> >>>>
> >>>
> >>> Not talking about 'rx_nombuf' in 'eth_dev_handle_port_stats()', but
> >>> talking about 'rx_mbuf_alloc_fail' in 'eth_dev_handle_port_info()',
> >>>
> >>> should telemetry return interim 'eth_dev->data->rx_mbuf_alloc_failed'
> >>> value, specially when 'rx_nombuf' is available?
> >>>
> >>> Because at least for this driver returned 'rx_mbuf_alloc_fail' value
> >>> will be wrong, I believe that is same for 'idpf' driver.
> >>>
> >>>
> >>
> >> Or, let me rephrase like this,
> >> 'eth_dev->data->rx_mbuf_alloc_failed' is not returned to user
> >> directly via ethdev APIs, but it is via telemetry.
> >>
> >> I think it is not guaranteed that this value will be correct at any
> >> given time as telemetry assumes, so should we remove it from telemetry?
> >
> > May not be necessary, PMD should be able to give the right number, this is
> something we can fix in idpf and cpfl PMD, to align with other PMD.
> 
> Thanks Qi, Ok to have drivers aligned to common usage.
> 
> Still, for telemetry we can consider removing 'rx_mbuf_alloc_fail', user can
> get that information from 'rx_nombuf'.

No objection, if this is not a compromise for the above issue :)

^ permalink raw reply	[flat|nested] 263+ messages in thread

* Re: [PATCH v7 18/21] net/cpfl: add HW statistics
  2023-02-28 13:34                               ` Ferruh Yigit
  2023-02-28 14:04                                 ` Zhang, Qi Z
@ 2023-02-28 14:24                                 ` Bruce Richardson
  2023-02-28 16:14                                   ` Ferruh Yigit
  1 sibling, 1 reply; 263+ messages in thread
From: Bruce Richardson @ 2023-02-28 14:24 UTC (permalink / raw)
  To: Ferruh Yigit
  Cc: Zhang, Qi Z, Liu, Mingxia, dev, Xing, Beilei, Zhang, Yuying, Wu,
	Jingjing

On Tue, Feb 28, 2023 at 01:34:43PM +0000, Ferruh Yigit wrote:
> On 2/28/2023 1:29 PM, Zhang, Qi Z wrote:
> > 
> > 
> >> -----Original Message-----
> >> From: Ferruh Yigit <ferruh.yigit@amd.com>
> >> Sent: Tuesday, February 28, 2023 8:33 PM
> >> To: Richardson, Bruce <bruce.richardson@intel.com>
> >> Cc: Liu, Mingxia <mingxia.liu@intel.com>; dev@dpdk.org; Xing, Beilei
> >> <beilei.xing@intel.com>; Zhang, Yuying <yuying.zhang@intel.com>; Wu,
> >> Jingjing <jingjing.wu@intel.com>
> >> Subject: Re: [PATCH v7 18/21] net/cpfl: add HW statistics
> >>
> >> On 2/28/2023 12:24 PM, Ferruh Yigit wrote:
> >>> On 2/28/2023 12:12 PM, Bruce Richardson wrote:
> >>>> On Tue, Feb 28, 2023 at 12:04:53PM +0000, Ferruh Yigit wrote:
> >>>>> On 2/28/2023 11:47 AM, Liu, Mingxia wrote:
> >>>>>
> >>>>> Comment moved down, please don't top post, it makes very hard to
> >>>>> follow discussion.
> >>>>>
> >>>>>>> -----Original Message----- From: Ferruh Yigit
> >>>>>>> <ferruh.yigit@amd.com>
> >>>>>>> Sent: Tuesday, February 28, 2023 6:02 PM To: Liu, Mingxia
> >>>>>>> <mingxia.liu@intel.com>; dev@dpdk.org; Xing, Beilei
> >>>>>>> <beilei.xing@intel.com>; Zhang, Yuying <yuying.zhang@intel.com>
> >>>>>>> Subject: Re: [PATCH v7 18/21] net/cpfl: add HW statistics
> >>>>>>>
> >>>>>>> On 2/28/2023 6:46 AM, Liu, Mingxia wrote:
> >>>>>>>>
> >>>>>>>>
> >>>>>>>>> -----Original Message----- From: Ferruh Yigit
> >>>>>>>>> <ferruh.yigit@amd.com>
> >>>>>>>>> Sent: Tuesday, February 28, 2023 5:52 AM To: Liu, Mingxia
> >>>>>>>>> <mingxia.liu@intel.com>; dev@dpdk.org; Xing, Beilei
> >>>>>>>>> <beilei.xing@intel.com>; Zhang, Yuying <yuying.zhang@intel.com>
> >>>>>>>>> Subject: Re: [PATCH v7 18/21] net/cpfl: add HW statistics
> >>>>>>>>>
> >>>>>>>>> On 2/16/2023 12:30 AM, Mingxia Liu wrote:
> >>>>>>>>>> This patch add hardware packets/bytes statistics.
> >>>>>>>>>>
> >>>>>>>>>> Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
> >>>>>>>>>
> >>>>>>>>> <...>
> >>>>>>>>>
> >>>>>>>>>> +static int
> >>>>>>>>>> +cpfl_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
> >>>>>>>>>> +{
> >>>>>>>>>> +	struct idpf_vport *vport =
> >>>>>>>>>> +		(struct idpf_vport *)dev->data->dev_private;
> >>>>>>>>>> +	struct virtchnl2_vport_stats *pstats = NULL;
> >>>>>>>>>> +	int ret;
> >>>>>>>>>> +
> >>>>>>>>>> +	ret = idpf_vc_stats_query(vport, &pstats);
> >>>>>>>>>> +	if (ret == 0) {
> >>>>>>>>>> +		uint8_t crc_stats_len = (dev->data->dev_conf.rxmode.offloads &
> >>>>>>>>>> +					 RTE_ETH_RX_OFFLOAD_KEEP_CRC) ? 0 :
> >>>>>>>>>> +					 RTE_ETHER_CRC_LEN;
> >>>>>>>>>> +
> >>>>>>>>>> +		idpf_vport_stats_update(&vport->eth_stats_offset, pstats);
> >>>>>>>>>> +		stats->ipackets = pstats->rx_unicast + pstats->rx_multicast +
> >>>>>>>>>> +				  pstats->rx_broadcast - pstats->rx_discards;
> >>>>>>>>>> +		stats->opackets = pstats->tx_broadcast + pstats->tx_multicast +
> >>>>>>>>>> +				  pstats->tx_unicast;
> >>>>>>>>>> +		stats->imissed = pstats->rx_discards;
> >>>>>>>>>> +		stats->oerrors = pstats->tx_errors + pstats->tx_discards;
> >>>>>>>>>> +		stats->ibytes = pstats->rx_bytes;
> >>>>>>>>>> +		stats->ibytes -= stats->ipackets * crc_stats_len;
> >>>>>>>>>> +		stats->obytes = pstats->tx_bytes;
> >>>>>>>>>> +
> >>>>>>>>>> +		dev->data->rx_mbuf_alloc_failed = cpfl_get_mbuf_alloc_failed_stats(dev);
> >>>>>>>>>
> >>>>>>>>> 'dev->data->rx_mbuf_alloc_failed' is also used by telemetry,
> >>>>>>>>> updating here only in stats_get() will make it wrong for telemetry.
> >>>>>>>>>
> >>>>>>>>> Is it possible to update 'dev->data->rx_mbuf_alloc_failed'
> >>>>>>>>> whenever alloc failed? (alongside 'rxq->rx_stats.mbuf_alloc_failed').
> >>>>>>>> [Liu, Mingxia] As I know, rte_eth_dev_data is not a public
> >>>>>>>> structure provided
> >>>>>>> to user, user need to access through rte_ethdev APIs.
> >>>>>>>> Because we already put rx and tx burst func to common/idpf which
> >>>>>>>> has no
> >>>>>>> dependcy with ethdev lib. If I update
> >>>>>>> "dev->data->rx_mbuf_alloc_failed"
> >>>>>>>> when allocate mbuf fails, it will break the design of our
> >>>>>>>> common/idpf
> >>>>>>> interface to net/cpfl or net.idpf.
> >>>>>>>>
> >>>>>>>> And I didn't find any reference of 'dev->data->rx_mbuf_alloc_failed'
> >>>>>>>> in lib
> >>>>>>> code.
> >>>>>>>>
> >>>>>>>
> >>>>>>> Please check 'eth_dev_handle_port_info()' function.  As I said
> >>>>>>> this is used by telemetry, not directly exposed to the user.
> >>>>>>>
> >>>>>>> I got the design concern, perhaps you can put a brief limitation
> >>>>>>> to the driver documentation.
> >>>>>>>
> >>>>>> OK, got it.
> >>>>>>
> >>>>>> As our previous design did have flaws.  And if we don't want to
> >>>>>> affect correctness of telemetry, we have to redesign the idpf
> >>>>>> common module code, which means a lot of work to do, so can we
> >>>>>> lower the priority of this issue?
> >>>>>>
> >>>>> I don't believe this is urgent, can you but a one line limitation to
> >>>>> the documentation for now, and fix it later?
> >>>>>
> >>>>> And for the fix, updating 'dev->data->rx_mbuf_alloc_failed' where
> >>>>> ever 'rxq->rx_stats.mbuf_alloc_failed' updated is easy, although you
> >>>>> may need to store 'dev->data' in rxq struct for this.
> >>>>>
> >>>>> But, I think it is also fair to question the assumption telemetry
> >>>>> has that 'rx_mbuf_alloc_fail' is always available data, and consider
> >>>>> moving it to the 'eth_dev_handle_port_stats()' handler.  +Bruce for
> >> comment.
> >>>>>
> >>>>
> >>>> That's not really a telemetry assumption, it's one from the stats,
> >>>> structure. Telemetry just outputs the contents of data reported by
> >>>> ethdev stats, and rx_nombuf is just one of those fields.
> >>>>
> >>>
> >>> Not talking about 'rx_nombuf' in 'eth_dev_handle_port_stats()', but
> >>> talking about 'rx_mbuf_alloc_fail' in 'eth_dev_handle_port_info()',
> >>>
> >>> should telemetry return interim 'eth_dev->data->rx_mbuf_alloc_failed'
> >>> value, specially when 'rx_nombuf' is available?
> >>>
> >>> Because at least for this driver returned 'rx_mbuf_alloc_fail' value
> >>> will be wrong, I believe that is same for 'idpf' driver.
> >>>
> >>>

Thanks for the clarification, the question is clearer now. Having duplicate
info seems strange.

> >>
> >> Or, let me rephrase like this,
> >> 'eth_dev->data->rx_mbuf_alloc_failed' is not returned to user directly via
> >> ethdev APIs, but it is via telemetry.
> >>
> >> I think it is not guaranteed that this value will be correct at any given time as
> >> telemetry assumes, so should we remove it from telemetry?
> > 
> > May not be necessary, PMD should be able to give the right number, this is something we can fix in idpf and cpfl PMD, to align with other PMD.
> 
> Thanks Qi, Ok to have drivers aligned to common usage.
> 
> Still, for telemetry we can consider removing 'rx_mbuf_alloc_fail', user
> can get that information from 'rx_nombuf'.

I would agree with Ferruh. The information on nombufs should be available
just from the stats. It doesn't logically fit in the "info" category,
especially when it is in stats already.

/Bruce

^ permalink raw reply	[flat|nested] 263+ messages in thread

* Re: [PATCH v7 18/21] net/cpfl: add HW statistics
  2023-02-28 14:24                                 ` Bruce Richardson
@ 2023-02-28 16:14                                   ` Ferruh Yigit
  0 siblings, 0 replies; 263+ messages in thread
From: Ferruh Yigit @ 2023-02-28 16:14 UTC (permalink / raw)
  To: Bruce Richardson
  Cc: Zhang, Qi Z, Liu, Mingxia, dev, Xing, Beilei, Zhang, Yuying, Wu,
	Jingjing

On 2/28/2023 2:24 PM, Bruce Richardson wrote:
> On Tue, Feb 28, 2023 at 01:34:43PM +0000, Ferruh Yigit wrote:
>> On 2/28/2023 1:29 PM, Zhang, Qi Z wrote:
>>>
>>>
>>>> -----Original Message-----
>>>> From: Ferruh Yigit <ferruh.yigit@amd.com>
>>>> Sent: Tuesday, February 28, 2023 8:33 PM
>>>> To: Richardson, Bruce <bruce.richardson@intel.com>
>>>> Cc: Liu, Mingxia <mingxia.liu@intel.com>; dev@dpdk.org; Xing, Beilei
>>>> <beilei.xing@intel.com>; Zhang, Yuying <yuying.zhang@intel.com>; Wu,
>>>> Jingjing <jingjing.wu@intel.com>
>>>> Subject: Re: [PATCH v7 18/21] net/cpfl: add HW statistics
>>>>
>>>> On 2/28/2023 12:24 PM, Ferruh Yigit wrote:
>>>>> On 2/28/2023 12:12 PM, Bruce Richardson wrote:
>>>>>> On Tue, Feb 28, 2023 at 12:04:53PM +0000, Ferruh Yigit wrote:
>>>>>>> On 2/28/2023 11:47 AM, Liu, Mingxia wrote:
>>>>>>>
>>>>>>> Comment moved down, please don't top post, it makes very hard to
>>>>>>> follow discussion.
>>>>>>>
>>>>>>>>> -----Original Message----- From: Ferruh Yigit
>>>>>>>>> <ferruh.yigit@amd.com>
>>>>>>>>> Sent: Tuesday, February 28, 2023 6:02 PM To: Liu, Mingxia
>>>>>>>>> <mingxia.liu@intel.com>; dev@dpdk.org; Xing, Beilei
>>>>>>>>> <beilei.xing@intel.com>; Zhang, Yuying <yuying.zhang@intel.com>
>>>>>>>>> Subject: Re: [PATCH v7 18/21] net/cpfl: add HW statistics
>>>>>>>>>
>>>>>>>>> On 2/28/2023 6:46 AM, Liu, Mingxia wrote:
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>> -----Original Message----- From: Ferruh Yigit
>>>>>>>>>>> <ferruh.yigit@amd.com>
>>>>>>>>>>> Sent: Tuesday, February 28, 2023 5:52 AM To: Liu, Mingxia
>>>>>>>>>>> <mingxia.liu@intel.com>; dev@dpdk.org; Xing, Beilei
>>>>>>>>>>> <beilei.xing@intel.com>; Zhang, Yuying <yuying.zhang@intel.com>
>>>>>>>>>>> Subject: Re: [PATCH v7 18/21] net/cpfl: add HW statistics
>>>>>>>>>>>
>>>>>>>>>>> On 2/16/2023 12:30 AM, Mingxia Liu wrote:
>>>>>>>>>>>> This patch add hardware packets/bytes statistics.
>>>>>>>>>>>>
>>>>>>>>>>>> Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
>>>>>>>>>>>
>>>>>>>>>>> <...>
>>>>>>>>>>>
>>>>>>>>>>>> +static int
>>>>>>>>>>>> +cpfl_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
>>>>>>>>>>>> +{
>>>>>>>>>>>> +	struct idpf_vport *vport =
>>>>>>>>>>>> +		(struct idpf_vport *)dev->data->dev_private;
>>>>>>>>>>>> +	struct virtchnl2_vport_stats *pstats = NULL;
>>>>>>>>>>>> +	int ret;
>>>>>>>>>>>> +
>>>>>>>>>>>> +	ret = idpf_vc_stats_query(vport, &pstats);
>>>>>>>>>>>> +	if (ret == 0) {
>>>>>>>>>>>> +		uint8_t crc_stats_len = (dev->data->dev_conf.rxmode.offloads &
>>>>>>>>>>>> +					 RTE_ETH_RX_OFFLOAD_KEEP_CRC) ? 0 :
>>>>>>>>>>>> +					 RTE_ETHER_CRC_LEN;
>>>>>>>>>>>> +
>>>>>>>>>>>> +		idpf_vport_stats_update(&vport->eth_stats_offset, pstats);
>>>>>>>>>>>> +		stats->ipackets = pstats->rx_unicast + pstats->rx_multicast +
>>>>>>>>>>>> +				  pstats->rx_broadcast - pstats->rx_discards;
>>>>>>>>>>>> +		stats->opackets = pstats->tx_broadcast + pstats->tx_multicast +
>>>>>>>>>>>> +				  pstats->tx_unicast;
>>>>>>>>>>>> +		stats->imissed = pstats->rx_discards;
>>>>>>>>>>>> +		stats->oerrors = pstats->tx_errors + pstats->tx_discards;
>>>>>>>>>>>> +		stats->ibytes = pstats->rx_bytes;
>>>>>>>>>>>> +		stats->ibytes -= stats->ipackets * crc_stats_len;
>>>>>>>>>>>> +		stats->obytes = pstats->tx_bytes;
>>>>>>>>>>>> +
>>>>>>>>>>>> +		dev->data->rx_mbuf_alloc_failed = cpfl_get_mbuf_alloc_failed_stats(dev);
>>>>>>>>>>>
>>>>>>>>>>> 'dev->data->rx_mbuf_alloc_failed' is also used by telemetry,
>>>>>>>>>>> updating here only in stats_get() will make it wrong for telemetry.
>>>>>>>>>>>
>>>>>>>>>>> Is it possible to update 'dev->data->rx_mbuf_alloc_failed'
>>>>>>>>>>> whenever alloc failed? (alongside 'rxq->rx_stats.mbuf_alloc_failed').
>>>>>>>>>> [Liu, Mingxia] As I know, rte_eth_dev_data is not a public
>>>>>>>>>> structure provided
>>>>>>>>> to user, user need to access through rte_ethdev APIs.
>>>>>>>>>> Because we already put rx and tx burst func to common/idpf which
>>>>>>>>>> has no
>>>>>>>>> dependcy with ethdev lib. If I update
>>>>>>>>> "dev->data->rx_mbuf_alloc_failed"
>>>>>>>>>> when allocate mbuf fails, it will break the design of our
>>>>>>>>>> common/idpf
>>>>>>>>> interface to net/cpfl or net.idpf.
>>>>>>>>>>
>>>>>>>>>> And I didn't find any reference of 'dev->data->rx_mbuf_alloc_failed'
>>>>>>>>>> in lib
>>>>>>>>> code.
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>> Please check 'eth_dev_handle_port_info()' function.  As I said
>>>>>>>>> this is used by telemetry, not directly exposed to the user.
>>>>>>>>>
>>>>>>>>> I got the design concern, perhaps you can put a brief limitation
>>>>>>>>> to the driver documentation.
>>>>>>>>>
>>>>>>>> OK, got it.
>>>>>>>>
>>>>>>>> As our previous design did have flaws.  And if we don't want to
>>>>>>>> affect correctness of telemetry, we have to redesign the idpf
>>>>>>>> common module code, which means a lot of work to do, so can we
>>>>>>>> lower the priority of this issue?
>>>>>>>>
>>>>>>> I don't believe this is urgent, can you but a one line limitation to
>>>>>>> the documentation for now, and fix it later?
>>>>>>>
>>>>>>> And for the fix, updating 'dev->data->rx_mbuf_alloc_failed' where
>>>>>>> ever 'rxq->rx_stats.mbuf_alloc_failed' updated is easy, although you
>>>>>>> may need to store 'dev->data' in rxq struct for this.
>>>>>>>
>>>>>>> But, I think it is also fair to question the assumption telemetry
>>>>>>> has that 'rx_mbuf_alloc_fail' is always available data, and consider
>>>>>>> moving it to the 'eth_dev_handle_port_stats()' handler.  +Bruce for
>>>> comment.
>>>>>>>
>>>>>>
>>>>>> That's not really a telemetry assumption, it's one from the stats,
>>>>>> structure. Telemetry just outputs the contents of data reported by
>>>>>> ethdev stats, and rx_nombuf is just one of those fields.
>>>>>>
>>>>>
>>>>> Not talking about 'rx_nombuf' in 'eth_dev_handle_port_stats()', but
>>>>> talking about 'rx_mbuf_alloc_fail' in 'eth_dev_handle_port_info()',
>>>>>
>>>>> should telemetry return interim 'eth_dev->data->rx_mbuf_alloc_failed'
>>>>> value, specially when 'rx_nombuf' is available?
>>>>>
>>>>> Because at least for this driver returned 'rx_mbuf_alloc_fail' value
>>>>> will be wrong, I believe that is same for 'idpf' driver.
>>>>>
>>>>>
> 
> Thanks for the clarification, the question is clearer now. Having duplicate
> info seems strange.
> 
>>>>
>>>> Or, let me rephrase like this,
>>>> 'eth_dev->data->rx_mbuf_alloc_failed' is not returned to user directly via
>>>> ethdev APIs, but it is via telemetry.
>>>>
>>>> I think it is not guaranteed that this value will be correct at any given time as
>>>> telemetry assumes, so should we remove it from telemetry?
>>>
>>> May not be necessary, PMD should be able to give the right number, this is something we can fix in idpf and cpfl PMD, to align with other PMD.
>>
>> Thanks Qi, Ok to have drivers aligned to common usage.
>>
>> Still, for telemetry we can consider removing 'rx_mbuf_alloc_fail', user
>> can get that information from 'rx_nombuf'.
> 
> I would agree with Ferruh. The information on nombufs should be available
> just from the stats. It doesn't logically fit in the "info" category,
> especially when it is in stats already.
> 

Thanks Bruce.

So, no need to update the driver(s) with respect to 'rx_mbuf_alloc_fail';
the existing patch is good.

I will send the ethdev telemetry change later, it is a minor change.
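
To make the scope concrete, a sketch of the kind of minor change meant
here (illustrative only, not the submitted patch): dropping the duplicated
counter from the '/ethdev/info' handler, where it remains reachable as
'rx_nombuf' via '/ethdev/stats':

	/* in lib/ethdev/rte_ethdev.c, eth_dev_handle_port_info(): */
	-	rte_tel_data_add_dict_u64(d, "rx_mbuf_alloc_fail",
	-			eth_dev->data->rx_mbuf_alloc_failed);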


^ permalink raw reply	[flat|nested] 263+ messages in thread

* Re: [PATCH v8 21/21] net/cpfl: add xstats ops
  2023-03-02 10:35             ` [PATCH v8 21/21] net/cpfl: add xstats ops Mingxia Liu
@ 2023-03-02  9:30               ` Ferruh Yigit
  2023-03-02 11:19                 ` Liu, Mingxia
  0 siblings, 1 reply; 263+ messages in thread
From: Ferruh Yigit @ 2023-03-02  9:30 UTC (permalink / raw)
  To: Mingxia Liu, dev, beilei.xing, yuying.zhang

On 3/2/2023 10:35 AM, Mingxia Liu wrote:
> Add support for these device ops:
> - dev_xstats_get
> - dev_xstats_get_names
> - dev_xstats_reset
> 
> Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>

<...>

> +static int cpfl_dev_xstats_get_names(__rte_unused struct rte_eth_dev *dev,
> +				     struct rte_eth_xstat_name *xstats_names,
> +				     __rte_unused unsigned int limit)
> +{
> +	unsigned int i;
> +
> +	if (xstats_names)
> +		for (i = 0; i < CPFL_NB_XSTATS; i++) {
> +			snprintf(xstats_names[i].name,
> +				 sizeof(xstats_names[i].name),
> +				 "%s", rte_cpfl_stats_strings[i].name);
> +		}


Although the above is correct, can you please add {}? It is safer to do so
for multi-line blocks.
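
For reference, the quoted loop with the braces added would read as below
(a sketch of the requested change):

	if (xstats_names) {
		for (i = 0; i < CPFL_NB_XSTATS; i++) {
			snprintf(xstats_names[i].name,
				 sizeof(xstats_names[i].name),
				 "%s", rte_cpfl_stats_strings[i].name);
		}
	}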

^ permalink raw reply	[flat|nested] 263+ messages in thread

* Re: [PATCH v8 01/21] net/cpfl: support device initialization
  2023-03-02 10:35             ` [PATCH v8 01/21] net/cpfl: support device initialization Mingxia Liu
@ 2023-03-02  9:31               ` Ferruh Yigit
  2023-03-02 11:24                 ` Liu, Mingxia
  2023-03-02 12:08                 ` Xing, Beilei
  0 siblings, 2 replies; 263+ messages in thread
From: Ferruh Yigit @ 2023-03-02  9:31 UTC (permalink / raw)
  To: Mingxia Liu, dev, beilei.xing, yuying.zhang

On 3/2/2023 10:35 AM, Mingxia Liu wrote:
> Support device init and add the following dev ops:
>  - dev_configure
>  - dev_close
>  - dev_infos_get
>  - link_update
>  - dev_supported_ptypes_get
> 
> Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>

<...>

> --- /dev/null
> +++ b/doc/guides/nics/cpfl.rst
> @@ -0,0 +1,85 @@
> +.. SPDX-License-Identifier: BSD-3-Clause
> +   Copyright(c) 2022 Intel Corporation.
> +

s/2022/2023

<...>

> +static int
> +cpfl_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
> +	       struct rte_pci_device *pci_dev)
> +{
> +	struct cpfl_vport_param vport_param;
> +	struct cpfl_adapter_ext *adapter;
> +	struct cpfl_devargs devargs;
> +	char name[RTE_ETH_NAME_MAX_LEN];
> +	int i, retval;
> +	bool first_probe = false;
> +
> +	if (!cpfl_adapter_list_init) {
> +		rte_spinlock_init(&cpfl_adapter_lock);
> +		TAILQ_INIT(&cpfl_adapter_list);
> +		cpfl_adapter_list_init = true;
> +	}
> +
> +	adapter = cpfl_find_adapter_ext(pci_dev);
> +	if (adapter == NULL) {
> +		first_probe = true;
> +		adapter = rte_zmalloc("cpfl_adapter_ext",
> +				      sizeof(struct cpfl_adapter_ext), 0);
> +		if (adapter == NULL) {
> +			PMD_INIT_LOG(ERR, "Failed to allocate adapter.");
> +			return -ENOMEM;
> +		}
> +
> +		retval = cpfl_adapter_ext_init(pci_dev, adapter);
> +		if (retval != 0) {
> +			PMD_INIT_LOG(ERR, "Failed to init adapter.");
> +			return retval;
> +		}
> +
> +		rte_spinlock_lock(&cpfl_adapter_lock);
> +		TAILQ_INSERT_TAIL(&cpfl_adapter_list, adapter, next);
> +		rte_spinlock_unlock(&cpfl_adapter_lock);
> +	}
> +
> +	retval = cpfl_parse_devargs(pci_dev, adapter, &devargs);
> +	if (retval != 0) {
> +		PMD_INIT_LOG(ERR, "Failed to parse private devargs");
> +		goto err;
> +	}
> +
> +	if (devargs.req_vport_nb == 0) {
> +		/* If no vport devarg, create vport 0 by default. */
> +		vport_param.adapter = adapter;
> +		vport_param.devarg_id = 0;
> +		vport_param.idx = cpfl_vport_idx_alloc(adapter);
> +		if (vport_param.idx == CPFL_INVALID_VPORT_IDX) {
> +			PMD_INIT_LOG(ERR, "No space for vport %u", vport_param.devarg_id);
> +			return 0;
> +		}
> +		snprintf(name, sizeof(name), "cpfl_%s_vport_0",
> +			 pci_dev->device.name);
> +		retval = rte_eth_dev_create(&pci_dev->device, name,
> +					    sizeof(struct idpf_vport),
> +					    NULL, NULL, cpfl_dev_vport_init,
> +					    &vport_param);
> +		if (retval != 0)
> +			PMD_DRV_LOG(ERR, "Failed to create default vport 0");
> +	} else {
> +		for (i = 0; i < devargs.req_vport_nb; i++) {
> +			vport_param.adapter = adapter;
> +			vport_param.devarg_id = devargs.req_vports[i];
> +			vport_param.idx = cpfl_vport_idx_alloc(adapter);
> +			if (vport_param.idx == CPFL_INVALID_VPORT_IDX) {
> +				PMD_INIT_LOG(ERR, "No space for vport %u", vport_param.devarg_id);
> +				break;
> +			}
> +			snprintf(name, sizeof(name), "cpfl_%s_vport_%d",
> +				 pci_dev->device.name,
> +				 devargs.req_vports[i]);
> +			retval = rte_eth_dev_create(&pci_dev->device, name,
> +						    sizeof(struct idpf_vport),
> +						    NULL, NULL, cpfl_dev_vport_init,
> +						    &vport_param);
> +			if (retval != 0)
> +				PMD_DRV_LOG(ERR, "Failed to create vport %d",
> +					    vport_param.devarg_id);
> +		}
> +	}
> +
> +	return 0;
> +
> +err:
> +	if (first_probe) {
> +		rte_spinlock_lock(&cpfl_adapter_lock);
> +		TAILQ_REMOVE(&cpfl_adapter_list, adapter, next);
> +		rte_spinlock_unlock(&cpfl_adapter_lock);
> +		cpfl_adapter_ext_deinit(adapter);
> +		rte_free(adapter);
> +	}

Is 'first_probe' left intentionally? If so, what is the reason to have
this condition?



^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v8 00/21] add support for cpfl PMD in DPDK
  2023-02-16  0:29         ` [PATCH v7 " Mingxia Liu
                             ` (21 preceding siblings ...)
  2023-02-27 21:43           ` [PATCH v7 00/21] add support for cpfl PMD in DPDK Ferruh Yigit
@ 2023-03-02 10:35           ` Mingxia Liu
  2023-03-02 10:35             ` [PATCH v8 01/21] net/cpfl: support device initialization Mingxia Liu
                               ` (21 more replies)
  22 siblings, 22 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-03-02 10:35 UTC (permalink / raw)
  To: dev, beilei.xing, yuying.zhang; +Cc: Mingxia Liu

The patchset introduces the cpfl (Control Plane Function Library) PMD
for the Intel® IPU E2100’s Configure Physical Function (Device ID: 0x1453).

The cpfl PMD inherits all the features from the idpf PMD, which follows
the ongoing standard data plane function spec:
https://www.oasis-open.org/committees/tc_home.php?wg_abbrev=idpf
Besides, it will also support more device-specific hardware offloading
features from DPDK’s control path (e.g. hairpin, rte_flow …), which is
different from the idpf PMD, and that is why we need a new cpfl PMD.

This patchset mainly focuses on the idpf PMD’s equivalent features.
To avoid duplicated code, the patchset depends on the patchsets below,
which move the common part from net/idpf into common/idpf as a shared library.

v2 changes:
 - rebase to the new baseline.
 - Fix rss lut config issue.
v3 changes:
 - rebase to the new baseline.
v4 changes:
 - Resend v3. No code changed.
v5 changes:
 - rebase to the new baseline.
 - optimize some code
 - give a "not supported" hint when the user wants to configure the RSS hash type
 - if stats reset fails at initialization time, don't roll back, just
   print ERROR info
v6 changes:
 - for small fixed size structure, change rte_memcpy to memcpy()
 - fix compilation for AVX512DQ
 - update cpfl maintainers
v7 changes:
 - add dependency in cover-letter
v8 changes:
 - improve documentation and commit msg
 - optimize function cpfl_dev_link_update()
 - refine devargs check

This patchset is based on the idpf PMD code for refining Rx/Tx queue
model info:
http://patches.dpdk.org/project/dpdk/patch/20230301192659.601892-1-mingxia.liu@intel.com/

Mingxia Liu (21):
  net/cpfl: support device initialization
  net/cpfl: add Tx queue setup
  net/cpfl: add Rx queue setup
  net/cpfl: support device start and stop
  net/cpfl: support queue start
  net/cpfl: support queue stop
  net/cpfl: support queue release
  net/cpfl: support MTU configuration
  net/cpfl: support basic Rx data path
  net/cpfl: support basic Tx data path
  net/cpfl: support write back based on ITR expire
  net/cpfl: support RSS
  net/cpfl: support Rx offloading
  net/cpfl: support Tx offloading
  net/cpfl: add AVX512 data path for single queue model
  net/cpfl: support timestamp offload
  net/cpfl: add AVX512 data path for split queue model
  net/cpfl: add HW statistics
  net/cpfl: add RSS set/get ops
  net/cpfl: support scalar scatter Rx datapath for single queue model
  net/cpfl: add xstats ops

 MAINTAINERS                             |    8 +
 doc/guides/nics/cpfl.rst                |  107 ++
 doc/guides/nics/features/cpfl.ini       |   16 +
 doc/guides/nics/index.rst               |    1 +
 doc/guides/rel_notes/release_23_03.rst  |    6 +
 drivers/net/cpfl/cpfl_ethdev.c          | 1466 +++++++++++++++++++++++
 drivers/net/cpfl/cpfl_ethdev.h          |   94 ++
 drivers/net/cpfl/cpfl_logs.h            |   29 +
 drivers/net/cpfl/cpfl_rxtx.c            |  951 +++++++++++++++
 drivers/net/cpfl/cpfl_rxtx.h            |   44 +
 drivers/net/cpfl/cpfl_rxtx_vec_common.h |  116 ++
 drivers/net/cpfl/meson.build            |   40 +
 drivers/net/meson.build                 |    1 +
 13 files changed, 2879 insertions(+)
 create mode 100644 doc/guides/nics/cpfl.rst
 create mode 100644 doc/guides/nics/features/cpfl.ini
 create mode 100644 drivers/net/cpfl/cpfl_ethdev.c
 create mode 100644 drivers/net/cpfl/cpfl_ethdev.h
 create mode 100644 drivers/net/cpfl/cpfl_logs.h
 create mode 100644 drivers/net/cpfl/cpfl_rxtx.c
 create mode 100644 drivers/net/cpfl/cpfl_rxtx.h
 create mode 100644 drivers/net/cpfl/cpfl_rxtx_vec_common.h
 create mode 100644 drivers/net/cpfl/meson.build

-- 
2.34.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v8 01/21] net/cpfl: support device initialization
  2023-03-02 10:35           ` [PATCH v8 " Mingxia Liu
@ 2023-03-02 10:35             ` Mingxia Liu
  2023-03-02  9:31               ` Ferruh Yigit
  2023-03-02 10:35             ` [PATCH v8 02/21] net/cpfl: add Tx queue setup Mingxia Liu
                               ` (20 subsequent siblings)
  21 siblings, 1 reply; 263+ messages in thread
From: Mingxia Liu @ 2023-03-02 10:35 UTC (permalink / raw)
  To: dev, beilei.xing, yuying.zhang; +Cc: Mingxia Liu

Support device init and add the following dev ops:
 - dev_configure
 - dev_close
 - dev_infos_get
 - link_update
 - dev_supported_ptypes_get

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 MAINTAINERS                            |   8 +
 doc/guides/nics/cpfl.rst               |  85 +++
 doc/guides/nics/features/cpfl.ini      |  12 +
 doc/guides/nics/index.rst              |   1 +
 doc/guides/rel_notes/release_23_03.rst |   6 +
 drivers/net/cpfl/cpfl_ethdev.c         | 772 +++++++++++++++++++++++++
 drivers/net/cpfl/cpfl_ethdev.h         |  77 +++
 drivers/net/cpfl/cpfl_logs.h           |  29 +
 drivers/net/cpfl/meson.build           |  14 +
 drivers/net/meson.build                |   1 +
 10 files changed, 1005 insertions(+)
 create mode 100644 doc/guides/nics/cpfl.rst
 create mode 100644 doc/guides/nics/features/cpfl.ini
 create mode 100644 drivers/net/cpfl/cpfl_ethdev.c
 create mode 100644 drivers/net/cpfl/cpfl_ethdev.h
 create mode 100644 drivers/net/cpfl/cpfl_logs.h
 create mode 100644 drivers/net/cpfl/meson.build

diff --git a/MAINTAINERS b/MAINTAINERS
index ffbf91296e..878204c93b 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -783,6 +783,14 @@ F: drivers/common/idpf/
 F: doc/guides/nics/idpf.rst
 F: doc/guides/nics/features/idpf.ini
 
+Intel cpfl - EXPERIMENTAL
+M: Yuying Zhang <yuying.zhang@intel.com>
+M: Beilei Xing <beilei.xing@intel.com>
+T: git://dpdk.org/next/dpdk-next-net-intel
+F: drivers/net/cpfl/
+F: doc/guides/nics/cpfl.rst
+F: doc/guides/nics/features/cpfl.ini
+
 Intel igc
 M: Junfeng Guo <junfeng.guo@intel.com>
 M: Simei Su <simei.su@intel.com>
diff --git a/doc/guides/nics/cpfl.rst b/doc/guides/nics/cpfl.rst
new file mode 100644
index 0000000000..253fa3afae
--- /dev/null
+++ b/doc/guides/nics/cpfl.rst
@@ -0,0 +1,85 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+   Copyright(c) 2022 Intel Corporation.
+
+.. include:: <isonum.txt>
+
+CPFL Poll Mode Driver
+=====================
+
+The [*EXPERIMENTAL*] cpfl PMD (**librte_net_cpfl**) provides poll mode driver support
+for Intel\ |reg| Infrastructure Processing Unit (Intel\ |reg| IPU) E2100.
+Please refer to
+https://www.intel.com/content/www/us/en/products/network-io/infrastructure-processing-units/asic/e2000-asic.html
+for more information.
+
+Linux Prerequisites
+-------------------
+
+Follow the DPDK :doc:`../linux_gsg/index` to setup the basic DPDK environment.
+
+To get better performance on Intel platforms,
+please follow the :doc:`../linux_gsg/nic_perf_intel_platform`.
+
+
+Pre-Installation Configuration
+------------------------------
+
+Runtime Config Options
+~~~~~~~~~~~~~~~~~~~~~~
+
+- ``vport`` (default ``0``)
+
+  The PMD supports creation of multiple vports for one PCI device;
+  each vport corresponds to a single ethdev.
+  The user can specify the vports to be created with specific IDs,
+  and currently the ID should be 0 ~ 7, for example::
+
+    -a ca:00.0,vport=[0,2,3]
+
+  Then the PMD will create 3 vports (ethdevs) for device ``ca:00.0``.
+
+  If the parameter is not provided, the vport 0 will be created by default.
+
+- ``rx_single`` (default ``0``)
+
+  There are two queue modes supported by Intel\ |reg| IPU Ethernet E2100 Series,
+  single queue mode and split queue mode for Rx queue.
+
+  For the single queue model, the descriptor queue is used by SW to post buffer
+  descriptors to HW, and it's also used by HW to post completed descriptors to SW.
+
+  For the split queue model, "RX buffer queues" are used to pass descriptor buffers
+  from SW to HW, while RX queues are used only to pass the descriptor completions
+  from HW to SW.
+
+  User can choose Rx queue mode, example::
+
+    -a ca:00.0,rx_single=1
+
+  Then the PMD will configure Rx queue with single queue mode.
+  Otherwise, split queue mode is chosen by default.
+
+- ``tx_single`` (default ``0``)
+
+  There are two Tx queue modes supported by the Intel\ |reg| IPU Ethernet
+  E2100 Series: single queue mode and split queue mode.
+
+  For the single queue model, the descriptor queue is used by SW to post buffer
+  descriptors to HW, and it's also used by HW to post completed descriptors to SW.
+
+  For the split queue model, TX queues are used to pass descriptor buffers
+  from SW to HW, while "TX completion queues" are used only to pass the
+  descriptor completions from HW to SW.
+
+  The user can choose the Tx queue mode, for example::
+
+    -a ca:00.0,tx_single=1
+
+  Then the PMD will configure the Tx queue in single queue mode.
+  Otherwise, split queue mode is chosen by default.
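+
+  These options can be combined in a single device allow-list entry,
+  for example::
+
+    -a ca:00.0,vport=[0-3],rx_single=1,tx_single=1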
+
+
+Driver compilation and testing
+------------------------------
+
+Refer to the document :doc:`build_and_test` for details.
\ No newline at end of file
diff --git a/doc/guides/nics/features/cpfl.ini b/doc/guides/nics/features/cpfl.ini
new file mode 100644
index 0000000000..a2d1ca9e15
--- /dev/null
+++ b/doc/guides/nics/features/cpfl.ini
@@ -0,0 +1,12 @@
+;
+; Supported features of the 'cpfl' network poll mode driver.
+;
+; Refer to default.ini for the full list of available PMD features.
+;
+; A feature with "P" indicates that it is only supported
+; when the non-vector path is selected.
+;
+[Features]
+Linux                = Y
+x86-32               = Y
+x86-64               = Y
diff --git a/doc/guides/nics/index.rst b/doc/guides/nics/index.rst
index df58a237ca..5c9d1edf5e 100644
--- a/doc/guides/nics/index.rst
+++ b/doc/guides/nics/index.rst
@@ -20,6 +20,7 @@ Network Interface Controller Drivers
     bnx2x
     bnxt
     cnxk
+    cpfl
     cxgbe
     dpaa
     dpaa2
diff --git a/doc/guides/rel_notes/release_23_03.rst b/doc/guides/rel_notes/release_23_03.rst
index 49c18617a5..29690d8813 100644
--- a/doc/guides/rel_notes/release_23_03.rst
+++ b/doc/guides/rel_notes/release_23_03.rst
@@ -148,6 +148,12 @@ New Features
   * Added support for timesync API.
   * Added support for packet pacing (launch time offloading).
 
+* **Added Intel cpfl driver.**
+
+  * Added the new ``cpfl`` net driver
+    for Intel\ |reg| Infrastructure Processing Unit (Intel\ |reg| IPU) E2100.
+    See the :doc:`../nics/cpfl` NIC guide for more details on this new driver.
+
 * **Updated Marvell cnxk ethdev driver.**
 
   * Added support to skip RED using ``RTE_FLOW_ACTION_TYPE_SKIP_CMAN``.
diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
new file mode 100644
index 0000000000..21c505fda3
--- /dev/null
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -0,0 +1,772 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Intel Corporation
+ */
+
+#include <rte_atomic.h>
+#include <rte_eal.h>
+#include <rte_ether.h>
+#include <rte_malloc.h>
+#include <rte_memzone.h>
+#include <rte_dev.h>
+#include <errno.h>
+#include <rte_alarm.h>
+
+#include "cpfl_ethdev.h"
+
+#define CPFL_TX_SINGLE_Q	"tx_single"
+#define CPFL_RX_SINGLE_Q	"rx_single"
+#define CPFL_VPORT		"vport"
+
+rte_spinlock_t cpfl_adapter_lock;
+/* A list for all adapters, one adapter matches one PCI device */
+struct cpfl_adapter_list cpfl_adapter_list;
+bool cpfl_adapter_list_init;
+
+static const char * const cpfl_valid_args[] = {
+	CPFL_TX_SINGLE_Q,
+	CPFL_RX_SINGLE_Q,
+	CPFL_VPORT,
+	NULL
+};
+
+uint32_t cpfl_supported_speeds[] = {
+	RTE_ETH_SPEED_NUM_NONE,
+	RTE_ETH_SPEED_NUM_10M,
+	RTE_ETH_SPEED_NUM_100M,
+	RTE_ETH_SPEED_NUM_1G,
+	RTE_ETH_SPEED_NUM_2_5G,
+	RTE_ETH_SPEED_NUM_5G,
+	RTE_ETH_SPEED_NUM_10G,
+	RTE_ETH_SPEED_NUM_20G,
+	RTE_ETH_SPEED_NUM_25G,
+	RTE_ETH_SPEED_NUM_40G,
+	RTE_ETH_SPEED_NUM_50G,
+	RTE_ETH_SPEED_NUM_56G,
+	RTE_ETH_SPEED_NUM_100G,
+	RTE_ETH_SPEED_NUM_200G
+};
+
+static int
+cpfl_dev_link_update(struct rte_eth_dev *dev,
+		     __rte_unused int wait_to_complete)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct rte_eth_link new_link;
+	unsigned int i;
+
+	memset(&new_link, 0, sizeof(new_link));
+
+	for (i = 0; i < RTE_DIM(cpfl_supported_speeds); i++) {
+		if (vport->link_speed == cpfl_supported_speeds[i]) {
+			new_link.link_speed = vport->link_speed;
+			break;
+		}
+	}
+
+	if (i == RTE_DIM(cpfl_supported_speeds)) {
+		if (vport->link_up)
+			new_link.link_speed = RTE_ETH_SPEED_NUM_UNKNOWN;
+		else
+			new_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+	}
+
+	new_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+	new_link.link_status = vport->link_up ? RTE_ETH_LINK_UP :
+		RTE_ETH_LINK_DOWN;
+	new_link.link_autoneg = (dev->data->dev_conf.link_speeds & RTE_ETH_LINK_SPEED_FIXED) ?
+				 RTE_ETH_LINK_FIXED : RTE_ETH_LINK_AUTONEG;
+
+	return rte_eth_linkstatus_set(dev, &new_link);
+}
+
+static int
+cpfl_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_adapter *base = vport->adapter;
+
+	dev_info->max_rx_queues = base->caps.max_rx_q;
+	dev_info->max_tx_queues = base->caps.max_tx_q;
+	dev_info->min_rx_bufsize = CPFL_MIN_BUF_SIZE;
+	dev_info->max_rx_pktlen = vport->max_mtu + CPFL_ETH_OVERHEAD;
+
+	dev_info->max_mtu = vport->max_mtu;
+	dev_info->min_mtu = RTE_ETHER_MIN_MTU;
+
+	return 0;
+}
+
+static const uint32_t *
+cpfl_dev_supported_ptypes_get(struct rte_eth_dev *dev __rte_unused)
+{
+	static const uint32_t ptypes[] = {
+		RTE_PTYPE_L2_ETHER,
+		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN,
+		RTE_PTYPE_L3_IPV6_EXT_UNKNOWN,
+		RTE_PTYPE_L4_FRAG,
+		RTE_PTYPE_L4_UDP,
+		RTE_PTYPE_L4_TCP,
+		RTE_PTYPE_L4_SCTP,
+		RTE_PTYPE_L4_ICMP,
+		RTE_PTYPE_UNKNOWN
+	};
+
+	return ptypes;
+}
+
+static int
+cpfl_dev_configure(struct rte_eth_dev *dev)
+{
+	struct rte_eth_conf *conf = &dev->data->dev_conf;
+
+	if (conf->link_speeds & RTE_ETH_LINK_SPEED_FIXED) {
+		PMD_INIT_LOG(ERR, "Setting link speed is not supported");
+		return -ENOTSUP;
+	}
+
+	if (conf->txmode.mq_mode != RTE_ETH_MQ_TX_NONE) {
+		PMD_INIT_LOG(ERR, "Multi-queue TX mode %d is not supported",
+			     conf->txmode.mq_mode);
+		return -ENOTSUP;
+	}
+
+	if (conf->lpbk_mode != 0) {
+		PMD_INIT_LOG(ERR, "Loopback operation mode %d is not supported",
+			     conf->lpbk_mode);
+		return -ENOTSUP;
+	}
+
+	if (conf->dcb_capability_en != 0) {
+		PMD_INIT_LOG(ERR, "Priority Flow Control (PFC) is not supported");
+		return -ENOTSUP;
+	}
+
+	if (conf->intr_conf.lsc != 0) {
+		PMD_INIT_LOG(ERR, "LSC interrupt is not supported");
+		return -ENOTSUP;
+	}
+
+	if (conf->intr_conf.rxq != 0) {
+		PMD_INIT_LOG(ERR, "RXQ interrupt is not supported");
+		return -ENOTSUP;
+	}
+
+	if (conf->intr_conf.rmv != 0) {
+		PMD_INIT_LOG(ERR, "RMV interrupt is not supported");
+		return -ENOTSUP;
+	}
+
+	return 0;
+}
+
+static int
+cpfl_dev_close(struct rte_eth_dev *dev)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct cpfl_adapter_ext *adapter = CPFL_ADAPTER_TO_EXT(vport->adapter);
+
+	idpf_vport_deinit(vport);
+
+	adapter->cur_vports &= ~RTE_BIT32(vport->devarg_id);
+	adapter->cur_vport_nb--;
+	dev->data->dev_private = NULL;
+	adapter->vports[vport->sw_idx] = NULL;
+	rte_free(vport);
+
+	return 0;
+}
+
+static const struct eth_dev_ops cpfl_eth_dev_ops = {
+	.dev_configure			= cpfl_dev_configure,
+	.dev_close			= cpfl_dev_close,
+	.dev_infos_get			= cpfl_dev_info_get,
+	.link_update			= cpfl_dev_link_update,
+	.dev_supported_ptypes_get	= cpfl_dev_supported_ptypes_get,
+};
+
+static int
+insert_value(struct cpfl_devargs *devargs, uint16_t id)
+{
+	uint16_t i;
+
+	/* ignore duplicate */
+	for (i = 0; i < devargs->req_vport_nb; i++) {
+		if (devargs->req_vports[i] == id)
+			return 0;
+	}
+
+	if (devargs->req_vport_nb >= RTE_DIM(devargs->req_vports)) {
+		PMD_INIT_LOG(ERR, "Too many vport devargs");
+		return -EINVAL;
+	}
+
+	devargs->req_vports[devargs->req_vport_nb] = id;
+	devargs->req_vport_nb++;
+
+	return 0;
+}
+
+static const char *
+parse_range(const char *value, struct cpfl_devargs *devargs)
+{
+	uint16_t lo, hi, i;
+	int n = 0;
+	int result;
+	const char *pos = value;
+
+	result = sscanf(value, "%hu%n-%hu%n", &lo, &n, &hi, &n);
+	if (result == 1) {
+		if (insert_value(devargs, lo) != 0)
+			return NULL;
+	} else if (result == 2) {
+		if (lo > hi)
+			return NULL;
+		for (i = lo; i <= hi; i++) {
+			if (insert_value(devargs, i) != 0)
+				return NULL;
+		}
+	} else {
+		return NULL;
+	}
+
+	return pos + n;
+}
+
+static int
+parse_vport(const char *key, const char *value, void *args)
+{
+	struct cpfl_devargs *devargs = args;
+	const char *pos = value;
+
+	devargs->req_vport_nb = 0;
+
+	if (*pos == '[')
+		pos++;
+
+	while (1) {
+		pos = parse_range(pos, devargs);
+		if (pos == NULL) {
+			PMD_INIT_LOG(ERR, "invalid value:\"%s\" for key:\"%s\", ",
+				     value, key);
+			return -EINVAL;
+		}
+		if (*pos != ',')
+			break;
+		pos++;
+	}
+
+	if (*value == '[' && *pos != ']') {
+		PMD_INIT_LOG(ERR, "invalid value:\"%s\" for key:\"%s\", ",
+			     value, key);
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static int
+parse_bool(const char *key, const char *value, void *args)
+{
+	int *i = args;
+	char *end;
+	int num;
+
+	errno = 0;
+
+	num = strtoul(value, &end, 10);
+
+	if (errno == ERANGE || (num != 0 && num != 1)) {
+		PMD_INIT_LOG(ERR, "invalid value:\"%s\" for key:\"%s\", value must be 0 or 1",
+			value, key);
+		return -EINVAL;
+	}
+
+	*i = num;
+	return 0;
+}
+
+static int
+cpfl_parse_devargs(struct rte_pci_device *pci_dev, struct cpfl_adapter_ext *adapter,
+		   struct cpfl_devargs *cpfl_args)
+{
+	struct rte_devargs *devargs = pci_dev->device.devargs;
+	struct rte_kvargs *kvlist;
+	int i, ret;
+
+	cpfl_args->req_vport_nb = 0;
+
+	if (devargs == NULL)
+		return 0;
+
+	kvlist = rte_kvargs_parse(devargs->args, cpfl_valid_args);
+	if (kvlist == NULL) {
+		PMD_INIT_LOG(ERR, "invalid kvargs key");
+		return -EINVAL;
+	}
+
+	if (rte_kvargs_count(kvlist, CPFL_VPORT) > 1) {
+		PMD_INIT_LOG(ERR, "devarg vport is duplicated.");
+		ret = -EINVAL;
+		goto fail;
+	}
+
+	ret = rte_kvargs_process(kvlist, CPFL_VPORT, &parse_vport,
+				 cpfl_args);
+	if (ret != 0)
+		goto fail;
+
+	ret = rte_kvargs_process(kvlist, CPFL_TX_SINGLE_Q, &parse_bool,
+				 &adapter->base.is_tx_singleq);
+	if (ret != 0)
+		goto fail;
+
+	ret = rte_kvargs_process(kvlist, CPFL_RX_SINGLE_Q, &parse_bool,
+				 &adapter->base.is_rx_singleq);
+	if (ret != 0)
+		goto fail;
+
+	/* check parsed devargs */
+	if (adapter->cur_vport_nb + cpfl_args->req_vport_nb >
+	    adapter->max_vport_nb) {
+		PMD_INIT_LOG(ERR, "Total vport number can't be > %d",
+			     adapter->max_vport_nb);
+		ret = -EINVAL;
+		goto fail;
+	}
+
+	for (i = 0; i < cpfl_args->req_vport_nb; i++) {
+		if (cpfl_args->req_vports[i] > adapter->max_vport_nb - 1) {
+			PMD_INIT_LOG(ERR, "Invalid vport id %d, it should be 0 ~ %d",
+				     cpfl_args->req_vports[i], adapter->max_vport_nb - 1);
+			ret = -EINVAL;
+			goto fail;
+		}
+
+		if (adapter->cur_vports & RTE_BIT32(cpfl_args->req_vports[i])) {
+			PMD_INIT_LOG(ERR, "Vport %d has been requested",
+				     cpfl_args->req_vports[i]);
+			ret = -EINVAL;
+			goto fail;
+		}
+	}
+
+fail:
+	rte_kvargs_free(kvlist);
+	return ret;
+}
+
+static struct idpf_vport *
+cpfl_find_vport(struct cpfl_adapter_ext *adapter, uint32_t vport_id)
+{
+	struct idpf_vport *vport = NULL;
+	int i;
+
+	for (i = 0; i < adapter->cur_vport_nb; i++) {
+		vport = adapter->vports[i];
+		if (vport->vport_id == vport_id)
+			return vport;
+	}
+
+	return NULL;
+}
+
+static void
+cpfl_handle_event_msg(struct idpf_vport *vport, uint8_t *msg, uint16_t msglen)
+{
+	struct virtchnl2_event *vc_event = (struct virtchnl2_event *)msg;
+	struct rte_eth_dev *dev = (struct rte_eth_dev *)vport->dev;
+
+	if (msglen < sizeof(struct virtchnl2_event)) {
+		PMD_DRV_LOG(ERR, "Error event");
+		return;
+	}
+
+	switch (vc_event->event) {
+	case VIRTCHNL2_EVENT_LINK_CHANGE:
+		PMD_DRV_LOG(DEBUG, "VIRTCHNL2_EVENT_LINK_CHANGE");
+		vport->link_up = !!(vc_event->link_status);
+		vport->link_speed = vc_event->link_speed;
+		cpfl_dev_link_update(dev, 0);
+		break;
+	default:
+		PMD_DRV_LOG(ERR, "Unknown event %u received", vc_event->event);
+		break;
+	}
+}
+
+static void
+cpfl_handle_virtchnl_msg(struct cpfl_adapter_ext *adapter)
+{
+	struct idpf_adapter *base = &adapter->base;
+	struct idpf_dma_mem *dma_mem = NULL;
+	struct idpf_hw *hw = &base->hw;
+	struct virtchnl2_event *vc_event;
+	struct idpf_ctlq_msg ctlq_msg;
+	enum idpf_mbx_opc mbx_op;
+	struct idpf_vport *vport;
+	enum virtchnl_ops vc_op;
+	uint16_t pending = 1;
+	int ret;
+
+	while (pending) {
+		ret = idpf_vc_ctlq_recv(hw->arq, &pending, &ctlq_msg);
+		if (ret) {
+			PMD_DRV_LOG(INFO, "Failed to read msg from virtual channel, ret: %d", ret);
+			return;
+		}
+
+		memcpy(base->mbx_resp, ctlq_msg.ctx.indirect.payload->va,
+			   IDPF_DFLT_MBX_BUF_SIZE);
+
+		mbx_op = rte_le_to_cpu_16(ctlq_msg.opcode);
+		vc_op = rte_le_to_cpu_32(ctlq_msg.cookie.mbx.chnl_opcode);
+		base->cmd_retval = rte_le_to_cpu_32(ctlq_msg.cookie.mbx.chnl_retval);
+
+		switch (mbx_op) {
+		case idpf_mbq_opc_send_msg_to_peer_pf:
+			if (vc_op == VIRTCHNL2_OP_EVENT) {
+				if (ctlq_msg.data_len < sizeof(struct virtchnl2_event)) {
+					PMD_DRV_LOG(ERR, "Error event");
+					return;
+				}
+				vc_event = (struct virtchnl2_event *)base->mbx_resp;
+				vport = cpfl_find_vport(adapter, vc_event->vport_id);
+				if (!vport) {
+					PMD_DRV_LOG(ERR, "Can't find vport.");
+					return;
+				}
+				cpfl_handle_event_msg(vport, base->mbx_resp,
+						      ctlq_msg.data_len);
+			} else {
+				if (vc_op == base->pend_cmd)
+					notify_cmd(base, base->cmd_retval);
+				else
+					PMD_DRV_LOG(ERR, "command mismatch, expect %u, get %u",
+						    base->pend_cmd, vc_op);
+
+				PMD_DRV_LOG(DEBUG, "Virtual channel response is received, "
+					    "opcode = %d", vc_op);
+			}
+			goto post_buf;
+		default:
+			PMD_DRV_LOG(DEBUG, "Request %u is not supported yet", mbx_op);
+		}
+	}
+
+post_buf:
+	if (ctlq_msg.data_len)
+		dma_mem = ctlq_msg.ctx.indirect.payload;
+	else
+		pending = 0;
+
+	ret = idpf_vc_ctlq_post_rx_buffs(hw, hw->arq, &pending, &dma_mem);
+	if (ret && dma_mem)
+		idpf_free_dma_mem(hw, dma_mem);
+}
+
+static void
+cpfl_dev_alarm_handler(void *param)
+{
+	struct cpfl_adapter_ext *adapter = param;
+
+	cpfl_handle_virtchnl_msg(adapter);
+
+	rte_eal_alarm_set(CPFL_ALARM_INTERVAL, cpfl_dev_alarm_handler, adapter);
+}
+
+static int
+cpfl_adapter_ext_init(struct rte_pci_device *pci_dev, struct cpfl_adapter_ext *adapter)
+{
+	struct idpf_adapter *base = &adapter->base;
+	struct idpf_hw *hw = &base->hw;
+	int ret = 0;
+
+	hw->hw_addr = (void *)pci_dev->mem_resource[0].addr;
+	hw->hw_addr_len = pci_dev->mem_resource[0].len;
+	hw->back = base;
+	hw->vendor_id = pci_dev->id.vendor_id;
+	hw->device_id = pci_dev->id.device_id;
+	hw->subsystem_vendor_id = pci_dev->id.subsystem_vendor_id;
+
+	strncpy(adapter->name, pci_dev->device.name, PCI_PRI_STR_SIZE);
+
+	ret = idpf_adapter_init(base);
+	if (ret != 0) {
+		PMD_INIT_LOG(ERR, "Failed to init adapter");
+		goto err_adapter_init;
+	}
+
+	rte_eal_alarm_set(CPFL_ALARM_INTERVAL, cpfl_dev_alarm_handler, adapter);
+
+	adapter->max_vport_nb = adapter->base.caps.max_vports > CPFL_MAX_VPORT_NUM ?
+				CPFL_MAX_VPORT_NUM : adapter->base.caps.max_vports;
+
+	adapter->vports = rte_zmalloc("vports",
+				      adapter->max_vport_nb *
+				      sizeof(*adapter->vports),
+				      0);
+	if (adapter->vports == NULL) {
+		PMD_INIT_LOG(ERR, "Failed to allocate vports memory");
+		ret = -ENOMEM;
+		goto err_get_ptype;
+	}
+
+	adapter->cur_vports = 0;
+	adapter->cur_vport_nb = 0;
+
+	adapter->used_vecs_num = 0;
+
+	return ret;
+
+err_get_ptype:
+	idpf_adapter_deinit(base);
+err_adapter_init:
+	return ret;
+}
+
+static uint16_t
+cpfl_vport_idx_alloc(struct cpfl_adapter_ext *adapter)
+{
+	uint16_t vport_idx;
+	uint16_t i;
+
+	for (i = 0; i < adapter->max_vport_nb; i++) {
+		if (adapter->vports[i] == NULL)
+			break;
+	}
+
+	if (i == adapter->max_vport_nb)
+		vport_idx = CPFL_INVALID_VPORT_IDX;
+	else
+		vport_idx = i;
+
+	return vport_idx;
+}
+
+static int
+cpfl_dev_vport_init(struct rte_eth_dev *dev, void *init_params)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct cpfl_vport_param *param = init_params;
+	struct cpfl_adapter_ext *adapter = param->adapter;
+	/* for sending create vport virtchnl msg prepare */
+	struct virtchnl2_create_vport create_vport_info;
+	int ret = 0;
+
+	dev->dev_ops = &cpfl_eth_dev_ops;
+	vport->adapter = &adapter->base;
+	vport->sw_idx = param->idx;
+	vport->devarg_id = param->devarg_id;
+	vport->dev = dev;
+
+	memset(&create_vport_info, 0, sizeof(create_vport_info));
+	ret = idpf_vport_info_init(vport, &create_vport_info);
+	if (ret != 0) {
+		PMD_INIT_LOG(ERR, "Failed to init vport req_info.");
+		goto err;
+	}
+
+	ret = idpf_vport_init(vport, &create_vport_info, dev->data);
+	if (ret != 0) {
+		PMD_INIT_LOG(ERR, "Failed to init vports.");
+		goto err;
+	}
+
+	adapter->vports[param->idx] = vport;
+	adapter->cur_vports |= RTE_BIT32(param->devarg_id);
+	adapter->cur_vport_nb++;
+
+	dev->data->mac_addrs = rte_zmalloc(NULL, RTE_ETHER_ADDR_LEN, 0);
+	if (dev->data->mac_addrs == NULL) {
+		PMD_INIT_LOG(ERR, "Cannot allocate mac_addr memory.");
+		ret = -ENOMEM;
+		goto err_mac_addrs;
+	}
+
+	rte_ether_addr_copy((struct rte_ether_addr *)vport->default_mac_addr,
+			    &dev->data->mac_addrs[0]);
+
+	return 0;
+
+err_mac_addrs:
+	adapter->vports[param->idx] = NULL;  /* reset */
+	idpf_vport_deinit(vport);
+	adapter->cur_vports &= ~RTE_BIT32(param->devarg_id);
+	adapter->cur_vport_nb--;
+err:
+	return ret;
+}
+
+static const struct rte_pci_id pci_id_cpfl_map[] = {
+	{ RTE_PCI_DEVICE(IDPF_INTEL_VENDOR_ID, IDPF_DEV_ID_CPF) },
+	{ .vendor_id = 0, /* sentinel */ },
+};
+
+static struct cpfl_adapter_ext *
+cpfl_find_adapter_ext(struct rte_pci_device *pci_dev)
+{
+	struct cpfl_adapter_ext *adapter;
+	int found = 0;
+
+	if (pci_dev == NULL)
+		return NULL;
+
+	rte_spinlock_lock(&cpfl_adapter_lock);
+	TAILQ_FOREACH(adapter, &cpfl_adapter_list, next) {
+		if (strncmp(adapter->name, pci_dev->device.name, PCI_PRI_STR_SIZE) == 0) {
+			found = 1;
+			break;
+		}
+	}
+	rte_spinlock_unlock(&cpfl_adapter_lock);
+
+	if (found == 0)
+		return NULL;
+
+	return adapter;
+}
+
+static void
+cpfl_adapter_ext_deinit(struct cpfl_adapter_ext *adapter)
+{
+	rte_eal_alarm_cancel(cpfl_dev_alarm_handler, adapter);
+	idpf_adapter_deinit(&adapter->base);
+
+	rte_free(adapter->vports);
+	adapter->vports = NULL;
+}
+
+static int
+cpfl_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
+	       struct rte_pci_device *pci_dev)
+{
+	struct cpfl_vport_param vport_param;
+	struct cpfl_adapter_ext *adapter;
+	struct cpfl_devargs devargs;
+	char name[RTE_ETH_NAME_MAX_LEN];
+	int i, retval;
+	bool first_probe = false;
+
+	if (!cpfl_adapter_list_init) {
+		rte_spinlock_init(&cpfl_adapter_lock);
+		TAILQ_INIT(&cpfl_adapter_list);
+		cpfl_adapter_list_init = true;
+	}
+
+	adapter = cpfl_find_adapter_ext(pci_dev);
+	if (adapter == NULL) {
+		first_probe = true;
+		adapter = rte_zmalloc("cpfl_adapter_ext",
+				      sizeof(struct cpfl_adapter_ext), 0);
+		if (adapter == NULL) {
+			PMD_INIT_LOG(ERR, "Failed to allocate adapter.");
+			return -ENOMEM;
+		}
+
+		retval = cpfl_adapter_ext_init(pci_dev, adapter);
+		if (retval != 0) {
+			PMD_INIT_LOG(ERR, "Failed to init adapter.");
+			rte_free(adapter);
+			return retval;
+		}
+
+		rte_spinlock_lock(&cpfl_adapter_lock);
+		TAILQ_INSERT_TAIL(&cpfl_adapter_list, adapter, next);
+		rte_spinlock_unlock(&cpfl_adapter_lock);
+	}
+
+	retval = cpfl_parse_devargs(pci_dev, adapter, &devargs);
+	if (retval != 0) {
+		PMD_INIT_LOG(ERR, "Failed to parse private devargs");
+		goto err;
+	}
+
+	if (devargs.req_vport_nb == 0) {
+		/* If no vport devarg, create vport 0 by default. */
+		vport_param.adapter = adapter;
+		vport_param.devarg_id = 0;
+		vport_param.idx = cpfl_vport_idx_alloc(adapter);
+		if (vport_param.idx == CPFL_INVALID_VPORT_IDX) {
+			PMD_INIT_LOG(ERR, "No space for vport %u", vport_param.devarg_id);
+			return 0;
+		}
+		snprintf(name, sizeof(name), "cpfl_%s_vport_0",
+			 pci_dev->device.name);
+		retval = rte_eth_dev_create(&pci_dev->device, name,
+					    sizeof(struct idpf_vport),
+					    NULL, NULL, cpfl_dev_vport_init,
+					    &vport_param);
+		if (retval != 0)
+			PMD_DRV_LOG(ERR, "Failed to create default vport 0");
+	} else {
+		for (i = 0; i < devargs.req_vport_nb; i++) {
+			vport_param.adapter = adapter;
+			vport_param.devarg_id = devargs.req_vports[i];
+			vport_param.idx = cpfl_vport_idx_alloc(adapter);
+			if (vport_param.idx == CPFL_INVALID_VPORT_IDX) {
+				PMD_INIT_LOG(ERR, "No space for vport %u", vport_param.devarg_id);
+				break;
+			}
+			snprintf(name, sizeof(name), "cpfl_%s_vport_%d",
+				 pci_dev->device.name,
+				 devargs.req_vports[i]);
+			retval = rte_eth_dev_create(&pci_dev->device, name,
+						    sizeof(struct idpf_vport),
+						    NULL, NULL, cpfl_dev_vport_init,
+						    &vport_param);
+			if (retval != 0)
+				PMD_DRV_LOG(ERR, "Failed to create vport %d",
+					    vport_param.devarg_id);
+		}
+	}
+
+	return 0;
+
+err:
+	if (first_probe) {
+		rte_spinlock_lock(&cpfl_adapter_lock);
+		TAILQ_REMOVE(&cpfl_adapter_list, adapter, next);
+		rte_spinlock_unlock(&cpfl_adapter_lock);
+		cpfl_adapter_ext_deinit(adapter);
+		rte_free(adapter);
+	}
+	return retval;
+}
+
+static int
+cpfl_pci_remove(struct rte_pci_device *pci_dev)
+{
+	struct cpfl_adapter_ext *adapter = cpfl_find_adapter_ext(pci_dev);
+	uint16_t port_id;
+
+	/* Close every ethdev created for this rte_device; they can be
+	 * found via RTE_ETH_FOREACH_DEV_OF.
+	 */
+	RTE_ETH_FOREACH_DEV_OF(port_id, &pci_dev->device) {
+		rte_eth_dev_close(port_id);
+	}
+
+	rte_spinlock_lock(&cpfl_adapter_lock);
+	TAILQ_REMOVE(&cpfl_adapter_list, adapter, next);
+	rte_spinlock_unlock(&cpfl_adapter_lock);
+	cpfl_adapter_ext_deinit(adapter);
+	rte_free(adapter);
+
+	return 0;
+}
+
+static struct rte_pci_driver rte_cpfl_pmd = {
+	.id_table	= pci_id_cpfl_map,
+	.drv_flags	= RTE_PCI_DRV_NEED_MAPPING,
+	.probe		= cpfl_pci_probe,
+	.remove		= cpfl_pci_remove,
+};
+
+/**
+ * Driver initialization routine.
+ * Invoked once at EAL init time.
+ * Register itself as the [Poll Mode] Driver of PCI devices.
+ */
+RTE_PMD_REGISTER_PCI(net_cpfl, rte_cpfl_pmd);
+RTE_PMD_REGISTER_PCI_TABLE(net_cpfl, pci_id_cpfl_map);
+RTE_PMD_REGISTER_KMOD_DEP(net_cpfl, "* igb_uio | vfio-pci");
+RTE_PMD_REGISTER_PARAM_STRING(net_cpfl,
+	CPFL_TX_SINGLE_Q "=<0|1> "
+	CPFL_RX_SINGLE_Q "=<0|1> "
+	CPFL_VPORT "=[vport0_begin[-vport0_end][,vport1_begin[-vport1_end]][,..]]");
+
+RTE_LOG_REGISTER_SUFFIX(cpfl_logtype_init, init, NOTICE);
+RTE_LOG_REGISTER_SUFFIX(cpfl_logtype_driver, driver, NOTICE);
diff --git a/drivers/net/cpfl/cpfl_ethdev.h b/drivers/net/cpfl/cpfl_ethdev.h
new file mode 100644
index 0000000000..9738e89ca8
--- /dev/null
+++ b/drivers/net/cpfl/cpfl_ethdev.h
@@ -0,0 +1,77 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Intel Corporation
+ */
+
+#ifndef _CPFL_ETHDEV_H_
+#define _CPFL_ETHDEV_H_
+
+#include <stdint.h>
+#include <rte_malloc.h>
+#include <rte_spinlock.h>
+#include <rte_ethdev.h>
+#include <rte_kvargs.h>
+#include <ethdev_driver.h>
+#include <ethdev_pci.h>
+
+#include "cpfl_logs.h"
+
+#include <idpf_common_device.h>
+#include <idpf_common_virtchnl.h>
+#include <base/idpf_prototype.h>
+#include <base/virtchnl2.h>
+
+/* Currently, backend supports up to 8 vports */
+#define CPFL_MAX_VPORT_NUM	8
+
+#define CPFL_INVALID_VPORT_IDX	0xffff
+
+#define CPFL_MIN_BUF_SIZE	1024
+#define CPFL_MAX_FRAME_SIZE	9728
+#define CPFL_DEFAULT_MTU	RTE_ETHER_MTU
+
+#define CPFL_VLAN_TAG_SIZE	4
+#define CPFL_ETH_OVERHEAD \
+	(RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN + CPFL_VLAN_TAG_SIZE * 2)
+
+#define CPFL_ADAPTER_NAME_LEN	(PCI_PRI_STR_SIZE + 1)
+
+#define CPFL_ALARM_INTERVAL	50000 /* us */
+
+/* Device IDs */
+#define IDPF_DEV_ID_CPF			0x1453
+
+struct cpfl_vport_param {
+	struct cpfl_adapter_ext *adapter;
+	uint16_t devarg_id; /* arg id from user */
+	uint16_t idx;       /* index in adapter->vports[]*/
+};
+
+/* Struct used when parse driver specific devargs */
+struct cpfl_devargs {
+	uint16_t req_vports[CPFL_MAX_VPORT_NUM];
+	uint16_t req_vport_nb;
+};
+
+struct cpfl_adapter_ext {
+	TAILQ_ENTRY(cpfl_adapter_ext) next;
+	struct idpf_adapter base;
+
+	char name[CPFL_ADAPTER_NAME_LEN];
+
+	struct idpf_vport **vports;
+	uint16_t max_vport_nb;
+
+	uint16_t cur_vports; /* bit mask of created vport */
+	uint16_t cur_vport_nb;
+
+	uint16_t used_vecs_num;
+};
+
+TAILQ_HEAD(cpfl_adapter_list, cpfl_adapter_ext);
+
+#define CPFL_DEV_TO_PCI(eth_dev)		\
+	RTE_DEV_TO_PCI((eth_dev)->device)
+#define CPFL_ADAPTER_TO_EXT(p)					\
+	container_of((p), struct cpfl_adapter_ext, base)
+
+#endif /* _CPFL_ETHDEV_H_ */
diff --git a/drivers/net/cpfl/cpfl_logs.h b/drivers/net/cpfl/cpfl_logs.h
new file mode 100644
index 0000000000..bdfa5c41a5
--- /dev/null
+++ b/drivers/net/cpfl/cpfl_logs.h
@@ -0,0 +1,29 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Intel Corporation
+ */
+
+#ifndef _CPFL_LOGS_H_
+#define _CPFL_LOGS_H_
+
+#include <rte_log.h>
+
+extern int cpfl_logtype_init;
+extern int cpfl_logtype_driver;
+
+#define PMD_INIT_LOG(level, ...) \
+	rte_log(RTE_LOG_ ## level, \
+		cpfl_logtype_init, \
+		RTE_FMT("%s(): " \
+			RTE_FMT_HEAD(__VA_ARGS__,) "\n", \
+			__func__, \
+			RTE_FMT_TAIL(__VA_ARGS__,)))
+
+#define PMD_DRV_LOG(level, ...) \
+	rte_log(RTE_LOG_ ## level, \
+		cpfl_logtype_driver, \
+		RTE_FMT("%s(): " \
+			RTE_FMT_HEAD(__VA_ARGS__,) "\n", \
+			__func__, \
+			RTE_FMT_TAIL(__VA_ARGS__,)))
+
+#endif /* _CPFL_LOGS_H_ */
diff --git a/drivers/net/cpfl/meson.build b/drivers/net/cpfl/meson.build
new file mode 100644
index 0000000000..c721732b50
--- /dev/null
+++ b/drivers/net/cpfl/meson.build
@@ -0,0 +1,14 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2023 Intel Corporation
+
+if is_windows
+    build = false
+    reason = 'not supported on Windows'
+    subdir_done()
+endif
+
+deps += ['common_idpf']
+
+sources = files(
+        'cpfl_ethdev.c',
+)
\ No newline at end of file
diff --git a/drivers/net/meson.build b/drivers/net/meson.build
index f83a6de117..b1df17ce8c 100644
--- a/drivers/net/meson.build
+++ b/drivers/net/meson.build
@@ -13,6 +13,7 @@ drivers = [
         'bnxt',
         'bonding',
         'cnxk',
+        'cpfl',
         'cxgbe',
         'dpaa',
         'dpaa2',
-- 
2.34.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v8 02/21] net/cpfl: add Tx queue setup
  2023-03-02 10:35           ` [PATCH v8 " Mingxia Liu
  2023-03-02 10:35             ` [PATCH v8 01/21] net/cpfl: support device initialization Mingxia Liu
@ 2023-03-02 10:35             ` Mingxia Liu
  2023-03-02 10:35             ` [PATCH v8 03/21] net/cpfl: add Rx " Mingxia Liu
                               ` (19 subsequent siblings)
  21 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-03-02 10:35 UTC (permalink / raw)
  To: dev, beilei.xing, yuying.zhang; +Cc: Mingxia Liu

Add support for the tx_queue_setup ops.

There are two Tx queue modes: single queue mode and split queue
mode.

For the single queue model, the descriptor TX queue is used by SW
to post buffer descriptors to HW, and it's also used by HW to post
completed descriptors to SW.

For the split queue model, TX queues are used to pass descriptor
buffers from SW to HW, while the "Tx completion queue" is used only
to pass the descriptor completions from HW to SW.
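
For illustration, a minimal application-side sketch of exercising
this op through the generic ethdev API; port 0 is assumed to be
already configured, and the helper name, queue id and descriptor
count are placeholders:

    #include <rte_ethdev.h>
    #include <rte_lcore.h>

    static int
    setup_one_txq(uint16_t port_id)
    {
            struct rte_eth_txconf txconf = {
                    .tx_rs_thresh = 32,   /* CPFL_DEFAULT_TX_RS_THRESH */
                    .tx_free_thresh = 32, /* CPFL_DEFAULT_TX_FREE_THRESH */
            };

            /* dispatched to cpfl_tx_queue_setup() via eth_dev_ops */
            return rte_eth_tx_queue_setup(port_id, 0, 1024,
                                          rte_socket_id(), &txconf);
    }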

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/net/cpfl/cpfl_ethdev.c |  13 ++
 drivers/net/cpfl/cpfl_rxtx.c   | 244 +++++++++++++++++++++++++++++++++
 drivers/net/cpfl/cpfl_rxtx.h   |  25 ++++
 drivers/net/cpfl/meson.build   |   1 +
 4 files changed, 283 insertions(+)
 create mode 100644 drivers/net/cpfl/cpfl_rxtx.c
 create mode 100644 drivers/net/cpfl/cpfl_rxtx.h

diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index 21c505fda3..b40f373fb9 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -12,6 +12,7 @@
 #include <rte_alarm.h>
 
 #include "cpfl_ethdev.h"
+#include "cpfl_rxtx.h"
 
 #define CPFL_TX_SINGLE_Q	"tx_single"
 #define CPFL_RX_SINGLE_Q	"rx_single"
@@ -93,6 +94,17 @@ cpfl_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->max_mtu = vport->max_mtu;
 	dev_info->min_mtu = RTE_ETHER_MIN_MTU;
 
+	dev_info->default_txconf = (struct rte_eth_txconf) {
+		.tx_free_thresh = CPFL_DEFAULT_TX_FREE_THRESH,
+		.tx_rs_thresh = CPFL_DEFAULT_TX_RS_THRESH,
+	};
+
+	dev_info->tx_desc_lim = (struct rte_eth_desc_lim) {
+		.nb_max = CPFL_MAX_RING_DESC,
+		.nb_min = CPFL_MIN_RING_DESC,
+		.nb_align = CPFL_ALIGN_RING_DESC,
+	};
+
 	return 0;
 }
 
@@ -179,6 +191,7 @@ cpfl_dev_close(struct rte_eth_dev *dev)
 static const struct eth_dev_ops cpfl_eth_dev_ops = {
 	.dev_configure			= cpfl_dev_configure,
 	.dev_close			= cpfl_dev_close,
+	.tx_queue_setup			= cpfl_tx_queue_setup,
 	.dev_infos_get			= cpfl_dev_info_get,
 	.link_update			= cpfl_dev_link_update,
 	.dev_supported_ptypes_get	= cpfl_dev_supported_ptypes_get,
diff --git a/drivers/net/cpfl/cpfl_rxtx.c b/drivers/net/cpfl/cpfl_rxtx.c
new file mode 100644
index 0000000000..737d069ec2
--- /dev/null
+++ b/drivers/net/cpfl/cpfl_rxtx.c
@@ -0,0 +1,244 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Intel Corporation
+ */
+
+#include <ethdev_driver.h>
+#include <rte_net.h>
+#include <rte_vect.h>
+
+#include "cpfl_ethdev.h"
+#include "cpfl_rxtx.h"
+
+static uint64_t
+cpfl_tx_offload_convert(uint64_t offload)
+{
+	uint64_t ol = 0;
+
+	if ((offload & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM) != 0)
+		ol |= IDPF_TX_OFFLOAD_IPV4_CKSUM;
+	if ((offload & RTE_ETH_TX_OFFLOAD_UDP_CKSUM) != 0)
+		ol |= IDPF_TX_OFFLOAD_UDP_CKSUM;
+	if ((offload & RTE_ETH_TX_OFFLOAD_TCP_CKSUM) != 0)
+		ol |= IDPF_TX_OFFLOAD_TCP_CKSUM;
+	if ((offload & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM) != 0)
+		ol |= IDPF_TX_OFFLOAD_SCTP_CKSUM;
+	if ((offload & RTE_ETH_TX_OFFLOAD_MULTI_SEGS) != 0)
+		ol |= IDPF_TX_OFFLOAD_MULTI_SEGS;
+	if ((offload & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) != 0)
+		ol |= IDPF_TX_OFFLOAD_MBUF_FAST_FREE;
+
+	return ol;
+}
+
+static const struct rte_memzone *
+cpfl_dma_zone_reserve(struct rte_eth_dev *dev, uint16_t queue_idx,
+		      uint16_t len, uint16_t queue_type,
+		      unsigned int socket_id, bool splitq)
+{
+	char ring_name[RTE_MEMZONE_NAMESIZE];
+	const struct rte_memzone *mz;
+	uint32_t ring_size;
+
+	memset(ring_name, 0, RTE_MEMZONE_NAMESIZE);
+	switch (queue_type) {
+	case VIRTCHNL2_QUEUE_TYPE_TX:
+		if (splitq)
+			ring_size = RTE_ALIGN(len * sizeof(struct idpf_flex_tx_sched_desc),
+					      CPFL_DMA_MEM_ALIGN);
+		else
+			ring_size = RTE_ALIGN(len * sizeof(struct idpf_flex_tx_desc),
+					      CPFL_DMA_MEM_ALIGN);
+		memcpy(ring_name, "cpfl Tx ring", sizeof("cpfl Tx ring"));
+		break;
+	case VIRTCHNL2_QUEUE_TYPE_RX:
+		if (splitq)
+			ring_size = RTE_ALIGN(len * sizeof(struct virtchnl2_rx_flex_desc_adv_nic_3),
+					      CPFL_DMA_MEM_ALIGN);
+		else
+			ring_size = RTE_ALIGN(len * sizeof(struct virtchnl2_singleq_rx_buf_desc),
+					      CPFL_DMA_MEM_ALIGN);
+		memcpy(ring_name, "cpfl Rx ring", sizeof("cpfl Rx ring"));
+		break;
+	case VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION:
+		ring_size = RTE_ALIGN(len * sizeof(struct idpf_splitq_tx_compl_desc),
+				      CPFL_DMA_MEM_ALIGN);
+		memcpy(ring_name, "cpfl Tx compl ring", sizeof("cpfl Tx compl ring"));
+		break;
+	case VIRTCHNL2_QUEUE_TYPE_RX_BUFFER:
+		ring_size = RTE_ALIGN(len * sizeof(struct virtchnl2_splitq_rx_buf_desc),
+				      CPFL_DMA_MEM_ALIGN);
+		memcpy(ring_name, "cpfl Rx buf ring", sizeof("cpfl Rx buf ring"));
+		break;
+	default:
+		PMD_INIT_LOG(ERR, "Invalid queue type");
+		return NULL;
+	}
+
+	mz = rte_eth_dma_zone_reserve(dev, ring_name, queue_idx,
+				      ring_size, CPFL_RING_BASE_ALIGN,
+				      socket_id);
+	if (mz == NULL) {
+		PMD_INIT_LOG(ERR, "Failed to reserve DMA memory for ring");
+		return NULL;
+	}
+
+	/* Zero all the descriptors in the ring. */
+	memset(mz->addr, 0, ring_size);
+
+	return mz;
+}
+
+static void
+cpfl_dma_zone_release(const struct rte_memzone *mz)
+{
+	rte_memzone_free(mz);
+}
+
+static int
+cpfl_tx_complq_setup(struct rte_eth_dev *dev, struct idpf_tx_queue *txq,
+		     uint16_t queue_idx, uint16_t nb_desc,
+		     unsigned int socket_id)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	const struct rte_memzone *mz;
+	struct idpf_tx_queue *cq;
+	int ret;
+
+	cq = rte_zmalloc_socket("cpfl splitq cq",
+				sizeof(struct idpf_tx_queue),
+				RTE_CACHE_LINE_SIZE,
+				socket_id);
+	if (cq == NULL) {
+		PMD_INIT_LOG(ERR, "Failed to allocate memory for Tx compl queue");
+		ret = -ENOMEM;
+		goto err_cq_alloc;
+	}
+
+	cq->nb_tx_desc = nb_desc;
+	cq->queue_id = vport->chunks_info.tx_compl_start_qid + queue_idx;
+	cq->port_id = dev->data->port_id;
+	cq->txqs = dev->data->tx_queues;
+	cq->tx_start_qid = vport->chunks_info.tx_start_qid;
+
+	mz = cpfl_dma_zone_reserve(dev, queue_idx, nb_desc,
+				   VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION,
+				   socket_id, true);
+	if (mz == NULL) {
+		ret = -ENOMEM;
+		goto err_mz_reserve;
+	}
+	cq->tx_ring_phys_addr = mz->iova;
+	cq->compl_ring = mz->addr;
+	cq->mz = mz;
+	idpf_qc_split_tx_complq_reset(cq);
+
+	txq->complq = cq;
+
+	return 0;
+
+err_mz_reserve:
+	rte_free(cq);
+err_cq_alloc:
+	return ret;
+}
+
+int
+cpfl_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
+		    uint16_t nb_desc, unsigned int socket_id,
+		    const struct rte_eth_txconf *tx_conf)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_adapter *base = vport->adapter;
+	uint16_t tx_rs_thresh, tx_free_thresh;
+	struct idpf_hw *hw = &base->hw;
+	const struct rte_memzone *mz;
+	struct idpf_tx_queue *txq;
+	uint64_t offloads;
+	uint16_t len;
+	bool is_splitq;
+	int ret;
+
+	offloads = tx_conf->offloads | dev->data->dev_conf.txmode.offloads;
+
+	tx_rs_thresh = (uint16_t)((tx_conf->tx_rs_thresh > 0) ?
+		tx_conf->tx_rs_thresh : CPFL_DEFAULT_TX_RS_THRESH);
+	tx_free_thresh = (uint16_t)((tx_conf->tx_free_thresh > 0) ?
+		tx_conf->tx_free_thresh : CPFL_DEFAULT_TX_FREE_THRESH);
+	if (idpf_qc_tx_thresh_check(nb_desc, tx_rs_thresh, tx_free_thresh) != 0)
+		return -EINVAL;
+
+	/* Allocate the TX queue data structure. */
+	txq = rte_zmalloc_socket("cpfl txq",
+				 sizeof(struct idpf_tx_queue),
+				 RTE_CACHE_LINE_SIZE,
+				 socket_id);
+	if (txq == NULL) {
+		PMD_INIT_LOG(ERR, "Failed to allocate memory for tx queue structure");
+		ret = -ENOMEM;
+		goto err_txq_alloc;
+	}
+
+	is_splitq = !!(vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT);
+
+	txq->nb_tx_desc = nb_desc;
+	txq->rs_thresh = tx_rs_thresh;
+	txq->free_thresh = tx_free_thresh;
+	txq->queue_id = vport->chunks_info.tx_start_qid + queue_idx;
+	txq->port_id = dev->data->port_id;
+	txq->offloads = cpfl_tx_offload_convert(offloads);
+	txq->tx_deferred_start = tx_conf->tx_deferred_start;
+
+	if (is_splitq)
+		len = 2 * nb_desc;
+	else
+		len = nb_desc;
+	txq->sw_nb_desc = len;
+
+	/* Allocate TX hardware ring descriptors. */
+	mz = cpfl_dma_zone_reserve(dev, queue_idx, nb_desc, VIRTCHNL2_QUEUE_TYPE_TX,
+				   socket_id, is_splitq);
+	if (mz == NULL) {
+		ret = -ENOMEM;
+		goto err_mz_reserve;
+	}
+	txq->tx_ring_phys_addr = mz->iova;
+	txq->mz = mz;
+
+	txq->sw_ring = rte_zmalloc_socket("cpfl tx sw ring",
+					  sizeof(struct idpf_tx_entry) * len,
+					  RTE_CACHE_LINE_SIZE, socket_id);
+	if (txq->sw_ring == NULL) {
+		PMD_INIT_LOG(ERR, "Failed to allocate memory for SW TX ring");
+		ret = -ENOMEM;
+		goto err_sw_ring_alloc;
+	}
+
+	if (!is_splitq) {
+		txq->tx_ring = mz->addr;
+		idpf_qc_single_tx_queue_reset(txq);
+	} else {
+		txq->desc_ring = mz->addr;
+		idpf_qc_split_tx_descq_reset(txq);
+
+		/* Setup tx completion queue if split model */
+		ret = cpfl_tx_complq_setup(dev, txq, queue_idx,
+					   2 * nb_desc, socket_id);
+		if (ret != 0)
+			goto err_complq_setup;
+	}
+
+	txq->qtx_tail = hw->hw_addr + (vport->chunks_info.tx_qtail_start +
+			queue_idx * vport->chunks_info.tx_qtail_spacing);
+	txq->q_set = true;
+	dev->data->tx_queues[queue_idx] = txq;
+
+	return 0;
+
+err_complq_setup:
+err_sw_ring_alloc:
+	cpfl_dma_zone_release(mz);
+err_mz_reserve:
+	rte_free(txq);
+err_txq_alloc:
+	return ret;
+}
diff --git a/drivers/net/cpfl/cpfl_rxtx.h b/drivers/net/cpfl/cpfl_rxtx.h
new file mode 100644
index 0000000000..232630c5e9
--- /dev/null
+++ b/drivers/net/cpfl/cpfl_rxtx.h
@@ -0,0 +1,25 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Intel Corporation
+ */
+
+#ifndef _CPFL_RXTX_H_
+#define _CPFL_RXTX_H_
+
+#include <idpf_common_rxtx.h>
+#include "cpfl_ethdev.h"
+
+/* In QLEN must be whole number of 32 descriptors. */
+#define CPFL_ALIGN_RING_DESC	32
+#define CPFL_MIN_RING_DESC	32
+#define CPFL_MAX_RING_DESC	4096
+#define CPFL_DMA_MEM_ALIGN	4096
+/* Base address of the HW descriptor ring should be 128B aligned. */
+#define CPFL_RING_BASE_ALIGN	128
+
+#define CPFL_DEFAULT_TX_RS_THRESH	32
+#define CPFL_DEFAULT_TX_FREE_THRESH	32
+
+int cpfl_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
+			uint16_t nb_desc, unsigned int socket_id,
+			const struct rte_eth_txconf *tx_conf);
+#endif /* _CPFL_RXTX_H_ */
diff --git a/drivers/net/cpfl/meson.build b/drivers/net/cpfl/meson.build
index c721732b50..1894423689 100644
--- a/drivers/net/cpfl/meson.build
+++ b/drivers/net/cpfl/meson.build
@@ -11,4 +11,5 @@ deps += ['common_idpf']
 
 sources = files(
         'cpfl_ethdev.c',
+        'cpfl_rxtx.c',
 )
\ No newline at end of file
-- 
2.34.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v8 03/21] net/cpfl: add Rx queue setup
  2023-03-02 10:35           ` [PATCH v8 " Mingxia Liu
  2023-03-02 10:35             ` [PATCH v8 01/21] net/cpfl: support device initialization Mingxia Liu
  2023-03-02 10:35             ` [PATCH v8 02/21] net/cpfl: add Tx queue setup Mingxia Liu
@ 2023-03-02 10:35             ` Mingxia Liu
  2023-03-02 10:35             ` [PATCH v8 04/21] net/cpfl: support device start and stop Mingxia Liu
                               ` (18 subsequent siblings)
  21 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-03-02 10:35 UTC (permalink / raw)
  To: dev, beilei.xing, yuying.zhang; +Cc: Mingxia Liu

Add support for the rx_queue_setup ops.

There are two Rx queue modes supported: single queue mode and
split queue mode.

For the single queue model, the descriptor RX queue is used by SW
to post buffer descriptors to HW, and it's also used by HW to post
completed descriptors to SW.

For the split queue model, "RX buffer queues" are used to pass
descriptor buffers from SW to HW, while RX queues are used only to
pass the descriptor completions from HW to SW.
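
As a sketch only (pool size, queue id and descriptor count are
illustrative values), an application would feed this op a mempool
through the generic ethdev API:

    #include <rte_errno.h>
    #include <rte_ethdev.h>
    #include <rte_lcore.h>
    #include <rte_mbuf.h>

    static int
    setup_one_rxq(uint16_t port_id)
    {
            /* rx_buf_len is derived from the mempool data room size
             * minus RTE_PKTMBUF_HEADROOM, so size the mbufs accordingly
             */
            struct rte_mempool *mp = rte_pktmbuf_pool_create("rxq_pool",
                            4096, 256, 0, RTE_MBUF_DEFAULT_BUF_SIZE,
                            rte_socket_id());

            if (mp == NULL)
                    return -rte_errno;

            /* a NULL rx_conf picks up default_rxconf from dev_infos_get */
            return rte_eth_rx_queue_setup(port_id, 0, 1024,
                                          rte_socket_id(), NULL, mp);
    }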

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/net/cpfl/cpfl_ethdev.c |  11 ++
 drivers/net/cpfl/cpfl_rxtx.c   | 232 +++++++++++++++++++++++++++++++++
 drivers/net/cpfl/cpfl_rxtx.h   |   6 +
 3 files changed, 249 insertions(+)

diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index b40f373fb9..99fd86d6d0 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -99,12 +99,22 @@ cpfl_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 		.tx_rs_thresh = CPFL_DEFAULT_TX_RS_THRESH,
 	};
 
+	dev_info->default_rxconf = (struct rte_eth_rxconf) {
+		.rx_free_thresh = CPFL_DEFAULT_RX_FREE_THRESH,
+	};
+
 	dev_info->tx_desc_lim = (struct rte_eth_desc_lim) {
 		.nb_max = CPFL_MAX_RING_DESC,
 		.nb_min = CPFL_MIN_RING_DESC,
 		.nb_align = CPFL_ALIGN_RING_DESC,
 	};
 
+	dev_info->rx_desc_lim = (struct rte_eth_desc_lim) {
+		.nb_max = CPFL_MAX_RING_DESC,
+		.nb_min = CPFL_MIN_RING_DESC,
+		.nb_align = CPFL_ALIGN_RING_DESC,
+	};
+
 	return 0;
 }
 
@@ -191,6 +201,7 @@ cpfl_dev_close(struct rte_eth_dev *dev)
 static const struct eth_dev_ops cpfl_eth_dev_ops = {
 	.dev_configure			= cpfl_dev_configure,
 	.dev_close			= cpfl_dev_close,
+	.rx_queue_setup			= cpfl_rx_queue_setup,
 	.tx_queue_setup			= cpfl_tx_queue_setup,
 	.dev_infos_get			= cpfl_dev_info_get,
 	.link_update			= cpfl_dev_link_update,
diff --git a/drivers/net/cpfl/cpfl_rxtx.c b/drivers/net/cpfl/cpfl_rxtx.c
index 737d069ec2..930d725a4a 100644
--- a/drivers/net/cpfl/cpfl_rxtx.c
+++ b/drivers/net/cpfl/cpfl_rxtx.c
@@ -9,6 +9,25 @@
 #include "cpfl_ethdev.h"
 #include "cpfl_rxtx.h"
 
+static uint64_t
+cpfl_rx_offload_convert(uint64_t offload)
+{
+	uint64_t ol = 0;
+
+	if ((offload & RTE_ETH_RX_OFFLOAD_IPV4_CKSUM) != 0)
+		ol |= IDPF_RX_OFFLOAD_IPV4_CKSUM;
+	if ((offload & RTE_ETH_RX_OFFLOAD_UDP_CKSUM) != 0)
+		ol |= IDPF_RX_OFFLOAD_UDP_CKSUM;
+	if ((offload & RTE_ETH_RX_OFFLOAD_TCP_CKSUM) != 0)
+		ol |= IDPF_RX_OFFLOAD_TCP_CKSUM;
+	if ((offload & RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM) != 0)
+		ol |= IDPF_RX_OFFLOAD_OUTER_IPV4_CKSUM;
+	if ((offload & RTE_ETH_RX_OFFLOAD_TIMESTAMP) != 0)
+		ol |= IDPF_RX_OFFLOAD_TIMESTAMP;
+
+	return ol;
+}
+
 static uint64_t
 cpfl_tx_offload_convert(uint64_t offload)
 {
@@ -94,6 +113,219 @@ cpfl_dma_zone_release(const struct rte_memzone *mz)
 	rte_memzone_free(mz);
 }
 
+static int
+cpfl_rx_split_bufq_setup(struct rte_eth_dev *dev, struct idpf_rx_queue *rxq,
+			 uint16_t queue_idx, uint16_t rx_free_thresh,
+			 uint16_t nb_desc, unsigned int socket_id,
+			 struct rte_mempool *mp, uint8_t bufq_id)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_adapter *base = vport->adapter;
+	struct idpf_hw *hw = &base->hw;
+	const struct rte_memzone *mz;
+	struct idpf_rx_queue *bufq;
+	uint16_t len;
+	int ret;
+
+	bufq = rte_zmalloc_socket("cpfl bufq",
+				   sizeof(struct idpf_rx_queue),
+				   RTE_CACHE_LINE_SIZE,
+				   socket_id);
+	if (bufq == NULL) {
+		PMD_INIT_LOG(ERR, "Failed to allocate memory for rx buffer queue.");
+		ret = -ENOMEM;
+		goto err_bufq1_alloc;
+	}
+
+	bufq->mp = mp;
+	bufq->nb_rx_desc = nb_desc;
+	bufq->rx_free_thresh = rx_free_thresh;
+	bufq->queue_id = vport->chunks_info.rx_buf_start_qid + queue_idx;
+	bufq->port_id = dev->data->port_id;
+	bufq->rx_hdr_len = 0;
+	bufq->adapter = base;
+
+	len = rte_pktmbuf_data_room_size(bufq->mp) - RTE_PKTMBUF_HEADROOM;
+	bufq->rx_buf_len = len;
+
+	/* Allocate a little more to support bulk allocate. */
+	len = nb_desc + IDPF_RX_MAX_BURST;
+
+	mz = cpfl_dma_zone_reserve(dev, queue_idx, len,
+				   VIRTCHNL2_QUEUE_TYPE_RX_BUFFER,
+				   socket_id, true);
+	if (mz == NULL) {
+		ret = -ENOMEM;
+		goto err_mz_reserve;
+	}
+
+	bufq->rx_ring_phys_addr = mz->iova;
+	bufq->rx_ring = mz->addr;
+	bufq->mz = mz;
+
+	bufq->sw_ring =
+		rte_zmalloc_socket("cpfl rx bufq sw ring",
+				   sizeof(struct rte_mbuf *) * len,
+				   RTE_CACHE_LINE_SIZE,
+				   socket_id);
+	if (bufq->sw_ring == NULL) {
+		PMD_INIT_LOG(ERR, "Failed to allocate memory for SW ring");
+		ret = -ENOMEM;
+		goto err_sw_ring_alloc;
+	}
+
+	idpf_qc_split_rx_bufq_reset(bufq);
+	bufq->qrx_tail = hw->hw_addr + (vport->chunks_info.rx_buf_qtail_start +
+			 queue_idx * vport->chunks_info.rx_buf_qtail_spacing);
+	bufq->q_set = true;
+
+	if (bufq_id == IDPF_RX_SPLIT_BUFQ1_ID) {
+		rxq->bufq1 = bufq;
+	} else if (bufq_id == IDPF_RX_SPLIT_BUFQ2_ID) {
+		rxq->bufq2 = bufq;
+	} else {
+		PMD_INIT_LOG(ERR, "Invalid buffer queue index.");
+		ret = -EINVAL;
+		goto err_bufq_id;
+	}
+
+	return 0;
+
+err_bufq_id:
+	rte_free(bufq->sw_ring);
+err_sw_ring_alloc:
+	cpfl_dma_zone_release(mz);
+err_mz_reserve:
+	rte_free(bufq);
+err_bufq1_alloc:
+	return ret;
+}
+
+static void
+cpfl_rx_split_bufq_release(struct idpf_rx_queue *bufq)
+{
+	rte_free(bufq->sw_ring);
+	cpfl_dma_zone_release(bufq->mz);
+	rte_free(bufq);
+}
+
+int
+cpfl_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
+		    uint16_t nb_desc, unsigned int socket_id,
+		    const struct rte_eth_rxconf *rx_conf,
+		    struct rte_mempool *mp)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_adapter *base = vport->adapter;
+	struct idpf_hw *hw = &base->hw;
+	const struct rte_memzone *mz;
+	struct idpf_rx_queue *rxq;
+	uint16_t rx_free_thresh;
+	uint64_t offloads;
+	bool is_splitq;
+	uint16_t len;
+	int ret;
+
+	offloads = rx_conf->offloads | dev->data->dev_conf.rxmode.offloads;
+
+	/* Check free threshold */
+	rx_free_thresh = (rx_conf->rx_free_thresh == 0) ?
+		CPFL_DEFAULT_RX_FREE_THRESH :
+		rx_conf->rx_free_thresh;
+	if (idpf_qc_rx_thresh_check(nb_desc, rx_free_thresh) != 0)
+		return -EINVAL;
+
+	/* Setup Rx queue */
+	rxq = rte_zmalloc_socket("cpfl rxq",
+				 sizeof(struct idpf_rx_queue),
+				 RTE_CACHE_LINE_SIZE,
+				 socket_id);
+	if (rxq == NULL) {
+		PMD_INIT_LOG(ERR, "Failed to allocate memory for rx queue data structure");
+		ret = -ENOMEM;
+		goto err_rxq_alloc;
+	}
+
+	is_splitq = !!(vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT);
+
+	rxq->mp = mp;
+	rxq->nb_rx_desc = nb_desc;
+	rxq->rx_free_thresh = rx_free_thresh;
+	rxq->queue_id = vport->chunks_info.rx_start_qid + queue_idx;
+	rxq->port_id = dev->data->port_id;
+	rxq->rx_deferred_start = rx_conf->rx_deferred_start;
+	rxq->rx_hdr_len = 0;
+	rxq->adapter = base;
+	rxq->offloads = cpfl_rx_offload_convert(offloads);
+
+	len = rte_pktmbuf_data_room_size(rxq->mp) - RTE_PKTMBUF_HEADROOM;
+	rxq->rx_buf_len = len;
+
+	/* Allocate a little more to support bulk allocate. */
+	len = nb_desc + IDPF_RX_MAX_BURST;
+	mz = cpfl_dma_zone_reserve(dev, queue_idx, len, VIRTCHNL2_QUEUE_TYPE_RX,
+				   socket_id, is_splitq);
+	if (mz == NULL) {
+		ret = -ENOMEM;
+		goto err_mz_reserve;
+	}
+	rxq->rx_ring_phys_addr = mz->iova;
+	rxq->rx_ring = mz->addr;
+	rxq->mz = mz;
+
+	if (!is_splitq) {
+		rxq->sw_ring = rte_zmalloc_socket("cpfl rxq sw ring",
+						  sizeof(struct rte_mbuf *) * len,
+						  RTE_CACHE_LINE_SIZE,
+						  socket_id);
+		if (rxq->sw_ring == NULL) {
+			PMD_INIT_LOG(ERR, "Failed to allocate memory for SW ring");
+			ret = -ENOMEM;
+			goto err_sw_ring_alloc;
+		}
+
+		idpf_qc_single_rx_queue_reset(rxq);
+		rxq->qrx_tail = hw->hw_addr + (vport->chunks_info.rx_qtail_start +
+				queue_idx * vport->chunks_info.rx_qtail_spacing);
+	} else {
+		idpf_qc_split_rx_descq_reset(rxq);
+
+		/* Setup Rx buffer queues */
+		ret = cpfl_rx_split_bufq_setup(dev, rxq, 2 * queue_idx,
+					       rx_free_thresh, nb_desc,
+					       socket_id, mp, 1);
+		if (ret != 0) {
+			PMD_INIT_LOG(ERR, "Failed to setup buffer queue 1");
+			ret = -EINVAL;
+			goto err_bufq1_setup;
+		}
+
+		ret = cpfl_rx_split_bufq_setup(dev, rxq, 2 * queue_idx + 1,
+					       rx_free_thresh, nb_desc,
+					       socket_id, mp, 2);
+		if (ret != 0) {
+			PMD_INIT_LOG(ERR, "Failed to setup buffer queue 2");
+			ret = -EINVAL;
+			goto err_bufq2_setup;
+		}
+	}
+
+	rxq->q_set = true;
+	dev->data->rx_queues[queue_idx] = rxq;
+
+	return 0;
+
+err_bufq2_setup:
+	cpfl_rx_split_bufq_release(rxq->bufq1);
+err_bufq1_setup:
+err_sw_ring_alloc:
+	cpfl_dma_zone_release(mz);
+err_mz_reserve:
+	rte_free(rxq);
+err_rxq_alloc:
+	return ret;
+}
+
 static int
 cpfl_tx_complq_setup(struct rte_eth_dev *dev, struct idpf_tx_queue *txq,
 		     uint16_t queue_idx, uint16_t nb_desc,
diff --git a/drivers/net/cpfl/cpfl_rxtx.h b/drivers/net/cpfl/cpfl_rxtx.h
index 232630c5e9..e0221abfa3 100644
--- a/drivers/net/cpfl/cpfl_rxtx.h
+++ b/drivers/net/cpfl/cpfl_rxtx.h
@@ -16,10 +16,16 @@
 /* Base address of the HW descriptor ring should be 128B aligned. */
 #define CPFL_RING_BASE_ALIGN	128
 
+#define CPFL_DEFAULT_RX_FREE_THRESH	32
+
 #define CPFL_DEFAULT_TX_RS_THRESH	32
 #define CPFL_DEFAULT_TX_FREE_THRESH	32
 
 int cpfl_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 			uint16_t nb_desc, unsigned int socket_id,
 			const struct rte_eth_txconf *tx_conf);
+int cpfl_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
+			uint16_t nb_desc, unsigned int socket_id,
+			const struct rte_eth_rxconf *rx_conf,
+			struct rte_mempool *mp);
 #endif /* _CPFL_RXTX_H_ */
-- 
2.34.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v8 04/21] net/cpfl: support device start and stop
  2023-03-02 10:35           ` [PATCH v8 " Mingxia Liu
                               ` (2 preceding siblings ...)
  2023-03-02 10:35             ` [PATCH v8 03/21] net/cpfl: add Rx " Mingxia Liu
@ 2023-03-02 10:35             ` Mingxia Liu
  2023-03-02 10:35             ` [PATCH v8 05/21] net/cpfl: support queue start Mingxia Liu
                               ` (17 subsequent siblings)
  21 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-03-02 10:35 UTC (permalink / raw)
  To: dev, beilei.xing, yuying.zhang; +Cc: Mingxia Liu

Add the dev_start and dev_stop dev ops.
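
For illustration, the usual application-side sequence once the
queues are set up (the helper name is a placeholder and error
handling is trimmed to the essentials):

    #include <rte_ethdev.h>

    static int
    run_port(uint16_t port_id)
    {
            /* dispatched to cpfl_dev_start() via eth_dev_ops */
            int ret = rte_eth_dev_start(port_id);

            if (ret != 0)
                    return ret;

            /* ... datapath runs ... */

            /* dispatched to cpfl_dev_stop(); also called on close */
            return rte_eth_dev_stop(port_id);
    }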

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/net/cpfl/cpfl_ethdev.c | 35 ++++++++++++++++++++++++++++++++++
 1 file changed, 35 insertions(+)

diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index 99fd86d6d0..dbd3f056e7 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -181,12 +181,45 @@ cpfl_dev_configure(struct rte_eth_dev *dev)
 	return 0;
 }
 
+static int
+cpfl_dev_start(struct rte_eth_dev *dev)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	int ret;
+
+	ret = idpf_vc_vport_ena_dis(vport, true);
+	if (ret != 0) {
+		PMD_DRV_LOG(ERR, "Failed to enable vport");
+		return ret;
+	}
+
+	vport->stopped = 0;
+
+	return 0;
+}
+
+static int
+cpfl_dev_stop(struct rte_eth_dev *dev)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+
+	if (vport->stopped == 1)
+		return 0;
+
+	idpf_vc_vport_ena_dis(vport, false);
+
+	vport->stopped = 1;
+
+	return 0;
+}
+
 static int
 cpfl_dev_close(struct rte_eth_dev *dev)
 {
 	struct idpf_vport *vport = dev->data->dev_private;
 	struct cpfl_adapter_ext *adapter = CPFL_ADAPTER_TO_EXT(vport->adapter);
 
+	cpfl_dev_stop(dev);
 	idpf_vport_deinit(vport);
 
 	adapter->cur_vports &= ~RTE_BIT32(vport->devarg_id);
@@ -204,6 +237,8 @@ static const struct eth_dev_ops cpfl_eth_dev_ops = {
 	.rx_queue_setup			= cpfl_rx_queue_setup,
 	.tx_queue_setup			= cpfl_tx_queue_setup,
 	.dev_infos_get			= cpfl_dev_info_get,
+	.dev_start			= cpfl_dev_start,
+	.dev_stop			= cpfl_dev_stop,
 	.link_update			= cpfl_dev_link_update,
 	.dev_supported_ptypes_get	= cpfl_dev_supported_ptypes_get,
 };
-- 
2.34.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v8 05/21] net/cpfl: support queue start
  2023-03-02 10:35           ` [PATCH v8 " Mingxia Liu
                               ` (3 preceding siblings ...)
  2023-03-02 10:35             ` [PATCH v8 04/21] net/cpfl: support device start and stop Mingxia Liu
@ 2023-03-02 10:35             ` Mingxia Liu
  2023-03-02 10:35             ` [PATCH v8 06/21] net/cpfl: support queue stop Mingxia Liu
                               ` (16 subsequent siblings)
  21 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-03-02 10:35 UTC (permalink / raw)
  To: dev, beilei.xing, yuying.zhang; +Cc: Mingxia Liu

Add support for these device ops:
 - rx_queue_start
 - tx_queue_start
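
A sketch of how these ops are reached for deferred-start queues
(the queue id is a placeholder); queues set up with rx_deferred_start
or tx_deferred_start are skipped by dev_start and must be started
explicitly:

    #include <rte_ethdev.h>

    static int
    start_queue_pair(uint16_t port_id, uint16_t qid)
    {
            /* dispatched to cpfl_rx_queue_start() */
            int ret = rte_eth_dev_rx_queue_start(port_id, qid);

            if (ret != 0)
                    return ret;

            /* dispatched to cpfl_tx_queue_start() */
            return rte_eth_dev_tx_queue_start(port_id, qid);
    }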

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/net/cpfl/cpfl_ethdev.c |  41 ++++++++++
 drivers/net/cpfl/cpfl_rxtx.c   | 138 +++++++++++++++++++++++++++++++++
 drivers/net/cpfl/cpfl_rxtx.h   |   4 +
 3 files changed, 183 insertions(+)

diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index dbd3f056e7..3248d22d2f 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -181,12 +181,51 @@ cpfl_dev_configure(struct rte_eth_dev *dev)
 	return 0;
 }
 
+static int
+cpfl_start_queues(struct rte_eth_dev *dev)
+{
+	struct idpf_rx_queue *rxq;
+	struct idpf_tx_queue *txq;
+	int err = 0;
+	int i;
+
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		txq = dev->data->tx_queues[i];
+		if (txq == NULL || txq->tx_deferred_start)
+			continue;
+		err = cpfl_tx_queue_start(dev, i);
+		if (err != 0) {
+			PMD_DRV_LOG(ERR, "Failed to start Tx queue %u", i);
+			return err;
+		}
+	}
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		rxq = dev->data->rx_queues[i];
+		if (rxq == NULL || rxq->rx_deferred_start)
+			continue;
+		err = cpfl_rx_queue_start(dev, i);
+		if (err != 0) {
+			PMD_DRV_LOG(ERR, "Failed to start Rx queue %u", i);
+			return err;
+		}
+	}
+
+	return err;
+}
+
 static int
 cpfl_dev_start(struct rte_eth_dev *dev)
 {
 	struct idpf_vport *vport = dev->data->dev_private;
 	int ret;
 
+	ret = cpfl_start_queues(dev);
+	if (ret != 0) {
+		PMD_DRV_LOG(ERR, "Failed to start queues");
+		return ret;
+	}
+
 	ret = idpf_vc_vport_ena_dis(vport, true);
 	if (ret != 0) {
 		PMD_DRV_LOG(ERR, "Failed to enable vport");
@@ -240,6 +279,8 @@ static const struct eth_dev_ops cpfl_eth_dev_ops = {
 	.dev_start			= cpfl_dev_start,
 	.dev_stop			= cpfl_dev_stop,
 	.link_update			= cpfl_dev_link_update,
+	.rx_queue_start			= cpfl_rx_queue_start,
+	.tx_queue_start			= cpfl_tx_queue_start,
 	.dev_supported_ptypes_get	= cpfl_dev_supported_ptypes_get,
 };
 
diff --git a/drivers/net/cpfl/cpfl_rxtx.c b/drivers/net/cpfl/cpfl_rxtx.c
index 930d725a4a..c13166b63c 100644
--- a/drivers/net/cpfl/cpfl_rxtx.c
+++ b/drivers/net/cpfl/cpfl_rxtx.c
@@ -474,3 +474,141 @@ cpfl_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 err_txq_alloc:
 	return ret;
 }
+
+int
+cpfl_rx_queue_init(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+{
+	struct idpf_rx_queue *rxq;
+	int err;
+
+	if (rx_queue_id >= dev->data->nb_rx_queues)
+		return -EINVAL;
+
+	rxq = dev->data->rx_queues[rx_queue_id];
+
+	if (rxq == NULL || !rxq->q_set) {
+		PMD_DRV_LOG(ERR, "RX queue %u is not available or not set up",
+			    rx_queue_id);
+		return -EINVAL;
+	}
+
+	if (rxq->adapter->is_rx_singleq) {
+		/* Single queue */
+		err = idpf_qc_single_rxq_mbufs_alloc(rxq);
+		if (err != 0) {
+			PMD_DRV_LOG(ERR, "Failed to allocate RX queue mbuf");
+			return err;
+		}
+
+		rte_wmb();
+
+		/* Init the RX tail register. */
+		IDPF_PCI_REG_WRITE(rxq->qrx_tail, rxq->nb_rx_desc - 1);
+	} else {
+		/* Split queue */
+		err = idpf_qc_split_rxq_mbufs_alloc(rxq->bufq1);
+		if (err != 0) {
+			PMD_DRV_LOG(ERR, "Failed to allocate RX buffer queue mbuf");
+			return err;
+		}
+		err = idpf_qc_split_rxq_mbufs_alloc(rxq->bufq2);
+		if (err != 0) {
+			PMD_DRV_LOG(ERR, "Failed to allocate RX buffer queue mbuf");
+			return err;
+		}
+
+		rte_wmb();
+
+		/* Init the RX tail register. */
+		IDPF_PCI_REG_WRITE(rxq->bufq1->qrx_tail, rxq->bufq1->rx_tail);
+		IDPF_PCI_REG_WRITE(rxq->bufq2->qrx_tail, rxq->bufq2->rx_tail);
+	}
+
+	return err;
+}
+
+int
+cpfl_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_rx_queue *rxq =
+		dev->data->rx_queues[rx_queue_id];
+	int err = 0;
+
+	err = idpf_vc_rxq_config(vport, rxq);
+	if (err != 0) {
+		PMD_DRV_LOG(ERR, "Failed to configure Rx queue %u", rx_queue_id);
+		return err;
+	}
+
+	err = cpfl_rx_queue_init(dev, rx_queue_id);
+	if (err != 0) {
+		PMD_DRV_LOG(ERR, "Failed to init RX queue %u",
+			    rx_queue_id);
+		return err;
+	}
+
+	/* Ready to switch the queue on */
+	err = idpf_vc_queue_switch(vport, rx_queue_id, true, true);
+	if (err != 0) {
+		PMD_DRV_LOG(ERR, "Failed to switch RX queue %u on",
+			    rx_queue_id);
+	} else {
+		rxq->q_started = true;
+		dev->data->rx_queue_state[rx_queue_id] =
+			RTE_ETH_QUEUE_STATE_STARTED;
+	}
+
+	return err;
+}
+
+int
+cpfl_tx_queue_init(struct rte_eth_dev *dev, uint16_t tx_queue_id)
+{
+	struct idpf_tx_queue *txq;
+
+	if (tx_queue_id >= dev->data->nb_tx_queues)
+		return -EINVAL;
+
+	txq = dev->data->tx_queues[tx_queue_id];
+
+	/* Init the TX tail register. */
+	IDPF_PCI_REG_WRITE(txq->qtx_tail, 0);
+
+	return 0;
+}
+
+int
+cpfl_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_tx_queue *txq =
+		dev->data->tx_queues[tx_queue_id];
+	int err = 0;
+
+	err = idpf_vc_txq_config(vport, txq);
+	if (err != 0) {
+		PMD_DRV_LOG(ERR, "Failed to configure Tx queue %u", tx_queue_id);
+		return err;
+	}
+
+	err = cpfl_tx_queue_init(dev, tx_queue_id);
+	if (err != 0) {
+		PMD_DRV_LOG(ERR, "Failed to init TX queue %u",
+			    tx_queue_id);
+		return err;
+	}
+
+	/* Ready to switch the queue on */
+	err = idpf_vc_queue_switch(vport, tx_queue_id, false, true);
+	if (err != 0) {
+		PMD_DRV_LOG(ERR, "Failed to switch TX queue %u on",
+			    tx_queue_id);
+	} else {
+		txq->q_started = true;
+		dev->data->tx_queue_state[tx_queue_id] =
+			RTE_ETH_QUEUE_STATE_STARTED;
+	}
+
+	return err;
+}
diff --git a/drivers/net/cpfl/cpfl_rxtx.h b/drivers/net/cpfl/cpfl_rxtx.h
index e0221abfa3..716b2fefa4 100644
--- a/drivers/net/cpfl/cpfl_rxtx.h
+++ b/drivers/net/cpfl/cpfl_rxtx.h
@@ -28,4 +28,8 @@ int cpfl_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 			uint16_t nb_desc, unsigned int socket_id,
 			const struct rte_eth_rxconf *rx_conf,
 			struct rte_mempool *mp);
+int cpfl_rx_queue_init(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+int cpfl_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+int cpfl_tx_queue_init(struct rte_eth_dev *dev, uint16_t tx_queue_id);
+int cpfl_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id);
 #endif /* _CPFL_RXTX_H_ */
-- 
2.34.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v8 06/21] net/cpfl: support queue stop
  2023-03-02 10:35           ` [PATCH v8 " Mingxia Liu
                               ` (4 preceding siblings ...)
  2023-03-02 10:35             ` [PATCH v8 05/21] net/cpfl: support queue start Mingxia Liu
@ 2023-03-02 10:35             ` Mingxia Liu
  2023-03-02 10:35             ` [PATCH v8 07/21] net/cpfl: support queue release Mingxia Liu
                               ` (15 subsequent siblings)
  21 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-03-02 10:35 UTC (permalink / raw)
  To: dev, beilei.xing, yuying.zhang; +Cc: Mingxia Liu

Add support for these device ops:
 - rx_queue_stop
 - tx_queue_stop
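
For reference, these ops back the generic ethdev queue-control API. A
minimal application-side sketch (port and queue ids are placeholders,
not part of this patch):

    #include <rte_ethdev.h>

    /* Stop an Rx queue, then bring it back up. */
    static int
    toggle_rx_queue(uint16_t port_id, uint16_t queue_id)
    {
            int ret = rte_eth_dev_rx_queue_stop(port_id, queue_id);
            if (ret != 0)
                    return ret; /* queue is left stopped on error */
            return rte_eth_dev_rx_queue_start(port_id, queue_id);
    }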

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/net/cpfl/cpfl_ethdev.c | 10 +++-
 drivers/net/cpfl/cpfl_rxtx.c   | 98 ++++++++++++++++++++++++++++++++++
 drivers/net/cpfl/cpfl_rxtx.h   |  3 ++
 3 files changed, 110 insertions(+), 1 deletion(-)

diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index 3248d22d2f..00672142e3 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -229,12 +229,16 @@ cpfl_dev_start(struct rte_eth_dev *dev)
 	ret = idpf_vc_vport_ena_dis(vport, true);
 	if (ret != 0) {
 		PMD_DRV_LOG(ERR, "Failed to enable vport");
-		return ret;
+		goto err_vport;
 	}
 
 	vport->stopped = 0;
 
 	return 0;
+
+err_vport:
+	cpfl_stop_queues(dev);
+	return ret;
 }
 
 static int
@@ -247,6 +251,8 @@ cpfl_dev_stop(struct rte_eth_dev *dev)
 
 	idpf_vc_vport_ena_dis(vport, false);
 
+	cpfl_stop_queues(dev);
+
 	vport->stopped = 1;
 
 	return 0;
@@ -281,6 +287,8 @@ static const struct eth_dev_ops cpfl_eth_dev_ops = {
 	.link_update			= cpfl_dev_link_update,
 	.rx_queue_start			= cpfl_rx_queue_start,
 	.tx_queue_start			= cpfl_tx_queue_start,
+	.rx_queue_stop			= cpfl_rx_queue_stop,
+	.tx_queue_stop			= cpfl_tx_queue_stop,
 	.dev_supported_ptypes_get	= cpfl_dev_supported_ptypes_get,
 };
 
diff --git a/drivers/net/cpfl/cpfl_rxtx.c b/drivers/net/cpfl/cpfl_rxtx.c
index c13166b63c..08db01412e 100644
--- a/drivers/net/cpfl/cpfl_rxtx.c
+++ b/drivers/net/cpfl/cpfl_rxtx.c
@@ -49,6 +49,14 @@ cpfl_tx_offload_convert(uint64_t offload)
 	return ol;
 }
 
+static const struct idpf_rxq_ops def_rxq_ops = {
+	.release_mbufs = idpf_qc_rxq_mbufs_release,
+};
+
+static const struct idpf_txq_ops def_txq_ops = {
+	.release_mbufs = idpf_qc_txq_mbufs_release,
+};
+
 static const struct rte_memzone *
 cpfl_dma_zone_reserve(struct rte_eth_dev *dev, uint16_t queue_idx,
 		      uint16_t len, uint16_t queue_type,
@@ -177,6 +185,7 @@ cpfl_rx_split_bufq_setup(struct rte_eth_dev *dev, struct idpf_rx_queue *rxq,
 	idpf_qc_split_rx_bufq_reset(bufq);
 	bufq->qrx_tail = hw->hw_addr + (vport->chunks_info.rx_buf_qtail_start +
 			 queue_idx * vport->chunks_info.rx_buf_qtail_spacing);
+	bufq->ops = &def_rxq_ops;
 	bufq->q_set = true;
 
 	if (bufq_id == IDPF_RX_SPLIT_BUFQ1_ID) {
@@ -287,6 +296,7 @@ cpfl_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 		idpf_qc_single_rx_queue_reset(rxq);
 		rxq->qrx_tail = hw->hw_addr + (vport->chunks_info.rx_qtail_start +
 				queue_idx * vport->chunks_info.rx_qtail_spacing);
+		rxq->ops = &def_rxq_ops;
 	} else {
 		idpf_qc_split_rx_descq_reset(rxq);
 
@@ -461,6 +471,7 @@ cpfl_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 
 	txq->qtx_tail = hw->hw_addr + (vport->chunks_info.tx_qtail_start +
 			queue_idx * vport->chunks_info.tx_qtail_spacing);
+	txq->ops = &def_txq_ops;
 	txq->q_set = true;
 	dev->data->tx_queues[queue_idx] = txq;
 
@@ -612,3 +623,90 @@ cpfl_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
 
 	return err;
 }
+
+int
+cpfl_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_rx_queue *rxq;
+	int err;
+
+	if (rx_queue_id >= dev->data->nb_rx_queues)
+		return -EINVAL;
+
+	err = idpf_vc_queue_switch(vport, rx_queue_id, true, false);
+	if (err != 0) {
+		PMD_DRV_LOG(ERR, "Failed to switch RX queue %u off",
+			    rx_queue_id);
+		return err;
+	}
+
+	rxq = dev->data->rx_queues[rx_queue_id];
+	if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE) {
+		rxq->ops->release_mbufs(rxq);
+		idpf_qc_single_rx_queue_reset(rxq);
+	} else {
+		rxq->bufq1->ops->release_mbufs(rxq->bufq1);
+		rxq->bufq2->ops->release_mbufs(rxq->bufq2);
+		idpf_qc_split_rx_queue_reset(rxq);
+	}
+	dev->data->rx_queue_state[rx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
+
+	return 0;
+}
+
+int
+cpfl_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_tx_queue *txq;
+	int err;
+
+	if (tx_queue_id >= dev->data->nb_tx_queues)
+		return -EINVAL;
+
+	err = idpf_vc_queue_switch(vport, tx_queue_id, false, false);
+	if (err != 0) {
+		PMD_DRV_LOG(ERR, "Failed to switch TX queue %u off",
+			    tx_queue_id);
+		return err;
+	}
+
+	txq = dev->data->tx_queues[tx_queue_id];
+	txq->ops->release_mbufs(txq);
+	if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE) {
+		idpf_qc_single_tx_queue_reset(txq);
+	} else {
+		idpf_qc_split_tx_descq_reset(txq);
+		idpf_qc_split_tx_complq_reset(txq->complq);
+	}
+	dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
+
+	return 0;
+}
+
+void
+cpfl_stop_queues(struct rte_eth_dev *dev)
+{
+	struct idpf_rx_queue *rxq;
+	struct idpf_tx_queue *txq;
+	int i;
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		rxq = dev->data->rx_queues[i];
+		if (rxq == NULL)
+			continue;
+
+		if (cpfl_rx_queue_stop(dev, i) != 0)
+			PMD_DRV_LOG(WARNING, "Fail to stop Rx queue %d", i);
+	}
+
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		txq = dev->data->tx_queues[i];
+		if (txq == NULL)
+			continue;
+
+		if (cpfl_tx_queue_stop(dev, i) != 0)
+			PMD_DRV_LOG(WARNING, "Fail to stop Tx queue %d", i);
+	}
+}
diff --git a/drivers/net/cpfl/cpfl_rxtx.h b/drivers/net/cpfl/cpfl_rxtx.h
index 716b2fefa4..e9b810deaa 100644
--- a/drivers/net/cpfl/cpfl_rxtx.h
+++ b/drivers/net/cpfl/cpfl_rxtx.h
@@ -32,4 +32,7 @@ int cpfl_rx_queue_init(struct rte_eth_dev *dev, uint16_t rx_queue_id);
 int cpfl_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id);
 int cpfl_tx_queue_init(struct rte_eth_dev *dev, uint16_t tx_queue_id);
 int cpfl_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id);
+void cpfl_stop_queues(struct rte_eth_dev *dev);
+int cpfl_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id);
+int cpfl_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id);
 #endif /* _CPFL_RXTX_H_ */
-- 
2.34.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v8 07/21] net/cpfl: support queue release
  2023-03-02 10:35           ` [PATCH v8 " Mingxia Liu
                               ` (5 preceding siblings ...)
  2023-03-02 10:35             ` [PATCH v8 06/21] net/cpfl: support queue stop Mingxia Liu
@ 2023-03-02 10:35             ` Mingxia Liu
  2023-03-02 10:35             ` [PATCH v8 08/21] net/cpfl: support MTU configuration Mingxia Liu
                               ` (14 subsequent siblings)
  21 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-03-02 10:35 UTC (permalink / raw)
  To: dev, beilei.xing, yuying.zhang; +Cc: Mingxia Liu

Add support for queue operations:
 - rx_queue_release
 - tx_queue_release
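
These ops are invoked from the ethdev layer rather than called
directly; a hedged sketch of one trigger, closing a port (port id is
a placeholder):

    #include <rte_ethdev.h>

    /* Stopping and closing a port releases its queues via the new ops. */
    static int
    shutdown_port(uint16_t port_id)
    {
            int ret = rte_eth_dev_stop(port_id);
            if (ret != 0)
                    return ret;
            return rte_eth_dev_close(port_id);
    }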

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/net/cpfl/cpfl_ethdev.c |  2 ++
 drivers/net/cpfl/cpfl_rxtx.c   | 24 ++++++++++++++++++++++++
 drivers/net/cpfl/cpfl_rxtx.h   |  2 ++
 3 files changed, 28 insertions(+)

diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index 00672142e3..cfce3f60d7 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -289,6 +289,8 @@ static const struct eth_dev_ops cpfl_eth_dev_ops = {
 	.tx_queue_start			= cpfl_tx_queue_start,
 	.rx_queue_stop			= cpfl_rx_queue_stop,
 	.tx_queue_stop			= cpfl_tx_queue_stop,
+	.rx_queue_release		= cpfl_dev_rx_queue_release,
+	.tx_queue_release		= cpfl_dev_tx_queue_release,
 	.dev_supported_ptypes_get	= cpfl_dev_supported_ptypes_get,
 };
 
diff --git a/drivers/net/cpfl/cpfl_rxtx.c b/drivers/net/cpfl/cpfl_rxtx.c
index 08db01412e..f9295c970f 100644
--- a/drivers/net/cpfl/cpfl_rxtx.c
+++ b/drivers/net/cpfl/cpfl_rxtx.c
@@ -244,6 +244,12 @@ cpfl_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 	if (idpf_qc_rx_thresh_check(nb_desc, rx_free_thresh) != 0)
 		return -EINVAL;
 
+	/* Free memory if needed */
+	if (dev->data->rx_queues[queue_idx] != NULL) {
+		idpf_qc_rx_queue_release(dev->data->rx_queues[queue_idx]);
+		dev->data->rx_queues[queue_idx] = NULL;
+	}
+
 	/* Setup Rx queue */
 	rxq = rte_zmalloc_socket("cpfl rxq",
 				 sizeof(struct idpf_rx_queue),
@@ -409,6 +415,12 @@ cpfl_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 	if (idpf_qc_tx_thresh_check(nb_desc, tx_rs_thresh, tx_free_thresh) != 0)
 		return -EINVAL;
 
+	/* Free memory if needed. */
+	if (dev->data->tx_queues[queue_idx] != NULL) {
+		idpf_qc_tx_queue_release(dev->data->tx_queues[queue_idx]);
+		dev->data->tx_queues[queue_idx] = NULL;
+	}
+
 	/* Allocate the TX queue data structure. */
 	txq = rte_zmalloc_socket("cpfl txq",
 				 sizeof(struct idpf_tx_queue),
@@ -685,6 +697,18 @@ cpfl_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
 	return 0;
 }
 
+void
+cpfl_dev_rx_queue_release(struct rte_eth_dev *dev, uint16_t qid)
+{
+	idpf_qc_rx_queue_release(dev->data->rx_queues[qid]);
+}
+
+void
+cpfl_dev_tx_queue_release(struct rte_eth_dev *dev, uint16_t qid)
+{
+	idpf_qc_tx_queue_release(dev->data->tx_queues[qid]);
+}
+
 void
 cpfl_stop_queues(struct rte_eth_dev *dev)
 {
diff --git a/drivers/net/cpfl/cpfl_rxtx.h b/drivers/net/cpfl/cpfl_rxtx.h
index e9b810deaa..f5882401dc 100644
--- a/drivers/net/cpfl/cpfl_rxtx.h
+++ b/drivers/net/cpfl/cpfl_rxtx.h
@@ -35,4 +35,6 @@ int cpfl_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id);
 void cpfl_stop_queues(struct rte_eth_dev *dev);
 int cpfl_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id);
 int cpfl_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+void cpfl_dev_tx_queue_release(struct rte_eth_dev *dev, uint16_t qid);
+void cpfl_dev_rx_queue_release(struct rte_eth_dev *dev, uint16_t qid);
 #endif /* _CPFL_RXTX_H_ */
-- 
2.34.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v8 08/21] net/cpfl: support MTU configuration
  2023-03-02 10:35           ` [PATCH v8 " Mingxia Liu
                               ` (6 preceding siblings ...)
  2023-03-02 10:35             ` [PATCH v8 07/21] net/cpfl: support queue release Mingxia Liu
@ 2023-03-02 10:35             ` Mingxia Liu
  2023-03-02 10:35             ` [PATCH v8 09/21] net/cpfl: support basic Rx data path Mingxia Liu
                               ` (13 subsequent siblings)
  21 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-03-02 10:35 UTC (permalink / raw)
  To: dev, beilei.xing, yuying.zhang; +Cc: Mingxia Liu

Add the mtu_set device op.
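
A minimal usage sketch; per this patch the op rejects changes while
the port is running, so the port is stopped first (port id and MTU
value are placeholders):

    #include <rte_ethdev.h>

    static int
    set_port_mtu(uint16_t port_id, uint16_t mtu)
    {
            int ret = rte_eth_dev_stop(port_id);
            if (ret != 0)
                    return ret;
            ret = rte_eth_dev_set_mtu(port_id, mtu); /* -EBUSY if started */
            if (ret != 0)
                    return ret;
            return rte_eth_dev_start(port_id);
    }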

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 doc/guides/nics/features/cpfl.ini |  1 +
 drivers/net/cpfl/cpfl_ethdev.c    | 27 +++++++++++++++++++++++++++
 2 files changed, 28 insertions(+)

diff --git a/doc/guides/nics/features/cpfl.ini b/doc/guides/nics/features/cpfl.ini
index a2d1ca9e15..470ba81579 100644
--- a/doc/guides/nics/features/cpfl.ini
+++ b/doc/guides/nics/features/cpfl.ini
@@ -7,6 +7,7 @@
 ; is selected.
 ;
 [Features]
+MTU update           = Y
 Linux                = Y
 x86-32               = Y
 x86-64               = Y
diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index cfce3f60d7..158cd0ae68 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -118,6 +118,27 @@ cpfl_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	return 0;
 }
 
+static int
+cpfl_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+
+	/* MTU setting is forbidden while the port is started */
+	if (dev->data->dev_started) {
+		PMD_DRV_LOG(ERR, "port must be stopped before configuration");
+		return -EBUSY;
+	}
+
+	if (mtu > vport->max_mtu) {
+		PMD_DRV_LOG(ERR, "MTU should be less than %d", vport->max_mtu);
+		return -EINVAL;
+	}
+
+	vport->max_pkt_len = mtu + CPFL_ETH_OVERHEAD;
+
+	return 0;
+}
+
 static const uint32_t *
 cpfl_dev_supported_ptypes_get(struct rte_eth_dev *dev __rte_unused)
 {
@@ -139,6 +160,7 @@ cpfl_dev_supported_ptypes_get(struct rte_eth_dev *dev __rte_unused)
 static int
 cpfl_dev_configure(struct rte_eth_dev *dev)
 {
+	struct idpf_vport *vport = dev->data->dev_private;
 	struct rte_eth_conf *conf = &dev->data->dev_conf;
 
 	if (conf->link_speeds & RTE_ETH_LINK_SPEED_FIXED) {
@@ -178,6 +200,10 @@ cpfl_dev_configure(struct rte_eth_dev *dev)
 		return -ENOTSUP;
 	}
 
+	vport->max_pkt_len =
+		(dev->data->mtu == 0) ? CPFL_DEFAULT_MTU : dev->data->mtu +
+		CPFL_ETH_OVERHEAD;
+
 	return 0;
 }
 
@@ -291,6 +317,7 @@ static const struct eth_dev_ops cpfl_eth_dev_ops = {
 	.tx_queue_stop			= cpfl_tx_queue_stop,
 	.rx_queue_release		= cpfl_dev_rx_queue_release,
 	.tx_queue_release		= cpfl_dev_tx_queue_release,
+	.mtu_set			= cpfl_dev_mtu_set,
 	.dev_supported_ptypes_get	= cpfl_dev_supported_ptypes_get,
 };
 
-- 
2.34.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v8 09/21] net/cpfl: support basic Rx data path
  2023-03-02 10:35           ` [PATCH v8 " Mingxia Liu
                               ` (7 preceding siblings ...)
  2023-03-02 10:35             ` [PATCH v8 08/21] net/cpfl: support MTU configuration Mingxia Liu
@ 2023-03-02 10:35             ` Mingxia Liu
  2023-03-02 10:35             ` [PATCH v8 10/21] net/cpfl: support basic Tx " Mingxia Liu
                               ` (12 subsequent siblings)
  21 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-03-02 10:35 UTC (permalink / raw)
  To: dev, beilei.xing, yuying.zhang; +Cc: Mingxia Liu

Add basic Rx support in split queue mode and single queue mode.
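
A minimal receive loop against the new data path; the PMD selects the
split or single queue burst function internally, so the application
side is the same for both (burst size and ids are placeholders):

    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    #define BURST_SZ 32

    static void
    rx_loop(uint16_t port_id, uint16_t queue_id)
    {
            struct rte_mbuf *pkts[BURST_SZ];
            uint16_t i, nb;

            for (;;) {
                    nb = rte_eth_rx_burst(port_id, queue_id, pkts, BURST_SZ);
                    for (i = 0; i < nb; i++)
                            rte_pktmbuf_free(pkts[i]); /* consume packet */
            }
    }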

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/net/cpfl/cpfl_ethdev.c |  2 ++
 drivers/net/cpfl/cpfl_rxtx.c   | 18 ++++++++++++++++++
 drivers/net/cpfl/cpfl_rxtx.h   |  1 +
 3 files changed, 21 insertions(+)

diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index 158cd0ae68..24f614d397 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -252,6 +252,8 @@ cpfl_dev_start(struct rte_eth_dev *dev)
 		return ret;
 	}
 
+	cpfl_set_rx_function(dev);
+
 	ret = idpf_vc_vport_ena_dis(vport, true);
 	if (ret != 0) {
 		PMD_DRV_LOG(ERR, "Failed to enable vport");
diff --git a/drivers/net/cpfl/cpfl_rxtx.c b/drivers/net/cpfl/cpfl_rxtx.c
index f9295c970f..a0a442f61d 100644
--- a/drivers/net/cpfl/cpfl_rxtx.c
+++ b/drivers/net/cpfl/cpfl_rxtx.c
@@ -734,3 +734,21 @@ cpfl_stop_queues(struct rte_eth_dev *dev)
 			PMD_DRV_LOG(WARNING, "Fail to stop Tx queue %d", i);
 	}
 }
+
+void
+cpfl_set_rx_function(struct rte_eth_dev *dev)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+
+	if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
+		PMD_DRV_LOG(NOTICE,
+			    "Using Split Scalar Rx (port %d).",
+			    dev->data->port_id);
+		dev->rx_pkt_burst = idpf_dp_splitq_recv_pkts;
+	} else {
+		PMD_DRV_LOG(NOTICE,
+			    "Using Single Scalar Rx (port %d).",
+			    dev->data->port_id);
+		dev->rx_pkt_burst = idpf_dp_singleq_recv_pkts;
+	}
+}
diff --git a/drivers/net/cpfl/cpfl_rxtx.h b/drivers/net/cpfl/cpfl_rxtx.h
index f5882401dc..a5dd388e1f 100644
--- a/drivers/net/cpfl/cpfl_rxtx.h
+++ b/drivers/net/cpfl/cpfl_rxtx.h
@@ -37,4 +37,5 @@ int cpfl_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id);
 int cpfl_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id);
 void cpfl_dev_tx_queue_release(struct rte_eth_dev *dev, uint16_t qid);
 void cpfl_dev_rx_queue_release(struct rte_eth_dev *dev, uint16_t qid);
+void cpfl_set_rx_function(struct rte_eth_dev *dev);
 #endif /* _CPFL_RXTX_H_ */
-- 
2.34.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v8 10/21] net/cpfl: support basic Tx data path
  2023-03-02 10:35           ` [PATCH v8 " Mingxia Liu
                               ` (8 preceding siblings ...)
  2023-03-02 10:35             ` [PATCH v8 09/21] net/cpfl: support basic Rx data path Mingxia Liu
@ 2023-03-02 10:35             ` Mingxia Liu
  2023-03-02 10:35             ` [PATCH v8 11/21] net/cpfl: support write back based on ITR expire Mingxia Liu
                               ` (11 subsequent siblings)
  21 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-03-02 10:35 UTC (permalink / raw)
  To: dev, beilei.xing, yuying.zhang; +Cc: Mingxia Liu

Add basic Tx support in split queue mode and single queue mode.
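
The transmit side is symmetric; a hedged sketch that sends a prepared
burst and frees whatever could not be queued (ids are placeholders):

    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    static void
    tx_burst_all(uint16_t port_id, uint16_t queue_id,
                 struct rte_mbuf **pkts, uint16_t nb)
    {
            uint16_t sent = rte_eth_tx_burst(port_id, queue_id, pkts, nb);

            while (sent < nb) /* drop what the queue could not take */
                    rte_pktmbuf_free(pkts[sent++]);
    }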

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/net/cpfl/cpfl_ethdev.c |  3 +++
 drivers/net/cpfl/cpfl_rxtx.c   | 20 ++++++++++++++++++++
 drivers/net/cpfl/cpfl_rxtx.h   |  1 +
 3 files changed, 24 insertions(+)

diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index 24f614d397..37e0270dd7 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -94,6 +94,8 @@ cpfl_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->max_mtu = vport->max_mtu;
 	dev_info->min_mtu = RTE_ETHER_MIN_MTU;
 
+	dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
+
 	dev_info->default_txconf = (struct rte_eth_txconf) {
 		.tx_free_thresh = CPFL_DEFAULT_TX_FREE_THRESH,
 		.tx_rs_thresh = CPFL_DEFAULT_TX_RS_THRESH,
@@ -253,6 +255,7 @@ cpfl_dev_start(struct rte_eth_dev *dev)
 	}
 
 	cpfl_set_rx_function(dev);
+	cpfl_set_tx_function(dev);
 
 	ret = idpf_vc_vport_ena_dis(vport, true);
 	if (ret != 0) {
diff --git a/drivers/net/cpfl/cpfl_rxtx.c b/drivers/net/cpfl/cpfl_rxtx.c
index a0a442f61d..520f61e07e 100644
--- a/drivers/net/cpfl/cpfl_rxtx.c
+++ b/drivers/net/cpfl/cpfl_rxtx.c
@@ -752,3 +752,23 @@ cpfl_set_rx_function(struct rte_eth_dev *dev)
 		dev->rx_pkt_burst = idpf_dp_singleq_recv_pkts;
 	}
 }
+
+void
+cpfl_set_tx_function(struct rte_eth_dev *dev)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+
+	if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
+		PMD_DRV_LOG(NOTICE,
+			    "Using Split Scalar Tx (port %d).",
+			    dev->data->port_id);
+		dev->tx_pkt_burst = idpf_dp_splitq_xmit_pkts;
+		dev->tx_pkt_prepare = idpf_dp_prep_pkts;
+	} else {
+		PMD_DRV_LOG(NOTICE,
+			    "Using Single Scalar Tx (port %d).",
+			    dev->data->port_id);
+		dev->tx_pkt_burst = idpf_dp_singleq_xmit_pkts;
+		dev->tx_pkt_prepare = idpf_dp_prep_pkts;
+	}
+}
diff --git a/drivers/net/cpfl/cpfl_rxtx.h b/drivers/net/cpfl/cpfl_rxtx.h
index a5dd388e1f..5f8144e55f 100644
--- a/drivers/net/cpfl/cpfl_rxtx.h
+++ b/drivers/net/cpfl/cpfl_rxtx.h
@@ -38,4 +38,5 @@ int cpfl_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id);
 void cpfl_dev_tx_queue_release(struct rte_eth_dev *dev, uint16_t qid);
 void cpfl_dev_rx_queue_release(struct rte_eth_dev *dev, uint16_t qid);
 void cpfl_set_rx_function(struct rte_eth_dev *dev);
+void cpfl_set_tx_function(struct rte_eth_dev *dev);
 #endif /* _CPFL_RXTX_H_ */
-- 
2.34.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v8 11/21] net/cpfl: support write back based on ITR expire
  2023-03-02 10:35           ` [PATCH v8 " Mingxia Liu
                               ` (9 preceding siblings ...)
  2023-03-02 10:35             ` [PATCH v8 10/21] net/cpfl: support basic Tx " Mingxia Liu
@ 2023-03-02 10:35             ` Mingxia Liu
  2023-03-02 10:35             ` [PATCH v8 12/21] net/cpfl: support RSS Mingxia Liu
                               ` (10 subsequent siblings)
  21 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-03-02 10:35 UTC (permalink / raw)
  To: dev, beilei.xing, yuying.zhang; +Cc: Mingxia Liu

ITR is the interval between two interrupts; it can be understood as
a timer here. WB_ON_ITR (write back on ITR expire) is used to receive
packets without interrupts and without waiting for a full cache line
of completed descriptors, so packets can be received one by one.

To enable WB_ON_ITR, an interrupt must first be enabled with
'idpf_vport_irq_map_config()'.
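
A condensed sketch of the ordering this patch adds to dev_start,
distilled from the diff below (error unwinding and queue start are
omitted; this is not a drop-in implementation):

    #include "cpfl_ethdev.h"

    static int
    enable_wb_on_itr(struct rte_eth_dev *dev, struct idpf_vport *vport)
    {
            /* 1. allocate interrupt vectors from the control plane */
            int ret = idpf_vc_vectors_alloc(vport, CPFL_DFLT_Q_VEC_NUM);
            if (ret != 0)
                    return ret;
            /* 2. map Rx queues to vectors; this arms WB_ON_ITR */
            ret = idpf_vport_irq_map_config(vport, dev->data->nb_rx_queues);
            if (ret != 0)
                    return ret;
            /* 3. enable the vport as usual */
            return idpf_vc_vport_ena_dis(vport, true);
    }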

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/net/cpfl/cpfl_ethdev.c | 45 +++++++++++++++++++++++++++++++++-
 drivers/net/cpfl/cpfl_ethdev.h |  2 ++
 2 files changed, 46 insertions(+), 1 deletion(-)

diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index 37e0270dd7..b09c7d4996 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -209,6 +209,15 @@ cpfl_dev_configure(struct rte_eth_dev *dev)
 	return 0;
 }
 
+static int
+cpfl_config_rx_queues_irqs(struct rte_eth_dev *dev)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	uint16_t nb_rx_queues = dev->data->nb_rx_queues;
+
+	return idpf_vport_irq_map_config(vport, nb_rx_queues);
+}
+
 static int
 cpfl_start_queues(struct rte_eth_dev *dev)
 {
@@ -246,12 +255,37 @@ static int
 cpfl_dev_start(struct rte_eth_dev *dev)
 {
 	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_adapter *base = vport->adapter;
+	struct cpfl_adapter_ext *adapter = CPFL_ADAPTER_TO_EXT(base);
+	uint16_t num_allocated_vectors = base->caps.num_allocated_vectors;
+	uint16_t req_vecs_num;
 	int ret;
 
+	req_vecs_num = CPFL_DFLT_Q_VEC_NUM;
+	if (req_vecs_num + adapter->used_vecs_num > num_allocated_vectors) {
+		PMD_DRV_LOG(ERR, "The accumulated request vectors' number should be less than %d",
+			    num_allocated_vectors);
+		ret = -EINVAL;
+		goto err_vec;
+	}
+
+	ret = idpf_vc_vectors_alloc(vport, req_vecs_num);
+	if (ret != 0) {
+		PMD_DRV_LOG(ERR, "Failed to allocate interrupt vectors");
+		goto err_vec;
+	}
+	adapter->used_vecs_num += req_vecs_num;
+
+	ret = cpfl_config_rx_queues_irqs(dev);
+	if (ret != 0) {
+		PMD_DRV_LOG(ERR, "Failed to configure irqs");
+		goto err_irq;
+	}
+
 	ret = cpfl_start_queues(dev);
 	if (ret != 0) {
 		PMD_DRV_LOG(ERR, "Failed to start queues");
-		return ret;
+		goto err_startq;
 	}
 
 	cpfl_set_rx_function(dev);
@@ -269,6 +303,11 @@ cpfl_dev_start(struct rte_eth_dev *dev)
 
 err_vport:
 	cpfl_stop_queues(dev);
+err_startq:
+	idpf_vport_irq_unmap_config(vport, dev->data->nb_rx_queues);
+err_irq:
+	idpf_vc_vectors_dealloc(vport);
+err_vec:
 	return ret;
 }
 
@@ -284,6 +323,10 @@ cpfl_dev_stop(struct rte_eth_dev *dev)
 
 	cpfl_stop_queues(dev);
 
+	idpf_vport_irq_unmap_config(vport, dev->data->nb_rx_queues);
+
+	idpf_vc_vectors_dealloc(vport);
+
 	vport->stopped = 1;
 
 	return 0;
diff --git a/drivers/net/cpfl/cpfl_ethdev.h b/drivers/net/cpfl/cpfl_ethdev.h
index 9738e89ca8..4d1441ae64 100644
--- a/drivers/net/cpfl/cpfl_ethdev.h
+++ b/drivers/net/cpfl/cpfl_ethdev.h
@@ -25,6 +25,8 @@
 
 #define CPFL_INVALID_VPORT_IDX	0xffff
 
+#define CPFL_DFLT_Q_VEC_NUM	1
+
 #define CPFL_MIN_BUF_SIZE	1024
 #define CPFL_MAX_FRAME_SIZE	9728
 #define CPFL_DEFAULT_MTU	RTE_ETHER_MTU
-- 
2.34.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v8 12/21] net/cpfl: support RSS
  2023-03-02 10:35           ` [PATCH v8 " Mingxia Liu
                               ` (10 preceding siblings ...)
  2023-03-02 10:35             ` [PATCH v8 11/21] net/cpfl: support write back based on ITR expire Mingxia Liu
@ 2023-03-02 10:35             ` Mingxia Liu
  2023-03-02 10:35             ` [PATCH v8 13/21] net/cpfl: support Rx offloading Mingxia Liu
                               ` (9 subsequent siblings)
  21 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-03-02 10:35 UTC (permalink / raw)
  To: dev, beilei.xing, yuying.zhang; +Cc: Mingxia Liu

Add RSS support.
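
A minimal configure-time sketch; as in cpfl_init_rss() below, the key
may be left NULL so the PMD generates a random one (queue counts are
placeholders):

    #include <rte_ethdev.h>

    static int
    configure_rss(uint16_t port_id, uint16_t nb_rxq, uint16_t nb_txq)
    {
            struct rte_eth_conf conf = {
                    .rxmode = { .mq_mode = RTE_ETH_MQ_RX_RSS },
                    .rx_adv_conf.rss_conf = {
                            .rss_key = NULL, /* PMD picks a random key */
                            .rss_hf = RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_IPV6,
                    },
            };

            return rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &conf);
    }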

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/net/cpfl/cpfl_ethdev.c | 60 ++++++++++++++++++++++++++++++++++
 drivers/net/cpfl/cpfl_ethdev.h | 15 +++++++++
 2 files changed, 75 insertions(+)

diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index b09c7d4996..7fa52a5a19 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -94,6 +94,8 @@ cpfl_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->max_mtu = vport->max_mtu;
 	dev_info->min_mtu = RTE_ETHER_MIN_MTU;
 
+	dev_info->flow_type_rss_offloads = CPFL_RSS_OFFLOAD_ALL;
+
 	dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 	dev_info->default_txconf = (struct rte_eth_txconf) {
@@ -159,11 +161,49 @@ cpfl_dev_supported_ptypes_get(struct rte_eth_dev *dev __rte_unused)
 	return ptypes;
 }
 
+static int
+cpfl_init_rss(struct idpf_vport *vport)
+{
+	struct rte_eth_rss_conf *rss_conf;
+	struct rte_eth_dev_data *dev_data;
+	uint16_t i, nb_q;
+	int ret = 0;
+
+	dev_data = vport->dev_data;
+	rss_conf = &dev_data->dev_conf.rx_adv_conf.rss_conf;
+	nb_q = dev_data->nb_rx_queues;
+
+	if (rss_conf->rss_key == NULL) {
+		for (i = 0; i < vport->rss_key_size; i++)
+			vport->rss_key[i] = (uint8_t)rte_rand();
+	} else if (rss_conf->rss_key_len != vport->rss_key_size) {
+		PMD_INIT_LOG(ERR, "Invalid RSS key length in RSS configuration, should be %d",
+			     vport->rss_key_size);
+		return -EINVAL;
+	} else {
+		rte_memcpy(vport->rss_key, rss_conf->rss_key,
+			   vport->rss_key_size);
+	}
+
+	for (i = 0; i < vport->rss_lut_size; i++)
+		vport->rss_lut[i] = i % nb_q;
+
+	vport->rss_hf = IDPF_DEFAULT_RSS_HASH_EXPANDED;
+
+	ret = idpf_vport_rss_config(vport);
+	if (ret != 0)
+		PMD_INIT_LOG(ERR, "Failed to configure RSS");
+
+	return ret;
+}
+
 static int
 cpfl_dev_configure(struct rte_eth_dev *dev)
 {
 	struct idpf_vport *vport = dev->data->dev_private;
 	struct rte_eth_conf *conf = &dev->data->dev_conf;
+	struct idpf_adapter *base = vport->adapter;
+	int ret;
 
 	if (conf->link_speeds & RTE_ETH_LINK_SPEED_FIXED) {
 		PMD_INIT_LOG(ERR, "Setting link speed is not supported");
@@ -202,6 +242,26 @@ cpfl_dev_configure(struct rte_eth_dev *dev)
 		return -ENOTSUP;
 	}
 
+	if (conf->rxmode.mq_mode != RTE_ETH_MQ_RX_RSS &&
+	    conf->rxmode.mq_mode != RTE_ETH_MQ_RX_NONE) {
+		PMD_INIT_LOG(ERR, "RX mode %d is not supported.",
+			     conf->rxmode.mq_mode);
+		return -EINVAL;
+	}
+
+	if (base->caps.rss_caps != 0 && dev->data->nb_rx_queues != 0 &&
+		conf->rxmode.mq_mode == RTE_ETH_MQ_RX_RSS) {
+		ret = cpfl_init_rss(vport);
+		if (ret != 0) {
+			PMD_INIT_LOG(ERR, "Failed to init rss");
+			return ret;
+		}
+	} else {
+		PMD_INIT_LOG(ERR, "RSS is not supported.");
+		if (conf->rxmode.mq_mode == RTE_ETH_MQ_RX_RSS)
+			return -ENOTSUP;
+	}
+
 	vport->max_pkt_len =
 		(dev->data->mtu == 0) ? CPFL_DEFAULT_MTU : dev->data->mtu +
 		CPFL_ETH_OVERHEAD;
diff --git a/drivers/net/cpfl/cpfl_ethdev.h b/drivers/net/cpfl/cpfl_ethdev.h
index 4d1441ae64..200dfcac02 100644
--- a/drivers/net/cpfl/cpfl_ethdev.h
+++ b/drivers/net/cpfl/cpfl_ethdev.h
@@ -35,6 +35,21 @@
 #define CPFL_ETH_OVERHEAD \
 	(RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN + CPFL_VLAN_TAG_SIZE * 2)
 
+#define CPFL_RSS_OFFLOAD_ALL (				\
+		RTE_ETH_RSS_IPV4                |	\
+		RTE_ETH_RSS_FRAG_IPV4           |	\
+		RTE_ETH_RSS_NONFRAG_IPV4_TCP    |	\
+		RTE_ETH_RSS_NONFRAG_IPV4_UDP    |	\
+		RTE_ETH_RSS_NONFRAG_IPV4_SCTP   |	\
+		RTE_ETH_RSS_NONFRAG_IPV4_OTHER  |	\
+		RTE_ETH_RSS_IPV6                |	\
+		RTE_ETH_RSS_FRAG_IPV6           |	\
+		RTE_ETH_RSS_NONFRAG_IPV6_TCP    |	\
+		RTE_ETH_RSS_NONFRAG_IPV6_UDP    |	\
+		RTE_ETH_RSS_NONFRAG_IPV6_SCTP   |	\
+		RTE_ETH_RSS_NONFRAG_IPV6_OTHER  |	\
+		RTE_ETH_RSS_L2_PAYLOAD)
+
 #define CPFL_ADAPTER_NAME_LEN	(PCI_PRI_STR_SIZE + 1)
 
 #define CPFL_ALARM_INTERVAL	50000 /* us */
-- 
2.34.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v8 13/21] net/cpfl: support Rx offloading
  2023-03-02 10:35           ` [PATCH v8 " Mingxia Liu
                               ` (11 preceding siblings ...)
  2023-03-02 10:35             ` [PATCH v8 12/21] net/cpfl: support RSS Mingxia Liu
@ 2023-03-02 10:35             ` Mingxia Liu
  2023-03-02 10:35             ` [PATCH v8 14/21] net/cpfl: support Tx offloading Mingxia Liu
                               ` (8 subsequent siblings)
  21 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-03-02 10:35 UTC (permalink / raw)
  To: dev, beilei.xing, yuying.zhang; +Cc: Mingxia Liu

Add Rx offloading support:
 - support CHKSUM and RSS offload for split queue model
 - support CHKSUM offload for single queue model
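
A hedged application-side sketch: request the checksum offloads in
rxmode.offloads at configure time, then test the per-packet result
flags the PMD sets:

    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    /* to be OR'ed into dev_conf.rxmode.offloads before configure */
    static const uint64_t rx_csum_offloads =
            RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
            RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
            RTE_ETH_RX_OFFLOAD_TCP_CKSUM;

    static inline int
    l3_l4_csum_good(const struct rte_mbuf *m)
    {
            return (m->ol_flags & RTE_MBUF_F_RX_IP_CKSUM_GOOD) != 0 &&
                   (m->ol_flags & RTE_MBUF_F_RX_L4_CKSUM_GOOD) != 0;
    }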

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 doc/guides/nics/features/cpfl.ini | 2 ++
 drivers/net/cpfl/cpfl_ethdev.c    | 6 ++++++
 2 files changed, 8 insertions(+)

diff --git a/doc/guides/nics/features/cpfl.ini b/doc/guides/nics/features/cpfl.ini
index 470ba81579..ee5948f444 100644
--- a/doc/guides/nics/features/cpfl.ini
+++ b/doc/guides/nics/features/cpfl.ini
@@ -8,6 +8,8 @@
 ;
 [Features]
 MTU update           = Y
+L3 checksum offload  = P
+L4 checksum offload  = P
 Linux                = Y
 x86-32               = Y
 x86-64               = Y
diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index 7fa52a5a19..3949fd0368 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -96,6 +96,12 @@ cpfl_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 
 	dev_info->flow_type_rss_offloads = CPFL_RSS_OFFLOAD_ALL;
 
+	dev_info->rx_offload_capa =
+		RTE_ETH_RX_OFFLOAD_IPV4_CKSUM           |
+		RTE_ETH_RX_OFFLOAD_UDP_CKSUM            |
+		RTE_ETH_RX_OFFLOAD_TCP_CKSUM            |
+		RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM;
+
 	dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 	dev_info->default_txconf = (struct rte_eth_txconf) {
-- 
2.34.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v8 14/21] net/cpfl: support Tx offloading
  2023-03-02 10:35           ` [PATCH v8 " Mingxia Liu
                               ` (12 preceding siblings ...)
  2023-03-02 10:35             ` [PATCH v8 13/21] net/cpfl: support Rx offloading Mingxia Liu
@ 2023-03-02 10:35             ` Mingxia Liu
  2023-03-02 10:35             ` [PATCH v8 15/21] net/cpfl: add AVX512 data path for single queue model Mingxia Liu
                               ` (7 subsequent siblings)
  21 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-03-02 10:35 UTC (permalink / raw)
  To: dev, beilei.xing, yuying.zhang; +Cc: Mingxia Liu

Add Tx offloading support:
 - support TSO for single queue model and split queue model.
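
A hedged sketch of marking an mbuf for TSO once
RTE_ETH_TX_OFFLOAD_TCP_TSO is enabled on the port; the header lengths
assume a plain IPv4/TCP frame. rte_eth_tx_prepare() should be called
before rte_eth_tx_burst() so tx_pkt_prepare can fix up the headers:

    #include <rte_mbuf.h>
    #include <rte_ether.h>
    #include <rte_ip.h>
    #include <rte_tcp.h>

    static void
    prepare_tso(struct rte_mbuf *m, uint16_t mss)
    {
            m->l2_len = sizeof(struct rte_ether_hdr);
            m->l3_len = sizeof(struct rte_ipv4_hdr);
            m->l4_len = sizeof(struct rte_tcp_hdr);
            m->tso_segsz = mss;
            m->ol_flags |= RTE_MBUF_F_TX_TCP_SEG |
                           RTE_MBUF_F_TX_IPV4 |
                           RTE_MBUF_F_TX_IP_CKSUM;
    }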

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 doc/guides/nics/features/cpfl.ini | 1 +
 drivers/net/cpfl/cpfl_ethdev.c    | 8 +++++++-
 2 files changed, 8 insertions(+), 1 deletion(-)

diff --git a/doc/guides/nics/features/cpfl.ini b/doc/guides/nics/features/cpfl.ini
index ee5948f444..f4e45c7c68 100644
--- a/doc/guides/nics/features/cpfl.ini
+++ b/doc/guides/nics/features/cpfl.ini
@@ -8,6 +8,7 @@
 ;
 [Features]
 MTU update           = Y
+TSO                  = P
 L3 checksum offload  = P
 L4 checksum offload  = P
 Linux                = Y
diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index 3949fd0368..b9bfc38292 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -102,7 +102,13 @@ cpfl_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 		RTE_ETH_RX_OFFLOAD_TCP_CKSUM            |
 		RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM;
 
-	dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
+	dev_info->tx_offload_capa =
+		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM		|
+		RTE_ETH_TX_OFFLOAD_UDP_CKSUM		|
+		RTE_ETH_TX_OFFLOAD_TCP_CKSUM		|
+		RTE_ETH_TX_OFFLOAD_SCTP_CKSUM		|
+		RTE_ETH_TX_OFFLOAD_TCP_TSO		|
+		RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 	dev_info->default_txconf = (struct rte_eth_txconf) {
 		.tx_free_thresh = CPFL_DEFAULT_TX_FREE_THRESH,
-- 
2.34.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v8 15/21] net/cpfl: add AVX512 data path for single queue model
  2023-03-02 10:35           ` [PATCH v8 " Mingxia Liu
                               ` (13 preceding siblings ...)
  2023-03-02 10:35             ` [PATCH v8 14/21] net/cpfl: support Tx offloading Mingxia Liu
@ 2023-03-02 10:35             ` Mingxia Liu
  2023-03-02 10:35             ` [PATCH v8 16/21] net/cpfl: support timestamp offload Mingxia Liu
                               ` (6 subsequent siblings)
  21 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-03-02 10:35 UTC (permalink / raw)
  To: dev, beilei.xing, yuying.zhang; +Cc: Mingxia Liu, Wenjun Wu

Add support for the AVX512 vector data path for the single queue model.
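
The runtime conditions the driver checks before taking this path can
be probed from an application too; a small sketch mirroring
cpfl_set_rx_function() in the diff below (EAL must also allow 512-bit
SIMD, e.g. via --force-max-simd-bitwidth=512):

    #include <rte_vect.h>
    #include <rte_cpuflags.h>

    static int
    avx512_rx_usable(void)
    {
            return rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_512 &&
                   rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512F) == 1 &&
                   rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512BW) == 1;
    }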

Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 doc/guides/nics/cpfl.rst                |  24 +++++-
 drivers/net/cpfl/cpfl_ethdev.c          |   3 +-
 drivers/net/cpfl/cpfl_rxtx.c            |  93 ++++++++++++++++++++++
 drivers/net/cpfl/cpfl_rxtx_vec_common.h | 100 ++++++++++++++++++++++++
 drivers/net/cpfl/meson.build            |  25 +++++-
 5 files changed, 242 insertions(+), 3 deletions(-)
 create mode 100644 drivers/net/cpfl/cpfl_rxtx_vec_common.h

diff --git a/doc/guides/nics/cpfl.rst b/doc/guides/nics/cpfl.rst
index 253fa3afae..e2d71f8a4c 100644
--- a/doc/guides/nics/cpfl.rst
+++ b/doc/guides/nics/cpfl.rst
@@ -82,4 +82,26 @@ Runtime Config Options
 Driver compilation and testing
 ------------------------------
 
-Refer to the document :doc:`build_and_test` for details.
\ No newline at end of file
+Refer to the document :doc:`build_and_test` for details.
+
+Features
+--------
+
+Vector PMD
+~~~~~~~~~~
+
+Vector paths for Rx and Tx are selected automatically.
+The paths are chosen based on two conditions:
+
+- ``CPU``
+
+  On the x86 platform, the driver checks if the CPU supports AVX512.
+  If the CPU supports AVX512 and EAL argument ``--force-max-simd-bitwidth``
+  is set to 512, AVX512 paths will be chosen.
+
+- ``Offload features``
+
+  The supported HW offload features are described in the document cpfl.ini.
+  A value "P" means the offload feature is not supported by the vector path.
+  If any unsupported features are used, the cpfl vector PMD is disabled
+  and the scalar paths are chosen.
diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index b9bfc38292..8685c6e27b 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -108,7 +108,8 @@ cpfl_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 		RTE_ETH_TX_OFFLOAD_TCP_CKSUM		|
 		RTE_ETH_TX_OFFLOAD_SCTP_CKSUM		|
 		RTE_ETH_TX_OFFLOAD_TCP_TSO		|
-		RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
+		RTE_ETH_TX_OFFLOAD_MULTI_SEGS		|
+		RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 	dev_info->default_txconf = (struct rte_eth_txconf) {
 		.tx_free_thresh = CPFL_DEFAULT_TX_FREE_THRESH,
diff --git a/drivers/net/cpfl/cpfl_rxtx.c b/drivers/net/cpfl/cpfl_rxtx.c
index 520f61e07e..a3832acd4f 100644
--- a/drivers/net/cpfl/cpfl_rxtx.c
+++ b/drivers/net/cpfl/cpfl_rxtx.c
@@ -8,6 +8,7 @@
 
 #include "cpfl_ethdev.h"
 #include "cpfl_rxtx.h"
+#include "cpfl_rxtx_vec_common.h"
 
 static uint64_t
 cpfl_rx_offload_convert(uint64_t offload)
@@ -739,24 +740,96 @@ void
 cpfl_set_rx_function(struct rte_eth_dev *dev)
 {
 	struct idpf_vport *vport = dev->data->dev_private;
+#ifdef RTE_ARCH_X86
+	struct idpf_rx_queue *rxq;
+	int i;
+
+	if (cpfl_rx_vec_dev_check_default(dev) == CPFL_VECTOR_PATH &&
+	    rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_128) {
+		vport->rx_vec_allowed = true;
+
+		if (rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_512)
+#ifdef CC_AVX512_SUPPORT
+			if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512F) == 1 &&
+			    rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512BW) == 1)
+				vport->rx_use_avx512 = true;
+#else
+		PMD_DRV_LOG(NOTICE,
+			    "AVX512 is not supported in build env");
+#endif /* CC_AVX512_SUPPORT */
+	} else {
+		vport->rx_vec_allowed = false;
+	}
+#endif /* RTE_ARCH_X86 */
 
+#ifdef RTE_ARCH_X86
 	if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
 		PMD_DRV_LOG(NOTICE,
 			    "Using Split Scalar Rx (port %d).",
 			    dev->data->port_id);
 		dev->rx_pkt_burst = idpf_dp_splitq_recv_pkts;
 	} else {
+		if (vport->rx_vec_allowed) {
+			for (i = 0; i < dev->data->nb_rx_queues; i++) {
+				rxq = dev->data->rx_queues[i];
+				(void)idpf_qc_singleq_rx_vec_setup(rxq);
+			}
+#ifdef CC_AVX512_SUPPORT
+			if (vport->rx_use_avx512) {
+				PMD_DRV_LOG(NOTICE,
+					    "Using Single AVX512 Vector Rx (port %d).",
+					    dev->data->port_id);
+				dev->rx_pkt_burst = idpf_dp_singleq_recv_pkts_avx512;
+				return;
+			}
+#endif /* CC_AVX512_SUPPORT */
+		}
 		PMD_DRV_LOG(NOTICE,
 			    "Using Single Scalar Rx (port %d).",
 			    dev->data->port_id);
 		dev->rx_pkt_burst = idpf_dp_singleq_recv_pkts;
 	}
+#else
+	if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
+		PMD_DRV_LOG(NOTICE,
+			    "Using Split Scalar Rx (port %d).",
+			    dev->data->port_id);
+		dev->rx_pkt_burst = idpf_dp_splitq_recv_pkts;
+	} else {
+		PMD_DRV_LOG(NOTICE,
+			    "Using Single Scalar Rx (port %d).",
+			    dev->data->port_id);
+		dev->rx_pkt_burst = idpf_dp_singleq_recv_pkts;
+	}
+#endif /* RTE_ARCH_X86 */
 }
 
 void
 cpfl_set_tx_function(struct rte_eth_dev *dev)
 {
 	struct idpf_vport *vport = dev->data->dev_private;
+#ifdef RTE_ARCH_X86
+#ifdef CC_AVX512_SUPPORT
+	struct idpf_tx_queue *txq;
+	int i;
+#endif /* CC_AVX512_SUPPORT */
+
+	if (cpfl_tx_vec_dev_check_default(dev) == CPFL_VECTOR_PATH &&
+	    rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_128) {
+		vport->tx_vec_allowed = true;
+		if (rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_512)
+#ifdef CC_AVX512_SUPPORT
+			if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512F) == 1 &&
+			    rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512BW) == 1)
+				vport->tx_use_avx512 = true;
+#else
+		PMD_DRV_LOG(NOTICE,
+			    "AVX512 is not supported in build env");
+#endif /* CC_AVX512_SUPPORT */
+	} else {
+		vport->tx_vec_allowed = false;
+	}
+#endif /* RTE_ARCH_X86 */
 
 	if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
 		PMD_DRV_LOG(NOTICE,
@@ -765,6 +838,26 @@ cpfl_set_tx_function(struct rte_eth_dev *dev)
 		dev->tx_pkt_burst = idpf_dp_splitq_xmit_pkts;
 		dev->tx_pkt_prepare = idpf_dp_prep_pkts;
 	} else {
+#ifdef RTE_ARCH_X86
+		if (vport->tx_vec_allowed) {
+#ifdef CC_AVX512_SUPPORT
+			if (vport->tx_use_avx512) {
+				for (i = 0; i < dev->data->nb_tx_queues; i++) {
+					txq = dev->data->tx_queues[i];
+					if (txq == NULL)
+						continue;
+					idpf_qc_tx_vec_avx512_setup(txq);
+				}
+				PMD_DRV_LOG(NOTICE,
+					    "Using Single AVX512 Vector Tx (port %d).",
+					    dev->data->port_id);
+				dev->tx_pkt_burst = idpf_dp_singleq_xmit_pkts_avx512;
+				dev->tx_pkt_prepare = idpf_dp_prep_pkts;
+				return;
+			}
+#endif /* CC_AVX512_SUPPORT */
+		}
+#endif /* RTE_ARCH_X86 */
 		PMD_DRV_LOG(NOTICE,
 			    "Using Single Scalar Tx (port %d).",
 			    dev->data->port_id);
diff --git a/drivers/net/cpfl/cpfl_rxtx_vec_common.h b/drivers/net/cpfl/cpfl_rxtx_vec_common.h
new file mode 100644
index 0000000000..2d4c6a0ef3
--- /dev/null
+++ b/drivers/net/cpfl/cpfl_rxtx_vec_common.h
@@ -0,0 +1,100 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Intel Corporation
+ */
+
+#ifndef _CPFL_RXTX_VEC_COMMON_H_
+#define _CPFL_RXTX_VEC_COMMON_H_
+#include <stdint.h>
+#include <ethdev_driver.h>
+#include <rte_malloc.h>
+
+#include "cpfl_ethdev.h"
+#include "cpfl_rxtx.h"
+
+#ifndef __INTEL_COMPILER
+#pragma GCC diagnostic ignored "-Wcast-qual"
+#endif
+
+#define CPFL_SCALAR_PATH		0
+#define CPFL_VECTOR_PATH		1
+#define CPFL_RX_NO_VECTOR_FLAGS (		\
+		RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |	\
+		RTE_ETH_RX_OFFLOAD_UDP_CKSUM |	\
+		RTE_ETH_RX_OFFLOAD_TCP_CKSUM |	\
+		RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |	\
+		RTE_ETH_RX_OFFLOAD_TIMESTAMP)
+#define CPFL_TX_NO_VECTOR_FLAGS (		\
+		RTE_ETH_TX_OFFLOAD_TCP_TSO |	\
+		RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
+
+static inline int
+cpfl_rx_vec_queue_default(struct idpf_rx_queue *rxq)
+{
+	if (rxq == NULL)
+		return CPFL_SCALAR_PATH;
+
+	if (rte_is_power_of_2(rxq->nb_rx_desc) == 0)
+		return CPFL_SCALAR_PATH;
+
+	if (rxq->rx_free_thresh < IDPF_VPMD_RX_MAX_BURST)
+		return CPFL_SCALAR_PATH;
+
+	if ((rxq->nb_rx_desc % rxq->rx_free_thresh) != 0)
+		return CPFL_SCALAR_PATH;
+
+	if ((rxq->offloads & CPFL_RX_NO_VECTOR_FLAGS) != 0)
+		return CPFL_SCALAR_PATH;
+
+	return CPFL_VECTOR_PATH;
+}
+
+static inline int
+cpfl_tx_vec_queue_default(struct idpf_tx_queue *txq)
+{
+	if (txq == NULL)
+		return CPFL_SCALAR_PATH;
+
+	if (txq->rs_thresh < IDPF_VPMD_TX_MAX_BURST ||
+	    (txq->rs_thresh & 3) != 0)
+		return CPFL_SCALAR_PATH;
+
+	if ((txq->offloads & CPFL_TX_NO_VECTOR_FLAGS) != 0)
+		return CPFL_SCALAR_PATH;
+
+	return CPFL_VECTOR_PATH;
+}
+
+static inline int
+cpfl_rx_vec_dev_check_default(struct rte_eth_dev *dev)
+{
+	struct idpf_rx_queue *rxq;
+	int i, ret = 0;
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		rxq = dev->data->rx_queues[i];
+		ret = (cpfl_rx_vec_queue_default(rxq));
+		if (ret == CPFL_SCALAR_PATH)
+			return CPFL_SCALAR_PATH;
+	}
+
+	return CPFL_VECTOR_PATH;
+}
+
+static inline int
+cpfl_tx_vec_dev_check_default(struct rte_eth_dev *dev)
+{
+	int i;
+	struct idpf_tx_queue *txq;
+	int ret = 0;
+
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		txq = dev->data->tx_queues[i];
+		ret = cpfl_tx_vec_queue_default(txq);
+		if (ret == CPFL_SCALAR_PATH)
+			return CPFL_SCALAR_PATH;
+	}
+
+	return CPFL_VECTOR_PATH;
+}
+
+#endif /*_CPFL_RXTX_VEC_COMMON_H_*/
diff --git a/drivers/net/cpfl/meson.build b/drivers/net/cpfl/meson.build
index 1894423689..fbe6500826 100644
--- a/drivers/net/cpfl/meson.build
+++ b/drivers/net/cpfl/meson.build
@@ -7,9 +7,32 @@ if is_windows
     subdir_done()
 endif
 
+if dpdk_conf.get('RTE_IOVA_AS_PA') == 0
+    build = false
+    reason = 'driver does not support disabling IOVA as PA mode'
+    subdir_done()
+endif
+
 deps += ['common_idpf']
 
 sources = files(
         'cpfl_ethdev.c',
         'cpfl_rxtx.c',
-)
\ No newline at end of file
+)
+
+if arch_subdir == 'x86'
+    cpfl_avx512_cpu_support = (
+        cc.get_define('__AVX512F__', args: machine_args) != '' and
+        cc.get_define('__AVX512BW__', args: machine_args) != ''
+    )
+
+    cpfl_avx512_cc_support = (
+        not machine_args.contains('-mno-avx512f') and
+        cc.has_argument('-mavx512f') and
+        cc.has_argument('-mavx512bw')
+    )
+
+    if cpfl_avx512_cpu_support == true or cpfl_avx512_cc_support == true
+        cflags += ['-DCC_AVX512_SUPPORT']
+    endif
+endif
-- 
2.34.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v8 16/21] net/cpfl: support timestamp offload
  2023-03-02 10:35           ` [PATCH v8 " Mingxia Liu
                               ` (14 preceding siblings ...)
  2023-03-02 10:35             ` [PATCH v8 15/21] net/cpfl: add AVX512 data path for single queue model Mingxia Liu
@ 2023-03-02 10:35             ` Mingxia Liu
  2023-03-02 10:35             ` [PATCH v8 17/21] net/cpfl: add AVX512 data path for split queue model Mingxia Liu
                               ` (5 subsequent siblings)
  21 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-03-02 10:35 UTC (permalink / raw)
  To: dev, beilei.xing, yuying.zhang; +Cc: Mingxia Liu

Add support for timestamp offload.
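
Reading the timestamp uses the dynamic mbuf field; a hedged sketch
(the lookup is done once after RTE_ETH_RX_OFFLOAD_TIMESTAMP is
enabled on the port):

    #include <rte_mbuf.h>
    #include <rte_mbuf_dyn.h>

    static int ts_off = -1;

    static int
    init_ts_field(void)
    {
            ts_off = rte_mbuf_dynfield_lookup(RTE_MBUF_DYNFIELD_TIMESTAMP_NAME,
                                              NULL);
            return ts_off < 0 ? -1 : 0;
    }

    static inline rte_mbuf_timestamp_t
    rx_timestamp(const struct rte_mbuf *m)
    {
            return *RTE_MBUF_DYNFIELD(m, ts_off, rte_mbuf_timestamp_t *);
    }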

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/net/cpfl/cpfl_ethdev.c | 3 ++-
 drivers/net/cpfl/cpfl_rxtx.c   | 7 +++++++
 2 files changed, 9 insertions(+), 1 deletion(-)

diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index 8685c6e27b..8ed6308a36 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -100,7 +100,8 @@ cpfl_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 		RTE_ETH_RX_OFFLOAD_IPV4_CKSUM           |
 		RTE_ETH_RX_OFFLOAD_UDP_CKSUM            |
 		RTE_ETH_RX_OFFLOAD_TCP_CKSUM            |
-		RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM;
+		RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM     |
+		RTE_ETH_RX_OFFLOAD_TIMESTAMP;
 
 	dev_info->tx_offload_capa =
 		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM		|
diff --git a/drivers/net/cpfl/cpfl_rxtx.c b/drivers/net/cpfl/cpfl_rxtx.c
index a3832acd4f..ea28d3978c 100644
--- a/drivers/net/cpfl/cpfl_rxtx.c
+++ b/drivers/net/cpfl/cpfl_rxtx.c
@@ -516,6 +516,13 @@ cpfl_rx_queue_init(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 		return -EINVAL;
 	}
 
+	err = idpf_qc_ts_mbuf_register(rxq);
+	if (err != 0) {
+		PMD_DRV_LOG(ERR, "fail to register timestamp mbuf %u",
+			    rx_queue_id);
+		return -EIO;
+	}
+
 	if (rxq->adapter->is_rx_singleq) {
 		/* Single queue */
 		err = idpf_qc_single_rxq_mbufs_alloc(rxq);
-- 
2.34.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v8 17/21] net/cpfl: add AVX512 data path for split queue model
  2023-03-02 10:35           ` [PATCH v8 " Mingxia Liu
                               ` (15 preceding siblings ...)
  2023-03-02 10:35             ` [PATCH v8 16/21] net/cpfl: support timestamp offload Mingxia Liu
@ 2023-03-02 10:35             ` Mingxia Liu
  2023-03-02 10:35             ` [PATCH v8 18/21] net/cpfl: add HW statistics Mingxia Liu
                               ` (4 subsequent siblings)
  21 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-03-02 10:35 UTC (permalink / raw)
  To: dev, beilei.xing, yuying.zhang; +Cc: Mingxia Liu, Wenjun Wu

Add support for the AVX512 data path for the split queue model.
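
Compared with the single queue path, the split queue Rx path
additionally requires AVX512DQ; a one-line probe for the extra
condition checked in the diff below:

    #include <rte_cpuflags.h>

    static int
    avx512dq_available(void)
    {
            return rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512DQ) == 1;
    }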

Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/net/cpfl/cpfl_rxtx.c            | 56 +++++++++++++++++++++++--
 drivers/net/cpfl/cpfl_rxtx_vec_common.h | 20 ++++++++-
 drivers/net/cpfl/meson.build            |  6 ++-
 3 files changed, 75 insertions(+), 7 deletions(-)

diff --git a/drivers/net/cpfl/cpfl_rxtx.c b/drivers/net/cpfl/cpfl_rxtx.c
index ea28d3978c..dac95579f5 100644
--- a/drivers/net/cpfl/cpfl_rxtx.c
+++ b/drivers/net/cpfl/cpfl_rxtx.c
@@ -758,7 +758,8 @@ cpfl_set_rx_function(struct rte_eth_dev *dev)
 		if (rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_512)
 #ifdef CC_AVX512_SUPPORT
 			if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512F) == 1 &&
-			    rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512BW) == 1)
+			    rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512BW) == 1 &&
+			    rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512DQ))
 				vport->rx_use_avx512 = true;
 #else
 		PMD_DRV_LOG(NOTICE,
@@ -771,6 +772,21 @@ cpfl_set_rx_function(struct rte_eth_dev *dev)
 
 #ifdef RTE_ARCH_X86
 	if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
+		if (vport->rx_vec_allowed) {
+			for (i = 0; i < dev->data->nb_rx_queues; i++) {
+				rxq = dev->data->rx_queues[i];
+				(void)idpf_qc_splitq_rx_vec_setup(rxq);
+			}
+#ifdef CC_AVX512_SUPPORT
+			if (vport->rx_use_avx512) {
+				PMD_DRV_LOG(NOTICE,
+					    "Using Split AVX512 Vector Rx (port %d).",
+					    dev->data->port_id);
+				dev->rx_pkt_burst = idpf_dp_splitq_recv_pkts_avx512;
+				return;
+			}
+#endif /* CC_AVX512_SUPPORT */
+		}
 		PMD_DRV_LOG(NOTICE,
 			    "Using Split Scalar Rx (port %d).",
 			    dev->data->port_id);
@@ -826,9 +842,17 @@ cpfl_set_tx_function(struct rte_eth_dev *dev)
 		vport->tx_vec_allowed = true;
 		if (rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_512)
 #ifdef CC_AVX512_SUPPORT
+		{
 			if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512F) == 1 &&
 			    rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512BW) == 1)
 				vport->tx_use_avx512 = true;
+			if (vport->tx_use_avx512) {
+				for (i = 0; i < dev->data->nb_tx_queues; i++) {
+					txq = dev->data->tx_queues[i];
+					idpf_qc_tx_vec_avx512_setup(txq);
+				}
+			}
+		}
 #else
 		PMD_DRV_LOG(NOTICE,
 			    "AVX512 is not supported in build env");
@@ -838,14 +862,26 @@ cpfl_set_tx_function(struct rte_eth_dev *dev)
 	}
 #endif /* RTE_ARCH_X86 */
 
+#ifdef RTE_ARCH_X86
 	if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
+		if (vport->tx_vec_allowed) {
+#ifdef CC_AVX512_SUPPORT
+			if (vport->tx_use_avx512) {
+				PMD_DRV_LOG(NOTICE,
+					    "Using Split AVX512 Vector Tx (port %d).",
+					    dev->data->port_id);
+				dev->tx_pkt_burst = idpf_dp_splitq_xmit_pkts_avx512;
+				dev->tx_pkt_prepare = idpf_dp_prep_pkts;
+				return;
+			}
+#endif /* CC_AVX512_SUPPORT */
+		}
 		PMD_DRV_LOG(NOTICE,
 			    "Using Split Scalar Tx (port %d).",
 			    dev->data->port_id);
 		dev->tx_pkt_burst = idpf_dp_splitq_xmit_pkts;
 		dev->tx_pkt_prepare = idpf_dp_prep_pkts;
 	} else {
-#ifdef RTE_ARCH_X86
 		if (vport->tx_vec_allowed) {
 #ifdef CC_AVX512_SUPPORT
 			if (vport->tx_use_avx512) {
@@ -864,11 +900,25 @@ cpfl_set_tx_function(struct rte_eth_dev *dev)
 			}
 #endif /* CC_AVX512_SUPPORT */
 		}
-#endif /* RTE_ARCH_X86 */
 		PMD_DRV_LOG(NOTICE,
 			    "Using Single Scalar Tx (port %d).",
 			    dev->data->port_id);
 		dev->tx_pkt_burst = idpf_dp_singleq_xmit_pkts;
 		dev->tx_pkt_prepare = idpf_dp_prep_pkts;
 	}
+#else
+	if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
+		PMD_DRV_LOG(NOTICE,
+			    "Using Split Scalar Tx (port %d).",
+			    dev->data->port_id);
+		dev->tx_pkt_burst = idpf_dp_splitq_xmit_pkts;
+		dev->tx_pkt_prepare = idpf_dp_prep_pkts;
+	} else {
+		PMD_DRV_LOG(NOTICE,
+			    "Using Single Scalar Tx (port %d).",
+			    dev->data->port_id);
+		dev->tx_pkt_burst = idpf_dp_singleq_xmit_pkts;
+		dev->tx_pkt_prepare = idpf_dp_prep_pkts;
+	}
+#endif /* RTE_ARCH_X86 */
 }
diff --git a/drivers/net/cpfl/cpfl_rxtx_vec_common.h b/drivers/net/cpfl/cpfl_rxtx_vec_common.h
index 2d4c6a0ef3..665418d27d 100644
--- a/drivers/net/cpfl/cpfl_rxtx_vec_common.h
+++ b/drivers/net/cpfl/cpfl_rxtx_vec_common.h
@@ -64,15 +64,31 @@ cpfl_tx_vec_queue_default(struct idpf_tx_queue *txq)
 	return CPFL_VECTOR_PATH;
 }
 
+static inline int
+cpfl_rx_splitq_vec_default(struct idpf_rx_queue *rxq)
+{
+	if (rxq->bufq2->rx_buf_len < rxq->max_pkt_len)
+		return CPFL_SCALAR_PATH;
+
+	return CPFL_VECTOR_PATH;
+}
+
 static inline int
 cpfl_rx_vec_dev_check_default(struct rte_eth_dev *dev)
 {
+	struct idpf_vport *vport = dev->data->dev_private;
 	struct idpf_rx_queue *rxq;
-	int i, ret = 0;
+	int i, default_ret, splitq_ret, ret = CPFL_SCALAR_PATH;
 
 	for (i = 0; i < dev->data->nb_rx_queues; i++) {
 		rxq = dev->data->rx_queues[i];
-		ret = (cpfl_rx_vec_queue_default(rxq));
+		default_ret = cpfl_rx_vec_queue_default(rxq);
+		if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
+			splitq_ret = cpfl_rx_splitq_vec_default(rxq);
+			ret = splitq_ret && default_ret;
+		} else {
+			ret = default_ret;
+		}
 		if (ret == CPFL_SCALAR_PATH)
 			return CPFL_SCALAR_PATH;
 	}
diff --git a/drivers/net/cpfl/meson.build b/drivers/net/cpfl/meson.build
index fbe6500826..2cf69258e2 100644
--- a/drivers/net/cpfl/meson.build
+++ b/drivers/net/cpfl/meson.build
@@ -23,13 +23,15 @@ sources = files(
 if arch_subdir == 'x86'
     cpfl_avx512_cpu_support = (
         cc.get_define('__AVX512F__', args: machine_args) != '' and
-        cc.get_define('__AVX512BW__', args: machine_args) != ''
+        cc.get_define('__AVX512BW__', args: machine_args) != '' and
+        cc.get_define('__AVX512DQ__', args: machine_args) != ''
     )
 
     cpfl_avx512_cc_support = (
         not machine_args.contains('-mno-avx512f') and
         cc.has_argument('-mavx512f') and
-        cc.has_argument('-mavx512bw')
+        cc.has_argument('-mavx512bw') and
+        cc.has_argument('-mavx512dq')
     )
 
     if cpfl_avx512_cpu_support == true or cpfl_avx512_cc_support == true
-- 
2.34.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v8 18/21] net/cpfl: add HW statistics
  2023-03-02 10:35           ` [PATCH v8 " Mingxia Liu
                               ` (16 preceding siblings ...)
  2023-03-02 10:35             ` [PATCH v8 17/21] net/cpfl: add AVX512 data path for split queue model Mingxia Liu
@ 2023-03-02 10:35             ` Mingxia Liu
  2023-03-02 10:35             ` [PATCH v8 19/21] net/cpfl: add RSS set/get ops Mingxia Liu
                               ` (3 subsequent siblings)
  21 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-03-02 10:35 UTC (permalink / raw)
  To: dev, beilei.xing, yuying.zhang; +Cc: Mingxia Liu

This patch adds hardware packet/byte statistics.
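
These counters are exposed through the standard basic-stats API; a
minimal sketch (port id is a placeholder):

    #include <stdio.h>
    #include <inttypes.h>
    #include <rte_ethdev.h>

    static void
    dump_and_clear_stats(uint16_t port_id)
    {
            struct rte_eth_stats st;

            if (rte_eth_stats_get(port_id, &st) == 0)
                    printf("rx=%" PRIu64 " tx=%" PRIu64 " missed=%" PRIu64 "\n",
                           st.ipackets, st.opackets, st.imissed);
            rte_eth_stats_reset(port_id); /* re-bases the stats offsets */
    }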

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/net/cpfl/cpfl_ethdev.c | 87 ++++++++++++++++++++++++++++++++++
 1 file changed, 87 insertions(+)

diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index 8ed6308a36..1f41bb8977 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -175,6 +175,88 @@ cpfl_dev_supported_ptypes_get(struct rte_eth_dev *dev __rte_unused)
 	return ptypes;
 }
 
+static uint64_t
+cpfl_get_mbuf_alloc_failed_stats(struct rte_eth_dev *dev)
+{
+	uint64_t mbuf_alloc_failed = 0;
+	struct idpf_rx_queue *rxq;
+	int i = 0;
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		rxq = dev->data->rx_queues[i];
+		mbuf_alloc_failed += __atomic_load_n(&rxq->rx_stats.mbuf_alloc_failed,
+						     __ATOMIC_RELAXED);
+	}
+
+	return mbuf_alloc_failed;
+}
+
+static int
+cpfl_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
+{
+	struct idpf_vport *vport =
+		(struct idpf_vport *)dev->data->dev_private;
+	struct virtchnl2_vport_stats *pstats = NULL;
+	int ret;
+
+	ret = idpf_vc_stats_query(vport, &pstats);
+	if (ret == 0) {
+		uint8_t crc_stats_len = (dev->data->dev_conf.rxmode.offloads &
+					 RTE_ETH_RX_OFFLOAD_KEEP_CRC) ? 0 :
+					 RTE_ETHER_CRC_LEN;
+
+		idpf_vport_stats_update(&vport->eth_stats_offset, pstats);
+		stats->ipackets = pstats->rx_unicast + pstats->rx_multicast +
+				pstats->rx_broadcast - pstats->rx_discards;
+		stats->opackets = pstats->tx_broadcast + pstats->tx_multicast +
+						pstats->tx_unicast;
+		stats->imissed = pstats->rx_discards;
+		stats->ierrors = pstats->rx_errors;
+		stats->oerrors = pstats->tx_errors + pstats->tx_discards;
+		stats->ibytes = pstats->rx_bytes;
+		stats->ibytes -= stats->ipackets * crc_stats_len;
+		stats->obytes = pstats->tx_bytes;
+
+		dev->data->rx_mbuf_alloc_failed = cpfl_get_mbuf_alloc_failed_stats(dev);
+		stats->rx_nombuf = dev->data->rx_mbuf_alloc_failed;
+	} else {
+		PMD_DRV_LOG(ERR, "Get statistics failed");
+	}
+	return ret;
+}
+
+static void
+cpfl_reset_mbuf_alloc_failed_stats(struct rte_eth_dev *dev)
+{
+	struct idpf_rx_queue *rxq;
+	int i;
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		rxq = dev->data->rx_queues[i];
+		__atomic_store_n(&rxq->rx_stats.mbuf_alloc_failed, 0, __ATOMIC_RELAXED);
+	}
+}
+
+static int
+cpfl_dev_stats_reset(struct rte_eth_dev *dev)
+{
+	struct idpf_vport *vport =
+		(struct idpf_vport *)dev->data->dev_private;
+	struct virtchnl2_vport_stats *pstats = NULL;
+	int ret;
+
+	ret = idpf_vc_stats_query(vport, &pstats);
+	if (ret != 0)
+		return ret;
+
+	/* set stats offset based on current values */
+	vport->eth_stats_offset = *pstats;
+
+	cpfl_reset_mbuf_alloc_failed_stats(dev);
+
+	return 0;
+}
+
 static int
 cpfl_init_rss(struct idpf_vport *vport)
 {
@@ -371,6 +453,9 @@ cpfl_dev_start(struct rte_eth_dev *dev)
 		goto err_vport;
 	}
 
+	if (cpfl_dev_stats_reset(dev))
+		PMD_DRV_LOG(ERR, "Failed to reset stats");
+
 	vport->stopped = 0;
 
 	return 0;
@@ -441,6 +526,8 @@ static const struct eth_dev_ops cpfl_eth_dev_ops = {
 	.tx_queue_release		= cpfl_dev_tx_queue_release,
 	.mtu_set			= cpfl_dev_mtu_set,
 	.dev_supported_ptypes_get	= cpfl_dev_supported_ptypes_get,
+	.stats_get			= cpfl_dev_stats_get,
+	.stats_reset			= cpfl_dev_stats_reset,
 };
 
 static int
-- 
2.34.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v8 19/21] net/cpfl: add RSS set/get ops
  2023-03-02 10:35           ` [PATCH v8 " Mingxia Liu
                               ` (17 preceding siblings ...)
  2023-03-02 10:35             ` [PATCH v8 18/21] net/cpfl: add HW statistics Mingxia Liu
@ 2023-03-02 10:35             ` Mingxia Liu
  2023-03-02 10:35             ` [PATCH v8 20/21] net/cpfl: support scalar scatter Rx datapath for single queue model Mingxia Liu
                               ` (2 subsequent siblings)
  21 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-03-02 10:35 UTC (permalink / raw)
  To: dev, beilei.xing, yuying.zhang; +Cc: Mingxia Liu

Add support for these device ops:
- rss_reta_update
- rss_reta_query
- rss_hash_update
- rss_hash_conf_get
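
A hedged usage sketch for the hash-update op; rss_key is left NULL to
keep the current key (note the patch below only warns on rss_hf
changes, as the CP does not process them yet):

    #include <rte_ethdev.h>

    static int
    update_rss_hf(uint16_t port_id, uint64_t rss_hf)
    {
            struct rte_eth_rss_conf conf = {
                    .rss_key = NULL, /* keep the programmed key */
                    .rss_hf = rss_hf,
            };

            return rte_eth_dev_rss_hash_update(port_id, &conf);
    }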

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/net/cpfl/cpfl_ethdev.c | 271 ++++++++++++++++++++++++++++++++-
 1 file changed, 270 insertions(+), 1 deletion(-)

diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index 1f41bb8977..fe800b49f3 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -47,6 +47,57 @@ uint32_t cpfl_supported_speeds[] = {
 	RTE_ETH_SPEED_NUM_200G
 };
 
+static const uint64_t cpfl_map_hena_rss[] = {
+	[IDPF_HASH_NONF_UNICAST_IPV4_UDP] =
+			RTE_ETH_RSS_NONFRAG_IPV4_UDP,
+	[IDPF_HASH_NONF_MULTICAST_IPV4_UDP] =
+			RTE_ETH_RSS_NONFRAG_IPV4_UDP,
+	[IDPF_HASH_NONF_IPV4_UDP] =
+			RTE_ETH_RSS_NONFRAG_IPV4_UDP,
+	[IDPF_HASH_NONF_IPV4_TCP_SYN_NO_ACK] =
+			RTE_ETH_RSS_NONFRAG_IPV4_TCP,
+	[IDPF_HASH_NONF_IPV4_TCP] =
+			RTE_ETH_RSS_NONFRAG_IPV4_TCP,
+	[IDPF_HASH_NONF_IPV4_SCTP] =
+			RTE_ETH_RSS_NONFRAG_IPV4_SCTP,
+	[IDPF_HASH_NONF_IPV4_OTHER] =
+			RTE_ETH_RSS_NONFRAG_IPV4_OTHER,
+	[IDPF_HASH_FRAG_IPV4] = RTE_ETH_RSS_FRAG_IPV4,
+
+	/* IPv6 */
+	[IDPF_HASH_NONF_UNICAST_IPV6_UDP] =
+			RTE_ETH_RSS_NONFRAG_IPV6_UDP,
+	[IDPF_HASH_NONF_MULTICAST_IPV6_UDP] =
+			RTE_ETH_RSS_NONFRAG_IPV6_UDP,
+	[IDPF_HASH_NONF_IPV6_UDP] =
+			RTE_ETH_RSS_NONFRAG_IPV6_UDP,
+	[IDPF_HASH_NONF_IPV6_TCP_SYN_NO_ACK] =
+			RTE_ETH_RSS_NONFRAG_IPV6_TCP,
+	[IDPF_HASH_NONF_IPV6_TCP] =
+			RTE_ETH_RSS_NONFRAG_IPV6_TCP,
+	[IDPF_HASH_NONF_IPV6_SCTP] =
+			RTE_ETH_RSS_NONFRAG_IPV6_SCTP,
+	[IDPF_HASH_NONF_IPV6_OTHER] =
+			RTE_ETH_RSS_NONFRAG_IPV6_OTHER,
+	[IDPF_HASH_FRAG_IPV6] = RTE_ETH_RSS_FRAG_IPV6,
+
+	/* L2 Payload */
+	[IDPF_HASH_L2_PAYLOAD] = RTE_ETH_RSS_L2_PAYLOAD
+};
+
+static const uint64_t cpfl_ipv4_rss = RTE_ETH_RSS_NONFRAG_IPV4_UDP |
+			  RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+			  RTE_ETH_RSS_NONFRAG_IPV4_SCTP |
+			  RTE_ETH_RSS_NONFRAG_IPV4_OTHER |
+			  RTE_ETH_RSS_FRAG_IPV4;
+
+static const uint64_t cpfl_ipv6_rss = RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+			  RTE_ETH_RSS_NONFRAG_IPV6_TCP |
+			  RTE_ETH_RSS_NONFRAG_IPV6_SCTP |
+			  RTE_ETH_RSS_NONFRAG_IPV6_OTHER |
+			  RTE_ETH_RSS_FRAG_IPV6;
+
+
 static int
 cpfl_dev_link_update(struct rte_eth_dev *dev,
 		     __rte_unused int wait_to_complete)
@@ -94,6 +145,9 @@ cpfl_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->max_mtu = vport->max_mtu;
 	dev_info->min_mtu = RTE_ETHER_MIN_MTU;
 
+	dev_info->hash_key_size = vport->rss_key_size;
+	dev_info->reta_size = vport->rss_lut_size;
+
 	dev_info->flow_type_rss_offloads = CPFL_RSS_OFFLOAD_ALL;
 
 	dev_info->rx_offload_capa =
@@ -257,6 +311,36 @@ cpfl_dev_stats_reset(struct rte_eth_dev *dev)
 	return 0;
 }
 
+static int cpfl_config_rss_hf(struct idpf_vport *vport, uint64_t rss_hf)
+{
+	uint64_t hena = 0;
+	uint16_t i;
+
+	/*
+	 * RTE_ETH_RSS_IPV4 and RTE_ETH_RSS_IPV6 can be considered
+	 * generalizations of all the other IPv4 and IPv6 RSS types.
+	 */
+	if (rss_hf & RTE_ETH_RSS_IPV4)
+		rss_hf |= cpfl_ipv4_rss;
+
+	if (rss_hf & RTE_ETH_RSS_IPV6)
+		rss_hf |= cpfl_ipv6_rss;
+
+	for (i = 0; i < RTE_DIM(cpfl_map_hena_rss); i++) {
+		if (cpfl_map_hena_rss[i] & rss_hf)
+			hena |= BIT_ULL(i);
+	}
+
+	/*
+	 * At present, the CP doesn't process the virtual channel message for
+	 * rss_hf configuration, so only a warning is given below.
+	 */
+	if (hena != vport->rss_hf)
+		PMD_DRV_LOG(WARNING, "Updating RSS Hash Function is not supported at present.");
+
+	return 0;
+}
+
 static int
 cpfl_init_rss(struct idpf_vport *vport)
 {
@@ -277,7 +361,7 @@ cpfl_init_rss(struct idpf_vport *vport)
 			     vport->rss_key_size);
 		return -EINVAL;
 	} else {
-		rte_memcpy(vport->rss_key, rss_conf->rss_key,
+		memcpy(vport->rss_key, rss_conf->rss_key,
 			   vport->rss_key_size);
 	}
 
@@ -293,6 +377,187 @@ cpfl_init_rss(struct idpf_vport *vport)
 	return ret;
 }
 
+static int
+cpfl_rss_reta_update(struct rte_eth_dev *dev,
+		     struct rte_eth_rss_reta_entry64 *reta_conf,
+		     uint16_t reta_size)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_adapter *base = vport->adapter;
+	uint16_t idx, shift;
+	int ret = 0;
+	uint16_t i;
+
+	if (base->caps.rss_caps == 0 || dev->data->nb_rx_queues == 0) {
+		PMD_DRV_LOG(DEBUG, "RSS is not supported");
+		return -ENOTSUP;
+	}
+
+	if (reta_size != vport->rss_lut_size) {
+		PMD_DRV_LOG(ERR, "The size of the configured hash lookup table "
+				 "(%d) doesn't match the number the hardware can "
+				 "support (%d)",
+			    reta_size, vport->rss_lut_size);
+		return -EINVAL;
+	}
+
+	for (i = 0; i < reta_size; i++) {
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
+		if (reta_conf[idx].mask & (1ULL << shift))
+			vport->rss_lut[i] = reta_conf[idx].reta[shift];
+	}
+
+	/* send virtchnl ops to configure RSS */
+	ret = idpf_vc_rss_lut_set(vport);
+	if (ret)
+		PMD_INIT_LOG(ERR, "Failed to configure RSS lut");
+
+	return ret;
+}
+
+static int
+cpfl_rss_reta_query(struct rte_eth_dev *dev,
+		    struct rte_eth_rss_reta_entry64 *reta_conf,
+		    uint16_t reta_size)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_adapter *base = vport->adapter;
+	uint16_t idx, shift;
+	int ret = 0;
+	uint16_t i;
+
+	if (base->caps.rss_caps == 0 || dev->data->nb_rx_queues == 0) {
+		PMD_DRV_LOG(DEBUG, "RSS is not supported");
+		return -ENOTSUP;
+	}
+
+	if (reta_size != vport->rss_lut_size) {
+		PMD_DRV_LOG(ERR, "The size of the configured hash lookup table "
+			"(%d) doesn't match the number the hardware can "
+			"support (%d)", reta_size, vport->rss_lut_size);
+		return -EINVAL;
+	}
+
+	ret = idpf_vc_rss_lut_get(vport);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "Failed to get RSS LUT");
+		return ret;
+	}
+
+	for (i = 0; i < reta_size; i++) {
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
+		if (reta_conf[idx].mask & (1ULL << shift))
+			reta_conf[idx].reta[shift] = vport->rss_lut[i];
+	}
+
+	return 0;
+}
+
+static int
+cpfl_rss_hash_update(struct rte_eth_dev *dev,
+		     struct rte_eth_rss_conf *rss_conf)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_adapter *base = vport->adapter;
+	int ret = 0;
+
+	if (base->caps.rss_caps == 0 || dev->data->nb_rx_queues == 0) {
+		PMD_DRV_LOG(DEBUG, "RSS is not supported");
+		return -ENOTSUP;
+	}
+
+	if (!rss_conf->rss_key || rss_conf->rss_key_len == 0) {
+		PMD_DRV_LOG(DEBUG, "No key to be configured");
+		goto skip_rss_key;
+	} else if (rss_conf->rss_key_len != vport->rss_key_size) {
+		PMD_DRV_LOG(ERR, "The size of the configured hash key "
+				 "(%d) doesn't match the size the hardware can "
+				 "support (%d)",
+			    rss_conf->rss_key_len,
+			    vport->rss_key_size);
+		return -EINVAL;
+	}
+
+	memcpy(vport->rss_key, rss_conf->rss_key,
+		   vport->rss_key_size);
+	ret = idpf_vc_rss_key_set(vport);
+	if (ret != 0) {
+		PMD_INIT_LOG(ERR, "Failed to configure RSS key");
+		return ret;
+	}
+
+skip_rss_key:
+	ret = cpfl_config_rss_hf(vport, rss_conf->rss_hf);
+	if (ret != 0) {
+		PMD_INIT_LOG(ERR, "Failed to configure RSS hash");
+		return ret;
+	}
+
+	return 0;
+}
+
+static uint64_t
+cpfl_map_general_rss_hf(uint64_t config_rss_hf, uint64_t last_general_rss_hf)
+{
+	uint64_t valid_rss_hf = 0;
+	uint16_t i;
+
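+	/* fold the hena bits reported by HW back into RTE_ETH_RSS_* flags */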
+	for (i = 0; i < RTE_DIM(cpfl_map_hena_rss); i++) {
+		uint64_t bit = BIT_ULL(i);
+
+		if (bit & config_rss_hf)
+			valid_rss_hf |= cpfl_map_hena_rss[i];
+	}
+
+	if (valid_rss_hf & cpfl_ipv4_rss)
+		valid_rss_hf |= last_general_rss_hf & RTE_ETH_RSS_IPV4;
+
+	if (valid_rss_hf & cpfl_ipv6_rss)
+		valid_rss_hf |= last_general_rss_hf & RTE_ETH_RSS_IPV6;
+
+	return valid_rss_hf;
+}
+
+static int
+cpfl_rss_hash_conf_get(struct rte_eth_dev *dev,
+		       struct rte_eth_rss_conf *rss_conf)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_adapter *base = vport->adapter;
+	int ret = 0;
+
+	if (base->caps.rss_caps == 0 || dev->data->nb_rx_queues == 0) {
+		PMD_DRV_LOG(DEBUG, "RSS is not supported");
+		return -ENOTSUP;
+	}
+
+	ret = idpf_vc_rss_hash_get(vport);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "Failed to get RSS hf");
+		return ret;
+	}
+
+	rss_conf->rss_hf = cpfl_map_general_rss_hf(vport->rss_hf, vport->last_general_rss_hf);
+
+	if (!rss_conf->rss_key)
+		return 0;
+
+	ret = idpf_vc_rss_key_get(vport);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "Failed to get RSS key");
+		return ret;
+	}
+
+	if (rss_conf->rss_key_len > vport->rss_key_size)
+		rss_conf->rss_key_len = vport->rss_key_size;
+
+	memcpy(rss_conf->rss_key, vport->rss_key, rss_conf->rss_key_len);
+
+	return 0;
+}
+
 static int
 cpfl_dev_configure(struct rte_eth_dev *dev)
 {
@@ -528,6 +793,10 @@ static const struct eth_dev_ops cpfl_eth_dev_ops = {
 	.dev_supported_ptypes_get	= cpfl_dev_supported_ptypes_get,
 	.stats_get			= cpfl_dev_stats_get,
 	.stats_reset			= cpfl_dev_stats_reset,
+	.reta_update			= cpfl_rss_reta_update,
+	.reta_query			= cpfl_rss_reta_query,
+	.rss_hash_update		= cpfl_rss_hash_update,
+	.rss_hash_conf_get		= cpfl_rss_hash_conf_get,
 };
 
 static int
-- 
2.34.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v8 20/21] net/cpfl: support scalar scatter Rx datapath for single queue model
  2023-03-02 10:35           ` [PATCH v8 " Mingxia Liu
                               ` (18 preceding siblings ...)
  2023-03-02 10:35             ` [PATCH v8 19/21] net/cpfl: add RSS set/get ops Mingxia Liu
@ 2023-03-02 10:35             ` Mingxia Liu
  2023-03-02 10:35             ` [PATCH v8 21/21] net/cpfl: add xstats ops Mingxia Liu
  2023-03-02 21:20             ` [PATCH v9 00/21] add support for cpfl PMD in DPDK Mingxia Liu
  21 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-03-02 10:35 UTC (permalink / raw)
  To: dev, beilei.xing, yuying.zhang; +Cc: Mingxia Liu, Wenjun Wu

This patch adds a scalar scatter Rx function for the single queue model.
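
For readers following along, here is a hedged sketch of consuming scattered
Rx on the application side. It assumes the port was configured with
RTE_ETH_RX_OFFLOAD_SCATTER (or an MTU larger than a single Rx buffer), and
the per-packet printf stands in for real processing:

  #include <stdio.h>
  #include <inttypes.h>
  #include <rte_ethdev.h>
  #include <rte_mbuf.h>

  #define BURST 32

  static void
  rx_scatter_burst(uint16_t port_id)
  {
  	struct rte_mbuf *pkts[BURST];
  	uint16_t nb;
  	unsigned int i;

  	nb = rte_eth_rx_burst(port_id, 0, pkts, BURST);
  	for (i = 0; i < nb; i++) {
  		struct rte_mbuf *seg;
  		unsigned int segs = 0;

  		/* pkt_len covers the whole chain; data_len one segment */
  		for (seg = pkts[i]; seg != NULL; seg = seg->next)
  			segs++;
  		printf("pkt %u: %" PRIu32 " bytes in %u segment(s)\n",
  		       i, pkts[i]->pkt_len, segs);
  		rte_pktmbuf_free(pkts[i]); /* frees the whole chain */
  	}
  }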

Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/net/cpfl/cpfl_ethdev.c |  3 ++-
 drivers/net/cpfl/cpfl_rxtx.c   | 27 +++++++++++++++++++++++++++
 drivers/net/cpfl/cpfl_rxtx.h   |  2 ++
 3 files changed, 31 insertions(+), 1 deletion(-)

diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index fe800b49f3..87c591ae5c 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -155,7 +155,8 @@ cpfl_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 		RTE_ETH_RX_OFFLOAD_UDP_CKSUM            |
 		RTE_ETH_RX_OFFLOAD_TCP_CKSUM            |
 		RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM     |
-		RTE_ETH_RX_OFFLOAD_TIMESTAMP;
+		RTE_ETH_RX_OFFLOAD_TIMESTAMP		|
+		RTE_ETH_RX_OFFLOAD_SCATTER;
 
 	dev_info->tx_offload_capa =
 		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM		|
diff --git a/drivers/net/cpfl/cpfl_rxtx.c b/drivers/net/cpfl/cpfl_rxtx.c
index dac95579f5..9e8767df72 100644
--- a/drivers/net/cpfl/cpfl_rxtx.c
+++ b/drivers/net/cpfl/cpfl_rxtx.c
@@ -503,6 +503,8 @@ int
 cpfl_rx_queue_init(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 {
 	struct idpf_rx_queue *rxq;
+	uint16_t max_pkt_len;
+	uint32_t frame_size;
 	int err;
 
 	if (rx_queue_id >= dev->data->nb_rx_queues)
@@ -516,6 +518,17 @@ cpfl_rx_queue_init(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 		return -EINVAL;
 	}
 
+	frame_size = dev->data->mtu + CPFL_ETH_OVERHEAD;
+
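+	/* one packet may chain at most CPFL_SUPPORT_CHAIN_NUM Rx buffers */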
+	max_pkt_len =
+	    RTE_MIN((uint32_t)CPFL_SUPPORT_CHAIN_NUM * rxq->rx_buf_len,
+		    frame_size);
+
+	rxq->max_pkt_len = max_pkt_len;
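+	/* scatter when requested explicitly or when a frame can't fit one buffer */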
+	if ((dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_SCATTER) ||
+	    frame_size > rxq->rx_buf_len)
+		dev->data->scattered_rx = 1;
+
 	err = idpf_qc_ts_mbuf_register(rxq);
 	if (err != 0) {
 		PMD_DRV_LOG(ERR, "fail to register timestamp mbuf %u",
@@ -807,6 +820,13 @@ cpfl_set_rx_function(struct rte_eth_dev *dev)
 			}
 #endif /* CC_AVX512_SUPPORT */
 		}
+		if (dev->data->scattered_rx) {
+			PMD_DRV_LOG(NOTICE,
+				    "Using Single Scalar Scattered Rx (port %d).",
+				    dev->data->port_id);
+			dev->rx_pkt_burst = idpf_dp_singleq_recv_scatter_pkts;
+			return;
+		}
 		PMD_DRV_LOG(NOTICE,
 			    "Using Single Scalar Rx (port %d).",
 			    dev->data->port_id);
@@ -819,6 +839,13 @@ cpfl_set_rx_function(struct rte_eth_dev *dev)
 			    dev->data->port_id);
 		dev->rx_pkt_burst = idpf_dp_splitq_recv_pkts;
 	} else {
+		if (dev->data->scattered_rx) {
+			PMD_DRV_LOG(NOTICE,
+				    "Using Single Scalar Scattered Rx (port %d).",
+				    dev->data->port_id);
+			dev->rx_pkt_burst = idpf_dp_singleq_recv_scatter_pkts;
+			return;
+		}
 		PMD_DRV_LOG(NOTICE,
 			    "Using Single Scalar Rx (port %d).",
 			    dev->data->port_id);
diff --git a/drivers/net/cpfl/cpfl_rxtx.h b/drivers/net/cpfl/cpfl_rxtx.h
index 5f8144e55f..fb267d38c8 100644
--- a/drivers/net/cpfl/cpfl_rxtx.h
+++ b/drivers/net/cpfl/cpfl_rxtx.h
@@ -21,6 +21,8 @@
 #define CPFL_DEFAULT_TX_RS_THRESH	32
 #define CPFL_DEFAULT_TX_FREE_THRESH	32
 
+#define CPFL_SUPPORT_CHAIN_NUM 5
+
 int cpfl_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 			uint16_t nb_desc, unsigned int socket_id,
 			const struct rte_eth_txconf *tx_conf);
-- 
2.34.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v8 21/21] net/cpfl: add xstats ops
  2023-03-02 10:35           ` [PATCH v8 " Mingxia Liu
                               ` (19 preceding siblings ...)
  2023-03-02 10:35             ` [PATCH v8 20/21] net/cpfl: support scalar scatter Rx datapath for single queue model Mingxia Liu
@ 2023-03-02 10:35             ` Mingxia Liu
  2023-03-02  9:30               ` Ferruh Yigit
  2023-03-02 21:20             ` [PATCH v9 00/21] add support for cpfl PMD in DPDK Mingxia Liu
  21 siblings, 1 reply; 263+ messages in thread
From: Mingxia Liu @ 2023-03-02 10:35 UTC (permalink / raw)
  To: dev, beilei.xing, yuying.zhang; +Cc: Mingxia Liu

Add support for these device ops:
- dev_xstats_get
- dev_xstats_get_names
- dev_xstats_reset
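
A minimal usage sketch against the generic ethdev xstats API (the NULL
probe to size the arrays follows the standard ethdev contract; error
handling is trimmed for brevity):

  #include <stdio.h>
  #include <stdlib.h>
  #include <inttypes.h>
  #include <rte_ethdev.h>

  static void
  dump_xstats(uint16_t port_id)
  {
  	int i, n = rte_eth_xstats_get_names(port_id, NULL, 0);
  	struct rte_eth_xstat_name *names;
  	struct rte_eth_xstat *vals;

  	if (n <= 0)
  		return;
  	names = calloc(n, sizeof(*names));
  	vals = calloc(n, sizeof(*vals));
  	rte_eth_xstats_get_names(port_id, names, n);
  	rte_eth_xstats_get(port_id, vals, n);
  	for (i = 0; i < n; i++)
  		printf("%s: %" PRIu64 "\n",
  		       names[vals[i].id].name, vals[i].value);
  	free(names);
  	free(vals);
  }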

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/net/cpfl/cpfl_ethdev.c | 78 ++++++++++++++++++++++++++++++++++
 1 file changed, 78 insertions(+)

diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index 87c591ae5c..0a971334c9 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -97,6 +97,28 @@ static const uint64_t cpfl_ipv6_rss = RTE_ETH_RSS_NONFRAG_IPV6_UDP |
 			  RTE_ETH_RSS_NONFRAG_IPV6_OTHER |
 			  RTE_ETH_RSS_FRAG_IPV6;
 
+struct rte_cpfl_xstats_name_off {
+	char name[RTE_ETH_XSTATS_NAME_SIZE];
+	unsigned int offset;
+};
+
+static const struct rte_cpfl_xstats_name_off rte_cpfl_stats_strings[] = {
+	{"rx_bytes", offsetof(struct virtchnl2_vport_stats, rx_bytes)},
+	{"rx_unicast_packets", offsetof(struct virtchnl2_vport_stats, rx_unicast)},
+	{"rx_multicast_packets", offsetof(struct virtchnl2_vport_stats, rx_multicast)},
+	{"rx_broadcast_packets", offsetof(struct virtchnl2_vport_stats, rx_broadcast)},
+	{"rx_dropped_packets", offsetof(struct virtchnl2_vport_stats, rx_discards)},
+	{"rx_errors", offsetof(struct virtchnl2_vport_stats, rx_errors)},
+	{"rx_unknown_protocol_packets", offsetof(struct virtchnl2_vport_stats,
+						 rx_unknown_protocol)},
+	{"tx_bytes", offsetof(struct virtchnl2_vport_stats, tx_bytes)},
+	{"tx_unicast_packets", offsetof(struct virtchnl2_vport_stats, tx_unicast)},
+	{"tx_multicast_packets", offsetof(struct virtchnl2_vport_stats, tx_multicast)},
+	{"tx_broadcast_packets", offsetof(struct virtchnl2_vport_stats, tx_broadcast)},
+	{"tx_dropped_packets", offsetof(struct virtchnl2_vport_stats, tx_discards)},
+	{"tx_error_packets", offsetof(struct virtchnl2_vport_stats, tx_errors)}};
+
+#define CPFL_NB_XSTATS			RTE_DIM(rte_cpfl_stats_strings)
 
 static int
 cpfl_dev_link_update(struct rte_eth_dev *dev,
@@ -312,6 +334,59 @@ cpfl_dev_stats_reset(struct rte_eth_dev *dev)
 	return 0;
 }
 
+static int cpfl_dev_xstats_reset(struct rte_eth_dev *dev)
+{
+	cpfl_dev_stats_reset(dev);
+	return 0;
+}
+
+static int cpfl_dev_xstats_get(struct rte_eth_dev *dev,
+			       struct rte_eth_xstat *xstats, unsigned int n)
+{
+	struct idpf_vport *vport =
+		(struct idpf_vport *)dev->data->dev_private;
+	struct virtchnl2_vport_stats *pstats = NULL;
+	unsigned int i;
+	int ret;
+
+	if (n < CPFL_NB_XSTATS)
+		return CPFL_NB_XSTATS;
+
+	if (!xstats)
+		return CPFL_NB_XSTATS;
+
+	ret = idpf_vc_stats_query(vport, &pstats);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "Get statistics failed");
+		return 0;
+	}
+
+	idpf_vport_stats_update(&vport->eth_stats_offset, pstats);
+
+	/* loop over xstats array and values from pstats */
+	for (i = 0; i < CPFL_NB_XSTATS; i++) {
+		xstats[i].id = i;
+		xstats[i].value = *(uint64_t *)(((char *)pstats) +
+			rte_cpfl_stats_strings[i].offset);
+	}
+	return CPFL_NB_XSTATS;
+}
+
+static int cpfl_dev_xstats_get_names(__rte_unused struct rte_eth_dev *dev,
+				     struct rte_eth_xstat_name *xstats_names,
+				     __rte_unused unsigned int limit)
+{
+	unsigned int i;
+
+	if (xstats_names)
+		for (i = 0; i < CPFL_NB_XSTATS; i++) {
+			snprintf(xstats_names[i].name,
+				 sizeof(xstats_names[i].name),
+				 "%s", rte_cpfl_stats_strings[i].name);
+		}
+	return CPFL_NB_XSTATS;
+}
+
 static int cpfl_config_rss_hf(struct idpf_vport *vport, uint64_t rss_hf)
 {
 	uint64_t hena = 0;
@@ -798,6 +873,9 @@ static const struct eth_dev_ops cpfl_eth_dev_ops = {
 	.reta_query			= cpfl_rss_reta_query,
 	.rss_hash_update		= cpfl_rss_hash_update,
 	.rss_hash_conf_get		= cpfl_rss_hash_conf_get,
+	.xstats_get			= cpfl_dev_xstats_get,
+	.xstats_get_names		= cpfl_dev_xstats_get_names,
+	.xstats_reset			= cpfl_dev_xstats_reset,
 };
 
 static int
-- 
2.34.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* RE: [PATCH v8 21/21] net/cpfl: add xstats ops
  2023-03-02  9:30               ` Ferruh Yigit
@ 2023-03-02 11:19                 ` Liu, Mingxia
  0 siblings, 0 replies; 263+ messages in thread
From: Liu, Mingxia @ 2023-03-02 11:19 UTC (permalink / raw)
  To: Ferruh Yigit, dev, Xing,  Beilei, Zhang, Yuying



> -----Original Message-----
> From: Ferruh Yigit <ferruh.yigit@amd.com>
> Sent: Thursday, March 2, 2023 5:30 PM
> To: Liu, Mingxia <mingxia.liu@intel.com>; dev@dpdk.org; Xing, Beilei
> <beilei.xing@intel.com>; Zhang, Yuying <yuying.zhang@intel.com>
> Subject: Re: [PATCH v8 21/21] net/cpfl: add xstats ops
> 
> On 3/2/2023 10:35 AM, Mingxia Liu wrote:
> > Add support for these device ops:
> > - dev_xstats_get
> > - dev_xstats_get_names
> > - dev_xstats_reset
> >
> > Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
> 
> <...>
> 
> > +static int cpfl_dev_xstats_get_names(__rte_unused struct rte_eth_dev
> *dev,
> > +				     struct rte_eth_xstat_name
> *xstats_names,
> > +				     __rte_unused unsigned int limit) {
> > +	unsigned int i;
> > +
> > +	if (xstats_names)
> > +		for (i = 0; i < CPFL_NB_XSTATS; i++) {
> > +			snprintf(xstats_names[i].name,
> > +				 sizeof(xstats_names[i].name),
> > +				 "%s", rte_cpfl_stats_strings[i].name);
> > +		}
> 
> 
> Although the above is correct, can you please add {}? It is safer to do it
> for multi-line blocks.
[Liu, Mingxia] OK, I'll add the braces in the next version.

^ permalink raw reply	[flat|nested] 263+ messages in thread

* RE: [PATCH v8 01/21] net/cpfl: support device initialization
  2023-03-02  9:31               ` Ferruh Yigit
@ 2023-03-02 11:24                 ` Liu, Mingxia
  2023-03-02 11:51                   ` Ferruh Yigit
  2023-03-02 12:08                 ` Xing, Beilei
  1 sibling, 1 reply; 263+ messages in thread
From: Liu, Mingxia @ 2023-03-02 11:24 UTC (permalink / raw)
  To: Ferruh Yigit, dev, Xing,  Beilei, Zhang, Yuying



> -----Original Message-----
> From: Ferruh Yigit <ferruh.yigit@amd.com>
> Sent: Thursday, March 2, 2023 5:31 PM
> To: Liu, Mingxia <mingxia.liu@intel.com>; dev@dpdk.org; Xing, Beilei
> <beilei.xing@intel.com>; Zhang, Yuying <yuying.zhang@intel.com>
> Subject: Re: [PATCH v8 01/21] net/cpfl: support device initialization
> 
> On 3/2/2023 10:35 AM, Mingxia Liu wrote:
> > Support device init and add the following dev ops:
> >  - dev_configure
> >  - dev_close
> >  - dev_infos_get
> >  - link_update
> >  - dev_supported_ptypes_get
> >
> > Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
> 
> <...>
> 
> > --- /dev/null
> > +++ b/doc/guides/nics/cpfl.rst
> > @@ -0,0 +1,85 @@
> > +.. SPDX-License-Identifier: BSD-3-Clause
> > +   Copyright(c) 2022 Intel Corporation.
> > +
> 
> s/2022/2023
> 
> <...>
> 
> > +static int
> > +cpfl_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
> > +	       struct rte_pci_device *pci_dev) {
> > +	struct cpfl_vport_param vport_param;
> > +	struct cpfl_adapter_ext *adapter;
> > +	struct cpfl_devargs devargs;
> > +	char name[RTE_ETH_NAME_MAX_LEN];
> > +	int i, retval;
> > +	bool first_probe = false;
> > +
> > +	if (!cpfl_adapter_list_init) {
> > +		rte_spinlock_init(&cpfl_adapter_lock);
> > +		TAILQ_INIT(&cpfl_adapter_list);
> > +		cpfl_adapter_list_init = true;
> > +	}
> > +
> > +	adapter = cpfl_find_adapter_ext(pci_dev);
> > +	if (adapter == NULL) {
> > +		first_probe = true;
> > +		adapter = rte_zmalloc("cpfl_adapter_ext",
> > +				      sizeof(struct cpfl_adapter_ext), 0);
> > +		if (adapter == NULL) {
> > +			PMD_INIT_LOG(ERR, "Failed to allocate adapter.");
> > +			return -ENOMEM;
> > +		}
> > +
> > +		retval = cpfl_adapter_ext_init(pci_dev, adapter);
> > +		if (retval != 0) {
> > +			PMD_INIT_LOG(ERR, "Failed to init adapter.");
> > +			return retval;
> > +		}
> > +
> > +		rte_spinlock_lock(&cpfl_adapter_lock);
> > +		TAILQ_INSERT_TAIL(&cpfl_adapter_list, adapter, next);
> > +		rte_spinlock_unlock(&cpfl_adapter_lock);
> > +	}
> > +
> > +	retval = cpfl_parse_devargs(pci_dev, adapter, &devargs);
> > +	if (retval != 0) {
> > +		PMD_INIT_LOG(ERR, "Failed to parse private devargs");
> > +		goto err;
> > +	}
> > +
> > +	if (devargs.req_vport_nb == 0) {
> > +		/* If no vport devarg, create vport 0 by default. */
> > +		vport_param.adapter = adapter;
> > +		vport_param.devarg_id = 0;
> > +		vport_param.idx = cpfl_vport_idx_alloc(adapter);
> > +		if (vport_param.idx == CPFL_INVALID_VPORT_IDX) {
> > +			PMD_INIT_LOG(ERR, "No space for vport %u",
> vport_param.devarg_id);
> > +			return 0;
> > +		}
> > +		snprintf(name, sizeof(name), "cpfl_%s_vport_0",
> > +			 pci_dev->device.name);
> > +		retval = rte_eth_dev_create(&pci_dev->device, name,
> > +					    sizeof(struct idpf_vport),
> > +					    NULL, NULL, cpfl_dev_vport_init,
> > +					    &vport_param);
> > +		if (retval != 0)
> > +			PMD_DRV_LOG(ERR, "Failed to create default vport
> 0");
> > +	} else {
> > +		for (i = 0; i < devargs.req_vport_nb; i++) {
> > +			vport_param.adapter = adapter;
> > +			vport_param.devarg_id = devargs.req_vports[i];
> > +			vport_param.idx = cpfl_vport_idx_alloc(adapter);
> > +			if (vport_param.idx == CPFL_INVALID_VPORT_IDX) {
> > +				PMD_INIT_LOG(ERR, "No space for
> vport %u", vport_param.devarg_id);
> > +				break;
> > +			}
> > +			snprintf(name, sizeof(name), "cpfl_%s_vport_%d",
> > +				 pci_dev->device.name,
> > +				 devargs.req_vports[i]);
> > +			retval = rte_eth_dev_create(&pci_dev->device,
> name,
> > +						    sizeof(struct idpf_vport),
> > +						    NULL, NULL,
> cpfl_dev_vport_init,
> > +						    &vport_param);
> > +			if (retval != 0)
> > +				PMD_DRV_LOG(ERR, "Failed to create
> vport %d",
> > +					    vport_param.devarg_id);
> > +		}
> > +	}
> > +
> > +	return 0;
> > +
> > +err:
> > +	if (first_probe) {
> > +		rte_spinlock_lock(&cpfl_adapter_lock);
> > +		TAILQ_REMOVE(&cpfl_adapter_list, adapter, next);
> > +		rte_spinlock_unlock(&cpfl_adapter_lock);
> > +		cpfl_adapter_ext_deinit(adapter);
> > +		rte_free(adapter);
> > +	}
> 
> Is 'first_probe' left intentionally? If so, what is the reason to have this
> condition?
[beileix] It's related to creating vports at runtime; this feature will be implemented in the future, but it isn't supported now.
> 


^ permalink raw reply	[flat|nested] 263+ messages in thread

* Re: [PATCH v8 01/21] net/cpfl: support device initialization
  2023-03-02 11:24                 ` Liu, Mingxia
@ 2023-03-02 11:51                   ` Ferruh Yigit
  2023-03-02 12:08                     ` Xing, Beilei
  2023-03-02 13:11                     ` Liu, Mingxia
  0 siblings, 2 replies; 263+ messages in thread
From: Ferruh Yigit @ 2023-03-02 11:51 UTC (permalink / raw)
  To: Liu, Mingxia, dev, Xing, Beilei, Zhang, Yuying

On 3/2/2023 11:24 AM, Liu, Mingxia wrote:
> 
> 
>> -----Original Message-----
>> From: Ferruh Yigit <ferruh.yigit@amd.com>
>> Sent: Thursday, March 2, 2023 5:31 PM
>> To: Liu, Mingxia <mingxia.liu@intel.com>; dev@dpdk.org; Xing, Beilei
>> <beilei.xing@intel.com>; Zhang, Yuying <yuying.zhang@intel.com>
>> Subject: Re: [PATCH v8 01/21] net/cpfl: support device initialization
>>
>> On 3/2/2023 10:35 AM, Mingxia Liu wrote:
>>> Support device init and add the following dev ops:
>>>  - dev_configure
>>>  - dev_close
>>>  - dev_infos_get
>>>  - link_update
>>>  - dev_supported_ptypes_get
>>>
>>> Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
>>
>> <...>
>>
>>> --- /dev/null
>>> +++ b/doc/guides/nics/cpfl.rst
>>> @@ -0,0 +1,85 @@
>>> +.. SPDX-License-Identifier: BSD-3-Clause
>>> +   Copyright(c) 2022 Intel Corporation.
>>> +
>>
>> s/2022/2023
>>
>> <...>
>>
>>> +static int
>>> +cpfl_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
>>> +	       struct rte_pci_device *pci_dev) {
>>> +	struct cpfl_vport_param vport_param;
>>> +	struct cpfl_adapter_ext *adapter;
>>> +	struct cpfl_devargs devargs;
>>> +	char name[RTE_ETH_NAME_MAX_LEN];
>>> +	int i, retval;
>>> +	bool first_probe = false;
>>> +
>>> +	if (!cpfl_adapter_list_init) {
>>> +		rte_spinlock_init(&cpfl_adapter_lock);
>>> +		TAILQ_INIT(&cpfl_adapter_list);
>>> +		cpfl_adapter_list_init = true;
>>> +	}
>>> +
>>> +	adapter = cpfl_find_adapter_ext(pci_dev);
>>> +	if (adapter == NULL) {
>>> +		first_probe = true;
>>> +		adapter = rte_zmalloc("cpfl_adapter_ext",
>>> +				      sizeof(struct cpfl_adapter_ext), 0);
>>> +		if (adapter == NULL) {
>>> +			PMD_INIT_LOG(ERR, "Failed to allocate adapter.");
>>> +			return -ENOMEM;
>>> +		}
>>> +
>>> +		retval = cpfl_adapter_ext_init(pci_dev, adapter);
>>> +		if (retval != 0) {
>>> +			PMD_INIT_LOG(ERR, "Failed to init adapter.");
>>> +			return retval;
>>> +		}
>>> +
>>> +		rte_spinlock_lock(&cpfl_adapter_lock);
>>> +		TAILQ_INSERT_TAIL(&cpfl_adapter_list, adapter, next);
>>> +		rte_spinlock_unlock(&cpfl_adapter_lock);
>>> +	}
>>> +
>>> +	retval = cpfl_parse_devargs(pci_dev, adapter, &devargs);
>>> +	if (retval != 0) {
>>> +		PMD_INIT_LOG(ERR, "Failed to parse private devargs");
>>> +		goto err;
>>> +	}
>>> +
>>> +	if (devargs.req_vport_nb == 0) {
>>> +		/* If no vport devarg, create vport 0 by default. */
>>> +		vport_param.adapter = adapter;
>>> +		vport_param.devarg_id = 0;
>>> +		vport_param.idx = cpfl_vport_idx_alloc(adapter);
>>> +		if (vport_param.idx == CPFL_INVALID_VPORT_IDX) {
>>> +			PMD_INIT_LOG(ERR, "No space for vport %u",
>> vport_param.devarg_id);
>>> +			return 0;
>>> +		}
>>> +		snprintf(name, sizeof(name), "cpfl_%s_vport_0",
>>> +			 pci_dev->device.name);
>>> +		retval = rte_eth_dev_create(&pci_dev->device, name,
>>> +					    sizeof(struct idpf_vport),
>>> +					    NULL, NULL, cpfl_dev_vport_init,
>>> +					    &vport_param);
>>> +		if (retval != 0)
>>> +			PMD_DRV_LOG(ERR, "Failed to create default vport
>> 0");
>>> +	} else {
>>> +		for (i = 0; i < devargs.req_vport_nb; i++) {
>>> +			vport_param.adapter = adapter;
>>> +			vport_param.devarg_id = devargs.req_vports[i];
>>> +			vport_param.idx = cpfl_vport_idx_alloc(adapter);
>>> +			if (vport_param.idx == CPFL_INVALID_VPORT_IDX) {
>>> +				PMD_INIT_LOG(ERR, "No space for
>> vport %u", vport_param.devarg_id);
>>> +				break;
>>> +			}
>>> +			snprintf(name, sizeof(name), "cpfl_%s_vport_%d",
>>> +				 pci_dev->device.name,
>>> +				 devargs.req_vports[i]);
>>> +			retval = rte_eth_dev_create(&pci_dev->device,
>> name,
>>> +						    sizeof(struct idpf_vport),
>>> +						    NULL, NULL,
>> cpfl_dev_vport_init,
>>> +						    &vport_param);
>>> +			if (retval != 0)
>>> +				PMD_DRV_LOG(ERR, "Failed to create
>> vport %d",
>>> +					    vport_param.devarg_id);
>>> +		}
>>> +	}
>>> +
>>> +	return 0;
>>> +
>>> +err:
>>> +	if (first_probe) {
>>> +		rte_spinlock_lock(&cpfl_adapter_lock);
>>> +		TAILQ_REMOVE(&cpfl_adapter_list, adapter, next);
>>> +		rte_spinlock_unlock(&cpfl_adapter_lock);
>>> +		cpfl_adapter_ext_deinit(adapter);
>>> +		rte_free(adapter);
>>> +	}
>>
>> Is 'first_probe' left intentionally? If so, what is the reason to have this
>> condition?
> [beileix] It's related to creating vports at runtime; this feature will be implemented in the future, but it isn't supported now.

is it possible to remove it now and add back when needed? This is
confusing as it is.



^ permalink raw reply	[flat|nested] 263+ messages in thread

* RE: [PATCH v8 01/21] net/cpfl: support device initialization
  2023-03-02  9:31               ` Ferruh Yigit
  2023-03-02 11:24                 ` Liu, Mingxia
@ 2023-03-02 12:08                 ` Xing, Beilei
  1 sibling, 0 replies; 263+ messages in thread
From: Xing, Beilei @ 2023-03-02 12:08 UTC (permalink / raw)
  To: Ferruh Yigit, Liu, Mingxia, dev, Zhang, Yuying



> -----Original Message-----
> From: Ferruh Yigit <ferruh.yigit@amd.com>
> Sent: Thursday, March 2, 2023 5:31 PM
> To: Liu, Mingxia <mingxia.liu@intel.com>; dev@dpdk.org; Xing, Beilei
> <beilei.xing@intel.com>; Zhang, Yuying <yuying.zhang@intel.com>
> Subject: Re: [PATCH v8 01/21] net/cpfl: support device initialization
> 
> On 3/2/2023 10:35 AM, Mingxia Liu wrote:
> > Support device init and add the following dev ops:
> >  - dev_configure
> >  - dev_close
> >  - dev_infos_get
> >  - link_update
> >  - dev_supported_ptypes_get
> >
> > Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
> 
> <...>
> 
> > --- /dev/null
> > +++ b/doc/guides/nics/cpfl.rst
> > @@ -0,0 +1,85 @@
> > +.. SPDX-License-Identifier: BSD-3-Clause
> > +   Copyright(c) 2022 Intel Corporation.
> > +
> 
> s/2022/2023
> 
> <...>
> 
> > +static int
> > +cpfl_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
> > +	       struct rte_pci_device *pci_dev) {
> > +	struct cpfl_vport_param vport_param;
> > +	struct cpfl_adapter_ext *adapter;
> > +	struct cpfl_devargs devargs;
> > +	char name[RTE_ETH_NAME_MAX_LEN];
> > +	int i, retval;
> > +	bool first_probe = false;
> > +
> > +	if (!cpfl_adapter_list_init) {
> > +		rte_spinlock_init(&cpfl_adapter_lock);
> > +		TAILQ_INIT(&cpfl_adapter_list);
> > +		cpfl_adapter_list_init = true;
> > +	}
> > +
> > +	adapter = cpfl_find_adapter_ext(pci_dev);
> > +	if (adapter == NULL) {
> > +		first_probe = true;
> > +		adapter = rte_zmalloc("cpfl_adapter_ext",
> > +				      sizeof(struct cpfl_adapter_ext), 0);
> > +		if (adapter == NULL) {
> > +			PMD_INIT_LOG(ERR, "Failed to allocate adapter.");
> > +			return -ENOMEM;
> > +		}
> > +
> > +		retval = cpfl_adapter_ext_init(pci_dev, adapter);
> > +		if (retval != 0) {
> > +			PMD_INIT_LOG(ERR, "Failed to init adapter.");
> > +			return retval;
> > +		}
> > +
> > +		rte_spinlock_lock(&cpfl_adapter_lock);
> > +		TAILQ_INSERT_TAIL(&cpfl_adapter_list, adapter, next);
> > +		rte_spinlock_unlock(&cpfl_adapter_lock);
> > +	}
> > +
> > +	retval = cpfl_parse_devargs(pci_dev, adapter, &devargs);
> > +	if (retval != 0) {
> > +		PMD_INIT_LOG(ERR, "Failed to parse private devargs");
> > +		goto err;
> > +	}
> > +
> > +	if (devargs.req_vport_nb == 0) {
> > +		/* If no vport devarg, create vport 0 by default. */
> > +		vport_param.adapter = adapter;
> > +		vport_param.devarg_id = 0;
> > +		vport_param.idx = cpfl_vport_idx_alloc(adapter);
> > +		if (vport_param.idx == CPFL_INVALID_VPORT_IDX) {
> > +			PMD_INIT_LOG(ERR, "No space for vport %u",
> vport_param.devarg_id);
> > +			return 0;
> > +		}
> > +		snprintf(name, sizeof(name), "cpfl_%s_vport_0",
> > +			 pci_dev->device.name);
> > +		retval = rte_eth_dev_create(&pci_dev->device, name,
> > +					    sizeof(struct idpf_vport),
> > +					    NULL, NULL, cpfl_dev_vport_init,
> > +					    &vport_param);
> > +		if (retval != 0)
> > +			PMD_DRV_LOG(ERR, "Failed to create default vport
> 0");
> > +	} else {
> > +		for (i = 0; i < devargs.req_vport_nb; i++) {
> > +			vport_param.adapter = adapter;
> > +			vport_param.devarg_id = devargs.req_vports[i];
> > +			vport_param.idx = cpfl_vport_idx_alloc(adapter);
> > +			if (vport_param.idx == CPFL_INVALID_VPORT_IDX) {
> > +				PMD_INIT_LOG(ERR, "No space for vport %u",
> vport_param.devarg_id);
> > +				break;
> > +			}
> > +			snprintf(name, sizeof(name), "cpfl_%s_vport_%d",
> > +				 pci_dev->device.name,
> > +				 devargs.req_vports[i]);
> > +			retval = rte_eth_dev_create(&pci_dev->device, name,
> > +						    sizeof(struct idpf_vport),
> > +						    NULL, NULL,
> cpfl_dev_vport_init,
> > +						    &vport_param);
> > +			if (retval != 0)
> > +				PMD_DRV_LOG(ERR, "Failed to create
> vport %d",
> > +					    vport_param.devarg_id);
> > +		}
> > +	}
> > +
> > +	return 0;
> > +
> > +err:
> > +	if (first_probe) {
> > +		rte_spinlock_lock(&cpfl_adapter_lock);
> > +		TAILQ_REMOVE(&cpfl_adapter_list, adapter, next);
> > +		rte_spinlock_unlock(&cpfl_adapter_lock);
> > +		cpfl_adapter_ext_deinit(adapter);
> > +		rte_free(adapter);
> > +	}
> 
> Is 'first_probe' left intentionally? If so, what is the reason to have this condition?
> 

Hi Ferruh,
We'll support creating vports at runtime in the future, so this condition was added here.

BR,
Beilei


^ permalink raw reply	[flat|nested] 263+ messages in thread

* RE: [PATCH v8 01/21] net/cpfl: support device initialization
  2023-03-02 11:51                   ` Ferruh Yigit
@ 2023-03-02 12:08                     ` Xing, Beilei
  2023-03-02 13:11                     ` Liu, Mingxia
  1 sibling, 0 replies; 263+ messages in thread
From: Xing, Beilei @ 2023-03-02 12:08 UTC (permalink / raw)
  To: Ferruh Yigit, Liu, Mingxia, dev, Zhang, Yuying



> -----Original Message-----
> From: Ferruh Yigit <ferruh.yigit@amd.com>
> Sent: Thursday, March 2, 2023 7:51 PM
> To: Liu, Mingxia <mingxia.liu@intel.com>; dev@dpdk.org; Xing, Beilei
> <beilei.xing@intel.com>; Zhang, Yuying <yuying.zhang@intel.com>
> Subject: Re: [PATCH v8 01/21] net/cpfl: support device initialization
> 
> On 3/2/2023 11:24 AM, Liu, Mingxia wrote:
> >
> >
> >> -----Original Message-----
> >> From: Ferruh Yigit <ferruh.yigit@amd.com>
> >> Sent: Thursday, March 2, 2023 5:31 PM
> >> To: Liu, Mingxia <mingxia.liu@intel.com>; dev@dpdk.org; Xing, Beilei
> >> <beilei.xing@intel.com>; Zhang, Yuying <yuying.zhang@intel.com>
> >> Subject: Re: [PATCH v8 01/21] net/cpfl: support device initialization
> >>
> >> On 3/2/2023 10:35 AM, Mingxia Liu wrote:
> >>> Support device init and add the following dev ops:
> >>>  - dev_configure
> >>>  - dev_close
> >>>  - dev_infos_get
> >>>  - link_update
> >>>  - dev_supported_ptypes_get
> >>>
> >>> Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
> >>
> >> <...>
> >>
> >>> --- /dev/null
> >>> +++ b/doc/guides/nics/cpfl.rst
> >>> @@ -0,0 +1,85 @@
> >>> +.. SPDX-License-Identifier: BSD-3-Clause
> >>> +   Copyright(c) 2022 Intel Corporation.
> >>> +
> >>
> >> s/2022/2023
> >>
> >> <...>
> >>
> >>> +static int
> >>> +cpfl_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
> >>> +	       struct rte_pci_device *pci_dev) {
> >>> +	struct cpfl_vport_param vport_param;
> >>> +	struct cpfl_adapter_ext *adapter;
> >>> +	struct cpfl_devargs devargs;
> >>> +	char name[RTE_ETH_NAME_MAX_LEN];
> >>> +	int i, retval;
> >>> +	bool first_probe = false;
> >>> +
> >>> +	if (!cpfl_adapter_list_init) {
> >>> +		rte_spinlock_init(&cpfl_adapter_lock);
> >>> +		TAILQ_INIT(&cpfl_adapter_list);
> >>> +		cpfl_adapter_list_init = true;
> >>> +	}
> >>> +
> >>> +	adapter = cpfl_find_adapter_ext(pci_dev);
> >>> +	if (adapter == NULL) {
> >>> +		first_probe = true;
> >>> +		adapter = rte_zmalloc("cpfl_adapter_ext",
> >>> +				      sizeof(struct cpfl_adapter_ext), 0);
> >>> +		if (adapter == NULL) {
> >>> +			PMD_INIT_LOG(ERR, "Failed to allocate adapter.");
> >>> +			return -ENOMEM;
> >>> +		}
> >>> +
> >>> +		retval = cpfl_adapter_ext_init(pci_dev, adapter);
> >>> +		if (retval != 0) {
> >>> +			PMD_INIT_LOG(ERR, "Failed to init adapter.");
> >>> +			return retval;
> >>> +		}
> >>> +
> >>> +		rte_spinlock_lock(&cpfl_adapter_lock);
> >>> +		TAILQ_INSERT_TAIL(&cpfl_adapter_list, adapter, next);
> >>> +		rte_spinlock_unlock(&cpfl_adapter_lock);
> >>> +	}
> >>> +
> >>> +	retval = cpfl_parse_devargs(pci_dev, adapter, &devargs);
> >>> +	if (retval != 0) {
> >>> +		PMD_INIT_LOG(ERR, "Failed to parse private devargs");
> >>> +		goto err;
> >>> +	}
> >>> +
> >>> +	if (devargs.req_vport_nb == 0) {
> >>> +		/* If no vport devarg, create vport 0 by default. */
> >>> +		vport_param.adapter = adapter;
> >>> +		vport_param.devarg_id = 0;
> >>> +		vport_param.idx = cpfl_vport_idx_alloc(adapter);
> >>> +		if (vport_param.idx == CPFL_INVALID_VPORT_IDX) {
> >>> +			PMD_INIT_LOG(ERR, "No space for vport %u",
> >> vport_param.devarg_id);
> >>> +			return 0;
> >>> +		}
> >>> +		snprintf(name, sizeof(name), "cpfl_%s_vport_0",
> >>> +			 pci_dev->device.name);
> >>> +		retval = rte_eth_dev_create(&pci_dev->device, name,
> >>> +					    sizeof(struct idpf_vport),
> >>> +					    NULL, NULL, cpfl_dev_vport_init,
> >>> +					    &vport_param);
> >>> +		if (retval != 0)
> >>> +			PMD_DRV_LOG(ERR, "Failed to create default vport
> >> 0");
> >>> +	} else {
> >>> +		for (i = 0; i < devargs.req_vport_nb; i++) {
> >>> +			vport_param.adapter = adapter;
> >>> +			vport_param.devarg_id = devargs.req_vports[i];
> >>> +			vport_param.idx = cpfl_vport_idx_alloc(adapter);
> >>> +			if (vport_param.idx == CPFL_INVALID_VPORT_IDX) {
> >>> +				PMD_INIT_LOG(ERR, "No space for
> >> vport %u", vport_param.devarg_id);
> >>> +				break;
> >>> +			}
> >>> +			snprintf(name, sizeof(name), "cpfl_%s_vport_%d",
> >>> +				 pci_dev->device.name,
> >>> +				 devargs.req_vports[i]);
> >>> +			retval = rte_eth_dev_create(&pci_dev->device,
> >> name,
> >>> +						    sizeof(struct idpf_vport),
> >>> +						    NULL, NULL,
> >> cpfl_dev_vport_init,
> >>> +						    &vport_param);
> >>> +			if (retval != 0)
> >>> +				PMD_DRV_LOG(ERR, "Failed to create
> >> vport %d",
> >>> +					    vport_param.devarg_id);
> >>> +		}
> >>> +	}
> >>> +
> >>> +	return 0;
> >>> +
> >>> +err:
> >>> +	if (first_probe) {
> >>> +		rte_spinlock_lock(&cpfl_adapter_lock);
> >>> +		TAILQ_REMOVE(&cpfl_adapter_list, adapter, next);
> >>> +		rte_spinlock_unlock(&cpfl_adapter_lock);
> >>> +		cpfl_adapter_ext_deinit(adapter);
> >>> +		rte_free(adapter);
> >>> +	}
> >>
> >> Is 'first_probe' left intentionally? If so, what is the reason to
> >> have this condition?
> > [beileix] It's related to create vports at runtime, this feature will be
> implemented in the future, but now it doesn't suppor.
> 
> is it possible to remove it now and add back when needed? This is confusing as
> it is.
> 
Make sense. Will remove it in this patch.

Beilei


^ permalink raw reply	[flat|nested] 263+ messages in thread

* RE: [PATCH v8 01/21] net/cpfl: support device initialization
  2023-03-02 11:51                   ` Ferruh Yigit
  2023-03-02 12:08                     ` Xing, Beilei
@ 2023-03-02 13:11                     ` Liu, Mingxia
  1 sibling, 0 replies; 263+ messages in thread
From: Liu, Mingxia @ 2023-03-02 13:11 UTC (permalink / raw)
  To: Ferruh Yigit, dev, Xing,  Beilei, Zhang, Yuying



> -----Original Message-----
> From: Ferruh Yigit <ferruh.yigit@amd.com>
> Sent: Thursday, March 2, 2023 7:51 PM
> To: Liu, Mingxia <mingxia.liu@intel.com>; dev@dpdk.org; Xing, Beilei
> <beilei.xing@intel.com>; Zhang, Yuying <yuying.zhang@intel.com>
> Subject: Re: [PATCH v8 01/21] net/cpfl: support device initialization
> 
> On 3/2/2023 11:24 AM, Liu, Mingxia wrote:
> >
> >
> >> -----Original Message-----
> >> From: Ferruh Yigit <ferruh.yigit@amd.com>
> >> Sent: Thursday, March 2, 2023 5:31 PM
> >> To: Liu, Mingxia <mingxia.liu@intel.com>; dev@dpdk.org; Xing, Beilei
> >> <beilei.xing@intel.com>; Zhang, Yuying <yuying.zhang@intel.com>
> >> Subject: Re: [PATCH v8 01/21] net/cpfl: support device initialization
> >>
> >> On 3/2/2023 10:35 AM, Mingxia Liu wrote:
> >>> Support device init and add the following dev ops:
> >>>  - dev_configure
> >>>  - dev_close
> >>>  - dev_infos_get
> >>>  - link_update
> >>>  - dev_supported_ptypes_get
> >>>
> >>> Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
> >>
> >> <...>
> >>
> >>> --- /dev/null
> >>> +++ b/doc/guides/nics/cpfl.rst
> >>> @@ -0,0 +1,85 @@
> >>> +.. SPDX-License-Identifier: BSD-3-Clause
> >>> +   Copyright(c) 2022 Intel Corporation.
> >>> +
> >>
> >> s/2022/2023
> >>
> >> <...>
> >>
> >>> +static int
> >>> +cpfl_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
> >>> +	       struct rte_pci_device *pci_dev) {
> >>> +	struct cpfl_vport_param vport_param;
> >>> +	struct cpfl_adapter_ext *adapter;
> >>> +	struct cpfl_devargs devargs;
> >>> +	char name[RTE_ETH_NAME_MAX_LEN];
> >>> +	int i, retval;
> >>> +	bool first_probe = false;
> >>> +
> >>> +	if (!cpfl_adapter_list_init) {
> >>> +		rte_spinlock_init(&cpfl_adapter_lock);
> >>> +		TAILQ_INIT(&cpfl_adapter_list);
> >>> +		cpfl_adapter_list_init = true;
> >>> +	}
> >>> +
> >>> +	adapter = cpfl_find_adapter_ext(pci_dev);
> >>> +	if (adapter == NULL) {
> >>> +		first_probe = true;
> >>> +		adapter = rte_zmalloc("cpfl_adapter_ext",
> >>> +				      sizeof(struct cpfl_adapter_ext), 0);
> >>> +		if (adapter == NULL) {
> >>> +			PMD_INIT_LOG(ERR, "Failed to allocate adapter.");
> >>> +			return -ENOMEM;
> >>> +		}
> >>> +
> >>> +		retval = cpfl_adapter_ext_init(pci_dev, adapter);
> >>> +		if (retval != 0) {
> >>> +			PMD_INIT_LOG(ERR, "Failed to init adapter.");
> >>> +			return retval;
> >>> +		}
> >>> +
> >>> +		rte_spinlock_lock(&cpfl_adapter_lock);
> >>> +		TAILQ_INSERT_TAIL(&cpfl_adapter_list, adapter, next);
> >>> +		rte_spinlock_unlock(&cpfl_adapter_lock);
> >>> +	}
> >>> +
> >>> +	retval = cpfl_parse_devargs(pci_dev, adapter, &devargs);
> >>> +	if (retval != 0) {
> >>> +		PMD_INIT_LOG(ERR, "Failed to parse private devargs");
> >>> +		goto err;
> >>> +	}
> >>> +
> >>> +	if (devargs.req_vport_nb == 0) {
> >>> +		/* If no vport devarg, create vport 0 by default. */
> >>> +		vport_param.adapter = adapter;
> >>> +		vport_param.devarg_id = 0;
> >>> +		vport_param.idx = cpfl_vport_idx_alloc(adapter);
> >>> +		if (vport_param.idx == CPFL_INVALID_VPORT_IDX) {
> >>> +			PMD_INIT_LOG(ERR, "No space for vport %u",
> >> vport_param.devarg_id);
> >>> +			return 0;
> >>> +		}
> >>> +		snprintf(name, sizeof(name), "cpfl_%s_vport_0",
> >>> +			 pci_dev->device.name);
> >>> +		retval = rte_eth_dev_create(&pci_dev->device, name,
> >>> +					    sizeof(struct idpf_vport),
> >>> +					    NULL, NULL, cpfl_dev_vport_init,
> >>> +					    &vport_param);
> >>> +		if (retval != 0)
> >>> +			PMD_DRV_LOG(ERR, "Failed to create default vport
> >> 0");
> >>> +	} else {
> >>> +		for (i = 0; i < devargs.req_vport_nb; i++) {
> >>> +			vport_param.adapter = adapter;
> >>> +			vport_param.devarg_id = devargs.req_vports[i];
> >>> +			vport_param.idx = cpfl_vport_idx_alloc(adapter);
> >>> +			if (vport_param.idx == CPFL_INVALID_VPORT_IDX) {
> >>> +				PMD_INIT_LOG(ERR, "No space for
> >> vport %u", vport_param.devarg_id);
> >>> +				break;
> >>> +			}
> >>> +			snprintf(name, sizeof(name), "cpfl_%s_vport_%d",
> >>> +				 pci_dev->device.name,
> >>> +				 devargs.req_vports[i]);
> >>> +			retval = rte_eth_dev_create(&pci_dev->device,
> >> name,
> >>> +						    sizeof(struct idpf_vport),
> >>> +						    NULL, NULL,
> >> cpfl_dev_vport_init,
> >>> +						    &vport_param);
> >>> +			if (retval != 0)
> >>> +				PMD_DRV_LOG(ERR, "Failed to create
> >> vport %d",
> >>> +					    vport_param.devarg_id);
> >>> +		}
> >>> +	}
> >>> +
> >>> +	return 0;
> >>> +
> >>> +err:
> >>> +	if (first_probe) {
> >>> +		rte_spinlock_lock(&cpfl_adapter_lock);
> >>> +		TAILQ_REMOVE(&cpfl_adapter_list, adapter, next);
> >>> +		rte_spinlock_unlock(&cpfl_adapter_lock);
> >>> +		cpfl_adapter_ext_deinit(adapter);
> >>> +		rte_free(adapter);
> >>> +	}
> >>
> >> Is 'first_probe' left intentionally? If so, what is the reason to
> >> have this condition?
> > [beileix] It's related to creating vports at runtime; this feature will be
> implemented in the future, but it isn't supported now.
> 
> is it possible to remove it now and add back when needed? This is confusing
> as it is.
> 
[Liu, Mingxia] updated, new version sent. Thanks!


^ permalink raw reply	[flat|nested] 263+ messages in thread

* Re: [PATCH v9 00/21] add support for cpfl PMD in DPDK
  2023-03-02 21:20             ` [PATCH v9 00/21] add support for cpfl PMD in DPDK Mingxia Liu
@ 2023-03-02 15:06               ` Ferruh Yigit
  2023-03-02 21:20               ` [PATCH v9 01/21] net/cpfl: support device initialization Mingxia Liu
                                 ` (20 subsequent siblings)
  21 siblings, 0 replies; 263+ messages in thread
From: Ferruh Yigit @ 2023-03-02 15:06 UTC (permalink / raw)
  To: Mingxia Liu, beilei.xing, yuying.zhang; +Cc: dev, Qi Z Zhang

On 3/2/2023 9:20 PM, Mingxia Liu wrote:
> The patchset introduced the cpfl (Control Plane Function Library) PMD
> for Intel® IPU E2100’s Configure Physical Function (Device ID: 0x1453)
> 
> The cpfl PMD inherits all the features from idpf PMD which will follow
> an ongoing standard data plane function spec
> https://www.oasis-open.org/committees/tc_home.php?wg_abbrev=idpf
> Besides, it will also support more device specific hardware offloading
> features from DPDK’s control path (e.g. hairpin, rte_flow …), which is
> different from idpf PMD, and that's why we need a new cpfl PMD.
> 
> This patchset mainly focuses on idpf PMD’s equivalent features.
> To avoid duplicated code, the patchset depends on below patchsets which
> move the common part from net/idpf into common/idpf as a shared library.
> 
> v2 changes:
>  - rebase to the new baseline.
>  - Fix rss lut config issue.
> v3 changes:
>  - rebase to the new baseline.
> v4 changes:
>  - Resend v3. No code changed.
> v5 changes:
>  - rebase to the new baseline.
>  - optimize some code
>  - give a "not supported" hint when the user wants to configure the RSS hash type
>  - if stats reset fails at initialization time, don't rollback, just
>    print ERROR info
> v6 changes:
>  - for small fixed size structure, change rte_memcpy to memcpy()
>  - fix compilation for AVX512DQ
>  - update cpfl maintainers
> v7 changes:
>  - add dependency in cover-letter
> v8 changes:
>  - improve documentation and commit msg
>  - optimize function cpfl_dev_link_update()
>  - refine devargs check
> v9 changes:
>  - refine cpfl_pci_probe(), remove redundant code.
> 
> This patchset is based on the idpf PMD code for refining Rx/Tx queue
> model info:
> http://patches.dpdk.org/project/dpdk/patch/20230302195111.1104185-1-mingxia.liu@intel.com/
> 
> Mingxia Liu (21):
>   net/cpfl: support device initialization
>   net/cpfl: add Tx queue setup
>   net/cpfl: add Rx queue setup
>   net/cpfl: support device start and stop
>   net/cpfl: support queue start
>   net/cpfl: support queue stop
>   net/cpfl: support queue release
>   net/cpfl: support MTU configuration
>   net/cpfl: support basic Rx data path
>   net/cpfl: support basic Tx data path
>   net/cpfl: support write back based on ITR expire
>   net/cpfl: support RSS
>   net/cpfl: support Rx offloading
>   net/cpfl: support Tx offloading
>   net/cpfl: add AVX512 data path for single queue model
>   net/cpfl: support timestamp offload
>   net/cpfl: add AVX512 data path for split queue model
>   net/cpfl: add HW statistics
>   net/cpfl: add RSS set/get ops
>   net/cpfl: support scalar scatter Rx datapath for single queue model
>   net/cpfl: add xstats ops

Series applied to dpdk-next-net/main, thanks.

Thanks.

^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v9 00/21] add support for cpfl PMD in DPDK
  2023-03-02 10:35           ` [PATCH v8 " Mingxia Liu
                               ` (20 preceding siblings ...)
  2023-03-02 10:35             ` [PATCH v8 21/21] net/cpfl: add xstats ops Mingxia Liu
@ 2023-03-02 21:20             ` Mingxia Liu
  2023-03-02 15:06               ` Ferruh Yigit
                                 ` (21 more replies)
  21 siblings, 22 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-03-02 21:20 UTC (permalink / raw)
  To: dev, beilei.xing, yuying.zhang; +Cc: Mingxia Liu

This patchset introduces the cpfl (Control Plane Function Library) PMD
for the Intel® IPU E2100’s Configure Physical Function (Device ID: 0x1453).

The cpfl PMD inherits all the features from the idpf PMD, which follows
an ongoing standard data plane function spec:
https://www.oasis-open.org/committees/tc_home.php?wg_abbrev=idpf
Besides, it will also support more device-specific hardware offloading
features from DPDK’s control path (e.g. hairpin, rte_flow …), which is
different from the idpf PMD, and that's why we need a new cpfl PMD.

This patchset mainly focuses on the idpf PMD’s equivalent features.
To avoid duplicated code, the patchset depends on the patchsets below,
which move the common part from net/idpf into common/idpf as a shared library.

v2 changes:
 - rebase to the new baseline.
 - Fix rss lut config issue.
v3 changes:
 - rebase to the new baseline.
v4 changes:
 - Resend v3. No code changed.
v5 changes:
 - rebase to the new baseline.
 - optimize some code
 - give a "not supported" hint when the user wants to configure the RSS hash type
 - if stats reset fails at initialization time, don't rollback, just
   print ERROR info
v6 changes:
 - for small fixed size structure, change rte_memcpy to memcpy()
 - fix compilation for AVX512DQ
 - update cpfl maintainers
v7 changes:
 - add dependency in cover-letter
v8 changes:
 - improve documentation and commit msg
 - optimize function cpfl_dev_link_update()
 - refine devargs check
v9 changes:
 - refine cpfl_pci_probe(), remove redundant code.

This patchset is based on the idpf PMD code for refining Rx/Tx queue
model info:
http://patches.dpdk.org/project/dpdk/patch/20230302195111.1104185-1-mingxia.liu@intel.com/

Mingxia Liu (21):
  net/cpfl: support device initialization
  net/cpfl: add Tx queue setup
  net/cpfl: add Rx queue setup
  net/cpfl: support device start and stop
  net/cpfl: support queue start
  net/cpfl: support queue stop
  net/cpfl: support queue release
  net/cpfl: support MTU configuration
  net/cpfl: support basic Rx data path
  net/cpfl: support basic Tx data path
  net/cpfl: support write back based on ITR expire
  net/cpfl: support RSS
  net/cpfl: support Rx offloading
  net/cpfl: support Tx offloading
  net/cpfl: add AVX512 data path for single queue model
  net/cpfl: support timestamp offload
  net/cpfl: add AVX512 data path for split queue model
  net/cpfl: add HW statistics
  net/cpfl: add RSS set/get ops
  net/cpfl: support scalar scatter Rx datapath for single queue model
  net/cpfl: add xstats ops

 MAINTAINERS                             |    8 +
 doc/guides/nics/cpfl.rst                |  107 ++
 doc/guides/nics/features/cpfl.ini       |   16 +
 doc/guides/nics/index.rst               |    1 +
 doc/guides/rel_notes/release_23_03.rst  |    6 +
 drivers/net/cpfl/cpfl_ethdev.c          | 1460 +++++++++++++++++++++++
 drivers/net/cpfl/cpfl_ethdev.h          |   94 ++
 drivers/net/cpfl/cpfl_logs.h            |   29 +
 drivers/net/cpfl/cpfl_rxtx.c            |  951 +++++++++++++++
 drivers/net/cpfl/cpfl_rxtx.h            |   44 +
 drivers/net/cpfl/cpfl_rxtx_vec_common.h |  116 ++
 drivers/net/cpfl/meson.build            |   40 +
 drivers/net/meson.build                 |    1 +
 13 files changed, 2873 insertions(+)
 create mode 100644 doc/guides/nics/cpfl.rst
 create mode 100644 doc/guides/nics/features/cpfl.ini
 create mode 100644 drivers/net/cpfl/cpfl_ethdev.c
 create mode 100644 drivers/net/cpfl/cpfl_ethdev.h
 create mode 100644 drivers/net/cpfl/cpfl_logs.h
 create mode 100644 drivers/net/cpfl/cpfl_rxtx.c
 create mode 100644 drivers/net/cpfl/cpfl_rxtx.h
 create mode 100644 drivers/net/cpfl/cpfl_rxtx_vec_common.h
 create mode 100644 drivers/net/cpfl/meson.build

-- 
2.34.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v9 01/21] net/cpfl: support device initialization
  2023-03-02 21:20             ` [PATCH v9 00/21] add support for cpfl PMD in DPDK Mingxia Liu
  2023-03-02 15:06               ` Ferruh Yigit
@ 2023-03-02 21:20               ` Mingxia Liu
  2023-03-07 14:11                 ` Ferruh Yigit
  2023-03-02 21:20               ` [PATCH v9 02/21] net/cpfl: add Tx queue setup Mingxia Liu
                                 ` (19 subsequent siblings)
  21 siblings, 1 reply; 263+ messages in thread
From: Mingxia Liu @ 2023-03-02 21:20 UTC (permalink / raw)
  To: dev, beilei.xing, yuying.zhang; +Cc: Mingxia Liu

Support device init and add the following dev ops:
 - dev_configure
 - dev_close
 - dev_infos_get
 - link_update
 - dev_supported_ptypes_get

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 MAINTAINERS                            |   8 +
 doc/guides/nics/cpfl.rst               |  85 +++
 doc/guides/nics/features/cpfl.ini      |  12 +
 doc/guides/nics/index.rst              |   1 +
 doc/guides/rel_notes/release_23_03.rst |   6 +
 drivers/net/cpfl/cpfl_ethdev.c         | 765 +++++++++++++++++++++++++
 drivers/net/cpfl/cpfl_ethdev.h         |  77 +++
 drivers/net/cpfl/cpfl_logs.h           |  29 +
 drivers/net/cpfl/meson.build           |  14 +
 drivers/net/meson.build                |   1 +
 10 files changed, 998 insertions(+)
 create mode 100644 doc/guides/nics/cpfl.rst
 create mode 100644 doc/guides/nics/features/cpfl.ini
 create mode 100644 drivers/net/cpfl/cpfl_ethdev.c
 create mode 100644 drivers/net/cpfl/cpfl_ethdev.h
 create mode 100644 drivers/net/cpfl/cpfl_logs.h
 create mode 100644 drivers/net/cpfl/meson.build

diff --git a/MAINTAINERS b/MAINTAINERS
index ffbf91296e..878204c93b 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -783,6 +783,14 @@ F: drivers/common/idpf/
 F: doc/guides/nics/idpf.rst
 F: doc/guides/nics/features/idpf.ini
 
+Intel cpfl - EXPERIMENTAL
+M: Yuying Zhang <yuying.zhang@intel.com>
+M: Beilei Xing <beilei.xing@intel.com>
+T: git://dpdk.org/next/dpdk-next-net-intel
+F: drivers/net/cpfl/
+F: doc/guides/nics/cpfl.rst
+F: doc/guides/nics/features/cpfl.ini
+
 Intel igc
 M: Junfeng Guo <junfeng.guo@intel.com>
 M: Simei Su <simei.su@intel.com>
diff --git a/doc/guides/nics/cpfl.rst b/doc/guides/nics/cpfl.rst
new file mode 100644
index 0000000000..253fa3afae
--- /dev/null
+++ b/doc/guides/nics/cpfl.rst
@@ -0,0 +1,85 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+   Copyright(c) 2022 Intel Corporation.
+
+.. include:: <isonum.txt>
+
+CPFL Poll Mode Driver
+=====================
+
+The [*EXPERIMENTAL*] cpfl PMD (**librte_net_cpfl**) provides poll mode driver support
+for Intel\ |reg| Infrastructure Processing Unit (Intel\ |reg| IPU) E2100.
+Please refer to
+https://www.intel.com/content/www/us/en/products/network-io/infrastructure-processing-units/asic/e2000-asic.html
+for more information.
+
+Linux Prerequisites
+-------------------
+
+Follow the DPDK :doc:`../linux_gsg/index` to setup the basic DPDK environment.
+
+To get better performance on Intel platforms,
+please follow the :doc:`../linux_gsg/nic_perf_intel_platform`.
+
+
+Pre-Installation Configuration
+------------------------------
+
+Runtime Config Options
+~~~~~~~~~~~~~~~~~~~~~~
+
+- ``vport`` (default ``0``)
+
+  The PMD supports creation of multiple vports for one PCI device,
+  each vport corresponds to a single ethdev.
+  The user can specify the vports with specific ID to be created, and ID should
+  be 0 ~ 7 currently, for example:
+
+    -a ca:00.0,vport=[0,2,3]
+
+  Then the PMD will create 3 vports (ethdevs) for device ``ca:00.0``.
+
+  If the parameter is not provided, the vport 0 will be created by default.
+
+- ``rx_single`` (default ``0``)
+
+  There are two queue modes supported by Intel\ |reg| IPU Ethernet E2100 Series,
+  single queue mode and split queue mode for Rx queue.
+
+  For the single queue model, the descriptor queue is used by SW to post buffer
+  descriptors to HW, and it's also used by HW to post completed descriptors to SW.
+
+  For the split queue model, "RX buffer queues" are used to pass descriptor buffers
+  from SW to HW, while RX queues are used only to pass the descriptor completions
+  from HW to SW.
+
+  User can choose Rx queue mode, example:
+
+    -a ca:00.0,rx_single=1
+
+  Then the PMD will configure Rx queue with single queue mode.
+  Otherwise, split queue mode is chosen by default.
+
+- ``tx_single`` (default ``0``)
+
+  There are two queue modes supported by Intel\ |reg| IPU Ethernet E2100 Series,
+  single queue mode and split queue mode for Tx queue.
+
+  For the single queue model, the descriptor queue is used by SW to post buffer
+  descriptors to HW, and it's also used by HW to post completed descriptors to SW.
+
+  For the split queue model, "TX completion queues" are used to pass descriptor buffers
+  from SW to HW, while TX queues are used only to pass the descriptor completions from
+  HW to SW.
+
+  User can choose Tx queue mode, example::
+
+    -a ca:00.0,tx_single=1
+
+  Then the PMD will configure Tx queue with single queue mode.
+  Otherwise, split queue mode is chosen by default.
+
+
+Driver compilation and testing
+------------------------------
+
+Refer to the document :doc:`build_and_test` for details.
\ No newline at end of file
diff --git a/doc/guides/nics/features/cpfl.ini b/doc/guides/nics/features/cpfl.ini
new file mode 100644
index 0000000000..a2d1ca9e15
--- /dev/null
+++ b/doc/guides/nics/features/cpfl.ini
@@ -0,0 +1,12 @@
+;
+; Supported features of the 'cpfl' network poll mode driver.
+;
+; Refer to default.ini for the full list of available PMD features.
+;
+; A feature marked "P" is only supported when a non-vector path
+; is selected.
+;
+[Features]
+Linux                = Y
+x86-32               = Y
+x86-64               = Y
diff --git a/doc/guides/nics/index.rst b/doc/guides/nics/index.rst
index df58a237ca..5c9d1edf5e 100644
--- a/doc/guides/nics/index.rst
+++ b/doc/guides/nics/index.rst
@@ -20,6 +20,7 @@ Network Interface Controller Drivers
     bnx2x
     bnxt
     cnxk
+    cpfl
     cxgbe
     dpaa
     dpaa2
diff --git a/doc/guides/rel_notes/release_23_03.rst b/doc/guides/rel_notes/release_23_03.rst
index 49c18617a5..29690d8813 100644
--- a/doc/guides/rel_notes/release_23_03.rst
+++ b/doc/guides/rel_notes/release_23_03.rst
@@ -148,6 +148,12 @@ New Features
   * Added support for timesync API.
   * Added support for packet pacing (launch time offloading).
 
+* **Added Intel cpfl driver.**
+
+  * Added the new ``cpfl`` net driver
+    for Intel\ |reg| Infrastructure Processing Unit (Intel\ |reg| IPU) E2100.
+    See the :doc:`../nics/cpfl` NIC guide for more details on this new driver.
+
 * **Updated Marvell cnxk ethdev driver.**
 
   * Added support to skip RED using ``RTE_FLOW_ACTION_TYPE_SKIP_CMAN``.
diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
new file mode 100644
index 0000000000..7f6fd2804b
--- /dev/null
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -0,0 +1,765 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Intel Corporation
+ */
+
+#include <rte_atomic.h>
+#include <rte_eal.h>
+#include <rte_ether.h>
+#include <rte_malloc.h>
+#include <rte_memzone.h>
+#include <rte_dev.h>
+#include <errno.h>
+#include <rte_alarm.h>
+
+#include "cpfl_ethdev.h"
+
+#define CPFL_TX_SINGLE_Q	"tx_single"
+#define CPFL_RX_SINGLE_Q	"rx_single"
+#define CPFL_VPORT		"vport"
+
+rte_spinlock_t cpfl_adapter_lock;
+/* A list for all adapters, one adapter matches one PCI device */
+struct cpfl_adapter_list cpfl_adapter_list;
+bool cpfl_adapter_list_init;
+
+static const char * const cpfl_valid_args[] = {
+	CPFL_TX_SINGLE_Q,
+	CPFL_RX_SINGLE_Q,
+	CPFL_VPORT,
+	NULL
+};
+
+uint32_t cpfl_supported_speeds[] = {
+	RTE_ETH_SPEED_NUM_NONE,
+	RTE_ETH_SPEED_NUM_10M,
+	RTE_ETH_SPEED_NUM_100M,
+	RTE_ETH_SPEED_NUM_1G,
+	RTE_ETH_SPEED_NUM_2_5G,
+	RTE_ETH_SPEED_NUM_5G,
+	RTE_ETH_SPEED_NUM_10G,
+	RTE_ETH_SPEED_NUM_20G,
+	RTE_ETH_SPEED_NUM_25G,
+	RTE_ETH_SPEED_NUM_40G,
+	RTE_ETH_SPEED_NUM_50G,
+	RTE_ETH_SPEED_NUM_56G,
+	RTE_ETH_SPEED_NUM_100G,
+	RTE_ETH_SPEED_NUM_200G
+};
+
+static int
+cpfl_dev_link_update(struct rte_eth_dev *dev,
+		     __rte_unused int wait_to_complete)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct rte_eth_link new_link;
+	unsigned int i;
+
+	memset(&new_link, 0, sizeof(new_link));
+
+	for (i = 0; i < RTE_DIM(cpfl_supported_speeds); i++) {
+		if (vport->link_speed == cpfl_supported_speeds[i]) {
+			new_link.link_speed = vport->link_speed;
+			break;
+		}
+	}
+
+	if (i == RTE_DIM(cpfl_supported_speeds)) {
+		if (vport->link_up)
+			new_link.link_speed = RTE_ETH_SPEED_NUM_UNKNOWN;
+		else
+			new_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+	}
+
+	new_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+	new_link.link_status = vport->link_up ? RTE_ETH_LINK_UP :
+		RTE_ETH_LINK_DOWN;
+	new_link.link_autoneg = (dev->data->dev_conf.link_speeds & RTE_ETH_LINK_SPEED_FIXED) ?
+				 RTE_ETH_LINK_FIXED : RTE_ETH_LINK_AUTONEG;
+
+	return rte_eth_linkstatus_set(dev, &new_link);
+}
+
+static int
+cpfl_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_adapter *base = vport->adapter;
+
+	dev_info->max_rx_queues = base->caps.max_rx_q;
+	dev_info->max_tx_queues = base->caps.max_tx_q;
+	dev_info->min_rx_bufsize = CPFL_MIN_BUF_SIZE;
+	dev_info->max_rx_pktlen = vport->max_mtu + CPFL_ETH_OVERHEAD;
+
+	dev_info->max_mtu = vport->max_mtu;
+	dev_info->min_mtu = RTE_ETHER_MIN_MTU;
+
+	return 0;
+}
+
+static const uint32_t *
+cpfl_dev_supported_ptypes_get(struct rte_eth_dev *dev __rte_unused)
+{
+	static const uint32_t ptypes[] = {
+		RTE_PTYPE_L2_ETHER,
+		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN,
+		RTE_PTYPE_L3_IPV6_EXT_UNKNOWN,
+		RTE_PTYPE_L4_FRAG,
+		RTE_PTYPE_L4_UDP,
+		RTE_PTYPE_L4_TCP,
+		RTE_PTYPE_L4_SCTP,
+		RTE_PTYPE_L4_ICMP,
+		RTE_PTYPE_UNKNOWN
+	};
+
+	return ptypes;
+}
+
+static int
+cpfl_dev_configure(struct rte_eth_dev *dev)
+{
+	struct rte_eth_conf *conf = &dev->data->dev_conf;
+
+	if (conf->link_speeds & RTE_ETH_LINK_SPEED_FIXED) {
+		PMD_INIT_LOG(ERR, "Setting link speed is not supported");
+		return -ENOTSUP;
+	}
+
+	if (conf->txmode.mq_mode != RTE_ETH_MQ_TX_NONE) {
+		PMD_INIT_LOG(ERR, "Multi-queue TX mode %d is not supported",
+			     conf->txmode.mq_mode);
+		return -ENOTSUP;
+	}
+
+	if (conf->lpbk_mode != 0) {
+		PMD_INIT_LOG(ERR, "Loopback operation mode %d is not supported",
+			     conf->lpbk_mode);
+		return -ENOTSUP;
+	}
+
+	if (conf->dcb_capability_en != 0) {
+		PMD_INIT_LOG(ERR, "Priority Flow Control (PFC) is not supported");
+		return -ENOTSUP;
+	}
+
+	if (conf->intr_conf.lsc != 0) {
+		PMD_INIT_LOG(ERR, "LSC interrupt is not supported");
+		return -ENOTSUP;
+	}
+
+	if (conf->intr_conf.rxq != 0) {
+		PMD_INIT_LOG(ERR, "RXQ interrupt is not supported");
+		return -ENOTSUP;
+	}
+
+	if (conf->intr_conf.rmv != 0) {
+		PMD_INIT_LOG(ERR, "RMV interrupt is not supported");
+		return -ENOTSUP;
+	}
+
+	return 0;
+}
+
+static int
+cpfl_dev_close(struct rte_eth_dev *dev)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct cpfl_adapter_ext *adapter = CPFL_ADAPTER_TO_EXT(vport->adapter);
+
+	idpf_vport_deinit(vport);
+
+	adapter->cur_vports &= ~RTE_BIT32(vport->devarg_id);
+	adapter->cur_vport_nb--;
+	dev->data->dev_private = NULL;
+	adapter->vports[vport->sw_idx] = NULL;
+	rte_free(vport);
+
+	return 0;
+}
+
+static const struct eth_dev_ops cpfl_eth_dev_ops = {
+	.dev_configure			= cpfl_dev_configure,
+	.dev_close			= cpfl_dev_close,
+	.dev_infos_get			= cpfl_dev_info_get,
+	.link_update			= cpfl_dev_link_update,
+	.dev_supported_ptypes_get	= cpfl_dev_supported_ptypes_get,
+};
+
+static int
+insert_value(struct cpfl_devargs *devargs, uint16_t id)
+{
+	uint16_t i;
+
+	/* ignore duplicate */
+	for (i = 0; i < devargs->req_vport_nb; i++) {
+		if (devargs->req_vports[i] == id)
+			return 0;
+	}
+
+	devargs->req_vports[devargs->req_vport_nb] = id;
+	devargs->req_vport_nb++;
+
+	return 0;
+}
+
+static const char *
+parse_range(const char *value, struct cpfl_devargs *devargs)
+{
+	uint16_t lo, hi, i;
+	int n = 0;
+	int result;
+	const char *pos = value;
+
+	result = sscanf(value, "%hu%n-%hu%n", &lo, &n, &hi, &n);
+	if (result == 1) {
+		if (insert_value(devargs, lo) != 0)
+			return NULL;
+	} else if (result == 2) {
+		if (lo > hi)
+			return NULL;
+		for (i = lo; i <= hi; i++) {
+			if (insert_value(devargs, i) != 0)
+				return NULL;
+		}
+	} else {
+		return NULL;
+	}
+
+	return pos + n;
+}
+
+static int
+parse_vport(const char *key, const char *value, void *args)
+{
+	struct cpfl_devargs *devargs = args;
+	const char *pos = value;
+
+	devargs->req_vport_nb = 0;
+
+	if (*pos == '[')
+		pos++;
+
+	while (1) {
+		pos = parse_range(pos, devargs);
+		if (pos == NULL) {
+			PMD_INIT_LOG(ERR, "invalid value:\"%s\" for key:\"%s\", ",
+				     value, key);
+			return -EINVAL;
+		}
+		if (*pos != ',')
+			break;
+		pos++;
+	}
+
+	if (*value == '[' && *pos != ']') {
+		PMD_INIT_LOG(ERR, "invalid value:\"%s\" for key:\"%s\", ",
+			     value, key);
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static int
+parse_bool(const char *key, const char *value, void *args)
+{
+	int *i = args;
+	char *end;
+	int num;
+
+	errno = 0;
+
+	num = strtoul(value, &end, 10);
+
+	if (errno == ERANGE || (num != 0 && num != 1)) {
+		PMD_INIT_LOG(ERR, "invalid value:\"%s\" for key:\"%s\", value must be 0 or 1",
+			value, key);
+		return -EINVAL;
+	}
+
+	*i = num;
+	return 0;
+}
+
+static int
+cpfl_parse_devargs(struct rte_pci_device *pci_dev, struct cpfl_adapter_ext *adapter,
+		   struct cpfl_devargs *cpfl_args)
+{
+	struct rte_devargs *devargs = pci_dev->device.devargs;
+	struct rte_kvargs *kvlist;
+	int i, ret;
+
+	cpfl_args->req_vport_nb = 0;
+
+	if (devargs == NULL)
+		return 0;
+
+	kvlist = rte_kvargs_parse(devargs->args, cpfl_valid_args);
+	if (kvlist == NULL) {
+		PMD_INIT_LOG(ERR, "invalid kvargs key");
+		return -EINVAL;
+	}
+
+	if (rte_kvargs_count(kvlist, CPFL_VPORT) > 1) {
+		PMD_INIT_LOG(ERR, "devarg vport is duplicated.");
+		ret = -EINVAL;
+		goto fail;
+	}
+
+	ret = rte_kvargs_process(kvlist, CPFL_VPORT, &parse_vport,
+				 cpfl_args);
+	if (ret != 0)
+		goto fail;
+
+	ret = rte_kvargs_process(kvlist, CPFL_TX_SINGLE_Q, &parse_bool,
+				 &adapter->base.is_tx_singleq);
+	if (ret != 0)
+		goto fail;
+
+	ret = rte_kvargs_process(kvlist, CPFL_RX_SINGLE_Q, &parse_bool,
+				 &adapter->base.is_rx_singleq);
+	if (ret != 0)
+		goto fail;
+
+	/* check parsed devargs */
+	if (adapter->cur_vport_nb + cpfl_args->req_vport_nb >
+	    adapter->max_vport_nb) {
+		PMD_INIT_LOG(ERR, "Total vport number can't be > %d",
+			     adapter->max_vport_nb);
+		ret = -EINVAL;
+		goto fail;
+	}
+
+	for (i = 0; i < cpfl_args->req_vport_nb; i++) {
+		if (cpfl_args->req_vports[i] > adapter->max_vport_nb - 1) {
+			PMD_INIT_LOG(ERR, "Invalid vport id %d, it should be 0 ~ %d",
+				     cpfl_args->req_vports[i], adapter->max_vport_nb - 1);
+			ret = -EINVAL;
+			goto fail;
+		}
+
+		if (adapter->cur_vports & RTE_BIT32(cpfl_args->req_vports[i])) {
+			PMD_INIT_LOG(ERR, "Vport %d has been requested",
+				     cpfl_args->req_vports[i]);
+			ret = -EINVAL;
+			goto fail;
+		}
+	}
+
+fail:
+	rte_kvargs_free(kvlist);
+	return ret;
+}
+
+static struct idpf_vport *
+cpfl_find_vport(struct cpfl_adapter_ext *adapter, uint32_t vport_id)
+{
+	struct idpf_vport *vport = NULL;
+	int i;
+
+	for (i = 0; i < adapter->cur_vport_nb; i++) {
+		vport = adapter->vports[i];
+		if (vport != NULL && vport->vport_id == vport_id)
+			return vport;
+	}
+
+	/* not found: return NULL, not the last vport visited */
+	return NULL;
+}
+
+static void
+cpfl_handle_event_msg(struct idpf_vport *vport, uint8_t *msg, uint16_t msglen)
+{
+	struct virtchnl2_event *vc_event = (struct virtchnl2_event *)msg;
+	struct rte_eth_dev *dev = (struct rte_eth_dev *)vport->dev;
+
+	if (msglen < sizeof(struct virtchnl2_event)) {
+		PMD_DRV_LOG(ERR, "Error event");
+		return;
+	}
+
+	switch (vc_event->event) {
+	case VIRTCHNL2_EVENT_LINK_CHANGE:
+		PMD_DRV_LOG(DEBUG, "VIRTCHNL2_EVENT_LINK_CHANGE");
+		vport->link_up = !!(vc_event->link_status);
+		vport->link_speed = vc_event->link_speed;
+		cpfl_dev_link_update(dev, 0);
+		break;
+	default:
+		PMD_DRV_LOG(ERR, "unknown event received %u", vc_event->event);
+		break;
+	}
+}
+
+static void
+cpfl_handle_virtchnl_msg(struct cpfl_adapter_ext *adapter)
+{
+	struct idpf_adapter *base = &adapter->base;
+	struct idpf_dma_mem *dma_mem = NULL;
+	struct idpf_hw *hw = &base->hw;
+	struct virtchnl2_event *vc_event;
+	struct idpf_ctlq_msg ctlq_msg;
+	enum idpf_mbx_opc mbx_op;
+	struct idpf_vport *vport;
+	enum virtchnl_ops vc_op;
+	uint16_t pending = 1;
+	int ret;
+
+	while (pending) {
+		ret = idpf_vc_ctlq_recv(hw->arq, &pending, &ctlq_msg);
+		if (ret) {
+			PMD_DRV_LOG(INFO, "Failed to read msg from virtual channel, ret: %d", ret);
+			return;
+		}
+
+		memcpy(base->mbx_resp, ctlq_msg.ctx.indirect.payload->va,
+			   IDPF_DFLT_MBX_BUF_SIZE);
+
+		mbx_op = rte_le_to_cpu_16(ctlq_msg.opcode);
+		vc_op = rte_le_to_cpu_32(ctlq_msg.cookie.mbx.chnl_opcode);
+		base->cmd_retval = rte_le_to_cpu_32(ctlq_msg.cookie.mbx.chnl_retval);
+
+		switch (mbx_op) {
+		case idpf_mbq_opc_send_msg_to_peer_pf:
+			if (vc_op == VIRTCHNL2_OP_EVENT) {
+				if (ctlq_msg.data_len < sizeof(struct virtchnl2_event)) {
+					PMD_DRV_LOG(ERR, "Error event");
+					return;
+				}
+				vc_event = (struct virtchnl2_event *)base->mbx_resp;
+				vport = cpfl_find_vport(adapter, vc_event->vport_id);
+				if (!vport) {
+					PMD_DRV_LOG(ERR, "Can't find vport.");
+					return;
+				}
+				cpfl_handle_event_msg(vport, base->mbx_resp,
+						      ctlq_msg.data_len);
+			} else {
+				if (vc_op == base->pend_cmd)
+					notify_cmd(base, base->cmd_retval);
+				else
+					PMD_DRV_LOG(ERR, "command mismatch, expect %u, get %u",
+						    base->pend_cmd, vc_op);
+
+				PMD_DRV_LOG(DEBUG, "Virtual channel response is received, "
+					    "opcode = %d", vc_op);
+			}
+			goto post_buf;
+		default:
+			PMD_DRV_LOG(DEBUG, "Request %u is not supported yet", mbx_op);
+		}
+	}
+
+post_buf:
+	if (ctlq_msg.data_len)
+		dma_mem = ctlq_msg.ctx.indirect.payload;
+	else
+		pending = 0;
+
+	ret = idpf_vc_ctlq_post_rx_buffs(hw, hw->arq, &pending, &dma_mem);
+	if (ret && dma_mem)
+		idpf_free_dma_mem(hw, dma_mem);
+}
+
+static void
+cpfl_dev_alarm_handler(void *param)
+{
+	struct cpfl_adapter_ext *adapter = param;
+
+	cpfl_handle_virtchnl_msg(adapter);
+
+	rte_eal_alarm_set(CPFL_ALARM_INTERVAL, cpfl_dev_alarm_handler, adapter);
+}
+
+static int
+cpfl_adapter_ext_init(struct rte_pci_device *pci_dev, struct cpfl_adapter_ext *adapter)
+{
+	struct idpf_adapter *base = &adapter->base;
+	struct idpf_hw *hw = &base->hw;
+	int ret = 0;
+
+	hw->hw_addr = (void *)pci_dev->mem_resource[0].addr;
+	hw->hw_addr_len = pci_dev->mem_resource[0].len;
+	hw->back = base;
+	hw->vendor_id = pci_dev->id.vendor_id;
+	hw->device_id = pci_dev->id.device_id;
+	hw->subsystem_vendor_id = pci_dev->id.subsystem_vendor_id;
+
+	strncpy(adapter->name, pci_dev->device.name, PCI_PRI_STR_SIZE);
+
+	ret = idpf_adapter_init(base);
+	if (ret != 0) {
+		PMD_INIT_LOG(ERR, "Failed to init adapter");
+		goto err_adapter_init;
+	}
+
+	rte_eal_alarm_set(CPFL_ALARM_INTERVAL, cpfl_dev_alarm_handler, adapter);
+
+	adapter->max_vport_nb = adapter->base.caps.max_vports > CPFL_MAX_VPORT_NUM ?
+				CPFL_MAX_VPORT_NUM : adapter->base.caps.max_vports;
+
+	adapter->vports = rte_zmalloc("vports",
+				      adapter->max_vport_nb *
+				      sizeof(*adapter->vports),
+				      0);
+	if (adapter->vports == NULL) {
+		PMD_INIT_LOG(ERR, "Failed to allocate vports memory");
+		ret = -ENOMEM;
+		goto err_get_ptype;
+	}
+
+	adapter->cur_vports = 0;
+	adapter->cur_vport_nb = 0;
+
+	adapter->used_vecs_num = 0;
+
+	return ret;
+
+err_get_ptype:
+	idpf_adapter_deinit(base);
+err_adapter_init:
+	return ret;
+}
+
+static uint16_t
+cpfl_vport_idx_alloc(struct cpfl_adapter_ext *adapter)
+{
+	uint16_t vport_idx;
+	uint16_t i;
+
+	for (i = 0; i < adapter->max_vport_nb; i++) {
+		if (adapter->vports[i] == NULL)
+			break;
+	}
+
+	if (i == adapter->max_vport_nb)
+		vport_idx = CPFL_INVALID_VPORT_IDX;
+	else
+		vport_idx = i;
+
+	return vport_idx;
+}
+
+static int
+cpfl_dev_vport_init(struct rte_eth_dev *dev, void *init_params)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct cpfl_vport_param *param = init_params;
+	struct cpfl_adapter_ext *adapter = param->adapter;
+	/* prepare the create-vport virtchnl message */
+	struct virtchnl2_create_vport create_vport_info;
+	int ret = 0;
+
+	dev->dev_ops = &cpfl_eth_dev_ops;
+	vport->adapter = &adapter->base;
+	vport->sw_idx = param->idx;
+	vport->devarg_id = param->devarg_id;
+	vport->dev = dev;
+
+	memset(&create_vport_info, 0, sizeof(create_vport_info));
+	ret = idpf_vport_info_init(vport, &create_vport_info);
+	if (ret != 0) {
+		PMD_INIT_LOG(ERR, "Failed to init vport req_info.");
+		goto err;
+	}
+
+	ret = idpf_vport_init(vport, &create_vport_info, dev->data);
+	if (ret != 0) {
+		PMD_INIT_LOG(ERR, "Failed to init vports.");
+		goto err;
+	}
+
+	adapter->vports[param->idx] = vport;
+	adapter->cur_vports |= RTE_BIT32(param->devarg_id);
+	adapter->cur_vport_nb++;
+
+	dev->data->mac_addrs = rte_zmalloc(NULL, RTE_ETHER_ADDR_LEN, 0);
+	if (dev->data->mac_addrs == NULL) {
+		PMD_INIT_LOG(ERR, "Cannot allocate mac_addr memory.");
+		ret = -ENOMEM;
+		goto err_mac_addrs;
+	}
+
+	rte_ether_addr_copy((struct rte_ether_addr *)vport->default_mac_addr,
+			    &dev->data->mac_addrs[0]);
+
+	return 0;
+
+err_mac_addrs:
+	adapter->vports[param->idx] = NULL;  /* reset */
+	idpf_vport_deinit(vport);
+	adapter->cur_vports &= ~RTE_BIT32(param->devarg_id);
+	adapter->cur_vport_nb--;
+err:
+	return ret;
+}
+
+static const struct rte_pci_id pci_id_cpfl_map[] = {
+	{ RTE_PCI_DEVICE(IDPF_INTEL_VENDOR_ID, IDPF_DEV_ID_CPF) },
+	{ .vendor_id = 0, /* sentinel */ },
+};
+
+static struct cpfl_adapter_ext *
+cpfl_find_adapter_ext(struct rte_pci_device *pci_dev)
+{
+	struct cpfl_adapter_ext *adapter;
+	int found = 0;
+
+	if (pci_dev == NULL)
+		return NULL;
+
+	rte_spinlock_lock(&cpfl_adapter_lock);
+	TAILQ_FOREACH(adapter, &cpfl_adapter_list, next) {
+		if (strncmp(adapter->name, pci_dev->device.name, PCI_PRI_STR_SIZE) == 0) {
+			found = 1;
+			break;
+		}
+	}
+	rte_spinlock_unlock(&cpfl_adapter_lock);
+
+	if (found == 0)
+		return NULL;
+
+	return adapter;
+}
+
+static void
+cpfl_adapter_ext_deinit(struct cpfl_adapter_ext *adapter)
+{
+	rte_eal_alarm_cancel(cpfl_dev_alarm_handler, adapter);
+	idpf_adapter_deinit(&adapter->base);
+
+	rte_free(adapter->vports);
+	adapter->vports = NULL;
+}
+
+static int
+cpfl_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
+	       struct rte_pci_device *pci_dev)
+{
+	struct cpfl_vport_param vport_param;
+	struct cpfl_adapter_ext *adapter;
+	struct cpfl_devargs devargs;
+	char name[RTE_ETH_NAME_MAX_LEN];
+	int i, retval;
+
+	if (!cpfl_adapter_list_init) {
+		rte_spinlock_init(&cpfl_adapter_lock);
+		TAILQ_INIT(&cpfl_adapter_list);
+		cpfl_adapter_list_init = true;
+	}
+
+	adapter = rte_zmalloc("cpfl_adapter_ext",
+			      sizeof(struct cpfl_adapter_ext), 0);
+	if (adapter == NULL) {
+		PMD_INIT_LOG(ERR, "Failed to allocate adapter.");
+		return -ENOMEM;
+	}
+
+	retval = cpfl_adapter_ext_init(pci_dev, adapter);
+	if (retval != 0) {
+		PMD_INIT_LOG(ERR, "Failed to init adapter.");
+		rte_free(adapter);
+		return retval;
+	}
+
+	rte_spinlock_lock(&cpfl_adapter_lock);
+	TAILQ_INSERT_TAIL(&cpfl_adapter_list, adapter, next);
+	rte_spinlock_unlock(&cpfl_adapter_lock);
+
+	retval = cpfl_parse_devargs(pci_dev, adapter, &devargs);
+	if (retval != 0) {
+		PMD_INIT_LOG(ERR, "Failed to parse private devargs");
+		goto err;
+	}
+
+	if (devargs.req_vport_nb == 0) {
+		/* If no vport devarg, create vport 0 by default. */
+		vport_param.adapter = adapter;
+		vport_param.devarg_id = 0;
+		vport_param.idx = cpfl_vport_idx_alloc(adapter);
+		if (vport_param.idx == CPFL_INVALID_VPORT_IDX) {
+			PMD_INIT_LOG(ERR, "No space for vport %u", vport_param.devarg_id);
+			return 0;
+		}
+		snprintf(name, sizeof(name), "cpfl_%s_vport_0",
+			 pci_dev->device.name);
+		retval = rte_eth_dev_create(&pci_dev->device, name,
+					    sizeof(struct idpf_vport),
+					    NULL, NULL, cpfl_dev_vport_init,
+					    &vport_param);
+		if (retval != 0)
+			PMD_DRV_LOG(ERR, "Failed to create default vport 0");
+	} else {
+		for (i = 0; i < devargs.req_vport_nb; i++) {
+			vport_param.adapter = adapter;
+			vport_param.devarg_id = devargs.req_vports[i];
+			vport_param.idx = cpfl_vport_idx_alloc(adapter);
+			if (vport_param.idx == CPFL_INVALID_VPORT_IDX) {
+				PMD_INIT_LOG(ERR, "No space for vport %u", vport_param.devarg_id);
+				break;
+			}
+			snprintf(name, sizeof(name), "cpfl_%s_vport_%d",
+				 pci_dev->device.name,
+				 devargs.req_vports[i]);
+			retval = rte_eth_dev_create(&pci_dev->device, name,
+						    sizeof(struct idpf_vport),
+						    NULL, NULL, cpfl_dev_vport_init,
+						    &vport_param);
+			if (retval != 0)
+				PMD_DRV_LOG(ERR, "Failed to create vport %d",
+					    vport_param.devarg_id);
+		}
+	}
+
+	return 0;
+
+err:
+	rte_spinlock_lock(&cpfl_adapter_lock);
+	TAILQ_REMOVE(&cpfl_adapter_list, adapter, next);
+	rte_spinlock_unlock(&cpfl_adapter_lock);
+	cpfl_adapter_ext_deinit(adapter);
+	rte_free(adapter);
+	return retval;
+}
+
+static int
+cpfl_pci_remove(struct rte_pci_device *pci_dev)
+{
+	struct cpfl_adapter_ext *adapter = cpfl_find_adapter_ext(pci_dev);
+	uint16_t port_id;
+
+	/* Close every ethdev created on this rte_device, found via RTE_ETH_FOREACH_DEV_OF */
+	RTE_ETH_FOREACH_DEV_OF(port_id, &pci_dev->device) {
+		rte_eth_dev_close(port_id);
+	}
+
+	rte_spinlock_lock(&cpfl_adapter_lock);
+	TAILQ_REMOVE(&cpfl_adapter_list, adapter, next);
+	rte_spinlock_unlock(&cpfl_adapter_lock);
+	cpfl_adapter_ext_deinit(adapter);
+	rte_free(adapter);
+
+	return 0;
+}
+
+static struct rte_pci_driver rte_cpfl_pmd = {
+	.id_table	= pci_id_cpfl_map,
+	.drv_flags	= RTE_PCI_DRV_NEED_MAPPING,
+	.probe		= cpfl_pci_probe,
+	.remove		= cpfl_pci_remove,
+};
+
+/**
+ * Driver initialization routine.
+ * Invoked once at EAL init time.
+ * Register itself as the [Poll Mode] Driver of PCI devices.
+ */
+RTE_PMD_REGISTER_PCI(net_cpfl, rte_cpfl_pmd);
+RTE_PMD_REGISTER_PCI_TABLE(net_cpfl, pci_id_cpfl_map);
+RTE_PMD_REGISTER_KMOD_DEP(net_cpfl, "* igb_uio | vfio-pci");
+RTE_PMD_REGISTER_PARAM_STRING(net_cpfl,
+	CPFL_TX_SINGLE_Q "=<0|1> "
+	CPFL_RX_SINGLE_Q "=<0|1> "
+	CPFL_VPORT "=[vport0_begin[-vport0_end][,vport1_begin[-vport1_end]][,..]]");
+
+RTE_LOG_REGISTER_SUFFIX(cpfl_logtype_init, init, NOTICE);
+RTE_LOG_REGISTER_SUFFIX(cpfl_logtype_driver, driver, NOTICE);
diff --git a/drivers/net/cpfl/cpfl_ethdev.h b/drivers/net/cpfl/cpfl_ethdev.h
new file mode 100644
index 0000000000..9738e89ca8
--- /dev/null
+++ b/drivers/net/cpfl/cpfl_ethdev.h
@@ -0,0 +1,77 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Intel Corporation
+ */
+
+#ifndef _CPFL_ETHDEV_H_
+#define _CPFL_ETHDEV_H_
+
+#include <stdint.h>
+#include <rte_malloc.h>
+#include <rte_spinlock.h>
+#include <rte_ethdev.h>
+#include <rte_kvargs.h>
+#include <ethdev_driver.h>
+#include <ethdev_pci.h>
+
+#include "cpfl_logs.h"
+
+#include <idpf_common_device.h>
+#include <idpf_common_virtchnl.h>
+#include <base/idpf_prototype.h>
+#include <base/virtchnl2.h>
+
+/* Currently, backend supports up to 8 vports */
+#define CPFL_MAX_VPORT_NUM	8
+
+#define CPFL_INVALID_VPORT_IDX	0xffff
+
+#define CPFL_MIN_BUF_SIZE	1024
+#define CPFL_MAX_FRAME_SIZE	9728
+#define CPFL_DEFAULT_MTU	RTE_ETHER_MTU
+
+#define CPFL_VLAN_TAG_SIZE	4
+#define CPFL_ETH_OVERHEAD \
+	(RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN + CPFL_VLAN_TAG_SIZE * 2)
+
+#define CPFL_ADAPTER_NAME_LEN	(PCI_PRI_STR_SIZE + 1)
+
+#define CPFL_ALARM_INTERVAL	50000 /* us */
+
+/* Device IDs */
+#define IDPF_DEV_ID_CPF			0x1453
+
+struct cpfl_vport_param {
+	struct cpfl_adapter_ext *adapter;
+	uint16_t devarg_id; /* arg id from user */
+	uint16_t idx;       /* index in adapter->vports[] */
+};
+
+/* Struct used when parsing driver-specific devargs */
+struct cpfl_devargs {
+	uint16_t req_vports[CPFL_MAX_VPORT_NUM];
+	uint16_t req_vport_nb;
+};
+
+struct cpfl_adapter_ext {
+	TAILQ_ENTRY(cpfl_adapter_ext) next;
+	struct idpf_adapter base;
+
+	char name[CPFL_ADAPTER_NAME_LEN];
+
+	struct idpf_vport **vports;
+	uint16_t max_vport_nb;
+
+	uint16_t cur_vports; /* bit mask of created vport */
+	uint16_t cur_vport_nb;
+
+	uint16_t used_vecs_num;
+};
+
+TAILQ_HEAD(cpfl_adapter_list, cpfl_adapter_ext);
+
+#define CPFL_DEV_TO_PCI(eth_dev)		\
+	RTE_DEV_TO_PCI((eth_dev)->device)
+#define CPFL_ADAPTER_TO_EXT(p)					\
+	container_of((p), struct cpfl_adapter_ext, base)
+
+#endif /* _CPFL_ETHDEV_H_ */
diff --git a/drivers/net/cpfl/cpfl_logs.h b/drivers/net/cpfl/cpfl_logs.h
new file mode 100644
index 0000000000..bdfa5c41a5
--- /dev/null
+++ b/drivers/net/cpfl/cpfl_logs.h
@@ -0,0 +1,29 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Intel Corporation
+ */
+
+#ifndef _CPFL_LOGS_H_
+#define _CPFL_LOGS_H_
+
+#include <rte_log.h>
+
+extern int cpfl_logtype_init;
+extern int cpfl_logtype_driver;
+
+#define PMD_INIT_LOG(level, ...) \
+	rte_log(RTE_LOG_ ## level, \
+		cpfl_logtype_init, \
+		RTE_FMT("%s(): " \
+			RTE_FMT_HEAD(__VA_ARGS__,) "\n", \
+			__func__, \
+			RTE_FMT_TAIL(__VA_ARGS__,)))
+
+#define PMD_DRV_LOG(level, ...) \
+	rte_log(RTE_LOG_ ## level, \
+		cpfl_logtype_driver, \
+		RTE_FMT("%s(): " \
+			RTE_FMT_HEAD(__VA_ARGS__,) "\n", \
+			__func__, \
+			RTE_FMT_TAIL(__VA_ARGS__,)))
+
+#endif /* _CPFL_LOGS_H_ */
diff --git a/drivers/net/cpfl/meson.build b/drivers/net/cpfl/meson.build
new file mode 100644
index 0000000000..c721732b50
--- /dev/null
+++ b/drivers/net/cpfl/meson.build
@@ -0,0 +1,14 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2023 Intel Corporation
+
+if is_windows
+    build = false
+    reason = 'not supported on Windows'
+    subdir_done()
+endif
+
+deps += ['common_idpf']
+
+sources = files(
+        'cpfl_ethdev.c',
+)
\ No newline at end of file
diff --git a/drivers/net/meson.build b/drivers/net/meson.build
index f83a6de117..b1df17ce8c 100644
--- a/drivers/net/meson.build
+++ b/drivers/net/meson.build
@@ -13,6 +13,7 @@ drivers = [
         'bnxt',
         'bonding',
         'cnxk',
+        'cpfl',
         'cxgbe',
         'dpaa',
         'dpaa2',
-- 
2.34.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v9 02/21] net/cpfl: add Tx queue setup
  2023-03-02 21:20             ` [PATCH v9 00/21] add support for cpfl PMD in DPDK Mingxia Liu
  2023-03-02 15:06               ` Ferruh Yigit
  2023-03-02 21:20               ` [PATCH v9 01/21] net/cpfl: support device initialization Mingxia Liu
@ 2023-03-02 21:20               ` Mingxia Liu
  2023-03-02 21:20               ` [PATCH v9 03/21] net/cpfl: add Rx " Mingxia Liu
                                 ` (18 subsequent siblings)
  21 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-03-02 21:20 UTC (permalink / raw)
  To: dev, beilei.xing, yuying.zhang; +Cc: Mingxia Liu

Add support for tx_queue_setup ops.

There are two queue modes, single queue mode and split queue mode
for Tx queue.

For the single queue model, the descriptor TX queue is used by SW
to post descriptors to HW, and it's also used by HW to post completed
descriptors back to SW.

For the split queue model, TX queues are used to pass descriptors
from SW to HW, while "TX completion queues" are used only to pass
the descriptor completions from HW to SW.
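
As a usage sketch (application side, not part of this patch; port 0,
queue 0 and socket 0 are assumed), the new op is reached through the
generic ethdev API:

	#include <stdlib.h>
	#include <rte_eal.h>
	#include <rte_ethdev.h>

	struct rte_eth_txconf txconf = {
		.tx_rs_thresh = 32,	/* CPFL_DEFAULT_TX_RS_THRESH */
		.tx_free_thresh = 32,	/* CPFL_DEFAULT_TX_FREE_THRESH */
	};

	/* nb_desc must be a multiple of 32 in [32, 4096], see cpfl_rxtx.h */
	if (rte_eth_tx_queue_setup(0, 0, 1024, 0, &txconf) != 0)
		rte_exit(EXIT_FAILURE, "Tx queue setup failed\n");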

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/net/cpfl/cpfl_ethdev.c |  13 ++
 drivers/net/cpfl/cpfl_rxtx.c   | 244 +++++++++++++++++++++++++++++++++
 drivers/net/cpfl/cpfl_rxtx.h   |  25 ++++
 drivers/net/cpfl/meson.build   |   1 +
 4 files changed, 283 insertions(+)
 create mode 100644 drivers/net/cpfl/cpfl_rxtx.c
 create mode 100644 drivers/net/cpfl/cpfl_rxtx.h

diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index 7f6fd2804b..c26ff57730 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -12,6 +12,7 @@
 #include <rte_alarm.h>
 
 #include "cpfl_ethdev.h"
+#include "cpfl_rxtx.h"
 
 #define CPFL_TX_SINGLE_Q	"tx_single"
 #define CPFL_RX_SINGLE_Q	"rx_single"
@@ -93,6 +94,17 @@ cpfl_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->max_mtu = vport->max_mtu;
 	dev_info->min_mtu = RTE_ETHER_MIN_MTU;
 
+	dev_info->default_txconf = (struct rte_eth_txconf) {
+		.tx_free_thresh = CPFL_DEFAULT_TX_FREE_THRESH,
+		.tx_rs_thresh = CPFL_DEFAULT_TX_RS_THRESH,
+	};
+
+	dev_info->tx_desc_lim = (struct rte_eth_desc_lim) {
+		.nb_max = CPFL_MAX_RING_DESC,
+		.nb_min = CPFL_MIN_RING_DESC,
+		.nb_align = CPFL_ALIGN_RING_DESC,
+	};
+
 	return 0;
 }
 
@@ -179,6 +191,7 @@ cpfl_dev_close(struct rte_eth_dev *dev)
 static const struct eth_dev_ops cpfl_eth_dev_ops = {
 	.dev_configure			= cpfl_dev_configure,
 	.dev_close			= cpfl_dev_close,
+	.tx_queue_setup			= cpfl_tx_queue_setup,
 	.dev_infos_get			= cpfl_dev_info_get,
 	.link_update			= cpfl_dev_link_update,
 	.dev_supported_ptypes_get	= cpfl_dev_supported_ptypes_get,
diff --git a/drivers/net/cpfl/cpfl_rxtx.c b/drivers/net/cpfl/cpfl_rxtx.c
new file mode 100644
index 0000000000..737d069ec2
--- /dev/null
+++ b/drivers/net/cpfl/cpfl_rxtx.c
@@ -0,0 +1,244 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Intel Corporation
+ */
+
+#include <ethdev_driver.h>
+#include <rte_net.h>
+#include <rte_vect.h>
+
+#include "cpfl_ethdev.h"
+#include "cpfl_rxtx.h"
+
+static uint64_t
+cpfl_tx_offload_convert(uint64_t offload)
+{
+	uint64_t ol = 0;
+
+	if ((offload & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM) != 0)
+		ol |= IDPF_TX_OFFLOAD_IPV4_CKSUM;
+	if ((offload & RTE_ETH_TX_OFFLOAD_UDP_CKSUM) != 0)
+		ol |= IDPF_TX_OFFLOAD_UDP_CKSUM;
+	if ((offload & RTE_ETH_TX_OFFLOAD_TCP_CKSUM) != 0)
+		ol |= IDPF_TX_OFFLOAD_TCP_CKSUM;
+	if ((offload & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM) != 0)
+		ol |= IDPF_TX_OFFLOAD_SCTP_CKSUM;
+	if ((offload & RTE_ETH_TX_OFFLOAD_MULTI_SEGS) != 0)
+		ol |= IDPF_TX_OFFLOAD_MULTI_SEGS;
+	if ((offload & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) != 0)
+		ol |= IDPF_TX_OFFLOAD_MBUF_FAST_FREE;
+
+	return ol;
+}
+
+static const struct rte_memzone *
+cpfl_dma_zone_reserve(struct rte_eth_dev *dev, uint16_t queue_idx,
+		      uint16_t len, uint16_t queue_type,
+		      unsigned int socket_id, bool splitq)
+{
+	char ring_name[RTE_MEMZONE_NAMESIZE];
+	const struct rte_memzone *mz;
+	uint32_t ring_size;
+
+	memset(ring_name, 0, RTE_MEMZONE_NAMESIZE);
+	switch (queue_type) {
+	case VIRTCHNL2_QUEUE_TYPE_TX:
+		if (splitq)
+			ring_size = RTE_ALIGN(len * sizeof(struct idpf_flex_tx_sched_desc),
+					      CPFL_DMA_MEM_ALIGN);
+		else
+			ring_size = RTE_ALIGN(len * sizeof(struct idpf_flex_tx_desc),
+					      CPFL_DMA_MEM_ALIGN);
+		memcpy(ring_name, "cpfl Tx ring", sizeof("cpfl Tx ring"));
+		break;
+	case VIRTCHNL2_QUEUE_TYPE_RX:
+		if (splitq)
+			ring_size = RTE_ALIGN(len * sizeof(struct virtchnl2_rx_flex_desc_adv_nic_3),
+					      CPFL_DMA_MEM_ALIGN);
+		else
+			ring_size = RTE_ALIGN(len * sizeof(struct virtchnl2_singleq_rx_buf_desc),
+					      CPFL_DMA_MEM_ALIGN);
+		memcpy(ring_name, "cpfl Rx ring", sizeof("cpfl Rx ring"));
+		break;
+	case VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION:
+		ring_size = RTE_ALIGN(len * sizeof(struct idpf_splitq_tx_compl_desc),
+				      CPFL_DMA_MEM_ALIGN);
+		memcpy(ring_name, "cpfl Tx compl ring", sizeof("cpfl Tx compl ring"));
+		break;
+	case VIRTCHNL2_QUEUE_TYPE_RX_BUFFER:
+		ring_size = RTE_ALIGN(len * sizeof(struct virtchnl2_splitq_rx_buf_desc),
+				      CPFL_DMA_MEM_ALIGN);
+		memcpy(ring_name, "cpfl Rx buf ring", sizeof("cpfl Rx buf ring"));
+		break;
+	default:
+		PMD_INIT_LOG(ERR, "Invalid queue type");
+		return NULL;
+	}
+
+	mz = rte_eth_dma_zone_reserve(dev, ring_name, queue_idx,
+				      ring_size, CPFL_RING_BASE_ALIGN,
+				      socket_id);
+	if (mz == NULL) {
+		PMD_INIT_LOG(ERR, "Failed to reserve DMA memory for ring");
+		return NULL;
+	}
+
+	/* Zero all the descriptors in the ring. */
+	memset(mz->addr, 0, ring_size);
+
+	return mz;
+}
+
+static void
+cpfl_dma_zone_release(const struct rte_memzone *mz)
+{
+	rte_memzone_free(mz);
+}
+
+static int
+cpfl_tx_complq_setup(struct rte_eth_dev *dev, struct idpf_tx_queue *txq,
+		     uint16_t queue_idx, uint16_t nb_desc,
+		     unsigned int socket_id)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	const struct rte_memzone *mz;
+	struct idpf_tx_queue *cq;
+	int ret;
+
+	cq = rte_zmalloc_socket("cpfl splitq cq",
+				sizeof(struct idpf_tx_queue),
+				RTE_CACHE_LINE_SIZE,
+				socket_id);
+	if (cq == NULL) {
+		PMD_INIT_LOG(ERR, "Failed to allocate memory for Tx compl queue");
+		ret = -ENOMEM;
+		goto err_cq_alloc;
+	}
+
+	cq->nb_tx_desc = nb_desc;
+	cq->queue_id = vport->chunks_info.tx_compl_start_qid + queue_idx;
+	cq->port_id = dev->data->port_id;
+	cq->txqs = dev->data->tx_queues;
+	cq->tx_start_qid = vport->chunks_info.tx_start_qid;
+
+	mz = cpfl_dma_zone_reserve(dev, queue_idx, nb_desc,
+				   VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION,
+				   socket_id, true);
+	if (mz == NULL) {
+		ret = -ENOMEM;
+		goto err_mz_reserve;
+	}
+	cq->tx_ring_phys_addr = mz->iova;
+	cq->compl_ring = mz->addr;
+	cq->mz = mz;
+	idpf_qc_split_tx_complq_reset(cq);
+
+	txq->complq = cq;
+
+	return 0;
+
+err_mz_reserve:
+	rte_free(cq);
+err_cq_alloc:
+	return ret;
+}
+
+int
+cpfl_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
+		    uint16_t nb_desc, unsigned int socket_id,
+		    const struct rte_eth_txconf *tx_conf)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_adapter *base = vport->adapter;
+	uint16_t tx_rs_thresh, tx_free_thresh;
+	struct idpf_hw *hw = &base->hw;
+	const struct rte_memzone *mz;
+	struct idpf_tx_queue *txq;
+	uint64_t offloads;
+	uint16_t len;
+	bool is_splitq;
+	int ret;
+
+	offloads = tx_conf->offloads | dev->data->dev_conf.txmode.offloads;
+
+	tx_rs_thresh = (uint16_t)((tx_conf->tx_rs_thresh > 0) ?
+		tx_conf->tx_rs_thresh : CPFL_DEFAULT_TX_RS_THRESH);
+	tx_free_thresh = (uint16_t)((tx_conf->tx_free_thresh > 0) ?
+		tx_conf->tx_free_thresh : CPFL_DEFAULT_TX_FREE_THRESH);
+	if (idpf_qc_tx_thresh_check(nb_desc, tx_rs_thresh, tx_free_thresh) != 0)
+		return -EINVAL;
+
+	/* Allocate the TX queue data structure. */
+	txq = rte_zmalloc_socket("cpfl txq",
+				 sizeof(struct idpf_tx_queue),
+				 RTE_CACHE_LINE_SIZE,
+				 socket_id);
+	if (txq == NULL) {
+		PMD_INIT_LOG(ERR, "Failed to allocate memory for tx queue structure");
+		ret = -ENOMEM;
+		goto err_txq_alloc;
+	}
+
+	is_splitq = !!(vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT);
+
+	txq->nb_tx_desc = nb_desc;
+	txq->rs_thresh = tx_rs_thresh;
+	txq->free_thresh = tx_free_thresh;
+	txq->queue_id = vport->chunks_info.tx_start_qid + queue_idx;
+	txq->port_id = dev->data->port_id;
+	txq->offloads = cpfl_tx_offload_convert(offloads);
+	txq->tx_deferred_start = tx_conf->tx_deferred_start;
+
+	if (is_splitq)
+		len = 2 * nb_desc;
+	else
+		len = nb_desc;
+	txq->sw_nb_desc = len;
+
+	/* Allocate TX hardware ring descriptors. */
+	mz = cpfl_dma_zone_reserve(dev, queue_idx, nb_desc, VIRTCHNL2_QUEUE_TYPE_TX,
+				   socket_id, is_splitq);
+	if (mz == NULL) {
+		ret = -ENOMEM;
+		goto err_mz_reserve;
+	}
+	txq->tx_ring_phys_addr = mz->iova;
+	txq->mz = mz;
+
+	txq->sw_ring = rte_zmalloc_socket("cpfl tx sw ring",
+					  sizeof(struct idpf_tx_entry) * len,
+					  RTE_CACHE_LINE_SIZE, socket_id);
+	if (txq->sw_ring == NULL) {
+		PMD_INIT_LOG(ERR, "Failed to allocate memory for SW TX ring");
+		ret = -ENOMEM;
+		goto err_sw_ring_alloc;
+	}
+
+	if (!is_splitq) {
+		txq->tx_ring = mz->addr;
+		idpf_qc_single_tx_queue_reset(txq);
+	} else {
+		txq->desc_ring = mz->addr;
+		idpf_qc_split_tx_descq_reset(txq);
+
+		/* Setup tx completion queue if split model */
+		ret = cpfl_tx_complq_setup(dev, txq, queue_idx,
+					   2 * nb_desc, socket_id);
+		if (ret != 0)
+			goto err_complq_setup;
+	}
+
+	txq->qtx_tail = hw->hw_addr + (vport->chunks_info.tx_qtail_start +
+			queue_idx * vport->chunks_info.tx_qtail_spacing);
+	txq->q_set = true;
+	dev->data->tx_queues[queue_idx] = txq;
+
+	return 0;
+
+err_complq_setup:
+err_sw_ring_alloc:
+	cpfl_dma_zone_release(mz);
+err_mz_reserve:
+	rte_free(txq);
+err_txq_alloc:
+	return ret;
+}
diff --git a/drivers/net/cpfl/cpfl_rxtx.h b/drivers/net/cpfl/cpfl_rxtx.h
new file mode 100644
index 0000000000..232630c5e9
--- /dev/null
+++ b/drivers/net/cpfl/cpfl_rxtx.h
@@ -0,0 +1,25 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Intel Corporation
+ */
+
+#ifndef _CPFL_RXTX_H_
+#define _CPFL_RXTX_H_
+
+#include <idpf_common_rxtx.h>
+#include "cpfl_ethdev.h"
+
+/* Queue length must be a whole multiple of 32 descriptors. */
+#define CPFL_ALIGN_RING_DESC	32
+#define CPFL_MIN_RING_DESC	32
+#define CPFL_MAX_RING_DESC	4096
+#define CPFL_DMA_MEM_ALIGN	4096
+/* Base address of the HW descriptor ring should be 128B aligned. */
+#define CPFL_RING_BASE_ALIGN	128
+
+#define CPFL_DEFAULT_TX_RS_THRESH	32
+#define CPFL_DEFAULT_TX_FREE_THRESH	32
+
+int cpfl_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
+			uint16_t nb_desc, unsigned int socket_id,
+			const struct rte_eth_txconf *tx_conf);
+#endif /* _CPFL_RXTX_H_ */
diff --git a/drivers/net/cpfl/meson.build b/drivers/net/cpfl/meson.build
index c721732b50..1894423689 100644
--- a/drivers/net/cpfl/meson.build
+++ b/drivers/net/cpfl/meson.build
@@ -11,4 +11,5 @@ deps += ['common_idpf']
 
 sources = files(
         'cpfl_ethdev.c',
+        'cpfl_rxtx.c',
 )
\ No newline at end of file
-- 
2.34.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v9 03/21] net/cpfl: add Rx queue setup
  2023-03-02 21:20             ` [PATCH v9 00/21] add support for cpfl PMD in DPDK Mingxia Liu
                                 ` (2 preceding siblings ...)
  2023-03-02 21:20               ` [PATCH v9 02/21] net/cpfl: add Tx queue setup Mingxia Liu
@ 2023-03-02 21:20               ` Mingxia Liu
  2023-03-02 21:20               ` [PATCH v9 04/21] net/cpfl: support device start and stop Mingxia Liu
                                 ` (17 subsequent siblings)
  21 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-03-02 21:20 UTC (permalink / raw)
  To: dev, beilei.xing, yuying.zhang; +Cc: Mingxia Liu

Add support for rx_queue_setup ops.

There are two queue modes supported, single queue mode and split
queue mode for Rx queue.

For the single queue model, the descriptor RX queue is used by SW
to post buffer descriptors to HW, and it's also used by HW to post
completed descriptors to SW.

For the split queue model, "RX buffer queues" are used to pass
descriptor buffers from SW to HW, while RX queues are used only to
pass the descriptor completions from HW to SW.
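
A matching application-side sketch (not part of this patch; the pool
and ring sizes are illustrative):

	#include <stdlib.h>
	#include <rte_eal.h>
	#include <rte_ethdev.h>
	#include <rte_lcore.h>
	#include <rte_mbuf.h>

	struct rte_mempool *mp;

	mp = rte_pktmbuf_pool_create("mb_pool", 4096, 256, 0,
				     RTE_MBUF_DEFAULT_BUF_SIZE,
				     rte_socket_id());
	if (mp == NULL)
		rte_exit(EXIT_FAILURE, "mempool creation failed\n");

	/* rx_conf == NULL: the PMD default rx_free_thresh (32) applies */
	if (rte_eth_rx_queue_setup(0, 0, 1024, rte_socket_id(), NULL, mp) != 0)
		rte_exit(EXIT_FAILURE, "Rx queue setup failed\n");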

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/net/cpfl/cpfl_ethdev.c |  11 ++
 drivers/net/cpfl/cpfl_rxtx.c   | 232 +++++++++++++++++++++++++++++++++
 drivers/net/cpfl/cpfl_rxtx.h   |   6 +
 3 files changed, 249 insertions(+)

diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index c26ff57730..ae011da76f 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -99,12 +99,22 @@ cpfl_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 		.tx_rs_thresh = CPFL_DEFAULT_TX_RS_THRESH,
 	};
 
+	dev_info->default_rxconf = (struct rte_eth_rxconf) {
+		.rx_free_thresh = CPFL_DEFAULT_RX_FREE_THRESH,
+	};
+
 	dev_info->tx_desc_lim = (struct rte_eth_desc_lim) {
 		.nb_max = CPFL_MAX_RING_DESC,
 		.nb_min = CPFL_MIN_RING_DESC,
 		.nb_align = CPFL_ALIGN_RING_DESC,
 	};
 
+	dev_info->rx_desc_lim = (struct rte_eth_desc_lim) {
+		.nb_max = CPFL_MAX_RING_DESC,
+		.nb_min = CPFL_MIN_RING_DESC,
+		.nb_align = CPFL_ALIGN_RING_DESC,
+	};
+
 	return 0;
 }
 
@@ -191,6 +201,7 @@ cpfl_dev_close(struct rte_eth_dev *dev)
 static const struct eth_dev_ops cpfl_eth_dev_ops = {
 	.dev_configure			= cpfl_dev_configure,
 	.dev_close			= cpfl_dev_close,
+	.rx_queue_setup			= cpfl_rx_queue_setup,
 	.tx_queue_setup			= cpfl_tx_queue_setup,
 	.dev_infos_get			= cpfl_dev_info_get,
 	.link_update			= cpfl_dev_link_update,
diff --git a/drivers/net/cpfl/cpfl_rxtx.c b/drivers/net/cpfl/cpfl_rxtx.c
index 737d069ec2..930d725a4a 100644
--- a/drivers/net/cpfl/cpfl_rxtx.c
+++ b/drivers/net/cpfl/cpfl_rxtx.c
@@ -9,6 +9,25 @@
 #include "cpfl_ethdev.h"
 #include "cpfl_rxtx.h"
 
+static uint64_t
+cpfl_rx_offload_convert(uint64_t offload)
+{
+	uint64_t ol = 0;
+
+	if ((offload & RTE_ETH_RX_OFFLOAD_IPV4_CKSUM) != 0)
+		ol |= IDPF_RX_OFFLOAD_IPV4_CKSUM;
+	if ((offload & RTE_ETH_RX_OFFLOAD_UDP_CKSUM) != 0)
+		ol |= IDPF_RX_OFFLOAD_UDP_CKSUM;
+	if ((offload & RTE_ETH_RX_OFFLOAD_TCP_CKSUM) != 0)
+		ol |= IDPF_RX_OFFLOAD_TCP_CKSUM;
+	if ((offload & RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM) != 0)
+		ol |= IDPF_RX_OFFLOAD_OUTER_IPV4_CKSUM;
+	if ((offload & RTE_ETH_RX_OFFLOAD_TIMESTAMP) != 0)
+		ol |= IDPF_RX_OFFLOAD_TIMESTAMP;
+
+	return ol;
+}
+
 static uint64_t
 cpfl_tx_offload_convert(uint64_t offload)
 {
@@ -94,6 +113,219 @@ cpfl_dma_zone_release(const struct rte_memzone *mz)
 	rte_memzone_free(mz);
 }
 
+static int
+cpfl_rx_split_bufq_setup(struct rte_eth_dev *dev, struct idpf_rx_queue *rxq,
+			 uint16_t queue_idx, uint16_t rx_free_thresh,
+			 uint16_t nb_desc, unsigned int socket_id,
+			 struct rte_mempool *mp, uint8_t bufq_id)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_adapter *base = vport->adapter;
+	struct idpf_hw *hw = &base->hw;
+	const struct rte_memzone *mz;
+	struct idpf_rx_queue *bufq;
+	uint16_t len;
+	int ret;
+
+	bufq = rte_zmalloc_socket("cpfl bufq",
+				   sizeof(struct idpf_rx_queue),
+				   RTE_CACHE_LINE_SIZE,
+				   socket_id);
+	if (bufq == NULL) {
+		PMD_INIT_LOG(ERR, "Failed to allocate memory for rx buffer queue.");
+		ret = -ENOMEM;
+		goto err_bufq1_alloc;
+	}
+
+	bufq->mp = mp;
+	bufq->nb_rx_desc = nb_desc;
+	bufq->rx_free_thresh = rx_free_thresh;
+	bufq->queue_id = vport->chunks_info.rx_buf_start_qid + queue_idx;
+	bufq->port_id = dev->data->port_id;
+	bufq->rx_hdr_len = 0;
+	bufq->adapter = base;
+
+	len = rte_pktmbuf_data_room_size(bufq->mp) - RTE_PKTMBUF_HEADROOM;
+	bufq->rx_buf_len = len;
+
+	/* Allocate a little more to support bulk allocate. */
+	len = nb_desc + IDPF_RX_MAX_BURST;
+
+	mz = cpfl_dma_zone_reserve(dev, queue_idx, len,
+				   VIRTCHNL2_QUEUE_TYPE_RX_BUFFER,
+				   socket_id, true);
+	if (mz == NULL) {
+		ret = -ENOMEM;
+		goto err_mz_reserve;
+	}
+
+	bufq->rx_ring_phys_addr = mz->iova;
+	bufq->rx_ring = mz->addr;
+	bufq->mz = mz;
+
+	bufq->sw_ring =
+		rte_zmalloc_socket("cpfl rx bufq sw ring",
+				   sizeof(struct rte_mbuf *) * len,
+				   RTE_CACHE_LINE_SIZE,
+				   socket_id);
+	if (bufq->sw_ring == NULL) {
+		PMD_INIT_LOG(ERR, "Failed to allocate memory for SW ring");
+		ret = -ENOMEM;
+		goto err_sw_ring_alloc;
+	}
+
+	idpf_qc_split_rx_bufq_reset(bufq);
+	bufq->qrx_tail = hw->hw_addr + (vport->chunks_info.rx_buf_qtail_start +
+			 queue_idx * vport->chunks_info.rx_buf_qtail_spacing);
+	bufq->q_set = true;
+
+	if (bufq_id == IDPF_RX_SPLIT_BUFQ1_ID) {
+		rxq->bufq1 = bufq;
+	} else if (bufq_id == IDPF_RX_SPLIT_BUFQ2_ID) {
+		rxq->bufq2 = bufq;
+	} else {
+		PMD_INIT_LOG(ERR, "Invalid buffer queue index.");
+		ret = -EINVAL;
+		goto err_bufq_id;
+	}
+
+	return 0;
+
+err_bufq_id:
+	rte_free(bufq->sw_ring);
+err_sw_ring_alloc:
+	cpfl_dma_zone_release(mz);
+err_mz_reserve:
+	rte_free(bufq);
+err_bufq1_alloc:
+	return ret;
+}
+
+static void
+cpfl_rx_split_bufq_release(struct idpf_rx_queue *bufq)
+{
+	rte_free(bufq->sw_ring);
+	cpfl_dma_zone_release(bufq->mz);
+	rte_free(bufq);
+}
+
+int
+cpfl_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
+		    uint16_t nb_desc, unsigned int socket_id,
+		    const struct rte_eth_rxconf *rx_conf,
+		    struct rte_mempool *mp)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_adapter *base = vport->adapter;
+	struct idpf_hw *hw = &base->hw;
+	const struct rte_memzone *mz;
+	struct idpf_rx_queue *rxq;
+	uint16_t rx_free_thresh;
+	uint64_t offloads;
+	bool is_splitq;
+	uint16_t len;
+	int ret;
+
+	offloads = rx_conf->offloads | dev->data->dev_conf.rxmode.offloads;
+
+	/* Check free threshold */
+	rx_free_thresh = (rx_conf->rx_free_thresh == 0) ?
+		CPFL_DEFAULT_RX_FREE_THRESH :
+		rx_conf->rx_free_thresh;
+	if (idpf_qc_rx_thresh_check(nb_desc, rx_free_thresh) != 0)
+		return -EINVAL;
+
+	/* Setup Rx queue */
+	rxq = rte_zmalloc_socket("cpfl rxq",
+				 sizeof(struct idpf_rx_queue),
+				 RTE_CACHE_LINE_SIZE,
+				 socket_id);
+	if (rxq == NULL) {
+		PMD_INIT_LOG(ERR, "Failed to allocate memory for rx queue data structure");
+		ret = -ENOMEM;
+		goto err_rxq_alloc;
+	}
+
+	is_splitq = !!(vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT);
+
+	rxq->mp = mp;
+	rxq->nb_rx_desc = nb_desc;
+	rxq->rx_free_thresh = rx_free_thresh;
+	rxq->queue_id = vport->chunks_info.rx_start_qid + queue_idx;
+	rxq->port_id = dev->data->port_id;
+	rxq->rx_deferred_start = rx_conf->rx_deferred_start;
+	rxq->rx_hdr_len = 0;
+	rxq->adapter = base;
+	rxq->offloads = cpfl_rx_offload_convert(offloads);
+
+	len = rte_pktmbuf_data_room_size(rxq->mp) - RTE_PKTMBUF_HEADROOM;
+	rxq->rx_buf_len = len;
+
+	/* Allocate a little more to support bulk allocate. */
+	len = nb_desc + IDPF_RX_MAX_BURST;
+	mz = cpfl_dma_zone_reserve(dev, queue_idx, len, VIRTCHNL2_QUEUE_TYPE_RX,
+				   socket_id, is_splitq);
+	if (mz == NULL) {
+		ret = -ENOMEM;
+		goto err_mz_reserve;
+	}
+	rxq->rx_ring_phys_addr = mz->iova;
+	rxq->rx_ring = mz->addr;
+	rxq->mz = mz;
+
+	if (!is_splitq) {
+		rxq->sw_ring = rte_zmalloc_socket("cpfl rxq sw ring",
+						  sizeof(struct rte_mbuf *) * len,
+						  RTE_CACHE_LINE_SIZE,
+						  socket_id);
+		if (rxq->sw_ring == NULL) {
+			PMD_INIT_LOG(ERR, "Failed to allocate memory for SW ring");
+			ret = -ENOMEM;
+			goto err_sw_ring_alloc;
+		}
+
+		idpf_qc_single_rx_queue_reset(rxq);
+		rxq->qrx_tail = hw->hw_addr + (vport->chunks_info.rx_qtail_start +
+				queue_idx * vport->chunks_info.rx_qtail_spacing);
+	} else {
+		idpf_qc_split_rx_descq_reset(rxq);
+
+		/* Setup Rx buffer queues */
+		ret = cpfl_rx_split_bufq_setup(dev, rxq, 2 * queue_idx,
+					       rx_free_thresh, nb_desc,
+					       socket_id, mp, 1);
+		if (ret != 0) {
+			PMD_INIT_LOG(ERR, "Failed to setup buffer queue 1");
+			ret = -EINVAL;
+			goto err_bufq1_setup;
+		}
+
+		ret = cpfl_rx_split_bufq_setup(dev, rxq, 2 * queue_idx + 1,
+					       rx_free_thresh, nb_desc,
+					       socket_id, mp, 2);
+		if (ret != 0) {
+			PMD_INIT_LOG(ERR, "Failed to setup buffer queue 2");
+			ret = -EINVAL;
+			goto err_bufq2_setup;
+		}
+	}
+
+	rxq->q_set = true;
+	dev->data->rx_queues[queue_idx] = rxq;
+
+	return 0;
+
+err_bufq2_setup:
+	cpfl_rx_split_bufq_release(rxq->bufq1);
+err_bufq1_setup:
+err_sw_ring_alloc:
+	cpfl_dma_zone_release(mz);
+err_mz_reserve:
+	rte_free(rxq);
+err_rxq_alloc:
+	return ret;
+}
+
 static int
 cpfl_tx_complq_setup(struct rte_eth_dev *dev, struct idpf_tx_queue *txq,
 		     uint16_t queue_idx, uint16_t nb_desc,
diff --git a/drivers/net/cpfl/cpfl_rxtx.h b/drivers/net/cpfl/cpfl_rxtx.h
index 232630c5e9..e0221abfa3 100644
--- a/drivers/net/cpfl/cpfl_rxtx.h
+++ b/drivers/net/cpfl/cpfl_rxtx.h
@@ -16,10 +16,16 @@
 /* Base address of the HW descriptor ring should be 128B aligned. */
 #define CPFL_RING_BASE_ALIGN	128
 
+#define CPFL_DEFAULT_RX_FREE_THRESH	32
+
 #define CPFL_DEFAULT_TX_RS_THRESH	32
 #define CPFL_DEFAULT_TX_FREE_THRESH	32
 
 int cpfl_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 			uint16_t nb_desc, unsigned int socket_id,
 			const struct rte_eth_txconf *tx_conf);
+int cpfl_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
+			uint16_t nb_desc, unsigned int socket_id,
+			const struct rte_eth_rxconf *rx_conf,
+			struct rte_mempool *mp);
 #endif /* _CPFL_RXTX_H_ */
-- 
2.34.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v9 04/21] net/cpfl: support device start and stop
  2023-03-02 21:20             ` [PATCH v9 00/21] add support for cpfl PMD in DPDK Mingxia Liu
                                 ` (3 preceding siblings ...)
  2023-03-02 21:20               ` [PATCH v9 03/21] net/cpfl: add Rx " Mingxia Liu
@ 2023-03-02 21:20               ` Mingxia Liu
  2023-03-02 21:20               ` [PATCH v9 05/21] net/cpfl: support queue start Mingxia Liu
                                 ` (16 subsequent siblings)
  21 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-03-02 21:20 UTC (permalink / raw)
  To: dev, beilei.xing, yuying.zhang; +Cc: Mingxia Liu

Add dev ops dev_start and dev_stop.
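
For context, a hedged sketch of the ethdev lifecycle an application
follows once these ops are in place (port 0 assumed, queues already
set up):

	int ret;

	if (rte_eth_dev_start(0) != 0)
		rte_exit(EXIT_FAILURE, "dev_start failed\n");

	/* ... rte_eth_rx_burst() / rte_eth_tx_burst() loop ... */

	ret = rte_eth_dev_stop(0);
	if (ret != 0)
		printf("dev_stop failed: %d\n", ret);
	rte_eth_dev_close(0);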

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/net/cpfl/cpfl_ethdev.c | 35 ++++++++++++++++++++++++++++++++++
 1 file changed, 35 insertions(+)

diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index ae011da76f..6cbc950d84 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -181,12 +181,45 @@ cpfl_dev_configure(struct rte_eth_dev *dev)
 	return 0;
 }
 
+static int
+cpfl_dev_start(struct rte_eth_dev *dev)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	int ret;
+
+	ret = idpf_vc_vport_ena_dis(vport, true);
+	if (ret != 0) {
+		PMD_DRV_LOG(ERR, "Failed to enable vport");
+		return ret;
+	}
+
+	vport->stopped = 0;
+
+	return 0;
+}
+
+static int
+cpfl_dev_stop(struct rte_eth_dev *dev)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+
+	if (vport->stopped == 1)
+		return 0;
+
+	idpf_vc_vport_ena_dis(vport, false);
+
+	vport->stopped = 1;
+
+	return 0;
+}
+
 static int
 cpfl_dev_close(struct rte_eth_dev *dev)
 {
 	struct idpf_vport *vport = dev->data->dev_private;
 	struct cpfl_adapter_ext *adapter = CPFL_ADAPTER_TO_EXT(vport->adapter);
 
+	cpfl_dev_stop(dev);
 	idpf_vport_deinit(vport);
 
 	adapter->cur_vports &= ~RTE_BIT32(vport->devarg_id);
@@ -204,6 +237,8 @@ static const struct eth_dev_ops cpfl_eth_dev_ops = {
 	.rx_queue_setup			= cpfl_rx_queue_setup,
 	.tx_queue_setup			= cpfl_tx_queue_setup,
 	.dev_infos_get			= cpfl_dev_info_get,
+	.dev_start			= cpfl_dev_start,
+	.dev_stop			= cpfl_dev_stop,
 	.link_update			= cpfl_dev_link_update,
 	.dev_supported_ptypes_get	= cpfl_dev_supported_ptypes_get,
 };
-- 
2.34.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v9 05/21] net/cpfl: support queue start
  2023-03-02 21:20             ` [PATCH v9 00/21] add support for cpfl PMD in DPDK Mingxia Liu
                                 ` (4 preceding siblings ...)
  2023-03-02 21:20               ` [PATCH v9 04/21] net/cpfl: support device start and stop Mingxia Liu
@ 2023-03-02 21:20               ` Mingxia Liu
  2023-03-02 21:20               ` [PATCH v9 06/21] net/cpfl: support queue stop Mingxia Liu
                                 ` (15 subsequent siblings)
  21 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-03-02 21:20 UTC (permalink / raw)
  To: dev, beilei.xing, yuying.zhang; +Cc: Mingxia Liu

Add support for these device ops:
 - rx_queue_start
 - tx_queue_start
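
These ops pair naturally with deferred-start queues. A hedged sketch
(port 0, queue 0 and an existing mempool mp are assumed):

	struct rte_eth_rxconf rxconf = { .rx_deferred_start = 1 };

	/* queue is set up but skipped by rte_eth_dev_start() */
	rte_eth_rx_queue_setup(0, 0, 1024, rte_socket_id(), &rxconf, mp);
	rte_eth_dev_start(0);

	/* start it explicitly later through the new op */
	if (rte_eth_dev_rx_queue_start(0, 0) != 0)
		rte_exit(EXIT_FAILURE, "Rx queue start failed\n");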

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/net/cpfl/cpfl_ethdev.c |  41 ++++++++++
 drivers/net/cpfl/cpfl_rxtx.c   | 138 +++++++++++++++++++++++++++++++++
 drivers/net/cpfl/cpfl_rxtx.h   |   4 +
 3 files changed, 183 insertions(+)

diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index 6cbc950d84..02a771638e 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -181,12 +181,51 @@ cpfl_dev_configure(struct rte_eth_dev *dev)
 	return 0;
 }
 
+static int
+cpfl_start_queues(struct rte_eth_dev *dev)
+{
+	struct idpf_rx_queue *rxq;
+	struct idpf_tx_queue *txq;
+	int err = 0;
+	int i;
+
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		txq = dev->data->tx_queues[i];
+		if (txq == NULL || txq->tx_deferred_start)
+			continue;
+		err = cpfl_tx_queue_start(dev, i);
+		if (err != 0) {
+			PMD_DRV_LOG(ERR, "Fail to start Tx queue %u", i);
+			return err;
+		}
+	}
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		rxq = dev->data->rx_queues[i];
+		if (rxq == NULL || rxq->rx_deferred_start)
+			continue;
+		err = cpfl_rx_queue_start(dev, i);
+		if (err != 0) {
+			PMD_DRV_LOG(ERR, "Fail to start Rx queue %u", i);
+			return err;
+		}
+	}
+
+	return err;
+}
+
 static int
 cpfl_dev_start(struct rte_eth_dev *dev)
 {
 	struct idpf_vport *vport = dev->data->dev_private;
 	int ret;
 
+	ret = cpfl_start_queues(dev);
+	if (ret != 0) {
+		PMD_DRV_LOG(ERR, "Failed to start queues");
+		return ret;
+	}
+
 	ret = idpf_vc_vport_ena_dis(vport, true);
 	if (ret != 0) {
 		PMD_DRV_LOG(ERR, "Failed to enable vport");
@@ -240,6 +279,8 @@ static const struct eth_dev_ops cpfl_eth_dev_ops = {
 	.dev_start			= cpfl_dev_start,
 	.dev_stop			= cpfl_dev_stop,
 	.link_update			= cpfl_dev_link_update,
+	.rx_queue_start			= cpfl_rx_queue_start,
+	.tx_queue_start			= cpfl_tx_queue_start,
 	.dev_supported_ptypes_get	= cpfl_dev_supported_ptypes_get,
 };
 
diff --git a/drivers/net/cpfl/cpfl_rxtx.c b/drivers/net/cpfl/cpfl_rxtx.c
index 930d725a4a..c13166b63c 100644
--- a/drivers/net/cpfl/cpfl_rxtx.c
+++ b/drivers/net/cpfl/cpfl_rxtx.c
@@ -474,3 +474,141 @@ cpfl_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 err_txq_alloc:
 	return ret;
 }
+
+int
+cpfl_rx_queue_init(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+{
+	struct idpf_rx_queue *rxq;
+	int err;
+
+	if (rx_queue_id >= dev->data->nb_rx_queues)
+		return -EINVAL;
+
+	rxq = dev->data->rx_queues[rx_queue_id];
+
+	if (rxq == NULL || !rxq->q_set) {
+		PMD_DRV_LOG(ERR, "RX queue %u is not available or not set up",
+			    rx_queue_id);
+		return -EINVAL;
+	}
+
+	if (rxq->adapter->is_rx_singleq) {
+		/* Single queue */
+		err = idpf_qc_single_rxq_mbufs_alloc(rxq);
+		if (err != 0) {
+			PMD_DRV_LOG(ERR, "Failed to allocate RX queue mbuf");
+			return err;
+		}
+
+		rte_wmb();
+
+		/* Init the RX tail register. */
+		IDPF_PCI_REG_WRITE(rxq->qrx_tail, rxq->nb_rx_desc - 1);
+	} else {
+		/* Split queue */
+		err = idpf_qc_split_rxq_mbufs_alloc(rxq->bufq1);
+		if (err != 0) {
+			PMD_DRV_LOG(ERR, "Failed to allocate RX buffer queue mbuf");
+			return err;
+		}
+		err = idpf_qc_split_rxq_mbufs_alloc(rxq->bufq2);
+		if (err != 0) {
+			PMD_DRV_LOG(ERR, "Failed to allocate RX buffer queue mbuf");
+			return err;
+		}
+
+		rte_wmb();
+
+		/* Init the RX tail register. */
+		IDPF_PCI_REG_WRITE(rxq->bufq1->qrx_tail, rxq->bufq1->rx_tail);
+		IDPF_PCI_REG_WRITE(rxq->bufq2->qrx_tail, rxq->bufq2->rx_tail);
+	}
+
+	return err;
+}
+
+int
+cpfl_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_rx_queue *rxq =
+		dev->data->rx_queues[rx_queue_id];
+	int err = 0;
+
+	err = idpf_vc_rxq_config(vport, rxq);
+	if (err != 0) {
+		PMD_DRV_LOG(ERR, "Fail to configure Rx queue %u", rx_queue_id);
+		return err;
+	}
+
+	err = cpfl_rx_queue_init(dev, rx_queue_id);
+	if (err != 0) {
+		PMD_DRV_LOG(ERR, "Failed to init RX queue %u",
+			    rx_queue_id);
+		return err;
+	}
+
+	/* Ready to switch the queue on */
+	err = idpf_vc_queue_switch(vport, rx_queue_id, true, true);
+	if (err != 0) {
+		PMD_DRV_LOG(ERR, "Failed to switch RX queue %u on",
+			    rx_queue_id);
+	} else {
+		rxq->q_started = true;
+		dev->data->rx_queue_state[rx_queue_id] =
+			RTE_ETH_QUEUE_STATE_STARTED;
+	}
+
+	return err;
+}
+
+int
+cpfl_tx_queue_init(struct rte_eth_dev *dev, uint16_t tx_queue_id)
+{
+	struct idpf_tx_queue *txq;
+
+	if (tx_queue_id >= dev->data->nb_tx_queues)
+		return -EINVAL;
+
+	txq = dev->data->tx_queues[tx_queue_id];
+
+	/* Init the TX tail register. */
+	IDPF_PCI_REG_WRITE(txq->qtx_tail, 0);
+
+	return 0;
+}
+
+int
+cpfl_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_tx_queue *txq =
+		dev->data->tx_queues[tx_queue_id];
+	int err = 0;
+
+	err = idpf_vc_txq_config(vport, txq);
+	if (err != 0) {
+		PMD_DRV_LOG(ERR, "Fail to configure Tx queue %u", tx_queue_id);
+		return err;
+	}
+
+	err = cpfl_tx_queue_init(dev, tx_queue_id);
+	if (err != 0) {
+		PMD_DRV_LOG(ERR, "Failed to init TX queue %u",
+			    tx_queue_id);
+		return err;
+	}
+
+	/* Ready to switch the queue on */
+	err = idpf_vc_queue_switch(vport, tx_queue_id, false, true);
+	if (err != 0) {
+		PMD_DRV_LOG(ERR, "Failed to switch TX queue %u on",
+			    tx_queue_id);
+	} else {
+		txq->q_started = true;
+		dev->data->tx_queue_state[tx_queue_id] =
+			RTE_ETH_QUEUE_STATE_STARTED;
+	}
+
+	return err;
+}
diff --git a/drivers/net/cpfl/cpfl_rxtx.h b/drivers/net/cpfl/cpfl_rxtx.h
index e0221abfa3..716b2fefa4 100644
--- a/drivers/net/cpfl/cpfl_rxtx.h
+++ b/drivers/net/cpfl/cpfl_rxtx.h
@@ -28,4 +28,8 @@ int cpfl_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 			uint16_t nb_desc, unsigned int socket_id,
 			const struct rte_eth_rxconf *rx_conf,
 			struct rte_mempool *mp);
+int cpfl_rx_queue_init(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+int cpfl_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+int cpfl_tx_queue_init(struct rte_eth_dev *dev, uint16_t tx_queue_id);
+int cpfl_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id);
 #endif /* _CPFL_RXTX_H_ */
-- 
2.34.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v9 06/21] net/cpfl: support queue stop
  2023-03-02 21:20             ` [PATCH v9 00/21] add support for cpfl PMD in DPDK Mingxia Liu
                                 ` (5 preceding siblings ...)
  2023-03-02 21:20               ` [PATCH v9 05/21] net/cpfl: support queue start Mingxia Liu
@ 2023-03-02 21:20               ` Mingxia Liu
  2023-03-02 21:20               ` [PATCH v9 07/21] net/cpfl: support queue release Mingxia Liu
                                 ` (14 subsequent siblings)
  21 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-03-02 21:20 UTC (permalink / raw)
  To: dev, beilei.xing, yuying.zhang; +Cc: Mingxia Liu

Add support for these device ops:
 - rx_queue_stop
 - tx_queue_stop
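
The stop side mirrors the start side. A hedged sketch (port 0, queue 0
assumed):

	/* the PMD releases the queue's mbufs and resets its rings */
	if (rte_eth_dev_rx_queue_stop(0, 0) != 0)
		rte_exit(EXIT_FAILURE, "Rx queue stop failed\n");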

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/net/cpfl/cpfl_ethdev.c | 10 +++-
 drivers/net/cpfl/cpfl_rxtx.c   | 98 ++++++++++++++++++++++++++++++++++
 drivers/net/cpfl/cpfl_rxtx.h   |  3 ++
 3 files changed, 110 insertions(+), 1 deletion(-)

diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index 02a771638e..9aa95c1bb3 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -229,12 +229,16 @@ cpfl_dev_start(struct rte_eth_dev *dev)
 	ret = idpf_vc_vport_ena_dis(vport, true);
 	if (ret != 0) {
 		PMD_DRV_LOG(ERR, "Failed to enable vport");
-		return ret;
+		goto err_vport;
 	}
 
 	vport->stopped = 0;
 
 	return 0;
+
+err_vport:
+	cpfl_stop_queues(dev);
+	return ret;
 }
 
 static int
@@ -247,6 +251,8 @@ cpfl_dev_stop(struct rte_eth_dev *dev)
 
 	idpf_vc_vport_ena_dis(vport, false);
 
+	cpfl_stop_queues(dev);
+
 	vport->stopped = 1;
 
 	return 0;
@@ -281,6 +287,8 @@ static const struct eth_dev_ops cpfl_eth_dev_ops = {
 	.link_update			= cpfl_dev_link_update,
 	.rx_queue_start			= cpfl_rx_queue_start,
 	.tx_queue_start			= cpfl_tx_queue_start,
+	.rx_queue_stop			= cpfl_rx_queue_stop,
+	.tx_queue_stop			= cpfl_tx_queue_stop,
 	.dev_supported_ptypes_get	= cpfl_dev_supported_ptypes_get,
 };
 
diff --git a/drivers/net/cpfl/cpfl_rxtx.c b/drivers/net/cpfl/cpfl_rxtx.c
index c13166b63c..08db01412e 100644
--- a/drivers/net/cpfl/cpfl_rxtx.c
+++ b/drivers/net/cpfl/cpfl_rxtx.c
@@ -49,6 +49,14 @@ cpfl_tx_offload_convert(uint64_t offload)
 	return ol;
 }
 
+static const struct idpf_rxq_ops def_rxq_ops = {
+	.release_mbufs = idpf_qc_rxq_mbufs_release,
+};
+
+static const struct idpf_txq_ops def_txq_ops = {
+	.release_mbufs = idpf_qc_txq_mbufs_release,
+};
+
 static const struct rte_memzone *
 cpfl_dma_zone_reserve(struct rte_eth_dev *dev, uint16_t queue_idx,
 		      uint16_t len, uint16_t queue_type,
@@ -177,6 +185,7 @@ cpfl_rx_split_bufq_setup(struct rte_eth_dev *dev, struct idpf_rx_queue *rxq,
 	idpf_qc_split_rx_bufq_reset(bufq);
 	bufq->qrx_tail = hw->hw_addr + (vport->chunks_info.rx_buf_qtail_start +
 			 queue_idx * vport->chunks_info.rx_buf_qtail_spacing);
+	bufq->ops = &def_rxq_ops;
 	bufq->q_set = true;
 
 	if (bufq_id == IDPF_RX_SPLIT_BUFQ1_ID) {
@@ -287,6 +296,7 @@ cpfl_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 		idpf_qc_single_rx_queue_reset(rxq);
 		rxq->qrx_tail = hw->hw_addr + (vport->chunks_info.rx_qtail_start +
 				queue_idx * vport->chunks_info.rx_qtail_spacing);
+		rxq->ops = &def_rxq_ops;
 	} else {
 		idpf_qc_split_rx_descq_reset(rxq);
 
@@ -461,6 +471,7 @@ cpfl_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 
 	txq->qtx_tail = hw->hw_addr + (vport->chunks_info.tx_qtail_start +
 			queue_idx * vport->chunks_info.tx_qtail_spacing);
+	txq->ops = &def_txq_ops;
 	txq->q_set = true;
 	dev->data->tx_queues[queue_idx] = txq;
 
@@ -612,3 +623,90 @@ cpfl_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
 
 	return err;
 }
+
+int
+cpfl_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_rx_queue *rxq;
+	int err;
+
+	if (rx_queue_id >= dev->data->nb_rx_queues)
+		return -EINVAL;
+
+	err = idpf_vc_queue_switch(vport, rx_queue_id, true, false);
+	if (err != 0) {
+		PMD_DRV_LOG(ERR, "Failed to switch RX queue %u off",
+			    rx_queue_id);
+		return err;
+	}
+
+	rxq = dev->data->rx_queues[rx_queue_id];
+	if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE) {
+		rxq->ops->release_mbufs(rxq);
+		idpf_qc_single_rx_queue_reset(rxq);
+	} else {
+		rxq->bufq1->ops->release_mbufs(rxq->bufq1);
+		rxq->bufq2->ops->release_mbufs(rxq->bufq2);
+		idpf_qc_split_rx_queue_reset(rxq);
+	}
+	dev->data->rx_queue_state[rx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
+
+	return 0;
+}
+
+int
+cpfl_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_tx_queue *txq;
+	int err;
+
+	if (tx_queue_id >= dev->data->nb_tx_queues)
+		return -EINVAL;
+
+	err = idpf_vc_queue_switch(vport, tx_queue_id, false, false);
+	if (err != 0) {
+		PMD_DRV_LOG(ERR, "Failed to switch TX queue %u off",
+			    tx_queue_id);
+		return err;
+	}
+
+	txq = dev->data->tx_queues[tx_queue_id];
+	txq->ops->release_mbufs(txq);
+	if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SINGLE) {
+		idpf_qc_single_tx_queue_reset(txq);
+	} else {
+		idpf_qc_split_tx_descq_reset(txq);
+		idpf_qc_split_tx_complq_reset(txq->complq);
+	}
+	dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
+
+	return 0;
+}
+
+void
+cpfl_stop_queues(struct rte_eth_dev *dev)
+{
+	struct idpf_rx_queue *rxq;
+	struct idpf_tx_queue *txq;
+	int i;
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		rxq = dev->data->rx_queues[i];
+		if (rxq == NULL)
+			continue;
+
+		if (cpfl_rx_queue_stop(dev, i) != 0)
+			PMD_DRV_LOG(WARNING, "Failed to stop Rx queue %d", i);
+	}
+
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		txq = dev->data->tx_queues[i];
+		if (txq == NULL)
+			continue;
+
+		if (cpfl_tx_queue_stop(dev, i) != 0)
+			PMD_DRV_LOG(WARNING, "Failed to stop Tx queue %d", i);
+	}
+}
diff --git a/drivers/net/cpfl/cpfl_rxtx.h b/drivers/net/cpfl/cpfl_rxtx.h
index 716b2fefa4..e9b810deaa 100644
--- a/drivers/net/cpfl/cpfl_rxtx.h
+++ b/drivers/net/cpfl/cpfl_rxtx.h
@@ -32,4 +32,7 @@ int cpfl_rx_queue_init(struct rte_eth_dev *dev, uint16_t rx_queue_id);
 int cpfl_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id);
 int cpfl_tx_queue_init(struct rte_eth_dev *dev, uint16_t tx_queue_id);
 int cpfl_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id);
+void cpfl_stop_queues(struct rte_eth_dev *dev);
+int cpfl_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id);
+int cpfl_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id);
 #endif /* _CPFL_RXTX_H_ */
-- 
2.34.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v9 07/21] net/cpfl: support queue release
  2023-03-02 21:20             ` [PATCH v9 00/21] add support for cpfl PMD in DPDK Mingxia Liu
                                 ` (6 preceding siblings ...)
  2023-03-02 21:20               ` [PATCH v9 06/21] net/cpfl: support queue stop Mingxia Liu
@ 2023-03-02 21:20               ` Mingxia Liu
  2023-03-02 21:20               ` [PATCH v9 08/21] net/cpfl: support MTU configuration Mingxia Liu
                                 ` (13 subsequent siblings)
  21 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-03-02 21:20 UTC (permalink / raw)
  To: dev, beilei.xing, yuying.zhang; +Cc: Mingxia Liu

Add support for queue operations:
 - rx_queue_release
 - tx_queue_release
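
Because cpfl_rx_queue_setup() and cpfl_tx_queue_setup() now release any
queue already present at the same index, a queue can be reconfigured
without leaking memory. A minimal sketch (helper name hypothetical) on
a stopped port:

#include <rte_ethdev.h>

/* Hypothetical helper: re-run queue setup with a new ring size; the
 * PMD frees the old queue before allocating the new one. */
static int
resize_rxq0(uint16_t port_id, struct rte_mempool *mp, uint16_t nb_desc)
{
	return rte_eth_rx_queue_setup(port_id, 0, nb_desc,
				      rte_eth_dev_socket_id(port_id),
				      NULL /* default Rx conf */, mp);
}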

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/net/cpfl/cpfl_ethdev.c |  2 ++
 drivers/net/cpfl/cpfl_rxtx.c   | 24 ++++++++++++++++++++++++
 drivers/net/cpfl/cpfl_rxtx.h   |  2 ++
 3 files changed, 28 insertions(+)

diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index 9aa95c1bb3..cb1ec6e674 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -289,6 +289,8 @@ static const struct eth_dev_ops cpfl_eth_dev_ops = {
 	.tx_queue_start			= cpfl_tx_queue_start,
 	.rx_queue_stop			= cpfl_rx_queue_stop,
 	.tx_queue_stop			= cpfl_tx_queue_stop,
+	.rx_queue_release		= cpfl_dev_rx_queue_release,
+	.tx_queue_release		= cpfl_dev_tx_queue_release,
 	.dev_supported_ptypes_get	= cpfl_dev_supported_ptypes_get,
 };
 
diff --git a/drivers/net/cpfl/cpfl_rxtx.c b/drivers/net/cpfl/cpfl_rxtx.c
index 08db01412e..f9295c970f 100644
--- a/drivers/net/cpfl/cpfl_rxtx.c
+++ b/drivers/net/cpfl/cpfl_rxtx.c
@@ -244,6 +244,12 @@ cpfl_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 	if (idpf_qc_rx_thresh_check(nb_desc, rx_free_thresh) != 0)
 		return -EINVAL;
 
+	/* Free memory if needed */
+	if (dev->data->rx_queues[queue_idx] != NULL) {
+		idpf_qc_rx_queue_release(dev->data->rx_queues[queue_idx]);
+		dev->data->rx_queues[queue_idx] = NULL;
+	}
+
 	/* Setup Rx queue */
 	rxq = rte_zmalloc_socket("cpfl rxq",
 				 sizeof(struct idpf_rx_queue),
@@ -409,6 +415,12 @@ cpfl_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 	if (idpf_qc_tx_thresh_check(nb_desc, tx_rs_thresh, tx_free_thresh) != 0)
 		return -EINVAL;
 
+	/* Free memory if needed. */
+	if (dev->data->tx_queues[queue_idx] != NULL) {
+		idpf_qc_tx_queue_release(dev->data->tx_queues[queue_idx]);
+		dev->data->tx_queues[queue_idx] = NULL;
+	}
+
 	/* Allocate the TX queue data structure. */
 	txq = rte_zmalloc_socket("cpfl txq",
 				 sizeof(struct idpf_tx_queue),
@@ -685,6 +697,18 @@ cpfl_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
 	return 0;
 }
 
+void
+cpfl_dev_rx_queue_release(struct rte_eth_dev *dev, uint16_t qid)
+{
+	idpf_qc_rx_queue_release(dev->data->rx_queues[qid]);
+}
+
+void
+cpfl_dev_tx_queue_release(struct rte_eth_dev *dev, uint16_t qid)
+{
+	idpf_qc_tx_queue_release(dev->data->tx_queues[qid]);
+}
+
 void
 cpfl_stop_queues(struct rte_eth_dev *dev)
 {
diff --git a/drivers/net/cpfl/cpfl_rxtx.h b/drivers/net/cpfl/cpfl_rxtx.h
index e9b810deaa..f5882401dc 100644
--- a/drivers/net/cpfl/cpfl_rxtx.h
+++ b/drivers/net/cpfl/cpfl_rxtx.h
@@ -35,4 +35,6 @@ int cpfl_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id);
 void cpfl_stop_queues(struct rte_eth_dev *dev);
 int cpfl_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id);
 int cpfl_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id);
+void cpfl_dev_tx_queue_release(struct rte_eth_dev *dev, uint16_t qid);
+void cpfl_dev_rx_queue_release(struct rte_eth_dev *dev, uint16_t qid);
 #endif /* _CPFL_RXTX_H_ */
-- 
2.34.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v9 08/21] net/cpfl: support MTU configuration
  2023-03-02 21:20             ` [PATCH v9 00/21] add support for cpfl PMD in DPDK Mingxia Liu
                                 ` (7 preceding siblings ...)
  2023-03-02 21:20               ` [PATCH v9 07/21] net/cpfl: support queue release Mingxia Liu
@ 2023-03-02 21:20               ` Mingxia Liu
  2023-03-02 21:20               ` [PATCH v9 09/21] net/cpfl: support basic Rx data path Mingxia Liu
                                 ` (12 subsequent siblings)
  21 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-03-02 21:20 UTC (permalink / raw)
  To: dev, beilei.xing, yuying.zhang; +Cc: Mingxia Liu

Add the mtu_set device op.
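
As a hedged usage sketch (the helper name is invented), the op only
succeeds on a stopped port, and the driver derives the max packet
length from the new MTU plus CPFL_ETH_OVERHEAD (Ethernet header, CRC
and two VLAN tags):

#include <rte_ethdev.h>

/* Sketch: the op returns -EBUSY if the port is started and -EINVAL
 * above vport->max_mtu, so stop the port first. */
static int
set_mtu_stopped(uint16_t port_id, uint16_t mtu)
{
	int ret = rte_eth_dev_stop(port_id);

	if (ret != 0)
		return ret;
	ret = rte_eth_dev_set_mtu(port_id, mtu);
	if (ret != 0)
		return ret;
	return rte_eth_dev_start(port_id);
}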

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 doc/guides/nics/features/cpfl.ini |  1 +
 drivers/net/cpfl/cpfl_ethdev.c    | 27 +++++++++++++++++++++++++++
 2 files changed, 28 insertions(+)

diff --git a/doc/guides/nics/features/cpfl.ini b/doc/guides/nics/features/cpfl.ini
index a2d1ca9e15..470ba81579 100644
--- a/doc/guides/nics/features/cpfl.ini
+++ b/doc/guides/nics/features/cpfl.ini
@@ -7,6 +7,7 @@
 ; is selected.
 ;
 [Features]
+MTU update           = Y
 Linux                = Y
 x86-32               = Y
 x86-64               = Y
diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index cb1ec6e674..efc8b8f1e4 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -118,6 +118,27 @@ cpfl_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	return 0;
 }
 
+static int
+cpfl_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+
+	/* MTU setting is forbidden if the port is started */
+	if (dev->data->dev_started) {
+		PMD_DRV_LOG(ERR, "port must be stopped before configuration");
+		return -EBUSY;
+	}
+
+	if (mtu > vport->max_mtu) {
+		PMD_DRV_LOG(ERR, "MTU should be less than or equal to %d", vport->max_mtu);
+		return -EINVAL;
+	}
+
+	vport->max_pkt_len = mtu + CPFL_ETH_OVERHEAD;
+
+	return 0;
+}
+
 static const uint32_t *
 cpfl_dev_supported_ptypes_get(struct rte_eth_dev *dev __rte_unused)
 {
@@ -139,6 +160,7 @@ cpfl_dev_supported_ptypes_get(struct rte_eth_dev *dev __rte_unused)
 static int
 cpfl_dev_configure(struct rte_eth_dev *dev)
 {
+	struct idpf_vport *vport = dev->data->dev_private;
 	struct rte_eth_conf *conf = &dev->data->dev_conf;
 
 	if (conf->link_speeds & RTE_ETH_LINK_SPEED_FIXED) {
@@ -178,6 +200,10 @@ cpfl_dev_configure(struct rte_eth_dev *dev)
 		return -ENOTSUP;
 	}
 
+	vport->max_pkt_len =
+		(dev->data->mtu == 0) ? CPFL_DEFAULT_MTU : dev->data->mtu +
+		CPFL_ETH_OVERHEAD;
+
 	return 0;
 }
 
@@ -291,6 +317,7 @@ static const struct eth_dev_ops cpfl_eth_dev_ops = {
 	.tx_queue_stop			= cpfl_tx_queue_stop,
 	.rx_queue_release		= cpfl_dev_rx_queue_release,
 	.tx_queue_release		= cpfl_dev_tx_queue_release,
+	.mtu_set			= cpfl_dev_mtu_set,
 	.dev_supported_ptypes_get	= cpfl_dev_supported_ptypes_get,
 };
 
-- 
2.34.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v9 09/21] net/cpfl: support basic Rx data path
  2023-03-02 21:20             ` [PATCH v9 00/21] add support for cpfl PMD in DPDK Mingxia Liu
                                 ` (8 preceding siblings ...)
  2023-03-02 21:20               ` [PATCH v9 08/21] net/cpfl: support MTU configuration Mingxia Liu
@ 2023-03-02 21:20               ` Mingxia Liu
  2023-03-02 21:20               ` [PATCH v9 10/21] net/cpfl: support basic Tx " Mingxia Liu
                                 ` (11 subsequent siblings)
  21 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-03-02 21:20 UTC (permalink / raw)
  To: dev, beilei.xing, yuying.zhang; +Cc: Mingxia Liu

Add basic Rx support in split queue mode and single queue mode.
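
Whichever burst function cpfl_set_rx_function() installs, applications
consume it through rte_eth_rx_burst(); a minimal polling sketch (helper
name invented):

#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define BURST_SZ 32

/* Minimal Rx polling loop; rte_eth_rx_burst() dispatches to the
 * rx_pkt_burst callback selected by cpfl_set_rx_function(). */
static void
rx_loop(uint16_t port_id, uint16_t queue_id)
{
	struct rte_mbuf *pkts[BURST_SZ];
	uint16_t nb, i;

	for (;;) {
		nb = rte_eth_rx_burst(port_id, queue_id, pkts, BURST_SZ);
		for (i = 0; i < nb; i++)
			rte_pktmbuf_free(pkts[i]); /* replace with real work */
	}
}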

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/net/cpfl/cpfl_ethdev.c |  2 ++
 drivers/net/cpfl/cpfl_rxtx.c   | 18 ++++++++++++++++++
 drivers/net/cpfl/cpfl_rxtx.h   |  1 +
 3 files changed, 21 insertions(+)

diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index efc8b8f1e4..767612b11c 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -252,6 +252,8 @@ cpfl_dev_start(struct rte_eth_dev *dev)
 		return ret;
 	}
 
+	cpfl_set_rx_function(dev);
+
 	ret = idpf_vc_vport_ena_dis(vport, true);
 	if (ret != 0) {
 		PMD_DRV_LOG(ERR, "Failed to enable vport");
diff --git a/drivers/net/cpfl/cpfl_rxtx.c b/drivers/net/cpfl/cpfl_rxtx.c
index f9295c970f..a0a442f61d 100644
--- a/drivers/net/cpfl/cpfl_rxtx.c
+++ b/drivers/net/cpfl/cpfl_rxtx.c
@@ -734,3 +734,21 @@ cpfl_stop_queues(struct rte_eth_dev *dev)
 			PMD_DRV_LOG(WARNING, "Fail to stop Tx queue %d", i);
 	}
 }
+
+void
+cpfl_set_rx_function(struct rte_eth_dev *dev)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+
+	if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
+		PMD_DRV_LOG(NOTICE,
+			    "Using Split Scalar Rx (port %d).",
+			    dev->data->port_id);
+		dev->rx_pkt_burst = idpf_dp_splitq_recv_pkts;
+	} else {
+		PMD_DRV_LOG(NOTICE,
+			    "Using Single Scalar Rx (port %d).",
+			    dev->data->port_id);
+		dev->rx_pkt_burst = idpf_dp_singleq_recv_pkts;
+	}
+}
diff --git a/drivers/net/cpfl/cpfl_rxtx.h b/drivers/net/cpfl/cpfl_rxtx.h
index f5882401dc..a5dd388e1f 100644
--- a/drivers/net/cpfl/cpfl_rxtx.h
+++ b/drivers/net/cpfl/cpfl_rxtx.h
@@ -37,4 +37,5 @@ int cpfl_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id);
 int cpfl_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id);
 void cpfl_dev_tx_queue_release(struct rte_eth_dev *dev, uint16_t qid);
 void cpfl_dev_rx_queue_release(struct rte_eth_dev *dev, uint16_t qid);
+void cpfl_set_rx_function(struct rte_eth_dev *dev);
 #endif /* _CPFL_RXTX_H_ */
-- 
2.34.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v9 10/21] net/cpfl: support basic Tx data path
  2023-03-02 21:20             ` [PATCH v9 00/21] add support for cpfl PMD in DPDK Mingxia Liu
                                 ` (9 preceding siblings ...)
  2023-03-02 21:20               ` [PATCH v9 09/21] net/cpfl: support basic Rx data path Mingxia Liu
@ 2023-03-02 21:20               ` Mingxia Liu
  2023-03-02 21:20               ` [PATCH v9 11/21] net/cpfl: support write back based on ITR expire Mingxia Liu
                                 ` (10 subsequent siblings)
  21 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-03-02 21:20 UTC (permalink / raw)
  To: dev, beilei.xing, yuying.zhang; +Cc: Mingxia Liu

Add basic Tx support in split queue mode and single queue mode.
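
A minimal sketch of the application side (helper name invented);
rte_eth_tx_prepare() runs the idpf_dp_prep_pkts() hook installed here
before the actual transmit:

#include <rte_ethdev.h>
#include <rte_mbuf.h>

/* Sketch: prepare then transmit a burst; returns packets sent. */
static uint16_t
tx_send(uint16_t port_id, uint16_t queue_id,
	struct rte_mbuf **pkts, uint16_t nb)
{
	uint16_t nb_prep = rte_eth_tx_prepare(port_id, queue_id, pkts, nb);

	return rte_eth_tx_burst(port_id, queue_id, pkts, nb_prep);
}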

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/net/cpfl/cpfl_ethdev.c |  3 +++
 drivers/net/cpfl/cpfl_rxtx.c   | 20 ++++++++++++++++++++
 drivers/net/cpfl/cpfl_rxtx.h   |  1 +
 3 files changed, 24 insertions(+)

diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index 767612b11c..c3a6104dac 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -94,6 +94,8 @@ cpfl_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->max_mtu = vport->max_mtu;
 	dev_info->min_mtu = RTE_ETHER_MIN_MTU;
 
+	dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
+
 	dev_info->default_txconf = (struct rte_eth_txconf) {
 		.tx_free_thresh = CPFL_DEFAULT_TX_FREE_THRESH,
 		.tx_rs_thresh = CPFL_DEFAULT_TX_RS_THRESH,
@@ -253,6 +255,7 @@ cpfl_dev_start(struct rte_eth_dev *dev)
 	}
 
 	cpfl_set_rx_function(dev);
+	cpfl_set_tx_function(dev);
 
 	ret = idpf_vc_vport_ena_dis(vport, true);
 	if (ret != 0) {
diff --git a/drivers/net/cpfl/cpfl_rxtx.c b/drivers/net/cpfl/cpfl_rxtx.c
index a0a442f61d..520f61e07e 100644
--- a/drivers/net/cpfl/cpfl_rxtx.c
+++ b/drivers/net/cpfl/cpfl_rxtx.c
@@ -752,3 +752,23 @@ cpfl_set_rx_function(struct rte_eth_dev *dev)
 		dev->rx_pkt_burst = idpf_dp_singleq_recv_pkts;
 	}
 }
+
+void
+cpfl_set_tx_function(struct rte_eth_dev *dev)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+
+	if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
+		PMD_DRV_LOG(NOTICE,
+			    "Using Split Scalar Tx (port %d).",
+			    dev->data->port_id);
+		dev->tx_pkt_burst = idpf_dp_splitq_xmit_pkts;
+		dev->tx_pkt_prepare = idpf_dp_prep_pkts;
+	} else {
+		PMD_DRV_LOG(NOTICE,
+			    "Using Single Scalar Tx (port %d).",
+			    dev->data->port_id);
+		dev->tx_pkt_burst = idpf_dp_singleq_xmit_pkts;
+		dev->tx_pkt_prepare = idpf_dp_prep_pkts;
+	}
+}
diff --git a/drivers/net/cpfl/cpfl_rxtx.h b/drivers/net/cpfl/cpfl_rxtx.h
index a5dd388e1f..5f8144e55f 100644
--- a/drivers/net/cpfl/cpfl_rxtx.h
+++ b/drivers/net/cpfl/cpfl_rxtx.h
@@ -38,4 +38,5 @@ int cpfl_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id);
 void cpfl_dev_tx_queue_release(struct rte_eth_dev *dev, uint16_t qid);
 void cpfl_dev_rx_queue_release(struct rte_eth_dev *dev, uint16_t qid);
 void cpfl_set_rx_function(struct rte_eth_dev *dev);
+void cpfl_set_tx_function(struct rte_eth_dev *dev);
 #endif /* _CPFL_RXTX_H_ */
-- 
2.34.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v9 11/21] net/cpfl: support write back based on ITR expire
  2023-03-02 21:20             ` [PATCH v9 00/21] add support for cpfl PMD in DPDK Mingxia Liu
                                 ` (10 preceding siblings ...)
  2023-03-02 21:20               ` [PATCH v9 10/21] net/cpfl: support basic Tx " Mingxia Liu
@ 2023-03-02 21:20               ` Mingxia Liu
  2023-03-02 21:20               ` [PATCH v9 12/21] net/cpfl: support RSS Mingxia Liu
                                 ` (9 subsequent siblings)
  21 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-03-02 21:20 UTC (permalink / raw)
  To: dev, beilei.xing, yuying.zhang; +Cc: Mingxia Liu

ITR is the interval between two interrupts and can be understood
as a timer here. With WB_ON_ITR (write back on ITR expire), completed
descriptors are written back when the ITR expires rather than waiting
for an interrupt or a full cache line, so packets can be received one
by one.

To enable WB_ON_ITR, an interrupt must first be enabled with
'idpf_vport_irq_map_config()'.
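
A minimal sketch (the function name is invented, and cpfl_start_queues()
is file-local to cpfl_ethdev.c, so this is illustrative only) of the
enable ordering this patch establishes inside cpfl_dev_start(), with the
error path unwinding in reverse:

#include "cpfl_ethdev.h"

/* Illustrative ordering: vectors first, then IRQ mapping, then queues. */
static int
cpfl_enable_wb_on_itr_sketch(struct rte_eth_dev *dev, struct idpf_vport *vport)
{
	int ret;

	ret = idpf_vc_vectors_alloc(vport, CPFL_DFLT_Q_VEC_NUM);
	if (ret != 0)
		return ret;

	ret = idpf_vport_irq_map_config(vport, dev->data->nb_rx_queues);
	if (ret != 0)
		goto err_irq;

	ret = cpfl_start_queues(dev);
	if (ret != 0)
		goto err_startq;

	return 0;

err_startq:
	idpf_vport_irq_unmap_config(vport, dev->data->nb_rx_queues);
err_irq:
	idpf_vc_vectors_dealloc(vport);
	return ret;
}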

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/net/cpfl/cpfl_ethdev.c | 45 +++++++++++++++++++++++++++++++++-
 drivers/net/cpfl/cpfl_ethdev.h |  2 ++
 2 files changed, 46 insertions(+), 1 deletion(-)

diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index c3a6104dac..ef40ae08df 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -209,6 +209,15 @@ cpfl_dev_configure(struct rte_eth_dev *dev)
 	return 0;
 }
 
+static int
+cpfl_config_rx_queues_irqs(struct rte_eth_dev *dev)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	uint16_t nb_rx_queues = dev->data->nb_rx_queues;
+
+	return idpf_vport_irq_map_config(vport, nb_rx_queues);
+}
+
 static int
 cpfl_start_queues(struct rte_eth_dev *dev)
 {
@@ -246,12 +255,37 @@ static int
 cpfl_dev_start(struct rte_eth_dev *dev)
 {
 	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_adapter *base = vport->adapter;
+	struct cpfl_adapter_ext *adapter = CPFL_ADAPTER_TO_EXT(base);
+	uint16_t num_allocated_vectors = base->caps.num_allocated_vectors;
+	uint16_t req_vecs_num;
 	int ret;
 
+	req_vecs_num = CPFL_DFLT_Q_VEC_NUM;
+	if (req_vecs_num + adapter->used_vecs_num > num_allocated_vectors) {
+		PMD_DRV_LOG(ERR, "The accumulated number of requested vectors should be less than %d",
+			    num_allocated_vectors);
+		ret = -EINVAL;
+		goto err_vec;
+	}
+
+	ret = idpf_vc_vectors_alloc(vport, req_vecs_num);
+	if (ret != 0) {
+		PMD_DRV_LOG(ERR, "Failed to allocate interrupt vectors");
+		goto err_vec;
+	}
+	adapter->used_vecs_num += req_vecs_num;
+
+	ret = cpfl_config_rx_queues_irqs(dev);
+	if (ret != 0) {
+		PMD_DRV_LOG(ERR, "Failed to configure irqs");
+		goto err_irq;
+	}
+
 	ret = cpfl_start_queues(dev);
 	if (ret != 0) {
 		PMD_DRV_LOG(ERR, "Failed to start queues");
-		return ret;
+		goto err_startq;
 	}
 
 	cpfl_set_rx_function(dev);
@@ -269,6 +303,11 @@ cpfl_dev_start(struct rte_eth_dev *dev)
 
 err_vport:
 	cpfl_stop_queues(dev);
+err_startq:
+	idpf_vport_irq_unmap_config(vport, dev->data->nb_rx_queues);
+err_irq:
+	idpf_vc_vectors_dealloc(vport);
+err_vec:
 	return ret;
 }
 
@@ -284,6 +323,10 @@ cpfl_dev_stop(struct rte_eth_dev *dev)
 
 	cpfl_stop_queues(dev);
 
+	idpf_vport_irq_unmap_config(vport, dev->data->nb_rx_queues);
+
+	idpf_vc_vectors_dealloc(vport);
+
 	vport->stopped = 1;
 
 	return 0;
diff --git a/drivers/net/cpfl/cpfl_ethdev.h b/drivers/net/cpfl/cpfl_ethdev.h
index 9738e89ca8..4d1441ae64 100644
--- a/drivers/net/cpfl/cpfl_ethdev.h
+++ b/drivers/net/cpfl/cpfl_ethdev.h
@@ -25,6 +25,8 @@
 
 #define CPFL_INVALID_VPORT_IDX	0xffff
 
+#define CPFL_DFLT_Q_VEC_NUM	1
+
 #define CPFL_MIN_BUF_SIZE	1024
 #define CPFL_MAX_FRAME_SIZE	9728
 #define CPFL_DEFAULT_MTU	RTE_ETHER_MTU
-- 
2.34.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v9 12/21] net/cpfl: support RSS
  2023-03-02 21:20             ` [PATCH v9 00/21] add support for cpfl PMD in DPDK Mingxia Liu
                                 ` (11 preceding siblings ...)
  2023-03-02 21:20               ` [PATCH v9 11/21] net/cpfl: support write back based on ITR expire Mingxia Liu
@ 2023-03-02 21:20               ` Mingxia Liu
  2023-03-02 21:20               ` [PATCH v9 13/21] net/cpfl: support Rx offloading Mingxia Liu
                                 ` (8 subsequent siblings)
  21 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-03-02 21:20 UTC (permalink / raw)
  To: dev, beilei.xing, yuying.zhang; +Cc: Mingxia Liu

Add RSS support.
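
A hedged application-side sketch (helper name and hash selection are
illustrative only) of requesting RSS at configure time; passing a NULL
key lets cpfl_init_rss() generate a random one of vport->rss_key_size
bytes:

#include <rte_ethdev.h>

/* Sketch: enable RSS via mq_mode and rss_conf at configure time. */
static int
configure_rss(uint16_t port_id, uint16_t nb_rxq, uint16_t nb_txq)
{
	struct rte_eth_conf conf = {
		.rxmode = { .mq_mode = RTE_ETH_MQ_RX_RSS },
		.rx_adv_conf = {
			.rss_conf = {
				.rss_key = NULL, /* driver picks a key */
				.rss_hf = RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_IPV6,
			},
		},
	};

	return rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &conf);
}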

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/net/cpfl/cpfl_ethdev.c | 60 ++++++++++++++++++++++++++++++++++
 drivers/net/cpfl/cpfl_ethdev.h | 15 +++++++++
 2 files changed, 75 insertions(+)

diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index ef40ae08df..92a98e45d5 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -94,6 +94,8 @@ cpfl_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->max_mtu = vport->max_mtu;
 	dev_info->min_mtu = RTE_ETHER_MIN_MTU;
 
+	dev_info->flow_type_rss_offloads = CPFL_RSS_OFFLOAD_ALL;
+
 	dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 	dev_info->default_txconf = (struct rte_eth_txconf) {
@@ -159,11 +161,49 @@ cpfl_dev_supported_ptypes_get(struct rte_eth_dev *dev __rte_unused)
 	return ptypes;
 }
 
+static int
+cpfl_init_rss(struct idpf_vport *vport)
+{
+	struct rte_eth_rss_conf *rss_conf;
+	struct rte_eth_dev_data *dev_data;
+	uint16_t i, nb_q;
+	int ret = 0;
+
+	dev_data = vport->dev_data;
+	rss_conf = &dev_data->dev_conf.rx_adv_conf.rss_conf;
+	nb_q = dev_data->nb_rx_queues;
+
+	if (rss_conf->rss_key == NULL) {
+		for (i = 0; i < vport->rss_key_size; i++)
+			vport->rss_key[i] = (uint8_t)rte_rand();
+	} else if (rss_conf->rss_key_len != vport->rss_key_size) {
+		PMD_INIT_LOG(ERR, "Invalid RSS key length in RSS configuration, it should be %d",
+			     vport->rss_key_size);
+		return -EINVAL;
+	} else {
+		rte_memcpy(vport->rss_key, rss_conf->rss_key,
+			   vport->rss_key_size);
+	}
+
+	for (i = 0; i < vport->rss_lut_size; i++)
+		vport->rss_lut[i] = i % nb_q;
+
+	vport->rss_hf = IDPF_DEFAULT_RSS_HASH_EXPANDED;
+
+	ret = idpf_vport_rss_config(vport);
+	if (ret != 0)
+		PMD_INIT_LOG(ERR, "Failed to configure RSS");
+
+	return ret;
+}
+
 static int
 cpfl_dev_configure(struct rte_eth_dev *dev)
 {
 	struct idpf_vport *vport = dev->data->dev_private;
 	struct rte_eth_conf *conf = &dev->data->dev_conf;
+	struct idpf_adapter *base = vport->adapter;
+	int ret;
 
 	if (conf->link_speeds & RTE_ETH_LINK_SPEED_FIXED) {
 		PMD_INIT_LOG(ERR, "Setting link speed is not supported");
@@ -202,6 +242,26 @@ cpfl_dev_configure(struct rte_eth_dev *dev)
 		return -ENOTSUP;
 	}
 
+	if (conf->rxmode.mq_mode != RTE_ETH_MQ_RX_RSS &&
+	    conf->rxmode.mq_mode != RTE_ETH_MQ_RX_NONE) {
+		PMD_INIT_LOG(ERR, "RX mode %d is not supported.",
+			     conf->rxmode.mq_mode);
+		return -EINVAL;
+	}
+
+	if (base->caps.rss_caps != 0 && dev->data->nb_rx_queues != 0 &&
+		conf->rxmode.mq_mode == RTE_ETH_MQ_RX_RSS) {
+		ret = cpfl_init_rss(vport);
+		if (ret != 0) {
+			PMD_INIT_LOG(ERR, "Failed to init rss");
+			return ret;
+		}
+	} else {
+		PMD_INIT_LOG(ERR, "RSS is not supported.");
+		if (conf->rxmode.mq_mode == RTE_ETH_MQ_RX_RSS)
+			return -ENOTSUP;
+	}
+
 	vport->max_pkt_len =
 		(dev->data->mtu == 0) ? CPFL_DEFAULT_MTU : dev->data->mtu +
 		CPFL_ETH_OVERHEAD;
diff --git a/drivers/net/cpfl/cpfl_ethdev.h b/drivers/net/cpfl/cpfl_ethdev.h
index 4d1441ae64..200dfcac02 100644
--- a/drivers/net/cpfl/cpfl_ethdev.h
+++ b/drivers/net/cpfl/cpfl_ethdev.h
@@ -35,6 +35,21 @@
 #define CPFL_ETH_OVERHEAD \
 	(RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN + CPFL_VLAN_TAG_SIZE * 2)
 
+#define CPFL_RSS_OFFLOAD_ALL (				\
+		RTE_ETH_RSS_IPV4                |	\
+		RTE_ETH_RSS_FRAG_IPV4           |	\
+		RTE_ETH_RSS_NONFRAG_IPV4_TCP    |	\
+		RTE_ETH_RSS_NONFRAG_IPV4_UDP    |	\
+		RTE_ETH_RSS_NONFRAG_IPV4_SCTP   |	\
+		RTE_ETH_RSS_NONFRAG_IPV4_OTHER  |	\
+		RTE_ETH_RSS_IPV6                |	\
+		RTE_ETH_RSS_FRAG_IPV6           |	\
+		RTE_ETH_RSS_NONFRAG_IPV6_TCP    |	\
+		RTE_ETH_RSS_NONFRAG_IPV6_UDP    |	\
+		RTE_ETH_RSS_NONFRAG_IPV6_SCTP   |	\
+		RTE_ETH_RSS_NONFRAG_IPV6_OTHER  |	\
+		RTE_ETH_RSS_L2_PAYLOAD)
+
 #define CPFL_ADAPTER_NAME_LEN	(PCI_PRI_STR_SIZE + 1)
 
 #define CPFL_ALARM_INTERVAL	50000 /* us */
-- 
2.34.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v9 13/21] net/cpfl: support Rx offloading
  2023-03-02 21:20             ` [PATCH v9 00/21] add support for cpfl PMD in DPDK Mingxia Liu
                                 ` (12 preceding siblings ...)
  2023-03-02 21:20               ` [PATCH v9 12/21] net/cpfl: support RSS Mingxia Liu
@ 2023-03-02 21:20               ` Mingxia Liu
  2023-03-02 21:20               ` [PATCH v9 14/21] net/cpfl: support Tx offloading Mingxia Liu
                                 ` (7 subsequent siblings)
  21 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-03-02 21:20 UTC (permalink / raw)
  To: dev, beilei.xing, yuying.zhang; +Cc: Mingxia Liu

Add Rx offloading support:
 - support CHKSUM and RSS offload for split queue model
 - support CHKSUM offload for single queue model
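
A small sketch of consuming the offload: request the checksum flags in
rxmode.offloads at configure time, then read the per-packet verdict the
Rx path leaves in ol_flags (helper name invented):

#include <rte_ethdev.h>
#include <rte_mbuf.h>

/* Sketch: configure with e.g. RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
 * RTE_ETH_RX_OFFLOAD_TCP_CKSUM, then check each received mbuf. */
static inline int
rx_csum_ok(const struct rte_mbuf *m)
{
	return (m->ol_flags & RTE_MBUF_F_RX_IP_CKSUM_MASK) ==
			RTE_MBUF_F_RX_IP_CKSUM_GOOD &&
	       (m->ol_flags & RTE_MBUF_F_RX_L4_CKSUM_MASK) ==
			RTE_MBUF_F_RX_L4_CKSUM_GOOD;
}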

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 doc/guides/nics/features/cpfl.ini | 2 ++
 drivers/net/cpfl/cpfl_ethdev.c    | 6 ++++++
 2 files changed, 8 insertions(+)

diff --git a/doc/guides/nics/features/cpfl.ini b/doc/guides/nics/features/cpfl.ini
index 470ba81579..ee5948f444 100644
--- a/doc/guides/nics/features/cpfl.ini
+++ b/doc/guides/nics/features/cpfl.ini
@@ -8,6 +8,8 @@
 ;
 [Features]
 MTU update           = Y
+L3 checksum offload  = P
+L4 checksum offload  = P
 Linux                = Y
 x86-32               = Y
 x86-64               = Y
diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index 92a98e45d5..f80265865d 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -96,6 +96,12 @@ cpfl_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 
 	dev_info->flow_type_rss_offloads = CPFL_RSS_OFFLOAD_ALL;
 
+	dev_info->rx_offload_capa =
+		RTE_ETH_RX_OFFLOAD_IPV4_CKSUM           |
+		RTE_ETH_RX_OFFLOAD_UDP_CKSUM            |
+		RTE_ETH_RX_OFFLOAD_TCP_CKSUM            |
+		RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM;
+
 	dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 	dev_info->default_txconf = (struct rte_eth_txconf) {
-- 
2.34.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v9 14/21] net/cpfl: support Tx offloading
  2023-03-02 21:20             ` [PATCH v9 00/21] add support for cpfl PMD in DPDK Mingxia Liu
                                 ` (13 preceding siblings ...)
  2023-03-02 21:20               ` [PATCH v9 13/21] net/cpfl: support Rx offloading Mingxia Liu
@ 2023-03-02 21:20               ` Mingxia Liu
  2023-03-02 21:20               ` [PATCH v9 15/21] net/cpfl: add AVX512 data path for single queue model Mingxia Liu
                                 ` (6 subsequent siblings)
  21 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-03-02 21:20 UTC (permalink / raw)
  To: dev, beilei.xing, yuying.zhang; +Cc: Mingxia Liu

Add Tx offloading support:
 - support TSO for single queue model and split queue model.
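
A sketch of preparing one mbuf for TSO (the helper name and the fixed
header lengths are assumptions for a plain Ethernet/IPv4/TCP packet
without options):

#include <rte_mbuf.h>

/* Sketch: mark an mbuf for TCP segmentation offload. */
static void
request_tso(struct rte_mbuf *m, uint16_t mss)
{
	m->l2_len = 14;		/* Ethernet header */
	m->l3_len = 20;		/* IPv4 header */
	m->l4_len = 20;		/* TCP header */
	m->tso_segsz = mss;
	m->ol_flags |= RTE_MBUF_F_TX_TCP_SEG | RTE_MBUF_F_TX_IPV4 |
		       RTE_MBUF_F_TX_IP_CKSUM;
}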

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 doc/guides/nics/features/cpfl.ini | 1 +
 drivers/net/cpfl/cpfl_ethdev.c    | 8 +++++++-
 2 files changed, 8 insertions(+), 1 deletion(-)

diff --git a/doc/guides/nics/features/cpfl.ini b/doc/guides/nics/features/cpfl.ini
index ee5948f444..f4e45c7c68 100644
--- a/doc/guides/nics/features/cpfl.ini
+++ b/doc/guides/nics/features/cpfl.ini
@@ -8,6 +8,7 @@
 ;
 [Features]
 MTU update           = Y
+TSO                  = P
 L3 checksum offload  = P
 L4 checksum offload  = P
 Linux                = Y
diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index f80265865d..1167cdcef7 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -102,7 +102,13 @@ cpfl_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 		RTE_ETH_RX_OFFLOAD_TCP_CKSUM            |
 		RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM;
 
-	dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
+	dev_info->tx_offload_capa =
+		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM		|
+		RTE_ETH_TX_OFFLOAD_UDP_CKSUM		|
+		RTE_ETH_TX_OFFLOAD_TCP_CKSUM		|
+		RTE_ETH_TX_OFFLOAD_SCTP_CKSUM		|
+		RTE_ETH_TX_OFFLOAD_TCP_TSO		|
+		RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 	dev_info->default_txconf = (struct rte_eth_txconf) {
 		.tx_free_thresh = CPFL_DEFAULT_TX_FREE_THRESH,
-- 
2.34.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v9 15/21] net/cpfl: add AVX512 data path for single queue model
  2023-03-02 21:20             ` [PATCH v9 00/21] add support for cpfl PMD in DPDK Mingxia Liu
                                 ` (14 preceding siblings ...)
  2023-03-02 21:20               ` [PATCH v9 14/21] net/cpfl: support Tx offloading Mingxia Liu
@ 2023-03-02 21:20               ` Mingxia Liu
  2023-03-02 21:20               ` [PATCH v9 16/21] net/cpfl: support timestamp offload Mingxia Liu
                                 ` (5 subsequent siblings)
  21 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-03-02 21:20 UTC (permalink / raw)
  To: dev, beilei.xing, yuying.zhang; +Cc: Mingxia Liu, Wenjun Wu

Add AVX512 vector data path support for the single queue model.
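
A sketch (helper name invented) of the runtime conditions
cpfl_set_rx_function() evaluates; the bitwidth can be raised with the
--force-max-simd-bitwidth=512 EAL argument:

#include <rte_vect.h>
#include <rte_cpuflags.h>

/* Sketch: AVX512 paths need both the CPU flags and an EAL max SIMD
 * bitwidth of at least 512. */
static int
avx512_rx_possible(void)
{
	return rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_512 &&
	       rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512F) == 1 &&
	       rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512BW) == 1;
}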

Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 doc/guides/nics/cpfl.rst                |  24 +++++-
 drivers/net/cpfl/cpfl_ethdev.c          |   3 +-
 drivers/net/cpfl/cpfl_rxtx.c            |  93 ++++++++++++++++++++++
 drivers/net/cpfl/cpfl_rxtx_vec_common.h | 100 ++++++++++++++++++++++++
 drivers/net/cpfl/meson.build            |  25 +++++-
 5 files changed, 242 insertions(+), 3 deletions(-)
 create mode 100644 drivers/net/cpfl/cpfl_rxtx_vec_common.h

diff --git a/doc/guides/nics/cpfl.rst b/doc/guides/nics/cpfl.rst
index 253fa3afae..e2d71f8a4c 100644
--- a/doc/guides/nics/cpfl.rst
+++ b/doc/guides/nics/cpfl.rst
@@ -82,4 +82,26 @@ Runtime Config Options
 Driver compilation and testing
 ------------------------------
 
-Refer to the document :doc:`build_and_test` for details.
\ No newline at end of file
+Refer to the document :doc:`build_and_test` for details.
+
+Features
+--------
+
+Vector PMD
+~~~~~~~~~~
+
+Vector paths for Rx and Tx are selected automatically.
+The paths are chosen based on two conditions:
+
+- ``CPU``
+
+  On the x86 platform, the driver checks if the CPU supports AVX512.
+  If the CPU supports AVX512 and EAL argument ``--force-max-simd-bitwidth``
+  is set to 512, AVX512 paths will be chosen.
+
+- ``Offload features``
+
+  The supported HW offload features are described in the document cpfl.ini.
+  A value of "P" means the offload feature is not supported by the vector path.
+  If any unsupported features are used, the cpfl vector PMD is disabled
+  and the scalar paths are chosen.
diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index 1167cdcef7..e64cadfd38 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -108,7 +108,8 @@ cpfl_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 		RTE_ETH_TX_OFFLOAD_TCP_CKSUM		|
 		RTE_ETH_TX_OFFLOAD_SCTP_CKSUM		|
 		RTE_ETH_TX_OFFLOAD_TCP_TSO		|
-		RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
+		RTE_ETH_TX_OFFLOAD_MULTI_SEGS		|
+		RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 	dev_info->default_txconf = (struct rte_eth_txconf) {
 		.tx_free_thresh = CPFL_DEFAULT_TX_FREE_THRESH,
diff --git a/drivers/net/cpfl/cpfl_rxtx.c b/drivers/net/cpfl/cpfl_rxtx.c
index 520f61e07e..a3832acd4f 100644
--- a/drivers/net/cpfl/cpfl_rxtx.c
+++ b/drivers/net/cpfl/cpfl_rxtx.c
@@ -8,6 +8,7 @@
 
 #include "cpfl_ethdev.h"
 #include "cpfl_rxtx.h"
+#include "cpfl_rxtx_vec_common.h"
 
 static uint64_t
 cpfl_rx_offload_convert(uint64_t offload)
@@ -739,24 +740,96 @@ void
 cpfl_set_rx_function(struct rte_eth_dev *dev)
 {
 	struct idpf_vport *vport = dev->data->dev_private;
+#ifdef RTE_ARCH_X86
+	struct idpf_rx_queue *rxq;
+	int i;
+
+	if (cpfl_rx_vec_dev_check_default(dev) == CPFL_VECTOR_PATH &&
+	    rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_128) {
+		vport->rx_vec_allowed = true;
+
+		if (rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_512)
+#ifdef CC_AVX512_SUPPORT
+			if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512F) == 1 &&
+			    rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512BW) == 1)
+				vport->rx_use_avx512 = true;
+#else
+		PMD_DRV_LOG(NOTICE,
+			    "AVX512 is not supported in build env");
+#endif /* CC_AVX512_SUPPORT */
+	} else {
+		vport->rx_vec_allowed = false;
+	}
+#endif /* RTE_ARCH_X86 */
 
+#ifdef RTE_ARCH_X86
 	if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
 		PMD_DRV_LOG(NOTICE,
 			    "Using Split Scalar Rx (port %d).",
 			    dev->data->port_id);
 		dev->rx_pkt_burst = idpf_dp_splitq_recv_pkts;
 	} else {
+		if (vport->rx_vec_allowed) {
+			for (i = 0; i < dev->data->nb_rx_queues; i++) {
+				rxq = dev->data->rx_queues[i];
+				(void)idpf_qc_singleq_rx_vec_setup(rxq);
+			}
+#ifdef CC_AVX512_SUPPORT
+			if (vport->rx_use_avx512) {
+				PMD_DRV_LOG(NOTICE,
+					    "Using Single AVX512 Vector Rx (port %d).",
+					    dev->data->port_id);
+				dev->rx_pkt_burst = idpf_dp_singleq_recv_pkts_avx512;
+				return;
+			}
+#endif /* CC_AVX512_SUPPORT */
+		}
 		PMD_DRV_LOG(NOTICE,
 			    "Using Single Scalar Rx (port %d).",
 			    dev->data->port_id);
 		dev->rx_pkt_burst = idpf_dp_singleq_recv_pkts;
 	}
+#else
+	if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
+		PMD_DRV_LOG(NOTICE,
+			    "Using Split Scalar Rx (port %d).",
+			    dev->data->port_id);
+		dev->rx_pkt_burst = idpf_dp_splitq_recv_pkts;
+	} else {
+		PMD_DRV_LOG(NOTICE,
+			    "Using Single Scalar Rx (port %d).",
+			    dev->data->port_id);
+		dev->rx_pkt_burst = idpf_dp_singleq_recv_pkts;
+	}
+#endif /* RTE_ARCH_X86 */
 }
 
 void
 cpfl_set_tx_function(struct rte_eth_dev *dev)
 {
 	struct idpf_vport *vport = dev->data->dev_private;
+#ifdef RTE_ARCH_X86
+#ifdef CC_AVX512_SUPPORT
+	struct idpf_tx_queue *txq;
+	int i;
+#endif /* CC_AVX512_SUPPORT */
+
+	if (cpfl_tx_vec_dev_check_default(dev) == CPFL_VECTOR_PATH &&
+	    rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_128) {
+		vport->tx_vec_allowed = true;
+		if (rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_512)
+#ifdef CC_AVX512_SUPPORT
+			if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512F) == 1 &&
+			    rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512BW) == 1)
+				vport->tx_use_avx512 = true;
+#else
+		PMD_DRV_LOG(NOTICE,
+			    "AVX512 is not supported in build env");
+#endif /* CC_AVX512_SUPPORT */
+	} else {
+		vport->tx_vec_allowed = false;
+	}
+#endif /* RTE_ARCH_X86 */
 
 	if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
 		PMD_DRV_LOG(NOTICE,
@@ -765,6 +838,26 @@ cpfl_set_tx_function(struct rte_eth_dev *dev)
 		dev->tx_pkt_burst = idpf_dp_splitq_xmit_pkts;
 		dev->tx_pkt_prepare = idpf_dp_prep_pkts;
 	} else {
+#ifdef RTE_ARCH_X86
+		if (vport->tx_vec_allowed) {
+#ifdef CC_AVX512_SUPPORT
+			if (vport->tx_use_avx512) {
+				for (i = 0; i < dev->data->nb_tx_queues; i++) {
+					txq = dev->data->tx_queues[i];
+					if (txq == NULL)
+						continue;
+					idpf_qc_tx_vec_avx512_setup(txq);
+				}
+				PMD_DRV_LOG(NOTICE,
+					    "Using Single AVX512 Vector Tx (port %d).",
+					    dev->data->port_id);
+				dev->tx_pkt_burst = idpf_dp_singleq_xmit_pkts_avx512;
+				dev->tx_pkt_prepare = idpf_dp_prep_pkts;
+				return;
+			}
+#endif /* CC_AVX512_SUPPORT */
+		}
+#endif /* RTE_ARCH_X86 */
 		PMD_DRV_LOG(NOTICE,
 			    "Using Single Scalar Tx (port %d).",
 			    dev->data->port_id);
diff --git a/drivers/net/cpfl/cpfl_rxtx_vec_common.h b/drivers/net/cpfl/cpfl_rxtx_vec_common.h
new file mode 100644
index 0000000000..2d4c6a0ef3
--- /dev/null
+++ b/drivers/net/cpfl/cpfl_rxtx_vec_common.h
@@ -0,0 +1,100 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Intel Corporation
+ */
+
+#ifndef _CPFL_RXTX_VEC_COMMON_H_
+#define _CPFL_RXTX_VEC_COMMON_H_
+#include <stdint.h>
+#include <ethdev_driver.h>
+#include <rte_malloc.h>
+
+#include "cpfl_ethdev.h"
+#include "cpfl_rxtx.h"
+
+#ifndef __INTEL_COMPILER
+#pragma GCC diagnostic ignored "-Wcast-qual"
+#endif
+
+#define CPFL_SCALAR_PATH		0
+#define CPFL_VECTOR_PATH		1
+#define CPFL_RX_NO_VECTOR_FLAGS (		\
+		RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |	\
+		RTE_ETH_RX_OFFLOAD_UDP_CKSUM |	\
+		RTE_ETH_RX_OFFLOAD_TCP_CKSUM |	\
+		RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |	\
+		RTE_ETH_RX_OFFLOAD_TIMESTAMP)
+#define CPFL_TX_NO_VECTOR_FLAGS (		\
+		RTE_ETH_TX_OFFLOAD_TCP_TSO |	\
+		RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
+
+static inline int
+cpfl_rx_vec_queue_default(struct idpf_rx_queue *rxq)
+{
+	if (rxq == NULL)
+		return CPFL_SCALAR_PATH;
+
+	if (rte_is_power_of_2(rxq->nb_rx_desc) == 0)
+		return CPFL_SCALAR_PATH;
+
+	if (rxq->rx_free_thresh < IDPF_VPMD_RX_MAX_BURST)
+		return CPFL_SCALAR_PATH;
+
+	if ((rxq->nb_rx_desc % rxq->rx_free_thresh) != 0)
+		return CPFL_SCALAR_PATH;
+
+	if ((rxq->offloads & CPFL_RX_NO_VECTOR_FLAGS) != 0)
+		return CPFL_SCALAR_PATH;
+
+	return CPFL_VECTOR_PATH;
+}
+
+static inline int
+cpfl_tx_vec_queue_default(struct idpf_tx_queue *txq)
+{
+	if (txq == NULL)
+		return CPFL_SCALAR_PATH;
+
+	if (txq->rs_thresh < IDPF_VPMD_TX_MAX_BURST ||
+	    (txq->rs_thresh & 3) != 0)
+		return CPFL_SCALAR_PATH;
+
+	if ((txq->offloads & CPFL_TX_NO_VECTOR_FLAGS) != 0)
+		return CPFL_SCALAR_PATH;
+
+	return CPFL_VECTOR_PATH;
+}
+
+static inline int
+cpfl_rx_vec_dev_check_default(struct rte_eth_dev *dev)
+{
+	struct idpf_rx_queue *rxq;
+	int i, ret = 0;
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		rxq = dev->data->rx_queues[i];
+		ret = (cpfl_rx_vec_queue_default(rxq));
+		if (ret == CPFL_SCALAR_PATH)
+			return CPFL_SCALAR_PATH;
+	}
+
+	return CPFL_VECTOR_PATH;
+}
+
+static inline int
+cpfl_tx_vec_dev_check_default(struct rte_eth_dev *dev)
+{
+	int i;
+	struct idpf_tx_queue *txq;
+	int ret = 0;
+
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		txq = dev->data->tx_queues[i];
+		ret = cpfl_tx_vec_queue_default(txq);
+		if (ret == CPFL_SCALAR_PATH)
+			return CPFL_SCALAR_PATH;
+	}
+
+	return CPFL_VECTOR_PATH;
+}
+
+#endif /*_CPFL_RXTX_VEC_COMMON_H_*/
diff --git a/drivers/net/cpfl/meson.build b/drivers/net/cpfl/meson.build
index 1894423689..fbe6500826 100644
--- a/drivers/net/cpfl/meson.build
+++ b/drivers/net/cpfl/meson.build
@@ -7,9 +7,32 @@ if is_windows
     subdir_done()
 endif
 
+if dpdk_conf.get('RTE_IOVA_AS_PA') == 0
+    build = false
+    reason = 'driver does not support disabling IOVA as PA mode'
+    subdir_done()
+endif
+
 deps += ['common_idpf']
 
 sources = files(
         'cpfl_ethdev.c',
         'cpfl_rxtx.c',
-)
\ No newline at end of file
+)
+
+if arch_subdir == 'x86'
+    cpfl_avx512_cpu_support = (
+        cc.get_define('__AVX512F__', args: machine_args) != '' and
+        cc.get_define('__AVX512BW__', args: machine_args) != ''
+    )
+
+    cpfl_avx512_cc_support = (
+        not machine_args.contains('-mno-avx512f') and
+        cc.has_argument('-mavx512f') and
+        cc.has_argument('-mavx512bw')
+    )
+
+    if cpfl_avx512_cpu_support == true or cpfl_avx512_cc_support == true
+        cflags += ['-DCC_AVX512_SUPPORT']
+    endif
+endif
-- 
2.34.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v9 16/21] net/cpfl: support timestamp offload
  2023-03-02 21:20             ` [PATCH v9 00/21] add support for cpfl PMD in DPDK Mingxia Liu
                                 ` (15 preceding siblings ...)
  2023-03-02 21:20               ` [PATCH v9 15/21] net/cpfl: add AVX512 data path for single queue model Mingxia Liu
@ 2023-03-02 21:20               ` Mingxia Liu
  2023-03-02 21:20               ` [PATCH v9 17/21] net/cpfl: add AVX512 data path for split queue model Mingxia Liu
                                 ` (4 subsequent siblings)
  21 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-03-02 21:20 UTC (permalink / raw)
  To: dev, beilei.xing, yuying.zhang; +Cc: Mingxia Liu

Add support for timestamp offload.
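
A sketch (helper name invented) of reading the timestamp on the
application side; with this offload the value is delivered through the
dynamic mbuf timestamp field, so the field offset is looked up once.
Error handling for a missing registration is elided:

#include <rte_mbuf.h>
#include <rte_mbuf_dyn.h>

/* Sketch: fetch the Rx timestamp written by the driver when
 * RTE_ETH_RX_OFFLOAD_TIMESTAMP is enabled. */
static uint64_t
rx_timestamp(const struct rte_mbuf *m)
{
	static int ts_off = -1;

	if (ts_off < 0)
		ts_off = rte_mbuf_dynfield_lookup(
				RTE_MBUF_DYNFIELD_TIMESTAMP_NAME, NULL);
	return *RTE_MBUF_DYNFIELD(m, ts_off, rte_mbuf_timestamp_t *);
}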

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/net/cpfl/cpfl_ethdev.c | 3 ++-
 drivers/net/cpfl/cpfl_rxtx.c   | 7 +++++++
 2 files changed, 9 insertions(+), 1 deletion(-)

diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index e64cadfd38..ae716d104c 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -100,7 +100,8 @@ cpfl_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 		RTE_ETH_RX_OFFLOAD_IPV4_CKSUM           |
 		RTE_ETH_RX_OFFLOAD_UDP_CKSUM            |
 		RTE_ETH_RX_OFFLOAD_TCP_CKSUM            |
-		RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM;
+		RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM     |
+		RTE_ETH_RX_OFFLOAD_TIMESTAMP;
 
 	dev_info->tx_offload_capa =
 		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM		|
diff --git a/drivers/net/cpfl/cpfl_rxtx.c b/drivers/net/cpfl/cpfl_rxtx.c
index a3832acd4f..ea28d3978c 100644
--- a/drivers/net/cpfl/cpfl_rxtx.c
+++ b/drivers/net/cpfl/cpfl_rxtx.c
@@ -516,6 +516,13 @@ cpfl_rx_queue_init(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 		return -EINVAL;
 	}
 
+	err = idpf_qc_ts_mbuf_register(rxq);
+	if (err != 0) {
+		PMD_DRV_LOG(ERR, "Failed to register timestamp mbuf %u",
+			    rx_queue_id);
+		return -EIO;
+	}
+
 	if (rxq->adapter->is_rx_singleq) {
 		/* Single queue */
 		err = idpf_qc_single_rxq_mbufs_alloc(rxq);
-- 
2.34.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v9 17/21] net/cpfl: add AVX512 data path for split queue model
  2023-03-02 21:20             ` [PATCH v9 00/21] add support for cpfl PMD in DPDK Mingxia Liu
                                 ` (16 preceding siblings ...)
  2023-03-02 21:20               ` [PATCH v9 16/21] net/cpfl: support timestamp offload Mingxia Liu
@ 2023-03-02 21:20               ` Mingxia Liu
  2023-03-02 21:20               ` [PATCH v9 18/21] net/cpfl: add HW statistics Mingxia Liu
                                 ` (3 subsequent siblings)
  21 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-03-02 21:20 UTC (permalink / raw)
  To: dev, beilei.xing, yuying.zhang; +Cc: Mingxia Liu, Wenjun Wu

Add AVX512 data path support for the split queue model.

Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/net/cpfl/cpfl_rxtx.c            | 56 +++++++++++++++++++++++--
 drivers/net/cpfl/cpfl_rxtx_vec_common.h | 20 ++++++++-
 drivers/net/cpfl/meson.build            |  6 ++-
 3 files changed, 75 insertions(+), 7 deletions(-)

diff --git a/drivers/net/cpfl/cpfl_rxtx.c b/drivers/net/cpfl/cpfl_rxtx.c
index ea28d3978c..dac95579f5 100644
--- a/drivers/net/cpfl/cpfl_rxtx.c
+++ b/drivers/net/cpfl/cpfl_rxtx.c
@@ -758,7 +758,8 @@ cpfl_set_rx_function(struct rte_eth_dev *dev)
 		if (rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_512)
 #ifdef CC_AVX512_SUPPORT
 			if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512F) == 1 &&
-			    rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512BW) == 1)
+			    rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512BW) == 1 &&
+			    rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512DQ))
 				vport->rx_use_avx512 = true;
 #else
 		PMD_DRV_LOG(NOTICE,
@@ -771,6 +772,21 @@ cpfl_set_rx_function(struct rte_eth_dev *dev)
 
 #ifdef RTE_ARCH_X86
 	if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
+		if (vport->rx_vec_allowed) {
+			for (i = 0; i < dev->data->nb_rx_queues; i++) {
+				rxq = dev->data->rx_queues[i];
+				(void)idpf_qc_splitq_rx_vec_setup(rxq);
+			}
+#ifdef CC_AVX512_SUPPORT
+			if (vport->rx_use_avx512) {
+				PMD_DRV_LOG(NOTICE,
+					    "Using Split AVX512 Vector Rx (port %d).",
+					    dev->data->port_id);
+				dev->rx_pkt_burst = idpf_dp_splitq_recv_pkts_avx512;
+				return;
+			}
+#endif /* CC_AVX512_SUPPORT */
+		}
 		PMD_DRV_LOG(NOTICE,
 			    "Using Split Scalar Rx (port %d).",
 			    dev->data->port_id);
@@ -826,9 +842,17 @@ cpfl_set_tx_function(struct rte_eth_dev *dev)
 		vport->tx_vec_allowed = true;
 		if (rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_512)
 #ifdef CC_AVX512_SUPPORT
+		{
 			if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512F) == 1 &&
 			    rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512BW) == 1)
 				vport->tx_use_avx512 = true;
+			if (vport->tx_use_avx512) {
+				for (i = 0; i < dev->data->nb_tx_queues; i++) {
+					txq = dev->data->tx_queues[i];
+					idpf_qc_tx_vec_avx512_setup(txq);
+				}
+			}
+		}
 #else
 		PMD_DRV_LOG(NOTICE,
 			    "AVX512 is not supported in build env");
@@ -838,14 +862,26 @@ cpfl_set_tx_function(struct rte_eth_dev *dev)
 	}
 #endif /* RTE_ARCH_X86 */
 
+#ifdef RTE_ARCH_X86
 	if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
+		if (vport->tx_vec_allowed) {
+#ifdef CC_AVX512_SUPPORT
+			if (vport->tx_use_avx512) {
+				PMD_DRV_LOG(NOTICE,
+					    "Using Split AVX512 Vector Tx (port %d).",
+					    dev->data->port_id);
+				dev->tx_pkt_burst = idpf_dp_splitq_xmit_pkts_avx512;
+				dev->tx_pkt_prepare = idpf_dp_prep_pkts;
+				return;
+			}
+#endif /* CC_AVX512_SUPPORT */
+		}
 		PMD_DRV_LOG(NOTICE,
 			    "Using Split Scalar Tx (port %d).",
 			    dev->data->port_id);
 		dev->tx_pkt_burst = idpf_dp_splitq_xmit_pkts;
 		dev->tx_pkt_prepare = idpf_dp_prep_pkts;
 	} else {
-#ifdef RTE_ARCH_X86
 		if (vport->tx_vec_allowed) {
 #ifdef CC_AVX512_SUPPORT
 			if (vport->tx_use_avx512) {
@@ -864,11 +900,25 @@ cpfl_set_tx_function(struct rte_eth_dev *dev)
 			}
 #endif /* CC_AVX512_SUPPORT */
 		}
-#endif /* RTE_ARCH_X86 */
 		PMD_DRV_LOG(NOTICE,
 			    "Using Single Scalar Tx (port %d).",
 			    dev->data->port_id);
 		dev->tx_pkt_burst = idpf_dp_singleq_xmit_pkts;
 		dev->tx_pkt_prepare = idpf_dp_prep_pkts;
 	}
+#else
+	if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
+		PMD_DRV_LOG(NOTICE,
+			    "Using Split Scalar Tx (port %d).",
+			    dev->data->port_id);
+		dev->tx_pkt_burst = idpf_dp_splitq_xmit_pkts;
+		dev->tx_pkt_prepare = idpf_dp_prep_pkts;
+	} else {
+		PMD_DRV_LOG(NOTICE,
+			    "Using Single Scalar Tx (port %d).",
+			    dev->data->port_id);
+		dev->tx_pkt_burst = idpf_dp_singleq_xmit_pkts;
+		dev->tx_pkt_prepare = idpf_dp_prep_pkts;
+	}
+#endif /* RTE_ARCH_X86 */
 }
diff --git a/drivers/net/cpfl/cpfl_rxtx_vec_common.h b/drivers/net/cpfl/cpfl_rxtx_vec_common.h
index 2d4c6a0ef3..665418d27d 100644
--- a/drivers/net/cpfl/cpfl_rxtx_vec_common.h
+++ b/drivers/net/cpfl/cpfl_rxtx_vec_common.h
@@ -64,15 +64,31 @@ cpfl_tx_vec_queue_default(struct idpf_tx_queue *txq)
 	return CPFL_VECTOR_PATH;
 }
 
+static inline int
+cpfl_rx_splitq_vec_default(struct idpf_rx_queue *rxq)
+{
+	if (rxq->bufq2->rx_buf_len < rxq->max_pkt_len)
+		return CPFL_SCALAR_PATH;
+
+	return CPFL_VECTOR_PATH;
+}
+
 static inline int
 cpfl_rx_vec_dev_check_default(struct rte_eth_dev *dev)
 {
+	struct idpf_vport *vport = dev->data->dev_private;
 	struct idpf_rx_queue *rxq;
-	int i, ret = 0;
+	int i, default_ret, splitq_ret, ret = CPFL_SCALAR_PATH;
 
 	for (i = 0; i < dev->data->nb_rx_queues; i++) {
 		rxq = dev->data->rx_queues[i];
-		ret = (cpfl_rx_vec_queue_default(rxq));
+		default_ret = cpfl_rx_vec_queue_default(rxq);
+		if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
+			splitq_ret = cpfl_rx_splitq_vec_default(rxq);
+			ret = splitq_ret && default_ret;
+		} else {
+			ret = default_ret;
+		}
 		if (ret == CPFL_SCALAR_PATH)
 			return CPFL_SCALAR_PATH;
 	}
diff --git a/drivers/net/cpfl/meson.build b/drivers/net/cpfl/meson.build
index fbe6500826..2cf69258e2 100644
--- a/drivers/net/cpfl/meson.build
+++ b/drivers/net/cpfl/meson.build
@@ -23,13 +23,15 @@ sources = files(
 if arch_subdir == 'x86'
     cpfl_avx512_cpu_support = (
         cc.get_define('__AVX512F__', args: machine_args) != '' and
-        cc.get_define('__AVX512BW__', args: machine_args) != ''
+        cc.get_define('__AVX512BW__', args: machine_args) != '' and
+        cc.get_define('__AVX512DQ__', args: machine_args) != ''
     )
 
     cpfl_avx512_cc_support = (
         not machine_args.contains('-mno-avx512f') and
         cc.has_argument('-mavx512f') and
-        cc.has_argument('-mavx512bw')
+        cc.has_argument('-mavx512bw') and
+        cc.has_argument('-mavx512dq')
     )
 
     if cpfl_avx512_cpu_support == true or cpfl_avx512_cc_support == true
-- 
2.34.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v9 18/21] net/cpfl: add HW statistics
  2023-03-02 21:20             ` [PATCH v9 00/21] add support for cpfl PMD in DPDK Mingxia Liu
                                 ` (17 preceding siblings ...)
  2023-03-02 21:20               ` [PATCH v9 17/21] net/cpfl: add AVX512 data path for split queue model Mingxia Liu
@ 2023-03-02 21:20               ` Mingxia Liu
  2023-03-02 21:20               ` [PATCH v9 19/21] net/cpfl: add RSS set/get ops Mingxia Liu
                                 ` (2 subsequent siblings)
  21 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-03-02 21:20 UTC (permalink / raw)
  To: dev, beilei.xing, yuying.zhang; +Cc: Mingxia Liu

This patch adds hardware packet/byte statistics.
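
A minimal usage sketch (helper name invented); these generic calls land
in the new stats ops, which query the control plane via
idpf_vc_stats_query():

#include <stdio.h>
#include <inttypes.h>
#include <rte_ethdev.h>

/* Sketch: dump a few counters, then clear the statistics baseline. */
static void
dump_and_reset_stats(uint16_t port_id)
{
	struct rte_eth_stats st;

	if (rte_eth_stats_get(port_id, &st) == 0)
		printf("ipackets=%" PRIu64 " opackets=%" PRIu64
		       " rx_nombuf=%" PRIu64 "\n",
		       st.ipackets, st.opackets, st.rx_nombuf);

	rte_eth_stats_reset(port_id);
}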

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/net/cpfl/cpfl_ethdev.c | 87 ++++++++++++++++++++++++++++++++++
 1 file changed, 87 insertions(+)

diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index ae716d104c..4970020139 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -175,6 +175,88 @@ cpfl_dev_supported_ptypes_get(struct rte_eth_dev *dev __rte_unused)
 	return ptypes;
 }
 
+static uint64_t
+cpfl_get_mbuf_alloc_failed_stats(struct rte_eth_dev *dev)
+{
+	uint64_t mbuf_alloc_failed = 0;
+	struct idpf_rx_queue *rxq;
+	int i = 0;
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		rxq = dev->data->rx_queues[i];
+		mbuf_alloc_failed += __atomic_load_n(&rxq->rx_stats.mbuf_alloc_failed,
+						     __ATOMIC_RELAXED);
+	}
+
+	return mbuf_alloc_failed;
+}
+
+static int
+cpfl_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
+{
+	struct idpf_vport *vport =
+		(struct idpf_vport *)dev->data->dev_private;
+	struct virtchnl2_vport_stats *pstats = NULL;
+	int ret;
+
+	ret = idpf_vc_stats_query(vport, &pstats);
+	if (ret == 0) {
+		uint8_t crc_stats_len = (dev->data->dev_conf.rxmode.offloads &
+					 RTE_ETH_RX_OFFLOAD_KEEP_CRC) ? 0 :
+					 RTE_ETHER_CRC_LEN;
+
+		idpf_vport_stats_update(&vport->eth_stats_offset, pstats);
+		stats->ipackets = pstats->rx_unicast + pstats->rx_multicast +
+				pstats->rx_broadcast - pstats->rx_discards;
+		stats->opackets = pstats->tx_broadcast + pstats->tx_multicast +
+						pstats->tx_unicast;
+		stats->imissed = pstats->rx_discards;
+		stats->ierrors = pstats->rx_errors;
+		stats->oerrors = pstats->tx_errors + pstats->tx_discards;
+		stats->ibytes = pstats->rx_bytes;
+		stats->ibytes -= stats->ipackets * crc_stats_len;
+		stats->obytes = pstats->tx_bytes;
+
+		dev->data->rx_mbuf_alloc_failed = cpfl_get_mbuf_alloc_failed_stats(dev);
+		stats->rx_nombuf = dev->data->rx_mbuf_alloc_failed;
+	} else {
+		PMD_DRV_LOG(ERR, "Get statistics failed");
+	}
+	return ret;
+}
+
+static void
+cpfl_reset_mbuf_alloc_failed_stats(struct rte_eth_dev *dev)
+{
+	struct idpf_rx_queue *rxq;
+	int i;
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		rxq = dev->data->rx_queues[i];
+		__atomic_store_n(&rxq->rx_stats.mbuf_alloc_failed, 0, __ATOMIC_RELAXED);
+	}
+}
+
+static int
+cpfl_dev_stats_reset(struct rte_eth_dev *dev)
+{
+	struct idpf_vport *vport =
+		(struct idpf_vport *)dev->data->dev_private;
+	struct virtchnl2_vport_stats *pstats = NULL;
+	int ret;
+
+	ret = idpf_vc_stats_query(vport, &pstats);
+	if (ret != 0)
+		return ret;
+
+	/* set stats offset based on current values */
+	vport->eth_stats_offset = *pstats;
+
+	cpfl_reset_mbuf_alloc_failed_stats(dev);
+
+	return 0;
+}
+
 static int
 cpfl_init_rss(struct idpf_vport *vport)
 {
@@ -371,6 +453,9 @@ cpfl_dev_start(struct rte_eth_dev *dev)
 		goto err_vport;
 	}
 
+	if (cpfl_dev_stats_reset(dev))
+		PMD_DRV_LOG(ERR, "Failed to reset stats");
+
 	vport->stopped = 0;
 
 	return 0;
@@ -441,6 +526,8 @@ static const struct eth_dev_ops cpfl_eth_dev_ops = {
 	.tx_queue_release		= cpfl_dev_tx_queue_release,
 	.mtu_set			= cpfl_dev_mtu_set,
 	.dev_supported_ptypes_get	= cpfl_dev_supported_ptypes_get,
+	.stats_get			= cpfl_dev_stats_get,
+	.stats_reset			= cpfl_dev_stats_reset,
 };
 
 static int
-- 
2.34.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v9 19/21] net/cpfl: add RSS set/get ops
  2023-03-02 21:20             ` [PATCH v9 00/21] add support for cpfl PMD in DPDK Mingxia Liu
                                 ` (18 preceding siblings ...)
  2023-03-02 21:20               ` [PATCH v9 18/21] net/cpfl: add HW statistics Mingxia Liu
@ 2023-03-02 21:20               ` Mingxia Liu
  2023-03-02 21:20               ` [PATCH v9 20/21] net/cpfl: support scalar scatter Rx datapath for single queue model Mingxia Liu
  2023-03-02 21:20               ` [PATCH v9 21/21] net/cpfl: add xstats ops Mingxia Liu
  21 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-03-02 21:20 UTC (permalink / raw)
  To: dev, beilei.xing, yuying.zhang; +Cc: Mingxia Liu

Add support for these device ops:
- rss_reta_update
- rss_reta_query
- rss_hash_update
- rss_hash_conf_get
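
A hedged sketch (helper name invented, and it assumes reta_size is a
multiple of RTE_ETH_RETA_GROUP_SIZE) of programming a round-robin
redirection table through the new op:

#include <string.h>
#include <rte_ethdev.h>

/* Sketch: reta_size must equal dev_info.reta_size (the vport's RSS
 * LUT size) or the op rejects it. */
static int
reta_round_robin(uint16_t port_id, uint16_t reta_size, uint16_t nb_queues)
{
	struct rte_eth_rss_reta_entry64 reta[reta_size / RTE_ETH_RETA_GROUP_SIZE];
	uint16_t i;

	memset(reta, 0, sizeof(reta));
	for (i = 0; i < reta_size; i++) {
		reta[i / RTE_ETH_RETA_GROUP_SIZE].mask |=
			1ULL << (i % RTE_ETH_RETA_GROUP_SIZE);
		reta[i / RTE_ETH_RETA_GROUP_SIZE].reta[i % RTE_ETH_RETA_GROUP_SIZE] =
			i % nb_queues;
	}

	return rte_eth_dev_rss_reta_update(port_id, reta, reta_size);
}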

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/net/cpfl/cpfl_ethdev.c | 271 ++++++++++++++++++++++++++++++++-
 1 file changed, 270 insertions(+), 1 deletion(-)

diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index 4970020139..3341e37afa 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -47,6 +47,57 @@ uint32_t cpfl_supported_speeds[] = {
 	RTE_ETH_SPEED_NUM_200G
 };
 
+static const uint64_t cpfl_map_hena_rss[] = {
+	[IDPF_HASH_NONF_UNICAST_IPV4_UDP] =
+			RTE_ETH_RSS_NONFRAG_IPV4_UDP,
+	[IDPF_HASH_NONF_MULTICAST_IPV4_UDP] =
+			RTE_ETH_RSS_NONFRAG_IPV4_UDP,
+	[IDPF_HASH_NONF_IPV4_UDP] =
+			RTE_ETH_RSS_NONFRAG_IPV4_UDP,
+	[IDPF_HASH_NONF_IPV4_TCP_SYN_NO_ACK] =
+			RTE_ETH_RSS_NONFRAG_IPV4_TCP,
+	[IDPF_HASH_NONF_IPV4_TCP] =
+			RTE_ETH_RSS_NONFRAG_IPV4_TCP,
+	[IDPF_HASH_NONF_IPV4_SCTP] =
+			RTE_ETH_RSS_NONFRAG_IPV4_SCTP,
+	[IDPF_HASH_NONF_IPV4_OTHER] =
+			RTE_ETH_RSS_NONFRAG_IPV4_OTHER,
+	[IDPF_HASH_FRAG_IPV4] = RTE_ETH_RSS_FRAG_IPV4,
+
+	/* IPv6 */
+	[IDPF_HASH_NONF_UNICAST_IPV6_UDP] =
+			RTE_ETH_RSS_NONFRAG_IPV6_UDP,
+	[IDPF_HASH_NONF_MULTICAST_IPV6_UDP] =
+			RTE_ETH_RSS_NONFRAG_IPV6_UDP,
+	[IDPF_HASH_NONF_IPV6_UDP] =
+			RTE_ETH_RSS_NONFRAG_IPV6_UDP,
+	[IDPF_HASH_NONF_IPV6_TCP_SYN_NO_ACK] =
+			RTE_ETH_RSS_NONFRAG_IPV6_TCP,
+	[IDPF_HASH_NONF_IPV6_TCP] =
+			RTE_ETH_RSS_NONFRAG_IPV6_TCP,
+	[IDPF_HASH_NONF_IPV6_SCTP] =
+			RTE_ETH_RSS_NONFRAG_IPV6_SCTP,
+	[IDPF_HASH_NONF_IPV6_OTHER] =
+			RTE_ETH_RSS_NONFRAG_IPV6_OTHER,
+	[IDPF_HASH_FRAG_IPV6] = RTE_ETH_RSS_FRAG_IPV6,
+
+	/* L2 Payload */
+	[IDPF_HASH_L2_PAYLOAD] = RTE_ETH_RSS_L2_PAYLOAD
+};
+
+static const uint64_t cpfl_ipv4_rss = RTE_ETH_RSS_NONFRAG_IPV4_UDP |
+			  RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+			  RTE_ETH_RSS_NONFRAG_IPV4_SCTP |
+			  RTE_ETH_RSS_NONFRAG_IPV4_OTHER |
+			  RTE_ETH_RSS_FRAG_IPV4;
+
+static const uint64_t cpfl_ipv6_rss = RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+			  RTE_ETH_RSS_NONFRAG_IPV6_TCP |
+			  RTE_ETH_RSS_NONFRAG_IPV6_SCTP |
+			  RTE_ETH_RSS_NONFRAG_IPV6_OTHER |
+			  RTE_ETH_RSS_FRAG_IPV6;
+
+
 static int
 cpfl_dev_link_update(struct rte_eth_dev *dev,
 		     __rte_unused int wait_to_complete)
@@ -94,6 +145,9 @@ cpfl_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->max_mtu = vport->max_mtu;
 	dev_info->min_mtu = RTE_ETHER_MIN_MTU;
 
+	dev_info->hash_key_size = vport->rss_key_size;
+	dev_info->reta_size = vport->rss_lut_size;
+
 	dev_info->flow_type_rss_offloads = CPFL_RSS_OFFLOAD_ALL;
 
 	dev_info->rx_offload_capa =
@@ -257,6 +311,36 @@ cpfl_dev_stats_reset(struct rte_eth_dev *dev)
 	return 0;
 }
 
+static int cpfl_config_rss_hf(struct idpf_vport *vport, uint64_t rss_hf)
+{
+	uint64_t hena = 0;
+	uint16_t i;
+
+	/*
+	 * RTE_ETH_RSS_IPV4 and RTE_ETH_RSS_IPV6 can be considered
+	 * generalizations of all other IPv4 and IPv6 RSS types.
+	 */
+	if (rss_hf & RTE_ETH_RSS_IPV4)
+		rss_hf |= cpfl_ipv4_rss;
+
+	if (rss_hf & RTE_ETH_RSS_IPV6)
+		rss_hf |= cpfl_ipv6_rss;
+
+	for (i = 0; i < RTE_DIM(cpfl_map_hena_rss); i++) {
+		if (cpfl_map_hena_rss[i] & rss_hf)
+			hena |= BIT_ULL(i);
+	}
+
+	/*
+	 * At present, the CP doesn't process the virtual channel message for
+	 * rss_hf configuration, so only a warning is logged below.
+	 */
+	if (hena != vport->rss_hf)
+		PMD_DRV_LOG(WARNING, "Updating RSS Hash Function is not supported at present.");
+
+	return 0;
+}
+
 static int
 cpfl_init_rss(struct idpf_vport *vport)
 {
@@ -277,7 +361,7 @@ cpfl_init_rss(struct idpf_vport *vport)
 			     vport->rss_key_size);
 		return -EINVAL;
 	} else {
-		rte_memcpy(vport->rss_key, rss_conf->rss_key,
+		memcpy(vport->rss_key, rss_conf->rss_key,
 			   vport->rss_key_size);
 	}
 
@@ -293,6 +377,187 @@ cpfl_init_rss(struct idpf_vport *vport)
 	return ret;
 }
 
+static int
+cpfl_rss_reta_update(struct rte_eth_dev *dev,
+		     struct rte_eth_rss_reta_entry64 *reta_conf,
+		     uint16_t reta_size)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_adapter *base = vport->adapter;
+	uint16_t idx, shift;
+	int ret = 0;
+	uint16_t i;
+
+	if (base->caps.rss_caps == 0 || dev->data->nb_rx_queues == 0) {
+		PMD_DRV_LOG(DEBUG, "RSS is not supported");
+		return -ENOTSUP;
+	}
+
+	if (reta_size != vport->rss_lut_size) {
+		PMD_DRV_LOG(ERR, "The size of the configured hash lookup table "
+				 "(%d) doesn't match the number the hardware can "
+				 "support (%d)",
+			    reta_size, vport->rss_lut_size);
+		return -EINVAL;
+	}
+
+	for (i = 0; i < reta_size; i++) {
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
+		if (reta_conf[idx].mask & (1ULL << shift))
+			vport->rss_lut[i] = reta_conf[idx].reta[shift];
+	}
+
+	/* send virtchnl ops to configure RSS */
+	ret = idpf_vc_rss_lut_set(vport);
+	if (ret)
+		PMD_INIT_LOG(ERR, "Failed to configure RSS lut");
+
+	return ret;
+}
+
+static int
+cpfl_rss_reta_query(struct rte_eth_dev *dev,
+		    struct rte_eth_rss_reta_entry64 *reta_conf,
+		    uint16_t reta_size)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_adapter *base = vport->adapter;
+	uint16_t idx, shift;
+	int ret = 0;
+	uint16_t i;
+
+	if (base->caps.rss_caps == 0 || dev->data->nb_rx_queues == 0) {
+		PMD_DRV_LOG(DEBUG, "RSS is not supported");
+		return -ENOTSUP;
+	}
+
+	if (reta_size != vport->rss_lut_size) {
+		PMD_DRV_LOG(ERR, "The size of the configured hash lookup table "
+			"(%d) doesn't match the number the hardware can "
+			"support (%d)", reta_size, vport->rss_lut_size);
+		return -EINVAL;
+	}
+
+	ret = idpf_vc_rss_lut_get(vport);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "Failed to get RSS LUT");
+		return ret;
+	}
+
+	for (i = 0; i < reta_size; i++) {
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
+		if (reta_conf[idx].mask & (1ULL << shift))
+			reta_conf[idx].reta[shift] = vport->rss_lut[i];
+	}
+
+	return 0;
+}
+
+static int
+cpfl_rss_hash_update(struct rte_eth_dev *dev,
+		     struct rte_eth_rss_conf *rss_conf)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_adapter *base = vport->adapter;
+	int ret = 0;
+
+	if (base->caps.rss_caps == 0 || dev->data->nb_rx_queues == 0) {
+		PMD_DRV_LOG(DEBUG, "RSS is not supported");
+		return -ENOTSUP;
+	}
+
+	if (!rss_conf->rss_key || rss_conf->rss_key_len == 0) {
+		PMD_DRV_LOG(DEBUG, "No key to be configured");
+		goto skip_rss_key;
+	} else if (rss_conf->rss_key_len != vport->rss_key_size) {
+		PMD_DRV_LOG(ERR, "The size of the configured hash key "
+				 "(%d) doesn't match the size the hardware can "
+				 "support (%d)",
+			    rss_conf->rss_key_len,
+			    vport->rss_key_size);
+		return -EINVAL;
+	}
+
+	memcpy(vport->rss_key, rss_conf->rss_key,
+		   vport->rss_key_size);
+	ret = idpf_vc_rss_key_set(vport);
+	if (ret != 0) {
+		PMD_INIT_LOG(ERR, "Failed to configure RSS key");
+		return ret;
+	}
+
+skip_rss_key:
+	ret = cpfl_config_rss_hf(vport, rss_conf->rss_hf);
+	if (ret != 0) {
+		PMD_INIT_LOG(ERR, "Failed to configure RSS hash");
+		return ret;
+	}
+
+	return 0;
+}
+
+static uint64_t
+cpfl_map_general_rss_hf(uint64_t config_rss_hf, uint64_t last_general_rss_hf)
+{
+	uint64_t valid_rss_hf = 0;
+	uint16_t i;
+
+	for (i = 0; i < RTE_DIM(cpfl_map_hena_rss); i++) {
+		uint64_t bit = BIT_ULL(i);
+
+		if (bit & config_rss_hf)
+			valid_rss_hf |= cpfl_map_hena_rss[i];
+	}
+
+	if (valid_rss_hf & cpfl_ipv4_rss)
+		valid_rss_hf |= last_general_rss_hf & RTE_ETH_RSS_IPV4;
+
+	if (valid_rss_hf & cpfl_ipv6_rss)
+		valid_rss_hf |= last_general_rss_hf & RTE_ETH_RSS_IPV6;
+
+	return valid_rss_hf;
+}
+
+static int
+cpfl_rss_hash_conf_get(struct rte_eth_dev *dev,
+		       struct rte_eth_rss_conf *rss_conf)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_adapter *base = vport->adapter;
+	int ret = 0;
+
+	if (base->caps.rss_caps == 0 || dev->data->nb_rx_queues == 0) {
+		PMD_DRV_LOG(DEBUG, "RSS is not supported");
+		return -ENOTSUP;
+	}
+
+	ret = idpf_vc_rss_hash_get(vport);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "Failed to get RSS hf");
+		return ret;
+	}
+
+	rss_conf->rss_hf = cpfl_map_general_rss_hf(vport->rss_hf, vport->last_general_rss_hf);
+
+	if (!rss_conf->rss_key)
+		return 0;
+
+	ret = idpf_vc_rss_key_get(vport);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "Failed to get RSS key");
+		return ret;
+	}
+
+	if (rss_conf->rss_key_len > vport->rss_key_size)
+		rss_conf->rss_key_len = vport->rss_key_size;
+
+	memcpy(rss_conf->rss_key, vport->rss_key, rss_conf->rss_key_len);
+
+	return 0;
+}
+
 static int
 cpfl_dev_configure(struct rte_eth_dev *dev)
 {
@@ -528,6 +793,10 @@ static const struct eth_dev_ops cpfl_eth_dev_ops = {
 	.dev_supported_ptypes_get	= cpfl_dev_supported_ptypes_get,
 	.stats_get			= cpfl_dev_stats_get,
 	.stats_reset			= cpfl_dev_stats_reset,
+	.reta_update			= cpfl_rss_reta_update,
+	.reta_query			= cpfl_rss_reta_query,
+	.rss_hash_update		= cpfl_rss_hash_update,
+	.rss_hash_conf_get		= cpfl_rss_hash_conf_get,
 };
 
 static int
-- 
2.34.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v9 20/21] net/cpfl: support scalar scatter Rx datapath for single queue model
  2023-03-02 21:20             ` [PATCH v9 00/21] add support for cpfl PMD in DPDK Mingxia Liu
                                 ` (19 preceding siblings ...)
  2023-03-02 21:20               ` [PATCH v9 19/21] net/cpfl: add RSS set/get ops Mingxia Liu
@ 2023-03-02 21:20               ` Mingxia Liu
  2023-03-02 21:20               ` [PATCH v9 21/21] net/cpfl: add xstats ops Mingxia Liu
  21 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-03-02 21:20 UTC (permalink / raw)
  To: dev, beilei.xing, yuying.zhang; +Cc: Mingxia Liu, Wenjun Wu

This patch adds a scalar scatter Rx function for the single queue model.
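
As a rough illustration of when this datapath is selected, a standalone
sketch (not driver code; the overhead and buffer values are assumptions):

#include <stdint.h>
#include <stdio.h>

#define ETH_OVERHEAD_EXAMPLE   26  /* assumed L2 header + CRC + 2 VLAN tags */
#define CHAIN_NUM_EXAMPLE       5  /* mirrors CPFL_SUPPORT_CHAIN_NUM */

int main(void)
{
	uint32_t rx_buf_len = 2048;                        /* hypothetical mbuf data room */
	uint32_t frame_size = 9000 + ETH_OVERHEAD_EXAMPLE; /* hypothetical jumbo MTU */
	uint32_t limit = CHAIN_NUM_EXAMPLE * rx_buf_len;
	uint32_t max_pkt_len = frame_size < limit ? frame_size : limit;

	/* scatter is needed once a frame no longer fits in one Rx buffer */
	printf("max_pkt_len=%u scattered_rx=%d\n", max_pkt_len,
	       frame_size > rx_buf_len);
	return 0;
}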

Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/net/cpfl/cpfl_ethdev.c |  3 ++-
 drivers/net/cpfl/cpfl_rxtx.c   | 27 +++++++++++++++++++++++++++
 drivers/net/cpfl/cpfl_rxtx.h   |  2 ++
 3 files changed, 31 insertions(+), 1 deletion(-)

diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index 3341e37afa..e403ae9de4 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -155,7 +155,8 @@ cpfl_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 		RTE_ETH_RX_OFFLOAD_UDP_CKSUM            |
 		RTE_ETH_RX_OFFLOAD_TCP_CKSUM            |
 		RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM     |
-		RTE_ETH_RX_OFFLOAD_TIMESTAMP;
+		RTE_ETH_RX_OFFLOAD_TIMESTAMP		|
+		RTE_ETH_RX_OFFLOAD_SCATTER;
 
 	dev_info->tx_offload_capa =
 		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM		|
diff --git a/drivers/net/cpfl/cpfl_rxtx.c b/drivers/net/cpfl/cpfl_rxtx.c
index dac95579f5..9e8767df72 100644
--- a/drivers/net/cpfl/cpfl_rxtx.c
+++ b/drivers/net/cpfl/cpfl_rxtx.c
@@ -503,6 +503,8 @@ int
 cpfl_rx_queue_init(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 {
 	struct idpf_rx_queue *rxq;
+	uint16_t max_pkt_len;
+	uint32_t frame_size;
 	int err;
 
 	if (rx_queue_id >= dev->data->nb_rx_queues)
@@ -516,6 +518,17 @@ cpfl_rx_queue_init(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 		return -EINVAL;
 	}
 
+	frame_size = dev->data->mtu + CPFL_ETH_OVERHEAD;
+
+	max_pkt_len =
+	    RTE_MIN((uint32_t)CPFL_SUPPORT_CHAIN_NUM * rxq->rx_buf_len,
+		    frame_size);
+
+	rxq->max_pkt_len = max_pkt_len;
+	if ((dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_SCATTER) ||
+	    frame_size > rxq->rx_buf_len)
+		dev->data->scattered_rx = 1;
+
 	err = idpf_qc_ts_mbuf_register(rxq);
 	if (err != 0) {
 		PMD_DRV_LOG(ERR, "fail to register timestamp mbuf %u",
@@ -807,6 +820,13 @@ cpfl_set_rx_function(struct rte_eth_dev *dev)
 			}
 #endif /* CC_AVX512_SUPPORT */
 		}
+		if (dev->data->scattered_rx) {
+			PMD_DRV_LOG(NOTICE,
+				    "Using Single Scalar Scattered Rx (port %d).",
+				    dev->data->port_id);
+			dev->rx_pkt_burst = idpf_dp_singleq_recv_scatter_pkts;
+			return;
+		}
 		PMD_DRV_LOG(NOTICE,
 			    "Using Single Scalar Rx (port %d).",
 			    dev->data->port_id);
@@ -819,6 +839,13 @@ cpfl_set_rx_function(struct rte_eth_dev *dev)
 			    dev->data->port_id);
 		dev->rx_pkt_burst = idpf_dp_splitq_recv_pkts;
 	} else {
+		if (dev->data->scattered_rx) {
+			PMD_DRV_LOG(NOTICE,
+				    "Using Single Scalar Scattered Rx (port %d).",
+				    dev->data->port_id);
+			dev->rx_pkt_burst = idpf_dp_singleq_recv_scatter_pkts;
+			return;
+		}
 		PMD_DRV_LOG(NOTICE,
 			    "Using Single Scalar Rx (port %d).",
 			    dev->data->port_id);
diff --git a/drivers/net/cpfl/cpfl_rxtx.h b/drivers/net/cpfl/cpfl_rxtx.h
index 5f8144e55f..fb267d38c8 100644
--- a/drivers/net/cpfl/cpfl_rxtx.h
+++ b/drivers/net/cpfl/cpfl_rxtx.h
@@ -21,6 +21,8 @@
 #define CPFL_DEFAULT_TX_RS_THRESH	32
 #define CPFL_DEFAULT_TX_FREE_THRESH	32
 
+#define CPFL_SUPPORT_CHAIN_NUM 5
+
 int cpfl_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 			uint16_t nb_desc, unsigned int socket_id,
 			const struct rte_eth_txconf *tx_conf);
-- 
2.34.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* [PATCH v9 21/21] net/cpfl: add xstats ops
  2023-03-02 21:20             ` [PATCH v9 00/21] add support for cpfl PMD in DPDK Mingxia Liu
                                 ` (20 preceding siblings ...)
  2023-03-02 21:20               ` [PATCH v9 20/21] net/cpfl: support scalar scatter Rx datapath for single queue model Mingxia Liu
@ 2023-03-02 21:20               ` Mingxia Liu
  21 siblings, 0 replies; 263+ messages in thread
From: Mingxia Liu @ 2023-03-02 21:20 UTC (permalink / raw)
  To: dev, beilei.xing, yuying.zhang; +Cc: Mingxia Liu

Add support for these device ops:
- dev_xstats_get
- dev_xstats_get_names
- dev_xstats_reset
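
A minimal application-side sketch of the two-call pattern these ops
serve (illustrative only, not part of the patch; the helper name is an
assumption):

#include <inttypes.h>
#include <stdio.h>
#include <stdlib.h>
#include <rte_ethdev.h>

static void
example_dump_xstats(uint16_t port_id)
{
	/* first call with NULL queries the required array size */
	int n = rte_eth_xstats_get_names(port_id, NULL, 0);
	if (n <= 0)
		return;

	struct rte_eth_xstat_name *names = calloc(n, sizeof(*names));
	struct rte_eth_xstat *vals = calloc(n, sizeof(*vals));

	if (names != NULL && vals != NULL &&
	    rte_eth_xstats_get_names(port_id, names, n) == n &&
	    rte_eth_xstats_get(port_id, vals, n) == n) {
		for (int i = 0; i < n; i++)
			printf("%s: %" PRIu64 "\n",
			       names[vals[i].id].name, vals[i].value);
	}

	free(names);
	free(vals);
}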

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/net/cpfl/cpfl_ethdev.c | 79 ++++++++++++++++++++++++++++++++++
 1 file changed, 79 insertions(+)

diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
index e403ae9de4..0940bf1276 100644
--- a/drivers/net/cpfl/cpfl_ethdev.c
+++ b/drivers/net/cpfl/cpfl_ethdev.c
@@ -97,6 +97,28 @@ static const uint64_t cpfl_ipv6_rss = RTE_ETH_RSS_NONFRAG_IPV6_UDP |
 			  RTE_ETH_RSS_NONFRAG_IPV6_OTHER |
 			  RTE_ETH_RSS_FRAG_IPV6;
 
+struct rte_cpfl_xstats_name_off {
+	char name[RTE_ETH_XSTATS_NAME_SIZE];
+	unsigned int offset;
+};
+
+static const struct rte_cpfl_xstats_name_off rte_cpfl_stats_strings[] = {
+	{"rx_bytes", offsetof(struct virtchnl2_vport_stats, rx_bytes)},
+	{"rx_unicast_packets", offsetof(struct virtchnl2_vport_stats, rx_unicast)},
+	{"rx_multicast_packets", offsetof(struct virtchnl2_vport_stats, rx_multicast)},
+	{"rx_broadcast_packets", offsetof(struct virtchnl2_vport_stats, rx_broadcast)},
+	{"rx_dropped_packets", offsetof(struct virtchnl2_vport_stats, rx_discards)},
+	{"rx_errors", offsetof(struct virtchnl2_vport_stats, rx_errors)},
+	{"rx_unknown_protocol_packets", offsetof(struct virtchnl2_vport_stats,
+						 rx_unknown_protocol)},
+	{"tx_bytes", offsetof(struct virtchnl2_vport_stats, tx_bytes)},
+	{"tx_unicast_packets", offsetof(struct virtchnl2_vport_stats, tx_unicast)},
+	{"tx_multicast_packets", offsetof(struct virtchnl2_vport_stats, tx_multicast)},
+	{"tx_broadcast_packets", offsetof(struct virtchnl2_vport_stats, tx_broadcast)},
+	{"tx_dropped_packets", offsetof(struct virtchnl2_vport_stats, tx_discards)},
+	{"tx_error_packets", offsetof(struct virtchnl2_vport_stats, tx_errors)}};
+
+#define CPFL_NB_XSTATS			RTE_DIM(rte_cpfl_stats_strings)
 
 static int
 cpfl_dev_link_update(struct rte_eth_dev *dev,
@@ -312,6 +334,60 @@ cpfl_dev_stats_reset(struct rte_eth_dev *dev)
 	return 0;
 }
 
+static int cpfl_dev_xstats_reset(struct rte_eth_dev *dev)
+{
+	cpfl_dev_stats_reset(dev);
+	return 0;
+}
+
+static int cpfl_dev_xstats_get(struct rte_eth_dev *dev,
+			       struct rte_eth_xstat *xstats, unsigned int n)
+{
+	struct idpf_vport *vport =
+		(struct idpf_vport *)dev->data->dev_private;
+	struct virtchnl2_vport_stats *pstats = NULL;
+	unsigned int i;
+	int ret;
+
+	if (n < CPFL_NB_XSTATS)
+		return CPFL_NB_XSTATS;
+
+	if (!xstats)
+		return CPFL_NB_XSTATS;
+
+	ret = idpf_vc_stats_query(vport, &pstats);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "Get statistics failed");
+		return 0;
+	}
+
+	idpf_vport_stats_update(&vport->eth_stats_offset, pstats);
+
+	/* loop over xstats array and values from pstats */
+	for (i = 0; i < CPFL_NB_XSTATS; i++) {
+		xstats[i].id = i;
+		xstats[i].value = *(uint64_t *)(((char *)pstats) +
+			rte_cpfl_stats_strings[i].offset);
+	}
+	return CPFL_NB_XSTATS;
+}
+
+static int cpfl_dev_xstats_get_names(__rte_unused struct rte_eth_dev *dev,
+				     struct rte_eth_xstat_name *xstats_names,
+				     __rte_unused unsigned int limit)
+{
+	unsigned int i;
+
+	if (xstats_names) {
+		for (i = 0; i < CPFL_NB_XSTATS; i++) {
+			snprintf(xstats_names[i].name,
+				 sizeof(xstats_names[i].name),
+				 "%s", rte_cpfl_stats_strings[i].name);
+		}
+	}
+	return CPFL_NB_XSTATS;
+}
+
 static int cpfl_config_rss_hf(struct idpf_vport *vport, uint64_t rss_hf)
 {
 	uint64_t hena = 0;
@@ -798,6 +874,9 @@ static const struct eth_dev_ops cpfl_eth_dev_ops = {
 	.reta_query			= cpfl_rss_reta_query,
 	.rss_hash_update		= cpfl_rss_hash_update,
 	.rss_hash_conf_get		= cpfl_rss_hash_conf_get,
+	.xstats_get			= cpfl_dev_xstats_get,
+	.xstats_get_names		= cpfl_dev_xstats_get_names,
+	.xstats_reset			= cpfl_dev_xstats_reset,
 };
 
 static int
-- 
2.34.1


^ permalink raw reply	[flat|nested] 263+ messages in thread

* Re: [PATCH v9 01/21] net/cpfl: support device initialization
  2023-03-02 21:20               ` [PATCH v9 01/21] net/cpfl: support device initialization Mingxia Liu
@ 2023-03-07 14:11                 ` Ferruh Yigit
  2023-03-07 15:03                   ` Ferruh Yigit
  0 siblings, 1 reply; 263+ messages in thread
From: Ferruh Yigit @ 2023-03-07 14:11 UTC (permalink / raw)
  To: Mingxia Liu, beilei.xing, yuying.zhang, Raslan Darawsheh; +Cc: dev

On 3/2/2023 9:20 PM, Mingxia Liu wrote:
> Support device init and add the following dev ops:
>  - dev_configure
>  - dev_close
>  - dev_infos_get
>  - link_update
>  - dev_supported_ptypes_get
> 
> Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>

<...>

> +static void
> +cpfl_handle_virtchnl_msg(struct cpfl_adapter_ext *adapter)
> +{
> +	struct idpf_adapter *base = &adapter->base;
> +	struct idpf_dma_mem *dma_mem = NULL;
> +	struct idpf_hw *hw = &base->hw;
> +	struct virtchnl2_event *vc_event;
> +	struct idpf_ctlq_msg ctlq_msg;
> +	enum idpf_mbx_opc mbx_op;
> +	struct idpf_vport *vport;
> +	enum virtchnl_ops vc_op;
> +	uint16_t pending = 1;
> +	int ret;
> +
> +	while (pending) {
> +		ret = idpf_vc_ctlq_recv(hw->arq, &pending, &ctlq_msg);
> +		if (ret) {
> +			PMD_DRV_LOG(INFO, "Failed to read msg from virtual channel, ret: %d", ret);
> +			return;
> +		}
> +
> +		memcpy(base->mbx_resp, ctlq_msg.ctx.indirect.payload->va,
> +			   IDPF_DFLT_MBX_BUF_SIZE);
> +
> +		mbx_op = rte_le_to_cpu_16(ctlq_msg.opcode);
> +		vc_op = rte_le_to_cpu_32(ctlq_msg.cookie.mbx.chnl_opcode);
> +		base->cmd_retval = rte_le_to_cpu_32(ctlq_msg.cookie.mbx.chnl_retval);
> +
> +		switch (mbx_op) {
> +		case idpf_mbq_opc_send_msg_to_peer_pf:
> +			if (vc_op == VIRTCHNL2_OP_EVENT) {


Raslan reported the following build error [1]: 'VIRTCHNL2_OP_EVENT' is not
an element of "enum virtchnl_ops", can you please check?


I guess there are a few options: have a new enum for virtchnl2, like
"enum virtchnl2_ops", which includes all the 'VIRTCHNL2_OP_' opcodes,

OR

use the 'uint32_t' type (instead of "enum virtchnl_ops") wherever
'VIRTCHNL2_OP_' opcodes can be used; this seems simpler.


BTW, this is the same in the idpf driver.


[1]
drivers/libtmp_rte_net_cpfl.a.p/net_cpfl_cpfl_ethdev.c.o -c
../../root/dpdk/drivers/net/cpfl/cpfl_ethdev.c
../../root/dpdk/drivers/net/cpfl/cpfl_ethdev.c:1118:14: error:
comparison of constant 522 with expression of type 'enum virtchnl_ops'
is always false [-Werror,-Wtautological-constant-out-of-range-compare]
                        if (vc_op == VIRTCHNL2_OP_EVENT) {
                            ~~~~~ ^  ~~~~~~~~~~~~~~~~~~
1 error generated.
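
To make the two options concrete, an illustrative-only sketch (the names
below are made up, not taken from the real virtchnl2 headers):

#include <stdint.h>

/* option 1: a dedicated enum that actually lists the virtchnl2 opcodes */
enum virtchnl2_ops_example {
	VIRTCHNL2_OP_EVENT_EXAMPLE = 522,
};

/* option 2: carry the opcode in a fixed-width integer end to end */
static inline int
is_virtchnl2_event(uint32_t vc_op)
{
	return vc_op == VIRTCHNL2_OP_EVENT_EXAMPLE;
}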


^ permalink raw reply	[flat|nested] 263+ messages in thread

* Re: [PATCH v9 01/21] net/cpfl: support device initialization
  2023-03-07 14:11                 ` Ferruh Yigit
@ 2023-03-07 15:03                   ` Ferruh Yigit
  2023-03-08 17:03                     ` Ferruh Yigit
  2023-03-09  1:42                     ` Liu, Mingxia
  0 siblings, 2 replies; 263+ messages in thread
From: Ferruh Yigit @ 2023-03-07 15:03 UTC (permalink / raw)
  To: Mingxia Liu, beilei.xing, yuying.zhang, Raslan Darawsheh
  Cc: dev, Stephen Hemminger, Bruce Richardson, Qi Z Zhang

On 3/7/2023 2:11 PM, Ferruh Yigit wrote:
> On 3/2/2023 9:20 PM, Mingxia Liu wrote:
>> Support device init and add the following dev ops:
>>  - dev_configure
>>  - dev_close
>>  - dev_infos_get
>>  - link_update
>>  - dev_supported_ptypes_get
>>
>> Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
> 
> <...>
> 
>> +static void
>> +cpfl_handle_virtchnl_msg(struct cpfl_adapter_ext *adapter)
>> +{
>> +	struct idpf_adapter *base = &adapter->base;
>> +	struct idpf_dma_mem *dma_mem = NULL;
>> +	struct idpf_hw *hw = &base->hw;
>> +	struct virtchnl2_event *vc_event;
>> +	struct idpf_ctlq_msg ctlq_msg;
>> +	enum idpf_mbx_opc mbx_op;
>> +	struct idpf_vport *vport;
>> +	enum virtchnl_ops vc_op;
>> +	uint16_t pending = 1;
>> +	int ret;
>> +
>> +	while (pending) {
>> +		ret = idpf_vc_ctlq_recv(hw->arq, &pending, &ctlq_msg);
>> +		if (ret) {
>> +			PMD_DRV_LOG(INFO, "Failed to read msg from virtual channel, ret: %d", ret);
>> +			return;
>> +		}
>> +
>> +		memcpy(base->mbx_resp, ctlq_msg.ctx.indirect.payload->va,
>> +			   IDPF_DFLT_MBX_BUF_SIZE);
>> +
>> +		mbx_op = rte_le_to_cpu_16(ctlq_msg.opcode);
>> +		vc_op = rte_le_to_cpu_32(ctlq_msg.cookie.mbx.chnl_opcode);
>> +		base->cmd_retval = rte_le_to_cpu_32(ctlq_msg.cookie.mbx.chnl_retval);
>> +
>> +		switch (mbx_op) {
>> +		case idpf_mbq_opc_send_msg_to_peer_pf:
>> +			if (vc_op == VIRTCHNL2_OP_EVENT) {
> 
> 
> Raslan reported the following build error [1]: 'VIRTCHNL2_OP_EVENT' is not
> an element of "enum virtchnl_ops", can you please check?
> 
> 
> I guess there are a few options: have a new enum for virtchnl2, like
> "enum virtchnl2_ops", which includes all the 'VIRTCHNL2_OP_' opcodes,
> 
> OR
> 
> use the 'uint32_t' type (instead of "enum virtchnl_ops") wherever
> 'VIRTCHNL2_OP_' opcodes can be used; this seems simpler.
> 
> 
> BTW, this is the same in the idpf driver.
> 
> 
> [1]
> drivers/libtmp_rte_net_cpfl.a.p/net_cpfl_cpfl_ethdev.c.o -c
> ../../root/dpdk/drivers/net/cpfl/cpfl_ethdev.c
> ../../root/dpdk/drivers/net/cpfl/cpfl_ethdev.c:1118:14: error:
> comparison of constant 522 with expression of type 'enum virtchnl_ops'
> is always false [-Werror,-Wtautological-constant-out-of-range-compare]
>                         if (vc_op == VIRTCHNL2_OP_EVENT) {
>                             ~~~~~ ^  ~~~~~~~~~~~~~~~~~~
> 1 error generated.
> 

Thinking twice, I am not sure if this is a compiler issue or a coding issue;
many compilers don't complain about the above.

As far as I understand, C allows assigning unlisted values to enums,
because underneath it just uses an integer type.

The only caveat I can see is that the integer type used is not fixed;
technically the compiler can select any type that fits all the enum values,
so for the above enum the compiler could select a char type to store the
values, and the fixed value 522, out of the char range, may cause an issue.
But in practice I am not sure if compilers select char as the underlying
type, or if they all just use 'int'.
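
A standalone toy that reproduces this warning class (not DPDK code; the
enum and macro are stand-ins):

#include <stdint.h>
#include <stdio.h>

enum small_ops { OP_A = 1, OP_B = 2 };  /* stand-in for enum virtchnl_ops */
#define OTHER_OP 522                    /* stand-in for VIRTCHNL2_OP_EVENT */

int main(void)
{
	/* legal C, but the converted value is outside the enum's range */
	enum small_ops op = (enum small_ops)OTHER_OP;
	uint32_t op_u32 = OTHER_OP;

	if (op == OTHER_OP)      /* clang 3.4.x warns: always false */
		printf("enum compare matched\n");
	if (op_u32 == OTHER_OP)  /* fixed-width compare: no warning */
		printf("uint32_t compare matched\n");
	return 0;
}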


^ permalink raw reply	[flat|nested] 263+ messages in thread

* Re: [PATCH v9 01/21] net/cpfl: support device initialization
  2023-03-07 15:03                   ` Ferruh Yigit
@ 2023-03-08 17:03                     ` Ferruh Yigit
  2023-03-09  0:59                       ` Liu, Mingxia
  2023-03-09  1:42                     ` Liu, Mingxia
  1 sibling, 1 reply; 263+ messages in thread
From: Ferruh Yigit @ 2023-03-08 17:03 UTC (permalink / raw)
  To: Mingxia Liu, beilei.xing, yuying.zhang, Qi Z Zhang
  Cc: dev, Stephen Hemminger, Bruce Richardson, Raslan Darawsheh

On 3/7/2023 3:03 PM, Ferruh Yigit wrote:
> On 3/7/2023 2:11 PM, Ferruh Yigit wrote:
>> On 3/2/2023 9:20 PM, Mingxia Liu wrote:
>>> Support device init and add the following dev ops:
>>>  - dev_configure
>>>  - dev_close
>>>  - dev_infos_get
>>>  - link_update
>>>  - dev_supported_ptypes_get
>>>
>>> Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
>>
>> <...>
>>
>>> +static void
>>> +cpfl_handle_virtchnl_msg(struct cpfl_adapter_ext *adapter)
>>> +{
>>> +	struct idpf_adapter *base = &adapter->base;
>>> +	struct idpf_dma_mem *dma_mem = NULL;
>>> +	struct idpf_hw *hw = &base->hw;
>>> +	struct virtchnl2_event *vc_event;
>>> +	struct idpf_ctlq_msg ctlq_msg;
>>> +	enum idpf_mbx_opc mbx_op;
>>> +	struct idpf_vport *vport;
>>> +	enum virtchnl_ops vc_op;
>>> +	uint16_t pending = 1;
>>> +	int ret;
>>> +
>>> +	while (pending) {
>>> +		ret = idpf_vc_ctlq_recv(hw->arq, &pending, &ctlq_msg);
>>> +		if (ret) {
>>> +			PMD_DRV_LOG(INFO, "Failed to read msg from virtual channel, ret: %d", ret);
>>> +			return;
>>> +		}
>>> +
>>> +		memcpy(base->mbx_resp, ctlq_msg.ctx.indirect.payload->va,
>>> +			   IDPF_DFLT_MBX_BUF_SIZE);
>>> +
>>> +		mbx_op = rte_le_to_cpu_16(ctlq_msg.opcode);
>>> +		vc_op = rte_le_to_cpu_32(ctlq_msg.cookie.mbx.chnl_opcode);
>>> +		base->cmd_retval = rte_le_to_cpu_32(ctlq_msg.cookie.mbx.chnl_retval);
>>> +
>>> +		switch (mbx_op) {
>>> +		case idpf_mbq_opc_send_msg_to_peer_pf:
>>> +			if (vc_op == VIRTCHNL2_OP_EVENT) {
>>
>>
>> Raslan reported the following build error [1]: 'VIRTCHNL2_OP_EVENT' is not
>> an element of "enum virtchnl_ops", can you please check?
>>
>>
>> I guess there are a few options: have a new enum for virtchnl2, like
>> "enum virtchnl2_ops", which includes all the 'VIRTCHNL2_OP_' opcodes,
>>
>> OR
>>
>> use the 'uint32_t' type (instead of "enum virtchnl_ops") wherever
>> 'VIRTCHNL2_OP_' opcodes can be used; this seems simpler.
>>
>>
>> BTW, this is the same in the idpf driver.
>>
>>
>> [1]
>> drivers/libtmp_rte_net_cpfl.a.p/net_cpfl_cpfl_ethdev.c.o -c
>> ../../root/dpdk/drivers/net/cpfl/cpfl_ethdev.c
>> ../../root/dpdk/drivers/net/cpfl/cpfl_ethdev.c:1118:14: error:
>> comparison of constant 522 with expression of type 'enum virtchnl_ops'
>> is always false [-Werror,-Wtautological-constant-out-of-range-compare]
>>                         if (vc_op == VIRTCHNL2_OP_EVENT) {
>>                             ~~~~~ ^  ~~~~~~~~~~~~~~~~~~
>> 1 error generated.
>>
> 
> Thinking twice, I am not sure if this is a compiler issue or a coding issue;
> many compilers don't complain about the above.
> 
> As far as I understand, C allows assigning unlisted values to enums,
> because underneath it just uses an integer type.
> 
> The only caveat I can see is that the integer type used is not fixed;
> technically the compiler can select any type that fits all the enum values,
> so for the above enum the compiler could select a char type to store the
> values, and the fixed value 522, out of the char range, may cause an issue.
> But in practice I am not sure if compilers select char as the underlying
> type, or if they all just use 'int'.
> 

Hi Mingxia, Beilei, Yuying, Qi,

Reminder of this issue.

The build error is observed with clang 3.4.x [1]; can you please work on a fix?


[1] https://godbolt.org/z/zrKz7371b

Thanks,
ferruh

^ permalink raw reply	[flat|nested] 263+ messages in thread

* RE: [PATCH v9 01/21] net/cpfl: support device initialization
  2023-03-08 17:03                     ` Ferruh Yigit
@ 2023-03-09  0:59                       ` Liu, Mingxia
  0 siblings, 0 replies; 263+ messages in thread
From: Liu, Mingxia @ 2023-03-09  0:59 UTC (permalink / raw)
  To: Ferruh Yigit, Xing, Beilei, Zhang, Yuying, Zhang, Qi Z
  Cc: dev, Stephen Hemminger, Richardson, Bruce, Raslan Darawsheh



> -----Original Message-----
> From: Ferruh Yigit <ferruh.yigit@amd.com>
> Sent: Thursday, March 9, 2023 1:04 AM
> To: Liu, Mingxia <mingxia.liu@intel.com>; Xing, Beilei <beilei.xing@intel.com>;
> Zhang, Yuying <yuying.zhang@intel.com>; Zhang, Qi Z <qi.z.zhang@intel.com>
> Cc: dev@dpdk.org; Stephen Hemminger <stephen@networkplumber.org>;
> Richardson, Bruce <bruce.richardson@intel.com>; Raslan Darawsheh
> <rasland@nvidia.com>
> Subject: Re: [PATCH v9 01/21] net/cpfl: support device initialization
> 
> On 3/7/2023 3:03 PM, Ferruh Yigit wrote:
> > On 3/7/2023 2:11 PM, Ferruh Yigit wrote:
> >> On 3/2/2023 9:20 PM, Mingxia Liu wrote:
> >>> Support device init and add the following dev ops:
> >>>  - dev_configure
> >>>  - dev_close
> >>>  - dev_infos_get
> >>>  - link_update
> >>>  - dev_supported_ptypes_get
> >>>
> >>> Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
> >>
> >> <...>
> >>
> >>> +static void
> >>> +cpfl_handle_virtchnl_msg(struct cpfl_adapter_ext *adapter) {
> >>> +	struct idpf_adapter *base = &adapter->base;
> >>> +	struct idpf_dma_mem *dma_mem = NULL;
> >>> +	struct idpf_hw *hw = &base->hw;
> >>> +	struct virtchnl2_event *vc_event;
> >>> +	struct idpf_ctlq_msg ctlq_msg;
> >>> +	enum idpf_mbx_opc mbx_op;
> >>> +	struct idpf_vport *vport;
> >>> +	enum virtchnl_ops vc_op;
> >>> +	uint16_t pending = 1;
> >>> +	int ret;
> >>> +
> >>> +	while (pending) {
> >>> +		ret = idpf_vc_ctlq_recv(hw->arq, &pending, &ctlq_msg);
> >>> +		if (ret) {
> >>> +			PMD_DRV_LOG(INFO, "Failed to read msg from virtual
> channel, ret: %d", ret);
> >>> +			return;
> >>> +		}
> >>> +
> >>> +		memcpy(base->mbx_resp, ctlq_msg.ctx.indirect.payload->va,
> >>> +			   IDPF_DFLT_MBX_BUF_SIZE);
> >>> +
> >>> +		mbx_op = rte_le_to_cpu_16(ctlq_msg.opcode);
> >>> +		vc_op = rte_le_to_cpu_32(ctlq_msg.cookie.mbx.chnl_opcode);
> >>> +		base->cmd_retval =
> >>> +rte_le_to_cpu_32(ctlq_msg.cookie.mbx.chnl_retval);
> >>> +
> >>> +		switch (mbx_op) {
> >>> +		case idpf_mbq_opc_send_msg_to_peer_pf:
> >>> +			if (vc_op == VIRTCHNL2_OP_EVENT) {
> >>
> >>
> >> Raslan reported the following build error [1]: 'VIRTCHNL2_OP_EVENT' is
> >> not an element of "enum virtchnl_ops", can you please check?
> >>
> >>
> >> I guess there are a few options: have a new enum for virtchnl2, like
> >> "enum virtchnl2_ops", which includes all the 'VIRTCHNL2_OP_' opcodes,
> >>
> >> OR
> >>
> >> use the 'uint32_t' type (instead of "enum virtchnl_ops") wherever
> >> 'VIRTCHNL2_OP_' opcodes can be used; this seems simpler.
> >>
> >>
> >> BTW, this is the same in the idpf driver.
> >>
> >>
> >> [1]
> >> drivers/libtmp_rte_net_cpfl.a.p/net_cpfl_cpfl_ethdev.c.o -c
> >> ../../root/dpdk/drivers/net/cpfl/cpfl_ethdev.c
> >> ../../root/dpdk/drivers/net/cpfl/cpfl_ethdev.c:1118:14: error:
> >> comparison of constant 522 with expression of type 'enum virtchnl_ops'
> >> is always false [-Werror,-Wtautological-constant-out-of-range-compare]
> >>                         if (vc_op == VIRTCHNL2_OP_EVENT) {
> >>                             ~~~~~ ^  ~~~~~~~~~~~~~~~~~~
> >> 1 error generated.
> >>
> >
> > Thinking twice, I am not sure if this is a compiler issue or a coding
> > issue; many compilers don't complain about the above.
> >
> > As far as I understand, C allows assigning unlisted values to enums,
> > because underneath it just uses an integer type.
> >
> > The only caveat I can see is that the integer type used is not fixed;
> > technically the compiler can select any type that fits all the enum
> > values, so for the above enum the compiler could select a char type to
> > store the values, and the fixed value 522, out of the char range, may
> > cause an issue. But in practice I am not sure if compilers select char
> > as the underlying type, or if they all just use 'int'.
> >
> 
> Hi Mingxia, Beilei, Yuying, Qi,
> 
> Reminder of this issue.
> 
> Build error is observed by clang 3.4.x [1], can you please work on a fix?
> 
> 
> [1] https://godbolt.org/z/zrKz7371b
> 
> Thanks,
> Ferruh
[Liu, Mingxia] Sorry for the late reply, I just came back from sl. I'll check the issue as soon as possible.
Thanks!

^ permalink raw reply	[flat|nested] 263+ messages in thread

* RE: [PATCH v9 01/21] net/cpfl: support device initialization
  2023-03-07 15:03                   ` Ferruh Yigit
  2023-03-08 17:03                     ` Ferruh Yigit
@ 2023-03-09  1:42                     ` Liu, Mingxia
  1 sibling, 0 replies; 263+ messages in thread
From: Liu, Mingxia @ 2023-03-09  1:42 UTC (permalink / raw)
  To: Ferruh Yigit, Xing, Beilei, Zhang, Yuying, Raslan Darawsheh
  Cc: dev, Stephen Hemminger, Richardson, Bruce, Zhang, Qi Z



> -----Original Message-----
> From: Ferruh Yigit <ferruh.yigit@amd.com>
> Sent: Tuesday, March 7, 2023 11:03 PM
> To: Liu, Mingxia <mingxia.liu@intel.com>; Xing, Beilei <beilei.xing@intel.com>;
> Zhang, Yuying <yuying.zhang@intel.com>; Raslan Darawsheh
> <rasland@nvidia.com>
> Cc: dev@dpdk.org; Stephen Hemminger <stephen@networkplumber.org>;
> Richardson, Bruce <bruce.richardson@intel.com>; Zhang, Qi Z
> <qi.z.zhang@intel.com>
> Subject: Re: [PATCH v9 01/21] net/cpfl: support device initialization
> 
> On 3/7/2023 2:11 PM, Ferruh Yigit wrote:
> > On 3/2/2023 9:20 PM, Mingxia Liu wrote:
> >> Support device init and add the following dev ops:
> >>  - dev_configure
> >>  - dev_close
> >>  - dev_infos_get
> >>  - link_update
> >>  - dev_supported_ptypes_get
> >>
> >> Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
> >
> > <...>
> >
> >> +static void
> >> +cpfl_handle_virtchnl_msg(struct cpfl_adapter_ext *adapter) {
> >> +	struct idpf_adapter *base = &adapter->base;
> >> +	struct idpf_dma_mem *dma_mem = NULL;
> >> +	struct idpf_hw *hw = &base->hw;
> >> +	struct virtchnl2_event *vc_event;
> >> +	struct idpf_ctlq_msg ctlq_msg;
> >> +	enum idpf_mbx_opc mbx_op;
> >> +	struct idpf_vport *vport;
> >> +	enum virtchnl_ops vc_op;
> >> +	uint16_t pending = 1;
> >> +	int ret;
> >> +
> >> +	while (pending) {
> >> +		ret = idpf_vc_ctlq_recv(hw->arq, &pending, &ctlq_msg);
> >> +		if (ret) {
> >> +			PMD_DRV_LOG(INFO, "Failed to read msg from virtual
> channel, ret: %d", ret);
> >> +			return;
> >> +		}
> >> +
> >> +		memcpy(base->mbx_resp, ctlq_msg.ctx.indirect.payload->va,
> >> +			   IDPF_DFLT_MBX_BUF_SIZE);
> >> +
> >> +		mbx_op = rte_le_to_cpu_16(ctlq_msg.opcode);
> >> +		vc_op = rte_le_to_cpu_32(ctlq_msg.cookie.mbx.chnl_opcode);
> >> +		base->cmd_retval =
> >> +rte_le_to_cpu_32(ctlq_msg.cookie.mbx.chnl_retval);
> >> +
> >> +		switch (mbx_op) {
> >> +		case idpf_mbq_opc_send_msg_to_peer_pf:
> >> +			if (vc_op == VIRTCHNL2_OP_EVENT) {
> >
> >
> > Raslan reported the following build error [1]: 'VIRTCHNL2_OP_EVENT' is not
> > an element of "enum virtchnl_ops", can you please check?
> >
> >
> > I guess there are a few options: have a new enum for virtchnl2, like
> > "enum virtchnl2_ops", which includes all the 'VIRTCHNL2_OP_' opcodes,
> >
> > OR
> >
> > use the 'uint32_t' type (instead of "enum virtchnl_ops") wherever
> > 'VIRTCHNL2_OP_' opcodes can be used; this seems simpler.
> >
> >
> > BTW, this is the same in the idpf driver.
> >
> >
> > [1]
> > drivers/libtmp_rte_net_cpfl.a.p/net_cpfl_cpfl_ethdev.c.o -c
> > ../../root/dpdk/drivers/net/cpfl/cpfl_ethdev.c
> > ../../root/dpdk/drivers/net/cpfl/cpfl_ethdev.c:1118:14: error:
> > comparison of constant 522 with expression of type 'enum virtchnl_ops'
> > is always false [-Werror,-Wtautological-constant-out-of-range-compare]
> >                         if (vc_op == VIRTCHNL2_OP_EVENT) {
> >                             ~~~~~ ^  ~~~~~~~~~~~~~~~~~~
> > 1 error generated.
> >
> 
> Thinking twice, I am not sure if this is a compiler issue or a coding
> issue; many compilers don't complain about the above.
> 
> As far as I understand, C allows assigning unlisted values to enums,
> because underneath it just uses an integer type.
> 
> The only caveat I can see is that the integer type used is not fixed;
> technically the compiler can select any type that fits all the enum
> values, so for the above enum the compiler could select a char type to
> store the values, and the fixed value 522, out of the char range, may
> cause an issue. But in practice I am not sure if compilers select char
> as the underlying type, or if they all just use 'int'.
[Liu, Mingxia] By checking the code, we shouldn't compare an enum virtchnl_ops variable with VIRTCHNL2_OP_EVENT,
as VIRTCHNL2_OP_EVENT is not included in enum virtchnl_ops. The cpfl/idpf PMDs use virtual channel msg opcodes
prefixed with virtchnl2 or VIRTCHNL2. I'll send a patch to fix this issue.


^ permalink raw reply	[flat|nested] 263+ messages in thread

end of thread (newest: 2023-03-09  1:42 UTC)

Thread overview: 263+ messages
2022-12-23  1:55 [PATCH 00/21] add support for cpfl PMD in DPDK Mingxia Liu
2022-12-23  1:55 ` [PATCH 01/21] net/cpfl: support device initialization Mingxia Liu
2022-12-23  1:55 ` [PATCH 02/21] net/cpfl: add Tx queue setup Mingxia Liu
2022-12-23  1:55 ` [PATCH 03/21] net/cpfl: add Rx " Mingxia Liu
2022-12-23  1:55 ` [PATCH 04/21] net/cpfl: support device start and stop Mingxia Liu
2022-12-23  1:55 ` [PATCH 05/21] net/cpfl: support queue start Mingxia Liu
2022-12-23  1:55 ` [PATCH 06/21] net/cpfl: support queue stop Mingxia Liu
2022-12-23  1:55 ` [PATCH 07/21] net/cpfl: support queue release Mingxia Liu
2022-12-23  1:55 ` [PATCH 08/21] net/cpfl: support MTU configuration Mingxia Liu
2022-12-23  1:55 ` [PATCH 09/21] net/cpfl: support basic Rx data path Mingxia Liu
2022-12-23  1:55 ` [PATCH 10/21] net/cpfl: support basic Tx " Mingxia Liu
2022-12-23  1:55 ` [PATCH 11/21] net/cpfl: support write back based on ITR expire Mingxia Liu
2022-12-23  1:55 ` [PATCH 12/21] net/cpfl: support RSS Mingxia Liu
2022-12-23  1:55 ` [PATCH 13/21] net/cpfl: support Rx offloading Mingxia Liu
2022-12-23  1:55 ` [PATCH 14/21] net/cpfl: support Tx offloading Mingxia Liu
2022-12-23  1:55 ` [PATCH 15/21] net/cpfl: add AVX512 data path for single queue model Mingxia Liu
2022-12-23  1:55 ` [PATCH 16/21] net/cpfl: support timestamp offload Mingxia Liu
2022-12-23  1:55 ` [PATCH 17/21] net/cpfl: add AVX512 data path for split queue model Mingxia Liu
2022-12-23  1:55 ` [PATCH 18/21] net/cpfl: add hw statistics Mingxia Liu
2022-12-23  1:55 ` [PATCH 19/21] net/cpfl: add RSS set/get ops Mingxia Liu
2022-12-23  1:55 ` [PATCH 20/21] net/cpfl: support single q scatter RX datapath Mingxia Liu
2022-12-23  1:55 ` [PATCH 21/21] net/cpfl: add xstats ops Mingxia Liu
2023-01-13  8:19 ` [PATCH v2 00/21] add support for cpfl PMD in DPDK Mingxia Liu
2023-01-13  8:19   ` [PATCH v2 01/21] net/cpfl: support device initialization Mingxia Liu
2023-01-13 13:32     ` Zhang, Helin
2023-01-13  8:19   ` [PATCH v2 02/21] net/cpfl: add Tx queue setup Mingxia Liu
2023-01-13  8:19   ` [PATCH v2 03/21] net/cpfl: add Rx " Mingxia Liu
2023-01-13  8:19   ` [PATCH v2 04/21] net/cpfl: support device start and stop Mingxia Liu
2023-01-13  8:19   ` [PATCH v2 05/21] net/cpfl: support queue start Mingxia Liu
2023-01-13  8:19   ` [PATCH v2 06/21] net/cpfl: support queue stop Mingxia Liu
2023-01-13  8:19   ` [PATCH v2 07/21] net/cpfl: support queue release Mingxia Liu
2023-01-13  8:19   ` [PATCH v2 08/21] net/cpfl: support MTU configuration Mingxia Liu
2023-01-13  8:19   ` [PATCH v2 09/21] net/cpfl: support basic Rx data path Mingxia Liu
2023-01-13  8:19   ` [PATCH v2 10/21] net/cpfl: support basic Tx " Mingxia Liu
2023-01-13  8:19   ` [PATCH v2 11/21] net/cpfl: support write back based on ITR expire Mingxia Liu
2023-01-13  8:19   ` [PATCH v2 12/21] net/cpfl: support RSS Mingxia Liu
2023-01-13  8:19   ` [PATCH v2 13/21] net/cpfl: support Rx offloading Mingxia Liu
2023-01-13  8:19   ` [PATCH v2 14/21] net/cpfl: support Tx offloading Mingxia Liu
2023-01-13  8:19   ` [PATCH v2 15/21] net/cpfl: add AVX512 data path for single queue model Mingxia Liu
2023-01-13  8:19   ` [PATCH v2 16/21] net/cpfl: support timestamp offload Mingxia Liu
2023-01-13  8:19   ` [PATCH v2 17/21] net/cpfl: add AVX512 data path for split queue model Mingxia Liu
2023-01-13  8:19   ` [PATCH v2 18/21] net/cpfl: add hw statistics Mingxia Liu
2023-01-13  8:19   ` [PATCH v2 19/21] net/cpfl: add RSS set/get ops Mingxia Liu
2023-01-13  8:19   ` [PATCH v2 20/21] net/cpfl: support single q scatter RX datapath Mingxia Liu
2023-01-13  8:19   ` [PATCH v2 21/21] net/cpfl: add xstats ops Mingxia Liu
2023-01-13 12:49   ` [PATCH v2 00/21] add support for cpfl PMD in DPDK Zhang, Helin
2023-01-18  7:31   ` [PATCH v3 " Mingxia Liu
2023-01-18  7:31     ` [PATCH v3 01/21] net/cpfl: support device initialization Mingxia Liu
2023-01-18  7:31     ` [PATCH v3 02/21] net/cpfl: add Tx queue setup Mingxia Liu
2023-01-18  7:31     ` [PATCH v3 03/21] net/cpfl: add Rx " Mingxia Liu
2023-01-18  7:31     ` [PATCH v3 04/21] net/cpfl: support device start and stop Mingxia Liu
2023-01-18  7:31     ` [PATCH v3 05/21] net/cpfl: support queue start Mingxia Liu
2023-01-18  7:31     ` [PATCH v3 06/21] net/cpfl: support queue stop Mingxia Liu
2023-01-18  7:31     ` [PATCH v3 07/21] net/cpfl: support queue release Mingxia Liu
2023-01-18  7:31     ` [PATCH v3 08/21] net/cpfl: support MTU configuration Mingxia Liu
2023-01-18  7:31     ` [PATCH v3 09/21] net/cpfl: support basic Rx data path Mingxia Liu
2023-01-18  7:31     ` [PATCH v3 10/21] net/cpfl: support basic Tx " Mingxia Liu
2023-01-18  7:31     ` [PATCH v3 11/21] net/cpfl: support write back based on ITR expire Mingxia Liu
2023-01-18  7:31     ` [PATCH v3 12/21] net/cpfl: support RSS Mingxia Liu
2023-01-18  7:31     ` [PATCH v3 13/21] net/cpfl: support Rx offloading Mingxia Liu
2023-01-18  7:31     ` [PATCH v3 14/21] net/cpfl: support Tx offloading Mingxia Liu
2023-01-18  7:31     ` [PATCH v3 16/21] net/cpfl: support timestamp offload Mingxia Liu
2023-01-18  7:31     ` [PATCH v3 18/21] net/cpfl: add hw statistics Mingxia Liu
2023-01-18  7:31     ` [PATCH v3 19/21] net/cpfl: add RSS set/get ops Mingxia Liu
2023-01-18  7:31     ` [PATCH v3 21/21] net/cpfl: add xstats ops Mingxia Liu
2023-01-18  7:33   ` [PATCH v3 15/21] net/cpfl: add AVX512 data path for single queue model Mingxia Liu
2023-01-18  7:33     ` [PATCH v3 17/21] net/cpfl: add AVX512 data path for split " Mingxia Liu
2023-01-18  7:33     ` [PATCH v3 20/21] net/cpfl: support single q scatter RX datapath Mingxia Liu
2023-01-18  7:57   ` [PATCH v4 00/21] add support for cpfl PMD in DPDK Mingxia Liu
2023-01-18  7:57     ` [PATCH v4 01/21] net/cpfl: support device initialization Mingxia Liu
2023-01-18  7:57     ` [PATCH v4 02/21] net/cpfl: add Tx queue setup Mingxia Liu
2023-01-18  7:57     ` [PATCH v4 03/21] net/cpfl: add Rx " Mingxia Liu
2023-01-18  7:57     ` [PATCH v4 04/21] net/cpfl: support device start and stop Mingxia Liu
2023-01-18  7:57     ` [PATCH v4 05/21] net/cpfl: support queue start Mingxia Liu
2023-01-18  7:57     ` [PATCH v4 06/21] net/cpfl: support queue stop Mingxia Liu
2023-01-18  7:57     ` [PATCH v4 07/21] net/cpfl: support queue release Mingxia Liu
2023-01-18  7:57     ` [PATCH v4 08/21] net/cpfl: support MTU configuration Mingxia Liu
2023-01-18  7:57     ` [PATCH v4 09/21] net/cpfl: support basic Rx data path Mingxia Liu
2023-01-18  7:57     ` [PATCH v4 10/21] net/cpfl: support basic Tx " Mingxia Liu
2023-01-18  7:57     ` [PATCH v4 11/21] net/cpfl: support write back based on ITR expire Mingxia Liu
2023-01-18  7:57     ` [PATCH v4 12/21] net/cpfl: support RSS Mingxia Liu
2023-01-18  7:57     ` [PATCH v4 13/21] net/cpfl: support Rx offloading Mingxia Liu
2023-01-18  7:57     ` [PATCH v4 14/21] net/cpfl: support Tx offloading Mingxia Liu
2023-01-18  7:57     ` [PATCH v4 15/21] net/cpfl: add AVX512 data path for single queue model Mingxia Liu
2023-01-18  7:57     ` [PATCH v4 16/21] net/cpfl: support timestamp offload Mingxia Liu
2023-01-18  7:57     ` [PATCH v4 17/21] net/cpfl: add AVX512 data path for split queue model Mingxia Liu
2023-01-18  7:57     ` [PATCH v4 18/21] net/cpfl: add hw statistics Mingxia Liu
2023-01-18  7:57     ` [PATCH v4 19/21] net/cpfl: add RSS set/get ops Mingxia Liu
2023-01-18  7:57     ` [PATCH v4 20/21] net/cpfl: support single q scatter RX datapath Mingxia Liu
2023-01-18  7:57     ` [PATCH v4 21/21] net/cpfl: add xstats ops Mingxia Liu
2023-02-09  8:45     ` [PATCH v5 00/21] add support for cpfl PMD in DPDK Mingxia Liu
2023-02-09  8:45       ` [PATCH v5 01/21] net/cpfl: support device initialization Mingxia Liu
2023-02-09  8:45       ` [PATCH v5 02/21] net/cpfl: add Tx queue setup Mingxia Liu
2023-02-09  8:45       ` [PATCH v5 03/21] net/cpfl: add Rx " Mingxia Liu
2023-02-09  8:45       ` [PATCH v5 04/21] net/cpfl: support device start and stop Mingxia Liu
2023-02-09  8:45       ` [PATCH v5 05/21] net/cpfl: support queue start Mingxia Liu
2023-02-09  8:45       ` [PATCH v5 06/21] net/cpfl: support queue stop Mingxia Liu
2023-02-09  8:45       ` [PATCH v5 07/21] net/cpfl: support queue release Mingxia Liu
2023-02-09  8:45       ` [PATCH v5 08/21] net/cpfl: support MTU configuration Mingxia Liu
2023-02-09  8:45       ` [PATCH v5 09/21] net/cpfl: support basic Rx data path Mingxia Liu
2023-02-09  8:45       ` [PATCH v5 10/21] net/cpfl: support basic Tx " Mingxia Liu
2023-02-09  8:45       ` [PATCH v5 11/21] net/cpfl: support write back based on ITR expire Mingxia Liu
2023-02-09  8:45       ` [PATCH v5 12/21] net/cpfl: support RSS Mingxia Liu
2023-02-09  8:45       ` [PATCH v5 13/21] net/cpfl: support Rx offloading Mingxia Liu
2023-02-09  8:45       ` [PATCH v5 14/21] net/cpfl: support Tx offloading Mingxia Liu
2023-02-09  8:45       ` [PATCH v5 15/21] net/cpfl: add AVX512 data path for single queue model Mingxia Liu
2023-02-09  8:45       ` [PATCH v5 16/21] net/cpfl: support timestamp offload Mingxia Liu
2023-02-09  8:45       ` [PATCH v5 17/21] net/cpfl: add AVX512 data path for split queue model Mingxia Liu
2023-02-09  8:45       ` [PATCH v5 18/21] net/cpfl: add HW statistics Mingxia Liu
2023-02-09  8:45       ` [PATCH v5 19/21] net/cpfl: add RSS set/get ops Mingxia Liu
2023-02-09  8:45       ` [PATCH v5 20/21] net/cpfl: support scalar scatter Rx datapath for single queue model Mingxia Liu
2023-02-09  8:45       ` [PATCH v5 21/21] net/cpfl: add xstats ops Mingxia Liu
2023-02-09 16:47       ` [PATCH v5 00/21] add support for cpfl PMD in DPDK Stephen Hemminger
2023-02-13  1:37         ` Liu, Mingxia
2023-02-13  2:19       ` [PATCH v6 " Mingxia Liu
2023-02-13  2:19         ` [PATCH v6 01/21] net/cpfl: support device initialization Mingxia Liu
2023-02-13  2:19         ` [PATCH v6 02/21] net/cpfl: add Tx queue setup Mingxia Liu
2023-02-13  2:19         ` [PATCH v6 03/21] net/cpfl: add Rx " Mingxia Liu
2023-02-13  2:19         ` [PATCH v6 04/21] net/cpfl: support device start and stop Mingxia Liu
2023-02-13  2:19         ` [PATCH v6 05/21] net/cpfl: support queue start Mingxia Liu
2023-02-13  2:19         ` [PATCH v6 06/21] net/cpfl: support queue stop Mingxia Liu
2023-02-13  2:19         ` [PATCH v6 07/21] net/cpfl: support queue release Mingxia Liu
2023-02-13  2:19         ` [PATCH v6 08/21] net/cpfl: support MTU configuration Mingxia Liu
2023-02-13  2:19         ` [PATCH v6 09/21] net/cpfl: support basic Rx data path Mingxia Liu
2023-02-13  2:19         ` [PATCH v6 10/21] net/cpfl: support basic Tx " Mingxia Liu
2023-02-13  2:19         ` [PATCH v6 11/21] net/cpfl: support write back based on ITR expire Mingxia Liu
2023-02-13  2:19         ` [PATCH v6 12/21] net/cpfl: support RSS Mingxia Liu
2023-02-13  2:19         ` [PATCH v6 13/21] net/cpfl: support Rx offloading Mingxia Liu
2023-02-13  2:19         ` [PATCH v6 14/21] net/cpfl: support Tx offloading Mingxia Liu
2023-02-13  2:19         ` [PATCH v6 15/21] net/cpfl: add AVX512 data path for single queue model Mingxia Liu
2023-02-13  2:19         ` [PATCH v6 16/21] net/cpfl: support timestamp offload Mingxia Liu
2023-02-13  2:19         ` [PATCH v6 17/21] net/cpfl: add AVX512 data path for split queue model Mingxia Liu
2023-02-13  2:19         ` [PATCH v6 18/21] net/cpfl: add HW statistics Mingxia Liu
2023-02-13  2:19         ` [PATCH v6 19/21] net/cpfl: add RSS set/get ops Mingxia Liu
2023-02-13  2:19         ` [PATCH v6 20/21] net/cpfl: support scalar scatter Rx datapath for single queue model Mingxia Liu
2023-02-13  2:19         ` [PATCH v6 21/21] net/cpfl: add xstats ops Mingxia Liu
2023-02-15 14:04         ` [PATCH v6 00/21] add support for cpfl PMD in DPDK Ferruh Yigit
2023-02-16  1:16           ` Liu, Mingxia
2023-02-16  0:29         ` [PATCH v7 " Mingxia Liu
2023-02-16  0:29           ` [PATCH v7 01/21] net/cpfl: support device initialization Mingxia Liu
2023-02-27 13:46             ` Ferruh Yigit
2023-02-27 15:45               ` Thomas Monjalon
2023-02-27 23:38                 ` Ferruh Yigit
2023-02-28  2:06                 ` Liu, Mingxia
2023-02-28  9:53                   ` Ferruh Yigit
2023-02-27 21:43             ` Ferruh Yigit
2023-02-28 11:12               ` Liu, Mingxia
2023-02-28 11:34                 ` Ferruh Yigit
2023-02-16  0:29           ` [PATCH v7 02/21] net/cpfl: add Tx queue setup Mingxia Liu
2023-02-27 21:44             ` Ferruh Yigit
2023-02-28  2:40               ` Liu, Mingxia
2023-02-16  0:29           ` [PATCH v7 03/21] net/cpfl: add Rx " Mingxia Liu
2023-02-27 21:46             ` Ferruh Yigit
2023-02-28  3:03               ` Liu, Mingxia
2023-02-28 10:02                 ` Ferruh Yigit
2023-02-16  0:29           ` [PATCH v7 04/21] net/cpfl: support device start and stop Mingxia Liu
2023-02-16  0:29           ` [PATCH v7 05/21] net/cpfl: support queue start Mingxia Liu
2023-02-27 21:47             ` Ferruh Yigit
2023-02-28  3:14               ` Liu, Mingxia
2023-02-28  3:28                 ` Liu, Mingxia
2023-02-16  0:29           ` [PATCH v7 06/21] net/cpfl: support queue stop Mingxia Liu
2023-02-27 21:48             ` Ferruh Yigit
2023-02-16  0:29           ` [PATCH v7 07/21] net/cpfl: support queue release Mingxia Liu
2023-02-16  0:29           ` [PATCH v7 08/21] net/cpfl: support MTU configuration Mingxia Liu
2023-02-16  0:29           ` [PATCH v7 09/21] net/cpfl: support basic Rx data path Mingxia Liu
2023-02-16  0:29           ` [PATCH v7 10/21] net/cpfl: support basic Tx " Mingxia Liu
2023-02-16  0:30           ` [PATCH v7 11/21] net/cpfl: support write back based on ITR expire Mingxia Liu
2023-02-27 21:49             ` Ferruh Yigit
2023-02-28 11:31               ` Liu, Mingxia
2023-02-16  0:30           ` [PATCH v7 12/21] net/cpfl: support RSS Mingxia Liu
2023-02-27 21:50             ` Ferruh Yigit
2023-02-28 11:28               ` Liu, Mingxia
2023-02-28 11:34                 ` Ferruh Yigit
2023-02-16  0:30           ` [PATCH v7 13/21] net/cpfl: support Rx offloading Mingxia Liu
2023-02-27 21:50             ` Ferruh Yigit
2023-02-28  5:48               ` Liu, Mingxia
2023-02-16  0:30           ` [PATCH v7 14/21] net/cpfl: support Tx offloading Mingxia Liu
2023-02-16  0:30           ` [PATCH v7 15/21] net/cpfl: add AVX512 data path for single queue model Mingxia Liu
2023-02-27 21:51             ` Ferruh Yigit
2023-02-28  3:19               ` Liu, Mingxia
2023-02-16  0:30           ` [PATCH v7 16/21] net/cpfl: support timestamp offload Mingxia Liu
2023-02-16  0:30           ` [PATCH v7 17/21] net/cpfl: add AVX512 data path for split queue model Mingxia Liu
2023-02-27 21:52             ` Ferruh Yigit
2023-02-16  0:30           ` [PATCH v7 18/21] net/cpfl: add HW statistics Mingxia Liu
2023-02-27 21:52             ` Ferruh Yigit
2023-02-28  6:46               ` Liu, Mingxia
2023-02-28 10:01                 ` Ferruh Yigit
2023-02-28 11:47                   ` Liu, Mingxia
2023-02-28 12:04                     ` Ferruh Yigit
2023-02-28 12:12                       ` Bruce Richardson
2023-02-28 12:24                         ` Ferruh Yigit
2023-02-28 12:33                           ` Ferruh Yigit
2023-02-28 13:29                             ` Zhang, Qi Z
2023-02-28 13:34                               ` Ferruh Yigit
2023-02-28 14:04                                 ` Zhang, Qi Z
2023-02-28 14:24                                 ` Bruce Richardson
2023-02-28 16:14                                   ` Ferruh Yigit
2023-02-16  0:30           ` [PATCH v7 19/21] net/cpfl: add RSS set/get ops Mingxia Liu
2023-02-16  0:30           ` [PATCH v7 20/21] net/cpfl: support scalar scatter Rx datapath for single queue model Mingxia Liu
2023-02-16  0:30           ` [PATCH v7 21/21] net/cpfl: add xstats ops Mingxia Liu
2023-02-27 21:52             ` Ferruh Yigit
2023-02-28  5:28               ` Liu, Mingxia
2023-02-28  5:54               ` Liu, Mingxia
2023-02-27 21:43           ` [PATCH v7 00/21] add support for cpfl PMD in DPDK Ferruh Yigit
2023-02-28  1:44             ` Zhang, Qi Z
2023-03-02 10:35           ` [PATCH v8 " Mingxia Liu
2023-03-02 10:35             ` [PATCH v8 01/21] net/cpfl: support device initialization Mingxia Liu
2023-03-02  9:31               ` Ferruh Yigit
2023-03-02 11:24                 ` Liu, Mingxia
2023-03-02 11:51                   ` Ferruh Yigit
2023-03-02 12:08                     ` Xing, Beilei
2023-03-02 13:11                     ` Liu, Mingxia
2023-03-02 12:08                 ` Xing, Beilei
2023-03-02 10:35             ` [PATCH v8 02/21] net/cpfl: add Tx queue setup Mingxia Liu
2023-03-02 10:35             ` [PATCH v8 03/21] net/cpfl: add Rx " Mingxia Liu
2023-03-02 10:35             ` [PATCH v8 04/21] net/cpfl: support device start and stop Mingxia Liu
2023-03-02 10:35             ` [PATCH v8 05/21] net/cpfl: support queue start Mingxia Liu
2023-03-02 10:35             ` [PATCH v8 06/21] net/cpfl: support queue stop Mingxia Liu
2023-03-02 10:35             ` [PATCH v8 07/21] net/cpfl: support queue release Mingxia Liu
2023-03-02 10:35             ` [PATCH v8 08/21] net/cpfl: support MTU configuration Mingxia Liu
2023-03-02 10:35             ` [PATCH v8 09/21] net/cpfl: support basic Rx data path Mingxia Liu
2023-03-02 10:35             ` [PATCH v8 10/21] net/cpfl: support basic Tx " Mingxia Liu
2023-03-02 10:35             ` [PATCH v8 11/21] net/cpfl: support write back based on ITR expire Mingxia Liu
2023-03-02 10:35             ` [PATCH v8 12/21] net/cpfl: support RSS Mingxia Liu
2023-03-02 10:35             ` [PATCH v8 13/21] net/cpfl: support Rx offloading Mingxia Liu
2023-03-02 10:35             ` [PATCH v8 14/21] net/cpfl: support Tx offloading Mingxia Liu
2023-03-02 10:35             ` [PATCH v8 15/21] net/cpfl: add AVX512 data path for single queue model Mingxia Liu
2023-03-02 10:35             ` [PATCH v8 16/21] net/cpfl: support timestamp offload Mingxia Liu
2023-03-02 10:35             ` [PATCH v8 17/21] net/cpfl: add AVX512 data path for split queue model Mingxia Liu
2023-03-02 10:35             ` [PATCH v8 18/21] net/cpfl: add HW statistics Mingxia Liu
2023-03-02 10:35             ` [PATCH v8 19/21] net/cpfl: add RSS set/get ops Mingxia Liu
2023-03-02 10:35             ` [PATCH v8 20/21] net/cpfl: support scalar scatter Rx datapath for single queue model Mingxia Liu
2023-03-02 10:35             ` [PATCH v8 21/21] net/cpfl: add xstats ops Mingxia Liu
2023-03-02  9:30               ` Ferruh Yigit
2023-03-02 11:19                 ` Liu, Mingxia
2023-03-02 21:20             ` [PATCH v9 00/21] add support for cpfl PMD in DPDK Mingxia Liu
2023-03-02 15:06               ` Ferruh Yigit
2023-03-02 21:20               ` [PATCH v9 01/21] net/cpfl: support device initialization Mingxia Liu
2023-03-07 14:11                 ` Ferruh Yigit
2023-03-07 15:03                   ` Ferruh Yigit
2023-03-08 17:03                     ` Ferruh Yigit
2023-03-09  0:59                       ` Liu, Mingxia
2023-03-09  1:42                     ` Liu, Mingxia
2023-03-02 21:20               ` [PATCH v9 02/21] net/cpfl: add Tx queue setup Mingxia Liu
2023-03-02 21:20               ` [PATCH v9 03/21] net/cpfl: add Rx " Mingxia Liu
2023-03-02 21:20               ` [PATCH v9 04/21] net/cpfl: support device start and stop Mingxia Liu
2023-03-02 21:20               ` [PATCH v9 05/21] net/cpfl: support queue start Mingxia Liu
2023-03-02 21:20               ` [PATCH v9 06/21] net/cpfl: support queue stop Mingxia Liu
2023-03-02 21:20               ` [PATCH v9 07/21] net/cpfl: support queue release Mingxia Liu
2023-03-02 21:20               ` [PATCH v9 08/21] net/cpfl: support MTU configuration Mingxia Liu
2023-03-02 21:20               ` [PATCH v9 09/21] net/cpfl: support basic Rx data path Mingxia Liu
2023-03-02 21:20               ` [PATCH v9 10/21] net/cpfl: support basic Tx " Mingxia Liu
2023-03-02 21:20               ` [PATCH v9 11/21] net/cpfl: support write back based on ITR expire Mingxia Liu
2023-03-02 21:20               ` [PATCH v9 12/21] net/cpfl: support RSS Mingxia Liu
2023-03-02 21:20               ` [PATCH v9 13/21] net/cpfl: support Rx offloading Mingxia Liu
2023-03-02 21:20               ` [PATCH v9 14/21] net/cpfl: support Tx offloading Mingxia Liu
2023-03-02 21:20               ` [PATCH v9 15/21] net/cpfl: add AVX512 data path for single queue model Mingxia Liu
2023-03-02 21:20               ` [PATCH v9 16/21] net/cpfl: support timestamp offload Mingxia Liu
2023-03-02 21:20               ` [PATCH v9 17/21] net/cpfl: add AVX512 data path for split queue model Mingxia Liu
2023-03-02 21:20               ` [PATCH v9 18/21] net/cpfl: add HW statistics Mingxia Liu
2023-03-02 21:20               ` [PATCH v9 19/21] net/cpfl: add RSS set/get ops Mingxia Liu
2023-03-02 21:20               ` [PATCH v9 20/21] net/cpfl: support scalar scatter Rx datapath for single queue model Mingxia Liu
2023-03-02 21:20               ` [PATCH v9 21/21] net/cpfl: add xstats ops Mingxia Liu
