DPDK patches and discussions
* [PATCH 0/7] add idpf pmd enhancement features
@ 2022-12-16  9:36 Mingxia Liu
  2022-12-16  9:37 ` [PATCH 1/7] common/idpf: add hw statistics Mingxia Liu
                   ` (7 more replies)
  0 siblings, 8 replies; 63+ messages in thread
From: Mingxia Liu @ 2022-12-16  9:36 UTC (permalink / raw)
  To: dev; +Cc: jingjing.wu, beilei.xing, qi.z.zhang, Mingxia Liu

This patchset adds several enhancement features to the idpf PMD,
including the following:
- add HW statistics, supporting stats/xstats ops
- add RSS configure/show ops
- add event handling: link status
- add scattered data path for the single queue model
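
For context only, a minimal application-side sketch of how two of these
features (basic stats and the event-driven link status) are consumed
through the generic ethdev API; the port_id and helper name are
illustrative assumptions, not part of this series:

#include <stdio.h>
#include <inttypes.h>
#include <rte_ethdev.h>

/* Hypothetical helper: read the HW-backed basic stats and the link
 * state that the PMD refreshes from the LINK_CHANGE event. */
static void show_port_status(uint16_t port_id)
{
	struct rte_eth_stats stats;
	struct rte_eth_link link;

	if (rte_eth_stats_get(port_id, &stats) == 0)
		printf("ipackets=%" PRIu64 " opackets=%" PRIu64 "\n",
		       stats.ipackets, stats.opackets);

	rte_eth_link_get_nowait(port_id, &link);
	printf("link %s\n", link.link_status ? "up" : "down");
}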

This patchset is based on the refactored idpf PMD code:
http://patches.dpdk.org/project/dpdk/list/?submitter=410&q=&delegate=&archive=&series=&state=*
http://patches.dpdk.org/project/dpdk/list/?submitter=2083&q=&delegate=&archive=&series=&state=*


Mingxia Liu (7):
  common/idpf: add hw statistics
  common/idpf: add RSS set/get ops
  common/idpf: support single q scatter RX datapath
  common/idpf: add rss_offload hash in singleq rx
  common/idpf: add alarm to support handle vchnl message
  common/idpf: add xstats ops
  common/idpf: update mbuf_alloc_failed multi-thread process

 drivers/common/idpf/idpf_common_device.c      |  17 +
 drivers/common/idpf/idpf_common_device.h      |  11 +-
 drivers/common/idpf/idpf_common_rxtx.c        | 158 ++++-
 drivers/common/idpf/idpf_common_rxtx.h        |   5 +-
 drivers/common/idpf/idpf_common_rxtx_avx512.c |  12 +-
 drivers/common/idpf/idpf_common_virtchnl.c    | 157 ++++-
 drivers/common/idpf/idpf_common_virtchnl.h    |  18 +-
 drivers/common/idpf/version.map               |   9 +
 drivers/net/idpf/idpf_ethdev.c                | 639 +++++++++++++++++-
 drivers/net/idpf/idpf_ethdev.h                |   7 +-
 drivers/net/idpf/idpf_rxtx.c                  |  26 +-
 drivers/net/idpf/idpf_rxtx.h                  |   2 +
 12 files changed, 1029 insertions(+), 32 deletions(-)

-- 
2.25.1



* [PATCH 1/7] common/idpf: add hw statistics
  2022-12-16  9:36 [PATCH 0/7] add idpf pmd enhancement features Mingxia Liu
@ 2022-12-16  9:37 ` Mingxia Liu
  2022-12-16  9:37 ` [PATCH 2/7] common/idpf: add RSS set/get ops Mingxia Liu
                   ` (6 subsequent siblings)
  7 siblings, 0 replies; 63+ messages in thread
From: Mingxia Liu @ 2022-12-16  9:37 UTC (permalink / raw)
  To: dev; +Cc: jingjing.wu, beilei.xing, qi.z.zhang, Mingxia Liu

This patch adds hardware packet/byte statistics.
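
The statistics support rests on an offset/delta scheme: stats_reset
snapshots the current hardware counters into eth_stats_offset, and every
stats_get reports the difference from that snapshot (idpf_update_stats in
the diff below). A standalone sketch of that pattern, with illustrative
names only:

#include <stdint.h>

struct hw_counters {
	uint64_t rx_bytes;
	uint64_t tx_bytes;
};

static struct hw_counters offset;	/* plays the role of eth_stats_offset */

/* "reset": remember the current hardware values as the new baseline */
static void stats_reset_sketch(const struct hw_counters *hw_now)
{
	offset = *hw_now;
}

/* "get": report only the growth since the last reset */
static void stats_get_sketch(const struct hw_counters *hw_now,
			     struct hw_counters *out)
{
	out->rx_bytes = hw_now->rx_bytes - offset.rx_bytes;
	out->tx_bytes = hw_now->tx_bytes - offset.tx_bytes;
}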

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/common/idpf/idpf_common_device.c   | 17 +++++
 drivers/common/idpf/idpf_common_device.h   |  5 +-
 drivers/common/idpf/idpf_common_virtchnl.c | 27 +++++++
 drivers/common/idpf/idpf_common_virtchnl.h |  3 +
 drivers/common/idpf/version.map            |  2 +
 drivers/net/idpf/idpf_ethdev.c             | 87 ++++++++++++++++++++++
 6 files changed, 140 insertions(+), 1 deletion(-)

diff --git a/drivers/common/idpf/idpf_common_device.c b/drivers/common/idpf/idpf_common_device.c
index 3580028dce..49ed778831 100644
--- a/drivers/common/idpf/idpf_common_device.c
+++ b/drivers/common/idpf/idpf_common_device.c
@@ -672,4 +672,21 @@ idpf_create_vport_info_init(struct idpf_vport *vport,
 	return 0;
 }
 
+void
+idpf_update_stats(struct virtchnl2_vport_stats *oes, struct virtchnl2_vport_stats *nes)
+{
+	nes->rx_bytes = nes->rx_bytes - oes->rx_bytes;
+	nes->rx_unicast = nes->rx_unicast - oes->rx_unicast;
+	nes->rx_multicast = nes->rx_multicast - oes->rx_multicast;
+	nes->rx_broadcast = nes->rx_broadcast - oes->rx_broadcast;
+	nes->rx_errors = nes->rx_errors - oes->rx_errors;
+	nes->rx_discards = nes->rx_discards - oes->rx_discards;
+	nes->tx_bytes = nes->tx_bytes - oes->tx_bytes;
+	nes->tx_unicast = nes->tx_unicast - oes->tx_unicast;
+	nes->tx_multicast = nes->tx_multicast - oes->tx_multicast;
+	nes->tx_broadcast = nes->tx_broadcast - oes->tx_broadcast;
+	nes->tx_errors = nes->tx_errors - oes->tx_errors;
+	nes->tx_discards = nes->tx_discards - oes->tx_discards;
+}
+
 RTE_LOG_REGISTER_SUFFIX(idpf_common_logtype, common, NOTICE);
diff --git a/drivers/common/idpf/idpf_common_device.h b/drivers/common/idpf/idpf_common_device.h
index 6c9a65ae3b..5184dcee9f 100644
--- a/drivers/common/idpf/idpf_common_device.h
+++ b/drivers/common/idpf/idpf_common_device.h
@@ -112,6 +112,8 @@ struct idpf_vport {
 	bool tx_vec_allowed;
 	bool rx_use_avx512;
 	bool tx_use_avx512;
+
+	struct virtchnl2_vport_stats eth_stats_offset;
 };
 
 /* Message type read in virtual channel from PF */
@@ -188,5 +190,6 @@ int idpf_config_irq_unmap(struct idpf_vport *vport, uint16_t nb_rx_queues);
 __rte_internal
 int idpf_create_vport_info_init(struct idpf_vport *vport,
 				struct virtchnl2_create_vport *vport_info);
-
+__rte_internal
+void idpf_update_stats(struct virtchnl2_vport_stats *oes, struct virtchnl2_vport_stats *nes);
 #endif /* _IDPF_COMMON_DEVICE_H_ */
diff --git a/drivers/common/idpf/idpf_common_virtchnl.c b/drivers/common/idpf/idpf_common_virtchnl.c
index 324214caa1..80351d15de 100644
--- a/drivers/common/idpf/idpf_common_virtchnl.c
+++ b/drivers/common/idpf/idpf_common_virtchnl.c
@@ -217,6 +217,7 @@ idpf_execute_vc_cmd(struct idpf_adapter *adapter, struct idpf_cmd_info *args)
 	case VIRTCHNL2_OP_UNMAP_QUEUE_VECTOR:
 	case VIRTCHNL2_OP_ALLOC_VECTORS:
 	case VIRTCHNL2_OP_DEALLOC_VECTORS:
+	case VIRTCHNL2_OP_GET_STATS:
 		/* for init virtchnl ops, need to poll the response */
 		err = idpf_read_one_msg(adapter, args->ops, args->out_size, args->out_buffer);
 		clear_cmd(adapter);
@@ -806,6 +807,32 @@ idpf_vc_query_ptype_info(struct idpf_adapter *adapter)
 	return err;
 }
 
+int
+idpf_query_stats(struct idpf_vport *vport,
+		struct virtchnl2_vport_stats **pstats)
+{
+	struct idpf_adapter *adapter = vport->adapter;
+	struct virtchnl2_vport_stats vport_stats;
+	struct idpf_cmd_info args;
+	int err;
+
+	vport_stats.vport_id = vport->vport_id;
+	args.ops = VIRTCHNL2_OP_GET_STATS;
+	args.in_args = (u8 *)&vport_stats;
+	args.in_args_size = sizeof(vport_stats);
+	args.out_buffer = adapter->mbx_resp;
+	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
+
+	err = idpf_execute_vc_cmd(adapter, &args);
+	if (err) {
+		DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_GET_STATS");
+		*pstats = NULL;
+		return err;
+	}
+	*pstats = (struct virtchnl2_vport_stats *)args.out_buffer;
+	return 0;
+}
+
 #define IDPF_RX_BUF_STRIDE		64
 int
 idpf_vc_config_rxq(struct idpf_vport *vport, struct idpf_rx_queue *rxq)
diff --git a/drivers/common/idpf/idpf_common_virtchnl.h b/drivers/common/idpf/idpf_common_virtchnl.h
index d16b6b66f4..60347fe571 100644
--- a/drivers/common/idpf/idpf_common_virtchnl.h
+++ b/drivers/common/idpf/idpf_common_virtchnl.h
@@ -38,4 +38,7 @@ __rte_internal
 int idpf_vc_config_rxq(struct idpf_vport *vport, struct idpf_rx_queue *rxq);
 __rte_internal
 int idpf_vc_config_txq(struct idpf_vport *vport, struct idpf_tx_queue *txq);
+__rte_internal
+int idpf_query_stats(struct idpf_vport *vport,
+		     struct virtchnl2_vport_stats **pstats);
 #endif /* _IDPF_COMMON_VIRTCHNL_H_ */
diff --git a/drivers/common/idpf/version.map b/drivers/common/idpf/version.map
index 7018a1d695..6a1dc13302 100644
--- a/drivers/common/idpf/version.map
+++ b/drivers/common/idpf/version.map
@@ -50,6 +50,8 @@ INTERNAL {
 	idpf_splitq_recv_pkts_avx512;
 	idpf_singleq_xmit_pkts_avx512;
 	idpf_splitq_xmit_pkts_avx512;
+	idpf_update_stats;
+	idpf_query_stats;
 
 	local: *;
 };
diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index a70ae65558..1b1b0f30fd 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -131,6 +131,86 @@ idpf_dev_supported_ptypes_get(struct rte_eth_dev *dev __rte_unused)
 	return ptypes;
 }
 
+static uint64_t
+idpf_get_mbuf_alloc_failed_stats(struct rte_eth_dev *dev)
+{
+	uint64_t mbuf_alloc_failed = 0;
+	struct idpf_rx_queue *rxq;
+	int i = 0;
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		rxq = dev->data->rx_queues[i];
+		mbuf_alloc_failed += rxq->rx_stats.mbuf_alloc_failed;
+	}
+
+	return mbuf_alloc_failed;
+}
+
+static int
+idpf_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
+{
+	struct idpf_vport *vport =
+		(struct idpf_vport *)dev->data->dev_private;
+	struct virtchnl2_vport_stats *pstats = NULL;
+	int ret;
+
+	ret = idpf_query_stats(vport, &pstats);
+	if (ret == 0) {
+		uint8_t crc_stats_len = (dev->data->dev_conf.rxmode.offloads &
+					 RTE_ETH_RX_OFFLOAD_KEEP_CRC) ? 0 :
+					 RTE_ETHER_CRC_LEN;
+
+		idpf_update_stats(&vport->eth_stats_offset, pstats);
+		stats->ipackets = pstats->rx_unicast + pstats->rx_multicast +
+				pstats->rx_broadcast - pstats->rx_discards;
+		stats->opackets = pstats->tx_broadcast + pstats->tx_multicast +
+						pstats->tx_unicast;
+		stats->imissed = pstats->rx_discards;
+		stats->oerrors = pstats->tx_errors + pstats->tx_discards;
+		stats->ibytes = pstats->rx_bytes;
+		stats->ibytes -= stats->ipackets * crc_stats_len;
+		stats->obytes = pstats->tx_bytes;
+
+		dev->data->rx_mbuf_alloc_failed = idpf_get_mbuf_alloc_failed_stats(dev);
+		stats->rx_nombuf = dev->data->rx_mbuf_alloc_failed;
+	} else {
+		PMD_DRV_LOG(ERR, "Get statistics failed");
+	}
+	return ret;
+}
+
+static void
+idpf_reset_mbuf_alloc_failed_stats(struct rte_eth_dev *dev)
+{
+	struct idpf_rx_queue *rxq;
+	int i;
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		rxq = dev->data->rx_queues[i];
+		rxq->rx_stats.mbuf_alloc_failed = 0;
+	}
+}
+
+static int
+idpf_dev_stats_reset(struct rte_eth_dev *dev)
+{
+	struct idpf_vport *vport =
+		(struct idpf_vport *)dev->data->dev_private;
+	struct virtchnl2_vport_stats *pstats = NULL;
+	int ret;
+
+	ret = idpf_query_stats(vport, &pstats);
+	if (ret != 0)
+		return ret;
+
+	/* set stats offset base on current values */
+	vport->eth_stats_offset = *pstats;
+
+	idpf_reset_mbuf_alloc_failed_stats(dev);
+
+	return 0;
+}
+
 static int
 idpf_init_rss(struct idpf_vport *vport)
 {
@@ -324,6 +404,11 @@ idpf_dev_start(struct rte_eth_dev *dev)
 		goto err_vport;
 	}
 
+	if (idpf_dev_stats_reset(dev)) {
+		PMD_DRV_LOG(ERR, "Failed to reset stats");
+		goto err_vport;
+	}
+
 	return 0;
 
 err_vport:
@@ -597,6 +682,8 @@ static const struct eth_dev_ops idpf_eth_dev_ops = {
 	.tx_queue_release		= idpf_dev_tx_queue_release,
 	.mtu_set			= idpf_dev_mtu_set,
 	.dev_supported_ptypes_get	= idpf_dev_supported_ptypes_get,
+	.stats_get			= idpf_dev_stats_get,
+	.stats_reset			= idpf_dev_stats_reset,
 };
 
 static uint16_t
-- 
2.25.1



* [PATCH 2/7] common/idpf: add RSS set/get ops
  2022-12-16  9:36 [PATCH 0/7] add idpf pmd enhancement features Mingxia Liu
  2022-12-16  9:37 ` [PATCH 1/7] common/idpf: add hw statistics Mingxia Liu
@ 2022-12-16  9:37 ` Mingxia Liu
  2022-12-16  9:37 ` [PATCH 3/7] common/idpf: support single q scatter RX datapath Mingxia Liu
                   ` (5 subsequent siblings)
  7 siblings, 0 replies; 63+ messages in thread
From: Mingxia Liu @ 2022-12-16  9:37 UTC (permalink / raw)
  To: dev; +Cc: jingjing.wu, beilei.xing, qi.z.zhang, Mingxia Liu

Add support for these device ops:
- rss_reta_update
- rss_reta_query
- rss_hash_update
- rss_hash_conf_get
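
As a usage illustration only (not part of the patch), an application could
spread the redirection table across its Rx queues with the generic ethdev
call that now reaches these new ops; the helper name, port_id and nb_rxq
are assumptions, and the sketch assumes a RETA of at most 512 entries:

#include <string.h>
#include <rte_ethdev.h>

static int spread_reta(uint16_t port_id, uint16_t nb_rxq)
{
	struct rte_eth_dev_info info;
	struct rte_eth_rss_reta_entry64 reta_conf[8];
	uint16_t i, idx, shift;
	int ret;

	ret = rte_eth_dev_info_get(port_id, &info);
	if (ret != 0)
		return ret;
	if (info.reta_size > 8 * RTE_ETH_RETA_GROUP_SIZE)
		return -EINVAL;	/* sketch only covers up to 512 entries */

	memset(reta_conf, 0, sizeof(reta_conf));
	for (i = 0; i < info.reta_size; i++) {
		idx = i / RTE_ETH_RETA_GROUP_SIZE;
		shift = i % RTE_ETH_RETA_GROUP_SIZE;
		reta_conf[idx].mask |= 1ULL << shift;
		reta_conf[idx].reta[shift] = i % nb_rxq;	/* round-robin */
	}

	return rte_eth_dev_rss_reta_update(port_id, reta_conf, info.reta_size);
}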

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/common/idpf/idpf_common_device.h   |   1 +
 drivers/common/idpf/idpf_common_virtchnl.c | 119 ++++++++
 drivers/common/idpf/idpf_common_virtchnl.h |  15 +-
 drivers/common/idpf/version.map            |   6 +
 drivers/net/idpf/idpf_ethdev.c             | 303 +++++++++++++++++++++
 drivers/net/idpf/idpf_ethdev.h             |   5 +-
 6 files changed, 445 insertions(+), 4 deletions(-)

diff --git a/drivers/common/idpf/idpf_common_device.h b/drivers/common/idpf/idpf_common_device.h
index 5184dcee9f..d7d4cd5363 100644
--- a/drivers/common/idpf/idpf_common_device.h
+++ b/drivers/common/idpf/idpf_common_device.h
@@ -95,6 +95,7 @@ struct idpf_vport {
 	uint32_t *rss_lut;
 	uint8_t *rss_key;
 	uint64_t rss_hf;
+	uint64_t last_general_rss_hf;
 
 	/* MSIX info*/
 	struct virtchnl2_queue_vector *qv_map; /* queue vector mapping */
diff --git a/drivers/common/idpf/idpf_common_virtchnl.c b/drivers/common/idpf/idpf_common_virtchnl.c
index 80351d15de..ae5a983836 100644
--- a/drivers/common/idpf/idpf_common_virtchnl.c
+++ b/drivers/common/idpf/idpf_common_virtchnl.c
@@ -218,6 +218,9 @@ idpf_execute_vc_cmd(struct idpf_adapter *adapter, struct idpf_cmd_info *args)
 	case VIRTCHNL2_OP_ALLOC_VECTORS:
 	case VIRTCHNL2_OP_DEALLOC_VECTORS:
 	case VIRTCHNL2_OP_GET_STATS:
+	case VIRTCHNL2_OP_GET_RSS_KEY:
+	case VIRTCHNL2_OP_GET_RSS_HASH:
+	case VIRTCHNL2_OP_GET_RSS_LUT:
 		/* for init virtchnl ops, need to poll the response */
 		err = idpf_read_one_msg(adapter, args->ops, args->out_size, args->out_buffer);
 		clear_cmd(adapter);
@@ -448,6 +451,48 @@ idpf_vc_set_rss_key(struct idpf_vport *vport)
 	return err;
 }
 
+int idpf_vc_get_rss_key(struct idpf_vport *vport)
+{
+	struct idpf_adapter *adapter = vport->adapter;
+	struct virtchnl2_rss_key *rss_key_ret;
+	struct virtchnl2_rss_key rss_key;
+	struct idpf_cmd_info args;
+	int err;
+
+	memset(&rss_key, 0, sizeof(rss_key));
+	rss_key.vport_id = vport->vport_id;
+
+	memset(&args, 0, sizeof(args));
+	args.ops = VIRTCHNL2_OP_GET_RSS_KEY;
+	args.in_args = (uint8_t *)&rss_key;
+	args.in_args_size = sizeof(rss_key);
+	args.out_buffer = adapter->mbx_resp;
+	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
+
+	err = idpf_execute_vc_cmd(adapter, &args);
+
+	if (!err) {
+		rss_key_ret = (struct virtchnl2_rss_key *)args.out_buffer;
+		if (rss_key_ret->key_len != vport->rss_key_size) {
+			rte_free(vport->rss_key);
+			vport->rss_key = NULL;
+			vport->rss_key_size = RTE_MIN(IDPF_RSS_KEY_LEN,
+						      rss_key_ret->key_len);
+			vport->rss_key = rte_zmalloc("rss_key", vport->rss_key_size, 0);
+			if (!vport->rss_key) {
+				vport->rss_key_size = 0;
+				DRV_LOG(ERR, "Failed to allocate RSS key");
+				return -ENOMEM;
+			}
+		}
+		rte_memcpy(vport->rss_key, rss_key_ret->key, vport->rss_key_size);
+	} else {
+		DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_GET_RSS_KEY");
+	}
+
+	return err;
+}
+
 int
 idpf_vc_set_rss_lut(struct idpf_vport *vport)
 {
@@ -482,6 +527,48 @@ idpf_vc_set_rss_lut(struct idpf_vport *vport)
 	return err;
 }
 
+int
+idpf_vc_get_rss_lut(struct idpf_vport *vport)
+{
+	struct idpf_adapter *adapter = vport->adapter;
+	struct virtchnl2_rss_lut *rss_lut_ret;
+	struct virtchnl2_rss_lut rss_lut;
+	struct idpf_cmd_info args;
+	int err;
+
+	memset(&rss_lut, 0, sizeof(rss_lut));
+	rss_lut.vport_id = vport->vport_id;
+
+	memset(&args, 0, sizeof(args));
+	args.ops = VIRTCHNL2_OP_GET_RSS_LUT;
+	args.in_args = (uint8_t *)&rss_lut;
+	args.in_args_size = sizeof(rss_lut);
+	args.out_buffer = adapter->mbx_resp;
+	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
+
+	err = idpf_execute_vc_cmd(adapter, &args);
+
+	if (!err) {
+		rss_lut_ret = (struct virtchnl2_rss_lut *)args.out_buffer;
+		if (rss_lut_ret->lut_entries != vport->rss_lut_size) {
+			rte_free(vport->rss_lut);
+			vport->rss_lut = NULL;
+			vport->rss_lut = rte_zmalloc("rss_lut",
+				     sizeof(uint32_t) * rss_lut_ret->lut_entries, 0);
+			if (vport->rss_lut == NULL) {
+				DRV_LOG(ERR, "Failed to allocate RSS lut");
+				return -ENOMEM;
+			}
+		}
+		rte_memcpy(vport->rss_lut, rss_lut_ret->lut, rss_lut_ret->lut_entries);
+		vport->rss_lut_size = rss_lut_ret->lut_entries;
+	} else {
+		DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_GET_RSS_LUT");
+	}
+
+	return err;
+}
+
 int
 idpf_vc_set_rss_hash(struct idpf_vport *vport)
 {
@@ -508,6 +595,38 @@ idpf_vc_set_rss_hash(struct idpf_vport *vport)
 	return err;
 }
 
+int
+idpf_vc_get_rss_hash(struct idpf_vport *vport)
+{
+	struct idpf_adapter *adapter = vport->adapter;
+	struct virtchnl2_rss_hash *rss_hash_ret;
+	struct virtchnl2_rss_hash rss_hash;
+	struct idpf_cmd_info args;
+	int err;
+
+	memset(&rss_hash, 0, sizeof(rss_hash));
+	rss_hash.ptype_groups = vport->rss_hf;
+	rss_hash.vport_id = vport->vport_id;
+
+	memset(&args, 0, sizeof(args));
+	args.ops = VIRTCHNL2_OP_GET_RSS_HASH;
+	args.in_args = (uint8_t *)&rss_hash;
+	args.in_args_size = sizeof(rss_hash);
+	args.out_buffer = adapter->mbx_resp;
+	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
+
+	err = idpf_execute_vc_cmd(adapter, &args);
+
+	if (!err) {
+		rss_hash_ret = (struct virtchnl2_rss_hash *)args.out_buffer;
+		vport->rss_hf = rss_hash_ret->ptype_groups;
+	} else {
+		DRV_LOG(ERR, "Failed to execute command of OP_GET_RSS_HASH");
+	}
+
+	return err;
+}
+
 int
 idpf_vc_config_irq_map_unmap(struct idpf_vport *vport, uint16_t nb_rxq, bool map)
 {
diff --git a/drivers/common/idpf/idpf_common_virtchnl.h b/drivers/common/idpf/idpf_common_virtchnl.h
index 60347fe571..b5d245a64f 100644
--- a/drivers/common/idpf/idpf_common_virtchnl.h
+++ b/drivers/common/idpf/idpf_common_virtchnl.h
@@ -13,9 +13,6 @@ int idpf_vc_get_caps(struct idpf_adapter *adapter);
 int idpf_vc_create_vport(struct idpf_vport *vport,
 			 struct virtchnl2_create_vport *vport_info);
 int idpf_vc_destroy_vport(struct idpf_vport *vport);
-int idpf_vc_set_rss_key(struct idpf_vport *vport);
-int idpf_vc_set_rss_lut(struct idpf_vport *vport);
-int idpf_vc_set_rss_hash(struct idpf_vport *vport);
 int idpf_vc_config_irq_map_unmap(struct idpf_vport *vport,
 				 uint16_t nb_rxq, bool map);
 int idpf_vc_query_ptype_info(struct idpf_adapter *adapter);
@@ -41,4 +38,16 @@ int idpf_vc_config_txq(struct idpf_vport *vport, struct idpf_tx_queue *txq);
 __rte_internal
 int idpf_query_stats(struct idpf_vport *vport,
 		     struct virtchnl2_vport_stats **pstats);
+__rte_internal
+int idpf_vc_set_rss_key(struct idpf_vport *vport);
+__rte_internal
+int idpf_vc_get_rss_key(struct idpf_vport *vport);
+__rte_internal
+int idpf_vc_set_rss_lut(struct idpf_vport *vport);
+__rte_internal
+int idpf_vc_get_rss_lut(struct idpf_vport *vport);
+__rte_internal
+int idpf_vc_set_rss_hash(struct idpf_vport *vport);
+__rte_internal
+int idpf_vc_get_rss_hash(struct idpf_vport *vport);
 #endif /* _IDPF_COMMON_VIRTCHNL_H_ */
diff --git a/drivers/common/idpf/version.map b/drivers/common/idpf/version.map
index 6a1dc13302..cba08c6b4a 100644
--- a/drivers/common/idpf/version.map
+++ b/drivers/common/idpf/version.map
@@ -52,6 +52,12 @@ INTERNAL {
 	idpf_splitq_xmit_pkts_avx512;
 	idpf_update_stats;
 	idpf_query_stats;
+	idpf_vc_set_rss_key;
+	idpf_vc_get_rss_key;
+	idpf_vc_set_rss_lut;
+	idpf_vc_get_rss_lut;
+	idpf_vc_set_rss_hash;
+	idpf_vc_get_rss_hash;
 
 	local: *;
 };
diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index 1b1b0f30fd..0d370ace4a 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -29,6 +29,56 @@ static const char * const idpf_valid_args[] = {
 	NULL
 };
 
+static const uint64_t idpf_map_hena_rss[] = {
+	[IDPF_HASH_NONF_UNICAST_IPV4_UDP] =
+			RTE_ETH_RSS_NONFRAG_IPV4_UDP,
+	[IDPF_HASH_NONF_MULTICAST_IPV4_UDP] =
+			RTE_ETH_RSS_NONFRAG_IPV4_UDP,
+	[IDPF_HASH_NONF_IPV4_UDP] =
+			RTE_ETH_RSS_NONFRAG_IPV4_UDP,
+	[IDPF_HASH_NONF_IPV4_TCP_SYN_NO_ACK] =
+			RTE_ETH_RSS_NONFRAG_IPV4_TCP,
+	[IDPF_HASH_NONF_IPV4_TCP] =
+			RTE_ETH_RSS_NONFRAG_IPV4_TCP,
+	[IDPF_HASH_NONF_IPV4_SCTP] =
+			RTE_ETH_RSS_NONFRAG_IPV4_SCTP,
+	[IDPF_HASH_NONF_IPV4_OTHER] =
+			RTE_ETH_RSS_NONFRAG_IPV4_OTHER,
+	[IDPF_HASH_FRAG_IPV4] = RTE_ETH_RSS_FRAG_IPV4,
+
+	/* IPv6 */
+	[IDPF_HASH_NONF_UNICAST_IPV6_UDP] =
+			RTE_ETH_RSS_NONFRAG_IPV6_UDP,
+	[IDPF_HASH_NONF_MULTICAST_IPV6_UDP] =
+			RTE_ETH_RSS_NONFRAG_IPV6_UDP,
+	[IDPF_HASH_NONF_IPV6_UDP] =
+			RTE_ETH_RSS_NONFRAG_IPV6_UDP,
+	[IDPF_HASH_NONF_IPV6_TCP_SYN_NO_ACK] =
+			RTE_ETH_RSS_NONFRAG_IPV6_TCP,
+	[IDPF_HASH_NONF_IPV6_TCP] =
+			RTE_ETH_RSS_NONFRAG_IPV6_TCP,
+	[IDPF_HASH_NONF_IPV6_SCTP] =
+			RTE_ETH_RSS_NONFRAG_IPV6_SCTP,
+	[IDPF_HASH_NONF_IPV6_OTHER] =
+			RTE_ETH_RSS_NONFRAG_IPV6_OTHER,
+	[IDPF_HASH_FRAG_IPV6] = RTE_ETH_RSS_FRAG_IPV6,
+
+	/* L2 Payload */
+	[IDPF_HASH_L2_PAYLOAD] = RTE_ETH_RSS_L2_PAYLOAD
+};
+
+static const uint64_t idpf_ipv4_rss = RTE_ETH_RSS_NONFRAG_IPV4_UDP |
+			  RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+			  RTE_ETH_RSS_NONFRAG_IPV4_SCTP |
+			  RTE_ETH_RSS_NONFRAG_IPV4_OTHER |
+			  RTE_ETH_RSS_FRAG_IPV4;
+
+static const uint64_t idpf_ipv6_rss = RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+			  RTE_ETH_RSS_NONFRAG_IPV6_TCP |
+			  RTE_ETH_RSS_NONFRAG_IPV6_SCTP |
+			  RTE_ETH_RSS_NONFRAG_IPV6_OTHER |
+			  RTE_ETH_RSS_FRAG_IPV6;
+
 static int
 idpf_dev_link_update(struct rte_eth_dev *dev,
 		     __rte_unused int wait_to_complete)
@@ -59,6 +109,9 @@ idpf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->max_mtu = dev_info->max_rx_pktlen - IDPF_ETH_OVERHEAD;
 	dev_info->min_mtu = RTE_ETHER_MIN_MTU;
 
+	dev_info->hash_key_size = vport->rss_key_size;
+	dev_info->reta_size = vport->rss_lut_size;
+
 	dev_info->flow_type_rss_offloads = IDPF_RSS_OFFLOAD_ALL;
 
 	dev_info->rx_offload_capa =
@@ -211,6 +264,54 @@ idpf_dev_stats_reset(struct rte_eth_dev *dev)
 	return 0;
 }
 
+static int idpf_config_rss_hf(struct idpf_vport *vport, uint64_t rss_hf)
+{
+	uint64_t hena = 0, valid_rss_hf = 0;
+	int ret = 0;
+	uint16_t i;
+
+	/**
+	 * RTE_ETH_RSS_IPV4 and RTE_ETH_RSS_IPV6 can be considered as 2
+	 * generalizations of all other IPv4 and IPv6 RSS types.
+	 */
+	if (rss_hf & RTE_ETH_RSS_IPV4)
+		rss_hf |= idpf_ipv4_rss;
+
+	if (rss_hf & RTE_ETH_RSS_IPV6)
+		rss_hf |= idpf_ipv6_rss;
+
+	for (i = 0; i < RTE_DIM(idpf_map_hena_rss); i++) {
+		uint64_t bit = BIT_ULL(i);
+
+		if (idpf_map_hena_rss[i] & rss_hf) {
+			valid_rss_hf |= idpf_map_hena_rss[i];
+			hena |= bit;
+		}
+	}
+
+	vport->rss_hf = hena;
+
+	ret = idpf_vc_set_rss_hash(vport);
+	if (ret != 0) {
+		PMD_DRV_LOG(WARNING,
+			    "fail to set RSS offload types, ret: %d", ret);
+		return ret;
+	}
+
+	if (valid_rss_hf & idpf_ipv4_rss)
+		valid_rss_hf |= rss_hf & RTE_ETH_RSS_IPV4;
+
+	if (valid_rss_hf & idpf_ipv6_rss)
+		valid_rss_hf |= rss_hf & RTE_ETH_RSS_IPV6;
+
+	if (rss_hf & ~valid_rss_hf)
+		PMD_DRV_LOG(WARNING, "Unsupported rss_hf 0x%" PRIx64,
+			    rss_hf & ~valid_rss_hf);
+	vport->last_general_rss_hf = valid_rss_hf;
+
+	return ret;
+}
+
 static int
 idpf_init_rss(struct idpf_vport *vport)
 {
@@ -247,6 +348,204 @@ idpf_init_rss(struct idpf_vport *vport)
 	return ret;
 }
 
+static int
+idpf_rss_reta_update(struct rte_eth_dev *dev,
+		     struct rte_eth_rss_reta_entry64 *reta_conf,
+		     uint16_t reta_size)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_adapter *adapter = vport->adapter;
+	uint16_t idx, shift;
+	uint32_t *lut;
+	int ret = 0;
+	uint16_t i;
+
+	if (adapter->caps.rss_caps == 0 || dev->data->nb_rx_queues == 0) {
+		PMD_DRV_LOG(DEBUG, "RSS is not supported");
+		return -ENOTSUP;
+	}
+
+	if (reta_size != vport->rss_lut_size) {
+		PMD_DRV_LOG(ERR, "The size of hash lookup table configured "
+				 "(%d) doesn't match the number of hardware can "
+				 "support (%d)",
+			    reta_size, vport->rss_lut_size);
+		return -EINVAL;
+	}
+
+	/* It MUST use the current LUT size to get the RSS lookup table,
+	 * otherwise if will fail with -100 error code.
+	 */
+	lut = rte_zmalloc(NULL, reta_size * sizeof(uint32_t), 0);
+	if (!lut) {
+		PMD_DRV_LOG(ERR, "No memory can be allocated");
+		return -ENOMEM;
+	}
+	/* store the old lut table temporarily */
+	rte_memcpy(lut, vport->rss_lut, reta_size);
+
+	for (i = 0; i < reta_size; i++) {
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
+		if (reta_conf[idx].mask & (1ULL << shift))
+			lut[i] = reta_conf[idx].reta[shift];
+	}
+
+	rte_memcpy(vport->rss_lut, lut, reta_size);
+	/* send virtchnl ops to configure RSS */
+	ret = idpf_vc_set_rss_lut(vport);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to configure RSS lut");
+		goto out;
+	}
+out:
+	rte_free(lut);
+
+	return ret;
+}
+
+static int
+idpf_rss_reta_query(struct rte_eth_dev *dev,
+		    struct rte_eth_rss_reta_entry64 *reta_conf,
+		    uint16_t reta_size)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_adapter *adapter = vport->adapter;
+	uint16_t idx, shift;
+	int ret = 0;
+	uint16_t i;
+
+	if (adapter->caps.rss_caps == 0 || dev->data->nb_rx_queues == 0) {
+		PMD_DRV_LOG(DEBUG, "RSS is not supported");
+		return -ENOTSUP;
+	}
+
+	if (reta_size != vport->rss_lut_size) {
+		PMD_DRV_LOG(ERR, "The size of hash lookup table configured "
+			"(%d) doesn't match the number of hardware can "
+			"support (%d)", reta_size, vport->rss_lut_size);
+		return -EINVAL;
+	}
+
+	ret = idpf_vc_get_rss_lut(vport);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "Failed to get RSS LUT");
+		return ret;
+	}
+
+	for (i = 0; i < reta_size; i++) {
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
+		if (reta_conf[idx].mask & (1ULL << shift))
+			reta_conf[idx].reta[shift] = vport->rss_lut[i];
+	}
+
+	return 0;
+}
+
+static int
+idpf_rss_hash_update(struct rte_eth_dev *dev,
+		     struct rte_eth_rss_conf *rss_conf)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_adapter *adapter = vport->adapter;
+	int ret = 0;
+
+	if (adapter->caps.rss_caps == 0 || dev->data->nb_rx_queues == 0) {
+		PMD_DRV_LOG(DEBUG, "RSS is not supported");
+		return -ENOTSUP;
+	}
+
+	if (!rss_conf->rss_key || rss_conf->rss_key_len == 0) {
+		PMD_DRV_LOG(DEBUG, "No key to be configured");
+		goto skip_rss_key;
+	} else if (rss_conf->rss_key_len != vport->rss_key_size) {
+		PMD_DRV_LOG(ERR, "The size of hash key configured "
+				 "(%d) doesn't match the size of hardware can "
+				 "support (%d)",
+			    rss_conf->rss_key_len,
+			    vport->rss_key_size);
+		return -EINVAL;
+	}
+
+	rte_memcpy(vport->rss_key, rss_conf->rss_key,
+		   vport->rss_key_size);
+	ret = idpf_vc_set_rss_key(vport);
+	if (ret != 0) {
+		PMD_INIT_LOG(ERR, "Failed to configure RSS key");
+		return ret;
+	}
+
+skip_rss_key:
+	ret = idpf_config_rss_hf(vport, rss_conf->rss_hf);
+	if (ret != 0) {
+		PMD_INIT_LOG(ERR, "Failed to configure RSS hash");
+		return ret;
+	}
+
+	return 0;
+}
+
+static uint64_t
+idpf_map_general_rss_hf(uint64_t config_rss_hf, uint64_t last_general_rss_hf)
+{
+	uint64_t valid_rss_hf = 0;
+	uint16_t i;
+
+	for (i = 0; i < RTE_DIM(idpf_map_hena_rss); i++) {
+		uint64_t bit = BIT_ULL(i);
+
+		if (bit & config_rss_hf)
+			valid_rss_hf |= idpf_map_hena_rss[i];
+	}
+
+	if (valid_rss_hf & idpf_ipv4_rss)
+		valid_rss_hf |= last_general_rss_hf & RTE_ETH_RSS_IPV4;
+
+	if (valid_rss_hf & idpf_ipv6_rss)
+		valid_rss_hf |= last_general_rss_hf & RTE_ETH_RSS_IPV6;
+
+	return valid_rss_hf;
+}
+
+static int
+idpf_rss_hash_conf_get(struct rte_eth_dev *dev,
+		       struct rte_eth_rss_conf *rss_conf)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_adapter *adapter = vport->adapter;
+	int ret = 0;
+
+	if (adapter->caps.rss_caps == 0 || dev->data->nb_rx_queues == 0) {
+		PMD_DRV_LOG(DEBUG, "RSS is not supported");
+		return -ENOTSUP;
+	}
+
+	ret = idpf_vc_get_rss_hash(vport);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "Failed to get RSS hf");
+		return ret;
+	}
+
+	rss_conf->rss_hf = idpf_map_general_rss_hf(vport->rss_hf, vport->last_general_rss_hf);
+
+	if (!rss_conf->rss_key)
+		return 0;
+
+	ret = idpf_vc_get_rss_key(vport);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "Failed to get RSS key");
+		return ret;
+	}
+
+	if (rss_conf->rss_key_len > vport->rss_key_size)
+		rss_conf->rss_key_len = vport->rss_key_size;
+
+	rte_memcpy(rss_conf->rss_key, vport->rss_key, rss_conf->rss_key_len);
+
+	return 0;
+}
+
 static int
 idpf_dev_configure(struct rte_eth_dev *dev)
 {
@@ -684,6 +983,10 @@ static const struct eth_dev_ops idpf_eth_dev_ops = {
 	.dev_supported_ptypes_get	= idpf_dev_supported_ptypes_get,
 	.stats_get			= idpf_dev_stats_get,
 	.stats_reset			= idpf_dev_stats_reset,
+	.reta_update			= idpf_rss_reta_update,
+	.reta_query			= idpf_rss_reta_query,
+	.rss_hash_update		= idpf_rss_hash_update,
+	.rss_hash_conf_get		= idpf_rss_hash_conf_get,
 };
 
 static uint16_t
diff --git a/drivers/net/idpf/idpf_ethdev.h b/drivers/net/idpf/idpf_ethdev.h
index 133589cf98..f3e5d4cbd4 100644
--- a/drivers/net/idpf/idpf_ethdev.h
+++ b/drivers/net/idpf/idpf_ethdev.h
@@ -47,7 +47,10 @@
 		RTE_ETH_RSS_NONFRAG_IPV6_TCP    |	\
 		RTE_ETH_RSS_NONFRAG_IPV6_UDP    |	\
 		RTE_ETH_RSS_NONFRAG_IPV6_SCTP   |	\
-		RTE_ETH_RSS_NONFRAG_IPV6_OTHER)
+		RTE_ETH_RSS_NONFRAG_IPV6_OTHER  |	\
+		RTE_ETH_RSS_L2_PAYLOAD)
+
+#define IDPF_RSS_KEY_LEN 52
 
 #define IDPF_ADAPTER_NAME_LEN	(PCI_PRI_STR_SIZE + 1)
 
-- 
2.25.1



* [PATCH 3/7] common/idpf: support single q scatter RX datapath
  2022-12-16  9:36 [PATCH 0/7] add idpf pmd enhancement features Mingxia Liu
  2022-12-16  9:37 ` [PATCH 1/7] common/idpf: add hw statistics Mingxia Liu
  2022-12-16  9:37 ` [PATCH 2/7] common/idpf: add RSS set/get ops Mingxia Liu
@ 2022-12-16  9:37 ` Mingxia Liu
  2022-12-16  9:37 ` [PATCH 4/7] common/idpf: add rss_offload hash in singleq rx Mingxia Liu
                   ` (4 subsequent siblings)
  7 siblings, 0 replies; 63+ messages in thread
From: Mingxia Liu @ 2022-12-16  9:37 UTC (permalink / raw)
  To: dev; +Cc: jingjing.wu, beilei.xing, qi.z.zhang, Mingxia Liu, Wenjun Wu

This patch adds a scattered Rx receive function for the single queue model.
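
The heart of the scattered path is the per-descriptor mbuf chaining that
runs until the EOF bit is seen. A condensed sketch of that bookkeeping
(an illustrative helper, not the actual driver code in the diff below):

#include <rte_mbuf.h>

/* Link one received segment into the packet under construction.
 * The first segment carries pkt_len/nb_segs; later ones hang off ->next. */
static void chain_rx_segment(struct rte_mbuf **first, struct rte_mbuf **last,
			     struct rte_mbuf *seg)
{
	if (*first == NULL) {
		*first = seg;
		seg->nb_segs = 1;
		seg->pkt_len = seg->data_len;
	} else {
		(*first)->pkt_len += seg->data_len;
		(*first)->nb_segs++;
		(*last)->next = seg;
	}
	*last = seg;
}
/* Once the EOF flag is set, (*last)->next is cleared and *first is handed
 * to the application; otherwise the loop continues with the next descriptor. */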

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
---
 drivers/common/idpf/idpf_common_rxtx.c | 134 +++++++++++++++++++++++++
 drivers/common/idpf/idpf_common_rxtx.h |   3 +
 drivers/common/idpf/version.map        |   1 +
 drivers/net/idpf/idpf_ethdev.c         |   3 +-
 drivers/net/idpf/idpf_rxtx.c           |  26 ++++-
 drivers/net/idpf/idpf_rxtx.h           |   2 +
 6 files changed, 166 insertions(+), 3 deletions(-)

diff --git a/drivers/common/idpf/idpf_common_rxtx.c b/drivers/common/idpf/idpf_common_rxtx.c
index 7f8311d8f6..dcdf43ca0a 100644
--- a/drivers/common/idpf/idpf_common_rxtx.c
+++ b/drivers/common/idpf/idpf_common_rxtx.c
@@ -1144,6 +1144,140 @@ idpf_singleq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 	return nb_rx;
 }
 
+uint16_t
+idpf_singleq_recv_scatter_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
+			       uint16_t nb_pkts)
+{
+	struct idpf_rx_queue *rxq = rx_queue;
+	volatile union virtchnl2_rx_desc *rx_ring = rxq->rx_ring;
+	volatile union virtchnl2_rx_desc *rxdp;
+	union virtchnl2_rx_desc rxd;
+	struct idpf_adapter *ad;
+	struct rte_mbuf *first_seg = rxq->pkt_first_seg;
+	struct rte_mbuf *last_seg = rxq->pkt_last_seg;
+	struct rte_mbuf *rxm;
+	struct rte_mbuf *nmb;
+	struct rte_eth_dev *dev;
+	const uint32_t *ptype_tbl = rxq->adapter->ptype_tbl;
+	uint16_t nb_hold = 0, nb_rx = 0;
+	uint16_t rx_id = rxq->rx_tail;
+	uint16_t rx_packet_len;
+	uint16_t rx_status0;
+	uint64_t pkt_flags;
+	uint64_t dma_addr;
+	uint64_t ts_ns;
+
+	ad = rxq->adapter;
+
+	if (unlikely(!rxq) || unlikely(!rxq->q_started))
+		return nb_rx;
+
+	while (nb_rx < nb_pkts) {
+		rxdp = &rx_ring[rx_id];
+		rx_status0 = rte_le_to_cpu_16(rxdp->flex_nic_wb.status_error0);
+
+		/* Check the DD bit first */
+		if (!(rx_status0 & (1 << VIRTCHNL2_RX_FLEX_DESC_STATUS0_DD_S)))
+			break;
+
+		nmb = rte_mbuf_raw_alloc(rxq->mp);
+		if (unlikely(!nmb)) {
+			rxq->rx_stats.mbuf_alloc_failed++;
+			RX_LOG(DEBUG, "RX mbuf alloc failed port_id=%u "
+			       "queue_id=%u", rxq->port_id, rxq->queue_id);
+			break;
+		}
+
+		rxd = *rxdp;
+
+		nb_hold++;
+		rxm = rxq->sw_ring[rx_id];
+		rxq->sw_ring[rx_id] = nmb;
+		rx_id++;
+		if (unlikely(rx_id == rxq->nb_rx_desc))
+			rx_id = 0;
+
+		/* Prefetch next mbuf */
+		rte_prefetch0(rxq->sw_ring[rx_id]);
+
+		/* When next RX descriptor is on a cache line boundary,
+		 * prefetch the next 4 RX descriptors and next 8 pointers
+		 * to mbufs.
+		 */
+		if ((rx_id & 0x3) == 0) {
+			rte_prefetch0(&rx_ring[rx_id]);
+			rte_prefetch0(rxq->sw_ring[rx_id]);
+		}
+		dma_addr =
+			rte_cpu_to_le_64(rte_mbuf_data_iova_default(nmb));
+		rxdp->read.hdr_addr = 0;
+		rxdp->read.pkt_addr = dma_addr;
+		rx_packet_len = (rte_cpu_to_le_16(rxd.flex_nic_wb.pkt_len) &
+				 VIRTCHNL2_RX_FLEX_DESC_PKT_LEN_M);
+		rxm->data_len = rx_packet_len;
+		rxm->data_off = RTE_PKTMBUF_HEADROOM;
+
+		/**
+		 * If this is the first buffer of the received packet, set the
+		 * pointer to the first mbuf of the packet and initialize its
+		 * context. Otherwise, update the total length and the number
+		 * of segments of the current scattered packet, and update the
+		 * pointer to the last mbuf of the current packet.
+		 */
+		if (!first_seg) {
+			first_seg = rxm;
+			first_seg->nb_segs = 1;
+			first_seg->pkt_len = rx_packet_len;
+		} else {
+			first_seg->pkt_len =
+				(uint16_t)(first_seg->pkt_len +
+					   rx_packet_len);
+			first_seg->nb_segs++;
+			last_seg->next = rxm;
+		}
+
+		if (!(rx_status0 & (1 << VIRTCHNL2_RX_FLEX_DESC_STATUS0_EOF_S))) {
+			last_seg = rxm;
+			continue;
+		}
+
+		rxm->next = NULL;
+
+		first_seg->port = rxq->port_id;
+		first_seg->ol_flags = 0;
+		pkt_flags = idpf_rxd_to_pkt_flags(rx_status0);
+		first_seg->packet_type =
+			ptype_tbl[(uint8_t)(rte_cpu_to_le_16(rxd.flex_nic_wb.ptype_flex_flags0) &
+				VIRTCHNL2_RX_FLEX_DESC_PTYPE_M)];
+
+		if (idpf_timestamp_dynflag > 0 &&
+		    (rxq->offloads & IDPF_RX_OFFLOAD_TIMESTAMP) != 0) {
+			/* timestamp */
+			ts_ns = idpf_tstamp_convert_32b_64b(ad,
+				rxq->hw_register_set,
+				rte_le_to_cpu_32(rxd.flex_nic_wb.flex_ts.ts_high));
+			rxq->hw_register_set = 0;
+			*RTE_MBUF_DYNFIELD(rxm,
+					   idpf_timestamp_dynfield_offset,
+					   rte_mbuf_timestamp_t *) = ts_ns;
+			first_seg->ol_flags |= idpf_timestamp_dynflag;
+		}
+
+		first_seg->ol_flags |= pkt_flags;
+		rte_prefetch0(RTE_PTR_ADD(first_seg->buf_addr,
+					  first_seg->data_off));
+		rx_pkts[nb_rx++] = first_seg;
+		first_seg = NULL;
+	}
+	rxq->rx_tail = rx_id;
+	rxq->pkt_first_seg = first_seg;
+	rxq->pkt_last_seg = last_seg;
+
+	idpf_update_rx_tail(rxq, nb_hold, rx_id);
+
+	return nb_rx;
+}
+
 static inline int
 idpf_xmit_cleanup(struct idpf_tx_queue *txq)
 {
diff --git a/drivers/common/idpf/idpf_common_rxtx.h b/drivers/common/idpf/idpf_common_rxtx.h
index e23484d031..eee9fdbd9e 100644
--- a/drivers/common/idpf/idpf_common_rxtx.h
+++ b/drivers/common/idpf/idpf_common_rxtx.h
@@ -260,6 +260,9 @@ __rte_internal
 uint16_t idpf_splitq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 			       uint16_t nb_pkts);
 __rte_internal
+uint16_t idpf_singleq_recv_scatter_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
+			  uint16_t nb_pkts);
+__rte_internal
 uint16_t idpf_splitq_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 			       uint16_t nb_pkts);
 __rte_internal
diff --git a/drivers/common/idpf/version.map b/drivers/common/idpf/version.map
index cba08c6b4a..1805e2cb04 100644
--- a/drivers/common/idpf/version.map
+++ b/drivers/common/idpf/version.map
@@ -41,6 +41,7 @@ INTERNAL {
 	idpf_splitq_recv_pkts;
 	idpf_splitq_xmit_pkts;
 	idpf_singleq_recv_pkts;
+	idpf_singleq_recv_scatter_pkts;
 	idpf_singleq_xmit_pkts;
 	idpf_prep_pkts;
 	idpf_singleq_rx_vec_setup;
diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index 0d370ace4a..573afcab4f 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -119,7 +119,8 @@ idpf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 		RTE_ETH_RX_OFFLOAD_UDP_CKSUM            |
 		RTE_ETH_RX_OFFLOAD_TCP_CKSUM            |
 		RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM     |
-		RTE_ETH_RX_OFFLOAD_TIMESTAMP;
+		RTE_ETH_RX_OFFLOAD_TIMESTAMP		|
+		RTE_ETH_RX_OFFLOAD_SCATTER;
 
 	dev_info->tx_offload_capa =
 		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM		|
diff --git a/drivers/net/idpf/idpf_rxtx.c b/drivers/net/idpf/idpf_rxtx.c
index e30d7c56ee..01df8e52c0 100644
--- a/drivers/net/idpf/idpf_rxtx.c
+++ b/drivers/net/idpf/idpf_rxtx.c
@@ -506,6 +506,8 @@ int
 idpf_rx_queue_init(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 {
 	struct idpf_rx_queue *rxq;
+	uint16_t max_pkt_len;
+	uint32_t frame_size;
 	int err;
 
 	if (rx_queue_id >= dev->data->nb_rx_queues)
@@ -519,6 +521,17 @@ idpf_rx_queue_init(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 		return -EINVAL;
 	}
 
+	frame_size = dev->data->mtu + IDPF_ETH_OVERHEAD;
+
+	max_pkt_len =
+	    RTE_MIN((uint32_t)IDPF_SUPPORT_CHAIN_NUM * rxq->rx_buf_len,
+		    frame_size);
+
+	rxq->max_pkt_len = max_pkt_len;
+	if ((dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_SCATTER) ||
+	    frame_size > rxq->rx_buf_len)
+		dev->data->scattered_rx = 1;
+
 	err = idpf_register_ts_mbuf(rxq);
 	if (err != 0) {
 		PMD_DRV_LOG(ERR, "fail to regidter timestamp mbuf %u",
@@ -804,13 +817,22 @@ idpf_set_rx_function(struct rte_eth_dev *dev)
 #endif /* CC_AVX512_SUPPORT */
 		}
 
+		if (dev->data->scattered_rx) {
+			dev->rx_pkt_burst = idpf_singleq_recv_scatter_pkts;
+			return;
+		}
 		dev->rx_pkt_burst = idpf_singleq_recv_pkts;
 	}
 #else
-	if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT)
+	if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
 		dev->rx_pkt_burst = idpf_splitq_recv_pkts;
-	else
+	} else {
+		if (dev->data->scattered_rx) {
+			dev->rx_pkt_burst = idpf_singleq_recv_scatter_pkts;
+			return;
+		}
 		dev->rx_pkt_burst = idpf_singleq_recv_pkts;
+	}
 #endif /* RTE_ARCH_X86 */
 }
 
diff --git a/drivers/net/idpf/idpf_rxtx.h b/drivers/net/idpf/idpf_rxtx.h
index 3a5084dfd6..41a7495083 100644
--- a/drivers/net/idpf/idpf_rxtx.h
+++ b/drivers/net/idpf/idpf_rxtx.h
@@ -23,6 +23,8 @@
 #define IDPF_DEFAULT_TX_RS_THRESH	32
 #define IDPF_DEFAULT_TX_FREE_THRESH	32
 
+#define IDPF_SUPPORT_CHAIN_NUM 5
+
 int idpf_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 			uint16_t nb_desc, unsigned int socket_id,
 			const struct rte_eth_rxconf *rx_conf,
-- 
2.25.1



* [PATCH 4/7] common/idpf: add rss_offload hash in singleq rx
  2022-12-16  9:36 [PATCH 0/7] add idpf pmd enhancement features Mingxia Liu
                   ` (2 preceding siblings ...)
  2022-12-16  9:37 ` [PATCH 3/7] common/idpf: support single q scatter RX datapath Mingxia Liu
@ 2022-12-16  9:37 ` Mingxia Liu
  2022-12-16  9:37 ` [PATCH 5/7] common/idpf: add alarm to support handle vchnl message Mingxia Liu
                   ` (3 subsequent siblings)
  7 siblings, 0 replies; 63+ messages in thread
From: Mingxia Liu @ 2022-12-16  9:37 UTC (permalink / raw)
  To: dev; +Cc: jingjing.wu, beilei.xing, qi.z.zhang, Mingxia Liu

This patch adds parsing of the RSS valid flag and hash value from the Rx descriptor.
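
On the application side, the effect is that received packets carry a usable
RSS hash. A tiny hedged example of consuming it (the helper name is made up;
only the mbuf flag and field are standard):

#include <rte_mbuf.h>

/* Return the 32-bit RSS hash if the PMD marked it valid, else 0. */
static inline uint32_t pkt_rss_hash(const struct rte_mbuf *m)
{
	return (m->ol_flags & RTE_MBUF_F_RX_RSS_HASH) ? m->hash.rss : 0;
}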

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/common/idpf/idpf_common_rxtx.c | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)

diff --git a/drivers/common/idpf/idpf_common_rxtx.c b/drivers/common/idpf/idpf_common_rxtx.c
index dcdf43ca0a..cec99d2951 100644
--- a/drivers/common/idpf/idpf_common_rxtx.c
+++ b/drivers/common/idpf/idpf_common_rxtx.c
@@ -1028,6 +1028,20 @@ idpf_update_rx_tail(struct idpf_rx_queue *rxq, uint16_t nb_hold,
 	rxq->nb_rx_hold = nb_hold;
 }
 
+static inline void
+idpf_singleq_rx_rss_offload(struct rte_mbuf *mb,
+			    volatile struct virtchnl2_rx_flex_desc_nic *rx_desc,
+			    uint64_t *pkt_flags)
+{
+	uint16_t rx_status0 = rte_le_to_cpu_16(rx_desc->status_error0);
+
+	if (rx_status0 & RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_STATUS0_RSS_VALID_S)) {
+		*pkt_flags |= RTE_MBUF_F_RX_RSS_HASH;
+		mb->hash.rss = rte_le_to_cpu_32(rx_desc->rss_hash);
+	}
+
+}
+
 uint16_t
 idpf_singleq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 		       uint16_t nb_pkts)
@@ -1116,6 +1130,7 @@ idpf_singleq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 		rxm->port = rxq->port_id;
 		rxm->ol_flags = 0;
 		pkt_flags = idpf_rxd_to_pkt_flags(rx_status0);
+		idpf_singleq_rx_rss_offload(rxm, &rxd.flex_nic_wb, &pkt_flags);
 		rxm->packet_type =
 			ptype_tbl[(uint8_t)(rte_cpu_to_le_16(rxd.flex_nic_wb.ptype_flex_flags0) &
 					    VIRTCHNL2_RX_FLEX_DESC_PTYPE_M)];
@@ -1246,6 +1261,7 @@ idpf_singleq_recv_scatter_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 		first_seg->port = rxq->port_id;
 		first_seg->ol_flags = 0;
 		pkt_flags = idpf_rxd_to_pkt_flags(rx_status0);
+		idpf_singleq_rx_rss_offload(first_seg, &rxd.flex_nic_wb, &pkt_flags);
 		first_seg->packet_type =
 			ptype_tbl[(uint8_t)(rte_cpu_to_le_16(rxd.flex_nic_wb.ptype_flex_flags0) &
 				VIRTCHNL2_RX_FLEX_DESC_PTYPE_M)];
-- 
2.25.1



* [PATCH 5/7] common/idpf: add alarm to support handle vchnl message
  2022-12-16  9:36 [PATCH 0/7] add idpf pmd enhancement features Mingxia Liu
                   ` (3 preceding siblings ...)
  2022-12-16  9:37 ` [PATCH 4/7] common/idpf: add rss_offload hash in singleq rx Mingxia Liu
@ 2022-12-16  9:37 ` Mingxia Liu
  2022-12-16  9:37 ` [PATCH 6/7] common/idpf: add xstats ops Mingxia Liu
                   ` (2 subsequent siblings)
  7 siblings, 0 replies; 63+ messages in thread
From: Mingxia Liu @ 2022-12-16  9:37 UTC (permalink / raw)
  To: dev; +Cc: jingjing.wu, beilei.xing, qi.z.zhang, Mingxia Liu

Handle virtual channel messages.
Refine the link status update.
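
The mechanism is a self-rearming EAL alarm that polls the mailbox and
dispatches virtchnl events. A minimal sketch of that pattern (the callback
body is a placeholder; POLL_US mirrors IDPF_ALARM_INTERVAL defined later in
this patch):

#include <rte_alarm.h>

#define POLL_US 50000	/* 50 ms, same value as IDPF_ALARM_INTERVAL */

static void mbx_poll_cb(void *arg)
{
	/* ... receive and dispatch virtchnl messages for 'arg' ... */

	/* re-arm so the poll keeps running until cancelled */
	rte_eal_alarm_set(POLL_US, mbx_poll_cb, arg);
}

/* start once at init:  rte_eal_alarm_set(POLL_US, mbx_poll_cb, adapter);
 * stop at teardown:    rte_eal_alarm_cancel(mbx_poll_cb, adapter); */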

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
Signed-off-by: Beilei Xing <beilei.xing@intel.com>
---
 drivers/common/idpf/idpf_common_device.h   |   5 +
 drivers/common/idpf/idpf_common_virtchnl.c |  19 ---
 drivers/net/idpf/idpf_ethdev.c             | 165 ++++++++++++++++++++-
 drivers/net/idpf/idpf_ethdev.h             |   2 +
 4 files changed, 171 insertions(+), 20 deletions(-)

diff --git a/drivers/common/idpf/idpf_common_device.h b/drivers/common/idpf/idpf_common_device.h
index d7d4cd5363..03697510bb 100644
--- a/drivers/common/idpf/idpf_common_device.h
+++ b/drivers/common/idpf/idpf_common_device.h
@@ -115,6 +115,11 @@ struct idpf_vport {
 	bool tx_use_avx512;
 
 	struct virtchnl2_vport_stats eth_stats_offset;
+
+	void *dev;
+	/* Event from ipf */
+	bool link_up;
+	uint32_t link_speed;
 };
 
 /* Message type read in virtual channel from PF */
diff --git a/drivers/common/idpf/idpf_common_virtchnl.c b/drivers/common/idpf/idpf_common_virtchnl.c
index ae5a983836..c3e7569cc2 100644
--- a/drivers/common/idpf/idpf_common_virtchnl.c
+++ b/drivers/common/idpf/idpf_common_virtchnl.c
@@ -202,25 +202,6 @@ idpf_execute_vc_cmd(struct idpf_adapter *adapter, struct idpf_cmd_info *args)
 	switch (args->ops) {
 	case VIRTCHNL_OP_VERSION:
 	case VIRTCHNL2_OP_GET_CAPS:
-	case VIRTCHNL2_OP_CREATE_VPORT:
-	case VIRTCHNL2_OP_DESTROY_VPORT:
-	case VIRTCHNL2_OP_SET_RSS_KEY:
-	case VIRTCHNL2_OP_SET_RSS_LUT:
-	case VIRTCHNL2_OP_SET_RSS_HASH:
-	case VIRTCHNL2_OP_CONFIG_RX_QUEUES:
-	case VIRTCHNL2_OP_CONFIG_TX_QUEUES:
-	case VIRTCHNL2_OP_ENABLE_QUEUES:
-	case VIRTCHNL2_OP_DISABLE_QUEUES:
-	case VIRTCHNL2_OP_ENABLE_VPORT:
-	case VIRTCHNL2_OP_DISABLE_VPORT:
-	case VIRTCHNL2_OP_MAP_QUEUE_VECTOR:
-	case VIRTCHNL2_OP_UNMAP_QUEUE_VECTOR:
-	case VIRTCHNL2_OP_ALLOC_VECTORS:
-	case VIRTCHNL2_OP_DEALLOC_VECTORS:
-	case VIRTCHNL2_OP_GET_STATS:
-	case VIRTCHNL2_OP_GET_RSS_KEY:
-	case VIRTCHNL2_OP_GET_RSS_HASH:
-	case VIRTCHNL2_OP_GET_RSS_LUT:
 		/* for init virtchnl ops, need to poll the response */
 		err = idpf_read_one_msg(adapter, args->ops, args->out_size, args->out_buffer);
 		clear_cmd(adapter);
diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index 573afcab4f..3ffc4cd9a3 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -9,6 +9,7 @@
 #include <rte_memzone.h>
 #include <rte_dev.h>
 #include <errno.h>
+#include <rte_alarm.h>
 
 #include "idpf_ethdev.h"
 #include "idpf_rxtx.h"
@@ -83,12 +84,49 @@ static int
 idpf_dev_link_update(struct rte_eth_dev *dev,
 		     __rte_unused int wait_to_complete)
 {
+	struct idpf_vport *vport = dev->data->dev_private;
 	struct rte_eth_link new_link;
 
 	memset(&new_link, 0, sizeof(new_link));
 
-	new_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+	switch (vport->link_speed) {
+	case 10:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_10M;
+		break;
+	case 100:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_100M;
+		break;
+	case 1000:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_1G;
+		break;
+	case 10000:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_10G;
+		break;
+	case 20000:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_20G;
+		break;
+	case 25000:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_25G;
+		break;
+	case 40000:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_40G;
+		break;
+	case 50000:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_50G;
+		break;
+	case 100000:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_100G;
+		break;
+	case 200000:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_200G;
+		break;
+	default:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+	}
+
 	new_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+	new_link.link_status = vport->link_up ? RTE_ETH_LINK_UP :
+		RTE_ETH_LINK_DOWN;
 	new_link.link_autoneg = !(dev->data->dev_conf.link_speeds &
 				  RTE_ETH_LINK_SPEED_FIXED);
 
@@ -918,6 +956,127 @@ idpf_parse_devargs(struct rte_pci_device *pci_dev, struct idpf_adapter_ext *adap
 	return ret;
 }
 
+static struct idpf_vport *
+idpf_find_vport(struct idpf_adapter_ext *adapter, uint32_t vport_id)
+{
+	struct idpf_vport *vport = NULL;
+	int i;
+
+	for (i = 0; i < adapter->cur_vport_nb; i++) {
+		vport = adapter->vports[i];
+		if (vport->vport_id != vport_id)
+			continue;
+		else
+			return vport;
+	}
+
+	return vport;
+}
+
+static void
+idpf_handle_event_msg(struct idpf_vport *vport, uint8_t *msg, uint16_t msglen)
+{
+	struct virtchnl2_event *vc_event = (struct virtchnl2_event *)msg;
+	struct rte_eth_dev *dev = (struct rte_eth_dev *)vport->dev;
+
+	if (msglen < sizeof(struct virtchnl2_event)) {
+		PMD_DRV_LOG(ERR, "Error event");
+		return;
+	}
+
+	switch (vc_event->event) {
+	case VIRTCHNL2_EVENT_LINK_CHANGE:
+		PMD_DRV_LOG(DEBUG, "VIRTCHNL2_EVENT_LINK_CHANGE");
+		vport->link_up = vc_event->link_status;
+		vport->link_speed = vc_event->link_speed;
+		idpf_dev_link_update(dev, 0);
+		break;
+	default:
+		PMD_DRV_LOG(ERR, " unknown event received %u", vc_event->event);
+		break;
+	}
+}
+
+static void
+idpf_handle_virtchnl_msg(struct idpf_adapter_ext *adapter_ex)
+{
+	struct idpf_adapter *adapter = &adapter_ex->base;
+	struct idpf_dma_mem *dma_mem = NULL;
+	struct idpf_hw *hw = &adapter->hw;
+	struct virtchnl2_event *vc_event;
+	struct idpf_ctlq_msg ctlq_msg;
+	enum idpf_mbx_opc mbx_op;
+	struct idpf_vport *vport;
+	enum virtchnl_ops vc_op;
+	uint16_t pending = 1;
+	int ret;
+
+	while (pending) {
+		ret = idpf_ctlq_recv(hw->arq, &pending, &ctlq_msg);
+		if (ret) {
+			PMD_DRV_LOG(INFO, "Failed to read msg from virtual channel, ret: %d", ret);
+			return;
+		}
+
+		rte_memcpy(adapter->mbx_resp, ctlq_msg.ctx.indirect.payload->va,
+			   IDPF_DFLT_MBX_BUF_SIZE);
+
+		mbx_op = rte_le_to_cpu_16(ctlq_msg.opcode);
+		vc_op = rte_le_to_cpu_32(ctlq_msg.cookie.mbx.chnl_opcode);
+		adapter->cmd_retval = rte_le_to_cpu_32(ctlq_msg.cookie.mbx.chnl_retval);
+
+		switch (mbx_op) {
+		case idpf_mbq_opc_send_msg_to_peer_pf:
+			if (vc_op == VIRTCHNL2_OP_EVENT) {
+				if (ctlq_msg.data_len < sizeof(struct virtchnl2_event)) {
+					PMD_DRV_LOG(ERR, "Error event");
+					return;
+				}
+				vc_event = (struct virtchnl2_event *)adapter->mbx_resp;
+				vport = idpf_find_vport(adapter_ex, vc_event->vport_id);
+				if (!vport) {
+					PMD_DRV_LOG(ERR, "Can't find vport.");
+					return;
+				}
+				idpf_handle_event_msg(vport, adapter->mbx_resp,
+						      ctlq_msg.data_len);
+			} else {
+				if (vc_op == adapter->pend_cmd)
+					notify_cmd(adapter, adapter->cmd_retval);
+				else
+					PMD_DRV_LOG(ERR, "command mismatch, expect %u, get %u",
+						    adapter->pend_cmd, vc_op);
+
+				PMD_DRV_LOG(DEBUG, " Virtual channel response is received,"
+					    "opcode = %d", vc_op);
+			}
+			goto post_buf;
+		default:
+			PMD_DRV_LOG(DEBUG, "Request %u is not supported yet", mbx_op);
+		}
+	}
+
+post_buf:
+	if (ctlq_msg.data_len)
+		dma_mem = ctlq_msg.ctx.indirect.payload;
+	else
+		pending = 0;
+
+	ret = idpf_ctlq_post_rx_buffs(hw, hw->arq, &pending, &dma_mem);
+	if (ret && dma_mem)
+		idpf_free_dma_mem(hw, dma_mem);
+}
+
+static void
+idpf_dev_alarm_handler(void *param)
+{
+	struct idpf_adapter_ext *adapter = param;
+
+	idpf_handle_virtchnl_msg(adapter);
+
+	rte_eal_alarm_set(IDPF_ALARM_INTERVAL, idpf_dev_alarm_handler, adapter);
+}
+
 static int
 idpf_adapter_ext_init(struct rte_pci_device *pci_dev, struct idpf_adapter_ext *adapter)
 {
@@ -940,6 +1099,8 @@ idpf_adapter_ext_init(struct rte_pci_device *pci_dev, struct idpf_adapter_ext *a
 		goto err_adapter_init;
 	}
 
+	rte_eal_alarm_set(IDPF_ALARM_INTERVAL, idpf_dev_alarm_handler, adapter);
+
 	adapter->max_vport_nb = adapter->base.caps.max_vports;
 
 	adapter->vports = rte_zmalloc("vports",
@@ -1023,6 +1184,7 @@ idpf_dev_vport_init(struct rte_eth_dev *dev, void *init_params)
 	vport->adapter = &adapter->base;
 	vport->sw_idx = param->idx;
 	vport->devarg_id = param->devarg_id;
+	vport->dev = dev;
 
 	memset(&create_vport_info, 0, sizeof(create_vport_info));
 	ret = idpf_create_vport_info_init(vport, &create_vport_info);
@@ -1092,6 +1254,7 @@ idpf_find_adapter_ext(struct rte_pci_device *pci_dev)
 static void
 idpf_adapter_ext_deinit(struct idpf_adapter_ext *adapter)
 {
+	rte_eal_alarm_cancel(idpf_dev_alarm_handler, adapter);
 	idpf_adapter_deinit(&adapter->base);
 
 	rte_free(adapter->vports);
diff --git a/drivers/net/idpf/idpf_ethdev.h b/drivers/net/idpf/idpf_ethdev.h
index f3e5d4cbd4..2b894f3c52 100644
--- a/drivers/net/idpf/idpf_ethdev.h
+++ b/drivers/net/idpf/idpf_ethdev.h
@@ -54,6 +54,8 @@
 
 #define IDPF_ADAPTER_NAME_LEN	(PCI_PRI_STR_SIZE + 1)
 
+#define IDPF_ALARM_INTERVAL	50000 /* us */
+
 struct idpf_vport_param {
 	struct idpf_adapter_ext *adapter;
 	uint16_t devarg_id; /* arg id from user */
-- 
2.25.1



* [PATCH 6/7] common/idpf: add xstats ops
  2022-12-16  9:36 [PATCH 0/7] add idpf pmd enhancement features Mingxia Liu
                   ` (4 preceding siblings ...)
  2022-12-16  9:37 ` [PATCH 5/7] common/idpf: add alarm to support handle vchnl message Mingxia Liu
@ 2022-12-16  9:37 ` Mingxia Liu
  2022-12-16  9:37 ` [PATCH 7/7] common/idpf: update mbuf_alloc_failed multi-thread process Mingxia Liu
  2023-01-11  7:15 ` [PATCH 0/6] add idpf pmd enhancement features Mingxia Liu
  7 siblings, 0 replies; 63+ messages in thread
From: Mingxia Liu @ 2022-12-16  9:37 UTC (permalink / raw)
  To: dev; +Cc: jingjing.wu, beilei.xing, qi.z.zhang, Mingxia Liu

Add support for these device ops:
- idpf_dev_xstats_get
- idpf_dev_xstats_get_names
- idpf_dev_xstats_reset
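
For illustration, a hedged application-side sketch of reading these extended
counters through the generic ethdev API (dump_xstats and port_id are
assumptions; no idpf-specific calls are involved):

#include <stdio.h>
#include <stdlib.h>
#include <inttypes.h>
#include <rte_ethdev.h>

static void dump_xstats(uint16_t port_id)
{
	int i, n = rte_eth_xstats_get(port_id, NULL, 0);	/* query count */
	struct rte_eth_xstat *vals;
	struct rte_eth_xstat_name *names;

	if (n <= 0)
		return;
	vals = malloc(n * sizeof(*vals));
	names = malloc(n * sizeof(*names));
	if (vals != NULL && names != NULL &&
	    rte_eth_xstats_get_names(port_id, names, n) == n &&
	    rte_eth_xstats_get(port_id, vals, n) == n) {
		for (i = 0; i < n; i++)
			printf("%s: %" PRIu64 "\n", names[i].name, vals[i].value);
	}
	free(vals);
	free(names);
}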

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/net/idpf/idpf_ethdev.c | 80 ++++++++++++++++++++++++++++++++++
 1 file changed, 80 insertions(+)

diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index 3ffc4cd9a3..97c03118e0 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -80,6 +80,30 @@ static const uint64_t idpf_ipv6_rss = RTE_ETH_RSS_NONFRAG_IPV6_UDP |
 			  RTE_ETH_RSS_NONFRAG_IPV6_OTHER |
 			  RTE_ETH_RSS_FRAG_IPV6;
 
+struct rte_idpf_xstats_name_off {
+	char name[RTE_ETH_XSTATS_NAME_SIZE];
+	unsigned int offset;
+};
+
+static const struct rte_idpf_xstats_name_off rte_idpf_stats_strings[] = {
+	{"rx_bytes", offsetof(struct virtchnl2_vport_stats, rx_bytes)},
+	{"rx_unicast_packets", offsetof(struct virtchnl2_vport_stats, rx_unicast)},
+	{"rx_multicast_packets", offsetof(struct virtchnl2_vport_stats, rx_multicast)},
+	{"rx_broadcast_packets", offsetof(struct virtchnl2_vport_stats, rx_broadcast)},
+	{"rx_dropped_packets", offsetof(struct virtchnl2_vport_stats, rx_discards)},
+	{"rx_errors", offsetof(struct virtchnl2_vport_stats, rx_errors)},
+	{"rx_unknown_protocol_packets", offsetof(struct virtchnl2_vport_stats,
+						 rx_unknown_protocol)},
+	{"tx_bytes", offsetof(struct virtchnl2_vport_stats, tx_bytes)},
+	{"tx_unicast_packets", offsetof(struct virtchnl2_vport_stats, tx_unicast)},
+	{"tx_multicast_packets", offsetof(struct virtchnl2_vport_stats, tx_multicast)},
+	{"tx_broadcast_packets", offsetof(struct virtchnl2_vport_stats, tx_broadcast)},
+	{"tx_dropped_packets", offsetof(struct virtchnl2_vport_stats, tx_discards)},
+	{"tx_error_packets", offsetof(struct virtchnl2_vport_stats, tx_errors)}};
+
+#define IDPF_NB_XSTATS (sizeof(rte_idpf_stats_strings) / \
+		sizeof(rte_idpf_stats_strings[0]))
+
 static int
 idpf_dev_link_update(struct rte_eth_dev *dev,
 		     __rte_unused int wait_to_complete)
@@ -303,6 +327,59 @@ idpf_dev_stats_reset(struct rte_eth_dev *dev)
 	return 0;
 }
 
+static int idpf_dev_xstats_reset(struct rte_eth_dev *dev)
+{
+	idpf_dev_stats_reset(dev);
+	return 0;
+}
+
+static int idpf_dev_xstats_get(struct rte_eth_dev *dev,
+			       struct rte_eth_xstat *xstats, unsigned int n)
+{
+	struct idpf_vport *vport =
+		(struct idpf_vport *)dev->data->dev_private;
+	struct virtchnl2_vport_stats *pstats = NULL;
+	unsigned int i;
+	int ret;
+
+	if (n < IDPF_NB_XSTATS)
+		return IDPF_NB_XSTATS;
+
+	if (!xstats)
+		return 0;
+
+	ret = idpf_query_stats(vport, &pstats);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "Get statistics failed");
+		return 0;
+	}
+
+	idpf_update_stats(&vport->eth_stats_offset, pstats);
+
+	/* loop over xstats array and values from pstats */
+	for (i = 0; i < IDPF_NB_XSTATS; i++) {
+		xstats[i].id = i;
+		xstats[i].value = *(uint64_t *)(((char *)pstats) +
+			rte_idpf_stats_strings[i].offset);
+	}
+	return IDPF_NB_XSTATS;
+}
+
+static int idpf_dev_xstats_get_names(__rte_unused struct rte_eth_dev *dev,
+				     struct rte_eth_xstat_name *xstats_names,
+				     __rte_unused unsigned int limit)
+{
+	unsigned int i;
+
+	if (xstats_names)
+		for (i = 0; i < IDPF_NB_XSTATS; i++) {
+			snprintf(xstats_names[i].name,
+				 sizeof(xstats_names[i].name),
+				 "%s", rte_idpf_stats_strings[i].name);
+		}
+	return IDPF_NB_XSTATS;
+}
+
 static int idpf_config_rss_hf(struct idpf_vport *vport, uint64_t rss_hf)
 {
 	uint64_t hena = 0, valid_rss_hf = 0;
@@ -1149,6 +1226,9 @@ static const struct eth_dev_ops idpf_eth_dev_ops = {
 	.reta_query			= idpf_rss_reta_query,
 	.rss_hash_update		= idpf_rss_hash_update,
 	.rss_hash_conf_get		= idpf_rss_hash_conf_get,
+	.xstats_get			= idpf_dev_xstats_get,
+	.xstats_get_names		= idpf_dev_xstats_get_names,
+	.xstats_reset			= idpf_dev_xstats_reset,
 };
 
 static uint16_t
-- 
2.25.1



* [PATCH 7/7] common/idpf: update mbuf_alloc_failed multi-thread process
  2022-12-16  9:36 [PATCH 0/7] add idpf pmd enhancement features Mingxia Liu
                   ` (5 preceding siblings ...)
  2022-12-16  9:37 ` [PATCH 6/7] common/idpf: add xstats ops Mingxia Liu
@ 2022-12-16  9:37 ` Mingxia Liu
  2023-01-11  7:15 ` [PATCH 0/6] add idpf pmd enhancement features Mingxia Liu
  7 siblings, 0 replies; 63+ messages in thread
From: Mingxia Liu @ 2022-12-16  9:37 UTC (permalink / raw)
  To: dev; +Cc: jingjing.wu, beilei.xing, qi.z.zhang, Mingxia Liu

As the variable mbuf_alloc_failed is accessed by more than one thread,
change its type to rte_atomic64_t and operate on it with the
rte_atomic64_xx() functions; this avoids multithreading issues.
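
A standalone sketch of the before/after counter pattern (the names here are
illustrative; the driver applies the same calls to rx_stats.mbuf_alloc_failed):

#include <stdint.h>
#include <rte_atomic.h>

static rte_atomic64_t alloc_failed;	/* shared between lcores */

/* writer side: several Rx threads may fail allocations concurrently */
static void on_alloc_fail(unsigned int nb)
{
	if (nb == 1)
		rte_atomic64_inc(&alloc_failed);
	else
		rte_atomic64_add(&alloc_failed, nb);
}

/* reader side: stats_get sums, stats_reset clears, without a lock */
static uint64_t read_counter(void)
{
	return rte_atomic64_read(&alloc_failed);
}

static void reset_counter(void)
{
	rte_atomic64_set(&alloc_failed, 0);
}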

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/common/idpf/idpf_common_rxtx.c        | 10 ++++++----
 drivers/common/idpf/idpf_common_rxtx.h        |  2 +-
 drivers/common/idpf/idpf_common_rxtx_avx512.c | 12 ++++++++----
 drivers/net/idpf/idpf_ethdev.c                |  5 +++--
 4 files changed, 18 insertions(+), 11 deletions(-)

diff --git a/drivers/common/idpf/idpf_common_rxtx.c b/drivers/common/idpf/idpf_common_rxtx.c
index cec99d2951..dd8e761834 100644
--- a/drivers/common/idpf/idpf_common_rxtx.c
+++ b/drivers/common/idpf/idpf_common_rxtx.c
@@ -592,7 +592,8 @@ idpf_split_rx_bufq_refill(struct idpf_rx_queue *rx_bufq)
 			next_avail = 0;
 			rx_bufq->nb_rx_hold -= delta;
 		} else {
-			rx_bufq->rx_stats.mbuf_alloc_failed += nb_desc - next_avail;
+			rte_atomic64_add(&(rx_bufq->rx_stats.mbuf_alloc_failed),
+					 nb_desc - next_avail);
 			RX_LOG(DEBUG, "RX mbuf alloc failed port_id=%u queue_id=%u",
 			       rx_bufq->port_id, rx_bufq->queue_id);
 			return;
@@ -611,7 +612,8 @@ idpf_split_rx_bufq_refill(struct idpf_rx_queue *rx_bufq)
 			next_avail += nb_refill;
 			rx_bufq->nb_rx_hold -= nb_refill;
 		} else {
-			rx_bufq->rx_stats.mbuf_alloc_failed += nb_desc - next_avail;
+			rte_atomic64_add(&(rx_bufq->rx_stats.mbuf_alloc_failed),
+					 nb_desc - next_avail);
 			RX_LOG(DEBUG, "RX mbuf alloc failed port_id=%u queue_id=%u",
 			       rx_bufq->port_id, rx_bufq->queue_id);
 		}
@@ -1088,7 +1090,7 @@ idpf_singleq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 
 		nmb = rte_mbuf_raw_alloc(rxq->mp);
 		if (unlikely(nmb == NULL)) {
-			rxq->rx_stats.mbuf_alloc_failed++;
+			rte_atomic64_inc(&(rxq->rx_stats.mbuf_alloc_failed));
 			RX_LOG(DEBUG, "RX mbuf alloc failed port_id=%u "
 			       "queue_id=%u", rxq->port_id, rxq->queue_id);
 			break;
@@ -1197,7 +1199,7 @@ idpf_singleq_recv_scatter_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 
 		nmb = rte_mbuf_raw_alloc(rxq->mp);
 		if (unlikely(!nmb)) {
-			rxq->rx_stats.mbuf_alloc_failed++;
+			rte_atomic64_inc(&(rxq->rx_stats.mbuf_alloc_failed));
 			RX_LOG(DEBUG, "RX mbuf alloc failed port_id=%u "
 			       "queue_id=%u", rxq->port_id, rxq->queue_id);
 			break;
diff --git a/drivers/common/idpf/idpf_common_rxtx.h b/drivers/common/idpf/idpf_common_rxtx.h
index eee9fdbd9e..0209750187 100644
--- a/drivers/common/idpf/idpf_common_rxtx.h
+++ b/drivers/common/idpf/idpf_common_rxtx.h
@@ -91,7 +91,7 @@
 #define PF_GLTSYN_SHTIME_H_5	(PF_TIMESYNC_BAR4_BASE + 0x13C)
 
 struct idpf_rx_stats {
-	uint64_t mbuf_alloc_failed;
+	rte_atomic64_t mbuf_alloc_failed;
 };
 
 struct idpf_rx_queue {
diff --git a/drivers/common/idpf/idpf_common_rxtx_avx512.c b/drivers/common/idpf/idpf_common_rxtx_avx512.c
index 5a91ed610e..1fc110cc94 100644
--- a/drivers/common/idpf/idpf_common_rxtx_avx512.c
+++ b/drivers/common/idpf/idpf_common_rxtx_avx512.c
@@ -38,7 +38,8 @@ idpf_singleq_rearm_common(struct idpf_rx_queue *rxq)
 						dma_addr0);
 			}
 		}
-		rxq->rx_stats.mbuf_alloc_failed += IDPF_RXQ_REARM_THRESH;
+		rte_atomic64_add(&(rxq->rx_stats.mbuf_alloc_failed),
+				 IDPF_RXQ_REARM_THRESH);
 		return;
 	}
 	struct rte_mbuf *mb0, *mb1, *mb2, *mb3;
@@ -167,7 +168,8 @@ idpf_singleq_rearm(struct idpf_rx_queue *rxq)
 							 dma_addr0);
 				}
 			}
-			rxq->rx_stats.mbuf_alloc_failed += IDPF_RXQ_REARM_THRESH;
+			rte_atomic64_add(&(rxq->rx_stats.mbuf_alloc_failed),
+					 IDPF_RXQ_REARM_THRESH);
 			return;
 		}
 	}
@@ -562,7 +564,8 @@ idpf_splitq_rearm_common(struct idpf_rx_queue *rx_bufq)
 						dma_addr0);
 			}
 		}
-		rx_bufq->rx_stats.mbuf_alloc_failed += IDPF_RXQ_REARM_THRESH;
+		rte_atomic64_add(&(rx_bufq->rx_stats.mbuf_alloc_failed),
+				 IDPF_RXQ_REARM_THRESH);
 		return;
 	}
 
@@ -635,7 +638,8 @@ idpf_splitq_rearm(struct idpf_rx_queue *rx_bufq)
 							 dma_addr0);
 				}
 			}
-			rx_bufq->rx_stats.mbuf_alloc_failed += IDPF_RXQ_REARM_THRESH;
+			rte_atomic64_add(&(rx_bufq->rx_stats.mbuf_alloc_failed),
+					 IDPF_RXQ_REARM_THRESH);
 			return;
 		}
 	}
diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index 97c03118e0..1a7dab1844 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -256,7 +256,8 @@ idpf_get_mbuf_alloc_failed_stats(struct rte_eth_dev *dev)
 
 	for (i = 0; i < dev->data->nb_rx_queues; i++) {
 		rxq = dev->data->rx_queues[i];
-		mbuf_alloc_failed += rxq->rx_stats.mbuf_alloc_failed;
+		mbuf_alloc_failed +=
+		    rte_atomic64_read(&(rxq->rx_stats.mbuf_alloc_failed));
 	}
 
 	return mbuf_alloc_failed;
@@ -303,7 +304,7 @@ idpf_reset_mbuf_alloc_failed_stats(struct rte_eth_dev *dev)
 
 	for (i = 0; i < dev->data->nb_rx_queues; i++) {
 		rxq = dev->data->rx_queues[i];
-		rxq->rx_stats.mbuf_alloc_failed = 0;
+		rte_atomic64_set(&(rxq->rx_stats.mbuf_alloc_failed), 0);
 	}
 }
 
-- 
2.25.1


^ permalink raw reply	[flat|nested] 63+ messages in thread

* [PATCH 0/6] add idpf pmd enhancement features
  2022-12-16  9:36 [PATCH 0/7] add idpf pmd enhancement features Mingxia Liu
                   ` (6 preceding siblings ...)
  2022-12-16  9:37 ` [PATCH 7/7] common/idpf: update mbuf_alloc_failed multi-thread process Mingxia Liu
@ 2023-01-11  7:15 ` Mingxia Liu
  2023-01-11  7:15   ` [PATCH v2 1/6] common/idpf: add hw statistics Mingxia Liu
                     ` (6 more replies)
  7 siblings, 7 replies; 63+ messages in thread
From: Mingxia Liu @ 2023-01-11  7:15 UTC (permalink / raw)
  To: dev; +Cc: jingjing.wu, beilei.xing, Mingxia Liu

This patchset adds several enhancement features to the idpf PMD,
including the following:
- add hw statistics, support stats/xstats ops
- add rss configure/show ops
- add event handling: link status
- add scattered data path for single queue

This patchset is based on the refactored idpf PMD code:
http://patches.dpdk.org/project/dpdk/cover/20230106090501.9106-1-beilei.xing@intel.com/
http://patches.dpdk.org/project/dpdk/cover/20230106091627.13530-1-beilei.xing@intel.com/
http://patches.dpdk.org/project/dpdk/cover/20230106064645.2657232-1-wenjun1.wu@intel.com/

v2 changes:
 - Fix rss lut config issue.

Mingxia Liu (6):
  common/idpf: add hw statistics
  common/idpf: add RSS set/get ops
  common/idpf: support single q scatter RX datapath
  common/idpf: add rss_offload hash in singleq rx
  common/idpf: add alarm to support handle vchnl message
  common/idpf: add xstats ops

 drivers/common/idpf/idpf_common_device.c   |  17 +
 drivers/common/idpf/idpf_common_device.h   |  11 +-
 drivers/common/idpf/idpf_common_rxtx.c     | 150 +++++
 drivers/common/idpf/idpf_common_rxtx.h     |   3 +
 drivers/common/idpf/idpf_common_virtchnl.c | 157 ++++-
 drivers/common/idpf/idpf_common_virtchnl.h |  18 +-
 drivers/common/idpf/version.map            |   9 +
 drivers/net/idpf/idpf_ethdev.c             | 638 ++++++++++++++++++++-
 drivers/net/idpf/idpf_ethdev.h             |   7 +-
 drivers/net/idpf/idpf_rxtx.c               |  26 +-
 drivers/net/idpf/idpf_rxtx.h               |   2 +
 11 files changed, 1014 insertions(+), 24 deletions(-)

-- 
2.25.1


^ permalink raw reply	[flat|nested] 63+ messages in thread

* [PATCH v2 1/6] common/idpf: add hw statistics
  2023-01-11  7:15 ` [PATCH 0/6] add idpf pmd enhancement features Mingxia Liu
@ 2023-01-11  7:15   ` Mingxia Liu
  2023-01-11  7:15   ` [PATCH v2 2/6] common/idpf: add RSS set/get ops Mingxia Liu
                     ` (5 subsequent siblings)
  6 siblings, 0 replies; 63+ messages in thread
From: Mingxia Liu @ 2023-01-11  7:15 UTC (permalink / raw)
  To: dev; +Cc: jingjing.wu, beilei.xing, Mingxia Liu

This patch adds hardware packet/byte statistics.
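
A hedged sketch of how an application consumes the new stats_get/stats_reset
ops through the generic ethdev API; port_id is assumed to be a configured and
started idpf port:

#include <inttypes.h>
#include <stdio.h>
#include <rte_ethdev.h>

static void
print_basic_stats(uint16_t port_id)
{
	struct rte_eth_stats stats;

	if (rte_eth_stats_get(port_id, &stats) != 0)
		return;

	/* ipackets/ibytes are now backed by the VIRTCHNL2_OP_GET_STATS query. */
	printf("rx packets: %" PRIu64 ", rx bytes: %" PRIu64 "\n",
	       stats.ipackets, stats.ibytes);
	printf("rx no-mbuf: %" PRIu64 "\n", stats.rx_nombuf);

	/* Re-baseline the HW counters via the new stats_reset op. */
	rte_eth_stats_reset(port_id);
}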

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/common/idpf/idpf_common_device.c   | 17 +++++
 drivers/common/idpf/idpf_common_device.h   |  5 +-
 drivers/common/idpf/idpf_common_virtchnl.c | 27 +++++++
 drivers/common/idpf/idpf_common_virtchnl.h |  3 +
 drivers/common/idpf/version.map            |  2 +
 drivers/net/idpf/idpf_ethdev.c             | 87 ++++++++++++++++++++++
 6 files changed, 140 insertions(+), 1 deletion(-)

diff --git a/drivers/common/idpf/idpf_common_device.c b/drivers/common/idpf/idpf_common_device.c
index 4e257a68fd..4adbb6f399 100644
--- a/drivers/common/idpf/idpf_common_device.c
+++ b/drivers/common/idpf/idpf_common_device.c
@@ -661,4 +661,21 @@ idpf_create_vport_info_init(struct idpf_vport *vport,
 	return 0;
 }
 
+void
+idpf_update_stats(struct virtchnl2_vport_stats *oes, struct virtchnl2_vport_stats *nes)
+{
+	nes->rx_bytes = nes->rx_bytes - oes->rx_bytes;
+	nes->rx_unicast = nes->rx_unicast - oes->rx_unicast;
+	nes->rx_multicast = nes->rx_multicast - oes->rx_multicast;
+	nes->rx_broadcast = nes->rx_broadcast - oes->rx_broadcast;
+	nes->rx_errors = nes->rx_errors - oes->rx_errors;
+	nes->rx_discards = nes->rx_discards - oes->rx_discards;
+	nes->tx_bytes = nes->tx_bytes - oes->tx_bytes;
+	nes->tx_unicast = nes->tx_unicast - oes->tx_unicast;
+	nes->tx_multicast = nes->tx_multicast - oes->tx_multicast;
+	nes->tx_broadcast = nes->tx_broadcast - oes->tx_broadcast;
+	nes->tx_errors = nes->tx_errors - oes->tx_errors;
+	nes->tx_discards = nes->tx_discards - oes->tx_discards;
+}
+
 RTE_LOG_REGISTER_SUFFIX(idpf_common_logtype, common, NOTICE);
diff --git a/drivers/common/idpf/idpf_common_device.h b/drivers/common/idpf/idpf_common_device.h
index 6c9a65ae3b..5184dcee9f 100644
--- a/drivers/common/idpf/idpf_common_device.h
+++ b/drivers/common/idpf/idpf_common_device.h
@@ -112,6 +112,8 @@ struct idpf_vport {
 	bool tx_vec_allowed;
 	bool rx_use_avx512;
 	bool tx_use_avx512;
+
+	struct virtchnl2_vport_stats eth_stats_offset;
 };
 
 /* Message type read in virtual channel from PF */
@@ -188,5 +190,6 @@ int idpf_config_irq_unmap(struct idpf_vport *vport, uint16_t nb_rx_queues);
 __rte_internal
 int idpf_create_vport_info_init(struct idpf_vport *vport,
 				struct virtchnl2_create_vport *vport_info);
-
+__rte_internal
+void idpf_update_stats(struct virtchnl2_vport_stats *oes, struct virtchnl2_vport_stats *nes);
 #endif /* _IDPF_COMMON_DEVICE_H_ */
diff --git a/drivers/common/idpf/idpf_common_virtchnl.c b/drivers/common/idpf/idpf_common_virtchnl.c
index 324214caa1..80351d15de 100644
--- a/drivers/common/idpf/idpf_common_virtchnl.c
+++ b/drivers/common/idpf/idpf_common_virtchnl.c
@@ -217,6 +217,7 @@ idpf_execute_vc_cmd(struct idpf_adapter *adapter, struct idpf_cmd_info *args)
 	case VIRTCHNL2_OP_UNMAP_QUEUE_VECTOR:
 	case VIRTCHNL2_OP_ALLOC_VECTORS:
 	case VIRTCHNL2_OP_DEALLOC_VECTORS:
+	case VIRTCHNL2_OP_GET_STATS:
 		/* for init virtchnl ops, need to poll the response */
 		err = idpf_read_one_msg(adapter, args->ops, args->out_size, args->out_buffer);
 		clear_cmd(adapter);
@@ -806,6 +807,32 @@ idpf_vc_query_ptype_info(struct idpf_adapter *adapter)
 	return err;
 }
 
+int
+idpf_query_stats(struct idpf_vport *vport,
+		struct virtchnl2_vport_stats **pstats)
+{
+	struct idpf_adapter *adapter = vport->adapter;
+	struct virtchnl2_vport_stats vport_stats;
+	struct idpf_cmd_info args;
+	int err;
+
+	vport_stats.vport_id = vport->vport_id;
+	args.ops = VIRTCHNL2_OP_GET_STATS;
+	args.in_args = (u8 *)&vport_stats;
+	args.in_args_size = sizeof(vport_stats);
+	args.out_buffer = adapter->mbx_resp;
+	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
+
+	err = idpf_execute_vc_cmd(adapter, &args);
+	if (err) {
+		DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_GET_STATS");
+		*pstats = NULL;
+		return err;
+	}
+	*pstats = (struct virtchnl2_vport_stats *)args.out_buffer;
+	return 0;
+}
+
 #define IDPF_RX_BUF_STRIDE		64
 int
 idpf_vc_config_rxq(struct idpf_vport *vport, struct idpf_rx_queue *rxq)
diff --git a/drivers/common/idpf/idpf_common_virtchnl.h b/drivers/common/idpf/idpf_common_virtchnl.h
index d16b6b66f4..60347fe571 100644
--- a/drivers/common/idpf/idpf_common_virtchnl.h
+++ b/drivers/common/idpf/idpf_common_virtchnl.h
@@ -38,4 +38,7 @@ __rte_internal
 int idpf_vc_config_rxq(struct idpf_vport *vport, struct idpf_rx_queue *rxq);
 __rte_internal
 int idpf_vc_config_txq(struct idpf_vport *vport, struct idpf_tx_queue *txq);
+__rte_internal
+int idpf_query_stats(struct idpf_vport *vport,
+		     struct virtchnl2_vport_stats **pstats);
 #endif /* _IDPF_COMMON_VIRTCHNL_H_ */
diff --git a/drivers/common/idpf/version.map b/drivers/common/idpf/version.map
index 7018a1d695..6a1dc13302 100644
--- a/drivers/common/idpf/version.map
+++ b/drivers/common/idpf/version.map
@@ -50,6 +50,8 @@ INTERNAL {
 	idpf_splitq_recv_pkts_avx512;
 	idpf_singleq_xmit_pkts_avx512;
 	idpf_splitq_xmit_pkts_avx512;
+	idpf_update_stats;
+	idpf_query_stats;
 
 	local: *;
 };
diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index ee2dec7c7c..e8bb097c78 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -140,6 +140,86 @@ idpf_dev_supported_ptypes_get(struct rte_eth_dev *dev __rte_unused)
 	return ptypes;
 }
 
+static uint64_t
+idpf_get_mbuf_alloc_failed_stats(struct rte_eth_dev *dev)
+{
+	uint64_t mbuf_alloc_failed = 0;
+	struct idpf_rx_queue *rxq;
+	int i = 0;
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		rxq = dev->data->rx_queues[i];
+		mbuf_alloc_failed += rte_atomic64_read(&(rxq->rx_stats.mbuf_alloc_failed));
+	}
+
+	return mbuf_alloc_failed;
+}
+
+static int
+idpf_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
+{
+	struct idpf_vport *vport =
+		(struct idpf_vport *)dev->data->dev_private;
+	struct virtchnl2_vport_stats *pstats = NULL;
+	int ret;
+
+	ret = idpf_query_stats(vport, &pstats);
+	if (ret == 0) {
+		uint8_t crc_stats_len = (dev->data->dev_conf.rxmode.offloads &
+					 RTE_ETH_RX_OFFLOAD_KEEP_CRC) ? 0 :
+					 RTE_ETHER_CRC_LEN;
+
+		idpf_update_stats(&vport->eth_stats_offset, pstats);
+		stats->ipackets = pstats->rx_unicast + pstats->rx_multicast +
+				pstats->rx_broadcast - pstats->rx_discards;
+		stats->opackets = pstats->tx_broadcast + pstats->tx_multicast +
+						pstats->tx_unicast;
+		stats->imissed = pstats->rx_discards;
+		stats->oerrors = pstats->tx_errors + pstats->tx_discards;
+		stats->ibytes = pstats->rx_bytes;
+		stats->ibytes -= stats->ipackets * crc_stats_len;
+		stats->obytes = pstats->tx_bytes;
+
+		dev->data->rx_mbuf_alloc_failed = idpf_get_mbuf_alloc_failed_stats(dev);
+		stats->rx_nombuf = dev->data->rx_mbuf_alloc_failed;
+	} else {
+		PMD_DRV_LOG(ERR, "Get statistics failed");
+	}
+	return ret;
+}
+
+static void
+idpf_reset_mbuf_alloc_failed_stats(struct rte_eth_dev *dev)
+{
+	struct idpf_rx_queue *rxq;
+	int i;
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		rxq = dev->data->rx_queues[i];
+		rte_atomic64_set(&(rxq->rx_stats.mbuf_alloc_failed), 0);
+	}
+}
+
+static int
+idpf_dev_stats_reset(struct rte_eth_dev *dev)
+{
+	struct idpf_vport *vport =
+		(struct idpf_vport *)dev->data->dev_private;
+	struct virtchnl2_vport_stats *pstats = NULL;
+	int ret;
+
+	ret = idpf_query_stats(vport, &pstats);
+	if (ret != 0)
+		return ret;
+
+	/* set stats offset based on current values */
+	vport->eth_stats_offset = *pstats;
+
+	idpf_reset_mbuf_alloc_failed_stats(dev);
+
+	return 0;
+}
+
 static int
 idpf_init_rss(struct idpf_vport *vport)
 {
@@ -327,6 +407,11 @@ idpf_dev_start(struct rte_eth_dev *dev)
 		goto err_vport;
 	}
 
+	if (idpf_dev_stats_reset(dev)) {
+		PMD_DRV_LOG(ERR, "Failed to reset stats");
+		goto err_vport;
+	}
+
 	vport->stopped = 0;
 
 	return 0;
@@ -606,6 +691,8 @@ static const struct eth_dev_ops idpf_eth_dev_ops = {
 	.tx_queue_release		= idpf_dev_tx_queue_release,
 	.mtu_set			= idpf_dev_mtu_set,
 	.dev_supported_ptypes_get	= idpf_dev_supported_ptypes_get,
+	.stats_get			= idpf_dev_stats_get,
+	.stats_reset			= idpf_dev_stats_reset,
 };
 
 static uint16_t
-- 
2.25.1


^ permalink raw reply	[flat|nested] 63+ messages in thread

* [PATCH v2 2/6] common/idpf: add RSS set/get ops
  2023-01-11  7:15 ` [PATCH 0/6] add idpf pmd enhancement features Mingxia Liu
  2023-01-11  7:15   ` [PATCH v2 1/6] common/idpf: add hw statistics Mingxia Liu
@ 2023-01-11  7:15   ` Mingxia Liu
  2023-01-11  7:15   ` [PATCH v2 3/6] common/idpf: support single q scatter RX datapath Mingxia Liu
                     ` (4 subsequent siblings)
  6 siblings, 0 replies; 63+ messages in thread
From: Mingxia Liu @ 2023-01-11  7:15 UTC (permalink / raw)
  To: dev; +Cc: jingjing.wu, beilei.xing, Mingxia Liu

Add support for these device ops:
- rss_reta_update
- rss_reta_query
- rss_hash_update
- rss_hash_conf_get
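
A hedged sketch of how an application exercises these ops through the generic
ethdev API (error handling trimmed, port_id assumed valid; reta_size maps to
the vport's RSS LUT size):

#include <stdint.h>
#include <stdlib.h>
#include <rte_ethdev.h>

static void
dump_rss_config(uint16_t port_id)
{
	struct rte_eth_dev_info dev_info;
	/* rss_key left NULL: only the active hash types (rss_hf) are fetched. */
	struct rte_eth_rss_conf rss_conf = { .rss_key = NULL };
	struct rte_eth_rss_reta_entry64 *reta_conf;
	uint16_t i, nb_groups;

	if (rte_eth_dev_info_get(port_id, &dev_info) != 0)
		return;

	/* rss_hash_conf_get: reports the configured hash types in rss_conf.rss_hf. */
	rte_eth_dev_rss_hash_conf_get(port_id, &rss_conf);

	/* reta_query: one entry64 covers RTE_ETH_RETA_GROUP_SIZE LUT slots. */
	nb_groups = dev_info.reta_size / RTE_ETH_RETA_GROUP_SIZE;
	reta_conf = calloc(nb_groups, sizeof(*reta_conf));
	if (reta_conf == NULL)
		return;
	for (i = 0; i < nb_groups; i++)
		reta_conf[i].mask = UINT64_MAX;
	rte_eth_dev_rss_reta_query(port_id, reta_conf, dev_info.reta_size);
	free(reta_conf);
}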

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/common/idpf/idpf_common_device.h   |   1 +
 drivers/common/idpf/idpf_common_virtchnl.c | 119 ++++++++
 drivers/common/idpf/idpf_common_virtchnl.h |  15 +-
 drivers/common/idpf/version.map            |   6 +
 drivers/net/idpf/idpf_ethdev.c             | 303 +++++++++++++++++++++
 drivers/net/idpf/idpf_ethdev.h             |   5 +-
 6 files changed, 445 insertions(+), 4 deletions(-)

diff --git a/drivers/common/idpf/idpf_common_device.h b/drivers/common/idpf/idpf_common_device.h
index 5184dcee9f..d7d4cd5363 100644
--- a/drivers/common/idpf/idpf_common_device.h
+++ b/drivers/common/idpf/idpf_common_device.h
@@ -95,6 +95,7 @@ struct idpf_vport {
 	uint32_t *rss_lut;
 	uint8_t *rss_key;
 	uint64_t rss_hf;
+	uint64_t last_general_rss_hf;
 
 	/* MSIX info*/
 	struct virtchnl2_queue_vector *qv_map; /* queue vector mapping */
diff --git a/drivers/common/idpf/idpf_common_virtchnl.c b/drivers/common/idpf/idpf_common_virtchnl.c
index 80351d15de..ae5a983836 100644
--- a/drivers/common/idpf/idpf_common_virtchnl.c
+++ b/drivers/common/idpf/idpf_common_virtchnl.c
@@ -218,6 +218,9 @@ idpf_execute_vc_cmd(struct idpf_adapter *adapter, struct idpf_cmd_info *args)
 	case VIRTCHNL2_OP_ALLOC_VECTORS:
 	case VIRTCHNL2_OP_DEALLOC_VECTORS:
 	case VIRTCHNL2_OP_GET_STATS:
+	case VIRTCHNL2_OP_GET_RSS_KEY:
+	case VIRTCHNL2_OP_GET_RSS_HASH:
+	case VIRTCHNL2_OP_GET_RSS_LUT:
 		/* for init virtchnl ops, need to poll the response */
 		err = idpf_read_one_msg(adapter, args->ops, args->out_size, args->out_buffer);
 		clear_cmd(adapter);
@@ -448,6 +451,48 @@ idpf_vc_set_rss_key(struct idpf_vport *vport)
 	return err;
 }
 
+int idpf_vc_get_rss_key(struct idpf_vport *vport)
+{
+	struct idpf_adapter *adapter = vport->adapter;
+	struct virtchnl2_rss_key *rss_key_ret;
+	struct virtchnl2_rss_key rss_key;
+	struct idpf_cmd_info args;
+	int err;
+
+	memset(&rss_key, 0, sizeof(rss_key));
+	rss_key.vport_id = vport->vport_id;
+
+	memset(&args, 0, sizeof(args));
+	args.ops = VIRTCHNL2_OP_GET_RSS_KEY;
+	args.in_args = (uint8_t *)&rss_key;
+	args.in_args_size = sizeof(rss_key);
+	args.out_buffer = adapter->mbx_resp;
+	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
+
+	err = idpf_execute_vc_cmd(adapter, &args);
+
+	if (!err) {
+		rss_key_ret = (struct virtchnl2_rss_key *)args.out_buffer;
+		if (rss_key_ret->key_len != vport->rss_key_size) {
+			rte_free(vport->rss_key);
+			vport->rss_key = NULL;
+			vport->rss_key_size = RTE_MIN(IDPF_RSS_KEY_LEN,
+						      rss_key_ret->key_len);
+			vport->rss_key = rte_zmalloc("rss_key", vport->rss_key_size, 0);
+			if (!vport->rss_key) {
+				vport->rss_key_size = 0;
+				DRV_LOG(ERR, "Failed to allocate RSS key");
+				return -ENOMEM;
+			}
+		}
+		rte_memcpy(vport->rss_key, rss_key_ret->key, vport->rss_key_size);
+	} else {
+		DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_GET_RSS_KEY");
+	}
+
+	return err;
+}
+
 int
 idpf_vc_set_rss_lut(struct idpf_vport *vport)
 {
@@ -482,6 +527,48 @@ idpf_vc_set_rss_lut(struct idpf_vport *vport)
 	return err;
 }
 
+int
+idpf_vc_get_rss_lut(struct idpf_vport *vport)
+{
+	struct idpf_adapter *adapter = vport->adapter;
+	struct virtchnl2_rss_lut *rss_lut_ret;
+	struct virtchnl2_rss_lut rss_lut;
+	struct idpf_cmd_info args;
+	int err;
+
+	memset(&rss_lut, 0, sizeof(rss_lut));
+	rss_lut.vport_id = vport->vport_id;
+
+	memset(&args, 0, sizeof(args));
+	args.ops = VIRTCHNL2_OP_GET_RSS_LUT;
+	args.in_args = (uint8_t *)&rss_lut;
+	args.in_args_size = sizeof(rss_lut);
+	args.out_buffer = adapter->mbx_resp;
+	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
+
+	err = idpf_execute_vc_cmd(adapter, &args);
+
+	if (!err) {
+		rss_lut_ret = (struct virtchnl2_rss_lut *)args.out_buffer;
+		if (rss_lut_ret->lut_entries != vport->rss_lut_size) {
+			rte_free(vport->rss_lut);
+			vport->rss_lut = NULL;
+			vport->rss_lut = rte_zmalloc("rss_lut",
+				     sizeof(uint32_t) * rss_lut_ret->lut_entries, 0);
+			if (vport->rss_lut == NULL) {
+				DRV_LOG(ERR, "Failed to allocate RSS lut");
+				return -ENOMEM;
+			}
+		}
+		rte_memcpy(vport->rss_lut, rss_lut_ret->lut, rss_lut_ret->lut_entries);
+		vport->rss_lut_size = rss_lut_ret->lut_entries;
+	} else {
+		DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_GET_RSS_LUT");
+	}
+
+	return err;
+}
+
 int
 idpf_vc_set_rss_hash(struct idpf_vport *vport)
 {
@@ -508,6 +595,38 @@ idpf_vc_set_rss_hash(struct idpf_vport *vport)
 	return err;
 }
 
+int
+idpf_vc_get_rss_hash(struct idpf_vport *vport)
+{
+	struct idpf_adapter *adapter = vport->adapter;
+	struct virtchnl2_rss_hash *rss_hash_ret;
+	struct virtchnl2_rss_hash rss_hash;
+	struct idpf_cmd_info args;
+	int err;
+
+	memset(&rss_hash, 0, sizeof(rss_hash));
+	rss_hash.ptype_groups = vport->rss_hf;
+	rss_hash.vport_id = vport->vport_id;
+
+	memset(&args, 0, sizeof(args));
+	args.ops = VIRTCHNL2_OP_GET_RSS_HASH;
+	args.in_args = (uint8_t *)&rss_hash;
+	args.in_args_size = sizeof(rss_hash);
+	args.out_buffer = adapter->mbx_resp;
+	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
+
+	err = idpf_execute_vc_cmd(adapter, &args);
+
+	if (!err) {
+		rss_hash_ret = (struct virtchnl2_rss_hash *)args.out_buffer;
+		vport->rss_hf = rss_hash_ret->ptype_groups;
+	} else {
+		DRV_LOG(ERR, "Failed to execute command of OP_GET_RSS_HASH");
+	}
+
+	return err;
+}
+
 int
 idpf_vc_config_irq_map_unmap(struct idpf_vport *vport, uint16_t nb_rxq, bool map)
 {
diff --git a/drivers/common/idpf/idpf_common_virtchnl.h b/drivers/common/idpf/idpf_common_virtchnl.h
index 60347fe571..b5d245a64f 100644
--- a/drivers/common/idpf/idpf_common_virtchnl.h
+++ b/drivers/common/idpf/idpf_common_virtchnl.h
@@ -13,9 +13,6 @@ int idpf_vc_get_caps(struct idpf_adapter *adapter);
 int idpf_vc_create_vport(struct idpf_vport *vport,
 			 struct virtchnl2_create_vport *vport_info);
 int idpf_vc_destroy_vport(struct idpf_vport *vport);
-int idpf_vc_set_rss_key(struct idpf_vport *vport);
-int idpf_vc_set_rss_lut(struct idpf_vport *vport);
-int idpf_vc_set_rss_hash(struct idpf_vport *vport);
 int idpf_vc_config_irq_map_unmap(struct idpf_vport *vport,
 				 uint16_t nb_rxq, bool map);
 int idpf_vc_query_ptype_info(struct idpf_adapter *adapter);
@@ -41,4 +38,16 @@ int idpf_vc_config_txq(struct idpf_vport *vport, struct idpf_tx_queue *txq);
 __rte_internal
 int idpf_query_stats(struct idpf_vport *vport,
 		     struct virtchnl2_vport_stats **pstats);
+__rte_internal
+int idpf_vc_set_rss_key(struct idpf_vport *vport);
+__rte_internal
+int idpf_vc_get_rss_key(struct idpf_vport *vport);
+__rte_internal
+int idpf_vc_set_rss_lut(struct idpf_vport *vport);
+__rte_internal
+int idpf_vc_get_rss_lut(struct idpf_vport *vport);
+__rte_internal
+int idpf_vc_set_rss_hash(struct idpf_vport *vport);
+__rte_internal
+int idpf_vc_get_rss_hash(struct idpf_vport *vport);
 #endif /* _IDPF_COMMON_VIRTCHNL_H_ */
diff --git a/drivers/common/idpf/version.map b/drivers/common/idpf/version.map
index 6a1dc13302..cba08c6b4a 100644
--- a/drivers/common/idpf/version.map
+++ b/drivers/common/idpf/version.map
@@ -52,6 +52,12 @@ INTERNAL {
 	idpf_splitq_xmit_pkts_avx512;
 	idpf_update_stats;
 	idpf_query_stats;
+	idpf_vc_set_rss_key;
+	idpf_vc_get_rss_key;
+	idpf_vc_set_rss_lut;
+	idpf_vc_get_rss_lut;
+	idpf_vc_set_rss_hash;
+	idpf_vc_get_rss_hash;
 
 	local: *;
 };
diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index e8bb097c78..037cabb04e 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -29,6 +29,56 @@ static const char * const idpf_valid_args[] = {
 	NULL
 };
 
+static const uint64_t idpf_map_hena_rss[] = {
+	[IDPF_HASH_NONF_UNICAST_IPV4_UDP] =
+			RTE_ETH_RSS_NONFRAG_IPV4_UDP,
+	[IDPF_HASH_NONF_MULTICAST_IPV4_UDP] =
+			RTE_ETH_RSS_NONFRAG_IPV4_UDP,
+	[IDPF_HASH_NONF_IPV4_UDP] =
+			RTE_ETH_RSS_NONFRAG_IPV4_UDP,
+	[IDPF_HASH_NONF_IPV4_TCP_SYN_NO_ACK] =
+			RTE_ETH_RSS_NONFRAG_IPV4_TCP,
+	[IDPF_HASH_NONF_IPV4_TCP] =
+			RTE_ETH_RSS_NONFRAG_IPV4_TCP,
+	[IDPF_HASH_NONF_IPV4_SCTP] =
+			RTE_ETH_RSS_NONFRAG_IPV4_SCTP,
+	[IDPF_HASH_NONF_IPV4_OTHER] =
+			RTE_ETH_RSS_NONFRAG_IPV4_OTHER,
+	[IDPF_HASH_FRAG_IPV4] = RTE_ETH_RSS_FRAG_IPV4,
+
+	/* IPv6 */
+	[IDPF_HASH_NONF_UNICAST_IPV6_UDP] =
+			RTE_ETH_RSS_NONFRAG_IPV6_UDP,
+	[IDPF_HASH_NONF_MULTICAST_IPV6_UDP] =
+			RTE_ETH_RSS_NONFRAG_IPV6_UDP,
+	[IDPF_HASH_NONF_IPV6_UDP] =
+			RTE_ETH_RSS_NONFRAG_IPV6_UDP,
+	[IDPF_HASH_NONF_IPV6_TCP_SYN_NO_ACK] =
+			RTE_ETH_RSS_NONFRAG_IPV6_TCP,
+	[IDPF_HASH_NONF_IPV6_TCP] =
+			RTE_ETH_RSS_NONFRAG_IPV6_TCP,
+	[IDPF_HASH_NONF_IPV6_SCTP] =
+			RTE_ETH_RSS_NONFRAG_IPV6_SCTP,
+	[IDPF_HASH_NONF_IPV6_OTHER] =
+			RTE_ETH_RSS_NONFRAG_IPV6_OTHER,
+	[IDPF_HASH_FRAG_IPV6] = RTE_ETH_RSS_FRAG_IPV6,
+
+	/* L2 Payload */
+	[IDPF_HASH_L2_PAYLOAD] = RTE_ETH_RSS_L2_PAYLOAD
+};
+
+static const uint64_t idpf_ipv4_rss = RTE_ETH_RSS_NONFRAG_IPV4_UDP |
+			  RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+			  RTE_ETH_RSS_NONFRAG_IPV4_SCTP |
+			  RTE_ETH_RSS_NONFRAG_IPV4_OTHER |
+			  RTE_ETH_RSS_FRAG_IPV4;
+
+static const uint64_t idpf_ipv6_rss = RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+			  RTE_ETH_RSS_NONFRAG_IPV6_TCP |
+			  RTE_ETH_RSS_NONFRAG_IPV6_SCTP |
+			  RTE_ETH_RSS_NONFRAG_IPV6_OTHER |
+			  RTE_ETH_RSS_FRAG_IPV6;
+
 static int
 idpf_dev_link_update(struct rte_eth_dev *dev,
 		     __rte_unused int wait_to_complete)
@@ -59,6 +109,9 @@ idpf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->max_mtu = vport->max_mtu;
 	dev_info->min_mtu = RTE_ETHER_MIN_MTU;
 
+	dev_info->hash_key_size = vport->rss_key_size;
+	dev_info->reta_size = vport->rss_lut_size;
+
 	dev_info->flow_type_rss_offloads = IDPF_RSS_OFFLOAD_ALL;
 
 	dev_info->rx_offload_capa =
@@ -220,6 +273,54 @@ idpf_dev_stats_reset(struct rte_eth_dev *dev)
 	return 0;
 }
 
+static int idpf_config_rss_hf(struct idpf_vport *vport, uint64_t rss_hf)
+{
+	uint64_t hena = 0, valid_rss_hf = 0;
+	int ret = 0;
+	uint16_t i;
+
+	/**
+	 * RTE_ETH_RSS_IPV4 and RTE_ETH_RSS_IPV6 can be considered as 2
+	 * generalizations of all other IPv4 and IPv6 RSS types.
+	 */
+	if (rss_hf & RTE_ETH_RSS_IPV4)
+		rss_hf |= idpf_ipv4_rss;
+
+	if (rss_hf & RTE_ETH_RSS_IPV6)
+		rss_hf |= idpf_ipv6_rss;
+
+	for (i = 0; i < RTE_DIM(idpf_map_hena_rss); i++) {
+		uint64_t bit = BIT_ULL(i);
+
+		if (idpf_map_hena_rss[i] & rss_hf) {
+			valid_rss_hf |= idpf_map_hena_rss[i];
+			hena |= bit;
+		}
+	}
+
+	vport->rss_hf = hena;
+
+	ret = idpf_vc_set_rss_hash(vport);
+	if (ret != 0) {
+		PMD_DRV_LOG(WARNING,
+			    "fail to set RSS offload types, ret: %d", ret);
+		return ret;
+	}
+
+	if (valid_rss_hf & idpf_ipv4_rss)
+		valid_rss_hf |= rss_hf & RTE_ETH_RSS_IPV4;
+
+	if (valid_rss_hf & idpf_ipv6_rss)
+		valid_rss_hf |= rss_hf & RTE_ETH_RSS_IPV6;
+
+	if (rss_hf & ~valid_rss_hf)
+		PMD_DRV_LOG(WARNING, "Unsupported rss_hf 0x%" PRIx64,
+			    rss_hf & ~valid_rss_hf);
+	vport->last_general_rss_hf = valid_rss_hf;
+
+	return ret;
+}
+
 static int
 idpf_init_rss(struct idpf_vport *vport)
 {
@@ -256,6 +357,204 @@ idpf_init_rss(struct idpf_vport *vport)
 	return ret;
 }
 
+static int
+idpf_rss_reta_update(struct rte_eth_dev *dev,
+		     struct rte_eth_rss_reta_entry64 *reta_conf,
+		     uint16_t reta_size)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_adapter *adapter = vport->adapter;
+	uint16_t idx, shift;
+	uint32_t *lut;
+	int ret = 0;
+	uint16_t i;
+
+	if (adapter->caps.rss_caps == 0 || dev->data->nb_rx_queues == 0) {
+		PMD_DRV_LOG(DEBUG, "RSS is not supported");
+		return -ENOTSUP;
+	}
+
+	if (reta_size != vport->rss_lut_size) {
+		PMD_DRV_LOG(ERR, "The size of hash lookup table configured "
+				 "(%d) doesn't match the number of hardware can "
+				 "support (%d)",
+			    reta_size, vport->rss_lut_size);
+		return -EINVAL;
+	}
+
+	/* It MUST use the current LUT size to get the RSS lookup table,
+	 * otherwise it will fail with a -100 error code.
+	 */
+	lut = rte_zmalloc(NULL, reta_size * sizeof(uint32_t), 0);
+	if (!lut) {
+		PMD_DRV_LOG(ERR, "No memory can be allocated");
+		return -ENOMEM;
+	}
+	/* store the old lut table temporarily */
+	rte_memcpy(lut, vport->rss_lut, reta_size * sizeof(uint32_t));
+
+	for (i = 0; i < reta_size; i++) {
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
+		if (reta_conf[idx].mask & (1ULL << shift))
+			lut[i] = reta_conf[idx].reta[shift];
+	}
+
+	rte_memcpy(vport->rss_lut, lut, reta_size * sizeof(uint32_t));
+	/* send virtchnl ops to configure RSS */
+	ret = idpf_vc_set_rss_lut(vport);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to configure RSS lut");
+		goto out;
+	}
+out:
+	rte_free(lut);
+
+	return ret;
+}
+
+static int
+idpf_rss_reta_query(struct rte_eth_dev *dev,
+		    struct rte_eth_rss_reta_entry64 *reta_conf,
+		    uint16_t reta_size)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_adapter *adapter = vport->adapter;
+	uint16_t idx, shift;
+	int ret = 0;
+	uint16_t i;
+
+	if (adapter->caps.rss_caps == 0 || dev->data->nb_rx_queues == 0) {
+		PMD_DRV_LOG(DEBUG, "RSS is not supported");
+		return -ENOTSUP;
+	}
+
+	if (reta_size != vport->rss_lut_size) {
+		PMD_DRV_LOG(ERR, "The size of hash lookup table configured "
+			"(%d) doesn't match the number of hardware can "
+			"support (%d)", reta_size, vport->rss_lut_size);
+		return -EINVAL;
+	}
+
+	ret = idpf_vc_get_rss_lut(vport);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "Failed to get RSS LUT");
+		return ret;
+	}
+
+	for (i = 0; i < reta_size; i++) {
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
+		if (reta_conf[idx].mask & (1ULL << shift))
+			reta_conf[idx].reta[shift] = vport->rss_lut[i];
+	}
+
+	return 0;
+}
+
+static int
+idpf_rss_hash_update(struct rte_eth_dev *dev,
+		     struct rte_eth_rss_conf *rss_conf)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_adapter *adapter = vport->adapter;
+	int ret = 0;
+
+	if (adapter->caps.rss_caps == 0 || dev->data->nb_rx_queues == 0) {
+		PMD_DRV_LOG(DEBUG, "RSS is not supported");
+		return -ENOTSUP;
+	}
+
+	if (!rss_conf->rss_key || rss_conf->rss_key_len == 0) {
+		PMD_DRV_LOG(DEBUG, "No key to be configured");
+		goto skip_rss_key;
+	} else if (rss_conf->rss_key_len != vport->rss_key_size) {
+		PMD_DRV_LOG(ERR, "The size of hash key configured "
+				 "(%d) doesn't match the size of hardware can "
+				 "support (%d)",
+			    rss_conf->rss_key_len,
+			    vport->rss_key_size);
+		return -EINVAL;
+	}
+
+	rte_memcpy(vport->rss_key, rss_conf->rss_key,
+		   vport->rss_key_size);
+	ret = idpf_vc_set_rss_key(vport);
+	if (ret != 0) {
+		PMD_INIT_LOG(ERR, "Failed to configure RSS key");
+		return ret;
+	}
+
+skip_rss_key:
+	ret = idpf_config_rss_hf(vport, rss_conf->rss_hf);
+	if (ret != 0) {
+		PMD_INIT_LOG(ERR, "Failed to configure RSS hash");
+		return ret;
+	}
+
+	return 0;
+}
+
+static uint64_t
+idpf_map_general_rss_hf(uint64_t config_rss_hf, uint64_t last_general_rss_hf)
+{
+	uint64_t valid_rss_hf = 0;
+	uint16_t i;
+
+	for (i = 0; i < RTE_DIM(idpf_map_hena_rss); i++) {
+		uint64_t bit = BIT_ULL(i);
+
+		if (bit & config_rss_hf)
+			valid_rss_hf |= idpf_map_hena_rss[i];
+	}
+
+	if (valid_rss_hf & idpf_ipv4_rss)
+		valid_rss_hf |= last_general_rss_hf & RTE_ETH_RSS_IPV4;
+
+	if (valid_rss_hf & idpf_ipv6_rss)
+		valid_rss_hf |= last_general_rss_hf & RTE_ETH_RSS_IPV6;
+
+	return valid_rss_hf;
+}
+
+static int
+idpf_rss_hash_conf_get(struct rte_eth_dev *dev,
+		       struct rte_eth_rss_conf *rss_conf)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_adapter *adapter = vport->adapter;
+	int ret = 0;
+
+	if (adapter->caps.rss_caps == 0 || dev->data->nb_rx_queues == 0) {
+		PMD_DRV_LOG(DEBUG, "RSS is not supported");
+		return -ENOTSUP;
+	}
+
+	ret = idpf_vc_get_rss_hash(vport);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "Failed to get RSS hf");
+		return ret;
+	}
+
+	rss_conf->rss_hf = idpf_map_general_rss_hf(vport->rss_hf, vport->last_general_rss_hf);
+
+	if (!rss_conf->rss_key)
+		return 0;
+
+	ret = idpf_vc_get_rss_key(vport);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "Failed to get RSS key");
+		return ret;
+	}
+
+	if (rss_conf->rss_key_len > vport->rss_key_size)
+		rss_conf->rss_key_len = vport->rss_key_size;
+
+	rte_memcpy(rss_conf->rss_key, vport->rss_key, rss_conf->rss_key_len);
+
+	return 0;
+}
+
 static int
 idpf_dev_configure(struct rte_eth_dev *dev)
 {
@@ -693,6 +992,10 @@ static const struct eth_dev_ops idpf_eth_dev_ops = {
 	.dev_supported_ptypes_get	= idpf_dev_supported_ptypes_get,
 	.stats_get			= idpf_dev_stats_get,
 	.stats_reset			= idpf_dev_stats_reset,
+	.reta_update			= idpf_rss_reta_update,
+	.reta_query			= idpf_rss_reta_query,
+	.rss_hash_update		= idpf_rss_hash_update,
+	.rss_hash_conf_get		= idpf_rss_hash_conf_get,
 };
 
 static uint16_t
diff --git a/drivers/net/idpf/idpf_ethdev.h b/drivers/net/idpf/idpf_ethdev.h
index d791d402fb..5bd1b441ea 100644
--- a/drivers/net/idpf/idpf_ethdev.h
+++ b/drivers/net/idpf/idpf_ethdev.h
@@ -48,7 +48,10 @@
 		RTE_ETH_RSS_NONFRAG_IPV6_TCP    |	\
 		RTE_ETH_RSS_NONFRAG_IPV6_UDP    |	\
 		RTE_ETH_RSS_NONFRAG_IPV6_SCTP   |	\
-		RTE_ETH_RSS_NONFRAG_IPV6_OTHER)
+		RTE_ETH_RSS_NONFRAG_IPV6_OTHER  |	\
+		RTE_ETH_RSS_L2_PAYLOAD)
+
+#define IDPF_RSS_KEY_LEN 52
 
 #define IDPF_ADAPTER_NAME_LEN	(PCI_PRI_STR_SIZE + 1)
 
-- 
2.25.1


^ permalink raw reply	[flat|nested] 63+ messages in thread

* [PATCH v2 3/6] common/idpf: support single q scatter RX datapath
  2023-01-11  7:15 ` [PATCH 0/6] add idpf pmd enhancement features Mingxia Liu
  2023-01-11  7:15   ` [PATCH v2 1/6] common/idpf: add hw statistics Mingxia Liu
  2023-01-11  7:15   ` [PATCH v2 2/6] common/idpf: add RSS set/get ops Mingxia Liu
@ 2023-01-11  7:15   ` Mingxia Liu
  2023-01-11  7:15   ` [PATCH v2 4/6] common/idpf: add rss_offload hash in singleq rx Mingxia Liu
                     ` (3 subsequent siblings)
  6 siblings, 0 replies; 63+ messages in thread
From: Mingxia Liu @ 2023-01-11  7:15 UTC (permalink / raw)
  To: dev; +Cc: jingjing.wu, beilei.xing, Mingxia Liu, Wenjun Wu

This patch adds the single queue scattered Rx receive function.
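
A hedged sketch of the port configuration that selects this datapath:
requesting RTE_ETH_RX_OFFLOAD_SCATTER (or using an MTU larger than one mbuf's
data room) makes idpf_set_rx_function() pick idpf_singleq_recv_scatter_pkts
in single queue mode. The MTU and queue counts below are illustrative:

#include <rte_ethdev.h>

static int
configure_scattered_rx(uint16_t port_id)
{
	struct rte_eth_conf port_conf = {0};

	/* A 9000-byte MTU with 2 KB mbufs forces multi-segment receive. */
	port_conf.rxmode.mtu = 9000;
	port_conf.rxmode.offloads = RTE_ETH_RX_OFFLOAD_SCATTER;

	/* One Rx queue and one Tx queue; queue setup/start omitted. */
	return rte_eth_dev_configure(port_id, 1, 1, &port_conf);
}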

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
---
 drivers/common/idpf/idpf_common_rxtx.c | 134 +++++++++++++++++++++++++
 drivers/common/idpf/idpf_common_rxtx.h |   3 +
 drivers/common/idpf/version.map        |   1 +
 drivers/net/idpf/idpf_ethdev.c         |   3 +-
 drivers/net/idpf/idpf_rxtx.c           |  26 ++++-
 drivers/net/idpf/idpf_rxtx.h           |   2 +
 6 files changed, 166 insertions(+), 3 deletions(-)

diff --git a/drivers/common/idpf/idpf_common_rxtx.c b/drivers/common/idpf/idpf_common_rxtx.c
index 3a9a32dddd..6bd40c4b0c 100644
--- a/drivers/common/idpf/idpf_common_rxtx.c
+++ b/drivers/common/idpf/idpf_common_rxtx.c
@@ -1146,6 +1146,140 @@ idpf_singleq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 	return nb_rx;
 }
 
+uint16_t
+idpf_singleq_recv_scatter_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
+			       uint16_t nb_pkts)
+{
+	struct idpf_rx_queue *rxq = rx_queue;
+	volatile union virtchnl2_rx_desc *rx_ring = rxq->rx_ring;
+	volatile union virtchnl2_rx_desc *rxdp;
+	union virtchnl2_rx_desc rxd;
+	struct idpf_adapter *ad;
+	struct rte_mbuf *first_seg = rxq->pkt_first_seg;
+	struct rte_mbuf *last_seg = rxq->pkt_last_seg;
+	struct rte_mbuf *rxm;
+	struct rte_mbuf *nmb;
+	struct rte_eth_dev *dev;
+	const uint32_t *ptype_tbl = rxq->adapter->ptype_tbl;
+	uint16_t nb_hold = 0, nb_rx = 0;
+	uint16_t rx_id = rxq->rx_tail;
+	uint16_t rx_packet_len;
+	uint16_t rx_status0;
+	uint64_t pkt_flags;
+	uint64_t dma_addr;
+	uint64_t ts_ns;
+
+	ad = rxq->adapter;
+
+	if (unlikely(!rxq) || unlikely(!rxq->q_started))
+		return nb_rx;
+
+	while (nb_rx < nb_pkts) {
+		rxdp = &rx_ring[rx_id];
+		rx_status0 = rte_le_to_cpu_16(rxdp->flex_nic_wb.status_error0);
+
+		/* Check the DD bit first */
+		if (!(rx_status0 & (1 << VIRTCHNL2_RX_FLEX_DESC_STATUS0_DD_S)))
+			break;
+
+		nmb = rte_mbuf_raw_alloc(rxq->mp);
+		if (unlikely(!nmb)) {
+			rte_atomic64_inc(&(rxq->rx_stats.mbuf_alloc_failed));
+			RX_LOG(DEBUG, "RX mbuf alloc failed port_id=%u "
+			       "queue_id=%u", rxq->port_id, rxq->queue_id);
+			break;
+		}
+
+		rxd = *rxdp;
+
+		nb_hold++;
+		rxm = rxq->sw_ring[rx_id];
+		rxq->sw_ring[rx_id] = nmb;
+		rx_id++;
+		if (unlikely(rx_id == rxq->nb_rx_desc))
+			rx_id = 0;
+
+		/* Prefetch next mbuf */
+		rte_prefetch0(rxq->sw_ring[rx_id]);
+
+		/* When next RX descriptor is on a cache line boundary,
+		 * prefetch the next 4 RX descriptors and next 8 pointers
+		 * to mbufs.
+		 */
+		if ((rx_id & 0x3) == 0) {
+			rte_prefetch0(&rx_ring[rx_id]);
+			rte_prefetch0(rxq->sw_ring[rx_id]);
+		}
+		dma_addr =
+			rte_cpu_to_le_64(rte_mbuf_data_iova_default(nmb));
+		rxdp->read.hdr_addr = 0;
+		rxdp->read.pkt_addr = dma_addr;
+		rx_packet_len = (rte_cpu_to_le_16(rxd.flex_nic_wb.pkt_len) &
+				 VIRTCHNL2_RX_FLEX_DESC_PKT_LEN_M);
+		rxm->data_len = rx_packet_len;
+		rxm->data_off = RTE_PKTMBUF_HEADROOM;
+
+		/**
+		 * If this is the first buffer of the received packet, set the
+		 * pointer to the first mbuf of the packet and initialize its
+		 * context. Otherwise, update the total length and the number
+		 * of segments of the current scattered packet, and update the
+		 * pointer to the last mbuf of the current packet.
+		 */
+		if (!first_seg) {
+			first_seg = rxm;
+			first_seg->nb_segs = 1;
+			first_seg->pkt_len = rx_packet_len;
+		} else {
+			first_seg->pkt_len =
+				(uint16_t)(first_seg->pkt_len +
+					   rx_packet_len);
+			first_seg->nb_segs++;
+			last_seg->next = rxm;
+		}
+
+		if (!(rx_status0 & (1 << VIRTCHNL2_RX_FLEX_DESC_STATUS0_EOF_S))) {
+			last_seg = rxm;
+			continue;
+		}
+
+		rxm->next = NULL;
+
+		first_seg->port = rxq->port_id;
+		first_seg->ol_flags = 0;
+		pkt_flags = idpf_rxd_to_pkt_flags(rx_status0);
+		first_seg->packet_type =
+			ptype_tbl[(uint8_t)(rte_cpu_to_le_16(rxd.flex_nic_wb.ptype_flex_flags0) &
+				VIRTCHNL2_RX_FLEX_DESC_PTYPE_M)];
+
+		if (idpf_timestamp_dynflag > 0 &&
+		    (rxq->offloads & IDPF_RX_OFFLOAD_TIMESTAMP) != 0) {
+			/* timestamp */
+			ts_ns = idpf_tstamp_convert_32b_64b(ad,
+				rxq->hw_register_set,
+				rte_le_to_cpu_32(rxd.flex_nic_wb.flex_ts.ts_high));
+			rxq->hw_register_set = 0;
+			*RTE_MBUF_DYNFIELD(rxm,
+					   idpf_timestamp_dynfield_offset,
+					   rte_mbuf_timestamp_t *) = ts_ns;
+			first_seg->ol_flags |= idpf_timestamp_dynflag;
+		}
+
+		first_seg->ol_flags |= pkt_flags;
+		rte_prefetch0(RTE_PTR_ADD(first_seg->buf_addr,
+					  first_seg->data_off));
+		rx_pkts[nb_rx++] = first_seg;
+		first_seg = NULL;
+	}
+	rxq->rx_tail = rx_id;
+	rxq->pkt_first_seg = first_seg;
+	rxq->pkt_last_seg = last_seg;
+
+	idpf_update_rx_tail(rxq, nb_hold, rx_id);
+
+	return nb_rx;
+}
+
 static inline int
 idpf_xmit_cleanup(struct idpf_tx_queue *txq)
 {
diff --git a/drivers/common/idpf/idpf_common_rxtx.h b/drivers/common/idpf/idpf_common_rxtx.h
index d44d92101a..0209750187 100644
--- a/drivers/common/idpf/idpf_common_rxtx.h
+++ b/drivers/common/idpf/idpf_common_rxtx.h
@@ -260,6 +260,9 @@ __rte_internal
 uint16_t idpf_splitq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 			       uint16_t nb_pkts);
 __rte_internal
+uint16_t idpf_singleq_recv_scatter_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
+			  uint16_t nb_pkts);
+__rte_internal
 uint16_t idpf_splitq_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 			       uint16_t nb_pkts);
 __rte_internal
diff --git a/drivers/common/idpf/version.map b/drivers/common/idpf/version.map
index cba08c6b4a..1805e2cb04 100644
--- a/drivers/common/idpf/version.map
+++ b/drivers/common/idpf/version.map
@@ -41,6 +41,7 @@ INTERNAL {
 	idpf_splitq_recv_pkts;
 	idpf_splitq_xmit_pkts;
 	idpf_singleq_recv_pkts;
+	idpf_singleq_recv_scatter_pkts;
 	idpf_singleq_xmit_pkts;
 	idpf_prep_pkts;
 	idpf_singleq_rx_vec_setup;
diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index 037cabb04e..2ab31792ba 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -119,7 +119,8 @@ idpf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 		RTE_ETH_RX_OFFLOAD_UDP_CKSUM            |
 		RTE_ETH_RX_OFFLOAD_TCP_CKSUM            |
 		RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM     |
-		RTE_ETH_RX_OFFLOAD_TIMESTAMP;
+		RTE_ETH_RX_OFFLOAD_TIMESTAMP		|
+		RTE_ETH_RX_OFFLOAD_SCATTER;
 
 	dev_info->tx_offload_capa =
 		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM		|
diff --git a/drivers/net/idpf/idpf_rxtx.c b/drivers/net/idpf/idpf_rxtx.c
index 646b3a6798..d96c93eb37 100644
--- a/drivers/net/idpf/idpf_rxtx.c
+++ b/drivers/net/idpf/idpf_rxtx.c
@@ -503,6 +503,8 @@ int
 idpf_rx_queue_init(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 {
 	struct idpf_rx_queue *rxq;
+	uint16_t max_pkt_len;
+	uint32_t frame_size;
 	int err;
 
 	if (rx_queue_id >= dev->data->nb_rx_queues)
@@ -516,6 +518,17 @@ idpf_rx_queue_init(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 		return -EINVAL;
 	}
 
+	frame_size = dev->data->mtu + IDPF_ETH_OVERHEAD;
+
+	max_pkt_len =
+	    RTE_MIN((uint32_t)IDPF_SUPPORT_CHAIN_NUM * rxq->rx_buf_len,
+		    frame_size);
+
+	rxq->max_pkt_len = max_pkt_len;
+	if ((dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_SCATTER) ||
+	    frame_size > rxq->rx_buf_len)
+		dev->data->scattered_rx = 1;
+
 	err = idpf_register_ts_mbuf(rxq);
 	if (err != 0) {
 		PMD_DRV_LOG(ERR, "fail to regidter timestamp mbuf %u",
@@ -801,13 +814,22 @@ idpf_set_rx_function(struct rte_eth_dev *dev)
 #endif /* CC_AVX512_SUPPORT */
 		}
 
+		if (dev->data->scattered_rx) {
+			dev->rx_pkt_burst = idpf_singleq_recv_scatter_pkts;
+			return;
+		}
 		dev->rx_pkt_burst = idpf_singleq_recv_pkts;
 	}
 #else
-	if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT)
+	if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
 		dev->rx_pkt_burst = idpf_splitq_recv_pkts;
-	else
+	} else {
+		if (dev->data->scattered_rx) {
+			dev->rx_pkt_burst = idpf_singleq_recv_scatter_pkts;
+			return;
+		}
 		dev->rx_pkt_burst = idpf_singleq_recv_pkts;
+	}
 #endif /* RTE_ARCH_X86 */
 }
 
diff --git a/drivers/net/idpf/idpf_rxtx.h b/drivers/net/idpf/idpf_rxtx.h
index 3a5084dfd6..41a7495083 100644
--- a/drivers/net/idpf/idpf_rxtx.h
+++ b/drivers/net/idpf/idpf_rxtx.h
@@ -23,6 +23,8 @@
 #define IDPF_DEFAULT_TX_RS_THRESH	32
 #define IDPF_DEFAULT_TX_FREE_THRESH	32
 
+#define IDPF_SUPPORT_CHAIN_NUM 5
+
 int idpf_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 			uint16_t nb_desc, unsigned int socket_id,
 			const struct rte_eth_rxconf *rx_conf,
-- 
2.25.1


^ permalink raw reply	[flat|nested] 63+ messages in thread

* [PATCH v2 4/6] common/idpf: add rss_offload hash in singleq rx
  2023-01-11  7:15 ` [PATCH 0/6] add idpf pmd enhancement features Mingxia Liu
                     ` (2 preceding siblings ...)
  2023-01-11  7:15   ` [PATCH v2 3/6] common/idpf: support single q scatter RX datapath Mingxia Liu
@ 2023-01-11  7:15   ` Mingxia Liu
  2023-01-11  7:15   ` [PATCH v2 5/6] common/idpf: add alarm to support handle vchnl message Mingxia Liu
                     ` (2 subsequent siblings)
  6 siblings, 0 replies; 63+ messages in thread
From: Mingxia Liu @ 2023-01-11  7:15 UTC (permalink / raw)
  To: dev; +Cc: jingjing.wu, beilei.xing, Mingxia Liu

This patch adds RSS valid flag and hash value parsing of the Rx descriptor.
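
A hedged sketch of what this enables on the receive side: when the descriptor
reports a valid RSS hash, the mbuf now carries it, so an application can use
it directly. The helper name and fallback policy below are illustrative:

#include <rte_mbuf.h>

/* Steer a received packet to a worker based on the NIC-computed hash. */
static inline uint32_t
pick_worker(const struct rte_mbuf *m, uint32_t nb_workers)
{
	if (m->ol_flags & RTE_MBUF_F_RX_RSS_HASH)
		return m->hash.rss % nb_workers;

	/* No hash reported for this packet; fall back to worker 0. */
	return 0;
}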

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/common/idpf/idpf_common_rxtx.c | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)

diff --git a/drivers/common/idpf/idpf_common_rxtx.c b/drivers/common/idpf/idpf_common_rxtx.c
index 6bd40c4b0c..65bbcd0f1b 100644
--- a/drivers/common/idpf/idpf_common_rxtx.c
+++ b/drivers/common/idpf/idpf_common_rxtx.c
@@ -1030,6 +1030,20 @@ idpf_update_rx_tail(struct idpf_rx_queue *rxq, uint16_t nb_hold,
 	rxq->nb_rx_hold = nb_hold;
 }
 
+static inline void
+idpf_singleq_rx_rss_offload(struct rte_mbuf *mb,
+			    volatile struct virtchnl2_rx_flex_desc_nic *rx_desc,
+			    uint64_t *pkt_flags)
+{
+	uint16_t rx_status0 = rte_le_to_cpu_16(rx_desc->status_error0);
+
+	if (rx_status0 & RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_STATUS0_RSS_VALID_S)) {
+		*pkt_flags |= RTE_MBUF_F_RX_RSS_HASH;
+		mb->hash.rss = rte_le_to_cpu_32(rx_desc->rss_hash);
+	}
+
+}
+
 uint16_t
 idpf_singleq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 		       uint16_t nb_pkts)
@@ -1118,6 +1132,7 @@ idpf_singleq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 		rxm->port = rxq->port_id;
 		rxm->ol_flags = 0;
 		pkt_flags = idpf_rxd_to_pkt_flags(rx_status0);
+		idpf_singleq_rx_rss_offload(rxm, &rxd.flex_nic_wb, &pkt_flags);
 		rxm->packet_type =
 			ptype_tbl[(uint8_t)(rte_cpu_to_le_16(rxd.flex_nic_wb.ptype_flex_flags0) &
 					    VIRTCHNL2_RX_FLEX_DESC_PTYPE_M)];
@@ -1248,6 +1263,7 @@ idpf_singleq_recv_scatter_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 		first_seg->port = rxq->port_id;
 		first_seg->ol_flags = 0;
 		pkt_flags = idpf_rxd_to_pkt_flags(rx_status0);
+		idpf_singleq_rx_rss_offload(first_seg, &rxd.flex_nic_wb, &pkt_flags);
 		first_seg->packet_type =
 			ptype_tbl[(uint8_t)(rte_cpu_to_le_16(rxd.flex_nic_wb.ptype_flex_flags0) &
 				VIRTCHNL2_RX_FLEX_DESC_PTYPE_M)];
-- 
2.25.1


^ permalink raw reply	[flat|nested] 63+ messages in thread

* [PATCH v2 5/6] common/idpf: add alarm to support handle vchnl message
  2023-01-11  7:15 ` [PATCH 0/6] add idpf pmd enhancement features Mingxia Liu
                     ` (3 preceding siblings ...)
  2023-01-11  7:15   ` [PATCH v2 4/6] common/idpf: add rss_offload hash in singleq rx Mingxia Liu
@ 2023-01-11  7:15   ` Mingxia Liu
  2023-01-11  7:15   ` [PATCH v2 6/6] common/idpf: add xstats ops Mingxia Liu
  2023-01-18  7:14   ` [PATCH v3 0/6] add idpf pmd enhancement features Mingxia Liu
  6 siblings, 0 replies; 63+ messages in thread
From: Mingxia Liu @ 2023-01-11  7:15 UTC (permalink / raw)
  To: dev; +Cc: jingjing.wu, beilei.xing, Mingxia Liu

Handle virtual channel messages through a periodic alarm.
Refine the link status update.
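
A hedged, minimal sketch of the rte_eal_alarm re-arm pattern this patch uses
to poll the mailbox (the driver re-arms every IDPF_ALARM_INTERVAL
microseconds; the handler body and function names here are placeholders):

#include <rte_alarm.h>

#define POLL_INTERVAL_US 50000	/* matches IDPF_ALARM_INTERVAL in this patch */

static void
mbx_poll_handler(void *param)
{
	/* ... read and dispatch control queue messages for 'param' ... */

	/* Re-arm so the handler keeps running until it is cancelled. */
	rte_eal_alarm_set(POLL_INTERVAL_US, mbx_poll_handler, param);
}

static void
start_mbx_polling(void *adapter)
{
	rte_eal_alarm_set(POLL_INTERVAL_US, mbx_poll_handler, adapter);
}

static void
stop_mbx_polling(void *adapter)
{
	rte_eal_alarm_cancel(mbx_poll_handler, adapter);
}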

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
Signed-off-by: Beilei Xing <beilei.xing@intel.com>
---
 drivers/common/idpf/idpf_common_device.h   |   5 +
 drivers/common/idpf/idpf_common_virtchnl.c |  19 ---
 drivers/net/idpf/idpf_ethdev.c             | 165 ++++++++++++++++++++-
 drivers/net/idpf/idpf_ethdev.h             |   2 +
 4 files changed, 171 insertions(+), 20 deletions(-)

diff --git a/drivers/common/idpf/idpf_common_device.h b/drivers/common/idpf/idpf_common_device.h
index d7d4cd5363..03697510bb 100644
--- a/drivers/common/idpf/idpf_common_device.h
+++ b/drivers/common/idpf/idpf_common_device.h
@@ -115,6 +115,11 @@ struct idpf_vport {
 	bool tx_use_avx512;
 
 	struct virtchnl2_vport_stats eth_stats_offset;
+
+	void *dev;
+	/* Event from ipf */
+	bool link_up;
+	uint32_t link_speed;
 };
 
 /* Message type read in virtual channel from PF */
diff --git a/drivers/common/idpf/idpf_common_virtchnl.c b/drivers/common/idpf/idpf_common_virtchnl.c
index ae5a983836..c3e7569cc2 100644
--- a/drivers/common/idpf/idpf_common_virtchnl.c
+++ b/drivers/common/idpf/idpf_common_virtchnl.c
@@ -202,25 +202,6 @@ idpf_execute_vc_cmd(struct idpf_adapter *adapter, struct idpf_cmd_info *args)
 	switch (args->ops) {
 	case VIRTCHNL_OP_VERSION:
 	case VIRTCHNL2_OP_GET_CAPS:
-	case VIRTCHNL2_OP_CREATE_VPORT:
-	case VIRTCHNL2_OP_DESTROY_VPORT:
-	case VIRTCHNL2_OP_SET_RSS_KEY:
-	case VIRTCHNL2_OP_SET_RSS_LUT:
-	case VIRTCHNL2_OP_SET_RSS_HASH:
-	case VIRTCHNL2_OP_CONFIG_RX_QUEUES:
-	case VIRTCHNL2_OP_CONFIG_TX_QUEUES:
-	case VIRTCHNL2_OP_ENABLE_QUEUES:
-	case VIRTCHNL2_OP_DISABLE_QUEUES:
-	case VIRTCHNL2_OP_ENABLE_VPORT:
-	case VIRTCHNL2_OP_DISABLE_VPORT:
-	case VIRTCHNL2_OP_MAP_QUEUE_VECTOR:
-	case VIRTCHNL2_OP_UNMAP_QUEUE_VECTOR:
-	case VIRTCHNL2_OP_ALLOC_VECTORS:
-	case VIRTCHNL2_OP_DEALLOC_VECTORS:
-	case VIRTCHNL2_OP_GET_STATS:
-	case VIRTCHNL2_OP_GET_RSS_KEY:
-	case VIRTCHNL2_OP_GET_RSS_HASH:
-	case VIRTCHNL2_OP_GET_RSS_LUT:
 		/* for init virtchnl ops, need to poll the response */
 		err = idpf_read_one_msg(adapter, args->ops, args->out_size, args->out_buffer);
 		clear_cmd(adapter);
diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index 2ab31792ba..b86f63f94e 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -9,6 +9,7 @@
 #include <rte_memzone.h>
 #include <rte_dev.h>
 #include <errno.h>
+#include <rte_alarm.h>
 
 #include "idpf_ethdev.h"
 #include "idpf_rxtx.h"
@@ -83,12 +84,49 @@ static int
 idpf_dev_link_update(struct rte_eth_dev *dev,
 		     __rte_unused int wait_to_complete)
 {
+	struct idpf_vport *vport = dev->data->dev_private;
 	struct rte_eth_link new_link;
 
 	memset(&new_link, 0, sizeof(new_link));
 
-	new_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+	switch (vport->link_speed) {
+	case 10:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_10M;
+		break;
+	case 100:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_100M;
+		break;
+	case 1000:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_1G;
+		break;
+	case 10000:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_10G;
+		break;
+	case 20000:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_20G;
+		break;
+	case 25000:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_25G;
+		break;
+	case 40000:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_40G;
+		break;
+	case 50000:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_50G;
+		break;
+	case 100000:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_100G;
+		break;
+	case 200000:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_200G;
+		break;
+	default:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+	}
+
 	new_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+	new_link.link_status = vport->link_up ? RTE_ETH_LINK_UP :
+		RTE_ETH_LINK_DOWN;
 	new_link.link_autoneg = !(dev->data->dev_conf.link_speeds &
 				  RTE_ETH_LINK_SPEED_FIXED);
 
@@ -927,6 +965,127 @@ idpf_parse_devargs(struct rte_pci_device *pci_dev, struct idpf_adapter_ext *adap
 	return ret;
 }
 
+static struct idpf_vport *
+idpf_find_vport(struct idpf_adapter_ext *adapter, uint32_t vport_id)
+{
+	struct idpf_vport *vport = NULL;
+	int i;
+
+	for (i = 0; i < adapter->cur_vport_nb; i++) {
+		vport = adapter->vports[i];
+		if (vport->vport_id != vport_id)
+			continue;
+		else
+			return vport;
+	}
+
+	return vport;
+}
+
+static void
+idpf_handle_event_msg(struct idpf_vport *vport, uint8_t *msg, uint16_t msglen)
+{
+	struct virtchnl2_event *vc_event = (struct virtchnl2_event *)msg;
+	struct rte_eth_dev *dev = (struct rte_eth_dev *)vport->dev;
+
+	if (msglen < sizeof(struct virtchnl2_event)) {
+		PMD_DRV_LOG(ERR, "Error event");
+		return;
+	}
+
+	switch (vc_event->event) {
+	case VIRTCHNL2_EVENT_LINK_CHANGE:
+		PMD_DRV_LOG(DEBUG, "VIRTCHNL2_EVENT_LINK_CHANGE");
+		vport->link_up = vc_event->link_status;
+		vport->link_speed = vc_event->link_speed;
+		idpf_dev_link_update(dev, 0);
+		break;
+	default:
+		PMD_DRV_LOG(ERR, " unknown event received %u", vc_event->event);
+		break;
+	}
+}
+
+static void
+idpf_handle_virtchnl_msg(struct idpf_adapter_ext *adapter_ex)
+{
+	struct idpf_adapter *adapter = &adapter_ex->base;
+	struct idpf_dma_mem *dma_mem = NULL;
+	struct idpf_hw *hw = &adapter->hw;
+	struct virtchnl2_event *vc_event;
+	struct idpf_ctlq_msg ctlq_msg;
+	enum idpf_mbx_opc mbx_op;
+	struct idpf_vport *vport;
+	enum virtchnl_ops vc_op;
+	uint16_t pending = 1;
+	int ret;
+
+	while (pending) {
+		ret = idpf_ctlq_recv(hw->arq, &pending, &ctlq_msg);
+		if (ret) {
+			PMD_DRV_LOG(INFO, "Failed to read msg from virtual channel, ret: %d", ret);
+			return;
+		}
+
+		rte_memcpy(adapter->mbx_resp, ctlq_msg.ctx.indirect.payload->va,
+			   IDPF_DFLT_MBX_BUF_SIZE);
+
+		mbx_op = rte_le_to_cpu_16(ctlq_msg.opcode);
+		vc_op = rte_le_to_cpu_32(ctlq_msg.cookie.mbx.chnl_opcode);
+		adapter->cmd_retval = rte_le_to_cpu_32(ctlq_msg.cookie.mbx.chnl_retval);
+
+		switch (mbx_op) {
+		case idpf_mbq_opc_send_msg_to_peer_pf:
+			if (vc_op == VIRTCHNL2_OP_EVENT) {
+				if (ctlq_msg.data_len < sizeof(struct virtchnl2_event)) {
+					PMD_DRV_LOG(ERR, "Error event");
+					return;
+				}
+				vc_event = (struct virtchnl2_event *)adapter->mbx_resp;
+				vport = idpf_find_vport(adapter_ex, vc_event->vport_id);
+				if (!vport) {
+					PMD_DRV_LOG(ERR, "Can't find vport.");
+					return;
+				}
+				idpf_handle_event_msg(vport, adapter->mbx_resp,
+						      ctlq_msg.data_len);
+			} else {
+				if (vc_op == adapter->pend_cmd)
+					notify_cmd(adapter, adapter->cmd_retval);
+				else
+					PMD_DRV_LOG(ERR, "command mismatch, expect %u, get %u",
+						    adapter->pend_cmd, vc_op);
+
+				PMD_DRV_LOG(DEBUG, " Virtual channel response is received,"
+					    "opcode = %d", vc_op);
+			}
+			goto post_buf;
+		default:
+			PMD_DRV_LOG(DEBUG, "Request %u is not supported yet", mbx_op);
+		}
+	}
+
+post_buf:
+	if (ctlq_msg.data_len)
+		dma_mem = ctlq_msg.ctx.indirect.payload;
+	else
+		pending = 0;
+
+	ret = idpf_ctlq_post_rx_buffs(hw, hw->arq, &pending, &dma_mem);
+	if (ret && dma_mem)
+		idpf_free_dma_mem(hw, dma_mem);
+}
+
+static void
+idpf_dev_alarm_handler(void *param)
+{
+	struct idpf_adapter_ext *adapter = param;
+
+	idpf_handle_virtchnl_msg(adapter);
+
+	rte_eal_alarm_set(IDPF_ALARM_INTERVAL, idpf_dev_alarm_handler, adapter);
+}
+
 static int
 idpf_adapter_ext_init(struct rte_pci_device *pci_dev, struct idpf_adapter_ext *adapter)
 {
@@ -949,6 +1108,8 @@ idpf_adapter_ext_init(struct rte_pci_device *pci_dev, struct idpf_adapter_ext *a
 		goto err_adapter_init;
 	}
 
+	rte_eal_alarm_set(IDPF_ALARM_INTERVAL, idpf_dev_alarm_handler, adapter);
+
 	adapter->max_vport_nb = adapter->base.caps.max_vports;
 
 	adapter->vports = rte_zmalloc("vports",
@@ -1032,6 +1193,7 @@ idpf_dev_vport_init(struct rte_eth_dev *dev, void *init_params)
 	vport->adapter = &adapter->base;
 	vport->sw_idx = param->idx;
 	vport->devarg_id = param->devarg_id;
+	vport->dev = dev;
 
 	memset(&create_vport_info, 0, sizeof(create_vport_info));
 	ret = idpf_create_vport_info_init(vport, &create_vport_info);
@@ -1101,6 +1263,7 @@ idpf_find_adapter_ext(struct rte_pci_device *pci_dev)
 static void
 idpf_adapter_ext_deinit(struct idpf_adapter_ext *adapter)
 {
+	rte_eal_alarm_cancel(idpf_dev_alarm_handler, adapter);
 	idpf_adapter_deinit(&adapter->base);
 
 	rte_free(adapter->vports);
diff --git a/drivers/net/idpf/idpf_ethdev.h b/drivers/net/idpf/idpf_ethdev.h
index 5bd1b441ea..f414f1113e 100644
--- a/drivers/net/idpf/idpf_ethdev.h
+++ b/drivers/net/idpf/idpf_ethdev.h
@@ -55,6 +55,8 @@
 
 #define IDPF_ADAPTER_NAME_LEN	(PCI_PRI_STR_SIZE + 1)
 
+#define IDPF_ALARM_INTERVAL	50000 /* us */
+
 struct idpf_vport_param {
 	struct idpf_adapter_ext *adapter;
 	uint16_t devarg_id; /* arg id from user */
-- 
2.25.1


^ permalink raw reply	[flat|nested] 63+ messages in thread

* [PATCH v2 6/6] common/idpf: add xstats ops
  2023-01-11  7:15 ` [PATCH 0/6] add idpf pmd enhancement features Mingxia Liu
                     ` (4 preceding siblings ...)
  2023-01-11  7:15   ` [PATCH v2 5/6] common/idpf: add alarm to support handle vchnl message Mingxia Liu
@ 2023-01-11  7:15   ` Mingxia Liu
  2023-01-18  7:14   ` [PATCH v3 0/6] add idpf pmd enhancement features Mingxia Liu
  6 siblings, 0 replies; 63+ messages in thread
From: Mingxia Liu @ 2023-01-11  7:15 UTC (permalink / raw)
  To: dev; +Cc: jingjing.wu, beilei.xing, Mingxia Liu

Add support for these device ops:
-idpf_dev_xstats_get
-idpf_dev_xstats_get_names
-idpf_dev_xstats_reset
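
A hedged sketch of how an application reads these extended stats through the
generic ethdev xstats API (the usual two-call pattern: size query, then
fetch; this driver exposes 13 counters):

#include <inttypes.h>
#include <stdio.h>
#include <stdlib.h>
#include <rte_ethdev.h>

static void
dump_xstats(uint16_t port_id)
{
	struct rte_eth_xstat_name *names;
	struct rte_eth_xstat *xstats;
	int nb, i;

	/* A NULL/0 call returns the number of xstats the port exposes. */
	nb = rte_eth_xstats_get_names(port_id, NULL, 0);
	if (nb <= 0)
		return;

	names = calloc(nb, sizeof(*names));
	xstats = calloc(nb, sizeof(*xstats));
	if (names == NULL || xstats == NULL)
		goto out;

	rte_eth_xstats_get_names(port_id, names, nb);
	rte_eth_xstats_get(port_id, xstats, nb);

	for (i = 0; i < nb; i++)
		printf("%s: %" PRIu64 "\n",
		       names[xstats[i].id].name, xstats[i].value);

out:
	free(names);
	free(xstats);
}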

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/net/idpf/idpf_ethdev.c | 80 ++++++++++++++++++++++++++++++++++
 1 file changed, 80 insertions(+)

diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index b86f63f94e..bcd15db3c5 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -80,6 +80,30 @@ static const uint64_t idpf_ipv6_rss = RTE_ETH_RSS_NONFRAG_IPV6_UDP |
 			  RTE_ETH_RSS_NONFRAG_IPV6_OTHER |
 			  RTE_ETH_RSS_FRAG_IPV6;
 
+struct rte_idpf_xstats_name_off {
+	char name[RTE_ETH_XSTATS_NAME_SIZE];
+	unsigned int offset;
+};
+
+static const struct rte_idpf_xstats_name_off rte_idpf_stats_strings[] = {
+	{"rx_bytes", offsetof(struct virtchnl2_vport_stats, rx_bytes)},
+	{"rx_unicast_packets", offsetof(struct virtchnl2_vport_stats, rx_unicast)},
+	{"rx_multicast_packets", offsetof(struct virtchnl2_vport_stats, rx_multicast)},
+	{"rx_broadcast_packets", offsetof(struct virtchnl2_vport_stats, rx_broadcast)},
+	{"rx_dropped_packets", offsetof(struct virtchnl2_vport_stats, rx_discards)},
+	{"rx_errors", offsetof(struct virtchnl2_vport_stats, rx_errors)},
+	{"rx_unknown_protocol_packets", offsetof(struct virtchnl2_vport_stats,
+						 rx_unknown_protocol)},
+	{"tx_bytes", offsetof(struct virtchnl2_vport_stats, tx_bytes)},
+	{"tx_unicast_packets", offsetof(struct virtchnl2_vport_stats, tx_unicast)},
+	{"tx_multicast_packets", offsetof(struct virtchnl2_vport_stats, tx_multicast)},
+	{"tx_broadcast_packets", offsetof(struct virtchnl2_vport_stats, tx_broadcast)},
+	{"tx_dropped_packets", offsetof(struct virtchnl2_vport_stats, tx_discards)},
+	{"tx_error_packets", offsetof(struct virtchnl2_vport_stats, tx_errors)}};
+
+#define IDPF_NB_XSTATS (sizeof(rte_idpf_stats_strings) / \
+		sizeof(rte_idpf_stats_strings[0]))
+
 static int
 idpf_dev_link_update(struct rte_eth_dev *dev,
 		     __rte_unused int wait_to_complete)
@@ -312,6 +336,59 @@ idpf_dev_stats_reset(struct rte_eth_dev *dev)
 	return 0;
 }
 
+static int idpf_dev_xstats_reset(struct rte_eth_dev *dev)
+{
+	idpf_dev_stats_reset(dev);
+	return 0;
+}
+
+static int idpf_dev_xstats_get(struct rte_eth_dev *dev,
+			       struct rte_eth_xstat *xstats, unsigned int n)
+{
+	struct idpf_vport *vport =
+		(struct idpf_vport *)dev->data->dev_private;
+	struct virtchnl2_vport_stats *pstats = NULL;
+	unsigned int i;
+	int ret;
+
+	if (n < IDPF_NB_XSTATS)
+		return IDPF_NB_XSTATS;
+
+	if (!xstats)
+		return 0;
+
+	ret = idpf_query_stats(vport, &pstats);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "Get statistics failed");
+		return 0;
+	}
+
+	idpf_update_stats(&vport->eth_stats_offset, pstats);
+
+	/* loop over xstats array and values from pstats */
+	for (i = 0; i < IDPF_NB_XSTATS; i++) {
+		xstats[i].id = i;
+		xstats[i].value = *(uint64_t *)(((char *)pstats) +
+			rte_idpf_stats_strings[i].offset);
+	}
+	return IDPF_NB_XSTATS;
+}
+
+static int idpf_dev_xstats_get_names(__rte_unused struct rte_eth_dev *dev,
+				     struct rte_eth_xstat_name *xstats_names,
+				     __rte_unused unsigned int limit)
+{
+	unsigned int i;
+
+	if (xstats_names)
+		for (i = 0; i < IDPF_NB_XSTATS; i++) {
+			snprintf(xstats_names[i].name,
+				 sizeof(xstats_names[i].name),
+				 "%s", rte_idpf_stats_strings[i].name);
+		}
+	return IDPF_NB_XSTATS;
+}
+
 static int idpf_config_rss_hf(struct idpf_vport *vport, uint64_t rss_hf)
 {
 	uint64_t hena = 0, valid_rss_hf = 0;
@@ -1158,6 +1235,9 @@ static const struct eth_dev_ops idpf_eth_dev_ops = {
 	.reta_query			= idpf_rss_reta_query,
 	.rss_hash_update		= idpf_rss_hash_update,
 	.rss_hash_conf_get		= idpf_rss_hash_conf_get,
+	.xstats_get			= idpf_dev_xstats_get,
+	.xstats_get_names		= idpf_dev_xstats_get_names,
+	.xstats_reset			= idpf_dev_xstats_reset,
 };
 
 static uint16_t
-- 
2.25.1


^ permalink raw reply	[flat|nested] 63+ messages in thread
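
The xstats_get implementation above reads every counter through a static name/offset table built with offsetof(). As an aside, that pattern reduces to the standalone sketch below; sample_stats and sample_strings are made-up names used only for illustration, not driver symbols.

#include <stddef.h>
#include <inttypes.h>
#include <stdio.h>

/* Hypothetical counter block standing in for virtchnl2_vport_stats. */
struct sample_stats {
	uint64_t rx_bytes;
	uint64_t rx_unicast;
	uint64_t tx_bytes;
};

/* One table entry per exported statistic: display name plus field offset. */
static const struct {
	const char *name;
	unsigned int offset;
} sample_strings[] = {
	{"rx_bytes",           offsetof(struct sample_stats, rx_bytes)},
	{"rx_unicast_packets", offsetof(struct sample_stats, rx_unicast)},
	{"tx_bytes",           offsetof(struct sample_stats, tx_bytes)},
};

int main(void)
{
	struct sample_stats st = { .rx_bytes = 1500, .rx_unicast = 10, .tx_bytes = 900 };
	unsigned int i;

	/* Same access pattern as idpf_dev_xstats_get(): base pointer plus offset. */
	for (i = 0; i < sizeof(sample_strings) / sizeof(sample_strings[0]); i++)
		printf("%s: %" PRIu64 "\n", sample_strings[i].name,
		       *(const uint64_t *)((const char *)&st + sample_strings[i].offset));
	return 0;
}

Keeping names and offsets in a single table means adding a counter is a one-line change, and xstats_get_names() and xstats_get() cannot drift out of sync.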

* [PATCH v3 0/6] add idpf pmd enhancement features
  2023-01-11  7:15 ` [PATCH 0/6] add idpf pmd enhancement features Mingxia Liu
                     ` (5 preceding siblings ...)
  2023-01-11  7:15   ` [PATCH v2 6/6] common/idpf: add xstats ops Mingxia Liu
@ 2023-01-18  7:14   ` Mingxia Liu
  2023-01-18  7:14     ` [PATCH v3 1/6] common/idpf: add hw statistics Mingxia Liu
                       ` (6 more replies)
  6 siblings, 7 replies; 63+ messages in thread
From: Mingxia Liu @ 2023-01-18  7:14 UTC (permalink / raw)
  To: dev; +Cc: jingjing.wu, beilei.xing, Mingxia Liu

This patchset adds several enhancement features of idpf pmd.
Including the following:
- add hw statistics, support stats/xstats ops
- add rss configure/show ops
- add event handle: link status
- add scattered data path for single queue

This patchset is based on the refactor idpf PMD code:
http://patches.dpdk.org/project/dpdk/cover/20230106090501.9106-1-beilei.xing@intel.com/
http://patches.dpdk.org/project/dpdk/cover/20230117080622.105657-1-beilei.xing@intel.com/
http://patches.dpdk.org/project/dpdk/cover/20230118035139.485060-1-wenjun1.wu@intel.com/

v2 changes:
 - Fix rss lut config issue.
v3 changes:
 - rebase to the new baseline.

Mingxia Liu (6):
  common/idpf: add hw statistics
  common/idpf: add RSS set/get ops
  common/idpf: support single q scatter RX datapath
  common/idpf: add rss_offload hash in singleq rx
  common/idpf: add alarm to support handle vchnl message
  common/idpf: add xstats ops

 drivers/common/idpf/idpf_common_device.c   |  17 +
 drivers/common/idpf/idpf_common_device.h   |  11 +-
 drivers/common/idpf/idpf_common_rxtx.c     | 150 +++++
 drivers/common/idpf/idpf_common_rxtx.h     |   3 +
 drivers/common/idpf/idpf_common_virtchnl.c | 157 ++++-
 drivers/common/idpf/idpf_common_virtchnl.h |   9 +
 drivers/common/idpf/version.map            |   6 +
 drivers/net/idpf/idpf_ethdev.c             | 638 ++++++++++++++++++++-
 drivers/net/idpf/idpf_ethdev.h             |   5 +-
 drivers/net/idpf/idpf_rxtx.c               |  26 +-
 drivers/net/idpf/idpf_rxtx.h               |   2 +
 11 files changed, 1003 insertions(+), 21 deletions(-)

-- 
2.25.1


^ permalink raw reply	[flat|nested] 63+ messages in thread

* [PATCH v3 1/6] common/idpf: add hw statistics
  2023-01-18  7:14   ` [PATCH v3 0/6] add idpf pmd enhancement features Mingxia Liu
@ 2023-01-18  7:14     ` Mingxia Liu
  2023-02-01  8:48       ` Wu, Jingjing
  2023-01-18  7:14     ` [PATCH v3 2/6] common/idpf: add RSS set/get ops Mingxia Liu
                       ` (5 subsequent siblings)
  6 siblings, 1 reply; 63+ messages in thread
From: Mingxia Liu @ 2023-01-18  7:14 UTC (permalink / raw)
  To: dev; +Cc: jingjing.wu, beilei.xing, Mingxia Liu

This patch adds hardware packet/byte statistics.

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/common/idpf/idpf_common_device.c   | 17 +++++
 drivers/common/idpf/idpf_common_device.h   |  5 +-
 drivers/common/idpf/idpf_common_virtchnl.c | 27 +++++++
 drivers/common/idpf/idpf_common_virtchnl.h |  3 +
 drivers/common/idpf/version.map            |  2 +
 drivers/net/idpf/idpf_ethdev.c             | 87 ++++++++++++++++++++++
 6 files changed, 140 insertions(+), 1 deletion(-)

diff --git a/drivers/common/idpf/idpf_common_device.c b/drivers/common/idpf/idpf_common_device.c
index 411873c902..b90b20d0f2 100644
--- a/drivers/common/idpf/idpf_common_device.c
+++ b/drivers/common/idpf/idpf_common_device.c
@@ -648,4 +648,21 @@ idpf_create_vport_info_init(struct idpf_vport *vport,
 	return 0;
 }
 
+void
+idpf_update_stats(struct virtchnl2_vport_stats *oes, struct virtchnl2_vport_stats *nes)
+{
+	nes->rx_bytes = nes->rx_bytes - oes->rx_bytes;
+	nes->rx_unicast = nes->rx_unicast - oes->rx_unicast;
+	nes->rx_multicast = nes->rx_multicast - oes->rx_multicast;
+	nes->rx_broadcast = nes->rx_broadcast - oes->rx_broadcast;
+	nes->rx_errors = nes->rx_errors - oes->rx_errors;
+	nes->rx_discards = nes->rx_discards - oes->rx_discards;
+	nes->tx_bytes = nes->tx_bytes - oes->tx_bytes;
+	nes->tx_unicast = nes->tx_unicast - oes->tx_unicast;
+	nes->tx_multicast = nes->tx_multicast - oes->tx_multicast;
+	nes->tx_broadcast = nes->tx_broadcast - oes->tx_broadcast;
+	nes->tx_errors = nes->tx_errors - oes->tx_errors;
+	nes->tx_discards = nes->tx_discards - oes->tx_discards;
+}
+
 RTE_LOG_REGISTER_SUFFIX(idpf_common_logtype, common, NOTICE);
diff --git a/drivers/common/idpf/idpf_common_device.h b/drivers/common/idpf/idpf_common_device.h
index 573852ff75..73d4ffb4b3 100644
--- a/drivers/common/idpf/idpf_common_device.h
+++ b/drivers/common/idpf/idpf_common_device.h
@@ -115,6 +115,8 @@ struct idpf_vport {
 	bool tx_vec_allowed;
 	bool rx_use_avx512;
 	bool tx_use_avx512;
+
+	struct virtchnl2_vport_stats eth_stats_offset;
 };
 
 /* Message type read in virtual channel from PF */
@@ -191,5 +193,6 @@ int idpf_config_irq_unmap(struct idpf_vport *vport, uint16_t nb_rx_queues);
 __rte_internal
 int idpf_create_vport_info_init(struct idpf_vport *vport,
 				struct virtchnl2_create_vport *vport_info);
-
+__rte_internal
+void idpf_update_stats(struct virtchnl2_vport_stats *oes, struct virtchnl2_vport_stats *nes);
 #endif /* _IDPF_COMMON_DEVICE_H_ */
diff --git a/drivers/common/idpf/idpf_common_virtchnl.c b/drivers/common/idpf/idpf_common_virtchnl.c
index 188d0131a4..675dcebbf4 100644
--- a/drivers/common/idpf/idpf_common_virtchnl.c
+++ b/drivers/common/idpf/idpf_common_virtchnl.c
@@ -217,6 +217,7 @@ idpf_execute_vc_cmd(struct idpf_adapter *adapter, struct idpf_cmd_info *args)
 	case VIRTCHNL2_OP_UNMAP_QUEUE_VECTOR:
 	case VIRTCHNL2_OP_ALLOC_VECTORS:
 	case VIRTCHNL2_OP_DEALLOC_VECTORS:
+	case VIRTCHNL2_OP_GET_STATS:
 		/* for init virtchnl ops, need to poll the response */
 		err = idpf_read_one_msg(adapter, args->ops, args->out_size, args->out_buffer);
 		clear_cmd(adapter);
@@ -806,6 +807,32 @@ idpf_vc_query_ptype_info(struct idpf_adapter *adapter)
 	return err;
 }
 
+int
+idpf_query_stats(struct idpf_vport *vport,
+		struct virtchnl2_vport_stats **pstats)
+{
+	struct idpf_adapter *adapter = vport->adapter;
+	struct virtchnl2_vport_stats vport_stats;
+	struct idpf_cmd_info args;
+	int err;
+
+	vport_stats.vport_id = vport->vport_id;
+	args.ops = VIRTCHNL2_OP_GET_STATS;
+	args.in_args = (u8 *)&vport_stats;
+	args.in_args_size = sizeof(vport_stats);
+	args.out_buffer = adapter->mbx_resp;
+	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
+
+	err = idpf_execute_vc_cmd(adapter, &args);
+	if (err) {
+		DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_GET_STATS");
+		*pstats = NULL;
+		return err;
+	}
+	*pstats = (struct virtchnl2_vport_stats *)args.out_buffer;
+	return 0;
+}
+
 #define IDPF_RX_BUF_STRIDE		64
 int
 idpf_vc_config_rxq(struct idpf_vport *vport, struct idpf_rx_queue *rxq)
diff --git a/drivers/common/idpf/idpf_common_virtchnl.h b/drivers/common/idpf/idpf_common_virtchnl.h
index b8045ba63b..6d63e6ad35 100644
--- a/drivers/common/idpf/idpf_common_virtchnl.h
+++ b/drivers/common/idpf/idpf_common_virtchnl.h
@@ -49,4 +49,7 @@ __rte_internal
 int idpf_vc_config_rxq(struct idpf_vport *vport, struct idpf_rx_queue *rxq);
 __rte_internal
 int idpf_vc_config_txq(struct idpf_vport *vport, struct idpf_tx_queue *txq);
+__rte_internal
+int idpf_query_stats(struct idpf_vport *vport,
+		     struct virtchnl2_vport_stats **pstats);
 #endif /* _IDPF_COMMON_VIRTCHNL_H_ */
diff --git a/drivers/common/idpf/version.map b/drivers/common/idpf/version.map
index e39d1c4b32..0b4a22bae4 100644
--- a/drivers/common/idpf/version.map
+++ b/drivers/common/idpf/version.map
@@ -44,6 +44,8 @@ INTERNAL {
 	idpf_splitq_xmit_pkts_avx512;
 	idpf_switch_queue;
 	idpf_tx_queue_release;
+	idpf_update_stats;
+	idpf_query_stats;
 	idpf_vc_alloc_vectors;
 	idpf_vc_check_api_version;
 	idpf_vc_config_irq_map_unmap;
diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index ee2dec7c7c..e8bb097c78 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -140,6 +140,86 @@ idpf_dev_supported_ptypes_get(struct rte_eth_dev *dev __rte_unused)
 	return ptypes;
 }
 
+static uint64_t
+idpf_get_mbuf_alloc_failed_stats(struct rte_eth_dev *dev)
+{
+	uint64_t mbuf_alloc_failed = 0;
+	struct idpf_rx_queue *rxq;
+	int i = 0;
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		rxq = dev->data->rx_queues[i];
+		mbuf_alloc_failed += rte_atomic64_read(&(rxq->rx_stats.mbuf_alloc_failed));
+	}
+
+	return mbuf_alloc_failed;
+}
+
+static int
+idpf_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
+{
+	struct idpf_vport *vport =
+		(struct idpf_vport *)dev->data->dev_private;
+	struct virtchnl2_vport_stats *pstats = NULL;
+	int ret;
+
+	ret = idpf_query_stats(vport, &pstats);
+	if (ret == 0) {
+		uint8_t crc_stats_len = (dev->data->dev_conf.rxmode.offloads &
+					 RTE_ETH_RX_OFFLOAD_KEEP_CRC) ? 0 :
+					 RTE_ETHER_CRC_LEN;
+
+		idpf_update_stats(&vport->eth_stats_offset, pstats);
+		stats->ipackets = pstats->rx_unicast + pstats->rx_multicast +
+				pstats->rx_broadcast - pstats->rx_discards;
+		stats->opackets = pstats->tx_broadcast + pstats->tx_multicast +
+						pstats->tx_unicast;
+		stats->imissed = pstats->rx_discards;
+		stats->oerrors = pstats->tx_errors + pstats->tx_discards;
+		stats->ibytes = pstats->rx_bytes;
+		stats->ibytes -= stats->ipackets * crc_stats_len;
+		stats->obytes = pstats->tx_bytes;
+
+		dev->data->rx_mbuf_alloc_failed = idpf_get_mbuf_alloc_failed_stats(dev);
+		stats->rx_nombuf = dev->data->rx_mbuf_alloc_failed;
+	} else {
+		PMD_DRV_LOG(ERR, "Get statistics failed");
+	}
+	return ret;
+}
+
+static void
+idpf_reset_mbuf_alloc_failed_stats(struct rte_eth_dev *dev)
+{
+	struct idpf_rx_queue *rxq;
+	int i;
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		rxq = dev->data->rx_queues[i];
+		rte_atomic64_set(&(rxq->rx_stats.mbuf_alloc_failed), 0);
+	}
+}
+
+static int
+idpf_dev_stats_reset(struct rte_eth_dev *dev)
+{
+	struct idpf_vport *vport =
+		(struct idpf_vport *)dev->data->dev_private;
+	struct virtchnl2_vport_stats *pstats = NULL;
+	int ret;
+
+	ret = idpf_query_stats(vport, &pstats);
+	if (ret != 0)
+		return ret;
+
+	/* set stats offset base on current values */
+	vport->eth_stats_offset = *pstats;
+
+	idpf_reset_mbuf_alloc_failed_stats(dev);
+
+	return 0;
+}
+
 static int
 idpf_init_rss(struct idpf_vport *vport)
 {
@@ -327,6 +407,11 @@ idpf_dev_start(struct rte_eth_dev *dev)
 		goto err_vport;
 	}
 
+	if (idpf_dev_stats_reset(dev)) {
+		PMD_DRV_LOG(ERR, "Failed to reset stats");
+		goto err_vport;
+	}
+
 	vport->stopped = 0;
 
 	return 0;
@@ -606,6 +691,8 @@ static const struct eth_dev_ops idpf_eth_dev_ops = {
 	.tx_queue_release		= idpf_dev_tx_queue_release,
 	.mtu_set			= idpf_dev_mtu_set,
 	.dev_supported_ptypes_get	= idpf_dev_supported_ptypes_get,
+	.stats_get			= idpf_dev_stats_get,
+	.stats_reset			= idpf_dev_stats_reset,
 };
 
 static uint16_t
-- 
2.25.1


^ permalink raw reply	[flat|nested] 63+ messages in thread
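
One detail worth noting: stats_reset above does not clear anything in hardware. It snapshots the current virtchnl2_vport_stats into vport->eth_stats_offset, and idpf_update_stats() later subtracts that snapshot from fresh readings so only traffic since the last reset is reported. A minimal standalone sketch of that offset/delta idea (hw_counter and counter_baseline are illustrative stand-ins, not driver symbols):

#include <inttypes.h>
#include <stdio.h>

static uint64_t hw_counter;        /* stands in for a monotonically increasing HW counter */
static uint64_t counter_baseline;  /* stands in for vport->eth_stats_offset */

/* "reset": remember the current raw value instead of clearing hardware. */
static void stats_reset(void)
{
	counter_baseline = hw_counter;
}

/* "get": report the delta since the last reset, like idpf_update_stats(). */
static uint64_t stats_get(void)
{
	return hw_counter - counter_baseline;
}

int main(void)
{
	hw_counter = 1000;
	stats_reset();                /* baseline becomes 1000 */
	hw_counter += 250;            /* traffic arrives */
	printf("delta since reset: %" PRIu64 "\n", stats_get());  /* prints 250 */
	return 0;
}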

* [PATCH v3 2/6] common/idpf: add RSS set/get ops
  2023-01-18  7:14   ` [PATCH v3 0/6] add idpf pmd enhancement features Mingxia Liu
  2023-01-18  7:14     ` [PATCH v3 1/6] common/idpf: add hw statistics Mingxia Liu
@ 2023-01-18  7:14     ` Mingxia Liu
  2023-02-02  3:28       ` Wu, Jingjing
  2023-01-18  7:14     ` [PATCH v3 3/6] common/idpf: support single q scatter RX datapath Mingxia Liu
                       ` (4 subsequent siblings)
  6 siblings, 1 reply; 63+ messages in thread
From: Mingxia Liu @ 2023-01-18  7:14 UTC (permalink / raw)
  To: dev; +Cc: jingjing.wu, beilei.xing, Mingxia Liu

Add support for these device ops:
- rss_reta_update
- rss_reta_query
- rss_hash_update
- rss_hash_conf_get

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/common/idpf/idpf_common_device.h   |   1 +
 drivers/common/idpf/idpf_common_virtchnl.c | 119 ++++++++
 drivers/common/idpf/idpf_common_virtchnl.h |   6 +
 drivers/common/idpf/version.map            |   3 +
 drivers/net/idpf/idpf_ethdev.c             | 303 +++++++++++++++++++++
 drivers/net/idpf/idpf_ethdev.h             |   3 +-
 6 files changed, 434 insertions(+), 1 deletion(-)

diff --git a/drivers/common/idpf/idpf_common_device.h b/drivers/common/idpf/idpf_common_device.h
index 73d4ffb4b3..f22ffde22e 100644
--- a/drivers/common/idpf/idpf_common_device.h
+++ b/drivers/common/idpf/idpf_common_device.h
@@ -98,6 +98,7 @@ struct idpf_vport {
 	uint32_t *rss_lut;
 	uint8_t *rss_key;
 	uint64_t rss_hf;
+	uint64_t last_general_rss_hf;
 
 	/* MSIX info*/
 	struct virtchnl2_queue_vector *qv_map; /* queue vector mapping */
diff --git a/drivers/common/idpf/idpf_common_virtchnl.c b/drivers/common/idpf/idpf_common_virtchnl.c
index 675dcebbf4..5965f9ee55 100644
--- a/drivers/common/idpf/idpf_common_virtchnl.c
+++ b/drivers/common/idpf/idpf_common_virtchnl.c
@@ -218,6 +218,9 @@ idpf_execute_vc_cmd(struct idpf_adapter *adapter, struct idpf_cmd_info *args)
 	case VIRTCHNL2_OP_ALLOC_VECTORS:
 	case VIRTCHNL2_OP_DEALLOC_VECTORS:
 	case VIRTCHNL2_OP_GET_STATS:
+	case VIRTCHNL2_OP_GET_RSS_KEY:
+	case VIRTCHNL2_OP_GET_RSS_HASH:
+	case VIRTCHNL2_OP_GET_RSS_LUT:
 		/* for init virtchnl ops, need to poll the response */
 		err = idpf_read_one_msg(adapter, args->ops, args->out_size, args->out_buffer);
 		clear_cmd(adapter);
@@ -448,6 +451,48 @@ idpf_vc_set_rss_key(struct idpf_vport *vport)
 	return err;
 }
 
+int idpf_vc_get_rss_key(struct idpf_vport *vport)
+{
+	struct idpf_adapter *adapter = vport->adapter;
+	struct virtchnl2_rss_key *rss_key_ret;
+	struct virtchnl2_rss_key rss_key;
+	struct idpf_cmd_info args;
+	int err;
+
+	memset(&rss_key, 0, sizeof(rss_key));
+	rss_key.vport_id = vport->vport_id;
+
+	memset(&args, 0, sizeof(args));
+	args.ops = VIRTCHNL2_OP_GET_RSS_KEY;
+	args.in_args = (uint8_t *)&rss_key;
+	args.in_args_size = sizeof(rss_key);
+	args.out_buffer = adapter->mbx_resp;
+	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
+
+	err = idpf_execute_vc_cmd(adapter, &args);
+
+	if (!err) {
+		rss_key_ret = (struct virtchnl2_rss_key *)args.out_buffer;
+		if (rss_key_ret->key_len != vport->rss_key_size) {
+			rte_free(vport->rss_key);
+			vport->rss_key = NULL;
+			vport->rss_key_size = RTE_MIN(IDPF_RSS_KEY_LEN,
+						      rss_key_ret->key_len);
+			vport->rss_key = rte_zmalloc("rss_key", vport->rss_key_size, 0);
+			if (!vport->rss_key) {
+				vport->rss_key_size = 0;
+				DRV_LOG(ERR, "Failed to allocate RSS key");
+				return -ENOMEM;
+			}
+		}
+		rte_memcpy(vport->rss_key, rss_key_ret->key, vport->rss_key_size);
+	} else {
+		DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_GET_RSS_KEY");
+	}
+
+	return err;
+}
+
 int
 idpf_vc_set_rss_lut(struct idpf_vport *vport)
 {
@@ -482,6 +527,48 @@ idpf_vc_set_rss_lut(struct idpf_vport *vport)
 	return err;
 }
 
+int
+idpf_vc_get_rss_lut(struct idpf_vport *vport)
+{
+	struct idpf_adapter *adapter = vport->adapter;
+	struct virtchnl2_rss_lut *rss_lut_ret;
+	struct virtchnl2_rss_lut rss_lut;
+	struct idpf_cmd_info args;
+	int err;
+
+	memset(&rss_lut, 0, sizeof(rss_lut));
+	rss_lut.vport_id = vport->vport_id;
+
+	memset(&args, 0, sizeof(args));
+	args.ops = VIRTCHNL2_OP_GET_RSS_LUT;
+	args.in_args = (uint8_t *)&rss_lut;
+	args.in_args_size = sizeof(rss_lut);
+	args.out_buffer = adapter->mbx_resp;
+	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
+
+	err = idpf_execute_vc_cmd(adapter, &args);
+
+	if (!err) {
+		rss_lut_ret = (struct virtchnl2_rss_lut *)args.out_buffer;
+		if (rss_lut_ret->lut_entries != vport->rss_lut_size) {
+			rte_free(vport->rss_lut);
+			vport->rss_lut = NULL;
+			vport->rss_lut = rte_zmalloc("rss_lut",
+				     sizeof(uint32_t) * rss_lut_ret->lut_entries, 0);
+			if (vport->rss_lut == NULL) {
+				DRV_LOG(ERR, "Failed to allocate RSS lut");
+				return -ENOMEM;
+			}
+		}
+		rte_memcpy(vport->rss_lut, rss_lut_ret->lut, rss_lut_ret->lut_entries);
+		vport->rss_lut_size = rss_lut_ret->lut_entries;
+	} else {
+		DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_GET_RSS_LUT");
+	}
+
+	return err;
+}
+
 int
 idpf_vc_set_rss_hash(struct idpf_vport *vport)
 {
@@ -508,6 +595,38 @@ idpf_vc_set_rss_hash(struct idpf_vport *vport)
 	return err;
 }
 
+int
+idpf_vc_get_rss_hash(struct idpf_vport *vport)
+{
+	struct idpf_adapter *adapter = vport->adapter;
+	struct virtchnl2_rss_hash *rss_hash_ret;
+	struct virtchnl2_rss_hash rss_hash;
+	struct idpf_cmd_info args;
+	int err;
+
+	memset(&rss_hash, 0, sizeof(rss_hash));
+	rss_hash.ptype_groups = vport->rss_hf;
+	rss_hash.vport_id = vport->vport_id;
+
+	memset(&args, 0, sizeof(args));
+	args.ops = VIRTCHNL2_OP_GET_RSS_HASH;
+	args.in_args = (uint8_t *)&rss_hash;
+	args.in_args_size = sizeof(rss_hash);
+	args.out_buffer = adapter->mbx_resp;
+	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
+
+	err = idpf_execute_vc_cmd(adapter, &args);
+
+	if (!err) {
+		rss_hash_ret = (struct virtchnl2_rss_hash *)args.out_buffer;
+		vport->rss_hf = rss_hash_ret->ptype_groups;
+	} else {
+		DRV_LOG(ERR, "Failed to execute command of OP_GET_RSS_HASH");
+	}
+
+	return err;
+}
+
 int
 idpf_vc_config_irq_map_unmap(struct idpf_vport *vport, uint16_t nb_rxq, bool map)
 {
diff --git a/drivers/common/idpf/idpf_common_virtchnl.h b/drivers/common/idpf/idpf_common_virtchnl.h
index 6d63e6ad35..86a8dfcece 100644
--- a/drivers/common/idpf/idpf_common_virtchnl.h
+++ b/drivers/common/idpf/idpf_common_virtchnl.h
@@ -52,4 +52,10 @@ int idpf_vc_config_txq(struct idpf_vport *vport, struct idpf_tx_queue *txq);
 __rte_internal
 int idpf_query_stats(struct idpf_vport *vport,
 		     struct virtchnl2_vport_stats **pstats);
+__rte_internal
+int idpf_vc_get_rss_key(struct idpf_vport *vport);
+__rte_internal
+int idpf_vc_get_rss_lut(struct idpf_vport *vport);
+__rte_internal
+int idpf_vc_get_rss_hash(struct idpf_vport *vport);
 #endif /* _IDPF_COMMON_VIRTCHNL_H_ */
diff --git a/drivers/common/idpf/version.map b/drivers/common/idpf/version.map
index 0b4a22bae4..36a3a90d39 100644
--- a/drivers/common/idpf/version.map
+++ b/drivers/common/idpf/version.map
@@ -63,6 +63,9 @@ INTERNAL {
 	idpf_vc_set_rss_lut;
 	idpf_vport_deinit;
 	idpf_vport_init;
+	idpf_vc_get_rss_key;
+	idpf_vc_get_rss_lut;
+	idpf_vc_get_rss_hash;
 
 	local: *;
 };
diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index e8bb097c78..037cabb04e 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -29,6 +29,56 @@ static const char * const idpf_valid_args[] = {
 	NULL
 };
 
+static const uint64_t idpf_map_hena_rss[] = {
+	[IDPF_HASH_NONF_UNICAST_IPV4_UDP] =
+			RTE_ETH_RSS_NONFRAG_IPV4_UDP,
+	[IDPF_HASH_NONF_MULTICAST_IPV4_UDP] =
+			RTE_ETH_RSS_NONFRAG_IPV4_UDP,
+	[IDPF_HASH_NONF_IPV4_UDP] =
+			RTE_ETH_RSS_NONFRAG_IPV4_UDP,
+	[IDPF_HASH_NONF_IPV4_TCP_SYN_NO_ACK] =
+			RTE_ETH_RSS_NONFRAG_IPV4_TCP,
+	[IDPF_HASH_NONF_IPV4_TCP] =
+			RTE_ETH_RSS_NONFRAG_IPV4_TCP,
+	[IDPF_HASH_NONF_IPV4_SCTP] =
+			RTE_ETH_RSS_NONFRAG_IPV4_SCTP,
+	[IDPF_HASH_NONF_IPV4_OTHER] =
+			RTE_ETH_RSS_NONFRAG_IPV4_OTHER,
+	[IDPF_HASH_FRAG_IPV4] = RTE_ETH_RSS_FRAG_IPV4,
+
+	/* IPv6 */
+	[IDPF_HASH_NONF_UNICAST_IPV6_UDP] =
+			RTE_ETH_RSS_NONFRAG_IPV6_UDP,
+	[IDPF_HASH_NONF_MULTICAST_IPV6_UDP] =
+			RTE_ETH_RSS_NONFRAG_IPV6_UDP,
+	[IDPF_HASH_NONF_IPV6_UDP] =
+			RTE_ETH_RSS_NONFRAG_IPV6_UDP,
+	[IDPF_HASH_NONF_IPV6_TCP_SYN_NO_ACK] =
+			RTE_ETH_RSS_NONFRAG_IPV6_TCP,
+	[IDPF_HASH_NONF_IPV6_TCP] =
+			RTE_ETH_RSS_NONFRAG_IPV6_TCP,
+	[IDPF_HASH_NONF_IPV6_SCTP] =
+			RTE_ETH_RSS_NONFRAG_IPV6_SCTP,
+	[IDPF_HASH_NONF_IPV6_OTHER] =
+			RTE_ETH_RSS_NONFRAG_IPV6_OTHER,
+	[IDPF_HASH_FRAG_IPV6] = RTE_ETH_RSS_FRAG_IPV6,
+
+	/* L2 Payload */
+	[IDPF_HASH_L2_PAYLOAD] = RTE_ETH_RSS_L2_PAYLOAD
+};
+
+static const uint64_t idpf_ipv4_rss = RTE_ETH_RSS_NONFRAG_IPV4_UDP |
+			  RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+			  RTE_ETH_RSS_NONFRAG_IPV4_SCTP |
+			  RTE_ETH_RSS_NONFRAG_IPV4_OTHER |
+			  RTE_ETH_RSS_FRAG_IPV4;
+
+static const uint64_t idpf_ipv6_rss = RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+			  RTE_ETH_RSS_NONFRAG_IPV6_TCP |
+			  RTE_ETH_RSS_NONFRAG_IPV6_SCTP |
+			  RTE_ETH_RSS_NONFRAG_IPV6_OTHER |
+			  RTE_ETH_RSS_FRAG_IPV6;
+
 static int
 idpf_dev_link_update(struct rte_eth_dev *dev,
 		     __rte_unused int wait_to_complete)
@@ -59,6 +109,9 @@ idpf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->max_mtu = vport->max_mtu;
 	dev_info->min_mtu = RTE_ETHER_MIN_MTU;
 
+	dev_info->hash_key_size = vport->rss_key_size;
+	dev_info->reta_size = vport->rss_lut_size;
+
 	dev_info->flow_type_rss_offloads = IDPF_RSS_OFFLOAD_ALL;
 
 	dev_info->rx_offload_capa =
@@ -220,6 +273,54 @@ idpf_dev_stats_reset(struct rte_eth_dev *dev)
 	return 0;
 }
 
+static int idpf_config_rss_hf(struct idpf_vport *vport, uint64_t rss_hf)
+{
+	uint64_t hena = 0, valid_rss_hf = 0;
+	int ret = 0;
+	uint16_t i;
+
+	/**
+	 * RTE_ETH_RSS_IPV4 and RTE_ETH_RSS_IPV6 can be considered as 2
+	 * generalizations of all other IPv4 and IPv6 RSS types.
+	 */
+	if (rss_hf & RTE_ETH_RSS_IPV4)
+		rss_hf |= idpf_ipv4_rss;
+
+	if (rss_hf & RTE_ETH_RSS_IPV6)
+		rss_hf |= idpf_ipv6_rss;
+
+	for (i = 0; i < RTE_DIM(idpf_map_hena_rss); i++) {
+		uint64_t bit = BIT_ULL(i);
+
+		if (idpf_map_hena_rss[i] & rss_hf) {
+			valid_rss_hf |= idpf_map_hena_rss[i];
+			hena |= bit;
+		}
+	}
+
+	vport->rss_hf = hena;
+
+	ret = idpf_vc_set_rss_hash(vport);
+	if (ret != 0) {
+		PMD_DRV_LOG(WARNING,
+			    "fail to set RSS offload types, ret: %d", ret);
+		return ret;
+	}
+
+	if (valid_rss_hf & idpf_ipv4_rss)
+		valid_rss_hf |= rss_hf & RTE_ETH_RSS_IPV4;
+
+	if (valid_rss_hf & idpf_ipv6_rss)
+		valid_rss_hf |= rss_hf & RTE_ETH_RSS_IPV6;
+
+	if (rss_hf & ~valid_rss_hf)
+		PMD_DRV_LOG(WARNING, "Unsupported rss_hf 0x%" PRIx64,
+			    rss_hf & ~valid_rss_hf);
+	vport->last_general_rss_hf = valid_rss_hf;
+
+	return ret;
+}
+
 static int
 idpf_init_rss(struct idpf_vport *vport)
 {
@@ -256,6 +357,204 @@ idpf_init_rss(struct idpf_vport *vport)
 	return ret;
 }
 
+static int
+idpf_rss_reta_update(struct rte_eth_dev *dev,
+		     struct rte_eth_rss_reta_entry64 *reta_conf,
+		     uint16_t reta_size)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_adapter *adapter = vport->adapter;
+	uint16_t idx, shift;
+	uint32_t *lut;
+	int ret = 0;
+	uint16_t i;
+
+	if (adapter->caps.rss_caps == 0 || dev->data->nb_rx_queues == 0) {
+		PMD_DRV_LOG(DEBUG, "RSS is not supported");
+		return -ENOTSUP;
+	}
+
+	if (reta_size != vport->rss_lut_size) {
+		PMD_DRV_LOG(ERR, "The size of hash lookup table configured "
+				 "(%d) doesn't match the number of hardware can "
+				 "support (%d)",
+			    reta_size, vport->rss_lut_size);
+		return -EINVAL;
+	}
+
+	/* It MUST use the current LUT size to get the RSS lookup table,
+	 * otherwise if will fail with -100 error code.
+	 */
+	lut = rte_zmalloc(NULL, reta_size * sizeof(uint32_t), 0);
+	if (!lut) {
+		PMD_DRV_LOG(ERR, "No memory can be allocated");
+		return -ENOMEM;
+	}
+	/* store the old lut table temporarily */
+	rte_memcpy(lut, vport->rss_lut, reta_size * sizeof(uint32_t));
+
+	for (i = 0; i < reta_size; i++) {
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
+		if (reta_conf[idx].mask & (1ULL << shift))
+			lut[i] = reta_conf[idx].reta[shift];
+	}
+
+	rte_memcpy(vport->rss_lut, lut, reta_size * sizeof(uint32_t));
+	/* send virtchnl ops to configure RSS */
+	ret = idpf_vc_set_rss_lut(vport);
+	if (ret) {
+		PMD_INIT_LOG(ERR, "Failed to configure RSS lut");
+		goto out;
+	}
+out:
+	rte_free(lut);
+
+	return ret;
+}
+
+static int
+idpf_rss_reta_query(struct rte_eth_dev *dev,
+		    struct rte_eth_rss_reta_entry64 *reta_conf,
+		    uint16_t reta_size)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_adapter *adapter = vport->adapter;
+	uint16_t idx, shift;
+	int ret = 0;
+	uint16_t i;
+
+	if (adapter->caps.rss_caps == 0 || dev->data->nb_rx_queues == 0) {
+		PMD_DRV_LOG(DEBUG, "RSS is not supported");
+		return -ENOTSUP;
+	}
+
+	if (reta_size != vport->rss_lut_size) {
+		PMD_DRV_LOG(ERR, "The size of hash lookup table configured "
+			"(%d) doesn't match the number of hardware can "
+			"support (%d)", reta_size, vport->rss_lut_size);
+		return -EINVAL;
+	}
+
+	ret = idpf_vc_get_rss_lut(vport);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "Failed to get RSS LUT");
+		return ret;
+	}
+
+	for (i = 0; i < reta_size; i++) {
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
+		if (reta_conf[idx].mask & (1ULL << shift))
+			reta_conf[idx].reta[shift] = vport->rss_lut[i];
+	}
+
+	return 0;
+}
+
+static int
+idpf_rss_hash_update(struct rte_eth_dev *dev,
+		     struct rte_eth_rss_conf *rss_conf)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_adapter *adapter = vport->adapter;
+	int ret = 0;
+
+	if (adapter->caps.rss_caps == 0 || dev->data->nb_rx_queues == 0) {
+		PMD_DRV_LOG(DEBUG, "RSS is not supported");
+		return -ENOTSUP;
+	}
+
+	if (!rss_conf->rss_key || rss_conf->rss_key_len == 0) {
+		PMD_DRV_LOG(DEBUG, "No key to be configured");
+		goto skip_rss_key;
+	} else if (rss_conf->rss_key_len != vport->rss_key_size) {
+		PMD_DRV_LOG(ERR, "The size of hash key configured "
+				 "(%d) doesn't match the size of hardware can "
+				 "support (%d)",
+			    rss_conf->rss_key_len,
+			    vport->rss_key_size);
+		return -EINVAL;
+	}
+
+	rte_memcpy(vport->rss_key, rss_conf->rss_key,
+		   vport->rss_key_size);
+	ret = idpf_vc_set_rss_key(vport);
+	if (ret != 0) {
+		PMD_INIT_LOG(ERR, "Failed to configure RSS key");
+		return ret;
+	}
+
+skip_rss_key:
+	ret = idpf_config_rss_hf(vport, rss_conf->rss_hf);
+	if (ret != 0) {
+		PMD_INIT_LOG(ERR, "Failed to configure RSS hash");
+		return ret;
+	}
+
+	return 0;
+}
+
+static uint64_t
+idpf_map_general_rss_hf(uint64_t config_rss_hf, uint64_t last_general_rss_hf)
+{
+	uint64_t valid_rss_hf = 0;
+	uint16_t i;
+
+	for (i = 0; i < RTE_DIM(idpf_map_hena_rss); i++) {
+		uint64_t bit = BIT_ULL(i);
+
+		if (bit & config_rss_hf)
+			valid_rss_hf |= idpf_map_hena_rss[i];
+	}
+
+	if (valid_rss_hf & idpf_ipv4_rss)
+		valid_rss_hf |= last_general_rss_hf & RTE_ETH_RSS_IPV4;
+
+	if (valid_rss_hf & idpf_ipv6_rss)
+		valid_rss_hf |= last_general_rss_hf & RTE_ETH_RSS_IPV6;
+
+	return valid_rss_hf;
+}
+
+static int
+idpf_rss_hash_conf_get(struct rte_eth_dev *dev,
+		       struct rte_eth_rss_conf *rss_conf)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_adapter *adapter = vport->adapter;
+	int ret = 0;
+
+	if (adapter->caps.rss_caps == 0 || dev->data->nb_rx_queues == 0) {
+		PMD_DRV_LOG(DEBUG, "RSS is not supported");
+		return -ENOTSUP;
+	}
+
+	ret = idpf_vc_get_rss_hash(vport);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "Failed to get RSS hf");
+		return ret;
+	}
+
+	rss_conf->rss_hf = idpf_map_general_rss_hf(vport->rss_hf, vport->last_general_rss_hf);
+
+	if (!rss_conf->rss_key)
+		return 0;
+
+	ret = idpf_vc_get_rss_key(vport);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "Failed to get RSS key");
+		return ret;
+	}
+
+	if (rss_conf->rss_key_len > vport->rss_key_size)
+		rss_conf->rss_key_len = vport->rss_key_size;
+
+	rte_memcpy(rss_conf->rss_key, vport->rss_key, rss_conf->rss_key_len);
+
+	return 0;
+}
+
 static int
 idpf_dev_configure(struct rte_eth_dev *dev)
 {
@@ -693,6 +992,10 @@ static const struct eth_dev_ops idpf_eth_dev_ops = {
 	.dev_supported_ptypes_get	= idpf_dev_supported_ptypes_get,
 	.stats_get			= idpf_dev_stats_get,
 	.stats_reset			= idpf_dev_stats_reset,
+	.reta_update			= idpf_rss_reta_update,
+	.reta_query			= idpf_rss_reta_query,
+	.rss_hash_update		= idpf_rss_hash_update,
+	.rss_hash_conf_get		= idpf_rss_hash_conf_get,
 };
 
 static uint16_t
diff --git a/drivers/net/idpf/idpf_ethdev.h b/drivers/net/idpf/idpf_ethdev.h
index d791d402fb..839a2bd82c 100644
--- a/drivers/net/idpf/idpf_ethdev.h
+++ b/drivers/net/idpf/idpf_ethdev.h
@@ -48,7 +48,8 @@
 		RTE_ETH_RSS_NONFRAG_IPV6_TCP    |	\
 		RTE_ETH_RSS_NONFRAG_IPV6_UDP    |	\
 		RTE_ETH_RSS_NONFRAG_IPV6_SCTP   |	\
-		RTE_ETH_RSS_NONFRAG_IPV6_OTHER)
+		RTE_ETH_RSS_NONFRAG_IPV6_OTHER  |	\
+		RTE_ETH_RSS_L2_PAYLOAD)
 
 #define IDPF_ADAPTER_NAME_LEN	(PCI_PRI_STR_SIZE + 1)
 
-- 
2.25.1


^ permalink raw reply	[flat|nested] 63+ messages in thread
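
For reference, the reta_update/reta_query ops added here are what back the generic ethdev RETA API. The sketch below shows roughly how an application could spread the redirection table round-robin over its Rx queues through that API; reta_round_robin is an illustrative helper, the port is assumed configured with RSS on an initialized EAL, and error handling is kept minimal.

#include <stdlib.h>
#include <rte_ethdev.h>

/* Spread all RETA entries round-robin over nb_rx_queues of the given port. */
static int
reta_round_robin(uint16_t port_id, uint16_t nb_rx_queues)
{
	struct rte_eth_dev_info info;
	struct rte_eth_rss_reta_entry64 *reta;
	uint16_t i, groups;
	int ret;

	if (nb_rx_queues == 0)
		return -1;

	ret = rte_eth_dev_info_get(port_id, &info);
	if (ret != 0 || info.reta_size == 0)
		return -1;

	groups = (info.reta_size + RTE_ETH_RETA_GROUP_SIZE - 1) /
		 RTE_ETH_RETA_GROUP_SIZE;
	reta = calloc(groups, sizeof(*reta));
	if (reta == NULL)
		return -1;

	for (i = 0; i < info.reta_size; i++) {
		uint16_t idx = i / RTE_ETH_RETA_GROUP_SIZE;
		uint16_t shift = i % RTE_ETH_RETA_GROUP_SIZE;

		reta[idx].mask |= 1ULL << shift;           /* mark the entry as valid */
		reta[idx].reta[shift] = i % nb_rx_queues;  /* round-robin queue id */
	}

	/* For idpf, reta_size must equal vport->rss_lut_size, which is what
	 * dev_info reports after this series. */
	ret = rte_eth_dev_rss_reta_update(port_id, reta, info.reta_size);
	free(reta);
	return ret;
}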

* [PATCH v3 3/6] common/idpf: support single q scatter RX datapath
  2023-01-18  7:14   ` [PATCH v3 0/6] add idpf pmd enhancement features Mingxia Liu
  2023-01-18  7:14     ` [PATCH v3 1/6] common/idpf: add hw statistics Mingxia Liu
  2023-01-18  7:14     ` [PATCH v3 2/6] common/idpf: add RSS set/get ops Mingxia Liu
@ 2023-01-18  7:14     ` Mingxia Liu
  2023-02-02  3:45       ` Wu, Jingjing
  2023-01-18  7:14     ` [PATCH v3 4/6] common/idpf: add rss_offload hash in singleq rx Mingxia Liu
                       ` (3 subsequent siblings)
  6 siblings, 1 reply; 63+ messages in thread
From: Mingxia Liu @ 2023-01-18  7:14 UTC (permalink / raw)
  To: dev; +Cc: jingjing.wu, beilei.xing, Mingxia Liu, Wenjun Wu

This patch adds the single queue scatter Rx receive function.

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
---
 drivers/common/idpf/idpf_common_rxtx.c | 134 +++++++++++++++++++++++++
 drivers/common/idpf/idpf_common_rxtx.h |   3 +
 drivers/common/idpf/version.map        |   1 +
 drivers/net/idpf/idpf_ethdev.c         |   3 +-
 drivers/net/idpf/idpf_rxtx.c           |  26 ++++-
 drivers/net/idpf/idpf_rxtx.h           |   2 +
 6 files changed, 166 insertions(+), 3 deletions(-)

diff --git a/drivers/common/idpf/idpf_common_rxtx.c b/drivers/common/idpf/idpf_common_rxtx.c
index 7a5dc3f04c..9dbf0f4764 100644
--- a/drivers/common/idpf/idpf_common_rxtx.c
+++ b/drivers/common/idpf/idpf_common_rxtx.c
@@ -1146,6 +1146,140 @@ idpf_singleq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 	return nb_rx;
 }
 
+uint16_t
+idpf_singleq_recv_scatter_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
+			       uint16_t nb_pkts)
+{
+	struct idpf_rx_queue *rxq = rx_queue;
+	volatile union virtchnl2_rx_desc *rx_ring = rxq->rx_ring;
+	volatile union virtchnl2_rx_desc *rxdp;
+	union virtchnl2_rx_desc rxd;
+	struct idpf_adapter *ad;
+	struct rte_mbuf *first_seg = rxq->pkt_first_seg;
+	struct rte_mbuf *last_seg = rxq->pkt_last_seg;
+	struct rte_mbuf *rxm;
+	struct rte_mbuf *nmb;
+	struct rte_eth_dev *dev;
+	const uint32_t *ptype_tbl = rxq->adapter->ptype_tbl;
+	uint16_t nb_hold = 0, nb_rx = 0;
+	uint16_t rx_id = rxq->rx_tail;
+	uint16_t rx_packet_len;
+	uint16_t rx_status0;
+	uint64_t pkt_flags;
+	uint64_t dma_addr;
+	uint64_t ts_ns;
+
+	ad = rxq->adapter;
+
+	if (unlikely(!rxq) || unlikely(!rxq->q_started))
+		return nb_rx;
+
+	while (nb_rx < nb_pkts) {
+		rxdp = &rx_ring[rx_id];
+		rx_status0 = rte_le_to_cpu_16(rxdp->flex_nic_wb.status_error0);
+
+		/* Check the DD bit first */
+		if (!(rx_status0 & (1 << VIRTCHNL2_RX_FLEX_DESC_STATUS0_DD_S)))
+			break;
+
+		nmb = rte_mbuf_raw_alloc(rxq->mp);
+		if (unlikely(!nmb)) {
+			rte_atomic64_inc(&(rxq->rx_stats.mbuf_alloc_failed));
+			RX_LOG(DEBUG, "RX mbuf alloc failed port_id=%u "
+			       "queue_id=%u", rxq->port_id, rxq->queue_id);
+			break;
+		}
+
+		rxd = *rxdp;
+
+		nb_hold++;
+		rxm = rxq->sw_ring[rx_id];
+		rxq->sw_ring[rx_id] = nmb;
+		rx_id++;
+		if (unlikely(rx_id == rxq->nb_rx_desc))
+			rx_id = 0;
+
+		/* Prefetch next mbuf */
+		rte_prefetch0(rxq->sw_ring[rx_id]);
+
+		/* When next RX descriptor is on a cache line boundary,
+		 * prefetch the next 4 RX descriptors and next 8 pointers
+		 * to mbufs.
+		 */
+		if ((rx_id & 0x3) == 0) {
+			rte_prefetch0(&rx_ring[rx_id]);
+			rte_prefetch0(rxq->sw_ring[rx_id]);
+		}
+		dma_addr =
+			rte_cpu_to_le_64(rte_mbuf_data_iova_default(nmb));
+		rxdp->read.hdr_addr = 0;
+		rxdp->read.pkt_addr = dma_addr;
+		rx_packet_len = (rte_cpu_to_le_16(rxd.flex_nic_wb.pkt_len) &
+				 VIRTCHNL2_RX_FLEX_DESC_PKT_LEN_M);
+		rxm->data_len = rx_packet_len;
+		rxm->data_off = RTE_PKTMBUF_HEADROOM;
+
+		/**
+		 * If this is the first buffer of the received packet, set the
+		 * pointer to the first mbuf of the packet and initialize its
+		 * context. Otherwise, update the total length and the number
+		 * of segments of the current scattered packet, and update the
+		 * pointer to the last mbuf of the current packet.
+		 */
+		if (!first_seg) {
+			first_seg = rxm;
+			first_seg->nb_segs = 1;
+			first_seg->pkt_len = rx_packet_len;
+		} else {
+			first_seg->pkt_len =
+				(uint16_t)(first_seg->pkt_len +
+					   rx_packet_len);
+			first_seg->nb_segs++;
+			last_seg->next = rxm;
+		}
+
+		if (!(rx_status0 & (1 << VIRTCHNL2_RX_FLEX_DESC_STATUS0_EOF_S))) {
+			last_seg = rxm;
+			continue;
+		}
+
+		rxm->next = NULL;
+
+		first_seg->port = rxq->port_id;
+		first_seg->ol_flags = 0;
+		pkt_flags = idpf_rxd_to_pkt_flags(rx_status0);
+		first_seg->packet_type =
+			ptype_tbl[(uint8_t)(rte_cpu_to_le_16(rxd.flex_nic_wb.ptype_flex_flags0) &
+				VIRTCHNL2_RX_FLEX_DESC_PTYPE_M)];
+
+		if (idpf_timestamp_dynflag > 0 &&
+		    (rxq->offloads & IDPF_RX_OFFLOAD_TIMESTAMP) != 0) {
+			/* timestamp */
+			ts_ns = idpf_tstamp_convert_32b_64b(ad,
+				rxq->hw_register_set,
+				rte_le_to_cpu_32(rxd.flex_nic_wb.flex_ts.ts_high));
+			rxq->hw_register_set = 0;
+			*RTE_MBUF_DYNFIELD(rxm,
+					   idpf_timestamp_dynfield_offset,
+					   rte_mbuf_timestamp_t *) = ts_ns;
+			first_seg->ol_flags |= idpf_timestamp_dynflag;
+		}
+
+		first_seg->ol_flags |= pkt_flags;
+		rte_prefetch0(RTE_PTR_ADD(first_seg->buf_addr,
+					  first_seg->data_off));
+		rx_pkts[nb_rx++] = first_seg;
+		first_seg = NULL;
+	}
+	rxq->rx_tail = rx_id;
+	rxq->pkt_first_seg = first_seg;
+	rxq->pkt_last_seg = last_seg;
+
+	idpf_update_rx_tail(rxq, nb_hold, rx_id);
+
+	return nb_rx;
+}
+
 static inline int
 idpf_xmit_cleanup(struct idpf_tx_queue *txq)
 {
diff --git a/drivers/common/idpf/idpf_common_rxtx.h b/drivers/common/idpf/idpf_common_rxtx.h
index 98f492a8c1..aac61ea2cb 100644
--- a/drivers/common/idpf/idpf_common_rxtx.h
+++ b/drivers/common/idpf/idpf_common_rxtx.h
@@ -260,6 +260,9 @@ __rte_internal
 uint16_t idpf_splitq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 			       uint16_t nb_pkts);
 __rte_internal
+uint16_t idpf_singleq_recv_scatter_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
+			  uint16_t nb_pkts);
+__rte_internal
 uint16_t idpf_splitq_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 			       uint16_t nb_pkts);
 __rte_internal
diff --git a/drivers/common/idpf/version.map b/drivers/common/idpf/version.map
index 36a3a90d39..591af6b046 100644
--- a/drivers/common/idpf/version.map
+++ b/drivers/common/idpf/version.map
@@ -32,6 +32,7 @@ INTERNAL {
 	idpf_reset_split_tx_descq;
 	idpf_rx_queue_release;
 	idpf_singleq_recv_pkts;
+	idpf_singleq_recv_scatter_pkts;
 	idpf_singleq_recv_pkts_avx512;
 	idpf_singleq_rx_vec_setup;
 	idpf_splitq_rx_vec_setup;
diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index 037cabb04e..2ab31792ba 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -119,7 +119,8 @@ idpf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 		RTE_ETH_RX_OFFLOAD_UDP_CKSUM            |
 		RTE_ETH_RX_OFFLOAD_TCP_CKSUM            |
 		RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM     |
-		RTE_ETH_RX_OFFLOAD_TIMESTAMP;
+		RTE_ETH_RX_OFFLOAD_TIMESTAMP		|
+		RTE_ETH_RX_OFFLOAD_SCATTER;
 
 	dev_info->tx_offload_capa =
 		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM		|
diff --git a/drivers/net/idpf/idpf_rxtx.c b/drivers/net/idpf/idpf_rxtx.c
index 6eeaab41cc..a865d14fea 100644
--- a/drivers/net/idpf/idpf_rxtx.c
+++ b/drivers/net/idpf/idpf_rxtx.c
@@ -503,6 +503,8 @@ int
 idpf_rx_queue_init(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 {
 	struct idpf_rx_queue *rxq;
+	uint16_t max_pkt_len;
+	uint32_t frame_size;
 	int err;
 
 	if (rx_queue_id >= dev->data->nb_rx_queues)
@@ -516,6 +518,17 @@ idpf_rx_queue_init(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 		return -EINVAL;
 	}
 
+	frame_size = dev->data->mtu + IDPF_ETH_OVERHEAD;
+
+	max_pkt_len =
+	    RTE_MIN((uint32_t)IDPF_SUPPORT_CHAIN_NUM * rxq->rx_buf_len,
+		    frame_size);
+
+	rxq->max_pkt_len = max_pkt_len;
+	if ((dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_SCATTER) ||
+	    frame_size > rxq->rx_buf_len)
+		dev->data->scattered_rx = 1;
+
 	err = idpf_register_ts_mbuf(rxq);
 	if (err != 0) {
 		PMD_DRV_LOG(ERR, "fail to regidter timestamp mbuf %u",
@@ -801,13 +814,22 @@ idpf_set_rx_function(struct rte_eth_dev *dev)
 #endif /* CC_AVX512_SUPPORT */
 		}
 
+		if (dev->data->scattered_rx) {
+			dev->rx_pkt_burst = idpf_singleq_recv_scatter_pkts;
+			return;
+		}
 		dev->rx_pkt_burst = idpf_singleq_recv_pkts;
 	}
 #else
-	if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT)
+	if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
 		dev->rx_pkt_burst = idpf_splitq_recv_pkts;
-	else
+	} else {
+		if (dev->data->scattered_rx) {
+			dev->rx_pkt_burst = idpf_singleq_recv_scatter_pkts;
+			return;
+		}
 		dev->rx_pkt_burst = idpf_singleq_recv_pkts;
+	}
 #endif /* RTE_ARCH_X86 */
 }
 
diff --git a/drivers/net/idpf/idpf_rxtx.h b/drivers/net/idpf/idpf_rxtx.h
index 3a5084dfd6..41a7495083 100644
--- a/drivers/net/idpf/idpf_rxtx.h
+++ b/drivers/net/idpf/idpf_rxtx.h
@@ -23,6 +23,8 @@
 #define IDPF_DEFAULT_TX_RS_THRESH	32
 #define IDPF_DEFAULT_TX_FREE_THRESH	32
 
+#define IDPF_SUPPORT_CHAIN_NUM 5
+
 int idpf_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 			uint16_t nb_desc, unsigned int socket_id,
 			const struct rte_eth_rxconf *rx_conf,
-- 
2.25.1


^ permalink raw reply	[flat|nested] 63+ messages in thread
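
From the application side, the new scatter path is selected when RTE_ETH_RX_OFFLOAD_SCATTER is requested at configure time, or when the MTU is larger than one Rx buffer. A rough sketch, assuming an initialized EAL and omitting mempool and queue setup; enable_scattered_rx is an illustrative helper, not part of the patch.

#include <rte_ethdev.h>

/* Configure a port so oversized frames are delivered as chained mbufs
 * through the scatter Rx path added above. */
static int
enable_scattered_rx(uint16_t port_id, uint16_t mtu,
		    uint16_t nb_rxq, uint16_t nb_txq)
{
	struct rte_eth_conf conf = {0};
	int ret;

	conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_SCATTER;
	conf.rxmode.mtu = mtu;  /* e.g. 9000 for jumbo frames */

	ret = rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &conf);
	if (ret != 0)
		return ret;

	/* Rx/Tx queue setup and rte_eth_dev_start() would follow here. */
	return 0;
}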

* [PATCH v3 4/6] common/idpf: add rss_offload hash in singleq rx
  2023-01-18  7:14   ` [PATCH v3 0/6] add idpf pmd enhancement features Mingxia Liu
                       ` (2 preceding siblings ...)
  2023-01-18  7:14     ` [PATCH v3 3/6] common/idpf: support single q scatter RX datapath Mingxia Liu
@ 2023-01-18  7:14     ` Mingxia Liu
  2023-01-18  7:14     ` [PATCH v3 5/6] common/idpf: add alarm to support handle vchnl message Mingxia Liu
                       ` (2 subsequent siblings)
  6 siblings, 0 replies; 63+ messages in thread
From: Mingxia Liu @ 2023-01-18  7:14 UTC (permalink / raw)
  To: dev; +Cc: jingjing.wu, beilei.xing, Mingxia Liu

This patch adds RSS valid flag and hash value parsing of the Rx descriptor.

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/common/idpf/idpf_common_rxtx.c | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)

diff --git a/drivers/common/idpf/idpf_common_rxtx.c b/drivers/common/idpf/idpf_common_rxtx.c
index 9dbf0f4764..0ebb390842 100644
--- a/drivers/common/idpf/idpf_common_rxtx.c
+++ b/drivers/common/idpf/idpf_common_rxtx.c
@@ -1030,6 +1030,20 @@ idpf_update_rx_tail(struct idpf_rx_queue *rxq, uint16_t nb_hold,
 	rxq->nb_rx_hold = nb_hold;
 }
 
+static inline void
+idpf_singleq_rx_rss_offload(struct rte_mbuf *mb,
+			    volatile struct virtchnl2_rx_flex_desc_nic *rx_desc,
+			    uint64_t *pkt_flags)
+{
+	uint16_t rx_status0 = rte_le_to_cpu_16(rx_desc->status_error0);
+
+	if (rx_status0 & RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_STATUS0_RSS_VALID_S)) {
+		*pkt_flags |= RTE_MBUF_F_RX_RSS_HASH;
+		mb->hash.rss = rte_le_to_cpu_32(rx_desc->rss_hash);
+	}
+
+}
+
 uint16_t
 idpf_singleq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 		       uint16_t nb_pkts)
@@ -1118,6 +1132,7 @@ idpf_singleq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 		rxm->port = rxq->port_id;
 		rxm->ol_flags = 0;
 		pkt_flags = idpf_rxd_to_pkt_flags(rx_status0);
+		idpf_singleq_rx_rss_offload(rxm, &rxd.flex_nic_wb, &pkt_flags);
 		rxm->packet_type =
 			ptype_tbl[(uint8_t)(rte_cpu_to_le_16(rxd.flex_nic_wb.ptype_flex_flags0) &
 					    VIRTCHNL2_RX_FLEX_DESC_PTYPE_M)];
@@ -1248,6 +1263,7 @@ idpf_singleq_recv_scatter_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 		first_seg->port = rxq->port_id;
 		first_seg->ol_flags = 0;
 		pkt_flags = idpf_rxd_to_pkt_flags(rx_status0);
+		idpf_singleq_rx_rss_offload(first_seg, &rxd.flex_nic_wb, &pkt_flags);
 		first_seg->packet_type =
 			ptype_tbl[(uint8_t)(rte_cpu_to_le_16(rxd.flex_nic_wb.ptype_flex_flags0) &
 				VIRTCHNL2_RX_FLEX_DESC_PTYPE_M)];
-- 
2.25.1


^ permalink raw reply	[flat|nested] 63+ messages in thread
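
On the consuming side, an application can now rely on RTE_MBUF_F_RX_RSS_HASH and mbuf->hash.rss for packets received through the single-queue path. A small illustrative poll loop, assuming the port is already started; poll_and_use_rss is not part of the patch.

#include <stdio.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>

/* Pull a burst from one queue and read the RSS hash filled in by the
 * descriptor parsing added above. */
static void
poll_and_use_rss(uint16_t port_id, uint16_t queue_id)
{
	struct rte_mbuf *pkts[32];
	uint16_t nb;
	unsigned int i;

	nb = rte_eth_rx_burst(port_id, queue_id, pkts, 32);
	for (i = 0; i < nb; i++) {
		/* The flag is set only when the descriptor's RSS-valid bit was set. */
		if (pkts[i]->ol_flags & RTE_MBUF_F_RX_RSS_HASH)
			printf("pkt %u: rss hash 0x%08x\n", i,
			       (unsigned int)pkts[i]->hash.rss);
		rte_pktmbuf_free(pkts[i]);
	}
}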

* [PATCH v3 5/6] common/idpf: add alarm to support handle vchnl message
  2023-01-18  7:14   ` [PATCH v3 0/6] add idpf pmd enhancement features Mingxia Liu
                       ` (3 preceding siblings ...)
  2023-01-18  7:14     ` [PATCH v3 4/6] common/idpf: add rss_offload hash in singleq rx Mingxia Liu
@ 2023-01-18  7:14     ` Mingxia Liu
  2023-02-02  4:23       ` Wu, Jingjing
  2023-01-18  7:14     ` [PATCH v3 6/6] common/idpf: add xstats ops Mingxia Liu
  2023-02-07  9:56     ` [PATCH v4 0/6] add idpf pmd enhancement features Mingxia Liu
  6 siblings, 1 reply; 63+ messages in thread
From: Mingxia Liu @ 2023-01-18  7:14 UTC (permalink / raw)
  To: dev; +Cc: jingjing.wu, beilei.xing, Mingxia Liu

Handle virtual channel messages.
Refine link status update.

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
Signed-off-by: Beilei Xing <beilei.xing@intel.com>
---
 drivers/common/idpf/idpf_common_device.h   |   5 +
 drivers/common/idpf/idpf_common_virtchnl.c |  19 ---
 drivers/net/idpf/idpf_ethdev.c             | 165 ++++++++++++++++++++-
 drivers/net/idpf/idpf_ethdev.h             |   2 +
 4 files changed, 171 insertions(+), 20 deletions(-)

diff --git a/drivers/common/idpf/idpf_common_device.h b/drivers/common/idpf/idpf_common_device.h
index f22ffde22e..2adeeff37e 100644
--- a/drivers/common/idpf/idpf_common_device.h
+++ b/drivers/common/idpf/idpf_common_device.h
@@ -118,6 +118,11 @@ struct idpf_vport {
 	bool tx_use_avx512;
 
 	struct virtchnl2_vport_stats eth_stats_offset;
+
+	void *dev;
+	/* Event from ipf */
+	bool link_up;
+	uint32_t link_speed;
 };
 
 /* Message type read in virtual channel from PF */
diff --git a/drivers/common/idpf/idpf_common_virtchnl.c b/drivers/common/idpf/idpf_common_virtchnl.c
index 5965f9ee55..f36aae8a93 100644
--- a/drivers/common/idpf/idpf_common_virtchnl.c
+++ b/drivers/common/idpf/idpf_common_virtchnl.c
@@ -202,25 +202,6 @@ idpf_execute_vc_cmd(struct idpf_adapter *adapter, struct idpf_cmd_info *args)
 	switch (args->ops) {
 	case VIRTCHNL_OP_VERSION:
 	case VIRTCHNL2_OP_GET_CAPS:
-	case VIRTCHNL2_OP_CREATE_VPORT:
-	case VIRTCHNL2_OP_DESTROY_VPORT:
-	case VIRTCHNL2_OP_SET_RSS_KEY:
-	case VIRTCHNL2_OP_SET_RSS_LUT:
-	case VIRTCHNL2_OP_SET_RSS_HASH:
-	case VIRTCHNL2_OP_CONFIG_RX_QUEUES:
-	case VIRTCHNL2_OP_CONFIG_TX_QUEUES:
-	case VIRTCHNL2_OP_ENABLE_QUEUES:
-	case VIRTCHNL2_OP_DISABLE_QUEUES:
-	case VIRTCHNL2_OP_ENABLE_VPORT:
-	case VIRTCHNL2_OP_DISABLE_VPORT:
-	case VIRTCHNL2_OP_MAP_QUEUE_VECTOR:
-	case VIRTCHNL2_OP_UNMAP_QUEUE_VECTOR:
-	case VIRTCHNL2_OP_ALLOC_VECTORS:
-	case VIRTCHNL2_OP_DEALLOC_VECTORS:
-	case VIRTCHNL2_OP_GET_STATS:
-	case VIRTCHNL2_OP_GET_RSS_KEY:
-	case VIRTCHNL2_OP_GET_RSS_HASH:
-	case VIRTCHNL2_OP_GET_RSS_LUT:
 		/* for init virtchnl ops, need to poll the response */
 		err = idpf_read_one_msg(adapter, args->ops, args->out_size, args->out_buffer);
 		clear_cmd(adapter);
diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index 2ab31792ba..b86f63f94e 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -9,6 +9,7 @@
 #include <rte_memzone.h>
 #include <rte_dev.h>
 #include <errno.h>
+#include <rte_alarm.h>
 
 #include "idpf_ethdev.h"
 #include "idpf_rxtx.h"
@@ -83,12 +84,49 @@ static int
 idpf_dev_link_update(struct rte_eth_dev *dev,
 		     __rte_unused int wait_to_complete)
 {
+	struct idpf_vport *vport = dev->data->dev_private;
 	struct rte_eth_link new_link;
 
 	memset(&new_link, 0, sizeof(new_link));
 
-	new_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+	switch (vport->link_speed) {
+	case 10:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_10M;
+		break;
+	case 100:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_100M;
+		break;
+	case 1000:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_1G;
+		break;
+	case 10000:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_10G;
+		break;
+	case 20000:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_20G;
+		break;
+	case 25000:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_25G;
+		break;
+	case 40000:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_40G;
+		break;
+	case 50000:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_50G;
+		break;
+	case 100000:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_100G;
+		break;
+	case 200000:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_200G;
+		break;
+	default:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+	}
+
 	new_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+	new_link.link_status = vport->link_up ? RTE_ETH_LINK_UP :
+		RTE_ETH_LINK_DOWN;
 	new_link.link_autoneg = !(dev->data->dev_conf.link_speeds &
 				  RTE_ETH_LINK_SPEED_FIXED);
 
@@ -927,6 +965,127 @@ idpf_parse_devargs(struct rte_pci_device *pci_dev, struct idpf_adapter_ext *adap
 	return ret;
 }
 
+static struct idpf_vport *
+idpf_find_vport(struct idpf_adapter_ext *adapter, uint32_t vport_id)
+{
+	struct idpf_vport *vport = NULL;
+	int i;
+
+	for (i = 0; i < adapter->cur_vport_nb; i++) {
+		vport = adapter->vports[i];
+		if (vport->vport_id != vport_id)
+			continue;
+		else
+			return vport;
+	}
+
+	return vport;
+}
+
+static void
+idpf_handle_event_msg(struct idpf_vport *vport, uint8_t *msg, uint16_t msglen)
+{
+	struct virtchnl2_event *vc_event = (struct virtchnl2_event *)msg;
+	struct rte_eth_dev *dev = (struct rte_eth_dev *)vport->dev;
+
+	if (msglen < sizeof(struct virtchnl2_event)) {
+		PMD_DRV_LOG(ERR, "Error event");
+		return;
+	}
+
+	switch (vc_event->event) {
+	case VIRTCHNL2_EVENT_LINK_CHANGE:
+		PMD_DRV_LOG(DEBUG, "VIRTCHNL2_EVENT_LINK_CHANGE");
+		vport->link_up = vc_event->link_status;
+		vport->link_speed = vc_event->link_speed;
+		idpf_dev_link_update(dev, 0);
+		break;
+	default:
+		PMD_DRV_LOG(ERR, " unknown event received %u", vc_event->event);
+		break;
+	}
+}
+
+static void
+idpf_handle_virtchnl_msg(struct idpf_adapter_ext *adapter_ex)
+{
+	struct idpf_adapter *adapter = &adapter_ex->base;
+	struct idpf_dma_mem *dma_mem = NULL;
+	struct idpf_hw *hw = &adapter->hw;
+	struct virtchnl2_event *vc_event;
+	struct idpf_ctlq_msg ctlq_msg;
+	enum idpf_mbx_opc mbx_op;
+	struct idpf_vport *vport;
+	enum virtchnl_ops vc_op;
+	uint16_t pending = 1;
+	int ret;
+
+	while (pending) {
+		ret = idpf_ctlq_recv(hw->arq, &pending, &ctlq_msg);
+		if (ret) {
+			PMD_DRV_LOG(INFO, "Failed to read msg from virtual channel, ret: %d", ret);
+			return;
+		}
+
+		rte_memcpy(adapter->mbx_resp, ctlq_msg.ctx.indirect.payload->va,
+			   IDPF_DFLT_MBX_BUF_SIZE);
+
+		mbx_op = rte_le_to_cpu_16(ctlq_msg.opcode);
+		vc_op = rte_le_to_cpu_32(ctlq_msg.cookie.mbx.chnl_opcode);
+		adapter->cmd_retval = rte_le_to_cpu_32(ctlq_msg.cookie.mbx.chnl_retval);
+
+		switch (mbx_op) {
+		case idpf_mbq_opc_send_msg_to_peer_pf:
+			if (vc_op == VIRTCHNL2_OP_EVENT) {
+				if (ctlq_msg.data_len < sizeof(struct virtchnl2_event)) {
+					PMD_DRV_LOG(ERR, "Error event");
+					return;
+				}
+				vc_event = (struct virtchnl2_event *)adapter->mbx_resp;
+				vport = idpf_find_vport(adapter_ex, vc_event->vport_id);
+				if (!vport) {
+					PMD_DRV_LOG(ERR, "Can't find vport.");
+					return;
+				}
+				idpf_handle_event_msg(vport, adapter->mbx_resp,
+						      ctlq_msg.data_len);
+			} else {
+				if (vc_op == adapter->pend_cmd)
+					notify_cmd(adapter, adapter->cmd_retval);
+				else
+					PMD_DRV_LOG(ERR, "command mismatch, expect %u, get %u",
+						    adapter->pend_cmd, vc_op);
+
+				PMD_DRV_LOG(DEBUG, " Virtual channel response is received,"
+					    "opcode = %d", vc_op);
+			}
+			goto post_buf;
+		default:
+			PMD_DRV_LOG(DEBUG, "Request %u is not supported yet", mbx_op);
+		}
+	}
+
+post_buf:
+	if (ctlq_msg.data_len)
+		dma_mem = ctlq_msg.ctx.indirect.payload;
+	else
+		pending = 0;
+
+	ret = idpf_ctlq_post_rx_buffs(hw, hw->arq, &pending, &dma_mem);
+	if (ret && dma_mem)
+		idpf_free_dma_mem(hw, dma_mem);
+}
+
+static void
+idpf_dev_alarm_handler(void *param)
+{
+	struct idpf_adapter_ext *adapter = param;
+
+	idpf_handle_virtchnl_msg(adapter);
+
+	rte_eal_alarm_set(IDPF_ALARM_INTERVAL, idpf_dev_alarm_handler, adapter);
+}
+
 static int
 idpf_adapter_ext_init(struct rte_pci_device *pci_dev, struct idpf_adapter_ext *adapter)
 {
@@ -949,6 +1108,8 @@ idpf_adapter_ext_init(struct rte_pci_device *pci_dev, struct idpf_adapter_ext *a
 		goto err_adapter_init;
 	}
 
+	rte_eal_alarm_set(IDPF_ALARM_INTERVAL, idpf_dev_alarm_handler, adapter);
+
 	adapter->max_vport_nb = adapter->base.caps.max_vports;
 
 	adapter->vports = rte_zmalloc("vports",
@@ -1032,6 +1193,7 @@ idpf_dev_vport_init(struct rte_eth_dev *dev, void *init_params)
 	vport->adapter = &adapter->base;
 	vport->sw_idx = param->idx;
 	vport->devarg_id = param->devarg_id;
+	vport->dev = dev;
 
 	memset(&create_vport_info, 0, sizeof(create_vport_info));
 	ret = idpf_create_vport_info_init(vport, &create_vport_info);
@@ -1101,6 +1263,7 @@ idpf_find_adapter_ext(struct rte_pci_device *pci_dev)
 static void
 idpf_adapter_ext_deinit(struct idpf_adapter_ext *adapter)
 {
+	rte_eal_alarm_cancel(idpf_dev_alarm_handler, adapter);
 	idpf_adapter_deinit(&adapter->base);
 
 	rte_free(adapter->vports);
diff --git a/drivers/net/idpf/idpf_ethdev.h b/drivers/net/idpf/idpf_ethdev.h
index 839a2bd82c..3c2c932438 100644
--- a/drivers/net/idpf/idpf_ethdev.h
+++ b/drivers/net/idpf/idpf_ethdev.h
@@ -53,6 +53,8 @@
 
 #define IDPF_ADAPTER_NAME_LEN	(PCI_PRI_STR_SIZE + 1)
 
+#define IDPF_ALARM_INTERVAL	50000 /* us */
+
 struct idpf_vport_param {
 	struct idpf_adapter_ext *adapter;
 	uint16_t devarg_id; /* arg id from user */
-- 
2.25.1


^ permalink raw reply	[flat|nested] 63+ messages in thread
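
The self-rescheduling alarm used by idpf_dev_alarm_handler() above is a generic pattern: the callback does its work, then re-arms itself, and cancelling it at teardown stops the chain. A minimal sketch assuming an initialized EAL; poll_mailbox and the *_polling helpers are illustrative names only.

#include <rte_alarm.h>

#define POLL_INTERVAL_US 50000  /* same 50 ms cadence as IDPF_ALARM_INTERVAL */

/* Placeholder for the real work, e.g. draining a control queue. */
static void
poll_mailbox(void *arg)
{
	(void)arg;
}

/* Periodic callback: do the work, then re-arm, exactly as the driver does. */
static void
periodic_handler(void *arg)
{
	poll_mailbox(arg);
	rte_eal_alarm_set(POLL_INTERVAL_US, periodic_handler, arg);
}

/* Mirror of adapter init: arm the first alarm. */
static void
start_polling(void *ctx)
{
	rte_eal_alarm_set(POLL_INTERVAL_US, periodic_handler, ctx);
}

/* Mirror of adapter deinit: cancel any pending alarm so the chain stops. */
static void
stop_polling(void *ctx)
{
	rte_eal_alarm_cancel(periodic_handler, ctx);
}

The trade-off of the 50 ms poll is event latency of up to one interval on, for example, link changes, in exchange for keeping mailbox handling off the fast path.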

* [PATCH v3 6/6] common/idpf: add xstats ops
  2023-01-18  7:14   ` [PATCH v3 0/6] add idpf pmd enhancement features Mingxia Liu
                       ` (4 preceding siblings ...)
  2023-01-18  7:14     ` [PATCH v3 5/6] common/idpf: add alarm to support handle vchnl message Mingxia Liu
@ 2023-01-18  7:14     ` Mingxia Liu
  2023-02-07  9:56     ` [PATCH v4 0/6] add idpf pmd enhancement features Mingxia Liu
  6 siblings, 0 replies; 63+ messages in thread
From: Mingxia Liu @ 2023-01-18  7:14 UTC (permalink / raw)
  To: dev; +Cc: jingjing.wu, beilei.xing, Mingxia Liu

Add support for these device ops:
- idpf_dev_xstats_get
- idpf_dev_xstats_get_names
- idpf_dev_xstats_reset

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/net/idpf/idpf_ethdev.c | 80 ++++++++++++++++++++++++++++++++++
 1 file changed, 80 insertions(+)

diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index b86f63f94e..bcd15db3c5 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -80,6 +80,30 @@ static const uint64_t idpf_ipv6_rss = RTE_ETH_RSS_NONFRAG_IPV6_UDP |
 			  RTE_ETH_RSS_NONFRAG_IPV6_OTHER |
 			  RTE_ETH_RSS_FRAG_IPV6;
 
+struct rte_idpf_xstats_name_off {
+	char name[RTE_ETH_XSTATS_NAME_SIZE];
+	unsigned int offset;
+};
+
+static const struct rte_idpf_xstats_name_off rte_idpf_stats_strings[] = {
+	{"rx_bytes", offsetof(struct virtchnl2_vport_stats, rx_bytes)},
+	{"rx_unicast_packets", offsetof(struct virtchnl2_vport_stats, rx_unicast)},
+	{"rx_multicast_packets", offsetof(struct virtchnl2_vport_stats, rx_multicast)},
+	{"rx_broadcast_packets", offsetof(struct virtchnl2_vport_stats, rx_broadcast)},
+	{"rx_dropped_packets", offsetof(struct virtchnl2_vport_stats, rx_discards)},
+	{"rx_errors", offsetof(struct virtchnl2_vport_stats, rx_errors)},
+	{"rx_unknown_protocol_packets", offsetof(struct virtchnl2_vport_stats,
+						 rx_unknown_protocol)},
+	{"tx_bytes", offsetof(struct virtchnl2_vport_stats, tx_bytes)},
+	{"tx_unicast_packets", offsetof(struct virtchnl2_vport_stats, tx_unicast)},
+	{"tx_multicast_packets", offsetof(struct virtchnl2_vport_stats, tx_multicast)},
+	{"tx_broadcast_packets", offsetof(struct virtchnl2_vport_stats, tx_broadcast)},
+	{"tx_dropped_packets", offsetof(struct virtchnl2_vport_stats, tx_discards)},
+	{"tx_error_packets", offsetof(struct virtchnl2_vport_stats, tx_errors)}};
+
+#define IDPF_NB_XSTATS (sizeof(rte_idpf_stats_strings) / \
+		sizeof(rte_idpf_stats_strings[0]))
+
 static int
 idpf_dev_link_update(struct rte_eth_dev *dev,
 		     __rte_unused int wait_to_complete)
@@ -312,6 +336,59 @@ idpf_dev_stats_reset(struct rte_eth_dev *dev)
 	return 0;
 }
 
+static int idpf_dev_xstats_reset(struct rte_eth_dev *dev)
+{
+	idpf_dev_stats_reset(dev);
+	return 0;
+}
+
+static int idpf_dev_xstats_get(struct rte_eth_dev *dev,
+			       struct rte_eth_xstat *xstats, unsigned int n)
+{
+	struct idpf_vport *vport =
+		(struct idpf_vport *)dev->data->dev_private;
+	struct virtchnl2_vport_stats *pstats = NULL;
+	unsigned int i;
+	int ret;
+
+	if (n < IDPF_NB_XSTATS)
+		return IDPF_NB_XSTATS;
+
+	if (!xstats)
+		return 0;
+
+	ret = idpf_query_stats(vport, &pstats);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "Get statistics failed");
+		return 0;
+	}
+
+	idpf_update_stats(&vport->eth_stats_offset, pstats);
+
+	/* loop over xstats array and values from pstats */
+	for (i = 0; i < IDPF_NB_XSTATS; i++) {
+		xstats[i].id = i;
+		xstats[i].value = *(uint64_t *)(((char *)pstats) +
+			rte_idpf_stats_strings[i].offset);
+	}
+	return IDPF_NB_XSTATS;
+}
+
+static int idpf_dev_xstats_get_names(__rte_unused struct rte_eth_dev *dev,
+				     struct rte_eth_xstat_name *xstats_names,
+				     __rte_unused unsigned int limit)
+{
+	unsigned int i;
+
+	if (xstats_names)
+		for (i = 0; i < IDPF_NB_XSTATS; i++) {
+			snprintf(xstats_names[i].name,
+				 sizeof(xstats_names[i].name),
+				 "%s", rte_idpf_stats_strings[i].name);
+		}
+	return IDPF_NB_XSTATS;
+}
+
 static int idpf_config_rss_hf(struct idpf_vport *vport, uint64_t rss_hf)
 {
 	uint64_t hena = 0, valid_rss_hf = 0;
@@ -1158,6 +1235,9 @@ static const struct eth_dev_ops idpf_eth_dev_ops = {
 	.reta_query			= idpf_rss_reta_query,
 	.rss_hash_update		= idpf_rss_hash_update,
 	.rss_hash_conf_get		= idpf_rss_hash_conf_get,
+	.xstats_get			= idpf_dev_xstats_get,
+	.xstats_get_names		= idpf_dev_xstats_get_names,
+	.xstats_reset			= idpf_dev_xstats_reset,
 };
 
 static uint16_t
-- 
2.25.1


^ permalink raw reply	[flat|nested] 63+ messages in thread
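
For completeness, these ops are consumed through the generic ethdev xstats API. A rough application-side sketch, assuming an initialized EAL and a started port; dump_xstats is an illustrative helper, not part of the patch.

#include <inttypes.h>
#include <stdio.h>
#include <stdlib.h>
#include <rte_ethdev.h>

/* Print every extended statistic the port exposes, by name. */
static void
dump_xstats(uint16_t port_id)
{
	struct rte_eth_xstat_name *names = NULL;
	struct rte_eth_xstat *values = NULL;
	int n, i;

	/* A first call with NULL/0 returns the number of statistics. */
	n = rte_eth_xstats_get_names(port_id, NULL, 0);
	if (n <= 0)
		return;

	names = calloc(n, sizeof(*names));
	values = calloc(n, sizeof(*values));
	if (names == NULL || values == NULL)
		goto out;

	if (rte_eth_xstats_get_names(port_id, names, n) != n ||
	    rte_eth_xstats_get(port_id, values, n) != n)
		goto out;

	for (i = 0; i < n; i++)
		printf("%s: %" PRIu64 "\n",
		       names[values[i].id].name, values[i].value);
out:
	free(names);
	free(values);
}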

* RE: [PATCH v3 1/6] common/idpf: add hw statistics
  2023-01-18  7:14     ` [PATCH v3 1/6] common/idpf: add hw statistics Mingxia Liu
@ 2023-02-01  8:48       ` Wu, Jingjing
  2023-02-01 12:34         ` Liu, Mingxia
  0 siblings, 1 reply; 63+ messages in thread
From: Wu, Jingjing @ 2023-02-01  8:48 UTC (permalink / raw)
  To: Liu, Mingxia, dev; +Cc: Xing, Beilei

> @@ -327,6 +407,11 @@ idpf_dev_start(struct rte_eth_dev *dev)
>  		goto err_vport;
>  	}
> 
> +	if (idpf_dev_stats_reset(dev)) {
> +		PMD_DRV_LOG(ERR, "Failed to reset stats");
> +		goto err_vport;

If stats reset fails, will it block the start process and roll back? I think printing an ERR log may be enough.

> +	}
> +
>  	vport->stopped = 0;
> 
>  	return 0;
> @@ -606,6 +691,8 @@ static const struct eth_dev_ops idpf_eth_dev_ops = {
>  	.tx_queue_release		= idpf_dev_tx_queue_release,
>  	.mtu_set			= idpf_dev_mtu_set,
>  	.dev_supported_ptypes_get	= idpf_dev_supported_ptypes_get,
> +	.stats_get			= idpf_dev_stats_get,
> +	.stats_reset			= idpf_dev_stats_reset,
>  };
> 
>  static uint16_t
> --
> 2.25.1


^ permalink raw reply	[flat|nested] 63+ messages in thread
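
The change being asked for would presumably keep the reset attempt in idpf_dev_start() but only log on failure instead of jumping to the roll-back path. A standalone sketch of that behaviour with stubbed types; fake_dev, fake_stats_reset and fake_dev_start stand in for the real driver symbols, and this is not the code that was eventually merged.

#include <stdio.h>

/* Stub device context and stats-reset call, standing in for
 * struct rte_eth_dev and idpf_dev_stats_reset(). */
struct fake_dev {
	int id;
};

static int
fake_stats_reset(struct fake_dev *dev)
{
	(void)dev;
	return -1;  /* pretend the virtchnl request failed */
}

/* Start path per the review suggestion: a failed stats reset is logged
 * but does not abort or roll back the start sequence. */
static int
fake_dev_start(struct fake_dev *dev)
{
	if (fake_stats_reset(dev) != 0)
		fprintf(stderr, "Failed to reset stats, continuing start\n");

	/* ... rest of the start sequence ... */
	return 0;
}

int main(void)
{
	struct fake_dev dev = { .id = 0 };
	return fake_dev_start(&dev);
}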

* RE: [PATCH v3 1/6] common/idpf: add hw statistics
  2023-02-01  8:48       ` Wu, Jingjing
@ 2023-02-01 12:34         ` Liu, Mingxia
  0 siblings, 0 replies; 63+ messages in thread
From: Liu, Mingxia @ 2023-02-01 12:34 UTC (permalink / raw)
  To: Wu, Jingjing, dev; +Cc: Xing, Beilei



> -----Original Message-----
> From: Wu, Jingjing <jingjing.wu@intel.com>
> Sent: Wednesday, February 1, 2023 4:49 PM
> To: Liu, Mingxia <mingxia.liu@intel.com>; dev@dpdk.org
> Cc: Xing, Beilei <beilei.xing@intel.com>
> Subject: RE: [PATCH v3 1/6] common/idpf: add hw statistics
> 
> > @@ -327,6 +407,11 @@ idpf_dev_start(struct rte_eth_dev *dev)
> >  		goto err_vport;
> >  	}
> >
> > +	if (idpf_dev_stats_reset(dev)) {
> > +		PMD_DRV_LOG(ERR, "Failed to reset stats");
> > +		goto err_vport;
> 
> If stats reset fails, will block the start process and roll back? I think print ERR
> may be enough.
> 
[Liu, Mingxia] Good idea, I'll remove the roll-back error handling and keep only the error log.

> > +	}
> > +
> >  	vport->stopped = 0;
> >
> >  	return 0;
> > @@ -606,6 +691,8 @@ static const struct eth_dev_ops idpf_eth_dev_ops
> = {
> >  	.tx_queue_release		= idpf_dev_tx_queue_release,
> >  	.mtu_set			= idpf_dev_mtu_set,
> >  	.dev_supported_ptypes_get	= idpf_dev_supported_ptypes_get,
> > +	.stats_get			= idpf_dev_stats_get,
> > +	.stats_reset			= idpf_dev_stats_reset,
> >  };
> >
> >  static uint16_t
> > --
> > 2.25.1


^ permalink raw reply	[flat|nested] 63+ messages in thread

* RE: [PATCH v3 2/6] common/idpf: add RSS set/get ops
  2023-01-18  7:14     ` [PATCH v3 2/6] common/idpf: add RSS set/get ops Mingxia Liu
@ 2023-02-02  3:28       ` Wu, Jingjing
  2023-02-07  3:10         ` Liu, Mingxia
  0 siblings, 1 reply; 63+ messages in thread
From: Wu, Jingjing @ 2023-02-02  3:28 UTC (permalink / raw)
  To: Liu, Mingxia, dev; +Cc: Xing, Beilei

> +static int idpf_config_rss_hf(struct idpf_vport *vport, uint64_t rss_hf)
> +{
> +	uint64_t hena = 0, valid_rss_hf = 0;
According to the coding style, only the last variable on a line should be initialized.

> +	int ret = 0;
> +	uint16_t i;
> +
> +	/**
> +	 * RTE_ETH_RSS_IPV4 and RTE_ETH_RSS_IPV6 can be considered as 2
> +	 * generalizations of all other IPv4 and IPv6 RSS types.
> +	 */
> +	if (rss_hf & RTE_ETH_RSS_IPV4)
> +		rss_hf |= idpf_ipv4_rss;
> +
> +	if (rss_hf & RTE_ETH_RSS_IPV6)
> +		rss_hf |= idpf_ipv6_rss;
> +
> +	for (i = 0; i < RTE_DIM(idpf_map_hena_rss); i++) {
> +		uint64_t bit = BIT_ULL(i);
> +
> +		if (idpf_map_hena_rss[i] & rss_hf) {
> +			valid_rss_hf |= idpf_map_hena_rss[i];
> +			hena |= bit;
> +		}
> +	}
> +
> +	vport->rss_hf = hena;
> +
> +	ret = idpf_vc_set_rss_hash(vport);
> +	if (ret != 0) {
> +		PMD_DRV_LOG(WARNING,
> +			    "fail to set RSS offload types, ret: %d", ret);
> +		return ret;
> +	}
> +
> +	if (valid_rss_hf & idpf_ipv4_rss)
> +		valid_rss_hf |= rss_hf & RTE_ETH_RSS_IPV4;
> +
> +	if (valid_rss_hf & idpf_ipv6_rss)
> +		valid_rss_hf |= rss_hf & RTE_ETH_RSS_IPV6;
> +
> +	if (rss_hf & ~valid_rss_hf)
> +		PMD_DRV_LOG(WARNING, "Unsupported rss_hf 0x%" PRIx64,
> +			    rss_hf & ~valid_rss_hf);
It makes me a bit confused: valid_rss_hf would be a subset of rss_hf according to the assignment above. Is it even possible to reach this branch?
And if it is possible, why not set valid_rss_hf before calling the vc command?

> +	vport->last_general_rss_hf = valid_rss_hf;
> +
> +	return ret;
> +}
> +
>  static int
>  idpf_init_rss(struct idpf_vport *vport)
>  {
> @@ -256,6 +357,204 @@ idpf_init_rss(struct idpf_vport *vport)
>  	return ret;
>  }
> 
> +static int
> +idpf_rss_reta_update(struct rte_eth_dev *dev,
> +		     struct rte_eth_rss_reta_entry64 *reta_conf,
> +		     uint16_t reta_size)
> +{
> +	struct idpf_vport *vport = dev->data->dev_private;
> +	struct idpf_adapter *adapter = vport->adapter;
> +	uint16_t idx, shift;
> +	uint32_t *lut;
> +	int ret = 0;
> +	uint16_t i;
> +
> +	if (adapter->caps.rss_caps == 0 || dev->data->nb_rx_queues == 0) {
> +		PMD_DRV_LOG(DEBUG, "RSS is not supported");
> +		return -ENOTSUP;
> +	}
> +
> +	if (reta_size != vport->rss_lut_size) {
> +		PMD_DRV_LOG(ERR, "The size of hash lookup table configured "
> +				 "(%d) doesn't match the number of hardware can "
> +				 "support (%d)",
> +			    reta_size, vport->rss_lut_size);
> +		return -EINVAL;
> +	}
> +
> +	/* It MUST use the current LUT size to get the RSS lookup table,
> +	 * otherwise if will fail with -100 error code.
> +	 */
> +	lut = rte_zmalloc(NULL, reta_size * sizeof(uint32_t), 0);
> +	if (!lut) {
> +		PMD_DRV_LOG(ERR, "No memory can be allocated");
> +		return -ENOMEM;
> +	}
> +	/* store the old lut table temporarily */
> +	rte_memcpy(lut, vport->rss_lut, reta_size * sizeof(uint32_t));
You store vport->rss_lut into lut here, but then you overwrite lut below?

> +
> +	for (i = 0; i < reta_size; i++) {
> +		idx = i / RTE_ETH_RETA_GROUP_SIZE;
> +		shift = i % RTE_ETH_RETA_GROUP_SIZE;
> +		if (reta_conf[idx].mask & (1ULL << shift))
> +			lut[i] = reta_conf[idx].reta[shift];
> +	}
> +
> +	rte_memcpy(vport->rss_lut, lut, reta_size * sizeof(uint32_t));
> +	/* send virtchnl ops to configure RSS */
> +	ret = idpf_vc_set_rss_lut(vport);
> +	if (ret) {
> +		PMD_INIT_LOG(ERR, "Failed to configure RSS lut");
> +		goto out;
> +	}
> +out:
> +	rte_free(lut);
> +
> +	return ret;
> +}


^ permalink raw reply	[flat|nested] 63+ messages in thread

* RE: [PATCH v3 3/6] common/idpf: support single q scatter RX datapath
  2023-01-18  7:14     ` [PATCH v3 3/6] common/idpf: support single q scatter RX datapath Mingxia Liu
@ 2023-02-02  3:45       ` Wu, Jingjing
  2023-02-02  7:19         ` Liu, Mingxia
  0 siblings, 1 reply; 63+ messages in thread
From: Wu, Jingjing @ 2023-02-02  3:45 UTC (permalink / raw)
  To: Liu, Mingxia, dev; +Cc: Xing, Beilei, Wu, Wenjun1

> 
> +uint16_t
> +idpf_singleq_recv_scatter_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
> +			       uint16_t nb_pkts)
> +{
> +	struct idpf_rx_queue *rxq = rx_queue;
> +	volatile union virtchnl2_rx_desc *rx_ring = rxq->rx_ring;
> +	volatile union virtchnl2_rx_desc *rxdp;
> +	union virtchnl2_rx_desc rxd;
> +	struct idpf_adapter *ad;
> +	struct rte_mbuf *first_seg = rxq->pkt_first_seg;
> +	struct rte_mbuf *last_seg = rxq->pkt_last_seg;
> +	struct rte_mbuf *rxm;
> +	struct rte_mbuf *nmb;
> +	struct rte_eth_dev *dev;
> +	const uint32_t *ptype_tbl = rxq->adapter->ptype_tbl;
> +	uint16_t nb_hold = 0, nb_rx = 0;
According to the coding style, only the last variable on a line should be initialized.

> +	uint16_t rx_id = rxq->rx_tail;
> +	uint16_t rx_packet_len;
> +	uint16_t rx_status0;
> +	uint64_t pkt_flags;
> +	uint64_t dma_addr;
> +	uint64_t ts_ns;
> +


^ permalink raw reply	[flat|nested] 63+ messages in thread

* RE: [PATCH v3 5/6] common/idpf: add alarm to support handle vchnl message
  2023-01-18  7:14     ` [PATCH v3 5/6] common/idpf: add alarm to support handle vchnl message Mingxia Liu
@ 2023-02-02  4:23       ` Wu, Jingjing
  2023-02-02  7:39         ` Liu, Mingxia
  0 siblings, 1 reply; 63+ messages in thread
From: Wu, Jingjing @ 2023-02-02  4:23 UTC (permalink / raw)
  To: Liu, Mingxia, dev; +Cc: Xing, Beilei

> @@ -83,12 +84,49 @@ static int
>  idpf_dev_link_update(struct rte_eth_dev *dev,
>  		     __rte_unused int wait_to_complete)
>  {
> +	struct idpf_vport *vport = dev->data->dev_private;
>  	struct rte_eth_link new_link;
> 
>  	memset(&new_link, 0, sizeof(new_link));
> 
> -	new_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
> +	switch (vport->link_speed) {
> +	case 10:
> +		new_link.link_speed = RTE_ETH_SPEED_NUM_10M;
> +		break;
> +	case 100:
> +		new_link.link_speed = RTE_ETH_SPEED_NUM_100M;
> +		break;
> +	case 1000:
> +		new_link.link_speed = RTE_ETH_SPEED_NUM_1G;
> +		break;
> +	case 10000:
> +		new_link.link_speed = RTE_ETH_SPEED_NUM_10G;
> +		break;
> +	case 20000:
> +		new_link.link_speed = RTE_ETH_SPEED_NUM_20G;
> +		break;
> +	case 25000:
> +		new_link.link_speed = RTE_ETH_SPEED_NUM_25G;
> +		break;
> +	case 40000:
> +		new_link.link_speed = RTE_ETH_SPEED_NUM_40G;
> +		break;
> +	case 50000:
> +		new_link.link_speed = RTE_ETH_SPEED_NUM_50G;
> +		break;
> +	case 100000:
> +		new_link.link_speed = RTE_ETH_SPEED_NUM_100G;
> +		break;
> +	case 200000:
> +		new_link.link_speed = RTE_ETH_SPEED_NUM_200G;
> +		break;
> +	default:
> +		new_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
> +	}
> +
>  	new_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
> +	new_link.link_status = vport->link_up ? RTE_ETH_LINK_UP :
> +		RTE_ETH_LINK_DOWN;
>  	new_link.link_autoneg = !(dev->data->dev_conf.link_speeds &
>  				  RTE_ETH_LINK_SPEED_FIXED);
Better to use RTE_ETH_LINK_[AUTONEG/FIXED] instead.

> 
> @@ -927,6 +965,127 @@ idpf_parse_devargs(struct rte_pci_device *pci_dev, struct
> idpf_adapter_ext *adap
>  	return ret;
>  }
> 
> +static struct idpf_vport *
> +idpf_find_vport(struct idpf_adapter_ext *adapter, uint32_t vport_id)
> +{
> +	struct idpf_vport *vport = NULL;
> +	int i;
> +
> +	for (i = 0; i < adapter->cur_vport_nb; i++) {
> +		vport = adapter->vports[i];
> +		if (vport->vport_id != vport_id)
> +			continue;
> +		else
> +			return vport;
> +	}
> +
> +	return vport;
> +}
> +
> +static void
> +idpf_handle_event_msg(struct idpf_vport *vport, uint8_t *msg, uint16_t msglen)
> +{
> +	struct virtchnl2_event *vc_event = (struct virtchnl2_event *)msg;
> +	struct rte_eth_dev *dev = (struct rte_eth_dev *)vport->dev;
> +
> +	if (msglen < sizeof(struct virtchnl2_event)) {
> +		PMD_DRV_LOG(ERR, "Error event");
> +		return;
> +	}
> +
> +	switch (vc_event->event) {
> +	case VIRTCHNL2_EVENT_LINK_CHANGE:
> +		PMD_DRV_LOG(DEBUG, "VIRTCHNL2_EVENT_LINK_CHANGE");
> +		vport->link_up = vc_event->link_status;
Any conversion between bool and uint8?



^ permalink raw reply	[flat|nested] 63+ messages in thread

* RE: [PATCH v3 3/6] common/idpf: support single q scatter RX datapath
  2023-02-02  3:45       ` Wu, Jingjing
@ 2023-02-02  7:19         ` Liu, Mingxia
  0 siblings, 0 replies; 63+ messages in thread
From: Liu, Mingxia @ 2023-02-02  7:19 UTC (permalink / raw)
  To: Wu, Jingjing, dev; +Cc: Xing, Beilei, Wu, Wenjun1



> -----Original Message-----
> From: Wu, Jingjing <jingjing.wu@intel.com>
> Sent: Thursday, February 2, 2023 11:46 AM
> To: Liu, Mingxia <mingxia.liu@intel.com>; dev@dpdk.org
> Cc: Xing, Beilei <beilei.xing@intel.com>; Wu, Wenjun1
> <wenjun1.wu@intel.com>
> Subject: RE: [PATCH v3 3/6] common/idpf: support single q scatter RX
> datapath
> 
> >
> > +uint16_t
> > +idpf_singleq_recv_scatter_pkts(void *rx_queue, struct rte_mbuf
> **rx_pkts,
> > +			       uint16_t nb_pkts)
> > +{
> > +	struct idpf_rx_queue *rxq = rx_queue;
> > +	volatile union virtchnl2_rx_desc *rx_ring = rxq->rx_ring;
> > +	volatile union virtchnl2_rx_desc *rxdp;
> > +	union virtchnl2_rx_desc rxd;
> > +	struct idpf_adapter *ad;
> > +	struct rte_mbuf *first_seg = rxq->pkt_first_seg;
> > +	struct rte_mbuf *last_seg = rxq->pkt_last_seg;
> > +	struct rte_mbuf *rxm;
> > +	struct rte_mbuf *nmb;
> > +	struct rte_eth_dev *dev;
> > +	const uint32_t *ptype_tbl = rxq->adapter->ptype_tbl;
> > +	uint16_t nb_hold = 0, nb_rx = 0;
> According to the coding style, only the last variable on a line should be
> initialized.
> 
[Liu, Mingxia] Ok, thanks, I'll check whether the same issue exists elsewhere.
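For illustration, a minimal before/after sketch of that coding-style fix, using the two declarations flagged above:

	/* flagged: two initialized variables share one declaration line */
	uint16_t nb_hold = 0, nb_rx = 0;

	/* one common fix: give each initialized variable its own line */
	uint16_t nb_hold = 0;
	uint16_t nb_rx = 0;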

> > +	uint16_t rx_id = rxq->rx_tail;
> > +	uint16_t rx_packet_len;
> > +	uint16_t rx_status0;
> > +	uint64_t pkt_flags;
> > +	uint64_t dma_addr;
> > +	uint64_t ts_ns;
> > +


^ permalink raw reply	[flat|nested] 63+ messages in thread

* RE: [PATCH v3 5/6] common/idpf: add alarm to support handle vchnl message
  2023-02-02  4:23       ` Wu, Jingjing
@ 2023-02-02  7:39         ` Liu, Mingxia
  2023-02-02  8:46           ` Wu, Jingjing
  0 siblings, 1 reply; 63+ messages in thread
From: Liu, Mingxia @ 2023-02-02  7:39 UTC (permalink / raw)
  To: Wu, Jingjing, dev; +Cc: Xing, Beilei



> -----Original Message-----
> From: Wu, Jingjing <jingjing.wu@intel.com>
> Sent: Thursday, February 2, 2023 12:24 PM
> To: Liu, Mingxia <mingxia.liu@intel.com>; dev@dpdk.org
> Cc: Xing, Beilei <beilei.xing@intel.com>
> Subject: RE: [PATCH v3 5/6] common/idpf: add alarm to support handle vchnl
> message
> 
> > @@ -83,12 +84,49 @@ static int
> >  idpf_dev_link_update(struct rte_eth_dev *dev,
> >  		     __rte_unused int wait_to_complete)  {
> > +	struct idpf_vport *vport = dev->data->dev_private;
> >  	struct rte_eth_link new_link;
> >
> >  	memset(&new_link, 0, sizeof(new_link));
> >
> > -	new_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
> > +	switch (vport->link_speed) {
> > +	case 10:
> > +		new_link.link_speed = RTE_ETH_SPEED_NUM_10M;
> > +		break;
> > +	case 100:
> > +		new_link.link_speed = RTE_ETH_SPEED_NUM_100M;
> > +		break;
> > +	case 1000:
> > +		new_link.link_speed = RTE_ETH_SPEED_NUM_1G;
> > +		break;
> > +	case 10000:
> > +		new_link.link_speed = RTE_ETH_SPEED_NUM_10G;
> > +		break;
> > +	case 20000:
> > +		new_link.link_speed = RTE_ETH_SPEED_NUM_20G;
> > +		break;
> > +	case 25000:
> > +		new_link.link_speed = RTE_ETH_SPEED_NUM_25G;
> > +		break;
> > +	case 40000:
> > +		new_link.link_speed = RTE_ETH_SPEED_NUM_40G;
> > +		break;
> > +	case 50000:
> > +		new_link.link_speed = RTE_ETH_SPEED_NUM_50G;
> > +		break;
> > +	case 100000:
> > +		new_link.link_speed = RTE_ETH_SPEED_NUM_100G;
> > +		break;
> > +	case 200000:
> > +		new_link.link_speed = RTE_ETH_SPEED_NUM_200G;
> > +		break;
> > +	default:
> > +		new_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
> > +	}
> > +
> >  	new_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
> > +	new_link.link_status = vport->link_up ? RTE_ETH_LINK_UP :
> > +		RTE_ETH_LINK_DOWN;
> >  	new_link.link_autoneg = !(dev->data->dev_conf.link_speeds &
> >  				  RTE_ETH_LINK_SPEED_FIXED);
> Better to use RTE_ETH_LINK_[AUTONEG/FIXED] instead.
> 
[Liu, Mingxia] According to the comment description of struct rte_eth_conf, RTE_ETH_LINK_SPEED_FIXED is better.
struct rte_eth_conf {
uint32_t link_speeds; /**< bitmap of RTE_ETH_LINK_SPEED_XXX of speeds to be
				used. RTE_ETH_LINK_SPEED_FIXED disables link
				autonegotiation, and a unique speed shall be
				set. Otherwise, the bitmap defines the set of
				speeds to be advertised. If the special value
				RTE_ETH_LINK_SPEED_AUTONEG (0) is used, all speeds
				supported are advertised. */


> >
> > @@ -927,6 +965,127 @@ idpf_parse_devargs(struct rte_pci_device
> > *pci_dev, struct idpf_adapter_ext *adap
> >  	return ret;
> >  }
> >
> > +static struct idpf_vport *
> > +idpf_find_vport(struct idpf_adapter_ext *adapter, uint32_t vport_id)
> > +{
> > +	struct idpf_vport *vport = NULL;
> > +	int i;
> > +
> > +	for (i = 0; i < adapter->cur_vport_nb; i++) {
> > +		vport = adapter->vports[i];
> > +		if (vport->vport_id != vport_id)
> > +			continue;
> > +		else
> > +			return vport;
> > +	}
> > +
> > +	return vport;
> > +}
> > +
> > +static void
> > +idpf_handle_event_msg(struct idpf_vport *vport, uint8_t *msg,
> > +uint16_t msglen) {
> > +	struct virtchnl2_event *vc_event = (struct virtchnl2_event *)msg;
> > +	struct rte_eth_dev *dev = (struct rte_eth_dev *)vport->dev;
> > +
> > +	if (msglen < sizeof(struct virtchnl2_event)) {
> > +		PMD_DRV_LOG(ERR, "Error event");
> > +		return;
> > +	}
> > +
> > +	switch (vc_event->event) {
> > +	case VIRTCHNL2_EVENT_LINK_CHANGE:
> > +		PMD_DRV_LOG(DEBUG,
> "VIRTCHNL2_EVENT_LINK_CHANGE");
> > +		vport->link_up = vc_event->link_status;
> Any conversion between bool and uint8?
> 
[Liu, Mingxia] Ok, thanks, I'll use !! to convert the uint8 to bool.
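A sketch of the agreed change in the link-change event handling (the case/break structure around it is assumed from the quoted hunk):

	case VIRTCHNL2_EVENT_LINK_CHANGE:
		PMD_DRV_LOG(DEBUG, "VIRTCHNL2_EVENT_LINK_CHANGE");
		/* !! folds the u8 link_status into a strict 0/1 value */
		vport->link_up = !!(vc_event->link_status);
		break;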


^ permalink raw reply	[flat|nested] 63+ messages in thread

* RE: [PATCH v3 5/6] common/idpf: add alarm to support handle vchnl message
  2023-02-02  7:39         ` Liu, Mingxia
@ 2023-02-02  8:46           ` Wu, Jingjing
  0 siblings, 0 replies; 63+ messages in thread
From: Wu, Jingjing @ 2023-02-02  8:46 UTC (permalink / raw)
  To: Liu, Mingxia, dev; +Cc: Xing, Beilei

> > > +
> > >  	new_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
> > > +	new_link.link_status = vport->link_up ? RTE_ETH_LINK_UP :
> > > +		RTE_ETH_LINK_DOWN;
> > >  	new_link.link_autoneg = !(dev->data->dev_conf.link_speeds &
> > >  				  RTE_ETH_LINK_SPEED_FIXED);
> > Better to use RTE_ETH_LINK_[AUTONEG/FIXED] instead.
> >
> [Liu, Mingxia] According to the comment description of struct rte_eth_conf,
> RTE_ETH_LINK_SPEED_FIXED is better.
> struct rte_eth_conf {
> uint32_t link_speeds; /**< bitmap of RTE_ETH_LINK_SPEED_XXX of speeds to be
> 				used. RTE_ETH_LINK_SPEED_FIXED disables link
> 				autonegotiation, and a unique speed shall be
> 				set. Otherwise, the bitmap defines the set of
> 				speeds to be advertised. If the special value
> 				RTE_ETH_LINK_SPEED_AUTONEG (0) is used, all speeds
> 				supported are advertised. */
> 
I am talking about link_autoneg, not link_speeds.
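A sketch of one way to apply that suggestion (the ternary form is my reading of the comment, not taken from the patch); the value is still derived from dev_conf.link_speeds, but the field is assigned with the dedicated constants:

	new_link.link_autoneg = (dev->data->dev_conf.link_speeds &
				 RTE_ETH_LINK_SPEED_FIXED) ?
				RTE_ETH_LINK_FIXED : RTE_ETH_LINK_AUTONEG;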


^ permalink raw reply	[flat|nested] 63+ messages in thread

* RE: [PATCH v3 2/6] common/idpf: add RSS set/get ops
  2023-02-02  3:28       ` Wu, Jingjing
@ 2023-02-07  3:10         ` Liu, Mingxia
  0 siblings, 0 replies; 63+ messages in thread
From: Liu, Mingxia @ 2023-02-07  3:10 UTC (permalink / raw)
  To: Wu, Jingjing, dev; +Cc: Xing, Beilei

> > +static int idpf_config_rss_hf(struct idpf_vport *vport, uint64_t
> > +rss_hf) {
> > +	uint64_t hena = 0, valid_rss_hf = 0;
> According to the coding style, only the last variable on a line should be
> initialized.
> 
[Liu, Mingxia] Ok, thanks, I'll check whether the same issue exists elsewhere.


> > +	vport->rss_hf = hena;
> > +
> > +	ret = idpf_vc_set_rss_hash(vport);
> > +	if (ret != 0) {
> > +		PMD_DRV_LOG(WARNING,
> > +			    "fail to set RSS offload types, ret: %d", ret);
> > +		return ret;
> > +	}
> > +
> > +	if (valid_rss_hf & idpf_ipv4_rss)
> > +		valid_rss_hf |= rss_hf & RTE_ETH_RSS_IPV4;
> > +
> > +	if (valid_rss_hf & idpf_ipv6_rss)
> > +		valid_rss_hf |= rss_hf & RTE_ETH_RSS_IPV6;
> > +
> > +	if (rss_hf & ~valid_rss_hf)
> > +		PMD_DRV_LOG(WARNING, "Unsupported rss_hf 0x%"
> PRIx64,
> > +			    rss_hf & ~valid_rss_hf);
> It makes me a bit confused, valid_rss_hf is would be the sub of rss_hf
> according above assignment. Would it be possible to go here?
> And if it is possible, why not set valid_rss_hf before calling vc command?
>
[Liu, Mingxia] According to cmd_config_rss_parsed(), any rss_hf bits that do not belong to flow_type_rss_offloads are masked off by &flow_type_rss_offloads.
What's more, rte_eth_dev_rss_hash_update() checks again whether the requested rss_hf is a subset of flow_type_rss_offloads and returns an error if it is not.
So by the time idpf_config_rss_hf() is entered, (rss_hf & ~valid_rss_hf) != 0 cannot happen.
Better to delete this piece of code.

For the second question, why not set valid_rss_hf before calling the vc command?
Because if we set rss_hf to RTE_ETH_RSS_IPV4, the idpf-side hash types mapped to RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_NONFRAG_IPV4_SCTP | RTE_ETH_RSS_NONFRAG_IPV4_OTHER | RTE_ETH_RSS_FRAG_IPV4 are set,
but there is no idpf-side hash value that maps back to RTE_ETH_RSS_IPV4 itself.
When we read rss_hf back over the vc channel, it cannot tell us whether RTE_ETH_RSS_IPV4 was ever configured.
So the DPDK software has to record whether RTE_ETH_RSS_IPV4 was ever set, via valid_rss_hf |= rss_hf & RTE_ETH_RSS_IPV4, and return that to the user when needed (see the sketch below).

RTE_ETH_RSS_IPV6 is similar.
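In other words, the generic bits have to be remembered in software and merged back when reporting the configuration. A condensed sketch of that approach, following the idpf_map_general_rss_hf() helper that appears in the v4 series below:

	uint64_t valid_rss_hf = 0;
	uint16_t i;

	/* rebuild the rte_eth view of the hash types from the idpf hena bits */
	for (i = 0; i < RTE_DIM(idpf_map_hena_rss); i++)
		if (config_rss_hf & BIT_ULL(i))
			valid_rss_hf |= idpf_map_hena_rss[i];

	/* then merge back the generic IPV4/IPV6 bits recorded in software */
	if (valid_rss_hf & idpf_ipv4_rss)
		valid_rss_hf |= last_general_rss_hf & RTE_ETH_RSS_IPV4;
	if (valid_rss_hf & idpf_ipv6_rss)
		valid_rss_hf |= last_general_rss_hf & RTE_ETH_RSS_IPV6;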


> > +	/* It MUST use the current LUT size to get the RSS lookup table,
> > +	 * otherwise if will fail with -100 error code.
> > +	 */
> > +	lut = rte_zmalloc(NULL, reta_size * sizeof(uint32_t), 0);
> > +	if (!lut) {
> > +		PMD_DRV_LOG(ERR, "No memory can be allocated");
> > +		return -ENOMEM;
> > +	}
> > +	/* store the old lut table temporarily */
> > +	rte_memcpy(lut, vport->rss_lut, reta_size * sizeof(uint32_t));
> Stored the vport->rss_lut to lut? But you overwrite the lut below?
> 
[Liu, Mingxia] Because lut holds the whole redirection table while we may want to update only a few of its entries,
we first store the original LUT and then update only the required table entries.
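For comparison, the v4 revision later in this thread drops the temporary buffer and applies the masked update directly to the stored table before pushing it to the device:

	for (i = 0; i < reta_size; i++) {
		idx = i / RTE_ETH_RETA_GROUP_SIZE;
		shift = i % RTE_ETH_RETA_GROUP_SIZE;
		if (reta_conf[idx].mask & (1ULL << shift))
			vport->rss_lut[i] = reta_conf[idx].reta[shift];
	}

	/* send virtchnl ops to configure RSS */
	ret = idpf_vc_rss_lut_set(vport);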

> -----Original Message-----
> From: Wu, Jingjing <jingjing.wu@intel.com>
> Sent: Thursday, February 2, 2023 11:28 AM
> To: Liu, Mingxia <mingxia.liu@intel.com>; dev@dpdk.org
> Cc: Xing, Beilei <beilei.xing@intel.com>
> Subject: RE: [PATCH v3 2/6] common/idpf: add RSS set/get ops
> 
> > +static int idpf_config_rss_hf(struct idpf_vport *vport, uint64_t
> > +rss_hf) {
> > +	uint64_t hena = 0, valid_rss_hf = 0;
> According to the coding style, only the last variable on a line should be
> initialized.
> 
> > +	int ret = 0;
> > +	uint16_t i;
> > +
> > +	/**
> > +	 * RTE_ETH_RSS_IPV4 and RTE_ETH_RSS_IPV6 can be considered as 2
> > +	 * generalizations of all other IPv4 and IPv6 RSS types.
> > +	 */
> > +	if (rss_hf & RTE_ETH_RSS_IPV4)
> > +		rss_hf |= idpf_ipv4_rss;
> > +
> > +	if (rss_hf & RTE_ETH_RSS_IPV6)
> > +		rss_hf |= idpf_ipv6_rss;
> > +
> > +	for (i = 0; i < RTE_DIM(idpf_map_hena_rss); i++) {
> > +		uint64_t bit = BIT_ULL(i);
> > +
> > +		if (idpf_map_hena_rss[i] & rss_hf) {
> > +			valid_rss_hf |= idpf_map_hena_rss[i];
> > +			hena |= bit;
> > +		}
> > +	}
> > +
> > +	vport->rss_hf = hena;
> > +
> > +	ret = idpf_vc_set_rss_hash(vport);
> > +	if (ret != 0) {
> > +		PMD_DRV_LOG(WARNING,
> > +			    "fail to set RSS offload types, ret: %d", ret);
> > +		return ret;
> > +	}
> > +
> > +	if (valid_rss_hf & idpf_ipv4_rss)
> > +		valid_rss_hf |= rss_hf & RTE_ETH_RSS_IPV4;
> > +
> > +	if (valid_rss_hf & idpf_ipv6_rss)
> > +		valid_rss_hf |= rss_hf & RTE_ETH_RSS_IPV6;
> > +
> > +	if (rss_hf & ~valid_rss_hf)
> > +		PMD_DRV_LOG(WARNING, "Unsupported rss_hf 0x%"
> PRIx64,
> > +			    rss_hf & ~valid_rss_hf);
> It makes me a bit confused, valid_rss_hf is would be the sub of rss_hf
> according above assignment. Would it be possible to go here?
> And if it is possible, why not set valid_rss_hf before calling vc command?
> 
> > +	vport->last_general_rss_hf = valid_rss_hf;
> > +
> > +	return ret;
> > +}
> > +
> >  static int
> >  idpf_init_rss(struct idpf_vport *vport)  { @@ -256,6 +357,204 @@
> > idpf_init_rss(struct idpf_vport *vport)
> >  	return ret;
> >  }
> >
> > +static int
> > +idpf_rss_reta_update(struct rte_eth_dev *dev,
> > +		     struct rte_eth_rss_reta_entry64 *reta_conf,
> > +		     uint16_t reta_size)
> > +{
> > +	struct idpf_vport *vport = dev->data->dev_private;
> > +	struct idpf_adapter *adapter = vport->adapter;
> > +	uint16_t idx, shift;
> > +	uint32_t *lut;
> > +	int ret = 0;
> > +	uint16_t i;
> > +
> > +	if (adapter->caps.rss_caps == 0 || dev->data->nb_rx_queues == 0) {
> > +		PMD_DRV_LOG(DEBUG, "RSS is not supported");
> > +		return -ENOTSUP;
> > +	}
> > +
> > +	if (reta_size != vport->rss_lut_size) {
> > +		PMD_DRV_LOG(ERR, "The size of hash lookup table
> configured "
> > +				 "(%d) doesn't match the number of
> hardware can "
> > +				 "support (%d)",
> > +			    reta_size, vport->rss_lut_size);
> > +		return -EINVAL;
> > +	}
> > +
> > +	/* It MUST use the current LUT size to get the RSS lookup table,
> > +	 * otherwise if will fail with -100 error code.
> > +	 */
> > +	lut = rte_zmalloc(NULL, reta_size * sizeof(uint32_t), 0);
> > +	if (!lut) {
> > +		PMD_DRV_LOG(ERR, "No memory can be allocated");
> > +		return -ENOMEM;
> > +	}
> > +	/* store the old lut table temporarily */
> > +	rte_memcpy(lut, vport->rss_lut, reta_size * sizeof(uint32_t));
> Stored the vport->rss_lut to lut? But you overwrite the lut below?
> 
[Liu, Mingxia] Because lut holds the whole redirection table while we may want to update only a few of its entries,
we first store the original LUT and then update only the required table entries.

> > +
> > +	for (i = 0; i < reta_size; i++) {
> > +		idx = i / RTE_ETH_RETA_GROUP_SIZE;
> > +		shift = i % RTE_ETH_RETA_GROUP_SIZE;
> > +		if (reta_conf[idx].mask & (1ULL << shift))
> > +			lut[i] = reta_conf[idx].reta[shift];
> > +	}
> > +
> > +	rte_memcpy(vport->rss_lut, lut, reta_size * sizeof(uint32_t));
> > +	/* send virtchnl ops to configure RSS */
> > +	ret = idpf_vc_set_rss_lut(vport);
> > +	if (ret) {
> > +		PMD_INIT_LOG(ERR, "Failed to configure RSS lut");
> > +		goto out;
> > +	}
> > +out:
> > +	rte_free(lut);
> > +
> > +	return ret;
> > +}


^ permalink raw reply	[flat|nested] 63+ messages in thread

* [PATCH v4 0/6] add idpf pmd enhancement features
  2023-01-18  7:14   ` [PATCH v3 0/6] add idpf pmd enhancement features Mingxia Liu
                       ` (5 preceding siblings ...)
  2023-01-18  7:14     ` [PATCH v3 6/6] common/idpf: add xstats ops Mingxia Liu
@ 2023-02-07  9:56     ` Mingxia Liu
  2023-02-07  9:56       ` [PATCH v4 1/6] common/idpf: add hw statistics Mingxia Liu
                         ` (6 more replies)
  6 siblings, 7 replies; 63+ messages in thread
From: Mingxia Liu @ 2023-02-07  9:56 UTC (permalink / raw)
  To: dev, qi.z.zhang, jingjing.wu, beilei.xing; +Cc: Mingxia Liu

This patchset adds several enhancement features to the idpf PMD.
Including the following:
- add hw statistics, support stats/xstats ops
- add rss configure/show ops
- add event handle: link status
- add scattered data path for single queue

This patchset is based on the refactor idpf PMD code:
http://patches.dpdk.org/project/dpdk/patch/20230207084549.2225214-2-wenjun1.wu@intel.com/

v2 changes:
 - Fix rss lut config issue.
v3 changes:
 - rebase to the new baseline.
v4 changes:
 - rebase to the new baseline.
 - optimize some code
 - give a "not supported" hint when the user tries to configure the
   RSS hash type
 - if stats reset fails at initialization time, don't roll back, just
   print ERROR info.

Mingxia Liu (6):
  common/idpf: add hw statistics
  common/idpf: add RSS set/get ops
  common/idpf: support single q scatter RX datapath
  common/idpf: add rss_offload hash in singleq rx
  common/idpf: add alarm to support handle vchnl message
  common/idpf: add xstats ops

 drivers/common/idpf/idpf_common_device.c   |  17 +
 drivers/common/idpf/idpf_common_device.h   |  10 +
 drivers/common/idpf/idpf_common_rxtx.c     | 151 +++++
 drivers/common/idpf/idpf_common_rxtx.h     |   3 +
 drivers/common/idpf/idpf_common_virtchnl.c | 171 +++++-
 drivers/common/idpf/idpf_common_virtchnl.h |  15 +
 drivers/common/idpf/version.map            |   8 +
 drivers/net/idpf/idpf_ethdev.c             | 606 ++++++++++++++++++++-
 drivers/net/idpf/idpf_ethdev.h             |   5 +-
 drivers/net/idpf/idpf_rxtx.c               |  28 +
 drivers/net/idpf/idpf_rxtx.h               |   2 +
 11 files changed, 996 insertions(+), 20 deletions(-)

-- 
2.25.1


^ permalink raw reply	[flat|nested] 63+ messages in thread

* [PATCH v4 1/6] common/idpf: add hw statistics
  2023-02-07  9:56     ` [PATCH v4 0/6] add idpf pmd enhancement features Mingxia Liu
@ 2023-02-07  9:56       ` Mingxia Liu
  2023-02-07  9:56       ` [PATCH v4 2/6] common/idpf: add RSS set/get ops Mingxia Liu
                         ` (5 subsequent siblings)
  6 siblings, 0 replies; 63+ messages in thread
From: Mingxia Liu @ 2023-02-07  9:56 UTC (permalink / raw)
  To: dev, qi.z.zhang, jingjing.wu, beilei.xing; +Cc: Mingxia Liu

This patch adds hardware packet/byte statistics.

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/common/idpf/idpf_common_device.c   | 17 +++++
 drivers/common/idpf/idpf_common_device.h   |  4 +
 drivers/common/idpf/idpf_common_virtchnl.c | 27 +++++++
 drivers/common/idpf/idpf_common_virtchnl.h |  3 +
 drivers/common/idpf/version.map            |  2 +
 drivers/net/idpf/idpf_ethdev.c             | 86 ++++++++++++++++++++++
 6 files changed, 139 insertions(+)

diff --git a/drivers/common/idpf/idpf_common_device.c b/drivers/common/idpf/idpf_common_device.c
index 48b3e3c0dd..5475a3e52c 100644
--- a/drivers/common/idpf/idpf_common_device.c
+++ b/drivers/common/idpf/idpf_common_device.c
@@ -652,4 +652,21 @@ idpf_vport_info_init(struct idpf_vport *vport,
 	return 0;
 }
 
+void
+idpf_vport_stats_update(struct virtchnl2_vport_stats *oes, struct virtchnl2_vport_stats *nes)
+{
+	nes->rx_bytes = nes->rx_bytes - oes->rx_bytes;
+	nes->rx_unicast = nes->rx_unicast - oes->rx_unicast;
+	nes->rx_multicast = nes->rx_multicast - oes->rx_multicast;
+	nes->rx_broadcast = nes->rx_broadcast - oes->rx_broadcast;
+	nes->rx_errors = nes->rx_errors - oes->rx_errors;
+	nes->rx_discards = nes->rx_discards - oes->rx_discards;
+	nes->tx_bytes = nes->tx_bytes - oes->tx_bytes;
+	nes->tx_unicast = nes->tx_unicast - oes->tx_unicast;
+	nes->tx_multicast = nes->tx_multicast - oes->tx_multicast;
+	nes->tx_broadcast = nes->tx_broadcast - oes->tx_broadcast;
+	nes->tx_errors = nes->tx_errors - oes->tx_errors;
+	nes->tx_discards = nes->tx_discards - oes->tx_discards;
+}
+
 RTE_LOG_REGISTER_SUFFIX(idpf_common_logtype, common, NOTICE);
diff --git a/drivers/common/idpf/idpf_common_device.h b/drivers/common/idpf/idpf_common_device.h
index 545117df79..1d8e7d405a 100644
--- a/drivers/common/idpf/idpf_common_device.h
+++ b/drivers/common/idpf/idpf_common_device.h
@@ -115,6 +115,8 @@ struct idpf_vport {
 	bool tx_vec_allowed;
 	bool rx_use_avx512;
 	bool tx_use_avx512;
+
+	struct virtchnl2_vport_stats eth_stats_offset;
 };
 
 /* Message type read in virtual channel from PF */
@@ -191,5 +193,7 @@ int idpf_vport_irq_unmap_config(struct idpf_vport *vport, uint16_t nb_rx_queues)
 __rte_internal
 int idpf_vport_info_init(struct idpf_vport *vport,
 			 struct virtchnl2_create_vport *vport_info);
+__rte_internal
+void idpf_vport_stats_update(struct virtchnl2_vport_stats *oes, struct virtchnl2_vport_stats *nes);
 
 #endif /* _IDPF_COMMON_DEVICE_H_ */
diff --git a/drivers/common/idpf/idpf_common_virtchnl.c b/drivers/common/idpf/idpf_common_virtchnl.c
index 31fadefbd3..40cff34c09 100644
--- a/drivers/common/idpf/idpf_common_virtchnl.c
+++ b/drivers/common/idpf/idpf_common_virtchnl.c
@@ -217,6 +217,7 @@ idpf_vc_cmd_execute(struct idpf_adapter *adapter, struct idpf_cmd_info *args)
 	case VIRTCHNL2_OP_UNMAP_QUEUE_VECTOR:
 	case VIRTCHNL2_OP_ALLOC_VECTORS:
 	case VIRTCHNL2_OP_DEALLOC_VECTORS:
+	case VIRTCHNL2_OP_GET_STATS:
 		/* for init virtchnl ops, need to poll the response */
 		err = idpf_vc_one_msg_read(adapter, args->ops, args->out_size, args->out_buffer);
 		clear_cmd(adapter);
@@ -806,6 +807,32 @@ idpf_vc_ptype_info_query(struct idpf_adapter *adapter)
 	return err;
 }
 
+int
+idpf_vc_stats_query(struct idpf_vport *vport,
+		struct virtchnl2_vport_stats **pstats)
+{
+	struct idpf_adapter *adapter = vport->adapter;
+	struct virtchnl2_vport_stats vport_stats;
+	struct idpf_cmd_info args;
+	int err;
+
+	vport_stats.vport_id = vport->vport_id;
+	args.ops = VIRTCHNL2_OP_GET_STATS;
+	args.in_args = (u8 *)&vport_stats;
+	args.in_args_size = sizeof(vport_stats);
+	args.out_buffer = adapter->mbx_resp;
+	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
+
+	err = idpf_vc_cmd_execute(adapter, &args);
+	if (err) {
+		DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_GET_STATS");
+		*pstats = NULL;
+		return err;
+	}
+	*pstats = (struct virtchnl2_vport_stats *)args.out_buffer;
+	return 0;
+}
+
 #define IDPF_RX_BUF_STRIDE		64
 int
 idpf_vc_rxq_config(struct idpf_vport *vport, struct idpf_rx_queue *rxq)
diff --git a/drivers/common/idpf/idpf_common_virtchnl.h b/drivers/common/idpf/idpf_common_virtchnl.h
index c105f02836..6b94fd5b8f 100644
--- a/drivers/common/idpf/idpf_common_virtchnl.h
+++ b/drivers/common/idpf/idpf_common_virtchnl.h
@@ -49,4 +49,7 @@ __rte_internal
 int idpf_vc_rxq_config(struct idpf_vport *vport, struct idpf_rx_queue *rxq);
 __rte_internal
 int idpf_vc_txq_config(struct idpf_vport *vport, struct idpf_tx_queue *txq);
+__rte_internal
+int idpf_vc_stats_query(struct idpf_vport *vport,
+			struct virtchnl2_vport_stats **pstats);
 #endif /* _IDPF_COMMON_VIRTCHNL_H_ */
diff --git a/drivers/common/idpf/version.map b/drivers/common/idpf/version.map
index 8b33130bd6..e6a02828ba 100644
--- a/drivers/common/idpf/version.map
+++ b/drivers/common/idpf/version.map
@@ -46,6 +46,7 @@ INTERNAL {
 	idpf_vc_rss_key_set;
 	idpf_vc_rss_lut_set;
 	idpf_vc_rxq_config;
+	idpf_vc_stats_query;
 	idpf_vc_txq_config;
 	idpf_vc_vectors_alloc;
 	idpf_vc_vectors_dealloc;
@@ -59,6 +60,7 @@ INTERNAL {
 	idpf_vport_irq_map_config;
 	idpf_vport_irq_unmap_config;
 	idpf_vport_rss_config;
+	idpf_vport_stats_update;
 
 	local: *;
 };
diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index 33f5e90743..02ddb0330a 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -140,6 +140,87 @@ idpf_dev_supported_ptypes_get(struct rte_eth_dev *dev __rte_unused)
 	return ptypes;
 }
 
+static uint64_t
+idpf_get_mbuf_alloc_failed_stats(struct rte_eth_dev *dev)
+{
+	uint64_t mbuf_alloc_failed = 0;
+	struct idpf_rx_queue *rxq;
+	int i = 0;
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		rxq = dev->data->rx_queues[i];
+		mbuf_alloc_failed += __atomic_load_n(&rxq->rx_stats.mbuf_alloc_failed,
+						     __ATOMIC_RELAXED);
+	}
+
+	return mbuf_alloc_failed;
+}
+
+static int
+idpf_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
+{
+	struct idpf_vport *vport =
+		(struct idpf_vport *)dev->data->dev_private;
+	struct virtchnl2_vport_stats *pstats = NULL;
+	int ret;
+
+	ret = idpf_vc_stats_query(vport, &pstats);
+	if (ret == 0) {
+		uint8_t crc_stats_len = (dev->data->dev_conf.rxmode.offloads &
+					 RTE_ETH_RX_OFFLOAD_KEEP_CRC) ? 0 :
+					 RTE_ETHER_CRC_LEN;
+
+		idpf_vport_stats_update(&vport->eth_stats_offset, pstats);
+		stats->ipackets = pstats->rx_unicast + pstats->rx_multicast +
+				pstats->rx_broadcast - pstats->rx_discards;
+		stats->opackets = pstats->tx_broadcast + pstats->tx_multicast +
+						pstats->tx_unicast;
+		stats->imissed = pstats->rx_discards;
+		stats->oerrors = pstats->tx_errors + pstats->tx_discards;
+		stats->ibytes = pstats->rx_bytes;
+		stats->ibytes -= stats->ipackets * crc_stats_len;
+		stats->obytes = pstats->tx_bytes;
+
+		dev->data->rx_mbuf_alloc_failed = idpf_get_mbuf_alloc_failed_stats(dev);
+		stats->rx_nombuf = dev->data->rx_mbuf_alloc_failed;
+	} else {
+		PMD_DRV_LOG(ERR, "Get statistics failed");
+	}
+	return ret;
+}
+
+static void
+idpf_reset_mbuf_alloc_failed_stats(struct rte_eth_dev *dev)
+{
+	struct idpf_rx_queue *rxq;
+	int i;
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		rxq = dev->data->rx_queues[i];
+		__atomic_store_n(&rxq->rx_stats.mbuf_alloc_failed, 0, __ATOMIC_RELAXED);
+	}
+}
+
+static int
+idpf_dev_stats_reset(struct rte_eth_dev *dev)
+{
+	struct idpf_vport *vport =
+		(struct idpf_vport *)dev->data->dev_private;
+	struct virtchnl2_vport_stats *pstats = NULL;
+	int ret;
+
+	ret = idpf_vc_stats_query(vport, &pstats);
+	if (ret != 0)
+		return ret;
+
+	/* set stats offset base on current values */
+	vport->eth_stats_offset = *pstats;
+
+	idpf_reset_mbuf_alloc_failed_stats(dev);
+
+	return 0;
+}
+
 static int
 idpf_init_rss(struct idpf_vport *vport)
 {
@@ -327,6 +408,9 @@ idpf_dev_start(struct rte_eth_dev *dev)
 		goto err_vport;
 	}
 
+	if (idpf_dev_stats_reset(dev))
+		PMD_DRV_LOG(ERR, "Failed to reset stats");
+
 	vport->stopped = 0;
 
 	return 0;
@@ -606,6 +690,8 @@ static const struct eth_dev_ops idpf_eth_dev_ops = {
 	.tx_queue_release		= idpf_dev_tx_queue_release,
 	.mtu_set			= idpf_dev_mtu_set,
 	.dev_supported_ptypes_get	= idpf_dev_supported_ptypes_get,
+	.stats_get			= idpf_dev_stats_get,
+	.stats_reset			= idpf_dev_stats_reset,
 };
 
 static uint16_t
-- 
2.25.1


^ permalink raw reply	[flat|nested] 63+ messages in thread

* [PATCH v4 2/6] common/idpf: add RSS set/get ops
  2023-02-07  9:56     ` [PATCH v4 0/6] add idpf pmd enhancement features Mingxia Liu
  2023-02-07  9:56       ` [PATCH v4 1/6] common/idpf: add hw statistics Mingxia Liu
@ 2023-02-07  9:56       ` Mingxia Liu
  2023-02-07  9:56       ` [PATCH v4 3/6] common/idpf: support single q scatter RX datapath Mingxia Liu
                         ` (4 subsequent siblings)
  6 siblings, 0 replies; 63+ messages in thread
From: Mingxia Liu @ 2023-02-07  9:56 UTC (permalink / raw)
  To: dev, qi.z.zhang, jingjing.wu, beilei.xing; +Cc: Mingxia Liu

Add support for these device ops:
- rss_reta_update
- rss_reta_query
- rss_hash_update
- rss_hash_conf_get

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/common/idpf/idpf_common_device.h   |   1 +
 drivers/common/idpf/idpf_common_virtchnl.c | 119 +++++++++
 drivers/common/idpf/idpf_common_virtchnl.h |   6 +
 drivers/common/idpf/version.map            |   3 +
 drivers/net/idpf/idpf_ethdev.c             | 268 +++++++++++++++++++++
 drivers/net/idpf/idpf_ethdev.h             |   3 +-
 6 files changed, 399 insertions(+), 1 deletion(-)

diff --git a/drivers/common/idpf/idpf_common_device.h b/drivers/common/idpf/idpf_common_device.h
index 1d8e7d405a..7abc4d2a3a 100644
--- a/drivers/common/idpf/idpf_common_device.h
+++ b/drivers/common/idpf/idpf_common_device.h
@@ -98,6 +98,7 @@ struct idpf_vport {
 	uint32_t *rss_lut;
 	uint8_t *rss_key;
 	uint64_t rss_hf;
+	uint64_t last_general_rss_hf;
 
 	/* MSIX info*/
 	struct virtchnl2_queue_vector *qv_map; /* queue vector mapping */
diff --git a/drivers/common/idpf/idpf_common_virtchnl.c b/drivers/common/idpf/idpf_common_virtchnl.c
index 40cff34c09..10cfa33704 100644
--- a/drivers/common/idpf/idpf_common_virtchnl.c
+++ b/drivers/common/idpf/idpf_common_virtchnl.c
@@ -218,6 +218,9 @@ idpf_vc_cmd_execute(struct idpf_adapter *adapter, struct idpf_cmd_info *args)
 	case VIRTCHNL2_OP_ALLOC_VECTORS:
 	case VIRTCHNL2_OP_DEALLOC_VECTORS:
 	case VIRTCHNL2_OP_GET_STATS:
+	case VIRTCHNL2_OP_GET_RSS_KEY:
+	case VIRTCHNL2_OP_GET_RSS_HASH:
+	case VIRTCHNL2_OP_GET_RSS_LUT:
 		/* for init virtchnl ops, need to poll the response */
 		err = idpf_vc_one_msg_read(adapter, args->ops, args->out_size, args->out_buffer);
 		clear_cmd(adapter);
@@ -448,6 +451,48 @@ idpf_vc_rss_key_set(struct idpf_vport *vport)
 	return err;
 }
 
+int idpf_vc_rss_key_get(struct idpf_vport *vport)
+{
+	struct idpf_adapter *adapter = vport->adapter;
+	struct virtchnl2_rss_key *rss_key_ret;
+	struct virtchnl2_rss_key rss_key;
+	struct idpf_cmd_info args;
+	int err;
+
+	memset(&rss_key, 0, sizeof(rss_key));
+	rss_key.vport_id = vport->vport_id;
+
+	memset(&args, 0, sizeof(args));
+	args.ops = VIRTCHNL2_OP_GET_RSS_KEY;
+	args.in_args = (uint8_t *)&rss_key;
+	args.in_args_size = sizeof(rss_key);
+	args.out_buffer = adapter->mbx_resp;
+	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
+
+	err = idpf_vc_cmd_execute(adapter, &args);
+
+	if (!err) {
+		rss_key_ret = (struct virtchnl2_rss_key *)args.out_buffer;
+		if (rss_key_ret->key_len != vport->rss_key_size) {
+			rte_free(vport->rss_key);
+			vport->rss_key = NULL;
+			vport->rss_key_size = RTE_MIN(IDPF_RSS_KEY_LEN,
+						      rss_key_ret->key_len);
+			vport->rss_key = rte_zmalloc("rss_key", vport->rss_key_size, 0);
+			if (!vport->rss_key) {
+				vport->rss_key_size = 0;
+				DRV_LOG(ERR, "Failed to allocate RSS key");
+				return -ENOMEM;
+			}
+		}
+		rte_memcpy(vport->rss_key, rss_key_ret->key, vport->rss_key_size);
+	} else {
+		DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_GET_RSS_KEY");
+	}
+
+	return err;
+}
+
 int
 idpf_vc_rss_lut_set(struct idpf_vport *vport)
 {
@@ -482,6 +527,80 @@ idpf_vc_rss_lut_set(struct idpf_vport *vport)
 	return err;
 }
 
+int
+idpf_vc_rss_lut_get(struct idpf_vport *vport)
+{
+	struct idpf_adapter *adapter = vport->adapter;
+	struct virtchnl2_rss_lut *rss_lut_ret;
+	struct virtchnl2_rss_lut rss_lut;
+	struct idpf_cmd_info args;
+	int err;
+
+	memset(&rss_lut, 0, sizeof(rss_lut));
+	rss_lut.vport_id = vport->vport_id;
+
+	memset(&args, 0, sizeof(args));
+	args.ops = VIRTCHNL2_OP_GET_RSS_LUT;
+	args.in_args = (uint8_t *)&rss_lut;
+	args.in_args_size = sizeof(rss_lut);
+	args.out_buffer = adapter->mbx_resp;
+	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
+
+	err = idpf_vc_cmd_execute(adapter, &args);
+
+	if (!err) {
+		rss_lut_ret = (struct virtchnl2_rss_lut *)args.out_buffer;
+		if (rss_lut_ret->lut_entries != vport->rss_lut_size) {
+			rte_free(vport->rss_lut);
+			vport->rss_lut = NULL;
+			vport->rss_lut = rte_zmalloc("rss_lut",
+				     sizeof(uint32_t) * rss_lut_ret->lut_entries, 0);
+			if (vport->rss_lut == NULL) {
+				DRV_LOG(ERR, "Failed to allocate RSS lut");
+				return -ENOMEM;
+			}
+		}
+		rte_memcpy(vport->rss_lut, rss_lut_ret->lut, rss_lut_ret->lut_entries);
+		vport->rss_lut_size = rss_lut_ret->lut_entries;
+	} else {
+		DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_GET_RSS_LUT");
+	}
+
+	return err;
+}
+
+int
+idpf_vc_rss_hash_get(struct idpf_vport *vport)
+{
+	struct idpf_adapter *adapter = vport->adapter;
+	struct virtchnl2_rss_hash *rss_hash_ret;
+	struct virtchnl2_rss_hash rss_hash;
+	struct idpf_cmd_info args;
+	int err;
+
+	memset(&rss_hash, 0, sizeof(rss_hash));
+	rss_hash.ptype_groups = vport->rss_hf;
+	rss_hash.vport_id = vport->vport_id;
+
+	memset(&args, 0, sizeof(args));
+	args.ops = VIRTCHNL2_OP_GET_RSS_HASH;
+	args.in_args = (uint8_t *)&rss_hash;
+	args.in_args_size = sizeof(rss_hash);
+	args.out_buffer = adapter->mbx_resp;
+	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
+
+	err = idpf_vc_cmd_execute(adapter, &args);
+
+	if (!err) {
+		rss_hash_ret = (struct virtchnl2_rss_hash *)args.out_buffer;
+		vport->rss_hf = rss_hash_ret->ptype_groups;
+	} else {
+		DRV_LOG(ERR, "Failed to execute command of OP_GET_RSS_HASH");
+	}
+
+	return err;
+}
+
 int
 idpf_vc_rss_hash_set(struct idpf_vport *vport)
 {
diff --git a/drivers/common/idpf/idpf_common_virtchnl.h b/drivers/common/idpf/idpf_common_virtchnl.h
index 6b94fd5b8f..205d1a932d 100644
--- a/drivers/common/idpf/idpf_common_virtchnl.h
+++ b/drivers/common/idpf/idpf_common_virtchnl.h
@@ -52,4 +52,10 @@ int idpf_vc_txq_config(struct idpf_vport *vport, struct idpf_tx_queue *txq);
 __rte_internal
 int idpf_vc_stats_query(struct idpf_vport *vport,
 			struct virtchnl2_vport_stats **pstats);
+__rte_internal
+int idpf_vc_rss_key_get(struct idpf_vport *vport);
+__rte_internal
+int idpf_vc_rss_lut_get(struct idpf_vport *vport);
+__rte_internal
+int idpf_vc_rss_hash_get(struct idpf_vport *vport);
 #endif /* _IDPF_COMMON_VIRTCHNL_H_ */
diff --git a/drivers/common/idpf/version.map b/drivers/common/idpf/version.map
index e6a02828ba..f6c92e7e57 100644
--- a/drivers/common/idpf/version.map
+++ b/drivers/common/idpf/version.map
@@ -42,8 +42,11 @@ INTERNAL {
 	idpf_vc_ptype_info_query;
 	idpf_vc_queue_switch;
 	idpf_vc_queues_ena_dis;
+	idpf_vc_rss_hash_get;
 	idpf_vc_rss_hash_set;
+	idpf_vc_rss_key_get;
 	idpf_vc_rss_key_set;
+	idpf_vc_rss_lut_get;
 	idpf_vc_rss_lut_set;
 	idpf_vc_rxq_config;
 	idpf_vc_stats_query;
diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index 02ddb0330a..d50e0952bf 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -29,6 +29,56 @@ static const char * const idpf_valid_args[] = {
 	NULL
 };
 
+static const uint64_t idpf_map_hena_rss[] = {
+	[IDPF_HASH_NONF_UNICAST_IPV4_UDP] =
+			RTE_ETH_RSS_NONFRAG_IPV4_UDP,
+	[IDPF_HASH_NONF_MULTICAST_IPV4_UDP] =
+			RTE_ETH_RSS_NONFRAG_IPV4_UDP,
+	[IDPF_HASH_NONF_IPV4_UDP] =
+			RTE_ETH_RSS_NONFRAG_IPV4_UDP,
+	[IDPF_HASH_NONF_IPV4_TCP_SYN_NO_ACK] =
+			RTE_ETH_RSS_NONFRAG_IPV4_TCP,
+	[IDPF_HASH_NONF_IPV4_TCP] =
+			RTE_ETH_RSS_NONFRAG_IPV4_TCP,
+	[IDPF_HASH_NONF_IPV4_SCTP] =
+			RTE_ETH_RSS_NONFRAG_IPV4_SCTP,
+	[IDPF_HASH_NONF_IPV4_OTHER] =
+			RTE_ETH_RSS_NONFRAG_IPV4_OTHER,
+	[IDPF_HASH_FRAG_IPV4] = RTE_ETH_RSS_FRAG_IPV4,
+
+	/* IPv6 */
+	[IDPF_HASH_NONF_UNICAST_IPV6_UDP] =
+			RTE_ETH_RSS_NONFRAG_IPV6_UDP,
+	[IDPF_HASH_NONF_MULTICAST_IPV6_UDP] =
+			RTE_ETH_RSS_NONFRAG_IPV6_UDP,
+	[IDPF_HASH_NONF_IPV6_UDP] =
+			RTE_ETH_RSS_NONFRAG_IPV6_UDP,
+	[IDPF_HASH_NONF_IPV6_TCP_SYN_NO_ACK] =
+			RTE_ETH_RSS_NONFRAG_IPV6_TCP,
+	[IDPF_HASH_NONF_IPV6_TCP] =
+			RTE_ETH_RSS_NONFRAG_IPV6_TCP,
+	[IDPF_HASH_NONF_IPV6_SCTP] =
+			RTE_ETH_RSS_NONFRAG_IPV6_SCTP,
+	[IDPF_HASH_NONF_IPV6_OTHER] =
+			RTE_ETH_RSS_NONFRAG_IPV6_OTHER,
+	[IDPF_HASH_FRAG_IPV6] = RTE_ETH_RSS_FRAG_IPV6,
+
+	/* L2 Payload */
+	[IDPF_HASH_L2_PAYLOAD] = RTE_ETH_RSS_L2_PAYLOAD
+};
+
+static const uint64_t idpf_ipv4_rss = RTE_ETH_RSS_NONFRAG_IPV4_UDP |
+			  RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+			  RTE_ETH_RSS_NONFRAG_IPV4_SCTP |
+			  RTE_ETH_RSS_NONFRAG_IPV4_OTHER |
+			  RTE_ETH_RSS_FRAG_IPV4;
+
+static const uint64_t idpf_ipv6_rss = RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+			  RTE_ETH_RSS_NONFRAG_IPV6_TCP |
+			  RTE_ETH_RSS_NONFRAG_IPV6_SCTP |
+			  RTE_ETH_RSS_NONFRAG_IPV6_OTHER |
+			  RTE_ETH_RSS_FRAG_IPV6;
+
 static int
 idpf_dev_link_update(struct rte_eth_dev *dev,
 		     __rte_unused int wait_to_complete)
@@ -59,6 +109,9 @@ idpf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->max_mtu = vport->max_mtu;
 	dev_info->min_mtu = RTE_ETHER_MIN_MTU;
 
+	dev_info->hash_key_size = vport->rss_key_size;
+	dev_info->reta_size = vport->rss_lut_size;
+
 	dev_info->flow_type_rss_offloads = IDPF_RSS_OFFLOAD_ALL;
 
 	dev_info->rx_offload_capa =
@@ -221,6 +274,36 @@ idpf_dev_stats_reset(struct rte_eth_dev *dev)
 	return 0;
 }
 
+static int idpf_config_rss_hf(struct idpf_vport *vport, uint64_t rss_hf)
+{
+	uint64_t hena = 0;
+	uint16_t i;
+
+	/**
+	 * RTE_ETH_RSS_IPV4 and RTE_ETH_RSS_IPV6 can be considered as 2
+	 * generalizations of all other IPv4 and IPv6 RSS types.
+	 */
+	if (rss_hf & RTE_ETH_RSS_IPV4)
+		rss_hf |= idpf_ipv4_rss;
+
+	if (rss_hf & RTE_ETH_RSS_IPV6)
+		rss_hf |= idpf_ipv6_rss;
+
+	for (i = 0; i < RTE_DIM(idpf_map_hena_rss); i++) {
+		if (idpf_map_hena_rss[i] & rss_hf)
+			hena |= BIT_ULL(i);
+	}
+
+	/**
+	 * At present, the CP doesn't process the virtual channel msg of rss_hf configuration;
+	 * a warning is logged below.
+	 */
+	if (hena != vport->rss_hf)
+		PMD_DRV_LOG(WARNING, "Updating RSS Hash Function is not supported at present.");
+
+	return 0;
+}
+
 static int
 idpf_init_rss(struct idpf_vport *vport)
 {
@@ -257,6 +340,187 @@ idpf_init_rss(struct idpf_vport *vport)
 	return ret;
 }
 
+static int
+idpf_rss_reta_update(struct rte_eth_dev *dev,
+		     struct rte_eth_rss_reta_entry64 *reta_conf,
+		     uint16_t reta_size)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_adapter *adapter = vport->adapter;
+	uint16_t idx, shift;
+	int ret = 0;
+	uint16_t i;
+
+	if (adapter->caps.rss_caps == 0 || dev->data->nb_rx_queues == 0) {
+		PMD_DRV_LOG(DEBUG, "RSS is not supported");
+		return -ENOTSUP;
+	}
+
+	if (reta_size != vport->rss_lut_size) {
+		PMD_DRV_LOG(ERR, "The size of hash lookup table configured "
+				 "(%d) doesn't match the number the hardware can "
+				 "support (%d)",
+			    reta_size, vport->rss_lut_size);
+		return -EINVAL;
+	}
+
+	for (i = 0; i < reta_size; i++) {
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
+		if (reta_conf[idx].mask & (1ULL << shift))
+			vport->rss_lut[i] = reta_conf[idx].reta[shift];
+	}
+
+	/* send virtchnl ops to configure RSS */
+	ret = idpf_vc_rss_lut_set(vport);
+	if (ret)
+		PMD_INIT_LOG(ERR, "Failed to configure RSS lut");
+
+	return ret;
+}
+
+static int
+idpf_rss_reta_query(struct rte_eth_dev *dev,
+		    struct rte_eth_rss_reta_entry64 *reta_conf,
+		    uint16_t reta_size)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_adapter *adapter = vport->adapter;
+	uint16_t idx, shift;
+	int ret = 0;
+	uint16_t i;
+
+	if (adapter->caps.rss_caps == 0 || dev->data->nb_rx_queues == 0) {
+		PMD_DRV_LOG(DEBUG, "RSS is not supported");
+		return -ENOTSUP;
+	}
+
+	if (reta_size != vport->rss_lut_size) {
+		PMD_DRV_LOG(ERR, "The size of hash lookup table configured "
+			"(%d) doesn't match the number the hardware can "
+			"support (%d)", reta_size, vport->rss_lut_size);
+		return -EINVAL;
+	}
+
+	ret = idpf_vc_rss_lut_get(vport);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "Failed to get RSS LUT");
+		return ret;
+	}
+
+	for (i = 0; i < reta_size; i++) {
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
+		if (reta_conf[idx].mask & (1ULL << shift))
+			reta_conf[idx].reta[shift] = vport->rss_lut[i];
+	}
+
+	return 0;
+}
+
+static int
+idpf_rss_hash_update(struct rte_eth_dev *dev,
+		     struct rte_eth_rss_conf *rss_conf)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_adapter *adapter = vport->adapter;
+	int ret = 0;
+
+	if (adapter->caps.rss_caps == 0 || dev->data->nb_rx_queues == 0) {
+		PMD_DRV_LOG(DEBUG, "RSS is not supported");
+		return -ENOTSUP;
+	}
+
+	if (!rss_conf->rss_key || rss_conf->rss_key_len == 0) {
+		PMD_DRV_LOG(DEBUG, "No key to be configured");
+		goto skip_rss_key;
+	} else if (rss_conf->rss_key_len != vport->rss_key_size) {
+		PMD_DRV_LOG(ERR, "The size of hash key configured "
+				 "(%d) doesn't match the size the hardware can "
+				 "support (%d)",
+			    rss_conf->rss_key_len,
+			    vport->rss_key_size);
+		return -EINVAL;
+	}
+
+	rte_memcpy(vport->rss_key, rss_conf->rss_key,
+		   vport->rss_key_size);
+	ret = idpf_vc_rss_key_set(vport);
+	if (ret != 0) {
+		PMD_INIT_LOG(ERR, "Failed to configure RSS key");
+		return ret;
+	}
+
+skip_rss_key:
+	ret = idpf_config_rss_hf(vport, rss_conf->rss_hf);
+	if (ret != 0) {
+		PMD_INIT_LOG(ERR, "Failed to configure RSS hash");
+		return ret;
+	}
+
+	return 0;
+}
+
+static uint64_t
+idpf_map_general_rss_hf(uint64_t config_rss_hf, uint64_t last_general_rss_hf)
+{
+	uint64_t valid_rss_hf = 0;
+	uint16_t i;
+
+	for (i = 0; i < RTE_DIM(idpf_map_hena_rss); i++) {
+		uint64_t bit = BIT_ULL(i);
+
+		if (bit & config_rss_hf)
+			valid_rss_hf |= idpf_map_hena_rss[i];
+	}
+
+	if (valid_rss_hf & idpf_ipv4_rss)
+		valid_rss_hf |= last_general_rss_hf & RTE_ETH_RSS_IPV4;
+
+	if (valid_rss_hf & idpf_ipv6_rss)
+		valid_rss_hf |= last_general_rss_hf & RTE_ETH_RSS_IPV6;
+
+	return valid_rss_hf;
+}
+
+static int
+idpf_rss_hash_conf_get(struct rte_eth_dev *dev,
+		       struct rte_eth_rss_conf *rss_conf)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_adapter *adapter = vport->adapter;
+	int ret = 0;
+
+	if (adapter->caps.rss_caps == 0 || dev->data->nb_rx_queues == 0) {
+		PMD_DRV_LOG(DEBUG, "RSS is not supported");
+		return -ENOTSUP;
+	}
+
+	ret = idpf_vc_rss_hash_get(vport);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "Failed to get RSS hf");
+		return ret;
+	}
+
+	rss_conf->rss_hf = idpf_map_general_rss_hf(vport->rss_hf, vport->last_general_rss_hf);
+
+	if (!rss_conf->rss_key)
+		return 0;
+
+	ret = idpf_vc_rss_key_get(vport);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "Failed to get RSS key");
+		return ret;
+	}
+
+	if (rss_conf->rss_key_len > vport->rss_key_size)
+		rss_conf->rss_key_len = vport->rss_key_size;
+
+	rte_memcpy(rss_conf->rss_key, vport->rss_key, rss_conf->rss_key_len);
+
+	return 0;
+}
+
 static int
 idpf_dev_configure(struct rte_eth_dev *dev)
 {
@@ -692,6 +956,10 @@ static const struct eth_dev_ops idpf_eth_dev_ops = {
 	.dev_supported_ptypes_get	= idpf_dev_supported_ptypes_get,
 	.stats_get			= idpf_dev_stats_get,
 	.stats_reset			= idpf_dev_stats_reset,
+	.reta_update			= idpf_rss_reta_update,
+	.reta_query			= idpf_rss_reta_query,
+	.rss_hash_update		= idpf_rss_hash_update,
+	.rss_hash_conf_get		= idpf_rss_hash_conf_get,
 };
 
 static uint16_t
diff --git a/drivers/net/idpf/idpf_ethdev.h b/drivers/net/idpf/idpf_ethdev.h
index d791d402fb..839a2bd82c 100644
--- a/drivers/net/idpf/idpf_ethdev.h
+++ b/drivers/net/idpf/idpf_ethdev.h
@@ -48,7 +48,8 @@
 		RTE_ETH_RSS_NONFRAG_IPV6_TCP    |	\
 		RTE_ETH_RSS_NONFRAG_IPV6_UDP    |	\
 		RTE_ETH_RSS_NONFRAG_IPV6_SCTP   |	\
-		RTE_ETH_RSS_NONFRAG_IPV6_OTHER)
+		RTE_ETH_RSS_NONFRAG_IPV6_OTHER  |	\
+		RTE_ETH_RSS_L2_PAYLOAD)
 
 #define IDPF_ADAPTER_NAME_LEN	(PCI_PRI_STR_SIZE + 1)
 
-- 
2.25.1


^ permalink raw reply	[flat|nested] 63+ messages in thread

* [PATCH v4 3/6] common/idpf: support single q scatter RX datapath
  2023-02-07  9:56     ` [PATCH v4 0/6] add idpf pmd enhancement features Mingxia Liu
  2023-02-07  9:56       ` [PATCH v4 1/6] common/idpf: add hw statistics Mingxia Liu
  2023-02-07  9:56       ` [PATCH v4 2/6] common/idpf: add RSS set/get ops Mingxia Liu
@ 2023-02-07  9:56       ` Mingxia Liu
  2023-02-07  9:56       ` [PATCH v4 4/6] common/idpf: add rss_offload hash in singleq rx Mingxia Liu
                         ` (3 subsequent siblings)
  6 siblings, 0 replies; 63+ messages in thread
From: Mingxia Liu @ 2023-02-07  9:56 UTC (permalink / raw)
  To: dev, qi.z.zhang, jingjing.wu, beilei.xing; +Cc: Mingxia Liu, Wenjun Wu

This patch adds the single queue scatter Rx receive function.

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
---
 drivers/common/idpf/idpf_common_rxtx.c | 135 +++++++++++++++++++++++++
 drivers/common/idpf/idpf_common_rxtx.h |   3 +
 drivers/common/idpf/version.map        |   1 +
 drivers/net/idpf/idpf_ethdev.c         |   3 +-
 drivers/net/idpf/idpf_rxtx.c           |  28 +++++
 drivers/net/idpf/idpf_rxtx.h           |   2 +
 6 files changed, 171 insertions(+), 1 deletion(-)

diff --git a/drivers/common/idpf/idpf_common_rxtx.c b/drivers/common/idpf/idpf_common_rxtx.c
index fdac2c3114..9303b51cce 100644
--- a/drivers/common/idpf/idpf_common_rxtx.c
+++ b/drivers/common/idpf/idpf_common_rxtx.c
@@ -1146,6 +1146,141 @@ idpf_dp_singleq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 	return nb_rx;
 }
 
+uint16_t
+idpf_dp_singleq_recv_scatter_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
+			       uint16_t nb_pkts)
+{
+	struct idpf_rx_queue *rxq = rx_queue;
+	volatile union virtchnl2_rx_desc *rx_ring = rxq->rx_ring;
+	volatile union virtchnl2_rx_desc *rxdp;
+	union virtchnl2_rx_desc rxd;
+	struct idpf_adapter *ad;
+	struct rte_mbuf *first_seg = rxq->pkt_first_seg;
+	struct rte_mbuf *last_seg = rxq->pkt_last_seg;
+	struct rte_mbuf *rxm;
+	struct rte_mbuf *nmb;
+	struct rte_eth_dev *dev;
+	const uint32_t *ptype_tbl = rxq->adapter->ptype_tbl;
+	uint16_t rx_id = rxq->rx_tail;
+	uint16_t rx_packet_len;
+	uint16_t nb_hold = 0;
+	uint16_t rx_status0;
+	uint16_t nb_rx = 0;
+	uint64_t pkt_flags;
+	uint64_t dma_addr;
+	uint64_t ts_ns;
+
+	ad = rxq->adapter;
+
+	if (unlikely(!rxq) || unlikely(!rxq->q_started))
+		return nb_rx;
+
+	while (nb_rx < nb_pkts) {
+		rxdp = &rx_ring[rx_id];
+		rx_status0 = rte_le_to_cpu_16(rxdp->flex_nic_wb.status_error0);
+
+		/* Check the DD bit first */
+		if (!(rx_status0 & (1 << VIRTCHNL2_RX_FLEX_DESC_STATUS0_DD_S)))
+			break;
+
+		nmb = rte_mbuf_raw_alloc(rxq->mp);
+		if (unlikely(!nmb)) {
+			__atomic_fetch_add(&rxq->rx_stats.mbuf_alloc_failed, 1, __ATOMIC_RELAXED);
+			RX_LOG(DEBUG, "RX mbuf alloc failed port_id=%u "
+			       "queue_id=%u", rxq->port_id, rxq->queue_id);
+			break;
+		}
+
+		rxd = *rxdp;
+
+		nb_hold++;
+		rxm = rxq->sw_ring[rx_id];
+		rxq->sw_ring[rx_id] = nmb;
+		rx_id++;
+		if (unlikely(rx_id == rxq->nb_rx_desc))
+			rx_id = 0;
+
+		/* Prefetch next mbuf */
+		rte_prefetch0(rxq->sw_ring[rx_id]);
+
+		/* When next RX descriptor is on a cache line boundary,
+		 * prefetch the next 4 RX descriptors and next 8 pointers
+		 * to mbufs.
+		 */
+		if ((rx_id & 0x3) == 0) {
+			rte_prefetch0(&rx_ring[rx_id]);
+			rte_prefetch0(rxq->sw_ring[rx_id]);
+		}
+		dma_addr =
+			rte_cpu_to_le_64(rte_mbuf_data_iova_default(nmb));
+		rxdp->read.hdr_addr = 0;
+		rxdp->read.pkt_addr = dma_addr;
+		rx_packet_len = (rte_cpu_to_le_16(rxd.flex_nic_wb.pkt_len) &
+				 VIRTCHNL2_RX_FLEX_DESC_PKT_LEN_M);
+		rxm->data_len = rx_packet_len;
+		rxm->data_off = RTE_PKTMBUF_HEADROOM;
+
+		/**
+		 * If this is the first buffer of the received packet, set the
+		 * pointer to the first mbuf of the packet and initialize its
+		 * context. Otherwise, update the total length and the number
+		 * of segments of the current scattered packet, and update the
+		 * pointer to the last mbuf of the current packet.
+		 */
+		if (!first_seg) {
+			first_seg = rxm;
+			first_seg->nb_segs = 1;
+			first_seg->pkt_len = rx_packet_len;
+		} else {
+			first_seg->pkt_len =
+				(uint16_t)(first_seg->pkt_len +
+					   rx_packet_len);
+			first_seg->nb_segs++;
+			last_seg->next = rxm;
+		}
+
+		if (!(rx_status0 & (1 << VIRTCHNL2_RX_FLEX_DESC_STATUS0_EOF_S))) {
+			last_seg = rxm;
+			continue;
+		}
+
+		rxm->next = NULL;
+
+		first_seg->port = rxq->port_id;
+		first_seg->ol_flags = 0;
+		pkt_flags = idpf_rxd_to_pkt_flags(rx_status0);
+		first_seg->packet_type =
+			ptype_tbl[(uint8_t)(rte_cpu_to_le_16(rxd.flex_nic_wb.ptype_flex_flags0) &
+				VIRTCHNL2_RX_FLEX_DESC_PTYPE_M)];
+
+		if (idpf_timestamp_dynflag > 0 &&
+		    (rxq->offloads & IDPF_RX_OFFLOAD_TIMESTAMP) != 0) {
+			/* timestamp */
+			ts_ns = idpf_tstamp_convert_32b_64b(ad,
+				rxq->hw_register_set,
+				rte_le_to_cpu_32(rxd.flex_nic_wb.flex_ts.ts_high));
+			rxq->hw_register_set = 0;
+			*RTE_MBUF_DYNFIELD(rxm,
+					   idpf_timestamp_dynfield_offset,
+					   rte_mbuf_timestamp_t *) = ts_ns;
+			first_seg->ol_flags |= idpf_timestamp_dynflag;
+		}
+
+		first_seg->ol_flags |= pkt_flags;
+		rte_prefetch0(RTE_PTR_ADD(first_seg->buf_addr,
+					  first_seg->data_off));
+		rx_pkts[nb_rx++] = first_seg;
+		first_seg = NULL;
+	}
+	rxq->rx_tail = rx_id;
+	rxq->pkt_first_seg = first_seg;
+	rxq->pkt_last_seg = last_seg;
+
+	idpf_update_rx_tail(rxq, nb_hold, rx_id);
+
+	return nb_rx;
+}
+
 static inline int
 idpf_xmit_cleanup(struct idpf_tx_queue *txq)
 {
diff --git a/drivers/common/idpf/idpf_common_rxtx.h b/drivers/common/idpf/idpf_common_rxtx.h
index 263dab061c..7e6df080e6 100644
--- a/drivers/common/idpf/idpf_common_rxtx.h
+++ b/drivers/common/idpf/idpf_common_rxtx.h
@@ -293,5 +293,8 @@ uint16_t idpf_dp_singleq_xmit_pkts_avx512(void *tx_queue,
 __rte_internal
 uint16_t idpf_dp_splitq_xmit_pkts_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
 					 uint16_t nb_pkts);
+__rte_internal
+uint16_t idpf_dp_singleq_recv_scatter_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
+			  uint16_t nb_pkts);
 
 #endif /* _IDPF_COMMON_RXTX_H_ */
diff --git a/drivers/common/idpf/version.map b/drivers/common/idpf/version.map
index f6c92e7e57..e31f6ff4d9 100644
--- a/drivers/common/idpf/version.map
+++ b/drivers/common/idpf/version.map
@@ -7,6 +7,7 @@ INTERNAL {
 	idpf_dp_prep_pkts;
 	idpf_dp_singleq_recv_pkts;
 	idpf_dp_singleq_recv_pkts_avx512;
+	idpf_dp_singleq_recv_scatter_pkts;
 	idpf_dp_singleq_xmit_pkts;
 	idpf_dp_singleq_xmit_pkts_avx512;
 	idpf_dp_splitq_recv_pkts;
diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index d50e0952bf..bd7cf41b43 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -119,7 +119,8 @@ idpf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 		RTE_ETH_RX_OFFLOAD_UDP_CKSUM            |
 		RTE_ETH_RX_OFFLOAD_TCP_CKSUM            |
 		RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM     |
-		RTE_ETH_RX_OFFLOAD_TIMESTAMP;
+		RTE_ETH_RX_OFFLOAD_TIMESTAMP		|
+		RTE_ETH_RX_OFFLOAD_SCATTER;
 
 	dev_info->tx_offload_capa =
 		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM		|
diff --git a/drivers/net/idpf/idpf_rxtx.c b/drivers/net/idpf/idpf_rxtx.c
index 38d9829912..d16acd87fb 100644
--- a/drivers/net/idpf/idpf_rxtx.c
+++ b/drivers/net/idpf/idpf_rxtx.c
@@ -503,6 +503,8 @@ int
 idpf_rx_queue_init(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 {
 	struct idpf_rx_queue *rxq;
+	uint16_t max_pkt_len;
+	uint32_t frame_size;
 	int err;
 
 	if (rx_queue_id >= dev->data->nb_rx_queues)
@@ -516,6 +518,17 @@ idpf_rx_queue_init(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 		return -EINVAL;
 	}
 
+	frame_size = dev->data->mtu + IDPF_ETH_OVERHEAD;
+
+	max_pkt_len =
+	    RTE_MIN((uint32_t)IDPF_SUPPORT_CHAIN_NUM * rxq->rx_buf_len,
+		    frame_size);
+
+	rxq->max_pkt_len = max_pkt_len;
+	if ((dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_SCATTER) ||
+	    frame_size > rxq->rx_buf_len)
+		dev->data->scattered_rx = 1;
+
 	err = idpf_qc_ts_mbuf_register(rxq);
 	if (err != 0) {
 		PMD_DRV_LOG(ERR, "fail to residter timestamp mbuf %u",
@@ -807,6 +820,14 @@ idpf_set_rx_function(struct rte_eth_dev *dev)
 			}
 #endif /* CC_AVX512_SUPPORT */
 		}
+
+		if (dev->data->scattered_rx) {
+			PMD_DRV_LOG(NOTICE,
+				    "Using Single Scalar Scattered Rx (port %d).",
+				    dev->data->port_id);
+			dev->rx_pkt_burst = idpf_dp_singleq_recv_scatter_pkts;
+			return;
+		}
 		PMD_DRV_LOG(NOTICE,
 			    "Using Single Scalar Rx (port %d).",
 			    dev->data->port_id);
@@ -819,6 +840,13 @@ idpf_set_rx_function(struct rte_eth_dev *dev)
 			    dev->data->port_id);
 		dev->rx_pkt_burst = idpf_dp_splitq_recv_pkts;
 	} else {
+		if (dev->data->scattered_rx) {
+			PMD_DRV_LOG(NOTICE,
+				    "Using Single Scalar Scattered Rx (port %d).",
+				    dev->data->port_id);
+			dev->rx_pkt_burst = idpf_dp_singleq_recv_scatter_pkts;
+			return;
+		}
 		PMD_DRV_LOG(NOTICE,
 			    "Using Single Scalar Rx (port %d).",
 			    dev->data->port_id);
diff --git a/drivers/net/idpf/idpf_rxtx.h b/drivers/net/idpf/idpf_rxtx.h
index 3a5084dfd6..41a7495083 100644
--- a/drivers/net/idpf/idpf_rxtx.h
+++ b/drivers/net/idpf/idpf_rxtx.h
@@ -23,6 +23,8 @@
 #define IDPF_DEFAULT_TX_RS_THRESH	32
 #define IDPF_DEFAULT_TX_FREE_THRESH	32
 
+#define IDPF_SUPPORT_CHAIN_NUM 5
+
 int idpf_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 			uint16_t nb_desc, unsigned int socket_id,
 			const struct rte_eth_rxconf *rx_conf,
-- 
2.25.1


^ permalink raw reply	[flat|nested] 63+ messages in thread

* [PATCH v4 4/6] common/idpf: add rss_offload hash in singleq rx
  2023-02-07  9:56     ` [PATCH v4 0/6] add idpf pmd enhancement features Mingxia Liu
                         ` (2 preceding siblings ...)
  2023-02-07  9:56       ` [PATCH v4 3/6] common/idpf: support single q scatter RX datapath Mingxia Liu
@ 2023-02-07  9:56       ` Mingxia Liu
  2023-02-07  9:57       ` [PATCH v4 5/6] common/idpf: add alarm to support handle vchnl message Mingxia Liu
                         ` (2 subsequent siblings)
  6 siblings, 0 replies; 63+ messages in thread
From: Mingxia Liu @ 2023-02-07  9:56 UTC (permalink / raw)
  To: dev, qi.z.zhang, jingjing.wu, beilei.xing; +Cc: Mingxia Liu

This patch adds RSS valid flag and hash value parsing of the Rx descriptor.
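
For reference, a minimal application-side sketch (not part of this patch) of
consuming the parsed hash; it assumes the port and queue are already set up
with RSS enabled, and the burst size is arbitrary:

#include <stdio.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define BURST_SZ 32

/* Poll one Rx burst and print the hash of packets that carry one. */
static void
print_rx_rss_hash(uint16_t port_id, uint16_t queue_id)
{
	struct rte_mbuf *pkts[BURST_SZ];
	uint16_t nb, i;

	nb = rte_eth_rx_burst(port_id, queue_id, pkts, BURST_SZ);
	for (i = 0; i < nb; i++) {
		if (pkts[i]->ol_flags & RTE_MBUF_F_RX_RSS_HASH)
			printf("pkt %u: rss hash 0x%08x\n", i, pkts[i]->hash.rss);
		rte_pktmbuf_free(pkts[i]);
	}
}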

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/common/idpf/idpf_common_rxtx.c | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)

diff --git a/drivers/common/idpf/idpf_common_rxtx.c b/drivers/common/idpf/idpf_common_rxtx.c
index 9303b51cce..d7e8df1895 100644
--- a/drivers/common/idpf/idpf_common_rxtx.c
+++ b/drivers/common/idpf/idpf_common_rxtx.c
@@ -1030,6 +1030,20 @@ idpf_update_rx_tail(struct idpf_rx_queue *rxq, uint16_t nb_hold,
 	rxq->nb_rx_hold = nb_hold;
 }
 
+static inline void
+idpf_singleq_rx_rss_offload(struct rte_mbuf *mb,
+			    volatile struct virtchnl2_rx_flex_desc_nic *rx_desc,
+			    uint64_t *pkt_flags)
+{
+	uint16_t rx_status0 = rte_le_to_cpu_16(rx_desc->status_error0);
+
+	if (rx_status0 & RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_STATUS0_RSS_VALID_S)) {
+		*pkt_flags |= RTE_MBUF_F_RX_RSS_HASH;
+		mb->hash.rss = rte_le_to_cpu_32(rx_desc->rss_hash);
+	}
+
+}
+
 uint16_t
 idpf_dp_singleq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 			  uint16_t nb_pkts)
@@ -1118,6 +1132,7 @@ idpf_dp_singleq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 		rxm->port = rxq->port_id;
 		rxm->ol_flags = 0;
 		pkt_flags = idpf_rxd_to_pkt_flags(rx_status0);
+		idpf_singleq_rx_rss_offload(rxm, &rxd.flex_nic_wb, &pkt_flags);
 		rxm->packet_type =
 			ptype_tbl[(uint8_t)(rte_cpu_to_le_16(rxd.flex_nic_wb.ptype_flex_flags0) &
 					    VIRTCHNL2_RX_FLEX_DESC_PTYPE_M)];
@@ -1249,6 +1264,7 @@ idpf_dp_singleq_recv_scatter_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 		first_seg->port = rxq->port_id;
 		first_seg->ol_flags = 0;
 		pkt_flags = idpf_rxd_to_pkt_flags(rx_status0);
+		idpf_singleq_rx_rss_offload(first_seg, &rxd.flex_nic_wb, &pkt_flags);
 		first_seg->packet_type =
 			ptype_tbl[(uint8_t)(rte_cpu_to_le_16(rxd.flex_nic_wb.ptype_flex_flags0) &
 				VIRTCHNL2_RX_FLEX_DESC_PTYPE_M)];
-- 
2.25.1


^ permalink raw reply	[flat|nested] 63+ messages in thread

* [PATCH v4 5/6] common/idpf: add alarm to support handle vchnl message
  2023-02-07  9:56     ` [PATCH v4 0/6] add idpf pmd enhancement features Mingxia Liu
                         ` (3 preceding siblings ...)
  2023-02-07  9:56       ` [PATCH v4 4/6] common/idpf: add rss_offload hash in singleq rx Mingxia Liu
@ 2023-02-07  9:57       ` Mingxia Liu
  2023-02-07  9:57       ` [PATCH v4 6/6] common/idpf: add xstats ops Mingxia Liu
  2023-02-07 10:08       ` [PATCH v4 0/6] add idpf pmd enhancement features Mingxia Liu
  6 siblings, 0 replies; 63+ messages in thread
From: Mingxia Liu @ 2023-02-07  9:57 UTC (permalink / raw)
  To: dev, qi.z.zhang, jingjing.wu, beilei.xing; +Cc: Mingxia Liu

Handle virtual channel messages.
Refine link status update.
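
As background, a self-contained sketch of the self-re-arming alarm pattern the
driver relies on here; the callback below only reports link status (the port id
argument and interval are illustrative), whereas the driver's callback drains
the mailbox:

#include <stdio.h>
#include <rte_alarm.h>
#include <rte_ethdev.h>

#define LSC_POLL_US 50000 /* 50 ms, same order as IDPF_ALARM_INTERVAL */

/* Alarm callback that re-arms itself so polling keeps running. */
static void
link_poll_cb(void *arg)
{
	uint16_t port_id = *(uint16_t *)arg;
	struct rte_eth_link link;

	if (rte_eth_link_get_nowait(port_id, &link) == 0)
		printf("port %u: link %s, %u Mbps\n", port_id,
		       link.link_status ? "up" : "down", link.link_speed);

	rte_eal_alarm_set(LSC_POLL_US, link_poll_cb, arg);
}

Arming it once with rte_eal_alarm_set() starts the loop, and
rte_eal_alarm_cancel(link_poll_cb, arg) stops it at teardown, mirroring the
lifecycle this patch applies in adapter init/deinit.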

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
Signed-off-by: Beilei Xing <beilei.xing@intel.com>
---
 drivers/common/idpf/idpf_common_device.h   |   5 +
 drivers/common/idpf/idpf_common_virtchnl.c |  33 ++--
 drivers/common/idpf/idpf_common_virtchnl.h |   6 +
 drivers/common/idpf/version.map            |   2 +
 drivers/net/idpf/idpf_ethdev.c             | 169 ++++++++++++++++++++-
 drivers/net/idpf/idpf_ethdev.h             |   2 +
 6 files changed, 195 insertions(+), 22 deletions(-)

diff --git a/drivers/common/idpf/idpf_common_device.h b/drivers/common/idpf/idpf_common_device.h
index 7abc4d2a3a..364a60221a 100644
--- a/drivers/common/idpf/idpf_common_device.h
+++ b/drivers/common/idpf/idpf_common_device.h
@@ -118,6 +118,11 @@ struct idpf_vport {
 	bool tx_use_avx512;
 
 	struct virtchnl2_vport_stats eth_stats_offset;
+
+	void *dev;
+	/* Event from ipf */
+	bool link_up;
+	uint32_t link_speed;
 };
 
 /* Message type read in virtual channel from PF */
diff --git a/drivers/common/idpf/idpf_common_virtchnl.c b/drivers/common/idpf/idpf_common_virtchnl.c
index 10cfa33704..99d9efbb7c 100644
--- a/drivers/common/idpf/idpf_common_virtchnl.c
+++ b/drivers/common/idpf/idpf_common_virtchnl.c
@@ -202,25 +202,6 @@ idpf_vc_cmd_execute(struct idpf_adapter *adapter, struct idpf_cmd_info *args)
 	switch (args->ops) {
 	case VIRTCHNL_OP_VERSION:
 	case VIRTCHNL2_OP_GET_CAPS:
-	case VIRTCHNL2_OP_CREATE_VPORT:
-	case VIRTCHNL2_OP_DESTROY_VPORT:
-	case VIRTCHNL2_OP_SET_RSS_KEY:
-	case VIRTCHNL2_OP_SET_RSS_LUT:
-	case VIRTCHNL2_OP_SET_RSS_HASH:
-	case VIRTCHNL2_OP_CONFIG_RX_QUEUES:
-	case VIRTCHNL2_OP_CONFIG_TX_QUEUES:
-	case VIRTCHNL2_OP_ENABLE_QUEUES:
-	case VIRTCHNL2_OP_DISABLE_QUEUES:
-	case VIRTCHNL2_OP_ENABLE_VPORT:
-	case VIRTCHNL2_OP_DISABLE_VPORT:
-	case VIRTCHNL2_OP_MAP_QUEUE_VECTOR:
-	case VIRTCHNL2_OP_UNMAP_QUEUE_VECTOR:
-	case VIRTCHNL2_OP_ALLOC_VECTORS:
-	case VIRTCHNL2_OP_DEALLOC_VECTORS:
-	case VIRTCHNL2_OP_GET_STATS:
-	case VIRTCHNL2_OP_GET_RSS_KEY:
-	case VIRTCHNL2_OP_GET_RSS_HASH:
-	case VIRTCHNL2_OP_GET_RSS_LUT:
 		/* for init virtchnl ops, need to poll the response */
 		err = idpf_vc_one_msg_read(adapter, args->ops, args->out_size, args->out_buffer);
 		clear_cmd(adapter);
@@ -1111,3 +1092,17 @@ idpf_vc_txq_config(struct idpf_vport *vport, struct idpf_tx_queue *txq)
 
 	return err;
 }
+
+int
+idpf_vc_ctlq_recv(struct idpf_ctlq_info *cq, u16 *num_q_msg,
+		  struct idpf_ctlq_msg *q_msg)
+{
+	return idpf_ctlq_recv(cq, num_q_msg, q_msg);
+}
+
+int
+idpf_vc_ctlq_post_rx_buffs(struct idpf_hw *hw, struct idpf_ctlq_info *cq,
+			   u16 *buff_count, struct idpf_dma_mem **buffs)
+{
+	return idpf_ctlq_post_rx_buffs(hw, cq, buff_count, buffs);
+}
diff --git a/drivers/common/idpf/idpf_common_virtchnl.h b/drivers/common/idpf/idpf_common_virtchnl.h
index 205d1a932d..d479d93c8e 100644
--- a/drivers/common/idpf/idpf_common_virtchnl.h
+++ b/drivers/common/idpf/idpf_common_virtchnl.h
@@ -58,4 +58,10 @@ __rte_internal
 int idpf_vc_rss_lut_get(struct idpf_vport *vport);
 __rte_internal
 int idpf_vc_rss_hash_get(struct idpf_vport *vport);
+__rte_internal
+int idpf_vc_ctlq_recv(struct idpf_ctlq_info *cq, u16 *num_q_msg,
+		      struct idpf_ctlq_msg *q_msg);
+__rte_internal
+int idpf_vc_ctlq_post_rx_buffs(struct idpf_hw *hw, struct idpf_ctlq_info *cq,
+			   u16 *buff_count, struct idpf_dma_mem **buffs);
 #endif /* _IDPF_COMMON_VIRTCHNL_H_ */
diff --git a/drivers/common/idpf/version.map b/drivers/common/idpf/version.map
index e31f6ff4d9..70334a1b03 100644
--- a/drivers/common/idpf/version.map
+++ b/drivers/common/idpf/version.map
@@ -38,6 +38,8 @@ INTERNAL {
 	idpf_vc_api_version_check;
 	idpf_vc_caps_get;
 	idpf_vc_cmd_execute;
+	idpf_vc_ctlq_post_rx_buffs;
+	idpf_vc_ctlq_recv;
 	idpf_vc_irq_map_unmap_config;
 	idpf_vc_one_msg_read;
 	idpf_vc_ptype_info_query;
diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index bd7cf41b43..c3a9e95388 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -9,6 +9,7 @@
 #include <rte_memzone.h>
 #include <rte_dev.h>
 #include <errno.h>
+#include <rte_alarm.h>
 
 #include "idpf_ethdev.h"
 #include "idpf_rxtx.h"
@@ -83,14 +84,51 @@ static int
 idpf_dev_link_update(struct rte_eth_dev *dev,
 		     __rte_unused int wait_to_complete)
 {
+	struct idpf_vport *vport = dev->data->dev_private;
 	struct rte_eth_link new_link;
 
 	memset(&new_link, 0, sizeof(new_link));
 
-	new_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+	switch (vport->link_speed) {
+	case RTE_ETH_SPEED_NUM_10M:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_10M;
+		break;
+	case RTE_ETH_SPEED_NUM_100M:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_100M;
+		break;
+	case RTE_ETH_SPEED_NUM_1G:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_1G;
+		break;
+	case RTE_ETH_SPEED_NUM_10G:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_10G;
+		break;
+	case RTE_ETH_SPEED_NUM_20G:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_20G;
+		break;
+	case RTE_ETH_SPEED_NUM_25G:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_25G;
+		break;
+	case RTE_ETH_SPEED_NUM_40G:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_40G;
+		break;
+	case RTE_ETH_SPEED_NUM_50G:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_50G;
+		break;
+	case RTE_ETH_SPEED_NUM_100G:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_100G;
+		break;
+	case RTE_ETH_SPEED_NUM_200G:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_200G;
+		break;
+	default:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+	}
+
 	new_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
-	new_link.link_autoneg = !(dev->data->dev_conf.link_speeds &
-				  RTE_ETH_LINK_SPEED_FIXED);
+	new_link.link_status = vport->link_up ? RTE_ETH_LINK_UP :
+		RTE_ETH_LINK_DOWN;
+	new_link.link_autoneg = (dev->data->dev_conf.link_speeds & RTE_ETH_LINK_SPEED_FIXED) ?
+				 RTE_ETH_LINK_FIXED : RTE_ETH_LINK_AUTONEG;
 
 	return rte_eth_linkstatus_set(dev, &new_link);
 }
@@ -891,6 +929,127 @@ idpf_parse_devargs(struct rte_pci_device *pci_dev, struct idpf_adapter_ext *adap
 	return ret;
 }
 
+static struct idpf_vport *
+idpf_find_vport(struct idpf_adapter_ext *adapter, uint32_t vport_id)
+{
+	struct idpf_vport *vport = NULL;
+	int i;
+
+	for (i = 0; i < adapter->cur_vport_nb; i++) {
+		vport = adapter->vports[i];
+		if (vport->vport_id != vport_id)
+			continue;
+		else
+			return vport;
+	}
+
+	return vport;
+}
+
+static void
+idpf_handle_event_msg(struct idpf_vport *vport, uint8_t *msg, uint16_t msglen)
+{
+	struct virtchnl2_event *vc_event = (struct virtchnl2_event *)msg;
+	struct rte_eth_dev *dev = (struct rte_eth_dev *)vport->dev;
+
+	if (msglen < sizeof(struct virtchnl2_event)) {
+		PMD_DRV_LOG(ERR, "Error event");
+		return;
+	}
+
+	switch (vc_event->event) {
+	case VIRTCHNL2_EVENT_LINK_CHANGE:
+		PMD_DRV_LOG(DEBUG, "VIRTCHNL2_EVENT_LINK_CHANGE");
+		vport->link_up = !!(vc_event->link_status);
+		vport->link_speed = vc_event->link_speed;
+		idpf_dev_link_update(dev, 0);
+		break;
+	default:
+		PMD_DRV_LOG(ERR, " unknown event received %u", vc_event->event);
+		break;
+	}
+}
+
+static void
+idpf_handle_virtchnl_msg(struct idpf_adapter_ext *adapter_ex)
+{
+	struct idpf_adapter *adapter = &adapter_ex->base;
+	struct idpf_dma_mem *dma_mem = NULL;
+	struct idpf_hw *hw = &adapter->hw;
+	struct virtchnl2_event *vc_event;
+	struct idpf_ctlq_msg ctlq_msg;
+	enum idpf_mbx_opc mbx_op;
+	struct idpf_vport *vport;
+	enum virtchnl_ops vc_op;
+	uint16_t pending = 1;
+	int ret;
+
+	while (pending) {
+		ret = idpf_vc_ctlq_recv(hw->arq, &pending, &ctlq_msg);
+		if (ret) {
+			PMD_DRV_LOG(INFO, "Failed to read msg from virtual channel, ret: %d", ret);
+			return;
+		}
+
+		rte_memcpy(adapter->mbx_resp, ctlq_msg.ctx.indirect.payload->va,
+			   IDPF_DFLT_MBX_BUF_SIZE);
+
+		mbx_op = rte_le_to_cpu_16(ctlq_msg.opcode);
+		vc_op = rte_le_to_cpu_32(ctlq_msg.cookie.mbx.chnl_opcode);
+		adapter->cmd_retval = rte_le_to_cpu_32(ctlq_msg.cookie.mbx.chnl_retval);
+
+		switch (mbx_op) {
+		case idpf_mbq_opc_send_msg_to_peer_pf:
+			if (vc_op == VIRTCHNL2_OP_EVENT) {
+				if (ctlq_msg.data_len < sizeof(struct virtchnl2_event)) {
+					PMD_DRV_LOG(ERR, "Error event");
+					return;
+				}
+				vc_event = (struct virtchnl2_event *)adapter->mbx_resp;
+				vport = idpf_find_vport(adapter_ex, vc_event->vport_id);
+				if (!vport) {
+					PMD_DRV_LOG(ERR, "Can't find vport.");
+					return;
+				}
+				idpf_handle_event_msg(vport, adapter->mbx_resp,
+						      ctlq_msg.data_len);
+			} else {
+				if (vc_op == adapter->pend_cmd)
+					notify_cmd(adapter, adapter->cmd_retval);
+				else
+					PMD_DRV_LOG(ERR, "command mismatch, expect %u, get %u",
+						    adapter->pend_cmd, vc_op);
+
+				PMD_DRV_LOG(DEBUG, " Virtual channel response is received,"
+					    "opcode = %d", vc_op);
+			}
+			goto post_buf;
+		default:
+			PMD_DRV_LOG(DEBUG, "Request %u is not supported yet", mbx_op);
+		}
+	}
+
+post_buf:
+	if (ctlq_msg.data_len)
+		dma_mem = ctlq_msg.ctx.indirect.payload;
+	else
+		pending = 0;
+
+	ret = idpf_vc_ctlq_post_rx_buffs(hw, hw->arq, &pending, &dma_mem);
+	if (ret && dma_mem)
+		idpf_free_dma_mem(hw, dma_mem);
+}
+
+static void
+idpf_dev_alarm_handler(void *param)
+{
+	struct idpf_adapter_ext *adapter = param;
+
+	idpf_handle_virtchnl_msg(adapter);
+
+	rte_eal_alarm_set(IDPF_ALARM_INTERVAL, idpf_dev_alarm_handler, adapter);
+}
+
 static int
 idpf_adapter_ext_init(struct rte_pci_device *pci_dev, struct idpf_adapter_ext *adapter)
 {
@@ -913,6 +1072,8 @@ idpf_adapter_ext_init(struct rte_pci_device *pci_dev, struct idpf_adapter_ext *a
 		goto err_adapter_init;
 	}
 
+	rte_eal_alarm_set(IDPF_ALARM_INTERVAL, idpf_dev_alarm_handler, adapter);
+
 	adapter->max_vport_nb = adapter->base.caps.max_vports;
 
 	adapter->vports = rte_zmalloc("vports",
@@ -996,6 +1157,7 @@ idpf_dev_vport_init(struct rte_eth_dev *dev, void *init_params)
 	vport->adapter = &adapter->base;
 	vport->sw_idx = param->idx;
 	vport->devarg_id = param->devarg_id;
+	vport->dev = dev;
 
 	memset(&create_vport_info, 0, sizeof(create_vport_info));
 	ret = idpf_vport_info_init(vport, &create_vport_info);
@@ -1065,6 +1227,7 @@ idpf_find_adapter_ext(struct rte_pci_device *pci_dev)
 static void
 idpf_adapter_ext_deinit(struct idpf_adapter_ext *adapter)
 {
+	rte_eal_alarm_cancel(idpf_dev_alarm_handler, adapter);
 	idpf_adapter_deinit(&adapter->base);
 
 	rte_free(adapter->vports);
diff --git a/drivers/net/idpf/idpf_ethdev.h b/drivers/net/idpf/idpf_ethdev.h
index 839a2bd82c..3c2c932438 100644
--- a/drivers/net/idpf/idpf_ethdev.h
+++ b/drivers/net/idpf/idpf_ethdev.h
@@ -53,6 +53,8 @@
 
 #define IDPF_ADAPTER_NAME_LEN	(PCI_PRI_STR_SIZE + 1)
 
+#define IDPF_ALARM_INTERVAL	50000 /* us */
+
 struct idpf_vport_param {
 	struct idpf_adapter_ext *adapter;
 	uint16_t devarg_id; /* arg id from user */
-- 
2.25.1


^ permalink raw reply	[flat|nested] 63+ messages in thread

* [PATCH v4 6/6] common/idpf: add xstats ops
  2023-02-07  9:56     ` [PATCH v4 0/6] add idpf pmd enhancement features Mingxia Liu
                         ` (4 preceding siblings ...)
  2023-02-07  9:57       ` [PATCH v4 5/6] common/idpf: add alarm to support handle vchnl message Mingxia Liu
@ 2023-02-07  9:57       ` Mingxia Liu
  2023-02-07 10:08       ` [PATCH v4 0/6] add idpf pmd enhancement features Mingxia Liu
  6 siblings, 0 replies; 63+ messages in thread
From: Mingxia Liu @ 2023-02-07  9:57 UTC (permalink / raw)
  To: dev, qi.z.zhang, jingjing.wu, beilei.xing; +Cc: Mingxia Liu

Add support for these device ops (a brief usage sketch follows the list):
- idpf_dev_xstats_get
- idpf_dev_xstats_get_names
- idpf_dev_xstats_reset
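
A hedged usage sketch of retrieving these from an application (port assumed
started; the allocation size comes from the first query call):

#include <stdio.h>
#include <stdlib.h>
#include <inttypes.h>
#include <rte_ethdev.h>

/* Dump every extended statistic exposed by the port. */
static void
dump_xstats(uint16_t port_id)
{
	int nb = rte_eth_xstats_get_names(port_id, NULL, 0);

	if (nb <= 0)
		return;

	struct rte_eth_xstat_name *names = calloc(nb, sizeof(*names));
	struct rte_eth_xstat *vals = calloc(nb, sizeof(*vals));

	if (names != NULL && vals != NULL &&
	    rte_eth_xstats_get_names(port_id, names, nb) == nb &&
	    rte_eth_xstats_get(port_id, vals, nb) == nb) {
		for (int i = 0; i < nb; i++)
			printf("%s: %" PRIu64 "\n", names[i].name, vals[i].value);
	}

	free(names);
	free(vals);
}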

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/net/idpf/idpf_ethdev.c | 80 ++++++++++++++++++++++++++++++++++
 1 file changed, 80 insertions(+)

diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index c3a9e95388..3f6230f5b6 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -80,6 +80,30 @@ static const uint64_t idpf_ipv6_rss = RTE_ETH_RSS_NONFRAG_IPV6_UDP |
 			  RTE_ETH_RSS_NONFRAG_IPV6_OTHER |
 			  RTE_ETH_RSS_FRAG_IPV6;
 
+struct rte_idpf_xstats_name_off {
+	char name[RTE_ETH_XSTATS_NAME_SIZE];
+	unsigned int offset;
+};
+
+static const struct rte_idpf_xstats_name_off rte_idpf_stats_strings[] = {
+	{"rx_bytes", offsetof(struct virtchnl2_vport_stats, rx_bytes)},
+	{"rx_unicast_packets", offsetof(struct virtchnl2_vport_stats, rx_unicast)},
+	{"rx_multicast_packets", offsetof(struct virtchnl2_vport_stats, rx_multicast)},
+	{"rx_broadcast_packets", offsetof(struct virtchnl2_vport_stats, rx_broadcast)},
+	{"rx_dropped_packets", offsetof(struct virtchnl2_vport_stats, rx_discards)},
+	{"rx_errors", offsetof(struct virtchnl2_vport_stats, rx_errors)},
+	{"rx_unknown_protocol_packets", offsetof(struct virtchnl2_vport_stats,
+						 rx_unknown_protocol)},
+	{"tx_bytes", offsetof(struct virtchnl2_vport_stats, tx_bytes)},
+	{"tx_unicast_packets", offsetof(struct virtchnl2_vport_stats, tx_unicast)},
+	{"tx_multicast_packets", offsetof(struct virtchnl2_vport_stats, tx_multicast)},
+	{"tx_broadcast_packets", offsetof(struct virtchnl2_vport_stats, tx_broadcast)},
+	{"tx_dropped_packets", offsetof(struct virtchnl2_vport_stats, tx_discards)},
+	{"tx_error_packets", offsetof(struct virtchnl2_vport_stats, tx_errors)}};
+
+#define IDPF_NB_XSTATS (sizeof(rte_idpf_stats_strings) / \
+		sizeof(rte_idpf_stats_strings[0]))
+
 static int
 idpf_dev_link_update(struct rte_eth_dev *dev,
 		     __rte_unused int wait_to_complete)
@@ -313,6 +337,59 @@ idpf_dev_stats_reset(struct rte_eth_dev *dev)
 	return 0;
 }
 
+static int idpf_dev_xstats_reset(struct rte_eth_dev *dev)
+{
+	idpf_dev_stats_reset(dev);
+	return 0;
+}
+
+static int idpf_dev_xstats_get(struct rte_eth_dev *dev,
+			       struct rte_eth_xstat *xstats, unsigned int n)
+{
+	struct idpf_vport *vport =
+		(struct idpf_vport *)dev->data->dev_private;
+	struct virtchnl2_vport_stats *pstats = NULL;
+	unsigned int i;
+	int ret;
+
+	if (n < IDPF_NB_XSTATS)
+		return IDPF_NB_XSTATS;
+
+	if (!xstats)
+		return 0;
+
+	ret = idpf_vc_stats_query(vport, &pstats);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "Get statistics failed");
+		return 0;
+	}
+
+	idpf_vport_stats_update(&vport->eth_stats_offset, pstats);
+
+	/* loop over xstats array and values from pstats */
+	for (i = 0; i < IDPF_NB_XSTATS; i++) {
+		xstats[i].id = i;
+		xstats[i].value = *(uint64_t *)(((char *)pstats) +
+			rte_idpf_stats_strings[i].offset);
+	}
+	return IDPF_NB_XSTATS;
+}
+
+static int idpf_dev_xstats_get_names(__rte_unused struct rte_eth_dev *dev,
+				     struct rte_eth_xstat_name *xstats_names,
+				     __rte_unused unsigned int limit)
+{
+	unsigned int i;
+
+	if (xstats_names)
+		for (i = 0; i < IDPF_NB_XSTATS; i++) {
+			snprintf(xstats_names[i].name,
+				 sizeof(xstats_names[i].name),
+				 "%s", rte_idpf_stats_strings[i].name);
+		}
+	return IDPF_NB_XSTATS;
+}
+
 static int idpf_config_rss_hf(struct idpf_vport *vport, uint64_t rss_hf)
 {
 	uint64_t hena = 0;
@@ -1122,6 +1199,9 @@ static const struct eth_dev_ops idpf_eth_dev_ops = {
 	.reta_query			= idpf_rss_reta_query,
 	.rss_hash_update		= idpf_rss_hash_update,
 	.rss_hash_conf_get		= idpf_rss_hash_conf_get,
+	.xstats_get			= idpf_dev_xstats_get,
+	.xstats_get_names		= idpf_dev_xstats_get_names,
+	.xstats_reset			= idpf_dev_xstats_reset,
 };
 
 static uint16_t
-- 
2.25.1


^ permalink raw reply	[flat|nested] 63+ messages in thread

* [PATCH v4 0/6] add idpf pmd enhancement features
  2023-02-07  9:56     ` [PATCH v4 0/6] add idpf pmd enhancement features Mingxia Liu
                         ` (5 preceding siblings ...)
  2023-02-07  9:57       ` [PATCH v4 6/6] common/idpf: add xstats ops Mingxia Liu
@ 2023-02-07 10:08       ` Mingxia Liu
  2023-02-07 10:08         ` [PATCH v5 1/6] common/idpf: add hw statistics Mingxia Liu
                           ` (5 more replies)
  6 siblings, 6 replies; 63+ messages in thread
From: Mingxia Liu @ 2023-02-07 10:08 UTC (permalink / raw)
  To: dev, qi.z.zhang, jingjing.wu, beilei.xing; +Cc: Mingxia Liu

This patchset adds several enhancement features to the idpf PMD.
Including the following:
- add hw statistics, support stats/xstats ops
- add rss configure/show ops
- add event handle: link status
- add scattered data path for single queue

This patchset is based on the refactor idpf PMD code:
http://patches.dpdk.org/project/dpdk/patch/20230207084549.2225214-2-wenjun1.wu@intel.com/

v2 changes:
 - Fix rss lut config issue.
v3 changes:
 - rebase to the new baseline.
v4 changes:
 - rebase to the new baseline.
 - optimize some code
 - give a "not supported" hint when the user wants to configure the RSS hash type
 - if stats reset fails at initialization time, do not roll back; just
   print an error message.
v5 changes:
 - fix some spelling errors

Mingxia Liu (6):
  common/idpf: add hw statistics
  common/idpf: add RSS set/get ops
  common/idpf: support single q scatter RX datapath
  common/idpf: add rss_offload hash in singleq rx
  common/idpf: add alarm to support handle vchnl message
  common/idpf: add xstats ops

 drivers/common/idpf/idpf_common_device.c   |  17 +
 drivers/common/idpf/idpf_common_device.h   |  10 +
 drivers/common/idpf/idpf_common_rxtx.c     | 151 +++++
 drivers/common/idpf/idpf_common_rxtx.h     |   3 +
 drivers/common/idpf/idpf_common_virtchnl.c | 171 +++++-
 drivers/common/idpf/idpf_common_virtchnl.h |  15 +
 drivers/common/idpf/version.map            |   8 +
 drivers/net/idpf/idpf_ethdev.c             | 606 ++++++++++++++++++++-
 drivers/net/idpf/idpf_ethdev.h             |   5 +-
 drivers/net/idpf/idpf_rxtx.c               |  28 +
 drivers/net/idpf/idpf_rxtx.h               |   2 +
 11 files changed, 996 insertions(+), 20 deletions(-)

-- 
2.25.1


^ permalink raw reply	[flat|nested] 63+ messages in thread

* [PATCH v5 1/6] common/idpf: add hw statistics
  2023-02-07 10:08       ` [PATCH v4 0/6] add idpf pmd enhancement features Mingxia Liu
@ 2023-02-07 10:08         ` Mingxia Liu
  2023-02-07 10:16           ` [PATCH v6 0/6] add idpf pmd enhancement features Mingxia Liu
  2023-02-07 10:08         ` [PATCH v5 2/6] common/idpf: add RSS set/get ops Mingxia Liu
                           ` (4 subsequent siblings)
  5 siblings, 1 reply; 63+ messages in thread
From: Mingxia Liu @ 2023-02-07 10:08 UTC (permalink / raw)
  To: dev, qi.z.zhang, jingjing.wu, beilei.xing; +Cc: Mingxia Liu

This patch adds hardware packet/byte statistics.
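
For illustration, a minimal sketch of reading the new counters from an
application once the port is started (these are the counters backed by
VIRTCHNL2_OP_GET_STATS in this patch):

#include <stdio.h>
#include <inttypes.h>
#include <rte_ethdev.h>

/* Print the basic port counters filled in by the new stats_get op. */
static void
show_port_stats(uint16_t port_id)
{
	struct rte_eth_stats stats;

	if (rte_eth_stats_get(port_id, &stats) != 0)
		return;

	printf("port %u: rx %" PRIu64 " pkts / %" PRIu64 " bytes, "
	       "tx %" PRIu64 " pkts / %" PRIu64 " bytes, rx_nombuf %" PRIu64 "\n",
	       port_id, stats.ipackets, stats.ibytes,
	       stats.opackets, stats.obytes, stats.rx_nombuf);
}

rte_eth_stats_reset() maps to the new stats_reset op and re-bases the offsets
kept in eth_stats_offset.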

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/common/idpf/idpf_common_device.c   | 17 +++++
 drivers/common/idpf/idpf_common_device.h   |  4 +
 drivers/common/idpf/idpf_common_virtchnl.c | 27 +++++++
 drivers/common/idpf/idpf_common_virtchnl.h |  3 +
 drivers/common/idpf/version.map            |  2 +
 drivers/net/idpf/idpf_ethdev.c             | 86 ++++++++++++++++++++++
 6 files changed, 139 insertions(+)

diff --git a/drivers/common/idpf/idpf_common_device.c b/drivers/common/idpf/idpf_common_device.c
index 48b3e3c0dd..5475a3e52c 100644
--- a/drivers/common/idpf/idpf_common_device.c
+++ b/drivers/common/idpf/idpf_common_device.c
@@ -652,4 +652,21 @@ idpf_vport_info_init(struct idpf_vport *vport,
 	return 0;
 }
 
+void
+idpf_vport_stats_update(struct virtchnl2_vport_stats *oes, struct virtchnl2_vport_stats *nes)
+{
+	nes->rx_bytes = nes->rx_bytes - oes->rx_bytes;
+	nes->rx_unicast = nes->rx_unicast - oes->rx_unicast;
+	nes->rx_multicast = nes->rx_multicast - oes->rx_multicast;
+	nes->rx_broadcast = nes->rx_broadcast - oes->rx_broadcast;
+	nes->rx_errors = nes->rx_errors - oes->rx_errors;
+	nes->rx_discards = nes->rx_discards - oes->rx_discards;
+	nes->tx_bytes = nes->tx_bytes - oes->tx_bytes;
+	nes->tx_unicast = nes->tx_unicast - oes->tx_unicast;
+	nes->tx_multicast = nes->tx_multicast - oes->tx_multicast;
+	nes->tx_broadcast = nes->tx_broadcast - oes->tx_broadcast;
+	nes->tx_errors = nes->tx_errors - oes->tx_errors;
+	nes->tx_discards = nes->tx_discards - oes->tx_discards;
+}
+
 RTE_LOG_REGISTER_SUFFIX(idpf_common_logtype, common, NOTICE);
diff --git a/drivers/common/idpf/idpf_common_device.h b/drivers/common/idpf/idpf_common_device.h
index 545117df79..1d8e7d405a 100644
--- a/drivers/common/idpf/idpf_common_device.h
+++ b/drivers/common/idpf/idpf_common_device.h
@@ -115,6 +115,8 @@ struct idpf_vport {
 	bool tx_vec_allowed;
 	bool rx_use_avx512;
 	bool tx_use_avx512;
+
+	struct virtchnl2_vport_stats eth_stats_offset;
 };
 
 /* Message type read in virtual channel from PF */
@@ -191,5 +193,7 @@ int idpf_vport_irq_unmap_config(struct idpf_vport *vport, uint16_t nb_rx_queues)
 __rte_internal
 int idpf_vport_info_init(struct idpf_vport *vport,
 			 struct virtchnl2_create_vport *vport_info);
+__rte_internal
+void idpf_vport_stats_update(struct virtchnl2_vport_stats *oes, struct virtchnl2_vport_stats *nes);
 
 #endif /* _IDPF_COMMON_DEVICE_H_ */
diff --git a/drivers/common/idpf/idpf_common_virtchnl.c b/drivers/common/idpf/idpf_common_virtchnl.c
index 31fadefbd3..40cff34c09 100644
--- a/drivers/common/idpf/idpf_common_virtchnl.c
+++ b/drivers/common/idpf/idpf_common_virtchnl.c
@@ -217,6 +217,7 @@ idpf_vc_cmd_execute(struct idpf_adapter *adapter, struct idpf_cmd_info *args)
 	case VIRTCHNL2_OP_UNMAP_QUEUE_VECTOR:
 	case VIRTCHNL2_OP_ALLOC_VECTORS:
 	case VIRTCHNL2_OP_DEALLOC_VECTORS:
+	case VIRTCHNL2_OP_GET_STATS:
 		/* for init virtchnl ops, need to poll the response */
 		err = idpf_vc_one_msg_read(adapter, args->ops, args->out_size, args->out_buffer);
 		clear_cmd(adapter);
@@ -806,6 +807,32 @@ idpf_vc_ptype_info_query(struct idpf_adapter *adapter)
 	return err;
 }
 
+int
+idpf_vc_stats_query(struct idpf_vport *vport,
+		struct virtchnl2_vport_stats **pstats)
+{
+	struct idpf_adapter *adapter = vport->adapter;
+	struct virtchnl2_vport_stats vport_stats;
+	struct idpf_cmd_info args;
+	int err;
+
+	vport_stats.vport_id = vport->vport_id;
+	args.ops = VIRTCHNL2_OP_GET_STATS;
+	args.in_args = (u8 *)&vport_stats;
+	args.in_args_size = sizeof(vport_stats);
+	args.out_buffer = adapter->mbx_resp;
+	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
+
+	err = idpf_vc_cmd_execute(adapter, &args);
+	if (err) {
+		DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_GET_STATS");
+		*pstats = NULL;
+		return err;
+	}
+	*pstats = (struct virtchnl2_vport_stats *)args.out_buffer;
+	return 0;
+}
+
 #define IDPF_RX_BUF_STRIDE		64
 int
 idpf_vc_rxq_config(struct idpf_vport *vport, struct idpf_rx_queue *rxq)
diff --git a/drivers/common/idpf/idpf_common_virtchnl.h b/drivers/common/idpf/idpf_common_virtchnl.h
index c105f02836..6b94fd5b8f 100644
--- a/drivers/common/idpf/idpf_common_virtchnl.h
+++ b/drivers/common/idpf/idpf_common_virtchnl.h
@@ -49,4 +49,7 @@ __rte_internal
 int idpf_vc_rxq_config(struct idpf_vport *vport, struct idpf_rx_queue *rxq);
 __rte_internal
 int idpf_vc_txq_config(struct idpf_vport *vport, struct idpf_tx_queue *txq);
+__rte_internal
+int idpf_vc_stats_query(struct idpf_vport *vport,
+			struct virtchnl2_vport_stats **pstats);
 #endif /* _IDPF_COMMON_VIRTCHNL_H_ */
diff --git a/drivers/common/idpf/version.map b/drivers/common/idpf/version.map
index 8b33130bd6..e6a02828ba 100644
--- a/drivers/common/idpf/version.map
+++ b/drivers/common/idpf/version.map
@@ -46,6 +46,7 @@ INTERNAL {
 	idpf_vc_rss_key_set;
 	idpf_vc_rss_lut_set;
 	idpf_vc_rxq_config;
+	idpf_vc_stats_query;
 	idpf_vc_txq_config;
 	idpf_vc_vectors_alloc;
 	idpf_vc_vectors_dealloc;
@@ -59,6 +60,7 @@ INTERNAL {
 	idpf_vport_irq_map_config;
 	idpf_vport_irq_unmap_config;
 	idpf_vport_rss_config;
+	idpf_vport_stats_update;
 
 	local: *;
 };
diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index 33f5e90743..02ddb0330a 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -140,6 +140,87 @@ idpf_dev_supported_ptypes_get(struct rte_eth_dev *dev __rte_unused)
 	return ptypes;
 }
 
+static uint64_t
+idpf_get_mbuf_alloc_failed_stats(struct rte_eth_dev *dev)
+{
+	uint64_t mbuf_alloc_failed = 0;
+	struct idpf_rx_queue *rxq;
+	int i = 0;
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		rxq = dev->data->rx_queues[i];
+		mbuf_alloc_failed += __atomic_load_n(&rxq->rx_stats.mbuf_alloc_failed,
+						     __ATOMIC_RELAXED);
+	}
+
+	return mbuf_alloc_failed;
+}
+
+static int
+idpf_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
+{
+	struct idpf_vport *vport =
+		(struct idpf_vport *)dev->data->dev_private;
+	struct virtchnl2_vport_stats *pstats = NULL;
+	int ret;
+
+	ret = idpf_vc_stats_query(vport, &pstats);
+	if (ret == 0) {
+		uint8_t crc_stats_len = (dev->data->dev_conf.rxmode.offloads &
+					 RTE_ETH_RX_OFFLOAD_KEEP_CRC) ? 0 :
+					 RTE_ETHER_CRC_LEN;
+
+		idpf_vport_stats_update(&vport->eth_stats_offset, pstats);
+		stats->ipackets = pstats->rx_unicast + pstats->rx_multicast +
+				pstats->rx_broadcast - pstats->rx_discards;
+		stats->opackets = pstats->tx_broadcast + pstats->tx_multicast +
+						pstats->tx_unicast;
+		stats->imissed = pstats->rx_discards;
+		stats->oerrors = pstats->tx_errors + pstats->tx_discards;
+		stats->ibytes = pstats->rx_bytes;
+		stats->ibytes -= stats->ipackets * crc_stats_len;
+		stats->obytes = pstats->tx_bytes;
+
+		dev->data->rx_mbuf_alloc_failed = idpf_get_mbuf_alloc_failed_stats(dev);
+		stats->rx_nombuf = dev->data->rx_mbuf_alloc_failed;
+	} else {
+		PMD_DRV_LOG(ERR, "Get statistics failed");
+	}
+	return ret;
+}
+
+static void
+idpf_reset_mbuf_alloc_failed_stats(struct rte_eth_dev *dev)
+{
+	struct idpf_rx_queue *rxq;
+	int i;
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		rxq = dev->data->rx_queues[i];
+		__atomic_store_n(&rxq->rx_stats.mbuf_alloc_failed, 0, __ATOMIC_RELAXED);
+	}
+}
+
+static int
+idpf_dev_stats_reset(struct rte_eth_dev *dev)
+{
+	struct idpf_vport *vport =
+		(struct idpf_vport *)dev->data->dev_private;
+	struct virtchnl2_vport_stats *pstats = NULL;
+	int ret;
+
+	ret = idpf_vc_stats_query(vport, &pstats);
+	if (ret != 0)
+		return ret;
+
+	/* set stats offset base on current values */
+	vport->eth_stats_offset = *pstats;
+
+	idpf_reset_mbuf_alloc_failed_stats(dev);
+
+	return 0;
+}
+
 static int
 idpf_init_rss(struct idpf_vport *vport)
 {
@@ -327,6 +408,9 @@ idpf_dev_start(struct rte_eth_dev *dev)
 		goto err_vport;
 	}
 
+	if (idpf_dev_stats_reset(dev))
+		PMD_DRV_LOG(ERR, "Failed to reset stats");
+
 	vport->stopped = 0;
 
 	return 0;
@@ -606,6 +690,8 @@ static const struct eth_dev_ops idpf_eth_dev_ops = {
 	.tx_queue_release		= idpf_dev_tx_queue_release,
 	.mtu_set			= idpf_dev_mtu_set,
 	.dev_supported_ptypes_get	= idpf_dev_supported_ptypes_get,
+	.stats_get			= idpf_dev_stats_get,
+	.stats_reset			= idpf_dev_stats_reset,
 };
 
 static uint16_t
-- 
2.25.1


^ permalink raw reply	[flat|nested] 63+ messages in thread

* [PATCH v5 2/6] common/idpf: add RSS set/get ops
  2023-02-07 10:08       ` [PATCH v4 0/6] add idpf pmd enhancement features Mingxia Liu
  2023-02-07 10:08         ` [PATCH v5 1/6] common/idpf: add hw statistics Mingxia Liu
@ 2023-02-07 10:08         ` Mingxia Liu
  2023-02-07 10:08         ` [PATCH v5 3/6] common/idpf: support single q scatter RX datapath Mingxia Liu
                           ` (3 subsequent siblings)
  5 siblings, 0 replies; 63+ messages in thread
From: Mingxia Liu @ 2023-02-07 10:08 UTC (permalink / raw)
  To: dev, qi.z.zhang, jingjing.wu, beilei.xing; +Cc: Mingxia Liu

Add support for these device ops (a usage sketch follows the list):
- rss_reta_update
- rss_reta_query
- rss_hash_update
- rss_hash_conf_get
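
As a usage sketch (not part of the patch), an application could spread the
redirection table over its Rx queues as below; reta_size is taken from
rte_eth_dev_info_get(), which this patch wires to the vport RSS LUT size:

#include <string.h>
#include <rte_ethdev.h>

/* Spread the RETA entries round-robin over the first nb_queues Rx queues. */
static int
spread_reta(uint16_t port_id, uint16_t nb_queues)
{
	struct rte_eth_dev_info info;
	int ret = rte_eth_dev_info_get(port_id, &info);

	if (ret != 0 || info.reta_size == 0 || nb_queues == 0)
		return ret != 0 ? ret : -1;

	uint16_t groups = (info.reta_size + RTE_ETH_RETA_GROUP_SIZE - 1) /
			  RTE_ETH_RETA_GROUP_SIZE;
	struct rte_eth_rss_reta_entry64 reta[groups];

	memset(reta, 0, sizeof(reta));
	for (uint16_t i = 0; i < info.reta_size; i++) {
		reta[i / RTE_ETH_RETA_GROUP_SIZE].mask |=
			1ULL << (i % RTE_ETH_RETA_GROUP_SIZE);
		reta[i / RTE_ETH_RETA_GROUP_SIZE].reta[i % RTE_ETH_RETA_GROUP_SIZE] =
			i % nb_queues;
	}

	return rte_eth_dev_rss_reta_update(port_id, reta, info.reta_size);
}

rte_eth_dev_rss_reta_query() and rte_eth_dev_rss_hash_conf_get() follow the
same pattern for reading the table, key and hash fields back.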

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/common/idpf/idpf_common_device.h   |   1 +
 drivers/common/idpf/idpf_common_virtchnl.c | 119 +++++++++
 drivers/common/idpf/idpf_common_virtchnl.h |   6 +
 drivers/common/idpf/version.map            |   3 +
 drivers/net/idpf/idpf_ethdev.c             | 268 +++++++++++++++++++++
 drivers/net/idpf/idpf_ethdev.h             |   3 +-
 6 files changed, 399 insertions(+), 1 deletion(-)

diff --git a/drivers/common/idpf/idpf_common_device.h b/drivers/common/idpf/idpf_common_device.h
index 1d8e7d405a..7abc4d2a3a 100644
--- a/drivers/common/idpf/idpf_common_device.h
+++ b/drivers/common/idpf/idpf_common_device.h
@@ -98,6 +98,7 @@ struct idpf_vport {
 	uint32_t *rss_lut;
 	uint8_t *rss_key;
 	uint64_t rss_hf;
+	uint64_t last_general_rss_hf;
 
 	/* MSIX info*/
 	struct virtchnl2_queue_vector *qv_map; /* queue vector mapping */
diff --git a/drivers/common/idpf/idpf_common_virtchnl.c b/drivers/common/idpf/idpf_common_virtchnl.c
index 40cff34c09..10cfa33704 100644
--- a/drivers/common/idpf/idpf_common_virtchnl.c
+++ b/drivers/common/idpf/idpf_common_virtchnl.c
@@ -218,6 +218,9 @@ idpf_vc_cmd_execute(struct idpf_adapter *adapter, struct idpf_cmd_info *args)
 	case VIRTCHNL2_OP_ALLOC_VECTORS:
 	case VIRTCHNL2_OP_DEALLOC_VECTORS:
 	case VIRTCHNL2_OP_GET_STATS:
+	case VIRTCHNL2_OP_GET_RSS_KEY:
+	case VIRTCHNL2_OP_GET_RSS_HASH:
+	case VIRTCHNL2_OP_GET_RSS_LUT:
 		/* for init virtchnl ops, need to poll the response */
 		err = idpf_vc_one_msg_read(adapter, args->ops, args->out_size, args->out_buffer);
 		clear_cmd(adapter);
@@ -448,6 +451,48 @@ idpf_vc_rss_key_set(struct idpf_vport *vport)
 	return err;
 }
 
+int idpf_vc_rss_key_get(struct idpf_vport *vport)
+{
+	struct idpf_adapter *adapter = vport->adapter;
+	struct virtchnl2_rss_key *rss_key_ret;
+	struct virtchnl2_rss_key rss_key;
+	struct idpf_cmd_info args;
+	int err;
+
+	memset(&rss_key, 0, sizeof(rss_key));
+	rss_key.vport_id = vport->vport_id;
+
+	memset(&args, 0, sizeof(args));
+	args.ops = VIRTCHNL2_OP_GET_RSS_KEY;
+	args.in_args = (uint8_t *)&rss_key;
+	args.in_args_size = sizeof(rss_key);
+	args.out_buffer = adapter->mbx_resp;
+	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
+
+	err = idpf_vc_cmd_execute(adapter, &args);
+
+	if (!err) {
+		rss_key_ret = (struct virtchnl2_rss_key *)args.out_buffer;
+		if (rss_key_ret->key_len != vport->rss_key_size) {
+			rte_free(vport->rss_key);
+			vport->rss_key = NULL;
+			vport->rss_key_size = RTE_MIN(IDPF_RSS_KEY_LEN,
+						      rss_key_ret->key_len);
+			vport->rss_key = rte_zmalloc("rss_key", vport->rss_key_size, 0);
+			if (!vport->rss_key) {
+				vport->rss_key_size = 0;
+				DRV_LOG(ERR, "Failed to allocate RSS key");
+				return -ENOMEM;
+			}
+		}
+		rte_memcpy(vport->rss_key, rss_key_ret->key, vport->rss_key_size);
+	} else {
+		DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_GET_RSS_KEY");
+	}
+
+	return err;
+}
+
 int
 idpf_vc_rss_lut_set(struct idpf_vport *vport)
 {
@@ -482,6 +527,80 @@ idpf_vc_rss_lut_set(struct idpf_vport *vport)
 	return err;
 }
 
+int
+idpf_vc_rss_lut_get(struct idpf_vport *vport)
+{
+	struct idpf_adapter *adapter = vport->adapter;
+	struct virtchnl2_rss_lut *rss_lut_ret;
+	struct virtchnl2_rss_lut rss_lut;
+	struct idpf_cmd_info args;
+	int err;
+
+	memset(&rss_lut, 0, sizeof(rss_lut));
+	rss_lut.vport_id = vport->vport_id;
+
+	memset(&args, 0, sizeof(args));
+	args.ops = VIRTCHNL2_OP_GET_RSS_LUT;
+	args.in_args = (uint8_t *)&rss_lut;
+	args.in_args_size = sizeof(rss_lut);
+	args.out_buffer = adapter->mbx_resp;
+	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
+
+	err = idpf_vc_cmd_execute(adapter, &args);
+
+	if (!err) {
+		rss_lut_ret = (struct virtchnl2_rss_lut *)args.out_buffer;
+		if (rss_lut_ret->lut_entries != vport->rss_lut_size) {
+			rte_free(vport->rss_lut);
+			vport->rss_lut = NULL;
+			vport->rss_lut = rte_zmalloc("rss_lut",
+				     sizeof(uint32_t) * rss_lut_ret->lut_entries, 0);
+			if (vport->rss_lut == NULL) {
+				DRV_LOG(ERR, "Failed to allocate RSS lut");
+				return -ENOMEM;
+			}
+		}
+		rte_memcpy(vport->rss_lut, rss_lut_ret->lut, rss_lut_ret->lut_entries);
+		vport->rss_lut_size = rss_lut_ret->lut_entries;
+	} else {
+		DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_GET_RSS_LUT");
+	}
+
+	return err;
+}
+
+int
+idpf_vc_rss_hash_get(struct idpf_vport *vport)
+{
+	struct idpf_adapter *adapter = vport->adapter;
+	struct virtchnl2_rss_hash *rss_hash_ret;
+	struct virtchnl2_rss_hash rss_hash;
+	struct idpf_cmd_info args;
+	int err;
+
+	memset(&rss_hash, 0, sizeof(rss_hash));
+	rss_hash.ptype_groups = vport->rss_hf;
+	rss_hash.vport_id = vport->vport_id;
+
+	memset(&args, 0, sizeof(args));
+	args.ops = VIRTCHNL2_OP_GET_RSS_HASH;
+	args.in_args = (uint8_t *)&rss_hash;
+	args.in_args_size = sizeof(rss_hash);
+	args.out_buffer = adapter->mbx_resp;
+	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
+
+	err = idpf_vc_cmd_execute(adapter, &args);
+
+	if (!err) {
+		rss_hash_ret = (struct virtchnl2_rss_hash *)args.out_buffer;
+		vport->rss_hf = rss_hash_ret->ptype_groups;
+	} else {
+		DRV_LOG(ERR, "Failed to execute command of OP_GET_RSS_HASH");
+	}
+
+	return err;
+}
+
 int
 idpf_vc_rss_hash_set(struct idpf_vport *vport)
 {
diff --git a/drivers/common/idpf/idpf_common_virtchnl.h b/drivers/common/idpf/idpf_common_virtchnl.h
index 6b94fd5b8f..205d1a932d 100644
--- a/drivers/common/idpf/idpf_common_virtchnl.h
+++ b/drivers/common/idpf/idpf_common_virtchnl.h
@@ -52,4 +52,10 @@ int idpf_vc_txq_config(struct idpf_vport *vport, struct idpf_tx_queue *txq);
 __rte_internal
 int idpf_vc_stats_query(struct idpf_vport *vport,
 			struct virtchnl2_vport_stats **pstats);
+__rte_internal
+int idpf_vc_rss_key_get(struct idpf_vport *vport);
+__rte_internal
+int idpf_vc_rss_lut_get(struct idpf_vport *vport);
+__rte_internal
+int idpf_vc_rss_hash_get(struct idpf_vport *vport);
 #endif /* _IDPF_COMMON_VIRTCHNL_H_ */
diff --git a/drivers/common/idpf/version.map b/drivers/common/idpf/version.map
index e6a02828ba..f6c92e7e57 100644
--- a/drivers/common/idpf/version.map
+++ b/drivers/common/idpf/version.map
@@ -42,8 +42,11 @@ INTERNAL {
 	idpf_vc_ptype_info_query;
 	idpf_vc_queue_switch;
 	idpf_vc_queues_ena_dis;
+	idpf_vc_rss_hash_get;
 	idpf_vc_rss_hash_set;
+	idpf_vc_rss_key_get;
 	idpf_vc_rss_key_set;
+	idpf_vc_rss_lut_get;
 	idpf_vc_rss_lut_set;
 	idpf_vc_rxq_config;
 	idpf_vc_stats_query;
diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index 02ddb0330a..7262109d0a 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -29,6 +29,56 @@ static const char * const idpf_valid_args[] = {
 	NULL
 };
 
+static const uint64_t idpf_map_hena_rss[] = {
+	[IDPF_HASH_NONF_UNICAST_IPV4_UDP] =
+			RTE_ETH_RSS_NONFRAG_IPV4_UDP,
+	[IDPF_HASH_NONF_MULTICAST_IPV4_UDP] =
+			RTE_ETH_RSS_NONFRAG_IPV4_UDP,
+	[IDPF_HASH_NONF_IPV4_UDP] =
+			RTE_ETH_RSS_NONFRAG_IPV4_UDP,
+	[IDPF_HASH_NONF_IPV4_TCP_SYN_NO_ACK] =
+			RTE_ETH_RSS_NONFRAG_IPV4_TCP,
+	[IDPF_HASH_NONF_IPV4_TCP] =
+			RTE_ETH_RSS_NONFRAG_IPV4_TCP,
+	[IDPF_HASH_NONF_IPV4_SCTP] =
+			RTE_ETH_RSS_NONFRAG_IPV4_SCTP,
+	[IDPF_HASH_NONF_IPV4_OTHER] =
+			RTE_ETH_RSS_NONFRAG_IPV4_OTHER,
+	[IDPF_HASH_FRAG_IPV4] = RTE_ETH_RSS_FRAG_IPV4,
+
+	/* IPv6 */
+	[IDPF_HASH_NONF_UNICAST_IPV6_UDP] =
+			RTE_ETH_RSS_NONFRAG_IPV6_UDP,
+	[IDPF_HASH_NONF_MULTICAST_IPV6_UDP] =
+			RTE_ETH_RSS_NONFRAG_IPV6_UDP,
+	[IDPF_HASH_NONF_IPV6_UDP] =
+			RTE_ETH_RSS_NONFRAG_IPV6_UDP,
+	[IDPF_HASH_NONF_IPV6_TCP_SYN_NO_ACK] =
+			RTE_ETH_RSS_NONFRAG_IPV6_TCP,
+	[IDPF_HASH_NONF_IPV6_TCP] =
+			RTE_ETH_RSS_NONFRAG_IPV6_TCP,
+	[IDPF_HASH_NONF_IPV6_SCTP] =
+			RTE_ETH_RSS_NONFRAG_IPV6_SCTP,
+	[IDPF_HASH_NONF_IPV6_OTHER] =
+			RTE_ETH_RSS_NONFRAG_IPV6_OTHER,
+	[IDPF_HASH_FRAG_IPV6] = RTE_ETH_RSS_FRAG_IPV6,
+
+	/* L2 Payload */
+	[IDPF_HASH_L2_PAYLOAD] = RTE_ETH_RSS_L2_PAYLOAD
+};
+
+static const uint64_t idpf_ipv4_rss = RTE_ETH_RSS_NONFRAG_IPV4_UDP |
+			  RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+			  RTE_ETH_RSS_NONFRAG_IPV4_SCTP |
+			  RTE_ETH_RSS_NONFRAG_IPV4_OTHER |
+			  RTE_ETH_RSS_FRAG_IPV4;
+
+static const uint64_t idpf_ipv6_rss = RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+			  RTE_ETH_RSS_NONFRAG_IPV6_TCP |
+			  RTE_ETH_RSS_NONFRAG_IPV6_SCTP |
+			  RTE_ETH_RSS_NONFRAG_IPV6_OTHER |
+			  RTE_ETH_RSS_FRAG_IPV6;
+
 static int
 idpf_dev_link_update(struct rte_eth_dev *dev,
 		     __rte_unused int wait_to_complete)
@@ -59,6 +109,9 @@ idpf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->max_mtu = vport->max_mtu;
 	dev_info->min_mtu = RTE_ETHER_MIN_MTU;
 
+	dev_info->hash_key_size = vport->rss_key_size;
+	dev_info->reta_size = vport->rss_lut_size;
+
 	dev_info->flow_type_rss_offloads = IDPF_RSS_OFFLOAD_ALL;
 
 	dev_info->rx_offload_capa =
@@ -221,6 +274,36 @@ idpf_dev_stats_reset(struct rte_eth_dev *dev)
 	return 0;
 }
 
+static int idpf_config_rss_hf(struct idpf_vport *vport, uint64_t rss_hf)
+{
+	uint64_t hena = 0;
+	uint16_t i;
+
+	/**
+	 * RTE_ETH_RSS_IPV4 and RTE_ETH_RSS_IPV6 can be considered as 2
+	 * generalizations of all other IPv4 and IPv6 RSS types.
+	 */
+	if (rss_hf & RTE_ETH_RSS_IPV4)
+		rss_hf |= idpf_ipv4_rss;
+
+	if (rss_hf & RTE_ETH_RSS_IPV6)
+		rss_hf |= idpf_ipv6_rss;
+
+	for (i = 0; i < RTE_DIM(idpf_map_hena_rss); i++) {
+		if (idpf_map_hena_rss[i] & rss_hf)
+			hena |= BIT_ULL(i);
+	}
+
+	/**
+	 * At present, cp doesn't process the virtual channel msg of rss_hf configuration,
+	 * tips are given below.
+	 */
+	if (hena != vport->rss_hf)
+		PMD_DRV_LOG(WARNING, "Updating RSS Hash Function is not supported at present.");
+
+	return 0;
+}
+
 static int
 idpf_init_rss(struct idpf_vport *vport)
 {
@@ -257,6 +340,187 @@ idpf_init_rss(struct idpf_vport *vport)
 	return ret;
 }
 
+static int
+idpf_rss_reta_update(struct rte_eth_dev *dev,
+		     struct rte_eth_rss_reta_entry64 *reta_conf,
+		     uint16_t reta_size)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_adapter *adapter = vport->adapter;
+	uint16_t idx, shift;
+	int ret = 0;
+	uint16_t i;
+
+	if (adapter->caps.rss_caps == 0 || dev->data->nb_rx_queues == 0) {
+		PMD_DRV_LOG(DEBUG, "RSS is not supported");
+		return -ENOTSUP;
+	}
+
+	if (reta_size != vport->rss_lut_size) {
+		PMD_DRV_LOG(ERR, "The size of hash lookup table configured "
+				 "(%d) doesn't match the number of hardware can "
+				 "support (%d)",
+			    reta_size, vport->rss_lut_size);
+		return -EINVAL;
+	}
+
+	for (i = 0; i < reta_size; i++) {
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
+		if (reta_conf[idx].mask & (1ULL << shift))
+			vport->rss_lut[i] = reta_conf[idx].reta[shift];
+	}
+
+	/* send virtchnl ops to configure RSS */
+	ret = idpf_vc_rss_lut_set(vport);
+	if (ret)
+		PMD_INIT_LOG(ERR, "Failed to configure RSS lut");
+
+	return ret;
+}
+
+static int
+idpf_rss_reta_query(struct rte_eth_dev *dev,
+		    struct rte_eth_rss_reta_entry64 *reta_conf,
+		    uint16_t reta_size)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_adapter *adapter = vport->adapter;
+	uint16_t idx, shift;
+	int ret = 0;
+	uint16_t i;
+
+	if (adapter->caps.rss_caps == 0 || dev->data->nb_rx_queues == 0) {
+		PMD_DRV_LOG(DEBUG, "RSS is not supported");
+		return -ENOTSUP;
+	}
+
+	if (reta_size != vport->rss_lut_size) {
+		PMD_DRV_LOG(ERR, "The size of hash lookup table configured "
+			"(%d) doesn't match the number of hardware can "
+			"support (%d)", reta_size, vport->rss_lut_size);
+		return -EINVAL;
+	}
+
+	ret = idpf_vc_rss_lut_get(vport);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "Failed to get RSS LUT");
+		return ret;
+	}
+
+	for (i = 0; i < reta_size; i++) {
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
+		if (reta_conf[idx].mask & (1ULL << shift))
+			reta_conf[idx].reta[shift] = vport->rss_lut[i];
+	}
+
+	return 0;
+}
+
+static int
+idpf_rss_hash_update(struct rte_eth_dev *dev,
+		     struct rte_eth_rss_conf *rss_conf)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_adapter *adapter = vport->adapter;
+	int ret = 0;
+
+	if (adapter->caps.rss_caps == 0 || dev->data->nb_rx_queues == 0) {
+		PMD_DRV_LOG(DEBUG, "RSS is not supported");
+		return -ENOTSUP;
+	}
+
+	if (!rss_conf->rss_key || rss_conf->rss_key_len == 0) {
+		PMD_DRV_LOG(DEBUG, "No key to be configured");
+		goto skip_rss_key;
+	} else if (rss_conf->rss_key_len != vport->rss_key_size) {
+		PMD_DRV_LOG(ERR, "The size of hash key configured "
+				 "(%d) doesn't match the size of hardware can "
+				 "support (%d)",
+			    rss_conf->rss_key_len,
+			    vport->rss_key_size);
+		return -EINVAL;
+	}
+
+	rte_memcpy(vport->rss_key, rss_conf->rss_key,
+		   vport->rss_key_size);
+	ret = idpf_vc_rss_key_set(vport);
+	if (ret != 0) {
+		PMD_INIT_LOG(ERR, "Failed to configure RSS key");
+		return ret;
+	}
+
+skip_rss_key:
+	ret = idpf_config_rss_hf(vport, rss_conf->rss_hf);
+	if (ret != 0) {
+		PMD_INIT_LOG(ERR, "Failed to configure RSS hash");
+		return ret;
+	}
+
+	return 0;
+}
+
+static uint64_t
+idpf_map_general_rss_hf(uint64_t config_rss_hf, uint64_t last_general_rss_hf)
+{
+	uint64_t valid_rss_hf = 0;
+	uint16_t i;
+
+	for (i = 0; i < RTE_DIM(idpf_map_hena_rss); i++) {
+		uint64_t bit = BIT_ULL(i);
+
+		if (bit & config_rss_hf)
+			valid_rss_hf |= idpf_map_hena_rss[i];
+	}
+
+	if (valid_rss_hf & idpf_ipv4_rss)
+		valid_rss_hf |= last_general_rss_hf & RTE_ETH_RSS_IPV4;
+
+	if (valid_rss_hf & idpf_ipv6_rss)
+		valid_rss_hf |= last_general_rss_hf & RTE_ETH_RSS_IPV6;
+
+	return valid_rss_hf;
+}
+
+static int
+idpf_rss_hash_conf_get(struct rte_eth_dev *dev,
+		       struct rte_eth_rss_conf *rss_conf)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_adapter *adapter = vport->adapter;
+	int ret = 0;
+
+	if (adapter->caps.rss_caps == 0 || dev->data->nb_rx_queues == 0) {
+		PMD_DRV_LOG(DEBUG, "RSS is not supported");
+		return -ENOTSUP;
+	}
+
+	ret = idpf_vc_rss_hash_get(vport);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "Failed to get RSS hf");
+		return ret;
+	}
+
+	rss_conf->rss_hf = idpf_map_general_rss_hf(vport->rss_hf, vport->last_general_rss_hf);
+
+	if (!rss_conf->rss_key)
+		return 0;
+
+	ret = idpf_vc_rss_key_get(vport);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "Failed to get RSS key");
+		return ret;
+	}
+
+	if (rss_conf->rss_key_len > vport->rss_key_size)
+		rss_conf->rss_key_len = vport->rss_key_size;
+
+	rte_memcpy(rss_conf->rss_key, vport->rss_key, rss_conf->rss_key_len);
+
+	return 0;
+}
+
 static int
 idpf_dev_configure(struct rte_eth_dev *dev)
 {
@@ -692,6 +956,10 @@ static const struct eth_dev_ops idpf_eth_dev_ops = {
 	.dev_supported_ptypes_get	= idpf_dev_supported_ptypes_get,
 	.stats_get			= idpf_dev_stats_get,
 	.stats_reset			= idpf_dev_stats_reset,
+	.reta_update			= idpf_rss_reta_update,
+	.reta_query			= idpf_rss_reta_query,
+	.rss_hash_update		= idpf_rss_hash_update,
+	.rss_hash_conf_get		= idpf_rss_hash_conf_get,
 };
 
 static uint16_t
diff --git a/drivers/net/idpf/idpf_ethdev.h b/drivers/net/idpf/idpf_ethdev.h
index d791d402fb..839a2bd82c 100644
--- a/drivers/net/idpf/idpf_ethdev.h
+++ b/drivers/net/idpf/idpf_ethdev.h
@@ -48,7 +48,8 @@
 		RTE_ETH_RSS_NONFRAG_IPV6_TCP    |	\
 		RTE_ETH_RSS_NONFRAG_IPV6_UDP    |	\
 		RTE_ETH_RSS_NONFRAG_IPV6_SCTP   |	\
-		RTE_ETH_RSS_NONFRAG_IPV6_OTHER)
+		RTE_ETH_RSS_NONFRAG_IPV6_OTHER  |	\
+		RTE_ETH_RSS_L2_PAYLOAD)
 
 #define IDPF_ADAPTER_NAME_LEN	(PCI_PRI_STR_SIZE + 1)
 
-- 
2.25.1


^ permalink raw reply	[flat|nested] 63+ messages in thread

* [PATCH v5 3/6] common/idpf: support single q scatter RX datapath
  2023-02-07 10:08       ` [PATCH v4 0/6] add idpf pmd enhancement features Mingxia Liu
  2023-02-07 10:08         ` [PATCH v5 1/6] common/idpf: add hw statistics Mingxia Liu
  2023-02-07 10:08         ` [PATCH v5 2/6] common/idpf: add RSS set/get ops Mingxia Liu
@ 2023-02-07 10:08         ` Mingxia Liu
  2023-02-07 10:08         ` [PATCH v5 4/6] common/idpf: add rss_offload hash in singleq rx Mingxia Liu
                           ` (2 subsequent siblings)
  5 siblings, 0 replies; 63+ messages in thread
From: Mingxia Liu @ 2023-02-07 10:08 UTC (permalink / raw)
  To: dev, qi.z.zhang, jingjing.wu, beilei.xing; +Cc: Mingxia Liu, Wenjun Wu

This patch adds a single-queue scatter Rx receive function.
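
For context, a hedged sketch of how an application opts in to this path:
enable the SCATTER offload before rte_eth_dev_configure(); other rte_eth_conf
fields are left at defaults here for brevity:

#include <string.h>
#include <rte_ethdev.h>

/* Request scattered Rx so frames larger than one mbuf data buffer are
 * delivered as multi-segment mbufs. */
static int
configure_scatter_rx(uint16_t port_id, uint16_t nb_rxq, uint16_t nb_txq)
{
	struct rte_eth_conf conf;

	memset(&conf, 0, sizeof(conf));
	conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_SCATTER;

	return rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &conf);
}

The driver also switches to this path on its own when the frame size derived
from the MTU exceeds the Rx buffer length, as the idpf_rx_queue_init() change
below shows.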

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
---
 drivers/common/idpf/idpf_common_rxtx.c | 135 +++++++++++++++++++++++++
 drivers/common/idpf/idpf_common_rxtx.h |   3 +
 drivers/common/idpf/version.map        |   1 +
 drivers/net/idpf/idpf_ethdev.c         |   3 +-
 drivers/net/idpf/idpf_rxtx.c           |  28 +++++
 drivers/net/idpf/idpf_rxtx.h           |   2 +
 6 files changed, 171 insertions(+), 1 deletion(-)

diff --git a/drivers/common/idpf/idpf_common_rxtx.c b/drivers/common/idpf/idpf_common_rxtx.c
index fdac2c3114..9303b51cce 100644
--- a/drivers/common/idpf/idpf_common_rxtx.c
+++ b/drivers/common/idpf/idpf_common_rxtx.c
@@ -1146,6 +1146,141 @@ idpf_dp_singleq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 	return nb_rx;
 }
 
+uint16_t
+idpf_dp_singleq_recv_scatter_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
+			       uint16_t nb_pkts)
+{
+	struct idpf_rx_queue *rxq = rx_queue;
+	volatile union virtchnl2_rx_desc *rx_ring = rxq->rx_ring;
+	volatile union virtchnl2_rx_desc *rxdp;
+	union virtchnl2_rx_desc rxd;
+	struct idpf_adapter *ad;
+	struct rte_mbuf *first_seg = rxq->pkt_first_seg;
+	struct rte_mbuf *last_seg = rxq->pkt_last_seg;
+	struct rte_mbuf *rxm;
+	struct rte_mbuf *nmb;
+	struct rte_eth_dev *dev;
+	const uint32_t *ptype_tbl = rxq->adapter->ptype_tbl;
+	uint16_t rx_id = rxq->rx_tail;
+	uint16_t rx_packet_len;
+	uint16_t nb_hold = 0;
+	uint16_t rx_status0;
+	uint16_t nb_rx = 0;
+	uint64_t pkt_flags;
+	uint64_t dma_addr;
+	uint64_t ts_ns;
+
+	ad = rxq->adapter;
+
+	if (unlikely(!rxq) || unlikely(!rxq->q_started))
+		return nb_rx;
+
+	while (nb_rx < nb_pkts) {
+		rxdp = &rx_ring[rx_id];
+		rx_status0 = rte_le_to_cpu_16(rxdp->flex_nic_wb.status_error0);
+
+		/* Check the DD bit first */
+		if (!(rx_status0 & (1 << VIRTCHNL2_RX_FLEX_DESC_STATUS0_DD_S)))
+			break;
+
+		nmb = rte_mbuf_raw_alloc(rxq->mp);
+		if (unlikely(!nmb)) {
+			__atomic_fetch_add(&rxq->rx_stats.mbuf_alloc_failed, 1, __ATOMIC_RELAXED);
+			RX_LOG(DEBUG, "RX mbuf alloc failed port_id=%u "
+			       "queue_id=%u", rxq->port_id, rxq->queue_id);
+			break;
+		}
+
+		rxd = *rxdp;
+
+		nb_hold++;
+		rxm = rxq->sw_ring[rx_id];
+		rxq->sw_ring[rx_id] = nmb;
+		rx_id++;
+		if (unlikely(rx_id == rxq->nb_rx_desc))
+			rx_id = 0;
+
+		/* Prefetch next mbuf */
+		rte_prefetch0(rxq->sw_ring[rx_id]);
+
+		/* When next RX descriptor is on a cache line boundary,
+		 * prefetch the next 4 RX descriptors and next 8 pointers
+		 * to mbufs.
+		 */
+		if ((rx_id & 0x3) == 0) {
+			rte_prefetch0(&rx_ring[rx_id]);
+			rte_prefetch0(rxq->sw_ring[rx_id]);
+		}
+		dma_addr =
+			rte_cpu_to_le_64(rte_mbuf_data_iova_default(nmb));
+		rxdp->read.hdr_addr = 0;
+		rxdp->read.pkt_addr = dma_addr;
+		rx_packet_len = (rte_cpu_to_le_16(rxd.flex_nic_wb.pkt_len) &
+				 VIRTCHNL2_RX_FLEX_DESC_PKT_LEN_M);
+		rxm->data_len = rx_packet_len;
+		rxm->data_off = RTE_PKTMBUF_HEADROOM;
+
+		/**
+		 * If this is the first buffer of the received packet, set the
+		 * pointer to the first mbuf of the packet and initialize its
+		 * context. Otherwise, update the total length and the number
+		 * of segments of the current scattered packet, and update the
+		 * pointer to the last mbuf of the current packet.
+		 */
+		if (!first_seg) {
+			first_seg = rxm;
+			first_seg->nb_segs = 1;
+			first_seg->pkt_len = rx_packet_len;
+		} else {
+			first_seg->pkt_len =
+				(uint16_t)(first_seg->pkt_len +
+					   rx_packet_len);
+			first_seg->nb_segs++;
+			last_seg->next = rxm;
+		}
+
+		if (!(rx_status0 & (1 << VIRTCHNL2_RX_FLEX_DESC_STATUS0_EOF_S))) {
+			last_seg = rxm;
+			continue;
+		}
+
+		rxm->next = NULL;
+
+		first_seg->port = rxq->port_id;
+		first_seg->ol_flags = 0;
+		pkt_flags = idpf_rxd_to_pkt_flags(rx_status0);
+		first_seg->packet_type =
+			ptype_tbl[(uint8_t)(rte_cpu_to_le_16(rxd.flex_nic_wb.ptype_flex_flags0) &
+				VIRTCHNL2_RX_FLEX_DESC_PTYPE_M)];
+
+		if (idpf_timestamp_dynflag > 0 &&
+		    (rxq->offloads & IDPF_RX_OFFLOAD_TIMESTAMP) != 0) {
+			/* timestamp */
+			ts_ns = idpf_tstamp_convert_32b_64b(ad,
+				rxq->hw_register_set,
+				rte_le_to_cpu_32(rxd.flex_nic_wb.flex_ts.ts_high));
+			rxq->hw_register_set = 0;
+			*RTE_MBUF_DYNFIELD(rxm,
+					   idpf_timestamp_dynfield_offset,
+					   rte_mbuf_timestamp_t *) = ts_ns;
+			first_seg->ol_flags |= idpf_timestamp_dynflag;
+		}
+
+		first_seg->ol_flags |= pkt_flags;
+		rte_prefetch0(RTE_PTR_ADD(first_seg->buf_addr,
+					  first_seg->data_off));
+		rx_pkts[nb_rx++] = first_seg;
+		first_seg = NULL;
+	}
+	rxq->rx_tail = rx_id;
+	rxq->pkt_first_seg = first_seg;
+	rxq->pkt_last_seg = last_seg;
+
+	idpf_update_rx_tail(rxq, nb_hold, rx_id);
+
+	return nb_rx;
+}
+
 static inline int
 idpf_xmit_cleanup(struct idpf_tx_queue *txq)
 {
diff --git a/drivers/common/idpf/idpf_common_rxtx.h b/drivers/common/idpf/idpf_common_rxtx.h
index 263dab061c..7e6df080e6 100644
--- a/drivers/common/idpf/idpf_common_rxtx.h
+++ b/drivers/common/idpf/idpf_common_rxtx.h
@@ -293,5 +293,8 @@ uint16_t idpf_dp_singleq_xmit_pkts_avx512(void *tx_queue,
 __rte_internal
 uint16_t idpf_dp_splitq_xmit_pkts_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
 					 uint16_t nb_pkts);
+__rte_internal
+uint16_t idpf_dp_singleq_recv_scatter_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
+			  uint16_t nb_pkts);
 
 #endif /* _IDPF_COMMON_RXTX_H_ */
diff --git a/drivers/common/idpf/version.map b/drivers/common/idpf/version.map
index f6c92e7e57..e31f6ff4d9 100644
--- a/drivers/common/idpf/version.map
+++ b/drivers/common/idpf/version.map
@@ -7,6 +7,7 @@ INTERNAL {
 	idpf_dp_prep_pkts;
 	idpf_dp_singleq_recv_pkts;
 	idpf_dp_singleq_recv_pkts_avx512;
+	idpf_dp_singleq_recv_scatter_pkts;
 	idpf_dp_singleq_xmit_pkts;
 	idpf_dp_singleq_xmit_pkts_avx512;
 	idpf_dp_splitq_recv_pkts;
diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index 7262109d0a..11f0ca0085 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -119,7 +119,8 @@ idpf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 		RTE_ETH_RX_OFFLOAD_UDP_CKSUM            |
 		RTE_ETH_RX_OFFLOAD_TCP_CKSUM            |
 		RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM     |
-		RTE_ETH_RX_OFFLOAD_TIMESTAMP;
+		RTE_ETH_RX_OFFLOAD_TIMESTAMP		|
+		RTE_ETH_RX_OFFLOAD_SCATTER;
 
 	dev_info->tx_offload_capa =
 		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM		|
diff --git a/drivers/net/idpf/idpf_rxtx.c b/drivers/net/idpf/idpf_rxtx.c
index 38d9829912..d16acd87fb 100644
--- a/drivers/net/idpf/idpf_rxtx.c
+++ b/drivers/net/idpf/idpf_rxtx.c
@@ -503,6 +503,8 @@ int
 idpf_rx_queue_init(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 {
 	struct idpf_rx_queue *rxq;
+	uint16_t max_pkt_len;
+	uint32_t frame_size;
 	int err;
 
 	if (rx_queue_id >= dev->data->nb_rx_queues)
@@ -516,6 +518,17 @@ idpf_rx_queue_init(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 		return -EINVAL;
 	}
 
+	frame_size = dev->data->mtu + IDPF_ETH_OVERHEAD;
+
+	max_pkt_len =
+	    RTE_MIN((uint32_t)IDPF_SUPPORT_CHAIN_NUM * rxq->rx_buf_len,
+		    frame_size);
+
+	rxq->max_pkt_len = max_pkt_len;
+	if ((dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_SCATTER) ||
+	    frame_size > rxq->rx_buf_len)
+		dev->data->scattered_rx = 1;
+
 	err = idpf_qc_ts_mbuf_register(rxq);
 	if (err != 0) {
 		PMD_DRV_LOG(ERR, "fail to residter timestamp mbuf %u",
@@ -807,6 +820,14 @@ idpf_set_rx_function(struct rte_eth_dev *dev)
 			}
 #endif /* CC_AVX512_SUPPORT */
 		}
+
+		if (dev->data->scattered_rx) {
+			PMD_DRV_LOG(NOTICE,
+				    "Using Single Scalar Scatterd Rx (port %d).",
+				    dev->data->port_id);
+			dev->rx_pkt_burst = idpf_dp_singleq_recv_scatter_pkts;
+			return;
+		}
 		PMD_DRV_LOG(NOTICE,
 			    "Using Single Scalar Rx (port %d).",
 			    dev->data->port_id);
@@ -819,6 +840,13 @@ idpf_set_rx_function(struct rte_eth_dev *dev)
 			    dev->data->port_id);
 		dev->rx_pkt_burst = idpf_dp_splitq_recv_pkts;
 	} else {
+		if (dev->data->scattered_rx) {
+			PMD_DRV_LOG(NOTICE,
+				    "Using Single Scalar Scatterd Rx (port %d).",
+				    dev->data->port_id);
+			dev->rx_pkt_burst = idpf_dp_singleq_recv_scatter_pkts;
+			return;
+		}
 		PMD_DRV_LOG(NOTICE,
 			    "Using Single Scalar Rx (port %d).",
 			    dev->data->port_id);
diff --git a/drivers/net/idpf/idpf_rxtx.h b/drivers/net/idpf/idpf_rxtx.h
index 3a5084dfd6..41a7495083 100644
--- a/drivers/net/idpf/idpf_rxtx.h
+++ b/drivers/net/idpf/idpf_rxtx.h
@@ -23,6 +23,8 @@
 #define IDPF_DEFAULT_TX_RS_THRESH	32
 #define IDPF_DEFAULT_TX_FREE_THRESH	32
 
+#define IDPF_SUPPORT_CHAIN_NUM 5
+
 int idpf_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 			uint16_t nb_desc, unsigned int socket_id,
 			const struct rte_eth_rxconf *rx_conf,
-- 
2.25.1


^ permalink raw reply	[flat|nested] 63+ messages in thread
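
The queue init hunk above caps max_pkt_len at what IDPF_SUPPORT_CHAIN_NUM receive buffers can hold and enables scattered Rx when the offload is requested or a full frame no longer fits in one buffer. A standalone sketch of that decision, with hypothetical names and an assumed Ethernet overhead value:

#include <stdbool.h>
#include <stdint.h>

#define HYP_ETH_OVERHEAD      26 /* assumed L2 header + CRC + two VLAN tags */
#define HYP_SUPPORT_CHAIN_NUM  5 /* mirrors the 5-segment chain limit above */

/* Hypothetical helper: cap the advertised packet length at what a full
 * chain of Rx buffers can hold and decide whether scattered Rx is needed.
 */
static uint32_t
hyp_rx_scatter_setup(uint16_t mtu, uint16_t rx_buf_len,
                     bool scatter_offload_requested, bool *need_scatter)
{
        uint32_t frame_size = (uint32_t)mtu + HYP_ETH_OVERHEAD;
        uint32_t chain_capacity = (uint32_t)HYP_SUPPORT_CHAIN_NUM * rx_buf_len;
        uint32_t max_pkt_len = frame_size < chain_capacity ?
                               frame_size : chain_capacity;

        /* Scatter is needed when the user asked for it or when one
         * buffer cannot hold a complete frame.
         */
        *need_scatter = scatter_offload_requested || frame_size > rx_buf_len;

        return max_pkt_len;
}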

* [PATCH v5 4/6] common/idpf: add rss_offload hash in singleq rx
  2023-02-07 10:08       ` [PATCH v4 0/6] add idpf pmd enhancement features Mingxia Liu
                           ` (2 preceding siblings ...)
  2023-02-07 10:08         ` [PATCH v5 3/6] common/idpf: support single q scatter RX datapath Mingxia Liu
@ 2023-02-07 10:08         ` Mingxia Liu
  2023-02-07 10:08         ` [PATCH v5 5/6] common/idpf: add alarm to support handle vchnl message Mingxia Liu
  2023-02-07 10:08         ` [PATCH v5 6/6] common/idpf: add xstats ops Mingxia Liu
  5 siblings, 0 replies; 63+ messages in thread
From: Mingxia Liu @ 2023-02-07 10:08 UTC (permalink / raw)
  To: dev, qi.z.zhang, jingjing.wu, beilei.xing; +Cc: Mingxia Liu

This patch adds parsing of the RSS valid flag and hash value from the Rx descriptor.
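
A minimal consumer-side sketch (illustrative, not part of the patch): an application should only trust hash.rss when RTE_MBUF_F_RX_RSS_HASH is set, which is exactly the flag this patch raises for descriptors that report a valid hash.

#include <rte_ethdev.h>
#include <rte_mbuf.h>

/* Count how many packets in one burst carry a valid RSS hash. */
static unsigned int
count_rss_hashed(uint16_t port_id, uint16_t queue_id)
{
        struct rte_mbuf *pkts[32];
        unsigned int hashed = 0;
        uint16_t nb, i;

        nb = rte_eth_rx_burst(port_id, queue_id, pkts, RTE_DIM(pkts));
        for (i = 0; i < nb; i++) {
                if (pkts[i]->ol_flags & RTE_MBUF_F_RX_RSS_HASH)
                        hashed++; /* pkts[i]->hash.rss holds the 32-bit hash */
                rte_pktmbuf_free(pkts[i]);
        }

        return hashed;
}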

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/common/idpf/idpf_common_rxtx.c | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)

diff --git a/drivers/common/idpf/idpf_common_rxtx.c b/drivers/common/idpf/idpf_common_rxtx.c
index 9303b51cce..d7e8df1895 100644
--- a/drivers/common/idpf/idpf_common_rxtx.c
+++ b/drivers/common/idpf/idpf_common_rxtx.c
@@ -1030,6 +1030,20 @@ idpf_update_rx_tail(struct idpf_rx_queue *rxq, uint16_t nb_hold,
 	rxq->nb_rx_hold = nb_hold;
 }
 
+static inline void
+idpf_singleq_rx_rss_offload(struct rte_mbuf *mb,
+			    volatile struct virtchnl2_rx_flex_desc_nic *rx_desc,
+			    uint64_t *pkt_flags)
+{
+	uint16_t rx_status0 = rte_le_to_cpu_16(rx_desc->status_error0);
+
+	if (rx_status0 & RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_STATUS0_RSS_VALID_S)) {
+		*pkt_flags |= RTE_MBUF_F_RX_RSS_HASH;
+		mb->hash.rss = rte_le_to_cpu_32(rx_desc->rss_hash);
+	}
+
+}
+
 uint16_t
 idpf_dp_singleq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 			  uint16_t nb_pkts)
@@ -1118,6 +1132,7 @@ idpf_dp_singleq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 		rxm->port = rxq->port_id;
 		rxm->ol_flags = 0;
 		pkt_flags = idpf_rxd_to_pkt_flags(rx_status0);
+		idpf_singleq_rx_rss_offload(rxm, &rxd.flex_nic_wb, &pkt_flags);
 		rxm->packet_type =
 			ptype_tbl[(uint8_t)(rte_cpu_to_le_16(rxd.flex_nic_wb.ptype_flex_flags0) &
 					    VIRTCHNL2_RX_FLEX_DESC_PTYPE_M)];
@@ -1249,6 +1264,7 @@ idpf_dp_singleq_recv_scatter_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 		first_seg->port = rxq->port_id;
 		first_seg->ol_flags = 0;
 		pkt_flags = idpf_rxd_to_pkt_flags(rx_status0);
+		idpf_singleq_rx_rss_offload(first_seg, &rxd.flex_nic_wb, &pkt_flags);
 		first_seg->packet_type =
 			ptype_tbl[(uint8_t)(rte_cpu_to_le_16(rxd.flex_nic_wb.ptype_flex_flags0) &
 				VIRTCHNL2_RX_FLEX_DESC_PTYPE_M)];
-- 
2.25.1


^ permalink raw reply	[flat|nested] 63+ messages in thread

* [PATCH v5 5/6] common/idpf: add alarm to support handle vchnl message
  2023-02-07 10:08       ` [PATCH v4 0/6] add idpf pmd enhancement features Mingxia Liu
                           ` (3 preceding siblings ...)
  2023-02-07 10:08         ` [PATCH v5 4/6] common/idpf: add rss_offload hash in singleq rx Mingxia Liu
@ 2023-02-07 10:08         ` Mingxia Liu
  2023-02-07 10:08         ` [PATCH v5 6/6] common/idpf: add xstats ops Mingxia Liu
  5 siblings, 0 replies; 63+ messages in thread
From: Mingxia Liu @ 2023-02-07 10:08 UTC (permalink / raw)
  To: dev, qi.z.zhang, jingjing.wu, beilei.xing; +Cc: Mingxia Liu

Handle virtual channel event messages with a periodic alarm.
Refine the link status update.
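
The mailbox is polled from an EAL alarm that re-arms itself, since rte_eal_alarm_set() is one-shot. A standalone sketch of that pattern with hypothetical handler names, using the same 50 ms interval as IDPF_ALARM_INTERVAL below:

#include <rte_alarm.h>
#include <rte_log.h>

#define HYP_POLL_INTERVAL_US 50000 /* 50 ms, as in IDPF_ALARM_INTERVAL */

/* Hypothetical periodic handler: do one round of work, then re-arm. */
static void
hyp_mailbox_poll(void *param)
{
        /* ...read and dispatch pending control queue messages here... */

        if (rte_eal_alarm_set(HYP_POLL_INTERVAL_US, hyp_mailbox_poll, param) < 0)
                RTE_LOG(ERR, USER1, "failed to re-arm mailbox poll alarm\n");
}

/* On teardown, cancel every pending instance of the callback. */
static void
hyp_mailbox_poll_stop(void *param)
{
        rte_eal_alarm_cancel(hyp_mailbox_poll, param);
}

Alarm callbacks run in the EAL interrupt thread, so such a handler should return quickly.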

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
Signed-off-by: Beilei Xing <beilei.xing@intel.com>
---
 drivers/common/idpf/idpf_common_device.h   |   5 +
 drivers/common/idpf/idpf_common_virtchnl.c |  33 ++--
 drivers/common/idpf/idpf_common_virtchnl.h |   6 +
 drivers/common/idpf/version.map            |   2 +
 drivers/net/idpf/idpf_ethdev.c             | 169 ++++++++++++++++++++-
 drivers/net/idpf/idpf_ethdev.h             |   2 +
 6 files changed, 195 insertions(+), 22 deletions(-)

diff --git a/drivers/common/idpf/idpf_common_device.h b/drivers/common/idpf/idpf_common_device.h
index 7abc4d2a3a..364a60221a 100644
--- a/drivers/common/idpf/idpf_common_device.h
+++ b/drivers/common/idpf/idpf_common_device.h
@@ -118,6 +118,11 @@ struct idpf_vport {
 	bool tx_use_avx512;
 
 	struct virtchnl2_vport_stats eth_stats_offset;
+
+	void *dev;
+	/* Event from ipf */
+	bool link_up;
+	uint32_t link_speed;
 };
 
 /* Message type read in virtual channel from PF */
diff --git a/drivers/common/idpf/idpf_common_virtchnl.c b/drivers/common/idpf/idpf_common_virtchnl.c
index 10cfa33704..99d9efbb7c 100644
--- a/drivers/common/idpf/idpf_common_virtchnl.c
+++ b/drivers/common/idpf/idpf_common_virtchnl.c
@@ -202,25 +202,6 @@ idpf_vc_cmd_execute(struct idpf_adapter *adapter, struct idpf_cmd_info *args)
 	switch (args->ops) {
 	case VIRTCHNL_OP_VERSION:
 	case VIRTCHNL2_OP_GET_CAPS:
-	case VIRTCHNL2_OP_CREATE_VPORT:
-	case VIRTCHNL2_OP_DESTROY_VPORT:
-	case VIRTCHNL2_OP_SET_RSS_KEY:
-	case VIRTCHNL2_OP_SET_RSS_LUT:
-	case VIRTCHNL2_OP_SET_RSS_HASH:
-	case VIRTCHNL2_OP_CONFIG_RX_QUEUES:
-	case VIRTCHNL2_OP_CONFIG_TX_QUEUES:
-	case VIRTCHNL2_OP_ENABLE_QUEUES:
-	case VIRTCHNL2_OP_DISABLE_QUEUES:
-	case VIRTCHNL2_OP_ENABLE_VPORT:
-	case VIRTCHNL2_OP_DISABLE_VPORT:
-	case VIRTCHNL2_OP_MAP_QUEUE_VECTOR:
-	case VIRTCHNL2_OP_UNMAP_QUEUE_VECTOR:
-	case VIRTCHNL2_OP_ALLOC_VECTORS:
-	case VIRTCHNL2_OP_DEALLOC_VECTORS:
-	case VIRTCHNL2_OP_GET_STATS:
-	case VIRTCHNL2_OP_GET_RSS_KEY:
-	case VIRTCHNL2_OP_GET_RSS_HASH:
-	case VIRTCHNL2_OP_GET_RSS_LUT:
 		/* for init virtchnl ops, need to poll the response */
 		err = idpf_vc_one_msg_read(adapter, args->ops, args->out_size, args->out_buffer);
 		clear_cmd(adapter);
@@ -1111,3 +1092,17 @@ idpf_vc_txq_config(struct idpf_vport *vport, struct idpf_tx_queue *txq)
 
 	return err;
 }
+
+int
+idpf_vc_ctlq_recv(struct idpf_ctlq_info *cq, u16 *num_q_msg,
+		  struct idpf_ctlq_msg *q_msg)
+{
+	return idpf_ctlq_recv(cq, num_q_msg, q_msg);
+}
+
+int
+idpf_vc_ctlq_post_rx_buffs(struct idpf_hw *hw, struct idpf_ctlq_info *cq,
+			   u16 *buff_count, struct idpf_dma_mem **buffs)
+{
+	return idpf_ctlq_post_rx_buffs(hw, cq, buff_count, buffs);
+}
diff --git a/drivers/common/idpf/idpf_common_virtchnl.h b/drivers/common/idpf/idpf_common_virtchnl.h
index 205d1a932d..d479d93c8e 100644
--- a/drivers/common/idpf/idpf_common_virtchnl.h
+++ b/drivers/common/idpf/idpf_common_virtchnl.h
@@ -58,4 +58,10 @@ __rte_internal
 int idpf_vc_rss_lut_get(struct idpf_vport *vport);
 __rte_internal
 int idpf_vc_rss_hash_get(struct idpf_vport *vport);
+__rte_internal
+int idpf_vc_ctlq_recv(struct idpf_ctlq_info *cq, u16 *num_q_msg,
+		      struct idpf_ctlq_msg *q_msg);
+__rte_internal
+int idpf_vc_ctlq_post_rx_buffs(struct idpf_hw *hw, struct idpf_ctlq_info *cq,
+			   u16 *buff_count, struct idpf_dma_mem **buffs);
 #endif /* _IDPF_COMMON_VIRTCHNL_H_ */
diff --git a/drivers/common/idpf/version.map b/drivers/common/idpf/version.map
index e31f6ff4d9..70334a1b03 100644
--- a/drivers/common/idpf/version.map
+++ b/drivers/common/idpf/version.map
@@ -38,6 +38,8 @@ INTERNAL {
 	idpf_vc_api_version_check;
 	idpf_vc_caps_get;
 	idpf_vc_cmd_execute;
+	idpf_vc_ctlq_post_rx_buffs;
+	idpf_vc_ctlq_recv;
 	idpf_vc_irq_map_unmap_config;
 	idpf_vc_one_msg_read;
 	idpf_vc_ptype_info_query;
diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index 11f0ca0085..751c0d8717 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -9,6 +9,7 @@
 #include <rte_memzone.h>
 #include <rte_dev.h>
 #include <errno.h>
+#include <rte_alarm.h>
 
 #include "idpf_ethdev.h"
 #include "idpf_rxtx.h"
@@ -83,14 +84,51 @@ static int
 idpf_dev_link_update(struct rte_eth_dev *dev,
 		     __rte_unused int wait_to_complete)
 {
+	struct idpf_vport *vport = dev->data->dev_private;
 	struct rte_eth_link new_link;
 
 	memset(&new_link, 0, sizeof(new_link));
 
-	new_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+	switch (vport->link_speed) {
+	case RTE_ETH_SPEED_NUM_10M:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_10M;
+		break;
+	case RTE_ETH_SPEED_NUM_100M:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_100M;
+		break;
+	case RTE_ETH_SPEED_NUM_1G:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_1G;
+		break;
+	case RTE_ETH_SPEED_NUM_10G:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_10G;
+		break;
+	case RTE_ETH_SPEED_NUM_20G:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_20G;
+		break;
+	case RTE_ETH_SPEED_NUM_25G:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_25G;
+		break;
+	case RTE_ETH_SPEED_NUM_40G:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_40G;
+		break;
+	case RTE_ETH_SPEED_NUM_50G:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_50G;
+		break;
+	case RTE_ETH_SPEED_NUM_100G:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_100G;
+		break;
+	case RTE_ETH_SPEED_NUM_200G:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_200G;
+		break;
+	default:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+	}
+
 	new_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
-	new_link.link_autoneg = !(dev->data->dev_conf.link_speeds &
-				  RTE_ETH_LINK_SPEED_FIXED);
+	new_link.link_status = vport->link_up ? RTE_ETH_LINK_UP :
+		RTE_ETH_LINK_DOWN;
+	new_link.link_autoneg = (dev->data->dev_conf.link_speeds & RTE_ETH_LINK_SPEED_FIXED) ?
+				 RTE_ETH_LINK_FIXED : RTE_ETH_LINK_AUTONEG;
 
 	return rte_eth_linkstatus_set(dev, &new_link);
 }
@@ -891,6 +929,127 @@ idpf_parse_devargs(struct rte_pci_device *pci_dev, struct idpf_adapter_ext *adap
 	return ret;
 }
 
+static struct idpf_vport *
+idpf_find_vport(struct idpf_adapter_ext *adapter, uint32_t vport_id)
+{
+	struct idpf_vport *vport = NULL;
+	int i;
+
+	for (i = 0; i < adapter->cur_vport_nb; i++) {
+		vport = adapter->vports[i];
+		if (vport->vport_id != vport_id)
+			continue;
+		else
+			return vport;
+	}
+
+	return vport;
+}
+
+static void
+idpf_handle_event_msg(struct idpf_vport *vport, uint8_t *msg, uint16_t msglen)
+{
+	struct virtchnl2_event *vc_event = (struct virtchnl2_event *)msg;
+	struct rte_eth_dev *dev = (struct rte_eth_dev *)vport->dev;
+
+	if (msglen < sizeof(struct virtchnl2_event)) {
+		PMD_DRV_LOG(ERR, "Error event");
+		return;
+	}
+
+	switch (vc_event->event) {
+	case VIRTCHNL2_EVENT_LINK_CHANGE:
+		PMD_DRV_LOG(DEBUG, "VIRTCHNL2_EVENT_LINK_CHANGE");
+		vport->link_up = !!(vc_event->link_status);
+		vport->link_speed = vc_event->link_speed;
+		idpf_dev_link_update(dev, 0);
+		break;
+	default:
+		PMD_DRV_LOG(ERR, " unknown event received %u", vc_event->event);
+		break;
+	}
+}
+
+static void
+idpf_handle_virtchnl_msg(struct idpf_adapter_ext *adapter_ex)
+{
+	struct idpf_adapter *adapter = &adapter_ex->base;
+	struct idpf_dma_mem *dma_mem = NULL;
+	struct idpf_hw *hw = &adapter->hw;
+	struct virtchnl2_event *vc_event;
+	struct idpf_ctlq_msg ctlq_msg;
+	enum idpf_mbx_opc mbx_op;
+	struct idpf_vport *vport;
+	enum virtchnl_ops vc_op;
+	uint16_t pending = 1;
+	int ret;
+
+	while (pending) {
+		ret = idpf_vc_ctlq_recv(hw->arq, &pending, &ctlq_msg);
+		if (ret) {
+			PMD_DRV_LOG(INFO, "Failed to read msg from virtual channel, ret: %d", ret);
+			return;
+		}
+
+		rte_memcpy(adapter->mbx_resp, ctlq_msg.ctx.indirect.payload->va,
+			   IDPF_DFLT_MBX_BUF_SIZE);
+
+		mbx_op = rte_le_to_cpu_16(ctlq_msg.opcode);
+		vc_op = rte_le_to_cpu_32(ctlq_msg.cookie.mbx.chnl_opcode);
+		adapter->cmd_retval = rte_le_to_cpu_32(ctlq_msg.cookie.mbx.chnl_retval);
+
+		switch (mbx_op) {
+		case idpf_mbq_opc_send_msg_to_peer_pf:
+			if (vc_op == VIRTCHNL2_OP_EVENT) {
+				if (ctlq_msg.data_len < sizeof(struct virtchnl2_event)) {
+					PMD_DRV_LOG(ERR, "Error event");
+					return;
+				}
+				vc_event = (struct virtchnl2_event *)adapter->mbx_resp;
+				vport = idpf_find_vport(adapter_ex, vc_event->vport_id);
+				if (!vport) {
+					PMD_DRV_LOG(ERR, "Can't find vport.");
+					return;
+				}
+				idpf_handle_event_msg(vport, adapter->mbx_resp,
+						      ctlq_msg.data_len);
+			} else {
+				if (vc_op == adapter->pend_cmd)
+					notify_cmd(adapter, adapter->cmd_retval);
+				else
+					PMD_DRV_LOG(ERR, "command mismatch, expect %u, get %u",
+						    adapter->pend_cmd, vc_op);
+
+				PMD_DRV_LOG(DEBUG, " Virtual channel response is received,"
+					    "opcode = %d", vc_op);
+			}
+			goto post_buf;
+		default:
+			PMD_DRV_LOG(DEBUG, "Request %u is not supported yet", mbx_op);
+		}
+	}
+
+post_buf:
+	if (ctlq_msg.data_len)
+		dma_mem = ctlq_msg.ctx.indirect.payload;
+	else
+		pending = 0;
+
+	ret = idpf_vc_ctlq_post_rx_buffs(hw, hw->arq, &pending, &dma_mem);
+	if (ret && dma_mem)
+		idpf_free_dma_mem(hw, dma_mem);
+}
+
+static void
+idpf_dev_alarm_handler(void *param)
+{
+	struct idpf_adapter_ext *adapter = param;
+
+	idpf_handle_virtchnl_msg(adapter);
+
+	rte_eal_alarm_set(IDPF_ALARM_INTERVAL, idpf_dev_alarm_handler, adapter);
+}
+
 static int
 idpf_adapter_ext_init(struct rte_pci_device *pci_dev, struct idpf_adapter_ext *adapter)
 {
@@ -913,6 +1072,8 @@ idpf_adapter_ext_init(struct rte_pci_device *pci_dev, struct idpf_adapter_ext *a
 		goto err_adapter_init;
 	}
 
+	rte_eal_alarm_set(IDPF_ALARM_INTERVAL, idpf_dev_alarm_handler, adapter);
+
 	adapter->max_vport_nb = adapter->base.caps.max_vports;
 
 	adapter->vports = rte_zmalloc("vports",
@@ -996,6 +1157,7 @@ idpf_dev_vport_init(struct rte_eth_dev *dev, void *init_params)
 	vport->adapter = &adapter->base;
 	vport->sw_idx = param->idx;
 	vport->devarg_id = param->devarg_id;
+	vport->dev = dev;
 
 	memset(&create_vport_info, 0, sizeof(create_vport_info));
 	ret = idpf_vport_info_init(vport, &create_vport_info);
@@ -1065,6 +1227,7 @@ idpf_find_adapter_ext(struct rte_pci_device *pci_dev)
 static void
 idpf_adapter_ext_deinit(struct idpf_adapter_ext *adapter)
 {
+	rte_eal_alarm_cancel(idpf_dev_alarm_handler, adapter);
 	idpf_adapter_deinit(&adapter->base);
 
 	rte_free(adapter->vports);
diff --git a/drivers/net/idpf/idpf_ethdev.h b/drivers/net/idpf/idpf_ethdev.h
index 839a2bd82c..3c2c932438 100644
--- a/drivers/net/idpf/idpf_ethdev.h
+++ b/drivers/net/idpf/idpf_ethdev.h
@@ -53,6 +53,8 @@
 
 #define IDPF_ADAPTER_NAME_LEN	(PCI_PRI_STR_SIZE + 1)
 
+#define IDPF_ALARM_INTERVAL	50000 /* us */
+
 struct idpf_vport_param {
 	struct idpf_adapter_ext *adapter;
 	uint16_t devarg_id; /* arg id from user */
-- 
2.25.1


^ permalink raw reply	[flat|nested] 63+ messages in thread
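
With the event handler above keeping the cached link state current through rte_eth_linkstatus_set(), an application can read it without a blocking query. A minimal sketch using only the generic ethdev API:

#include <stdio.h>
#include <rte_ethdev.h>

/* Print the cached link state of one port. */
static void
print_link_status(uint16_t port_id)
{
        struct rte_eth_link link;
        char text[RTE_ETH_LINK_MAX_STR_LEN];

        /* The _nowait variant returns the cached state without blocking. */
        if (rte_eth_link_get_nowait(port_id, &link) < 0)
                return;

        rte_eth_link_to_str(text, sizeof(text), &link);
        printf("port %u: %s\n", port_id, text);
}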

* [PATCH v5 6/6] common/idpf: add xstats ops
  2023-02-07 10:08       ` [PATCH v4 0/6] add idpf pmd enhancement features Mingxia Liu
                           ` (4 preceding siblings ...)
  2023-02-07 10:08         ` [PATCH v5 5/6] common/idpf: add alarm to support handle vchnl message Mingxia Liu
@ 2023-02-07 10:08         ` Mingxia Liu
  5 siblings, 0 replies; 63+ messages in thread
From: Mingxia Liu @ 2023-02-07 10:08 UTC (permalink / raw)
  To: dev, qi.z.zhang, jingjing.wu, beilei.xing; +Cc: Mingxia Liu

Add support for these device ops:
- idpf_dev_xstats_get
- idpf_dev_xstats_get_names
- idpf_dev_xstats_reset
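
The three ops above plug into the generic ethdev xstats API. A minimal sketch of the caller side, which first queries the count and then fetches names and values (illustrative code, not part of the patch):

#include <inttypes.h>
#include <stdio.h>
#include <stdlib.h>
#include <rte_ethdev.h>

/* Dump every extended statistic exposed by a port. */
static void
dump_xstats(uint16_t port_id)
{
        struct rte_eth_xstat_name *names;
        struct rte_eth_xstat *values;
        int n, i;

        /* A NULL/0 query returns the number of statistics. */
        n = rte_eth_xstats_get_names(port_id, NULL, 0);
        if (n <= 0)
                return;

        names = calloc(n, sizeof(*names));
        values = calloc(n, sizeof(*values));
        if (names != NULL && values != NULL &&
            rte_eth_xstats_get_names(port_id, names, n) == n &&
            rte_eth_xstats_get(port_id, values, n) == n) {
                for (i = 0; i < n; i++)
                        printf("%s: %" PRIu64 "\n",
                               names[values[i].id].name, values[i].value);
        }

        free(names);
        free(values);
}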

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/net/idpf/idpf_ethdev.c | 80 ++++++++++++++++++++++++++++++++++
 1 file changed, 80 insertions(+)

diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index 751c0d8717..38cbbf369d 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -80,6 +80,30 @@ static const uint64_t idpf_ipv6_rss = RTE_ETH_RSS_NONFRAG_IPV6_UDP |
 			  RTE_ETH_RSS_NONFRAG_IPV6_OTHER |
 			  RTE_ETH_RSS_FRAG_IPV6;
 
+struct rte_idpf_xstats_name_off {
+	char name[RTE_ETH_XSTATS_NAME_SIZE];
+	unsigned int offset;
+};
+
+static const struct rte_idpf_xstats_name_off rte_idpf_stats_strings[] = {
+	{"rx_bytes", offsetof(struct virtchnl2_vport_stats, rx_bytes)},
+	{"rx_unicast_packets", offsetof(struct virtchnl2_vport_stats, rx_unicast)},
+	{"rx_multicast_packets", offsetof(struct virtchnl2_vport_stats, rx_multicast)},
+	{"rx_broadcast_packets", offsetof(struct virtchnl2_vport_stats, rx_broadcast)},
+	{"rx_dropped_packets", offsetof(struct virtchnl2_vport_stats, rx_discards)},
+	{"rx_errors", offsetof(struct virtchnl2_vport_stats, rx_errors)},
+	{"rx_unknown_protocol_packets", offsetof(struct virtchnl2_vport_stats,
+						 rx_unknown_protocol)},
+	{"tx_bytes", offsetof(struct virtchnl2_vport_stats, tx_bytes)},
+	{"tx_unicast_packets", offsetof(struct virtchnl2_vport_stats, tx_unicast)},
+	{"tx_multicast_packets", offsetof(struct virtchnl2_vport_stats, tx_multicast)},
+	{"tx_broadcast_packets", offsetof(struct virtchnl2_vport_stats, tx_broadcast)},
+	{"tx_dropped_packets", offsetof(struct virtchnl2_vport_stats, tx_discards)},
+	{"tx_error_packets", offsetof(struct virtchnl2_vport_stats, tx_errors)}};
+
+#define IDPF_NB_XSTATS (sizeof(rte_idpf_stats_strings) / \
+		sizeof(rte_idpf_stats_strings[0]))
+
 static int
 idpf_dev_link_update(struct rte_eth_dev *dev,
 		     __rte_unused int wait_to_complete)
@@ -313,6 +337,59 @@ idpf_dev_stats_reset(struct rte_eth_dev *dev)
 	return 0;
 }
 
+static int idpf_dev_xstats_reset(struct rte_eth_dev *dev)
+{
+	idpf_dev_stats_reset(dev);
+	return 0;
+}
+
+static int idpf_dev_xstats_get(struct rte_eth_dev *dev,
+			       struct rte_eth_xstat *xstats, unsigned int n)
+{
+	struct idpf_vport *vport =
+		(struct idpf_vport *)dev->data->dev_private;
+	struct virtchnl2_vport_stats *pstats = NULL;
+	unsigned int i;
+	int ret;
+
+	if (n < IDPF_NB_XSTATS)
+		return IDPF_NB_XSTATS;
+
+	if (!xstats)
+		return 0;
+
+	ret = idpf_vc_stats_query(vport, &pstats);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "Get statistics failed");
+		return 0;
+	}
+
+	idpf_vport_stats_update(&vport->eth_stats_offset, pstats);
+
+	/* loop over xstats array and values from pstats */
+	for (i = 0; i < IDPF_NB_XSTATS; i++) {
+		xstats[i].id = i;
+		xstats[i].value = *(uint64_t *)(((char *)pstats) +
+			rte_idpf_stats_strings[i].offset);
+	}
+	return IDPF_NB_XSTATS;
+}
+
+static int idpf_dev_xstats_get_names(__rte_unused struct rte_eth_dev *dev,
+				     struct rte_eth_xstat_name *xstats_names,
+				     __rte_unused unsigned int limit)
+{
+	unsigned int i;
+
+	if (xstats_names)
+		for (i = 0; i < IDPF_NB_XSTATS; i++) {
+			snprintf(xstats_names[i].name,
+				 sizeof(xstats_names[i].name),
+				 "%s", rte_idpf_stats_strings[i].name);
+		}
+	return IDPF_NB_XSTATS;
+}
+
 static int idpf_config_rss_hf(struct idpf_vport *vport, uint64_t rss_hf)
 {
 	uint64_t hena = 0;
@@ -1122,6 +1199,9 @@ static const struct eth_dev_ops idpf_eth_dev_ops = {
 	.reta_query			= idpf_rss_reta_query,
 	.rss_hash_update		= idpf_rss_hash_update,
 	.rss_hash_conf_get		= idpf_rss_hash_conf_get,
+	.xstats_get			= idpf_dev_xstats_get,
+	.xstats_get_names		= idpf_dev_xstats_get_names,
+	.xstats_reset			= idpf_dev_xstats_reset,
 };
 
 static uint16_t
-- 
2.25.1


^ permalink raw reply	[flat|nested] 63+ messages in thread

* [PATCH v6 0/6] add idpf pmd enhancement features
  2023-02-07 10:08         ` [PATCH v5 1/6] common/idpf: add hw statistics Mingxia Liu
@ 2023-02-07 10:16           ` Mingxia Liu
  2023-02-07 10:16             ` [PATCH v6 1/6] common/idpf: add hw statistics Mingxia Liu
                               ` (7 more replies)
  0 siblings, 8 replies; 63+ messages in thread
From: Mingxia Liu @ 2023-02-07 10:16 UTC (permalink / raw)
  To: dev, qi.z.zhang, jingjing.wu, beilei.xing; +Cc: Mingxia Liu

This patchset adds several enhancement features to the idpf PMD,
including the following:
- add hw statistics, support stats/xstats ops
- add rss configure/show ops
- add event handling: link status
- add scattered data path for single queue

This patchset is based on the refactor idpf PMD code:
http://patches.dpdk.org/project/dpdk/patch/20230207084549.2225214-2-wenjun1.wu@intel.com/

v2 changes:
 - Fix rss lut config issue.
v3 changes:
 - rebase to the new baseline.
v4 changes:
 - rebase to the new baseline.
 - optimize some code
 - give a "not supported" tip when the user wants to configure the RSS hash type
 - if stats reset fails at initialization time, don't roll back, just
   print ERROR info.
v5 changes:
 - fix some spelling errors
v6 changes:
 - add cover-letter

Mingxia Liu (6):
  common/idpf: add hw statistics
  common/idpf: add RSS set/get ops
  common/idpf: support single q scatter RX datapath
  common/idpf: add rss_offload hash in singleq rx
  common/idpf: add alarm to support handle vchnl message
  common/idpf: add xstats ops

 drivers/common/idpf/idpf_common_device.c   |  17 +
 drivers/common/idpf/idpf_common_device.h   |  10 +
 drivers/common/idpf/idpf_common_rxtx.c     | 151 +++++
 drivers/common/idpf/idpf_common_rxtx.h     |   3 +
 drivers/common/idpf/idpf_common_virtchnl.c | 171 +++++-
 drivers/common/idpf/idpf_common_virtchnl.h |  15 +
 drivers/common/idpf/version.map            |   8 +
 drivers/net/idpf/idpf_ethdev.c             | 606 ++++++++++++++++++++-
 drivers/net/idpf/idpf_ethdev.h             |   5 +-
 drivers/net/idpf/idpf_rxtx.c               |  28 +
 drivers/net/idpf/idpf_rxtx.h               |   2 +
 11 files changed, 996 insertions(+), 20 deletions(-)

-- 
2.25.1


^ permalink raw reply	[flat|nested] 63+ messages in thread

* [PATCH v6 1/6] common/idpf: add hw statistics
  2023-02-07 10:16           ` [PATCH v6 0/6] add idpf pmd enhancement features Mingxia Liu
@ 2023-02-07 10:16             ` Mingxia Liu
  2023-02-08  2:00               ` Zhang, Qi Z
  2023-02-07 10:16             ` [PATCH v6 2/6] common/idpf: add RSS set/get ops Mingxia Liu
                               ` (6 subsequent siblings)
  7 siblings, 1 reply; 63+ messages in thread
From: Mingxia Liu @ 2023-02-07 10:16 UTC (permalink / raw)
  To: dev, qi.z.zhang, jingjing.wu, beilei.xing; +Cc: Mingxia Liu

This patch adds hardware packet/byte statistics.
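
Stats reset in this patch only snapshots the current hardware counters into eth_stats_offset; every stats get then reports current minus snapshot, so the hardware counters never have to be cleared. A standalone sketch of that offset technique, with hypothetical names and just two fields:

#include <stdint.h>

/* Hypothetical counter pair: reset() stores a snapshot, get() reports
 * "raw - snapshot", so the raw counters keep running untouched.
 */
struct hyp_counters {
        uint64_t rx_bytes;
        uint64_t tx_bytes;
};

static struct hyp_counters hyp_offset; /* written by reset, read by get */

static void
hyp_stats_reset(const struct hyp_counters *hw_now)
{
        hyp_offset = *hw_now; /* remember the baseline */
}

static void
hyp_stats_get(const struct hyp_counters *hw_now, struct hyp_counters *out)
{
        out->rx_bytes = hw_now->rx_bytes - hyp_offset.rx_bytes;
        out->tx_bytes = hw_now->tx_bytes - hyp_offset.tx_bytes;
}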

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/common/idpf/idpf_common_device.c   | 17 +++++
 drivers/common/idpf/idpf_common_device.h   |  4 +
 drivers/common/idpf/idpf_common_virtchnl.c | 27 +++++++
 drivers/common/idpf/idpf_common_virtchnl.h |  3 +
 drivers/common/idpf/version.map            |  2 +
 drivers/net/idpf/idpf_ethdev.c             | 86 ++++++++++++++++++++++
 6 files changed, 139 insertions(+)

diff --git a/drivers/common/idpf/idpf_common_device.c b/drivers/common/idpf/idpf_common_device.c
index 48b3e3c0dd..5475a3e52c 100644
--- a/drivers/common/idpf/idpf_common_device.c
+++ b/drivers/common/idpf/idpf_common_device.c
@@ -652,4 +652,21 @@ idpf_vport_info_init(struct idpf_vport *vport,
 	return 0;
 }
 
+void
+idpf_vport_stats_update(struct virtchnl2_vport_stats *oes, struct virtchnl2_vport_stats *nes)
+{
+	nes->rx_bytes = nes->rx_bytes - oes->rx_bytes;
+	nes->rx_unicast = nes->rx_unicast - oes->rx_unicast;
+	nes->rx_multicast = nes->rx_multicast - oes->rx_multicast;
+	nes->rx_broadcast = nes->rx_broadcast - oes->rx_broadcast;
+	nes->rx_errors = nes->rx_errors - oes->rx_errors;
+	nes->rx_discards = nes->rx_discards - oes->rx_discards;
+	nes->tx_bytes = nes->tx_bytes - oes->tx_bytes;
+	nes->tx_unicast = nes->tx_unicast - oes->tx_unicast;
+	nes->tx_multicast = nes->tx_multicast - oes->tx_multicast;
+	nes->tx_broadcast = nes->tx_broadcast - oes->tx_broadcast;
+	nes->tx_errors = nes->tx_errors - oes->tx_errors;
+	nes->tx_discards = nes->tx_discards - oes->tx_discards;
+}
+
 RTE_LOG_REGISTER_SUFFIX(idpf_common_logtype, common, NOTICE);
diff --git a/drivers/common/idpf/idpf_common_device.h b/drivers/common/idpf/idpf_common_device.h
index 545117df79..1d8e7d405a 100644
--- a/drivers/common/idpf/idpf_common_device.h
+++ b/drivers/common/idpf/idpf_common_device.h
@@ -115,6 +115,8 @@ struct idpf_vport {
 	bool tx_vec_allowed;
 	bool rx_use_avx512;
 	bool tx_use_avx512;
+
+	struct virtchnl2_vport_stats eth_stats_offset;
 };
 
 /* Message type read in virtual channel from PF */
@@ -191,5 +193,7 @@ int idpf_vport_irq_unmap_config(struct idpf_vport *vport, uint16_t nb_rx_queues)
 __rte_internal
 int idpf_vport_info_init(struct idpf_vport *vport,
 			 struct virtchnl2_create_vport *vport_info);
+__rte_internal
+void idpf_vport_stats_update(struct virtchnl2_vport_stats *oes, struct virtchnl2_vport_stats *nes);
 
 #endif /* _IDPF_COMMON_DEVICE_H_ */
diff --git a/drivers/common/idpf/idpf_common_virtchnl.c b/drivers/common/idpf/idpf_common_virtchnl.c
index 31fadefbd3..40cff34c09 100644
--- a/drivers/common/idpf/idpf_common_virtchnl.c
+++ b/drivers/common/idpf/idpf_common_virtchnl.c
@@ -217,6 +217,7 @@ idpf_vc_cmd_execute(struct idpf_adapter *adapter, struct idpf_cmd_info *args)
 	case VIRTCHNL2_OP_UNMAP_QUEUE_VECTOR:
 	case VIRTCHNL2_OP_ALLOC_VECTORS:
 	case VIRTCHNL2_OP_DEALLOC_VECTORS:
+	case VIRTCHNL2_OP_GET_STATS:
 		/* for init virtchnl ops, need to poll the response */
 		err = idpf_vc_one_msg_read(adapter, args->ops, args->out_size, args->out_buffer);
 		clear_cmd(adapter);
@@ -806,6 +807,32 @@ idpf_vc_ptype_info_query(struct idpf_adapter *adapter)
 	return err;
 }
 
+int
+idpf_vc_stats_query(struct idpf_vport *vport,
+		struct virtchnl2_vport_stats **pstats)
+{
+	struct idpf_adapter *adapter = vport->adapter;
+	struct virtchnl2_vport_stats vport_stats;
+	struct idpf_cmd_info args;
+	int err;
+
+	vport_stats.vport_id = vport->vport_id;
+	args.ops = VIRTCHNL2_OP_GET_STATS;
+	args.in_args = (u8 *)&vport_stats;
+	args.in_args_size = sizeof(vport_stats);
+	args.out_buffer = adapter->mbx_resp;
+	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
+
+	err = idpf_vc_cmd_execute(adapter, &args);
+	if (err) {
+		DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_GET_STATS");
+		*pstats = NULL;
+		return err;
+	}
+	*pstats = (struct virtchnl2_vport_stats *)args.out_buffer;
+	return 0;
+}
+
 #define IDPF_RX_BUF_STRIDE		64
 int
 idpf_vc_rxq_config(struct idpf_vport *vport, struct idpf_rx_queue *rxq)
diff --git a/drivers/common/idpf/idpf_common_virtchnl.h b/drivers/common/idpf/idpf_common_virtchnl.h
index c105f02836..6b94fd5b8f 100644
--- a/drivers/common/idpf/idpf_common_virtchnl.h
+++ b/drivers/common/idpf/idpf_common_virtchnl.h
@@ -49,4 +49,7 @@ __rte_internal
 int idpf_vc_rxq_config(struct idpf_vport *vport, struct idpf_rx_queue *rxq);
 __rte_internal
 int idpf_vc_txq_config(struct idpf_vport *vport, struct idpf_tx_queue *txq);
+__rte_internal
+int idpf_vc_stats_query(struct idpf_vport *vport,
+			struct virtchnl2_vport_stats **pstats);
 #endif /* _IDPF_COMMON_VIRTCHNL_H_ */
diff --git a/drivers/common/idpf/version.map b/drivers/common/idpf/version.map
index 8b33130bd6..e6a02828ba 100644
--- a/drivers/common/idpf/version.map
+++ b/drivers/common/idpf/version.map
@@ -46,6 +46,7 @@ INTERNAL {
 	idpf_vc_rss_key_set;
 	idpf_vc_rss_lut_set;
 	idpf_vc_rxq_config;
+	idpf_vc_stats_query;
 	idpf_vc_txq_config;
 	idpf_vc_vectors_alloc;
 	idpf_vc_vectors_dealloc;
@@ -59,6 +60,7 @@ INTERNAL {
 	idpf_vport_irq_map_config;
 	idpf_vport_irq_unmap_config;
 	idpf_vport_rss_config;
+	idpf_vport_stats_update;
 
 	local: *;
 };
diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index 33f5e90743..02ddb0330a 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -140,6 +140,87 @@ idpf_dev_supported_ptypes_get(struct rte_eth_dev *dev __rte_unused)
 	return ptypes;
 }
 
+static uint64_t
+idpf_get_mbuf_alloc_failed_stats(struct rte_eth_dev *dev)
+{
+	uint64_t mbuf_alloc_failed = 0;
+	struct idpf_rx_queue *rxq;
+	int i = 0;
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		rxq = dev->data->rx_queues[i];
+		mbuf_alloc_failed += __atomic_load_n(&rxq->rx_stats.mbuf_alloc_failed,
+						     __ATOMIC_RELAXED);
+	}
+
+	return mbuf_alloc_failed;
+}
+
+static int
+idpf_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
+{
+	struct idpf_vport *vport =
+		(struct idpf_vport *)dev->data->dev_private;
+	struct virtchnl2_vport_stats *pstats = NULL;
+	int ret;
+
+	ret = idpf_vc_stats_query(vport, &pstats);
+	if (ret == 0) {
+		uint8_t crc_stats_len = (dev->data->dev_conf.rxmode.offloads &
+					 RTE_ETH_RX_OFFLOAD_KEEP_CRC) ? 0 :
+					 RTE_ETHER_CRC_LEN;
+
+		idpf_vport_stats_update(&vport->eth_stats_offset, pstats);
+		stats->ipackets = pstats->rx_unicast + pstats->rx_multicast +
+				pstats->rx_broadcast - pstats->rx_discards;
+		stats->opackets = pstats->tx_broadcast + pstats->tx_multicast +
+						pstats->tx_unicast;
+		stats->imissed = pstats->rx_discards;
+		stats->oerrors = pstats->tx_errors + pstats->tx_discards;
+		stats->ibytes = pstats->rx_bytes;
+		stats->ibytes -= stats->ipackets * crc_stats_len;
+		stats->obytes = pstats->tx_bytes;
+
+		dev->data->rx_mbuf_alloc_failed = idpf_get_mbuf_alloc_failed_stats(dev);
+		stats->rx_nombuf = dev->data->rx_mbuf_alloc_failed;
+	} else {
+		PMD_DRV_LOG(ERR, "Get statistics failed");
+	}
+	return ret;
+}
+
+static void
+idpf_reset_mbuf_alloc_failed_stats(struct rte_eth_dev *dev)
+{
+	struct idpf_rx_queue *rxq;
+	int i;
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		rxq = dev->data->rx_queues[i];
+		__atomic_store_n(&rxq->rx_stats.mbuf_alloc_failed, 0, __ATOMIC_RELAXED);
+	}
+}
+
+static int
+idpf_dev_stats_reset(struct rte_eth_dev *dev)
+{
+	struct idpf_vport *vport =
+		(struct idpf_vport *)dev->data->dev_private;
+	struct virtchnl2_vport_stats *pstats = NULL;
+	int ret;
+
+	ret = idpf_vc_stats_query(vport, &pstats);
+	if (ret != 0)
+		return ret;
+
+	/* set stats offset base on current values */
+	vport->eth_stats_offset = *pstats;
+
+	idpf_reset_mbuf_alloc_failed_stats(dev);
+
+	return 0;
+}
+
 static int
 idpf_init_rss(struct idpf_vport *vport)
 {
@@ -327,6 +408,9 @@ idpf_dev_start(struct rte_eth_dev *dev)
 		goto err_vport;
 	}
 
+	if (idpf_dev_stats_reset(dev))
+		PMD_DRV_LOG(ERR, "Failed to reset stats");
+
 	vport->stopped = 0;
 
 	return 0;
@@ -606,6 +690,8 @@ static const struct eth_dev_ops idpf_eth_dev_ops = {
 	.tx_queue_release		= idpf_dev_tx_queue_release,
 	.mtu_set			= idpf_dev_mtu_set,
 	.dev_supported_ptypes_get	= idpf_dev_supported_ptypes_get,
+	.stats_get			= idpf_dev_stats_get,
+	.stats_reset			= idpf_dev_stats_reset,
 };
 
 static uint16_t
-- 
2.25.1


^ permalink raw reply	[flat|nested] 63+ messages in thread
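
rx_nombuf in this patch is aggregated from per-queue counters that the datapath bumps with relaxed atomics; only eventual visibility is needed, not ordering. A standalone sketch of the same pattern, with hypothetical names and a fixed queue count:

#include <stdint.h>

#define HYP_NB_QUEUES 4

/* Hypothetical per-queue failure counters: the datapath increments its
 * own slot, the control path sums or clears all slots at query time.
 */
static uint64_t hyp_alloc_failed[HYP_NB_QUEUES];

static inline void
hyp_count_alloc_failure(unsigned int qid)
{
        __atomic_fetch_add(&hyp_alloc_failed[qid], 1, __ATOMIC_RELAXED);
}

static uint64_t
hyp_sum_alloc_failures(void)
{
        uint64_t total = 0;
        unsigned int q;

        for (q = 0; q < HYP_NB_QUEUES; q++)
                total += __atomic_load_n(&hyp_alloc_failed[q], __ATOMIC_RELAXED);
        return total;
}

static void
hyp_clear_alloc_failures(void)
{
        unsigned int q;

        for (q = 0; q < HYP_NB_QUEUES; q++)
                __atomic_store_n(&hyp_alloc_failed[q], 0, __ATOMIC_RELAXED);
}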

* [PATCH v6 2/6] common/idpf: add RSS set/get ops
  2023-02-07 10:16           ` [PATCH v6 0/6] add idpf pmd enhancement features Mingxia Liu
  2023-02-07 10:16             ` [PATCH v6 1/6] common/idpf: add hw statistics Mingxia Liu
@ 2023-02-07 10:16             ` Mingxia Liu
  2023-02-07 10:16             ` [PATCH v6 3/6] common/idpf: support single q scatter RX datapath Mingxia Liu
                               ` (5 subsequent siblings)
  7 siblings, 0 replies; 63+ messages in thread
From: Mingxia Liu @ 2023-02-07 10:16 UTC (permalink / raw)
  To: dev, qi.z.zhang, jingjing.wu, beilei.xing; +Cc: Mingxia Liu

Add support for these device ops:
- rss_reta_update
- rss_reta_query
- rss_hash_update
- rss_hash_conf_get
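
The four ops above implement the generic ethdev RSS API. A minimal caller-side sketch that spreads the redirection table evenly over the Rx queues (illustrative only; it assumes a table of at most 512 entries):

#include <errno.h>
#include <string.h>
#include <rte_ethdev.h>

static int
spread_reta(uint16_t port_id, uint16_t nb_rxq)
{
        struct rte_eth_rss_reta_entry64 reta_conf[8];
        struct rte_eth_dev_info info;
        uint16_t i;
        int ret;

        if (nb_rxq == 0)
                return -EINVAL;
        ret = rte_eth_dev_info_get(port_id, &info);
        if (ret != 0)
                return ret;
        if (info.reta_size > RTE_DIM(reta_conf) * RTE_ETH_RETA_GROUP_SIZE)
                return -EINVAL; /* this sketch only handles small tables */

        memset(reta_conf, 0, sizeof(reta_conf));
        for (i = 0; i < info.reta_size; i++) {
                uint16_t idx = i / RTE_ETH_RETA_GROUP_SIZE;
                uint16_t shift = i % RTE_ETH_RETA_GROUP_SIZE;

                reta_conf[idx].mask |= 1ULL << shift;
                reta_conf[idx].reta[shift] = i % nb_rxq;
        }

        return rte_eth_dev_rss_reta_update(port_id, reta_conf, info.reta_size);
}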

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/common/idpf/idpf_common_device.h   |   1 +
 drivers/common/idpf/idpf_common_virtchnl.c | 119 +++++++++
 drivers/common/idpf/idpf_common_virtchnl.h |   6 +
 drivers/common/idpf/version.map            |   3 +
 drivers/net/idpf/idpf_ethdev.c             | 268 +++++++++++++++++++++
 drivers/net/idpf/idpf_ethdev.h             |   3 +-
 6 files changed, 399 insertions(+), 1 deletion(-)

diff --git a/drivers/common/idpf/idpf_common_device.h b/drivers/common/idpf/idpf_common_device.h
index 1d8e7d405a..7abc4d2a3a 100644
--- a/drivers/common/idpf/idpf_common_device.h
+++ b/drivers/common/idpf/idpf_common_device.h
@@ -98,6 +98,7 @@ struct idpf_vport {
 	uint32_t *rss_lut;
 	uint8_t *rss_key;
 	uint64_t rss_hf;
+	uint64_t last_general_rss_hf;
 
 	/* MSIX info*/
 	struct virtchnl2_queue_vector *qv_map; /* queue vector mapping */
diff --git a/drivers/common/idpf/idpf_common_virtchnl.c b/drivers/common/idpf/idpf_common_virtchnl.c
index 40cff34c09..10cfa33704 100644
--- a/drivers/common/idpf/idpf_common_virtchnl.c
+++ b/drivers/common/idpf/idpf_common_virtchnl.c
@@ -218,6 +218,9 @@ idpf_vc_cmd_execute(struct idpf_adapter *adapter, struct idpf_cmd_info *args)
 	case VIRTCHNL2_OP_ALLOC_VECTORS:
 	case VIRTCHNL2_OP_DEALLOC_VECTORS:
 	case VIRTCHNL2_OP_GET_STATS:
+	case VIRTCHNL2_OP_GET_RSS_KEY:
+	case VIRTCHNL2_OP_GET_RSS_HASH:
+	case VIRTCHNL2_OP_GET_RSS_LUT:
 		/* for init virtchnl ops, need to poll the response */
 		err = idpf_vc_one_msg_read(adapter, args->ops, args->out_size, args->out_buffer);
 		clear_cmd(adapter);
@@ -448,6 +451,48 @@ idpf_vc_rss_key_set(struct idpf_vport *vport)
 	return err;
 }
 
+int idpf_vc_rss_key_get(struct idpf_vport *vport)
+{
+	struct idpf_adapter *adapter = vport->adapter;
+	struct virtchnl2_rss_key *rss_key_ret;
+	struct virtchnl2_rss_key rss_key;
+	struct idpf_cmd_info args;
+	int err;
+
+	memset(&rss_key, 0, sizeof(rss_key));
+	rss_key.vport_id = vport->vport_id;
+
+	memset(&args, 0, sizeof(args));
+	args.ops = VIRTCHNL2_OP_GET_RSS_KEY;
+	args.in_args = (uint8_t *)&rss_key;
+	args.in_args_size = sizeof(rss_key);
+	args.out_buffer = adapter->mbx_resp;
+	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
+
+	err = idpf_vc_cmd_execute(adapter, &args);
+
+	if (!err) {
+		rss_key_ret = (struct virtchnl2_rss_key *)args.out_buffer;
+		if (rss_key_ret->key_len != vport->rss_key_size) {
+			rte_free(vport->rss_key);
+			vport->rss_key = NULL;
+			vport->rss_key_size = RTE_MIN(IDPF_RSS_KEY_LEN,
+						      rss_key_ret->key_len);
+			vport->rss_key = rte_zmalloc("rss_key", vport->rss_key_size, 0);
+			if (!vport->rss_key) {
+				vport->rss_key_size = 0;
+				DRV_LOG(ERR, "Failed to allocate RSS key");
+				return -ENOMEM;
+			}
+		}
+		rte_memcpy(vport->rss_key, rss_key_ret->key, vport->rss_key_size);
+	} else {
+		DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_GET_RSS_KEY");
+	}
+
+	return err;
+}
+
 int
 idpf_vc_rss_lut_set(struct idpf_vport *vport)
 {
@@ -482,6 +527,80 @@ idpf_vc_rss_lut_set(struct idpf_vport *vport)
 	return err;
 }
 
+int
+idpf_vc_rss_lut_get(struct idpf_vport *vport)
+{
+	struct idpf_adapter *adapter = vport->adapter;
+	struct virtchnl2_rss_lut *rss_lut_ret;
+	struct virtchnl2_rss_lut rss_lut;
+	struct idpf_cmd_info args;
+	int err;
+
+	memset(&rss_lut, 0, sizeof(rss_lut));
+	rss_lut.vport_id = vport->vport_id;
+
+	memset(&args, 0, sizeof(args));
+	args.ops = VIRTCHNL2_OP_GET_RSS_LUT;
+	args.in_args = (uint8_t *)&rss_lut;
+	args.in_args_size = sizeof(rss_lut);
+	args.out_buffer = adapter->mbx_resp;
+	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
+
+	err = idpf_vc_cmd_execute(adapter, &args);
+
+	if (!err) {
+		rss_lut_ret = (struct virtchnl2_rss_lut *)args.out_buffer;
+		if (rss_lut_ret->lut_entries != vport->rss_lut_size) {
+			rte_free(vport->rss_lut);
+			vport->rss_lut = NULL;
+			vport->rss_lut = rte_zmalloc("rss_lut",
+				     sizeof(uint32_t) * rss_lut_ret->lut_entries, 0);
+			if (vport->rss_lut == NULL) {
+				DRV_LOG(ERR, "Failed to allocate RSS lut");
+				return -ENOMEM;
+			}
+		}
+		rte_memcpy(vport->rss_lut, rss_lut_ret->lut, rss_lut_ret->lut_entries);
+		vport->rss_lut_size = rss_lut_ret->lut_entries;
+	} else {
+		DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_GET_RSS_LUT");
+	}
+
+	return err;
+}
+
+int
+idpf_vc_rss_hash_get(struct idpf_vport *vport)
+{
+	struct idpf_adapter *adapter = vport->adapter;
+	struct virtchnl2_rss_hash *rss_hash_ret;
+	struct virtchnl2_rss_hash rss_hash;
+	struct idpf_cmd_info args;
+	int err;
+
+	memset(&rss_hash, 0, sizeof(rss_hash));
+	rss_hash.ptype_groups = vport->rss_hf;
+	rss_hash.vport_id = vport->vport_id;
+
+	memset(&args, 0, sizeof(args));
+	args.ops = VIRTCHNL2_OP_GET_RSS_HASH;
+	args.in_args = (uint8_t *)&rss_hash;
+	args.in_args_size = sizeof(rss_hash);
+	args.out_buffer = adapter->mbx_resp;
+	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
+
+	err = idpf_vc_cmd_execute(adapter, &args);
+
+	if (!err) {
+		rss_hash_ret = (struct virtchnl2_rss_hash *)args.out_buffer;
+		vport->rss_hf = rss_hash_ret->ptype_groups;
+	} else {
+		DRV_LOG(ERR, "Failed to execute command of OP_GET_RSS_HASH");
+	}
+
+	return err;
+}
+
 int
 idpf_vc_rss_hash_set(struct idpf_vport *vport)
 {
diff --git a/drivers/common/idpf/idpf_common_virtchnl.h b/drivers/common/idpf/idpf_common_virtchnl.h
index 6b94fd5b8f..205d1a932d 100644
--- a/drivers/common/idpf/idpf_common_virtchnl.h
+++ b/drivers/common/idpf/idpf_common_virtchnl.h
@@ -52,4 +52,10 @@ int idpf_vc_txq_config(struct idpf_vport *vport, struct idpf_tx_queue *txq);
 __rte_internal
 int idpf_vc_stats_query(struct idpf_vport *vport,
 			struct virtchnl2_vport_stats **pstats);
+__rte_internal
+int idpf_vc_rss_key_get(struct idpf_vport *vport);
+__rte_internal
+int idpf_vc_rss_lut_get(struct idpf_vport *vport);
+__rte_internal
+int idpf_vc_rss_hash_get(struct idpf_vport *vport);
 #endif /* _IDPF_COMMON_VIRTCHNL_H_ */
diff --git a/drivers/common/idpf/version.map b/drivers/common/idpf/version.map
index e6a02828ba..f6c92e7e57 100644
--- a/drivers/common/idpf/version.map
+++ b/drivers/common/idpf/version.map
@@ -42,8 +42,11 @@ INTERNAL {
 	idpf_vc_ptype_info_query;
 	idpf_vc_queue_switch;
 	idpf_vc_queues_ena_dis;
+	idpf_vc_rss_hash_get;
 	idpf_vc_rss_hash_set;
+	idpf_vc_rss_key_get;
 	idpf_vc_rss_key_set;
+	idpf_vc_rss_lut_get;
 	idpf_vc_rss_lut_set;
 	idpf_vc_rxq_config;
 	idpf_vc_stats_query;
diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index 02ddb0330a..7262109d0a 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -29,6 +29,56 @@ static const char * const idpf_valid_args[] = {
 	NULL
 };
 
+static const uint64_t idpf_map_hena_rss[] = {
+	[IDPF_HASH_NONF_UNICAST_IPV4_UDP] =
+			RTE_ETH_RSS_NONFRAG_IPV4_UDP,
+	[IDPF_HASH_NONF_MULTICAST_IPV4_UDP] =
+			RTE_ETH_RSS_NONFRAG_IPV4_UDP,
+	[IDPF_HASH_NONF_IPV4_UDP] =
+			RTE_ETH_RSS_NONFRAG_IPV4_UDP,
+	[IDPF_HASH_NONF_IPV4_TCP_SYN_NO_ACK] =
+			RTE_ETH_RSS_NONFRAG_IPV4_TCP,
+	[IDPF_HASH_NONF_IPV4_TCP] =
+			RTE_ETH_RSS_NONFRAG_IPV4_TCP,
+	[IDPF_HASH_NONF_IPV4_SCTP] =
+			RTE_ETH_RSS_NONFRAG_IPV4_SCTP,
+	[IDPF_HASH_NONF_IPV4_OTHER] =
+			RTE_ETH_RSS_NONFRAG_IPV4_OTHER,
+	[IDPF_HASH_FRAG_IPV4] = RTE_ETH_RSS_FRAG_IPV4,
+
+	/* IPv6 */
+	[IDPF_HASH_NONF_UNICAST_IPV6_UDP] =
+			RTE_ETH_RSS_NONFRAG_IPV6_UDP,
+	[IDPF_HASH_NONF_MULTICAST_IPV6_UDP] =
+			RTE_ETH_RSS_NONFRAG_IPV6_UDP,
+	[IDPF_HASH_NONF_IPV6_UDP] =
+			RTE_ETH_RSS_NONFRAG_IPV6_UDP,
+	[IDPF_HASH_NONF_IPV6_TCP_SYN_NO_ACK] =
+			RTE_ETH_RSS_NONFRAG_IPV6_TCP,
+	[IDPF_HASH_NONF_IPV6_TCP] =
+			RTE_ETH_RSS_NONFRAG_IPV6_TCP,
+	[IDPF_HASH_NONF_IPV6_SCTP] =
+			RTE_ETH_RSS_NONFRAG_IPV6_SCTP,
+	[IDPF_HASH_NONF_IPV6_OTHER] =
+			RTE_ETH_RSS_NONFRAG_IPV6_OTHER,
+	[IDPF_HASH_FRAG_IPV6] = RTE_ETH_RSS_FRAG_IPV6,
+
+	/* L2 Payload */
+	[IDPF_HASH_L2_PAYLOAD] = RTE_ETH_RSS_L2_PAYLOAD
+};
+
+static const uint64_t idpf_ipv4_rss = RTE_ETH_RSS_NONFRAG_IPV4_UDP |
+			  RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+			  RTE_ETH_RSS_NONFRAG_IPV4_SCTP |
+			  RTE_ETH_RSS_NONFRAG_IPV4_OTHER |
+			  RTE_ETH_RSS_FRAG_IPV4;
+
+static const uint64_t idpf_ipv6_rss = RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+			  RTE_ETH_RSS_NONFRAG_IPV6_TCP |
+			  RTE_ETH_RSS_NONFRAG_IPV6_SCTP |
+			  RTE_ETH_RSS_NONFRAG_IPV6_OTHER |
+			  RTE_ETH_RSS_FRAG_IPV6;
+
 static int
 idpf_dev_link_update(struct rte_eth_dev *dev,
 		     __rte_unused int wait_to_complete)
@@ -59,6 +109,9 @@ idpf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->max_mtu = vport->max_mtu;
 	dev_info->min_mtu = RTE_ETHER_MIN_MTU;
 
+	dev_info->hash_key_size = vport->rss_key_size;
+	dev_info->reta_size = vport->rss_lut_size;
+
 	dev_info->flow_type_rss_offloads = IDPF_RSS_OFFLOAD_ALL;
 
 	dev_info->rx_offload_capa =
@@ -221,6 +274,36 @@ idpf_dev_stats_reset(struct rte_eth_dev *dev)
 	return 0;
 }
 
+static int idpf_config_rss_hf(struct idpf_vport *vport, uint64_t rss_hf)
+{
+	uint64_t hena = 0;
+	uint16_t i;
+
+	/**
+	 * RTE_ETH_RSS_IPV4 and RTE_ETH_RSS_IPV6 can be considered as 2
+	 * generalizations of all other IPv4 and IPv6 RSS types.
+	 */
+	if (rss_hf & RTE_ETH_RSS_IPV4)
+		rss_hf |= idpf_ipv4_rss;
+
+	if (rss_hf & RTE_ETH_RSS_IPV6)
+		rss_hf |= idpf_ipv6_rss;
+
+	for (i = 0; i < RTE_DIM(idpf_map_hena_rss); i++) {
+		if (idpf_map_hena_rss[i] & rss_hf)
+			hena |= BIT_ULL(i);
+	}
+
+	/**
+	 * At present, cp doesn't process the virtual channel msg of rss_hf configuration,
+	 * tips are given below.
+	 */
+	if (hena != vport->rss_hf)
+		PMD_DRV_LOG(WARNING, "Updating RSS Hash Function is not supported at present.");
+
+	return 0;
+}
+
 static int
 idpf_init_rss(struct idpf_vport *vport)
 {
@@ -257,6 +340,187 @@ idpf_init_rss(struct idpf_vport *vport)
 	return ret;
 }
 
+static int
+idpf_rss_reta_update(struct rte_eth_dev *dev,
+		     struct rte_eth_rss_reta_entry64 *reta_conf,
+		     uint16_t reta_size)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_adapter *adapter = vport->adapter;
+	uint16_t idx, shift;
+	int ret = 0;
+	uint16_t i;
+
+	if (adapter->caps.rss_caps == 0 || dev->data->nb_rx_queues == 0) {
+		PMD_DRV_LOG(DEBUG, "RSS is not supported");
+		return -ENOTSUP;
+	}
+
+	if (reta_size != vport->rss_lut_size) {
+		PMD_DRV_LOG(ERR, "The size of hash lookup table configured "
+				 "(%d) doesn't match the number of hardware can "
+				 "support (%d)",
+			    reta_size, vport->rss_lut_size);
+		return -EINVAL;
+	}
+
+	for (i = 0; i < reta_size; i++) {
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
+		if (reta_conf[idx].mask & (1ULL << shift))
+			vport->rss_lut[i] = reta_conf[idx].reta[shift];
+	}
+
+	/* send virtchnl ops to configure RSS */
+	ret = idpf_vc_rss_lut_set(vport);
+	if (ret)
+		PMD_INIT_LOG(ERR, "Failed to configure RSS lut");
+
+	return ret;
+}
+
+static int
+idpf_rss_reta_query(struct rte_eth_dev *dev,
+		    struct rte_eth_rss_reta_entry64 *reta_conf,
+		    uint16_t reta_size)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_adapter *adapter = vport->adapter;
+	uint16_t idx, shift;
+	int ret = 0;
+	uint16_t i;
+
+	if (adapter->caps.rss_caps == 0 || dev->data->nb_rx_queues == 0) {
+		PMD_DRV_LOG(DEBUG, "RSS is not supported");
+		return -ENOTSUP;
+	}
+
+	if (reta_size != vport->rss_lut_size) {
+		PMD_DRV_LOG(ERR, "The size of hash lookup table configured "
+			"(%d) doesn't match the number of hardware can "
+			"support (%d)", reta_size, vport->rss_lut_size);
+		return -EINVAL;
+	}
+
+	ret = idpf_vc_rss_lut_get(vport);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "Failed to get RSS LUT");
+		return ret;
+	}
+
+	for (i = 0; i < reta_size; i++) {
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
+		if (reta_conf[idx].mask & (1ULL << shift))
+			reta_conf[idx].reta[shift] = vport->rss_lut[i];
+	}
+
+	return 0;
+}
+
+static int
+idpf_rss_hash_update(struct rte_eth_dev *dev,
+		     struct rte_eth_rss_conf *rss_conf)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_adapter *adapter = vport->adapter;
+	int ret = 0;
+
+	if (adapter->caps.rss_caps == 0 || dev->data->nb_rx_queues == 0) {
+		PMD_DRV_LOG(DEBUG, "RSS is not supported");
+		return -ENOTSUP;
+	}
+
+	if (!rss_conf->rss_key || rss_conf->rss_key_len == 0) {
+		PMD_DRV_LOG(DEBUG, "No key to be configured");
+		goto skip_rss_key;
+	} else if (rss_conf->rss_key_len != vport->rss_key_size) {
+		PMD_DRV_LOG(ERR, "The size of hash key configured "
+				 "(%d) doesn't match the size of hardware can "
+				 "support (%d)",
+			    rss_conf->rss_key_len,
+			    vport->rss_key_size);
+		return -EINVAL;
+	}
+
+	rte_memcpy(vport->rss_key, rss_conf->rss_key,
+		   vport->rss_key_size);
+	ret = idpf_vc_rss_key_set(vport);
+	if (ret != 0) {
+		PMD_INIT_LOG(ERR, "Failed to configure RSS key");
+		return ret;
+	}
+
+skip_rss_key:
+	ret = idpf_config_rss_hf(vport, rss_conf->rss_hf);
+	if (ret != 0) {
+		PMD_INIT_LOG(ERR, "Failed to configure RSS hash");
+		return ret;
+	}
+
+	return 0;
+}
+
+static uint64_t
+idpf_map_general_rss_hf(uint64_t config_rss_hf, uint64_t last_general_rss_hf)
+{
+	uint64_t valid_rss_hf = 0;
+	uint16_t i;
+
+	for (i = 0; i < RTE_DIM(idpf_map_hena_rss); i++) {
+		uint64_t bit = BIT_ULL(i);
+
+		if (bit & config_rss_hf)
+			valid_rss_hf |= idpf_map_hena_rss[i];
+	}
+
+	if (valid_rss_hf & idpf_ipv4_rss)
+		valid_rss_hf |= last_general_rss_hf & RTE_ETH_RSS_IPV4;
+
+	if (valid_rss_hf & idpf_ipv6_rss)
+		valid_rss_hf |= last_general_rss_hf & RTE_ETH_RSS_IPV6;
+
+	return valid_rss_hf;
+}
+
+static int
+idpf_rss_hash_conf_get(struct rte_eth_dev *dev,
+		       struct rte_eth_rss_conf *rss_conf)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_adapter *adapter = vport->adapter;
+	int ret = 0;
+
+	if (adapter->caps.rss_caps == 0 || dev->data->nb_rx_queues == 0) {
+		PMD_DRV_LOG(DEBUG, "RSS is not supported");
+		return -ENOTSUP;
+	}
+
+	ret = idpf_vc_rss_hash_get(vport);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "Failed to get RSS hf");
+		return ret;
+	}
+
+	rss_conf->rss_hf = idpf_map_general_rss_hf(vport->rss_hf, vport->last_general_rss_hf);
+
+	if (!rss_conf->rss_key)
+		return 0;
+
+	ret = idpf_vc_rss_key_get(vport);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "Failed to get RSS key");
+		return ret;
+	}
+
+	if (rss_conf->rss_key_len > vport->rss_key_size)
+		rss_conf->rss_key_len = vport->rss_key_size;
+
+	rte_memcpy(rss_conf->rss_key, vport->rss_key, rss_conf->rss_key_len);
+
+	return 0;
+}
+
 static int
 idpf_dev_configure(struct rte_eth_dev *dev)
 {
@@ -692,6 +956,10 @@ static const struct eth_dev_ops idpf_eth_dev_ops = {
 	.dev_supported_ptypes_get	= idpf_dev_supported_ptypes_get,
 	.stats_get			= idpf_dev_stats_get,
 	.stats_reset			= idpf_dev_stats_reset,
+	.reta_update			= idpf_rss_reta_update,
+	.reta_query			= idpf_rss_reta_query,
+	.rss_hash_update		= idpf_rss_hash_update,
+	.rss_hash_conf_get		= idpf_rss_hash_conf_get,
 };
 
 static uint16_t
diff --git a/drivers/net/idpf/idpf_ethdev.h b/drivers/net/idpf/idpf_ethdev.h
index d791d402fb..839a2bd82c 100644
--- a/drivers/net/idpf/idpf_ethdev.h
+++ b/drivers/net/idpf/idpf_ethdev.h
@@ -48,7 +48,8 @@
 		RTE_ETH_RSS_NONFRAG_IPV6_TCP    |	\
 		RTE_ETH_RSS_NONFRAG_IPV6_UDP    |	\
 		RTE_ETH_RSS_NONFRAG_IPV6_SCTP   |	\
-		RTE_ETH_RSS_NONFRAG_IPV6_OTHER)
+		RTE_ETH_RSS_NONFRAG_IPV6_OTHER  |	\
+		RTE_ETH_RSS_L2_PAYLOAD)
 
 #define IDPF_ADAPTER_NAME_LEN	(PCI_PRI_STR_SIZE + 1)
 
-- 
2.25.1


^ permalink raw reply	[flat|nested] 63+ messages in thread
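
The driver maps between RTE_ETH_RSS_* flags and a hardware hash-entry bitmask through a table indexed by bit position, folding flags into bits on update and expanding bits back into flags on query. A reduced, hypothetical three-entry sketch of that mapping in both directions:

#include <stdint.h>
#include <rte_ethdev.h>

/* Hypothetical table: each hardware hash-entry bit maps to one flag. */
static const uint64_t hyp_hena_to_rss[] = {
        [0] = RTE_ETH_RSS_NONFRAG_IPV4_UDP,
        [1] = RTE_ETH_RSS_NONFRAG_IPV4_TCP,
        [2] = RTE_ETH_RSS_FRAG_IPV4,
};

/* Fold requested RTE_ETH_RSS_* flags into a hardware bitmask. */
static uint64_t
hyp_rss_to_hena(uint64_t rss_hf)
{
        uint64_t hena = 0;
        unsigned int i;

        for (i = 0; i < RTE_DIM(hyp_hena_to_rss); i++)
                if (hyp_hena_to_rss[i] & rss_hf)
                        hena |= 1ULL << i;
        return hena;
}

/* Expand a hardware bitmask back into RTE_ETH_RSS_* flags. */
static uint64_t
hyp_hena_to_rss_hf(uint64_t hena)
{
        uint64_t rss_hf = 0;
        unsigned int i;

        for (i = 0; i < RTE_DIM(hyp_hena_to_rss); i++)
                if (hena & (1ULL << i))
                        rss_hf |= hyp_hena_to_rss[i];
        return rss_hf;
}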

* [PATCH v6 3/6] common/idpf: support single q scatter RX datapath
  2023-02-07 10:16           ` [PATCH v6 0/6] add idpf pmd enhancement features Mingxia Liu
  2023-02-07 10:16             ` [PATCH v6 1/6] common/idpf: add hw statistics Mingxia Liu
  2023-02-07 10:16             ` [PATCH v6 2/6] common/idpf: add RSS set/get ops Mingxia Liu
@ 2023-02-07 10:16             ` Mingxia Liu
  2023-02-07 10:16             ` [PATCH v6 4/6] common/idpf: add rss_offload hash in singleq rx Mingxia Liu
                               ` (4 subsequent siblings)
  7 siblings, 0 replies; 63+ messages in thread
From: Mingxia Liu @ 2023-02-07 10:16 UTC (permalink / raw)
  To: dev, qi.z.zhang, jingjing.wu, beilei.xing; +Cc: Mingxia Liu, Wenjun Wu

This patch adds a scatter receive function for the single queue Rx data path.
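
A minimal caller-side sketch, using only the generic ethdev/mbuf API: request the SCATTER offload when the port advertises it (the queue init code in this patch also enables scattered Rx on its own when a frame no longer fits in one buffer), then walk the resulting mbuf chain after rte_eth_rx_burst().

#include <string.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>

/* Request scattered Rx at configure time if the port supports it. */
static int
configure_scatter(uint16_t port_id, uint16_t nb_rxq, uint16_t nb_txq)
{
        struct rte_eth_conf conf;
        struct rte_eth_dev_info info;
        int ret;

        ret = rte_eth_dev_info_get(port_id, &info);
        if (ret != 0)
                return ret;

        memset(&conf, 0, sizeof(conf));
        if (info.rx_offload_capa & RTE_ETH_RX_OFFLOAD_SCATTER)
                conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_SCATTER;

        return rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &conf);
}

/* Sum the data length of every segment in a received chain; the result
 * should equal pkt_len of the first segment.
 */
static uint32_t
chain_data_len(const struct rte_mbuf *m)
{
        uint32_t len = 0;

        for (; m != NULL; m = m->next)
                len += m->data_len;
        return len;
}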

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
---
 drivers/common/idpf/idpf_common_rxtx.c | 135 +++++++++++++++++++++++++
 drivers/common/idpf/idpf_common_rxtx.h |   3 +
 drivers/common/idpf/version.map        |   1 +
 drivers/net/idpf/idpf_ethdev.c         |   3 +-
 drivers/net/idpf/idpf_rxtx.c           |  28 +++++
 drivers/net/idpf/idpf_rxtx.h           |   2 +
 6 files changed, 171 insertions(+), 1 deletion(-)

diff --git a/drivers/common/idpf/idpf_common_rxtx.c b/drivers/common/idpf/idpf_common_rxtx.c
index fdac2c3114..9303b51cce 100644
--- a/drivers/common/idpf/idpf_common_rxtx.c
+++ b/drivers/common/idpf/idpf_common_rxtx.c
@@ -1146,6 +1146,141 @@ idpf_dp_singleq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 	return nb_rx;
 }
 
+uint16_t
+idpf_dp_singleq_recv_scatter_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
+			       uint16_t nb_pkts)
+{
+	struct idpf_rx_queue *rxq = rx_queue;
+	volatile union virtchnl2_rx_desc *rx_ring = rxq->rx_ring;
+	volatile union virtchnl2_rx_desc *rxdp;
+	union virtchnl2_rx_desc rxd;
+	struct idpf_adapter *ad;
+	struct rte_mbuf *first_seg = rxq->pkt_first_seg;
+	struct rte_mbuf *last_seg = rxq->pkt_last_seg;
+	struct rte_mbuf *rxm;
+	struct rte_mbuf *nmb;
+	struct rte_eth_dev *dev;
+	const uint32_t *ptype_tbl = rxq->adapter->ptype_tbl;
+	uint16_t rx_id = rxq->rx_tail;
+	uint16_t rx_packet_len;
+	uint16_t nb_hold = 0;
+	uint16_t rx_status0;
+	uint16_t nb_rx = 0;
+	uint64_t pkt_flags;
+	uint64_t dma_addr;
+	uint64_t ts_ns;
+
+	ad = rxq->adapter;
+
+	if (unlikely(!rxq) || unlikely(!rxq->q_started))
+		return nb_rx;
+
+	while (nb_rx < nb_pkts) {
+		rxdp = &rx_ring[rx_id];
+		rx_status0 = rte_le_to_cpu_16(rxdp->flex_nic_wb.status_error0);
+
+		/* Check the DD bit first */
+		if (!(rx_status0 & (1 << VIRTCHNL2_RX_FLEX_DESC_STATUS0_DD_S)))
+			break;
+
+		nmb = rte_mbuf_raw_alloc(rxq->mp);
+		if (unlikely(!nmb)) {
+			__atomic_fetch_add(&rxq->rx_stats.mbuf_alloc_failed, 1, __ATOMIC_RELAXED);
+			RX_LOG(DEBUG, "RX mbuf alloc failed port_id=%u "
+			       "queue_id=%u", rxq->port_id, rxq->queue_id);
+			break;
+		}
+
+		rxd = *rxdp;
+
+		nb_hold++;
+		rxm = rxq->sw_ring[rx_id];
+		rxq->sw_ring[rx_id] = nmb;
+		rx_id++;
+		if (unlikely(rx_id == rxq->nb_rx_desc))
+			rx_id = 0;
+
+		/* Prefetch next mbuf */
+		rte_prefetch0(rxq->sw_ring[rx_id]);
+
+		/* When next RX descriptor is on a cache line boundary,
+		 * prefetch the next 4 RX descriptors and next 8 pointers
+		 * to mbufs.
+		 */
+		if ((rx_id & 0x3) == 0) {
+			rte_prefetch0(&rx_ring[rx_id]);
+			rte_prefetch0(rxq->sw_ring[rx_id]);
+		}
+		dma_addr =
+			rte_cpu_to_le_64(rte_mbuf_data_iova_default(nmb));
+		rxdp->read.hdr_addr = 0;
+		rxdp->read.pkt_addr = dma_addr;
+		rx_packet_len = (rte_cpu_to_le_16(rxd.flex_nic_wb.pkt_len) &
+				 VIRTCHNL2_RX_FLEX_DESC_PKT_LEN_M);
+		rxm->data_len = rx_packet_len;
+		rxm->data_off = RTE_PKTMBUF_HEADROOM;
+
+		/**
+		 * If this is the first buffer of the received packet, set the
+		 * pointer to the first mbuf of the packet and initialize its
+		 * context. Otherwise, update the total length and the number
+		 * of segments of the current scattered packet, and update the
+		 * pointer to the last mbuf of the current packet.
+		 */
+		if (!first_seg) {
+			first_seg = rxm;
+			first_seg->nb_segs = 1;
+			first_seg->pkt_len = rx_packet_len;
+		} else {
+			first_seg->pkt_len =
+				(uint16_t)(first_seg->pkt_len +
+					   rx_packet_len);
+			first_seg->nb_segs++;
+			last_seg->next = rxm;
+		}
+
+		if (!(rx_status0 & (1 << VIRTCHNL2_RX_FLEX_DESC_STATUS0_EOF_S))) {
+			last_seg = rxm;
+			continue;
+		}
+
+		rxm->next = NULL;
+
+		first_seg->port = rxq->port_id;
+		first_seg->ol_flags = 0;
+		pkt_flags = idpf_rxd_to_pkt_flags(rx_status0);
+		first_seg->packet_type =
+			ptype_tbl[(uint8_t)(rte_cpu_to_le_16(rxd.flex_nic_wb.ptype_flex_flags0) &
+				VIRTCHNL2_RX_FLEX_DESC_PTYPE_M)];
+
+		if (idpf_timestamp_dynflag > 0 &&
+		    (rxq->offloads & IDPF_RX_OFFLOAD_TIMESTAMP) != 0) {
+			/* timestamp */
+			ts_ns = idpf_tstamp_convert_32b_64b(ad,
+				rxq->hw_register_set,
+				rte_le_to_cpu_32(rxd.flex_nic_wb.flex_ts.ts_high));
+			rxq->hw_register_set = 0;
+			*RTE_MBUF_DYNFIELD(rxm,
+					   idpf_timestamp_dynfield_offset,
+					   rte_mbuf_timestamp_t *) = ts_ns;
+			first_seg->ol_flags |= idpf_timestamp_dynflag;
+		}
+
+		first_seg->ol_flags |= pkt_flags;
+		rte_prefetch0(RTE_PTR_ADD(first_seg->buf_addr,
+					  first_seg->data_off));
+		rx_pkts[nb_rx++] = first_seg;
+		first_seg = NULL;
+	}
+	rxq->rx_tail = rx_id;
+	rxq->pkt_first_seg = first_seg;
+	rxq->pkt_last_seg = last_seg;
+
+	idpf_update_rx_tail(rxq, nb_hold, rx_id);
+
+	return nb_rx;
+}
+
 static inline int
 idpf_xmit_cleanup(struct idpf_tx_queue *txq)
 {
diff --git a/drivers/common/idpf/idpf_common_rxtx.h b/drivers/common/idpf/idpf_common_rxtx.h
index 263dab061c..7e6df080e6 100644
--- a/drivers/common/idpf/idpf_common_rxtx.h
+++ b/drivers/common/idpf/idpf_common_rxtx.h
@@ -293,5 +293,8 @@ uint16_t idpf_dp_singleq_xmit_pkts_avx512(void *tx_queue,
 __rte_internal
 uint16_t idpf_dp_splitq_xmit_pkts_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
 					 uint16_t nb_pkts);
+__rte_internal
+uint16_t idpf_dp_singleq_recv_scatter_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
+			  uint16_t nb_pkts);
 
 #endif /* _IDPF_COMMON_RXTX_H_ */
diff --git a/drivers/common/idpf/version.map b/drivers/common/idpf/version.map
index f6c92e7e57..e31f6ff4d9 100644
--- a/drivers/common/idpf/version.map
+++ b/drivers/common/idpf/version.map
@@ -7,6 +7,7 @@ INTERNAL {
 	idpf_dp_prep_pkts;
 	idpf_dp_singleq_recv_pkts;
 	idpf_dp_singleq_recv_pkts_avx512;
+	idpf_dp_singleq_recv_scatter_pkts;
 	idpf_dp_singleq_xmit_pkts;
 	idpf_dp_singleq_xmit_pkts_avx512;
 	idpf_dp_splitq_recv_pkts;
diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index 7262109d0a..11f0ca0085 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -119,7 +119,8 @@ idpf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 		RTE_ETH_RX_OFFLOAD_UDP_CKSUM            |
 		RTE_ETH_RX_OFFLOAD_TCP_CKSUM            |
 		RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM     |
-		RTE_ETH_RX_OFFLOAD_TIMESTAMP;
+		RTE_ETH_RX_OFFLOAD_TIMESTAMP		|
+		RTE_ETH_RX_OFFLOAD_SCATTER;
 
 	dev_info->tx_offload_capa =
 		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM		|
diff --git a/drivers/net/idpf/idpf_rxtx.c b/drivers/net/idpf/idpf_rxtx.c
index 38d9829912..d16acd87fb 100644
--- a/drivers/net/idpf/idpf_rxtx.c
+++ b/drivers/net/idpf/idpf_rxtx.c
@@ -503,6 +503,8 @@ int
 idpf_rx_queue_init(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 {
 	struct idpf_rx_queue *rxq;
+	uint16_t max_pkt_len;
+	uint32_t frame_size;
 	int err;
 
 	if (rx_queue_id >= dev->data->nb_rx_queues)
@@ -516,6 +518,17 @@ idpf_rx_queue_init(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 		return -EINVAL;
 	}
 
+	frame_size = dev->data->mtu + IDPF_ETH_OVERHEAD;
+
+	max_pkt_len =
+	    RTE_MIN((uint32_t)IDPF_SUPPORT_CHAIN_NUM * rxq->rx_buf_len,
+		    frame_size);
+
+	rxq->max_pkt_len = max_pkt_len;
+	if ((dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_SCATTER) ||
+	    frame_size > rxq->rx_buf_len)
+		dev->data->scattered_rx = 1;
+
 	err = idpf_qc_ts_mbuf_register(rxq);
 	if (err != 0) {
 		PMD_DRV_LOG(ERR, "fail to residter timestamp mbuf %u",
@@ -807,6 +820,14 @@ idpf_set_rx_function(struct rte_eth_dev *dev)
 			}
 #endif /* CC_AVX512_SUPPORT */
 		}
+
+		if (dev->data->scattered_rx) {
+			PMD_DRV_LOG(NOTICE,
+				    "Using Single Scalar Scatterd Rx (port %d).",
+				    dev->data->port_id);
+			dev->rx_pkt_burst = idpf_dp_singleq_recv_scatter_pkts;
+			return;
+		}
 		PMD_DRV_LOG(NOTICE,
 			    "Using Single Scalar Rx (port %d).",
 			    dev->data->port_id);
@@ -819,6 +840,13 @@ idpf_set_rx_function(struct rte_eth_dev *dev)
 			    dev->data->port_id);
 		dev->rx_pkt_burst = idpf_dp_splitq_recv_pkts;
 	} else {
+		if (dev->data->scattered_rx) {
+			PMD_DRV_LOG(NOTICE,
+				    "Using Single Scalar Scatterd Rx (port %d).",
+				    dev->data->port_id);
+			dev->rx_pkt_burst = idpf_dp_singleq_recv_scatter_pkts;
+			return;
+		}
 		PMD_DRV_LOG(NOTICE,
 			    "Using Single Scalar Rx (port %d).",
 			    dev->data->port_id);
diff --git a/drivers/net/idpf/idpf_rxtx.h b/drivers/net/idpf/idpf_rxtx.h
index 3a5084dfd6..41a7495083 100644
--- a/drivers/net/idpf/idpf_rxtx.h
+++ b/drivers/net/idpf/idpf_rxtx.h
@@ -23,6 +23,8 @@
 #define IDPF_DEFAULT_TX_RS_THRESH	32
 #define IDPF_DEFAULT_TX_FREE_THRESH	32
 
+#define IDPF_SUPPORT_CHAIN_NUM 5
+
 int idpf_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 			uint16_t nb_desc, unsigned int socket_id,
 			const struct rte_eth_rxconf *rx_conf,
-- 
2.25.1


^ permalink raw reply	[flat|nested] 63+ messages in thread
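
As a usage note, below is a minimal application-side sketch of how scattered Rx could be requested so that this new datapath is selected when the frame size exceeds one mbuf data room. The port id, descriptor count and mempool are assumed to come from the application's own init code and are not part of the patch.

#include <errno.h>
#include <string.h>
#include <rte_ethdev.h>
#include <rte_mempool.h>

/* Sketch only: configure one Rx/Tx queue pair and request scattered Rx
 * when the PMD advertises RTE_ETH_RX_OFFLOAD_SCATTER. */
static int
app_setup_scatter_rx(uint16_t port_id, uint16_t nb_rxd, struct rte_mempool *mp)
{
	struct rte_eth_dev_info info;
	struct rte_eth_conf conf;
	int ret;

	ret = rte_eth_dev_info_get(port_id, &info);
	if (ret != 0)
		return ret;

	memset(&conf, 0, sizeof(conf));
	if (info.rx_offload_capa & RTE_ETH_RX_OFFLOAD_SCATTER)
		conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_SCATTER;

	ret = rte_eth_dev_configure(port_id, 1, 1, &conf);
	if (ret != 0)
		return ret;

	/* Default rxconf; the PMD also falls back to scattered Rx on its own
	 * when mtu + overhead does not fit into one Rx buffer. */
	return rte_eth_rx_queue_setup(port_id, 0, nb_rxd,
				      rte_eth_dev_socket_id(port_id), NULL, mp);
}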

* [PATCH v6 4/6] common/idpf: add rss_offload hash in singleq rx
  2023-02-07 10:16           ` [PATCH v6 0/6] add idpf pmd enhancement features Mingxia Liu
                               ` (2 preceding siblings ...)
  2023-02-07 10:16             ` [PATCH v6 3/6] common/idpf: support single q scatter RX datapath Mingxia Liu
@ 2023-02-07 10:16             ` Mingxia Liu
  2023-02-07 10:16             ` [PATCH v6 5/6] common/idpf: add alarm to support handle vchnl message Mingxia Liu
                               ` (3 subsequent siblings)
  7 siblings, 0 replies; 63+ messages in thread
From: Mingxia Liu @ 2023-02-07 10:16 UTC (permalink / raw)
  To: dev, qi.z.zhang, jingjing.wu, beilei.xing; +Cc: Mingxia Liu

This patch adds RSS valid flag and hash value parsing of the Rx descriptor.

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/common/idpf/idpf_common_rxtx.c | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)

diff --git a/drivers/common/idpf/idpf_common_rxtx.c b/drivers/common/idpf/idpf_common_rxtx.c
index 9303b51cce..d7e8df1895 100644
--- a/drivers/common/idpf/idpf_common_rxtx.c
+++ b/drivers/common/idpf/idpf_common_rxtx.c
@@ -1030,6 +1030,20 @@ idpf_update_rx_tail(struct idpf_rx_queue *rxq, uint16_t nb_hold,
 	rxq->nb_rx_hold = nb_hold;
 }
 
+static inline void
+idpf_singleq_rx_rss_offload(struct rte_mbuf *mb,
+			    volatile struct virtchnl2_rx_flex_desc_nic *rx_desc,
+			    uint64_t *pkt_flags)
+{
+	uint16_t rx_status0 = rte_le_to_cpu_16(rx_desc->status_error0);
+
+	if (rx_status0 & RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_STATUS0_RSS_VALID_S)) {
+		*pkt_flags |= RTE_MBUF_F_RX_RSS_HASH;
+		mb->hash.rss = rte_le_to_cpu_32(rx_desc->rss_hash);
+	}
+
+}
+
 uint16_t
 idpf_dp_singleq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 			  uint16_t nb_pkts)
@@ -1118,6 +1132,7 @@ idpf_dp_singleq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 		rxm->port = rxq->port_id;
 		rxm->ol_flags = 0;
 		pkt_flags = idpf_rxd_to_pkt_flags(rx_status0);
+		idpf_singleq_rx_rss_offload(rxm, &rxd.flex_nic_wb, &pkt_flags);
 		rxm->packet_type =
 			ptype_tbl[(uint8_t)(rte_cpu_to_le_16(rxd.flex_nic_wb.ptype_flex_flags0) &
 					    VIRTCHNL2_RX_FLEX_DESC_PTYPE_M)];
@@ -1249,6 +1264,7 @@ idpf_dp_singleq_recv_scatter_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 		first_seg->port = rxq->port_id;
 		first_seg->ol_flags = 0;
 		pkt_flags = idpf_rxd_to_pkt_flags(rx_status0);
+		idpf_singleq_rx_rss_offload(first_seg, &rxd.flex_nic_wb, &pkt_flags);
 		first_seg->packet_type =
 			ptype_tbl[(uint8_t)(rte_cpu_to_le_16(rxd.flex_nic_wb.ptype_flex_flags0) &
 				VIRTCHNL2_RX_FLEX_DESC_PTYPE_M)];
-- 
2.25.1


^ permalink raw reply	[flat|nested] 63+ messages in thread
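
A small, illustrative consumer of the flag set by this patch; the worker ring array and ring count are hypothetical application state, not part of the patch.

#include <rte_ethdev.h>
#include <rte_mbuf.h>
#include <rte_ring.h>

/* Sketch: spread received packets across worker rings using the
 * descriptor-provided RSS hash when the PMD marked it as valid. */
static void
app_dispatch_by_rss(struct rte_mbuf **pkts, uint16_t nb_pkts,
		    struct rte_ring **rings, unsigned int nb_rings)
{
	uint16_t i;

	for (i = 0; i < nb_pkts; i++) {
		struct rte_mbuf *m = pkts[i];
		unsigned int idx = 0;

		/* hash.rss is only meaningful with RTE_MBUF_F_RX_RSS_HASH set */
		if ((m->ol_flags & RTE_MBUF_F_RX_RSS_HASH) != 0)
			idx = m->hash.rss % nb_rings;

		if (rte_ring_enqueue(rings[idx], m) != 0)
			rte_pktmbuf_free(m);
	}
}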

* [PATCH v6 5/6] common/idpf: add alarm to support handle vchnl message
  2023-02-07 10:16           ` [PATCH v6 0/6] add idpf pmd enhancement features Mingxia Liu
                               ` (3 preceding siblings ...)
  2023-02-07 10:16             ` [PATCH v6 4/6] common/idpf: add rss_offload hash in singleq rx Mingxia Liu
@ 2023-02-07 10:16             ` Mingxia Liu
  2023-02-07 10:16             ` [PATCH v6 6/6] common/idpf: add xstats ops Mingxia Liu
                               ` (2 subsequent siblings)
  7 siblings, 0 replies; 63+ messages in thread
From: Mingxia Liu @ 2023-02-07 10:16 UTC (permalink / raw)
  To: dev, qi.z.zhang, jingjing.wu, beilei.xing; +Cc: Mingxia Liu

Handle virtual channel messages via a periodic alarm.
Refine the link status update.

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
Signed-off-by: Beilei Xing <beilei.xing@intel.com>
---
 drivers/common/idpf/idpf_common_device.h   |   5 +
 drivers/common/idpf/idpf_common_virtchnl.c |  33 ++--
 drivers/common/idpf/idpf_common_virtchnl.h |   6 +
 drivers/common/idpf/version.map            |   2 +
 drivers/net/idpf/idpf_ethdev.c             | 169 ++++++++++++++++++++-
 drivers/net/idpf/idpf_ethdev.h             |   2 +
 6 files changed, 195 insertions(+), 22 deletions(-)

diff --git a/drivers/common/idpf/idpf_common_device.h b/drivers/common/idpf/idpf_common_device.h
index 7abc4d2a3a..364a60221a 100644
--- a/drivers/common/idpf/idpf_common_device.h
+++ b/drivers/common/idpf/idpf_common_device.h
@@ -118,6 +118,11 @@ struct idpf_vport {
 	bool tx_use_avx512;
 
 	struct virtchnl2_vport_stats eth_stats_offset;
+
+	void *dev;
+	/* Event from ipf */
+	bool link_up;
+	uint32_t link_speed;
 };
 
 /* Message type read in virtual channel from PF */
diff --git a/drivers/common/idpf/idpf_common_virtchnl.c b/drivers/common/idpf/idpf_common_virtchnl.c
index 10cfa33704..99d9efbb7c 100644
--- a/drivers/common/idpf/idpf_common_virtchnl.c
+++ b/drivers/common/idpf/idpf_common_virtchnl.c
@@ -202,25 +202,6 @@ idpf_vc_cmd_execute(struct idpf_adapter *adapter, struct idpf_cmd_info *args)
 	switch (args->ops) {
 	case VIRTCHNL_OP_VERSION:
 	case VIRTCHNL2_OP_GET_CAPS:
-	case VIRTCHNL2_OP_CREATE_VPORT:
-	case VIRTCHNL2_OP_DESTROY_VPORT:
-	case VIRTCHNL2_OP_SET_RSS_KEY:
-	case VIRTCHNL2_OP_SET_RSS_LUT:
-	case VIRTCHNL2_OP_SET_RSS_HASH:
-	case VIRTCHNL2_OP_CONFIG_RX_QUEUES:
-	case VIRTCHNL2_OP_CONFIG_TX_QUEUES:
-	case VIRTCHNL2_OP_ENABLE_QUEUES:
-	case VIRTCHNL2_OP_DISABLE_QUEUES:
-	case VIRTCHNL2_OP_ENABLE_VPORT:
-	case VIRTCHNL2_OP_DISABLE_VPORT:
-	case VIRTCHNL2_OP_MAP_QUEUE_VECTOR:
-	case VIRTCHNL2_OP_UNMAP_QUEUE_VECTOR:
-	case VIRTCHNL2_OP_ALLOC_VECTORS:
-	case VIRTCHNL2_OP_DEALLOC_VECTORS:
-	case VIRTCHNL2_OP_GET_STATS:
-	case VIRTCHNL2_OP_GET_RSS_KEY:
-	case VIRTCHNL2_OP_GET_RSS_HASH:
-	case VIRTCHNL2_OP_GET_RSS_LUT:
 		/* for init virtchnl ops, need to poll the response */
 		err = idpf_vc_one_msg_read(adapter, args->ops, args->out_size, args->out_buffer);
 		clear_cmd(adapter);
@@ -1111,3 +1092,17 @@ idpf_vc_txq_config(struct idpf_vport *vport, struct idpf_tx_queue *txq)
 
 	return err;
 }
+
+int
+idpf_vc_ctlq_recv(struct idpf_ctlq_info *cq, u16 *num_q_msg,
+		  struct idpf_ctlq_msg *q_msg)
+{
+	return idpf_ctlq_recv(cq, num_q_msg, q_msg);
+}
+
+int
+idpf_vc_ctlq_post_rx_buffs(struct idpf_hw *hw, struct idpf_ctlq_info *cq,
+			   u16 *buff_count, struct idpf_dma_mem **buffs)
+{
+	return idpf_ctlq_post_rx_buffs(hw, cq, buff_count, buffs);
+}
diff --git a/drivers/common/idpf/idpf_common_virtchnl.h b/drivers/common/idpf/idpf_common_virtchnl.h
index 205d1a932d..d479d93c8e 100644
--- a/drivers/common/idpf/idpf_common_virtchnl.h
+++ b/drivers/common/idpf/idpf_common_virtchnl.h
@@ -58,4 +58,10 @@ __rte_internal
 int idpf_vc_rss_lut_get(struct idpf_vport *vport);
 __rte_internal
 int idpf_vc_rss_hash_get(struct idpf_vport *vport);
+__rte_internal
+int idpf_vc_ctlq_recv(struct idpf_ctlq_info *cq, u16 *num_q_msg,
+		      struct idpf_ctlq_msg *q_msg);
+__rte_internal
+int idpf_vc_ctlq_post_rx_buffs(struct idpf_hw *hw, struct idpf_ctlq_info *cq,
+			   u16 *buff_count, struct idpf_dma_mem **buffs);
 #endif /* _IDPF_COMMON_VIRTCHNL_H_ */
diff --git a/drivers/common/idpf/version.map b/drivers/common/idpf/version.map
index e31f6ff4d9..70334a1b03 100644
--- a/drivers/common/idpf/version.map
+++ b/drivers/common/idpf/version.map
@@ -38,6 +38,8 @@ INTERNAL {
 	idpf_vc_api_version_check;
 	idpf_vc_caps_get;
 	idpf_vc_cmd_execute;
+	idpf_vc_ctlq_post_rx_buffs;
+	idpf_vc_ctlq_recv;
 	idpf_vc_irq_map_unmap_config;
 	idpf_vc_one_msg_read;
 	idpf_vc_ptype_info_query;
diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index 11f0ca0085..751c0d8717 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -9,6 +9,7 @@
 #include <rte_memzone.h>
 #include <rte_dev.h>
 #include <errno.h>
+#include <rte_alarm.h>
 
 #include "idpf_ethdev.h"
 #include "idpf_rxtx.h"
@@ -83,14 +84,51 @@ static int
 idpf_dev_link_update(struct rte_eth_dev *dev,
 		     __rte_unused int wait_to_complete)
 {
+	struct idpf_vport *vport = dev->data->dev_private;
 	struct rte_eth_link new_link;
 
 	memset(&new_link, 0, sizeof(new_link));
 
-	new_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+	switch (vport->link_speed) {
+	case RTE_ETH_SPEED_NUM_10M:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_10M;
+		break;
+	case RTE_ETH_SPEED_NUM_100M:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_100M;
+		break;
+	case RTE_ETH_SPEED_NUM_1G:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_1G;
+		break;
+	case RTE_ETH_SPEED_NUM_10G:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_10G;
+		break;
+	case RTE_ETH_SPEED_NUM_20G:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_20G;
+		break;
+	case RTE_ETH_SPEED_NUM_25G:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_25G;
+		break;
+	case RTE_ETH_SPEED_NUM_40G:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_40G;
+		break;
+	case RTE_ETH_SPEED_NUM_50G:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_50G;
+		break;
+	case RTE_ETH_SPEED_NUM_100G:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_100G;
+		break;
+	case RTE_ETH_SPEED_NUM_200G:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_200G;
+		break;
+	default:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+	}
+
 	new_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
-	new_link.link_autoneg = !(dev->data->dev_conf.link_speeds &
-				  RTE_ETH_LINK_SPEED_FIXED);
+	new_link.link_status = vport->link_up ? RTE_ETH_LINK_UP :
+		RTE_ETH_LINK_DOWN;
+	new_link.link_autoneg = (dev->data->dev_conf.link_speeds & RTE_ETH_LINK_SPEED_FIXED) ?
+				 RTE_ETH_LINK_FIXED : RTE_ETH_LINK_AUTONEG;
 
 	return rte_eth_linkstatus_set(dev, &new_link);
 }
@@ -891,6 +929,127 @@ idpf_parse_devargs(struct rte_pci_device *pci_dev, struct idpf_adapter_ext *adap
 	return ret;
 }
 
+static struct idpf_vport *
+idpf_find_vport(struct idpf_adapter_ext *adapter, uint32_t vport_id)
+{
+	struct idpf_vport *vport = NULL;
+	int i;
+
+	for (i = 0; i < adapter->cur_vport_nb; i++) {
+		vport = adapter->vports[i];
+		if (vport->vport_id != vport_id)
+			continue;
+		else
+			return vport;
+	}
+
+	return vport;
+}
+
+static void
+idpf_handle_event_msg(struct idpf_vport *vport, uint8_t *msg, uint16_t msglen)
+{
+	struct virtchnl2_event *vc_event = (struct virtchnl2_event *)msg;
+	struct rte_eth_dev *dev = (struct rte_eth_dev *)vport->dev;
+
+	if (msglen < sizeof(struct virtchnl2_event)) {
+		PMD_DRV_LOG(ERR, "Error event");
+		return;
+	}
+
+	switch (vc_event->event) {
+	case VIRTCHNL2_EVENT_LINK_CHANGE:
+		PMD_DRV_LOG(DEBUG, "VIRTCHNL2_EVENT_LINK_CHANGE");
+		vport->link_up = !!(vc_event->link_status);
+		vport->link_speed = vc_event->link_speed;
+		idpf_dev_link_update(dev, 0);
+		break;
+	default:
+		PMD_DRV_LOG(ERR, " unknown event received %u", vc_event->event);
+		break;
+	}
+}
+
+static void
+idpf_handle_virtchnl_msg(struct idpf_adapter_ext *adapter_ex)
+{
+	struct idpf_adapter *adapter = &adapter_ex->base;
+	struct idpf_dma_mem *dma_mem = NULL;
+	struct idpf_hw *hw = &adapter->hw;
+	struct virtchnl2_event *vc_event;
+	struct idpf_ctlq_msg ctlq_msg;
+	enum idpf_mbx_opc mbx_op;
+	struct idpf_vport *vport;
+	enum virtchnl_ops vc_op;
+	uint16_t pending = 1;
+	int ret;
+
+	while (pending) {
+		ret = idpf_vc_ctlq_recv(hw->arq, &pending, &ctlq_msg);
+		if (ret) {
+			PMD_DRV_LOG(INFO, "Failed to read msg from virtual channel, ret: %d", ret);
+			return;
+		}
+
+		rte_memcpy(adapter->mbx_resp, ctlq_msg.ctx.indirect.payload->va,
+			   IDPF_DFLT_MBX_BUF_SIZE);
+
+		mbx_op = rte_le_to_cpu_16(ctlq_msg.opcode);
+		vc_op = rte_le_to_cpu_32(ctlq_msg.cookie.mbx.chnl_opcode);
+		adapter->cmd_retval = rte_le_to_cpu_32(ctlq_msg.cookie.mbx.chnl_retval);
+
+		switch (mbx_op) {
+		case idpf_mbq_opc_send_msg_to_peer_pf:
+			if (vc_op == VIRTCHNL2_OP_EVENT) {
+				if (ctlq_msg.data_len < sizeof(struct virtchnl2_event)) {
+					PMD_DRV_LOG(ERR, "Error event");
+					return;
+				}
+				vc_event = (struct virtchnl2_event *)adapter->mbx_resp;
+				vport = idpf_find_vport(adapter_ex, vc_event->vport_id);
+				if (!vport) {
+					PMD_DRV_LOG(ERR, "Can't find vport.");
+					return;
+				}
+				idpf_handle_event_msg(vport, adapter->mbx_resp,
+						      ctlq_msg.data_len);
+			} else {
+				if (vc_op == adapter->pend_cmd)
+					notify_cmd(adapter, adapter->cmd_retval);
+				else
+					PMD_DRV_LOG(ERR, "command mismatch, expect %u, get %u",
+						    adapter->pend_cmd, vc_op);
+
+				PMD_DRV_LOG(DEBUG, " Virtual channel response is received,"
+					    "opcode = %d", vc_op);
+			}
+			goto post_buf;
+		default:
+			PMD_DRV_LOG(DEBUG, "Request %u is not supported yet", mbx_op);
+		}
+	}
+
+post_buf:
+	if (ctlq_msg.data_len)
+		dma_mem = ctlq_msg.ctx.indirect.payload;
+	else
+		pending = 0;
+
+	ret = idpf_vc_ctlq_post_rx_buffs(hw, hw->arq, &pending, &dma_mem);
+	if (ret && dma_mem)
+		idpf_free_dma_mem(hw, dma_mem);
+}
+
+static void
+idpf_dev_alarm_handler(void *param)
+{
+	struct idpf_adapter_ext *adapter = param;
+
+	idpf_handle_virtchnl_msg(adapter);
+
+	rte_eal_alarm_set(IDPF_ALARM_INTERVAL, idpf_dev_alarm_handler, adapter);
+}
+
 static int
 idpf_adapter_ext_init(struct rte_pci_device *pci_dev, struct idpf_adapter_ext *adapter)
 {
@@ -913,6 +1072,8 @@ idpf_adapter_ext_init(struct rte_pci_device *pci_dev, struct idpf_adapter_ext *a
 		goto err_adapter_init;
 	}
 
+	rte_eal_alarm_set(IDPF_ALARM_INTERVAL, idpf_dev_alarm_handler, adapter);
+
 	adapter->max_vport_nb = adapter->base.caps.max_vports;
 
 	adapter->vports = rte_zmalloc("vports",
@@ -996,6 +1157,7 @@ idpf_dev_vport_init(struct rte_eth_dev *dev, void *init_params)
 	vport->adapter = &adapter->base;
 	vport->sw_idx = param->idx;
 	vport->devarg_id = param->devarg_id;
+	vport->dev = dev;
 
 	memset(&create_vport_info, 0, sizeof(create_vport_info));
 	ret = idpf_vport_info_init(vport, &create_vport_info);
@@ -1065,6 +1227,7 @@ idpf_find_adapter_ext(struct rte_pci_device *pci_dev)
 static void
 idpf_adapter_ext_deinit(struct idpf_adapter_ext *adapter)
 {
+	rte_eal_alarm_cancel(idpf_dev_alarm_handler, adapter);
 	idpf_adapter_deinit(&adapter->base);
 
 	rte_free(adapter->vports);
diff --git a/drivers/net/idpf/idpf_ethdev.h b/drivers/net/idpf/idpf_ethdev.h
index 839a2bd82c..3c2c932438 100644
--- a/drivers/net/idpf/idpf_ethdev.h
+++ b/drivers/net/idpf/idpf_ethdev.h
@@ -53,6 +53,8 @@
 
 #define IDPF_ADAPTER_NAME_LEN	(PCI_PRI_STR_SIZE + 1)
 
+#define IDPF_ALARM_INTERVAL	50000 /* us */
+
 struct idpf_vport_param {
 	struct idpf_adapter_ext *adapter;
 	uint16_t devarg_id; /* arg id from user */
-- 
2.25.1


^ permalink raw reply	[flat|nested] 63+ messages in thread
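
The self-rearming alarm used here is the standard EAL pattern; a generic sketch of the same pattern follows. The names are illustrative and the interval simply mirrors IDPF_ALARM_INTERVAL.

#include <rte_alarm.h>

#define APP_POLL_INTERVAL	50000	/* us, same order as IDPF_ALARM_INTERVAL */

/* Sketch: a periodic callback that re-arms itself, like
 * idpf_dev_alarm_handler() does for mailbox polling. */
static void
app_poll_handler(void *param)
{
	/* ... process pending work attached to 'param' ... */

	rte_eal_alarm_set(APP_POLL_INTERVAL, app_poll_handler, param);
}

static void
app_poll_start(void *ctx)
{
	rte_eal_alarm_set(APP_POLL_INTERVAL, app_poll_handler, ctx);
}

static void
app_poll_stop(void *ctx)
{
	/* best-effort cancel of the periodic handler at teardown */
	rte_eal_alarm_cancel(app_poll_handler, ctx);
}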

* [PATCH v6 6/6] common/idpf: add xstats ops
  2023-02-07 10:16           ` [PATCH v6 0/6] add idpf pmd enhancement features Mingxia Liu
                               ` (4 preceding siblings ...)
  2023-02-07 10:16             ` [PATCH v6 5/6] common/idpf: add alarm to support handle vchnl message Mingxia Liu
@ 2023-02-07 10:16             ` Mingxia Liu
  2023-02-08  0:28             ` [PATCH v6 0/6] add idpf pmd enhancement features Wu, Jingjing
  2023-02-08  7:33             ` [PATCH v7 " Mingxia Liu
  7 siblings, 0 replies; 63+ messages in thread
From: Mingxia Liu @ 2023-02-07 10:16 UTC (permalink / raw)
  To: dev, qi.z.zhang, jingjing.wu, beilei.xing; +Cc: Mingxia Liu

Add support for these device ops:
- idpf_dev_xstats_get
- idpf_dev_xstats_get_names
- idpf_dev_xstats_reset

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/net/idpf/idpf_ethdev.c | 80 ++++++++++++++++++++++++++++++++++
 1 file changed, 80 insertions(+)

diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index 751c0d8717..38cbbf369d 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -80,6 +80,30 @@ static const uint64_t idpf_ipv6_rss = RTE_ETH_RSS_NONFRAG_IPV6_UDP |
 			  RTE_ETH_RSS_NONFRAG_IPV6_OTHER |
 			  RTE_ETH_RSS_FRAG_IPV6;
 
+struct rte_idpf_xstats_name_off {
+	char name[RTE_ETH_XSTATS_NAME_SIZE];
+	unsigned int offset;
+};
+
+static const struct rte_idpf_xstats_name_off rte_idpf_stats_strings[] = {
+	{"rx_bytes", offsetof(struct virtchnl2_vport_stats, rx_bytes)},
+	{"rx_unicast_packets", offsetof(struct virtchnl2_vport_stats, rx_unicast)},
+	{"rx_multicast_packets", offsetof(struct virtchnl2_vport_stats, rx_multicast)},
+	{"rx_broadcast_packets", offsetof(struct virtchnl2_vport_stats, rx_broadcast)},
+	{"rx_dropped_packets", offsetof(struct virtchnl2_vport_stats, rx_discards)},
+	{"rx_errors", offsetof(struct virtchnl2_vport_stats, rx_errors)},
+	{"rx_unknown_protocol_packets", offsetof(struct virtchnl2_vport_stats,
+						 rx_unknown_protocol)},
+	{"tx_bytes", offsetof(struct virtchnl2_vport_stats, tx_bytes)},
+	{"tx_unicast_packets", offsetof(struct virtchnl2_vport_stats, tx_unicast)},
+	{"tx_multicast_packets", offsetof(struct virtchnl2_vport_stats, tx_multicast)},
+	{"tx_broadcast_packets", offsetof(struct virtchnl2_vport_stats, tx_broadcast)},
+	{"tx_dropped_packets", offsetof(struct virtchnl2_vport_stats, tx_discards)},
+	{"tx_error_packets", offsetof(struct virtchnl2_vport_stats, tx_errors)}};
+
+#define IDPF_NB_XSTATS (sizeof(rte_idpf_stats_strings) / \
+		sizeof(rte_idpf_stats_strings[0]))
+
 static int
 idpf_dev_link_update(struct rte_eth_dev *dev,
 		     __rte_unused int wait_to_complete)
@@ -313,6 +337,59 @@ idpf_dev_stats_reset(struct rte_eth_dev *dev)
 	return 0;
 }
 
+static int idpf_dev_xstats_reset(struct rte_eth_dev *dev)
+{
+	idpf_dev_stats_reset(dev);
+	return 0;
+}
+
+static int idpf_dev_xstats_get(struct rte_eth_dev *dev,
+			       struct rte_eth_xstat *xstats, unsigned int n)
+{
+	struct idpf_vport *vport =
+		(struct idpf_vport *)dev->data->dev_private;
+	struct virtchnl2_vport_stats *pstats = NULL;
+	unsigned int i;
+	int ret;
+
+	if (n < IDPF_NB_XSTATS)
+		return IDPF_NB_XSTATS;
+
+	if (!xstats)
+		return 0;
+
+	ret = idpf_vc_stats_query(vport, &pstats);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "Get statistics failed");
+		return 0;
+	}
+
+	idpf_vport_stats_update(&vport->eth_stats_offset, pstats);
+
+	/* loop over xstats array and values from pstats */
+	for (i = 0; i < IDPF_NB_XSTATS; i++) {
+		xstats[i].id = i;
+		xstats[i].value = *(uint64_t *)(((char *)pstats) +
+			rte_idpf_stats_strings[i].offset);
+	}
+	return IDPF_NB_XSTATS;
+}
+
+static int idpf_dev_xstats_get_names(__rte_unused struct rte_eth_dev *dev,
+				     struct rte_eth_xstat_name *xstats_names,
+				     __rte_unused unsigned int limit)
+{
+	unsigned int i;
+
+	if (xstats_names)
+		for (i = 0; i < IDPF_NB_XSTATS; i++) {
+			snprintf(xstats_names[i].name,
+				 sizeof(xstats_names[i].name),
+				 "%s", rte_idpf_stats_strings[i].name);
+		}
+	return IDPF_NB_XSTATS;
+}
+
 static int idpf_config_rss_hf(struct idpf_vport *vport, uint64_t rss_hf)
 {
 	uint64_t hena = 0;
@@ -1122,6 +1199,9 @@ static const struct eth_dev_ops idpf_eth_dev_ops = {
 	.reta_query			= idpf_rss_reta_query,
 	.rss_hash_update		= idpf_rss_hash_update,
 	.rss_hash_conf_get		= idpf_rss_hash_conf_get,
+	.xstats_get			= idpf_dev_xstats_get,
+	.xstats_get_names		= idpf_dev_xstats_get_names,
+	.xstats_reset			= idpf_dev_xstats_reset,
 };
 
 static uint16_t
-- 
2.25.1


^ permalink raw reply	[flat|nested] 63+ messages in thread
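
For reference, these callbacks are reached through the generic ethdev xstats API; below is a minimal sketch of dumping them from an application, with error handling kept short.

#include <errno.h>
#include <inttypes.h>
#include <stdio.h>
#include <stdlib.h>
#include <rte_ethdev.h>

/* Sketch: query and print all extended statistics of a started port. */
static int
app_dump_xstats(uint16_t port_id)
{
	struct rte_eth_xstat *vals = NULL;
	struct rte_eth_xstat_name *names = NULL;
	int nb, i, ret = 0;

	nb = rte_eth_xstats_get(port_id, NULL, 0);
	if (nb <= 0)
		return nb;

	vals = calloc(nb, sizeof(*vals));
	names = calloc(nb, sizeof(*names));
	if (vals == NULL || names == NULL) {
		ret = -ENOMEM;
		goto out;
	}

	if (rte_eth_xstats_get_names(port_id, names, nb) != nb ||
	    rte_eth_xstats_get(port_id, vals, nb) != nb) {
		ret = -EIO;
		goto out;
	}

	for (i = 0; i < nb; i++)
		printf("%s: %" PRIu64 "\n",
		       names[vals[i].id].name, vals[i].value);

out:
	free(vals);
	free(names);
	return ret;
}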

* RE: [PATCH v6 0/6] add idpf pmd enhancement features
  2023-02-07 10:16           ` [PATCH v6 0/6] add idpf pmd enhancement features Mingxia Liu
                               ` (5 preceding siblings ...)
  2023-02-07 10:16             ` [PATCH v6 6/6] common/idpf: add xstats ops Mingxia Liu
@ 2023-02-08  0:28             ` Wu, Jingjing
  2023-02-08  7:33             ` [PATCH v7 " Mingxia Liu
  7 siblings, 0 replies; 63+ messages in thread
From: Wu, Jingjing @ 2023-02-08  0:28 UTC (permalink / raw)
  To: Liu, Mingxia, dev, Zhang, Qi Z, Xing, Beilei



> -----Original Message-----
> From: Liu, Mingxia <mingxia.liu@intel.com>
> Sent: Tuesday, February 7, 2023 6:17 PM
> To: dev@dpdk.org; Zhang, Qi Z <qi.z.zhang@intel.com>; Wu, Jingjing
> <jingjing.wu@intel.com>; Xing, Beilei <beilei.xing@intel.com>
> Cc: Liu, Mingxia <mingxia.liu@intel.com>
> Subject: [PATCH v6 0/6] add idpf pmd enhancement features
> 
> This patchset add several enhancement features of idpf pmd.
> Including the following:
> - add hw statistics, support stats/xstats ops
> - add rss configure/show ops
> - add event handle: link status
> - add scattered data path for single queue
> 
> This patchset is based on the refactor idpf PMD code:
> http://patches.dpdk.org/project/dpdk/patch/20230207084549.2225214-2-
> wenjun1.wu@intel.com/
> 
> v2 changes:
>  - Fix rss lut config issue.
> v3 changes:
>  - rebase to the new baseline.
> v4 changes:
>  - rebase to the new baseline.
>  - optimize some code
>  - give "not supported" tips when user want to config rss hash type
>  - if stats reset fails at initialization time, don't rollback, just
>    print ERROR info.
> v5 changes:
>  - fix some spelling error
> v6 changes:
>  - add cover-letter
> 

Reviewed-by: Jingjing Wu <jingjing.wu@intel.com>

^ permalink raw reply	[flat|nested] 63+ messages in thread

* RE: [PATCH v6 1/6] common/idpf: add hw statistics
  2023-02-07 10:16             ` [PATCH v6 1/6] common/idpf: add hw statistics Mingxia Liu
@ 2023-02-08  2:00               ` Zhang, Qi Z
  2023-02-08  8:28                 ` Liu, Mingxia
  0 siblings, 1 reply; 63+ messages in thread
From: Zhang, Qi Z @ 2023-02-08  2:00 UTC (permalink / raw)
  To: Liu, Mingxia, dev, Wu, Jingjing, Xing, Beilei



> -----Original Message-----
> From: Liu, Mingxia <mingxia.liu@intel.com>
> Sent: Tuesday, February 7, 2023 6:17 PM
> To: dev@dpdk.org; Zhang, Qi Z <qi.z.zhang@intel.com>; Wu, Jingjing
> <jingjing.wu@intel.com>; Xing, Beilei <beilei.xing@intel.com>
> Cc: Liu, Mingxia <mingxia.liu@intel.com>
> Subject: [PATCH v6 1/6] common/idpf: add hw statistics

I suggest using ./devtools/check-git-log.sh to fix any title warnings if possible.
Also, since the main purpose of this patch is to support the stats_get/stats_reset API,
the more reasonable prefix is "net/idpf", not "common/idpf".

Please fix the other patches if they have similar issues.

> 
> This patch add hardware packets/bytes statistics.
> 
> Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
> ---
>  drivers/common/idpf/idpf_common_device.c   | 17 +++++
>  drivers/common/idpf/idpf_common_device.h   |  4 +
>  drivers/common/idpf/idpf_common_virtchnl.c | 27 +++++++
> drivers/common/idpf/idpf_common_virtchnl.h |  3 +
>  drivers/common/idpf/version.map            |  2 +
>  drivers/net/idpf/idpf_ethdev.c             | 86 ++++++++++++++++++++++
>  6 files changed, 139 insertions(+)
> 
> diff --git a/drivers/common/idpf/idpf_common_device.c
> b/drivers/common/idpf/idpf_common_device.c
> index 48b3e3c0dd..5475a3e52c 100644
> --- a/drivers/common/idpf/idpf_common_device.c
> +++ b/drivers/common/idpf/idpf_common_device.c
> @@ -652,4 +652,21 @@ idpf_vport_info_init(struct idpf_vport *vport,
>  	return 0;
>  }
> 
> +void
> +idpf_vport_stats_update(struct virtchnl2_vport_stats *oes, struct
> +virtchnl2_vport_stats *nes) {
> +	nes->rx_bytes = nes->rx_bytes - oes->rx_bytes;
> +	nes->rx_unicast = nes->rx_unicast - oes->rx_unicast;
> +	nes->rx_multicast = nes->rx_multicast - oes->rx_multicast;
> +	nes->rx_broadcast = nes->rx_broadcast - oes->rx_broadcast;
> +	nes->rx_errors = nes->rx_errors - oes->rx_errors;
> +	nes->rx_discards = nes->rx_discards - oes->rx_discards;
> +	nes->tx_bytes = nes->tx_bytes - oes->tx_bytes;
> +	nes->tx_unicast = nes->tx_unicast - oes->tx_unicast;
> +	nes->tx_multicast = nes->tx_multicast - oes->tx_multicast;
> +	nes->tx_broadcast = nes->tx_broadcast - oes->tx_broadcast;
> +	nes->tx_errors = nes->tx_errors - oes->tx_errors;
> +	nes->tx_discards = nes->tx_discards - oes->tx_discards; }
> +
>  RTE_LOG_REGISTER_SUFFIX(idpf_common_logtype, common, NOTICE); diff
> --git a/drivers/common/idpf/idpf_common_device.h
> b/drivers/common/idpf/idpf_common_device.h
> index 545117df79..1d8e7d405a 100644
> --- a/drivers/common/idpf/idpf_common_device.h
> +++ b/drivers/common/idpf/idpf_common_device.h
> @@ -115,6 +115,8 @@ struct idpf_vport {
>  	bool tx_vec_allowed;
>  	bool rx_use_avx512;
>  	bool tx_use_avx512;
> +
> +	struct virtchnl2_vport_stats eth_stats_offset;
>  };
> 
>  /* Message type read in virtual channel from PF */ @@ -191,5 +193,7 @@
> int idpf_vport_irq_unmap_config(struct idpf_vport *vport, uint16_t
> nb_rx_queues)  __rte_internal  int idpf_vport_info_init(struct idpf_vport
> *vport,
>  			 struct virtchnl2_create_vport *vport_info);
> +__rte_internal
> +void idpf_vport_stats_update(struct virtchnl2_vport_stats *oes, struct
> +virtchnl2_vport_stats *nes);
> 
>  #endif /* _IDPF_COMMON_DEVICE_H_ */
> diff --git a/drivers/common/idpf/idpf_common_virtchnl.c
> b/drivers/common/idpf/idpf_common_virtchnl.c
> index 31fadefbd3..40cff34c09 100644
> --- a/drivers/common/idpf/idpf_common_virtchnl.c
> +++ b/drivers/common/idpf/idpf_common_virtchnl.c
> @@ -217,6 +217,7 @@ idpf_vc_cmd_execute(struct idpf_adapter *adapter,
> struct idpf_cmd_info *args)
>  	case VIRTCHNL2_OP_UNMAP_QUEUE_VECTOR:
>  	case VIRTCHNL2_OP_ALLOC_VECTORS:
>  	case VIRTCHNL2_OP_DEALLOC_VECTORS:
> +	case VIRTCHNL2_OP_GET_STATS:
>  		/* for init virtchnl ops, need to poll the response */
>  		err = idpf_vc_one_msg_read(adapter, args->ops, args-
> >out_size, args->out_buffer);
>  		clear_cmd(adapter);
> @@ -806,6 +807,32 @@ idpf_vc_ptype_info_query(struct idpf_adapter
> *adapter)
>  	return err;
>  }
> 
> +int
> +idpf_vc_stats_query(struct idpf_vport *vport,
> +		struct virtchnl2_vport_stats **pstats) {
> +	struct idpf_adapter *adapter = vport->adapter;
> +	struct virtchnl2_vport_stats vport_stats;
> +	struct idpf_cmd_info args;
> +	int err;
> +
> +	vport_stats.vport_id = vport->vport_id;
> +	args.ops = VIRTCHNL2_OP_GET_STATS;
> +	args.in_args = (u8 *)&vport_stats;
> +	args.in_args_size = sizeof(vport_stats);
> +	args.out_buffer = adapter->mbx_resp;
> +	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
> +
> +	err = idpf_vc_cmd_execute(adapter, &args);
> +	if (err) {
> +		DRV_LOG(ERR, "Failed to execute command of
> VIRTCHNL2_OP_GET_STATS");
> +		*pstats = NULL;
> +		return err;
> +	}
> +	*pstats = (struct virtchnl2_vport_stats *)args.out_buffer;
> +	return 0;
> +}
> +
>  #define IDPF_RX_BUF_STRIDE		64
>  int
>  idpf_vc_rxq_config(struct idpf_vport *vport, struct idpf_rx_queue *rxq) diff -
> -git a/drivers/common/idpf/idpf_common_virtchnl.h
> b/drivers/common/idpf/idpf_common_virtchnl.h
> index c105f02836..6b94fd5b8f 100644
> --- a/drivers/common/idpf/idpf_common_virtchnl.h
> +++ b/drivers/common/idpf/idpf_common_virtchnl.h
> @@ -49,4 +49,7 @@ __rte_internal
>  int idpf_vc_rxq_config(struct idpf_vport *vport, struct idpf_rx_queue *rxq);
> __rte_internal  int idpf_vc_txq_config(struct idpf_vport *vport, struct
> idpf_tx_queue *txq);
> +__rte_internal
> +int idpf_vc_stats_query(struct idpf_vport *vport,
> +			struct virtchnl2_vport_stats **pstats);
>  #endif /* _IDPF_COMMON_VIRTCHNL_H_ */
> diff --git a/drivers/common/idpf/version.map
> b/drivers/common/idpf/version.map index 8b33130bd6..e6a02828ba
> 100644
> --- a/drivers/common/idpf/version.map
> +++ b/drivers/common/idpf/version.map
> @@ -46,6 +46,7 @@ INTERNAL {
>  	idpf_vc_rss_key_set;
>  	idpf_vc_rss_lut_set;
>  	idpf_vc_rxq_config;
> +	idpf_vc_stats_query;
>  	idpf_vc_txq_config;
>  	idpf_vc_vectors_alloc;
>  	idpf_vc_vectors_dealloc;
> @@ -59,6 +60,7 @@ INTERNAL {
>  	idpf_vport_irq_map_config;
>  	idpf_vport_irq_unmap_config;
>  	idpf_vport_rss_config;
> +	idpf_vport_stats_update;
> 
>  	local: *;
>  };
> diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
> index 33f5e90743..02ddb0330a 100644
> --- a/drivers/net/idpf/idpf_ethdev.c
> +++ b/drivers/net/idpf/idpf_ethdev.c
> @@ -140,6 +140,87 @@ idpf_dev_supported_ptypes_get(struct rte_eth_dev
> *dev __rte_unused)
>  	return ptypes;
>  }
> 
> +static uint64_t
> +idpf_get_mbuf_alloc_failed_stats(struct rte_eth_dev *dev) {
> +	uint64_t mbuf_alloc_failed = 0;
> +	struct idpf_rx_queue *rxq;
> +	int i = 0;
> +
> +	for (i = 0; i < dev->data->nb_rx_queues; i++) {
> +		rxq = dev->data->rx_queues[i];
> +		mbuf_alloc_failed += __atomic_load_n(&rxq-
> >rx_stats.mbuf_alloc_failed,
> +						     __ATOMIC_RELAXED);
> +	}
> +
> +	return mbuf_alloc_failed;
> +}
> +
> +static int
> +idpf_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats
> +*stats) {
> +	struct idpf_vport *vport =
> +		(struct idpf_vport *)dev->data->dev_private;
> +	struct virtchnl2_vport_stats *pstats = NULL;
> +	int ret;
> +
> +	ret = idpf_vc_stats_query(vport, &pstats);
> +	if (ret == 0) {
> +		uint8_t crc_stats_len = (dev->data-
> >dev_conf.rxmode.offloads &
> +					 RTE_ETH_RX_OFFLOAD_KEEP_CRC) ?
> 0 :
> +					 RTE_ETHER_CRC_LEN;
> +
> +		idpf_vport_stats_update(&vport->eth_stats_offset, pstats);
> +		stats->ipackets = pstats->rx_unicast + pstats->rx_multicast +
> +				pstats->rx_broadcast - pstats->rx_discards;
> +		stats->opackets = pstats->tx_broadcast + pstats-
> >tx_multicast +
> +						pstats->tx_unicast;
> +		stats->imissed = pstats->rx_discards;
> +		stats->oerrors = pstats->tx_errors + pstats->tx_discards;
> +		stats->ibytes = pstats->rx_bytes;
> +		stats->ibytes -= stats->ipackets * crc_stats_len;
> +		stats->obytes = pstats->tx_bytes;
> +
> +		dev->data->rx_mbuf_alloc_failed =
> idpf_get_mbuf_alloc_failed_stats(dev);
> +		stats->rx_nombuf = dev->data->rx_mbuf_alloc_failed;
> +	} else {
> +		PMD_DRV_LOG(ERR, "Get statistics failed");
> +	}
> +	return ret;
> +}
> +
> +static void
> +idpf_reset_mbuf_alloc_failed_stats(struct rte_eth_dev *dev) {
> +	struct idpf_rx_queue *rxq;
> +	int i;
> +
> +	for (i = 0; i < dev->data->nb_rx_queues; i++) {
> +		rxq = dev->data->rx_queues[i];
> +		__atomic_store_n(&rxq->rx_stats.mbuf_alloc_failed, 0,
> __ATOMIC_RELAXED);
> +	}
> +}
> +
> +static int
> +idpf_dev_stats_reset(struct rte_eth_dev *dev) {
> +	struct idpf_vport *vport =
> +		(struct idpf_vport *)dev->data->dev_private;
> +	struct virtchnl2_vport_stats *pstats = NULL;
> +	int ret;
> +
> +	ret = idpf_vc_stats_query(vport, &pstats);
> +	if (ret != 0)
> +		return ret;
> +
> +	/* set stats offset base on current values */
> +	vport->eth_stats_offset = *pstats;
> +
> +	idpf_reset_mbuf_alloc_failed_stats(dev);
> +
> +	return 0;
> +}
> +
>  static int
>  idpf_init_rss(struct idpf_vport *vport)  { @@ -327,6 +408,9 @@
> idpf_dev_start(struct rte_eth_dev *dev)
>  		goto err_vport;
>  	}
> 
> +	if (idpf_dev_stats_reset(dev))
> +		PMD_DRV_LOG(ERR, "Failed to reset stats");
> +
>  	vport->stopped = 0;
> 
>  	return 0;
> @@ -606,6 +690,8 @@ static const struct eth_dev_ops idpf_eth_dev_ops = {
>  	.tx_queue_release		= idpf_dev_tx_queue_release,
>  	.mtu_set			= idpf_dev_mtu_set,
>  	.dev_supported_ptypes_get	= idpf_dev_supported_ptypes_get,
> +	.stats_get			= idpf_dev_stats_get,
> +	.stats_reset			= idpf_dev_stats_reset,
>  };
> 
>  static uint16_t
> --
> 2.25.1


^ permalink raw reply	[flat|nested] 63+ messages in thread

* [PATCH v7 0/6] add idpf pmd enhancement features
  2023-02-07 10:16           ` [PATCH v6 0/6] add idpf pmd enhancement features Mingxia Liu
                               ` (6 preceding siblings ...)
  2023-02-08  0:28             ` [PATCH v6 0/6] add idpf pmd enhancement features Wu, Jingjing
@ 2023-02-08  7:33             ` Mingxia Liu
  2023-02-08  7:33               ` [PATCH v7 1/6] net/idpf: add hw statistics Mingxia Liu
                                 ` (6 more replies)
  7 siblings, 7 replies; 63+ messages in thread
From: Mingxia Liu @ 2023-02-08  7:33 UTC (permalink / raw)
  To: dev; +Cc: jingjing.wu, beilei.xing, Mingxia Liu

This patchset adds several enhancement features to the idpf PMD,
including the following:
- add hw statistics, support stats/xstats ops
- add rss configure/show ops
- add event handle: link status
- add scattered data path for single queue


v2 changes:
 - Fix rss lut config issue.
v3 changes:
 - rebase to the new baseline.
v4 changes:
 - rebase to the new baseline.
 - optimize some code
 - give a "not supported" tip when the user wants to configure the RSS hash type
 - if stats reset fails at initialization time, don't rollback, just
   print ERROR info.
v5 changes:
 - fix some spelling errors
v6 changes:
 - add cover-letter
v7 changes:
 - change commit msg module from "common/idpf" to "net/idpf"

Mingxia Liu (6):
  net/idpf: add hw statistics
  net/idpf: add RSS set/get ops
  net/idpf: support single q scatter RX datapath
  net/idpf: add rss_offload hash in singleq rx
  net/idpf: add alarm to support handle vchnl message
  net/idpf: add xstats ops

 drivers/common/idpf/idpf_common_device.c   |  17 +
 drivers/common/idpf/idpf_common_device.h   |  10 +
 drivers/common/idpf/idpf_common_rxtx.c     | 151 +++++
 drivers/common/idpf/idpf_common_rxtx.h     |   3 +
 drivers/common/idpf/idpf_common_virtchnl.c | 171 +++++-
 drivers/common/idpf/idpf_common_virtchnl.h |  15 +
 drivers/common/idpf/version.map            |   8 +
 drivers/net/idpf/idpf_ethdev.c             | 606 ++++++++++++++++++++-
 drivers/net/idpf/idpf_ethdev.h             |   5 +-
 drivers/net/idpf/idpf_rxtx.c               |  28 +
 drivers/net/idpf/idpf_rxtx.h               |   2 +
 11 files changed, 996 insertions(+), 20 deletions(-)

-- 
2.25.1


^ permalink raw reply	[flat|nested] 63+ messages in thread

* [PATCH v7 1/6] net/idpf: add hw statistics
  2023-02-08  7:33             ` [PATCH v7 " Mingxia Liu
@ 2023-02-08  7:33               ` Mingxia Liu
  2023-02-08  7:33               ` [PATCH v7 2/6] net/idpf: add RSS set/get ops Mingxia Liu
                                 ` (5 subsequent siblings)
  6 siblings, 0 replies; 63+ messages in thread
From: Mingxia Liu @ 2023-02-08  7:33 UTC (permalink / raw)
  To: dev; +Cc: jingjing.wu, beilei.xing, Mingxia Liu

This patch adds hardware packet/byte statistics.

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/common/idpf/idpf_common_device.c   | 17 +++++
 drivers/common/idpf/idpf_common_device.h   |  4 +
 drivers/common/idpf/idpf_common_virtchnl.c | 27 +++++++
 drivers/common/idpf/idpf_common_virtchnl.h |  3 +
 drivers/common/idpf/version.map            |  2 +
 drivers/net/idpf/idpf_ethdev.c             | 86 ++++++++++++++++++++++
 6 files changed, 139 insertions(+)

diff --git a/drivers/common/idpf/idpf_common_device.c b/drivers/common/idpf/idpf_common_device.c
index 48b3e3c0dd..5475a3e52c 100644
--- a/drivers/common/idpf/idpf_common_device.c
+++ b/drivers/common/idpf/idpf_common_device.c
@@ -652,4 +652,21 @@ idpf_vport_info_init(struct idpf_vport *vport,
 	return 0;
 }
 
+void
+idpf_vport_stats_update(struct virtchnl2_vport_stats *oes, struct virtchnl2_vport_stats *nes)
+{
+	nes->rx_bytes = nes->rx_bytes - oes->rx_bytes;
+	nes->rx_unicast = nes->rx_unicast - oes->rx_unicast;
+	nes->rx_multicast = nes->rx_multicast - oes->rx_multicast;
+	nes->rx_broadcast = nes->rx_broadcast - oes->rx_broadcast;
+	nes->rx_errors = nes->rx_errors - oes->rx_errors;
+	nes->rx_discards = nes->rx_discards - oes->rx_discards;
+	nes->tx_bytes = nes->tx_bytes - oes->tx_bytes;
+	nes->tx_unicast = nes->tx_unicast - oes->tx_unicast;
+	nes->tx_multicast = nes->tx_multicast - oes->tx_multicast;
+	nes->tx_broadcast = nes->tx_broadcast - oes->tx_broadcast;
+	nes->tx_errors = nes->tx_errors - oes->tx_errors;
+	nes->tx_discards = nes->tx_discards - oes->tx_discards;
+}
+
 RTE_LOG_REGISTER_SUFFIX(idpf_common_logtype, common, NOTICE);
diff --git a/drivers/common/idpf/idpf_common_device.h b/drivers/common/idpf/idpf_common_device.h
index 545117df79..1d8e7d405a 100644
--- a/drivers/common/idpf/idpf_common_device.h
+++ b/drivers/common/idpf/idpf_common_device.h
@@ -115,6 +115,8 @@ struct idpf_vport {
 	bool tx_vec_allowed;
 	bool rx_use_avx512;
 	bool tx_use_avx512;
+
+	struct virtchnl2_vport_stats eth_stats_offset;
 };
 
 /* Message type read in virtual channel from PF */
@@ -191,5 +193,7 @@ int idpf_vport_irq_unmap_config(struct idpf_vport *vport, uint16_t nb_rx_queues)
 __rte_internal
 int idpf_vport_info_init(struct idpf_vport *vport,
 			 struct virtchnl2_create_vport *vport_info);
+__rte_internal
+void idpf_vport_stats_update(struct virtchnl2_vport_stats *oes, struct virtchnl2_vport_stats *nes);
 
 #endif /* _IDPF_COMMON_DEVICE_H_ */
diff --git a/drivers/common/idpf/idpf_common_virtchnl.c b/drivers/common/idpf/idpf_common_virtchnl.c
index 31fadefbd3..40cff34c09 100644
--- a/drivers/common/idpf/idpf_common_virtchnl.c
+++ b/drivers/common/idpf/idpf_common_virtchnl.c
@@ -217,6 +217,7 @@ idpf_vc_cmd_execute(struct idpf_adapter *adapter, struct idpf_cmd_info *args)
 	case VIRTCHNL2_OP_UNMAP_QUEUE_VECTOR:
 	case VIRTCHNL2_OP_ALLOC_VECTORS:
 	case VIRTCHNL2_OP_DEALLOC_VECTORS:
+	case VIRTCHNL2_OP_GET_STATS:
 		/* for init virtchnl ops, need to poll the response */
 		err = idpf_vc_one_msg_read(adapter, args->ops, args->out_size, args->out_buffer);
 		clear_cmd(adapter);
@@ -806,6 +807,32 @@ idpf_vc_ptype_info_query(struct idpf_adapter *adapter)
 	return err;
 }
 
+int
+idpf_vc_stats_query(struct idpf_vport *vport,
+		struct virtchnl2_vport_stats **pstats)
+{
+	struct idpf_adapter *adapter = vport->adapter;
+	struct virtchnl2_vport_stats vport_stats;
+	struct idpf_cmd_info args;
+	int err;
+
+	vport_stats.vport_id = vport->vport_id;
+	args.ops = VIRTCHNL2_OP_GET_STATS;
+	args.in_args = (u8 *)&vport_stats;
+	args.in_args_size = sizeof(vport_stats);
+	args.out_buffer = adapter->mbx_resp;
+	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
+
+	err = idpf_vc_cmd_execute(adapter, &args);
+	if (err) {
+		DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_GET_STATS");
+		*pstats = NULL;
+		return err;
+	}
+	*pstats = (struct virtchnl2_vport_stats *)args.out_buffer;
+	return 0;
+}
+
 #define IDPF_RX_BUF_STRIDE		64
 int
 idpf_vc_rxq_config(struct idpf_vport *vport, struct idpf_rx_queue *rxq)
diff --git a/drivers/common/idpf/idpf_common_virtchnl.h b/drivers/common/idpf/idpf_common_virtchnl.h
index c105f02836..6b94fd5b8f 100644
--- a/drivers/common/idpf/idpf_common_virtchnl.h
+++ b/drivers/common/idpf/idpf_common_virtchnl.h
@@ -49,4 +49,7 @@ __rte_internal
 int idpf_vc_rxq_config(struct idpf_vport *vport, struct idpf_rx_queue *rxq);
 __rte_internal
 int idpf_vc_txq_config(struct idpf_vport *vport, struct idpf_tx_queue *txq);
+__rte_internal
+int idpf_vc_stats_query(struct idpf_vport *vport,
+			struct virtchnl2_vport_stats **pstats);
 #endif /* _IDPF_COMMON_VIRTCHNL_H_ */
diff --git a/drivers/common/idpf/version.map b/drivers/common/idpf/version.map
index 8b33130bd6..e6a02828ba 100644
--- a/drivers/common/idpf/version.map
+++ b/drivers/common/idpf/version.map
@@ -46,6 +46,7 @@ INTERNAL {
 	idpf_vc_rss_key_set;
 	idpf_vc_rss_lut_set;
 	idpf_vc_rxq_config;
+	idpf_vc_stats_query;
 	idpf_vc_txq_config;
 	idpf_vc_vectors_alloc;
 	idpf_vc_vectors_dealloc;
@@ -59,6 +60,7 @@ INTERNAL {
 	idpf_vport_irq_map_config;
 	idpf_vport_irq_unmap_config;
 	idpf_vport_rss_config;
+	idpf_vport_stats_update;
 
 	local: *;
 };
diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index 33f5e90743..02ddb0330a 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -140,6 +140,87 @@ idpf_dev_supported_ptypes_get(struct rte_eth_dev *dev __rte_unused)
 	return ptypes;
 }
 
+static uint64_t
+idpf_get_mbuf_alloc_failed_stats(struct rte_eth_dev *dev)
+{
+	uint64_t mbuf_alloc_failed = 0;
+	struct idpf_rx_queue *rxq;
+	int i = 0;
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		rxq = dev->data->rx_queues[i];
+		mbuf_alloc_failed += __atomic_load_n(&rxq->rx_stats.mbuf_alloc_failed,
+						     __ATOMIC_RELAXED);
+	}
+
+	return mbuf_alloc_failed;
+}
+
+static int
+idpf_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
+{
+	struct idpf_vport *vport =
+		(struct idpf_vport *)dev->data->dev_private;
+	struct virtchnl2_vport_stats *pstats = NULL;
+	int ret;
+
+	ret = idpf_vc_stats_query(vport, &pstats);
+	if (ret == 0) {
+		uint8_t crc_stats_len = (dev->data->dev_conf.rxmode.offloads &
+					 RTE_ETH_RX_OFFLOAD_KEEP_CRC) ? 0 :
+					 RTE_ETHER_CRC_LEN;
+
+		idpf_vport_stats_update(&vport->eth_stats_offset, pstats);
+		stats->ipackets = pstats->rx_unicast + pstats->rx_multicast +
+				pstats->rx_broadcast - pstats->rx_discards;
+		stats->opackets = pstats->tx_broadcast + pstats->tx_multicast +
+						pstats->tx_unicast;
+		stats->imissed = pstats->rx_discards;
+		stats->oerrors = pstats->tx_errors + pstats->tx_discards;
+		stats->ibytes = pstats->rx_bytes;
+		stats->ibytes -= stats->ipackets * crc_stats_len;
+		stats->obytes = pstats->tx_bytes;
+
+		dev->data->rx_mbuf_alloc_failed = idpf_get_mbuf_alloc_failed_stats(dev);
+		stats->rx_nombuf = dev->data->rx_mbuf_alloc_failed;
+	} else {
+		PMD_DRV_LOG(ERR, "Get statistics failed");
+	}
+	return ret;
+}
+
+static void
+idpf_reset_mbuf_alloc_failed_stats(struct rte_eth_dev *dev)
+{
+	struct idpf_rx_queue *rxq;
+	int i;
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		rxq = dev->data->rx_queues[i];
+		__atomic_store_n(&rxq->rx_stats.mbuf_alloc_failed, 0, __ATOMIC_RELAXED);
+	}
+}
+
+static int
+idpf_dev_stats_reset(struct rte_eth_dev *dev)
+{
+	struct idpf_vport *vport =
+		(struct idpf_vport *)dev->data->dev_private;
+	struct virtchnl2_vport_stats *pstats = NULL;
+	int ret;
+
+	ret = idpf_vc_stats_query(vport, &pstats);
+	if (ret != 0)
+		return ret;
+
+	/* set stats offset base on current values */
+	vport->eth_stats_offset = *pstats;
+
+	idpf_reset_mbuf_alloc_failed_stats(dev);
+
+	return 0;
+}
+
 static int
 idpf_init_rss(struct idpf_vport *vport)
 {
@@ -327,6 +408,9 @@ idpf_dev_start(struct rte_eth_dev *dev)
 		goto err_vport;
 	}
 
+	if (idpf_dev_stats_reset(dev))
+		PMD_DRV_LOG(ERR, "Failed to reset stats");
+
 	vport->stopped = 0;
 
 	return 0;
@@ -606,6 +690,8 @@ static const struct eth_dev_ops idpf_eth_dev_ops = {
 	.tx_queue_release		= idpf_dev_tx_queue_release,
 	.mtu_set			= idpf_dev_mtu_set,
 	.dev_supported_ptypes_get	= idpf_dev_supported_ptypes_get,
+	.stats_get			= idpf_dev_stats_get,
+	.stats_reset			= idpf_dev_stats_reset,
 };
 
 static uint16_t
-- 
2.25.1


^ permalink raw reply	[flat|nested] 63+ messages in thread
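
These new callbacks are exercised through the usual ethdev calls; a short sketch of reading and clearing the counters from an application:

#include <inttypes.h>
#include <stdio.h>
#include <rte_ethdev.h>

/* Sketch: print the basic counters backed by idpf_dev_stats_get(),
 * then reset the offsets via idpf_dev_stats_reset(). */
static void
app_show_and_clear_stats(uint16_t port_id)
{
	struct rte_eth_stats st;

	if (rte_eth_stats_get(port_id, &st) == 0)
		printf("port %u: ipackets=%" PRIu64 " opackets=%" PRIu64
		       " imissed=%" PRIu64 " rx_nombuf=%" PRIu64 "\n",
		       port_id, st.ipackets, st.opackets,
		       st.imissed, st.rx_nombuf);

	if (rte_eth_stats_reset(port_id) != 0)
		printf("port %u: stats reset failed\n", port_id);
}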

* [PATCH v7 2/6] net/idpf: add RSS set/get ops
  2023-02-08  7:33             ` [PATCH v7 " Mingxia Liu
  2023-02-08  7:33               ` [PATCH v7 1/6] net/idpf: add hw statistics Mingxia Liu
@ 2023-02-08  7:33               ` Mingxia Liu
  2023-02-08  7:33               ` [PATCH v7 3/6] net/idpf: support single q scatter RX datapath Mingxia Liu
                                 ` (4 subsequent siblings)
  6 siblings, 0 replies; 63+ messages in thread
From: Mingxia Liu @ 2023-02-08  7:33 UTC (permalink / raw)
  To: dev; +Cc: jingjing.wu, beilei.xing, Mingxia Liu

Add support for these device ops:
- rss_reta_update
- rss_reta_query
- rss_hash_update
- rss_hash_conf_get

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/common/idpf/idpf_common_device.h   |   1 +
 drivers/common/idpf/idpf_common_virtchnl.c | 119 +++++++++
 drivers/common/idpf/idpf_common_virtchnl.h |   6 +
 drivers/common/idpf/version.map            |   3 +
 drivers/net/idpf/idpf_ethdev.c             | 268 +++++++++++++++++++++
 drivers/net/idpf/idpf_ethdev.h             |   3 +-
 6 files changed, 399 insertions(+), 1 deletion(-)

diff --git a/drivers/common/idpf/idpf_common_device.h b/drivers/common/idpf/idpf_common_device.h
index 1d8e7d405a..7abc4d2a3a 100644
--- a/drivers/common/idpf/idpf_common_device.h
+++ b/drivers/common/idpf/idpf_common_device.h
@@ -98,6 +98,7 @@ struct idpf_vport {
 	uint32_t *rss_lut;
 	uint8_t *rss_key;
 	uint64_t rss_hf;
+	uint64_t last_general_rss_hf;
 
 	/* MSIX info*/
 	struct virtchnl2_queue_vector *qv_map; /* queue vector mapping */
diff --git a/drivers/common/idpf/idpf_common_virtchnl.c b/drivers/common/idpf/idpf_common_virtchnl.c
index 40cff34c09..10cfa33704 100644
--- a/drivers/common/idpf/idpf_common_virtchnl.c
+++ b/drivers/common/idpf/idpf_common_virtchnl.c
@@ -218,6 +218,9 @@ idpf_vc_cmd_execute(struct idpf_adapter *adapter, struct idpf_cmd_info *args)
 	case VIRTCHNL2_OP_ALLOC_VECTORS:
 	case VIRTCHNL2_OP_DEALLOC_VECTORS:
 	case VIRTCHNL2_OP_GET_STATS:
+	case VIRTCHNL2_OP_GET_RSS_KEY:
+	case VIRTCHNL2_OP_GET_RSS_HASH:
+	case VIRTCHNL2_OP_GET_RSS_LUT:
 		/* for init virtchnl ops, need to poll the response */
 		err = idpf_vc_one_msg_read(adapter, args->ops, args->out_size, args->out_buffer);
 		clear_cmd(adapter);
@@ -448,6 +451,48 @@ idpf_vc_rss_key_set(struct idpf_vport *vport)
 	return err;
 }
 
+int idpf_vc_rss_key_get(struct idpf_vport *vport)
+{
+	struct idpf_adapter *adapter = vport->adapter;
+	struct virtchnl2_rss_key *rss_key_ret;
+	struct virtchnl2_rss_key rss_key;
+	struct idpf_cmd_info args;
+	int err;
+
+	memset(&rss_key, 0, sizeof(rss_key));
+	rss_key.vport_id = vport->vport_id;
+
+	memset(&args, 0, sizeof(args));
+	args.ops = VIRTCHNL2_OP_GET_RSS_KEY;
+	args.in_args = (uint8_t *)&rss_key;
+	args.in_args_size = sizeof(rss_key);
+	args.out_buffer = adapter->mbx_resp;
+	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
+
+	err = idpf_vc_cmd_execute(adapter, &args);
+
+	if (!err) {
+		rss_key_ret = (struct virtchnl2_rss_key *)args.out_buffer;
+		if (rss_key_ret->key_len != vport->rss_key_size) {
+			rte_free(vport->rss_key);
+			vport->rss_key = NULL;
+			vport->rss_key_size = RTE_MIN(IDPF_RSS_KEY_LEN,
+						      rss_key_ret->key_len);
+			vport->rss_key = rte_zmalloc("rss_key", vport->rss_key_size, 0);
+			if (!vport->rss_key) {
+				vport->rss_key_size = 0;
+				DRV_LOG(ERR, "Failed to allocate RSS key");
+				return -ENOMEM;
+			}
+		}
+		rte_memcpy(vport->rss_key, rss_key_ret->key, vport->rss_key_size);
+	} else {
+		DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_GET_RSS_KEY");
+	}
+
+	return err;
+}
+
 int
 idpf_vc_rss_lut_set(struct idpf_vport *vport)
 {
@@ -482,6 +527,80 @@ idpf_vc_rss_lut_set(struct idpf_vport *vport)
 	return err;
 }
 
+int
+idpf_vc_rss_lut_get(struct idpf_vport *vport)
+{
+	struct idpf_adapter *adapter = vport->adapter;
+	struct virtchnl2_rss_lut *rss_lut_ret;
+	struct virtchnl2_rss_lut rss_lut;
+	struct idpf_cmd_info args;
+	int err;
+
+	memset(&rss_lut, 0, sizeof(rss_lut));
+	rss_lut.vport_id = vport->vport_id;
+
+	memset(&args, 0, sizeof(args));
+	args.ops = VIRTCHNL2_OP_GET_RSS_LUT;
+	args.in_args = (uint8_t *)&rss_lut;
+	args.in_args_size = sizeof(rss_lut);
+	args.out_buffer = adapter->mbx_resp;
+	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
+
+	err = idpf_vc_cmd_execute(adapter, &args);
+
+	if (!err) {
+		rss_lut_ret = (struct virtchnl2_rss_lut *)args.out_buffer;
+		if (rss_lut_ret->lut_entries != vport->rss_lut_size) {
+			rte_free(vport->rss_lut);
+			vport->rss_lut = NULL;
+			vport->rss_lut = rte_zmalloc("rss_lut",
+				     sizeof(uint32_t) * rss_lut_ret->lut_entries, 0);
+			if (vport->rss_lut == NULL) {
+				DRV_LOG(ERR, "Failed to allocate RSS lut");
+				return -ENOMEM;
+			}
+		}
+		rte_memcpy(vport->rss_lut, rss_lut_ret->lut, rss_lut_ret->lut_entries);
+		vport->rss_lut_size = rss_lut_ret->lut_entries;
+	} else {
+		DRV_LOG(ERR, "Failed to execute command of VIRTCHNL2_OP_GET_RSS_LUT");
+	}
+
+	return err;
+}
+
+int
+idpf_vc_rss_hash_get(struct idpf_vport *vport)
+{
+	struct idpf_adapter *adapter = vport->adapter;
+	struct virtchnl2_rss_hash *rss_hash_ret;
+	struct virtchnl2_rss_hash rss_hash;
+	struct idpf_cmd_info args;
+	int err;
+
+	memset(&rss_hash, 0, sizeof(rss_hash));
+	rss_hash.ptype_groups = vport->rss_hf;
+	rss_hash.vport_id = vport->vport_id;
+
+	memset(&args, 0, sizeof(args));
+	args.ops = VIRTCHNL2_OP_GET_RSS_HASH;
+	args.in_args = (uint8_t *)&rss_hash;
+	args.in_args_size = sizeof(rss_hash);
+	args.out_buffer = adapter->mbx_resp;
+	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
+
+	err = idpf_vc_cmd_execute(adapter, &args);
+
+	if (!err) {
+		rss_hash_ret = (struct virtchnl2_rss_hash *)args.out_buffer;
+		vport->rss_hf = rss_hash_ret->ptype_groups;
+	} else {
+		DRV_LOG(ERR, "Failed to execute command of OP_GET_RSS_HASH");
+	}
+
+	return err;
+}
+
 int
 idpf_vc_rss_hash_set(struct idpf_vport *vport)
 {
diff --git a/drivers/common/idpf/idpf_common_virtchnl.h b/drivers/common/idpf/idpf_common_virtchnl.h
index 6b94fd5b8f..205d1a932d 100644
--- a/drivers/common/idpf/idpf_common_virtchnl.h
+++ b/drivers/common/idpf/idpf_common_virtchnl.h
@@ -52,4 +52,10 @@ int idpf_vc_txq_config(struct idpf_vport *vport, struct idpf_tx_queue *txq);
 __rte_internal
 int idpf_vc_stats_query(struct idpf_vport *vport,
 			struct virtchnl2_vport_stats **pstats);
+__rte_internal
+int idpf_vc_rss_key_get(struct idpf_vport *vport);
+__rte_internal
+int idpf_vc_rss_lut_get(struct idpf_vport *vport);
+__rte_internal
+int idpf_vc_rss_hash_get(struct idpf_vport *vport);
 #endif /* _IDPF_COMMON_VIRTCHNL_H_ */
diff --git a/drivers/common/idpf/version.map b/drivers/common/idpf/version.map
index e6a02828ba..f6c92e7e57 100644
--- a/drivers/common/idpf/version.map
+++ b/drivers/common/idpf/version.map
@@ -42,8 +42,11 @@ INTERNAL {
 	idpf_vc_ptype_info_query;
 	idpf_vc_queue_switch;
 	idpf_vc_queues_ena_dis;
+	idpf_vc_rss_hash_get;
 	idpf_vc_rss_hash_set;
+	idpf_vc_rss_key_get;
 	idpf_vc_rss_key_set;
+	idpf_vc_rss_lut_get;
 	idpf_vc_rss_lut_set;
 	idpf_vc_rxq_config;
 	idpf_vc_stats_query;
diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index 02ddb0330a..7262109d0a 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -29,6 +29,56 @@ static const char * const idpf_valid_args[] = {
 	NULL
 };
 
+static const uint64_t idpf_map_hena_rss[] = {
+	[IDPF_HASH_NONF_UNICAST_IPV4_UDP] =
+			RTE_ETH_RSS_NONFRAG_IPV4_UDP,
+	[IDPF_HASH_NONF_MULTICAST_IPV4_UDP] =
+			RTE_ETH_RSS_NONFRAG_IPV4_UDP,
+	[IDPF_HASH_NONF_IPV4_UDP] =
+			RTE_ETH_RSS_NONFRAG_IPV4_UDP,
+	[IDPF_HASH_NONF_IPV4_TCP_SYN_NO_ACK] =
+			RTE_ETH_RSS_NONFRAG_IPV4_TCP,
+	[IDPF_HASH_NONF_IPV4_TCP] =
+			RTE_ETH_RSS_NONFRAG_IPV4_TCP,
+	[IDPF_HASH_NONF_IPV4_SCTP] =
+			RTE_ETH_RSS_NONFRAG_IPV4_SCTP,
+	[IDPF_HASH_NONF_IPV4_OTHER] =
+			RTE_ETH_RSS_NONFRAG_IPV4_OTHER,
+	[IDPF_HASH_FRAG_IPV4] = RTE_ETH_RSS_FRAG_IPV4,
+
+	/* IPv6 */
+	[IDPF_HASH_NONF_UNICAST_IPV6_UDP] =
+			RTE_ETH_RSS_NONFRAG_IPV6_UDP,
+	[IDPF_HASH_NONF_MULTICAST_IPV6_UDP] =
+			RTE_ETH_RSS_NONFRAG_IPV6_UDP,
+	[IDPF_HASH_NONF_IPV6_UDP] =
+			RTE_ETH_RSS_NONFRAG_IPV6_UDP,
+	[IDPF_HASH_NONF_IPV6_TCP_SYN_NO_ACK] =
+			RTE_ETH_RSS_NONFRAG_IPV6_TCP,
+	[IDPF_HASH_NONF_IPV6_TCP] =
+			RTE_ETH_RSS_NONFRAG_IPV6_TCP,
+	[IDPF_HASH_NONF_IPV6_SCTP] =
+			RTE_ETH_RSS_NONFRAG_IPV6_SCTP,
+	[IDPF_HASH_NONF_IPV6_OTHER] =
+			RTE_ETH_RSS_NONFRAG_IPV6_OTHER,
+	[IDPF_HASH_FRAG_IPV6] = RTE_ETH_RSS_FRAG_IPV6,
+
+	/* L2 Payload */
+	[IDPF_HASH_L2_PAYLOAD] = RTE_ETH_RSS_L2_PAYLOAD
+};
+
+static const uint64_t idpf_ipv4_rss = RTE_ETH_RSS_NONFRAG_IPV4_UDP |
+			  RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+			  RTE_ETH_RSS_NONFRAG_IPV4_SCTP |
+			  RTE_ETH_RSS_NONFRAG_IPV4_OTHER |
+			  RTE_ETH_RSS_FRAG_IPV4;
+
+static const uint64_t idpf_ipv6_rss = RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+			  RTE_ETH_RSS_NONFRAG_IPV6_TCP |
+			  RTE_ETH_RSS_NONFRAG_IPV6_SCTP |
+			  RTE_ETH_RSS_NONFRAG_IPV6_OTHER |
+			  RTE_ETH_RSS_FRAG_IPV6;
+
 static int
 idpf_dev_link_update(struct rte_eth_dev *dev,
 		     __rte_unused int wait_to_complete)
@@ -59,6 +109,9 @@ idpf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->max_mtu = vport->max_mtu;
 	dev_info->min_mtu = RTE_ETHER_MIN_MTU;
 
+	dev_info->hash_key_size = vport->rss_key_size;
+	dev_info->reta_size = vport->rss_lut_size;
+
 	dev_info->flow_type_rss_offloads = IDPF_RSS_OFFLOAD_ALL;
 
 	dev_info->rx_offload_capa =
@@ -221,6 +274,36 @@ idpf_dev_stats_reset(struct rte_eth_dev *dev)
 	return 0;
 }
 
+static int idpf_config_rss_hf(struct idpf_vport *vport, uint64_t rss_hf)
+{
+	uint64_t hena = 0;
+	uint16_t i;
+
+	/**
+	 * RTE_ETH_RSS_IPV4 and RTE_ETH_RSS_IPV6 can be considered as 2
+	 * generalizations of all other IPv4 and IPv6 RSS types.
+	 */
+	if (rss_hf & RTE_ETH_RSS_IPV4)
+		rss_hf |= idpf_ipv4_rss;
+
+	if (rss_hf & RTE_ETH_RSS_IPV6)
+		rss_hf |= idpf_ipv6_rss;
+
+	for (i = 0; i < RTE_DIM(idpf_map_hena_rss); i++) {
+		if (idpf_map_hena_rss[i] & rss_hf)
+			hena |= BIT_ULL(i);
+	}
+
+	/**
+	 * At present, cp doesn't process the virtual channel msg of rss_hf configuration,
+	 * tips are given below.
+	 */
+	if (hena != vport->rss_hf)
+		PMD_DRV_LOG(WARNING, "Updating RSS Hash Function is not supported at present.");
+
+	return 0;
+}
+
 static int
 idpf_init_rss(struct idpf_vport *vport)
 {
@@ -257,6 +340,187 @@ idpf_init_rss(struct idpf_vport *vport)
 	return ret;
 }
 
+static int
+idpf_rss_reta_update(struct rte_eth_dev *dev,
+		     struct rte_eth_rss_reta_entry64 *reta_conf,
+		     uint16_t reta_size)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_adapter *adapter = vport->adapter;
+	uint16_t idx, shift;
+	int ret = 0;
+	uint16_t i;
+
+	if (adapter->caps.rss_caps == 0 || dev->data->nb_rx_queues == 0) {
+		PMD_DRV_LOG(DEBUG, "RSS is not supported");
+		return -ENOTSUP;
+	}
+
+	if (reta_size != vport->rss_lut_size) {
+		PMD_DRV_LOG(ERR, "The size of hash lookup table configured "
+				 "(%d) doesn't match the number of hardware can "
+				 "support (%d)",
+			    reta_size, vport->rss_lut_size);
+		return -EINVAL;
+	}
+
+	for (i = 0; i < reta_size; i++) {
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
+		if (reta_conf[idx].mask & (1ULL << shift))
+			vport->rss_lut[i] = reta_conf[idx].reta[shift];
+	}
+
+	/* send virtchnl ops to configure RSS */
+	ret = idpf_vc_rss_lut_set(vport);
+	if (ret)
+		PMD_INIT_LOG(ERR, "Failed to configure RSS lut");
+
+	return ret;
+}
+
+static int
+idpf_rss_reta_query(struct rte_eth_dev *dev,
+		    struct rte_eth_rss_reta_entry64 *reta_conf,
+		    uint16_t reta_size)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_adapter *adapter = vport->adapter;
+	uint16_t idx, shift;
+	int ret = 0;
+	uint16_t i;
+
+	if (adapter->caps.rss_caps == 0 || dev->data->nb_rx_queues == 0) {
+		PMD_DRV_LOG(DEBUG, "RSS is not supported");
+		return -ENOTSUP;
+	}
+
+	if (reta_size != vport->rss_lut_size) {
+		PMD_DRV_LOG(ERR, "The size of hash lookup table configured "
+			"(%d) doesn't match the number of hardware can "
+			"support (%d)", reta_size, vport->rss_lut_size);
+		return -EINVAL;
+	}
+
+	ret = idpf_vc_rss_lut_get(vport);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "Failed to get RSS LUT");
+		return ret;
+	}
+
+	for (i = 0; i < reta_size; i++) {
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
+		if (reta_conf[idx].mask & (1ULL << shift))
+			reta_conf[idx].reta[shift] = vport->rss_lut[i];
+	}
+
+	return 0;
+}
+
+static int
+idpf_rss_hash_update(struct rte_eth_dev *dev,
+		     struct rte_eth_rss_conf *rss_conf)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_adapter *adapter = vport->adapter;
+	int ret = 0;
+
+	if (adapter->caps.rss_caps == 0 || dev->data->nb_rx_queues == 0) {
+		PMD_DRV_LOG(DEBUG, "RSS is not supported");
+		return -ENOTSUP;
+	}
+
+	if (!rss_conf->rss_key || rss_conf->rss_key_len == 0) {
+		PMD_DRV_LOG(DEBUG, "No key to be configured");
+		goto skip_rss_key;
+	} else if (rss_conf->rss_key_len != vport->rss_key_size) {
+		PMD_DRV_LOG(ERR, "The size of hash key configured "
+				 "(%d) doesn't match the size of hardware can "
+				 "support (%d)",
+			    rss_conf->rss_key_len,
+			    vport->rss_key_size);
+		return -EINVAL;
+	}
+
+	rte_memcpy(vport->rss_key, rss_conf->rss_key,
+		   vport->rss_key_size);
+	ret = idpf_vc_rss_key_set(vport);
+	if (ret != 0) {
+		PMD_INIT_LOG(ERR, "Failed to configure RSS key");
+		return ret;
+	}
+
+skip_rss_key:
+	ret = idpf_config_rss_hf(vport, rss_conf->rss_hf);
+	if (ret != 0) {
+		PMD_INIT_LOG(ERR, "Failed to configure RSS hash");
+		return ret;
+	}
+
+	return 0;
+}
+
+static uint64_t
+idpf_map_general_rss_hf(uint64_t config_rss_hf, uint64_t last_general_rss_hf)
+{
+	uint64_t valid_rss_hf = 0;
+	uint16_t i;
+
+	for (i = 0; i < RTE_DIM(idpf_map_hena_rss); i++) {
+		uint64_t bit = BIT_ULL(i);
+
+		if (bit & config_rss_hf)
+			valid_rss_hf |= idpf_map_hena_rss[i];
+	}
+
+	if (valid_rss_hf & idpf_ipv4_rss)
+		valid_rss_hf |= last_general_rss_hf & RTE_ETH_RSS_IPV4;
+
+	if (valid_rss_hf & idpf_ipv6_rss)
+		valid_rss_hf |= last_general_rss_hf & RTE_ETH_RSS_IPV6;
+
+	return valid_rss_hf;
+}
+
+static int
+idpf_rss_hash_conf_get(struct rte_eth_dev *dev,
+		       struct rte_eth_rss_conf *rss_conf)
+{
+	struct idpf_vport *vport = dev->data->dev_private;
+	struct idpf_adapter *adapter = vport->adapter;
+	int ret = 0;
+
+	if (adapter->caps.rss_caps == 0 || dev->data->nb_rx_queues == 0) {
+		PMD_DRV_LOG(DEBUG, "RSS is not supported");
+		return -ENOTSUP;
+	}
+
+	ret = idpf_vc_rss_hash_get(vport);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "Failed to get RSS hf");
+		return ret;
+	}
+
+	rss_conf->rss_hf = idpf_map_general_rss_hf(vport->rss_hf, vport->last_general_rss_hf);
+
+	if (!rss_conf->rss_key)
+		return 0;
+
+	ret = idpf_vc_rss_key_get(vport);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "Failed to get RSS key");
+		return ret;
+	}
+
+	if (rss_conf->rss_key_len > vport->rss_key_size)
+		rss_conf->rss_key_len = vport->rss_key_size;
+
+	rte_memcpy(rss_conf->rss_key, vport->rss_key, rss_conf->rss_key_len);
+
+	return 0;
+}
+
 static int
 idpf_dev_configure(struct rte_eth_dev *dev)
 {
@@ -692,6 +956,10 @@ static const struct eth_dev_ops idpf_eth_dev_ops = {
 	.dev_supported_ptypes_get	= idpf_dev_supported_ptypes_get,
 	.stats_get			= idpf_dev_stats_get,
 	.stats_reset			= idpf_dev_stats_reset,
+	.reta_update			= idpf_rss_reta_update,
+	.reta_query			= idpf_rss_reta_query,
+	.rss_hash_update		= idpf_rss_hash_update,
+	.rss_hash_conf_get		= idpf_rss_hash_conf_get,
 };
 
 static uint16_t
diff --git a/drivers/net/idpf/idpf_ethdev.h b/drivers/net/idpf/idpf_ethdev.h
index d791d402fb..839a2bd82c 100644
--- a/drivers/net/idpf/idpf_ethdev.h
+++ b/drivers/net/idpf/idpf_ethdev.h
@@ -48,7 +48,8 @@
 		RTE_ETH_RSS_NONFRAG_IPV6_TCP    |	\
 		RTE_ETH_RSS_NONFRAG_IPV6_UDP    |	\
 		RTE_ETH_RSS_NONFRAG_IPV6_SCTP   |	\
-		RTE_ETH_RSS_NONFRAG_IPV6_OTHER)
+		RTE_ETH_RSS_NONFRAG_IPV6_OTHER  |	\
+		RTE_ETH_RSS_L2_PAYLOAD)
 
 #define IDPF_ADAPTER_NAME_LEN	(PCI_PRI_STR_SIZE + 1)
 
-- 
2.25.1
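
As a usage reference (not part of the patch), here is a minimal application-side sketch of driving the reta_query/reta_update ops above through the generic ethdev API. The helper name, the 512-entry bound, and the round-robin spread are illustrative assumptions; the port is assumed to be configured with nb_rxq >= 1 Rx queues.

#include <errno.h>
#include <string.h>
#include <rte_common.h>
#include <rte_ethdev.h>

static int
rewrite_rss_reta(uint16_t port_id, uint16_t nb_rxq)
{
	struct rte_eth_rss_reta_entry64 reta_conf[8]; /* up to 512 entries */
	struct rte_eth_dev_info dev_info;
	uint16_t i, idx, shift;
	int ret;

	if (nb_rxq == 0)
		return -EINVAL;
	ret = rte_eth_dev_info_get(port_id, &dev_info);
	if (ret != 0)
		return ret;
	if (dev_info.reta_size > RTE_DIM(reta_conf) * RTE_ETH_RETA_GROUP_SIZE)
		return -EINVAL;

	/* Select every entry, then read the current table (reta_query op). */
	memset(reta_conf, 0, sizeof(reta_conf));
	for (i = 0; i < dev_info.reta_size; i++)
		reta_conf[i / RTE_ETH_RETA_GROUP_SIZE].mask |=
			1ULL << (i % RTE_ETH_RETA_GROUP_SIZE);
	ret = rte_eth_dev_rss_reta_query(port_id, reta_conf, dev_info.reta_size);
	if (ret != 0)
		return ret;

	/* Spread the entries round-robin over nb_rxq queues (reta_update op). */
	for (i = 0; i < dev_info.reta_size; i++) {
		idx = i / RTE_ETH_RETA_GROUP_SIZE;
		shift = i % RTE_ETH_RETA_GROUP_SIZE;
		reta_conf[idx].reta[shift] = i % nb_rxq;
	}
	return rte_eth_dev_rss_reta_update(port_id, reta_conf, dev_info.reta_size);
}

The rss_hash_update/rss_hash_conf_get ops are reached the same way through rte_eth_dev_rss_hash_update() and rte_eth_dev_rss_hash_conf_get().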


^ permalink raw reply	[flat|nested] 63+ messages in thread

* [PATCH v7 3/6] net/idpf: support single q scatter RX datapath
  2023-02-08  7:33             ` [PATCH v7 " Mingxia Liu
  2023-02-08  7:33               ` [PATCH v7 1/6] net/idpf: add hw statistics Mingxia Liu
  2023-02-08  7:33               ` [PATCH v7 2/6] net/idpf: add RSS set/get ops Mingxia Liu
@ 2023-02-08  7:33               ` Mingxia Liu
  2023-02-08  7:33               ` [PATCH v7 4/6] net/idpf: add rss_offload hash in singleq rx Mingxia Liu
                                 ` (3 subsequent siblings)
  6 siblings, 0 replies; 63+ messages in thread
From: Mingxia Liu @ 2023-02-08  7:33 UTC (permalink / raw)
  To: dev; +Cc: jingjing.wu, beilei.xing, Mingxia Liu, Wenjun Wu

This patch adds the single-queue scatter Rx receive function.

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
---
 drivers/common/idpf/idpf_common_rxtx.c | 135 +++++++++++++++++++++++++
 drivers/common/idpf/idpf_common_rxtx.h |   3 +
 drivers/common/idpf/version.map        |   1 +
 drivers/net/idpf/idpf_ethdev.c         |   3 +-
 drivers/net/idpf/idpf_rxtx.c           |  28 +++++
 drivers/net/idpf/idpf_rxtx.h           |   2 +
 6 files changed, 171 insertions(+), 1 deletion(-)
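
As a usage reference (not part of the patch), a minimal sketch of how an application might request this path: ask for RTE_ETH_RX_OFFLOAD_SCATTER and use an MTU larger than one mbuf data buffer, so idpf_set_rx_function() below picks the scatter routine. The mempool is assumed to use the default ~2 KB data room; queue counts and sizes are illustrative.

#include <string.h>
#include <rte_ethdev.h>
#include <rte_mempool.h>

static int
setup_scattered_rx(uint16_t port_id, struct rte_mempool *mp)
{
	struct rte_eth_conf conf;
	int socket = rte_eth_dev_socket_id(port_id);
	int ret;

	memset(&conf, 0, sizeof(conf));
	conf.rxmode.mtu = 3000;	/* larger than one default (~2 KB) mbuf buffer */
	conf.rxmode.offloads = RTE_ETH_RX_OFFLOAD_SCATTER;

	ret = rte_eth_dev_configure(port_id, 1, 1, &conf);
	if (ret != 0)
		return ret;
	ret = rte_eth_rx_queue_setup(port_id, 0, 512, socket, NULL, mp);
	if (ret != 0)
		return ret;
	ret = rte_eth_tx_queue_setup(port_id, 0, 512, socket, NULL);
	if (ret != 0)
		return ret;

	/* Frames larger than one mbuf now arrive as chained mbufs;
	 * rte_pktmbuf_free() releases the whole chain.
	 */
	return rte_eth_dev_start(port_id);
}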

diff --git a/drivers/common/idpf/idpf_common_rxtx.c b/drivers/common/idpf/idpf_common_rxtx.c
index fdac2c3114..9303b51cce 100644
--- a/drivers/common/idpf/idpf_common_rxtx.c
+++ b/drivers/common/idpf/idpf_common_rxtx.c
@@ -1146,6 +1146,141 @@ idpf_dp_singleq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 	return nb_rx;
 }
 
+uint16_t
+idpf_dp_singleq_recv_scatter_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
+			       uint16_t nb_pkts)
+{
+	struct idpf_rx_queue *rxq = rx_queue;
+	volatile union virtchnl2_rx_desc *rx_ring = rxq->rx_ring;
+	volatile union virtchnl2_rx_desc *rxdp;
+	union virtchnl2_rx_desc rxd;
+	struct idpf_adapter *ad;
+	struct rte_mbuf *first_seg = rxq->pkt_first_seg;
+	struct rte_mbuf *last_seg = rxq->pkt_last_seg;
+	struct rte_mbuf *rxm;
+	struct rte_mbuf *nmb;
+	struct rte_eth_dev *dev;
+	const uint32_t *ptype_tbl = rxq->adapter->ptype_tbl;
+	uint16_t rx_id = rxq->rx_tail;
+	uint16_t rx_packet_len;
+	uint16_t nb_hold = 0;
+	uint16_t rx_status0;
+	uint16_t nb_rx = 0;
+	uint64_t pkt_flags;
+	uint64_t dma_addr;
+	uint64_t ts_ns;
+
+	ad = rxq->adapter;
+
+	if (unlikely(!rxq) || unlikely(!rxq->q_started))
+		return nb_rx;
+
+	while (nb_rx < nb_pkts) {
+		rxdp = &rx_ring[rx_id];
+		rx_status0 = rte_le_to_cpu_16(rxdp->flex_nic_wb.status_error0);
+
+		/* Check the DD bit first */
+		if (!(rx_status0 & (1 << VIRTCHNL2_RX_FLEX_DESC_STATUS0_DD_S)))
+			break;
+
+		nmb = rte_mbuf_raw_alloc(rxq->mp);
+		if (unlikely(!nmb)) {
+			__atomic_fetch_add(&rxq->rx_stats.mbuf_alloc_failed, 1, __ATOMIC_RELAXED);
+			RX_LOG(DEBUG, "RX mbuf alloc failed port_id=%u "
+			       "queue_id=%u", rxq->port_id, rxq->queue_id);
+			break;
+		}
+
+		rxd = *rxdp;
+
+		nb_hold++;
+		rxm = rxq->sw_ring[rx_id];
+		rxq->sw_ring[rx_id] = nmb;
+		rx_id++;
+		if (unlikely(rx_id == rxq->nb_rx_desc))
+			rx_id = 0;
+
+		/* Prefetch next mbuf */
+		rte_prefetch0(rxq->sw_ring[rx_id]);
+
+		/* When next RX descriptor is on a cache line boundary,
+		 * prefetch the next 4 RX descriptors and next 8 pointers
+		 * to mbufs.
+		 */
+		if ((rx_id & 0x3) == 0) {
+			rte_prefetch0(&rx_ring[rx_id]);
+			rte_prefetch0(rxq->sw_ring[rx_id]);
+		}
+		dma_addr =
+			rte_cpu_to_le_64(rte_mbuf_data_iova_default(nmb));
+		rxdp->read.hdr_addr = 0;
+		rxdp->read.pkt_addr = dma_addr;
+		rx_packet_len = (rte_cpu_to_le_16(rxd.flex_nic_wb.pkt_len) &
+				 VIRTCHNL2_RX_FLEX_DESC_PKT_LEN_M);
+		rxm->data_len = rx_packet_len;
+		rxm->data_off = RTE_PKTMBUF_HEADROOM;
+
+		/**
+		 * If this is the first buffer of the received packet, set the
+		 * pointer to the first mbuf of the packet and initialize its
+		 * context. Otherwise, update the total length and the number
+		 * of segments of the current scattered packet, and update the
+		 * pointer to the last mbuf of the current packet.
+		 */
+		if (!first_seg) {
+			first_seg = rxm;
+			first_seg->nb_segs = 1;
+			first_seg->pkt_len = rx_packet_len;
+		} else {
+			first_seg->pkt_len =
+				(uint16_t)(first_seg->pkt_len +
+					   rx_packet_len);
+			first_seg->nb_segs++;
+			last_seg->next = rxm;
+		}
+
+		if (!(rx_status0 & (1 << VIRTCHNL2_RX_FLEX_DESC_STATUS0_EOF_S))) {
+			last_seg = rxm;
+			continue;
+		}
+
+		rxm->next = NULL;
+
+		first_seg->port = rxq->port_id;
+		first_seg->ol_flags = 0;
+		pkt_flags = idpf_rxd_to_pkt_flags(rx_status0);
+		first_seg->packet_type =
+			ptype_tbl[(uint8_t)(rte_cpu_to_le_16(rxd.flex_nic_wb.ptype_flex_flags0) &
+				VIRTCHNL2_RX_FLEX_DESC_PTYPE_M)];
+
+		if (idpf_timestamp_dynflag > 0 &&
+		    (rxq->offloads & IDPF_RX_OFFLOAD_TIMESTAMP) != 0) {
+			/* timestamp */
+			ts_ns = idpf_tstamp_convert_32b_64b(ad,
+				rxq->hw_register_set,
+				rte_le_to_cpu_32(rxd.flex_nic_wb.flex_ts.ts_high));
+			rxq->hw_register_set = 0;
+			*RTE_MBUF_DYNFIELD(rxm,
+					   idpf_timestamp_dynfield_offset,
+					   rte_mbuf_timestamp_t *) = ts_ns;
+			first_seg->ol_flags |= idpf_timestamp_dynflag;
+		}
+
+		first_seg->ol_flags |= pkt_flags;
+		rte_prefetch0(RTE_PTR_ADD(first_seg->buf_addr,
+					  first_seg->data_off));
+		rx_pkts[nb_rx++] = first_seg;
+		first_seg = NULL;
+	}
+	rxq->rx_tail = rx_id;
+	rxq->pkt_first_seg = first_seg;
+	rxq->pkt_last_seg = last_seg;
+
+	idpf_update_rx_tail(rxq, nb_hold, rx_id);
+
+	return nb_rx;
+}
+
 static inline int
 idpf_xmit_cleanup(struct idpf_tx_queue *txq)
 {
diff --git a/drivers/common/idpf/idpf_common_rxtx.h b/drivers/common/idpf/idpf_common_rxtx.h
index 263dab061c..7e6df080e6 100644
--- a/drivers/common/idpf/idpf_common_rxtx.h
+++ b/drivers/common/idpf/idpf_common_rxtx.h
@@ -293,5 +293,8 @@ uint16_t idpf_dp_singleq_xmit_pkts_avx512(void *tx_queue,
 __rte_internal
 uint16_t idpf_dp_splitq_xmit_pkts_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
 					 uint16_t nb_pkts);
+__rte_internal
+uint16_t idpf_dp_singleq_recv_scatter_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
+			  uint16_t nb_pkts);
 
 #endif /* _IDPF_COMMON_RXTX_H_ */
diff --git a/drivers/common/idpf/version.map b/drivers/common/idpf/version.map
index f6c92e7e57..e31f6ff4d9 100644
--- a/drivers/common/idpf/version.map
+++ b/drivers/common/idpf/version.map
@@ -7,6 +7,7 @@ INTERNAL {
 	idpf_dp_prep_pkts;
 	idpf_dp_singleq_recv_pkts;
 	idpf_dp_singleq_recv_pkts_avx512;
+	idpf_dp_singleq_recv_scatter_pkts;
 	idpf_dp_singleq_xmit_pkts;
 	idpf_dp_singleq_xmit_pkts_avx512;
 	idpf_dp_splitq_recv_pkts;
diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index 7262109d0a..11f0ca0085 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -119,7 +119,8 @@ idpf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 		RTE_ETH_RX_OFFLOAD_UDP_CKSUM            |
 		RTE_ETH_RX_OFFLOAD_TCP_CKSUM            |
 		RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM     |
-		RTE_ETH_RX_OFFLOAD_TIMESTAMP;
+		RTE_ETH_RX_OFFLOAD_TIMESTAMP		|
+		RTE_ETH_RX_OFFLOAD_SCATTER;
 
 	dev_info->tx_offload_capa =
 		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM		|
diff --git a/drivers/net/idpf/idpf_rxtx.c b/drivers/net/idpf/idpf_rxtx.c
index 38d9829912..d16acd87fb 100644
--- a/drivers/net/idpf/idpf_rxtx.c
+++ b/drivers/net/idpf/idpf_rxtx.c
@@ -503,6 +503,8 @@ int
 idpf_rx_queue_init(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 {
 	struct idpf_rx_queue *rxq;
+	uint16_t max_pkt_len;
+	uint32_t frame_size;
 	int err;
 
 	if (rx_queue_id >= dev->data->nb_rx_queues)
@@ -516,6 +518,17 @@ idpf_rx_queue_init(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 		return -EINVAL;
 	}
 
+	frame_size = dev->data->mtu + IDPF_ETH_OVERHEAD;
+
+	max_pkt_len =
+	    RTE_MIN((uint32_t)IDPF_SUPPORT_CHAIN_NUM * rxq->rx_buf_len,
+		    frame_size);
+
+	rxq->max_pkt_len = max_pkt_len;
+	if ((dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_SCATTER) ||
+	    frame_size > rxq->rx_buf_len)
+		dev->data->scattered_rx = 1;
+
 	err = idpf_qc_ts_mbuf_register(rxq);
 	if (err != 0) {
 		PMD_DRV_LOG(ERR, "fail to residter timestamp mbuf %u",
@@ -807,6 +820,14 @@ idpf_set_rx_function(struct rte_eth_dev *dev)
 			}
 #endif /* CC_AVX512_SUPPORT */
 		}
+
+		if (dev->data->scattered_rx) {
+			PMD_DRV_LOG(NOTICE,
+				    "Using Single Scalar Scatterd Rx (port %d).",
+				    dev->data->port_id);
+			dev->rx_pkt_burst = idpf_dp_singleq_recv_scatter_pkts;
+			return;
+		}
 		PMD_DRV_LOG(NOTICE,
 			    "Using Single Scalar Rx (port %d).",
 			    dev->data->port_id);
@@ -819,6 +840,13 @@ idpf_set_rx_function(struct rte_eth_dev *dev)
 			    dev->data->port_id);
 		dev->rx_pkt_burst = idpf_dp_splitq_recv_pkts;
 	} else {
+		if (dev->data->scattered_rx) {
+			PMD_DRV_LOG(NOTICE,
+				    "Using Single Scalar Scatterd Rx (port %d).",
+				    dev->data->port_id);
+			dev->rx_pkt_burst = idpf_dp_singleq_recv_scatter_pkts;
+			return;
+		}
 		PMD_DRV_LOG(NOTICE,
 			    "Using Single Scalar Rx (port %d).",
 			    dev->data->port_id);
diff --git a/drivers/net/idpf/idpf_rxtx.h b/drivers/net/idpf/idpf_rxtx.h
index 3a5084dfd6..41a7495083 100644
--- a/drivers/net/idpf/idpf_rxtx.h
+++ b/drivers/net/idpf/idpf_rxtx.h
@@ -23,6 +23,8 @@
 #define IDPF_DEFAULT_TX_RS_THRESH	32
 #define IDPF_DEFAULT_TX_FREE_THRESH	32
 
+#define IDPF_SUPPORT_CHAIN_NUM 5
+
 int idpf_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 			uint16_t nb_desc, unsigned int socket_id,
 			const struct rte_eth_rxconf *rx_conf,
-- 
2.25.1


^ permalink raw reply	[flat|nested] 63+ messages in thread

* [PATCH v7 4/6] net/idpf: add rss_offload hash in singleq rx
  2023-02-08  7:33             ` [PATCH v7 " Mingxia Liu
                                 ` (2 preceding siblings ...)
  2023-02-08  7:33               ` [PATCH v7 3/6] net/idpf: support single q scatter RX datapath Mingxia Liu
@ 2023-02-08  7:33               ` Mingxia Liu
  2023-02-08  7:34               ` [PATCH v7 5/6] net/idpf: add alarm to support handle vchnl message Mingxia Liu
                                 ` (2 subsequent siblings)
  6 siblings, 0 replies; 63+ messages in thread
From: Mingxia Liu @ 2023-02-08  7:33 UTC (permalink / raw)
  To: dev; +Cc: jingjing.wu, beilei.xing, Mingxia Liu

This patch adds RSS valid flag and hash value parsing of the Rx descriptor.

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/common/idpf/idpf_common_rxtx.c | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)
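
As a usage reference (not part of the patch), a minimal sketch of consuming the hash that this patch makes the singleq Rx path report; the worker-bucket use and nb_workers >= 1 are illustrative assumptions.

#include <rte_common.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>

static void
rx_burst_with_rss(uint16_t port_id, uint16_t queue_id, uint32_t nb_workers)
{
	struct rte_mbuf *pkts[32];
	uint16_t nb, i;

	nb = rte_eth_rx_burst(port_id, queue_id, pkts, RTE_DIM(pkts));
	for (i = 0; i < nb; i++) {
		if (pkts[i]->ol_flags & RTE_MBUF_F_RX_RSS_HASH) {
			/* Use the HW hash, e.g. to pick a worker/flow bucket. */
			uint32_t worker = pkts[i]->hash.rss % nb_workers;
			RTE_SET_USED(worker);
		}
		rte_pktmbuf_free(pkts[i]);
	}
}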

diff --git a/drivers/common/idpf/idpf_common_rxtx.c b/drivers/common/idpf/idpf_common_rxtx.c
index 9303b51cce..d7e8df1895 100644
--- a/drivers/common/idpf/idpf_common_rxtx.c
+++ b/drivers/common/idpf/idpf_common_rxtx.c
@@ -1030,6 +1030,20 @@ idpf_update_rx_tail(struct idpf_rx_queue *rxq, uint16_t nb_hold,
 	rxq->nb_rx_hold = nb_hold;
 }
 
+static inline void
+idpf_singleq_rx_rss_offload(struct rte_mbuf *mb,
+			    volatile struct virtchnl2_rx_flex_desc_nic *rx_desc,
+			    uint64_t *pkt_flags)
+{
+	uint16_t rx_status0 = rte_le_to_cpu_16(rx_desc->status_error0);
+
+	if (rx_status0 & RTE_BIT32(VIRTCHNL2_RX_FLEX_DESC_STATUS0_RSS_VALID_S)) {
+		*pkt_flags |= RTE_MBUF_F_RX_RSS_HASH;
+		mb->hash.rss = rte_le_to_cpu_32(rx_desc->rss_hash);
+	}
+
+}
+
 uint16_t
 idpf_dp_singleq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 			  uint16_t nb_pkts)
@@ -1118,6 +1132,7 @@ idpf_dp_singleq_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 		rxm->port = rxq->port_id;
 		rxm->ol_flags = 0;
 		pkt_flags = idpf_rxd_to_pkt_flags(rx_status0);
+		idpf_singleq_rx_rss_offload(rxm, &rxd.flex_nic_wb, &pkt_flags);
 		rxm->packet_type =
 			ptype_tbl[(uint8_t)(rte_cpu_to_le_16(rxd.flex_nic_wb.ptype_flex_flags0) &
 					    VIRTCHNL2_RX_FLEX_DESC_PTYPE_M)];
@@ -1249,6 +1264,7 @@ idpf_dp_singleq_recv_scatter_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 		first_seg->port = rxq->port_id;
 		first_seg->ol_flags = 0;
 		pkt_flags = idpf_rxd_to_pkt_flags(rx_status0);
+		idpf_singleq_rx_rss_offload(first_seg, &rxd.flex_nic_wb, &pkt_flags);
 		first_seg->packet_type =
 			ptype_tbl[(uint8_t)(rte_cpu_to_le_16(rxd.flex_nic_wb.ptype_flex_flags0) &
 				VIRTCHNL2_RX_FLEX_DESC_PTYPE_M)];
-- 
2.25.1


^ permalink raw reply	[flat|nested] 63+ messages in thread

* [PATCH v7 5/6] net/idpf: add alarm to support handle vchnl message
  2023-02-08  7:33             ` [PATCH v7 " Mingxia Liu
                                 ` (3 preceding siblings ...)
  2023-02-08  7:33               ` [PATCH v7 4/6] net/idpf: add rss_offload hash in singleq rx Mingxia Liu
@ 2023-02-08  7:34               ` Mingxia Liu
  2023-02-08  7:34               ` [PATCH v7 6/6] net/idpf: add xstats ops Mingxia Liu
  2023-02-08  9:32               ` [PATCH v7 0/6] add idpf pmd enhancement features Zhang, Qi Z
  6 siblings, 0 replies; 63+ messages in thread
From: Mingxia Liu @ 2023-02-08  7:34 UTC (permalink / raw)
  To: dev; +Cc: jingjing.wu, beilei.xing, Mingxia Liu

Handle virtual channel messages.
Refine the link status update.

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
Signed-off-by: Beilei Xing <beilei.xing@intel.com>
---
 drivers/common/idpf/idpf_common_device.h   |   5 +
 drivers/common/idpf/idpf_common_virtchnl.c |  33 ++--
 drivers/common/idpf/idpf_common_virtchnl.h |   6 +
 drivers/common/idpf/version.map            |   2 +
 drivers/net/idpf/idpf_ethdev.c             | 169 ++++++++++++++++++++-
 drivers/net/idpf/idpf_ethdev.h             |   2 +
 6 files changed, 195 insertions(+), 22 deletions(-)
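
For context (not part of the patch), a minimal sketch of the self-re-arming EAL alarm pattern the driver uses below to poll the mailbox; poll_mailbox() and the other names are placeholders, and the interval mirrors the IDPF_ALARM_INTERVAL added further down.

#include <rte_alarm.h>

#define MBX_POLL_INTERVAL_US 50000 /* us */

static void
poll_mailbox(void *arg)
{
	/* Placeholder: read and dispatch control-queue messages here. */
	(void)arg;
}

static void
mailbox_alarm_cb(void *arg)
{
	poll_mailbox(arg);
	/* Re-arm so the handler runs again after the interval. */
	rte_eal_alarm_set(MBX_POLL_INTERVAL_US, mailbox_alarm_cb, arg);
}

static void
start_mailbox_polling(void *ctx)
{
	rte_eal_alarm_set(MBX_POLL_INTERVAL_US, mailbox_alarm_cb, ctx);
}

static void
stop_mailbox_polling(void *ctx)
{
	/* Cancels every pending alarm registered with this callback/arg pair. */
	rte_eal_alarm_cancel(mailbox_alarm_cb, ctx);
}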

diff --git a/drivers/common/idpf/idpf_common_device.h b/drivers/common/idpf/idpf_common_device.h
index 7abc4d2a3a..364a60221a 100644
--- a/drivers/common/idpf/idpf_common_device.h
+++ b/drivers/common/idpf/idpf_common_device.h
@@ -118,6 +118,11 @@ struct idpf_vport {
 	bool tx_use_avx512;
 
 	struct virtchnl2_vport_stats eth_stats_offset;
+
+	void *dev;
+	/* Event from ipf */
+	bool link_up;
+	uint32_t link_speed;
 };
 
 /* Message type read in virtual channel from PF */
diff --git a/drivers/common/idpf/idpf_common_virtchnl.c b/drivers/common/idpf/idpf_common_virtchnl.c
index 10cfa33704..99d9efbb7c 100644
--- a/drivers/common/idpf/idpf_common_virtchnl.c
+++ b/drivers/common/idpf/idpf_common_virtchnl.c
@@ -202,25 +202,6 @@ idpf_vc_cmd_execute(struct idpf_adapter *adapter, struct idpf_cmd_info *args)
 	switch (args->ops) {
 	case VIRTCHNL_OP_VERSION:
 	case VIRTCHNL2_OP_GET_CAPS:
-	case VIRTCHNL2_OP_CREATE_VPORT:
-	case VIRTCHNL2_OP_DESTROY_VPORT:
-	case VIRTCHNL2_OP_SET_RSS_KEY:
-	case VIRTCHNL2_OP_SET_RSS_LUT:
-	case VIRTCHNL2_OP_SET_RSS_HASH:
-	case VIRTCHNL2_OP_CONFIG_RX_QUEUES:
-	case VIRTCHNL2_OP_CONFIG_TX_QUEUES:
-	case VIRTCHNL2_OP_ENABLE_QUEUES:
-	case VIRTCHNL2_OP_DISABLE_QUEUES:
-	case VIRTCHNL2_OP_ENABLE_VPORT:
-	case VIRTCHNL2_OP_DISABLE_VPORT:
-	case VIRTCHNL2_OP_MAP_QUEUE_VECTOR:
-	case VIRTCHNL2_OP_UNMAP_QUEUE_VECTOR:
-	case VIRTCHNL2_OP_ALLOC_VECTORS:
-	case VIRTCHNL2_OP_DEALLOC_VECTORS:
-	case VIRTCHNL2_OP_GET_STATS:
-	case VIRTCHNL2_OP_GET_RSS_KEY:
-	case VIRTCHNL2_OP_GET_RSS_HASH:
-	case VIRTCHNL2_OP_GET_RSS_LUT:
 		/* for init virtchnl ops, need to poll the response */
 		err = idpf_vc_one_msg_read(adapter, args->ops, args->out_size, args->out_buffer);
 		clear_cmd(adapter);
@@ -1111,3 +1092,17 @@ idpf_vc_txq_config(struct idpf_vport *vport, struct idpf_tx_queue *txq)
 
 	return err;
 }
+
+int
+idpf_vc_ctlq_recv(struct idpf_ctlq_info *cq, u16 *num_q_msg,
+		  struct idpf_ctlq_msg *q_msg)
+{
+	return idpf_ctlq_recv(cq, num_q_msg, q_msg);
+}
+
+int
+idpf_vc_ctlq_post_rx_buffs(struct idpf_hw *hw, struct idpf_ctlq_info *cq,
+			   u16 *buff_count, struct idpf_dma_mem **buffs)
+{
+	return idpf_ctlq_post_rx_buffs(hw, cq, buff_count, buffs);
+}
diff --git a/drivers/common/idpf/idpf_common_virtchnl.h b/drivers/common/idpf/idpf_common_virtchnl.h
index 205d1a932d..d479d93c8e 100644
--- a/drivers/common/idpf/idpf_common_virtchnl.h
+++ b/drivers/common/idpf/idpf_common_virtchnl.h
@@ -58,4 +58,10 @@ __rte_internal
 int idpf_vc_rss_lut_get(struct idpf_vport *vport);
 __rte_internal
 int idpf_vc_rss_hash_get(struct idpf_vport *vport);
+__rte_internal
+int idpf_vc_ctlq_recv(struct idpf_ctlq_info *cq, u16 *num_q_msg,
+		      struct idpf_ctlq_msg *q_msg);
+__rte_internal
+int idpf_vc_ctlq_post_rx_buffs(struct idpf_hw *hw, struct idpf_ctlq_info *cq,
+			   u16 *buff_count, struct idpf_dma_mem **buffs);
 #endif /* _IDPF_COMMON_VIRTCHNL_H_ */
diff --git a/drivers/common/idpf/version.map b/drivers/common/idpf/version.map
index e31f6ff4d9..70334a1b03 100644
--- a/drivers/common/idpf/version.map
+++ b/drivers/common/idpf/version.map
@@ -38,6 +38,8 @@ INTERNAL {
 	idpf_vc_api_version_check;
 	idpf_vc_caps_get;
 	idpf_vc_cmd_execute;
+	idpf_vc_ctlq_post_rx_buffs;
+	idpf_vc_ctlq_recv;
 	idpf_vc_irq_map_unmap_config;
 	idpf_vc_one_msg_read;
 	idpf_vc_ptype_info_query;
diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index 11f0ca0085..751c0d8717 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -9,6 +9,7 @@
 #include <rte_memzone.h>
 #include <rte_dev.h>
 #include <errno.h>
+#include <rte_alarm.h>
 
 #include "idpf_ethdev.h"
 #include "idpf_rxtx.h"
@@ -83,14 +84,51 @@ static int
 idpf_dev_link_update(struct rte_eth_dev *dev,
 		     __rte_unused int wait_to_complete)
 {
+	struct idpf_vport *vport = dev->data->dev_private;
 	struct rte_eth_link new_link;
 
 	memset(&new_link, 0, sizeof(new_link));
 
-	new_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+	switch (vport->link_speed) {
+	case RTE_ETH_SPEED_NUM_10M:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_10M;
+		break;
+	case RTE_ETH_SPEED_NUM_100M:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_100M;
+		break;
+	case RTE_ETH_SPEED_NUM_1G:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_1G;
+		break;
+	case RTE_ETH_SPEED_NUM_10G:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_10G;
+		break;
+	case RTE_ETH_SPEED_NUM_20G:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_20G;
+		break;
+	case RTE_ETH_SPEED_NUM_25G:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_25G;
+		break;
+	case RTE_ETH_SPEED_NUM_40G:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_40G;
+		break;
+	case RTE_ETH_SPEED_NUM_50G:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_50G;
+		break;
+	case RTE_ETH_SPEED_NUM_100G:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_100G;
+		break;
+	case RTE_ETH_SPEED_NUM_200G:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_200G;
+		break;
+	default:
+		new_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+	}
+
 	new_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
-	new_link.link_autoneg = !(dev->data->dev_conf.link_speeds &
-				  RTE_ETH_LINK_SPEED_FIXED);
+	new_link.link_status = vport->link_up ? RTE_ETH_LINK_UP :
+		RTE_ETH_LINK_DOWN;
+	new_link.link_autoneg = (dev->data->dev_conf.link_speeds & RTE_ETH_LINK_SPEED_FIXED) ?
+				 RTE_ETH_LINK_FIXED : RTE_ETH_LINK_AUTONEG;
 
 	return rte_eth_linkstatus_set(dev, &new_link);
 }
@@ -891,6 +929,127 @@ idpf_parse_devargs(struct rte_pci_device *pci_dev, struct idpf_adapter_ext *adap
 	return ret;
 }
 
+static struct idpf_vport *
+idpf_find_vport(struct idpf_adapter_ext *adapter, uint32_t vport_id)
+{
+	struct idpf_vport *vport = NULL;
+	int i;
+
+	for (i = 0; i < adapter->cur_vport_nb; i++) {
+		vport = adapter->vports[i];
+		if (vport->vport_id != vport_id)
+			continue;
+		else
+			return vport;
+	}
+
+	return vport;
+}
+
+static void
+idpf_handle_event_msg(struct idpf_vport *vport, uint8_t *msg, uint16_t msglen)
+{
+	struct virtchnl2_event *vc_event = (struct virtchnl2_event *)msg;
+	struct rte_eth_dev *dev = (struct rte_eth_dev *)vport->dev;
+
+	if (msglen < sizeof(struct virtchnl2_event)) {
+		PMD_DRV_LOG(ERR, "Error event");
+		return;
+	}
+
+	switch (vc_event->event) {
+	case VIRTCHNL2_EVENT_LINK_CHANGE:
+		PMD_DRV_LOG(DEBUG, "VIRTCHNL2_EVENT_LINK_CHANGE");
+		vport->link_up = !!(vc_event->link_status);
+		vport->link_speed = vc_event->link_speed;
+		idpf_dev_link_update(dev, 0);
+		break;
+	default:
+		PMD_DRV_LOG(ERR, " unknown event received %u", vc_event->event);
+		break;
+	}
+}
+
+static void
+idpf_handle_virtchnl_msg(struct idpf_adapter_ext *adapter_ex)
+{
+	struct idpf_adapter *adapter = &adapter_ex->base;
+	struct idpf_dma_mem *dma_mem = NULL;
+	struct idpf_hw *hw = &adapter->hw;
+	struct virtchnl2_event *vc_event;
+	struct idpf_ctlq_msg ctlq_msg;
+	enum idpf_mbx_opc mbx_op;
+	struct idpf_vport *vport;
+	enum virtchnl_ops vc_op;
+	uint16_t pending = 1;
+	int ret;
+
+	while (pending) {
+		ret = idpf_vc_ctlq_recv(hw->arq, &pending, &ctlq_msg);
+		if (ret) {
+			PMD_DRV_LOG(INFO, "Failed to read msg from virtual channel, ret: %d", ret);
+			return;
+		}
+
+		rte_memcpy(adapter->mbx_resp, ctlq_msg.ctx.indirect.payload->va,
+			   IDPF_DFLT_MBX_BUF_SIZE);
+
+		mbx_op = rte_le_to_cpu_16(ctlq_msg.opcode);
+		vc_op = rte_le_to_cpu_32(ctlq_msg.cookie.mbx.chnl_opcode);
+		adapter->cmd_retval = rte_le_to_cpu_32(ctlq_msg.cookie.mbx.chnl_retval);
+
+		switch (mbx_op) {
+		case idpf_mbq_opc_send_msg_to_peer_pf:
+			if (vc_op == VIRTCHNL2_OP_EVENT) {
+				if (ctlq_msg.data_len < sizeof(struct virtchnl2_event)) {
+					PMD_DRV_LOG(ERR, "Error event");
+					return;
+				}
+				vc_event = (struct virtchnl2_event *)adapter->mbx_resp;
+				vport = idpf_find_vport(adapter_ex, vc_event->vport_id);
+				if (!vport) {
+					PMD_DRV_LOG(ERR, "Can't find vport.");
+					return;
+				}
+				idpf_handle_event_msg(vport, adapter->mbx_resp,
+						      ctlq_msg.data_len);
+			} else {
+				if (vc_op == adapter->pend_cmd)
+					notify_cmd(adapter, adapter->cmd_retval);
+				else
+					PMD_DRV_LOG(ERR, "command mismatch, expect %u, get %u",
+						    adapter->pend_cmd, vc_op);
+
+				PMD_DRV_LOG(DEBUG, " Virtual channel response is received,"
+					    "opcode = %d", vc_op);
+			}
+			goto post_buf;
+		default:
+			PMD_DRV_LOG(DEBUG, "Request %u is not supported yet", mbx_op);
+		}
+	}
+
+post_buf:
+	if (ctlq_msg.data_len)
+		dma_mem = ctlq_msg.ctx.indirect.payload;
+	else
+		pending = 0;
+
+	ret = idpf_vc_ctlq_post_rx_buffs(hw, hw->arq, &pending, &dma_mem);
+	if (ret && dma_mem)
+		idpf_free_dma_mem(hw, dma_mem);
+}
+
+static void
+idpf_dev_alarm_handler(void *param)
+{
+	struct idpf_adapter_ext *adapter = param;
+
+	idpf_handle_virtchnl_msg(adapter);
+
+	rte_eal_alarm_set(IDPF_ALARM_INTERVAL, idpf_dev_alarm_handler, adapter);
+}
+
 static int
 idpf_adapter_ext_init(struct rte_pci_device *pci_dev, struct idpf_adapter_ext *adapter)
 {
@@ -913,6 +1072,8 @@ idpf_adapter_ext_init(struct rte_pci_device *pci_dev, struct idpf_adapter_ext *a
 		goto err_adapter_init;
 	}
 
+	rte_eal_alarm_set(IDPF_ALARM_INTERVAL, idpf_dev_alarm_handler, adapter);
+
 	adapter->max_vport_nb = adapter->base.caps.max_vports;
 
 	adapter->vports = rte_zmalloc("vports",
@@ -996,6 +1157,7 @@ idpf_dev_vport_init(struct rte_eth_dev *dev, void *init_params)
 	vport->adapter = &adapter->base;
 	vport->sw_idx = param->idx;
 	vport->devarg_id = param->devarg_id;
+	vport->dev = dev;
 
 	memset(&create_vport_info, 0, sizeof(create_vport_info));
 	ret = idpf_vport_info_init(vport, &create_vport_info);
@@ -1065,6 +1227,7 @@ idpf_find_adapter_ext(struct rte_pci_device *pci_dev)
 static void
 idpf_adapter_ext_deinit(struct idpf_adapter_ext *adapter)
 {
+	rte_eal_alarm_cancel(idpf_dev_alarm_handler, adapter);
 	idpf_adapter_deinit(&adapter->base);
 
 	rte_free(adapter->vports);
diff --git a/drivers/net/idpf/idpf_ethdev.h b/drivers/net/idpf/idpf_ethdev.h
index 839a2bd82c..3c2c932438 100644
--- a/drivers/net/idpf/idpf_ethdev.h
+++ b/drivers/net/idpf/idpf_ethdev.h
@@ -53,6 +53,8 @@
 
 #define IDPF_ADAPTER_NAME_LEN	(PCI_PRI_STR_SIZE + 1)
 
+#define IDPF_ALARM_INTERVAL	50000 /* us */
+
 struct idpf_vport_param {
 	struct idpf_adapter_ext *adapter;
 	uint16_t devarg_id; /* arg id from user */
-- 
2.25.1


^ permalink raw reply	[flat|nested] 63+ messages in thread

* [PATCH v7 6/6] net/idpf: add xstats ops
  2023-02-08  7:33             ` [PATCH v7 " Mingxia Liu
                                 ` (4 preceding siblings ...)
  2023-02-08  7:34               ` [PATCH v7 5/6] net/idpf: add alarm to support handle vchnl message Mingxia Liu
@ 2023-02-08  7:34               ` Mingxia Liu
  2023-02-08  9:32               ` [PATCH v7 0/6] add idpf pmd enhancement features Zhang, Qi Z
  6 siblings, 0 replies; 63+ messages in thread
From: Mingxia Liu @ 2023-02-08  7:34 UTC (permalink / raw)
  To: dev; +Cc: jingjing.wu, beilei.xing, Mingxia Liu

Add support for these device ops:
-idpf_dev_xstats_get
-idpf_dev_xstats_get_names
-idpf_dev_xstats_reset

Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
---
 drivers/net/idpf/idpf_ethdev.c | 80 ++++++++++++++++++++++++++++++++++
 1 file changed, 80 insertions(+)
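
As a usage reference (not part of the patch), a minimal sketch of how an application retrieves these xstats via the standard two-call pattern; the helper name is illustrative.

#include <inttypes.h>
#include <stdio.h>
#include <stdlib.h>
#include <rte_ethdev.h>

static void
print_port_xstats(uint16_t port_id)
{
	struct rte_eth_xstat_name *names = NULL;
	struct rte_eth_xstat *values = NULL;
	int n, i;

	/* First call with NULL only queries how many xstats exist. */
	n = rte_eth_xstats_get_names(port_id, NULL, 0);
	if (n <= 0)
		return;

	names = calloc(n, sizeof(*names));
	values = calloc(n, sizeof(*values));
	if (names == NULL || values == NULL)
		goto out;

	if (rte_eth_xstats_get_names(port_id, names, n) != n ||
	    rte_eth_xstats_get(port_id, values, n) != n)
		goto out;

	for (i = 0; i < n; i++)
		printf("%s: %" PRIu64 "\n",
		       names[values[i].id].name, values[i].value);
out:
	free(names);
	free(values);
}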

diff --git a/drivers/net/idpf/idpf_ethdev.c b/drivers/net/idpf/idpf_ethdev.c
index 751c0d8717..38cbbf369d 100644
--- a/drivers/net/idpf/idpf_ethdev.c
+++ b/drivers/net/idpf/idpf_ethdev.c
@@ -80,6 +80,30 @@ static const uint64_t idpf_ipv6_rss = RTE_ETH_RSS_NONFRAG_IPV6_UDP |
 			  RTE_ETH_RSS_NONFRAG_IPV6_OTHER |
 			  RTE_ETH_RSS_FRAG_IPV6;
 
+struct rte_idpf_xstats_name_off {
+	char name[RTE_ETH_XSTATS_NAME_SIZE];
+	unsigned int offset;
+};
+
+static const struct rte_idpf_xstats_name_off rte_idpf_stats_strings[] = {
+	{"rx_bytes", offsetof(struct virtchnl2_vport_stats, rx_bytes)},
+	{"rx_unicast_packets", offsetof(struct virtchnl2_vport_stats, rx_unicast)},
+	{"rx_multicast_packets", offsetof(struct virtchnl2_vport_stats, rx_multicast)},
+	{"rx_broadcast_packets", offsetof(struct virtchnl2_vport_stats, rx_broadcast)},
+	{"rx_dropped_packets", offsetof(struct virtchnl2_vport_stats, rx_discards)},
+	{"rx_errors", offsetof(struct virtchnl2_vport_stats, rx_errors)},
+	{"rx_unknown_protocol_packets", offsetof(struct virtchnl2_vport_stats,
+						 rx_unknown_protocol)},
+	{"tx_bytes", offsetof(struct virtchnl2_vport_stats, tx_bytes)},
+	{"tx_unicast_packets", offsetof(struct virtchnl2_vport_stats, tx_unicast)},
+	{"tx_multicast_packets", offsetof(struct virtchnl2_vport_stats, tx_multicast)},
+	{"tx_broadcast_packets", offsetof(struct virtchnl2_vport_stats, tx_broadcast)},
+	{"tx_dropped_packets", offsetof(struct virtchnl2_vport_stats, tx_discards)},
+	{"tx_error_packets", offsetof(struct virtchnl2_vport_stats, tx_errors)}};
+
+#define IDPF_NB_XSTATS (sizeof(rte_idpf_stats_strings) / \
+		sizeof(rte_idpf_stats_strings[0]))
+
 static int
 idpf_dev_link_update(struct rte_eth_dev *dev,
 		     __rte_unused int wait_to_complete)
@@ -313,6 +337,59 @@ idpf_dev_stats_reset(struct rte_eth_dev *dev)
 	return 0;
 }
 
+static int idpf_dev_xstats_reset(struct rte_eth_dev *dev)
+{
+	idpf_dev_stats_reset(dev);
+	return 0;
+}
+
+static int idpf_dev_xstats_get(struct rte_eth_dev *dev,
+			       struct rte_eth_xstat *xstats, unsigned int n)
+{
+	struct idpf_vport *vport =
+		(struct idpf_vport *)dev->data->dev_private;
+	struct virtchnl2_vport_stats *pstats = NULL;
+	unsigned int i;
+	int ret;
+
+	if (n < IDPF_NB_XSTATS)
+		return IDPF_NB_XSTATS;
+
+	if (!xstats)
+		return 0;
+
+	ret = idpf_vc_stats_query(vport, &pstats);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "Get statistics failed");
+		return 0;
+	}
+
+	idpf_vport_stats_update(&vport->eth_stats_offset, pstats);
+
+	/* loop over xstats array and values from pstats */
+	for (i = 0; i < IDPF_NB_XSTATS; i++) {
+		xstats[i].id = i;
+		xstats[i].value = *(uint64_t *)(((char *)pstats) +
+			rte_idpf_stats_strings[i].offset);
+	}
+	return IDPF_NB_XSTATS;
+}
+
+static int idpf_dev_xstats_get_names(__rte_unused struct rte_eth_dev *dev,
+				     struct rte_eth_xstat_name *xstats_names,
+				     __rte_unused unsigned int limit)
+{
+	unsigned int i;
+
+	if (xstats_names)
+		for (i = 0; i < IDPF_NB_XSTATS; i++) {
+			snprintf(xstats_names[i].name,
+				 sizeof(xstats_names[i].name),
+				 "%s", rte_idpf_stats_strings[i].name);
+		}
+	return IDPF_NB_XSTATS;
+}
+
 static int idpf_config_rss_hf(struct idpf_vport *vport, uint64_t rss_hf)
 {
 	uint64_t hena = 0;
@@ -1122,6 +1199,9 @@ static const struct eth_dev_ops idpf_eth_dev_ops = {
 	.reta_query			= idpf_rss_reta_query,
 	.rss_hash_update		= idpf_rss_hash_update,
 	.rss_hash_conf_get		= idpf_rss_hash_conf_get,
+	.xstats_get			= idpf_dev_xstats_get,
+	.xstats_get_names		= idpf_dev_xstats_get_names,
+	.xstats_reset			= idpf_dev_xstats_reset,
 };
 
 static uint16_t
-- 
2.25.1


^ permalink raw reply	[flat|nested] 63+ messages in thread

* RE: [PATCH v6 1/6] common/idpf: add hw statistics
  2023-02-08  2:00               ` Zhang, Qi Z
@ 2023-02-08  8:28                 ` Liu, Mingxia
  0 siblings, 0 replies; 63+ messages in thread
From: Liu, Mingxia @ 2023-02-08  8:28 UTC (permalink / raw)
  To: Zhang, Qi Z, dev, Wu,  Jingjing, Xing, Beilei

Thanks, I will update the module name.
Also, there was no warning when I checked with ./devtools/check-git-log.sh.

> -----Original Message-----
> From: Zhang, Qi Z <qi.z.zhang@intel.com>
> Sent: Wednesday, February 8, 2023 10:00 AM
> To: Liu, Mingxia <mingxia.liu@intel.com>; dev@dpdk.org; Wu, Jingjing
> <jingjing.wu@intel.com>; Xing, Beilei <beilei.xing@intel.com>
> Subject: RE: [PATCH v6 1/6] common/idpf: add hw statistics
> 
> 
> 
> > -----Original Message-----
> > From: Liu, Mingxia <mingxia.liu@intel.com>
> > Sent: Tuesday, February 7, 2023 6:17 PM
> > To: dev@dpdk.org; Zhang, Qi Z <qi.z.zhang@intel.com>; Wu, Jingjing
> > <jingjing.wu@intel.com>; Xing, Beilei <beilei.xing@intel.com>
> > Cc: Liu, Mingxia <mingxia.liu@intel.com>
> > Subject: [PATCH v6 1/6] common/idpf: add hw statistics
> 
> I suggest using ./devtools/check-git-log.sh to catch any title warnings if possible.
> Also, since the main purpose of this patch is to support the stats_get/stats_reset
> API, the "net/idpf" prefix is more appropriate than "common/idpf".
> 
> Please fix the other patches if they have similar issues.
> 
> >
> > This patch add hardware packets/bytes statistics.
> >
> > Signed-off-by: Mingxia Liu <mingxia.liu@intel.com>
> > ---
> >  drivers/common/idpf/idpf_common_device.c   | 17 +++++
> >  drivers/common/idpf/idpf_common_device.h   |  4 +
> >  drivers/common/idpf/idpf_common_virtchnl.c | 27 +++++++
> > drivers/common/idpf/idpf_common_virtchnl.h |  3 +
> >  drivers/common/idpf/version.map            |  2 +
> >  drivers/net/idpf/idpf_ethdev.c             | 86 ++++++++++++++++++++++
> >  6 files changed, 139 insertions(+)
> >
> > diff --git a/drivers/common/idpf/idpf_common_device.c
> > b/drivers/common/idpf/idpf_common_device.c
> > index 48b3e3c0dd..5475a3e52c 100644
> > --- a/drivers/common/idpf/idpf_common_device.c
> > +++ b/drivers/common/idpf/idpf_common_device.c
> > @@ -652,4 +652,21 @@ idpf_vport_info_init(struct idpf_vport *vport,
> >  	return 0;
> >  }
> >
> > +void
> > +idpf_vport_stats_update(struct virtchnl2_vport_stats *oes, struct
> > +virtchnl2_vport_stats *nes) {
> > +	nes->rx_bytes = nes->rx_bytes - oes->rx_bytes;
> > +	nes->rx_unicast = nes->rx_unicast - oes->rx_unicast;
> > +	nes->rx_multicast = nes->rx_multicast - oes->rx_multicast;
> > +	nes->rx_broadcast = nes->rx_broadcast - oes->rx_broadcast;
> > +	nes->rx_errors = nes->rx_errors - oes->rx_errors;
> > +	nes->rx_discards = nes->rx_discards - oes->rx_discards;
> > +	nes->tx_bytes = nes->tx_bytes - oes->tx_bytes;
> > +	nes->tx_unicast = nes->tx_unicast - oes->tx_unicast;
> > +	nes->tx_multicast = nes->tx_multicast - oes->tx_multicast;
> > +	nes->tx_broadcast = nes->tx_broadcast - oes->tx_broadcast;
> > +	nes->tx_errors = nes->tx_errors - oes->tx_errors;
> > +	nes->tx_discards = nes->tx_discards - oes->tx_discards; }
> > +
> >  RTE_LOG_REGISTER_SUFFIX(idpf_common_logtype, common, NOTICE);
> diff
> > --git a/drivers/common/idpf/idpf_common_device.h
> > b/drivers/common/idpf/idpf_common_device.h
> > index 545117df79..1d8e7d405a 100644
> > --- a/drivers/common/idpf/idpf_common_device.h
> > +++ b/drivers/common/idpf/idpf_common_device.h
> > @@ -115,6 +115,8 @@ struct idpf_vport {
> >  	bool tx_vec_allowed;
> >  	bool rx_use_avx512;
> >  	bool tx_use_avx512;
> > +
> > +	struct virtchnl2_vport_stats eth_stats_offset;
> >  };
> >
> >  /* Message type read in virtual channel from PF */ @@ -191,5 +193,7
> > @@ int idpf_vport_irq_unmap_config(struct idpf_vport *vport, uint16_t
> > nb_rx_queues)  __rte_internal  int idpf_vport_info_init(struct
> > idpf_vport *vport,
> >  			 struct virtchnl2_create_vport *vport_info);
> > +__rte_internal
> > +void idpf_vport_stats_update(struct virtchnl2_vport_stats *oes,
> > +struct virtchnl2_vport_stats *nes);
> >
> >  #endif /* _IDPF_COMMON_DEVICE_H_ */
> > diff --git a/drivers/common/idpf/idpf_common_virtchnl.c
> > b/drivers/common/idpf/idpf_common_virtchnl.c
> > index 31fadefbd3..40cff34c09 100644
> > --- a/drivers/common/idpf/idpf_common_virtchnl.c
> > +++ b/drivers/common/idpf/idpf_common_virtchnl.c
> > @@ -217,6 +217,7 @@ idpf_vc_cmd_execute(struct idpf_adapter
> *adapter,
> > struct idpf_cmd_info *args)
> >  	case VIRTCHNL2_OP_UNMAP_QUEUE_VECTOR:
> >  	case VIRTCHNL2_OP_ALLOC_VECTORS:
> >  	case VIRTCHNL2_OP_DEALLOC_VECTORS:
> > +	case VIRTCHNL2_OP_GET_STATS:
> >  		/* for init virtchnl ops, need to poll the response */
> >  		err = idpf_vc_one_msg_read(adapter, args->ops, args-
> > >out_size, args->out_buffer);
> >  		clear_cmd(adapter);
> > @@ -806,6 +807,32 @@ idpf_vc_ptype_info_query(struct idpf_adapter
> > *adapter)
> >  	return err;
> >  }
> >
> > +int
> > +idpf_vc_stats_query(struct idpf_vport *vport,
> > +		struct virtchnl2_vport_stats **pstats) {
> > +	struct idpf_adapter *adapter = vport->adapter;
> > +	struct virtchnl2_vport_stats vport_stats;
> > +	struct idpf_cmd_info args;
> > +	int err;
> > +
> > +	vport_stats.vport_id = vport->vport_id;
> > +	args.ops = VIRTCHNL2_OP_GET_STATS;
> > +	args.in_args = (u8 *)&vport_stats;
> > +	args.in_args_size = sizeof(vport_stats);
> > +	args.out_buffer = adapter->mbx_resp;
> > +	args.out_size = IDPF_DFLT_MBX_BUF_SIZE;
> > +
> > +	err = idpf_vc_cmd_execute(adapter, &args);
> > +	if (err) {
> > +		DRV_LOG(ERR, "Failed to execute command of
> > VIRTCHNL2_OP_GET_STATS");
> > +		*pstats = NULL;
> > +		return err;
> > +	}
> > +	*pstats = (struct virtchnl2_vport_stats *)args.out_buffer;
> > +	return 0;
> > +}
> > +
> >  #define IDPF_RX_BUF_STRIDE		64
> >  int
> >  idpf_vc_rxq_config(struct idpf_vport *vport, struct idpf_rx_queue
> > *rxq) diff - -git a/drivers/common/idpf/idpf_common_virtchnl.h
> > b/drivers/common/idpf/idpf_common_virtchnl.h
> > index c105f02836..6b94fd5b8f 100644
> > --- a/drivers/common/idpf/idpf_common_virtchnl.h
> > +++ b/drivers/common/idpf/idpf_common_virtchnl.h
> > @@ -49,4 +49,7 @@ __rte_internal
> >  int idpf_vc_rxq_config(struct idpf_vport *vport, struct idpf_rx_queue
> > *rxq); __rte_internal  int idpf_vc_txq_config(struct idpf_vport
> > *vport, struct idpf_tx_queue *txq);
> > +__rte_internal
> > +int idpf_vc_stats_query(struct idpf_vport *vport,
> > +			struct virtchnl2_vport_stats **pstats);
> >  #endif /* _IDPF_COMMON_VIRTCHNL_H_ */ diff --git
> > a/drivers/common/idpf/version.map b/drivers/common/idpf/version.map
> > index 8b33130bd6..e6a02828ba
> > 100644
> > --- a/drivers/common/idpf/version.map
> > +++ b/drivers/common/idpf/version.map
> > @@ -46,6 +46,7 @@ INTERNAL {
> >  	idpf_vc_rss_key_set;
> >  	idpf_vc_rss_lut_set;
> >  	idpf_vc_rxq_config;
> > +	idpf_vc_stats_query;
> >  	idpf_vc_txq_config;
> >  	idpf_vc_vectors_alloc;
> >  	idpf_vc_vectors_dealloc;
> > @@ -59,6 +60,7 @@ INTERNAL {
> >  	idpf_vport_irq_map_config;
> >  	idpf_vport_irq_unmap_config;
> >  	idpf_vport_rss_config;
> > +	idpf_vport_stats_update;
> >
> >  	local: *;
> >  };
> > diff --git a/drivers/net/idpf/idpf_ethdev.c
> > b/drivers/net/idpf/idpf_ethdev.c index 33f5e90743..02ddb0330a 100644
> > --- a/drivers/net/idpf/idpf_ethdev.c
> > +++ b/drivers/net/idpf/idpf_ethdev.c
> > @@ -140,6 +140,87 @@ idpf_dev_supported_ptypes_get(struct
> rte_eth_dev
> > *dev __rte_unused)
> >  	return ptypes;
> >  }
> >
> > +static uint64_t
> > +idpf_get_mbuf_alloc_failed_stats(struct rte_eth_dev *dev) {
> > +	uint64_t mbuf_alloc_failed = 0;
> > +	struct idpf_rx_queue *rxq;
> > +	int i = 0;
> > +
> > +	for (i = 0; i < dev->data->nb_rx_queues; i++) {
> > +		rxq = dev->data->rx_queues[i];
> > +		mbuf_alloc_failed += __atomic_load_n(&rxq-
> > >rx_stats.mbuf_alloc_failed,
> > +						     __ATOMIC_RELAXED);
> > +	}
> > +
> > +	return mbuf_alloc_failed;
> > +}
> > +
> > +static int
> > +idpf_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats
> > +*stats) {
> > +	struct idpf_vport *vport =
> > +		(struct idpf_vport *)dev->data->dev_private;
> > +	struct virtchnl2_vport_stats *pstats = NULL;
> > +	int ret;
> > +
> > +	ret = idpf_vc_stats_query(vport, &pstats);
> > +	if (ret == 0) {
> > +		uint8_t crc_stats_len = (dev->data-
> > >dev_conf.rxmode.offloads &
> > +					 RTE_ETH_RX_OFFLOAD_KEEP_CRC) ?
> > 0 :
> > +					 RTE_ETHER_CRC_LEN;
> > +
> > +		idpf_vport_stats_update(&vport->eth_stats_offset, pstats);
> > +		stats->ipackets = pstats->rx_unicast + pstats->rx_multicast +
> > +				pstats->rx_broadcast - pstats->rx_discards;
> > +		stats->opackets = pstats->tx_broadcast + pstats-
> > >tx_multicast +
> > +						pstats->tx_unicast;
> > +		stats->imissed = pstats->rx_discards;
> > +		stats->oerrors = pstats->tx_errors + pstats->tx_discards;
> > +		stats->ibytes = pstats->rx_bytes;
> > +		stats->ibytes -= stats->ipackets * crc_stats_len;
> > +		stats->obytes = pstats->tx_bytes;
> > +
> > +		dev->data->rx_mbuf_alloc_failed =
> > idpf_get_mbuf_alloc_failed_stats(dev);
> > +		stats->rx_nombuf = dev->data->rx_mbuf_alloc_failed;
> > +	} else {
> > +		PMD_DRV_LOG(ERR, "Get statistics failed");
> > +	}
> > +	return ret;
> > +}
> > +
> > +static void
> > +idpf_reset_mbuf_alloc_failed_stats(struct rte_eth_dev *dev) {
> > +	struct idpf_rx_queue *rxq;
> > +	int i;
> > +
> > +	for (i = 0; i < dev->data->nb_rx_queues; i++) {
> > +		rxq = dev->data->rx_queues[i];
> > +		__atomic_store_n(&rxq->rx_stats.mbuf_alloc_failed, 0,
> > __ATOMIC_RELAXED);
> > +	}
> > +}
> > +
> > +static int
> > +idpf_dev_stats_reset(struct rte_eth_dev *dev) {
> > +	struct idpf_vport *vport =
> > +		(struct idpf_vport *)dev->data->dev_private;
> > +	struct virtchnl2_vport_stats *pstats = NULL;
> > +	int ret;
> > +
> > +	ret = idpf_vc_stats_query(vport, &pstats);
> > +	if (ret != 0)
> > +		return ret;
> > +
> > +	/* set stats offset base on current values */
> > +	vport->eth_stats_offset = *pstats;
> > +
> > +	idpf_reset_mbuf_alloc_failed_stats(dev);
> > +
> > +	return 0;
> > +}
> > +
> >  static int
> >  idpf_init_rss(struct idpf_vport *vport)  { @@ -327,6 +408,9 @@
> > idpf_dev_start(struct rte_eth_dev *dev)
> >  		goto err_vport;
> >  	}
> >
> > +	if (idpf_dev_stats_reset(dev))
> > +		PMD_DRV_LOG(ERR, "Failed to reset stats");
> > +
> >  	vport->stopped = 0;
> >
> >  	return 0;
> > @@ -606,6 +690,8 @@ static const struct eth_dev_ops idpf_eth_dev_ops
> = {
> >  	.tx_queue_release		= idpf_dev_tx_queue_release,
> >  	.mtu_set			= idpf_dev_mtu_set,
> >  	.dev_supported_ptypes_get	= idpf_dev_supported_ptypes_get,
> > +	.stats_get			= idpf_dev_stats_get,
> > +	.stats_reset			= idpf_dev_stats_reset,
> >  };
> >
> >  static uint16_t
> > --
> > 2.25.1


^ permalink raw reply	[flat|nested] 63+ messages in thread

* RE: [PATCH v7 0/6] add idpf pmd enhancement features
  2023-02-08  7:33             ` [PATCH v7 " Mingxia Liu
                                 ` (5 preceding siblings ...)
  2023-02-08  7:34               ` [PATCH v7 6/6] net/idpf: add xstats ops Mingxia Liu
@ 2023-02-08  9:32               ` Zhang, Qi Z
  6 siblings, 0 replies; 63+ messages in thread
From: Zhang, Qi Z @ 2023-02-08  9:32 UTC (permalink / raw)
  To: Liu, Mingxia, dev; +Cc: Wu, Jingjing, Xing, Beilei, Liu, Mingxia



> -----Original Message-----
> From: Mingxia Liu <mingxia.liu@intel.com>
> Sent: Wednesday, February 8, 2023 3:34 PM
> To: dev@dpdk.org
> Cc: Wu, Jingjing <jingjing.wu@intel.com>; Xing, Beilei <beilei.xing@intel.com>;
> Liu, Mingxia <mingxia.liu@intel.com>
> Subject: [PATCH v7 0/6] add idpf pmd enhancement features
> 
> This patchset add several enhancement features of idpf pmd.
> Including the following:
> - add hw statistics, support stats/xstats ops
> - add rss configure/show ops
> - add event handle: link status
> - add scattered data path for single queue
> 
> 
> v2 changes:
>  - Fix rss lut config issue.
> v3 changes:
>  - rebase to the new baseline.
> v4 changes:
>  - rebase to the new baseline.
>  - optimize some code
>  - give "not supported" tips when user want to config rss hash type
>  - if stats reset fails at initialization time, don't rollback, just
>    print ERROR info.
> v5 changes:
>  - fix some spelling error
> v6 changes:
>  - add cover-letter
> v7 changes:
>  - change commit msg module from "common/idpf" to "net/idpf"
> 
> Mingxia Liu (6):
>   net/idpf: add hw statistics

s/hw/HW

>   net/idpf: add RSS set/get ops
>   net/idpf: support single q scatter RX datapath

s/single q/singleq
s/RX/Rx

>   net/idpf: add rss_offload hash in singleq rx
s/rx/Rx

>   net/idpf: add alarm to support handle vchnl message
>   net/idpf: add xstats ops
> 
>  drivers/common/idpf/idpf_common_device.c   |  17 +
>  drivers/common/idpf/idpf_common_device.h   |  10 +
>  drivers/common/idpf/idpf_common_rxtx.c     | 151 +++++
>  drivers/common/idpf/idpf_common_rxtx.h     |   3 +
>  drivers/common/idpf/idpf_common_virtchnl.c | 171 +++++-
> drivers/common/idpf/idpf_common_virtchnl.h |  15 +
>  drivers/common/idpf/version.map            |   8 +
>  drivers/net/idpf/idpf_ethdev.c             | 606 ++++++++++++++++++++-
>  drivers/net/idpf/idpf_ethdev.h             |   5 +-
>  drivers/net/idpf/idpf_rxtx.c               |  28 +
>  drivers/net/idpf/idpf_rxtx.h               |   2 +
>  11 files changed, 996 insertions(+), 20 deletions(-)
> 
> --
> 2.25.1

Added Reviewed-by: jingjing.wu@intel.com from v6

Applied to dpdk-next-net-intel with a couple of commit-log fixes from check-git-log, per the comments above.

Thanks
Qi




^ permalink raw reply	[flat|nested] 63+ messages in thread

end of thread, other threads:[~2023-02-08  9:32 UTC | newest]

Thread overview: 63+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-12-16  9:36 [PATCH 0/7] add idpf pmd enhancement features Mingxia Liu
2022-12-16  9:37 ` [PATCH 1/7] common/idpf: add hw statistics Mingxia Liu
2022-12-16  9:37 ` [PATCH 2/7] common/idpf: add RSS set/get ops Mingxia Liu
2022-12-16  9:37 ` [PATCH 3/7] common/idpf: support single q scatter RX datapath Mingxia Liu
2022-12-16  9:37 ` [PATCH 4/7] common/idpf: add rss_offload hash in singleq rx Mingxia Liu
2022-12-16  9:37 ` [PATCH 5/7] common/idpf: add alarm to support handle vchnl message Mingxia Liu
2022-12-16  9:37 ` [PATCH 6/7] common/idpf: add xstats ops Mingxia Liu
2022-12-16  9:37 ` [PATCH 7/7] common/idpf: update mbuf_alloc_failed multi-thread process Mingxia Liu
2023-01-11  7:15 ` [PATCH 0/6] add idpf pmd enhancement features Mingxia Liu
2023-01-11  7:15   ` [PATCH v2 1/6] common/idpf: add hw statistics Mingxia Liu
2023-01-11  7:15   ` [PATCH v2 2/6] common/idpf: add RSS set/get ops Mingxia Liu
2023-01-11  7:15   ` [PATCH v2 3/6] common/idpf: support single q scatter RX datapath Mingxia Liu
2023-01-11  7:15   ` [PATCH v2 4/6] common/idpf: add rss_offload hash in singleq rx Mingxia Liu
2023-01-11  7:15   ` [PATCH v2 5/6] common/idpf: add alarm to support handle vchnl message Mingxia Liu
2023-01-11  7:15   ` [PATCH v2 6/6] common/idpf: add xstats ops Mingxia Liu
2023-01-18  7:14   ` [PATCH v3 0/6] add idpf pmd enhancement features Mingxia Liu
2023-01-18  7:14     ` [PATCH v3 1/6] common/idpf: add hw statistics Mingxia Liu
2023-02-01  8:48       ` Wu, Jingjing
2023-02-01 12:34         ` Liu, Mingxia
2023-01-18  7:14     ` [PATCH v3 2/6] common/idpf: add RSS set/get ops Mingxia Liu
2023-02-02  3:28       ` Wu, Jingjing
2023-02-07  3:10         ` Liu, Mingxia
2023-01-18  7:14     ` [PATCH v3 3/6] common/idpf: support single q scatter RX datapath Mingxia Liu
2023-02-02  3:45       ` Wu, Jingjing
2023-02-02  7:19         ` Liu, Mingxia
2023-01-18  7:14     ` [PATCH v3 4/6] common/idpf: add rss_offload hash in singleq rx Mingxia Liu
2023-01-18  7:14     ` [PATCH v3 5/6] common/idpf: add alarm to support handle vchnl message Mingxia Liu
2023-02-02  4:23       ` Wu, Jingjing
2023-02-02  7:39         ` Liu, Mingxia
2023-02-02  8:46           ` Wu, Jingjing
2023-01-18  7:14     ` [PATCH v3 6/6] common/idpf: add xstats ops Mingxia Liu
2023-02-07  9:56     ` [PATCH v4 0/6] add idpf pmd enhancement features Mingxia Liu
2023-02-07  9:56       ` [PATCH v4 1/6] common/idpf: add hw statistics Mingxia Liu
2023-02-07  9:56       ` [PATCH v4 2/6] common/idpf: add RSS set/get ops Mingxia Liu
2023-02-07  9:56       ` [PATCH v4 3/6] common/idpf: support single q scatter RX datapath Mingxia Liu
2023-02-07  9:56       ` [PATCH v4 4/6] common/idpf: add rss_offload hash in singleq rx Mingxia Liu
2023-02-07  9:57       ` [PATCH v4 5/6] common/idpf: add alarm to support handle vchnl message Mingxia Liu
2023-02-07  9:57       ` [PATCH v4 6/6] common/idpf: add xstats ops Mingxia Liu
2023-02-07 10:08       ` [PATCH v4 0/6] add idpf pmd enhancement features Mingxia Liu
2023-02-07 10:08         ` [PATCH v5 1/6] common/idpf: add hw statistics Mingxia Liu
2023-02-07 10:16           ` [PATCH v6 0/6] add idpf pmd enhancement features Mingxia Liu
2023-02-07 10:16             ` [PATCH v6 1/6] common/idpf: add hw statistics Mingxia Liu
2023-02-08  2:00               ` Zhang, Qi Z
2023-02-08  8:28                 ` Liu, Mingxia
2023-02-07 10:16             ` [PATCH v6 2/6] common/idpf: add RSS set/get ops Mingxia Liu
2023-02-07 10:16             ` [PATCH v6 3/6] common/idpf: support single q scatter RX datapath Mingxia Liu
2023-02-07 10:16             ` [PATCH v6 4/6] common/idpf: add rss_offload hash in singleq rx Mingxia Liu
2023-02-07 10:16             ` [PATCH v6 5/6] common/idpf: add alarm to support handle vchnl message Mingxia Liu
2023-02-07 10:16             ` [PATCH v6 6/6] common/idpf: add xstats ops Mingxia Liu
2023-02-08  0:28             ` [PATCH v6 0/6] add idpf pmd enhancement features Wu, Jingjing
2023-02-08  7:33             ` [PATCH v7 " Mingxia Liu
2023-02-08  7:33               ` [PATCH v7 1/6] net/idpf: add hw statistics Mingxia Liu
2023-02-08  7:33               ` [PATCH v7 2/6] net/idpf: add RSS set/get ops Mingxia Liu
2023-02-08  7:33               ` [PATCH v7 3/6] net/idpf: support single q scatter RX datapath Mingxia Liu
2023-02-08  7:33               ` [PATCH v7 4/6] net/idpf: add rss_offload hash in singleq rx Mingxia Liu
2023-02-08  7:34               ` [PATCH v7 5/6] net/idpf: add alarm to support handle vchnl message Mingxia Liu
2023-02-08  7:34               ` [PATCH v7 6/6] net/idpf: add xstats ops Mingxia Liu
2023-02-08  9:32               ` [PATCH v7 0/6] add idpf pmd enhancement features Zhang, Qi Z
2023-02-07 10:08         ` [PATCH v5 2/6] common/idpf: add RSS set/get ops Mingxia Liu
2023-02-07 10:08         ` [PATCH v5 3/6] common/idpf: support single q scatter RX datapath Mingxia Liu
2023-02-07 10:08         ` [PATCH v5 4/6] common/idpf: add rss_offload hash in singleq rx Mingxia Liu
2023-02-07 10:08         ` [PATCH v5 5/6] common/idpf: add alarm to support handle vchnl message Mingxia Liu
2023-02-07 10:08         ` [PATCH v5 6/6] common/idpf: add xstats ops Mingxia Liu

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).