DPDK patches and discussions
* [dpdk-dev] [PATCH 00/20] net/sfc: support flow API COUNT action
@ 2021-05-27 15:24 Andrew Rybchenko
  2021-05-27 15:24 ` [dpdk-dev] [PATCH 01/20] net/sfc: introduce ethdev Rx queue ID Andrew Rybchenko
                   ` (22 more replies)
  0 siblings, 23 replies; 104+ messages in thread
From: Andrew Rybchenko @ 2021-05-27 15:24 UTC (permalink / raw)
  To: dev

Update base driver and support COUNT action in transfer flow rules.
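
For context, a minimal sketch (not part of this series; the helper name is
illustrative and error handling is trimmed) of how an application could
attach a COUNT action to a transfer rule and read it back through the
generic rte_flow API:

  #include <rte_flow.h>

  /* Illustrative only: create a transfer rule with a COUNT action and
   * query the accumulated hit/byte counters for it. */
  static int
  example_count_rule(uint16_t port_id, const struct rte_flow_attr *attr,
                     const struct rte_flow_item pattern[],
                     struct rte_flow_error *error)
  {
          struct rte_flow_action_count count_conf = { .id = 0 };
          struct rte_flow_query_count query = { .reset = 0 };
          struct rte_flow_action actions[] = {
                  { .type = RTE_FLOW_ACTION_TYPE_COUNT, .conf = &count_conf },
                  { .type = RTE_FLOW_ACTION_TYPE_END },
          };
          struct rte_flow *flow;

          /* attr->transfer is expected to be set for MAE (transfer) rules */
          flow = rte_flow_create(port_id, attr, pattern, actions, error);
          if (flow == NULL)
                  return -1;

          /* On success, query.hits and query.bytes hold the readings */
          return rte_flow_query(port_id, flow, &actions[0], &query, error);
  }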

Andrew Rybchenko (6):
  net/sfc: do not enable interrupts on internal Rx queues
  common/sfc_efx/base: separate target EvQ and IRQ config
  common/sfc_efx/base: support custom EvQ to IRQ mapping
  net/sfc: explicitly control IRQ used for Rx queues
  net/sfc: add NUMA-aware registry of service logical cores
  common/sfc_efx/base: add packetiser packet format definition

Igor Romanov (14):
  net/sfc: introduce ethdev Rx queue ID
  net/sfc: introduce ethdev Tx queue ID
  common/sfc_efx/base: add ingress m-port RxQ flag
  common/sfc_efx/base: add user mark RxQ flag
  net/sfc: add abstractions for the management EVQ identity
  net/sfc: add support for initialising different RxQ types
  net/sfc: reserve RxQ for counters
  common/sfc_efx/base: add counter creation MCDI wrappers
  common/sfc_efx/base: add counter stream MCDI wrappers
  common/sfc_efx/base: support counter in action set
  net/sfc: add Rx datapath method to get pushed buffers count
  common/sfc_efx/base: add max MAE counters to limits
  net/sfc: support flow action COUNT in transfer rules
  net/sfc: support flow API query for count actions

 drivers/common/sfc_efx/base/ef10_ev.c         |  14 +-
 drivers/common/sfc_efx/base/ef10_impl.h       |   1 +
 drivers/common/sfc_efx/base/ef10_rx.c         |  57 +-
 drivers/common/sfc_efx/base/efx.h             | 113 +++
 drivers/common/sfc_efx/base/efx_ev.c          |  39 +-
 drivers/common/sfc_efx/base/efx_impl.h        |   8 +-
 drivers/common/sfc_efx/base/efx_mae.c         | 430 ++++++++-
 drivers/common/sfc_efx/base/efx_mcdi.c        |   7 +-
 drivers/common/sfc_efx/base/efx_mcdi.h        |   7 +
 .../base/efx_regs_counters_pkt_format.h       |  87 ++
 drivers/common/sfc_efx/base/efx_rx.c          |  14 +-
 drivers/common/sfc_efx/base/rhead_ev.c        |  14 +-
 drivers/common/sfc_efx/base/rhead_impl.h      |   1 +
 drivers/common/sfc_efx/base/rhead_rx.c        |   6 +
 drivers/common/sfc_efx/version.map            |   9 +
 drivers/net/sfc/meson.build                   |  12 +
 drivers/net/sfc/sfc.c                         |  68 +-
 drivers/net/sfc/sfc.h                         |  22 +
 drivers/net/sfc/sfc_dp.h                      |   6 +
 drivers/net/sfc/sfc_dp_rx.h                   |   4 +
 drivers/net/sfc/sfc_ef100_rx.c                |  15 +
 drivers/net/sfc/sfc_ethdev.c                  | 115 ++-
 drivers/net/sfc/sfc_ev.c                      |  36 +-
 drivers/net/sfc/sfc_ev.h                      | 107 ++-
 drivers/net/sfc/sfc_flow.c                    |  69 +-
 drivers/net/sfc/sfc_flow.h                    |   6 +
 drivers/net/sfc/sfc_mae.c                     | 296 ++++++-
 drivers/net/sfc/sfc_mae.h                     |  61 ++
 drivers/net/sfc/sfc_mae_counter.c             | 827 ++++++++++++++++++
 drivers/net/sfc/sfc_mae_counter.h             |  58 ++
 drivers/net/sfc/sfc_rx.c                      | 231 +++--
 drivers/net/sfc/sfc_rx.h                      |  15 +-
 drivers/net/sfc/sfc_service.c                 |  99 +++
 drivers/net/sfc/sfc_service.h                 |  20 +
 drivers/net/sfc/sfc_stats.h                   |  80 ++
 drivers/net/sfc/sfc_tweak.h                   |   9 +
 drivers/net/sfc/sfc_tx.c                      | 164 ++--
 drivers/net/sfc/sfc_tx.h                      |  11 +-
 38 files changed, 2888 insertions(+), 250 deletions(-)
 create mode 100644 drivers/common/sfc_efx/base/efx_regs_counters_pkt_format.h
 create mode 100644 drivers/net/sfc/sfc_mae_counter.c
 create mode 100644 drivers/net/sfc/sfc_mae_counter.h
 create mode 100644 drivers/net/sfc/sfc_service.c
 create mode 100644 drivers/net/sfc/sfc_service.h
 create mode 100644 drivers/net/sfc/sfc_stats.h

-- 
2.30.2


^ permalink raw reply	[flat|nested] 104+ messages in thread

* [dpdk-dev] [PATCH 01/20] net/sfc: introduce ethdev Rx queue ID
  2021-05-27 15:24 [dpdk-dev] [PATCH 00/20] net/sfc: support flow API COUNT action Andrew Rybchenko
@ 2021-05-27 15:24 ` Andrew Rybchenko
  2021-05-27 15:24 ` [dpdk-dev] [PATCH 02/20] net/sfc: do not enable interrupts on internal Rx queues Andrew Rybchenko
                   ` (21 subsequent siblings)
  22 siblings, 0 replies; 104+ messages in thread
From: Andrew Rybchenko @ 2021-05-27 15:24 UTC (permalink / raw)
  To: dev; +Cc: Igor Romanov, Andy Moreton, Ivan Malov

From: Igor Romanov <igor.romanov@oktetlabs.ru>

Make the software index of an Rx queue and the ethdev queue index
separate. When an ethdev RxQ is accessed in ethdev callbacks, an
explicit ethdev queue index is used.

This is a preparation for introducing non-ethdev Rx queues.
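
The helpers added below are identity mappings for now ("Only ethdev
queues are present for now"). A minimal sketch of the reverse mapping
once internal Rx queues are appended after the ethdev ones -- an
assumption about later patches in the series, not code from this patch
-- could look like:

  static inline sfc_ethdev_qid_t
  example_ethdev_rx_qid_by_sw_index(const struct sfc_adapter_shared *sas,
                                    sfc_sw_index_t rxq_sw_index)
  {
          /* Ethdev Rx queues occupy the low software indices */
          if (rxq_sw_index < sas->ethdev_rxq_count)
                  return (sfc_ethdev_qid_t)rxq_sw_index;

          /* Internal (non-ethdev) Rx queues have no ethdev queue ID */
          return SFC_ETHDEV_QID_INVALID;
  }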

Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
Reviewed-by: Ivan Malov <ivan.malov@oktetlabs.ru>
---
 drivers/net/sfc/sfc.h        |   2 +
 drivers/net/sfc/sfc_dp.h     |   4 +
 drivers/net/sfc/sfc_ethdev.c |  69 ++++++++------
 drivers/net/sfc/sfc_ev.c     |   2 +-
 drivers/net/sfc/sfc_ev.h     |  22 ++++-
 drivers/net/sfc/sfc_flow.c   |  22 +++--
 drivers/net/sfc/sfc_rx.c     | 179 +++++++++++++++++++++++++----------
 drivers/net/sfc/sfc_rx.h     |  10 +-
 8 files changed, 215 insertions(+), 95 deletions(-)

diff --git a/drivers/net/sfc/sfc.h b/drivers/net/sfc/sfc.h
index b48a818adb..ebe705020d 100644
--- a/drivers/net/sfc/sfc.h
+++ b/drivers/net/sfc/sfc.h
@@ -29,6 +29,7 @@
 #include "sfc_filter.h"
 #include "sfc_sriov.h"
 #include "sfc_mae.h"
+#include "sfc_dp.h"
 
 #ifdef __cplusplus
 extern "C" {
@@ -168,6 +169,7 @@ struct sfc_rss {
 struct sfc_adapter_shared {
 	unsigned int			rxq_count;
 	struct sfc_rxq_info		*rxq_info;
+	unsigned int			ethdev_rxq_count;
 
 	unsigned int			txq_count;
 	struct sfc_txq_info		*txq_info;
diff --git a/drivers/net/sfc/sfc_dp.h b/drivers/net/sfc/sfc_dp.h
index 4bed137806..76065483d4 100644
--- a/drivers/net/sfc/sfc_dp.h
+++ b/drivers/net/sfc/sfc_dp.h
@@ -96,6 +96,10 @@ struct sfc_dp {
 /** List of datapath variants */
 TAILQ_HEAD(sfc_dp_list, sfc_dp);
 
+typedef unsigned int sfc_sw_index_t;
+typedef int32_t	sfc_ethdev_qid_t;
+#define SFC_ETHDEV_QID_INVALID	((sfc_ethdev_qid_t)(-1))
+
 /* Check if available HW/FW capabilities are sufficient for the datapath */
 static inline bool
 sfc_dp_match_hw_fw_caps(const struct sfc_dp *dp, unsigned int avail_caps)
diff --git a/drivers/net/sfc/sfc_ethdev.c b/drivers/net/sfc/sfc_ethdev.c
index c50ecea0b9..2651c41288 100644
--- a/drivers/net/sfc/sfc_ethdev.c
+++ b/drivers/net/sfc/sfc_ethdev.c
@@ -463,26 +463,31 @@ sfc_dev_allmulti_disable(struct rte_eth_dev *dev)
 }
 
 static int
-sfc_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
+sfc_rx_queue_setup(struct rte_eth_dev *dev, uint16_t ethdev_qid,
 		   uint16_t nb_rx_desc, unsigned int socket_id,
 		   const struct rte_eth_rxconf *rx_conf,
 		   struct rte_mempool *mb_pool)
 {
 	struct sfc_adapter_shared *sas = sfc_adapter_shared_by_eth_dev(dev);
 	struct sfc_adapter *sa = sfc_adapter_by_eth_dev(dev);
+	sfc_ethdev_qid_t sfc_ethdev_qid = ethdev_qid;
+	struct sfc_rxq_info *rxq_info;
+	sfc_sw_index_t sw_index;
 	int rc;
 
 	sfc_log_init(sa, "RxQ=%u nb_rx_desc=%u socket_id=%u",
-		     rx_queue_id, nb_rx_desc, socket_id);
+		     ethdev_qid, nb_rx_desc, socket_id);
 
 	sfc_adapter_lock(sa);
 
-	rc = sfc_rx_qinit(sa, rx_queue_id, nb_rx_desc, socket_id,
+	sw_index = sfc_rxq_sw_index_by_ethdev_rx_qid(sas, sfc_ethdev_qid);
+	rc = sfc_rx_qinit(sa, sw_index, nb_rx_desc, socket_id,
 			  rx_conf, mb_pool);
 	if (rc != 0)
 		goto fail_rx_qinit;
 
-	dev->data->rx_queues[rx_queue_id] = sas->rxq_info[rx_queue_id].dp;
+	rxq_info = sfc_rxq_info_by_ethdev_qid(sas, sfc_ethdev_qid);
+	dev->data->rx_queues[ethdev_qid] = rxq_info->dp;
 
 	sfc_adapter_unlock(sa);
 
@@ -500,7 +505,7 @@ sfc_rx_queue_release(void *queue)
 	struct sfc_dp_rxq *dp_rxq = queue;
 	struct sfc_rxq *rxq;
 	struct sfc_adapter *sa;
-	unsigned int sw_index;
+	sfc_sw_index_t sw_index;
 
 	if (dp_rxq == NULL)
 		return;
@@ -1182,15 +1187,14 @@ sfc_set_mc_addr_list(struct rte_eth_dev *dev,
  * use any process-local pointers from the adapter data.
  */
 static void
-sfc_rx_queue_info_get(struct rte_eth_dev *dev, uint16_t rx_queue_id,
+sfc_rx_queue_info_get(struct rte_eth_dev *dev, uint16_t ethdev_qid,
 		      struct rte_eth_rxq_info *qinfo)
 {
 	struct sfc_adapter_shared *sas = sfc_adapter_shared_by_eth_dev(dev);
+	sfc_ethdev_qid_t sfc_ethdev_qid = ethdev_qid;
 	struct sfc_rxq_info *rxq_info;
 
-	SFC_ASSERT(rx_queue_id < sas->rxq_count);
-
-	rxq_info = &sas->rxq_info[rx_queue_id];
+	rxq_info = sfc_rxq_info_by_ethdev_qid(sas, sfc_ethdev_qid);
 
 	qinfo->mp = rxq_info->refill_mb_pool;
 	qinfo->conf.rx_free_thresh = rxq_info->refill_threshold;
@@ -1232,14 +1236,14 @@ sfc_tx_queue_info_get(struct rte_eth_dev *dev, uint16_t tx_queue_id,
  * use any process-local pointers from the adapter data.
  */
 static uint32_t
-sfc_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+sfc_rx_queue_count(struct rte_eth_dev *dev, uint16_t ethdev_qid)
 {
 	const struct sfc_adapter_priv *sap = sfc_adapter_priv_by_eth_dev(dev);
 	struct sfc_adapter_shared *sas = sfc_adapter_shared_by_eth_dev(dev);
+	sfc_ethdev_qid_t sfc_ethdev_qid = ethdev_qid;
 	struct sfc_rxq_info *rxq_info;
 
-	SFC_ASSERT(rx_queue_id < sas->rxq_count);
-	rxq_info = &sas->rxq_info[rx_queue_id];
+	rxq_info = sfc_rxq_info_by_ethdev_qid(sas, sfc_ethdev_qid);
 
 	if ((rxq_info->state & SFC_RXQ_STARTED) == 0)
 		return 0;
@@ -1293,13 +1297,16 @@ sfc_tx_descriptor_status(void *queue, uint16_t offset)
 }
 
 static int
-sfc_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+sfc_rx_queue_start(struct rte_eth_dev *dev, uint16_t ethdev_qid)
 {
 	struct sfc_adapter_shared *sas = sfc_adapter_shared_by_eth_dev(dev);
 	struct sfc_adapter *sa = sfc_adapter_by_eth_dev(dev);
+	sfc_ethdev_qid_t sfc_ethdev_qid = ethdev_qid;
+	struct sfc_rxq_info *rxq_info;
+	sfc_sw_index_t sw_index;
 	int rc;
 
-	sfc_log_init(sa, "RxQ=%u", rx_queue_id);
+	sfc_log_init(sa, "RxQ=%u", ethdev_qid);
 
 	sfc_adapter_lock(sa);
 
@@ -1307,14 +1314,16 @@ sfc_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 	if (sa->state != SFC_ADAPTER_STARTED)
 		goto fail_not_started;
 
-	if (sas->rxq_info[rx_queue_id].state != SFC_RXQ_INITIALIZED)
+	rxq_info = sfc_rxq_info_by_ethdev_qid(sas, sfc_ethdev_qid);
+	if (rxq_info->state != SFC_RXQ_INITIALIZED)
 		goto fail_not_setup;
 
-	rc = sfc_rx_qstart(sa, rx_queue_id);
+	sw_index = sfc_rxq_sw_index_by_ethdev_rx_qid(sas, sfc_ethdev_qid);
+	rc = sfc_rx_qstart(sa, sw_index);
 	if (rc != 0)
 		goto fail_rx_qstart;
 
-	sas->rxq_info[rx_queue_id].deferred_started = B_TRUE;
+	rxq_info->deferred_started = B_TRUE;
 
 	sfc_adapter_unlock(sa);
 
@@ -1329,17 +1338,23 @@ sfc_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 }
 
 static int
-sfc_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+sfc_rx_queue_stop(struct rte_eth_dev *dev, uint16_t ethdev_qid)
 {
 	struct sfc_adapter_shared *sas = sfc_adapter_shared_by_eth_dev(dev);
 	struct sfc_adapter *sa = sfc_adapter_by_eth_dev(dev);
+	sfc_ethdev_qid_t sfc_ethdev_qid = ethdev_qid;
+	struct sfc_rxq_info *rxq_info;
+	sfc_sw_index_t sw_index;
 
-	sfc_log_init(sa, "RxQ=%u", rx_queue_id);
+	sfc_log_init(sa, "RxQ=%u", ethdev_qid);
 
 	sfc_adapter_lock(sa);
-	sfc_rx_qstop(sa, rx_queue_id);
 
-	sas->rxq_info[rx_queue_id].deferred_started = B_FALSE;
+	sw_index = sfc_rxq_sw_index_by_ethdev_rx_qid(sas, sfc_ethdev_qid);
+	sfc_rx_qstop(sa, sw_index);
+
+	rxq_info = sfc_rxq_info_by_ethdev_qid(sas, sfc_ethdev_qid);
+	rxq_info->deferred_started = B_FALSE;
 
 	sfc_adapter_unlock(sa);
 
@@ -1766,27 +1781,27 @@ sfc_pool_ops_supported(struct rte_eth_dev *dev, const char *pool)
 }
 
 static int
-sfc_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
+sfc_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t ethdev_qid)
 {
 	const struct sfc_adapter_priv *sap = sfc_adapter_priv_by_eth_dev(dev);
 	struct sfc_adapter_shared *sas = sfc_adapter_shared_by_eth_dev(dev);
+	sfc_ethdev_qid_t sfc_ethdev_qid = ethdev_qid;
 	struct sfc_rxq_info *rxq_info;
 
-	SFC_ASSERT(queue_id < sas->rxq_count);
-	rxq_info = &sas->rxq_info[queue_id];
+	rxq_info = sfc_rxq_info_by_ethdev_qid(sas, sfc_ethdev_qid);
 
 	return sap->dp_rx->intr_enable(rxq_info->dp);
 }
 
 static int
-sfc_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id)
+sfc_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t ethdev_qid)
 {
 	const struct sfc_adapter_priv *sap = sfc_adapter_priv_by_eth_dev(dev);
 	struct sfc_adapter_shared *sas = sfc_adapter_shared_by_eth_dev(dev);
+	sfc_ethdev_qid_t sfc_ethdev_qid = ethdev_qid;
 	struct sfc_rxq_info *rxq_info;
 
-	SFC_ASSERT(queue_id < sas->rxq_count);
-	rxq_info = &sas->rxq_info[queue_id];
+	rxq_info = sfc_rxq_info_by_ethdev_qid(sas, sfc_ethdev_qid);
 
 	return sap->dp_rx->intr_disable(rxq_info->dp);
 }
diff --git a/drivers/net/sfc/sfc_ev.c b/drivers/net/sfc/sfc_ev.c
index b4953ac647..2262994112 100644
--- a/drivers/net/sfc/sfc_ev.c
+++ b/drivers/net/sfc/sfc_ev.c
@@ -582,7 +582,7 @@ sfc_ev_qpoll(struct sfc_evq *evq)
 		int rc;
 
 		if (evq->dp_rxq != NULL) {
-			unsigned int rxq_sw_index;
+			sfc_sw_index_t rxq_sw_index;
 
 			rxq_sw_index = evq->dp_rxq->dpq.queue_id;
 
diff --git a/drivers/net/sfc/sfc_ev.h b/drivers/net/sfc/sfc_ev.h
index d796865b7f..5a9f85c2d9 100644
--- a/drivers/net/sfc/sfc_ev.h
+++ b/drivers/net/sfc/sfc_ev.h
@@ -69,9 +69,25 @@ struct sfc_evq {
  * Tx event queues follow Rx event queues.
  */
 
-static inline unsigned int
-sfc_evq_index_by_rxq_sw_index(__rte_unused struct sfc_adapter *sa,
-			      unsigned int rxq_sw_index)
+static inline sfc_ethdev_qid_t
+sfc_ethdev_rx_qid_by_rxq_sw_index(__rte_unused struct sfc_adapter_shared *sas,
+				  sfc_sw_index_t rxq_sw_index)
+{
+	/* Only ethdev queues are present for now */
+	return rxq_sw_index;
+}
+
+static inline sfc_sw_index_t
+sfc_rxq_sw_index_by_ethdev_rx_qid(__rte_unused struct sfc_adapter_shared *sas,
+				  sfc_ethdev_qid_t ethdev_qid)
+{
+	/* Only ethdev queues are present for now */
+	return ethdev_qid;
+}
+
+static inline sfc_sw_index_t
+sfc_evq_sw_index_by_rxq_sw_index(__rte_unused struct sfc_adapter *sa,
+				 sfc_sw_index_t rxq_sw_index)
 {
 	return 1 + rxq_sw_index;
 }
diff --git a/drivers/net/sfc/sfc_flow.c b/drivers/net/sfc/sfc_flow.c
index 0bfd284c9e..2db8af1759 100644
--- a/drivers/net/sfc/sfc_flow.c
+++ b/drivers/net/sfc/sfc_flow.c
@@ -1400,10 +1400,10 @@ sfc_flow_parse_queue(struct sfc_adapter *sa,
 	struct sfc_rxq *rxq;
 	struct sfc_rxq_info *rxq_info;
 
-	if (queue->index >= sfc_sa2shared(sa)->rxq_count)
+	if (queue->index >= sfc_sa2shared(sa)->ethdev_rxq_count)
 		return -EINVAL;
 
-	rxq = &sa->rxq_ctrl[queue->index];
+	rxq = sfc_rxq_ctrl_by_ethdev_qid(sa, queue->index);
 	spec_filter->template.efs_dmaq_id = (uint16_t)rxq->hw_index;
 
 	rxq_info = &sfc_sa2shared(sa)->rxq_info[queue->index];
@@ -1420,7 +1420,7 @@ sfc_flow_parse_rss(struct sfc_adapter *sa,
 {
 	struct sfc_adapter_shared * const sas = sfc_sa2shared(sa);
 	struct sfc_rss *rss = &sas->rss;
-	unsigned int rxq_sw_index;
+	sfc_ethdev_qid_t ethdev_qid;
 	struct sfc_rxq *rxq;
 	unsigned int rxq_hw_index_min;
 	unsigned int rxq_hw_index_max;
@@ -1434,18 +1434,19 @@ sfc_flow_parse_rss(struct sfc_adapter *sa,
 	if (action_rss->queue_num == 0)
 		return -EINVAL;
 
-	rxq_sw_index = sfc_sa2shared(sa)->rxq_count - 1;
-	rxq = &sa->rxq_ctrl[rxq_sw_index];
+	ethdev_qid = sfc_sa2shared(sa)->ethdev_rxq_count - 1;
+	rxq = sfc_rxq_ctrl_by_ethdev_qid(sa, ethdev_qid);
 	rxq_hw_index_min = rxq->hw_index;
 	rxq_hw_index_max = 0;
 
 	for (i = 0; i < action_rss->queue_num; ++i) {
-		rxq_sw_index = action_rss->queue[i];
+		ethdev_qid = action_rss->queue[i];
 
-		if (rxq_sw_index >= sfc_sa2shared(sa)->rxq_count)
+		if ((unsigned int)ethdev_qid >=
+		    sfc_sa2shared(sa)->ethdev_rxq_count)
 			return -EINVAL;
 
-		rxq = &sa->rxq_ctrl[rxq_sw_index];
+		rxq = sfc_rxq_ctrl_by_ethdev_qid(sa, ethdev_qid);
 
 		if (rxq->hw_index < rxq_hw_index_min)
 			rxq_hw_index_min = rxq->hw_index;
@@ -1509,9 +1510,10 @@ sfc_flow_parse_rss(struct sfc_adapter *sa,
 
 	for (i = 0; i < RTE_DIM(sfc_rss_conf->rss_tbl); ++i) {
 		unsigned int nb_queues = action_rss->queue_num;
-		unsigned int rxq_sw_index = action_rss->queue[i % nb_queues];
-		struct sfc_rxq *rxq = &sa->rxq_ctrl[rxq_sw_index];
+		struct sfc_rxq *rxq;
 
+		ethdev_qid = action_rss->queue[i % nb_queues];
+		rxq = sfc_rxq_ctrl_by_ethdev_qid(sa, ethdev_qid);
 		sfc_rss_conf->rss_tbl[i] = rxq->hw_index - rxq_hw_index_min;
 	}
 
diff --git a/drivers/net/sfc/sfc_rx.c b/drivers/net/sfc/sfc_rx.c
index 461afc5168..597785ae02 100644
--- a/drivers/net/sfc/sfc_rx.c
+++ b/drivers/net/sfc/sfc_rx.c
@@ -654,14 +654,17 @@ struct sfc_dp_rx sfc_efx_rx = {
 };
 
 static void
-sfc_rx_qflush(struct sfc_adapter *sa, unsigned int sw_index)
+sfc_rx_qflush(struct sfc_adapter *sa, sfc_sw_index_t sw_index)
 {
+	struct sfc_adapter_shared *sas = sfc_sa2shared(sa);
+	sfc_ethdev_qid_t ethdev_qid;
 	struct sfc_rxq_info *rxq_info;
 	struct sfc_rxq *rxq;
 	unsigned int retry_count;
 	unsigned int wait_count;
 	int rc;
 
+	ethdev_qid = sfc_ethdev_rx_qid_by_rxq_sw_index(sas, sw_index);
 	rxq_info = &sfc_sa2shared(sa)->rxq_info[sw_index];
 	SFC_ASSERT(rxq_info->state & SFC_RXQ_STARTED);
 
@@ -698,13 +701,16 @@ sfc_rx_qflush(struct sfc_adapter *sa, unsigned int sw_index)
 			 (wait_count++ < SFC_RX_QFLUSH_POLL_ATTEMPTS));
 
 		if (rxq_info->state & SFC_RXQ_FLUSHING)
-			sfc_err(sa, "RxQ %u flush timed out", sw_index);
+			sfc_err(sa, "RxQ %d (internal %u) flush timed out",
+				ethdev_qid, sw_index);
 
 		if (rxq_info->state & SFC_RXQ_FLUSH_FAILED)
-			sfc_err(sa, "RxQ %u flush failed", sw_index);
+			sfc_err(sa, "RxQ %d (internal %u) flush failed",
+				ethdev_qid, sw_index);
 
 		if (rxq_info->state & SFC_RXQ_FLUSHED)
-			sfc_notice(sa, "RxQ %u flushed", sw_index);
+			sfc_notice(sa, "RxQ %d (internal %u) flushed",
+				   ethdev_qid, sw_index);
 	}
 
 	sa->priv.dp_rx->qpurge(rxq_info->dp);
@@ -764,17 +770,20 @@ sfc_rx_default_rxq_set_filter(struct sfc_adapter *sa, struct sfc_rxq *rxq)
 }
 
 int
-sfc_rx_qstart(struct sfc_adapter *sa, unsigned int sw_index)
+sfc_rx_qstart(struct sfc_adapter *sa, sfc_sw_index_t sw_index)
 {
+	struct sfc_adapter_shared *sas = sfc_sa2shared(sa);
+	sfc_ethdev_qid_t ethdev_qid;
 	struct sfc_rxq_info *rxq_info;
 	struct sfc_rxq *rxq;
 	struct sfc_evq *evq;
 	efx_rx_prefix_layout_t pinfo;
 	int rc;
 
-	sfc_log_init(sa, "sw_index=%u", sw_index);
-
 	SFC_ASSERT(sw_index < sfc_sa2shared(sa)->rxq_count);
+	ethdev_qid = sfc_ethdev_rx_qid_by_rxq_sw_index(sas, sw_index);
+
+	sfc_log_init(sa, "RxQ %d (internal %u)", ethdev_qid, sw_index);
 
 	rxq_info = &sfc_sa2shared(sa)->rxq_info[sw_index];
 	SFC_ASSERT(rxq_info->state == SFC_RXQ_INITIALIZED);
@@ -782,7 +791,7 @@ sfc_rx_qstart(struct sfc_adapter *sa, unsigned int sw_index)
 	rxq = &sa->rxq_ctrl[sw_index];
 	evq = rxq->evq;
 
-	rc = sfc_ev_qstart(evq, sfc_evq_index_by_rxq_sw_index(sa, sw_index));
+	rc = sfc_ev_qstart(evq, sfc_evq_sw_index_by_rxq_sw_index(sa, sw_index));
 	if (rc != 0)
 		goto fail_ev_qstart;
 
@@ -833,15 +842,16 @@ sfc_rx_qstart(struct sfc_adapter *sa, unsigned int sw_index)
 
 	rxq_info->state |= SFC_RXQ_STARTED;
 
-	if (sw_index == 0 && !sfc_sa2shared(sa)->isolated) {
+	if (ethdev_qid == 0 && !sfc_sa2shared(sa)->isolated) {
 		rc = sfc_rx_default_rxq_set_filter(sa, rxq);
 		if (rc != 0)
 			goto fail_mac_filter_default_rxq_set;
 	}
 
 	/* It seems to be used by DPDK for debug purposes only ('rte_ether') */
-	sa->eth_dev->data->rx_queue_state[sw_index] =
-		RTE_ETH_QUEUE_STATE_STARTED;
+	if (ethdev_qid != SFC_ETHDEV_QID_INVALID)
+		sa->eth_dev->data->rx_queue_state[ethdev_qid] =
+			RTE_ETH_QUEUE_STATE_STARTED;
 
 	return 0;
 
@@ -864,14 +874,17 @@ sfc_rx_qstart(struct sfc_adapter *sa, unsigned int sw_index)
 }
 
 void
-sfc_rx_qstop(struct sfc_adapter *sa, unsigned int sw_index)
+sfc_rx_qstop(struct sfc_adapter *sa, sfc_sw_index_t sw_index)
 {
+	struct sfc_adapter_shared *sas = sfc_sa2shared(sa);
+	sfc_ethdev_qid_t ethdev_qid;
 	struct sfc_rxq_info *rxq_info;
 	struct sfc_rxq *rxq;
 
-	sfc_log_init(sa, "sw_index=%u", sw_index);
-
 	SFC_ASSERT(sw_index < sfc_sa2shared(sa)->rxq_count);
+	ethdev_qid = sfc_ethdev_rx_qid_by_rxq_sw_index(sas, sw_index);
+
+	sfc_log_init(sa, "RxQ %d (internal %u)", ethdev_qid, sw_index);
 
 	rxq_info = &sfc_sa2shared(sa)->rxq_info[sw_index];
 
@@ -880,13 +893,14 @@ sfc_rx_qstop(struct sfc_adapter *sa, unsigned int sw_index)
 	SFC_ASSERT(rxq_info->state & SFC_RXQ_STARTED);
 
 	/* It seems to be used by DPDK for debug purposes only ('rte_ether') */
-	sa->eth_dev->data->rx_queue_state[sw_index] =
-		RTE_ETH_QUEUE_STATE_STOPPED;
+	if (ethdev_qid != SFC_ETHDEV_QID_INVALID)
+		sa->eth_dev->data->rx_queue_state[ethdev_qid] =
+			RTE_ETH_QUEUE_STATE_STOPPED;
 
 	rxq = &sa->rxq_ctrl[sw_index];
 	sa->priv.dp_rx->qstop(rxq_info->dp, &rxq->evq->read_ptr);
 
-	if (sw_index == 0)
+	if (ethdev_qid == 0)
 		efx_mac_filter_default_rxq_clear(sa->nic);
 
 	sfc_rx_qflush(sa, sw_index);
@@ -1056,11 +1070,13 @@ sfc_rx_mb_pool_buf_size(struct sfc_adapter *sa, struct rte_mempool *mb_pool)
 }
 
 int
-sfc_rx_qinit(struct sfc_adapter *sa, unsigned int sw_index,
+sfc_rx_qinit(struct sfc_adapter *sa, sfc_sw_index_t sw_index,
 	     uint16_t nb_rx_desc, unsigned int socket_id,
 	     const struct rte_eth_rxconf *rx_conf,
 	     struct rte_mempool *mb_pool)
 {
+	struct sfc_adapter_shared *sas = sfc_sa2shared(sa);
+	sfc_ethdev_qid_t ethdev_qid;
 	const efx_nic_cfg_t *encp = efx_nic_cfg_get(sa->nic);
 	struct sfc_rss *rss = &sfc_sa2shared(sa)->rss;
 	int rc;
@@ -1092,16 +1108,22 @@ sfc_rx_qinit(struct sfc_adapter *sa, unsigned int sw_index,
 	SFC_ASSERT(rxq_entries <= sa->rxq_max_entries);
 	SFC_ASSERT(rxq_max_fill_level <= nb_rx_desc);
 
-	offloads = rx_conf->offloads |
-		sa->eth_dev->data->dev_conf.rxmode.offloads;
+	ethdev_qid = sfc_ethdev_rx_qid_by_rxq_sw_index(sas, sw_index);
+
+	offloads = rx_conf->offloads;
+	/* Add device level Rx offloads if the queue is an ethdev Rx queue */
+	if (ethdev_qid != SFC_ETHDEV_QID_INVALID)
+		offloads |= sa->eth_dev->data->dev_conf.rxmode.offloads;
+
 	rc = sfc_rx_qcheck_conf(sa, rxq_max_fill_level, rx_conf, offloads);
 	if (rc != 0)
 		goto fail_bad_conf;
 
 	buf_size = sfc_rx_mb_pool_buf_size(sa, mb_pool);
 	if (buf_size == 0) {
-		sfc_err(sa, "RxQ %u mbuf pool object size is too small",
-			sw_index);
+		sfc_err(sa,
+			"RxQ %d (internal %u) mbuf pool object size is too small",
+			ethdev_qid, sw_index);
 		rc = EINVAL;
 		goto fail_bad_conf;
 	}
@@ -1111,11 +1133,13 @@ sfc_rx_qinit(struct sfc_adapter *sa, unsigned int sw_index,
 				  (offloads & DEV_RX_OFFLOAD_SCATTER),
 				  encp->enc_rx_scatter_max,
 				  &error)) {
-		sfc_err(sa, "RxQ %u MTU check failed: %s", sw_index, error);
-		sfc_err(sa, "RxQ %u calculated Rx buffer size is %u vs "
+		sfc_err(sa, "RxQ %d (internal %u) MTU check failed: %s",
+			ethdev_qid, sw_index, error);
+		sfc_err(sa,
+			"RxQ %d (internal %u) calculated Rx buffer size is %u vs "
 			"PDU size %u plus Rx prefix %u bytes",
-			sw_index, buf_size, (unsigned int)sa->port.pdu,
-			encp->enc_rx_prefix_size);
+			ethdev_qid, sw_index, buf_size,
+			(unsigned int)sa->port.pdu, encp->enc_rx_prefix_size);
 		rc = EINVAL;
 		goto fail_bad_conf;
 	}
@@ -1193,7 +1217,7 @@ sfc_rx_qinit(struct sfc_adapter *sa, unsigned int sw_index,
 	info.flags = rxq_info->rxq_flags;
 	info.rxq_entries = rxq_info->entries;
 	info.rxq_hw_ring = rxq->mem.esm_base;
-	info.evq_hw_index = sfc_evq_index_by_rxq_sw_index(sa, sw_index);
+	info.evq_hw_index = sfc_evq_sw_index_by_rxq_sw_index(sa, sw_index);
 	info.evq_entries = evq_entries;
 	info.evq_hw_ring = evq->mem.esm_base;
 	info.hw_index = rxq->hw_index;
@@ -1231,13 +1255,18 @@ sfc_rx_qinit(struct sfc_adapter *sa, unsigned int sw_index,
 }
 
 void
-sfc_rx_qfini(struct sfc_adapter *sa, unsigned int sw_index)
+sfc_rx_qfini(struct sfc_adapter *sa, sfc_sw_index_t sw_index)
 {
+	struct sfc_adapter_shared *sas = sfc_sa2shared(sa);
+	sfc_ethdev_qid_t ethdev_qid;
 	struct sfc_rxq_info *rxq_info;
 	struct sfc_rxq *rxq;
 
 	SFC_ASSERT(sw_index < sfc_sa2shared(sa)->rxq_count);
-	sa->eth_dev->data->rx_queues[sw_index] = NULL;
+	ethdev_qid = sfc_ethdev_rx_qid_by_rxq_sw_index(sas, sw_index);
+
+	if (ethdev_qid != SFC_ETHDEV_QID_INVALID)
+		sa->eth_dev->data->rx_queues[ethdev_qid] = NULL;
 
 	rxq_info = &sfc_sa2shared(sa)->rxq_info[sw_index];
 
@@ -1479,14 +1508,41 @@ sfc_rx_rss_config(struct sfc_adapter *sa)
 	return rc;
 }
 
+struct sfc_rxq_info *
+sfc_rxq_info_by_ethdev_qid(struct sfc_adapter_shared *sas,
+			   sfc_ethdev_qid_t ethdev_qid)
+{
+	sfc_sw_index_t sw_index;
+
+	SFC_ASSERT((unsigned int)ethdev_qid < sas->ethdev_rxq_count);
+	SFC_ASSERT(ethdev_qid != SFC_ETHDEV_QID_INVALID);
+
+	sw_index = sfc_rxq_sw_index_by_ethdev_rx_qid(sas, ethdev_qid);
+	return &sas->rxq_info[sw_index];
+}
+
+struct sfc_rxq *
+sfc_rxq_ctrl_by_ethdev_qid(struct sfc_adapter *sa, sfc_ethdev_qid_t ethdev_qid)
+{
+	struct sfc_adapter_shared *sas = sfc_sa2shared(sa);
+	sfc_sw_index_t sw_index;
+
+	SFC_ASSERT((unsigned int)ethdev_qid < sas->ethdev_rxq_count);
+	SFC_ASSERT(ethdev_qid != SFC_ETHDEV_QID_INVALID);
+
+	sw_index = sfc_rxq_sw_index_by_ethdev_rx_qid(sas, ethdev_qid);
+	return &sa->rxq_ctrl[sw_index];
+}
+
 int
 sfc_rx_start(struct sfc_adapter *sa)
 {
 	struct sfc_adapter_shared * const sas = sfc_sa2shared(sa);
-	unsigned int sw_index;
+	sfc_sw_index_t sw_index;
 	int rc;
 
-	sfc_log_init(sa, "rxq_count=%u", sas->rxq_count);
+	sfc_log_init(sa, "rxq_count=%u (internal %u)", sas->ethdev_rxq_count,
+		     sas->rxq_count);
 
 	rc = efx_rx_init(sa->nic);
 	if (rc != 0)
@@ -1524,9 +1580,10 @@ void
 sfc_rx_stop(struct sfc_adapter *sa)
 {
 	struct sfc_adapter_shared * const sas = sfc_sa2shared(sa);
-	unsigned int sw_index;
+	sfc_sw_index_t sw_index;
 
-	sfc_log_init(sa, "rxq_count=%u", sas->rxq_count);
+	sfc_log_init(sa, "rxq_count=%u (internal %u)", sas->ethdev_rxq_count,
+		     sas->rxq_count);
 
 	sw_index = sas->rxq_count;
 	while (sw_index-- > 0) {
@@ -1538,7 +1595,7 @@ sfc_rx_stop(struct sfc_adapter *sa)
 }
 
 static int
-sfc_rx_qinit_info(struct sfc_adapter *sa, unsigned int sw_index)
+sfc_rx_qinit_info(struct sfc_adapter *sa, sfc_sw_index_t sw_index)
 {
 	struct sfc_adapter_shared * const sas = sfc_sa2shared(sa);
 	struct sfc_rxq_info *rxq_info = &sas->rxq_info[sw_index];
@@ -1606,17 +1663,29 @@ static void
 sfc_rx_fini_queues(struct sfc_adapter *sa, unsigned int nb_rx_queues)
 {
 	struct sfc_adapter_shared * const sas = sfc_sa2shared(sa);
-	int sw_index;
+	sfc_sw_index_t sw_index;
+	sfc_ethdev_qid_t ethdev_qid;
 
-	SFC_ASSERT(nb_rx_queues <= sas->rxq_count);
+	SFC_ASSERT(nb_rx_queues <= sas->ethdev_rxq_count);
 
-	sw_index = sas->rxq_count;
-	while (--sw_index >= (int)nb_rx_queues) {
-		if (sas->rxq_info[sw_index].state & SFC_RXQ_INITIALIZED)
+	/*
+	 * Finalize only ethdev queues since other ones are finalized only
+	 * on device close and they may require additional deinitializaton.
+	 */
+	ethdev_qid = sas->ethdev_rxq_count;
+	while (--ethdev_qid >= (int)nb_rx_queues) {
+		struct sfc_rxq_info *rxq_info;
+
+		rxq_info = sfc_rxq_info_by_ethdev_qid(sas, ethdev_qid);
+		if (rxq_info->state & SFC_RXQ_INITIALIZED) {
+			sw_index = sfc_rxq_sw_index_by_ethdev_rx_qid(sas,
+								ethdev_qid);
 			sfc_rx_qfini(sa, sw_index);
+		}
+
 	}
 
-	sas->rxq_count = nb_rx_queues;
+	sas->ethdev_rxq_count = nb_rx_queues;
 }
 
 /**
@@ -1637,7 +1706,7 @@ sfc_rx_configure(struct sfc_adapter *sa)
 	int rc;
 
 	sfc_log_init(sa, "nb_rx_queues=%u (old %u)",
-		     nb_rx_queues, sas->rxq_count);
+		     nb_rx_queues, sas->ethdev_rxq_count);
 
 	rc = sfc_rx_check_mode(sa, &dev_conf->rxmode);
 	if (rc != 0)
@@ -1666,7 +1735,7 @@ sfc_rx_configure(struct sfc_adapter *sa)
 		struct sfc_rxq_info *new_rxq_info;
 		struct sfc_rxq *new_rxq_ctrl;
 
-		if (nb_rx_queues < sas->rxq_count)
+		if (nb_rx_queues < sas->ethdev_rxq_count)
 			sfc_rx_fini_queues(sa, nb_rx_queues);
 
 		rc = ENOMEM;
@@ -1685,30 +1754,38 @@ sfc_rx_configure(struct sfc_adapter *sa)
 		sas->rxq_info = new_rxq_info;
 		sa->rxq_ctrl = new_rxq_ctrl;
 		if (nb_rx_queues > sas->rxq_count) {
-			memset(&sas->rxq_info[sas->rxq_count], 0,
-			       (nb_rx_queues - sas->rxq_count) *
+			unsigned int rxq_count = sas->rxq_count;
+
+			memset(&sas->rxq_info[rxq_count], 0,
+			       (nb_rx_queues - rxq_count) *
 			       sizeof(sas->rxq_info[0]));
-			memset(&sa->rxq_ctrl[sas->rxq_count], 0,
-			       (nb_rx_queues - sas->rxq_count) *
+			memset(&sa->rxq_ctrl[rxq_count], 0,
+			       (nb_rx_queues - rxq_count) *
 			       sizeof(sa->rxq_ctrl[0]));
 		}
 	}
 
-	while (sas->rxq_count < nb_rx_queues) {
-		rc = sfc_rx_qinit_info(sa, sas->rxq_count);
+	while (sas->ethdev_rxq_count < nb_rx_queues) {
+		sfc_sw_index_t sw_index;
+
+		sw_index = sfc_rxq_sw_index_by_ethdev_rx_qid(sas,
+							sas->ethdev_rxq_count);
+		rc = sfc_rx_qinit_info(sa, sw_index);
 		if (rc != 0)
 			goto fail_rx_qinit_info;
 
-		sas->rxq_count++;
+		sas->ethdev_rxq_count++;
 	}
 
+	sas->rxq_count = sas->ethdev_rxq_count;
+
 configure_rss:
 	rss->channels = (dev_conf->rxmode.mq_mode == ETH_MQ_RX_RSS) ?
-			 MIN(sas->rxq_count, EFX_MAXRSS) : 0;
+			 MIN(sas->ethdev_rxq_count, EFX_MAXRSS) : 0;
 
 	if (rss->channels > 0) {
 		struct rte_eth_rss_conf *adv_conf_rss;
-		unsigned int sw_index;
+		sfc_sw_index_t sw_index;
 
 		for (sw_index = 0; sw_index < EFX_RSS_TBL_SIZE; ++sw_index)
 			rss->tbl[sw_index] = sw_index % rss->channels;
diff --git a/drivers/net/sfc/sfc_rx.h b/drivers/net/sfc/sfc_rx.h
index 2730454fd6..96c7dc415d 100644
--- a/drivers/net/sfc/sfc_rx.h
+++ b/drivers/net/sfc/sfc_rx.h
@@ -119,6 +119,10 @@ struct sfc_rxq_info {
 };
 
 struct sfc_rxq_info *sfc_rxq_info_by_dp_rxq(const struct sfc_dp_rxq *dp_rxq);
+struct sfc_rxq_info *sfc_rxq_info_by_ethdev_qid(struct sfc_adapter_shared *sas,
+						sfc_ethdev_qid_t ethdev_qid);
+struct sfc_rxq *sfc_rxq_ctrl_by_ethdev_qid(struct sfc_adapter *sa,
+					   sfc_ethdev_qid_t ethdev_qid);
 
 int sfc_rx_configure(struct sfc_adapter *sa);
 void sfc_rx_close(struct sfc_adapter *sa);
@@ -129,9 +133,9 @@ int sfc_rx_qinit(struct sfc_adapter *sa, unsigned int rx_queue_id,
 		 uint16_t nb_rx_desc, unsigned int socket_id,
 		 const struct rte_eth_rxconf *rx_conf,
 		 struct rte_mempool *mb_pool);
-void sfc_rx_qfini(struct sfc_adapter *sa, unsigned int sw_index);
-int sfc_rx_qstart(struct sfc_adapter *sa, unsigned int sw_index);
-void sfc_rx_qstop(struct sfc_adapter *sa, unsigned int sw_index);
+void sfc_rx_qfini(struct sfc_adapter *sa, sfc_sw_index_t sw_index);
+int sfc_rx_qstart(struct sfc_adapter *sa, sfc_sw_index_t sw_index);
+void sfc_rx_qstop(struct sfc_adapter *sa, sfc_sw_index_t sw_index);
 
 uint64_t sfc_rx_get_dev_offload_caps(struct sfc_adapter *sa);
 uint64_t sfc_rx_get_queue_offload_caps(struct sfc_adapter *sa);
-- 
2.30.2


^ permalink raw reply	[flat|nested] 104+ messages in thread

* [dpdk-dev] [PATCH 02/20] net/sfc: do not enable interrupts on internal Rx queues
  2021-05-27 15:24 [dpdk-dev] [PATCH 00/20] net/sfc: support flow API COUNT action Andrew Rybchenko
  2021-05-27 15:24 ` [dpdk-dev] [PATCH 01/20] net/sfc: introduce ethdev Rx queue ID Andrew Rybchenko
@ 2021-05-27 15:24 ` Andrew Rybchenko
  2021-05-27 15:24 ` [dpdk-dev] [PATCH 03/20] common/sfc_efx/base: separate target EvQ and IRQ config Andrew Rybchenko
                   ` (20 subsequent siblings)
  22 siblings, 0 replies; 104+ messages in thread
From: Andrew Rybchenko @ 2021-05-27 15:24 UTC (permalink / raw)
  To: dev

The rxq_intr flag requests interrupt mode support for ethdev Rx queues.
There are no internal Rx queues yet.

Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
---
 drivers/net/sfc/sfc_ev.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/drivers/net/sfc/sfc_ev.c b/drivers/net/sfc/sfc_ev.c
index 2262994112..9a8149f052 100644
--- a/drivers/net/sfc/sfc_ev.c
+++ b/drivers/net/sfc/sfc_ev.c
@@ -663,7 +663,9 @@ sfc_ev_qstart(struct sfc_evq *evq, unsigned int hw_index)
 		     efx_evq_size(sa->nic, evq->entries, evq_flags));
 
 	if ((sa->intr.lsc_intr && hw_index == sa->mgmt_evq_index) ||
-	    (sa->intr.rxq_intr && evq->dp_rxq != NULL))
+	    (sa->intr.rxq_intr && evq->dp_rxq != NULL &&
+	     sfc_ethdev_rx_qid_by_rxq_sw_index(sfc_sa2shared(sa),
+		evq->dp_rxq->dpq.queue_id) != SFC_ETHDEV_QID_INVALID))
 		evq_flags |= EFX_EVQ_FLAGS_NOTIFY_INTERRUPT;
 	else
 		evq_flags |= EFX_EVQ_FLAGS_NOTIFY_DISABLED;
-- 
2.30.2


^ permalink raw reply	[flat|nested] 104+ messages in thread

* [dpdk-dev] [PATCH 03/20] common/sfc_efx/base: separate target EvQ and IRQ config
  2021-05-27 15:24 [dpdk-dev] [PATCH 00/20] net/sfc: support flow API COUNT action Andrew Rybchenko
  2021-05-27 15:24 ` [dpdk-dev] [PATCH 01/20] net/sfc: introduce ethdev Rx queue ID Andrew Rybchenko
  2021-05-27 15:24 ` [dpdk-dev] [PATCH 02/20] net/sfc: do not enable interrupts on internal Rx queues Andrew Rybchenko
@ 2021-05-27 15:24 ` Andrew Rybchenko
  2021-05-27 15:24 ` [dpdk-dev] [PATCH 04/20] common/sfc_efx/base: support custom EvQ to IRQ mapping Andrew Rybchenko
                   ` (19 subsequent siblings)
  22 siblings, 0 replies; 104+ messages in thread
From: Andrew Rybchenko @ 2021-05-27 15:24 UTC (permalink / raw)
  To: dev; +Cc: Andy Moreton

The target EvQ and the IRQ number are specified in the same location
in the MCDI request. The value is treated as an IRQ number if the
event queue is interrupting (the corresponding flag is set) and as a
target event queue otherwise.

However, it is better to separate the two at the helper API level to
make the code clearer.

Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
---
 drivers/common/sfc_efx/base/ef10_ev.c  | 12 +++++++-----
 drivers/common/sfc_efx/base/efx_impl.h |  1 +
 drivers/common/sfc_efx/base/efx_mcdi.c |  7 ++++++-
 drivers/common/sfc_efx/base/rhead_ev.c | 12 +++++++-----
 4 files changed, 21 insertions(+), 11 deletions(-)

diff --git a/drivers/common/sfc_efx/base/ef10_ev.c b/drivers/common/sfc_efx/base/ef10_ev.c
index ea59beecc4..c0cbc427b9 100644
--- a/drivers/common/sfc_efx/base/ef10_ev.c
+++ b/drivers/common/sfc_efx/base/ef10_ev.c
@@ -121,7 +121,8 @@ ef10_ev_qcreate(
 	__in		efx_evq_t *eep)
 {
 	efx_nic_cfg_t *encp = &(enp->en_nic_cfg);
-	uint32_t irq;
+	uint32_t irq = 0;
+	uint32_t target_evq = 0;
 	efx_rc_t rc;
 	boolean_t low_latency;
 
@@ -159,11 +160,12 @@ ef10_ev_qcreate(
 	    EFX_EVQ_FLAGS_NOTIFY_INTERRUPT) {
 		irq = index;
 	} else if (index == EFX_EF10_ALWAYS_INTERRUPTING_EVQ_INDEX) {
-		irq = index;
+		/* Use the first interrupt for always interrupting EvQ */
+		irq = 0;
 		flags = (flags & ~EFX_EVQ_FLAGS_NOTIFY_MASK) |
 		    EFX_EVQ_FLAGS_NOTIFY_INTERRUPT;
 	} else {
-		irq = EFX_EF10_ALWAYS_INTERRUPTING_EVQ_INDEX;
+		target_evq = EFX_EF10_ALWAYS_INTERRUPTING_EVQ_INDEX;
 	}
 
 	/*
@@ -187,8 +189,8 @@ ef10_ev_qcreate(
 	 * decision and low_latency hint is ignored.
 	 */
 	low_latency = encp->enc_datapath_cap_evb ? 0 : 1;
-	rc = efx_mcdi_init_evq(enp, index, esmp, ndescs, irq, us, flags,
-	    low_latency);
+	rc = efx_mcdi_init_evq(enp, index, esmp, ndescs, irq, target_evq, us,
+	    flags, low_latency);
 	if (rc != 0)
 		goto fail2;
 
diff --git a/drivers/common/sfc_efx/base/efx_impl.h b/drivers/common/sfc_efx/base/efx_impl.h
index 8b63cfb37d..4fff9e1842 100644
--- a/drivers/common/sfc_efx/base/efx_impl.h
+++ b/drivers/common/sfc_efx/base/efx_impl.h
@@ -1535,6 +1535,7 @@ efx_mcdi_init_evq(
 	__in		efsys_mem_t *esmp,
 	__in		size_t nevs,
 	__in		uint32_t irq,
+	__in		uint32_t target_evq,
 	__in		uint32_t us,
 	__in		uint32_t flags,
 	__in		boolean_t low_latency);
diff --git a/drivers/common/sfc_efx/base/efx_mcdi.c b/drivers/common/sfc_efx/base/efx_mcdi.c
index f226ffd923..b68fc0503d 100644
--- a/drivers/common/sfc_efx/base/efx_mcdi.c
+++ b/drivers/common/sfc_efx/base/efx_mcdi.c
@@ -2568,6 +2568,7 @@ efx_mcdi_init_evq(
 	__in		efsys_mem_t *esmp,
 	__in		size_t nevs,
 	__in		uint32_t irq,
+	__in		uint32_t target_evq,
 	__in		uint32_t us,
 	__in		uint32_t flags,
 	__in		boolean_t low_latency)
@@ -2602,11 +2603,15 @@ efx_mcdi_init_evq(
 
 	MCDI_IN_SET_DWORD(req, INIT_EVQ_V2_IN_SIZE, nevs);
 	MCDI_IN_SET_DWORD(req, INIT_EVQ_V2_IN_INSTANCE, instance);
-	MCDI_IN_SET_DWORD(req, INIT_EVQ_V2_IN_IRQ_NUM, irq);
 
 	interrupting = ((flags & EFX_EVQ_FLAGS_NOTIFY_MASK) ==
 	    EFX_EVQ_FLAGS_NOTIFY_INTERRUPT);
 
+	if (interrupting)
+		MCDI_IN_SET_DWORD(req, INIT_EVQ_V2_IN_IRQ_NUM, irq);
+	else
+		MCDI_IN_SET_DWORD(req, INIT_EVQ_V2_IN_TARGET_EVQ, target_evq);
+
 	if (encp->enc_init_evq_v2_supported) {
 		/*
 		 * On Medford the low latency license is required to enable RX
diff --git a/drivers/common/sfc_efx/base/rhead_ev.c b/drivers/common/sfc_efx/base/rhead_ev.c
index 2099581fd7..533cd9e34a 100644
--- a/drivers/common/sfc_efx/base/rhead_ev.c
+++ b/drivers/common/sfc_efx/base/rhead_ev.c
@@ -106,7 +106,8 @@ rhead_ev_qcreate(
 {
 	const efx_nic_cfg_t *encp = efx_nic_cfg_get(enp);
 	size_t desc_size;
-	uint32_t irq;
+	uint32_t irq = 0;
+	uint32_t target_evq = 0;
 	efx_rc_t rc;
 
 	_NOTE(ARGUNUSED(id))	/* buftbl id managed by MC */
@@ -142,19 +143,20 @@ rhead_ev_qcreate(
 	    EFX_EVQ_FLAGS_NOTIFY_INTERRUPT) {
 		irq = index;
 	} else if (index == EFX_RHEAD_ALWAYS_INTERRUPTING_EVQ_INDEX) {
-		irq = index;
+		/* Use the first interrupt for always interrupting EvQ */
+		irq = 0;
 		flags = (flags & ~EFX_EVQ_FLAGS_NOTIFY_MASK) |
 		    EFX_EVQ_FLAGS_NOTIFY_INTERRUPT;
 	} else {
-		irq = EFX_RHEAD_ALWAYS_INTERRUPTING_EVQ_INDEX;
+		target_evq = EFX_RHEAD_ALWAYS_INTERRUPTING_EVQ_INDEX;
 	}
 
 	/*
 	 * Interrupts may be raised for events immediately after the queue is
 	 * created. See bug58606.
 	 */
-	rc = efx_mcdi_init_evq(enp, index, esmp, ndescs, irq, us, flags,
-	    B_FALSE);
+	rc = efx_mcdi_init_evq(enp, index, esmp, ndescs, irq, target_evq, us,
+	    flags, B_FALSE);
 	if (rc != 0)
 		goto fail2;
 
-- 
2.30.2


^ permalink raw reply	[flat|nested] 104+ messages in thread

* [dpdk-dev] [PATCH 04/20] common/sfc_efx/base: support custom EvQ to IRQ mapping
  2021-05-27 15:24 [dpdk-dev] [PATCH 00/20] net/sfc: support flow API COUNT action Andrew Rybchenko
                   ` (2 preceding siblings ...)
  2021-05-27 15:24 ` [dpdk-dev] [PATCH 03/20] common/sfc_efx/base: separate target EvQ and IRQ config Andrew Rybchenko
@ 2021-05-27 15:24 ` Andrew Rybchenko
  2021-05-27 15:24 ` [dpdk-dev] [PATCH 05/20] net/sfc: explicitly control IRQ used for Rx queues Andrew Rybchenko
                   ` (18 subsequent siblings)
  22 siblings, 0 replies; 104+ messages in thread
From: Andrew Rybchenko @ 2021-05-27 15:24 UTC (permalink / raw)
  To: dev; +Cc: Andy Moreton

Custom mapping is actually supported for the EF10 and EF100 families
only.

A driver (e.g. a DPDK PMD) may need to customize the mapping of EvQs
to interrupts if, for example, extra EvQs are used for housekeeping in
polling or wake-up (via another EvQ) mode.
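
A minimal usage sketch of the new API (the wrapper name and flag choice
are illustrative assumptions; the net/sfc patch later in this series
wires it up with its own parameters):

  static efx_rc_t
  example_evq_create(efx_nic_t *enp, unsigned int evq_index,
                     efsys_mem_t *esmp, size_t ndescs, uint32_t irq,
                     efx_evq_t **eepp)
  {
          uint32_t flags = EFX_EVQ_FLAGS_TYPE_THROUGHPUT |
                           EFX_EVQ_FLAGS_NOTIFY_INTERRUPT;

          /* id (buffer table base) is unused on EF10/EF100; no moderation */
          return (efx_ev_qcreate_irq(enp, evq_index, esmp, ndescs, 0, 0,
                                     flags, irq, eepp));
  }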

Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
---
 drivers/common/sfc_efx/base/ef10_ev.c    |  4 +--
 drivers/common/sfc_efx/base/ef10_impl.h  |  1 +
 drivers/common/sfc_efx/base/efx.h        | 13 ++++++++
 drivers/common/sfc_efx/base/efx_ev.c     | 39 ++++++++++++++++++++----
 drivers/common/sfc_efx/base/efx_impl.h   |  3 +-
 drivers/common/sfc_efx/base/rhead_ev.c   |  4 +--
 drivers/common/sfc_efx/base/rhead_impl.h |  1 +
 drivers/common/sfc_efx/version.map       |  1 +
 8 files changed, 55 insertions(+), 11 deletions(-)

diff --git a/drivers/common/sfc_efx/base/ef10_ev.c b/drivers/common/sfc_efx/base/ef10_ev.c
index c0cbc427b9..ba078940b6 100644
--- a/drivers/common/sfc_efx/base/ef10_ev.c
+++ b/drivers/common/sfc_efx/base/ef10_ev.c
@@ -118,10 +118,10 @@ ef10_ev_qcreate(
 	__in		uint32_t id,
 	__in		uint32_t us,
 	__in		uint32_t flags,
+	__in		uint32_t irq,
 	__in		efx_evq_t *eep)
 {
 	efx_nic_cfg_t *encp = &(enp->en_nic_cfg);
-	uint32_t irq = 0;
 	uint32_t target_evq = 0;
 	efx_rc_t rc;
 	boolean_t low_latency;
@@ -158,7 +158,7 @@ ef10_ev_qcreate(
 	/* INIT_EVQ expects function-relative vector number */
 	if ((flags & EFX_EVQ_FLAGS_NOTIFY_MASK) ==
 	    EFX_EVQ_FLAGS_NOTIFY_INTERRUPT) {
-		irq = index;
+		/* IRQ number is specified by caller */
 	} else if (index == EFX_EF10_ALWAYS_INTERRUPTING_EVQ_INDEX) {
 		/* Use the first interrupt for always interrupting EvQ */
 		irq = 0;
diff --git a/drivers/common/sfc_efx/base/ef10_impl.h b/drivers/common/sfc_efx/base/ef10_impl.h
index 40210fbd91..7c8d51b7a5 100644
--- a/drivers/common/sfc_efx/base/ef10_impl.h
+++ b/drivers/common/sfc_efx/base/ef10_impl.h
@@ -111,6 +111,7 @@ ef10_ev_qcreate(
 	__in		uint32_t id,
 	__in		uint32_t us,
 	__in		uint32_t flags,
+	__in		uint32_t irq,
 	__in		efx_evq_t *eep);
 
 LIBEFX_INTERNAL
diff --git a/drivers/common/sfc_efx/base/efx.h b/drivers/common/sfc_efx/base/efx.h
index 8e13075b07..6a99099ad2 100644
--- a/drivers/common/sfc_efx/base/efx.h
+++ b/drivers/common/sfc_efx/base/efx.h
@@ -2333,6 +2333,19 @@ efx_ev_qcreate(
 	__in		uint32_t flags,
 	__deref_out	efx_evq_t **eepp);
 
+LIBEFX_API
+extern	__checkReturn	efx_rc_t
+efx_ev_qcreate_irq(
+	__in		efx_nic_t *enp,
+	__in		unsigned int index,
+	__in		efsys_mem_t *esmp,
+	__in		size_t ndescs,
+	__in		uint32_t id,
+	__in		uint32_t us,
+	__in		uint32_t flags,
+	__in		uint32_t irq,
+	__deref_out	efx_evq_t **eepp);
+
 LIBEFX_API
 extern		void
 efx_ev_qpost(
diff --git a/drivers/common/sfc_efx/base/efx_ev.c b/drivers/common/sfc_efx/base/efx_ev.c
index 19bdea03fd..4808f8ddfc 100644
--- a/drivers/common/sfc_efx/base/efx_ev.c
+++ b/drivers/common/sfc_efx/base/efx_ev.c
@@ -35,6 +35,7 @@ siena_ev_qcreate(
 	__in		uint32_t id,
 	__in		uint32_t us,
 	__in		uint32_t flags,
+	__in		uint32_t irq,
 	__in		efx_evq_t *eep);
 
 static			void
@@ -253,7 +254,7 @@ efx_ev_fini(
 
 
 	__checkReturn	efx_rc_t
-efx_ev_qcreate(
+efx_ev_qcreate_irq(
 	__in		efx_nic_t *enp,
 	__in		unsigned int index,
 	__in		efsys_mem_t *esmp,
@@ -261,6 +262,7 @@ efx_ev_qcreate(
 	__in		uint32_t id,
 	__in		uint32_t us,
 	__in		uint32_t flags,
+	__in		uint32_t irq,
 	__deref_out	efx_evq_t **eepp)
 {
 	const efx_ev_ops_t *eevop = enp->en_eevop;
@@ -347,7 +349,7 @@ efx_ev_qcreate(
 	*eepp = eep;
 
 	if ((rc = eevop->eevo_qcreate(enp, index, esmp, ndescs, id, us, flags,
-	    eep)) != 0)
+	    irq, eep)) != 0)
 		goto fail9;
 
 	return (0);
@@ -377,6 +379,23 @@ efx_ev_qcreate(
 	return (rc);
 }
 
+	__checkReturn	efx_rc_t
+efx_ev_qcreate(
+	__in		efx_nic_t *enp,
+	__in		unsigned int index,
+	__in		efsys_mem_t *esmp,
+	__in		size_t ndescs,
+	__in		uint32_t id,
+	__in		uint32_t us,
+	__in		uint32_t flags,
+	__deref_out	efx_evq_t **eepp)
+{
+	uint32_t irq = index;
+
+	return (efx_ev_qcreate_irq(enp, index, esmp, ndescs, id, us, flags,
+	    irq, eepp));
+}
+
 		void
 efx_ev_qdestroy(
 	__in	efx_evq_t *eep)
@@ -1278,6 +1297,7 @@ siena_ev_qcreate(
 	__in		uint32_t id,
 	__in		uint32_t us,
 	__in		uint32_t flags,
+	__in		uint32_t irq,
 	__in		efx_evq_t *eep)
 {
 	efx_nic_cfg_t *encp = &(enp->en_nic_cfg);
@@ -1290,11 +1310,16 @@ siena_ev_qcreate(
 
 	EFSYS_ASSERT((flags & EFX_EVQ_FLAGS_EXTENDED_WIDTH) == 0);
 
+	if (irq != index) {
+		rc = EINVAL;
+		goto fail1;
+	}
+
 #if EFSYS_OPT_RX_SCALE
 	if (enp->en_intr.ei_type == EFX_INTR_LINE &&
 	    index >= EFX_MAXRSS_LEGACY) {
 		rc = EINVAL;
-		goto fail1;
+		goto fail2;
 	}
 #endif
 	for (size = 0;
@@ -1304,7 +1329,7 @@ siena_ev_qcreate(
 			break;
 	if (id + (1 << size) >= encp->enc_buftbl_limit) {
 		rc = EINVAL;
-		goto fail2;
+		goto fail3;
 	}
 
 	/* Set up the handler table */
@@ -1336,11 +1361,13 @@ siena_ev_qcreate(
 
 	return (0);
 
+fail3:
+	EFSYS_PROBE(fail3);
+#if EFSYS_OPT_RX_SCALE
 fail2:
 	EFSYS_PROBE(fail2);
-#if EFSYS_OPT_RX_SCALE
-fail1:
 #endif
+fail1:
 	EFSYS_PROBE1(fail1, efx_rc_t, rc);
 
 	return (rc);
diff --git a/drivers/common/sfc_efx/base/efx_impl.h b/drivers/common/sfc_efx/base/efx_impl.h
index 4fff9e1842..f891e2616e 100644
--- a/drivers/common/sfc_efx/base/efx_impl.h
+++ b/drivers/common/sfc_efx/base/efx_impl.h
@@ -87,7 +87,8 @@ typedef struct efx_ev_ops_s {
 	void		(*eevo_fini)(efx_nic_t *);
 	efx_rc_t	(*eevo_qcreate)(efx_nic_t *, unsigned int,
 					  efsys_mem_t *, size_t, uint32_t,
-					  uint32_t, uint32_t, efx_evq_t *);
+					  uint32_t, uint32_t, uint32_t,
+					  efx_evq_t *);
 	void		(*eevo_qdestroy)(efx_evq_t *);
 	efx_rc_t	(*eevo_qprime)(efx_evq_t *, unsigned int);
 	void		(*eevo_qpost)(efx_evq_t *, uint16_t);
diff --git a/drivers/common/sfc_efx/base/rhead_ev.c b/drivers/common/sfc_efx/base/rhead_ev.c
index 533cd9e34a..3eaed9e94b 100644
--- a/drivers/common/sfc_efx/base/rhead_ev.c
+++ b/drivers/common/sfc_efx/base/rhead_ev.c
@@ -102,11 +102,11 @@ rhead_ev_qcreate(
 	__in		uint32_t id,
 	__in		uint32_t us,
 	__in		uint32_t flags,
+	__in		uint32_t irq,
 	__in		efx_evq_t *eep)
 {
 	const efx_nic_cfg_t *encp = efx_nic_cfg_get(enp);
 	size_t desc_size;
-	uint32_t irq = 0;
 	uint32_t target_evq = 0;
 	efx_rc_t rc;
 
@@ -141,7 +141,7 @@ rhead_ev_qcreate(
 	/* INIT_EVQ expects function-relative vector number */
 	if ((flags & EFX_EVQ_FLAGS_NOTIFY_MASK) ==
 	    EFX_EVQ_FLAGS_NOTIFY_INTERRUPT) {
-		irq = index;
+		/* IRQ number is specified by caller */
 	} else if (index == EFX_RHEAD_ALWAYS_INTERRUPTING_EVQ_INDEX) {
 		/* Use the first interrupt for always interrupting EvQ */
 		irq = 0;
diff --git a/drivers/common/sfc_efx/base/rhead_impl.h b/drivers/common/sfc_efx/base/rhead_impl.h
index 3bf9beceb0..dd38ded775 100644
--- a/drivers/common/sfc_efx/base/rhead_impl.h
+++ b/drivers/common/sfc_efx/base/rhead_impl.h
@@ -131,6 +131,7 @@ rhead_ev_qcreate(
 	__in		uint32_t id,
 	__in		uint32_t us,
 	__in		uint32_t flags,
+	__in		uint32_t irq,
 	__in		efx_evq_t *eep);
 
 LIBEFX_INTERNAL
diff --git a/drivers/common/sfc_efx/version.map b/drivers/common/sfc_efx/version.map
index 75da5aa5c2..ae85ed18c6 100644
--- a/drivers/common/sfc_efx/version.map
+++ b/drivers/common/sfc_efx/version.map
@@ -7,6 +7,7 @@ INTERNAL {
 	efx_ev_init;
 	efx_ev_qcreate;
 	efx_ev_qcreate_check_init_done;
+	efx_ev_qcreate_irq;
 	efx_ev_qdestroy;
 	efx_ev_qmoderate;
 	efx_ev_qpending;
-- 
2.30.2


^ permalink raw reply	[flat|nested] 104+ messages in thread

* [dpdk-dev] [PATCH 05/20] net/sfc: explicitly control IRQ used for Rx queues
  2021-05-27 15:24 [dpdk-dev] [PATCH 00/20] net/sfc: support flow API COUNT action Andrew Rybchenko
                   ` (3 preceding siblings ...)
  2021-05-27 15:24 ` [dpdk-dev] [PATCH 04/20] common/sfc_efx/base: support custom EvQ to IRQ mapping Andrew Rybchenko
@ 2021-05-27 15:24 ` Andrew Rybchenko
  2021-05-27 15:24 ` [dpdk-dev] [PATCH 06/20] net/sfc: introduce ethdev Tx queue ID Andrew Rybchenko
                   ` (17 subsequent siblings)
  22 siblings, 0 replies; 104+ messages in thread
From: Andrew Rybchenko @ 2021-05-27 15:24 UTC (permalink / raw)
  To: dev; +Cc: Andy Moreton

Interrupt support makes assumptions about the interrupt numbers used
for LSC and Rx queues. The first interrupt is used for LSC, and
subsequent interrupts are used for Rx queues.
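
In short, the convention below is (an illustrative sketch using types
from earlier patches in the series; internal Rx queues keep
notifications disabled, as the diff shows):

  static uint32_t
  example_evq_irq(boolean_t is_mgmt_evq, sfc_ethdev_qid_t ethdev_qid)
  {
          if (is_mgmt_evq)
                  return 0;        /* LSC and other management events */

          /* Ethdev Rx queue interrupts follow the management interrupt */
          return 1 + ethdev_qid;
  }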

Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
---
 drivers/net/sfc/sfc_ev.c | 32 ++++++++++++++++++++++++--------
 1 file changed, 24 insertions(+), 8 deletions(-)

diff --git a/drivers/net/sfc/sfc_ev.c b/drivers/net/sfc/sfc_ev.c
index 9a8149f052..71f706e403 100644
--- a/drivers/net/sfc/sfc_ev.c
+++ b/drivers/net/sfc/sfc_ev.c
@@ -648,6 +648,7 @@ sfc_ev_qstart(struct sfc_evq *evq, unsigned int hw_index)
 	struct sfc_adapter *sa = evq->sa;
 	efsys_mem_t *esmp;
 	uint32_t evq_flags = sa->evq_flags;
+	uint32_t irq = 0;
 	unsigned int total_delay_us;
 	unsigned int delay_us;
 	int rc;
@@ -662,20 +663,35 @@ sfc_ev_qstart(struct sfc_evq *evq, unsigned int hw_index)
 	(void)memset((void *)esmp->esm_base, 0xff,
 		     efx_evq_size(sa->nic, evq->entries, evq_flags));
 
-	if ((sa->intr.lsc_intr && hw_index == sa->mgmt_evq_index) ||
-	    (sa->intr.rxq_intr && evq->dp_rxq != NULL &&
-	     sfc_ethdev_rx_qid_by_rxq_sw_index(sfc_sa2shared(sa),
-		evq->dp_rxq->dpq.queue_id) != SFC_ETHDEV_QID_INVALID))
+	if (sa->intr.lsc_intr && hw_index == sa->mgmt_evq_index) {
 		evq_flags |= EFX_EVQ_FLAGS_NOTIFY_INTERRUPT;
-	else
+		irq = 0;
+	} else if (sa->intr.rxq_intr && evq->dp_rxq != NULL) {
+		sfc_ethdev_qid_t ethdev_qid;
+
+		ethdev_qid =
+			sfc_ethdev_rx_qid_by_rxq_sw_index(sfc_sa2shared(sa),
+				evq->dp_rxq->dpq.queue_id);
+		if (ethdev_qid != SFC_ETHDEV_QID_INVALID) {
+			evq_flags |= EFX_EVQ_FLAGS_NOTIFY_INTERRUPT;
+			/*
+			 * The first interrupt is used for management EvQ
+			 * (LSC etc). RxQ interrupts follow it.
+			 */
+			irq = 1 + ethdev_qid;
+		} else {
+			evq_flags |= EFX_EVQ_FLAGS_NOTIFY_DISABLED;
+		}
+	} else {
 		evq_flags |= EFX_EVQ_FLAGS_NOTIFY_DISABLED;
+	}
 
 	evq->init_state = SFC_EVQ_STARTING;
 
 	/* Create the common code event queue */
-	rc = efx_ev_qcreate(sa->nic, hw_index, esmp, evq->entries,
-			    0 /* unused on EF10 */, 0, evq_flags,
-			    &evq->common);
+	rc = efx_ev_qcreate_irq(sa->nic, hw_index, esmp, evq->entries,
+				0 /* unused on EF10 */, 0, evq_flags,
+				irq, &evq->common);
 	if (rc != 0)
 		goto fail_ev_qcreate;
 
-- 
2.30.2


^ permalink raw reply	[flat|nested] 104+ messages in thread

* [dpdk-dev] [PATCH 06/20] net/sfc: introduce ethdev Tx queue ID
  2021-05-27 15:24 [dpdk-dev] [PATCH 00/20] net/sfc: support flow API COUNT action Andrew Rybchenko
                   ` (4 preceding siblings ...)
  2021-05-27 15:24 ` [dpdk-dev] [PATCH 05/20] net/sfc: explicitly control IRQ used for Rx queues Andrew Rybchenko
@ 2021-05-27 15:24 ` Andrew Rybchenko
  2021-05-27 15:24 ` [dpdk-dev] [PATCH 07/20] common/sfc_efx/base: add ingress m-port RxQ flag Andrew Rybchenko
                   ` (16 subsequent siblings)
  22 siblings, 0 replies; 104+ messages in thread
From: Andrew Rybchenko @ 2021-05-27 15:24 UTC (permalink / raw)
  To: dev; +Cc: Igor Romanov, Andy Moreton, Ivan Malov

From: Igor Romanov <igor.romanov@oktetlabs.ru>

Make the software index of a Tx queue and the ethdev queue index
separate. When an ethdev TxQ is accessed in ethdev callbacks, an
explicit ethdev queue index is used.

This is a preparation for introducing non-ethdev Tx queues.

Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
Reviewed-by: Ivan Malov <ivan.malov@oktetlabs.ru>
---
 drivers/net/sfc/sfc.h        |   1 +
 drivers/net/sfc/sfc_ethdev.c |  46 ++++++----
 drivers/net/sfc/sfc_ev.c     |   2 +-
 drivers/net/sfc/sfc_ev.h     |  21 ++++-
 drivers/net/sfc/sfc_tx.c     | 164 ++++++++++++++++++++++++-----------
 drivers/net/sfc/sfc_tx.h     |  11 +--
 6 files changed, 171 insertions(+), 74 deletions(-)

diff --git a/drivers/net/sfc/sfc.h b/drivers/net/sfc/sfc.h
index ebe705020d..00fc26cf0e 100644
--- a/drivers/net/sfc/sfc.h
+++ b/drivers/net/sfc/sfc.h
@@ -173,6 +173,7 @@ struct sfc_adapter_shared {
 
 	unsigned int			txq_count;
 	struct sfc_txq_info		*txq_info;
+	unsigned int			ethdev_txq_count;
 
 	struct sfc_rss			rss;
 
diff --git a/drivers/net/sfc/sfc_ethdev.c b/drivers/net/sfc/sfc_ethdev.c
index 2651c41288..88896db1f8 100644
--- a/drivers/net/sfc/sfc_ethdev.c
+++ b/drivers/net/sfc/sfc_ethdev.c
@@ -524,24 +524,28 @@ sfc_rx_queue_release(void *queue)
 }
 
 static int
-sfc_tx_queue_setup(struct rte_eth_dev *dev, uint16_t tx_queue_id,
+sfc_tx_queue_setup(struct rte_eth_dev *dev, uint16_t ethdev_qid,
 		   uint16_t nb_tx_desc, unsigned int socket_id,
 		   const struct rte_eth_txconf *tx_conf)
 {
 	struct sfc_adapter_shared *sas = sfc_adapter_shared_by_eth_dev(dev);
 	struct sfc_adapter *sa = sfc_adapter_by_eth_dev(dev);
+	struct sfc_txq_info *txq_info;
+	sfc_sw_index_t sw_index;
 	int rc;
 
 	sfc_log_init(sa, "TxQ = %u, nb_tx_desc = %u, socket_id = %u",
-		     tx_queue_id, nb_tx_desc, socket_id);
+		     ethdev_qid, nb_tx_desc, socket_id);
 
 	sfc_adapter_lock(sa);
 
-	rc = sfc_tx_qinit(sa, tx_queue_id, nb_tx_desc, socket_id, tx_conf);
+	sw_index = sfc_txq_sw_index_by_ethdev_tx_qid(sas, ethdev_qid);
+	rc = sfc_tx_qinit(sa, sw_index, nb_tx_desc, socket_id, tx_conf);
 	if (rc != 0)
 		goto fail_tx_qinit;
 
-	dev->data->tx_queues[tx_queue_id] = sas->txq_info[tx_queue_id].dp;
+	txq_info = sfc_txq_info_by_ethdev_qid(sas, ethdev_qid);
+	dev->data->tx_queues[ethdev_qid] = txq_info->dp;
 
 	sfc_adapter_unlock(sa);
 	return 0;
@@ -557,7 +561,7 @@ sfc_tx_queue_release(void *queue)
 {
 	struct sfc_dp_txq *dp_txq = queue;
 	struct sfc_txq *txq;
-	unsigned int sw_index;
+	sfc_sw_index_t sw_index;
 	struct sfc_adapter *sa;
 
 	if (dp_txq == NULL)
@@ -1213,15 +1217,15 @@ sfc_rx_queue_info_get(struct rte_eth_dev *dev, uint16_t ethdev_qid,
  * use any process-local pointers from the adapter data.
  */
 static void
-sfc_tx_queue_info_get(struct rte_eth_dev *dev, uint16_t tx_queue_id,
+sfc_tx_queue_info_get(struct rte_eth_dev *dev, uint16_t ethdev_qid,
 		      struct rte_eth_txq_info *qinfo)
 {
 	struct sfc_adapter_shared *sas = sfc_adapter_shared_by_eth_dev(dev);
 	struct sfc_txq_info *txq_info;
 
-	SFC_ASSERT(tx_queue_id < sas->txq_count);
+	SFC_ASSERT(ethdev_qid < sas->ethdev_txq_count);
 
-	txq_info = &sas->txq_info[tx_queue_id];
+	txq_info = sfc_txq_info_by_ethdev_qid(sas, ethdev_qid);
 
 	memset(qinfo, 0, sizeof(*qinfo));
 
@@ -1362,13 +1366,15 @@ sfc_rx_queue_stop(struct rte_eth_dev *dev, uint16_t ethdev_qid)
 }
 
 static int
-sfc_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
+sfc_tx_queue_start(struct rte_eth_dev *dev, uint16_t ethdev_qid)
 {
 	struct sfc_adapter_shared *sas = sfc_adapter_shared_by_eth_dev(dev);
 	struct sfc_adapter *sa = sfc_adapter_by_eth_dev(dev);
+	struct sfc_txq_info *txq_info;
+	sfc_sw_index_t sw_index;
 	int rc;
 
-	sfc_log_init(sa, "TxQ = %u", tx_queue_id);
+	sfc_log_init(sa, "TxQ = %u", ethdev_qid);
 
 	sfc_adapter_lock(sa);
 
@@ -1376,14 +1382,16 @@ sfc_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
 	if (sa->state != SFC_ADAPTER_STARTED)
 		goto fail_not_started;
 
-	if (sas->txq_info[tx_queue_id].state != SFC_TXQ_INITIALIZED)
+	txq_info = sfc_txq_info_by_ethdev_qid(sas, ethdev_qid);
+	if (txq_info->state != SFC_TXQ_INITIALIZED)
 		goto fail_not_setup;
 
-	rc = sfc_tx_qstart(sa, tx_queue_id);
+	sw_index = sfc_txq_sw_index_by_ethdev_tx_qid(sas, ethdev_qid);
+	rc = sfc_tx_qstart(sa, sw_index);
 	if (rc != 0)
 		goto fail_tx_qstart;
 
-	sas->txq_info[tx_queue_id].deferred_started = B_TRUE;
+	txq_info->deferred_started = B_TRUE;
 
 	sfc_adapter_unlock(sa);
 	return 0;
@@ -1398,18 +1406,22 @@ sfc_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
 }
 
 static int
-sfc_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
+sfc_tx_queue_stop(struct rte_eth_dev *dev, uint16_t ethdev_qid)
 {
 	struct sfc_adapter_shared *sas = sfc_adapter_shared_by_eth_dev(dev);
 	struct sfc_adapter *sa = sfc_adapter_by_eth_dev(dev);
+	struct sfc_txq_info *txq_info;
+	sfc_sw_index_t sw_index;
 
-	sfc_log_init(sa, "TxQ = %u", tx_queue_id);
+	sfc_log_init(sa, "TxQ = %u", ethdev_qid);
 
 	sfc_adapter_lock(sa);
 
-	sfc_tx_qstop(sa, tx_queue_id);
+	sw_index = sfc_txq_sw_index_by_ethdev_tx_qid(sas, ethdev_qid);
+	sfc_tx_qstop(sa, sw_index);
 
-	sas->txq_info[tx_queue_id].deferred_started = B_FALSE;
+	txq_info = sfc_txq_info_by_ethdev_qid(sas, ethdev_qid);
+	txq_info->deferred_started = B_FALSE;
 
 	sfc_adapter_unlock(sa);
 	return 0;
diff --git a/drivers/net/sfc/sfc_ev.c b/drivers/net/sfc/sfc_ev.c
index 71f706e403..ed28d51e12 100644
--- a/drivers/net/sfc/sfc_ev.c
+++ b/drivers/net/sfc/sfc_ev.c
@@ -598,7 +598,7 @@ sfc_ev_qpoll(struct sfc_evq *evq)
 		}
 
 		if (evq->dp_txq != NULL) {
-			unsigned int txq_sw_index;
+			sfc_sw_index_t txq_sw_index;
 
 			txq_sw_index = evq->dp_txq->dpq.queue_id;
 
diff --git a/drivers/net/sfc/sfc_ev.h b/drivers/net/sfc/sfc_ev.h
index 5a9f85c2d9..75b9dcdebd 100644
--- a/drivers/net/sfc/sfc_ev.h
+++ b/drivers/net/sfc/sfc_ev.h
@@ -92,8 +92,25 @@ sfc_evq_sw_index_by_rxq_sw_index(__rte_unused struct sfc_adapter *sa,
 	return 1 + rxq_sw_index;
 }
 
-static inline unsigned int
-sfc_evq_index_by_txq_sw_index(struct sfc_adapter *sa, unsigned int txq_sw_index)
+static inline sfc_ethdev_qid_t
+sfc_ethdev_tx_qid_by_txq_sw_index(__rte_unused struct sfc_adapter_shared *sas,
+				  sfc_sw_index_t txq_sw_index)
+{
+	/* Only ethdev queues are present for now */
+	return txq_sw_index;
+}
+
+static inline sfc_sw_index_t
+sfc_txq_sw_index_by_ethdev_tx_qid(__rte_unused struct sfc_adapter_shared *sas,
+				  sfc_ethdev_qid_t ethdev_qid)
+{
+	/* Only ethdev queues are present for now */
+	return ethdev_qid;
+}
+
+static inline sfc_sw_index_t
+sfc_evq_sw_index_by_txq_sw_index(struct sfc_adapter *sa,
+				 sfc_sw_index_t txq_sw_index)
 {
 	return 1 + sa->eth_dev->data->nb_rx_queues + txq_sw_index;
 }
diff --git a/drivers/net/sfc/sfc_tx.c b/drivers/net/sfc/sfc_tx.c
index 28d696de61..ce2a9a6a4f 100644
--- a/drivers/net/sfc/sfc_tx.c
+++ b/drivers/net/sfc/sfc_tx.c
@@ -34,6 +34,19 @@
  */
 #define SFC_TX_QFLUSH_POLL_ATTEMPTS	(2000)
 
+struct sfc_txq_info *
+sfc_txq_info_by_ethdev_qid(struct sfc_adapter_shared *sas,
+			   sfc_ethdev_qid_t ethdev_qid)
+{
+	sfc_sw_index_t sw_index;
+
+	SFC_ASSERT((unsigned int)ethdev_qid < sas->ethdev_txq_count);
+	SFC_ASSERT(ethdev_qid != SFC_ETHDEV_QID_INVALID);
+
+	sw_index = sfc_txq_sw_index_by_ethdev_tx_qid(sas, ethdev_qid);
+	return &sas->txq_info[sw_index];
+}
+
 static uint64_t
 sfc_tx_get_offload_mask(struct sfc_adapter *sa)
 {
@@ -118,10 +131,12 @@ sfc_tx_qflush_done(struct sfc_txq_info *txq_info)
 }
 
 int
-sfc_tx_qinit(struct sfc_adapter *sa, unsigned int sw_index,
+sfc_tx_qinit(struct sfc_adapter *sa, sfc_sw_index_t sw_index,
 	     uint16_t nb_tx_desc, unsigned int socket_id,
 	     const struct rte_eth_txconf *tx_conf)
 {
+	struct sfc_adapter_shared * const sas = sfc_sa2shared(sa);
+	sfc_ethdev_qid_t ethdev_qid;
 	const efx_nic_cfg_t *encp = efx_nic_cfg_get(sa->nic);
 	unsigned int txq_entries;
 	unsigned int evq_entries;
@@ -134,7 +149,9 @@ sfc_tx_qinit(struct sfc_adapter *sa, unsigned int sw_index,
 	uint64_t offloads;
 	struct sfc_dp_tx_hw_limits hw_limits;
 
-	sfc_log_init(sa, "TxQ = %u", sw_index);
+	ethdev_qid = sfc_ethdev_tx_qid_by_txq_sw_index(sas, sw_index);
+
+	sfc_log_init(sa, "TxQ = %d (internal %u)", ethdev_qid, sw_index);
 
 	memset(&hw_limits, 0, sizeof(hw_limits));
 	hw_limits.txq_max_entries = sa->txq_max_entries;
@@ -150,8 +167,11 @@ sfc_tx_qinit(struct sfc_adapter *sa, unsigned int sw_index,
 	SFC_ASSERT(txq_entries >= nb_tx_desc);
 	SFC_ASSERT(txq_max_fill_level <= nb_tx_desc);
 
-	offloads = tx_conf->offloads |
-		sa->eth_dev->data->dev_conf.txmode.offloads;
+	offloads = tx_conf->offloads;
+	/* Add device level Tx offloads if the queue is an ethdev Tx queue */
+	if (ethdev_qid != SFC_ETHDEV_QID_INVALID)
+		offloads |= sa->eth_dev->data->dev_conf.txmode.offloads;
+
 	rc = sfc_tx_qcheck_conf(sa, txq_max_fill_level, tx_conf, offloads);
 	if (rc != 0)
 		goto fail_bad_conf;
@@ -231,20 +251,26 @@ sfc_tx_qinit(struct sfc_adapter *sa, unsigned int sw_index,
 
 fail_bad_conf:
 fail_size_up_rings:
-	sfc_log_init(sa, "failed (TxQ = %u, rc = %d)", sw_index, rc);
+	sfc_log_init(sa, "failed (TxQ = %d (internal %u), rc = %d)", ethdev_qid,
+		     sw_index, rc);
 	return rc;
 }
 
 void
-sfc_tx_qfini(struct sfc_adapter *sa, unsigned int sw_index)
+sfc_tx_qfini(struct sfc_adapter *sa, sfc_sw_index_t sw_index)
 {
+	struct sfc_adapter_shared * const sas = sfc_sa2shared(sa);
+	sfc_ethdev_qid_t ethdev_qid;
 	struct sfc_txq_info *txq_info;
 	struct sfc_txq *txq;
 
-	sfc_log_init(sa, "TxQ = %u", sw_index);
+	ethdev_qid = sfc_ethdev_tx_qid_by_txq_sw_index(sas, sw_index);
+
+	sfc_log_init(sa, "TxQ = %d (internal %u)", ethdev_qid, sw_index);
 
 	SFC_ASSERT(sw_index < sfc_sa2shared(sa)->txq_count);
-	sa->eth_dev->data->tx_queues[sw_index] = NULL;
+	if (ethdev_qid != SFC_ETHDEV_QID_INVALID)
+		sa->eth_dev->data->tx_queues[ethdev_qid] = NULL;
 
 	txq_info = &sfc_sa2shared(sa)->txq_info[sw_index];
 
@@ -265,9 +291,14 @@ sfc_tx_qfini(struct sfc_adapter *sa, unsigned int sw_index)
 }
 
 static int
-sfc_tx_qinit_info(struct sfc_adapter *sa, unsigned int sw_index)
+sfc_tx_qinit_info(struct sfc_adapter *sa, sfc_sw_index_t sw_index)
 {
-	sfc_log_init(sa, "TxQ = %u", sw_index);
+	struct sfc_adapter_shared * const sas = sfc_sa2shared(sa);
+	sfc_ethdev_qid_t ethdev_qid;
+
+	ethdev_qid = sfc_ethdev_tx_qid_by_txq_sw_index(sas, sw_index);
+
+	sfc_log_init(sa, "TxQ = %d (internal %u)", ethdev_qid, sw_index);
 
 	return 0;
 }
@@ -316,17 +347,26 @@ static void
 sfc_tx_fini_queues(struct sfc_adapter *sa, unsigned int nb_tx_queues)
 {
 	struct sfc_adapter_shared * const sas = sfc_sa2shared(sa);
-	int sw_index;
+	sfc_sw_index_t sw_index;
+	sfc_ethdev_qid_t ethdev_qid;
 
-	SFC_ASSERT(nb_tx_queues <= sas->txq_count);
+	SFC_ASSERT(nb_tx_queues <= sas->ethdev_txq_count);
 
-	sw_index = sas->txq_count;
-	while (--sw_index >= (int)nb_tx_queues) {
-		if (sas->txq_info[sw_index].state & SFC_TXQ_INITIALIZED)
+	/*
+	 * Finalize only ethdev queues since other ones are finalized only
+	 * on device close and they may require additional deinitialization.
+	 */
+	ethdev_qid = sas->ethdev_txq_count;
+	while (--ethdev_qid >= (int)nb_tx_queues) {
+		struct sfc_txq_info *txq_info;
+
+		sw_index = sfc_txq_sw_index_by_ethdev_tx_qid(sas, ethdev_qid);
+		txq_info = sfc_txq_info_by_ethdev_qid(sas, ethdev_qid);
+		if (txq_info->state & SFC_TXQ_INITIALIZED)
 			sfc_tx_qfini(sa, sw_index);
 	}
 
-	sas->txq_count = nb_tx_queues;
+	sas->ethdev_txq_count = nb_tx_queues;
 }
 
 int
@@ -339,7 +379,7 @@ sfc_tx_configure(struct sfc_adapter *sa)
 	int rc = 0;
 
 	sfc_log_init(sa, "nb_tx_queues=%u (old %u)",
-		     nb_tx_queues, sas->txq_count);
+		     nb_tx_queues, sas->ethdev_txq_count);
 
 	/*
 	 * The datapath implementation assumes absence of boundary
@@ -377,7 +417,7 @@ sfc_tx_configure(struct sfc_adapter *sa)
 		struct sfc_txq_info *new_txq_info;
 		struct sfc_txq *new_txq_ctrl;
 
-		if (nb_tx_queues < sas->txq_count)
+		if (nb_tx_queues < sas->ethdev_txq_count)
 			sfc_tx_fini_queues(sa, nb_tx_queues);
 
 		new_txq_info =
@@ -393,24 +433,30 @@ sfc_tx_configure(struct sfc_adapter *sa)
 
 		sas->txq_info = new_txq_info;
 		sa->txq_ctrl = new_txq_ctrl;
-		if (nb_tx_queues > sas->txq_count) {
-			memset(&sas->txq_info[sas->txq_count], 0,
-			       (nb_tx_queues - sas->txq_count) *
+		if (nb_tx_queues > sas->ethdev_txq_count) {
+			memset(&sas->txq_info[sas->ethdev_txq_count], 0,
+			       (nb_tx_queues - sas->ethdev_txq_count) *
 			       sizeof(sas->txq_info[0]));
-			memset(&sa->txq_ctrl[sas->txq_count], 0,
-			       (nb_tx_queues - sas->txq_count) *
+			memset(&sa->txq_ctrl[sas->ethdev_txq_count], 0,
+			       (nb_tx_queues - sas->ethdev_txq_count) *
 			       sizeof(sa->txq_ctrl[0]));
 		}
 	}
 
-	while (sas->txq_count < nb_tx_queues) {
-		rc = sfc_tx_qinit_info(sa, sas->txq_count);
+	while (sas->ethdev_txq_count < nb_tx_queues) {
+		sfc_sw_index_t sw_index;
+
+		sw_index = sfc_txq_sw_index_by_ethdev_tx_qid(sas,
+				sas->ethdev_txq_count);
+		rc = sfc_tx_qinit_info(sa, sw_index);
 		if (rc != 0)
 			goto fail_tx_qinit_info;
 
-		sas->txq_count++;
+		sas->ethdev_txq_count++;
 	}
 
+	sas->txq_count = sas->ethdev_txq_count;
+
 done:
 	return 0;
 
@@ -440,12 +486,12 @@ sfc_tx_close(struct sfc_adapter *sa)
 }
 
 int
-sfc_tx_qstart(struct sfc_adapter *sa, unsigned int sw_index)
+sfc_tx_qstart(struct sfc_adapter *sa, sfc_sw_index_t sw_index)
 {
 	struct sfc_adapter_shared * const sas = sfc_sa2shared(sa);
+	sfc_ethdev_qid_t ethdev_qid;
 	uint64_t offloads_supported = sfc_tx_get_dev_offload_caps(sa) |
 				      sfc_tx_get_queue_offload_caps(sa);
-	struct rte_eth_dev_data *dev_data;
 	struct sfc_txq_info *txq_info;
 	struct sfc_txq *txq;
 	struct sfc_evq *evq;
@@ -453,7 +499,9 @@ sfc_tx_qstart(struct sfc_adapter *sa, unsigned int sw_index)
 	unsigned int desc_index;
 	int rc = 0;
 
-	sfc_log_init(sa, "TxQ = %u", sw_index);
+	ethdev_qid = sfc_ethdev_tx_qid_by_txq_sw_index(sas, sw_index);
+
+	sfc_log_init(sa, "TxQ = %d (internal %u)", ethdev_qid, sw_index);
 
 	SFC_ASSERT(sw_index < sas->txq_count);
 	txq_info = &sas->txq_info[sw_index];
@@ -463,7 +511,7 @@ sfc_tx_qstart(struct sfc_adapter *sa, unsigned int sw_index)
 	txq = &sa->txq_ctrl[sw_index];
 	evq = txq->evq;
 
-	rc = sfc_ev_qstart(evq, sfc_evq_index_by_txq_sw_index(sa, sw_index));
+	rc = sfc_ev_qstart(evq, sfc_evq_sw_index_by_txq_sw_index(sa, sw_index));
 	if (rc != 0)
 		goto fail_ev_qstart;
 
@@ -505,11 +553,17 @@ sfc_tx_qstart(struct sfc_adapter *sa, unsigned int sw_index)
 	if (rc != 0)
 		goto fail_dp_qstart;
 
-	/*
-	 * It seems to be used by DPDK for debug purposes only ('rte_ether')
-	 */
-	dev_data = sa->eth_dev->data;
-	dev_data->tx_queue_state[sw_index] = RTE_ETH_QUEUE_STATE_STARTED;
+	if (ethdev_qid != SFC_ETHDEV_QID_INVALID) {
+		struct rte_eth_dev_data *dev_data;
+
+		/*
+		 * It seems to be used by DPDK for debug purposes only
+		 * ('rte_ether').
+		 */
+		dev_data = sa->eth_dev->data;
+		dev_data->tx_queue_state[ethdev_qid] =
+			RTE_ETH_QUEUE_STATE_STARTED;
+	}
 
 	return 0;
 
@@ -525,17 +579,19 @@ sfc_tx_qstart(struct sfc_adapter *sa, unsigned int sw_index)
 }
 
 void
-sfc_tx_qstop(struct sfc_adapter *sa, unsigned int sw_index)
+sfc_tx_qstop(struct sfc_adapter *sa, sfc_sw_index_t sw_index)
 {
 	struct sfc_adapter_shared * const sas = sfc_sa2shared(sa);
-	struct rte_eth_dev_data *dev_data;
+	sfc_ethdev_qid_t ethdev_qid;
 	struct sfc_txq_info *txq_info;
 	struct sfc_txq *txq;
 	unsigned int retry_count;
 	unsigned int wait_count;
 	int rc;
 
-	sfc_log_init(sa, "TxQ = %u", sw_index);
+	ethdev_qid = sfc_ethdev_tx_qid_by_txq_sw_index(sas, sw_index);
+
+	sfc_log_init(sa, "TxQ = %d (internal %u)", ethdev_qid, sw_index);
 
 	SFC_ASSERT(sw_index < sas->txq_count);
 	txq_info = &sas->txq_info[sw_index];
@@ -577,10 +633,12 @@ sfc_tx_qstop(struct sfc_adapter *sa, unsigned int sw_index)
 			 wait_count++ < SFC_TX_QFLUSH_POLL_ATTEMPTS);
 
 		if (txq_info->state & SFC_TXQ_FLUSHING)
-			sfc_err(sa, "TxQ %u flush timed out", sw_index);
+			sfc_err(sa, "TxQ %d (internal %u) flush timed out",
+				ethdev_qid, sw_index);
 
 		if (txq_info->state & SFC_TXQ_FLUSHED)
-			sfc_notice(sa, "TxQ %u flushed", sw_index);
+			sfc_notice(sa, "TxQ %d (internal %u) flushed",
+				   ethdev_qid, sw_index);
 	}
 
 	sa->priv.dp_tx->qreap(txq_info->dp);
@@ -591,11 +649,17 @@ sfc_tx_qstop(struct sfc_adapter *sa, unsigned int sw_index)
 
 	sfc_ev_qstop(txq->evq);
 
-	/*
-	 * It seems to be used by DPDK for debug purposes only ('rte_ether')
-	 */
-	dev_data = sa->eth_dev->data;
-	dev_data->tx_queue_state[sw_index] = RTE_ETH_QUEUE_STATE_STOPPED;
+	if (ethdev_qid != SFC_ETHDEV_QID_INVALID) {
+		struct rte_eth_dev_data *dev_data;
+
+		/*
+		 * It seems to be used by DPDK for debug purposes only
+		 * ('rte_ether')
+		 */
+		dev_data = sa->eth_dev->data;
+		dev_data->tx_queue_state[ethdev_qid] =
+			RTE_ETH_QUEUE_STATE_STOPPED;
+	}
 }
 
 int
@@ -603,10 +667,11 @@ sfc_tx_start(struct sfc_adapter *sa)
 {
 	struct sfc_adapter_shared * const sas = sfc_sa2shared(sa);
 	const efx_nic_cfg_t *encp = efx_nic_cfg_get(sa->nic);
-	unsigned int sw_index;
+	sfc_sw_index_t sw_index;
 	int rc = 0;
 
-	sfc_log_init(sa, "txq_count = %u", sas->txq_count);
+	sfc_log_init(sa, "txq_count = %u (internal %u)",
+		     sas->ethdev_txq_count, sas->txq_count);
 
 	if (sa->tso) {
 		if (!encp->enc_fw_assisted_tso_v2_enabled &&
@@ -654,9 +719,10 @@ void
 sfc_tx_stop(struct sfc_adapter *sa)
 {
 	struct sfc_adapter_shared * const sas = sfc_sa2shared(sa);
-	unsigned int sw_index;
+	sfc_sw_index_t sw_index;
 
-	sfc_log_init(sa, "txq_count = %u", sas->txq_count);
+	sfc_log_init(sa, "txq_count = %u (internal %u)",
+		     sas->ethdev_txq_count, sas->txq_count);
 
 	sw_index = sas->txq_count;
 	while (sw_index-- > 0) {
diff --git a/drivers/net/sfc/sfc_tx.h b/drivers/net/sfc/sfc_tx.h
index 5ed678703e..f1700b13ca 100644
--- a/drivers/net/sfc/sfc_tx.h
+++ b/drivers/net/sfc/sfc_tx.h
@@ -58,7 +58,8 @@ struct sfc_txq {
 };
 
 struct sfc_txq *sfc_txq_by_dp_txq(const struct sfc_dp_txq *dp_txq);
-
+struct sfc_txq_info *sfc_txq_info_by_ethdev_qid(struct sfc_adapter_shared *sas,
+						sfc_ethdev_qid_t ethdev_qid);
 /**
  * Transmit queue information used on libefx-based data path.
  * Allocated on the socket specified on the queue setup.
@@ -107,14 +108,14 @@ struct sfc_txq_info *sfc_txq_info_by_dp_txq(const struct sfc_dp_txq *dp_txq);
 int sfc_tx_configure(struct sfc_adapter *sa);
 void sfc_tx_close(struct sfc_adapter *sa);
 
-int sfc_tx_qinit(struct sfc_adapter *sa, unsigned int sw_index,
+int sfc_tx_qinit(struct sfc_adapter *sa, sfc_sw_index_t sw_index,
 		 uint16_t nb_tx_desc, unsigned int socket_id,
 		 const struct rte_eth_txconf *tx_conf);
-void sfc_tx_qfini(struct sfc_adapter *sa, unsigned int sw_index);
+void sfc_tx_qfini(struct sfc_adapter *sa, sfc_sw_index_t sw_index);
 
 void sfc_tx_qflush_done(struct sfc_txq_info *txq_info);
-int sfc_tx_qstart(struct sfc_adapter *sa, unsigned int sw_index);
-void sfc_tx_qstop(struct sfc_adapter *sa, unsigned int sw_index);
+int sfc_tx_qstart(struct sfc_adapter *sa, sfc_sw_index_t sw_index);
+void sfc_tx_qstop(struct sfc_adapter *sa, sfc_sw_index_t sw_index);
 int sfc_tx_start(struct sfc_adapter *sa);
 void sfc_tx_stop(struct sfc_adapter *sa);
 
-- 
2.30.2


^ permalink raw reply	[flat|nested] 104+ messages in thread

* [dpdk-dev] [PATCH 07/20] common/sfc_efx/base: add ingress m-port RxQ flag
  2021-05-27 15:24 [dpdk-dev] [PATCH 00/20] net/sfc: support flow API COUNT action Andrew Rybchenko
                   ` (5 preceding siblings ...)
  2021-05-27 15:24 ` [dpdk-dev] [PATCH 06/20] net/sfc: introduce ethdev Tx queue ID Andrew Rybchenko
@ 2021-05-27 15:24 ` Andrew Rybchenko
  2021-05-27 15:24 ` [dpdk-dev] [PATCH 08/20] common/sfc_efx/base: add user mark " Andrew Rybchenko
                   ` (15 subsequent siblings)
  22 siblings, 0 replies; 104+ messages in thread
From: Andrew Rybchenko @ 2021-05-27 15:24 UTC (permalink / raw)
  To: dev; +Cc: Igor Romanov, Andy Moreton, Ivan Malov

From: Igor Romanov <igor.romanov@oktetlabs.ru>

Add a flag to request support for ingress m-port on an RxQ.
Implement it only for Riverhead; other families will return an error
if the flag is set.
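
For illustration only (this sketch is not part of the patch): a libefx
client that needs the ingress m-port field simply ORs the new flag into
the RxQ creation flags and treats ENOTSUP as lack of support on the
current family. EFX_RXQ_FLAG_NONE and boolean_t come from efx.h.

#include "efx.h"	/* EFX_RXQ_FLAG_* and boolean_t */

/*
 * Minimal sketch: build RxQ creation flags. Families other than
 * Riverhead fail the subsequent queue creation with ENOTSUP when
 * EFX_RXQ_FLAG_INGRESS_MPORT is set.
 */
static unsigned int
example_rxq_flags(boolean_t need_ingress_mport)
{
	unsigned int flags = EFX_RXQ_FLAG_NONE;

	if (need_ingress_mport != B_FALSE)
		flags |= EFX_RXQ_FLAG_INGRESS_MPORT;

	return flags;
}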

Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
Reviewed-by: Ivan Malov <ivan.malov@oktetlabs.ru>
---
 drivers/common/sfc_efx/base/ef10_rx.c  |  9 ++++++++-
 drivers/common/sfc_efx/base/efx.h      |  5 +++++
 drivers/common/sfc_efx/base/efx_rx.c   | 14 +++++++++-----
 drivers/common/sfc_efx/base/rhead_rx.c |  3 +++
 4 files changed, 25 insertions(+), 6 deletions(-)

diff --git a/drivers/common/sfc_efx/base/ef10_rx.c b/drivers/common/sfc_efx/base/ef10_rx.c
index cfa60bd324..0e140645a5 100644
--- a/drivers/common/sfc_efx/base/ef10_rx.c
+++ b/drivers/common/sfc_efx/base/ef10_rx.c
@@ -1031,6 +1031,11 @@ ef10_rx_qcreate(
 	EFSYS_ASSERT(params.es_bufs_per_desc == 0);
 #endif /* EFSYS_OPT_RX_ES_SUPER_BUFFER */
 
+	if (flags & EFX_RXQ_FLAG_INGRESS_MPORT) {
+		rc = ENOTSUP;
+		goto fail12;
+	}
+
 	/* Scatter can only be disabled if the firmware supports doing so */
 	if (flags & EFX_RXQ_FLAG_SCATTER)
 		params.disable_scatter = B_FALSE;
@@ -1044,7 +1049,7 @@ ef10_rx_qcreate(
 
 	if ((rc = efx_mcdi_init_rxq(enp, ndescs, eep, label, index,
 		    esmp, &params)) != 0)
-		goto fail12;
+		goto fail13;
 
 	erp->er_eep = eep;
 	erp->er_label = label;
@@ -1057,6 +1062,8 @@ ef10_rx_qcreate(
 
 	return (0);
 
+fail13:
+	EFSYS_PROBE(fail13);
 fail12:
 	EFSYS_PROBE(fail12);
 #if EFSYS_OPT_RX_ES_SUPER_BUFFER
diff --git a/drivers/common/sfc_efx/base/efx.h b/drivers/common/sfc_efx/base/efx.h
index 6a99099ad2..72ab4af01c 100644
--- a/drivers/common/sfc_efx/base/efx.h
+++ b/drivers/common/sfc_efx/base/efx.h
@@ -2925,6 +2925,7 @@ typedef enum efx_rx_prefix_field_e {
 	EFX_RX_PREFIX_FIELD_USER_MARK_VALID,
 	EFX_RX_PREFIX_FIELD_CSUM_FRAME,
 	EFX_RX_PREFIX_FIELD_INGRESS_VPORT,
+	EFX_RX_PREFIX_FIELD_INGRESS_MPORT = EFX_RX_PREFIX_FIELD_INGRESS_VPORT,
 	EFX_RX_PREFIX_NFIELDS
 } efx_rx_prefix_field_t;
 
@@ -2998,6 +2999,10 @@ typedef enum efx_rxq_type_e {
  * the driver.
  */
 #define	EFX_RXQ_FLAG_RSS_HASH		0x4
+/*
+ * Request ingress mport field in the Rx prefix of a queue.
+ */
+#define	EFX_RXQ_FLAG_INGRESS_MPORT	0x8
 
 LIBEFX_API
 extern	__checkReturn	efx_rc_t
diff --git a/drivers/common/sfc_efx/base/efx_rx.c b/drivers/common/sfc_efx/base/efx_rx.c
index 7c6fecf925..7e63363be7 100644
--- a/drivers/common/sfc_efx/base/efx_rx.c
+++ b/drivers/common/sfc_efx/base/efx_rx.c
@@ -1743,14 +1743,20 @@ siena_rx_qcreate(
 		goto fail2;
 	}
 
-	if (flags & EFX_RXQ_FLAG_SCATTER) {
 #if EFSYS_OPT_RX_SCATTER
-		jumbo = B_TRUE;
+#define SUPPORTED_RXQ_FLAGS EFX_RXQ_FLAG_SCATTER
 #else
+#define SUPPORTED_RXQ_FLAGS EFX_RXQ_FLAG_NONE
+#endif
+	/* Reject flags for unsupported queue features */
+	if ((flags & ~SUPPORTED_RXQ_FLAGS) != 0) {
 		rc = EINVAL;
 		goto fail3;
-#endif	/* EFSYS_OPT_RX_SCATTER */
 	}
+#undef SUPPORTED_RXQ_FLAGS
+
+	if (flags & EFX_RXQ_FLAG_SCATTER)
+		jumbo = B_TRUE;
 
 	/* Set up the new descriptor queue */
 	EFX_POPULATE_OWORD_7(oword,
@@ -1769,10 +1775,8 @@ siena_rx_qcreate(
 
 	return (0);
 
-#if !EFSYS_OPT_RX_SCATTER
 fail3:
 	EFSYS_PROBE(fail3);
-#endif
 fail2:
 	EFSYS_PROBE(fail2);
 fail1:
diff --git a/drivers/common/sfc_efx/base/rhead_rx.c b/drivers/common/sfc_efx/base/rhead_rx.c
index b2dacbab32..f1d46f7c70 100644
--- a/drivers/common/sfc_efx/base/rhead_rx.c
+++ b/drivers/common/sfc_efx/base/rhead_rx.c
@@ -629,6 +629,9 @@ rhead_rx_qcreate(
 		fields_mask |= 1U << EFX_RX_PREFIX_FIELD_RSS_HASH_VALID;
 	}
 
+	if (flags & EFX_RXQ_FLAG_INGRESS_MPORT)
+		fields_mask |= 1U << EFX_RX_PREFIX_FIELD_INGRESS_MPORT;
+
 	/*
 	 * LENGTH is required in EF100 host interface, as receive events
 	 * do not include the packet length.
-- 
2.30.2


^ permalink raw reply	[flat|nested] 104+ messages in thread

* [dpdk-dev] [PATCH 08/20] common/sfc_efx/base: add user mark RxQ flag
  2021-05-27 15:24 [dpdk-dev] [PATCH 00/20] net/sfc: support flow API COUNT action Andrew Rybchenko
                   ` (6 preceding siblings ...)
  2021-05-27 15:24 ` [dpdk-dev] [PATCH 07/20] common/sfc_efx/base: add ingress m-port RxQ flag Andrew Rybchenko
@ 2021-05-27 15:24 ` Andrew Rybchenko
  2021-05-27 15:24 ` [dpdk-dev] [PATCH 09/20] net/sfc: add abstractions for the management EVQ identity Andrew Rybchenko
                   ` (14 subsequent siblings)
  22 siblings, 0 replies; 104+ messages in thread
From: Andrew Rybchenko @ 2021-05-27 15:24 UTC (permalink / raw)
  To: dev; +Cc: Igor Romanov, Andy Moreton, Ivan Malov

From: Igor Romanov <igor.romanov@oktetlabs.ru>

Add a flag to request support for the user mark field on an RxQ.
The field is required to retrieve the generation count value from
the counter RxQ.

Implement it only for Riverhead and EF10 ESSB since they support
the field in the Rx prefix.
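
For illustration only (names are hypothetical, not part of the patch):
the generation count delivered via the user mark lets a counter packet
consumer tell packets of the current counter stream apart from stale
ones, e.g.:

#include <stdint.h>

#include "efx.h"	/* boolean_t, B_TRUE/B_FALSE */

/*
 * Hypothetical helper: a counter packet whose generation count does not
 * match the current one must not be trusted, since the counter stream
 * was restarted in between.
 */
static boolean_t
example_counter_pkt_is_stale(uint32_t pkt_gen_count, uint32_t cur_gen_count)
{
	return (pkt_gen_count != cur_gen_count) ? B_TRUE : B_FALSE;
}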

Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
Reviewed-by: Ivan Malov <ivan.malov@oktetlabs.ru>
---
 drivers/common/sfc_efx/base/ef10_rx.c  | 52 ++++++++++++++++----------
 drivers/common/sfc_efx/base/efx.h      |  4 ++
 drivers/common/sfc_efx/base/rhead_rx.c |  3 ++
 3 files changed, 39 insertions(+), 20 deletions(-)

diff --git a/drivers/common/sfc_efx/base/ef10_rx.c b/drivers/common/sfc_efx/base/ef10_rx.c
index 0e140645a5..0c3f9413cf 100644
--- a/drivers/common/sfc_efx/base/ef10_rx.c
+++ b/drivers/common/sfc_efx/base/ef10_rx.c
@@ -926,6 +926,10 @@ ef10_rx_qcreate(
 			goto fail1;
 		}
 		erp->er_buf_size = type_data->ertd_default.ed_buf_size;
+		if (flags & EFX_RXQ_FLAG_USER_MARK) {
+			rc = ENOTSUP;
+			goto fail2;
+		}
 		/*
 		 * Ignore EFX_RXQ_FLAG_RSS_HASH since if RSS hash is calculated
 		 * it is always delivered from HW in the pseudo-header.
@@ -936,7 +940,7 @@ ef10_rx_qcreate(
 		erpl = &ef10_packed_stream_rx_prefix_layout;
 		if (type_data == NULL) {
 			rc = EINVAL;
-			goto fail2;
+			goto fail3;
 		}
 		switch (type_data->ertd_packed_stream.eps_buf_size) {
 		case EFX_RXQ_PACKED_STREAM_BUF_SIZE_1M:
@@ -956,13 +960,17 @@ ef10_rx_qcreate(
 			break;
 		default:
 			rc = ENOTSUP;
-			goto fail3;
+			goto fail4;
 		}
 		erp->er_buf_size = type_data->ertd_packed_stream.eps_buf_size;
 		/* Packed stream pseudo header does not have RSS hash value */
 		if (flags & EFX_RXQ_FLAG_RSS_HASH) {
 			rc = ENOTSUP;
-			goto fail4;
+			goto fail5;
+		}
+		if (flags & EFX_RXQ_FLAG_USER_MARK) {
+			rc = ENOTSUP;
+			goto fail6;
 		}
 		break;
 #endif /* EFSYS_OPT_RX_PACKED_STREAM */
@@ -971,7 +979,7 @@ ef10_rx_qcreate(
 		erpl = &ef10_essb_rx_prefix_layout;
 		if (type_data == NULL) {
 			rc = EINVAL;
-			goto fail5;
+			goto fail7;
 		}
 		params.es_bufs_per_desc =
 		    type_data->ertd_es_super_buffer.eessb_bufs_per_desc;
@@ -989,7 +997,7 @@ ef10_rx_qcreate(
 #endif /* EFSYS_OPT_RX_ES_SUPER_BUFFER */
 	default:
 		rc = ENOTSUP;
-		goto fail6;
+		goto fail8;
 	}
 
 #if EFSYS_OPT_RX_PACKED_STREAM
@@ -997,13 +1005,13 @@ ef10_rx_qcreate(
 		/* Check if datapath firmware supports packed stream mode */
 		if (encp->enc_rx_packed_stream_supported == B_FALSE) {
 			rc = ENOTSUP;
-			goto fail7;
+			goto fail9;
 		}
 		/* Check if packed stream allows configurable buffer sizes */
 		if ((params.ps_buf_size != MC_CMD_INIT_RXQ_EXT_IN_PS_BUFF_1M) &&
 		    (encp->enc_rx_var_packed_stream_supported == B_FALSE)) {
 			rc = ENOTSUP;
-			goto fail8;
+			goto fail10;
 		}
 	}
 #else /* EFSYS_OPT_RX_PACKED_STREAM */
@@ -1014,17 +1022,17 @@ ef10_rx_qcreate(
 	if (params.es_bufs_per_desc > 0) {
 		if (encp->enc_rx_es_super_buffer_supported == B_FALSE) {
 			rc = ENOTSUP;
-			goto fail9;
+			goto fail11;
 		}
 		if (!EFX_IS_P2ALIGNED(uint32_t, params.es_max_dma_len,
 			    EFX_RX_ES_SUPER_BUFFER_BUF_ALIGNMENT)) {
 			rc = EINVAL;
-			goto fail10;
+			goto fail12;
 		}
 		if (!EFX_IS_P2ALIGNED(uint32_t, params.es_buf_stride,
 			    EFX_RX_ES_SUPER_BUFFER_BUF_ALIGNMENT)) {
 			rc = EINVAL;
-			goto fail11;
+			goto fail13;
 		}
 	}
 #else /* EFSYS_OPT_RX_ES_SUPER_BUFFER */
@@ -1033,7 +1041,7 @@ ef10_rx_qcreate(
 
 	if (flags & EFX_RXQ_FLAG_INGRESS_MPORT) {
 		rc = ENOTSUP;
-		goto fail12;
+		goto fail14;
 	}
 
 	/* Scatter can only be disabled if the firmware supports doing so */
@@ -1049,7 +1057,7 @@ ef10_rx_qcreate(
 
 	if ((rc = efx_mcdi_init_rxq(enp, ndescs, eep, label, index,
 		    esmp, &params)) != 0)
-		goto fail13;
+		goto fail15;
 
 	erp->er_eep = eep;
 	erp->er_label = label;
@@ -1062,38 +1070,42 @@ ef10_rx_qcreate(
 
 	return (0);
 
+fail15:
+	EFSYS_PROBE(fail15);
+fail14:
+	EFSYS_PROBE(fail14);
+#if EFSYS_OPT_RX_ES_SUPER_BUFFER
 fail13:
 	EFSYS_PROBE(fail13);
 fail12:
 	EFSYS_PROBE(fail12);
-#if EFSYS_OPT_RX_ES_SUPER_BUFFER
 fail11:
 	EFSYS_PROBE(fail11);
+#endif /* EFSYS_OPT_RX_ES_SUPER_BUFFER */
+#if EFSYS_OPT_RX_PACKED_STREAM
 fail10:
 	EFSYS_PROBE(fail10);
 fail9:
 	EFSYS_PROBE(fail9);
-#endif /* EFSYS_OPT_RX_ES_SUPER_BUFFER */
-#if EFSYS_OPT_RX_PACKED_STREAM
+#endif /* EFSYS_OPT_RX_PACKED_STREAM */
 fail8:
 	EFSYS_PROBE(fail8);
+#if EFSYS_OPT_RX_ES_SUPER_BUFFER
 fail7:
 	EFSYS_PROBE(fail7);
-#endif /* EFSYS_OPT_RX_PACKED_STREAM */
+#endif /* EFSYS_OPT_RX_ES_SUPER_BUFFER */
+#if EFSYS_OPT_RX_PACKED_STREAM
 fail6:
 	EFSYS_PROBE(fail6);
-#if EFSYS_OPT_RX_ES_SUPER_BUFFER
 fail5:
 	EFSYS_PROBE(fail5);
-#endif /* EFSYS_OPT_RX_ES_SUPER_BUFFER */
-#if EFSYS_OPT_RX_PACKED_STREAM
 fail4:
 	EFSYS_PROBE(fail4);
 fail3:
 	EFSYS_PROBE(fail3);
+#endif /* EFSYS_OPT_RX_PACKED_STREAM */
 fail2:
 	EFSYS_PROBE(fail2);
-#endif /* EFSYS_OPT_RX_PACKED_STREAM */
 fail1:
 	EFSYS_PROBE1(fail1, efx_rc_t, rc);
 
diff --git a/drivers/common/sfc_efx/base/efx.h b/drivers/common/sfc_efx/base/efx.h
index 72ab4af01c..9bbd7cae55 100644
--- a/drivers/common/sfc_efx/base/efx.h
+++ b/drivers/common/sfc_efx/base/efx.h
@@ -3003,6 +3003,10 @@ typedef enum efx_rxq_type_e {
  * Request ingress mport field in the Rx prefix of a queue.
  */
 #define	EFX_RXQ_FLAG_INGRESS_MPORT	0x8
+/*
+ * Request user mark field in the Rx prefix of a queue.
+ */
+#define	EFX_RXQ_FLAG_USER_MARK		0x10
 
 LIBEFX_API
 extern	__checkReturn	efx_rc_t
diff --git a/drivers/common/sfc_efx/base/rhead_rx.c b/drivers/common/sfc_efx/base/rhead_rx.c
index f1d46f7c70..76b8ce302a 100644
--- a/drivers/common/sfc_efx/base/rhead_rx.c
+++ b/drivers/common/sfc_efx/base/rhead_rx.c
@@ -632,6 +632,9 @@ rhead_rx_qcreate(
 	if (flags & EFX_RXQ_FLAG_INGRESS_MPORT)
 		fields_mask |= 1U << EFX_RX_PREFIX_FIELD_INGRESS_MPORT;
 
+	if (flags & EFX_RXQ_FLAG_USER_MARK)
+		fields_mask |= 1U << EFX_RX_PREFIX_FIELD_USER_MARK;
+
 	/*
 	 * LENGTH is required in EF100 host interface, as receive events
 	 * do not include the packet length.
-- 
2.30.2


^ permalink raw reply	[flat|nested] 104+ messages in thread

* [dpdk-dev] [PATCH 09/20] net/sfc: add abstractions for the management EVQ identity
  2021-05-27 15:24 [dpdk-dev] [PATCH 00/20] net/sfc: support flow API COUNT action Andrew Rybchenko
                   ` (7 preceding siblings ...)
  2021-05-27 15:24 ` [dpdk-dev] [PATCH 08/20] common/sfc_efx/base: add user mark " Andrew Rybchenko
@ 2021-05-27 15:24 ` Andrew Rybchenko
  2021-05-27 15:25 ` [dpdk-dev] [PATCH 10/20] net/sfc: add support for initialising different RxQ types Andrew Rybchenko
                   ` (13 subsequent siblings)
  22 siblings, 0 replies; 104+ messages in thread
From: Andrew Rybchenko @ 2021-05-27 15:24 UTC (permalink / raw)
  To: dev; +Cc: Igor Romanov, Andy Moreton

From: Igor Romanov <igor.romanov@oktetlabs.ru>

Add a function returning the management event queue software index.

Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
---
 drivers/net/sfc/sfc_ev.c | 2 +-
 drivers/net/sfc/sfc_ev.h | 6 ++++++
 2 files changed, 7 insertions(+), 1 deletion(-)

diff --git a/drivers/net/sfc/sfc_ev.c b/drivers/net/sfc/sfc_ev.c
index ed28d51e12..ba4409369a 100644
--- a/drivers/net/sfc/sfc_ev.c
+++ b/drivers/net/sfc/sfc_ev.c
@@ -983,7 +983,7 @@ sfc_ev_attach(struct sfc_adapter *sa)
 		goto fail_kvarg_perf_profile;
 	}
 
-	sa->mgmt_evq_index = 0;
+	sa->mgmt_evq_index = sfc_mgmt_evq_sw_index(sfc_sa2shared(sa));
 	rte_spinlock_init(&sa->mgmt_evq_lock);
 
 	rc = sfc_ev_qinit(sa, SFC_EVQ_TYPE_MGMT, 0, sa->evq_min_entries,
diff --git a/drivers/net/sfc/sfc_ev.h b/drivers/net/sfc/sfc_ev.h
index 75b9dcdebd..3f3c4b5b9a 100644
--- a/drivers/net/sfc/sfc_ev.h
+++ b/drivers/net/sfc/sfc_ev.h
@@ -60,6 +60,12 @@ struct sfc_evq {
 	unsigned int			entries;
 };
 
+static inline sfc_sw_index_t
+sfc_mgmt_evq_sw_index(__rte_unused const struct sfc_adapter_shared *sas)
+{
+	return 0;
+}
+
 /*
  * Functions below define event queue to transmit/receive queue and vice
  * versa mapping.
-- 
2.30.2


^ permalink raw reply	[flat|nested] 104+ messages in thread

* [dpdk-dev] [PATCH 10/20] net/sfc: add support for initialising different RxQ types
  2021-05-27 15:24 [dpdk-dev] [PATCH 00/20] net/sfc: support flow API COUNT action Andrew Rybchenko
                   ` (8 preceding siblings ...)
  2021-05-27 15:24 ` [dpdk-dev] [PATCH 09/20] net/sfc: add abstractions for the management EVQ identity Andrew Rybchenko
@ 2021-05-27 15:25 ` Andrew Rybchenko
  2021-05-27 15:25 ` [dpdk-dev] [PATCH 11/20] net/sfc: add NUMA-aware registry of service logical cores Andrew Rybchenko
                   ` (12 subsequent siblings)
  22 siblings, 0 replies; 104+ messages in thread
From: Andrew Rybchenko @ 2021-05-27 15:25 UTC (permalink / raw)
  To: dev; +Cc: Igor Romanov, Andy Moreton

From: Igor Romanov <igor.romanov@oktetlabs.ru>

Add extra EFX flags to the RxQ info initialization API to support
choosing different RxQ types, and make the API public so that it
can be used for counter queues.
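
For illustration only (a sketch, not part of the patch): with the
extended signature, ethdev Rx queues pass no extra flags, while the
counter RxQ added later in this series requests the user mark field:

#include "efx.h"	/* EFX_RXQ_FLAG_USER_MARK */
#include "sfc.h"	/* struct sfc_adapter, sfc_sw_index_t */
#include "sfc_rx.h"	/* sfc_rx_qinit_info() */

/* Minimal sketch of the two intended users of the extra flags argument */
static int
example_init_rxq_info(struct sfc_adapter *sa, sfc_sw_index_t ethdev_rxq,
		      sfc_sw_index_t counter_rxq)
{
	int rc;

	/* Ethdev Rx queue: no extra EFX type flags */
	rc = sfc_rx_qinit_info(sa, ethdev_rxq, 0);
	if (rc != 0)
		return rc;

	/* Counter RxQ: request the user mark field in the Rx prefix */
	return sfc_rx_qinit_info(sa, counter_rxq, EFX_RXQ_FLAG_USER_MARK);
}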

Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
---
 drivers/net/sfc/sfc_rx.c | 10 ++++++----
 drivers/net/sfc/sfc_rx.h |  2 ++
 2 files changed, 8 insertions(+), 4 deletions(-)

diff --git a/drivers/net/sfc/sfc_rx.c b/drivers/net/sfc/sfc_rx.c
index 597785ae02..c7a7bd66ef 100644
--- a/drivers/net/sfc/sfc_rx.c
+++ b/drivers/net/sfc/sfc_rx.c
@@ -1155,7 +1155,7 @@ sfc_rx_qinit(struct sfc_adapter *sa, sfc_sw_index_t sw_index,
 	else
 		rxq_info->type = EFX_RXQ_TYPE_DEFAULT;
 
-	rxq_info->type_flags =
+	rxq_info->type_flags |=
 		(offloads & DEV_RX_OFFLOAD_SCATTER) ?
 		EFX_RXQ_FLAG_SCATTER : EFX_RXQ_FLAG_NONE;
 
@@ -1594,8 +1594,9 @@ sfc_rx_stop(struct sfc_adapter *sa)
 	efx_rx_fini(sa->nic);
 }
 
-static int
-sfc_rx_qinit_info(struct sfc_adapter *sa, sfc_sw_index_t sw_index)
+int
+sfc_rx_qinit_info(struct sfc_adapter *sa, sfc_sw_index_t sw_index,
+		  unsigned int extra_efx_type_flags)
 {
 	struct sfc_adapter_shared * const sas = sfc_sa2shared(sa);
 	struct sfc_rxq_info *rxq_info = &sas->rxq_info[sw_index];
@@ -1606,6 +1607,7 @@ sfc_rx_qinit_info(struct sfc_adapter *sa, sfc_sw_index_t sw_index)
 	SFC_ASSERT(rte_is_power_of_2(max_entries));
 
 	rxq_info->max_entries = max_entries;
+	rxq_info->type_flags = extra_efx_type_flags;
 
 	return 0;
 }
@@ -1770,7 +1772,7 @@ sfc_rx_configure(struct sfc_adapter *sa)
 
 		sw_index = sfc_rxq_sw_index_by_ethdev_rx_qid(sas,
 							sas->ethdev_rxq_count);
-		rc = sfc_rx_qinit_info(sa, sw_index);
+		rc = sfc_rx_qinit_info(sa, sw_index, 0);
 		if (rc != 0)
 			goto fail_rx_qinit_info;
 
diff --git a/drivers/net/sfc/sfc_rx.h b/drivers/net/sfc/sfc_rx.h
index 96c7dc415d..e5a6fde79b 100644
--- a/drivers/net/sfc/sfc_rx.h
+++ b/drivers/net/sfc/sfc_rx.h
@@ -129,6 +129,8 @@ void sfc_rx_close(struct sfc_adapter *sa);
 int sfc_rx_start(struct sfc_adapter *sa);
 void sfc_rx_stop(struct sfc_adapter *sa);
 
+int sfc_rx_qinit_info(struct sfc_adapter *sa, sfc_sw_index_t sw_index,
+		      unsigned int extra_efx_type_flags);
 int sfc_rx_qinit(struct sfc_adapter *sa, unsigned int rx_queue_id,
 		 uint16_t nb_rx_desc, unsigned int socket_id,
 		 const struct rte_eth_rxconf *rx_conf,
-- 
2.30.2


^ permalink raw reply	[flat|nested] 104+ messages in thread

* [dpdk-dev] [PATCH 11/20] net/sfc: add NUMA-aware registry of service logical cores
  2021-05-27 15:24 [dpdk-dev] [PATCH 00/20] net/sfc: support flow API COUNT action Andrew Rybchenko
                   ` (9 preceding siblings ...)
  2021-05-27 15:25 ` [dpdk-dev] [PATCH 10/20] net/sfc: add support for initialising different RxQ types Andrew Rybchenko
@ 2021-05-27 15:25 ` Andrew Rybchenko
  2021-05-27 15:25 ` [dpdk-dev] [PATCH 12/20] net/sfc: reserve RxQ for counters Andrew Rybchenko
                   ` (11 subsequent siblings)
  22 siblings, 0 replies; 104+ messages in thread
From: Andrew Rybchenko @ 2021-05-27 15:25 UTC (permalink / raw)
  To: dev; +Cc: Andy Moreton, Ivan Malov

The driver requires service cores for housekeeping. Share these
cores across adapters and various purposes to avoid extra CPU
overhead.

Since housekeeping services will talk to the NIC, it should be
possible to choose a logical core on the matching NUMA node.
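
For illustration only (a sketch, not part of the patch): a typical user
asks for a service lcore on its own NUMA node first and falls back to
any node, which is how the MAE counter support later in this series
uses the registry.

#include <rte_lcore.h>
#include <rte_memory.h>		/* SOCKET_ID_ANY */

#include "sfc_service.h"

/*
 * Minimal sketch: prefer a service lcore on the caller's NUMA node,
 * fall back to any node; RTE_MAX_LCORE means no service core is
 * available at all.
 */
static uint32_t
example_pick_service_lcore(int socket_id)
{
	uint32_t cid;

	cid = sfc_get_service_lcore(socket_id);
	if (cid == RTE_MAX_LCORE && socket_id != SOCKET_ID_ANY)
		cid = sfc_get_service_lcore(SOCKET_ID_ANY);

	return cid;
}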

Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
Reviewed-by: Ivan Malov <ivan.malov@oktetlabs.ru>
---
 drivers/net/sfc/meson.build   |  1 +
 drivers/net/sfc/sfc_service.c | 99 +++++++++++++++++++++++++++++++++++
 drivers/net/sfc/sfc_service.h | 20 +++++++
 3 files changed, 120 insertions(+)
 create mode 100644 drivers/net/sfc/sfc_service.c
 create mode 100644 drivers/net/sfc/sfc_service.h

diff --git a/drivers/net/sfc/meson.build b/drivers/net/sfc/meson.build
index ccf5984d87..4ac97e8d43 100644
--- a/drivers/net/sfc/meson.build
+++ b/drivers/net/sfc/meson.build
@@ -62,4 +62,5 @@ sources = files(
         'sfc_ef10_tx.c',
         'sfc_ef100_rx.c',
         'sfc_ef100_tx.c',
+        'sfc_service.c',
 )
diff --git a/drivers/net/sfc/sfc_service.c b/drivers/net/sfc/sfc_service.c
new file mode 100644
index 0000000000..9c89484406
--- /dev/null
+++ b/drivers/net/sfc/sfc_service.c
@@ -0,0 +1,99 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ *
+ * Copyright(c) 2020-2021 Xilinx, Inc.
+ */
+
+#include <stdint.h>
+
+#include <rte_common.h>
+#include <rte_spinlock.h>
+#include <rte_lcore.h>
+#include <rte_service.h>
+#include <rte_memory.h>
+
+#include "sfc_log.h"
+#include "sfc_service.h"
+#include "sfc_debug.h"
+
+static uint32_t sfc_service_lcore[RTE_MAX_NUMA_NODES];
+static rte_spinlock_t sfc_service_lcore_lock = RTE_SPINLOCK_INITIALIZER;
+
+RTE_INIT(sfc_service_lcore_init)
+{
+	size_t i;
+
+	for (i = 0; i < RTE_DIM(sfc_service_lcore); ++i)
+		sfc_service_lcore[i] = RTE_MAX_LCORE;
+}
+
+static uint32_t
+sfc_find_service_lcore(int *socket_id)
+{
+	uint32_t service_core_list[RTE_MAX_LCORE];
+	uint32_t lcore_id;
+	int num;
+	int i;
+
+	SFC_ASSERT(rte_spinlock_is_locked(&sfc_service_lcore_lock));
+
+	num = rte_service_lcore_list(service_core_list,
+				    RTE_DIM(service_core_list));
+	if (num == 0) {
+		SFC_GENERIC_LOG(WARNING, "No service cores available");
+		return RTE_MAX_LCORE;
+	}
+	if (num < 0) {
+		SFC_GENERIC_LOG(ERR, "Failed to get service core list");
+		return RTE_MAX_LCORE;
+	}
+
+	for (i = 0; i < num; ++i) {
+		lcore_id = service_core_list[i];
+
+		if (*socket_id == SOCKET_ID_ANY) {
+			*socket_id = rte_lcore_to_socket_id(lcore_id);
+			break;
+		} else if (rte_lcore_to_socket_id(lcore_id) ==
+			   (unsigned int)*socket_id) {
+			break;
+		}
+	}
+
+	if (i == num) {
+		SFC_GENERIC_LOG(WARNING,
+			"No service cores reserved at socket %d", *socket_id);
+		return RTE_MAX_LCORE;
+	}
+
+	return lcore_id;
+}
+
+uint32_t
+sfc_get_service_lcore(int socket_id)
+{
+	uint32_t lcore_id = RTE_MAX_LCORE;
+
+	rte_spinlock_lock(&sfc_service_lcore_lock);
+
+	if (socket_id != SOCKET_ID_ANY) {
+		lcore_id = sfc_service_lcore[socket_id];
+	} else {
+		size_t i;
+
+		for (i = 0; i < RTE_DIM(sfc_service_lcore); ++i) {
+			if (sfc_service_lcore[i] != RTE_MAX_LCORE) {
+				lcore_id = sfc_service_lcore[i];
+				break;
+			}
+		}
+	}
+
+	if (lcore_id == RTE_MAX_LCORE) {
+		lcore_id = sfc_find_service_lcore(&socket_id);
+		if (lcore_id != RTE_MAX_LCORE)
+			sfc_service_lcore[socket_id] = lcore_id;
+	}
+
+	rte_spinlock_unlock(&sfc_service_lcore_lock);
+	return lcore_id;
+}
diff --git a/drivers/net/sfc/sfc_service.h b/drivers/net/sfc/sfc_service.h
new file mode 100644
index 0000000000..bbcce28479
--- /dev/null
+++ b/drivers/net/sfc/sfc_service.h
@@ -0,0 +1,20 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ *
+ * Copyright(c) 2020-2021 Xilinx, Inc.
+ */
+
+#ifndef _SFC_SERVICE_H
+#define _SFC_SERVICE_H
+
+#include <stdint.h>
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+uint32_t sfc_get_service_lcore(int socket_id);
+
+#ifdef __cplusplus
+}
+#endif
+#endif  /* _SFC_SERVICE_H */
-- 
2.30.2


^ permalink raw reply	[flat|nested] 104+ messages in thread

* [dpdk-dev] [PATCH 12/20] net/sfc: reserve RxQ for counters
  2021-05-27 15:24 [dpdk-dev] [PATCH 00/20] net/sfc: support flow API COUNT action Andrew Rybchenko
                   ` (10 preceding siblings ...)
  2021-05-27 15:25 ` [dpdk-dev] [PATCH 11/20] net/sfc: add NUMA-aware registry of service logical cores Andrew Rybchenko
@ 2021-05-27 15:25 ` Andrew Rybchenko
  2021-05-27 15:25 ` [dpdk-dev] [PATCH 13/20] common/sfc_efx/base: add counter creation MCDI wrappers Andrew Rybchenko
                   ` (10 subsequent siblings)
  22 siblings, 0 replies; 104+ messages in thread
From: Andrew Rybchenko @ 2021-05-27 15:25 UTC (permalink / raw)
  To: dev; +Cc: Igor Romanov, Andy Moreton, Ivan Malov

From: Igor Romanov <igor.romanov@oktetlabs.ru>

MAE delivers counter data as special packets via a dedicated Rx
queue. Reserve an RxQ so that it does not interfere with ethdev Rx
queues. A routine to handle these packets will be added later.

There is no point in reserving the queue if no service cores are
available, since counters cannot be used in that case.
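
For illustration only (a worked example with made-up queue counts, not
part of the patch): with counters enabled the driver needs one extra
EVQ/RxQ pair on top of the management EVQ and the per-queue EVQs, and
the reserved RxQ shifts the SW numbering of ethdev queues by one.

#include <stdio.h>

/*
 * Worked example of the accounting done in sfc_set_drv_limits() and of
 * the SW index layout described in sfc_ev.h (illustrative numbers only).
 */
int
main(void)
{
	unsigned int nb_rxq = 4;	/* ethdev Rx queues */
	unsigned int nb_txq = 4;	/* ethdev Tx queues */
	unsigned int rxq_reserved = 1;	/* counter RxQ, if allocated */

	/* Management EVQ + one EVQ per traffic queue + counter RxQ EVQ */
	unsigned int evq_count = 1 + nb_rxq + nb_txq + rxq_reserved;
	/* Ethdev Rx queues + the reserved counter RxQ */
	unsigned int rxq_count = nb_rxq + rxq_reserved;

	/*
	 * Resulting SW index layout with counters enabled:
	 *   RxQ: 0 - counter RxQ, 1..4 - ethdev Rx queues 0..3
	 *   EvQ: 0 - management, 1 - counter RxQ,
	 *        2..5 - ethdev Rx queues 0..3, 6..9 - ethdev Tx queues 0..3
	 */
	printf("EVQs: %u, RxQs: %u\n", evq_count, rxq_count);	/* 10, 5 */
	return 0;
}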

Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
Reviewed-by: Ivan Malov <ivan.malov@oktetlabs.ru>
---
 drivers/net/sfc/meson.build       |   1 +
 drivers/net/sfc/sfc.c             |  68 ++++++++--
 drivers/net/sfc/sfc.h             |  19 +++
 drivers/net/sfc/sfc_dp.h          |   2 +
 drivers/net/sfc/sfc_ev.h          |  72 ++++++++--
 drivers/net/sfc/sfc_mae.c         |   1 +
 drivers/net/sfc/sfc_mae_counter.c | 217 ++++++++++++++++++++++++++++++
 drivers/net/sfc/sfc_mae_counter.h |  44 ++++++
 drivers/net/sfc/sfc_rx.c          |  43 ++++--
 9 files changed, 438 insertions(+), 29 deletions(-)
 create mode 100644 drivers/net/sfc/sfc_mae_counter.c
 create mode 100644 drivers/net/sfc/sfc_mae_counter.h

diff --git a/drivers/net/sfc/meson.build b/drivers/net/sfc/meson.build
index 4ac97e8d43..f8880f740a 100644
--- a/drivers/net/sfc/meson.build
+++ b/drivers/net/sfc/meson.build
@@ -55,6 +55,7 @@ sources = files(
         'sfc_filter.c',
         'sfc_switch.c',
         'sfc_mae.c',
+        'sfc_mae_counter.c',
         'sfc_flow.c',
         'sfc_dp.c',
         'sfc_ef10_rx.c',
diff --git a/drivers/net/sfc/sfc.c b/drivers/net/sfc/sfc.c
index 3477c7530b..4097cf39de 100644
--- a/drivers/net/sfc/sfc.c
+++ b/drivers/net/sfc/sfc.c
@@ -20,6 +20,7 @@
 #include "sfc_log.h"
 #include "sfc_ev.h"
 #include "sfc_rx.h"
+#include "sfc_mae_counter.h"
 #include "sfc_tx.h"
 #include "sfc_kvargs.h"
 #include "sfc_tweak.h"
@@ -174,6 +175,7 @@ static int
 sfc_estimate_resource_limits(struct sfc_adapter *sa)
 {
 	const efx_nic_cfg_t *encp = efx_nic_cfg_get(sa->nic);
+	struct sfc_adapter_shared *sas = sfc_sa2shared(sa);
 	efx_drv_limits_t limits;
 	int rc;
 	uint32_t evq_allocated;
@@ -235,17 +237,53 @@ sfc_estimate_resource_limits(struct sfc_adapter *sa)
 	rxq_allocated = MIN(rxq_allocated, limits.edl_max_rxq_count);
 	txq_allocated = MIN(txq_allocated, limits.edl_max_txq_count);
 
-	/* Subtract management EVQ not used for traffic */
-	SFC_ASSERT(evq_allocated > 0);
+	/*
+	 * Subtract management EVQ not used for traffic.
+	 * The resource allocation strategy is as follows:
+	 * - one EVQ for management
+	 * - one EVQ for each ethdev RXQ
+	 * - one EVQ for each ethdev TXQ
+	 * - one EVQ and one RXQ for optional MAE counters.
+	 */
+	if (evq_allocated == 0) {
+		sfc_err(sa, "count of allocated EvQ is 0");
+		rc = ENOMEM;
+		goto fail_allocate_evq;
+	}
 	evq_allocated--;
 
-	/* Right now we use separate EVQ for Rx and Tx */
-	sa->rxq_max = MIN(rxq_allocated, evq_allocated / 2);
-	sa->txq_max = MIN(txq_allocated, evq_allocated - sa->rxq_max);
+	/*
+	 * Reserve absolutely required minimum.
+	 * Right now we use separate EVQ for Rx and Tx.
+	 */
+	if (rxq_allocated > 0 && evq_allocated > 0) {
+		sa->rxq_max = 1;
+		rxq_allocated--;
+		evq_allocated--;
+	}
+	if (txq_allocated > 0 && evq_allocated > 0) {
+		sa->txq_max = 1;
+		txq_allocated--;
+		evq_allocated--;
+	}
+
+	if (sfc_mae_counter_rxq_required(sa) &&
+	    rxq_allocated > 0 && evq_allocated > 0) {
+		rxq_allocated--;
+		evq_allocated--;
+		sas->counters_rxq_allocated = true;
+	} else {
+		sas->counters_rxq_allocated = false;
+	}
+
+	/* Add remaining allocated queues */
+	sa->rxq_max += MIN(rxq_allocated, evq_allocated / 2);
+	sa->txq_max += MIN(txq_allocated, evq_allocated - sa->rxq_max);
 
 	/* Keep NIC initialized */
 	return 0;
 
+fail_allocate_evq:
 fail_get_vi_pool:
 	efx_nic_fini(sa->nic);
 fail_nic_init:
@@ -256,14 +294,20 @@ static int
 sfc_set_drv_limits(struct sfc_adapter *sa)
 {
 	const struct rte_eth_dev_data *data = sa->eth_dev->data;
+	uint32_t rxq_reserved = sfc_nb_reserved_rxq(sfc_sa2shared(sa));
 	efx_drv_limits_t lim;
 
 	memset(&lim, 0, sizeof(lim));
 
-	/* Limits are strict since take into account initial estimation */
+	/*
+	 * Limits are strict since they take into account the initial estimation.
+	 * Resource allocation strategy is described in
+	 * sfc_estimate_resource_limits().
+	 */
 	lim.edl_min_evq_count = lim.edl_max_evq_count =
-		1 + data->nb_rx_queues + data->nb_tx_queues;
-	lim.edl_min_rxq_count = lim.edl_max_rxq_count = data->nb_rx_queues;
+		1 + data->nb_rx_queues + data->nb_tx_queues + rxq_reserved;
+	lim.edl_min_rxq_count = lim.edl_max_rxq_count =
+		data->nb_rx_queues + rxq_reserved;
 	lim.edl_min_txq_count = lim.edl_max_txq_count = data->nb_tx_queues;
 
 	return efx_nic_set_drv_limits(sa->nic, &lim);
@@ -834,6 +878,10 @@ sfc_attach(struct sfc_adapter *sa)
 	if (rc != 0)
 		goto fail_filter_attach;
 
+	rc = sfc_mae_counter_rxq_attach(sa);
+	if (rc != 0)
+		goto fail_mae_counter_rxq_attach;
+
 	rc = sfc_mae_attach(sa);
 	if (rc != 0)
 		goto fail_mae_attach;
@@ -862,6 +910,9 @@ sfc_attach(struct sfc_adapter *sa)
 	sfc_mae_detach(sa);
 
 fail_mae_attach:
+	sfc_mae_counter_rxq_detach(sa);
+
+fail_mae_counter_rxq_attach:
 	sfc_filter_detach(sa);
 
 fail_filter_attach:
@@ -903,6 +954,7 @@ sfc_detach(struct sfc_adapter *sa)
 	sfc_flow_fini(sa);
 
 	sfc_mae_detach(sa);
+	sfc_mae_counter_rxq_detach(sa);
 	sfc_filter_detach(sa);
 	sfc_rss_detach(sa);
 	sfc_port_detach(sa);
diff --git a/drivers/net/sfc/sfc.h b/drivers/net/sfc/sfc.h
index 00fc26cf0e..546739bd4a 100644
--- a/drivers/net/sfc/sfc.h
+++ b/drivers/net/sfc/sfc.h
@@ -186,6 +186,8 @@ struct sfc_adapter_shared {
 
 	char				*dp_rx_name;
 	char				*dp_tx_name;
+
+	bool				counters_rxq_allocated;
 };
 
 /* Adapter process private data */
@@ -205,6 +207,15 @@ sfc_adapter_priv_by_eth_dev(struct rte_eth_dev *eth_dev)
 	return sap;
 }
 
+/* RxQ dedicated for counters (counter only RxQ) data */
+struct sfc_counter_rxq {
+	unsigned int			state;
+#define SFC_COUNTER_RXQ_ATTACHED		0x1
+#define SFC_COUNTER_RXQ_INITIALIZED		0x2
+	sfc_sw_index_t			sw_index;
+	struct rte_mempool		*mp;
+};
+
 /* Adapter private data */
 struct sfc_adapter {
 	/*
@@ -283,6 +294,8 @@ struct sfc_adapter {
 	bool				mgmt_evq_running;
 	struct sfc_evq			*mgmt_evq;
 
+	struct sfc_counter_rxq		counter_rxq;
+
 	struct sfc_rxq			*rxq_ctrl;
 	struct sfc_txq			*txq_ctrl;
 
@@ -357,6 +370,12 @@ sfc_adapter_lock_fini(__rte_unused struct sfc_adapter *sa)
 	/* Just for symmetry of the API */
 }
 
+static inline unsigned int
+sfc_nb_counter_rxq(const struct sfc_adapter_shared *sas)
+{
+	return sas->counters_rxq_allocated ? 1 : 0;
+}
+
 /** Get the number of milliseconds since boot from the default timer */
 static inline uint64_t
 sfc_get_system_msecs(void)
diff --git a/drivers/net/sfc/sfc_dp.h b/drivers/net/sfc/sfc_dp.h
index 76065483d4..61c1a3fbac 100644
--- a/drivers/net/sfc/sfc_dp.h
+++ b/drivers/net/sfc/sfc_dp.h
@@ -97,6 +97,8 @@ struct sfc_dp {
 TAILQ_HEAD(sfc_dp_list, sfc_dp);
 
 typedef unsigned int sfc_sw_index_t;
+#define SFC_SW_INDEX_INVALID	((sfc_sw_index_t)(UINT_MAX))
+
 typedef int32_t	sfc_ethdev_qid_t;
 #define SFC_ETHDEV_QID_INVALID	((sfc_ethdev_qid_t)(-1))
 
diff --git a/drivers/net/sfc/sfc_ev.h b/drivers/net/sfc/sfc_ev.h
index 3f3c4b5b9a..b2a0380205 100644
--- a/drivers/net/sfc/sfc_ev.h
+++ b/drivers/net/sfc/sfc_ev.h
@@ -66,36 +66,87 @@ sfc_mgmt_evq_sw_index(__rte_unused const struct sfc_adapter_shared *sas)
 	return 0;
 }
 
+/* Return the number of Rx queues reserved for driver's internal use */
+static inline unsigned int
+sfc_nb_reserved_rxq(const struct sfc_adapter_shared *sas)
+{
+	return sfc_nb_counter_rxq(sas);
+}
+
+static inline unsigned int
+sfc_nb_reserved_evq(const struct sfc_adapter_shared *sas)
+{
+	/* An EvQ is required for each reserved RxQ */
+	return 1 + sfc_nb_reserved_rxq(sas);
+}
+
+/*
+ * The mapping functions that return SW index of a specific reserved
+ * queue rely on the relative order of reserved queues. Some reserved
+ * queues are optional, and if they are disabled or not supported, then
+ * the function for that specific reserved queue will return previous
+ * valid index of a reserved queue in the dependency chain or
+ * SFC_SW_INDEX_INVALID if it is the first reserved queue in the chain.
+ * If at least one of the reserved queues in the chain is enabled, then
+ * the corresponding function will give valid SW index, even if previous
+ * functions in the chain returned SFC_SW_INDEX_INVALID, since this value
+ * is one less than the first valid SW index.
+ *
+ * The dependency mechanism is utilized to avoid rigid defines for SW indices
+ * for reserved queues and to allow these indices to shrink and make space
+ * for ethdev queue indices when some of the reserved queues are disabled.
+ */
+
+static inline sfc_sw_index_t
+sfc_counters_rxq_sw_index(const struct sfc_adapter_shared *sas)
+{
+	return sas->counters_rxq_allocated ? 0 : SFC_SW_INDEX_INVALID;
+}
+
 /*
  * Functions below define event queue to transmit/receive queue and vice
  * versa mapping.
+ * SFC_ETHDEV_QID_INVALID is returned when sw_index is converted to
+ * ethdev_qid, but sw_index represents a reserved queue for driver's
+ * internal use.
  * Own event queue is allocated for management, each Rx and each Tx queue.
  * Zero event queue is used for management events.
- * Rx event queues from 1 to RxQ number follow management event queue.
+ * When counters are supported, one Rx event queue is reserved.
+ * Rx event queues follow reserved event queues.
  * Tx event queues follow Rx event queues.
  */
 
 static inline sfc_ethdev_qid_t
-sfc_ethdev_rx_qid_by_rxq_sw_index(__rte_unused struct sfc_adapter_shared *sas,
+sfc_ethdev_rx_qid_by_rxq_sw_index(struct sfc_adapter_shared *sas,
 				  sfc_sw_index_t rxq_sw_index)
 {
-	/* Only ethdev queues are present for now */
-	return rxq_sw_index;
+	if (rxq_sw_index < sfc_nb_reserved_rxq(sas))
+		return SFC_ETHDEV_QID_INVALID;
+
+	return rxq_sw_index - sfc_nb_reserved_rxq(sas);
 }
 
 static inline sfc_sw_index_t
-sfc_rxq_sw_index_by_ethdev_rx_qid(__rte_unused struct sfc_adapter_shared *sas,
+sfc_rxq_sw_index_by_ethdev_rx_qid(struct sfc_adapter_shared *sas,
 				  sfc_ethdev_qid_t ethdev_qid)
 {
-	/* Only ethdev queues are present for now */
-	return ethdev_qid;
+	return sfc_nb_reserved_rxq(sas) + ethdev_qid;
 }
 
 static inline sfc_sw_index_t
-sfc_evq_sw_index_by_rxq_sw_index(__rte_unused struct sfc_adapter *sa,
+sfc_evq_sw_index_by_rxq_sw_index(struct sfc_adapter *sa,
 				 sfc_sw_index_t rxq_sw_index)
 {
-	return 1 + rxq_sw_index;
+	struct sfc_adapter_shared *sas = sfc_sa2shared(sa);
+	sfc_ethdev_qid_t ethdev_qid;
+
+	ethdev_qid = sfc_ethdev_rx_qid_by_rxq_sw_index(sas, rxq_sw_index);
+	if (ethdev_qid == SFC_ETHDEV_QID_INVALID) {
+		/* One EvQ is reserved for management */
+		return 1 + rxq_sw_index;
+	}
+
+	return sfc_nb_reserved_evq(sas) + ethdev_qid;
 }
 
 static inline sfc_ethdev_qid_t
@@ -118,7 +169,8 @@ static inline sfc_sw_index_t
 sfc_evq_sw_index_by_txq_sw_index(struct sfc_adapter *sa,
 				 sfc_sw_index_t txq_sw_index)
 {
-	return 1 + sa->eth_dev->data->nb_rx_queues + txq_sw_index;
+	return sfc_nb_reserved_evq(sfc_sa2shared(sa)) +
+		sa->eth_dev->data->nb_rx_queues + txq_sw_index;
 }
 
 int sfc_ev_attach(struct sfc_adapter *sa);
diff --git a/drivers/net/sfc/sfc_mae.c b/drivers/net/sfc/sfc_mae.c
index d8c662503f..e603ffbdb4 100644
--- a/drivers/net/sfc/sfc_mae.c
+++ b/drivers/net/sfc/sfc_mae.c
@@ -16,6 +16,7 @@
 #include "efx.h"
 
 #include "sfc.h"
+#include "sfc_mae_counter.h"
 #include "sfc_log.h"
 #include "sfc_switch.h"
 
diff --git a/drivers/net/sfc/sfc_mae_counter.c b/drivers/net/sfc/sfc_mae_counter.c
new file mode 100644
index 0000000000..c7646cf7b1
--- /dev/null
+++ b/drivers/net/sfc/sfc_mae_counter.c
@@ -0,0 +1,217 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ *
+ * Copyright(c) 2020-2021 Xilinx, Inc.
+ */
+
+#include <rte_common.h>
+
+#include "efx.h"
+
+#include "sfc_ev.h"
+#include "sfc.h"
+#include "sfc_rx.h"
+#include "sfc_mae_counter.h"
+#include "sfc_service.h"
+
+static uint32_t
+sfc_mae_counter_get_service_lcore(struct sfc_adapter *sa)
+{
+	uint32_t cid;
+
+	cid = sfc_get_service_lcore(sa->socket_id);
+	if (cid != RTE_MAX_LCORE)
+		return cid;
+
+	if (sa->socket_id != SOCKET_ID_ANY)
+		cid = sfc_get_service_lcore(SOCKET_ID_ANY);
+
+	if (cid == RTE_MAX_LCORE) {
+		sfc_warn(sa, "failed to get service lcore for counter service");
+	} else if (sa->socket_id != SOCKET_ID_ANY) {
+		sfc_warn(sa,
+			"failed to get service lcore for counter service at socket %d, but got at socket %u",
+			sa->socket_id, rte_lcore_to_socket_id(cid));
+	}
+	return cid;
+}
+
+bool
+sfc_mae_counter_rxq_required(struct sfc_adapter *sa)
+{
+	const efx_nic_cfg_t *encp = efx_nic_cfg_get(sa->nic);
+
+	if (encp->enc_mae_supported == B_FALSE)
+		return false;
+
+	if (sfc_mae_counter_get_service_lcore(sa) == RTE_MAX_LCORE)
+		return false;
+
+	return true;
+}
+
+int
+sfc_mae_counter_rxq_attach(struct sfc_adapter *sa)
+{
+	struct sfc_adapter_shared * const sas = sfc_sa2shared(sa);
+	char name[RTE_MEMPOOL_NAMESIZE];
+	struct rte_mempool *mp;
+	unsigned int n_elements;
+	unsigned int cache_size;
+	/* The mempool is internal and private area is not required */
+	const uint16_t priv_size = 0;
+	const uint16_t data_room_size = RTE_PKTMBUF_HEADROOM +
+		SFC_MAE_COUNTER_STREAM_PACKET_SIZE;
+	int rc;
+
+	sfc_log_init(sa, "entry");
+
+	if (!sas->counters_rxq_allocated) {
+		sfc_log_init(sa, "counter queue is not supported - skip");
+		return 0;
+	}
+
+	/*
+	 * At least one element in the ring is always unused to distinguish
+	 * between empty and full ring cases.
+	 */
+	n_elements = SFC_COUNTER_RXQ_RX_DESC_COUNT - 1;
+
+	/*
+	 * The cache must have sufficient space to put received buckets
+	 * before they're reused on refill.
+	 */
+	cache_size = rte_align32pow2(SFC_COUNTER_RXQ_REFILL_LEVEL +
+				     SFC_MAE_COUNTER_RX_BURST - 1);
+
+	if (snprintf(name, sizeof(name), "counter_rxq-pool-%u", sas->port_id) >=
+	    (int)sizeof(name)) {
+		sfc_err(sa, "failed: counter RxQ mempool name is too long");
+		rc = ENAMETOOLONG;
+		goto fail_long_name;
+	}
+
+	/*
+	 * It could be single-producer single-consumer ring mempool which
+	 * requires minimal barriers. However, cache size and refill/burst
+	 * policy are aligned, therefore it does not matter which
+	 * mempool backend is chosen since backend is unused.
+	 */
+	mp = rte_pktmbuf_pool_create(name, n_elements, cache_size,
+				     priv_size, data_room_size, sa->socket_id);
+	if (mp == NULL) {
+		sfc_err(sa, "failed to create counter RxQ mempool");
+		rc = rte_errno;
+		goto fail_mp_create;
+	}
+
+	sa->counter_rxq.sw_index = sfc_counters_rxq_sw_index(sas);
+	sa->counter_rxq.mp = mp;
+	sa->counter_rxq.state |= SFC_COUNTER_RXQ_ATTACHED;
+
+	sfc_log_init(sa, "done");
+
+	return 0;
+
+fail_mp_create:
+fail_long_name:
+	sfc_log_init(sa, "failed: %s", rte_strerror(rc));
+
+	return rc;
+}
+
+void
+sfc_mae_counter_rxq_detach(struct sfc_adapter *sa)
+{
+	struct sfc_adapter_shared * const sas = sfc_sa2shared(sa);
+
+	sfc_log_init(sa, "entry");
+
+	if (!sas->counters_rxq_allocated) {
+		sfc_log_init(sa, "counter queue is not supported - skip");
+		return;
+	}
+
+	if ((sa->counter_rxq.state & SFC_COUNTER_RXQ_ATTACHED) == 0) {
+		sfc_log_init(sa, "counter queue is not attached - skip");
+		return;
+	}
+
+	rte_mempool_free(sa->counter_rxq.mp);
+	sa->counter_rxq.mp = NULL;
+	sa->counter_rxq.state &= ~SFC_COUNTER_RXQ_ATTACHED;
+
+	sfc_log_init(sa, "done");
+}
+
+int
+sfc_mae_counter_rxq_init(struct sfc_adapter *sa)
+{
+	struct sfc_adapter_shared * const sas = sfc_sa2shared(sa);
+	const struct rte_eth_rxconf rxconf = {
+		.rx_free_thresh = SFC_COUNTER_RXQ_REFILL_LEVEL,
+		.rx_drop_en = 1,
+	};
+	uint16_t nb_rx_desc = SFC_COUNTER_RXQ_RX_DESC_COUNT;
+	int rc;
+
+	sfc_log_init(sa, "entry");
+
+	if (!sas->counters_rxq_allocated) {
+		sfc_log_init(sa, "counter queue is not supported - skip");
+		return 0;
+	}
+
+	if ((sa->counter_rxq.state & SFC_COUNTER_RXQ_ATTACHED) == 0) {
+		sfc_log_init(sa, "counter queue is not attached - skip");
+		return 0;
+	}
+
+	nb_rx_desc = RTE_MIN(nb_rx_desc, sa->rxq_max_entries);
+	nb_rx_desc = RTE_MAX(nb_rx_desc, sa->rxq_min_entries);
+
+	rc = sfc_rx_qinit_info(sa, sa->counter_rxq.sw_index,
+			       EFX_RXQ_FLAG_USER_MARK);
+	if (rc != 0)
+		goto fail_counter_rxq_init_info;
+
+	rc = sfc_rx_qinit(sa, sa->counter_rxq.sw_index, nb_rx_desc,
+			  sa->socket_id, &rxconf, sa->counter_rxq.mp);
+	if (rc != 0) {
+		sfc_err(sa, "failed to init counter RxQ");
+		goto fail_counter_rxq_init;
+	}
+
+	sa->counter_rxq.state |= SFC_COUNTER_RXQ_INITIALIZED;
+
+	sfc_log_init(sa, "done");
+
+	return 0;
+
+fail_counter_rxq_init:
+fail_counter_rxq_init_info:
+	sfc_log_init(sa, "failed: %s", rte_strerror(rc));
+
+	return rc;
+}
+
+void
+sfc_mae_counter_rxq_fini(struct sfc_adapter *sa)
+{
+	struct sfc_adapter_shared * const sas = sfc_sa2shared(sa);
+
+	sfc_log_init(sa, "entry");
+
+	if (!sas->counters_rxq_allocated) {
+		sfc_log_init(sa, "counter queue is not supported - skip");
+		return;
+	}
+
+	if ((sa->counter_rxq.state & SFC_COUNTER_RXQ_INITIALIZED) == 0) {
+		sfc_log_init(sa, "counter queue is not initialized - skip");
+		return;
+	}
+
+	sfc_rx_qfini(sa, sa->counter_rxq.sw_index);
+
+	sfc_log_init(sa, "done");
+}
diff --git a/drivers/net/sfc/sfc_mae_counter.h b/drivers/net/sfc/sfc_mae_counter.h
new file mode 100644
index 0000000000..f16d64a999
--- /dev/null
+++ b/drivers/net/sfc/sfc_mae_counter.h
@@ -0,0 +1,44 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ *
+ * Copyright(c) 2020-2021 Xilinx, Inc.
+ */
+
+#ifndef _SFC_MAE_COUNTER_H
+#define _SFC_MAE_COUNTER_H
+
+#include "sfc.h"
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/* Default values for a user of counter RxQ */
+#define SFC_MAE_COUNTER_RX_BURST 32
+#define SFC_COUNTER_RXQ_RX_DESC_COUNT 256
+
+/*
+ * The refill level is chosen based on requirement to keep number
+ * of give credits operations low.
+ */
+#define SFC_COUNTER_RXQ_REFILL_LEVEL (SFC_COUNTER_RXQ_RX_DESC_COUNT / 4)
+
+/*
+ * SF-122415-TC states that the packetiser that generates packets for
+ * counter stream must support 9k frames. Set it to the maximum supported
+ * size since in case of huge flow of counters, having fewer packets in counter
+ * updates is better.
+ */
+#define SFC_MAE_COUNTER_STREAM_PACKET_SIZE 9216
+
+bool sfc_mae_counter_rxq_required(struct sfc_adapter *sa);
+
+int sfc_mae_counter_rxq_attach(struct sfc_adapter *sa);
+void sfc_mae_counter_rxq_detach(struct sfc_adapter *sa);
+
+int sfc_mae_counter_rxq_init(struct sfc_adapter *sa);
+void sfc_mae_counter_rxq_fini(struct sfc_adapter *sa);
+
+#ifdef __cplusplus
+}
+#endif
+#endif  /* _SFC_MAE_COUNTER_H */
diff --git a/drivers/net/sfc/sfc_rx.c b/drivers/net/sfc/sfc_rx.c
index c7a7bd66ef..0532f77082 100644
--- a/drivers/net/sfc/sfc_rx.c
+++ b/drivers/net/sfc/sfc_rx.c
@@ -16,6 +16,7 @@
 #include "sfc_log.h"
 #include "sfc_ev.h"
 #include "sfc_rx.h"
+#include "sfc_mae_counter.h"
 #include "sfc_kvargs.h"
 #include "sfc_tweak.h"
 
@@ -1705,6 +1706,9 @@ sfc_rx_configure(struct sfc_adapter *sa)
 	struct sfc_rss *rss = &sas->rss;
 	struct rte_eth_conf *dev_conf = &sa->eth_dev->data->dev_conf;
 	const unsigned int nb_rx_queues = sa->eth_dev->data->nb_rx_queues;
+	const unsigned int nb_rsrv_rx_queues = sfc_nb_reserved_rxq(sas);
+	const unsigned int nb_rxq_total = nb_rx_queues + nb_rsrv_rx_queues;
+	bool reconfigure;
 	int rc;
 
 	sfc_log_init(sa, "nb_rx_queues=%u (old %u)",
@@ -1714,12 +1718,15 @@ sfc_rx_configure(struct sfc_adapter *sa)
 	if (rc != 0)
 		goto fail_check_mode;
 
-	if (nb_rx_queues == sas->rxq_count)
+	if (nb_rxq_total == sas->rxq_count) {
+		reconfigure = true;
 		goto configure_rss;
+	}
 
 	if (sas->rxq_info == NULL) {
+		reconfigure = false;
 		rc = ENOMEM;
-		sas->rxq_info = rte_calloc_socket("sfc-rxqs", nb_rx_queues,
+		sas->rxq_info = rte_calloc_socket("sfc-rxqs", nb_rxq_total,
 						  sizeof(sas->rxq_info[0]), 0,
 						  sa->socket_id);
 		if (sas->rxq_info == NULL)
@@ -1730,39 +1737,42 @@ sfc_rx_configure(struct sfc_adapter *sa)
 		 * since it should not be shared.
 		 */
 		rc = ENOMEM;
-		sa->rxq_ctrl = calloc(nb_rx_queues, sizeof(sa->rxq_ctrl[0]));
+		sa->rxq_ctrl = calloc(nb_rxq_total, sizeof(sa->rxq_ctrl[0]));
 		if (sa->rxq_ctrl == NULL)
 			goto fail_rxqs_ctrl_alloc;
 	} else {
 		struct sfc_rxq_info *new_rxq_info;
 		struct sfc_rxq *new_rxq_ctrl;
 
+		reconfigure = true;
+
+		/* Do not uninitialize reserved queues */
 		if (nb_rx_queues < sas->ethdev_rxq_count)
 			sfc_rx_fini_queues(sa, nb_rx_queues);
 
 		rc = ENOMEM;
 		new_rxq_info =
 			rte_realloc(sas->rxq_info,
-				    nb_rx_queues * sizeof(sas->rxq_info[0]), 0);
-		if (new_rxq_info == NULL && nb_rx_queues > 0)
+				    nb_rxq_total * sizeof(sas->rxq_info[0]), 0);
+		if (new_rxq_info == NULL && nb_rxq_total > 0)
 			goto fail_rxqs_realloc;
 
 		rc = ENOMEM;
 		new_rxq_ctrl = realloc(sa->rxq_ctrl,
-				       nb_rx_queues * sizeof(sa->rxq_ctrl[0]));
-		if (new_rxq_ctrl == NULL && nb_rx_queues > 0)
+				       nb_rxq_total * sizeof(sa->rxq_ctrl[0]));
+		if (new_rxq_ctrl == NULL && nb_rxq_total > 0)
 			goto fail_rxqs_ctrl_realloc;
 
 		sas->rxq_info = new_rxq_info;
 		sa->rxq_ctrl = new_rxq_ctrl;
-		if (nb_rx_queues > sas->rxq_count) {
+		if (nb_rxq_total > sas->rxq_count) {
 			unsigned int rxq_count = sas->rxq_count;
 
 			memset(&sas->rxq_info[rxq_count], 0,
-			       (nb_rx_queues - rxq_count) *
+			       (nb_rxq_total - rxq_count) *
 			       sizeof(sas->rxq_info[0]));
 			memset(&sa->rxq_ctrl[rxq_count], 0,
-			       (nb_rx_queues - rxq_count) *
+			       (nb_rxq_total - rxq_count) *
 			       sizeof(sa->rxq_ctrl[0]));
 		}
 	}
@@ -1779,7 +1789,13 @@ sfc_rx_configure(struct sfc_adapter *sa)
 		sas->ethdev_rxq_count++;
 	}
 
-	sas->rxq_count = sas->ethdev_rxq_count;
+	sas->rxq_count = sas->ethdev_rxq_count + nb_rsrv_rx_queues;
+
+	if (!reconfigure) {
+		rc = sfc_mae_counter_rxq_init(sa);
+		if (rc != 0)
+			goto fail_count_rxq_init;
+	}
 
 configure_rss:
 	rss->channels = (dev_conf->rxmode.mq_mode == ETH_MQ_RX_RSS) ?
@@ -1801,6 +1817,10 @@ sfc_rx_configure(struct sfc_adapter *sa)
 	return 0;
 
 fail_rx_process_adv_conf_rss:
+	if (!reconfigure)
+		sfc_mae_counter_rxq_fini(sa);
+
+fail_count_rxq_init:
 fail_rx_qinit_info:
 fail_rxqs_ctrl_realloc:
 fail_rxqs_realloc:
@@ -1824,6 +1844,7 @@ sfc_rx_close(struct sfc_adapter *sa)
 	struct sfc_rss *rss = &sfc_sa2shared(sa)->rss;
 
 	sfc_rx_fini_queues(sa, 0);
+	sfc_mae_counter_rxq_fini(sa);
 
 	rss->channels = 0;
 
-- 
2.30.2



* [dpdk-dev] [PATCH 13/20] common/sfc_efx/base: add counter creation MCDI wrappers
  2021-05-27 15:24 [dpdk-dev] [PATCH 00/20] net/sfc: support flow API COUNT action Andrew Rybchenko
                   ` (11 preceding siblings ...)
  2021-05-27 15:25 ` [dpdk-dev] [PATCH 12/20] net/sfc: reserve RxQ for counters Andrew Rybchenko
@ 2021-05-27 15:25 ` Andrew Rybchenko
  2021-05-27 15:25 ` [dpdk-dev] [PATCH 14/20] common/sfc_efx/base: add counter stream " Andrew Rybchenko
                   ` (9 subsequent siblings)
  22 siblings, 0 replies; 104+ messages in thread
From: Andrew Rybchenko @ 2021-05-27 15:25 UTC (permalink / raw)
  To: dev; +Cc: Igor Romanov, Andy Moreton, Ivan Malov

From: Igor Romanov <igor.romanov@oktetlabs.ru>

The user will be able to create and free MAE counters. Support for
associating counters with an action set will be added in upcoming
patches.
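
For illustration, a driver-side caller could allocate and release a single
counter roughly as follows (a minimal sketch with error handling trimmed;
enp is assumed to be an initialised efx_nic_t handle):

	efx_counter_t counter;
	uint32_t n_allocated;
	uint32_t n_freed;
	uint32_t gen_count;
	efx_rc_t rc;

	/* Request one counter; the generation count helps match readings. */
	rc = efx_mae_counters_alloc(enp, 1, &n_allocated, &counter,
				    &gen_count);
	if (rc != 0 || n_allocated != 1)
		return ((rc != 0) ? rc : EFAULT);

	/* ... associate the counter with an action set (later patches) ... */

	/* Release the counter; the final generation count may be read back. */
	rc = efx_mae_counters_free(enp, 1, &n_freed, &counter, &gen_count);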

Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
Reviewed-by: Ivan Malov <ivan.malov@oktetlabs.ru>
---
 drivers/common/sfc_efx/base/efx.h      |  37 ++++++
 drivers/common/sfc_efx/base/efx_impl.h |   1 +
 drivers/common/sfc_efx/base/efx_mae.c  | 158 +++++++++++++++++++++++++
 drivers/common/sfc_efx/base/efx_mcdi.h |   7 ++
 drivers/common/sfc_efx/version.map     |   2 +
 5 files changed, 205 insertions(+)

diff --git a/drivers/common/sfc_efx/base/efx.h b/drivers/common/sfc_efx/base/efx.h
index 9bbd7cae55..d0f8bc10b3 100644
--- a/drivers/common/sfc_efx/base/efx.h
+++ b/drivers/common/sfc_efx/base/efx.h
@@ -4406,6 +4406,10 @@ efx_mae_action_set_fill_in_eh_id(
 	__in				efx_mae_actions_t *spec,
 	__in				const efx_mae_eh_id_t *eh_idp);
 
+typedef struct efx_counter_s {
+	uint32_t id;
+} efx_counter_t;
+
 /* Action set ID */
 typedef struct efx_mae_aset_id_s {
 	uint32_t id;
@@ -4418,6 +4422,39 @@ efx_mae_action_set_alloc(
 	__in				const efx_mae_actions_t *spec,
 	__out				efx_mae_aset_id_t *aset_idp);
 
+/*
+ * The generation count serves two purposes:
+ *
+ * 1) Distinguish between counter packets that belong to a freed counter
+ *    and packets that belong to a reallocated counter (with the same ID);
+ * 2) Make sure that all packets are received for a counter that was freed.
+ *
+ * API users should provide the generation count out parameter in the
+ * allocation function if counters can be reallocated and consistent
+ * counter values are required.
+ *
+ * API users that need consistent final counter values after counter
+ * deallocation or counter stream stop should provide the parameter in the
+ * functions that free the counters and stop the counter stream.
+ */
+LIBEFX_API
+extern	__checkReturn			efx_rc_t
+efx_mae_counters_alloc(
+	__in				efx_nic_t *enp,
+	__in				uint32_t n_counters,
+	__out				uint32_t *n_allocatedp,
+	__out_ecount(n_counters)	efx_counter_t *countersp,
+	__out_opt			uint32_t *gen_countp);
+
+LIBEFX_API
+extern	__checkReturn			efx_rc_t
+efx_mae_counters_free(
+	__in				efx_nic_t *enp,
+	__in				uint32_t n_counters,
+	__out				uint32_t *n_freedp,
+	__in_ecount(n_counters)		const efx_counter_t *countersp,
+	__out_opt			uint32_t *gen_countp);
+
 LIBEFX_API
 extern	__checkReturn			efx_rc_t
 efx_mae_action_set_free(
diff --git a/drivers/common/sfc_efx/base/efx_impl.h b/drivers/common/sfc_efx/base/efx_impl.h
index f891e2616e..9dbf6d450c 100644
--- a/drivers/common/sfc_efx/base/efx_impl.h
+++ b/drivers/common/sfc_efx/base/efx_impl.h
@@ -821,6 +821,7 @@ typedef struct efx_mae_s {
 	/** Outer rule match field capabilities. */
 	efx_mae_field_cap_t		*em_outer_rule_field_caps;
 	size_t				em_outer_rule_field_caps_size;
+	uint32_t			em_max_ncounters;
 } efx_mae_t;
 
 #endif /* EFSYS_OPT_MAE */
diff --git a/drivers/common/sfc_efx/base/efx_mae.c b/drivers/common/sfc_efx/base/efx_mae.c
index 5697488040..955f1d4353 100644
--- a/drivers/common/sfc_efx/base/efx_mae.c
+++ b/drivers/common/sfc_efx/base/efx_mae.c
@@ -67,6 +67,9 @@ efx_mae_get_capabilities(
 	maep->em_max_nfields =
 	    MCDI_OUT_DWORD(req, MAE_GET_CAPS_OUT_MATCH_FIELD_COUNT);
 
+	maep->em_max_ncounters =
+	    MCDI_OUT_DWORD(req, MAE_GET_CAPS_OUT_COUNTERS);
+
 	return (0);
 
 fail2:
@@ -2600,6 +2603,161 @@ efx_mae_action_rule_remove(
 
 	return (0);
 
+fail4:
+	EFSYS_PROBE(fail4);
+fail3:
+	EFSYS_PROBE(fail3);
+fail2:
+	EFSYS_PROBE(fail2);
+fail1:
+	EFSYS_PROBE1(fail1, efx_rc_t, rc);
+	return (rc);
+}
+
+	__checkReturn			efx_rc_t
+efx_mae_counters_alloc(
+	__in				efx_nic_t *enp,
+	__in				uint32_t n_counters,
+	__out				uint32_t *n_allocatedp,
+	__out_ecount(n_counters)	efx_counter_t *countersp,
+	__out_opt			uint32_t *gen_countp)
+{
+	EFX_MCDI_DECLARE_BUF(payload,
+	    MC_CMD_MAE_COUNTER_ALLOC_IN_LEN,
+	    MC_CMD_MAE_COUNTER_ALLOC_OUT_LENMAX_MCDI2);
+	efx_mae_t *maep = enp->en_maep;
+	uint32_t n_allocated;
+	efx_mcdi_req_t req;
+	unsigned int i;
+	efx_rc_t rc;
+
+	if (n_counters > maep->em_max_ncounters ||
+	    n_counters < MC_CMD_MAE_COUNTER_ALLOC_OUT_COUNTER_ID_MINNUM ||
+	    n_counters > MC_CMD_MAE_COUNTER_ALLOC_OUT_COUNTER_ID_MAXNUM_MCDI2) {
+		rc = EINVAL;
+		goto fail1;
+	}
+
+	req.emr_cmd = MC_CMD_MAE_COUNTER_ALLOC;
+	req.emr_in_buf = payload;
+	req.emr_in_length = MC_CMD_MAE_COUNTER_ALLOC_IN_LEN;
+	req.emr_out_buf = payload;
+	req.emr_out_length = MC_CMD_MAE_COUNTER_ALLOC_OUT_LEN(n_counters);
+
+	MCDI_IN_SET_DWORD(req, MAE_COUNTER_ALLOC_IN_REQUESTED_COUNT,
+	    n_counters);
+
+	efx_mcdi_execute(enp, &req);
+
+	if (req.emr_rc != 0) {
+		rc = req.emr_rc;
+		goto fail2;
+	}
+
+	if (req.emr_out_length_used < MC_CMD_MAE_COUNTER_ALLOC_OUT_LENMIN) {
+		rc = EMSGSIZE;
+		goto fail3;
+	}
+
+	n_allocated = MCDI_OUT_DWORD(req,
+	    MAE_COUNTER_ALLOC_OUT_COUNTER_ID_COUNT);
+	if (n_allocated < MC_CMD_MAE_COUNTER_ALLOC_OUT_COUNTER_ID_MINNUM) {
+		rc = EFAULT;
+		goto fail4;
+	}
+
+	for (i = 0; i < n_allocated; i++) {
+		countersp[i].id = MCDI_OUT_INDEXED_DWORD(req,
+		    MAE_COUNTER_ALLOC_OUT_COUNTER_ID, i);
+	}
+
+	if (gen_countp != NULL) {
+		*gen_countp = MCDI_OUT_DWORD(req,
+				    MAE_COUNTER_ALLOC_OUT_GENERATION_COUNT);
+	}
+
+	*n_allocatedp = n_allocated;
+
+	return (0);
+
+fail4:
+	EFSYS_PROBE(fail4);
+fail3:
+	EFSYS_PROBE(fail3);
+fail2:
+	EFSYS_PROBE(fail2);
+fail1:
+	EFSYS_PROBE1(fail1, efx_rc_t, rc);
+
+	return (rc);
+}
+
+	__checkReturn			efx_rc_t
+efx_mae_counters_free(
+	__in				efx_nic_t *enp,
+	__in				uint32_t n_counters,
+	__out				uint32_t *n_freedp,
+	__in_ecount(n_counters)		const efx_counter_t *countersp,
+	__out_opt			uint32_t *gen_countp)
+{
+	EFX_MCDI_DECLARE_BUF(payload,
+	    MC_CMD_MAE_COUNTER_FREE_IN_LENMAX_MCDI2,
+	    MC_CMD_MAE_COUNTER_FREE_OUT_LENMAX_MCDI2);
+	efx_mae_t *maep = enp->en_maep;
+	efx_mcdi_req_t req;
+	uint32_t n_freed;
+	unsigned int i;
+	efx_rc_t rc;
+
+	if (n_counters > maep->em_max_ncounters ||
+	    n_counters < MC_CMD_MAE_COUNTER_FREE_IN_FREE_COUNTER_ID_MINNUM ||
+	    n_counters >
+	    MC_CMD_MAE_COUNTER_FREE_IN_FREE_COUNTER_ID_MAXNUM_MCDI2) {
+		rc = EINVAL;
+		goto fail1;
+	}
+
+	req.emr_cmd = MC_CMD_MAE_COUNTER_FREE;
+	req.emr_in_buf = payload;
+	req.emr_in_length = MC_CMD_MAE_COUNTER_FREE_IN_LEN(n_counters);
+	req.emr_out_buf = payload;
+	req.emr_out_length = MC_CMD_MAE_COUNTER_FREE_OUT_LEN(n_counters);
+
+	for (i = 0; i < n_counters; i++) {
+		MCDI_IN_SET_INDEXED_DWORD(req,
+		    MAE_COUNTER_FREE_IN_FREE_COUNTER_ID, i, countersp[i].id);
+	}
+	MCDI_IN_SET_DWORD(req, MAE_COUNTER_FREE_IN_COUNTER_ID_COUNT,
+			  n_counters);
+
+	efx_mcdi_execute(enp, &req);
+
+	if (req.emr_rc != 0) {
+		rc = req.emr_rc;
+		goto fail2;
+	}
+
+	if (req.emr_out_length_used < MC_CMD_MAE_COUNTER_FREE_OUT_LENMIN) {
+		rc = EMSGSIZE;
+		goto fail3;
+	}
+
+	n_freed = MCDI_OUT_DWORD(req, MAE_COUNTER_FREE_OUT_COUNTER_ID_COUNT);
+
+	if (n_freed < MC_CMD_MAE_COUNTER_FREE_OUT_FREED_COUNTER_ID_MINNUM) {
+		rc = EFAULT;
+		goto fail4;
+	}
+
+	if (gen_countp != NULL) {
+		*gen_countp = MCDI_OUT_DWORD(req,
+				    MAE_COUNTER_FREE_OUT_GENERATION_COUNT);
+	}
+
+	*n_freedp = n_freed;
+
+	return (0);
+
 fail4:
 	EFSYS_PROBE(fail4);
 fail3:
diff --git a/drivers/common/sfc_efx/base/efx_mcdi.h b/drivers/common/sfc_efx/base/efx_mcdi.h
index 70a97ea337..90b70de97b 100644
--- a/drivers/common/sfc_efx/base/efx_mcdi.h
+++ b/drivers/common/sfc_efx/base/efx_mcdi.h
@@ -311,6 +311,10 @@ efx_mcdi_phy_module_get_info(
 	EFX_SET_DWORD_FIELD(*MCDI_IN2(_emr, efx_dword_t, _ofst),	\
 		MC_CMD_ ## _field, _value)
 
+#define	MCDI_IN_SET_INDEXED_DWORD(_emr, _ofst, _idx, _value)		\
+	EFX_POPULATE_DWORD_1(*(MCDI_IN2(_emr, efx_dword_t, _ofst) +	\
+			     (_idx)), EFX_DWORD_0, _value)		\
+
 #define	MCDI_IN_POPULATE_DWORD_1(_emr, _ofst, _field1, _value1)		\
 	EFX_POPULATE_DWORD_1(*MCDI_IN2(_emr, efx_dword_t, _ofst),	\
 		MC_CMD_ ## _field1, _value1)
@@ -451,6 +455,9 @@ efx_mcdi_phy_module_get_info(
 	EFX_DWORD_FIELD(*MCDI_OUT2(_emr, efx_dword_t, _ofst),		\
 			MC_CMD_ ## _field)
 
+#define	MCDI_OUT_INDEXED_DWORD(_emr, _ofst, _idx)			\
+	MCDI_OUT_INDEXED_DWORD_FIELD(_emr, _ofst, _idx, EFX_DWORD_0)
+
 #define	MCDI_OUT_INDEXED_DWORD_FIELD(_emr, _ofst, _idx, _field)		\
 	EFX_DWORD_FIELD(*(MCDI_OUT2(_emr, efx_dword_t, _ofst) +		\
 			(_idx)), _field)
diff --git a/drivers/common/sfc_efx/version.map b/drivers/common/sfc_efx/version.map
index ae85ed18c6..30b243a1e7 100644
--- a/drivers/common/sfc_efx/version.map
+++ b/drivers/common/sfc_efx/version.map
@@ -102,6 +102,8 @@ INTERNAL {
 	efx_mae_action_set_spec_fini;
 	efx_mae_action_set_spec_init;
 	efx_mae_action_set_specs_equal;
+	efx_mae_counters_alloc;
+	efx_mae_counters_free;
 	efx_mae_encap_header_alloc;
 	efx_mae_encap_header_free;
 	efx_mae_fini;
-- 
2.30.2



* [dpdk-dev] [PATCH 14/20] common/sfc_efx/base: add counter stream MCDI wrappers
  2021-05-27 15:24 [dpdk-dev] [PATCH 00/20] net/sfc: support flow API COUNT action Andrew Rybchenko
                   ` (12 preceding siblings ...)
  2021-05-27 15:25 ` [dpdk-dev] [PATCH 13/20] common/sfc_efx/base: add counter creation MCDI wrappers Andrew Rybchenko
@ 2021-05-27 15:25 ` Andrew Rybchenko
  2021-05-27 15:25 ` [dpdk-dev] [PATCH 15/20] common/sfc_efx/base: support counter in action set Andrew Rybchenko
                   ` (8 subsequent siblings)
  22 siblings, 0 replies; 104+ messages in thread
From: Andrew Rybchenko @ 2021-05-27 15:25 UTC (permalink / raw)
  To: dev; +Cc: Igor Romanov, Andy Moreton, Ivan Malov

From: Igor Romanov <igor.romanov@oktetlabs.ru>

The MCDIs will be used to control counter Rx queue packet flow.
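
The intended usage on the driver side is roughly the following (a minimal
sketch; enp, rxq_id, packet_size and n_pushed are assumed to come from the
NIC handle, counter RxQ configuration and datapath):

	uint32_t flags_out = 0;
	uint32_t gen_count;
	efx_rc_t rc;

	rc = efx_mae_counters_stream_start(enp, rxq_id, packet_size,
					   0 /* flags_in */, &flags_out);
	if (rc != 0)
		return (rc);

	if ((flags_out & EFX_MAE_COUNTERS_STREAM_OUT_USES_CREDITS) != 0) {
		/* Grant credits as Rx descriptors are pushed to the queue. */
		rc = efx_mae_counters_stream_give_credits(enp, n_pushed);
	}

	/* On teardown, stop the stream and pick up the final generation. */
	rc = efx_mae_counters_stream_stop(enp, rxq_id, &gen_count);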

Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
Reviewed-by: Ivan Malov <ivan.malov@oktetlabs.ru>
---
 drivers/common/sfc_efx/base/efx.h     |  32 ++++++
 drivers/common/sfc_efx/base/efx_mae.c | 138 ++++++++++++++++++++++++++
 drivers/common/sfc_efx/version.map    |   3 +
 3 files changed, 173 insertions(+)

diff --git a/drivers/common/sfc_efx/base/efx.h b/drivers/common/sfc_efx/base/efx.h
index d0f8bc10b3..cc173d13c6 100644
--- a/drivers/common/sfc_efx/base/efx.h
+++ b/drivers/common/sfc_efx/base/efx.h
@@ -4455,6 +4455,38 @@ efx_mae_counters_free(
 	__in_ecount(n_counters)		const efx_counter_t *countersp,
 	__out_opt			uint32_t *gen_countp);
 
+/* When set, include counters with a value of zero */
+#define	EFX_MAE_COUNTERS_STREAM_IN_ZERO_SQUASH_DISABLE	(1U << 0)
+
+/*
+ * Set if credit-based flow control is used. In this case the driver
+ * must call efx_mae_counters_stream_give_credits() to notify the
+ * packetiser of descriptors written.
+ */
+#define	EFX_MAE_COUNTERS_STREAM_OUT_USES_CREDITS	(1U << 0)
+
+LIBEFX_API
+extern	__checkReturn			efx_rc_t
+efx_mae_counters_stream_start(
+	__in				efx_nic_t *enp,
+	__in				uint16_t rxq_id,
+	__in				uint16_t packet_size,
+	__in				uint32_t flags_in,
+	__out				uint32_t *flags_out);
+
+LIBEFX_API
+extern	__checkReturn			efx_rc_t
+efx_mae_counters_stream_stop(
+	__in				efx_nic_t *enp,
+	__in				uint16_t rxq_id,
+	__out_opt			uint32_t *gen_countp);
+
+LIBEFX_API
+extern	__checkReturn			efx_rc_t
+efx_mae_counters_stream_give_credits(
+	__in				efx_nic_t *enp,
+	__in				uint32_t n_credits);
+
 LIBEFX_API
 extern	__checkReturn			efx_rc_t
 efx_mae_action_set_free(
diff --git a/drivers/common/sfc_efx/base/efx_mae.c b/drivers/common/sfc_efx/base/efx_mae.c
index 955f1d4353..0b3131161b 100644
--- a/drivers/common/sfc_efx/base/efx_mae.c
+++ b/drivers/common/sfc_efx/base/efx_mae.c
@@ -2766,6 +2766,144 @@ efx_mae_counters_free(
 	EFSYS_PROBE(fail2);
 fail1:
 	EFSYS_PROBE1(fail1, efx_rc_t, rc);
+
+	return (rc);
+}
+
+	__checkReturn			efx_rc_t
+efx_mae_counters_stream_start(
+	__in				efx_nic_t *enp,
+	__in				uint16_t rxq_id,
+	__in				uint16_t packet_size,
+	__in				uint32_t flags_in,
+	__out				uint32_t *flags_out)
+{
+	efx_mcdi_req_t req;
+	EFX_MCDI_DECLARE_BUF(payload, MC_CMD_MAE_COUNTERS_STREAM_START_IN_LEN,
+			     MC_CMD_MAE_COUNTERS_STREAM_START_OUT_LEN);
+	efx_rc_t rc;
+
+	EFX_STATIC_ASSERT(EFX_MAE_COUNTERS_STREAM_IN_ZERO_SQUASH_DISABLE ==
+	    1U << MC_CMD_MAE_COUNTERS_STREAM_START_IN_ZERO_SQUASH_DISABLE_LBN);
+
+	EFX_STATIC_ASSERT(EFX_MAE_COUNTERS_STREAM_OUT_USES_CREDITS ==
+	    1U << MC_CMD_MAE_COUNTERS_STREAM_START_OUT_USES_CREDITS_LBN);
+
+	req.emr_cmd = MC_CMD_MAE_COUNTERS_STREAM_START;
+	req.emr_in_buf = payload;
+	req.emr_in_length = MC_CMD_MAE_COUNTERS_STREAM_START_IN_LEN;
+	req.emr_out_buf = payload;
+	req.emr_out_length = MC_CMD_MAE_COUNTERS_STREAM_START_OUT_LEN;
+
+	MCDI_IN_SET_WORD(req, MAE_COUNTERS_STREAM_START_IN_QID, rxq_id);
+	MCDI_IN_SET_WORD(req, MAE_COUNTERS_STREAM_START_IN_PACKET_SIZE,
+			 packet_size);
+	MCDI_IN_SET_DWORD(req, MAE_COUNTERS_STREAM_START_IN_FLAGS, flags_in);
+
+	efx_mcdi_execute(enp, &req);
+
+	if (req.emr_rc != 0) {
+		rc = req.emr_rc;
+		goto fail1;
+	}
+
+	if (req.emr_out_length_used <
+	    MC_CMD_MAE_COUNTERS_STREAM_START_OUT_LEN) {
+		rc = EMSGSIZE;
+		goto fail2;
+	}
+
+	*flags_out = MCDI_OUT_DWORD(req, MAE_COUNTERS_STREAM_START_OUT_FLAGS);
+
+	return (0);
+
+fail2:
+	EFSYS_PROBE(fail2);
+fail1:
+	EFSYS_PROBE1(fail1, efx_rc_t, rc);
+
+	return (rc);
+}
+
+	__checkReturn			efx_rc_t
+efx_mae_counters_stream_stop(
+	__in				efx_nic_t *enp,
+	__in				uint16_t rxq_id,
+	__out_opt			uint32_t *gen_countp)
+{
+	efx_mcdi_req_t req;
+	EFX_MCDI_DECLARE_BUF(payload, MC_CMD_MAE_COUNTERS_STREAM_STOP_IN_LEN,
+			     MC_CMD_MAE_COUNTERS_STREAM_STOP_OUT_LEN);
+	efx_rc_t rc;
+
+	req.emr_cmd = MC_CMD_MAE_COUNTERS_STREAM_STOP;
+	req.emr_in_buf = payload;
+	req.emr_in_length = MC_CMD_MAE_COUNTERS_STREAM_STOP_IN_LEN;
+	req.emr_out_buf = payload;
+	req.emr_out_length = MC_CMD_MAE_COUNTERS_STREAM_STOP_OUT_LEN;
+
+	MCDI_IN_SET_WORD(req, MAE_COUNTERS_STREAM_STOP_IN_QID, rxq_id);
+
+	efx_mcdi_execute(enp, &req);
+
+	if (req.emr_rc != 0) {
+		rc = req.emr_rc;
+		goto fail1;
+	}
+
+	if (req.emr_out_length_used <
+	    MC_CMD_MAE_COUNTERS_STREAM_STOP_OUT_LEN) {
+		rc = EMSGSIZE;
+		goto fail2;
+	}
+
+	if (gen_countp != NULL) {
+		*gen_countp = MCDI_OUT_DWORD(req,
+			    MAE_COUNTERS_STREAM_STOP_OUT_GENERATION_COUNT);
+	}
+
+	return (0);
+
+fail2:
+	EFSYS_PROBE(fail2);
+fail1:
+	EFSYS_PROBE1(fail1, efx_rc_t, rc);
+
+	return (rc);
+}
+
+	__checkReturn			efx_rc_t
+efx_mae_counters_stream_give_credits(
+	__in				efx_nic_t *enp,
+	__in				uint32_t n_credits)
+{
+	efx_mcdi_req_t req;
+	EFX_MCDI_DECLARE_BUF(payload,
+			     MC_CMD_MAE_COUNTERS_STREAM_GIVE_CREDITS_IN_LEN,
+			     MC_CMD_MAE_COUNTERS_STREAM_GIVE_CREDITS_OUT_LEN);
+	efx_rc_t rc;
+
+	req.emr_cmd = MC_CMD_MAE_COUNTERS_STREAM_GIVE_CREDITS;
+	req.emr_in_buf = payload;
+	req.emr_in_length = MC_CMD_MAE_COUNTERS_STREAM_GIVE_CREDITS_IN_LEN;
+	req.emr_out_buf = payload;
+	req.emr_out_length = MC_CMD_MAE_COUNTERS_STREAM_GIVE_CREDITS_OUT_LEN;
+
+	MCDI_IN_SET_DWORD(req, MAE_COUNTERS_STREAM_GIVE_CREDITS_IN_NUM_CREDITS,
+			 n_credits);
+
+	efx_mcdi_execute(enp, &req);
+
+	if (req.emr_rc != 0) {
+		rc = req.emr_rc;
+		goto fail1;
+	}
+
+	return (0);
+
+fail1:
+	EFSYS_PROBE1(fail1, efx_rc_t, rc);
+
 	return (rc);
 }
 
diff --git a/drivers/common/sfc_efx/version.map b/drivers/common/sfc_efx/version.map
index 30b243a1e7..622f5d4cf5 100644
--- a/drivers/common/sfc_efx/version.map
+++ b/drivers/common/sfc_efx/version.map
@@ -104,6 +104,9 @@ INTERNAL {
 	efx_mae_action_set_specs_equal;
 	efx_mae_counters_alloc;
 	efx_mae_counters_free;
+	efx_mae_counters_stream_give_credits;
+	efx_mae_counters_stream_start;
+	efx_mae_counters_stream_stop;
 	efx_mae_encap_header_alloc;
 	efx_mae_encap_header_free;
 	efx_mae_fini;
-- 
2.30.2



* [dpdk-dev] [PATCH 15/20] common/sfc_efx/base: support counter in action set
  2021-05-27 15:24 [dpdk-dev] [PATCH 00/20] net/sfc: support flow API COUNT action Andrew Rybchenko
                   ` (13 preceding siblings ...)
  2021-05-27 15:25 ` [dpdk-dev] [PATCH 14/20] common/sfc_efx/base: add counter stream " Andrew Rybchenko
@ 2021-05-27 15:25 ` Andrew Rybchenko
  2021-05-27 15:25 ` [dpdk-dev] [PATCH 16/20] net/sfc: add Rx datapath method to get pushed buffers count Andrew Rybchenko
                   ` (7 subsequent siblings)
  22 siblings, 0 replies; 104+ messages in thread
From: Andrew Rybchenko @ 2021-05-27 15:25 UTC (permalink / raw)
  To: dev; +Cc: Igor Romanov, Andy Moreton, Ivan Malov

From: Igor Romanov <igor.romanov@oktetlabs.ru>

The user will be able to associate a counter with an MAE action set to
collect packet and byte counts for a specific action set.
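
The expected call sequence is roughly as follows (a minimal sketch; spec is
an action set spec under construction and counter is an efx_counter_t
obtained from efx_mae_counters_alloc()):

	efx_mae_aset_id_t aset_id;
	efx_rc_t rc;

	/* While parsing the rule: record that a COUNT action is present. */
	rc = efx_mae_action_set_populate_count(spec);
	if (rc != 0)
		return (rc);

	/* At resource allocation time: bind the allocated counter ID. */
	if (efx_mae_action_set_get_nb_count(spec) == 1) {
		rc = efx_mae_action_set_fill_in_counter_id(spec, &counter);
		if (rc != 0)
			return (rc);
	}

	rc = efx_mae_action_set_alloc(enp, spec, &aset_id);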

Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
Reviewed-by: Ivan Malov <ivan.malov@oktetlabs.ru>
---
 drivers/common/sfc_efx/base/efx.h      |  21 ++++
 drivers/common/sfc_efx/base/efx_impl.h |   3 +
 drivers/common/sfc_efx/base/efx_mae.c  | 133 ++++++++++++++++++++++++-
 drivers/common/sfc_efx/version.map     |   3 +
 4 files changed, 157 insertions(+), 3 deletions(-)

diff --git a/drivers/common/sfc_efx/base/efx.h b/drivers/common/sfc_efx/base/efx.h
index cc173d13c6..628e61e065 100644
--- a/drivers/common/sfc_efx/base/efx.h
+++ b/drivers/common/sfc_efx/base/efx.h
@@ -4306,6 +4306,15 @@ extern	__checkReturn			efx_rc_t
 efx_mae_action_set_populate_encap(
 	__in				efx_mae_actions_t *spec);
 
+/*
+ * Use efx_mae_action_set_fill_in_counter_id() to set the ID of a counter
+ * in the specification prior to action set allocation.
+ */
+ */
+LIBEFX_API
+extern	__checkReturn			efx_rc_t
+efx_mae_action_set_populate_count(
+	__in				efx_mae_actions_t *spec);
+
 LIBEFX_API
 extern	__checkReturn			efx_rc_t
 efx_mae_action_set_populate_flag(
@@ -4410,6 +4419,18 @@ typedef struct efx_counter_s {
 	uint32_t id;
 } efx_counter_t;
 
+LIBEFX_API
+extern	__checkReturn			unsigned int
+efx_mae_action_set_get_nb_count(
+	__in				const efx_mae_actions_t *spec);
+
+/* See description before efx_mae_action_set_populate_count(). */
+LIBEFX_API
+extern	__checkReturn			efx_rc_t
+efx_mae_action_set_fill_in_counter_id(
+	__in				efx_mae_actions_t *spec,
+	__in				const efx_counter_t *counter_idp);
+
 /* Action set ID */
 typedef struct efx_mae_aset_id_s {
 	uint32_t id;
diff --git a/drivers/common/sfc_efx/base/efx_impl.h b/drivers/common/sfc_efx/base/efx_impl.h
index 9dbf6d450c..992edbabe3 100644
--- a/drivers/common/sfc_efx/base/efx_impl.h
+++ b/drivers/common/sfc_efx/base/efx_impl.h
@@ -1734,6 +1734,7 @@ typedef enum efx_mae_action_e {
 	EFX_MAE_ACTION_DECAP,
 	EFX_MAE_ACTION_VLAN_POP,
 	EFX_MAE_ACTION_VLAN_PUSH,
+	EFX_MAE_ACTION_COUNT,
 	EFX_MAE_ACTION_ENCAP,
 
 	/*
@@ -1764,6 +1765,7 @@ typedef struct efx_mae_action_vlan_push_s {
 
 typedef struct efx_mae_actions_rsrc_s {
 	efx_mae_eh_id_t			emar_eh_id;
+	efx_counter_t			emar_counter_id;
 } efx_mae_actions_rsrc_t;
 
 struct efx_mae_actions_s {
@@ -1774,6 +1776,7 @@ struct efx_mae_actions_s {
 	unsigned int			ema_n_vlan_tags_to_push;
 	efx_mae_action_vlan_push_t	ema_vlan_push_descs[
 	    EFX_MAE_VLAN_PUSH_MAX_NTAGS];
+	unsigned int			ema_n_count_actions;
 	uint32_t			ema_mark_value;
 	efx_mport_sel_t			ema_deliver_mport;
 
diff --git a/drivers/common/sfc_efx/base/efx_mae.c b/drivers/common/sfc_efx/base/efx_mae.c
index 0b3131161b..8d1294a627 100644
--- a/drivers/common/sfc_efx/base/efx_mae.c
+++ b/drivers/common/sfc_efx/base/efx_mae.c
@@ -1191,6 +1191,7 @@ efx_mae_action_set_spec_init(
 	}
 
 	spec->ema_rsrc.emar_eh_id.id = EFX_MAE_RSRC_ID_INVALID;
+	spec->ema_rsrc.emar_counter_id.id = EFX_MAE_RSRC_ID_INVALID;
 
 	*specp = spec;
 
@@ -1358,6 +1359,50 @@ efx_mae_action_set_add_encap(
 	return (rc);
 }
 
+static	__checkReturn			efx_rc_t
+efx_mae_action_set_add_count(
+	__in				efx_mae_actions_t *spec,
+	__in				size_t arg_size,
+	__in_bcount(arg_size)		const uint8_t *arg)
+{
+	efx_rc_t rc;
+
+	EFX_STATIC_ASSERT(EFX_MAE_RSRC_ID_INVALID ==
+			  MC_CMD_MAE_COUNTER_ALLOC_OUT_COUNTER_ID_NULL);
+
+	/*
+	 * Preparing an action set spec to update a counter requires
+	 * two steps: first add this action to the action spec, and then
+	 * add the counter ID to the spec. This allows validity checking
+	 * and resource allocation to be done separately.
+	 * Mark the counter ID as invalid in the spec to ensure that the
+	 * caller must also invoke efx_mae_action_set_fill_in_counter_id()
+	 * before action set allocation.
+	 */
+	spec->ema_rsrc.emar_counter_id.id = EFX_MAE_RSRC_ID_INVALID;
+
+	/* Nothing else is supposed to take place over here. */
+	if (arg_size != 0) {
+		rc = EINVAL;
+		goto fail1;
+	}
+
+	if (arg != NULL) {
+		rc = EINVAL;
+		goto fail2;
+	}
+
+	++(spec->ema_n_count_actions);
+
+	return (0);
+
+fail2:
+	EFSYS_PROBE(fail2);
+fail1:
+	EFSYS_PROBE1(fail1, efx_rc_t, rc);
+	return (rc);
+}
+
 static	__checkReturn			efx_rc_t
 efx_mae_action_set_add_flag(
 	__in				efx_mae_actions_t *spec,
@@ -1466,6 +1511,9 @@ static const efx_mae_action_desc_t efx_mae_actions[EFX_MAE_NACTIONS] = {
 	[EFX_MAE_ACTION_ENCAP] = {
 		.emad_add = efx_mae_action_set_add_encap
 	},
+	[EFX_MAE_ACTION_COUNT] = {
+		.emad_add = efx_mae_action_set_add_count
+	},
 	[EFX_MAE_ACTION_FLAG] = {
 		.emad_add = efx_mae_action_set_add_flag
 	},
@@ -1481,6 +1529,12 @@ static const uint32_t efx_mae_action_ordered_map =
 	(1U << EFX_MAE_ACTION_DECAP) |
 	(1U << EFX_MAE_ACTION_VLAN_POP) |
 	(1U << EFX_MAE_ACTION_VLAN_PUSH) |
+	/*
+	 * HW applies the COUNT action after the matching packet has been
+	 * modified by any length-affecting actions, except for ENCAP.
+	 */
+	(1U << EFX_MAE_ACTION_COUNT) |
 	(1U << EFX_MAE_ACTION_ENCAP) |
 	(1U << EFX_MAE_ACTION_FLAG) |
 	(1U << EFX_MAE_ACTION_MARK) |
@@ -1497,7 +1551,8 @@ static const uint32_t efx_mae_action_nonstrict_map =
 
 static const uint32_t efx_mae_action_repeat_map =
 	(1U << EFX_MAE_ACTION_VLAN_POP) |
-	(1U << EFX_MAE_ACTION_VLAN_PUSH);
+	(1U << EFX_MAE_ACTION_VLAN_PUSH) |
+	(1U << EFX_MAE_ACTION_COUNT);
 
 /*
  * Add an action to an action set.
@@ -1620,6 +1675,20 @@ efx_mae_action_set_populate_encap(
 	    EFX_MAE_ACTION_ENCAP, 0, NULL));
 }
 
+	__checkReturn			efx_rc_t
+efx_mae_action_set_populate_count(
+	__in				efx_mae_actions_t *spec)
+{
+	/*
+	 * There is no argument for passing a counter ID, so no counter
+	 * needs to be allocated while parsing application input.
+	 * This is useful since an action set may be built simply to
+	 * validate a rule, whereas resource allocation usually takes time.
+	 */
+	return (efx_mae_action_set_spec_populate(spec,
+	    EFX_MAE_ACTION_COUNT, 0, NULL));
+}
+
 	__checkReturn			efx_rc_t
 efx_mae_action_set_populate_flag(
 	__in				efx_mae_actions_t *spec)
@@ -2306,8 +2375,6 @@ efx_mae_action_set_alloc(
 	 */
 	MCDI_IN_SET_DWORD(req,
 	    MAE_ACTION_SET_ALLOC_IN_COUNTER_LIST_ID, EFX_MAE_RSRC_ID_INVALID);
-	MCDI_IN_SET_DWORD(req,
-	    MAE_ACTION_SET_ALLOC_IN_COUNTER_ID, EFX_MAE_RSRC_ID_INVALID);
 
 	if ((spec->ema_actions & (1U << EFX_MAE_ACTION_DECAP)) != 0) {
 		MCDI_IN_SET_DWORD_FIELD(req, MAE_ACTION_SET_ALLOC_IN_FLAGS,
@@ -2344,6 +2411,8 @@ efx_mae_action_set_alloc(
 
 	MCDI_IN_SET_DWORD(req, MAE_ACTION_SET_ALLOC_IN_ENCAP_HEADER_ID,
 	    spec->ema_rsrc.emar_eh_id.id);
+	MCDI_IN_SET_DWORD(req, MAE_ACTION_SET_ALLOC_IN_COUNTER_ID,
+	    spec->ema_rsrc.emar_counter_id.id);
 
 	if ((spec->ema_actions & (1U << EFX_MAE_ACTION_FLAG)) != 0) {
 		MCDI_IN_SET_DWORD_FIELD(req, MAE_ACTION_SET_ALLOC_IN_FLAGS,
@@ -2603,6 +2672,64 @@ efx_mae_action_rule_remove(
 
 	return (0);
 
+fail4:
+	EFSYS_PROBE(fail4);
+fail3:
+	EFSYS_PROBE(fail3);
+fail2:
+	EFSYS_PROBE(fail2);
+fail1:
+	EFSYS_PROBE1(fail1, efx_rc_t, rc);
+	return (rc);
+}
+
+	__checkReturn			unsigned int
+efx_mae_action_set_get_nb_count(
+	__in				const efx_mae_actions_t *spec)
+{
+	return (spec->ema_n_count_actions);
+}
+
+	__checkReturn			efx_rc_t
+efx_mae_action_set_fill_in_counter_id(
+	__in				efx_mae_actions_t *spec,
+	__in				const efx_counter_t *counter_idp)
+{
+	efx_rc_t rc;
+
+	if ((spec->ema_actions & (1U << EFX_MAE_ACTION_COUNT)) == 0) {
+		/*
+		 * It is invalid to add a counter ID if the spec has no
+		 * COUNT action.
+		 */
+		rc = EINVAL;
+		goto fail1;
+	}
+
+	if (spec->ema_n_count_actions != 1) {
+		/*
+		 * Having multiple COUNT actions in the spec requires a counter
+		 * list to be used. This API must only be used for a single
+		 * counter per spec, so reject such a request as inappropriate.
+		 */
+		rc = EINVAL;
+		goto fail2;
+	}
+
+	if (spec->ema_rsrc.emar_counter_id.id != EFX_MAE_RSRC_ID_INVALID) {
+		/* The caller has attempted to set the counter ID twice. */
+		rc = EALREADY;
+		goto fail3;
+	}
+
+	if (counter_idp->id == EFX_MAE_RSRC_ID_INVALID) {
+		rc = EINVAL;
+		goto fail4;
+	}
+
+	spec->ema_rsrc.emar_counter_id.id = counter_idp->id;
+
+	return (0);
+
 fail4:
 	EFSYS_PROBE(fail4);
 fail3:
diff --git a/drivers/common/sfc_efx/version.map b/drivers/common/sfc_efx/version.map
index 622f5d4cf5..0c5bcdfa84 100644
--- a/drivers/common/sfc_efx/version.map
+++ b/drivers/common/sfc_efx/version.map
@@ -89,8 +89,11 @@ INTERNAL {
 	efx_mae_action_rule_insert;
 	efx_mae_action_rule_remove;
 	efx_mae_action_set_alloc;
+	efx_mae_action_set_fill_in_counter_id;
 	efx_mae_action_set_fill_in_eh_id;
 	efx_mae_action_set_free;
+	efx_mae_action_set_get_nb_count;
+	efx_mae_action_set_populate_count;
 	efx_mae_action_set_populate_decap;
 	efx_mae_action_set_populate_deliver;
 	efx_mae_action_set_populate_drop;
-- 
2.30.2



* [dpdk-dev] [PATCH 16/20] net/sfc: add Rx datapath method to get pushed buffers count
  2021-05-27 15:24 [dpdk-dev] [PATCH 00/20] net/sfc: support flow API COUNT action Andrew Rybchenko
                   ` (14 preceding siblings ...)
  2021-05-27 15:25 ` [dpdk-dev] [PATCH 15/20] common/sfc_efx/base: support counter in action set Andrew Rybchenko
@ 2021-05-27 15:25 ` Andrew Rybchenko
  2021-05-27 15:25 ` [dpdk-dev] [PATCH 17/20] common/sfc_efx/base: add max MAE counters to limits Andrew Rybchenko
                   ` (6 subsequent siblings)
  22 siblings, 0 replies; 104+ messages in thread
From: Andrew Rybchenko @ 2021-05-27 15:25 UTC (permalink / raw)
  To: dev; +Cc: Igor Romanov, Andy Moreton, Ivan Malov

From: Igor Romanov <igor.romanov@oktetlabs.ru>

The information about the number of pushed Rx buffers is required
for the counter Rx queue to know when to give credits to the counter
stream.
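
A counter RxQ poller can then derive the number of credits to grant from
the delta of the running counter (a minimal sketch; reg refers to the
counter registry introduced later in the series, and pushed_n_buffers holds
the value observed on the previous iteration):

	unsigned int pushed;
	unsigned int delta;

	pushed = sfc_rx_get_pushed(sa, dp_rxq);
	delta = pushed - reg->pushed_n_buffers; /* unsigned wrap is safe */
	if (reg->use_credits && delta > 0) {
		(void)efx_mae_counters_stream_give_credits(sa->nic, delta);
		reg->pushed_n_buffers = pushed;
	}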

Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
Reviewed-by: Ivan Malov <ivan.malov@oktetlabs.ru>
---
 drivers/net/sfc/sfc_dp_rx.h    |  4 ++++
 drivers/net/sfc/sfc_ef100_rx.c | 15 +++++++++++++++
 drivers/net/sfc/sfc_rx.c       |  9 +++++++++
 drivers/net/sfc/sfc_rx.h       |  3 +++
 4 files changed, 31 insertions(+)

diff --git a/drivers/net/sfc/sfc_dp_rx.h b/drivers/net/sfc/sfc_dp_rx.h
index 3f6857b1ff..b6c44085ce 100644
--- a/drivers/net/sfc/sfc_dp_rx.h
+++ b/drivers/net/sfc/sfc_dp_rx.h
@@ -204,6 +204,9 @@ typedef int (sfc_dp_rx_intr_enable_t)(struct sfc_dp_rxq *dp_rxq);
 /** Disable Rx interrupts */
 typedef int (sfc_dp_rx_intr_disable_t)(struct sfc_dp_rxq *dp_rxq);
 
+/** Get number of pushed Rx buffers */
+typedef unsigned int (sfc_dp_rx_get_pushed_t)(struct sfc_dp_rxq *dp_rxq);
+
 /** Receive datapath definition */
 struct sfc_dp_rx {
 	struct sfc_dp				dp;
@@ -238,6 +241,7 @@ struct sfc_dp_rx {
 	sfc_dp_rx_qdesc_status_t		*qdesc_status;
 	sfc_dp_rx_intr_enable_t			*intr_enable;
 	sfc_dp_rx_intr_disable_t		*intr_disable;
+	sfc_dp_rx_get_pushed_t			*get_pushed;
 	eth_rx_burst_t				pkt_burst;
 };
 
diff --git a/drivers/net/sfc/sfc_ef100_rx.c b/drivers/net/sfc/sfc_ef100_rx.c
index 8cde24c585..7447f8b9de 100644
--- a/drivers/net/sfc/sfc_ef100_rx.c
+++ b/drivers/net/sfc/sfc_ef100_rx.c
@@ -892,6 +892,20 @@ sfc_ef100_rx_intr_disable(struct sfc_dp_rxq *dp_rxq)
 	return 0;
 }
 
+static sfc_dp_rx_get_pushed_t sfc_ef100_rx_get_pushed;
+static unsigned int
+sfc_ef100_rx_get_pushed(struct sfc_dp_rxq *dp_rxq)
+{
+	struct sfc_ef100_rxq *rxq = sfc_ef100_rxq_by_dp_rxq(dp_rxq);
+
+	/*
+	 * The datapath keeps track only of added descriptors, since
+	 * the number of pushed descriptors always equals the number
+	 * of added descriptors due to enforced alignment.
+	 */
+	return rxq->added;
+}
+
 struct sfc_dp_rx sfc_ef100_rx = {
 	.dp = {
 		.name		= SFC_KVARG_DATAPATH_EF100,
@@ -919,5 +933,6 @@ struct sfc_dp_rx sfc_ef100_rx = {
 	.qdesc_status		= sfc_ef100_rx_qdesc_status,
 	.intr_enable		= sfc_ef100_rx_intr_enable,
 	.intr_disable		= sfc_ef100_rx_intr_disable,
+	.get_pushed		= sfc_ef100_rx_get_pushed,
 	.pkt_burst		= sfc_ef100_recv_pkts,
 };
diff --git a/drivers/net/sfc/sfc_rx.c b/drivers/net/sfc/sfc_rx.c
index 0532f77082..f6a8ac68e8 100644
--- a/drivers/net/sfc/sfc_rx.c
+++ b/drivers/net/sfc/sfc_rx.c
@@ -53,6 +53,15 @@ sfc_rx_qflush_failed(struct sfc_rxq_info *rxq_info)
 	rxq_info->state &= ~SFC_RXQ_FLUSHING;
 }
 
+/* This returns the running counter, which is not bounded by ring size */
+unsigned int
+sfc_rx_get_pushed(struct sfc_adapter *sa, struct sfc_dp_rxq *dp_rxq)
+{
+	SFC_ASSERT(sa->priv.dp_rx->get_pushed != NULL);
+
+	return sa->priv.dp_rx->get_pushed(dp_rxq);
+}
+
 static int
 sfc_efx_rx_qprime(struct sfc_efx_rxq *rxq)
 {
diff --git a/drivers/net/sfc/sfc_rx.h b/drivers/net/sfc/sfc_rx.h
index e5a6fde79b..4ab513915e 100644
--- a/drivers/net/sfc/sfc_rx.h
+++ b/drivers/net/sfc/sfc_rx.h
@@ -145,6 +145,9 @@ uint64_t sfc_rx_get_queue_offload_caps(struct sfc_adapter *sa);
 void sfc_rx_qflush_done(struct sfc_rxq_info *rxq_info);
 void sfc_rx_qflush_failed(struct sfc_rxq_info *rxq_info);
 
+unsigned int sfc_rx_get_pushed(struct sfc_adapter *sa,
+			       struct sfc_dp_rxq *dp_rxq);
+
 int sfc_rx_hash_init(struct sfc_adapter *sa);
 void sfc_rx_hash_fini(struct sfc_adapter *sa);
 int sfc_rx_hf_rte_to_efx(struct sfc_adapter *sa, uint64_t rte,
-- 
2.30.2



* [dpdk-dev] [PATCH 17/20] common/sfc_efx/base: add max MAE counters to limits
  2021-05-27 15:24 [dpdk-dev] [PATCH 00/20] net/sfc: support flow API COUNT action Andrew Rybchenko
                   ` (15 preceding siblings ...)
  2021-05-27 15:25 ` [dpdk-dev] [PATCH 16/20] net/sfc: add Rx datapath method to get pushed buffers count Andrew Rybchenko
@ 2021-05-27 15:25 ` Andrew Rybchenko
  2021-05-27 15:25 ` [dpdk-dev] [PATCH 18/20] common/sfc_efx/base: add packetiser packet format definition Andrew Rybchenko
                   ` (5 subsequent siblings)
  22 siblings, 0 replies; 104+ messages in thread
From: Andrew Rybchenko @ 2021-05-27 15:25 UTC (permalink / raw)
  To: dev; +Cc: Igor Romanov, Andy Moreton, Ivan Malov

From: Igor Romanov <igor.romanov@oktetlabs.ru>

The information about the maximum number of MAE counters is
crucial to the counter support in the driver.
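
For example, the driver can size its counter registry from the reported
limit (a minimal sketch; sfc_mae_counters_init() is the registry helper
added later in the series):

	efx_mae_limits_t limits;
	int rc;

	rc = efx_mae_get_limits(enp, &limits);
	if (rc == 0) {
		rc = sfc_mae_counters_init(&registry->counters,
					   limits.eml_max_n_counters);
	}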

Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
Reviewed-by: Ivan Malov <ivan.malov@oktetlabs.ru>
---
 drivers/common/sfc_efx/base/efx.h     | 1 +
 drivers/common/sfc_efx/base/efx_mae.c | 1 +
 2 files changed, 2 insertions(+)

diff --git a/drivers/common/sfc_efx/base/efx.h b/drivers/common/sfc_efx/base/efx.h
index 628e61e065..b2301b845a 100644
--- a/drivers/common/sfc_efx/base/efx.h
+++ b/drivers/common/sfc_efx/base/efx.h
@@ -4093,6 +4093,7 @@ typedef struct efx_mae_limits_s {
 	uint32_t			eml_max_n_outer_prios;
 	uint32_t			eml_encap_types_supported;
 	uint32_t			eml_encap_header_size_limit;
+	uint32_t			eml_max_n_counters;
 } efx_mae_limits_t;
 
 LIBEFX_API
diff --git a/drivers/common/sfc_efx/base/efx_mae.c b/drivers/common/sfc_efx/base/efx_mae.c
index 8d1294a627..5a320dcda6 100644
--- a/drivers/common/sfc_efx/base/efx_mae.c
+++ b/drivers/common/sfc_efx/base/efx_mae.c
@@ -374,6 +374,7 @@ efx_mae_get_limits(
 	emlp->eml_encap_types_supported = maep->em_encap_types_supported;
 	emlp->eml_encap_header_size_limit =
 	    MC_CMD_MAE_ENCAP_HEADER_ALLOC_IN_HDR_DATA_MAXNUM_MCDI2;
+	emlp->eml_max_n_counters = maep->em_max_ncounters;
 
 	return (0);
 
-- 
2.30.2



* [dpdk-dev] [PATCH 18/20] common/sfc_efx/base: add packetiser packet format definition
  2021-05-27 15:24 [dpdk-dev] [PATCH 00/20] net/sfc: support flow API COUNT action Andrew Rybchenko
                   ` (16 preceding siblings ...)
  2021-05-27 15:25 ` [dpdk-dev] [PATCH 17/20] common/sfc_efx/base: add max MAE counters to limits Andrew Rybchenko
@ 2021-05-27 15:25 ` Andrew Rybchenko
  2021-05-27 15:25 ` [dpdk-dev] [PATCH 19/20] net/sfc: support flow action COUNT in transfer rules Andrew Rybchenko
                   ` (4 subsequent siblings)
  22 siblings, 0 replies; 104+ messages in thread
From: Andrew Rybchenko @ 2021-05-27 15:25 UTC (permalink / raw)
  To: dev; +Cc: Andy Moreton

The packetiser composes packets carrying MAE counter updates.
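
As an illustration, the 48-bit packet count of one payload word could be
assembled from its LO/HI parts as follows (a sketch using plain byte access
and assuming little-endian field layout; the driver will use libefx field
access macros instead):

	static uint64_t
	packetiser_pkt_count(const uint8_t *payload /* 16-byte payload word */)
	{
		uint32_t lo;
		uint16_t hi;

		memcpy(&lo, payload +
		       ERF_SC_PACKETISER_PAYLOAD_PACKET_COUNT_LO_OFST,
		       sizeof(lo));
		memcpy(&hi, payload +
		       ERF_SC_PACKETISER_PAYLOAD_PACKET_COUNT_HI_OFST,
		       sizeof(hi));

		return ((uint64_t)lo | ((uint64_t)hi << 32));
	}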

Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
---
 .../base/efx_regs_counters_pkt_format.h       | 87 +++++++++++++++++++
 1 file changed, 87 insertions(+)
 create mode 100644 drivers/common/sfc_efx/base/efx_regs_counters_pkt_format.h

diff --git a/drivers/common/sfc_efx/base/efx_regs_counters_pkt_format.h b/drivers/common/sfc_efx/base/efx_regs_counters_pkt_format.h
new file mode 100644
index 0000000000..6610d07dc0
--- /dev/null
+++ b/drivers/common/sfc_efx/base/efx_regs_counters_pkt_format.h
@@ -0,0 +1,87 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ *
+ * Copyright(c) 2020-2021 Xilinx, Inc.
+ */
+
+#ifndef	_SYS_EFX_REGS_COUNTERS_PKT_FORMAT_H
+#define	_SYS_EFX_REGS_COUNTERS_PKT_FORMAT_H
+
+/*
+ * Packetiser packet format definition.
+ * SF-122415-TC - OVS Counter Design Specification section 7
+ * Primary copy of the header is located in the smartnic_registry repo:
+ * src/ovs_counter/packetiser_packet_format.h
+ */
+
+/*------------------------------------------------------------*/
+/*
+ * ER_RX_SL_PACKETISER_HEADER_WORD(160bit):
+ *
+ */
+#define	ER_RX_SL_PACKETISER_HEADER_WORD_SIZE 20
+
+#define	ERF_SC_PACKETISER_HEADER_VERSION_LBN 0
+#define	ERF_SC_PACKETISER_HEADER_VERSION_WIDTH 8
+/* Deprecated, use ERF_SC_PACKETISER_HEADER_VERSION_2 instead */
+#define	ERF_SC_PACKETISER_HEADER_VERSION_VALUE 2
+#define	ERF_SC_PACKETISER_HEADER_VERSION_2 2
+#define	ERF_SC_PACKETISER_HEADER_IDENTIFIER_LBN 8
+#define	ERF_SC_PACKETISER_HEADER_IDENTIFIER_WIDTH 8
+#define	ERF_SC_PACKETISER_HEADER_IDENTIFIER_AR 0
+#define	ERF_SC_PACKETISER_HEADER_IDENTIFIER_CT 1
+#define	ERF_SC_PACKETISER_HEADER_HEADER_OFFSET_LBN 16
+#define	ERF_SC_PACKETISER_HEADER_HEADER_OFFSET_WIDTH 8
+#define	ERF_SC_PACKETISER_HEADER_HEADER_OFFSET_DEFAULT 0x4
+#define	ERF_SC_PACKETISER_HEADER_PAYLOAD_OFFSET_LBN 24
+#define	ERF_SC_PACKETISER_HEADER_PAYLOAD_OFFSET_WIDTH 8
+#define	ERF_SC_PACKETISER_HEADER_PAYLOAD_OFFSET_DEFAULT 0x14
+#define	ERF_SC_PACKETISER_HEADER_INDEX_LBN 32
+#define	ERF_SC_PACKETISER_HEADER_INDEX_WIDTH 16
+#define	ERF_SC_PACKETISER_HEADER_COUNT_LBN 48
+#define	ERF_SC_PACKETISER_HEADER_COUNT_WIDTH 16
+#define	ERF_SC_PACKETISER_HEADER_RESERVED_0_LBN 64
+#define	ERF_SC_PACKETISER_HEADER_RESERVED_0_WIDTH 32
+#define	ERF_SC_PACKETISER_HEADER_RESERVED_1_LBN 96
+#define	ERF_SC_PACKETISER_HEADER_RESERVED_1_WIDTH 32
+#define	ERF_SC_PACKETISER_HEADER_RESERVED_2_LBN 128
+#define	ERF_SC_PACKETISER_HEADER_RESERVED_2_WIDTH 32
+
+
+/*------------------------------------------------------------*/
+/*
+ * ER_RX_SL_PACKETISER_PAYLOAD_WORD(128bit):
+ *
+ */
+#define	ER_RX_SL_PACKETISER_PAYLOAD_WORD_SIZE 16
+
+#define	ERF_SC_PACKETISER_PAYLOAD_COUNTER_INDEX_LBN 0
+#define	ERF_SC_PACKETISER_PAYLOAD_COUNTER_INDEX_WIDTH 24
+#define	ERF_SC_PACKETISER_PAYLOAD_RESERVED_LBN 24
+#define	ERF_SC_PACKETISER_PAYLOAD_RESERVED_WIDTH 8
+#define	ERF_SC_PACKETISER_PAYLOAD_PACKET_COUNT_OFST 4
+#define	ERF_SC_PACKETISER_PAYLOAD_PACKET_COUNT_SIZE 6
+#define	ERF_SC_PACKETISER_PAYLOAD_PACKET_COUNT_LBN 32
+#define	ERF_SC_PACKETISER_PAYLOAD_PACKET_COUNT_WIDTH 48
+#define	ERF_SC_PACKETISER_PAYLOAD_PACKET_COUNT_LO_OFST 4
+#define	ERF_SC_PACKETISER_PAYLOAD_PACKET_COUNT_LO_SIZE 4
+#define	ERF_SC_PACKETISER_PAYLOAD_PACKET_COUNT_LO_LBN 32
+#define	ERF_SC_PACKETISER_PAYLOAD_PACKET_COUNT_LO_WIDTH 32
+#define	ERF_SC_PACKETISER_PAYLOAD_PACKET_COUNT_HI_OFST 8
+#define	ERF_SC_PACKETISER_PAYLOAD_PACKET_COUNT_HI_SIZE 2
+#define	ERF_SC_PACKETISER_PAYLOAD_PACKET_COUNT_HI_LBN 64
+#define	ERF_SC_PACKETISER_PAYLOAD_PACKET_COUNT_HI_WIDTH 16
+#define	ERF_SC_PACKETISER_PAYLOAD_BYTE_COUNT_OFST 10
+#define	ERF_SC_PACKETISER_PAYLOAD_BYTE_COUNT_SIZE 6
+#define	ERF_SC_PACKETISER_PAYLOAD_BYTE_COUNT_LBN 80
+#define	ERF_SC_PACKETISER_PAYLOAD_BYTE_COUNT_WIDTH 48
+#define	ERF_SC_PACKETISER_PAYLOAD_BYTE_COUNT_LO_OFST 10
+#define	ERF_SC_PACKETISER_PAYLOAD_BYTE_COUNT_LO_SIZE 2
+#define	ERF_SC_PACKETISER_PAYLOAD_BYTE_COUNT_LO_LBN 80
+#define	ERF_SC_PACKETISER_PAYLOAD_BYTE_COUNT_LO_WIDTH 16
+#define	ERF_SC_PACKETISER_PAYLOAD_BYTE_COUNT_HI_OFST 12
+#define	ERF_SC_PACKETISER_PAYLOAD_BYTE_COUNT_HI_SIZE 4
+#define	ERF_SC_PACKETISER_PAYLOAD_BYTE_COUNT_HI_LBN 96
+#define	ERF_SC_PACKETISER_PAYLOAD_BYTE_COUNT_HI_WIDTH 32
+
+
+#endif /* _SYS_EFX_REGS_COUNTERS_PKT_FORMAT_H */
-- 
2.30.2



* [dpdk-dev] [PATCH 19/20] net/sfc: support flow action COUNT in transfer rules
  2021-05-27 15:24 [dpdk-dev] [PATCH 00/20] net/sfc: support flow API COUNT action Andrew Rybchenko
                   ` (17 preceding siblings ...)
  2021-05-27 15:25 ` [dpdk-dev] [PATCH 18/20] common/sfc_efx/base: add packetiser packet format definition Andrew Rybchenko
@ 2021-05-27 15:25 ` Andrew Rybchenko
  2021-05-27 15:25 ` [dpdk-dev] [PATCH 20/20] net/sfc: support flow API query for count actions Andrew Rybchenko
                   ` (3 subsequent siblings)
  22 siblings, 0 replies; 104+ messages in thread
From: Andrew Rybchenko @ 2021-05-27 15:25 UTC (permalink / raw)
  To: dev; +Cc: Igor Romanov, Andy Moreton, Ivan Malov

From: Igor Romanov <igor.romanov@oktetlabs.ru>

For now, a rule may have only one dedicated counter; shared counters
are not supported.

HW delivers (or "streams") counter readings using special packets.
The driver creates a dedicated Rx queue to receive such packets
and requests that HW start "streaming" the readings to it.

The counter queue is polled periodically, and the first available
service core is used for that. Hence, the user has to specify at least
one service core for counters to work. Such a core is shared by all
MAE-capable devices managed by the sfc driver.
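
From the application standpoint, the counter is requested via the generic
flow API, e.g. (a minimal sketch; attr, pattern and port_id_conf are assumed
to be prepared elsewhere, and at least one service core must be present,
e.g. via the EAL service coremask option):

	struct rte_flow_action_count count_conf = { .shared = 0, .id = 0 };
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_COUNT, .conf = &count_conf },
		{ .type = RTE_FLOW_ACTION_TYPE_PORT_ID, .conf = &port_id_conf },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};
	struct rte_flow_error error;
	struct rte_flow *flow;

	flow = rte_flow_create(port_id, &attr, pattern, actions, &error);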

Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
Reviewed-by: Ivan Malov <ivan.malov@oktetlabs.ru>
---
 drivers/net/sfc/meson.build       |  10 +
 drivers/net/sfc/sfc_flow.c        |   7 +
 drivers/net/sfc/sfc_mae.c         | 231 +++++++++++-
 drivers/net/sfc/sfc_mae.h         |  60 ++++
 drivers/net/sfc/sfc_mae_counter.c | 578 ++++++++++++++++++++++++++++++
 drivers/net/sfc/sfc_mae_counter.h |  11 +
 drivers/net/sfc/sfc_stats.h       |  80 +++++
 drivers/net/sfc/sfc_tweak.h       |   9 +
 8 files changed, 981 insertions(+), 5 deletions(-)
 create mode 100644 drivers/net/sfc/sfc_stats.h

diff --git a/drivers/net/sfc/meson.build b/drivers/net/sfc/meson.build
index f8880f740a..32b58e3d76 100644
--- a/drivers/net/sfc/meson.build
+++ b/drivers/net/sfc/meson.build
@@ -39,6 +39,16 @@ foreach flag: extra_flags
     endif
 endforeach
 
+# for clang 32-bit compiles we need libatomic for 64-bit atomic ops
+if cc.get_id() == 'clang' and dpdk_conf.get('RTE_ARCH_64') == false
+    ext_deps += cc.find_library('atomic')
+endif
+
+# for gcc compiles we need -latomic for 128-bit atomic ops
+if cc.get_id() == 'gcc'
+    ext_deps += cc.find_library('atomic')
+endif
+
 deps += ['common_sfc_efx', 'bus_pci']
 sources = files(
         'sfc_ethdev.c',
diff --git a/drivers/net/sfc/sfc_flow.c b/drivers/net/sfc/sfc_flow.c
index 2db8af1759..1294dbd3a7 100644
--- a/drivers/net/sfc/sfc_flow.c
+++ b/drivers/net/sfc/sfc_flow.c
@@ -24,6 +24,7 @@
 #include "sfc_flow.h"
 #include "sfc_log.h"
 #include "sfc_dp_rx.h"
+#include "sfc_mae_counter.h"
 
 struct sfc_flow_ops_by_spec {
 	sfc_flow_parse_cb_t	*parse;
@@ -2854,6 +2855,12 @@ sfc_flow_stop(struct sfc_adapter *sa)
 		efx_rx_scale_context_free(sa->nic, rss->dummy_rss_context);
 		rss->dummy_rss_context = EFX_RSS_CONTEXT_DEFAULT;
 	}
+
+	/*
+	 * The MAE counter service is not stopped on flow rule removal to
+	 * avoid extra work. Make sure that it is stopped here.
+	 */
+	sfc_mae_counter_stop(sa);
 }
 
 int
diff --git a/drivers/net/sfc/sfc_mae.c b/drivers/net/sfc/sfc_mae.c
index e603ffbdb4..370a39da1d 100644
--- a/drivers/net/sfc/sfc_mae.c
+++ b/drivers/net/sfc/sfc_mae.c
@@ -19,6 +19,7 @@
 #include "sfc_mae_counter.h"
 #include "sfc_log.h"
 #include "sfc_switch.h"
+#include "sfc_service.h"
 
 static int
 sfc_mae_assign_entity_mport(struct sfc_adapter *sa,
@@ -30,6 +31,19 @@ sfc_mae_assign_entity_mport(struct sfc_adapter *sa,
 					      mportp);
 }
 
+static int
+sfc_mae_counter_registry_init(struct sfc_mae_counter_registry *registry,
+			      uint32_t nb_counters_max)
+{
+	return sfc_mae_counters_init(&registry->counters, nb_counters_max);
+}
+
+static void
+sfc_mae_counter_registry_fini(struct sfc_mae_counter_registry *registry)
+{
+	sfc_mae_counters_fini(&registry->counters);
+}
+
 int
 sfc_mae_attach(struct sfc_adapter *sa)
 {
@@ -59,6 +73,15 @@ sfc_mae_attach(struct sfc_adapter *sa)
 	if (rc != 0)
 		goto fail_mae_get_limits;
 
+	sfc_log_init(sa, "init MAE counter registry");
+	rc = sfc_mae_counter_registry_init(&mae->counter_registry,
+					   limits.eml_max_n_counters);
+	if (rc != 0) {
+		sfc_err(sa, "failed to init MAE counters registry for %u entries: %s",
+			limits.eml_max_n_counters, rte_strerror(rc));
+		goto fail_counter_registry_init;
+	}
+
 	sfc_log_init(sa, "assign entity MPORT");
 	rc = sfc_mae_assign_entity_mport(sa, &entity_mport);
 	if (rc != 0)
@@ -107,6 +130,9 @@ sfc_mae_attach(struct sfc_adapter *sa)
 fail_mae_assign_switch_port:
 fail_mae_assign_switch_domain:
 fail_mae_assign_entity_mport:
+	sfc_mae_counter_registry_fini(&mae->counter_registry);
+
+fail_counter_registry_init:
 fail_mae_get_limits:
 	efx_mae_fini(sa->nic);
 
@@ -131,6 +157,7 @@ sfc_mae_detach(struct sfc_adapter *sa)
 		return;
 
 	rte_free(mae->bounce_eh.buf);
+	sfc_mae_counter_registry_fini(&mae->counter_registry);
 
 	efx_mae_fini(sa->nic);
 
@@ -480,9 +507,72 @@ sfc_mae_encap_header_disable(struct sfc_adapter *sa,
 	--(fw_rsrc->refcnt);
 }
 
+static int
+sfc_mae_counters_enable(struct sfc_adapter *sa,
+			struct sfc_mae_counter_id *counters,
+			unsigned int n_counters,
+			efx_mae_actions_t *action_set_spec)
+{
+	int rc;
+
+	sfc_log_init(sa, "entry");
+
+	if (n_counters == 0) {
+		sfc_log_init(sa, "no counters - skip");
+		return 0;
+	}
+
+	SFC_ASSERT(sfc_adapter_is_locked(sa));
+	SFC_ASSERT(n_counters == 1);
+
+	rc = sfc_mae_counter_enable(sa, &counters[0]);
+	if (rc != 0) {
+		sfc_err(sa, "failed to enable MAE counter %u: %s",
+			counters[0].mae_id.id, rte_strerror(rc));
+		goto fail_counter_add;
+	}
+
+	rc = efx_mae_action_set_fill_in_counter_id(action_set_spec,
+						   &counters[0].mae_id);
+	if (rc != 0) {
+		sfc_err(sa, "failed to fill in MAE counter %u in action set: %s",
+			counters[0].mae_id.id, rte_strerror(rc));
+		goto fail_fill_in_id;
+	}
+
+	return 0;
+
+fail_fill_in_id:
+	(void)sfc_mae_counter_disable(sa, &counters[0]);
+
+fail_counter_add:
+	sfc_log_init(sa, "failed: %s", rte_strerror(rc));
+	return rc;
+}
+
+static int
+sfc_mae_counters_disable(struct sfc_adapter *sa,
+			 struct sfc_mae_counter_id *counters,
+			 unsigned int n_counters)
+{
+	if (n_counters == 0)
+		return 0;
+
+	SFC_ASSERT(sfc_adapter_is_locked(sa));
+	SFC_ASSERT(n_counters == 1);
+
+	if (counters[0].mae_id.id == EFX_MAE_RSRC_ID_INVALID) {
+		sfc_err(sa, "failed to disable: already disabled");
+		return EALREADY;
+	}
+
+	return sfc_mae_counter_disable(sa, &counters[0]);
+}
+
 static struct sfc_mae_action_set *
 sfc_mae_action_set_attach(struct sfc_adapter *sa,
 			  const struct sfc_mae_encap_header *encap_header,
+			  unsigned int n_count,
 			  const efx_mae_actions_t *spec)
 {
 	struct sfc_mae_action_set *action_set;
@@ -491,7 +581,12 @@ sfc_mae_action_set_attach(struct sfc_adapter *sa,
 	SFC_ASSERT(sfc_adapter_is_locked(sa));
 
 	TAILQ_FOREACH(action_set, &mae->action_sets, entries) {
+		/*
+		 * Shared counters are not supported, hence action sets with
+		 * COUNT are not attachable.
+		 */
 		if (action_set->encap_header == encap_header &&
+		    n_count == 0 &&
 		    efx_mae_action_set_specs_equal(action_set->spec, spec)) {
 			sfc_dbg(sa, "attaching to action_set=%p", action_set);
 			++(action_set->refcnt);
@@ -504,18 +599,52 @@ sfc_mae_action_set_attach(struct sfc_adapter *sa,
 
 static int
 sfc_mae_action_set_add(struct sfc_adapter *sa,
+		       const struct rte_flow_action actions[],
 		       efx_mae_actions_t *spec,
 		       struct sfc_mae_encap_header *encap_header,
+		       unsigned int n_counters,
 		       struct sfc_mae_action_set **action_setp)
 {
 	struct sfc_mae_action_set *action_set;
 	struct sfc_mae *mae = &sa->mae;
+	unsigned int i;
 
 	SFC_ASSERT(sfc_adapter_is_locked(sa));
 
 	action_set = rte_zmalloc("sfc_mae_action_set", sizeof(*action_set), 0);
-	if (action_set == NULL)
+	if (action_set == NULL) {
+		sfc_err(sa, "failed to alloc action set");
 		return ENOMEM;
+	}
+
+	if (n_counters > 0) {
+		const struct rte_flow_action *action;
+
+		action_set->counters = rte_malloc("sfc_mae_counter_ids",
+			sizeof(action_set->counters[0]) * n_counters, 0);
+		if (action_set->counters == NULL) {
+			rte_free(action_set);
+			sfc_err(sa, "failed to alloc counters");
+			return ENOMEM;
+		}
+
+		for (action = actions, i = 0;
+		     action->type != RTE_FLOW_ACTION_TYPE_END && i < n_counters;
+		     ++action) {
+			const struct rte_flow_action_count *conf;
+
+			if (action->type != RTE_FLOW_ACTION_TYPE_COUNT)
+				continue;
+
+			conf = action->conf;
+
+			action_set->counters[i].mae_id.id =
+				EFX_MAE_RSRC_ID_INVALID;
+			action_set->counters[i].rte_id = conf->id;
+			i++;
+		}
+		action_set->n_counters = n_counters;
+	}
 
 	action_set->refcnt = 1;
 	action_set->spec = spec;
@@ -555,6 +684,12 @@ sfc_mae_action_set_del(struct sfc_adapter *sa,
 
 	efx_mae_action_set_spec_fini(sa->nic, action_set->spec);
 	sfc_mae_encap_header_del(sa, action_set->encap_header);
+	if (action_set->n_counters > 0) {
+		SFC_ASSERT(action_set->n_counters == 1);
+		SFC_ASSERT(action_set->counters[0].mae_id.id ==
+			   EFX_MAE_RSRC_ID_INVALID);
+		rte_free(action_set->counters);
+	}
 	TAILQ_REMOVE(&mae->action_sets, action_set, entries);
 	rte_free(action_set);
 
@@ -566,6 +701,7 @@ sfc_mae_action_set_enable(struct sfc_adapter *sa,
 			  struct sfc_mae_action_set *action_set)
 {
 	struct sfc_mae_encap_header *encap_header = action_set->encap_header;
+	struct sfc_mae_counter_id *counters = action_set->counters;
 	struct sfc_mae_fw_rsrc *fw_rsrc = &action_set->fw_rsrc;
 	int rc;
 
@@ -580,14 +716,26 @@ sfc_mae_action_set_enable(struct sfc_adapter *sa,
 		if (rc != 0)
 			return rc;
 
-		rc = efx_mae_action_set_alloc(sa->nic, action_set->spec,
-					      &fw_rsrc->aset_id);
+		rc = sfc_mae_counters_enable(sa, counters,
+					     action_set->n_counters,
+					     action_set->spec);
 		if (rc != 0) {
+			sfc_err(sa, "failed to enable %u MAE counters: %s",
+				action_set->n_counters, rte_strerror(rc));
+
 			sfc_mae_encap_header_disable(sa, encap_header);
+			return rc;
+		}
 
+		rc = efx_mae_action_set_alloc(sa->nic, action_set->spec,
+					      &fw_rsrc->aset_id);
+		if (rc != 0) {
 			sfc_err(sa, "failed to enable action_set=%p: %s",
 				action_set, strerror(rc));
 
+			(void)sfc_mae_counters_disable(sa, counters,
+						       action_set->n_counters);
+			sfc_mae_encap_header_disable(sa, encap_header);
 			return rc;
 		}
 
@@ -627,6 +775,13 @@ sfc_mae_action_set_disable(struct sfc_adapter *sa,
 		}
 		fw_rsrc->aset_id.id = EFX_MAE_RSRC_ID_INVALID;
 
+		rc = sfc_mae_counters_disable(sa, action_set->counters,
+					      action_set->n_counters);
+		if (rc != 0) {
+			sfc_err(sa, "failed to disable %u MAE counters: %s",
+				action_set->n_counters, rte_strerror(rc));
+		}
+
 		sfc_mae_encap_header_disable(sa, action_set->encap_header);
 	}
 
@@ -2598,6 +2753,48 @@ sfc_mae_rule_parse_action_mark(const struct rte_flow_action_mark *conf,
 	return efx_mae_action_set_populate_mark(spec, conf->id);
 }
 
+static int
+sfc_mae_rule_parse_action_count(struct sfc_adapter *sa,
+				const struct rte_flow_action_count *conf,
+				efx_mae_actions_t *spec)
+{
+	int rc;
+
+	if (conf->shared) {
+		rc = ENOTSUP;
+		goto fail_counter_shared;
+	}
+
+	if ((sa->counter_rxq.state & SFC_COUNTER_RXQ_INITIALIZED) == 0) {
+		sfc_err(sa,
+			"counter queue is not configured for COUNT action");
+		rc = EINVAL;
+		goto fail_counter_queue_uninit;
+	}
+
+	if (sfc_get_service_lcore(SOCKET_ID_ANY) == RTE_MAX_LCORE) {
+		rc = EINVAL;
+		goto fail_no_service_core;
+	}
+
+	rc = efx_mae_action_set_populate_count(spec);
+	if (rc != 0) {
+		sfc_err(sa,
+			"failed to populate counters in MAE action set: %s",
+			rte_strerror(rc));
+		goto fail_populate_count;
+	}
+
+	return 0;
+
+fail_populate_count:
+fail_no_service_core:
+fail_counter_queue_uninit:
+fail_counter_shared:
+
+	return rc;
+}
+
 static int
 sfc_mae_rule_parse_action_phy_port(struct sfc_adapter *sa,
 				   const struct rte_flow_action_phy_port *conf,
@@ -2713,6 +2910,11 @@ sfc_mae_rule_parse_action(struct sfc_adapter *sa,
 							   spec, error);
 		custom_error = B_TRUE;
 		break;
+	case RTE_FLOW_ACTION_TYPE_COUNT:
+		SFC_BUILD_SET_OVERFLOW(RTE_FLOW_ACTION_TYPE_COUNT,
+				       bundle->actions_mask);
+		rc = sfc_mae_rule_parse_action_count(sa, action->conf, spec);
+		break;
 	case RTE_FLOW_ACTION_TYPE_FLAG:
 		SFC_BUILD_SET_OVERFLOW(RTE_FLOW_ACTION_TYPE_FLAG,
 				       bundle->actions_mask);
@@ -2798,6 +3000,7 @@ sfc_mae_rule_parse_actions(struct sfc_adapter *sa,
 	const struct rte_flow_action *action;
 	struct sfc_mae *mae = &sa->mae;
 	efx_mae_actions_t *spec;
+	unsigned int n_count;
 	int rc;
 
 	rte_errno = 0;
@@ -2835,15 +3038,22 @@ sfc_mae_rule_parse_actions(struct sfc_adapter *sa,
 	if (rc != 0)
 		goto fail_process_encap_header;
 
+	n_count = efx_mae_action_set_get_nb_count(spec);
+	if (n_count > 1) {
+		rc = ENOTSUP;
+		sfc_err(sa, "too many count actions requested: %u", n_count);
+		goto fail_nb_count;
+	}
+
 	spec_mae->action_set = sfc_mae_action_set_attach(sa, encap_header,
-							 spec);
+							 n_count, spec);
 	if (spec_mae->action_set != NULL) {
 		sfc_mae_encap_header_del(sa, encap_header);
 		efx_mae_action_set_spec_fini(sa->nic, spec);
 		return 0;
 	}
 
-	rc = sfc_mae_action_set_add(sa, spec, encap_header,
+	rc = sfc_mae_action_set_add(sa, actions, spec, encap_header, n_count,
 				    &spec_mae->action_set);
 	if (rc != 0)
 		goto fail_action_set_add;
@@ -2851,6 +3061,7 @@ sfc_mae_rule_parse_actions(struct sfc_adapter *sa,
 	return 0;
 
 fail_action_set_add:
+fail_nb_count:
 	sfc_mae_encap_header_del(sa, encap_header);
 
 fail_process_encap_header:
@@ -3005,6 +3216,15 @@ sfc_mae_flow_insert(struct sfc_adapter *sa,
 	if (rc != 0)
 		goto fail_action_set_enable;
 
+	if (action_set->n_counters > 0) {
+		rc = sfc_mae_counter_start(sa);
+		if (rc != 0) {
+			sfc_err(sa, "failed to start MAE counters support: %s",
+				rte_strerror(rc));
+			goto fail_mae_counter_start;
+		}
+	}
+
 	rc = efx_mae_action_rule_insert(sa->nic, spec_mae->match_spec,
 					NULL, &fw_rsrc->aset_id,
 					&spec_mae->rule_id);
@@ -3017,6 +3237,7 @@ sfc_mae_flow_insert(struct sfc_adapter *sa,
 	return 0;
 
 fail_action_rule_insert:
+fail_mae_counter_start:
 	sfc_mae_action_set_disable(sa, action_set);
 
 fail_action_set_enable:
diff --git a/drivers/net/sfc/sfc_mae.h b/drivers/net/sfc/sfc_mae.h
index 0241fe33c4..15fe5ebca5 100644
--- a/drivers/net/sfc/sfc_mae.h
+++ b/drivers/net/sfc/sfc_mae.h
@@ -16,6 +16,8 @@
 
 #include "efx.h"
 
+#include "sfc_stats.h"
+
 #ifdef __cplusplus
 extern "C" {
 #endif
@@ -54,10 +56,20 @@ struct sfc_mae_encap_header {
 
 TAILQ_HEAD(sfc_mae_encap_headers, sfc_mae_encap_header);
 
+/* Counter ID */
+struct sfc_mae_counter_id {
+	/* ID of a counter in MAE */
+	efx_counter_t			mae_id;
+	/* ID of a counter in RTE */
+	uint32_t			rte_id;
+};
+
 /** Action set registry entry */
 struct sfc_mae_action_set {
 	TAILQ_ENTRY(sfc_mae_action_set)	entries;
 	unsigned int			refcnt;
+	struct sfc_mae_counter_id	*counters;
+	uint32_t			n_counters;
 	efx_mae_actions_t		*spec;
 	struct sfc_mae_encap_header	*encap_header;
 	struct sfc_mae_fw_rsrc		fw_rsrc;
@@ -83,6 +95,50 @@ struct sfc_mae_bounce_eh {
 	efx_tunnel_protocol_t		type;
 };
 
+/** Counter collection entry */
+struct sfc_mae_counter {
+	bool				inuse;
+	uint32_t			generation_count;
+	union sfc_pkts_bytes		value;
+	union sfc_pkts_bytes		reset;
+};
+
+struct sfc_mae_counters_xstats {
+	uint64_t			not_inuse_update;
+	uint64_t			realloc_update;
+};
+
+struct sfc_mae_counters {
+	/** An array of all MAE counters */
+	struct sfc_mae_counter		*mae_counters;
+	/** Extra statistics for counters */
+	struct sfc_mae_counters_xstats	xstats;
+	/** Count of all MAE counters */
+	unsigned int			n_mae_counters;
+};
+
+struct sfc_mae_counter_registry {
+	/* Common counter information */
+	/** Counters collection */
+	struct sfc_mae_counters		counters;
+
+	/* Information used by counter update service */
+	/** Callback to get packets from RxQ */
+	eth_rx_burst_t			rx_pkt_burst;
+	/** Data for the callback to get packets */
+	struct sfc_dp_rxq		*rx_dp;
+	/** Number of buffers pushed to the RxQ */
+	unsigned int			pushed_n_buffers;
+	/** Are credits used by counter stream */
+	bool				use_credits;
+
+	/* Information used by configuration routines */
+	/** Counter service core ID */
+	uint32_t			service_core_id;
+	/** Counter service ID */
+	uint32_t			service_id;
+};
+
 struct sfc_mae {
 	/** Assigned switch domain identifier */
 	uint16_t			switch_domain_id;
@@ -104,6 +160,10 @@ struct sfc_mae {
 	struct sfc_mae_action_sets	action_sets;
 	/** Encap. header bounce buffer */
 	struct sfc_mae_bounce_eh	bounce_eh;
+	/** Flag indicating whether counter-only RxQ is running */
+	bool				counter_rxq_running;
+	/** Counter registry */
+	struct sfc_mae_counter_registry	counter_registry;
 };
 
 struct sfc_adapter;
diff --git a/drivers/net/sfc/sfc_mae_counter.c b/drivers/net/sfc/sfc_mae_counter.c
index c7646cf7b1..3aeb37f7ec 100644
--- a/drivers/net/sfc/sfc_mae_counter.c
+++ b/drivers/net/sfc/sfc_mae_counter.c
@@ -4,8 +4,10 @@
  */
 
 #include <rte_common.h>
+#include <rte_service_component.h>
 
 #include "efx.h"
+#include "efx_regs_counters_pkt_format.h"
 
 #include "sfc_ev.h"
 #include "sfc.h"
@@ -49,6 +51,520 @@ sfc_mae_counter_rxq_required(struct sfc_adapter *sa)
 	return true;
 }
 
+int
+sfc_mae_counter_enable(struct sfc_adapter *sa,
+		       struct sfc_mae_counter_id *counterp)
+{
+	struct sfc_mae_counter_registry *reg = &sa->mae.counter_registry;
+	struct sfc_mae_counters *counters = &reg->counters;
+	struct sfc_mae_counter *p;
+	efx_counter_t mae_counter;
+	uint32_t generation_count;
+	uint32_t unused;
+	int rc;
+
+	/*
+	 * The actual count of counters allocated is ignored since a failure
+	 * to allocate a single counter is indicated by a non-zero return code.
+	 */
+	rc = efx_mae_counters_alloc(sa->nic, 1, &unused, &mae_counter,
+				    &generation_count);
+	if (rc != 0) {
+		sfc_err(sa, "failed to alloc MAE counter: %s",
+			rte_strerror(rc));
+		goto fail_mae_counter_alloc;
+	}
+
+	if (mae_counter.id >= counters->n_mae_counters) {
+		/*
+		 * A counter ID is expected to be within the range from 0
+		 * to the maximum count of counters, so that it always fits
+		 * into the array pre-allocated for the maximum counter ID.
+		 */
+		sfc_err(sa, "MAE counter ID is out of expected range");
+		rc = EFAULT;
+		goto fail_counter_id_range;
+	}
+
+	counterp->mae_id = mae_counter;
+
+	p = &counters->mae_counters[mae_counter.id];
+
+	/*
+	 * Ordering is relaxed since it is the only operation on counter value.
+	 * And it does not depend on different stores/loads in other threads.
+	 * Paired with relaxed ordering in counter increment.
+	 */
+	__atomic_store(&p->reset.pkts_bytes.int128,
+		       &p->value.pkts_bytes.int128, __ATOMIC_RELAXED);
+	p->generation_count = generation_count;
+
+	/*
+	 * The flag is set at the very end of add operation and reset
+	 * at the beginning of delete operation. Release ordering is
+	 * paired with acquire ordering on load in counter increment operation.
+	 */
+	__atomic_store_n(&p->inuse, true, __ATOMIC_RELEASE);
+
+	sfc_info(sa, "enabled MAE counter #%u with reset pkts=%" PRIu64
+		 " bytes=%" PRIu64, mae_counter.id,
+		 p->reset.pkts, p->reset.bytes);
+
+	return 0;
+
+fail_counter_id_range:
+	(void)efx_mae_counters_free(sa->nic, 1, &unused, &mae_counter, NULL);
+
+fail_mae_counter_alloc:
+	sfc_log_init(sa, "failed: %s", rte_strerror(rc));
+	return rc;
+}
+
+int
+sfc_mae_counter_disable(struct sfc_adapter *sa,
+			struct sfc_mae_counter_id *counter)
+{
+	struct sfc_mae_counter_registry *reg = &sa->mae.counter_registry;
+	struct sfc_mae_counters *counters = &reg->counters;
+	struct sfc_mae_counter *p;
+	uint32_t unused;
+	int rc;
+
+	if (counter->mae_id.id == EFX_MAE_RSRC_ID_INVALID)
+		return 0;
+
+	SFC_ASSERT(counter->mae_id.id < counters->n_mae_counters);
+	/*
+	 * The flag is set at the very end of add operation and reset
+	 * at the beginning of delete operation. Release ordering is
+	 * paired with acquire ordering on load in counter increment operation.
+	 */
+	p = &counters->mae_counters[counter->mae_id.id];
+	__atomic_store_n(&p->inuse, false, __ATOMIC_RELEASE);
+
+	rc = efx_mae_counters_free(sa->nic, 1, &unused, &counter->mae_id, NULL);
+	if (rc != 0)
+		sfc_err(sa, "failed to free MAE counter %u: %s",
+			counter->mae_id.id, rte_strerror(rc));
+
+	sfc_info(sa, "disabled MAE counter #%u with reset pkts=%" PRIu64
+		 " bytes=%" PRIu64, counter->mae_id.id,
+		 p->reset.pkts, p->reset.bytes);
+
+	/*
+	 * Do this regardless of the efx_mae_counters_free() return value.
+	 * If it fails, the resulting resource leakage is unfortunate, but
+	 * nothing sensible can be done about it here.
+	 */
+	counter->mae_id.id = EFX_MAE_RSRC_ID_INVALID;
+
+	return rc;
+}
+
+static void
+sfc_mae_counter_increment(struct sfc_adapter *sa,
+			  struct sfc_mae_counters *counters,
+			  uint32_t mae_counter_id,
+			  uint32_t generation_count,
+			  uint64_t pkts, uint64_t bytes)
+{
+	struct sfc_mae_counter *p = &counters->mae_counters[mae_counter_id];
+	struct sfc_mae_counters_xstats *xstats = &counters->xstats;
+	union sfc_pkts_bytes cnt_val;
+	bool inuse;
+
+	/*
+	 * Acquire ordering is paired with release ordering in counter add
+	 * and delete operations.
+	 */
+	__atomic_load(&p->inuse, &inuse, __ATOMIC_ACQUIRE);
+	if (!inuse) {
+		/*
+		 * Two cases are possible:
+		 * 1) The counter has just been allocated; an update that
+		 *    arrives too early cannot be processed properly.
+		 * 2) A stale update for a counter that was freed and not
+		 *    reallocated; there is no point in processing it.
+		 */
+		xstats->not_inuse_update++;
+		return;
+	}
+
+	if (unlikely(generation_count < p->generation_count)) {
+		/*
+		 * It is a stale update for the reallocated counter
+		 * (i.e., freed and the same ID allocated again).
+		 */
+		xstats->realloc_update++;
+		return;
+	}
+
+	cnt_val.pkts = p->value.pkts + pkts;
+	cnt_val.bytes = p->value.bytes + bytes;
+
+	/*
+	 * Ordering is relaxed since it is the only operation on counter value.
+	 * And it does not depend on different stores/loads in other threads.
+	 * Paired with relaxed ordering on counter reset.
+	 */
+	__atomic_store(&p->value.pkts_bytes,
+		       &cnt_val.pkts_bytes, __ATOMIC_RELAXED);
+
+	sfc_info(sa, "update MAE counter #%u: pkts+%" PRIu64 "=%" PRIu64
+		 ", bytes+%" PRIu64 "=%" PRIu64, mae_counter_id,
+		 pkts, cnt_val.pkts, bytes, cnt_val.bytes);
+}
+
+static void
+sfc_mae_parse_counter_packet(struct sfc_adapter *sa,
+			     struct sfc_mae_counter_registry *counter_registry,
+			     const struct rte_mbuf *m)
+{
+	uint32_t generation_count;
+	const efx_xword_t *hdr;
+	const efx_oword_t *counters_data;
+	unsigned int version;
+	unsigned int id;
+	unsigned int header_offset;
+	unsigned int payload_offset;
+	unsigned int counter_count;
+	unsigned int required_len;
+	unsigned int i;
+
+	if (unlikely(m->nb_segs != 1)) {
+		sfc_err(sa, "unexpectedly scattered MAE counters packet (%u segments)",
+			m->nb_segs);
+		return;
+	}
+
+	if (unlikely(m->data_len < ER_RX_SL_PACKETISER_HEADER_WORD_SIZE)) {
+		sfc_err(sa, "too short MAE counters packet (%u bytes)",
+			m->data_len);
+		return;
+	}
+
+	/*
+	 * The generation count is located in the Rx prefix in the USER_MARK
+	 * field, which is written into the hash.fdir.hi field of the mbuf. See
+	 * SF-123581-TC SmartNIC Datapath Offloads section 4.7.5 Counters.
+	 */
+	generation_count = m->hash.fdir.hi;
+
+	hdr = rte_pktmbuf_mtod(m, const efx_xword_t *);
+
+	version = EFX_XWORD_FIELD(*hdr, ERF_SC_PACKETISER_HEADER_VERSION);
+	if (unlikely(version != ERF_SC_PACKETISER_HEADER_VERSION_2)) {
+		sfc_err(sa, "unexpected MAE counters packet version %u",
+			version);
+		return;
+	}
+
+	id = EFX_XWORD_FIELD(*hdr, ERF_SC_PACKETISER_HEADER_IDENTIFIER);
+	if (unlikely(id != ERF_SC_PACKETISER_HEADER_IDENTIFIER_AR)) {
+		sfc_err(sa, "unexpected MAE counters source identifier %u", id);
+		return;
+	}
+
+	/* Packet layout definitions in fact assume a fixed header offset */
+	header_offset =
+		EFX_XWORD_FIELD(*hdr, ERF_SC_PACKETISER_HEADER_HEADER_OFFSET);
+	if (unlikely(header_offset !=
+		     ERF_SC_PACKETISER_HEADER_HEADER_OFFSET_DEFAULT)) {
+		sfc_err(sa, "unexpected MAE counters packet header offset %u",
+			header_offset);
+		return;
+	}
+
+	payload_offset =
+		EFX_XWORD_FIELD(*hdr, ERF_SC_PACKETISER_HEADER_PAYLOAD_OFFSET);
+
+	counter_count = EFX_XWORD_FIELD(*hdr, ERF_SC_PACKETISER_HEADER_COUNT);
+
+	required_len = payload_offset +
+			counter_count * sizeof(counters_data[0]);
+	if (unlikely(required_len > m->data_len)) {
+		sfc_err(sa, "truncated MAE counters packet: %u counters, packet length is %u vs %u required",
+			counter_count, m->data_len, required_len);
+		/*
+		 * In theory it is possible to process the available counters
+		 * data, but such a condition is really unexpected and it is
+		 * better to treat the entire packet as corrupted.
+		 */
+		return;
+	}
+
+	/* Ensure that counters data is 32-bit aligned */
+	if (unlikely(payload_offset % sizeof(uint32_t) != 0)) {
+		sfc_err(sa, "unsupported MAE counters payload offset %u, must be 32-bit aligned",
+			payload_offset);
+		return;
+	}
+	RTE_BUILD_BUG_ON(sizeof(counters_data[0]) !=
+			ER_RX_SL_PACKETISER_PAYLOAD_WORD_SIZE);
+
+	counters_data =
+		rte_pktmbuf_mtod_offset(m, const efx_oword_t *, payload_offset);
+
+	sfc_info(sa, "update %u MAE counters with gc=%u",
+		 counter_count, generation_count);
+
+	for (i = 0; i < counter_count; ++i) {
+		uint32_t packet_count_lo;
+		uint32_t packet_count_hi;
+		uint32_t byte_count_lo;
+		uint32_t byte_count_hi;
+
+		/*
+		 * Use 32-bit field accessors below since the counters data
+		 * is not 64-bit aligned.
+		 * 32-bit alignment is checked above, taking into account
+		 * that the start of packet data is 32-bit aligned
+		 * (cache-line aligned, in fact).
+		 */
+		packet_count_lo =
+			EFX_OWORD_FIELD32(counters_data[i],
+				ERF_SC_PACKETISER_PAYLOAD_PACKET_COUNT_LO);
+		packet_count_hi =
+			EFX_OWORD_FIELD32(counters_data[i],
+				ERF_SC_PACKETISER_PAYLOAD_PACKET_COUNT_HI);
+		byte_count_lo =
+			EFX_OWORD_FIELD32(counters_data[i],
+				ERF_SC_PACKETISER_PAYLOAD_BYTE_COUNT_LO);
+		byte_count_hi =
+			EFX_OWORD_FIELD32(counters_data[i],
+				ERF_SC_PACKETISER_PAYLOAD_BYTE_COUNT_HI);
+		sfc_mae_counter_increment(sa,
+			&counter_registry->counters,
+			EFX_OWORD_FIELD32(counters_data[i],
+				ERF_SC_PACKETISER_PAYLOAD_COUNTER_INDEX),
+			generation_count,
+			(uint64_t)packet_count_lo |
+			((uint64_t)packet_count_hi <<
+			 ERF_SC_PACKETISER_PAYLOAD_PACKET_COUNT_LO_WIDTH),
+			(uint64_t)byte_count_lo |
+			((uint64_t)byte_count_hi <<
+			 ERF_SC_PACKETISER_PAYLOAD_BYTE_COUNT_LO_WIDTH));
+	}
+}
+
+static int32_t
+sfc_mae_counter_routine(void *arg)
+{
+	struct sfc_adapter *sa = arg;
+	struct sfc_mae_counter_registry *counter_registry =
+		&sa->mae.counter_registry;
+	struct rte_mbuf *mbufs[SFC_MAE_COUNTER_RX_BURST];
+	unsigned int pushed_diff;
+	unsigned int pushed;
+	unsigned int i;
+	uint16_t n;
+	int rc;
+
+	n = counter_registry->rx_pkt_burst(counter_registry->rx_dp, mbufs,
+					   SFC_MAE_COUNTER_RX_BURST);
+
+	for (i = 0; i < n; i++)
+		sfc_mae_parse_counter_packet(sa, counter_registry, mbufs[i]);
+
+	rte_pktmbuf_free_bulk(mbufs, n);
+
+	if (!counter_registry->use_credits)
+		return 0;
+
+	pushed = sfc_rx_get_pushed(sa, counter_registry->rx_dp);
+	pushed_diff = pushed - counter_registry->pushed_n_buffers;
+
+	if (pushed_diff >= SFC_COUNTER_RXQ_REFILL_LEVEL) {
+		rc = efx_mae_counters_stream_give_credits(sa->nic, pushed_diff);
+		if (rc == 0) {
+			counter_registry->pushed_n_buffers = pushed;
+		} else {
+			/*
+			 * FIXME: counters might be important for the
+			 * application. Handle the error in order to recover
+			 * from the failure.
+			 */
+			SFC_GENERIC_LOG(DEBUG, "Give credits failed: %s",
+					rte_strerror(rc));
+		}
+	}
+
+	return 0;
+}
+
+static void
+sfc_mae_counter_service_unregister(struct sfc_adapter *sa)
+{
+	struct sfc_mae_counter_registry *registry =
+		&sa->mae.counter_registry;
+	const unsigned int wait_ms = 10000;
+	unsigned int i;
+
+	rte_service_runstate_set(registry->service_id, 0);
+	rte_service_component_runstate_set(registry->service_id, 0);
+
+	/*
+	 * Wait for the counter routine to finish the last iteration.
+	 * Give up on timeout.
+	 */
+	for (i = 0; i < wait_ms; i++) {
+		if (rte_service_may_be_active(registry->service_id) == 0)
+			break;
+
+		rte_delay_ms(1);
+	}
+	if (i == wait_ms)
+		sfc_warn(sa, "failed to wait for counter service to stop");
+
+	rte_service_map_lcore_set(registry->service_id,
+				  registry->service_core_id, 0);
+
+	rte_service_component_unregister(registry->service_id);
+}
+
+static struct sfc_rxq_info *
+sfc_counter_rxq_info_get(struct sfc_adapter *sa)
+{
+	return &sfc_sa2shared(sa)->rxq_info[sa->counter_rxq.sw_index];
+}
+
+static int
+sfc_mae_counter_service_register(struct sfc_adapter *sa,
+				 uint32_t counter_stream_flags)
+{
+	struct rte_service_spec service;
+	char counter_service_name[sizeof(service.name)] = "counter_service";
+	struct sfc_mae_counter_registry *counter_registry =
+		&sa->mae.counter_registry;
+	uint32_t cid;
+	uint32_t sid;
+	int rc;
+
+	sfc_log_init(sa, "entry");
+
+	/* Prepare service info */
+	memset(&service, 0, sizeof(service));
+	rte_strscpy(service.name, counter_service_name, sizeof(service.name));
+	service.socket_id = sa->socket_id;
+	service.callback = sfc_mae_counter_routine;
+	service.callback_userdata = sa;
+	counter_registry->rx_pkt_burst = sa->eth_dev->rx_pkt_burst;
+	counter_registry->rx_dp = sfc_counter_rxq_info_get(sa)->dp;
+	counter_registry->pushed_n_buffers = 0;
+	counter_registry->use_credits = counter_stream_flags &
+		EFX_MAE_COUNTERS_STREAM_OUT_USES_CREDITS;
+
+	cid = sfc_get_service_lcore(sa->socket_id);
+	if (cid == RTE_MAX_LCORE && sa->socket_id != SOCKET_ID_ANY) {
+		/* Warn and try to allocate on any NUMA node */
+		sfc_warn(sa,
+			"failed to get service lcore for counter service at socket %d",
+			sa->socket_id);
+
+		cid = sfc_get_service_lcore(SOCKET_ID_ANY);
+	}
+	if (cid == RTE_MAX_LCORE) {
+		rc = ENOTSUP;
+		sfc_err(sa, "failed to get service lcore for counter service");
+		goto fail_get_service_lcore;
+	}
+
+	/* Service core may be in "stopped" state, start it */
+	rc = rte_service_lcore_start(cid);
+	if (rc != 0 && rc != -EALREADY) {
+		sfc_err(sa, "failed to start service core for counter service: %s",
+			rte_strerror(-rc));
+		rc = ENOTSUP;
+		goto fail_start_core;
+	}
+
+	/* Register counter service */
+	rc = rte_service_component_register(&service, &sid);
+	if (rc != 0) {
+		rc = ENOEXEC;
+		sfc_err(sa, "failed to register counter service component");
+		goto fail_register;
+	}
+
+	/* Map the service with the service core */
+	rc = rte_service_map_lcore_set(sid, cid, 1);
+	if (rc != 0) {
+		rc = -rc;
+		sfc_err(sa, "failed to map lcore for counter service: %s",
+			rte_strerror(rc));
+		goto fail_map_lcore;
+	}
+
+	/* Run the service */
+	rc = rte_service_component_runstate_set(sid, 1);
+	if (rc < 0) {
+		rc = -rc;
+		sfc_err(sa, "failed to run counter service component: %s",
+			rte_strerror(rc));
+		goto fail_component_runstate_set;
+	}
+	rc = rte_service_runstate_set(sid, 1);
+	if (rc < 0) {
+		rc = -rc;
+		sfc_err(sa, "failed to run counter service");
+		goto fail_runstate_set;
+	}
+
+	counter_registry->service_core_id = cid;
+	counter_registry->service_id = sid;
+
+	sfc_log_init(sa, "done");
+
+	return 0;
+
+fail_runstate_set:
+	rte_service_component_runstate_set(sid, 0);
+
+fail_component_runstate_set:
+	rte_service_map_lcore_set(sid, cid, 0);
+
+fail_map_lcore:
+	rte_service_component_unregister(sid);
+
+fail_register:
+fail_start_core:
+fail_get_service_lcore:
+	sfc_log_init(sa, "failed: %s", rte_strerror(rc));
+
+	return rc;
+}
+
+int
+sfc_mae_counters_init(struct sfc_mae_counters *counters,
+		      uint32_t nb_counters_max)
+{
+	int rc;
+
+	SFC_GENERIC_LOG(DEBUG, "%s: entry", __func__);
+
+	counters->mae_counters = rte_zmalloc("sfc_mae_counters",
+		sizeof(*counters->mae_counters) * nb_counters_max, 0);
+	if (counters->mae_counters == NULL) {
+		rc = ENOMEM;
+		SFC_GENERIC_LOG(ERR, "%s: failed: %s", __func__,
+				rte_strerror(rc));
+		return rc;
+	}
+
+	counters->n_mae_counters = nb_counters_max;
+
+	SFC_GENERIC_LOG(DEBUG, "%s: done", __func__);
+
+	return 0;
+}
+
+void
+sfc_mae_counters_fini(struct sfc_mae_counters *counters)
+{
+	rte_free(counters->mae_counters);
+	counters->mae_counters = NULL;
+}
+
 int
 sfc_mae_counter_rxq_attach(struct sfc_adapter *sa)
 {
@@ -215,3 +731,65 @@ sfc_mae_counter_rxq_fini(struct sfc_adapter *sa)
 
 	sfc_log_init(sa, "done");
 }
+
+void
+sfc_mae_counter_stop(struct sfc_adapter *sa)
+{
+	struct sfc_mae *mae = &sa->mae;
+
+	sfc_log_init(sa, "entry");
+
+	if (!mae->counter_rxq_running) {
+		sfc_log_init(sa, "counter queue is not running - skip");
+		return;
+	}
+
+	sfc_mae_counter_service_unregister(sa);
+	efx_mae_counters_stream_stop(sa->nic, sa->counter_rxq.sw_index, NULL);
+
+	mae->counter_rxq_running = false;
+
+	sfc_log_init(sa, "done");
+}
+
+int
+sfc_mae_counter_start(struct sfc_adapter *sa)
+{
+	struct sfc_mae *mae = &sa->mae;
+	uint32_t flags;
+	int rc;
+
+	SFC_ASSERT(sa->counter_rxq.state & SFC_COUNTER_RXQ_ATTACHED);
+
+	if (mae->counter_rxq_running)
+		return 0;
+
+	sfc_log_init(sa, "entry");
+
+	rc = efx_mae_counters_stream_start(sa->nic, sa->counter_rxq.sw_index,
+					   SFC_MAE_COUNTER_STREAM_PACKET_SIZE,
+					   0 /* No flags required */, &flags);
+	if (rc != 0) {
+		sfc_err(sa, "failed to start MAE counters stream: %s",
+			rte_strerror(rc));
+		goto fail_counter_stream;
+	}
+
+	sfc_log_init(sa, "stream start flags: 0x%x", flags);
+
+	rc = sfc_mae_counter_service_register(sa, flags);
+	if (rc != 0)
+		goto fail_service_register;
+
+	mae->counter_rxq_running = true;
+
+	return 0;
+
+fail_service_register:
+	efx_mae_counters_stream_stop(sa->nic, sa->counter_rxq.sw_index, NULL);
+
+fail_counter_stream:
+	sfc_log_init(sa, "failed: %s", rte_strerror(rc));
+
+	return rc;
+}
diff --git a/drivers/net/sfc/sfc_mae_counter.h b/drivers/net/sfc/sfc_mae_counter.h
index f16d64a999..f61a6b59cb 100644
--- a/drivers/net/sfc/sfc_mae_counter.h
+++ b/drivers/net/sfc/sfc_mae_counter.h
@@ -38,6 +38,17 @@ void sfc_mae_counter_rxq_detach(struct sfc_adapter *sa);
 int sfc_mae_counter_rxq_init(struct sfc_adapter *sa);
 void sfc_mae_counter_rxq_fini(struct sfc_adapter *sa);
 
+int sfc_mae_counters_init(struct sfc_mae_counters *counters,
+			  uint32_t nb_counters_max);
+void sfc_mae_counters_fini(struct sfc_mae_counters *counters);
+int sfc_mae_counter_enable(struct sfc_adapter *sa,
+			   struct sfc_mae_counter_id *counterp);
+int sfc_mae_counter_disable(struct sfc_adapter *sa,
+			    struct sfc_mae_counter_id *counter);
+
+int sfc_mae_counter_start(struct sfc_adapter *sa);
+void sfc_mae_counter_stop(struct sfc_adapter *sa);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/drivers/net/sfc/sfc_stats.h b/drivers/net/sfc/sfc_stats.h
new file mode 100644
index 0000000000..2d7ab71f14
--- /dev/null
+++ b/drivers/net/sfc/sfc_stats.h
@@ -0,0 +1,80 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ *
+ * Copyright(c) 2019-2021 Xilinx, Inc.
+ * Copyright(c) 2019 Solarflare Communications Inc.
+ *
+ * This software was jointly developed between OKTET Labs (under contract
+ * for Solarflare) and Solarflare Communications, Inc.
+ */
+
+#ifndef _SFC_STATS_H
+#define _SFC_STATS_H
+
+#include <stdint.h>
+
+#include <rte_atomic.h>
+
+#include "sfc_tweak.h"
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/**
+ * 64-bit packet and byte counters packed into a 128-bit integer
+ * so that atomic updates can be used to guarantee consistency
+ * when required.
+ */
+union sfc_pkts_bytes {
+	RTE_STD_C11
+	struct {
+		uint64_t		pkts;
+		uint64_t		bytes;
+	};
+	rte_int128_t			pkts_bytes;
+};
+
+/**
+ * Update the packet and byte counters atomically on the assumption
+ * that the counter is written on one core only.
+ */
+static inline void
+sfc_pkts_bytes_add(union sfc_pkts_bytes *st, uint64_t pkts, uint64_t bytes)
+{
+#if SFC_SW_STATS_ATOMIC
+	union sfc_pkts_bytes result;
+
+	/* Stats are written on single core only, so just load values */
+	result.pkts = st->pkts + pkts;
+	result.bytes = st->bytes + bytes;
+
+	/*
+	 * Store the result atomically to guarantee that the reader
+	 * core sees both counter updates together.
+	 */
+	__atomic_store_n(&st->pkts_bytes.int128, result.pkts_bytes.int128,
+			 __ATOMIC_RELEASE);
+#else
+	st->pkts += pkts;
+	st->bytes += bytes;
+#endif
+}
+
+/**
+ * Get an atomic copy of the packet and byte counters.
+ */
+static inline void
+sfc_pkts_bytes_get(const union sfc_pkts_bytes *st, union sfc_pkts_bytes *result)
+{
+#if SFC_SW_STATS_ATOMIC
+	result->pkts_bytes.int128 = __atomic_load_n(&st->pkts_bytes.int128,
+						    __ATOMIC_ACQUIRE);
+#else
+	*result = *st;
+#endif
+}
+
+#ifdef __cplusplus
+}
+#endif
+#endif /* _SFC_STATS_H */
diff --git a/drivers/net/sfc/sfc_tweak.h b/drivers/net/sfc/sfc_tweak.h
index f2d8701421..d09c7a3125 100644
--- a/drivers/net/sfc/sfc_tweak.h
+++ b/drivers/net/sfc/sfc_tweak.h
@@ -42,4 +42,13 @@
  */
 #define SFC_RXD_WAIT_TIMEOUT_NS_DEF	(200U * 1000)
 
+/**
+ * Ideally, reading the packet and byte counters together should return
+ * consistent values, i.e. the number of bytes corresponds to the number
+ * of packets. Since the counters are updated in one thread and queried
+ * in another, this requires either locking or atomics, which are very
+ * expensive from a performance point of view. So, it is disabled by
+ * default.
+#define SFC_SW_STATS_ATOMIC		0
+
 #endif /* _SFC_TWEAK_H_ */
-- 
2.30.2


^ permalink raw reply	[flat|nested] 104+ messages in thread

* [dpdk-dev] [PATCH 20/20] net/sfc: support flow API query for count actions
  2021-05-27 15:24 [dpdk-dev] [PATCH 00/20] net/sfc: support flow API COUNT action Andrew Rybchenko
                   ` (18 preceding siblings ...)
  2021-05-27 15:25 ` [dpdk-dev] [PATCH 19/20] net/sfc: support flow action COUNT in transfer rules Andrew Rybchenko
@ 2021-05-27 15:25 ` Andrew Rybchenko
  2021-06-04 14:23 ` [dpdk-dev] [PATCH v2 00/20] net/sfc: support flow API COUNT action Andrew Rybchenko
                   ` (2 subsequent siblings)
  22 siblings, 0 replies; 104+ messages in thread
From: Andrew Rybchenko @ 2021-05-27 15:25 UTC (permalink / raw)
  To: dev; +Cc: Igor Romanov, Andy Moreton, Ivan Malov

From: Igor Romanov <igor.romanov@oktetlabs.ru>

The query reports the number of hits for a counter associated
with a flow rule.
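
For illustration only (this sketch is not part of the patch and the
helper name is hypothetical), an application could read such a counter
through the generic rte_flow API roughly as follows; per this patch,
a NULL action conf selects the first counter of the rule:

#include <inttypes.h>
#include <stdio.h>

#include <rte_flow.h>

/* Hypothetical helper: read the COUNT action of an existing flow rule */
static int
example_query_flow_count(uint16_t port_id, struct rte_flow *flow)
{
	const struct rte_flow_action count_action = {
		.type = RTE_FLOW_ACTION_TYPE_COUNT,
		.conf = NULL,	/* no counter ID: first counter is used */
	};
	struct rte_flow_query_count data = {
		.reset = 0,	/* do not reset the counter on read */
	};
	struct rte_flow_error error;
	int ret;

	ret = rte_flow_query(port_id, flow, &count_action, &data, &error);
	if (ret != 0)
		return ret;

	if (data.hits_set)
		printf("hits: %" PRIu64 "\n", data.hits);
	if (data.bytes_set)
		printf("bytes: %" PRIu64 "\n", data.bytes);

	return 0;
}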

Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
Reviewed-by: Ivan Malov <ivan.malov@oktetlabs.ru>
---
 drivers/net/sfc/sfc_flow.c        | 40 ++++++++++++++++++-
 drivers/net/sfc/sfc_flow.h        |  6 +++
 drivers/net/sfc/sfc_mae.c         | 64 +++++++++++++++++++++++++++++++
 drivers/net/sfc/sfc_mae.h         |  1 +
 drivers/net/sfc/sfc_mae_counter.c | 32 ++++++++++++++++
 drivers/net/sfc/sfc_mae_counter.h |  3 ++
 6 files changed, 145 insertions(+), 1 deletion(-)

diff --git a/drivers/net/sfc/sfc_flow.c b/drivers/net/sfc/sfc_flow.c
index 1294dbd3a7..d00d3a2363 100644
--- a/drivers/net/sfc/sfc_flow.c
+++ b/drivers/net/sfc/sfc_flow.c
@@ -32,6 +32,7 @@ struct sfc_flow_ops_by_spec {
 	sfc_flow_cleanup_cb_t	*cleanup;
 	sfc_flow_insert_cb_t	*insert;
 	sfc_flow_remove_cb_t	*remove;
+	sfc_flow_query_cb_t	*query;
 };
 
 static sfc_flow_parse_cb_t sfc_flow_parse_rte_to_filter;
@@ -45,6 +46,7 @@ static const struct sfc_flow_ops_by_spec sfc_flow_ops_filter = {
 	.cleanup = NULL,
 	.insert = sfc_flow_filter_insert,
 	.remove = sfc_flow_filter_remove,
+	.query = NULL,
 };
 
 static const struct sfc_flow_ops_by_spec sfc_flow_ops_mae = {
@@ -53,6 +55,7 @@ static const struct sfc_flow_ops_by_spec sfc_flow_ops_mae = {
 	.cleanup = sfc_mae_flow_cleanup,
 	.insert = sfc_mae_flow_insert,
 	.remove = sfc_mae_flow_remove,
+	.query = sfc_mae_flow_query,
 };
 
 static const struct sfc_flow_ops_by_spec *
@@ -2788,6 +2791,41 @@ sfc_flow_flush(struct rte_eth_dev *dev,
 	return -ret;
 }
 
+static int
+sfc_flow_query(struct rte_eth_dev *dev,
+	       struct rte_flow *flow,
+	       const struct rte_flow_action *action,
+	       void *data,
+	       struct rte_flow_error *error)
+{
+	struct sfc_adapter *sa = sfc_adapter_by_eth_dev(dev);
+	const struct sfc_flow_ops_by_spec *ops;
+	int ret;
+
+	sfc_adapter_lock(sa);
+
+	ops = sfc_flow_get_ops_by_spec(flow);
+	if (ops == NULL || ops->query == NULL) {
+		ret = rte_flow_error_set(error, ENOTSUP,
+			RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+			"No backend to handle this flow");
+		goto fail_no_backend;
+	}
+
+	ret = ops->query(dev, flow, action, data, error);
+	if (ret != 0)
+		goto fail_query;
+
+	sfc_adapter_unlock(sa);
+
+	return 0;
+
+fail_query:
+fail_no_backend:
+	sfc_adapter_unlock(sa);
+	return ret;
+}
+
 static int
 sfc_flow_isolate(struct rte_eth_dev *dev, int enable,
 		 struct rte_flow_error *error)
@@ -2814,7 +2852,7 @@ const struct rte_flow_ops sfc_flow_ops = {
 	.create = sfc_flow_create,
 	.destroy = sfc_flow_destroy,
 	.flush = sfc_flow_flush,
-	.query = NULL,
+	.query = sfc_flow_query,
 	.isolate = sfc_flow_isolate,
 };
 
diff --git a/drivers/net/sfc/sfc_flow.h b/drivers/net/sfc/sfc_flow.h
index bd3b374d68..99e5cf9cff 100644
--- a/drivers/net/sfc/sfc_flow.h
+++ b/drivers/net/sfc/sfc_flow.h
@@ -181,6 +181,12 @@ typedef int (sfc_flow_insert_cb_t)(struct sfc_adapter *sa,
 typedef int (sfc_flow_remove_cb_t)(struct sfc_adapter *sa,
 				   struct rte_flow *flow);
 
+typedef int (sfc_flow_query_cb_t)(struct rte_eth_dev *dev,
+				  struct rte_flow *flow,
+				  const struct rte_flow_action *action,
+				  void *data,
+				  struct rte_flow_error *error);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/drivers/net/sfc/sfc_mae.c b/drivers/net/sfc/sfc_mae.c
index 370a39da1d..ee1188bc1e 100644
--- a/drivers/net/sfc/sfc_mae.c
+++ b/drivers/net/sfc/sfc_mae.c
@@ -3277,3 +3277,67 @@ sfc_mae_flow_remove(struct sfc_adapter *sa,
 
 	return 0;
 }
+
+static int
+sfc_mae_query_counter(struct sfc_adapter *sa,
+		      struct sfc_flow_spec_mae *spec,
+		      const struct rte_flow_action *action,
+		      struct rte_flow_query_count *data,
+		      struct rte_flow_error *error)
+{
+	struct sfc_mae_action_set *action_set = spec->action_set;
+	const struct rte_flow_action_count *conf = action->conf;
+	unsigned int i;
+	int rc;
+
+	if (action_set->n_counters == 0) {
+		return rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ACTION, action,
+			"Queried flow rule does not have count actions");
+	}
+
+	for (i = 0; i < action_set->n_counters; i++) {
+		/*
+		 * Get the first available counter of the flow rule if
+		 * counter ID is not specified.
+		 */
+		if (conf != NULL && action_set->counters[i].rte_id != conf->id)
+			continue;
+
+		rc = sfc_mae_counter_get(&sa->mae.counter_registry.counters,
+					 &action_set->counters[i], data);
+		if (rc != 0) {
+			return rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ACTION, action,
+				"Queried flow rule counter action is invalid");
+		}
+
+		return 0;
+	}
+
+	return rte_flow_error_set(error, ENOENT,
+				  RTE_FLOW_ERROR_TYPE_ACTION, action,
+				  "No such flow rule action count ID");
+}
+
+int
+sfc_mae_flow_query(struct rte_eth_dev *dev,
+		   struct rte_flow *flow,
+		   const struct rte_flow_action *action,
+		   void *data,
+		   struct rte_flow_error *error)
+{
+	struct sfc_adapter *sa = sfc_adapter_by_eth_dev(dev);
+	struct sfc_flow_spec *spec = &flow->spec;
+	struct sfc_flow_spec_mae *spec_mae = &spec->mae;
+
+	switch (action->type) {
+	case RTE_FLOW_ACTION_TYPE_COUNT:
+		return sfc_mae_query_counter(sa, spec_mae, action,
+					     data, error);
+	default:
+		return rte_flow_error_set(error, ENOTSUP,
+			RTE_FLOW_ERROR_TYPE_ACTION, NULL,
+			"Query for action of this type is not supported");
+	}
+}
diff --git a/drivers/net/sfc/sfc_mae.h b/drivers/net/sfc/sfc_mae.h
index 15fe5ebca5..7e3b6a7a97 100644
--- a/drivers/net/sfc/sfc_mae.h
+++ b/drivers/net/sfc/sfc_mae.h
@@ -304,6 +304,7 @@ int sfc_mae_rule_parse_actions(struct sfc_adapter *sa,
 sfc_flow_verify_cb_t sfc_mae_flow_verify;
 sfc_flow_insert_cb_t sfc_mae_flow_insert;
 sfc_flow_remove_cb_t sfc_mae_flow_remove;
+sfc_flow_query_cb_t sfc_mae_flow_query;
 
 #ifdef __cplusplus
 }
diff --git a/drivers/net/sfc/sfc_mae_counter.c b/drivers/net/sfc/sfc_mae_counter.c
index 3aeb37f7ec..c758b74b9b 100644
--- a/drivers/net/sfc/sfc_mae_counter.c
+++ b/drivers/net/sfc/sfc_mae_counter.c
@@ -793,3 +793,35 @@ sfc_mae_counter_start(struct sfc_adapter *sa)
 
 	return rc;
 }
+
+int
+sfc_mae_counter_get(struct sfc_mae_counters *counters,
+		    const struct sfc_mae_counter_id *counter,
+		    struct rte_flow_query_count *data)
+{
+	struct sfc_mae_counter *p;
+	union sfc_pkts_bytes value;
+
+	SFC_ASSERT(counter->mae_id.id < counters->n_mae_counters);
+	p = &counters->mae_counters[counter->mae_id.id];
+
+	/*
+	 * Ordering is relaxed since it is the only operation on counter value.
+	 * And it does not depend on different stores/loads in other threads.
+	 * Paired with relaxed ordering in counter increment.
+	 */
+	value.pkts_bytes.int128 = __atomic_load_n(&p->value.pkts_bytes.int128,
+						  __ATOMIC_RELAXED);
+
+	data->hits_set = 1;
+	data->bytes_set = 1;
+	data->hits = value.pkts - p->reset.pkts;
+	data->bytes = value.bytes - p->reset.bytes;
+
+	if (data->reset != 0) {
+		p->reset.pkts = value.pkts;
+		p->reset.bytes = value.bytes;
+	}
+
+	return 0;
+}
diff --git a/drivers/net/sfc/sfc_mae_counter.h b/drivers/net/sfc/sfc_mae_counter.h
index f61a6b59cb..2c953c2968 100644
--- a/drivers/net/sfc/sfc_mae_counter.h
+++ b/drivers/net/sfc/sfc_mae_counter.h
@@ -45,6 +45,9 @@ int sfc_mae_counter_enable(struct sfc_adapter *sa,
 			   struct sfc_mae_counter_id *counterp);
 int sfc_mae_counter_disable(struct sfc_adapter *sa,
 			    struct sfc_mae_counter_id *counter);
+int sfc_mae_counter_get(struct sfc_mae_counters *counters,
+			const struct sfc_mae_counter_id *counter,
+			struct rte_flow_query_count *data);
 
 int sfc_mae_counter_start(struct sfc_adapter *sa);
 void sfc_mae_counter_stop(struct sfc_adapter *sa);
-- 
2.30.2


^ permalink raw reply	[flat|nested] 104+ messages in thread

* [dpdk-dev] [PATCH v2 00/20] net/sfc: support flow API COUNT action
  2021-05-27 15:24 [dpdk-dev] [PATCH 00/20] net/sfc: support flow API COUNT action Andrew Rybchenko
                   ` (19 preceding siblings ...)
  2021-05-27 15:25 ` [dpdk-dev] [PATCH 20/20] net/sfc: support flow API query for count actions Andrew Rybchenko
@ 2021-06-04 14:23 ` Andrew Rybchenko
  2021-06-04 14:23   ` [dpdk-dev] [PATCH v2 01/20] net/sfc: introduce ethdev Rx queue ID Andrew Rybchenko
                     ` (20 more replies)
  2021-06-18 13:40 ` [dpdk-dev] [PATCH v3 " Andrew Rybchenko
  2021-07-02  8:39 ` [dpdk-dev] [PATCH v4 00/20] net/sfc: support flow API COUNT action Andrew Rybchenko
  22 siblings, 21 replies; 104+ messages in thread
From: Andrew Rybchenko @ 2021-06-04 14:23 UTC (permalink / raw)
  To: dev

Update base driver and support COUNT action in transfer flow rules.

v2:
 - add release notes
 - add missing documentation
 - fix spelling
 - handle query in stopped gracefully

Andrew Rybchenko (6):
  net/sfc: do not enable interrupts on internal Rx queues
  common/sfc_efx/base: separate target EvQ and IRQ config
  common/sfc_efx/base: support custom EvQ to IRQ mapping
  net/sfc: explicitly control IRQ used for Rx queues
  net/sfc: add NUMA-aware registry of service logical cores
  common/sfc_efx/base: add packetiser packet format definition

Igor Romanov (14):
  net/sfc: introduce ethdev Rx queue ID
  net/sfc: introduce ethdev Tx queue ID
  common/sfc_efx/base: add ingress m-port RxQ flag
  common/sfc_efx/base: add user mark RxQ flag
  net/sfc: add abstractions for the management EVQ identity
  net/sfc: add support for initialising different RxQ types
  net/sfc: reserve RxQ for counters
  common/sfc_efx/base: add counter creation MCDI wrappers
  common/sfc_efx/base: add counter stream MCDI wrappers
  common/sfc_efx/base: support counter in action set
  net/sfc: add Rx datapath method to get pushed buffers count
  common/sfc_efx/base: add max MAE counters to limits
  net/sfc: support flow action COUNT in transfer rules
  net/sfc: support flow API query for count actions

 doc/guides/nics/sfc_efx.rst                   |   2 +
 doc/guides/rel_notes/release_21_08.rst        |   6 +
 drivers/common/sfc_efx/base/ef10_ev.c         |  14 +-
 drivers/common/sfc_efx/base/ef10_impl.h       |   1 +
 drivers/common/sfc_efx/base/ef10_rx.c         |  57 +-
 drivers/common/sfc_efx/base/efx.h             | 113 +++
 drivers/common/sfc_efx/base/efx_ev.c          |  39 +-
 drivers/common/sfc_efx/base/efx_impl.h        |   8 +-
 drivers/common/sfc_efx/base/efx_mae.c         | 430 ++++++++-
 drivers/common/sfc_efx/base/efx_mcdi.c        |   7 +-
 drivers/common/sfc_efx/base/efx_mcdi.h        |   7 +
 .../base/efx_regs_counters_pkt_format.h       |  87 ++
 drivers/common/sfc_efx/base/efx_rx.c          |  14 +-
 drivers/common/sfc_efx/base/rhead_ev.c        |  14 +-
 drivers/common/sfc_efx/base/rhead_impl.h      |   1 +
 drivers/common/sfc_efx/base/rhead_rx.c        |   6 +
 drivers/common/sfc_efx/version.map            |   9 +
 drivers/net/sfc/meson.build                   |  12 +
 drivers/net/sfc/sfc.c                         |  68 +-
 drivers/net/sfc/sfc.h                         |  22 +
 drivers/net/sfc/sfc_dp.h                      |   6 +
 drivers/net/sfc/sfc_dp_rx.h                   |   4 +
 drivers/net/sfc/sfc_ef100_rx.c                |  15 +
 drivers/net/sfc/sfc_ethdev.c                  | 115 ++-
 drivers/net/sfc/sfc_ev.c                      |  36 +-
 drivers/net/sfc/sfc_ev.h                      | 107 ++-
 drivers/net/sfc/sfc_flow.c                    |  77 +-
 drivers/net/sfc/sfc_flow.h                    |   6 +
 drivers/net/sfc/sfc_mae.c                     | 296 ++++++-
 drivers/net/sfc/sfc_mae.h                     |  61 ++
 drivers/net/sfc/sfc_mae_counter.c             | 827 ++++++++++++++++++
 drivers/net/sfc/sfc_mae_counter.h             |  58 ++
 drivers/net/sfc/sfc_rx.c                      | 231 +++--
 drivers/net/sfc/sfc_rx.h                      |  15 +-
 drivers/net/sfc/sfc_service.c                 |  99 +++
 drivers/net/sfc/sfc_service.h                 |  20 +
 drivers/net/sfc/sfc_stats.h                   |  80 ++
 drivers/net/sfc/sfc_tweak.h                   |   9 +
 drivers/net/sfc/sfc_tx.c                      | 164 ++--
 drivers/net/sfc/sfc_tx.h                      |  11 +-
 40 files changed, 2904 insertions(+), 250 deletions(-)
 create mode 100644 drivers/common/sfc_efx/base/efx_regs_counters_pkt_format.h
 create mode 100644 drivers/net/sfc/sfc_mae_counter.c
 create mode 100644 drivers/net/sfc/sfc_mae_counter.h
 create mode 100644 drivers/net/sfc/sfc_service.c
 create mode 100644 drivers/net/sfc/sfc_service.h
 create mode 100644 drivers/net/sfc/sfc_stats.h

-- 
2.30.2


^ permalink raw reply	[flat|nested] 104+ messages in thread

* [dpdk-dev] [PATCH v2 01/20] net/sfc: introduce ethdev Rx queue ID
  2021-06-04 14:23 ` [dpdk-dev] [PATCH v2 00/20] net/sfc: support flow API COUNT action Andrew Rybchenko
@ 2021-06-04 14:23   ` Andrew Rybchenko
  2021-06-04 14:23   ` [dpdk-dev] [PATCH v2 02/20] net/sfc: do not enable interrupts on internal Rx queues Andrew Rybchenko
                     ` (19 subsequent siblings)
  20 siblings, 0 replies; 104+ messages in thread
From: Andrew Rybchenko @ 2021-06-04 14:23 UTC (permalink / raw)
  To: dev; +Cc: Igor Romanov, Andy Moreton, Ivan Malov

From: Igor Romanov <igor.romanov@oktetlabs.ru>

Make the software index of an Rx queue and the ethdev queue index
separate. When an ethdev RxQ is accessed in ethdev callbacks, an
explicit ethdev queue index is used.

This is a preparation for introducing non-ethdev Rx queues.
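
As a standalone illustration only (not part of the patch; simplified
names), the mapping is an identity while only ethdev Rx queues exist:

#include <inttypes.h>
#include <stdio.h>

typedef unsigned int sfc_sw_index_t;
typedef int32_t sfc_ethdev_qid_t;

/* Ethdev Rx queues occupy the beginning of the software index space */
static sfc_sw_index_t
sw_index_by_ethdev_rx_qid(sfc_ethdev_qid_t ethdev_qid)
{
	return (sfc_sw_index_t)ethdev_qid;
}

int
main(void)
{
	sfc_ethdev_qid_t qid;

	for (qid = 0; qid < 4; qid++)
		printf("ethdev RxQ %" PRId32 " -> SW index %u\n",
		       qid, sw_index_by_ethdev_rx_qid(qid));

	return 0;
}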

Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
Reviewed-by: Ivan Malov <ivan.malov@oktetlabs.ru>
---
 drivers/net/sfc/sfc.h        |   2 +
 drivers/net/sfc/sfc_dp.h     |   4 +
 drivers/net/sfc/sfc_ethdev.c |  69 ++++++++------
 drivers/net/sfc/sfc_ev.c     |   2 +-
 drivers/net/sfc/sfc_ev.h     |  22 ++++-
 drivers/net/sfc/sfc_flow.c   |  22 +++--
 drivers/net/sfc/sfc_rx.c     | 179 +++++++++++++++++++++++++----------
 drivers/net/sfc/sfc_rx.h     |  10 +-
 8 files changed, 215 insertions(+), 95 deletions(-)

diff --git a/drivers/net/sfc/sfc.h b/drivers/net/sfc/sfc.h
index b48a818adb..ebe705020d 100644
--- a/drivers/net/sfc/sfc.h
+++ b/drivers/net/sfc/sfc.h
@@ -29,6 +29,7 @@
 #include "sfc_filter.h"
 #include "sfc_sriov.h"
 #include "sfc_mae.h"
+#include "sfc_dp.h"
 
 #ifdef __cplusplus
 extern "C" {
@@ -168,6 +169,7 @@ struct sfc_rss {
 struct sfc_adapter_shared {
 	unsigned int			rxq_count;
 	struct sfc_rxq_info		*rxq_info;
+	unsigned int			ethdev_rxq_count;
 
 	unsigned int			txq_count;
 	struct sfc_txq_info		*txq_info;
diff --git a/drivers/net/sfc/sfc_dp.h b/drivers/net/sfc/sfc_dp.h
index 4bed137806..76065483d4 100644
--- a/drivers/net/sfc/sfc_dp.h
+++ b/drivers/net/sfc/sfc_dp.h
@@ -96,6 +96,10 @@ struct sfc_dp {
 /** List of datapath variants */
 TAILQ_HEAD(sfc_dp_list, sfc_dp);
 
+typedef unsigned int sfc_sw_index_t;
+typedef int32_t	sfc_ethdev_qid_t;
+#define SFC_ETHDEV_QID_INVALID	((sfc_ethdev_qid_t)(-1))
+
 /* Check if available HW/FW capabilities are sufficient for the datapath */
 static inline bool
 sfc_dp_match_hw_fw_caps(const struct sfc_dp *dp, unsigned int avail_caps)
diff --git a/drivers/net/sfc/sfc_ethdev.c b/drivers/net/sfc/sfc_ethdev.c
index c50ecea0b9..2651c41288 100644
--- a/drivers/net/sfc/sfc_ethdev.c
+++ b/drivers/net/sfc/sfc_ethdev.c
@@ -463,26 +463,31 @@ sfc_dev_allmulti_disable(struct rte_eth_dev *dev)
 }
 
 static int
-sfc_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
+sfc_rx_queue_setup(struct rte_eth_dev *dev, uint16_t ethdev_qid,
 		   uint16_t nb_rx_desc, unsigned int socket_id,
 		   const struct rte_eth_rxconf *rx_conf,
 		   struct rte_mempool *mb_pool)
 {
 	struct sfc_adapter_shared *sas = sfc_adapter_shared_by_eth_dev(dev);
 	struct sfc_adapter *sa = sfc_adapter_by_eth_dev(dev);
+	sfc_ethdev_qid_t sfc_ethdev_qid = ethdev_qid;
+	struct sfc_rxq_info *rxq_info;
+	sfc_sw_index_t sw_index;
 	int rc;
 
 	sfc_log_init(sa, "RxQ=%u nb_rx_desc=%u socket_id=%u",
-		     rx_queue_id, nb_rx_desc, socket_id);
+		     ethdev_qid, nb_rx_desc, socket_id);
 
 	sfc_adapter_lock(sa);
 
-	rc = sfc_rx_qinit(sa, rx_queue_id, nb_rx_desc, socket_id,
+	sw_index = sfc_rxq_sw_index_by_ethdev_rx_qid(sas, sfc_ethdev_qid);
+	rc = sfc_rx_qinit(sa, sw_index, nb_rx_desc, socket_id,
 			  rx_conf, mb_pool);
 	if (rc != 0)
 		goto fail_rx_qinit;
 
-	dev->data->rx_queues[rx_queue_id] = sas->rxq_info[rx_queue_id].dp;
+	rxq_info = sfc_rxq_info_by_ethdev_qid(sas, sfc_ethdev_qid);
+	dev->data->rx_queues[ethdev_qid] = rxq_info->dp;
 
 	sfc_adapter_unlock(sa);
 
@@ -500,7 +505,7 @@ sfc_rx_queue_release(void *queue)
 	struct sfc_dp_rxq *dp_rxq = queue;
 	struct sfc_rxq *rxq;
 	struct sfc_adapter *sa;
-	unsigned int sw_index;
+	sfc_sw_index_t sw_index;
 
 	if (dp_rxq == NULL)
 		return;
@@ -1182,15 +1187,14 @@ sfc_set_mc_addr_list(struct rte_eth_dev *dev,
  * use any process-local pointers from the adapter data.
  */
 static void
-sfc_rx_queue_info_get(struct rte_eth_dev *dev, uint16_t rx_queue_id,
+sfc_rx_queue_info_get(struct rte_eth_dev *dev, uint16_t ethdev_qid,
 		      struct rte_eth_rxq_info *qinfo)
 {
 	struct sfc_adapter_shared *sas = sfc_adapter_shared_by_eth_dev(dev);
+	sfc_ethdev_qid_t sfc_ethdev_qid = ethdev_qid;
 	struct sfc_rxq_info *rxq_info;
 
-	SFC_ASSERT(rx_queue_id < sas->rxq_count);
-
-	rxq_info = &sas->rxq_info[rx_queue_id];
+	rxq_info = sfc_rxq_info_by_ethdev_qid(sas, sfc_ethdev_qid);
 
 	qinfo->mp = rxq_info->refill_mb_pool;
 	qinfo->conf.rx_free_thresh = rxq_info->refill_threshold;
@@ -1232,14 +1236,14 @@ sfc_tx_queue_info_get(struct rte_eth_dev *dev, uint16_t tx_queue_id,
  * use any process-local pointers from the adapter data.
  */
 static uint32_t
-sfc_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+sfc_rx_queue_count(struct rte_eth_dev *dev, uint16_t ethdev_qid)
 {
 	const struct sfc_adapter_priv *sap = sfc_adapter_priv_by_eth_dev(dev);
 	struct sfc_adapter_shared *sas = sfc_adapter_shared_by_eth_dev(dev);
+	sfc_ethdev_qid_t sfc_ethdev_qid = ethdev_qid;
 	struct sfc_rxq_info *rxq_info;
 
-	SFC_ASSERT(rx_queue_id < sas->rxq_count);
-	rxq_info = &sas->rxq_info[rx_queue_id];
+	rxq_info = sfc_rxq_info_by_ethdev_qid(sas, sfc_ethdev_qid);
 
 	if ((rxq_info->state & SFC_RXQ_STARTED) == 0)
 		return 0;
@@ -1293,13 +1297,16 @@ sfc_tx_descriptor_status(void *queue, uint16_t offset)
 }
 
 static int
-sfc_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+sfc_rx_queue_start(struct rte_eth_dev *dev, uint16_t ethdev_qid)
 {
 	struct sfc_adapter_shared *sas = sfc_adapter_shared_by_eth_dev(dev);
 	struct sfc_adapter *sa = sfc_adapter_by_eth_dev(dev);
+	sfc_ethdev_qid_t sfc_ethdev_qid = ethdev_qid;
+	struct sfc_rxq_info *rxq_info;
+	sfc_sw_index_t sw_index;
 	int rc;
 
-	sfc_log_init(sa, "RxQ=%u", rx_queue_id);
+	sfc_log_init(sa, "RxQ=%u", ethdev_qid);
 
 	sfc_adapter_lock(sa);
 
@@ -1307,14 +1314,16 @@ sfc_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 	if (sa->state != SFC_ADAPTER_STARTED)
 		goto fail_not_started;
 
-	if (sas->rxq_info[rx_queue_id].state != SFC_RXQ_INITIALIZED)
+	rxq_info = sfc_rxq_info_by_ethdev_qid(sas, sfc_ethdev_qid);
+	if (rxq_info->state != SFC_RXQ_INITIALIZED)
 		goto fail_not_setup;
 
-	rc = sfc_rx_qstart(sa, rx_queue_id);
+	sw_index = sfc_rxq_sw_index_by_ethdev_rx_qid(sas, sfc_ethdev_qid);
+	rc = sfc_rx_qstart(sa, sw_index);
 	if (rc != 0)
 		goto fail_rx_qstart;
 
-	sas->rxq_info[rx_queue_id].deferred_started = B_TRUE;
+	rxq_info->deferred_started = B_TRUE;
 
 	sfc_adapter_unlock(sa);
 
@@ -1329,17 +1338,23 @@ sfc_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 }
 
 static int
-sfc_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+sfc_rx_queue_stop(struct rte_eth_dev *dev, uint16_t ethdev_qid)
 {
 	struct sfc_adapter_shared *sas = sfc_adapter_shared_by_eth_dev(dev);
 	struct sfc_adapter *sa = sfc_adapter_by_eth_dev(dev);
+	sfc_ethdev_qid_t sfc_ethdev_qid = ethdev_qid;
+	struct sfc_rxq_info *rxq_info;
+	sfc_sw_index_t sw_index;
 
-	sfc_log_init(sa, "RxQ=%u", rx_queue_id);
+	sfc_log_init(sa, "RxQ=%u", ethdev_qid);
 
 	sfc_adapter_lock(sa);
-	sfc_rx_qstop(sa, rx_queue_id);
 
-	sas->rxq_info[rx_queue_id].deferred_started = B_FALSE;
+	sw_index = sfc_rxq_sw_index_by_ethdev_rx_qid(sas, sfc_ethdev_qid);
+	sfc_rx_qstop(sa, sw_index);
+
+	rxq_info = sfc_rxq_info_by_ethdev_qid(sas, sfc_ethdev_qid);
+	rxq_info->deferred_started = B_FALSE;
 
 	sfc_adapter_unlock(sa);
 
@@ -1766,27 +1781,27 @@ sfc_pool_ops_supported(struct rte_eth_dev *dev, const char *pool)
 }
 
 static int
-sfc_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
+sfc_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t ethdev_qid)
 {
 	const struct sfc_adapter_priv *sap = sfc_adapter_priv_by_eth_dev(dev);
 	struct sfc_adapter_shared *sas = sfc_adapter_shared_by_eth_dev(dev);
+	sfc_ethdev_qid_t sfc_ethdev_qid = ethdev_qid;
 	struct sfc_rxq_info *rxq_info;
 
-	SFC_ASSERT(queue_id < sas->rxq_count);
-	rxq_info = &sas->rxq_info[queue_id];
+	rxq_info = sfc_rxq_info_by_ethdev_qid(sas, sfc_ethdev_qid);
 
 	return sap->dp_rx->intr_enable(rxq_info->dp);
 }
 
 static int
-sfc_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id)
+sfc_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t ethdev_qid)
 {
 	const struct sfc_adapter_priv *sap = sfc_adapter_priv_by_eth_dev(dev);
 	struct sfc_adapter_shared *sas = sfc_adapter_shared_by_eth_dev(dev);
+	sfc_ethdev_qid_t sfc_ethdev_qid = ethdev_qid;
 	struct sfc_rxq_info *rxq_info;
 
-	SFC_ASSERT(queue_id < sas->rxq_count);
-	rxq_info = &sas->rxq_info[queue_id];
+	rxq_info = sfc_rxq_info_by_ethdev_qid(sas, sfc_ethdev_qid);
 
 	return sap->dp_rx->intr_disable(rxq_info->dp);
 }
diff --git a/drivers/net/sfc/sfc_ev.c b/drivers/net/sfc/sfc_ev.c
index b4953ac647..2262994112 100644
--- a/drivers/net/sfc/sfc_ev.c
+++ b/drivers/net/sfc/sfc_ev.c
@@ -582,7 +582,7 @@ sfc_ev_qpoll(struct sfc_evq *evq)
 		int rc;
 
 		if (evq->dp_rxq != NULL) {
-			unsigned int rxq_sw_index;
+			sfc_sw_index_t rxq_sw_index;
 
 			rxq_sw_index = evq->dp_rxq->dpq.queue_id;
 
diff --git a/drivers/net/sfc/sfc_ev.h b/drivers/net/sfc/sfc_ev.h
index d796865b7f..5a9f85c2d9 100644
--- a/drivers/net/sfc/sfc_ev.h
+++ b/drivers/net/sfc/sfc_ev.h
@@ -69,9 +69,25 @@ struct sfc_evq {
  * Tx event queues follow Rx event queues.
  */
 
-static inline unsigned int
-sfc_evq_index_by_rxq_sw_index(__rte_unused struct sfc_adapter *sa,
-			      unsigned int rxq_sw_index)
+static inline sfc_ethdev_qid_t
+sfc_ethdev_rx_qid_by_rxq_sw_index(__rte_unused struct sfc_adapter_shared *sas,
+				  sfc_sw_index_t rxq_sw_index)
+{
+	/* Only ethdev queues are present for now */
+	return rxq_sw_index;
+}
+
+static inline sfc_sw_index_t
+sfc_rxq_sw_index_by_ethdev_rx_qid(__rte_unused struct sfc_adapter_shared *sas,
+				  sfc_ethdev_qid_t ethdev_qid)
+{
+	/* Only ethdev queues are present for now */
+	return ethdev_qid;
+}
+
+static inline sfc_sw_index_t
+sfc_evq_sw_index_by_rxq_sw_index(__rte_unused struct sfc_adapter *sa,
+				 sfc_sw_index_t rxq_sw_index)
 {
 	return 1 + rxq_sw_index;
 }
diff --git a/drivers/net/sfc/sfc_flow.c b/drivers/net/sfc/sfc_flow.c
index 0bfd284c9e..2db8af1759 100644
--- a/drivers/net/sfc/sfc_flow.c
+++ b/drivers/net/sfc/sfc_flow.c
@@ -1400,10 +1400,10 @@ sfc_flow_parse_queue(struct sfc_adapter *sa,
 	struct sfc_rxq *rxq;
 	struct sfc_rxq_info *rxq_info;
 
-	if (queue->index >= sfc_sa2shared(sa)->rxq_count)
+	if (queue->index >= sfc_sa2shared(sa)->ethdev_rxq_count)
 		return -EINVAL;
 
-	rxq = &sa->rxq_ctrl[queue->index];
+	rxq = sfc_rxq_ctrl_by_ethdev_qid(sa, queue->index);
 	spec_filter->template.efs_dmaq_id = (uint16_t)rxq->hw_index;
 
 	rxq_info = &sfc_sa2shared(sa)->rxq_info[queue->index];
@@ -1420,7 +1420,7 @@ sfc_flow_parse_rss(struct sfc_adapter *sa,
 {
 	struct sfc_adapter_shared * const sas = sfc_sa2shared(sa);
 	struct sfc_rss *rss = &sas->rss;
-	unsigned int rxq_sw_index;
+	sfc_ethdev_qid_t ethdev_qid;
 	struct sfc_rxq *rxq;
 	unsigned int rxq_hw_index_min;
 	unsigned int rxq_hw_index_max;
@@ -1434,18 +1434,19 @@ sfc_flow_parse_rss(struct sfc_adapter *sa,
 	if (action_rss->queue_num == 0)
 		return -EINVAL;
 
-	rxq_sw_index = sfc_sa2shared(sa)->rxq_count - 1;
-	rxq = &sa->rxq_ctrl[rxq_sw_index];
+	ethdev_qid = sfc_sa2shared(sa)->ethdev_rxq_count - 1;
+	rxq = sfc_rxq_ctrl_by_ethdev_qid(sa, ethdev_qid);
 	rxq_hw_index_min = rxq->hw_index;
 	rxq_hw_index_max = 0;
 
 	for (i = 0; i < action_rss->queue_num; ++i) {
-		rxq_sw_index = action_rss->queue[i];
+		ethdev_qid = action_rss->queue[i];
 
-		if (rxq_sw_index >= sfc_sa2shared(sa)->rxq_count)
+		if ((unsigned int)ethdev_qid >=
+		    sfc_sa2shared(sa)->ethdev_rxq_count)
 			return -EINVAL;
 
-		rxq = &sa->rxq_ctrl[rxq_sw_index];
+		rxq = sfc_rxq_ctrl_by_ethdev_qid(sa, ethdev_qid);
 
 		if (rxq->hw_index < rxq_hw_index_min)
 			rxq_hw_index_min = rxq->hw_index;
@@ -1509,9 +1510,10 @@ sfc_flow_parse_rss(struct sfc_adapter *sa,
 
 	for (i = 0; i < RTE_DIM(sfc_rss_conf->rss_tbl); ++i) {
 		unsigned int nb_queues = action_rss->queue_num;
-		unsigned int rxq_sw_index = action_rss->queue[i % nb_queues];
-		struct sfc_rxq *rxq = &sa->rxq_ctrl[rxq_sw_index];
+		struct sfc_rxq *rxq;
 
+		ethdev_qid = action_rss->queue[i % nb_queues];
+		rxq = sfc_rxq_ctrl_by_ethdev_qid(sa, ethdev_qid);
 		sfc_rss_conf->rss_tbl[i] = rxq->hw_index - rxq_hw_index_min;
 	}
 
diff --git a/drivers/net/sfc/sfc_rx.c b/drivers/net/sfc/sfc_rx.c
index 461afc5168..597785ae02 100644
--- a/drivers/net/sfc/sfc_rx.c
+++ b/drivers/net/sfc/sfc_rx.c
@@ -654,14 +654,17 @@ struct sfc_dp_rx sfc_efx_rx = {
 };
 
 static void
-sfc_rx_qflush(struct sfc_adapter *sa, unsigned int sw_index)
+sfc_rx_qflush(struct sfc_adapter *sa, sfc_sw_index_t sw_index)
 {
+	struct sfc_adapter_shared *sas = sfc_sa2shared(sa);
+	sfc_ethdev_qid_t ethdev_qid;
 	struct sfc_rxq_info *rxq_info;
 	struct sfc_rxq *rxq;
 	unsigned int retry_count;
 	unsigned int wait_count;
 	int rc;
 
+	ethdev_qid = sfc_ethdev_rx_qid_by_rxq_sw_index(sas, sw_index);
 	rxq_info = &sfc_sa2shared(sa)->rxq_info[sw_index];
 	SFC_ASSERT(rxq_info->state & SFC_RXQ_STARTED);
 
@@ -698,13 +701,16 @@ sfc_rx_qflush(struct sfc_adapter *sa, unsigned int sw_index)
 			 (wait_count++ < SFC_RX_QFLUSH_POLL_ATTEMPTS));
 
 		if (rxq_info->state & SFC_RXQ_FLUSHING)
-			sfc_err(sa, "RxQ %u flush timed out", sw_index);
+			sfc_err(sa, "RxQ %d (internal %u) flush timed out",
+				ethdev_qid, sw_index);
 
 		if (rxq_info->state & SFC_RXQ_FLUSH_FAILED)
-			sfc_err(sa, "RxQ %u flush failed", sw_index);
+			sfc_err(sa, "RxQ %d (internal %u) flush failed",
+				ethdev_qid, sw_index);
 
 		if (rxq_info->state & SFC_RXQ_FLUSHED)
-			sfc_notice(sa, "RxQ %u flushed", sw_index);
+			sfc_notice(sa, "RxQ %d (internal %u) flushed",
+				   ethdev_qid, sw_index);
 	}
 
 	sa->priv.dp_rx->qpurge(rxq_info->dp);
@@ -764,17 +770,20 @@ sfc_rx_default_rxq_set_filter(struct sfc_adapter *sa, struct sfc_rxq *rxq)
 }
 
 int
-sfc_rx_qstart(struct sfc_adapter *sa, unsigned int sw_index)
+sfc_rx_qstart(struct sfc_adapter *sa, sfc_sw_index_t sw_index)
 {
+	struct sfc_adapter_shared *sas = sfc_sa2shared(sa);
+	sfc_ethdev_qid_t ethdev_qid;
 	struct sfc_rxq_info *rxq_info;
 	struct sfc_rxq *rxq;
 	struct sfc_evq *evq;
 	efx_rx_prefix_layout_t pinfo;
 	int rc;
 
-	sfc_log_init(sa, "sw_index=%u", sw_index);
-
 	SFC_ASSERT(sw_index < sfc_sa2shared(sa)->rxq_count);
+	ethdev_qid = sfc_ethdev_rx_qid_by_rxq_sw_index(sas, sw_index);
+
+	sfc_log_init(sa, "RxQ %d (internal %u)", ethdev_qid, sw_index);
 
 	rxq_info = &sfc_sa2shared(sa)->rxq_info[sw_index];
 	SFC_ASSERT(rxq_info->state == SFC_RXQ_INITIALIZED);
@@ -782,7 +791,7 @@ sfc_rx_qstart(struct sfc_adapter *sa, unsigned int sw_index)
 	rxq = &sa->rxq_ctrl[sw_index];
 	evq = rxq->evq;
 
-	rc = sfc_ev_qstart(evq, sfc_evq_index_by_rxq_sw_index(sa, sw_index));
+	rc = sfc_ev_qstart(evq, sfc_evq_sw_index_by_rxq_sw_index(sa, sw_index));
 	if (rc != 0)
 		goto fail_ev_qstart;
 
@@ -833,15 +842,16 @@ sfc_rx_qstart(struct sfc_adapter *sa, unsigned int sw_index)
 
 	rxq_info->state |= SFC_RXQ_STARTED;
 
-	if (sw_index == 0 && !sfc_sa2shared(sa)->isolated) {
+	if (ethdev_qid == 0 && !sfc_sa2shared(sa)->isolated) {
 		rc = sfc_rx_default_rxq_set_filter(sa, rxq);
 		if (rc != 0)
 			goto fail_mac_filter_default_rxq_set;
 	}
 
 	/* It seems to be used by DPDK for debug purposes only ('rte_ether') */
-	sa->eth_dev->data->rx_queue_state[sw_index] =
-		RTE_ETH_QUEUE_STATE_STARTED;
+	if (ethdev_qid != SFC_ETHDEV_QID_INVALID)
+		sa->eth_dev->data->rx_queue_state[ethdev_qid] =
+			RTE_ETH_QUEUE_STATE_STARTED;
 
 	return 0;
 
@@ -864,14 +874,17 @@ sfc_rx_qstart(struct sfc_adapter *sa, unsigned int sw_index)
 }
 
 void
-sfc_rx_qstop(struct sfc_adapter *sa, unsigned int sw_index)
+sfc_rx_qstop(struct sfc_adapter *sa, sfc_sw_index_t sw_index)
 {
+	struct sfc_adapter_shared *sas = sfc_sa2shared(sa);
+	sfc_ethdev_qid_t ethdev_qid;
 	struct sfc_rxq_info *rxq_info;
 	struct sfc_rxq *rxq;
 
-	sfc_log_init(sa, "sw_index=%u", sw_index);
-
 	SFC_ASSERT(sw_index < sfc_sa2shared(sa)->rxq_count);
+	ethdev_qid = sfc_ethdev_rx_qid_by_rxq_sw_index(sas, sw_index);
+
+	sfc_log_init(sa, "RxQ %d (internal %u)", ethdev_qid, sw_index);
 
 	rxq_info = &sfc_sa2shared(sa)->rxq_info[sw_index];
 
@@ -880,13 +893,14 @@ sfc_rx_qstop(struct sfc_adapter *sa, unsigned int sw_index)
 	SFC_ASSERT(rxq_info->state & SFC_RXQ_STARTED);
 
 	/* It seems to be used by DPDK for debug purposes only ('rte_ether') */
-	sa->eth_dev->data->rx_queue_state[sw_index] =
-		RTE_ETH_QUEUE_STATE_STOPPED;
+	if (ethdev_qid != SFC_ETHDEV_QID_INVALID)
+		sa->eth_dev->data->rx_queue_state[ethdev_qid] =
+			RTE_ETH_QUEUE_STATE_STOPPED;
 
 	rxq = &sa->rxq_ctrl[sw_index];
 	sa->priv.dp_rx->qstop(rxq_info->dp, &rxq->evq->read_ptr);
 
-	if (sw_index == 0)
+	if (ethdev_qid == 0)
 		efx_mac_filter_default_rxq_clear(sa->nic);
 
 	sfc_rx_qflush(sa, sw_index);
@@ -1056,11 +1070,13 @@ sfc_rx_mb_pool_buf_size(struct sfc_adapter *sa, struct rte_mempool *mb_pool)
 }
 
 int
-sfc_rx_qinit(struct sfc_adapter *sa, unsigned int sw_index,
+sfc_rx_qinit(struct sfc_adapter *sa, sfc_sw_index_t sw_index,
 	     uint16_t nb_rx_desc, unsigned int socket_id,
 	     const struct rte_eth_rxconf *rx_conf,
 	     struct rte_mempool *mb_pool)
 {
+	struct sfc_adapter_shared *sas = sfc_sa2shared(sa);
+	sfc_ethdev_qid_t ethdev_qid;
 	const efx_nic_cfg_t *encp = efx_nic_cfg_get(sa->nic);
 	struct sfc_rss *rss = &sfc_sa2shared(sa)->rss;
 	int rc;
@@ -1092,16 +1108,22 @@ sfc_rx_qinit(struct sfc_adapter *sa, unsigned int sw_index,
 	SFC_ASSERT(rxq_entries <= sa->rxq_max_entries);
 	SFC_ASSERT(rxq_max_fill_level <= nb_rx_desc);
 
-	offloads = rx_conf->offloads |
-		sa->eth_dev->data->dev_conf.rxmode.offloads;
+	ethdev_qid = sfc_ethdev_rx_qid_by_rxq_sw_index(sas, sw_index);
+
+	offloads = rx_conf->offloads;
+	/* Add device level Rx offloads if the queue is an ethdev Rx queue */
+	if (ethdev_qid != SFC_ETHDEV_QID_INVALID)
+		offloads |= sa->eth_dev->data->dev_conf.rxmode.offloads;
+
 	rc = sfc_rx_qcheck_conf(sa, rxq_max_fill_level, rx_conf, offloads);
 	if (rc != 0)
 		goto fail_bad_conf;
 
 	buf_size = sfc_rx_mb_pool_buf_size(sa, mb_pool);
 	if (buf_size == 0) {
-		sfc_err(sa, "RxQ %u mbuf pool object size is too small",
-			sw_index);
+		sfc_err(sa,
+			"RxQ %d (internal %u) mbuf pool object size is too small",
+			ethdev_qid, sw_index);
 		rc = EINVAL;
 		goto fail_bad_conf;
 	}
@@ -1111,11 +1133,13 @@ sfc_rx_qinit(struct sfc_adapter *sa, unsigned int sw_index,
 				  (offloads & DEV_RX_OFFLOAD_SCATTER),
 				  encp->enc_rx_scatter_max,
 				  &error)) {
-		sfc_err(sa, "RxQ %u MTU check failed: %s", sw_index, error);
-		sfc_err(sa, "RxQ %u calculated Rx buffer size is %u vs "
+		sfc_err(sa, "RxQ %d (internal %u) MTU check failed: %s",
+			ethdev_qid, sw_index, error);
+		sfc_err(sa,
+			"RxQ %d (internal %u) calculated Rx buffer size is %u vs "
 			"PDU size %u plus Rx prefix %u bytes",
-			sw_index, buf_size, (unsigned int)sa->port.pdu,
-			encp->enc_rx_prefix_size);
+			ethdev_qid, sw_index, buf_size,
+			(unsigned int)sa->port.pdu, encp->enc_rx_prefix_size);
 		rc = EINVAL;
 		goto fail_bad_conf;
 	}
@@ -1193,7 +1217,7 @@ sfc_rx_qinit(struct sfc_adapter *sa, unsigned int sw_index,
 	info.flags = rxq_info->rxq_flags;
 	info.rxq_entries = rxq_info->entries;
 	info.rxq_hw_ring = rxq->mem.esm_base;
-	info.evq_hw_index = sfc_evq_index_by_rxq_sw_index(sa, sw_index);
+	info.evq_hw_index = sfc_evq_sw_index_by_rxq_sw_index(sa, sw_index);
 	info.evq_entries = evq_entries;
 	info.evq_hw_ring = evq->mem.esm_base;
 	info.hw_index = rxq->hw_index;
@@ -1231,13 +1255,18 @@ sfc_rx_qinit(struct sfc_adapter *sa, unsigned int sw_index,
 }
 
 void
-sfc_rx_qfini(struct sfc_adapter *sa, unsigned int sw_index)
+sfc_rx_qfini(struct sfc_adapter *sa, sfc_sw_index_t sw_index)
 {
+	struct sfc_adapter_shared *sas = sfc_sa2shared(sa);
+	sfc_ethdev_qid_t ethdev_qid;
 	struct sfc_rxq_info *rxq_info;
 	struct sfc_rxq *rxq;
 
 	SFC_ASSERT(sw_index < sfc_sa2shared(sa)->rxq_count);
-	sa->eth_dev->data->rx_queues[sw_index] = NULL;
+	ethdev_qid = sfc_ethdev_rx_qid_by_rxq_sw_index(sas, sw_index);
+
+	if (ethdev_qid != SFC_ETHDEV_QID_INVALID)
+		sa->eth_dev->data->rx_queues[ethdev_qid] = NULL;
 
 	rxq_info = &sfc_sa2shared(sa)->rxq_info[sw_index];
 
@@ -1479,14 +1508,41 @@ sfc_rx_rss_config(struct sfc_adapter *sa)
 	return rc;
 }
 
+struct sfc_rxq_info *
+sfc_rxq_info_by_ethdev_qid(struct sfc_adapter_shared *sas,
+			   sfc_ethdev_qid_t ethdev_qid)
+{
+	sfc_sw_index_t sw_index;
+
+	SFC_ASSERT((unsigned int)ethdev_qid < sas->ethdev_rxq_count);
+	SFC_ASSERT(ethdev_qid != SFC_ETHDEV_QID_INVALID);
+
+	sw_index = sfc_rxq_sw_index_by_ethdev_rx_qid(sas, ethdev_qid);
+	return &sas->rxq_info[sw_index];
+}
+
+struct sfc_rxq *
+sfc_rxq_ctrl_by_ethdev_qid(struct sfc_adapter *sa, sfc_ethdev_qid_t ethdev_qid)
+{
+	struct sfc_adapter_shared *sas = sfc_sa2shared(sa);
+	sfc_sw_index_t sw_index;
+
+	SFC_ASSERT((unsigned int)ethdev_qid < sas->ethdev_rxq_count);
+	SFC_ASSERT(ethdev_qid != SFC_ETHDEV_QID_INVALID);
+
+	sw_index = sfc_rxq_sw_index_by_ethdev_rx_qid(sas, ethdev_qid);
+	return &sa->rxq_ctrl[sw_index];
+}
+
 int
 sfc_rx_start(struct sfc_adapter *sa)
 {
 	struct sfc_adapter_shared * const sas = sfc_sa2shared(sa);
-	unsigned int sw_index;
+	sfc_sw_index_t sw_index;
 	int rc;
 
-	sfc_log_init(sa, "rxq_count=%u", sas->rxq_count);
+	sfc_log_init(sa, "rxq_count=%u (internal %u)", sas->ethdev_rxq_count,
+		     sas->rxq_count);
 
 	rc = efx_rx_init(sa->nic);
 	if (rc != 0)
@@ -1524,9 +1580,10 @@ void
 sfc_rx_stop(struct sfc_adapter *sa)
 {
 	struct sfc_adapter_shared * const sas = sfc_sa2shared(sa);
-	unsigned int sw_index;
+	sfc_sw_index_t sw_index;
 
-	sfc_log_init(sa, "rxq_count=%u", sas->rxq_count);
+	sfc_log_init(sa, "rxq_count=%u (internal %u)", sas->ethdev_rxq_count,
+		     sas->rxq_count);
 
 	sw_index = sas->rxq_count;
 	while (sw_index-- > 0) {
@@ -1538,7 +1595,7 @@ sfc_rx_stop(struct sfc_adapter *sa)
 }
 
 static int
-sfc_rx_qinit_info(struct sfc_adapter *sa, unsigned int sw_index)
+sfc_rx_qinit_info(struct sfc_adapter *sa, sfc_sw_index_t sw_index)
 {
 	struct sfc_adapter_shared * const sas = sfc_sa2shared(sa);
 	struct sfc_rxq_info *rxq_info = &sas->rxq_info[sw_index];
@@ -1606,17 +1663,29 @@ static void
 sfc_rx_fini_queues(struct sfc_adapter *sa, unsigned int nb_rx_queues)
 {
 	struct sfc_adapter_shared * const sas = sfc_sa2shared(sa);
-	int sw_index;
+	sfc_sw_index_t sw_index;
+	sfc_ethdev_qid_t ethdev_qid;
 
-	SFC_ASSERT(nb_rx_queues <= sas->rxq_count);
+	SFC_ASSERT(nb_rx_queues <= sas->ethdev_rxq_count);
 
-	sw_index = sas->rxq_count;
-	while (--sw_index >= (int)nb_rx_queues) {
-		if (sas->rxq_info[sw_index].state & SFC_RXQ_INITIALIZED)
+	/*
+	 * Finalize only ethdev queues since other ones are finalized only
+	 * on device close and they may require additional deinitialization.
+	 */
+	ethdev_qid = sas->ethdev_rxq_count;
+	while (--ethdev_qid >= (int)nb_rx_queues) {
+		struct sfc_rxq_info *rxq_info;
+
+		rxq_info = sfc_rxq_info_by_ethdev_qid(sas, ethdev_qid);
+		if (rxq_info->state & SFC_RXQ_INITIALIZED) {
+			sw_index = sfc_rxq_sw_index_by_ethdev_rx_qid(sas,
+								ethdev_qid);
 			sfc_rx_qfini(sa, sw_index);
+		}
+
 	}
 
-	sas->rxq_count = nb_rx_queues;
+	sas->ethdev_rxq_count = nb_rx_queues;
 }
 
 /**
@@ -1637,7 +1706,7 @@ sfc_rx_configure(struct sfc_adapter *sa)
 	int rc;
 
 	sfc_log_init(sa, "nb_rx_queues=%u (old %u)",
-		     nb_rx_queues, sas->rxq_count);
+		     nb_rx_queues, sas->ethdev_rxq_count);
 
 	rc = sfc_rx_check_mode(sa, &dev_conf->rxmode);
 	if (rc != 0)
@@ -1666,7 +1735,7 @@ sfc_rx_configure(struct sfc_adapter *sa)
 		struct sfc_rxq_info *new_rxq_info;
 		struct sfc_rxq *new_rxq_ctrl;
 
-		if (nb_rx_queues < sas->rxq_count)
+		if (nb_rx_queues < sas->ethdev_rxq_count)
 			sfc_rx_fini_queues(sa, nb_rx_queues);
 
 		rc = ENOMEM;
@@ -1685,30 +1754,38 @@ sfc_rx_configure(struct sfc_adapter *sa)
 		sas->rxq_info = new_rxq_info;
 		sa->rxq_ctrl = new_rxq_ctrl;
 		if (nb_rx_queues > sas->rxq_count) {
-			memset(&sas->rxq_info[sas->rxq_count], 0,
-			       (nb_rx_queues - sas->rxq_count) *
+			unsigned int rxq_count = sas->rxq_count;
+
+			memset(&sas->rxq_info[rxq_count], 0,
+			       (nb_rx_queues - rxq_count) *
 			       sizeof(sas->rxq_info[0]));
-			memset(&sa->rxq_ctrl[sas->rxq_count], 0,
-			       (nb_rx_queues - sas->rxq_count) *
+			memset(&sa->rxq_ctrl[rxq_count], 0,
+			       (nb_rx_queues - rxq_count) *
 			       sizeof(sa->rxq_ctrl[0]));
 		}
 	}
 
-	while (sas->rxq_count < nb_rx_queues) {
-		rc = sfc_rx_qinit_info(sa, sas->rxq_count);
+	while (sas->ethdev_rxq_count < nb_rx_queues) {
+		sfc_sw_index_t sw_index;
+
+		sw_index = sfc_rxq_sw_index_by_ethdev_rx_qid(sas,
+							sas->ethdev_rxq_count);
+		rc = sfc_rx_qinit_info(sa, sw_index);
 		if (rc != 0)
 			goto fail_rx_qinit_info;
 
-		sas->rxq_count++;
+		sas->ethdev_rxq_count++;
 	}
 
+	sas->rxq_count = sas->ethdev_rxq_count;
+
 configure_rss:
 	rss->channels = (dev_conf->rxmode.mq_mode == ETH_MQ_RX_RSS) ?
-			 MIN(sas->rxq_count, EFX_MAXRSS) : 0;
+			 MIN(sas->ethdev_rxq_count, EFX_MAXRSS) : 0;
 
 	if (rss->channels > 0) {
 		struct rte_eth_rss_conf *adv_conf_rss;
-		unsigned int sw_index;
+		sfc_sw_index_t sw_index;
 
 		for (sw_index = 0; sw_index < EFX_RSS_TBL_SIZE; ++sw_index)
 			rss->tbl[sw_index] = sw_index % rss->channels;
diff --git a/drivers/net/sfc/sfc_rx.h b/drivers/net/sfc/sfc_rx.h
index 2730454fd6..96c7dc415d 100644
--- a/drivers/net/sfc/sfc_rx.h
+++ b/drivers/net/sfc/sfc_rx.h
@@ -119,6 +119,10 @@ struct sfc_rxq_info {
 };
 
 struct sfc_rxq_info *sfc_rxq_info_by_dp_rxq(const struct sfc_dp_rxq *dp_rxq);
+struct sfc_rxq_info *sfc_rxq_info_by_ethdev_qid(struct sfc_adapter_shared *sas,
+						sfc_ethdev_qid_t ethdev_qid);
+struct sfc_rxq *sfc_rxq_ctrl_by_ethdev_qid(struct sfc_adapter *sa,
+					   sfc_ethdev_qid_t ethdev_qid);
 
 int sfc_rx_configure(struct sfc_adapter *sa);
 void sfc_rx_close(struct sfc_adapter *sa);
@@ -129,9 +133,9 @@ int sfc_rx_qinit(struct sfc_adapter *sa, unsigned int rx_queue_id,
 		 uint16_t nb_rx_desc, unsigned int socket_id,
 		 const struct rte_eth_rxconf *rx_conf,
 		 struct rte_mempool *mb_pool);
-void sfc_rx_qfini(struct sfc_adapter *sa, unsigned int sw_index);
-int sfc_rx_qstart(struct sfc_adapter *sa, unsigned int sw_index);
-void sfc_rx_qstop(struct sfc_adapter *sa, unsigned int sw_index);
+void sfc_rx_qfini(struct sfc_adapter *sa, sfc_sw_index_t sw_index);
+int sfc_rx_qstart(struct sfc_adapter *sa, sfc_sw_index_t sw_index);
+void sfc_rx_qstop(struct sfc_adapter *sa, sfc_sw_index_t sw_index);
 
 uint64_t sfc_rx_get_dev_offload_caps(struct sfc_adapter *sa);
 uint64_t sfc_rx_get_queue_offload_caps(struct sfc_adapter *sa);
-- 
2.30.2


^ permalink raw reply	[flat|nested] 104+ messages in thread

* [dpdk-dev] [PATCH v2 02/20] net/sfc: do not enable interrupts on internal Rx queues
  2021-06-04 14:23 ` [dpdk-dev] [PATCH v2 00/20] net/sfc: support flow API COUNT action Andrew Rybchenko
  2021-06-04 14:23   ` [dpdk-dev] [PATCH v2 01/20] net/sfc: introduce ethdev Rx queue ID Andrew Rybchenko
@ 2021-06-04 14:23   ` Andrew Rybchenko
  2021-06-04 14:23   ` [dpdk-dev] [PATCH v2 03/20] common/sfc_efx/base: separate target EvQ and IRQ config Andrew Rybchenko
                     ` (18 subsequent siblings)
  20 siblings, 0 replies; 104+ messages in thread
From: Andrew Rybchenko @ 2021-06-04 14:23 UTC (permalink / raw)
  To: dev

The rxq_intr flag requests support for interrupt mode on ethdev Rx queues.
There are no internal Rx queues yet.
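
For illustration, a minimal sketch (not part of the patch) of the intended
decision, expressed as a hypothetical helper on top of the Rx queue ID
mapping introduced in the previous patch:

    /* Enable EvQ interrupts only for Rx queues exposed via ethdev */
    static bool
    sfc_rxq_evq_wants_irq(struct sfc_adapter *sa, const struct sfc_evq *evq)
    {
            sfc_ethdev_qid_t ethdev_qid;

            if (!sa->intr.rxq_intr || evq->dp_rxq == NULL)
                    return false;

            ethdev_qid = sfc_ethdev_rx_qid_by_rxq_sw_index(sfc_sa2shared(sa),
                                evq->dp_rxq->dpq.queue_id);
            return ethdev_qid != SFC_ETHDEV_QID_INVALID;
    }

Internal Rx queues (added by later patches) have no ethdev queue ID and
therefore never request EFX_EVQ_FLAGS_NOTIFY_INTERRUPT.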

Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
---
 drivers/net/sfc/sfc_ev.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/drivers/net/sfc/sfc_ev.c b/drivers/net/sfc/sfc_ev.c
index 2262994112..9a8149f052 100644
--- a/drivers/net/sfc/sfc_ev.c
+++ b/drivers/net/sfc/sfc_ev.c
@@ -663,7 +663,9 @@ sfc_ev_qstart(struct sfc_evq *evq, unsigned int hw_index)
 		     efx_evq_size(sa->nic, evq->entries, evq_flags));
 
 	if ((sa->intr.lsc_intr && hw_index == sa->mgmt_evq_index) ||
-	    (sa->intr.rxq_intr && evq->dp_rxq != NULL))
+	    (sa->intr.rxq_intr && evq->dp_rxq != NULL &&
+	     sfc_ethdev_rx_qid_by_rxq_sw_index(sfc_sa2shared(sa),
+		evq->dp_rxq->dpq.queue_id) != SFC_ETHDEV_QID_INVALID))
 		evq_flags |= EFX_EVQ_FLAGS_NOTIFY_INTERRUPT;
 	else
 		evq_flags |= EFX_EVQ_FLAGS_NOTIFY_DISABLED;
-- 
2.30.2


^ permalink raw reply	[flat|nested] 104+ messages in thread

* [dpdk-dev] [PATCH v2 03/20] common/sfc_efx/base: separate target EvQ and IRQ config
  2021-06-04 14:23 ` [dpdk-dev] [PATCH v2 00/20] net/sfc: support flow API COUNT action Andrew Rybchenko
  2021-06-04 14:23   ` [dpdk-dev] [PATCH v2 01/20] net/sfc: introduce ethdev Rx queue ID Andrew Rybchenko
  2021-06-04 14:23   ` [dpdk-dev] [PATCH v2 02/20] net/sfc: do not enable interrupts on internal Rx queues Andrew Rybchenko
@ 2021-06-04 14:23   ` Andrew Rybchenko
  2021-06-04 14:23   ` [dpdk-dev] [PATCH v2 04/20] common/sfc_efx/base: support custom EvQ to IRQ mapping Andrew Rybchenko
                     ` (17 subsequent siblings)
  20 siblings, 0 replies; 104+ messages in thread
From: Andrew Rybchenko @ 2021-06-04 14:23 UTC (permalink / raw)
  To: dev; +Cc: Andy Moreton

The target EvQ and the IRQ number are specified in the same location
in the MCDI request. The value is treated as an IRQ number if the
event queue is interrupting (the corresponding flag is set) and
as a target event queue otherwise.

However, it is better to separate the two at the helper API level to
make the intent clearer.
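
After the change the helper fills exactly one of the two MCDI fields,
which can be summarised as (sketch of the logic in the hunk below):

    interrupting = ((flags & EFX_EVQ_FLAGS_NOTIFY_MASK) ==
        EFX_EVQ_FLAGS_NOTIFY_INTERRUPT);

    if (interrupting)
            MCDI_IN_SET_DWORD(req, INIT_EVQ_V2_IN_IRQ_NUM, irq);
    else
            MCDI_IN_SET_DWORD(req, INIT_EVQ_V2_IN_TARGET_EVQ, target_evq);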

Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
---
 drivers/common/sfc_efx/base/ef10_ev.c  | 12 +++++++-----
 drivers/common/sfc_efx/base/efx_impl.h |  1 +
 drivers/common/sfc_efx/base/efx_mcdi.c |  7 ++++++-
 drivers/common/sfc_efx/base/rhead_ev.c | 12 +++++++-----
 4 files changed, 21 insertions(+), 11 deletions(-)

diff --git a/drivers/common/sfc_efx/base/ef10_ev.c b/drivers/common/sfc_efx/base/ef10_ev.c
index ea59beecc4..c0cbc427b9 100644
--- a/drivers/common/sfc_efx/base/ef10_ev.c
+++ b/drivers/common/sfc_efx/base/ef10_ev.c
@@ -121,7 +121,8 @@ ef10_ev_qcreate(
 	__in		efx_evq_t *eep)
 {
 	efx_nic_cfg_t *encp = &(enp->en_nic_cfg);
-	uint32_t irq;
+	uint32_t irq = 0;
+	uint32_t target_evq = 0;
 	efx_rc_t rc;
 	boolean_t low_latency;
 
@@ -159,11 +160,12 @@ ef10_ev_qcreate(
 	    EFX_EVQ_FLAGS_NOTIFY_INTERRUPT) {
 		irq = index;
 	} else if (index == EFX_EF10_ALWAYS_INTERRUPTING_EVQ_INDEX) {
-		irq = index;
+		/* Use the first interrupt for always interrupting EvQ */
+		irq = 0;
 		flags = (flags & ~EFX_EVQ_FLAGS_NOTIFY_MASK) |
 		    EFX_EVQ_FLAGS_NOTIFY_INTERRUPT;
 	} else {
-		irq = EFX_EF10_ALWAYS_INTERRUPTING_EVQ_INDEX;
+		target_evq = EFX_EF10_ALWAYS_INTERRUPTING_EVQ_INDEX;
 	}
 
 	/*
@@ -187,8 +189,8 @@ ef10_ev_qcreate(
 	 * decision and low_latency hint is ignored.
 	 */
 	low_latency = encp->enc_datapath_cap_evb ? 0 : 1;
-	rc = efx_mcdi_init_evq(enp, index, esmp, ndescs, irq, us, flags,
-	    low_latency);
+	rc = efx_mcdi_init_evq(enp, index, esmp, ndescs, irq, target_evq, us,
+	    flags, low_latency);
 	if (rc != 0)
 		goto fail2;
 
diff --git a/drivers/common/sfc_efx/base/efx_impl.h b/drivers/common/sfc_efx/base/efx_impl.h
index 8b63cfb37d..4fff9e1842 100644
--- a/drivers/common/sfc_efx/base/efx_impl.h
+++ b/drivers/common/sfc_efx/base/efx_impl.h
@@ -1535,6 +1535,7 @@ efx_mcdi_init_evq(
 	__in		efsys_mem_t *esmp,
 	__in		size_t nevs,
 	__in		uint32_t irq,
+	__in		uint32_t target_evq,
 	__in		uint32_t us,
 	__in		uint32_t flags,
 	__in		boolean_t low_latency);
diff --git a/drivers/common/sfc_efx/base/efx_mcdi.c b/drivers/common/sfc_efx/base/efx_mcdi.c
index f226ffd923..b68fc0503d 100644
--- a/drivers/common/sfc_efx/base/efx_mcdi.c
+++ b/drivers/common/sfc_efx/base/efx_mcdi.c
@@ -2568,6 +2568,7 @@ efx_mcdi_init_evq(
 	__in		efsys_mem_t *esmp,
 	__in		size_t nevs,
 	__in		uint32_t irq,
+	__in		uint32_t target_evq,
 	__in		uint32_t us,
 	__in		uint32_t flags,
 	__in		boolean_t low_latency)
@@ -2602,11 +2603,15 @@ efx_mcdi_init_evq(
 
 	MCDI_IN_SET_DWORD(req, INIT_EVQ_V2_IN_SIZE, nevs);
 	MCDI_IN_SET_DWORD(req, INIT_EVQ_V2_IN_INSTANCE, instance);
-	MCDI_IN_SET_DWORD(req, INIT_EVQ_V2_IN_IRQ_NUM, irq);
 
 	interrupting = ((flags & EFX_EVQ_FLAGS_NOTIFY_MASK) ==
 	    EFX_EVQ_FLAGS_NOTIFY_INTERRUPT);
 
+	if (interrupting)
+		MCDI_IN_SET_DWORD(req, INIT_EVQ_V2_IN_IRQ_NUM, irq);
+	else
+		MCDI_IN_SET_DWORD(req, INIT_EVQ_V2_IN_TARGET_EVQ, target_evq);
+
 	if (encp->enc_init_evq_v2_supported) {
 		/*
 		 * On Medford the low latency license is required to enable RX
diff --git a/drivers/common/sfc_efx/base/rhead_ev.c b/drivers/common/sfc_efx/base/rhead_ev.c
index 2099581fd7..533cd9e34a 100644
--- a/drivers/common/sfc_efx/base/rhead_ev.c
+++ b/drivers/common/sfc_efx/base/rhead_ev.c
@@ -106,7 +106,8 @@ rhead_ev_qcreate(
 {
 	const efx_nic_cfg_t *encp = efx_nic_cfg_get(enp);
 	size_t desc_size;
-	uint32_t irq;
+	uint32_t irq = 0;
+	uint32_t target_evq = 0;
 	efx_rc_t rc;
 
 	_NOTE(ARGUNUSED(id))	/* buftbl id managed by MC */
@@ -142,19 +143,20 @@ rhead_ev_qcreate(
 	    EFX_EVQ_FLAGS_NOTIFY_INTERRUPT) {
 		irq = index;
 	} else if (index == EFX_RHEAD_ALWAYS_INTERRUPTING_EVQ_INDEX) {
-		irq = index;
+		/* Use the first interrupt for always interrupting EvQ */
+		irq = 0;
 		flags = (flags & ~EFX_EVQ_FLAGS_NOTIFY_MASK) |
 		    EFX_EVQ_FLAGS_NOTIFY_INTERRUPT;
 	} else {
-		irq = EFX_RHEAD_ALWAYS_INTERRUPTING_EVQ_INDEX;
+		target_evq = EFX_RHEAD_ALWAYS_INTERRUPTING_EVQ_INDEX;
 	}
 
 	/*
 	 * Interrupts may be raised for events immediately after the queue is
 	 * created. See bug58606.
 	 */
-	rc = efx_mcdi_init_evq(enp, index, esmp, ndescs, irq, us, flags,
-	    B_FALSE);
+	rc = efx_mcdi_init_evq(enp, index, esmp, ndescs, irq, target_evq, us,
+	    flags, B_FALSE);
 	if (rc != 0)
 		goto fail2;
 
-- 
2.30.2


^ permalink raw reply	[flat|nested] 104+ messages in thread

* [dpdk-dev] [PATCH v2 04/20] common/sfc_efx/base: support custom EvQ to IRQ mapping
  2021-06-04 14:23 ` [dpdk-dev] [PATCH v2 00/20] net/sfc: support flow API COUNT action Andrew Rybchenko
                     ` (2 preceding siblings ...)
  2021-06-04 14:23   ` [dpdk-dev] [PATCH v2 03/20] common/sfc_efx/base: separate target EvQ and IRQ config Andrew Rybchenko
@ 2021-06-04 14:23   ` Andrew Rybchenko
  2021-06-04 14:23   ` [dpdk-dev] [PATCH v2 05/20] net/sfc: explicitly control IRQ used for Rx queues Andrew Rybchenko
                     ` (16 subsequent siblings)
  20 siblings, 0 replies; 104+ messages in thread
From: Andrew Rybchenko @ 2021-06-04 14:23 UTC (permalink / raw)
  To: dev; +Cc: Andy Moreton

Custom mapping is actually supported for the EF10 and EF100 families only.

A driver (e.g. a DPDK PMD) may need to customize the mapping of EvQs
to interrupts if, for example, extra EvQs are used for housekeeping
in polling mode or for wake-up (via another EvQ).
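
For example, a driver could bind an interrupting EvQ to a caller-chosen
IRQ roughly as follows (illustrative sketch only; enp, index, esmp, ndescs,
flags, irq and eep come from the caller's context, and efx_ev_qcreate()
keeps the old behaviour of using the EvQ index as the IRQ number):

    rc = efx_ev_qcreate_irq(enp, index, esmp, ndescs,
                            0 /* unused on EF10 */, 0 /* us */,
                            flags | EFX_EVQ_FLAGS_NOTIFY_INTERRUPT,
                            irq, &eep);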

Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
---
 drivers/common/sfc_efx/base/ef10_ev.c    |  4 +--
 drivers/common/sfc_efx/base/ef10_impl.h  |  1 +
 drivers/common/sfc_efx/base/efx.h        | 13 ++++++++
 drivers/common/sfc_efx/base/efx_ev.c     | 39 ++++++++++++++++++++----
 drivers/common/sfc_efx/base/efx_impl.h   |  3 +-
 drivers/common/sfc_efx/base/rhead_ev.c   |  4 +--
 drivers/common/sfc_efx/base/rhead_impl.h |  1 +
 drivers/common/sfc_efx/version.map       |  1 +
 8 files changed, 55 insertions(+), 11 deletions(-)

diff --git a/drivers/common/sfc_efx/base/ef10_ev.c b/drivers/common/sfc_efx/base/ef10_ev.c
index c0cbc427b9..ba078940b6 100644
--- a/drivers/common/sfc_efx/base/ef10_ev.c
+++ b/drivers/common/sfc_efx/base/ef10_ev.c
@@ -118,10 +118,10 @@ ef10_ev_qcreate(
 	__in		uint32_t id,
 	__in		uint32_t us,
 	__in		uint32_t flags,
+	__in		uint32_t irq,
 	__in		efx_evq_t *eep)
 {
 	efx_nic_cfg_t *encp = &(enp->en_nic_cfg);
-	uint32_t irq = 0;
 	uint32_t target_evq = 0;
 	efx_rc_t rc;
 	boolean_t low_latency;
@@ -158,7 +158,7 @@ ef10_ev_qcreate(
 	/* INIT_EVQ expects function-relative vector number */
 	if ((flags & EFX_EVQ_FLAGS_NOTIFY_MASK) ==
 	    EFX_EVQ_FLAGS_NOTIFY_INTERRUPT) {
-		irq = index;
+		/* IRQ number is specified by caller */
 	} else if (index == EFX_EF10_ALWAYS_INTERRUPTING_EVQ_INDEX) {
 		/* Use the first interrupt for always interrupting EvQ */
 		irq = 0;
diff --git a/drivers/common/sfc_efx/base/ef10_impl.h b/drivers/common/sfc_efx/base/ef10_impl.h
index 40210fbd91..7c8d51b7a5 100644
--- a/drivers/common/sfc_efx/base/ef10_impl.h
+++ b/drivers/common/sfc_efx/base/ef10_impl.h
@@ -111,6 +111,7 @@ ef10_ev_qcreate(
 	__in		uint32_t id,
 	__in		uint32_t us,
 	__in		uint32_t flags,
+	__in		uint32_t irq,
 	__in		efx_evq_t *eep);
 
 LIBEFX_INTERNAL
diff --git a/drivers/common/sfc_efx/base/efx.h b/drivers/common/sfc_efx/base/efx.h
index 8e13075b07..6a99099ad2 100644
--- a/drivers/common/sfc_efx/base/efx.h
+++ b/drivers/common/sfc_efx/base/efx.h
@@ -2333,6 +2333,19 @@ efx_ev_qcreate(
 	__in		uint32_t flags,
 	__deref_out	efx_evq_t **eepp);
 
+LIBEFX_API
+extern	__checkReturn	efx_rc_t
+efx_ev_qcreate_irq(
+	__in		efx_nic_t *enp,
+	__in		unsigned int index,
+	__in		efsys_mem_t *esmp,
+	__in		size_t ndescs,
+	__in		uint32_t id,
+	__in		uint32_t us,
+	__in		uint32_t flags,
+	__in		uint32_t irq,
+	__deref_out	efx_evq_t **eepp);
+
 LIBEFX_API
 extern		void
 efx_ev_qpost(
diff --git a/drivers/common/sfc_efx/base/efx_ev.c b/drivers/common/sfc_efx/base/efx_ev.c
index 19bdea03fd..4808f8ddfc 100644
--- a/drivers/common/sfc_efx/base/efx_ev.c
+++ b/drivers/common/sfc_efx/base/efx_ev.c
@@ -35,6 +35,7 @@ siena_ev_qcreate(
 	__in		uint32_t id,
 	__in		uint32_t us,
 	__in		uint32_t flags,
+	__in		uint32_t irq,
 	__in		efx_evq_t *eep);
 
 static			void
@@ -253,7 +254,7 @@ efx_ev_fini(
 
 
 	__checkReturn	efx_rc_t
-efx_ev_qcreate(
+efx_ev_qcreate_irq(
 	__in		efx_nic_t *enp,
 	__in		unsigned int index,
 	__in		efsys_mem_t *esmp,
@@ -261,6 +262,7 @@ efx_ev_qcreate(
 	__in		uint32_t id,
 	__in		uint32_t us,
 	__in		uint32_t flags,
+	__in		uint32_t irq,
 	__deref_out	efx_evq_t **eepp)
 {
 	const efx_ev_ops_t *eevop = enp->en_eevop;
@@ -347,7 +349,7 @@ efx_ev_qcreate(
 	*eepp = eep;
 
 	if ((rc = eevop->eevo_qcreate(enp, index, esmp, ndescs, id, us, flags,
-	    eep)) != 0)
+	    irq, eep)) != 0)
 		goto fail9;
 
 	return (0);
@@ -377,6 +379,23 @@ efx_ev_qcreate(
 	return (rc);
 }
 
+	__checkReturn	efx_rc_t
+efx_ev_qcreate(
+	__in		efx_nic_t *enp,
+	__in		unsigned int index,
+	__in		efsys_mem_t *esmp,
+	__in		size_t ndescs,
+	__in		uint32_t id,
+	__in		uint32_t us,
+	__in		uint32_t flags,
+	__deref_out	efx_evq_t **eepp)
+{
+	uint32_t irq = index;
+
+	return (efx_ev_qcreate_irq(enp, index, esmp, ndescs, id, us, flags,
+	    irq, eepp));
+}
+
 		void
 efx_ev_qdestroy(
 	__in	efx_evq_t *eep)
@@ -1278,6 +1297,7 @@ siena_ev_qcreate(
 	__in		uint32_t id,
 	__in		uint32_t us,
 	__in		uint32_t flags,
+	__in		uint32_t irq,
 	__in		efx_evq_t *eep)
 {
 	efx_nic_cfg_t *encp = &(enp->en_nic_cfg);
@@ -1290,11 +1310,16 @@ siena_ev_qcreate(
 
 	EFSYS_ASSERT((flags & EFX_EVQ_FLAGS_EXTENDED_WIDTH) == 0);
 
+	if (irq != index) {
+		rc = EINVAL;
+		goto fail1;
+	}
+
 #if EFSYS_OPT_RX_SCALE
 	if (enp->en_intr.ei_type == EFX_INTR_LINE &&
 	    index >= EFX_MAXRSS_LEGACY) {
 		rc = EINVAL;
-		goto fail1;
+		goto fail2;
 	}
 #endif
 	for (size = 0;
@@ -1304,7 +1329,7 @@ siena_ev_qcreate(
 			break;
 	if (id + (1 << size) >= encp->enc_buftbl_limit) {
 		rc = EINVAL;
-		goto fail2;
+		goto fail3;
 	}
 
 	/* Set up the handler table */
@@ -1336,11 +1361,13 @@ siena_ev_qcreate(
 
 	return (0);
 
+fail3:
+	EFSYS_PROBE(fail3);
+#if EFSYS_OPT_RX_SCALE
 fail2:
 	EFSYS_PROBE(fail2);
-#if EFSYS_OPT_RX_SCALE
-fail1:
 #endif
+fail1:
 	EFSYS_PROBE1(fail1, efx_rc_t, rc);
 
 	return (rc);
diff --git a/drivers/common/sfc_efx/base/efx_impl.h b/drivers/common/sfc_efx/base/efx_impl.h
index 4fff9e1842..f891e2616e 100644
--- a/drivers/common/sfc_efx/base/efx_impl.h
+++ b/drivers/common/sfc_efx/base/efx_impl.h
@@ -87,7 +87,8 @@ typedef struct efx_ev_ops_s {
 	void		(*eevo_fini)(efx_nic_t *);
 	efx_rc_t	(*eevo_qcreate)(efx_nic_t *, unsigned int,
 					  efsys_mem_t *, size_t, uint32_t,
-					  uint32_t, uint32_t, efx_evq_t *);
+					  uint32_t, uint32_t, uint32_t,
+					  efx_evq_t *);
 	void		(*eevo_qdestroy)(efx_evq_t *);
 	efx_rc_t	(*eevo_qprime)(efx_evq_t *, unsigned int);
 	void		(*eevo_qpost)(efx_evq_t *, uint16_t);
diff --git a/drivers/common/sfc_efx/base/rhead_ev.c b/drivers/common/sfc_efx/base/rhead_ev.c
index 533cd9e34a..3eaed9e94b 100644
--- a/drivers/common/sfc_efx/base/rhead_ev.c
+++ b/drivers/common/sfc_efx/base/rhead_ev.c
@@ -102,11 +102,11 @@ rhead_ev_qcreate(
 	__in		uint32_t id,
 	__in		uint32_t us,
 	__in		uint32_t flags,
+	__in		uint32_t irq,
 	__in		efx_evq_t *eep)
 {
 	const efx_nic_cfg_t *encp = efx_nic_cfg_get(enp);
 	size_t desc_size;
-	uint32_t irq = 0;
 	uint32_t target_evq = 0;
 	efx_rc_t rc;
 
@@ -141,7 +141,7 @@ rhead_ev_qcreate(
 	/* INIT_EVQ expects function-relative vector number */
 	if ((flags & EFX_EVQ_FLAGS_NOTIFY_MASK) ==
 	    EFX_EVQ_FLAGS_NOTIFY_INTERRUPT) {
-		irq = index;
+		/* IRQ number is specified by caller */
 	} else if (index == EFX_RHEAD_ALWAYS_INTERRUPTING_EVQ_INDEX) {
 		/* Use the first interrupt for always interrupting EvQ */
 		irq = 0;
diff --git a/drivers/common/sfc_efx/base/rhead_impl.h b/drivers/common/sfc_efx/base/rhead_impl.h
index 3bf9beceb0..dd38ded775 100644
--- a/drivers/common/sfc_efx/base/rhead_impl.h
+++ b/drivers/common/sfc_efx/base/rhead_impl.h
@@ -131,6 +131,7 @@ rhead_ev_qcreate(
 	__in		uint32_t id,
 	__in		uint32_t us,
 	__in		uint32_t flags,
+	__in		uint32_t irq,
 	__in		efx_evq_t *eep);
 
 LIBEFX_INTERNAL
diff --git a/drivers/common/sfc_efx/version.map b/drivers/common/sfc_efx/version.map
index 75da5aa5c2..ae85ed18c6 100644
--- a/drivers/common/sfc_efx/version.map
+++ b/drivers/common/sfc_efx/version.map
@@ -7,6 +7,7 @@ INTERNAL {
 	efx_ev_init;
 	efx_ev_qcreate;
 	efx_ev_qcreate_check_init_done;
+	efx_ev_qcreate_irq;
 	efx_ev_qdestroy;
 	efx_ev_qmoderate;
 	efx_ev_qpending;
-- 
2.30.2


^ permalink raw reply	[flat|nested] 104+ messages in thread

* [dpdk-dev] [PATCH v2 05/20] net/sfc: explicitly control IRQ used for Rx queues
  2021-06-04 14:23 ` [dpdk-dev] [PATCH v2 00/20] net/sfc: support flow API COUNT action Andrew Rybchenko
                     ` (3 preceding siblings ...)
  2021-06-04 14:23   ` [dpdk-dev] [PATCH v2 04/20] common/sfc_efx/base: support custom EvQ to IRQ mapping Andrew Rybchenko
@ 2021-06-04 14:23   ` Andrew Rybchenko
  2021-06-04 14:24   ` [dpdk-dev] [PATCH v2 06/20] net/sfc: introduce ethdev Tx queue ID Andrew Rybchenko
                     ` (15 subsequent siblings)
  20 siblings, 0 replies; 104+ messages in thread
From: Andrew Rybchenko @ 2021-06-04 14:23 UTC (permalink / raw)
  To: dev; +Cc: Andy Moreton

Interrupt support makes assumptions about the interrupt numbers used
for LSC and Rx queues: the first interrupt is used for LSC, and
subsequent interrupts are used for Rx queues.
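
In short, the IRQ number passed to efx_ev_qcreate_irq() becomes (sketch
mirroring the hunk below):

    uint32_t irq = 0;

    if (sa->intr.lsc_intr && hw_index == sa->mgmt_evq_index)
            irq = 0;                  /* management EvQ: LSC etc. */
    else if (ethdev_qid != SFC_ETHDEV_QID_INVALID)
            irq = 1 + ethdev_qid;     /* Rx IRQs follow the management one */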

Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
---
 drivers/net/sfc/sfc_ev.c | 32 ++++++++++++++++++++++++--------
 1 file changed, 24 insertions(+), 8 deletions(-)

diff --git a/drivers/net/sfc/sfc_ev.c b/drivers/net/sfc/sfc_ev.c
index 9a8149f052..71f706e403 100644
--- a/drivers/net/sfc/sfc_ev.c
+++ b/drivers/net/sfc/sfc_ev.c
@@ -648,6 +648,7 @@ sfc_ev_qstart(struct sfc_evq *evq, unsigned int hw_index)
 	struct sfc_adapter *sa = evq->sa;
 	efsys_mem_t *esmp;
 	uint32_t evq_flags = sa->evq_flags;
+	uint32_t irq = 0;
 	unsigned int total_delay_us;
 	unsigned int delay_us;
 	int rc;
@@ -662,20 +663,35 @@ sfc_ev_qstart(struct sfc_evq *evq, unsigned int hw_index)
 	(void)memset((void *)esmp->esm_base, 0xff,
 		     efx_evq_size(sa->nic, evq->entries, evq_flags));
 
-	if ((sa->intr.lsc_intr && hw_index == sa->mgmt_evq_index) ||
-	    (sa->intr.rxq_intr && evq->dp_rxq != NULL &&
-	     sfc_ethdev_rx_qid_by_rxq_sw_index(sfc_sa2shared(sa),
-		evq->dp_rxq->dpq.queue_id) != SFC_ETHDEV_QID_INVALID))
+	if (sa->intr.lsc_intr && hw_index == sa->mgmt_evq_index) {
 		evq_flags |= EFX_EVQ_FLAGS_NOTIFY_INTERRUPT;
-	else
+		irq = 0;
+	} else if (sa->intr.rxq_intr && evq->dp_rxq != NULL) {
+		sfc_ethdev_qid_t ethdev_qid;
+
+		ethdev_qid =
+			sfc_ethdev_rx_qid_by_rxq_sw_index(sfc_sa2shared(sa),
+				evq->dp_rxq->dpq.queue_id);
+		if (ethdev_qid != SFC_ETHDEV_QID_INVALID) {
+			evq_flags |= EFX_EVQ_FLAGS_NOTIFY_INTERRUPT;
+			/*
+			 * The first interrupt is used for management EvQ
+			 * (LSC etc). RxQ interrupts follow it.
+			 */
+			irq = 1 + ethdev_qid;
+		} else {
+			evq_flags |= EFX_EVQ_FLAGS_NOTIFY_DISABLED;
+		}
+	} else {
 		evq_flags |= EFX_EVQ_FLAGS_NOTIFY_DISABLED;
+	}
 
 	evq->init_state = SFC_EVQ_STARTING;
 
 	/* Create the common code event queue */
-	rc = efx_ev_qcreate(sa->nic, hw_index, esmp, evq->entries,
-			    0 /* unused on EF10 */, 0, evq_flags,
-			    &evq->common);
+	rc = efx_ev_qcreate_irq(sa->nic, hw_index, esmp, evq->entries,
+				0 /* unused on EF10 */, 0, evq_flags,
+				irq, &evq->common);
 	if (rc != 0)
 		goto fail_ev_qcreate;
 
-- 
2.30.2


^ permalink raw reply	[flat|nested] 104+ messages in thread

* [dpdk-dev] [PATCH v2 06/20] net/sfc: introduce ethdev Tx queue ID
  2021-06-04 14:23 ` [dpdk-dev] [PATCH v2 00/20] net/sfc: support flow API COUNT action Andrew Rybchenko
                     ` (4 preceding siblings ...)
  2021-06-04 14:23   ` [dpdk-dev] [PATCH v2 05/20] net/sfc: explicitly control IRQ used for Rx queues Andrew Rybchenko
@ 2021-06-04 14:24   ` Andrew Rybchenko
  2021-06-04 14:24   ` [dpdk-dev] [PATCH v2 07/20] common/sfc_efx/base: add ingress m-port RxQ flag Andrew Rybchenko
                     ` (14 subsequent siblings)
  20 siblings, 0 replies; 104+ messages in thread
From: Andrew Rybchenko @ 2021-06-04 14:24 UTC (permalink / raw)
  To: dev; +Cc: Igor Romanov, Andy Moreton, Ivan Malov

From: Igor Romanov <igor.romanov@oktetlabs.ru>

Make the software index of a Tx queue and the ethdev index separate.
When an ethdev TxQ is accessed in ethdev callbacks, an explicit ethdev
queue index is used.

This is a preparation for introducing non-ethdev Tx queues.
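
While ethdev and software Tx queue indices still coincide, the conversion
is now explicit so that later patches can place internal queues after the
ethdev ones. A typical ethdev callback therefore follows this sketch
(variable names are illustrative):

    sfc_sw_index_t sw_index;
    struct sfc_txq_info *txq_info;

    sw_index = sfc_txq_sw_index_by_ethdev_tx_qid(sas, ethdev_qid);
    txq_info = sfc_txq_info_by_ethdev_qid(sas, ethdev_qid);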

Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
Reviewed-by: Ivan Malov <ivan.malov@oktetlabs.ru>
---
 drivers/net/sfc/sfc.h        |   1 +
 drivers/net/sfc/sfc_ethdev.c |  46 ++++++----
 drivers/net/sfc/sfc_ev.c     |   2 +-
 drivers/net/sfc/sfc_ev.h     |  21 ++++-
 drivers/net/sfc/sfc_tx.c     | 164 ++++++++++++++++++++++++-----------
 drivers/net/sfc/sfc_tx.h     |  11 +--
 6 files changed, 171 insertions(+), 74 deletions(-)

diff --git a/drivers/net/sfc/sfc.h b/drivers/net/sfc/sfc.h
index ebe705020d..00fc26cf0e 100644
--- a/drivers/net/sfc/sfc.h
+++ b/drivers/net/sfc/sfc.h
@@ -173,6 +173,7 @@ struct sfc_adapter_shared {
 
 	unsigned int			txq_count;
 	struct sfc_txq_info		*txq_info;
+	unsigned int			ethdev_txq_count;
 
 	struct sfc_rss			rss;
 
diff --git a/drivers/net/sfc/sfc_ethdev.c b/drivers/net/sfc/sfc_ethdev.c
index 2651c41288..88896db1f8 100644
--- a/drivers/net/sfc/sfc_ethdev.c
+++ b/drivers/net/sfc/sfc_ethdev.c
@@ -524,24 +524,28 @@ sfc_rx_queue_release(void *queue)
 }
 
 static int
-sfc_tx_queue_setup(struct rte_eth_dev *dev, uint16_t tx_queue_id,
+sfc_tx_queue_setup(struct rte_eth_dev *dev, uint16_t ethdev_qid,
 		   uint16_t nb_tx_desc, unsigned int socket_id,
 		   const struct rte_eth_txconf *tx_conf)
 {
 	struct sfc_adapter_shared *sas = sfc_adapter_shared_by_eth_dev(dev);
 	struct sfc_adapter *sa = sfc_adapter_by_eth_dev(dev);
+	struct sfc_txq_info *txq_info;
+	sfc_sw_index_t sw_index;
 	int rc;
 
 	sfc_log_init(sa, "TxQ = %u, nb_tx_desc = %u, socket_id = %u",
-		     tx_queue_id, nb_tx_desc, socket_id);
+		     ethdev_qid, nb_tx_desc, socket_id);
 
 	sfc_adapter_lock(sa);
 
-	rc = sfc_tx_qinit(sa, tx_queue_id, nb_tx_desc, socket_id, tx_conf);
+	sw_index = sfc_txq_sw_index_by_ethdev_tx_qid(sas, ethdev_qid);
+	rc = sfc_tx_qinit(sa, sw_index, nb_tx_desc, socket_id, tx_conf);
 	if (rc != 0)
 		goto fail_tx_qinit;
 
-	dev->data->tx_queues[tx_queue_id] = sas->txq_info[tx_queue_id].dp;
+	txq_info = sfc_txq_info_by_ethdev_qid(sas, ethdev_qid);
+	dev->data->tx_queues[ethdev_qid] = txq_info->dp;
 
 	sfc_adapter_unlock(sa);
 	return 0;
@@ -557,7 +561,7 @@ sfc_tx_queue_release(void *queue)
 {
 	struct sfc_dp_txq *dp_txq = queue;
 	struct sfc_txq *txq;
-	unsigned int sw_index;
+	sfc_sw_index_t sw_index;
 	struct sfc_adapter *sa;
 
 	if (dp_txq == NULL)
@@ -1213,15 +1217,15 @@ sfc_rx_queue_info_get(struct rte_eth_dev *dev, uint16_t ethdev_qid,
  * use any process-local pointers from the adapter data.
  */
 static void
-sfc_tx_queue_info_get(struct rte_eth_dev *dev, uint16_t tx_queue_id,
+sfc_tx_queue_info_get(struct rte_eth_dev *dev, uint16_t ethdev_qid,
 		      struct rte_eth_txq_info *qinfo)
 {
 	struct sfc_adapter_shared *sas = sfc_adapter_shared_by_eth_dev(dev);
 	struct sfc_txq_info *txq_info;
 
-	SFC_ASSERT(tx_queue_id < sas->txq_count);
+	SFC_ASSERT(ethdev_qid < sas->ethdev_txq_count);
 
-	txq_info = &sas->txq_info[tx_queue_id];
+	txq_info = sfc_txq_info_by_ethdev_qid(sas, ethdev_qid);
 
 	memset(qinfo, 0, sizeof(*qinfo));
 
@@ -1362,13 +1366,15 @@ sfc_rx_queue_stop(struct rte_eth_dev *dev, uint16_t ethdev_qid)
 }
 
 static int
-sfc_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
+sfc_tx_queue_start(struct rte_eth_dev *dev, uint16_t ethdev_qid)
 {
 	struct sfc_adapter_shared *sas = sfc_adapter_shared_by_eth_dev(dev);
 	struct sfc_adapter *sa = sfc_adapter_by_eth_dev(dev);
+	struct sfc_txq_info *txq_info;
+	sfc_sw_index_t sw_index;
 	int rc;
 
-	sfc_log_init(sa, "TxQ = %u", tx_queue_id);
+	sfc_log_init(sa, "TxQ = %u", ethdev_qid);
 
 	sfc_adapter_lock(sa);
 
@@ -1376,14 +1382,16 @@ sfc_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
 	if (sa->state != SFC_ADAPTER_STARTED)
 		goto fail_not_started;
 
-	if (sas->txq_info[tx_queue_id].state != SFC_TXQ_INITIALIZED)
+	txq_info = sfc_txq_info_by_ethdev_qid(sas, ethdev_qid);
+	if (txq_info->state != SFC_TXQ_INITIALIZED)
 		goto fail_not_setup;
 
-	rc = sfc_tx_qstart(sa, tx_queue_id);
+	sw_index = sfc_txq_sw_index_by_ethdev_tx_qid(sas, ethdev_qid);
+	rc = sfc_tx_qstart(sa, sw_index);
 	if (rc != 0)
 		goto fail_tx_qstart;
 
-	sas->txq_info[tx_queue_id].deferred_started = B_TRUE;
+	txq_info->deferred_started = B_TRUE;
 
 	sfc_adapter_unlock(sa);
 	return 0;
@@ -1398,18 +1406,22 @@ sfc_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
 }
 
 static int
-sfc_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
+sfc_tx_queue_stop(struct rte_eth_dev *dev, uint16_t ethdev_qid)
 {
 	struct sfc_adapter_shared *sas = sfc_adapter_shared_by_eth_dev(dev);
 	struct sfc_adapter *sa = sfc_adapter_by_eth_dev(dev);
+	struct sfc_txq_info *txq_info;
+	sfc_sw_index_t sw_index;
 
-	sfc_log_init(sa, "TxQ = %u", tx_queue_id);
+	sfc_log_init(sa, "TxQ = %u", ethdev_qid);
 
 	sfc_adapter_lock(sa);
 
-	sfc_tx_qstop(sa, tx_queue_id);
+	sw_index = sfc_txq_sw_index_by_ethdev_tx_qid(sas, ethdev_qid);
+	sfc_tx_qstop(sa, sw_index);
 
-	sas->txq_info[tx_queue_id].deferred_started = B_FALSE;
+	txq_info = sfc_txq_info_by_ethdev_qid(sas, ethdev_qid);
+	txq_info->deferred_started = B_FALSE;
 
 	sfc_adapter_unlock(sa);
 	return 0;
diff --git a/drivers/net/sfc/sfc_ev.c b/drivers/net/sfc/sfc_ev.c
index 71f706e403..ed28d51e12 100644
--- a/drivers/net/sfc/sfc_ev.c
+++ b/drivers/net/sfc/sfc_ev.c
@@ -598,7 +598,7 @@ sfc_ev_qpoll(struct sfc_evq *evq)
 		}
 
 		if (evq->dp_txq != NULL) {
-			unsigned int txq_sw_index;
+			sfc_sw_index_t txq_sw_index;
 
 			txq_sw_index = evq->dp_txq->dpq.queue_id;
 
diff --git a/drivers/net/sfc/sfc_ev.h b/drivers/net/sfc/sfc_ev.h
index 5a9f85c2d9..75b9dcdebd 100644
--- a/drivers/net/sfc/sfc_ev.h
+++ b/drivers/net/sfc/sfc_ev.h
@@ -92,8 +92,25 @@ sfc_evq_sw_index_by_rxq_sw_index(__rte_unused struct sfc_adapter *sa,
 	return 1 + rxq_sw_index;
 }
 
-static inline unsigned int
-sfc_evq_index_by_txq_sw_index(struct sfc_adapter *sa, unsigned int txq_sw_index)
+static inline sfc_ethdev_qid_t
+sfc_ethdev_tx_qid_by_txq_sw_index(__rte_unused struct sfc_adapter_shared *sas,
+				  sfc_sw_index_t txq_sw_index)
+{
+	/* Only ethdev queues are present for now */
+	return txq_sw_index;
+}
+
+static inline sfc_sw_index_t
+sfc_txq_sw_index_by_ethdev_tx_qid(__rte_unused struct sfc_adapter_shared *sas,
+				  sfc_ethdev_qid_t ethdev_qid)
+{
+	/* Only ethdev queues are present for now */
+	return ethdev_qid;
+}
+
+static inline sfc_sw_index_t
+sfc_evq_sw_index_by_txq_sw_index(struct sfc_adapter *sa,
+				 sfc_sw_index_t txq_sw_index)
 {
 	return 1 + sa->eth_dev->data->nb_rx_queues + txq_sw_index;
 }
diff --git a/drivers/net/sfc/sfc_tx.c b/drivers/net/sfc/sfc_tx.c
index 28d696de61..ce2a9a6a4f 100644
--- a/drivers/net/sfc/sfc_tx.c
+++ b/drivers/net/sfc/sfc_tx.c
@@ -34,6 +34,19 @@
  */
 #define SFC_TX_QFLUSH_POLL_ATTEMPTS	(2000)
 
+struct sfc_txq_info *
+sfc_txq_info_by_ethdev_qid(struct sfc_adapter_shared *sas,
+			   sfc_ethdev_qid_t ethdev_qid)
+{
+	sfc_sw_index_t sw_index;
+
+	SFC_ASSERT((unsigned int)ethdev_qid < sas->ethdev_txq_count);
+	SFC_ASSERT(ethdev_qid != SFC_ETHDEV_QID_INVALID);
+
+	sw_index = sfc_txq_sw_index_by_ethdev_tx_qid(sas, ethdev_qid);
+	return &sas->txq_info[sw_index];
+}
+
 static uint64_t
 sfc_tx_get_offload_mask(struct sfc_adapter *sa)
 {
@@ -118,10 +131,12 @@ sfc_tx_qflush_done(struct sfc_txq_info *txq_info)
 }
 
 int
-sfc_tx_qinit(struct sfc_adapter *sa, unsigned int sw_index,
+sfc_tx_qinit(struct sfc_adapter *sa, sfc_sw_index_t sw_index,
 	     uint16_t nb_tx_desc, unsigned int socket_id,
 	     const struct rte_eth_txconf *tx_conf)
 {
+	struct sfc_adapter_shared * const sas = sfc_sa2shared(sa);
+	sfc_ethdev_qid_t ethdev_qid;
 	const efx_nic_cfg_t *encp = efx_nic_cfg_get(sa->nic);
 	unsigned int txq_entries;
 	unsigned int evq_entries;
@@ -134,7 +149,9 @@ sfc_tx_qinit(struct sfc_adapter *sa, unsigned int sw_index,
 	uint64_t offloads;
 	struct sfc_dp_tx_hw_limits hw_limits;
 
-	sfc_log_init(sa, "TxQ = %u", sw_index);
+	ethdev_qid = sfc_ethdev_tx_qid_by_txq_sw_index(sas, sw_index);
+
+	sfc_log_init(sa, "TxQ = %d (internal %u)", ethdev_qid, sw_index);
 
 	memset(&hw_limits, 0, sizeof(hw_limits));
 	hw_limits.txq_max_entries = sa->txq_max_entries;
@@ -150,8 +167,11 @@ sfc_tx_qinit(struct sfc_adapter *sa, unsigned int sw_index,
 	SFC_ASSERT(txq_entries >= nb_tx_desc);
 	SFC_ASSERT(txq_max_fill_level <= nb_tx_desc);
 
-	offloads = tx_conf->offloads |
-		sa->eth_dev->data->dev_conf.txmode.offloads;
+	offloads = tx_conf->offloads;
+	/* Add device level Tx offloads if the queue is an ethdev Tx queue */
+	if (ethdev_qid != SFC_ETHDEV_QID_INVALID)
+		offloads |= sa->eth_dev->data->dev_conf.txmode.offloads;
+
 	rc = sfc_tx_qcheck_conf(sa, txq_max_fill_level, tx_conf, offloads);
 	if (rc != 0)
 		goto fail_bad_conf;
@@ -231,20 +251,26 @@ sfc_tx_qinit(struct sfc_adapter *sa, unsigned int sw_index,
 
 fail_bad_conf:
 fail_size_up_rings:
-	sfc_log_init(sa, "failed (TxQ = %u, rc = %d)", sw_index, rc);
+	sfc_log_init(sa, "failed (TxQ = %d (internal %u), rc = %d)", ethdev_qid,
+		     sw_index, rc);
 	return rc;
 }
 
 void
-sfc_tx_qfini(struct sfc_adapter *sa, unsigned int sw_index)
+sfc_tx_qfini(struct sfc_adapter *sa, sfc_sw_index_t sw_index)
 {
+	struct sfc_adapter_shared * const sas = sfc_sa2shared(sa);
+	sfc_ethdev_qid_t ethdev_qid;
 	struct sfc_txq_info *txq_info;
 	struct sfc_txq *txq;
 
-	sfc_log_init(sa, "TxQ = %u", sw_index);
+	ethdev_qid = sfc_ethdev_tx_qid_by_txq_sw_index(sas, sw_index);
+
+	sfc_log_init(sa, "TxQ = %d (internal %u)", ethdev_qid, sw_index);
 
 	SFC_ASSERT(sw_index < sfc_sa2shared(sa)->txq_count);
-	sa->eth_dev->data->tx_queues[sw_index] = NULL;
+	if (ethdev_qid != SFC_ETHDEV_QID_INVALID)
+		sa->eth_dev->data->tx_queues[ethdev_qid] = NULL;
 
 	txq_info = &sfc_sa2shared(sa)->txq_info[sw_index];
 
@@ -265,9 +291,14 @@ sfc_tx_qfini(struct sfc_adapter *sa, unsigned int sw_index)
 }
 
 static int
-sfc_tx_qinit_info(struct sfc_adapter *sa, unsigned int sw_index)
+sfc_tx_qinit_info(struct sfc_adapter *sa, sfc_sw_index_t sw_index)
 {
-	sfc_log_init(sa, "TxQ = %u", sw_index);
+	struct sfc_adapter_shared * const sas = sfc_sa2shared(sa);
+	sfc_ethdev_qid_t ethdev_qid;
+
+	ethdev_qid = sfc_ethdev_tx_qid_by_txq_sw_index(sas, sw_index);
+
+	sfc_log_init(sa, "TxQ = %d (internal %u)", ethdev_qid, sw_index);
 
 	return 0;
 }
@@ -316,17 +347,26 @@ static void
 sfc_tx_fini_queues(struct sfc_adapter *sa, unsigned int nb_tx_queues)
 {
 	struct sfc_adapter_shared * const sas = sfc_sa2shared(sa);
-	int sw_index;
+	sfc_sw_index_t sw_index;
+	sfc_ethdev_qid_t ethdev_qid;
 
-	SFC_ASSERT(nb_tx_queues <= sas->txq_count);
+	SFC_ASSERT(nb_tx_queues <= sas->ethdev_txq_count);
 
-	sw_index = sas->txq_count;
-	while (--sw_index >= (int)nb_tx_queues) {
-		if (sas->txq_info[sw_index].state & SFC_TXQ_INITIALIZED)
+	/*
+	 * Finalize only ethdev queues since other ones are finalized only
+	 * on device close and they may require additional deinitialization.
+	 */
+	ethdev_qid = sas->ethdev_txq_count;
+	while (--ethdev_qid >= (int)nb_tx_queues) {
+		struct sfc_txq_info *txq_info;
+
+		sw_index = sfc_txq_sw_index_by_ethdev_tx_qid(sas, ethdev_qid);
+		txq_info = sfc_txq_info_by_ethdev_qid(sas, ethdev_qid);
+		if (txq_info->state & SFC_TXQ_INITIALIZED)
 			sfc_tx_qfini(sa, sw_index);
 	}
 
-	sas->txq_count = nb_tx_queues;
+	sas->ethdev_txq_count = nb_tx_queues;
 }
 
 int
@@ -339,7 +379,7 @@ sfc_tx_configure(struct sfc_adapter *sa)
 	int rc = 0;
 
 	sfc_log_init(sa, "nb_tx_queues=%u (old %u)",
-		     nb_tx_queues, sas->txq_count);
+		     nb_tx_queues, sas->ethdev_txq_count);
 
 	/*
 	 * The datapath implementation assumes absence of boundary
@@ -377,7 +417,7 @@ sfc_tx_configure(struct sfc_adapter *sa)
 		struct sfc_txq_info *new_txq_info;
 		struct sfc_txq *new_txq_ctrl;
 
-		if (nb_tx_queues < sas->txq_count)
+		if (nb_tx_queues < sas->ethdev_txq_count)
 			sfc_tx_fini_queues(sa, nb_tx_queues);
 
 		new_txq_info =
@@ -393,24 +433,30 @@ sfc_tx_configure(struct sfc_adapter *sa)
 
 		sas->txq_info = new_txq_info;
 		sa->txq_ctrl = new_txq_ctrl;
-		if (nb_tx_queues > sas->txq_count) {
-			memset(&sas->txq_info[sas->txq_count], 0,
-			       (nb_tx_queues - sas->txq_count) *
+		if (nb_tx_queues > sas->ethdev_txq_count) {
+			memset(&sas->txq_info[sas->ethdev_txq_count], 0,
+			       (nb_tx_queues - sas->ethdev_txq_count) *
 			       sizeof(sas->txq_info[0]));
-			memset(&sa->txq_ctrl[sas->txq_count], 0,
-			       (nb_tx_queues - sas->txq_count) *
+			memset(&sa->txq_ctrl[sas->ethdev_txq_count], 0,
+			       (nb_tx_queues - sas->ethdev_txq_count) *
 			       sizeof(sa->txq_ctrl[0]));
 		}
 	}
 
-	while (sas->txq_count < nb_tx_queues) {
-		rc = sfc_tx_qinit_info(sa, sas->txq_count);
+	while (sas->ethdev_txq_count < nb_tx_queues) {
+		sfc_sw_index_t sw_index;
+
+		sw_index = sfc_txq_sw_index_by_ethdev_tx_qid(sas,
+				sas->ethdev_txq_count);
+		rc = sfc_tx_qinit_info(sa, sw_index);
 		if (rc != 0)
 			goto fail_tx_qinit_info;
 
-		sas->txq_count++;
+		sas->ethdev_txq_count++;
 	}
 
+	sas->txq_count = sas->ethdev_txq_count;
+
 done:
 	return 0;
 
@@ -440,12 +486,12 @@ sfc_tx_close(struct sfc_adapter *sa)
 }
 
 int
-sfc_tx_qstart(struct sfc_adapter *sa, unsigned int sw_index)
+sfc_tx_qstart(struct sfc_adapter *sa, sfc_sw_index_t sw_index)
 {
 	struct sfc_adapter_shared * const sas = sfc_sa2shared(sa);
+	sfc_ethdev_qid_t ethdev_qid;
 	uint64_t offloads_supported = sfc_tx_get_dev_offload_caps(sa) |
 				      sfc_tx_get_queue_offload_caps(sa);
-	struct rte_eth_dev_data *dev_data;
 	struct sfc_txq_info *txq_info;
 	struct sfc_txq *txq;
 	struct sfc_evq *evq;
@@ -453,7 +499,9 @@ sfc_tx_qstart(struct sfc_adapter *sa, unsigned int sw_index)
 	unsigned int desc_index;
 	int rc = 0;
 
-	sfc_log_init(sa, "TxQ = %u", sw_index);
+	ethdev_qid = sfc_ethdev_tx_qid_by_txq_sw_index(sas, sw_index);
+
+	sfc_log_init(sa, "TxQ = %d (internal %u)", ethdev_qid, sw_index);
 
 	SFC_ASSERT(sw_index < sas->txq_count);
 	txq_info = &sas->txq_info[sw_index];
@@ -463,7 +511,7 @@ sfc_tx_qstart(struct sfc_adapter *sa, unsigned int sw_index)
 	txq = &sa->txq_ctrl[sw_index];
 	evq = txq->evq;
 
-	rc = sfc_ev_qstart(evq, sfc_evq_index_by_txq_sw_index(sa, sw_index));
+	rc = sfc_ev_qstart(evq, sfc_evq_sw_index_by_txq_sw_index(sa, sw_index));
 	if (rc != 0)
 		goto fail_ev_qstart;
 
@@ -505,11 +553,17 @@ sfc_tx_qstart(struct sfc_adapter *sa, unsigned int sw_index)
 	if (rc != 0)
 		goto fail_dp_qstart;
 
-	/*
-	 * It seems to be used by DPDK for debug purposes only ('rte_ether')
-	 */
-	dev_data = sa->eth_dev->data;
-	dev_data->tx_queue_state[sw_index] = RTE_ETH_QUEUE_STATE_STARTED;
+	if (ethdev_qid != SFC_ETHDEV_QID_INVALID) {
+		struct rte_eth_dev_data *dev_data;
+
+		/*
+		 * It seems to be used by DPDK for debug purposes only
+		 * ('rte_ether').
+		 */
+		dev_data = sa->eth_dev->data;
+		dev_data->tx_queue_state[ethdev_qid] =
+			RTE_ETH_QUEUE_STATE_STARTED;
+	}
 
 	return 0;
 
@@ -525,17 +579,19 @@ sfc_tx_qstart(struct sfc_adapter *sa, unsigned int sw_index)
 }
 
 void
-sfc_tx_qstop(struct sfc_adapter *sa, unsigned int sw_index)
+sfc_tx_qstop(struct sfc_adapter *sa, sfc_sw_index_t sw_index)
 {
 	struct sfc_adapter_shared * const sas = sfc_sa2shared(sa);
-	struct rte_eth_dev_data *dev_data;
+	sfc_ethdev_qid_t ethdev_qid;
 	struct sfc_txq_info *txq_info;
 	struct sfc_txq *txq;
 	unsigned int retry_count;
 	unsigned int wait_count;
 	int rc;
 
-	sfc_log_init(sa, "TxQ = %u", sw_index);
+	ethdev_qid = sfc_ethdev_tx_qid_by_txq_sw_index(sas, sw_index);
+
+	sfc_log_init(sa, "TxQ = %d (internal %u)", ethdev_qid, sw_index);
 
 	SFC_ASSERT(sw_index < sas->txq_count);
 	txq_info = &sas->txq_info[sw_index];
@@ -577,10 +633,12 @@ sfc_tx_qstop(struct sfc_adapter *sa, unsigned int sw_index)
 			 wait_count++ < SFC_TX_QFLUSH_POLL_ATTEMPTS);
 
 		if (txq_info->state & SFC_TXQ_FLUSHING)
-			sfc_err(sa, "TxQ %u flush timed out", sw_index);
+			sfc_err(sa, "TxQ %d (internal %u) flush timed out",
+				ethdev_qid, sw_index);
 
 		if (txq_info->state & SFC_TXQ_FLUSHED)
-			sfc_notice(sa, "TxQ %u flushed", sw_index);
+			sfc_notice(sa, "TxQ %d (internal %u) flushed",
+				   ethdev_qid, sw_index);
 	}
 
 	sa->priv.dp_tx->qreap(txq_info->dp);
@@ -591,11 +649,17 @@ sfc_tx_qstop(struct sfc_adapter *sa, unsigned int sw_index)
 
 	sfc_ev_qstop(txq->evq);
 
-	/*
-	 * It seems to be used by DPDK for debug purposes only ('rte_ether')
-	 */
-	dev_data = sa->eth_dev->data;
-	dev_data->tx_queue_state[sw_index] = RTE_ETH_QUEUE_STATE_STOPPED;
+	if (ethdev_qid != SFC_ETHDEV_QID_INVALID) {
+		struct rte_eth_dev_data *dev_data;
+
+		/*
+		 * It seems to be used by DPDK for debug purposes only
+		 * ('rte_ether')
+		 */
+		dev_data = sa->eth_dev->data;
+		dev_data->tx_queue_state[ethdev_qid] =
+			RTE_ETH_QUEUE_STATE_STOPPED;
+	}
 }
 
 int
@@ -603,10 +667,11 @@ sfc_tx_start(struct sfc_adapter *sa)
 {
 	struct sfc_adapter_shared * const sas = sfc_sa2shared(sa);
 	const efx_nic_cfg_t *encp = efx_nic_cfg_get(sa->nic);
-	unsigned int sw_index;
+	sfc_sw_index_t sw_index;
 	int rc = 0;
 
-	sfc_log_init(sa, "txq_count = %u", sas->txq_count);
+	sfc_log_init(sa, "txq_count = %u (internal %u)",
+		     sas->ethdev_txq_count, sas->txq_count);
 
 	if (sa->tso) {
 		if (!encp->enc_fw_assisted_tso_v2_enabled &&
@@ -654,9 +719,10 @@ void
 sfc_tx_stop(struct sfc_adapter *sa)
 {
 	struct sfc_adapter_shared * const sas = sfc_sa2shared(sa);
-	unsigned int sw_index;
+	sfc_sw_index_t sw_index;
 
-	sfc_log_init(sa, "txq_count = %u", sas->txq_count);
+	sfc_log_init(sa, "txq_count = %u (internal %u)",
+		     sas->ethdev_txq_count, sas->txq_count);
 
 	sw_index = sas->txq_count;
 	while (sw_index-- > 0) {
diff --git a/drivers/net/sfc/sfc_tx.h b/drivers/net/sfc/sfc_tx.h
index 5ed678703e..f1700b13ca 100644
--- a/drivers/net/sfc/sfc_tx.h
+++ b/drivers/net/sfc/sfc_tx.h
@@ -58,7 +58,8 @@ struct sfc_txq {
 };
 
 struct sfc_txq *sfc_txq_by_dp_txq(const struct sfc_dp_txq *dp_txq);
-
+struct sfc_txq_info *sfc_txq_info_by_ethdev_qid(struct sfc_adapter_shared *sas,
+						sfc_ethdev_qid_t ethdev_qid);
 /**
  * Transmit queue information used on libefx-based data path.
  * Allocated on the socket specified on the queue setup.
@@ -107,14 +108,14 @@ struct sfc_txq_info *sfc_txq_info_by_dp_txq(const struct sfc_dp_txq *dp_txq);
 int sfc_tx_configure(struct sfc_adapter *sa);
 void sfc_tx_close(struct sfc_adapter *sa);
 
-int sfc_tx_qinit(struct sfc_adapter *sa, unsigned int sw_index,
+int sfc_tx_qinit(struct sfc_adapter *sa, sfc_sw_index_t sw_index,
 		 uint16_t nb_tx_desc, unsigned int socket_id,
 		 const struct rte_eth_txconf *tx_conf);
-void sfc_tx_qfini(struct sfc_adapter *sa, unsigned int sw_index);
+void sfc_tx_qfini(struct sfc_adapter *sa, sfc_sw_index_t sw_index);
 
 void sfc_tx_qflush_done(struct sfc_txq_info *txq_info);
-int sfc_tx_qstart(struct sfc_adapter *sa, unsigned int sw_index);
-void sfc_tx_qstop(struct sfc_adapter *sa, unsigned int sw_index);
+int sfc_tx_qstart(struct sfc_adapter *sa, sfc_sw_index_t sw_index);
+void sfc_tx_qstop(struct sfc_adapter *sa, sfc_sw_index_t sw_index);
 int sfc_tx_start(struct sfc_adapter *sa);
 void sfc_tx_stop(struct sfc_adapter *sa);
 
-- 
2.30.2


^ permalink raw reply	[flat|nested] 104+ messages in thread

* [dpdk-dev] [PATCH v2 07/20] common/sfc_efx/base: add ingress m-port RxQ flag
  2021-06-04 14:23 ` [dpdk-dev] [PATCH v2 00/20] net/sfc: support flow API COUNT action Andrew Rybchenko
                     ` (5 preceding siblings ...)
  2021-06-04 14:24   ` [dpdk-dev] [PATCH v2 06/20] net/sfc: introduce ethdev Tx queue ID Andrew Rybchenko
@ 2021-06-04 14:24   ` Andrew Rybchenko
  2021-06-04 14:24   ` [dpdk-dev] [PATCH v2 08/20] common/sfc_efx/base: add user mark " Andrew Rybchenko
                     ` (13 subsequent siblings)
  20 siblings, 0 replies; 104+ messages in thread
From: Andrew Rybchenko @ 2021-06-04 14:24 UTC (permalink / raw)
  To: dev; +Cc: Igor Romanov, Andy Moreton, Ivan Malov

From: Igor Romanov <igor.romanov@oktetlabs.ru>

Add a flag to request support for the ingress m-port field on an RxQ.
Implement it only for Riverhead; other families return an error
if the flag is set.
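
On Riverhead the flag simply requests the corresponding Rx prefix field,
e.g. (sketch of the Riverhead mapping from the hunk below):

    if (flags & EFX_RXQ_FLAG_INGRESS_MPORT)
            fields_mask |= 1U << EFX_RX_PREFIX_FIELD_INGRESS_MPORT;

EF10-family queue creation rejects the flag with ENOTSUP.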

Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
Reviewed-by: Ivan Malov <ivan.malov@oktetlabs.ru>
---
 drivers/common/sfc_efx/base/ef10_rx.c  |  9 ++++++++-
 drivers/common/sfc_efx/base/efx.h      |  5 +++++
 drivers/common/sfc_efx/base/efx_rx.c   | 14 +++++++++-----
 drivers/common/sfc_efx/base/rhead_rx.c |  3 +++
 4 files changed, 25 insertions(+), 6 deletions(-)

diff --git a/drivers/common/sfc_efx/base/ef10_rx.c b/drivers/common/sfc_efx/base/ef10_rx.c
index cfa60bd324..0e140645a5 100644
--- a/drivers/common/sfc_efx/base/ef10_rx.c
+++ b/drivers/common/sfc_efx/base/ef10_rx.c
@@ -1031,6 +1031,11 @@ ef10_rx_qcreate(
 	EFSYS_ASSERT(params.es_bufs_per_desc == 0);
 #endif /* EFSYS_OPT_RX_ES_SUPER_BUFFER */
 
+	if (flags & EFX_RXQ_FLAG_INGRESS_MPORT) {
+		rc = ENOTSUP;
+		goto fail12;
+	}
+
 	/* Scatter can only be disabled if the firmware supports doing so */
 	if (flags & EFX_RXQ_FLAG_SCATTER)
 		params.disable_scatter = B_FALSE;
@@ -1044,7 +1049,7 @@ ef10_rx_qcreate(
 
 	if ((rc = efx_mcdi_init_rxq(enp, ndescs, eep, label, index,
 		    esmp, &params)) != 0)
-		goto fail12;
+		goto fail13;
 
 	erp->er_eep = eep;
 	erp->er_label = label;
@@ -1057,6 +1062,8 @@ ef10_rx_qcreate(
 
 	return (0);
 
+fail13:
+	EFSYS_PROBE(fail13);
 fail12:
 	EFSYS_PROBE(fail12);
 #if EFSYS_OPT_RX_ES_SUPER_BUFFER
diff --git a/drivers/common/sfc_efx/base/efx.h b/drivers/common/sfc_efx/base/efx.h
index 6a99099ad2..72ab4af01c 100644
--- a/drivers/common/sfc_efx/base/efx.h
+++ b/drivers/common/sfc_efx/base/efx.h
@@ -2925,6 +2925,7 @@ typedef enum efx_rx_prefix_field_e {
 	EFX_RX_PREFIX_FIELD_USER_MARK_VALID,
 	EFX_RX_PREFIX_FIELD_CSUM_FRAME,
 	EFX_RX_PREFIX_FIELD_INGRESS_VPORT,
+	EFX_RX_PREFIX_FIELD_INGRESS_MPORT = EFX_RX_PREFIX_FIELD_INGRESS_VPORT,
 	EFX_RX_PREFIX_NFIELDS
 } efx_rx_prefix_field_t;
 
@@ -2998,6 +2999,10 @@ typedef enum efx_rxq_type_e {
  * the driver.
  */
 #define	EFX_RXQ_FLAG_RSS_HASH		0x4
+/*
+ * Request ingress mport field in the Rx prefix of a queue.
+ */
+#define	EFX_RXQ_FLAG_INGRESS_MPORT	0x8
 
 LIBEFX_API
 extern	__checkReturn	efx_rc_t
diff --git a/drivers/common/sfc_efx/base/efx_rx.c b/drivers/common/sfc_efx/base/efx_rx.c
index 7c6fecf925..7e63363be7 100644
--- a/drivers/common/sfc_efx/base/efx_rx.c
+++ b/drivers/common/sfc_efx/base/efx_rx.c
@@ -1743,14 +1743,20 @@ siena_rx_qcreate(
 		goto fail2;
 	}
 
-	if (flags & EFX_RXQ_FLAG_SCATTER) {
 #if EFSYS_OPT_RX_SCATTER
-		jumbo = B_TRUE;
+#define SUPPORTED_RXQ_FLAGS EFX_RXQ_FLAG_SCATTER
 #else
+#define SUPPORTED_RXQ_FLAGS EFX_RXQ_FLAG_NONE
+#endif
+	/* Reject flags for unsupported queue features */
+	if ((flags & ~SUPPORTED_RXQ_FLAGS) != 0) {
 		rc = EINVAL;
 		goto fail3;
-#endif	/* EFSYS_OPT_RX_SCATTER */
 	}
+#undef SUPPORTED_RXQ_FLAGS
+
+	if (flags & EFX_RXQ_FLAG_SCATTER)
+		jumbo = B_TRUE;
 
 	/* Set up the new descriptor queue */
 	EFX_POPULATE_OWORD_7(oword,
@@ -1769,10 +1775,8 @@ siena_rx_qcreate(
 
 	return (0);
 
-#if !EFSYS_OPT_RX_SCATTER
 fail3:
 	EFSYS_PROBE(fail3);
-#endif
 fail2:
 	EFSYS_PROBE(fail2);
 fail1:
diff --git a/drivers/common/sfc_efx/base/rhead_rx.c b/drivers/common/sfc_efx/base/rhead_rx.c
index b2dacbab32..f1d46f7c70 100644
--- a/drivers/common/sfc_efx/base/rhead_rx.c
+++ b/drivers/common/sfc_efx/base/rhead_rx.c
@@ -629,6 +629,9 @@ rhead_rx_qcreate(
 		fields_mask |= 1U << EFX_RX_PREFIX_FIELD_RSS_HASH_VALID;
 	}
 
+	if (flags & EFX_RXQ_FLAG_INGRESS_MPORT)
+		fields_mask |= 1U << EFX_RX_PREFIX_FIELD_INGRESS_MPORT;
+
 	/*
 	 * LENGTH is required in EF100 host interface, as receive events
 	 * do not include the packet length.
-- 
2.30.2


^ permalink raw reply	[flat|nested] 104+ messages in thread

* [dpdk-dev] [PATCH v2 08/20] common/sfc_efx/base: add user mark RxQ flag
  2021-06-04 14:23 ` [dpdk-dev] [PATCH v2 00/20] net/sfc: support flow API COUNT action Andrew Rybchenko
                     ` (6 preceding siblings ...)
  2021-06-04 14:24   ` [dpdk-dev] [PATCH v2 07/20] common/sfc_efx/base: add ingress m-port RxQ flag Andrew Rybchenko
@ 2021-06-04 14:24   ` Andrew Rybchenko
  2021-06-04 14:24   ` [dpdk-dev] [PATCH v2 09/20] net/sfc: add abstractions for the management EVQ identity Andrew Rybchenko
                     ` (12 subsequent siblings)
  20 siblings, 0 replies; 104+ messages in thread
From: Andrew Rybchenko @ 2021-06-04 14:24 UTC (permalink / raw)
  To: dev; +Cc: Igor Romanov, Andy Moreton, Ivan Malov

From: Igor Romanov <igor.romanov@oktetlabs.ru>

Add a flag to request support for the user mark field on an RxQ.
The field is required to retrieve the generation count value from
the counter RxQ.

Implement it only for Riverhead and EF10 ESSB since they support
the field in the Rx prefix.
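
As with the ingress m-port flag, Riverhead maps the request onto an Rx
prefix field (sketch from the hunk below), while queue types that cannot
provide the field reject it with ENOTSUP:

    if (flags & EFX_RXQ_FLAG_USER_MARK)
            fields_mask |= 1U << EFX_RX_PREFIX_FIELD_USER_MARK;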

Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
Reviewed-by: Ivan Malov <ivan.malov@oktetlabs.ru>
---
 drivers/common/sfc_efx/base/ef10_rx.c  | 52 ++++++++++++++++----------
 drivers/common/sfc_efx/base/efx.h      |  4 ++
 drivers/common/sfc_efx/base/rhead_rx.c |  3 ++
 3 files changed, 39 insertions(+), 20 deletions(-)

diff --git a/drivers/common/sfc_efx/base/ef10_rx.c b/drivers/common/sfc_efx/base/ef10_rx.c
index 0e140645a5..0c3f9413cf 100644
--- a/drivers/common/sfc_efx/base/ef10_rx.c
+++ b/drivers/common/sfc_efx/base/ef10_rx.c
@@ -926,6 +926,10 @@ ef10_rx_qcreate(
 			goto fail1;
 		}
 		erp->er_buf_size = type_data->ertd_default.ed_buf_size;
+		if (flags & EFX_RXQ_FLAG_USER_MARK) {
+			rc = ENOTSUP;
+			goto fail2;
+		}
 		/*
 		 * Ignore EFX_RXQ_FLAG_RSS_HASH since if RSS hash is calculated
 		 * it is always delivered from HW in the pseudo-header.
@@ -936,7 +940,7 @@ ef10_rx_qcreate(
 		erpl = &ef10_packed_stream_rx_prefix_layout;
 		if (type_data == NULL) {
 			rc = EINVAL;
-			goto fail2;
+			goto fail3;
 		}
 		switch (type_data->ertd_packed_stream.eps_buf_size) {
 		case EFX_RXQ_PACKED_STREAM_BUF_SIZE_1M:
@@ -956,13 +960,17 @@ ef10_rx_qcreate(
 			break;
 		default:
 			rc = ENOTSUP;
-			goto fail3;
+			goto fail4;
 		}
 		erp->er_buf_size = type_data->ertd_packed_stream.eps_buf_size;
 		/* Packed stream pseudo header does not have RSS hash value */
 		if (flags & EFX_RXQ_FLAG_RSS_HASH) {
 			rc = ENOTSUP;
-			goto fail4;
+			goto fail5;
+		}
+		if (flags & EFX_RXQ_FLAG_USER_MARK) {
+			rc = ENOTSUP;
+			goto fail6;
 		}
 		break;
 #endif /* EFSYS_OPT_RX_PACKED_STREAM */
@@ -971,7 +979,7 @@ ef10_rx_qcreate(
 		erpl = &ef10_essb_rx_prefix_layout;
 		if (type_data == NULL) {
 			rc = EINVAL;
-			goto fail5;
+			goto fail7;
 		}
 		params.es_bufs_per_desc =
 		    type_data->ertd_es_super_buffer.eessb_bufs_per_desc;
@@ -989,7 +997,7 @@ ef10_rx_qcreate(
 #endif /* EFSYS_OPT_RX_ES_SUPER_BUFFER */
 	default:
 		rc = ENOTSUP;
-		goto fail6;
+		goto fail8;
 	}
 
 #if EFSYS_OPT_RX_PACKED_STREAM
@@ -997,13 +1005,13 @@ ef10_rx_qcreate(
 		/* Check if datapath firmware supports packed stream mode */
 		if (encp->enc_rx_packed_stream_supported == B_FALSE) {
 			rc = ENOTSUP;
-			goto fail7;
+			goto fail9;
 		}
 		/* Check if packed stream allows configurable buffer sizes */
 		if ((params.ps_buf_size != MC_CMD_INIT_RXQ_EXT_IN_PS_BUFF_1M) &&
 		    (encp->enc_rx_var_packed_stream_supported == B_FALSE)) {
 			rc = ENOTSUP;
-			goto fail8;
+			goto fail10;
 		}
 	}
 #else /* EFSYS_OPT_RX_PACKED_STREAM */
@@ -1014,17 +1022,17 @@ ef10_rx_qcreate(
 	if (params.es_bufs_per_desc > 0) {
 		if (encp->enc_rx_es_super_buffer_supported == B_FALSE) {
 			rc = ENOTSUP;
-			goto fail9;
+			goto fail11;
 		}
 		if (!EFX_IS_P2ALIGNED(uint32_t, params.es_max_dma_len,
 			    EFX_RX_ES_SUPER_BUFFER_BUF_ALIGNMENT)) {
 			rc = EINVAL;
-			goto fail10;
+			goto fail12;
 		}
 		if (!EFX_IS_P2ALIGNED(uint32_t, params.es_buf_stride,
 			    EFX_RX_ES_SUPER_BUFFER_BUF_ALIGNMENT)) {
 			rc = EINVAL;
-			goto fail11;
+			goto fail13;
 		}
 	}
 #else /* EFSYS_OPT_RX_ES_SUPER_BUFFER */
@@ -1033,7 +1041,7 @@ ef10_rx_qcreate(
 
 	if (flags & EFX_RXQ_FLAG_INGRESS_MPORT) {
 		rc = ENOTSUP;
-		goto fail12;
+		goto fail14;
 	}
 
 	/* Scatter can only be disabled if the firmware supports doing so */
@@ -1049,7 +1057,7 @@ ef10_rx_qcreate(
 
 	if ((rc = efx_mcdi_init_rxq(enp, ndescs, eep, label, index,
 		    esmp, &params)) != 0)
-		goto fail13;
+		goto fail15;
 
 	erp->er_eep = eep;
 	erp->er_label = label;
@@ -1062,38 +1070,42 @@ ef10_rx_qcreate(
 
 	return (0);
 
+fail15:
+	EFSYS_PROBE(fail15);
+fail14:
+	EFSYS_PROBE(fail14);
+#if EFSYS_OPT_RX_ES_SUPER_BUFFER
 fail13:
 	EFSYS_PROBE(fail13);
 fail12:
 	EFSYS_PROBE(fail12);
-#if EFSYS_OPT_RX_ES_SUPER_BUFFER
 fail11:
 	EFSYS_PROBE(fail11);
+#endif /* EFSYS_OPT_RX_ES_SUPER_BUFFER */
+#if EFSYS_OPT_RX_PACKED_STREAM
 fail10:
 	EFSYS_PROBE(fail10);
 fail9:
 	EFSYS_PROBE(fail9);
-#endif /* EFSYS_OPT_RX_ES_SUPER_BUFFER */
-#if EFSYS_OPT_RX_PACKED_STREAM
+#endif /* EFSYS_OPT_RX_PACKED_STREAM */
 fail8:
 	EFSYS_PROBE(fail8);
+#if EFSYS_OPT_RX_ES_SUPER_BUFFER
 fail7:
 	EFSYS_PROBE(fail7);
-#endif /* EFSYS_OPT_RX_PACKED_STREAM */
+#endif /* EFSYS_OPT_RX_ES_SUPER_BUFFER */
+#if EFSYS_OPT_RX_PACKED_STREAM
 fail6:
 	EFSYS_PROBE(fail6);
-#if EFSYS_OPT_RX_ES_SUPER_BUFFER
 fail5:
 	EFSYS_PROBE(fail5);
-#endif /* EFSYS_OPT_RX_ES_SUPER_BUFFER */
-#if EFSYS_OPT_RX_PACKED_STREAM
 fail4:
 	EFSYS_PROBE(fail4);
 fail3:
 	EFSYS_PROBE(fail3);
+#endif /* EFSYS_OPT_RX_PACKED_STREAM */
 fail2:
 	EFSYS_PROBE(fail2);
-#endif /* EFSYS_OPT_RX_PACKED_STREAM */
 fail1:
 	EFSYS_PROBE1(fail1, efx_rc_t, rc);
 
diff --git a/drivers/common/sfc_efx/base/efx.h b/drivers/common/sfc_efx/base/efx.h
index 72ab4af01c..9bbd7cae55 100644
--- a/drivers/common/sfc_efx/base/efx.h
+++ b/drivers/common/sfc_efx/base/efx.h
@@ -3003,6 +3003,10 @@ typedef enum efx_rxq_type_e {
  * Request ingress mport field in the Rx prefix of a queue.
  */
 #define	EFX_RXQ_FLAG_INGRESS_MPORT	0x8
+/*
+ * Request user mark field in the Rx prefix of a queue.
+ */
+#define	EFX_RXQ_FLAG_USER_MARK		0x10
 
 LIBEFX_API
 extern	__checkReturn	efx_rc_t
diff --git a/drivers/common/sfc_efx/base/rhead_rx.c b/drivers/common/sfc_efx/base/rhead_rx.c
index f1d46f7c70..76b8ce302a 100644
--- a/drivers/common/sfc_efx/base/rhead_rx.c
+++ b/drivers/common/sfc_efx/base/rhead_rx.c
@@ -632,6 +632,9 @@ rhead_rx_qcreate(
 	if (flags & EFX_RXQ_FLAG_INGRESS_MPORT)
 		fields_mask |= 1U << EFX_RX_PREFIX_FIELD_INGRESS_MPORT;
 
+	if (flags & EFX_RXQ_FLAG_USER_MARK)
+		fields_mask |= 1U << EFX_RX_PREFIX_FIELD_USER_MARK;
+
 	/*
 	 * LENGTH is required in EF100 host interface, as receive events
 	 * do not include the packet length.
-- 
2.30.2


^ permalink raw reply	[flat|nested] 104+ messages in thread

* [dpdk-dev] [PATCH v2 09/20] net/sfc: add abstractions for the management EVQ identity
  2021-06-04 14:23 ` [dpdk-dev] [PATCH v2 00/20] net/sfc: support flow API COUNT action Andrew Rybchenko
                     ` (7 preceding siblings ...)
  2021-06-04 14:24   ` [dpdk-dev] [PATCH v2 08/20] common/sfc_efx/base: add user mark " Andrew Rybchenko
@ 2021-06-04 14:24   ` Andrew Rybchenko
  2021-06-04 14:24   ` [dpdk-dev] [PATCH v2 10/20] net/sfc: add support for initialising different RxQ types Andrew Rybchenko
                     ` (11 subsequent siblings)
  20 siblings, 0 replies; 104+ messages in thread
From: Andrew Rybchenko @ 2021-06-04 14:24 UTC (permalink / raw)
  To: dev; +Cc: Igor Romanov, Andy Moreton

From: Igor Romanov <igor.romanov@oktetlabs.ru>

Add a function returning the management event queue software index.
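
The helper currently always returns 0, but callers no longer hard-code the
management EvQ index, e.g. (as in the hunk below):

    sa->mgmt_evq_index = sfc_mgmt_evq_sw_index(sfc_sa2shared(sa));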

Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
---
 drivers/net/sfc/sfc_ev.c | 2 +-
 drivers/net/sfc/sfc_ev.h | 6 ++++++
 2 files changed, 7 insertions(+), 1 deletion(-)

diff --git a/drivers/net/sfc/sfc_ev.c b/drivers/net/sfc/sfc_ev.c
index ed28d51e12..ba4409369a 100644
--- a/drivers/net/sfc/sfc_ev.c
+++ b/drivers/net/sfc/sfc_ev.c
@@ -983,7 +983,7 @@ sfc_ev_attach(struct sfc_adapter *sa)
 		goto fail_kvarg_perf_profile;
 	}
 
-	sa->mgmt_evq_index = 0;
+	sa->mgmt_evq_index = sfc_mgmt_evq_sw_index(sfc_sa2shared(sa));
 	rte_spinlock_init(&sa->mgmt_evq_lock);
 
 	rc = sfc_ev_qinit(sa, SFC_EVQ_TYPE_MGMT, 0, sa->evq_min_entries,
diff --git a/drivers/net/sfc/sfc_ev.h b/drivers/net/sfc/sfc_ev.h
index 75b9dcdebd..3f3c4b5b9a 100644
--- a/drivers/net/sfc/sfc_ev.h
+++ b/drivers/net/sfc/sfc_ev.h
@@ -60,6 +60,12 @@ struct sfc_evq {
 	unsigned int			entries;
 };
 
+static inline sfc_sw_index_t
+sfc_mgmt_evq_sw_index(__rte_unused const struct sfc_adapter_shared *sas)
+{
+	return 0;
+}
+
 /*
  * Functions below define event queue to transmit/receive queue and vice
  * versa mapping.
-- 
2.30.2


^ permalink raw reply	[flat|nested] 104+ messages in thread

* [dpdk-dev] [PATCH v2 10/20] net/sfc: add support for initialising different RxQ types
  2021-06-04 14:23 ` [dpdk-dev] [PATCH v2 00/20] net/sfc: support flow API COUNT action Andrew Rybchenko
                     ` (8 preceding siblings ...)
  2021-06-04 14:24   ` [dpdk-dev] [PATCH v2 09/20] net/sfc: add abstractions for the management EVQ identity Andrew Rybchenko
@ 2021-06-04 14:24   ` Andrew Rybchenko
  2021-06-04 14:24   ` [dpdk-dev] [PATCH v2 11/20] net/sfc: add NUMA-aware registry of service logical cores Andrew Rybchenko
                     ` (10 subsequent siblings)
  20 siblings, 0 replies; 104+ messages in thread
From: Andrew Rybchenko @ 2021-06-04 14:24 UTC (permalink / raw)
  To: dev; +Cc: Igor Romanov, Andy Moreton

From: Igor Romanov <igor.romanov@oktetlabs.ru>

Add extra EFX flags to the RxQ info initialization API to support
choosing different RxQ types, and make the API public so that it
can be used for counter queues.
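
As a rough illustration (not part of the patch), a counter queue user
could call the now-public helper with an extra EFX type flag; the sketch
below assumes the sfc driver build environment and uses the flag added
earlier in this series:

    #include "efx.h"
    #include "sfc.h"
    #include "sfc_dp.h"
    #include "sfc_rx.h"

    static int
    example_init_counter_rxq_info(struct sfc_adapter *sa,
                                  sfc_sw_index_t sw_index)
    {
            /*
             * Ethdev queues pass 0 for the extra flags; an internal
             * counter queue can request the user mark field in the
             * Rx prefix instead.
             */
            return sfc_rx_qinit_info(sa, sw_index, EFX_RXQ_FLAG_USER_MARK);
    }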

Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
---
 drivers/net/sfc/sfc_rx.c | 10 ++++++----
 drivers/net/sfc/sfc_rx.h |  2 ++
 2 files changed, 8 insertions(+), 4 deletions(-)

diff --git a/drivers/net/sfc/sfc_rx.c b/drivers/net/sfc/sfc_rx.c
index 597785ae02..c7a7bd66ef 100644
--- a/drivers/net/sfc/sfc_rx.c
+++ b/drivers/net/sfc/sfc_rx.c
@@ -1155,7 +1155,7 @@ sfc_rx_qinit(struct sfc_adapter *sa, sfc_sw_index_t sw_index,
 	else
 		rxq_info->type = EFX_RXQ_TYPE_DEFAULT;
 
-	rxq_info->type_flags =
+	rxq_info->type_flags |=
 		(offloads & DEV_RX_OFFLOAD_SCATTER) ?
 		EFX_RXQ_FLAG_SCATTER : EFX_RXQ_FLAG_NONE;
 
@@ -1594,8 +1594,9 @@ sfc_rx_stop(struct sfc_adapter *sa)
 	efx_rx_fini(sa->nic);
 }
 
-static int
-sfc_rx_qinit_info(struct sfc_adapter *sa, sfc_sw_index_t sw_index)
+int
+sfc_rx_qinit_info(struct sfc_adapter *sa, sfc_sw_index_t sw_index,
+		  unsigned int extra_efx_type_flags)
 {
 	struct sfc_adapter_shared * const sas = sfc_sa2shared(sa);
 	struct sfc_rxq_info *rxq_info = &sas->rxq_info[sw_index];
@@ -1606,6 +1607,7 @@ sfc_rx_qinit_info(struct sfc_adapter *sa, sfc_sw_index_t sw_index)
 	SFC_ASSERT(rte_is_power_of_2(max_entries));
 
 	rxq_info->max_entries = max_entries;
+	rxq_info->type_flags = extra_efx_type_flags;
 
 	return 0;
 }
@@ -1770,7 +1772,7 @@ sfc_rx_configure(struct sfc_adapter *sa)
 
 		sw_index = sfc_rxq_sw_index_by_ethdev_rx_qid(sas,
 							sas->ethdev_rxq_count);
-		rc = sfc_rx_qinit_info(sa, sw_index);
+		rc = sfc_rx_qinit_info(sa, sw_index, 0);
 		if (rc != 0)
 			goto fail_rx_qinit_info;
 
diff --git a/drivers/net/sfc/sfc_rx.h b/drivers/net/sfc/sfc_rx.h
index 96c7dc415d..e5a6fde79b 100644
--- a/drivers/net/sfc/sfc_rx.h
+++ b/drivers/net/sfc/sfc_rx.h
@@ -129,6 +129,8 @@ void sfc_rx_close(struct sfc_adapter *sa);
 int sfc_rx_start(struct sfc_adapter *sa);
 void sfc_rx_stop(struct sfc_adapter *sa);
 
+int sfc_rx_qinit_info(struct sfc_adapter *sa, sfc_sw_index_t sw_index,
+		      unsigned int extra_efx_type_flags);
 int sfc_rx_qinit(struct sfc_adapter *sa, unsigned int rx_queue_id,
 		 uint16_t nb_rx_desc, unsigned int socket_id,
 		 const struct rte_eth_rxconf *rx_conf,
-- 
2.30.2


^ permalink raw reply	[flat|nested] 104+ messages in thread

* [dpdk-dev] [PATCH v2 11/20] net/sfc: add NUMA-aware registry of service logical cores
  2021-06-04 14:23 ` [dpdk-dev] [PATCH v2 00/20] net/sfc: support flow API COUNT action Andrew Rybchenko
                     ` (9 preceding siblings ...)
  2021-06-04 14:24   ` [dpdk-dev] [PATCH v2 10/20] net/sfc: add support for initialising different RxQ types Andrew Rybchenko
@ 2021-06-04 14:24   ` Andrew Rybchenko
  2021-06-04 14:24   ` [dpdk-dev] [PATCH v2 12/20] net/sfc: reserve RxQ for counters Andrew Rybchenko
                     ` (9 subsequent siblings)
  20 siblings, 0 replies; 104+ messages in thread
From: Andrew Rybchenko @ 2021-06-04 14:24 UTC (permalink / raw)
  To: dev; +Cc: Andy Moreton, Ivan Malov

The driver requires service cores for housekeeping. Share these
cores across many adapters and various purposes to avoid extra CPU
overhead.

Since housekeeping services will talk to the NIC, it should be
possible to choose a logical core on the matching NUMA node.
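
A hedged sketch of the intended lookup pattern, falling back to any NUMA
node when no service core is reserved locally (this mirrors how the
counter service uses the registry later in the series; the function name
below is hypothetical):

    #include <rte_lcore.h>
    #include <rte_memory.h>

    #include "sfc_service.h"

    static uint32_t
    example_pick_service_lcore(int socket_id)
    {
            uint32_t cid;

            /* Prefer a service core on the NIC's NUMA node. */
            cid = sfc_get_service_lcore(socket_id);
            if (cid != RTE_MAX_LCORE)
                    return cid;

            /* Fall back to any node if the local one has none reserved. */
            return sfc_get_service_lcore(SOCKET_ID_ANY);
    }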

Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
Reviewed-by: Ivan Malov <ivan.malov@oktetlabs.ru>
---
 drivers/net/sfc/meson.build   |  1 +
 drivers/net/sfc/sfc_service.c | 99 +++++++++++++++++++++++++++++++++++
 drivers/net/sfc/sfc_service.h | 20 +++++++
 3 files changed, 120 insertions(+)
 create mode 100644 drivers/net/sfc/sfc_service.c
 create mode 100644 drivers/net/sfc/sfc_service.h

diff --git a/drivers/net/sfc/meson.build b/drivers/net/sfc/meson.build
index ccf5984d87..4ac97e8d43 100644
--- a/drivers/net/sfc/meson.build
+++ b/drivers/net/sfc/meson.build
@@ -62,4 +62,5 @@ sources = files(
         'sfc_ef10_tx.c',
         'sfc_ef100_rx.c',
         'sfc_ef100_tx.c',
+        'sfc_service.c',
 )
diff --git a/drivers/net/sfc/sfc_service.c b/drivers/net/sfc/sfc_service.c
new file mode 100644
index 0000000000..9c89484406
--- /dev/null
+++ b/drivers/net/sfc/sfc_service.c
@@ -0,0 +1,99 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ *
+ * Copyright(c) 2020-2021 Xilinx, Inc.
+ */
+
+#include <stdint.h>
+
+#include <rte_common.h>
+#include <rte_spinlock.h>
+#include <rte_lcore.h>
+#include <rte_service.h>
+#include <rte_memory.h>
+
+#include "sfc_log.h"
+#include "sfc_service.h"
+#include "sfc_debug.h"
+
+static uint32_t sfc_service_lcore[RTE_MAX_NUMA_NODES];
+static rte_spinlock_t sfc_service_lcore_lock = RTE_SPINLOCK_INITIALIZER;
+
+RTE_INIT(sfc_service_lcore_init)
+{
+	size_t i;
+
+	for (i = 0; i < RTE_DIM(sfc_service_lcore); ++i)
+		sfc_service_lcore[i] = RTE_MAX_LCORE;
+}
+
+static uint32_t
+sfc_find_service_lcore(int *socket_id)
+{
+	uint32_t service_core_list[RTE_MAX_LCORE];
+	uint32_t lcore_id;
+	int num;
+	int i;
+
+	SFC_ASSERT(rte_spinlock_is_locked(&sfc_service_lcore_lock));
+
+	num = rte_service_lcore_list(service_core_list,
+				    RTE_DIM(service_core_list));
+	if (num == 0) {
+		SFC_GENERIC_LOG(WARNING, "No service cores available");
+		return RTE_MAX_LCORE;
+	}
+	if (num < 0) {
+		SFC_GENERIC_LOG(ERR, "Failed to get service core list");
+		return RTE_MAX_LCORE;
+	}
+
+	for (i = 0; i < num; ++i) {
+		lcore_id = service_core_list[i];
+
+		if (*socket_id == SOCKET_ID_ANY) {
+			*socket_id = rte_lcore_to_socket_id(lcore_id);
+			break;
+		} else if (rte_lcore_to_socket_id(lcore_id) ==
+			   (unsigned int)*socket_id) {
+			break;
+		}
+	}
+
+	if (i == num) {
+		SFC_GENERIC_LOG(WARNING,
+			"No service cores reserved at socket %d", *socket_id);
+		return RTE_MAX_LCORE;
+	}
+
+	return lcore_id;
+}
+
+uint32_t
+sfc_get_service_lcore(int socket_id)
+{
+	uint32_t lcore_id = RTE_MAX_LCORE;
+
+	rte_spinlock_lock(&sfc_service_lcore_lock);
+
+	if (socket_id != SOCKET_ID_ANY) {
+		lcore_id = sfc_service_lcore[socket_id];
+	} else {
+		size_t i;
+
+		for (i = 0; i < RTE_DIM(sfc_service_lcore); ++i) {
+			if (sfc_service_lcore[i] != RTE_MAX_LCORE) {
+				lcore_id = sfc_service_lcore[i];
+				break;
+			}
+		}
+	}
+
+	if (lcore_id == RTE_MAX_LCORE) {
+		lcore_id = sfc_find_service_lcore(&socket_id);
+		if (lcore_id != RTE_MAX_LCORE)
+			sfc_service_lcore[socket_id] = lcore_id;
+	}
+
+	rte_spinlock_unlock(&sfc_service_lcore_lock);
+	return lcore_id;
+}
diff --git a/drivers/net/sfc/sfc_service.h b/drivers/net/sfc/sfc_service.h
new file mode 100644
index 0000000000..bbcce28479
--- /dev/null
+++ b/drivers/net/sfc/sfc_service.h
@@ -0,0 +1,20 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ *
+ * Copyright(c) 2020-2021 Xilinx, Inc.
+ */
+
+#ifndef _SFC_SERVICE_H
+#define _SFC_SERVICE_H
+
+#include <stdint.h>
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+uint32_t sfc_get_service_lcore(int socket_id);
+
+#ifdef __cplusplus
+}
+#endif
+#endif  /* _SFC_SERVICE_H */
-- 
2.30.2


^ permalink raw reply	[flat|nested] 104+ messages in thread

* [dpdk-dev] [PATCH v2 12/20] net/sfc: reserve RxQ for counters
  2021-06-04 14:23 ` [dpdk-dev] [PATCH v2 00/20] net/sfc: support flow API COUNT action Andrew Rybchenko
                     ` (10 preceding siblings ...)
  2021-06-04 14:24   ` [dpdk-dev] [PATCH v2 11/20] net/sfc: add NUMA-aware registry of service logical cores Andrew Rybchenko
@ 2021-06-04 14:24   ` Andrew Rybchenko
  2021-06-04 14:24   ` [dpdk-dev] [PATCH v2 13/20] common/sfc_efx/base: add counter creation MCDI wrappers Andrew Rybchenko
                     ` (8 subsequent siblings)
  20 siblings, 0 replies; 104+ messages in thread
From: Andrew Rybchenko @ 2021-06-04 14:24 UTC (permalink / raw)
  To: dev; +Cc: Igor Romanov, Andy Moreton, Ivan Malov

From: Igor Romanov <igor.romanov@oktetlabs.ru>

MAE delivers counter data as special packets via a dedicated Rx queue.
Reserve an RxQ so that it does not interfere with ethdev Rx queues.
A routine to handle these packets will be added later.

There is no point in reserving the queue if no service cores are
available, since counters cannot be used in that case.
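
To sketch the resulting queue index layout (not part of the patch,
driver build environment assumed): the reserved counter RxQ, when
present, takes the lowest SW index and ethdev queues follow it.

    #include <rte_common.h>

    #include "sfc.h"
    #include "sfc_dp.h"
    #include "sfc_ev.h"

    static void
    example_reserved_rxq_layout(struct sfc_adapter_shared *sas)
    {
            /* SW index of the reserved counter RxQ (if allocated). */
            sfc_sw_index_t counter_rxq = sfc_counters_rxq_sw_index(sas);

            /* Ethdev Rx queue 0 starts right after the reserved queues. */
            sfc_sw_index_t ethdev_rxq0 =
                    sfc_rxq_sw_index_by_ethdev_rx_qid(sas, 0);

            RTE_SET_USED(counter_rxq);
            RTE_SET_USED(ethdev_rxq0);
    }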

Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
Reviewed-by: Ivan Malov <ivan.malov@oktetlabs.ru>
---
 drivers/net/sfc/meson.build       |   1 +
 drivers/net/sfc/sfc.c             |  68 ++++++++--
 drivers/net/sfc/sfc.h             |  19 +++
 drivers/net/sfc/sfc_dp.h          |   2 +
 drivers/net/sfc/sfc_ev.h          |  72 ++++++++--
 drivers/net/sfc/sfc_mae.c         |   1 +
 drivers/net/sfc/sfc_mae_counter.c | 217 ++++++++++++++++++++++++++++++
 drivers/net/sfc/sfc_mae_counter.h |  44 ++++++
 drivers/net/sfc/sfc_rx.c          |  43 ++++--
 9 files changed, 438 insertions(+), 29 deletions(-)
 create mode 100644 drivers/net/sfc/sfc_mae_counter.c
 create mode 100644 drivers/net/sfc/sfc_mae_counter.h

diff --git a/drivers/net/sfc/meson.build b/drivers/net/sfc/meson.build
index 4ac97e8d43..f8880f740a 100644
--- a/drivers/net/sfc/meson.build
+++ b/drivers/net/sfc/meson.build
@@ -55,6 +55,7 @@ sources = files(
         'sfc_filter.c',
         'sfc_switch.c',
         'sfc_mae.c',
+        'sfc_mae_counter.c',
         'sfc_flow.c',
         'sfc_dp.c',
         'sfc_ef10_rx.c',
diff --git a/drivers/net/sfc/sfc.c b/drivers/net/sfc/sfc.c
index 3477c7530b..4097cf39de 100644
--- a/drivers/net/sfc/sfc.c
+++ b/drivers/net/sfc/sfc.c
@@ -20,6 +20,7 @@
 #include "sfc_log.h"
 #include "sfc_ev.h"
 #include "sfc_rx.h"
+#include "sfc_mae_counter.h"
 #include "sfc_tx.h"
 #include "sfc_kvargs.h"
 #include "sfc_tweak.h"
@@ -174,6 +175,7 @@ static int
 sfc_estimate_resource_limits(struct sfc_adapter *sa)
 {
 	const efx_nic_cfg_t *encp = efx_nic_cfg_get(sa->nic);
+	struct sfc_adapter_shared *sas = sfc_sa2shared(sa);
 	efx_drv_limits_t limits;
 	int rc;
 	uint32_t evq_allocated;
@@ -235,17 +237,53 @@ sfc_estimate_resource_limits(struct sfc_adapter *sa)
 	rxq_allocated = MIN(rxq_allocated, limits.edl_max_rxq_count);
 	txq_allocated = MIN(txq_allocated, limits.edl_max_txq_count);
 
-	/* Subtract management EVQ not used for traffic */
-	SFC_ASSERT(evq_allocated > 0);
+	/*
+	 * Subtract the management EVQ, which is not used for traffic.
+	 * The resource allocation strategy is as follows:
+	 * - one EVQ for management
+	 * - one EVQ for each ethdev RXQ
+	 * - one EVQ for each ethdev TXQ
+	 * - one EVQ and one RXQ for optional MAE counters.
+	 */
+	if (evq_allocated == 0) {
+		sfc_err(sa, "count of allocated EvQ is 0");
+		rc = ENOMEM;
+		goto fail_allocate_evq;
+	}
 	evq_allocated--;
 
-	/* Right now we use separate EVQ for Rx and Tx */
-	sa->rxq_max = MIN(rxq_allocated, evq_allocated / 2);
-	sa->txq_max = MIN(txq_allocated, evq_allocated - sa->rxq_max);
+	/*
+	 * Reserve absolutely required minimum.
+	 * Right now we use separate EVQ for Rx and Tx.
+	 */
+	if (rxq_allocated > 0 && evq_allocated > 0) {
+		sa->rxq_max = 1;
+		rxq_allocated--;
+		evq_allocated--;
+	}
+	if (txq_allocated > 0 && evq_allocated > 0) {
+		sa->txq_max = 1;
+		txq_allocated--;
+		evq_allocated--;
+	}
+
+	if (sfc_mae_counter_rxq_required(sa) &&
+	    rxq_allocated > 0 && evq_allocated > 0) {
+		rxq_allocated--;
+		evq_allocated--;
+		sas->counters_rxq_allocated = true;
+	} else {
+		sas->counters_rxq_allocated = false;
+	}
+
+	/* Add remaining allocated queues */
+	sa->rxq_max += MIN(rxq_allocated, evq_allocated / 2);
+	sa->txq_max += MIN(txq_allocated, evq_allocated - sa->rxq_max);
 
 	/* Keep NIC initialized */
 	return 0;
 
+fail_allocate_evq:
 fail_get_vi_pool:
 	efx_nic_fini(sa->nic);
 fail_nic_init:
@@ -256,14 +294,20 @@ static int
 sfc_set_drv_limits(struct sfc_adapter *sa)
 {
 	const struct rte_eth_dev_data *data = sa->eth_dev->data;
+	uint32_t rxq_reserved = sfc_nb_reserved_rxq(sfc_sa2shared(sa));
 	efx_drv_limits_t lim;
 
 	memset(&lim, 0, sizeof(lim));
 
-	/* Limits are strict since take into account initial estimation */
+	/*
+	 * Limits are strict since they take into account the initial estimation.
+	 * The resource allocation strategy is described in
+	 * sfc_estimate_resource_limits().
+	 */
 	lim.edl_min_evq_count = lim.edl_max_evq_count =
-		1 + data->nb_rx_queues + data->nb_tx_queues;
-	lim.edl_min_rxq_count = lim.edl_max_rxq_count = data->nb_rx_queues;
+		1 + data->nb_rx_queues + data->nb_tx_queues + rxq_reserved;
+	lim.edl_min_rxq_count = lim.edl_max_rxq_count =
+		data->nb_rx_queues + rxq_reserved;
 	lim.edl_min_txq_count = lim.edl_max_txq_count = data->nb_tx_queues;
 
 	return efx_nic_set_drv_limits(sa->nic, &lim);
@@ -834,6 +878,10 @@ sfc_attach(struct sfc_adapter *sa)
 	if (rc != 0)
 		goto fail_filter_attach;
 
+	rc = sfc_mae_counter_rxq_attach(sa);
+	if (rc != 0)
+		goto fail_mae_counter_rxq_attach;
+
 	rc = sfc_mae_attach(sa);
 	if (rc != 0)
 		goto fail_mae_attach;
@@ -862,6 +910,9 @@ sfc_attach(struct sfc_adapter *sa)
 	sfc_mae_detach(sa);
 
 fail_mae_attach:
+	sfc_mae_counter_rxq_detach(sa);
+
+fail_mae_counter_rxq_attach:
 	sfc_filter_detach(sa);
 
 fail_filter_attach:
@@ -903,6 +954,7 @@ sfc_detach(struct sfc_adapter *sa)
 	sfc_flow_fini(sa);
 
 	sfc_mae_detach(sa);
+	sfc_mae_counter_rxq_detach(sa);
 	sfc_filter_detach(sa);
 	sfc_rss_detach(sa);
 	sfc_port_detach(sa);
diff --git a/drivers/net/sfc/sfc.h b/drivers/net/sfc/sfc.h
index 00fc26cf0e..546739bd4a 100644
--- a/drivers/net/sfc/sfc.h
+++ b/drivers/net/sfc/sfc.h
@@ -186,6 +186,8 @@ struct sfc_adapter_shared {
 
 	char				*dp_rx_name;
 	char				*dp_tx_name;
+
+	bool				counters_rxq_allocated;
 };
 
 /* Adapter process private data */
@@ -205,6 +207,15 @@ sfc_adapter_priv_by_eth_dev(struct rte_eth_dev *eth_dev)
 	return sap;
 }
 
+/* RxQ dedicated for counters (counter only RxQ) data */
+struct sfc_counter_rxq {
+	unsigned int			state;
+#define SFC_COUNTER_RXQ_ATTACHED		0x1
+#define SFC_COUNTER_RXQ_INITIALIZED		0x2
+	sfc_sw_index_t			sw_index;
+	struct rte_mempool		*mp;
+};
+
 /* Adapter private data */
 struct sfc_adapter {
 	/*
@@ -283,6 +294,8 @@ struct sfc_adapter {
 	bool				mgmt_evq_running;
 	struct sfc_evq			*mgmt_evq;
 
+	struct sfc_counter_rxq		counter_rxq;
+
 	struct sfc_rxq			*rxq_ctrl;
 	struct sfc_txq			*txq_ctrl;
 
@@ -357,6 +370,12 @@ sfc_adapter_lock_fini(__rte_unused struct sfc_adapter *sa)
 	/* Just for symmetry of the API */
 }
 
+static inline unsigned int
+sfc_nb_counter_rxq(const struct sfc_adapter_shared *sas)
+{
+	return sas->counters_rxq_allocated ? 1 : 0;
+}
+
 /** Get the number of milliseconds since boot from the default timer */
 static inline uint64_t
 sfc_get_system_msecs(void)
diff --git a/drivers/net/sfc/sfc_dp.h b/drivers/net/sfc/sfc_dp.h
index 76065483d4..61c1a3fbac 100644
--- a/drivers/net/sfc/sfc_dp.h
+++ b/drivers/net/sfc/sfc_dp.h
@@ -97,6 +97,8 @@ struct sfc_dp {
 TAILQ_HEAD(sfc_dp_list, sfc_dp);
 
 typedef unsigned int sfc_sw_index_t;
+#define SFC_SW_INDEX_INVALID	((sfc_sw_index_t)(UINT_MAX))
+
 typedef int32_t	sfc_ethdev_qid_t;
 #define SFC_ETHDEV_QID_INVALID	((sfc_ethdev_qid_t)(-1))
 
diff --git a/drivers/net/sfc/sfc_ev.h b/drivers/net/sfc/sfc_ev.h
index 3f3c4b5b9a..b2a0380205 100644
--- a/drivers/net/sfc/sfc_ev.h
+++ b/drivers/net/sfc/sfc_ev.h
@@ -66,36 +66,87 @@ sfc_mgmt_evq_sw_index(__rte_unused const struct sfc_adapter_shared *sas)
 	return 0;
 }
 
+/* Return the number of Rx queues reserved for driver's internal use */
+static inline unsigned int
+sfc_nb_reserved_rxq(const struct sfc_adapter_shared *sas)
+{
+	return sfc_nb_counter_rxq(sas);
+}
+
+static inline unsigned int
+sfc_nb_reserved_evq(const struct sfc_adapter_shared *sas)
+{
+	/* An EvQ is required for each reserved RxQ */
+	return 1 + sfc_nb_reserved_rxq(sas);
+}
+
+/*
+ * The mapping functions that return SW index of a specific reserved
+ * queue rely on the relative order of reserved queues. Some reserved
+ * queues are optional, and if they are disabled or not supported, then
+ * the function for that specific reserved queue will return previous
+ * valid index of a reserved queue in the dependency chain or
+ * SFC_SW_INDEX_INVALID if it is the first reserved queue in the chain.
+ * If at least one of the reserved queues in the chain is enabled, then
+ * the corresponding function will give valid SW index, even if previous
+ * functions in the chain returned SFC_SW_INDEX_INVALID, since this value
+ * is one less than the first valid SW index.
+ *
+ * The dependency mechanism is utilized to avoid rigid defines for SW indices
+ * for reserved queues and to allow these indices to shrink and make space
+ * for ethdev queue indices when some of the reserved queues are disabled.
+ */
+
+static inline sfc_sw_index_t
+sfc_counters_rxq_sw_index(const struct sfc_adapter_shared *sas)
+{
+	return sas->counters_rxq_allocated ? 0 : SFC_SW_INDEX_INVALID;
+}
+
 /*
  * Functions below define event queue to transmit/receive queue and vice
  * versa mapping.
+ * SFC_ETHDEV_QID_INVALID is returned when sw_index is converted to
+ * ethdev_qid, but sw_index represents a reserved queue for driver's
+ * internal use.
  * Own event queue is allocated for management, each Rx and each Tx queue.
  * Zero event queue is used for management events.
- * Rx event queues from 1 to RxQ number follow management event queue.
+ * When counters are supported, one Rx event queue is reserved.
+ * Rx event queues follow reserved event queues.
  * Tx event queues follow Rx event queues.
  */
 
 static inline sfc_ethdev_qid_t
-sfc_ethdev_rx_qid_by_rxq_sw_index(__rte_unused struct sfc_adapter_shared *sas,
+sfc_ethdev_rx_qid_by_rxq_sw_index(struct sfc_adapter_shared *sas,
 				  sfc_sw_index_t rxq_sw_index)
 {
-	/* Only ethdev queues are present for now */
-	return rxq_sw_index;
+	if (rxq_sw_index < sfc_nb_reserved_rxq(sas))
+		return SFC_ETHDEV_QID_INVALID;
+
+	return rxq_sw_index - sfc_nb_reserved_rxq(sas);
 }
 
 static inline sfc_sw_index_t
-sfc_rxq_sw_index_by_ethdev_rx_qid(__rte_unused struct sfc_adapter_shared *sas,
+sfc_rxq_sw_index_by_ethdev_rx_qid(struct sfc_adapter_shared *sas,
 				  sfc_ethdev_qid_t ethdev_qid)
 {
-	/* Only ethdev queues are present for now */
-	return ethdev_qid;
+	return sfc_nb_reserved_rxq(sas) + ethdev_qid;
 }
 
 static inline sfc_sw_index_t
-sfc_evq_sw_index_by_rxq_sw_index(__rte_unused struct sfc_adapter *sa,
+sfc_evq_sw_index_by_rxq_sw_index(struct sfc_adapter *sa,
 				 sfc_sw_index_t rxq_sw_index)
 {
-	return 1 + rxq_sw_index;
+	struct sfc_adapter_shared *sas = sfc_sa2shared(sa);
+	sfc_ethdev_qid_t ethdev_qid;
+
+	ethdev_qid = sfc_ethdev_rx_qid_by_rxq_sw_index(sas, rxq_sw_index);
+	if (ethdev_qid == SFC_ETHDEV_QID_INVALID) {
+		/* One EvQ is reserved for management */
+		return 1 + rxq_sw_index;
+	}
+
+	return sfc_nb_reserved_evq(sas) + ethdev_qid;
 }
 
 static inline sfc_ethdev_qid_t
@@ -118,7 +169,8 @@ static inline sfc_sw_index_t
 sfc_evq_sw_index_by_txq_sw_index(struct sfc_adapter *sa,
 				 sfc_sw_index_t txq_sw_index)
 {
-	return 1 + sa->eth_dev->data->nb_rx_queues + txq_sw_index;
+	return sfc_nb_reserved_evq(sfc_sa2shared(sa)) +
+		sa->eth_dev->data->nb_rx_queues + txq_sw_index;
 }
 
 int sfc_ev_attach(struct sfc_adapter *sa);
diff --git a/drivers/net/sfc/sfc_mae.c b/drivers/net/sfc/sfc_mae.c
index d8c662503f..e603ffbdb4 100644
--- a/drivers/net/sfc/sfc_mae.c
+++ b/drivers/net/sfc/sfc_mae.c
@@ -16,6 +16,7 @@
 #include "efx.h"
 
 #include "sfc.h"
+#include "sfc_mae_counter.h"
 #include "sfc_log.h"
 #include "sfc_switch.h"
 
diff --git a/drivers/net/sfc/sfc_mae_counter.c b/drivers/net/sfc/sfc_mae_counter.c
new file mode 100644
index 0000000000..c7646cf7b1
--- /dev/null
+++ b/drivers/net/sfc/sfc_mae_counter.c
@@ -0,0 +1,217 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ *
+ * Copyright(c) 2020-2021 Xilinx, Inc.
+ */
+
+#include <rte_common.h>
+
+#include "efx.h"
+
+#include "sfc_ev.h"
+#include "sfc.h"
+#include "sfc_rx.h"
+#include "sfc_mae_counter.h"
+#include "sfc_service.h"
+
+static uint32_t
+sfc_mae_counter_get_service_lcore(struct sfc_adapter *sa)
+{
+	uint32_t cid;
+
+	cid = sfc_get_service_lcore(sa->socket_id);
+	if (cid != RTE_MAX_LCORE)
+		return cid;
+
+	if (sa->socket_id != SOCKET_ID_ANY)
+		cid = sfc_get_service_lcore(SOCKET_ID_ANY);
+
+	if (cid == RTE_MAX_LCORE) {
+		sfc_warn(sa, "failed to get service lcore for counter service");
+	} else if (sa->socket_id != SOCKET_ID_ANY) {
+		sfc_warn(sa,
+			"failed to get service lcore for counter service at socket %d, but got one at socket %u",
+			sa->socket_id, rte_lcore_to_socket_id(cid));
+	}
+	return cid;
+}
+
+bool
+sfc_mae_counter_rxq_required(struct sfc_adapter *sa)
+{
+	const efx_nic_cfg_t *encp = efx_nic_cfg_get(sa->nic);
+
+	if (encp->enc_mae_supported == B_FALSE)
+		return false;
+
+	if (sfc_mae_counter_get_service_lcore(sa) == RTE_MAX_LCORE)
+		return false;
+
+	return true;
+}
+
+int
+sfc_mae_counter_rxq_attach(struct sfc_adapter *sa)
+{
+	struct sfc_adapter_shared * const sas = sfc_sa2shared(sa);
+	char name[RTE_MEMPOOL_NAMESIZE];
+	struct rte_mempool *mp;
+	unsigned int n_elements;
+	unsigned int cache_size;
+	/* The mempool is internal and private area is not required */
+	const uint16_t priv_size = 0;
+	const uint16_t data_room_size = RTE_PKTMBUF_HEADROOM +
+		SFC_MAE_COUNTER_STREAM_PACKET_SIZE;
+	int rc;
+
+	sfc_log_init(sa, "entry");
+
+	if (!sas->counters_rxq_allocated) {
+		sfc_log_init(sa, "counter queue is not supported - skip");
+		return 0;
+	}
+
+	/*
+	 * At least one element in the ring is always unused to distinguish
+	 * between empty and full ring cases.
+	 */
+	n_elements = SFC_COUNTER_RXQ_RX_DESC_COUNT - 1;
+
+	/*
+	 * The cache must have sufficient space to put received buckets
+	 * before they're reused on refill.
+	 */
+	cache_size = rte_align32pow2(SFC_COUNTER_RXQ_REFILL_LEVEL +
+				     SFC_MAE_COUNTER_RX_BURST - 1);
+
+	if (snprintf(name, sizeof(name), "counter_rxq-pool-%u", sas->port_id) >=
+	    (int)sizeof(name)) {
+		sfc_err(sa, "failed: counter RxQ mempool name is too long");
+		rc = ENAMETOOLONG;
+		goto fail_long_name;
+	}
+
+	/*
+	 * It could be a single-producer single-consumer ring mempool, which
+	 * requires minimal barriers. However, the cache size and refill/burst
+	 * policy are aligned, therefore it does not matter which
+	 * mempool backend is chosen since the backend is unused.
+	 */
+	mp = rte_pktmbuf_pool_create(name, n_elements, cache_size,
+				     priv_size, data_room_size, sa->socket_id);
+	if (mp == NULL) {
+		sfc_err(sa, "failed to create counter RxQ mempool");
+		rc = rte_errno;
+		goto fail_mp_create;
+	}
+
+	sa->counter_rxq.sw_index = sfc_counters_rxq_sw_index(sas);
+	sa->counter_rxq.mp = mp;
+	sa->counter_rxq.state |= SFC_COUNTER_RXQ_ATTACHED;
+
+	sfc_log_init(sa, "done");
+
+	return 0;
+
+fail_mp_create:
+fail_long_name:
+	sfc_log_init(sa, "failed: %s", rte_strerror(rc));
+
+	return rc;
+}
+
+void
+sfc_mae_counter_rxq_detach(struct sfc_adapter *sa)
+{
+	struct sfc_adapter_shared * const sas = sfc_sa2shared(sa);
+
+	sfc_log_init(sa, "entry");
+
+	if (!sas->counters_rxq_allocated) {
+		sfc_log_init(sa, "counter queue is not supported - skip");
+		return;
+	}
+
+	if ((sa->counter_rxq.state & SFC_COUNTER_RXQ_ATTACHED) == 0) {
+		sfc_log_init(sa, "counter queue is not attached - skip");
+		return;
+	}
+
+	rte_mempool_free(sa->counter_rxq.mp);
+	sa->counter_rxq.mp = NULL;
+	sa->counter_rxq.state &= ~SFC_COUNTER_RXQ_ATTACHED;
+
+	sfc_log_init(sa, "done");
+}
+
+int
+sfc_mae_counter_rxq_init(struct sfc_adapter *sa)
+{
+	struct sfc_adapter_shared * const sas = sfc_sa2shared(sa);
+	const struct rte_eth_rxconf rxconf = {
+		.rx_free_thresh = SFC_COUNTER_RXQ_REFILL_LEVEL,
+		.rx_drop_en = 1,
+	};
+	uint16_t nb_rx_desc = SFC_COUNTER_RXQ_RX_DESC_COUNT;
+	int rc;
+
+	sfc_log_init(sa, "entry");
+
+	if (!sas->counters_rxq_allocated) {
+		sfc_log_init(sa, "counter queue is not supported - skip");
+		return 0;
+	}
+
+	if ((sa->counter_rxq.state & SFC_COUNTER_RXQ_ATTACHED) == 0) {
+		sfc_log_init(sa, "counter queue is not attached - skip");
+		return 0;
+	}
+
+	nb_rx_desc = RTE_MIN(nb_rx_desc, sa->rxq_max_entries);
+	nb_rx_desc = RTE_MAX(nb_rx_desc, sa->rxq_min_entries);
+
+	rc = sfc_rx_qinit_info(sa, sa->counter_rxq.sw_index,
+			       EFX_RXQ_FLAG_USER_MARK);
+	if (rc != 0)
+		goto fail_counter_rxq_init_info;
+
+	rc = sfc_rx_qinit(sa, sa->counter_rxq.sw_index, nb_rx_desc,
+			  sa->socket_id, &rxconf, sa->counter_rxq.mp);
+	if (rc != 0) {
+		sfc_err(sa, "failed to init counter RxQ");
+		goto fail_counter_rxq_init;
+	}
+
+	sa->counter_rxq.state |= SFC_COUNTER_RXQ_INITIALIZED;
+
+	sfc_log_init(sa, "done");
+
+	return 0;
+
+fail_counter_rxq_init:
+fail_counter_rxq_init_info:
+	sfc_log_init(sa, "failed: %s", rte_strerror(rc));
+
+	return rc;
+}
+
+void
+sfc_mae_counter_rxq_fini(struct sfc_adapter *sa)
+{
+	struct sfc_adapter_shared * const sas = sfc_sa2shared(sa);
+
+	sfc_log_init(sa, "entry");
+
+	if (!sas->counters_rxq_allocated) {
+		sfc_log_init(sa, "counter queue is not supported - skip");
+		return;
+	}
+
+	if ((sa->counter_rxq.state & SFC_COUNTER_RXQ_INITIALIZED) == 0) {
+		sfc_log_init(sa, "counter queue is not initialized - skip");
+		return;
+	}
+
+	sfc_rx_qfini(sa, sa->counter_rxq.sw_index);
+
+	sfc_log_init(sa, "done");
+}
diff --git a/drivers/net/sfc/sfc_mae_counter.h b/drivers/net/sfc/sfc_mae_counter.h
new file mode 100644
index 0000000000..f16d64a999
--- /dev/null
+++ b/drivers/net/sfc/sfc_mae_counter.h
@@ -0,0 +1,44 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ *
+ * Copyright(c) 2020-2021 Xilinx, Inc.
+ */
+
+#ifndef _SFC_MAE_COUNTER_H
+#define _SFC_MAE_COUNTER_H
+
+#include "sfc.h"
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/* Default values for a user of counter RxQ */
+#define SFC_MAE_COUNTER_RX_BURST 32
+#define SFC_COUNTER_RXQ_RX_DESC_COUNT 256
+
+/*
+ * The refill level is chosen based on requirement to keep number
+ * of give credits operations low.
+ */
+#define SFC_COUNTER_RXQ_REFILL_LEVEL (SFC_COUNTER_RXQ_RX_DESC_COUNT / 4)
+
+/*
+ * SF-122415-TC states that the packetiser that generates packets for
+ * counter stream must support 9k frames. Set it to the maximum supported
+ * size since in case of huge flow of counters, having fewer packets in counter
+ * updates is better.
+ */
+#define SFC_MAE_COUNTER_STREAM_PACKET_SIZE 9216
+
+bool sfc_mae_counter_rxq_required(struct sfc_adapter *sa);
+
+int sfc_mae_counter_rxq_attach(struct sfc_adapter *sa);
+void sfc_mae_counter_rxq_detach(struct sfc_adapter *sa);
+
+int sfc_mae_counter_rxq_init(struct sfc_adapter *sa);
+void sfc_mae_counter_rxq_fini(struct sfc_adapter *sa);
+
+#ifdef __cplusplus
+}
+#endif
+#endif  /* _SFC_MAE_COUNTER_H */
diff --git a/drivers/net/sfc/sfc_rx.c b/drivers/net/sfc/sfc_rx.c
index c7a7bd66ef..0532f77082 100644
--- a/drivers/net/sfc/sfc_rx.c
+++ b/drivers/net/sfc/sfc_rx.c
@@ -16,6 +16,7 @@
 #include "sfc_log.h"
 #include "sfc_ev.h"
 #include "sfc_rx.h"
+#include "sfc_mae_counter.h"
 #include "sfc_kvargs.h"
 #include "sfc_tweak.h"
 
@@ -1705,6 +1706,9 @@ sfc_rx_configure(struct sfc_adapter *sa)
 	struct sfc_rss *rss = &sas->rss;
 	struct rte_eth_conf *dev_conf = &sa->eth_dev->data->dev_conf;
 	const unsigned int nb_rx_queues = sa->eth_dev->data->nb_rx_queues;
+	const unsigned int nb_rsrv_rx_queues = sfc_nb_reserved_rxq(sas);
+	const unsigned int nb_rxq_total = nb_rx_queues + nb_rsrv_rx_queues;
+	bool reconfigure;
 	int rc;
 
 	sfc_log_init(sa, "nb_rx_queues=%u (old %u)",
@@ -1714,12 +1718,15 @@ sfc_rx_configure(struct sfc_adapter *sa)
 	if (rc != 0)
 		goto fail_check_mode;
 
-	if (nb_rx_queues == sas->rxq_count)
+	if (nb_rxq_total == sas->rxq_count) {
+		reconfigure = true;
 		goto configure_rss;
+	}
 
 	if (sas->rxq_info == NULL) {
+		reconfigure = false;
 		rc = ENOMEM;
-		sas->rxq_info = rte_calloc_socket("sfc-rxqs", nb_rx_queues,
+		sas->rxq_info = rte_calloc_socket("sfc-rxqs", nb_rxq_total,
 						  sizeof(sas->rxq_info[0]), 0,
 						  sa->socket_id);
 		if (sas->rxq_info == NULL)
@@ -1730,39 +1737,42 @@ sfc_rx_configure(struct sfc_adapter *sa)
 		 * since it should not be shared.
 		 */
 		rc = ENOMEM;
-		sa->rxq_ctrl = calloc(nb_rx_queues, sizeof(sa->rxq_ctrl[0]));
+		sa->rxq_ctrl = calloc(nb_rxq_total, sizeof(sa->rxq_ctrl[0]));
 		if (sa->rxq_ctrl == NULL)
 			goto fail_rxqs_ctrl_alloc;
 	} else {
 		struct sfc_rxq_info *new_rxq_info;
 		struct sfc_rxq *new_rxq_ctrl;
 
+		reconfigure = true;
+
+		/* Do not uninitialize reserved queues */
 		if (nb_rx_queues < sas->ethdev_rxq_count)
 			sfc_rx_fini_queues(sa, nb_rx_queues);
 
 		rc = ENOMEM;
 		new_rxq_info =
 			rte_realloc(sas->rxq_info,
-				    nb_rx_queues * sizeof(sas->rxq_info[0]), 0);
-		if (new_rxq_info == NULL && nb_rx_queues > 0)
+				    nb_rxq_total * sizeof(sas->rxq_info[0]), 0);
+		if (new_rxq_info == NULL && nb_rxq_total > 0)
 			goto fail_rxqs_realloc;
 
 		rc = ENOMEM;
 		new_rxq_ctrl = realloc(sa->rxq_ctrl,
-				       nb_rx_queues * sizeof(sa->rxq_ctrl[0]));
-		if (new_rxq_ctrl == NULL && nb_rx_queues > 0)
+				       nb_rxq_total * sizeof(sa->rxq_ctrl[0]));
+		if (new_rxq_ctrl == NULL && nb_rxq_total > 0)
 			goto fail_rxqs_ctrl_realloc;
 
 		sas->rxq_info = new_rxq_info;
 		sa->rxq_ctrl = new_rxq_ctrl;
-		if (nb_rx_queues > sas->rxq_count) {
+		if (nb_rxq_total > sas->rxq_count) {
 			unsigned int rxq_count = sas->rxq_count;
 
 			memset(&sas->rxq_info[rxq_count], 0,
-			       (nb_rx_queues - rxq_count) *
+			       (nb_rxq_total - rxq_count) *
 			       sizeof(sas->rxq_info[0]));
 			memset(&sa->rxq_ctrl[rxq_count], 0,
-			       (nb_rx_queues - rxq_count) *
+			       (nb_rxq_total - rxq_count) *
 			       sizeof(sa->rxq_ctrl[0]));
 		}
 	}
@@ -1779,7 +1789,13 @@ sfc_rx_configure(struct sfc_adapter *sa)
 		sas->ethdev_rxq_count++;
 	}
 
-	sas->rxq_count = sas->ethdev_rxq_count;
+	sas->rxq_count = sas->ethdev_rxq_count + nb_rsrv_rx_queues;
+
+	if (!reconfigure) {
+		rc = sfc_mae_counter_rxq_init(sa);
+		if (rc != 0)
+			goto fail_count_rxq_init;
+	}
 
 configure_rss:
 	rss->channels = (dev_conf->rxmode.mq_mode == ETH_MQ_RX_RSS) ?
@@ -1801,6 +1817,10 @@ sfc_rx_configure(struct sfc_adapter *sa)
 	return 0;
 
 fail_rx_process_adv_conf_rss:
+	if (!reconfigure)
+		sfc_mae_counter_rxq_fini(sa);
+
+fail_count_rxq_init:
 fail_rx_qinit_info:
 fail_rxqs_ctrl_realloc:
 fail_rxqs_realloc:
@@ -1824,6 +1844,7 @@ sfc_rx_close(struct sfc_adapter *sa)
 	struct sfc_rss *rss = &sfc_sa2shared(sa)->rss;
 
 	sfc_rx_fini_queues(sa, 0);
+	sfc_mae_counter_rxq_fini(sa);
 
 	rss->channels = 0;
 
-- 
2.30.2


^ permalink raw reply	[flat|nested] 104+ messages in thread

* [dpdk-dev] [PATCH v2 13/20] common/sfc_efx/base: add counter creation MCDI wrappers
  2021-06-04 14:23 ` [dpdk-dev] [PATCH v2 00/20] net/sfc: support flow API COUNT action Andrew Rybchenko
                     ` (11 preceding siblings ...)
  2021-06-04 14:24   ` [dpdk-dev] [PATCH v2 12/20] net/sfc: reserve RxQ for counters Andrew Rybchenko
@ 2021-06-04 14:24   ` Andrew Rybchenko
  2021-06-04 14:24   ` [dpdk-dev] [PATCH v2 14/20] common/sfc_efx/base: add counter stream " Andrew Rybchenko
                     ` (7 subsequent siblings)
  20 siblings, 0 replies; 104+ messages in thread
From: Andrew Rybchenko @ 2021-06-04 14:24 UTC (permalink / raw)
  To: dev; +Cc: Igor Romanov, Andy Moreton, Ivan Malov

From: Igor Romanov <igor.romanov@oktetlabs.ru>

Users will be able to create and free MAE counters. Support for
associating counters with an action set will be added in upcoming
patches.
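
For illustration only, a sketch of allocating and then freeing a single
counter with the new wrappers (libefx build environment assumed, error
handling trimmed):

    #include "efx.h"

    static efx_rc_t
    example_counter_alloc_free(efx_nic_t *enp)
    {
            efx_counter_t counter;
            uint32_t n_allocated;
            uint32_t n_freed;
            uint32_t gen_count;
            efx_rc_t rc;

            /* Ask for one counter; the generation count is optional. */
            rc = efx_mae_counters_alloc(enp, 1, &n_allocated, &counter,
                                        &gen_count);
            if (rc != 0)
                    return (rc);

            /* Free it; the generation count helps detect when all of its
             * packets have been received. */
            return (efx_mae_counters_free(enp, 1, &n_freed, &counter,
                                          &gen_count));
    }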

Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
Reviewed-by: Ivan Malov <ivan.malov@oktetlabs.ru>
---
 drivers/common/sfc_efx/base/efx.h      |  37 ++++++
 drivers/common/sfc_efx/base/efx_impl.h |   1 +
 drivers/common/sfc_efx/base/efx_mae.c  | 158 +++++++++++++++++++++++++
 drivers/common/sfc_efx/base/efx_mcdi.h |   7 ++
 drivers/common/sfc_efx/version.map     |   2 +
 5 files changed, 205 insertions(+)

diff --git a/drivers/common/sfc_efx/base/efx.h b/drivers/common/sfc_efx/base/efx.h
index 9bbd7cae55..d0f8bc10b3 100644
--- a/drivers/common/sfc_efx/base/efx.h
+++ b/drivers/common/sfc_efx/base/efx.h
@@ -4406,6 +4406,10 @@ efx_mae_action_set_fill_in_eh_id(
 	__in				efx_mae_actions_t *spec,
 	__in				const efx_mae_eh_id_t *eh_idp);
 
+typedef struct efx_counter_s {
+	uint32_t id;
+} efx_counter_t;
+
 /* Action set ID */
 typedef struct efx_mae_aset_id_s {
 	uint32_t id;
@@ -4418,6 +4422,39 @@ efx_mae_action_set_alloc(
 	__in				const efx_mae_actions_t *spec,
 	__out				efx_mae_aset_id_t *aset_idp);
 
+/*
+ * Generation count has two purposes:
+ *
+ * 1) Distinguish between counter packets that belong to a freed counter
+ *    and the packets that belong to a reallocated counter (with the same ID);
+ * 2) Make sure that all packets are received for a counter that was freed.
+ *
+ * API users should provide the generation count out parameter in the
+ * allocation function if counters can be reallocated and consistent counter
+ * values are required.
+ *
+ * API users that need consistent final counter values after counter
+ * deallocation or counter stream stop should provide the parameter in
+ * functions that free the counters and stop the counter stream.
+ */
+LIBEFX_API
+extern	__checkReturn			efx_rc_t
+efx_mae_counters_alloc(
+	__in				efx_nic_t *enp,
+	__in				uint32_t n_counters,
+	__out				uint32_t *n_allocatedp,
+	__out_ecount(n_counters)	efx_counter_t *countersp,
+	__out_opt			uint32_t *gen_countp);
+
+LIBEFX_API
+extern	__checkReturn			efx_rc_t
+efx_mae_counters_free(
+	__in				efx_nic_t *enp,
+	__in				uint32_t n_counters,
+	__out				uint32_t *n_freedp,
+	__in_ecount(n_counters)		const efx_counter_t *countersp,
+	__out_opt			uint32_t *gen_countp);
+
 LIBEFX_API
 extern	__checkReturn			efx_rc_t
 efx_mae_action_set_free(
diff --git a/drivers/common/sfc_efx/base/efx_impl.h b/drivers/common/sfc_efx/base/efx_impl.h
index f891e2616e..9dbf6d450c 100644
--- a/drivers/common/sfc_efx/base/efx_impl.h
+++ b/drivers/common/sfc_efx/base/efx_impl.h
@@ -821,6 +821,7 @@ typedef struct efx_mae_s {
 	/** Outer rule match field capabilities. */
 	efx_mae_field_cap_t		*em_outer_rule_field_caps;
 	size_t				em_outer_rule_field_caps_size;
+	uint32_t			em_max_ncounters;
 } efx_mae_t;
 
 #endif /* EFSYS_OPT_MAE */
diff --git a/drivers/common/sfc_efx/base/efx_mae.c b/drivers/common/sfc_efx/base/efx_mae.c
index 5697488040..955f1d4353 100644
--- a/drivers/common/sfc_efx/base/efx_mae.c
+++ b/drivers/common/sfc_efx/base/efx_mae.c
@@ -67,6 +67,9 @@ efx_mae_get_capabilities(
 	maep->em_max_nfields =
 	    MCDI_OUT_DWORD(req, MAE_GET_CAPS_OUT_MATCH_FIELD_COUNT);
 
+	maep->em_max_ncounters =
+	    MCDI_OUT_DWORD(req, MAE_GET_CAPS_OUT_COUNTERS);
+
 	return (0);
 
 fail2:
@@ -2600,6 +2603,161 @@ efx_mae_action_rule_remove(
 
 	return (0);
 
+fail4:
+	EFSYS_PROBE(fail4);
+fail3:
+	EFSYS_PROBE(fail3);
+fail2:
+	EFSYS_PROBE(fail2);
+fail1:
+	EFSYS_PROBE1(fail1, efx_rc_t, rc);
+	return (rc);
+}
+
+	__checkReturn			efx_rc_t
+efx_mae_counters_alloc(
+	__in				efx_nic_t *enp,
+	__in				uint32_t n_counters,
+	__out				uint32_t *n_allocatedp,
+	__out_ecount(n_counters)	efx_counter_t *countersp,
+	__out_opt			uint32_t *gen_countp)
+{
+	EFX_MCDI_DECLARE_BUF(payload,
+	    MC_CMD_MAE_COUNTER_ALLOC_IN_LEN,
+	    MC_CMD_MAE_COUNTER_ALLOC_OUT_LENMAX_MCDI2);
+	efx_mae_t *maep = enp->en_maep;
+	uint32_t n_allocated;
+	efx_mcdi_req_t req;
+	unsigned int i;
+	efx_rc_t rc;
+
+	if (n_counters > maep->em_max_ncounters ||
+	    n_counters < MC_CMD_MAE_COUNTER_ALLOC_OUT_COUNTER_ID_MINNUM ||
+	    n_counters > MC_CMD_MAE_COUNTER_ALLOC_OUT_COUNTER_ID_MAXNUM_MCDI2) {
+		rc = EINVAL;
+		goto fail1;
+	}
+
+	req.emr_cmd = MC_CMD_MAE_COUNTER_ALLOC;
+	req.emr_in_buf = payload;
+	req.emr_in_length = MC_CMD_MAE_COUNTER_ALLOC_IN_LEN;
+	req.emr_out_buf = payload;
+	req.emr_out_length = MC_CMD_MAE_COUNTER_ALLOC_OUT_LEN(n_counters);
+
+	MCDI_IN_SET_DWORD(req, MAE_COUNTER_ALLOC_IN_REQUESTED_COUNT,
+	    n_counters);
+
+	efx_mcdi_execute(enp, &req);
+
+	if (req.emr_rc != 0) {
+		rc = req.emr_rc;
+		goto fail2;
+	}
+
+	if (req.emr_out_length_used < MC_CMD_MAE_COUNTER_ALLOC_OUT_LENMIN) {
+		rc = EMSGSIZE;
+		goto fail3;
+	}
+
+	n_allocated = MCDI_OUT_DWORD(req,
+	    MAE_COUNTER_ALLOC_OUT_COUNTER_ID_COUNT);
+	if (n_allocated < MC_CMD_MAE_COUNTER_ALLOC_OUT_COUNTER_ID_MINNUM) {
+		rc = EFAULT;
+		goto fail4;
+	}
+
+	for (i = 0; i < n_allocated; i++) {
+		countersp[i].id = MCDI_OUT_INDEXED_DWORD(req,
+		    MAE_COUNTER_ALLOC_OUT_COUNTER_ID, i);
+	}
+
+	if (gen_countp != NULL) {
+		*gen_countp = MCDI_OUT_DWORD(req,
+				    MAE_COUNTER_ALLOC_OUT_GENERATION_COUNT);
+	}
+
+	*n_allocatedp = n_allocated;
+
+	return (0);
+
+fail4:
+	EFSYS_PROBE(fail4);
+fail3:
+	EFSYS_PROBE(fail3);
+fail2:
+	EFSYS_PROBE(fail2);
+fail1:
+	EFSYS_PROBE1(fail1, efx_rc_t, rc);
+
+	return (rc);
+}
+
+	__checkReturn			efx_rc_t
+efx_mae_counters_free(
+	__in				efx_nic_t *enp,
+	__in				uint32_t n_counters,
+	__out				uint32_t *n_freedp,
+	__in_ecount(n_counters)		const efx_counter_t *countersp,
+	__out_opt			uint32_t *gen_countp)
+{
+	EFX_MCDI_DECLARE_BUF(payload,
+	    MC_CMD_MAE_COUNTER_FREE_IN_LENMAX_MCDI2,
+	    MC_CMD_MAE_COUNTER_FREE_OUT_LENMAX_MCDI2);
+	efx_mae_t *maep = enp->en_maep;
+	efx_mcdi_req_t req;
+	uint32_t n_freed;
+	unsigned int i;
+	efx_rc_t rc;
+
+	if (n_counters > maep->em_max_ncounters ||
+	    n_counters < MC_CMD_MAE_COUNTER_FREE_IN_FREE_COUNTER_ID_MINNUM ||
+	    n_counters >
+	    MC_CMD_MAE_COUNTER_FREE_IN_FREE_COUNTER_ID_MAXNUM_MCDI2) {
+		rc = EINVAL;
+		goto fail1;
+	}
+
+	req.emr_cmd = MC_CMD_MAE_COUNTER_FREE;
+	req.emr_in_buf = payload;
+	req.emr_in_length = MC_CMD_MAE_COUNTER_FREE_IN_LEN(n_counters);
+	req.emr_out_buf = payload;
+	req.emr_out_length = MC_CMD_MAE_COUNTER_FREE_OUT_LEN(n_counters);
+
+	for (i = 0; i < n_counters; i++) {
+		MCDI_IN_SET_INDEXED_DWORD(req,
+		    MAE_COUNTER_FREE_IN_FREE_COUNTER_ID, i, countersp[i].id);
+	}
+	MCDI_IN_SET_DWORD(req, MAE_COUNTER_FREE_IN_COUNTER_ID_COUNT,
+			  n_counters);
+
+	efx_mcdi_execute(enp, &req);
+
+	if (req.emr_rc != 0) {
+		rc = req.emr_rc;
+		goto fail2;
+	}
+
+	if (req.emr_out_length_used < MC_CMD_MAE_COUNTER_FREE_OUT_LENMIN) {
+		rc = EMSGSIZE;
+		goto fail3;
+	}
+
+	n_freed = MCDI_OUT_DWORD(req, MAE_COUNTER_FREE_OUT_COUNTER_ID_COUNT);
+
+	if (n_freed < MC_CMD_MAE_COUNTER_FREE_OUT_FREED_COUNTER_ID_MINNUM) {
+		rc = EFAULT;
+		goto fail4;
+	}
+
+	if (gen_countp != NULL) {
+		*gen_countp = MCDI_OUT_DWORD(req,
+				    MAE_COUNTER_FREE_OUT_GENERATION_COUNT);
+	}
+
+	*n_freedp = n_freed;
+
+	return (0);
+
 fail4:
 	EFSYS_PROBE(fail4);
 fail3:
diff --git a/drivers/common/sfc_efx/base/efx_mcdi.h b/drivers/common/sfc_efx/base/efx_mcdi.h
index 70a97ea337..90b70de97b 100644
--- a/drivers/common/sfc_efx/base/efx_mcdi.h
+++ b/drivers/common/sfc_efx/base/efx_mcdi.h
@@ -311,6 +311,10 @@ efx_mcdi_phy_module_get_info(
 	EFX_SET_DWORD_FIELD(*MCDI_IN2(_emr, efx_dword_t, _ofst),	\
 		MC_CMD_ ## _field, _value)
 
+#define	MCDI_IN_SET_INDEXED_DWORD(_emr, _ofst, _idx, _value)		\
+	EFX_POPULATE_DWORD_1(*(MCDI_IN2(_emr, efx_dword_t, _ofst) +	\
+			     (_idx)), EFX_DWORD_0, _value)		\
+
 #define	MCDI_IN_POPULATE_DWORD_1(_emr, _ofst, _field1, _value1)		\
 	EFX_POPULATE_DWORD_1(*MCDI_IN2(_emr, efx_dword_t, _ofst),	\
 		MC_CMD_ ## _field1, _value1)
@@ -451,6 +455,9 @@ efx_mcdi_phy_module_get_info(
 	EFX_DWORD_FIELD(*MCDI_OUT2(_emr, efx_dword_t, _ofst),		\
 			MC_CMD_ ## _field)
 
+#define	MCDI_OUT_INDEXED_DWORD(_emr, _ofst, _idx)			\
+	MCDI_OUT_INDEXED_DWORD_FIELD(_emr, _ofst, _idx, EFX_DWORD_0)
+
 #define	MCDI_OUT_INDEXED_DWORD_FIELD(_emr, _ofst, _idx, _field)		\
 	EFX_DWORD_FIELD(*(MCDI_OUT2(_emr, efx_dword_t, _ofst) +		\
 			(_idx)), _field)
diff --git a/drivers/common/sfc_efx/version.map b/drivers/common/sfc_efx/version.map
index ae85ed18c6..30b243a1e7 100644
--- a/drivers/common/sfc_efx/version.map
+++ b/drivers/common/sfc_efx/version.map
@@ -102,6 +102,8 @@ INTERNAL {
 	efx_mae_action_set_spec_fini;
 	efx_mae_action_set_spec_init;
 	efx_mae_action_set_specs_equal;
+	efx_mae_counters_alloc;
+	efx_mae_counters_free;
 	efx_mae_encap_header_alloc;
 	efx_mae_encap_header_free;
 	efx_mae_fini;
-- 
2.30.2


^ permalink raw reply	[flat|nested] 104+ messages in thread

* [dpdk-dev] [PATCH v2 14/20] common/sfc_efx/base: add counter stream MCDI wrappers
  2021-06-04 14:23 ` [dpdk-dev] [PATCH v2 00/20] net/sfc: support flow API COUNT action Andrew Rybchenko
                     ` (12 preceding siblings ...)
  2021-06-04 14:24   ` [dpdk-dev] [PATCH v2 13/20] common/sfc_efx/base: add counter creation MCDI wrappers Andrew Rybchenko
@ 2021-06-04 14:24   ` Andrew Rybchenko
  2021-06-04 14:24   ` [dpdk-dev] [PATCH v2 15/20] common/sfc_efx/base: support counter in action set Andrew Rybchenko
                     ` (6 subsequent siblings)
  20 siblings, 0 replies; 104+ messages in thread
From: Andrew Rybchenko @ 2021-06-04 14:24 UTC (permalink / raw)
  To: dev; +Cc: Igor Romanov, Andy Moreton, Ivan Malov

From: Igor Romanov <igor.romanov@oktetlabs.ru>

These MCDIs will be used to control the counter Rx queue packet flow.
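
A hedged sketch of the start / give-credits / stop sequence (libefx
build environment assumed; the packet size and credit count below are
placeholders):

    #include "efx.h"

    static efx_rc_t
    example_counter_stream(efx_nic_t *enp, uint16_t rxq_hw_id)
    {
            uint32_t flags_out = 0;
            uint32_t gen_count;
            efx_rc_t rc;

            rc = efx_mae_counters_stream_start(enp, rxq_hw_id,
                                               9216 /* packet size */,
                                               0 /* flags_in */, &flags_out);
            if (rc != 0)
                    return (rc);

            /*
             * With credit-based flow control, credits must be returned to
             * the packetiser as Rx descriptors are pushed back to the queue.
             */
            if ((flags_out & EFX_MAE_COUNTERS_STREAM_OUT_USES_CREDITS) != 0)
                    (void)efx_mae_counters_stream_give_credits(enp, 16);

            return (efx_mae_counters_stream_stop(enp, rxq_hw_id, &gen_count));
    }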

Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
Reviewed-by: Ivan Malov <ivan.malov@oktetlabs.ru>
---
 drivers/common/sfc_efx/base/efx.h     |  32 ++++++
 drivers/common/sfc_efx/base/efx_mae.c | 138 ++++++++++++++++++++++++++
 drivers/common/sfc_efx/version.map    |   3 +
 3 files changed, 173 insertions(+)

diff --git a/drivers/common/sfc_efx/base/efx.h b/drivers/common/sfc_efx/base/efx.h
index d0f8bc10b3..cc173d13c6 100644
--- a/drivers/common/sfc_efx/base/efx.h
+++ b/drivers/common/sfc_efx/base/efx.h
@@ -4455,6 +4455,38 @@ efx_mae_counters_free(
 	__in_ecount(n_counters)		const efx_counter_t *countersp,
 	__out_opt			uint32_t *gen_countp);
 
+/* When set, include counters with a value of zero */
+#define	EFX_MAE_COUNTERS_STREAM_IN_ZERO_SQUASH_DISABLE	(1U << 0)
+
+/*
+ * Set if credit-based flow control is used. In this case the driver
+ * must call efx_mae_counters_stream_give_credits() to notify the
+ * packetiser of descriptors written.
+ */
+#define	EFX_MAE_COUNTERS_STREAM_OUT_USES_CREDITS	(1U << 0)
+
+LIBEFX_API
+extern	__checkReturn			efx_rc_t
+efx_mae_counters_stream_start(
+	__in				efx_nic_t *enp,
+	__in				uint16_t rxq_id,
+	__in				uint16_t packet_size,
+	__in				uint32_t flags_in,
+	__out				uint32_t *flags_out);
+
+LIBEFX_API
+extern	__checkReturn			efx_rc_t
+efx_mae_counters_stream_stop(
+	__in				efx_nic_t *enp,
+	__in				uint16_t rxq_id,
+	__out_opt			uint32_t *gen_countp);
+
+LIBEFX_API
+extern	__checkReturn			efx_rc_t
+efx_mae_counters_stream_give_credits(
+	__in				efx_nic_t *enp,
+	__in				uint32_t n_credits);
+
 LIBEFX_API
 extern	__checkReturn			efx_rc_t
 efx_mae_action_set_free(
diff --git a/drivers/common/sfc_efx/base/efx_mae.c b/drivers/common/sfc_efx/base/efx_mae.c
index 955f1d4353..0b3131161b 100644
--- a/drivers/common/sfc_efx/base/efx_mae.c
+++ b/drivers/common/sfc_efx/base/efx_mae.c
@@ -2766,6 +2766,144 @@ efx_mae_counters_free(
 	EFSYS_PROBE(fail2);
 fail1:
 	EFSYS_PROBE1(fail1, efx_rc_t, rc);
+
+	return (rc);
+}
+
+	__checkReturn			efx_rc_t
+efx_mae_counters_stream_start(
+	__in				efx_nic_t *enp,
+	__in				uint16_t rxq_id,
+	__in				uint16_t packet_size,
+	__in				uint32_t flags_in,
+	__out				uint32_t *flags_out)
+{
+	efx_mcdi_req_t req;
+	EFX_MCDI_DECLARE_BUF(payload, MC_CMD_MAE_COUNTERS_STREAM_START_IN_LEN,
+			     MC_CMD_MAE_COUNTERS_STREAM_START_OUT_LEN);
+	efx_rc_t rc;
+
+	EFX_STATIC_ASSERT(EFX_MAE_COUNTERS_STREAM_IN_ZERO_SQUASH_DISABLE ==
+	    1U << MC_CMD_MAE_COUNTERS_STREAM_START_IN_ZERO_SQUASH_DISABLE_LBN);
+
+	EFX_STATIC_ASSERT(EFX_MAE_COUNTERS_STREAM_OUT_USES_CREDITS ==
+	    1U << MC_CMD_MAE_COUNTERS_STREAM_START_OUT_USES_CREDITS_LBN);
+
+	req.emr_cmd = MC_CMD_MAE_COUNTERS_STREAM_START;
+	req.emr_in_buf = payload;
+	req.emr_in_length = MC_CMD_MAE_COUNTERS_STREAM_START_IN_LEN;
+	req.emr_out_buf = payload;
+	req.emr_out_length = MC_CMD_MAE_COUNTERS_STREAM_START_OUT_LEN;
+
+	MCDI_IN_SET_WORD(req, MAE_COUNTERS_STREAM_START_IN_QID, rxq_id);
+	MCDI_IN_SET_WORD(req, MAE_COUNTERS_STREAM_START_IN_PACKET_SIZE,
+			 packet_size);
+	MCDI_IN_SET_DWORD(req, MAE_COUNTERS_STREAM_START_IN_FLAGS, flags_in);
+
+	efx_mcdi_execute(enp, &req);
+
+	if (req.emr_rc != 0) {
+		rc = req.emr_rc;
+		goto fail1;
+	}
+
+	if (req.emr_out_length_used <
+	    MC_CMD_MAE_COUNTERS_STREAM_START_OUT_LEN) {
+		rc = EMSGSIZE;
+		goto fail2;
+	}
+
+	*flags_out = MCDI_OUT_DWORD(req, MAE_COUNTERS_STREAM_START_OUT_FLAGS);
+
+	return (0);
+
+fail2:
+	EFSYS_PROBE(fail2);
+fail1:
+	EFSYS_PROBE1(fail1, efx_rc_t, rc);
+
+	return (rc);
+}
+
+	__checkReturn			efx_rc_t
+efx_mae_counters_stream_stop(
+	__in				efx_nic_t *enp,
+	__in				uint16_t rxq_id,
+	__out_opt			uint32_t *gen_countp)
+{
+	efx_mcdi_req_t req;
+	EFX_MCDI_DECLARE_BUF(payload, MC_CMD_MAE_COUNTERS_STREAM_STOP_IN_LEN,
+			     MC_CMD_MAE_COUNTERS_STREAM_STOP_OUT_LEN);
+	efx_rc_t rc;
+
+	req.emr_cmd = MC_CMD_MAE_COUNTERS_STREAM_STOP;
+	req.emr_in_buf = payload;
+	req.emr_in_length = MC_CMD_MAE_COUNTERS_STREAM_STOP_IN_LEN;
+	req.emr_out_buf = payload;
+	req.emr_out_length = MC_CMD_MAE_COUNTERS_STREAM_STOP_OUT_LEN;
+
+	MCDI_IN_SET_WORD(req, MAE_COUNTERS_STREAM_STOP_IN_QID, rxq_id);
+
+	efx_mcdi_execute(enp, &req);
+
+	if (req.emr_rc != 0) {
+		rc = req.emr_rc;
+		goto fail1;
+	}
+
+	if (req.emr_out_length_used <
+	    MC_CMD_MAE_COUNTERS_STREAM_STOP_OUT_LEN) {
+		rc = EMSGSIZE;
+		goto fail2;
+	}
+
+	if (gen_countp != NULL) {
+		*gen_countp = MCDI_OUT_DWORD(req,
+			    MAE_COUNTERS_STREAM_STOP_OUT_GENERATION_COUNT);
+	}
+
+	return (0);
+
+fail2:
+	EFSYS_PROBE(fail2);
+fail1:
+	EFSYS_PROBE1(fail1, efx_rc_t, rc);
+
+	return (rc);
+}
+
+	__checkReturn			efx_rc_t
+efx_mae_counters_stream_give_credits(
+	__in				efx_nic_t *enp,
+	__in				uint32_t n_credits)
+{
+	efx_mcdi_req_t req;
+	EFX_MCDI_DECLARE_BUF(payload,
+			     MC_CMD_MAE_COUNTERS_STREAM_GIVE_CREDITS_IN_LEN,
+			     MC_CMD_MAE_COUNTERS_STREAM_GIVE_CREDITS_OUT_LEN);
+	efx_rc_t rc;
+
+	req.emr_cmd = MC_CMD_MAE_COUNTERS_STREAM_GIVE_CREDITS;
+	req.emr_in_buf = payload;
+	req.emr_in_length = MC_CMD_MAE_COUNTERS_STREAM_GIVE_CREDITS_IN_LEN;
+	req.emr_out_buf = payload;
+	req.emr_out_length = MC_CMD_MAE_COUNTERS_STREAM_GIVE_CREDITS_OUT_LEN;
+
+	MCDI_IN_SET_DWORD(req, MAE_COUNTERS_STREAM_GIVE_CREDITS_IN_NUM_CREDITS,
+			 n_credits);
+
+	efx_mcdi_execute(enp, &req);
+
+	if (req.emr_rc != 0) {
+		rc = req.emr_rc;
+		goto fail1;
+	}
+
+	return (0);
+
+fail1:
+	EFSYS_PROBE1(fail1, efx_rc_t, rc);
+
 	return (rc);
 }
 
diff --git a/drivers/common/sfc_efx/version.map b/drivers/common/sfc_efx/version.map
index 30b243a1e7..622f5d4cf5 100644
--- a/drivers/common/sfc_efx/version.map
+++ b/drivers/common/sfc_efx/version.map
@@ -104,6 +104,9 @@ INTERNAL {
 	efx_mae_action_set_specs_equal;
 	efx_mae_counters_alloc;
 	efx_mae_counters_free;
+	efx_mae_counters_stream_give_credits;
+	efx_mae_counters_stream_start;
+	efx_mae_counters_stream_stop;
 	efx_mae_encap_header_alloc;
 	efx_mae_encap_header_free;
 	efx_mae_fini;
-- 
2.30.2


^ permalink raw reply	[flat|nested] 104+ messages in thread

* [dpdk-dev] [PATCH v2 15/20] common/sfc_efx/base: support counter in action set
  2021-06-04 14:23 ` [dpdk-dev] [PATCH v2 00/20] net/sfc: support flow API COUNT action Andrew Rybchenko
                     ` (13 preceding siblings ...)
  2021-06-04 14:24   ` [dpdk-dev] [PATCH v2 14/20] common/sfc_efx/base: add counter stream " Andrew Rybchenko
@ 2021-06-04 14:24   ` Andrew Rybchenko
  2021-06-04 14:24   ` [dpdk-dev] [PATCH v2 16/20] net/sfc: add Rx datapath method to get pushed buffers count Andrew Rybchenko
                     ` (5 subsequent siblings)
  20 siblings, 0 replies; 104+ messages in thread
From: Andrew Rybchenko @ 2021-06-04 14:24 UTC (permalink / raw)
  To: dev; +Cc: Igor Romanov, Andy Moreton, Ivan Malov

From: Igor Romanov <igor.romanov@oktetlabs.ru>

Users will be able to associate a counter with an MAE action set to
collect packet and byte counts for a specific action set.
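
For illustration only, a sketch of the two-step flow introduced here:
request a COUNT action while parsing flow actions, then bind an already
allocated counter ID right before action set allocation (libefx build
environment assumed):

    #include "efx.h"

    static efx_rc_t
    example_action_set_with_count(efx_nic_t *enp, efx_mae_actions_t *spec,
                                  const efx_counter_t *counterp,
                                  efx_mae_aset_id_t *aset_idp)
    {
            efx_rc_t rc;

            /* Step 1: add the COUNT action while parsing flow actions. */
            rc = efx_mae_action_set_populate_count(spec);
            if (rc != 0)
                    return (rc);

            /* Step 2: fill in the counter ID prior to allocation. */
            rc = efx_mae_action_set_fill_in_counter_id(spec, counterp);
            if (rc != 0)
                    return (rc);

            return (efx_mae_action_set_alloc(enp, spec, aset_idp));
    }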

Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
Reviewed-by: Ivan Malov <ivan.malov@oktetlabs.ru>
---
 drivers/common/sfc_efx/base/efx.h      |  21 ++++
 drivers/common/sfc_efx/base/efx_impl.h |   3 +
 drivers/common/sfc_efx/base/efx_mae.c  | 133 ++++++++++++++++++++++++-
 drivers/common/sfc_efx/version.map     |   3 +
 4 files changed, 157 insertions(+), 3 deletions(-)

diff --git a/drivers/common/sfc_efx/base/efx.h b/drivers/common/sfc_efx/base/efx.h
index cc173d13c6..628e61e065 100644
--- a/drivers/common/sfc_efx/base/efx.h
+++ b/drivers/common/sfc_efx/base/efx.h
@@ -4306,6 +4306,15 @@ extern	__checkReturn			efx_rc_t
 efx_mae_action_set_populate_encap(
 	__in				efx_mae_actions_t *spec);
 
+/*
+ * Use efx_mae_action_set_fill_in_counter_id() to set the ID of a counter
+ * in the specification prior to action set allocation.
+ */
+LIBEFX_API
+extern	__checkReturn			efx_rc_t
+efx_mae_action_set_populate_count(
+	__in				efx_mae_actions_t *spec);
+
 LIBEFX_API
 extern	__checkReturn			efx_rc_t
 efx_mae_action_set_populate_flag(
@@ -4410,6 +4419,18 @@ typedef struct efx_counter_s {
 	uint32_t id;
 } efx_counter_t;
 
+LIBEFX_API
+extern	__checkReturn			unsigned int
+efx_mae_action_set_get_nb_count(
+	__in				const efx_mae_actions_t *spec);
+
+/* See description before efx_mae_action_set_populate_count(). */
+LIBEFX_API
+extern	__checkReturn			efx_rc_t
+efx_mae_action_set_fill_in_counter_id(
+	__in				efx_mae_actions_t *spec,
+	__in				const efx_counter_t *counter_idp);
+
 /* Action set ID */
 typedef struct efx_mae_aset_id_s {
 	uint32_t id;
diff --git a/drivers/common/sfc_efx/base/efx_impl.h b/drivers/common/sfc_efx/base/efx_impl.h
index 9dbf6d450c..992edbabe3 100644
--- a/drivers/common/sfc_efx/base/efx_impl.h
+++ b/drivers/common/sfc_efx/base/efx_impl.h
@@ -1734,6 +1734,7 @@ typedef enum efx_mae_action_e {
 	EFX_MAE_ACTION_DECAP,
 	EFX_MAE_ACTION_VLAN_POP,
 	EFX_MAE_ACTION_VLAN_PUSH,
+	EFX_MAE_ACTION_COUNT,
 	EFX_MAE_ACTION_ENCAP,
 
 	/*
@@ -1764,6 +1765,7 @@ typedef struct efx_mae_action_vlan_push_s {
 
 typedef struct efx_mae_actions_rsrc_s {
 	efx_mae_eh_id_t			emar_eh_id;
+	efx_counter_t			emar_counter_id;
 } efx_mae_actions_rsrc_t;
 
 struct efx_mae_actions_s {
@@ -1774,6 +1776,7 @@ struct efx_mae_actions_s {
 	unsigned int			ema_n_vlan_tags_to_push;
 	efx_mae_action_vlan_push_t	ema_vlan_push_descs[
 	    EFX_MAE_VLAN_PUSH_MAX_NTAGS];
+	unsigned int			ema_n_count_actions;
 	uint32_t			ema_mark_value;
 	efx_mport_sel_t			ema_deliver_mport;
 
diff --git a/drivers/common/sfc_efx/base/efx_mae.c b/drivers/common/sfc_efx/base/efx_mae.c
index 0b3131161b..8d1294a627 100644
--- a/drivers/common/sfc_efx/base/efx_mae.c
+++ b/drivers/common/sfc_efx/base/efx_mae.c
@@ -1191,6 +1191,7 @@ efx_mae_action_set_spec_init(
 	}
 
 	spec->ema_rsrc.emar_eh_id.id = EFX_MAE_RSRC_ID_INVALID;
+	spec->ema_rsrc.emar_counter_id.id = EFX_MAE_RSRC_ID_INVALID;
 
 	*specp = spec;
 
@@ -1358,6 +1359,50 @@ efx_mae_action_set_add_encap(
 	return (rc);
 }
 
+static	__checkReturn			efx_rc_t
+efx_mae_action_set_add_count(
+	__in				efx_mae_actions_t *spec,
+	__in				size_t arg_size,
+	__in_bcount(arg_size)		const uint8_t *arg)
+{
+	efx_rc_t rc;
+
+	EFX_STATIC_ASSERT(EFX_MAE_RSRC_ID_INVALID ==
+			  MC_CMD_MAE_COUNTER_ALLOC_OUT_COUNTER_ID_NULL);
+
+	/*
+	 * Preparing an action set spec to update a counter requires
+	 * two steps: first add this action to the action spec, and then
+	 * add the counter ID to the spec. This allows validity checking
+	 * and resource allocation to be done separately.
+	 * Mark the counter ID as invalid in the spec to ensure that the
+	 * caller must also invoke efx_mae_action_set_fill_in_counter_id()
+	 * before action set allocation.
+	 */
+	spec->ema_rsrc.emar_counter_id.id = EFX_MAE_RSRC_ID_INVALID;
+
+	/* Nothing else is supposed to take place over here. */
+	if (arg_size != 0) {
+		rc = EINVAL;
+		goto fail1;
+	}
+
+	if (arg != NULL) {
+		rc = EINVAL;
+		goto fail2;
+	}
+
+	++(spec->ema_n_count_actions);
+
+	return (0);
+
+fail2:
+	EFSYS_PROBE(fail2);
+fail1:
+	EFSYS_PROBE1(fail1, efx_rc_t, rc);
+	return (rc);
+}
+
 static	__checkReturn			efx_rc_t
 efx_mae_action_set_add_flag(
 	__in				efx_mae_actions_t *spec,
@@ -1466,6 +1511,9 @@ static const efx_mae_action_desc_t efx_mae_actions[EFX_MAE_NACTIONS] = {
 	[EFX_MAE_ACTION_ENCAP] = {
 		.emad_add = efx_mae_action_set_add_encap
 	},
+	[EFX_MAE_ACTION_COUNT] = {
+		.emad_add = efx_mae_action_set_add_count
+	},
 	[EFX_MAE_ACTION_FLAG] = {
 		.emad_add = efx_mae_action_set_add_flag
 	},
@@ -1481,6 +1529,12 @@ static const uint32_t efx_mae_action_ordered_map =
 	(1U << EFX_MAE_ACTION_DECAP) |
 	(1U << EFX_MAE_ACTION_VLAN_POP) |
 	(1U << EFX_MAE_ACTION_VLAN_PUSH) |
+	/*
+	 * HW will conduct action COUNT after
+	 * the matching packet has been modified by
+	 * length-affecting actions except for ENCAP.
+	 */
+	(1U << EFX_MAE_ACTION_COUNT) |
 	(1U << EFX_MAE_ACTION_ENCAP) |
 	(1U << EFX_MAE_ACTION_FLAG) |
 	(1U << EFX_MAE_ACTION_MARK) |
@@ -1497,7 +1551,8 @@ static const uint32_t efx_mae_action_nonstrict_map =
 
 static const uint32_t efx_mae_action_repeat_map =
 	(1U << EFX_MAE_ACTION_VLAN_POP) |
-	(1U << EFX_MAE_ACTION_VLAN_PUSH);
+	(1U << EFX_MAE_ACTION_VLAN_PUSH) |
+	(1U << EFX_MAE_ACTION_COUNT);
 
 /*
  * Add an action to an action set.
@@ -1620,6 +1675,20 @@ efx_mae_action_set_populate_encap(
 	    EFX_MAE_ACTION_ENCAP, 0, NULL));
 }
 
+	__checkReturn			efx_rc_t
+efx_mae_action_set_populate_count(
+	__in				efx_mae_actions_t *spec)
+{
+	/*
+	 * There is no argument to pass counter ID, thus, one does not
+	 * need to allocate a counter while parsing application input.
+	 * This is useful since building an action set may be done simply to
+	 * validate a rule, whilst resource allocation usually consumes time.
+	 */
+	return (efx_mae_action_set_spec_populate(spec,
+	    EFX_MAE_ACTION_COUNT, 0, NULL));
+}
+
 	__checkReturn			efx_rc_t
 efx_mae_action_set_populate_flag(
 	__in				efx_mae_actions_t *spec)
@@ -2306,8 +2375,6 @@ efx_mae_action_set_alloc(
 	 */
 	MCDI_IN_SET_DWORD(req,
 	    MAE_ACTION_SET_ALLOC_IN_COUNTER_LIST_ID, EFX_MAE_RSRC_ID_INVALID);
-	MCDI_IN_SET_DWORD(req,
-	    MAE_ACTION_SET_ALLOC_IN_COUNTER_ID, EFX_MAE_RSRC_ID_INVALID);
 
 	if ((spec->ema_actions & (1U << EFX_MAE_ACTION_DECAP)) != 0) {
 		MCDI_IN_SET_DWORD_FIELD(req, MAE_ACTION_SET_ALLOC_IN_FLAGS,
@@ -2344,6 +2411,8 @@ efx_mae_action_set_alloc(
 
 	MCDI_IN_SET_DWORD(req, MAE_ACTION_SET_ALLOC_IN_ENCAP_HEADER_ID,
 	    spec->ema_rsrc.emar_eh_id.id);
+	MCDI_IN_SET_DWORD(req, MAE_ACTION_SET_ALLOC_IN_COUNTER_ID,
+	    spec->ema_rsrc.emar_counter_id.id);
 
 	if ((spec->ema_actions & (1U << EFX_MAE_ACTION_FLAG)) != 0) {
 		MCDI_IN_SET_DWORD_FIELD(req, MAE_ACTION_SET_ALLOC_IN_FLAGS,
@@ -2603,6 +2672,64 @@ efx_mae_action_rule_remove(
 
 	return (0);
 
+fail4:
+	EFSYS_PROBE(fail4);
+fail3:
+	EFSYS_PROBE(fail3);
+fail2:
+	EFSYS_PROBE(fail2);
+fail1:
+	EFSYS_PROBE1(fail1, efx_rc_t, rc);
+	return (rc);
+}
+
+	__checkReturn			unsigned int
+efx_mae_action_set_get_nb_count(
+	__in				const efx_mae_actions_t *spec)
+{
+	return (spec->ema_n_count_actions);
+}
+
+	__checkReturn			efx_rc_t
+efx_mae_action_set_fill_in_counter_id(
+	__in				efx_mae_actions_t *spec,
+	__in				const efx_counter_t *counter_idp)
+{
+	efx_rc_t rc;
+
+	if ((spec->ema_actions & (1U << EFX_MAE_ACTION_COUNT)) == 0) {
+		/*
+		 * Invalid to add counter ID if spec does not have COUNT action.
+		 */
+		rc = EINVAL;
+		goto fail1;
+	}
+
+	if (spec->ema_n_count_actions != 1) {
+		/*
+		 * Having multiple COUNT actions in the spec requires a counter
+		 * list to be used. This API must only be used for a single
+		 * counter per spec. Turn down the request as inappropriate.
+		 */
+		rc = EINVAL;
+		goto fail2;
+	}
+
+	if (spec->ema_rsrc.emar_counter_id.id != EFX_MAE_RSRC_ID_INVALID) {
+		/* The caller attempts to indicate counter ID twice. */
+		rc = EALREADY;
+		goto fail3;
+	}
+
+	if (counter_idp->id == EFX_MAE_RSRC_ID_INVALID) {
+		rc = EINVAL;
+		goto fail4;
+	}
+
+	spec->ema_rsrc.emar_counter_id.id = counter_idp->id;
+
+	return (0);
+
 fail4:
 	EFSYS_PROBE(fail4);
 fail3:
diff --git a/drivers/common/sfc_efx/version.map b/drivers/common/sfc_efx/version.map
index 622f5d4cf5..0c5bcdfa84 100644
--- a/drivers/common/sfc_efx/version.map
+++ b/drivers/common/sfc_efx/version.map
@@ -89,8 +89,11 @@ INTERNAL {
 	efx_mae_action_rule_insert;
 	efx_mae_action_rule_remove;
 	efx_mae_action_set_alloc;
+	efx_mae_action_set_fill_in_counter_id;
 	efx_mae_action_set_fill_in_eh_id;
 	efx_mae_action_set_free;
+	efx_mae_action_set_get_nb_count;
+	efx_mae_action_set_populate_count;
 	efx_mae_action_set_populate_decap;
 	efx_mae_action_set_populate_deliver;
 	efx_mae_action_set_populate_drop;
-- 
2.30.2


^ permalink raw reply	[flat|nested] 104+ messages in thread

* [dpdk-dev] [PATCH v2 16/20] net/sfc: add Rx datapath method to get pushed buffers count
  2021-06-04 14:23 ` [dpdk-dev] [PATCH v2 00/20] net/sfc: support flow API COUNT action Andrew Rybchenko
                     ` (14 preceding siblings ...)
  2021-06-04 14:24   ` [dpdk-dev] [PATCH v2 15/20] common/sfc_efx/base: support counter in action set Andrew Rybchenko
@ 2021-06-04 14:24   ` Andrew Rybchenko
  2021-06-04 14:24   ` [dpdk-dev] [PATCH v2 17/20] common/sfc_efx/base: add max MAE counters to limits Andrew Rybchenko
                     ` (4 subsequent siblings)
  20 siblings, 0 replies; 104+ messages in thread
From: Andrew Rybchenko @ 2021-06-04 14:24 UTC (permalink / raw)
  To: dev; +Cc: Igor Romanov, Andy Moreton, Ivan Malov

From: Igor Romanov <igor.romanov@oktetlabs.ru>

The information about the number of pushed Rx buffers is required
for the counter Rx queue to know when to give credits to the
counter stream.
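
For example, a consumer of this callback might compute the number of
newly pushed buffers as in the sketch below (illustrative only;
"last_pushed" and "refill_level" are placeholder names, not driver
fields). The returned value is a running, unbounded count, so plain
unsigned subtraction stays correct across wraparound:

	unsigned int pushed = sfc_rx_get_pushed(sa, dp_rxq);
	unsigned int new_buffers = pushed - last_pushed; /* wraps safely */

	if (new_buffers >= refill_level) {
		/* grant "new_buffers" credits to the counter stream */
		last_pushed = pushed;
	}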

Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
Reviewed-by: Ivan Malov <ivan.malov@oktetlabs.ru>
---
 drivers/net/sfc/sfc_dp_rx.h    |  4 ++++
 drivers/net/sfc/sfc_ef100_rx.c | 15 +++++++++++++++
 drivers/net/sfc/sfc_rx.c       |  9 +++++++++
 drivers/net/sfc/sfc_rx.h       |  3 +++
 4 files changed, 31 insertions(+)

diff --git a/drivers/net/sfc/sfc_dp_rx.h b/drivers/net/sfc/sfc_dp_rx.h
index 3f6857b1ff..b6c44085ce 100644
--- a/drivers/net/sfc/sfc_dp_rx.h
+++ b/drivers/net/sfc/sfc_dp_rx.h
@@ -204,6 +204,9 @@ typedef int (sfc_dp_rx_intr_enable_t)(struct sfc_dp_rxq *dp_rxq);
 /** Disable Rx interrupts */
 typedef int (sfc_dp_rx_intr_disable_t)(struct sfc_dp_rxq *dp_rxq);
 
+/** Get number of pushed Rx buffers */
+typedef unsigned int (sfc_dp_rx_get_pushed_t)(struct sfc_dp_rxq *dp_rxq);
+
 /** Receive datapath definition */
 struct sfc_dp_rx {
 	struct sfc_dp				dp;
@@ -238,6 +241,7 @@ struct sfc_dp_rx {
 	sfc_dp_rx_qdesc_status_t		*qdesc_status;
 	sfc_dp_rx_intr_enable_t			*intr_enable;
 	sfc_dp_rx_intr_disable_t		*intr_disable;
+	sfc_dp_rx_get_pushed_t			*get_pushed;
 	eth_rx_burst_t				pkt_burst;
 };
 
diff --git a/drivers/net/sfc/sfc_ef100_rx.c b/drivers/net/sfc/sfc_ef100_rx.c
index 8cde24c585..7447f8b9de 100644
--- a/drivers/net/sfc/sfc_ef100_rx.c
+++ b/drivers/net/sfc/sfc_ef100_rx.c
@@ -892,6 +892,20 @@ sfc_ef100_rx_intr_disable(struct sfc_dp_rxq *dp_rxq)
 	return 0;
 }
 
+static sfc_dp_rx_get_pushed_t sfc_ef100_rx_get_pushed;
+static unsigned int
+sfc_ef100_rx_get_pushed(struct sfc_dp_rxq *dp_rxq)
+{
+	struct sfc_ef100_rxq *rxq = sfc_ef100_rxq_by_dp_rxq(dp_rxq);
+
+	/*
+	 * The datapath keeps track only of added descriptors, since
+	 * the number of pushed descriptors always equals the number
+	 * of added descriptors due to enforced alignment.
+	 */
+	return rxq->added;
+}
+
 struct sfc_dp_rx sfc_ef100_rx = {
 	.dp = {
 		.name		= SFC_KVARG_DATAPATH_EF100,
@@ -919,5 +933,6 @@ struct sfc_dp_rx sfc_ef100_rx = {
 	.qdesc_status		= sfc_ef100_rx_qdesc_status,
 	.intr_enable		= sfc_ef100_rx_intr_enable,
 	.intr_disable		= sfc_ef100_rx_intr_disable,
+	.get_pushed		= sfc_ef100_rx_get_pushed,
 	.pkt_burst		= sfc_ef100_recv_pkts,
 };
diff --git a/drivers/net/sfc/sfc_rx.c b/drivers/net/sfc/sfc_rx.c
index 0532f77082..f6a8ac68e8 100644
--- a/drivers/net/sfc/sfc_rx.c
+++ b/drivers/net/sfc/sfc_rx.c
@@ -53,6 +53,15 @@ sfc_rx_qflush_failed(struct sfc_rxq_info *rxq_info)
 	rxq_info->state &= ~SFC_RXQ_FLUSHING;
 }
 
+/* This returns the running counter, which is not bounded by ring size */
+unsigned int
+sfc_rx_get_pushed(struct sfc_adapter *sa, struct sfc_dp_rxq *dp_rxq)
+{
+	SFC_ASSERT(sa->priv.dp_rx->get_pushed != NULL);
+
+	return sa->priv.dp_rx->get_pushed(dp_rxq);
+}
+
 static int
 sfc_efx_rx_qprime(struct sfc_efx_rxq *rxq)
 {
diff --git a/drivers/net/sfc/sfc_rx.h b/drivers/net/sfc/sfc_rx.h
index e5a6fde79b..4ab513915e 100644
--- a/drivers/net/sfc/sfc_rx.h
+++ b/drivers/net/sfc/sfc_rx.h
@@ -145,6 +145,9 @@ uint64_t sfc_rx_get_queue_offload_caps(struct sfc_adapter *sa);
 void sfc_rx_qflush_done(struct sfc_rxq_info *rxq_info);
 void sfc_rx_qflush_failed(struct sfc_rxq_info *rxq_info);
 
+unsigned int sfc_rx_get_pushed(struct sfc_adapter *sa,
+			       struct sfc_dp_rxq *dp_rxq);
+
 int sfc_rx_hash_init(struct sfc_adapter *sa);
 void sfc_rx_hash_fini(struct sfc_adapter *sa);
 int sfc_rx_hf_rte_to_efx(struct sfc_adapter *sa, uint64_t rte,
-- 
2.30.2


^ permalink raw reply	[flat|nested] 104+ messages in thread

* [dpdk-dev] [PATCH v2 17/20] common/sfc_efx/base: add max MAE counters to limits
  2021-06-04 14:23 ` [dpdk-dev] [PATCH v2 00/20] net/sfc: support flow API COUNT action Andrew Rybchenko
                     ` (15 preceding siblings ...)
  2021-06-04 14:24   ` [dpdk-dev] [PATCH v2 16/20] net/sfc: add Rx datapath method to get pushed buffers count Andrew Rybchenko
@ 2021-06-04 14:24   ` Andrew Rybchenko
  2021-06-04 14:24   ` [dpdk-dev] [PATCH v2 18/20] common/sfc_efx/base: add packetiser packet format definition Andrew Rybchenko
                     ` (3 subsequent siblings)
  20 siblings, 0 replies; 104+ messages in thread
From: Andrew Rybchenko @ 2021-06-04 14:24 UTC (permalink / raw)
  To: dev; +Cc: Igor Romanov, Andy Moreton, Ivan Malov

From: Igor Romanov <igor.romanov@oktetlabs.ru>

The information about the maximum number of MAE counters is
crucial to the counter support in the driver.
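
With this in place, the driver can size its counter registry from the
reported limit, roughly as in the sketch below (error handling
omitted, exact call site simplified):

	efx_mae_limits_t limits;

	rc = efx_mae_get_limits(sa->nic, &limits);
	if (rc == 0)
		nb_counters_max = limits.eml_max_n_counters;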

Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
Reviewed-by: Ivan Malov <ivan.malov@oktetlabs.ru>
---
 drivers/common/sfc_efx/base/efx.h     | 1 +
 drivers/common/sfc_efx/base/efx_mae.c | 1 +
 2 files changed, 2 insertions(+)

diff --git a/drivers/common/sfc_efx/base/efx.h b/drivers/common/sfc_efx/base/efx.h
index 628e61e065..b2301b845a 100644
--- a/drivers/common/sfc_efx/base/efx.h
+++ b/drivers/common/sfc_efx/base/efx.h
@@ -4093,6 +4093,7 @@ typedef struct efx_mae_limits_s {
 	uint32_t			eml_max_n_outer_prios;
 	uint32_t			eml_encap_types_supported;
 	uint32_t			eml_encap_header_size_limit;
+	uint32_t			eml_max_n_counters;
 } efx_mae_limits_t;
 
 LIBEFX_API
diff --git a/drivers/common/sfc_efx/base/efx_mae.c b/drivers/common/sfc_efx/base/efx_mae.c
index 8d1294a627..5a320dcda6 100644
--- a/drivers/common/sfc_efx/base/efx_mae.c
+++ b/drivers/common/sfc_efx/base/efx_mae.c
@@ -374,6 +374,7 @@ efx_mae_get_limits(
 	emlp->eml_encap_types_supported = maep->em_encap_types_supported;
 	emlp->eml_encap_header_size_limit =
 	    MC_CMD_MAE_ENCAP_HEADER_ALLOC_IN_HDR_DATA_MAXNUM_MCDI2;
+	emlp->eml_max_n_counters = maep->em_max_ncounters;
 
 	return (0);
 
-- 
2.30.2


^ permalink raw reply	[flat|nested] 104+ messages in thread

* [dpdk-dev] [PATCH v2 18/20] common/sfc_efx/base: add packetiser packet format definition
  2021-06-04 14:23 ` [dpdk-dev] [PATCH v2 00/20] net/sfc: support flow API COUNT action Andrew Rybchenko
                     ` (16 preceding siblings ...)
  2021-06-04 14:24   ` [dpdk-dev] [PATCH v2 17/20] common/sfc_efx/base: add max MAE counters to limits Andrew Rybchenko
@ 2021-06-04 14:24   ` Andrew Rybchenko
  2021-06-04 14:24   ` [dpdk-dev] [PATCH v2 19/20] net/sfc: support flow action COUNT in transfer rules Andrew Rybchenko
                     ` (2 subsequent siblings)
  20 siblings, 0 replies; 104+ messages in thread
From: Andrew Rybchenko @ 2021-06-04 14:24 UTC (permalink / raw)
  To: dev; +Cc: Andy Moreton

The packetiser composes packets that carry MAE counter updates.
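
The 48-bit packet and byte counters in the payload word are split into
LO and HI fields; a reader of this format might reassemble the packet
count roughly as below ("payload" stands for one efx_oword_t payload
word; 32-bit accessors are used since the payload is only 32-bit
aligned):

	uint32_t lo, hi;
	uint64_t pkts;

	lo = EFX_OWORD_FIELD32(payload,
		ERF_SC_PACKETISER_PAYLOAD_PACKET_COUNT_LO);
	hi = EFX_OWORD_FIELD32(payload,
		ERF_SC_PACKETISER_PAYLOAD_PACKET_COUNT_HI);
	pkts = (uint64_t)lo |
	       ((uint64_t)hi <<
		ERF_SC_PACKETISER_PAYLOAD_PACKET_COUNT_LO_WIDTH);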

Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
---
 .../base/efx_regs_counters_pkt_format.h       | 87 +++++++++++++++++++
 1 file changed, 87 insertions(+)
 create mode 100644 drivers/common/sfc_efx/base/efx_regs_counters_pkt_format.h

diff --git a/drivers/common/sfc_efx/base/efx_regs_counters_pkt_format.h b/drivers/common/sfc_efx/base/efx_regs_counters_pkt_format.h
new file mode 100644
index 0000000000..6610d07dc0
--- /dev/null
+++ b/drivers/common/sfc_efx/base/efx_regs_counters_pkt_format.h
@@ -0,0 +1,87 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ *
+ * Copyright(c) 2020-2021 Xilinx, Inc.
+ */
+
+#ifndef	_SYS_EFX_REGS_COUNTERS_PKT_FORMAT_H
+#define	_SYS_EFX_REGS_COUNTERS_PKT_FORMAT_H
+
+/*
+ * Packetiser packet format definition.
+ * SF-122415-TC - OVS Counter Design Specification section 7
+ * Primary copy of the header is located in the smartnic_registry repo:
+ * src/ovs_counter/packetiser_packet_format.h
+ */
+
+/*------------------------------------------------------------*/
+/*
+ * ER_RX_SL_PACKETISER_HEADER_WORD(160bit):
+ *
+ */
+#define	ER_RX_SL_PACKETISER_HEADER_WORD_SIZE 20
+
+#define	ERF_SC_PACKETISER_HEADER_VERSION_LBN 0
+#define	ERF_SC_PACKETISER_HEADER_VERSION_WIDTH 8
+/* Deprecated, use ERF_SC_PACKETISER_HEADER_VERSION_2 instead */
+#define	ERF_SC_PACKETISER_HEADER_VERSION_VALUE 2
+#define	ERF_SC_PACKETISER_HEADER_VERSION_2 2
+#define	ERF_SC_PACKETISER_HEADER_IDENTIFIER_LBN 8
+#define	ERF_SC_PACKETISER_HEADER_IDENTIFIER_WIDTH 8
+#define	ERF_SC_PACKETISER_HEADER_IDENTIFIER_AR 0
+#define	ERF_SC_PACKETISER_HEADER_IDENTIFIER_CT 1
+#define	ERF_SC_PACKETISER_HEADER_HEADER_OFFSET_LBN 16
+#define	ERF_SC_PACKETISER_HEADER_HEADER_OFFSET_WIDTH 8
+#define	ERF_SC_PACKETISER_HEADER_HEADER_OFFSET_DEFAULT 0x4
+#define	ERF_SC_PACKETISER_HEADER_PAYLOAD_OFFSET_LBN 24
+#define	ERF_SC_PACKETISER_HEADER_PAYLOAD_OFFSET_WIDTH 8
+#define	ERF_SC_PACKETISER_HEADER_PAYLOAD_OFFSET_DEFAULT 0x14
+#define	ERF_SC_PACKETISER_HEADER_INDEX_LBN 32
+#define	ERF_SC_PACKETISER_HEADER_INDEX_WIDTH 16
+#define	ERF_SC_PACKETISER_HEADER_COUNT_LBN 48
+#define	ERF_SC_PACKETISER_HEADER_COUNT_WIDTH 16
+#define	ERF_SC_PACKETISER_HEADER_RESERVED_0_LBN 64
+#define	ERF_SC_PACKETISER_HEADER_RESERVED_0_WIDTH 32
+#define	ERF_SC_PACKETISER_HEADER_RESERVED_1_LBN 96
+#define	ERF_SC_PACKETISER_HEADER_RESERVED_1_WIDTH 32
+#define	ERF_SC_PACKETISER_HEADER_RESERVED_2_LBN 128
+#define	ERF_SC_PACKETISER_HEADER_RESERVED_2_WIDTH 32
+
+
+/*------------------------------------------------------------*/
+/*
+ * ER_RX_SL_PACKETISER_PAYLOAD_WORD(128bit):
+ *
+ */
+#define	ER_RX_SL_PACKETISER_PAYLOAD_WORD_SIZE 16
+
+#define	ERF_SC_PACKETISER_PAYLOAD_COUNTER_INDEX_LBN 0
+#define	ERF_SC_PACKETISER_PAYLOAD_COUNTER_INDEX_WIDTH 24
+#define	ERF_SC_PACKETISER_PAYLOAD_RESERVED_LBN 24
+#define	ERF_SC_PACKETISER_PAYLOAD_RESERVED_WIDTH 8
+#define	ERF_SC_PACKETISER_PAYLOAD_PACKET_COUNT_OFST 4
+#define	ERF_SC_PACKETISER_PAYLOAD_PACKET_COUNT_SIZE 6
+#define	ERF_SC_PACKETISER_PAYLOAD_PACKET_COUNT_LBN 32
+#define	ERF_SC_PACKETISER_PAYLOAD_PACKET_COUNT_WIDTH 48
+#define	ERF_SC_PACKETISER_PAYLOAD_PACKET_COUNT_LO_OFST 4
+#define	ERF_SC_PACKETISER_PAYLOAD_PACKET_COUNT_LO_SIZE 4
+#define	ERF_SC_PACKETISER_PAYLOAD_PACKET_COUNT_LO_LBN 32
+#define	ERF_SC_PACKETISER_PAYLOAD_PACKET_COUNT_LO_WIDTH 32
+#define	ERF_SC_PACKETISER_PAYLOAD_PACKET_COUNT_HI_OFST 8
+#define	ERF_SC_PACKETISER_PAYLOAD_PACKET_COUNT_HI_SIZE 2
+#define	ERF_SC_PACKETISER_PAYLOAD_PACKET_COUNT_HI_LBN 64
+#define	ERF_SC_PACKETISER_PAYLOAD_PACKET_COUNT_HI_WIDTH 16
+#define	ERF_SC_PACKETISER_PAYLOAD_BYTE_COUNT_OFST 10
+#define	ERF_SC_PACKETISER_PAYLOAD_BYTE_COUNT_SIZE 6
+#define	ERF_SC_PACKETISER_PAYLOAD_BYTE_COUNT_LBN 80
+#define	ERF_SC_PACKETISER_PAYLOAD_BYTE_COUNT_WIDTH 48
+#define	ERF_SC_PACKETISER_PAYLOAD_BYTE_COUNT_LO_OFST 10
+#define	ERF_SC_PACKETISER_PAYLOAD_BYTE_COUNT_LO_SIZE 2
+#define	ERF_SC_PACKETISER_PAYLOAD_BYTE_COUNT_LO_LBN 80
+#define	ERF_SC_PACKETISER_PAYLOAD_BYTE_COUNT_LO_WIDTH 16
+#define	ERF_SC_PACKETISER_PAYLOAD_BYTE_COUNT_HI_OFST 12
+#define	ERF_SC_PACKETISER_PAYLOAD_BYTE_COUNT_HI_SIZE 4
+#define	ERF_SC_PACKETISER_PAYLOAD_BYTE_COUNT_HI_LBN 96
+#define	ERF_SC_PACKETISER_PAYLOAD_BYTE_COUNT_HI_WIDTH 32
+
+
+#endif /* _SYS_EFX_REGS_COUNTERS_PKT_FORMAT_H */
-- 
2.30.2


^ permalink raw reply	[flat|nested] 104+ messages in thread

* [dpdk-dev] [PATCH v2 19/20] net/sfc: support flow action COUNT in transfer rules
  2021-06-04 14:23 ` [dpdk-dev] [PATCH v2 00/20] net/sfc: support flow API COUNT action Andrew Rybchenko
                     ` (17 preceding siblings ...)
  2021-06-04 14:24   ` [dpdk-dev] [PATCH v2 18/20] common/sfc_efx/base: add packetiser packet format definition Andrew Rybchenko
@ 2021-06-04 14:24   ` Andrew Rybchenko
  2021-06-04 14:24   ` [dpdk-dev] [PATCH v2 20/20] net/sfc: support flow API query for count actions Andrew Rybchenko
  2021-06-17  8:37   ` [dpdk-dev] [PATCH v2 00/20] net/sfc: support flow API COUNT action David Marchand
  20 siblings, 0 replies; 104+ messages in thread
From: Andrew Rybchenko @ 2021-06-04 14:24 UTC (permalink / raw)
  To: dev; +Cc: Igor Romanov, Andy Moreton, Ivan Malov

From: Igor Romanov <igor.romanov@oktetlabs.ru>

For now, a rule may have only one dedicated counter; shared counters
are not supported.

HW delivers (or "streams") counter readings using special packets.
The driver creates a dedicated Rx queue to receive such packets
and requests that HW start "streaming" the readings to it.

The counter queue is polled periodically, and the first available
service core is used for that. Hence, the user has to specify at least
one service core for counters to work. Such a core is shared by all
MAE-capable devices managed by the sfc driver.
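
A minimal sketch of how an application might make a service core
available before inserting such rules, using <rte_service.h> (lcore 3
is an arbitrary example and must belong to the EAL core set;
alternatively, service cores can be supplied via the EAL service core
options):

	if (rte_service_lcore_count() == 0) {
		/* register lcore 3 as a service core and start it */
		if (rte_service_lcore_add(3) == 0)
			rte_service_lcore_start(3);
	}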

Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
Reviewed-by: Ivan Malov <ivan.malov@oktetlabs.ru>
---
 doc/guides/nics/sfc_efx.rst            |   2 +
 doc/guides/rel_notes/release_21_08.rst |   6 +
 drivers/net/sfc/meson.build            |  10 +
 drivers/net/sfc/sfc_flow.c             |   7 +
 drivers/net/sfc/sfc_mae.c              | 231 +++++++++-
 drivers/net/sfc/sfc_mae.h              |  60 +++
 drivers/net/sfc/sfc_mae_counter.c      | 578 +++++++++++++++++++++++++
 drivers/net/sfc/sfc_mae_counter.h      |  11 +
 drivers/net/sfc/sfc_stats.h            |  80 ++++
 drivers/net/sfc/sfc_tweak.h            |   9 +
 10 files changed, 989 insertions(+), 5 deletions(-)
 create mode 100644 drivers/net/sfc/sfc_stats.h

diff --git a/doc/guides/nics/sfc_efx.rst b/doc/guides/nics/sfc_efx.rst
index cf1269cc03..bd08118da7 100644
--- a/doc/guides/nics/sfc_efx.rst
+++ b/doc/guides/nics/sfc_efx.rst
@@ -240,6 +240,8 @@ Supported actions (***transfer*** rules):
 
 - PORT_ID
 
+- COUNT
+
 - DROP
 
 Validating flow rules depends on the firmware variant.
diff --git a/doc/guides/rel_notes/release_21_08.rst b/doc/guides/rel_notes/release_21_08.rst
index a6ecfdf3ce..75688304da 100644
--- a/doc/guides/rel_notes/release_21_08.rst
+++ b/doc/guides/rel_notes/release_21_08.rst
@@ -55,6 +55,12 @@ New Features
      Also, make sure to start the actual text at the margin.
      =======================================================
 
+* **Updated Solarflare network PMD.**
+
+  Updated the Solarflare ``sfc_efx`` driver with changes including:
+
+  * Added COUNT action support for SN1000 NICs
+
 
 Removed Items
 -------------
diff --git a/drivers/net/sfc/meson.build b/drivers/net/sfc/meson.build
index f8880f740a..32b58e3d76 100644
--- a/drivers/net/sfc/meson.build
+++ b/drivers/net/sfc/meson.build
@@ -39,6 +39,16 @@ foreach flag: extra_flags
     endif
 endforeach
 
+# for clang 32-bit compiles we need libatomic for 64-bit atomic ops
+if cc.get_id() == 'clang' and dpdk_conf.get('RTE_ARCH_64') == false
+    ext_deps += cc.find_library('atomic')
+endif
+
+# for gcc compiles we need -latomic for 128-bit atomic ops
+if cc.get_id() == 'gcc'
+    ext_deps += cc.find_library('atomic')
+endif
+
 deps += ['common_sfc_efx', 'bus_pci']
 sources = files(
         'sfc_ethdev.c',
diff --git a/drivers/net/sfc/sfc_flow.c b/drivers/net/sfc/sfc_flow.c
index 2db8af1759..1294dbd3a7 100644
--- a/drivers/net/sfc/sfc_flow.c
+++ b/drivers/net/sfc/sfc_flow.c
@@ -24,6 +24,7 @@
 #include "sfc_flow.h"
 #include "sfc_log.h"
 #include "sfc_dp_rx.h"
+#include "sfc_mae_counter.h"
 
 struct sfc_flow_ops_by_spec {
 	sfc_flow_parse_cb_t	*parse;
@@ -2854,6 +2855,12 @@ sfc_flow_stop(struct sfc_adapter *sa)
 		efx_rx_scale_context_free(sa->nic, rss->dummy_rss_context);
 		rss->dummy_rss_context = EFX_RSS_CONTEXT_DEFAULT;
 	}
+
+	/*
+	 * MAE counter service is not stopped on flow rule remove to avoid
+	 * extra work. Make sure that it is stopped here.
+	 */
+	sfc_mae_counter_stop(sa);
 }
 
 int
diff --git a/drivers/net/sfc/sfc_mae.c b/drivers/net/sfc/sfc_mae.c
index e603ffbdb4..370a39da1d 100644
--- a/drivers/net/sfc/sfc_mae.c
+++ b/drivers/net/sfc/sfc_mae.c
@@ -19,6 +19,7 @@
 #include "sfc_mae_counter.h"
 #include "sfc_log.h"
 #include "sfc_switch.h"
+#include "sfc_service.h"
 
 static int
 sfc_mae_assign_entity_mport(struct sfc_adapter *sa,
@@ -30,6 +31,19 @@ sfc_mae_assign_entity_mport(struct sfc_adapter *sa,
 					      mportp);
 }
 
+static int
+sfc_mae_counter_registry_init(struct sfc_mae_counter_registry *registry,
+			      uint32_t nb_counters_max)
+{
+	return sfc_mae_counters_init(&registry->counters, nb_counters_max);
+}
+
+static void
+sfc_mae_counter_registry_fini(struct sfc_mae_counter_registry *registry)
+{
+	sfc_mae_counters_fini(&registry->counters);
+}
+
 int
 sfc_mae_attach(struct sfc_adapter *sa)
 {
@@ -59,6 +73,15 @@ sfc_mae_attach(struct sfc_adapter *sa)
 	if (rc != 0)
 		goto fail_mae_get_limits;
 
+	sfc_log_init(sa, "init MAE counter registry");
+	rc = sfc_mae_counter_registry_init(&mae->counter_registry,
+					   limits.eml_max_n_counters);
+	if (rc != 0) {
+		sfc_err(sa, "failed to init MAE counters registry for %u entries: %s",
+			limits.eml_max_n_counters, rte_strerror(rc));
+		goto fail_counter_registry_init;
+	}
+
 	sfc_log_init(sa, "assign entity MPORT");
 	rc = sfc_mae_assign_entity_mport(sa, &entity_mport);
 	if (rc != 0)
@@ -107,6 +130,9 @@ sfc_mae_attach(struct sfc_adapter *sa)
 fail_mae_assign_switch_port:
 fail_mae_assign_switch_domain:
 fail_mae_assign_entity_mport:
+	sfc_mae_counter_registry_fini(&mae->counter_registry);
+
+fail_counter_registry_init:
 fail_mae_get_limits:
 	efx_mae_fini(sa->nic);
 
@@ -131,6 +157,7 @@ sfc_mae_detach(struct sfc_adapter *sa)
 		return;
 
 	rte_free(mae->bounce_eh.buf);
+	sfc_mae_counter_registry_fini(&mae->counter_registry);
 
 	efx_mae_fini(sa->nic);
 
@@ -480,9 +507,72 @@ sfc_mae_encap_header_disable(struct sfc_adapter *sa,
 	--(fw_rsrc->refcnt);
 }
 
+static int
+sfc_mae_counters_enable(struct sfc_adapter *sa,
+			struct sfc_mae_counter_id *counters,
+			unsigned int n_counters,
+			efx_mae_actions_t *action_set_spec)
+{
+	int rc;
+
+	sfc_log_init(sa, "entry");
+
+	if (n_counters == 0) {
+		sfc_log_init(sa, "no counters - skip");
+		return 0;
+	}
+
+	SFC_ASSERT(sfc_adapter_is_locked(sa));
+	SFC_ASSERT(n_counters == 1);
+
+	rc = sfc_mae_counter_enable(sa, &counters[0]);
+	if (rc != 0) {
+		sfc_err(sa, "failed to enable MAE counter %u: %s",
+			counters[0].mae_id.id, rte_strerror(rc));
+		goto fail_counter_add;
+	}
+
+	rc = efx_mae_action_set_fill_in_counter_id(action_set_spec,
+						   &counters[0].mae_id);
+	if (rc != 0) {
+		sfc_err(sa, "failed to fill in MAE counter %u in action set: %s",
+			counters[0].mae_id.id, rte_strerror(rc));
+		goto fail_fill_in_id;
+	}
+
+	return 0;
+
+fail_fill_in_id:
+	(void)sfc_mae_counter_disable(sa, &counters[0]);
+
+fail_counter_add:
+	sfc_log_init(sa, "failed: %s", rte_strerror(rc));
+	return rc;
+}
+
+static int
+sfc_mae_counters_disable(struct sfc_adapter *sa,
+			 struct sfc_mae_counter_id *counters,
+			 unsigned int n_counters)
+{
+	if (n_counters == 0)
+		return 0;
+
+	SFC_ASSERT(sfc_adapter_is_locked(sa));
+	SFC_ASSERT(n_counters == 1);
+
+	if (counters[0].mae_id.id == EFX_MAE_RSRC_ID_INVALID) {
+		sfc_err(sa, "failed to disable: already disabled");
+		return EALREADY;
+	}
+
+	return sfc_mae_counter_disable(sa, &counters[0]);
+}
+
 static struct sfc_mae_action_set *
 sfc_mae_action_set_attach(struct sfc_adapter *sa,
 			  const struct sfc_mae_encap_header *encap_header,
+			  unsigned int n_count,
 			  const efx_mae_actions_t *spec)
 {
 	struct sfc_mae_action_set *action_set;
@@ -491,7 +581,12 @@ sfc_mae_action_set_attach(struct sfc_adapter *sa,
 	SFC_ASSERT(sfc_adapter_is_locked(sa));
 
 	TAILQ_FOREACH(action_set, &mae->action_sets, entries) {
+		/*
+		 * Shared counters are not supported, hence action sets with
+		 * COUNT are not attachable.
+		 */
 		if (action_set->encap_header == encap_header &&
+		    n_count == 0 &&
 		    efx_mae_action_set_specs_equal(action_set->spec, spec)) {
 			sfc_dbg(sa, "attaching to action_set=%p", action_set);
 			++(action_set->refcnt);
@@ -504,18 +599,52 @@ sfc_mae_action_set_attach(struct sfc_adapter *sa,
 
 static int
 sfc_mae_action_set_add(struct sfc_adapter *sa,
+		       const struct rte_flow_action actions[],
 		       efx_mae_actions_t *spec,
 		       struct sfc_mae_encap_header *encap_header,
+		       unsigned int n_counters,
 		       struct sfc_mae_action_set **action_setp)
 {
 	struct sfc_mae_action_set *action_set;
 	struct sfc_mae *mae = &sa->mae;
+	unsigned int i;
 
 	SFC_ASSERT(sfc_adapter_is_locked(sa));
 
 	action_set = rte_zmalloc("sfc_mae_action_set", sizeof(*action_set), 0);
-	if (action_set == NULL)
+	if (action_set == NULL) {
+		sfc_err(sa, "failed to alloc action set");
 		return ENOMEM;
+	}
+
+	if (n_counters > 0) {
+		const struct rte_flow_action *action;
+
+		action_set->counters = rte_malloc("sfc_mae_counter_ids",
+			sizeof(action_set->counters[0]) * n_counters, 0);
+		if (action_set->counters == NULL) {
+			rte_free(action_set);
+			sfc_err(sa, "failed to alloc counters");
+			return ENOMEM;
+		}
+
+		for (action = actions, i = 0;
+		     action->type != RTE_FLOW_ACTION_TYPE_END && i < n_counters;
+		     ++action) {
+			const struct rte_flow_action_count *conf;
+
+			if (action->type != RTE_FLOW_ACTION_TYPE_COUNT)
+				continue;
+
+			conf = action->conf;
+
+			action_set->counters[i].mae_id.id =
+				EFX_MAE_RSRC_ID_INVALID;
+			action_set->counters[i].rte_id = conf->id;
+			i++;
+		}
+		action_set->n_counters = n_counters;
+	}
 
 	action_set->refcnt = 1;
 	action_set->spec = spec;
@@ -555,6 +684,12 @@ sfc_mae_action_set_del(struct sfc_adapter *sa,
 
 	efx_mae_action_set_spec_fini(sa->nic, action_set->spec);
 	sfc_mae_encap_header_del(sa, action_set->encap_header);
+	if (action_set->n_counters > 0) {
+		SFC_ASSERT(action_set->n_counters == 1);
+		SFC_ASSERT(action_set->counters[0].mae_id.id ==
+			   EFX_MAE_RSRC_ID_INVALID);
+		rte_free(action_set->counters);
+	}
 	TAILQ_REMOVE(&mae->action_sets, action_set, entries);
 	rte_free(action_set);
 
@@ -566,6 +701,7 @@ sfc_mae_action_set_enable(struct sfc_adapter *sa,
 			  struct sfc_mae_action_set *action_set)
 {
 	struct sfc_mae_encap_header *encap_header = action_set->encap_header;
+	struct sfc_mae_counter_id *counters = action_set->counters;
 	struct sfc_mae_fw_rsrc *fw_rsrc = &action_set->fw_rsrc;
 	int rc;
 
@@ -580,14 +716,26 @@ sfc_mae_action_set_enable(struct sfc_adapter *sa,
 		if (rc != 0)
 			return rc;
 
-		rc = efx_mae_action_set_alloc(sa->nic, action_set->spec,
-					      &fw_rsrc->aset_id);
+		rc = sfc_mae_counters_enable(sa, counters,
+					     action_set->n_counters,
+					     action_set->spec);
 		if (rc != 0) {
+			sfc_err(sa, "failed to enable %u MAE counters: %s",
+				action_set->n_counters, rte_strerror(rc));
+
 			sfc_mae_encap_header_disable(sa, encap_header);
+			return rc;
+		}
 
+		rc = efx_mae_action_set_alloc(sa->nic, action_set->spec,
+					      &fw_rsrc->aset_id);
+		if (rc != 0) {
 			sfc_err(sa, "failed to enable action_set=%p: %s",
 				action_set, strerror(rc));
 
+			(void)sfc_mae_counters_disable(sa, counters,
+						       action_set->n_counters);
+			sfc_mae_encap_header_disable(sa, encap_header);
 			return rc;
 		}
 
@@ -627,6 +775,13 @@ sfc_mae_action_set_disable(struct sfc_adapter *sa,
 		}
 		fw_rsrc->aset_id.id = EFX_MAE_RSRC_ID_INVALID;
 
+		rc = sfc_mae_counters_disable(sa, action_set->counters,
+					      action_set->n_counters);
+		if (rc != 0) {
+			sfc_err(sa, "failed to disable %u MAE counters: %s",
+				action_set->n_counters, rte_strerror(rc));
+		}
+
 		sfc_mae_encap_header_disable(sa, action_set->encap_header);
 	}
 
@@ -2598,6 +2753,48 @@ sfc_mae_rule_parse_action_mark(const struct rte_flow_action_mark *conf,
 	return efx_mae_action_set_populate_mark(spec, conf->id);
 }
 
+static int
+sfc_mae_rule_parse_action_count(struct sfc_adapter *sa,
+				const struct rte_flow_action_count *conf,
+				efx_mae_actions_t *spec)
+{
+	int rc;
+
+	if (conf->shared) {
+		rc = ENOTSUP;
+		goto fail_counter_shared;
+	}
+
+	if ((sa->counter_rxq.state & SFC_COUNTER_RXQ_INITIALIZED) == 0) {
+		sfc_err(sa,
+			"counter queue is not configured for COUNT action");
+		rc = EINVAL;
+		goto fail_counter_queue_uninit;
+	}
+
+	if (sfc_get_service_lcore(SOCKET_ID_ANY) == RTE_MAX_LCORE) {
+		rc = EINVAL;
+		goto fail_no_service_core;
+	}
+
+	rc = efx_mae_action_set_populate_count(spec);
+	if (rc != 0) {
+		sfc_err(sa,
+			"failed to populate counters in MAE action set: %s",
+			rte_strerror(rc));
+		goto fail_populate_count;
+	}
+
+	return 0;
+
+fail_populate_count:
+fail_no_service_core:
+fail_counter_queue_uninit:
+fail_counter_shared:
+
+	return rc;
+}
+
 static int
 sfc_mae_rule_parse_action_phy_port(struct sfc_adapter *sa,
 				   const struct rte_flow_action_phy_port *conf,
@@ -2713,6 +2910,11 @@ sfc_mae_rule_parse_action(struct sfc_adapter *sa,
 							   spec, error);
 		custom_error = B_TRUE;
 		break;
+	case RTE_FLOW_ACTION_TYPE_COUNT:
+		SFC_BUILD_SET_OVERFLOW(RTE_FLOW_ACTION_TYPE_COUNT,
+				       bundle->actions_mask);
+		rc = sfc_mae_rule_parse_action_count(sa, action->conf, spec);
+		break;
 	case RTE_FLOW_ACTION_TYPE_FLAG:
 		SFC_BUILD_SET_OVERFLOW(RTE_FLOW_ACTION_TYPE_FLAG,
 				       bundle->actions_mask);
@@ -2798,6 +3000,7 @@ sfc_mae_rule_parse_actions(struct sfc_adapter *sa,
 	const struct rte_flow_action *action;
 	struct sfc_mae *mae = &sa->mae;
 	efx_mae_actions_t *spec;
+	unsigned int n_count;
 	int rc;
 
 	rte_errno = 0;
@@ -2835,15 +3038,22 @@ sfc_mae_rule_parse_actions(struct sfc_adapter *sa,
 	if (rc != 0)
 		goto fail_process_encap_header;
 
+	n_count = efx_mae_action_set_get_nb_count(spec);
+	if (n_count > 1) {
+		rc = ENOTSUP;
+		sfc_err(sa, "too many count actions requested: %u", n_count);
+		goto fail_nb_count;
+	}
+
 	spec_mae->action_set = sfc_mae_action_set_attach(sa, encap_header,
-							 spec);
+							 n_count, spec);
 	if (spec_mae->action_set != NULL) {
 		sfc_mae_encap_header_del(sa, encap_header);
 		efx_mae_action_set_spec_fini(sa->nic, spec);
 		return 0;
 	}
 
-	rc = sfc_mae_action_set_add(sa, spec, encap_header,
+	rc = sfc_mae_action_set_add(sa, actions, spec, encap_header, n_count,
 				    &spec_mae->action_set);
 	if (rc != 0)
 		goto fail_action_set_add;
@@ -2851,6 +3061,7 @@ sfc_mae_rule_parse_actions(struct sfc_adapter *sa,
 	return 0;
 
 fail_action_set_add:
+fail_nb_count:
 	sfc_mae_encap_header_del(sa, encap_header);
 
 fail_process_encap_header:
@@ -3005,6 +3216,15 @@ sfc_mae_flow_insert(struct sfc_adapter *sa,
 	if (rc != 0)
 		goto fail_action_set_enable;
 
+	if (action_set->n_counters > 0) {
+		rc = sfc_mae_counter_start(sa);
+		if (rc != 0) {
+			sfc_err(sa, "failed to start MAE counters support: %s",
+				rte_strerror(rc));
+			goto fail_mae_counter_start;
+		}
+	}
+
 	rc = efx_mae_action_rule_insert(sa->nic, spec_mae->match_spec,
 					NULL, &fw_rsrc->aset_id,
 					&spec_mae->rule_id);
@@ -3017,6 +3237,7 @@ sfc_mae_flow_insert(struct sfc_adapter *sa,
 	return 0;
 
 fail_action_rule_insert:
+fail_mae_counter_start:
 	sfc_mae_action_set_disable(sa, action_set);
 
 fail_action_set_enable:
diff --git a/drivers/net/sfc/sfc_mae.h b/drivers/net/sfc/sfc_mae.h
index 0241fe33c4..15fe5ebca5 100644
--- a/drivers/net/sfc/sfc_mae.h
+++ b/drivers/net/sfc/sfc_mae.h
@@ -16,6 +16,8 @@
 
 #include "efx.h"
 
+#include "sfc_stats.h"
+
 #ifdef __cplusplus
 extern "C" {
 #endif
@@ -54,10 +56,20 @@ struct sfc_mae_encap_header {
 
 TAILQ_HEAD(sfc_mae_encap_headers, sfc_mae_encap_header);
 
+/* Counter ID */
+struct sfc_mae_counter_id {
+	/* ID of a counter in MAE */
+	efx_counter_t			mae_id;
+	/* ID of a counter in RTE */
+	uint32_t			rte_id;
+};
+
 /** Action set registry entry */
 struct sfc_mae_action_set {
 	TAILQ_ENTRY(sfc_mae_action_set)	entries;
 	unsigned int			refcnt;
+	struct sfc_mae_counter_id	*counters;
+	uint32_t			n_counters;
 	efx_mae_actions_t		*spec;
 	struct sfc_mae_encap_header	*encap_header;
 	struct sfc_mae_fw_rsrc		fw_rsrc;
@@ -83,6 +95,50 @@ struct sfc_mae_bounce_eh {
 	efx_tunnel_protocol_t		type;
 };
 
+/** Counter collection entry */
+struct sfc_mae_counter {
+	bool				inuse;
+	uint32_t			generation_count;
+	union sfc_pkts_bytes		value;
+	union sfc_pkts_bytes		reset;
+};
+
+struct sfc_mae_counters_xstats {
+	uint64_t			not_inuse_update;
+	uint64_t			realloc_update;
+};
+
+struct sfc_mae_counters {
+	/** An array of all MAE counters */
+	struct sfc_mae_counter		*mae_counters;
+	/** Extra statistics for counters */
+	struct sfc_mae_counters_xstats	xstats;
+	/** Count of all MAE counters */
+	unsigned int			n_mae_counters;
+};
+
+struct sfc_mae_counter_registry {
+	/* Common counter information */
+	/** Counters collection */
+	struct sfc_mae_counters		counters;
+
+	/* Information used by counter update service */
+	/** Callback to get packets from RxQ */
+	eth_rx_burst_t			rx_pkt_burst;
+	/** Data for the callback to get packets */
+	struct sfc_dp_rxq		*rx_dp;
+	/** Number of buffers pushed to the RxQ */
+	unsigned int			pushed_n_buffers;
+	/** Are credits used by counter stream */
+	bool				use_credits;
+
+	/* Information used by configuration routines */
+	/** Counter service core ID */
+	uint32_t			service_core_id;
+	/** Counter service ID */
+	uint32_t			service_id;
+};
+
 struct sfc_mae {
 	/** Assigned switch domain identifier */
 	uint16_t			switch_domain_id;
@@ -104,6 +160,10 @@ struct sfc_mae {
 	struct sfc_mae_action_sets	action_sets;
 	/** Encap. header bounce buffer */
 	struct sfc_mae_bounce_eh	bounce_eh;
+	/** Flag indicating whether counter-only RxQ is running */
+	bool				counter_rxq_running;
+	/** Counter registry */
+	struct sfc_mae_counter_registry	counter_registry;
 };
 
 struct sfc_adapter;
diff --git a/drivers/net/sfc/sfc_mae_counter.c b/drivers/net/sfc/sfc_mae_counter.c
index c7646cf7b1..b0cb8157aa 100644
--- a/drivers/net/sfc/sfc_mae_counter.c
+++ b/drivers/net/sfc/sfc_mae_counter.c
@@ -4,8 +4,10 @@
  */
 
 #include <rte_common.h>
+#include <rte_service_component.h>
 
 #include "efx.h"
+#include "efx_regs_counters_pkt_format.h"
 
 #include "sfc_ev.h"
 #include "sfc.h"
@@ -49,6 +51,520 @@ sfc_mae_counter_rxq_required(struct sfc_adapter *sa)
 	return true;
 }
 
+int
+sfc_mae_counter_enable(struct sfc_adapter *sa,
+		       struct sfc_mae_counter_id *counterp)
+{
+	struct sfc_mae_counter_registry *reg = &sa->mae.counter_registry;
+	struct sfc_mae_counters *counters = &reg->counters;
+	struct sfc_mae_counter *p;
+	efx_counter_t mae_counter;
+	uint32_t generation_count;
+	uint32_t unused;
+	int rc;
+
+	/*
+	 * The actual count of counters allocated is ignored since a failure
+	 * to allocate a single counter is indicated by non-zero return code.
+	 */
+	rc = efx_mae_counters_alloc(sa->nic, 1, &unused, &mae_counter,
+				    &generation_count);
+	if (rc != 0) {
+		sfc_err(sa, "failed to alloc MAE counter: %s",
+			rte_strerror(rc));
+		goto fail_mae_counter_alloc;
+	}
+
+	if (mae_counter.id >= counters->n_mae_counters) {
+		/*
+		 * A counter ID is expected to be within the range from 0 to
+		 * the maximum count of counters, so that it always fits into
+		 * the array pre-allocated for the maximum number of counters.
+		 */
+		sfc_err(sa, "MAE counter ID is out of expected range");
+		rc = EFAULT;
+		goto fail_counter_id_range;
+	}
+
+	counterp->mae_id = mae_counter;
+
+	p = &counters->mae_counters[mae_counter.id];
+
+	/*
+	 * Ordering is relaxed since it is the only operation on counter value.
+	 * And it does not depend on different stores/loads in other threads.
+	 * Paired with relaxed ordering in counter increment.
+	 */
+	__atomic_store(&p->reset.pkts_bytes.int128,
+		       &p->value.pkts_bytes.int128, __ATOMIC_RELAXED);
+	p->generation_count = generation_count;
+
+	/*
+	 * The flag is set at the very end of add operation and reset
+	 * at the beginning of delete operation. Release ordering is
+	 * paired with acquire ordering on load in counter increment operation.
+	 */
+	__atomic_store_n(&p->inuse, true, __ATOMIC_RELEASE);
+
+	sfc_info(sa, "enabled MAE counter #%u with reset pkts=%" PRIu64
+		 " bytes=%" PRIu64, mae_counter.id,
+		 p->reset.pkts, p->reset.bytes);
+
+	return 0;
+
+fail_counter_id_range:
+	(void)efx_mae_counters_free(sa->nic, 1, &unused, &mae_counter, NULL);
+
+fail_mae_counter_alloc:
+	sfc_log_init(sa, "failed: %s", rte_strerror(rc));
+	return rc;
+}
+
+int
+sfc_mae_counter_disable(struct sfc_adapter *sa,
+			struct sfc_mae_counter_id *counter)
+{
+	struct sfc_mae_counter_registry *reg = &sa->mae.counter_registry;
+	struct sfc_mae_counters *counters = &reg->counters;
+	struct sfc_mae_counter *p;
+	uint32_t unused;
+	int rc;
+
+	if (counter->mae_id.id == EFX_MAE_RSRC_ID_INVALID)
+		return 0;
+
+	SFC_ASSERT(counter->mae_id.id < counters->n_mae_counters);
+	/*
+	 * The flag is set at the very end of add operation and reset
+	 * at the beginning of delete operation. Release ordering is
+	 * paired with acquire ordering on load in counter increment operation.
+	 */
+	p = &counters->mae_counters[counter->mae_id.id];
+	__atomic_store_n(&p->inuse, false, __ATOMIC_RELEASE);
+
+	rc = efx_mae_counters_free(sa->nic, 1, &unused, &counter->mae_id, NULL);
+	if (rc != 0)
+		sfc_err(sa, "failed to free MAE counter %u: %s",
+			counter->mae_id.id, rte_strerror(rc));
+
+	sfc_info(sa, "disabled MAE counter #%u with reset pkts=%" PRIu64
+		 " bytes=%" PRIu64, counter->mae_id.id,
+		 p->reset.pkts, p->reset.bytes);
+
+	/*
+	 * Do this regardless of what efx_mae_counters_free() return value is.
+	 * If there's some error, the resulting resource leakage is bad, but
+	 * nothing sensible can be done in this case.
+	 */
+	counter->mae_id.id = EFX_MAE_RSRC_ID_INVALID;
+
+	return rc;
+}
+
+static void
+sfc_mae_counter_increment(struct sfc_adapter *sa,
+			  struct sfc_mae_counters *counters,
+			  uint32_t mae_counter_id,
+			  uint32_t generation_count,
+			  uint64_t pkts, uint64_t bytes)
+{
+	struct sfc_mae_counter *p = &counters->mae_counters[mae_counter_id];
+	struct sfc_mae_counters_xstats *xstats = &counters->xstats;
+	union sfc_pkts_bytes cnt_val;
+	bool inuse;
+
+	/*
+	 * Acquire ordering is paired with release ordering in counter add
+	 * and delete operations.
+	 */
+	__atomic_load(&p->inuse, &inuse, __ATOMIC_ACQUIRE);
+	if (!inuse) {
+		/*
+		 * Two possible cases include:
+		 * 1) Counter is just allocated. Too early counter update
+		 *    cannot be processed properly.
+		 * 2) Stale update of freed and not reallocated counter.
+		 *    There is no point in processing that update.
+		 */
+		xstats->not_inuse_update++;
+		return;
+	}
+
+	if (unlikely(generation_count < p->generation_count)) {
+		/*
+		 * It is a stale update for the reallocated counter
+		 * (i.e., freed and the same ID allocated again).
+		 */
+		xstats->realloc_update++;
+		return;
+	}
+
+	cnt_val.pkts = p->value.pkts + pkts;
+	cnt_val.bytes = p->value.bytes + bytes;
+
+	/*
+	 * Ordering is relaxed since it is the only operation on counter value.
+	 * And it does not depend on different stores/loads in other threads.
+	 * Paired with relaxed ordering on counter reset.
+	 */
+	__atomic_store(&p->value.pkts_bytes,
+		       &cnt_val.pkts_bytes, __ATOMIC_RELAXED);
+
+	sfc_info(sa, "update MAE counter #%u: pkts+%" PRIu64 "=%" PRIu64
+		 ", bytes+%" PRIu64 "=%" PRIu64, mae_counter_id,
+		 pkts, cnt_val.pkts, bytes, cnt_val.bytes);
+}
+
+static void
+sfc_mae_parse_counter_packet(struct sfc_adapter *sa,
+			     struct sfc_mae_counter_registry *counter_registry,
+			     const struct rte_mbuf *m)
+{
+	uint32_t generation_count;
+	const efx_xword_t *hdr;
+	const efx_oword_t *counters_data;
+	unsigned int version;
+	unsigned int id;
+	unsigned int header_offset;
+	unsigned int payload_offset;
+	unsigned int counter_count;
+	unsigned int required_len;
+	unsigned int i;
+
+	if (unlikely(m->nb_segs != 1)) {
+		sfc_err(sa, "unexpectedly scattered MAE counters packet (%u segments)",
+			m->nb_segs);
+		return;
+	}
+
+	if (unlikely(m->data_len < ER_RX_SL_PACKETISER_HEADER_WORD_SIZE)) {
+		sfc_err(sa, "too short MAE counters packet (%u bytes)",
+			m->data_len);
+		return;
+	}
+
+	/*
+	 * The generation count is located in the Rx prefix in the USER_MARK
+	 * field which is written into hash.fdir.hi field of an mbuf. See
+	 * SF-123581-TC SmartNIC Datapath Offloads section 4.7.5 Counters.
+	 */
+	generation_count = m->hash.fdir.hi;
+
+	hdr = rte_pktmbuf_mtod(m, const efx_xword_t *);
+
+	version = EFX_XWORD_FIELD(*hdr, ERF_SC_PACKETISER_HEADER_VERSION);
+	if (unlikely(version != ERF_SC_PACKETISER_HEADER_VERSION_2)) {
+		sfc_err(sa, "unexpected MAE counters packet version %u",
+			version);
+		return;
+	}
+
+	id = EFX_XWORD_FIELD(*hdr, ERF_SC_PACKETISER_HEADER_IDENTIFIER);
+	if (unlikely(id != ERF_SC_PACKETISER_HEADER_IDENTIFIER_AR)) {
+		sfc_err(sa, "unexpected MAE counters source identifier %u", id);
+		return;
+	}
+
+	/* Packet layout definitions assume fixed header offset in fact */
+	header_offset =
+		EFX_XWORD_FIELD(*hdr, ERF_SC_PACKETISER_HEADER_HEADER_OFFSET);
+	if (unlikely(header_offset !=
+		     ERF_SC_PACKETISER_HEADER_HEADER_OFFSET_DEFAULT)) {
+		sfc_err(sa, "unexpected MAE counters packet header offset %u",
+			header_offset);
+		return;
+	}
+
+	payload_offset =
+		EFX_XWORD_FIELD(*hdr, ERF_SC_PACKETISER_HEADER_PAYLOAD_OFFSET);
+
+	counter_count = EFX_XWORD_FIELD(*hdr, ERF_SC_PACKETISER_HEADER_COUNT);
+
+	required_len = payload_offset +
+			counter_count * sizeof(counters_data[0]);
+	if (unlikely(required_len > m->data_len)) {
+		sfc_err(sa, "truncated MAE counters packet: %u counters, packet length is %u vs %u required",
+			counter_count, m->data_len, required_len);
+		/*
+		 * In theory it is possible to process the available counters
+		 * data, but such a condition is really unexpected, and it is
+		 * better to treat the entire packet as corrupted.
+		 */
+		return;
+	}
+
+	/* Ensure that counters data is 32-bit aligned */
+	if (unlikely(payload_offset % sizeof(uint32_t) != 0)) {
+		sfc_err(sa, "unsupported MAE counters payload offset %u, must be 32-bit aligned",
+			payload_offset);
+		return;
+	}
+	RTE_BUILD_BUG_ON(sizeof(counters_data[0]) !=
+			ER_RX_SL_PACKETISER_PAYLOAD_WORD_SIZE);
+
+	counters_data =
+		rte_pktmbuf_mtod_offset(m, const efx_oword_t *, payload_offset);
+
+	sfc_info(sa, "update %u MAE counters with gc=%u",
+		 counter_count, generation_count);
+
+	for (i = 0; i < counter_count; ++i) {
+		uint32_t packet_count_lo;
+		uint32_t packet_count_hi;
+		uint32_t byte_count_lo;
+		uint32_t byte_count_hi;
+
+		/*
+		 * Use 32-bit field accessors below since counters data
+		 * is not 64-bit aligned.
+		 * 32-bit alignment is checked above taking into account
+		 * that start of packet data is 32-bit aligned
+		 * (cache-line size aligned in fact).
+		 */
+		packet_count_lo =
+			EFX_OWORD_FIELD32(counters_data[i],
+				ERF_SC_PACKETISER_PAYLOAD_PACKET_COUNT_LO);
+		packet_count_hi =
+			EFX_OWORD_FIELD32(counters_data[i],
+				ERF_SC_PACKETISER_PAYLOAD_PACKET_COUNT_HI);
+		byte_count_lo =
+			EFX_OWORD_FIELD32(counters_data[i],
+				ERF_SC_PACKETISER_PAYLOAD_BYTE_COUNT_LO);
+		byte_count_hi =
+			EFX_OWORD_FIELD32(counters_data[i],
+				ERF_SC_PACKETISER_PAYLOAD_BYTE_COUNT_HI);
+		sfc_mae_counter_increment(sa,
+			&counter_registry->counters,
+			EFX_OWORD_FIELD32(counters_data[i],
+				ERF_SC_PACKETISER_PAYLOAD_COUNTER_INDEX),
+			generation_count,
+			(uint64_t)packet_count_lo |
+			((uint64_t)packet_count_hi <<
+			 ERF_SC_PACKETISER_PAYLOAD_PACKET_COUNT_LO_WIDTH),
+			(uint64_t)byte_count_lo |
+			((uint64_t)byte_count_hi <<
+			 ERF_SC_PACKETISER_PAYLOAD_BYTE_COUNT_LO_WIDTH));
+	}
+}
+
+static int32_t
+sfc_mae_counter_routine(void *arg)
+{
+	struct sfc_adapter *sa = arg;
+	struct sfc_mae_counter_registry *counter_registry =
+		&sa->mae.counter_registry;
+	struct rte_mbuf *mbufs[SFC_MAE_COUNTER_RX_BURST];
+	unsigned int pushed_diff;
+	unsigned int pushed;
+	unsigned int i;
+	uint16_t n;
+	int rc;
+
+	n = counter_registry->rx_pkt_burst(counter_registry->rx_dp, mbufs,
+					   SFC_MAE_COUNTER_RX_BURST);
+
+	for (i = 0; i < n; i++)
+		sfc_mae_parse_counter_packet(sa, counter_registry, mbufs[i]);
+
+	rte_pktmbuf_free_bulk(mbufs, n);
+
+	if (!counter_registry->use_credits)
+		return 0;
+
+	pushed = sfc_rx_get_pushed(sa, counter_registry->rx_dp);
+	pushed_diff = pushed - counter_registry->pushed_n_buffers;
+
+	if (pushed_diff >= SFC_COUNTER_RXQ_REFILL_LEVEL) {
+		rc = efx_mae_counters_stream_give_credits(sa->nic, pushed_diff);
+		if (rc == 0) {
+			counter_registry->pushed_n_buffers = pushed;
+		} else {
+			/*
+			 * FIXME: counters might be important for the
+			 * application. Handle the error in order to recover
+			 * from the failure
+			 */
+			SFC_GENERIC_LOG(DEBUG, "Give credits failed: %s",
+					rte_strerror(rc));
+		}
+	}
+
+	return 0;
+}
+
+static void
+sfc_mae_counter_service_unregister(struct sfc_adapter *sa)
+{
+	struct sfc_mae_counter_registry *registry =
+		&sa->mae.counter_registry;
+	const unsigned int wait_ms = 10000;
+	unsigned int i;
+
+	rte_service_runstate_set(registry->service_id, 0);
+	rte_service_component_runstate_set(registry->service_id, 0);
+
+	/*
+	 * Wait for the counter routine to finish the last iteration.
+	 * Give up on timeout.
+	 */
+	for (i = 0; i < wait_ms; i++) {
+		if (rte_service_may_be_active(registry->service_id) == 0)
+			break;
+
+		rte_delay_ms(1);
+	}
+	if (i == wait_ms)
+		sfc_warn(sa, "failed to wait for counter service to stop");
+
+	rte_service_map_lcore_set(registry->service_id,
+				  registry->service_core_id, 0);
+
+	rte_service_component_unregister(registry->service_id);
+}
+
+static struct sfc_rxq_info *
+sfc_counter_rxq_info_get(struct sfc_adapter *sa)
+{
+	return &sfc_sa2shared(sa)->rxq_info[sa->counter_rxq.sw_index];
+}
+
+static int
+sfc_mae_counter_service_register(struct sfc_adapter *sa,
+				 uint32_t counter_stream_flags)
+{
+	struct rte_service_spec service;
+	char counter_service_name[sizeof(service.name)] = "counter_service";
+	struct sfc_mae_counter_registry *counter_registry =
+		&sa->mae.counter_registry;
+	uint32_t cid;
+	uint32_t sid;
+	int rc;
+
+	sfc_log_init(sa, "entry");
+
+	/* Prepare service info */
+	memset(&service, 0, sizeof(service));
+	rte_strscpy(service.name, counter_service_name, sizeof(service.name));
+	service.socket_id = sa->socket_id;
+	service.callback = sfc_mae_counter_routine;
+	service.callback_userdata = sa;
+	counter_registry->rx_pkt_burst = sa->eth_dev->rx_pkt_burst;
+	counter_registry->rx_dp = sfc_counter_rxq_info_get(sa)->dp;
+	counter_registry->pushed_n_buffers = 0;
+	counter_registry->use_credits = counter_stream_flags &
+		EFX_MAE_COUNTERS_STREAM_OUT_USES_CREDITS;
+
+	cid = sfc_get_service_lcore(sa->socket_id);
+	if (cid == RTE_MAX_LCORE && sa->socket_id != SOCKET_ID_ANY) {
+		/* Warn and try to allocate on any NUMA node */
+		sfc_warn(sa,
+			"failed to get service lcore for counter service at socket %d",
+			sa->socket_id);
+
+		cid = sfc_get_service_lcore(SOCKET_ID_ANY);
+	}
+	if (cid == RTE_MAX_LCORE) {
+		rc = ENOTSUP;
+		sfc_err(sa, "failed to get service lcore for counter service");
+		goto fail_get_service_lcore;
+	}
+
+	/* Service core may be in "stopped" state, start it */
+	rc = rte_service_lcore_start(cid);
+	if (rc != 0 && rc != -EALREADY) {
+		sfc_err(sa, "failed to start service core for counter service: %s",
+			rte_strerror(-rc));
+		rc = ENOTSUP;
+		goto fail_start_core;
+	}
+
+	/* Register counter service */
+	rc = rte_service_component_register(&service, &sid);
+	if (rc != 0) {
+		rc = ENOEXEC;
+		sfc_err(sa, "failed to register counter service component");
+		goto fail_register;
+	}
+
+	/* Map the service with the service core */
+	rc = rte_service_map_lcore_set(sid, cid, 1);
+	if (rc != 0) {
+		rc = -rc;
+		sfc_err(sa, "failed to map lcore for counter service: %s",
+			rte_strerror(rc));
+		goto fail_map_lcore;
+	}
+
+	/* Run the service */
+	rc = rte_service_component_runstate_set(sid, 1);
+	if (rc < 0) {
+		rc = -rc;
+		sfc_err(sa, "failed to run counter service component: %s",
+			rte_strerror(rc));
+		goto fail_component_runstate_set;
+	}
+	rc = rte_service_runstate_set(sid, 1);
+	if (rc < 0) {
+		rc = -rc;
+		sfc_err(sa, "failed to run counter service");
+		goto fail_runstate_set;
+	}
+
+	counter_registry->service_core_id = cid;
+	counter_registry->service_id = sid;
+
+	sfc_log_init(sa, "done");
+
+	return 0;
+
+fail_runstate_set:
+	rte_service_component_runstate_set(sid, 0);
+
+fail_component_runstate_set:
+	rte_service_map_lcore_set(sid, cid, 0);
+
+fail_map_lcore:
+	rte_service_component_unregister(sid);
+
+fail_register:
+fail_start_core:
+fail_get_service_lcore:
+	sfc_log_init(sa, "failed: %s", rte_strerror(rc));
+
+	return rc;
+}
+
+int
+sfc_mae_counters_init(struct sfc_mae_counters *counters,
+		      uint32_t nb_counters_max)
+{
+	int rc;
+
+	SFC_GENERIC_LOG(DEBUG, "%s: entry", __func__);
+
+	counters->mae_counters = rte_zmalloc("sfc_mae_counters",
+		sizeof(*counters->mae_counters) * nb_counters_max, 0);
+	if (counters->mae_counters == NULL) {
+		rc = ENOMEM;
+		SFC_GENERIC_LOG(ERR, "%s: failed: %s", __func__,
+				rte_strerror(rc));
+		return rc;
+	}
+
+	counters->n_mae_counters = nb_counters_max;
+
+	SFC_GENERIC_LOG(DEBUG, "%s: done", __func__);
+
+	return 0;
+}
+
+void
+sfc_mae_counters_fini(struct sfc_mae_counters *counters)
+{
+	rte_free(counters->mae_counters);
+	counters->mae_counters = NULL;
+}
+
 int
 sfc_mae_counter_rxq_attach(struct sfc_adapter *sa)
 {
@@ -215,3 +731,65 @@ sfc_mae_counter_rxq_fini(struct sfc_adapter *sa)
 
 	sfc_log_init(sa, "done");
 }
+
+void
+sfc_mae_counter_stop(struct sfc_adapter *sa)
+{
+	struct sfc_mae *mae = &sa->mae;
+
+	sfc_log_init(sa, "entry");
+
+	if (!mae->counter_rxq_running) {
+		sfc_log_init(sa, "counter queue is not running - skip");
+		return;
+	}
+
+	sfc_mae_counter_service_unregister(sa);
+	efx_mae_counters_stream_stop(sa->nic, sa->counter_rxq.sw_index, NULL);
+
+	mae->counter_rxq_running = false;
+
+	sfc_log_init(sa, "done");
+}
+
+int
+sfc_mae_counter_start(struct sfc_adapter *sa)
+{
+	struct sfc_mae *mae = &sa->mae;
+	uint32_t flags;
+	int rc;
+
+	SFC_ASSERT(sa->counter_rxq.state & SFC_COUNTER_RXQ_ATTACHED);
+
+	if (mae->counter_rxq_running)
+		return 0;
+
+	sfc_log_init(sa, "entry");
+
+	rc = efx_mae_counters_stream_start(sa->nic, sa->counter_rxq.sw_index,
+					   SFC_MAE_COUNTER_STREAM_PACKET_SIZE,
+					   0 /* No flags required */, &flags);
+	if (rc != 0) {
+		sfc_err(sa, "failed to start MAE counters stream: %s",
+			rte_strerror(rc));
+		goto fail_counter_stream;
+	}
+
+	sfc_log_init(sa, "stream start flags: 0x%x", flags);
+
+	rc = sfc_mae_counter_service_register(sa, flags);
+	if (rc != 0)
+		goto fail_service_register;
+
+	mae->counter_rxq_running = true;
+
+	return 0;
+
+fail_service_register:
+	efx_mae_counters_stream_stop(sa->nic, sa->counter_rxq.sw_index, NULL);
+
+fail_counter_stream:
+	sfc_log_init(sa, "failed: %s", rte_strerror(rc));
+
+	return rc;
+}
diff --git a/drivers/net/sfc/sfc_mae_counter.h b/drivers/net/sfc/sfc_mae_counter.h
index f16d64a999..f61a6b59cb 100644
--- a/drivers/net/sfc/sfc_mae_counter.h
+++ b/drivers/net/sfc/sfc_mae_counter.h
@@ -38,6 +38,17 @@ void sfc_mae_counter_rxq_detach(struct sfc_adapter *sa);
 int sfc_mae_counter_rxq_init(struct sfc_adapter *sa);
 void sfc_mae_counter_rxq_fini(struct sfc_adapter *sa);
 
+int sfc_mae_counters_init(struct sfc_mae_counters *counters,
+			  uint32_t nb_counters_max);
+void sfc_mae_counters_fini(struct sfc_mae_counters *counters);
+int sfc_mae_counter_enable(struct sfc_adapter *sa,
+			   struct sfc_mae_counter_id *counterp);
+int sfc_mae_counter_disable(struct sfc_adapter *sa,
+			    struct sfc_mae_counter_id *counter);
+
+int sfc_mae_counter_start(struct sfc_adapter *sa);
+void sfc_mae_counter_stop(struct sfc_adapter *sa);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/drivers/net/sfc/sfc_stats.h b/drivers/net/sfc/sfc_stats.h
new file mode 100644
index 0000000000..2d7ab71f14
--- /dev/null
+++ b/drivers/net/sfc/sfc_stats.h
@@ -0,0 +1,80 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ *
+ * Copyright(c) 2019-2021 Xilinx, Inc.
+ * Copyright(c) 2019 Solarflare Communications Inc.
+ *
+ * This software was jointly developed between OKTET Labs (under contract
+ * for Solarflare) and Solarflare Communications, Inc.
+ */
+
+#ifndef _SFC_STATS_H
+#define _SFC_STATS_H
+
+#include <stdint.h>
+
+#include <rte_atomic.h>
+
+#include "sfc_tweak.h"
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/**
+ * 64-bit packets and bytes counters covered by 128-bit integer
+ * in order to do atomic updates to guarantee consistency if
+ * required.
+ */
+union sfc_pkts_bytes {
+	RTE_STD_C11
+	struct {
+		uint64_t		pkts;
+		uint64_t		bytes;
+	};
+	rte_int128_t			pkts_bytes;
+};
+
+/**
+ * Update packets and bytes counters atomically in assumption that
+ * the counter is written on one core only.
+ */
+static inline void
+sfc_pkts_bytes_add(union sfc_pkts_bytes *st, uint64_t pkts, uint64_t bytes)
+{
+#if SFC_SW_STATS_ATOMIC
+	union sfc_pkts_bytes result;
+
+	/* Stats are written on single core only, so just load values */
+	result.pkts = st->pkts + pkts;
+	result.bytes = st->bytes + bytes;
+
+	/*
+	 * Store the result atomically to guarantee that the reader
+	 * core sees both counter updates together.
+	 */
+	__atomic_store_n(&st->pkts_bytes.int128, result.pkts_bytes.int128,
+			 __ATOMIC_RELEASE);
+#else
+	st->pkts += pkts;
+	st->bytes += bytes;
+#endif
+}
+
+/**
+ * Get an atomic copy of a packets and bytes counters.
+ */
+static inline void
+sfc_pkts_bytes_get(const union sfc_pkts_bytes *st, union sfc_pkts_bytes *result)
+{
+#if SFC_SW_STATS_ATOMIC
+	result->pkts_bytes.int128 = __atomic_load_n(&st->pkts_bytes.int128,
+						    __ATOMIC_ACQUIRE);
+#else
+	*result = *st;
+#endif
+}
+
+#ifdef __cplusplus
+}
+#endif
+#endif /* _SFC_STATS_H */
diff --git a/drivers/net/sfc/sfc_tweak.h b/drivers/net/sfc/sfc_tweak.h
index f2d8701421..d09c7a3125 100644
--- a/drivers/net/sfc/sfc_tweak.h
+++ b/drivers/net/sfc/sfc_tweak.h
@@ -42,4 +42,13 @@
  */
 #define SFC_RXD_WAIT_TIMEOUT_NS_DEF	(200U * 1000)
 
+/**
+ * Ideally reading packet and byte counters together should return
+ * consistent values. I.e. a number of bytes corresponds to a number of
+ * packets. Since counters are updated in one thread and queried in
+ * another, it requires either locking or atomics, which are very
+ * expensive from a performance point of view. So, disable it by default.
+ */
+#define SFC_SW_STATS_ATOMIC		0
+
 #endif /* _SFC_TWEAK_H_ */
-- 
2.30.2


^ permalink raw reply	[flat|nested] 104+ messages in thread

* [dpdk-dev] [PATCH v2 20/20] net/sfc: support flow API query for count actions
  2021-06-04 14:23 ` [dpdk-dev] [PATCH v2 00/20] net/sfc: support flow API COUNT action Andrew Rybchenko
                     ` (18 preceding siblings ...)
  2021-06-04 14:24   ` [dpdk-dev] [PATCH v2 19/20] net/sfc: support flow action COUNT in transfer rules Andrew Rybchenko
@ 2021-06-04 14:24   ` Andrew Rybchenko
  2021-06-17  8:37   ` [dpdk-dev] [PATCH v2 00/20] net/sfc: support flow API COUNT action David Marchand
  20 siblings, 0 replies; 104+ messages in thread
From: Andrew Rybchenko @ 2021-06-04 14:24 UTC (permalink / raw)
  To: dev; +Cc: Igor Romanov, Andy Moreton, Ivan Malov

From: Igor Romanov <igor.romanov@oktetlabs.ru>

The query reports the number of hits for a counter associated
with a flow rule.
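
For illustration, an application could query such a counter roughly as
follows ("port_id" and "flow" are assumed to come from earlier
rte_flow_create() usage; a NULL conf selects the first counter of the
rule):

	struct rte_flow_query_count query = { .reset = 0 };
	const struct rte_flow_action count_action = {
		.type = RTE_FLOW_ACTION_TYPE_COUNT,
		.conf = NULL, /* NULL: the first counter of the rule */
	};
	struct rte_flow_error error;

	if (rte_flow_query(port_id, flow, &count_action,
			   &query, &error) == 0 &&
	    query.hits_set && query.bytes_set)
		printf("hits=%" PRIu64 " bytes=%" PRIu64 "\n",
		       query.hits, query.bytes);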

Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
Reviewed-by: Ivan Malov <ivan.malov@oktetlabs.ru>
---
 drivers/net/sfc/sfc_flow.c        | 48 ++++++++++++++++++++++-
 drivers/net/sfc/sfc_flow.h        |  6 +++
 drivers/net/sfc/sfc_mae.c         | 64 +++++++++++++++++++++++++++++++
 drivers/net/sfc/sfc_mae.h         |  1 +
 drivers/net/sfc/sfc_mae_counter.c | 32 ++++++++++++++++
 drivers/net/sfc/sfc_mae_counter.h |  3 ++
 6 files changed, 153 insertions(+), 1 deletion(-)

diff --git a/drivers/net/sfc/sfc_flow.c b/drivers/net/sfc/sfc_flow.c
index 1294dbd3a7..a3721089ca 100644
--- a/drivers/net/sfc/sfc_flow.c
+++ b/drivers/net/sfc/sfc_flow.c
@@ -32,6 +32,7 @@ struct sfc_flow_ops_by_spec {
 	sfc_flow_cleanup_cb_t	*cleanup;
 	sfc_flow_insert_cb_t	*insert;
 	sfc_flow_remove_cb_t	*remove;
+	sfc_flow_query_cb_t	*query;
 };
 
 static sfc_flow_parse_cb_t sfc_flow_parse_rte_to_filter;
@@ -45,6 +46,7 @@ static const struct sfc_flow_ops_by_spec sfc_flow_ops_filter = {
 	.cleanup = NULL,
 	.insert = sfc_flow_filter_insert,
 	.remove = sfc_flow_filter_remove,
+	.query = NULL,
 };
 
 static const struct sfc_flow_ops_by_spec sfc_flow_ops_mae = {
@@ -53,6 +55,7 @@ static const struct sfc_flow_ops_by_spec sfc_flow_ops_mae = {
 	.cleanup = sfc_mae_flow_cleanup,
 	.insert = sfc_mae_flow_insert,
 	.remove = sfc_mae_flow_remove,
+	.query = sfc_mae_flow_query,
 };
 
 static const struct sfc_flow_ops_by_spec *
@@ -2788,6 +2791,49 @@ sfc_flow_flush(struct rte_eth_dev *dev,
 	return -ret;
 }
 
+static int
+sfc_flow_query(struct rte_eth_dev *dev,
+	       struct rte_flow *flow,
+	       const struct rte_flow_action *action,
+	       void *data,
+	       struct rte_flow_error *error)
+{
+	struct sfc_adapter *sa = sfc_adapter_by_eth_dev(dev);
+	const struct sfc_flow_ops_by_spec *ops;
+	int ret;
+
+	sfc_adapter_lock(sa);
+
+	ops = sfc_flow_get_ops_by_spec(flow);
+	if (ops == NULL || ops->query == NULL) {
+		ret = rte_flow_error_set(error, ENOTSUP,
+			RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+			"No backend to handle this flow");
+		goto fail_no_backend;
+	}
+
+	if (sa->state != SFC_ETHDEV_STARTED) {
+		ret = rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+			"Can't query the flow: the adapter is not started");
+		goto fail_not_started;
+	}
+
+	ret = ops->query(dev, flow, action, data, error);
+	if (ret != 0)
+		goto fail_query;
+
+	sfc_adapter_unlock(sa);
+
+	return 0;
+
+fail_query:
+fail_not_started:
+fail_no_backend:
+	sfc_adapter_unlock(sa);
+	return ret;
+}
+
 static int
 sfc_flow_isolate(struct rte_eth_dev *dev, int enable,
 		 struct rte_flow_error *error)
@@ -2814,7 +2860,7 @@ const struct rte_flow_ops sfc_flow_ops = {
 	.create = sfc_flow_create,
 	.destroy = sfc_flow_destroy,
 	.flush = sfc_flow_flush,
-	.query = NULL,
+	.query = sfc_flow_query,
 	.isolate = sfc_flow_isolate,
 };
 
diff --git a/drivers/net/sfc/sfc_flow.h b/drivers/net/sfc/sfc_flow.h
index bd3b374d68..99e5cf9cff 100644
--- a/drivers/net/sfc/sfc_flow.h
+++ b/drivers/net/sfc/sfc_flow.h
@@ -181,6 +181,12 @@ typedef int (sfc_flow_insert_cb_t)(struct sfc_adapter *sa,
 typedef int (sfc_flow_remove_cb_t)(struct sfc_adapter *sa,
 				   struct rte_flow *flow);
 
+typedef int (sfc_flow_query_cb_t)(struct rte_eth_dev *dev,
+				  struct rte_flow *flow,
+				  const struct rte_flow_action *action,
+				  void *data,
+				  struct rte_flow_error *error);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/drivers/net/sfc/sfc_mae.c b/drivers/net/sfc/sfc_mae.c
index 370a39da1d..ee1188bc1e 100644
--- a/drivers/net/sfc/sfc_mae.c
+++ b/drivers/net/sfc/sfc_mae.c
@@ -3277,3 +3277,67 @@ sfc_mae_flow_remove(struct sfc_adapter *sa,
 
 	return 0;
 }
+
+static int
+sfc_mae_query_counter(struct sfc_adapter *sa,
+		      struct sfc_flow_spec_mae *spec,
+		      const struct rte_flow_action *action,
+		      struct rte_flow_query_count *data,
+		      struct rte_flow_error *error)
+{
+	struct sfc_mae_action_set *action_set = spec->action_set;
+	const struct rte_flow_action_count *conf = action->conf;
+	unsigned int i;
+	int rc;
+
+	if (action_set->n_counters == 0) {
+		return rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ACTION, action,
+			"Queried flow rule does not have count actions");
+	}
+
+	for (i = 0; i < action_set->n_counters; i++) {
+		/*
+		 * Get the first available counter of the flow rule if
+		 * counter ID is not specified.
+		 */
+		if (conf != NULL && action_set->counters[i].rte_id != conf->id)
+			continue;
+
+		rc = sfc_mae_counter_get(&sa->mae.counter_registry.counters,
+					 &action_set->counters[i], data);
+		if (rc != 0) {
+			return rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ACTION, action,
+				"Queried flow rule counter action is invalid");
+		}
+
+		return 0;
+	}
+
+	return rte_flow_error_set(error, ENOENT,
+				  RTE_FLOW_ERROR_TYPE_ACTION, action,
+				  "No such flow rule action count ID");
+}
+
+int
+sfc_mae_flow_query(struct rte_eth_dev *dev,
+		   struct rte_flow *flow,
+		   const struct rte_flow_action *action,
+		   void *data,
+		   struct rte_flow_error *error)
+{
+	struct sfc_adapter *sa = sfc_adapter_by_eth_dev(dev);
+	struct sfc_flow_spec *spec = &flow->spec;
+	struct sfc_flow_spec_mae *spec_mae = &spec->mae;
+
+	switch (action->type) {
+	case RTE_FLOW_ACTION_TYPE_COUNT:
+		return sfc_mae_query_counter(sa, spec_mae, action,
+					     data, error);
+	default:
+		return rte_flow_error_set(error, ENOTSUP,
+			RTE_FLOW_ERROR_TYPE_ACTION, NULL,
+			"Query for action of this type is not supported");
+	}
+}
diff --git a/drivers/net/sfc/sfc_mae.h b/drivers/net/sfc/sfc_mae.h
index 15fe5ebca5..7e3b6a7a97 100644
--- a/drivers/net/sfc/sfc_mae.h
+++ b/drivers/net/sfc/sfc_mae.h
@@ -304,6 +304,7 @@ int sfc_mae_rule_parse_actions(struct sfc_adapter *sa,
 sfc_flow_verify_cb_t sfc_mae_flow_verify;
 sfc_flow_insert_cb_t sfc_mae_flow_insert;
 sfc_flow_remove_cb_t sfc_mae_flow_remove;
+sfc_flow_query_cb_t sfc_mae_flow_query;
 
 #ifdef __cplusplus
 }
diff --git a/drivers/net/sfc/sfc_mae_counter.c b/drivers/net/sfc/sfc_mae_counter.c
index b0cb8157aa..5afd450a11 100644
--- a/drivers/net/sfc/sfc_mae_counter.c
+++ b/drivers/net/sfc/sfc_mae_counter.c
@@ -793,3 +793,35 @@ sfc_mae_counter_start(struct sfc_adapter *sa)
 
 	return rc;
 }
+
+int
+sfc_mae_counter_get(struct sfc_mae_counters *counters,
+		    const struct sfc_mae_counter_id *counter,
+		    struct rte_flow_query_count *data)
+{
+	struct sfc_mae_counter *p;
+	union sfc_pkts_bytes value;
+
+	SFC_ASSERT(counter->mae_id.id < counters->n_mae_counters);
+	p = &counters->mae_counters[counter->mae_id.id];
+
+	/*
+	 * Ordering is relaxed since this is the only operation on the counter
+	 * value and it does not depend on other stores/loads in different
+	 * threads. Paired with the relaxed ordering in counter increment.
+	 */
+	value.pkts_bytes.int128 = __atomic_load_n(&p->value.pkts_bytes.int128,
+						  __ATOMIC_RELAXED);
+
+	data->hits_set = 1;
+	data->bytes_set = 1;
+	data->hits = value.pkts - p->reset.pkts;
+	data->bytes = value.bytes - p->reset.bytes;
+
+	if (data->reset != 0) {
+		p->reset.pkts = value.pkts;
+		p->reset.bytes = value.bytes;
+	}
+
+	return 0;
+}
diff --git a/drivers/net/sfc/sfc_mae_counter.h b/drivers/net/sfc/sfc_mae_counter.h
index f61a6b59cb..2c953c2968 100644
--- a/drivers/net/sfc/sfc_mae_counter.h
+++ b/drivers/net/sfc/sfc_mae_counter.h
@@ -45,6 +45,9 @@ int sfc_mae_counter_enable(struct sfc_adapter *sa,
 			   struct sfc_mae_counter_id *counterp);
 int sfc_mae_counter_disable(struct sfc_adapter *sa,
 			    struct sfc_mae_counter_id *counter);
+int sfc_mae_counter_get(struct sfc_mae_counters *counters,
+			const struct sfc_mae_counter_id *counter,
+			struct rte_flow_query_count *data);
 
 int sfc_mae_counter_start(struct sfc_adapter *sa);
 void sfc_mae_counter_stop(struct sfc_adapter *sa);
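
For completeness, a minimal sketch of how an application could exercise this
query path through the generic rte_flow API; the port ID, flow handle and
printing are illustrative only and not part of the patch:

#include <stdio.h>
#include <inttypes.h>
#include <rte_flow.h>

/* Query the COUNT action of an already created transfer flow rule. */
static int
query_flow_count(uint16_t port_id, struct rte_flow *flow)
{
	struct rte_flow_query_count count = { .reset = 0 };
	const struct rte_flow_action action = {
		.type = RTE_FLOW_ACTION_TYPE_COUNT,
		.conf = NULL, /* NULL conf: use the first counter of the rule */
	};
	struct rte_flow_error error;
	int rc;

	rc = rte_flow_query(port_id, flow, &action, &count, &error);
	if (rc != 0) {
		fprintf(stderr, "flow query failed: %s\n",
			error.message != NULL ? error.message : "(no message)");
		return rc;
	}

	printf("hits=%" PRIu64 " bytes=%" PRIu64 "\n",
	       count.hits_set ? count.hits : 0,
	       count.bytes_set ? count.bytes : 0);
	return 0;
}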
-- 
2.30.2


^ permalink raw reply	[flat|nested] 104+ messages in thread

* Re: [dpdk-dev] [PATCH v2 00/20] net/sfc: support flow API COUNT action
  2021-06-04 14:23 ` [dpdk-dev] [PATCH v2 00/20] net/sfc: support flow API COUNT action Andrew Rybchenko
                     ` (19 preceding siblings ...)
  2021-06-04 14:24   ` [dpdk-dev] [PATCH v2 20/20] net/sfc: support flow API query for count actions Andrew Rybchenko
@ 2021-06-17  8:37   ` David Marchand
  2021-06-18 13:40     ` Andrew Rybchenko
  20 siblings, 1 reply; 104+ messages in thread
From: David Marchand @ 2021-06-17  8:37 UTC (permalink / raw)
  To: Andrew Rybchenko; +Cc: dev

Hello Andrew,

On Fri, Jun 4, 2021 at 4:24 PM Andrew Rybchenko
<andrew.rybchenko@oktetlabs.ru> wrote:
>
> Update base driver and support COUNT action in transfer flow rules.
>
> v2:
>  - add release notes
>  - add missing documentation
>  - fix spelling
>  - handle query in stopped state gracefully

I see build issues in the CI.
Can you have a look?

gcc -Idrivers/libtmp_rte_net_sfc.a.p -Idrivers -I../drivers
-Idrivers/net/sfc -I../drivers/net/sfc -Ilib/ethdev -I../lib/ethdev
-I. -I.. -Iconfig -I../config -Ilib/eal/include -I../lib/eal/include
-Ilib/eal/linux/include -I../lib/eal/linux/include
-Ilib/eal/x86/include -I../lib/eal/x86/include -Ilib/eal/common
-I../lib/eal/common -Ilib/eal -I../lib/eal -Ilib/kvargs
-I../lib/kvargs -Ilib/metrics -I../lib/metrics -Ilib/telemetry
-I../lib/telemetry -Ilib/net -I../lib/net -Ilib/mbuf -I../lib/mbuf
-Ilib/mempool -I../lib/mempool -Ilib/ring -I../lib/ring -Ilib/meter
-I../lib/meter -Idrivers/bus/pci -I../drivers/bus/pci
-I../drivers/bus/pci/linux -Ilib/pci -I../lib/pci -Idrivers/bus/vdev
-I../drivers/bus/vdev -Idrivers/common/sfc_efx
-I../drivers/common/sfc_efx -Idrivers/common/sfc_efx/base
-I../drivers/common/sfc_efx/base -fdiagnostics-color=always -pipe
-D_FILE_OFFSET_BITS=64 -Wall -Winvalid-pch -Werror -O3 -include
rte_config.h -Wextra -Wcast-qual -Wdeprecated -Wformat
-Wformat-nonliteral -Wformat-security -Wmissing-declarations
-Wmissing-prototypes -Wnested-externs -Wold-style-definition
-Wpointer-arith -Wsign-compare -Wstrict-prototypes -Wundef
-Wwrite-strings -Wno-packed-not-aligned
-Wno-missing-field-initializers -D_GNU_SOURCE -fPIC -march=native
-DALLOW_EXPERIMENTAL_API -DALLOW_INTERNAL_API -Wno-format-truncation
-Wno-strict-aliasing -Wdisabled-optimization -Waggregate-return
-Wbad-function-cast -DRTE_LOG_DEFAULT_LOGTYPE=pmd.net.sfc -MD -MQ
drivers/libtmp_rte_net_sfc.a.p/net_sfc_sfc_flow.c.o -MF
drivers/libtmp_rte_net_sfc.a.p/net_sfc_sfc_flow.c.o.d -o
drivers/libtmp_rte_net_sfc.a.p/net_sfc_sfc_flow.c.o -c
../drivers/net/sfc/sfc_flow.c
../drivers/net/sfc/sfc_flow.c: In function ‘sfc_flow_query’:
../drivers/net/sfc/sfc_flow.c:2815:19: error: ‘SFC_ETHDEV_STARTED’
undeclared (first use in this function); did you mean
‘SFC_ADAPTER_STARTED’?
  if (sa->state != SFC_ETHDEV_STARTED) {
                   ^~~~~~~~~~~~~~~~~~
                   SFC_ADAPTER_STARTED
../drivers/net/sfc/sfc_flow.c:2815:19: note: each undeclared
identifier is reported only once for each function it appears in

$ git grep SFC_ETHDEV_STARTED
drivers/net/sfc/sfc_flow.c:     if (sa->state != SFC_ETHDEV_STARTED) {



-- 
David Marchand


^ permalink raw reply	[flat|nested] 104+ messages in thread

* [dpdk-dev] [PATCH v3 00/20] net/sfc: support flow API COUNT action
  2021-05-27 15:24 [dpdk-dev] [PATCH 00/20] net/sfc: support flow API COUNT action Andrew Rybchenko
                   ` (20 preceding siblings ...)
  2021-06-04 14:23 ` [dpdk-dev] [PATCH v2 00/20] net/sfc: support flow API COUNT action Andrew Rybchenko
@ 2021-06-18 13:40 ` Andrew Rybchenko
  2021-06-18 13:40   ` [dpdk-dev] [PATCH v3 01/20] net/sfc: introduce ethdev Rx queue ID Andrew Rybchenko
                     ` (19 more replies)
  2021-07-02  8:39 ` [dpdk-dev] [PATCH v4 00/20] net/sfc: support flow API COUNT action Andrew Rybchenko
  22 siblings, 20 replies; 104+ messages in thread
From: Andrew Rybchenko @ 2021-06-18 13:40 UTC (permalink / raw)
  To: dev; +Cc: David Marchand

Update base driver and support COUNT action in transfer flow rules.

v3:
 - fix build breakage because of an incorrectly rebased and squashed-in
   fix

v2:
 - add release notes
 - add missing documentation
 - fix spelling
 - handle query in stopped state gracefully

Andrew Rybchenko (6):
  net/sfc: do not enable interrupts on internal Rx queues
  common/sfc_efx/base: separate target EvQ and IRQ config
  common/sfc_efx/base: support custom EvQ to IRQ mapping
  net/sfc: explicitly control IRQ used for Rx queues
  net/sfc: add NUMA-aware registry of service logical cores
  common/sfc_efx/base: add packetiser packet format definition

Igor Romanov (14):
  net/sfc: introduce ethdev Rx queue ID
  net/sfc: introduce ethdev Tx queue ID
  common/sfc_efx/base: add ingress m-port RxQ flag
  common/sfc_efx/base: add user mark RxQ flag
  net/sfc: add abstractions for the management EVQ identity
  net/sfc: add support for initialising different RxQ types
  net/sfc: reserve RxQ for counters
  common/sfc_efx/base: add counter creation MCDI wrappers
  common/sfc_efx/base: add counter stream MCDI wrappers
  common/sfc_efx/base: support counter in action set
  net/sfc: add Rx datapath method to get pushed buffers count
  common/sfc_efx/base: add max MAE counters to limits
  net/sfc: support flow action COUNT in transfer rules
  net/sfc: support flow API query for count actions

 doc/guides/nics/sfc_efx.rst                   |   2 +
 doc/guides/rel_notes/release_21_08.rst        |   6 +
 drivers/common/sfc_efx/base/ef10_ev.c         |  14 +-
 drivers/common/sfc_efx/base/ef10_impl.h       |   1 +
 drivers/common/sfc_efx/base/ef10_rx.c         |  57 +-
 drivers/common/sfc_efx/base/efx.h             | 113 +++
 drivers/common/sfc_efx/base/efx_ev.c          |  39 +-
 drivers/common/sfc_efx/base/efx_impl.h        |   8 +-
 drivers/common/sfc_efx/base/efx_mae.c         | 430 ++++++++-
 drivers/common/sfc_efx/base/efx_mcdi.c        |   7 +-
 drivers/common/sfc_efx/base/efx_mcdi.h        |   7 +
 .../base/efx_regs_counters_pkt_format.h       |  87 ++
 drivers/common/sfc_efx/base/efx_rx.c          |  14 +-
 drivers/common/sfc_efx/base/rhead_ev.c        |  14 +-
 drivers/common/sfc_efx/base/rhead_impl.h      |   1 +
 drivers/common/sfc_efx/base/rhead_rx.c        |   6 +
 drivers/common/sfc_efx/version.map            |   9 +
 drivers/net/sfc/meson.build                   |  12 +
 drivers/net/sfc/sfc.c                         |  68 +-
 drivers/net/sfc/sfc.h                         |  22 +
 drivers/net/sfc/sfc_dp.h                      |   6 +
 drivers/net/sfc/sfc_dp_rx.h                   |   4 +
 drivers/net/sfc/sfc_ef100_rx.c                |  15 +
 drivers/net/sfc/sfc_ethdev.c                  | 115 ++-
 drivers/net/sfc/sfc_ev.c                      |  36 +-
 drivers/net/sfc/sfc_ev.h                      | 107 ++-
 drivers/net/sfc/sfc_flow.c                    |  77 +-
 drivers/net/sfc/sfc_flow.h                    |   6 +
 drivers/net/sfc/sfc_mae.c                     | 296 ++++++-
 drivers/net/sfc/sfc_mae.h                     |  61 ++
 drivers/net/sfc/sfc_mae_counter.c             | 827 ++++++++++++++++++
 drivers/net/sfc/sfc_mae_counter.h             |  58 ++
 drivers/net/sfc/sfc_rx.c                      | 231 +++--
 drivers/net/sfc/sfc_rx.h                      |  15 +-
 drivers/net/sfc/sfc_service.c                 |  99 +++
 drivers/net/sfc/sfc_service.h                 |  20 +
 drivers/net/sfc/sfc_stats.h                   |  80 ++
 drivers/net/sfc/sfc_tweak.h                   |   9 +
 drivers/net/sfc/sfc_tx.c                      | 164 ++--
 drivers/net/sfc/sfc_tx.h                      |  11 +-
 40 files changed, 2904 insertions(+), 250 deletions(-)
 create mode 100644 drivers/common/sfc_efx/base/efx_regs_counters_pkt_format.h
 create mode 100644 drivers/net/sfc/sfc_mae_counter.c
 create mode 100644 drivers/net/sfc/sfc_mae_counter.h
 create mode 100644 drivers/net/sfc/sfc_service.c
 create mode 100644 drivers/net/sfc/sfc_service.h
 create mode 100644 drivers/net/sfc/sfc_stats.h

-- 
2.30.2


^ permalink raw reply	[flat|nested] 104+ messages in thread

* [dpdk-dev] [PATCH v3 01/20] net/sfc: introduce ethdev Rx queue ID
  2021-06-18 13:40 ` [dpdk-dev] [PATCH v3 " Andrew Rybchenko
@ 2021-06-18 13:40   ` Andrew Rybchenko
  2021-06-18 13:40   ` [dpdk-dev] [PATCH v3 02/20] net/sfc: do not enable interrupts on internal Rx queues Andrew Rybchenko
                     ` (18 subsequent siblings)
  19 siblings, 0 replies; 104+ messages in thread
From: Andrew Rybchenko @ 2021-06-18 13:40 UTC (permalink / raw)
  To: dev; +Cc: David Marchand, Igor Romanov, Andy Moreton, Ivan Malov

From: Igor Romanov <igor.romanov@oktetlabs.ru>

Make the software index of an Rx queue and the ethdev queue index
separate. When an ethdev RxQ is accessed in ethdev callbacks, an explicit
ethdev queue index is used.

This is a preparation for introducing non-ethdev Rx queues.
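
A short usage sketch, assuming hypothetical local variables sas (shared
adapter state) and ethdev_qid, to show how the new helpers are meant to be
used from ethdev callbacks and from code keyed by software indices:

	struct sfc_rxq_info *rxq_info;
	sfc_sw_index_t sw_index;

	/* Ethdev callbacks translate the ethdev queue ID first. */
	sw_index = sfc_rxq_sw_index_by_ethdev_rx_qid(sas, ethdev_qid);
	rxq_info = sfc_rxq_info_by_ethdev_qid(sas, ethdev_qid);

	/* Code keyed by software index maps back and may see internal RxQs. */
	if (sfc_ethdev_rx_qid_by_rxq_sw_index(sas, sw_index) ==
	    SFC_ETHDEV_QID_INVALID) {
		/* Not an ethdev queue: skip ethdev-only bookkeeping. */
	}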

Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
Reviewed-by: Ivan Malov <ivan.malov@oktetlabs.ru>
---
 drivers/net/sfc/sfc.h        |   2 +
 drivers/net/sfc/sfc_dp.h     |   4 +
 drivers/net/sfc/sfc_ethdev.c |  69 ++++++++------
 drivers/net/sfc/sfc_ev.c     |   2 +-
 drivers/net/sfc/sfc_ev.h     |  22 ++++-
 drivers/net/sfc/sfc_flow.c   |  22 +++--
 drivers/net/sfc/sfc_rx.c     | 179 +++++++++++++++++++++++++----------
 drivers/net/sfc/sfc_rx.h     |  10 +-
 8 files changed, 215 insertions(+), 95 deletions(-)

diff --git a/drivers/net/sfc/sfc.h b/drivers/net/sfc/sfc.h
index b48a818adb..ebe705020d 100644
--- a/drivers/net/sfc/sfc.h
+++ b/drivers/net/sfc/sfc.h
@@ -29,6 +29,7 @@
 #include "sfc_filter.h"
 #include "sfc_sriov.h"
 #include "sfc_mae.h"
+#include "sfc_dp.h"
 
 #ifdef __cplusplus
 extern "C" {
@@ -168,6 +169,7 @@ struct sfc_rss {
 struct sfc_adapter_shared {
 	unsigned int			rxq_count;
 	struct sfc_rxq_info		*rxq_info;
+	unsigned int			ethdev_rxq_count;
 
 	unsigned int			txq_count;
 	struct sfc_txq_info		*txq_info;
diff --git a/drivers/net/sfc/sfc_dp.h b/drivers/net/sfc/sfc_dp.h
index 4bed137806..76065483d4 100644
--- a/drivers/net/sfc/sfc_dp.h
+++ b/drivers/net/sfc/sfc_dp.h
@@ -96,6 +96,10 @@ struct sfc_dp {
 /** List of datapath variants */
 TAILQ_HEAD(sfc_dp_list, sfc_dp);
 
+typedef unsigned int sfc_sw_index_t;
+typedef int32_t	sfc_ethdev_qid_t;
+#define SFC_ETHDEV_QID_INVALID	((sfc_ethdev_qid_t)(-1))
+
 /* Check if available HW/FW capabilities are sufficient for the datapath */
 static inline bool
 sfc_dp_match_hw_fw_caps(const struct sfc_dp *dp, unsigned int avail_caps)
diff --git a/drivers/net/sfc/sfc_ethdev.c b/drivers/net/sfc/sfc_ethdev.c
index c50ecea0b9..2651c41288 100644
--- a/drivers/net/sfc/sfc_ethdev.c
+++ b/drivers/net/sfc/sfc_ethdev.c
@@ -463,26 +463,31 @@ sfc_dev_allmulti_disable(struct rte_eth_dev *dev)
 }
 
 static int
-sfc_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
+sfc_rx_queue_setup(struct rte_eth_dev *dev, uint16_t ethdev_qid,
 		   uint16_t nb_rx_desc, unsigned int socket_id,
 		   const struct rte_eth_rxconf *rx_conf,
 		   struct rte_mempool *mb_pool)
 {
 	struct sfc_adapter_shared *sas = sfc_adapter_shared_by_eth_dev(dev);
 	struct sfc_adapter *sa = sfc_adapter_by_eth_dev(dev);
+	sfc_ethdev_qid_t sfc_ethdev_qid = ethdev_qid;
+	struct sfc_rxq_info *rxq_info;
+	sfc_sw_index_t sw_index;
 	int rc;
 
 	sfc_log_init(sa, "RxQ=%u nb_rx_desc=%u socket_id=%u",
-		     rx_queue_id, nb_rx_desc, socket_id);
+		     ethdev_qid, nb_rx_desc, socket_id);
 
 	sfc_adapter_lock(sa);
 
-	rc = sfc_rx_qinit(sa, rx_queue_id, nb_rx_desc, socket_id,
+	sw_index = sfc_rxq_sw_index_by_ethdev_rx_qid(sas, sfc_ethdev_qid);
+	rc = sfc_rx_qinit(sa, sw_index, nb_rx_desc, socket_id,
 			  rx_conf, mb_pool);
 	if (rc != 0)
 		goto fail_rx_qinit;
 
-	dev->data->rx_queues[rx_queue_id] = sas->rxq_info[rx_queue_id].dp;
+	rxq_info = sfc_rxq_info_by_ethdev_qid(sas, sfc_ethdev_qid);
+	dev->data->rx_queues[ethdev_qid] = rxq_info->dp;
 
 	sfc_adapter_unlock(sa);
 
@@ -500,7 +505,7 @@ sfc_rx_queue_release(void *queue)
 	struct sfc_dp_rxq *dp_rxq = queue;
 	struct sfc_rxq *rxq;
 	struct sfc_adapter *sa;
-	unsigned int sw_index;
+	sfc_sw_index_t sw_index;
 
 	if (dp_rxq == NULL)
 		return;
@@ -1182,15 +1187,14 @@ sfc_set_mc_addr_list(struct rte_eth_dev *dev,
  * use any process-local pointers from the adapter data.
  */
 static void
-sfc_rx_queue_info_get(struct rte_eth_dev *dev, uint16_t rx_queue_id,
+sfc_rx_queue_info_get(struct rte_eth_dev *dev, uint16_t ethdev_qid,
 		      struct rte_eth_rxq_info *qinfo)
 {
 	struct sfc_adapter_shared *sas = sfc_adapter_shared_by_eth_dev(dev);
+	sfc_ethdev_qid_t sfc_ethdev_qid = ethdev_qid;
 	struct sfc_rxq_info *rxq_info;
 
-	SFC_ASSERT(rx_queue_id < sas->rxq_count);
-
-	rxq_info = &sas->rxq_info[rx_queue_id];
+	rxq_info = sfc_rxq_info_by_ethdev_qid(sas, sfc_ethdev_qid);
 
 	qinfo->mp = rxq_info->refill_mb_pool;
 	qinfo->conf.rx_free_thresh = rxq_info->refill_threshold;
@@ -1232,14 +1236,14 @@ sfc_tx_queue_info_get(struct rte_eth_dev *dev, uint16_t tx_queue_id,
  * use any process-local pointers from the adapter data.
  */
 static uint32_t
-sfc_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+sfc_rx_queue_count(struct rte_eth_dev *dev, uint16_t ethdev_qid)
 {
 	const struct sfc_adapter_priv *sap = sfc_adapter_priv_by_eth_dev(dev);
 	struct sfc_adapter_shared *sas = sfc_adapter_shared_by_eth_dev(dev);
+	sfc_ethdev_qid_t sfc_ethdev_qid = ethdev_qid;
 	struct sfc_rxq_info *rxq_info;
 
-	SFC_ASSERT(rx_queue_id < sas->rxq_count);
-	rxq_info = &sas->rxq_info[rx_queue_id];
+	rxq_info = sfc_rxq_info_by_ethdev_qid(sas, sfc_ethdev_qid);
 
 	if ((rxq_info->state & SFC_RXQ_STARTED) == 0)
 		return 0;
@@ -1293,13 +1297,16 @@ sfc_tx_descriptor_status(void *queue, uint16_t offset)
 }
 
 static int
-sfc_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+sfc_rx_queue_start(struct rte_eth_dev *dev, uint16_t ethdev_qid)
 {
 	struct sfc_adapter_shared *sas = sfc_adapter_shared_by_eth_dev(dev);
 	struct sfc_adapter *sa = sfc_adapter_by_eth_dev(dev);
+	sfc_ethdev_qid_t sfc_ethdev_qid = ethdev_qid;
+	struct sfc_rxq_info *rxq_info;
+	sfc_sw_index_t sw_index;
 	int rc;
 
-	sfc_log_init(sa, "RxQ=%u", rx_queue_id);
+	sfc_log_init(sa, "RxQ=%u", ethdev_qid);
 
 	sfc_adapter_lock(sa);
 
@@ -1307,14 +1314,16 @@ sfc_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 	if (sa->state != SFC_ADAPTER_STARTED)
 		goto fail_not_started;
 
-	if (sas->rxq_info[rx_queue_id].state != SFC_RXQ_INITIALIZED)
+	rxq_info = sfc_rxq_info_by_ethdev_qid(sas, sfc_ethdev_qid);
+	if (rxq_info->state != SFC_RXQ_INITIALIZED)
 		goto fail_not_setup;
 
-	rc = sfc_rx_qstart(sa, rx_queue_id);
+	sw_index = sfc_rxq_sw_index_by_ethdev_rx_qid(sas, sfc_ethdev_qid);
+	rc = sfc_rx_qstart(sa, sw_index);
 	if (rc != 0)
 		goto fail_rx_qstart;
 
-	sas->rxq_info[rx_queue_id].deferred_started = B_TRUE;
+	rxq_info->deferred_started = B_TRUE;
 
 	sfc_adapter_unlock(sa);
 
@@ -1329,17 +1338,23 @@ sfc_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 }
 
 static int
-sfc_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+sfc_rx_queue_stop(struct rte_eth_dev *dev, uint16_t ethdev_qid)
 {
 	struct sfc_adapter_shared *sas = sfc_adapter_shared_by_eth_dev(dev);
 	struct sfc_adapter *sa = sfc_adapter_by_eth_dev(dev);
+	sfc_ethdev_qid_t sfc_ethdev_qid = ethdev_qid;
+	struct sfc_rxq_info *rxq_info;
+	sfc_sw_index_t sw_index;
 
-	sfc_log_init(sa, "RxQ=%u", rx_queue_id);
+	sfc_log_init(sa, "RxQ=%u", ethdev_qid);
 
 	sfc_adapter_lock(sa);
-	sfc_rx_qstop(sa, rx_queue_id);
 
-	sas->rxq_info[rx_queue_id].deferred_started = B_FALSE;
+	sw_index = sfc_rxq_sw_index_by_ethdev_rx_qid(sas, sfc_ethdev_qid);
+	sfc_rx_qstop(sa, sw_index);
+
+	rxq_info = sfc_rxq_info_by_ethdev_qid(sas, sfc_ethdev_qid);
+	rxq_info->deferred_started = B_FALSE;
 
 	sfc_adapter_unlock(sa);
 
@@ -1766,27 +1781,27 @@ sfc_pool_ops_supported(struct rte_eth_dev *dev, const char *pool)
 }
 
 static int
-sfc_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
+sfc_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t ethdev_qid)
 {
 	const struct sfc_adapter_priv *sap = sfc_adapter_priv_by_eth_dev(dev);
 	struct sfc_adapter_shared *sas = sfc_adapter_shared_by_eth_dev(dev);
+	sfc_ethdev_qid_t sfc_ethdev_qid = ethdev_qid;
 	struct sfc_rxq_info *rxq_info;
 
-	SFC_ASSERT(queue_id < sas->rxq_count);
-	rxq_info = &sas->rxq_info[queue_id];
+	rxq_info = sfc_rxq_info_by_ethdev_qid(sas, sfc_ethdev_qid);
 
 	return sap->dp_rx->intr_enable(rxq_info->dp);
 }
 
 static int
-sfc_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id)
+sfc_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t ethdev_qid)
 {
 	const struct sfc_adapter_priv *sap = sfc_adapter_priv_by_eth_dev(dev);
 	struct sfc_adapter_shared *sas = sfc_adapter_shared_by_eth_dev(dev);
+	sfc_ethdev_qid_t sfc_ethdev_qid = ethdev_qid;
 	struct sfc_rxq_info *rxq_info;
 
-	SFC_ASSERT(queue_id < sas->rxq_count);
-	rxq_info = &sas->rxq_info[queue_id];
+	rxq_info = sfc_rxq_info_by_ethdev_qid(sas, sfc_ethdev_qid);
 
 	return sap->dp_rx->intr_disable(rxq_info->dp);
 }
diff --git a/drivers/net/sfc/sfc_ev.c b/drivers/net/sfc/sfc_ev.c
index b4953ac647..2262994112 100644
--- a/drivers/net/sfc/sfc_ev.c
+++ b/drivers/net/sfc/sfc_ev.c
@@ -582,7 +582,7 @@ sfc_ev_qpoll(struct sfc_evq *evq)
 		int rc;
 
 		if (evq->dp_rxq != NULL) {
-			unsigned int rxq_sw_index;
+			sfc_sw_index_t rxq_sw_index;
 
 			rxq_sw_index = evq->dp_rxq->dpq.queue_id;
 
diff --git a/drivers/net/sfc/sfc_ev.h b/drivers/net/sfc/sfc_ev.h
index d796865b7f..5a9f85c2d9 100644
--- a/drivers/net/sfc/sfc_ev.h
+++ b/drivers/net/sfc/sfc_ev.h
@@ -69,9 +69,25 @@ struct sfc_evq {
  * Tx event queues follow Rx event queues.
  */
 
-static inline unsigned int
-sfc_evq_index_by_rxq_sw_index(__rte_unused struct sfc_adapter *sa,
-			      unsigned int rxq_sw_index)
+static inline sfc_ethdev_qid_t
+sfc_ethdev_rx_qid_by_rxq_sw_index(__rte_unused struct sfc_adapter_shared *sas,
+				  sfc_sw_index_t rxq_sw_index)
+{
+	/* Only ethdev queues are present for now */
+	return rxq_sw_index;
+}
+
+static inline sfc_sw_index_t
+sfc_rxq_sw_index_by_ethdev_rx_qid(__rte_unused struct sfc_adapter_shared *sas,
+				  sfc_ethdev_qid_t ethdev_qid)
+{
+	/* Only ethdev queues are present for now */
+	return ethdev_qid;
+}
+
+static inline sfc_sw_index_t
+sfc_evq_sw_index_by_rxq_sw_index(__rte_unused struct sfc_adapter *sa,
+				 sfc_sw_index_t rxq_sw_index)
 {
 	return 1 + rxq_sw_index;
 }
diff --git a/drivers/net/sfc/sfc_flow.c b/drivers/net/sfc/sfc_flow.c
index 0bfd284c9e..2db8af1759 100644
--- a/drivers/net/sfc/sfc_flow.c
+++ b/drivers/net/sfc/sfc_flow.c
@@ -1400,10 +1400,10 @@ sfc_flow_parse_queue(struct sfc_adapter *sa,
 	struct sfc_rxq *rxq;
 	struct sfc_rxq_info *rxq_info;
 
-	if (queue->index >= sfc_sa2shared(sa)->rxq_count)
+	if (queue->index >= sfc_sa2shared(sa)->ethdev_rxq_count)
 		return -EINVAL;
 
-	rxq = &sa->rxq_ctrl[queue->index];
+	rxq = sfc_rxq_ctrl_by_ethdev_qid(sa, queue->index);
 	spec_filter->template.efs_dmaq_id = (uint16_t)rxq->hw_index;
 
 	rxq_info = &sfc_sa2shared(sa)->rxq_info[queue->index];
@@ -1420,7 +1420,7 @@ sfc_flow_parse_rss(struct sfc_adapter *sa,
 {
 	struct sfc_adapter_shared * const sas = sfc_sa2shared(sa);
 	struct sfc_rss *rss = &sas->rss;
-	unsigned int rxq_sw_index;
+	sfc_ethdev_qid_t ethdev_qid;
 	struct sfc_rxq *rxq;
 	unsigned int rxq_hw_index_min;
 	unsigned int rxq_hw_index_max;
@@ -1434,18 +1434,19 @@ sfc_flow_parse_rss(struct sfc_adapter *sa,
 	if (action_rss->queue_num == 0)
 		return -EINVAL;
 
-	rxq_sw_index = sfc_sa2shared(sa)->rxq_count - 1;
-	rxq = &sa->rxq_ctrl[rxq_sw_index];
+	ethdev_qid = sfc_sa2shared(sa)->ethdev_rxq_count - 1;
+	rxq = sfc_rxq_ctrl_by_ethdev_qid(sa, ethdev_qid);
 	rxq_hw_index_min = rxq->hw_index;
 	rxq_hw_index_max = 0;
 
 	for (i = 0; i < action_rss->queue_num; ++i) {
-		rxq_sw_index = action_rss->queue[i];
+		ethdev_qid = action_rss->queue[i];
 
-		if (rxq_sw_index >= sfc_sa2shared(sa)->rxq_count)
+		if ((unsigned int)ethdev_qid >=
+		    sfc_sa2shared(sa)->ethdev_rxq_count)
 			return -EINVAL;
 
-		rxq = &sa->rxq_ctrl[rxq_sw_index];
+		rxq = sfc_rxq_ctrl_by_ethdev_qid(sa, ethdev_qid);
 
 		if (rxq->hw_index < rxq_hw_index_min)
 			rxq_hw_index_min = rxq->hw_index;
@@ -1509,9 +1510,10 @@ sfc_flow_parse_rss(struct sfc_adapter *sa,
 
 	for (i = 0; i < RTE_DIM(sfc_rss_conf->rss_tbl); ++i) {
 		unsigned int nb_queues = action_rss->queue_num;
-		unsigned int rxq_sw_index = action_rss->queue[i % nb_queues];
-		struct sfc_rxq *rxq = &sa->rxq_ctrl[rxq_sw_index];
+		struct sfc_rxq *rxq;
 
+		ethdev_qid = action_rss->queue[i % nb_queues];
+		rxq = sfc_rxq_ctrl_by_ethdev_qid(sa, ethdev_qid);
 		sfc_rss_conf->rss_tbl[i] = rxq->hw_index - rxq_hw_index_min;
 	}
 
diff --git a/drivers/net/sfc/sfc_rx.c b/drivers/net/sfc/sfc_rx.c
index 461afc5168..597785ae02 100644
--- a/drivers/net/sfc/sfc_rx.c
+++ b/drivers/net/sfc/sfc_rx.c
@@ -654,14 +654,17 @@ struct sfc_dp_rx sfc_efx_rx = {
 };
 
 static void
-sfc_rx_qflush(struct sfc_adapter *sa, unsigned int sw_index)
+sfc_rx_qflush(struct sfc_adapter *sa, sfc_sw_index_t sw_index)
 {
+	struct sfc_adapter_shared *sas = sfc_sa2shared(sa);
+	sfc_ethdev_qid_t ethdev_qid;
 	struct sfc_rxq_info *rxq_info;
 	struct sfc_rxq *rxq;
 	unsigned int retry_count;
 	unsigned int wait_count;
 	int rc;
 
+	ethdev_qid = sfc_ethdev_rx_qid_by_rxq_sw_index(sas, sw_index);
 	rxq_info = &sfc_sa2shared(sa)->rxq_info[sw_index];
 	SFC_ASSERT(rxq_info->state & SFC_RXQ_STARTED);
 
@@ -698,13 +701,16 @@ sfc_rx_qflush(struct sfc_adapter *sa, unsigned int sw_index)
 			 (wait_count++ < SFC_RX_QFLUSH_POLL_ATTEMPTS));
 
 		if (rxq_info->state & SFC_RXQ_FLUSHING)
-			sfc_err(sa, "RxQ %u flush timed out", sw_index);
+			sfc_err(sa, "RxQ %d (internal %u) flush timed out",
+				ethdev_qid, sw_index);
 
 		if (rxq_info->state & SFC_RXQ_FLUSH_FAILED)
-			sfc_err(sa, "RxQ %u flush failed", sw_index);
+			sfc_err(sa, "RxQ %d (internal %u) flush failed",
+				ethdev_qid, sw_index);
 
 		if (rxq_info->state & SFC_RXQ_FLUSHED)
-			sfc_notice(sa, "RxQ %u flushed", sw_index);
+			sfc_notice(sa, "RxQ %d (internal %u) flushed",
+				   ethdev_qid, sw_index);
 	}
 
 	sa->priv.dp_rx->qpurge(rxq_info->dp);
@@ -764,17 +770,20 @@ sfc_rx_default_rxq_set_filter(struct sfc_adapter *sa, struct sfc_rxq *rxq)
 }
 
 int
-sfc_rx_qstart(struct sfc_adapter *sa, unsigned int sw_index)
+sfc_rx_qstart(struct sfc_adapter *sa, sfc_sw_index_t sw_index)
 {
+	struct sfc_adapter_shared *sas = sfc_sa2shared(sa);
+	sfc_ethdev_qid_t ethdev_qid;
 	struct sfc_rxq_info *rxq_info;
 	struct sfc_rxq *rxq;
 	struct sfc_evq *evq;
 	efx_rx_prefix_layout_t pinfo;
 	int rc;
 
-	sfc_log_init(sa, "sw_index=%u", sw_index);
-
 	SFC_ASSERT(sw_index < sfc_sa2shared(sa)->rxq_count);
+	ethdev_qid = sfc_ethdev_rx_qid_by_rxq_sw_index(sas, sw_index);
+
+	sfc_log_init(sa, "RxQ %d (internal %u)", ethdev_qid, sw_index);
 
 	rxq_info = &sfc_sa2shared(sa)->rxq_info[sw_index];
 	SFC_ASSERT(rxq_info->state == SFC_RXQ_INITIALIZED);
@@ -782,7 +791,7 @@ sfc_rx_qstart(struct sfc_adapter *sa, unsigned int sw_index)
 	rxq = &sa->rxq_ctrl[sw_index];
 	evq = rxq->evq;
 
-	rc = sfc_ev_qstart(evq, sfc_evq_index_by_rxq_sw_index(sa, sw_index));
+	rc = sfc_ev_qstart(evq, sfc_evq_sw_index_by_rxq_sw_index(sa, sw_index));
 	if (rc != 0)
 		goto fail_ev_qstart;
 
@@ -833,15 +842,16 @@ sfc_rx_qstart(struct sfc_adapter *sa, unsigned int sw_index)
 
 	rxq_info->state |= SFC_RXQ_STARTED;
 
-	if (sw_index == 0 && !sfc_sa2shared(sa)->isolated) {
+	if (ethdev_qid == 0 && !sfc_sa2shared(sa)->isolated) {
 		rc = sfc_rx_default_rxq_set_filter(sa, rxq);
 		if (rc != 0)
 			goto fail_mac_filter_default_rxq_set;
 	}
 
 	/* It seems to be used by DPDK for debug purposes only ('rte_ether') */
-	sa->eth_dev->data->rx_queue_state[sw_index] =
-		RTE_ETH_QUEUE_STATE_STARTED;
+	if (ethdev_qid != SFC_ETHDEV_QID_INVALID)
+		sa->eth_dev->data->rx_queue_state[ethdev_qid] =
+			RTE_ETH_QUEUE_STATE_STARTED;
 
 	return 0;
 
@@ -864,14 +874,17 @@ sfc_rx_qstart(struct sfc_adapter *sa, unsigned int sw_index)
 }
 
 void
-sfc_rx_qstop(struct sfc_adapter *sa, unsigned int sw_index)
+sfc_rx_qstop(struct sfc_adapter *sa, sfc_sw_index_t sw_index)
 {
+	struct sfc_adapter_shared *sas = sfc_sa2shared(sa);
+	sfc_ethdev_qid_t ethdev_qid;
 	struct sfc_rxq_info *rxq_info;
 	struct sfc_rxq *rxq;
 
-	sfc_log_init(sa, "sw_index=%u", sw_index);
-
 	SFC_ASSERT(sw_index < sfc_sa2shared(sa)->rxq_count);
+	ethdev_qid = sfc_ethdev_rx_qid_by_rxq_sw_index(sas, sw_index);
+
+	sfc_log_init(sa, "RxQ %d (internal %u)", ethdev_qid, sw_index);
 
 	rxq_info = &sfc_sa2shared(sa)->rxq_info[sw_index];
 
@@ -880,13 +893,14 @@ sfc_rx_qstop(struct sfc_adapter *sa, unsigned int sw_index)
 	SFC_ASSERT(rxq_info->state & SFC_RXQ_STARTED);
 
 	/* It seems to be used by DPDK for debug purposes only ('rte_ether') */
-	sa->eth_dev->data->rx_queue_state[sw_index] =
-		RTE_ETH_QUEUE_STATE_STOPPED;
+	if (ethdev_qid != SFC_ETHDEV_QID_INVALID)
+		sa->eth_dev->data->rx_queue_state[ethdev_qid] =
+			RTE_ETH_QUEUE_STATE_STOPPED;
 
 	rxq = &sa->rxq_ctrl[sw_index];
 	sa->priv.dp_rx->qstop(rxq_info->dp, &rxq->evq->read_ptr);
 
-	if (sw_index == 0)
+	if (ethdev_qid == 0)
 		efx_mac_filter_default_rxq_clear(sa->nic);
 
 	sfc_rx_qflush(sa, sw_index);
@@ -1056,11 +1070,13 @@ sfc_rx_mb_pool_buf_size(struct sfc_adapter *sa, struct rte_mempool *mb_pool)
 }
 
 int
-sfc_rx_qinit(struct sfc_adapter *sa, unsigned int sw_index,
+sfc_rx_qinit(struct sfc_adapter *sa, sfc_sw_index_t sw_index,
 	     uint16_t nb_rx_desc, unsigned int socket_id,
 	     const struct rte_eth_rxconf *rx_conf,
 	     struct rte_mempool *mb_pool)
 {
+	struct sfc_adapter_shared *sas = sfc_sa2shared(sa);
+	sfc_ethdev_qid_t ethdev_qid;
 	const efx_nic_cfg_t *encp = efx_nic_cfg_get(sa->nic);
 	struct sfc_rss *rss = &sfc_sa2shared(sa)->rss;
 	int rc;
@@ -1092,16 +1108,22 @@ sfc_rx_qinit(struct sfc_adapter *sa, unsigned int sw_index,
 	SFC_ASSERT(rxq_entries <= sa->rxq_max_entries);
 	SFC_ASSERT(rxq_max_fill_level <= nb_rx_desc);
 
-	offloads = rx_conf->offloads |
-		sa->eth_dev->data->dev_conf.rxmode.offloads;
+	ethdev_qid = sfc_ethdev_rx_qid_by_rxq_sw_index(sas, sw_index);
+
+	offloads = rx_conf->offloads;
+	/* Add device level Rx offloads if the queue is an ethdev Rx queue */
+	if (ethdev_qid != SFC_ETHDEV_QID_INVALID)
+		offloads |= sa->eth_dev->data->dev_conf.rxmode.offloads;
+
 	rc = sfc_rx_qcheck_conf(sa, rxq_max_fill_level, rx_conf, offloads);
 	if (rc != 0)
 		goto fail_bad_conf;
 
 	buf_size = sfc_rx_mb_pool_buf_size(sa, mb_pool);
 	if (buf_size == 0) {
-		sfc_err(sa, "RxQ %u mbuf pool object size is too small",
-			sw_index);
+		sfc_err(sa,
+			"RxQ %d (internal %u) mbuf pool object size is too small",
+			ethdev_qid, sw_index);
 		rc = EINVAL;
 		goto fail_bad_conf;
 	}
@@ -1111,11 +1133,13 @@ sfc_rx_qinit(struct sfc_adapter *sa, unsigned int sw_index,
 				  (offloads & DEV_RX_OFFLOAD_SCATTER),
 				  encp->enc_rx_scatter_max,
 				  &error)) {
-		sfc_err(sa, "RxQ %u MTU check failed: %s", sw_index, error);
-		sfc_err(sa, "RxQ %u calculated Rx buffer size is %u vs "
+		sfc_err(sa, "RxQ %d (internal %u) MTU check failed: %s",
+			ethdev_qid, sw_index, error);
+		sfc_err(sa,
+			"RxQ %d (internal %u) calculated Rx buffer size is %u vs "
 			"PDU size %u plus Rx prefix %u bytes",
-			sw_index, buf_size, (unsigned int)sa->port.pdu,
-			encp->enc_rx_prefix_size);
+			ethdev_qid, sw_index, buf_size,
+			(unsigned int)sa->port.pdu, encp->enc_rx_prefix_size);
 		rc = EINVAL;
 		goto fail_bad_conf;
 	}
@@ -1193,7 +1217,7 @@ sfc_rx_qinit(struct sfc_adapter *sa, unsigned int sw_index,
 	info.flags = rxq_info->rxq_flags;
 	info.rxq_entries = rxq_info->entries;
 	info.rxq_hw_ring = rxq->mem.esm_base;
-	info.evq_hw_index = sfc_evq_index_by_rxq_sw_index(sa, sw_index);
+	info.evq_hw_index = sfc_evq_sw_index_by_rxq_sw_index(sa, sw_index);
 	info.evq_entries = evq_entries;
 	info.evq_hw_ring = evq->mem.esm_base;
 	info.hw_index = rxq->hw_index;
@@ -1231,13 +1255,18 @@ sfc_rx_qinit(struct sfc_adapter *sa, unsigned int sw_index,
 }
 
 void
-sfc_rx_qfini(struct sfc_adapter *sa, unsigned int sw_index)
+sfc_rx_qfini(struct sfc_adapter *sa, sfc_sw_index_t sw_index)
 {
+	struct sfc_adapter_shared *sas = sfc_sa2shared(sa);
+	sfc_ethdev_qid_t ethdev_qid;
 	struct sfc_rxq_info *rxq_info;
 	struct sfc_rxq *rxq;
 
 	SFC_ASSERT(sw_index < sfc_sa2shared(sa)->rxq_count);
-	sa->eth_dev->data->rx_queues[sw_index] = NULL;
+	ethdev_qid = sfc_ethdev_rx_qid_by_rxq_sw_index(sas, sw_index);
+
+	if (ethdev_qid != SFC_ETHDEV_QID_INVALID)
+		sa->eth_dev->data->rx_queues[ethdev_qid] = NULL;
 
 	rxq_info = &sfc_sa2shared(sa)->rxq_info[sw_index];
 
@@ -1479,14 +1508,41 @@ sfc_rx_rss_config(struct sfc_adapter *sa)
 	return rc;
 }
 
+struct sfc_rxq_info *
+sfc_rxq_info_by_ethdev_qid(struct sfc_adapter_shared *sas,
+			   sfc_ethdev_qid_t ethdev_qid)
+{
+	sfc_sw_index_t sw_index;
+
+	SFC_ASSERT((unsigned int)ethdev_qid < sas->ethdev_rxq_count);
+	SFC_ASSERT(ethdev_qid != SFC_ETHDEV_QID_INVALID);
+
+	sw_index = sfc_rxq_sw_index_by_ethdev_rx_qid(sas, ethdev_qid);
+	return &sas->rxq_info[sw_index];
+}
+
+struct sfc_rxq *
+sfc_rxq_ctrl_by_ethdev_qid(struct sfc_adapter *sa, sfc_ethdev_qid_t ethdev_qid)
+{
+	struct sfc_adapter_shared *sas = sfc_sa2shared(sa);
+	sfc_sw_index_t sw_index;
+
+	SFC_ASSERT((unsigned int)ethdev_qid < sas->ethdev_rxq_count);
+	SFC_ASSERT(ethdev_qid != SFC_ETHDEV_QID_INVALID);
+
+	sw_index = sfc_rxq_sw_index_by_ethdev_rx_qid(sas, ethdev_qid);
+	return &sa->rxq_ctrl[sw_index];
+}
+
 int
 sfc_rx_start(struct sfc_adapter *sa)
 {
 	struct sfc_adapter_shared * const sas = sfc_sa2shared(sa);
-	unsigned int sw_index;
+	sfc_sw_index_t sw_index;
 	int rc;
 
-	sfc_log_init(sa, "rxq_count=%u", sas->rxq_count);
+	sfc_log_init(sa, "rxq_count=%u (internal %u)", sas->ethdev_rxq_count,
+		     sas->rxq_count);
 
 	rc = efx_rx_init(sa->nic);
 	if (rc != 0)
@@ -1524,9 +1580,10 @@ void
 sfc_rx_stop(struct sfc_adapter *sa)
 {
 	struct sfc_adapter_shared * const sas = sfc_sa2shared(sa);
-	unsigned int sw_index;
+	sfc_sw_index_t sw_index;
 
-	sfc_log_init(sa, "rxq_count=%u", sas->rxq_count);
+	sfc_log_init(sa, "rxq_count=%u (internal %u)", sas->ethdev_rxq_count,
+		     sas->rxq_count);
 
 	sw_index = sas->rxq_count;
 	while (sw_index-- > 0) {
@@ -1538,7 +1595,7 @@ sfc_rx_stop(struct sfc_adapter *sa)
 }
 
 static int
-sfc_rx_qinit_info(struct sfc_adapter *sa, unsigned int sw_index)
+sfc_rx_qinit_info(struct sfc_adapter *sa, sfc_sw_index_t sw_index)
 {
 	struct sfc_adapter_shared * const sas = sfc_sa2shared(sa);
 	struct sfc_rxq_info *rxq_info = &sas->rxq_info[sw_index];
@@ -1606,17 +1663,29 @@ static void
 sfc_rx_fini_queues(struct sfc_adapter *sa, unsigned int nb_rx_queues)
 {
 	struct sfc_adapter_shared * const sas = sfc_sa2shared(sa);
-	int sw_index;
+	sfc_sw_index_t sw_index;
+	sfc_ethdev_qid_t ethdev_qid;
 
-	SFC_ASSERT(nb_rx_queues <= sas->rxq_count);
+	SFC_ASSERT(nb_rx_queues <= sas->ethdev_rxq_count);
 
-	sw_index = sas->rxq_count;
-	while (--sw_index >= (int)nb_rx_queues) {
-		if (sas->rxq_info[sw_index].state & SFC_RXQ_INITIALIZED)
+	/*
+	 * Finalize only ethdev queues since other ones are finalized only
+	 * on device close and they may require additional deinitialization.
+	 */
+	ethdev_qid = sas->ethdev_rxq_count;
+	while (--ethdev_qid >= (int)nb_rx_queues) {
+		struct sfc_rxq_info *rxq_info;
+
+		rxq_info = sfc_rxq_info_by_ethdev_qid(sas, ethdev_qid);
+		if (rxq_info->state & SFC_RXQ_INITIALIZED) {
+			sw_index = sfc_rxq_sw_index_by_ethdev_rx_qid(sas,
+								ethdev_qid);
 			sfc_rx_qfini(sa, sw_index);
+		}
+
 	}
 
-	sas->rxq_count = nb_rx_queues;
+	sas->ethdev_rxq_count = nb_rx_queues;
 }
 
 /**
@@ -1637,7 +1706,7 @@ sfc_rx_configure(struct sfc_adapter *sa)
 	int rc;
 
 	sfc_log_init(sa, "nb_rx_queues=%u (old %u)",
-		     nb_rx_queues, sas->rxq_count);
+		     nb_rx_queues, sas->ethdev_rxq_count);
 
 	rc = sfc_rx_check_mode(sa, &dev_conf->rxmode);
 	if (rc != 0)
@@ -1666,7 +1735,7 @@ sfc_rx_configure(struct sfc_adapter *sa)
 		struct sfc_rxq_info *new_rxq_info;
 		struct sfc_rxq *new_rxq_ctrl;
 
-		if (nb_rx_queues < sas->rxq_count)
+		if (nb_rx_queues < sas->ethdev_rxq_count)
 			sfc_rx_fini_queues(sa, nb_rx_queues);
 
 		rc = ENOMEM;
@@ -1685,30 +1754,38 @@ sfc_rx_configure(struct sfc_adapter *sa)
 		sas->rxq_info = new_rxq_info;
 		sa->rxq_ctrl = new_rxq_ctrl;
 		if (nb_rx_queues > sas->rxq_count) {
-			memset(&sas->rxq_info[sas->rxq_count], 0,
-			       (nb_rx_queues - sas->rxq_count) *
+			unsigned int rxq_count = sas->rxq_count;
+
+			memset(&sas->rxq_info[rxq_count], 0,
+			       (nb_rx_queues - rxq_count) *
 			       sizeof(sas->rxq_info[0]));
-			memset(&sa->rxq_ctrl[sas->rxq_count], 0,
-			       (nb_rx_queues - sas->rxq_count) *
+			memset(&sa->rxq_ctrl[rxq_count], 0,
+			       (nb_rx_queues - rxq_count) *
 			       sizeof(sa->rxq_ctrl[0]));
 		}
 	}
 
-	while (sas->rxq_count < nb_rx_queues) {
-		rc = sfc_rx_qinit_info(sa, sas->rxq_count);
+	while (sas->ethdev_rxq_count < nb_rx_queues) {
+		sfc_sw_index_t sw_index;
+
+		sw_index = sfc_rxq_sw_index_by_ethdev_rx_qid(sas,
+							sas->ethdev_rxq_count);
+		rc = sfc_rx_qinit_info(sa, sw_index);
 		if (rc != 0)
 			goto fail_rx_qinit_info;
 
-		sas->rxq_count++;
+		sas->ethdev_rxq_count++;
 	}
 
+	sas->rxq_count = sas->ethdev_rxq_count;
+
 configure_rss:
 	rss->channels = (dev_conf->rxmode.mq_mode == ETH_MQ_RX_RSS) ?
-			 MIN(sas->rxq_count, EFX_MAXRSS) : 0;
+			 MIN(sas->ethdev_rxq_count, EFX_MAXRSS) : 0;
 
 	if (rss->channels > 0) {
 		struct rte_eth_rss_conf *adv_conf_rss;
-		unsigned int sw_index;
+		sfc_sw_index_t sw_index;
 
 		for (sw_index = 0; sw_index < EFX_RSS_TBL_SIZE; ++sw_index)
 			rss->tbl[sw_index] = sw_index % rss->channels;
diff --git a/drivers/net/sfc/sfc_rx.h b/drivers/net/sfc/sfc_rx.h
index 2730454fd6..96c7dc415d 100644
--- a/drivers/net/sfc/sfc_rx.h
+++ b/drivers/net/sfc/sfc_rx.h
@@ -119,6 +119,10 @@ struct sfc_rxq_info {
 };
 
 struct sfc_rxq_info *sfc_rxq_info_by_dp_rxq(const struct sfc_dp_rxq *dp_rxq);
+struct sfc_rxq_info *sfc_rxq_info_by_ethdev_qid(struct sfc_adapter_shared *sas,
+						sfc_ethdev_qid_t ethdev_qid);
+struct sfc_rxq *sfc_rxq_ctrl_by_ethdev_qid(struct sfc_adapter *sa,
+					   sfc_ethdev_qid_t ethdev_qid);
 
 int sfc_rx_configure(struct sfc_adapter *sa);
 void sfc_rx_close(struct sfc_adapter *sa);
@@ -129,9 +133,9 @@ int sfc_rx_qinit(struct sfc_adapter *sa, unsigned int rx_queue_id,
 		 uint16_t nb_rx_desc, unsigned int socket_id,
 		 const struct rte_eth_rxconf *rx_conf,
 		 struct rte_mempool *mb_pool);
-void sfc_rx_qfini(struct sfc_adapter *sa, unsigned int sw_index);
-int sfc_rx_qstart(struct sfc_adapter *sa, unsigned int sw_index);
-void sfc_rx_qstop(struct sfc_adapter *sa, unsigned int sw_index);
+void sfc_rx_qfini(struct sfc_adapter *sa, sfc_sw_index_t sw_index);
+int sfc_rx_qstart(struct sfc_adapter *sa, sfc_sw_index_t sw_index);
+void sfc_rx_qstop(struct sfc_adapter *sa, sfc_sw_index_t sw_index);
 
 uint64_t sfc_rx_get_dev_offload_caps(struct sfc_adapter *sa);
 uint64_t sfc_rx_get_queue_offload_caps(struct sfc_adapter *sa);
-- 
2.30.2


^ permalink raw reply	[flat|nested] 104+ messages in thread

* [dpdk-dev] [PATCH v3 02/20] net/sfc: do not enable interrupts on internal Rx queues
  2021-06-18 13:40 ` [dpdk-dev] [PATCH v3 " Andrew Rybchenko
  2021-06-18 13:40   ` [dpdk-dev] [PATCH v3 01/20] net/sfc: introduce ethdev Rx queue ID Andrew Rybchenko
@ 2021-06-18 13:40   ` Andrew Rybchenko
  2021-06-18 13:40   ` [dpdk-dev] [PATCH v3 03/20] common/sfc_efx/base: separate target EvQ and IRQ config Andrew Rybchenko
                     ` (17 subsequent siblings)
  19 siblings, 0 replies; 104+ messages in thread
From: Andrew Rybchenko @ 2021-06-18 13:40 UTC (permalink / raw)
  To: dev; +Cc: David Marchand

The rxq_intr flag requests interrupt mode support for ethdev Rx queues.
There are no internal Rx queues yet.

Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
---
 drivers/net/sfc/sfc_ev.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/drivers/net/sfc/sfc_ev.c b/drivers/net/sfc/sfc_ev.c
index 2262994112..9a8149f052 100644
--- a/drivers/net/sfc/sfc_ev.c
+++ b/drivers/net/sfc/sfc_ev.c
@@ -663,7 +663,9 @@ sfc_ev_qstart(struct sfc_evq *evq, unsigned int hw_index)
 		     efx_evq_size(sa->nic, evq->entries, evq_flags));
 
 	if ((sa->intr.lsc_intr && hw_index == sa->mgmt_evq_index) ||
-	    (sa->intr.rxq_intr && evq->dp_rxq != NULL))
+	    (sa->intr.rxq_intr && evq->dp_rxq != NULL &&
+	     sfc_ethdev_rx_qid_by_rxq_sw_index(sfc_sa2shared(sa),
+		evq->dp_rxq->dpq.queue_id) != SFC_ETHDEV_QID_INVALID))
 		evq_flags |= EFX_EVQ_FLAGS_NOTIFY_INTERRUPT;
 	else
 		evq_flags |= EFX_EVQ_FLAGS_NOTIFY_DISABLED;
-- 
2.30.2


^ permalink raw reply	[flat|nested] 104+ messages in thread

* [dpdk-dev] [PATCH v3 03/20] common/sfc_efx/base: separate target EvQ and IRQ config
  2021-06-18 13:40 ` [dpdk-dev] [PATCH v3 " Andrew Rybchenko
  2021-06-18 13:40   ` [dpdk-dev] [PATCH v3 01/20] net/sfc: introduce ethdev Rx queue ID Andrew Rybchenko
  2021-06-18 13:40   ` [dpdk-dev] [PATCH v3 02/20] net/sfc: do not enable interrupts on internal Rx queues Andrew Rybchenko
@ 2021-06-18 13:40   ` Andrew Rybchenko
  2021-06-18 13:40   ` [dpdk-dev] [PATCH v3 04/20] common/sfc_efx/base: support custom EvQ to IRQ mapping Andrew Rybchenko
                     ` (16 subsequent siblings)
  19 siblings, 0 replies; 104+ messages in thread
From: Andrew Rybchenko @ 2021-06-18 13:40 UTC (permalink / raw)
  To: dev; +Cc: David Marchand, Andy Moreton

The target EvQ and the IRQ number are specified in the same location
in the MCDI request. The value is treated as an IRQ number if the
event queue is interrupting (the corresponding flag is set) and as a
target event queue otherwise.

However, it is better to separate them at the helper API level to make
the intent clearer.
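
A rough sketch of the caller side after the change (the surrounding
variables follow the updated efx_mcdi_init_evq() prototype and are assumed
to be in scope):

	/* Interrupting EvQ: pass the IRQ number; target_evq is ignored. */
	rc = efx_mcdi_init_evq(enp, index, esmp, ndescs,
	    irq, 0 /* target_evq */, us, flags, low_latency);

	/* Non-interrupting EvQ: pass the EvQ to notify instead of an IRQ. */
	rc = efx_mcdi_init_evq(enp, index, esmp, ndescs,
	    0 /* irq */, target_evq, us, flags, low_latency);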

Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
---
 drivers/common/sfc_efx/base/ef10_ev.c  | 12 +++++++-----
 drivers/common/sfc_efx/base/efx_impl.h |  1 +
 drivers/common/sfc_efx/base/efx_mcdi.c |  7 ++++++-
 drivers/common/sfc_efx/base/rhead_ev.c | 12 +++++++-----
 4 files changed, 21 insertions(+), 11 deletions(-)

diff --git a/drivers/common/sfc_efx/base/ef10_ev.c b/drivers/common/sfc_efx/base/ef10_ev.c
index ea59beecc4..c0cbc427b9 100644
--- a/drivers/common/sfc_efx/base/ef10_ev.c
+++ b/drivers/common/sfc_efx/base/ef10_ev.c
@@ -121,7 +121,8 @@ ef10_ev_qcreate(
 	__in		efx_evq_t *eep)
 {
 	efx_nic_cfg_t *encp = &(enp->en_nic_cfg);
-	uint32_t irq;
+	uint32_t irq = 0;
+	uint32_t target_evq = 0;
 	efx_rc_t rc;
 	boolean_t low_latency;
 
@@ -159,11 +160,12 @@ ef10_ev_qcreate(
 	    EFX_EVQ_FLAGS_NOTIFY_INTERRUPT) {
 		irq = index;
 	} else if (index == EFX_EF10_ALWAYS_INTERRUPTING_EVQ_INDEX) {
-		irq = index;
+		/* Use the first interrupt for always interrupting EvQ */
+		irq = 0;
 		flags = (flags & ~EFX_EVQ_FLAGS_NOTIFY_MASK) |
 		    EFX_EVQ_FLAGS_NOTIFY_INTERRUPT;
 	} else {
-		irq = EFX_EF10_ALWAYS_INTERRUPTING_EVQ_INDEX;
+		target_evq = EFX_EF10_ALWAYS_INTERRUPTING_EVQ_INDEX;
 	}
 
 	/*
@@ -187,8 +189,8 @@ ef10_ev_qcreate(
 	 * decision and low_latency hint is ignored.
 	 */
 	low_latency = encp->enc_datapath_cap_evb ? 0 : 1;
-	rc = efx_mcdi_init_evq(enp, index, esmp, ndescs, irq, us, flags,
-	    low_latency);
+	rc = efx_mcdi_init_evq(enp, index, esmp, ndescs, irq, target_evq, us,
+	    flags, low_latency);
 	if (rc != 0)
 		goto fail2;
 
diff --git a/drivers/common/sfc_efx/base/efx_impl.h b/drivers/common/sfc_efx/base/efx_impl.h
index 4a513171a1..c1f98def40 100644
--- a/drivers/common/sfc_efx/base/efx_impl.h
+++ b/drivers/common/sfc_efx/base/efx_impl.h
@@ -1535,6 +1535,7 @@ efx_mcdi_init_evq(
 	__in		efsys_mem_t *esmp,
 	__in		size_t nevs,
 	__in		uint32_t irq,
+	__in		uint32_t target_evq,
 	__in		uint32_t us,
 	__in		uint32_t flags,
 	__in		boolean_t low_latency);
diff --git a/drivers/common/sfc_efx/base/efx_mcdi.c b/drivers/common/sfc_efx/base/efx_mcdi.c
index f226ffd923..b68fc0503d 100644
--- a/drivers/common/sfc_efx/base/efx_mcdi.c
+++ b/drivers/common/sfc_efx/base/efx_mcdi.c
@@ -2568,6 +2568,7 @@ efx_mcdi_init_evq(
 	__in		efsys_mem_t *esmp,
 	__in		size_t nevs,
 	__in		uint32_t irq,
+	__in		uint32_t target_evq,
 	__in		uint32_t us,
 	__in		uint32_t flags,
 	__in		boolean_t low_latency)
@@ -2602,11 +2603,15 @@ efx_mcdi_init_evq(
 
 	MCDI_IN_SET_DWORD(req, INIT_EVQ_V2_IN_SIZE, nevs);
 	MCDI_IN_SET_DWORD(req, INIT_EVQ_V2_IN_INSTANCE, instance);
-	MCDI_IN_SET_DWORD(req, INIT_EVQ_V2_IN_IRQ_NUM, irq);
 
 	interrupting = ((flags & EFX_EVQ_FLAGS_NOTIFY_MASK) ==
 	    EFX_EVQ_FLAGS_NOTIFY_INTERRUPT);
 
+	if (interrupting)
+		MCDI_IN_SET_DWORD(req, INIT_EVQ_V2_IN_IRQ_NUM, irq);
+	else
+		MCDI_IN_SET_DWORD(req, INIT_EVQ_V2_IN_TARGET_EVQ, target_evq);
+
 	if (encp->enc_init_evq_v2_supported) {
 		/*
 		 * On Medford the low latency license is required to enable RX
diff --git a/drivers/common/sfc_efx/base/rhead_ev.c b/drivers/common/sfc_efx/base/rhead_ev.c
index 2099581fd7..533cd9e34a 100644
--- a/drivers/common/sfc_efx/base/rhead_ev.c
+++ b/drivers/common/sfc_efx/base/rhead_ev.c
@@ -106,7 +106,8 @@ rhead_ev_qcreate(
 {
 	const efx_nic_cfg_t *encp = efx_nic_cfg_get(enp);
 	size_t desc_size;
-	uint32_t irq;
+	uint32_t irq = 0;
+	uint32_t target_evq = 0;
 	efx_rc_t rc;
 
 	_NOTE(ARGUNUSED(id))	/* buftbl id managed by MC */
@@ -142,19 +143,20 @@ rhead_ev_qcreate(
 	    EFX_EVQ_FLAGS_NOTIFY_INTERRUPT) {
 		irq = index;
 	} else if (index == EFX_RHEAD_ALWAYS_INTERRUPTING_EVQ_INDEX) {
-		irq = index;
+		/* Use the first interrupt for always interrupting EvQ */
+		irq = 0;
 		flags = (flags & ~EFX_EVQ_FLAGS_NOTIFY_MASK) |
 		    EFX_EVQ_FLAGS_NOTIFY_INTERRUPT;
 	} else {
-		irq = EFX_RHEAD_ALWAYS_INTERRUPTING_EVQ_INDEX;
+		target_evq = EFX_RHEAD_ALWAYS_INTERRUPTING_EVQ_INDEX;
 	}
 
 	/*
 	 * Interrupts may be raised for events immediately after the queue is
 	 * created. See bug58606.
 	 */
-	rc = efx_mcdi_init_evq(enp, index, esmp, ndescs, irq, us, flags,
-	    B_FALSE);
+	rc = efx_mcdi_init_evq(enp, index, esmp, ndescs, irq, target_evq, us,
+	    flags, B_FALSE);
 	if (rc != 0)
 		goto fail2;
 
-- 
2.30.2


^ permalink raw reply	[flat|nested] 104+ messages in thread

* [dpdk-dev] [PATCH v3 04/20] common/sfc_efx/base: support custom EvQ to IRQ mapping
  2021-06-18 13:40 ` [dpdk-dev] [PATCH v3 " Andrew Rybchenko
                     ` (2 preceding siblings ...)
  2021-06-18 13:40   ` [dpdk-dev] [PATCH v3 03/20] common/sfc_efx/base: separate target EvQ and IRQ config Andrew Rybchenko
@ 2021-06-18 13:40   ` Andrew Rybchenko
  2021-06-18 13:40   ` [dpdk-dev] [PATCH v3 05/20] net/sfc: explicitly control IRQ used for Rx queues Andrew Rybchenko
                     ` (15 subsequent siblings)
  19 siblings, 0 replies; 104+ messages in thread
From: Andrew Rybchenko @ 2021-06-18 13:40 UTC (permalink / raw)
  To: dev; +Cc: David Marchand, Andy Moreton

Custom mapping is supported for the EF10 and EF100 families only.

A driver (e.g. a DPDK PMD) may need to customize the mapping of EvQs
to interrupts if, for example, extra EvQs are used for housekeeping
in polling or wake-up (via another EvQ) mode.
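
A usage sketch of the new API (the EvQ flags and argument values are
illustrative; real callers also OR in the EvQ type and other flags):

	efx_evq_t *eep;
	uint32_t irq = 1;	/* caller-chosen, hypothetical mapping */
	efx_rc_t rc;

	rc = efx_ev_qcreate_irq(enp, evq_index, esmp, n_descs,
	    0 /* id, unused on EF10/EF100 */, 0 /* us, no moderation */,
	    EFX_EVQ_FLAGS_NOTIFY_INTERRUPT, irq, &eep);
	if (rc != 0)
		goto fail_ev_qcreate;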

Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
---
 drivers/common/sfc_efx/base/ef10_ev.c    |  4 +--
 drivers/common/sfc_efx/base/ef10_impl.h  |  1 +
 drivers/common/sfc_efx/base/efx.h        | 13 ++++++++
 drivers/common/sfc_efx/base/efx_ev.c     | 39 ++++++++++++++++++++----
 drivers/common/sfc_efx/base/efx_impl.h   |  3 +-
 drivers/common/sfc_efx/base/rhead_ev.c   |  4 +--
 drivers/common/sfc_efx/base/rhead_impl.h |  1 +
 drivers/common/sfc_efx/version.map       |  1 +
 8 files changed, 55 insertions(+), 11 deletions(-)

diff --git a/drivers/common/sfc_efx/base/ef10_ev.c b/drivers/common/sfc_efx/base/ef10_ev.c
index c0cbc427b9..ba078940b6 100644
--- a/drivers/common/sfc_efx/base/ef10_ev.c
+++ b/drivers/common/sfc_efx/base/ef10_ev.c
@@ -118,10 +118,10 @@ ef10_ev_qcreate(
 	__in		uint32_t id,
 	__in		uint32_t us,
 	__in		uint32_t flags,
+	__in		uint32_t irq,
 	__in		efx_evq_t *eep)
 {
 	efx_nic_cfg_t *encp = &(enp->en_nic_cfg);
-	uint32_t irq = 0;
 	uint32_t target_evq = 0;
 	efx_rc_t rc;
 	boolean_t low_latency;
@@ -158,7 +158,7 @@ ef10_ev_qcreate(
 	/* INIT_EVQ expects function-relative vector number */
 	if ((flags & EFX_EVQ_FLAGS_NOTIFY_MASK) ==
 	    EFX_EVQ_FLAGS_NOTIFY_INTERRUPT) {
-		irq = index;
+		/* IRQ number is specified by caller */
 	} else if (index == EFX_EF10_ALWAYS_INTERRUPTING_EVQ_INDEX) {
 		/* Use the first interrupt for always interrupting EvQ */
 		irq = 0;
diff --git a/drivers/common/sfc_efx/base/ef10_impl.h b/drivers/common/sfc_efx/base/ef10_impl.h
index 40210fbd91..7c8d51b7a5 100644
--- a/drivers/common/sfc_efx/base/ef10_impl.h
+++ b/drivers/common/sfc_efx/base/ef10_impl.h
@@ -111,6 +111,7 @@ ef10_ev_qcreate(
 	__in		uint32_t id,
 	__in		uint32_t us,
 	__in		uint32_t flags,
+	__in		uint32_t irq,
 	__in		efx_evq_t *eep);
 
 LIBEFX_INTERNAL
diff --git a/drivers/common/sfc_efx/base/efx.h b/drivers/common/sfc_efx/base/efx.h
index 771fe5a170..e43efbda1f 100644
--- a/drivers/common/sfc_efx/base/efx.h
+++ b/drivers/common/sfc_efx/base/efx.h
@@ -2333,6 +2333,19 @@ efx_ev_qcreate(
 	__in		uint32_t flags,
 	__deref_out	efx_evq_t **eepp);
 
+LIBEFX_API
+extern	__checkReturn	efx_rc_t
+efx_ev_qcreate_irq(
+	__in		efx_nic_t *enp,
+	__in		unsigned int index,
+	__in		efsys_mem_t *esmp,
+	__in		size_t ndescs,
+	__in		uint32_t id,
+	__in		uint32_t us,
+	__in		uint32_t flags,
+	__in		uint32_t irq,
+	__deref_out	efx_evq_t **eepp);
+
 LIBEFX_API
 extern		void
 efx_ev_qpost(
diff --git a/drivers/common/sfc_efx/base/efx_ev.c b/drivers/common/sfc_efx/base/efx_ev.c
index 19bdea03fd..4808f8ddfc 100644
--- a/drivers/common/sfc_efx/base/efx_ev.c
+++ b/drivers/common/sfc_efx/base/efx_ev.c
@@ -35,6 +35,7 @@ siena_ev_qcreate(
 	__in		uint32_t id,
 	__in		uint32_t us,
 	__in		uint32_t flags,
+	__in		uint32_t irq,
 	__in		efx_evq_t *eep);
 
 static			void
@@ -253,7 +254,7 @@ efx_ev_fini(
 
 
 	__checkReturn	efx_rc_t
-efx_ev_qcreate(
+efx_ev_qcreate_irq(
 	__in		efx_nic_t *enp,
 	__in		unsigned int index,
 	__in		efsys_mem_t *esmp,
@@ -261,6 +262,7 @@ efx_ev_qcreate(
 	__in		uint32_t id,
 	__in		uint32_t us,
 	__in		uint32_t flags,
+	__in		uint32_t irq,
 	__deref_out	efx_evq_t **eepp)
 {
 	const efx_ev_ops_t *eevop = enp->en_eevop;
@@ -347,7 +349,7 @@ efx_ev_qcreate(
 	*eepp = eep;
 
 	if ((rc = eevop->eevo_qcreate(enp, index, esmp, ndescs, id, us, flags,
-	    eep)) != 0)
+	    irq, eep)) != 0)
 		goto fail9;
 
 	return (0);
@@ -377,6 +379,23 @@ efx_ev_qcreate(
 	return (rc);
 }
 
+	__checkReturn	efx_rc_t
+efx_ev_qcreate(
+	__in		efx_nic_t *enp,
+	__in		unsigned int index,
+	__in		efsys_mem_t *esmp,
+	__in		size_t ndescs,
+	__in		uint32_t id,
+	__in		uint32_t us,
+	__in		uint32_t flags,
+	__deref_out	efx_evq_t **eepp)
+{
+	uint32_t irq = index;
+
+	return (efx_ev_qcreate_irq(enp, index, esmp, ndescs, id, us, flags,
+	    irq, eepp));
+}
+
 		void
 efx_ev_qdestroy(
 	__in	efx_evq_t *eep)
@@ -1278,6 +1297,7 @@ siena_ev_qcreate(
 	__in		uint32_t id,
 	__in		uint32_t us,
 	__in		uint32_t flags,
+	__in		uint32_t irq,
 	__in		efx_evq_t *eep)
 {
 	efx_nic_cfg_t *encp = &(enp->en_nic_cfg);
@@ -1290,11 +1310,16 @@ siena_ev_qcreate(
 
 	EFSYS_ASSERT((flags & EFX_EVQ_FLAGS_EXTENDED_WIDTH) == 0);
 
+	if (irq != index) {
+		rc = EINVAL;
+		goto fail1;
+	}
+
 #if EFSYS_OPT_RX_SCALE
 	if (enp->en_intr.ei_type == EFX_INTR_LINE &&
 	    index >= EFX_MAXRSS_LEGACY) {
 		rc = EINVAL;
-		goto fail1;
+		goto fail2;
 	}
 #endif
 	for (size = 0;
@@ -1304,7 +1329,7 @@ siena_ev_qcreate(
 			break;
 	if (id + (1 << size) >= encp->enc_buftbl_limit) {
 		rc = EINVAL;
-		goto fail2;
+		goto fail3;
 	}
 
 	/* Set up the handler table */
@@ -1336,11 +1361,13 @@ siena_ev_qcreate(
 
 	return (0);
 
+fail3:
+	EFSYS_PROBE(fail3);
+#if EFSYS_OPT_RX_SCALE
 fail2:
 	EFSYS_PROBE(fail2);
-#if EFSYS_OPT_RX_SCALE
-fail1:
 #endif
+fail1:
 	EFSYS_PROBE1(fail1, efx_rc_t, rc);
 
 	return (rc);
diff --git a/drivers/common/sfc_efx/base/efx_impl.h b/drivers/common/sfc_efx/base/efx_impl.h
index c1f98def40..a6b20704ac 100644
--- a/drivers/common/sfc_efx/base/efx_impl.h
+++ b/drivers/common/sfc_efx/base/efx_impl.h
@@ -87,7 +87,8 @@ typedef struct efx_ev_ops_s {
 	void		(*eevo_fini)(efx_nic_t *);
 	efx_rc_t	(*eevo_qcreate)(efx_nic_t *, unsigned int,
 					  efsys_mem_t *, size_t, uint32_t,
-					  uint32_t, uint32_t, efx_evq_t *);
+					  uint32_t, uint32_t, uint32_t,
+					  efx_evq_t *);
 	void		(*eevo_qdestroy)(efx_evq_t *);
 	efx_rc_t	(*eevo_qprime)(efx_evq_t *, unsigned int);
 	void		(*eevo_qpost)(efx_evq_t *, uint16_t);
diff --git a/drivers/common/sfc_efx/base/rhead_ev.c b/drivers/common/sfc_efx/base/rhead_ev.c
index 533cd9e34a..3eaed9e94b 100644
--- a/drivers/common/sfc_efx/base/rhead_ev.c
+++ b/drivers/common/sfc_efx/base/rhead_ev.c
@@ -102,11 +102,11 @@ rhead_ev_qcreate(
 	__in		uint32_t id,
 	__in		uint32_t us,
 	__in		uint32_t flags,
+	__in		uint32_t irq,
 	__in		efx_evq_t *eep)
 {
 	const efx_nic_cfg_t *encp = efx_nic_cfg_get(enp);
 	size_t desc_size;
-	uint32_t irq = 0;
 	uint32_t target_evq = 0;
 	efx_rc_t rc;
 
@@ -141,7 +141,7 @@ rhead_ev_qcreate(
 	/* INIT_EVQ expects function-relative vector number */
 	if ((flags & EFX_EVQ_FLAGS_NOTIFY_MASK) ==
 	    EFX_EVQ_FLAGS_NOTIFY_INTERRUPT) {
-		irq = index;
+		/* IRQ number is specified by caller */
 	} else if (index == EFX_RHEAD_ALWAYS_INTERRUPTING_EVQ_INDEX) {
 		/* Use the first interrupt for always interrupting EvQ */
 		irq = 0;
diff --git a/drivers/common/sfc_efx/base/rhead_impl.h b/drivers/common/sfc_efx/base/rhead_impl.h
index 3bf9beceb0..dd38ded775 100644
--- a/drivers/common/sfc_efx/base/rhead_impl.h
+++ b/drivers/common/sfc_efx/base/rhead_impl.h
@@ -131,6 +131,7 @@ rhead_ev_qcreate(
 	__in		uint32_t id,
 	__in		uint32_t us,
 	__in		uint32_t flags,
+	__in		uint32_t irq,
 	__in		efx_evq_t *eep);
 
 LIBEFX_INTERNAL
diff --git a/drivers/common/sfc_efx/version.map b/drivers/common/sfc_efx/version.map
index 5e724fd102..d534d8ecb5 100644
--- a/drivers/common/sfc_efx/version.map
+++ b/drivers/common/sfc_efx/version.map
@@ -7,6 +7,7 @@ INTERNAL {
 	efx_ev_init;
 	efx_ev_qcreate;
 	efx_ev_qcreate_check_init_done;
+	efx_ev_qcreate_irq;
 	efx_ev_qdestroy;
 	efx_ev_qmoderate;
 	efx_ev_qpending;
-- 
2.30.2


^ permalink raw reply	[flat|nested] 104+ messages in thread

* [dpdk-dev] [PATCH v3 05/20] net/sfc: explicitly control IRQ used for Rx queues
  2021-06-18 13:40 ` [dpdk-dev] [PATCH v3 " Andrew Rybchenko
                     ` (3 preceding siblings ...)
  2021-06-18 13:40   ` [dpdk-dev] [PATCH v3 04/20] common/sfc_efx/base: support custom EvQ to IRQ mapping Andrew Rybchenko
@ 2021-06-18 13:40   ` Andrew Rybchenko
  2021-06-18 13:40   ` [dpdk-dev] [PATCH v3 06/20] net/sfc: introduce ethdev Tx queue ID Andrew Rybchenko
                     ` (14 subsequent siblings)
  19 siblings, 0 replies; 104+ messages in thread
From: Andrew Rybchenko @ 2021-06-18 13:40 UTC (permalink / raw)
  To: dev; +Cc: David Marchand, Andy Moreton

Interrupt support makes assumptions about the interrupt numbers used
for LSC and Rx queues. The first interrupt is used for LSC, and the
subsequent interrupts are used for Rx queues.
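
The resulting layout can be summarised as follows (a sketch; the helper
name is hypothetical):

/*
 * IRQ 0 is the management EvQ (LSC and other admin events);
 * interrupting ethdev Rx queues use the subsequent vectors.
 */
static uint32_t
sfc_rxq_irq_number(sfc_ethdev_qid_t ethdev_qid)
{
	return 1 + (uint32_t)ethdev_qid;
}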

Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
---
 drivers/net/sfc/sfc_ev.c | 32 ++++++++++++++++++++++++--------
 1 file changed, 24 insertions(+), 8 deletions(-)

diff --git a/drivers/net/sfc/sfc_ev.c b/drivers/net/sfc/sfc_ev.c
index 9a8149f052..71f706e403 100644
--- a/drivers/net/sfc/sfc_ev.c
+++ b/drivers/net/sfc/sfc_ev.c
@@ -648,6 +648,7 @@ sfc_ev_qstart(struct sfc_evq *evq, unsigned int hw_index)
 	struct sfc_adapter *sa = evq->sa;
 	efsys_mem_t *esmp;
 	uint32_t evq_flags = sa->evq_flags;
+	uint32_t irq = 0;
 	unsigned int total_delay_us;
 	unsigned int delay_us;
 	int rc;
@@ -662,20 +663,35 @@ sfc_ev_qstart(struct sfc_evq *evq, unsigned int hw_index)
 	(void)memset((void *)esmp->esm_base, 0xff,
 		     efx_evq_size(sa->nic, evq->entries, evq_flags));
 
-	if ((sa->intr.lsc_intr && hw_index == sa->mgmt_evq_index) ||
-	    (sa->intr.rxq_intr && evq->dp_rxq != NULL &&
-	     sfc_ethdev_rx_qid_by_rxq_sw_index(sfc_sa2shared(sa),
-		evq->dp_rxq->dpq.queue_id) != SFC_ETHDEV_QID_INVALID))
+	if (sa->intr.lsc_intr && hw_index == sa->mgmt_evq_index) {
 		evq_flags |= EFX_EVQ_FLAGS_NOTIFY_INTERRUPT;
-	else
+		irq = 0;
+	} else if (sa->intr.rxq_intr && evq->dp_rxq != NULL) {
+		sfc_ethdev_qid_t ethdev_qid;
+
+		ethdev_qid =
+			sfc_ethdev_rx_qid_by_rxq_sw_index(sfc_sa2shared(sa),
+				evq->dp_rxq->dpq.queue_id);
+		if (ethdev_qid != SFC_ETHDEV_QID_INVALID) {
+			evq_flags |= EFX_EVQ_FLAGS_NOTIFY_INTERRUPT;
+			/*
+			 * The first interrupt is used for management EvQ
+			 * (LSC etc). RxQ interrupts follow it.
+			 */
+			irq = 1 + ethdev_qid;
+		} else {
+			evq_flags |= EFX_EVQ_FLAGS_NOTIFY_DISABLED;
+		}
+	} else {
 		evq_flags |= EFX_EVQ_FLAGS_NOTIFY_DISABLED;
+	}
 
 	evq->init_state = SFC_EVQ_STARTING;
 
 	/* Create the common code event queue */
-	rc = efx_ev_qcreate(sa->nic, hw_index, esmp, evq->entries,
-			    0 /* unused on EF10 */, 0, evq_flags,
-			    &evq->common);
+	rc = efx_ev_qcreate_irq(sa->nic, hw_index, esmp, evq->entries,
+				0 /* unused on EF10 */, 0, evq_flags,
+				irq, &evq->common);
 	if (rc != 0)
 		goto fail_ev_qcreate;
 
-- 
2.30.2


^ permalink raw reply	[flat|nested] 104+ messages in thread

* [dpdk-dev] [PATCH v3 06/20] net/sfc: introduce ethdev Tx queue ID
  2021-06-18 13:40 ` [dpdk-dev] [PATCH v3 " Andrew Rybchenko
                     ` (4 preceding siblings ...)
  2021-06-18 13:40   ` [dpdk-dev] [PATCH v3 05/20] net/sfc: explicitly control IRQ used for Rx queues Andrew Rybchenko
@ 2021-06-18 13:40   ` Andrew Rybchenko
  2021-06-18 13:40   ` [dpdk-dev] [PATCH v3 07/20] common/sfc_efx/base: add ingress m-port RxQ flag Andrew Rybchenko
                     ` (13 subsequent siblings)
  19 siblings, 0 replies; 104+ messages in thread
From: Andrew Rybchenko @ 2021-06-18 13:40 UTC (permalink / raw)
  To: dev; +Cc: David Marchand, Igor Romanov, Andy Moreton, Ivan Malov

From: Igor Romanov <igor.romanov@oktetlabs.ru>

Make the software index of a Tx queue and the ethdev index separate.
When an ethdev TxQ is accessed in ethdev callbacks, an explicit ethdev
queue index is used.

This is a preparation for introducing non-ethdev Tx queues.
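
Caller-side sketch using the helpers added below (the mapping is an
identity for now, but call sites no longer assume that):

  sfc_sw_index_t sw_index;
  struct sfc_txq_info *txq_info;

  /* Convert the ethdev queue ID explicitly before touching driver state */
  sw_index = sfc_txq_sw_index_by_ethdev_tx_qid(sas, ethdev_qid);
  txq_info = sfc_txq_info_by_ethdev_qid(sas, ethdev_qid);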

Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
Reviewed-by: Ivan Malov <ivan.malov@oktetlabs.ru>
---
 drivers/net/sfc/sfc.h        |   1 +
 drivers/net/sfc/sfc_ethdev.c |  46 ++++++----
 drivers/net/sfc/sfc_ev.c     |   2 +-
 drivers/net/sfc/sfc_ev.h     |  21 ++++-
 drivers/net/sfc/sfc_tx.c     | 164 ++++++++++++++++++++++++-----------
 drivers/net/sfc/sfc_tx.h     |  11 +--
 6 files changed, 171 insertions(+), 74 deletions(-)

diff --git a/drivers/net/sfc/sfc.h b/drivers/net/sfc/sfc.h
index ebe705020d..00fc26cf0e 100644
--- a/drivers/net/sfc/sfc.h
+++ b/drivers/net/sfc/sfc.h
@@ -173,6 +173,7 @@ struct sfc_adapter_shared {
 
 	unsigned int			txq_count;
 	struct sfc_txq_info		*txq_info;
+	unsigned int			ethdev_txq_count;
 
 	struct sfc_rss			rss;
 
diff --git a/drivers/net/sfc/sfc_ethdev.c b/drivers/net/sfc/sfc_ethdev.c
index 2651c41288..88896db1f8 100644
--- a/drivers/net/sfc/sfc_ethdev.c
+++ b/drivers/net/sfc/sfc_ethdev.c
@@ -524,24 +524,28 @@ sfc_rx_queue_release(void *queue)
 }
 
 static int
-sfc_tx_queue_setup(struct rte_eth_dev *dev, uint16_t tx_queue_id,
+sfc_tx_queue_setup(struct rte_eth_dev *dev, uint16_t ethdev_qid,
 		   uint16_t nb_tx_desc, unsigned int socket_id,
 		   const struct rte_eth_txconf *tx_conf)
 {
 	struct sfc_adapter_shared *sas = sfc_adapter_shared_by_eth_dev(dev);
 	struct sfc_adapter *sa = sfc_adapter_by_eth_dev(dev);
+	struct sfc_txq_info *txq_info;
+	sfc_sw_index_t sw_index;
 	int rc;
 
 	sfc_log_init(sa, "TxQ = %u, nb_tx_desc = %u, socket_id = %u",
-		     tx_queue_id, nb_tx_desc, socket_id);
+		     ethdev_qid, nb_tx_desc, socket_id);
 
 	sfc_adapter_lock(sa);
 
-	rc = sfc_tx_qinit(sa, tx_queue_id, nb_tx_desc, socket_id, tx_conf);
+	sw_index = sfc_txq_sw_index_by_ethdev_tx_qid(sas, ethdev_qid);
+	rc = sfc_tx_qinit(sa, sw_index, nb_tx_desc, socket_id, tx_conf);
 	if (rc != 0)
 		goto fail_tx_qinit;
 
-	dev->data->tx_queues[tx_queue_id] = sas->txq_info[tx_queue_id].dp;
+	txq_info = sfc_txq_info_by_ethdev_qid(sas, ethdev_qid);
+	dev->data->tx_queues[ethdev_qid] = txq_info->dp;
 
 	sfc_adapter_unlock(sa);
 	return 0;
@@ -557,7 +561,7 @@ sfc_tx_queue_release(void *queue)
 {
 	struct sfc_dp_txq *dp_txq = queue;
 	struct sfc_txq *txq;
-	unsigned int sw_index;
+	sfc_sw_index_t sw_index;
 	struct sfc_adapter *sa;
 
 	if (dp_txq == NULL)
@@ -1213,15 +1217,15 @@ sfc_rx_queue_info_get(struct rte_eth_dev *dev, uint16_t ethdev_qid,
  * use any process-local pointers from the adapter data.
  */
 static void
-sfc_tx_queue_info_get(struct rte_eth_dev *dev, uint16_t tx_queue_id,
+sfc_tx_queue_info_get(struct rte_eth_dev *dev, uint16_t ethdev_qid,
 		      struct rte_eth_txq_info *qinfo)
 {
 	struct sfc_adapter_shared *sas = sfc_adapter_shared_by_eth_dev(dev);
 	struct sfc_txq_info *txq_info;
 
-	SFC_ASSERT(tx_queue_id < sas->txq_count);
+	SFC_ASSERT(ethdev_qid < sas->ethdev_txq_count);
 
-	txq_info = &sas->txq_info[tx_queue_id];
+	txq_info = sfc_txq_info_by_ethdev_qid(sas, ethdev_qid);
 
 	memset(qinfo, 0, sizeof(*qinfo));
 
@@ -1362,13 +1366,15 @@ sfc_rx_queue_stop(struct rte_eth_dev *dev, uint16_t ethdev_qid)
 }
 
 static int
-sfc_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
+sfc_tx_queue_start(struct rte_eth_dev *dev, uint16_t ethdev_qid)
 {
 	struct sfc_adapter_shared *sas = sfc_adapter_shared_by_eth_dev(dev);
 	struct sfc_adapter *sa = sfc_adapter_by_eth_dev(dev);
+	struct sfc_txq_info *txq_info;
+	sfc_sw_index_t sw_index;
 	int rc;
 
-	sfc_log_init(sa, "TxQ = %u", tx_queue_id);
+	sfc_log_init(sa, "TxQ = %u", ethdev_qid);
 
 	sfc_adapter_lock(sa);
 
@@ -1376,14 +1382,16 @@ sfc_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
 	if (sa->state != SFC_ADAPTER_STARTED)
 		goto fail_not_started;
 
-	if (sas->txq_info[tx_queue_id].state != SFC_TXQ_INITIALIZED)
+	txq_info = sfc_txq_info_by_ethdev_qid(sas, ethdev_qid);
+	if (txq_info->state != SFC_TXQ_INITIALIZED)
 		goto fail_not_setup;
 
-	rc = sfc_tx_qstart(sa, tx_queue_id);
+	sw_index = sfc_txq_sw_index_by_ethdev_tx_qid(sas, ethdev_qid);
+	rc = sfc_tx_qstart(sa, sw_index);
 	if (rc != 0)
 		goto fail_tx_qstart;
 
-	sas->txq_info[tx_queue_id].deferred_started = B_TRUE;
+	txq_info->deferred_started = B_TRUE;
 
 	sfc_adapter_unlock(sa);
 	return 0;
@@ -1398,18 +1406,22 @@ sfc_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
 }
 
 static int
-sfc_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
+sfc_tx_queue_stop(struct rte_eth_dev *dev, uint16_t ethdev_qid)
 {
 	struct sfc_adapter_shared *sas = sfc_adapter_shared_by_eth_dev(dev);
 	struct sfc_adapter *sa = sfc_adapter_by_eth_dev(dev);
+	struct sfc_txq_info *txq_info;
+	sfc_sw_index_t sw_index;
 
-	sfc_log_init(sa, "TxQ = %u", tx_queue_id);
+	sfc_log_init(sa, "TxQ = %u", ethdev_qid);
 
 	sfc_adapter_lock(sa);
 
-	sfc_tx_qstop(sa, tx_queue_id);
+	sw_index = sfc_txq_sw_index_by_ethdev_tx_qid(sas, ethdev_qid);
+	sfc_tx_qstop(sa, sw_index);
 
-	sas->txq_info[tx_queue_id].deferred_started = B_FALSE;
+	txq_info = sfc_txq_info_by_ethdev_qid(sas, ethdev_qid);
+	txq_info->deferred_started = B_FALSE;
 
 	sfc_adapter_unlock(sa);
 	return 0;
diff --git a/drivers/net/sfc/sfc_ev.c b/drivers/net/sfc/sfc_ev.c
index 71f706e403..ed28d51e12 100644
--- a/drivers/net/sfc/sfc_ev.c
+++ b/drivers/net/sfc/sfc_ev.c
@@ -598,7 +598,7 @@ sfc_ev_qpoll(struct sfc_evq *evq)
 		}
 
 		if (evq->dp_txq != NULL) {
-			unsigned int txq_sw_index;
+			sfc_sw_index_t txq_sw_index;
 
 			txq_sw_index = evq->dp_txq->dpq.queue_id;
 
diff --git a/drivers/net/sfc/sfc_ev.h b/drivers/net/sfc/sfc_ev.h
index 5a9f85c2d9..75b9dcdebd 100644
--- a/drivers/net/sfc/sfc_ev.h
+++ b/drivers/net/sfc/sfc_ev.h
@@ -92,8 +92,25 @@ sfc_evq_sw_index_by_rxq_sw_index(__rte_unused struct sfc_adapter *sa,
 	return 1 + rxq_sw_index;
 }
 
-static inline unsigned int
-sfc_evq_index_by_txq_sw_index(struct sfc_adapter *sa, unsigned int txq_sw_index)
+static inline sfc_ethdev_qid_t
+sfc_ethdev_tx_qid_by_txq_sw_index(__rte_unused struct sfc_adapter_shared *sas,
+				  sfc_sw_index_t txq_sw_index)
+{
+	/* Only ethdev queues are present for now */
+	return txq_sw_index;
+}
+
+static inline sfc_sw_index_t
+sfc_txq_sw_index_by_ethdev_tx_qid(__rte_unused struct sfc_adapter_shared *sas,
+				  sfc_ethdev_qid_t ethdev_qid)
+{
+	/* Only ethdev queues are present for now */
+	return ethdev_qid;
+}
+
+static inline sfc_sw_index_t
+sfc_evq_sw_index_by_txq_sw_index(struct sfc_adapter *sa,
+				 sfc_sw_index_t txq_sw_index)
 {
 	return 1 + sa->eth_dev->data->nb_rx_queues + txq_sw_index;
 }
diff --git a/drivers/net/sfc/sfc_tx.c b/drivers/net/sfc/sfc_tx.c
index 28d696de61..ce2a9a6a4f 100644
--- a/drivers/net/sfc/sfc_tx.c
+++ b/drivers/net/sfc/sfc_tx.c
@@ -34,6 +34,19 @@
  */
 #define SFC_TX_QFLUSH_POLL_ATTEMPTS	(2000)
 
+struct sfc_txq_info *
+sfc_txq_info_by_ethdev_qid(struct sfc_adapter_shared *sas,
+			   sfc_ethdev_qid_t ethdev_qid)
+{
+	sfc_sw_index_t sw_index;
+
+	SFC_ASSERT((unsigned int)ethdev_qid < sas->ethdev_txq_count);
+	SFC_ASSERT(ethdev_qid != SFC_ETHDEV_QID_INVALID);
+
+	sw_index = sfc_txq_sw_index_by_ethdev_tx_qid(sas, ethdev_qid);
+	return &sas->txq_info[sw_index];
+}
+
 static uint64_t
 sfc_tx_get_offload_mask(struct sfc_adapter *sa)
 {
@@ -118,10 +131,12 @@ sfc_tx_qflush_done(struct sfc_txq_info *txq_info)
 }
 
 int
-sfc_tx_qinit(struct sfc_adapter *sa, unsigned int sw_index,
+sfc_tx_qinit(struct sfc_adapter *sa, sfc_sw_index_t sw_index,
 	     uint16_t nb_tx_desc, unsigned int socket_id,
 	     const struct rte_eth_txconf *tx_conf)
 {
+	struct sfc_adapter_shared * const sas = sfc_sa2shared(sa);
+	sfc_ethdev_qid_t ethdev_qid;
 	const efx_nic_cfg_t *encp = efx_nic_cfg_get(sa->nic);
 	unsigned int txq_entries;
 	unsigned int evq_entries;
@@ -134,7 +149,9 @@ sfc_tx_qinit(struct sfc_adapter *sa, unsigned int sw_index,
 	uint64_t offloads;
 	struct sfc_dp_tx_hw_limits hw_limits;
 
-	sfc_log_init(sa, "TxQ = %u", sw_index);
+	ethdev_qid = sfc_ethdev_tx_qid_by_txq_sw_index(sas, sw_index);
+
+	sfc_log_init(sa, "TxQ = %d (internal %u)", ethdev_qid, sw_index);
 
 	memset(&hw_limits, 0, sizeof(hw_limits));
 	hw_limits.txq_max_entries = sa->txq_max_entries;
@@ -150,8 +167,11 @@ sfc_tx_qinit(struct sfc_adapter *sa, unsigned int sw_index,
 	SFC_ASSERT(txq_entries >= nb_tx_desc);
 	SFC_ASSERT(txq_max_fill_level <= nb_tx_desc);
 
-	offloads = tx_conf->offloads |
-		sa->eth_dev->data->dev_conf.txmode.offloads;
+	offloads = tx_conf->offloads;
+	/* Add device level Tx offloads if the queue is an ethdev Tx queue */
+	if (ethdev_qid != SFC_ETHDEV_QID_INVALID)
+		offloads |= sa->eth_dev->data->dev_conf.txmode.offloads;
+
 	rc = sfc_tx_qcheck_conf(sa, txq_max_fill_level, tx_conf, offloads);
 	if (rc != 0)
 		goto fail_bad_conf;
@@ -231,20 +251,26 @@ sfc_tx_qinit(struct sfc_adapter *sa, unsigned int sw_index,
 
 fail_bad_conf:
 fail_size_up_rings:
-	sfc_log_init(sa, "failed (TxQ = %u, rc = %d)", sw_index, rc);
+	sfc_log_init(sa, "failed (TxQ = %d (internal %u), rc = %d)", ethdev_qid,
+		     sw_index, rc);
 	return rc;
 }
 
 void
-sfc_tx_qfini(struct sfc_adapter *sa, unsigned int sw_index)
+sfc_tx_qfini(struct sfc_adapter *sa, sfc_sw_index_t sw_index)
 {
+	struct sfc_adapter_shared * const sas = sfc_sa2shared(sa);
+	sfc_ethdev_qid_t ethdev_qid;
 	struct sfc_txq_info *txq_info;
 	struct sfc_txq *txq;
 
-	sfc_log_init(sa, "TxQ = %u", sw_index);
+	ethdev_qid = sfc_ethdev_tx_qid_by_txq_sw_index(sas, sw_index);
+
+	sfc_log_init(sa, "TxQ = %d (internal %u)", ethdev_qid, sw_index);
 
 	SFC_ASSERT(sw_index < sfc_sa2shared(sa)->txq_count);
-	sa->eth_dev->data->tx_queues[sw_index] = NULL;
+	if (ethdev_qid != SFC_ETHDEV_QID_INVALID)
+		sa->eth_dev->data->tx_queues[ethdev_qid] = NULL;
 
 	txq_info = &sfc_sa2shared(sa)->txq_info[sw_index];
 
@@ -265,9 +291,14 @@ sfc_tx_qfini(struct sfc_adapter *sa, unsigned int sw_index)
 }
 
 static int
-sfc_tx_qinit_info(struct sfc_adapter *sa, unsigned int sw_index)
+sfc_tx_qinit_info(struct sfc_adapter *sa, sfc_sw_index_t sw_index)
 {
-	sfc_log_init(sa, "TxQ = %u", sw_index);
+	struct sfc_adapter_shared * const sas = sfc_sa2shared(sa);
+	sfc_ethdev_qid_t ethdev_qid;
+
+	ethdev_qid = sfc_ethdev_tx_qid_by_txq_sw_index(sas, sw_index);
+
+	sfc_log_init(sa, "TxQ = %d (internal %u)", ethdev_qid, sw_index);
 
 	return 0;
 }
@@ -316,17 +347,26 @@ static void
 sfc_tx_fini_queues(struct sfc_adapter *sa, unsigned int nb_tx_queues)
 {
 	struct sfc_adapter_shared * const sas = sfc_sa2shared(sa);
-	int sw_index;
+	sfc_sw_index_t sw_index;
+	sfc_ethdev_qid_t ethdev_qid;
 
-	SFC_ASSERT(nb_tx_queues <= sas->txq_count);
+	SFC_ASSERT(nb_tx_queues <= sas->ethdev_txq_count);
 
-	sw_index = sas->txq_count;
-	while (--sw_index >= (int)nb_tx_queues) {
-		if (sas->txq_info[sw_index].state & SFC_TXQ_INITIALIZED)
+	/*
+	 * Finalize only ethdev queues since other ones are finalized only
+	 * on device close and they may require additional deinitialization.
+	 */
+	ethdev_qid = sas->ethdev_txq_count;
+	while (--ethdev_qid >= (int)nb_tx_queues) {
+		struct sfc_txq_info *txq_info;
+
+		sw_index = sfc_txq_sw_index_by_ethdev_tx_qid(sas, ethdev_qid);
+		txq_info = sfc_txq_info_by_ethdev_qid(sas, ethdev_qid);
+		if (txq_info->state & SFC_TXQ_INITIALIZED)
 			sfc_tx_qfini(sa, sw_index);
 	}
 
-	sas->txq_count = nb_tx_queues;
+	sas->ethdev_txq_count = nb_tx_queues;
 }
 
 int
@@ -339,7 +379,7 @@ sfc_tx_configure(struct sfc_adapter *sa)
 	int rc = 0;
 
 	sfc_log_init(sa, "nb_tx_queues=%u (old %u)",
-		     nb_tx_queues, sas->txq_count);
+		     nb_tx_queues, sas->ethdev_txq_count);
 
 	/*
 	 * The datapath implementation assumes absence of boundary
@@ -377,7 +417,7 @@ sfc_tx_configure(struct sfc_adapter *sa)
 		struct sfc_txq_info *new_txq_info;
 		struct sfc_txq *new_txq_ctrl;
 
-		if (nb_tx_queues < sas->txq_count)
+		if (nb_tx_queues < sas->ethdev_txq_count)
 			sfc_tx_fini_queues(sa, nb_tx_queues);
 
 		new_txq_info =
@@ -393,24 +433,30 @@ sfc_tx_configure(struct sfc_adapter *sa)
 
 		sas->txq_info = new_txq_info;
 		sa->txq_ctrl = new_txq_ctrl;
-		if (nb_tx_queues > sas->txq_count) {
-			memset(&sas->txq_info[sas->txq_count], 0,
-			       (nb_tx_queues - sas->txq_count) *
+		if (nb_tx_queues > sas->ethdev_txq_count) {
+			memset(&sas->txq_info[sas->ethdev_txq_count], 0,
+			       (nb_tx_queues - sas->ethdev_txq_count) *
 			       sizeof(sas->txq_info[0]));
-			memset(&sa->txq_ctrl[sas->txq_count], 0,
-			       (nb_tx_queues - sas->txq_count) *
+			memset(&sa->txq_ctrl[sas->ethdev_txq_count], 0,
+			       (nb_tx_queues - sas->ethdev_txq_count) *
 			       sizeof(sa->txq_ctrl[0]));
 		}
 	}
 
-	while (sas->txq_count < nb_tx_queues) {
-		rc = sfc_tx_qinit_info(sa, sas->txq_count);
+	while (sas->ethdev_txq_count < nb_tx_queues) {
+		sfc_sw_index_t sw_index;
+
+		sw_index = sfc_txq_sw_index_by_ethdev_tx_qid(sas,
+				sas->ethdev_txq_count);
+		rc = sfc_tx_qinit_info(sa, sw_index);
 		if (rc != 0)
 			goto fail_tx_qinit_info;
 
-		sas->txq_count++;
+		sas->ethdev_txq_count++;
 	}
 
+	sas->txq_count = sas->ethdev_txq_count;
+
 done:
 	return 0;
 
@@ -440,12 +486,12 @@ sfc_tx_close(struct sfc_adapter *sa)
 }
 
 int
-sfc_tx_qstart(struct sfc_adapter *sa, unsigned int sw_index)
+sfc_tx_qstart(struct sfc_adapter *sa, sfc_sw_index_t sw_index)
 {
 	struct sfc_adapter_shared * const sas = sfc_sa2shared(sa);
+	sfc_ethdev_qid_t ethdev_qid;
 	uint64_t offloads_supported = sfc_tx_get_dev_offload_caps(sa) |
 				      sfc_tx_get_queue_offload_caps(sa);
-	struct rte_eth_dev_data *dev_data;
 	struct sfc_txq_info *txq_info;
 	struct sfc_txq *txq;
 	struct sfc_evq *evq;
@@ -453,7 +499,9 @@ sfc_tx_qstart(struct sfc_adapter *sa, unsigned int sw_index)
 	unsigned int desc_index;
 	int rc = 0;
 
-	sfc_log_init(sa, "TxQ = %u", sw_index);
+	ethdev_qid = sfc_ethdev_tx_qid_by_txq_sw_index(sas, sw_index);
+
+	sfc_log_init(sa, "TxQ = %d (internal %u)", ethdev_qid, sw_index);
 
 	SFC_ASSERT(sw_index < sas->txq_count);
 	txq_info = &sas->txq_info[sw_index];
@@ -463,7 +511,7 @@ sfc_tx_qstart(struct sfc_adapter *sa, unsigned int sw_index)
 	txq = &sa->txq_ctrl[sw_index];
 	evq = txq->evq;
 
-	rc = sfc_ev_qstart(evq, sfc_evq_index_by_txq_sw_index(sa, sw_index));
+	rc = sfc_ev_qstart(evq, sfc_evq_sw_index_by_txq_sw_index(sa, sw_index));
 	if (rc != 0)
 		goto fail_ev_qstart;
 
@@ -505,11 +553,17 @@ sfc_tx_qstart(struct sfc_adapter *sa, unsigned int sw_index)
 	if (rc != 0)
 		goto fail_dp_qstart;
 
-	/*
-	 * It seems to be used by DPDK for debug purposes only ('rte_ether')
-	 */
-	dev_data = sa->eth_dev->data;
-	dev_data->tx_queue_state[sw_index] = RTE_ETH_QUEUE_STATE_STARTED;
+	if (ethdev_qid != SFC_ETHDEV_QID_INVALID) {
+		struct rte_eth_dev_data *dev_data;
+
+		/*
+		 * It seems to be used by DPDK for debug purposes only
+		 * ('rte_ether').
+		 */
+		dev_data = sa->eth_dev->data;
+		dev_data->tx_queue_state[ethdev_qid] =
+			RTE_ETH_QUEUE_STATE_STARTED;
+	}
 
 	return 0;
 
@@ -525,17 +579,19 @@ sfc_tx_qstart(struct sfc_adapter *sa, unsigned int sw_index)
 }
 
 void
-sfc_tx_qstop(struct sfc_adapter *sa, unsigned int sw_index)
+sfc_tx_qstop(struct sfc_adapter *sa, sfc_sw_index_t sw_index)
 {
 	struct sfc_adapter_shared * const sas = sfc_sa2shared(sa);
-	struct rte_eth_dev_data *dev_data;
+	sfc_ethdev_qid_t ethdev_qid;
 	struct sfc_txq_info *txq_info;
 	struct sfc_txq *txq;
 	unsigned int retry_count;
 	unsigned int wait_count;
 	int rc;
 
-	sfc_log_init(sa, "TxQ = %u", sw_index);
+	ethdev_qid = sfc_ethdev_tx_qid_by_txq_sw_index(sas, sw_index);
+
+	sfc_log_init(sa, "TxQ = %d (internal %u)", ethdev_qid, sw_index);
 
 	SFC_ASSERT(sw_index < sas->txq_count);
 	txq_info = &sas->txq_info[sw_index];
@@ -577,10 +633,12 @@ sfc_tx_qstop(struct sfc_adapter *sa, unsigned int sw_index)
 			 wait_count++ < SFC_TX_QFLUSH_POLL_ATTEMPTS);
 
 		if (txq_info->state & SFC_TXQ_FLUSHING)
-			sfc_err(sa, "TxQ %u flush timed out", sw_index);
+			sfc_err(sa, "TxQ %d (internal %u) flush timed out",
+				ethdev_qid, sw_index);
 
 		if (txq_info->state & SFC_TXQ_FLUSHED)
-			sfc_notice(sa, "TxQ %u flushed", sw_index);
+			sfc_notice(sa, "TxQ %d (internal %u) flushed",
+				   ethdev_qid, sw_index);
 	}
 
 	sa->priv.dp_tx->qreap(txq_info->dp);
@@ -591,11 +649,17 @@ sfc_tx_qstop(struct sfc_adapter *sa, unsigned int sw_index)
 
 	sfc_ev_qstop(txq->evq);
 
-	/*
-	 * It seems to be used by DPDK for debug purposes only ('rte_ether')
-	 */
-	dev_data = sa->eth_dev->data;
-	dev_data->tx_queue_state[sw_index] = RTE_ETH_QUEUE_STATE_STOPPED;
+	if (ethdev_qid != SFC_ETHDEV_QID_INVALID) {
+		struct rte_eth_dev_data *dev_data;
+
+		/*
+		 * It seems to be used by DPDK for debug purposes only
+		 * ('rte_ether')
+		 */
+		dev_data = sa->eth_dev->data;
+		dev_data->tx_queue_state[ethdev_qid] =
+			RTE_ETH_QUEUE_STATE_STOPPED;
+	}
 }
 
 int
@@ -603,10 +667,11 @@ sfc_tx_start(struct sfc_adapter *sa)
 {
 	struct sfc_adapter_shared * const sas = sfc_sa2shared(sa);
 	const efx_nic_cfg_t *encp = efx_nic_cfg_get(sa->nic);
-	unsigned int sw_index;
+	sfc_sw_index_t sw_index;
 	int rc = 0;
 
-	sfc_log_init(sa, "txq_count = %u", sas->txq_count);
+	sfc_log_init(sa, "txq_count = %u (internal %u)",
+		     sas->ethdev_txq_count, sas->txq_count);
 
 	if (sa->tso) {
 		if (!encp->enc_fw_assisted_tso_v2_enabled &&
@@ -654,9 +719,10 @@ void
 sfc_tx_stop(struct sfc_adapter *sa)
 {
 	struct sfc_adapter_shared * const sas = sfc_sa2shared(sa);
-	unsigned int sw_index;
+	sfc_sw_index_t sw_index;
 
-	sfc_log_init(sa, "txq_count = %u", sas->txq_count);
+	sfc_log_init(sa, "txq_count = %u (internal %u)",
+		     sas->ethdev_txq_count, sas->txq_count);
 
 	sw_index = sas->txq_count;
 	while (sw_index-- > 0) {
diff --git a/drivers/net/sfc/sfc_tx.h b/drivers/net/sfc/sfc_tx.h
index 5ed678703e..f1700b13ca 100644
--- a/drivers/net/sfc/sfc_tx.h
+++ b/drivers/net/sfc/sfc_tx.h
@@ -58,7 +58,8 @@ struct sfc_txq {
 };
 
 struct sfc_txq *sfc_txq_by_dp_txq(const struct sfc_dp_txq *dp_txq);
-
+struct sfc_txq_info *sfc_txq_info_by_ethdev_qid(struct sfc_adapter_shared *sas,
+						sfc_ethdev_qid_t ethdev_qid);
 /**
  * Transmit queue information used on libefx-based data path.
  * Allocated on the socket specified on the queue setup.
@@ -107,14 +108,14 @@ struct sfc_txq_info *sfc_txq_info_by_dp_txq(const struct sfc_dp_txq *dp_txq);
 int sfc_tx_configure(struct sfc_adapter *sa);
 void sfc_tx_close(struct sfc_adapter *sa);
 
-int sfc_tx_qinit(struct sfc_adapter *sa, unsigned int sw_index,
+int sfc_tx_qinit(struct sfc_adapter *sa, sfc_sw_index_t sw_index,
 		 uint16_t nb_tx_desc, unsigned int socket_id,
 		 const struct rte_eth_txconf *tx_conf);
-void sfc_tx_qfini(struct sfc_adapter *sa, unsigned int sw_index);
+void sfc_tx_qfini(struct sfc_adapter *sa, sfc_sw_index_t sw_index);
 
 void sfc_tx_qflush_done(struct sfc_txq_info *txq_info);
-int sfc_tx_qstart(struct sfc_adapter *sa, unsigned int sw_index);
-void sfc_tx_qstop(struct sfc_adapter *sa, unsigned int sw_index);
+int sfc_tx_qstart(struct sfc_adapter *sa, sfc_sw_index_t sw_index);
+void sfc_tx_qstop(struct sfc_adapter *sa, sfc_sw_index_t sw_index);
 int sfc_tx_start(struct sfc_adapter *sa);
 void sfc_tx_stop(struct sfc_adapter *sa);
 
-- 
2.30.2


^ permalink raw reply	[flat|nested] 104+ messages in thread

* [dpdk-dev] [PATCH v3 07/20] common/sfc_efx/base: add ingress m-port RxQ flag
  2021-06-18 13:40 ` [dpdk-dev] [PATCH v3 " Andrew Rybchenko
                     ` (5 preceding siblings ...)
  2021-06-18 13:40   ` [dpdk-dev] [PATCH v3 06/20] net/sfc: introduce ethdev Tx queue ID Andrew Rybchenko
@ 2021-06-18 13:40   ` Andrew Rybchenko
  2021-06-18 13:40   ` [dpdk-dev] [PATCH v3 08/20] common/sfc_efx/base: add user mark " Andrew Rybchenko
                     ` (12 subsequent siblings)
  19 siblings, 0 replies; 104+ messages in thread
From: Andrew Rybchenko @ 2021-06-18 13:40 UTC (permalink / raw)
  To: dev; +Cc: David Marchand, Igor Romanov, Andy Moreton, Ivan Malov

From: Igor Romanov <igor.romanov@oktetlabs.ru>

Add a flag to request support for ingress m-port on an RxQ.
Implement it only for Riverhead; other families return an error
if the flag is set.
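
Caller-side sketch (illustrative only, not part of the patch):

  unsigned int rxq_flags = EFX_RXQ_FLAG_NONE;

  /* Request the ingress m-port field in the Rx prefix */
  rxq_flags |= EFX_RXQ_FLAG_INGRESS_MPORT;

  /*
   * Riverhead adds EFX_RX_PREFIX_FIELD_INGRESS_MPORT to the requested
   * Rx prefix fields mask; EF10 rejects the flag with ENOTSUP and
   * Siena rejects unsupported flags with EINVAL.
   */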

Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
Reviewed-by: Ivan Malov <ivan.malov@oktetlabs.ru>
---
 drivers/common/sfc_efx/base/ef10_rx.c  |  9 ++++++++-
 drivers/common/sfc_efx/base/efx.h      |  5 +++++
 drivers/common/sfc_efx/base/efx_rx.c   | 14 +++++++++-----
 drivers/common/sfc_efx/base/rhead_rx.c |  3 +++
 4 files changed, 25 insertions(+), 6 deletions(-)

diff --git a/drivers/common/sfc_efx/base/ef10_rx.c b/drivers/common/sfc_efx/base/ef10_rx.c
index cfa60bd324..0e140645a5 100644
--- a/drivers/common/sfc_efx/base/ef10_rx.c
+++ b/drivers/common/sfc_efx/base/ef10_rx.c
@@ -1031,6 +1031,11 @@ ef10_rx_qcreate(
 	EFSYS_ASSERT(params.es_bufs_per_desc == 0);
 #endif /* EFSYS_OPT_RX_ES_SUPER_BUFFER */
 
+	if (flags & EFX_RXQ_FLAG_INGRESS_MPORT) {
+		rc = ENOTSUP;
+		goto fail12;
+	}
+
 	/* Scatter can only be disabled if the firmware supports doing so */
 	if (flags & EFX_RXQ_FLAG_SCATTER)
 		params.disable_scatter = B_FALSE;
@@ -1044,7 +1049,7 @@ ef10_rx_qcreate(
 
 	if ((rc = efx_mcdi_init_rxq(enp, ndescs, eep, label, index,
 		    esmp, &params)) != 0)
-		goto fail12;
+		goto fail13;
 
 	erp->er_eep = eep;
 	erp->er_label = label;
@@ -1057,6 +1062,8 @@ ef10_rx_qcreate(
 
 	return (0);
 
+fail13:
+	EFSYS_PROBE(fail13);
 fail12:
 	EFSYS_PROBE(fail12);
 #if EFSYS_OPT_RX_ES_SUPER_BUFFER
diff --git a/drivers/common/sfc_efx/base/efx.h b/drivers/common/sfc_efx/base/efx.h
index e43efbda1f..76092d794f 100644
--- a/drivers/common/sfc_efx/base/efx.h
+++ b/drivers/common/sfc_efx/base/efx.h
@@ -2925,6 +2925,7 @@ typedef enum efx_rx_prefix_field_e {
 	EFX_RX_PREFIX_FIELD_USER_MARK_VALID,
 	EFX_RX_PREFIX_FIELD_CSUM_FRAME,
 	EFX_RX_PREFIX_FIELD_INGRESS_VPORT,
+	EFX_RX_PREFIX_FIELD_INGRESS_MPORT = EFX_RX_PREFIX_FIELD_INGRESS_VPORT,
 	EFX_RX_PREFIX_NFIELDS
 } efx_rx_prefix_field_t;
 
@@ -2998,6 +2999,10 @@ typedef enum efx_rxq_type_e {
  * the driver.
  */
 #define	EFX_RXQ_FLAG_RSS_HASH		0x4
+/*
+ * Request ingress mport field in the Rx prefix of a queue.
+ */
+#define	EFX_RXQ_FLAG_INGRESS_MPORT	0x8
 
 LIBEFX_API
 extern	__checkReturn	efx_rc_t
diff --git a/drivers/common/sfc_efx/base/efx_rx.c b/drivers/common/sfc_efx/base/efx_rx.c
index 7c6fecf925..7e63363be7 100644
--- a/drivers/common/sfc_efx/base/efx_rx.c
+++ b/drivers/common/sfc_efx/base/efx_rx.c
@@ -1743,14 +1743,20 @@ siena_rx_qcreate(
 		goto fail2;
 	}
 
-	if (flags & EFX_RXQ_FLAG_SCATTER) {
 #if EFSYS_OPT_RX_SCATTER
-		jumbo = B_TRUE;
+#define SUPPORTED_RXQ_FLAGS EFX_RXQ_FLAG_SCATTER
 #else
+#define SUPPORTED_RXQ_FLAGS EFX_RXQ_FLAG_NONE
+#endif
+	/* Reject flags for unsupported queue features */
+	if ((flags & ~SUPPORTED_RXQ_FLAGS) != 0) {
 		rc = EINVAL;
 		goto fail3;
-#endif	/* EFSYS_OPT_RX_SCATTER */
 	}
+#undef SUPPORTED_RXQ_FLAGS
+
+	if (flags & EFX_RXQ_FLAG_SCATTER)
+		jumbo = B_TRUE;
 
 	/* Set up the new descriptor queue */
 	EFX_POPULATE_OWORD_7(oword,
@@ -1769,10 +1775,8 @@ siena_rx_qcreate(
 
 	return (0);
 
-#if !EFSYS_OPT_RX_SCATTER
 fail3:
 	EFSYS_PROBE(fail3);
-#endif
 fail2:
 	EFSYS_PROBE(fail2);
 fail1:
diff --git a/drivers/common/sfc_efx/base/rhead_rx.c b/drivers/common/sfc_efx/base/rhead_rx.c
index b2dacbab32..f1d46f7c70 100644
--- a/drivers/common/sfc_efx/base/rhead_rx.c
+++ b/drivers/common/sfc_efx/base/rhead_rx.c
@@ -629,6 +629,9 @@ rhead_rx_qcreate(
 		fields_mask |= 1U << EFX_RX_PREFIX_FIELD_RSS_HASH_VALID;
 	}
 
+	if (flags & EFX_RXQ_FLAG_INGRESS_MPORT)
+		fields_mask |= 1U << EFX_RX_PREFIX_FIELD_INGRESS_MPORT;
+
 	/*
 	 * LENGTH is required in EF100 host interface, as receive events
 	 * do not include the packet length.
-- 
2.30.2


^ permalink raw reply	[flat|nested] 104+ messages in thread

* [dpdk-dev] [PATCH v3 08/20] common/sfc_efx/base: add user mark RxQ flag
  2021-06-18 13:40 ` [dpdk-dev] [PATCH v3 " Andrew Rybchenko
                     ` (6 preceding siblings ...)
  2021-06-18 13:40   ` [dpdk-dev] [PATCH v3 07/20] common/sfc_efx/base: add ingress m-port RxQ flag Andrew Rybchenko
@ 2021-06-18 13:40   ` Andrew Rybchenko
  2021-06-18 13:40   ` [dpdk-dev] [PATCH v3 09/20] net/sfc: add abstractions for the management EVQ identity Andrew Rybchenko
                     ` (11 subsequent siblings)
  19 siblings, 0 replies; 104+ messages in thread
From: Andrew Rybchenko @ 2021-06-18 13:40 UTC (permalink / raw)
  To: dev; +Cc: David Marchand, Igor Romanov, Andy Moreton, Ivan Malov

From: Igor Romanov <igor.romanov@oktetlabs.ru>

Add a flag to request support for the user mark field on an RxQ.
The field is required to retrieve the generation count value from
the counter RxQ.

Implement it only for Riverhead and EF10 ESSB since they support
the field in the Rx prefix.

Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
Reviewed-by: Ivan Malov <ivan.malov@oktetlabs.ru>
---
 drivers/common/sfc_efx/base/ef10_rx.c  | 52 ++++++++++++++++----------
 drivers/common/sfc_efx/base/efx.h      |  4 ++
 drivers/common/sfc_efx/base/rhead_rx.c |  3 ++
 3 files changed, 39 insertions(+), 20 deletions(-)

diff --git a/drivers/common/sfc_efx/base/ef10_rx.c b/drivers/common/sfc_efx/base/ef10_rx.c
index 0e140645a5..0c3f9413cf 100644
--- a/drivers/common/sfc_efx/base/ef10_rx.c
+++ b/drivers/common/sfc_efx/base/ef10_rx.c
@@ -926,6 +926,10 @@ ef10_rx_qcreate(
 			goto fail1;
 		}
 		erp->er_buf_size = type_data->ertd_default.ed_buf_size;
+		if (flags & EFX_RXQ_FLAG_USER_MARK) {
+			rc = ENOTSUP;
+			goto fail2;
+		}
 		/*
 		 * Ignore EFX_RXQ_FLAG_RSS_HASH since if RSS hash is calculated
 		 * it is always delivered from HW in the pseudo-header.
@@ -936,7 +940,7 @@ ef10_rx_qcreate(
 		erpl = &ef10_packed_stream_rx_prefix_layout;
 		if (type_data == NULL) {
 			rc = EINVAL;
-			goto fail2;
+			goto fail3;
 		}
 		switch (type_data->ertd_packed_stream.eps_buf_size) {
 		case EFX_RXQ_PACKED_STREAM_BUF_SIZE_1M:
@@ -956,13 +960,17 @@ ef10_rx_qcreate(
 			break;
 		default:
 			rc = ENOTSUP;
-			goto fail3;
+			goto fail4;
 		}
 		erp->er_buf_size = type_data->ertd_packed_stream.eps_buf_size;
 		/* Packed stream pseudo header does not have RSS hash value */
 		if (flags & EFX_RXQ_FLAG_RSS_HASH) {
 			rc = ENOTSUP;
-			goto fail4;
+			goto fail5;
+		}
+		if (flags & EFX_RXQ_FLAG_USER_MARK) {
+			rc = ENOTSUP;
+			goto fail6;
 		}
 		break;
 #endif /* EFSYS_OPT_RX_PACKED_STREAM */
@@ -971,7 +979,7 @@ ef10_rx_qcreate(
 		erpl = &ef10_essb_rx_prefix_layout;
 		if (type_data == NULL) {
 			rc = EINVAL;
-			goto fail5;
+			goto fail7;
 		}
 		params.es_bufs_per_desc =
 		    type_data->ertd_es_super_buffer.eessb_bufs_per_desc;
@@ -989,7 +997,7 @@ ef10_rx_qcreate(
 #endif /* EFSYS_OPT_RX_ES_SUPER_BUFFER */
 	default:
 		rc = ENOTSUP;
-		goto fail6;
+		goto fail8;
 	}
 
 #if EFSYS_OPT_RX_PACKED_STREAM
@@ -997,13 +1005,13 @@ ef10_rx_qcreate(
 		/* Check if datapath firmware supports packed stream mode */
 		if (encp->enc_rx_packed_stream_supported == B_FALSE) {
 			rc = ENOTSUP;
-			goto fail7;
+			goto fail9;
 		}
 		/* Check if packed stream allows configurable buffer sizes */
 		if ((params.ps_buf_size != MC_CMD_INIT_RXQ_EXT_IN_PS_BUFF_1M) &&
 		    (encp->enc_rx_var_packed_stream_supported == B_FALSE)) {
 			rc = ENOTSUP;
-			goto fail8;
+			goto fail10;
 		}
 	}
 #else /* EFSYS_OPT_RX_PACKED_STREAM */
@@ -1014,17 +1022,17 @@ ef10_rx_qcreate(
 	if (params.es_bufs_per_desc > 0) {
 		if (encp->enc_rx_es_super_buffer_supported == B_FALSE) {
 			rc = ENOTSUP;
-			goto fail9;
+			goto fail11;
 		}
 		if (!EFX_IS_P2ALIGNED(uint32_t, params.es_max_dma_len,
 			    EFX_RX_ES_SUPER_BUFFER_BUF_ALIGNMENT)) {
 			rc = EINVAL;
-			goto fail10;
+			goto fail12;
 		}
 		if (!EFX_IS_P2ALIGNED(uint32_t, params.es_buf_stride,
 			    EFX_RX_ES_SUPER_BUFFER_BUF_ALIGNMENT)) {
 			rc = EINVAL;
-			goto fail11;
+			goto fail13;
 		}
 	}
 #else /* EFSYS_OPT_RX_ES_SUPER_BUFFER */
@@ -1033,7 +1041,7 @@ ef10_rx_qcreate(
 
 	if (flags & EFX_RXQ_FLAG_INGRESS_MPORT) {
 		rc = ENOTSUP;
-		goto fail12;
+		goto fail14;
 	}
 
 	/* Scatter can only be disabled if the firmware supports doing so */
@@ -1049,7 +1057,7 @@ ef10_rx_qcreate(
 
 	if ((rc = efx_mcdi_init_rxq(enp, ndescs, eep, label, index,
 		    esmp, &params)) != 0)
-		goto fail13;
+		goto fail15;
 
 	erp->er_eep = eep;
 	erp->er_label = label;
@@ -1062,38 +1070,42 @@ ef10_rx_qcreate(
 
 	return (0);
 
+fail15:
+	EFSYS_PROBE(fail15);
+fail14:
+	EFSYS_PROBE(fail14);
+#if EFSYS_OPT_RX_ES_SUPER_BUFFER
 fail13:
 	EFSYS_PROBE(fail13);
 fail12:
 	EFSYS_PROBE(fail12);
-#if EFSYS_OPT_RX_ES_SUPER_BUFFER
 fail11:
 	EFSYS_PROBE(fail11);
+#endif /* EFSYS_OPT_RX_ES_SUPER_BUFFER */
+#if EFSYS_OPT_RX_PACKED_STREAM
 fail10:
 	EFSYS_PROBE(fail10);
 fail9:
 	EFSYS_PROBE(fail9);
-#endif /* EFSYS_OPT_RX_ES_SUPER_BUFFER */
-#if EFSYS_OPT_RX_PACKED_STREAM
+#endif /* EFSYS_OPT_RX_PACKED_STREAM */
 fail8:
 	EFSYS_PROBE(fail8);
+#if EFSYS_OPT_RX_ES_SUPER_BUFFER
 fail7:
 	EFSYS_PROBE(fail7);
-#endif /* EFSYS_OPT_RX_PACKED_STREAM */
+#endif /* EFSYS_OPT_RX_ES_SUPER_BUFFER */
+#if EFSYS_OPT_RX_PACKED_STREAM
 fail6:
 	EFSYS_PROBE(fail6);
-#if EFSYS_OPT_RX_ES_SUPER_BUFFER
 fail5:
 	EFSYS_PROBE(fail5);
-#endif /* EFSYS_OPT_RX_ES_SUPER_BUFFER */
-#if EFSYS_OPT_RX_PACKED_STREAM
 fail4:
 	EFSYS_PROBE(fail4);
 fail3:
 	EFSYS_PROBE(fail3);
+#endif /* EFSYS_OPT_RX_PACKED_STREAM */
 fail2:
 	EFSYS_PROBE(fail2);
-#endif /* EFSYS_OPT_RX_PACKED_STREAM */
 fail1:
 	EFSYS_PROBE1(fail1, efx_rc_t, rc);
 
diff --git a/drivers/common/sfc_efx/base/efx.h b/drivers/common/sfc_efx/base/efx.h
index 76092d794f..f81837a931 100644
--- a/drivers/common/sfc_efx/base/efx.h
+++ b/drivers/common/sfc_efx/base/efx.h
@@ -3003,6 +3003,10 @@ typedef enum efx_rxq_type_e {
  * Request ingress mport field in the Rx prefix of a queue.
  */
 #define	EFX_RXQ_FLAG_INGRESS_MPORT	0x8
+/*
+ * Request user mark field in the Rx prefix of a queue.
+ */
+#define	EFX_RXQ_FLAG_USER_MARK		0x10
 
 LIBEFX_API
 extern	__checkReturn	efx_rc_t
diff --git a/drivers/common/sfc_efx/base/rhead_rx.c b/drivers/common/sfc_efx/base/rhead_rx.c
index f1d46f7c70..76b8ce302a 100644
--- a/drivers/common/sfc_efx/base/rhead_rx.c
+++ b/drivers/common/sfc_efx/base/rhead_rx.c
@@ -632,6 +632,9 @@ rhead_rx_qcreate(
 	if (flags & EFX_RXQ_FLAG_INGRESS_MPORT)
 		fields_mask |= 1U << EFX_RX_PREFIX_FIELD_INGRESS_MPORT;
 
+	if (flags & EFX_RXQ_FLAG_USER_MARK)
+		fields_mask |= 1U << EFX_RX_PREFIX_FIELD_USER_MARK;
+
 	/*
 	 * LENGTH is required in EF100 host interface, as receive events
 	 * do not include the packet length.
-- 
2.30.2


^ permalink raw reply	[flat|nested] 104+ messages in thread

* [dpdk-dev] [PATCH v3 09/20] net/sfc: add abstractions for the management EVQ identity
  2021-06-18 13:40 ` [dpdk-dev] [PATCH v3 " Andrew Rybchenko
                     ` (7 preceding siblings ...)
  2021-06-18 13:40   ` [dpdk-dev] [PATCH v3 08/20] common/sfc_efx/base: add user mark " Andrew Rybchenko
@ 2021-06-18 13:40   ` Andrew Rybchenko
  2021-06-18 13:40   ` [dpdk-dev] [PATCH v3 10/20] net/sfc: add support for initialising different RxQ types Andrew Rybchenko
                     ` (10 subsequent siblings)
  19 siblings, 0 replies; 104+ messages in thread
From: Andrew Rybchenko @ 2021-06-18 13:40 UTC (permalink / raw)
  To: dev; +Cc: David Marchand, Igor Romanov, Andy Moreton

From: Igor Romanov <igor.romanov@oktetlabs.ru>

Add a function returning the management event queue software index.

Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
---
 drivers/net/sfc/sfc_ev.c | 2 +-
 drivers/net/sfc/sfc_ev.h | 6 ++++++
 2 files changed, 7 insertions(+), 1 deletion(-)

diff --git a/drivers/net/sfc/sfc_ev.c b/drivers/net/sfc/sfc_ev.c
index ed28d51e12..ba4409369a 100644
--- a/drivers/net/sfc/sfc_ev.c
+++ b/drivers/net/sfc/sfc_ev.c
@@ -983,7 +983,7 @@ sfc_ev_attach(struct sfc_adapter *sa)
 		goto fail_kvarg_perf_profile;
 	}
 
-	sa->mgmt_evq_index = 0;
+	sa->mgmt_evq_index = sfc_mgmt_evq_sw_index(sfc_sa2shared(sa));
 	rte_spinlock_init(&sa->mgmt_evq_lock);
 
 	rc = sfc_ev_qinit(sa, SFC_EVQ_TYPE_MGMT, 0, sa->evq_min_entries,
diff --git a/drivers/net/sfc/sfc_ev.h b/drivers/net/sfc/sfc_ev.h
index 75b9dcdebd..3f3c4b5b9a 100644
--- a/drivers/net/sfc/sfc_ev.h
+++ b/drivers/net/sfc/sfc_ev.h
@@ -60,6 +60,12 @@ struct sfc_evq {
 	unsigned int			entries;
 };
 
+static inline sfc_sw_index_t
+sfc_mgmt_evq_sw_index(__rte_unused const struct sfc_adapter_shared *sas)
+{
+	return 0;
+}
+
 /*
  * Functions below define event queue to transmit/receive queue and vice
  * versa mapping.
-- 
2.30.2


^ permalink raw reply	[flat|nested] 104+ messages in thread

* [dpdk-dev] [PATCH v3 10/20] net/sfc: add support for initialising different RxQ types
  2021-06-18 13:40 ` [dpdk-dev] [PATCH v3 " Andrew Rybchenko
                     ` (8 preceding siblings ...)
  2021-06-18 13:40   ` [dpdk-dev] [PATCH v3 09/20] net/sfc: add abstractions for the management EVQ identity Andrew Rybchenko
@ 2021-06-18 13:40   ` Andrew Rybchenko
  2021-06-18 13:40   ` [dpdk-dev] [PATCH v3 11/20] net/sfc: add NUMA-aware registry of service logical cores Andrew Rybchenko
                     ` (9 subsequent siblings)
  19 siblings, 0 replies; 104+ messages in thread
From: Andrew Rybchenko @ 2021-06-18 13:40 UTC (permalink / raw)
  To: dev; +Cc: David Marchand, Igor Romanov, Andy Moreton

From: Igor Romanov <igor.romanov@oktetlabs.ru>

Add extra EFX flags to the RxQ info initialization API to support
choosing different RxQ types, and make the API public so that it
can be used for counter queues.
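
Usage sketch based on the hunks below and on the counter RxQ added
later in the series:

  /* ethdev Rx queue: no extra EFX type flags */
  rc = sfc_rx_qinit_info(sa, sw_index, 0);

  /* counter Rx queue: request the user mark field in the Rx prefix */
  rc = sfc_rx_qinit_info(sa, sa->counter_rxq.sw_index,
                         EFX_RXQ_FLAG_USER_MARK);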

Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
---
 drivers/net/sfc/sfc_rx.c | 10 ++++++----
 drivers/net/sfc/sfc_rx.h |  2 ++
 2 files changed, 8 insertions(+), 4 deletions(-)

diff --git a/drivers/net/sfc/sfc_rx.c b/drivers/net/sfc/sfc_rx.c
index 597785ae02..c7a7bd66ef 100644
--- a/drivers/net/sfc/sfc_rx.c
+++ b/drivers/net/sfc/sfc_rx.c
@@ -1155,7 +1155,7 @@ sfc_rx_qinit(struct sfc_adapter *sa, sfc_sw_index_t sw_index,
 	else
 		rxq_info->type = EFX_RXQ_TYPE_DEFAULT;
 
-	rxq_info->type_flags =
+	rxq_info->type_flags |=
 		(offloads & DEV_RX_OFFLOAD_SCATTER) ?
 		EFX_RXQ_FLAG_SCATTER : EFX_RXQ_FLAG_NONE;
 
@@ -1594,8 +1594,9 @@ sfc_rx_stop(struct sfc_adapter *sa)
 	efx_rx_fini(sa->nic);
 }
 
-static int
-sfc_rx_qinit_info(struct sfc_adapter *sa, sfc_sw_index_t sw_index)
+int
+sfc_rx_qinit_info(struct sfc_adapter *sa, sfc_sw_index_t sw_index,
+		  unsigned int extra_efx_type_flags)
 {
 	struct sfc_adapter_shared * const sas = sfc_sa2shared(sa);
 	struct sfc_rxq_info *rxq_info = &sas->rxq_info[sw_index];
@@ -1606,6 +1607,7 @@ sfc_rx_qinit_info(struct sfc_adapter *sa, sfc_sw_index_t sw_index)
 	SFC_ASSERT(rte_is_power_of_2(max_entries));
 
 	rxq_info->max_entries = max_entries;
+	rxq_info->type_flags = extra_efx_type_flags;
 
 	return 0;
 }
@@ -1770,7 +1772,7 @@ sfc_rx_configure(struct sfc_adapter *sa)
 
 		sw_index = sfc_rxq_sw_index_by_ethdev_rx_qid(sas,
 							sas->ethdev_rxq_count);
-		rc = sfc_rx_qinit_info(sa, sw_index);
+		rc = sfc_rx_qinit_info(sa, sw_index, 0);
 		if (rc != 0)
 			goto fail_rx_qinit_info;
 
diff --git a/drivers/net/sfc/sfc_rx.h b/drivers/net/sfc/sfc_rx.h
index 96c7dc415d..e5a6fde79b 100644
--- a/drivers/net/sfc/sfc_rx.h
+++ b/drivers/net/sfc/sfc_rx.h
@@ -129,6 +129,8 @@ void sfc_rx_close(struct sfc_adapter *sa);
 int sfc_rx_start(struct sfc_adapter *sa);
 void sfc_rx_stop(struct sfc_adapter *sa);
 
+int sfc_rx_qinit_info(struct sfc_adapter *sa, sfc_sw_index_t sw_index,
+		      unsigned int extra_efx_type_flags);
 int sfc_rx_qinit(struct sfc_adapter *sa, unsigned int rx_queue_id,
 		 uint16_t nb_rx_desc, unsigned int socket_id,
 		 const struct rte_eth_rxconf *rx_conf,
-- 
2.30.2


^ permalink raw reply	[flat|nested] 104+ messages in thread

* [dpdk-dev] [PATCH v3 11/20] net/sfc: add NUMA-aware registry of service logical cores
  2021-06-18 13:40 ` [dpdk-dev] [PATCH v3 " Andrew Rybchenko
                     ` (9 preceding siblings ...)
  2021-06-18 13:40   ` [dpdk-dev] [PATCH v3 10/20] net/sfc: add support for initialising different RxQ types Andrew Rybchenko
@ 2021-06-18 13:40   ` Andrew Rybchenko
  2021-06-18 13:40   ` [dpdk-dev] [PATCH v3 12/20] net/sfc: reserve RxQ for counters Andrew Rybchenko
                     ` (8 subsequent siblings)
  19 siblings, 0 replies; 104+ messages in thread
From: Andrew Rybchenko @ 2021-06-18 13:40 UTC (permalink / raw)
  To: dev; +Cc: David Marchand, Andy Moreton, Ivan Malov

The driver requires service cores for housekeeping. Share these
cores across many adapters and various purposes to avoid extra
CPU overhead.

Since housekeeping services will talk to the NIC, it should be
possible to choose a logical core on the matching NUMA node.
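
A minimal usage sketch (this is how the MAE counter support added
later in the series picks a core): try the adapter's NUMA node first
and fall back to any socket:

  uint32_t cid;

  cid = sfc_get_service_lcore(sa->socket_id);
  if (cid == RTE_MAX_LCORE && sa->socket_id != SOCKET_ID_ANY)
          cid = sfc_get_service_lcore(SOCKET_ID_ANY);
  if (cid == RTE_MAX_LCORE)
          sfc_warn(sa, "no service core available, feature disabled");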

Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
Reviewed-by: Ivan Malov <ivan.malov@oktetlabs.ru>
---
 drivers/net/sfc/meson.build   |  1 +
 drivers/net/sfc/sfc_service.c | 99 +++++++++++++++++++++++++++++++++++
 drivers/net/sfc/sfc_service.h | 20 +++++++
 3 files changed, 120 insertions(+)
 create mode 100644 drivers/net/sfc/sfc_service.c
 create mode 100644 drivers/net/sfc/sfc_service.h

diff --git a/drivers/net/sfc/meson.build b/drivers/net/sfc/meson.build
index ccf5984d87..4ac97e8d43 100644
--- a/drivers/net/sfc/meson.build
+++ b/drivers/net/sfc/meson.build
@@ -62,4 +62,5 @@ sources = files(
         'sfc_ef10_tx.c',
         'sfc_ef100_rx.c',
         'sfc_ef100_tx.c',
+        'sfc_service.c',
 )
diff --git a/drivers/net/sfc/sfc_service.c b/drivers/net/sfc/sfc_service.c
new file mode 100644
index 0000000000..9c89484406
--- /dev/null
+++ b/drivers/net/sfc/sfc_service.c
@@ -0,0 +1,99 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ *
+ * Copyright(c) 2020-2021 Xilinx, Inc.
+ */
+
+#include <stdint.h>
+
+#include <rte_common.h>
+#include <rte_spinlock.h>
+#include <rte_lcore.h>
+#include <rte_service.h>
+#include <rte_memory.h>
+
+#include "sfc_log.h"
+#include "sfc_service.h"
+#include "sfc_debug.h"
+
+static uint32_t sfc_service_lcore[RTE_MAX_NUMA_NODES];
+static rte_spinlock_t sfc_service_lcore_lock = RTE_SPINLOCK_INITIALIZER;
+
+RTE_INIT(sfc_service_lcore_init)
+{
+	size_t i;
+
+	for (i = 0; i < RTE_DIM(sfc_service_lcore); ++i)
+		sfc_service_lcore[i] = RTE_MAX_LCORE;
+}
+
+static uint32_t
+sfc_find_service_lcore(int *socket_id)
+{
+	uint32_t service_core_list[RTE_MAX_LCORE];
+	uint32_t lcore_id;
+	int num;
+	int i;
+
+	SFC_ASSERT(rte_spinlock_is_locked(&sfc_service_lcore_lock));
+
+	num = rte_service_lcore_list(service_core_list,
+				    RTE_DIM(service_core_list));
+	if (num == 0) {
+		SFC_GENERIC_LOG(WARNING, "No service cores available");
+		return RTE_MAX_LCORE;
+	}
+	if (num < 0) {
+		SFC_GENERIC_LOG(ERR, "Failed to get service core list");
+		return RTE_MAX_LCORE;
+	}
+
+	for (i = 0; i < num; ++i) {
+		lcore_id = service_core_list[i];
+
+		if (*socket_id == SOCKET_ID_ANY) {
+			*socket_id = rte_lcore_to_socket_id(lcore_id);
+			break;
+		} else if (rte_lcore_to_socket_id(lcore_id) ==
+			   (unsigned int)*socket_id) {
+			break;
+		}
+	}
+
+	if (i == num) {
+		SFC_GENERIC_LOG(WARNING,
+			"No service cores reserved at socket %d", *socket_id);
+		return RTE_MAX_LCORE;
+	}
+
+	return lcore_id;
+}
+
+uint32_t
+sfc_get_service_lcore(int socket_id)
+{
+	uint32_t lcore_id = RTE_MAX_LCORE;
+
+	rte_spinlock_lock(&sfc_service_lcore_lock);
+
+	if (socket_id != SOCKET_ID_ANY) {
+		lcore_id = sfc_service_lcore[socket_id];
+	} else {
+		size_t i;
+
+		for (i = 0; i < RTE_DIM(sfc_service_lcore); ++i) {
+			if (sfc_service_lcore[i] != RTE_MAX_LCORE) {
+				lcore_id = sfc_service_lcore[i];
+				break;
+			}
+		}
+	}
+
+	if (lcore_id == RTE_MAX_LCORE) {
+		lcore_id = sfc_find_service_lcore(&socket_id);
+		if (lcore_id != RTE_MAX_LCORE)
+			sfc_service_lcore[socket_id] = lcore_id;
+	}
+
+	rte_spinlock_unlock(&sfc_service_lcore_lock);
+	return lcore_id;
+}
diff --git a/drivers/net/sfc/sfc_service.h b/drivers/net/sfc/sfc_service.h
new file mode 100644
index 0000000000..bbcce28479
--- /dev/null
+++ b/drivers/net/sfc/sfc_service.h
@@ -0,0 +1,20 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ *
+ * Copyright(c) 2020-2021 Xilinx, Inc.
+ */
+
+#ifndef _SFC_SERVICE_H
+#define _SFC_SERVICE_H
+
+#include <stdint.h>
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+uint32_t sfc_get_service_lcore(int socket_id);
+
+#ifdef __cplusplus
+}
+#endif
+#endif  /* _SFC_SERVICE_H */
-- 
2.30.2


^ permalink raw reply	[flat|nested] 104+ messages in thread

* [dpdk-dev] [PATCH v3 12/20] net/sfc: reserve RxQ for counters
  2021-06-18 13:40 ` [dpdk-dev] [PATCH v3 " Andrew Rybchenko
                     ` (10 preceding siblings ...)
  2021-06-18 13:40   ` [dpdk-dev] [PATCH v3 11/20] net/sfc: add NUMA-aware registry of service logical cores Andrew Rybchenko
@ 2021-06-18 13:40   ` Andrew Rybchenko
  2021-06-18 13:40   ` [dpdk-dev] [PATCH v3 13/20] common/sfc_efx/base: add counter creation MCDI wrappers Andrew Rybchenko
                     ` (7 subsequent siblings)
  19 siblings, 0 replies; 104+ messages in thread
From: Andrew Rybchenko @ 2021-06-18 13:40 UTC (permalink / raw)
  To: dev; +Cc: David Marchand, Igor Romanov, Andy Moreton, Ivan Malov

From: Igor Romanov <igor.romanov@oktetlabs.ru>

MAE delivers counter data as special packets via a dedicated Rx queue.
Reserve an RxQ so that it does not interfere with ethdev Rx queues.
A routine to handle these packets will be added later.

There is no point in reserving the queue if no service cores are
available, since counters cannot be used in that case.
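
For example, with two ethdev Rx queues, two Tx queues and the counter
queue enabled, sfc_set_drv_limits() below requests 1 + 2 + 2 + 1 = 6
event queues and 2 + 1 = 3 Rx queues from the firmware.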

Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
Reviewed-by: Ivan Malov <ivan.malov@oktetlabs.ru>
---
 drivers/net/sfc/meson.build       |   1 +
 drivers/net/sfc/sfc.c             |  68 ++++++++--
 drivers/net/sfc/sfc.h             |  19 +++
 drivers/net/sfc/sfc_dp.h          |   2 +
 drivers/net/sfc/sfc_ev.h          |  72 ++++++++--
 drivers/net/sfc/sfc_mae.c         |   1 +
 drivers/net/sfc/sfc_mae_counter.c | 217 ++++++++++++++++++++++++++++++
 drivers/net/sfc/sfc_mae_counter.h |  44 ++++++
 drivers/net/sfc/sfc_rx.c          |  43 ++++--
 9 files changed, 438 insertions(+), 29 deletions(-)
 create mode 100644 drivers/net/sfc/sfc_mae_counter.c
 create mode 100644 drivers/net/sfc/sfc_mae_counter.h

diff --git a/drivers/net/sfc/meson.build b/drivers/net/sfc/meson.build
index 4ac97e8d43..f8880f740a 100644
--- a/drivers/net/sfc/meson.build
+++ b/drivers/net/sfc/meson.build
@@ -55,6 +55,7 @@ sources = files(
         'sfc_filter.c',
         'sfc_switch.c',
         'sfc_mae.c',
+        'sfc_mae_counter.c',
         'sfc_flow.c',
         'sfc_dp.c',
         'sfc_ef10_rx.c',
diff --git a/drivers/net/sfc/sfc.c b/drivers/net/sfc/sfc.c
index 3477c7530b..4097cf39de 100644
--- a/drivers/net/sfc/sfc.c
+++ b/drivers/net/sfc/sfc.c
@@ -20,6 +20,7 @@
 #include "sfc_log.h"
 #include "sfc_ev.h"
 #include "sfc_rx.h"
+#include "sfc_mae_counter.h"
 #include "sfc_tx.h"
 #include "sfc_kvargs.h"
 #include "sfc_tweak.h"
@@ -174,6 +175,7 @@ static int
 sfc_estimate_resource_limits(struct sfc_adapter *sa)
 {
 	const efx_nic_cfg_t *encp = efx_nic_cfg_get(sa->nic);
+	struct sfc_adapter_shared *sas = sfc_sa2shared(sa);
 	efx_drv_limits_t limits;
 	int rc;
 	uint32_t evq_allocated;
@@ -235,17 +237,53 @@ sfc_estimate_resource_limits(struct sfc_adapter *sa)
 	rxq_allocated = MIN(rxq_allocated, limits.edl_max_rxq_count);
 	txq_allocated = MIN(txq_allocated, limits.edl_max_txq_count);
 
-	/* Subtract management EVQ not used for traffic */
-	SFC_ASSERT(evq_allocated > 0);
+	/*
+	 * Subtract management EVQ not used for traffic.
+	 * The resource allocation strategy is as follows:
+	 * - one EVQ for management
+	 * - one EVQ for each ethdev RXQ
+	 * - one EVQ for each ethdev TXQ
+	 * - one EVQ and one RXQ for optional MAE counters.
+	 */
+	if (evq_allocated == 0) {
+		sfc_err(sa, "count of allocated EvQ is 0");
+		rc = ENOMEM;
+		goto fail_allocate_evq;
+	}
 	evq_allocated--;
 
-	/* Right now we use separate EVQ for Rx and Tx */
-	sa->rxq_max = MIN(rxq_allocated, evq_allocated / 2);
-	sa->txq_max = MIN(txq_allocated, evq_allocated - sa->rxq_max);
+	/*
+	 * Reserve absolutely required minimum.
+	 * Right now we use separate EVQ for Rx and Tx.
+	 */
+	if (rxq_allocated > 0 && evq_allocated > 0) {
+		sa->rxq_max = 1;
+		rxq_allocated--;
+		evq_allocated--;
+	}
+	if (txq_allocated > 0 && evq_allocated > 0) {
+		sa->txq_max = 1;
+		txq_allocated--;
+		evq_allocated--;
+	}
+
+	if (sfc_mae_counter_rxq_required(sa) &&
+	    rxq_allocated > 0 && evq_allocated > 0) {
+		rxq_allocated--;
+		evq_allocated--;
+		sas->counters_rxq_allocated = true;
+	} else {
+		sas->counters_rxq_allocated = false;
+	}
+
+	/* Add remaining allocated queues */
+	sa->rxq_max += MIN(rxq_allocated, evq_allocated / 2);
+	sa->txq_max += MIN(txq_allocated, evq_allocated - sa->rxq_max);
 
 	/* Keep NIC initialized */
 	return 0;
 
+fail_allocate_evq:
 fail_get_vi_pool:
 	efx_nic_fini(sa->nic);
 fail_nic_init:
@@ -256,14 +294,20 @@ static int
 sfc_set_drv_limits(struct sfc_adapter *sa)
 {
 	const struct rte_eth_dev_data *data = sa->eth_dev->data;
+	uint32_t rxq_reserved = sfc_nb_reserved_rxq(sfc_sa2shared(sa));
 	efx_drv_limits_t lim;
 
 	memset(&lim, 0, sizeof(lim));
 
-	/* Limits are strict since take into account initial estimation */
+	/*
+	 * Limits are strict since they take into account the initial estimation.
+	 * The resource allocation strategy is described in
+	 * sfc_estimate_resource_limits().
+	 */
 	lim.edl_min_evq_count = lim.edl_max_evq_count =
-		1 + data->nb_rx_queues + data->nb_tx_queues;
-	lim.edl_min_rxq_count = lim.edl_max_rxq_count = data->nb_rx_queues;
+		1 + data->nb_rx_queues + data->nb_tx_queues + rxq_reserved;
+	lim.edl_min_rxq_count = lim.edl_max_rxq_count =
+		data->nb_rx_queues + rxq_reserved;
 	lim.edl_min_txq_count = lim.edl_max_txq_count = data->nb_tx_queues;
 
 	return efx_nic_set_drv_limits(sa->nic, &lim);
@@ -834,6 +878,10 @@ sfc_attach(struct sfc_adapter *sa)
 	if (rc != 0)
 		goto fail_filter_attach;
 
+	rc = sfc_mae_counter_rxq_attach(sa);
+	if (rc != 0)
+		goto fail_mae_counter_rxq_attach;
+
 	rc = sfc_mae_attach(sa);
 	if (rc != 0)
 		goto fail_mae_attach;
@@ -862,6 +910,9 @@ sfc_attach(struct sfc_adapter *sa)
 	sfc_mae_detach(sa);
 
 fail_mae_attach:
+	sfc_mae_counter_rxq_detach(sa);
+
+fail_mae_counter_rxq_attach:
 	sfc_filter_detach(sa);
 
 fail_filter_attach:
@@ -903,6 +954,7 @@ sfc_detach(struct sfc_adapter *sa)
 	sfc_flow_fini(sa);
 
 	sfc_mae_detach(sa);
+	sfc_mae_counter_rxq_detach(sa);
 	sfc_filter_detach(sa);
 	sfc_rss_detach(sa);
 	sfc_port_detach(sa);
diff --git a/drivers/net/sfc/sfc.h b/drivers/net/sfc/sfc.h
index 00fc26cf0e..546739bd4a 100644
--- a/drivers/net/sfc/sfc.h
+++ b/drivers/net/sfc/sfc.h
@@ -186,6 +186,8 @@ struct sfc_adapter_shared {
 
 	char				*dp_rx_name;
 	char				*dp_tx_name;
+
+	bool				counters_rxq_allocated;
 };
 
 /* Adapter process private data */
@@ -205,6 +207,15 @@ sfc_adapter_priv_by_eth_dev(struct rte_eth_dev *eth_dev)
 	return sap;
 }
 
+/* RxQ dedicated for counters (counter only RxQ) data */
+struct sfc_counter_rxq {
+	unsigned int			state;
+#define SFC_COUNTER_RXQ_ATTACHED		0x1
+#define SFC_COUNTER_RXQ_INITIALIZED		0x2
+	sfc_sw_index_t			sw_index;
+	struct rte_mempool		*mp;
+};
+
 /* Adapter private data */
 struct sfc_adapter {
 	/*
@@ -283,6 +294,8 @@ struct sfc_adapter {
 	bool				mgmt_evq_running;
 	struct sfc_evq			*mgmt_evq;
 
+	struct sfc_counter_rxq		counter_rxq;
+
 	struct sfc_rxq			*rxq_ctrl;
 	struct sfc_txq			*txq_ctrl;
 
@@ -357,6 +370,12 @@ sfc_adapter_lock_fini(__rte_unused struct sfc_adapter *sa)
 	/* Just for symmetry of the API */
 }
 
+static inline unsigned int
+sfc_nb_counter_rxq(const struct sfc_adapter_shared *sas)
+{
+	return sas->counters_rxq_allocated ? 1 : 0;
+}
+
 /** Get the number of milliseconds since boot from the default timer */
 static inline uint64_t
 sfc_get_system_msecs(void)
diff --git a/drivers/net/sfc/sfc_dp.h b/drivers/net/sfc/sfc_dp.h
index 76065483d4..61c1a3fbac 100644
--- a/drivers/net/sfc/sfc_dp.h
+++ b/drivers/net/sfc/sfc_dp.h
@@ -97,6 +97,8 @@ struct sfc_dp {
 TAILQ_HEAD(sfc_dp_list, sfc_dp);
 
 typedef unsigned int sfc_sw_index_t;
+#define SFC_SW_INDEX_INVALID	((sfc_sw_index_t)(UINT_MAX))
+
 typedef int32_t	sfc_ethdev_qid_t;
 #define SFC_ETHDEV_QID_INVALID	((sfc_ethdev_qid_t)(-1))
 
diff --git a/drivers/net/sfc/sfc_ev.h b/drivers/net/sfc/sfc_ev.h
index 3f3c4b5b9a..b2a0380205 100644
--- a/drivers/net/sfc/sfc_ev.h
+++ b/drivers/net/sfc/sfc_ev.h
@@ -66,36 +66,87 @@ sfc_mgmt_evq_sw_index(__rte_unused const struct sfc_adapter_shared *sas)
 	return 0;
 }
 
+/* Return the number of Rx queues reserved for driver's internal use */
+static inline unsigned int
+sfc_nb_reserved_rxq(const struct sfc_adapter_shared *sas)
+{
+	return sfc_nb_counter_rxq(sas);
+}
+
+static inline unsigned int
+sfc_nb_reserved_evq(const struct sfc_adapter_shared *sas)
+{
+	/* An EvQ is required for each reserved RxQ */
+	return 1 + sfc_nb_reserved_rxq(sas);
+}
+
+/*
+ * The mapping functions that return SW index of a specific reserved
+ * queue rely on the relative order of reserved queues. Some reserved
+ * queues are optional, and if they are disabled or not supported, then
+ * the function for that specific reserved queue will return previous
+ * valid index of a reserved queue in the dependency chain or
+ * SFC_SW_INDEX_INVALID if it is the first reserved queue in the chain.
+ * If at least one of the reserved queues in the chain is enabled, then
+ * the corresponding function will give valid SW index, even if previous
+ * functions in the chain returned SFC_SW_INDEX_INVALID, since this value
+ * is one less than the first valid SW index.
+ *
+ * The dependency mechanism is utilized to avoid rigid defines for SW indices
+ * for reserved queues and to allow these indices to shrink and make space
+ * for ethdev queue indices when some of the reserved queues are disabled.
+ */
+
+static inline sfc_sw_index_t
+sfc_counters_rxq_sw_index(const struct sfc_adapter_shared *sas)
+{
+	return sas->counters_rxq_allocated ? 0 : SFC_SW_INDEX_INVALID;
+}
+
 /*
  * Functions below define event queue to transmit/receive queue and vice
  * versa mapping.
+ * SFC_ETHDEV_QID_INVALID is returned when sw_index is converted to
+ * ethdev_qid, but sw_index represents a reserved queue for driver's
+ * internal use.
  * Own event queue is allocated for management, each Rx and each Tx queue.
  * Zero event queue is used for management events.
- * Rx event queues from 1 to RxQ number follow management event queue.
+ * When counters are supported, one Rx event queue is reserved.
+ * Rx event queues follow reserved event queues.
  * Tx event queues follow Rx event queues.
  */
 
 static inline sfc_ethdev_qid_t
-sfc_ethdev_rx_qid_by_rxq_sw_index(__rte_unused struct sfc_adapter_shared *sas,
+sfc_ethdev_rx_qid_by_rxq_sw_index(struct sfc_adapter_shared *sas,
 				  sfc_sw_index_t rxq_sw_index)
 {
-	/* Only ethdev queues are present for now */
-	return rxq_sw_index;
+	if (rxq_sw_index < sfc_nb_reserved_rxq(sas))
+		return SFC_ETHDEV_QID_INVALID;
+
+	return rxq_sw_index - sfc_nb_reserved_rxq(sas);
 }
 
 static inline sfc_sw_index_t
-sfc_rxq_sw_index_by_ethdev_rx_qid(__rte_unused struct sfc_adapter_shared *sas,
+sfc_rxq_sw_index_by_ethdev_rx_qid(struct sfc_adapter_shared *sas,
 				  sfc_ethdev_qid_t ethdev_qid)
 {
-	/* Only ethdev queues are present for now */
-	return ethdev_qid;
+	return sfc_nb_reserved_rxq(sas) + ethdev_qid;
 }
 
 static inline sfc_sw_index_t
-sfc_evq_sw_index_by_rxq_sw_index(__rte_unused struct sfc_adapter *sa,
+sfc_evq_sw_index_by_rxq_sw_index(struct sfc_adapter *sa,
 				 sfc_sw_index_t rxq_sw_index)
 {
-	return 1 + rxq_sw_index;
+	struct sfc_adapter_shared *sas = sfc_sa2shared(sa);
+	sfc_ethdev_qid_t ethdev_qid;
+
+	ethdev_qid = sfc_ethdev_rx_qid_by_rxq_sw_index(sas, rxq_sw_index);
+	if (ethdev_qid == SFC_ETHDEV_QID_INVALID) {
+		/* One EvQ is reserved for management */
+		return 1 + rxq_sw_index;
+	}
+
+	return sfc_nb_reserved_evq(sas) + ethdev_qid;
 }
 
 static inline sfc_ethdev_qid_t
@@ -118,7 +169,8 @@ static inline sfc_sw_index_t
 sfc_evq_sw_index_by_txq_sw_index(struct sfc_adapter *sa,
 				 sfc_sw_index_t txq_sw_index)
 {
-	return 1 + sa->eth_dev->data->nb_rx_queues + txq_sw_index;
+	return sfc_nb_reserved_evq(sfc_sa2shared(sa)) +
+		sa->eth_dev->data->nb_rx_queues + txq_sw_index;
 }
 
 int sfc_ev_attach(struct sfc_adapter *sa);
diff --git a/drivers/net/sfc/sfc_mae.c b/drivers/net/sfc/sfc_mae.c
index a2c0aa1436..8ffcf72d88 100644
--- a/drivers/net/sfc/sfc_mae.c
+++ b/drivers/net/sfc/sfc_mae.c
@@ -16,6 +16,7 @@
 #include "efx.h"
 
 #include "sfc.h"
+#include "sfc_mae_counter.h"
 #include "sfc_log.h"
 #include "sfc_switch.h"
 
diff --git a/drivers/net/sfc/sfc_mae_counter.c b/drivers/net/sfc/sfc_mae_counter.c
new file mode 100644
index 0000000000..c7646cf7b1
--- /dev/null
+++ b/drivers/net/sfc/sfc_mae_counter.c
@@ -0,0 +1,217 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ *
+ * Copyright(c) 2020-2021 Xilinx, Inc.
+ */
+
+#include <rte_common.h>
+
+#include "efx.h"
+
+#include "sfc_ev.h"
+#include "sfc.h"
+#include "sfc_rx.h"
+#include "sfc_mae_counter.h"
+#include "sfc_service.h"
+
+static uint32_t
+sfc_mae_counter_get_service_lcore(struct sfc_adapter *sa)
+{
+	uint32_t cid;
+
+	cid = sfc_get_service_lcore(sa->socket_id);
+	if (cid != RTE_MAX_LCORE)
+		return cid;
+
+	if (sa->socket_id != SOCKET_ID_ANY)
+		cid = sfc_get_service_lcore(SOCKET_ID_ANY);
+
+	if (cid == RTE_MAX_LCORE) {
+		sfc_warn(sa, "failed to get service lcore for counter service");
+	} else if (sa->socket_id != SOCKET_ID_ANY) {
+		sfc_warn(sa,
+			"failed to get service lcore for counter service at socket %d, but got at socket %u",
+			sa->socket_id, rte_lcore_to_socket_id(cid));
+	}
+	return cid;
+}
+
+bool
+sfc_mae_counter_rxq_required(struct sfc_adapter *sa)
+{
+	const efx_nic_cfg_t *encp = efx_nic_cfg_get(sa->nic);
+
+	if (encp->enc_mae_supported == B_FALSE)
+		return false;
+
+	if (sfc_mae_counter_get_service_lcore(sa) == RTE_MAX_LCORE)
+		return false;
+
+	return true;
+}
+
+int
+sfc_mae_counter_rxq_attach(struct sfc_adapter *sa)
+{
+	struct sfc_adapter_shared * const sas = sfc_sa2shared(sa);
+	char name[RTE_MEMPOOL_NAMESIZE];
+	struct rte_mempool *mp;
+	unsigned int n_elements;
+	unsigned int cache_size;
+	/* The mempool is internal and private area is not required */
+	const uint16_t priv_size = 0;
+	const uint16_t data_room_size = RTE_PKTMBUF_HEADROOM +
+		SFC_MAE_COUNTER_STREAM_PACKET_SIZE;
+	int rc;
+
+	sfc_log_init(sa, "entry");
+
+	if (!sas->counters_rxq_allocated) {
+		sfc_log_init(sa, "counter queue is not supported - skip");
+		return 0;
+	}
+
+	/*
+	 * At least one element in the ring is always unused to distinguish
+	 * between empty and full ring cases.
+	 */
+	n_elements = SFC_COUNTER_RXQ_RX_DESC_COUNT - 1;
+
+	/*
+	 * The cache must have sufficient space to put received buckets
+	 * before they're reused on refill.
+	 */
+	cache_size = rte_align32pow2(SFC_COUNTER_RXQ_REFILL_LEVEL +
+				     SFC_MAE_COUNTER_RX_BURST - 1);
+
+	if (snprintf(name, sizeof(name), "counter_rxq-pool-%u", sas->port_id) >=
+	    (int)sizeof(name)) {
+		sfc_err(sa, "failed: counter RxQ mempool name is too long");
+		rc = ENAMETOOLONG;
+		goto fail_long_name;
+	}
+
+	/*
+	 * It could be single-producer single-consumer ring mempool which
+	 * requires minimal barriers. However, cache size and refill/burst
+	 * policy are aligned, therefore it does not matter which
+	 * mempool backend is chosen since backend is unused.
+	 */
+	mp = rte_pktmbuf_pool_create(name, n_elements, cache_size,
+				     priv_size, data_room_size, sa->socket_id);
+	if (mp == NULL) {
+		sfc_err(sa, "failed to create counter RxQ mempool");
+		rc = rte_errno;
+		goto fail_mp_create;
+	}
+
+	sa->counter_rxq.sw_index = sfc_counters_rxq_sw_index(sas);
+	sa->counter_rxq.mp = mp;
+	sa->counter_rxq.state |= SFC_COUNTER_RXQ_ATTACHED;
+
+	sfc_log_init(sa, "done");
+
+	return 0;
+
+fail_mp_create:
+fail_long_name:
+	sfc_log_init(sa, "failed: %s", rte_strerror(rc));
+
+	return rc;
+}
+
+void
+sfc_mae_counter_rxq_detach(struct sfc_adapter *sa)
+{
+	struct sfc_adapter_shared * const sas = sfc_sa2shared(sa);
+
+	sfc_log_init(sa, "entry");
+
+	if (!sas->counters_rxq_allocated) {
+		sfc_log_init(sa, "counter queue is not supported - skip");
+		return;
+	}
+
+	if ((sa->counter_rxq.state & SFC_COUNTER_RXQ_ATTACHED) == 0) {
+		sfc_log_init(sa, "counter queue is not attached - skip");
+		return;
+	}
+
+	rte_mempool_free(sa->counter_rxq.mp);
+	sa->counter_rxq.mp = NULL;
+	sa->counter_rxq.state &= ~SFC_COUNTER_RXQ_ATTACHED;
+
+	sfc_log_init(sa, "done");
+}
+
+int
+sfc_mae_counter_rxq_init(struct sfc_adapter *sa)
+{
+	struct sfc_adapter_shared * const sas = sfc_sa2shared(sa);
+	const struct rte_eth_rxconf rxconf = {
+		.rx_free_thresh = SFC_COUNTER_RXQ_REFILL_LEVEL,
+		.rx_drop_en = 1,
+	};
+	uint16_t nb_rx_desc = SFC_COUNTER_RXQ_RX_DESC_COUNT;
+	int rc;
+
+	sfc_log_init(sa, "entry");
+
+	if (!sas->counters_rxq_allocated) {
+		sfc_log_init(sa, "counter queue is not supported - skip");
+		return 0;
+	}
+
+	if ((sa->counter_rxq.state & SFC_COUNTER_RXQ_ATTACHED) == 0) {
+		sfc_log_init(sa, "counter queue is not attached - skip");
+		return 0;
+	}
+
+	nb_rx_desc = RTE_MIN(nb_rx_desc, sa->rxq_max_entries);
+	nb_rx_desc = RTE_MAX(nb_rx_desc, sa->rxq_min_entries);
+
+	rc = sfc_rx_qinit_info(sa, sa->counter_rxq.sw_index,
+			       EFX_RXQ_FLAG_USER_MARK);
+	if (rc != 0)
+		goto fail_counter_rxq_init_info;
+
+	rc = sfc_rx_qinit(sa, sa->counter_rxq.sw_index, nb_rx_desc,
+			  sa->socket_id, &rxconf, sa->counter_rxq.mp);
+	if (rc != 0) {
+		sfc_err(sa, "failed to init counter RxQ");
+		goto fail_counter_rxq_init;
+	}
+
+	sa->counter_rxq.state |= SFC_COUNTER_RXQ_INITIALIZED;
+
+	sfc_log_init(sa, "done");
+
+	return 0;
+
+fail_counter_rxq_init:
+fail_counter_rxq_init_info:
+	sfc_log_init(sa, "failed: %s", rte_strerror(rc));
+
+	return rc;
+}
+
+void
+sfc_mae_counter_rxq_fini(struct sfc_adapter *sa)
+{
+	struct sfc_adapter_shared * const sas = sfc_sa2shared(sa);
+
+	sfc_log_init(sa, "entry");
+
+	if (!sas->counters_rxq_allocated) {
+		sfc_log_init(sa, "counter queue is not supported - skip");
+		return;
+	}
+
+	if ((sa->counter_rxq.state & SFC_COUNTER_RXQ_INITIALIZED) == 0) {
+		sfc_log_init(sa, "counter queue is not initialized - skip");
+		return;
+	}
+
+	sfc_rx_qfini(sa, sa->counter_rxq.sw_index);
+
+	sfc_log_init(sa, "done");
+}
diff --git a/drivers/net/sfc/sfc_mae_counter.h b/drivers/net/sfc/sfc_mae_counter.h
new file mode 100644
index 0000000000..f16d64a999
--- /dev/null
+++ b/drivers/net/sfc/sfc_mae_counter.h
@@ -0,0 +1,44 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ *
+ * Copyright(c) 2020-2021 Xilinx, Inc.
+ */
+
+#ifndef _SFC_MAE_COUNTER_H
+#define _SFC_MAE_COUNTER_H
+
+#include "sfc.h"
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/* Default values for a user of counter RxQ */
+#define SFC_MAE_COUNTER_RX_BURST 32
+#define SFC_COUNTER_RXQ_RX_DESC_COUNT 256
+
+/*
+ * The refill level is chosen based on requirement to keep number
+ * of give credits operations low.
+ */
+#define SFC_COUNTER_RXQ_REFILL_LEVEL (SFC_COUNTER_RXQ_RX_DESC_COUNT / 4)
+
+/*
+ * SF-122415-TC states that the packetiser that generates packets for
+ * the counter stream must support 9k frames. Set it to the maximum
+ * supported size since, with a huge number of counters, it is better
+ * to have fewer, larger packets carrying the updates.
+ */
+#define SFC_MAE_COUNTER_STREAM_PACKET_SIZE 9216
+
+bool sfc_mae_counter_rxq_required(struct sfc_adapter *sa);
+
+int sfc_mae_counter_rxq_attach(struct sfc_adapter *sa);
+void sfc_mae_counter_rxq_detach(struct sfc_adapter *sa);
+
+int sfc_mae_counter_rxq_init(struct sfc_adapter *sa);
+void sfc_mae_counter_rxq_fini(struct sfc_adapter *sa);
+
+#ifdef __cplusplus
+}
+#endif
+#endif  /* _SFC_MAE_COUNTER_H */
diff --git a/drivers/net/sfc/sfc_rx.c b/drivers/net/sfc/sfc_rx.c
index c7a7bd66ef..0532f77082 100644
--- a/drivers/net/sfc/sfc_rx.c
+++ b/drivers/net/sfc/sfc_rx.c
@@ -16,6 +16,7 @@
 #include "sfc_log.h"
 #include "sfc_ev.h"
 #include "sfc_rx.h"
+#include "sfc_mae_counter.h"
 #include "sfc_kvargs.h"
 #include "sfc_tweak.h"
 
@@ -1705,6 +1706,9 @@ sfc_rx_configure(struct sfc_adapter *sa)
 	struct sfc_rss *rss = &sas->rss;
 	struct rte_eth_conf *dev_conf = &sa->eth_dev->data->dev_conf;
 	const unsigned int nb_rx_queues = sa->eth_dev->data->nb_rx_queues;
+	const unsigned int nb_rsrv_rx_queues = sfc_nb_reserved_rxq(sas);
+	const unsigned int nb_rxq_total = nb_rx_queues + nb_rsrv_rx_queues;
+	bool reconfigure;
 	int rc;
 
 	sfc_log_init(sa, "nb_rx_queues=%u (old %u)",
@@ -1714,12 +1718,15 @@ sfc_rx_configure(struct sfc_adapter *sa)
 	if (rc != 0)
 		goto fail_check_mode;
 
-	if (nb_rx_queues == sas->rxq_count)
+	if (nb_rxq_total == sas->rxq_count) {
+		reconfigure = true;
 		goto configure_rss;
+	}
 
 	if (sas->rxq_info == NULL) {
+		reconfigure = false;
 		rc = ENOMEM;
-		sas->rxq_info = rte_calloc_socket("sfc-rxqs", nb_rx_queues,
+		sas->rxq_info = rte_calloc_socket("sfc-rxqs", nb_rxq_total,
 						  sizeof(sas->rxq_info[0]), 0,
 						  sa->socket_id);
 		if (sas->rxq_info == NULL)
@@ -1730,39 +1737,42 @@ sfc_rx_configure(struct sfc_adapter *sa)
 		 * since it should not be shared.
 		 */
 		rc = ENOMEM;
-		sa->rxq_ctrl = calloc(nb_rx_queues, sizeof(sa->rxq_ctrl[0]));
+		sa->rxq_ctrl = calloc(nb_rxq_total, sizeof(sa->rxq_ctrl[0]));
 		if (sa->rxq_ctrl == NULL)
 			goto fail_rxqs_ctrl_alloc;
 	} else {
 		struct sfc_rxq_info *new_rxq_info;
 		struct sfc_rxq *new_rxq_ctrl;
 
+		reconfigure = true;
+
+		/* Do not uninitialize reserved queues */
 		if (nb_rx_queues < sas->ethdev_rxq_count)
 			sfc_rx_fini_queues(sa, nb_rx_queues);
 
 		rc = ENOMEM;
 		new_rxq_info =
 			rte_realloc(sas->rxq_info,
-				    nb_rx_queues * sizeof(sas->rxq_info[0]), 0);
-		if (new_rxq_info == NULL && nb_rx_queues > 0)
+				    nb_rxq_total * sizeof(sas->rxq_info[0]), 0);
+		if (new_rxq_info == NULL && nb_rxq_total > 0)
 			goto fail_rxqs_realloc;
 
 		rc = ENOMEM;
 		new_rxq_ctrl = realloc(sa->rxq_ctrl,
-				       nb_rx_queues * sizeof(sa->rxq_ctrl[0]));
-		if (new_rxq_ctrl == NULL && nb_rx_queues > 0)
+				       nb_rxq_total * sizeof(sa->rxq_ctrl[0]));
+		if (new_rxq_ctrl == NULL && nb_rxq_total > 0)
 			goto fail_rxqs_ctrl_realloc;
 
 		sas->rxq_info = new_rxq_info;
 		sa->rxq_ctrl = new_rxq_ctrl;
-		if (nb_rx_queues > sas->rxq_count) {
+		if (nb_rxq_total > sas->rxq_count) {
 			unsigned int rxq_count = sas->rxq_count;
 
 			memset(&sas->rxq_info[rxq_count], 0,
-			       (nb_rx_queues - rxq_count) *
+			       (nb_rxq_total - rxq_count) *
 			       sizeof(sas->rxq_info[0]));
 			memset(&sa->rxq_ctrl[rxq_count], 0,
-			       (nb_rx_queues - rxq_count) *
+			       (nb_rxq_total - rxq_count) *
 			       sizeof(sa->rxq_ctrl[0]));
 		}
 	}
@@ -1779,7 +1789,13 @@ sfc_rx_configure(struct sfc_adapter *sa)
 		sas->ethdev_rxq_count++;
 	}
 
-	sas->rxq_count = sas->ethdev_rxq_count;
+	sas->rxq_count = sas->ethdev_rxq_count + nb_rsrv_rx_queues;
+
+	if (!reconfigure) {
+		rc = sfc_mae_counter_rxq_init(sa);
+		if (rc != 0)
+			goto fail_count_rxq_init;
+	}
 
 configure_rss:
 	rss->channels = (dev_conf->rxmode.mq_mode == ETH_MQ_RX_RSS) ?
@@ -1801,6 +1817,10 @@ sfc_rx_configure(struct sfc_adapter *sa)
 	return 0;
 
 fail_rx_process_adv_conf_rss:
+	if (!reconfigure)
+		sfc_mae_counter_rxq_fini(sa);
+
+fail_count_rxq_init:
 fail_rx_qinit_info:
 fail_rxqs_ctrl_realloc:
 fail_rxqs_realloc:
@@ -1824,6 +1844,7 @@ sfc_rx_close(struct sfc_adapter *sa)
 	struct sfc_rss *rss = &sfc_sa2shared(sa)->rss;
 
 	sfc_rx_fini_queues(sa, 0);
+	sfc_mae_counter_rxq_fini(sa);
 
 	rss->channels = 0;
 
-- 
2.30.2


^ permalink raw reply	[flat|nested] 104+ messages in thread

* [dpdk-dev] [PATCH v3 13/20] common/sfc_efx/base: add counter creation MCDI wrappers
  2021-06-18 13:40 ` [dpdk-dev] [PATCH v3 " Andrew Rybchenko
                     ` (11 preceding siblings ...)
  2021-06-18 13:40   ` [dpdk-dev] [PATCH v3 12/20] net/sfc: reserve RxQ for counters Andrew Rybchenko
@ 2021-06-18 13:40   ` Andrew Rybchenko
  2021-06-18 13:40   ` [dpdk-dev] [PATCH v3 14/20] common/sfc_efx/base: add counter stream " Andrew Rybchenko
                     ` (6 subsequent siblings)
  19 siblings, 0 replies; 104+ messages in thread
From: Andrew Rybchenko @ 2021-06-18 13:40 UTC (permalink / raw)
  To: dev; +Cc: David Marchand, Igor Romanov, Andy Moreton, Ivan Malov

From: Igor Romanov <igor.romanov@oktetlabs.ru>

Users will be able to create and free MAE counters. Support for
associating counters with an action set will be added in upcoming
patches.
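
Below is a minimal usage sketch of the new wrappers (illustrative only;
the NIC handle "enp", the helper name and the abbreviated error handling
are assumptions of this example):

#include "efx.h"

static void
counters_alloc_free_example(efx_nic_t *enp)
{
	efx_counter_t counter;
	uint32_t n_allocated;
	uint32_t n_freed;
	uint32_t gen_count;
	efx_rc_t rc;

	/* Allocate a single MAE counter and fetch the generation count. */
	rc = efx_mae_counters_alloc(enp, 1, &n_allocated, &counter,
				    &gen_count);
	if (rc != 0 || n_allocated != 1)
		return;

	/* ... use counter.id, e.g. in an action set (see later patches) ... */

	/* Free the counter; the generation count helps detect stale readings. */
	rc = efx_mae_counters_free(enp, 1, &n_freed, &counter, &gen_count);
}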

Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
Reviewed-by: Ivan Malov <ivan.malov@oktetlabs.ru>
---
 drivers/common/sfc_efx/base/efx.h      |  37 ++++++
 drivers/common/sfc_efx/base/efx_impl.h |   1 +
 drivers/common/sfc_efx/base/efx_mae.c  | 158 +++++++++++++++++++++++++
 drivers/common/sfc_efx/base/efx_mcdi.h |   7 ++
 drivers/common/sfc_efx/version.map     |   2 +
 5 files changed, 205 insertions(+)

diff --git a/drivers/common/sfc_efx/base/efx.h b/drivers/common/sfc_efx/base/efx.h
index f81837a931..b789e19b98 100644
--- a/drivers/common/sfc_efx/base/efx.h
+++ b/drivers/common/sfc_efx/base/efx.h
@@ -4388,6 +4388,10 @@ efx_mae_action_set_fill_in_eh_id(
 	__in				efx_mae_actions_t *spec,
 	__in				const efx_mae_eh_id_t *eh_idp);
 
+typedef struct efx_counter_s {
+	uint32_t id;
+} efx_counter_t;
+
 /* Action set ID */
 typedef struct efx_mae_aset_id_s {
 	uint32_t id;
@@ -4400,6 +4404,39 @@ efx_mae_action_set_alloc(
 	__in				const efx_mae_actions_t *spec,
 	__out				efx_mae_aset_id_t *aset_idp);
 
+/*
+ * Generation count has two purposes:
+ *
+ * 1) Distinguish between counter packets that belong to a freed counter
+ *    and packets that belong to a reallocated counter (with the same ID);
+ * 2) Make sure that all packets are received for a counter that was freed.
+ *
+ * API users should provide generation count out parameter in allocation
+ * function if counters can be reallocated and consistent counter values are
+ * required.
+ *
+ * API users that need consistent final counter values after counter
+ * deallocation or counter stream stop should provide the parameter in
+ * functions that free the counters and stop the counter stream.
+ */
+LIBEFX_API
+extern	__checkReturn			efx_rc_t
+efx_mae_counters_alloc(
+	__in				efx_nic_t *enp,
+	__in				uint32_t n_counters,
+	__out				uint32_t *n_allocatedp,
+	__out_ecount(n_counters)	efx_counter_t *countersp,
+	__out_opt			uint32_t *gen_countp);
+
+LIBEFX_API
+extern	__checkReturn			efx_rc_t
+efx_mae_counters_free(
+	__in				efx_nic_t *enp,
+	__in				uint32_t n_counters,
+	__out				uint32_t *n_freedp,
+	__in_ecount(n_counters)		const efx_counter_t *countersp,
+	__out_opt			uint32_t *gen_countp);
+
 LIBEFX_API
 extern	__checkReturn			efx_rc_t
 efx_mae_action_set_free(
diff --git a/drivers/common/sfc_efx/base/efx_impl.h b/drivers/common/sfc_efx/base/efx_impl.h
index a6b20704ac..b69463385e 100644
--- a/drivers/common/sfc_efx/base/efx_impl.h
+++ b/drivers/common/sfc_efx/base/efx_impl.h
@@ -821,6 +821,7 @@ typedef struct efx_mae_s {
 	/** Outer rule match field capabilities. */
 	efx_mae_field_cap_t		*em_outer_rule_field_caps;
 	size_t				em_outer_rule_field_caps_size;
+	uint32_t			em_max_ncounters;
 } efx_mae_t;
 
 #endif /* EFSYS_OPT_MAE */
diff --git a/drivers/common/sfc_efx/base/efx_mae.c b/drivers/common/sfc_efx/base/efx_mae.c
index c1784211e7..cf6c449a16 100644
--- a/drivers/common/sfc_efx/base/efx_mae.c
+++ b/drivers/common/sfc_efx/base/efx_mae.c
@@ -67,6 +67,9 @@ efx_mae_get_capabilities(
 	maep->em_max_nfields =
 	    MCDI_OUT_DWORD(req, MAE_GET_CAPS_OUT_MATCH_FIELD_COUNT);
 
+	maep->em_max_ncounters =
+	    MCDI_OUT_DWORD(req, MAE_GET_CAPS_OUT_COUNTERS);
+
 	return (0);
 
 fail2:
@@ -2369,6 +2372,161 @@ efx_mae_action_rule_remove(
 
 	return (0);
 
+fail4:
+	EFSYS_PROBE(fail4);
+fail3:
+	EFSYS_PROBE(fail3);
+fail2:
+	EFSYS_PROBE(fail2);
+fail1:
+	EFSYS_PROBE1(fail1, efx_rc_t, rc);
+	return (rc);
+}
+
+	__checkReturn			efx_rc_t
+efx_mae_counters_alloc(
+	__in				efx_nic_t *enp,
+	__in				uint32_t n_counters,
+	__out				uint32_t *n_allocatedp,
+	__out_ecount(n_counters)	efx_counter_t *countersp,
+	__out_opt			uint32_t *gen_countp)
+{
+	EFX_MCDI_DECLARE_BUF(payload,
+	    MC_CMD_MAE_COUNTER_ALLOC_IN_LEN,
+	    MC_CMD_MAE_COUNTER_ALLOC_OUT_LENMAX_MCDI2);
+	efx_mae_t *maep = enp->en_maep;
+	uint32_t n_allocated;
+	efx_mcdi_req_t req;
+	unsigned int i;
+	efx_rc_t rc;
+
+	if (n_counters > maep->em_max_ncounters ||
+	    n_counters < MC_CMD_MAE_COUNTER_ALLOC_OUT_COUNTER_ID_MINNUM ||
+	    n_counters > MC_CMD_MAE_COUNTER_ALLOC_OUT_COUNTER_ID_MAXNUM_MCDI2) {
+		rc = EINVAL;
+		goto fail1;
+	}
+
+	req.emr_cmd = MC_CMD_MAE_COUNTER_ALLOC;
+	req.emr_in_buf = payload;
+	req.emr_in_length = MC_CMD_MAE_COUNTER_ALLOC_IN_LEN;
+	req.emr_out_buf = payload;
+	req.emr_out_length = MC_CMD_MAE_COUNTER_ALLOC_OUT_LEN(n_counters);
+
+	MCDI_IN_SET_DWORD(req, MAE_COUNTER_ALLOC_IN_REQUESTED_COUNT,
+	    n_counters);
+
+	efx_mcdi_execute(enp, &req);
+
+	if (req.emr_rc != 0) {
+		rc = req.emr_rc;
+		goto fail2;
+	}
+
+	if (req.emr_out_length_used < MC_CMD_MAE_COUNTER_ALLOC_OUT_LENMIN) {
+		rc = EMSGSIZE;
+		goto fail3;
+	}
+
+	n_allocated = MCDI_OUT_DWORD(req,
+	    MAE_COUNTER_ALLOC_OUT_COUNTER_ID_COUNT);
+	if (n_allocated < MC_CMD_MAE_COUNTER_ALLOC_OUT_COUNTER_ID_MINNUM) {
+		rc = EFAULT;
+		goto fail4;
+	}
+
+	for (i = 0; i < n_allocated; i++) {
+		countersp[i].id = MCDI_OUT_INDEXED_DWORD(req,
+		    MAE_COUNTER_ALLOC_OUT_COUNTER_ID, i);
+	}
+
+	if (gen_countp != NULL) {
+		*gen_countp = MCDI_OUT_DWORD(req,
+				    MAE_COUNTER_ALLOC_OUT_GENERATION_COUNT);
+	}
+
+	*n_allocatedp = n_allocated;
+
+	return (0);
+
+fail4:
+	EFSYS_PROBE(fail4);
+fail3:
+	EFSYS_PROBE(fail3);
+fail2:
+	EFSYS_PROBE(fail2);
+fail1:
+	EFSYS_PROBE1(fail1, efx_rc_t, rc);
+
+	return (rc);
+}
+
+	__checkReturn			efx_rc_t
+efx_mae_counters_free(
+	__in				efx_nic_t *enp,
+	__in				uint32_t n_counters,
+	__out				uint32_t *n_freedp,
+	__in_ecount(n_counters)		const efx_counter_t *countersp,
+	__out_opt			uint32_t *gen_countp)
+{
+	EFX_MCDI_DECLARE_BUF(payload,
+	    MC_CMD_MAE_COUNTER_FREE_IN_LENMAX_MCDI2,
+	    MC_CMD_MAE_COUNTER_FREE_OUT_LENMAX_MCDI2);
+	efx_mae_t *maep = enp->en_maep;
+	efx_mcdi_req_t req;
+	uint32_t n_freed;
+	unsigned int i;
+	efx_rc_t rc;
+
+	if (n_counters > maep->em_max_ncounters ||
+	    n_counters < MC_CMD_MAE_COUNTER_FREE_IN_FREE_COUNTER_ID_MINNUM ||
+	    n_counters >
+	    MC_CMD_MAE_COUNTER_FREE_IN_FREE_COUNTER_ID_MAXNUM_MCDI2) {
+		rc = EINVAL;
+		goto fail1;
+	}
+
+	req.emr_cmd = MC_CMD_MAE_COUNTER_FREE;
+	req.emr_in_buf = payload;
+	req.emr_in_length = MC_CMD_MAE_COUNTER_FREE_IN_LEN(n_counters);
+	req.emr_out_buf = payload;
+	req.emr_out_length = MC_CMD_MAE_COUNTER_FREE_OUT_LEN(n_counters);
+
+	for (i = 0; i < n_counters; i++) {
+		MCDI_IN_SET_INDEXED_DWORD(req,
+		    MAE_COUNTER_FREE_IN_FREE_COUNTER_ID, i, countersp[i].id);
+	}
+	MCDI_IN_SET_DWORD(req, MAE_COUNTER_FREE_IN_COUNTER_ID_COUNT,
+			  n_counters);
+
+	efx_mcdi_execute(enp, &req);
+
+	if (req.emr_rc != 0) {
+		rc = req.emr_rc;
+		goto fail2;
+	}
+
+	if (req.emr_out_length_used < MC_CMD_MAE_COUNTER_FREE_OUT_LENMIN) {
+		rc = EMSGSIZE;
+		goto fail3;
+	}
+
+	n_freed = MCDI_OUT_DWORD(req, MAE_COUNTER_FREE_OUT_COUNTER_ID_COUNT);
+
+	if (n_freed < MC_CMD_MAE_COUNTER_FREE_OUT_FREED_COUNTER_ID_MINNUM) {
+		rc = EFAULT;
+		goto fail4;
+	}
+
+	if (gen_countp != NULL) {
+		*gen_countp = MCDI_OUT_DWORD(req,
+				    MAE_COUNTER_FREE_OUT_GENERATION_COUNT);
+	}
+
+	*n_freedp = n_freed;
+
+	return (0);
+
 fail4:
 	EFSYS_PROBE(fail4);
 fail3:
diff --git a/drivers/common/sfc_efx/base/efx_mcdi.h b/drivers/common/sfc_efx/base/efx_mcdi.h
index 70a97ea337..90b70de97b 100644
--- a/drivers/common/sfc_efx/base/efx_mcdi.h
+++ b/drivers/common/sfc_efx/base/efx_mcdi.h
@@ -311,6 +311,10 @@ efx_mcdi_phy_module_get_info(
 	EFX_SET_DWORD_FIELD(*MCDI_IN2(_emr, efx_dword_t, _ofst),	\
 		MC_CMD_ ## _field, _value)
 
+#define	MCDI_IN_SET_INDEXED_DWORD(_emr, _ofst, _idx, _value)		\
+	EFX_POPULATE_DWORD_1(*(MCDI_IN2(_emr, efx_dword_t, _ofst) +	\
+			     (_idx)), EFX_DWORD_0, _value)		\
+
 #define	MCDI_IN_POPULATE_DWORD_1(_emr, _ofst, _field1, _value1)		\
 	EFX_POPULATE_DWORD_1(*MCDI_IN2(_emr, efx_dword_t, _ofst),	\
 		MC_CMD_ ## _field1, _value1)
@@ -451,6 +455,9 @@ efx_mcdi_phy_module_get_info(
 	EFX_DWORD_FIELD(*MCDI_OUT2(_emr, efx_dword_t, _ofst),		\
 			MC_CMD_ ## _field)
 
+#define	MCDI_OUT_INDEXED_DWORD(_emr, _ofst, _idx)			\
+	MCDI_OUT_INDEXED_DWORD_FIELD(_emr, _ofst, _idx, EFX_DWORD_0)
+
 #define	MCDI_OUT_INDEXED_DWORD_FIELD(_emr, _ofst, _idx, _field)		\
 	EFX_DWORD_FIELD(*(MCDI_OUT2(_emr, efx_dword_t, _ofst) +		\
 			(_idx)), _field)
diff --git a/drivers/common/sfc_efx/version.map b/drivers/common/sfc_efx/version.map
index d534d8ecb5..d60cd477fa 100644
--- a/drivers/common/sfc_efx/version.map
+++ b/drivers/common/sfc_efx/version.map
@@ -102,6 +102,8 @@ INTERNAL {
 	efx_mae_action_set_spec_fini;
 	efx_mae_action_set_spec_init;
 	efx_mae_action_set_specs_equal;
+	efx_mae_counters_alloc;
+	efx_mae_counters_free;
 	efx_mae_encap_header_alloc;
 	efx_mae_encap_header_free;
 	efx_mae_fini;
-- 
2.30.2


^ permalink raw reply	[flat|nested] 104+ messages in thread

* [dpdk-dev] [PATCH v3 14/20] common/sfc_efx/base: add counter stream MCDI wrappers
  2021-06-18 13:40 ` [dpdk-dev] [PATCH v3 " Andrew Rybchenko
                     ` (12 preceding siblings ...)
  2021-06-18 13:40   ` [dpdk-dev] [PATCH v3 13/20] common/sfc_efx/base: add counter creation MCDI wrappers Andrew Rybchenko
@ 2021-06-18 13:40   ` Andrew Rybchenko
  2021-06-18 13:40   ` [dpdk-dev] [PATCH v3 15/20] common/sfc_efx/base: support counter in action set Andrew Rybchenko
                     ` (5 subsequent siblings)
  19 siblings, 0 replies; 104+ messages in thread
From: Andrew Rybchenko @ 2021-06-18 13:40 UTC (permalink / raw)
  To: dev; +Cc: David Marchand, Igor Romanov, Andy Moreton, Ivan Malov

From: Igor Romanov <igor.romanov@oktetlabs.ru>

These MCDIs will be used to control the packet flow on the counter Rx queue.
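
Roughly, the intended control sequence is sketched below (illustrative
only; "enp", the counter RxQ hardware index "rxq_hw_id", the pushed
buffer count and the helper name are assumptions of this example; the
packet size macro comes from the net/sfc counter patches):

#include "efx.h"
#include "sfc_mae_counter.h"

static void
counters_stream_example(efx_nic_t *enp, uint16_t rxq_hw_id,
			unsigned int n_pushed_bufs)
{
	uint32_t flags_out;
	uint32_t gen_count;
	efx_rc_t rc;

	rc = efx_mae_counters_stream_start(enp, rxq_hw_id,
					   SFC_MAE_COUNTER_STREAM_PACKET_SIZE,
					   0, &flags_out);
	if (rc != 0)
		return;

	/* With credit-based flow control, one credit per pushed Rx buffer. */
	if ((flags_out & EFX_MAE_COUNTERS_STREAM_OUT_USES_CREDITS) != 0)
		(void)efx_mae_counters_stream_give_credits(enp, n_pushed_bufs);

	/* Stop the stream; gen_count helps to wait for the final readings. */
	rc = efx_mae_counters_stream_stop(enp, rxq_hw_id, &gen_count);
}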

Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
Reviewed-by: Ivan Malov <ivan.malov@oktetlabs.ru>
---
 drivers/common/sfc_efx/base/efx.h     |  32 ++++++
 drivers/common/sfc_efx/base/efx_mae.c | 138 ++++++++++++++++++++++++++
 drivers/common/sfc_efx/version.map    |   3 +
 3 files changed, 173 insertions(+)

diff --git a/drivers/common/sfc_efx/base/efx.h b/drivers/common/sfc_efx/base/efx.h
index b789e19b98..a5d40c2e3d 100644
--- a/drivers/common/sfc_efx/base/efx.h
+++ b/drivers/common/sfc_efx/base/efx.h
@@ -4437,6 +4437,38 @@ efx_mae_counters_free(
 	__in_ecount(n_counters)		const efx_counter_t *countersp,
 	__out_opt			uint32_t *gen_countp);
 
+/* When set, include counters with a value of zero */
+#define	EFX_MAE_COUNTERS_STREAM_IN_ZERO_SQUASH_DISABLE	(1U << 0)
+
+/*
+ * Set if credit-based flow control is used. In this case the driver
+ * must call efx_mae_counters_stream_give_credits() to notify the
+ * packetiser of descriptors written.
+ */
+#define	EFX_MAE_COUNTERS_STREAM_OUT_USES_CREDITS	(1U << 0)
+
+LIBEFX_API
+extern	__checkReturn			efx_rc_t
+efx_mae_counters_stream_start(
+	__in				efx_nic_t *enp,
+	__in				uint16_t rxq_id,
+	__in				uint16_t packet_size,
+	__in				uint32_t flags_in,
+	__out				uint32_t *flags_out);
+
+LIBEFX_API
+extern	__checkReturn			efx_rc_t
+efx_mae_counters_stream_stop(
+	__in				efx_nic_t *enp,
+	__in				uint16_t rxq_id,
+	__out_opt			uint32_t *gen_countp);
+
+LIBEFX_API
+extern	__checkReturn			efx_rc_t
+efx_mae_counters_stream_give_credits(
+	__in				efx_nic_t *enp,
+	__in				uint32_t n_credits);
+
 LIBEFX_API
 extern	__checkReturn			efx_rc_t
 efx_mae_action_set_free(
diff --git a/drivers/common/sfc_efx/base/efx_mae.c b/drivers/common/sfc_efx/base/efx_mae.c
index cf6c449a16..1f313c8127 100644
--- a/drivers/common/sfc_efx/base/efx_mae.c
+++ b/drivers/common/sfc_efx/base/efx_mae.c
@@ -2535,6 +2535,144 @@ efx_mae_counters_free(
 	EFSYS_PROBE(fail2);
 fail1:
 	EFSYS_PROBE1(fail1, efx_rc_t, rc);
+
+	return (rc);
+}
+
+	__checkReturn			efx_rc_t
+efx_mae_counters_stream_start(
+	__in				efx_nic_t *enp,
+	__in				uint16_t rxq_id,
+	__in				uint16_t packet_size,
+	__in				uint32_t flags_in,
+	__out				uint32_t *flags_out)
+{
+	efx_mcdi_req_t req;
+	EFX_MCDI_DECLARE_BUF(payload, MC_CMD_MAE_COUNTERS_STREAM_START_IN_LEN,
+			     MC_CMD_MAE_COUNTERS_STREAM_START_OUT_LEN);
+	efx_rc_t rc;
+
+	EFX_STATIC_ASSERT(EFX_MAE_COUNTERS_STREAM_IN_ZERO_SQUASH_DISABLE ==
+	    1U << MC_CMD_MAE_COUNTERS_STREAM_START_IN_ZERO_SQUASH_DISABLE_LBN);
+
+	EFX_STATIC_ASSERT(EFX_MAE_COUNTERS_STREAM_OUT_USES_CREDITS ==
+	    1U << MC_CMD_MAE_COUNTERS_STREAM_START_OUT_USES_CREDITS_LBN);
+
+	req.emr_cmd = MC_CMD_MAE_COUNTERS_STREAM_START;
+	req.emr_in_buf = payload;
+	req.emr_in_length = MC_CMD_MAE_COUNTERS_STREAM_START_IN_LEN;
+	req.emr_out_buf = payload;
+	req.emr_out_length = MC_CMD_MAE_COUNTERS_STREAM_START_OUT_LEN;
+
+	MCDI_IN_SET_WORD(req, MAE_COUNTERS_STREAM_START_IN_QID, rxq_id);
+	MCDI_IN_SET_WORD(req, MAE_COUNTERS_STREAM_START_IN_PACKET_SIZE,
+			 packet_size);
+	MCDI_IN_SET_DWORD(req, MAE_COUNTERS_STREAM_START_IN_FLAGS, flags_in);
+
+	efx_mcdi_execute(enp, &req);
+
+	if (req.emr_rc != 0) {
+		rc = req.emr_rc;
+		goto fail1;
+	}
+
+	if (req.emr_out_length_used <
+	    MC_CMD_MAE_COUNTERS_STREAM_START_OUT_LEN) {
+		rc = EMSGSIZE;
+		goto fail2;
+	}
+
+	*flags_out = MCDI_OUT_DWORD(req, MAE_COUNTERS_STREAM_START_OUT_FLAGS);
+
+	return (0);
+
+fail2:
+	EFSYS_PROBE(fail2);
+fail1:
+	EFSYS_PROBE1(fail1, efx_rc_t, rc);
+
+	return (rc);
+}
+
+	__checkReturn			efx_rc_t
+efx_mae_counters_stream_stop(
+	__in				efx_nic_t *enp,
+	__in				uint16_t rxq_id,
+	__out_opt			uint32_t *gen_countp)
+{
+	efx_mcdi_req_t req;
+	EFX_MCDI_DECLARE_BUF(payload, MC_CMD_MAE_COUNTERS_STREAM_STOP_IN_LEN,
+			     MC_CMD_MAE_COUNTERS_STREAM_STOP_OUT_LEN);
+	efx_rc_t rc;
+
+	req.emr_cmd = MC_CMD_MAE_COUNTERS_STREAM_STOP;
+	req.emr_in_buf = payload;
+	req.emr_in_length = MC_CMD_MAE_COUNTERS_STREAM_STOP_IN_LEN;
+	req.emr_out_buf = payload;
+	req.emr_out_length = MC_CMD_MAE_COUNTERS_STREAM_STOP_OUT_LEN;
+
+	MCDI_IN_SET_WORD(req, MAE_COUNTERS_STREAM_STOP_IN_QID, rxq_id);
+
+	efx_mcdi_execute(enp, &req);
+
+	if (req.emr_rc != 0) {
+		rc = req.emr_rc;
+		goto fail1;
+	}
+
+	if (req.emr_out_length_used <
+	    MC_CMD_MAE_COUNTERS_STREAM_STOP_OUT_LEN) {
+		rc = EMSGSIZE;
+		goto fail2;
+	}
+
+	if (gen_countp != NULL) {
+		*gen_countp = MCDI_OUT_DWORD(req,
+			    MAE_COUNTERS_STREAM_STOP_OUT_GENERATION_COUNT);
+	}
+
+	return (0);
+
+fail2:
+	EFSYS_PROBE(fail2);
+fail1:
+	EFSYS_PROBE1(fail1, efx_rc_t, rc);
+
+	return (rc);
+}
+
+	__checkReturn			efx_rc_t
+efx_mae_counters_stream_give_credits(
+	__in				efx_nic_t *enp,
+	__in				uint32_t n_credits)
+{
+	efx_mcdi_req_t req;
+	EFX_MCDI_DECLARE_BUF(payload,
+			     MC_CMD_MAE_COUNTERS_STREAM_GIVE_CREDITS_IN_LEN,
+			     MC_CMD_MAE_COUNTERS_STREAM_GIVE_CREDITS_OUT_LEN);
+	efx_rc_t rc;
+
+	req.emr_cmd = MC_CMD_MAE_COUNTERS_STREAM_GIVE_CREDITS;
+	req.emr_in_buf = payload;
+	req.emr_in_length = MC_CMD_MAE_COUNTERS_STREAM_GIVE_CREDITS_IN_LEN;
+	req.emr_out_buf = payload;
+	req.emr_out_length = MC_CMD_MAE_COUNTERS_STREAM_GIVE_CREDITS_OUT_LEN;
+
+	MCDI_IN_SET_DWORD(req, MAE_COUNTERS_STREAM_GIVE_CREDITS_IN_NUM_CREDITS,
+			 n_credits);
+
+	efx_mcdi_execute(enp, &req);
+
+	if (req.emr_rc != 0) {
+		rc = req.emr_rc;
+		goto fail1;
+	}
+
+	return (0);
+
+fail1:
+	EFSYS_PROBE1(fail1, efx_rc_t, rc);
+
 	return (rc);
 }
 
diff --git a/drivers/common/sfc_efx/version.map b/drivers/common/sfc_efx/version.map
index d60cd477fa..7f69d6bb0d 100644
--- a/drivers/common/sfc_efx/version.map
+++ b/drivers/common/sfc_efx/version.map
@@ -104,6 +104,9 @@ INTERNAL {
 	efx_mae_action_set_specs_equal;
 	efx_mae_counters_alloc;
 	efx_mae_counters_free;
+	efx_mae_counters_stream_give_credits;
+	efx_mae_counters_stream_start;
+	efx_mae_counters_stream_stop;
 	efx_mae_encap_header_alloc;
 	efx_mae_encap_header_free;
 	efx_mae_fini;
-- 
2.30.2


^ permalink raw reply	[flat|nested] 104+ messages in thread

* [dpdk-dev] [PATCH v3 15/20] common/sfc_efx/base: support counter in action set
  2021-06-18 13:40 ` [dpdk-dev] [PATCH v3 " Andrew Rybchenko
                     ` (13 preceding siblings ...)
  2021-06-18 13:40   ` [dpdk-dev] [PATCH v3 14/20] common/sfc_efx/base: add counter stream " Andrew Rybchenko
@ 2021-06-18 13:40   ` Andrew Rybchenko
  2021-06-18 13:40   ` [dpdk-dev] [PATCH v3 16/20] net/sfc: add Rx datapath method to get pushed buffers count Andrew Rybchenko
                     ` (4 subsequent siblings)
  19 siblings, 0 replies; 104+ messages in thread
From: Andrew Rybchenko @ 2021-06-18 13:40 UTC (permalink / raw)
  To: dev; +Cc: David Marchand, Igor Romanov, Andy Moreton, Ivan Malov

From: Igor Romanov <igor.romanov@oktetlabs.ru>

Users will be able to associate a counter with an MAE action set to
collect packet and byte counts for a specific action set.
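
The intended call sequence is sketched below (illustrative only; "enp",
"spec", the pre-allocated "counter" and the helper name are assumptions
of this example):

#include "efx.h"

static void
action_set_count_example(efx_nic_t *enp, efx_mae_actions_t *spec,
			 const efx_counter_t *counter)
{
	efx_mae_aset_id_t aset_id;
	efx_rc_t rc;

	/* Request the COUNT action while parsing the rule actions. */
	rc = efx_mae_action_set_populate_count(spec);
	if (rc != 0)
		return;

	/* Later, bind a real counter ID just before action set allocation. */
	if (efx_mae_action_set_get_nb_count(spec) == 1) {
		rc = efx_mae_action_set_fill_in_counter_id(spec, counter);
		if (rc != 0)
			return;
	}

	rc = efx_mae_action_set_alloc(enp, spec, &aset_id);
}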

Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
Reviewed-by: Ivan Malov <ivan.malov@oktetlabs.ru>
---
 drivers/common/sfc_efx/base/efx.h      |  21 ++++
 drivers/common/sfc_efx/base/efx_impl.h |   3 +
 drivers/common/sfc_efx/base/efx_mae.c  | 133 ++++++++++++++++++++++++-
 drivers/common/sfc_efx/version.map     |   3 +
 4 files changed, 157 insertions(+), 3 deletions(-)

diff --git a/drivers/common/sfc_efx/base/efx.h b/drivers/common/sfc_efx/base/efx.h
index a5d40c2e3d..d3cf9fe571 100644
--- a/drivers/common/sfc_efx/base/efx.h
+++ b/drivers/common/sfc_efx/base/efx.h
@@ -4288,6 +4288,15 @@ extern	__checkReturn			efx_rc_t
 efx_mae_action_set_populate_encap(
 	__in				efx_mae_actions_t *spec);
 
+/*
+ * Use efx_mae_action_set_fill_in_counter_id() to set ID of a counter
+ * in the specification prior to action set allocation.
+ */
+LIBEFX_API
+extern	__checkReturn			efx_rc_t
+efx_mae_action_set_populate_count(
+	__in				efx_mae_actions_t *spec);
+
 LIBEFX_API
 extern	__checkReturn			efx_rc_t
 efx_mae_action_set_populate_flag(
@@ -4392,6 +4401,18 @@ typedef struct efx_counter_s {
 	uint32_t id;
 } efx_counter_t;
 
+LIBEFX_API
+extern	__checkReturn			unsigned int
+efx_mae_action_set_get_nb_count(
+	__in				const efx_mae_actions_t *spec);
+
+/* See description before efx_mae_action_set_populate_count(). */
+LIBEFX_API
+extern	__checkReturn			efx_rc_t
+efx_mae_action_set_fill_in_counter_id(
+	__in				efx_mae_actions_t *spec,
+	__in				const efx_counter_t *counter_idp);
+
 /* Action set ID */
 typedef struct efx_mae_aset_id_s {
 	uint32_t id;
diff --git a/drivers/common/sfc_efx/base/efx_impl.h b/drivers/common/sfc_efx/base/efx_impl.h
index b69463385e..c4925568be 100644
--- a/drivers/common/sfc_efx/base/efx_impl.h
+++ b/drivers/common/sfc_efx/base/efx_impl.h
@@ -1733,6 +1733,7 @@ typedef enum efx_mae_action_e {
 	EFX_MAE_ACTION_DECAP,
 	EFX_MAE_ACTION_VLAN_POP,
 	EFX_MAE_ACTION_VLAN_PUSH,
+	EFX_MAE_ACTION_COUNT,
 	EFX_MAE_ACTION_ENCAP,
 
 	/*
@@ -1763,6 +1764,7 @@ typedef struct efx_mae_action_vlan_push_s {
 
 typedef struct efx_mae_actions_rsrc_s {
 	efx_mae_eh_id_t			emar_eh_id;
+	efx_counter_t			emar_counter_id;
 } efx_mae_actions_rsrc_t;
 
 struct efx_mae_actions_s {
@@ -1773,6 +1775,7 @@ struct efx_mae_actions_s {
 	unsigned int			ema_n_vlan_tags_to_push;
 	efx_mae_action_vlan_push_t	ema_vlan_push_descs[
 	    EFX_MAE_VLAN_PUSH_MAX_NTAGS];
+	unsigned int			ema_n_count_actions;
 	uint32_t			ema_mark_value;
 	efx_mport_sel_t			ema_deliver_mport;
 
diff --git a/drivers/common/sfc_efx/base/efx_mae.c b/drivers/common/sfc_efx/base/efx_mae.c
index 1f313c8127..b0e6fadd46 100644
--- a/drivers/common/sfc_efx/base/efx_mae.c
+++ b/drivers/common/sfc_efx/base/efx_mae.c
@@ -1014,6 +1014,7 @@ efx_mae_action_set_spec_init(
 	}
 
 	spec->ema_rsrc.emar_eh_id.id = EFX_MAE_RSRC_ID_INVALID;
+	spec->ema_rsrc.emar_counter_id.id = EFX_MAE_RSRC_ID_INVALID;
 
 	*specp = spec;
 
@@ -1181,6 +1182,50 @@ efx_mae_action_set_add_encap(
 	return (rc);
 }
 
+static	__checkReturn			efx_rc_t
+efx_mae_action_set_add_count(
+	__in				efx_mae_actions_t *spec,
+	__in				size_t arg_size,
+	__in_bcount(arg_size)		const uint8_t *arg)
+{
+	efx_rc_t rc;
+
+	EFX_STATIC_ASSERT(EFX_MAE_RSRC_ID_INVALID ==
+			  MC_CMD_MAE_COUNTER_ALLOC_OUT_COUNTER_ID_NULL);
+
+	/*
+	 * Preparing an action set spec to update a counter requires
+	 * two steps: first add this action to the action spec, and then
+	 * add the counter ID to the spec. This allows validity checking
+	 * and resource allocation to be done separately.
+	 * Mark the counter ID as invalid in the spec to ensure that the
+	 * caller must also invoke efx_mae_action_set_fill_in_counter_id()
+	 * before action set allocation.
+	 */
+	spec->ema_rsrc.emar_counter_id.id = EFX_MAE_RSRC_ID_INVALID;
+
+	/* Nothing else is supposed to take place over here. */
+	if (arg_size != 0) {
+		rc = EINVAL;
+		goto fail1;
+	}
+
+	if (arg != NULL) {
+		rc = EINVAL;
+		goto fail2;
+	}
+
+	++(spec->ema_n_count_actions);
+
+	return (0);
+
+fail2:
+	EFSYS_PROBE(fail2);
+fail1:
+	EFSYS_PROBE1(fail1, efx_rc_t, rc);
+	return (rc);
+}
+
 static	__checkReturn			efx_rc_t
 efx_mae_action_set_add_flag(
 	__in				efx_mae_actions_t *spec,
@@ -1289,6 +1334,9 @@ static const efx_mae_action_desc_t efx_mae_actions[EFX_MAE_NACTIONS] = {
 	[EFX_MAE_ACTION_ENCAP] = {
 		.emad_add = efx_mae_action_set_add_encap
 	},
+	[EFX_MAE_ACTION_COUNT] = {
+		.emad_add = efx_mae_action_set_add_count
+	},
 	[EFX_MAE_ACTION_FLAG] = {
 		.emad_add = efx_mae_action_set_add_flag
 	},
@@ -1304,6 +1352,12 @@ static const uint32_t efx_mae_action_ordered_map =
 	(1U << EFX_MAE_ACTION_DECAP) |
 	(1U << EFX_MAE_ACTION_VLAN_POP) |
 	(1U << EFX_MAE_ACTION_VLAN_PUSH) |
+	/*
+	 * HW will conduct action COUNT after
+	 * the matching packet has been modified by
+	 * length-affecting actions except for ENCAP.
+	 */
+	(1U << EFX_MAE_ACTION_COUNT) |
 	(1U << EFX_MAE_ACTION_ENCAP) |
 	(1U << EFX_MAE_ACTION_FLAG) |
 	(1U << EFX_MAE_ACTION_MARK) |
@@ -1320,7 +1374,8 @@ static const uint32_t efx_mae_action_nonstrict_map =
 
 static const uint32_t efx_mae_action_repeat_map =
 	(1U << EFX_MAE_ACTION_VLAN_POP) |
-	(1U << EFX_MAE_ACTION_VLAN_PUSH);
+	(1U << EFX_MAE_ACTION_VLAN_PUSH) |
+	(1U << EFX_MAE_ACTION_COUNT);
 
 /*
  * Add an action to an action set.
@@ -1443,6 +1498,20 @@ efx_mae_action_set_populate_encap(
 	    EFX_MAE_ACTION_ENCAP, 0, NULL));
 }
 
+	__checkReturn			efx_rc_t
+efx_mae_action_set_populate_count(
+	__in				efx_mae_actions_t *spec)
+{
+	/*
+	 * There is no argument to pass a counter ID, thus, one does not
+	 * need to allocate a counter while parsing application input.
+	 * This is useful since building an action set may be done simply to
+	 * validate a rule, whilst resource allocation usually consumes time.
+	 */
+	return (efx_mae_action_set_spec_populate(spec,
+	    EFX_MAE_ACTION_COUNT, 0, NULL));
+}
+
 	__checkReturn			efx_rc_t
 efx_mae_action_set_populate_flag(
 	__in				efx_mae_actions_t *spec)
@@ -2075,8 +2144,6 @@ efx_mae_action_set_alloc(
 	 */
 	MCDI_IN_SET_DWORD(req,
 	    MAE_ACTION_SET_ALLOC_IN_COUNTER_LIST_ID, EFX_MAE_RSRC_ID_INVALID);
-	MCDI_IN_SET_DWORD(req,
-	    MAE_ACTION_SET_ALLOC_IN_COUNTER_ID, EFX_MAE_RSRC_ID_INVALID);
 
 	if ((spec->ema_actions & (1U << EFX_MAE_ACTION_DECAP)) != 0) {
 		MCDI_IN_SET_DWORD_FIELD(req, MAE_ACTION_SET_ALLOC_IN_FLAGS,
@@ -2113,6 +2180,8 @@ efx_mae_action_set_alloc(
 
 	MCDI_IN_SET_DWORD(req, MAE_ACTION_SET_ALLOC_IN_ENCAP_HEADER_ID,
 	    spec->ema_rsrc.emar_eh_id.id);
+	MCDI_IN_SET_DWORD(req, MAE_ACTION_SET_ALLOC_IN_COUNTER_ID,
+	    spec->ema_rsrc.emar_counter_id.id);
 
 	if ((spec->ema_actions & (1U << EFX_MAE_ACTION_FLAG)) != 0) {
 		MCDI_IN_SET_DWORD_FIELD(req, MAE_ACTION_SET_ALLOC_IN_FLAGS,
@@ -2372,6 +2441,64 @@ efx_mae_action_rule_remove(
 
 	return (0);
 
+fail4:
+	EFSYS_PROBE(fail4);
+fail3:
+	EFSYS_PROBE(fail3);
+fail2:
+	EFSYS_PROBE(fail2);
+fail1:
+	EFSYS_PROBE1(fail1, efx_rc_t, rc);
+	return (rc);
+}
+
+	__checkReturn			unsigned int
+efx_mae_action_set_get_nb_count(
+	__in				const efx_mae_actions_t *spec)
+{
+	return (spec->ema_n_count_actions);
+}
+
+	__checkReturn			efx_rc_t
+efx_mae_action_set_fill_in_counter_id(
+	__in				efx_mae_actions_t *spec,
+	__in				const efx_counter_t *counter_idp)
+{
+	efx_rc_t rc;
+
+	if ((spec->ema_actions & (1U << EFX_MAE_ACTION_COUNT)) == 0) {
+		/*
+		 * Invalid to add counter ID if spec does not have COUNT action.
+		 */
+		rc = EINVAL;
+		goto fail1;
+	}
+
+	if (spec->ema_n_count_actions != 1) {
+		/*
+		 * Having multiple COUNT actions in the spec requires a counter
+		 * list to be used. This API must only be used for a single
+		 * counter per spec. Turn down the request as inappropriate.
+		 */
+		rc = EINVAL;
+		goto fail2;
+	}
+
+	if (spec->ema_rsrc.emar_counter_id.id != EFX_MAE_RSRC_ID_INVALID) {
+		/* The caller attempts to indicate counter ID twice. */
+		rc = EALREADY;
+		goto fail3;
+	}
+
+	if (counter_idp->id == EFX_MAE_RSRC_ID_INVALID) {
+		rc = EINVAL;
+		goto fail4;
+	}
+
+	spec->ema_rsrc.emar_counter_id.id = counter_idp->id;
+
+	return (0);
+
 fail4:
 	EFSYS_PROBE(fail4);
 fail3:
diff --git a/drivers/common/sfc_efx/version.map b/drivers/common/sfc_efx/version.map
index 7f69d6bb0d..8496f409e6 100644
--- a/drivers/common/sfc_efx/version.map
+++ b/drivers/common/sfc_efx/version.map
@@ -89,8 +89,11 @@ INTERNAL {
 	efx_mae_action_rule_insert;
 	efx_mae_action_rule_remove;
 	efx_mae_action_set_alloc;
+	efx_mae_action_set_fill_in_counter_id;
 	efx_mae_action_set_fill_in_eh_id;
 	efx_mae_action_set_free;
+	efx_mae_action_set_get_nb_count;
+	efx_mae_action_set_populate_count;
 	efx_mae_action_set_populate_decap;
 	efx_mae_action_set_populate_deliver;
 	efx_mae_action_set_populate_drop;
-- 
2.30.2


^ permalink raw reply	[flat|nested] 104+ messages in thread

* [dpdk-dev] [PATCH v3 16/20] net/sfc: add Rx datapath method to get pushed buffers count
  2021-06-18 13:40 ` [dpdk-dev] [PATCH v3 " Andrew Rybchenko
                     ` (14 preceding siblings ...)
  2021-06-18 13:40   ` [dpdk-dev] [PATCH v3 15/20] common/sfc_efx/base: support counter in action set Andrew Rybchenko
@ 2021-06-18 13:40   ` Andrew Rybchenko
  2021-06-18 13:40   ` [dpdk-dev] [PATCH v3 17/20] common/sfc_efx/base: add max MAE counters to limits Andrew Rybchenko
                     ` (3 subsequent siblings)
  19 siblings, 0 replies; 104+ messages in thread
From: Andrew Rybchenko @ 2021-06-18 13:40 UTC (permalink / raw)
  To: dev; +Cc: David Marchand, Igor Romanov, Andy Moreton, Ivan Malov

From: Igor Romanov <igor.romanov@oktetlabs.ru>

The information about the number of pushed Rx buffers is required
for the counter Rx queue to know when to give credits to the counter
stream.
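
A sketch of the intended credit accounting based on this callback is
shown below (illustrative only; the "pushed_at_last_credit" state and
the helper name are assumptions of this example; the real accounting is
added together with the counter service in a later patch):

#include "efx.h"
#include "sfc.h"
#include "sfc_rx.h"

static void
counter_rxq_give_credits_example(struct sfc_adapter *sa,
				 struct sfc_dp_rxq *dp_rxq,
				 unsigned int *pushed_at_last_credit)
{
	/* Running count of pushed buffers, not bounded by the ring size. */
	unsigned int pushed = sfc_rx_get_pushed(sa, dp_rxq);
	unsigned int delta = pushed - *pushed_at_last_credit;

	if (delta > 0) {
		(void)efx_mae_counters_stream_give_credits(sa->nic, delta);
		*pushed_at_last_credit = pushed;
	}
}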

Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
Reviewed-by: Ivan Malov <ivan.malov@oktetlabs.ru>
---
 drivers/net/sfc/sfc_dp_rx.h    |  4 ++++
 drivers/net/sfc/sfc_ef100_rx.c | 15 +++++++++++++++
 drivers/net/sfc/sfc_rx.c       |  9 +++++++++
 drivers/net/sfc/sfc_rx.h       |  3 +++
 4 files changed, 31 insertions(+)

diff --git a/drivers/net/sfc/sfc_dp_rx.h b/drivers/net/sfc/sfc_dp_rx.h
index 3f6857b1ff..b6c44085ce 100644
--- a/drivers/net/sfc/sfc_dp_rx.h
+++ b/drivers/net/sfc/sfc_dp_rx.h
@@ -204,6 +204,9 @@ typedef int (sfc_dp_rx_intr_enable_t)(struct sfc_dp_rxq *dp_rxq);
 /** Disable Rx interrupts */
 typedef int (sfc_dp_rx_intr_disable_t)(struct sfc_dp_rxq *dp_rxq);
 
+/** Get number of pushed Rx buffers */
+typedef unsigned int (sfc_dp_rx_get_pushed_t)(struct sfc_dp_rxq *dp_rxq);
+
 /** Receive datapath definition */
 struct sfc_dp_rx {
 	struct sfc_dp				dp;
@@ -238,6 +241,7 @@ struct sfc_dp_rx {
 	sfc_dp_rx_qdesc_status_t		*qdesc_status;
 	sfc_dp_rx_intr_enable_t			*intr_enable;
 	sfc_dp_rx_intr_disable_t		*intr_disable;
+	sfc_dp_rx_get_pushed_t			*get_pushed;
 	eth_rx_burst_t				pkt_burst;
 };
 
diff --git a/drivers/net/sfc/sfc_ef100_rx.c b/drivers/net/sfc/sfc_ef100_rx.c
index 8cde24c585..7447f8b9de 100644
--- a/drivers/net/sfc/sfc_ef100_rx.c
+++ b/drivers/net/sfc/sfc_ef100_rx.c
@@ -892,6 +892,20 @@ sfc_ef100_rx_intr_disable(struct sfc_dp_rxq *dp_rxq)
 	return 0;
 }
 
+static sfc_dp_rx_get_pushed_t sfc_ef100_rx_get_pushed;
+static unsigned int
+sfc_ef100_rx_get_pushed(struct sfc_dp_rxq *dp_rxq)
+{
+	struct sfc_ef100_rxq *rxq = sfc_ef100_rxq_by_dp_rxq(dp_rxq);
+
+	/*
+	 * The datapath keeps track only of added descriptors, since
+	 * the number of pushed descriptors always equals the number
+	 * of added descriptors due to enforced alignment.
+	 */
+	return rxq->added;
+}
+
 struct sfc_dp_rx sfc_ef100_rx = {
 	.dp = {
 		.name		= SFC_KVARG_DATAPATH_EF100,
@@ -919,5 +933,6 @@ struct sfc_dp_rx sfc_ef100_rx = {
 	.qdesc_status		= sfc_ef100_rx_qdesc_status,
 	.intr_enable		= sfc_ef100_rx_intr_enable,
 	.intr_disable		= sfc_ef100_rx_intr_disable,
+	.get_pushed		= sfc_ef100_rx_get_pushed,
 	.pkt_burst		= sfc_ef100_recv_pkts,
 };
diff --git a/drivers/net/sfc/sfc_rx.c b/drivers/net/sfc/sfc_rx.c
index 0532f77082..f6a8ac68e8 100644
--- a/drivers/net/sfc/sfc_rx.c
+++ b/drivers/net/sfc/sfc_rx.c
@@ -53,6 +53,15 @@ sfc_rx_qflush_failed(struct sfc_rxq_info *rxq_info)
 	rxq_info->state &= ~SFC_RXQ_FLUSHING;
 }
 
+/* This returns the running counter, which is not bounded by ring size */
+unsigned int
+sfc_rx_get_pushed(struct sfc_adapter *sa, struct sfc_dp_rxq *dp_rxq)
+{
+	SFC_ASSERT(sa->priv.dp_rx->get_pushed != NULL);
+
+	return sa->priv.dp_rx->get_pushed(dp_rxq);
+}
+
 static int
 sfc_efx_rx_qprime(struct sfc_efx_rxq *rxq)
 {
diff --git a/drivers/net/sfc/sfc_rx.h b/drivers/net/sfc/sfc_rx.h
index e5a6fde79b..4ab513915e 100644
--- a/drivers/net/sfc/sfc_rx.h
+++ b/drivers/net/sfc/sfc_rx.h
@@ -145,6 +145,9 @@ uint64_t sfc_rx_get_queue_offload_caps(struct sfc_adapter *sa);
 void sfc_rx_qflush_done(struct sfc_rxq_info *rxq_info);
 void sfc_rx_qflush_failed(struct sfc_rxq_info *rxq_info);
 
+unsigned int sfc_rx_get_pushed(struct sfc_adapter *sa,
+			       struct sfc_dp_rxq *dp_rxq);
+
 int sfc_rx_hash_init(struct sfc_adapter *sa);
 void sfc_rx_hash_fini(struct sfc_adapter *sa);
 int sfc_rx_hf_rte_to_efx(struct sfc_adapter *sa, uint64_t rte,
-- 
2.30.2


^ permalink raw reply	[flat|nested] 104+ messages in thread

* [dpdk-dev] [PATCH v3 17/20] common/sfc_efx/base: add max MAE counters to limits
  2021-06-18 13:40 ` [dpdk-dev] [PATCH v3 " Andrew Rybchenko
                     ` (15 preceding siblings ...)
  2021-06-18 13:40   ` [dpdk-dev] [PATCH v3 16/20] net/sfc: add Rx datapath method to get pushed buffers count Andrew Rybchenko
@ 2021-06-18 13:40   ` Andrew Rybchenko
  2021-06-18 13:40   ` [dpdk-dev] [PATCH v3 18/20] common/sfc_efx/base: add packetiser packet format definition Andrew Rybchenko
                     ` (2 subsequent siblings)
  19 siblings, 0 replies; 104+ messages in thread
From: Andrew Rybchenko @ 2021-06-18 13:40 UTC (permalink / raw)
  To: dev; +Cc: David Marchand, Igor Romanov, Andy Moreton, Ivan Malov

From: Igor Romanov <igor.romanov@oktetlabs.ru>

The information about the maximum number of MAE counters is
crucial to the counter support in the driver.
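
For illustration, the driver side is expected to read the limit roughly
as sketched below (the helper name is an assumption of this example):

#include "efx.h"

static uint32_t
mae_max_counters_example(efx_nic_t *enp)
{
	efx_mae_limits_t limits;

	if (efx_mae_get_limits(enp, &limits) != 0)
		return 0;

	/* Used to size the driver's MAE counter registry. */
	return limits.eml_max_n_counters;
}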

Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
Reviewed-by: Ivan Malov <ivan.malov@oktetlabs.ru>
---
 drivers/common/sfc_efx/base/efx.h     | 1 +
 drivers/common/sfc_efx/base/efx_mae.c | 1 +
 2 files changed, 2 insertions(+)

diff --git a/drivers/common/sfc_efx/base/efx.h b/drivers/common/sfc_efx/base/efx.h
index d3cf9fe571..21fd151b70 100644
--- a/drivers/common/sfc_efx/base/efx.h
+++ b/drivers/common/sfc_efx/base/efx.h
@@ -4093,6 +4093,7 @@ typedef struct efx_mae_limits_s {
 	uint32_t			eml_max_n_outer_prios;
 	uint32_t			eml_encap_types_supported;
 	uint32_t			eml_encap_header_size_limit;
+	uint32_t			eml_max_n_counters;
 } efx_mae_limits_t;
 
 LIBEFX_API
diff --git a/drivers/common/sfc_efx/base/efx_mae.c b/drivers/common/sfc_efx/base/efx_mae.c
index b0e6fadd46..67d1c22037 100644
--- a/drivers/common/sfc_efx/base/efx_mae.c
+++ b/drivers/common/sfc_efx/base/efx_mae.c
@@ -374,6 +374,7 @@ efx_mae_get_limits(
 	emlp->eml_encap_types_supported = maep->em_encap_types_supported;
 	emlp->eml_encap_header_size_limit =
 	    MC_CMD_MAE_ENCAP_HEADER_ALLOC_IN_HDR_DATA_MAXNUM_MCDI2;
+	emlp->eml_max_n_counters = maep->em_max_ncounters;
 
 	return (0);
 
-- 
2.30.2


^ permalink raw reply	[flat|nested] 104+ messages in thread

* [dpdk-dev] [PATCH v3 18/20] common/sfc_efx/base: add packetiser packet format definition
  2021-06-18 13:40 ` [dpdk-dev] [PATCH v3 " Andrew Rybchenko
                     ` (16 preceding siblings ...)
  2021-06-18 13:40   ` [dpdk-dev] [PATCH v3 17/20] common/sfc_efx/base: add max MAE counters to limits Andrew Rybchenko
@ 2021-06-18 13:40   ` Andrew Rybchenko
  2021-06-18 13:40   ` [dpdk-dev] [PATCH v3 19/20] net/sfc: support flow action COUNT in transfer rules Andrew Rybchenko
  2021-06-18 13:40   ` [dpdk-dev] [PATCH v3 20/20] net/sfc: support flow API query for count actions Andrew Rybchenko
  19 siblings, 0 replies; 104+ messages in thread
From: Andrew Rybchenko @ 2021-06-18 13:40 UTC (permalink / raw)
  To: dev; +Cc: David Marchand, Andy Moreton

The packetiser composes the packets that carry MAE counter updates.
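
For illustration, one 16-byte payload word can be decoded as sketched
below. This is an assumption-laden example: it assumes little-endian
byte order and uses raw byte offsets derived from the _OFST/_SIZE
definitions instead of the EFX field access macros.

#include <stddef.h>
#include <stdint.h>

static uint64_t
le_bytes_to_u64(const uint8_t *p, size_t size)
{
	uint64_t v = 0;
	size_t i;

	for (i = 0; i < size; i++)
		v |= (uint64_t)p[i] << (8 * i);

	return v;
}

/* "payload" points at one 16-byte payload word of a counter packet. */
static void
decode_payload_word_example(const uint8_t *payload, uint32_t *counter_idxp,
			    uint64_t *pkt_countp, uint64_t *byte_countp)
{
	*counter_idxp = (uint32_t)le_bytes_to_u64(payload, 3);	/* bits 0..23 */
	*pkt_countp = le_bytes_to_u64(payload + 4, 6);		/* 48 bits */
	*byte_countp = le_bytes_to_u64(payload + 10, 6);	/* 48 bits */
}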

Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
---
 .../base/efx_regs_counters_pkt_format.h       | 87 +++++++++++++++++++
 1 file changed, 87 insertions(+)
 create mode 100644 drivers/common/sfc_efx/base/efx_regs_counters_pkt_format.h

diff --git a/drivers/common/sfc_efx/base/efx_regs_counters_pkt_format.h b/drivers/common/sfc_efx/base/efx_regs_counters_pkt_format.h
new file mode 100644
index 0000000000..6610d07dc0
--- /dev/null
+++ b/drivers/common/sfc_efx/base/efx_regs_counters_pkt_format.h
@@ -0,0 +1,87 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ *
+ * Copyright(c) 2020-2021 Xilinx, Inc.
+ */
+
+#ifndef	_SYS_EFX_REGS_COUNTERS_PKT_FORMAT_H
+#define	_SYS_EFX_REGS_COUNTERS_PKT_FORMAT_H
+
+/*
+ * Packetiser packet format definition.
+ * SF-122415-TC - OVS Counter Design Specification section 7
+ * Primary copy of the header is located in the smartnic_registry repo:
+ * src/ovs_counter/packetiser_packet_format.h
+ */
+
+/*------------------------------------------------------------*/
+/*
+ * ER_RX_SL_PACKETISER_HEADER_WORD(160bit):
+ *
+ */
+#define	ER_RX_SL_PACKETISER_HEADER_WORD_SIZE 20
+
+#define	ERF_SC_PACKETISER_HEADER_VERSION_LBN 0
+#define	ERF_SC_PACKETISER_HEADER_VERSION_WIDTH 8
+/* Deprecated, use ERF_SC_PACKETISER_HEADER_VERSION_2 instead */
+#define	ERF_SC_PACKETISER_HEADER_VERSION_VALUE 2
+#define	ERF_SC_PACKETISER_HEADER_VERSION_2 2
+#define	ERF_SC_PACKETISER_HEADER_IDENTIFIER_LBN 8
+#define	ERF_SC_PACKETISER_HEADER_IDENTIFIER_WIDTH 8
+#define	ERF_SC_PACKETISER_HEADER_IDENTIFIER_AR 0
+#define	ERF_SC_PACKETISER_HEADER_IDENTIFIER_CT 1
+#define	ERF_SC_PACKETISER_HEADER_HEADER_OFFSET_LBN 16
+#define	ERF_SC_PACKETISER_HEADER_HEADER_OFFSET_WIDTH 8
+#define	ERF_SC_PACKETISER_HEADER_HEADER_OFFSET_DEFAULT 0x4
+#define	ERF_SC_PACKETISER_HEADER_PAYLOAD_OFFSET_LBN 24
+#define	ERF_SC_PACKETISER_HEADER_PAYLOAD_OFFSET_WIDTH 8
+#define	ERF_SC_PACKETISER_HEADER_PAYLOAD_OFFSET_DEFAULT 0x14
+#define	ERF_SC_PACKETISER_HEADER_INDEX_LBN 32
+#define	ERF_SC_PACKETISER_HEADER_INDEX_WIDTH 16
+#define	ERF_SC_PACKETISER_HEADER_COUNT_LBN 48
+#define	ERF_SC_PACKETISER_HEADER_COUNT_WIDTH 16
+#define	ERF_SC_PACKETISER_HEADER_RESERVED_0_LBN 64
+#define	ERF_SC_PACKETISER_HEADER_RESERVED_0_WIDTH 32
+#define	ERF_SC_PACKETISER_HEADER_RESERVED_1_LBN 96
+#define	ERF_SC_PACKETISER_HEADER_RESERVED_1_WIDTH 32
+#define	ERF_SC_PACKETISER_HEADER_RESERVED_2_LBN 128
+#define	ERF_SC_PACKETISER_HEADER_RESERVED_2_WIDTH 32
+
+
+/*------------------------------------------------------------*/
+/*
+ * ER_RX_SL_PACKETISER_PAYLOAD_WORD(128bit):
+ *
+ */
+#define	ER_RX_SL_PACKETISER_PAYLOAD_WORD_SIZE 16
+
+#define	ERF_SC_PACKETISER_PAYLOAD_COUNTER_INDEX_LBN 0
+#define	ERF_SC_PACKETISER_PAYLOAD_COUNTER_INDEX_WIDTH 24
+#define	ERF_SC_PACKETISER_PAYLOAD_RESERVED_LBN 24
+#define	ERF_SC_PACKETISER_PAYLOAD_RESERVED_WIDTH 8
+#define	ERF_SC_PACKETISER_PAYLOAD_PACKET_COUNT_OFST 4
+#define	ERF_SC_PACKETISER_PAYLOAD_PACKET_COUNT_SIZE 6
+#define	ERF_SC_PACKETISER_PAYLOAD_PACKET_COUNT_LBN 32
+#define	ERF_SC_PACKETISER_PAYLOAD_PACKET_COUNT_WIDTH 48
+#define	ERF_SC_PACKETISER_PAYLOAD_PACKET_COUNT_LO_OFST 4
+#define	ERF_SC_PACKETISER_PAYLOAD_PACKET_COUNT_LO_SIZE 4
+#define	ERF_SC_PACKETISER_PAYLOAD_PACKET_COUNT_LO_LBN 32
+#define	ERF_SC_PACKETISER_PAYLOAD_PACKET_COUNT_LO_WIDTH 32
+#define	ERF_SC_PACKETISER_PAYLOAD_PACKET_COUNT_HI_OFST 8
+#define	ERF_SC_PACKETISER_PAYLOAD_PACKET_COUNT_HI_SIZE 2
+#define	ERF_SC_PACKETISER_PAYLOAD_PACKET_COUNT_HI_LBN 64
+#define	ERF_SC_PACKETISER_PAYLOAD_PACKET_COUNT_HI_WIDTH 16
+#define	ERF_SC_PACKETISER_PAYLOAD_BYTE_COUNT_OFST 10
+#define	ERF_SC_PACKETISER_PAYLOAD_BYTE_COUNT_SIZE 6
+#define	ERF_SC_PACKETISER_PAYLOAD_BYTE_COUNT_LBN 80
+#define	ERF_SC_PACKETISER_PAYLOAD_BYTE_COUNT_WIDTH 48
+#define	ERF_SC_PACKETISER_PAYLOAD_BYTE_COUNT_LO_OFST 10
+#define	ERF_SC_PACKETISER_PAYLOAD_BYTE_COUNT_LO_SIZE 2
+#define	ERF_SC_PACKETISER_PAYLOAD_BYTE_COUNT_LO_LBN 80
+#define	ERF_SC_PACKETISER_PAYLOAD_BYTE_COUNT_LO_WIDTH 16
+#define	ERF_SC_PACKETISER_PAYLOAD_BYTE_COUNT_HI_OFST 12
+#define	ERF_SC_PACKETISER_PAYLOAD_BYTE_COUNT_HI_SIZE 4
+#define	ERF_SC_PACKETISER_PAYLOAD_BYTE_COUNT_HI_LBN 96
+#define	ERF_SC_PACKETISER_PAYLOAD_BYTE_COUNT_HI_WIDTH 32
+
+
+#endif /* _SYS_EFX_REGS_COUNTERS_PKT_FORMAT_H */
-- 
2.30.2


^ permalink raw reply	[flat|nested] 104+ messages in thread

* [dpdk-dev] [PATCH v3 19/20] net/sfc: support flow action COUNT in transfer rules
  2021-06-18 13:40 ` [dpdk-dev] [PATCH v3 " Andrew Rybchenko
                     ` (17 preceding siblings ...)
  2021-06-18 13:40   ` [dpdk-dev] [PATCH v3 18/20] common/sfc_efx/base: add packetiser packet format definition Andrew Rybchenko
@ 2021-06-18 13:40   ` Andrew Rybchenko
  2021-06-21  8:28     ` David Marchand
  2021-06-18 13:40   ` [dpdk-dev] [PATCH v3 20/20] net/sfc: support flow API query for count actions Andrew Rybchenko
  19 siblings, 1 reply; 104+ messages in thread
From: Andrew Rybchenko @ 2021-06-18 13:40 UTC (permalink / raw)
  To: dev; +Cc: David Marchand, Igor Romanov, Andy Moreton, Ivan Malov

From: Igor Romanov <igor.romanov@oktetlabs.ru>

For now, a rule may have only one dedicated counter; shared counters
are not supported.

HW delivers (or "streams") counter readings using special packets.
The driver creates a dedicated Rx queue to receive such packets
and requests that HW start "streaming" the readings to it.

The counter queue is polled periodically, and the first available
service core is used for that. Hence, the user has to specify at least
one service core for counters to work. Such a core is shared by all
MAE-capable devices managed by the sfc driver.
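
A transfer rule with a COUNT action can then be requested via the
generic flow API roughly as sketched below (illustrative only; the port
IDs, the helper name and the omitted error handling are simplifications
of this example):

#include <stdint.h>
#include <rte_flow.h>

static struct rte_flow *
counted_transfer_rule_example(uint16_t port_id)
{
	struct rte_flow_attr attr = { .transfer = 1 };
	struct rte_flow_item_port_id item_port = { .id = 1 };
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_PORT_ID, .spec = &item_port },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action_count count_conf = { .id = 0 };
	struct rte_flow_action_port_id fwd_port = { .id = 0 };
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_COUNT, .conf = &count_conf },
		{ .type = RTE_FLOW_ACTION_TYPE_PORT_ID, .conf = &fwd_port },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};
	struct rte_flow_error error;

	return rte_flow_create(port_id, &attr, pattern, actions, &error);
}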

Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
Reviewed-by: Ivan Malov <ivan.malov@oktetlabs.ru>
---
 doc/guides/nics/sfc_efx.rst            |   2 +
 doc/guides/rel_notes/release_21_08.rst |   6 +
 drivers/net/sfc/meson.build            |  10 +
 drivers/net/sfc/sfc_flow.c             |   7 +
 drivers/net/sfc/sfc_mae.c              | 231 +++++++++-
 drivers/net/sfc/sfc_mae.h              |  60 +++
 drivers/net/sfc/sfc_mae_counter.c      | 578 +++++++++++++++++++++++++
 drivers/net/sfc/sfc_mae_counter.h      |  11 +
 drivers/net/sfc/sfc_stats.h            |  80 ++++
 drivers/net/sfc/sfc_tweak.h            |   9 +
 10 files changed, 989 insertions(+), 5 deletions(-)
 create mode 100644 drivers/net/sfc/sfc_stats.h

diff --git a/doc/guides/nics/sfc_efx.rst b/doc/guides/nics/sfc_efx.rst
index cf1269cc03..bd08118da7 100644
--- a/doc/guides/nics/sfc_efx.rst
+++ b/doc/guides/nics/sfc_efx.rst
@@ -240,6 +240,8 @@ Supported actions (***transfer*** rules):
 
 - PORT_ID
 
+- COUNT
+
 - DROP
 
 Validating flow rules depends on the firmware variant.
diff --git a/doc/guides/rel_notes/release_21_08.rst b/doc/guides/rel_notes/release_21_08.rst
index a6ecfdf3ce..75688304da 100644
--- a/doc/guides/rel_notes/release_21_08.rst
+++ b/doc/guides/rel_notes/release_21_08.rst
@@ -55,6 +55,12 @@ New Features
      Also, make sure to start the actual text at the margin.
      =======================================================
 
+* **Updated Solarflare network PMD.**
+
+  Updated the Solarflare ``sfc_efx`` driver with changes including:
+
+  * Added COUNT action support for SN1000 NICs
+
 
 Removed Items
 -------------
diff --git a/drivers/net/sfc/meson.build b/drivers/net/sfc/meson.build
index f8880f740a..32b58e3d76 100644
--- a/drivers/net/sfc/meson.build
+++ b/drivers/net/sfc/meson.build
@@ -39,6 +39,16 @@ foreach flag: extra_flags
     endif
 endforeach
 
+# for clang 32-bit compiles we need libatomic for 64-bit atomic ops
+if cc.get_id() == 'clang' and dpdk_conf.get('RTE_ARCH_64') == false
+    ext_deps += cc.find_library('atomic')
+endif
+
+# for gcc compiles we need -latomic for 128-bit atomic ops
+if cc.get_id() == 'gcc'
+    ext_deps += cc.find_library('atomic')
+endif
+
 deps += ['common_sfc_efx', 'bus_pci']
 sources = files(
         'sfc_ethdev.c',
diff --git a/drivers/net/sfc/sfc_flow.c b/drivers/net/sfc/sfc_flow.c
index 2db8af1759..1294dbd3a7 100644
--- a/drivers/net/sfc/sfc_flow.c
+++ b/drivers/net/sfc/sfc_flow.c
@@ -24,6 +24,7 @@
 #include "sfc_flow.h"
 #include "sfc_log.h"
 #include "sfc_dp_rx.h"
+#include "sfc_mae_counter.h"
 
 struct sfc_flow_ops_by_spec {
 	sfc_flow_parse_cb_t	*parse;
@@ -2854,6 +2855,12 @@ sfc_flow_stop(struct sfc_adapter *sa)
 		efx_rx_scale_context_free(sa->nic, rss->dummy_rss_context);
 		rss->dummy_rss_context = EFX_RSS_CONTEXT_DEFAULT;
 	}
+
+	/*
+	 * MAE counter service is not stopped on flow rule remove to avoid
+	 * extra work. Make sure that it is stopped here.
+	 */
+	sfc_mae_counter_stop(sa);
 }
 
 int
diff --git a/drivers/net/sfc/sfc_mae.c b/drivers/net/sfc/sfc_mae.c
index 8ffcf72d88..c3efd5b407 100644
--- a/drivers/net/sfc/sfc_mae.c
+++ b/drivers/net/sfc/sfc_mae.c
@@ -19,6 +19,7 @@
 #include "sfc_mae_counter.h"
 #include "sfc_log.h"
 #include "sfc_switch.h"
+#include "sfc_service.h"
 
 static int
 sfc_mae_assign_entity_mport(struct sfc_adapter *sa,
@@ -30,6 +31,19 @@ sfc_mae_assign_entity_mport(struct sfc_adapter *sa,
 					      mportp);
 }
 
+static int
+sfc_mae_counter_registry_init(struct sfc_mae_counter_registry *registry,
+			      uint32_t nb_counters_max)
+{
+	return sfc_mae_counters_init(&registry->counters, nb_counters_max);
+}
+
+static void
+sfc_mae_counter_registry_fini(struct sfc_mae_counter_registry *registry)
+{
+	sfc_mae_counters_fini(&registry->counters);
+}
+
 int
 sfc_mae_attach(struct sfc_adapter *sa)
 {
@@ -59,6 +73,15 @@ sfc_mae_attach(struct sfc_adapter *sa)
 	if (rc != 0)
 		goto fail_mae_get_limits;
 
+	sfc_log_init(sa, "init MAE counter registry");
+	rc = sfc_mae_counter_registry_init(&mae->counter_registry,
+					   limits.eml_max_n_counters);
+	if (rc != 0) {
+		sfc_err(sa, "failed to init MAE counters registry for %u entries: %s",
+			limits.eml_max_n_counters, rte_strerror(rc));
+		goto fail_counter_registry_init;
+	}
+
 	sfc_log_init(sa, "assign entity MPORT");
 	rc = sfc_mae_assign_entity_mport(sa, &entity_mport);
 	if (rc != 0)
@@ -107,6 +130,9 @@ sfc_mae_attach(struct sfc_adapter *sa)
 fail_mae_assign_switch_port:
 fail_mae_assign_switch_domain:
 fail_mae_assign_entity_mport:
+	sfc_mae_counter_registry_fini(&mae->counter_registry);
+
+fail_counter_registry_init:
 fail_mae_get_limits:
 	efx_mae_fini(sa->nic);
 
@@ -131,6 +157,7 @@ sfc_mae_detach(struct sfc_adapter *sa)
 		return;
 
 	rte_free(mae->bounce_eh.buf);
+	sfc_mae_counter_registry_fini(&mae->counter_registry);
 
 	efx_mae_fini(sa->nic);
 
@@ -480,9 +507,72 @@ sfc_mae_encap_header_disable(struct sfc_adapter *sa,
 	--(fw_rsrc->refcnt);
 }
 
+static int
+sfc_mae_counters_enable(struct sfc_adapter *sa,
+			struct sfc_mae_counter_id *counters,
+			unsigned int n_counters,
+			efx_mae_actions_t *action_set_spec)
+{
+	int rc;
+
+	sfc_log_init(sa, "entry");
+
+	if (n_counters == 0) {
+		sfc_log_init(sa, "no counters - skip");
+		return 0;
+	}
+
+	SFC_ASSERT(sfc_adapter_is_locked(sa));
+	SFC_ASSERT(n_counters == 1);
+
+	rc = sfc_mae_counter_enable(sa, &counters[0]);
+	if (rc != 0) {
+		sfc_err(sa, "failed to enable MAE counter %u: %s",
+			counters[0].mae_id.id, rte_strerror(rc));
+		goto fail_counter_add;
+	}
+
+	rc = efx_mae_action_set_fill_in_counter_id(action_set_spec,
+						   &counters[0].mae_id);
+	if (rc != 0) {
+		sfc_err(sa, "failed to fill in MAE counter %u in action set: %s",
+			counters[0].mae_id.id, rte_strerror(rc));
+		goto fail_fill_in_id;
+	}
+
+	return 0;
+
+fail_fill_in_id:
+	(void)sfc_mae_counter_disable(sa, &counters[0]);
+
+fail_counter_add:
+	sfc_log_init(sa, "failed: %s", rte_strerror(rc));
+	return rc;
+}
+
+static int
+sfc_mae_counters_disable(struct sfc_adapter *sa,
+			 struct sfc_mae_counter_id *counters,
+			 unsigned int n_counters)
+{
+	if (n_counters == 0)
+		return 0;
+
+	SFC_ASSERT(sfc_adapter_is_locked(sa));
+	SFC_ASSERT(n_counters == 1);
+
+	if (counters[0].mae_id.id == EFX_MAE_RSRC_ID_INVALID) {
+		sfc_err(sa, "failed to disable: already disabled");
+		return EALREADY;
+	}
+
+	return sfc_mae_counter_disable(sa, &counters[0]);
+}
+
 static struct sfc_mae_action_set *
 sfc_mae_action_set_attach(struct sfc_adapter *sa,
 			  const struct sfc_mae_encap_header *encap_header,
+			  unsigned int n_count,
 			  const efx_mae_actions_t *spec)
 {
 	struct sfc_mae_action_set *action_set;
@@ -491,7 +581,12 @@ sfc_mae_action_set_attach(struct sfc_adapter *sa,
 	SFC_ASSERT(sfc_adapter_is_locked(sa));
 
 	TAILQ_FOREACH(action_set, &mae->action_sets, entries) {
+		/*
+		 * Shared counters are not supported, hence action sets with
+		 * COUNT are not attachable.
+		 */
 		if (action_set->encap_header == encap_header &&
+		    n_count == 0 &&
 		    efx_mae_action_set_specs_equal(action_set->spec, spec)) {
 			sfc_dbg(sa, "attaching to action_set=%p", action_set);
 			++(action_set->refcnt);
@@ -504,18 +599,52 @@ sfc_mae_action_set_attach(struct sfc_adapter *sa,
 
 static int
 sfc_mae_action_set_add(struct sfc_adapter *sa,
+		       const struct rte_flow_action actions[],
 		       efx_mae_actions_t *spec,
 		       struct sfc_mae_encap_header *encap_header,
+		       unsigned int n_counters,
 		       struct sfc_mae_action_set **action_setp)
 {
 	struct sfc_mae_action_set *action_set;
 	struct sfc_mae *mae = &sa->mae;
+	unsigned int i;
 
 	SFC_ASSERT(sfc_adapter_is_locked(sa));
 
 	action_set = rte_zmalloc("sfc_mae_action_set", sizeof(*action_set), 0);
-	if (action_set == NULL)
+	if (action_set == NULL) {
+		sfc_err(sa, "failed to alloc action set");
 		return ENOMEM;
+	}
+
+	if (n_counters > 0) {
+		const struct rte_flow_action *action;
+
+		action_set->counters = rte_malloc("sfc_mae_counter_ids",
+			sizeof(action_set->counters[0]) * n_counters, 0);
+		if (action_set->counters == NULL) {
+			rte_free(action_set);
+			sfc_err(sa, "failed to alloc counters");
+			return ENOMEM;
+		}
+
+		for (action = actions, i = 0;
+		     action->type != RTE_FLOW_ACTION_TYPE_END && i < n_counters;
+		     ++action) {
+			const struct rte_flow_action_count *conf;
+
+			if (action->type != RTE_FLOW_ACTION_TYPE_COUNT)
+				continue;
+
+			conf = action->conf;
+
+			action_set->counters[i].mae_id.id =
+				EFX_MAE_RSRC_ID_INVALID;
+			action_set->counters[i].rte_id = conf->id;
+			i++;
+		}
+		action_set->n_counters = n_counters;
+	}
 
 	action_set->refcnt = 1;
 	action_set->spec = spec;
@@ -555,6 +684,12 @@ sfc_mae_action_set_del(struct sfc_adapter *sa,
 
 	efx_mae_action_set_spec_fini(sa->nic, action_set->spec);
 	sfc_mae_encap_header_del(sa, action_set->encap_header);
+	if (action_set->n_counters > 0) {
+		SFC_ASSERT(action_set->n_counters == 1);
+		SFC_ASSERT(action_set->counters[0].mae_id.id ==
+			   EFX_MAE_RSRC_ID_INVALID);
+		rte_free(action_set->counters);
+	}
 	TAILQ_REMOVE(&mae->action_sets, action_set, entries);
 	rte_free(action_set);
 
@@ -566,6 +701,7 @@ sfc_mae_action_set_enable(struct sfc_adapter *sa,
 			  struct sfc_mae_action_set *action_set)
 {
 	struct sfc_mae_encap_header *encap_header = action_set->encap_header;
+	struct sfc_mae_counter_id *counters = action_set->counters;
 	struct sfc_mae_fw_rsrc *fw_rsrc = &action_set->fw_rsrc;
 	int rc;
 
@@ -580,14 +716,26 @@ sfc_mae_action_set_enable(struct sfc_adapter *sa,
 		if (rc != 0)
 			return rc;
 
-		rc = efx_mae_action_set_alloc(sa->nic, action_set->spec,
-					      &fw_rsrc->aset_id);
+		rc = sfc_mae_counters_enable(sa, counters,
+					     action_set->n_counters,
+					     action_set->spec);
 		if (rc != 0) {
+			sfc_err(sa, "failed to enable %u MAE counters: %s",
+				action_set->n_counters, rte_strerror(rc));
+
 			sfc_mae_encap_header_disable(sa, encap_header);
+			return rc;
+		}
 
+		rc = efx_mae_action_set_alloc(sa->nic, action_set->spec,
+					      &fw_rsrc->aset_id);
+		if (rc != 0) {
 			sfc_err(sa, "failed to enable action_set=%p: %s",
 				action_set, strerror(rc));
 
+			(void)sfc_mae_counters_disable(sa, counters,
+						       action_set->n_counters);
+			sfc_mae_encap_header_disable(sa, encap_header);
 			return rc;
 		}
 
@@ -627,6 +775,13 @@ sfc_mae_action_set_disable(struct sfc_adapter *sa,
 		}
 		fw_rsrc->aset_id.id = EFX_MAE_RSRC_ID_INVALID;
 
+		rc = sfc_mae_counters_disable(sa, action_set->counters,
+					      action_set->n_counters);
+		if (rc != 0) {
+			sfc_err(sa, "failed to disable %u MAE counters: %s",
+				action_set->n_counters, rte_strerror(rc));
+		}
+
 		sfc_mae_encap_header_disable(sa, action_set->encap_header);
 	}
 
@@ -2508,6 +2663,48 @@ sfc_mae_rule_parse_action_mark(const struct rte_flow_action_mark *conf,
 	return efx_mae_action_set_populate_mark(spec, conf->id);
 }
 
+static int
+sfc_mae_rule_parse_action_count(struct sfc_adapter *sa,
+				const struct rte_flow_action_count *conf,
+				efx_mae_actions_t *spec)
+{
+	int rc;
+
+	if (conf->shared) {
+		rc = ENOTSUP;
+		goto fail_counter_shared;
+	}
+
+	if ((sa->counter_rxq.state & SFC_COUNTER_RXQ_INITIALIZED) == 0) {
+		sfc_err(sa,
+			"counter queue is not configured for COUNT action");
+		rc = EINVAL;
+		goto fail_counter_queue_uninit;
+	}
+
+	if (sfc_get_service_lcore(SOCKET_ID_ANY) == RTE_MAX_LCORE) {
+		rc = EINVAL;
+		goto fail_no_service_core;
+	}
+
+	rc = efx_mae_action_set_populate_count(spec);
+	if (rc != 0) {
+		sfc_err(sa,
+			"failed to populate counters in MAE action set: %s",
+			rte_strerror(rc));
+		goto fail_populate_count;
+	}
+
+	return 0;
+
+fail_populate_count:
+fail_no_service_core:
+fail_counter_queue_uninit:
+fail_counter_shared:
+
+	return rc;
+}
+
 static int
 sfc_mae_rule_parse_action_phy_port(struct sfc_adapter *sa,
 				   const struct rte_flow_action_phy_port *conf,
@@ -2623,6 +2820,11 @@ sfc_mae_rule_parse_action(struct sfc_adapter *sa,
 							   spec, error);
 		custom_error = B_TRUE;
 		break;
+	case RTE_FLOW_ACTION_TYPE_COUNT:
+		SFC_BUILD_SET_OVERFLOW(RTE_FLOW_ACTION_TYPE_COUNT,
+				       bundle->actions_mask);
+		rc = sfc_mae_rule_parse_action_count(sa, action->conf, spec);
+		break;
 	case RTE_FLOW_ACTION_TYPE_FLAG:
 		SFC_BUILD_SET_OVERFLOW(RTE_FLOW_ACTION_TYPE_FLAG,
 				       bundle->actions_mask);
@@ -2708,6 +2910,7 @@ sfc_mae_rule_parse_actions(struct sfc_adapter *sa,
 	const struct rte_flow_action *action;
 	struct sfc_mae *mae = &sa->mae;
 	efx_mae_actions_t *spec;
+	unsigned int n_count;
 	int rc;
 
 	rte_errno = 0;
@@ -2745,15 +2948,22 @@ sfc_mae_rule_parse_actions(struct sfc_adapter *sa,
 	if (rc != 0)
 		goto fail_process_encap_header;
 
+	n_count = efx_mae_action_set_get_nb_count(spec);
+	if (n_count > 1) {
+		rc = ENOTSUP;
+		sfc_err(sa, "too many count actions requested: %u", n_count);
+		goto fail_nb_count;
+	}
+
 	spec_mae->action_set = sfc_mae_action_set_attach(sa, encap_header,
-							 spec);
+							 n_count, spec);
 	if (spec_mae->action_set != NULL) {
 		sfc_mae_encap_header_del(sa, encap_header);
 		efx_mae_action_set_spec_fini(sa->nic, spec);
 		return 0;
 	}
 
-	rc = sfc_mae_action_set_add(sa, spec, encap_header,
+	rc = sfc_mae_action_set_add(sa, actions, spec, encap_header, n_count,
 				    &spec_mae->action_set);
 	if (rc != 0)
 		goto fail_action_set_add;
@@ -2761,6 +2971,7 @@ sfc_mae_rule_parse_actions(struct sfc_adapter *sa,
 	return 0;
 
 fail_action_set_add:
+fail_nb_count:
 	sfc_mae_encap_header_del(sa, encap_header);
 
 fail_process_encap_header:
@@ -2915,6 +3126,15 @@ sfc_mae_flow_insert(struct sfc_adapter *sa,
 	if (rc != 0)
 		goto fail_action_set_enable;
 
+	if (action_set->n_counters > 0) {
+		rc = sfc_mae_counter_start(sa);
+		if (rc != 0) {
+			sfc_err(sa, "failed to start MAE counters support: %s",
+				rte_strerror(rc));
+			goto fail_mae_counter_start;
+		}
+	}
+
 	rc = efx_mae_action_rule_insert(sa->nic, spec_mae->match_spec,
 					NULL, &fw_rsrc->aset_id,
 					&spec_mae->rule_id);
@@ -2927,6 +3147,7 @@ sfc_mae_flow_insert(struct sfc_adapter *sa,
 	return 0;
 
 fail_action_rule_insert:
+fail_mae_counter_start:
 	sfc_mae_action_set_disable(sa, action_set);
 
 fail_action_set_enable:
diff --git a/drivers/net/sfc/sfc_mae.h b/drivers/net/sfc/sfc_mae.h
index 9740e54e49..2cc4334890 100644
--- a/drivers/net/sfc/sfc_mae.h
+++ b/drivers/net/sfc/sfc_mae.h
@@ -16,6 +16,8 @@
 
 #include "efx.h"
 
+#include "sfc_stats.h"
+
 #ifdef __cplusplus
 extern "C" {
 #endif
@@ -54,10 +56,20 @@ struct sfc_mae_encap_header {
 
 TAILQ_HEAD(sfc_mae_encap_headers, sfc_mae_encap_header);
 
+/* Counter ID */
+struct sfc_mae_counter_id {
+	/* ID of a counter in MAE */
+	efx_counter_t			mae_id;
+	/* ID of a counter in RTE */
+	uint32_t			rte_id;
+};
+
 /** Action set registry entry */
 struct sfc_mae_action_set {
 	TAILQ_ENTRY(sfc_mae_action_set)	entries;
 	unsigned int			refcnt;
+	struct sfc_mae_counter_id	*counters;
+	uint32_t			n_counters;
 	efx_mae_actions_t		*spec;
 	struct sfc_mae_encap_header	*encap_header;
 	struct sfc_mae_fw_rsrc		fw_rsrc;
@@ -83,6 +95,50 @@ struct sfc_mae_bounce_eh {
 	efx_tunnel_protocol_t		type;
 };
 
+/** Counter collection entry */
+struct sfc_mae_counter {
+	bool				inuse;
+	uint32_t			generation_count;
+	union sfc_pkts_bytes		value;
+	union sfc_pkts_bytes		reset;
+};
+
+struct sfc_mae_counters_xstats {
+	uint64_t			not_inuse_update;
+	uint64_t			realloc_update;
+};
+
+struct sfc_mae_counters {
+	/** An array of all MAE counters */
+	struct sfc_mae_counter		*mae_counters;
+	/** Extra statistics for counters */
+	struct sfc_mae_counters_xstats	xstats;
+	/** Count of all MAE counters */
+	unsigned int			n_mae_counters;
+};
+
+struct sfc_mae_counter_registry {
+	/* Common counter information */
+	/** Counters collection */
+	struct sfc_mae_counters		counters;
+
+	/* Information used by counter update service */
+	/** Callback to get packets from RxQ */
+	eth_rx_burst_t			rx_pkt_burst;
+	/** Data for the callback to get packets */
+	struct sfc_dp_rxq		*rx_dp;
+	/** Number of buffers pushed to the RxQ */
+	unsigned int			pushed_n_buffers;
+	/** Are credits used by counter stream */
+	bool				use_credits;
+
+	/* Information used by configuration routines */
+	/** Counter service core ID */
+	uint32_t			service_core_id;
+	/** Counter service ID */
+	uint32_t			service_id;
+};
+
 struct sfc_mae {
 	/** Assigned switch domain identifier */
 	uint16_t			switch_domain_id;
@@ -104,6 +160,10 @@ struct sfc_mae {
 	struct sfc_mae_action_sets	action_sets;
 	/** Encap. header bounce buffer */
 	struct sfc_mae_bounce_eh	bounce_eh;
+	/** Flag indicating whether counter-only RxQ is running */
+	bool				counter_rxq_running;
+	/** Counter registry */
+	struct sfc_mae_counter_registry	counter_registry;
 };
 
 struct sfc_adapter;
diff --git a/drivers/net/sfc/sfc_mae_counter.c b/drivers/net/sfc/sfc_mae_counter.c
index c7646cf7b1..b0cb8157aa 100644
--- a/drivers/net/sfc/sfc_mae_counter.c
+++ b/drivers/net/sfc/sfc_mae_counter.c
@@ -4,8 +4,10 @@
  */
 
 #include <rte_common.h>
+#include <rte_service_component.h>
 
 #include "efx.h"
+#include "efx_regs_counters_pkt_format.h"
 
 #include "sfc_ev.h"
 #include "sfc.h"
@@ -49,6 +51,520 @@ sfc_mae_counter_rxq_required(struct sfc_adapter *sa)
 	return true;
 }
 
+int
+sfc_mae_counter_enable(struct sfc_adapter *sa,
+		       struct sfc_mae_counter_id *counterp)
+{
+	struct sfc_mae_counter_registry *reg = &sa->mae.counter_registry;
+	struct sfc_mae_counters *counters = &reg->counters;
+	struct sfc_mae_counter *p;
+	efx_counter_t mae_counter;
+	uint32_t generation_count;
+	uint32_t unused;
+	int rc;
+
+	/*
+	 * The actual count of counters allocated is ignored since a failure
+	 * to allocate a single counter is indicated by non-zero return code.
+	 */
+	rc = efx_mae_counters_alloc(sa->nic, 1, &unused, &mae_counter,
+				    &generation_count);
+	if (rc != 0) {
+		sfc_err(sa, "failed to alloc MAE counter: %s",
+			rte_strerror(rc));
+		goto fail_mae_counter_alloc;
+	}
+
+	if (mae_counter.id >= counters->n_mae_counters) {
+		/*
+		 * The counter ID is expected to be in the range from 0 to
+		 * the maximum number of counters so that it always fits
+		 * into the array pre-allocated for the maximum counter ID.
+		 */
+		sfc_err(sa, "MAE counter ID is out of expected range");
+		rc = EFAULT;
+		goto fail_counter_id_range;
+	}
+
+	counterp->mae_id = mae_counter;
+
+	p = &counters->mae_counters[mae_counter.id];
+
+	/*
+	 * Ordering is relaxed since it is the only operation on counter value.
+	 * And it does not depend on different stores/loads in other threads.
+	 * Paired with relaxed ordering in counter increment.
+	 */
+	__atomic_store(&p->reset.pkts_bytes.int128,
+		       &p->value.pkts_bytes.int128, __ATOMIC_RELAXED);
+	p->generation_count = generation_count;
+
+	/*
+	 * The flag is set at the very end of add operation and reset
+	 * at the beginning of delete operation. Release ordering is
+	 * paired with acquire ordering on load in counter increment operation.
+	 */
+	__atomic_store_n(&p->inuse, true, __ATOMIC_RELEASE);
+
+	sfc_info(sa, "enabled MAE counter #%u with reset pkts=%" PRIu64
+		 " bytes=%" PRIu64, mae_counter.id,
+		 p->reset.pkts, p->reset.bytes);
+
+	return 0;
+
+fail_counter_id_range:
+	(void)efx_mae_counters_free(sa->nic, 1, &unused, &mae_counter, NULL);
+
+fail_mae_counter_alloc:
+	sfc_log_init(sa, "failed: %s", rte_strerror(rc));
+	return rc;
+}
+
+int
+sfc_mae_counter_disable(struct sfc_adapter *sa,
+			struct sfc_mae_counter_id *counter)
+{
+	struct sfc_mae_counter_registry *reg = &sa->mae.counter_registry;
+	struct sfc_mae_counters *counters = &reg->counters;
+	struct sfc_mae_counter *p;
+	uint32_t unused;
+	int rc;
+
+	if (counter->mae_id.id == EFX_MAE_RSRC_ID_INVALID)
+		return 0;
+
+	SFC_ASSERT(counter->mae_id.id < counters->n_mae_counters);
+	/*
+	 * The flag is set at the very end of add operation and reset
+	 * at the beginning of delete operation. Release ordering is
+	 * paired with acquire ordering on load in counter increment operation.
+	 */
+	p = &counters->mae_counters[counter->mae_id.id];
+	__atomic_store_n(&p->inuse, false, __ATOMIC_RELEASE);
+
+	rc = efx_mae_counters_free(sa->nic, 1, &unused, &counter->mae_id, NULL);
+	if (rc != 0)
+		sfc_err(sa, "failed to free MAE counter %u: %s",
+			counter->mae_id.id, rte_strerror(rc));
+
+	sfc_info(sa, "disabled MAE counter #%u with reset pkts=%" PRIu64
+		 " bytes=%" PRIu64, counter->mae_id.id,
+		 p->reset.pkts, p->reset.bytes);
+
+	/*
+	 * Do this regardless of the efx_mae_counters_free() return value.
+	 * If it fails, the resulting resource leakage is bad, but nothing
+	 * sensible can be done about it in this case.
+	 */
+	counter->mae_id.id = EFX_MAE_RSRC_ID_INVALID;
+
+	return rc;
+}
+
+static void
+sfc_mae_counter_increment(struct sfc_adapter *sa,
+			  struct sfc_mae_counters *counters,
+			  uint32_t mae_counter_id,
+			  uint32_t generation_count,
+			  uint64_t pkts, uint64_t bytes)
+{
+	struct sfc_mae_counter *p = &counters->mae_counters[mae_counter_id];
+	struct sfc_mae_counters_xstats *xstats = &counters->xstats;
+	union sfc_pkts_bytes cnt_val;
+	bool inuse;
+
+	/*
+	 * Acquire ordering is paired with release ordering in counter add
+	 * and delete operations.
+	 */
+	__atomic_load(&p->inuse, &inuse, __ATOMIC_ACQUIRE);
+	if (!inuse) {
+		/*
+		 * Two cases are possible:
+		 * 1) The counter has just been allocated; a too early
+		 *    counter update cannot be processed properly yet.
+		 * 2) A stale update for a counter that was freed and not
+		 *    reallocated; there is no point in processing it.
+		 */
+		xstats->not_inuse_update++;
+		return;
+	}
+
+	if (unlikely(generation_count < p->generation_count)) {
+		/*
+		 * It is a stale update for the reallocated counter
+		 * (i.e., freed and the same ID allocated again).
+		 */
+		xstats->realloc_update++;
+		return;
+	}
+
+	cnt_val.pkts = p->value.pkts + pkts;
+	cnt_val.bytes = p->value.bytes + bytes;
+
+	/*
+	 * Ordering is relaxed since it is the only operation on counter value.
+	 * And it does not depend on different stores/loads in other threads.
+	 * Paired with relaxed ordering on counter reset.
+	 */
+	__atomic_store(&p->value.pkts_bytes,
+		       &cnt_val.pkts_bytes, __ATOMIC_RELAXED);
+
+	sfc_info(sa, "update MAE counter #%u: pkts+%" PRIu64 "=%" PRIu64
+		 ", bytes+%" PRIu64 "=%" PRIu64, mae_counter_id,
+		 pkts, cnt_val.pkts, bytes, cnt_val.bytes);
+}
+
+static void
+sfc_mae_parse_counter_packet(struct sfc_adapter *sa,
+			     struct sfc_mae_counter_registry *counter_registry,
+			     const struct rte_mbuf *m)
+{
+	uint32_t generation_count;
+	const efx_xword_t *hdr;
+	const efx_oword_t *counters_data;
+	unsigned int version;
+	unsigned int id;
+	unsigned int header_offset;
+	unsigned int payload_offset;
+	unsigned int counter_count;
+	unsigned int required_len;
+	unsigned int i;
+
+	if (unlikely(m->nb_segs != 1)) {
+		sfc_err(sa, "unexpectedly scattered MAE counters packet (%u segments)",
+			m->nb_segs);
+		return;
+	}
+
+	if (unlikely(m->data_len < ER_RX_SL_PACKETISER_HEADER_WORD_SIZE)) {
+		sfc_err(sa, "too short MAE counters packet (%u bytes)",
+			m->data_len);
+		return;
+	}
+
+	/*
+	 * The generation count is located in the Rx prefix in the USER_MARK
+	 * field which is written into hash.fdir.hi field of an mbuf. See
+	 * SF-123581-TC SmartNIC Datapath Offloads section 4.7.5 Counters.
+	 */
+	generation_count = m->hash.fdir.hi;
+
+	hdr = rte_pktmbuf_mtod(m, const efx_xword_t *);
+
+	version = EFX_XWORD_FIELD(*hdr, ERF_SC_PACKETISER_HEADER_VERSION);
+	if (unlikely(version != ERF_SC_PACKETISER_HEADER_VERSION_2)) {
+		sfc_err(sa, "unexpected MAE counters packet version %u",
+			version);
+		return;
+	}
+
+	id = EFX_XWORD_FIELD(*hdr, ERF_SC_PACKETISER_HEADER_IDENTIFIER);
+	if (unlikely(id != ERF_SC_PACKETISER_HEADER_IDENTIFIER_AR)) {
+		sfc_err(sa, "unexpected MAE counters source identifier %u", id);
+		return;
+	}
+
+	/* Packet layout definitions assume fixed header offset in fact */
+	header_offset =
+		EFX_XWORD_FIELD(*hdr, ERF_SC_PACKETISER_HEADER_HEADER_OFFSET);
+	if (unlikely(header_offset !=
+		     ERF_SC_PACKETISER_HEADER_HEADER_OFFSET_DEFAULT)) {
+		sfc_err(sa, "unexpected MAE counters packet header offset %u",
+			header_offset);
+		return;
+	}
+
+	payload_offset =
+		EFX_XWORD_FIELD(*hdr, ERF_SC_PACKETISER_HEADER_PAYLOAD_OFFSET);
+
+	counter_count = EFX_XWORD_FIELD(*hdr, ERF_SC_PACKETISER_HEADER_COUNT);
+
+	required_len = payload_offset +
+			counter_count * sizeof(counters_data[0]);
+	if (unlikely(required_len > m->data_len)) {
+		sfc_err(sa, "truncated MAE counters packet: %u counters, packet length is %u vs %u required",
+			counter_count, m->data_len, required_len);
+		/*
+		 * In theory it is possible to process the available counters
+		 * data, but such a condition is really unexpected and it is
+		 * better to treat the entire packet as corrupted.
+		 */
+		return;
+	}
+
+	/* Ensure that counters data is 32-bit aligned */
+	if (unlikely(payload_offset % sizeof(uint32_t) != 0)) {
+		sfc_err(sa, "unsupported MAE counters payload offset %u, must be 32-bit aligned",
+			payload_offset);
+		return;
+	}
+	RTE_BUILD_BUG_ON(sizeof(counters_data[0]) !=
+			ER_RX_SL_PACKETISER_PAYLOAD_WORD_SIZE);
+
+	counters_data =
+		rte_pktmbuf_mtod_offset(m, const efx_oword_t *, payload_offset);
+
+	sfc_info(sa, "update %u MAE counters with gc=%u",
+		 counter_count, generation_count);
+
+	for (i = 0; i < counter_count; ++i) {
+		uint32_t packet_count_lo;
+		uint32_t packet_count_hi;
+		uint32_t byte_count_lo;
+		uint32_t byte_count_hi;
+
+		/*
+		 * Use 32-bit field accessors below since counters data
+		 * is not 64-bit aligned.
+		 * 32-bit alignment is checked above taking into account
+		 * that start of packet data is 32-bit aligned
+		 * (cache-line size aligned in fact).
+		 */
+		packet_count_lo =
+			EFX_OWORD_FIELD32(counters_data[i],
+				ERF_SC_PACKETISER_PAYLOAD_PACKET_COUNT_LO);
+		packet_count_hi =
+			EFX_OWORD_FIELD32(counters_data[i],
+				ERF_SC_PACKETISER_PAYLOAD_PACKET_COUNT_HI);
+		byte_count_lo =
+			EFX_OWORD_FIELD32(counters_data[i],
+				ERF_SC_PACKETISER_PAYLOAD_BYTE_COUNT_LO);
+		byte_count_hi =
+			EFX_OWORD_FIELD32(counters_data[i],
+				ERF_SC_PACKETISER_PAYLOAD_BYTE_COUNT_HI);
+		sfc_mae_counter_increment(sa,
+			&counter_registry->counters,
+			EFX_OWORD_FIELD32(counters_data[i],
+				ERF_SC_PACKETISER_PAYLOAD_COUNTER_INDEX),
+			generation_count,
+			(uint64_t)packet_count_lo |
+			((uint64_t)packet_count_hi <<
+			 ERF_SC_PACKETISER_PAYLOAD_PACKET_COUNT_LO_WIDTH),
+			(uint64_t)byte_count_lo |
+			((uint64_t)byte_count_hi <<
+			 ERF_SC_PACKETISER_PAYLOAD_BYTE_COUNT_LO_WIDTH));
+	}
+}
+
+static int32_t
+sfc_mae_counter_routine(void *arg)
+{
+	struct sfc_adapter *sa = arg;
+	struct sfc_mae_counter_registry *counter_registry =
+		&sa->mae.counter_registry;
+	struct rte_mbuf *mbufs[SFC_MAE_COUNTER_RX_BURST];
+	unsigned int pushed_diff;
+	unsigned int pushed;
+	unsigned int i;
+	uint16_t n;
+	int rc;
+
+	n = counter_registry->rx_pkt_burst(counter_registry->rx_dp, mbufs,
+					   SFC_MAE_COUNTER_RX_BURST);
+
+	for (i = 0; i < n; i++)
+		sfc_mae_parse_counter_packet(sa, counter_registry, mbufs[i]);
+
+	rte_pktmbuf_free_bulk(mbufs, n);
+
+	if (!counter_registry->use_credits)
+		return 0;
+
+	pushed = sfc_rx_get_pushed(sa, counter_registry->rx_dp);
+	pushed_diff = pushed - counter_registry->pushed_n_buffers;
+
+	if (pushed_diff >= SFC_COUNTER_RXQ_REFILL_LEVEL) {
+		rc = efx_mae_counters_stream_give_credits(sa->nic, pushed_diff);
+		if (rc == 0) {
+			counter_registry->pushed_n_buffers = pushed;
+		} else {
+			/*
+			 * FIXME: counters might be important for the
+			 * application. Handle the error in order to recover
+			 * from the failure
+			 */
+			SFC_GENERIC_LOG(DEBUG, "Give credits failed: %s",
+					rte_strerror(rc));
+		}
+	}
+
+	return 0;
+}
+
+static void
+sfc_mae_counter_service_unregister(struct sfc_adapter *sa)
+{
+	struct sfc_mae_counter_registry *registry =
+		&sa->mae.counter_registry;
+	const unsigned int wait_ms = 10000;
+	unsigned int i;
+
+	rte_service_runstate_set(registry->service_id, 0);
+	rte_service_component_runstate_set(registry->service_id, 0);
+
+	/*
+	 * Wait for the counter routine to finish the last iteration.
+	 * Give up on timeout.
+	 */
+	for (i = 0; i < wait_ms; i++) {
+		if (rte_service_may_be_active(registry->service_id) == 0)
+			break;
+
+		rte_delay_ms(1);
+	}
+	if (i == wait_ms)
+		sfc_warn(sa, "failed to wait for counter service to stop");
+
+	rte_service_map_lcore_set(registry->service_id,
+				  registry->service_core_id, 0);
+
+	rte_service_component_unregister(registry->service_id);
+}
+
+static struct sfc_rxq_info *
+sfc_counter_rxq_info_get(struct sfc_adapter *sa)
+{
+	return &sfc_sa2shared(sa)->rxq_info[sa->counter_rxq.sw_index];
+}
+
+static int
+sfc_mae_counter_service_register(struct sfc_adapter *sa,
+				 uint32_t counter_stream_flags)
+{
+	struct rte_service_spec service;
+	char counter_service_name[sizeof(service.name)] = "counter_service";
+	struct sfc_mae_counter_registry *counter_registry =
+		&sa->mae.counter_registry;
+	uint32_t cid;
+	uint32_t sid;
+	int rc;
+
+	sfc_log_init(sa, "entry");
+
+	/* Prepare service info */
+	memset(&service, 0, sizeof(service));
+	rte_strscpy(service.name, counter_service_name, sizeof(service.name));
+	service.socket_id = sa->socket_id;
+	service.callback = sfc_mae_counter_routine;
+	service.callback_userdata = sa;
+	counter_registry->rx_pkt_burst = sa->eth_dev->rx_pkt_burst;
+	counter_registry->rx_dp = sfc_counter_rxq_info_get(sa)->dp;
+	counter_registry->pushed_n_buffers = 0;
+	counter_registry->use_credits = counter_stream_flags &
+		EFX_MAE_COUNTERS_STREAM_OUT_USES_CREDITS;
+
+	cid = sfc_get_service_lcore(sa->socket_id);
+	if (cid == RTE_MAX_LCORE && sa->socket_id != SOCKET_ID_ANY) {
+		/* Warn and try to allocate on any NUMA node */
+		sfc_warn(sa,
+			"failed to get service lcore for counter service at socket %d",
+			sa->socket_id);
+
+		cid = sfc_get_service_lcore(SOCKET_ID_ANY);
+	}
+	if (cid == RTE_MAX_LCORE) {
+		rc = ENOTSUP;
+		sfc_err(sa, "failed to get service lcore for counter service");
+		goto fail_get_service_lcore;
+	}
+
+	/* Service core may be in "stopped" state, start it */
+	rc = rte_service_lcore_start(cid);
+	if (rc != 0 && rc != -EALREADY) {
+		sfc_err(sa, "failed to start service core for counter service: %s",
+			rte_strerror(-rc));
+		rc = ENOTSUP;
+		goto fail_start_core;
+	}
+
+	/* Register counter service */
+	rc = rte_service_component_register(&service, &sid);
+	if (rc != 0) {
+		rc = ENOEXEC;
+		sfc_err(sa, "failed to register counter service component");
+		goto fail_register;
+	}
+
+	/* Map the service with the service core */
+	rc = rte_service_map_lcore_set(sid, cid, 1);
+	if (rc != 0) {
+		rc = -rc;
+		sfc_err(sa, "failed to map lcore for counter service: %s",
+			rte_strerror(rc));
+		goto fail_map_lcore;
+	}
+
+	/* Run the service */
+	rc = rte_service_component_runstate_set(sid, 1);
+	if (rc < 0) {
+		rc = -rc;
+		sfc_err(sa, "failed to run counter service component: %s",
+			rte_strerror(rc));
+		goto fail_component_runstate_set;
+	}
+	rc = rte_service_runstate_set(sid, 1);
+	if (rc < 0) {
+		rc = -rc;
+		sfc_err(sa, "failed to run counter service");
+		goto fail_runstate_set;
+	}
+
+	counter_registry->service_core_id = cid;
+	counter_registry->service_id = sid;
+
+	sfc_log_init(sa, "done");
+
+	return 0;
+
+fail_runstate_set:
+	rte_service_component_runstate_set(sid, 0);
+
+fail_component_runstate_set:
+	rte_service_map_lcore_set(sid, cid, 0);
+
+fail_map_lcore:
+	rte_service_component_unregister(sid);
+
+fail_register:
+fail_start_core:
+fail_get_service_lcore:
+	sfc_log_init(sa, "failed: %s", rte_strerror(rc));
+
+	return rc;
+}
+
+int
+sfc_mae_counters_init(struct sfc_mae_counters *counters,
+		      uint32_t nb_counters_max)
+{
+	int rc;
+
+	SFC_GENERIC_LOG(DEBUG, "%s: entry", __func__);
+
+	counters->mae_counters = rte_zmalloc("sfc_mae_counters",
+		sizeof(*counters->mae_counters) * nb_counters_max, 0);
+	if (counters->mae_counters == NULL) {
+		rc = ENOMEM;
+		SFC_GENERIC_LOG(ERR, "%s: failed: %s", __func__,
+				rte_strerror(rc));
+		return rc;
+	}
+
+	counters->n_mae_counters = nb_counters_max;
+
+	SFC_GENERIC_LOG(DEBUG, "%s: done", __func__);
+
+	return 0;
+}
+
+void
+sfc_mae_counters_fini(struct sfc_mae_counters *counters)
+{
+	rte_free(counters->mae_counters);
+	counters->mae_counters = NULL;
+}
+
 int
 sfc_mae_counter_rxq_attach(struct sfc_adapter *sa)
 {
@@ -215,3 +731,65 @@ sfc_mae_counter_rxq_fini(struct sfc_adapter *sa)
 
 	sfc_log_init(sa, "done");
 }
+
+void
+sfc_mae_counter_stop(struct sfc_adapter *sa)
+{
+	struct sfc_mae *mae = &sa->mae;
+
+	sfc_log_init(sa, "entry");
+
+	if (!mae->counter_rxq_running) {
+		sfc_log_init(sa, "counter queue is not running - skip");
+		return;
+	}
+
+	sfc_mae_counter_service_unregister(sa);
+	efx_mae_counters_stream_stop(sa->nic, sa->counter_rxq.sw_index, NULL);
+
+	mae->counter_rxq_running = false;
+
+	sfc_log_init(sa, "done");
+}
+
+int
+sfc_mae_counter_start(struct sfc_adapter *sa)
+{
+	struct sfc_mae *mae = &sa->mae;
+	uint32_t flags;
+	int rc;
+
+	SFC_ASSERT(sa->counter_rxq.state & SFC_COUNTER_RXQ_ATTACHED);
+
+	if (mae->counter_rxq_running)
+		return 0;
+
+	sfc_log_init(sa, "entry");
+
+	rc = efx_mae_counters_stream_start(sa->nic, sa->counter_rxq.sw_index,
+					   SFC_MAE_COUNTER_STREAM_PACKET_SIZE,
+					   0 /* No flags required */, &flags);
+	if (rc != 0) {
+		sfc_err(sa, "failed to start MAE counters stream: %s",
+			rte_strerror(rc));
+		goto fail_counter_stream;
+	}
+
+	sfc_log_init(sa, "stream start flags: 0x%x", flags);
+
+	rc = sfc_mae_counter_service_register(sa, flags);
+	if (rc != 0)
+		goto fail_service_register;
+
+	mae->counter_rxq_running = true;
+
+	return 0;
+
+fail_service_register:
+	efx_mae_counters_stream_stop(sa->nic, sa->counter_rxq.sw_index, NULL);
+
+fail_counter_stream:
+	sfc_log_init(sa, "failed: %s", rte_strerror(rc));
+
+	return rc;
+}
diff --git a/drivers/net/sfc/sfc_mae_counter.h b/drivers/net/sfc/sfc_mae_counter.h
index f16d64a999..f61a6b59cb 100644
--- a/drivers/net/sfc/sfc_mae_counter.h
+++ b/drivers/net/sfc/sfc_mae_counter.h
@@ -38,6 +38,17 @@ void sfc_mae_counter_rxq_detach(struct sfc_adapter *sa);
 int sfc_mae_counter_rxq_init(struct sfc_adapter *sa);
 void sfc_mae_counter_rxq_fini(struct sfc_adapter *sa);
 
+int sfc_mae_counters_init(struct sfc_mae_counters *counters,
+			  uint32_t nb_counters_max);
+void sfc_mae_counters_fini(struct sfc_mae_counters *counters);
+int sfc_mae_counter_enable(struct sfc_adapter *sa,
+			   struct sfc_mae_counter_id *counterp);
+int sfc_mae_counter_disable(struct sfc_adapter *sa,
+			    struct sfc_mae_counter_id *counter);
+
+int sfc_mae_counter_start(struct sfc_adapter *sa);
+void sfc_mae_counter_stop(struct sfc_adapter *sa);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/drivers/net/sfc/sfc_stats.h b/drivers/net/sfc/sfc_stats.h
new file mode 100644
index 0000000000..2d7ab71f14
--- /dev/null
+++ b/drivers/net/sfc/sfc_stats.h
@@ -0,0 +1,80 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ *
+ * Copyright(c) 2019-2021 Xilinx, Inc.
+ * Copyright(c) 2019 Solarflare Communications Inc.
+ *
+ * This software was jointly developed between OKTET Labs (under contract
+ * for Solarflare) and Solarflare Communications, Inc.
+ */
+
+#ifndef _SFC_STATS_H
+#define _SFC_STATS_H
+
+#include <stdint.h>
+
+#include <rte_atomic.h>
+
+#include "sfc_tweak.h"
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/**
+ * 64-bit packets and bytes counters covered by 128-bit integer
+ * in order to do atomic updates to guarantee consistency if
+ * required.
+ */
+union sfc_pkts_bytes {
+	RTE_STD_C11
+	struct {
+		uint64_t		pkts;
+		uint64_t		bytes;
+	};
+	rte_int128_t			pkts_bytes;
+};
+
+/**
+ * Update packet and byte counters atomically on the assumption that
+ * the counter is written from one core only.
+ */
+static inline void
+sfc_pkts_bytes_add(union sfc_pkts_bytes *st, uint64_t pkts, uint64_t bytes)
+{
+#if SFC_SW_STATS_ATOMIC
+	union sfc_pkts_bytes result;
+
+	/* Stats are written on single core only, so just load values */
+	result.pkts = st->pkts + pkts;
+	result.bytes = st->bytes + bytes;
+
+	/*
+	 * Store the result atomically to guarantee that the reader
+	 * core sees both counter updates together.
+	 */
+	__atomic_store_n(&st->pkts_bytes.int128, result.pkts_bytes.int128,
+			 __ATOMIC_RELEASE);
+#else
+	st->pkts += pkts;
+	st->bytes += bytes;
+#endif
+}
+
+/**
+ * Get an atomic copy of the packet and byte counters.
+ */
+static inline void
+sfc_pkts_bytes_get(const union sfc_pkts_bytes *st, union sfc_pkts_bytes *result)
+{
+#if SFC_SW_STATS_ATOMIC
+	result->pkts_bytes.int128 = __atomic_load_n(&st->pkts_bytes.int128,
+						    __ATOMIC_ACQUIRE);
+#else
+	*result = *st;
+#endif
+}
+
+#ifdef __cplusplus
+}
+#endif
+#endif /* _SFC_STATS_H */
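
For context, a minimal usage sketch (not part of the patch; the example_*
names are illustrative) of the single-writer/single-reader pairing the
helpers above are written for:

#include <inttypes.h>
#include <stdio.h>

#include "sfc_stats.h"

static union sfc_pkts_bytes example_stats;

/* Datapath thread: the only writer of the counter. */
static void
example_on_packet(uint64_t nb_bytes)
{
	sfc_pkts_bytes_add(&example_stats, 1, nb_bytes);
}

/* Query thread: reads a pkts/bytes snapshot (consistent only when
 * SFC_SW_STATS_ATOMIC is enabled).
 */
static void
example_report(void)
{
	union sfc_pkts_bytes snap;

	sfc_pkts_bytes_get(&example_stats, &snap);
	printf("pkts=%" PRIu64 " bytes=%" PRIu64 "\n", snap.pkts, snap.bytes);
}
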
diff --git a/drivers/net/sfc/sfc_tweak.h b/drivers/net/sfc/sfc_tweak.h
index f2d8701421..d09c7a3125 100644
--- a/drivers/net/sfc/sfc_tweak.h
+++ b/drivers/net/sfc/sfc_tweak.h
@@ -42,4 +42,13 @@
  */
 #define SFC_RXD_WAIT_TIMEOUT_NS_DEF	(200U * 1000)
 
+/**
+ * Ideally, reading the packet and byte counters together should return
+ * consistent values, i.e. the number of bytes corresponds to the number
+ * of packets. Since counters are updated in one thread and queried in
+ * another, this requires either locking or atomics, which are very
+ * expensive from a performance point of view. So it is disabled by default.
+ */
+#define SFC_SW_STATS_ATOMIC		0
+
 #endif /* _SFC_TWEAK_H_ */
-- 
2.30.2


^ permalink raw reply	[flat|nested] 104+ messages in thread

* [dpdk-dev] [PATCH v3 20/20] net/sfc: support flow API query for count actions
  2021-06-18 13:40 ` [dpdk-dev] [PATCH v3 " Andrew Rybchenko
                     ` (18 preceding siblings ...)
  2021-06-18 13:40   ` [dpdk-dev] [PATCH v3 19/20] net/sfc: support flow action COUNT in transfer rules Andrew Rybchenko
@ 2021-06-18 13:40   ` Andrew Rybchenko
  19 siblings, 0 replies; 104+ messages in thread
From: Andrew Rybchenko @ 2021-06-18 13:40 UTC (permalink / raw)
  To: dev; +Cc: David Marchand, Igor Romanov, Andy Moreton, Ivan Malov

From: Igor Romanov <igor.romanov@oktetlabs.ru>

The query reports the number of hits for a counter associated
with a flow rule.
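
A minimal application-side sketch (not part of this patch; the helper name
below is illustrative) of how such a query is issued through the generic
rte_flow API:

#include <inttypes.h>
#include <stdio.h>

#include <rte_flow.h>

/* Query the first COUNT action of an already created flow rule. */
static int
example_query_flow_hits(uint16_t port_id, struct rte_flow *flow)
{
	struct rte_flow_query_count data = { .reset = 0 };
	const struct rte_flow_action action = {
		.type = RTE_FLOW_ACTION_TYPE_COUNT,
		.conf = NULL, /* NULL selects the first COUNT action */
	};
	struct rte_flow_error error;
	int ret;

	ret = rte_flow_query(port_id, flow, &action, &data, &error);
	if (ret != 0)
		return ret;

	if (data.hits_set && data.bytes_set)
		printf("hits=%" PRIu64 " bytes=%" PRIu64 "\n",
		       data.hits, data.bytes);

	return 0;
}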

Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
Reviewed-by: Ivan Malov <ivan.malov@oktetlabs.ru>
---
 drivers/net/sfc/sfc_flow.c        | 48 ++++++++++++++++++++++-
 drivers/net/sfc/sfc_flow.h        |  6 +++
 drivers/net/sfc/sfc_mae.c         | 64 +++++++++++++++++++++++++++++++
 drivers/net/sfc/sfc_mae.h         |  1 +
 drivers/net/sfc/sfc_mae_counter.c | 32 ++++++++++++++++
 drivers/net/sfc/sfc_mae_counter.h |  3 ++
 6 files changed, 153 insertions(+), 1 deletion(-)

diff --git a/drivers/net/sfc/sfc_flow.c b/drivers/net/sfc/sfc_flow.c
index 1294dbd3a7..af7f5df4bf 100644
--- a/drivers/net/sfc/sfc_flow.c
+++ b/drivers/net/sfc/sfc_flow.c
@@ -32,6 +32,7 @@ struct sfc_flow_ops_by_spec {
 	sfc_flow_cleanup_cb_t	*cleanup;
 	sfc_flow_insert_cb_t	*insert;
 	sfc_flow_remove_cb_t	*remove;
+	sfc_flow_query_cb_t	*query;
 };
 
 static sfc_flow_parse_cb_t sfc_flow_parse_rte_to_filter;
@@ -45,6 +46,7 @@ static const struct sfc_flow_ops_by_spec sfc_flow_ops_filter = {
 	.cleanup = NULL,
 	.insert = sfc_flow_filter_insert,
 	.remove = sfc_flow_filter_remove,
+	.query = NULL,
 };
 
 static const struct sfc_flow_ops_by_spec sfc_flow_ops_mae = {
@@ -53,6 +55,7 @@ static const struct sfc_flow_ops_by_spec sfc_flow_ops_mae = {
 	.cleanup = sfc_mae_flow_cleanup,
 	.insert = sfc_mae_flow_insert,
 	.remove = sfc_mae_flow_remove,
+	.query = sfc_mae_flow_query,
 };
 
 static const struct sfc_flow_ops_by_spec *
@@ -2788,6 +2791,49 @@ sfc_flow_flush(struct rte_eth_dev *dev,
 	return -ret;
 }
 
+static int
+sfc_flow_query(struct rte_eth_dev *dev,
+	       struct rte_flow *flow,
+	       const struct rte_flow_action *action,
+	       void *data,
+	       struct rte_flow_error *error)
+{
+	struct sfc_adapter *sa = sfc_adapter_by_eth_dev(dev);
+	const struct sfc_flow_ops_by_spec *ops;
+	int ret;
+
+	sfc_adapter_lock(sa);
+
+	ops = sfc_flow_get_ops_by_spec(flow);
+	if (ops == NULL || ops->query == NULL) {
+		ret = rte_flow_error_set(error, ENOTSUP,
+			RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+			"No backend to handle this flow");
+		goto fail_no_backend;
+	}
+
+	if (sa->state != SFC_ADAPTER_STARTED) {
+		ret = rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+			"Can't query the flow: the adapter is not started");
+		goto fail_not_started;
+	}
+
+	ret = ops->query(dev, flow, action, data, error);
+	if (ret != 0)
+		goto fail_query;
+
+	sfc_adapter_unlock(sa);
+
+	return 0;
+
+fail_query:
+fail_not_started:
+fail_no_backend:
+	sfc_adapter_unlock(sa);
+	return ret;
+}
+
 static int
 sfc_flow_isolate(struct rte_eth_dev *dev, int enable,
 		 struct rte_flow_error *error)
@@ -2814,7 +2860,7 @@ const struct rte_flow_ops sfc_flow_ops = {
 	.create = sfc_flow_create,
 	.destroy = sfc_flow_destroy,
 	.flush = sfc_flow_flush,
-	.query = NULL,
+	.query = sfc_flow_query,
 	.isolate = sfc_flow_isolate,
 };
 
diff --git a/drivers/net/sfc/sfc_flow.h b/drivers/net/sfc/sfc_flow.h
index bd3b374d68..99e5cf9cff 100644
--- a/drivers/net/sfc/sfc_flow.h
+++ b/drivers/net/sfc/sfc_flow.h
@@ -181,6 +181,12 @@ typedef int (sfc_flow_insert_cb_t)(struct sfc_adapter *sa,
 typedef int (sfc_flow_remove_cb_t)(struct sfc_adapter *sa,
 				   struct rte_flow *flow);
 
+typedef int (sfc_flow_query_cb_t)(struct rte_eth_dev *dev,
+				  struct rte_flow *flow,
+				  const struct rte_flow_action *action,
+				  void *data,
+				  struct rte_flow_error *error);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/drivers/net/sfc/sfc_mae.c b/drivers/net/sfc/sfc_mae.c
index c3efd5b407..a4eab30dec 100644
--- a/drivers/net/sfc/sfc_mae.c
+++ b/drivers/net/sfc/sfc_mae.c
@@ -3187,3 +3187,67 @@ sfc_mae_flow_remove(struct sfc_adapter *sa,
 
 	return 0;
 }
+
+static int
+sfc_mae_query_counter(struct sfc_adapter *sa,
+		      struct sfc_flow_spec_mae *spec,
+		      const struct rte_flow_action *action,
+		      struct rte_flow_query_count *data,
+		      struct rte_flow_error *error)
+{
+	struct sfc_mae_action_set *action_set = spec->action_set;
+	const struct rte_flow_action_count *conf = action->conf;
+	unsigned int i;
+	int rc;
+
+	if (action_set->n_counters == 0) {
+		return rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ACTION, action,
+			"Queried flow rule does not have count actions");
+	}
+
+	for (i = 0; i < action_set->n_counters; i++) {
+		/*
+		 * Get the first available counter of the flow rule if
+		 * counter ID is not specified.
+		 */
+		if (conf != NULL && action_set->counters[i].rte_id != conf->id)
+			continue;
+
+		rc = sfc_mae_counter_get(&sa->mae.counter_registry.counters,
+					 &action_set->counters[i], data);
+		if (rc != 0) {
+			return rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ACTION, action,
+				"Queried flow rule counter action is invalid");
+		}
+
+		return 0;
+	}
+
+	return rte_flow_error_set(error, ENOENT,
+				  RTE_FLOW_ERROR_TYPE_ACTION, action,
+				  "No such flow rule action count ID");
+}
+
+int
+sfc_mae_flow_query(struct rte_eth_dev *dev,
+		   struct rte_flow *flow,
+		   const struct rte_flow_action *action,
+		   void *data,
+		   struct rte_flow_error *error)
+{
+	struct sfc_adapter *sa = sfc_adapter_by_eth_dev(dev);
+	struct sfc_flow_spec *spec = &flow->spec;
+	struct sfc_flow_spec_mae *spec_mae = &spec->mae;
+
+	switch (action->type) {
+	case RTE_FLOW_ACTION_TYPE_COUNT:
+		return sfc_mae_query_counter(sa, spec_mae, action,
+					     data, error);
+	default:
+		return rte_flow_error_set(error, ENOTSUP,
+			RTE_FLOW_ERROR_TYPE_ACTION, NULL,
+			"Query for action of this type is not supported");
+	}
+}
diff --git a/drivers/net/sfc/sfc_mae.h b/drivers/net/sfc/sfc_mae.h
index 2cc4334890..6bfc8afb82 100644
--- a/drivers/net/sfc/sfc_mae.h
+++ b/drivers/net/sfc/sfc_mae.h
@@ -291,6 +291,7 @@ int sfc_mae_rule_parse_actions(struct sfc_adapter *sa,
 sfc_flow_verify_cb_t sfc_mae_flow_verify;
 sfc_flow_insert_cb_t sfc_mae_flow_insert;
 sfc_flow_remove_cb_t sfc_mae_flow_remove;
+sfc_flow_query_cb_t sfc_mae_flow_query;
 
 #ifdef __cplusplus
 }
diff --git a/drivers/net/sfc/sfc_mae_counter.c b/drivers/net/sfc/sfc_mae_counter.c
index b0cb8157aa..5afd450a11 100644
--- a/drivers/net/sfc/sfc_mae_counter.c
+++ b/drivers/net/sfc/sfc_mae_counter.c
@@ -793,3 +793,35 @@ sfc_mae_counter_start(struct sfc_adapter *sa)
 
 	return rc;
 }
+
+int
+sfc_mae_counter_get(struct sfc_mae_counters *counters,
+		    const struct sfc_mae_counter_id *counter,
+		    struct rte_flow_query_count *data)
+{
+	struct sfc_mae_counter *p;
+	union sfc_pkts_bytes value;
+
+	SFC_ASSERT(counter->mae_id.id < counters->n_mae_counters);
+	p = &counters->mae_counters[counter->mae_id.id];
+
+	/*
+	 * Ordering is relaxed since it is the only operation on counter value.
+	 * And it does not depend on different stores/loads in other threads.
+	 * Paired with relaxed ordering in counter increment.
+	 */
+	value.pkts_bytes.int128 = __atomic_load_n(&p->value.pkts_bytes.int128,
+						  __ATOMIC_RELAXED);
+
+	data->hits_set = 1;
+	data->bytes_set = 1;
+	data->hits = value.pkts - p->reset.pkts;
+	data->bytes = value.bytes - p->reset.bytes;
+
+	if (data->reset != 0) {
+		p->reset.pkts = value.pkts;
+		p->reset.bytes = value.bytes;
+	}
+
+	return 0;
+}
diff --git a/drivers/net/sfc/sfc_mae_counter.h b/drivers/net/sfc/sfc_mae_counter.h
index f61a6b59cb..2c953c2968 100644
--- a/drivers/net/sfc/sfc_mae_counter.h
+++ b/drivers/net/sfc/sfc_mae_counter.h
@@ -45,6 +45,9 @@ int sfc_mae_counter_enable(struct sfc_adapter *sa,
 			   struct sfc_mae_counter_id *counterp);
 int sfc_mae_counter_disable(struct sfc_adapter *sa,
 			    struct sfc_mae_counter_id *counter);
+int sfc_mae_counter_get(struct sfc_mae_counters *counters,
+			const struct sfc_mae_counter_id *counter,
+			struct rte_flow_query_count *data);
 
 int sfc_mae_counter_start(struct sfc_adapter *sa);
 void sfc_mae_counter_stop(struct sfc_adapter *sa);
-- 
2.30.2


^ permalink raw reply	[flat|nested] 104+ messages in thread

* Re: [dpdk-dev] [PATCH v2 00/20] net/sfc: support flow API COUNT action
  2021-06-17  8:37   ` [dpdk-dev] [PATCH v2 00/20] net/sfc: support flow API COUNT action David Marchand
@ 2021-06-18 13:40     ` Andrew Rybchenko
  0 siblings, 0 replies; 104+ messages in thread
From: Andrew Rybchenko @ 2021-06-18 13:40 UTC (permalink / raw)
  To: David Marchand; +Cc: dev

On 6/17/21 11:37 AM, David Marchand wrote:
> Hello Andrew,
> 
> On Fri, Jun 4, 2021 at 4:24 PM Andrew Rybchenko
> <andrew.rybchenko@oktetlabs.ru> wrote:
>>
>> Update base driver and support COUNT action in transfer flow rules.
>>
>> v2:
>>  - add release notes
>>  - add missing documentaion
>>  - fix spelling
>>  - handle query in stopped gracefully
> 
> I see build issues in the CI.
> Can you have a look?
> 
> gcc -Idrivers/libtmp_rte_net_sfc.a.p -Idrivers -I../drivers
> -Idrivers/net/sfc -I../drivers/net/sfc -Ilib/ethdev -I../lib/ethdev
> -I. -I.. -Iconfig -I../config -Ilib/eal/include -I../lib/eal/include
> -Ilib/eal/linux/include -I../lib/eal/linux/include
> -Ilib/eal/x86/include -I../lib/eal/x86/include -Ilib/eal/common
> -I../lib/eal/common -Ilib/eal -I../lib/eal -Ilib/kvargs
> -I../lib/kvargs -Ilib/metrics -I../lib/metrics -Ilib/telemetry
> -I../lib/telemetry -Ilib/net -I../lib/net -Ilib/mbuf -I../lib/mbuf
> -Ilib/mempool -I../lib/mempool -Ilib/ring -I../lib/ring -Ilib/meter
> -I../lib/meter -Idrivers/bus/pci -I../drivers/bus/pci
> -I../drivers/bus/pci/linux -Ilib/pci -I../lib/pci -Idrivers/bus/vdev
> -I../drivers/bus/vdev -Idrivers/common/sfc_efx
> -I../drivers/common/sfc_efx -Idrivers/common/sfc_efx/base
> -I../drivers/common/sfc_efx/base -fdiagnostics-color=always -pipe
> -D_FILE_OFFSET_BITS=64 -Wall -Winvalid-pch -Werror -O3 -include
> rte_config.h -Wextra -Wcast-qual -Wdeprecated -Wformat
> -Wformat-nonliteral -Wformat-security -Wmissing-declarations
> -Wmissing-prototypes -Wnested-externs -Wold-style-definition
> -Wpointer-arith -Wsign-compare -Wstrict-prototypes -Wundef
> -Wwrite-strings -Wno-packed-not-aligned
> -Wno-missing-field-initializers -D_GNU_SOURCE -fPIC -march=native
> -DALLOW_EXPERIMENTAL_API -DALLOW_INTERNAL_API -Wno-format-truncation
> -Wno-strict-aliasing -Wdisabled-optimization -Waggregate-return
> -Wbad-function-cast -DRTE_LOG_DEFAULT_LOGTYPE=pmd.net.sfc -MD -MQ
> drivers/libtmp_rte_net_sfc.a.p/net_sfc_sfc_flow.c.o -MF
> drivers/libtmp_rte_net_sfc.a.p/net_sfc_sfc_flow.c.o.d -o
> drivers/libtmp_rte_net_sfc.a.p/net_sfc_sfc_flow.c.o -c
> ../drivers/net/sfc/sfc_flow.c
> ../drivers/net/sfc/sfc_flow.c: In function ‘sfc_flow_query’:
> ../drivers/net/sfc/sfc_flow.c:2815:19: error: ‘SFC_ETHDEV_STARTED’
> undeclared (first use in this function); did you mean
> ‘SFC_ADAPTER_STARTED’?
>   if (sa->state != SFC_ETHDEV_STARTED) {
>                    ^~~~~~~~~~~~~~~~~~
>                    SFC_ADAPTER_STARTED
> ../drivers/net/sfc/sfc_flow.c:2815:19: note: each undeclared
> identifier is reported only once for each function it appears in
> 
> $ git grep SFC_ETHDEV_STARTED
> drivers/net/sfc/sfc_flow.c:     if (sa->state != SFC_ETHDEV_STARTED) {

Thanks David, my bad. Quick fixup without build check before
sending. I'll fix it in v3 and send it shortly.

^ permalink raw reply	[flat|nested] 104+ messages in thread

* Re: [dpdk-dev] [PATCH v3 19/20] net/sfc: support flow action COUNT in transfer rules
  2021-06-18 13:40   ` [dpdk-dev] [PATCH v3 19/20] net/sfc: support flow action COUNT in transfer rules Andrew Rybchenko
@ 2021-06-21  8:28     ` David Marchand
  2021-06-21  9:30       ` Thomas Monjalon
  0 siblings, 1 reply; 104+ messages in thread
From: David Marchand @ 2021-06-21  8:28 UTC (permalink / raw)
  To: Andrew Rybchenko, Bruce Richardson
  Cc: dev, Igor Romanov, Andy Moreton, Ivan Malov

On Fri, Jun 18, 2021 at 3:41 PM Andrew Rybchenko
<andrew.rybchenko@oktetlabs.ru> wrote:
> diff --git a/drivers/net/sfc/meson.build b/drivers/net/sfc/meson.build
> index f8880f740a..32b58e3d76 100644
> --- a/drivers/net/sfc/meson.build
> +++ b/drivers/net/sfc/meson.build
> @@ -39,6 +39,16 @@ foreach flag: extra_flags
>      endif
>  endforeach
>
> +# for clang 32-bit compiles we need libatomic for 64-bit atomic ops
> +if cc.get_id() == 'clang' and dpdk_conf.get('RTE_ARCH_64') == false
> +    ext_deps += cc.find_library('atomic')
> +endif

I don't think this block is needed.
The atomic library is globally required in config/meson.build for the
clang + 32bits case.


> +
> +# for gcc compiles we need -latomic for 128-bit atomic ops
> +if cc.get_id() == 'gcc'
> +    ext_deps += cc.find_library('atomic')
> +endif
> +

This patch breaks compilation on rhel/fedora (most failures in UNH for
this series are linked to this issue) when the libatomic rpm is not
installed.
ninja: Entering directory `/home/dmarchan/builds/build-gcc-static'
[1/18] Linking target drivers/librte_net_sfc.so.21.3
FAILED: drivers/librte_net_sfc.so.21.3
gcc  -o drivers/librte_net_sfc.so.21.3
drivers/librte_net_sfc.so.21.3.p/meson-generated_.._rte_net_sfc.pmd.c.o
drivers/libtmp_rte_net_sfc.a.p/net_sfc_sfc_ethdev.c.o
drivers/libtmp_rte_net_sfc.a.p/net_sfc_sfc_kvargs.c.o
drivers/libtmp_rte_net_sfc.a.p/net_sfc_sfc.c.o
drivers/libtmp_rte_net_sfc.a.p/net_sfc_sfc_mcdi.c.o
drivers/libtmp_rte_net_sfc.a.p/net_sfc_sfc_sriov.c.o
drivers/libtmp_rte_net_sfc.a.p/net_sfc_sfc_intr.c.o
drivers/libtmp_rte_net_sfc.a.p/net_sfc_sfc_ev.c.o
drivers/libtmp_rte_net_sfc.a.p/net_sfc_sfc_port.c.o
drivers/libtmp_rte_net_sfc.a.p/net_sfc_sfc_rx.c.o
drivers/libtmp_rte_net_sfc.a.p/net_sfc_sfc_tx.c.o
drivers/libtmp_rte_net_sfc.a.p/net_sfc_sfc_tso.c.o
drivers/libtmp_rte_net_sfc.a.p/net_sfc_sfc_filter.c.o
drivers/libtmp_rte_net_sfc.a.p/net_sfc_sfc_switch.c.o
drivers/libtmp_rte_net_sfc.a.p/net_sfc_sfc_mae.c.o
drivers/libtmp_rte_net_sfc.a.p/net_sfc_sfc_mae_counter.c.o
drivers/libtmp_rte_net_sfc.a.p/net_sfc_sfc_flow.c.o
drivers/libtmp_rte_net_sfc.a.p/net_sfc_sfc_dp.c.o
drivers/libtmp_rte_net_sfc.a.p/net_sfc_sfc_ef10_rx.c.o
drivers/libtmp_rte_net_sfc.a.p/net_sfc_sfc_ef10_essb_rx.c.o
drivers/libtmp_rte_net_sfc.a.p/net_sfc_sfc_ef10_tx.c.o
drivers/libtmp_rte_net_sfc.a.p/net_sfc_sfc_ef100_rx.c.o
drivers/libtmp_rte_net_sfc.a.p/net_sfc_sfc_ef100_tx.c.o
drivers/libtmp_rte_net_sfc.a.p/net_sfc_sfc_service.c.o
-I/home/dmarchan/intel-ipsec-mb/install/include
-L/home/dmarchan/intel-ipsec-mb/install/lib -Wl,--as-needed
-Wl,--no-undefined -shared -fPIC -Wl,--start-group
-Wl,-soname,librte_net_sfc.so.21 -Wl,--no-as-needed -pthread -lm -ldl
-lnuma -lfdt lib/librte_ethdev.so.21.3 lib/librte_eal.so.21.3
lib/librte_kvargs.so.21.3 lib/librte_telemetry.so.21.3
lib/librte_net.so.21.3 lib/librte_mbuf.so.21.3
lib/librte_mempool.so.21.3 lib/librte_ring.so.21.3
lib/librte_meter.so.21.3 drivers/librte_bus_pci.so.21.3
lib/librte_pci.so.21.3 drivers/librte_bus_vdev.so.21.3
drivers/librte_common_sfc_efx.so.21.3
-Wl,--version-script=/home/dmarchan/dpdk/drivers/net/sfc/version.map
/usr/lib/gcc/x86_64-redhat-linux/10/libatomic.so /usr/lib64/libbsd.so
-Wl,--end-group '-Wl,-rpath,$ORIGIN/../lib:$ORIGIN/'
-Wl,-rpath-link,/home/dmarchan/builds/build-gcc-static/lib
-Wl,-rpath-link,/home/dmarchan/builds/build-gcc-static/drivers
/usr/bin/ld: cannot find /usr/lib64/libatomic.so.1.2.0
collect2: error: ld returned 1 exit status


It seems meson related.
I do see:
Library atomic found: YES
Message: drivers/net/sfc: Defining dependency "net_sfc"


But looking at /home/dmarchan/build/build-gcc-static/meson-logs/meson-log.txt:
"""
Running compile:
Working directory:
/home/dmarchan/builds/build-gcc-static/meson-private/tmpdu27j15z
Command line:  gcc -L/home/dmarchan/intel-ipsec-mb/install/lib
-I/home/dmarchan/intel-ipsec-mb/install/include
/home/dmarchan/builds/build-gcc-static/meson-private/tmpdu27j15z/testfile.c
-o /home/dmarchan/builds/build-gcc-static/meson-private/tmpdu27j15z/output.exe
-pipe -D_FILE_OFFSET_BITS=64 -O0 -Wl,--start-group -latomic
-Wl,--end-group -Wl,--allow-shlib-undefined

Code:
 int main(void) { return 0; }

Compiler stdout:

Compiler stderr:
 /usr/bin/ld: cannot find /usr/lib64/libatomic.so.1.2.0
collect2: error: ld returned 1 exit status

Library atomic found: YES
"""


And:

[dmarchan@wsfd-netdev66 dpdk]$ cat
/usr/lib/gcc/x86_64-redhat-linux/10/libatomic.so
INPUT ( /usr/lib64/libatomic.so.1.2.0 )
[dmarchan@wsfd-netdev66 dpdk]$ file /usr/lib64/libatomic.so.1.2.0
/usr/lib64/libatomic.so.1.2.0: cannot open
`/usr/lib64/libatomic.so.1.2.0' (No such file or directory)

[dmarchan@wsfd-netdev66 dpdk]$ meson --version
0.55.3


-- 
David Marchand


^ permalink raw reply	[flat|nested] 104+ messages in thread

* Re: [dpdk-dev] [PATCH v3 19/20] net/sfc: support flow action COUNT in transfer rules
  2021-06-21  8:28     ` David Marchand
@ 2021-06-21  9:30       ` Thomas Monjalon
  2021-07-01  9:22         ` Andrew Rybchenko
  0 siblings, 1 reply; 104+ messages in thread
From: Thomas Monjalon @ 2021-06-21  9:30 UTC (permalink / raw)
  To: Andrew Rybchenko, Bruce Richardson
  Cc: dev, Igor Romanov, Andy Moreton, Ivan Malov, David Marchand

21/06/2021 10:28, David Marchand:
> On Fri, Jun 18, 2021 at 3:41 PM Andrew Rybchenko
> > +# for gcc compiles we need -latomic for 128-bit atomic ops
> > +if cc.get_id() == 'gcc'
> > +    ext_deps += cc.find_library('atomic')
> > +endif
> 
> This patch breaks compilation on rhel/fedora (most failures in UNH for
> this series are linked to this issue) when the libatomic rpm is not
> installed.
[...]
> """
> Running compile:
> Working directory:
> /home/dmarchan/builds/build-gcc-static/meson-private/tmpdu27j15z
> Command line:  gcc -L/home/dmarchan/intel-ipsec-mb/install/lib
> -I/home/dmarchan/intel-ipsec-mb/install/include
> /home/dmarchan/builds/build-gcc-static/meson-private/tmpdu27j15z/testfile.c
> -o /home/dmarchan/builds/build-gcc-static/meson-private/tmpdu27j15z/output.exe
> -pipe -D_FILE_OFFSET_BITS=64 -O0 -Wl,--start-group -latomic
> -Wl,--end-group -Wl,--allow-shlib-undefined
> 
> Code:
>  int main(void) { return 0; }
> 
> Compiler stdout:
> 
> Compiler stderr:
>  /usr/bin/ld: cannot find /usr/lib64/libatomic.so.1.2.0
> collect2: error: ld returned 1 exit status
> 
> Library atomic found: YES
> """

Indeed it looks like a bug in meson.

How does it behave with clang 32-bit?

For reference, in config/meson.build:
"""
# for clang 32-bit compiles we need libatomic for 64-bit atomic ops
if cc.get_id() == 'clang' and dpdk_conf.get('RTE_ARCH_64') == false
    atomic_dep = cc.find_library('atomic', required: true)
    add_project_link_arguments('-latomic', language: 'c')
    dpdk_extra_ldflags += '-latomic'
endif
"""

> [dmarchan@wsfd-netdev66 dpdk]$ cat
> /usr/lib/gcc/x86_64-redhat-linux/10/libatomic.so
> INPUT ( /usr/lib64/libatomic.so.1.2.0 )
> [dmarchan@wsfd-netdev66 dpdk]$ file /usr/lib64/libatomic.so.1.2.0
> /usr/lib64/libatomic.so.1.2.0: cannot open
> `/usr/lib64/libatomic.so.1.2.0' (No such file or directory)

We must handle this case where libatomic is not completely installed.

Hope there is a good fix possible.



^ permalink raw reply	[flat|nested] 104+ messages in thread

* Re: [dpdk-dev] [PATCH v3 19/20] net/sfc: support flow action COUNT in transfer rules
  2021-06-21  9:30       ` Thomas Monjalon
@ 2021-07-01  9:22         ` Andrew Rybchenko
  2021-07-01 12:34           ` David Marchand
  0 siblings, 1 reply; 104+ messages in thread
From: Andrew Rybchenko @ 2021-07-01  9:22 UTC (permalink / raw)
  To: Thomas Monjalon, Bruce Richardson
  Cc: dev, Igor Romanov, Andy Moreton, Ivan Malov, David Marchand

On 6/21/21 12:30 PM, Thomas Monjalon wrote:
> 21/06/2021 10:28, David Marchand:
>> On Fri, Jun 18, 2021 at 3:41 PM Andrew Rybchenko
>>> +# for gcc compiles we need -latomic for 128-bit atomic ops
>>> +if cc.get_id() == 'gcc'
>>> +    ext_deps += cc.find_library('atomic')
>>> +endif
>>
>> This patch breaks compilation on rhel/fedora (most failures in UNH for
>> this series are linked to this issue) when the libatomic rpm is not
>> installed.
> [...]
>> """
>> Running compile:
>> Working directory:
>> /home/dmarchan/builds/build-gcc-static/meson-private/tmpdu27j15z
>> Command line:  gcc -L/home/dmarchan/intel-ipsec-mb/install/lib
>> -I/home/dmarchan/intel-ipsec-mb/install/include
>> /home/dmarchan/builds/build-gcc-static/meson-private/tmpdu27j15z/testfile.c
>> -o /home/dmarchan/builds/build-gcc-static/meson-private/tmpdu27j15z/output.exe
>> -pipe -D_FILE_OFFSET_BITS=64 -O0 -Wl,--start-group -latomic
>> -Wl,--end-group -Wl,--allow-shlib-undefined
>>
>> Code:
>>  int main(void) { return 0; }
>>
>> Compiler stdout:
>>
>> Compiler stderr:
>>  /usr/bin/ld: cannot find /usr/lib64/libatomic.so.1.2.0
>> collect2: error: ld returned 1 exit status
>>
>> Library atomic found: YES
>> """
> 
> Indeed it looks like a bug in meson.
> 
> How does it behave with clang 32-bit?
> 
> For reference, in config/meson.build:
> """
> # for clang 32-bit compiles we need libatomic for 64-bit atomic ops
> if cc.get_id() == 'clang' and dpdk_conf.get('RTE_ARCH_64') == false
>     atomic_dep = cc.find_library('atomic', required: true)
>     add_project_link_arguments('-latomic', language: 'c')
>     dpdk_extra_ldflags += '-latomic'
> endif
> """
> 
>> [dmarchan@wsfd-netdev66 dpdk]$ cat
>> /usr/lib/gcc/x86_64-redhat-linux/10/libatomic.so
>> INPUT ( /usr/lib64/libatomic.so.1.2.0 )
>> [dmarchan@wsfd-netdev66 dpdk]$ file /usr/lib64/libatomic.so.1.2.0
>> /usr/lib64/libatomic.so.1.2.0: cannot open
>> `/usr/lib64/libatomic.so.1.2.0' (No such file or directory)
> 
> We must handle this case where libatomic is not completely installed.
> 
> Hope there is a good fix possible.
> 

The build works fine for me on FC34, but it has
libatomic-11.1.1-3.fc34.x86_64 installed.

I'd like to understand what we're trying to solve here.
Are we trying to make meson to report the missing library
correctly?

If so, I think I can do simple check using cc.links()
which will fail if the library is not found. I'll
test that it works as expected if the library is not
completely installed.

^ permalink raw reply	[flat|nested] 104+ messages in thread

* Re: [dpdk-dev] [PATCH v3 19/20] net/sfc: support flow action COUNT in transfer rules
  2021-07-01  9:22         ` Andrew Rybchenko
@ 2021-07-01 12:34           ` David Marchand
  2021-07-01 13:05             ` Andrew Rybchenko
  0 siblings, 1 reply; 104+ messages in thread
From: David Marchand @ 2021-07-01 12:34 UTC (permalink / raw)
  To: Andrew Rybchenko, Bruce Richardson
  Cc: Thomas Monjalon, dev, Igor Romanov, Andy Moreton, Ivan Malov

On Thu, Jul 1, 2021 at 11:22 AM Andrew Rybchenko
<andrew.rybchenko@oktetlabs.ru> wrote:
> The build works fine for me on FC34, but it has
> libatomic-11.1.1-3.fc34.x86_64 installed.

I first produced the issue on my "old" FC32.
Afaics, for FC33 and later, gcc now depends on libatomic and the
problem won't be noticed.
FC32 and before are EOL, but I then reproduced the issue on RHEL 8
(and Intel CI reported it on Centos 8 too).


>
> I'd like to understand what we're trying to solve here.
> Are we trying to make meson to report the missing library
> correctly?
>
> If so, I think I can do simple check using cc.links()
> which will fail if the library is not found. I'll
> test that it works as expected if the library is not
> completely installed.
>

I tried below diff, and it works for me.
"works" as in net/sfc gets disabled without libatomic installed:

diff --git a/drivers/net/sfc/meson.build b/drivers/net/sfc/meson.build
index 32b58e3d76..8d62aad774 100644
--- a/drivers/net/sfc/meson.build
+++ b/drivers/net/sfc/meson.build
@@ -15,6 +15,7 @@ endif
 if (arch_subdir != 'x86' or not dpdk_conf.get('RTE_ARCH_64')) and
(arch_subdir != 'arm' or not
host_machine.cpu_family().startswith('aarch64'))
     build = false
     reason = 'only supported on x86_64 and aarch64'
+    subdir_done()
 endif

 extra_flags = []
@@ -46,6 +47,14 @@ endif

 # for gcc compiles we need -latomic for 128-bit atomic ops
 if cc.get_id() == 'gcc'
+    code = '''#include <stdio.h>
+    void main() { printf("Atomilink me.\n"); }
+    '''
+    if not cc.links(code, args: '-latomic', name: 'libatomic link check')
+        build = false
+        reason = 'missing dependency, "libatomic"'
+        subdir_done()
+    endif
     ext_deps += cc.find_library('atomic')
 endif



-- 
David Marchand


^ permalink raw reply	[flat|nested] 104+ messages in thread

* Re: [dpdk-dev] [PATCH v3 19/20] net/sfc: support flow action COUNT in transfer rules
  2021-07-01 12:34           ` David Marchand
@ 2021-07-01 13:05             ` Andrew Rybchenko
  2021-07-01 13:35               ` Bruce Richardson
  2021-07-02  8:43               ` Andrew Rybchenko
  0 siblings, 2 replies; 104+ messages in thread
From: Andrew Rybchenko @ 2021-07-01 13:05 UTC (permalink / raw)
  To: David Marchand, Bruce Richardson
  Cc: Thomas Monjalon, dev, Igor Romanov, Andy Moreton, Ivan Malov

@Bruce, see below.

On 7/1/21 3:34 PM, David Marchand wrote:
> On Thu, Jul 1, 2021 at 11:22 AM Andrew Rybchenko
> <andrew.rybchenko@oktetlabs.ru> wrote:
>> The build works fine for me on FC34, but it has
>> libatomic-11.1.1-3.fc34.x86_64 installed.
> 
> I first produced the issue on my "old" FC32.
> Afaics, for FC33 and later, gcc now depends on libatomic and the
> problem won't be noticed.
> FC32 and before are EOL, but I then reproduced the issue on RHEL 8
> (and Intel CI reported it on Centos 8 too).

I see. Thanks for the clarification.

>>
>> I'd like to understand what we're trying to solve here.
>> Are we trying to make meson to report the missing library
>> correctly?
>>
>> If so, I think I can do simple check using cc.links()
>> which will fail if the library is not found. I'll
>> test that it works as expected if the library is not
>> completely installed.
>>
> 
> I tried below diff, and it works for me.
> "works" as in net/sfc gets disabled without libatomic installed:
> 
> diff --git a/drivers/net/sfc/meson.build b/drivers/net/sfc/meson.build
> index 32b58e3d76..8d62aad774 100644
> --- a/drivers/net/sfc/meson.build
> +++ b/drivers/net/sfc/meson.build
> @@ -15,6 +15,7 @@ endif
>  if (arch_subdir != 'x86' or not dpdk_conf.get('RTE_ARCH_64')) and
> (arch_subdir != 'arm' or not
> host_machine.cpu_family().startswith('aarch64'))
>      build = false
>      reason = 'only supported on x86_64 and aarch64'
> +    subdir_done()

@Bruce  Shouldn't we add subdir_done() after all build = false
cases? As I understand it is OK for minimum supported meson
version.

>  endif
> 
>  extra_flags = []
> @@ -46,6 +47,14 @@ endif
> 
>  # for gcc compiles we need -latomic for 128-bit atomic ops
>  if cc.get_id() == 'gcc'
> +    code = '''#include <stdio.h>
> +    void main() { printf("Atomilink me.\n"); }
> +    '''
> +    if not cc.links(code, args: '-latomic', name: 'libatomic link check')
> +        build = false
> +        reason = 'missing dependency, "libatomic"'
> +        subdir_done()
> +    endif
>      ext_deps += cc.find_library('atomic')
>  endif

Many thanks, LGTM. I'll pick it up and add comments explaining why
it is checked this way.
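
For reference, a stripped-down sketch of the kind of construct behind that
dependency (my assumption: representative of the 128-bit accesses in
sfc_stats.h); gcc typically lowers 16-byte __atomic built-ins to libatomic
calls rather than inlining them:

static __int128 counter;

__int128
load_counter(void)
{
	/* 16-byte atomic load; usually resolved via libatomic on gcc */
	return __atomic_load_n(&counter, __ATOMIC_ACQUIRE);
}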

^ permalink raw reply	[flat|nested] 104+ messages in thread

* Re: [dpdk-dev] [PATCH v3 19/20] net/sfc: support flow action COUNT in transfer rules
  2021-07-01 13:05             ` Andrew Rybchenko
@ 2021-07-01 13:35               ` Bruce Richardson
  2021-07-02  8:03                 ` Andrew Rybchenko
  2021-07-02  8:43               ` Andrew Rybchenko
  1 sibling, 1 reply; 104+ messages in thread
From: Bruce Richardson @ 2021-07-01 13:35 UTC (permalink / raw)
  To: Andrew Rybchenko
  Cc: David Marchand, Thomas Monjalon, dev, Igor Romanov, Andy Moreton,
	Ivan Malov

On Thu, Jul 01, 2021 at 04:05:56PM +0300, Andrew Rybchenko wrote:
> @Bruce, see below.
> 
> On 7/1/21 3:34 PM, David Marchand wrote:
> > On Thu, Jul 1, 2021 at 11:22 AM Andrew Rybchenko
> > <andrew.rybchenko@oktetlabs.ru> wrote:
> >> The build works fine for me on FC34, but it has
> >> libatomic-11.1.1-3.fc34.x86_64 installed.
> > 
> > I first produced the issue on my "old" FC32.
> > Afaics, for FC33 and later, gcc now depends on libatomic and the
> > problem won't be noticed.
> > FC32 and before are EOL, but I then reproduced the issue on RHEL 8
> > (and Intel CI reported it on Centos 8 too).
> 
> I see. Thanks for the clarification.
> 
> >>
> >> I'd like to understand what we're trying to solve here.
> >> Are we trying to make meson to report the missing library
> >> correctly?
> >>
> >> If so, I think I can do simple check using cc.links()
> >> which will fail if the library is not found. I'll
> >> test that it works as expected if the library is not
> >> completely installed.
> >>
> > 
> > I tried below diff, and it works for me.
> > "works" as in net/sfc gets disabled without libatomic installed:
> > 
> > diff --git a/drivers/net/sfc/meson.build b/drivers/net/sfc/meson.build
> > index 32b58e3d76..8d62aad774 100644
> > --- a/drivers/net/sfc/meson.build
> > +++ b/drivers/net/sfc/meson.build
> > @@ -15,6 +15,7 @@ endif
> >  if (arch_subdir != 'x86' or not dpdk_conf.get('RTE_ARCH_64')) and
> > (arch_subdir != 'arm' or not
> > host_machine.cpu_family().startswith('aarch64'))
> >      build = false
> >      reason = 'only supported on x86_64 and aarch64'
> > +    subdir_done()
> 
> @Bruce  Shouldn't we add subdir_done() after all build = false
> cases? As I understand it is OK for minimum supported meson
> version.
>
We can add it, no problem. For many files it's just not necessary, since in
a lot of cases we just do assignments to variables afterward and those vars
are just ignored and unused on return. That's the only reason it wasn't
added generally before.

^ permalink raw reply	[flat|nested] 104+ messages in thread

* Re: [dpdk-dev] [PATCH v3 19/20] net/sfc: support flow action COUNT in transfer rules
  2021-07-01 13:35               ` Bruce Richardson
@ 2021-07-02  8:03                 ` Andrew Rybchenko
  0 siblings, 0 replies; 104+ messages in thread
From: Andrew Rybchenko @ 2021-07-02  8:03 UTC (permalink / raw)
  To: Bruce Richardson
  Cc: David Marchand, Thomas Monjalon, dev, Igor Romanov, Andy Moreton,
	Ivan Malov

On 7/1/21 4:35 PM, Bruce Richardson wrote:
> On Thu, Jul 01, 2021 at 04:05:56PM +0300, Andrew Rybchenko wrote:
>> @Bruce, see below.
>>
>> On 7/1/21 3:34 PM, David Marchand wrote:
>>> On Thu, Jul 1, 2021 at 11:22 AM Andrew Rybchenko
>>> <andrew.rybchenko@oktetlabs.ru> wrote:
>>>> The build works fine for me on FC34, but it has
>>>> libatomic-11.1.1-3.fc34.x86_64 installed.
>>>
>>> I first produced the issue on my "old" FC32.
>>> Afaics, for FC33 and later, gcc now depends on libatomic and the
>>> problem won't be noticed.
>>> FC32 and before are EOL, but I then reproduced the issue on RHEL 8
>>> (and Intel CI reported it on Centos 8 too).
>>
>> I see. Thanks for the clarification.
>>
>>>>
>>>> I'd like to understand what we're trying to solve here.
>>>> Are we trying to make meson to report the missing library
>>>> correctly?
>>>>
>>>> If so, I think I can do simple check using cc.links()
>>>> which will fail if the library is not found. I'll
>>>> test that it works as expected if the library is not
>>>> completely installed.
>>>>
>>>
>>> I tried below diff, and it works for me.
>>> "works" as in net/sfc gets disabled without libatomic installed:
>>>
>>> diff --git a/drivers/net/sfc/meson.build b/drivers/net/sfc/meson.build
>>> index 32b58e3d76..8d62aad774 100644
>>> --- a/drivers/net/sfc/meson.build
>>> +++ b/drivers/net/sfc/meson.build
>>> @@ -15,6 +15,7 @@ endif
>>>  if (arch_subdir != 'x86' or not dpdk_conf.get('RTE_ARCH_64')) and
>>> (arch_subdir != 'arm' or not
>>> host_machine.cpu_family().startswith('aarch64'))
>>>      build = false
>>>      reason = 'only supported on x86_64 and aarch64'
>>> +    subdir_done()
>>
>> @Bruce  Shouldn't we add subdir_done() after all build = false
>> cases? As I understand it is OK for minimum supported meson
>> version.
>>
> We can add it, no problem. For many files it's just not necessary, since in
> a lot of cases we just do assignments to variables afterward and those vars
> are just ignored and unused on return. That's the only reason it wasn't
> added generally before.

Thanks Bruce, I see the difference now.


^ permalink raw reply	[flat|nested] 104+ messages in thread

* [dpdk-dev] [PATCH v4 00/20] net/sfc: support flow API COUNT action
  2021-05-27 15:24 [dpdk-dev] [PATCH 00/20] net/sfc: support flow API COUNT action Andrew Rybchenko
                   ` (21 preceding siblings ...)
  2021-06-18 13:40 ` [dpdk-dev] [PATCH v3 " Andrew Rybchenko
@ 2021-07-02  8:39 ` Andrew Rybchenko
  2021-07-02  8:39   ` [dpdk-dev] [PATCH v4 01/20] net/sfc: introduce ethdev Rx queue ID Andrew Rybchenko
                     ` (20 more replies)
  22 siblings, 21 replies; 104+ messages in thread
From: Andrew Rybchenko @ 2021-07-02  8:39 UTC (permalink / raw)
  To: dev; +Cc: David Marchand

Update base driver and support COUNT action in transfer flow rules.

v4:
 - fix build on Fedora 32 and RHEL 8 / CentOS 8 with half-installed
   libatomic

v3:
 - fix build breakage caused by an incorrectly rebased and squashed-in
   fix

v2:
 - add release notes
 - add missing documentation
 - fix spelling
 - handle query on a stopped adapter gracefully


Andrew Rybchenko (6):
  net/sfc: do not enable interrupts on internal Rx queues
  common/sfc_efx/base: separate target EvQ and IRQ config
  common/sfc_efx/base: support custom EvQ to IRQ mapping
  net/sfc: explicitly control IRQ used for Rx queues
  net/sfc: add NUMA-aware registry of service logical cores
  common/sfc_efx/base: add packetiser packet format definition

Igor Romanov (14):
  net/sfc: introduce ethdev Rx queue ID
  net/sfc: introduce ethdev Tx queue ID
  common/sfc_efx/base: add ingress m-port RxQ flag
  common/sfc_efx/base: add user mark RxQ flag
  net/sfc: add abstractions for the management EVQ identity
  net/sfc: add support for initialising different RxQ types
  net/sfc: reserve RxQ for counters
  common/sfc_efx/base: add counter creation MCDI wrappers
  common/sfc_efx/base: add counter stream MCDI wrappers
  common/sfc_efx/base: support counter in action set
  net/sfc: add Rx datapath method to get pushed buffers count
  common/sfc_efx/base: add max MAE counters to limits
  net/sfc: support flow action COUNT in transfer rules
  net/sfc: support flow API query for count actions

 doc/guides/nics/sfc_efx.rst                   |   2 +
 doc/guides/rel_notes/release_21_08.rst        |   6 +
 drivers/common/sfc_efx/base/ef10_ev.c         |  14 +-
 drivers/common/sfc_efx/base/ef10_impl.h       |   1 +
 drivers/common/sfc_efx/base/ef10_rx.c         |  57 +-
 drivers/common/sfc_efx/base/efx.h             | 113 +++
 drivers/common/sfc_efx/base/efx_ev.c          |  39 +-
 drivers/common/sfc_efx/base/efx_impl.h        |   8 +-
 drivers/common/sfc_efx/base/efx_mae.c         | 430 ++++++++-
 drivers/common/sfc_efx/base/efx_mcdi.c        |   7 +-
 drivers/common/sfc_efx/base/efx_mcdi.h        |   7 +
 .../base/efx_regs_counters_pkt_format.h       |  87 ++
 drivers/common/sfc_efx/base/efx_rx.c          |  14 +-
 drivers/common/sfc_efx/base/rhead_ev.c        |  14 +-
 drivers/common/sfc_efx/base/rhead_impl.h      |   1 +
 drivers/common/sfc_efx/base/rhead_rx.c        |   6 +
 drivers/common/sfc_efx/version.map            |   9 +
 drivers/net/sfc/meson.build                   |  26 +
 drivers/net/sfc/sfc.c                         |  68 +-
 drivers/net/sfc/sfc.h                         |  22 +
 drivers/net/sfc/sfc_dp.h                      |   6 +
 drivers/net/sfc/sfc_dp_rx.h                   |   4 +
 drivers/net/sfc/sfc_ef100_rx.c                |  15 +
 drivers/net/sfc/sfc_ethdev.c                  | 115 ++-
 drivers/net/sfc/sfc_ev.c                      |  36 +-
 drivers/net/sfc/sfc_ev.h                      | 107 ++-
 drivers/net/sfc/sfc_flow.c                    |  77 +-
 drivers/net/sfc/sfc_flow.h                    |   6 +
 drivers/net/sfc/sfc_mae.c                     | 296 ++++++-
 drivers/net/sfc/sfc_mae.h                     |  61 ++
 drivers/net/sfc/sfc_mae_counter.c             | 827 ++++++++++++++++++
 drivers/net/sfc/sfc_mae_counter.h             |  58 ++
 drivers/net/sfc/sfc_rx.c                      | 231 +++--
 drivers/net/sfc/sfc_rx.h                      |  15 +-
 drivers/net/sfc/sfc_service.c                 |  99 +++
 drivers/net/sfc/sfc_service.h                 |  20 +
 drivers/net/sfc/sfc_stats.h                   |  80 ++
 drivers/net/sfc/sfc_tweak.h                   |   9 +
 drivers/net/sfc/sfc_tx.c                      | 164 ++--
 drivers/net/sfc/sfc_tx.h                      |  11 +-
 40 files changed, 2918 insertions(+), 250 deletions(-)
 create mode 100644 drivers/common/sfc_efx/base/efx_regs_counters_pkt_format.h
 create mode 100644 drivers/net/sfc/sfc_mae_counter.c
 create mode 100644 drivers/net/sfc/sfc_mae_counter.h
 create mode 100644 drivers/net/sfc/sfc_service.c
 create mode 100644 drivers/net/sfc/sfc_service.h
 create mode 100644 drivers/net/sfc/sfc_stats.h

-- 
2.30.2


^ permalink raw reply	[flat|nested] 104+ messages in thread

* [dpdk-dev] [PATCH v4 01/20] net/sfc: introduce ethdev Rx queue ID
  2021-07-02  8:39 ` [dpdk-dev] [PATCH v4 00/20] net/sfc: support flow API COUNT action Andrew Rybchenko
@ 2021-07-02  8:39   ` Andrew Rybchenko
  2021-07-02  8:39   ` [dpdk-dev] [PATCH v4 02/20] net/sfc: do not enable interrupts on internal Rx queues Andrew Rybchenko
                     ` (19 subsequent siblings)
  20 siblings, 0 replies; 104+ messages in thread
From: Andrew Rybchenko @ 2021-07-02  8:39 UTC (permalink / raw)
  To: dev; +Cc: David Marchand, Igor Romanov, Andy Moreton, Ivan Malov

From: Igor Romanov <igor.romanov@oktetlabs.ru>

Make the software index of an Rx queue and the ethdev queue index
separate. When an ethdev RxQ is accessed in ethdev callbacks, an
explicit ethdev queue index is used.

This is a preparation for introducing non-ethdev Rx queues.
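
Below is an illustrative fragment (not part of the diff; the callback
name is hypothetical and the usual sfc driver headers are assumed)
showing how an ethdev callback is expected to use the new helpers:
the queue ID coming from ethdev is translated into the driver
software index before the shared queue state is accessed.

static int
sfc_example_rx_queue_op(struct rte_eth_dev *dev, uint16_t ethdev_qid)
{
	struct sfc_adapter_shared *sas = sfc_adapter_shared_by_eth_dev(dev);
	struct sfc_rxq_info *rxq_info;
	sfc_sw_index_t sw_index;

	/* Translate the ethdev queue ID into the driver software index */
	sw_index = sfc_rxq_sw_index_by_ethdev_rx_qid(sas, ethdev_qid);

	/* Shared RxQ state is indexed by the software index */
	rxq_info = &sas->rxq_info[sw_index];

	return (rxq_info->state & SFC_RXQ_STARTED) ? 0 : -EINVAL;
}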

Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
Reviewed-by: Ivan Malov <ivan.malov@oktetlabs.ru>
---
 drivers/net/sfc/sfc.h        |   2 +
 drivers/net/sfc/sfc_dp.h     |   4 +
 drivers/net/sfc/sfc_ethdev.c |  69 ++++++++------
 drivers/net/sfc/sfc_ev.c     |   2 +-
 drivers/net/sfc/sfc_ev.h     |  22 ++++-
 drivers/net/sfc/sfc_flow.c   |  22 +++--
 drivers/net/sfc/sfc_rx.c     | 179 +++++++++++++++++++++++++----------
 drivers/net/sfc/sfc_rx.h     |  10 +-
 8 files changed, 215 insertions(+), 95 deletions(-)

diff --git a/drivers/net/sfc/sfc.h b/drivers/net/sfc/sfc.h
index b48a818adb..ebe705020d 100644
--- a/drivers/net/sfc/sfc.h
+++ b/drivers/net/sfc/sfc.h
@@ -29,6 +29,7 @@
 #include "sfc_filter.h"
 #include "sfc_sriov.h"
 #include "sfc_mae.h"
+#include "sfc_dp.h"
 
 #ifdef __cplusplus
 extern "C" {
@@ -168,6 +169,7 @@ struct sfc_rss {
 struct sfc_adapter_shared {
 	unsigned int			rxq_count;
 	struct sfc_rxq_info		*rxq_info;
+	unsigned int			ethdev_rxq_count;
 
 	unsigned int			txq_count;
 	struct sfc_txq_info		*txq_info;
diff --git a/drivers/net/sfc/sfc_dp.h b/drivers/net/sfc/sfc_dp.h
index 4bed137806..76065483d4 100644
--- a/drivers/net/sfc/sfc_dp.h
+++ b/drivers/net/sfc/sfc_dp.h
@@ -96,6 +96,10 @@ struct sfc_dp {
 /** List of datapath variants */
 TAILQ_HEAD(sfc_dp_list, sfc_dp);
 
+typedef unsigned int sfc_sw_index_t;
+typedef int32_t	sfc_ethdev_qid_t;
+#define SFC_ETHDEV_QID_INVALID	((sfc_ethdev_qid_t)(-1))
+
 /* Check if available HW/FW capabilities are sufficient for the datapath */
 static inline bool
 sfc_dp_match_hw_fw_caps(const struct sfc_dp *dp, unsigned int avail_caps)
diff --git a/drivers/net/sfc/sfc_ethdev.c b/drivers/net/sfc/sfc_ethdev.c
index c50ecea0b9..2651c41288 100644
--- a/drivers/net/sfc/sfc_ethdev.c
+++ b/drivers/net/sfc/sfc_ethdev.c
@@ -463,26 +463,31 @@ sfc_dev_allmulti_disable(struct rte_eth_dev *dev)
 }
 
 static int
-sfc_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
+sfc_rx_queue_setup(struct rte_eth_dev *dev, uint16_t ethdev_qid,
 		   uint16_t nb_rx_desc, unsigned int socket_id,
 		   const struct rte_eth_rxconf *rx_conf,
 		   struct rte_mempool *mb_pool)
 {
 	struct sfc_adapter_shared *sas = sfc_adapter_shared_by_eth_dev(dev);
 	struct sfc_adapter *sa = sfc_adapter_by_eth_dev(dev);
+	sfc_ethdev_qid_t sfc_ethdev_qid = ethdev_qid;
+	struct sfc_rxq_info *rxq_info;
+	sfc_sw_index_t sw_index;
 	int rc;
 
 	sfc_log_init(sa, "RxQ=%u nb_rx_desc=%u socket_id=%u",
-		     rx_queue_id, nb_rx_desc, socket_id);
+		     ethdev_qid, nb_rx_desc, socket_id);
 
 	sfc_adapter_lock(sa);
 
-	rc = sfc_rx_qinit(sa, rx_queue_id, nb_rx_desc, socket_id,
+	sw_index = sfc_rxq_sw_index_by_ethdev_rx_qid(sas, sfc_ethdev_qid);
+	rc = sfc_rx_qinit(sa, sw_index, nb_rx_desc, socket_id,
 			  rx_conf, mb_pool);
 	if (rc != 0)
 		goto fail_rx_qinit;
 
-	dev->data->rx_queues[rx_queue_id] = sas->rxq_info[rx_queue_id].dp;
+	rxq_info = sfc_rxq_info_by_ethdev_qid(sas, sfc_ethdev_qid);
+	dev->data->rx_queues[ethdev_qid] = rxq_info->dp;
 
 	sfc_adapter_unlock(sa);
 
@@ -500,7 +505,7 @@ sfc_rx_queue_release(void *queue)
 	struct sfc_dp_rxq *dp_rxq = queue;
 	struct sfc_rxq *rxq;
 	struct sfc_adapter *sa;
-	unsigned int sw_index;
+	sfc_sw_index_t sw_index;
 
 	if (dp_rxq == NULL)
 		return;
@@ -1182,15 +1187,14 @@ sfc_set_mc_addr_list(struct rte_eth_dev *dev,
  * use any process-local pointers from the adapter data.
  */
 static void
-sfc_rx_queue_info_get(struct rte_eth_dev *dev, uint16_t rx_queue_id,
+sfc_rx_queue_info_get(struct rte_eth_dev *dev, uint16_t ethdev_qid,
 		      struct rte_eth_rxq_info *qinfo)
 {
 	struct sfc_adapter_shared *sas = sfc_adapter_shared_by_eth_dev(dev);
+	sfc_ethdev_qid_t sfc_ethdev_qid = ethdev_qid;
 	struct sfc_rxq_info *rxq_info;
 
-	SFC_ASSERT(rx_queue_id < sas->rxq_count);
-
-	rxq_info = &sas->rxq_info[rx_queue_id];
+	rxq_info = sfc_rxq_info_by_ethdev_qid(sas, sfc_ethdev_qid);
 
 	qinfo->mp = rxq_info->refill_mb_pool;
 	qinfo->conf.rx_free_thresh = rxq_info->refill_threshold;
@@ -1232,14 +1236,14 @@ sfc_tx_queue_info_get(struct rte_eth_dev *dev, uint16_t tx_queue_id,
  * use any process-local pointers from the adapter data.
  */
 static uint32_t
-sfc_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+sfc_rx_queue_count(struct rte_eth_dev *dev, uint16_t ethdev_qid)
 {
 	const struct sfc_adapter_priv *sap = sfc_adapter_priv_by_eth_dev(dev);
 	struct sfc_adapter_shared *sas = sfc_adapter_shared_by_eth_dev(dev);
+	sfc_ethdev_qid_t sfc_ethdev_qid = ethdev_qid;
 	struct sfc_rxq_info *rxq_info;
 
-	SFC_ASSERT(rx_queue_id < sas->rxq_count);
-	rxq_info = &sas->rxq_info[rx_queue_id];
+	rxq_info = sfc_rxq_info_by_ethdev_qid(sas, sfc_ethdev_qid);
 
 	if ((rxq_info->state & SFC_RXQ_STARTED) == 0)
 		return 0;
@@ -1293,13 +1297,16 @@ sfc_tx_descriptor_status(void *queue, uint16_t offset)
 }
 
 static int
-sfc_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+sfc_rx_queue_start(struct rte_eth_dev *dev, uint16_t ethdev_qid)
 {
 	struct sfc_adapter_shared *sas = sfc_adapter_shared_by_eth_dev(dev);
 	struct sfc_adapter *sa = sfc_adapter_by_eth_dev(dev);
+	sfc_ethdev_qid_t sfc_ethdev_qid = ethdev_qid;
+	struct sfc_rxq_info *rxq_info;
+	sfc_sw_index_t sw_index;
 	int rc;
 
-	sfc_log_init(sa, "RxQ=%u", rx_queue_id);
+	sfc_log_init(sa, "RxQ=%u", ethdev_qid);
 
 	sfc_adapter_lock(sa);
 
@@ -1307,14 +1314,16 @@ sfc_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 	if (sa->state != SFC_ADAPTER_STARTED)
 		goto fail_not_started;
 
-	if (sas->rxq_info[rx_queue_id].state != SFC_RXQ_INITIALIZED)
+	rxq_info = sfc_rxq_info_by_ethdev_qid(sas, sfc_ethdev_qid);
+	if (rxq_info->state != SFC_RXQ_INITIALIZED)
 		goto fail_not_setup;
 
-	rc = sfc_rx_qstart(sa, rx_queue_id);
+	sw_index = sfc_rxq_sw_index_by_ethdev_rx_qid(sas, sfc_ethdev_qid);
+	rc = sfc_rx_qstart(sa, sw_index);
 	if (rc != 0)
 		goto fail_rx_qstart;
 
-	sas->rxq_info[rx_queue_id].deferred_started = B_TRUE;
+	rxq_info->deferred_started = B_TRUE;
 
 	sfc_adapter_unlock(sa);
 
@@ -1329,17 +1338,23 @@ sfc_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 }
 
 static int
-sfc_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+sfc_rx_queue_stop(struct rte_eth_dev *dev, uint16_t ethdev_qid)
 {
 	struct sfc_adapter_shared *sas = sfc_adapter_shared_by_eth_dev(dev);
 	struct sfc_adapter *sa = sfc_adapter_by_eth_dev(dev);
+	sfc_ethdev_qid_t sfc_ethdev_qid = ethdev_qid;
+	struct sfc_rxq_info *rxq_info;
+	sfc_sw_index_t sw_index;
 
-	sfc_log_init(sa, "RxQ=%u", rx_queue_id);
+	sfc_log_init(sa, "RxQ=%u", ethdev_qid);
 
 	sfc_adapter_lock(sa);
-	sfc_rx_qstop(sa, rx_queue_id);
 
-	sas->rxq_info[rx_queue_id].deferred_started = B_FALSE;
+	sw_index = sfc_rxq_sw_index_by_ethdev_rx_qid(sas, sfc_ethdev_qid);
+	sfc_rx_qstop(sa, sw_index);
+
+	rxq_info = sfc_rxq_info_by_ethdev_qid(sas, sfc_ethdev_qid);
+	rxq_info->deferred_started = B_FALSE;
 
 	sfc_adapter_unlock(sa);
 
@@ -1766,27 +1781,27 @@ sfc_pool_ops_supported(struct rte_eth_dev *dev, const char *pool)
 }
 
 static int
-sfc_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
+sfc_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t ethdev_qid)
 {
 	const struct sfc_adapter_priv *sap = sfc_adapter_priv_by_eth_dev(dev);
 	struct sfc_adapter_shared *sas = sfc_adapter_shared_by_eth_dev(dev);
+	sfc_ethdev_qid_t sfc_ethdev_qid = ethdev_qid;
 	struct sfc_rxq_info *rxq_info;
 
-	SFC_ASSERT(queue_id < sas->rxq_count);
-	rxq_info = &sas->rxq_info[queue_id];
+	rxq_info = sfc_rxq_info_by_ethdev_qid(sas, sfc_ethdev_qid);
 
 	return sap->dp_rx->intr_enable(rxq_info->dp);
 }
 
 static int
-sfc_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id)
+sfc_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t ethdev_qid)
 {
 	const struct sfc_adapter_priv *sap = sfc_adapter_priv_by_eth_dev(dev);
 	struct sfc_adapter_shared *sas = sfc_adapter_shared_by_eth_dev(dev);
+	sfc_ethdev_qid_t sfc_ethdev_qid = ethdev_qid;
 	struct sfc_rxq_info *rxq_info;
 
-	SFC_ASSERT(queue_id < sas->rxq_count);
-	rxq_info = &sas->rxq_info[queue_id];
+	rxq_info = sfc_rxq_info_by_ethdev_qid(sas, sfc_ethdev_qid);
 
 	return sap->dp_rx->intr_disable(rxq_info->dp);
 }
diff --git a/drivers/net/sfc/sfc_ev.c b/drivers/net/sfc/sfc_ev.c
index b4953ac647..2262994112 100644
--- a/drivers/net/sfc/sfc_ev.c
+++ b/drivers/net/sfc/sfc_ev.c
@@ -582,7 +582,7 @@ sfc_ev_qpoll(struct sfc_evq *evq)
 		int rc;
 
 		if (evq->dp_rxq != NULL) {
-			unsigned int rxq_sw_index;
+			sfc_sw_index_t rxq_sw_index;
 
 			rxq_sw_index = evq->dp_rxq->dpq.queue_id;
 
diff --git a/drivers/net/sfc/sfc_ev.h b/drivers/net/sfc/sfc_ev.h
index d796865b7f..5a9f85c2d9 100644
--- a/drivers/net/sfc/sfc_ev.h
+++ b/drivers/net/sfc/sfc_ev.h
@@ -69,9 +69,25 @@ struct sfc_evq {
  * Tx event queues follow Rx event queues.
  */
 
-static inline unsigned int
-sfc_evq_index_by_rxq_sw_index(__rte_unused struct sfc_adapter *sa,
-			      unsigned int rxq_sw_index)
+static inline sfc_ethdev_qid_t
+sfc_ethdev_rx_qid_by_rxq_sw_index(__rte_unused struct sfc_adapter_shared *sas,
+				  sfc_sw_index_t rxq_sw_index)
+{
+	/* Only ethdev queues are present for now */
+	return rxq_sw_index;
+}
+
+static inline sfc_sw_index_t
+sfc_rxq_sw_index_by_ethdev_rx_qid(__rte_unused struct sfc_adapter_shared *sas,
+				  sfc_ethdev_qid_t ethdev_qid)
+{
+	/* Only ethdev queues are present for now */
+	return ethdev_qid;
+}
+
+static inline sfc_sw_index_t
+sfc_evq_sw_index_by_rxq_sw_index(__rte_unused struct sfc_adapter *sa,
+				 sfc_sw_index_t rxq_sw_index)
 {
 	return 1 + rxq_sw_index;
 }
diff --git a/drivers/net/sfc/sfc_flow.c b/drivers/net/sfc/sfc_flow.c
index 0bfd284c9e..2db8af1759 100644
--- a/drivers/net/sfc/sfc_flow.c
+++ b/drivers/net/sfc/sfc_flow.c
@@ -1400,10 +1400,10 @@ sfc_flow_parse_queue(struct sfc_adapter *sa,
 	struct sfc_rxq *rxq;
 	struct sfc_rxq_info *rxq_info;
 
-	if (queue->index >= sfc_sa2shared(sa)->rxq_count)
+	if (queue->index >= sfc_sa2shared(sa)->ethdev_rxq_count)
 		return -EINVAL;
 
-	rxq = &sa->rxq_ctrl[queue->index];
+	rxq = sfc_rxq_ctrl_by_ethdev_qid(sa, queue->index);
 	spec_filter->template.efs_dmaq_id = (uint16_t)rxq->hw_index;
 
 	rxq_info = &sfc_sa2shared(sa)->rxq_info[queue->index];
@@ -1420,7 +1420,7 @@ sfc_flow_parse_rss(struct sfc_adapter *sa,
 {
 	struct sfc_adapter_shared * const sas = sfc_sa2shared(sa);
 	struct sfc_rss *rss = &sas->rss;
-	unsigned int rxq_sw_index;
+	sfc_ethdev_qid_t ethdev_qid;
 	struct sfc_rxq *rxq;
 	unsigned int rxq_hw_index_min;
 	unsigned int rxq_hw_index_max;
@@ -1434,18 +1434,19 @@ sfc_flow_parse_rss(struct sfc_adapter *sa,
 	if (action_rss->queue_num == 0)
 		return -EINVAL;
 
-	rxq_sw_index = sfc_sa2shared(sa)->rxq_count - 1;
-	rxq = &sa->rxq_ctrl[rxq_sw_index];
+	ethdev_qid = sfc_sa2shared(sa)->ethdev_rxq_count - 1;
+	rxq = sfc_rxq_ctrl_by_ethdev_qid(sa, ethdev_qid);
 	rxq_hw_index_min = rxq->hw_index;
 	rxq_hw_index_max = 0;
 
 	for (i = 0; i < action_rss->queue_num; ++i) {
-		rxq_sw_index = action_rss->queue[i];
+		ethdev_qid = action_rss->queue[i];
 
-		if (rxq_sw_index >= sfc_sa2shared(sa)->rxq_count)
+		if ((unsigned int)ethdev_qid >=
+		    sfc_sa2shared(sa)->ethdev_rxq_count)
 			return -EINVAL;
 
-		rxq = &sa->rxq_ctrl[rxq_sw_index];
+		rxq = sfc_rxq_ctrl_by_ethdev_qid(sa, ethdev_qid);
 
 		if (rxq->hw_index < rxq_hw_index_min)
 			rxq_hw_index_min = rxq->hw_index;
@@ -1509,9 +1510,10 @@ sfc_flow_parse_rss(struct sfc_adapter *sa,
 
 	for (i = 0; i < RTE_DIM(sfc_rss_conf->rss_tbl); ++i) {
 		unsigned int nb_queues = action_rss->queue_num;
-		unsigned int rxq_sw_index = action_rss->queue[i % nb_queues];
-		struct sfc_rxq *rxq = &sa->rxq_ctrl[rxq_sw_index];
+		struct sfc_rxq *rxq;
 
+		ethdev_qid = action_rss->queue[i % nb_queues];
+		rxq = sfc_rxq_ctrl_by_ethdev_qid(sa, ethdev_qid);
 		sfc_rss_conf->rss_tbl[i] = rxq->hw_index - rxq_hw_index_min;
 	}
 
diff --git a/drivers/net/sfc/sfc_rx.c b/drivers/net/sfc/sfc_rx.c
index 461afc5168..597785ae02 100644
--- a/drivers/net/sfc/sfc_rx.c
+++ b/drivers/net/sfc/sfc_rx.c
@@ -654,14 +654,17 @@ struct sfc_dp_rx sfc_efx_rx = {
 };
 
 static void
-sfc_rx_qflush(struct sfc_adapter *sa, unsigned int sw_index)
+sfc_rx_qflush(struct sfc_adapter *sa, sfc_sw_index_t sw_index)
 {
+	struct sfc_adapter_shared *sas = sfc_sa2shared(sa);
+	sfc_ethdev_qid_t ethdev_qid;
 	struct sfc_rxq_info *rxq_info;
 	struct sfc_rxq *rxq;
 	unsigned int retry_count;
 	unsigned int wait_count;
 	int rc;
 
+	ethdev_qid = sfc_ethdev_rx_qid_by_rxq_sw_index(sas, sw_index);
 	rxq_info = &sfc_sa2shared(sa)->rxq_info[sw_index];
 	SFC_ASSERT(rxq_info->state & SFC_RXQ_STARTED);
 
@@ -698,13 +701,16 @@ sfc_rx_qflush(struct sfc_adapter *sa, unsigned int sw_index)
 			 (wait_count++ < SFC_RX_QFLUSH_POLL_ATTEMPTS));
 
 		if (rxq_info->state & SFC_RXQ_FLUSHING)
-			sfc_err(sa, "RxQ %u flush timed out", sw_index);
+			sfc_err(sa, "RxQ %d (internal %u) flush timed out",
+				ethdev_qid, sw_index);
 
 		if (rxq_info->state & SFC_RXQ_FLUSH_FAILED)
-			sfc_err(sa, "RxQ %u flush failed", sw_index);
+			sfc_err(sa, "RxQ %d (internal %u) flush failed",
+				ethdev_qid, sw_index);
 
 		if (rxq_info->state & SFC_RXQ_FLUSHED)
-			sfc_notice(sa, "RxQ %u flushed", sw_index);
+			sfc_notice(sa, "RxQ %d (internal %u) flushed",
+				   ethdev_qid, sw_index);
 	}
 
 	sa->priv.dp_rx->qpurge(rxq_info->dp);
@@ -764,17 +770,20 @@ sfc_rx_default_rxq_set_filter(struct sfc_adapter *sa, struct sfc_rxq *rxq)
 }
 
 int
-sfc_rx_qstart(struct sfc_adapter *sa, unsigned int sw_index)
+sfc_rx_qstart(struct sfc_adapter *sa, sfc_sw_index_t sw_index)
 {
+	struct sfc_adapter_shared *sas = sfc_sa2shared(sa);
+	sfc_ethdev_qid_t ethdev_qid;
 	struct sfc_rxq_info *rxq_info;
 	struct sfc_rxq *rxq;
 	struct sfc_evq *evq;
 	efx_rx_prefix_layout_t pinfo;
 	int rc;
 
-	sfc_log_init(sa, "sw_index=%u", sw_index);
-
 	SFC_ASSERT(sw_index < sfc_sa2shared(sa)->rxq_count);
+	ethdev_qid = sfc_ethdev_rx_qid_by_rxq_sw_index(sas, sw_index);
+
+	sfc_log_init(sa, "RxQ %d (internal %u)", ethdev_qid, sw_index);
 
 	rxq_info = &sfc_sa2shared(sa)->rxq_info[sw_index];
 	SFC_ASSERT(rxq_info->state == SFC_RXQ_INITIALIZED);
@@ -782,7 +791,7 @@ sfc_rx_qstart(struct sfc_adapter *sa, unsigned int sw_index)
 	rxq = &sa->rxq_ctrl[sw_index];
 	evq = rxq->evq;
 
-	rc = sfc_ev_qstart(evq, sfc_evq_index_by_rxq_sw_index(sa, sw_index));
+	rc = sfc_ev_qstart(evq, sfc_evq_sw_index_by_rxq_sw_index(sa, sw_index));
 	if (rc != 0)
 		goto fail_ev_qstart;
 
@@ -833,15 +842,16 @@ sfc_rx_qstart(struct sfc_adapter *sa, unsigned int sw_index)
 
 	rxq_info->state |= SFC_RXQ_STARTED;
 
-	if (sw_index == 0 && !sfc_sa2shared(sa)->isolated) {
+	if (ethdev_qid == 0 && !sfc_sa2shared(sa)->isolated) {
 		rc = sfc_rx_default_rxq_set_filter(sa, rxq);
 		if (rc != 0)
 			goto fail_mac_filter_default_rxq_set;
 	}
 
 	/* It seems to be used by DPDK for debug purposes only ('rte_ether') */
-	sa->eth_dev->data->rx_queue_state[sw_index] =
-		RTE_ETH_QUEUE_STATE_STARTED;
+	if (ethdev_qid != SFC_ETHDEV_QID_INVALID)
+		sa->eth_dev->data->rx_queue_state[ethdev_qid] =
+			RTE_ETH_QUEUE_STATE_STARTED;
 
 	return 0;
 
@@ -864,14 +874,17 @@ sfc_rx_qstart(struct sfc_adapter *sa, unsigned int sw_index)
 }
 
 void
-sfc_rx_qstop(struct sfc_adapter *sa, unsigned int sw_index)
+sfc_rx_qstop(struct sfc_adapter *sa, sfc_sw_index_t sw_index)
 {
+	struct sfc_adapter_shared *sas = sfc_sa2shared(sa);
+	sfc_ethdev_qid_t ethdev_qid;
 	struct sfc_rxq_info *rxq_info;
 	struct sfc_rxq *rxq;
 
-	sfc_log_init(sa, "sw_index=%u", sw_index);
-
 	SFC_ASSERT(sw_index < sfc_sa2shared(sa)->rxq_count);
+	ethdev_qid = sfc_ethdev_rx_qid_by_rxq_sw_index(sas, sw_index);
+
+	sfc_log_init(sa, "RxQ %d (internal %u)", ethdev_qid, sw_index);
 
 	rxq_info = &sfc_sa2shared(sa)->rxq_info[sw_index];
 
@@ -880,13 +893,14 @@ sfc_rx_qstop(struct sfc_adapter *sa, unsigned int sw_index)
 	SFC_ASSERT(rxq_info->state & SFC_RXQ_STARTED);
 
 	/* It seems to be used by DPDK for debug purposes only ('rte_ether') */
-	sa->eth_dev->data->rx_queue_state[sw_index] =
-		RTE_ETH_QUEUE_STATE_STOPPED;
+	if (ethdev_qid != SFC_ETHDEV_QID_INVALID)
+		sa->eth_dev->data->rx_queue_state[ethdev_qid] =
+			RTE_ETH_QUEUE_STATE_STOPPED;
 
 	rxq = &sa->rxq_ctrl[sw_index];
 	sa->priv.dp_rx->qstop(rxq_info->dp, &rxq->evq->read_ptr);
 
-	if (sw_index == 0)
+	if (ethdev_qid == 0)
 		efx_mac_filter_default_rxq_clear(sa->nic);
 
 	sfc_rx_qflush(sa, sw_index);
@@ -1056,11 +1070,13 @@ sfc_rx_mb_pool_buf_size(struct sfc_adapter *sa, struct rte_mempool *mb_pool)
 }
 
 int
-sfc_rx_qinit(struct sfc_adapter *sa, unsigned int sw_index,
+sfc_rx_qinit(struct sfc_adapter *sa, sfc_sw_index_t sw_index,
 	     uint16_t nb_rx_desc, unsigned int socket_id,
 	     const struct rte_eth_rxconf *rx_conf,
 	     struct rte_mempool *mb_pool)
 {
+	struct sfc_adapter_shared *sas = sfc_sa2shared(sa);
+	sfc_ethdev_qid_t ethdev_qid;
 	const efx_nic_cfg_t *encp = efx_nic_cfg_get(sa->nic);
 	struct sfc_rss *rss = &sfc_sa2shared(sa)->rss;
 	int rc;
@@ -1092,16 +1108,22 @@ sfc_rx_qinit(struct sfc_adapter *sa, unsigned int sw_index,
 	SFC_ASSERT(rxq_entries <= sa->rxq_max_entries);
 	SFC_ASSERT(rxq_max_fill_level <= nb_rx_desc);
 
-	offloads = rx_conf->offloads |
-		sa->eth_dev->data->dev_conf.rxmode.offloads;
+	ethdev_qid = sfc_ethdev_rx_qid_by_rxq_sw_index(sas, sw_index);
+
+	offloads = rx_conf->offloads;
+	/* Add device level Rx offloads if the queue is an ethdev Rx queue */
+	if (ethdev_qid != SFC_ETHDEV_QID_INVALID)
+		offloads |= sa->eth_dev->data->dev_conf.rxmode.offloads;
+
 	rc = sfc_rx_qcheck_conf(sa, rxq_max_fill_level, rx_conf, offloads);
 	if (rc != 0)
 		goto fail_bad_conf;
 
 	buf_size = sfc_rx_mb_pool_buf_size(sa, mb_pool);
 	if (buf_size == 0) {
-		sfc_err(sa, "RxQ %u mbuf pool object size is too small",
-			sw_index);
+		sfc_err(sa,
+			"RxQ %d (internal %u) mbuf pool object size is too small",
+			ethdev_qid, sw_index);
 		rc = EINVAL;
 		goto fail_bad_conf;
 	}
@@ -1111,11 +1133,13 @@ sfc_rx_qinit(struct sfc_adapter *sa, unsigned int sw_index,
 				  (offloads & DEV_RX_OFFLOAD_SCATTER),
 				  encp->enc_rx_scatter_max,
 				  &error)) {
-		sfc_err(sa, "RxQ %u MTU check failed: %s", sw_index, error);
-		sfc_err(sa, "RxQ %u calculated Rx buffer size is %u vs "
+		sfc_err(sa, "RxQ %d (internal %u) MTU check failed: %s",
+			ethdev_qid, sw_index, error);
+		sfc_err(sa,
+			"RxQ %d (internal %u) calculated Rx buffer size is %u vs "
 			"PDU size %u plus Rx prefix %u bytes",
-			sw_index, buf_size, (unsigned int)sa->port.pdu,
-			encp->enc_rx_prefix_size);
+			ethdev_qid, sw_index, buf_size,
+			(unsigned int)sa->port.pdu, encp->enc_rx_prefix_size);
 		rc = EINVAL;
 		goto fail_bad_conf;
 	}
@@ -1193,7 +1217,7 @@ sfc_rx_qinit(struct sfc_adapter *sa, unsigned int sw_index,
 	info.flags = rxq_info->rxq_flags;
 	info.rxq_entries = rxq_info->entries;
 	info.rxq_hw_ring = rxq->mem.esm_base;
-	info.evq_hw_index = sfc_evq_index_by_rxq_sw_index(sa, sw_index);
+	info.evq_hw_index = sfc_evq_sw_index_by_rxq_sw_index(sa, sw_index);
 	info.evq_entries = evq_entries;
 	info.evq_hw_ring = evq->mem.esm_base;
 	info.hw_index = rxq->hw_index;
@@ -1231,13 +1255,18 @@ sfc_rx_qinit(struct sfc_adapter *sa, unsigned int sw_index,
 }
 
 void
-sfc_rx_qfini(struct sfc_adapter *sa, unsigned int sw_index)
+sfc_rx_qfini(struct sfc_adapter *sa, sfc_sw_index_t sw_index)
 {
+	struct sfc_adapter_shared *sas = sfc_sa2shared(sa);
+	sfc_ethdev_qid_t ethdev_qid;
 	struct sfc_rxq_info *rxq_info;
 	struct sfc_rxq *rxq;
 
 	SFC_ASSERT(sw_index < sfc_sa2shared(sa)->rxq_count);
-	sa->eth_dev->data->rx_queues[sw_index] = NULL;
+	ethdev_qid = sfc_ethdev_rx_qid_by_rxq_sw_index(sas, sw_index);
+
+	if (ethdev_qid != SFC_ETHDEV_QID_INVALID)
+		sa->eth_dev->data->rx_queues[ethdev_qid] = NULL;
 
 	rxq_info = &sfc_sa2shared(sa)->rxq_info[sw_index];
 
@@ -1479,14 +1508,41 @@ sfc_rx_rss_config(struct sfc_adapter *sa)
 	return rc;
 }
 
+struct sfc_rxq_info *
+sfc_rxq_info_by_ethdev_qid(struct sfc_adapter_shared *sas,
+			   sfc_ethdev_qid_t ethdev_qid)
+{
+	sfc_sw_index_t sw_index;
+
+	SFC_ASSERT((unsigned int)ethdev_qid < sas->ethdev_rxq_count);
+	SFC_ASSERT(ethdev_qid != SFC_ETHDEV_QID_INVALID);
+
+	sw_index = sfc_rxq_sw_index_by_ethdev_rx_qid(sas, ethdev_qid);
+	return &sas->rxq_info[sw_index];
+}
+
+struct sfc_rxq *
+sfc_rxq_ctrl_by_ethdev_qid(struct sfc_adapter *sa, sfc_ethdev_qid_t ethdev_qid)
+{
+	struct sfc_adapter_shared *sas = sfc_sa2shared(sa);
+	sfc_sw_index_t sw_index;
+
+	SFC_ASSERT((unsigned int)ethdev_qid < sas->ethdev_rxq_count);
+	SFC_ASSERT(ethdev_qid != SFC_ETHDEV_QID_INVALID);
+
+	sw_index = sfc_rxq_sw_index_by_ethdev_rx_qid(sas, ethdev_qid);
+	return &sa->rxq_ctrl[sw_index];
+}
+
 int
 sfc_rx_start(struct sfc_adapter *sa)
 {
 	struct sfc_adapter_shared * const sas = sfc_sa2shared(sa);
-	unsigned int sw_index;
+	sfc_sw_index_t sw_index;
 	int rc;
 
-	sfc_log_init(sa, "rxq_count=%u", sas->rxq_count);
+	sfc_log_init(sa, "rxq_count=%u (internal %u)", sas->ethdev_rxq_count,
+		     sas->rxq_count);
 
 	rc = efx_rx_init(sa->nic);
 	if (rc != 0)
@@ -1524,9 +1580,10 @@ void
 sfc_rx_stop(struct sfc_adapter *sa)
 {
 	struct sfc_adapter_shared * const sas = sfc_sa2shared(sa);
-	unsigned int sw_index;
+	sfc_sw_index_t sw_index;
 
-	sfc_log_init(sa, "rxq_count=%u", sas->rxq_count);
+	sfc_log_init(sa, "rxq_count=%u (internal %u)", sas->ethdev_rxq_count,
+		     sas->rxq_count);
 
 	sw_index = sas->rxq_count;
 	while (sw_index-- > 0) {
@@ -1538,7 +1595,7 @@ sfc_rx_stop(struct sfc_adapter *sa)
 }
 
 static int
-sfc_rx_qinit_info(struct sfc_adapter *sa, unsigned int sw_index)
+sfc_rx_qinit_info(struct sfc_adapter *sa, sfc_sw_index_t sw_index)
 {
 	struct sfc_adapter_shared * const sas = sfc_sa2shared(sa);
 	struct sfc_rxq_info *rxq_info = &sas->rxq_info[sw_index];
@@ -1606,17 +1663,29 @@ static void
 sfc_rx_fini_queues(struct sfc_adapter *sa, unsigned int nb_rx_queues)
 {
 	struct sfc_adapter_shared * const sas = sfc_sa2shared(sa);
-	int sw_index;
+	sfc_sw_index_t sw_index;
+	sfc_ethdev_qid_t ethdev_qid;
 
-	SFC_ASSERT(nb_rx_queues <= sas->rxq_count);
+	SFC_ASSERT(nb_rx_queues <= sas->ethdev_rxq_count);
 
-	sw_index = sas->rxq_count;
-	while (--sw_index >= (int)nb_rx_queues) {
-		if (sas->rxq_info[sw_index].state & SFC_RXQ_INITIALIZED)
+	/*
+	 * Finalize only ethdev queues since other ones are finalized only
+	 * on device close and they may require additional deinitializaton.
+	 */
+	ethdev_qid = sas->ethdev_rxq_count;
+	while (--ethdev_qid >= (int)nb_rx_queues) {
+		struct sfc_rxq_info *rxq_info;
+
+		rxq_info = sfc_rxq_info_by_ethdev_qid(sas, ethdev_qid);
+		if (rxq_info->state & SFC_RXQ_INITIALIZED) {
+			sw_index = sfc_rxq_sw_index_by_ethdev_rx_qid(sas,
+								ethdev_qid);
 			sfc_rx_qfini(sa, sw_index);
+		}
+
 	}
 
-	sas->rxq_count = nb_rx_queues;
+	sas->ethdev_rxq_count = nb_rx_queues;
 }
 
 /**
@@ -1637,7 +1706,7 @@ sfc_rx_configure(struct sfc_adapter *sa)
 	int rc;
 
 	sfc_log_init(sa, "nb_rx_queues=%u (old %u)",
-		     nb_rx_queues, sas->rxq_count);
+		     nb_rx_queues, sas->ethdev_rxq_count);
 
 	rc = sfc_rx_check_mode(sa, &dev_conf->rxmode);
 	if (rc != 0)
@@ -1666,7 +1735,7 @@ sfc_rx_configure(struct sfc_adapter *sa)
 		struct sfc_rxq_info *new_rxq_info;
 		struct sfc_rxq *new_rxq_ctrl;
 
-		if (nb_rx_queues < sas->rxq_count)
+		if (nb_rx_queues < sas->ethdev_rxq_count)
 			sfc_rx_fini_queues(sa, nb_rx_queues);
 
 		rc = ENOMEM;
@@ -1685,30 +1754,38 @@ sfc_rx_configure(struct sfc_adapter *sa)
 		sas->rxq_info = new_rxq_info;
 		sa->rxq_ctrl = new_rxq_ctrl;
 		if (nb_rx_queues > sas->rxq_count) {
-			memset(&sas->rxq_info[sas->rxq_count], 0,
-			       (nb_rx_queues - sas->rxq_count) *
+			unsigned int rxq_count = sas->rxq_count;
+
+			memset(&sas->rxq_info[rxq_count], 0,
+			       (nb_rx_queues - rxq_count) *
 			       sizeof(sas->rxq_info[0]));
-			memset(&sa->rxq_ctrl[sas->rxq_count], 0,
-			       (nb_rx_queues - sas->rxq_count) *
+			memset(&sa->rxq_ctrl[rxq_count], 0,
+			       (nb_rx_queues - rxq_count) *
 			       sizeof(sa->rxq_ctrl[0]));
 		}
 	}
 
-	while (sas->rxq_count < nb_rx_queues) {
-		rc = sfc_rx_qinit_info(sa, sas->rxq_count);
+	while (sas->ethdev_rxq_count < nb_rx_queues) {
+		sfc_sw_index_t sw_index;
+
+		sw_index = sfc_rxq_sw_index_by_ethdev_rx_qid(sas,
+							sas->ethdev_rxq_count);
+		rc = sfc_rx_qinit_info(sa, sw_index);
 		if (rc != 0)
 			goto fail_rx_qinit_info;
 
-		sas->rxq_count++;
+		sas->ethdev_rxq_count++;
 	}
 
+	sas->rxq_count = sas->ethdev_rxq_count;
+
 configure_rss:
 	rss->channels = (dev_conf->rxmode.mq_mode == ETH_MQ_RX_RSS) ?
-			 MIN(sas->rxq_count, EFX_MAXRSS) : 0;
+			 MIN(sas->ethdev_rxq_count, EFX_MAXRSS) : 0;
 
 	if (rss->channels > 0) {
 		struct rte_eth_rss_conf *adv_conf_rss;
-		unsigned int sw_index;
+		sfc_sw_index_t sw_index;
 
 		for (sw_index = 0; sw_index < EFX_RSS_TBL_SIZE; ++sw_index)
 			rss->tbl[sw_index] = sw_index % rss->channels;
diff --git a/drivers/net/sfc/sfc_rx.h b/drivers/net/sfc/sfc_rx.h
index 2730454fd6..96c7dc415d 100644
--- a/drivers/net/sfc/sfc_rx.h
+++ b/drivers/net/sfc/sfc_rx.h
@@ -119,6 +119,10 @@ struct sfc_rxq_info {
 };
 
 struct sfc_rxq_info *sfc_rxq_info_by_dp_rxq(const struct sfc_dp_rxq *dp_rxq);
+struct sfc_rxq_info *sfc_rxq_info_by_ethdev_qid(struct sfc_adapter_shared *sas,
+						sfc_ethdev_qid_t ethdev_qid);
+struct sfc_rxq *sfc_rxq_ctrl_by_ethdev_qid(struct sfc_adapter *sa,
+					   sfc_ethdev_qid_t ethdev_qid);
 
 int sfc_rx_configure(struct sfc_adapter *sa);
 void sfc_rx_close(struct sfc_adapter *sa);
@@ -129,9 +133,9 @@ int sfc_rx_qinit(struct sfc_adapter *sa, unsigned int rx_queue_id,
 		 uint16_t nb_rx_desc, unsigned int socket_id,
 		 const struct rte_eth_rxconf *rx_conf,
 		 struct rte_mempool *mb_pool);
-void sfc_rx_qfini(struct sfc_adapter *sa, unsigned int sw_index);
-int sfc_rx_qstart(struct sfc_adapter *sa, unsigned int sw_index);
-void sfc_rx_qstop(struct sfc_adapter *sa, unsigned int sw_index);
+void sfc_rx_qfini(struct sfc_adapter *sa, sfc_sw_index_t sw_index);
+int sfc_rx_qstart(struct sfc_adapter *sa, sfc_sw_index_t sw_index);
+void sfc_rx_qstop(struct sfc_adapter *sa, sfc_sw_index_t sw_index);
 
 uint64_t sfc_rx_get_dev_offload_caps(struct sfc_adapter *sa);
 uint64_t sfc_rx_get_queue_offload_caps(struct sfc_adapter *sa);
-- 
2.30.2


^ permalink raw reply	[flat|nested] 104+ messages in thread

* [dpdk-dev] [PATCH v4 02/20] net/sfc: do not enable interrupts on internal Rx queues
  2021-07-02  8:39 ` [dpdk-dev] [PATCH v4 00/20] net/sfc: support flow API COUNT action Andrew Rybchenko
  2021-07-02  8:39   ` [dpdk-dev] [PATCH v4 01/20] net/sfc: introduce ethdev Rx queue ID Andrew Rybchenko
@ 2021-07-02  8:39   ` Andrew Rybchenko
  2021-07-02  8:39   ` [dpdk-dev] [PATCH v4 03/20] common/sfc_efx/base: separate target EvQ and IRQ config Andrew Rybchenko
                     ` (18 subsequent siblings)
  20 siblings, 0 replies; 104+ messages in thread
From: Andrew Rybchenko @ 2021-07-02  8:39 UTC (permalink / raw)
  To: dev; +Cc: David Marchand

The rxq_intr flag requests interrupt mode support for ethdev Rx queues.
There are no internal Rx queues yet.

Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
---
 drivers/net/sfc/sfc_ev.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/drivers/net/sfc/sfc_ev.c b/drivers/net/sfc/sfc_ev.c
index 2262994112..9a8149f052 100644
--- a/drivers/net/sfc/sfc_ev.c
+++ b/drivers/net/sfc/sfc_ev.c
@@ -663,7 +663,9 @@ sfc_ev_qstart(struct sfc_evq *evq, unsigned int hw_index)
 		     efx_evq_size(sa->nic, evq->entries, evq_flags));
 
 	if ((sa->intr.lsc_intr && hw_index == sa->mgmt_evq_index) ||
-	    (sa->intr.rxq_intr && evq->dp_rxq != NULL))
+	    (sa->intr.rxq_intr && evq->dp_rxq != NULL &&
+	     sfc_ethdev_rx_qid_by_rxq_sw_index(sfc_sa2shared(sa),
+		evq->dp_rxq->dpq.queue_id) != SFC_ETHDEV_QID_INVALID))
 		evq_flags |= EFX_EVQ_FLAGS_NOTIFY_INTERRUPT;
 	else
 		evq_flags |= EFX_EVQ_FLAGS_NOTIFY_DISABLED;
-- 
2.30.2


^ permalink raw reply	[flat|nested] 104+ messages in thread

* [dpdk-dev] [PATCH v4 03/20] common/sfc_efx/base: separate target EvQ and IRQ config
  2021-07-02  8:39 ` [dpdk-dev] [PATCH v4 00/20] net/sfc: support flow API COUNT action Andrew Rybchenko
  2021-07-02  8:39   ` [dpdk-dev] [PATCH v4 01/20] net/sfc: introduce ethdev Rx queue ID Andrew Rybchenko
  2021-07-02  8:39   ` [dpdk-dev] [PATCH v4 02/20] net/sfc: do not enable interrupts on internal Rx queues Andrew Rybchenko
@ 2021-07-02  8:39   ` Andrew Rybchenko
  2021-07-02  8:39   ` [dpdk-dev] [PATCH v4 04/20] common/sfc_efx/base: support custom EvQ to IRQ mapping Andrew Rybchenko
                     ` (17 subsequent siblings)
  20 siblings, 0 replies; 104+ messages in thread
From: Andrew Rybchenko @ 2021-07-02  8:39 UTC (permalink / raw)
  To: dev; +Cc: David Marchand, Andy Moreton

The target EvQ and the IRQ number are specified in the same location
in the MCDI request. The value is treated as an IRQ number if the
event queue is interrupting (the corresponding flag is set) and as a
target event queue otherwise.

However, it is better to separate the two at the helper API level to
make the intent clear.

Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
---
 drivers/common/sfc_efx/base/ef10_ev.c  | 12 +++++++-----
 drivers/common/sfc_efx/base/efx_impl.h |  1 +
 drivers/common/sfc_efx/base/efx_mcdi.c |  7 ++++++-
 drivers/common/sfc_efx/base/rhead_ev.c | 12 +++++++-----
 4 files changed, 21 insertions(+), 11 deletions(-)

diff --git a/drivers/common/sfc_efx/base/ef10_ev.c b/drivers/common/sfc_efx/base/ef10_ev.c
index ea59beecc4..c0cbc427b9 100644
--- a/drivers/common/sfc_efx/base/ef10_ev.c
+++ b/drivers/common/sfc_efx/base/ef10_ev.c
@@ -121,7 +121,8 @@ ef10_ev_qcreate(
 	__in		efx_evq_t *eep)
 {
 	efx_nic_cfg_t *encp = &(enp->en_nic_cfg);
-	uint32_t irq;
+	uint32_t irq = 0;
+	uint32_t target_evq = 0;
 	efx_rc_t rc;
 	boolean_t low_latency;
 
@@ -159,11 +160,12 @@ ef10_ev_qcreate(
 	    EFX_EVQ_FLAGS_NOTIFY_INTERRUPT) {
 		irq = index;
 	} else if (index == EFX_EF10_ALWAYS_INTERRUPTING_EVQ_INDEX) {
-		irq = index;
+		/* Use the first interrupt for always interrupting EvQ */
+		irq = 0;
 		flags = (flags & ~EFX_EVQ_FLAGS_NOTIFY_MASK) |
 		    EFX_EVQ_FLAGS_NOTIFY_INTERRUPT;
 	} else {
-		irq = EFX_EF10_ALWAYS_INTERRUPTING_EVQ_INDEX;
+		target_evq = EFX_EF10_ALWAYS_INTERRUPTING_EVQ_INDEX;
 	}
 
 	/*
@@ -187,8 +189,8 @@ ef10_ev_qcreate(
 	 * decision and low_latency hint is ignored.
 	 */
 	low_latency = encp->enc_datapath_cap_evb ? 0 : 1;
-	rc = efx_mcdi_init_evq(enp, index, esmp, ndescs, irq, us, flags,
-	    low_latency);
+	rc = efx_mcdi_init_evq(enp, index, esmp, ndescs, irq, target_evq, us,
+	    flags, low_latency);
 	if (rc != 0)
 		goto fail2;
 
diff --git a/drivers/common/sfc_efx/base/efx_impl.h b/drivers/common/sfc_efx/base/efx_impl.h
index 4a513171a1..c1f98def40 100644
--- a/drivers/common/sfc_efx/base/efx_impl.h
+++ b/drivers/common/sfc_efx/base/efx_impl.h
@@ -1535,6 +1535,7 @@ efx_mcdi_init_evq(
 	__in		efsys_mem_t *esmp,
 	__in		size_t nevs,
 	__in		uint32_t irq,
+	__in		uint32_t target_evq,
 	__in		uint32_t us,
 	__in		uint32_t flags,
 	__in		boolean_t low_latency);
diff --git a/drivers/common/sfc_efx/base/efx_mcdi.c b/drivers/common/sfc_efx/base/efx_mcdi.c
index f226ffd923..b68fc0503d 100644
--- a/drivers/common/sfc_efx/base/efx_mcdi.c
+++ b/drivers/common/sfc_efx/base/efx_mcdi.c
@@ -2568,6 +2568,7 @@ efx_mcdi_init_evq(
 	__in		efsys_mem_t *esmp,
 	__in		size_t nevs,
 	__in		uint32_t irq,
+	__in		uint32_t target_evq,
 	__in		uint32_t us,
 	__in		uint32_t flags,
 	__in		boolean_t low_latency)
@@ -2602,11 +2603,15 @@ efx_mcdi_init_evq(
 
 	MCDI_IN_SET_DWORD(req, INIT_EVQ_V2_IN_SIZE, nevs);
 	MCDI_IN_SET_DWORD(req, INIT_EVQ_V2_IN_INSTANCE, instance);
-	MCDI_IN_SET_DWORD(req, INIT_EVQ_V2_IN_IRQ_NUM, irq);
 
 	interrupting = ((flags & EFX_EVQ_FLAGS_NOTIFY_MASK) ==
 	    EFX_EVQ_FLAGS_NOTIFY_INTERRUPT);
 
+	if (interrupting)
+		MCDI_IN_SET_DWORD(req, INIT_EVQ_V2_IN_IRQ_NUM, irq);
+	else
+		MCDI_IN_SET_DWORD(req, INIT_EVQ_V2_IN_TARGET_EVQ, target_evq);
+
 	if (encp->enc_init_evq_v2_supported) {
 		/*
 		 * On Medford the low latency license is required to enable RX
diff --git a/drivers/common/sfc_efx/base/rhead_ev.c b/drivers/common/sfc_efx/base/rhead_ev.c
index 2099581fd7..533cd9e34a 100644
--- a/drivers/common/sfc_efx/base/rhead_ev.c
+++ b/drivers/common/sfc_efx/base/rhead_ev.c
@@ -106,7 +106,8 @@ rhead_ev_qcreate(
 {
 	const efx_nic_cfg_t *encp = efx_nic_cfg_get(enp);
 	size_t desc_size;
-	uint32_t irq;
+	uint32_t irq = 0;
+	uint32_t target_evq = 0;
 	efx_rc_t rc;
 
 	_NOTE(ARGUNUSED(id))	/* buftbl id managed by MC */
@@ -142,19 +143,20 @@ rhead_ev_qcreate(
 	    EFX_EVQ_FLAGS_NOTIFY_INTERRUPT) {
 		irq = index;
 	} else if (index == EFX_RHEAD_ALWAYS_INTERRUPTING_EVQ_INDEX) {
-		irq = index;
+		/* Use the first interrupt for always interrupting EvQ */
+		irq = 0;
 		flags = (flags & ~EFX_EVQ_FLAGS_NOTIFY_MASK) |
 		    EFX_EVQ_FLAGS_NOTIFY_INTERRUPT;
 	} else {
-		irq = EFX_RHEAD_ALWAYS_INTERRUPTING_EVQ_INDEX;
+		target_evq = EFX_RHEAD_ALWAYS_INTERRUPTING_EVQ_INDEX;
 	}
 
 	/*
 	 * Interrupts may be raised for events immediately after the queue is
 	 * created. See bug58606.
 	 */
-	rc = efx_mcdi_init_evq(enp, index, esmp, ndescs, irq, us, flags,
-	    B_FALSE);
+	rc = efx_mcdi_init_evq(enp, index, esmp, ndescs, irq, target_evq, us,
+	    flags, B_FALSE);
 	if (rc != 0)
 		goto fail2;
 
-- 
2.30.2


^ permalink raw reply	[flat|nested] 104+ messages in thread

* [dpdk-dev] [PATCH v4 04/20] common/sfc_efx/base: support custom EvQ to IRQ mapping
  2021-07-02  8:39 ` [dpdk-dev] [PATCH v4 00/20] net/sfc: support flow API COUNT action Andrew Rybchenko
                     ` (2 preceding siblings ...)
  2021-07-02  8:39   ` [dpdk-dev] [PATCH v4 03/20] common/sfc_efx/base: separate target EvQ and IRQ config Andrew Rybchenko
@ 2021-07-02  8:39   ` Andrew Rybchenko
  2021-07-02  8:39   ` [dpdk-dev] [PATCH v4 05/20] net/sfc: explicitly control IRQ used for Rx queues Andrew Rybchenko
                     ` (16 subsequent siblings)
  20 siblings, 0 replies; 104+ messages in thread
From: Andrew Rybchenko @ 2021-07-02  8:39 UTC (permalink / raw)
  To: dev; +Cc: David Marchand, Andy Moreton

Custom mapping is actually supported for the EF10 and EF100 families only.

A driver (e.g. a DPDK PMD) may need to customize the mapping of EvQs
to interrupts if, for example, extra EvQs are used for housekeeping
in polling or wake-up (via another EvQ) mode.
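
As a usage sketch (illustration only, not taken from the patches;
'enp', 'esmp', 'evq_index', 'n_descs', 'rx_queue_id' and
'base_evq_flags' are placeholders assumed to be prepared by the usual
libefx/driver setup), a driver could request a specific interrupt for
an interrupting EvQ like this:

	efx_evq_t *eep;
	uint32_t irq = 1 + rx_queue_id;	/* driver-chosen IRQ number */
	efx_rc_t rc;

	rc = efx_ev_qcreate_irq(enp, evq_index, esmp, n_descs,
				0 /* buffer table ID, unused on EF10 */,
				0 /* no moderation */,
				base_evq_flags | EFX_EVQ_FLAGS_NOTIFY_INTERRUPT,
				irq, &eep);
	if (rc != 0)
		return (rc);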

Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
---
 drivers/common/sfc_efx/base/ef10_ev.c    |  4 +--
 drivers/common/sfc_efx/base/ef10_impl.h  |  1 +
 drivers/common/sfc_efx/base/efx.h        | 13 ++++++++
 drivers/common/sfc_efx/base/efx_ev.c     | 39 ++++++++++++++++++++----
 drivers/common/sfc_efx/base/efx_impl.h   |  3 +-
 drivers/common/sfc_efx/base/rhead_ev.c   |  4 +--
 drivers/common/sfc_efx/base/rhead_impl.h |  1 +
 drivers/common/sfc_efx/version.map       |  1 +
 8 files changed, 55 insertions(+), 11 deletions(-)

diff --git a/drivers/common/sfc_efx/base/ef10_ev.c b/drivers/common/sfc_efx/base/ef10_ev.c
index c0cbc427b9..ba078940b6 100644
--- a/drivers/common/sfc_efx/base/ef10_ev.c
+++ b/drivers/common/sfc_efx/base/ef10_ev.c
@@ -118,10 +118,10 @@ ef10_ev_qcreate(
 	__in		uint32_t id,
 	__in		uint32_t us,
 	__in		uint32_t flags,
+	__in		uint32_t irq,
 	__in		efx_evq_t *eep)
 {
 	efx_nic_cfg_t *encp = &(enp->en_nic_cfg);
-	uint32_t irq = 0;
 	uint32_t target_evq = 0;
 	efx_rc_t rc;
 	boolean_t low_latency;
@@ -158,7 +158,7 @@ ef10_ev_qcreate(
 	/* INIT_EVQ expects function-relative vector number */
 	if ((flags & EFX_EVQ_FLAGS_NOTIFY_MASK) ==
 	    EFX_EVQ_FLAGS_NOTIFY_INTERRUPT) {
-		irq = index;
+		/* IRQ number is specified by caller */
 	} else if (index == EFX_EF10_ALWAYS_INTERRUPTING_EVQ_INDEX) {
 		/* Use the first interrupt for always interrupting EvQ */
 		irq = 0;
diff --git a/drivers/common/sfc_efx/base/ef10_impl.h b/drivers/common/sfc_efx/base/ef10_impl.h
index 40210fbd91..7c8d51b7a5 100644
--- a/drivers/common/sfc_efx/base/ef10_impl.h
+++ b/drivers/common/sfc_efx/base/ef10_impl.h
@@ -111,6 +111,7 @@ ef10_ev_qcreate(
 	__in		uint32_t id,
 	__in		uint32_t us,
 	__in		uint32_t flags,
+	__in		uint32_t irq,
 	__in		efx_evq_t *eep);
 
 LIBEFX_INTERNAL
diff --git a/drivers/common/sfc_efx/base/efx.h b/drivers/common/sfc_efx/base/efx.h
index 771fe5a170..e43efbda1f 100644
--- a/drivers/common/sfc_efx/base/efx.h
+++ b/drivers/common/sfc_efx/base/efx.h
@@ -2333,6 +2333,19 @@ efx_ev_qcreate(
 	__in		uint32_t flags,
 	__deref_out	efx_evq_t **eepp);
 
+LIBEFX_API
+extern	__checkReturn	efx_rc_t
+efx_ev_qcreate_irq(
+	__in		efx_nic_t *enp,
+	__in		unsigned int index,
+	__in		efsys_mem_t *esmp,
+	__in		size_t ndescs,
+	__in		uint32_t id,
+	__in		uint32_t us,
+	__in		uint32_t flags,
+	__in		uint32_t irq,
+	__deref_out	efx_evq_t **eepp);
+
 LIBEFX_API
 extern		void
 efx_ev_qpost(
diff --git a/drivers/common/sfc_efx/base/efx_ev.c b/drivers/common/sfc_efx/base/efx_ev.c
index 19bdea03fd..4808f8ddfc 100644
--- a/drivers/common/sfc_efx/base/efx_ev.c
+++ b/drivers/common/sfc_efx/base/efx_ev.c
@@ -35,6 +35,7 @@ siena_ev_qcreate(
 	__in		uint32_t id,
 	__in		uint32_t us,
 	__in		uint32_t flags,
+	__in		uint32_t irq,
 	__in		efx_evq_t *eep);
 
 static			void
@@ -253,7 +254,7 @@ efx_ev_fini(
 
 
 	__checkReturn	efx_rc_t
-efx_ev_qcreate(
+efx_ev_qcreate_irq(
 	__in		efx_nic_t *enp,
 	__in		unsigned int index,
 	__in		efsys_mem_t *esmp,
@@ -261,6 +262,7 @@ efx_ev_qcreate(
 	__in		uint32_t id,
 	__in		uint32_t us,
 	__in		uint32_t flags,
+	__in		uint32_t irq,
 	__deref_out	efx_evq_t **eepp)
 {
 	const efx_ev_ops_t *eevop = enp->en_eevop;
@@ -347,7 +349,7 @@ efx_ev_qcreate(
 	*eepp = eep;
 
 	if ((rc = eevop->eevo_qcreate(enp, index, esmp, ndescs, id, us, flags,
-	    eep)) != 0)
+	    irq, eep)) != 0)
 		goto fail9;
 
 	return (0);
@@ -377,6 +379,23 @@ efx_ev_qcreate(
 	return (rc);
 }
 
+	__checkReturn	efx_rc_t
+efx_ev_qcreate(
+	__in		efx_nic_t *enp,
+	__in		unsigned int index,
+	__in		efsys_mem_t *esmp,
+	__in		size_t ndescs,
+	__in		uint32_t id,
+	__in		uint32_t us,
+	__in		uint32_t flags,
+	__deref_out	efx_evq_t **eepp)
+{
+	uint32_t irq = index;
+
+	return (efx_ev_qcreate_irq(enp, index, esmp, ndescs, id, us, flags,
+	    irq, eepp));
+}
+
 		void
 efx_ev_qdestroy(
 	__in	efx_evq_t *eep)
@@ -1278,6 +1297,7 @@ siena_ev_qcreate(
 	__in		uint32_t id,
 	__in		uint32_t us,
 	__in		uint32_t flags,
+	__in		uint32_t irq,
 	__in		efx_evq_t *eep)
 {
 	efx_nic_cfg_t *encp = &(enp->en_nic_cfg);
@@ -1290,11 +1310,16 @@ siena_ev_qcreate(
 
 	EFSYS_ASSERT((flags & EFX_EVQ_FLAGS_EXTENDED_WIDTH) == 0);
 
+	if (irq != index) {
+		rc = EINVAL;
+		goto fail1;
+	}
+
 #if EFSYS_OPT_RX_SCALE
 	if (enp->en_intr.ei_type == EFX_INTR_LINE &&
 	    index >= EFX_MAXRSS_LEGACY) {
 		rc = EINVAL;
-		goto fail1;
+		goto fail2;
 	}
 #endif
 	for (size = 0;
@@ -1304,7 +1329,7 @@ siena_ev_qcreate(
 			break;
 	if (id + (1 << size) >= encp->enc_buftbl_limit) {
 		rc = EINVAL;
-		goto fail2;
+		goto fail3;
 	}
 
 	/* Set up the handler table */
@@ -1336,11 +1361,13 @@ siena_ev_qcreate(
 
 	return (0);
 
+fail3:
+	EFSYS_PROBE(fail3);
+#if EFSYS_OPT_RX_SCALE
 fail2:
 	EFSYS_PROBE(fail2);
-#if EFSYS_OPT_RX_SCALE
-fail1:
 #endif
+fail1:
 	EFSYS_PROBE1(fail1, efx_rc_t, rc);
 
 	return (rc);
diff --git a/drivers/common/sfc_efx/base/efx_impl.h b/drivers/common/sfc_efx/base/efx_impl.h
index c1f98def40..a6b20704ac 100644
--- a/drivers/common/sfc_efx/base/efx_impl.h
+++ b/drivers/common/sfc_efx/base/efx_impl.h
@@ -87,7 +87,8 @@ typedef struct efx_ev_ops_s {
 	void		(*eevo_fini)(efx_nic_t *);
 	efx_rc_t	(*eevo_qcreate)(efx_nic_t *, unsigned int,
 					  efsys_mem_t *, size_t, uint32_t,
-					  uint32_t, uint32_t, efx_evq_t *);
+					  uint32_t, uint32_t, uint32_t,
+					  efx_evq_t *);
 	void		(*eevo_qdestroy)(efx_evq_t *);
 	efx_rc_t	(*eevo_qprime)(efx_evq_t *, unsigned int);
 	void		(*eevo_qpost)(efx_evq_t *, uint16_t);
diff --git a/drivers/common/sfc_efx/base/rhead_ev.c b/drivers/common/sfc_efx/base/rhead_ev.c
index 533cd9e34a..3eaed9e94b 100644
--- a/drivers/common/sfc_efx/base/rhead_ev.c
+++ b/drivers/common/sfc_efx/base/rhead_ev.c
@@ -102,11 +102,11 @@ rhead_ev_qcreate(
 	__in		uint32_t id,
 	__in		uint32_t us,
 	__in		uint32_t flags,
+	__in		uint32_t irq,
 	__in		efx_evq_t *eep)
 {
 	const efx_nic_cfg_t *encp = efx_nic_cfg_get(enp);
 	size_t desc_size;
-	uint32_t irq = 0;
 	uint32_t target_evq = 0;
 	efx_rc_t rc;
 
@@ -141,7 +141,7 @@ rhead_ev_qcreate(
 	/* INIT_EVQ expects function-relative vector number */
 	if ((flags & EFX_EVQ_FLAGS_NOTIFY_MASK) ==
 	    EFX_EVQ_FLAGS_NOTIFY_INTERRUPT) {
-		irq = index;
+		/* IRQ number is specified by caller */
 	} else if (index == EFX_RHEAD_ALWAYS_INTERRUPTING_EVQ_INDEX) {
 		/* Use the first interrupt for always interrupting EvQ */
 		irq = 0;
diff --git a/drivers/common/sfc_efx/base/rhead_impl.h b/drivers/common/sfc_efx/base/rhead_impl.h
index 3bf9beceb0..dd38ded775 100644
--- a/drivers/common/sfc_efx/base/rhead_impl.h
+++ b/drivers/common/sfc_efx/base/rhead_impl.h
@@ -131,6 +131,7 @@ rhead_ev_qcreate(
 	__in		uint32_t id,
 	__in		uint32_t us,
 	__in		uint32_t flags,
+	__in		uint32_t irq,
 	__in		efx_evq_t *eep);
 
 LIBEFX_INTERNAL
diff --git a/drivers/common/sfc_efx/version.map b/drivers/common/sfc_efx/version.map
index 5e724fd102..d534d8ecb5 100644
--- a/drivers/common/sfc_efx/version.map
+++ b/drivers/common/sfc_efx/version.map
@@ -7,6 +7,7 @@ INTERNAL {
 	efx_ev_init;
 	efx_ev_qcreate;
 	efx_ev_qcreate_check_init_done;
+	efx_ev_qcreate_irq;
 	efx_ev_qdestroy;
 	efx_ev_qmoderate;
 	efx_ev_qpending;
-- 
2.30.2


^ permalink raw reply	[flat|nested] 104+ messages in thread

* [dpdk-dev] [PATCH v4 05/20] net/sfc: explicitly control IRQ used for Rx queues
  2021-07-02  8:39 ` [dpdk-dev] [PATCH v4 00/20] net/sfc: support flow API COUNT action Andrew Rybchenko
                     ` (3 preceding siblings ...)
  2021-07-02  8:39   ` [dpdk-dev] [PATCH v4 04/20] common/sfc_efx/base: support custom EvQ to IRQ mapping Andrew Rybchenko
@ 2021-07-02  8:39   ` Andrew Rybchenko
  2021-07-02  8:39   ` [dpdk-dev] [PATCH v4 06/20] net/sfc: introduce ethdev Tx queue ID Andrew Rybchenko
                     ` (15 subsequent siblings)
  20 siblings, 0 replies; 104+ messages in thread
From: Andrew Rybchenko @ 2021-07-02  8:39 UTC (permalink / raw)
  To: dev; +Cc: David Marchand, Andy Moreton

Interrupt support makes assumptions about the interrupt numbers used
for LSC and Rx queues: the first interrupt is used for LSC, and
subsequent interrupts are used for Rx queues.

Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
---
 drivers/net/sfc/sfc_ev.c | 32 ++++++++++++++++++++++++--------
 1 file changed, 24 insertions(+), 8 deletions(-)

diff --git a/drivers/net/sfc/sfc_ev.c b/drivers/net/sfc/sfc_ev.c
index 9a8149f052..71f706e403 100644
--- a/drivers/net/sfc/sfc_ev.c
+++ b/drivers/net/sfc/sfc_ev.c
@@ -648,6 +648,7 @@ sfc_ev_qstart(struct sfc_evq *evq, unsigned int hw_index)
 	struct sfc_adapter *sa = evq->sa;
 	efsys_mem_t *esmp;
 	uint32_t evq_flags = sa->evq_flags;
+	uint32_t irq = 0;
 	unsigned int total_delay_us;
 	unsigned int delay_us;
 	int rc;
@@ -662,20 +663,35 @@ sfc_ev_qstart(struct sfc_evq *evq, unsigned int hw_index)
 	(void)memset((void *)esmp->esm_base, 0xff,
 		     efx_evq_size(sa->nic, evq->entries, evq_flags));
 
-	if ((sa->intr.lsc_intr && hw_index == sa->mgmt_evq_index) ||
-	    (sa->intr.rxq_intr && evq->dp_rxq != NULL &&
-	     sfc_ethdev_rx_qid_by_rxq_sw_index(sfc_sa2shared(sa),
-		evq->dp_rxq->dpq.queue_id) != SFC_ETHDEV_QID_INVALID))
+	if (sa->intr.lsc_intr && hw_index == sa->mgmt_evq_index) {
 		evq_flags |= EFX_EVQ_FLAGS_NOTIFY_INTERRUPT;
-	else
+		irq = 0;
+	} else if (sa->intr.rxq_intr && evq->dp_rxq != NULL) {
+		sfc_ethdev_qid_t ethdev_qid;
+
+		ethdev_qid =
+			sfc_ethdev_rx_qid_by_rxq_sw_index(sfc_sa2shared(sa),
+				evq->dp_rxq->dpq.queue_id);
+		if (ethdev_qid != SFC_ETHDEV_QID_INVALID) {
+			evq_flags |= EFX_EVQ_FLAGS_NOTIFY_INTERRUPT;
+			/*
+			 * The first interrupt is used for management EvQ
+			 * (LSC etc). RxQ interrupts follow it.
+			 */
+			irq = 1 + ethdev_qid;
+		} else {
+			evq_flags |= EFX_EVQ_FLAGS_NOTIFY_DISABLED;
+		}
+	} else {
 		evq_flags |= EFX_EVQ_FLAGS_NOTIFY_DISABLED;
+	}
 
 	evq->init_state = SFC_EVQ_STARTING;
 
 	/* Create the common code event queue */
-	rc = efx_ev_qcreate(sa->nic, hw_index, esmp, evq->entries,
-			    0 /* unused on EF10 */, 0, evq_flags,
-			    &evq->common);
+	rc = efx_ev_qcreate_irq(sa->nic, hw_index, esmp, evq->entries,
+				0 /* unused on EF10 */, 0, evq_flags,
+				irq, &evq->common);
 	if (rc != 0)
 		goto fail_ev_qcreate;
 
-- 
2.30.2


^ permalink raw reply	[flat|nested] 104+ messages in thread

* [dpdk-dev] [PATCH v4 06/20] net/sfc: introduce ethdev Tx queue ID
  2021-07-02  8:39 ` [dpdk-dev] [PATCH v4 00/20] net/sfc: support flow API COUNT action Andrew Rybchenko
                     ` (4 preceding siblings ...)
  2021-07-02  8:39   ` [dpdk-dev] [PATCH v4 05/20] net/sfc: explicitly control IRQ used for Rx queues Andrew Rybchenko
@ 2021-07-02  8:39   ` Andrew Rybchenko
  2021-07-02  8:39   ` [dpdk-dev] [PATCH v4 07/20] common/sfc_efx/base: add ingress m-port RxQ flag Andrew Rybchenko
                     ` (14 subsequent siblings)
  20 siblings, 0 replies; 104+ messages in thread
From: Andrew Rybchenko @ 2021-07-02  8:39 UTC (permalink / raw)
  To: dev; +Cc: David Marchand, Igor Romanov, Andy Moreton, Ivan Malov

From: Igor Romanov <igor.romanov@oktetlabs.ru>

Make the software index of a Tx queue and the ethdev queue index
separate. When an ethdev TxQ is accessed in ethdev callbacks, an
explicit ethdev queue index is used.

This is a preparation for introducing non-ethdev Tx queues.

Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
Reviewed-by: Ivan Malov <ivan.malov@oktetlabs.ru>
---
 drivers/net/sfc/sfc.h        |   1 +
 drivers/net/sfc/sfc_ethdev.c |  46 ++++++----
 drivers/net/sfc/sfc_ev.c     |   2 +-
 drivers/net/sfc/sfc_ev.h     |  21 ++++-
 drivers/net/sfc/sfc_tx.c     | 164 ++++++++++++++++++++++++-----------
 drivers/net/sfc/sfc_tx.h     |  11 +--
 6 files changed, 171 insertions(+), 74 deletions(-)

diff --git a/drivers/net/sfc/sfc.h b/drivers/net/sfc/sfc.h
index ebe705020d..00fc26cf0e 100644
--- a/drivers/net/sfc/sfc.h
+++ b/drivers/net/sfc/sfc.h
@@ -173,6 +173,7 @@ struct sfc_adapter_shared {
 
 	unsigned int			txq_count;
 	struct sfc_txq_info		*txq_info;
+	unsigned int			ethdev_txq_count;
 
 	struct sfc_rss			rss;
 
diff --git a/drivers/net/sfc/sfc_ethdev.c b/drivers/net/sfc/sfc_ethdev.c
index 2651c41288..88896db1f8 100644
--- a/drivers/net/sfc/sfc_ethdev.c
+++ b/drivers/net/sfc/sfc_ethdev.c
@@ -524,24 +524,28 @@ sfc_rx_queue_release(void *queue)
 }
 
 static int
-sfc_tx_queue_setup(struct rte_eth_dev *dev, uint16_t tx_queue_id,
+sfc_tx_queue_setup(struct rte_eth_dev *dev, uint16_t ethdev_qid,
 		   uint16_t nb_tx_desc, unsigned int socket_id,
 		   const struct rte_eth_txconf *tx_conf)
 {
 	struct sfc_adapter_shared *sas = sfc_adapter_shared_by_eth_dev(dev);
 	struct sfc_adapter *sa = sfc_adapter_by_eth_dev(dev);
+	struct sfc_txq_info *txq_info;
+	sfc_sw_index_t sw_index;
 	int rc;
 
 	sfc_log_init(sa, "TxQ = %u, nb_tx_desc = %u, socket_id = %u",
-		     tx_queue_id, nb_tx_desc, socket_id);
+		     ethdev_qid, nb_tx_desc, socket_id);
 
 	sfc_adapter_lock(sa);
 
-	rc = sfc_tx_qinit(sa, tx_queue_id, nb_tx_desc, socket_id, tx_conf);
+	sw_index = sfc_txq_sw_index_by_ethdev_tx_qid(sas, ethdev_qid);
+	rc = sfc_tx_qinit(sa, sw_index, nb_tx_desc, socket_id, tx_conf);
 	if (rc != 0)
 		goto fail_tx_qinit;
 
-	dev->data->tx_queues[tx_queue_id] = sas->txq_info[tx_queue_id].dp;
+	txq_info = sfc_txq_info_by_ethdev_qid(sas, ethdev_qid);
+	dev->data->tx_queues[ethdev_qid] = txq_info->dp;
 
 	sfc_adapter_unlock(sa);
 	return 0;
@@ -557,7 +561,7 @@ sfc_tx_queue_release(void *queue)
 {
 	struct sfc_dp_txq *dp_txq = queue;
 	struct sfc_txq *txq;
-	unsigned int sw_index;
+	sfc_sw_index_t sw_index;
 	struct sfc_adapter *sa;
 
 	if (dp_txq == NULL)
@@ -1213,15 +1217,15 @@ sfc_rx_queue_info_get(struct rte_eth_dev *dev, uint16_t ethdev_qid,
  * use any process-local pointers from the adapter data.
  */
 static void
-sfc_tx_queue_info_get(struct rte_eth_dev *dev, uint16_t tx_queue_id,
+sfc_tx_queue_info_get(struct rte_eth_dev *dev, uint16_t ethdev_qid,
 		      struct rte_eth_txq_info *qinfo)
 {
 	struct sfc_adapter_shared *sas = sfc_adapter_shared_by_eth_dev(dev);
 	struct sfc_txq_info *txq_info;
 
-	SFC_ASSERT(tx_queue_id < sas->txq_count);
+	SFC_ASSERT(ethdev_qid < sas->ethdev_txq_count);
 
-	txq_info = &sas->txq_info[tx_queue_id];
+	txq_info = sfc_txq_info_by_ethdev_qid(sas, ethdev_qid);
 
 	memset(qinfo, 0, sizeof(*qinfo));
 
@@ -1362,13 +1366,15 @@ sfc_rx_queue_stop(struct rte_eth_dev *dev, uint16_t ethdev_qid)
 }
 
 static int
-sfc_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
+sfc_tx_queue_start(struct rte_eth_dev *dev, uint16_t ethdev_qid)
 {
 	struct sfc_adapter_shared *sas = sfc_adapter_shared_by_eth_dev(dev);
 	struct sfc_adapter *sa = sfc_adapter_by_eth_dev(dev);
+	struct sfc_txq_info *txq_info;
+	sfc_sw_index_t sw_index;
 	int rc;
 
-	sfc_log_init(sa, "TxQ = %u", tx_queue_id);
+	sfc_log_init(sa, "TxQ = %u", ethdev_qid);
 
 	sfc_adapter_lock(sa);
 
@@ -1376,14 +1382,16 @@ sfc_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
 	if (sa->state != SFC_ADAPTER_STARTED)
 		goto fail_not_started;
 
-	if (sas->txq_info[tx_queue_id].state != SFC_TXQ_INITIALIZED)
+	txq_info = sfc_txq_info_by_ethdev_qid(sas, ethdev_qid);
+	if (txq_info->state != SFC_TXQ_INITIALIZED)
 		goto fail_not_setup;
 
-	rc = sfc_tx_qstart(sa, tx_queue_id);
+	sw_index = sfc_txq_sw_index_by_ethdev_tx_qid(sas, ethdev_qid);
+	rc = sfc_tx_qstart(sa, sw_index);
 	if (rc != 0)
 		goto fail_tx_qstart;
 
-	sas->txq_info[tx_queue_id].deferred_started = B_TRUE;
+	txq_info->deferred_started = B_TRUE;
 
 	sfc_adapter_unlock(sa);
 	return 0;
@@ -1398,18 +1406,22 @@ sfc_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
 }
 
 static int
-sfc_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
+sfc_tx_queue_stop(struct rte_eth_dev *dev, uint16_t ethdev_qid)
 {
 	struct sfc_adapter_shared *sas = sfc_adapter_shared_by_eth_dev(dev);
 	struct sfc_adapter *sa = sfc_adapter_by_eth_dev(dev);
+	struct sfc_txq_info *txq_info;
+	sfc_sw_index_t sw_index;
 
-	sfc_log_init(sa, "TxQ = %u", tx_queue_id);
+	sfc_log_init(sa, "TxQ = %u", ethdev_qid);
 
 	sfc_adapter_lock(sa);
 
-	sfc_tx_qstop(sa, tx_queue_id);
+	sw_index = sfc_txq_sw_index_by_ethdev_tx_qid(sas, ethdev_qid);
+	sfc_tx_qstop(sa, sw_index);
 
-	sas->txq_info[tx_queue_id].deferred_started = B_FALSE;
+	txq_info = sfc_txq_info_by_ethdev_qid(sas, ethdev_qid);
+	txq_info->deferred_started = B_FALSE;
 
 	sfc_adapter_unlock(sa);
 	return 0;
diff --git a/drivers/net/sfc/sfc_ev.c b/drivers/net/sfc/sfc_ev.c
index 71f706e403..ed28d51e12 100644
--- a/drivers/net/sfc/sfc_ev.c
+++ b/drivers/net/sfc/sfc_ev.c
@@ -598,7 +598,7 @@ sfc_ev_qpoll(struct sfc_evq *evq)
 		}
 
 		if (evq->dp_txq != NULL) {
-			unsigned int txq_sw_index;
+			sfc_sw_index_t txq_sw_index;
 
 			txq_sw_index = evq->dp_txq->dpq.queue_id;
 
diff --git a/drivers/net/sfc/sfc_ev.h b/drivers/net/sfc/sfc_ev.h
index 5a9f85c2d9..75b9dcdebd 100644
--- a/drivers/net/sfc/sfc_ev.h
+++ b/drivers/net/sfc/sfc_ev.h
@@ -92,8 +92,25 @@ sfc_evq_sw_index_by_rxq_sw_index(__rte_unused struct sfc_adapter *sa,
 	return 1 + rxq_sw_index;
 }
 
-static inline unsigned int
-sfc_evq_index_by_txq_sw_index(struct sfc_adapter *sa, unsigned int txq_sw_index)
+static inline sfc_ethdev_qid_t
+sfc_ethdev_tx_qid_by_txq_sw_index(__rte_unused struct sfc_adapter_shared *sas,
+				  sfc_sw_index_t txq_sw_index)
+{
+	/* Only ethdev queues are present for now */
+	return txq_sw_index;
+}
+
+static inline sfc_sw_index_t
+sfc_txq_sw_index_by_ethdev_tx_qid(__rte_unused struct sfc_adapter_shared *sas,
+				  sfc_ethdev_qid_t ethdev_qid)
+{
+	/* Only ethdev queues are present for now */
+	return ethdev_qid;
+}
+
+static inline sfc_sw_index_t
+sfc_evq_sw_index_by_txq_sw_index(struct sfc_adapter *sa,
+				 sfc_sw_index_t txq_sw_index)
 {
 	return 1 + sa->eth_dev->data->nb_rx_queues + txq_sw_index;
 }
diff --git a/drivers/net/sfc/sfc_tx.c b/drivers/net/sfc/sfc_tx.c
index 28d696de61..ce2a9a6a4f 100644
--- a/drivers/net/sfc/sfc_tx.c
+++ b/drivers/net/sfc/sfc_tx.c
@@ -34,6 +34,19 @@
  */
 #define SFC_TX_QFLUSH_POLL_ATTEMPTS	(2000)
 
+struct sfc_txq_info *
+sfc_txq_info_by_ethdev_qid(struct sfc_adapter_shared *sas,
+			   sfc_ethdev_qid_t ethdev_qid)
+{
+	sfc_sw_index_t sw_index;
+
+	SFC_ASSERT((unsigned int)ethdev_qid < sas->ethdev_txq_count);
+	SFC_ASSERT(ethdev_qid != SFC_ETHDEV_QID_INVALID);
+
+	sw_index = sfc_txq_sw_index_by_ethdev_tx_qid(sas, ethdev_qid);
+	return &sas->txq_info[sw_index];
+}
+
 static uint64_t
 sfc_tx_get_offload_mask(struct sfc_adapter *sa)
 {
@@ -118,10 +131,12 @@ sfc_tx_qflush_done(struct sfc_txq_info *txq_info)
 }
 
 int
-sfc_tx_qinit(struct sfc_adapter *sa, unsigned int sw_index,
+sfc_tx_qinit(struct sfc_adapter *sa, sfc_sw_index_t sw_index,
 	     uint16_t nb_tx_desc, unsigned int socket_id,
 	     const struct rte_eth_txconf *tx_conf)
 {
+	struct sfc_adapter_shared * const sas = sfc_sa2shared(sa);
+	sfc_ethdev_qid_t ethdev_qid;
 	const efx_nic_cfg_t *encp = efx_nic_cfg_get(sa->nic);
 	unsigned int txq_entries;
 	unsigned int evq_entries;
@@ -134,7 +149,9 @@ sfc_tx_qinit(struct sfc_adapter *sa, unsigned int sw_index,
 	uint64_t offloads;
 	struct sfc_dp_tx_hw_limits hw_limits;
 
-	sfc_log_init(sa, "TxQ = %u", sw_index);
+	ethdev_qid = sfc_ethdev_tx_qid_by_txq_sw_index(sas, sw_index);
+
+	sfc_log_init(sa, "TxQ = %d (internal %u)", ethdev_qid, sw_index);
 
 	memset(&hw_limits, 0, sizeof(hw_limits));
 	hw_limits.txq_max_entries = sa->txq_max_entries;
@@ -150,8 +167,11 @@ sfc_tx_qinit(struct sfc_adapter *sa, unsigned int sw_index,
 	SFC_ASSERT(txq_entries >= nb_tx_desc);
 	SFC_ASSERT(txq_max_fill_level <= nb_tx_desc);
 
-	offloads = tx_conf->offloads |
-		sa->eth_dev->data->dev_conf.txmode.offloads;
+	offloads = tx_conf->offloads;
+	/* Add device level Tx offloads if the queue is an ethdev Tx queue */
+	if (ethdev_qid != SFC_ETHDEV_QID_INVALID)
+		offloads |= sa->eth_dev->data->dev_conf.txmode.offloads;
+
 	rc = sfc_tx_qcheck_conf(sa, txq_max_fill_level, tx_conf, offloads);
 	if (rc != 0)
 		goto fail_bad_conf;
@@ -231,20 +251,26 @@ sfc_tx_qinit(struct sfc_adapter *sa, unsigned int sw_index,
 
 fail_bad_conf:
 fail_size_up_rings:
-	sfc_log_init(sa, "failed (TxQ = %u, rc = %d)", sw_index, rc);
+	sfc_log_init(sa, "failed (TxQ = %d (internal %u), rc = %d)", ethdev_qid,
+		     sw_index, rc);
 	return rc;
 }
 
 void
-sfc_tx_qfini(struct sfc_adapter *sa, unsigned int sw_index)
+sfc_tx_qfini(struct sfc_adapter *sa, sfc_sw_index_t sw_index)
 {
+	struct sfc_adapter_shared * const sas = sfc_sa2shared(sa);
+	sfc_ethdev_qid_t ethdev_qid;
 	struct sfc_txq_info *txq_info;
 	struct sfc_txq *txq;
 
-	sfc_log_init(sa, "TxQ = %u", sw_index);
+	ethdev_qid = sfc_ethdev_tx_qid_by_txq_sw_index(sas, sw_index);
+
+	sfc_log_init(sa, "TxQ = %d (internal %u)", ethdev_qid, sw_index);
 
 	SFC_ASSERT(sw_index < sfc_sa2shared(sa)->txq_count);
-	sa->eth_dev->data->tx_queues[sw_index] = NULL;
+	if (ethdev_qid != SFC_ETHDEV_QID_INVALID)
+		sa->eth_dev->data->tx_queues[ethdev_qid] = NULL;
 
 	txq_info = &sfc_sa2shared(sa)->txq_info[sw_index];
 
@@ -265,9 +291,14 @@ sfc_tx_qfini(struct sfc_adapter *sa, unsigned int sw_index)
 }
 
 static int
-sfc_tx_qinit_info(struct sfc_adapter *sa, unsigned int sw_index)
+sfc_tx_qinit_info(struct sfc_adapter *sa, sfc_sw_index_t sw_index)
 {
-	sfc_log_init(sa, "TxQ = %u", sw_index);
+	struct sfc_adapter_shared * const sas = sfc_sa2shared(sa);
+	sfc_ethdev_qid_t ethdev_qid;
+
+	ethdev_qid = sfc_ethdev_tx_qid_by_txq_sw_index(sas, sw_index);
+
+	sfc_log_init(sa, "TxQ = %d (internal %u)", ethdev_qid, sw_index);
 
 	return 0;
 }
@@ -316,17 +347,26 @@ static void
 sfc_tx_fini_queues(struct sfc_adapter *sa, unsigned int nb_tx_queues)
 {
 	struct sfc_adapter_shared * const sas = sfc_sa2shared(sa);
-	int sw_index;
+	sfc_sw_index_t sw_index;
+	sfc_ethdev_qid_t ethdev_qid;
 
-	SFC_ASSERT(nb_tx_queues <= sas->txq_count);
+	SFC_ASSERT(nb_tx_queues <= sas->ethdev_txq_count);
 
-	sw_index = sas->txq_count;
-	while (--sw_index >= (int)nb_tx_queues) {
-		if (sas->txq_info[sw_index].state & SFC_TXQ_INITIALIZED)
+	/*
+	 * Finalize only ethdev queues since other ones are finalized only
+	 * on device close and they may require additional deinitialization.
+	 */
+	ethdev_qid = sas->ethdev_txq_count;
+	while (--ethdev_qid >= (int)nb_tx_queues) {
+		struct sfc_txq_info *txq_info;
+
+		sw_index = sfc_txq_sw_index_by_ethdev_tx_qid(sas, ethdev_qid);
+		txq_info = sfc_txq_info_by_ethdev_qid(sas, ethdev_qid);
+		if (txq_info->state & SFC_TXQ_INITIALIZED)
 			sfc_tx_qfini(sa, sw_index);
 	}
 
-	sas->txq_count = nb_tx_queues;
+	sas->ethdev_txq_count = nb_tx_queues;
 }
 
 int
@@ -339,7 +379,7 @@ sfc_tx_configure(struct sfc_adapter *sa)
 	int rc = 0;
 
 	sfc_log_init(sa, "nb_tx_queues=%u (old %u)",
-		     nb_tx_queues, sas->txq_count);
+		     nb_tx_queues, sas->ethdev_txq_count);
 
 	/*
 	 * The datapath implementation assumes absence of boundary
@@ -377,7 +417,7 @@ sfc_tx_configure(struct sfc_adapter *sa)
 		struct sfc_txq_info *new_txq_info;
 		struct sfc_txq *new_txq_ctrl;
 
-		if (nb_tx_queues < sas->txq_count)
+		if (nb_tx_queues < sas->ethdev_txq_count)
 			sfc_tx_fini_queues(sa, nb_tx_queues);
 
 		new_txq_info =
@@ -393,24 +433,30 @@ sfc_tx_configure(struct sfc_adapter *sa)
 
 		sas->txq_info = new_txq_info;
 		sa->txq_ctrl = new_txq_ctrl;
-		if (nb_tx_queues > sas->txq_count) {
-			memset(&sas->txq_info[sas->txq_count], 0,
-			       (nb_tx_queues - sas->txq_count) *
+		if (nb_tx_queues > sas->ethdev_txq_count) {
+			memset(&sas->txq_info[sas->ethdev_txq_count], 0,
+			       (nb_tx_queues - sas->ethdev_txq_count) *
 			       sizeof(sas->txq_info[0]));
-			memset(&sa->txq_ctrl[sas->txq_count], 0,
-			       (nb_tx_queues - sas->txq_count) *
+			memset(&sa->txq_ctrl[sas->ethdev_txq_count], 0,
+			       (nb_tx_queues - sas->ethdev_txq_count) *
 			       sizeof(sa->txq_ctrl[0]));
 		}
 	}
 
-	while (sas->txq_count < nb_tx_queues) {
-		rc = sfc_tx_qinit_info(sa, sas->txq_count);
+	while (sas->ethdev_txq_count < nb_tx_queues) {
+		sfc_sw_index_t sw_index;
+
+		sw_index = sfc_txq_sw_index_by_ethdev_tx_qid(sas,
+				sas->ethdev_txq_count);
+		rc = sfc_tx_qinit_info(sa, sw_index);
 		if (rc != 0)
 			goto fail_tx_qinit_info;
 
-		sas->txq_count++;
+		sas->ethdev_txq_count++;
 	}
 
+	sas->txq_count = sas->ethdev_txq_count;
+
 done:
 	return 0;
 
@@ -440,12 +486,12 @@ sfc_tx_close(struct sfc_adapter *sa)
 }
 
 int
-sfc_tx_qstart(struct sfc_adapter *sa, unsigned int sw_index)
+sfc_tx_qstart(struct sfc_adapter *sa, sfc_sw_index_t sw_index)
 {
 	struct sfc_adapter_shared * const sas = sfc_sa2shared(sa);
+	sfc_ethdev_qid_t ethdev_qid;
 	uint64_t offloads_supported = sfc_tx_get_dev_offload_caps(sa) |
 				      sfc_tx_get_queue_offload_caps(sa);
-	struct rte_eth_dev_data *dev_data;
 	struct sfc_txq_info *txq_info;
 	struct sfc_txq *txq;
 	struct sfc_evq *evq;
@@ -453,7 +499,9 @@ sfc_tx_qstart(struct sfc_adapter *sa, unsigned int sw_index)
 	unsigned int desc_index;
 	int rc = 0;
 
-	sfc_log_init(sa, "TxQ = %u", sw_index);
+	ethdev_qid = sfc_ethdev_tx_qid_by_txq_sw_index(sas, sw_index);
+
+	sfc_log_init(sa, "TxQ = %d (internal %u)", ethdev_qid, sw_index);
 
 	SFC_ASSERT(sw_index < sas->txq_count);
 	txq_info = &sas->txq_info[sw_index];
@@ -463,7 +511,7 @@ sfc_tx_qstart(struct sfc_adapter *sa, unsigned int sw_index)
 	txq = &sa->txq_ctrl[sw_index];
 	evq = txq->evq;
 
-	rc = sfc_ev_qstart(evq, sfc_evq_index_by_txq_sw_index(sa, sw_index));
+	rc = sfc_ev_qstart(evq, sfc_evq_sw_index_by_txq_sw_index(sa, sw_index));
 	if (rc != 0)
 		goto fail_ev_qstart;
 
@@ -505,11 +553,17 @@ sfc_tx_qstart(struct sfc_adapter *sa, unsigned int sw_index)
 	if (rc != 0)
 		goto fail_dp_qstart;
 
-	/*
-	 * It seems to be used by DPDK for debug purposes only ('rte_ether')
-	 */
-	dev_data = sa->eth_dev->data;
-	dev_data->tx_queue_state[sw_index] = RTE_ETH_QUEUE_STATE_STARTED;
+	if (ethdev_qid != SFC_ETHDEV_QID_INVALID) {
+		struct rte_eth_dev_data *dev_data;
+
+		/*
+		 * It seems to be used by DPDK for debug purposes only
+		 * ('rte_ether').
+		 */
+		dev_data = sa->eth_dev->data;
+		dev_data->tx_queue_state[ethdev_qid] =
+			RTE_ETH_QUEUE_STATE_STARTED;
+	}
 
 	return 0;
 
@@ -525,17 +579,19 @@ sfc_tx_qstart(struct sfc_adapter *sa, unsigned int sw_index)
 }
 
 void
-sfc_tx_qstop(struct sfc_adapter *sa, unsigned int sw_index)
+sfc_tx_qstop(struct sfc_adapter *sa, sfc_sw_index_t sw_index)
 {
 	struct sfc_adapter_shared * const sas = sfc_sa2shared(sa);
-	struct rte_eth_dev_data *dev_data;
+	sfc_ethdev_qid_t ethdev_qid;
 	struct sfc_txq_info *txq_info;
 	struct sfc_txq *txq;
 	unsigned int retry_count;
 	unsigned int wait_count;
 	int rc;
 
-	sfc_log_init(sa, "TxQ = %u", sw_index);
+	ethdev_qid = sfc_ethdev_tx_qid_by_txq_sw_index(sas, sw_index);
+
+	sfc_log_init(sa, "TxQ = %d (internal %u)", ethdev_qid, sw_index);
 
 	SFC_ASSERT(sw_index < sas->txq_count);
 	txq_info = &sas->txq_info[sw_index];
@@ -577,10 +633,12 @@ sfc_tx_qstop(struct sfc_adapter *sa, unsigned int sw_index)
 			 wait_count++ < SFC_TX_QFLUSH_POLL_ATTEMPTS);
 
 		if (txq_info->state & SFC_TXQ_FLUSHING)
-			sfc_err(sa, "TxQ %u flush timed out", sw_index);
+			sfc_err(sa, "TxQ %d (internal %u) flush timed out",
+				ethdev_qid, sw_index);
 
 		if (txq_info->state & SFC_TXQ_FLUSHED)
-			sfc_notice(sa, "TxQ %u flushed", sw_index);
+			sfc_notice(sa, "TxQ %d (internal %u) flushed",
+				   ethdev_qid, sw_index);
 	}
 
 	sa->priv.dp_tx->qreap(txq_info->dp);
@@ -591,11 +649,17 @@ sfc_tx_qstop(struct sfc_adapter *sa, unsigned int sw_index)
 
 	sfc_ev_qstop(txq->evq);
 
-	/*
-	 * It seems to be used by DPDK for debug purposes only ('rte_ether')
-	 */
-	dev_data = sa->eth_dev->data;
-	dev_data->tx_queue_state[sw_index] = RTE_ETH_QUEUE_STATE_STOPPED;
+	if (ethdev_qid != SFC_ETHDEV_QID_INVALID) {
+		struct rte_eth_dev_data *dev_data;
+
+		/*
+		 * It seems to be used by DPDK for debug purposes only
+		 * ('rte_ether')
+		 */
+		dev_data = sa->eth_dev->data;
+		dev_data->tx_queue_state[ethdev_qid] =
+			RTE_ETH_QUEUE_STATE_STOPPED;
+	}
 }
 
 int
@@ -603,10 +667,11 @@ sfc_tx_start(struct sfc_adapter *sa)
 {
 	struct sfc_adapter_shared * const sas = sfc_sa2shared(sa);
 	const efx_nic_cfg_t *encp = efx_nic_cfg_get(sa->nic);
-	unsigned int sw_index;
+	sfc_sw_index_t sw_index;
 	int rc = 0;
 
-	sfc_log_init(sa, "txq_count = %u", sas->txq_count);
+	sfc_log_init(sa, "txq_count = %u (internal %u)",
+		     sas->ethdev_txq_count, sas->txq_count);
 
 	if (sa->tso) {
 		if (!encp->enc_fw_assisted_tso_v2_enabled &&
@@ -654,9 +719,10 @@ void
 sfc_tx_stop(struct sfc_adapter *sa)
 {
 	struct sfc_adapter_shared * const sas = sfc_sa2shared(sa);
-	unsigned int sw_index;
+	sfc_sw_index_t sw_index;
 
-	sfc_log_init(sa, "txq_count = %u", sas->txq_count);
+	sfc_log_init(sa, "txq_count = %u (internal %u)",
+		     sas->ethdev_txq_count, sas->txq_count);
 
 	sw_index = sas->txq_count;
 	while (sw_index-- > 0) {
diff --git a/drivers/net/sfc/sfc_tx.h b/drivers/net/sfc/sfc_tx.h
index 5ed678703e..f1700b13ca 100644
--- a/drivers/net/sfc/sfc_tx.h
+++ b/drivers/net/sfc/sfc_tx.h
@@ -58,7 +58,8 @@ struct sfc_txq {
 };
 
 struct sfc_txq *sfc_txq_by_dp_txq(const struct sfc_dp_txq *dp_txq);
-
+struct sfc_txq_info *sfc_txq_info_by_ethdev_qid(struct sfc_adapter_shared *sas,
+						sfc_ethdev_qid_t ethdev_qid);
 /**
  * Transmit queue information used on libefx-based data path.
  * Allocated on the socket specified on the queue setup.
@@ -107,14 +108,14 @@ struct sfc_txq_info *sfc_txq_info_by_dp_txq(const struct sfc_dp_txq *dp_txq);
 int sfc_tx_configure(struct sfc_adapter *sa);
 void sfc_tx_close(struct sfc_adapter *sa);
 
-int sfc_tx_qinit(struct sfc_adapter *sa, unsigned int sw_index,
+int sfc_tx_qinit(struct sfc_adapter *sa, sfc_sw_index_t sw_index,
 		 uint16_t nb_tx_desc, unsigned int socket_id,
 		 const struct rte_eth_txconf *tx_conf);
-void sfc_tx_qfini(struct sfc_adapter *sa, unsigned int sw_index);
+void sfc_tx_qfini(struct sfc_adapter *sa, sfc_sw_index_t sw_index);
 
 void sfc_tx_qflush_done(struct sfc_txq_info *txq_info);
-int sfc_tx_qstart(struct sfc_adapter *sa, unsigned int sw_index);
-void sfc_tx_qstop(struct sfc_adapter *sa, unsigned int sw_index);
+int sfc_tx_qstart(struct sfc_adapter *sa, sfc_sw_index_t sw_index);
+void sfc_tx_qstop(struct sfc_adapter *sa, sfc_sw_index_t sw_index);
 int sfc_tx_start(struct sfc_adapter *sa);
 void sfc_tx_stop(struct sfc_adapter *sa);
 
-- 
2.30.2


^ permalink raw reply	[flat|nested] 104+ messages in thread

* [dpdk-dev] [PATCH v4 07/20] common/sfc_efx/base: add ingress m-port RxQ flag
  2021-07-02  8:39 ` [dpdk-dev] [PATCH v4 00/20] net/sfc: support flow API COUNT action Andrew Rybchenko
                     ` (5 preceding siblings ...)
  2021-07-02  8:39   ` [dpdk-dev] [PATCH v4 06/20] net/sfc: introduce ethdev Tx queue ID Andrew Rybchenko
@ 2021-07-02  8:39   ` Andrew Rybchenko
  2021-07-02  8:39   ` [dpdk-dev] [PATCH v4 08/20] common/sfc_efx/base: add user mark " Andrew Rybchenko
                     ` (13 subsequent siblings)
  20 siblings, 0 replies; 104+ messages in thread
From: Andrew Rybchenko @ 2021-07-02  8:39 UTC (permalink / raw)
  To: dev; +Cc: David Marchand, Igor Romanov, Andy Moreton, Ivan Malov

From: Igor Romanov <igor.romanov@oktetlabs.ru>

Add a flag to request support for ingress m-port on an RxQ.
Implement it only for Riverhead; other families will return an error
if the flag is set.
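
A hedged sketch of how a client could request the new field when building
RxQ creation flags (illustrative only; example_rxq_flags and its argument
are hypothetical, and the resulting flags word is passed to the libefx Rx
queue create path changed below):

static unsigned int
example_rxq_flags(bool want_ingress_mport)
{
	unsigned int flags = EFX_RXQ_FLAG_NONE;

	/*
	 * On Riverhead the flag adds EFX_RX_PREFIX_FIELD_INGRESS_MPORT to
	 * the requested Rx prefix fields; EF10 rejects it with ENOTSUP.
	 */
	if (want_ingress_mport)
		flags |= EFX_RXQ_FLAG_INGRESS_MPORT;

	return flags;
}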

Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
Reviewed-by: Ivan Malov <ivan.malov@oktetlabs.ru>
---
 drivers/common/sfc_efx/base/ef10_rx.c  |  9 ++++++++-
 drivers/common/sfc_efx/base/efx.h      |  5 +++++
 drivers/common/sfc_efx/base/efx_rx.c   | 14 +++++++++-----
 drivers/common/sfc_efx/base/rhead_rx.c |  3 +++
 4 files changed, 25 insertions(+), 6 deletions(-)

diff --git a/drivers/common/sfc_efx/base/ef10_rx.c b/drivers/common/sfc_efx/base/ef10_rx.c
index cfa60bd324..0e140645a5 100644
--- a/drivers/common/sfc_efx/base/ef10_rx.c
+++ b/drivers/common/sfc_efx/base/ef10_rx.c
@@ -1031,6 +1031,11 @@ ef10_rx_qcreate(
 	EFSYS_ASSERT(params.es_bufs_per_desc == 0);
 #endif /* EFSYS_OPT_RX_ES_SUPER_BUFFER */
 
+	if (flags & EFX_RXQ_FLAG_INGRESS_MPORT) {
+		rc = ENOTSUP;
+		goto fail12;
+	}
+
 	/* Scatter can only be disabled if the firmware supports doing so */
 	if (flags & EFX_RXQ_FLAG_SCATTER)
 		params.disable_scatter = B_FALSE;
@@ -1044,7 +1049,7 @@ ef10_rx_qcreate(
 
 	if ((rc = efx_mcdi_init_rxq(enp, ndescs, eep, label, index,
 		    esmp, &params)) != 0)
-		goto fail12;
+		goto fail13;
 
 	erp->er_eep = eep;
 	erp->er_label = label;
@@ -1057,6 +1062,8 @@ ef10_rx_qcreate(
 
 	return (0);
 
+fail13:
+	EFSYS_PROBE(fail13);
 fail12:
 	EFSYS_PROBE(fail12);
 #if EFSYS_OPT_RX_ES_SUPER_BUFFER
diff --git a/drivers/common/sfc_efx/base/efx.h b/drivers/common/sfc_efx/base/efx.h
index e43efbda1f..76092d794f 100644
--- a/drivers/common/sfc_efx/base/efx.h
+++ b/drivers/common/sfc_efx/base/efx.h
@@ -2925,6 +2925,7 @@ typedef enum efx_rx_prefix_field_e {
 	EFX_RX_PREFIX_FIELD_USER_MARK_VALID,
 	EFX_RX_PREFIX_FIELD_CSUM_FRAME,
 	EFX_RX_PREFIX_FIELD_INGRESS_VPORT,
+	EFX_RX_PREFIX_FIELD_INGRESS_MPORT = EFX_RX_PREFIX_FIELD_INGRESS_VPORT,
 	EFX_RX_PREFIX_NFIELDS
 } efx_rx_prefix_field_t;
 
@@ -2998,6 +2999,10 @@ typedef enum efx_rxq_type_e {
  * the driver.
  */
 #define	EFX_RXQ_FLAG_RSS_HASH		0x4
+/*
+ * Request ingress mport field in the Rx prefix of a queue.
+ */
+#define	EFX_RXQ_FLAG_INGRESS_MPORT	0x8
 
 LIBEFX_API
 extern	__checkReturn	efx_rc_t
diff --git a/drivers/common/sfc_efx/base/efx_rx.c b/drivers/common/sfc_efx/base/efx_rx.c
index 7c6fecf925..7e63363be7 100644
--- a/drivers/common/sfc_efx/base/efx_rx.c
+++ b/drivers/common/sfc_efx/base/efx_rx.c
@@ -1743,14 +1743,20 @@ siena_rx_qcreate(
 		goto fail2;
 	}
 
-	if (flags & EFX_RXQ_FLAG_SCATTER) {
 #if EFSYS_OPT_RX_SCATTER
-		jumbo = B_TRUE;
+#define SUPPORTED_RXQ_FLAGS EFX_RXQ_FLAG_SCATTER
 #else
+#define SUPPORTED_RXQ_FLAGS EFX_RXQ_FLAG_NONE
+#endif
+	/* Reject flags for unsupported queue features */
+	if ((flags & ~SUPPORTED_RXQ_FLAGS) != 0) {
 		rc = EINVAL;
 		goto fail3;
-#endif	/* EFSYS_OPT_RX_SCATTER */
 	}
+#undef SUPPORTED_RXQ_FLAGS
+
+	if (flags & EFX_RXQ_FLAG_SCATTER)
+		jumbo = B_TRUE;
 
 	/* Set up the new descriptor queue */
 	EFX_POPULATE_OWORD_7(oword,
@@ -1769,10 +1775,8 @@ siena_rx_qcreate(
 
 	return (0);
 
-#if !EFSYS_OPT_RX_SCATTER
 fail3:
 	EFSYS_PROBE(fail3);
-#endif
 fail2:
 	EFSYS_PROBE(fail2);
 fail1:
diff --git a/drivers/common/sfc_efx/base/rhead_rx.c b/drivers/common/sfc_efx/base/rhead_rx.c
index b2dacbab32..f1d46f7c70 100644
--- a/drivers/common/sfc_efx/base/rhead_rx.c
+++ b/drivers/common/sfc_efx/base/rhead_rx.c
@@ -629,6 +629,9 @@ rhead_rx_qcreate(
 		fields_mask |= 1U << EFX_RX_PREFIX_FIELD_RSS_HASH_VALID;
 	}
 
+	if (flags & EFX_RXQ_FLAG_INGRESS_MPORT)
+		fields_mask |= 1U << EFX_RX_PREFIX_FIELD_INGRESS_MPORT;
+
 	/*
 	 * LENGTH is required in EF100 host interface, as receive events
 	 * do not include the packet length.
-- 
2.30.2


^ permalink raw reply	[flat|nested] 104+ messages in thread

* [dpdk-dev] [PATCH v4 08/20] common/sfc_efx/base: add user mark RxQ flag
  2021-07-02  8:39 ` [dpdk-dev] [PATCH v4 00/20] net/sfc: support flow API COUNT action Andrew Rybchenko
                     ` (6 preceding siblings ...)
  2021-07-02  8:39   ` [dpdk-dev] [PATCH v4 07/20] common/sfc_efx/base: add ingress m-port RxQ flag Andrew Rybchenko
@ 2021-07-02  8:39   ` Andrew Rybchenko
  2021-07-02  8:39   ` [dpdk-dev] [PATCH v4 09/20] net/sfc: add abstractions for the management EVQ identity Andrew Rybchenko
                     ` (12 subsequent siblings)
  20 siblings, 0 replies; 104+ messages in thread
From: Andrew Rybchenko @ 2021-07-02  8:39 UTC (permalink / raw)
  To: dev; +Cc: David Marchand, Igor Romanov, Andy Moreton, Ivan Malov

From: Igor Romanov <igor.romanov@oktetlabs.ru>

Add a flag to request support for the user mark field on an RxQ.
The field is required to retrieve the generation count value from
the counter RxQ.

Implement it only for Riverhead and EF10 ESSB since they support
the field in the Rx prefix.

Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
Reviewed-by: Ivan Malov <ivan.malov@oktetlabs.ru>
---
 drivers/common/sfc_efx/base/ef10_rx.c  | 52 ++++++++++++++++----------
 drivers/common/sfc_efx/base/efx.h      |  4 ++
 drivers/common/sfc_efx/base/rhead_rx.c |  3 ++
 3 files changed, 39 insertions(+), 20 deletions(-)

diff --git a/drivers/common/sfc_efx/base/ef10_rx.c b/drivers/common/sfc_efx/base/ef10_rx.c
index 0e140645a5..0c3f9413cf 100644
--- a/drivers/common/sfc_efx/base/ef10_rx.c
+++ b/drivers/common/sfc_efx/base/ef10_rx.c
@@ -926,6 +926,10 @@ ef10_rx_qcreate(
 			goto fail1;
 		}
 		erp->er_buf_size = type_data->ertd_default.ed_buf_size;
+		if (flags & EFX_RXQ_FLAG_USER_MARK) {
+			rc = ENOTSUP;
+			goto fail2;
+		}
 		/*
 		 * Ignore EFX_RXQ_FLAG_RSS_HASH since if RSS hash is calculated
 		 * it is always delivered from HW in the pseudo-header.
@@ -936,7 +940,7 @@ ef10_rx_qcreate(
 		erpl = &ef10_packed_stream_rx_prefix_layout;
 		if (type_data == NULL) {
 			rc = EINVAL;
-			goto fail2;
+			goto fail3;
 		}
 		switch (type_data->ertd_packed_stream.eps_buf_size) {
 		case EFX_RXQ_PACKED_STREAM_BUF_SIZE_1M:
@@ -956,13 +960,17 @@ ef10_rx_qcreate(
 			break;
 		default:
 			rc = ENOTSUP;
-			goto fail3;
+			goto fail4;
 		}
 		erp->er_buf_size = type_data->ertd_packed_stream.eps_buf_size;
 		/* Packed stream pseudo header does not have RSS hash value */
 		if (flags & EFX_RXQ_FLAG_RSS_HASH) {
 			rc = ENOTSUP;
-			goto fail4;
+			goto fail5;
+		}
+		if (flags & EFX_RXQ_FLAG_USER_MARK) {
+			rc = ENOTSUP;
+			goto fail6;
 		}
 		break;
 #endif /* EFSYS_OPT_RX_PACKED_STREAM */
@@ -971,7 +979,7 @@ ef10_rx_qcreate(
 		erpl = &ef10_essb_rx_prefix_layout;
 		if (type_data == NULL) {
 			rc = EINVAL;
-			goto fail5;
+			goto fail7;
 		}
 		params.es_bufs_per_desc =
 		    type_data->ertd_es_super_buffer.eessb_bufs_per_desc;
@@ -989,7 +997,7 @@ ef10_rx_qcreate(
 #endif /* EFSYS_OPT_RX_ES_SUPER_BUFFER */
 	default:
 		rc = ENOTSUP;
-		goto fail6;
+		goto fail8;
 	}
 
 #if EFSYS_OPT_RX_PACKED_STREAM
@@ -997,13 +1005,13 @@ ef10_rx_qcreate(
 		/* Check if datapath firmware supports packed stream mode */
 		if (encp->enc_rx_packed_stream_supported == B_FALSE) {
 			rc = ENOTSUP;
-			goto fail7;
+			goto fail9;
 		}
 		/* Check if packed stream allows configurable buffer sizes */
 		if ((params.ps_buf_size != MC_CMD_INIT_RXQ_EXT_IN_PS_BUFF_1M) &&
 		    (encp->enc_rx_var_packed_stream_supported == B_FALSE)) {
 			rc = ENOTSUP;
-			goto fail8;
+			goto fail10;
 		}
 	}
 #else /* EFSYS_OPT_RX_PACKED_STREAM */
@@ -1014,17 +1022,17 @@ ef10_rx_qcreate(
 	if (params.es_bufs_per_desc > 0) {
 		if (encp->enc_rx_es_super_buffer_supported == B_FALSE) {
 			rc = ENOTSUP;
-			goto fail9;
+			goto fail11;
 		}
 		if (!EFX_IS_P2ALIGNED(uint32_t, params.es_max_dma_len,
 			    EFX_RX_ES_SUPER_BUFFER_BUF_ALIGNMENT)) {
 			rc = EINVAL;
-			goto fail10;
+			goto fail12;
 		}
 		if (!EFX_IS_P2ALIGNED(uint32_t, params.es_buf_stride,
 			    EFX_RX_ES_SUPER_BUFFER_BUF_ALIGNMENT)) {
 			rc = EINVAL;
-			goto fail11;
+			goto fail13;
 		}
 	}
 #else /* EFSYS_OPT_RX_ES_SUPER_BUFFER */
@@ -1033,7 +1041,7 @@ ef10_rx_qcreate(
 
 	if (flags & EFX_RXQ_FLAG_INGRESS_MPORT) {
 		rc = ENOTSUP;
-		goto fail12;
+		goto fail14;
 	}
 
 	/* Scatter can only be disabled if the firmware supports doing so */
@@ -1049,7 +1057,7 @@ ef10_rx_qcreate(
 
 	if ((rc = efx_mcdi_init_rxq(enp, ndescs, eep, label, index,
 		    esmp, &params)) != 0)
-		goto fail13;
+		goto fail15;
 
 	erp->er_eep = eep;
 	erp->er_label = label;
@@ -1062,38 +1070,42 @@ ef10_rx_qcreate(
 
 	return (0);
 
+fail15:
+	EFSYS_PROBE(fail15);
+fail14:
+	EFSYS_PROBE(fail14);
+#if EFSYS_OPT_RX_ES_SUPER_BUFFER
 fail13:
 	EFSYS_PROBE(fail13);
 fail12:
 	EFSYS_PROBE(fail12);
-#if EFSYS_OPT_RX_ES_SUPER_BUFFER
 fail11:
 	EFSYS_PROBE(fail11);
+#endif /* EFSYS_OPT_RX_ES_SUPER_BUFFER */
+#if EFSYS_OPT_RX_PACKED_STREAM
 fail10:
 	EFSYS_PROBE(fail10);
 fail9:
 	EFSYS_PROBE(fail9);
-#endif /* EFSYS_OPT_RX_ES_SUPER_BUFFER */
-#if EFSYS_OPT_RX_PACKED_STREAM
+#endif /* EFSYS_OPT_RX_PACKED_STREAM */
 fail8:
 	EFSYS_PROBE(fail8);
+#if EFSYS_OPT_RX_ES_SUPER_BUFFER
 fail7:
 	EFSYS_PROBE(fail7);
-#endif /* EFSYS_OPT_RX_PACKED_STREAM */
+#endif /* EFSYS_OPT_RX_ES_SUPER_BUFFER */
+#if EFSYS_OPT_RX_PACKED_STREAM
 fail6:
 	EFSYS_PROBE(fail6);
-#if EFSYS_OPT_RX_ES_SUPER_BUFFER
 fail5:
 	EFSYS_PROBE(fail5);
-#endif /* EFSYS_OPT_RX_ES_SUPER_BUFFER */
-#if EFSYS_OPT_RX_PACKED_STREAM
 fail4:
 	EFSYS_PROBE(fail4);
 fail3:
 	EFSYS_PROBE(fail3);
+#endif /* EFSYS_OPT_RX_PACKED_STREAM */
 fail2:
 	EFSYS_PROBE(fail2);
-#endif /* EFSYS_OPT_RX_PACKED_STREAM */
 fail1:
 	EFSYS_PROBE1(fail1, efx_rc_t, rc);
 
diff --git a/drivers/common/sfc_efx/base/efx.h b/drivers/common/sfc_efx/base/efx.h
index 76092d794f..f81837a931 100644
--- a/drivers/common/sfc_efx/base/efx.h
+++ b/drivers/common/sfc_efx/base/efx.h
@@ -3003,6 +3003,10 @@ typedef enum efx_rxq_type_e {
  * Request ingress mport field in the Rx prefix of a queue.
  */
 #define	EFX_RXQ_FLAG_INGRESS_MPORT	0x8
+/*
+ * Request user mark field in the Rx prefix of a queue.
+ */
+#define	EFX_RXQ_FLAG_USER_MARK		0x10
 
 LIBEFX_API
 extern	__checkReturn	efx_rc_t
diff --git a/drivers/common/sfc_efx/base/rhead_rx.c b/drivers/common/sfc_efx/base/rhead_rx.c
index f1d46f7c70..76b8ce302a 100644
--- a/drivers/common/sfc_efx/base/rhead_rx.c
+++ b/drivers/common/sfc_efx/base/rhead_rx.c
@@ -632,6 +632,9 @@ rhead_rx_qcreate(
 	if (flags & EFX_RXQ_FLAG_INGRESS_MPORT)
 		fields_mask |= 1U << EFX_RX_PREFIX_FIELD_INGRESS_MPORT;
 
+	if (flags & EFX_RXQ_FLAG_USER_MARK)
+		fields_mask |= 1U << EFX_RX_PREFIX_FIELD_USER_MARK;
+
 	/*
 	 * LENGTH is required in EF100 host interface, as receive events
 	 * do not include the packet length.
-- 
2.30.2


^ permalink raw reply	[flat|nested] 104+ messages in thread

* [dpdk-dev] [PATCH v4 09/20] net/sfc: add abstractions for the management EVQ identity
  2021-07-02  8:39 ` [dpdk-dev] [PATCH v4 00/20] net/sfc: support flow API COUNT action Andrew Rybchenko
                     ` (7 preceding siblings ...)
  2021-07-02  8:39   ` [dpdk-dev] [PATCH v4 08/20] common/sfc_efx/base: add user mark " Andrew Rybchenko
@ 2021-07-02  8:39   ` Andrew Rybchenko
  2021-07-02  8:39   ` [dpdk-dev] [PATCH v4 10/20] net/sfc: add support for initialising different RxQ types Andrew Rybchenko
                     ` (11 subsequent siblings)
  20 siblings, 0 replies; 104+ messages in thread
From: Andrew Rybchenko @ 2021-07-02  8:39 UTC (permalink / raw)
  To: dev; +Cc: David Marchand, Igor Romanov, Andy Moreton

From: Igor Romanov <igor.romanov@oktetlabs.ru>

Add a function returning management event queue software index.

Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
---
 drivers/net/sfc/sfc_ev.c | 2 +-
 drivers/net/sfc/sfc_ev.h | 6 ++++++
 2 files changed, 7 insertions(+), 1 deletion(-)

diff --git a/drivers/net/sfc/sfc_ev.c b/drivers/net/sfc/sfc_ev.c
index ed28d51e12..ba4409369a 100644
--- a/drivers/net/sfc/sfc_ev.c
+++ b/drivers/net/sfc/sfc_ev.c
@@ -983,7 +983,7 @@ sfc_ev_attach(struct sfc_adapter *sa)
 		goto fail_kvarg_perf_profile;
 	}
 
-	sa->mgmt_evq_index = 0;
+	sa->mgmt_evq_index = sfc_mgmt_evq_sw_index(sfc_sa2shared(sa));
 	rte_spinlock_init(&sa->mgmt_evq_lock);
 
 	rc = sfc_ev_qinit(sa, SFC_EVQ_TYPE_MGMT, 0, sa->evq_min_entries,
diff --git a/drivers/net/sfc/sfc_ev.h b/drivers/net/sfc/sfc_ev.h
index 75b9dcdebd..3f3c4b5b9a 100644
--- a/drivers/net/sfc/sfc_ev.h
+++ b/drivers/net/sfc/sfc_ev.h
@@ -60,6 +60,12 @@ struct sfc_evq {
 	unsigned int			entries;
 };
 
+static inline sfc_sw_index_t
+sfc_mgmt_evq_sw_index(__rte_unused const struct sfc_adapter_shared *sas)
+{
+	return 0;
+}
+
 /*
  * Functions below define event queue to transmit/receive queue and vice
  * versa mapping.
-- 
2.30.2


^ permalink raw reply	[flat|nested] 104+ messages in thread

* [dpdk-dev] [PATCH v4 10/20] net/sfc: add support for initialising different RxQ types
  2021-07-02  8:39 ` [dpdk-dev] [PATCH v4 00/20] net/sfc: support flow API COUNT action Andrew Rybchenko
                     ` (8 preceding siblings ...)
  2021-07-02  8:39   ` [dpdk-dev] [PATCH v4 09/20] net/sfc: add abstractions for the management EVQ identity Andrew Rybchenko
@ 2021-07-02  8:39   ` Andrew Rybchenko
  2021-07-02  8:39   ` [dpdk-dev] [PATCH v4 11/20] net/sfc: add NUMA-aware registry of service logical cores Andrew Rybchenko
                     ` (10 subsequent siblings)
  20 siblings, 0 replies; 104+ messages in thread
From: Andrew Rybchenko @ 2021-07-02  8:39 UTC (permalink / raw)
  To: dev; +Cc: David Marchand, Igor Romanov, Andy Moreton

From: Igor Romanov <igor.romanov@oktetlabs.ru>

Add extra EFX flags to the RxQ info initialization API to support
choosing different RxQ types, and make the API public so that it
can be used for counter queues.
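
A minimal sketch of the intended use (illustrative only;
example_init_rxq_infos is a hypothetical helper that mirrors how a later
patch in the series initializes the counter RxQ, while ethdev queues keep
passing 0 for the extra flags):

static int
example_init_rxq_infos(struct sfc_adapter *sa, sfc_sw_index_t ethdev_sw_index,
		       sfc_sw_index_t counter_sw_index)
{
	int rc;

	/* Ethdev Rx queue: no extra EFX type flags */
	rc = sfc_rx_qinit_info(sa, ethdev_sw_index, 0);
	if (rc != 0)
		return rc;

	/* Internal counter Rx queue: request the user mark Rx prefix field */
	return sfc_rx_qinit_info(sa, counter_sw_index, EFX_RXQ_FLAG_USER_MARK);
}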

Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
---
 drivers/net/sfc/sfc_rx.c | 10 ++++++----
 drivers/net/sfc/sfc_rx.h |  2 ++
 2 files changed, 8 insertions(+), 4 deletions(-)

diff --git a/drivers/net/sfc/sfc_rx.c b/drivers/net/sfc/sfc_rx.c
index 597785ae02..c7a7bd66ef 100644
--- a/drivers/net/sfc/sfc_rx.c
+++ b/drivers/net/sfc/sfc_rx.c
@@ -1155,7 +1155,7 @@ sfc_rx_qinit(struct sfc_adapter *sa, sfc_sw_index_t sw_index,
 	else
 		rxq_info->type = EFX_RXQ_TYPE_DEFAULT;
 
-	rxq_info->type_flags =
+	rxq_info->type_flags |=
 		(offloads & DEV_RX_OFFLOAD_SCATTER) ?
 		EFX_RXQ_FLAG_SCATTER : EFX_RXQ_FLAG_NONE;
 
@@ -1594,8 +1594,9 @@ sfc_rx_stop(struct sfc_adapter *sa)
 	efx_rx_fini(sa->nic);
 }
 
-static int
-sfc_rx_qinit_info(struct sfc_adapter *sa, sfc_sw_index_t sw_index)
+int
+sfc_rx_qinit_info(struct sfc_adapter *sa, sfc_sw_index_t sw_index,
+		  unsigned int extra_efx_type_flags)
 {
 	struct sfc_adapter_shared * const sas = sfc_sa2shared(sa);
 	struct sfc_rxq_info *rxq_info = &sas->rxq_info[sw_index];
@@ -1606,6 +1607,7 @@ sfc_rx_qinit_info(struct sfc_adapter *sa, sfc_sw_index_t sw_index)
 	SFC_ASSERT(rte_is_power_of_2(max_entries));
 
 	rxq_info->max_entries = max_entries;
+	rxq_info->type_flags = extra_efx_type_flags;
 
 	return 0;
 }
@@ -1770,7 +1772,7 @@ sfc_rx_configure(struct sfc_adapter *sa)
 
 		sw_index = sfc_rxq_sw_index_by_ethdev_rx_qid(sas,
 							sas->ethdev_rxq_count);
-		rc = sfc_rx_qinit_info(sa, sw_index);
+		rc = sfc_rx_qinit_info(sa, sw_index, 0);
 		if (rc != 0)
 			goto fail_rx_qinit_info;
 
diff --git a/drivers/net/sfc/sfc_rx.h b/drivers/net/sfc/sfc_rx.h
index 96c7dc415d..e5a6fde79b 100644
--- a/drivers/net/sfc/sfc_rx.h
+++ b/drivers/net/sfc/sfc_rx.h
@@ -129,6 +129,8 @@ void sfc_rx_close(struct sfc_adapter *sa);
 int sfc_rx_start(struct sfc_adapter *sa);
 void sfc_rx_stop(struct sfc_adapter *sa);
 
+int sfc_rx_qinit_info(struct sfc_adapter *sa, sfc_sw_index_t sw_index,
+		      unsigned int extra_efx_type_flags);
 int sfc_rx_qinit(struct sfc_adapter *sa, unsigned int rx_queue_id,
 		 uint16_t nb_rx_desc, unsigned int socket_id,
 		 const struct rte_eth_rxconf *rx_conf,
-- 
2.30.2


^ permalink raw reply	[flat|nested] 104+ messages in thread

* [dpdk-dev] [PATCH v4 11/20] net/sfc: add NUMA-aware registry of service logical cores
  2021-07-02  8:39 ` [dpdk-dev] [PATCH v4 00/20] net/sfc: support flow API COUNT action Andrew Rybchenko
                     ` (9 preceding siblings ...)
  2021-07-02  8:39   ` [dpdk-dev] [PATCH v4 10/20] net/sfc: add support for initialising different RxQ types Andrew Rybchenko
@ 2021-07-02  8:39   ` Andrew Rybchenko
  2021-07-02  8:39   ` [dpdk-dev] [PATCH v4 12/20] net/sfc: reserve RxQ for counters Andrew Rybchenko
                     ` (9 subsequent siblings)
  20 siblings, 0 replies; 104+ messages in thread
From: Andrew Rybchenko @ 2021-07-02  8:39 UTC (permalink / raw)
  To: dev; +Cc: David Marchand, Andy Moreton, Ivan Malov

The driver requires service cores for housekeeping. Share these
cores among multiple adapters and various purposes to avoid extra
CPU overhead.

Since housekeeping services will talk to the NIC, it should be
possible to choose a logical core on the matching NUMA node.
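
A minimal sketch of the intended lookup pattern (illustrative only;
example_have_service_core is a hypothetical helper mirroring the fallback
used by the counter code added later in the series):

static bool
example_have_service_core(struct sfc_adapter *sa)
{
	uint32_t cid;

	/* Prefer a service lcore on the adapter's NUMA node */
	cid = sfc_get_service_lcore(sa->socket_id);

	/* Fall back to any NUMA node if nothing matches */
	if (cid == RTE_MAX_LCORE && sa->socket_id != SOCKET_ID_ANY)
		cid = sfc_get_service_lcore(SOCKET_ID_ANY);

	/* RTE_MAX_LCORE means no service core has been reserved at all */
	return cid != RTE_MAX_LCORE;
}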

Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
Reviewed-by: Ivan Malov <ivan.malov@oktetlabs.ru>
---
 drivers/net/sfc/meson.build   |  1 +
 drivers/net/sfc/sfc_service.c | 99 +++++++++++++++++++++++++++++++++++
 drivers/net/sfc/sfc_service.h | 20 +++++++
 3 files changed, 120 insertions(+)
 create mode 100644 drivers/net/sfc/sfc_service.c
 create mode 100644 drivers/net/sfc/sfc_service.h

diff --git a/drivers/net/sfc/meson.build b/drivers/net/sfc/meson.build
index ccf5984d87..4ac97e8d43 100644
--- a/drivers/net/sfc/meson.build
+++ b/drivers/net/sfc/meson.build
@@ -62,4 +62,5 @@ sources = files(
         'sfc_ef10_tx.c',
         'sfc_ef100_rx.c',
         'sfc_ef100_tx.c',
+        'sfc_service.c',
 )
diff --git a/drivers/net/sfc/sfc_service.c b/drivers/net/sfc/sfc_service.c
new file mode 100644
index 0000000000..9c89484406
--- /dev/null
+++ b/drivers/net/sfc/sfc_service.c
@@ -0,0 +1,99 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ *
+ * Copyright(c) 2020-2021 Xilinx, Inc.
+ */
+
+#include <stdint.h>
+
+#include <rte_common.h>
+#include <rte_spinlock.h>
+#include <rte_lcore.h>
+#include <rte_service.h>
+#include <rte_memory.h>
+
+#include "sfc_log.h"
+#include "sfc_service.h"
+#include "sfc_debug.h"
+
+static uint32_t sfc_service_lcore[RTE_MAX_NUMA_NODES];
+static rte_spinlock_t sfc_service_lcore_lock = RTE_SPINLOCK_INITIALIZER;
+
+RTE_INIT(sfc_service_lcore_init)
+{
+	size_t i;
+
+	for (i = 0; i < RTE_DIM(sfc_service_lcore); ++i)
+		sfc_service_lcore[i] = RTE_MAX_LCORE;
+}
+
+static uint32_t
+sfc_find_service_lcore(int *socket_id)
+{
+	uint32_t service_core_list[RTE_MAX_LCORE];
+	uint32_t lcore_id;
+	int num;
+	int i;
+
+	SFC_ASSERT(rte_spinlock_is_locked(&sfc_service_lcore_lock));
+
+	num = rte_service_lcore_list(service_core_list,
+				    RTE_DIM(service_core_list));
+	if (num == 0) {
+		SFC_GENERIC_LOG(WARNING, "No service cores available");
+		return RTE_MAX_LCORE;
+	}
+	if (num < 0) {
+		SFC_GENERIC_LOG(ERR, "Failed to get service core list");
+		return RTE_MAX_LCORE;
+	}
+
+	for (i = 0; i < num; ++i) {
+		lcore_id = service_core_list[i];
+
+		if (*socket_id == SOCKET_ID_ANY) {
+			*socket_id = rte_lcore_to_socket_id(lcore_id);
+			break;
+		} else if (rte_lcore_to_socket_id(lcore_id) ==
+			   (unsigned int)*socket_id) {
+			break;
+		}
+	}
+
+	if (i == num) {
+		SFC_GENERIC_LOG(WARNING,
+			"No service cores reserved at socket %d", *socket_id);
+		return RTE_MAX_LCORE;
+	}
+
+	return lcore_id;
+}
+
+uint32_t
+sfc_get_service_lcore(int socket_id)
+{
+	uint32_t lcore_id = RTE_MAX_LCORE;
+
+	rte_spinlock_lock(&sfc_service_lcore_lock);
+
+	if (socket_id != SOCKET_ID_ANY) {
+		lcore_id = sfc_service_lcore[socket_id];
+	} else {
+		size_t i;
+
+		for (i = 0; i < RTE_DIM(sfc_service_lcore); ++i) {
+			if (sfc_service_lcore[i] != RTE_MAX_LCORE) {
+				lcore_id = sfc_service_lcore[i];
+				break;
+			}
+		}
+	}
+
+	if (lcore_id == RTE_MAX_LCORE) {
+		lcore_id = sfc_find_service_lcore(&socket_id);
+		if (lcore_id != RTE_MAX_LCORE)
+			sfc_service_lcore[socket_id] = lcore_id;
+	}
+
+	rte_spinlock_unlock(&sfc_service_lcore_lock);
+	return lcore_id;
+}
diff --git a/drivers/net/sfc/sfc_service.h b/drivers/net/sfc/sfc_service.h
new file mode 100644
index 0000000000..bbcce28479
--- /dev/null
+++ b/drivers/net/sfc/sfc_service.h
@@ -0,0 +1,20 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ *
+ * Copyright(c) 2020-2021 Xilinx, Inc.
+ */
+
+#ifndef _SFC_SERVICE_H
+#define _SFC_SERVICE_H
+
+#include <stdint.h>
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+uint32_t sfc_get_service_lcore(int socket_id);
+
+#ifdef __cplusplus
+}
+#endif
+#endif  /* _SFC_SERVICE_H */
-- 
2.30.2


^ permalink raw reply	[flat|nested] 104+ messages in thread

* [dpdk-dev] [PATCH v4 12/20] net/sfc: reserve RxQ for counters
  2021-07-02  8:39 ` [dpdk-dev] [PATCH v4 00/20] net/sfc: support flow API COUNT action Andrew Rybchenko
                     ` (10 preceding siblings ...)
  2021-07-02  8:39   ` [dpdk-dev] [PATCH v4 11/20] net/sfc: add NUMA-aware registry of service logical cores Andrew Rybchenko
@ 2021-07-02  8:39   ` Andrew Rybchenko
  2021-07-02  8:39   ` [dpdk-dev] [PATCH v4 13/20] common/sfc_efx/base: add counter creation MCDI wrappers Andrew Rybchenko
                     ` (8 subsequent siblings)
  20 siblings, 0 replies; 104+ messages in thread
From: Andrew Rybchenko @ 2021-07-02  8:39 UTC (permalink / raw)
  To: dev; +Cc: David Marchand, Igor Romanov, Andy Moreton, Ivan Malov

From: Igor Romanov <igor.romanov@oktetlabs.ru>

MAE delivers counter data as special packets via a dedicated Rx queue.
Reserve an RxQ so that it does not interfere with ethdev Rx queues.
A routine to handle these packets will be added later.

There is no point in reserving the queue if no service cores are
available and, thus, counters cannot be used.
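
A brief sketch of the resulting software index layout and helper usage
(illustrative only, assuming the counter RxQ is actually allocated;
example_counter_rxq_ids is a hypothetical helper):

static void
example_counter_rxq_ids(struct sfc_adapter_shared *sas)
{
	/*
	 * With the counter RxQ allocated:
	 *   RxQ sw_index 0     -> reserved counter RxQ (no ethdev queue id)
	 *   RxQ sw_index 1..N  -> ethdev Rx queues 0..N-1
	 *   EvQ sw_index 0     -> management EvQ
	 *   EvQ sw_index 1     -> EvQ of the reserved counter RxQ
	 */
	sfc_sw_index_t counter_rxq = sfc_counters_rxq_sw_index(sas);
	sfc_ethdev_qid_t qid =
		sfc_ethdev_rx_qid_by_rxq_sw_index(sas, counter_rxq);

	/* The reserved queue is not visible to ethdev */
	RTE_VERIFY(qid == SFC_ETHDEV_QID_INVALID);
}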

Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
Reviewed-by: Ivan Malov <ivan.malov@oktetlabs.ru>
---
 drivers/net/sfc/meson.build       |   1 +
 drivers/net/sfc/sfc.c             |  68 ++++++++--
 drivers/net/sfc/sfc.h             |  19 +++
 drivers/net/sfc/sfc_dp.h          |   2 +
 drivers/net/sfc/sfc_ev.h          |  72 ++++++++--
 drivers/net/sfc/sfc_mae.c         |   1 +
 drivers/net/sfc/sfc_mae_counter.c | 217 ++++++++++++++++++++++++++++++
 drivers/net/sfc/sfc_mae_counter.h |  44 ++++++
 drivers/net/sfc/sfc_rx.c          |  43 ++++--
 9 files changed, 438 insertions(+), 29 deletions(-)
 create mode 100644 drivers/net/sfc/sfc_mae_counter.c
 create mode 100644 drivers/net/sfc/sfc_mae_counter.h

diff --git a/drivers/net/sfc/meson.build b/drivers/net/sfc/meson.build
index 4ac97e8d43..f8880f740a 100644
--- a/drivers/net/sfc/meson.build
+++ b/drivers/net/sfc/meson.build
@@ -55,6 +55,7 @@ sources = files(
         'sfc_filter.c',
         'sfc_switch.c',
         'sfc_mae.c',
+        'sfc_mae_counter.c',
         'sfc_flow.c',
         'sfc_dp.c',
         'sfc_ef10_rx.c',
diff --git a/drivers/net/sfc/sfc.c b/drivers/net/sfc/sfc.c
index 3477c7530b..4097cf39de 100644
--- a/drivers/net/sfc/sfc.c
+++ b/drivers/net/sfc/sfc.c
@@ -20,6 +20,7 @@
 #include "sfc_log.h"
 #include "sfc_ev.h"
 #include "sfc_rx.h"
+#include "sfc_mae_counter.h"
 #include "sfc_tx.h"
 #include "sfc_kvargs.h"
 #include "sfc_tweak.h"
@@ -174,6 +175,7 @@ static int
 sfc_estimate_resource_limits(struct sfc_adapter *sa)
 {
 	const efx_nic_cfg_t *encp = efx_nic_cfg_get(sa->nic);
+	struct sfc_adapter_shared *sas = sfc_sa2shared(sa);
 	efx_drv_limits_t limits;
 	int rc;
 	uint32_t evq_allocated;
@@ -235,17 +237,53 @@ sfc_estimate_resource_limits(struct sfc_adapter *sa)
 	rxq_allocated = MIN(rxq_allocated, limits.edl_max_rxq_count);
 	txq_allocated = MIN(txq_allocated, limits.edl_max_txq_count);
 
-	/* Subtract management EVQ not used for traffic */
-	SFC_ASSERT(evq_allocated > 0);
+	/*
+	 * Subtract management EVQ not used for traffic
+	 * The resource allocation strategy is as follows:
+	 * - one EVQ for management
+	 * - one EVQ for each ethdev RXQ
+	 * - one EVQ for each ethdev TXQ
+	 * - one EVQ and one RXQ for optional MAE counters.
+	 */
+	if (evq_allocated == 0) {
+		sfc_err(sa, "count of allocated EvQ is 0");
+		rc = ENOMEM;
+		goto fail_allocate_evq;
+	}
 	evq_allocated--;
 
-	/* Right now we use separate EVQ for Rx and Tx */
-	sa->rxq_max = MIN(rxq_allocated, evq_allocated / 2);
-	sa->txq_max = MIN(txq_allocated, evq_allocated - sa->rxq_max);
+	/*
+	 * Reserve absolutely required minimum.
+	 * Right now we use separate EVQ for Rx and Tx.
+	 */
+	if (rxq_allocated > 0 && evq_allocated > 0) {
+		sa->rxq_max = 1;
+		rxq_allocated--;
+		evq_allocated--;
+	}
+	if (txq_allocated > 0 && evq_allocated > 0) {
+		sa->txq_max = 1;
+		txq_allocated--;
+		evq_allocated--;
+	}
+
+	if (sfc_mae_counter_rxq_required(sa) &&
+	    rxq_allocated > 0 && evq_allocated > 0) {
+		rxq_allocated--;
+		evq_allocated--;
+		sas->counters_rxq_allocated = true;
+	} else {
+		sas->counters_rxq_allocated = false;
+	}
+
+	/* Add remaining allocated queues */
+	sa->rxq_max += MIN(rxq_allocated, evq_allocated / 2);
+	sa->txq_max += MIN(txq_allocated, evq_allocated - sa->rxq_max);
 
 	/* Keep NIC initialized */
 	return 0;
 
+fail_allocate_evq:
 fail_get_vi_pool:
 	efx_nic_fini(sa->nic);
 fail_nic_init:
@@ -256,14 +294,20 @@ static int
 sfc_set_drv_limits(struct sfc_adapter *sa)
 {
 	const struct rte_eth_dev_data *data = sa->eth_dev->data;
+	uint32_t rxq_reserved = sfc_nb_reserved_rxq(sfc_sa2shared(sa));
 	efx_drv_limits_t lim;
 
 	memset(&lim, 0, sizeof(lim));
 
-	/* Limits are strict since take into account initial estimation */
+	/*
+	 * Limits are strict since they take into account initial estimation.
+	 * Resource allocation strategy is described in
+	 * sfc_estimate_resource_limits().
+	 */
 	lim.edl_min_evq_count = lim.edl_max_evq_count =
-		1 + data->nb_rx_queues + data->nb_tx_queues;
-	lim.edl_min_rxq_count = lim.edl_max_rxq_count = data->nb_rx_queues;
+		1 + data->nb_rx_queues + data->nb_tx_queues + rxq_reserved;
+	lim.edl_min_rxq_count = lim.edl_max_rxq_count =
+		data->nb_rx_queues + rxq_reserved;
 	lim.edl_min_txq_count = lim.edl_max_txq_count = data->nb_tx_queues;
 
 	return efx_nic_set_drv_limits(sa->nic, &lim);
@@ -834,6 +878,10 @@ sfc_attach(struct sfc_adapter *sa)
 	if (rc != 0)
 		goto fail_filter_attach;
 
+	rc = sfc_mae_counter_rxq_attach(sa);
+	if (rc != 0)
+		goto fail_mae_counter_rxq_attach;
+
 	rc = sfc_mae_attach(sa);
 	if (rc != 0)
 		goto fail_mae_attach;
@@ -862,6 +910,9 @@ sfc_attach(struct sfc_adapter *sa)
 	sfc_mae_detach(sa);
 
 fail_mae_attach:
+	sfc_mae_counter_rxq_detach(sa);
+
+fail_mae_counter_rxq_attach:
 	sfc_filter_detach(sa);
 
 fail_filter_attach:
@@ -903,6 +954,7 @@ sfc_detach(struct sfc_adapter *sa)
 	sfc_flow_fini(sa);
 
 	sfc_mae_detach(sa);
+	sfc_mae_counter_rxq_detach(sa);
 	sfc_filter_detach(sa);
 	sfc_rss_detach(sa);
 	sfc_port_detach(sa);
diff --git a/drivers/net/sfc/sfc.h b/drivers/net/sfc/sfc.h
index 00fc26cf0e..546739bd4a 100644
--- a/drivers/net/sfc/sfc.h
+++ b/drivers/net/sfc/sfc.h
@@ -186,6 +186,8 @@ struct sfc_adapter_shared {
 
 	char				*dp_rx_name;
 	char				*dp_tx_name;
+
+	bool				counters_rxq_allocated;
 };
 
 /* Adapter process private data */
@@ -205,6 +207,15 @@ sfc_adapter_priv_by_eth_dev(struct rte_eth_dev *eth_dev)
 	return sap;
 }
 
+/* RxQ dedicated for counters (counter only RxQ) data */
+struct sfc_counter_rxq {
+	unsigned int			state;
+#define SFC_COUNTER_RXQ_ATTACHED		0x1
+#define SFC_COUNTER_RXQ_INITIALIZED		0x2
+	sfc_sw_index_t			sw_index;
+	struct rte_mempool		*mp;
+};
+
 /* Adapter private data */
 struct sfc_adapter {
 	/*
@@ -283,6 +294,8 @@ struct sfc_adapter {
 	bool				mgmt_evq_running;
 	struct sfc_evq			*mgmt_evq;
 
+	struct sfc_counter_rxq		counter_rxq;
+
 	struct sfc_rxq			*rxq_ctrl;
 	struct sfc_txq			*txq_ctrl;
 
@@ -357,6 +370,12 @@ sfc_adapter_lock_fini(__rte_unused struct sfc_adapter *sa)
 	/* Just for symmetry of the API */
 }
 
+static inline unsigned int
+sfc_nb_counter_rxq(const struct sfc_adapter_shared *sas)
+{
+	return sas->counters_rxq_allocated ? 1 : 0;
+}
+
 /** Get the number of milliseconds since boot from the default timer */
 static inline uint64_t
 sfc_get_system_msecs(void)
diff --git a/drivers/net/sfc/sfc_dp.h b/drivers/net/sfc/sfc_dp.h
index 76065483d4..61c1a3fbac 100644
--- a/drivers/net/sfc/sfc_dp.h
+++ b/drivers/net/sfc/sfc_dp.h
@@ -97,6 +97,8 @@ struct sfc_dp {
 TAILQ_HEAD(sfc_dp_list, sfc_dp);
 
 typedef unsigned int sfc_sw_index_t;
+#define SFC_SW_INDEX_INVALID	((sfc_sw_index_t)(UINT_MAX))
+
 typedef int32_t	sfc_ethdev_qid_t;
 #define SFC_ETHDEV_QID_INVALID	((sfc_ethdev_qid_t)(-1))
 
diff --git a/drivers/net/sfc/sfc_ev.h b/drivers/net/sfc/sfc_ev.h
index 3f3c4b5b9a..b2a0380205 100644
--- a/drivers/net/sfc/sfc_ev.h
+++ b/drivers/net/sfc/sfc_ev.h
@@ -66,36 +66,87 @@ sfc_mgmt_evq_sw_index(__rte_unused const struct sfc_adapter_shared *sas)
 	return 0;
 }
 
+/* Return the number of Rx queues reserved for driver's internal use */
+static inline unsigned int
+sfc_nb_reserved_rxq(const struct sfc_adapter_shared *sas)
+{
+	return sfc_nb_counter_rxq(sas);
+}
+
+static inline unsigned int
+sfc_nb_reserved_evq(const struct sfc_adapter_shared *sas)
+{
+	/* An EvQ is required for each reserved RxQ */
+	return 1 + sfc_nb_reserved_rxq(sas);
+}
+
+/*
+ * The mapping functions that return SW index of a specific reserved
+ * queue rely on the relative order of reserved queues. Some reserved
+ * queues are optional, and if they are disabled or not supported, then
+ * the function for that specific reserved queue will return previous
+ * valid index of a reserved queue in the dependency chain or
+ * SFC_SW_INDEX_INVALID if it is the first reserved queue in the chain.
+ * If at least one of the reserved queues in the chain is enabled, then
+ * the corresponding function will give valid SW index, even if previous
+ * functions in the chain returned SFC_SW_INDEX_INVALID, since this value
+ * is one less than the first valid SW index.
+ *
+ * The dependency mechanism is utilized to avoid rigid defines for SW indices
+ * for reserved queues and to allow these indices to shrink and make space
+ * for ethdev queue indices when some of the reserved queues are disabled.
+ */
+
+static inline sfc_sw_index_t
+sfc_counters_rxq_sw_index(const struct sfc_adapter_shared *sas)
+{
+	return sas->counters_rxq_allocated ? 0 : SFC_SW_INDEX_INVALID;
+}
+
 /*
  * Functions below define event queue to transmit/receive queue and vice
  * versa mapping.
+ * SFC_ETHDEV_QID_INVALID is returned when sw_index is converted to
+ * ethdev_qid, but sw_index represents a reserved queue for driver's
+ * internal use.
  * Own event queue is allocated for management, each Rx and each Tx queue.
  * Zero event queue is used for management events.
- * Rx event queues from 1 to RxQ number follow management event queue.
+ * When counters are supported, one Rx event queue is reserved.
+ * Rx event queues follow reserved event queues.
  * Tx event queues follow Rx event queues.
  */
 
 static inline sfc_ethdev_qid_t
-sfc_ethdev_rx_qid_by_rxq_sw_index(__rte_unused struct sfc_adapter_shared *sas,
+sfc_ethdev_rx_qid_by_rxq_sw_index(struct sfc_adapter_shared *sas,
 				  sfc_sw_index_t rxq_sw_index)
 {
-	/* Only ethdev queues are present for now */
-	return rxq_sw_index;
+	if (rxq_sw_index < sfc_nb_reserved_rxq(sas))
+		return SFC_ETHDEV_QID_INVALID;
+
+	return rxq_sw_index - sfc_nb_reserved_rxq(sas);
 }
 
 static inline sfc_sw_index_t
-sfc_rxq_sw_index_by_ethdev_rx_qid(__rte_unused struct sfc_adapter_shared *sas,
+sfc_rxq_sw_index_by_ethdev_rx_qid(struct sfc_adapter_shared *sas,
 				  sfc_ethdev_qid_t ethdev_qid)
 {
-	/* Only ethdev queues are present for now */
-	return ethdev_qid;
+	return sfc_nb_reserved_rxq(sas) + ethdev_qid;
 }
 
 static inline sfc_sw_index_t
-sfc_evq_sw_index_by_rxq_sw_index(__rte_unused struct sfc_adapter *sa,
+sfc_evq_sw_index_by_rxq_sw_index(struct sfc_adapter *sa,
 				 sfc_sw_index_t rxq_sw_index)
 {
-	return 1 + rxq_sw_index;
+	struct sfc_adapter_shared *sas = sfc_sa2shared(sa);
+	sfc_ethdev_qid_t ethdev_qid;
+
+	ethdev_qid = sfc_ethdev_rx_qid_by_rxq_sw_index(sas, rxq_sw_index);
+	if (ethdev_qid == SFC_ETHDEV_QID_INVALID) {
+		/* One EvQ is reserved for management */
+		return 1 + rxq_sw_index;
+	}
+
+	return sfc_nb_reserved_evq(sas) + ethdev_qid;
 }
 
 static inline sfc_ethdev_qid_t
@@ -118,7 +169,8 @@ static inline sfc_sw_index_t
 sfc_evq_sw_index_by_txq_sw_index(struct sfc_adapter *sa,
 				 sfc_sw_index_t txq_sw_index)
 {
-	return 1 + sa->eth_dev->data->nb_rx_queues + txq_sw_index;
+	return sfc_nb_reserved_evq(sfc_sa2shared(sa)) +
+		sa->eth_dev->data->nb_rx_queues + txq_sw_index;
 }
 
 int sfc_ev_attach(struct sfc_adapter *sa);
diff --git a/drivers/net/sfc/sfc_mae.c b/drivers/net/sfc/sfc_mae.c
index a2c0aa1436..8ffcf72d88 100644
--- a/drivers/net/sfc/sfc_mae.c
+++ b/drivers/net/sfc/sfc_mae.c
@@ -16,6 +16,7 @@
 #include "efx.h"
 
 #include "sfc.h"
+#include "sfc_mae_counter.h"
 #include "sfc_log.h"
 #include "sfc_switch.h"
 
diff --git a/drivers/net/sfc/sfc_mae_counter.c b/drivers/net/sfc/sfc_mae_counter.c
new file mode 100644
index 0000000000..c7646cf7b1
--- /dev/null
+++ b/drivers/net/sfc/sfc_mae_counter.c
@@ -0,0 +1,217 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ *
+ * Copyright(c) 2020-2021 Xilinx, Inc.
+ */
+
+#include <rte_common.h>
+
+#include "efx.h"
+
+#include "sfc_ev.h"
+#include "sfc.h"
+#include "sfc_rx.h"
+#include "sfc_mae_counter.h"
+#include "sfc_service.h"
+
+static uint32_t
+sfc_mae_counter_get_service_lcore(struct sfc_adapter *sa)
+{
+	uint32_t cid;
+
+	cid = sfc_get_service_lcore(sa->socket_id);
+	if (cid != RTE_MAX_LCORE)
+		return cid;
+
+	if (sa->socket_id != SOCKET_ID_ANY)
+		cid = sfc_get_service_lcore(SOCKET_ID_ANY);
+
+	if (cid == RTE_MAX_LCORE) {
+		sfc_warn(sa, "failed to get service lcore for counter service");
+	} else if (sa->socket_id != SOCKET_ID_ANY) {
+		sfc_warn(sa,
+			"failed to get service lcore for counter service at socket %d, but got at socket %u",
+			sa->socket_id, rte_lcore_to_socket_id(cid));
+	}
+	return cid;
+}
+
+bool
+sfc_mae_counter_rxq_required(struct sfc_adapter *sa)
+{
+	const efx_nic_cfg_t *encp = efx_nic_cfg_get(sa->nic);
+
+	if (encp->enc_mae_supported == B_FALSE)
+		return false;
+
+	if (sfc_mae_counter_get_service_lcore(sa) == RTE_MAX_LCORE)
+		return false;
+
+	return true;
+}
+
+int
+sfc_mae_counter_rxq_attach(struct sfc_adapter *sa)
+{
+	struct sfc_adapter_shared * const sas = sfc_sa2shared(sa);
+	char name[RTE_MEMPOOL_NAMESIZE];
+	struct rte_mempool *mp;
+	unsigned int n_elements;
+	unsigned int cache_size;
+	/* The mempool is internal and private area is not required */
+	const uint16_t priv_size = 0;
+	const uint16_t data_room_size = RTE_PKTMBUF_HEADROOM +
+		SFC_MAE_COUNTER_STREAM_PACKET_SIZE;
+	int rc;
+
+	sfc_log_init(sa, "entry");
+
+	if (!sas->counters_rxq_allocated) {
+		sfc_log_init(sa, "counter queue is not supported - skip");
+		return 0;
+	}
+
+	/*
+	 * At least one element in the ring is always unused to distinguish
+	 * between empty and full ring cases.
+	 */
+	n_elements = SFC_COUNTER_RXQ_RX_DESC_COUNT - 1;
+
+	/*
+	 * The cache must have sufficient space to put received buckets
+	 * before they're reused on refill.
+	 */
+	cache_size = rte_align32pow2(SFC_COUNTER_RXQ_REFILL_LEVEL +
+				     SFC_MAE_COUNTER_RX_BURST - 1);
+
+	if (snprintf(name, sizeof(name), "counter_rxq-pool-%u", sas->port_id) >=
+	    (int)sizeof(name)) {
+		sfc_err(sa, "failed: counter RxQ mempool name is too long");
+		rc = ENAMETOOLONG;
+		goto fail_long_name;
+	}
+
+	/*
+	 * It could be single-producer single-consumer ring mempool which
+	 * requires minimal barriers. However, cache size and refill/burst
+	 * policy are aligned, therefore it does not matter which
+	 * mempool backend is chosen since backend is unused.
+	 */
+	mp = rte_pktmbuf_pool_create(name, n_elements, cache_size,
+				     priv_size, data_room_size, sa->socket_id);
+	if (mp == NULL) {
+		sfc_err(sa, "failed to create counter RxQ mempool");
+		rc = rte_errno;
+		goto fail_mp_create;
+	}
+
+	sa->counter_rxq.sw_index = sfc_counters_rxq_sw_index(sas);
+	sa->counter_rxq.mp = mp;
+	sa->counter_rxq.state |= SFC_COUNTER_RXQ_ATTACHED;
+
+	sfc_log_init(sa, "done");
+
+	return 0;
+
+fail_mp_create:
+fail_long_name:
+	sfc_log_init(sa, "failed: %s", rte_strerror(rc));
+
+	return rc;
+}
+
+void
+sfc_mae_counter_rxq_detach(struct sfc_adapter *sa)
+{
+	struct sfc_adapter_shared * const sas = sfc_sa2shared(sa);
+
+	sfc_log_init(sa, "entry");
+
+	if (!sas->counters_rxq_allocated) {
+		sfc_log_init(sa, "counter queue is not supported - skip");
+		return;
+	}
+
+	if ((sa->counter_rxq.state & SFC_COUNTER_RXQ_ATTACHED) == 0) {
+		sfc_log_init(sa, "counter queue is not attached - skip");
+		return;
+	}
+
+	rte_mempool_free(sa->counter_rxq.mp);
+	sa->counter_rxq.mp = NULL;
+	sa->counter_rxq.state &= ~SFC_COUNTER_RXQ_ATTACHED;
+
+	sfc_log_init(sa, "done");
+}
+
+int
+sfc_mae_counter_rxq_init(struct sfc_adapter *sa)
+{
+	struct sfc_adapter_shared * const sas = sfc_sa2shared(sa);
+	const struct rte_eth_rxconf rxconf = {
+		.rx_free_thresh = SFC_COUNTER_RXQ_REFILL_LEVEL,
+		.rx_drop_en = 1,
+	};
+	uint16_t nb_rx_desc = SFC_COUNTER_RXQ_RX_DESC_COUNT;
+	int rc;
+
+	sfc_log_init(sa, "entry");
+
+	if (!sas->counters_rxq_allocated) {
+		sfc_log_init(sa, "counter queue is not supported - skip");
+		return 0;
+	}
+
+	if ((sa->counter_rxq.state & SFC_COUNTER_RXQ_ATTACHED) == 0) {
+		sfc_log_init(sa, "counter queue is not attached - skip");
+		return 0;
+	}
+
+	nb_rx_desc = RTE_MIN(nb_rx_desc, sa->rxq_max_entries);
+	nb_rx_desc = RTE_MAX(nb_rx_desc, sa->rxq_min_entries);
+
+	rc = sfc_rx_qinit_info(sa, sa->counter_rxq.sw_index,
+			       EFX_RXQ_FLAG_USER_MARK);
+	if (rc != 0)
+		goto fail_counter_rxq_init_info;
+
+	rc = sfc_rx_qinit(sa, sa->counter_rxq.sw_index, nb_rx_desc,
+			  sa->socket_id, &rxconf, sa->counter_rxq.mp);
+	if (rc != 0) {
+		sfc_err(sa, "failed to init counter RxQ");
+		goto fail_counter_rxq_init;
+	}
+
+	sa->counter_rxq.state |= SFC_COUNTER_RXQ_INITIALIZED;
+
+	sfc_log_init(sa, "done");
+
+	return 0;
+
+fail_counter_rxq_init:
+fail_counter_rxq_init_info:
+	sfc_log_init(sa, "failed: %s", rte_strerror(rc));
+
+	return rc;
+}
+
+void
+sfc_mae_counter_rxq_fini(struct sfc_adapter *sa)
+{
+	struct sfc_adapter_shared * const sas = sfc_sa2shared(sa);
+
+	sfc_log_init(sa, "entry");
+
+	if (!sas->counters_rxq_allocated) {
+		sfc_log_init(sa, "counter queue is not supported - skip");
+		return;
+	}
+
+	if ((sa->counter_rxq.state & SFC_COUNTER_RXQ_INITIALIZED) == 0) {
+		sfc_log_init(sa, "counter queue is not initialized - skip");
+		return;
+	}
+
+	sfc_rx_qfini(sa, sa->counter_rxq.sw_index);
+
+	sfc_log_init(sa, "done");
+}
diff --git a/drivers/net/sfc/sfc_mae_counter.h b/drivers/net/sfc/sfc_mae_counter.h
new file mode 100644
index 0000000000..f16d64a999
--- /dev/null
+++ b/drivers/net/sfc/sfc_mae_counter.h
@@ -0,0 +1,44 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ *
+ * Copyright(c) 2020-2021 Xilinx, Inc.
+ */
+
+#ifndef _SFC_MAE_COUNTER_H
+#define _SFC_MAE_COUNTER_H
+
+#include "sfc.h"
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/* Default values for a user of counter RxQ */
+#define SFC_MAE_COUNTER_RX_BURST 32
+#define SFC_COUNTER_RXQ_RX_DESC_COUNT 256
+
+/*
+ * The refill level is chosen based on the requirement to keep the
+ * number of give-credits operations low.
+ */
+#define SFC_COUNTER_RXQ_REFILL_LEVEL (SFC_COUNTER_RXQ_RX_DESC_COUNT / 4)
+
+/*
+ * SF-122415-TC states that the packetiser that generates packets for
+ * the counter stream must support 9k frames. Use the maximum supported
+ * size, since with a large number of counters it is better to receive
+ * fewer, larger counter update packets.
+ */
+#define SFC_MAE_COUNTER_STREAM_PACKET_SIZE 9216
+
+bool sfc_mae_counter_rxq_required(struct sfc_adapter *sa);
+
+int sfc_mae_counter_rxq_attach(struct sfc_adapter *sa);
+void sfc_mae_counter_rxq_detach(struct sfc_adapter *sa);
+
+int sfc_mae_counter_rxq_init(struct sfc_adapter *sa);
+void sfc_mae_counter_rxq_fini(struct sfc_adapter *sa);
+
+#ifdef __cplusplus
+}
+#endif
+#endif  /* _SFC_MAE_COUNTER_H */
diff --git a/drivers/net/sfc/sfc_rx.c b/drivers/net/sfc/sfc_rx.c
index c7a7bd66ef..0532f77082 100644
--- a/drivers/net/sfc/sfc_rx.c
+++ b/drivers/net/sfc/sfc_rx.c
@@ -16,6 +16,7 @@
 #include "sfc_log.h"
 #include "sfc_ev.h"
 #include "sfc_rx.h"
+#include "sfc_mae_counter.h"
 #include "sfc_kvargs.h"
 #include "sfc_tweak.h"
 
@@ -1705,6 +1706,9 @@ sfc_rx_configure(struct sfc_adapter *sa)
 	struct sfc_rss *rss = &sas->rss;
 	struct rte_eth_conf *dev_conf = &sa->eth_dev->data->dev_conf;
 	const unsigned int nb_rx_queues = sa->eth_dev->data->nb_rx_queues;
+	const unsigned int nb_rsrv_rx_queues = sfc_nb_reserved_rxq(sas);
+	const unsigned int nb_rxq_total = nb_rx_queues + nb_rsrv_rx_queues;
+	bool reconfigure;
 	int rc;
 
 	sfc_log_init(sa, "nb_rx_queues=%u (old %u)",
@@ -1714,12 +1718,15 @@ sfc_rx_configure(struct sfc_adapter *sa)
 	if (rc != 0)
 		goto fail_check_mode;
 
-	if (nb_rx_queues == sas->rxq_count)
+	if (nb_rxq_total == sas->rxq_count) {
+		reconfigure = true;
 		goto configure_rss;
+	}
 
 	if (sas->rxq_info == NULL) {
+		reconfigure = false;
 		rc = ENOMEM;
-		sas->rxq_info = rte_calloc_socket("sfc-rxqs", nb_rx_queues,
+		sas->rxq_info = rte_calloc_socket("sfc-rxqs", nb_rxq_total,
 						  sizeof(sas->rxq_info[0]), 0,
 						  sa->socket_id);
 		if (sas->rxq_info == NULL)
@@ -1730,39 +1737,42 @@ sfc_rx_configure(struct sfc_adapter *sa)
 		 * since it should not be shared.
 		 */
 		rc = ENOMEM;
-		sa->rxq_ctrl = calloc(nb_rx_queues, sizeof(sa->rxq_ctrl[0]));
+		sa->rxq_ctrl = calloc(nb_rxq_total, sizeof(sa->rxq_ctrl[0]));
 		if (sa->rxq_ctrl == NULL)
 			goto fail_rxqs_ctrl_alloc;
 	} else {
 		struct sfc_rxq_info *new_rxq_info;
 		struct sfc_rxq *new_rxq_ctrl;
 
+		reconfigure = true;
+
+		/* Do not uninitialize reserved queues */
 		if (nb_rx_queues < sas->ethdev_rxq_count)
 			sfc_rx_fini_queues(sa, nb_rx_queues);
 
 		rc = ENOMEM;
 		new_rxq_info =
 			rte_realloc(sas->rxq_info,
-				    nb_rx_queues * sizeof(sas->rxq_info[0]), 0);
-		if (new_rxq_info == NULL && nb_rx_queues > 0)
+				    nb_rxq_total * sizeof(sas->rxq_info[0]), 0);
+		if (new_rxq_info == NULL && nb_rxq_total > 0)
 			goto fail_rxqs_realloc;
 
 		rc = ENOMEM;
 		new_rxq_ctrl = realloc(sa->rxq_ctrl,
-				       nb_rx_queues * sizeof(sa->rxq_ctrl[0]));
-		if (new_rxq_ctrl == NULL && nb_rx_queues > 0)
+				       nb_rxq_total * sizeof(sa->rxq_ctrl[0]));
+		if (new_rxq_ctrl == NULL && nb_rxq_total > 0)
 			goto fail_rxqs_ctrl_realloc;
 
 		sas->rxq_info = new_rxq_info;
 		sa->rxq_ctrl = new_rxq_ctrl;
-		if (nb_rx_queues > sas->rxq_count) {
+		if (nb_rxq_total > sas->rxq_count) {
 			unsigned int rxq_count = sas->rxq_count;
 
 			memset(&sas->rxq_info[rxq_count], 0,
-			       (nb_rx_queues - rxq_count) *
+			       (nb_rxq_total - rxq_count) *
 			       sizeof(sas->rxq_info[0]));
 			memset(&sa->rxq_ctrl[rxq_count], 0,
-			       (nb_rx_queues - rxq_count) *
+			       (nb_rxq_total - rxq_count) *
 			       sizeof(sa->rxq_ctrl[0]));
 		}
 	}
@@ -1779,7 +1789,13 @@ sfc_rx_configure(struct sfc_adapter *sa)
 		sas->ethdev_rxq_count++;
 	}
 
-	sas->rxq_count = sas->ethdev_rxq_count;
+	sas->rxq_count = sas->ethdev_rxq_count + nb_rsrv_rx_queues;
+
+	if (!reconfigure) {
+		rc = sfc_mae_counter_rxq_init(sa);
+		if (rc != 0)
+			goto fail_count_rxq_init;
+	}
 
 configure_rss:
 	rss->channels = (dev_conf->rxmode.mq_mode == ETH_MQ_RX_RSS) ?
@@ -1801,6 +1817,10 @@ sfc_rx_configure(struct sfc_adapter *sa)
 	return 0;
 
 fail_rx_process_adv_conf_rss:
+	if (!reconfigure)
+		sfc_mae_counter_rxq_fini(sa);
+
+fail_count_rxq_init:
 fail_rx_qinit_info:
 fail_rxqs_ctrl_realloc:
 fail_rxqs_realloc:
@@ -1824,6 +1844,7 @@ sfc_rx_close(struct sfc_adapter *sa)
 	struct sfc_rss *rss = &sfc_sa2shared(sa)->rss;
 
 	sfc_rx_fini_queues(sa, 0);
+	sfc_mae_counter_rxq_fini(sa);
 
 	rss->channels = 0;
 
-- 
2.30.2


^ permalink raw reply	[flat|nested] 104+ messages in thread

* [dpdk-dev] [PATCH v4 13/20] common/sfc_efx/base: add counter creation MCDI wrappers
  2021-07-02  8:39 ` [dpdk-dev] [PATCH v4 00/20] net/sfc: support flow API COUNT action Andrew Rybchenko
                     ` (11 preceding siblings ...)
  2021-07-02  8:39   ` [dpdk-dev] [PATCH v4 12/20] net/sfc: reserve RxQ for counters Andrew Rybchenko
@ 2021-07-02  8:39   ` Andrew Rybchenko
  2021-07-02  8:39   ` [dpdk-dev] [PATCH v4 14/20] common/sfc_efx/base: add counter stream " Andrew Rybchenko
                     ` (7 subsequent siblings)
  20 siblings, 0 replies; 104+ messages in thread
From: Andrew Rybchenko @ 2021-07-02  8:39 UTC (permalink / raw)
  To: dev; +Cc: David Marchand, Igor Romanov, Andy Moreton, Ivan Malov

From: Igor Romanov <igor.romanov@oktetlabs.ru>

The user will be able to create and free MAE counters. Support for
associating counters with an action set will be added in upcoming
patches.
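
For illustration only (not part of the patch), a minimal usage sketch of
the new wrappers might look as follows; it assumes an already initialised
efx_nic_t with MAE support and abbreviates error handling:

#include "efx.h"

static efx_rc_t
counter_alloc_free_example(efx_nic_t *enp)
{
	efx_counter_t counter;
	uint32_t n_allocated;
	uint32_t n_freed;
	uint32_t gen_count;
	efx_rc_t rc;

	/* Request a single counter; gen_count helps to match later readings. */
	rc = efx_mae_counters_alloc(enp, 1, &n_allocated, &counter, &gen_count);
	if (rc != 0)
		return (rc);

	/* ... the counter ID would be bound to an action set (later patch) ... */

	/* Release the counter; the FW reports a new generation count. */
	rc = efx_mae_counters_free(enp, 1, &n_freed, &counter, &gen_count);
	return (rc);
}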

Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
Reviewed-by: Ivan Malov <ivan.malov@oktetlabs.ru>
---
 drivers/common/sfc_efx/base/efx.h      |  37 ++++++
 drivers/common/sfc_efx/base/efx_impl.h |   1 +
 drivers/common/sfc_efx/base/efx_mae.c  | 158 +++++++++++++++++++++++++
 drivers/common/sfc_efx/base/efx_mcdi.h |   7 ++
 drivers/common/sfc_efx/version.map     |   2 +
 5 files changed, 205 insertions(+)

diff --git a/drivers/common/sfc_efx/base/efx.h b/drivers/common/sfc_efx/base/efx.h
index f81837a931..b789e19b98 100644
--- a/drivers/common/sfc_efx/base/efx.h
+++ b/drivers/common/sfc_efx/base/efx.h
@@ -4388,6 +4388,10 @@ efx_mae_action_set_fill_in_eh_id(
 	__in				efx_mae_actions_t *spec,
 	__in				const efx_mae_eh_id_t *eh_idp);
 
+typedef struct efx_counter_s {
+	uint32_t id;
+} efx_counter_t;
+
 /* Action set ID */
 typedef struct efx_mae_aset_id_s {
 	uint32_t id;
@@ -4400,6 +4404,39 @@ efx_mae_action_set_alloc(
 	__in				const efx_mae_actions_t *spec,
 	__out				efx_mae_aset_id_t *aset_idp);
 
+/*
+ * Generation count has two purposes:
+ *
+ * 1) Distinguish between counter packets that belong to a freed counter
+ *    and those that belong to a reallocated counter (with the same ID);
+ * 2) Make sure that all packets have been received for a freed counter.
+ *
+ * API users should provide the generation count out parameter to the
+ * allocation function if counters can be reallocated and consistent
+ * counter values are required.
+ *
+ * API users that need consistent final counter values after counter
+ * deallocation or counter stream stop should provide the parameter in
+ * functions that free the counters and stop the counter stream.
+ */
+LIBEFX_API
+extern	__checkReturn			efx_rc_t
+efx_mae_counters_alloc(
+	__in				efx_nic_t *enp,
+	__in				uint32_t n_counters,
+	__out				uint32_t *n_allocatedp,
+	__out_ecount(n_counters)	efx_counter_t *countersp,
+	__out_opt			uint32_t *gen_countp);
+
+LIBEFX_API
+extern	__checkReturn			efx_rc_t
+efx_mae_counters_free(
+	__in				efx_nic_t *enp,
+	__in				uint32_t n_counters,
+	__out				uint32_t *n_freedp,
+	__in_ecount(n_counters)		const efx_counter_t *countersp,
+	__out_opt			uint32_t *gen_countp);
+
 LIBEFX_API
 extern	__checkReturn			efx_rc_t
 efx_mae_action_set_free(
diff --git a/drivers/common/sfc_efx/base/efx_impl.h b/drivers/common/sfc_efx/base/efx_impl.h
index a6b20704ac..b69463385e 100644
--- a/drivers/common/sfc_efx/base/efx_impl.h
+++ b/drivers/common/sfc_efx/base/efx_impl.h
@@ -821,6 +821,7 @@ typedef struct efx_mae_s {
 	/** Outer rule match field capabilities. */
 	efx_mae_field_cap_t		*em_outer_rule_field_caps;
 	size_t				em_outer_rule_field_caps_size;
+	uint32_t			em_max_ncounters;
 } efx_mae_t;
 
 #endif /* EFSYS_OPT_MAE */
diff --git a/drivers/common/sfc_efx/base/efx_mae.c b/drivers/common/sfc_efx/base/efx_mae.c
index c1784211e7..cf6c449a16 100644
--- a/drivers/common/sfc_efx/base/efx_mae.c
+++ b/drivers/common/sfc_efx/base/efx_mae.c
@@ -67,6 +67,9 @@ efx_mae_get_capabilities(
 	maep->em_max_nfields =
 	    MCDI_OUT_DWORD(req, MAE_GET_CAPS_OUT_MATCH_FIELD_COUNT);
 
+	maep->em_max_ncounters =
+	    MCDI_OUT_DWORD(req, MAE_GET_CAPS_OUT_COUNTERS);
+
 	return (0);
 
 fail2:
@@ -2369,6 +2372,161 @@ efx_mae_action_rule_remove(
 
 	return (0);
 
+fail4:
+	EFSYS_PROBE(fail4);
+fail3:
+	EFSYS_PROBE(fail3);
+fail2:
+	EFSYS_PROBE(fail2);
+fail1:
+	EFSYS_PROBE1(fail1, efx_rc_t, rc);
+	return (rc);
+}
+
+	__checkReturn			efx_rc_t
+efx_mae_counters_alloc(
+	__in				efx_nic_t *enp,
+	__in				uint32_t n_counters,
+	__out				uint32_t *n_allocatedp,
+	__out_ecount(n_counters)	efx_counter_t *countersp,
+	__out_opt			uint32_t *gen_countp)
+{
+	EFX_MCDI_DECLARE_BUF(payload,
+	    MC_CMD_MAE_COUNTER_ALLOC_IN_LEN,
+	    MC_CMD_MAE_COUNTER_ALLOC_OUT_LENMAX_MCDI2);
+	efx_mae_t *maep = enp->en_maep;
+	uint32_t n_allocated;
+	efx_mcdi_req_t req;
+	unsigned int i;
+	efx_rc_t rc;
+
+	if (n_counters > maep->em_max_ncounters ||
+	    n_counters < MC_CMD_MAE_COUNTER_ALLOC_OUT_COUNTER_ID_MINNUM ||
+	    n_counters > MC_CMD_MAE_COUNTER_ALLOC_OUT_COUNTER_ID_MAXNUM_MCDI2) {
+		rc = EINVAL;
+		goto fail1;
+	}
+
+	req.emr_cmd = MC_CMD_MAE_COUNTER_ALLOC;
+	req.emr_in_buf = payload;
+	req.emr_in_length = MC_CMD_MAE_COUNTER_ALLOC_IN_LEN;
+	req.emr_out_buf = payload;
+	req.emr_out_length = MC_CMD_MAE_COUNTER_ALLOC_OUT_LEN(n_counters);
+
+	MCDI_IN_SET_DWORD(req, MAE_COUNTER_ALLOC_IN_REQUESTED_COUNT,
+	    n_counters);
+
+	efx_mcdi_execute(enp, &req);
+
+	if (req.emr_rc != 0) {
+		rc = req.emr_rc;
+		goto fail2;
+	}
+
+	if (req.emr_out_length_used < MC_CMD_MAE_COUNTER_ALLOC_OUT_LENMIN) {
+		rc = EMSGSIZE;
+		goto fail3;
+	}
+
+	n_allocated = MCDI_OUT_DWORD(req,
+	    MAE_COUNTER_ALLOC_OUT_COUNTER_ID_COUNT);
+	if (n_allocated < MC_CMD_MAE_COUNTER_ALLOC_OUT_COUNTER_ID_MINNUM) {
+		rc = EFAULT;
+		goto fail4;
+	}
+
+	for (i = 0; i < n_allocated; i++) {
+		countersp[i].id = MCDI_OUT_INDEXED_DWORD(req,
+		    MAE_COUNTER_ALLOC_OUT_COUNTER_ID, i);
+	}
+
+	if (gen_countp != NULL) {
+		*gen_countp = MCDI_OUT_DWORD(req,
+				    MAE_COUNTER_ALLOC_OUT_GENERATION_COUNT);
+	}
+
+	*n_allocatedp = n_allocated;
+
+	return (0);
+
+fail4:
+	EFSYS_PROBE(fail4);
+fail3:
+	EFSYS_PROBE(fail3);
+fail2:
+	EFSYS_PROBE(fail2);
+fail1:
+	EFSYS_PROBE1(fail1, efx_rc_t, rc);
+
+	return (rc);
+}
+
+	__checkReturn			efx_rc_t
+efx_mae_counters_free(
+	__in				efx_nic_t *enp,
+	__in				uint32_t n_counters,
+	__out				uint32_t *n_freedp,
+	__in_ecount(n_counters)		const efx_counter_t *countersp,
+	__out_opt			uint32_t *gen_countp)
+{
+	EFX_MCDI_DECLARE_BUF(payload,
+	    MC_CMD_MAE_COUNTER_FREE_IN_LENMAX_MCDI2,
+	    MC_CMD_MAE_COUNTER_FREE_OUT_LENMAX_MCDI2);
+	efx_mae_t *maep = enp->en_maep;
+	efx_mcdi_req_t req;
+	uint32_t n_freed;
+	unsigned int i;
+	efx_rc_t rc;
+
+	if (n_counters > maep->em_max_ncounters ||
+	    n_counters < MC_CMD_MAE_COUNTER_FREE_IN_FREE_COUNTER_ID_MINNUM ||
+	    n_counters >
+	    MC_CMD_MAE_COUNTER_FREE_IN_FREE_COUNTER_ID_MAXNUM_MCDI2) {
+		rc = EINVAL;
+		goto fail1;
+	}
+
+	req.emr_cmd = MC_CMD_MAE_COUNTER_FREE;
+	req.emr_in_buf = payload;
+	req.emr_in_length = MC_CMD_MAE_COUNTER_FREE_IN_LEN(n_counters);
+	req.emr_out_buf = payload;
+	req.emr_out_length = MC_CMD_MAE_COUNTER_FREE_OUT_LEN(n_counters);
+
+	for (i = 0; i < n_counters; i++) {
+		MCDI_IN_SET_INDEXED_DWORD(req,
+		    MAE_COUNTER_FREE_IN_FREE_COUNTER_ID, i, countersp[i].id);
+	}
+	MCDI_IN_SET_DWORD(req, MAE_COUNTER_FREE_IN_COUNTER_ID_COUNT,
+			  n_counters);
+
+	efx_mcdi_execute(enp, &req);
+
+	if (req.emr_rc != 0) {
+		rc = req.emr_rc;
+		goto fail2;
+	}
+
+	if (req.emr_out_length_used < MC_CMD_MAE_COUNTER_FREE_OUT_LENMIN) {
+		rc = EMSGSIZE;
+		goto fail3;
+	}
+
+	n_freed = MCDI_OUT_DWORD(req, MAE_COUNTER_FREE_OUT_COUNTER_ID_COUNT);
+
+	if (n_freed < MC_CMD_MAE_COUNTER_FREE_OUT_FREED_COUNTER_ID_MINNUM) {
+		rc = EFAULT;
+		goto fail4;
+	}
+
+	if (gen_countp != NULL) {
+		*gen_countp = MCDI_OUT_DWORD(req,
+				    MAE_COUNTER_FREE_OUT_GENERATION_COUNT);
+	}
+
+	*n_freedp = n_freed;
+
+	return (0);
+
 fail4:
 	EFSYS_PROBE(fail4);
 fail3:
diff --git a/drivers/common/sfc_efx/base/efx_mcdi.h b/drivers/common/sfc_efx/base/efx_mcdi.h
index 70a97ea337..90b70de97b 100644
--- a/drivers/common/sfc_efx/base/efx_mcdi.h
+++ b/drivers/common/sfc_efx/base/efx_mcdi.h
@@ -311,6 +311,10 @@ efx_mcdi_phy_module_get_info(
 	EFX_SET_DWORD_FIELD(*MCDI_IN2(_emr, efx_dword_t, _ofst),	\
 		MC_CMD_ ## _field, _value)
 
+#define	MCDI_IN_SET_INDEXED_DWORD(_emr, _ofst, _idx, _value)		\
+	EFX_POPULATE_DWORD_1(*(MCDI_IN2(_emr, efx_dword_t, _ofst) +	\
+			     (_idx)), EFX_DWORD_0, _value)		\
+
 #define	MCDI_IN_POPULATE_DWORD_1(_emr, _ofst, _field1, _value1)		\
 	EFX_POPULATE_DWORD_1(*MCDI_IN2(_emr, efx_dword_t, _ofst),	\
 		MC_CMD_ ## _field1, _value1)
@@ -451,6 +455,9 @@ efx_mcdi_phy_module_get_info(
 	EFX_DWORD_FIELD(*MCDI_OUT2(_emr, efx_dword_t, _ofst),		\
 			MC_CMD_ ## _field)
 
+#define	MCDI_OUT_INDEXED_DWORD(_emr, _ofst, _idx)			\
+	MCDI_OUT_INDEXED_DWORD_FIELD(_emr, _ofst, _idx, EFX_DWORD_0)
+
 #define	MCDI_OUT_INDEXED_DWORD_FIELD(_emr, _ofst, _idx, _field)		\
 	EFX_DWORD_FIELD(*(MCDI_OUT2(_emr, efx_dword_t, _ofst) +		\
 			(_idx)), _field)
diff --git a/drivers/common/sfc_efx/version.map b/drivers/common/sfc_efx/version.map
index d534d8ecb5..d60cd477fa 100644
--- a/drivers/common/sfc_efx/version.map
+++ b/drivers/common/sfc_efx/version.map
@@ -102,6 +102,8 @@ INTERNAL {
 	efx_mae_action_set_spec_fini;
 	efx_mae_action_set_spec_init;
 	efx_mae_action_set_specs_equal;
+	efx_mae_counters_alloc;
+	efx_mae_counters_free;
 	efx_mae_encap_header_alloc;
 	efx_mae_encap_header_free;
 	efx_mae_fini;
-- 
2.30.2


^ permalink raw reply	[flat|nested] 104+ messages in thread

* [dpdk-dev] [PATCH v4 14/20] common/sfc_efx/base: add counter stream MCDI wrappers
  2021-07-02  8:39 ` [dpdk-dev] [PATCH v4 00/20] net/sfc: support flow API COUNT action Andrew Rybchenko
                     ` (12 preceding siblings ...)
  2021-07-02  8:39   ` [dpdk-dev] [PATCH v4 13/20] common/sfc_efx/base: add counter creation MCDI wrappers Andrew Rybchenko
@ 2021-07-02  8:39   ` Andrew Rybchenko
  2021-07-02  8:39   ` [dpdk-dev] [PATCH v4 15/20] common/sfc_efx/base: support counter in action set Andrew Rybchenko
                     ` (6 subsequent siblings)
  20 siblings, 0 replies; 104+ messages in thread
From: Andrew Rybchenko @ 2021-07-02  8:39 UTC (permalink / raw)
  To: dev; +Cc: David Marchand, Igor Romanov, Andy Moreton, Ivan Malov

From: Igor Romanov <igor.romanov@oktetlabs.ru>

These MCDI wrappers will be used to control the packet flow of the
counter Rx queue.
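
As an illustration (not part of the patch), a possible start/credits/stop
sequence with the new wrappers is sketched below; the RxQ identifier, the
packet size value (taken from the driver's SFC_MAE_COUNTER_STREAM_PACKET_SIZE)
and the credit amount are assumptions, and error handling is abbreviated:

#include "efx.h"

/* 9216, mirrors SFC_MAE_COUNTER_STREAM_PACKET_SIZE in the driver */
#define EXAMPLE_PACKET_SIZE 9216

static efx_rc_t
counter_stream_example(efx_nic_t *enp, uint16_t rxq_hw_index)
{
	uint32_t flags_out = 0;
	uint32_t gen_count;
	efx_rc_t rc;

	rc = efx_mae_counters_stream_start(enp, rxq_hw_index,
					   EXAMPLE_PACKET_SIZE,
					   0 /* flags_in */, &flags_out);
	if (rc != 0)
		return (rc);

	/*
	 * With credit-based flow control the driver must return credits
	 * as Rx descriptors are pushed to the counter queue.
	 */
	if ((flags_out & EFX_MAE_COUNTERS_STREAM_OUT_USES_CREDITS) != 0)
		rc = efx_mae_counters_stream_give_credits(enp,
		    16 /* arbitrary example credit */);

	/* ... receive and process counter update packets ... */

	(void)efx_mae_counters_stream_stop(enp, rxq_hw_index, &gen_count);
	return (rc);
}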

Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
Reviewed-by: Ivan Malov <ivan.malov@oktetlabs.ru>
---
 drivers/common/sfc_efx/base/efx.h     |  32 ++++++
 drivers/common/sfc_efx/base/efx_mae.c | 138 ++++++++++++++++++++++++++
 drivers/common/sfc_efx/version.map    |   3 +
 3 files changed, 173 insertions(+)

diff --git a/drivers/common/sfc_efx/base/efx.h b/drivers/common/sfc_efx/base/efx.h
index b789e19b98..a5d40c2e3d 100644
--- a/drivers/common/sfc_efx/base/efx.h
+++ b/drivers/common/sfc_efx/base/efx.h
@@ -4437,6 +4437,38 @@ efx_mae_counters_free(
 	__in_ecount(n_counters)		const efx_counter_t *countersp,
 	__out_opt			uint32_t *gen_countp);
 
+/* When set, include counters with a value of zero */
+#define	EFX_MAE_COUNTERS_STREAM_IN_ZERO_SQUASH_DISABLE	(1U << 0)
+
+/*
+ * Set if credit-based flow control is used. In this case the driver
+ * must call efx_mae_counters_stream_give_credits() to notify the
+ * packetiser of descriptors written.
+ */
+#define	EFX_MAE_COUNTERS_STREAM_OUT_USES_CREDITS	(1U << 0)
+
+LIBEFX_API
+extern	__checkReturn			efx_rc_t
+efx_mae_counters_stream_start(
+	__in				efx_nic_t *enp,
+	__in				uint16_t rxq_id,
+	__in				uint16_t packet_size,
+	__in				uint32_t flags_in,
+	__out				uint32_t *flags_out);
+
+LIBEFX_API
+extern	__checkReturn			efx_rc_t
+efx_mae_counters_stream_stop(
+	__in				efx_nic_t *enp,
+	__in				uint16_t rxq_id,
+	__out_opt			uint32_t *gen_countp);
+
+LIBEFX_API
+extern	__checkReturn			efx_rc_t
+efx_mae_counters_stream_give_credits(
+	__in				efx_nic_t *enp,
+	__in				uint32_t n_credits);
+
 LIBEFX_API
 extern	__checkReturn			efx_rc_t
 efx_mae_action_set_free(
diff --git a/drivers/common/sfc_efx/base/efx_mae.c b/drivers/common/sfc_efx/base/efx_mae.c
index cf6c449a16..1f313c8127 100644
--- a/drivers/common/sfc_efx/base/efx_mae.c
+++ b/drivers/common/sfc_efx/base/efx_mae.c
@@ -2535,6 +2535,144 @@ efx_mae_counters_free(
 	EFSYS_PROBE(fail2);
 fail1:
 	EFSYS_PROBE1(fail1, efx_rc_t, rc);
+
+	return (rc);
+}
+
+	__checkReturn			efx_rc_t
+efx_mae_counters_stream_start(
+	__in				efx_nic_t *enp,
+	__in				uint16_t rxq_id,
+	__in				uint16_t packet_size,
+	__in				uint32_t flags_in,
+	__out				uint32_t *flags_out)
+{
+	efx_mcdi_req_t req;
+	EFX_MCDI_DECLARE_BUF(payload, MC_CMD_MAE_COUNTERS_STREAM_START_IN_LEN,
+			     MC_CMD_MAE_COUNTERS_STREAM_START_OUT_LEN);
+	efx_rc_t rc;
+
+	EFX_STATIC_ASSERT(EFX_MAE_COUNTERS_STREAM_IN_ZERO_SQUASH_DISABLE ==
+	    1U << MC_CMD_MAE_COUNTERS_STREAM_START_IN_ZERO_SQUASH_DISABLE_LBN);
+
+	EFX_STATIC_ASSERT(EFX_MAE_COUNTERS_STREAM_OUT_USES_CREDITS ==
+	    1U << MC_CMD_MAE_COUNTERS_STREAM_START_OUT_USES_CREDITS_LBN);
+
+	req.emr_cmd = MC_CMD_MAE_COUNTERS_STREAM_START;
+	req.emr_in_buf = payload;
+	req.emr_in_length = MC_CMD_MAE_COUNTERS_STREAM_START_IN_LEN;
+	req.emr_out_buf = payload;
+	req.emr_out_length = MC_CMD_MAE_COUNTERS_STREAM_START_OUT_LEN;
+
+	MCDI_IN_SET_WORD(req, MAE_COUNTERS_STREAM_START_IN_QID, rxq_id);
+	MCDI_IN_SET_WORD(req, MAE_COUNTERS_STREAM_START_IN_PACKET_SIZE,
+			 packet_size);
+	MCDI_IN_SET_DWORD(req, MAE_COUNTERS_STREAM_START_IN_FLAGS, flags_in);
+
+	efx_mcdi_execute(enp, &req);
+
+	if (req.emr_rc != 0) {
+		rc = req.emr_rc;
+		goto fail1;
+	}
+
+	if (req.emr_out_length_used <
+	    MC_CMD_MAE_COUNTERS_STREAM_START_OUT_LEN) {
+		rc = EMSGSIZE;
+		goto fail2;
+	}
+
+	*flags_out = MCDI_OUT_DWORD(req, MAE_COUNTERS_STREAM_START_OUT_FLAGS);
+
+	return (0);
+
+fail2:
+	EFSYS_PROBE(fail2);
+fail1:
+	EFSYS_PROBE1(fail1, efx_rc_t, rc);
+
+	return (rc);
+}
+
+	__checkReturn			efx_rc_t
+efx_mae_counters_stream_stop(
+	__in				efx_nic_t *enp,
+	__in				uint16_t rxq_id,
+	__out_opt			uint32_t *gen_countp)
+{
+	efx_mcdi_req_t req;
+	EFX_MCDI_DECLARE_BUF(payload, MC_CMD_MAE_COUNTERS_STREAM_STOP_IN_LEN,
+			     MC_CMD_MAE_COUNTERS_STREAM_STOP_OUT_LEN);
+	efx_rc_t rc;
+
+	req.emr_cmd = MC_CMD_MAE_COUNTERS_STREAM_STOP;
+	req.emr_in_buf = payload;
+	req.emr_in_length = MC_CMD_MAE_COUNTERS_STREAM_STOP_IN_LEN;
+	req.emr_out_buf = payload;
+	req.emr_out_length = MC_CMD_MAE_COUNTERS_STREAM_STOP_OUT_LEN;
+
+	MCDI_IN_SET_WORD(req, MAE_COUNTERS_STREAM_STOP_IN_QID, rxq_id);
+
+	efx_mcdi_execute(enp, &req);
+
+	if (req.emr_rc != 0) {
+		rc = req.emr_rc;
+		goto fail1;
+	}
+
+	if (req.emr_out_length_used <
+	    MC_CMD_MAE_COUNTERS_STREAM_STOP_OUT_LEN) {
+		rc = EMSGSIZE;
+		goto fail2;
+	}
+
+	if (gen_countp != NULL) {
+		*gen_countp = MCDI_OUT_DWORD(req,
+			    MAE_COUNTERS_STREAM_STOP_OUT_GENERATION_COUNT);
+	}
+
+	return (0);
+
+fail2:
+	EFSYS_PROBE(fail2);
+fail1:
+	EFSYS_PROBE1(fail1, efx_rc_t, rc);
+
+	return (rc);
+}
+
+	__checkReturn			efx_rc_t
+efx_mae_counters_stream_give_credits(
+	__in				efx_nic_t *enp,
+	__in				uint32_t n_credits)
+{
+	efx_mcdi_req_t req;
+	EFX_MCDI_DECLARE_BUF(payload,
+			     MC_CMD_MAE_COUNTERS_STREAM_GIVE_CREDITS_IN_LEN,
+			     MC_CMD_MAE_COUNTERS_STREAM_GIVE_CREDITS_OUT_LEN);
+	efx_rc_t rc;
+
+	req.emr_cmd = MC_CMD_MAE_COUNTERS_STREAM_GIVE_CREDITS;
+	req.emr_in_buf = payload;
+	req.emr_in_length = MC_CMD_MAE_COUNTERS_STREAM_GIVE_CREDITS_IN_LEN;
+	req.emr_out_buf = payload;
+	req.emr_out_length = MC_CMD_MAE_COUNTERS_STREAM_GIVE_CREDITS_OUT_LEN;
+
+	MCDI_IN_SET_DWORD(req, MAE_COUNTERS_STREAM_GIVE_CREDITS_IN_NUM_CREDITS,
+			 n_credits);
+
+	efx_mcdi_execute(enp, &req);
+
+	if (req.emr_rc != 0) {
+		rc = req.emr_rc;
+		goto fail1;
+	}
+
+	return (0);
+
+fail1:
+	EFSYS_PROBE1(fail1, efx_rc_t, rc);
+
 	return (rc);
 }
 
diff --git a/drivers/common/sfc_efx/version.map b/drivers/common/sfc_efx/version.map
index d60cd477fa..7f69d6bb0d 100644
--- a/drivers/common/sfc_efx/version.map
+++ b/drivers/common/sfc_efx/version.map
@@ -104,6 +104,9 @@ INTERNAL {
 	efx_mae_action_set_specs_equal;
 	efx_mae_counters_alloc;
 	efx_mae_counters_free;
+	efx_mae_counters_stream_give_credits;
+	efx_mae_counters_stream_start;
+	efx_mae_counters_stream_stop;
 	efx_mae_encap_header_alloc;
 	efx_mae_encap_header_free;
 	efx_mae_fini;
-- 
2.30.2


^ permalink raw reply	[flat|nested] 104+ messages in thread

* [dpdk-dev] [PATCH v4 15/20] common/sfc_efx/base: support counter in action set
  2021-07-02  8:39 ` [dpdk-dev] [PATCH v4 00/20] net/sfc: support flow API COUNT action Andrew Rybchenko
                     ` (13 preceding siblings ...)
  2021-07-02  8:39   ` [dpdk-dev] [PATCH v4 14/20] common/sfc_efx/base: add counter stream " Andrew Rybchenko
@ 2021-07-02  8:39   ` Andrew Rybchenko
  2021-07-02  8:39   ` [dpdk-dev] [PATCH v4 16/20] net/sfc: add Rx datapath method to get pushed buffers count Andrew Rybchenko
                     ` (5 subsequent siblings)
  20 siblings, 0 replies; 104+ messages in thread
From: Andrew Rybchenko @ 2021-07-02  8:39 UTC (permalink / raw)
  To: dev; +Cc: David Marchand, Igor Romanov, Andy Moreton, Ivan Malov

From: Igor Romanov <igor.romanov@oktetlabs.ru>

The user will be able to associate a counter with an MAE action set
to collect packet and byte counts for that action set.
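
A simplified sketch (not part of the patch) of how the new helpers are
meant to be combined; spec construction, error unwinding and the function
name are illustrative only:

#include "efx.h"

static efx_rc_t
action_set_with_counter(efx_nic_t *enp, efx_mae_actions_t *spec,
			efx_mae_aset_id_t *aset_idp)
{
	efx_counter_t counter;
	uint32_t n_allocated;
	efx_rc_t rc;

	/* Record the COUNT action in the spec while parsing user input. */
	rc = efx_mae_action_set_populate_count(spec);
	if (rc != 0)
		return (rc);

	/* Only a single counter per action set is handled in this sketch. */
	if (efx_mae_action_set_get_nb_count(spec) != 1)
		return (EINVAL);

	/* Allocate a counter and bind its ID to the spec before allocation. */
	rc = efx_mae_counters_alloc(enp, 1, &n_allocated, &counter, NULL);
	if (rc != 0)
		return (rc);

	rc = efx_mae_action_set_fill_in_counter_id(spec, &counter);
	if (rc != 0)
		return (rc);

	return (efx_mae_action_set_alloc(enp, spec, aset_idp));
}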

Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
Reviewed-by: Ivan Malov <ivan.malov@oktetlabs.ru>
---
 drivers/common/sfc_efx/base/efx.h      |  21 ++++
 drivers/common/sfc_efx/base/efx_impl.h |   3 +
 drivers/common/sfc_efx/base/efx_mae.c  | 133 ++++++++++++++++++++++++-
 drivers/common/sfc_efx/version.map     |   3 +
 4 files changed, 157 insertions(+), 3 deletions(-)

diff --git a/drivers/common/sfc_efx/base/efx.h b/drivers/common/sfc_efx/base/efx.h
index a5d40c2e3d..d3cf9fe571 100644
--- a/drivers/common/sfc_efx/base/efx.h
+++ b/drivers/common/sfc_efx/base/efx.h
@@ -4288,6 +4288,15 @@ extern	__checkReturn			efx_rc_t
 efx_mae_action_set_populate_encap(
 	__in				efx_mae_actions_t *spec);
 
+/*
+ * Use efx_mae_action_set_fill_in_counter_id() to set ID of a counter
+ * in the specification prior to action set allocation.
+ */
+LIBEFX_API
+extern	__checkReturn			efx_rc_t
+efx_mae_action_set_populate_count(
+	__in				efx_mae_actions_t *spec);
+
 LIBEFX_API
 extern	__checkReturn			efx_rc_t
 efx_mae_action_set_populate_flag(
@@ -4392,6 +4401,18 @@ typedef struct efx_counter_s {
 	uint32_t id;
 } efx_counter_t;
 
+LIBEFX_API
+extern	__checkReturn			unsigned int
+efx_mae_action_set_get_nb_count(
+	__in				const efx_mae_actions_t *spec);
+
+/* See description before efx_mae_action_set_populate_count(). */
+LIBEFX_API
+extern	__checkReturn			efx_rc_t
+efx_mae_action_set_fill_in_counter_id(
+	__in				efx_mae_actions_t *spec,
+	__in				const efx_counter_t *counter_idp);
+
 /* Action set ID */
 typedef struct efx_mae_aset_id_s {
 	uint32_t id;
diff --git a/drivers/common/sfc_efx/base/efx_impl.h b/drivers/common/sfc_efx/base/efx_impl.h
index b69463385e..c4925568be 100644
--- a/drivers/common/sfc_efx/base/efx_impl.h
+++ b/drivers/common/sfc_efx/base/efx_impl.h
@@ -1733,6 +1733,7 @@ typedef enum efx_mae_action_e {
 	EFX_MAE_ACTION_DECAP,
 	EFX_MAE_ACTION_VLAN_POP,
 	EFX_MAE_ACTION_VLAN_PUSH,
+	EFX_MAE_ACTION_COUNT,
 	EFX_MAE_ACTION_ENCAP,
 
 	/*
@@ -1763,6 +1764,7 @@ typedef struct efx_mae_action_vlan_push_s {
 
 typedef struct efx_mae_actions_rsrc_s {
 	efx_mae_eh_id_t			emar_eh_id;
+	efx_counter_t			emar_counter_id;
 } efx_mae_actions_rsrc_t;
 
 struct efx_mae_actions_s {
@@ -1773,6 +1775,7 @@ struct efx_mae_actions_s {
 	unsigned int			ema_n_vlan_tags_to_push;
 	efx_mae_action_vlan_push_t	ema_vlan_push_descs[
 	    EFX_MAE_VLAN_PUSH_MAX_NTAGS];
+	unsigned int			ema_n_count_actions;
 	uint32_t			ema_mark_value;
 	efx_mport_sel_t			ema_deliver_mport;
 
diff --git a/drivers/common/sfc_efx/base/efx_mae.c b/drivers/common/sfc_efx/base/efx_mae.c
index 1f313c8127..b0e6fadd46 100644
--- a/drivers/common/sfc_efx/base/efx_mae.c
+++ b/drivers/common/sfc_efx/base/efx_mae.c
@@ -1014,6 +1014,7 @@ efx_mae_action_set_spec_init(
 	}
 
 	spec->ema_rsrc.emar_eh_id.id = EFX_MAE_RSRC_ID_INVALID;
+	spec->ema_rsrc.emar_counter_id.id = EFX_MAE_RSRC_ID_INVALID;
 
 	*specp = spec;
 
@@ -1181,6 +1182,50 @@ efx_mae_action_set_add_encap(
 	return (rc);
 }
 
+static	__checkReturn			efx_rc_t
+efx_mae_action_set_add_count(
+	__in				efx_mae_actions_t *spec,
+	__in				size_t arg_size,
+	__in_bcount(arg_size)		const uint8_t *arg)
+{
+	efx_rc_t rc;
+
+	EFX_STATIC_ASSERT(EFX_MAE_RSRC_ID_INVALID ==
+			  MC_CMD_MAE_COUNTER_ALLOC_OUT_COUNTER_ID_NULL);
+
+	/*
+	 * Preparing an action set spec to update a counter requires
+	 * two steps: first add this action to the action spec, and then
+	 * add the counter ID to the spec. This allows validity checking
+	 * and resource allocation to be done separately.
+	 * Mark the counter ID as invalid in the spec to ensure that the
+	 * caller must also invoke efx_mae_action_set_fill_in_counter_id()
+	 * before action set allocation.
+	 */
+	spec->ema_rsrc.emar_counter_id.id = EFX_MAE_RSRC_ID_INVALID;
+
+	/* Nothing else is supposed to take place over here. */
+	if (arg_size != 0) {
+		rc = EINVAL;
+		goto fail1;
+	}
+
+	if (arg != NULL) {
+		rc = EINVAL;
+		goto fail2;
+	}
+
+	++(spec->ema_n_count_actions);
+
+	return (0);
+
+fail2:
+	EFSYS_PROBE(fail2);
+fail1:
+	EFSYS_PROBE1(fail1, efx_rc_t, rc);
+	return (rc);
+}
+
 static	__checkReturn			efx_rc_t
 efx_mae_action_set_add_flag(
 	__in				efx_mae_actions_t *spec,
@@ -1289,6 +1334,9 @@ static const efx_mae_action_desc_t efx_mae_actions[EFX_MAE_NACTIONS] = {
 	[EFX_MAE_ACTION_ENCAP] = {
 		.emad_add = efx_mae_action_set_add_encap
 	},
+	[EFX_MAE_ACTION_COUNT] = {
+		.emad_add = efx_mae_action_set_add_count
+	},
 	[EFX_MAE_ACTION_FLAG] = {
 		.emad_add = efx_mae_action_set_add_flag
 	},
@@ -1304,6 +1352,12 @@ static const uint32_t efx_mae_action_ordered_map =
 	(1U << EFX_MAE_ACTION_DECAP) |
 	(1U << EFX_MAE_ACTION_VLAN_POP) |
 	(1U << EFX_MAE_ACTION_VLAN_PUSH) |
+	/*
+	 * HW performs the COUNT action after the matching packet
+	 * has been modified by any length-affecting actions
+	 * except for ENCAP.
+	 */
+	(1U << EFX_MAE_ACTION_COUNT) |
 	(1U << EFX_MAE_ACTION_ENCAP) |
 	(1U << EFX_MAE_ACTION_FLAG) |
 	(1U << EFX_MAE_ACTION_MARK) |
@@ -1320,7 +1374,8 @@ static const uint32_t efx_mae_action_nonstrict_map =
 
 static const uint32_t efx_mae_action_repeat_map =
 	(1U << EFX_MAE_ACTION_VLAN_POP) |
-	(1U << EFX_MAE_ACTION_VLAN_PUSH);
+	(1U << EFX_MAE_ACTION_VLAN_PUSH) |
+	(1U << EFX_MAE_ACTION_COUNT);
 
 /*
  * Add an action to an action set.
@@ -1443,6 +1498,20 @@ efx_mae_action_set_populate_encap(
 	    EFX_MAE_ACTION_ENCAP, 0, NULL));
 }
 
+	__checkReturn			efx_rc_t
+efx_mae_action_set_populate_count(
+	__in				efx_mae_actions_t *spec)
+{
+	/*
+	 * There is no argument to pass counter ID, thus, one does not
+	 * need to allocate a counter while parsing application input.
+	 * This is useful since building an action set may be done simply to
+	 * validate a rule, whilst resource allocation usually consumes time.
+	 */
+	return (efx_mae_action_set_spec_populate(spec,
+	    EFX_MAE_ACTION_COUNT, 0, NULL));
+}
+
 	__checkReturn			efx_rc_t
 efx_mae_action_set_populate_flag(
 	__in				efx_mae_actions_t *spec)
@@ -2075,8 +2144,6 @@ efx_mae_action_set_alloc(
 	 */
 	MCDI_IN_SET_DWORD(req,
 	    MAE_ACTION_SET_ALLOC_IN_COUNTER_LIST_ID, EFX_MAE_RSRC_ID_INVALID);
-	MCDI_IN_SET_DWORD(req,
-	    MAE_ACTION_SET_ALLOC_IN_COUNTER_ID, EFX_MAE_RSRC_ID_INVALID);
 
 	if ((spec->ema_actions & (1U << EFX_MAE_ACTION_DECAP)) != 0) {
 		MCDI_IN_SET_DWORD_FIELD(req, MAE_ACTION_SET_ALLOC_IN_FLAGS,
@@ -2113,6 +2180,8 @@ efx_mae_action_set_alloc(
 
 	MCDI_IN_SET_DWORD(req, MAE_ACTION_SET_ALLOC_IN_ENCAP_HEADER_ID,
 	    spec->ema_rsrc.emar_eh_id.id);
+	MCDI_IN_SET_DWORD(req, MAE_ACTION_SET_ALLOC_IN_COUNTER_ID,
+	    spec->ema_rsrc.emar_counter_id.id);
 
 	if ((spec->ema_actions & (1U << EFX_MAE_ACTION_FLAG)) != 0) {
 		MCDI_IN_SET_DWORD_FIELD(req, MAE_ACTION_SET_ALLOC_IN_FLAGS,
@@ -2372,6 +2441,64 @@ efx_mae_action_rule_remove(
 
 	return (0);
 
+fail4:
+	EFSYS_PROBE(fail4);
+fail3:
+	EFSYS_PROBE(fail3);
+fail2:
+	EFSYS_PROBE(fail2);
+fail1:
+	EFSYS_PROBE1(fail1, efx_rc_t, rc);
+	return (rc);
+}
+
+	__checkReturn			unsigned int
+efx_mae_action_set_get_nb_count(
+	__in				const efx_mae_actions_t *spec)
+{
+	return (spec->ema_n_count_actions);
+}
+
+	__checkReturn			efx_rc_t
+efx_mae_action_set_fill_in_counter_id(
+	__in				efx_mae_actions_t *spec,
+	__in				const efx_counter_t *counter_idp)
+{
+	efx_rc_t rc;
+
+	if ((spec->ema_actions & (1U << EFX_MAE_ACTION_COUNT)) == 0) {
+		/*
+		 * Invalid to add counter ID if spec does not have COUNT action.
+		 */
+		rc = EINVAL;
+		goto fail1;
+	}
+
+	if (spec->ema_n_count_actions != 1) {
+		/*
+		 * Having multiple COUNT actions in the spec requires a counter
+		 * list to be used. This API must only be used for a single
+		 * counter per spec. Reject the request as inappropriate.
+		 */
+		rc = EINVAL;
+		goto fail2;
+	}
+
+	if (spec->ema_rsrc.emar_counter_id.id != EFX_MAE_RSRC_ID_INVALID) {
+		/* The caller attempts to indicate counter ID twice. */
+		rc = EALREADY;
+		goto fail3;
+	}
+
+	if (counter_idp->id == EFX_MAE_RSRC_ID_INVALID) {
+		rc = EINVAL;
+		goto fail4;
+	}
+
+	spec->ema_rsrc.emar_counter_id.id = counter_idp->id;
+
+	return (0);
+
 fail4:
 	EFSYS_PROBE(fail4);
 fail3:
diff --git a/drivers/common/sfc_efx/version.map b/drivers/common/sfc_efx/version.map
index 7f69d6bb0d..8496f409e6 100644
--- a/drivers/common/sfc_efx/version.map
+++ b/drivers/common/sfc_efx/version.map
@@ -89,8 +89,11 @@ INTERNAL {
 	efx_mae_action_rule_insert;
 	efx_mae_action_rule_remove;
 	efx_mae_action_set_alloc;
+	efx_mae_action_set_fill_in_counter_id;
 	efx_mae_action_set_fill_in_eh_id;
 	efx_mae_action_set_free;
+	efx_mae_action_set_get_nb_count;
+	efx_mae_action_set_populate_count;
 	efx_mae_action_set_populate_decap;
 	efx_mae_action_set_populate_deliver;
 	efx_mae_action_set_populate_drop;
-- 
2.30.2


^ permalink raw reply	[flat|nested] 104+ messages in thread

* [dpdk-dev] [PATCH v4 16/20] net/sfc: add Rx datapath method to get pushed buffers count
  2021-07-02  8:39 ` [dpdk-dev] [PATCH v4 00/20] net/sfc: support flow API COUNT action Andrew Rybchenko
                     ` (14 preceding siblings ...)
  2021-07-02  8:39   ` [dpdk-dev] [PATCH v4 15/20] common/sfc_efx/base: support counter in action set Andrew Rybchenko
@ 2021-07-02  8:39   ` Andrew Rybchenko
  2021-07-02  8:39   ` [dpdk-dev] [PATCH v4 17/20] common/sfc_efx/base: add max MAE counters to limits Andrew Rybchenko
                     ` (4 subsequent siblings)
  20 siblings, 0 replies; 104+ messages in thread
From: Andrew Rybchenko @ 2021-07-02  8:39 UTC (permalink / raw)
  To: dev; +Cc: David Marchand, Igor Romanov, Andy Moreton, Ivan Malov

From: Igor Romanov <igor.romanov@oktetlabs.ru>

The information about the number of pushed Rx buffers is required
for the counter Rx queue to know when to give credits to the
counter stream.
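
A rough sketch (not part of the patch) of how the running pushed count
could be converted into stream credits; the bookkeeping variable and the
function name are illustrative:

#include "sfc.h"
#include "sfc_rx.h"

static void
give_credits_example(struct sfc_adapter *sa, struct sfc_dp_rxq *dp_rxq,
		     unsigned int *pushed_creditedp)
{
	/* Running counter of pushed buffers, not bounded by the ring size. */
	unsigned int pushed = sfc_rx_get_pushed(sa, dp_rxq);
	unsigned int new_credits = pushed - *pushed_creditedp;

	if (new_credits == 0)
		return;

	/* One credit per pushed Rx descriptor. */
	if (efx_mae_counters_stream_give_credits(sa->nic, new_credits) == 0)
		*pushed_creditedp = pushed;
}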

Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
Reviewed-by: Ivan Malov <ivan.malov@oktetlabs.ru>
---
 drivers/net/sfc/sfc_dp_rx.h    |  4 ++++
 drivers/net/sfc/sfc_ef100_rx.c | 15 +++++++++++++++
 drivers/net/sfc/sfc_rx.c       |  9 +++++++++
 drivers/net/sfc/sfc_rx.h       |  3 +++
 4 files changed, 31 insertions(+)

diff --git a/drivers/net/sfc/sfc_dp_rx.h b/drivers/net/sfc/sfc_dp_rx.h
index 3f6857b1ff..b6c44085ce 100644
--- a/drivers/net/sfc/sfc_dp_rx.h
+++ b/drivers/net/sfc/sfc_dp_rx.h
@@ -204,6 +204,9 @@ typedef int (sfc_dp_rx_intr_enable_t)(struct sfc_dp_rxq *dp_rxq);
 /** Disable Rx interrupts */
 typedef int (sfc_dp_rx_intr_disable_t)(struct sfc_dp_rxq *dp_rxq);
 
+/** Get number of pushed Rx buffers */
+typedef unsigned int (sfc_dp_rx_get_pushed_t)(struct sfc_dp_rxq *dp_rxq);
+
 /** Receive datapath definition */
 struct sfc_dp_rx {
 	struct sfc_dp				dp;
@@ -238,6 +241,7 @@ struct sfc_dp_rx {
 	sfc_dp_rx_qdesc_status_t		*qdesc_status;
 	sfc_dp_rx_intr_enable_t			*intr_enable;
 	sfc_dp_rx_intr_disable_t		*intr_disable;
+	sfc_dp_rx_get_pushed_t			*get_pushed;
 	eth_rx_burst_t				pkt_burst;
 };
 
diff --git a/drivers/net/sfc/sfc_ef100_rx.c b/drivers/net/sfc/sfc_ef100_rx.c
index 8b90463533..10c74aa118 100644
--- a/drivers/net/sfc/sfc_ef100_rx.c
+++ b/drivers/net/sfc/sfc_ef100_rx.c
@@ -892,6 +892,20 @@ sfc_ef100_rx_intr_disable(struct sfc_dp_rxq *dp_rxq)
 	return 0;
 }
 
+static sfc_dp_rx_get_pushed_t sfc_ef100_rx_get_pushed;
+static unsigned int
+sfc_ef100_rx_get_pushed(struct sfc_dp_rxq *dp_rxq)
+{
+	struct sfc_ef100_rxq *rxq = sfc_ef100_rxq_by_dp_rxq(dp_rxq);
+
+	/*
+	 * The datapath keeps track only of added descriptors, since
+	 * the number of pushed descriptors always equals the number
+	 * of added descriptors due to enforced alignment.
+	 */
+	return rxq->added;
+}
+
 struct sfc_dp_rx sfc_ef100_rx = {
 	.dp = {
 		.name		= SFC_KVARG_DATAPATH_EF100,
@@ -919,5 +933,6 @@ struct sfc_dp_rx sfc_ef100_rx = {
 	.qdesc_status		= sfc_ef100_rx_qdesc_status,
 	.intr_enable		= sfc_ef100_rx_intr_enable,
 	.intr_disable		= sfc_ef100_rx_intr_disable,
+	.get_pushed		= sfc_ef100_rx_get_pushed,
 	.pkt_burst		= sfc_ef100_recv_pkts,
 };
diff --git a/drivers/net/sfc/sfc_rx.c b/drivers/net/sfc/sfc_rx.c
index 0532f77082..f6a8ac68e8 100644
--- a/drivers/net/sfc/sfc_rx.c
+++ b/drivers/net/sfc/sfc_rx.c
@@ -53,6 +53,15 @@ sfc_rx_qflush_failed(struct sfc_rxq_info *rxq_info)
 	rxq_info->state &= ~SFC_RXQ_FLUSHING;
 }
 
+/* This returns the running counter, which is not bounded by ring size */
+unsigned int
+sfc_rx_get_pushed(struct sfc_adapter *sa, struct sfc_dp_rxq *dp_rxq)
+{
+	SFC_ASSERT(sa->priv.dp_rx->get_pushed != NULL);
+
+	return sa->priv.dp_rx->get_pushed(dp_rxq);
+}
+
 static int
 sfc_efx_rx_qprime(struct sfc_efx_rxq *rxq)
 {
diff --git a/drivers/net/sfc/sfc_rx.h b/drivers/net/sfc/sfc_rx.h
index e5a6fde79b..4ab513915e 100644
--- a/drivers/net/sfc/sfc_rx.h
+++ b/drivers/net/sfc/sfc_rx.h
@@ -145,6 +145,9 @@ uint64_t sfc_rx_get_queue_offload_caps(struct sfc_adapter *sa);
 void sfc_rx_qflush_done(struct sfc_rxq_info *rxq_info);
 void sfc_rx_qflush_failed(struct sfc_rxq_info *rxq_info);
 
+unsigned int sfc_rx_get_pushed(struct sfc_adapter *sa,
+			       struct sfc_dp_rxq *dp_rxq);
+
 int sfc_rx_hash_init(struct sfc_adapter *sa);
 void sfc_rx_hash_fini(struct sfc_adapter *sa);
 int sfc_rx_hf_rte_to_efx(struct sfc_adapter *sa, uint64_t rte,
-- 
2.30.2


^ permalink raw reply	[flat|nested] 104+ messages in thread

* [dpdk-dev] [PATCH v4 17/20] common/sfc_efx/base: add max MAE counters to limits
  2021-07-02  8:39 ` [dpdk-dev] [PATCH v4 00/20] net/sfc: support flow API COUNT action Andrew Rybchenko
                     ` (15 preceding siblings ...)
  2021-07-02  8:39   ` [dpdk-dev] [PATCH v4 16/20] net/sfc: add Rx datapath method to get pushed buffers count Andrew Rybchenko
@ 2021-07-02  8:39   ` Andrew Rybchenko
  2021-07-02  8:39   ` [dpdk-dev] [PATCH v4 18/20] common/sfc_efx/base: add packetiser packet format definition Andrew Rybchenko
                     ` (3 subsequent siblings)
  20 siblings, 0 replies; 104+ messages in thread
From: Andrew Rybchenko @ 2021-07-02  8:39 UTC (permalink / raw)
  To: dev; +Cc: David Marchand, Igor Romanov, Andy Moreton, Ivan Malov

From: Igor Romanov <igor.romanov@oktetlabs.ru>

The information about the maximum number of MAE counters is
required for counter support in the driver.
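
A minimal sketch (not part of the patch) of reading the new limit;
efx_mae_get_limits() is assumed to take the NIC handle and a limits
structure, as it does elsewhere in libefx:

#include "efx.h"

static efx_rc_t
counter_limit_example(efx_nic_t *enp, uint32_t *max_countersp)
{
	efx_mae_limits_t limits;
	efx_rc_t rc;

	rc = efx_mae_get_limits(enp, &limits);
	if (rc != 0)
		return (rc);

	/* Used by the driver to size its counter registry. */
	*max_countersp = limits.eml_max_n_counters;
	return (0);
}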

Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
Reviewed-by: Ivan Malov <ivan.malov@oktetlabs.ru>
---
 drivers/common/sfc_efx/base/efx.h     | 1 +
 drivers/common/sfc_efx/base/efx_mae.c | 1 +
 2 files changed, 2 insertions(+)

diff --git a/drivers/common/sfc_efx/base/efx.h b/drivers/common/sfc_efx/base/efx.h
index d3cf9fe571..21fd151b70 100644
--- a/drivers/common/sfc_efx/base/efx.h
+++ b/drivers/common/sfc_efx/base/efx.h
@@ -4093,6 +4093,7 @@ typedef struct efx_mae_limits_s {
 	uint32_t			eml_max_n_outer_prios;
 	uint32_t			eml_encap_types_supported;
 	uint32_t			eml_encap_header_size_limit;
+	uint32_t			eml_max_n_counters;
 } efx_mae_limits_t;
 
 LIBEFX_API
diff --git a/drivers/common/sfc_efx/base/efx_mae.c b/drivers/common/sfc_efx/base/efx_mae.c
index b0e6fadd46..67d1c22037 100644
--- a/drivers/common/sfc_efx/base/efx_mae.c
+++ b/drivers/common/sfc_efx/base/efx_mae.c
@@ -374,6 +374,7 @@ efx_mae_get_limits(
 	emlp->eml_encap_types_supported = maep->em_encap_types_supported;
 	emlp->eml_encap_header_size_limit =
 	    MC_CMD_MAE_ENCAP_HEADER_ALLOC_IN_HDR_DATA_MAXNUM_MCDI2;
+	emlp->eml_max_n_counters = maep->em_max_ncounters;
 
 	return (0);
 
-- 
2.30.2


^ permalink raw reply	[flat|nested] 104+ messages in thread

* [dpdk-dev] [PATCH v4 18/20] common/sfc_efx/base: add packetiser packet format definition
  2021-07-02  8:39 ` [dpdk-dev] [PATCH v4 00/20] net/sfc: support flow API COUNT action Andrew Rybchenko
                     ` (16 preceding siblings ...)
  2021-07-02  8:39   ` [dpdk-dev] [PATCH v4 17/20] common/sfc_efx/base: add max MAE counters to limits Andrew Rybchenko
@ 2021-07-02  8:39   ` Andrew Rybchenko
  2021-07-02  8:39   ` [dpdk-dev] [PATCH v4 19/20] net/sfc: support flow action COUNT in transfer rules Andrew Rybchenko
                     ` (2 subsequent siblings)
  20 siblings, 0 replies; 104+ messages in thread
From: Andrew Rybchenko @ 2021-07-02  8:39 UTC (permalink / raw)
  To: dev; +Cc: David Marchand, Andy Moreton

The packetiser composes packets that carry MAE counter updates.
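
For illustration only, a plain-C sketch of pulling the per-counter fields
out of one 16-byte payload word using the OFST/SIZE definitions below; the
little-endian byte order and the helper are assumptions (a real consumer
would typically use the libefx field-access macros instead):

#include <stdint.h>

#include "efx_regs_counters_pkt_format.h"

struct counter_update {
	uint32_t counter_index;	/* 24-bit field */
	uint64_t packet_count;	/* 48-bit field */
	uint64_t byte_count;	/* 48-bit field */
};

static uint64_t
get_le_field(const uint8_t *p, unsigned int ofst, unsigned int size)
{
	uint64_t v = 0;
	unsigned int i;

	for (i = 0; i < size; ++i)
		v |= (uint64_t)p[ofst + i] << (8 * i);

	return v;
}

static void
parse_payload_word(const uint8_t *payload, struct counter_update *upd)
{
	/* COUNTER_INDEX: LBN 0 implies byte offset 0, WIDTH 24 is 3 bytes. */
	upd->counter_index = (uint32_t)get_le_field(payload, 0,
	    ERF_SC_PACKETISER_PAYLOAD_COUNTER_INDEX_WIDTH / 8);
	upd->packet_count = get_le_field(payload,
	    ERF_SC_PACKETISER_PAYLOAD_PACKET_COUNT_OFST,
	    ERF_SC_PACKETISER_PAYLOAD_PACKET_COUNT_SIZE);
	upd->byte_count = get_le_field(payload,
	    ERF_SC_PACKETISER_PAYLOAD_BYTE_COUNT_OFST,
	    ERF_SC_PACKETISER_PAYLOAD_BYTE_COUNT_SIZE);
}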

Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
---
 .../base/efx_regs_counters_pkt_format.h       | 87 +++++++++++++++++++
 1 file changed, 87 insertions(+)
 create mode 100644 drivers/common/sfc_efx/base/efx_regs_counters_pkt_format.h

diff --git a/drivers/common/sfc_efx/base/efx_regs_counters_pkt_format.h b/drivers/common/sfc_efx/base/efx_regs_counters_pkt_format.h
new file mode 100644
index 0000000000..6610d07dc0
--- /dev/null
+++ b/drivers/common/sfc_efx/base/efx_regs_counters_pkt_format.h
@@ -0,0 +1,87 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ *
+ * Copyright(c) 2020-2021 Xilinx, Inc.
+ */
+
+#ifndef	_SYS_EFX_REGS_COUNTERS_PKT_FORMAT_H
+#define	_SYS_EFX_REGS_COUNTERS_PKT_FORMAT_H
+
+/*
+ * Packetiser packet format definition.
+ * SF-122415-TC - OVS Counter Design Specification section 7
+ * Primary copy of the header is located in the smartnic_registry repo:
+ * src/ovs_counter/packetiser_packet_format.h
+ */
+
+/*------------------------------------------------------------*/
+/*
+ * ER_RX_SL_PACKETISER_HEADER_WORD(160bit):
+ *
+ */
+#define	ER_RX_SL_PACKETISER_HEADER_WORD_SIZE 20
+
+#define	ERF_SC_PACKETISER_HEADER_VERSION_LBN 0
+#define	ERF_SC_PACKETISER_HEADER_VERSION_WIDTH 8
+/* Deprecated, use ERF_SC_PACKETISER_HEADER_VERSION_2 instead */
+#define	ERF_SC_PACKETISER_HEADER_VERSION_VALUE 2
+#define	ERF_SC_PACKETISER_HEADER_VERSION_2 2
+#define	ERF_SC_PACKETISER_HEADER_IDENTIFIER_LBN 8
+#define	ERF_SC_PACKETISER_HEADER_IDENTIFIER_WIDTH 8
+#define	ERF_SC_PACKETISER_HEADER_IDENTIFIER_AR 0
+#define	ERF_SC_PACKETISER_HEADER_IDENTIFIER_CT 1
+#define	ERF_SC_PACKETISER_HEADER_HEADER_OFFSET_LBN 16
+#define	ERF_SC_PACKETISER_HEADER_HEADER_OFFSET_WIDTH 8
+#define	ERF_SC_PACKETISER_HEADER_HEADER_OFFSET_DEFAULT 0x4
+#define	ERF_SC_PACKETISER_HEADER_PAYLOAD_OFFSET_LBN 24
+#define	ERF_SC_PACKETISER_HEADER_PAYLOAD_OFFSET_WIDTH 8
+#define	ERF_SC_PACKETISER_HEADER_PAYLOAD_OFFSET_DEFAULT 0x14
+#define	ERF_SC_PACKETISER_HEADER_INDEX_LBN 32
+#define	ERF_SC_PACKETISER_HEADER_INDEX_WIDTH 16
+#define	ERF_SC_PACKETISER_HEADER_COUNT_LBN 48
+#define	ERF_SC_PACKETISER_HEADER_COUNT_WIDTH 16
+#define	ERF_SC_PACKETISER_HEADER_RESERVED_0_LBN 64
+#define	ERF_SC_PACKETISER_HEADER_RESERVED_0_WIDTH 32
+#define	ERF_SC_PACKETISER_HEADER_RESERVED_1_LBN 96
+#define	ERF_SC_PACKETISER_HEADER_RESERVED_1_WIDTH 32
+#define	ERF_SC_PACKETISER_HEADER_RESERVED_2_LBN 128
+#define	ERF_SC_PACKETISER_HEADER_RESERVED_2_WIDTH 32
+
+
+/*------------------------------------------------------------*/
+/*
+ * ER_RX_SL_PACKETISER_PAYLOAD_WORD(128bit):
+ *
+ */
+#define	ER_RX_SL_PACKETISER_PAYLOAD_WORD_SIZE 16
+
+#define	ERF_SC_PACKETISER_PAYLOAD_COUNTER_INDEX_LBN 0
+#define	ERF_SC_PACKETISER_PAYLOAD_COUNTER_INDEX_WIDTH 24
+#define	ERF_SC_PACKETISER_PAYLOAD_RESERVED_LBN 24
+#define	ERF_SC_PACKETISER_PAYLOAD_RESERVED_WIDTH 8
+#define	ERF_SC_PACKETISER_PAYLOAD_PACKET_COUNT_OFST 4
+#define	ERF_SC_PACKETISER_PAYLOAD_PACKET_COUNT_SIZE 6
+#define	ERF_SC_PACKETISER_PAYLOAD_PACKET_COUNT_LBN 32
+#define	ERF_SC_PACKETISER_PAYLOAD_PACKET_COUNT_WIDTH 48
+#define	ERF_SC_PACKETISER_PAYLOAD_PACKET_COUNT_LO_OFST 4
+#define	ERF_SC_PACKETISER_PAYLOAD_PACKET_COUNT_LO_SIZE 4
+#define	ERF_SC_PACKETISER_PAYLOAD_PACKET_COUNT_LO_LBN 32
+#define	ERF_SC_PACKETISER_PAYLOAD_PACKET_COUNT_LO_WIDTH 32
+#define	ERF_SC_PACKETISER_PAYLOAD_PACKET_COUNT_HI_OFST 8
+#define	ERF_SC_PACKETISER_PAYLOAD_PACKET_COUNT_HI_SIZE 2
+#define	ERF_SC_PACKETISER_PAYLOAD_PACKET_COUNT_HI_LBN 64
+#define	ERF_SC_PACKETISER_PAYLOAD_PACKET_COUNT_HI_WIDTH 16
+#define	ERF_SC_PACKETISER_PAYLOAD_BYTE_COUNT_OFST 10
+#define	ERF_SC_PACKETISER_PAYLOAD_BYTE_COUNT_SIZE 6
+#define	ERF_SC_PACKETISER_PAYLOAD_BYTE_COUNT_LBN 80
+#define	ERF_SC_PACKETISER_PAYLOAD_BYTE_COUNT_WIDTH 48
+#define	ERF_SC_PACKETISER_PAYLOAD_BYTE_COUNT_LO_OFST 10
+#define	ERF_SC_PACKETISER_PAYLOAD_BYTE_COUNT_LO_SIZE 2
+#define	ERF_SC_PACKETISER_PAYLOAD_BYTE_COUNT_LO_LBN 80
+#define	ERF_SC_PACKETISER_PAYLOAD_BYTE_COUNT_LO_WIDTH 16
+#define	ERF_SC_PACKETISER_PAYLOAD_BYTE_COUNT_HI_OFST 12
+#define	ERF_SC_PACKETISER_PAYLOAD_BYTE_COUNT_HI_SIZE 4
+#define	ERF_SC_PACKETISER_PAYLOAD_BYTE_COUNT_HI_LBN 96
+#define	ERF_SC_PACKETISER_PAYLOAD_BYTE_COUNT_HI_WIDTH 32
+
+
+#endif /* _SYS_EFX_REGS_COUNTERS_PKT_FORMAT_H */
-- 
2.30.2


^ permalink raw reply	[flat|nested] 104+ messages in thread

* [dpdk-dev] [PATCH v4 19/20] net/sfc: support flow action COUNT in transfer rules
  2021-07-02  8:39 ` [dpdk-dev] [PATCH v4 00/20] net/sfc: support flow API COUNT action Andrew Rybchenko
                     ` (17 preceding siblings ...)
  2021-07-02  8:39   ` [dpdk-dev] [PATCH v4 18/20] common/sfc_efx/base: add packetiser packet format definition Andrew Rybchenko
@ 2021-07-02  8:39   ` Andrew Rybchenko
  2021-07-15 14:58     ` David Marchand
  2021-07-16 12:12     ` David Marchand
  2021-07-02  8:39   ` [dpdk-dev] [PATCH v4 20/20] net/sfc: support flow API query for count actions Andrew Rybchenko
  2021-07-20 12:19   ` [dpdk-dev] [PATCH v4 00/20] net/sfc: support flow API COUNT action David Marchand
  20 siblings, 2 replies; 104+ messages in thread
From: Andrew Rybchenko @ 2021-07-02  8:39 UTC (permalink / raw)
  To: dev; +Cc: David Marchand, Igor Romanov, Andy Moreton, Ivan Malov

From: Igor Romanov <igor.romanov@oktetlabs.ru>

For now, a rule may have only one dedicated counter, shared counters
are not supported.

HW delivers (or "streams") counter readings using special packets.
The driver creates a dedicated Rx queue to receive such packets
and requests that HW start "streaming" the readings to it.

The counter queue is polled periodically, and the first available
service core is used for that. Hence, the user has to specify at least
one service core for counters to work. Such a core is shared by all
MAE-capable devices managed by sfc driver.
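
A simplified sketch (not part of the patch) of hooking a periodic counter
poll into a service core via the rte_service API; the callback body, the
names and the fixed lcore mapping are placeholders for the driver's real
logic:

#include <string.h>

#include <rte_service.h>
#include <rte_service_component.h>
#include <rte_string_fns.h>

static int32_t
counter_poll_example(void *args)
{
	/* Receive counter update packets and apply the readings here. */
	(void)args;
	return 0;
}

static int
register_counter_service_example(uint32_t lcore_id, uint32_t *service_idp)
{
	struct rte_service_spec spec;
	int rc;

	memset(&spec, 0, sizeof(spec));
	rte_strlcpy(spec.name, "example_counter_poll", sizeof(spec.name));
	spec.callback = counter_poll_example;

	rc = rte_service_component_register(&spec, service_idp);
	if (rc != 0)
		return rc;

	/* Map the service to the chosen service lcore and mark it runnable. */
	rc = rte_service_map_lcore_set(*service_idp, lcore_id, 1);
	if (rc == 0)
		rc = rte_service_component_runstate_set(*service_idp, 1);

	return rc;
}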

Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
Reviewed-by: Ivan Malov <ivan.malov@oktetlabs.ru>
---
 doc/guides/nics/sfc_efx.rst            |   2 +
 doc/guides/rel_notes/release_21_08.rst |   6 +
 drivers/net/sfc/meson.build            |  24 +
 drivers/net/sfc/sfc_flow.c             |   7 +
 drivers/net/sfc/sfc_mae.c              | 231 +++++++++-
 drivers/net/sfc/sfc_mae.h              |  60 +++
 drivers/net/sfc/sfc_mae_counter.c      | 578 +++++++++++++++++++++++++
 drivers/net/sfc/sfc_mae_counter.h      |  11 +
 drivers/net/sfc/sfc_stats.h            |  80 ++++
 drivers/net/sfc/sfc_tweak.h            |   9 +
 10 files changed, 1003 insertions(+), 5 deletions(-)
 create mode 100644 drivers/net/sfc/sfc_stats.h

diff --git a/doc/guides/nics/sfc_efx.rst b/doc/guides/nics/sfc_efx.rst
index cf1269cc03..bd08118da7 100644
--- a/doc/guides/nics/sfc_efx.rst
+++ b/doc/guides/nics/sfc_efx.rst
@@ -240,6 +240,8 @@ Supported actions (***transfer*** rules):
 
 - PORT_ID
 
+- COUNT
+
 - DROP
 
 Validating flow rules depends on the firmware variant.
diff --git a/doc/guides/rel_notes/release_21_08.rst b/doc/guides/rel_notes/release_21_08.rst
index a6ecfdf3ce..75688304da 100644
--- a/doc/guides/rel_notes/release_21_08.rst
+++ b/doc/guides/rel_notes/release_21_08.rst
@@ -55,6 +55,12 @@ New Features
      Also, make sure to start the actual text at the margin.
      =======================================================
 
+* **Updated Solarflare network PMD.**
+
+  Updated the Solarflare ``sfc_efx`` driver with changes including:
+
+  * Added COUNT action support for SN1000 NICs
+
 
 Removed Items
 -------------
diff --git a/drivers/net/sfc/meson.build b/drivers/net/sfc/meson.build
index f8880f740a..55f42eee17 100644
--- a/drivers/net/sfc/meson.build
+++ b/drivers/net/sfc/meson.build
@@ -15,6 +15,7 @@ endif
 if (arch_subdir != 'x86' or not dpdk_conf.get('RTE_ARCH_64')) and (arch_subdir != 'arm' or not host_machine.cpu_family().startswith('aarch64'))
     build = false
     reason = 'only supported on x86_64 and aarch64'
+    subdir_done()
 endif
 
 extra_flags = []
@@ -39,6 +40,29 @@ foreach flag: extra_flags
     endif
 endforeach
 
+# for gcc compiles we need -latomic for 128-bit atomic ops
+if cc.get_id() == 'gcc'
+    libatomic_dep = cc.find_library('atomic', required: false)
+    if not libatomic_dep.found()
+        build = false
+        reason = 'missing dependency, "libatomic"'
+        subdir_done()
+    endif
+
+    # libatomic could be half-installed when above check finds it but
+    # linkage fails
+    atomic_link_code = '''
+    #include <stdio.h>
+    void main() { printf("libatomic link check\n"); }
+    '''
+    if not cc.links(atomic_link_code, dependencies: libatomic_dep)
+        build = false
+        reason = 'broken dependency, "libatomic"'
+        subdir_done()
+    endif
+    ext_deps += libatomic_dep
+endif
+
 deps += ['common_sfc_efx', 'bus_pci']
 sources = files(
         'sfc_ethdev.c',
diff --git a/drivers/net/sfc/sfc_flow.c b/drivers/net/sfc/sfc_flow.c
index 2db8af1759..1294dbd3a7 100644
--- a/drivers/net/sfc/sfc_flow.c
+++ b/drivers/net/sfc/sfc_flow.c
@@ -24,6 +24,7 @@
 #include "sfc_flow.h"
 #include "sfc_log.h"
 #include "sfc_dp_rx.h"
+#include "sfc_mae_counter.h"
 
 struct sfc_flow_ops_by_spec {
 	sfc_flow_parse_cb_t	*parse;
@@ -2854,6 +2855,12 @@ sfc_flow_stop(struct sfc_adapter *sa)
 		efx_rx_scale_context_free(sa->nic, rss->dummy_rss_context);
 		rss->dummy_rss_context = EFX_RSS_CONTEXT_DEFAULT;
 	}
+
+	/*
+	 * The MAE counter service is not stopped on flow rule removal to
+	 * avoid extra work. Make sure that it is stopped here.
+	 */
+	sfc_mae_counter_stop(sa);
 }
 
 int
diff --git a/drivers/net/sfc/sfc_mae.c b/drivers/net/sfc/sfc_mae.c
index 8ffcf72d88..c3efd5b407 100644
--- a/drivers/net/sfc/sfc_mae.c
+++ b/drivers/net/sfc/sfc_mae.c
@@ -19,6 +19,7 @@
 #include "sfc_mae_counter.h"
 #include "sfc_log.h"
 #include "sfc_switch.h"
+#include "sfc_service.h"
 
 static int
 sfc_mae_assign_entity_mport(struct sfc_adapter *sa,
@@ -30,6 +31,19 @@ sfc_mae_assign_entity_mport(struct sfc_adapter *sa,
 					      mportp);
 }
 
+static int
+sfc_mae_counter_registry_init(struct sfc_mae_counter_registry *registry,
+			      uint32_t nb_counters_max)
+{
+	return sfc_mae_counters_init(&registry->counters, nb_counters_max);
+}
+
+static void
+sfc_mae_counter_registry_fini(struct sfc_mae_counter_registry *registry)
+{
+	sfc_mae_counters_fini(&registry->counters);
+}
+
 int
 sfc_mae_attach(struct sfc_adapter *sa)
 {
@@ -59,6 +73,15 @@ sfc_mae_attach(struct sfc_adapter *sa)
 	if (rc != 0)
 		goto fail_mae_get_limits;
 
+	sfc_log_init(sa, "init MAE counter registry");
+	rc = sfc_mae_counter_registry_init(&mae->counter_registry,
+					   limits.eml_max_n_counters);
+	if (rc != 0) {
+		sfc_err(sa, "failed to init MAE counters registry for %u entries: %s",
+			limits.eml_max_n_counters, rte_strerror(rc));
+		goto fail_counter_registry_init;
+	}
+
 	sfc_log_init(sa, "assign entity MPORT");
 	rc = sfc_mae_assign_entity_mport(sa, &entity_mport);
 	if (rc != 0)
@@ -107,6 +130,9 @@ sfc_mae_attach(struct sfc_adapter *sa)
 fail_mae_assign_switch_port:
 fail_mae_assign_switch_domain:
 fail_mae_assign_entity_mport:
+	sfc_mae_counter_registry_fini(&mae->counter_registry);
+
+fail_counter_registry_init:
 fail_mae_get_limits:
 	efx_mae_fini(sa->nic);
 
@@ -131,6 +157,7 @@ sfc_mae_detach(struct sfc_adapter *sa)
 		return;
 
 	rte_free(mae->bounce_eh.buf);
+	sfc_mae_counter_registry_fini(&mae->counter_registry);
 
 	efx_mae_fini(sa->nic);
 
@@ -480,9 +507,72 @@ sfc_mae_encap_header_disable(struct sfc_adapter *sa,
 	--(fw_rsrc->refcnt);
 }
 
+static int
+sfc_mae_counters_enable(struct sfc_adapter *sa,
+			struct sfc_mae_counter_id *counters,
+			unsigned int n_counters,
+			efx_mae_actions_t *action_set_spec)
+{
+	int rc;
+
+	sfc_log_init(sa, "entry");
+
+	if (n_counters == 0) {
+		sfc_log_init(sa, "no counters - skip");
+		return 0;
+	}
+
+	SFC_ASSERT(sfc_adapter_is_locked(sa));
+	SFC_ASSERT(n_counters == 1);
+
+	rc = sfc_mae_counter_enable(sa, &counters[0]);
+	if (rc != 0) {
+		sfc_err(sa, "failed to enable MAE counter %u: %s",
+			counters[0].mae_id.id, rte_strerror(rc));
+		goto fail_counter_add;
+	}
+
+	rc = efx_mae_action_set_fill_in_counter_id(action_set_spec,
+						   &counters[0].mae_id);
+	if (rc != 0) {
+		sfc_err(sa, "failed to fill in MAE counter %u in action set: %s",
+			counters[0].mae_id.id, rte_strerror(rc));
+		goto fail_fill_in_id;
+	}
+
+	return 0;
+
+fail_fill_in_id:
+	(void)sfc_mae_counter_disable(sa, &counters[0]);
+
+fail_counter_add:
+	sfc_log_init(sa, "failed: %s", rte_strerror(rc));
+	return rc;
+}
+
+static int
+sfc_mae_counters_disable(struct sfc_adapter *sa,
+			 struct sfc_mae_counter_id *counters,
+			 unsigned int n_counters)
+{
+	if (n_counters == 0)
+		return 0;
+
+	SFC_ASSERT(sfc_adapter_is_locked(sa));
+	SFC_ASSERT(n_counters == 1);
+
+	if (counters[0].mae_id.id == EFX_MAE_RSRC_ID_INVALID) {
+		sfc_err(sa, "failed to disable: already disabled");
+		return EALREADY;
+	}
+
+	return sfc_mae_counter_disable(sa, &counters[0]);
+}
+
 static struct sfc_mae_action_set *
 sfc_mae_action_set_attach(struct sfc_adapter *sa,
 			  const struct sfc_mae_encap_header *encap_header,
+			  unsigned int n_count,
 			  const efx_mae_actions_t *spec)
 {
 	struct sfc_mae_action_set *action_set;
@@ -491,7 +581,12 @@ sfc_mae_action_set_attach(struct sfc_adapter *sa,
 	SFC_ASSERT(sfc_adapter_is_locked(sa));
 
 	TAILQ_FOREACH(action_set, &mae->action_sets, entries) {
+		/*
+		 * Shared counters are not supported, hence action sets with
+		 * COUNT are not attachable.
+		 */
 		if (action_set->encap_header == encap_header &&
+		    n_count == 0 &&
 		    efx_mae_action_set_specs_equal(action_set->spec, spec)) {
 			sfc_dbg(sa, "attaching to action_set=%p", action_set);
 			++(action_set->refcnt);
@@ -504,18 +599,52 @@ sfc_mae_action_set_attach(struct sfc_adapter *sa,
 
 static int
 sfc_mae_action_set_add(struct sfc_adapter *sa,
+		       const struct rte_flow_action actions[],
 		       efx_mae_actions_t *spec,
 		       struct sfc_mae_encap_header *encap_header,
+		       unsigned int n_counters,
 		       struct sfc_mae_action_set **action_setp)
 {
 	struct sfc_mae_action_set *action_set;
 	struct sfc_mae *mae = &sa->mae;
+	unsigned int i;
 
 	SFC_ASSERT(sfc_adapter_is_locked(sa));
 
 	action_set = rte_zmalloc("sfc_mae_action_set", sizeof(*action_set), 0);
-	if (action_set == NULL)
+	if (action_set == NULL) {
+		sfc_err(sa, "failed to alloc action set");
 		return ENOMEM;
+	}
+
+	if (n_counters > 0) {
+		const struct rte_flow_action *action;
+
+		action_set->counters = rte_malloc("sfc_mae_counter_ids",
+			sizeof(action_set->counters[0]) * n_counters, 0);
+		if (action_set->counters == NULL) {
+			rte_free(action_set);
+			sfc_err(sa, "failed to alloc counters");
+			return ENOMEM;
+		}
+
+		for (action = actions, i = 0;
+		     action->type != RTE_FLOW_ACTION_TYPE_END && i < n_counters;
+		     ++action) {
+			const struct rte_flow_action_count *conf;
+
+			if (action->type != RTE_FLOW_ACTION_TYPE_COUNT)
+				continue;
+
+			conf = action->conf;
+
+			action_set->counters[i].mae_id.id =
+				EFX_MAE_RSRC_ID_INVALID;
+			action_set->counters[i].rte_id = conf->id;
+			i++;
+		}
+		action_set->n_counters = n_counters;
+	}
 
 	action_set->refcnt = 1;
 	action_set->spec = spec;
@@ -555,6 +684,12 @@ sfc_mae_action_set_del(struct sfc_adapter *sa,
 
 	efx_mae_action_set_spec_fini(sa->nic, action_set->spec);
 	sfc_mae_encap_header_del(sa, action_set->encap_header);
+	if (action_set->n_counters > 0) {
+		SFC_ASSERT(action_set->n_counters == 1);
+		SFC_ASSERT(action_set->counters[0].mae_id.id ==
+			   EFX_MAE_RSRC_ID_INVALID);
+		rte_free(action_set->counters);
+	}
 	TAILQ_REMOVE(&mae->action_sets, action_set, entries);
 	rte_free(action_set);
 
@@ -566,6 +701,7 @@ sfc_mae_action_set_enable(struct sfc_adapter *sa,
 			  struct sfc_mae_action_set *action_set)
 {
 	struct sfc_mae_encap_header *encap_header = action_set->encap_header;
+	struct sfc_mae_counter_id *counters = action_set->counters;
 	struct sfc_mae_fw_rsrc *fw_rsrc = &action_set->fw_rsrc;
 	int rc;
 
@@ -580,14 +716,26 @@ sfc_mae_action_set_enable(struct sfc_adapter *sa,
 		if (rc != 0)
 			return rc;
 
-		rc = efx_mae_action_set_alloc(sa->nic, action_set->spec,
-					      &fw_rsrc->aset_id);
+		rc = sfc_mae_counters_enable(sa, counters,
+					     action_set->n_counters,
+					     action_set->spec);
 		if (rc != 0) {
+			sfc_err(sa, "failed to enable %u MAE counters: %s",
+				action_set->n_counters, rte_strerror(rc));
+
 			sfc_mae_encap_header_disable(sa, encap_header);
+			return rc;
+		}
 
+		rc = efx_mae_action_set_alloc(sa->nic, action_set->spec,
+					      &fw_rsrc->aset_id);
+		if (rc != 0) {
 			sfc_err(sa, "failed to enable action_set=%p: %s",
 				action_set, strerror(rc));
 
+			(void)sfc_mae_counters_disable(sa, counters,
+						       action_set->n_counters);
+			sfc_mae_encap_header_disable(sa, encap_header);
 			return rc;
 		}
 
@@ -627,6 +775,13 @@ sfc_mae_action_set_disable(struct sfc_adapter *sa,
 		}
 		fw_rsrc->aset_id.id = EFX_MAE_RSRC_ID_INVALID;
 
+		rc = sfc_mae_counters_disable(sa, action_set->counters,
+					      action_set->n_counters);
+		if (rc != 0) {
+			sfc_err(sa, "failed to disable %u MAE counters: %s",
+				action_set->n_counters, rte_strerror(rc));
+		}
+
 		sfc_mae_encap_header_disable(sa, action_set->encap_header);
 	}
 
@@ -2508,6 +2663,48 @@ sfc_mae_rule_parse_action_mark(const struct rte_flow_action_mark *conf,
 	return efx_mae_action_set_populate_mark(spec, conf->id);
 }
 
+static int
+sfc_mae_rule_parse_action_count(struct sfc_adapter *sa,
+				const struct rte_flow_action_count *conf,
+				efx_mae_actions_t *spec)
+{
+	int rc;
+
+	if (conf->shared) {
+		rc = ENOTSUP;
+		goto fail_counter_shared;
+	}
+
+	if ((sa->counter_rxq.state & SFC_COUNTER_RXQ_INITIALIZED) == 0) {
+		sfc_err(sa,
+			"counter queue is not configured for COUNT action");
+		rc = EINVAL;
+		goto fail_counter_queue_uninit;
+	}
+
+	if (sfc_get_service_lcore(SOCKET_ID_ANY) == RTE_MAX_LCORE) {
+		rc = EINVAL;
+		goto fail_no_service_core;
+	}
+
+	rc = efx_mae_action_set_populate_count(spec);
+	if (rc != 0) {
+		sfc_err(sa,
+			"failed to populate counters in MAE action set: %s",
+			rte_strerror(rc));
+		goto fail_populate_count;
+	}
+
+	return 0;
+
+fail_populate_count:
+fail_no_service_core:
+fail_counter_queue_uninit:
+fail_counter_shared:
+
+	return rc;
+}
+
 static int
 sfc_mae_rule_parse_action_phy_port(struct sfc_adapter *sa,
 				   const struct rte_flow_action_phy_port *conf,
@@ -2623,6 +2820,11 @@ sfc_mae_rule_parse_action(struct sfc_adapter *sa,
 							   spec, error);
 		custom_error = B_TRUE;
 		break;
+	case RTE_FLOW_ACTION_TYPE_COUNT:
+		SFC_BUILD_SET_OVERFLOW(RTE_FLOW_ACTION_TYPE_COUNT,
+				       bundle->actions_mask);
+		rc = sfc_mae_rule_parse_action_count(sa, action->conf, spec);
+		break;
 	case RTE_FLOW_ACTION_TYPE_FLAG:
 		SFC_BUILD_SET_OVERFLOW(RTE_FLOW_ACTION_TYPE_FLAG,
 				       bundle->actions_mask);
@@ -2708,6 +2910,7 @@ sfc_mae_rule_parse_actions(struct sfc_adapter *sa,
 	const struct rte_flow_action *action;
 	struct sfc_mae *mae = &sa->mae;
 	efx_mae_actions_t *spec;
+	unsigned int n_count;
 	int rc;
 
 	rte_errno = 0;
@@ -2745,15 +2948,22 @@ sfc_mae_rule_parse_actions(struct sfc_adapter *sa,
 	if (rc != 0)
 		goto fail_process_encap_header;
 
+	n_count = efx_mae_action_set_get_nb_count(spec);
+	if (n_count > 1) {
+		rc = ENOTSUP;
+		sfc_err(sa, "too many count actions requested: %u", n_count);
+		goto fail_nb_count;
+	}
+
 	spec_mae->action_set = sfc_mae_action_set_attach(sa, encap_header,
-							 spec);
+							 n_count, spec);
 	if (spec_mae->action_set != NULL) {
 		sfc_mae_encap_header_del(sa, encap_header);
 		efx_mae_action_set_spec_fini(sa->nic, spec);
 		return 0;
 	}
 
-	rc = sfc_mae_action_set_add(sa, spec, encap_header,
+	rc = sfc_mae_action_set_add(sa, actions, spec, encap_header, n_count,
 				    &spec_mae->action_set);
 	if (rc != 0)
 		goto fail_action_set_add;
@@ -2761,6 +2971,7 @@ sfc_mae_rule_parse_actions(struct sfc_adapter *sa,
 	return 0;
 
 fail_action_set_add:
+fail_nb_count:
 	sfc_mae_encap_header_del(sa, encap_header);
 
 fail_process_encap_header:
@@ -2915,6 +3126,15 @@ sfc_mae_flow_insert(struct sfc_adapter *sa,
 	if (rc != 0)
 		goto fail_action_set_enable;
 
+	if (action_set->n_counters > 0) {
+		rc = sfc_mae_counter_start(sa);
+		if (rc != 0) {
+			sfc_err(sa, "failed to start MAE counters support: %s",
+				rte_strerror(rc));
+			goto fail_mae_counter_start;
+		}
+	}
+
 	rc = efx_mae_action_rule_insert(sa->nic, spec_mae->match_spec,
 					NULL, &fw_rsrc->aset_id,
 					&spec_mae->rule_id);
@@ -2927,6 +3147,7 @@ sfc_mae_flow_insert(struct sfc_adapter *sa,
 	return 0;
 
 fail_action_rule_insert:
+fail_mae_counter_start:
 	sfc_mae_action_set_disable(sa, action_set);
 
 fail_action_set_enable:
diff --git a/drivers/net/sfc/sfc_mae.h b/drivers/net/sfc/sfc_mae.h
index 9740e54e49..2cc4334890 100644
--- a/drivers/net/sfc/sfc_mae.h
+++ b/drivers/net/sfc/sfc_mae.h
@@ -16,6 +16,8 @@
 
 #include "efx.h"
 
+#include "sfc_stats.h"
+
 #ifdef __cplusplus
 extern "C" {
 #endif
@@ -54,10 +56,20 @@ struct sfc_mae_encap_header {
 
 TAILQ_HEAD(sfc_mae_encap_headers, sfc_mae_encap_header);
 
+/* Counter ID */
+struct sfc_mae_counter_id {
+	/* ID of a counter in MAE */
+	efx_counter_t			mae_id;
+	/* ID of a counter in RTE */
+	uint32_t			rte_id;
+};
+
 /** Action set registry entry */
 struct sfc_mae_action_set {
 	TAILQ_ENTRY(sfc_mae_action_set)	entries;
 	unsigned int			refcnt;
+	struct sfc_mae_counter_id	*counters;
+	uint32_t			n_counters;
 	efx_mae_actions_t		*spec;
 	struct sfc_mae_encap_header	*encap_header;
 	struct sfc_mae_fw_rsrc		fw_rsrc;
@@ -83,6 +95,50 @@ struct sfc_mae_bounce_eh {
 	efx_tunnel_protocol_t		type;
 };
 
+/** Counter collection entry */
+struct sfc_mae_counter {
+	bool				inuse;
+	uint32_t			generation_count;
+	union sfc_pkts_bytes		value;
+	union sfc_pkts_bytes		reset;
+};
+
+struct sfc_mae_counters_xstats {
+	uint64_t			not_inuse_update;
+	uint64_t			realloc_update;
+};
+
+struct sfc_mae_counters {
+	/** An array of all MAE counters */
+	struct sfc_mae_counter		*mae_counters;
+	/** Extra statistics for counters */
+	struct sfc_mae_counters_xstats	xstats;
+	/** Count of all MAE counters */
+	unsigned int			n_mae_counters;
+};
+
+struct sfc_mae_counter_registry {
+	/* Common counter information */
+	/** Counters collection */
+	struct sfc_mae_counters		counters;
+
+	/* Information used by counter update service */
+	/** Callback to get packets from RxQ */
+	eth_rx_burst_t			rx_pkt_burst;
+	/** Data for the callback to get packets */
+	struct sfc_dp_rxq		*rx_dp;
+	/** Number of buffers pushed to the RxQ */
+	unsigned int			pushed_n_buffers;
+	/** Are credits used by counter stream */
+	bool				use_credits;
+
+	/* Information used by configuration routines */
+	/** Counter service core ID */
+	uint32_t			service_core_id;
+	/** Counter service ID */
+	uint32_t			service_id;
+};
+
 struct sfc_mae {
 	/** Assigned switch domain identifier */
 	uint16_t			switch_domain_id;
@@ -104,6 +160,10 @@ struct sfc_mae {
 	struct sfc_mae_action_sets	action_sets;
 	/** Encap. header bounce buffer */
 	struct sfc_mae_bounce_eh	bounce_eh;
+	/** Flag indicating whether counter-only RxQ is running */
+	bool				counter_rxq_running;
+	/** Counter registry */
+	struct sfc_mae_counter_registry	counter_registry;
 };
 
 struct sfc_adapter;
diff --git a/drivers/net/sfc/sfc_mae_counter.c b/drivers/net/sfc/sfc_mae_counter.c
index c7646cf7b1..b0cb8157aa 100644
--- a/drivers/net/sfc/sfc_mae_counter.c
+++ b/drivers/net/sfc/sfc_mae_counter.c
@@ -4,8 +4,10 @@
  */
 
 #include <rte_common.h>
+#include <rte_service_component.h>
 
 #include "efx.h"
+#include "efx_regs_counters_pkt_format.h"
 
 #include "sfc_ev.h"
 #include "sfc.h"
@@ -49,6 +51,520 @@ sfc_mae_counter_rxq_required(struct sfc_adapter *sa)
 	return true;
 }
 
+int
+sfc_mae_counter_enable(struct sfc_adapter *sa,
+		       struct sfc_mae_counter_id *counterp)
+{
+	struct sfc_mae_counter_registry *reg = &sa->mae.counter_registry;
+	struct sfc_mae_counters *counters = &reg->counters;
+	struct sfc_mae_counter *p;
+	efx_counter_t mae_counter;
+	uint32_t generation_count;
+	uint32_t unused;
+	int rc;
+
+	/*
+	 * The actual count of counters allocated is ignored since a failure
+	 * to allocate a single counter is indicated by a non-zero return code.
+	 */
+	rc = efx_mae_counters_alloc(sa->nic, 1, &unused, &mae_counter,
+				    &generation_count);
+	if (rc != 0) {
+		sfc_err(sa, "failed to alloc MAE counter: %s",
+			rte_strerror(rc));
+		goto fail_mae_counter_alloc;
+	}
+
+	if (mae_counter.id >= counters->n_mae_counters) {
+		/*
+		 * The ID of a counter is expected to be in the range from 0
+		 * to the maximum count of counters, so that it always fits
+		 * into the array pre-allocated for the maximum counter ID.
+		 */
+		sfc_err(sa, "MAE counter ID is out of expected range");
+		rc = EFAULT;
+		goto fail_counter_id_range;
+	}
+
+	counterp->mae_id = mae_counter;
+
+	p = &counters->mae_counters[mae_counter.id];
+
+	/*
+	 * Ordering is relaxed since it is the only operation on counter value.
+	 * And it does not depend on different stores/loads in other threads.
+	 * Paired with relaxed ordering in counter increment.
+	 */
+	__atomic_store(&p->reset.pkts_bytes.int128,
+		       &p->value.pkts_bytes.int128, __ATOMIC_RELAXED);
+	p->generation_count = generation_count;
+
+	/*
+	 * The flag is set at the very end of add operation and reset
+	 * at the beginning of delete operation. Release ordering is
+	 * paired with acquire ordering on load in counter increment operation.
+	 */
+	__atomic_store_n(&p->inuse, true, __ATOMIC_RELEASE);
+
+	sfc_info(sa, "enabled MAE counter #%u with reset pkts=%" PRIu64
+		 " bytes=%" PRIu64, mae_counter.id,
+		 p->reset.pkts, p->reset.bytes);
+
+	return 0;
+
+fail_counter_id_range:
+	(void)efx_mae_counters_free(sa->nic, 1, &unused, &mae_counter, NULL);
+
+fail_mae_counter_alloc:
+	sfc_log_init(sa, "failed: %s", rte_strerror(rc));
+	return rc;
+}
+
+int
+sfc_mae_counter_disable(struct sfc_adapter *sa,
+			struct sfc_mae_counter_id *counter)
+{
+	struct sfc_mae_counter_registry *reg = &sa->mae.counter_registry;
+	struct sfc_mae_counters *counters = &reg->counters;
+	struct sfc_mae_counter *p;
+	uint32_t unused;
+	int rc;
+
+	if (counter->mae_id.id == EFX_MAE_RSRC_ID_INVALID)
+		return 0;
+
+	SFC_ASSERT(counter->mae_id.id < counters->n_mae_counters);
+	/*
+	 * The flag is set at the very end of add operation and reset
+	 * at the beginning of delete operation. Release ordering is
+	 * paired with acquire ordering on load in counter increment operation.
+	 */
+	p = &counters->mae_counters[counter->mae_id.id];
+	__atomic_store_n(&p->inuse, false, __ATOMIC_RELEASE);
+
+	rc = efx_mae_counters_free(sa->nic, 1, &unused, &counter->mae_id, NULL);
+	if (rc != 0)
+		sfc_err(sa, "failed to free MAE counter %u: %s",
+			counter->mae_id.id, rte_strerror(rc));
+
+	sfc_info(sa, "disabled MAE counter #%u with reset pkts=%" PRIu64
+		 " bytes=%" PRIu64, counter->mae_id.id,
+		 p->reset.pkts, p->reset.bytes);
+
+	/*
+	 * Do this regardless of the efx_mae_counters_free() return value.
+	 * If there's some error, the resulting resource leakage is bad, but
+	 * nothing sensible can be done in this case.
+	 */
+	counter->mae_id.id = EFX_MAE_RSRC_ID_INVALID;
+
+	return rc;
+}
+
+static void
+sfc_mae_counter_increment(struct sfc_adapter *sa,
+			  struct sfc_mae_counters *counters,
+			  uint32_t mae_counter_id,
+			  uint32_t generation_count,
+			  uint64_t pkts, uint64_t bytes)
+{
+	struct sfc_mae_counter *p = &counters->mae_counters[mae_counter_id];
+	struct sfc_mae_counters_xstats *xstats = &counters->xstats;
+	union sfc_pkts_bytes cnt_val;
+	bool inuse;
+
+	/*
+	 * Acquire ordering is paired with release ordering in counter add
+	 * and delete operations.
+	 */
+	__atomic_load(&p->inuse, &inuse, __ATOMIC_ACQUIRE);
+	if (!inuse) {
+		/*
+		 * There are two possible cases:
+		 * 1) The counter has just been allocated. A counter update
+		 *    that arrives too early cannot be processed properly.
+		 * 2) A stale update for a freed and not reallocated counter.
+		 *    There is no point in processing such an update.
+		 */
+		xstats->not_inuse_update++;
+		return;
+	}
+
+	if (unlikely(generation_count < p->generation_count)) {
+		/*
+		 * It is a stale update for the reallocated counter
+		 * (i.e., freed and the same ID allocated again).
+		 */
+		xstats->realloc_update++;
+		return;
+	}
+
+	cnt_val.pkts = p->value.pkts + pkts;
+	cnt_val.bytes = p->value.bytes + bytes;
+
+	/*
+	 * Ordering is relaxed since it is the only operation on counter value.
+	 * And it does not depend on different stores/loads in other threads.
+	 * Paired with relaxed ordering on counter reset.
+	 */
+	__atomic_store(&p->value.pkts_bytes,
+		       &cnt_val.pkts_bytes, __ATOMIC_RELAXED);
+
+	sfc_info(sa, "update MAE counter #%u: pkts+%" PRIu64 "=%" PRIu64
+		 ", bytes+%" PRIu64 "=%" PRIu64, mae_counter_id,
+		 pkts, cnt_val.pkts, bytes, cnt_val.bytes);
+}
+
+static void
+sfc_mae_parse_counter_packet(struct sfc_adapter *sa,
+			     struct sfc_mae_counter_registry *counter_registry,
+			     const struct rte_mbuf *m)
+{
+	uint32_t generation_count;
+	const efx_xword_t *hdr;
+	const efx_oword_t *counters_data;
+	unsigned int version;
+	unsigned int id;
+	unsigned int header_offset;
+	unsigned int payload_offset;
+	unsigned int counter_count;
+	unsigned int required_len;
+	unsigned int i;
+
+	if (unlikely(m->nb_segs != 1)) {
+		sfc_err(sa, "unexpectedly scattered MAE counters packet (%u segments)",
+			m->nb_segs);
+		return;
+	}
+
+	if (unlikely(m->data_len < ER_RX_SL_PACKETISER_HEADER_WORD_SIZE)) {
+		sfc_err(sa, "too short MAE counters packet (%u bytes)",
+			m->data_len);
+		return;
+	}
+
+	/*
+	 * The generation count is located in the Rx prefix in the USER_MARK
+	 * field which is written into hash.fdir.hi field of an mbuf. See
+	 * SF-123581-TC SmartNIC Datapath Offloads section 4.7.5 Counters.
+	 */
+	generation_count = m->hash.fdir.hi;
+
+	hdr = rte_pktmbuf_mtod(m, const efx_xword_t *);
+
+	version = EFX_XWORD_FIELD(*hdr, ERF_SC_PACKETISER_HEADER_VERSION);
+	if (unlikely(version != ERF_SC_PACKETISER_HEADER_VERSION_2)) {
+		sfc_err(sa, "unexpected MAE counters packet version %u",
+			version);
+		return;
+	}
+
+	id = EFX_XWORD_FIELD(*hdr, ERF_SC_PACKETISER_HEADER_IDENTIFIER);
+	if (unlikely(id != ERF_SC_PACKETISER_HEADER_IDENTIFIER_AR)) {
+		sfc_err(sa, "unexpected MAE counters source identifier %u", id);
+		return;
+	}
+
+	/* Packet layout definitions assume fixed header offset in fact */
+	header_offset =
+		EFX_XWORD_FIELD(*hdr, ERF_SC_PACKETISER_HEADER_HEADER_OFFSET);
+	if (unlikely(header_offset !=
+		     ERF_SC_PACKETISER_HEADER_HEADER_OFFSET_DEFAULT)) {
+		sfc_err(sa, "unexpected MAE counters packet header offset %u",
+			header_offset);
+		return;
+	}
+
+	payload_offset =
+		EFX_XWORD_FIELD(*hdr, ERF_SC_PACKETISER_HEADER_PAYLOAD_OFFSET);
+
+	counter_count = EFX_XWORD_FIELD(*hdr, ERF_SC_PACKETISER_HEADER_COUNT);
+
+	required_len = payload_offset +
+			counter_count * sizeof(counters_data[0]);
+	if (unlikely(required_len > m->data_len)) {
+		sfc_err(sa, "truncated MAE counters packet: %u counters, packet length is %u vs %u required",
+			counter_count, m->data_len, required_len);
+		/*
+		 * In theory it is possible to process the available counters
+		 * data, but such a condition is really unexpected and it is
+		 * better to treat the entire packet as corrupted.
+		 */
+		return;
+	}
+
+	/* Ensure that counters data is 32-bit aligned */
+	if (unlikely(payload_offset % sizeof(uint32_t) != 0)) {
+		sfc_err(sa, "unsupported MAE counters payload offset %u, must be 32-bit aligned",
+			payload_offset);
+		return;
+	}
+	RTE_BUILD_BUG_ON(sizeof(counters_data[0]) !=
+			ER_RX_SL_PACKETISER_PAYLOAD_WORD_SIZE);
+
+	counters_data =
+		rte_pktmbuf_mtod_offset(m, const efx_oword_t *, payload_offset);
+
+	sfc_info(sa, "update %u MAE counters with gc=%u",
+		 counter_count, generation_count);
+
+	for (i = 0; i < counter_count; ++i) {
+		uint32_t packet_count_lo;
+		uint32_t packet_count_hi;
+		uint32_t byte_count_lo;
+		uint32_t byte_count_hi;
+
+		/*
+		 * Use 32-bit field accessors below since counters data
+		 * is not 64-bit aligned.
+		 * 32-bit alignment is checked above taking into account
+		 * that start of packet data is 32-bit aligned
+		 * (cache-line size aligned in fact).
+		 */
+		packet_count_lo =
+			EFX_OWORD_FIELD32(counters_data[i],
+				ERF_SC_PACKETISER_PAYLOAD_PACKET_COUNT_LO);
+		packet_count_hi =
+			EFX_OWORD_FIELD32(counters_data[i],
+				ERF_SC_PACKETISER_PAYLOAD_PACKET_COUNT_HI);
+		byte_count_lo =
+			EFX_OWORD_FIELD32(counters_data[i],
+				ERF_SC_PACKETISER_PAYLOAD_BYTE_COUNT_LO);
+		byte_count_hi =
+			EFX_OWORD_FIELD32(counters_data[i],
+				ERF_SC_PACKETISER_PAYLOAD_BYTE_COUNT_HI);
+		sfc_mae_counter_increment(sa,
+			&counter_registry->counters,
+			EFX_OWORD_FIELD32(counters_data[i],
+				ERF_SC_PACKETISER_PAYLOAD_COUNTER_INDEX),
+			generation_count,
+			(uint64_t)packet_count_lo |
+			((uint64_t)packet_count_hi <<
+			 ERF_SC_PACKETISER_PAYLOAD_PACKET_COUNT_LO_WIDTH),
+			(uint64_t)byte_count_lo |
+			((uint64_t)byte_count_hi <<
+			 ERF_SC_PACKETISER_PAYLOAD_BYTE_COUNT_LO_WIDTH));
+	}
+}
+
+static int32_t
+sfc_mae_counter_routine(void *arg)
+{
+	struct sfc_adapter *sa = arg;
+	struct sfc_mae_counter_registry *counter_registry =
+		&sa->mae.counter_registry;
+	struct rte_mbuf *mbufs[SFC_MAE_COUNTER_RX_BURST];
+	unsigned int pushed_diff;
+	unsigned int pushed;
+	unsigned int i;
+	uint16_t n;
+	int rc;
+
+	n = counter_registry->rx_pkt_burst(counter_registry->rx_dp, mbufs,
+					   SFC_MAE_COUNTER_RX_BURST);
+
+	for (i = 0; i < n; i++)
+		sfc_mae_parse_counter_packet(sa, counter_registry, mbufs[i]);
+
+	rte_pktmbuf_free_bulk(mbufs, n);
+
+	if (!counter_registry->use_credits)
+		return 0;
+
+	pushed = sfc_rx_get_pushed(sa, counter_registry->rx_dp);
+	pushed_diff = pushed - counter_registry->pushed_n_buffers;
+
+	if (pushed_diff >= SFC_COUNTER_RXQ_REFILL_LEVEL) {
+		rc = efx_mae_counters_stream_give_credits(sa->nic, pushed_diff);
+		if (rc == 0) {
+			counter_registry->pushed_n_buffers = pushed;
+		} else {
+			/*
+			 * FIXME: counters might be important for the
+			 * application. Handle the error in order to recover
+			 * from the failure
+			 */
+			SFC_GENERIC_LOG(DEBUG, "Give credits failed: %s",
+					rte_strerror(rc));
+		}
+	}
+
+	return 0;
+}
+
+static void
+sfc_mae_counter_service_unregister(struct sfc_adapter *sa)
+{
+	struct sfc_mae_counter_registry *registry =
+		&sa->mae.counter_registry;
+	const unsigned int wait_ms = 10000;
+	unsigned int i;
+
+	rte_service_runstate_set(registry->service_id, 0);
+	rte_service_component_runstate_set(registry->service_id, 0);
+
+	/*
+	 * Wait for the counter routine to finish the last iteration.
+	 * Give up on timeout.
+	 */
+	for (i = 0; i < wait_ms; i++) {
+		if (rte_service_may_be_active(registry->service_id) == 0)
+			break;
+
+		rte_delay_ms(1);
+	}
+	if (i == wait_ms)
+		sfc_warn(sa, "failed to wait for counter service to stop");
+
+	rte_service_map_lcore_set(registry->service_id,
+				  registry->service_core_id, 0);
+
+	rte_service_component_unregister(registry->service_id);
+}
+
+static struct sfc_rxq_info *
+sfc_counter_rxq_info_get(struct sfc_adapter *sa)
+{
+	return &sfc_sa2shared(sa)->rxq_info[sa->counter_rxq.sw_index];
+}
+
+static int
+sfc_mae_counter_service_register(struct sfc_adapter *sa,
+				 uint32_t counter_stream_flags)
+{
+	struct rte_service_spec service;
+	char counter_service_name[sizeof(service.name)] = "counter_service";
+	struct sfc_mae_counter_registry *counter_registry =
+		&sa->mae.counter_registry;
+	uint32_t cid;
+	uint32_t sid;
+	int rc;
+
+	sfc_log_init(sa, "entry");
+
+	/* Prepare service info */
+	memset(&service, 0, sizeof(service));
+	rte_strscpy(service.name, counter_service_name, sizeof(service.name));
+	service.socket_id = sa->socket_id;
+	service.callback = sfc_mae_counter_routine;
+	service.callback_userdata = sa;
+	counter_registry->rx_pkt_burst = sa->eth_dev->rx_pkt_burst;
+	counter_registry->rx_dp = sfc_counter_rxq_info_get(sa)->dp;
+	counter_registry->pushed_n_buffers = 0;
+	counter_registry->use_credits = counter_stream_flags &
+		EFX_MAE_COUNTERS_STREAM_OUT_USES_CREDITS;
+
+	cid = sfc_get_service_lcore(sa->socket_id);
+	if (cid == RTE_MAX_LCORE && sa->socket_id != SOCKET_ID_ANY) {
+		/* Warn and try to allocate on any NUMA node */
+		sfc_warn(sa,
+			"failed to get service lcore for counter service at socket %d",
+			sa->socket_id);
+
+		cid = sfc_get_service_lcore(SOCKET_ID_ANY);
+	}
+	if (cid == RTE_MAX_LCORE) {
+		rc = ENOTSUP;
+		sfc_err(sa, "failed to get service lcore for counter service");
+		goto fail_get_service_lcore;
+	}
+
+	/* Service core may be in "stopped" state, start it */
+	rc = rte_service_lcore_start(cid);
+	if (rc != 0 && rc != -EALREADY) {
+		sfc_err(sa, "failed to start service core for counter service: %s",
+			rte_strerror(-rc));
+		rc = ENOTSUP;
+		goto fail_start_core;
+	}
+
+	/* Register counter service */
+	rc = rte_service_component_register(&service, &sid);
+	if (rc != 0) {
+		rc = ENOEXEC;
+		sfc_err(sa, "failed to register counter service component");
+		goto fail_register;
+	}
+
+	/* Map the service with the service core */
+	rc = rte_service_map_lcore_set(sid, cid, 1);
+	if (rc != 0) {
+		rc = -rc;
+		sfc_err(sa, "failed to map lcore for counter service: %s",
+			rte_strerror(rc));
+		goto fail_map_lcore;
+	}
+
+	/* Run the service */
+	rc = rte_service_component_runstate_set(sid, 1);
+	if (rc < 0) {
+		rc = -rc;
+		sfc_err(sa, "failed to run counter service component: %s",
+			rte_strerror(rc));
+		goto fail_component_runstate_set;
+	}
+	rc = rte_service_runstate_set(sid, 1);
+	if (rc < 0) {
+		rc = -rc;
+		sfc_err(sa, "failed to run counter service");
+		goto fail_runstate_set;
+	}
+
+	counter_registry->service_core_id = cid;
+	counter_registry->service_id = sid;
+
+	sfc_log_init(sa, "done");
+
+	return 0;
+
+fail_runstate_set:
+	rte_service_component_runstate_set(sid, 0);
+
+fail_component_runstate_set:
+	rte_service_map_lcore_set(sid, cid, 0);
+
+fail_map_lcore:
+	rte_service_component_unregister(sid);
+
+fail_register:
+fail_start_core:
+fail_get_service_lcore:
+	sfc_log_init(sa, "failed: %s", rte_strerror(rc));
+
+	return rc;
+}
+
+int
+sfc_mae_counters_init(struct sfc_mae_counters *counters,
+		      uint32_t nb_counters_max)
+{
+	int rc;
+
+	SFC_GENERIC_LOG(DEBUG, "%s: entry", __func__);
+
+	counters->mae_counters = rte_zmalloc("sfc_mae_counters",
+		sizeof(*counters->mae_counters) * nb_counters_max, 0);
+	if (counters->mae_counters == NULL) {
+		rc = ENOMEM;
+		SFC_GENERIC_LOG(ERR, "%s: failed: %s", __func__,
+				rte_strerror(rc));
+		return rc;
+	}
+
+	counters->n_mae_counters = nb_counters_max;
+
+	SFC_GENERIC_LOG(DEBUG, "%s: done", __func__);
+
+	return 0;
+}
+
+void
+sfc_mae_counters_fini(struct sfc_mae_counters *counters)
+{
+	rte_free(counters->mae_counters);
+	counters->mae_counters = NULL;
+}
+
 int
 sfc_mae_counter_rxq_attach(struct sfc_adapter *sa)
 {
@@ -215,3 +731,65 @@ sfc_mae_counter_rxq_fini(struct sfc_adapter *sa)
 
 	sfc_log_init(sa, "done");
 }
+
+void
+sfc_mae_counter_stop(struct sfc_adapter *sa)
+{
+	struct sfc_mae *mae = &sa->mae;
+
+	sfc_log_init(sa, "entry");
+
+	if (!mae->counter_rxq_running) {
+		sfc_log_init(sa, "counter queue is not running - skip");
+		return;
+	}
+
+	sfc_mae_counter_service_unregister(sa);
+	efx_mae_counters_stream_stop(sa->nic, sa->counter_rxq.sw_index, NULL);
+
+	mae->counter_rxq_running = false;
+
+	sfc_log_init(sa, "done");
+}
+
+int
+sfc_mae_counter_start(struct sfc_adapter *sa)
+{
+	struct sfc_mae *mae = &sa->mae;
+	uint32_t flags;
+	int rc;
+
+	SFC_ASSERT(sa->counter_rxq.state & SFC_COUNTER_RXQ_ATTACHED);
+
+	if (mae->counter_rxq_running)
+		return 0;
+
+	sfc_log_init(sa, "entry");
+
+	rc = efx_mae_counters_stream_start(sa->nic, sa->counter_rxq.sw_index,
+					   SFC_MAE_COUNTER_STREAM_PACKET_SIZE,
+					   0 /* No flags required */, &flags);
+	if (rc != 0) {
+		sfc_err(sa, "failed to start MAE counters stream: %s",
+			rte_strerror(rc));
+		goto fail_counter_stream;
+	}
+
+	sfc_log_init(sa, "stream start flags: 0x%x", flags);
+
+	rc = sfc_mae_counter_service_register(sa, flags);
+	if (rc != 0)
+		goto fail_service_register;
+
+	mae->counter_rxq_running = true;
+
+	return 0;
+
+fail_service_register:
+	efx_mae_counters_stream_stop(sa->nic, sa->counter_rxq.sw_index, NULL);
+
+fail_counter_stream:
+	sfc_log_init(sa, "failed: %s", rte_strerror(rc));
+
+	return rc;
+}
diff --git a/drivers/net/sfc/sfc_mae_counter.h b/drivers/net/sfc/sfc_mae_counter.h
index f16d64a999..f61a6b59cb 100644
--- a/drivers/net/sfc/sfc_mae_counter.h
+++ b/drivers/net/sfc/sfc_mae_counter.h
@@ -38,6 +38,17 @@ void sfc_mae_counter_rxq_detach(struct sfc_adapter *sa);
 int sfc_mae_counter_rxq_init(struct sfc_adapter *sa);
 void sfc_mae_counter_rxq_fini(struct sfc_adapter *sa);
 
+int sfc_mae_counters_init(struct sfc_mae_counters *counters,
+			  uint32_t nb_counters_max);
+void sfc_mae_counters_fini(struct sfc_mae_counters *counters);
+int sfc_mae_counter_enable(struct sfc_adapter *sa,
+			   struct sfc_mae_counter_id *counterp);
+int sfc_mae_counter_disable(struct sfc_adapter *sa,
+			    struct sfc_mae_counter_id *counter);
+
+int sfc_mae_counter_start(struct sfc_adapter *sa);
+void sfc_mae_counter_stop(struct sfc_adapter *sa);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/drivers/net/sfc/sfc_stats.h b/drivers/net/sfc/sfc_stats.h
new file mode 100644
index 0000000000..2d7ab71f14
--- /dev/null
+++ b/drivers/net/sfc/sfc_stats.h
@@ -0,0 +1,80 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ *
+ * Copyright(c) 2019-2021 Xilinx, Inc.
+ * Copyright(c) 2019 Solarflare Communications Inc.
+ *
+ * This software was jointly developed between OKTET Labs (under contract
+ * for Solarflare) and Solarflare Communications, Inc.
+ */
+
+#ifndef _SFC_STATS_H
+#define _SFC_STATS_H
+
+#include <stdint.h>
+
+#include <rte_atomic.h>
+
+#include "sfc_tweak.h"
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/**
+ * 64-bit packet and byte counters covered by a 128-bit integer
+ * in order to do atomic updates that guarantee consistency if
+ * required.
+ */
+union sfc_pkts_bytes {
+	RTE_STD_C11
+	struct {
+		uint64_t		pkts;
+		uint64_t		bytes;
+	};
+	rte_int128_t			pkts_bytes;
+};
+
+/**
+ * Update packet and byte counters atomically under the assumption that
+ * the counter is written on one core only.
+ */
+static inline void
+sfc_pkts_bytes_add(union sfc_pkts_bytes *st, uint64_t pkts, uint64_t bytes)
+{
+#if SFC_SW_STATS_ATOMIC
+	union sfc_pkts_bytes result;
+
+	/* Stats are written on single core only, so just load values */
+	result.pkts = st->pkts + pkts;
+	result.bytes = st->bytes + bytes;
+
+	/*
+	 * Store the result atomically to guarantee that the reader
+	 * core sees both counter updates together.
+	 */
+	__atomic_store_n(&st->pkts_bytes.int128, result.pkts_bytes.int128,
+			 __ATOMIC_RELEASE);
+#else
+	st->pkts += pkts;
+	st->bytes += bytes;
+#endif
+}
+
+/**
+ * Get an atomic copy of the packet and byte counters.
+ */
+static inline void
+sfc_pkts_bytes_get(const union sfc_pkts_bytes *st, union sfc_pkts_bytes *result)
+{
+#if SFC_SW_STATS_ATOMIC
+	result->pkts_bytes.int128 = __atomic_load_n(&st->pkts_bytes.int128,
+						    __ATOMIC_ACQUIRE);
+#else
+	*result = *st;
+#endif
+}
+
+#ifdef __cplusplus
+}
+#endif
+#endif /* _SFC_STATS_H */
diff --git a/drivers/net/sfc/sfc_tweak.h b/drivers/net/sfc/sfc_tweak.h
index f2d8701421..d09c7a3125 100644
--- a/drivers/net/sfc/sfc_tweak.h
+++ b/drivers/net/sfc/sfc_tweak.h
@@ -42,4 +42,13 @@
  */
 #define SFC_RXD_WAIT_TIMEOUT_NS_DEF	(200U * 1000)
 
+/**
+ * Ideally, reading packet and byte counters together should return
+ * consistent values, i.e. the number of bytes corresponds to the number
+ * of packets. Since counters are updated in one thread and queried in
+ * another, this requires either locking or atomics, which are very
+ * expensive from a performance point of view. So, it is disabled by default.
+ */
+#define SFC_SW_STATS_ATOMIC		0
+
 #endif /* _SFC_TWEAK_H_ */
-- 
2.30.2


^ permalink raw reply	[flat|nested] 104+ messages in thread

* [dpdk-dev] [PATCH v4 20/20] net/sfc: support flow API query for count actions
  2021-07-02  8:39 ` [dpdk-dev] [PATCH v4 00/20] net/sfc: support flow API COUNT action Andrew Rybchenko
                     ` (18 preceding siblings ...)
  2021-07-02  8:39   ` [dpdk-dev] [PATCH v4 19/20] net/sfc: support flow action COUNT in transfer rules Andrew Rybchenko
@ 2021-07-02  8:39   ` Andrew Rybchenko
  2021-07-20 12:19   ` [dpdk-dev] [PATCH v4 00/20] net/sfc: support flow API COUNT action David Marchand
  20 siblings, 0 replies; 104+ messages in thread
From: Andrew Rybchenko @ 2021-07-02  8:39 UTC (permalink / raw)
  To: dev; +Cc: David Marchand, Igor Romanov, Andy Moreton, Ivan Malov

From: Igor Romanov <igor.romanov@oktetlabs.ru>

The query reports the number of hits for a counter associated
with a flow rule.
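
A rough usage sketch follows (not part of the driver changes below;
port_id and flow are assumed to refer to a started adapter and an
already inserted transfer rule that carries a COUNT action):

#include <stdio.h>
#include <inttypes.h>

#include <rte_flow.h>

/* Query the (first) COUNT action of an existing flow rule. */
static int
query_flow_hits(uint16_t port_id, struct rte_flow *flow)
{
	struct rte_flow_query_count data = { .reset = 0 };
	const struct rte_flow_action action = {
		.type = RTE_FLOW_ACTION_TYPE_COUNT,
		.conf = NULL, /* no counter ID: first counter of the rule */
	};
	struct rte_flow_error error;
	int ret;

	ret = rte_flow_query(port_id, flow, &action, &data, &error);
	if (ret != 0)
		return ret; /* negative errno, error.message is set */

	if (data.hits_set)
		printf("hits=%" PRIu64 " bytes=%" PRIu64 "\n",
		       data.hits, data.bytes);

	return 0;
}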

Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
Reviewed-by: Ivan Malov <ivan.malov@oktetlabs.ru>
---
 drivers/net/sfc/sfc_flow.c        | 48 ++++++++++++++++++++++-
 drivers/net/sfc/sfc_flow.h        |  6 +++
 drivers/net/sfc/sfc_mae.c         | 64 +++++++++++++++++++++++++++++++
 drivers/net/sfc/sfc_mae.h         |  1 +
 drivers/net/sfc/sfc_mae_counter.c | 32 ++++++++++++++++
 drivers/net/sfc/sfc_mae_counter.h |  3 ++
 6 files changed, 153 insertions(+), 1 deletion(-)

diff --git a/drivers/net/sfc/sfc_flow.c b/drivers/net/sfc/sfc_flow.c
index 1294dbd3a7..af7f5df4bf 100644
--- a/drivers/net/sfc/sfc_flow.c
+++ b/drivers/net/sfc/sfc_flow.c
@@ -32,6 +32,7 @@ struct sfc_flow_ops_by_spec {
 	sfc_flow_cleanup_cb_t	*cleanup;
 	sfc_flow_insert_cb_t	*insert;
 	sfc_flow_remove_cb_t	*remove;
+	sfc_flow_query_cb_t	*query;
 };
 
 static sfc_flow_parse_cb_t sfc_flow_parse_rte_to_filter;
@@ -45,6 +46,7 @@ static const struct sfc_flow_ops_by_spec sfc_flow_ops_filter = {
 	.cleanup = NULL,
 	.insert = sfc_flow_filter_insert,
 	.remove = sfc_flow_filter_remove,
+	.query = NULL,
 };
 
 static const struct sfc_flow_ops_by_spec sfc_flow_ops_mae = {
@@ -53,6 +55,7 @@ static const struct sfc_flow_ops_by_spec sfc_flow_ops_mae = {
 	.cleanup = sfc_mae_flow_cleanup,
 	.insert = sfc_mae_flow_insert,
 	.remove = sfc_mae_flow_remove,
+	.query = sfc_mae_flow_query,
 };
 
 static const struct sfc_flow_ops_by_spec *
@@ -2788,6 +2791,49 @@ sfc_flow_flush(struct rte_eth_dev *dev,
 	return -ret;
 }
 
+static int
+sfc_flow_query(struct rte_eth_dev *dev,
+	       struct rte_flow *flow,
+	       const struct rte_flow_action *action,
+	       void *data,
+	       struct rte_flow_error *error)
+{
+	struct sfc_adapter *sa = sfc_adapter_by_eth_dev(dev);
+	const struct sfc_flow_ops_by_spec *ops;
+	int ret;
+
+	sfc_adapter_lock(sa);
+
+	ops = sfc_flow_get_ops_by_spec(flow);
+	if (ops == NULL || ops->query == NULL) {
+		ret = rte_flow_error_set(error, ENOTSUP,
+			RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+			"No backend to handle this flow");
+		goto fail_no_backend;
+	}
+
+	if (sa->state != SFC_ADAPTER_STARTED) {
+		ret = rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+			"Can't query the flow: the adapter is not started");
+		goto fail_not_started;
+	}
+
+	ret = ops->query(dev, flow, action, data, error);
+	if (ret != 0)
+		goto fail_query;
+
+	sfc_adapter_unlock(sa);
+
+	return 0;
+
+fail_query:
+fail_not_started:
+fail_no_backend:
+	sfc_adapter_unlock(sa);
+	return ret;
+}
+
 static int
 sfc_flow_isolate(struct rte_eth_dev *dev, int enable,
 		 struct rte_flow_error *error)
@@ -2814,7 +2860,7 @@ const struct rte_flow_ops sfc_flow_ops = {
 	.create = sfc_flow_create,
 	.destroy = sfc_flow_destroy,
 	.flush = sfc_flow_flush,
-	.query = NULL,
+	.query = sfc_flow_query,
 	.isolate = sfc_flow_isolate,
 };
 
diff --git a/drivers/net/sfc/sfc_flow.h b/drivers/net/sfc/sfc_flow.h
index bd3b374d68..99e5cf9cff 100644
--- a/drivers/net/sfc/sfc_flow.h
+++ b/drivers/net/sfc/sfc_flow.h
@@ -181,6 +181,12 @@ typedef int (sfc_flow_insert_cb_t)(struct sfc_adapter *sa,
 typedef int (sfc_flow_remove_cb_t)(struct sfc_adapter *sa,
 				   struct rte_flow *flow);
 
+typedef int (sfc_flow_query_cb_t)(struct rte_eth_dev *dev,
+				  struct rte_flow *flow,
+				  const struct rte_flow_action *action,
+				  void *data,
+				  struct rte_flow_error *error);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/drivers/net/sfc/sfc_mae.c b/drivers/net/sfc/sfc_mae.c
index c3efd5b407..a4eab30dec 100644
--- a/drivers/net/sfc/sfc_mae.c
+++ b/drivers/net/sfc/sfc_mae.c
@@ -3187,3 +3187,67 @@ sfc_mae_flow_remove(struct sfc_adapter *sa,
 
 	return 0;
 }
+
+static int
+sfc_mae_query_counter(struct sfc_adapter *sa,
+		      struct sfc_flow_spec_mae *spec,
+		      const struct rte_flow_action *action,
+		      struct rte_flow_query_count *data,
+		      struct rte_flow_error *error)
+{
+	struct sfc_mae_action_set *action_set = spec->action_set;
+	const struct rte_flow_action_count *conf = action->conf;
+	unsigned int i;
+	int rc;
+
+	if (action_set->n_counters == 0) {
+		return rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ACTION, action,
+			"Queried flow rule does not have count actions");
+	}
+
+	for (i = 0; i < action_set->n_counters; i++) {
+		/*
+		 * Get the first available counter of the flow rule if
+		 * counter ID is not specified.
+		 */
+		if (conf != NULL && action_set->counters[i].rte_id != conf->id)
+			continue;
+
+		rc = sfc_mae_counter_get(&sa->mae.counter_registry.counters,
+					 &action_set->counters[i], data);
+		if (rc != 0) {
+			return rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ACTION, action,
+				"Queried flow rule counter action is invalid");
+		}
+
+		return 0;
+	}
+
+	return rte_flow_error_set(error, ENOENT,
+				  RTE_FLOW_ERROR_TYPE_ACTION, action,
+				  "No such flow rule action count ID");
+}
+
+int
+sfc_mae_flow_query(struct rte_eth_dev *dev,
+		   struct rte_flow *flow,
+		   const struct rte_flow_action *action,
+		   void *data,
+		   struct rte_flow_error *error)
+{
+	struct sfc_adapter *sa = sfc_adapter_by_eth_dev(dev);
+	struct sfc_flow_spec *spec = &flow->spec;
+	struct sfc_flow_spec_mae *spec_mae = &spec->mae;
+
+	switch (action->type) {
+	case RTE_FLOW_ACTION_TYPE_COUNT:
+		return sfc_mae_query_counter(sa, spec_mae, action,
+					     data, error);
+	default:
+		return rte_flow_error_set(error, ENOTSUP,
+			RTE_FLOW_ERROR_TYPE_ACTION, NULL,
+			"Query for action of this type is not supported");
+	}
+}
diff --git a/drivers/net/sfc/sfc_mae.h b/drivers/net/sfc/sfc_mae.h
index 2cc4334890..6bfc8afb82 100644
--- a/drivers/net/sfc/sfc_mae.h
+++ b/drivers/net/sfc/sfc_mae.h
@@ -291,6 +291,7 @@ int sfc_mae_rule_parse_actions(struct sfc_adapter *sa,
 sfc_flow_verify_cb_t sfc_mae_flow_verify;
 sfc_flow_insert_cb_t sfc_mae_flow_insert;
 sfc_flow_remove_cb_t sfc_mae_flow_remove;
+sfc_flow_query_cb_t sfc_mae_flow_query;
 
 #ifdef __cplusplus
 }
diff --git a/drivers/net/sfc/sfc_mae_counter.c b/drivers/net/sfc/sfc_mae_counter.c
index b0cb8157aa..5afd450a11 100644
--- a/drivers/net/sfc/sfc_mae_counter.c
+++ b/drivers/net/sfc/sfc_mae_counter.c
@@ -793,3 +793,35 @@ sfc_mae_counter_start(struct sfc_adapter *sa)
 
 	return rc;
 }
+
+int
+sfc_mae_counter_get(struct sfc_mae_counters *counters,
+		    const struct sfc_mae_counter_id *counter,
+		    struct rte_flow_query_count *data)
+{
+	struct sfc_mae_counter *p;
+	union sfc_pkts_bytes value;
+
+	SFC_ASSERT(counter->mae_id.id < counters->n_mae_counters);
+	p = &counters->mae_counters[counter->mae_id.id];
+
+	/*
+	 * Ordering is relaxed since it is the only operation on counter value.
+	 * And it does not depend on different stores/loads in other threads.
+	 * Paired with relaxed ordering in counter increment.
+	 */
+	value.pkts_bytes.int128 = __atomic_load_n(&p->value.pkts_bytes.int128,
+						  __ATOMIC_RELAXED);
+
+	data->hits_set = 1;
+	data->bytes_set = 1;
+	data->hits = value.pkts - p->reset.pkts;
+	data->bytes = value.bytes - p->reset.bytes;
+
+	if (data->reset != 0) {
+		p->reset.pkts = value.pkts;
+		p->reset.bytes = value.bytes;
+	}
+
+	return 0;
+}
diff --git a/drivers/net/sfc/sfc_mae_counter.h b/drivers/net/sfc/sfc_mae_counter.h
index f61a6b59cb..2c953c2968 100644
--- a/drivers/net/sfc/sfc_mae_counter.h
+++ b/drivers/net/sfc/sfc_mae_counter.h
@@ -45,6 +45,9 @@ int sfc_mae_counter_enable(struct sfc_adapter *sa,
 			   struct sfc_mae_counter_id *counterp);
 int sfc_mae_counter_disable(struct sfc_adapter *sa,
 			    struct sfc_mae_counter_id *counter);
+int sfc_mae_counter_get(struct sfc_mae_counters *counters,
+			const struct sfc_mae_counter_id *counter,
+			struct rte_flow_query_count *data);
 
 int sfc_mae_counter_start(struct sfc_adapter *sa);
 void sfc_mae_counter_stop(struct sfc_adapter *sa);
-- 
2.30.2


^ permalink raw reply	[flat|nested] 104+ messages in thread

* Re: [dpdk-dev] [PATCH v3 19/20] net/sfc: support flow action COUNT in transfer rules
  2021-07-01 13:05             ` Andrew Rybchenko
  2021-07-01 13:35               ` Bruce Richardson
@ 2021-07-02  8:43               ` Andrew Rybchenko
  2021-07-02 12:30                 ` Thomas Monjalon
  2021-07-02 13:37                 ` David Marchand
  1 sibling, 2 replies; 104+ messages in thread
From: Andrew Rybchenko @ 2021-07-02  8:43 UTC (permalink / raw)
  To: David Marchand, Bruce Richardson
  Cc: Thomas Monjalon, dev, Igor Romanov, Andy Moreton, Ivan Malov

Hi David,

On 7/1/21 4:05 PM, Andrew Rybchenko wrote:
> @Bruce, see below.
> 
> On 7/1/21 3:34 PM, David Marchand wrote:
>> On Thu, Jul 1, 2021 at 11:22 AM Andrew Rybchenko
>> <andrew.rybchenko@oktetlabs.ru> wrote:
>>> The build works fine for me on FC34, but it has
>>> libatomic-11.1.1-3.fc34.x86_64 installed.
>>
>> I first produced the issue on my "old" FC32.
>> Afaics, for FC33 and later, gcc now depends on libatomic and the
>> problem won't be noticed.
>> FC32 and before are EOL, but I then reproduced the issue on RHEL 8
>> (and Intel CI reported it on Centos 8 too).
> 
> I see. Thanks for the clarification.
> 
>>>
>>> I'd like to understand what we're trying to solve here.
>>> Are we trying to make meson to report the missing library
>>> correctly?
>>>
>>> If so, I think I can do simple check using cc.links()
>>> which will fail if the library is not found. I'll
>>> test that it works as expected if the library is not
>>> completely installed.
>>>
>>
>> I tried below diff, and it works for me.
>> "works" as in net/sfc gets disabled without libatomic installed:
>>
>> diff --git a/drivers/net/sfc/meson.build b/drivers/net/sfc/meson.build
>> index 32b58e3d76..8d62aad774 100644
>> --- a/drivers/net/sfc/meson.build
>> +++ b/drivers/net/sfc/meson.build
>> @@ -15,6 +15,7 @@ endif
>>  if (arch_subdir != 'x86' or not dpdk_conf.get('RTE_ARCH_64')) and
>> (arch_subdir != 'arm' or not
>> host_machine.cpu_family().startswith('aarch64'))
>>      build = false
>>      reason = 'only supported on x86_64 and aarch64'
>> +    subdir_done()
> 
> @Bruce  Shouldn't we add subdir_done() after all build = false
> cases? As I understand it is OK for minimum supported meson
> version.
> 
>>  endif
>>
>>  extra_flags = []
>> @@ -46,6 +47,14 @@ endif
>>
>>  # for gcc compiles we need -latomic for 128-bit atomic ops
>>  if cc.get_id() == 'gcc'
>> +    code = '''#include <stdio.h>
>> +    void main() { printf("Atomilink me.\n"); }
>> +    '''
>> +    if not cc.links(code, args: '-latomic', name: 'libatomic link check')
>> +        build = false
>> +        reason = 'missing dependency, "libatomic"'
>> +        subdir_done()
>> +    endif
>>      ext_deps += cc.find_library('atomic')
>>  endif
> 
> Many thanks, LGTM. I'll pick it up and add comments why
> it is checked this way.
> 

I've sent v4 with the problem fixed. However, I'm afraid
build test systems should be updated to have libatomic
correctly installed. Otherwise, they do not really check
net/sfc build.
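
For context, the 128-bit accesses in question are of the following kind
(a minimal sketch, not taken from the driver, of what gcc lowers to a
libatomic call and therefore fails to link without -latomic):

#include <stdint.h>

/* gcc emits a call to __atomic_store_16() from libatomic here. */
void
store_pkts_bytes(__int128 *dst, uint64_t pkts, uint64_t bytes)
{
	__int128 v = ((__int128)bytes << 64) | pkts;

	__atomic_store_n(dst, v, __ATOMIC_RELEASE);
}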

Andrew.

^ permalink raw reply	[flat|nested] 104+ messages in thread

* Re: [dpdk-dev] [PATCH v3 19/20] net/sfc: support flow action COUNT in transfer rules
  2021-07-02  8:43               ` Andrew Rybchenko
@ 2021-07-02 12:30                 ` Thomas Monjalon
  2021-07-02 12:53                   ` Andrew Rybchenko
  2021-07-02 13:37                 ` David Marchand
  1 sibling, 1 reply; 104+ messages in thread
From: Thomas Monjalon @ 2021-07-02 12:30 UTC (permalink / raw)
  To: David Marchand, Bruce Richardson, Andrew Rybchenko
  Cc: dev, Igor Romanov, Andy Moreton, Ivan Malov

02/07/2021 10:43, Andrew Rybchenko:
> On 7/1/21 4:05 PM, Andrew Rybchenko wrote:
> > On 7/1/21 3:34 PM, David Marchand wrote:
> >> On Thu, Jul 1, 2021 at 11:22 AM Andrew Rybchenko
> >> <andrew.rybchenko@oktetlabs.ru> wrote:
> >>> The build works fine for me on FC34, but it has
> >>> libatomic-11.1.1-3.fc34.x86_64 installed.
> >>
> >> I first produced the issue on my "old" FC32.
> >> Afaics, for FC33 and later, gcc now depends on libatomic and the
> >> problem won't be noticed.
> >> FC32 and before are EOL, but I then reproduced the issue on RHEL 8
> >> (and Intel CI reported it on Centos 8 too).
> > 
> > I see. Thanks for the clarification.
> > 
> >>>
> >>> I'd like to understand what we're trying to solve here.
> >>> Are we trying to make meson to report the missing library
> >>> correctly?
> >>>
> >>> If so, I think I can do simple check using cc.links()
> >>> which will fail if the library is not found. I'll
> >>> test that it works as expected if the library is not
> >>> completely installed.
> >>>
> >>
> >> I tried below diff, and it works for me.
> >> "works" as in net/sfc gets disabled without libatomic installed:
[...]
> >>  # for gcc compiles we need -latomic for 128-bit atomic ops
> >>  if cc.get_id() == 'gcc'
> >> +    code = '''#include <stdio.h>
> >> +    void main() { printf("Atomilink me.\n"); }
> >> +    '''
> >> +    if not cc.links(code, args: '-latomic', name: 'libatomic link check')
> >> +        build = false
> >> +        reason = 'missing dependency, "libatomic"'
> >> +        subdir_done()
> >> +    endif
> >>      ext_deps += cc.find_library('atomic')
> >>  endif
> > 
> > Many thanks, LGTM. I'll pick it up and add comments why
> > it is checked this way.
> > 
> 
> I've send v4 with the problem fixed. However, I'm afraid
> build test systems should be updated to have libatomic
> correctly installed. Otherwise, they do not really check
> net/sfc build.

When testing on old systems, sfc won't be tested anymore after this patchset.
On recent systems, sfc should be enabled I guess.
I don't see how to manage better, sorry.



^ permalink raw reply	[flat|nested] 104+ messages in thread

* Re: [dpdk-dev] [PATCH v3 19/20] net/sfc: support flow action COUNT in transfer rules
  2021-07-02 12:30                 ` Thomas Monjalon
@ 2021-07-02 12:53                   ` Andrew Rybchenko
  2021-07-04 19:45                     ` Thomas Monjalon
  0 siblings, 1 reply; 104+ messages in thread
From: Andrew Rybchenko @ 2021-07-02 12:53 UTC (permalink / raw)
  To: Thomas Monjalon, David Marchand, Bruce Richardson
  Cc: dev, Igor Romanov, Andy Moreton, Ivan Malov

On 7/2/21 3:30 PM, Thomas Monjalon wrote:
> 02/07/2021 10:43, Andrew Rybchenko:
>> On 7/1/21 4:05 PM, Andrew Rybchenko wrote:
>>> On 7/1/21 3:34 PM, David Marchand wrote:
>>>> On Thu, Jul 1, 2021 at 11:22 AM Andrew Rybchenko
>>>> <andrew.rybchenko@oktetlabs.ru> wrote:
>>>>> The build works fine for me on FC34, but it has
>>>>> libatomic-11.1.1-3.fc34.x86_64 installed.
>>>>
>>>> I first produced the issue on my "old" FC32.
>>>> Afaics, for FC33 and later, gcc now depends on libatomic and the
>>>> problem won't be noticed.
>>>> FC32 and before are EOL, but I then reproduced the issue on RHEL 8
>>>> (and Intel CI reported it on Centos 8 too).
>>>
>>> I see. Thanks for the clarification.
>>>
>>>>>
>>>>> I'd like to understand what we're trying to solve here.
>>>>> Are we trying to make meson to report the missing library
>>>>> correctly?
>>>>>
>>>>> If so, I think I can do simple check using cc.links()
>>>>> which will fail if the library is not found. I'll
>>>>> test that it works as expected if the library is not
>>>>> completely installed.
>>>>>
>>>>
>>>> I tried below diff, and it works for me.
>>>> "works" as in net/sfc gets disabled without libatomic installed:
> [...]
>>>>  # for gcc compiles we need -latomic for 128-bit atomic ops
>>>>  if cc.get_id() == 'gcc'
>>>> +    code = '''#include <stdio.h>
>>>> +    void main() { printf("Atomilink me.\n"); }
>>>> +    '''
>>>> +    if not cc.links(code, args: '-latomic', name: 'libatomic link check')
>>>> +        build = false
>>>> +        reason = 'missing dependency, "libatomic"'
>>>> +        subdir_done()
>>>> +    endif
>>>>      ext_deps += cc.find_library('atomic')
>>>>  endif
>>>
>>> Many thanks, LGTM. I'll pick it up and add comments why
>>> it is checked this way.
>>>
>>
>> I've send v4 with the problem fixed. However, I'm afraid
>> build test systems should be updated to have libatomic
>> correctly installed. Otherwise, they do not really check
>> net/sfc build.
> 
> When testing on old systems, sfc won't be tested anymore after this patchset.
> On recent systems, sfc should be enabled I guess.
> I don't see how to manage better, sorry.
> 

I see. I thought that it is possible to install the missing
package on the corresponding systems to make build coverage
better.

Now I automatically test build on problematic distros
with previously missing packages installed. So I have
internal build coverage anyway.

^ permalink raw reply	[flat|nested] 104+ messages in thread

* Re: [dpdk-dev] [PATCH v3 19/20] net/sfc: support flow action COUNT in transfer rules
  2021-07-02  8:43               ` Andrew Rybchenko
  2021-07-02 12:30                 ` Thomas Monjalon
@ 2021-07-02 13:37                 ` David Marchand
  2021-07-02 13:39                   ` Andrew Rybchenko
  1 sibling, 1 reply; 104+ messages in thread
From: David Marchand @ 2021-07-02 13:37 UTC (permalink / raw)
  To: Andrew Rybchenko
  Cc: Bruce Richardson, Thomas Monjalon, dev, Igor Romanov,
	Andy Moreton, Ivan Malov

On Fri, Jul 2, 2021 at 10:43 AM Andrew Rybchenko
<andrew.rybchenko@oktetlabs.ru> wrote:
> I've send v4 with the problem fixed. However, I'm afraid
> build test systems should be updated to have libatomic
> correctly installed. Otherwise, they do not really check
> net/sfc build.

CI systems must be updated if they check ABI.
And in general, we want them to continue testing net/sfc.
I sent a mail to ask for this.


-- 
David Marchand


^ permalink raw reply	[flat|nested] 104+ messages in thread

* Re: [dpdk-dev] [PATCH v3 19/20] net/sfc: support flow action COUNT in transfer rules
  2021-07-02 13:37                 ` David Marchand
@ 2021-07-02 13:39                   ` Andrew Rybchenko
  0 siblings, 0 replies; 104+ messages in thread
From: Andrew Rybchenko @ 2021-07-02 13:39 UTC (permalink / raw)
  To: David Marchand
  Cc: Bruce Richardson, Thomas Monjalon, dev, Igor Romanov,
	Andy Moreton, Ivan Malov

On 7/2/21 4:37 PM, David Marchand wrote:
> On Fri, Jul 2, 2021 at 10:43 AM Andrew Rybchenko
> <andrew.rybchenko@oktetlabs.ru> wrote:
>> I've send v4 with the problem fixed. However, I'm afraid
>> build test systems should be updated to have libatomic
>> correctly installed. Otherwise, they do not really check
>> net/sfc build.
> 
> CI systems must be updated if they check ABI.
> And in general, we want them to continue testing net/sfc.
> I sent a mail to ask for this.

Many thanks, David


^ permalink raw reply	[flat|nested] 104+ messages in thread

* Re: [dpdk-dev] [PATCH v3 19/20] net/sfc: support flow action COUNT in transfer rules
  2021-07-02 12:53                   ` Andrew Rybchenko
@ 2021-07-04 19:45                     ` Thomas Monjalon
  2021-07-05  8:41                       ` Andrew Rybchenko
  0 siblings, 1 reply; 104+ messages in thread
From: Thomas Monjalon @ 2021-07-04 19:45 UTC (permalink / raw)
  To: Andrew Rybchenko
  Cc: David Marchand, Bruce Richardson, dev, Igor Romanov,
	Andy Moreton, Ivan Malov

02/07/2021 14:53, Andrew Rybchenko:
> On 7/2/21 3:30 PM, Thomas Monjalon wrote:
> > 02/07/2021 10:43, Andrew Rybchenko:
> >> On 7/1/21 4:05 PM, Andrew Rybchenko wrote:
> >>> On 7/1/21 3:34 PM, David Marchand wrote:
> >>>> On Thu, Jul 1, 2021 at 11:22 AM Andrew Rybchenko
> >>>> <andrew.rybchenko@oktetlabs.ru> wrote:
> >>>>> The build works fine for me on FC34, but it has
> >>>>> libatomic-11.1.1-3.fc34.x86_64 installed.
> >>>>
> >>>> I first produced the issue on my "old" FC32.
> >>>> Afaics, for FC33 and later, gcc now depends on libatomic and the
> >>>> problem won't be noticed.
> >>>> FC32 and before are EOL, but I then reproduced the issue on RHEL 8
> >>>> (and Intel CI reported it on Centos 8 too).
> >>>
> >>> I see. Thanks for the clarification.
> >>>
> >>>>>
> >>>>> I'd like to understand what we're trying to solve here.
> >>>>> Are we trying to make meson to report the missing library
> >>>>> correctly?
> >>>>>
> >>>>> If so, I think I can do simple check using cc.links()
> >>>>> which will fail if the library is not found. I'll
> >>>>> test that it works as expected if the library is not
> >>>>> completely installed.
> >>>>>
> >>>>
> >>>> I tried below diff, and it works for me.
> >>>> "works" as in net/sfc gets disabled without libatomic installed:
> > [...]
> >>>>  # for gcc compiles we need -latomic for 128-bit atomic ops
> >>>>  if cc.get_id() == 'gcc'
> >>>> +    code = '''#include <stdio.h>
> >>>> +    void main() { printf("Atomilink me.\n"); }
> >>>> +    '''
> >>>> +    if not cc.links(code, args: '-latomic', name: 'libatomic link check')
> >>>> +        build = false
> >>>> +        reason = 'missing dependency, "libatomic"'
> >>>> +        subdir_done()
> >>>> +    endif
> >>>>      ext_deps += cc.find_library('atomic')
> >>>>  endif
> >>>
> >>> Many thanks, LGTM. I'll pick it up and add comments why
> >>> it is checked this way.
> >>>
> >>
> >> I've send v4 with the problem fixed. However, I'm afraid
> >> build test systems should be updated to have libatomic
> >> correctly installed. Otherwise, they do not really check
> >> net/sfc build.
> > 
> > When testing on old systems, sfc won't be tested anymore after this patchset.
> > On recent systems, sfc should be enabled I guess.
> > I don't see how to manage better, sorry.
> > 
> 
> I see. I thought that it is possible to install missing
> package on corresponding systems to make build coverage
> better.
> 
> Now I automatically test build on problematic distros
> with previously missing packages installed. So I have
> internal build coverage anyway.

David asked for installing libatomic:
https://inbox.dpdk.org/ci/CAJFAV8xCNBL4yEZU0c=dJGYS+13QM7Uz7e2qnUkMuM7eaKKw+Q@mail.gmail.com/

We should wait for it to be installed, otherwise the ABI check will fail.




^ permalink raw reply	[flat|nested] 104+ messages in thread

* Re: [dpdk-dev] [PATCH v3 19/20] net/sfc: support flow action COUNT in transfer rules
  2021-07-04 19:45                     ` Thomas Monjalon
@ 2021-07-05  8:41                       ` Andrew Rybchenko
  0 siblings, 0 replies; 104+ messages in thread
From: Andrew Rybchenko @ 2021-07-05  8:41 UTC (permalink / raw)
  To: Thomas Monjalon
  Cc: David Marchand, Bruce Richardson, dev, Igor Romanov,
	Andy Moreton, Ivan Malov

On 7/4/21 10:45 PM, Thomas Monjalon wrote:
> 02/07/2021 14:53, Andrew Rybchenko:
>> On 7/2/21 3:30 PM, Thomas Monjalon wrote:
>>> 02/07/2021 10:43, Andrew Rybchenko:
>>>> On 7/1/21 4:05 PM, Andrew Rybchenko wrote:
>>>>> On 7/1/21 3:34 PM, David Marchand wrote:
>>>>>> On Thu, Jul 1, 2021 at 11:22 AM Andrew Rybchenko
>>>>>> <andrew.rybchenko@oktetlabs.ru> wrote:
>>>>>>> The build works fine for me on FC34, but it has
>>>>>>> libatomic-11.1.1-3.fc34.x86_64 installed.
>>>>>> I first produced the issue on my "old" FC32.
>>>>>> Afaics, for FC33 and later, gcc now depends on libatomic and the
>>>>>> problem won't be noticed.
>>>>>> FC32 and before are EOL, but I then reproduced the issue on RHEL 8
>>>>>> (and Intel CI reported it on Centos 8 too).
>>>>> I see. Thanks for the clarification.
>>>>>
>>>>>>> I'd like to understand what we're trying to solve here.
>>>>>>> Are we trying to make meson to report the missing library
>>>>>>> correctly?
>>>>>>>
>>>>>>> If so, I think I can do simple check using cc.links()
>>>>>>> which will fail if the library is not found. I'll
>>>>>>> test that it works as expected if the library is not
>>>>>>> completely installed.
>>>>>>>
>>>>>> I tried below diff, and it works for me.
>>>>>> "works" as in net/sfc gets disabled without libatomic installed:
>>> [...]
>>>>>>  # for gcc compiles we need -latomic for 128-bit atomic ops
>>>>>>  if cc.get_id() == 'gcc'
>>>>>> +    code = '''#include <stdio.h>
>>>>>> +    void main() { printf("Atomilink me.\n"); }
>>>>>> +    '''
>>>>>> +    if not cc.links(code, args: '-latomic', name: 'libatomic link check')
>>>>>> +        build = false
>>>>>> +        reason = 'missing dependency, "libatomic"'
>>>>>> +        subdir_done()
>>>>>> +    endif
>>>>>>      ext_deps += cc.find_library('atomic')
>>>>>>  endif
>>>>> Many thanks, LGTM. I'll pick it up and add comments why
>>>>> it is checked this way.
>>>>>
>>>> I've send v4 with the problem fixed. However, I'm afraid
>>>> build test systems should be updated to have libatomic
>>>> correctly installed. Otherwise, they do not really check
>>>> net/sfc build.
>>> When testing on old systems, sfc won't be tested anymore after this patchset.
>>> On recent systems, sfc should be enabled I guess.
>>> I don't see how to manage better, sorry.
>>>
>> I see. I thought that it is possible to install missing
>> package on corresponding systems to make build coverage
>> better.
>>
>> Now I automatically test build on problematic distros
>> with previously missing packages installed. So I have
>> internal build coverage anyway.
> David asked for installing libatomic:
> https://inbox.dpdk.org/ci/CAJFAV8xCNBL4yEZU0c=dJGYS+13QM7Uz7e2qnUkMuM7eaKKw+Q@mail.gmail.com/
>
> We should wait for it to be installed otherwise ABI check will fail.

Yes, I see. Thanks.


^ permalink raw reply	[flat|nested] 104+ messages in thread

* Re: [dpdk-dev] [PATCH v4 19/20] net/sfc: support flow action COUNT in transfer rules
  2021-07-02  8:39   ` [dpdk-dev] [PATCH v4 19/20] net/sfc: support flow action COUNT in transfer rules Andrew Rybchenko
@ 2021-07-15 14:58     ` David Marchand
  2021-07-15 18:30       ` Ivan Malov
  2021-07-16 12:12     ` David Marchand
  1 sibling, 1 reply; 104+ messages in thread
From: David Marchand @ 2021-07-15 14:58 UTC (permalink / raw)
  To: Andrew Rybchenko, Igor Romanov; +Cc: dev, Andy Moreton, Ivan Malov

On Fri, Jul 2, 2021 at 10:41 AM Andrew Rybchenko
<andrew.rybchenko@oktetlabs.ru> wrote:
> diff --git a/doc/guides/nics/sfc_efx.rst b/doc/guides/nics/sfc_efx.rst
> index cf1269cc03..bd08118da7 100644
> --- a/doc/guides/nics/sfc_efx.rst
> +++ b/doc/guides/nics/sfc_efx.rst
> @@ -240,6 +240,8 @@ Supported actions (***transfer*** rules):
>
>  - PORT_ID
>
> +- COUNT
> +
>  - DROP
>
>  Validating flow rules depends on the firmware variant.


Sorry for catching this so late... this patch lacks some rte_flow capa update.
I can fix when applying if someone confirms this is fine:

diff --git a/doc/guides/nics/features/sfc.ini b/doc/guides/nics/features/sfc.ini
index 9e66ec4293..f6d998ddc8 100644
--- a/doc/guides/nics/features/sfc.ini
+++ b/doc/guides/nics/features/sfc.ini
@@ -59,6 +59,7 @@ vlan                 = Y
 vxlan                = Y

 [rte_flow actions]
+count                = Y
 drop                 = Y
 flag                 = Y
 mark                 = Y


-- 
David Marchand


^ permalink raw reply	[flat|nested] 104+ messages in thread

* Re: [dpdk-dev] [PATCH v4 19/20] net/sfc: support flow action COUNT in transfer rules
  2021-07-15 14:58     ` David Marchand
@ 2021-07-15 18:30       ` Ivan Malov
  0 siblings, 0 replies; 104+ messages in thread
From: Ivan Malov @ 2021-07-15 18:30 UTC (permalink / raw)
  To: David Marchand, Andrew Rybchenko, Igor Romanov; +Cc: dev, Andy Moreton

Hi,

On 15/07/2021 17:58, David Marchand wrote:
> On Fri, Jul 2, 2021 at 10:41 AM Andrew Rybchenko
> <andrew.rybchenko@oktetlabs.ru> wrote:
>> diff --git a/doc/guides/nics/sfc_efx.rst b/doc/guides/nics/sfc_efx.rst
>> index cf1269cc03..bd08118da7 100644
>> --- a/doc/guides/nics/sfc_efx.rst
>> +++ b/doc/guides/nics/sfc_efx.rst
>> @@ -240,6 +240,8 @@ Supported actions (***transfer*** rules):
>>
>>   - PORT_ID
>>
>> +- COUNT
>> +
>>   - DROP
>>
>>   Validating flow rules depends on the firmware variant.
> 
> 
> Sorry for catching this so late... this patch lacks some rte_flow capa update.
> I can fix when applying if someone confirms this is fine:

Yes, this should be fine.

> 
> diff --git a/doc/guides/nics/features/sfc.ini b/doc/guides/nics/features/sfc.ini
> index 9e66ec4293..f6d998ddc8 100644
> --- a/doc/guides/nics/features/sfc.ini
> +++ b/doc/guides/nics/features/sfc.ini
> @@ -59,6 +59,7 @@ vlan                 = Y
>   vxlan                = Y
> 
>   [rte_flow actions]
> +count                = Y
>   drop                 = Y
>   flag                 = Y
>   mark                 = Y
> 
> 

-- 
Ivan M

^ permalink raw reply	[flat|nested] 104+ messages in thread

* Re: [dpdk-dev] [PATCH v4 19/20] net/sfc: support flow action COUNT in transfer rules
  2021-07-02  8:39   ` [dpdk-dev] [PATCH v4 19/20] net/sfc: support flow action COUNT in transfer rules Andrew Rybchenko
  2021-07-15 14:58     ` David Marchand
@ 2021-07-16 12:12     ` David Marchand
  1 sibling, 0 replies; 104+ messages in thread
From: David Marchand @ 2021-07-16 12:12 UTC (permalink / raw)
  To: Andrew Rybchenko, Igor Romanov, Ivan Malov; +Cc: dev, Andy Moreton

Hello guys,

On Fri, Jul 2, 2021 at 10:41 AM Andrew Rybchenko
<andrew.rybchenko@oktetlabs.ru> wrote:
>
> From: Igor Romanov <igor.romanov@oktetlabs.ru>
>
> For now, a rule may have only one dedicated counter; shared counters
> are not supported.
>
> HW delivers (or "streams") counter readings using special packets.
> The driver creates a dedicated Rx queue to receive such packets
> and requests that HW start "streaming" the readings to it.
>
> The counter queue is polled periodically, and the first available
> service core is used for that. Hence, the user has to specify at least
> one service core for counters to work. Such a core is shared by all
> MAE-capable devices managed by the sfc driver.
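
As a minimal sketch of the service-core prerequisite described above
(the lcore ID is an assumption: it must be part of the EAL corelist and
idle, i.e. not running a worker loop), a core can be handed to the
service framework at runtime; reserving it at startup with the EAL
"-s <coremask>" option works as well:

#include <stdint.h>

#include <rte_service.h>

static int
reserve_counter_service_core(uint32_t service_lcore)
{
	int rc;

	/* Register the lcore as a service core... */
	rc = rte_service_lcore_add(service_lcore);
	if (rc != 0)
		return rc;

	/* ...and start its service loop so mapped services get polled. */
	return rte_service_lcore_start(service_lcore);
}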

If no service lcore is available, did you consider falling back to
using a control thread per MAE device?
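
A control-thread fallback along those lines could look roughly like the
sketch below; poll_mae_counters() is a hypothetical placeholder for the
driver's polling routine (not a real sfc symbol) and the polling period
is an arbitrary assumption:

#include <pthread.h>
#include <unistd.h>

#include <rte_lcore.h>

/* Hypothetical stand-in for the driver's counter polling routine. */
static void
poll_mae_counters(void *dev_ctx)
{
	(void)dev_ctx;
	/* real counter RxQ polling would go here */
}

static void *
counter_poll_thread(void *arg)
{
	for (;;) {
		poll_mae_counters(arg);
		usleep(1000);
	}
	return NULL;
}

static int
start_counter_ctrl_thread(void *dev_ctx, pthread_t *thread)
{
	/* rte_ctrl_thread_create() runs the thread on the EAL control
	 * thread CPU set and gives it a name for debugging. */
	return rte_ctrl_thread_create(thread, "sfc-mae-cnt", NULL,
				      counter_poll_thread, dev_ctx);
}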


-- 
David Marchand


^ permalink raw reply	[flat|nested] 104+ messages in thread

* Re: [dpdk-dev] [PATCH v4 00/20] net/sfc: support flow API COUNT action
  2021-07-02  8:39 ` [dpdk-dev] [PATCH v4 00/20] net/sfc: support flow API COUNT action Andrew Rybchenko
                     ` (19 preceding siblings ...)
  2021-07-02  8:39   ` [dpdk-dev] [PATCH v4 20/20] net/sfc: support flow API query for count actions Andrew Rybchenko
@ 2021-07-20 12:19   ` David Marchand
  20 siblings, 0 replies; 104+ messages in thread
From: David Marchand @ 2021-07-20 12:19 UTC (permalink / raw)
  To: Andrew Rybchenko; +Cc: dev, Ivan Malov, Andy Moreton

On Fri, Jul 2, 2021 at 10:40 AM Andrew Rybchenko
<andrew.rybchenko@oktetlabs.ru> wrote:
>
> Update base driver and support COUNT action in transfer flow rules.
>
> v4:
>  - fix build on Fedora 32 and RHEL 8 / CentOS 8 with half-installed
>    libatomic
>
> v3:
>  - fix build breakage because of an incorrectly rebased and
>    squashed-in fix
>
> v2:
>  - add release notes
>  - add missing documentation
>  - fix spelling
>  - handle query in stopped state gracefully
>
>
> Andrew Rybchenko (6):
>   net/sfc: do not enable interrupts on internal Rx queues
>   common/sfc_efx/base: separate target EvQ and IRQ config
>   common/sfc_efx/base: support custom EvQ to IRQ mapping
>   net/sfc: explicitly control IRQ used for Rx queues
>   net/sfc: add NUMA-aware registry of service logical cores
>   common/sfc_efx/base: add packetiser packet format definition

I added the missing rte_flow feature in sfc.ini.

We had some exchanges off-list on the rte_service requirement.
This discussion will probably end up on the mailing list later.

Series applied, thanks.


-- 
David Marchand


^ permalink raw reply	[flat|nested] 104+ messages in thread

end of thread, other threads:[~2021-07-20 12:19 UTC | newest]

Thread overview: 104+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-05-27 15:24 [dpdk-dev] [PATCH 00/20] net/sfc: support flow API COUNT action Andrew Rybchenko
2021-05-27 15:24 ` [dpdk-dev] [PATCH 01/20] net/sfc: introduce ethdev Rx queue ID Andrew Rybchenko
2021-05-27 15:24 ` [dpdk-dev] [PATCH 02/20] net/sfc: do not enable interrupts on internal Rx queues Andrew Rybchenko
2021-05-27 15:24 ` [dpdk-dev] [PATCH 03/20] common/sfc_efx/base: separate target EvQ and IRQ config Andrew Rybchenko
2021-05-27 15:24 ` [dpdk-dev] [PATCH 04/20] common/sfc_efx/base: support custom EvQ to IRQ mapping Andrew Rybchenko
2021-05-27 15:24 ` [dpdk-dev] [PATCH 05/20] net/sfc: explicitly control IRQ used for Rx queues Andrew Rybchenko
2021-05-27 15:24 ` [dpdk-dev] [PATCH 06/20] net/sfc: introduce ethdev Tx queue ID Andrew Rybchenko
2021-05-27 15:24 ` [dpdk-dev] [PATCH 07/20] common/sfc_efx/base: add ingress m-port RxQ flag Andrew Rybchenko
2021-05-27 15:24 ` [dpdk-dev] [PATCH 08/20] common/sfc_efx/base: add user mark " Andrew Rybchenko
2021-05-27 15:24 ` [dpdk-dev] [PATCH 09/20] net/sfc: add abstractions for the management EVQ identity Andrew Rybchenko
2021-05-27 15:25 ` [dpdk-dev] [PATCH 10/20] net/sfc: add support for initialising different RxQ types Andrew Rybchenko
2021-05-27 15:25 ` [dpdk-dev] [PATCH 11/20] net/sfc: add NUMA-aware registry of service logical cores Andrew Rybchenko
2021-05-27 15:25 ` [dpdk-dev] [PATCH 12/20] net/sfc: reserve RxQ for counters Andrew Rybchenko
2021-05-27 15:25 ` [dpdk-dev] [PATCH 13/20] common/sfc_efx/base: add counter creation MCDI wrappers Andrew Rybchenko
2021-05-27 15:25 ` [dpdk-dev] [PATCH 14/20] common/sfc_efx/base: add counter stream " Andrew Rybchenko
2021-05-27 15:25 ` [dpdk-dev] [PATCH 15/20] common/sfc_efx/base: support counter in action set Andrew Rybchenko
2021-05-27 15:25 ` [dpdk-dev] [PATCH 16/20] net/sfc: add Rx datapath method to get pushed buffers count Andrew Rybchenko
2021-05-27 15:25 ` [dpdk-dev] [PATCH 17/20] common/sfc_efx/base: add max MAE counters to limits Andrew Rybchenko
2021-05-27 15:25 ` [dpdk-dev] [PATCH 18/20] common/sfc_efx/base: add packetiser packet format definition Andrew Rybchenko
2021-05-27 15:25 ` [dpdk-dev] [PATCH 19/20] net/sfc: support flow action COUNT in transfer rules Andrew Rybchenko
2021-05-27 15:25 ` [dpdk-dev] [PATCH 20/20] net/sfc: support flow API query for count actions Andrew Rybchenko
2021-06-04 14:23 ` [dpdk-dev] [PATCH v2 00/20] net/sfc: support flow API COUNT action Andrew Rybchenko
2021-06-04 14:23   ` [dpdk-dev] [PATCH v2 01/20] net/sfc: introduce ethdev Rx queue ID Andrew Rybchenko
2021-06-04 14:23   ` [dpdk-dev] [PATCH v2 02/20] net/sfc: do not enable interrupts on internal Rx queues Andrew Rybchenko
2021-06-04 14:23   ` [dpdk-dev] [PATCH v2 03/20] common/sfc_efx/base: separate target EvQ and IRQ config Andrew Rybchenko
2021-06-04 14:23   ` [dpdk-dev] [PATCH v2 04/20] common/sfc_efx/base: support custom EvQ to IRQ mapping Andrew Rybchenko
2021-06-04 14:23   ` [dpdk-dev] [PATCH v2 05/20] net/sfc: explicitly control IRQ used for Rx queues Andrew Rybchenko
2021-06-04 14:24   ` [dpdk-dev] [PATCH v2 06/20] net/sfc: introduce ethdev Tx queue ID Andrew Rybchenko
2021-06-04 14:24   ` [dpdk-dev] [PATCH v2 07/20] common/sfc_efx/base: add ingress m-port RxQ flag Andrew Rybchenko
2021-06-04 14:24   ` [dpdk-dev] [PATCH v2 08/20] common/sfc_efx/base: add user mark " Andrew Rybchenko
2021-06-04 14:24   ` [dpdk-dev] [PATCH v2 09/20] net/sfc: add abstractions for the management EVQ identity Andrew Rybchenko
2021-06-04 14:24   ` [dpdk-dev] [PATCH v2 10/20] net/sfc: add support for initialising different RxQ types Andrew Rybchenko
2021-06-04 14:24   ` [dpdk-dev] [PATCH v2 11/20] net/sfc: add NUMA-aware registry of service logical cores Andrew Rybchenko
2021-06-04 14:24   ` [dpdk-dev] [PATCH v2 12/20] net/sfc: reserve RxQ for counters Andrew Rybchenko
2021-06-04 14:24   ` [dpdk-dev] [PATCH v2 13/20] common/sfc_efx/base: add counter creation MCDI wrappers Andrew Rybchenko
2021-06-04 14:24   ` [dpdk-dev] [PATCH v2 14/20] common/sfc_efx/base: add counter stream " Andrew Rybchenko
2021-06-04 14:24   ` [dpdk-dev] [PATCH v2 15/20] common/sfc_efx/base: support counter in action set Andrew Rybchenko
2021-06-04 14:24   ` [dpdk-dev] [PATCH v2 16/20] net/sfc: add Rx datapath method to get pushed buffers count Andrew Rybchenko
2021-06-04 14:24   ` [dpdk-dev] [PATCH v2 17/20] common/sfc_efx/base: add max MAE counters to limits Andrew Rybchenko
2021-06-04 14:24   ` [dpdk-dev] [PATCH v2 18/20] common/sfc_efx/base: add packetiser packet format definition Andrew Rybchenko
2021-06-04 14:24   ` [dpdk-dev] [PATCH v2 19/20] net/sfc: support flow action COUNT in transfer rules Andrew Rybchenko
2021-06-04 14:24   ` [dpdk-dev] [PATCH v2 20/20] net/sfc: support flow API query for count actions Andrew Rybchenko
2021-06-17  8:37   ` [dpdk-dev] [PATCH v2 00/20] net/sfc: support flow API COUNT action David Marchand
2021-06-18 13:40     ` Andrew Rybchenko
2021-06-18 13:40 ` [dpdk-dev] [PATCH v3 " Andrew Rybchenko
2021-06-18 13:40   ` [dpdk-dev] [PATCH v3 01/20] net/sfc: introduce ethdev Rx queue ID Andrew Rybchenko
2021-06-18 13:40   ` [dpdk-dev] [PATCH v3 02/20] net/sfc: do not enable interrupts on internal Rx queues Andrew Rybchenko
2021-06-18 13:40   ` [dpdk-dev] [PATCH v3 03/20] common/sfc_efx/base: separate target EvQ and IRQ config Andrew Rybchenko
2021-06-18 13:40   ` [dpdk-dev] [PATCH v3 04/20] common/sfc_efx/base: support custom EvQ to IRQ mapping Andrew Rybchenko
2021-06-18 13:40   ` [dpdk-dev] [PATCH v3 05/20] net/sfc: explicitly control IRQ used for Rx queues Andrew Rybchenko
2021-06-18 13:40   ` [dpdk-dev] [PATCH v3 06/20] net/sfc: introduce ethdev Tx queue ID Andrew Rybchenko
2021-06-18 13:40   ` [dpdk-dev] [PATCH v3 07/20] common/sfc_efx/base: add ingress m-port RxQ flag Andrew Rybchenko
2021-06-18 13:40   ` [dpdk-dev] [PATCH v3 08/20] common/sfc_efx/base: add user mark " Andrew Rybchenko
2021-06-18 13:40   ` [dpdk-dev] [PATCH v3 09/20] net/sfc: add abstractions for the management EVQ identity Andrew Rybchenko
2021-06-18 13:40   ` [dpdk-dev] [PATCH v3 10/20] net/sfc: add support for initialising different RxQ types Andrew Rybchenko
2021-06-18 13:40   ` [dpdk-dev] [PATCH v3 11/20] net/sfc: add NUMA-aware registry of service logical cores Andrew Rybchenko
2021-06-18 13:40   ` [dpdk-dev] [PATCH v3 12/20] net/sfc: reserve RxQ for counters Andrew Rybchenko
2021-06-18 13:40   ` [dpdk-dev] [PATCH v3 13/20] common/sfc_efx/base: add counter creation MCDI wrappers Andrew Rybchenko
2021-06-18 13:40   ` [dpdk-dev] [PATCH v3 14/20] common/sfc_efx/base: add counter stream " Andrew Rybchenko
2021-06-18 13:40   ` [dpdk-dev] [PATCH v3 15/20] common/sfc_efx/base: support counter in action set Andrew Rybchenko
2021-06-18 13:40   ` [dpdk-dev] [PATCH v3 16/20] net/sfc: add Rx datapath method to get pushed buffers count Andrew Rybchenko
2021-06-18 13:40   ` [dpdk-dev] [PATCH v3 17/20] common/sfc_efx/base: add max MAE counters to limits Andrew Rybchenko
2021-06-18 13:40   ` [dpdk-dev] [PATCH v3 18/20] common/sfc_efx/base: add packetiser packet format definition Andrew Rybchenko
2021-06-18 13:40   ` [dpdk-dev] [PATCH v3 19/20] net/sfc: support flow action COUNT in transfer rules Andrew Rybchenko
2021-06-21  8:28     ` David Marchand
2021-06-21  9:30       ` Thomas Monjalon
2021-07-01  9:22         ` Andrew Rybchenko
2021-07-01 12:34           ` David Marchand
2021-07-01 13:05             ` Andrew Rybchenko
2021-07-01 13:35               ` Bruce Richardson
2021-07-02  8:03                 ` Andrew Rybchenko
2021-07-02  8:43               ` Andrew Rybchenko
2021-07-02 12:30                 ` Thomas Monjalon
2021-07-02 12:53                   ` Andrew Rybchenko
2021-07-04 19:45                     ` Thomas Monjalon
2021-07-05  8:41                       ` Andrew Rybchenko
2021-07-02 13:37                 ` David Marchand
2021-07-02 13:39                   ` Andrew Rybchenko
2021-06-18 13:40   ` [dpdk-dev] [PATCH v3 20/20] net/sfc: support flow API query for count actions Andrew Rybchenko
2021-07-02  8:39 ` [dpdk-dev] [PATCH v4 00/20] net/sfc: support flow API COUNT action Andrew Rybchenko
2021-07-02  8:39   ` [dpdk-dev] [PATCH v4 01/20] net/sfc: introduce ethdev Rx queue ID Andrew Rybchenko
2021-07-02  8:39   ` [dpdk-dev] [PATCH v4 02/20] net/sfc: do not enable interrupts on internal Rx queues Andrew Rybchenko
2021-07-02  8:39   ` [dpdk-dev] [PATCH v4 03/20] common/sfc_efx/base: separate target EvQ and IRQ config Andrew Rybchenko
2021-07-02  8:39   ` [dpdk-dev] [PATCH v4 04/20] common/sfc_efx/base: support custom EvQ to IRQ mapping Andrew Rybchenko
2021-07-02  8:39   ` [dpdk-dev] [PATCH v4 05/20] net/sfc: explicitly control IRQ used for Rx queues Andrew Rybchenko
2021-07-02  8:39   ` [dpdk-dev] [PATCH v4 06/20] net/sfc: introduce ethdev Tx queue ID Andrew Rybchenko
2021-07-02  8:39   ` [dpdk-dev] [PATCH v4 07/20] common/sfc_efx/base: add ingress m-port RxQ flag Andrew Rybchenko
2021-07-02  8:39   ` [dpdk-dev] [PATCH v4 08/20] common/sfc_efx/base: add user mark " Andrew Rybchenko
2021-07-02  8:39   ` [dpdk-dev] [PATCH v4 09/20] net/sfc: add abstractions for the management EVQ identity Andrew Rybchenko
2021-07-02  8:39   ` [dpdk-dev] [PATCH v4 10/20] net/sfc: add support for initialising different RxQ types Andrew Rybchenko
2021-07-02  8:39   ` [dpdk-dev] [PATCH v4 11/20] net/sfc: add NUMA-aware registry of service logical cores Andrew Rybchenko
2021-07-02  8:39   ` [dpdk-dev] [PATCH v4 12/20] net/sfc: reserve RxQ for counters Andrew Rybchenko
2021-07-02  8:39   ` [dpdk-dev] [PATCH v4 13/20] common/sfc_efx/base: add counter creation MCDI wrappers Andrew Rybchenko
2021-07-02  8:39   ` [dpdk-dev] [PATCH v4 14/20] common/sfc_efx/base: add counter stream " Andrew Rybchenko
2021-07-02  8:39   ` [dpdk-dev] [PATCH v4 15/20] common/sfc_efx/base: support counter in action set Andrew Rybchenko
2021-07-02  8:39   ` [dpdk-dev] [PATCH v4 16/20] net/sfc: add Rx datapath method to get pushed buffers count Andrew Rybchenko
2021-07-02  8:39   ` [dpdk-dev] [PATCH v4 17/20] common/sfc_efx/base: add max MAE counters to limits Andrew Rybchenko
2021-07-02  8:39   ` [dpdk-dev] [PATCH v4 18/20] common/sfc_efx/base: add packetiser packet format definition Andrew Rybchenko
2021-07-02  8:39   ` [dpdk-dev] [PATCH v4 19/20] net/sfc: support flow action COUNT in transfer rules Andrew Rybchenko
2021-07-15 14:58     ` David Marchand
2021-07-15 18:30       ` Ivan Malov
2021-07-16 12:12     ` David Marchand
2021-07-02  8:39   ` [dpdk-dev] [PATCH v4 20/20] net/sfc: support flow API query for count actions Andrew Rybchenko
2021-07-20 12:19   ` [dpdk-dev] [PATCH v4 00/20] net/sfc: support flow API COUNT action David Marchand
