DPDK patches and discussions
* [dpdk-dev] [PATCH 00/36] net/sfc: add EF100 support
@ 2020-10-13 13:45 Andrew Rybchenko
  2020-10-13 13:45 ` [dpdk-dev] [PATCH 01/36] doc: fix typo in EF10 Rx equal stride super-buffer name Andrew Rybchenko
                   ` (36 more replies)
  0 siblings, 37 replies; 43+ messages in thread
From: Andrew Rybchenko @ 2020-10-13 13:45 UTC (permalink / raw)
  To: dev

Add basic support for the Alveo SN1000 SmartNIC family.

Andrew Rybchenko (30):
  doc: fix typo in EF10 Rx equal stride super-buffer name
  doc: avoid references to removed config variables in net/sfc
  common/sfc_efx/base: factor out wrapper to set PHY link
  common/sfc_efx/base: factor out MCDI wrapper to set LEDs
  common/sfc_efx/base: fix PHY config failure on Riverhead
  common/sfc_efx/base: add max number of Rx scatter buffers
  net/sfc: log Rx/Tx doorbell addresses useful for debugging
  net/sfc: add caps to specify if libefx supports Rx/Tx
  net/sfc: add EF100 support
  net/sfc: implement libefx Rx packets event callbacks
  net/sfc: implement libefx Tx descs complete event callbacks
  net/sfc: log DMA allocations addresses
  net/sfc: support datapath logs which may be compiled out
  net/sfc: implement EF100 native Rx datapath
  net/sfc: implement EF100 native Tx datapath
  net/sfc: support multi-segment transmit for EF100 datapath
  net/sfc: support TCP and UDP checksum offloads for EF100
  net/sfc: support IPv4 header checksum offload for EF100 Tx
  net/sfc: support tunnels for EF100 native Tx datapath
  net/sfc: support Tx VLAN insertion offload for EF100
  net/sfc: support Rx checksum offload for EF100
  common/sfc_efx/base: simplify to request Rx prefix fields
  common/sfc_efx/base: provide control to deliver RSS hash
  common/sfc_efx/base: provide helper to check Rx prefix
  net/sfc: map Rx offload RSS hash to corresponding RxQ flag
  net/sfc: support per-queue Rx prefix for EF100
  net/sfc: support per-queue Rx RSS hash offload for EF100
  net/sfc: support user mark and flag Rx for EF100
  net/sfc: add Rx interrupts support for EF100
  doc: advertise Alveo SN1000 SmartNICs family support

Igor Romanov (3):
  net/sfc: check vs maximum number of Rx scatter buffers
  net/sfc: use BAR layout discovery to find control window
  net/sfc: forward function control window offset to datapath

Ivan Malov (3):
  net/sfc: add header segments check for EF100 Tx datapath
  net/sfc: support TSO for EF100 native datapath
  net/sfc: support tunnel TSO for EF100 native Tx datapath

 doc/guides/nics/sfc_efx.rst                   |  45 +-
 drivers/common/sfc_efx/base/ef10_nic.c        |   3 +
 drivers/common/sfc_efx/base/ef10_phy.c        | 134 ++-
 drivers/common/sfc_efx/base/ef10_rx.c         |  45 +-
 drivers/common/sfc_efx/base/efx.h             |  23 +-
 drivers/common/sfc_efx/base/efx_rx.c          |  59 ++
 drivers/common/sfc_efx/base/rhead_nic.c       |   3 +
 drivers/common/sfc_efx/base/rhead_rx.c        |  14 +-
 drivers/common/sfc_efx/base/siena_nic.c       |   1 +
 drivers/common/sfc_efx/efsys.h                |  12 +-
 .../sfc_efx/rte_common_sfc_efx_version.map    |   1 +
 drivers/net/sfc/meson.build                   |   6 +-
 drivers/net/sfc/sfc.c                         |  93 +-
 drivers/net/sfc/sfc.h                         |   2 +
 drivers/net/sfc/sfc_dp.h                      |  10 +
 drivers/net/sfc/sfc_dp_rx.h                   |   6 +-
 drivers/net/sfc/sfc_dp_tx.h                   |  96 +-
 drivers/net/sfc/sfc_ef100.h                   |  63 ++
 drivers/net/sfc/sfc_ef100_rx.c                | 918 +++++++++++++++++
 drivers/net/sfc/sfc_ef100_tx.c                | 965 ++++++++++++++++++
 drivers/net/sfc/sfc_ef10_essb_rx.c            |  34 +-
 drivers/net/sfc/sfc_ef10_rx.c                 |  24 +-
 drivers/net/sfc/sfc_ef10_tx.c                 |   7 +-
 drivers/net/sfc/sfc_ethdev.c                  |  11 +-
 drivers/net/sfc/sfc_ev.c                      |  60 ++
 drivers/net/sfc/sfc_kvargs.h                  |   7 +-
 drivers/net/sfc/sfc_rx.c                      |  42 +-
 drivers/net/sfc/sfc_rx.h                      |   1 +
 drivers/net/sfc/sfc_tx.c                      |  21 +-
 29 files changed, 2578 insertions(+), 128 deletions(-)
 create mode 100644 drivers/net/sfc/sfc_ef100.h
 create mode 100644 drivers/net/sfc/sfc_ef100_rx.c
 create mode 100644 drivers/net/sfc/sfc_ef100_tx.c

-- 
2.17.1


^ permalink raw reply	[flat|nested] 43+ messages in thread

* [dpdk-dev] [PATCH 01/36] doc: fix typo in EF10 Rx equal stride super-buffer name
  2020-10-13 13:45 [dpdk-dev] [PATCH 00/36] net/sfc: add EF100 support Andrew Rybchenko
@ 2020-10-13 13:45 ` Andrew Rybchenko
  2020-10-13 13:45 ` [dpdk-dev] [PATCH 02/36] doc: avoid references to removed config variables in net/sfc Andrew Rybchenko
                   ` (35 subsequent siblings)
  36 siblings, 0 replies; 43+ messages in thread
From: Andrew Rybchenko @ 2020-10-13 13:45 UTC (permalink / raw)
  To: dev; +Cc: stable

Fixes: 390f9b8d82c9 ("net/sfc: support equal stride super-buffer Rx mode")
Cc: stable@dpdk.org

Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
---
 doc/guides/nics/sfc_efx.rst | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/doc/guides/nics/sfc_efx.rst b/doc/guides/nics/sfc_efx.rst
index ab44ce66c8..812c1e7951 100644
--- a/doc/guides/nics/sfc_efx.rst
+++ b/doc/guides/nics/sfc_efx.rst
@@ -297,7 +297,7 @@ whitelist option like "-w 02:00.0,arg1=value1,...".
 Case-insensitive 1/y/yes/on or 0/n/no/off may be used to specify
 boolean parameters value.
 
-- ``rx_datapath`` [auto|efx|ef10|ef10_esps] (default **auto**)
+- ``rx_datapath`` [auto|efx|ef10|ef10_essb] (default **auto**)
 
   Choose receive datapath implementation.
   **auto** allows the driver itself to make a choice based on firmware
@@ -306,7 +306,7 @@ boolean parameters value.
   **ef10** chooses EF10 (SFN7xxx, SFN8xxx, X2xxx) native datapath which is
   more efficient than libefx-based and provides richer packet type
   classification.
-  **ef10_esps** chooses SFNX2xxx equal stride packed stream datapath
+  **ef10_essb** chooses SFNX2xxx equal stride super-buffer datapath
   which may be used on DPDK firmware variant only
   (see notes about its limitations above).
 
-- 
2.17.1



* [dpdk-dev] [PATCH 02/36] doc: avoid references to removed config variables in net/sfc
  2020-10-13 13:45 [dpdk-dev] [PATCH 00/36] net/sfc: add EF100 support Andrew Rybchenko
  2020-10-13 13:45 ` [dpdk-dev] [PATCH 01/36] doc: fix typo in EF10 Rx equal stride super-buffer name Andrew Rybchenko
@ 2020-10-13 13:45 ` Andrew Rybchenko
  2020-10-14 10:40   ` Ferruh Yigit
  2020-10-13 13:45 ` [dpdk-dev] [PATCH 03/36] common/sfc_efx/base: factor out wrapper to set PHY link Andrew Rybchenko
                   ` (34 subsequent siblings)
  36 siblings, 1 reply; 43+ messages in thread
From: Andrew Rybchenko @ 2020-10-13 13:45 UTC (permalink / raw)
  To: dev

CONFIG_* variables were used by the make-based build system, which has
been removed.
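
As context, such options are now passed at configure time via meson's
``-Dc_args``. A hedged sketch of the invocation (build directory name is
arbitrary; requires a DPDK source tree, so this is illustrative only):

```shell
# Hypothetical: configure a DPDK build tree with the sfc PMD's extra
# run-time consistency checks enabled, then build it.
meson setup build -Dc_args=-DRTE_LIBRTE_SFC_EFX_DEBUG
ninja -C build
```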

Fixes: 3cc6ecfdfe85 ("build: remove makefiles")

Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
---
 doc/guides/nics/sfc_efx.rst | 14 ++++++--------
 1 file changed, 6 insertions(+), 8 deletions(-)

diff --git a/doc/guides/nics/sfc_efx.rst b/doc/guides/nics/sfc_efx.rst
index 812c1e7951..84b9b56ddb 100644
--- a/doc/guides/nics/sfc_efx.rst
+++ b/doc/guides/nics/sfc_efx.rst
@@ -273,17 +273,15 @@ Pre-Installation Configuration
 ------------------------------
 
 
-Config File Options
-~~~~~~~~~~~~~~~~~~~
+Build Options
+~~~~~~~~~~~~~
 
-The following options can be modified in the ``.config`` file.
-Please note that enabling debugging options may affect system performance.
-
-- ``CONFIG_RTE_LIBRTE_SFC_EFX_PMD`` (default **y**)
+The following build-time options may be enabled at build time using the
+``-Dc_args=`` meson argument (e.g. ``-Dc_args=-DRTE_LIBRTE_SFC_EFX_DEBUG``).
 
-  Enable compilation of Solarflare libefx-based poll-mode driver.
+Please note that enabling debugging options may affect system performance.
 
-- ``CONFIG_RTE_LIBRTE_SFC_EFX_DEBUG`` (default **n**)
+- ``RTE_LIBRTE_SFC_EFX_DEBUG`` (undefined by default)
 
   Enable compilation of the extra run-time consistency checks.
 
-- 
2.17.1



* [dpdk-dev] [PATCH 03/36] common/sfc_efx/base: factor out wrapper to set PHY link
  2020-10-13 13:45 [dpdk-dev] [PATCH 00/36] net/sfc: add EF100 support Andrew Rybchenko
  2020-10-13 13:45 ` [dpdk-dev] [PATCH 01/36] doc: fix typo in EF10 Rx equal stride super-buffer name Andrew Rybchenko
  2020-10-13 13:45 ` [dpdk-dev] [PATCH 02/36] doc: avoid references to removed config variables in net/sfc Andrew Rybchenko
@ 2020-10-13 13:45 ` Andrew Rybchenko
  2020-10-13 13:45 ` [dpdk-dev] [PATCH 04/36] common/sfc_efx/base: factor out MCDI wrapper to set LEDs Andrew Rybchenko
                   ` (33 subsequent siblings)
  36 siblings, 0 replies; 43+ messages in thread
From: Andrew Rybchenko @ 2020-10-13 13:45 UTC (permalink / raw)
  To: dev

Make ef10_phy_reconfigure() simpler to read and less error-prone.
Avoid the confusing case where two MCDI commands are issued from one
function.

Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
---
 drivers/common/sfc_efx/base/ef10_phy.c | 90 +++++++++++++++++---------
 1 file changed, 61 insertions(+), 29 deletions(-)

diff --git a/drivers/common/sfc_efx/base/ef10_phy.c b/drivers/common/sfc_efx/base/ef10_phy.c
index b9822e4d42..0005870736 100644
--- a/drivers/common/sfc_efx/base/ef10_phy.c
+++ b/drivers/common/sfc_efx/base/ef10_phy.c
@@ -329,34 +329,26 @@ ef10_phy_get_link(
 	return (rc);
 }
 
-	__checkReturn	efx_rc_t
-ef10_phy_reconfigure(
-	__in		efx_nic_t *enp)
+static	__checkReturn	efx_rc_t
+efx_mcdi_phy_set_link(
+	__in		efx_nic_t *enp,
+	__in		uint32_t cap_mask,
+	__in		efx_loopback_type_t loopback_type,
+	__in		efx_link_mode_t loopback_link_mode,
+	__in		uint32_t phy_flags)
 {
-	efx_port_t *epp = &(enp->en_port);
 	efx_mcdi_req_t req;
 	EFX_MCDI_DECLARE_BUF(payload, MC_CMD_SET_LINK_IN_LEN,
 		MC_CMD_SET_LINK_OUT_LEN);
-	uint32_t cap_mask;
-#if EFSYS_OPT_PHY_LED_CONTROL
-	unsigned int led_mode;
-#endif
 	unsigned int speed;
-	boolean_t supported;
 	efx_rc_t rc;
 
-	if ((rc = efx_mcdi_link_control_supported(enp, &supported)) != 0)
-		goto fail1;
-	if (supported == B_FALSE)
-		goto out;
-
 	req.emr_cmd = MC_CMD_SET_LINK;
 	req.emr_in_buf = payload;
 	req.emr_in_length = MC_CMD_SET_LINK_IN_LEN;
 	req.emr_out_buf = payload;
 	req.emr_out_length = MC_CMD_SET_LINK_OUT_LEN;
 
-	cap_mask = epp->ep_adv_cap_mask;
 	MCDI_IN_POPULATE_DWORD_10(req, SET_LINK_IN_CAP,
 		PHY_CAP_10HDX, (cap_mask >> EFX_PHY_CAP_10HDX) & 0x1,
 		PHY_CAP_10FDX, (cap_mask >> EFX_PHY_CAP_10FDX) & 0x1,
@@ -397,10 +389,9 @@ ef10_phy_reconfigure(
 	    PHY_CAP_25G_BASER_FEC_REQUESTED,
 	    (cap_mask >> EFX_PHY_CAP_25G_BASER_FEC_REQUESTED) & 0x1);
 
-#if EFSYS_OPT_LOOPBACK
-	MCDI_IN_SET_DWORD(req, SET_LINK_IN_LOOPBACK_MODE,
-		    epp->ep_loopback_type);
-	switch (epp->ep_loopback_link_mode) {
+	MCDI_IN_SET_DWORD(req, SET_LINK_IN_LOOPBACK_MODE, loopback_type);
+
+	switch (loopback_link_mode) {
 	case EFX_LINK_100FDX:
 		speed = 100;
 		break;
@@ -424,26 +415,67 @@ ef10_phy_reconfigure(
 		break;
 	default:
 		speed = 0;
+		break;
 	}
-#else
-	MCDI_IN_SET_DWORD(req, SET_LINK_IN_LOOPBACK_MODE, MC_CMD_LOOPBACK_NONE);
-	speed = 0;
-#endif	/* EFSYS_OPT_LOOPBACK */
 	MCDI_IN_SET_DWORD(req, SET_LINK_IN_LOOPBACK_SPEED, speed);
 
-#if EFSYS_OPT_PHY_FLAGS
-	MCDI_IN_SET_DWORD(req, SET_LINK_IN_FLAGS, epp->ep_phy_flags);
-#else
-	MCDI_IN_SET_DWORD(req, SET_LINK_IN_FLAGS, 0);
-#endif	/* EFSYS_OPT_PHY_FLAGS */
+	MCDI_IN_SET_DWORD(req, SET_LINK_IN_FLAGS, phy_flags);
 
 	efx_mcdi_execute(enp, &req);
 
 	if (req.emr_rc != 0) {
 		rc = req.emr_rc;
-		goto fail2;
+		goto fail1;
 	}
 
+	return (0);
+
+fail1:
+	EFSYS_PROBE1(fail1, efx_rc_t, rc);
+
+	return (rc);
+}
+
+	__checkReturn	efx_rc_t
+ef10_phy_reconfigure(
+	__in		efx_nic_t *enp)
+{
+	efx_port_t *epp = &(enp->en_port);
+	efx_mcdi_req_t req;
+	EFX_MCDI_DECLARE_BUF(payload, MC_CMD_SET_ID_LED_IN_LEN,
+		MC_CMD_SET_ID_LED_OUT_LEN);
+	efx_loopback_type_t loopback_type;
+	efx_link_mode_t loopback_link_mode;
+	uint32_t phy_flags;
+#if EFSYS_OPT_PHY_LED_CONTROL
+	unsigned int led_mode;
+#endif
+	boolean_t supported;
+	efx_rc_t rc;
+
+	if ((rc = efx_mcdi_link_control_supported(enp, &supported)) != 0)
+		goto fail1;
+	if (supported == B_FALSE)
+		goto out;
+
+#if EFSYS_OPT_LOOPBACK
+	loopback_type = epp->ep_loopback_type;
+	loopback_link_mode = epp->ep_loopback_link_mode;
+#else
+	loopback_type = EFX_LOOPBACK_OFF;
+	loopback_link_mode = EFX_LINK_UNKNOWN;
+#endif
+#if EFSYS_OPT_PHY_FLAGS
+	phy_flags = epp->ep_phy_flags;
+#else
+	phy_flags = 0;
+#endif
+
+	rc = efx_mcdi_phy_set_link(enp, epp->ep_adv_cap_mask,
+	    loopback_type, loopback_link_mode, phy_flags);
+	if (rc != 0)
+		goto fail2;
+
 	/* And set the blink mode */
 	(void) memset(payload, 0, sizeof (payload));
 	req.emr_cmd = MC_CMD_SET_ID_LED;
-- 
2.17.1



* [dpdk-dev] [PATCH 04/36] common/sfc_efx/base: factor out MCDI wrapper to set LEDs
  2020-10-13 13:45 [dpdk-dev] [PATCH 00/36] net/sfc: add EF100 support Andrew Rybchenko
                   ` (2 preceding siblings ...)
  2020-10-13 13:45 ` [dpdk-dev] [PATCH 03/36] common/sfc_efx/base: factor out wrapper to set PHY link Andrew Rybchenko
@ 2020-10-13 13:45 ` Andrew Rybchenko
  2020-10-13 13:45 ` [dpdk-dev] [PATCH 05/36] common/sfc_efx/base: fix PHY config failure on Riverhead Andrew Rybchenko
                   ` (32 subsequent siblings)
  36 siblings, 0 replies; 43+ messages in thread
From: Andrew Rybchenko @ 2020-10-13 13:45 UTC (permalink / raw)
  To: dev

For consistency it is better to have separate MCDI wrappers.

Make efx_phy_led_mode_t visible even if EFSYS_OPT_PHY_LED_CONTROL
is disabled so that it can be used in the added wrapper's arguments.

Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
---
 drivers/common/sfc_efx/base/ef10_phy.c | 92 ++++++++++++++++----------
 drivers/common/sfc_efx/base/efx.h      |  4 +-
 2 files changed, 59 insertions(+), 37 deletions(-)

diff --git a/drivers/common/sfc_efx/base/ef10_phy.c b/drivers/common/sfc_efx/base/ef10_phy.c
index 0005870736..3d07c254bf 100644
--- a/drivers/common/sfc_efx/base/ef10_phy.c
+++ b/drivers/common/sfc_efx/base/ef10_phy.c
@@ -430,6 +430,56 @@ efx_mcdi_phy_set_link(
 
 	return (0);
 
+fail1:
+	EFSYS_PROBE1(fail1, efx_rc_t, rc);
+
+	return (rc);
+}
+
+static	__checkReturn	efx_rc_t
+efx_mcdi_phy_set_led(
+	__in		efx_nic_t *enp,
+	__in		efx_phy_led_mode_t phy_led_mode)
+{
+	efx_mcdi_req_t req;
+	EFX_MCDI_DECLARE_BUF(payload, MC_CMD_SET_ID_LED_IN_LEN,
+		MC_CMD_SET_ID_LED_OUT_LEN);
+	unsigned int led_mode;
+	efx_rc_t rc;
+
+	req.emr_cmd = MC_CMD_SET_ID_LED;
+	req.emr_in_buf = payload;
+	req.emr_in_length = MC_CMD_SET_ID_LED_IN_LEN;
+	req.emr_out_buf = payload;
+	req.emr_out_length = MC_CMD_SET_ID_LED_OUT_LEN;
+
+	switch (phy_led_mode) {
+	case EFX_PHY_LED_DEFAULT:
+		led_mode = MC_CMD_LED_DEFAULT;
+		break;
+	case EFX_PHY_LED_OFF:
+		led_mode = MC_CMD_LED_OFF;
+		break;
+	case EFX_PHY_LED_ON:
+		led_mode = MC_CMD_LED_ON;
+		break;
+	default:
+		EFSYS_ASSERT(0);
+		led_mode = MC_CMD_LED_DEFAULT;
+		break;
+	}
+
+	MCDI_IN_SET_DWORD(req, SET_ID_LED_IN_STATE, led_mode);
+
+	efx_mcdi_execute(enp, &req);
+
+	if (req.emr_rc != 0) {
+		rc = req.emr_rc;
+		goto fail1;
+	}
+
+	return (0);
+
 fail1:
 	EFSYS_PROBE1(fail1, efx_rc_t, rc);
 
@@ -441,15 +491,10 @@ ef10_phy_reconfigure(
 	__in		efx_nic_t *enp)
 {
 	efx_port_t *epp = &(enp->en_port);
-	efx_mcdi_req_t req;
-	EFX_MCDI_DECLARE_BUF(payload, MC_CMD_SET_ID_LED_IN_LEN,
-		MC_CMD_SET_ID_LED_OUT_LEN);
 	efx_loopback_type_t loopback_type;
 	efx_link_mode_t loopback_link_mode;
 	uint32_t phy_flags;
-#if EFSYS_OPT_PHY_LED_CONTROL
-	unsigned int led_mode;
-#endif
+	efx_phy_led_mode_t phy_led_mode;
 	boolean_t supported;
 	efx_rc_t rc;
 
@@ -477,40 +522,17 @@ ef10_phy_reconfigure(
 		goto fail2;
 
 	/* And set the blink mode */
-	(void) memset(payload, 0, sizeof (payload));
-	req.emr_cmd = MC_CMD_SET_ID_LED;
-	req.emr_in_buf = payload;
-	req.emr_in_length = MC_CMD_SET_ID_LED_IN_LEN;
-	req.emr_out_buf = payload;
-	req.emr_out_length = MC_CMD_SET_ID_LED_OUT_LEN;
 
 #if EFSYS_OPT_PHY_LED_CONTROL
-	switch (epp->ep_phy_led_mode) {
-	case EFX_PHY_LED_DEFAULT:
-		led_mode = MC_CMD_LED_DEFAULT;
-		break;
-	case EFX_PHY_LED_OFF:
-		led_mode = MC_CMD_LED_OFF;
-		break;
-	case EFX_PHY_LED_ON:
-		led_mode = MC_CMD_LED_ON;
-		break;
-	default:
-		EFSYS_ASSERT(0);
-		led_mode = MC_CMD_LED_DEFAULT;
-	}
-
-	MCDI_IN_SET_DWORD(req, SET_ID_LED_IN_STATE, led_mode);
+	phy_led_mode = epp->ep_phy_led_mode;
 #else
-	MCDI_IN_SET_DWORD(req, SET_ID_LED_IN_STATE, MC_CMD_LED_DEFAULT);
-#endif	/* EFSYS_OPT_PHY_LED_CONTROL */
-
-	efx_mcdi_execute(enp, &req);
+	phy_led_mode = EFX_PHY_LED_DEFAULT;
+#endif
 
-	if (req.emr_rc != 0) {
-		rc = req.emr_rc;
+	rc = efx_mcdi_phy_set_led(enp, phy_led_mode);
+	if (rc != 0)
 		goto fail3;
-	}
+
 out:
 	return (0);
 
diff --git a/drivers/common/sfc_efx/base/efx.h b/drivers/common/sfc_efx/base/efx.h
index 07a7e3c952..a245acfe0f 100644
--- a/drivers/common/sfc_efx/base/efx.h
+++ b/drivers/common/sfc_efx/base/efx.h
@@ -1004,8 +1004,6 @@ extern	__checkReturn	efx_rc_t
 efx_phy_verify(
 	__in		efx_nic_t *enp);
 
-#if EFSYS_OPT_PHY_LED_CONTROL
-
 typedef enum efx_phy_led_mode_e {
 	EFX_PHY_LED_DEFAULT = 0,
 	EFX_PHY_LED_OFF,
@@ -1014,6 +1012,8 @@ typedef enum efx_phy_led_mode_e {
 	EFX_PHY_LED_NMODES
 } efx_phy_led_mode_t;
 
+#if EFSYS_OPT_PHY_LED_CONTROL
+
 LIBEFX_API
 extern	__checkReturn	efx_rc_t
 efx_phy_led_set(
-- 
2.17.1



* [dpdk-dev] [PATCH 05/36] common/sfc_efx/base: fix PHY config failure on Riverhead
  2020-10-13 13:45 [dpdk-dev] [PATCH 00/36] net/sfc: add EF100 support Andrew Rybchenko
                   ` (3 preceding siblings ...)
  2020-10-13 13:45 ` [dpdk-dev] [PATCH 04/36] common/sfc_efx/base: factor out MCDI wrapper to set LEDs Andrew Rybchenko
@ 2020-10-13 13:45 ` Andrew Rybchenko
  2020-10-13 13:45 ` [dpdk-dev] [PATCH 06/36] common/sfc_efx/base: add max number of Rx scatter buffers Andrew Rybchenko
                   ` (31 subsequent siblings)
  36 siblings, 0 replies; 43+ messages in thread
From: Andrew Rybchenko @ 2020-10-13 13:45 UTC (permalink / raw)
  To: dev

Riverhead does not support LED control yet. It is perfectly fine to
ignore an LED set failure caused by missing support when the configured
LED mode is the default.

Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
---
 drivers/common/sfc_efx/base/ef10_phy.c | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/drivers/common/sfc_efx/base/ef10_phy.c b/drivers/common/sfc_efx/base/ef10_phy.c
index 3d07c254bf..74a18841d9 100644
--- a/drivers/common/sfc_efx/base/ef10_phy.c
+++ b/drivers/common/sfc_efx/base/ef10_phy.c
@@ -530,8 +530,16 @@ ef10_phy_reconfigure(
 #endif
 
 	rc = efx_mcdi_phy_set_led(enp, phy_led_mode);
-	if (rc != 0)
+	if (rc != 0) {
+		/*
+		 * If LED control is not supported by firmware, we can
+		 * silently ignore default mode set failure
+		 * (see FWRIVERHD-198).
+		 */
+		if (rc == EOPNOTSUPP && phy_led_mode == EFX_PHY_LED_DEFAULT)
+			goto out;
 		goto fail3;
+	}
 
 out:
 	return (0);
-- 
2.17.1



* [dpdk-dev] [PATCH 06/36] common/sfc_efx/base: add max number of Rx scatter buffers
  2020-10-13 13:45 [dpdk-dev] [PATCH 00/36] net/sfc: add EF100 support Andrew Rybchenko
                   ` (4 preceding siblings ...)
  2020-10-13 13:45 ` [dpdk-dev] [PATCH 05/36] common/sfc_efx/base: fix PHY config failure on Riverhead Andrew Rybchenko
@ 2020-10-13 13:45 ` Andrew Rybchenko
  2020-10-13 13:45 ` [dpdk-dev] [PATCH 07/36] net/sfc: check vs maximum " Andrew Rybchenko
                   ` (30 subsequent siblings)
  36 siblings, 0 replies; 43+ messages in thread
From: Andrew Rybchenko @ 2020-10-13 13:45 UTC (permalink / raw)
  To: dev

Riverhead QDMA has a limitation on the maximum number of Rx scatter
buffers that may be used by a packet. If the limitation is violated,
the datapath is dead. Firmware should ensure the limitation is
honoured, but drivers need to know it anyway to validate parameters
when Rx queues are configured and the MTU is set.

Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
---
 drivers/common/sfc_efx/base/ef10_nic.c  | 3 +++
 drivers/common/sfc_efx/base/efx.h       | 2 ++
 drivers/common/sfc_efx/base/rhead_nic.c | 3 +++
 drivers/common/sfc_efx/base/siena_nic.c | 1 +
 4 files changed, 9 insertions(+)

diff --git a/drivers/common/sfc_efx/base/ef10_nic.c b/drivers/common/sfc_efx/base/ef10_nic.c
index 81cd436424..df7db6a803 100644
--- a/drivers/common/sfc_efx/base/ef10_nic.c
+++ b/drivers/common/sfc_efx/base/ef10_nic.c
@@ -1156,6 +1156,9 @@ ef10_get_datapath_caps(
 	else
 		encp->enc_rx_disable_scatter_supported = B_FALSE;
 
+	/* No limit on maximum number of Rx scatter elements per packet. */
+	encp->enc_rx_scatter_max = -1;
+
 	/* Check if the firmware supports packed stream mode */
 	if (CAP_FLAGS1(req, RX_PACKED_STREAM))
 		encp->enc_rx_packed_stream_supported = B_TRUE;
diff --git a/drivers/common/sfc_efx/base/efx.h b/drivers/common/sfc_efx/base/efx.h
index a245acfe0f..4b7beb209d 100644
--- a/drivers/common/sfc_efx/base/efx.h
+++ b/drivers/common/sfc_efx/base/efx.h
@@ -1555,6 +1555,8 @@ typedef struct efx_nic_cfg_s {
 	/* Datapath firmware vport reconfigure support */
 	boolean_t		enc_vport_reconfigure_supported;
 	boolean_t		enc_rx_disable_scatter_supported;
+	/* Maximum number of Rx scatter segments supported by HW */
+	uint32_t		enc_rx_scatter_max;
 	boolean_t		enc_allow_set_mac_with_installed_filters;
 	boolean_t		enc_enhanced_set_mac_supported;
 	boolean_t		enc_init_evq_v2_supported;
diff --git a/drivers/common/sfc_efx/base/rhead_nic.c b/drivers/common/sfc_efx/base/rhead_nic.c
index 66db68b384..92bc6fdfae 100644
--- a/drivers/common/sfc_efx/base/rhead_nic.c
+++ b/drivers/common/sfc_efx/base/rhead_nic.c
@@ -158,6 +158,9 @@ rhead_board_cfg(
 	}
 	encp->enc_rx_buf_align_end = end_padding;
 
+	/* FIXME: It should be extracted from design parameters (Bug 86844) */
+	encp->enc_rx_scatter_max = 7;
+
 	/*
 	 * Riverhead stores a single global copy of VPD, not per-PF as on
 	 * Huntington.
diff --git a/drivers/common/sfc_efx/base/siena_nic.c b/drivers/common/sfc_efx/base/siena_nic.c
index 9c30e27f59..4137c1e245 100644
--- a/drivers/common/sfc_efx/base/siena_nic.c
+++ b/drivers/common/sfc_efx/base/siena_nic.c
@@ -177,6 +177,7 @@ siena_board_cfg(
 	encp->enc_fw_assisted_tso_v2_enabled = B_FALSE;
 	encp->enc_fw_assisted_tso_v2_n_contexts = 0;
 	encp->enc_tso_v3_enabled = B_FALSE;
+	encp->enc_rx_scatter_max = -1;
 	encp->enc_allow_set_mac_with_installed_filters = B_TRUE;
 	encp->enc_rx_packed_stream_supported = B_FALSE;
 	encp->enc_rx_var_packed_stream_supported = B_FALSE;
-- 
2.17.1



* [dpdk-dev] [PATCH 07/36] net/sfc: check vs maximum number of Rx scatter buffers
  2020-10-13 13:45 [dpdk-dev] [PATCH 00/36] net/sfc: add EF100 support Andrew Rybchenko
                   ` (5 preceding siblings ...)
  2020-10-13 13:45 ` [dpdk-dev] [PATCH 06/36] common/sfc_efx/base: add max number of Rx scatter buffers Andrew Rybchenko
@ 2020-10-13 13:45 ` Andrew Rybchenko
  2020-10-13 13:45 ` [dpdk-dev] [PATCH 08/36] net/sfc: log Rx/Tx doorbell addresses useful for debugging Andrew Rybchenko
                   ` (29 subsequent siblings)
  36 siblings, 0 replies; 43+ messages in thread
From: Andrew Rybchenko @ 2020-10-13 13:45 UTC (permalink / raw)
  To: dev; +Cc: Igor Romanov

From: Igor Romanov <igor.romanov@oktetlabs.ru>

Update generic code to check that MTU and Rx buffer sizes
do not result in more Rx scatter segments than the NIC can handle.

Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
---
 drivers/net/sfc/sfc_ethdev.c |  3 ++-
 drivers/net/sfc/sfc_rx.c     | 17 ++++++++++++++---
 drivers/net/sfc/sfc_rx.h     |  1 +
 3 files changed, 17 insertions(+), 4 deletions(-)

diff --git a/drivers/net/sfc/sfc_ethdev.c b/drivers/net/sfc/sfc_ethdev.c
index f41d0f5fe2..ca1b99a00f 100644
--- a/drivers/net/sfc/sfc_ethdev.c
+++ b/drivers/net/sfc/sfc_ethdev.c
@@ -954,7 +954,8 @@ sfc_check_scatter_on_all_rx_queues(struct sfc_adapter *sa, size_t pdu)
 
 		if (!sfc_rx_check_scatter(pdu, sa->rxq_ctrl[i].buf_size,
 					  encp->enc_rx_prefix_size,
-					  scatter_enabled, &error)) {
+					  scatter_enabled,
+					  encp->enc_rx_scatter_max, &error)) {
 			sfc_err(sa, "MTU check for RxQ %u failed: %s", i,
 				error);
 			return EINVAL;
diff --git a/drivers/net/sfc/sfc_rx.c b/drivers/net/sfc/sfc_rx.c
index 3e5c8e42da..7c50fe58b8 100644
--- a/drivers/net/sfc/sfc_rx.c
+++ b/drivers/net/sfc/sfc_rx.c
@@ -378,10 +378,20 @@ sfc_efx_rx_qdesc_status(struct sfc_dp_rxq *dp_rxq, uint16_t offset)
 
 boolean_t
 sfc_rx_check_scatter(size_t pdu, size_t rx_buf_size, uint32_t rx_prefix_size,
-		     boolean_t rx_scatter_enabled, const char **error)
+		     boolean_t rx_scatter_enabled, uint32_t rx_scatter_max,
+		     const char **error)
 {
-	if ((rx_buf_size < pdu + rx_prefix_size) && !rx_scatter_enabled) {
-		*error = "Rx scatter is disabled and RxQ mbuf pool object size is too small";
+	uint32_t effective_rx_scatter_max;
+	uint32_t rx_scatter_bufs;
+
+	effective_rx_scatter_max = rx_scatter_enabled ? rx_scatter_max : 1;
+	rx_scatter_bufs = EFX_DIV_ROUND_UP(pdu + rx_prefix_size, rx_buf_size);
+
+	if (rx_scatter_bufs > effective_rx_scatter_max) {
+		if (rx_scatter_enabled)
+			*error = "Possible number of Rx scatter buffers exceeds maximum number";
+		else
+			*error = "Rx scatter is disabled and RxQ mbuf pool object size is too small";
 		return B_FALSE;
 	}
 
@@ -1084,6 +1094,7 @@ sfc_rx_qinit(struct sfc_adapter *sa, unsigned int sw_index,
 	if (!sfc_rx_check_scatter(sa->port.pdu, buf_size,
 				  encp->enc_rx_prefix_size,
 				  (offloads & DEV_RX_OFFLOAD_SCATTER),
+				  encp->enc_rx_scatter_max,
 				  &error)) {
 		sfc_err(sa, "RxQ %u MTU check failed: %s", sw_index, error);
 		sfc_err(sa, "RxQ %u calculated Rx buffer size is %u vs "
diff --git a/drivers/net/sfc/sfc_rx.h b/drivers/net/sfc/sfc_rx.h
index b0b5327a49..d6ee9cf802 100644
--- a/drivers/net/sfc/sfc_rx.h
+++ b/drivers/net/sfc/sfc_rx.h
@@ -147,6 +147,7 @@ uint64_t sfc_rx_hf_efx_to_rte(struct sfc_rss *rss, efx_rx_hash_type_t efx);
 boolean_t sfc_rx_check_scatter(size_t pdu, size_t rx_buf_size,
 			       uint32_t rx_prefix_size,
 			       boolean_t rx_scatter_enabled,
+			       uint32_t rx_scatter_max,
 			       const char **error);
 
 #ifdef __cplusplus
-- 
2.17.1



* [dpdk-dev] [PATCH 08/36] net/sfc: log Rx/Tx doorbell addresses useful for debugging
  2020-10-13 13:45 [dpdk-dev] [PATCH 00/36] net/sfc: add EF100 support Andrew Rybchenko
                   ` (6 preceding siblings ...)
  2020-10-13 13:45 ` [dpdk-dev] [PATCH 07/36] net/sfc: check vs maximum " Andrew Rybchenko
@ 2020-10-13 13:45 ` Andrew Rybchenko
  2020-10-13 13:45 ` [dpdk-dev] [PATCH 09/36] net/sfc: add caps to specify if libefx supports Rx/Tx Andrew Rybchenko
                   ` (28 subsequent siblings)
  36 siblings, 0 replies; 43+ messages in thread
From: Andrew Rybchenko @ 2020-10-13 13:45 UTC (permalink / raw)
  To: dev

Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
---
 drivers/net/sfc/sfc_ef10_essb_rx.c | 2 ++
 drivers/net/sfc/sfc_ef10_rx.c      | 5 +++++
 drivers/net/sfc/sfc_ef10_tx.c      | 5 +++++
 3 files changed, 12 insertions(+)

diff --git a/drivers/net/sfc/sfc_ef10_essb_rx.c b/drivers/net/sfc/sfc_ef10_essb_rx.c
index 8238cc830d..d9bf28525b 100644
--- a/drivers/net/sfc/sfc_ef10_essb_rx.c
+++ b/drivers/net/sfc/sfc_ef10_essb_rx.c
@@ -596,6 +596,8 @@ sfc_ef10_essb_rx_qcreate(uint16_t port_id, uint16_t queue_id,
 			ER_DZ_RX_DESC_UPD_REG_OFST +
 			(info->hw_index << info->vi_window_shift);
 
+	sfc_ef10_essb_rx_info(&rxq->dp.dpq, "RxQ doorbell is %p",
+			      rxq->doorbell);
 	sfc_ef10_essb_rx_info(&rxq->dp.dpq,
 			      "block size is %u, buf stride is %u",
 			      rxq->block_size, rxq->buf_stride);
diff --git a/drivers/net/sfc/sfc_ef10_rx.c b/drivers/net/sfc/sfc_ef10_rx.c
index 8c6ebaa2fa..62d0b6206b 100644
--- a/drivers/net/sfc/sfc_ef10_rx.c
+++ b/drivers/net/sfc/sfc_ef10_rx.c
@@ -33,6 +33,9 @@
 #define sfc_ef10_rx_err(dpq, ...) \
 	SFC_DP_LOG(SFC_KVARG_DATAPATH_EF10, ERR, dpq, __VA_ARGS__)
 
+#define sfc_ef10_rx_info(dpq, ...) \
+	SFC_DP_LOG(SFC_KVARG_DATAPATH_EF10, INFO, dpq, __VA_ARGS__)
+
 /**
  * Maximum number of descriptors/buffers in the Rx ring.
  * It should guarantee that corresponding event queue never overfill.
@@ -672,6 +675,8 @@ sfc_ef10_rx_qcreate(uint16_t port_id, uint16_t queue_id,
 		      ER_DZ_EVQ_RPTR_REG_OFST +
 		      (info->evq_hw_index << info->vi_window_shift);
 
+	sfc_ef10_rx_info(&rxq->dp.dpq, "RxQ doorbell is %p", rxq->doorbell);
+
 	*dp_rxqp = &rxq->dp;
 	return 0;
 
diff --git a/drivers/net/sfc/sfc_ef10_tx.c b/drivers/net/sfc/sfc_ef10_tx.c
index 4d7da427cb..6fb4ac88a8 100644
--- a/drivers/net/sfc/sfc_ef10_tx.c
+++ b/drivers/net/sfc/sfc_ef10_tx.c
@@ -29,6 +29,9 @@
 #define sfc_ef10_tx_err(dpq, ...) \
 	SFC_DP_LOG(SFC_KVARG_DATAPATH_EF10, ERR, dpq, __VA_ARGS__)
 
+#define sfc_ef10_tx_info(dpq, ...) \
+	SFC_DP_LOG(SFC_KVARG_DATAPATH_EF10, INFO, dpq, __VA_ARGS__)
+
 /** Maximum length of the DMA descriptor data */
 #define SFC_EF10_TX_DMA_DESC_LEN_MAX \
 	((1u << ESF_DZ_TX_KER_BYTE_CNT_WIDTH) - 1)
@@ -960,6 +963,8 @@ sfc_ef10_tx_qcreate(uint16_t port_id, uint16_t queue_id,
 	txq->evq_hw_ring = info->evq_hw_ring;
 	txq->tso_tcp_header_offset_limit = info->tso_tcp_header_offset_limit;
 
+	sfc_ef10_tx_info(&txq->dp.dpq, "TxQ doorbell is %p", txq->doorbell);
+
 	*dp_txqp = &txq->dp;
 	return 0;
 
-- 
2.17.1



* [dpdk-dev] [PATCH 09/36] net/sfc: add caps to specify if libefx supports Rx/Tx
  2020-10-13 13:45 [dpdk-dev] [PATCH 00/36] net/sfc: add EF100 support Andrew Rybchenko
                   ` (7 preceding siblings ...)
  2020-10-13 13:45 ` [dpdk-dev] [PATCH 08/36] net/sfc: log Rx/Tx doorbell addresses useful for debugging Andrew Rybchenko
@ 2020-10-13 13:45 ` Andrew Rybchenko
  2020-10-13 13:45 ` [dpdk-dev] [PATCH 10/36] net/sfc: add EF100 support Andrew Rybchenko
                   ` (27 subsequent siblings)
  36 siblings, 0 replies; 43+ messages in thread
From: Andrew Rybchenko @ 2020-10-13 13:45 UTC (permalink / raw)
  To: dev

libefx usage may be limited to the control path only: its datapath
implementation may not support the NIC family, or the PMD's efx Rx/Tx
datapath implementations may not yet be ported to the updated libefx.

Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
---
 drivers/net/sfc/sfc_dp.h     | 2 ++
 drivers/net/sfc/sfc_ethdev.c | 2 ++
 drivers/net/sfc/sfc_rx.c     | 2 +-
 drivers/net/sfc/sfc_tx.c     | 2 +-
 4 files changed, 6 insertions(+), 2 deletions(-)

diff --git a/drivers/net/sfc/sfc_dp.h b/drivers/net/sfc/sfc_dp.h
index a161b0b07c..0c11cb09d0 100644
--- a/drivers/net/sfc/sfc_dp.h
+++ b/drivers/net/sfc/sfc_dp.h
@@ -81,6 +81,8 @@ struct sfc_dp {
 	unsigned int			hw_fw_caps;
 #define SFC_DP_HW_FW_CAP_EF10				0x1
 #define SFC_DP_HW_FW_CAP_RX_ES_SUPER_BUFFER		0x2
+#define SFC_DP_HW_FW_CAP_RX_EFX				0x4
+#define SFC_DP_HW_FW_CAP_TX_EFX				0x8
 };
 
 /** List of datapath variants */
diff --git a/drivers/net/sfc/sfc_ethdev.c b/drivers/net/sfc/sfc_ethdev.c
index ca1b99a00f..2140ac5d98 100644
--- a/drivers/net/sfc/sfc_ethdev.c
+++ b/drivers/net/sfc/sfc_ethdev.c
@@ -1924,6 +1924,8 @@ sfc_eth_dev_set_ops(struct rte_eth_dev *dev)
 	case EFX_FAMILY_MEDFORD:
 	case EFX_FAMILY_MEDFORD2:
 		avail_caps |= SFC_DP_HW_FW_CAP_EF10;
+		avail_caps |= SFC_DP_HW_FW_CAP_RX_EFX;
+		avail_caps |= SFC_DP_HW_FW_CAP_TX_EFX;
 		break;
 	default:
 		break;
diff --git a/drivers/net/sfc/sfc_rx.c b/drivers/net/sfc/sfc_rx.c
index 7c50fe58b8..a9217ada9d 100644
--- a/drivers/net/sfc/sfc_rx.c
+++ b/drivers/net/sfc/sfc_rx.c
@@ -624,7 +624,7 @@ struct sfc_dp_rx sfc_efx_rx = {
 	.dp = {
 		.name		= SFC_KVARG_DATAPATH_EFX,
 		.type		= SFC_DP_RX,
-		.hw_fw_caps	= 0,
+		.hw_fw_caps	= SFC_DP_HW_FW_CAP_RX_EFX,
 	},
 	.features		= SFC_DP_RX_FEAT_INTR,
 	.dev_offload_capa	= DEV_RX_OFFLOAD_CHECKSUM |
diff --git a/drivers/net/sfc/sfc_tx.c b/drivers/net/sfc/sfc_tx.c
index 05a2cf009e..4ea614816a 100644
--- a/drivers/net/sfc/sfc_tx.c
+++ b/drivers/net/sfc/sfc_tx.c
@@ -1138,7 +1138,7 @@ struct sfc_dp_tx sfc_efx_tx = {
 	.dp = {
 		.name		= SFC_KVARG_DATAPATH_EFX,
 		.type		= SFC_DP_TX,
-		.hw_fw_caps	= 0,
+		.hw_fw_caps	= SFC_DP_HW_FW_CAP_TX_EFX,
 	},
 	.features		= 0,
 	.dev_offload_capa	= DEV_TX_OFFLOAD_VLAN_INSERT |
-- 
2.17.1



* [dpdk-dev] [PATCH 10/36] net/sfc: add EF100 support
  2020-10-13 13:45 [dpdk-dev] [PATCH 00/36] net/sfc: add EF100 support Andrew Rybchenko
                   ` (8 preceding siblings ...)
  2020-10-13 13:45 ` [dpdk-dev] [PATCH 09/36] net/sfc: add caps to specify if libefx supports Rx/Tx Andrew Rybchenko
@ 2020-10-13 13:45 ` Andrew Rybchenko
  2020-10-14 10:40   ` Ferruh Yigit
  2020-10-13 13:45 ` [dpdk-dev] [PATCH 11/36] net/sfc: use BAR layout discovery to find control window Andrew Rybchenko
                   ` (26 subsequent siblings)
  36 siblings, 1 reply; 43+ messages in thread
From: Andrew Rybchenko @ 2020-10-13 13:45 UTC (permalink / raw)
  To: dev

Riverhead is the first NIC of the EF100 architecture.

Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
---
 drivers/common/sfc_efx/efsys.h | 4 ++--
 drivers/net/sfc/sfc_dp.h       | 1 +
 drivers/net/sfc/sfc_ethdev.c   | 4 ++++
 3 files changed, 7 insertions(+), 2 deletions(-)

diff --git a/drivers/common/sfc_efx/efsys.h b/drivers/common/sfc_efx/efsys.h
index 8584cd1a40..530dd7097d 100644
--- a/drivers/common/sfc_efx/efsys.h
+++ b/drivers/common/sfc_efx/efsys.h
@@ -104,8 +104,8 @@ prefetch_read_once(const volatile void *addr)
 #define EFSYS_OPT_MEDFORD 1
 /* Enable SFN2xxx support */
 #define EFSYS_OPT_MEDFORD2 1
-/* Disable Riverhead support */
-#define EFSYS_OPT_RIVERHEAD 0
+/* Enable Riverhead support */
+#define EFSYS_OPT_RIVERHEAD 1
 
 #ifdef RTE_LIBRTE_SFC_EFX_DEBUG
 #define EFSYS_OPT_CHECK_REG 1
diff --git a/drivers/net/sfc/sfc_dp.h b/drivers/net/sfc/sfc_dp.h
index 0c11cb09d0..47487d1f1f 100644
--- a/drivers/net/sfc/sfc_dp.h
+++ b/drivers/net/sfc/sfc_dp.h
@@ -83,6 +83,7 @@ struct sfc_dp {
 #define SFC_DP_HW_FW_CAP_RX_ES_SUPER_BUFFER		0x2
 #define SFC_DP_HW_FW_CAP_RX_EFX				0x4
 #define SFC_DP_HW_FW_CAP_TX_EFX				0x8
+#define SFC_DP_HW_FW_CAP_EF100				0x10
 };
 
 /** List of datapath variants */
diff --git a/drivers/net/sfc/sfc_ethdev.c b/drivers/net/sfc/sfc_ethdev.c
index 2140ac5d98..ae668face0 100644
--- a/drivers/net/sfc/sfc_ethdev.c
+++ b/drivers/net/sfc/sfc_ethdev.c
@@ -1927,6 +1927,9 @@ sfc_eth_dev_set_ops(struct rte_eth_dev *dev)
 		avail_caps |= SFC_DP_HW_FW_CAP_RX_EFX;
 		avail_caps |= SFC_DP_HW_FW_CAP_TX_EFX;
 		break;
+	case EFX_FAMILY_RIVERHEAD:
+		avail_caps |= SFC_DP_HW_FW_CAP_EF100;
+		break;
 	default:
 		break;
 	}
@@ -2302,6 +2305,7 @@ static const struct rte_pci_id pci_id_sfc_efx_map[] = {
 	{ RTE_PCI_DEVICE(EFX_PCI_VENID_SFC, EFX_PCI_DEVID_MEDFORD_VF) },
 	{ RTE_PCI_DEVICE(EFX_PCI_VENID_SFC, EFX_PCI_DEVID_MEDFORD2) },
 	{ RTE_PCI_DEVICE(EFX_PCI_VENID_SFC, EFX_PCI_DEVID_MEDFORD2_VF) },
+	{ RTE_PCI_DEVICE(EFX_PCI_VENID_XILINX, EFX_PCI_DEVID_RIVERHEAD) },
 	{ .vendor_id = 0 /* sentinel */ }
 };
 
-- 
2.17.1



* [dpdk-dev] [PATCH 11/36] net/sfc: use BAR layout discovery to find control window
  2020-10-13 13:45 [dpdk-dev] [PATCH 00/36] net/sfc: add EF100 support Andrew Rybchenko
                   ` (9 preceding siblings ...)
  2020-10-13 13:45 ` [dpdk-dev] [PATCH 10/36] net/sfc: add EF100 support Andrew Rybchenko
@ 2020-10-13 13:45 ` Andrew Rybchenko
  2020-10-13 13:45 ` [dpdk-dev] [PATCH 12/36] net/sfc: implement libefx Rx packets event callbacks Andrew Rybchenko
                   ` (25 subsequent siblings)
  36 siblings, 0 replies; 43+ messages in thread
From: Andrew Rybchenko @ 2020-10-13 13:45 UTC (permalink / raw)
  To: dev; +Cc: Igor Romanov

From: Igor Romanov <igor.romanov@oktetlabs.ru>

The control window is required to talk to the NIC.

Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
---
 drivers/common/sfc_efx/efsys.h |  8 +++-
 drivers/net/sfc/meson.build    |  2 +-
 drivers/net/sfc/sfc.c          | 77 +++++++++++++++++++++++++++++-----
 3 files changed, 75 insertions(+), 12 deletions(-)

diff --git a/drivers/common/sfc_efx/efsys.h b/drivers/common/sfc_efx/efsys.h
index 530dd7097d..f7d5f8a060 100644
--- a/drivers/common/sfc_efx/efsys.h
+++ b/drivers/common/sfc_efx/efsys.h
@@ -163,7 +163,7 @@ prefetch_read_once(const volatile void *addr)
 
 #define EFSYS_OPT_MCDI_PROXY_AUTH_SERVER 0
 
-#define EFSYS_OPT_PCI 0
+#define EFSYS_OPT_PCI 1
 
 #define EFSYS_OPT_DESC_PROXY 0
 
@@ -741,6 +741,12 @@ typedef uint64_t	efsys_stat_t;
 
 #define EFSYS_HAS_ROTL_DWORD	0
 
+/* PCI */
+
+typedef struct efsys_pci_config_s {
+	struct rte_pci_device	*espc_dev;
+} efsys_pci_config_t;
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/drivers/net/sfc/meson.build b/drivers/net/sfc/meson.build
index 1c6451938a..304e8686e5 100644
--- a/drivers/net/sfc/meson.build
+++ b/drivers/net/sfc/meson.build
@@ -33,7 +33,7 @@ foreach flag: extra_flags
 	endif
 endforeach
 
-deps += ['common_sfc_efx']
+deps += ['common_sfc_efx', 'bus_pci']
 sources = files(
 	'sfc_ethdev.c',
 	'sfc_kvargs.c',
diff --git a/drivers/net/sfc/sfc.c b/drivers/net/sfc/sfc.c
index 4f1fd0c695..559f9039c2 100644
--- a/drivers/net/sfc/sfc.c
+++ b/drivers/net/sfc/sfc.c
@@ -626,16 +626,40 @@ sfc_close(struct sfc_adapter *sa)
 	sfc_log_init(sa, "done");
 }
 
+static efx_rc_t
+sfc_find_mem_bar(efsys_pci_config_t *configp, int bar_index,
+		 efsys_bar_t *barp)
+{
+	efsys_bar_t result;
+	struct rte_pci_device *dev;
+
+	memset(&result, 0, sizeof(result));
+
+	if (bar_index < 0 || bar_index >= PCI_MAX_RESOURCE)
+		return EINVAL;
+
+	dev = configp->espc_dev;
+
+	result.esb_rid = bar_index;
+	result.esb_dev = dev;
+	result.esb_base = dev->mem_resource[bar_index].addr;
+
+	*barp = result;
+
+	return 0;
+}
+
 static int
-sfc_mem_bar_init(struct sfc_adapter *sa, unsigned int membar)
+sfc_mem_bar_init(struct sfc_adapter *sa, const efx_bar_region_t *mem_ebrp)
 {
 	struct rte_eth_dev *eth_dev = sa->eth_dev;
 	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
 	efsys_bar_t *ebp = &sa->mem_bar;
-	struct rte_mem_resource *res = &pci_dev->mem_resource[membar];
+	struct rte_mem_resource *res =
+		&pci_dev->mem_resource[mem_ebrp->ebr_index];
 
 	SFC_BAR_LOCK_INIT(ebp, eth_dev->data->name);
-	ebp->esb_rid = membar;
+	ebp->esb_rid = mem_ebrp->ebr_index;
 	ebp->esb_dev = pci_dev;
 	ebp->esb_base = res->addr;
 	return 0;
@@ -1053,11 +1077,43 @@ sfc_nic_probe(struct sfc_adapter *sa)
 	return 0;
 }
 
+static efx_rc_t
+sfc_pci_config_readd(efsys_pci_config_t *configp, uint32_t offset,
+		     efx_dword_t *edp)
+{
+	int rc;
+
+	rc = rte_pci_read_config(configp->espc_dev, edp->ed_u32, sizeof(*edp),
+				 offset);
+
+	return (rc < 0 || rc != sizeof(*edp)) ? EIO : 0;
+}
+
+static int
+sfc_family(struct sfc_adapter *sa, efx_bar_region_t *mem_ebrp)
+{
+	struct rte_eth_dev *eth_dev = sa->eth_dev;
+	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
+	efsys_pci_config_t espcp;
+	static const efx_pci_ops_t ops = {
+		.epo_config_readd = sfc_pci_config_readd,
+		.epo_find_mem_bar = sfc_find_mem_bar,
+	};
+	int rc;
+
+	espcp.espc_dev = pci_dev;
+
+	rc = efx_family_probe_bar(pci_dev->id.vendor_id,
+				  pci_dev->id.device_id,
+				  &espcp, &ops, &sa->family, mem_ebrp);
+
+	return rc;
+}
+
 int
 sfc_probe(struct sfc_adapter *sa)
 {
-	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(sa->eth_dev);
-	unsigned int membar;
+	efx_bar_region_t mem_ebrp;
 	efx_nic_t *enp;
 	int rc;
 
@@ -1069,21 +1125,22 @@ sfc_probe(struct sfc_adapter *sa)
 	rte_atomic32_init(&sa->restart_required);
 
 	sfc_log_init(sa, "get family");
-	rc = efx_family(pci_dev->id.vendor_id, pci_dev->id.device_id,
-			&sa->family, &membar);
+	rc = sfc_family(sa, &mem_ebrp);
 	if (rc != 0)
 		goto fail_family;
-	sfc_log_init(sa, "family is %u, membar is %u", sa->family, membar);
+	sfc_log_init(sa,
+		     "family is %u, membar is %u, function control window offset is %" PRIu64,
+		     sa->family, mem_ebrp.ebr_index, (uint64_t)mem_ebrp.ebr_offset);
 
 	sfc_log_init(sa, "init mem bar");
-	rc = sfc_mem_bar_init(sa, membar);
+	rc = sfc_mem_bar_init(sa, &mem_ebrp);
 	if (rc != 0)
 		goto fail_mem_bar_init;
 
 	sfc_log_init(sa, "create nic");
 	rte_spinlock_init(&sa->nic_lock);
 	rc = efx_nic_create(sa->family, (efsys_identifier_t *)sa,
-			    &sa->mem_bar, 0,
+			    &sa->mem_bar, mem_ebrp.ebr_offset,
 			    &sa->nic_lock, &enp);
 	if (rc != 0)
 		goto fail_nic_create;
-- 
2.17.1



* [dpdk-dev] [PATCH 12/36] net/sfc: implement libefx Rx packets event callbacks
  2020-10-13 13:45 [dpdk-dev] [PATCH 00/36] net/sfc: add EF100 support Andrew Rybchenko
                   ` (10 preceding siblings ...)
  2020-10-13 13:45 ` [dpdk-dev] [PATCH 11/36] net/sfc: use BAR layout discovery to find control window Andrew Rybchenko
@ 2020-10-13 13:45 ` Andrew Rybchenko
  2020-10-13 13:45 ` [dpdk-dev] [PATCH 13/36] net/sfc: implement libefx Tx descs complete " Andrew Rybchenko
                   ` (24 subsequent siblings)
  36 siblings, 0 replies; 43+ messages in thread
From: Andrew Rybchenko @ 2020-10-13 13:45 UTC (permalink / raw)
  To: dev

These callbacks are used when an event queue is polled via libefx.
libefx polling is used for the management event queue, where no Rx
events are expected, and for datapath event queues at flush time
(when these events are typically ignored, since the queue is being
stopped).

Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
---
 drivers/net/sfc/sfc_ev.c | 31 +++++++++++++++++++++++++++++++
 1 file changed, 31 insertions(+)

diff --git a/drivers/net/sfc/sfc_ev.c b/drivers/net/sfc/sfc_ev.c
index cc7d5d1179..322a391100 100644
--- a/drivers/net/sfc/sfc_ev.c
+++ b/drivers/net/sfc/sfc_ev.c
@@ -162,6 +162,32 @@ sfc_ev_dp_rx(void *arg, __rte_unused uint32_t label, uint32_t id,
 	return evq->sa->priv.dp_rx->qrx_ev(dp_rxq, id);
 }
 
+static boolean_t
+sfc_ev_nop_rx_packets(void *arg, uint32_t label, unsigned int num_packets,
+		      uint32_t flags)
+{
+	struct sfc_evq *evq = arg;
+
+	sfc_err(evq->sa,
+		"EVQ %u unexpected Rx packets event label=%u num=%u flags=%#x",
+		evq->evq_index, label, num_packets, flags);
+	return B_TRUE;
+}
+
+static boolean_t
+sfc_ev_dp_rx_packets(void *arg, __rte_unused uint32_t label,
+		     unsigned int num_packets, __rte_unused uint32_t flags)
+{
+	struct sfc_evq *evq = arg;
+	struct sfc_dp_rxq *dp_rxq;
+
+	dp_rxq = evq->dp_rxq;
+	SFC_ASSERT(dp_rxq != NULL);
+
+	SFC_ASSERT(evq->sa->priv.dp_rx->qrx_ev != NULL);
+	return evq->sa->priv.dp_rx->qrx_ev(dp_rxq, num_packets);
+}
+
 static boolean_t
 sfc_ev_nop_rx_ps(void *arg, uint32_t label, uint32_t id,
 		 uint32_t pkt_count, uint16_t flags)
@@ -429,6 +455,7 @@ sfc_ev_link_change(void *arg, efx_link_mode_t link_mode)
 static const efx_ev_callbacks_t sfc_ev_callbacks = {
 	.eec_initialized	= sfc_ev_initialized,
 	.eec_rx			= sfc_ev_nop_rx,
+	.eec_rx_packets		= sfc_ev_nop_rx_packets,
 	.eec_rx_ps		= sfc_ev_nop_rx_ps,
 	.eec_tx			= sfc_ev_nop_tx,
 	.eec_exception		= sfc_ev_exception,
@@ -445,6 +472,7 @@ static const efx_ev_callbacks_t sfc_ev_callbacks = {
 static const efx_ev_callbacks_t sfc_ev_callbacks_efx_rx = {
 	.eec_initialized	= sfc_ev_initialized,
 	.eec_rx			= sfc_ev_efx_rx,
+	.eec_rx_packets		= sfc_ev_nop_rx_packets,
 	.eec_rx_ps		= sfc_ev_nop_rx_ps,
 	.eec_tx			= sfc_ev_nop_tx,
 	.eec_exception		= sfc_ev_exception,
@@ -461,6 +489,7 @@ static const efx_ev_callbacks_t sfc_ev_callbacks_efx_rx = {
 static const efx_ev_callbacks_t sfc_ev_callbacks_dp_rx = {
 	.eec_initialized	= sfc_ev_initialized,
 	.eec_rx			= sfc_ev_dp_rx,
+	.eec_rx_packets		= sfc_ev_dp_rx_packets,
 	.eec_rx_ps		= sfc_ev_dp_rx_ps,
 	.eec_tx			= sfc_ev_nop_tx,
 	.eec_exception		= sfc_ev_exception,
@@ -477,6 +506,7 @@ static const efx_ev_callbacks_t sfc_ev_callbacks_dp_rx = {
 static const efx_ev_callbacks_t sfc_ev_callbacks_efx_tx = {
 	.eec_initialized	= sfc_ev_initialized,
 	.eec_rx			= sfc_ev_nop_rx,
+	.eec_rx_packets		= sfc_ev_nop_rx_packets,
 	.eec_rx_ps		= sfc_ev_nop_rx_ps,
 	.eec_tx			= sfc_ev_tx,
 	.eec_exception		= sfc_ev_exception,
@@ -493,6 +523,7 @@ static const efx_ev_callbacks_t sfc_ev_callbacks_efx_tx = {
 static const efx_ev_callbacks_t sfc_ev_callbacks_dp_tx = {
 	.eec_initialized	= sfc_ev_initialized,
 	.eec_rx			= sfc_ev_nop_rx,
+	.eec_rx_packets		= sfc_ev_nop_rx_packets,
 	.eec_rx_ps		= sfc_ev_nop_rx_ps,
 	.eec_tx			= sfc_ev_dp_tx,
 	.eec_exception		= sfc_ev_exception,
-- 
2.17.1



* [dpdk-dev] [PATCH 13/36] net/sfc: implement libefx Tx descs complete event callbacks
  2020-10-13 13:45 [dpdk-dev] [PATCH 00/36] net/sfc: add EF100 support Andrew Rybchenko
                   ` (11 preceding siblings ...)
  2020-10-13 13:45 ` [dpdk-dev] [PATCH 12/36] net/sfc: implement libefx Rx packets event callbacks Andrew Rybchenko
@ 2020-10-13 13:45 ` Andrew Rybchenko
  2020-10-13 13:45 ` [dpdk-dev] [PATCH 14/36] net/sfc: log DMA allocations addresses Andrew Rybchenko
                   ` (23 subsequent siblings)
  36 siblings, 0 replies; 43+ messages in thread
From: Andrew Rybchenko @ 2020-10-13 13:45 UTC (permalink / raw)
  To: dev

These callbacks are used when an event queue is polled via libefx.
libefx polling is used for the management event queue, where no Tx
complete events are expected, and for datapath event queues at flush
time.

Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
---
 drivers/net/sfc/sfc_ev.c | 29 +++++++++++++++++++++++++++++
 1 file changed, 29 insertions(+)

diff --git a/drivers/net/sfc/sfc_ev.c b/drivers/net/sfc/sfc_ev.c
index 322a391100..ac3cd75577 100644
--- a/drivers/net/sfc/sfc_ev.c
+++ b/drivers/net/sfc/sfc_ev.c
@@ -269,6 +269,30 @@ sfc_ev_dp_tx(void *arg, __rte_unused uint32_t label, uint32_t id)
 	return evq->sa->priv.dp_tx->qtx_ev(dp_txq, id);
 }
 
+static boolean_t
+sfc_ev_nop_tx_ndescs(void *arg, uint32_t label, unsigned int ndescs)
+{
+	struct sfc_evq *evq = arg;
+
+	sfc_err(evq->sa, "EVQ %u unexpected Tx event label=%u ndescs=%#x",
+		evq->evq_index, label, ndescs);
+	return B_TRUE;
+}
+
+static boolean_t
+sfc_ev_dp_tx_ndescs(void *arg, __rte_unused uint32_t label,
+		      unsigned int ndescs)
+{
+	struct sfc_evq *evq = arg;
+	struct sfc_dp_txq *dp_txq;
+
+	dp_txq = evq->dp_txq;
+	SFC_ASSERT(dp_txq != NULL);
+
+	SFC_ASSERT(evq->sa->priv.dp_tx->qtx_ev != NULL);
+	return evq->sa->priv.dp_tx->qtx_ev(dp_txq, ndescs);
+}
+
 static boolean_t
 sfc_ev_exception(void *arg, uint32_t code, __rte_unused uint32_t data)
 {
@@ -458,6 +482,7 @@ static const efx_ev_callbacks_t sfc_ev_callbacks = {
 	.eec_rx_packets		= sfc_ev_nop_rx_packets,
 	.eec_rx_ps		= sfc_ev_nop_rx_ps,
 	.eec_tx			= sfc_ev_nop_tx,
+	.eec_tx_ndescs		= sfc_ev_nop_tx_ndescs,
 	.eec_exception		= sfc_ev_exception,
 	.eec_rxq_flush_done	= sfc_ev_nop_rxq_flush_done,
 	.eec_rxq_flush_failed	= sfc_ev_nop_rxq_flush_failed,
@@ -475,6 +500,7 @@ static const efx_ev_callbacks_t sfc_ev_callbacks_efx_rx = {
 	.eec_rx_packets		= sfc_ev_nop_rx_packets,
 	.eec_rx_ps		= sfc_ev_nop_rx_ps,
 	.eec_tx			= sfc_ev_nop_tx,
+	.eec_tx_ndescs		= sfc_ev_nop_tx_ndescs,
 	.eec_exception		= sfc_ev_exception,
 	.eec_rxq_flush_done	= sfc_ev_rxq_flush_done,
 	.eec_rxq_flush_failed	= sfc_ev_rxq_flush_failed,
@@ -492,6 +518,7 @@ static const efx_ev_callbacks_t sfc_ev_callbacks_dp_rx = {
 	.eec_rx_packets		= sfc_ev_dp_rx_packets,
 	.eec_rx_ps		= sfc_ev_dp_rx_ps,
 	.eec_tx			= sfc_ev_nop_tx,
+	.eec_tx_ndescs		= sfc_ev_nop_tx_ndescs,
 	.eec_exception		= sfc_ev_exception,
 	.eec_rxq_flush_done	= sfc_ev_rxq_flush_done,
 	.eec_rxq_flush_failed	= sfc_ev_rxq_flush_failed,
@@ -509,6 +536,7 @@ static const efx_ev_callbacks_t sfc_ev_callbacks_efx_tx = {
 	.eec_rx_packets		= sfc_ev_nop_rx_packets,
 	.eec_rx_ps		= sfc_ev_nop_rx_ps,
 	.eec_tx			= sfc_ev_tx,
+	.eec_tx_ndescs		= sfc_ev_nop_tx_ndescs,
 	.eec_exception		= sfc_ev_exception,
 	.eec_rxq_flush_done	= sfc_ev_nop_rxq_flush_done,
 	.eec_rxq_flush_failed	= sfc_ev_nop_rxq_flush_failed,
@@ -526,6 +554,7 @@ static const efx_ev_callbacks_t sfc_ev_callbacks_dp_tx = {
 	.eec_rx_packets		= sfc_ev_nop_rx_packets,
 	.eec_rx_ps		= sfc_ev_nop_rx_ps,
 	.eec_tx			= sfc_ev_dp_tx,
+	.eec_tx_ndescs		= sfc_ev_dp_tx_ndescs,
 	.eec_exception		= sfc_ev_exception,
 	.eec_rxq_flush_done	= sfc_ev_nop_rxq_flush_done,
 	.eec_rxq_flush_failed	= sfc_ev_nop_rxq_flush_failed,
-- 
2.17.1



* [dpdk-dev] [PATCH 14/36] net/sfc: log DMA allocations addresses
  2020-10-13 13:45 [dpdk-dev] [PATCH 00/36] net/sfc: add EF100 support Andrew Rybchenko
                   ` (12 preceding siblings ...)
  2020-10-13 13:45 ` [dpdk-dev] [PATCH 13/36] net/sfc: implement libefx Tx descs complete " Andrew Rybchenko
@ 2020-10-13 13:45 ` Andrew Rybchenko
  2020-10-13 13:45 ` [dpdk-dev] [PATCH 15/36] net/sfc: support datapath logs which may be compiled out Andrew Rybchenko
                   ` (22 subsequent siblings)
  36 siblings, 0 replies; 43+ messages in thread
From: Andrew Rybchenko @ 2020-10-13 13:45 UTC (permalink / raw)
  To: dev

The information about DMA allocations is very useful for debugging.

Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
---
 drivers/net/sfc/sfc.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/drivers/net/sfc/sfc.c b/drivers/net/sfc/sfc.c
index 559f9039c2..cfba485ad2 100644
--- a/drivers/net/sfc/sfc.c
+++ b/drivers/net/sfc/sfc.c
@@ -52,6 +52,11 @@ sfc_dma_alloc(const struct sfc_adapter *sa, const char *name, uint16_t id,
 	esmp->esm_mz = mz;
 	esmp->esm_base = mz->addr;
 
+	sfc_info(sa,
+		 "DMA name=%s id=%u len=%zu socket_id=%d => virt=%p iova=%" PRIx64,
+		 name, id, len, socket_id, esmp->esm_base,
+		 (uint64_t)esmp->esm_addr);
+
 	return 0;
 }
 
-- 
2.17.1



* [dpdk-dev] [PATCH 15/36] net/sfc: support datapath logs which may be compiled out
  2020-10-13 13:45 [dpdk-dev] [PATCH 00/36] net/sfc: add EF100 support Andrew Rybchenko
                   ` (13 preceding siblings ...)
  2020-10-13 13:45 ` [dpdk-dev] [PATCH 14/36] net/sfc: log DMA allocations addresses Andrew Rybchenko
@ 2020-10-13 13:45 ` Andrew Rybchenko
  2020-10-13 13:45 ` [dpdk-dev] [PATCH 16/36] net/sfc: implement EF100 native Rx datapath Andrew Rybchenko
                   ` (21 subsequent siblings)
  36 siblings, 0 replies; 43+ messages in thread
From: Andrew Rybchenko @ 2020-10-13 13:45 UTC (permalink / raw)
  To: dev

Add a datapath log level which limits the logs included in the build,
since on the datapath it is too expensive to dive into the rte_log()
function even when it does nothing.

Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
---
 drivers/net/sfc/sfc_dp.h | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/drivers/net/sfc/sfc_dp.h b/drivers/net/sfc/sfc_dp.h
index 47487d1f1f..df76f3f2bb 100644
--- a/drivers/net/sfc/sfc_dp.h
+++ b/drivers/net/sfc/sfc_dp.h
@@ -51,6 +51,11 @@ void sfc_dp_queue_init(struct sfc_dp_queue *dpq,
 		       uint16_t port_id, uint16_t queue_id,
 		       const struct rte_pci_addr *pci_addr);
 
+/* Maximum datapath log level to be included in build. */
+#ifndef SFC_DP_LOG_LEVEL
+#define SFC_DP_LOG_LEVEL	RTE_LOG_NOTICE
+#endif
+
 /*
  * Helper macro to define datapath logging macros and have uniform
  * logging.
@@ -60,6 +65,8 @@ void sfc_dp_queue_init(struct sfc_dp_queue *dpq,
 		const struct sfc_dp_queue *_dpq = (dpq);		\
 		const struct rte_pci_addr *_addr = &(_dpq)->pci_addr;	\
 									\
+		if (RTE_LOG_ ## level > SFC_DP_LOG_LEVEL)		\
+			break;						\
 		SFC_GENERIC_LOG(level,					\
 			RTE_FMT("%s " PCI_PRI_FMT			\
 				" #%" PRIu16 ".%" PRIu16 ": "		\
-- 
2.17.1



* [dpdk-dev] [PATCH 16/36] net/sfc: implement EF100 native Rx datapath
  2020-10-13 13:45 [dpdk-dev] [PATCH 00/36] net/sfc: add EF100 support Andrew Rybchenko
                   ` (14 preceding siblings ...)
  2020-10-13 13:45 ` [dpdk-dev] [PATCH 15/36] net/sfc: support datapath logs which may be compiled out Andrew Rybchenko
@ 2020-10-13 13:45 ` Andrew Rybchenko
  2020-10-13 13:45 ` [dpdk-dev] [PATCH 17/36] net/sfc: implement EF100 native Tx datapath Andrew Rybchenko
                   ` (20 subsequent siblings)
  36 siblings, 0 replies; 43+ messages in thread
From: Andrew Rybchenko @ 2020-10-13 13:45 UTC (permalink / raw)
  To: dev

Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
---
 doc/guides/nics/sfc_efx.rst    |   3 +
 drivers/net/sfc/meson.build    |   3 +-
 drivers/net/sfc/sfc_dp_rx.h    |   1 +
 drivers/net/sfc/sfc_ef100.h    |  35 ++
 drivers/net/sfc/sfc_ef100_rx.c | 612 +++++++++++++++++++++++++++++++++
 drivers/net/sfc/sfc_ethdev.c   |   1 +
 drivers/net/sfc/sfc_kvargs.h   |   4 +-
 7 files changed, 657 insertions(+), 2 deletions(-)
 create mode 100644 drivers/net/sfc/sfc_ef100.h
 create mode 100644 drivers/net/sfc/sfc_ef100_rx.c

diff --git a/doc/guides/nics/sfc_efx.rst b/doc/guides/nics/sfc_efx.rst
index 84b9b56ddb..c05c565275 100644
--- a/doc/guides/nics/sfc_efx.rst
+++ b/doc/guides/nics/sfc_efx.rst
@@ -301,12 +301,15 @@ boolean parameters value.
   **auto** allows the driver itself to make a choice based on firmware
   features available and required by the datapath implementation.
   **efx** chooses libefx-based datapath which supports Rx scatter.
+  Supported for SFN7xxx, SFN8xxx and X2xxx family adapters only.
   **ef10** chooses EF10 (SFN7xxx, SFN8xxx, X2xxx) native datapath which is
   more efficient than libefx-based and provides richer packet type
   classification.
   **ef10_essb** chooses SFNX2xxx equal stride super-buffer datapath
   which may be used on DPDK firmware variant only
   (see notes about its limitations above).
+  **ef100** chooses EF100 native datapath which is the only supported
+  Rx datapath for EF100 architecture based NICs.
 
 - ``tx_datapath`` [auto|efx|ef10|ef10_simple] (default **auto**)
 
diff --git a/drivers/net/sfc/meson.build b/drivers/net/sfc/meson.build
index 304e8686e5..604c67cddd 100644
--- a/drivers/net/sfc/meson.build
+++ b/drivers/net/sfc/meson.build
@@ -51,5 +51,6 @@ sources = files(
 	'sfc_dp.c',
 	'sfc_ef10_rx.c',
 	'sfc_ef10_essb_rx.c',
-	'sfc_ef10_tx.c'
+	'sfc_ef10_tx.c',
+	'sfc_ef100_rx.c',
 )
diff --git a/drivers/net/sfc/sfc_dp_rx.h b/drivers/net/sfc/sfc_dp_rx.h
index 2101fd7547..3aba39658e 100644
--- a/drivers/net/sfc/sfc_dp_rx.h
+++ b/drivers/net/sfc/sfc_dp_rx.h
@@ -266,6 +266,7 @@ const struct sfc_dp_rx *sfc_dp_rx_by_dp_rxq(const struct sfc_dp_rxq *dp_rxq);
 extern struct sfc_dp_rx sfc_efx_rx;
 extern struct sfc_dp_rx sfc_ef10_rx;
 extern struct sfc_dp_rx sfc_ef10_essb_rx;
+extern struct sfc_dp_rx sfc_ef100_rx;
 
 #ifdef __cplusplus
 }
diff --git a/drivers/net/sfc/sfc_ef100.h b/drivers/net/sfc/sfc_ef100.h
new file mode 100644
index 0000000000..6da6cfabdb
--- /dev/null
+++ b/drivers/net/sfc/sfc_ef100.h
@@ -0,0 +1,35 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ *
+ * Copyright(c) 2019-2020 Xilinx, Inc.
+ * Copyright(c) 2018-2019 Solarflare Communications Inc.
+ *
+ * This software was jointly developed between OKTET Labs (under contract
+ * for Solarflare) and Solarflare Communications, Inc.
+ */
+
+#ifndef _SFC_EF100_H
+#define _SFC_EF100_H
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+static inline bool
+sfc_ef100_ev_present(const efx_qword_t *ev, bool phase_bit)
+{
+	return !((ev->eq_u64[0] &
+		  EFX_INPLACE_MASK64(0, 63, ESF_GZ_EV_EVQ_PHASE)) ^
+		 ((uint64_t)phase_bit << ESF_GZ_EV_EVQ_PHASE_LBN));
+}
+
+static inline bool
+sfc_ef100_ev_type_is(const efx_qword_t *ev, unsigned int type)
+{
+	return (ev->eq_u64[0] & EFX_INPLACE_MASK64(0, 63, ESF_GZ_E_TYPE)) ==
+		EFX_INSERT_FIELD64(0, 63, ESF_GZ_E_TYPE, type);
+}
+
+#ifdef __cplusplus
+}
+#endif
+#endif /* _SFC_EF100_H */
diff --git a/drivers/net/sfc/sfc_ef100_rx.c b/drivers/net/sfc/sfc_ef100_rx.c
new file mode 100644
index 0000000000..c0e70c9943
--- /dev/null
+++ b/drivers/net/sfc/sfc_ef100_rx.c
@@ -0,0 +1,612 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ *
+ * Copyright(c) 2019-2020 Xilinx, Inc.
+ * Copyright(c) 2018-2019 Solarflare Communications Inc.
+ *
+ * This software was jointly developed between OKTET Labs (under contract
+ * for Solarflare) and Solarflare Communications, Inc.
+ */
+
+/* EF100 native datapath implementation */
+
+#include <stdbool.h>
+
+#include <rte_byteorder.h>
+#include <rte_mbuf_ptype.h>
+#include <rte_mbuf.h>
+#include <rte_io.h>
+
+#include "efx_types.h"
+#include "efx_regs_ef100.h"
+
+#include "sfc_debug.h"
+#include "sfc_tweak.h"
+#include "sfc_dp_rx.h"
+#include "sfc_kvargs.h"
+#include "sfc_ef100.h"
+
+
+#define sfc_ef100_rx_err(_rxq, ...) \
+	SFC_DP_LOG(SFC_KVARG_DATAPATH_EF100, ERR, &(_rxq)->dp.dpq, __VA_ARGS__)
+
+#define sfc_ef100_rx_debug(_rxq, ...) \
+	SFC_DP_LOG(SFC_KVARG_DATAPATH_EF100, DEBUG, &(_rxq)->dp.dpq, \
+		   __VA_ARGS__)
+
+/**
+ * Maximum number of descriptors/buffers in the Rx ring.
+ * It should guarantee that the corresponding event queue never overfills.
+ * The EF10 native datapath uses an event queue of the same size as the
+ * Rx queue. The maximum number of events on the datapath can be estimated
+ * as the number of Rx queue entries (one event per Rx buffer in the worst
+ * case) plus Rx error and flush events.
+ */
+#define SFC_EF100_RXQ_LIMIT(_ndesc) \
+	((_ndesc) - 1 /* head must not step on tail */ - \
+	 1 /* Rx error */ - 1 /* flush */)
+
+struct sfc_ef100_rx_sw_desc {
+	struct rte_mbuf			*mbuf;
+};
+
+struct sfc_ef100_rxq {
+	/* Used on data path */
+	unsigned int			flags;
+#define SFC_EF100_RXQ_STARTED		0x1
+#define SFC_EF100_RXQ_NOT_RUNNING	0x2
+#define SFC_EF100_RXQ_EXCEPTION		0x4
+	unsigned int			ptr_mask;
+	unsigned int			evq_phase_bit_shift;
+	unsigned int			ready_pkts;
+	unsigned int			completed;
+	unsigned int			evq_read_ptr;
+	volatile efx_qword_t		*evq_hw_ring;
+	struct sfc_ef100_rx_sw_desc	*sw_ring;
+	uint64_t			rearm_data;
+	uint16_t			buf_size;
+	uint16_t			prefix_size;
+
+	/* Used on refill */
+	unsigned int			added;
+	unsigned int			max_fill_level;
+	unsigned int			refill_threshold;
+	struct rte_mempool		*refill_mb_pool;
+	efx_qword_t			*rxq_hw_ring;
+	volatile void			*doorbell;
+
+	/* Datapath receive queue anchor */
+	struct sfc_dp_rxq		dp;
+};
+
+static inline struct sfc_ef100_rxq *
+sfc_ef100_rxq_by_dp_rxq(struct sfc_dp_rxq *dp_rxq)
+{
+	return container_of(dp_rxq, struct sfc_ef100_rxq, dp);
+}
+
+static inline void
+sfc_ef100_rx_qpush(struct sfc_ef100_rxq *rxq, unsigned int added)
+{
+	efx_dword_t dword;
+
+	EFX_POPULATE_DWORD_1(dword, ERF_GZ_RX_RING_PIDX, added & rxq->ptr_mask);
+
+	/* DMA sync to device is not required */
+
+	/*
+	 * rte_write32() has rte_io_wmb() which guarantees that the STORE
+	 * operations (i.e. Rx and event descriptor updates) that precede
+	 * the rte_io_wmb() call are visible to NIC before the STORE
+	 * operations that follow it (i.e. doorbell write).
+	 */
+	rte_write32(dword.ed_u32[0], rxq->doorbell);
+
+	sfc_ef100_rx_debug(rxq, "RxQ pushed doorbell at pidx %u (added=%u)",
+			   EFX_DWORD_FIELD(dword, ERF_GZ_RX_RING_PIDX),
+			   added);
+}
+
+static void
+sfc_ef100_rx_qrefill(struct sfc_ef100_rxq *rxq)
+{
+	const unsigned int ptr_mask = rxq->ptr_mask;
+	unsigned int free_space;
+	unsigned int bulks;
+	void *objs[SFC_RX_REFILL_BULK];
+	unsigned int added = rxq->added;
+
+	free_space = rxq->max_fill_level - (added - rxq->completed);
+
+	if (free_space < rxq->refill_threshold)
+		return;
+
+	bulks = free_space / RTE_DIM(objs);
+	/* refill_threshold guarantees that bulks is positive */
+	SFC_ASSERT(bulks > 0);
+
+	do {
+		unsigned int id;
+		unsigned int i;
+
+		if (unlikely(rte_mempool_get_bulk(rxq->refill_mb_pool, objs,
+						  RTE_DIM(objs)) < 0)) {
+			struct rte_eth_dev_data *dev_data =
+				rte_eth_devices[rxq->dp.dpq.port_id].data;
+
+			/*
+			 * It is hardly a safe way to increment counter
+			 * from different contexts, but all PMDs do it.
+			 */
+			dev_data->rx_mbuf_alloc_failed += RTE_DIM(objs);
+			/* Return if we have posted nothing yet */
+			if (added == rxq->added)
+				return;
+			/* Push posted */
+			break;
+		}
+
+		for (i = 0, id = added & ptr_mask;
+		     i < RTE_DIM(objs);
+		     ++i, ++id) {
+			struct rte_mbuf *m = objs[i];
+			struct sfc_ef100_rx_sw_desc *rxd;
+			rte_iova_t phys_addr;
+
+			MBUF_RAW_ALLOC_CHECK(m);
+
+			SFC_ASSERT((id & ~ptr_mask) == 0);
+			rxd = &rxq->sw_ring[id];
+			rxd->mbuf = m;
+
+			/*
+			 * Avoid writing to mbuf. It is cheaper to do it
+			 * when we receive packet and fill in nearby
+			 * structure members.
+			 */
+
+			phys_addr = rte_mbuf_data_iova_default(m);
+			EFX_POPULATE_QWORD_1(rxq->rxq_hw_ring[id],
+			    ESF_GZ_RX_BUF_ADDR, phys_addr);
+		}
+
+		added += RTE_DIM(objs);
+	} while (--bulks > 0);
+
+	SFC_ASSERT(rxq->added != added);
+	rxq->added = added;
+	sfc_ef100_rx_qpush(rxq, added);
+}
+
+static bool
+sfc_ef100_rx_prefix_to_offloads(const efx_oword_t *rx_prefix,
+				struct rte_mbuf *m)
+{
+	const efx_word_t *class;
+	uint64_t ol_flags = 0;
+
+	RTE_BUILD_BUG_ON(EFX_LOW_BIT(ESF_GZ_RX_PREFIX_CLASS) % CHAR_BIT != 0);
+	RTE_BUILD_BUG_ON(EFX_WIDTH(ESF_GZ_RX_PREFIX_CLASS) % CHAR_BIT != 0);
+	RTE_BUILD_BUG_ON(EFX_WIDTH(ESF_GZ_RX_PREFIX_CLASS) / CHAR_BIT !=
+			 sizeof(*class));
+	class = (const efx_word_t *)((const uint8_t *)rx_prefix +
+		EFX_LOW_BIT(ESF_GZ_RX_PREFIX_CLASS) / CHAR_BIT);
+	if (unlikely(EFX_WORD_FIELD(*class,
+				    ESF_GZ_RX_PREFIX_HCLASS_L2_STATUS) !=
+		     ESE_GZ_RH_HCLASS_L2_STATUS_OK))
+		return false;
+
+	m->ol_flags = ol_flags;
+	return true;
+}
+
+static const uint8_t *
+sfc_ef100_rx_pkt_prefix(const struct rte_mbuf *m)
+{
+	return (const uint8_t *)m->buf_addr + RTE_PKTMBUF_HEADROOM;
+}
+
+static struct rte_mbuf *
+sfc_ef100_rx_next_mbuf(struct sfc_ef100_rxq *rxq)
+{
+	struct rte_mbuf *m;
+	unsigned int id;
+
+	/* mbuf associated with current Rx descriptor */
+	m = rxq->sw_ring[rxq->completed++ & rxq->ptr_mask].mbuf;
+
+	/* completed is already moved to the next one */
+	if (unlikely(rxq->completed == rxq->added))
+		goto done;
+
+	/*
+	 * Prefetch the Rx prefix of the next packet. If the current
+	 * packet is scattered, the next mbuf is its fragment and this
+	 * simply prefetches some data - no harm, since the packet rate
+	 * should not be high when scatter is used.
+	 */
+	id = rxq->completed & rxq->ptr_mask;
+	rte_prefetch0(sfc_ef100_rx_pkt_prefix(rxq->sw_ring[id].mbuf));
+
+	if (unlikely(rxq->completed + 1 == rxq->added))
+		goto done;
+
+	/*
+	 * Prefetch mbuf control structure of the next after next Rx
+	 * descriptor.
+	 */
+	id = (id == rxq->ptr_mask) ? 0 : (id + 1);
+	rte_mbuf_prefetch_part1(rxq->sw_ring[id].mbuf);
+
+	/*
+	 * If the next time we'll need SW Rx descriptor from the next
+	 * cache line, try to make sure that we have it in cache.
+	 */
+	if ((id & 0x7) == 0x7)
+		rte_prefetch0(&rxq->sw_ring[(id + 1) & rxq->ptr_mask]);
+
+done:
+	return m;
+}
+
+static struct rte_mbuf **
+sfc_ef100_rx_process_ready_pkts(struct sfc_ef100_rxq *rxq,
+				struct rte_mbuf **rx_pkts,
+				struct rte_mbuf ** const rx_pkts_end)
+{
+	while (rxq->ready_pkts > 0 && rx_pkts != rx_pkts_end) {
+		struct rte_mbuf *pkt;
+		struct rte_mbuf *lastseg;
+		const efx_oword_t *rx_prefix;
+		uint16_t pkt_len;
+		uint16_t seg_len;
+		bool deliver;
+
+		rxq->ready_pkts--;
+
+		pkt = sfc_ef100_rx_next_mbuf(rxq);
+		MBUF_RAW_ALLOC_CHECK(pkt);
+
+		RTE_BUILD_BUG_ON(sizeof(pkt->rearm_data[0]) !=
+				 sizeof(rxq->rearm_data));
+		pkt->rearm_data[0] = rxq->rearm_data;
+
+		/* data_off already moved past Rx prefix */
+		rx_prefix = (const efx_oword_t *)sfc_ef100_rx_pkt_prefix(pkt);
+
+		pkt_len = EFX_OWORD_FIELD(rx_prefix[0],
+					  ESF_GZ_RX_PREFIX_LENGTH);
+		SFC_ASSERT(pkt_len > 0);
+		rte_pktmbuf_pkt_len(pkt) = pkt_len;
+
+		seg_len = RTE_MIN(pkt_len, rxq->buf_size - rxq->prefix_size);
+		rte_pktmbuf_data_len(pkt) = seg_len;
+
+		deliver = sfc_ef100_rx_prefix_to_offloads(rx_prefix, pkt);
+
+		lastseg = pkt;
+		while ((pkt_len -= seg_len) > 0) {
+			struct rte_mbuf *seg;
+
+			seg = sfc_ef100_rx_next_mbuf(rxq);
+			MBUF_RAW_ALLOC_CHECK(seg);
+
+			seg->data_off = RTE_PKTMBUF_HEADROOM;
+
+			seg_len = RTE_MIN(pkt_len, rxq->buf_size);
+			rte_pktmbuf_data_len(seg) = seg_len;
+			rte_pktmbuf_pkt_len(seg) = seg_len;
+
+			pkt->nb_segs++;
+			lastseg->next = seg;
+			lastseg = seg;
+		}
+
+		if (likely(deliver))
+			*rx_pkts++ = pkt;
+		else
+			rte_pktmbuf_free(pkt);
+	}
+
+	return rx_pkts;
+}
+
+static bool
+sfc_ef100_rx_get_event(struct sfc_ef100_rxq *rxq, efx_qword_t *ev)
+{
+	*ev = rxq->evq_hw_ring[rxq->evq_read_ptr & rxq->ptr_mask];
+
+	if (!sfc_ef100_ev_present(ev,
+			(rxq->evq_read_ptr >> rxq->evq_phase_bit_shift) & 1))
+		return false;
+
+	if (unlikely(!sfc_ef100_ev_type_is(ev, ESE_GZ_EF100_EV_RX_PKTS))) {
+		/*
+		 * Do not move read_ptr to keep the event for exception
+		 * handling by the control path.
+		 */
+		rxq->flags |= SFC_EF100_RXQ_EXCEPTION;
+		sfc_ef100_rx_err(rxq,
+			"RxQ exception at EvQ ptr %u(%#x), event %08x:%08x",
+			rxq->evq_read_ptr, rxq->evq_read_ptr & rxq->ptr_mask,
+			EFX_QWORD_FIELD(*ev, EFX_DWORD_1),
+			EFX_QWORD_FIELD(*ev, EFX_DWORD_0));
+		return false;
+	}
+
+	sfc_ef100_rx_debug(rxq, "RxQ got event %08x:%08x at %u (%#x)",
+			   EFX_QWORD_FIELD(*ev, EFX_DWORD_1),
+			   EFX_QWORD_FIELD(*ev, EFX_DWORD_0),
+			   rxq->evq_read_ptr,
+			   rxq->evq_read_ptr & rxq->ptr_mask);
+
+	rxq->evq_read_ptr++;
+	return true;
+}
+
+static uint16_t
+sfc_ef100_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
+{
+	struct sfc_ef100_rxq *rxq = sfc_ef100_rxq_by_dp_rxq(rx_queue);
+	struct rte_mbuf ** const rx_pkts_end = &rx_pkts[nb_pkts];
+	efx_qword_t rx_ev;
+
+	rx_pkts = sfc_ef100_rx_process_ready_pkts(rxq, rx_pkts, rx_pkts_end);
+
+	if (unlikely(rxq->flags &
+		     (SFC_EF100_RXQ_NOT_RUNNING | SFC_EF100_RXQ_EXCEPTION)))
+		goto done;
+
+	while (rx_pkts != rx_pkts_end && sfc_ef100_rx_get_event(rxq, &rx_ev)) {
+		rxq->ready_pkts =
+			EFX_QWORD_FIELD(rx_ev, ESF_GZ_EV_RXPKTS_NUM_PKT);
+		rx_pkts = sfc_ef100_rx_process_ready_pkts(rxq, rx_pkts,
+							  rx_pkts_end);
+	}
+
+	/* It is not a problem if we refill in the case of exception */
+	sfc_ef100_rx_qrefill(rxq);
+
+done:
+	return nb_pkts - (rx_pkts_end - rx_pkts);
+}
+
+static const uint32_t *
+sfc_ef100_supported_ptypes_get(__rte_unused uint32_t tunnel_encaps)
+{
+	static const uint32_t ef100_native_ptypes[] = {
+		RTE_PTYPE_UNKNOWN
+	};
+
+	return ef100_native_ptypes;
+}
+
+static sfc_dp_rx_qdesc_npending_t sfc_ef100_rx_qdesc_npending;
+static unsigned int
+sfc_ef100_rx_qdesc_npending(__rte_unused struct sfc_dp_rxq *dp_rxq)
+{
+	return 0;
+}
+
+static sfc_dp_rx_qdesc_status_t sfc_ef100_rx_qdesc_status;
+static int
+sfc_ef100_rx_qdesc_status(__rte_unused struct sfc_dp_rxq *dp_rxq,
+			  __rte_unused uint16_t offset)
+{
+	return -ENOTSUP;
+}
+
+
+static sfc_dp_rx_get_dev_info_t sfc_ef100_rx_get_dev_info;
+static void
+sfc_ef100_rx_get_dev_info(struct rte_eth_dev_info *dev_info)
+{
+	/*
+	 * Number of descriptors just defines maximum number of pushed
+	 * descriptors (fill level).
+	 */
+	dev_info->rx_desc_lim.nb_min = SFC_RX_REFILL_BULK;
+	dev_info->rx_desc_lim.nb_align = SFC_RX_REFILL_BULK;
+}
+
+
+static sfc_dp_rx_qsize_up_rings_t sfc_ef100_rx_qsize_up_rings;
+static int
+sfc_ef100_rx_qsize_up_rings(uint16_t nb_rx_desc,
+			   struct sfc_dp_rx_hw_limits *limits,
+			   __rte_unused struct rte_mempool *mb_pool,
+			   unsigned int *rxq_entries,
+			   unsigned int *evq_entries,
+			   unsigned int *rxq_max_fill_level)
+{
+	/*
+	 * rte_ethdev API guarantees that the number meets min, max and
+	 * alignment requirements.
+	 */
+	if (nb_rx_desc <= limits->rxq_min_entries)
+		*rxq_entries = limits->rxq_min_entries;
+	else
+		*rxq_entries = rte_align32pow2(nb_rx_desc);
+
+	*evq_entries = *rxq_entries;
+
+	*rxq_max_fill_level = RTE_MIN(nb_rx_desc,
+				      SFC_EF100_RXQ_LIMIT(*evq_entries));
+	return 0;
+}
+
+
+static uint64_t
+sfc_ef100_mk_mbuf_rearm_data(uint16_t port_id, uint16_t prefix_size)
+{
+	struct rte_mbuf m;
+
+	memset(&m, 0, sizeof(m));
+
+	rte_mbuf_refcnt_set(&m, 1);
+	m.data_off = RTE_PKTMBUF_HEADROOM + prefix_size;
+	m.nb_segs = 1;
+	m.port = port_id;
+
+	/* rearm_data covers structure members filled in above */
+	rte_compiler_barrier();
+	RTE_BUILD_BUG_ON(sizeof(m.rearm_data[0]) != sizeof(uint64_t));
+	return m.rearm_data[0];
+}
+
+static sfc_dp_rx_qcreate_t sfc_ef100_rx_qcreate;
+static int
+sfc_ef100_rx_qcreate(uint16_t port_id, uint16_t queue_id,
+		    const struct rte_pci_addr *pci_addr, int socket_id,
+		    const struct sfc_dp_rx_qcreate_info *info,
+		    struct sfc_dp_rxq **dp_rxqp)
+{
+	struct sfc_ef100_rxq *rxq;
+	int rc;
+
+	rc = EINVAL;
+	if (info->rxq_entries != info->evq_entries)
+		goto fail_rxq_args;
+
+	rc = ENOMEM;
+	rxq = rte_zmalloc_socket("sfc-ef100-rxq", sizeof(*rxq),
+				 RTE_CACHE_LINE_SIZE, socket_id);
+	if (rxq == NULL)
+		goto fail_rxq_alloc;
+
+	sfc_dp_queue_init(&rxq->dp.dpq, port_id, queue_id, pci_addr);
+
+	rc = ENOMEM;
+	rxq->sw_ring = rte_calloc_socket("sfc-ef100-rxq-sw_ring",
+					 info->rxq_entries,
+					 sizeof(*rxq->sw_ring),
+					 RTE_CACHE_LINE_SIZE, socket_id);
+	if (rxq->sw_ring == NULL)
+		goto fail_desc_alloc;
+
+	rxq->flags |= SFC_EF100_RXQ_NOT_RUNNING;
+	rxq->ptr_mask = info->rxq_entries - 1;
+	rxq->evq_phase_bit_shift = rte_bsf32(info->evq_entries);
+	rxq->evq_hw_ring = info->evq_hw_ring;
+	rxq->max_fill_level = info->max_fill_level;
+	rxq->refill_threshold = info->refill_threshold;
+	rxq->rearm_data =
+		sfc_ef100_mk_mbuf_rearm_data(port_id, info->prefix_size);
+	rxq->prefix_size = info->prefix_size;
+	rxq->buf_size = info->buf_size;
+	rxq->refill_mb_pool = info->refill_mb_pool;
+	rxq->rxq_hw_ring = info->rxq_hw_ring;
+	rxq->doorbell = (volatile uint8_t *)info->mem_bar +
+			ER_GZ_RX_RING_DOORBELL_OFST +
+			(info->hw_index << info->vi_window_shift);
+
+	sfc_ef100_rx_debug(rxq, "RxQ doorbell is %p", rxq->doorbell);
+
+	*dp_rxqp = &rxq->dp;
+	return 0;
+
+fail_desc_alloc:
+	rte_free(rxq);
+
+fail_rxq_alloc:
+fail_rxq_args:
+	return rc;
+}
+
+static sfc_dp_rx_qdestroy_t sfc_ef100_rx_qdestroy;
+static void
+sfc_ef100_rx_qdestroy(struct sfc_dp_rxq *dp_rxq)
+{
+	struct sfc_ef100_rxq *rxq = sfc_ef100_rxq_by_dp_rxq(dp_rxq);
+
+	rte_free(rxq->sw_ring);
+	rte_free(rxq);
+}
+
+static sfc_dp_rx_qstart_t sfc_ef100_rx_qstart;
+static int
+sfc_ef100_rx_qstart(struct sfc_dp_rxq *dp_rxq, unsigned int evq_read_ptr)
+{
+	struct sfc_ef100_rxq *rxq = sfc_ef100_rxq_by_dp_rxq(dp_rxq);
+
+	SFC_ASSERT(rxq->completed == 0);
+	SFC_ASSERT(rxq->added == 0);
+
+	sfc_ef100_rx_qrefill(rxq);
+
+	rxq->evq_read_ptr = evq_read_ptr;
+
+	rxq->flags |= SFC_EF100_RXQ_STARTED;
+	rxq->flags &= ~(SFC_EF100_RXQ_NOT_RUNNING | SFC_EF100_RXQ_EXCEPTION);
+
+	return 0;
+}
+
+static sfc_dp_rx_qstop_t sfc_ef100_rx_qstop;
+static void
+sfc_ef100_rx_qstop(struct sfc_dp_rxq *dp_rxq, unsigned int *evq_read_ptr)
+{
+	struct sfc_ef100_rxq *rxq = sfc_ef100_rxq_by_dp_rxq(dp_rxq);
+
+	rxq->flags |= SFC_EF100_RXQ_NOT_RUNNING;
+
+	*evq_read_ptr = rxq->evq_read_ptr;
+}
+
+static sfc_dp_rx_qrx_ev_t sfc_ef100_rx_qrx_ev;
+static bool
+sfc_ef100_rx_qrx_ev(struct sfc_dp_rxq *dp_rxq, __rte_unused unsigned int id)
+{
+	__rte_unused struct sfc_ef100_rxq *rxq = sfc_ef100_rxq_by_dp_rxq(dp_rxq);
+
+	SFC_ASSERT(rxq->flags & SFC_EF100_RXQ_NOT_RUNNING);
+
+	/*
+	 * It is safe to ignore Rx event since we free all mbufs on
+	 * queue purge anyway.
+	 */
+
+	return false;
+}
+
+static sfc_dp_rx_qpurge_t sfc_ef100_rx_qpurge;
+static void
+sfc_ef100_rx_qpurge(struct sfc_dp_rxq *dp_rxq)
+{
+	struct sfc_ef100_rxq *rxq = sfc_ef100_rxq_by_dp_rxq(dp_rxq);
+	unsigned int i;
+	struct sfc_ef100_rx_sw_desc *rxd;
+
+	for (i = rxq->completed; i != rxq->added; ++i) {
+		rxd = &rxq->sw_ring[i & rxq->ptr_mask];
+		rte_mbuf_raw_free(rxd->mbuf);
+		rxd->mbuf = NULL;
+	}
+
+	rxq->completed = rxq->added = 0;
+	rxq->ready_pkts = 0;
+
+	rxq->flags &= ~SFC_EF100_RXQ_STARTED;
+}
+
+struct sfc_dp_rx sfc_ef100_rx = {
+	.dp = {
+		.name		= SFC_KVARG_DATAPATH_EF100,
+		.type		= SFC_DP_RX,
+		.hw_fw_caps	= SFC_DP_HW_FW_CAP_EF100,
+	},
+	.features		= SFC_DP_RX_FEAT_MULTI_PROCESS,
+	.dev_offload_capa	= 0,
+	.queue_offload_capa	= DEV_RX_OFFLOAD_SCATTER,
+	.get_dev_info		= sfc_ef100_rx_get_dev_info,
+	.qsize_up_rings		= sfc_ef100_rx_qsize_up_rings,
+	.qcreate		= sfc_ef100_rx_qcreate,
+	.qdestroy		= sfc_ef100_rx_qdestroy,
+	.qstart			= sfc_ef100_rx_qstart,
+	.qstop			= sfc_ef100_rx_qstop,
+	.qrx_ev			= sfc_ef100_rx_qrx_ev,
+	.qpurge			= sfc_ef100_rx_qpurge,
+	.supported_ptypes_get	= sfc_ef100_supported_ptypes_get,
+	.qdesc_npending		= sfc_ef100_rx_qdesc_npending,
+	.qdesc_status		= sfc_ef100_rx_qdesc_status,
+	.pkt_burst		= sfc_ef100_recv_pkts,
+};
diff --git a/drivers/net/sfc/sfc_ethdev.c b/drivers/net/sfc/sfc_ethdev.c
index ae668face0..e1db9236e9 100644
--- a/drivers/net/sfc/sfc_ethdev.c
+++ b/drivers/net/sfc/sfc_ethdev.c
@@ -2151,6 +2151,7 @@ sfc_register_dp(void)
 	/* Register once */
 	if (TAILQ_EMPTY(&sfc_dp_head)) {
 		/* Prefer EF10 datapath */
+		sfc_dp_register(&sfc_dp_head, &sfc_ef100_rx.dp);
 		sfc_dp_register(&sfc_dp_head, &sfc_ef10_essb_rx.dp);
 		sfc_dp_register(&sfc_dp_head, &sfc_ef10_rx.dp);
 		sfc_dp_register(&sfc_dp_head, &sfc_efx_rx.dp);
diff --git a/drivers/net/sfc/sfc_kvargs.h b/drivers/net/sfc/sfc_kvargs.h
index f9d10e71cf..cc3f4a353e 100644
--- a/drivers/net/sfc/sfc_kvargs.h
+++ b/drivers/net/sfc/sfc_kvargs.h
@@ -34,12 +34,14 @@ extern "C" {
 #define SFC_KVARG_DATAPATH_EF10		"ef10"
 #define SFC_KVARG_DATAPATH_EF10_SIMPLE	"ef10_simple"
 #define SFC_KVARG_DATAPATH_EF10_ESSB	"ef10_essb"
+#define SFC_KVARG_DATAPATH_EF100	"ef100"
 
 #define SFC_KVARG_RX_DATAPATH		"rx_datapath"
 #define SFC_KVARG_VALUES_RX_DATAPATH \
 	"[" SFC_KVARG_DATAPATH_EFX "|" \
 	    SFC_KVARG_DATAPATH_EF10 "|" \
-	    SFC_KVARG_DATAPATH_EF10_ESSB "]"
+	    SFC_KVARG_DATAPATH_EF10_ESSB "|" \
+	    SFC_KVARG_DATAPATH_EF100 "]"
 
 #define SFC_KVARG_TX_DATAPATH		"tx_datapath"
 #define SFC_KVARG_VALUES_TX_DATAPATH \
-- 
2.17.1


^ permalink raw reply	[flat|nested] 43+ messages in thread

* [dpdk-dev] [PATCH 17/36] net/sfc: implement EF100 native Tx datapath
  2020-10-13 13:45 [dpdk-dev] [PATCH 00/36] net/sfc: add EF100 support Andrew Rybchenko
                   ` (15 preceding siblings ...)
  2020-10-13 13:45 ` [dpdk-dev] [PATCH 16/36] net/sfc: implement EF100 native Rx datapath Andrew Rybchenko
@ 2020-10-13 13:45 ` Andrew Rybchenko
  2020-10-13 13:45 ` [dpdk-dev] [PATCH 18/36] net/sfc: support multi-segment transmit for EF100 datapath Andrew Rybchenko
                   ` (19 subsequent siblings)
  36 siblings, 0 replies; 43+ messages in thread
From: Andrew Rybchenko @ 2020-10-13 13:45 UTC (permalink / raw)
  To: dev

No offload support yet, including multi-segment transmit (Tx gather).

Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
---
 doc/guides/nics/sfc_efx.rst    |   5 +-
 drivers/net/sfc/meson.build    |   1 +
 drivers/net/sfc/sfc_dp_tx.h    |   1 +
 drivers/net/sfc/sfc_ef100_tx.c | 546 +++++++++++++++++++++++++++++++++
 drivers/net/sfc/sfc_ethdev.c   |   1 +
 drivers/net/sfc/sfc_kvargs.h   |   3 +-
 6 files changed, 555 insertions(+), 2 deletions(-)
 create mode 100644 drivers/net/sfc/sfc_ef100_tx.c

diff --git a/doc/guides/nics/sfc_efx.rst b/doc/guides/nics/sfc_efx.rst
index c05c565275..726d653fa8 100644
--- a/doc/guides/nics/sfc_efx.rst
+++ b/doc/guides/nics/sfc_efx.rst
@@ -311,7 +311,7 @@ boolean parameters value.
   **ef100** chooses EF100 native datapath which is the only supported
   Rx datapath for EF100 architecture based NICs.
 
-- ``tx_datapath`` [auto|efx|ef10|ef10_simple] (default **auto**)
+- ``tx_datapath`` [auto|efx|ef10|ef10_simple|ef100] (default **auto**)
 
   Choose transmit datapath implementation.
   **auto** allows the driver itself to make a choice based on firmware
@@ -320,6 +320,7 @@ boolean parameters value.
   (full-feature firmware variant only), TSO and multi-segment mbufs.
   Mbuf segments may come from different mempools, and mbuf reference
   counters are treated responsibly.
+  Supported for SFN7xxx, SFN8xxx and X2xxx family adapters only.
   **ef10** chooses EF10 (SFN7xxx, SFN8xxx, X2xxx) native datapath which is
   more efficient than libefx-based but has no VLAN insertion support yet.
   Mbuf segments may come from different mempools, and mbuf reference
@@ -327,6 +328,8 @@ boolean parameters value.
   **ef10_simple** chooses EF10 (SFN7xxx, SFN8xxx, X2xxx) native datapath which
  is even faster than **ef10** but does not support multi-segment
   mbufs, disallows multiple mempools and neglects mbuf reference counters.
+  **ef100** chooses EF100 native datapath which does not support multi-segment
+  mbufs and any offloads.
 
 - ``perf_profile`` [auto|throughput|low-latency] (default **throughput**)
 
diff --git a/drivers/net/sfc/meson.build b/drivers/net/sfc/meson.build
index 604c67cddd..589f7863ae 100644
--- a/drivers/net/sfc/meson.build
+++ b/drivers/net/sfc/meson.build
@@ -53,4 +53,5 @@ sources = files(
 	'sfc_ef10_essb_rx.c',
 	'sfc_ef10_tx.c',
 	'sfc_ef100_rx.c',
+	'sfc_ef100_tx.c',
 )
diff --git a/drivers/net/sfc/sfc_dp_tx.h b/drivers/net/sfc/sfc_dp_tx.h
index dcad4fe585..67aa398b7f 100644
--- a/drivers/net/sfc/sfc_dp_tx.h
+++ b/drivers/net/sfc/sfc_dp_tx.h
@@ -289,6 +289,7 @@ sfc_dp_tx_prepare_pkt(struct rte_mbuf *m,
 extern struct sfc_dp_tx sfc_efx_tx;
 extern struct sfc_dp_tx sfc_ef10_tx;
 extern struct sfc_dp_tx sfc_ef10_simple_tx;
+extern struct sfc_dp_tx sfc_ef100_tx;
 
 #ifdef __cplusplus
 }
diff --git a/drivers/net/sfc/sfc_ef100_tx.c b/drivers/net/sfc/sfc_ef100_tx.c
new file mode 100644
index 0000000000..20b7c786cc
--- /dev/null
+++ b/drivers/net/sfc/sfc_ef100_tx.c
@@ -0,0 +1,546 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ *
+ * Copyright(c) 2019-2020 Xilinx, Inc.
+ * Copyright(c) 2018-2019 Solarflare Communications Inc.
+ *
+ * This software was jointly developed between OKTET Labs (under contract
+ * for Solarflare) and Solarflare Communications, Inc.
+ */
+
+#include <stdbool.h>
+
+#include <rte_mbuf.h>
+#include <rte_io.h>
+
+#include "efx.h"
+#include "efx_types.h"
+#include "efx_regs.h"
+#include "efx_regs_ef100.h"
+
+#include "sfc_debug.h"
+#include "sfc_dp_tx.h"
+#include "sfc_tweak.h"
+#include "sfc_kvargs.h"
+#include "sfc_ef100.h"
+
+
+#define sfc_ef100_tx_err(_txq, ...) \
+	SFC_DP_LOG(SFC_KVARG_DATAPATH_EF100, ERR, &(_txq)->dp.dpq, __VA_ARGS__)
+
+#define sfc_ef100_tx_debug(_txq, ...) \
+	SFC_DP_LOG(SFC_KVARG_DATAPATH_EF100, DEBUG, &(_txq)->dp.dpq, \
+		   __VA_ARGS__)
+
+
+/** Maximum length of the send descriptor data */
+#define SFC_EF100_TX_SEND_DESC_LEN_MAX \
+	((1u << ESF_GZ_TX_SEND_LEN_WIDTH) - 1)
+
+/**
+ * Maximum number of descriptors/buffers in the Tx ring.
+ * It should guarantee that the corresponding event queue never overfills.
+ * The EF100 native datapath uses an event queue of the same size as
+ * the Tx queue. The maximum number of events on the datapath can be
+ * estimated as the number of Tx queue entries (one event per Tx buffer
+ * in the worst case) plus Tx error and flush events.
+ */
+#define SFC_EF100_TXQ_LIMIT(_ndesc) \
+	((_ndesc) - 1 /* head must not step on tail */ - \
+	 1 /* Tx error */ - 1 /* flush */)
+
+struct sfc_ef100_tx_sw_desc {
+	struct rte_mbuf			*mbuf;
+};
+
+struct sfc_ef100_txq {
+	unsigned int			flags;
+#define SFC_EF100_TXQ_STARTED		0x1
+#define SFC_EF100_TXQ_NOT_RUNNING	0x2
+#define SFC_EF100_TXQ_EXCEPTION		0x4
+
+	unsigned int			ptr_mask;
+	unsigned int			added;
+	unsigned int			completed;
+	unsigned int			max_fill_level;
+	unsigned int			free_thresh;
+	struct sfc_ef100_tx_sw_desc	*sw_ring;
+	efx_oword_t			*txq_hw_ring;
+	volatile void			*doorbell;
+
+	/* Completion/reap */
+	unsigned int			evq_read_ptr;
+	unsigned int			evq_phase_bit_shift;
+	volatile efx_qword_t		*evq_hw_ring;
+
+	/* Datapath transmit queue anchor */
+	struct sfc_dp_txq		dp;
+};
+
+static inline struct sfc_ef100_txq *
+sfc_ef100_txq_by_dp_txq(struct sfc_dp_txq *dp_txq)
+{
+	return container_of(dp_txq, struct sfc_ef100_txq, dp);
+}
+
+static bool
+sfc_ef100_tx_get_event(struct sfc_ef100_txq *txq, efx_qword_t *ev)
+{
+	volatile efx_qword_t *evq_hw_ring = txq->evq_hw_ring;
+
+	/*
+	 * Exception flag is set when reap is done.
+	 * It is never done twice per packet burst get, and absence of
+	 * the flag is checked on burst get entry.
+	 */
+	SFC_ASSERT((txq->flags & SFC_EF100_TXQ_EXCEPTION) == 0);
+
+	*ev = evq_hw_ring[txq->evq_read_ptr & txq->ptr_mask];
+
+	if (!sfc_ef100_ev_present(ev,
+			(txq->evq_read_ptr >> txq->evq_phase_bit_shift) & 1))
+		return false;
+
+	if (unlikely(!sfc_ef100_ev_type_is(ev,
+					   ESE_GZ_EF100_EV_TX_COMPLETION))) {
+		/*
+		 * Do not move read_ptr to keep the event for exception
+		 * handling by the control path.
+		 */
+		txq->flags |= SFC_EF100_TXQ_EXCEPTION;
+		sfc_ef100_tx_err(txq,
+			"TxQ exception at EvQ ptr %u(%#x), event %08x:%08x",
+			txq->evq_read_ptr, txq->evq_read_ptr & txq->ptr_mask,
+			EFX_QWORD_FIELD(*ev, EFX_DWORD_1),
+			EFX_QWORD_FIELD(*ev, EFX_DWORD_0));
+		return false;
+	}
+
+	sfc_ef100_tx_debug(txq, "TxQ got event %08x:%08x at %u (%#x)",
+			   EFX_QWORD_FIELD(*ev, EFX_DWORD_1),
+			   EFX_QWORD_FIELD(*ev, EFX_DWORD_0),
+			   txq->evq_read_ptr,
+			   txq->evq_read_ptr & txq->ptr_mask);
+
+	txq->evq_read_ptr++;
+	return true;
+}
+
+static unsigned int
+sfc_ef100_tx_process_events(struct sfc_ef100_txq *txq)
+{
+	unsigned int num_descs = 0;
+	efx_qword_t tx_ev;
+
+	while (sfc_ef100_tx_get_event(txq, &tx_ev))
+		num_descs += EFX_QWORD_FIELD(tx_ev, ESF_GZ_EV_TXCMPL_NUM_DESC);
+
+	return num_descs;
+}
+
+static void
+sfc_ef100_tx_reap_num_descs(struct sfc_ef100_txq *txq, unsigned int num_descs)
+{
+	if (num_descs > 0) {
+		unsigned int completed = txq->completed;
+		unsigned int pending = completed + num_descs;
+		struct rte_mbuf *bulk[SFC_TX_REAP_BULK_SIZE];
+		unsigned int nb = 0;
+
+		do {
+			struct sfc_ef100_tx_sw_desc *txd;
+			struct rte_mbuf *m;
+
+			txd = &txq->sw_ring[completed & txq->ptr_mask];
+			if (txd->mbuf == NULL)
+				continue;
+
+			m = rte_pktmbuf_prefree_seg(txd->mbuf);
+			if (m == NULL)
+				continue;
+
+			txd->mbuf = NULL;
+
+			if (nb == RTE_DIM(bulk) ||
+			    (nb != 0 && m->pool != bulk[0]->pool)) {
+				rte_mempool_put_bulk(bulk[0]->pool,
+						     (void *)bulk, nb);
+				nb = 0;
+			}
+
+			bulk[nb++] = m;
+		} while (++completed != pending);
+
+		if (nb != 0)
+			rte_mempool_put_bulk(bulk[0]->pool, (void *)bulk, nb);
+
+		txq->completed = completed;
+	}
+}
+
+static void
+sfc_ef100_tx_reap(struct sfc_ef100_txq *txq)
+{
+	sfc_ef100_tx_reap_num_descs(txq, sfc_ef100_tx_process_events(txq));
+}
+
+static void
+sfc_ef100_tx_qdesc_send_create(const struct rte_mbuf *m, efx_oword_t *tx_desc)
+{
+	EFX_POPULATE_OWORD_4(*tx_desc,
+			ESF_GZ_TX_SEND_ADDR, rte_mbuf_data_iova(m),
+			ESF_GZ_TX_SEND_LEN, rte_pktmbuf_data_len(m),
+			ESF_GZ_TX_SEND_NUM_SEGS, 1,
+			ESF_GZ_TX_DESC_TYPE, ESE_GZ_TX_DESC_TYPE_SEND);
+}
+
+static inline void
+sfc_ef100_tx_qpush(struct sfc_ef100_txq *txq, unsigned int added)
+{
+	efx_dword_t dword;
+
+	EFX_POPULATE_DWORD_1(dword, ERF_GZ_TX_RING_PIDX, added & txq->ptr_mask);
+
+	/* DMA sync to device is not required */
+
+	/*
+	 * rte_write32() has rte_io_wmb() which guarantees that the STORE
+	 * operations (i.e. Tx descriptor updates) that precede
+	 * the rte_io_wmb() call are visible to the NIC before the STORE
+	 * operations that follow it (i.e. the doorbell write).
+	 */
+	rte_write32(dword.ed_u32[0], txq->doorbell);
+
+	sfc_ef100_tx_debug(txq, "TxQ pushed doorbell at pidx %u (added=%u)",
+			   EFX_DWORD_FIELD(dword, ERF_GZ_TX_RING_PIDX),
+			   added);
+}
+
+static unsigned int
+sfc_ef100_tx_pkt_descs_max(const struct rte_mbuf *m)
+{
+/** Maximum length of an mbuf segment data */
+#define SFC_MBUF_SEG_LEN_MAX		UINT16_MAX
+	RTE_BUILD_BUG_ON(sizeof(m->data_len) != 2);
+
+	/*
+	 * An mbuf segment cannot be bigger than the maximum segment
+	 * length, nor bigger than the maximum packet length since TSO
+	 * is not supported yet.
+	 * Make sure that the first segment does not need fragmentation
+	 * (i.e. splitting into many Tx descriptors).
+	 */
+	RTE_BUILD_BUG_ON(SFC_EF100_TX_SEND_DESC_LEN_MAX <
+		RTE_MIN((unsigned int)EFX_MAC_PDU_MAX, SFC_MBUF_SEG_LEN_MAX));
+
+	SFC_ASSERT(m->nb_segs == 1);
+	return 1;
+}
+
+static uint16_t
+sfc_ef100_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
+{
+	struct sfc_ef100_txq * const txq = sfc_ef100_txq_by_dp_txq(tx_queue);
+	unsigned int added;
+	unsigned int dma_desc_space;
+	bool reap_done;
+	struct rte_mbuf **pktp;
+	struct rte_mbuf **pktp_end;
+
+	if (unlikely(txq->flags &
+		     (SFC_EF100_TXQ_NOT_RUNNING | SFC_EF100_TXQ_EXCEPTION)))
+		return 0;
+
+	added = txq->added;
+	dma_desc_space = txq->max_fill_level - (added - txq->completed);
+
+	reap_done = (dma_desc_space < txq->free_thresh);
+	if (reap_done) {
+		sfc_ef100_tx_reap(txq);
+		dma_desc_space = txq->max_fill_level - (added - txq->completed);
+	}
+
+	for (pktp = &tx_pkts[0], pktp_end = &tx_pkts[nb_pkts];
+	     pktp != pktp_end;
+	     ++pktp) {
+		struct rte_mbuf *m_seg = *pktp;
+		unsigned int pkt_start = added;
+		unsigned int id;
+
+		if (likely(pktp + 1 != pktp_end))
+			rte_mbuf_prefetch_part1(pktp[1]);
+
+		if (sfc_ef100_tx_pkt_descs_max(m_seg) > dma_desc_space) {
+			if (reap_done)
+				break;
+
+			/* Push already prepared descriptors before polling */
+			if (added != txq->added) {
+				sfc_ef100_tx_qpush(txq, added);
+				txq->added = added;
+			}
+
+			sfc_ef100_tx_reap(txq);
+			reap_done = true;
+			dma_desc_space = txq->max_fill_level -
+				(added - txq->completed);
+			if (sfc_ef100_tx_pkt_descs_max(m_seg) > dma_desc_space)
+				break;
+		}
+
+		id = added++ & txq->ptr_mask;
+		sfc_ef100_tx_qdesc_send_create(m_seg, &txq->txq_hw_ring[id]);
+
+		/*
+		 * rte_pktmbuf_free() is commonly used in DPDK for
+		 * recycling packets - the function checks every
+		 * segment's reference counter and returns the
+		 * buffer to its pool whenever possible.
+		 * Nevertheless, freeing mbuf segments one by one
+		 * may entail some performance decline.
+		 * Here, sfc_ef100_tx_reap() does the same job on
+		 * its own and frees buffers in bulk (all mbufs
+		 * within a bulk belong to the same pool).
+		 * For that to work, each segment pointer must be
+		 * associated with the corresponding SW descriptor
+		 * independently, so that a single loop on reap is
+		 * sufficient to inspect all the buffers.
+		 */
+		txq->sw_ring[id].mbuf = m_seg;
+
+		dma_desc_space -= (added - pkt_start);
+	}
+
+	if (likely(added != txq->added)) {
+		sfc_ef100_tx_qpush(txq, added);
+		txq->added = added;
+	}
+
+#if SFC_TX_XMIT_PKTS_REAP_AT_LEAST_ONCE
+	if (!reap_done)
+		sfc_ef100_tx_reap(txq);
+#endif
+
+	return pktp - &tx_pkts[0];
+}
+
+static sfc_dp_tx_get_dev_info_t sfc_ef100_get_dev_info;
+static void
+sfc_ef100_get_dev_info(struct rte_eth_dev_info *dev_info)
+{
+	/*
+	 * Number of descriptors just defines maximum number of pushed
+	 * descriptors (fill level).
+	 */
+	dev_info->tx_desc_lim.nb_min = 1;
+	dev_info->tx_desc_lim.nb_align = 1;
+}
+
+static sfc_dp_tx_qsize_up_rings_t sfc_ef100_tx_qsize_up_rings;
+static int
+sfc_ef100_tx_qsize_up_rings(uint16_t nb_tx_desc,
+			   struct sfc_dp_tx_hw_limits *limits,
+			   unsigned int *txq_entries,
+			   unsigned int *evq_entries,
+			   unsigned int *txq_max_fill_level)
+{
+	/*
+	 * rte_ethdev API guarantees that the number meets min, max and
+	 * alignment requirements.
+	 */
+	if (nb_tx_desc <= limits->txq_min_entries)
+		*txq_entries = limits->txq_min_entries;
+	else
+		*txq_entries = rte_align32pow2(nb_tx_desc);
+
+	*evq_entries = *txq_entries;
+
+	*txq_max_fill_level = RTE_MIN(nb_tx_desc,
+				      SFC_EF100_TXQ_LIMIT(*evq_entries));
+	return 0;
+}
+
+static sfc_dp_tx_qcreate_t sfc_ef100_tx_qcreate;
+static int
+sfc_ef100_tx_qcreate(uint16_t port_id, uint16_t queue_id,
+		    const struct rte_pci_addr *pci_addr, int socket_id,
+		    const struct sfc_dp_tx_qcreate_info *info,
+		    struct sfc_dp_txq **dp_txqp)
+{
+	struct sfc_ef100_txq *txq;
+	int rc;
+
+	rc = EINVAL;
+	if (info->txq_entries != info->evq_entries)
+		goto fail_bad_args;
+
+	rc = ENOMEM;
+	txq = rte_zmalloc_socket("sfc-ef100-txq", sizeof(*txq),
+				 RTE_CACHE_LINE_SIZE, socket_id);
+	if (txq == NULL)
+		goto fail_txq_alloc;
+
+	sfc_dp_queue_init(&txq->dp.dpq, port_id, queue_id, pci_addr);
+
+	rc = ENOMEM;
+	txq->sw_ring = rte_calloc_socket("sfc-ef100-txq-sw_ring",
+					 info->txq_entries,
+					 sizeof(*txq->sw_ring),
+					 RTE_CACHE_LINE_SIZE, socket_id);
+	if (txq->sw_ring == NULL)
+		goto fail_sw_ring_alloc;
+
+	txq->flags = SFC_EF100_TXQ_NOT_RUNNING;
+	txq->ptr_mask = info->txq_entries - 1;
+	txq->max_fill_level = info->max_fill_level;
+	txq->free_thresh = info->free_thresh;
+	txq->evq_phase_bit_shift = rte_bsf32(info->evq_entries);
+	txq->txq_hw_ring = info->txq_hw_ring;
+	txq->doorbell = (volatile uint8_t *)info->mem_bar +
+			ER_GZ_TX_RING_DOORBELL_OFST +
+			(info->hw_index << info->vi_window_shift);
+	txq->evq_hw_ring = info->evq_hw_ring;
+
+	sfc_ef100_tx_debug(txq, "TxQ doorbell is %p", txq->doorbell);
+
+	*dp_txqp = &txq->dp;
+	return 0;
+
+fail_sw_ring_alloc:
+	rte_free(txq);
+
+fail_txq_alloc:
+fail_bad_args:
+	return rc;
+}
+
+static sfc_dp_tx_qdestroy_t sfc_ef100_tx_qdestroy;
+static void
+sfc_ef100_tx_qdestroy(struct sfc_dp_txq *dp_txq)
+{
+	struct sfc_ef100_txq *txq = sfc_ef100_txq_by_dp_txq(dp_txq);
+
+	rte_free(txq->sw_ring);
+	rte_free(txq);
+}
+
+static sfc_dp_tx_qstart_t sfc_ef100_tx_qstart;
+static int
+sfc_ef100_tx_qstart(struct sfc_dp_txq *dp_txq, unsigned int evq_read_ptr,
+		   unsigned int txq_desc_index)
+{
+	struct sfc_ef100_txq *txq = sfc_ef100_txq_by_dp_txq(dp_txq);
+
+	txq->evq_read_ptr = evq_read_ptr;
+	txq->added = txq->completed = txq_desc_index;
+
+	txq->flags |= SFC_EF100_TXQ_STARTED;
+	txq->flags &= ~(SFC_EF100_TXQ_NOT_RUNNING | SFC_EF100_TXQ_EXCEPTION);
+
+	return 0;
+}
+
+static sfc_dp_tx_qstop_t sfc_ef100_tx_qstop;
+static void
+sfc_ef100_tx_qstop(struct sfc_dp_txq *dp_txq, unsigned int *evq_read_ptr)
+{
+	struct sfc_ef100_txq *txq = sfc_ef100_txq_by_dp_txq(dp_txq);
+
+	txq->flags |= SFC_EF100_TXQ_NOT_RUNNING;
+
+	*evq_read_ptr = txq->evq_read_ptr;
+}
+
+static sfc_dp_tx_qtx_ev_t sfc_ef100_tx_qtx_ev;
+static bool
+sfc_ef100_tx_qtx_ev(struct sfc_dp_txq *dp_txq, unsigned int num_descs)
+{
+	struct sfc_ef100_txq *txq = sfc_ef100_txq_by_dp_txq(dp_txq);
+
+	SFC_ASSERT(txq->flags & SFC_EF100_TXQ_NOT_RUNNING);
+
+	sfc_ef100_tx_reap_num_descs(txq, num_descs);
+
+	return false;
+}
+
+static sfc_dp_tx_qreap_t sfc_ef100_tx_qreap;
+static void
+sfc_ef100_tx_qreap(struct sfc_dp_txq *dp_txq)
+{
+	struct sfc_ef100_txq *txq = sfc_ef100_txq_by_dp_txq(dp_txq);
+	unsigned int completed;
+
+	for (completed = txq->completed; completed != txq->added; ++completed) {
+		struct sfc_ef100_tx_sw_desc *txd;
+
+		txd = &txq->sw_ring[completed & txq->ptr_mask];
+		if (txd->mbuf != NULL) {
+			rte_pktmbuf_free_seg(txd->mbuf);
+			txd->mbuf = NULL;
+		}
+	}
+
+	txq->flags &= ~SFC_EF100_TXQ_STARTED;
+}
+
+static unsigned int
+sfc_ef100_tx_qdesc_npending(struct sfc_ef100_txq *txq)
+{
+	const unsigned int evq_old_read_ptr = txq->evq_read_ptr;
+	unsigned int npending = 0;
+	efx_qword_t tx_ev;
+
+	if (unlikely(txq->flags &
+		     (SFC_EF100_TXQ_NOT_RUNNING | SFC_EF100_TXQ_EXCEPTION)))
+		return 0;
+
+	while (sfc_ef100_tx_get_event(txq, &tx_ev))
+		npending += EFX_QWORD_FIELD(tx_ev, ESF_GZ_EV_TXCMPL_NUM_DESC);
+
+	/*
+	 * The function does not process events, so restore the event queue
+	 * read pointer to its original position to allow the events that
+	 * were read to be processed later.
+	 */
+	txq->evq_read_ptr = evq_old_read_ptr;
+
+	return npending;
+}
+
+static sfc_dp_tx_qdesc_status_t sfc_ef100_tx_qdesc_status;
+static int
+sfc_ef100_tx_qdesc_status(struct sfc_dp_txq *dp_txq, uint16_t offset)
+{
+	struct sfc_ef100_txq *txq = sfc_ef100_txq_by_dp_txq(dp_txq);
+	unsigned int pushed = txq->added - txq->completed;
+
+	if (unlikely(offset > txq->ptr_mask))
+		return -EINVAL;
+
+	if (unlikely(offset >= txq->max_fill_level))
+		return RTE_ETH_TX_DESC_UNAVAIL;
+
+	return (offset >= pushed ||
+		offset < sfc_ef100_tx_qdesc_npending(txq)) ?
+		RTE_ETH_TX_DESC_DONE : RTE_ETH_TX_DESC_FULL;
+}
+
+struct sfc_dp_tx sfc_ef100_tx = {
+	.dp = {
+		.name		= SFC_KVARG_DATAPATH_EF100,
+		.type		= SFC_DP_TX,
+		.hw_fw_caps	= SFC_DP_HW_FW_CAP_EF100,
+	},
+	.features		= SFC_DP_TX_FEAT_MULTI_PROCESS,
+	.dev_offload_capa	= 0,
+	.queue_offload_capa	= 0,
+	.get_dev_info		= sfc_ef100_get_dev_info,
+	.qsize_up_rings		= sfc_ef100_tx_qsize_up_rings,
+	.qcreate		= sfc_ef100_tx_qcreate,
+	.qdestroy		= sfc_ef100_tx_qdestroy,
+	.qstart			= sfc_ef100_tx_qstart,
+	.qtx_ev			= sfc_ef100_tx_qtx_ev,
+	.qstop			= sfc_ef100_tx_qstop,
+	.qreap			= sfc_ef100_tx_qreap,
+	.qdesc_status		= sfc_ef100_tx_qdesc_status,
+	.pkt_burst		= sfc_ef100_xmit_pkts,
+};
diff --git a/drivers/net/sfc/sfc_ethdev.c b/drivers/net/sfc/sfc_ethdev.c
index e1db9236e9..165776b652 100644
--- a/drivers/net/sfc/sfc_ethdev.c
+++ b/drivers/net/sfc/sfc_ethdev.c
@@ -2156,6 +2156,7 @@ sfc_register_dp(void)
 		sfc_dp_register(&sfc_dp_head, &sfc_ef10_rx.dp);
 		sfc_dp_register(&sfc_dp_head, &sfc_efx_rx.dp);
 
+		sfc_dp_register(&sfc_dp_head, &sfc_ef100_tx.dp);
 		sfc_dp_register(&sfc_dp_head, &sfc_ef10_tx.dp);
 		sfc_dp_register(&sfc_dp_head, &sfc_efx_tx.dp);
 		sfc_dp_register(&sfc_dp_head, &sfc_ef10_simple_tx.dp);
diff --git a/drivers/net/sfc/sfc_kvargs.h b/drivers/net/sfc/sfc_kvargs.h
index cc3f4a353e..0c3660890c 100644
--- a/drivers/net/sfc/sfc_kvargs.h
+++ b/drivers/net/sfc/sfc_kvargs.h
@@ -47,7 +47,8 @@ extern "C" {
 #define SFC_KVARG_VALUES_TX_DATAPATH \
 	"[" SFC_KVARG_DATAPATH_EFX "|" \
 	    SFC_KVARG_DATAPATH_EF10 "|" \
-	    SFC_KVARG_DATAPATH_EF10_SIMPLE "]"
+	    SFC_KVARG_DATAPATH_EF10_SIMPLE "|" \
+	    SFC_KVARG_DATAPATH_EF100 "]"
 
 #define SFC_KVARG_FW_VARIANT		"fw_variant"
 
-- 
2.17.1


^ permalink raw reply	[flat|nested] 43+ messages in thread

* [dpdk-dev] [PATCH 18/36] net/sfc: support multi-segment transmit for EF100 datapath
  2020-10-13 13:45 [dpdk-dev] [PATCH 00/36] net/sfc: add EF100 support Andrew Rybchenko
                   ` (16 preceding siblings ...)
  2020-10-13 13:45 ` [dpdk-dev] [PATCH 17/36] net/sfc: implement EF100 native Tx datapath Andrew Rybchenko
@ 2020-10-13 13:45 ` Andrew Rybchenko
  2020-10-13 13:45 ` [dpdk-dev] [PATCH 19/36] net/sfc: support TCP and UDP checksum offloads for EF100 Andrew Rybchenko
                   ` (18 subsequent siblings)
  36 siblings, 0 replies; 43+ messages in thread
From: Andrew Rybchenko @ 2020-10-13 13:45 UTC (permalink / raw)
  To: dev

Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
---
 doc/guides/nics/sfc_efx.rst    |  4 +-
 drivers/net/sfc/sfc_ef100_tx.c | 69 ++++++++++++++++++++++++++++++++--
 2 files changed, 67 insertions(+), 6 deletions(-)

diff --git a/doc/guides/nics/sfc_efx.rst b/doc/guides/nics/sfc_efx.rst
index 726d653fa8..17e9461bea 100644
--- a/doc/guides/nics/sfc_efx.rst
+++ b/doc/guides/nics/sfc_efx.rst
@@ -328,8 +328,8 @@ boolean parameters value.
   **ef10_simple** chooses EF10 (SFN7xxx, SFN8xxx, X2xxx) native datapath which
  is even faster than **ef10** but does not support multi-segment
   mbufs, disallows multiple mempools and neglects mbuf reference counters.
-  **ef100** chooses EF100 native datapath which does not support multi-segment
-  mbufs and any offloads.
+  **ef100** chooses EF100 native datapath which does not support
+  any offloads except multi-segment mbufs.
 
 - ``perf_profile`` [auto|throughput|low-latency] (default **throughput**)
 
diff --git a/drivers/net/sfc/sfc_ef100_tx.c b/drivers/net/sfc/sfc_ef100_tx.c
index 20b7c786cc..0a7bd74651 100644
--- a/drivers/net/sfc/sfc_ef100_tx.c
+++ b/drivers/net/sfc/sfc_ef100_tx.c
@@ -36,6 +36,10 @@
 #define SFC_EF100_TX_SEND_DESC_LEN_MAX \
 	((1u << ESF_GZ_TX_SEND_LEN_WIDTH) - 1)
 
+/** Maximum length of the segment descriptor data */
+#define SFC_EF100_TX_SEG_DESC_LEN_MAX \
+	((1u << ESF_GZ_TX_SEG_LEN_WIDTH) - 1)
+
 /**
  * Maximum number of descriptors/buffers in the Tx ring.
  * It should guarantee that corresponding event queue never overfill.
@@ -82,6 +86,32 @@ sfc_ef100_txq_by_dp_txq(struct sfc_dp_txq *dp_txq)
 	return container_of(dp_txq, struct sfc_ef100_txq, dp);
 }
 
+static uint16_t
+sfc_ef100_tx_prepare_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
+			  uint16_t nb_pkts)
+{
+	struct sfc_ef100_txq * const txq = sfc_ef100_txq_by_dp_txq(tx_queue);
+	uint16_t i;
+
+	for (i = 0; i < nb_pkts; i++) {
+		struct rte_mbuf *m = tx_pkts[i];
+		int ret;
+
+		ret = sfc_dp_tx_prepare_pkt(m, 0, txq->max_fill_level, 0, 0);
+		if (unlikely(ret != 0)) {
+			rte_errno = ret;
+			break;
+		}
+
+		if (m->nb_segs > EFX_MASK32(ESF_GZ_TX_SEND_NUM_SEGS)) {
+			rte_errno = EINVAL;
+			break;
+		}
+	}
+
+	return i;
+}
+
 static bool
 sfc_ef100_tx_get_event(struct sfc_ef100_txq *txq, efx_qword_t *ev)
 {
@@ -189,10 +219,20 @@ sfc_ef100_tx_qdesc_send_create(const struct rte_mbuf *m, efx_oword_t *tx_desc)
 	EFX_POPULATE_OWORD_4(*tx_desc,
 			ESF_GZ_TX_SEND_ADDR, rte_mbuf_data_iova(m),
 			ESF_GZ_TX_SEND_LEN, rte_pktmbuf_data_len(m),
-			ESF_GZ_TX_SEND_NUM_SEGS, 1,
+			ESF_GZ_TX_SEND_NUM_SEGS, m->nb_segs,
 			ESF_GZ_TX_DESC_TYPE, ESE_GZ_TX_DESC_TYPE_SEND);
 }
 
+static void
+sfc_ef100_tx_qdesc_seg_create(rte_iova_t addr, uint16_t len,
+			      efx_oword_t *tx_desc)
+{
+	EFX_POPULATE_OWORD_3(*tx_desc,
+			ESF_GZ_TX_SEG_ADDR, addr,
+			ESF_GZ_TX_SEG_LEN, len,
+			ESF_GZ_TX_DESC_TYPE, ESE_GZ_TX_DESC_TYPE_SEG);
+}
+
 static inline void
 sfc_ef100_tx_qpush(struct sfc_ef100_txq *txq, unsigned int added)
 {
@@ -231,8 +271,17 @@ sfc_ef100_tx_pkt_descs_max(const struct rte_mbuf *m)
 	RTE_BUILD_BUG_ON(SFC_EF100_TX_SEND_DESC_LEN_MAX <
 		RTE_MIN((unsigned int)EFX_MAC_PDU_MAX, SFC_MBUF_SEG_LEN_MAX));
 
-	SFC_ASSERT(m->nb_segs == 1);
-	return 1;
+	/*
+	 * Any segment of a scattered packet cannot be bigger than the
+	 * maximum segment length and the maximum packet length since
+	 * TSO is not supported yet.
+	 * Make sure that subsequent segments do not need fragmentation (split
+	 * into many Tx descriptors).
+	 */
+	RTE_BUILD_BUG_ON(SFC_EF100_TX_SEG_DESC_LEN_MAX <
+		RTE_MIN((unsigned int)EFX_MAC_PDU_MAX, SFC_MBUF_SEG_LEN_MAX));
+
+	return m->nb_segs;
 }
 
 static uint16_t
@@ -306,6 +355,17 @@ sfc_ef100_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 		 */
 		txq->sw_ring[id].mbuf = m_seg;
 
+		while ((m_seg = m_seg->next) != NULL) {
+			RTE_BUILD_BUG_ON(SFC_MBUF_SEG_LEN_MAX >
+					 SFC_EF100_TX_SEG_DESC_LEN_MAX);
+
+			id = added++ & txq->ptr_mask;
+			sfc_ef100_tx_qdesc_seg_create(rte_mbuf_data_iova(m_seg),
+					rte_pktmbuf_data_len(m_seg),
+					&txq->txq_hw_ring[id]);
+			txq->sw_ring[id].mbuf = m_seg;
+		}
+
 		dma_desc_space -= (added - pkt_start);
 	}
 
@@ -532,7 +592,7 @@ struct sfc_dp_tx sfc_ef100_tx = {
 	},
 	.features		= SFC_DP_TX_FEAT_MULTI_PROCESS,
 	.dev_offload_capa	= 0,
-	.queue_offload_capa	= 0,
+	.queue_offload_capa	= DEV_TX_OFFLOAD_MULTI_SEGS,
 	.get_dev_info		= sfc_ef100_get_dev_info,
 	.qsize_up_rings		= sfc_ef100_tx_qsize_up_rings,
 	.qcreate		= sfc_ef100_tx_qcreate,
@@ -542,5 +602,6 @@ struct sfc_dp_tx sfc_ef100_tx = {
 	.qstop			= sfc_ef100_tx_qstop,
 	.qreap			= sfc_ef100_tx_qreap,
 	.qdesc_status		= sfc_ef100_tx_qdesc_status,
+	.pkt_prepare		= sfc_ef100_tx_prepare_pkts,
 	.pkt_burst		= sfc_ef100_xmit_pkts,
 };
-- 
2.17.1
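The multi-segment accounting in sfc_ef100_tx_pkt_descs_max() above (one descriptor per mbuf segment: a SEND descriptor for the first segment plus a SEG descriptor for each subsequent one) can be sketched in isolation. The struct below is a simplified stand-in for an mbuf chain, not the real rte_mbuf:

```c
#include <assert.h>
#include <stddef.h>

/* Simplified stand-in for an mbuf chain: data length plus next segment. */
struct seg {
	unsigned int data_len;
	struct seg *next;
};

/*
 * One SEND descriptor covers the first segment and each further segment
 * takes one SEG descriptor, so a packet of N segments costs N descriptors
 * (this mirrors sfc_ef100_tx_pkt_descs_max() returning m->nb_segs).
 */
static unsigned int tx_pkt_descs(const struct seg *first)
{
	unsigned int n = 0;
	const struct seg *s;

	for (s = first; s != NULL; s = s->next)
		n++;
	return n;
}
```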



* [dpdk-dev] [PATCH 19/36] net/sfc: support TCP and UDP checksum offloads for EF100
  2020-10-13 13:45 [dpdk-dev] [PATCH 00/36] net/sfc: add EF100 support Andrew Rybchenko
                   ` (17 preceding siblings ...)
  2020-10-13 13:45 ` [dpdk-dev] [PATCH 18/36] net/sfc: support multi-segment transmit for EF100 datapath Andrew Rybchenko
@ 2020-10-13 13:45 ` Andrew Rybchenko
  2020-10-13 13:45 ` [dpdk-dev] [PATCH 20/36] net/sfc: support IPv4 header checksum offload for EF100 Tx Andrew Rybchenko
                   ` (17 subsequent siblings)
  36 siblings, 0 replies; 43+ messages in thread
From: Andrew Rybchenko @ 2020-10-13 13:45 UTC (permalink / raw)
  To: dev

Use outer layer 4 full checksum offload, which does not require any
assistance from the driver.

Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
---
 doc/guides/nics/sfc_efx.rst    |  4 ++--
 drivers/net/sfc/sfc_ef100_tx.c | 11 +++++++++--
 2 files changed, 11 insertions(+), 4 deletions(-)

diff --git a/doc/guides/nics/sfc_efx.rst b/doc/guides/nics/sfc_efx.rst
index 17e9461bea..98521f9975 100644
--- a/doc/guides/nics/sfc_efx.rst
+++ b/doc/guides/nics/sfc_efx.rst
@@ -328,8 +328,8 @@ boolean parameters value.
   **ef10_simple** chooses EF10 (SFN7xxx, SFN8xxx, X2xxx) native datapath which
   is even more faster then **ef10** but does not support multi-segment
   mbufs, disallows multiple mempools and neglects mbuf reference counters.
-  **ef100** chooses EF100 native datapath which does not support
-  any offloads except multi-segment mbufs.
+  **ef100** chooses EF100 native datapath which supports multi-segment
+  mbufs and TCP/UDP checksum offloads.
 
 - ``perf_profile`` [auto|throughput|low-latency] (default **throughput**)
 
diff --git a/drivers/net/sfc/sfc_ef100_tx.c b/drivers/net/sfc/sfc_ef100_tx.c
index 0a7bd74651..343730b5c9 100644
--- a/drivers/net/sfc/sfc_ef100_tx.c
+++ b/drivers/net/sfc/sfc_ef100_tx.c
@@ -216,10 +216,15 @@ sfc_ef100_tx_reap(struct sfc_ef100_txq *txq)
 static void
 sfc_ef100_tx_qdesc_send_create(const struct rte_mbuf *m, efx_oword_t *tx_desc)
 {
-	EFX_POPULATE_OWORD_4(*tx_desc,
+	bool outer_l4;
+
+	outer_l4 = (m->ol_flags & PKT_TX_L4_MASK);
+
+	EFX_POPULATE_OWORD_5(*tx_desc,
 			ESF_GZ_TX_SEND_ADDR, rte_mbuf_data_iova(m),
 			ESF_GZ_TX_SEND_LEN, rte_pktmbuf_data_len(m),
 			ESF_GZ_TX_SEND_NUM_SEGS, m->nb_segs,
+			ESF_GZ_TX_SEND_CSO_OUTER_L4, outer_l4,
 			ESF_GZ_TX_DESC_TYPE, ESE_GZ_TX_DESC_TYPE_SEND);
 }
 
@@ -592,7 +597,9 @@ struct sfc_dp_tx sfc_ef100_tx = {
 	},
 	.features		= SFC_DP_TX_FEAT_MULTI_PROCESS,
 	.dev_offload_capa	= 0,
-	.queue_offload_capa	= DEV_TX_OFFLOAD_MULTI_SEGS,
+	.queue_offload_capa	= DEV_TX_OFFLOAD_UDP_CKSUM |
+				  DEV_TX_OFFLOAD_TCP_CKSUM |
+				  DEV_TX_OFFLOAD_MULTI_SEGS,
 	.get_dev_info		= sfc_ef100_get_dev_info,
 	.qsize_up_rings		= sfc_ef100_tx_qsize_up_rings,
 	.qcreate		= sfc_ef100_tx_qcreate,
-- 
2.17.1
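The outer L4 checksum enable derived in sfc_ef100_tx_qdesc_send_create() above is a plain boolean taken from the offload flags: with full checksum offload the NIC parses the packet itself, so no pseudo-header assistance is needed. A minimal sketch follows; the TX_* flag values are illustrative stand-ins for DPDK's PKT_TX_* constants, not taken from rte_mbuf.h:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Illustrative stand-ins for DPDK's PKT_TX_* L4 flags (a 2-bit field). */
#define TX_L4_NO_CKSUM	(0ULL << 52)
#define TX_TCP_CKSUM	(1ULL << 52)
#define TX_UDP_CKSUM	(3ULL << 52)
#define TX_L4_MASK	(3ULL << 52)

/*
 * Full (outer) L4 checksum offload is a single enable bit in the SEND
 * descriptor: the NIC locates the L4 header itself, so the driver only
 * checks whether any L4 checksum flag is set at all.
 */
static bool cso_outer_l4(uint64_t ol_flags)
{
	return (ol_flags & TX_L4_MASK) != 0;
}
```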



* [dpdk-dev] [PATCH 20/36] net/sfc: support IPv4 header checksum offload for EF100 Tx
  2020-10-13 13:45 [dpdk-dev] [PATCH 00/36] net/sfc: add EF100 support Andrew Rybchenko
                   ` (18 preceding siblings ...)
  2020-10-13 13:45 ` [dpdk-dev] [PATCH 19/36] net/sfc: support TCP and UDP checksum offloads for EF100 Andrew Rybchenko
@ 2020-10-13 13:45 ` Andrew Rybchenko
  2020-10-13 13:45 ` [dpdk-dev] [PATCH 21/36] net/sfc: add header segments check for EF100 Tx datapath Andrew Rybchenko
                   ` (16 subsequent siblings)
  36 siblings, 0 replies; 43+ messages in thread
From: Andrew Rybchenko @ 2020-10-13 13:45 UTC (permalink / raw)
  To: dev

Use outer layer 3 full checksum offload, which does not require any
assistance from the driver.

Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
---
 doc/guides/nics/sfc_efx.rst    | 2 +-
 drivers/net/sfc/sfc_ef100_tx.c | 8 ++++++--
 2 files changed, 7 insertions(+), 3 deletions(-)

diff --git a/doc/guides/nics/sfc_efx.rst b/doc/guides/nics/sfc_efx.rst
index 98521f9975..0e32d0c6d9 100644
--- a/doc/guides/nics/sfc_efx.rst
+++ b/doc/guides/nics/sfc_efx.rst
@@ -329,7 +329,7 @@ boolean parameters value.
   is even more faster then **ef10** but does not support multi-segment
   mbufs, disallows multiple mempools and neglects mbuf reference counters.
   **ef100** chooses EF100 native datapath which supports multi-segment
-  mbufs and TCP/UDP checksum offloads.
+  mbufs, IPv4 and TCP/UDP checksum offloads.
 
 - ``perf_profile`` [auto|throughput|low-latency] (default **throughput**)
 
diff --git a/drivers/net/sfc/sfc_ef100_tx.c b/drivers/net/sfc/sfc_ef100_tx.c
index 343730b5c9..41b1554f12 100644
--- a/drivers/net/sfc/sfc_ef100_tx.c
+++ b/drivers/net/sfc/sfc_ef100_tx.c
@@ -216,14 +216,17 @@ sfc_ef100_tx_reap(struct sfc_ef100_txq *txq)
 static void
 sfc_ef100_tx_qdesc_send_create(const struct rte_mbuf *m, efx_oword_t *tx_desc)
 {
+	bool outer_l3;
 	bool outer_l4;
 
+	outer_l3 = (m->ol_flags & PKT_TX_IP_CKSUM);
 	outer_l4 = (m->ol_flags & PKT_TX_L4_MASK);
 
-	EFX_POPULATE_OWORD_5(*tx_desc,
+	EFX_POPULATE_OWORD_6(*tx_desc,
 			ESF_GZ_TX_SEND_ADDR, rte_mbuf_data_iova(m),
 			ESF_GZ_TX_SEND_LEN, rte_pktmbuf_data_len(m),
 			ESF_GZ_TX_SEND_NUM_SEGS, m->nb_segs,
+			ESF_GZ_TX_SEND_CSO_OUTER_L3, outer_l3,
 			ESF_GZ_TX_SEND_CSO_OUTER_L4, outer_l4,
 			ESF_GZ_TX_DESC_TYPE, ESE_GZ_TX_DESC_TYPE_SEND);
 }
@@ -597,7 +600,8 @@ struct sfc_dp_tx sfc_ef100_tx = {
 	},
 	.features		= SFC_DP_TX_FEAT_MULTI_PROCESS,
 	.dev_offload_capa	= 0,
-	.queue_offload_capa	= DEV_TX_OFFLOAD_UDP_CKSUM |
+	.queue_offload_capa	= DEV_TX_OFFLOAD_IPV4_CKSUM |
+				  DEV_TX_OFFLOAD_UDP_CKSUM |
 				  DEV_TX_OFFLOAD_TCP_CKSUM |
 				  DEV_TX_OFFLOAD_MULTI_SEGS,
 	.get_dev_info		= sfc_ef100_get_dev_info,
-- 
2.17.1



* [dpdk-dev] [PATCH 21/36] net/sfc: add header segments check for EF100 Tx datapath
  2020-10-13 13:45 [dpdk-dev] [PATCH 00/36] net/sfc: add EF100 support Andrew Rybchenko
                   ` (19 preceding siblings ...)
  2020-10-13 13:45 ` [dpdk-dev] [PATCH 20/36] net/sfc: support IPv4 header checksum offload for EF100 Tx Andrew Rybchenko
@ 2020-10-13 13:45 ` Andrew Rybchenko
  2020-10-13 13:45 ` [dpdk-dev] [PATCH 22/36] net/sfc: support tunnels for EF100 native " Andrew Rybchenko
                   ` (15 subsequent siblings)
  36 siblings, 0 replies; 43+ messages in thread
From: Andrew Rybchenko @ 2020-10-13 13:45 UTC (permalink / raw)
  To: dev; +Cc: Ivan Malov

From: Ivan Malov <ivan.malov@oktetlabs.ru>

EF100 native Tx datapath demands that the packet header be contiguous
when partial checksum offloads are used, since a helper function is
used to calculate the pseudo-header checksum (and that function
requires a contiguous header).

Add an explicit check for this assumption and restructure the code
to avoid the TSO header linearisation check, since TSO header
linearisation is not done on the EF100 native Tx datapath.

Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
---
 drivers/net/sfc/sfc_dp_tx.h    | 85 +++++++++++++++++++++++++++-------
 drivers/net/sfc/sfc_ef100_tx.c |  4 +-
 drivers/net/sfc/sfc_ef10_tx.c  |  2 +-
 drivers/net/sfc/sfc_tx.c       |  2 +-
 4 files changed, 73 insertions(+), 20 deletions(-)

diff --git a/drivers/net/sfc/sfc_dp_tx.h b/drivers/net/sfc/sfc_dp_tx.h
index 67aa398b7f..bed8ce84aa 100644
--- a/drivers/net/sfc/sfc_dp_tx.h
+++ b/drivers/net/sfc/sfc_dp_tx.h
@@ -206,14 +206,38 @@ sfc_dp_tx_offload_capa(const struct sfc_dp_tx *dp_tx)
 	return dp_tx->dev_offload_capa | dp_tx->queue_offload_capa;
 }
 
+static inline unsigned int
+sfc_dp_tx_pkt_extra_hdr_segs(struct rte_mbuf **m_seg,
+			     unsigned int *header_len_remaining)
+{
+	unsigned int nb_extra_header_segs = 0;
+
+	while (rte_pktmbuf_data_len(*m_seg) < *header_len_remaining) {
+		*header_len_remaining -= rte_pktmbuf_data_len(*m_seg);
+		*m_seg = (*m_seg)->next;
+		++nb_extra_header_segs;
+	}
+
+	return nb_extra_header_segs;
+}
+
 static inline int
 sfc_dp_tx_prepare_pkt(struct rte_mbuf *m,
+			   unsigned int max_nb_header_segs,
+			   unsigned int tso_bounce_buffer_len,
 			   uint32_t tso_tcp_header_offset_limit,
 			   unsigned int max_fill_level,
 			   unsigned int nb_tso_descs,
 			   unsigned int nb_vlan_descs)
 {
 	unsigned int descs_required = m->nb_segs;
+	unsigned int tcph_off = ((m->ol_flags & PKT_TX_TUNNEL_MASK) ?
+				 m->outer_l2_len + m->outer_l3_len : 0) +
+				m->l2_len + m->l3_len;
+	unsigned int header_len = tcph_off + m->l4_len;
+	unsigned int header_len_remaining = header_len;
+	unsigned int nb_header_segs = 1;
+	struct rte_mbuf *m_seg = m;
 
 #ifdef RTE_LIBRTE_SFC_EFX_DEBUG
 	int ret;
@@ -229,10 +253,29 @@ sfc_dp_tx_prepare_pkt(struct rte_mbuf *m,
 	}
 #endif
 
-	if (m->ol_flags & PKT_TX_TCP_SEG) {
-		unsigned int tcph_off = m->l2_len + m->l3_len;
-		unsigned int header_len;
+	if (max_nb_header_segs != 0) {
+		/* There is a limit on the number of header segments. */
 
+		nb_header_segs +=
+		    sfc_dp_tx_pkt_extra_hdr_segs(&m_seg,
+						 &header_len_remaining);
+
+		if (unlikely(nb_header_segs > max_nb_header_segs)) {
+			/*
+			 * The number of header segments is too large.
+			 *
+			 * If TSO is requested and if the datapath supports
+			 * linearisation of TSO headers, allow the packet
+			 * to proceed with additional checks below.
+			 * Otherwise, return an error.
+			 */
+			if ((m->ol_flags & PKT_TX_TCP_SEG) == 0 ||
+			    tso_bounce_buffer_len == 0)
+				return EINVAL;
+		}
+	}
+
+	if (m->ol_flags & PKT_TX_TCP_SEG) {
 		switch (m->ol_flags & PKT_TX_TUNNEL_MASK) {
 		case 0:
 			break;
@@ -242,30 +285,38 @@ sfc_dp_tx_prepare_pkt(struct rte_mbuf *m,
 			if (!(m->ol_flags &
 			      (PKT_TX_OUTER_IPV4 | PKT_TX_OUTER_IPV6)))
 				return EINVAL;
-
-			tcph_off += m->outer_l2_len + m->outer_l3_len;
 		}
 
-		header_len = tcph_off + m->l4_len;
-
 		if (unlikely(tcph_off > tso_tcp_header_offset_limit))
 			return EINVAL;
 
 		descs_required += nb_tso_descs;
 
 		/*
-		 * Extra descriptor that is required when a packet header
-		 * is separated from remaining content of the first segment.
+		 * If header segments have already been counted above,
+		 * nothing is done here since the remaining length is
+		 * smaller than the current segment size.
+		 */
+		nb_header_segs +=
+		    sfc_dp_tx_pkt_extra_hdr_segs(&m_seg,
+						 &header_len_remaining);
+
+		/*
+		 * Extra descriptor which is required when (a part of) payload
+		 * shares the same segment with (a part of) the header.
 		 */
-		if (rte_pktmbuf_data_len(m) > header_len) {
+		if (rte_pktmbuf_data_len(m_seg) > header_len_remaining)
 			descs_required++;
-		} else if (rte_pktmbuf_data_len(m) < header_len &&
-			 unlikely(header_len > SFC_TSOH_STD_LEN)) {
-			/*
-			 * Header linearization is required and
-			 * the header is too big to be linearized
-			 */
-			return EINVAL;
+
+		if (tso_bounce_buffer_len != 0) {
+			if (nb_header_segs > 1 &&
+			    unlikely(header_len > tso_bounce_buffer_len)) {
+				/*
+				 * Header linearization is required and
+				 * the header is too big to be linearized
+				 */
+				return EINVAL;
+			}
 		}
 	}
 
diff --git a/drivers/net/sfc/sfc_ef100_tx.c b/drivers/net/sfc/sfc_ef100_tx.c
index 41b1554f12..0dba5c8eee 100644
--- a/drivers/net/sfc/sfc_ef100_tx.c
+++ b/drivers/net/sfc/sfc_ef100_tx.c
@@ -95,9 +95,11 @@ sfc_ef100_tx_prepare_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 
 	for (i = 0; i < nb_pkts; i++) {
 		struct rte_mbuf *m = tx_pkts[i];
+		unsigned int max_nb_header_segs = 0;
 		int ret;
 
-		ret = sfc_dp_tx_prepare_pkt(m, 0, txq->max_fill_level, 0, 0);
+		ret = sfc_dp_tx_prepare_pkt(m, max_nb_header_segs, 0,
+					    0, txq->max_fill_level, 0, 0);
 		if (unlikely(ret != 0)) {
 			rte_errno = ret;
 			break;
diff --git a/drivers/net/sfc/sfc_ef10_tx.c b/drivers/net/sfc/sfc_ef10_tx.c
index 6fb4ac88a8..961689dc34 100644
--- a/drivers/net/sfc/sfc_ef10_tx.c
+++ b/drivers/net/sfc/sfc_ef10_tx.c
@@ -352,7 +352,7 @@ sfc_ef10_prepare_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 			}
 		}
 #endif
-		ret = sfc_dp_tx_prepare_pkt(m,
+		ret = sfc_dp_tx_prepare_pkt(m, 0, SFC_TSOH_STD_LEN,
 				txq->tso_tcp_header_offset_limit,
 				txq->max_fill_level,
 				SFC_EF10_TSO_OPT_DESCS_NUM, 0);
diff --git a/drivers/net/sfc/sfc_tx.c b/drivers/net/sfc/sfc_tx.c
index 4ea614816a..d50d49ca56 100644
--- a/drivers/net/sfc/sfc_tx.c
+++ b/drivers/net/sfc/sfc_tx.c
@@ -718,7 +718,7 @@ sfc_efx_prepare_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 		 * insertion offload is requested regardless the offload
 		 * requested/supported.
 		 */
-		ret = sfc_dp_tx_prepare_pkt(tx_pkts[i],
+		ret = sfc_dp_tx_prepare_pkt(tx_pkts[i], 0, SFC_TSOH_STD_LEN,
 				encp->enc_tx_tso_tcp_header_offset_limit,
 				txq->max_fill_level, EFX_TX_FATSOV2_OPT_NDESCS,
 				1);
-- 
2.17.1
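The header-segment walk added in sfc_dp_tx_pkt_extra_hdr_segs() above can be exercised standalone. The struct below is a simplified stand-in for an mbuf segment (data length plus next pointer), not the real rte_mbuf:

```c
#include <assert.h>
#include <stddef.h>

/* Simplified stand-in for an mbuf segment. */
struct seg {
	unsigned int data_len;
	struct seg *next;
};

/*
 * Advance through the chain while the remaining header length exceeds
 * the current segment, counting how many extra segments the header
 * spills into (mirrors sfc_dp_tx_pkt_extra_hdr_segs()). On return,
 * *s points at the segment where the header ends and *hdr_remaining
 * holds the part of the header located in that segment.
 */
static unsigned int extra_hdr_segs(struct seg **s, unsigned int *hdr_remaining)
{
	unsigned int extra = 0;

	while ((*s)->data_len < *hdr_remaining) {
		*hdr_remaining -= (*s)->data_len;
		*s = (*s)->next;
		extra++;
	}
	return extra;
}
```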



* [dpdk-dev] [PATCH 22/36] net/sfc: support tunnels for EF100 native Tx datapath
  2020-10-13 13:45 [dpdk-dev] [PATCH 00/36] net/sfc: add EF100 support Andrew Rybchenko
                   ` (20 preceding siblings ...)
  2020-10-13 13:45 ` [dpdk-dev] [PATCH 21/36] net/sfc: add header segments check for EF100 Tx datapath Andrew Rybchenko
@ 2020-10-13 13:45 ` Andrew Rybchenko
  2020-10-13 13:45 ` [dpdk-dev] [PATCH 23/36] net/sfc: support TSO for EF100 native datapath Andrew Rybchenko
                   ` (14 subsequent siblings)
  36 siblings, 0 replies; 43+ messages in thread
From: Andrew Rybchenko @ 2020-10-13 13:45 UTC (permalink / raw)
  To: dev

Add support for outer IPv4/UDP and inner IPv4/UDP/TCP checksum offloads.
Use partial checksum offload for inner TCP/UDP offload.

Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
---
 doc/guides/nics/sfc_efx.rst    |  2 +-
 drivers/net/sfc/sfc_ef100_tx.c | 93 ++++++++++++++++++++++++++++++++--
 2 files changed, 90 insertions(+), 5 deletions(-)

diff --git a/doc/guides/nics/sfc_efx.rst b/doc/guides/nics/sfc_efx.rst
index 0e32d0c6d9..f3135fdd70 100644
--- a/doc/guides/nics/sfc_efx.rst
+++ b/doc/guides/nics/sfc_efx.rst
@@ -329,7 +329,7 @@ boolean parameters value.
   is even more faster then **ef10** but does not support multi-segment
   mbufs, disallows multiple mempools and neglects mbuf reference counters.
   **ef100** chooses EF100 native datapath which supports multi-segment
-  mbufs, IPv4 and TCP/UDP checksum offloads.
+  mbufs, inner/outer IPv4 and TCP/UDP checksum offloads.
 
 - ``perf_profile`` [auto|throughput|low-latency] (default **throughput**)
 
diff --git a/drivers/net/sfc/sfc_ef100_tx.c b/drivers/net/sfc/sfc_ef100_tx.c
index 0dba5c8eee..20d4d1cf9c 100644
--- a/drivers/net/sfc/sfc_ef100_tx.c
+++ b/drivers/net/sfc/sfc_ef100_tx.c
@@ -11,6 +11,7 @@
 
 #include <rte_mbuf.h>
 #include <rte_io.h>
+#include <rte_net.h>
 
 #include "efx.h"
 #include "efx_types.h"
@@ -96,8 +97,21 @@ sfc_ef100_tx_prepare_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 	for (i = 0; i < nb_pkts; i++) {
 		struct rte_mbuf *m = tx_pkts[i];
 		unsigned int max_nb_header_segs = 0;
+		bool calc_phdr_cksum = false;
 		int ret;
 
+		/*
+		 * Partial checksum offload is used in the case of
+		 * inner TCP/UDP checksum offload. It requires the
+		 * pseudo-header checksum, which is calculated below,
+		 * and needs contiguous packet headers.
+		 */
+		if ((m->ol_flags & PKT_TX_TUNNEL_MASK) &&
+		    (m->ol_flags & PKT_TX_L4_MASK)) {
+			calc_phdr_cksum = true;
+			max_nb_header_segs = 1;
+		}
+
 		ret = sfc_dp_tx_prepare_pkt(m, max_nb_header_segs, 0,
 					    0, txq->max_fill_level, 0, 0);
 		if (unlikely(ret != 0)) {
@@ -109,6 +123,19 @@ sfc_ef100_tx_prepare_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 			rte_errno = EINVAL;
 			break;
 		}
+
+		if (calc_phdr_cksum) {
+			/*
+			 * Full checksum offload does IPv4 header checksum
+			 * and does not require any assistance.
+			 */
+			ret = rte_net_intel_cksum_flags_prepare(m,
+					m->ol_flags & ~PKT_TX_IP_CKSUM);
+			if (unlikely(ret != 0)) {
+				rte_errno = -ret;
+				break;
+			}
+		}
 	}
 
 	return i;
@@ -215,19 +242,75 @@ sfc_ef100_tx_reap(struct sfc_ef100_txq *txq)
 	sfc_ef100_tx_reap_num_descs(txq, sfc_ef100_tx_process_events(txq));
 }
 
+static uint8_t
+sfc_ef100_tx_qdesc_cso_inner_l3(uint64_t tx_tunnel)
+{
+	uint8_t inner_l3;
+
+	switch (tx_tunnel) {
+	case PKT_TX_TUNNEL_VXLAN:
+		inner_l3 = ESE_GZ_TX_DESC_CS_INNER_L3_VXLAN;
+		break;
+	case PKT_TX_TUNNEL_GENEVE:
+		inner_l3 = ESE_GZ_TX_DESC_CS_INNER_L3_GENEVE;
+		break;
+	default:
+		inner_l3 = ESE_GZ_TX_DESC_CS_INNER_L3_OFF;
+		break;
+	}
+	return inner_l3;
+}
+
 static void
 sfc_ef100_tx_qdesc_send_create(const struct rte_mbuf *m, efx_oword_t *tx_desc)
 {
 	bool outer_l3;
 	bool outer_l4;
+	uint8_t inner_l3;
+	uint8_t partial_en;
+	uint16_t part_cksum_w;
+	uint16_t l4_offset_w;
+
+	if ((m->ol_flags & PKT_TX_TUNNEL_MASK) == 0) {
+		outer_l3 = (m->ol_flags & PKT_TX_IP_CKSUM);
+		outer_l4 = (m->ol_flags & PKT_TX_L4_MASK);
+		inner_l3 = ESE_GZ_TX_DESC_CS_INNER_L3_OFF;
+		partial_en = ESE_GZ_TX_DESC_CSO_PARTIAL_EN_OFF;
+		part_cksum_w = 0;
+		l4_offset_w = 0;
+	} else {
+		outer_l3 = (m->ol_flags & PKT_TX_OUTER_IP_CKSUM);
+		outer_l4 = (m->ol_flags & PKT_TX_OUTER_UDP_CKSUM);
+		inner_l3 = sfc_ef100_tx_qdesc_cso_inner_l3(m->ol_flags &
+							   PKT_TX_TUNNEL_MASK);
+
+		switch (m->ol_flags & PKT_TX_L4_MASK) {
+		case PKT_TX_TCP_CKSUM:
+			partial_en = ESE_GZ_TX_DESC_CSO_PARTIAL_EN_TCP;
+			part_cksum_w = offsetof(struct rte_tcp_hdr, cksum) >> 1;
+			break;
+		case PKT_TX_UDP_CKSUM:
+			partial_en = ESE_GZ_TX_DESC_CSO_PARTIAL_EN_UDP;
+			part_cksum_w = offsetof(struct rte_udp_hdr,
+						dgram_cksum) >> 1;
+			break;
+		default:
+			partial_en = ESE_GZ_TX_DESC_CSO_PARTIAL_EN_OFF;
+			part_cksum_w = 0;
+			break;
+		}
+		l4_offset_w = (m->outer_l2_len + m->outer_l3_len +
+				m->l2_len + m->l3_len) >> 1;
+	}
 
-	outer_l3 = (m->ol_flags & PKT_TX_IP_CKSUM);
-	outer_l4 = (m->ol_flags & PKT_TX_L4_MASK);
-
-	EFX_POPULATE_OWORD_6(*tx_desc,
+	EFX_POPULATE_OWORD_10(*tx_desc,
 			ESF_GZ_TX_SEND_ADDR, rte_mbuf_data_iova(m),
 			ESF_GZ_TX_SEND_LEN, rte_pktmbuf_data_len(m),
 			ESF_GZ_TX_SEND_NUM_SEGS, m->nb_segs,
+			ESF_GZ_TX_SEND_CSO_PARTIAL_START_W, l4_offset_w,
+			ESF_GZ_TX_SEND_CSO_PARTIAL_CSUM_W, part_cksum_w,
+			ESF_GZ_TX_SEND_CSO_PARTIAL_EN, partial_en,
+			ESF_GZ_TX_SEND_CSO_INNER_L3, inner_l3,
 			ESF_GZ_TX_SEND_CSO_OUTER_L3, outer_l3,
 			ESF_GZ_TX_SEND_CSO_OUTER_L4, outer_l4,
 			ESF_GZ_TX_DESC_TYPE, ESE_GZ_TX_DESC_TYPE_SEND);
@@ -603,6 +686,8 @@ struct sfc_dp_tx sfc_ef100_tx = {
 	.features		= SFC_DP_TX_FEAT_MULTI_PROCESS,
 	.dev_offload_capa	= 0,
 	.queue_offload_capa	= DEV_TX_OFFLOAD_IPV4_CKSUM |
+				  DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+				  DEV_TX_OFFLOAD_OUTER_UDP_CKSUM |
 				  DEV_TX_OFFLOAD_UDP_CKSUM |
 				  DEV_TX_OFFLOAD_TCP_CKSUM |
 				  DEV_TX_OFFLOAD_MULTI_SEGS,
-- 
2.17.1
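The partial checksum descriptor fields above are programmed in 16-bit words: where the inner L4 header starts and where its checksum field sits within it. A standalone sketch of that arithmetic follows; the header layouts below merely mirror the field order of rte_tcp_hdr/rte_udp_hdr (an assumption for illustration, not the DPDK definitions):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/*
 * Minimal header layouts with the same field order as rte_tcp_hdr and
 * rte_udp_hdr -- just enough to take offsetof() on the checksum fields.
 */
struct tcp_hdr {
	uint16_t src_port;
	uint16_t dst_port;
	uint32_t sent_seq;
	uint32_t recv_ack;
	uint8_t data_off;
	uint8_t tcp_flags;
	uint16_t rx_win;
	uint16_t cksum;
	uint16_t tcp_urp;
};

struct udp_hdr {
	uint16_t src_port;
	uint16_t dst_port;
	uint16_t dgram_len;
	uint16_t dgram_cksum;
};

/* Checksum field position within the L4 header, in 16-bit words. */
static uint16_t part_cksum_w_tcp(void)
{
	return offsetof(struct tcp_hdr, cksum) >> 1;
}

static uint16_t part_cksum_w_udp(void)
{
	return offsetof(struct udp_hdr, dgram_cksum) >> 1;
}

/* Start of the inner L4 header, in 16-bit words from packet start. */
static uint16_t l4_offset_w(uint16_t outer_l2, uint16_t outer_l3,
			    uint16_t l2, uint16_t l3)
{
	return (uint16_t)((outer_l2 + outer_l3 + l2 + l3) >> 1);
}
```

For example, an Ethernet/IPv4 outer and inner header pair (14 + 20 bytes each) places the inner L4 header at word 34.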



* [dpdk-dev] [PATCH 23/36] net/sfc: support TSO for EF100 native datapath
  2020-10-13 13:45 [dpdk-dev] [PATCH 00/36] net/sfc: add EF100 support Andrew Rybchenko
                   ` (21 preceding siblings ...)
  2020-10-13 13:45 ` [dpdk-dev] [PATCH 22/36] net/sfc: support tunnels for EF100 native " Andrew Rybchenko
@ 2020-10-13 13:45 ` Andrew Rybchenko
  2020-10-13 13:45 ` [dpdk-dev] [PATCH 24/36] net/sfc: support tunnel TSO for EF100 native Tx datapath Andrew Rybchenko
                   ` (13 subsequent siblings)
  36 siblings, 0 replies; 43+ messages in thread
From: Andrew Rybchenko @ 2020-10-13 13:45 UTC (permalink / raw)
  To: dev; +Cc: Ivan Malov

From: Ivan Malov <ivan.malov@oktetlabs.ru>

Riverhead boards support TSO version 3.

Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
---
 doc/guides/nics/sfc_efx.rst    |   2 +-
 drivers/net/sfc/sfc.c          |   5 +-
 drivers/net/sfc/sfc_dp_tx.h    |  10 ++
 drivers/net/sfc/sfc_ef100_tx.c | 266 ++++++++++++++++++++++++++++-----
 drivers/net/sfc/sfc_tx.c       |  14 +-
 5 files changed, 257 insertions(+), 40 deletions(-)

diff --git a/doc/guides/nics/sfc_efx.rst b/doc/guides/nics/sfc_efx.rst
index f3135fdd70..104ab38aa9 100644
--- a/doc/guides/nics/sfc_efx.rst
+++ b/doc/guides/nics/sfc_efx.rst
@@ -329,7 +329,7 @@ boolean parameters value.
   is even more faster then **ef10** but does not support multi-segment
   mbufs, disallows multiple mempools and neglects mbuf reference counters.
   **ef100** chooses EF100 native datapath which supports multi-segment
-  mbufs, inner/outer IPv4 and TCP/UDP checksum offloads.
+  mbufs, inner/outer IPv4 and TCP/UDP checksum and TCP segmentation offloads.
 
 - ``perf_profile`` [auto|throughput|low-latency] (default **throughput**)
 
diff --git a/drivers/net/sfc/sfc.c b/drivers/net/sfc/sfc.c
index cfba485ad2..b41db65003 100644
--- a/drivers/net/sfc/sfc.c
+++ b/drivers/net/sfc/sfc.c
@@ -205,7 +205,7 @@ sfc_estimate_resource_limits(struct sfc_adapter *sa)
 		MIN(encp->enc_txq_limit,
 		    limits.edl_max_evq_count - 1 - limits.edl_max_rxq_count);
 
-	if (sa->tso)
+	if (sa->tso && encp->enc_fw_assisted_tso_v2_enabled)
 		limits.edl_max_txq_count =
 			MIN(limits.edl_max_txq_count,
 			    encp->enc_fw_assisted_tso_v2_n_contexts /
@@ -795,7 +795,8 @@ sfc_attach(struct sfc_adapter *sa)
 		encp->enc_tunnel_encapsulations_supported;
 
 	if (sfc_dp_tx_offload_capa(sa->priv.dp_tx) & DEV_TX_OFFLOAD_TCP_TSO) {
-		sa->tso = encp->enc_fw_assisted_tso_v2_enabled;
+		sa->tso = encp->enc_fw_assisted_tso_v2_enabled ||
+			  encp->enc_tso_v3_enabled;
 		if (!sa->tso)
 			sfc_info(sa, "TSO support isn't available on this adapter");
 	}
diff --git a/drivers/net/sfc/sfc_dp_tx.h b/drivers/net/sfc/sfc_dp_tx.h
index bed8ce84aa..3ecdfcdd28 100644
--- a/drivers/net/sfc/sfc_dp_tx.h
+++ b/drivers/net/sfc/sfc_dp_tx.h
@@ -70,6 +70,16 @@ struct sfc_dp_tx_qcreate_info {
 	 * the hardware to apply TSO packet edits.
 	 */
 	uint16_t		tso_tcp_header_offset_limit;
+	/** Maximum number of header DMA descriptors per TSOv3 transaction */
+	uint16_t		tso_max_nb_header_descs;
+	/** Maximum header length acceptable by TSOv3 transaction */
+	uint16_t		tso_max_header_len;
+	/** Maximum number of payload DMA descriptors per TSOv3 transaction */
+	uint16_t		tso_max_nb_payload_descs;
+	/** Maximum payload length per TSOv3 transaction */
+	uint32_t		tso_max_payload_len;
+	/** Maximum number of frames to be generated per TSOv3 transaction */
+	uint32_t		tso_max_nb_outgoing_frames;
 };
 
 /**
diff --git a/drivers/net/sfc/sfc_ef100_tx.c b/drivers/net/sfc/sfc_ef100_tx.c
index 20d4d1cf9c..5ad0813a9b 100644
--- a/drivers/net/sfc/sfc_ef100_tx.c
+++ b/drivers/net/sfc/sfc_ef100_tx.c
@@ -77,6 +77,13 @@ struct sfc_ef100_txq {
 	unsigned int			evq_phase_bit_shift;
 	volatile efx_qword_t		*evq_hw_ring;
 
+	uint16_t			tso_tcp_header_offset_limit;
+	uint16_t			tso_max_nb_header_descs;
+	uint16_t			tso_max_header_len;
+	uint16_t			tso_max_nb_payload_descs;
+	uint32_t			tso_max_payload_len;
+	uint32_t			tso_max_nb_outgoing_frames;
+
 	/* Datapath transmit queue anchor */
 	struct sfc_dp_txq		dp;
 };
@@ -87,6 +94,42 @@ sfc_ef100_txq_by_dp_txq(struct sfc_dp_txq *dp_txq)
 	return container_of(dp_txq, struct sfc_ef100_txq, dp);
 }
 
+static int
+sfc_ef100_tx_prepare_pkt_tso(struct sfc_ef100_txq * const txq,
+			     struct rte_mbuf *m)
+{
+	size_t header_len = m->l2_len + m->l3_len + m->l4_len;
+	size_t payload_len = m->pkt_len - header_len;
+	unsigned long mss_conformant_max_payload_len;
+	unsigned int nb_payload_descs;
+
+	mss_conformant_max_payload_len =
+		m->tso_segsz * txq->tso_max_nb_outgoing_frames;
+
+	/*
+	 * We do not really need the exact number of payload segments.
+	 * Just use the total number of segments as an upper limit.
+	 * In practice, the maximum number of payload segments is
+	 * significantly bigger than the maximum number of header
+	 * segments, so header segments may be neglected when the
+	 * number of payload segments required is estimated.
+	 */
+	nb_payload_descs = m->nb_segs;
+
+	/*
+	 * Carry out multiple independent checks using bitwise OR
+	 * to avoid unnecessary conditional branching.
+	 */
+	if (unlikely((header_len > txq->tso_max_header_len) |
+		     (nb_payload_descs > txq->tso_max_nb_payload_descs) |
+		     (payload_len > txq->tso_max_payload_len) |
+		     (payload_len > mss_conformant_max_payload_len) |
+		     (m->pkt_len == header_len)))
+		return EINVAL;
+
+	return 0;
+}
+
 static uint16_t
 sfc_ef100_tx_prepare_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 			  uint16_t nb_pkts)
@@ -110,16 +153,25 @@ sfc_ef100_tx_prepare_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 		    (m->ol_flags & PKT_TX_L4_MASK)) {
 			calc_phdr_cksum = true;
 			max_nb_header_segs = 1;
+		} else if (m->ol_flags & PKT_TX_TCP_SEG) {
+			max_nb_header_segs = txq->tso_max_nb_header_descs;
 		}
 
 		ret = sfc_dp_tx_prepare_pkt(m, max_nb_header_segs, 0,
-					    0, txq->max_fill_level, 0, 0);
+					    txq->tso_tcp_header_offset_limit,
+					    txq->max_fill_level, 1, 0);
 		if (unlikely(ret != 0)) {
 			rte_errno = ret;
 			break;
 		}
 
-		if (m->nb_segs > EFX_MASK32(ESF_GZ_TX_SEND_NUM_SEGS)) {
+		if (m->ol_flags & PKT_TX_TCP_SEG) {
+			ret = sfc_ef100_tx_prepare_pkt_tso(txq, m);
+			if (unlikely(ret != 0)) {
+				rte_errno = ret;
+				break;
+			}
+		} else if (m->nb_segs > EFX_MASK32(ESF_GZ_TX_SEND_NUM_SEGS)) {
 			rte_errno = EINVAL;
 			break;
 		}
@@ -326,6 +378,48 @@ sfc_ef100_tx_qdesc_seg_create(rte_iova_t addr, uint16_t len,
 			ESF_GZ_TX_DESC_TYPE, ESE_GZ_TX_DESC_TYPE_SEG);
 }
 
+static void
+sfc_ef100_tx_qdesc_tso_create(const struct rte_mbuf *m,
+			      uint16_t nb_header_descs,
+			      uint16_t nb_payload_descs,
+			      size_t header_len, size_t payload_len,
+			      size_t iph_off, size_t tcph_off,
+			      efx_oword_t *tx_desc)
+{
+	efx_oword_t tx_desc_extra_fields;
+	/*
+	 * If no tunnel encapsulation is present, then the ED_INNER
+	 * fields should be used.
+	 */
+	int ed_inner_ip_id = ESE_GZ_TX_DESC_IP4_ID_INC_MOD16;
+
+	EFX_POPULATE_OWORD_7(*tx_desc,
+			ESF_GZ_TX_TSO_MSS, m->tso_segsz,
+			ESF_GZ_TX_TSO_HDR_NUM_SEGS, nb_header_descs,
+			ESF_GZ_TX_TSO_PAYLOAD_NUM_SEGS, nb_payload_descs,
+			ESF_GZ_TX_TSO_ED_INNER_IP4_ID, ed_inner_ip_id,
+			ESF_GZ_TX_TSO_ED_INNER_IP_LEN, 1,
+			ESF_GZ_TX_TSO_HDR_LEN_W, header_len >> 1,
+			ESF_GZ_TX_TSO_PAYLOAD_LEN, payload_len);
+
+	EFX_POPULATE_OWORD_5(tx_desc_extra_fields,
+			/*
+			 * Inner offsets are required for inner IPv4 ID
+			 * and IP length edits.
+			 */
+			ESF_GZ_TX_TSO_INNER_L3_OFF_W, iph_off >> 1,
+			ESF_GZ_TX_TSO_INNER_L4_OFF_W, tcph_off >> 1,
+			/*
+			 * Use outer full checksum offloads which do
+			 * not require any extra information.
+			 */
+			ESF_GZ_TX_TSO_CSO_OUTER_L3, 1,
+			ESF_GZ_TX_TSO_CSO_OUTER_L4, 1,
+			ESF_GZ_TX_DESC_TYPE, ESE_GZ_TX_DESC_TYPE_TSO);
+
+	EFX_OR_OWORD(*tx_desc, tx_desc_extra_fields);
+}
+
 static inline void
 sfc_ef100_tx_qpush(struct sfc_ef100_txq *txq, unsigned int added)
 {
@@ -351,30 +445,115 @@ sfc_ef100_tx_qpush(struct sfc_ef100_txq *txq, unsigned int added)
 static unsigned int
 sfc_ef100_tx_pkt_descs_max(const struct rte_mbuf *m)
 {
+	unsigned int extra_descs = 0;
+
 /** Maximum length of an mbuf segment data */
 #define SFC_MBUF_SEG_LEN_MAX		UINT16_MAX
 	RTE_BUILD_BUG_ON(sizeof(m->data_len) != 2);
 
+	if (m->ol_flags & PKT_TX_TCP_SEG) {
+		/* Tx TSO descriptor */
+		extra_descs++;
+		/*
+		 * Extra Tx segment descriptor may be required if header
+		 * ends in the middle of segment.
+		 */
+		extra_descs++;
+	} else {
+		/*
+		 * mbuf segment cannot be bigger than maximum segment length
+		 * and maximum packet length since TSO is not requested.
+		 * Make sure that the first segment does not need fragmentation
+		 * (split into many Tx descriptors).
+		 */
+		RTE_BUILD_BUG_ON(SFC_EF100_TX_SEND_DESC_LEN_MAX <
+				 RTE_MIN((unsigned int)EFX_MAC_PDU_MAX,
+				 SFC_MBUF_SEG_LEN_MAX));
+	}
+
 	/*
-	 * mbuf segment cannot be bigger than maximum segnment length and
-	 * maximum packet length since TSO is not supported yet.
-	 * Make sure that the first segment does not need fragmentation
-	 * (split into many Tx descriptors).
+	 * Any segment of scattered packet cannot be bigger than maximum
+	 * segment length. Make sure that subsequent segments do not need
+	 * fragmentation (split into many Tx descriptors).
 	 */
-	RTE_BUILD_BUG_ON(SFC_EF100_TX_SEND_DESC_LEN_MAX <
-		RTE_MIN((unsigned int)EFX_MAC_PDU_MAX, SFC_MBUF_SEG_LEN_MAX));
+	RTE_BUILD_BUG_ON(SFC_EF100_TX_SEG_DESC_LEN_MAX < SFC_MBUF_SEG_LEN_MAX);
+
+	return m->nb_segs + extra_descs;
+}
+
+static struct rte_mbuf *
+sfc_ef100_xmit_tso_pkt(struct sfc_ef100_txq * const txq,
+		       struct rte_mbuf *m, unsigned int *added)
+{
+	struct rte_mbuf *m_seg = m;
+	unsigned int nb_hdr_descs;
+	unsigned int nb_pld_descs;
+	unsigned int seg_split = 0;
+	unsigned int tso_desc_id;
+	unsigned int id;
+	size_t iph_off;
+	size_t tcph_off;
+	size_t header_len;
+	size_t remaining_hdr_len;
+
+	iph_off = m->l2_len;
+	tcph_off = iph_off + m->l3_len;
+	header_len = tcph_off + m->l4_len;
 
 	/*
-	 * Any segment of scattered packet cannot be bigger than maximum
-	 * segment length and maximum packet legnth since TSO is not
-	 * supported yet.
-	 * Make sure that subsequent segments do not need fragmentation (split
-	 * into many Tx descriptors).
+	 * Remember the ID of the TX_TSO descriptor to be filled in.
+	 * We can't fill it in right now since we need to calculate
+	 * the number of header and payload segments first and don't
+	 * want to traverse the mbuf chain twice here.
+	 */
+	tso_desc_id = (*added)++ & txq->ptr_mask;
+
+	remaining_hdr_len = header_len;
+	do {
+		id = (*added)++ & txq->ptr_mask;
+		if (rte_pktmbuf_data_len(m_seg) <= remaining_hdr_len) {
+			/* The segment belongs entirely to the header */
+			sfc_ef100_tx_qdesc_seg_create(
+				rte_mbuf_data_iova(m_seg),
+				rte_pktmbuf_data_len(m_seg),
+				&txq->txq_hw_ring[id]);
+			remaining_hdr_len -= rte_pktmbuf_data_len(m_seg);
+		} else {
+			/*
+			 * The segment must be split into header and
+			 * payload segments
+			 */
+			sfc_ef100_tx_qdesc_seg_create(
+				rte_mbuf_data_iova(m_seg),
+				remaining_hdr_len,
+				&txq->txq_hw_ring[id]);
+			SFC_ASSERT(txq->sw_ring[id].mbuf == NULL);
+
+			id = (*added)++ & txq->ptr_mask;
+			sfc_ef100_tx_qdesc_seg_create(
+				rte_mbuf_data_iova(m_seg) + remaining_hdr_len,
+				rte_pktmbuf_data_len(m_seg) - remaining_hdr_len,
+				&txq->txq_hw_ring[id]);
+			remaining_hdr_len = 0;
+			seg_split = 1;
+		}
+		txq->sw_ring[id].mbuf = m_seg;
+		m_seg = m_seg->next;
+	} while (remaining_hdr_len > 0);
+
+	/*
+	 * If a segment is split into header and payload parts, the
+	 * counter behind the added pointer counts it twice, and we
+	 * must correct for that.
 	 */
-	RTE_BUILD_BUG_ON(SFC_EF100_TX_SEG_DESC_LEN_MAX <
-		RTE_MIN((unsigned int)EFX_MAC_PDU_MAX, SFC_MBUF_SEG_LEN_MAX));
+	nb_hdr_descs = ((id - tso_desc_id) & txq->ptr_mask) - seg_split;
+	nb_pld_descs = m->nb_segs - nb_hdr_descs + seg_split;
+
+	sfc_ef100_tx_qdesc_tso_create(m, nb_hdr_descs, nb_pld_descs, header_len,
+				      rte_pktmbuf_pkt_len(m) - header_len,
+				      iph_off, tcph_off,
+				      &txq->txq_hw_ring[tso_desc_id]);
 
-	return m->nb_segs;
+	return m_seg;
 }
 
 static uint16_t
@@ -428,27 +607,33 @@ sfc_ef100_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 				break;
 		}
 
-		id = added++ & txq->ptr_mask;
-		sfc_ef100_tx_qdesc_send_create(m_seg, &txq->txq_hw_ring[id]);
+		if (m_seg->ol_flags & PKT_TX_TCP_SEG) {
+			m_seg = sfc_ef100_xmit_tso_pkt(txq, m_seg, &added);
+		} else {
+			id = added++ & txq->ptr_mask;
+			sfc_ef100_tx_qdesc_send_create(m_seg,
+						       &txq->txq_hw_ring[id]);
 
-		/*
-		 * rte_pktmbuf_free() is commonly used in DPDK for
-		 * recycling packets - the function checks every
-		 * segment's reference counter and returns the
-		 * buffer to its pool whenever possible;
-		 * nevertheless, freeing mbuf segments one by one
-		 * may entail some performance decline;
-		 * from this point, sfc_efx_tx_reap() does the same job
-		 * on its own and frees buffers in bulks (all mbufs
-		 * within a bulk belong to the same pool);
-		 * from this perspective, individual segment pointers
-		 * must be associated with the corresponding SW
-		 * descriptors independently so that only one loop
-		 * is sufficient on reap to inspect all the buffers
-		 */
-		txq->sw_ring[id].mbuf = m_seg;
+			/*
+			 * rte_pktmbuf_free() is commonly used in DPDK for
+			 * recycling packets - the function checks every
+			 * segment's reference counter and returns the
+			 * buffer to its pool whenever possible;
+			 * nevertheless, freeing mbuf segments one by one
+			 * may entail some performance decline;
+			 * from this point, sfc_efx_tx_reap() does the same job
+			 * on its own and frees buffers in bulks (all mbufs
+			 * within a bulk belong to the same pool);
+			 * from this perspective, individual segment pointers
+			 * must be associated with the corresponding SW
+			 * descriptors independently so that only one loop
+			 * is sufficient on reap to inspect all the buffers
+			 */
+			txq->sw_ring[id].mbuf = m_seg;
+			m_seg = m_seg->next;
+		}
 
-		while ((m_seg = m_seg->next) != NULL) {
+		while (m_seg != NULL) {
 			RTE_BUILD_BUG_ON(SFC_MBUF_SEG_LEN_MAX >
 					 SFC_EF100_TX_SEG_DESC_LEN_MAX);
 
@@ -457,6 +642,7 @@ sfc_ef100_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 					rte_pktmbuf_data_len(m_seg),
 					&txq->txq_hw_ring[id]);
 			txq->sw_ring[id].mbuf = m_seg;
+			m_seg = m_seg->next;
 		}
 
 		dma_desc_space -= (added - pkt_start);
@@ -552,6 +738,13 @@ sfc_ef100_tx_qcreate(uint16_t port_id, uint16_t queue_id,
 			(info->hw_index << info->vi_window_shift);
 	txq->evq_hw_ring = info->evq_hw_ring;
 
+	txq->tso_tcp_header_offset_limit = info->tso_tcp_header_offset_limit;
+	txq->tso_max_nb_header_descs = info->tso_max_nb_header_descs;
+	txq->tso_max_header_len = info->tso_max_header_len;
+	txq->tso_max_nb_payload_descs = info->tso_max_nb_payload_descs;
+	txq->tso_max_payload_len = info->tso_max_payload_len;
+	txq->tso_max_nb_outgoing_frames = info->tso_max_nb_outgoing_frames;
+
 	sfc_ef100_tx_debug(txq, "TxQ doorbell is %p", txq->doorbell);
 
 	*dp_txqp = &txq->dp;
@@ -690,7 +883,8 @@ struct sfc_dp_tx sfc_ef100_tx = {
 				  DEV_TX_OFFLOAD_OUTER_UDP_CKSUM |
 				  DEV_TX_OFFLOAD_UDP_CKSUM |
 				  DEV_TX_OFFLOAD_TCP_CKSUM |
-				  DEV_TX_OFFLOAD_MULTI_SEGS,
+				  DEV_TX_OFFLOAD_MULTI_SEGS |
+				  DEV_TX_OFFLOAD_TCP_TSO,
 	.get_dev_info		= sfc_ef100_get_dev_info,
 	.qsize_up_rings		= sfc_ef100_tx_qsize_up_rings,
 	.qcreate		= sfc_ef100_tx_qcreate,
diff --git a/drivers/net/sfc/sfc_tx.c b/drivers/net/sfc/sfc_tx.c
index d50d49ca56..7a8495efc7 100644
--- a/drivers/net/sfc/sfc_tx.c
+++ b/drivers/net/sfc/sfc_tx.c
@@ -188,6 +188,17 @@ sfc_tx_qinit(struct sfc_adapter *sa, unsigned int sw_index,
 	info.vi_window_shift = encp->enc_vi_window_shift;
 	info.tso_tcp_header_offset_limit =
 		encp->enc_tx_tso_tcp_header_offset_limit;
+	info.tso_max_nb_header_descs =
+		RTE_MIN(encp->enc_tx_tso_max_header_ndescs,
+			(uint32_t)UINT16_MAX);
+	info.tso_max_header_len =
+		RTE_MIN(encp->enc_tx_tso_max_header_length,
+			(uint32_t)UINT16_MAX);
+	info.tso_max_nb_payload_descs =
+		RTE_MIN(encp->enc_tx_tso_max_payload_ndescs,
+			(uint32_t)UINT16_MAX);
+	info.tso_max_payload_len = encp->enc_tx_tso_max_payload_length;
+	info.tso_max_nb_outgoing_frames = encp->enc_tx_tso_max_nframes;
 
 	rc = sa->priv.dp_tx->qcreate(sa->eth_dev->data->port_id, sw_index,
 				     &RTE_ETH_DEV_TO_PCI(sa->eth_dev)->addr,
@@ -592,7 +603,8 @@ sfc_tx_start(struct sfc_adapter *sa)
 	sfc_log_init(sa, "txq_count = %u", sas->txq_count);
 
 	if (sa->tso) {
-		if (!encp->enc_fw_assisted_tso_v2_enabled) {
+		if (!encp->enc_fw_assisted_tso_v2_enabled &&
+		    !encp->enc_tso_v3_enabled) {
 			sfc_warn(sa, "TSO support was unable to be restored");
 			sa->tso = B_FALSE;
 			sa->tso_encap = B_FALSE;
-- 
2.17.1


^ permalink raw reply	[flat|nested] 43+ messages in thread

* [dpdk-dev] [PATCH 24/36] net/sfc: support tunnel TSO for EF100 native Tx datapath
  2020-10-13 13:45 [dpdk-dev] [PATCH 00/36] net/sfc: add EF100 support Andrew Rybchenko
                   ` (22 preceding siblings ...)
  2020-10-13 13:45 ` [dpdk-dev] [PATCH 23/36] net/sfc: support TSO for EF100 native datapath Andrew Rybchenko
@ 2020-10-13 13:45 ` Andrew Rybchenko
  2020-10-13 13:45 ` [dpdk-dev] [PATCH 25/36] net/sfc: support Tx VLAN insertion offload for EF100 Andrew Rybchenko
                   ` (12 subsequent siblings)
  36 siblings, 0 replies; 43+ messages in thread
From: Andrew Rybchenko @ 2020-10-13 13:45 UTC (permalink / raw)
  To: dev; +Cc: Ivan Malov

From: Ivan Malov <ivan.malov@oktetlabs.ru>

Handle VXLAN and Geneve TSO on EF100 native Tx datapath.

Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
---
 doc/guides/nics/sfc_efx.rst    |  3 +-
 drivers/net/sfc/sfc.c          |  3 +-
 drivers/net/sfc/sfc_ef100_tx.c | 59 ++++++++++++++++++++++++++++++----
 drivers/net/sfc/sfc_tx.c       |  3 +-
 4 files changed, 59 insertions(+), 9 deletions(-)

diff --git a/doc/guides/nics/sfc_efx.rst b/doc/guides/nics/sfc_efx.rst
index 104ab38aa9..e108043f38 100644
--- a/doc/guides/nics/sfc_efx.rst
+++ b/doc/guides/nics/sfc_efx.rst
@@ -329,7 +329,8 @@ boolean parameters value.
   is even faster than **ef10** but does not support multi-segment
   mbufs, disallows multiple mempools and neglects mbuf reference counters.
   **ef100** chooses EF100 native datapath which supports multi-segment
-  mbufs, inner/outer IPv4 and TCP/UDP checksum and TCP segmentation offloads.
+  mbufs, inner/outer IPv4 and TCP/UDP checksum and TCP segmentation offloads
+  including VXLAN and GENEVE IPv4/IPv6 tunnels.
 
 - ``perf_profile`` [auto|throughput|low-latency] (default **throughput**)
 
diff --git a/drivers/net/sfc/sfc.c b/drivers/net/sfc/sfc.c
index b41db65003..d4478a2846 100644
--- a/drivers/net/sfc/sfc.c
+++ b/drivers/net/sfc/sfc.c
@@ -805,7 +805,8 @@ sfc_attach(struct sfc_adapter *sa)
 	    (sfc_dp_tx_offload_capa(sa->priv.dp_tx) &
 	     (DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
 	      DEV_TX_OFFLOAD_GENEVE_TNL_TSO)) != 0) {
-		sa->tso_encap = encp->enc_fw_assisted_tso_v2_encap_enabled;
+		sa->tso_encap = encp->enc_fw_assisted_tso_v2_encap_enabled ||
+				encp->enc_tso_v3_enabled;
 		if (!sa->tso_encap)
 			sfc_info(sa, "Encapsulated TSO support isn't available on this adapter");
 	}
diff --git a/drivers/net/sfc/sfc_ef100_tx.c b/drivers/net/sfc/sfc_ef100_tx.c
index 5ad0813a9b..a740bc9d55 100644
--- a/drivers/net/sfc/sfc_ef100_tx.c
+++ b/drivers/net/sfc/sfc_ef100_tx.c
@@ -98,11 +98,26 @@ static int
 sfc_ef100_tx_prepare_pkt_tso(struct sfc_ef100_txq * const txq,
 			     struct rte_mbuf *m)
 {
-	size_t header_len = m->l2_len + m->l3_len + m->l4_len;
+	size_t header_len = ((m->ol_flags & PKT_TX_TUNNEL_MASK) ?
+			     m->outer_l2_len + m->outer_l3_len : 0) +
+			    m->l2_len + m->l3_len + m->l4_len;
 	size_t payload_len = m->pkt_len - header_len;
 	unsigned long mss_conformant_max_payload_len;
 	unsigned int nb_payload_descs;
 
+#ifdef RTE_LIBRTE_SFC_EFX_DEBUG
+	switch (m->ol_flags & PKT_TX_TUNNEL_MASK) {
+	case 0:
+		/* FALLTHROUGH */
+	case PKT_TX_TUNNEL_VXLAN:
+		/* FALLTHROUGH */
+	case PKT_TX_TUNNEL_GENEVE:
+		break;
+	default:
+		return ENOTSUP;
+	}
+#endif
+
 	mss_conformant_max_payload_len =
 		m->tso_segsz * txq->tso_max_nb_outgoing_frames;
 
@@ -383,32 +398,52 @@ sfc_ef100_tx_qdesc_tso_create(const struct rte_mbuf *m,
 			      uint16_t nb_header_descs,
 			      uint16_t nb_payload_descs,
 			      size_t header_len, size_t payload_len,
+			      size_t outer_iph_off, size_t outer_udph_off,
 			      size_t iph_off, size_t tcph_off,
 			      efx_oword_t *tx_desc)
 {
 	efx_oword_t tx_desc_extra_fields;
+	int ed_outer_udp_len = (outer_udph_off != 0) ? 1 : 0;
+	int ed_outer_ip_len = (outer_iph_off != 0) ? 1 : 0;
+	int ed_outer_ip_id = (outer_iph_off != 0) ?
+		ESE_GZ_TX_DESC_IP4_ID_INC_MOD16 : 0;
 	/*
 	 * If no tunnel encapsulation is present, then the ED_INNER
 	 * fields should be used.
 	 */
 	int ed_inner_ip_id = ESE_GZ_TX_DESC_IP4_ID_INC_MOD16;
+	uint8_t inner_l3 = sfc_ef100_tx_qdesc_cso_inner_l3(
+					m->ol_flags & PKT_TX_TUNNEL_MASK);
 
-	EFX_POPULATE_OWORD_7(*tx_desc,
+	EFX_POPULATE_OWORD_10(*tx_desc,
 			ESF_GZ_TX_TSO_MSS, m->tso_segsz,
 			ESF_GZ_TX_TSO_HDR_NUM_SEGS, nb_header_descs,
 			ESF_GZ_TX_TSO_PAYLOAD_NUM_SEGS, nb_payload_descs,
+			ESF_GZ_TX_TSO_ED_OUTER_IP4_ID, ed_outer_ip_id,
 			ESF_GZ_TX_TSO_ED_INNER_IP4_ID, ed_inner_ip_id,
+			ESF_GZ_TX_TSO_ED_OUTER_IP_LEN, ed_outer_ip_len,
 			ESF_GZ_TX_TSO_ED_INNER_IP_LEN, 1,
+			ESF_GZ_TX_TSO_ED_OUTER_UDP_LEN, ed_outer_udp_len,
 			ESF_GZ_TX_TSO_HDR_LEN_W, header_len >> 1,
 			ESF_GZ_TX_TSO_PAYLOAD_LEN, payload_len);
 
-	EFX_POPULATE_OWORD_5(tx_desc_extra_fields,
+	EFX_POPULATE_OWORD_9(tx_desc_extra_fields,
+			/*
+			 * Outer offsets are required for outer IPv4 ID
+			 * and length edits in the case of tunnel TSO.
+			 */
+			ESF_GZ_TX_TSO_OUTER_L3_OFF_W, outer_iph_off >> 1,
+			ESF_GZ_TX_TSO_OUTER_L4_OFF_W, outer_udph_off >> 1,
 			/*
 			 * Inner offsets are required for inner IPv4 ID
-			 * and IP length edits.
+			 * and IP length edits and partial checksum
+			 * offload in the case of tunnel TSO.
 			 */
 			ESF_GZ_TX_TSO_INNER_L3_OFF_W, iph_off >> 1,
 			ESF_GZ_TX_TSO_INNER_L4_OFF_W, tcph_off >> 1,
+			ESF_GZ_TX_TSO_CSO_INNER_L4,
+				inner_l3 != ESE_GZ_TX_DESC_CS_INNER_L3_OFF,
+			ESF_GZ_TX_TSO_CSO_INNER_L3, inner_l3,
 			/*
 			 * Use outer full checksum offloads which do
 			 * not require any extra information.
@@ -491,12 +526,21 @@ sfc_ef100_xmit_tso_pkt(struct sfc_ef100_txq * const txq,
 	unsigned int seg_split = 0;
 	unsigned int tso_desc_id;
 	unsigned int id;
+	size_t outer_iph_off;
+	size_t outer_udph_off;
 	size_t iph_off;
 	size_t tcph_off;
 	size_t header_len;
 	size_t remaining_hdr_len;
 
-	iph_off = m->l2_len;
+	if (m->ol_flags & PKT_TX_TUNNEL_MASK) {
+		outer_iph_off = m->outer_l2_len;
+		outer_udph_off = outer_iph_off + m->outer_l3_len;
+	} else {
+		outer_iph_off = 0;
+		outer_udph_off = 0;
+	}
+	iph_off = outer_udph_off + m->l2_len;
 	tcph_off = iph_off + m->l3_len;
 	header_len = tcph_off + m->l4_len;
 
@@ -550,6 +594,7 @@ sfc_ef100_xmit_tso_pkt(struct sfc_ef100_txq * const txq,
 
 	sfc_ef100_tx_qdesc_tso_create(m, nb_hdr_descs, nb_pld_descs, header_len,
 				      rte_pktmbuf_pkt_len(m) - header_len,
+				      outer_iph_off, outer_udph_off,
 				      iph_off, tcph_off,
 				      &txq->txq_hw_ring[tso_desc_id]);
 
@@ -884,7 +929,9 @@ struct sfc_dp_tx sfc_ef100_tx = {
 				  DEV_TX_OFFLOAD_UDP_CKSUM |
 				  DEV_TX_OFFLOAD_TCP_CKSUM |
 				  DEV_TX_OFFLOAD_MULTI_SEGS |
-				  DEV_TX_OFFLOAD_TCP_TSO,
+				  DEV_TX_OFFLOAD_TCP_TSO |
+				  DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
+				  DEV_TX_OFFLOAD_GENEVE_TNL_TSO,
 	.get_dev_info		= sfc_ef100_get_dev_info,
 	.qsize_up_rings		= sfc_ef100_tx_qsize_up_rings,
 	.qcreate		= sfc_ef100_tx_qcreate,
diff --git a/drivers/net/sfc/sfc_tx.c b/drivers/net/sfc/sfc_tx.c
index 7a8495efc7..24602e3d10 100644
--- a/drivers/net/sfc/sfc_tx.c
+++ b/drivers/net/sfc/sfc_tx.c
@@ -611,7 +611,8 @@ sfc_tx_start(struct sfc_adapter *sa)
 		}
 	}
 
-	if (sa->tso_encap && !encp->enc_fw_assisted_tso_v2_encap_enabled) {
+	if (sa->tso_encap && !encp->enc_fw_assisted_tso_v2_encap_enabled &&
+	    !encp->enc_tso_v3_enabled) {
 		sfc_warn(sa, "Encapsulated TSO support was unable to be restored");
 		sa->tso_encap = B_FALSE;
 	}
-- 
2.17.1


^ permalink raw reply	[flat|nested] 43+ messages in thread

* [dpdk-dev] [PATCH 25/36] net/sfc: support Tx VLAN insertion offload for EF100
  2020-10-13 13:45 [dpdk-dev] [PATCH 00/36] net/sfc: add EF100 support Andrew Rybchenko
                   ` (23 preceding siblings ...)
  2020-10-13 13:45 ` [dpdk-dev] [PATCH 24/36] net/sfc: support tunnel TSO for EF100 native Tx datapath Andrew Rybchenko
@ 2020-10-13 13:45 ` Andrew Rybchenko
  2020-10-13 13:45 ` [dpdk-dev] [PATCH 26/36] net/sfc: support Rx checksum " Andrew Rybchenko
                   ` (11 subsequent siblings)
  36 siblings, 0 replies; 43+ messages in thread
From: Andrew Rybchenko @ 2020-10-13 13:45 UTC (permalink / raw)
  To: dev

Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
---
 doc/guides/nics/sfc_efx.rst    |  4 ++--
 drivers/net/sfc/sfc_ef100_tx.c | 21 ++++++++++++++++++++-
 2 files changed, 22 insertions(+), 3 deletions(-)

diff --git a/doc/guides/nics/sfc_efx.rst b/doc/guides/nics/sfc_efx.rst
index e108043f38..c89484d473 100644
--- a/doc/guides/nics/sfc_efx.rst
+++ b/doc/guides/nics/sfc_efx.rst
@@ -329,8 +329,8 @@ boolean parameters value.
   is even faster than **ef10** but does not support multi-segment
   mbufs, disallows multiple mempools and neglects mbuf reference counters.
   **ef100** chooses EF100 native datapath which supports multi-segment
-  mbufs, inner/outer IPv4 and TCP/UDP checksum and TCP segmentation offloads
-  including VXLAN and GENEVE IPv4/IPv6 tunnels.
+  mbufs, VLAN insertion, inner/outer IPv4 and TCP/UDP checksum and TCP
+  segmentation offloads including VXLAN and GENEVE IPv4/IPv6 tunnels.
 
 - ``perf_profile`` [auto|throughput|low-latency] (default **throughput**)
 
diff --git a/drivers/net/sfc/sfc_ef100_tx.c b/drivers/net/sfc/sfc_ef100_tx.c
index a740bc9d55..fcf61d987c 100644
--- a/drivers/net/sfc/sfc_ef100_tx.c
+++ b/drivers/net/sfc/sfc_ef100_tx.c
@@ -381,6 +381,16 @@ sfc_ef100_tx_qdesc_send_create(const struct rte_mbuf *m, efx_oword_t *tx_desc)
 			ESF_GZ_TX_SEND_CSO_OUTER_L3, outer_l3,
 			ESF_GZ_TX_SEND_CSO_OUTER_L4, outer_l4,
 			ESF_GZ_TX_DESC_TYPE, ESE_GZ_TX_DESC_TYPE_SEND);
+
+	if (m->ol_flags & PKT_TX_VLAN_PKT) {
+		efx_oword_t tx_desc_extra_fields;
+
+		EFX_POPULATE_OWORD_2(tx_desc_extra_fields,
+				ESF_GZ_TX_SEND_VLAN_INSERT_EN, 1,
+				ESF_GZ_TX_SEND_VLAN_INSERT_TCI, m->vlan_tci);
+
+		EFX_OR_OWORD(*tx_desc, tx_desc_extra_fields);
+	}
 }
 
 static void
@@ -453,6 +463,14 @@ sfc_ef100_tx_qdesc_tso_create(const struct rte_mbuf *m,
 			ESF_GZ_TX_DESC_TYPE, ESE_GZ_TX_DESC_TYPE_TSO);
 
 	EFX_OR_OWORD(*tx_desc, tx_desc_extra_fields);
+
+	if (m->ol_flags & PKT_TX_VLAN_PKT) {
+		EFX_POPULATE_OWORD_2(tx_desc_extra_fields,
+				ESF_GZ_TX_TSO_VLAN_INSERT_EN, 1,
+				ESF_GZ_TX_TSO_VLAN_INSERT_TCI, m->vlan_tci);
+
+		EFX_OR_OWORD(*tx_desc, tx_desc_extra_fields);
+	}
 }
 
 static inline void
@@ -923,7 +941,8 @@ struct sfc_dp_tx sfc_ef100_tx = {
 	},
 	.features		= SFC_DP_TX_FEAT_MULTI_PROCESS,
 	.dev_offload_capa	= 0,
-	.queue_offload_capa	= DEV_TX_OFFLOAD_IPV4_CKSUM |
+	.queue_offload_capa	= DEV_TX_OFFLOAD_VLAN_INSERT |
+				  DEV_TX_OFFLOAD_IPV4_CKSUM |
 				  DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
 				  DEV_TX_OFFLOAD_OUTER_UDP_CKSUM |
 				  DEV_TX_OFFLOAD_UDP_CKSUM |
-- 
2.17.1


^ permalink raw reply	[flat|nested] 43+ messages in thread

* [dpdk-dev] [PATCH 26/36] net/sfc: support Rx checksum offload for EF100
  2020-10-13 13:45 [dpdk-dev] [PATCH 00/36] net/sfc: add EF100 support Andrew Rybchenko
                   ` (24 preceding siblings ...)
  2020-10-13 13:45 ` [dpdk-dev] [PATCH 25/36] net/sfc: support Tx VLAN insertion offload for EF100 Andrew Rybchenko
@ 2020-10-13 13:45 ` Andrew Rybchenko
  2020-10-13 13:45 ` [dpdk-dev] [PATCH 27/36] common/sfc_efx/base: simplify to request Rx prefix fields Andrew Rybchenko
                   ` (10 subsequent siblings)
  36 siblings, 0 replies; 43+ messages in thread
From: Andrew Rybchenko @ 2020-10-13 13:45 UTC (permalink / raw)
  To: dev

Also support Rx packet type offload.

Checksumming is actually always enabled. Report it as a per-queue
offload to give applications maximum flexibility.

Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
---
 drivers/net/sfc/sfc_ef100_rx.c | 183 ++++++++++++++++++++++++++++++++-
 1 file changed, 182 insertions(+), 1 deletion(-)

diff --git a/drivers/net/sfc/sfc_ef100_rx.c b/drivers/net/sfc/sfc_ef100_rx.c
index c0e70c9943..2f5c5ab533 100644
--- a/drivers/net/sfc/sfc_ef100_rx.c
+++ b/drivers/net/sfc/sfc_ef100_rx.c
@@ -177,6 +177,166 @@ sfc_ef100_rx_qrefill(struct sfc_ef100_rxq *rxq)
 	sfc_ef100_rx_qpush(rxq, added);
 }
 
+static inline uint64_t
+sfc_ef100_rx_nt_or_inner_l4_csum(const efx_word_t class)
+{
+	return EFX_WORD_FIELD(class,
+			      ESF_GZ_RX_PREFIX_HCLASS_NT_OR_INNER_L4_CSUM) ==
+		ESE_GZ_RH_HCLASS_L4_CSUM_GOOD ?
+		PKT_RX_L4_CKSUM_GOOD : PKT_RX_L4_CKSUM_BAD;
+}
+
+static inline uint64_t
+sfc_ef100_rx_tun_outer_l4_csum(const efx_word_t class)
+{
+	return EFX_WORD_FIELD(class,
+			      ESF_GZ_RX_PREFIX_HCLASS_TUN_OUTER_L4_CSUM) ==
+		ESE_GZ_RH_HCLASS_L4_CSUM_GOOD ?
+		PKT_RX_OUTER_L4_CKSUM_GOOD : PKT_RX_OUTER_L4_CKSUM_BAD;
+}
+
+static uint32_t
+sfc_ef100_rx_class_decode(const efx_word_t class, uint64_t *ol_flags)
+{
+	uint32_t ptype;
+	bool no_tunnel = false;
+
+	if (unlikely(EFX_WORD_FIELD(class, ESF_GZ_RX_PREFIX_HCLASS_L2_CLASS) !=
+		     ESE_GZ_RH_HCLASS_L2_CLASS_E2_0123VLAN))
+		return 0;
+
+	switch (EFX_WORD_FIELD(class, ESF_GZ_RX_PREFIX_HCLASS_L2_N_VLAN)) {
+	case 0:
+		ptype = RTE_PTYPE_L2_ETHER;
+		break;
+	case 1:
+		ptype = RTE_PTYPE_L2_ETHER_VLAN;
+		break;
+	default:
+		ptype = RTE_PTYPE_L2_ETHER_QINQ;
+		break;
+	}
+
+	switch (EFX_WORD_FIELD(class, ESF_GZ_RX_PREFIX_HCLASS_TUNNEL_CLASS)) {
+	case ESE_GZ_RH_HCLASS_TUNNEL_CLASS_NONE:
+		no_tunnel = true;
+		break;
+	case ESE_GZ_RH_HCLASS_TUNNEL_CLASS_VXLAN:
+		ptype |= RTE_PTYPE_TUNNEL_VXLAN | RTE_PTYPE_L4_UDP;
+		*ol_flags |= sfc_ef100_rx_tun_outer_l4_csum(class);
+		break;
+	case ESE_GZ_RH_HCLASS_TUNNEL_CLASS_NVGRE:
+		ptype |= RTE_PTYPE_TUNNEL_NVGRE;
+		break;
+	case ESE_GZ_RH_HCLASS_TUNNEL_CLASS_GENEVE:
+		ptype |= RTE_PTYPE_TUNNEL_GENEVE | RTE_PTYPE_L4_UDP;
+		*ol_flags |= sfc_ef100_rx_tun_outer_l4_csum(class);
+		break;
+	default:
+		/*
+		 * The driver does not know the tunnel, but it is
+		 * still a tunnel, so the NT_OR_INNER fields refer
+		 * to the inner frame.
+		 */
+		no_tunnel = false;
+	}
+
+	if (no_tunnel) {
+		bool l4_valid = true;
+
+		switch (EFX_WORD_FIELD(class,
+			ESF_GZ_RX_PREFIX_HCLASS_NT_OR_INNER_L3_CLASS)) {
+		case ESE_GZ_RH_HCLASS_L3_CLASS_IP4GOOD:
+			ptype |= RTE_PTYPE_L3_IPV4_EXT_UNKNOWN;
+			*ol_flags |= PKT_RX_IP_CKSUM_GOOD;
+			break;
+		case ESE_GZ_RH_HCLASS_L3_CLASS_IP4BAD:
+			ptype |= RTE_PTYPE_L3_IPV4_EXT_UNKNOWN;
+			*ol_flags |= PKT_RX_IP_CKSUM_BAD;
+			break;
+		case ESE_GZ_RH_HCLASS_L3_CLASS_IP6:
+			ptype |= RTE_PTYPE_L3_IPV6_EXT_UNKNOWN;
+			break;
+		default:
+			l4_valid = false;
+		}
+
+		if (l4_valid) {
+			switch (EFX_WORD_FIELD(class,
+				ESF_GZ_RX_PREFIX_HCLASS_NT_OR_INNER_L4_CLASS)) {
+			case ESE_GZ_RH_HCLASS_L4_CLASS_TCP:
+				ptype |= RTE_PTYPE_L4_TCP;
+				*ol_flags |=
+					sfc_ef100_rx_nt_or_inner_l4_csum(class);
+				break;
+			case ESE_GZ_RH_HCLASS_L4_CLASS_UDP:
+				ptype |= RTE_PTYPE_L4_UDP;
+				*ol_flags |=
+					sfc_ef100_rx_nt_or_inner_l4_csum(class);
+				break;
+			case ESE_GZ_RH_HCLASS_L4_CLASS_FRAG:
+				ptype |= RTE_PTYPE_L4_FRAG;
+				break;
+			}
+		}
+	} else {
+		bool l4_valid = true;
+
+		switch (EFX_WORD_FIELD(class,
+			ESF_GZ_RX_PREFIX_HCLASS_TUN_OUTER_L3_CLASS)) {
+		case ESE_GZ_RH_HCLASS_L3_CLASS_IP4GOOD:
+			ptype |= RTE_PTYPE_L3_IPV4_EXT_UNKNOWN;
+			break;
+		case ESE_GZ_RH_HCLASS_L3_CLASS_IP4BAD:
+			ptype |= RTE_PTYPE_L3_IPV4_EXT_UNKNOWN;
+			*ol_flags |= PKT_RX_EIP_CKSUM_BAD;
+			break;
+		case ESE_GZ_RH_HCLASS_L3_CLASS_IP6:
+			ptype |= RTE_PTYPE_L3_IPV6_EXT_UNKNOWN;
+			break;
+		}
+
+		switch (EFX_WORD_FIELD(class,
+			ESF_GZ_RX_PREFIX_HCLASS_NT_OR_INNER_L3_CLASS)) {
+		case ESE_GZ_RH_HCLASS_L3_CLASS_IP4GOOD:
+			ptype |= RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN;
+			*ol_flags |= PKT_RX_IP_CKSUM_GOOD;
+			break;
+		case ESE_GZ_RH_HCLASS_L3_CLASS_IP4BAD:
+			ptype |= RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN;
+			*ol_flags |= PKT_RX_IP_CKSUM_BAD;
+			break;
+		case ESE_GZ_RH_HCLASS_L3_CLASS_IP6:
+			ptype |= RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN;
+			break;
+		default:
+			l4_valid = false;
+			break;
+		}
+
+		if (l4_valid) {
+			switch (EFX_WORD_FIELD(class,
+				ESF_GZ_RX_PREFIX_HCLASS_NT_OR_INNER_L4_CLASS)) {
+			case ESE_GZ_RH_HCLASS_L4_CLASS_TCP:
+				ptype |= RTE_PTYPE_INNER_L4_TCP;
+				*ol_flags |=
+					sfc_ef100_rx_nt_or_inner_l4_csum(class);
+				break;
+			case ESE_GZ_RH_HCLASS_L4_CLASS_UDP:
+				ptype |= RTE_PTYPE_INNER_L4_UDP;
+				*ol_flags |=
+					sfc_ef100_rx_nt_or_inner_l4_csum(class);
+				break;
+			case ESE_GZ_RH_HCLASS_L4_CLASS_FRAG:
+				ptype |= RTE_PTYPE_INNER_L4_FRAG;
+				break;
+			}
+		}
+	}
+
+	return ptype;
+}
+
 static bool
 sfc_ef100_rx_prefix_to_offloads(const efx_oword_t *rx_prefix,
 				struct rte_mbuf *m)
@@ -195,6 +355,8 @@ sfc_ef100_rx_prefix_to_offloads(const efx_oword_t *rx_prefix,
 		     ESE_GZ_RH_HCLASS_L2_STATUS_OK))
 		return false;
 
+	m->packet_type = sfc_ef100_rx_class_decode(*class, &ol_flags);
+
 	m->ol_flags = ol_flags;
 	return true;
 }
@@ -374,6 +536,22 @@ static const uint32_t *
 sfc_ef100_supported_ptypes_get(__rte_unused uint32_t tunnel_encaps)
 {
 	static const uint32_t ef100_native_ptypes[] = {
+		RTE_PTYPE_L2_ETHER,
+		RTE_PTYPE_L2_ETHER_VLAN,
+		RTE_PTYPE_L2_ETHER_QINQ,
+		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN,
+		RTE_PTYPE_L3_IPV6_EXT_UNKNOWN,
+		RTE_PTYPE_L4_TCP,
+		RTE_PTYPE_L4_UDP,
+		RTE_PTYPE_L4_FRAG,
+		RTE_PTYPE_TUNNEL_VXLAN,
+		RTE_PTYPE_TUNNEL_NVGRE,
+		RTE_PTYPE_TUNNEL_GENEVE,
+		RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN,
+		RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN,
+		RTE_PTYPE_INNER_L4_TCP,
+		RTE_PTYPE_INNER_L4_UDP,
+		RTE_PTYPE_INNER_L4_FRAG,
 		RTE_PTYPE_UNKNOWN
 	};
 
@@ -596,7 +774,10 @@ struct sfc_dp_rx sfc_ef100_rx = {
 	},
 	.features		= SFC_DP_RX_FEAT_MULTI_PROCESS,
 	.dev_offload_capa	= 0,
-	.queue_offload_capa	= DEV_RX_OFFLOAD_SCATTER,
+	.queue_offload_capa	= DEV_RX_OFFLOAD_CHECKSUM |
+				  DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+				  DEV_RX_OFFLOAD_OUTER_UDP_CKSUM |
+				  DEV_RX_OFFLOAD_SCATTER,
 	.get_dev_info		= sfc_ef100_rx_get_dev_info,
 	.qsize_up_rings		= sfc_ef100_rx_qsize_up_rings,
 	.qcreate		= sfc_ef100_rx_qcreate,
-- 
2.17.1


^ permalink raw reply	[flat|nested] 43+ messages in thread

* [dpdk-dev] [PATCH 27/36] common/sfc_efx/base: simplify to request Rx prefix fields
  2020-10-13 13:45 [dpdk-dev] [PATCH 00/36] net/sfc: add EF100 support Andrew Rybchenko
                   ` (25 preceding siblings ...)
  2020-10-13 13:45 ` [dpdk-dev] [PATCH 26/36] net/sfc: support Rx checksum " Andrew Rybchenko
@ 2020-10-13 13:45 ` Andrew Rybchenko
  2020-10-13 13:45 ` [dpdk-dev] [PATCH 28/36] common/sfc_efx/base: provide control to deliver RSS hash Andrew Rybchenko
                   ` (9 subsequent siblings)
  36 siblings, 0 replies; 43+ messages in thread
From: Andrew Rybchenko @ 2020-10-13 13:45 UTC (permalink / raw)
  To: dev

Introduce an extra variable with required Rx prefix fields mask
to make it easier to request more fields.

Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
---
 drivers/common/sfc_efx/base/rhead_rx.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/drivers/common/sfc_efx/base/rhead_rx.c b/drivers/common/sfc_efx/base/rhead_rx.c
index d683f280ce..d3d7339b8c 100644
--- a/drivers/common/sfc_efx/base/rhead_rx.c
+++ b/drivers/common/sfc_efx/base/rhead_rx.c
@@ -594,6 +594,7 @@ rhead_rx_qcreate(
 	const efx_nic_cfg_t *encp = efx_nic_cfg_get(enp);
 	efx_mcdi_init_rxq_params_t params;
 	efx_rx_prefix_layout_t erpl;
+	uint32_t fields_mask = 0;
 	efx_rc_t rc;
 
 	_NOTE(ARGUNUSED(id))
@@ -631,8 +632,8 @@ rhead_rx_qcreate(
 	 * which fields are required or may be allow to request so-called
 	 * default Rx prefix (which ID is equal to 0).
 	 */
-	if ((rc = rhead_rx_choose_prefix_id(enp,
-	    (1U << EFX_RX_PREFIX_FIELD_LENGTH), &erpl)) != 0)
+	fields_mask |= 1U << EFX_RX_PREFIX_FIELD_LENGTH;
+	if ((rc = rhead_rx_choose_prefix_id(enp, fields_mask, &erpl)) != 0)
 		goto fail3;
 
 	params.prefix_id = erpl.erpl_id;
-- 
2.17.1


^ permalink raw reply	[flat|nested] 43+ messages in thread

* [dpdk-dev] [PATCH 28/36] common/sfc_efx/base: provide control to deliver RSS hash
  2020-10-13 13:45 [dpdk-dev] [PATCH 00/36] net/sfc: add EF100 support Andrew Rybchenko
                   ` (26 preceding siblings ...)
  2020-10-13 13:45 ` [dpdk-dev] [PATCH 27/36] common/sfc_efx/base: simplify to request Rx prefix fields Andrew Rybchenko
@ 2020-10-13 13:45 ` Andrew Rybchenko
  2020-10-13 13:45 ` [dpdk-dev] [PATCH 29/36] common/sfc_efx/base: provide helper to check Rx prefix Andrew Rybchenko
                   ` (8 subsequent siblings)
  36 siblings, 0 replies; 43+ messages in thread
From: Andrew Rybchenko @ 2020-10-13 13:45 UTC (permalink / raw)
  To: dev

When an Rx queue is created, allow the caller to specify whether the
driver would like to get the RSS hash value calculated by the hardware.

Use the flag to choose Rx prefix on Riverhead.

Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
---
 drivers/common/sfc_efx/base/ef10_rx.c  | 45 +++++++++++++++++---------
 drivers/common/sfc_efx/base/efx.h      |  5 +++
 drivers/common/sfc_efx/base/efx_rx.c   | 19 +++++++++++
 drivers/common/sfc_efx/base/rhead_rx.c |  9 +++---
 4 files changed, 59 insertions(+), 19 deletions(-)

diff --git a/drivers/common/sfc_efx/base/ef10_rx.c b/drivers/common/sfc_efx/base/ef10_rx.c
index ea5f514f18..1b4f3f0152 100644
--- a/drivers/common/sfc_efx/base/ef10_rx.c
+++ b/drivers/common/sfc_efx/base/ef10_rx.c
@@ -926,6 +926,10 @@ ef10_rx_qcreate(
 			goto fail1;
 		}
 		erp->er_buf_size = type_data->ertd_default.ed_buf_size;
+		/*
+		 * Ignore EFX_RXQ_FLAG_RSS_HASH since if RSS hash is calculated
+		 * it is always delivered from HW in the pseudo-header.
+		 */
 		break;
 #if EFSYS_OPT_RX_PACKED_STREAM
 	case EFX_RXQ_TYPE_PACKED_STREAM:
@@ -955,6 +959,11 @@ ef10_rx_qcreate(
 			goto fail3;
 		}
 		erp->er_buf_size = type_data->ertd_packed_stream.eps_buf_size;
+		/* Packed stream pseudo header does not have RSS hash value */
+		if (flags & EFX_RXQ_FLAG_RSS_HASH) {
+			rc = ENOTSUP;
+			goto fail4;
+		}
 		break;
 #endif /* EFSYS_OPT_RX_PACKED_STREAM */
 #if EFSYS_OPT_RX_ES_SUPER_BUFFER
@@ -962,7 +971,7 @@ ef10_rx_qcreate(
 		erpl = &ef10_essb_rx_prefix_layout;
 		if (type_data == NULL) {
 			rc = EINVAL;
-			goto fail4;
+			goto fail5;
 		}
 		params.es_bufs_per_desc =
 		    type_data->ertd_es_super_buffer.eessb_bufs_per_desc;
@@ -972,11 +981,15 @@ ef10_rx_qcreate(
 		    type_data->ertd_es_super_buffer.eessb_buf_stride;
 		params.hol_block_timeout =
 		    type_data->ertd_es_super_buffer.eessb_hol_block_timeout;
+		/*
+		 * Ignore EFX_RXQ_FLAG_RSS_HASH since if RSS hash is calculated
+		 * it is always delivered from HW in the pseudo-header.
+		 */
 		break;
 #endif /* EFSYS_OPT_RX_ES_SUPER_BUFFER */
 	default:
 		rc = ENOTSUP;
-		goto fail5;
+		goto fail6;
 	}
 
 #if EFSYS_OPT_RX_PACKED_STREAM
@@ -984,13 +997,13 @@ ef10_rx_qcreate(
 		/* Check if datapath firmware supports packed stream mode */
 		if (encp->enc_rx_packed_stream_supported == B_FALSE) {
 			rc = ENOTSUP;
-			goto fail6;
+			goto fail7;
 		}
 		/* Check if packed stream allows configurable buffer sizes */
 		if ((params.ps_buf_size != MC_CMD_INIT_RXQ_EXT_IN_PS_BUFF_1M) &&
 		    (encp->enc_rx_var_packed_stream_supported == B_FALSE)) {
 			rc = ENOTSUP;
-			goto fail7;
+			goto fail8;
 		}
 	}
 #else /* EFSYS_OPT_RX_PACKED_STREAM */
@@ -1001,17 +1014,17 @@ ef10_rx_qcreate(
 	if (params.es_bufs_per_desc > 0) {
 		if (encp->enc_rx_es_super_buffer_supported == B_FALSE) {
 			rc = ENOTSUP;
-			goto fail8;
+			goto fail9;
 		}
 		if (!EFX_IS_P2ALIGNED(uint32_t, params.es_max_dma_len,
 			    EFX_RX_ES_SUPER_BUFFER_BUF_ALIGNMENT)) {
 			rc = EINVAL;
-			goto fail9;
+			goto fail10;
 		}
 		if (!EFX_IS_P2ALIGNED(uint32_t, params.es_buf_stride,
 			    EFX_RX_ES_SUPER_BUFFER_BUF_ALIGNMENT)) {
 			rc = EINVAL;
-			goto fail10;
+			goto fail11;
 		}
 	}
 #else /* EFSYS_OPT_RX_ES_SUPER_BUFFER */
@@ -1031,7 +1044,7 @@ ef10_rx_qcreate(
 
 	if ((rc = efx_mcdi_init_rxq(enp, ndescs, eep, label, index,
 		    esmp, &params)) != 0)
-		goto fail11;
+		goto fail12;
 
 	erp->er_eep = eep;
 	erp->er_label = label;
@@ -1044,29 +1057,31 @@ ef10_rx_qcreate(
 
 	return (0);
 
+fail12:
+	EFSYS_PROBE(fail12);
+#if EFSYS_OPT_RX_ES_SUPER_BUFFER
 fail11:
 	EFSYS_PROBE(fail11);
-#if EFSYS_OPT_RX_ES_SUPER_BUFFER
 fail10:
 	EFSYS_PROBE(fail10);
 fail9:
 	EFSYS_PROBE(fail9);
-fail8:
-	EFSYS_PROBE(fail8);
 #endif /* EFSYS_OPT_RX_ES_SUPER_BUFFER */
 #if EFSYS_OPT_RX_PACKED_STREAM
+fail8:
+	EFSYS_PROBE(fail8);
 fail7:
 	EFSYS_PROBE(fail7);
+#endif /* EFSYS_OPT_RX_PACKED_STREAM */
 fail6:
 	EFSYS_PROBE(fail6);
-#endif /* EFSYS_OPT_RX_PACKED_STREAM */
+#if EFSYS_OPT_RX_ES_SUPER_BUFFER
 fail5:
 	EFSYS_PROBE(fail5);
-#if EFSYS_OPT_RX_ES_SUPER_BUFFER
-fail4:
-	EFSYS_PROBE(fail4);
 #endif /* EFSYS_OPT_RX_ES_SUPER_BUFFER */
 #if EFSYS_OPT_RX_PACKED_STREAM
+fail4:
+	EFSYS_PROBE(fail4);
 fail3:
 	EFSYS_PROBE(fail3);
 fail2:
diff --git a/drivers/common/sfc_efx/base/efx.h b/drivers/common/sfc_efx/base/efx.h
index 4b7beb209d..406e96caf8 100644
--- a/drivers/common/sfc_efx/base/efx.h
+++ b/drivers/common/sfc_efx/base/efx.h
@@ -2948,6 +2948,11 @@ typedef enum efx_rxq_type_e {
  * Rx checksum offload results.
  */
 #define	EFX_RXQ_FLAG_INNER_CLASSES	0x2
+/*
+ * Request delivery of the RSS hash calculated by HW to be used by
+ * the driver.
+ */
+#define	EFX_RXQ_FLAG_RSS_HASH		0x4
 
 LIBEFX_API
 extern	__checkReturn	efx_rc_t
diff --git a/drivers/common/sfc_efx/base/efx_rx.c b/drivers/common/sfc_efx/base/efx_rx.c
index 3536b0eb07..d6b56fec48 100644
--- a/drivers/common/sfc_efx/base/efx_rx.c
+++ b/drivers/common/sfc_efx/base/efx_rx.c
@@ -889,11 +889,26 @@ efx_rx_qcreate_internal(
 	    ndescs, id, flags, eep, erp)) != 0)
 		goto fail4;
 
+	/* Sanity check queue creation result */
+	if (flags & EFX_RXQ_FLAG_RSS_HASH) {
+		const efx_rx_prefix_layout_t *erplp = &erp->er_prefix_layout;
+		const efx_rx_prefix_field_info_t *rss_hash_field;
+
+		rss_hash_field =
+		    &erplp->erpl_fields[EFX_RX_PREFIX_FIELD_RSS_HASH];
+		if (rss_hash_field->erpfi_width_bits == 0)
+			goto fail5;
+	}
+
 	enp->en_rx_qcount++;
 	*erpp = erp;
 
 	return (0);
 
+fail5:
+	EFSYS_PROBE(fail5);
+
+	erxop->erxo_qdestroy(erp);
 fail4:
 	EFSYS_PROBE(fail4);
 
@@ -1717,6 +1732,10 @@ siena_rx_qcreate(
 	switch (type) {
 	case EFX_RXQ_TYPE_DEFAULT:
 		erp->er_buf_size = type_data->ertd_default.ed_buf_size;
+		/*
+		 * Ignore EFX_RXQ_FLAG_RSS_HASH since if RSS hash is calculated
+		 * it is always delivered from HW in the pseudo-header.
+		 */
 		break;
 
 	default:
diff --git a/drivers/common/sfc_efx/base/rhead_rx.c b/drivers/common/sfc_efx/base/rhead_rx.c
index d3d7339b8c..b6f9d51fef 100644
--- a/drivers/common/sfc_efx/base/rhead_rx.c
+++ b/drivers/common/sfc_efx/base/rhead_rx.c
@@ -624,13 +624,14 @@ rhead_rx_qcreate(
 	else
 		params.disable_scatter = encp->enc_rx_disable_scatter_supported;
 
+	if (flags & EFX_RXQ_FLAG_RSS_HASH) {
+		fields_mask |= 1U << EFX_RX_PREFIX_FIELD_RSS_HASH;
+		fields_mask |= 1U << EFX_RX_PREFIX_FIELD_RSS_HASH_VALID;
+	}
+
 	/*
 	 * LENGTH is required in EF100 host interface, as receive events
 	 * do not include the packet length.
-	 * NOTE: Required fields are hard-wired now. Future designs will
-	 * want to allow the client (driver) code to have control over
-	 * which fields are required or may be allow to request so-called
-	 * default Rx prefix (which ID is equal to 0).
 	 */
 	fields_mask |= 1U << EFX_RX_PREFIX_FIELD_LENGTH;
 	if ((rc = rhead_rx_choose_prefix_id(enp, fields_mask, &erpl)) != 0)
-- 
2.17.1


^ permalink raw reply	[flat|nested] 43+ messages in thread

* [dpdk-dev] [PATCH 29/36] common/sfc_efx/base: provide helper to check Rx prefix
  2020-10-13 13:45 [dpdk-dev] [PATCH 00/36] net/sfc: add EF100 support Andrew Rybchenko
                   ` (27 preceding siblings ...)
  2020-10-13 13:45 ` [dpdk-dev] [PATCH 28/36] common/sfc_efx/base: provide control to deliver RSS hash Andrew Rybchenko
@ 2020-10-13 13:45 ` Andrew Rybchenko
  2020-10-13 13:45 ` [dpdk-dev] [PATCH 30/36] net/sfc: map Rx offload RSS hash to corresponding RxQ flag Andrew Rybchenko
                   ` (7 subsequent siblings)
  36 siblings, 0 replies; 43+ messages in thread
From: Andrew Rybchenko @ 2020-10-13 13:45 UTC (permalink / raw)
  To: dev

A new function allows the caller to check whether the Rx prefix
layout in use matches the available Rx prefix layout. Length checks
are out of scope of the function: the caller should ensure that the
length is either checked, or that a different length with everything
required in place is handled properly.
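The field-by-field comparison can be modeled in plain C. This is a minimal
sketch, not the libefx implementation: the struct names, the field count, and
the helper name are invented for illustration (the real types,
`efx_rx_prefix_layout_t` and `EFX_RX_PREFIX_NFIELDS`, live in efx.h).

```c
#include <assert.h>
#include <stdint.h>

#define N_FIELDS 4 /* stands in for EFX_RX_PREFIX_NFIELDS */

struct field_info {
	uint16_t offset_bits;
	uint16_t width_bits; /* 0 means "field not used/available" */
	int	 big_endian;
};

struct prefix_layout {
	struct field_info fields[N_FIELDS];
};

/* Return a bit mask of fields wanted by the driver but not available. */
static uint32_t
layout_check(const struct prefix_layout *available,
	     const struct prefix_layout *wanted)
{
	uint32_t result = 0;
	unsigned int i;

	for (i = 0; i < N_FIELDS; ++i) {
		const struct field_info *a = &available->fields[i];
		const struct field_info *w = &wanted->fields[i];

		/* Width 0 in "wanted" means the driver does not use it */
		if (w->width_bits == 0)
			continue;

		/* Any parameter mismatch makes the field unavailable */
		if (a->offset_bits != w->offset_bits ||
		    a->width_bits != w->width_bits ||
		    a->big_endian != w->big_endian)
			result |= 1U << i;
	}
	return result;
}
```

A driver would typically treat set bits for mandatory fields as fatal (return
ENOTSUP) and set bits for optional fields as a reason to disable the
corresponding offload.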

Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
---
 drivers/common/sfc_efx/base/efx.h             | 12 ++++++
 drivers/common/sfc_efx/base/efx_rx.c          | 40 +++++++++++++++++++
 .../sfc_efx/rte_common_sfc_efx_version.map    |  1 +
 3 files changed, 53 insertions(+)

diff --git a/drivers/common/sfc_efx/base/efx.h b/drivers/common/sfc_efx/base/efx.h
index 406e96caf8..bd1ac303b1 100644
--- a/drivers/common/sfc_efx/base/efx.h
+++ b/drivers/common/sfc_efx/base/efx.h
@@ -2920,6 +2920,18 @@ typedef struct efx_rx_prefix_layout_s {
 	efx_rx_prefix_field_info_t	erpl_fields[EFX_RX_PREFIX_NFIELDS];
 } efx_rx_prefix_layout_t;
 
+/*
+ * Helper function to find out a bit mask of wanted but not available
+ * Rx prefix fields.
+ *
+ * A field is considered not available if any of its parameters mismatch.
+ */
+LIBEFX_API
+extern	__checkReturn	uint32_t
+efx_rx_prefix_layout_check(
+	__in		const efx_rx_prefix_layout_t *available,
+	__in		const efx_rx_prefix_layout_t *wanted);
+
 LIBEFX_API
 extern	__checkReturn	efx_rc_t
 efx_rx_prefix_get_layout(
diff --git a/drivers/common/sfc_efx/base/efx_rx.c b/drivers/common/sfc_efx/base/efx_rx.c
index d6b56fec48..93a73703ed 100644
--- a/drivers/common/sfc_efx/base/efx_rx.c
+++ b/drivers/common/sfc_efx/base/efx_rx.c
@@ -1803,3 +1803,43 @@ siena_rx_fini(
 }
 
 #endif /* EFSYS_OPT_SIENA */
+
+static	__checkReturn	boolean_t
+efx_rx_prefix_layout_fields_match(
+	__in		const efx_rx_prefix_field_info_t *erpfip1,
+	__in		const efx_rx_prefix_field_info_t *erpfip2)
+{
+	if (erpfip1->erpfi_offset_bits != erpfip2->erpfi_offset_bits)
+		return (B_FALSE);
+
+	if (erpfip1->erpfi_width_bits != erpfip2->erpfi_width_bits)
+		return (B_FALSE);
+
+	if (erpfip1->erpfi_big_endian != erpfip2->erpfi_big_endian)
+		return (B_FALSE);
+
+	return (B_TRUE);
+}
+
+	__checkReturn	uint32_t
+efx_rx_prefix_layout_check(
+	__in		const efx_rx_prefix_layout_t *available,
+	__in		const efx_rx_prefix_layout_t *wanted)
+{
+	uint32_t result = 0;
+	unsigned int i;
+
+	EFX_STATIC_ASSERT(EFX_RX_PREFIX_NFIELDS < sizeof (result) * 8);
+	for (i = 0; i < EFX_RX_PREFIX_NFIELDS; ++i) {
+		/* Skip the field if driver does not want to use it */
+		if (wanted->erpl_fields[i].erpfi_width_bits == 0)
+			continue;
+
+		if (efx_rx_prefix_layout_fields_match(
+			    &available->erpl_fields[i],
+			    &wanted->erpl_fields[i]) == B_FALSE)
+			result |= (1U << i);
+	}
+
+	return (result);
+}
diff --git a/drivers/common/sfc_efx/rte_common_sfc_efx_version.map b/drivers/common/sfc_efx/rte_common_sfc_efx_version.map
index fd95fd09e5..f656d5b644 100644
--- a/drivers/common/sfc_efx/rte_common_sfc_efx_version.map
+++ b/drivers/common/sfc_efx/rte_common_sfc_efx_version.map
@@ -141,6 +141,7 @@ INTERNAL {
 	efx_rx_hash_default_support_get;
 	efx_rx_init;
 	efx_rx_prefix_get_layout;
+	efx_rx_prefix_layout_check;
 	efx_rx_qcreate;
 	efx_rx_qcreate_es_super_buffer;
 	efx_rx_qdestroy;
-- 
2.17.1


^ permalink raw reply	[flat|nested] 43+ messages in thread

* [dpdk-dev] [PATCH 30/36] net/sfc: map Rx offload RSS hash to corresponding RxQ flag
  2020-10-13 13:45 [dpdk-dev] [PATCH 00/36] net/sfc: add EF100 support Andrew Rybchenko
                   ` (28 preceding siblings ...)
  2020-10-13 13:45 ` [dpdk-dev] [PATCH 29/36] common/sfc_efx/base: provide helper to check Rx prefix Andrew Rybchenko
@ 2020-10-13 13:45 ` Andrew Rybchenko
  2020-10-13 13:45 ` [dpdk-dev] [PATCH 31/36] net/sfc: support per-queue Rx prefix for EF100 Andrew Rybchenko
                   ` (6 subsequent siblings)
  36 siblings, 0 replies; 43+ messages in thread
From: Andrew Rybchenko @ 2020-10-13 13:45 UTC (permalink / raw)
  To: dev

If the RSS hash offload is requested, the Rx queue should be
configured to request delivery of RSS hash information.
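The mapping is a one-line flag translation; a standalone sketch of the pattern,
with illustrative constant values (the real `DEV_RX_OFFLOAD_RSS_HASH` and
`EFX_RXQ_FLAG_RSS_HASH` come from rte_ethdev.h and efx.h):

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative values, not the real DPDK/libefx constants */
#define OFFLOAD_RSS_HASH  (1ULL << 0)
#define RXQ_FLAG_RSS_HASH 0x4

/* Translate ethdev Rx offload bits into libefx RxQ type flags */
static unsigned int
rxq_flags_from_offloads(uint64_t offloads)
{
	unsigned int flags = 0;

	if (offloads & OFFLOAD_RSS_HASH)
		flags |= RXQ_FLAG_RSS_HASH;

	return flags;
}
```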

Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
---
 drivers/net/sfc/sfc_rx.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/drivers/net/sfc/sfc_rx.c b/drivers/net/sfc/sfc_rx.c
index a9217ada9d..09afb519d5 100644
--- a/drivers/net/sfc/sfc_rx.c
+++ b/drivers/net/sfc/sfc_rx.c
@@ -1125,6 +1125,9 @@ sfc_rx_qinit(struct sfc_adapter *sa, unsigned int sw_index,
 	     DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM) != 0)
 		rxq_info->type_flags |= EFX_RXQ_FLAG_INNER_CLASSES;
 
+	if (offloads & DEV_RX_OFFLOAD_RSS_HASH)
+		rxq_info->type_flags |= EFX_RXQ_FLAG_RSS_HASH;
+
 	rc = sfc_ev_qinit(sa, SFC_EVQ_TYPE_RX, sw_index,
 			  evq_entries, socket_id, &evq);
 	if (rc != 0)
-- 
2.17.1


^ permalink raw reply	[flat|nested] 43+ messages in thread

* [dpdk-dev] [PATCH 31/36] net/sfc: support per-queue Rx prefix for EF100
  2020-10-13 13:45 [dpdk-dev] [PATCH 00/36] net/sfc: add EF100 support Andrew Rybchenko
                   ` (29 preceding siblings ...)
  2020-10-13 13:45 ` [dpdk-dev] [PATCH 30/36] net/sfc: map Rx offload RSS hash to corresponding RxQ flag Andrew Rybchenko
@ 2020-10-13 13:45 ` Andrew Rybchenko
  2020-10-13 13:45 ` [dpdk-dev] [PATCH 32/36] net/sfc: support per-queue Rx RSS hash offload " Andrew Rybchenko
                   ` (5 subsequent siblings)
  36 siblings, 0 replies; 43+ messages in thread
From: Andrew Rybchenko @ 2020-10-13 13:45 UTC (permalink / raw)
  To: dev

Riverhead FW supports choosing the Rx prefix based on the fields
required in it. The feature is generalized in libefx to provide the
Rx prefix layout for other NICs and firmware variants as well. Now
the driver can get the prefix layout after Rx queue start and use the
layout details to check its expectations, or to parse the prefix at
run time.

Rx prefix choice and query interface is defined in SF-119689-TC
EF100 host interface.
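The queue-start validation can be summarized as: the actual prefix must fit
the space the driver reserved, and the mandatory fields must be usable. A
minimal sketch mirroring the sfc_ef100_rx_qstart() logic, with hypothetical
field indices (the unsupported-field mask would come from a layout check such
as efx_rx_prefix_layout_check()):

```c
#include <assert.h>
#include <errno.h>
#include <stdint.h>

/* Hypothetical field indices for the example */
enum { FIELD_LENGTH = 0, FIELD_CLASS = 1, FIELD_RSS_HASH = 2 };

/*
 * Decide whether the queue can start with the actual prefix layout:
 * the prefix must fit the reserved buffer space and the mandatory
 * fields (LENGTH, CLASS) must not be in the unsupported mask.
 */
static int
rx_qstart_check(uint16_t prefix_len, uint16_t reserved_len,
		uint32_t unsup_mask)
{
	if (prefix_len > reserved_len)
		return ENOTSUP;

	if ((unsup_mask &
	     ((1U << FIELD_LENGTH) | (1U << FIELD_CLASS))) != 0)
		return ENOTSUP;

	return 0; /* optional fields (e.g. RSS hash) handled separately */
}
```

Optional fields that fail the check simply disable the corresponding
per-queue offload instead of failing queue start.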

Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Andy Moreton <amoreton@xilinx.com>
---
 drivers/net/sfc/sfc_dp_rx.h        |  3 ++-
 drivers/net/sfc/sfc_ef100_rx.c     | 41 +++++++++++++++++++++++++++---
 drivers/net/sfc/sfc_ef10_essb_rx.c | 32 ++++++++++++++++++++++-
 drivers/net/sfc/sfc_ef10_rx.c      | 19 +++++++++++++-
 drivers/net/sfc/sfc_rx.c           | 19 ++++++++++++--
 5 files changed, 106 insertions(+), 8 deletions(-)

diff --git a/drivers/net/sfc/sfc_dp_rx.h b/drivers/net/sfc/sfc_dp_rx.h
index 3aba39658e..362be933a9 100644
--- a/drivers/net/sfc/sfc_dp_rx.h
+++ b/drivers/net/sfc/sfc_dp_rx.h
@@ -159,7 +159,8 @@ typedef void (sfc_dp_rx_qdestroy_t)(struct sfc_dp_rxq *dp_rxq);
  * It handovers EvQ to the datapath.
  */
 typedef int (sfc_dp_rx_qstart_t)(struct sfc_dp_rxq *dp_rxq,
-				 unsigned int evq_read_ptr);
+				 unsigned int evq_read_ptr,
+				 const efx_rx_prefix_layout_t *pinfo);
 
 /**
  * Receive queue stop function called before flush.
diff --git a/drivers/net/sfc/sfc_ef100_rx.c b/drivers/net/sfc/sfc_ef100_rx.c
index 2f5c5ab533..5d46d5bac1 100644
--- a/drivers/net/sfc/sfc_ef100_rx.c
+++ b/drivers/net/sfc/sfc_ef100_rx.c
@@ -18,6 +18,7 @@
 
 #include "efx_types.h"
 #include "efx_regs_ef100.h"
+#include "efx.h"
 
 #include "sfc_debug.h"
 #include "sfc_tweak.h"
@@ -337,6 +338,23 @@ sfc_ef100_rx_class_decode(const efx_word_t class, uint64_t *ol_flags)
 	return ptype;
 }
 
+/*
+ * Below function relies on the following fields in Rx prefix.
+ * Some fields are mandatory, some fields are optional.
+ * See sfc_ef100_rx_qstart() below.
+ */
+static const efx_rx_prefix_layout_t sfc_ef100_rx_prefix_layout = {
+	.erpl_fields	= {
+#define	SFC_EF100_RX_PREFIX_FIELD(_name, _big_endian) \
+	EFX_RX_PREFIX_FIELD(_name, ESF_GZ_RX_PREFIX_ ## _name, _big_endian)
+
+		SFC_EF100_RX_PREFIX_FIELD(LENGTH, B_FALSE),
+		SFC_EF100_RX_PREFIX_FIELD(CLASS, B_FALSE),
+
+#undef	SFC_EF100_RX_PREFIX_FIELD
+	}
+};
+
 static bool
 sfc_ef100_rx_prefix_to_offloads(const efx_oword_t *rx_prefix,
 				struct rte_mbuf *m)
@@ -667,8 +685,6 @@ sfc_ef100_rx_qcreate(uint16_t port_id, uint16_t queue_id,
 	rxq->evq_hw_ring = info->evq_hw_ring;
 	rxq->max_fill_level = info->max_fill_level;
 	rxq->refill_threshold = info->refill_threshold;
-	rxq->rearm_data =
-		sfc_ef100_mk_mbuf_rearm_data(port_id, info->prefix_size);
 	rxq->prefix_size = info->prefix_size;
 	rxq->buf_size = info->buf_size;
 	rxq->refill_mb_pool = info->refill_mb_pool;
@@ -702,13 +718,32 @@ sfc_ef100_rx_qdestroy(struct sfc_dp_rxq *dp_rxq)
 
 static sfc_dp_rx_qstart_t sfc_ef100_rx_qstart;
 static int
-sfc_ef100_rx_qstart(struct sfc_dp_rxq *dp_rxq, unsigned int evq_read_ptr)
+sfc_ef100_rx_qstart(struct sfc_dp_rxq *dp_rxq, unsigned int evq_read_ptr,
+		    const efx_rx_prefix_layout_t *pinfo)
 {
 	struct sfc_ef100_rxq *rxq = sfc_ef100_rxq_by_dp_rxq(dp_rxq);
+	uint32_t unsup_rx_prefix_fields;
 
 	SFC_ASSERT(rxq->completed == 0);
 	SFC_ASSERT(rxq->added == 0);
 
+	/* Prefix must fit into reserved Rx buffer space */
+	if (pinfo->erpl_length > rxq->prefix_size)
+		return ENOTSUP;
+
+	unsup_rx_prefix_fields =
+		efx_rx_prefix_layout_check(pinfo, &sfc_ef100_rx_prefix_layout);
+
+	/* LENGTH and CLASS fields must always be present */
+	if ((unsup_rx_prefix_fields &
+	     ((1U << EFX_RX_PREFIX_FIELD_LENGTH) |
+	      (1U << EFX_RX_PREFIX_FIELD_CLASS))) != 0)
+		return ENOTSUP;
+
+	rxq->prefix_size = pinfo->erpl_length;
+	rxq->rearm_data = sfc_ef100_mk_mbuf_rearm_data(rxq->dp.dpq.port_id,
+						       rxq->prefix_size);
+
 	sfc_ef100_rx_qrefill(rxq);
 
 	rxq->evq_read_ptr = evq_read_ptr;
diff --git a/drivers/net/sfc/sfc_ef10_essb_rx.c b/drivers/net/sfc/sfc_ef10_essb_rx.c
index d9bf28525b..17e4c140f5 100644
--- a/drivers/net/sfc/sfc_ef10_essb_rx.c
+++ b/drivers/net/sfc/sfc_ef10_essb_rx.c
@@ -17,6 +17,7 @@
 
 #include "efx_types.h"
 #include "efx_regs_ef10.h"
+#include "efx.h"
 
 #include "sfc_debug.h"
 #include "sfc_tweak.h"
@@ -304,6 +305,27 @@ sfc_ef10_essb_rx_process_ev(struct sfc_ef10_essb_rxq *rxq, efx_qword_t rx_ev)
 	} while (ready > 0);
 }
 
+/*
+ * Below function relies on the following length and layout of the
+ * Rx prefix.
+ */
+static const efx_rx_prefix_layout_t sfc_ef10_essb_rx_prefix_layout = {
+	.erpl_length	= ES_EZ_ESSB_RX_PREFIX_LEN,
+	.erpl_fields	= {
+#define	SFC_EF10_ESSB_RX_PREFIX_FIELD(_efx, _ef10) \
+	EFX_RX_PREFIX_FIELD(_efx, ES_EZ_ESSB_RX_PREFIX_ ## _ef10, B_FALSE)
+
+		SFC_EF10_ESSB_RX_PREFIX_FIELD(LENGTH, DATA_LEN),
+		SFC_EF10_ESSB_RX_PREFIX_FIELD(USER_MARK, MARK),
+		SFC_EF10_ESSB_RX_PREFIX_FIELD(RSS_HASH_VALID, HASH_VALID),
+		SFC_EF10_ESSB_RX_PREFIX_FIELD(USER_MARK_VALID, MARK_VALID),
+		SFC_EF10_ESSB_RX_PREFIX_FIELD(USER_FLAG, MATCH_FLAG),
+		SFC_EF10_ESSB_RX_PREFIX_FIELD(RSS_HASH, HASH),
+
+#undef	SFC_EF10_ESSB_RX_PREFIX_FIELD
+	}
+};
+
 static unsigned int
 sfc_ef10_essb_rx_get_pending(struct sfc_ef10_essb_rxq *rxq,
 			     struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
@@ -633,10 +655,18 @@ sfc_ef10_essb_rx_qdestroy(struct sfc_dp_rxq *dp_rxq)
 
 static sfc_dp_rx_qstart_t sfc_ef10_essb_rx_qstart;
 static int
-sfc_ef10_essb_rx_qstart(struct sfc_dp_rxq *dp_rxq, unsigned int evq_read_ptr)
+sfc_ef10_essb_rx_qstart(struct sfc_dp_rxq *dp_rxq, unsigned int evq_read_ptr,
+			const efx_rx_prefix_layout_t *pinfo)
 {
 	struct sfc_ef10_essb_rxq *rxq = sfc_ef10_essb_rxq_by_dp_rxq(dp_rxq);
 
+	if (pinfo->erpl_length != sfc_ef10_essb_rx_prefix_layout.erpl_length)
+		return ENOTSUP;
+
+	if (efx_rx_prefix_layout_check(pinfo,
+				       &sfc_ef10_essb_rx_prefix_layout) != 0)
+		return ENOTSUP;
+
 	rxq->evq_read_ptr = evq_read_ptr;
 
 	/* Initialize before refill */
diff --git a/drivers/net/sfc/sfc_ef10_rx.c b/drivers/net/sfc/sfc_ef10_rx.c
index 62d0b6206b..e6bf3b9f42 100644
--- a/drivers/net/sfc/sfc_ef10_rx.c
+++ b/drivers/net/sfc/sfc_ef10_rx.c
@@ -223,6 +223,18 @@ sfc_ef10_rx_pending(struct sfc_ef10_rxq *rxq, struct rte_mbuf **rx_pkts,
 	return rx_pkts;
 }
 
+/*
+ * Below Rx pseudo-header (aka Rx prefix) accessors rely on the
+ * following fields layout.
+ */
+static const efx_rx_prefix_layout_t sfc_ef10_rx_prefix_layout = {
+	.erpl_fields	= {
+		[EFX_RX_PREFIX_FIELD_RSS_HASH]	=
+		    { 0, sizeof(uint32_t) * CHAR_BIT, B_FALSE },
+		[EFX_RX_PREFIX_FIELD_LENGTH]	=
+		    { 8 * CHAR_BIT, sizeof(uint16_t) * CHAR_BIT, B_FALSE },
+	}
+};
 static uint16_t
 sfc_ef10_rx_pseudo_hdr_get_len(const uint8_t *pseudo_hdr)
 {
@@ -700,7 +712,8 @@ sfc_ef10_rx_qdestroy(struct sfc_dp_rxq *dp_rxq)
 
 static sfc_dp_rx_qstart_t sfc_ef10_rx_qstart;
 static int
-sfc_ef10_rx_qstart(struct sfc_dp_rxq *dp_rxq, unsigned int evq_read_ptr)
+sfc_ef10_rx_qstart(struct sfc_dp_rxq *dp_rxq, unsigned int evq_read_ptr,
+		   const efx_rx_prefix_layout_t *pinfo)
 {
 	struct sfc_ef10_rxq *rxq = sfc_ef10_rxq_by_dp_rxq(dp_rxq);
 
@@ -708,6 +721,10 @@ sfc_ef10_rx_qstart(struct sfc_dp_rxq *dp_rxq, unsigned int evq_read_ptr)
 	SFC_ASSERT(rxq->pending == 0);
 	SFC_ASSERT(rxq->added == 0);
 
+	if (pinfo->erpl_length != rxq->prefix_size ||
+	    efx_rx_prefix_layout_check(pinfo, &sfc_ef10_rx_prefix_layout) != 0)
+		return ENOTSUP;
+
 	sfc_ef10_rx_qrefill(rxq);
 
 	rxq->evq_read_ptr = evq_read_ptr;
diff --git a/drivers/net/sfc/sfc_rx.c b/drivers/net/sfc/sfc_rx.c
index 09afb519d5..ff4e69e679 100644
--- a/drivers/net/sfc/sfc_rx.c
+++ b/drivers/net/sfc/sfc_rx.c
@@ -528,13 +528,22 @@ static sfc_dp_rx_qpurge_t sfc_efx_rx_qpurge;
 static sfc_dp_rx_qstart_t sfc_efx_rx_qstart;
 static int
 sfc_efx_rx_qstart(struct sfc_dp_rxq *dp_rxq,
-		  __rte_unused unsigned int evq_read_ptr)
+		  __rte_unused unsigned int evq_read_ptr,
+		  const efx_rx_prefix_layout_t *pinfo)
 {
 	/* libefx-based datapath is specific to libefx-based PMD */
 	struct sfc_efx_rxq *rxq = sfc_efx_rxq_by_dp_rxq(dp_rxq);
 	struct sfc_rxq *crxq = sfc_rxq_by_dp_rxq(dp_rxq);
 	int rc;
 
+	/*
+	 * libefx API is used to extract information from Rx prefix and
+	 * it guarantees consistency. Just do a length check to ensure
+	 * that we reserved space in Rx buffers correctly.
+	 */
+	if (rxq->prefix_size != pinfo->erpl_length)
+		return ENOTSUP;
+
 	rxq->common = crxq->common;
 
 	rxq->pending = rxq->completed = rxq->added = rxq->pushed = 0;
@@ -760,6 +769,7 @@ sfc_rx_qstart(struct sfc_adapter *sa, unsigned int sw_index)
 	struct sfc_rxq_info *rxq_info;
 	struct sfc_rxq *rxq;
 	struct sfc_evq *evq;
+	efx_rx_prefix_layout_t pinfo;
 	int rc;
 
 	sfc_log_init(sa, "sw_index=%u", sw_index);
@@ -811,9 +821,13 @@ sfc_rx_qstart(struct sfc_adapter *sa, unsigned int sw_index)
 	if (rc != 0)
 		goto fail_rx_qcreate;
 
+	rc = efx_rx_prefix_get_layout(rxq->common, &pinfo);
+	if (rc != 0)
+		goto fail_prefix_get_layout;
+
 	efx_rx_qenable(rxq->common);
 
-	rc = sa->priv.dp_rx->qstart(rxq_info->dp, evq->read_ptr);
+	rc = sa->priv.dp_rx->qstart(rxq_info->dp, evq->read_ptr, &pinfo);
 	if (rc != 0)
 		goto fail_dp_qstart;
 
@@ -839,6 +853,7 @@ sfc_rx_qstart(struct sfc_adapter *sa, unsigned int sw_index)
 fail_dp_qstart:
 	efx_rx_qdestroy(rxq->common);
 
+fail_prefix_get_layout:
 fail_rx_qcreate:
 fail_bad_contig_block_size:
 fail_mp_get_info:
-- 
2.17.1


^ permalink raw reply	[flat|nested] 43+ messages in thread

* [dpdk-dev] [PATCH 32/36] net/sfc: support per-queue Rx RSS hash offload for EF100
  2020-10-13 13:45 [dpdk-dev] [PATCH 00/36] net/sfc: add EF100 support Andrew Rybchenko
                   ` (30 preceding siblings ...)
  2020-10-13 13:45 ` [dpdk-dev] [PATCH 31/36] net/sfc: support per-queue Rx prefix for EF100 Andrew Rybchenko
@ 2020-10-13 13:45 ` Andrew Rybchenko
  2020-10-13 13:45 ` [dpdk-dev] [PATCH 33/36] net/sfc: support user mark and flag Rx " Andrew Rybchenko
                   ` (4 subsequent siblings)
  36 siblings, 0 replies; 43+ messages in thread
From: Andrew Rybchenko @ 2020-10-13 13:45 UTC (permalink / raw)
  To: dev

Riverhead allows choosing the Rx prefix (which contains the RSS hash
value and a valid flag) per queue.
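Extracting the hash at run time amounts to: check the per-queue capability
flag, check the per-packet valid bit, then read the little-endian hash field.
A toy model of that flow, where the byte offsets and bit positions are made
up for the example (the real ones come from the ESF_GZ_RX_PREFIX_*
definitions negotiated at queue start):

```c
#include <assert.h>
#include <stdint.h>

#define HASH_VALID_BYTE 4 /* hypothetical */
#define HASH_VALID_BIT  0
#define HASH_OFST       8 /* hypothetical */

/*
 * Return 1 and fill *hashp if the queue delivers the RSS hash and the
 * per-packet valid bit is set; return 0 otherwise.
 */
static int
prefix_get_rss_hash(const uint8_t *prefix, int queue_has_hash,
		    uint32_t *hashp)
{
	/* Per-queue flag set at start when the chosen prefix has the field */
	if (!queue_has_hash)
		return 0;

	if ((prefix[HASH_VALID_BYTE] & (1U << HASH_VALID_BIT)) == 0)
		return 0;

	/* Field is little-endian in the prefix; assemble in CPU order */
	*hashp = (uint32_t)prefix[HASH_OFST] |
		 ((uint32_t)prefix[HASH_OFST + 1] << 8) |
		 ((uint32_t)prefix[HASH_OFST + 2] << 16) |
		 ((uint32_t)prefix[HASH_OFST + 3] << 24);
	return 1;
}
```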

Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
---
 drivers/net/sfc/sfc_ef100_rx.c | 27 ++++++++++++++++++++++++---
 1 file changed, 24 insertions(+), 3 deletions(-)

diff --git a/drivers/net/sfc/sfc_ef100_rx.c b/drivers/net/sfc/sfc_ef100_rx.c
index 5d46d5bac1..6fb78b6e68 100644
--- a/drivers/net/sfc/sfc_ef100_rx.c
+++ b/drivers/net/sfc/sfc_ef100_rx.c
@@ -56,6 +56,7 @@ struct sfc_ef100_rxq {
 #define SFC_EF100_RXQ_STARTED		0x1
 #define SFC_EF100_RXQ_NOT_RUNNING	0x2
 #define SFC_EF100_RXQ_EXCEPTION		0x4
+#define SFC_EF100_RXQ_RSS_HASH		0x10
 	unsigned int			ptr_mask;
 	unsigned int			evq_phase_bit_shift;
 	unsigned int			ready_pkts;
@@ -349,14 +350,17 @@ static const efx_rx_prefix_layout_t sfc_ef100_rx_prefix_layout = {
 	EFX_RX_PREFIX_FIELD(_name, ESF_GZ_RX_PREFIX_ ## _name, _big_endian)
 
 		SFC_EF100_RX_PREFIX_FIELD(LENGTH, B_FALSE),
+		SFC_EF100_RX_PREFIX_FIELD(RSS_HASH_VALID, B_FALSE),
 		SFC_EF100_RX_PREFIX_FIELD(CLASS, B_FALSE),
+		SFC_EF100_RX_PREFIX_FIELD(RSS_HASH, B_FALSE),
 
 #undef	SFC_EF100_RX_PREFIX_FIELD
 	}
 };
 
 static bool
-sfc_ef100_rx_prefix_to_offloads(const efx_oword_t *rx_prefix,
+sfc_ef100_rx_prefix_to_offloads(const struct sfc_ef100_rxq *rxq,
+				const efx_oword_t *rx_prefix,
 				struct rte_mbuf *m)
 {
 	const efx_word_t *class;
@@ -375,6 +379,15 @@ sfc_ef100_rx_prefix_to_offloads(const efx_oword_t *rx_prefix,
 
 	m->packet_type = sfc_ef100_rx_class_decode(*class, &ol_flags);
 
+	if ((rxq->flags & SFC_EF100_RXQ_RSS_HASH) &&
+	    EFX_TEST_OWORD_BIT(rx_prefix[0],
+			       ESF_GZ_RX_PREFIX_RSS_HASH_VALID_LBN)) {
+		ol_flags |= PKT_RX_RSS_HASH;
+		/* EFX_OWORD_FIELD converts little-endian to CPU */
+		m->hash.rss = EFX_OWORD_FIELD(rx_prefix[0],
+					      ESF_GZ_RX_PREFIX_RSS_HASH);
+	}
+
 	m->ol_flags = ol_flags;
 	return true;
 }
@@ -461,7 +474,7 @@ sfc_ef100_rx_process_ready_pkts(struct sfc_ef100_rxq *rxq,
 		seg_len = RTE_MIN(pkt_len, rxq->buf_size - rxq->prefix_size);
 		rte_pktmbuf_data_len(pkt) = seg_len;
 
-		deliver = sfc_ef100_rx_prefix_to_offloads(rx_prefix, pkt);
+		deliver = sfc_ef100_rx_prefix_to_offloads(rxq, rx_prefix, pkt);
 
 		lastseg = pkt;
 		while ((pkt_len -= seg_len) > 0) {
@@ -740,6 +753,13 @@ sfc_ef100_rx_qstart(struct sfc_dp_rxq *dp_rxq, unsigned int evq_read_ptr,
 	      (1U << EFX_RX_PREFIX_FIELD_CLASS))) != 0)
 		return ENOTSUP;
 
+	if ((unsup_rx_prefix_fields &
+	     ((1U << EFX_RX_PREFIX_FIELD_RSS_HASH_VALID) |
+	      (1U << EFX_RX_PREFIX_FIELD_RSS_HASH))) == 0)
+		rxq->flags |= SFC_EF100_RXQ_RSS_HASH;
+	else
+		rxq->flags &= ~SFC_EF100_RXQ_RSS_HASH;
+
 	rxq->prefix_size = pinfo->erpl_length;
 	rxq->rearm_data = sfc_ef100_mk_mbuf_rearm_data(rxq->dp.dpq.port_id,
 						       rxq->prefix_size);
@@ -812,7 +832,8 @@ struct sfc_dp_rx sfc_ef100_rx = {
 	.queue_offload_capa	= DEV_RX_OFFLOAD_CHECKSUM |
 				  DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
 				  DEV_RX_OFFLOAD_OUTER_UDP_CKSUM |
-				  DEV_RX_OFFLOAD_SCATTER,
+				  DEV_RX_OFFLOAD_SCATTER |
+				  DEV_RX_OFFLOAD_RSS_HASH,
 	.get_dev_info		= sfc_ef100_rx_get_dev_info,
 	.qsize_up_rings		= sfc_ef100_rx_qsize_up_rings,
 	.qcreate		= sfc_ef100_rx_qcreate,
-- 
2.17.1


^ permalink raw reply	[flat|nested] 43+ messages in thread

* [dpdk-dev] [PATCH 33/36] net/sfc: support user mark and flag Rx for EF100
  2020-10-13 13:45 [dpdk-dev] [PATCH 00/36] net/sfc: add EF100 support Andrew Rybchenko
                   ` (31 preceding siblings ...)
  2020-10-13 13:45 ` [dpdk-dev] [PATCH 32/36] net/sfc: support per-queue Rx RSS hash offload " Andrew Rybchenko
@ 2020-10-13 13:45 ` Andrew Rybchenko
  2020-10-13 13:45 ` [dpdk-dev] [PATCH 34/36] net/sfc: forward function control window offset to datapath Andrew Rybchenko
                   ` (3 subsequent siblings)
  36 siblings, 0 replies; 43+ messages in thread
From: Andrew Rybchenko @ 2020-10-13 13:45 UTC (permalink / raw)
  To: dev

Flow rules may be used to mark packets. Support delivery of mark/flag
values to the user in mbuf fields.
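The delivery path follows the same shape as the RSS hash: a per-queue
capability flag plus a per-packet flag bit gate copying the mark into the
mbuf. A toy sketch with invented names (the real driver sets `PKT_RX_FDIR_ID`
in `ol_flags` and stores the mark in `hash.fdir.hi`):

```c
#include <assert.h>
#include <stdint.h>

/* Invented stand-ins for struct rte_mbuf and PKT_RX_FDIR_ID */
struct toy_mbuf {
	uint64_t ol_flags;
	uint32_t fdir_hi;
};
#define TOY_PKT_RX_FDIR_ID (1ULL << 0)

/*
 * If the queue's prefix carries USER_FLAG/USER_MARK and this packet's
 * flag bit is set, report the mark value to the application.
 */
static void
deliver_user_mark(struct toy_mbuf *m, int queue_has_mark,
		  int flag_set, uint32_t mark)
{
	if (queue_has_mark && flag_set) {
		m->ol_flags |= TOY_PKT_RX_FDIR_ID;
		m->fdir_hi = mark;
	}
}
```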

Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
---
 drivers/net/sfc/sfc_ef100_rx.c | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)

diff --git a/drivers/net/sfc/sfc_ef100_rx.c b/drivers/net/sfc/sfc_ef100_rx.c
index 6fb78b6e68..0623f6e574 100644
--- a/drivers/net/sfc/sfc_ef100_rx.c
+++ b/drivers/net/sfc/sfc_ef100_rx.c
@@ -57,6 +57,7 @@ struct sfc_ef100_rxq {
 #define SFC_EF100_RXQ_NOT_RUNNING	0x2
 #define SFC_EF100_RXQ_EXCEPTION		0x4
 #define SFC_EF100_RXQ_RSS_HASH		0x10
+#define SFC_EF100_RXQ_USER_MARK		0x20
 	unsigned int			ptr_mask;
 	unsigned int			evq_phase_bit_shift;
 	unsigned int			ready_pkts;
@@ -351,8 +352,10 @@ static const efx_rx_prefix_layout_t sfc_ef100_rx_prefix_layout = {
 
 		SFC_EF100_RX_PREFIX_FIELD(LENGTH, B_FALSE),
 		SFC_EF100_RX_PREFIX_FIELD(RSS_HASH_VALID, B_FALSE),
+		SFC_EF100_RX_PREFIX_FIELD(USER_FLAG, B_FALSE),
 		SFC_EF100_RX_PREFIX_FIELD(CLASS, B_FALSE),
 		SFC_EF100_RX_PREFIX_FIELD(RSS_HASH, B_FALSE),
+		SFC_EF100_RX_PREFIX_FIELD(USER_MARK, B_FALSE),
 
 #undef	SFC_EF100_RX_PREFIX_FIELD
 	}
@@ -388,6 +391,14 @@ sfc_ef100_rx_prefix_to_offloads(const struct sfc_ef100_rxq *rxq,
 					      ESF_GZ_RX_PREFIX_RSS_HASH);
 	}
 
+	if ((rxq->flags & SFC_EF100_RXQ_USER_MARK) &&
+	    EFX_TEST_OWORD_BIT(rx_prefix[0], ESF_GZ_RX_PREFIX_USER_FLAG_LBN)) {
+		ol_flags |= PKT_RX_FDIR_ID;
+		/* EFX_OWORD_FIELD converts little-endian to CPU */
+		m->hash.fdir.hi = EFX_OWORD_FIELD(rx_prefix[0],
+						  ESF_GZ_RX_PREFIX_USER_MARK);
+	}
+
 	m->ol_flags = ol_flags;
 	return true;
 }
@@ -760,6 +771,13 @@ sfc_ef100_rx_qstart(struct sfc_dp_rxq *dp_rxq, unsigned int evq_read_ptr,
 	else
 		rxq->flags &= ~SFC_EF100_RXQ_RSS_HASH;
 
+	if ((unsup_rx_prefix_fields &
+	     ((1U << EFX_RX_PREFIX_FIELD_USER_FLAG) |
+	      (1U << EFX_RX_PREFIX_FIELD_USER_MARK))) == 0)
+		rxq->flags |= SFC_EF100_RXQ_USER_MARK;
+	else
+		rxq->flags &= ~SFC_EF100_RXQ_USER_MARK;
+
 	rxq->prefix_size = pinfo->erpl_length;
 	rxq->rearm_data = sfc_ef100_mk_mbuf_rearm_data(rxq->dp.dpq.port_id,
 						       rxq->prefix_size);
-- 
2.17.1


^ permalink raw reply	[flat|nested] 43+ messages in thread

* [dpdk-dev] [PATCH 34/36] net/sfc: forward function control window offset to datapath
  2020-10-13 13:45 [dpdk-dev] [PATCH 00/36] net/sfc: add EF100 support Andrew Rybchenko
                   ` (32 preceding siblings ...)
  2020-10-13 13:45 ` [dpdk-dev] [PATCH 33/36] net/sfc: support user mark and flag Rx " Andrew Rybchenko
@ 2020-10-13 13:45 ` Andrew Rybchenko
  2020-10-13 13:45 ` [dpdk-dev] [PATCH 35/36] net/sfc: add Rx interrupts support for EF100 Andrew Rybchenko
                   ` (2 subsequent siblings)
  36 siblings, 0 replies; 43+ messages in thread
From: Andrew Rybchenko @ 2020-10-13 13:45 UTC (permalink / raw)
  To: dev; +Cc: Igor Romanov

From: Igor Romanov <igor.romanov@oktetlabs.ru>

Store the function control window offset to correctly set the offset
of the EvQ prime register in EF100.
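The resulting register address is simple offset arithmetic: BAR base plus the
function control window offset plus the register's offset within the window.
A sketch with an illustrative register offset (not the real
`ER_GZ_EVQ_INT_PRIME_OFST` value):

```c
#include <assert.h>
#include <stdint.h>

#define EVQ_INT_PRIME_OFST 0x400 /* hypothetical register offset */

/* Compute the address of the EvQ prime register for this function */
static uintptr_t
evq_prime_addr(uintptr_t mem_bar, uintptr_t fcw_offset)
{
	return mem_bar + fcw_offset + EVQ_INT_PRIME_OFST;
}
```

Before this change the driver implicitly assumed a zero window offset, which
is why the offset must be forwarded to the datapath.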

Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
---
 drivers/net/sfc/sfc.c       | 3 +++
 drivers/net/sfc/sfc.h       | 2 ++
 drivers/net/sfc/sfc_dp_rx.h | 2 ++
 drivers/net/sfc/sfc_rx.c    | 1 +
 4 files changed, 8 insertions(+)

diff --git a/drivers/net/sfc/sfc.c b/drivers/net/sfc/sfc.c
index d4478a2846..8fa790da55 100644
--- a/drivers/net/sfc/sfc.c
+++ b/drivers/net/sfc/sfc.c
@@ -667,6 +667,9 @@ sfc_mem_bar_init(struct sfc_adapter *sa, const efx_bar_region_t *mem_ebrp)
 	ebp->esb_rid = mem_ebrp->ebr_index;
 	ebp->esb_dev = pci_dev;
 	ebp->esb_base = res->addr;
+
+	sa->fcw_offset = mem_ebrp->ebr_offset;
+
 	return 0;
 }
 
diff --git a/drivers/net/sfc/sfc.h b/drivers/net/sfc/sfc.h
index ecdd716256..047ca64de7 100644
--- a/drivers/net/sfc/sfc.h
+++ b/drivers/net/sfc/sfc.h
@@ -221,6 +221,8 @@ struct sfc_adapter {
 	struct rte_kvargs		*kvargs;
 	int				socket_id;
 	efsys_bar_t			mem_bar;
+	/* Function control window offset */
+	efsys_dma_addr_t		fcw_offset;
 	efx_family_t			family;
 	efx_nic_t			*nic;
 	rte_spinlock_t			nic_lock;
diff --git a/drivers/net/sfc/sfc_dp_rx.h b/drivers/net/sfc/sfc_dp_rx.h
index 362be933a9..f3e00e2e38 100644
--- a/drivers/net/sfc/sfc_dp_rx.h
+++ b/drivers/net/sfc/sfc_dp_rx.h
@@ -88,6 +88,8 @@ struct sfc_dp_rx_qcreate_info {
 	 * doorbell
 	 */
 	volatile void		*mem_bar;
+	/** Function control window offset */
+	efsys_dma_addr_t	fcw_offset;
 	/** VI window size shift */
 	unsigned int		vi_window_shift;
 };
diff --git a/drivers/net/sfc/sfc_rx.c b/drivers/net/sfc/sfc_rx.c
index ff4e69e679..de0773b8a7 100644
--- a/drivers/net/sfc/sfc_rx.c
+++ b/drivers/net/sfc/sfc_rx.c
@@ -1199,6 +1199,7 @@ sfc_rx_qinit(struct sfc_adapter *sa, unsigned int sw_index,
 	info.hw_index = rxq->hw_index;
 	info.mem_bar = sa->mem_bar.esb_base;
 	info.vi_window_shift = encp->enc_vi_window_shift;
+	info.fcw_offset = sa->fcw_offset;
 
 	rc = sa->priv.dp_rx->qcreate(sa->eth_dev->data->port_id, sw_index,
 				     &RTE_ETH_DEV_TO_PCI(sa->eth_dev)->addr,
-- 
2.17.1


^ permalink raw reply	[flat|nested] 43+ messages in thread

* [dpdk-dev] [PATCH 35/36] net/sfc: add Rx interrupts support for EF100
  2020-10-13 13:45 [dpdk-dev] [PATCH 00/36] net/sfc: add EF100 support Andrew Rybchenko
                   ` (33 preceding siblings ...)
  2020-10-13 13:45 ` [dpdk-dev] [PATCH 34/36] net/sfc: forward function control window offset to datapath Andrew Rybchenko
@ 2020-10-13 13:45 ` Andrew Rybchenko
  2020-10-13 13:45 ` [dpdk-dev] [PATCH 36/36] doc: advertise Alveo SN1000 SmartNICs family support Andrew Rybchenko
  2020-10-14 10:41 ` [dpdk-dev] [PATCH 00/36] net/sfc: add EF100 support Ferruh Yigit
  36 siblings, 0 replies; 43+ messages in thread
From: Andrew Rybchenko @ 2020-10-13 13:45 UTC (permalink / raw)
  To: dev

Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
---
 drivers/net/sfc/sfc_ef100.h    | 28 ++++++++++++++++++
 drivers/net/sfc/sfc_ef100_rx.c | 53 +++++++++++++++++++++++++++++++++-
 2 files changed, 80 insertions(+), 1 deletion(-)

diff --git a/drivers/net/sfc/sfc_ef100.h b/drivers/net/sfc/sfc_ef100.h
index 6da6cfabdb..97ddb00797 100644
--- a/drivers/net/sfc/sfc_ef100.h
+++ b/drivers/net/sfc/sfc_ef100.h
@@ -14,6 +14,34 @@
 extern "C" {
 #endif
 
+/**
+ * Prime event queue to allow processed events to be reused.
+ *
+ * @param evq_prime	Global address of the prime register
+ * @param evq_hw_index	Event queue index
+ * @param evq_read_ptr	Masked event queue read pointer
+ */
+static inline void
+sfc_ef100_evq_prime(volatile void *evq_prime, unsigned int evq_hw_index,
+		    unsigned int evq_read_ptr)
+{
+	efx_dword_t dword;
+
+	EFX_POPULATE_DWORD_2(dword,
+			     ERF_GZ_EVQ_ID, evq_hw_index,
+			     ERF_GZ_IDX, evq_read_ptr);
+
+	/*
+	 * EvQ prime on EF100 allows HW to reuse descriptors. So we
+	 * should be sure that event descriptor reads are done.
+	 * However, there is implicit data dependency here since we
+	 * move past event if we have found out that the event has
+	 * come (i.e. we read it) and we have processed it.
+	 * So, no extra barriers are required here.
+	 */
+	rte_write32_relaxed(dword.ed_u32[0], evq_prime);
+}
+
 static inline bool
 sfc_ef100_ev_present(const efx_qword_t *ev, bool phase_bit)
 {
diff --git a/drivers/net/sfc/sfc_ef100_rx.c b/drivers/net/sfc/sfc_ef100_rx.c
index 0623f6e574..5e761601be 100644
--- a/drivers/net/sfc/sfc_ef100_rx.c
+++ b/drivers/net/sfc/sfc_ef100_rx.c
@@ -58,17 +58,22 @@ struct sfc_ef100_rxq {
 #define SFC_EF100_RXQ_EXCEPTION		0x4
 #define SFC_EF100_RXQ_RSS_HASH		0x10
 #define SFC_EF100_RXQ_USER_MARK		0x20
+#define SFC_EF100_RXQ_FLAG_INTR_EN	0x40
 	unsigned int			ptr_mask;
 	unsigned int			evq_phase_bit_shift;
 	unsigned int			ready_pkts;
 	unsigned int			completed;
 	unsigned int			evq_read_ptr;
+	unsigned int			evq_read_ptr_primed;
 	volatile efx_qword_t		*evq_hw_ring;
 	struct sfc_ef100_rx_sw_desc	*sw_ring;
 	uint64_t			rearm_data;
 	uint16_t			buf_size;
 	uint16_t			prefix_size;
 
+	unsigned int			evq_hw_index;
+	volatile void			*evq_prime;
+
 	/* Used on refill */
 	unsigned int			added;
 	unsigned int			max_fill_level;
@@ -87,6 +92,14 @@ sfc_ef100_rxq_by_dp_rxq(struct sfc_dp_rxq *dp_rxq)
 	return container_of(dp_rxq, struct sfc_ef100_rxq, dp);
 }
 
+static void
+sfc_ef100_rx_qprime(struct sfc_ef100_rxq *rxq)
+{
+	sfc_ef100_evq_prime(rxq->evq_prime, rxq->evq_hw_index,
+			    rxq->evq_read_ptr & rxq->ptr_mask);
+	rxq->evq_read_ptr_primed = rxq->evq_read_ptr;
+}
+
 static inline void
 sfc_ef100_rx_qpush(struct sfc_ef100_rxq *rxq, unsigned int added)
 {
@@ -570,6 +583,10 @@ sfc_ef100_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 	/* It is not a problem if we refill in the case of exception */
 	sfc_ef100_rx_qrefill(rxq);
 
+	if ((rxq->flags & SFC_EF100_RXQ_FLAG_INTR_EN) &&
+	    rxq->evq_read_ptr_primed != rxq->evq_read_ptr)
+		sfc_ef100_rx_qprime(rxq);
+
 done:
 	return nb_pkts - (rx_pkts_end - rx_pkts);
 }
@@ -717,6 +734,11 @@ sfc_ef100_rx_qcreate(uint16_t port_id, uint16_t queue_id,
 			ER_GZ_RX_RING_DOORBELL_OFST +
 			(info->hw_index << info->vi_window_shift);
 
+	rxq->evq_hw_index = info->evq_hw_index;
+	rxq->evq_prime = (volatile uint8_t *)info->mem_bar +
+			 info->fcw_offset +
+			 ER_GZ_EVQ_INT_PRIME_OFST;
+
 	sfc_ef100_rx_debug(rxq, "RxQ doorbell is %p", rxq->doorbell);
 
 	*dp_rxqp = &rxq->dp;
@@ -789,6 +811,9 @@ sfc_ef100_rx_qstart(struct sfc_dp_rxq *dp_rxq, unsigned int evq_read_ptr,
 	rxq->flags |= SFC_EF100_RXQ_STARTED;
 	rxq->flags &= ~(SFC_EF100_RXQ_NOT_RUNNING | SFC_EF100_RXQ_EXCEPTION);
 
+	if (rxq->flags & SFC_EF100_RXQ_FLAG_INTR_EN)
+		sfc_ef100_rx_qprime(rxq);
+
 	return 0;
 }
 
@@ -839,13 +864,37 @@ sfc_ef100_rx_qpurge(struct sfc_dp_rxq *dp_rxq)
 	rxq->flags &= ~SFC_EF100_RXQ_STARTED;
 }
 
+static sfc_dp_rx_intr_enable_t sfc_ef100_rx_intr_enable;
+static int
+sfc_ef100_rx_intr_enable(struct sfc_dp_rxq *dp_rxq)
+{
+	struct sfc_ef100_rxq *rxq = sfc_ef100_rxq_by_dp_rxq(dp_rxq);
+
+	rxq->flags |= SFC_EF100_RXQ_FLAG_INTR_EN;
+	if (rxq->flags & SFC_EF100_RXQ_STARTED)
+		sfc_ef100_rx_qprime(rxq);
+	return 0;
+}
+
+static sfc_dp_rx_intr_disable_t sfc_ef100_rx_intr_disable;
+static int
+sfc_ef100_rx_intr_disable(struct sfc_dp_rxq *dp_rxq)
+{
+	struct sfc_ef100_rxq *rxq = sfc_ef100_rxq_by_dp_rxq(dp_rxq);
+
+	/* Cannot disarm, just disable rearm */
+	rxq->flags &= ~SFC_EF100_RXQ_FLAG_INTR_EN;
+	return 0;
+}
+
 struct sfc_dp_rx sfc_ef100_rx = {
 	.dp = {
 		.name		= SFC_KVARG_DATAPATH_EF100,
 		.type		= SFC_DP_RX,
 		.hw_fw_caps	= SFC_DP_HW_FW_CAP_EF100,
 	},
-	.features		= SFC_DP_RX_FEAT_MULTI_PROCESS,
+	.features		= SFC_DP_RX_FEAT_MULTI_PROCESS |
+				  SFC_DP_RX_FEAT_INTR,
 	.dev_offload_capa	= 0,
 	.queue_offload_capa	= DEV_RX_OFFLOAD_CHECKSUM |
 				  DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
@@ -863,5 +912,7 @@ struct sfc_dp_rx sfc_ef100_rx = {
 	.supported_ptypes_get	= sfc_ef100_supported_ptypes_get,
 	.qdesc_npending		= sfc_ef100_rx_qdesc_npending,
 	.qdesc_status		= sfc_ef100_rx_qdesc_status,
+	.intr_enable		= sfc_ef100_rx_intr_enable,
+	.intr_disable		= sfc_ef100_rx_intr_disable,
 	.pkt_burst		= sfc_ef100_recv_pkts,
 };
-- 
2.17.1


^ permalink raw reply	[flat|nested] 43+ messages in thread
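The rearm logic in the patch above primes the event queue only when interrupts are enabled and the read pointer has actually advanced since the last prime. A minimal, hardware-free model of that bookkeeping (names are hypothetical and the doorbell write is replaced by a counter; this is a sketch of the pattern, not driver code):

```c
#include <assert.h>
#include <stdbool.h>

/* Simplified stand-in for struct sfc_ef100_rxq: only the fields the
 * prime/rearm decision depends on. */
struct evq_model {
	unsigned int read_ptr;        /* events consumed so far */
	unsigned int read_ptr_primed; /* read pointer at the last prime */
	bool intr_en;                 /* models SFC_EF100_RXQ_FLAG_INTR_EN */
	unsigned int prime_writes;    /* counts doorbell writes */
};

/* Mirrors sfc_ef100_rx_qprime(): "write" the prime register and
 * remember which read pointer was advertised to hardware. */
static void evq_model_prime(struct evq_model *evq)
{
	evq->prime_writes++;
	evq->read_ptr_primed = evq->read_ptr;
}

/* Mirrors the tail of sfc_ef100_recv_pkts(): after consuming events,
 * prime only if interrupts are on and the read pointer moved since
 * the last prime, so idle polls generate no doorbell traffic. */
static void evq_model_poll_done(struct evq_model *evq, unsigned int nb_events)
{
	evq->read_ptr += nb_events;
	if (evq->intr_en && evq->read_ptr_primed != evq->read_ptr)
		evq_model_prime(evq);
}
```

Tracking the primed read pointer separately is exactly what suppresses the redundant case: a poll that finds no new events must not touch the prime register again.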

* [dpdk-dev] [PATCH 36/36] doc: advertise Alveo SN1000 SmartNICs family support
  2020-10-13 13:45 [dpdk-dev] [PATCH 00/36] net/sfc: add EF100 support Andrew Rybchenko
                   ` (34 preceding siblings ...)
  2020-10-13 13:45 ` [dpdk-dev] [PATCH 35/36] net/sfc: add Rx interrupts support for EF100 Andrew Rybchenko
@ 2020-10-13 13:45 ` Andrew Rybchenko
  2020-10-14 10:41   ` Ferruh Yigit
  2020-10-14 10:41 ` [dpdk-dev] [PATCH 00/36] net/sfc: add EF100 support Ferruh Yigit
  36 siblings, 1 reply; 43+ messages in thread
From: Andrew Rybchenko @ 2020-10-13 13:45 UTC (permalink / raw)
  To: dev

The Alveo SN1000 family consists of SmartNICs based on the EF100 architecture.

Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
---
 doc/guides/nics/sfc_efx.rst | 18 ++++++++++++------
 1 file changed, 12 insertions(+), 6 deletions(-)

diff --git a/doc/guides/nics/sfc_efx.rst b/doc/guides/nics/sfc_efx.rst
index c89484d473..959b52c1c3 100644
--- a/doc/guides/nics/sfc_efx.rst
+++ b/doc/guides/nics/sfc_efx.rst
@@ -9,8 +9,9 @@ Solarflare libefx-based Poll Mode Driver
 ========================================
 
 The SFC EFX PMD (**librte_pmd_sfc_efx**) provides poll mode driver support
-for **Solarflare SFN7xxx and SFN8xxx** family of 10/40 Gbps adapters and
-**Solarflare XtremeScale X2xxx** family of 10/25/40/50/100 Gbps adapters.
+for **Solarflare SFN7xxx and SFN8xxx** family of 10/40 Gbps adapters,
+**Solarflare XtremeScale X2xxx** family of 10/25/40/50/100 Gbps adapters and
+**Alveo SN1000 SmartNICs** family of 10/25/40/50/100 Gbps adapters.
 SFC EFX PMD has support for the latest Linux and FreeBSD operating systems.
 
 More information can be found at `Solarflare Communications website
@@ -219,6 +220,10 @@ conditions is met:
 Supported NICs
 --------------
 
+- Xilinx Adapters:
+
+   - Alveo SN1022 SmartNIC
+
 - Solarflare XtremeScale Adapters:
 
    - Solarflare X2522 Dual Port SFP28 10/25GbE Adapter
@@ -351,10 +356,11 @@ boolean parameters value.
 - ``fw_variant`` [dont-care|full-feature|ultra-low-latency|
   capture-packed-stream|dpdk] (default **dont-care**)
 
-  Choose the preferred firmware variant to use. In order for the selected
-  option to have an effect, the **sfboot** utility must be configured with the
-  **auto** firmware-variant option. The preferred firmware variant applies to
-  all ports on the NIC.
+  Choose the preferred firmware variant to use.
+  The parameter is supported for the SFN7xxx, SFN8xxx and X2xxx families only.
+  In order for the selected option to have an effect, the **sfboot** utility
+  must be configured with the **auto** firmware-variant option.
+  The preferred firmware variant applies to all ports on the NIC.
   **dont-care** ensures that the driver can attach to an unprivileged function.
   The datapath firmware type to use is controlled by the **sfboot**
   utility.
-- 
2.17.1



* Re: [dpdk-dev] [PATCH 02/36] doc: avoid references to removed config variables in net/sfc
  2020-10-13 13:45 ` [dpdk-dev] [PATCH 02/36] doc: avoid references to removed config variables in net/sfc Andrew Rybchenko
@ 2020-10-14 10:40   ` Ferruh Yigit
  0 siblings, 0 replies; 43+ messages in thread
From: Ferruh Yigit @ 2020-10-14 10:40 UTC (permalink / raw)
  To: Andrew Rybchenko, dev; +Cc: Thomas Monjalon, Ciara Power

On 10/13/2020 2:45 PM, Andrew Rybchenko wrote:
> CONFIG_* variables were used by make-based build system which is
> removed.
> 
> Fixes: 3cc6ecfdfe85 ("build: remove makefiles")
> 
> Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
> ---
>   doc/guides/nics/sfc_efx.rst | 14 ++++++--------
>   1 file changed, 6 insertions(+), 8 deletions(-)
> 
> diff --git a/doc/guides/nics/sfc_efx.rst b/doc/guides/nics/sfc_efx.rst
> index 812c1e7951..84b9b56ddb 100644
> --- a/doc/guides/nics/sfc_efx.rst
> +++ b/doc/guides/nics/sfc_efx.rst
> @@ -273,17 +273,15 @@ Pre-Installation Configuration
>   ------------------------------
>   
>   
> -Config File Options
> -~~~~~~~~~~~~~~~~~~~
> +Build Options
> +~~~~~~~~~~~~~
>   
> -The following options can be modified in the ``.config`` file.
> -Please note that enabling debugging options may affect system performance.
> -
> -- ``CONFIG_RTE_LIBRTE_SFC_EFX_PMD`` (default **y**)
> +The following build-time options may be enabled at build time using the
> +``-Dc_args=`` meson argument (e.g. ``-Dc_args=-DRTE_LIBRTE_SFC_EFX_DEBUG``).
>   
> -  Enable compilation of Solarflare libefx-based poll-mode driver.
> +Please note that enabling debugging options may affect system performance.
>   
> -- ``CONFIG_RTE_LIBRTE_SFC_EFX_DEBUG`` (default **n**)
> +- ``RTE_LIBRTE_SFC_EFX_DEBUG`` (undefined by default)
>   
>     Enable compilation of the extra run-time consistency checks.
>   
> 

This will conflict with Ciara's Make removal patch:
https://patches.dpdk.org/patch/80120/

cc'ed Ciara & Thomas to be aware of it.

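As a concrete illustration of the workflow change this doc patch describes, the debug option would now be enabled at configure time roughly as follows (build directory name is arbitrary; a sketch of the documented mechanism, not tested against any particular DPDK tree):

```shell
# Before (make-based build, removed): edit .config and set
#   CONFIG_RTE_LIBRTE_SFC_EFX_DEBUG=y
# Now (meson): pass the macro through c_args when configuring
meson setup build -Dc_args=-DRTE_LIBRTE_SFC_EFX_DEBUG
ninja -C build
```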

* Re: [dpdk-dev] [PATCH 10/36] net/sfc: add EF100 support
  2020-10-13 13:45 ` [dpdk-dev] [PATCH 10/36] net/sfc: add EF100 support Andrew Rybchenko
@ 2020-10-14 10:40   ` Ferruh Yigit
  2020-10-14 11:21     ` Andrew Rybchenko
  0 siblings, 1 reply; 43+ messages in thread
From: Ferruh Yigit @ 2020-10-14 10:40 UTC (permalink / raw)
  To: Andrew Rybchenko, dev

On 10/13/2020 2:45 PM, Andrew Rybchenko wrote:
> Riverhead is the first NIC of the EF100 architecture.
> 
> Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>

Should the documentation and the web page [1] be updated for the new device support?

Riverhead is the name of the NIC, and EF100 is the name of the IP in that
NIC, right? Is the Riverhead NIC public now?

[1] https://core.dpdk.org/supported/nics/solarflare/


* Re: [dpdk-dev] [PATCH 36/36] doc: advertise Alveo SN1000 SmartNICs family support
  2020-10-13 13:45 ` [dpdk-dev] [PATCH 36/36] doc: advertise Alveo SN1000 SmartNICs family support Andrew Rybchenko
@ 2020-10-14 10:41   ` Ferruh Yigit
  2020-10-14 11:15     ` Andrew Rybchenko
  0 siblings, 1 reply; 43+ messages in thread
From: Ferruh Yigit @ 2020-10-14 10:41 UTC (permalink / raw)
  To: Andrew Rybchenko, dev

On 10/13/2020 2:45 PM, Andrew Rybchenko wrote:
> Alveo SN1000 family is SmartNICs based on EF100 architecture.

Are "Alveo SN1000" and "Riverhead" the same device?


> 
> Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
> ---
>   doc/guides/nics/sfc_efx.rst | 18 ++++++++++++------
>   1 file changed, 12 insertions(+), 6 deletions(-)
> 
> diff --git a/doc/guides/nics/sfc_efx.rst b/doc/guides/nics/sfc_efx.rst
> index c89484d473..959b52c1c3 100644
> --- a/doc/guides/nics/sfc_efx.rst
> +++ b/doc/guides/nics/sfc_efx.rst
> @@ -9,8 +9,9 @@ Solarflare libefx-based Poll Mode Driver
>   ========================================
>   
>   The SFC EFX PMD (**librte_pmd_sfc_efx**) provides poll mode driver support
> -for **Solarflare SFN7xxx and SFN8xxx** family of 10/40 Gbps adapters and
> -**Solarflare XtremeScale X2xxx** family of 10/25/40/50/100 Gbps adapters.
> +for **Solarflare SFN7xxx and SFN8xxx** family of 10/40 Gbps adapters,
> +**Solarflare XtremeScale X2xxx** family of 10/25/40/50/100 Gbps adapters and
> +**Alveo SN1000 SmartNICs** family of 10/25/40/50/100 Gbps adapters.
>   SFC EFX PMD has support for the latest Linux and FreeBSD operating systems.
>   

Again, should the web page be updated too?

>   More information can be found at `Solarflare Communications website
> @@ -219,6 +220,10 @@ conditions is met:
>   Supported NICs
>   --------------
>   
> +- Xilinx Adapters:
> +
> +   - Alveo SN1022 SmartNIC
> +

Can you provide a link for the device? I wasn't able to find it with a simple search.

If you can provide updates, I can squash them into the set later.

>   - Solarflare XtremeScale Adapters:
>   
>      - Solarflare X2522 Dual Port SFP28 10/25GbE Adapter
> @@ -351,10 +356,11 @@ boolean parameters value.
>   - ``fw_variant`` [dont-care|full-feature|ultra-low-latency|
>     capture-packed-stream|dpdk] (default **dont-care**)
>   
> -  Choose the preferred firmware variant to use. In order for the selected
> -  option to have an effect, the **sfboot** utility must be configured with the
> -  **auto** firmware-variant option. The preferred firmware variant applies to
> -  all ports on the NIC.
> +  Choose the preferred firmware variant to use.
> +  The parameter is supported for SFN7xxX, SFN8xxx and X2xxx families only.
> +  In order for the selected option to have an effect, the **sfboot** utility
> +  must be configured with the **auto** firmware-variant option.
> +  The preferred firmware variant applies to all ports on the NIC.
>     **dont-care** ensures that the driver can attach to an unprivileged function.
>     The datapath firmware type to use is controlled by the **sfboot**
>     utility.
> 



* Re: [dpdk-dev] [PATCH 00/36] net/sfc: add EF100 support
  2020-10-13 13:45 [dpdk-dev] [PATCH 00/36] net/sfc: add EF100 support Andrew Rybchenko
                   ` (35 preceding siblings ...)
  2020-10-13 13:45 ` [dpdk-dev] [PATCH 36/36] doc: advertise Alveo SN1000 SmartNICs family support Andrew Rybchenko
@ 2020-10-14 10:41 ` Ferruh Yigit
  36 siblings, 0 replies; 43+ messages in thread
From: Ferruh Yigit @ 2020-10-14 10:41 UTC (permalink / raw)
  To: Andrew Rybchenko, dev

On 10/13/2020 2:45 PM, Andrew Rybchenko wrote:
> Add Alveo SN1000 SmartNICs family basic support.
> 
> Andrew Rybchenko (30):
>    doc: fix typo in EF10 Rx equal stride super-buffer name
>    doc: avoid references to removed config variables in net/sfc
>    common/sfc_efx/base: factor out wrapper to set PHY link
>    common/sfc_efx/base: factor out MCDI wrapper to set LEDs
>    common/sfc_efx/base: fix PHY config failure on Riverhead
>    common/sfc_efx/base: add max number of Rx scatter buffers
>    net/sfc: log Rx/Tx doorbell addresses useful for debugging
>    net/sfc: add caps to specify if libefx supports Rx/Tx
>    net/sfc: add EF100 support
>    net/sfc: implement libefx Rx packets event callbacks
>    net/sfc: implement libefx Tx descs complete event callbacks
>    net/sfc: log DMA allocations addresses
>    net/sfc: support datapath logs which may be compiled out
>    net/sfc: implement EF100 native Rx datapath
>    net/sfc: implement EF100 native Tx datapath
>    net/sfc: support multi-segment transmit for EF100 datapath
>    net/sfc: support TCP and UDP checksum offloads for EF100
>    net/sfc: support IPv4 header checksum offload for EF100 Tx
>    net/sfc: support tunnels for EF100 native Tx datapath
>    net/sfc: support Tx VLAN insertion offload for EF100
>    net/sfc: support Rx checksum offload for EF100
>    common/sfc_efx/base: simplify to request Rx prefix fields
>    common/sfc_efx/base: provide control to deliver RSS hash
>    common/sfc_efx/base: provide helper to check Rx prefix
>    net/sfc: map Rx offload RSS hash to corresponding RxQ flag
>    net/sfc: support per-queue Rx prefix for EF100
>    net/sfc: support per-queue Rx RSS hash offload for EF100
>    net/sfc: support user mark and flag Rx for EF100
>    net/sfc: add Rx interrupts support for EF100
>    doc: advertise Alveo SN1000 SmartNICs family support
> 
> Igor Romanov (3):
>    net/sfc: check vs maximum number of Rx scatter buffers
>    net/sfc: use BAR layout discovery to find control window
>    net/sfc: forward function control window offset to datapath
> 
> Ivan Malov (3):
>    net/sfc: add header segments check for EF100 Tx datapath
>    net/sfc: support TSO for EF100 native datapath
>    net/sfc: support tunnel TSO for EF100 native Tx datapath
> 

Series applied to dpdk-next-net/main, thanks.


* Re: [dpdk-dev] [PATCH 36/36] doc: advertise Alveo SN1000 SmartNICs family support
  2020-10-14 10:41   ` Ferruh Yigit
@ 2020-10-14 11:15     ` Andrew Rybchenko
  0 siblings, 0 replies; 43+ messages in thread
From: Andrew Rybchenko @ 2020-10-14 11:15 UTC (permalink / raw)
  To: Ferruh Yigit, dev

On 10/14/20 1:41 PM, Ferruh Yigit wrote:
> On 10/13/2020 2:45 PM, Andrew Rybchenko wrote:
>> Alveo SN1000 family is SmartNICs based on EF100 architecture.
> 
> Is "Alveo SN1000" and "Riverhead" are same device?

Yes.

>>
>> Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
>> ---
>>   doc/guides/nics/sfc_efx.rst | 18 ++++++++++++------
>>   1 file changed, 12 insertions(+), 6 deletions(-)
>>
>> diff --git a/doc/guides/nics/sfc_efx.rst b/doc/guides/nics/sfc_efx.rst
>> index c89484d473..959b52c1c3 100644
>> --- a/doc/guides/nics/sfc_efx.rst
>> +++ b/doc/guides/nics/sfc_efx.rst
>> @@ -9,8 +9,9 @@ Solarflare libefx-based Poll Mode Driver
>>   ========================================
>>     The SFC EFX PMD (**librte_pmd_sfc_efx**) provides poll mode driver
>> support
>> -for **Solarflare SFN7xxx and SFN8xxx** family of 10/40 Gbps adapters and
>> -**Solarflare XtremeScale X2xxx** family of 10/25/40/50/100 Gbps
>> adapters.
>> +for **Solarflare SFN7xxx and SFN8xxx** family of 10/40 Gbps adapters,
>> +**Solarflare XtremeScale X2xxx** family of 10/25/40/50/100 Gbps
>> adapters and
>> +**Alveo SN1000 SmartNICs** family of 10/25/40/50/100 Gbps adapters.
>>   SFC EFX PMD has support for the latest Linux and FreeBSD operating
>> systems.
>>   
> 
> Again does web page should be updated too?

Yes, I will send a dpdk-web patch to add it.
Thanks for the reminder.

> 
>>   More information can be found at `Solarflare Communications website
>> @@ -219,6 +220,10 @@ conditions is met:
>>   Supported NICs
>>   --------------
>>   +- Xilinx Adapters:
>> +
>> +   - Alveo SN1022 SmartNIC
>> +
> 
> Can you provide a link for the device? I didn't able to find it by a
> simple search.

Hm, never tried. I'll try to find out and come back.

> 
> IF you can provide updates, I can squash to the set later.

Thanks.

> 
>>   - Solarflare XtremeScale Adapters:
>>        - Solarflare X2522 Dual Port SFP28 10/25GbE Adapter
>> @@ -351,10 +356,11 @@ boolean parameters value.
>>   - ``fw_variant`` [dont-care|full-feature|ultra-low-latency|
>>     capture-packed-stream|dpdk] (default **dont-care**)
>>   -  Choose the preferred firmware variant to use. In order for the
>> selected
>> -  option to have an effect, the **sfboot** utility must be configured
>> with the
>> -  **auto** firmware-variant option. The preferred firmware variant
>> applies to
>> -  all ports on the NIC.
>> +  Choose the preferred firmware variant to use.
>> +  The parameter is supported for SFN7xxX, SFN8xxx and X2xxx families
>> only.
>> +  In order for the selected option to have an effect, the **sfboot**
>> utility
>> +  must be configured with the **auto** firmware-variant option.
>> +  The preferred firmware variant applies to all ports on the NIC.
>>     **dont-care** ensures that the driver can attach to an
>> unprivileged function.
>>     The datapath firmware type to use is controlled by the **sfboot**
>>     utility.
>>
> 



* Re: [dpdk-dev] [PATCH 10/36] net/sfc: add EF100 support
  2020-10-14 10:40   ` Ferruh Yigit
@ 2020-10-14 11:21     ` Andrew Rybchenko
  0 siblings, 0 replies; 43+ messages in thread
From: Andrew Rybchenko @ 2020-10-14 11:21 UTC (permalink / raw)
  To: Ferruh Yigit, dev

On 10/14/20 1:40 PM, Ferruh Yigit wrote:
> On 10/13/2020 2:45 PM, Andrew Rybchenko wrote:
>> Riverhead is the first NIC of the EF100 architecture.
>>
>> Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
>
> Should documentation and web page [1] updated for new device support?

Yes, I will send a web page update patch a bit later. Thanks.

> Riverhead is name of the NIC, and EF100 is the name of the IP in that
> NIC, right? Is the Riverhead NIC public now?

EF100 is an architecture name (the previous one is EF10).
Riverhead is an engineering name (similar to Huntington,
Medford and Medford2 in base driver). See NIC name in doc
patch.

> [1] https://core.dpdk.org/supported/nics/solarflare/



end of thread, other threads:[~2020-10-14 11:21 UTC | newest]

Thread overview: 43+ messages
2020-10-13 13:45 [dpdk-dev] [PATCH 00/36] net/sfc: add EF100 support Andrew Rybchenko
2020-10-13 13:45 ` [dpdk-dev] [PATCH 01/36] doc: fix typo in EF10 Rx equal stride super-buffer name Andrew Rybchenko
2020-10-13 13:45 ` [dpdk-dev] [PATCH 02/36] doc: avoid references to removed config variables in net/sfc Andrew Rybchenko
2020-10-14 10:40   ` Ferruh Yigit
2020-10-13 13:45 ` [dpdk-dev] [PATCH 03/36] common/sfc_efx/base: factor out wrapper to set PHY link Andrew Rybchenko
2020-10-13 13:45 ` [dpdk-dev] [PATCH 04/36] common/sfc_efx/base: factor out MCDI wrapper to set LEDs Andrew Rybchenko
2020-10-13 13:45 ` [dpdk-dev] [PATCH 05/36] common/sfc_efx/base: fix PHY config failure on Riverhead Andrew Rybchenko
2020-10-13 13:45 ` [dpdk-dev] [PATCH 06/36] common/sfc_efx/base: add max number of Rx scatter buffers Andrew Rybchenko
2020-10-13 13:45 ` [dpdk-dev] [PATCH 07/36] net/sfc: check vs maximum " Andrew Rybchenko
2020-10-13 13:45 ` [dpdk-dev] [PATCH 08/36] net/sfc: log Rx/Tx doorbell addresses useful for debugging Andrew Rybchenko
2020-10-13 13:45 ` [dpdk-dev] [PATCH 09/36] net/sfc: add caps to specify if libefx supports Rx/Tx Andrew Rybchenko
2020-10-13 13:45 ` [dpdk-dev] [PATCH 10/36] net/sfc: add EF100 support Andrew Rybchenko
2020-10-14 10:40   ` Ferruh Yigit
2020-10-14 11:21     ` Andrew Rybchenko
2020-10-13 13:45 ` [dpdk-dev] [PATCH 11/36] net/sfc: use BAR layout discovery to find control window Andrew Rybchenko
2020-10-13 13:45 ` [dpdk-dev] [PATCH 12/36] net/sfc: implement libefx Rx packets event callbacks Andrew Rybchenko
2020-10-13 13:45 ` [dpdk-dev] [PATCH 13/36] net/sfc: implement libefx Tx descs complete " Andrew Rybchenko
2020-10-13 13:45 ` [dpdk-dev] [PATCH 14/36] net/sfc: log DMA allocations addresses Andrew Rybchenko
2020-10-13 13:45 ` [dpdk-dev] [PATCH 15/36] net/sfc: support datapath logs which may be compiled out Andrew Rybchenko
2020-10-13 13:45 ` [dpdk-dev] [PATCH 16/36] net/sfc: implement EF100 native Rx datapath Andrew Rybchenko
2020-10-13 13:45 ` [dpdk-dev] [PATCH 17/36] net/sfc: implement EF100 native Tx datapath Andrew Rybchenko
2020-10-13 13:45 ` [dpdk-dev] [PATCH 18/36] net/sfc: support multi-segment transmit for EF100 datapath Andrew Rybchenko
2020-10-13 13:45 ` [dpdk-dev] [PATCH 19/36] net/sfc: support TCP and UDP checksum offloads for EF100 Andrew Rybchenko
2020-10-13 13:45 ` [dpdk-dev] [PATCH 20/36] net/sfc: support IPv4 header checksum offload for EF100 Tx Andrew Rybchenko
2020-10-13 13:45 ` [dpdk-dev] [PATCH 21/36] net/sfc: add header segments check for EF100 Tx datapath Andrew Rybchenko
2020-10-13 13:45 ` [dpdk-dev] [PATCH 22/36] net/sfc: support tunnels for EF100 native " Andrew Rybchenko
2020-10-13 13:45 ` [dpdk-dev] [PATCH 23/36] net/sfc: support TSO for EF100 native datapath Andrew Rybchenko
2020-10-13 13:45 ` [dpdk-dev] [PATCH 24/36] net/sfc: support tunnel TSO for EF100 native Tx datapath Andrew Rybchenko
2020-10-13 13:45 ` [dpdk-dev] [PATCH 25/36] net/sfc: support Tx VLAN insertion offload for EF100 Andrew Rybchenko
2020-10-13 13:45 ` [dpdk-dev] [PATCH 26/36] net/sfc: support Rx checksum " Andrew Rybchenko
2020-10-13 13:45 ` [dpdk-dev] [PATCH 27/36] common/sfc_efx/base: simplify to request Rx prefix fields Andrew Rybchenko
2020-10-13 13:45 ` [dpdk-dev] [PATCH 28/36] common/sfc_efx/base: provide control to deliver RSS hash Andrew Rybchenko
2020-10-13 13:45 ` [dpdk-dev] [PATCH 29/36] common/sfc_efx/base: provide helper to check Rx prefix Andrew Rybchenko
2020-10-13 13:45 ` [dpdk-dev] [PATCH 30/36] net/sfc: map Rx offload RSS hash to corresponding RxQ flag Andrew Rybchenko
2020-10-13 13:45 ` [dpdk-dev] [PATCH 31/36] net/sfc: support per-queue Rx prefix for EF100 Andrew Rybchenko
2020-10-13 13:45 ` [dpdk-dev] [PATCH 32/36] net/sfc: support per-queue Rx RSS hash offload " Andrew Rybchenko
2020-10-13 13:45 ` [dpdk-dev] [PATCH 33/36] net/sfc: support user mark and flag Rx " Andrew Rybchenko
2020-10-13 13:45 ` [dpdk-dev] [PATCH 34/36] net/sfc: forward function control window offset to datapath Andrew Rybchenko
2020-10-13 13:45 ` [dpdk-dev] [PATCH 35/36] net/sfc: add Rx interrupts support for EF100 Andrew Rybchenko
2020-10-13 13:45 ` [dpdk-dev] [PATCH 36/36] doc: advertise Alveo SN1000 SmartNICs family support Andrew Rybchenko
2020-10-14 10:41   ` Ferruh Yigit
2020-10-14 11:15     ` Andrew Rybchenko
2020-10-14 10:41 ` [dpdk-dev] [PATCH 00/36] net/sfc: add EF100 support Ferruh Yigit
