patches for DPDK stable branches
* [dpdk-stable] [PATCH 01/17] net/mlx5: fix ASO SQ creation error flow
       [not found] <1608205475-20067-1-git-send-email-michaelba@nvidia.com>
@ 2020-12-17 11:44 ` Michael Baum
       [not found]   ` <1609231944-29274-1-git-send-email-michaelba@nvidia.com>
  0 siblings, 1 reply; 5+ messages in thread
From: Michael Baum @ 2020-12-17 11:44 UTC
  To: dev; +Cc: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko, stable

In ASO SQ creation, the PMD allocates a umem buffer for the SQ.

When the umem buffer allocation fails, the MR and CQ memory are not
freed, which causes a memory leak.

Free them in the error flow.

Fixes: f935ed4b645a ("net/mlx5: support flow hit action for aging")
Cc: stable@dpdk.org

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
 drivers/net/mlx5/mlx5_flow_age.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/net/mlx5/mlx5_flow_age.c b/drivers/net/mlx5/mlx5_flow_age.c
index cea2cf7..0ea61be 100644
--- a/drivers/net/mlx5/mlx5_flow_age.c
+++ b/drivers/net/mlx5/mlx5_flow_age.c
@@ -278,7 +278,8 @@
 				   sizeof(*sq->db_rec) * 2, 4096, socket);
 	if (!sq->umem_buf) {
 		DRV_LOG(ERR, "Can't allocate wqe buffer.");
-		return -ENOMEM;
+		rte_errno = ENOMEM;
+		goto error;
 	}
 	sq->wqe_umem = mlx5_glue->devx_umem_reg(ctx,
 						(void *)(uintptr_t)sq->umem_buf,
-- 
1.8.3.1



* [dpdk-stable] [PATCH v2 01/17] net/mlx5: fix ASO SQ creation error flow
       [not found]   ` <1609231944-29274-1-git-send-email-michaelba@nvidia.com>
@ 2020-12-29  8:52     ` Michael Baum
       [not found]       ` <1609921181-5019-1-git-send-email-michaelba@nvidia.com>
  0 siblings, 1 reply; 5+ messages in thread
From: Michael Baum @ 2020-12-29  8:52 UTC
  To: dev; +Cc: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko, stable

In ASO SQ creation, the PMD allocates a umem buffer for the SQ.

When the umem buffer allocation fails, the MR and CQ memory are not
freed, which causes a memory leak.

Free them in the error flow.

Fixes: f935ed4b645a ("net/mlx5: support flow hit action for aging")
Cc: stable@dpdk.org

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
 drivers/net/mlx5/mlx5_flow_age.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/net/mlx5/mlx5_flow_age.c b/drivers/net/mlx5/mlx5_flow_age.c
index cea2cf7..0ea61be 100644
--- a/drivers/net/mlx5/mlx5_flow_age.c
+++ b/drivers/net/mlx5/mlx5_flow_age.c
@@ -278,7 +278,8 @@
 				   sizeof(*sq->db_rec) * 2, 4096, socket);
 	if (!sq->umem_buf) {
 		DRV_LOG(ERR, "Can't allocate wqe buffer.");
-		return -ENOMEM;
+		rte_errno = ENOMEM;
+		goto error;
 	}
 	sq->wqe_umem = mlx5_glue->devx_umem_reg(ctx,
 						(void *)(uintptr_t)sq->umem_buf,
-- 
1.8.3.1



* [dpdk-stable] [PATCH v3 01/19] common/mlx5: fix completion queue entry size configuration
       [not found]       ` <1609921181-5019-1-git-send-email-michaelba@nvidia.com>
@ 2021-01-06  8:19         ` Michael Baum
  2021-01-06  8:19         ` [dpdk-stable] [PATCH v3 02/19] net/mlx5: remove CQE padding device argument Michael Baum
  2021-01-06  8:19         ` [dpdk-stable] [PATCH v3 03/19] net/mlx5: fix ASO SQ creation error flow Michael Baum
  2 siblings, 0 replies; 5+ messages in thread
From: Michael Baum @ 2021-01-06  8:19 UTC
  To: dev; +Cc: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko, stable

According to the current data-path implementation in the PMD, the CQE
size must follow the cache-line size, so the CQE size configuration
should depend on RTE_CACHE_LINE_SIZE.

However, some of the CQE creations did not follow this rule, causing an
incompatibility between HW and SW in the data-path on systems with a
128B cache-line size.

Apply the rule to every CQE creation: remove the cqe_size attribute
from the DevX CQ creation command and set it inside the command
translation according to the cache-line size.

Fixes: 79a7e409a2f6 ("common/mlx5: prepare support of packet pacing")
Fixes: 5cd0a83f413e ("common/mlx5: support more fields in DevX CQ create")
Cc: stable@dpdk.org

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
 drivers/common/mlx5/mlx5_devx_cmds.c | 4 ++--
 drivers/common/mlx5/mlx5_devx_cmds.h | 1 -
 drivers/net/mlx5/mlx5_devx.c         | 4 ----
 drivers/net/mlx5/mlx5_txpp.c         | 4 ----
 4 files changed, 2 insertions(+), 11 deletions(-)

diff --git a/drivers/common/mlx5/mlx5_devx_cmds.c b/drivers/common/mlx5/mlx5_devx_cmds.c
index 12f51a9..59f0bcc 100644
--- a/drivers/common/mlx5/mlx5_devx_cmds.c
+++ b/drivers/common/mlx5/mlx5_devx_cmds.c
@@ -1569,7 +1569,8 @@ struct mlx5_devx_obj *
 	} else {
 		MLX5_SET64(cqc, cqctx, dbr_addr, attr->db_addr);
 	}
-	MLX5_SET(cqc, cqctx, cqe_sz, attr->cqe_size);
+	MLX5_SET(cqc, cqctx, cqe_sz, (RTE_CACHE_LINE_SIZE == 128) ?
+				     MLX5_CQE_SIZE_128B : MLX5_CQE_SIZE_64B);
 	MLX5_SET(cqc, cqctx, cc, attr->use_first_only);
 	MLX5_SET(cqc, cqctx, oi, attr->overrun_ignore);
 	MLX5_SET(cqc, cqctx, log_cq_size, attr->log_cq_size);
@@ -1582,7 +1583,6 @@ struct mlx5_devx_obj *
 		 attr->mini_cqe_res_format);
 	MLX5_SET(cqc, cqctx, mini_cqe_res_format_ext,
 		 attr->mini_cqe_res_format_ext);
-	MLX5_SET(cqc, cqctx, cqe_sz, attr->cqe_size);
 	if (attr->q_umem_valid) {
 		MLX5_SET(create_cq_in, in, cq_umem_valid, attr->q_umem_valid);
 		MLX5_SET(create_cq_in, in, cq_umem_id, attr->q_umem_id);
diff --git a/drivers/common/mlx5/mlx5_devx_cmds.h b/drivers/common/mlx5/mlx5_devx_cmds.h
index b335b7c..a14f3bf 100644
--- a/drivers/common/mlx5/mlx5_devx_cmds.h
+++ b/drivers/common/mlx5/mlx5_devx_cmds.h
@@ -277,7 +277,6 @@ struct mlx5_devx_cq_attr {
 	uint32_t cqe_comp_en:1;
 	uint32_t mini_cqe_res_format:2;
 	uint32_t mini_cqe_res_format_ext:2;
-	uint32_t cqe_size:3;
 	uint32_t log_cq_size:5;
 	uint32_t log_page_size:5;
 	uint32_t uar_page_id;
diff --git a/drivers/net/mlx5/mlx5_devx.c b/drivers/net/mlx5/mlx5_devx.c
index da3bb78..5c5bea6 100644
--- a/drivers/net/mlx5/mlx5_devx.c
+++ b/drivers/net/mlx5/mlx5_devx.c
@@ -486,8 +486,6 @@
 			"Port %u Rx CQE compression is disabled for LRO.",
 			dev->data->port_id);
 	}
-	if (priv->config.cqe_pad)
-		cq_attr.cqe_size = MLX5_CQE_SIZE_128B;
 	log_cqe_n = log2above(cqe_n);
 	cq_size = sizeof(struct mlx5_cqe) * (1 << log_cqe_n);
 	buf = rte_calloc_socket(__func__, 1, cq_size, page_size,
@@ -1262,8 +1260,6 @@
 		DRV_LOG(ERR, "Failed to allocate CQ door-bell.");
 		goto error;
 	}
-	cq_attr.cqe_size = (sizeof(struct mlx5_cqe) == 128) ?
-			    MLX5_CQE_SIZE_128B : MLX5_CQE_SIZE_64B;
 	cq_attr.uar_page_id = mlx5_os_get_devx_uar_page_id(priv->sh->tx_uar);
 	cq_attr.eqn = priv->sh->eqn;
 	cq_attr.q_umem_valid = 1;
diff --git a/drivers/net/mlx5/mlx5_txpp.c b/drivers/net/mlx5/mlx5_txpp.c
index 726bdc6a..e998de3 100644
--- a/drivers/net/mlx5/mlx5_txpp.c
+++ b/drivers/net/mlx5/mlx5_txpp.c
@@ -278,8 +278,6 @@
 		goto error;
 	}
 	/* Create completion queue object for Rearm Queue. */
-	cq_attr.cqe_size = (sizeof(struct mlx5_cqe) == 128) ?
-			    MLX5_CQE_SIZE_128B : MLX5_CQE_SIZE_64B;
 	cq_attr.uar_page_id = mlx5_os_get_devx_uar_page_id(sh->tx_uar);
 	cq_attr.eqn = sh->eqn;
 	cq_attr.q_umem_valid = 1;
@@ -516,8 +514,6 @@
 		goto error;
 	}
 	/* Create completion queue object for Clock Queue. */
-	cq_attr.cqe_size = (sizeof(struct mlx5_cqe) == 128) ?
-			    MLX5_CQE_SIZE_128B : MLX5_CQE_SIZE_64B;
 	cq_attr.use_first_only = 1;
 	cq_attr.overrun_ignore = 1;
 	cq_attr.uar_page_id = mlx5_os_get_devx_uar_page_id(sh->tx_uar);
-- 
1.8.3.1



* [dpdk-stable] [PATCH v3 02/19] net/mlx5: remove CQE padding device argument
       [not found]       ` <1609921181-5019-1-git-send-email-michaelba@nvidia.com>
  2021-01-06  8:19         ` [dpdk-stable] [PATCH v3 01/19] common/mlx5: fix completion queue entry size configuration Michael Baum
@ 2021-01-06  8:19         ` Michael Baum
  2021-01-06  8:19         ` [dpdk-stable] [PATCH v3 03/19] net/mlx5: fix ASO SQ creation error flow Michael Baum
  2 siblings, 0 replies; 5+ messages in thread
From: Michael Baum @ 2021-01-06  8:19 UTC
  To: dev; +Cc: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko, stable

The data-path code does not take 'rxq_cqe_pad_en' into account and uses
a padded CQE in any case where the system cache-line size is 128B.

This makes the device argument redundant.

Remove it.

Fixes: bc91e8db12cd ("net/mlx5: add 128B padding of Rx completion entry")
Cc: stable@dpdk.org

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
 doc/guides/nics/mlx5.rst            | 18 ------------------
 drivers/net/mlx5/linux/mlx5_os.c    | 12 ------------
 drivers/net/mlx5/linux/mlx5_verbs.c |  2 +-
 drivers/net/mlx5/mlx5.c             |  6 ------
 drivers/net/mlx5/mlx5.h             |  1 -
 drivers/net/mlx5/windows/mlx5_os.c  |  7 -------
 6 files changed, 1 insertion(+), 45 deletions(-)

diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index 3bda0f8..6950cc1 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -448,24 +448,6 @@ Driver options
   - POWER9 and ARMv8 with ConnectX-4 Lx, ConnectX-5, ConnectX-6, ConnectX-6 Dx,
     ConnectX-6 Lx, BlueField and BlueField-2.
 
-- ``rxq_cqe_pad_en`` parameter [int]
-
-  A nonzero value enables 128B padding of CQE on RX side. The size of CQE
-  is aligned with the size of a cacheline of the core. If cacheline size is
-  128B, the CQE size is configured to be 128B even though the device writes
-  only 64B data on the cacheline. This is to avoid unnecessary cache
-  invalidation by device's two consecutive writes on to one cacheline.
-  However in some architecture, it is more beneficial to update entire
-  cacheline with padding the rest 64B rather than striding because
-  read-modify-write could drop performance a lot. On the other hand,
-  writing extra data will consume more PCIe bandwidth and could also drop
-  the maximum throughput. It is recommended to empirically set this
-  parameter. Disabled by default.
-
-  Supported on:
-
-  - CPU having 128B cacheline with ConnectX-5 and BlueField.
-
 - ``rxq_pkt_pad_en`` parameter [int]
 
   A nonzero value enables padding Rx packet to the size of cacheline on PCI
diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c
index 6812a1f..9ac1d46 100644
--- a/drivers/net/mlx5/linux/mlx5_os.c
+++ b/drivers/net/mlx5/linux/mlx5_os.c
@@ -677,7 +677,6 @@
 	unsigned int hw_padding = 0;
 	unsigned int mps;
 	unsigned int cqe_comp;
-	unsigned int cqe_pad = 0;
 	unsigned int tunnel_en = 0;
 	unsigned int mpls_en = 0;
 	unsigned int swp = 0;
@@ -875,11 +874,6 @@
 	else
 		cqe_comp = 1;
 	config->cqe_comp = cqe_comp;
-#ifdef HAVE_IBV_MLX5_MOD_CQE_128B_PAD
-	/* Whether device supports 128B Rx CQE padding. */
-	cqe_pad = RTE_CACHE_LINE_SIZE == 128 &&
-		  (dv_attr.flags & MLX5DV_CONTEXT_FLAGS_CQE_128B_PAD);
-#endif
 #ifdef HAVE_IBV_DEVICE_TUNNEL_SUPPORT
 	if (dv_attr.comp_mask & MLX5DV_CONTEXT_MASK_TUNNEL_OFFLOADS) {
 		tunnel_en = ((dv_attr.tunnel_offloads_caps &
@@ -1116,12 +1110,6 @@
 		DRV_LOG(WARNING, "Rx CQE compression isn't supported");
 		config->cqe_comp = 0;
 	}
-	if (config->cqe_pad && !cqe_pad) {
-		DRV_LOG(WARNING, "Rx CQE padding isn't supported");
-		config->cqe_pad = 0;
-	} else if (config->cqe_pad) {
-		DRV_LOG(INFO, "Rx CQE padding is enabled");
-	}
 	if (config->devx) {
 		err = mlx5_devx_cmd_query_hca_attr(sh->ctx, &config->hca_attr);
 		if (err) {
diff --git a/drivers/net/mlx5/linux/mlx5_verbs.c b/drivers/net/mlx5/linux/mlx5_verbs.c
index b52ae2e..318e39b 100644
--- a/drivers/net/mlx5/linux/mlx5_verbs.c
+++ b/drivers/net/mlx5/linux/mlx5_verbs.c
@@ -234,7 +234,7 @@
 			dev->data->port_id);
 	}
 #ifdef HAVE_IBV_MLX5_MOD_CQE_128B_PAD
-	if (priv->config.cqe_pad) {
+	if (RTE_CACHE_LINE_SIZE == 128) {
 		cq_attr.mlx5.comp_mask |= MLX5DV_CQ_INIT_ATTR_MASK_FLAGS;
 		cq_attr.mlx5.flags |= MLX5DV_CQ_INIT_ATTR_FLAGS_CQE_PAD;
 	}
diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index 023ef50..91492c5 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -44,9 +44,6 @@
 /* Device parameter to enable RX completion queue compression. */
 #define MLX5_RXQ_CQE_COMP_EN "rxq_cqe_comp_en"
 
-/* Device parameter to enable RX completion entry padding to 128B. */
-#define MLX5_RXQ_CQE_PAD_EN "rxq_cqe_pad_en"
-
 /* Device parameter to enable padding Rx packet to cacheline size. */
 #define MLX5_RXQ_PKT_PAD_EN "rxq_pkt_pad_en"
 
@@ -1625,8 +1622,6 @@ struct mlx5_dev_ctx_shared *
 		}
 		config->cqe_comp = !!tmp;
 		config->cqe_comp_fmt = tmp;
-	} else if (strcmp(MLX5_RXQ_CQE_PAD_EN, key) == 0) {
-		config->cqe_pad = !!tmp;
 	} else if (strcmp(MLX5_RXQ_PKT_PAD_EN, key) == 0) {
 		config->hw_padding = !!tmp;
 	} else if (strcmp(MLX5_RX_MPRQ_EN, key) == 0) {
@@ -1755,7 +1750,6 @@ struct mlx5_dev_ctx_shared *
 {
 	const char **params = (const char *[]){
 		MLX5_RXQ_CQE_COMP_EN,
-		MLX5_RXQ_CQE_PAD_EN,
 		MLX5_RXQ_PKT_PAD_EN,
 		MLX5_RX_MPRQ_EN,
 		MLX5_RX_MPRQ_LOG_STRIDE_NUM,
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 41034f5..92a5d04 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -212,7 +212,6 @@ struct mlx5_dev_config {
 	unsigned int mpls_en:1; /* MPLS over GRE/UDP is enabled. */
 	unsigned int cqe_comp:1; /* CQE compression is enabled. */
 	unsigned int cqe_comp_fmt:3; /* CQE compression format. */
-	unsigned int cqe_pad:1; /* CQE padding is enabled. */
 	unsigned int tso:1; /* Whether TSO is supported. */
 	unsigned int rx_vec_en:1; /* Rx vector is enabled. */
 	unsigned int mr_ext_memseg_en:1;
diff --git a/drivers/net/mlx5/windows/mlx5_os.c b/drivers/net/mlx5/windows/mlx5_os.c
index fdd69fd..b036432 100644
--- a/drivers/net/mlx5/windows/mlx5_os.c
+++ b/drivers/net/mlx5/windows/mlx5_os.c
@@ -313,7 +313,6 @@
 	struct mlx5_priv *priv = NULL;
 	int err = 0;
 	unsigned int cqe_comp;
-	unsigned int cqe_pad = 0;
 	struct rte_ether_addr mac;
 	char name[RTE_ETH_NAME_MAX_LEN];
 	int own_domain_id = 0;
@@ -461,12 +460,6 @@
 		DRV_LOG(WARNING, "Rx CQE compression isn't supported.");
 		config->cqe_comp = 0;
 	}
-	if (config->cqe_pad && !cqe_pad) {
-		DRV_LOG(WARNING, "Rx CQE padding isn't supported.");
-		config->cqe_pad = 0;
-	} else if (config->cqe_pad) {
-		DRV_LOG(INFO, "Rx CQE padding is enabled.");
-	}
 	if (config->devx) {
 		err = mlx5_devx_cmd_query_hca_attr(sh->ctx, &config->hca_attr);
 		if (err) {
-- 
1.8.3.1



* [dpdk-stable] [PATCH v3 03/19] net/mlx5: fix ASO SQ creation error flow
       [not found]       ` <1609921181-5019-1-git-send-email-michaelba@nvidia.com>
  2021-01-06  8:19         ` [dpdk-stable] [PATCH v3 01/19] common/mlx5: fix completion queue entry size configuration Michael Baum
  2021-01-06  8:19         ` [dpdk-stable] [PATCH v3 02/19] net/mlx5: remove CQE padding device argument Michael Baum
@ 2021-01-06  8:19         ` Michael Baum
  2 siblings, 0 replies; 5+ messages in thread
From: Michael Baum @ 2021-01-06  8:19 UTC
  To: dev; +Cc: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko, stable

In ASO SQ creation, the PMD allocates a umem buffer for the SQ.

When the umem buffer allocation fails, the MR and CQ memory are not
freed, which causes a memory leak.

Free them in the error flow.

Fixes: f935ed4b645a ("net/mlx5: support flow hit action for aging")
Cc: stable@dpdk.org

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
 drivers/net/mlx5/mlx5_flow_age.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/net/mlx5/mlx5_flow_age.c b/drivers/net/mlx5/mlx5_flow_age.c
index 1f15f19..e867607 100644
--- a/drivers/net/mlx5/mlx5_flow_age.c
+++ b/drivers/net/mlx5/mlx5_flow_age.c
@@ -279,7 +279,8 @@
 				   sizeof(*sq->db_rec) * 2, 4096, socket);
 	if (!sq->umem_buf) {
 		DRV_LOG(ERR, "Can't allocate wqe buffer.");
-		return -ENOMEM;
+		rte_errno = ENOMEM;
+		goto error;
 	}
 	sq->wqe_umem = mlx5_os_umem_reg(ctx,
 						(void *)(uintptr_t)sq->umem_buf,
-- 
1.8.3.1


