DPDK patches and discussions
* [dpdk-dev] [PATCH] net/mlx5: check Tx queue size overflow
@ 2019-04-30 19:04 Yongseok Koh
  2019-04-30 19:04 ` Yongseok Koh
                   ` (2 more replies)
  0 siblings, 3 replies; 14+ messages in thread
From: Yongseok Koh @ 2019-04-30 19:04 UTC (permalink / raw)
  To: shahafs; +Cc: dev

If Tx packet inlining is enabled, the rdma-core library should allocate a Tx
WQ large enough to support it. It is better for the PMD to calculate the WQ
size from the configured parameters and return an error with an appropriate
message if it exceeds the device capability.
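
For illustration, the check boils down to the arithmetic below. This is a
stand-alone sketch of txq_calc_wqebb_cnt() with made-up example numbers; the
64-byte WQEBB size and the inline length are assumptions for the example, not
values read from a real device.

#include <stdint.h>
#include <stdio.h>

#define WQEBB_SIZE 64u  /* assumed WQEBB size; MLX5_WQE_SIZE in the driver */

/* Round v up to the next power of two (same intent as rte_align32pow2()). */
static uint32_t
align32pow2(uint32_t v)
{
        v--;
        v |= v >> 1; v |= v >> 2; v |= v >> 4;
        v |= v >> 8; v |= v >> 16;
        return v + 1;
}

int
main(void)
{
        const unsigned int desc = 1024;            /* 1 << elts_n, example only */
        const unsigned int max_inline_data = 192;  /* example inline length */
        unsigned int wqe_size = WQEBB_SIZE + max_inline_data;
        unsigned int wqebb_cnt = align32pow2(wqe_size * desc) / WQEBB_SIZE;

        /* 256 * 1024 = 262144 is already a power of two -> 4096 WQEBBs. */
        printf("Tx WQEBB count: %u\n", wqebb_cnt);
        /* The PMD compares this against device_attr.orig_attr.max_qp_wr. */
        return 0;
}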

Signed-off-by: Yongseok Koh <yskoh@mellanox.com>
---
 drivers/net/mlx5/mlx5_txq.c | 35 +++++++++++++++++++++++++++++++----
 1 file changed, 31 insertions(+), 4 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_txq.c b/drivers/net/mlx5/mlx5_txq.c
index 4d55fd413c..dfc9afbe75 100644
--- a/drivers/net/mlx5/mlx5_txq.c
+++ b/drivers/net/mlx5/mlx5_txq.c
@@ -679,6 +679,27 @@ mlx5_txq_ibv_verify(struct rte_eth_dev *dev)
 }
 
 /**
+ * Calculate the total number of WQEBBs for a Tx queue.
+ *
+ * Simplified version of calc_sq_size() in rdma-core.
+ *
+ * @param txq_ctrl
+ *   Pointer to Tx queue control structure.
+ *
+ * @return
+ *   The number of WQEBBs.
+ */
+static int
+txq_calc_wqebb_cnt(struct mlx5_txq_ctrl *txq_ctrl)
+{
+	unsigned int wqe_size;
+	const unsigned int desc = 1 << txq_ctrl->txq.elts_n;
+
+	wqe_size = MLX5_WQE_SIZE + txq_ctrl->max_inline_data;
+	return rte_align32pow2(wqe_size * desc) / MLX5_WQE_SIZE;
+}
+
+/**
  * Set Tx queue parameters from device configuration.
  *
  * @param txq_ctrl
@@ -824,10 +845,16 @@ mlx5_txq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 	tmpl->txq.port_id = dev->data->port_id;
 	tmpl->txq.idx = idx;
 	txq_set_params(tmpl);
-	DRV_LOG(DEBUG, "port %u device_attr.max_qp_wr is %d",
-		dev->data->port_id, priv->sh->device_attr.orig_attr.max_qp_wr);
-	DRV_LOG(DEBUG, "port %u device_attr.max_sge is %d",
-		dev->data->port_id, priv->sh->device_attr.orig_attr.max_sge);
+	if (txq_calc_wqebb_cnt(tmpl) >
+	    priv->sh->device_attr.orig_attr.max_qp_wr) {
+		DRV_LOG(DEBUG,
+			"port %u Tx WQEBB count exceeds the limit (%d),"
+			" try smaller queue size again",
+			dev->data->port_id,
+			priv->sh->device_attr.orig_attr.max_qp_wr);
+		rte_errno = ENOMEM;
+		goto error;
+	}
 	tmpl->txq.elts =
 		(struct rte_mbuf *(*)[1 << tmpl->txq.elts_n])(tmpl + 1);
 	rte_atomic32_inc(&tmpl->refcnt);
-- 
2.11.0

* Re: [dpdk-dev] [PATCH] net/mlx5: check Tx queue size overflow
  2019-04-30 19:04 [dpdk-dev] [PATCH] net/mlx5: check Tx queue size overflow Yongseok Koh
  2019-04-30 19:04 ` Yongseok Koh
@ 2019-04-30 20:46 ` Stephen Hemminger
  2019-04-30 20:46   ` Stephen Hemminger
  2019-05-01  0:43   ` Yongseok Koh
  2019-05-01  1:01 ` [dpdk-dev] [PATCH v2] " Yongseok Koh
  2 siblings, 2 replies; 14+ messages in thread
From: Stephen Hemminger @ 2019-04-30 20:46 UTC (permalink / raw)
  To: Yongseok Koh; +Cc: shahafs, dev

On Tue, 30 Apr 2019 12:04:26 -0700
Yongseok Koh <yskoh@mellanox.com> wrote:

> +	    priv->sh->device_attr.orig_attr.max_qp_wr) {
> +		DRV_LOG(DEBUG,
> +			"port %u Tx WQEBB count exceeds the limit (%d),"
> +			" try smaller queue size again",
> +			dev->data->port_id,

The patch looks good, but it could be improved to make life easier
for the users.

This is an error, why not print it at NOTICE level since DEBUG messages
are usually suppressed.

Please don't break long lines in log messages. The latter part of the message
is obvious, why not skip it.

Also since max_qp_wr is __u32, the print format should be %u

Instead:
		DRV_LOG(NOTICE,
			"port %u Tx WQEBB count (%u) exceeds the limit (%u)",
			dev->data->port_id,
			txq_calc_wqebb_cnt(tmpl),
			priv->sh->device_attr.orig_attr.max_qp_wr);

Also, should it have a Fixes: tag to backport to stable?

* Re: [dpdk-dev] [PATCH] net/mlx5: check Tx queue size overflow
  2019-04-30 20:46 ` Stephen Hemminger
  2019-04-30 20:46   ` Stephen Hemminger
@ 2019-05-01  0:43   ` Yongseok Koh
  2019-05-01  0:43     ` Yongseok Koh
  2019-05-01  0:56     ` Yongseok Koh
  1 sibling, 2 replies; 14+ messages in thread
From: Yongseok Koh @ 2019-05-01  0:43 UTC (permalink / raw)
  To: Stephen Hemminger; +Cc: Shahaf Shuler, dev


> On Apr 30, 2019, at 1:46 PM, Stephen Hemminger <stephen@networkplumber.org> wrote:
> 
> On Tue, 30 Apr 2019 12:04:26 -0700
> Yongseok Koh <yskoh@mellanox.com> wrote:
> 
>> +	    priv->sh->device_attr.orig_attr.max_qp_wr) {
>> +		DRV_LOG(DEBUG,
>> +			"port %u Tx WQEBB count exceeds the limit (%d),"
>> +			" try smaller queue size again",
>> +			dev->data->port_id,
> 
> The patch looks good, but it could be improved to make life easier
> for the users.
> 
> This is an error, why not print it at NOTICE level since DEBUG messages
> are usually suppressed.

Okay, better to set ERROR.

> Please don't break long lines in log messages.

Breaking the string doesn't add a newline to the message, and it is the
mlx5-specific coding style.
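
Adjacent string literals are merged by the compiler, so the user still sees a
single line. A quick stand-alone check (example values only):

#include <stdio.h>

int
main(void)
{
        /* The line break in the source does not put a newline in the output;
         * the two literals become one string at compile time. */
        printf("port 0 Tx WQEBB count exceeds the limit (1024),"
               " try smaller queue size again\n");
        return 0;
}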

> The latter part of the message
> is obvious, why not skip it.

Because not all users know what WQEBB means, I wanted to make the user's life
easier by suggesting what to do.

> Also since max_qp_wr is __u32, the print format should be %u

It was copied from the line being removed, but I agree to change it.

> Instead:
> 		DRV_LOG(NOTICE,
> 			"port %u Tx WQEBB count (%u) exceeds the limit (%u)",
> 			dev->data->port_id,
> 			txq_calc_wqebb_cnt(tmpl),
> 			priv->sh->device_attr.orig_attr.max_qp_wr);
> 
> Also, should it have a Fixes: tag to backport to stable?

Okay, no harm to give more information.


Thanks for the review.
Yongseok

* Re: [dpdk-dev] [PATCH] net/mlx5: check Tx queue size overflow
  2019-05-01  0:43   ` Yongseok Koh
  2019-05-01  0:43     ` Yongseok Koh
@ 2019-05-01  0:56     ` Yongseok Koh
  2019-05-01  0:56       ` Yongseok Koh
  1 sibling, 1 reply; 14+ messages in thread
From: Yongseok Koh @ 2019-05-01  0:56 UTC (permalink / raw)
  To: Stephen Hemminger; +Cc: Shahaf Shuler, dev


> On Apr 30, 2019, at 5:43 PM, Yongseok Koh <yskoh@mellanox.com> wrote:
> 
>> 
>> On Apr 30, 2019, at 1:46 PM, Stephen Hemminger <stephen@networkplumber.org> wrote:
>> 
>> On Tue, 30 Apr 2019 12:04:26 -0700
>> Yongseok Koh <yskoh@mellanox.com> wrote:
>> 
>>> +	    priv->sh->device_attr.orig_attr.max_qp_wr) {
>>> +		DRV_LOG(DEBUG,
>>> +			"port %u Tx WQEBB count exceeds the limit (%d),"
>>> +			" try smaller queue size again",
>>> +			dev->data->port_id,
>> 
>> The patch looks good, but it could be improved to make life easier
>> for the users.
>> 
>> This is an error, why not print it at NOTICE level since DEBUG messages
>> are usually suppressed.
> 
> Okay, better to set ERROR.
> 
>> Please don't break long lines in log messages.
> 
> Breaking the string doesn't add a newline to the message, and it is the
> mlx5-specific coding style.
> 
>> The latter part of the message
>> is obvious, why not skip it.
> 
> Because not all users know what WQEBB means, I wanted to make the user's life
> easier by suggesting what to do.
> 
>> Also since max_qp_wr is __u32, the print format should be %u
> 
> It was copied from the line being removed, but I agree to change it.

No, it is 'int'. Please check 'struct ibv_device_attr'.
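
A tiny stand-alone check against the installed header (assuming the libibverbs
development files are available) shows it fits %d; the value is made up:

#include <infiniband/verbs.h>
#include <stdio.h>

int
main(void)
{
        struct ibv_device_attr attr = { .max_qp_wr = 16384 }; /* example value */

        /* max_qp_wr is a plain int in verbs.h, so %d is the right format. */
        printf("max_qp_wr = %d\n", attr.max_qp_wr);
        return 0;
}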

> 
>> Instead:
>> 		DRV_LOG(NOTICE,
>> 			"port %u Tx WQEBB count (%u) exceeds the limit (%u)",
>> 			dev->data->port_id,
>> 			txq_calc_wqebb_cnt(tmpl),
>> 			priv->sh->device_attr.orig_attr.max_qp_wr);
>> 
>> Also, should it have a Fixes: tag to backport to stable?
> 
> Okay, no harm to give more information.
> 
> 
> Thanks for the review.
> Yongseok

* [dpdk-dev] [PATCH v2] net/mlx5: check Tx queue size overflow
  2019-04-30 19:04 [dpdk-dev] [PATCH] net/mlx5: check Tx queue size overflow Yongseok Koh
  2019-04-30 19:04 ` Yongseok Koh
  2019-04-30 20:46 ` Stephen Hemminger
@ 2019-05-01  1:01 ` Yongseok Koh
  2019-05-01  1:01   ` Yongseok Koh
  2019-05-01  1:15   ` Yongseok Koh
  2 siblings, 2 replies; 14+ messages in thread
From: Yongseok Koh @ 2019-05-01  1:01 UTC (permalink / raw)
  To: shahafs; +Cc: dev, stephen

If Tx packet inlining is enabled, the rdma-core library should allocate a Tx
WQ large enough to support it. It is better for the PMD to calculate the WQ
size from the configured parameters and return an error with an appropriate
message if it exceeds the device capability.
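
From the application side, the failure surfaces when the Tx queue is set up.
A rough sketch of how a caller might react is below; it is illustrative only,
the queue sizes are made up, and the exact negative errno the application sees
is whatever the ethdev layer propagates from the PMD.

#include <errno.h>
#include <stdint.h>

#include <rte_ethdev.h>

/* Illustrative fallback: retry with progressively smaller Tx ring sizes
 * until the PMD accepts one. */
int
setup_txq_with_fallback(uint16_t port_id, uint16_t queue_id,
                        unsigned int socket_id,
                        const struct rte_eth_txconf *txconf)
{
        uint16_t nb_desc;
        int ret = -EINVAL;

        for (nb_desc = 4096; nb_desc >= 256; nb_desc /= 2) {
                ret = rte_eth_tx_queue_setup(port_id, queue_id, nb_desc,
                                             socket_id, txconf);
                if (ret == 0)
                        return 0;
        }
        return ret; /* still failing, give up */
}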

Cc:stable@dpdk.org

Signed-off-by: Yongseok Koh <yskoh@mellanox.com>
---
 drivers/net/mlx5/mlx5_txq.c | 35 +++++++++++++++++++++++++++++++----
 1 file changed, 31 insertions(+), 4 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_txq.c b/drivers/net/mlx5/mlx5_txq.c
index 4d55fd413c..b281c45027 100644
--- a/drivers/net/mlx5/mlx5_txq.c
+++ b/drivers/net/mlx5/mlx5_txq.c
@@ -679,6 +679,27 @@ mlx5_txq_ibv_verify(struct rte_eth_dev *dev)
 }
 
 /**
+ * Calculate the total number of WQEBBs for a Tx queue.
+ *
+ * Simplified version of calc_sq_size() in rdma-core.
+ *
+ * @param txq_ctrl
+ *   Pointer to Tx queue control structure.
+ *
+ * @return
+ *   The number of WQEBBs.
+ */
+static int
+txq_calc_wqebb_cnt(struct mlx5_txq_ctrl *txq_ctrl)
+{
+	unsigned int wqe_size;
+	const unsigned int desc = 1 << txq_ctrl->txq.elts_n;
+
+	wqe_size = MLX5_WQE_SIZE + txq_ctrl->max_inline_data;
+	return rte_align32pow2(wqe_size * desc) / MLX5_WQE_SIZE;
+}
+
+/**
  * Set Tx queue parameters from device configuration.
  *
  * @param txq_ctrl
@@ -824,10 +845,16 @@ mlx5_txq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 	tmpl->txq.port_id = dev->data->port_id;
 	tmpl->txq.idx = idx;
 	txq_set_params(tmpl);
-	DRV_LOG(DEBUG, "port %u device_attr.max_qp_wr is %d",
-		dev->data->port_id, priv->sh->device_attr.orig_attr.max_qp_wr);
-	DRV_LOG(DEBUG, "port %u device_attr.max_sge is %d",
-		dev->data->port_id, priv->sh->device_attr.orig_attr.max_sge);
+	if (txq_calc_wqebb_cnt(tmpl) >
+	    priv->sh->device_attr.orig_attr.max_qp_wr) {
+		DRV_LOG(ERR,
+			"port %u Tx WQEBB count (%d) exceeds the limit (%d),"
+			" try smaller queue size",
+			dev->data->port_id, txq_calc_wqebb_cnt(tmpl),
+			priv->sh->device_attr.orig_attr.max_qp_wr);
+		rte_errno = ENOMEM;
+		goto error;
+	}
 	tmpl->txq.elts =
 		(struct rte_mbuf *(*)[1 << tmpl->txq.elts_n])(tmpl + 1);
 	rte_atomic32_inc(&tmpl->refcnt);
-- 
2.11.0

* Re: [dpdk-dev] [PATCH v2] net/mlx5: check Tx queue size overflow
  2019-05-01  1:01 ` [dpdk-dev] [PATCH v2] " Yongseok Koh
  2019-05-01  1:01   ` Yongseok Koh
@ 2019-05-01  1:15   ` Yongseok Koh
  2019-05-01  1:15     ` Yongseok Koh
  2019-05-01  6:10     ` Shahaf Shuler
  1 sibling, 2 replies; 14+ messages in thread
From: Yongseok Koh @ 2019-05-01  1:15 UTC (permalink / raw)
  To: Shahaf Shuler; +Cc: dev, stephen


> On Apr 30, 2019, at 6:01 PM, Yongseok Koh <yskoh@mellanox.com> wrote:
> 
> If Tx packet inlining is enabled, the rdma-core library should allocate a Tx
> WQ large enough to support it. It is better for the PMD to calculate the WQ
> size from the configured parameters and return an error with an appropriate
> message if it exceeds the device capability.
> 
> Cc:stable@dpdk.org

Shahaf,
There's a checkpatch warning. Please put a space after ':' if you merge it.

Thanks,
Yongseok

> 
> Signed-off-by: Yongseok Koh <yskoh@mellanox.com>
> ---
> drivers/net/mlx5/mlx5_txq.c | 35 +++++++++++++++++++++++++++++++----
> 1 file changed, 31 insertions(+), 4 deletions(-)
> 
> diff --git a/drivers/net/mlx5/mlx5_txq.c b/drivers/net/mlx5/mlx5_txq.c
> index 4d55fd413c..b281c45027 100644
> --- a/drivers/net/mlx5/mlx5_txq.c
> +++ b/drivers/net/mlx5/mlx5_txq.c
> @@ -679,6 +679,27 @@ mlx5_txq_ibv_verify(struct rte_eth_dev *dev)
> }
> 
> /**
> + * Calculate the total number of WQEBBs for a Tx queue.
> + *
> + * Simplified version of calc_sq_size() in rdma-core.
> + *
> + * @param txq_ctrl
> + *   Pointer to Tx queue control structure.
> + *
> + * @return
> + *   The number of WQEBBs.
> + */
> +static int
> +txq_calc_wqebb_cnt(struct mlx5_txq_ctrl *txq_ctrl)
> +{
> +	unsigned int wqe_size;
> +	const unsigned int desc = 1 << txq_ctrl->txq.elts_n;
> +
> +	wqe_size = MLX5_WQE_SIZE + txq_ctrl->max_inline_data;
> +	return rte_align32pow2(wqe_size * desc) / MLX5_WQE_SIZE;
> +}
> +
> +/**
>  * Set Tx queue parameters from device configuration.
>  *
>  * @param txq_ctrl
> @@ -824,10 +845,16 @@ mlx5_txq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
> 	tmpl->txq.port_id = dev->data->port_id;
> 	tmpl->txq.idx = idx;
> 	txq_set_params(tmpl);
> -	DRV_LOG(DEBUG, "port %u device_attr.max_qp_wr is %d",
> -		dev->data->port_id, priv->sh->device_attr.orig_attr.max_qp_wr);
> -	DRV_LOG(DEBUG, "port %u device_attr.max_sge is %d",
> -		dev->data->port_id, priv->sh->device_attr.orig_attr.max_sge);
> +	if (txq_calc_wqebb_cnt(tmpl) >
> +	    priv->sh->device_attr.orig_attr.max_qp_wr) {
> +		DRV_LOG(ERR,
> +			"port %u Tx WQEBB count (%d) exceeds the limit (%d),"
> +			" try smaller queue size",
> +			dev->data->port_id, txq_calc_wqebb_cnt(tmpl),
> +			priv->sh->device_attr.orig_attr.max_qp_wr);
> +		rte_errno = ENOMEM;
> +		goto error;
> +	}
> 	tmpl->txq.elts =
> 		(struct rte_mbuf *(*)[1 << tmpl->txq.elts_n])(tmpl + 1);
> 	rte_atomic32_inc(&tmpl->refcnt);
> -- 
> 2.11.0
> 

* Re: [dpdk-dev] [PATCH v2] net/mlx5: check Tx queue size overflow
  2019-05-01  1:15   ` Yongseok Koh
  2019-05-01  1:15     ` Yongseok Koh
@ 2019-05-01  6:10     ` Shahaf Shuler
  2019-05-01  6:10       ` Shahaf Shuler
  1 sibling, 1 reply; 14+ messages in thread
From: Shahaf Shuler @ 2019-05-01  6:10 UTC (permalink / raw)
  To: Yongseok Koh; +Cc: dev, stephen

Wednesday, May 1, 2019 4:16 AM, Yongseok Koh:
> Subject: Re: [PATCH v2] net/mlx5: check Tx queue size overflow
> 
> 
> > On Apr 30, 2019, at 6:01 PM, Yongseok Koh <yskoh@mellanox.com> wrote:
> >
> > If Tx packet inlining is enabled, the rdma-core library should allocate
> > a Tx WQ large enough to support it. It is better for the PMD to calculate
> > the WQ size from the configured parameters and return an error with an
> > appropriate message if it exceeds the device capability.
> >
> > Cc:stable@dpdk.org
> 
> Shahaf,
> There's a checkpatch warning. Please put a space after ':' if you merge it.

Changed and applied to next-net-mlx, thanks.
