patches for DPDK stable branches
* [dpdk-stable] [PATCH] net/mlx5: fix minimum size of Multi-Packet Rx queue
From: Yongseok Koh @ 2018-08-08 19:32 UTC (permalink / raw)
  To: shahafs; +Cc: dev, Yongseok Koh, stable

The size of the Rx queue is determined by dividing the number of
descriptors by the number of strides. As the device can't support a
single-slot queue, MPRQ shouldn't be enabled if the number of
descriptors is the same as the number of strides; otherwise, it will
cause a HW fault. For example, if rxd is set to 512 with testpmd on
ConnectX-4 Lx, the PMD can't receive more than 512 packets because the
minimum number of strides for ConnectX-4 Lx is 512. Users have to
configure a larger number of descriptors in this case.

Fixes: 7d6bf6b866b8 ("net/mlx5: add Multi-Packet Rx support")
Cc: stable@dpdk.org

Signed-off-by: Yongseok Koh <yskoh@mellanox.com>
---
 drivers/net/mlx5/mlx5_rxq.c | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index 8b4c1b1a14..1f7bfd4414 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -1356,7 +1356,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 		sizeof(struct rte_mbuf_ext_shared_info) +
 		RTE_PKTMBUF_HEADROOM;
 	if (mprq_en &&
-	    desc >= (1U << config->mprq.stride_num_n) &&
+	    desc > (1U << config->mprq.stride_num_n) &&
 	    mprq_stride_size <= (1U << config->mprq.max_stride_size_n)) {
 		/* TODO: Rx scatter isn't supported yet. */
 		tmpl->rxq.sges_n = 0;
@@ -1411,6 +1411,14 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 			dev->data->dev_conf.rxmode.max_rx_pkt_len,
 			mb_len - RTE_PKTMBUF_HEADROOM);
 	}
+	if (mprq_en && !mlx5_rxq_mprq_enabled(&tmpl->rxq))
+		DRV_LOG(WARNING,
+			"port %u MPRQ is requested but cannot be enabled"
+			" (requested: desc = %u, stride_sz = %u,"
+			" supported: min_stride_num = %u, max_stride_sz = %u).",
+			dev->data->port_id, desc, mprq_stride_size,
+			(1 << config->mprq.stride_num_n),
+			(1 << config->mprq.max_stride_size_n));
 	DRV_LOG(DEBUG, "port %u maximum number of segments per packet: %u",
 		dev->data->port_id, 1 << tmpl->rxq.sges_n);
 	if (desc % (1 << tmpl->rxq.sges_n)) {
-- 
2.11.0
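
For reference, a minimal standalone sketch (not part of the patch) of the
sizing arithmetic behind the fix. stride_num_n = 9 (512 strides) is assumed
from the ConnectX-4 Lx example in the commit message, and the local names
(strides, slots) are illustrative rather than taken from the driver:

#include <stdio.h>

int
main(void)
{
	unsigned int stride_num_n = 9;             /* log2(strides per queue entry) */
	unsigned int strides = 1U << stride_num_n; /* 512 strides */
	unsigned int desc = 512;                   /* e.g. testpmd --rxd=512 */
	unsigned int slots = desc / strides;       /* Rx queue size in MPRQ mode */

	/*
	 * The old check (desc >= strides) accepts slots == 1, a single-slot
	 * queue the device can't support; the fixed check (desc > strides)
	 * requires at least two slots, otherwise MPRQ stays disabled.
	 */
	printf("queue slots = %u -> %s\n", slots,
	       desc > strides ? "MPRQ enabled" : "MPRQ disabled");
	return 0;
}

With these values the sketch prints "queue slots = 1 -> MPRQ disabled",
matching the behavior the strict comparison enforces.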


* Re: [dpdk-stable] [dpdk-dev] [PATCH] net/mlx5: fix minimum size of Multi-Packet Rx queue
From: Thomas Monjalon @ 2018-08-09 10:20 UTC (permalink / raw)
  To: Yongseok Koh; +Cc: dev, shahafs, stable

08/08/2018 21:32, Yongseok Koh:
> The size of the Rx queue is determined by dividing the number of
> descriptors by the number of strides. As the device can't support a
> single-slot queue, MPRQ shouldn't be enabled if the number of
> descriptors is the same as the number of strides; otherwise, it will
> cause a HW fault. For example, if rxd is set to 512 with testpmd on
> ConnectX-4 Lx, the PMD can't receive more than 512 packets because the
> minimum number of strides for ConnectX-4 Lx is 512. Users have to
> configure a larger number of descriptors in this case.
> 
> Fixes: 7d6bf6b866b8 ("net/mlx5: add Multi-Packet Rx support")
> Cc: stable@dpdk.org
> 
> Signed-off-by: Yongseok Koh <yskoh@mellanox.com>

Applied, thanks
