* [dpdk-dev] [PATCH 01/11] net/mlx5: fix Rx scatter mode validation
From: Matan Azrad @ 2019-07-29 11:53 UTC
To: Shahaf Shuler, Yongseok Koh, Viacheslav Ovsiienko
Cc: dev, Dekel Peled, stable
If the mbuf size of the Rx mempool supplied by the user in the Rx queue
setup cannot hold the maximum Rx packet length in addition to the mbuf
head-room, the Rx scatter offload must be configured; otherwise a
single mbuf does not have enough space for a packet of the maximum Rx
packet length.
The PMD did not return an error in this case.
Return an error when scatter is not configured and the mempool mbufs
are too small.
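For illustration, an application hitting the new check can either
enlarge the mempool mbufs or request scattered Rx; a minimal sketch
using standard DPDK APIs (the pool name and sizes are illustrative,
not part of the patch):

	/* Option 1: size each mbuf for head-room + max Rx packet length. */
	uint32_t data_room = RTE_PKTMBUF_HEADROOM + max_rx_pkt_len;
	struct rte_mempool *mp = rte_pktmbuf_pool_create("rx_pool", 8192,
							 256, 0, data_room,
							 rte_socket_id());

	/* Option 2: keep smaller mbufs and enable scattered Rx instead. */
	port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_SCATTER;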
Fixes: 7d6bf6b866b8 ("net/mlx5: add Multi-Packet Rx support")
Fixes: edad38fcd00e ("net/mlx: enhance Rx scatter mode detection")
Cc: stable@dpdk.org
Signed-off-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
---
drivers/net/mlx5/mlx5_rxq.c | 39 ++++++++++++++++++++-------------------
1 file changed, 20 insertions(+), 19 deletions(-)
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index 441f158..dc878f2 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -1616,7 +1616,20 @@ struct mlx5_rxq_ctrl *
uint64_t offloads = conf->offloads |
dev->data->dev_conf.rxmode.offloads;
const int mprq_en = mlx5_check_mprq_support(dev) > 0;
-
+ unsigned int max_rx_pkt_len = dev->data->dev_conf.rxmode.max_rx_pkt_len;
+ unsigned int non_scatter_min_mbuf_size = max_rx_pkt_len +
+ RTE_PKTMBUF_HEADROOM;
+
+ if (non_scatter_min_mbuf_size > mb_len && !(offloads &
+ DEV_RX_OFFLOAD_SCATTER)) {
+ DRV_LOG(ERR, "port %u Rx queue %u: Scatter offload is not"
+ " configured and no enough mbuf space(%u) to contain "
+ "the maximum RX packet length(%u) with head-room(%u)",
+ dev->data->port_id, idx, mb_len, max_rx_pkt_len,
+ RTE_PKTMBUF_HEADROOM);
+ rte_errno = ENOSPC;
+ return NULL;
+ }
tmpl = rte_calloc_socket("RXQ", 1,
sizeof(*tmpl) +
desc_n * sizeof(struct rte_mbuf *),
@@ -1642,9 +1655,8 @@ struct mlx5_rxq_ctrl *
* stride.
* Otherwise, enable Rx scatter if necessary.
*/
- assert(mb_len >= RTE_PKTMBUF_HEADROOM * strd_headroom_en);
- mprq_stride_size = dev->data->dev_conf.rxmode.max_rx_pkt_len +
- RTE_PKTMBUF_HEADROOM * strd_headroom_en;
+ mprq_stride_size = max_rx_pkt_len + RTE_PKTMBUF_HEADROOM *
+ strd_headroom_en;
if (mprq_en &&
desc > (1U << config->mprq.stride_num_n) &&
mprq_stride_size <= (1U << config->mprq.max_stride_size_n)) {
@@ -1666,13 +1678,10 @@ struct mlx5_rxq_ctrl *
" strd_num_n = %u, strd_sz_n = %u",
dev->data->port_id, idx,
tmpl->rxq.strd_num_n, tmpl->rxq.strd_sz_n);
- } else if (dev->data->dev_conf.rxmode.max_rx_pkt_len <=
- (mb_len - RTE_PKTMBUF_HEADROOM)) {
+ } else if (max_rx_pkt_len <= (mb_len - RTE_PKTMBUF_HEADROOM)) {
tmpl->rxq.sges_n = 0;
} else if (offloads & DEV_RX_OFFLOAD_SCATTER) {
- unsigned int size =
- RTE_PKTMBUF_HEADROOM +
- dev->data->dev_conf.rxmode.max_rx_pkt_len;
+ unsigned int size = non_scatter_min_mbuf_size;
unsigned int sges_n;
/*
@@ -1684,24 +1693,16 @@ struct mlx5_rxq_ctrl *
/* Make sure rxq.sges_n did not overflow. */
size = mb_len * (1 << tmpl->rxq.sges_n);
size -= RTE_PKTMBUF_HEADROOM;
- if (size < dev->data->dev_conf.rxmode.max_rx_pkt_len) {
+ if (size < max_rx_pkt_len) {
DRV_LOG(ERR,
"port %u too many SGEs (%u) needed to handle"
" requested maximum packet size %u",
dev->data->port_id,
1 << sges_n,
- dev->data->dev_conf.rxmode.max_rx_pkt_len);
+ max_rx_pkt_len);
rte_errno = EOVERFLOW;
goto error;
}
- } else {
- DRV_LOG(WARNING,
- "port %u the requested maximum Rx packet size (%u) is"
- " larger than a single mbuf (%u) and scattered mode has"
- " not been requested",
- dev->data->port_id,
- dev->data->dev_conf.rxmode.max_rx_pkt_len,
- mb_len - RTE_PKTMBUF_HEADROOM);
}
if (mprq_en && !mlx5_rxq_mprq_enabled(&tmpl->rxq))
DRV_LOG(WARNING,
--
1.8.3.1
* [dpdk-dev] [PATCH 02/11] net/mlx5: limit LRO size to the maximum Rx packet
From: Matan Azrad @ 2019-07-29 11:53 UTC
To: Shahaf Shuler, Yongseok Koh, Viacheslav Ovsiienko; +Cc: dev, Dekel Peled
The field max_rx_pkt_len in the Rx configuration indicates the maximum
size of a packet to be received.
There was no field to indicate the maximum size of an LRO packet to be
received by the application.
Assuming the user configures max_rx_pkt_len as the maximum LRO packet
length when LRO is configured on the port, limit the maximum LRO packet
size received from HW to max_rx_pkt_len.
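The resulting clamping can be summarized as follows (a simplified
sketch of the logic in the diff below, omitting the L4 start-offset
mode adjustment; the TIR field is expressed in units of 256 bytes,
hence the 65280-byte ceiling):

	/* MLX5_MAX_LRO_SIZE == UINT8_MAX * 256u == 65280 bytes. */
	max_lro_size = RTE_MIN(max_rx_pkt_len, strd_n * strd_sz);
	max_lro_size = RTE_MIN(max_lro_size, MLX5_MAX_LRO_SIZE);
	lro_max_msg_sz = max_lro_size / 256u; /* programmed to the TIR */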
Signed-off-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
---
drivers/net/mlx5/mlx5_prm.h | 5 +++++
drivers/net/mlx5/mlx5_rxq.c | 38 ++++++++++++++++++--------------------
2 files changed, 23 insertions(+), 20 deletions(-)
diff --git a/drivers/net/mlx5/mlx5_prm.h b/drivers/net/mlx5/mlx5_prm.h
index 32bc7a6..0716bbd 100644
--- a/drivers/net/mlx5/mlx5_prm.h
+++ b/drivers/net/mlx5/mlx5_prm.h
@@ -1465,6 +1465,11 @@ enum {
MLX5_TIRC_SELF_LB_BLOCK_BLOCK_MULTICAST = 0x2,
};
+enum {
+ MLX5_LRO_MAX_MSG_SIZE_START_FROM_L4 = 0x0,
+ MLX5_LRO_MAX_MSG_SIZE_START_FROM_L2 = 0x1,
+};
+
struct mlx5_ifc_tirc_bits {
u8 reserved_at_0[0x20];
u8 disp_type[0x4];
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index dc878f2..bd26ee2 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -1543,37 +1543,35 @@ struct mlx5_rxq_obj *
return 0;
}
+#define MLX5_MAX_LRO_SIZE (UINT8_MAX * 256u)
+#define MLX5_MAX_TCP_HDR_OFFSET ((unsigned int)(sizeof(struct rte_ether_hdr) + \
+ sizeof(struct rte_vlan_hdr) * 2 + \
+ sizeof(struct rte_ipv6_hdr)))
/**
* Adjust the maximum LRO massage size.
- * LRO massage is contained in the MPRQ strides.
- * While the LRO massage size cannot be bigger than 65280 according to the
- * PRM, the strides which contain it may be bigger.
- * Adjust the maximum LRO massage size to avoid the above option.
*
* @param dev
* Pointer to Ethernet device.
- * @param strd_n
- * Number of strides per WQE..
- * @param strd_sz
- * The stride size.
+ * @param max_lro_size
+ * The maximum size for LRO packet.
*/
static void
-mlx5_max_lro_msg_size_adjust(struct rte_eth_dev *dev, uint32_t strd_n,
- uint32_t strd_sz)
+mlx5_max_lro_msg_size_adjust(struct rte_eth_dev *dev, uint32_t max_lro_size)
{
struct mlx5_priv *priv = dev->data->dev_private;
- uint32_t max_buf_len = strd_sz * strd_n;
- if (max_buf_len > (uint64_t)UINT16_MAX)
- max_buf_len = RTE_ALIGN_FLOOR((uint32_t)UINT16_MAX, strd_sz);
- max_buf_len /= 256;
- max_buf_len = RTE_MIN(max_buf_len, (uint32_t)UINT8_MAX);
- assert(max_buf_len);
+ if (priv->config.hca_attr.lro_max_msg_sz_mode ==
+ MLX5_LRO_MAX_MSG_SIZE_START_FROM_L4 && max_lro_size >
+ MLX5_MAX_TCP_HDR_OFFSET)
+ max_lro_size -= MLX5_MAX_TCP_HDR_OFFSET;
+ max_lro_size = RTE_MIN(max_lro_size, MLX5_MAX_LRO_SIZE);
+ assert(max_lro_size >= 256u);
+ max_lro_size /= 256u;
if (priv->max_lro_msg_size)
priv->max_lro_msg_size =
- RTE_MIN((uint32_t)priv->max_lro_msg_size, max_buf_len);
+ RTE_MIN((uint32_t)priv->max_lro_msg_size, max_lro_size);
else
- priv->max_lro_msg_size = max_buf_len;
+ priv->max_lro_msg_size = max_lro_size;
}
/**
@@ -1671,8 +1669,8 @@ struct mlx5_rxq_ctrl *
tmpl->rxq.strd_headroom_en = strd_headroom_en;
tmpl->rxq.mprq_max_memcpy_len = RTE_MIN(mb_len -
RTE_PKTMBUF_HEADROOM, config->mprq.max_memcpy_len);
- mlx5_max_lro_msg_size_adjust(dev, (1 << tmpl->rxq.strd_num_n),
- (1 << tmpl->rxq.strd_sz_n));
+ mlx5_max_lro_msg_size_adjust(dev, RTE_MIN(max_rx_pkt_len,
+ (1u << tmpl->rxq.strd_num_n) * (1u << tmpl->rxq.strd_sz_n)));
DRV_LOG(DEBUG,
"port %u Rx queue %u: Multi-Packet RQ is enabled"
" strd_num_n = %u, strd_sz_n = %u",
--
1.8.3.1
* [dpdk-dev] [PATCH 03/11] net/mlx5: remove redundant offload flag reset
From: Matan Azrad @ 2019-07-29 11:53 UTC
To: Shahaf Shuler, Yongseok Koh, Viacheslav Ovsiienko; +Cc: dev, Dekel Peled
When an mbuf is allocated by rte_pktmbuf_alloc, its offload flags are
already reset, so the data-path functions should not reset them again.
Remove the redundant offload flag reset from the MPRQ data-path.
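For reference, rte_pktmbuf_alloc() in rte_mbuf.h already resets the
returned mbuf, which clears ol_flags among other fields (paraphrased):

	static inline struct rte_mbuf *rte_pktmbuf_alloc(struct rte_mempool *mp)
	{
		struct rte_mbuf *m;

		if ((m = rte_mbuf_raw_alloc(mp)) != NULL)
			rte_pktmbuf_reset(m); /* sets m->ol_flags = 0 */
		return m;
	}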
Signed-off-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
---
drivers/net/mlx5/mlx5_rxtx.c | 2 --
1 file changed, 2 deletions(-)
diff --git a/drivers/net/mlx5/mlx5_rxtx.c b/drivers/net/mlx5/mlx5_rxtx.c
index 007df8f..a7ec73d 100644
--- a/drivers/net/mlx5/mlx5_rxtx.c
+++ b/drivers/net/mlx5/mlx5_rxtx.c
@@ -1623,8 +1623,6 @@ enum mlx5_txcmp_code {
len -= RTE_ETHER_CRC_LEN;
offset = strd_idx * strd_sz + strd_shift;
addr = RTE_PTR_ADD(mlx5_mprq_buf_addr(buf, strd_n), offset);
- /* Initialize the offload flag. */
- pkt->ol_flags = 0;
/*
* Memcpy packets to the target mbuf if:
* - The size of packet is smaller than mprq_max_memcpy_len.
--
1.8.3.1
* [dpdk-dev] [PATCH 04/11] net/mlx5: support mbuf headroom for LRO packet
From: Matan Azrad @ 2019-07-29 11:53 UTC
To: Shahaf Shuler, Yongseok Koh, Viacheslav Ovsiienko; +Cc: dev, Dekel Peled
Patch [1] zeroes the mbuf headroom when the port is configured with LRO
because, when working with more than one stride per packet, the HW
cannot guarantee a headroom in the starting stride of each packet.
Change the solution to support the mbuf headroom by adding an empty
buffer as the first packet segment; scatter mode must be enabled to
support it.
[1] http://patches.dpdk.org/patch/56912/
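The resulting mbuf layout can be sketched as follows (illustrative
only, mirroring the data-path change in the diff below):

	/* When the stride cannot provide headroom (strd_headroom_en == 0):
	 *
	 *   seg0: empty mbuf, contributes only its head-room space
	 *     `-> seg1: attached to the MPRQ stride, holds the whole
	 *               LRO packet starting at stride offset 0
	 *
	 * NB_SEGS == 2, hence the scatter offload must be enabled.
	 */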
Signed-off-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
---
doc/guides/nics/mlx5.rst | 3 +--
drivers/net/mlx5/mlx5_rxq.c | 24 ++++++++++++++++--------
drivers/net/mlx5/mlx5_rxtx.c | 22 +++++++++++++++++++++-
3 files changed, 38 insertions(+), 11 deletions(-)
diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index 92f1b97..cd550f4 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -165,8 +165,7 @@ Limitations
- LRO:
- - No mbuf headroom space is created for RX packets when LRO is configured.
- - ``scatter_fcs`` is disabled when LRO is configured.
+ - scatter_fcs is disabled when LRO is configured.
Statistics
----------
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index bd26ee2..d10c5c1 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -1599,12 +1599,7 @@ struct mlx5_rxq_ctrl *
unsigned int mb_len = rte_pktmbuf_data_room_size(mp);
unsigned int mprq_stride_size;
struct mlx5_dev_config *config = &priv->config;
- /*
- * LRO packet may consume all the stride memory, hence we cannot
- * guaranty head-room. A new striding RQ feature may be added in CX6 DX
- * to allow head-room and tail-room for the LRO packets.
- */
- unsigned int strd_headroom_en = mlx5_lro_on(dev) ? 0 : 1;
+ unsigned int strd_headroom_en;
/*
* Always allocate extra slots, even if eventually
* the vector Rx will not be used.
@@ -1645,6 +1640,21 @@ struct mlx5_rxq_ctrl *
if (dev->data->dev_conf.intr_conf.rxq)
tmpl->irq = 1;
/*
+ * LRO packet may consume all the stride memory, hence we cannot
+ * guaranty head-room near the packet memory in the stride.
+ * In this case scatter is, for sure, enabled and an empty mbuf may be
+ * added in the start for the head-room.
+ */
+ if (mlx5_lro_on(dev) && RTE_PKTMBUF_HEADROOM > 0 &&
+ non_scatter_min_mbuf_size > mb_len) {
+ strd_headroom_en = 0;
+ mprq_stride_size = RTE_MIN(max_rx_pkt_len,
+ 1u << config->mprq.max_stride_size_n);
+ } else {
+ strd_headroom_en = 1;
+ mprq_stride_size = non_scatter_min_mbuf_size;
+ }
+ /*
* This Rx queue can be configured as a Multi-Packet RQ if all of the
* following conditions are met:
* - MPRQ is enabled.
@@ -1653,8 +1663,6 @@ struct mlx5_rxq_ctrl *
* stride.
* Otherwise, enable Rx scatter if necessary.
*/
- mprq_stride_size = max_rx_pkt_len + RTE_PKTMBUF_HEADROOM *
- strd_headroom_en;
if (mprq_en &&
desc > (1U << config->mprq.stride_num_n) &&
mprq_stride_size <= (1U << config->mprq.max_stride_size_n)) {
diff --git a/drivers/net/mlx5/mlx5_rxtx.c b/drivers/net/mlx5/mlx5_rxtx.c
index a7ec73d..003eefd 100644
--- a/drivers/net/mlx5/mlx5_rxtx.c
+++ b/drivers/net/mlx5/mlx5_rxtx.c
@@ -1639,6 +1639,7 @@ enum mlx5_txcmp_code {
continue;
}
rte_memcpy(rte_pktmbuf_mtod(pkt, void *), addr, len);
+ DATA_LEN(pkt) = len;
} else {
rte_iova_t buf_iova;
struct rte_mbuf_ext_shared_info *shinfo;
@@ -1679,6 +1680,26 @@ enum mlx5_txcmp_code {
++rxq->stats.idropped;
continue;
}
+ DATA_LEN(pkt) = len;
+ /*
+ * LRO packet may consume all the stride memory, in this
+ * case packet head-room space is not guaranteed so must
+ * to add an empty mbuf for the head-room.
+ */
+ if (!rxq->strd_headroom_en) {
+ struct rte_mbuf *headroom_mbuf =
+ rte_pktmbuf_alloc(rxq->mp);
+
+ if (unlikely(headroom_mbuf == NULL)) {
+ rte_pktmbuf_free_seg(pkt);
+ ++rxq->stats.rx_nombuf;
+ break;
+ }
+ PORT(pkt) = rxq->port_id;
+ NEXT(headroom_mbuf) = pkt;
+ pkt = headroom_mbuf;
+ NB_SEGS(pkt) = 2;
+ }
}
rxq_cq_to_mbuf(rxq, pkt, cqe, rss_hash_res);
if (lro_num_seg > 1) {
@@ -1687,7 +1708,6 @@ enum mlx5_txcmp_code {
pkt->tso_segsz = strd_sz;
}
PKT_LEN(pkt) = len;
- DATA_LEN(pkt) = len;
PORT(pkt) = rxq->port_id;
#ifdef MLX5_PMD_SOFT_COUNTERS
/* Increment bytes counter. */
--
1.8.3.1
* [dpdk-dev] [PATCH 05/11] net/mlx5: fix DevX scattered Rx queue size
From: Matan Azrad @ 2019-07-29 11:53 UTC
To: Shahaf Shuler, Yongseok Koh, Viacheslav Ovsiienko; +Cc: dev, Dekel Peled
The WQ size configuration via DevX did not take into account the
maximum number of segments per packet, which wrongly configured a
bigger WQE size than the size expected by the PMD in other places.
The scattered-mode stride size should be the segment size multiplied
by the maximum number of segments per packet.
The number of WQEs per WQ should be the number of descriptors divided
by the maximum number of segments per packet.
Fix the size calculations according to the above rules.
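A worked example of the corrected rule (values chosen for
illustration):

	/* 512 descriptors (elts_n = 9), up to 4 segments per packet
	 * (sges_n = 2), sizeof(struct mlx5_wqe_data_seg) = 16:
	 *   log_wq_stride = log2(16) + 2 = 6  -> 64B WQE of 4 segments
	 *   log_wq_sz     = 9 - 2 = 7         -> 128 WQEs
	 * The WQ still references 128 * 4 = 512 buffers, as expected.
	 */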
Fixes: dc9ceff73c99 ("net/mlx5: create advanced RxQ via DevX")
Signed-off-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
---
drivers/net/mlx5/mlx5_rxq.c | 14 ++++----------
1 file changed, 4 insertions(+), 10 deletions(-)
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index d10c5c1..c95627e 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -1090,7 +1090,7 @@
struct mlx5_rxq_ctrl *rxq_ctrl =
container_of(rxq_data, struct mlx5_rxq_ctrl, rxq);
struct mlx5_devx_create_rq_attr rq_attr;
- uint32_t wqe_n = 1 << rxq_data->elts_n;
+ uint32_t wqe_n = 1 << (rxq_data->elts_n - rxq_data->sges_n);
uint32_t wq_size = 0;
uint32_t wqe_size = 0;
uint32_t log_wqe_size = 0;
@@ -1118,17 +1118,11 @@
MLX5_MIN_SINGLE_STRIDE_LOG_NUM_BYTES;
wqe_size = sizeof(struct mlx5_wqe_mprq);
} else {
- int max_sge = 0;
- int num_scatter = 0;
-
- rq_attr.wq_attr.wq_type = MLX5_WQ_TYPE_CYCLIC;
- max_sge = 1 << rxq_data->sges_n;
- num_scatter = RTE_MAX(max_sge, 1);
- wqe_size = sizeof(struct mlx5_wqe_data_seg) * num_scatter;
+ wqe_size = sizeof(struct mlx5_wqe_data_seg);
}
- log_wqe_size = log2above(wqe_size);
+ log_wqe_size = log2above(wqe_size) + rxq_data->sges_n;
rq_attr.wq_attr.log_wq_stride = log_wqe_size;
- rq_attr.wq_attr.log_wq_sz = rxq_data->elts_n;
+ rq_attr.wq_attr.log_wq_sz = rxq_data->elts_n - rxq_data->sges_n;
/* Calculate and allocate WQ memory space. */
wqe_size = 1 << log_wqe_size; /* round up power of two.*/
wq_size = wqe_n * wqe_size;
--
1.8.3.1
* [dpdk-dev] [PATCH 06/11] net/mlx5: fix DevX Rx queue type
From: Matan Azrad @ 2019-07-29 11:53 UTC
To: Shahaf Shuler, Yongseok Koh, Viacheslav Ovsiienko; +Cc: dev, Dekel Peled
When the Rx queue is not in striding RQ mode, it should be configured
as a cyclic RQ.
In this case the type remained 0, which means the linked-list type.
Set the RQ type to cyclic when the queue is not in striding RQ mode.
Fixes: dc9ceff73c99 ("net/mlx5: create advanced RxQ via DevX")
Signed-off-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
---
drivers/net/mlx5/mlx5_rxq.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index c95627e..5e54156 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -1118,6 +1118,7 @@
MLX5_MIN_SINGLE_STRIDE_LOG_NUM_BYTES;
wqe_size = sizeof(struct mlx5_wqe_mprq);
} else {
+ rq_attr.wq_attr.wq_type = MLX5_WQ_TYPE_CYCLIC;
wqe_size = sizeof(struct mlx5_wqe_data_seg);
}
log_wqe_size = log2above(wqe_size) + rxq_data->sges_n;
--
1.8.3.1
* [dpdk-dev] [PATCH 07/11] net/mlx5: allow LRO in regular Rx queue
From: Matan Azrad @ 2019-07-29 11:53 UTC
To: Shahaf Shuler, Yongseok Koh, Viacheslav Ovsiienko; +Cc: dev, Dekel Peled
LRO was supported only with MPRQ, hence the MPRQ Rx burst was selected
whenever LRO was configured on the port.
The current MPRQ support suffers from poor memory utilization since an
external mempool is allocated by the PMD for the packet data in
addition to the user mempool; besides that, the user may get packet
data addresses which were not configured by him.
Even though MPRQ gives the best Rx performance in most cases, because
of the above facts it is better to remove the automatic MPRQ selection
when LRO is configured.
Select MPRQ only when the user forces it via the PMD arguments (the
mprq_en devarg), including in the LRO case.
Allow the LRO offload using the regular RQ with the regular Rx burst
function.
Signed-off-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
---
drivers/net/mlx5/mlx5.c | 4 +---
drivers/net/mlx5/mlx5_ethdev.c | 6 ------
drivers/net/mlx5/mlx5_prm.h | 3 +++
drivers/net/mlx5/mlx5_rxq.c | 27 ++++++++++++++-------------
drivers/net/mlx5/mlx5_rxtx.h | 4 ++--
drivers/net/mlx5/mlx5_rxtx_vec.c | 2 ++
6 files changed, 22 insertions(+), 24 deletions(-)
diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index ad0883d..a490bf2 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -1856,7 +1856,7 @@ struct mlx5_dev_spawn_data {
if (priv->counter_fallback)
DRV_LOG(INFO, "Use fall-back DV counter management\n");
/* Check for LRO support. */
- if (config.dest_tir && mprq && config.hca_attr.lro_cap) {
+ if (config.dest_tir && config.hca_attr.lro_cap) {
/* TBD check tunnel lro caps. */
config.lro.supported = config.hca_attr.lro_cap;
DRV_LOG(DEBUG, "Device supports LRO");
@@ -1869,8 +1869,6 @@ struct mlx5_dev_spawn_data {
config.hca_attr.lro_timer_supported_periods[0];
DRV_LOG(DEBUG, "LRO session timeout set to %d usec",
config.lro.timeout);
- config.mprq.enabled = 1;
- DRV_LOG(DEBUG, "Enable MPRQ for LRO use");
}
}
if (config.mprq.enabled && mprq) {
diff --git a/drivers/net/mlx5/mlx5_ethdev.c b/drivers/net/mlx5/mlx5_ethdev.c
index e627909..9d11831 100644
--- a/drivers/net/mlx5/mlx5_ethdev.c
+++ b/drivers/net/mlx5/mlx5_ethdev.c
@@ -433,12 +433,6 @@ struct ethtool_link_settings {
dev->data->port_id, priv->rxqs_n, rxqs_n);
priv->rxqs_n = rxqs_n;
/*
- * WHen using LRO, MPRQ is implicitly enabled.
- * Adjust threshold value to ensure MPRQ can be enabled.
- */
- if (lro_on && priv->config.mprq.min_rxqs_num > priv->rxqs_n)
- priv->config.mprq.min_rxqs_num = priv->rxqs_n;
- /*
* If the requested number of RX queues is not a power of two,
* use the maximum indirection table size for better balancing.
* The result is always rounded to the next power of two.
diff --git a/drivers/net/mlx5/mlx5_prm.h b/drivers/net/mlx5/mlx5_prm.h
index 0716bbd..6ea6345 100644
--- a/drivers/net/mlx5/mlx5_prm.h
+++ b/drivers/net/mlx5/mlx5_prm.h
@@ -237,6 +237,9 @@
/* Amount of data bytes after eth data segment. */
#define MLX5_ESEG_EXTRA_DATA_SIZE 32u
+/* The maximum log value of segments per RQ WQE. */
+#define MLX5_MAX_LOG_RQ_SEGS 5u
+
/* Completion mode. */
enum mlx5_completion_mode {
MLX5_COMP_ONLY_ERR = 0x0,
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index 5e54156..ad5b0a9 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -93,7 +93,6 @@
/**
* Check whether Multi-Packet RQ is enabled for the device.
- * MPRQ can be enabled explicitly, or implicitly by enabling LRO.
*
* @param dev
* Pointer to Ethernet device.
@@ -1607,6 +1606,7 @@ struct mlx5_rxq_ctrl *
unsigned int max_rx_pkt_len = dev->data->dev_conf.rxmode.max_rx_pkt_len;
unsigned int non_scatter_min_mbuf_size = max_rx_pkt_len +
RTE_PKTMBUF_HEADROOM;
+ unsigned int max_lro_size = 0;
if (non_scatter_min_mbuf_size > mb_len && !(offloads &
DEV_RX_OFFLOAD_SCATTER)) {
@@ -1672,8 +1672,9 @@ struct mlx5_rxq_ctrl *
tmpl->rxq.strd_headroom_en = strd_headroom_en;
tmpl->rxq.mprq_max_memcpy_len = RTE_MIN(mb_len -
RTE_PKTMBUF_HEADROOM, config->mprq.max_memcpy_len);
- mlx5_max_lro_msg_size_adjust(dev, RTE_MIN(max_rx_pkt_len,
- (1u << tmpl->rxq.strd_num_n) * (1u << tmpl->rxq.strd_sz_n)));
+ max_lro_size = RTE_MIN(max_rx_pkt_len,
+ (1u << tmpl->rxq.strd_num_n) *
+ (1u << tmpl->rxq.strd_sz_n));
DRV_LOG(DEBUG,
"port %u Rx queue %u: Multi-Packet RQ is enabled"
" strd_num_n = %u, strd_sz_n = %u",
@@ -1681,6 +1682,7 @@ struct mlx5_rxq_ctrl *
tmpl->rxq.strd_num_n, tmpl->rxq.strd_sz_n);
} else if (max_rx_pkt_len <= (mb_len - RTE_PKTMBUF_HEADROOM)) {
tmpl->rxq.sges_n = 0;
+ max_lro_size = max_rx_pkt_len;
} else if (offloads & DEV_RX_OFFLOAD_SCATTER) {
unsigned int size = non_scatter_min_mbuf_size;
unsigned int sges_n;
@@ -1690,20 +1692,18 @@ struct mlx5_rxq_ctrl *
* and round it to the next power of two.
*/
sges_n = log2above((size / mb_len) + !!(size % mb_len));
- tmpl->rxq.sges_n = sges_n;
- /* Make sure rxq.sges_n did not overflow. */
- size = mb_len * (1 << tmpl->rxq.sges_n);
- size -= RTE_PKTMBUF_HEADROOM;
- if (size < max_rx_pkt_len) {
+ if (sges_n > MLX5_MAX_LOG_RQ_SEGS) {
DRV_LOG(ERR,
"port %u too many SGEs (%u) needed to handle"
- " requested maximum packet size %u",
- dev->data->port_id,
- 1 << sges_n,
- max_rx_pkt_len);
- rte_errno = EOVERFLOW;
+ " requested maximum packet size %u, the maximum"
+ " supported are %u", dev->data->port_id,
+ 1 << sges_n, max_rx_pkt_len,
+ 1u << MLX5_MAX_LOG_RQ_SEGS);
+ rte_errno = ENOTSUP;
goto error;
}
+ tmpl->rxq.sges_n = sges_n;
+ max_lro_size = max_rx_pkt_len;
}
if (mprq_en && !mlx5_rxq_mprq_enabled(&tmpl->rxq))
DRV_LOG(WARNING,
@@ -1725,6 +1725,7 @@ struct mlx5_rxq_ctrl *
rte_errno = EINVAL;
goto error;
}
+ mlx5_max_lro_msg_size_adjust(dev, max_lro_size);
/* Toggle RX checksum offload if hardware supports it. */
tmpl->rxq.csum = !!(offloads & DEV_RX_OFFLOAD_CHECKSUM);
tmpl->rxq.hw_timestamp = !!(offloads & DEV_RX_OFFLOAD_TIMESTAMP);
diff --git a/drivers/net/mlx5/mlx5_rxtx.h b/drivers/net/mlx5/mlx5_rxtx.h
index 60d871c..5704d0a 100644
--- a/drivers/net/mlx5/mlx5_rxtx.h
+++ b/drivers/net/mlx5/mlx5_rxtx.h
@@ -105,7 +105,7 @@ struct mlx5_rxq_data {
unsigned int hw_timestamp:1; /* Enable HW timestamp. */
unsigned int vlan_strip:1; /* Enable VLAN stripping. */
unsigned int crc_present:1; /* CRC must be subtracted. */
- unsigned int sges_n:2; /* Log 2 of SGEs (max buffers per packet). */
+ unsigned int sges_n:3; /* Log 2 of SGEs (max buffers per packet). */
unsigned int cqe_n:4; /* Log 2 of CQ elements. */
unsigned int elts_n:4; /* Log 2 of Mbufs. */
unsigned int rss_hash:1; /* RSS hash result is enabled. */
@@ -115,7 +115,7 @@ struct mlx5_rxq_data {
unsigned int strd_shift_en:1; /* Enable 2bytes shift on a stride. */
unsigned int err_state:2; /* enum mlx5_rxq_err_state. */
unsigned int strd_headroom_en:1; /* Enable mbuf headroom in MPRQ. */
- unsigned int :3; /* Remaining bits. */
+ unsigned int :2; /* Remaining bits. */
volatile uint32_t *rq_db;
volatile uint32_t *cq_db;
uint16_t port_id;
diff --git a/drivers/net/mlx5/mlx5_rxtx_vec.c b/drivers/net/mlx5/mlx5_rxtx_vec.c
index f6ec828..3815ff6 100644
--- a/drivers/net/mlx5/mlx5_rxtx_vec.c
+++ b/drivers/net/mlx5/mlx5_rxtx_vec.c
@@ -151,6 +151,8 @@ int __attribute__((cold))
return -ENOTSUP;
if (mlx5_mprq_enabled(dev))
return -ENOTSUP;
+ if (mlx5_lro_on(dev))
+ return -ENOTSUP;
/* All the configured queues should support. */
for (i = 0; i < priv->rxqs_n; ++i) {
struct mlx5_rxq_data *rxq = (*priv->rxqs)[i];
--
1.8.3.1
* [dpdk-dev] [PATCH 08/11] net/mlx5: fix DevX Rx queue memory alignment
From: Matan Azrad @ 2019-07-29 11:53 UTC
To: Shahaf Shuler, Yongseok Koh, Viacheslav Ovsiienko; +Cc: dev, Dekel Peled
The alignment required by the FW for WQ buffer allocation is 512B.
Change it from cache-line alignment to 512B.
Fixes: dc9ceff73c99 ("net/mlx5: create advanced RxQ via DevX")
Signed-off-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
---
drivers/net/mlx5/mlx5_prm.h | 3 +++
drivers/net/mlx5/mlx5_rxq.c | 2 +-
2 files changed, 4 insertions(+), 1 deletion(-)
diff --git a/drivers/net/mlx5/mlx5_prm.h b/drivers/net/mlx5/mlx5_prm.h
index 6ea6345..42ead7d 100644
--- a/drivers/net/mlx5/mlx5_prm.h
+++ b/drivers/net/mlx5/mlx5_prm.h
@@ -240,6 +240,9 @@
/* The maximum log value of segments per RQ WQE. */
#define MLX5_MAX_LOG_RQ_SEGS 5u
+/* The alignment needed for WQ buffer. */
+#define MLX5_WQE_BUF_ALIGNMENT 512
+
/* Completion mode. */
enum mlx5_completion_mode {
MLX5_COMP_ONLY_ERR = 0x0,
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index ad5b0a9..e96bb1e 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -1126,7 +1126,7 @@
/* Calculate and allocate WQ memory space. */
wqe_size = 1 << log_wqe_size; /* round up power of two.*/
wq_size = wqe_n * wqe_size;
- buf = rte_calloc_socket(__func__, 1, wq_size, RTE_CACHE_LINE_SIZE,
+ buf = rte_calloc_socket(__func__, 1, wq_size, MLX5_WQE_BUF_ALIGNMENT,
rxq_ctrl->socket);
if (!buf)
return NULL;
--
1.8.3.1
* [dpdk-dev] [PATCH 09/11] net/mlx5: handle LRO packets in regular Rx queue
From: Matan Azrad @ 2019-07-29 11:53 UTC
To: Shahaf Shuler, Yongseok Koh, Viacheslav Ovsiienko; +Cc: dev, Dekel Peled
When the LRO offload is configured on an Rx queue, the HW may coalesce
TCP packets from the same TCP connection into a single packet.
In this case the SW should fix the relevant packet headers because the
HW does not update them according to the characteristics of the newly
created packet; instead, it provides the updated values in the CQE.
Add header-update code to the regular Rx burst function to support the
LRO feature.
Make sure the first mbuf has enough space to hold the full TCP header,
otherwise the header update would cross mbufs, which complicates the
operation too much.
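The 122B bound quoted in the documentation below is the worst-case
header size in front of the TCP payload:

	/* sizeof(struct rte_ether_hdr)       14
	 * + 2 * sizeof(struct rte_vlan_hdr)   8
	 * + sizeof(struct rte_ipv6_hdr)      40  (= MLX5_MAX_TCP_HDR_OFFSET)
	 * + sizeof(struct rte_tcp_hdr)       20
	 * + MAX_TCP_OPTION_SIZE              40
	 *                                   ---
	 *                                   122  (= MLX5_MAX_LRO_HEADER_FIX)
	 */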
Signed-off-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
---
doc/guides/nics/mlx5.rst | 4 +++-
drivers/net/mlx5/mlx5_rxq.c | 20 +++++++++++++++++---
drivers/net/mlx5/mlx5_rxtx.c | 17 +++++++++++++++++
3 files changed, 37 insertions(+), 4 deletions(-)
diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index cd550f4..6f0c382 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -165,7 +165,9 @@ Limitations
- LRO:
- - scatter_fcs is disabled when LRO is configured.
+ - KEEP_CRC offload cannot be supported with LRO.
+ - The first mbuf length, without head-room, must be big enough to include the
+ TCP header (122B).
Statistics
----------
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index e96bb1e..3705d07 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -1541,6 +1541,11 @@ struct mlx5_rxq_obj *
#define MLX5_MAX_TCP_HDR_OFFSET ((unsigned int)(sizeof(struct rte_ether_hdr) + \
sizeof(struct rte_vlan_hdr) * 2 + \
sizeof(struct rte_ipv6_hdr)))
+#define MAX_TCP_OPTION_SIZE 40u
+#define MLX5_MAX_LRO_HEADER_FIX ((unsigned int)(MLX5_MAX_TCP_HDR_OFFSET + \
+ sizeof(struct rte_tcp_hdr) + \
+ MAX_TCP_OPTION_SIZE))
+
/**
* Adjust the maximum LRO massage size.
*
@@ -1607,6 +1612,7 @@ struct mlx5_rxq_ctrl *
unsigned int non_scatter_min_mbuf_size = max_rx_pkt_len +
RTE_PKTMBUF_HEADROOM;
unsigned int max_lro_size = 0;
+ unsigned int first_mb_free_size = mb_len - RTE_PKTMBUF_HEADROOM;
if (non_scatter_min_mbuf_size > mb_len && !(offloads &
DEV_RX_OFFLOAD_SCATTER)) {
@@ -1670,8 +1676,8 @@ struct mlx5_rxq_ctrl *
config->mprq.min_stride_size_n);
tmpl->rxq.strd_shift_en = MLX5_MPRQ_TWO_BYTE_SHIFT;
tmpl->rxq.strd_headroom_en = strd_headroom_en;
- tmpl->rxq.mprq_max_memcpy_len = RTE_MIN(mb_len -
- RTE_PKTMBUF_HEADROOM, config->mprq.max_memcpy_len);
+ tmpl->rxq.mprq_max_memcpy_len = RTE_MIN(first_mb_free_size,
+ config->mprq.max_memcpy_len);
max_lro_size = RTE_MIN(max_rx_pkt_len,
(1u << tmpl->rxq.strd_num_n) *
(1u << tmpl->rxq.strd_sz_n));
@@ -1680,13 +1686,21 @@ struct mlx5_rxq_ctrl *
" strd_num_n = %u, strd_sz_n = %u",
dev->data->port_id, idx,
tmpl->rxq.strd_num_n, tmpl->rxq.strd_sz_n);
- } else if (max_rx_pkt_len <= (mb_len - RTE_PKTMBUF_HEADROOM)) {
+ } else if (max_rx_pkt_len <= first_mb_free_size) {
tmpl->rxq.sges_n = 0;
max_lro_size = max_rx_pkt_len;
} else if (offloads & DEV_RX_OFFLOAD_SCATTER) {
unsigned int size = non_scatter_min_mbuf_size;
unsigned int sges_n;
+ if (mlx5_lro_on(dev) && first_mb_free_size <
+ MLX5_MAX_LRO_HEADER_FIX) {
+ DRV_LOG(ERR, "Not enough space in the first segment(%u)"
+ " to include the max header size(%u) for LRO",
+ first_mb_free_size, MLX5_MAX_LRO_HEADER_FIX);
+ rte_errno = ENOTSUP;
+ goto error;
+ }
/*
* Determine the number of SGEs needed for a full packet
* and round it to the next power of two.
diff --git a/drivers/net/mlx5/mlx5_rxtx.c b/drivers/net/mlx5/mlx5_rxtx.c
index 003eefd..6627b54 100644
--- a/drivers/net/mlx5/mlx5_rxtx.c
+++ b/drivers/net/mlx5/mlx5_rxtx.c
@@ -107,6 +107,16 @@ enum mlx5_txcmp_code {
mlx5_queue_state_modify(struct rte_eth_dev *dev,
struct mlx5_mp_arg_queue_state_modify *sm);
+static inline void
+mlx5_lro_update_tcp_hdr(struct rte_tcp_hdr *restrict tcp,
+ volatile struct mlx5_cqe *restrict cqe,
+ uint32_t phcsum);
+
+static inline void
+mlx5_lro_update_hdr(uint8_t *restrict padd,
+ volatile struct mlx5_cqe *restrict cqe,
+ uint32_t len);
+
uint32_t mlx5_ptype_table[] __rte_cache_aligned = {
[0xff] = RTE_PTYPE_ALL_MASK, /* Last entry for errored packet. */
};
@@ -1323,6 +1333,13 @@ enum mlx5_txcmp_code {
if (rxq->crc_present)
len -= RTE_ETHER_CRC_LEN;
PKT_LEN(pkt) = len;
+ if (cqe->lro_num_seg > 1) {
+ mlx5_lro_update_hdr
+ (rte_pktmbuf_mtod(pkt, uint8_t *), cqe,
+ len);
+ pkt->ol_flags |= PKT_RX_LRO;
+ pkt->tso_segsz = len / cqe->lro_num_seg;
+ }
}
DATA_LEN(rep) = DATA_LEN(seg);
PKT_LEN(rep) = PKT_LEN(seg);
--
1.8.3.1
* [dpdk-dev] [PATCH 10/11] net/mlx5: allow implicit LRO flow
From: Matan Azrad @ 2019-07-29 11:53 UTC
To: Shahaf Shuler, Yongseok Koh, Viacheslav Ovsiienko; +Cc: dev, Dekel Peled
When a user configures LRO in the port offloads, he probably wants
each TCP packet to have a chance to open an LRO session.
The PMD did not configure LRO in the flow TIR if the flow did not
explicitly contain a TCP item, even though the flow may carry TCP
traffic.
For example, the following flows were not LRO offloaded:
pattern eth / end, pattern eth / ipv4 / end, pattern eth / ipv6 / end.
Enable the LRO configuration for all the TIRs if LRO is configured on
the port.
There is no performance impact on non-LRO traffic in these TIRs.
Signed-off-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
---
drivers/net/mlx5/mlx5.h | 3 ---
drivers/net/mlx5/mlx5_flow_dv.c | 18 +-----------------
drivers/net/mlx5/mlx5_flow_verbs.c | 3 +--
drivers/net/mlx5/mlx5_rxq.c | 10 +++++-----
drivers/net/mlx5/mlx5_rxtx.h | 2 +-
5 files changed, 8 insertions(+), 28 deletions(-)
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 6cb8858..5c40091 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -198,9 +198,6 @@ struct mlx5_hca_attr {
#define MLX5_LRO_ENABLED(dev) \
((dev)->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_TCP_LRO)
-#define MLX5_FLOW_IPV4_LRO (1 << 0)
-#define MLX5_FLOW_IPV6_LRO (1 << 1)
-
/* LRO configurations structure. */
struct mlx5_lro_config {
uint32_t supported:1; /* Whether LRO is supported. */
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index f1d32bd..59ef716 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -62,9 +62,6 @@
uint32_t attr;
};
-#define MLX5_FLOW_IPV4_LRO (1 << 0)
-#define MLX5_FLOW_IPV6_LRO (1 << 1)
-
/**
* Initialize flow attributes structure according to flow items' types.
*
@@ -5186,26 +5183,13 @@ struct field_modify_info modify_tcp[] = {
(*flow->queue),
flow->rss.queue_num);
if (!hrxq) {
- int lro = 0;
-
- if (mlx5_lro_on(dev)) {
- if ((dev_flow->layers &
- MLX5_FLOW_LAYER_IPV4_LRO)
- == MLX5_FLOW_LAYER_IPV4_LRO)
- lro = MLX5_FLOW_IPV4_LRO;
- else if ((dev_flow->layers &
- MLX5_FLOW_LAYER_IPV6_LRO)
- == MLX5_FLOW_LAYER_IPV6_LRO)
- lro = MLX5_FLOW_IPV6_LRO;
- }
hrxq = mlx5_hrxq_new
(dev, flow->key, MLX5_RSS_HASH_KEY_LEN,
dv->hash_fields, (*flow->queue),
flow->rss.queue_num,
!!(dev_flow->layers &
- MLX5_FLOW_LAYER_TUNNEL), lro);
+ MLX5_FLOW_LAYER_TUNNEL));
}
-
if (!hrxq) {
rte_flow_error_set
(error, rte_errno,
diff --git a/drivers/net/mlx5/mlx5_flow_verbs.c b/drivers/net/mlx5/mlx5_flow_verbs.c
index bcec3b4..fd6f2d5 100644
--- a/drivers/net/mlx5/mlx5_flow_verbs.c
+++ b/drivers/net/mlx5/mlx5_flow_verbs.c
@@ -1669,8 +1669,7 @@
(*flow->queue),
flow->rss.queue_num,
!!(dev_flow->layers &
- MLX5_FLOW_LAYER_TUNNEL),
- 0);
+ MLX5_FLOW_LAYER_TUNNEL));
if (!hrxq) {
rte_flow_error_set
(error, rte_errno,
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index 3705d07..f7e861c 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -2100,8 +2100,6 @@ struct mlx5_rxq_ctrl *
* Number of queues.
* @param tunnel
* Tunnel type.
- * @param lro
- * Flow rule is relevant for LRO, i.e. contains IPv4/IPv6 and TCP.
*
* @return
* The Verbs/DevX object initialised, NULL otherwise and rte_errno is set.
@@ -2111,7 +2109,7 @@ struct mlx5_hrxq *
const uint8_t *rss_key, uint32_t rss_key_len,
uint64_t hash_fields,
const uint16_t *queues, uint32_t queues_n,
- int tunnel __rte_unused, int lro)
+ int tunnel __rte_unused)
{
struct mlx5_priv *priv = dev->data->dev_private;
struct mlx5_hrxq *hrxq;
@@ -2218,11 +2216,13 @@ struct mlx5_hrxq *
if (dev->data->dev_conf.lpbk_mode)
tir_attr.self_lb_block =
MLX5_TIRC_SELF_LB_BLOCK_BLOCK_UNICAST;
- if (lro) {
+ if (mlx5_lro_on(dev)) {
tir_attr.lro_timeout_period_usecs =
priv->config.lro.timeout;
tir_attr.lro_max_msg_sz = priv->max_lro_msg_size;
- tir_attr.lro_enable_mask = lro;
+ tir_attr.lro_enable_mask =
+ MLX5_TIRC_LRO_ENABLE_MASK_IPV4_LRO |
+ MLX5_TIRC_LRO_ENABLE_MASK_IPV6_LRO;
}
tir = mlx5_devx_cmd_create_tir(priv->sh->ctx, &tir_attr);
if (!tir) {
diff --git a/drivers/net/mlx5/mlx5_rxtx.h b/drivers/net/mlx5/mlx5_rxtx.h
index 5704d0a..9b58d0a 100644
--- a/drivers/net/mlx5/mlx5_rxtx.h
+++ b/drivers/net/mlx5/mlx5_rxtx.h
@@ -358,7 +358,7 @@ struct mlx5_hrxq *mlx5_hrxq_new(struct rte_eth_dev *dev,
const uint8_t *rss_key, uint32_t rss_key_len,
uint64_t hash_fields,
const uint16_t *queues, uint32_t queues_n,
- int tunnel __rte_unused, int lro);
+ int tunnel __rte_unused);
struct mlx5_hrxq *mlx5_hrxq_get(struct rte_eth_dev *dev,
const uint8_t *rss_key, uint32_t rss_key_len,
uint64_t hash_fields,
--
1.8.3.1
* [dpdk-dev] [PATCH 11/11] net/mlx5: allow LRO per Rx queue
From: Matan Azrad @ 2019-07-29 11:53 UTC
To: Shahaf Shuler, Yongseok Koh, Viacheslav Ovsiienko; +Cc: dev, Dekel Peled
Enabling the LRO offload per queue makes sense because the user will
probably want to allocate a different mempool for the LRO queues - the
LRO mempool mbuf size may be bigger than that of the non-LRO mempool.
Change the LRO offload to be per queue instead of per port.
If LRO is enabled on at least one of the queues, all the queues will be
configured via DevX.
If RSS flows direct TCP packets to queues with different LRO settings,
these flows will not be offloaded with LRO.
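A minimal application-side sketch of the per-queue usage this enables
(standard ethdev API; pool names and descriptor counts are
illustrative):

	struct rte_eth_rxconf rxconf = dev_info.default_rxconf;

	/* Queue 0: LRO on, backed by a mempool with large mbufs. */
	rxconf.offloads = DEV_RX_OFFLOAD_TCP_LRO;
	rte_eth_rx_queue_setup(port_id, 0, 1024, rte_socket_id(), &rxconf,
			       lro_mp);

	/* Queue 1: LRO off, regular mempool. */
	rxconf.offloads = 0;
	rte_eth_rx_queue_setup(port_id, 1, 1024, rte_socket_id(), &rxconf,
			       default_mp);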
Signed-off-by: Matan Azrad <matan@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
---
drivers/net/mlx5/mlx5.h | 3 ---
drivers/net/mlx5/mlx5_ethdev.c | 8 +------
drivers/net/mlx5/mlx5_rxq.c | 52 +++++++++++++++++++---------------------
drivers/net/mlx5/mlx5_rxtx.h | 6 ++---
drivers/net/mlx5/mlx5_rxtx_vec.c | 4 ++--
drivers/net/mlx5/mlx5_trigger.c | 10 +++++---
6 files changed, 38 insertions(+), 45 deletions(-)
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 5c40091..e812374 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -195,9 +195,6 @@ struct mlx5_hca_attr {
#define MLX5_LRO_SUPPORTED(dev) \
(((struct mlx5_priv *)((dev)->data->dev_private))->config.lro.supported)
-#define MLX5_LRO_ENABLED(dev) \
- ((dev)->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_TCP_LRO)
-
/* LRO configurations structure. */
struct mlx5_lro_config {
uint32_t supported:1; /* Whether LRO is supported. */
diff --git a/drivers/net/mlx5/mlx5_ethdev.c b/drivers/net/mlx5/mlx5_ethdev.c
index 9d11831..9629cfb 100644
--- a/drivers/net/mlx5/mlx5_ethdev.c
+++ b/drivers/net/mlx5/mlx5_ethdev.c
@@ -389,7 +389,6 @@ struct ethtool_link_settings {
const uint8_t use_app_rss_key =
!!dev->data->dev_conf.rx_adv_conf.rss_conf.rss_key;
int ret = 0;
- unsigned int lro_on = mlx5_lro_on(dev);
if (use_app_rss_key &&
(dev->data->dev_conf.rx_adv_conf.rss_conf.rss_key_len !=
@@ -454,11 +453,6 @@ struct ethtool_link_settings {
j = 0;
}
}
- if (lro_on && priv->config.cqe_comp) {
- /* CQE compressing is not supported for LRO CQEs. */
- DRV_LOG(WARNING, "Rx CQE compression isn't supported with LRO");
- priv->config.cqe_comp = 0;
- }
ret = mlx5_proc_priv_init(dev);
if (ret)
return ret;
@@ -571,7 +565,7 @@ struct ethtool_link_settings {
info->max_tx_queues = max;
info->max_mac_addrs = MLX5_MAX_UC_MAC_ADDRESSES;
info->rx_queue_offload_capa = mlx5_get_rx_queue_offloads(dev);
- info->rx_offload_capa = (mlx5_get_rx_port_offloads(dev) |
+ info->rx_offload_capa = (mlx5_get_rx_port_offloads() |
info->rx_queue_offload_capa);
info->tx_offload_capa = mlx5_get_tx_port_offloads(dev);
info->if_index = mlx5_ifindex(dev);
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index f7e861c..a1fdeef 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -124,21 +124,6 @@
}
/**
- * Check whether LRO is supported and enabled for the device.
- *
- * @param dev
- * Pointer to Ethernet device.
- *
- * @return
- * 0 if disabled, 1 if enabled.
- */
-inline int
-mlx5_lro_on(struct rte_eth_dev *dev)
-{
- return (MLX5_LRO_SUPPORTED(dev) && MLX5_LRO_ENABLED(dev));
-}
-
-/**
* Allocate RX queue elements for Multi-Packet RQ.
*
* @param rxq_ctrl
@@ -394,6 +379,8 @@
DEV_RX_OFFLOAD_TCP_CKSUM);
if (config->hw_vlan_strip)
offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+ if (MLX5_LRO_SUPPORTED(dev))
+ offloads |= DEV_RX_OFFLOAD_TCP_LRO;
return offloads;
}
@@ -401,19 +388,14 @@
/**
* Returns the per-port supported offloads.
*
- * @param dev
- * Pointer to Ethernet device.
- *
* @return
* Supported Rx offloads.
*/
uint64_t
-mlx5_get_rx_port_offloads(struct rte_eth_dev *dev)
+mlx5_get_rx_port_offloads(void)
{
uint64_t offloads = DEV_RX_OFFLOAD_VLAN_FILTER;
- if (MLX5_LRO_SUPPORTED(dev))
- offloads |= DEV_RX_OFFLOAD_TCP_LRO;
return offloads;
}
@@ -889,7 +871,8 @@
cq_attr.mlx5 = (struct mlx5dv_cq_init_attr){
.comp_mask = 0,
};
- if (priv->config.cqe_comp && !rxq_data->hw_timestamp) {
+ if (priv->config.cqe_comp && !rxq_data->hw_timestamp &&
+ !rxq_data->lro) {
cq_attr.mlx5.comp_mask |=
MLX5DV_CQ_INIT_ATTR_MASK_COMPRESSED_CQE;
#ifdef HAVE_IBV_DEVICE_STRIDING_RQ_SUPPORT
@@ -911,6 +894,10 @@
"port %u Rx CQE compression is disabled for HW"
" timestamp",
dev->data->port_id);
+ } else if (priv->config.cqe_comp && rxq_data->lro) {
+ DRV_LOG(DEBUG,
+ "port %u Rx CQE compression is disabled for LRO",
+ dev->data->port_id);
}
#ifdef HAVE_IBV_MLX5_MOD_CQE_128B_PAD
if (priv->config.cqe_pad) {
@@ -1607,6 +1594,7 @@ struct mlx5_rxq_ctrl *
desc + config->rx_vec_en * MLX5_VPMD_DESCS_PER_LOOP;
uint64_t offloads = conf->offloads |
dev->data->dev_conf.rxmode.offloads;
+ unsigned int lro_on_queue = !!(offloads & DEV_RX_OFFLOAD_TCP_LRO);
const int mprq_en = mlx5_check_mprq_support(dev) > 0;
unsigned int max_rx_pkt_len = dev->data->dev_conf.rxmode.max_rx_pkt_len;
unsigned int non_scatter_min_mbuf_size = max_rx_pkt_len +
@@ -1646,7 +1634,7 @@ struct mlx5_rxq_ctrl *
* In this case scatter is, for sure, enabled and an empty mbuf may be
* added in the start for the head-room.
*/
- if (mlx5_lro_on(dev) && RTE_PKTMBUF_HEADROOM > 0 &&
+ if (lro_on_queue && RTE_PKTMBUF_HEADROOM > 0 &&
non_scatter_min_mbuf_size > mb_len) {
strd_headroom_en = 0;
mprq_stride_size = RTE_MIN(max_rx_pkt_len,
@@ -1693,7 +1681,7 @@ struct mlx5_rxq_ctrl *
unsigned int size = non_scatter_min_mbuf_size;
unsigned int sges_n;
- if (mlx5_lro_on(dev) && first_mb_free_size <
+ if (lro_on_queue && first_mb_free_size <
MLX5_MAX_LRO_HEADER_FIX) {
DRV_LOG(ERR, "Not enough space in the first segment(%u)"
" to include the max header size(%u) for LRO",
@@ -1747,13 +1735,14 @@ struct mlx5_rxq_ctrl *
tmpl->rxq.vlan_strip = !!(offloads & DEV_RX_OFFLOAD_VLAN_STRIP);
/* By default, FCS (CRC) is stripped by hardware. */
tmpl->rxq.crc_present = 0;
+ tmpl->rxq.lro = lro_on_queue;
if (offloads & DEV_RX_OFFLOAD_KEEP_CRC) {
if (config->hw_fcs_strip) {
/*
* RQs used for LRO-enabled TIRs should not be
* configured to scatter the FCS.
*/
- if (mlx5_lro_on(dev))
+ if (lro_on_queue)
DRV_LOG(WARNING,
"port %u CRC stripping has been "
"disabled but will still be performed "
@@ -2204,7 +2193,16 @@ struct mlx5_hrxq *
}
} else { /* ind_tbl->type == MLX5_IND_TBL_TYPE_DEVX */
struct mlx5_devx_tir_attr tir_attr;
-
+ uint32_t i;
+ uint32_t lro = 1;
+
+ /* Enable TIR LRO only if all the queues were configured for. */
+ for (i = 0; i < queues_n; ++i) {
+ if (!(*priv->rxqs)[queues[i]]->lro) {
+ lro = 0;
+ break;
+ }
+ }
memset(&tir_attr, 0, sizeof(tir_attr));
tir_attr.disp_type = MLX5_TIRC_DISP_TYPE_INDIRECT;
tir_attr.rx_hash_fn = MLX5_RX_HASH_FN_TOEPLITZ;
@@ -2216,7 +2214,7 @@ struct mlx5_hrxq *
if (dev->data->dev_conf.lpbk_mode)
tir_attr.self_lb_block =
MLX5_TIRC_SELF_LB_BLOCK_BLOCK_UNICAST;
- if (mlx5_lro_on(dev)) {
+ if (lro) {
tir_attr.lro_timeout_period_usecs =
priv->config.lro.timeout;
tir_attr.lro_max_msg_sz = priv->max_lro_msg_size;
diff --git a/drivers/net/mlx5/mlx5_rxtx.h b/drivers/net/mlx5/mlx5_rxtx.h
index 9b58d0a..c209d99 100644
--- a/drivers/net/mlx5/mlx5_rxtx.h
+++ b/drivers/net/mlx5/mlx5_rxtx.h
@@ -115,7 +115,8 @@ struct mlx5_rxq_data {
unsigned int strd_shift_en:1; /* Enable 2bytes shift on a stride. */
unsigned int err_state:2; /* enum mlx5_rxq_err_state. */
unsigned int strd_headroom_en:1; /* Enable mbuf headroom in MPRQ. */
- unsigned int :2; /* Remaining bits. */
+ unsigned int lro:1; /* Enable LRO. */
+ unsigned int :1; /* Remaining bits. */
volatile uint32_t *rq_db;
volatile uint32_t *cq_db;
uint16_t port_id;
@@ -367,9 +368,8 @@ struct mlx5_hrxq *mlx5_hrxq_get(struct rte_eth_dev *dev,
int mlx5_hrxq_verify(struct rte_eth_dev *dev);
struct mlx5_hrxq *mlx5_hrxq_drop_new(struct rte_eth_dev *dev);
void mlx5_hrxq_drop_release(struct rte_eth_dev *dev);
-uint64_t mlx5_get_rx_port_offloads(struct rte_eth_dev *dev);
+uint64_t mlx5_get_rx_port_offloads(void);
uint64_t mlx5_get_rx_queue_offloads(struct rte_eth_dev *dev);
-int mlx5_lro_on(struct rte_eth_dev *dev);
/* mlx5_txq.c */
diff --git a/drivers/net/mlx5/mlx5_rxtx_vec.c b/drivers/net/mlx5/mlx5_rxtx_vec.c
index 3815ff6..3925f4d 100644
--- a/drivers/net/mlx5/mlx5_rxtx_vec.c
+++ b/drivers/net/mlx5/mlx5_rxtx_vec.c
@@ -129,6 +129,8 @@ int __attribute__((cold))
return -ENOTSUP;
if (!ctrl->priv->config.rx_vec_en || rxq->sges_n != 0)
return -ENOTSUP;
+ if (rxq->lro)
+ return -ENOTSUP;
return 1;
}
@@ -151,8 +153,6 @@ int __attribute__((cold))
return -ENOTSUP;
if (mlx5_mprq_enabled(dev))
return -ENOTSUP;
- if (mlx5_lro_on(dev))
- return -ENOTSUP;
/* All the configured queues should support. */
for (i = 0; i < priv->rxqs_n; ++i) {
struct mlx5_rxq_data *rxq = (*priv->rxqs)[i];
diff --git a/drivers/net/mlx5/mlx5_trigger.c b/drivers/net/mlx5/mlx5_trigger.c
index 8bc2174..aa323ad 100644
--- a/drivers/net/mlx5/mlx5_trigger.c
+++ b/drivers/net/mlx5/mlx5_trigger.c
@@ -99,10 +99,14 @@
struct mlx5_priv *priv = dev->data->dev_private;
unsigned int i;
int ret = 0;
- unsigned int lro_on = mlx5_lro_on(dev);
- enum mlx5_rxq_obj_type obj_type = lro_on ? MLX5_RXQ_OBJ_TYPE_DEVX_RQ :
- MLX5_RXQ_OBJ_TYPE_IBV;
+ enum mlx5_rxq_obj_type obj_type = MLX5_RXQ_OBJ_TYPE_IBV;
+ for (i = 0; i < priv->rxqs_n; ++i) {
+ if ((*priv->rxqs)[i]->lro) {
+ obj_type = MLX5_RXQ_OBJ_TYPE_DEVX_RQ;
+ break;
+ }
+ }
/* Allocate/reuse/resize mempool for Multi-Packet RQ. */
if (mlx5_mprq_alloc_mp(dev)) {
/* Should not release Rx queues but return immediately. */
--
1.8.3.1
* Re: [dpdk-dev] [PATCH 00/11] net/mlx5: LRO fixes and enhancements
From: Slava Ovsiienko @ 2019-07-29 12:32 UTC
To: Matan Azrad, Shahaf Shuler, Yongseok Koh; +Cc: dev, Dekel Peled
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
> -----Original Message-----
> From: Matan Azrad <matan@mellanox.com>
> Sent: Monday, July 29, 2019 14:53
> To: Shahaf Shuler <shahafs@mellanox.com>; Yongseok Koh
> <yskoh@mellanox.com>; Slava Ovsiienko <viacheslavo@mellanox.com>
> Cc: dev@dpdk.org; Dekel Peled <dekelp@mellanox.com>
> Subject: [PATCH 00/11] net/mlx5: LRO fixes and enhancements
>
> 1. Fixes.
> 2. 3 modes to support headroom with LRO.
> 3. Allow LRO per queue.
>
> Matan Azrad (11):
> net/mlx5: fix Rx scatter mode validation
> net/mlx5: limit LRO size to the maximum Rx packet
> net/mlx5: remove redundant offload flag reset
> net/mlx5: support mbuf headroom for LRO packet
> net/mlx5: fix DevX scattered Rx queue size
> net/mlx5: fix DevX Rx queue type
> net/mlx5: allow LRO in regular Rx queue
> net/mlx5: fix DevX Rx queue memory alignment
> net/mlx5: handle LRO packets in regular Rx queue
> net/mlx5: allow implicit LRO flow
> net/mlx5: allow LRO per Rx queue
>
> doc/guides/nics/mlx5.rst | 5 +-
> drivers/net/mlx5/mlx5.c | 4 +-
> drivers/net/mlx5/mlx5.h | 6 --
> drivers/net/mlx5/mlx5_ethdev.c | 14 +--
> drivers/net/mlx5/mlx5_flow_dv.c | 18 +---
> drivers/net/mlx5/mlx5_flow_verbs.c | 3 +-
> drivers/net/mlx5/mlx5_prm.h | 11 ++
> drivers/net/mlx5/mlx5_rxq.c | 203 ++++++++++++++++++++---------------
> --
> drivers/net/mlx5/mlx5_rxtx.c | 41 +++++++-
> drivers/net/mlx5/mlx5_rxtx.h | 10 +-
> drivers/net/mlx5/mlx5_rxtx_vec.c | 2 +
> drivers/net/mlx5/mlx5_trigger.c | 10 +-
> 12 files changed, 179 insertions(+), 148 deletions(-)
>
> --
> 1.8.3.1
* Re: [dpdk-dev] [PATCH 00/11] net/mlx5: LRO fixes and enhancements
From: Raslan Darawsheh @ 2019-07-29 14:37 UTC
To: Matan Azrad, Shahaf Shuler, Yongseok Koh, Slava Ovsiienko
Cc: dev, Dekel Peled
Hi,
> -----Original Message-----
> From: dev <dev-bounces@dpdk.org> On Behalf Of Matan Azrad
> Sent: Monday, July 29, 2019 2:53 PM
> To: Shahaf Shuler <shahafs@mellanox.com>; Yongseok Koh
> <yskoh@mellanox.com>; Slava Ovsiienko <viacheslavo@mellanox.com>
> Cc: dev@dpdk.org; Dekel Peled <dekelp@mellanox.com>
> Subject: [dpdk-dev] [PATCH 00/11] net/mlx5: LRO fixes and enhancements
>
> 1. Fixes.
> 2. 3 modes to support headroom with LRO.
> 3. Allow LRO per queue.
>
> Matan Azrad (11):
> net/mlx5: fix Rx scatter mode validation
> net/mlx5: limit LRO size to the maximum Rx packet
> net/mlx5: remove redundant offload flag reset
> net/mlx5: support mbuf headroom for LRO packet
> net/mlx5: fix DevX scattered Rx queue size
> net/mlx5: fix DevX Rx queue type
> net/mlx5: allow LRO in regular Rx queue
> net/mlx5: fix DevX Rx queue memory alignment
> net/mlx5: handle LRO packets in regular Rx queue
> net/mlx5: allow implicit LRO flow
> net/mlx5: allow LRO per Rx queue
>
> doc/guides/nics/mlx5.rst | 5 +-
> drivers/net/mlx5/mlx5.c | 4 +-
> drivers/net/mlx5/mlx5.h | 6 --
> drivers/net/mlx5/mlx5_ethdev.c | 14 +--
> drivers/net/mlx5/mlx5_flow_dv.c | 18 +---
> drivers/net/mlx5/mlx5_flow_verbs.c | 3 +-
> drivers/net/mlx5/mlx5_prm.h | 11 ++
> drivers/net/mlx5/mlx5_rxq.c | 203 ++++++++++++++++++++--------------
> ---
> drivers/net/mlx5/mlx5_rxtx.c | 41 +++++++-
> drivers/net/mlx5/mlx5_rxtx.h | 10 +-
> drivers/net/mlx5/mlx5_rxtx_vec.c | 2 +
> drivers/net/mlx5/mlx5_trigger.c | 10 +-
> 12 files changed, 179 insertions(+), 148 deletions(-)
>
> --
> 1.8.3.1
Series applied to next-net-mlx,
Kindest regards,
Raslan Darawsheh