DPDK patches and discussions
* [dpdk-dev] [PATCH v4 0/3] AF_XDP PMD Fixes
@ 2020-02-13  8:49 Ciara Loftus
  2020-02-13  8:49 ` [dpdk-dev] [PATCH v4 1/3] net/af_xdp: fix umem frame size & headroom calculations Ciara Loftus
                   ` (3 more replies)
  0 siblings, 4 replies; 6+ messages in thread
From: Ciara Loftus @ 2020-02-13  8:49 UTC (permalink / raw)
  To: dev; +Cc: xiaolong.ye, Ciara Loftus

This series introduces fixes for the zero-copy path of the AF_XDP PMD.
In zero-copy mode, the mempool objects are mapped directly into the AF_XDP UMEM.
The diagram below depicts the layout of an object in a mempool.

+-----+--------+------+------+-----+-------------+
| mp  | struct | mbuf | mbuf | XDP |             |
| hdr | rte_   | priv |  hr  | hr  |   payload   |
| obj | mbuf   |      |      |     |             |
+-----+--------+------+------+-----+-------------+
  64     128       *     128   256        *

<---------------- frame size -------------------->
<---- frame hr ------------->

1. net/af_xdp: fix umem frame size & headroom calculations
* The previous frame size calculation incorrectly used
mb_pool->private_data_size and didn't include mb_pool->header_size. Instead
of performing a manual calculation, use the rte_mempool_calc_obj_size API
to determine the frame size.
* The previous frame headroom calculation also incorrectly used
mb_pool->private_data_size and didn't include mb_pool->header_size or the
mbuf priv size.

2. net/af_xdp: use correct fill queue addresses
The fill queue addresses should start at the beginning of the mempool
object instead of the beginning of the mbuf. This is because the umem frame
headroom includes the mp hdr obj size. Starting at this point ensures AF_XDP
doesn't write past the available room in the frame, in the case of larger
packets whose size is close to that of the mbuf.

3. net/af_xdp: fix maximum MTU value
The maximum MTU for af_xdp zero copy is equal to the page size less the
frame overhead introduced by AF_XDP (XDP HR = 256) and DPDK (frame hr =
320). This patch updates the value accordingly and removes some constants
that are no longer needed in either zero-copy or copy mode.

v4:
* Deduct XDP_PACKET_HEADROOM from the max MTU for copy mode.

v3:
* Fix send-email issue - use in-reply-to

v2:
* Include mbuf priv size in rx mbuf data_off calculation

Ciara Loftus (3):
  net/af_xdp: fix umem frame size & headroom calculations
  net/af_xdp: use correct fill queue addresses
  net/af_xdp: fix maximum MTU value

 drivers/net/af_xdp/rte_eth_af_xdp.c | 61 +++++++++++++++++------------
 1 file changed, 36 insertions(+), 25 deletions(-)

-- 
2.17.1



* [dpdk-dev] [PATCH v4 1/3] net/af_xdp: fix umem frame size & headroom calculations
  2020-02-13  8:49 [dpdk-dev] [PATCH v4 0/3] AF_XDP PMD Fixes Ciara Loftus
@ 2020-02-13  8:49 ` Ciara Loftus
  2020-02-13  8:49 ` [dpdk-dev] [PATCH v4 2/3] net/af_xdp: use correct fill queue addresses Ciara Loftus
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 6+ messages in thread
From: Ciara Loftus @ 2020-02-13  8:49 UTC (permalink / raw)
  To: dev; +Cc: xiaolong.ye, Ciara Loftus, stable

The previous frame size calculation incorrectly used
mb_pool->private_data_size and didn't include mb_pool->header_size.
Instead of performing a manual calculation, use the
rte_mempool_calc_obj_size API to determine the frame size.

The previous frame headroom calculation also incorrectly used
mb_pool->private_data_size and didn't include mb_pool->header_size or the
mbuf priv size. Fix this.

Fixes: d8a210774e1d ("net/af_xdp: support unaligned umem chunks")
Cc: stable@dpdk.org

Signed-off-by: Ciara Loftus <ciara.loftus@intel.com>
Reviewed-by: Xiaolong Ye <xiaolong.ye@intel.com>
---
 drivers/net/af_xdp/rte_eth_af_xdp.c | 13 ++++++++-----
 1 file changed, 8 insertions(+), 5 deletions(-)

diff --git a/drivers/net/af_xdp/rte_eth_af_xdp.c b/drivers/net/af_xdp/rte_eth_af_xdp.c
index ed784dff19..111ab000cc 100644
--- a/drivers/net/af_xdp/rte_eth_af_xdp.c
+++ b/drivers/net/af_xdp/rte_eth_af_xdp.c
@@ -33,6 +33,7 @@
 #include <rte_log.h>
 #include <rte_memory.h>
 #include <rte_memzone.h>
+#include <rte_mempool.h>
 #include <rte_mbuf.h>
 #include <rte_malloc.h>
 #include <rte_ring.h>
@@ -754,11 +755,13 @@ xsk_umem_info *xdp_umem_configure(struct pmd_internals *internals __rte_unused,
 	void *base_addr = NULL;
 	struct rte_mempool *mb_pool = rxq->mb_pool;
 
-	usr_config.frame_size = rte_pktmbuf_data_room_size(mb_pool) +
-					ETH_AF_XDP_MBUF_OVERHEAD +
-					mb_pool->private_data_size;
-	usr_config.frame_headroom = ETH_AF_XDP_DATA_HEADROOM +
-					mb_pool->private_data_size;
+	usr_config.frame_size = rte_mempool_calc_obj_size(mb_pool->elt_size,
+								mb_pool->flags,
+								NULL);
+	usr_config.frame_headroom = mb_pool->header_size +
+					sizeof(struct rte_mbuf) +
+					rte_pktmbuf_priv_size(mb_pool) +
+					RTE_PKTMBUF_HEADROOM;
 
 	umem = rte_zmalloc_socket("umem", sizeof(*umem), 0, rte_socket_id());
 	if (umem == NULL) {
-- 
2.17.1



* [dpdk-dev] [PATCH v4 2/3] net/af_xdp: use correct fill queue addresses
  2020-02-13  8:49 [dpdk-dev] [PATCH v4 0/3] AF_XDP PMD Fixes Ciara Loftus
  2020-02-13  8:49 ` [dpdk-dev] [PATCH v4 1/3] net/af_xdp: fix umem frame size & headroom calculations Ciara Loftus
@ 2020-02-13  8:49 ` Ciara Loftus
  2020-02-13  8:49 ` [dpdk-dev] [PATCH v4 3/3] net/af_xdp: fix maximum MTU value Ciara Loftus
  2020-02-13 12:18 ` [dpdk-dev] [PATCH v4 0/3] AF_XDP PMD Fixes Ferruh Yigit
  3 siblings, 0 replies; 6+ messages in thread
From: Ciara Loftus @ 2020-02-13  8:49 UTC (permalink / raw)
  To: dev; +Cc: xiaolong.ye, Ciara Loftus, stable

The fill queue addresses should start at the beginning of the mempool
object instead of the beginning of the mbuf. This is because the umem
frame headroom includes the mp hdr obj size. Starting at this point
ensures AF_XDP doesn't write past the available room in the frame, in
the case of larger packets whose size is close to that of the mbuf.

Fixes: d8a210774e1d ("net/af_xdp: support unaligned umem chunks")
Cc: stable@dpdk.org

Signed-off-by: Ciara Loftus <ciara.loftus@intel.com>
Reviewed-by: Xiaolong Ye <xiaolong.ye@intel.com>
---
 drivers/net/af_xdp/rte_eth_af_xdp.c | 25 +++++++++++++++++--------
 1 file changed, 17 insertions(+), 8 deletions(-)

diff --git a/drivers/net/af_xdp/rte_eth_af_xdp.c b/drivers/net/af_xdp/rte_eth_af_xdp.c
index 111ab000cc..a0edfc3cd3 100644
--- a/drivers/net/af_xdp/rte_eth_af_xdp.c
+++ b/drivers/net/af_xdp/rte_eth_af_xdp.c
@@ -171,7 +171,8 @@ reserve_fill_queue_zc(struct xsk_umem_info *umem, uint16_t reserve_size,
 		uint64_t addr;
 
 		fq_addr = xsk_ring_prod__fill_addr(fq, idx++);
-		addr = (uint64_t)bufs[i] - (uint64_t)umem->buffer;
+		addr = (uint64_t)bufs[i] - (uint64_t)umem->buffer -
+				umem->mb_pool->header_size;
 		*fq_addr = addr;
 	}
 
@@ -270,8 +271,11 @@ af_xdp_rx_zc(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 		addr = xsk_umem__extract_addr(addr);
 
 		bufs[i] = (struct rte_mbuf *)
-				xsk_umem__get_data(umem->buffer, addr);
-		bufs[i]->data_off = offset - sizeof(struct rte_mbuf);
+				xsk_umem__get_data(umem->buffer, addr +
+					umem->mb_pool->header_size);
+		bufs[i]->data_off = offset - sizeof(struct rte_mbuf) -
+			rte_pktmbuf_priv_size(umem->mb_pool) -
+			umem->mb_pool->header_size;
 
 		rte_pktmbuf_pkt_len(bufs[i]) = len;
 		rte_pktmbuf_data_len(bufs[i]) = len;
@@ -384,7 +388,8 @@ pull_umem_cq(struct xsk_umem_info *umem, int size)
 #if defined(XDP_UMEM_UNALIGNED_CHUNK_FLAG)
 		addr = xsk_umem__extract_addr(addr);
 		rte_pktmbuf_free((struct rte_mbuf *)
-					xsk_umem__get_data(umem->buffer, addr));
+					xsk_umem__get_data(umem->buffer,
+					addr + umem->mb_pool->header_size));
 #else
 		rte_ring_enqueue(umem->buf_ring, (void *)addr);
 #endif
@@ -442,9 +447,11 @@ af_xdp_tx_zc(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 			}
 			desc = xsk_ring_prod__tx_desc(&txq->tx, idx_tx);
 			desc->len = mbuf->pkt_len;
-			addr = (uint64_t)mbuf - (uint64_t)umem->buffer;
+			addr = (uint64_t)mbuf - (uint64_t)umem->buffer -
+					umem->mb_pool->header_size;
 			offset = rte_pktmbuf_mtod(mbuf, uint64_t) -
-					(uint64_t)mbuf;
+					(uint64_t)mbuf +
+					umem->mb_pool->header_size;
 			offset = offset << XSK_UNALIGNED_BUF_OFFSET_SHIFT;
 			desc->addr = addr | offset;
 			count++;
@@ -465,9 +472,11 @@ af_xdp_tx_zc(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 			desc = xsk_ring_prod__tx_desc(&txq->tx, idx_tx);
 			desc->len = mbuf->pkt_len;
 
-			addr = (uint64_t)local_mbuf - (uint64_t)umem->buffer;
+			addr = (uint64_t)local_mbuf - (uint64_t)umem->buffer -
+					umem->mb_pool->header_size;
 			offset = rte_pktmbuf_mtod(local_mbuf, uint64_t) -
-					(uint64_t)local_mbuf;
+					(uint64_t)local_mbuf +
+					umem->mb_pool->header_size;
 			pkt = xsk_umem__get_data(umem->buffer, addr + offset);
 			offset = offset << XSK_UNALIGNED_BUF_OFFSET_SHIFT;
 			desc->addr = addr | offset;
-- 
2.17.1



* [dpdk-dev] [PATCH v4 3/3] net/af_xdp: fix maximum MTU value
  2020-02-13  8:49 [dpdk-dev] [PATCH v4 0/3] AF_XDP PMD Fixes Ciara Loftus
  2020-02-13  8:49 ` [dpdk-dev] [PATCH v4 1/3] net/af_xdp: fix umem frame size & headroom calculations Ciara Loftus
  2020-02-13  8:49 ` [dpdk-dev] [PATCH v4 2/3] net/af_xdp: use correct fill queue addresses Ciara Loftus
@ 2020-02-13  8:49 ` Ciara Loftus
  2020-02-13  9:26   ` Ye Xiaolong
  2020-02-13 12:18 ` [dpdk-dev] [PATCH v4 0/3] AF_XDP PMD Fixes Ferruh Yigit
  3 siblings, 1 reply; 6+ messages in thread
From: Ciara Loftus @ 2020-02-13  8:49 UTC (permalink / raw)
  To: dev; +Cc: xiaolong.ye, Ciara Loftus, stable

The maximum MTU for af_xdp zero copy is equal to the page size less the
frame overhead introduced by AF_XDP (XDP HR = 256) and DPDK (frame headroom
= 320). This patch updates the value accordingly.

This change also makes it possible to remove unneeded constants for both
zero-copy and copy mode.

Fixes: d8a210774e1d ("net/af_xdp: support unaligned umem chunks")
Cc: stable@dpdk.org

Signed-off-by: Ciara Loftus <ciara.loftus@intel.com>
---
 drivers/net/af_xdp/rte_eth_af_xdp.c | 23 +++++++++++------------
 1 file changed, 11 insertions(+), 12 deletions(-)

diff --git a/drivers/net/af_xdp/rte_eth_af_xdp.c b/drivers/net/af_xdp/rte_eth_af_xdp.c
index a0edfc3cd3..06124ba789 100644
--- a/drivers/net/af_xdp/rte_eth_af_xdp.c
+++ b/drivers/net/af_xdp/rte_eth_af_xdp.c
@@ -58,13 +58,6 @@ static int af_xdp_logtype;
 
 #define ETH_AF_XDP_FRAME_SIZE		2048
 #define ETH_AF_XDP_NUM_BUFFERS		4096
-#ifdef XDP_UMEM_UNALIGNED_CHUNK_FLAG
-#define ETH_AF_XDP_MBUF_OVERHEAD	128 /* sizeof(struct rte_mbuf) */
-#define ETH_AF_XDP_DATA_HEADROOM \
-	(ETH_AF_XDP_MBUF_OVERHEAD + RTE_PKTMBUF_HEADROOM)
-#else
-#define ETH_AF_XDP_DATA_HEADROOM	0
-#endif
 #define ETH_AF_XDP_DFLT_NUM_DESCS	XSK_RING_CONS__DEFAULT_NUM_DESCS
 #define ETH_AF_XDP_DFLT_START_QUEUE_IDX	0
 #define ETH_AF_XDP_DFLT_QUEUE_COUNT	1
@@ -601,7 +594,14 @@ eth_dev_info(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->max_tx_queues = internals->queue_cnt;
 
 	dev_info->min_mtu = RTE_ETHER_MIN_MTU;
-	dev_info->max_mtu = ETH_AF_XDP_FRAME_SIZE - ETH_AF_XDP_DATA_HEADROOM;
+#if defined(XDP_UMEM_UNALIGNED_CHUNK_FLAG)
+	dev_info->max_mtu = getpagesize() -
+				sizeof(struct rte_mempool_objhdr) -
+				sizeof(struct rte_mbuf) -
+				RTE_PKTMBUF_HEADROOM - XDP_PACKET_HEADROOM;
+#else
+	dev_info->max_mtu = ETH_AF_XDP_FRAME_SIZE - XDP_PACKET_HEADROOM;
+#endif
 
 	dev_info->default_rxportconf.nb_queues = 1;
 	dev_info->default_txportconf.nb_queues = 1;
@@ -803,7 +803,7 @@ xsk_umem_info *xdp_umem_configure(struct pmd_internals *internals,
 		.fill_size = ETH_AF_XDP_DFLT_NUM_DESCS,
 		.comp_size = ETH_AF_XDP_DFLT_NUM_DESCS,
 		.frame_size = ETH_AF_XDP_FRAME_SIZE,
-		.frame_headroom = ETH_AF_XDP_DATA_HEADROOM };
+		.frame_headroom = 0 };
 	char ring_name[RTE_RING_NAMESIZE];
 	char mz_name[RTE_MEMZONE_NAMESIZE];
 	int ret;
@@ -828,8 +828,7 @@ xsk_umem_info *xdp_umem_configure(struct pmd_internals *internals,
 
 	for (i = 0; i < ETH_AF_XDP_NUM_BUFFERS; i++)
 		rte_ring_enqueue(umem->buf_ring,
-				 (void *)(i * ETH_AF_XDP_FRAME_SIZE +
-					  ETH_AF_XDP_DATA_HEADROOM));
+				 (void *)(i * ETH_AF_XDP_FRAME_SIZE));
 
 	snprintf(mz_name, sizeof(mz_name), "af_xdp_umem_%s_%u",
 		       internals->if_name, rxq->xsk_queue_idx);
@@ -938,7 +937,7 @@ eth_rx_queue_setup(struct rte_eth_dev *dev,
 	/* Now get the space available for data in the mbuf */
 	buf_size = rte_pktmbuf_data_room_size(mb_pool) -
 		RTE_PKTMBUF_HEADROOM;
-	data_size = ETH_AF_XDP_FRAME_SIZE - ETH_AF_XDP_DATA_HEADROOM;
+	data_size = ETH_AF_XDP_FRAME_SIZE;
 
 	if (data_size > buf_size) {
 		AF_XDP_LOG(ERR, "%s: %d bytes will not fit in mbuf (%d bytes)\n",
-- 
2.17.1



* Re: [dpdk-dev] [PATCH v4 3/3] net/af_xdp: fix maximum MTU value
  2020-02-13  8:49 ` [dpdk-dev] [PATCH v4 3/3] net/af_xdp: fix maximum MTU value Ciara Loftus
@ 2020-02-13  9:26   ` Ye Xiaolong
  0 siblings, 0 replies; 6+ messages in thread
From: Ye Xiaolong @ 2020-02-13  9:26 UTC (permalink / raw)
  To: Ciara Loftus; +Cc: dev, stable

On 02/13, Ciara Loftus wrote:
>The maximum MTU for af_xdp zero copy is equal to the page size less the
>frame overhead introduced by AF_XDP (XDP HR = 256) and DPDK (frame headroom
>= 320). The patch updates this value to reflect this.
>
>This change also makes it possible to remove unneeded constants for both
>zero-copy and copy mode.
>
>Fixes: d8a210774e1d ("net/af_xdp: support unaligned umem chunks")
>Cc: stable@dpdk.org
>
>Signed-off-by: Ciara Loftus <ciara.loftus@intel.com>
> [...]

Reviewed-by: Xiaolong Ye <xiaolong.ye@intel.com>


* Re: [dpdk-dev] [PATCH v4 0/3] AF_XDP PMD Fixes
  2020-02-13  8:49 [dpdk-dev] [PATCH v4 0/3] AF_XDP PMD Fixes Ciara Loftus
                   ` (2 preceding siblings ...)
  2020-02-13  8:49 ` [dpdk-dev] [PATCH v4 3/3] net/af_xdp: fix maximum MTU value Ciara Loftus
@ 2020-02-13 12:18 ` Ferruh Yigit
  3 siblings, 0 replies; 6+ messages in thread
From: Ferruh Yigit @ 2020-02-13 12:18 UTC (permalink / raw)
  To: Ciara Loftus, dev; +Cc: xiaolong.ye

On 2/13/2020 8:49 AM, Ciara Loftus wrote:
> This series introduces some fixes for the zero copy path of the AF_XDP.
> In zero copy, the mempool objects are mapped directly into the AF_XDP UMEM.
> Below depicts the layout of an object in a mempool.
> 
> [...]

Series applied to dpdk-next-net/master, thanks.


