DPDK patches and discussions
From: Ye Xiaolong <xiaolong.ye@intel.com>
To: Ciara Loftus <ciara.loftus@intel.com>
Cc: dev@dpdk.org, stable@dpdk.org
Subject: Re: [dpdk-dev] [PATCH v3 1/3] net/af_xdp: fix umem frame size & headroom calculations
Date: Thu, 13 Feb 2020 10:45:33 +0800
Message-ID: <20200213024533.GN80720@intel.com>
In-Reply-To: <20200210114009.49590-2-ciara.loftus@intel.com>

On 02/10, Ciara Loftus wrote:
>The previous frame size calculation incorrectly used
>mb_pool->private_data_size and didn't include mb_pool->header_size.
>Instead of performing a manual calculation, use the
>rte_mempool_calc_obj_size API to determine the frame size.
>
>The previous frame headroom calculation also incorrectly used
>mb_pool->private_data_size and didn't include mb_pool->header_size or the
>mbuf priv size. Fix this.
>
>Fixes: d8a210774e1d ("net/af_xdp: support unaligned umem chunks")
>Cc: stable@dpdk.org
>
>Signed-off-by: Ciara Loftus <ciara.loftus@intel.com>
>---
> drivers/net/af_xdp/rte_eth_af_xdp.c | 13 ++++++++-----
> 1 file changed, 8 insertions(+), 5 deletions(-)
>
>diff --git a/drivers/net/af_xdp/rte_eth_af_xdp.c b/drivers/net/af_xdp/rte_eth_af_xdp.c
>index 683e2a559..8b189119c 100644
>--- a/drivers/net/af_xdp/rte_eth_af_xdp.c
>+++ b/drivers/net/af_xdp/rte_eth_af_xdp.c
>@@ -34,6 +34,7 @@
> #include <rte_log.h>
> #include <rte_memory.h>
> #include <rte_memzone.h>
>+#include <rte_mempool.h>
> #include <rte_mbuf.h>
> #include <rte_malloc.h>
> #include <rte_ring.h>
>@@ -755,11 +756,13 @@ xsk_umem_info *xdp_umem_configure(struct pmd_internals *internals __rte_unused,
> 	void *base_addr = NULL;
> 	struct rte_mempool *mb_pool = rxq->mb_pool;
> 
>-	usr_config.frame_size = rte_pktmbuf_data_room_size(mb_pool) +
>-					ETH_AF_XDP_MBUF_OVERHEAD +
>-					mb_pool->private_data_size;
>-	usr_config.frame_headroom = ETH_AF_XDP_DATA_HEADROOM +
>-					mb_pool->private_data_size;
>+	usr_config.frame_size = rte_mempool_calc_obj_size(mb_pool->elt_size,
>+								mb_pool->flags,
>+								NULL);
>+	usr_config.frame_headroom = mb_pool->header_size +
>+					sizeof(struct rte_mbuf) +
>+					rte_pktmbuf_priv_size(mb_pool) +
>+					RTE_PKTMBUF_HEADROOM;
> 
> 	umem = rte_zmalloc_socket("umem", sizeof(*umem), 0, rte_socket_id());
> 	if (umem == NULL) {
>-- 
>2.17.1
>
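To make the math concrete: frame_size as computed above is the full size of
one mempool object (object header + struct rte_mbuf + per-mbuf private area +
headroom + data room, plus any trailer), while frame_headroom is the offset
from the start of that object to the first byte of packet data. Note that the
buggy mb_pool->private_data_size is the size of the *pool's* private area
(which for pktmbuf pools holds struct rte_pktmbuf_pool_private); the per-mbuf
private area size must come from rte_pktmbuf_priv_size(). A minimal sketch
using the same DPDK calls (the helper name dump_umem_params is illustrative,
not from the patch):

#include <stdio.h>
#include <rte_mempool.h>
#include <rte_mbuf.h>

/* Print the UMEM parameters the patch derives from a pktmbuf pool. */
static void
dump_umem_params(struct rte_mempool *mb_pool)
{
	/* Full size of one mempool object: obj header + struct rte_mbuf +
	 * per-mbuf private area + headroom + data room (+ trailer). */
	uint32_t frame_size = rte_mempool_calc_obj_size(mb_pool->elt_size,
							mb_pool->flags, NULL);

	/* Offset from the start of the object to the packet data. */
	uint32_t frame_headroom = mb_pool->header_size +
					sizeof(struct rte_mbuf) +
					rte_pktmbuf_priv_size(mb_pool) +
					RTE_PKTMBUF_HEADROOM;

	printf("frame_size=%u frame_headroom=%u data_room=%u\n",
	       frame_size, frame_headroom,
	       (unsigned int)rte_pktmbuf_data_room_size(mb_pool));
}

With a pool created by rte_pktmbuf_pool_create() and priv_size 0,
frame_headroom typically works out to mb_pool->header_size + 128
(sizeof(struct rte_mbuf) is fixed at two cache lines on 64-bit) + 128
(the default RTE_PKTMBUF_HEADROOM).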


Reviewed-by: Xiaolong Ye <xiaolong.ye@intel.com>

Thread overview: 8+ messages
2020-02-10 11:40 [dpdk-dev] [PATCH v3 0/3] AF_XDP PMD Fixes Ciara Loftus
2020-02-10 11:40 ` [dpdk-dev] [PATCH v3 1/3] net/af_xdp: fix umem frame size & headroom calculations Ciara Loftus
2020-02-13  2:45   ` Ye Xiaolong [this message]
2020-02-10 11:40 ` [dpdk-dev] [PATCH v3 2/3] net/af_xdp: use correct fill queue addresses Ciara Loftus
2020-02-13  3:09   ` Ye Xiaolong
2020-02-10 11:40 ` [dpdk-dev] [PATCH v3 3/3] net/af_xdp: fix maximum MTU value Ciara Loftus
2020-02-13  3:05   ` Ye Xiaolong
2020-02-13  8:43     ` Loftus, Ciara
