From: Maryam Tahhan <mtahhan@redhat.com>
To: Ariel Otilibili <ariel.otilibili@6wind.com>, dev@dpdk.org
Cc: stable@dpdk.org, Stephen Hemminger <stephen@networkplumber.org>,
Thomas Monjalon <thomas@monjalon.net>,
David Marchand <david.marchand@redhat.com>,
Ciara Loftus <ciara.loftus@intel.com>
Subject: Re: [PATCH v2 2/2] net/af_xdp: Refactor af_xdp_tx_zc()
Date: Mon, 20 Jan 2025 15:28:50 +0000 [thread overview]
Message-ID: <44ffb73b-427d-4ddf-a195-900e05241050@redhat.com> (raw)
In-Reply-To: <20250116225151.188214-3-ariel.otilibili@6wind.com>
On 16/01/2025 17:51, Ariel Otilibili wrote:
> Both branches of the loop share the same logic: either exit the
> function, or fall through to the next instructions.
>
> Depends-on: patch-1 ("net/af_xdp: fix use after free in af_xdp_tx_zc()")
> Bugzilla ID: 1440
> Signed-off-by: Ariel Otilibili <ariel.otilibili@6wind.com>
> ---
> drivers/net/af_xdp/rte_eth_af_xdp.c | 47 +++++++++++++----------------
> 1 file changed, 21 insertions(+), 26 deletions(-)
>
> diff --git a/drivers/net/af_xdp/rte_eth_af_xdp.c b/drivers/net/af_xdp/rte_eth_af_xdp.c
> index 4326a29f7042..b1de47a41eb3 100644
> --- a/drivers/net/af_xdp/rte_eth_af_xdp.c
> +++ b/drivers/net/af_xdp/rte_eth_af_xdp.c
> @@ -551,6 +551,7 @@ af_xdp_tx_zc(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
> uint64_t addr, offset;
> struct xsk_ring_cons *cq = &txq->pair->cq;
> uint32_t free_thresh = cq->size >> 1;
> + struct rte_mbuf *local_mbuf = NULL;
>
> if (xsk_cons_nb_avail(cq, free_thresh) >= free_thresh)
> pull_umem_cq(umem, XSK_RING_CONS__DEFAULT_NUM_DESCS, cq);
> @@ -565,21 +566,8 @@ af_xdp_tx_zc(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
> &idx_tx))
> goto out;
> }
> - desc = xsk_ring_prod__tx_desc(&txq->tx, idx_tx);
> - desc->len = mbuf->pkt_len;
> - addr = (uint64_t)mbuf - (uint64_t)umem->buffer -
> - umem->mb_pool->header_size;
> - offset = rte_pktmbuf_mtod(mbuf, uint64_t) -
> - (uint64_t)mbuf +
> - umem->mb_pool->header_size;
> - offset = offset << XSK_UNALIGNED_BUF_OFFSET_SHIFT;
> - desc->addr = addr | offset;
> - tx_bytes += mbuf->pkt_len;
> - count++;
> } else {
> - struct rte_mbuf *local_mbuf =
> - rte_pktmbuf_alloc(umem->mb_pool);
> - void *pkt;
> + local_mbuf = rte_pktmbuf_alloc(umem->mb_pool);
>
> if (local_mbuf == NULL)
> goto out;
> @@ -588,24 +576,31 @@ af_xdp_tx_zc(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
> rte_pktmbuf_free(local_mbuf);
> goto out;
> }
> + }
>
> - desc = xsk_ring_prod__tx_desc(&txq->tx, idx_tx);
> - desc->len = mbuf->pkt_len;
> + struct rte_mbuf *tmp;
> + void *pkt;
> + tmp = mbuf->pool == umem->mb_pool ? mbuf : local_mbuf;
> +
> + desc = xsk_ring_prod__tx_desc(&txq->tx, idx_tx);
> + desc->len = mbuf->pkt_len;
>
> - addr = (uint64_t)local_mbuf - (uint64_t)umem->buffer -
> - umem->mb_pool->header_size;
> - offset = rte_pktmbuf_mtod(local_mbuf, uint64_t) -
> - (uint64_t)local_mbuf +
> - umem->mb_pool->header_size;
> + addr = (uint64_t)tmp - (uint64_t)umem->buffer
> + - umem->mb_pool->header_size;
> + offset = rte_pktmbuf_mtod(tmp, uint64_t) - (uint64_t)tmp
> + + umem->mb_pool->header_size;
> + offset = offset << XSK_UNALIGNED_BUF_OFFSET_SHIFT;
> + desc->addr = addr | offset;
> +
> + if (mbuf->pool == umem->mb_pool) {
> + tx_bytes += mbuf->pkt_len;
> + } else {
> pkt = xsk_umem__get_data(umem->buffer, addr + offset);
> - offset = offset << XSK_UNALIGNED_BUF_OFFSET_SHIFT;
> - desc->addr = addr | offset;
> - rte_memcpy(pkt, rte_pktmbuf_mtod(mbuf, void *),
> - desc->len);
> + rte_memcpy(pkt, rte_pktmbuf_mtod(mbuf, void *), desc->len);
> tx_bytes += mbuf->pkt_len;
> rte_pktmbuf_free(mbuf);
> - count++;
> }
> + count++;
> }
>
> out:
This ends up with the condition `if (mbuf->pool == umem->mb_pool)`
appearing twice in `af_xdp_tx_zc()`, which is messy to read tbh.
I think it would be better to create an inline function for the
duplicated code that sets desc, addr and offset. These three could be
passed to the new inline function as pointers, so that their values can
still be used in `af_xdp_tx_zc()` after they are set. I think that would
clean up `af_xdp_tx_zc()` in a neater way.
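
To illustrate, something like the below (a rough, untested sketch: the
helper name and exact parameter list are only placeholders, assuming the
`struct xsk_umem_info` and `struct xdp_desc` types already used in this
file):

static inline void
tx_desc_fill(struct xsk_ring_prod *tx, uint32_t idx_tx,
	     struct xsk_umem_info *umem, struct rte_mbuf *umem_buf,
	     uint32_t pkt_len, struct xdp_desc **pdesc,
	     uint64_t *paddr, uint64_t *poffset)
{
	struct xdp_desc *desc;
	uint64_t addr, offset;

	desc = xsk_ring_prod__tx_desc(tx, idx_tx);
	desc->len = pkt_len;

	/* Same address/offset arithmetic as both branches today. */
	addr = (uint64_t)umem_buf - (uint64_t)umem->buffer -
			umem->mb_pool->header_size;
	offset = rte_pktmbuf_mtod(umem_buf, uint64_t) -
			(uint64_t)umem_buf + umem->mb_pool->header_size;
	desc->addr = addr | (offset << XSK_UNALIGNED_BUF_OFFSET_SHIFT);

	/*
	 * Hand the unshifted values back so the copy path can still do
	 * xsk_umem__get_data(umem->buffer, addr + offset).
	 */
	*pdesc = desc;
	*paddr = addr;
	*poffset = offset;
}

Then the zero-copy branch would call it with `mbuf` and the copy branch
with `local_mbuf` (both passing `mbuf->pkt_len` for the length), and
only the memcpy/free would stay inside the `else`.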
Thread overview: 12+ messages
2025-01-16 19:56 [PATCH 0/2] Fix use after free, and refactor af_xdp_tx_zc() Ariel Otilibili
2025-01-16 19:56 ` [PATCH 1/2] net/af_xdp: fix use after free in af_xdp_tx_zc() Ariel Otilibili
2025-01-16 19:56 ` [PATCH 2/2] net/af_xdp: Refactor af_xdp_tx_zc() Ariel Otilibili
2025-01-16 21:47 ` Stephen Hemminger
2025-01-16 22:20 ` Ariel Otilibili
2025-01-16 22:26 ` Stephen Hemminger
2025-01-16 22:36 ` Ariel Otilibili
2025-01-16 22:51 ` [PATCH v2 0/2] Fix use after free, and refactor af_xdp_tx_zc() Ariel Otilibili
2025-01-16 22:51 ` [PATCH v2 1/2] net/af_xdp: fix use after free in af_xdp_tx_zc() Ariel Otilibili
2025-01-20 14:54 ` Maryam Tahhan
2025-01-16 22:51 ` [PATCH v2 2/2] net/af_xdp: Refactor af_xdp_tx_zc() Ariel Otilibili
2025-01-20 15:28 ` Maryam Tahhan [this message]