DPDK patches and discussions
From: Stephen Hemminger <stephen@networkplumber.org>
To: Ariel Otilibili <ariel.otilibili@6wind.com>
Cc: dev@dpdk.org, stable@dpdk.org,
	Thomas Monjalon <thomas@monjalon.net>,
	David Marchand <david.marchand@redhat.com>,
	Ciara Loftus <ciara.loftus@intel.com>,
	Maryam Tahhan <mtahhan@redhat.com>
Subject: Re: [PATCH 2/2] net/af_xdp: Refactor af_xdp_tx_zc()
Date: Thu, 16 Jan 2025 14:26:40 -0800	[thread overview]
Message-ID: <20250116142640.26391bf0@hermes.local> (raw)
In-Reply-To: <CAF1zDgYL6jUG-bTc7oqyOhK0VbFGM2H+Jbt9gG1-nNq2NR5DeA@mail.gmail.com>

On Thu, 16 Jan 2025 23:20:06 +0100
Ariel Otilibili <ariel.otilibili@6wind.com> wrote:

> Hi Stephen,
> 
> On Thu, Jan 16, 2025 at 10:47 PM Stephen Hemminger <stephen@networkplumber.org> wrote:  
> 
> > On Thu, 16 Jan 2025 20:56:39 +0100
> > Ariel Otilibili <ariel.otilibili@6wind.com> wrote:
> >  
> > > Both branches of the loop share the same logic. Now each one is a
> > > goto dispatcher; either to out (end of function), or to
> > > stats (continuation of the loop).
> > >
> > > Bugzilla ID: 1440
> > > Depends-on: patch-1 ("net/af_xdp: fix use after free in af_xdp_tx_zc()")
> > > Signed-off-by: Ariel Otilibili <ariel.otilibili@6wind.com>
> > > ---
> > >  drivers/net/af_xdp/rte_eth_af_xdp.c | 57 ++++++++++++++---------------
> > >  1 file changed, 27 insertions(+), 30 deletions(-)
> > >
> > > diff --git a/drivers/net/af_xdp/rte_eth_af_xdp.c b/drivers/net/af_xdp/rte_eth_af_xdp.c
> > > index 4326a29f7042..8b42704b6d9f 100644
> > > --- a/drivers/net/af_xdp/rte_eth_af_xdp.c
> > > +++ b/drivers/net/af_xdp/rte_eth_af_xdp.c
> > > @@ -551,6 +551,7 @@ af_xdp_tx_zc(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
> > >       uint64_t addr, offset;
> > >       struct xsk_ring_cons *cq = &txq->pair->cq;
> > >       uint32_t free_thresh = cq->size >> 1;
> > > +     struct rte_mbuf *local_mbuf = NULL;
> > >
> > >       if (xsk_cons_nb_avail(cq, free_thresh) >= free_thresh)
> > >               pull_umem_cq(umem, XSK_RING_CONS__DEFAULT_NUM_DESCS, cq);
> > > @@ -565,21 +566,10 @@ af_xdp_tx_zc(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
> > >                                                           &idx_tx))
> > >                                       goto out;
> > >                       }
> > > -                     desc = xsk_ring_prod__tx_desc(&txq->tx, idx_tx);
> > > -                     desc->len = mbuf->pkt_len;
> > > -                     addr = (uint64_t)mbuf - (uint64_t)umem->buffer -
> > > -                                     umem->mb_pool->header_size;
> > > -                     offset = rte_pktmbuf_mtod(mbuf, uint64_t) -
> > > -                                     (uint64_t)mbuf +
> > > -                                     umem->mb_pool->header_size;
> > > -                     offset = offset << XSK_UNALIGNED_BUF_OFFSET_SHIFT;
> > > -                     desc->addr = addr | offset;
> > > -                     tx_bytes += mbuf->pkt_len;
> > > -                     count++;
> > > +
> > > +                     goto stats;
> > >               } else {
> > > -                     struct rte_mbuf *local_mbuf =
> > > -                                     rte_pktmbuf_alloc(umem->mb_pool);
> > > -                     void *pkt;
> > > +                     local_mbuf = rte_pktmbuf_alloc(umem->mb_pool);
> > >
> > >                       if (local_mbuf == NULL)
> > >                               goto out;
> > > @@ -589,23 +579,30 @@ af_xdp_tx_zc(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
> > >                               goto out;
> > >                       }
> > >
> > > -                     desc = xsk_ring_prod__tx_desc(&txq->tx, idx_tx);
> > > -                     desc->len = mbuf->pkt_len;
> > > -
> > > -                     addr = (uint64_t)local_mbuf - (uint64_t)umem->buffer -
> > > -                                     umem->mb_pool->header_size;
> > > -                     offset = rte_pktmbuf_mtod(local_mbuf, uint64_t) -
> > > -                                     (uint64_t)local_mbuf +
> > > -                                     umem->mb_pool->header_size;
> > > -                     pkt = xsk_umem__get_data(umem->buffer, addr + offset);
> > > -                     offset = offset << XSK_UNALIGNED_BUF_OFFSET_SHIFT;
> > > -                     desc->addr = addr | offset;
> > > -                     rte_memcpy(pkt, rte_pktmbuf_mtod(mbuf, void *),
> > > -                                     desc->len);
> > > -                     tx_bytes += mbuf->pkt_len;
> > > -                     rte_pktmbuf_free(mbuf);
> > > -                     count++;
> > > +                     goto stats;
> > >               }
> > > +stats:
> > > +     struct rte_mbuf *tmp;
> > > +     void *pkt;
> > > +     tmp = mbuf->pool == umem->mb_pool ? mbuf : local_mbuf;
> > > +
> > > +     desc = xsk_ring_prod__tx_desc(&txq->tx, idx_tx);
> > > +     desc->len = mbuf->pkt_len;
> > > +
> > > +     addr = (uint64_t)tmp - (uint64_t)umem->buffer - umem->mb_pool->header_size;
> > > +     offset = rte_pktmbuf_mtod(tmp, uint64_t) - (uint64_t)tmp + umem->mb_pool->header_size;
> > > +     offset = offset << XSK_UNALIGNED_BUF_OFFSET_SHIFT;
> > > +     desc->addr = addr | offset;
> > > +
> > > +     if (mbuf->pool == umem->mb_pool) {
> > > +             tx_bytes += mbuf->pkt_len;
> > > +     } else {
> > > +             pkt = xsk_umem__get_data(umem->buffer, addr + offset);
> > > +             rte_memcpy(pkt, rte_pktmbuf_mtod(mbuf, void *), desc->len);
> > > +             tx_bytes += mbuf->pkt_len;
> > > +             rte_pktmbuf_free(mbuf);
> > > +     }
> > > +     count++;
> > >       }
> > >
> > >  out:  
> >
> > Indentation here is wrong, and looks suspect.
> > Either the stats label should be outside of the loop,
> > or stats is inside the loop and both of those gotos are unnecessary.
> >  
> Thanks for the feedback; I am pushing a new series with an extra tab,
> so it is obvious that stats belongs to the loop.


But then the gotos aren't needed? Both legs of the if would fall through
to that location.
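
For illustration, here is a minimal, self-contained sketch of that fall-through
shape. The names fake_desc, prep_zero_copy and prep_copy_path are hypothetical
stand-ins, not driver code: each branch only does its own preparation and bails
out on error, and the shared descriptor-fill code simply follows the if/else
with no label or goto.

/*
 * Hypothetical sketch (not DPDK code) of the control flow being suggested:
 * branch-specific setup in the if/else, shared tail after it, no goto/label.
 */
#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

struct fake_desc { uint64_t addr; uint32_t len; };	/* stand-in for a TX descriptor */

/* stand-ins for the two branch-specific preparation steps */
static bool prep_zero_copy(int i, uint64_t *addr) { *addr = 0x1000 + i; return true; }
static bool prep_copy_path(int i, uint64_t *addr) { *addr = 0x2000 + i; return true; }

int main(void)
{
	struct fake_desc desc[4];
	unsigned int count = 0, tx_bytes = 0;

	for (int i = 0; i < 4; i++) {
		uint64_t addr;
		bool zero_copy = (i % 2 == 0);	/* pretend even packets come from the umem pool */

		if (zero_copy) {
			if (!prep_zero_copy(i, &addr))
				break;		/* "goto out" in the real function */
		} else {
			if (!prep_copy_path(i, &addr))
				break;
		}

		/* shared tail: both legs fall through here, no goto required */
		desc[count].addr = addr;
		desc[count].len = 64;
		tx_bytes += desc[count].len;
		count++;
	}

	printf("queued %u descriptors, %u bytes\n", count, tx_bytes);
	return 0;
}

In the real af_xdp_tx_zc() the shared tail would fill desc->len and desc->addr
for both cases, and keep the rte_memcpy() and rte_pktmbuf_free() on the copy
path only, as in the quoted hunk.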


Thread overview: 10+ messages
2025-01-16 19:56 [PATCH 0/2] Fix use after free, and refactor af_xdp_tx_zc() Ariel Otilibili
2025-01-16 19:56 ` [PATCH 1/2] net/af_xdp: fix use after free in af_xdp_tx_zc() Ariel Otilibili
2025-01-16 19:56 ` [PATCH 2/2] net/af_xdp: Refactor af_xdp_tx_zc() Ariel Otilibili
2025-01-16 21:47   ` Stephen Hemminger
2025-01-16 22:20     ` Ariel Otilibili
2025-01-16 22:26       ` Stephen Hemminger [this message]
2025-01-16 22:36         ` Ariel Otilibili
2025-01-16 22:51 ` [PATCH v2 0/2] Fix use after free, and refactor af_xdp_tx_zc() Ariel Otilibili
2025-01-16 22:51   ` [PATCH v2 1/2] net/af_xdp: fix use after free in af_xdp_tx_zc() Ariel Otilibili
2025-01-16 22:51   ` [PATCH v2 2/2] net/af_xdp: Refactor af_xdp_tx_zc() Ariel Otilibili
