patches for DPDK stable branches
From: Shani Peretz <shperetz@nvidia.com>
To: Joshua Washington <joshwash@google.com>,
	"stable@dpdk.org" <stable@dpdk.org>,
	"NBU-Contact-Thomas Monjalon (EXTERNAL)" <thomas@monjalon.net>,
	Junfeng Guo <junfengg@nvidia.com>,
	Jeroen de Borst <jeroendb@google.com>,
	Rushil Gupta <rushilg@google.com>
Cc: Ankit Garg <nktgrg@google.com>
Subject: RE: [PATCH 23.11 1/3] net/gve: send whole packet when mbuf is large
Date: Thu, 25 Dec 2025 08:51:22 +0000	[thread overview]
Message-ID: <MW4PR12MB7484CFD1CD92B035452407E2BFB3A@MW4PR12MB7484.namprd12.prod.outlook.com> (raw)
In-Reply-To: <20251223223050.3870080-1-joshwash@google.com>



> -----Original Message-----
> From: Joshua Washington <joshwash@google.com>
> Sent: Wednesday, 24 December 2025 0:31
> To: stable@dpdk.org; NBU-Contact-Thomas Monjalon (EXTERNAL)
> <thomas@monjalon.net>; Junfeng Guo <junfengg@nvidia.com>; Jeroen de
> Borst <jeroendb@google.com>; Rushil Gupta <rushilg@google.com>; Joshua
> Washington <joshwash@google.com>
> Cc: Ankit Garg <nktgrg@google.com>
> Subject: [PATCH 23.11 1/3] net/gve: send whole packet when mbuf is large
> 
> 
> Before this patch, only one descriptor would be written per mbuf in a packet.
> In cases like TSO, it is possible for a single mbuf to carry more bytes than
> GVE_TX_MAX_BUF_SIZE_DQO. Rather than truncating the data down to this size,
> the driver should write descriptors for the remaining data in the mbuf
> segment.
> 
> To that effect, the number of descriptors needed to send a packet must be
> corrected to account for the potential additional descriptors.
> 
> Fixes: 4022f9999f56 ("net/gve: support basic Tx data path for DQO")
> Cc: stable@dpdk.org
> 
> Signed-off-by: Joshua Washington <joshwash@google.com>
> Reviewed-by: Ankit Garg <nktgrg@google.com>
> ---
>  .mailmap                     |  1 +
>  drivers/net/gve/gve_tx_dqo.c | 53 ++++++++++++++++++++++++++----------
>  2 files changed, 39 insertions(+), 15 deletions(-)
> 
> diff --git a/.mailmap b/.mailmap
> index 96b7809f89..eb6a3afa44 100644
> --- a/.mailmap
> +++ b/.mailmap
> @@ -119,6 +119,7 @@ Andy Green <andy@warmcat.com>
>  Andy Moreton <andy.moreton@amd.com> <amoreton@xilinx.com> <amoreton@solarflare.com>
>  Andy Pei <andy.pei@intel.com>
>  Anirudh Venkataramanan <anirudh.venkataramanan@intel.com>
> +Ankit Garg <nktgrg@google.com>
>  Ankur Dwivedi <adwivedi@marvell.com> <ankur.dwivedi@caviumnetworks.com> <ankur.dwivedi@cavium.com>
>  Anna Lukin <annal@silicom.co.il>
>  Anoob Joseph <anoobj@marvell.com> <anoob.joseph@caviumnetworks.com>
> diff --git a/drivers/net/gve/gve_tx_dqo.c b/drivers/net/gve/gve_tx_dqo.c
> index 95a02bab17..a4ba8c3536 100644
> --- a/drivers/net/gve/gve_tx_dqo.c
> +++ b/drivers/net/gve/gve_tx_dqo.c
> @@ -74,6 +74,19 @@ gve_tx_clean_dqo(struct gve_tx_queue *txq)
>         txq->complq_tail = next;
>  }
> 
> +static uint16_t
> +gve_tx_pkt_nb_data_descs(struct rte_mbuf *tx_pkt)
> +{
> +       int nb_descs = 0;
> +
> +       while (tx_pkt) {
> +               nb_descs += (GVE_TX_MAX_BUF_SIZE_DQO - 1 + tx_pkt->data_len) /
> +                       GVE_TX_MAX_BUF_SIZE_DQO;
> +               tx_pkt = tx_pkt->next;
> +       }
> +       return nb_descs;
> +}
> +
>  uint16_t
>  gve_tx_burst_dqo(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
>  {
> @@ -88,7 +101,7 @@ gve_tx_burst_dqo(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
>         const char *reason;
>         uint16_t nb_tx = 0;
>         uint64_t ol_flags;
> -       uint16_t nb_used;
> +       uint16_t nb_descs;
>         uint16_t tx_id;
>         uint16_t sw_id;
>         uint64_t bytes;
> @@ -122,11 +135,14 @@ gve_tx_burst_dqo(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
>                 }
> 
>                 ol_flags = tx_pkt->ol_flags;
> -               nb_used = tx_pkt->nb_segs;
>                 first_sw_id = sw_id;
> 
>                 csum = !!(ol_flags & GVE_TX_CKSUM_OFFLOAD_MASK_DQO);
> 
> +               nb_descs = gve_tx_pkt_nb_data_descs(tx_pkt);
> +               if (txq->nb_free < nb_descs)
> +                       break;
> +
>                 do {
>                         if (sw_ring[sw_id] != NULL)
>                                 PMD_DRV_LOG(DEBUG, "Overwriting an entry in sw_ring");
> @@ -135,22 +151,29 @@ gve_tx_burst_dqo(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
>                         if (!tx_pkt->data_len)
>                                 goto finish_mbuf;
> 
> -                       txd = &txr[tx_id];
>                         sw_ring[sw_id] = tx_pkt;
> 
> -                       /* fill Tx descriptor */
> -                       txd->pkt.buf_addr = rte_cpu_to_le_64(rte_mbuf_data_iova(tx_pkt));
> -                       txd->pkt.dtype = GVE_TX_PKT_DESC_DTYPE_DQO;
> -                       txd->pkt.compl_tag = rte_cpu_to_le_16(first_sw_id);
> -                       txd->pkt.buf_size = RTE_MIN(tx_pkt->data_len, GVE_TX_MAX_BUF_SIZE_DQO);
> -                       txd->pkt.end_of_packet = 0;
> -                       txd->pkt.checksum_offload_enable = csum;
> +                       /* fill Tx descriptors */
> +                       int mbuf_offset = 0;
> +                       while (mbuf_offset < tx_pkt->data_len) {
> +                               uint64_t buf_addr = rte_mbuf_data_iova(tx_pkt) +
> +                                       mbuf_offset;
> +
> +                               txd = &txr[tx_id];
> +                               txd->pkt.buf_addr = rte_cpu_to_le_64(buf_addr);
> +                               txd->pkt.compl_tag = rte_cpu_to_le_16(first_sw_id);
> +                               txd->pkt.dtype = GVE_TX_PKT_DESC_DTYPE_DQO;
> +                               txd->pkt.buf_size = RTE_MIN(tx_pkt->data_len - mbuf_offset,
> +                                                           GVE_TX_MAX_BUF_SIZE_DQO);
> +                               txd->pkt.end_of_packet = 0;
> +                               txd->pkt.checksum_offload_enable = csum;
> +
> +                               mbuf_offset += txd->pkt.buf_size;
> +                               tx_id = (tx_id + 1) & mask;
> +                       }
> 
> -                       /* size of desc_ring and sw_ring could be different */
> -                       tx_id = (tx_id + 1) & mask;
>  finish_mbuf:
>                         sw_id = (sw_id + 1) & sw_mask;
> -
>                         bytes += tx_pkt->data_len;
>                         tx_pkt = tx_pkt->next;
>                 } while (tx_pkt);
> @@ -159,8 +182,8 @@ gve_tx_burst_dqo(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
>                 txd = &txr[(tx_id - 1) & mask];
>                 txd->pkt.end_of_packet = 1;
> 
> -               txq->nb_free -= nb_used;
> -               txq->nb_used += nb_used;
> +               txq->nb_free -= nb_descs;
> +               txq->nb_used += nb_descs;
>         }
> 
>         /* update the tail pointer if any packets were processed */
> --
> 2.52.0.351.gbe84eed79e-goog


Thanks Joshua, I'll add the series to 23.11.



Thread overview: 4+ messages
2025-12-23 22:30 Joshua Washington
2025-12-23 22:30 ` [PATCH 23.11 2/3] net/gve: clean when insufficient Tx descriptors Joshua Washington
2025-12-23 22:30 ` [PATCH 23.11 3/3] net/gve: add DQO Tx descriptor limit Joshua Washington
2025-12-25  8:51 ` Shani Peretz [this message]
