Date: Tue, 23 Dec 2025 14:30:47 -0800
Message-ID: <20251223223050.3870080-1-joshwash@google.com>
Subject: [PATCH 23.11 1/3] net/gve: send whole packet when mbuf is large
From: Joshua Washington
To: stable@dpdk.org, Thomas Monjalon, Junfeng Guo, Jeroen de Borst,
	Rushil Gupta, Joshua Washington
Cc: Ankit Garg

Before this patch, only one descriptor would be written per mbuf in a
packet. In cases like TSO, it is possible for a single mbuf to have
more bytes than GVE_TX_MAX_BUF_SIZE_DQO.

As such, instead of simply truncating the data down to this size, the
driver should write descriptors for the remaining bytes in the mbuf
segment. To that effect, the number of descriptors needed to send a
packet must be corrected to account for these additional descriptors.

Fixes: 4022f9999f56 ("net/gve: support basic Tx data path for DQO")
Cc: stable@dpdk.org

Signed-off-by: Joshua Washington
Reviewed-by: Ankit Garg
---
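
[Note, not part of the commit message: the new gve_tx_pkt_nb_data_descs()
helper below is a per-segment ceiling division. Here is a minimal
standalone sketch of that arithmetic; the GVE_TX_MAX_BUF_SIZE_DQO value
and the seg_nb_descs() name are illustrative placeholders, not the
driver's real definitions.]

#include <stdint.h>
#include <stdio.h>

/* Placeholder value for illustration; the real constant comes from the
 * gve driver headers. */
#define GVE_TX_MAX_BUF_SIZE_DQO 16384

/* Descriptors needed for one mbuf segment of data_len bytes
 * (ceiling division, mirroring gve_tx_pkt_nb_data_descs()). */
static uint16_t
seg_nb_descs(uint32_t data_len)
{
	return (GVE_TX_MAX_BUF_SIZE_DQO - 1 + data_len) /
		GVE_TX_MAX_BUF_SIZE_DQO;
}

int
main(void)
{
	printf("%u\n", seg_nb_descs(1500));	/* 1 */
	printf("%u\n", seg_nb_descs(16384));	/* 1: exact fit, no extra desc */
	printf("%u\n", seg_nb_descs(65536));	/* 4: e.g. a large TSO mbuf */
	return 0;
}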
 .mailmap                     |  1 +
 drivers/net/gve/gve_tx_dqo.c | 53 ++++++++++++++++++++++++++----------
 2 files changed, 39 insertions(+), 15 deletions(-)

diff --git a/.mailmap b/.mailmap
index 96b7809f89..eb6a3afa44 100644
--- a/.mailmap
+++ b/.mailmap
@@ -119,6 +119,7 @@ Andy Green
 Andy Moreton
 Andy Pei
 Anirudh Venkataramanan
+Ankit Garg
 Ankur Dwivedi
 Anna Lukin
 Anoob Joseph
diff --git a/drivers/net/gve/gve_tx_dqo.c b/drivers/net/gve/gve_tx_dqo.c
index 95a02bab17..a4ba8c3536 100644
--- a/drivers/net/gve/gve_tx_dqo.c
+++ b/drivers/net/gve/gve_tx_dqo.c
@@ -74,6 +74,19 @@ gve_tx_clean_dqo(struct gve_tx_queue *txq)
 	txq->complq_tail = next;
 }
 
+static uint16_t
+gve_tx_pkt_nb_data_descs(struct rte_mbuf *tx_pkt)
+{
+	int nb_descs = 0;
+
+	while (tx_pkt) {
+		nb_descs += (GVE_TX_MAX_BUF_SIZE_DQO - 1 + tx_pkt->data_len) /
+			GVE_TX_MAX_BUF_SIZE_DQO;
+		tx_pkt = tx_pkt->next;
+	}
+	return nb_descs;
+}
+
 uint16_t
 gve_tx_burst_dqo(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 {
@@ -88,7 +101,7 @@ gve_tx_burst_dqo(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 	const char *reason;
 	uint16_t nb_tx = 0;
 	uint64_t ol_flags;
-	uint16_t nb_used;
+	uint16_t nb_descs;
 	uint16_t tx_id;
 	uint16_t sw_id;
 	uint64_t bytes;
@@ -122,11 +135,14 @@ gve_tx_burst_dqo(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 		}
 
 		ol_flags = tx_pkt->ol_flags;
-		nb_used = tx_pkt->nb_segs;
 		first_sw_id = sw_id;
 
 		csum = !!(ol_flags & GVE_TX_CKSUM_OFFLOAD_MASK_DQO);
 
+		nb_descs = gve_tx_pkt_nb_data_descs(tx_pkt);
+		if (txq->nb_free < nb_descs)
+			break;
+
 		do {
 			if (sw_ring[sw_id] != NULL)
 				PMD_DRV_LOG(DEBUG, "Overwriting an entry in sw_ring");
@@ -135,22 +151,29 @@ gve_tx_burst_dqo(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 			if (!tx_pkt->data_len)
 				goto finish_mbuf;
 
-			txd = &txr[tx_id];
 			sw_ring[sw_id] = tx_pkt;
 
-			/* fill Tx descriptor */
-			txd->pkt.buf_addr = rte_cpu_to_le_64(rte_mbuf_data_iova(tx_pkt));
-			txd->pkt.dtype = GVE_TX_PKT_DESC_DTYPE_DQO;
-			txd->pkt.compl_tag = rte_cpu_to_le_16(first_sw_id);
-			txd->pkt.buf_size = RTE_MIN(tx_pkt->data_len, GVE_TX_MAX_BUF_SIZE_DQO);
-			txd->pkt.end_of_packet = 0;
-			txd->pkt.checksum_offload_enable = csum;
+			/* fill Tx descriptors */
+			int mbuf_offset = 0;
+			while (mbuf_offset < tx_pkt->data_len) {
+				uint64_t buf_addr = rte_mbuf_data_iova(tx_pkt) +
+					mbuf_offset;
+
+				txd = &txr[tx_id];
+				txd->pkt.buf_addr = rte_cpu_to_le_64(buf_addr);
+				txd->pkt.compl_tag = rte_cpu_to_le_16(first_sw_id);
+				txd->pkt.dtype = GVE_TX_PKT_DESC_DTYPE_DQO;
+				txd->pkt.buf_size = RTE_MIN(tx_pkt->data_len - mbuf_offset,
+					GVE_TX_MAX_BUF_SIZE_DQO);
+				txd->pkt.end_of_packet = 0;
+				txd->pkt.checksum_offload_enable = csum;
+
+				mbuf_offset += txd->pkt.buf_size;
+				tx_id = (tx_id + 1) & mask;
+			}
 
-			/* size of desc_ring and sw_ring could be different */
-			tx_id = (tx_id + 1) & mask;
 
 finish_mbuf:
 			sw_id = (sw_id + 1) & sw_mask;
-			bytes += tx_pkt->data_len;
 			tx_pkt = tx_pkt->next;
 		} while (tx_pkt);
@@ -159,8 +182,8 @@ gve_tx_burst_dqo(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 		txd = &txr[(tx_id - 1) & mask];
 		txd->pkt.end_of_packet = 1;
 
-		txq->nb_free -= nb_used;
-		txq->nb_used += nb_used;
+		txq->nb_free -= nb_descs;
+		txq->nb_used += nb_descs;
 	}
 
 	/* update the tail pointer if any packets were processed */
-- 
2.52.0.351.gbe84eed79e-goog
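
[Aside, not part of the patch: the reworked inner loop splits each mbuf
segment into GVE_TX_MAX_BUF_SIZE_DQO-sized chunks, advancing a byte
offset and writing one descriptor per chunk. Below is a minimal
host-side sketch of that splitting logic; the chunk size is an assumed
placeholder, and printed (offset, size) pairs stand in for hardware
descriptors.]

#include <stdint.h>
#include <stdio.h>

/* Assumed placeholder for GVE_TX_MAX_BUF_SIZE_DQO. */
#define MAX_BUF_SIZE 16384

/* Walk one segment of data_len bytes, emitting one (offset, size)
 * chunk per descriptor, each at most MAX_BUF_SIZE bytes. */
static void
emit_descs(uint32_t data_len)
{
	uint32_t offset = 0;

	while (offset < data_len) {
		uint32_t size = data_len - offset;

		if (size > MAX_BUF_SIZE)
			size = MAX_BUF_SIZE;
		printf("desc: offset=%u size=%u\n", offset, size);
		offset += size;
	}
}

int
main(void)
{
	emit_descs(40000);	/* 3 descriptors: 16384 + 16384 + 7232 */
	return 0;
}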