From mboxrd@z Thu Jan  1 00:00:00 1970
From: Joshua Washington
Date: Mon, 7 Jul 2025 16:18:05 -0700
Subject: [PATCH 1/7] net/gve: send whole packet when mbuf is large
To: Thomas Monjalon, Jeroen de Borst, Joshua Washington, Junfeng Guo,
	Rushil Gupta
Cc: dev@dpdk.org, junfeng.guo@intel.com, stable@dpdk.org, Ankit Garg
Message-ID: <20250707231812.1937260-2-joshwash@google.com>
In-Reply-To: <20250707231812.1937260-1-joshwash@google.com>
References: <20250707231812.1937260-1-joshwash@google.com>

Before this patch, only one descriptor would be written per mbuf in a
packet. In cases like TSO, a single mbuf can hold more bytes than
GVE_TX_MAX_BUF_SIZE_DQO. Instead of simply truncating the data down to
this size, the driver should write descriptors for the rest of the data
in the mbuf segment. To that end, the number of descriptors needed to
send a packet must be corrected to account for the potential additional
descriptors.

Fixes: 4022f9999f56 ("net/gve: support basic Tx data path for DQO")
Cc: junfeng.guo@intel.com
Cc: stable@dpdk.org

Signed-off-by: Joshua Washington
Reviewed-by: Ankit Garg
---
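Illustration only (not part of the patch; notes below the "---" marker
are discarded when the patch is applied): a minimal standalone sketch of
the new splitting logic. MAX_BUF stands in for GVE_TX_MAX_BUF_SIZE_DQO,
and its value below is assumed purely for the example. A 9000-byte TSO
mbuf segment then needs three descriptors rather than one:

	#include <stdio.h>

	#define MAX_BUF 4096	/* assumed stand-in for GVE_TX_MAX_BUF_SIZE_DQO */

	int main(void)
	{
		unsigned int data_len = 9000;	/* one large TSO mbuf segment */
		/* Ceiling division, as in gve_tx_pkt_nb_data_descs(). */
		unsigned int nb_descs = (MAX_BUF - 1 + data_len) / MAX_BUF;
		unsigned int offset = 0;

		printf("descriptors needed: %u\n", nb_descs);	/* prints 3 */

		/* Walk the segment the way the new descriptor fill loop
		 * does, taking at most MAX_BUF bytes per descriptor. */
		while (offset < data_len) {
			unsigned int buf_size = data_len - offset < MAX_BUF ?
					data_len - offset : MAX_BUF;
			printf("desc: offset %u, size %u\n", offset, buf_size);
			offset += buf_size;
		}
		return 0;
	}
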
 .mailmap                     |  1 +
 drivers/net/gve/gve_tx_dqo.c | 54 ++++++++++++++++++++++++------------
 2 files changed, 38 insertions(+), 17 deletions(-)

diff --git a/.mailmap b/.mailmap
index 1ea4f9446d..758878bd8b 100644
--- a/.mailmap
+++ b/.mailmap
@@ -124,6 +124,7 @@ Andy Green
 Andy Moreton
 Andy Pei
 Anirudh Venkataramanan
+Ankit Garg
 Ankur Dwivedi
 Anna Lukin
 Anoob Joseph
diff --git a/drivers/net/gve/gve_tx_dqo.c b/drivers/net/gve/gve_tx_dqo.c
index 6984f92443..6227fa73b0 100644
--- a/drivers/net/gve/gve_tx_dqo.c
+++ b/drivers/net/gve/gve_tx_dqo.c
@@ -74,6 +74,19 @@ gve_tx_clean_dqo(struct gve_tx_queue *txq)
 	txq->complq_tail = next;
 }
 
+static uint16_t
+gve_tx_pkt_nb_data_descs(struct rte_mbuf *tx_pkt)
+{
+	int nb_descs = 0;
+
+	while (tx_pkt) {
+		nb_descs += (GVE_TX_MAX_BUF_SIZE_DQO - 1 + tx_pkt->data_len) /
+			GVE_TX_MAX_BUF_SIZE_DQO;
+		tx_pkt = tx_pkt->next;
+	}
+	return nb_descs;
+}
+
 static inline void
 gve_tx_fill_seg_desc_dqo(volatile union gve_tx_desc_dqo *desc, struct rte_mbuf *tx_pkt)
 {
@@ -97,7 +110,7 @@ gve_tx_burst_dqo(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 	uint16_t nb_to_clean;
 	uint16_t nb_tx = 0;
 	uint64_t ol_flags;
-	uint16_t nb_used;
+	uint16_t nb_descs;
 	uint16_t tx_id;
 	uint16_t sw_id;
 	uint64_t bytes;
@@ -124,14 +137,14 @@ gve_tx_burst_dqo(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 		}
 
 		ol_flags = tx_pkt->ol_flags;
-		nb_used = tx_pkt->nb_segs;
 		first_sw_id = sw_id;
 
 		tso = !!(ol_flags & RTE_MBUF_F_TX_TCP_SEG);
 		csum = !!(ol_flags & GVE_TX_CKSUM_OFFLOAD_MASK_DQO);
 
-		nb_used += tso;
-		if (txq->nb_free < nb_used)
+		nb_descs = gve_tx_pkt_nb_data_descs(tx_pkt);
+		nb_descs += tso;
+		if (txq->nb_free < nb_descs)
 			break;
 
 		if (tso) {
@@ -144,21 +157,28 @@ gve_tx_burst_dqo(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 			if (sw_ring[sw_id] != NULL)
 				PMD_DRV_LOG(DEBUG, "Overwriting an entry in sw_ring");
 
-			txd = &txr[tx_id];
 			sw_ring[sw_id] = tx_pkt;
 
-			/* fill Tx descriptor */
-			txd->pkt.buf_addr = rte_cpu_to_le_64(rte_mbuf_data_iova(tx_pkt));
-			txd->pkt.dtype = GVE_TX_PKT_DESC_DTYPE_DQO;
-			txd->pkt.compl_tag = rte_cpu_to_le_16(first_sw_id);
-			txd->pkt.buf_size = RTE_MIN(tx_pkt->data_len, GVE_TX_MAX_BUF_SIZE_DQO);
-			txd->pkt.end_of_packet = 0;
-			txd->pkt.checksum_offload_enable = csum;
+			/* fill Tx descriptors */
+			int mbuf_offset = 0;
+			while (mbuf_offset < tx_pkt->data_len) {
+				uint64_t buf_addr = rte_mbuf_data_iova(tx_pkt) +
+					mbuf_offset;
+
+				txd = &txr[tx_id];
+				txd->pkt.buf_addr = rte_cpu_to_le_64(buf_addr);
+				txd->pkt.compl_tag = rte_cpu_to_le_16(first_sw_id);
+				txd->pkt.dtype = GVE_TX_PKT_DESC_DTYPE_DQO;
+				txd->pkt.buf_size = RTE_MIN(tx_pkt->data_len - mbuf_offset,
+							    GVE_TX_MAX_BUF_SIZE_DQO);
+				txd->pkt.end_of_packet = 0;
+				txd->pkt.checksum_offload_enable = csum;
+
+				mbuf_offset += txd->pkt.buf_size;
+				tx_id = (tx_id + 1) & mask;
+			}
 
-			/* size of desc_ring and sw_ring could be different */
-			tx_id = (tx_id + 1) & mask;
 			sw_id = (sw_id + 1) & sw_mask;
-
 			bytes += tx_pkt->data_len;
 			tx_pkt = tx_pkt->next;
 		} while (tx_pkt);
@@ -166,8 +186,8 @@ gve_tx_burst_dqo(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 		/* fill the last descriptor with End of Packet (EOP) bit */
 		txd->pkt.end_of_packet = 1;
 
-		txq->nb_free -= nb_used;
-		txq->nb_used += nb_used;
+		txq->nb_free -= nb_descs;
+		txq->nb_used += nb_descs;
 	}
 
 	/* update the tail pointer if any packets were processed */
-- 
2.50.0.727.gbf7dc18ff4-goog