From: narsimharaj pentam
Date: Wed, 10 Dec 2025 17:11:55 +0530
Subject: Re: Indirect mbuf handling
To: Morten Brørup
Cc: users@dpdk.org, dev@dpdk.org
Thanks for your response, got it.

BR
Narsimha

On Wed, Dec 10, 2025 at 3:14 PM Morten Brørup <mb@smartsharesystems.com> wrote:
> From: narsimharaj pentam [mailto:pnarsimharaj@gmail.com]
> Sent: Tuesday, 9 December 2025 18.05
>
> Added dev group.
>
> On Tue, Dec 9, 2025 at 10:11 PM narsimharaj pentam <pnarsimharaj@gmail.com> wrote:
> Hi
>
> I have a query related to IP fragmentation handling in DPDK.
>
> The DPDK application needs to send a packet larger than the MTU configured on the interface,
> so the packet is fragmented before it is handed to the i40e PMD. The DPDK library function
> rte_ipv4_fragment_packet() is used for the fragmentation. It creates a direct and an indirect
> mbuf for each fragment; the indirect mbufs hold references to the mbuf of the original packet (zero copy).
>
> The application then calls rte_eth_tx_burst() to transmit the fragments, which internally invokes
> i40e_xmit_pkts(). The question is: when should the main application mbuf be freed? Can it be freed
> immediately after i40e_xmit_pkts() returns success? I am not sure, because the mbufs are queued in a
> software ring before the actual transmit, and I am worried about the fragments holding references to
> the main application buffer.

The original packet can be freed immediately once the fragments have been created.

This is what the fragmentation example does:
https://elixir.bootlin.com/dpdk/v25.11/source/examples/ip_fragmentation/main.c#L289
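
A minimal sketch of that pattern (not verbatim from the example): direct_pool, indirect_pool, port_id and queue_id are assumed to be set up elsewhere by the application, the input mbuf is assumed to already point at the IPv4 header, and L2 header handling (done in the real example) is omitted.

/* Sketch: fragment a packet, free the original immediately, transmit fragments. */
#include <rte_ethdev.h>
#include <rte_ip_frag.h>
#include <rte_mbuf.h>

#define MAX_FRAGS 16 /* assumed upper bound on fragments per packet */

static void
fragment_and_send(struct rte_mbuf *m, uint16_t mtu,
                  struct rte_mempool *direct_pool,
                  struct rte_mempool *indirect_pool,
                  uint16_t port_id, uint16_t queue_id)
{
        struct rte_mbuf *frags[MAX_FRAGS];
        int32_t nb_frags;
        uint16_t sent, i;

        /* One direct (header) mbuf and one indirect (data) mbuf per fragment;
         * each indirect mbuf attaches to m and increments its refcnt. */
        nb_frags = rte_ipv4_fragment_packet(m, frags, MAX_FRAGS, mtu,
                                            direct_pool, indirect_pool);
        if (nb_frags < 0) {
                rte_pktmbuf_free(m); /* fragmentation failed: just drop it */
                return;
        }

        /* Drop the application's own reference right away; the payload stays
         * valid until the PMD frees the last indirect fragment after TX. */
        rte_pktmbuf_free(m);

        sent = rte_eth_tx_burst(port_id, queue_id, frags, (uint16_t)nb_frags);

        /* Free whatever the driver did not accept. */
        for (i = sent; i < (uint16_t)nb_frags; i++)
                rte_pktmbuf_free(frags[i]);
}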

This is what happens:
The original packet has a reference counter (which was incremented for each of the indirect mbufs referring to it), so freeing it at that point doesn't put it back in the pool.
When the last of the indirect mbufs is freed (by the driver called by rte_eth_tx_burst()), the original packet's reference counter reaches zero, and then the original mbuf is put back in the pool.
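
For illustration only, a tiny standalone sketch of that reference-counting behaviour (assumes an initialized EAL and a pool with at least two free mbufs; names are made up, error handling omitted):

#include <rte_mbuf.h>

static void
refcnt_demo(struct rte_mempool *pool)
{
        struct rte_mbuf *orig  = rte_pktmbuf_alloc(pool);
        struct rte_mbuf *clone = rte_pktmbuf_alloc(pool);

        rte_pktmbuf_attach(clone, orig);  /* orig refcnt: 1 -> 2 */

        rte_pktmbuf_free(orig);   /* refcnt 2 -> 1: orig is NOT returned to the pool */
        rte_pktmbuf_free(clone);  /* detaches, refcnt 1 -> 0: orig goes back to the pool */
}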

>
> Thanks.
>
> BR
> Narsimha
