DPDK patches and discussions
* Re: Indirect mbuf handling
       [not found] <CAFLDJDprPRB8mjybypwuvHOEp+MfXjSK6W1YJD=Db2CURUhLNA@mail.gmail.com>
@ 2025-12-09 17:05 ` narsimharaj pentam
  2025-12-10  9:44   ` Morten Brørup
  0 siblings, 1 reply; 3+ messages in thread
From: narsimharaj pentam @ 2025-12-09 17:05 UTC (permalink / raw)
  To: users, dev


Added dev group.

On Tue, Dec 9, 2025 at 10:11 PM narsimharaj pentam <pnarsimharaj@gmail.com>
wrote:

> Hi,
>
> I have a query about IP fragmentation handling in DPDK.
>
> The DPDK application needs to send a packet larger than the MTU configured
> on the interface, so the packet is fragmented before it is handed to the
> i40e PMD. The DPDK library function rte_ipv4_fragment_packet() is used for
> the fragmentation. For each fragment, rte_ipv4_fragment_packet() creates a
> direct mbuf and indirect mbufs; the indirect mbufs hold a reference to the
> mbuf of the original packet (zero copy).
>
> The application then calls rte_eth_tx_burst() to transmit the fragments,
> which internally invokes i40e_xmit_pkts(). The question is: when should
> the main application mbuf (the original packet) be freed? Can it be freed
> immediately after i40e_xmit_pkts() returns success? I am not sure, because
> the mbufs are queued in a software ring before the actual transmit, and I
> am worried about the fragments still holding references to the main
> application buffer.
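>
> To make the flow concrete, this is roughly what our transmit path looks
> like (a simplified sketch; the pool and variable names are illustrative,
> not the actual code):
>
> #include <rte_ethdev.h>
> #include <rte_ip_frag.h>
> #include <rte_mbuf.h>
>
> #define MAX_FRAGS 8  /* illustrative upper bound on fragments per packet */
>
> static void
> send_fragmented(struct rte_mbuf *pkt, uint16_t port_id, uint16_t queue_id,
>                 uint16_t mtu, struct rte_mempool *direct_pool,
>                 struct rte_mempool *indirect_pool)
> {
>         struct rte_mbuf *frags[MAX_FRAGS];
>         uint16_t sent;
>
>         /* pkt->data is expected to start at the IPv4 header here. */
>         int32_t nb_frags = rte_ipv4_fragment_packet(pkt, frags, MAX_FRAGS,
>                         mtu, direct_pool, indirect_pool);
>         if (nb_frags < 0) {
>                 rte_pktmbuf_free(pkt);
>                 return;
>         }
>
>         /* ... prepend the L2 header to each fragment here ... */
>
>         sent = rte_eth_tx_burst(port_id, queue_id, frags,
>                         (uint16_t)nb_frags);
>         while (sent < (uint16_t)nb_frags)
>                 rte_pktmbuf_free(frags[sent++]);  /* not taken by the PMD */
>
>         /* Open question: when is it safe to free 'pkt', which the
>          * indirect fragments still reference? */
> }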
>
> Thanks.
>
> BR
> Narsimha
>


* RE: Indirect mbuf handling
  2025-12-09 17:05 ` Indirect mbuf handling narsimharaj pentam
@ 2025-12-10  9:44   ` Morten Brørup
  2025-12-10 11:41     ` narsimharaj pentam
  0 siblings, 1 reply; 3+ messages in thread
From: Morten Brørup @ 2025-12-10  9:44 UTC (permalink / raw)
  To: narsimharaj pentam, users, dev

> From: narsimharaj pentam [mailto:pnarsimharaj@gmail.com] 
> Sent: Tuesday, 9 December 2025 18.05
> 
> Added dev group.
> 
> On Tue, Dec 9, 2025 at 10:11 PM narsimharaj pentam <pnarsimharaj@gmail.com> wrote:
> Hi,
> 
> I have a query about IP fragmentation handling in DPDK.
> 
> The DPDK application needs to send a packet larger than the MTU configured on the interface, so the packet is fragmented before it is handed to the i40e PMD. The DPDK library function rte_ipv4_fragment_packet() is used for the fragmentation. For each fragment, rte_ipv4_fragment_packet() creates a direct mbuf and indirect mbufs; the indirect mbufs hold a reference to the mbuf of the original packet (zero copy).
> 
> The application then calls rte_eth_tx_burst() to transmit the fragments, which internally invokes i40e_xmit_pkts(). The question is: when should the main application mbuf (the original packet) be freed? Can it be freed immediately after i40e_xmit_pkts() returns success? I am not sure, because the mbufs are queued in a software ring before the actual transmit, and I am worried about the fragments still holding references to the main application buffer.

The original packet can be freed immediately once the fragments have been created.

This is what the fragmentation example does:
https://elixir.bootlin.com/dpdk/v25.11/source/examples/ip_fragmentation/main.c#L289

This is what happens:
The original packet has a reference counter (which was incremented for each of the indirect mbufs referring to it), so freeing it at that point doesn't put it back in the pool.
When the last of the indirect mbufs is freed (by the driver called by rte_eth_tx_burst()), the original packet's reference counter reaches zero, and then the original mbuf is put back in the pool.
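
In code, the pattern from the example looks roughly like this (an illustrative sketch, not copied verbatim; 'pkt', 'frags', the mempools and the port/queue ids are assumed to be set up as in your application):

struct rte_mbuf *frags[MAX_FRAGS];      /* MAX_FRAGS: app-defined limit */
int32_t nb_frags = rte_ipv4_fragment_packet(pkt, frags, MAX_FRAGS,
                mtu, direct_pool, indirect_pool);
if (nb_frags < 0) {
        rte_pktmbuf_free(pkt);  /* fragmentation failed, drop the packet */
        return;                 /* (inside the application's tx function) */
}

/* Each indirect fragment has incremented pkt's reference counter, so
 * rte_mbuf_refcnt_read(pkt) > 1 here. This free only drops the
 * application's own reference; pkt is NOT put back in the mempool. */
rte_pktmbuf_free(pkt);

/* Prepend the L2 header to each fragment, then transmit. When the driver
 * eventually frees the last indirect fragment (after it has been
 * transmitted and its descriptor is recycled), the original mbuf's
 * reference counter reaches zero and only then does it go back to the
 * pool. Fragments the PMD does not accept must be freed by the
 * application as usual. */
rte_eth_tx_burst(port_id, queue_id, frags, (uint16_t)nb_frags);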

> 
> Thanks.
> 
> BR
> Narsimha



* Re: Indirect mbuf handling
  2025-12-10  9:44   ` Morten Brørup
@ 2025-12-10 11:41     ` narsimharaj pentam
  0 siblings, 0 replies; 3+ messages in thread
From: narsimharaj pentam @ 2025-12-10 11:41 UTC (permalink / raw)
  To: Morten Brørup; +Cc: users, dev


Thanks for your response, got it.

BR
Narsimha

On Wed, Dec 10, 2025 at 3:14 PM Morten Brørup <mb@smartsharesystems.com>
wrote:

> > From: narsimharaj pentam [mailto:pnarsimharaj@gmail.com]
> > Sent: Tuesday, 9 December 2025 18.05
> >
> > Added dev group.
> >
> > On Tue, Dec 9, 2025 at 10:11 PM narsimharaj pentam <
> pnarsimharaj@gmail.com> wrote:
> > Hi,
> >
> > I have a query about IP fragmentation handling in DPDK.
> >
> > The DPDK application needs to send a packet larger than the MTU
> > configured on the interface, so the packet is fragmented before it is
> > handed to the i40e PMD. The DPDK library function
> > rte_ipv4_fragment_packet() is used for the fragmentation. For each
> > fragment, rte_ipv4_fragment_packet() creates a direct mbuf and indirect
> > mbufs; the indirect mbufs hold a reference to the mbuf of the original
> > packet (zero copy).
> >
> > The application then calls rte_eth_tx_burst() to transmit the fragments,
> > which internally invokes i40e_xmit_pkts(). The question is: when should
> > the main application mbuf (the original packet) be freed? Can it be
> > freed immediately after i40e_xmit_pkts() returns success? I am not sure,
> > because the mbufs are queued in a software ring before the actual
> > transmit, and I am worried about the fragments still holding references
> > to the main application buffer.
>
> The original packet can be freed immediately once the fragments have been
> created.
>
> This is what the fragmentation example does:
>
> https://elixir.bootlin.com/dpdk/v25.11/source/examples/ip_fragmentation/main.c#L289
>
> This is what happens:
> The original packet has a reference counter (which was incremented for
> each of the indirect mbufs referring to it), so freeing it at that point
> doesn't put it back in the pool.
> When the last of the indirect mbufs is freed (by the driver called by
> rte_eth_tx_burst()), the original packet's reference counter reaches zero,
> and then the original mbuf is put back in the pool.
>
> >
> > Thanks.
> >
> > BR
> > Narsimha
>
>
