DPDK usage discussions
* Support jumbo packets with XDP
@ 2025-02-02  6:53 Ofer Dagan
  2025-02-02 17:32 ` Stephen Hemminger
  0 siblings, 1 reply; 4+ messages in thread
From: Ofer Dagan @ 2025-02-02  6:53 UTC (permalink / raw)
  To: users

Hi all,

We are trying to start using AF_XDP instead of libpcap (for use cases where
DPDK drivers aren't a good fit for us). When using XDP, we can't set a high
MTU. How can we still support jumbo packets in our application?

Thanks,
Ofer


* Re: Support jumbo packets with XDP
  2025-02-02  6:53 Support jumbo packets with XDP Ofer Dagan
@ 2025-02-02 17:32 ` Stephen Hemminger
  2025-02-03  7:16   ` [EXTERNAL] " Ofer Dagan
  0 siblings, 1 reply; 4+ messages in thread
From: Stephen Hemminger @ 2025-02-02 17:32 UTC (permalink / raw)
  To: Ofer Dagan; +Cc: users

On Sun, 2 Feb 2025 08:53:42 +0200
Ofer Dagan <ofer.d@claroty.com> wrote:

> Hi all,
> 
> We are trying to start using AF_XDP instead of libpcap (for use cases where
> DPDK drivers aren't a good fit for us). When using XDP, we can't set a high
> MTU. How can we still support jumbo packets in our application?
> 
> Thanks,
> Ofer

What error are you seeing?
What version of DPDK and what version of the kernel are you building for?

The current version of the AF_XDP poll mode driver supports larger MTU sizes.
It is constrained because the receive buffer has to fit in a single page,
and there is overhead for the various headers.


	dev_info->min_mtu = RTE_ETHER_MIN_MTU;
#if defined(XDP_UMEM_UNALIGNED_CHUNK_FLAG)
	/* One page per buffer, minus the mempool object header, the mbuf
	 * metadata, and the two reserved headrooms. */
	dev_info->max_rx_pktlen = getpagesize() -
				  sizeof(struct rte_mempool_objhdr) -
				  sizeof(struct rte_mbuf) -
				  RTE_PKTMBUF_HEADROOM - XDP_PACKET_HEADROOM;
#else
	/* Older kernels: fixed frame size minus the XDP headroom. */
	dev_info->max_rx_pktlen = ETH_AF_XDP_FRAME_SIZE - XDP_PACKET_HEADROOM;
#endif
	/* Ethernet header + CRC come out of the packet length. */
	dev_info->max_mtu = dev_info->max_rx_pktlen - ETH_AF_XDP_ETH_OVERHEAD;

If you have a relatively recent kernel, XDP_UMEM_UNALIGNED_CHUNK_FLAG should
be defined. Stepping through the maths for that:
	max_rx_pktlen = 4096 - 24 - 128 - 128 - 256 = 3560
	max_mtu = max_rx_pktlen - 14 - 4 = 3542
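
If you want to sanity-check those numbers on your own system, here is a
minimal stand-alone sketch. The sizes below mirror common DPDK build
defaults; they are assumptions, so verify them against your build
(rte_config.h and the af_xdp driver source).

#include <stdio.h>
#include <unistd.h>

/* Assumed sizes in bytes; check your DPDK build. */
#define OBJHDR_SIZE       24	/* sizeof(struct rte_mempool_objhdr) */
#define MBUF_SIZE        128	/* sizeof(struct rte_mbuf) */
#define PKTMBUF_HEADROOM 128	/* RTE_PKTMBUF_HEADROOM */
#define XDP_HEADROOM     256	/* XDP_PACKET_HEADROOM */
#define ETH_OVERHEAD      18	/* Ethernet header (14) + FCS (4) */

int main(void)
{
	long pktlen = sysconf(_SC_PAGESIZE) - OBJHDR_SIZE - MBUF_SIZE -
		      PKTMBUF_HEADROOM - XDP_HEADROOM;

	printf("max_rx_pktlen = %ld, max_mtu = %ld\n",
	       pktlen, pktlen - ETH_OVERHEAD);
	return 0;
}

On a system with 4 KiB pages this prints 3560 and 3542, matching the
arithmetic above.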


* Re: [EXTERNAL] Re: Support jumbo packets with XDP
  2025-02-02 17:32 ` Stephen Hemminger
@ 2025-02-03  7:16   ` Ofer Dagan
  2025-02-04  2:43     ` Stephen Hemminger
  0 siblings, 1 reply; 4+ messages in thread
From: Ofer Dagan @ 2025-02-03  7:16 UTC (permalink / raw)
  To: stephen; +Cc: users

Hi Stephen,

Thanks for your response. I'm building DPDK 24.11 on kernel version
5.15.0-130-generic. It does seem to work with the MTU you suggested, but
how can I support even larger packets (up to 9000 bytes)?
Are there any workarounds for such cases? I don't mind a performance
penalty, as these use cases are expected to carry less traffic than the
ones using DPDK drivers.

On Sun, Feb 2, 2025 at 7:33 PM Stephen Hemminger <stephen@networkplumber.org>
wrote:

> [...]
> If you have a relatively recent kernel, XDP_UMEM_UNALIGNED_CHUNK_FLAG should
> be defined. Stepping through the maths for that:
> 	max_rx_pktlen = 4096 - 24 - 128 - 128 - 256 = 3560
> 	max_mtu = max_rx_pktlen - 14 - 4 = 3542


* Re: [EXTERNAL] Re: Support jumbo packets with XDP
  2025-02-03  7:16   ` [EXTERNAL] " Ofer Dagan
@ 2025-02-04  2:43     ` Stephen Hemminger
  0 siblings, 0 replies; 4+ messages in thread
From: Stephen Hemminger @ 2025-02-04  2:43 UTC (permalink / raw)
  To: Ofer Dagan; +Cc: users

On Mon, 3 Feb 2025 09:16:58 +0200
Ofer Dagan <ofer.d@claroty.com> wrote:

> Hi Stephen,
> 
> Thanks for your response. I'm building DPDK 24.11 on kernel version
> 5.15.0-130-generic. It does seem to work with the MTU you suggested, but
> how can I support even larger packets (up to 9000 bytes)?
> Are there any workarounds for such cases? I don't mind a performance
> penalty, as these use cases are expected to carry less traffic than the
> ones using DPDK drivers.

The limitation is in the kernel, not DPDK. Kernel XDP uses BPF and, to
optimize with zero copy, only supports buffers of up to one page. You might
ask the kernel XDP maintainers whether they support scatter/gather.
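
In the meantime, rather than assuming a 9000 byte MTU will work, it is safer
to ask the port what it advertises and check against that before configuring.
A minimal sketch; the helper name is just for illustration:

#include <stdio.h>
#include <rte_ethdev.h>

/* Verify the port can carry the MTU we want before applying it. */
static int
check_and_set_mtu(uint16_t port_id, uint16_t wanted)
{
	struct rte_eth_dev_info dev_info;
	int ret = rte_eth_dev_info_get(port_id, &dev_info);

	if (ret != 0)
		return ret;

	if (wanted > dev_info.max_mtu) {
		printf("port %u caps MTU at %u (wanted %u)\n",
		       port_id, dev_info.max_mtu, wanted);
		return -1;
	}
	return rte_eth_dev_set_mtu(port_id, wanted);
}

On this setup, anything above the 3542 derived earlier will be rejected.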


