Hi Stephan,

Thanks for your response. I'm building DPDK 24.11 on kernel 5.15.0-130-generic. It does seem to work with the MTU you suggested, but how can I support even larger packets (up to 9000 bytes)?
Are there any workarounds for such cases? I don't mind a performance penalty, as these use cases are expected to carry less traffic than the ones using DPDK drivers.

On Sun, Feb 2, 2025 at 7:33 PM Stephen Hemminger <stephen@networkplumber.org> wrote:
On Sun, 2 Feb 2025 08:53:42 +0200
Ofer Dagan <ofer.d@claroty.com> wrote:

> Hi all,
> 
> We are trying to start using AF_XDP instead of libpcap (for use cases where
> dpdk drivers aren't a good fit for us). When using XDP, we can't set a high
> MTU. How can we still support jumbo packets in our application?
> 
> Thanks,
> Ofer

What error are you seeing?
What version of DPDK, and what version of the kernel, are you building for?

The current version of the AF_XDP poll mode driver supports larger MTU sizes.
It is constrained because the receive buffer has to fit in a single page,
and there is overhead for the various headers.


	dev_info->min_mtu = RTE_ETHER_MIN_MTU;
#if defined(XDP_UMEM_UNALIGNED_CHUNK_FLAG)
	/* Unaligned chunks: the frame, the mempool object header, the
	 * mbuf struct, and both headrooms must all fit in one page. */
	dev_info->max_rx_pktlen = getpagesize() -
				  sizeof(struct rte_mempool_objhdr) -
				  sizeof(struct rte_mbuf) -
				  RTE_PKTMBUF_HEADROOM - XDP_PACKET_HEADROOM;
#else
	/* Aligned chunks: limited by the fixed AF_XDP frame size. */
	dev_info->max_rx_pktlen = ETH_AF_XDP_FRAME_SIZE - XDP_PACKET_HEADROOM;
#endif
	dev_info->max_mtu = dev_info->max_rx_pktlen - ETH_AF_XDP_ETH_OVERHEAD;

If you have a relatively recent kernel, XDP_UMEM_UNALIGNED_CHUNK_FLAG should be defined.
Stepping through the maths for that case, on a 4096-byte page:

	max_rx_pktlen = 4096 - 24 - 128 - 128 - 256 = 3560
	                (page size, minus sizeof(struct rte_mempool_objhdr),
	                 sizeof(struct rte_mbuf), RTE_PKTMBUF_HEADROOM,
	                 and XDP_PACKET_HEADROOM)
	max_mtu = max_rx_pktlen - 14 - 4 = 3542
	          (ETH_AF_XDP_ETH_OVERHEAD: the 14-byte Ethernet header
	           plus 4 more bytes of Ethernet overhead)
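
For what it's worth, an application does not need to hard-code these numbers; it can query what the PMD reports at runtime and request the largest allowed MTU. A minimal sketch (the port id, function name, and error handling here are illustrative, not from the driver):

	#include <stdio.h>
	#include <rte_ethdev.h>

	/* Query the limits the AF_XDP PMD advertises and request the
	 * largest MTU it allows. Port id 0 is assumed for illustration;
	 * many PMDs require the port to be stopped when changing the MTU. */
	static int use_largest_mtu(uint16_t port_id)
	{
		struct rte_eth_dev_info dev_info;
		int ret;

		ret = rte_eth_dev_info_get(port_id, &dev_info);
		if (ret != 0)
			return ret;

		printf("max_rx_pktlen=%u max_mtu=%u\n",
		       dev_info.max_rx_pktlen, dev_info.max_mtu);

		return rte_eth_dev_set_mtu(port_id, dev_info.max_mtu);
	}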