Hi Robert,

I tried your suggestion and it resolved my issue for the E810 NIC in PCI passthrough.

I also tried checking the port's TX offload capabilities and, where RTE_ETH_TX_OFFLOAD_MULTI_SEGS is supported, setting txmode.offloads |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS. VMXNET3 does report support for it.
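
Roughly, the check looks like this (a sketch; port_id and the surrounding configuration code are assumed):

    #include <rte_ethdev.h>

    /* Enable multi-segment TX only when the port advertises it.
     * port_id and the rest of port_conf are set up elsewhere. */
    struct rte_eth_dev_info dev_info;
    struct rte_eth_conf port_conf = { 0 };

    rte_eth_dev_info_get(port_id, &dev_info);
    if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
        port_conf.txmode.offloads |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;

    /* port_conf is then passed to rte_eth_dev_configure()
     * before the TX queues are set up. */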

Why is it necessary to explicitly set the RTE_ETH_TX_OFFLOAD_MULTI_SEGS bit for the physical NIC, but not for the VMware VMXNET3 vNIC?

Thank you!

Regards,

Ed

From: Sanford, Robert <rsanford@akamai.com>
Sent: Friday, July 26, 2024 5:52 PM
To: Lombardo, Ed <Ed.Lombardo@netscout.com>; users@dpdk.org
Subject: Re: prepend mbuf to another mbuf

Did you try to set DEV_TX_OFFLOAD_MULTI_SEGS in txmode.offloads?

Regards,

Robert

From: "Lombardo, Ed" <Ed.Lombardo@netscout.com>
Date: Friday, July 26, 2024 at 3:30 PM
To: "users@dpdk.org" <users@dpdk.org>
Subject: prepend mbuf to another mbuf

Hi,

I have an issue retransmitting a received packet with encapsulation headers prepended to the original packet when using an E810 NIC for transmit.

A received packet is stored in one or more mbufs.  To do the encapsulation, I acquire an mbuf from the free pool; this new mbuf is where I add the L2 header, IPv4 header, and GRE header.  I call rte_mbuf_refcnt_set(new_mbuf, 1) and rte_mbuf_refcnt_update(mbuf, 1), and then fill in the new mbuf's metadata (nb_segs, pkt_len, port, mbuf->next, etc.) from the original mbuf.
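
In code, the chaining step looks roughly like this (a simplified sketch of what I described; fill_encap() is a placeholder for my header-building code):

    #include <rte_mbuf.h>

    /* Sketch: prepend encap headers held in a fresh mbuf by chaining it
     * in front of the original packet. */
    static struct rte_mbuf *
    encap_prepend(struct rte_mbuf *orig, struct rte_mempool *pool,
                  uint16_t encap_len)
    {
        struct rte_mbuf *hdr = rte_pktmbuf_alloc(pool);
        if (hdr == NULL)
            return NULL;

        /* The new first segment carries only the L2 + IPv4 + GRE headers. */
        char *p = rte_pktmbuf_append(hdr, encap_len);
        fill_encap(p);  /* placeholder: writes the three headers */

        /* Keep the original packet alive and link it behind the headers. */
        rte_mbuf_refcnt_update(orig, 1);
        hdr->next    = orig;
        hdr->nb_segs = orig->nb_segs + 1;
        hdr->pkt_len = orig->pkt_len + encap_len;
        hdr->port    = orig->port;
        return hdr;
    }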

When I test this feature in VMware with the VMXNET3 vNIC, it works perfectly: the packet arrives at the endpoint with the encapsulation headers ahead of the original packet.

When I run the same test on the Intel E810, only the first mbuf of data is transmitted; the original packet data from the remaining mbufs is not.

I compared the mbufs just prior to transmit, byte by byte, in the VMXNET3 and E810 cases, and they are identical; the code path is the same.

I also tried DPDK 17.11 and DPDK 22.11, with the same results.

The same test also fails with Intel X710 and X540 NICs, in the same way as the E810.

I modified the code to insert the encapsulation headers into the headroom of the original mbuf instead, and it worked perfectly.
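
That is, roughly (a sketch; fill_encap() is the same placeholder as above):

    /* Sketch: write the headers into the first segment's headroom
     * instead of adding a new segment. rte_pktmbuf_prepend() adjusts
     * data_off, data_len, and pkt_len itself; nb_segs and the segment
     * chain are untouched. */
    char *p = rte_pktmbuf_prepend(orig, encap_len);
    if (p != NULL)
        fill_encap(p);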

What could be the issue with the Intel NICs when transmitting a chain of mbufs, where the first mbuf holds only the L2, IPv4, and GRE headers and the remaining mbuf(s) contain the original packet data?

Thanks,

Ed