Hi Ed,

It’s good to know that RTE_ETH_TX_OFFLOAD_MULTI_SEGS solved your problem!

 

I don’t know for sure why s/w drivers don’t need it. They’re probably just simpler, in that they may not need any specific configuration up front to handle multiple segments for RX or TX. They’re nice enough to let you do it without telling them in advance.

 

H/w drivers probably need to configure h/w registers, descriptors, and other resources, depending on how you plan to use them. I know that some drivers, during config, select which RX and TX burst functions to use, based on the offload flags or other config data.

 

We found that one 100Gb NIC we use performs poorly when we enable DEV_RX_OFFLOAD_SCATTER to support jumbo frames. We didn’t really need 9KB packets; we just wanted to support frame sizes above 1500 bytes but below 2KB. So we kept the mbuf buffer size at 2KB, enabled DEV_RX_OFFLOAD_JUMBO_FRAME, and left DEV_RX_OFFLOAD_SCATTER off. I believe the performance problem was due to the RX-burst function that the driver selects when RX-SCATTER support is enabled.
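
For reference, a minimal sketch of that RX configuration, assuming a pre-21.11 DPDK release where DEV_RX_OFFLOAD_JUMBO_FRAME and rxmode.max_rx_pkt_len still exist (I believe newer releases drop the jumbo flag and use rxmode.mtu instead); the helper name and queue counts are placeholders:

#include <rte_ethdev.h>

/* Hypothetical helper: accept frames up to 2KB in a single 2KB mbuf,
 * i.e. enable the jumbo-frame offload but leave RX scatter off. */
static int
configure_rx_no_scatter(uint16_t port_id, uint16_t nb_rxq, uint16_t nb_txq)
{
    struct rte_eth_conf port_conf = {
        .rxmode = {
            .offloads = DEV_RX_OFFLOAD_JUMBO_FRAME,
            .max_rx_pkt_len = 2048,   /* > 1500, <= mbuf data room */
            /* DEV_RX_OFFLOAD_SCATTER deliberately not set */
        },
    };

    return rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &port_conf);
}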

 

We use mbuf headroom for adding encapsulation headers. We typically use TX multi-segments only for IP fragmentation.
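
For reference, the headroom approach is typically just rte_pktmbuf_prepend() on the first segment; a rough sketch (the helper name, encap_len, and the header-filling step are placeholders):

#include <rte_mbuf.h>

/* Hypothetical helper: make room for encapsulation headers in the
 * headroom of the first segment of an already-received packet 'm'.
 * rte_pktmbuf_prepend() also updates data_len and pkt_len. */
static int
add_encap_in_headroom(struct rte_mbuf *m, uint16_t encap_len)
{
    char *encap = rte_pktmbuf_prepend(m, encap_len);

    if (encap == NULL)
        return -1;    /* not enough headroom in this mbuf */

    /* ... fill in L2 / IPv4 / GRE headers at 'encap' ... */
    return 0;
}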

 

Regards,

Robert

 

From: "Lombardo, Ed" <Ed.Lombardo@netscout.com>
Date: Monday, July 29, 2024 at 3:15 PM
To: "Sanford, Robert" <rsanford@akamai.com>, "users@dpdk.org" <users@dpdk.org>
Subject: RE: prepend mbuf to another mbuf

 

Hi Robert,

I tried your suggestion and it resolved my issue for the E810 NIC in PCI PT.

 

I also tried checking the port's TX offload capabilities and, if supported, setting txmode.offloads |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS, to see whether VMXNET3 supports it; it does.
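
Roughly like this, assuming the usual pattern of checking dev_info.tx_offload_capa before rte_eth_dev_configure(); the helper name is made up, and port_conf comes from the rest of my init code:

#include <rte_ethdev.h>

/* Hypothetical helper: request multi-segment TX only if the PMD
 * advertises it (DPDK 21.11+ flag names). */
static int
enable_tx_multi_segs(uint16_t port_id, struct rte_eth_conf *port_conf)
{
    struct rte_eth_dev_info dev_info;
    int ret = rte_eth_dev_info_get(port_id, &dev_info);

    if (ret != 0)
        return ret;

    if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
        port_conf->txmode.offloads |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;

    return 0;
}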

 

Why is it required to explicitly set the RTE_ETH_TX_OFFLOAD_MULTI_SEGS bit for the physical NIC, but not really required for the VMware VMXNET3 vNIC?

 

Thank you!

 

Regards,

Ed

 

 

From: Sanford, Robert <rsanford@akamai.com>
Sent: Friday, July 26, 2024 5:52 PM
To: Lombardo, Ed <Ed.Lombardo@netscout.com>; users@dpdk.org
Subject: Re: prepend mbuf to another mbuf

 


Did you try to set DEV_TX_OFFLOAD_MULTI_SEGS in txmode.offloads?

 

Regards,

Robert

 

 

From: "Lombardo, Ed" <Ed.Lombardo@netscout.com>
Date: Friday, July 26, 2024 at 3:30 PM
To: "users@dpdk.org" <users@dpdk.org>
Subject: prepend mbuf to another mbuf

 

Hi,

I have an issue retransmitting a received packet with encapsulation headers prepended to the original packet when using an E810 NIC for transmit.

 

I receive a packet that is stored in one or more mbufs.  To do encapsulation, I acquire an mbuf from the free pool.  The new mbuf is where I add the L2 header, IPv4 header, and GRE header.  I call rte_mbuf_refcnt_set(new_mbuf, 1) and rte_mbuf_refcnt_update(mbuf, 1), and then fill in the new mbuf's metadata (nb_segs, pkt_len, port, mbuf->next, etc.) from the original mbuf.
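
In sketch form, the chaining looks roughly like this (the function name, pool, hdr_len, and the header contents are placeholders; I believe rte_pktmbuf_chain() could handle the nb_segs/pkt_len bookkeeping as well):

#include <rte_mbuf.h>

/* Hypothetical sketch: put the L2/IPv4/GRE headers in a new mbuf and
 * chain the original (possibly multi-segment) packet behind it. */
static struct rte_mbuf *
encap_with_new_mbuf(struct rte_mbuf *orig, struct rte_mempool *pool,
                    uint16_t hdr_len)
{
    /* rte_pktmbuf_alloc() returns the mbuf with refcnt already at 1 */
    struct rte_mbuf *new_mbuf = rte_pktmbuf_alloc(pool);
    char *hdrs;

    if (new_mbuf == NULL)
        return NULL;

    hdrs = rte_pktmbuf_append(new_mbuf, hdr_len);
    if (hdrs == NULL) {
        rte_pktmbuf_free(new_mbuf);
        return NULL;
    }
    /* ... fill in L2 / IPv4 / GRE headers at 'hdrs' ... */

    new_mbuf->next    = orig;
    new_mbuf->nb_segs = orig->nb_segs + 1;
    new_mbuf->pkt_len = new_mbuf->data_len + orig->pkt_len;
    new_mbuf->port    = orig->port;

    /* keep the original alive for its other user; note this only
     * bumps the first segment's refcount */
    rte_mbuf_refcnt_update(orig, 1);

    return new_mbuf;
}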

 

When I test this feature in VMware with the VMXNET3 vNIC, it works perfectly: the packet is transmitted with the encapsulation headers ahead of the original packet, as seen at the endpoint.

 

When I run the same test on an Intel E810, only the first mbuf of data is transmitted; the original packet data in the remaining mbufs is not.

 

I compared the mbufs just prior to transmit, byte by byte, in the VMXNET3 and E810 NIC cases, and they are identical; the code path is the same.

I also tried DPDK 17.11 and DPDK 22.11, with the same results.

The same test also fails with Intel X710 and X540 NICs, in a way similar to the E810.

 

I modified the code to insert the encapsulation headers in the headroom of the original mbuf and it worked perfectly.

 

What could be the issue with the Intel NICs when transmitting a chain of mbufs, where the first mbuf has only the L2 header, IPv4 header, and GRE header, and the remaining mbuf(s) contain the original packet data?

 

Thanks,

Ed