DPDK usage discussions
* prepend mbuf to another mbuf
@ 2024-07-26 19:29 Lombardo, Ed
  2024-07-26 21:51 ` Sanford, Robert
  2024-07-27  4:08 ` Ivan Malov
  0 siblings, 2 replies; 5+ messages in thread
From: Lombardo, Ed @ 2024-07-26 19:29 UTC (permalink / raw)
  To: users

[-- Attachment #1: Type: text/plain, Size: 1527 bytes --]

Hi,
I have an issue with retransmitting a received packet with encapsulation headers prepended to the original packet when using an E810 NIC for transmit.

The received packet is stored in one or more mbufs.  To do the encapsulation I acquire an mbuf from the free pool; this new mbuf is where I add the L2 header, IPv4 header and GRE header.  I update the reference counts with rte_mbuf_refcnt_set(new_mbuf, 1) and rte_mbuf_refcnt_update(mbuf, 1), and then fill in the new mbuf's metadata (nb_segs, pkt_len, port, mbuf->next, etc.) from the original mbuf.
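
A minimal sketch of this prepend path (the mempool, variable names and header fill-in below are placeholders, and the refcount handling is left out):

#include <rte_mbuf.h>
#include <rte_ether.h>
#include <rte_ip.h>
#include <rte_gre.h>

/* Prepend an encapsulation mbuf in front of an already-received packet.
 * 'pool', 'orig' and the header contents are illustrative only. */
static struct rte_mbuf *
encap_prepend(struct rte_mempool *pool, struct rte_mbuf *orig)
{
    struct rte_mbuf *head = rte_pktmbuf_alloc(pool);
    if (head == NULL)
        return NULL;

    /* Reserve room for L2 + IPv4 + GRE in the new first segment. */
    size_t hdr_len = sizeof(struct rte_ether_hdr) +
                     sizeof(struct rte_ipv4_hdr) +
                     sizeof(struct rte_gre_hdr);
    char *hdrs = rte_pktmbuf_append(head, (uint16_t)hdr_len);
    if (hdrs == NULL) {
        rte_pktmbuf_free(head);
        return NULL;
    }
    /* ... fill in the Ethernet, IPv4 and GRE headers at 'hdrs' ... */

    /* Link the original packet behind the new head segment and fix up
     * the packet-level metadata, which only the head segment carries. */
    head->next = orig;
    head->nb_segs = (uint16_t)(1 + orig->nb_segs);
    head->pkt_len = (uint32_t)(hdr_len + orig->pkt_len);
    head->port = orig->port;
    return head;
}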

When I test this feature in VMware with a VMXNET3 vNIC it works perfectly: the packet arrives at the endpoint with the encapsulation headers ahead of the original packet.

When I run the same test on an Intel E810, only the first mbuf of data is transmitted; the original packet data in the remaining mbufs is not.

I compared the mbufs just prior to transmit, byte by byte, in the VMXNET3 and E810 cases and they are identical; the code path is the same.
I also tried DPDK 17.11 and DPDK 22.11 with the same results.
The same test also fails with Intel X710 and X540 NICs, in the same way as the E810.

I modified the code to insert the encapsulation headers in the headroom of the original mbuf and it worked perfectly.
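
For reference, a sketch of that headroom variant using rte_pktmbuf_prepend() (the header layout is illustrative only):

#include <rte_mbuf.h>
#include <rte_ether.h>
#include <rte_ip.h>
#include <rte_gre.h>

/* Push the encapsulation headers into the headroom of the original
 * packet's first segment (works as long as enough headroom remains). */
static int
encap_in_headroom(struct rte_mbuf *orig)
{
    size_t hdr_len = sizeof(struct rte_ether_hdr) +
                     sizeof(struct rte_ipv4_hdr) +
                     sizeof(struct rte_gre_hdr);

    char *hdrs = rte_pktmbuf_prepend(orig, (uint16_t)hdr_len);
    if (hdrs == NULL)
        return -1;          /* not enough headroom */

    /* ... fill in the Ethernet, IPv4 and GRE headers at 'hdrs' ... */
    /* pkt_len and data_len of the first segment are updated by
     * rte_pktmbuf_prepend() itself. */
    return 0;
}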

What could be the issue with the Intel NICs when transmitting a chain of mbufs, where the first mbuf holds only the L2 header, IPv4 header and GRE header and the remaining mbuf(s) contain the original packet data?

Thanks,
Ed


^ permalink raw reply	[flat|nested] 5+ messages in thread

* Re: prepend mbuf to another mbuf
  2024-07-26 19:29 prepend mbuf to another mbuf Lombardo, Ed
@ 2024-07-26 21:51 ` Sanford, Robert
  2024-07-29 19:14   ` Lombardo, Ed
  2024-07-27  4:08 ` Ivan Malov
  1 sibling, 1 reply; 5+ messages in thread
From: Sanford, Robert @ 2024-07-26 21:51 UTC (permalink / raw)
  To: Lombardo, Ed, users

[-- Attachment #1: Type: text/plain, Size: 1791 bytes --]

Did you try to set DEV_TX_OFFLOAD_MULTI_SEGS in txmode.offloads?
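
A minimal sketch of enabling it at configure time (the port id and single-queue setup are placeholders; newer releases spell the flag RTE_ETH_TX_OFFLOAD_MULTI_SEGS):

#include <string.h>
#include <rte_ethdev.h>

/* Configure 'port_id' with multi-segment TX requested, before the
 * queues are set up and the port is started. */
static int
configure_port_multiseg(uint16_t port_id)
{
    struct rte_eth_conf port_conf;

    memset(&port_conf, 0, sizeof(port_conf));
    port_conf.txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;

    return rte_eth_dev_configure(port_id, 1 /* rxq */, 1 /* txq */, &port_conf);
}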

Regards,
Robert


From: "Lombardo, Ed" <Ed.Lombardo@netscout.com>
Date: Friday, July 26, 2024 at 3:30 PM
To: "users@dpdk.org" <users@dpdk.org>
Subject: prepend mbuf to another mbuf

Hi,
I have an issue with retransmitting a received packet with encapsulation headers prepended to the original packet when using an E810 NIC for transmit.

The received packet is stored in one or more mbufs.  To do the encapsulation I acquire an mbuf from the free pool; this new mbuf is where I add the L2 header, IPv4 header and GRE header.  I update the reference counts with rte_mbuf_refcnt_set(new_mbuf, 1) and rte_mbuf_refcnt_update(mbuf, 1), and then fill in the new mbuf's metadata (nb_segs, pkt_len, port, mbuf->next, etc.) from the original mbuf.

When I test this feature in VMware with a VMXNET3 vNIC it works perfectly: the packet arrives at the endpoint with the encapsulation headers ahead of the original packet.

When I run the same test on an Intel E810, only the first mbuf of data is transmitted; the original packet data in the remaining mbufs is not.

I compared the mbufs just prior to transmit, byte by byte, in the VMXNET3 and E810 cases and they are identical; the code path is the same.
I also tried DPDK 17.11 and DPDK 22.11 with the same results.
The same test also fails with Intel X710 and X540 NICs, in the same way as the E810.

I modified the code to insert the encapsulation headers in the headroom of the original mbuf and it worked perfectly.

What could be the issue with the Intel NICs when transmitting a chain of mbufs, where the first mbuf holds only the L2 header, IPv4 header and GRE header and the remaining mbuf(s) contain the original packet data?

Thanks,
Ed



^ permalink raw reply	[flat|nested] 5+ messages in thread

* Re: prepend mbuf to another mbuf
  2024-07-26 19:29 prepend mbuf to another mbuf Lombardo, Ed
  2024-07-26 21:51 ` Sanford, Robert
@ 2024-07-27  4:08 ` Ivan Malov
  1 sibling, 0 replies; 5+ messages in thread
From: Ivan Malov @ 2024-07-27  4:08 UTC (permalink / raw)
  To: Lombardo, Ed; +Cc: users

[-- Attachment #1: Type: text/plain, Size: 2246 bytes --]

Hi Ed,

Tampering with reference counts seems peculiar. Why do that?

I'd recommend you replace the manual prepend workflow that you have described
with just one invocation of 'rte_pktmbuf_chain(mbuf_encap, mbuf_orig)' [1]
or, if it's unfit for whatever reason, make sure that the manual prepend
code of yours does set correct 'pkt_len' in the new head mbuf and reset
the same field in the original mbuf head. Will that work for you?

Should you have further questions, please don't hesitate to ask.

Thank you.

[1] https://doc.dpdk.org/api-22.11/rte__mbuf_8h_source.html#l01758
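
A minimal usage sketch, reusing the names above:

#include <rte_mbuf.h>

/* Prepend 'mbuf_encap' (already holding the L2/IPv4/GRE headers) to the
 * received packet 'mbuf_orig'.  rte_pktmbuf_chain() walks to the last
 * segment of the head, links the tail, and accumulates nb_segs and
 * pkt_len in the head for you. */
static struct rte_mbuf *
prepend_encap(struct rte_mbuf *mbuf_encap, struct rte_mbuf *mbuf_orig)
{
    if (rte_pktmbuf_chain(mbuf_encap, mbuf_orig) != 0) {
        /* chain would exceed RTE_MBUF_MAX_NB_SEGS; treat as an error */
        return NULL;
    }
    return mbuf_encap;    /* new head of the chain */
}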

On Fri, 26 Jul 2024, Lombardo, Ed wrote:

> 
> Hi,
>
> I have an issue with retransmitting a received packet with encapsulation headers prepended to the original packet when using an E810 NIC for transmit.
>
> The received packet is stored in one or more mbufs.  To do the encapsulation I acquire an mbuf from the free pool; this new mbuf is where I add the L2 header,
> IPv4 header and GRE header.  I update the reference counts with rte_mbuf_refcnt_set(new_mbuf, 1) and rte_mbuf_refcnt_update(mbuf, 1), and then fill in the new
> mbuf's metadata (nb_segs, pkt_len, port, mbuf->next, etc.) from the original mbuf.
>
> When I test this feature in VMware with a VMXNET3 vNIC it works perfectly: the packet arrives at the endpoint with the encapsulation headers ahead of the
> original packet.
>
> When I run the same test on an Intel E810, only the first mbuf of data is transmitted; the original packet data in the remaining mbufs is not.
>
> I compared the mbufs just prior to transmit, byte by byte, in the VMXNET3 and E810 cases and they are identical; the code path is the same.
>
> I also tried DPDK 17.11 and DPDK 22.11 with the same results.
>
> The same test also fails with Intel X710 and X540 NICs, in the same way as the E810.
>
> I modified the code to insert the encapsulation headers in the headroom of the original mbuf and it worked perfectly.
>
> What could be the issue with the Intel NICs when transmitting a chain of mbufs, where the first mbuf holds only the L2 header, IPv4 header and GRE header and
> the remaining mbuf(s) contain the original packet data?
>
> Thanks,
>
> Ed
> 
> 
>

^ permalink raw reply	[flat|nested] 5+ messages in thread

* RE: prepend mbuf to another mbuf
  2024-07-26 21:51 ` Sanford, Robert
@ 2024-07-29 19:14   ` Lombardo, Ed
  2024-07-29 20:20     ` Sanford, Robert
  0 siblings, 1 reply; 5+ messages in thread
From: Lombardo, Ed @ 2024-07-29 19:14 UTC (permalink / raw)
  To: Sanford, Robert, users

[-- Attachment #1: Type: text/plain, Size: 2673 bytes --]

Hi Robert,
I tried your suggestion and it resolved my issue for the E810 NIC in PCI passthrough.

I also checked the port's TX offload capability and, where supported, set txmode.offloads |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS to see whether VMXNET3 supports it, and it does.
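
A sketch of that capability check (the port id and single-queue setup are placeholders):

#include <string.h>
#include <rte_ethdev.h>

/* Query the TX offload capabilities reported by the PMD and only
 * request multi-segment TX when the device advertises it. */
static int
configure_port_checked(uint16_t port_id)
{
    struct rte_eth_dev_info dev_info;
    struct rte_eth_conf port_conf;
    int ret;

    ret = rte_eth_dev_info_get(port_id, &dev_info);
    if (ret != 0)
        return ret;

    memset(&port_conf, 0, sizeof(port_conf));
    if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
        port_conf.txmode.offloads |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;

    return rte_eth_dev_configure(port_id, 1, 1, &port_conf);
}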

Why does the RTE_ETH_TX_OFFLOAD_MULTI_SEGS bit have to be set explicitly for the physical NIC, but not for the VMware VMXNET3 vNIC?

Thank you!

Regards,
Ed


From: Sanford, Robert <rsanford@akamai.com>
Sent: Friday, July 26, 2024 5:52 PM
To: Lombardo, Ed <Ed.Lombardo@netscout.com>; users@dpdk.org
Subject: Re: prepend mbuf to another mbuf

Did you try to set DEV_TX_OFFLOAD_MULTI_SEGS in txmode.offloads?

Regards,
Robert


From: "Lombardo, Ed" <Ed.Lombardo@netscout.com<mailto:Ed.Lombardo@netscout.com>>
Date: Friday, July 26, 2024 at 3:30 PM
To: "users@dpdk.org<mailto:users@dpdk.org>" <users@dpdk.org<mailto:users@dpdk.org>>
Subject: prepend mbuf to another mbuf

Hi,
I have an issue with retransmitting a received packet with encapsulation headers prepended to the original packet when using an E810 NIC for transmit.

The received packet is stored in one or more mbufs.  To do the encapsulation I acquire an mbuf from the free pool; this new mbuf is where I add the L2 header, IPv4 header and GRE header.  I update the reference counts with rte_mbuf_refcnt_set(new_mbuf, 1) and rte_mbuf_refcnt_update(mbuf, 1), and then fill in the new mbuf's metadata (nb_segs, pkt_len, port, mbuf->next, etc.) from the original mbuf.

When I test this feature in VMware with a VMXNET3 vNIC it works perfectly: the packet arrives at the endpoint with the encapsulation headers ahead of the original packet.

When I run the same test on an Intel E810, only the first mbuf of data is transmitted; the original packet data in the remaining mbufs is not.

I compared the mbufs just prior to transmit, byte by byte, in the VMXNET3 and E810 cases and they are identical; the code path is the same.
I also tried DPDK 17.11 and DPDK 22.11 with the same results.
The same test also fails with Intel X710 and X540 NICs, in the same way as the E810.

I modified the code to insert the encapsulation headers in the headroom of the original mbuf and it worked perfectly.

What could be the issue with the Intel NICs when transmitting a chain of mbufs, where the first mbuf holds only the L2 header, IPv4 header and GRE header and the remaining mbuf(s) contain the original packet data?

Thanks,
Ed



^ permalink raw reply	[flat|nested] 5+ messages in thread

* Re: prepend mbuf to another mbuf
  2024-07-29 19:14   ` Lombardo, Ed
@ 2024-07-29 20:20     ` Sanford, Robert
  0 siblings, 0 replies; 5+ messages in thread
From: Sanford, Robert @ 2024-07-29 20:20 UTC (permalink / raw)
  To: Lombardo, Ed, users

[-- Attachment #1: Type: text/plain, Size: 4084 bytes --]

Hi Ed,
It’s good to know that RTE_ETH_TX_OFFLOAD_MULTI_SEGS solved your problem!

I don’t know for sure why s/w drivers don’t need it. They’re probably just simpler, in that they may not need any specific configuration up front to handle multiple segments for RX or TX. They’re nice enough to let you do it without telling them in advance.

H/w drivers probably need to configure h/w registers, descriptors, and other resources, depending on how you plan to use them. I know that some drivers, during config, select which RX and TX burst functions to use, based on the offload flags or other config data.

We found that one 100Gb NIC we use performs poorly when we enable DEV_RX_OFFLOAD_SCATTER to support jumbo frames. We didn’t really need 9KB packets, we just wanted to support frame size >1500 and <2KB. We kept the mbuf buffer size 2KB, enabled DEV_RX_OFFLOAD_JUMBO_FRAME, but kept DEV_RX_OFFLOAD_SCATTER off. I believe the performance problem was due to the RX-burst function that the driver selected for RX-SCATTER support.
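
For illustration, a configuration along those lines using the pre-21.11 names mentioned above (newer releases drop the jumbo flag and set rxmode.mtu or call rte_eth_dev_set_mtu() instead):

#include <string.h>
#include <rte_ethdev.h>

/* Accept frames a bit over 1500 bytes in a single 2KB mbuf: raise
 * max_rx_pkt_len and enable the jumbo flag, but leave RX scatter
 * disabled so the PMD keeps its simpler, non-scattered RX burst path. */
static int
configure_rx_2k_frames(uint16_t port_id)
{
    struct rte_eth_conf port_conf;

    memset(&port_conf, 0, sizeof(port_conf));
    port_conf.rxmode.max_rx_pkt_len = 2000;              /* >1500, <2KB */
    port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
    /* DEV_RX_OFFLOAD_SCATTER intentionally left unset. */

    return rte_eth_dev_configure(port_id, 1, 1, &port_conf);
}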

We use mbuf headroom for adding encapsulation headers. We typically use TX multi-segments only for IP fragmentation.

Regards,
Robert

From: "Lombardo, Ed" <Ed.Lombardo@netscout.com>
Date: Monday, July 29, 2024 at 3:15 PM
To: "Sanford, Robert" <rsanford@akamai.com>, "users@dpdk.org" <users@dpdk.org>
Subject: RE: prepend mbuf to another mbuf

Hi Robert,
I tried your suggestion and it resolved my issue for the E810 NIC in PCI passthrough.

I also checked the port's TX offload capability and, where supported, set txmode.offloads |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS to see whether VMXNET3 supports it, and it does.

Why does the RTE_ETH_TX_OFFLOAD_MULTI_SEGS bit have to be set explicitly for the physical NIC, but not for the VMware VMXNET3 vNIC?

Thank you!

Regards,
Ed


From: Sanford, Robert <rsanford@akamai.com>
Sent: Friday, July 26, 2024 5:52 PM
To: Lombardo, Ed <Ed.Lombardo@netscout.com>; users@dpdk.org
Subject: Re: prepend mbuf to another mbuf

Did you try to set DEV_TX_OFFLOAD_MULTI_SEGS in txmode.offloads?

Regards,
Robert


From: "Lombardo, Ed" <Ed.Lombardo@netscout.com<mailto:Ed.Lombardo@netscout.com>>
Date: Friday, July 26, 2024 at 3:30 PM
To: "users@dpdk.org<mailto:users@dpdk.org>" <users@dpdk.org<mailto:users@dpdk.org>>
Subject: prepend mbuf to another mbuf

Hi,
I have an issue with retransmitting a received packet with encapsulation headers prepended to the original packet when using an E810 NIC for transmit.

The received packet is stored in one or more mbufs.  To do the encapsulation I acquire an mbuf from the free pool; this new mbuf is where I add the L2 header, IPv4 header and GRE header.  I update the reference counts with rte_mbuf_refcnt_set(new_mbuf, 1) and rte_mbuf_refcnt_update(mbuf, 1), and then fill in the new mbuf's metadata (nb_segs, pkt_len, port, mbuf->next, etc.) from the original mbuf.

When I test this feature in VMware with a VMXNET3 vNIC it works perfectly: the packet arrives at the endpoint with the encapsulation headers ahead of the original packet.

When I run the same test on an Intel E810, only the first mbuf of data is transmitted; the original packet data in the remaining mbufs is not.

I compared the mbufs just prior to transmit, byte by byte, in the VMXNET3 and E810 cases and they are identical; the code path is the same.
I also tried DPDK 17.11 and DPDK 22.11 with the same results.
The same test also fails with Intel X710 and X540 NICs, in the same way as the E810.

I modified the code to insert the encapsulation headers in the headroom of the original mbuf and it worked perfectly.

What could be the issue with the Intel NICs when transmitting a chain of mbufs, where the first mbuf holds only the L2 header, IPv4 header and GRE header and the remaining mbuf(s) contain the original packet data?

Thanks,
Ed



^ permalink raw reply	[flat|nested] 5+ messages in thread

end of thread, other threads:[~2024-07-29 20:20 UTC | newest]

Thread overview: 5+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2024-07-26 19:29 prepend mbuf to another mbuf Lombardo, Ed
2024-07-26 21:51 ` Sanford, Robert
2024-07-29 19:14   ` Lombardo, Ed
2024-07-29 20:20     ` Sanford, Robert
2024-07-27  4:08 ` Ivan Malov
