DPDK usage discussions
* mlx5: is GTP encapsulation possible using the rte_flow api?
@ 2024-05-08 15:23 László Molnár
From: László Molnár @ 2024-05-08 15:23 UTC (permalink / raw)
  To: users

Hi All,

I wonder whether it would be possible to implement HW-accelerated GTP
encapsulation (as a first step) using a BlueField-2 NIC and the
rte_flow API.

The encapsulation would need to work between different ports using
hairpin queues.
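For context, the two-port hairpin setup would be launched roughly like
this (a sketch only; the PCI addresses and queue counts are
placeholders, and --hairpin-mode=0x12 requests two-port hairpin with an
explicit Tx rule, as described in the mlx5 PMD guide):

```shell
# Sketch: bind both BlueField-2 ports (placeholder addresses) and
# create one hairpin queue pair between them.
dpdk-testpmd -a 0000:03:00.0 -a 0000:03:00.1 -- -i --rxq=1 --txq=1 --hairpinq=1 --hairpin-mode=0x12
```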

Let's say I already have the rules in dpdk-testpmd that remove the
original ETH header using raw_decap, and add the new ETH/IP/UDP/GTP
using raw_encap.
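For reference, a minimal sketch of such rules in testpmd could look
like the following (the MAC/IP addresses, TEID and queue index are
placeholders; 2152 is the standard GTP-U port):

```shell
# Sketch: template 0 strips the ETH header, template 1 prepends the
# new ETH/IPv4/UDP/GTP headers.
set raw_decap 0 eth / end_set
set raw_encap 1 eth src is 00:11:22:33:44:55 dst is 66:77:88:99:aa:bb / ipv4 src is 10.0.0.1 dst is 10.0.0.2 / udp dst is 2152 / gtp teid is 1234 / end_set
# Apply both on ingress of port 0; with hairpin, the queue index would
# point at a hairpin queue rather than a normal Rx queue.
flow create 0 ingress pattern eth / ipv4 / end actions raw_decap index 0 / raw_encap index 1 / queue index 0 / end
```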

Now I would need to update some header fields (the payload/total
length fields of the IPv4, UDP and GTP headers). I would use
"modify_field op add", but I found no way to access the length field
of the UDP or GTP header.

For example, when I try to access the UDP payload length field by using
"dst_type udp_port_src dst_offset 32" in the "modify_field" action,
I get a "destination offset is too big: Invalid argument" error.
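Concretely, the rejected rule was along these lines (a sketch
reconstructed from the fragments above; the UDP length field sits 32
bits past the start of the source port, which is why dst_offset 32 is
used, and src_value 16 is only a placeholder for the fixed UDP+GTP
header overhead):

```shell
# Sketch of the failing attempt: address the UDP length field relative
# to udp_port_src (offset 32 bits), since 24.03 has no UDP length
# field id in enum rte_flow_field_id.
flow create 0 ingress pattern eth / ipv4 / udp / gtp / end actions modify_field op add dst_type udp_port_src dst_offset 32 src_type value src_value 16 width 16 / queue index 0 / end
```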

This seems to be caused by a check in the mlx5 driver, which is a bit
surprising as the documentation in rte_flow.rst (DPDK version 24.03)
says that:

   ``offset`` allows going past the specified packet field boundary to
   copy a field to an arbitrary place in a packet,

Is this just a driver limitation or an HW limitation? Or could a flex
item solve this?

Thanks, Laszlo

