DPDK patches and discussions
* [PATCH] net/mlx5: increase number of supported DV sub-flows
@ 2024-10-27 13:25 Gregory Etelson
  0 siblings, 0 replies; only message in thread
From: Gregory Etelson @ 2024-10-27 13:25 UTC (permalink / raw)
  To: dev
  Cc: getelson,
	rasland, Dariusz Sosnowski, Viacheslav Ovsiienko, Bing Zhao,
	Ori Kam, Suanming Mou, Matan Azrad

A testpmd example that cannot work with the existing number of DV
sub-flows:

dpdk-testpmd  -a PCI,dv_xmeta_en=1,l3_vxlan_en=1,dv_flow_en=1 -- \
              -i  --nb-cores=4  --rxq=5 --txq=5

set sample_actions 1 mark id 43704 / \
 rss queues 3 0 1 1 end types ipv4 ipv4-other udp tcp ipv4-udp end / \
 end

flow create 0 priority 15 group 271 ingress \
 pattern mark id spec 16777184 id mask 0xffffff / end \
 actions sample ratio 1 index 1 / queue index 0 / end

Increase the number of supported DV sub-flows to 64.
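For context: the sampled flow above is split internally into one device
sub-flow per expanded action combination, and the split fails once the
fixed per-workspace table is exhausted. A minimal sketch (not the actual
mlx5 code; the workspace layout here is a simplified stand-in) of a
fixed-size sub-flow table that rejects allocations past the limit:

```c
#include <assert.h>

/* Same value the patch raises the limit to. */
#define MLX5_NUM_MAX_DEV_FLOWS 64

/* Simplified stand-in for the driver's per-flow workspace. */
struct dev_flow_workspace {
	int flow_idx;                      /* next free sub-flow slot */
	int flows[MLX5_NUM_MAX_DEV_FLOWS]; /* per-sub-flow handles */
};

/* Return a free slot index, or -1 when the table is full. */
static int
subflow_alloc(struct dev_flow_workspace *ws)
{
	if (ws->flow_idx >= MLX5_NUM_MAX_DEV_FLOWS)
		return -1;
	return ws->flow_idx++;
}
```

With the old limit of 32, a flow whose action expansion needs more
slots than that hits the equivalent of the -1 path above and the flow
creation fails.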

Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
---
 drivers/net/mlx5/mlx5_flow.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index db56ae051d..9a8eccdd25 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -974,7 +974,7 @@ struct mlx5_flow_verbs_workspace {
 #define MLX5_SCALE_JUMP_FLOW_GROUP_BIT 1
 
 /** Maximal number of device sub-flows supported. */
-#define MLX5_NUM_MAX_DEV_FLOWS 32
+#define MLX5_NUM_MAX_DEV_FLOWS 64
 
 /**
  * tunnel offload rules type
-- 
2.43.0

