Hi Raslan Darawsheh,

 

Thanks for the details; they are very useful. Now I understand that the observed behavior is expected.

 

Best regards,

Parameswaran Krishnamurthy

 

 


From: Raslan Darawsheh <rasland@nvidia.com>
Sent: Monday, March 20, 2023 9:32 PM
To: Krishna, Parameswaran; dev@dpdk.org
Cc: Ori Kam
Subject: RE: MLX5 : Having RTE_FLOW_ACTION_TYPE_PORT_ID flow entry and RTE_FLOW_ACTION_TYPE_QUEUE flow entry in same flow group

 


Hi Krishna,

 

You have two types of tables:

  1. FDB (those are the ones with the transfer attribute set to 1)
  2. NIC (those are the ones with only the ingress attribute set to 1)

 

When you jump from one group to another in the FDB, it will not reach the NIC groups.

It will jump to an FDB table (group 1), which in your case is empty (since you don’t have any flow in that group).

 

The second flow is in the NIC table, which needs another root-table flow (group 0 in the NIC domain) to jump to it.

 

So the order of execution for the tables is as follows:

  1. FDB: group 0 -> … -> group N
  2. NIC: group 0 -> … -> group N

 

If a packet misses in an FDB table, it will be redirected to the NIC table at group 0.

If a packet misses in a NIC table with group > 0, it will be dropped.
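
To make the missing piece concrete, below is a minimal, untested sketch (assuming the DPDK 21.08 rte_flow API; the port id and MAC are placeholders) of the extra NIC-domain root-table rule that jumps from group 0 to group 1, so a packet redirected from the FDB can reach the NIC group 1 rule:

#include <string.h>
#include <rte_ether.h>
#include <rte_flow.h>

/* NIC-domain (ingress, transfer=0) rule in group 0 that jumps to group 1. */
static struct rte_flow *
nic_root_jump(uint16_t port_id, const struct rte_ether_addr *smac,
              struct rte_flow_error *err)
{
        struct rte_flow_attr attr = { .group = 0, .ingress = 1 };
        struct rte_flow_item_eth eth_spec = { 0 }, eth_mask = { 0 };
        struct rte_flow_action_jump jump = { .group = 1 };

        eth_spec.src = *smac;                               /* match source MAC */
        memset(&eth_mask.src, 0xff, sizeof(eth_mask.src));  /* exact-match mask */

        struct rte_flow_item pattern[] = {
                { .type = RTE_FLOW_ITEM_TYPE_ETH, .spec = &eth_spec, .mask = &eth_mask },
                { .type = RTE_FLOW_ITEM_TYPE_END },
        };
        struct rte_flow_action actions[] = {
                { .type = RTE_FLOW_ACTION_TYPE_JUMP, .conf = &jump },
                { .type = RTE_FLOW_ACTION_TYPE_END },
        };
        return rte_flow_create(port_id, &attr, pattern, actions, err);
}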

 

Kindest regards,

Raslan Darawsheh

 

From: Krishna, Parameswaran <Parameswaran.Krishna@dell.com>
Sent: Monday, March 20, 2023 1:31 PM
To: dev@dpdk.org
Cc: Ori Kam <orika@nvidia.com>
Subject: MLX5 : Having RTE_FLOW_ACTION_TYPE_PORT_ID flow entry and RTE_FLOW_ACTION_TYPE_QUEUE flow entry in same flow group

 

Hi Experts,

 

I’m using DPDK 21.08 with an MLX5 NIC. I’m trying to configure rte_flow rules with multiple groups. I’m observing that, under certain circumstances, the jump-to-group action from Group 0 to Group 1 is not working.

 

I installed a flow rule in Group 0 with attribute Transfer=1, matching src-mac ce:25:02:c2:a0:f2 and a VNET_FLOW_ACTION_JUMP_GROUP action to Group 1. This rule seems to have been installed in the FDB table.
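
In rte_flow terms, the rule looks roughly like the sketch below (an approximation only, not the actual code, assuming the jump-group action maps to RTE_FLOW_ACTION_TYPE_JUMP):

#include <string.h>
#include <rte_ether.h>
#include <rte_flow.h>

/* Group 0, transfer=1 -> the rule lands in the FDB table. */
static struct rte_flow *
fdb_group0_jump_to_group1(uint16_t port_id, struct rte_flow_error *err)
{
        static const uint8_t smac[RTE_ETHER_ADDR_LEN] =
                { 0xce, 0x25, 0x02, 0xc2, 0xa0, 0xf2 };
        struct rte_flow_attr attr = { .group = 0, .transfer = 1 };
        struct rte_flow_item_eth eth_spec = { 0 }, eth_mask = { 0 };
        struct rte_flow_action_jump jump = { .group = 1 };

        memcpy(eth_spec.src.addr_bytes, smac, RTE_ETHER_ADDR_LEN);
        memset(eth_mask.src.addr_bytes, 0xff, RTE_ETHER_ADDR_LEN);

        struct rte_flow_item pattern[] = {
                { .type = RTE_FLOW_ITEM_TYPE_ETH, .spec = &eth_spec, .mask = &eth_mask },
                { .type = RTE_FLOW_ITEM_TYPE_END },
        };
        struct rte_flow_action actions[] = {
                { .type = RTE_FLOW_ACTION_TYPE_JUMP, .conf = &jump },
                { .type = RTE_FLOW_ACTION_TYPE_END },
        };
        return rte_flow_create(port_id, &attr, pattern, actions, err);
}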

 

Then, in Group 1, I installed a flow rule with attribute Transfer=0, matching src-mac ce:25:02:c2:a0:f2 and action RTE_FLOW_ACTION_TYPE_QUEUE to queue 0. For RTE_FLOW_ACTION_TYPE_QUEUE, it looks like setting Transfer=0 is mandatory; setting Transfer to 1 reported the error “unsupported action QUEUE”. This rule seems to have been installed in the NIC_RX table.
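
Again only as a rough sketch in rte_flow terms (the same pattern as above; only the attributes and the action differ):

#include <string.h>
#include <rte_ether.h>
#include <rte_flow.h>

/* Group 1, ingress, transfer=0 -> the rule lands in the NIC_RX table. */
static struct rte_flow *
nic_group1_to_queue0(uint16_t port_id, struct rte_flow_error *err)
{
        static const uint8_t smac[RTE_ETHER_ADDR_LEN] =
                { 0xce, 0x25, 0x02, 0xc2, 0xa0, 0xf2 };
        struct rte_flow_attr attr = { .group = 1, .ingress = 1 };
        struct rte_flow_item_eth eth_spec = { 0 }, eth_mask = { 0 };
        struct rte_flow_action_queue queue = { .index = 0 };

        memcpy(eth_spec.src.addr_bytes, smac, RTE_ETHER_ADDR_LEN);
        memset(eth_mask.src.addr_bytes, 0xff, RTE_ETHER_ADDR_LEN);

        struct rte_flow_item pattern[] = {
                { .type = RTE_FLOW_ITEM_TYPE_ETH, .spec = &eth_spec, .mask = &eth_mask },
                { .type = RTE_FLOW_ITEM_TYPE_END },
        };
        struct rte_flow_action actions[] = {
                { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
                { .type = RTE_FLOW_ACTION_TYPE_END },
        };
        return rte_flow_create(port_id, &attr, pattern, actions, err);
}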

 

Now, when I send packets from ce:25:02:c2:a0:f2, they hit the rule in Group 0 but do not hit the rule in Group 1. It looks like the JUMP from the FDB table to the NIC_RX table is not happening.

 

When I installed the JUMP action rule in Group 0 with Transfer=0, the RTE_FLOW_ACTION_TYPE_QUEUE rule entry in Group 1 was hit successfully. With Transfer=0 set for both rules, I guess both rules got installed in the NIC_RX table and the JUMP action worked fine.

 

But the problem now is that I’m unable to get a rule with action RTE_FLOW_ACTION_TYPE_PORT_ID in Group 1 to be hit, because RTE_FLOW_ACTION_TYPE_PORT_ID insists on Transfer=1 and the rule therefore gets installed in the FDB table.
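
The Group 1 PORT_ID rule, again only as a rough sketch in rte_flow terms (the destination port id here is a placeholder parameter):

#include <string.h>
#include <rte_ether.h>
#include <rte_flow.h>

/* Group 1, transfer=1 -> PORT_ID is accepted, but the rule lands in the FDB table. */
static struct rte_flow *
fdb_group1_to_port(uint16_t port_id, uint16_t dst_port_id,
                   struct rte_flow_error *err)
{
        static const uint8_t smac[RTE_ETHER_ADDR_LEN] =
                { 0xce, 0x25, 0x02, 0xc2, 0xa0, 0xf2 };
        struct rte_flow_attr attr = { .group = 1, .transfer = 1 };
        struct rte_flow_item_eth eth_spec = { 0 }, eth_mask = { 0 };
        struct rte_flow_action_port_id dst = { .id = dst_port_id };

        memcpy(eth_spec.src.addr_bytes, smac, RTE_ETHER_ADDR_LEN);
        memset(eth_mask.src.addr_bytes, 0xff, RTE_ETHER_ADDR_LEN);

        struct rte_flow_item pattern[] = {
                { .type = RTE_FLOW_ITEM_TYPE_ETH, .spec = &eth_spec, .mask = &eth_mask },
                { .type = RTE_FLOW_ITEM_TYPE_END },
        };
        struct rte_flow_action actions[] = {
                { .type = RTE_FLOW_ACTION_TYPE_PORT_ID, .conf = &dst },
                { .type = RTE_FLOW_ACTION_TYPE_END },
        };
        return rte_flow_create(port_id, &attr, pattern, actions, err);
}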

 

Is there any way I can have both an RTE_FLOW_ACTION_TYPE_PORT_ID flow entry and an RTE_FLOW_ACTION_TYPE_QUEUE flow entry in Group 1 and get them hit when jumping from Group 0?

 

Any input is highly appreciated. Thanks in advance.

 

root@server:/mlx_steering_dump/sws# python3 mlx_steering_dump_parser.py -p 87 -f /tmp/DpdkDump -t -port 0

domain 0x5702: type: FDB, gvmi: 0x4, support_sw_steering True, dev_name uverbs0, package_version 38.0, flags None, ste_buddies None, mh_buddies None, ptrn_buddies None

     table 0xaaaad5ba9520: level: 1, type: FDB

        matcher 0xaaaad5ada050: priority 2, rx e_anchor 0xf0200015, tx e_anchor 0xf0200017

           mask: smac: 0xffffffffffff, cvlan_tag: 0x1, metadata_reg_c_0: 0xffff0000

           rule 0xaaaad5699610

              match: metadata_reg_c_0: 0x00030000, smac: ce:25:02:c2:a0:f2

              action: FT devx id 0x15, dest_ft 0xaaaad5682c30 & CTR(counter), index 0x8011fd

     table 0xaaaad5ba9110: level: 0, type: ROOT

     table 0xaaaad5682c30: level: 11, type: FDB

domain 0x5700: type: NIC_RX, gvmi: 0x4, support_sw_steering True, dev_name uverbs0, package_version 38.0, flags None, ste_buddies None, mh_buddies None, ptrn_buddies None

     table 0xaaaad550c550: level: 0, type: ROOT

     table 0xaaaad56829a0: level: 10, type: NIC_RX

        matcher 0xaaaad5af1010: priority 2, rx e_anchor 0xf010003a

           mask: smac: 0xffffffffffff, cvlan_tag: 0x1, metadata_reg_c_0: 0xffff0000

           rule 0xaaaad5682bb0

              match: metadata_reg_c_0: 0x00030000, smac: ce:25:02:c2:a0:f2

              action: CTR(counter), index 0x8011fe & DEVX_TIR, ICM addr 0x46f2800014b40

domain 0x5701: type: NIC_TX, gvmi: 0x4, support_sw_steering True, dev_name uverbs0, package_version 38.0, flags None, ste_buddies None, mh_buddies None, ptrn_buddies None

 

Best Regards,

Parameswaran Krishnamurthy

 
