DPDK usage discussions
From: Taha Sami <taha.sami@dreambigsemi.com>
To: Dariusz Sosnowski <dsosnowski@nvidia.com>,
	"users@dpdk.org" <users@dpdk.org>, Bing Zhao <bingz@nvidia.com>,
	Ori Kam <orika@nvidia.com>
Cc: Asaf Penso <asafp@nvidia.com>, Maayan Kashani <mkashani@nvidia.com>
Subject: Re: connection tracking in mlx5 not working or displaying strange results
Date: Tue, 30 Apr 2024 13:10:47 +0000	[thread overview]
Message-ID: <BY5PR22MB1986559603975057455A2DE6901A2@BY5PR22MB1986.namprd22.prod.outlook.com> (raw)
In-Reply-To: <PH0PR12MB8800AC9B62799EB6A2B21EAAA4162@PH0PR12MB8800.namprd12.prod.outlook.com>


Hi Dariusz,

The fix you provided worked: I was able to create a match pattern. But another problem occurred; let me first walk you through the steps.

I first started testpmd with the following arguments: -l 0-15 -n 4 -a 0000:04:00.0 --file-prefix tsa -- -i --txq=18 --rxq=18


Afterwards, I used the following commands to create a CT object and flow rules.

set conntrack com peer 0 is_orig 1 enable 1 live 1 sack 1 cack 0 last_dir 0 liberal 0 state 1 max_ack_win 7 r_lim 5 last_win 510 last_seq 2632987379 last_ack 2532480967 last_end 2632987379 last_index 0x8

 set conntrack orig scale 7 fin 0 acked 1 unack_data 0 sent_end  2632987379 reply_end 2532480967 max_win 65280 max_ack 2632987379

set conntrack rply scale 7 fin 0 acked 1 unack_data 0 sent_end 2532480967 reply_end 2632987379  max_win 65280 max_ack 2532480967
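
For reference, these three commands build up what should correspond to the following rte_flow_action_conntrack layout (a C sketch assuming the upstream rte_flow.h field names and enum values; the values simply mirror the commands above):

#include <rte_flow.h>

/* Sketch of the CT context that the three "set conntrack" commands above
 * populate; field names and enum values are assumed from rte_flow.h. */
struct rte_flow_action_conntrack ct = {
	.peer_port = 0,                 /* peer 0 */
	.is_original_dir = 1,           /* is_orig 1 */
	.enable = 1,                    /* enable 1 */
	.live_connection = 1,           /* live 1 */
	.selective_ack = 1,             /* sack 1 */
	.challenge_ack_passed = 0,      /* cack 0 */
	.last_direction = 0,            /* last_dir 0 */
	.liberal_mode = 0,              /* liberal 0 */
	.state = RTE_FLOW_CONNTRACK_STATE_ESTABLISHED, /* state 1 */
	.max_ack_window = 7,
	.retransmission_limit = 5,      /* r_lim 5 */
	.last_window = 510,
	.last_index = RTE_FLOW_CONNTRACK_FLAG_ACK,     /* last_index 0x8 */
	.last_seq = 2632987379,
	.last_ack = 2532480967,
	.last_end = 2632987379,
	.original_dir = {               /* set conntrack orig ... */
		.scale = 7, .close_initiated = 0, .last_ack_seen = 1,
		.data_unacked = 0, .sent_end = 2632987379,
		.reply_end = 2532480967, .max_win = 65280,
		.max_ack = 2632987379,
	},
	.reply_dir = {                  /* set conntrack rply ... */
		.scale = 7, .close_initiated = 0, .last_ack_seen = 1,
		.data_unacked = 0, .sent_end = 2532480967,
		.reply_end = 2632987379, .max_win = 65280,
		.max_ack = 2532480967,
	},
};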

 flow indirect_action 0 destroy action_id 0
 flow indirect_action 0 create ingress action conntrack / end
 flow create 0 group 0 ingress pattern eth / ipv4 / tcp dst is 5555 / end actions jump group 3 / end
 flow create 0 group 3 ingress pattern eth / ipv4 / tcp dst is 5555 / end actions indirect 0 / jump group 5 / end

The flow rules were created without any problems. However, when I tried to create a rule using the conntrack pattern item, I received a segmentation fault and the PMD crashed.

flow create 0 group 5 ingress pattern eth / ipv4 / tcp / conntrack mask 2 / end actions queue index 12 / end

I made the following fix:


diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index d434c678c8..e1ccf57fa5 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -3261,7 +3261,7 @@ flow_dv_validate_item_aso_ct(struct rte_eth_dev *dev,
                                          "Only one CT is supported");
        if (!mask)
                mask = &rte_flow_item_conntrack_mask;
-       flags = spec->flags & mask->flags;
+       flags = mask->flags;
        if ((flags & RTE_FLOW_CONNTRACK_PKT_STATE_VALID) &&
            ((flags & RTE_FLOW_CONNTRACK_PKT_STATE_INVALID) ||
             (flags & RTE_FLOW_CONNTRACK_PKT_STATE_BAD) ||


Afterward, I was able to create a conntrack pattern item; the PMD allowed me to insert the flow rules and match on the above mask.

ID    Group Prio  Attr  Rule
0     0     0     i--   ETH IPV4 TCP => JUMP
1     3     0     i--   ETH IPV4 TCP => INDIRECT JUMP
2     5     0     i--   ETH IPV4 TCP CONNTRACK => QUEUE

One thing of note: I was only able to create flow rules with certain masks. For instance, for the masks 0, 5 and 7, the PMD gave me the following error:

testpmd> flow create 0 group 5 ingress pattern eth / ipv4 / tcp / conntrack mask 7 / end actions queue index 12 / end
port_flow_complain(): Caught PMD error type 13 (specific pattern item): Conflict status bits: Invalid argument.
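
For context, the conntrack item's mask selects which state bits to match. Assuming the bit layout of enum rte_flow_conntrack_pkt_state in the upstream rte_flow.h, the values are:

/* Bit positions assumed from the upstream header. */
RTE_FLOW_CONNTRACK_PKT_STATE_VALID    = RTE_BIT32(0), /* 0x1  */
RTE_FLOW_CONNTRACK_PKT_STATE_CHANGED  = RTE_BIT32(1), /* 0x2  */
RTE_FLOW_CONNTRACK_PKT_STATE_INVALID  = RTE_BIT32(2), /* 0x4  */
RTE_FLOW_CONNTRACK_PKT_STATE_BAD      = RTE_BIT32(3), /* 0x8  */
RTE_FLOW_CONNTRACK_PKT_STATE_DISABLED = RTE_BIT32(4), /* 0x10 */

Under that layout, mask 2 selects only CHANGED, while masks 5 and 7 combine VALID with INVALID, which the validation code quoted above rejects as conflicting.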


For the test case that we discussed, I started testpmd using the command:

run -l 0-15 -n 4 -a 0000:04:00.0,dv_flow_en=2 -a 0000:04:00.1,dv_flow_en=2  --file-prefix tsa -- -i  --txq=18 --rxq=18

The rules were:

port stop all
flow configure 0 queues_number 9 queues_size 256 conn_tracks_number 4

flow pattern_template 0 create pattern_template_id 2 relaxed true ingress template tcp / end

flow pattern_template 0 create pattern_template_id 4 relaxed true ingress template eth / ipv4 / tcp  / conntrack  mask 0xffff / end

set conntrack com peer 1 is_orig 1 enable 1 live 1 sack 1 cack 0 last_dir 0 liberal 0 state 1 max_ack_win 7 r_lim 5 last_win 510 last_seq 2632987379 last_ack 2532480967 last_end 2632987379 last_index 0x8


 set conntrack orig scale 7 fin 0 acked 1 unack_data 0 sent_end  2632987379 reply_end 2532480967 max_win 65280 max_ack 2632987379


 set conntrack rply scale 7 fin 0 acked 1 unack_data 0 sent_end 2532480967 reply_end 2632987379  max_win 65280 max_ack 2532480967

flow queue 0 indirect_action 0 create action_id 9 ingress postpone no action conntrack / end
flow push 0 queue 0
flow pull 0 queue 0
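
For reference, these three commands correspond to the asynchronous flow API roughly as follows (a sketch; error handling is trimmed, and ct is the rte_flow_action_conntrack context sketched earlier):

#include <rte_flow.h>

/* Sketch: create the indirect conntrack action on flow queue 0, push the
 * enqueued op to hardware, then drain its completion. */
static struct rte_flow_action_handle *
create_ct_handle(uint16_t port_id, const struct rte_flow_action_conntrack *ct)
{
	struct rte_flow_error error;
	struct rte_flow_op_result res;
	const struct rte_flow_op_attr op_attr = { .postpone = 0 };
	const struct rte_flow_indir_action_conf conf = { .ingress = 1 };
	const struct rte_flow_action action = {
		.type = RTE_FLOW_ACTION_TYPE_CONNTRACK,
		.conf = ct,
	};
	struct rte_flow_action_handle *handle;

	handle = rte_flow_async_action_handle_create(port_id, 0, &op_attr,
						     &conf, &action,
						     NULL, &error);
	rte_flow_push(port_id, 0, &error);
	while (rte_flow_pull(port_id, 0, &res, 1, &error) == 0)
		; /* spin until the single completion arrives (sketch only) */
	return handle;
}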


flow actions_template 0 create ingress actions_template_id 1 template jump group 1 / end mask jump group 1 / end
flow  actions_template 0 create ingress actions_template_id 2 template indirect 9 / jump group 5 / end mask conntrack / jump group 5 / end

flow actions_template 0 create ingress actions_template_id 4 template queue   / end mask queue  / end


flow template_table 0 create table_id 1 group 0 ingress rules_number 10 pattern_template 2 actions_template 1
flow template_table 0 create table_id 2 group 1 ingress rules_number 10 pattern_template 2 actions_template 2

flow template_table 0 create table_id 4 group 5 ingress rules_number 10 pattern_template 4 actions_template 4

port start all


flow queue 0 create 0 template_table 1 pattern_template 0 actions_template 0 postpone 0 pattern tcp / end actions jump group 1 / end
flow push 0 queue 0
flow pull 0 queue 0

flow queue 0 create 0 template_table 2 pattern_template 0 actions_template 0 postpone 0 pattern tcp  /  end actions indirect 9 / jump group 5 / end
flow push 0 queue 0
flow pull 0 queue 0

The above rules were enqueued and pulled successfully.
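
For reference, each queue/push/pull triplet above corresponds to rte_flow_async_create on the given template table (a sketch; table, pattern and actions stand in for the objects built by the commands above, and both template indexes are 0 as in the commands):

/* Sketch: enqueue one rule on flow queue 0, then push and drain it. */
static struct rte_flow *
enqueue_rule(uint16_t port_id, struct rte_flow_template_table *table,
	     const struct rte_flow_item pattern[],
	     const struct rte_flow_action actions[])
{
	struct rte_flow_error error;
	struct rte_flow_op_result res;
	const struct rte_flow_op_attr op_attr = { .postpone = 0 };
	struct rte_flow *flow;

	flow = rte_flow_async_create(port_id, 0, &op_attr, table,
				     pattern, 0, actions, 0, NULL, &error);
	rte_flow_push(port_id, 0, &error);
	while (rte_flow_pull(port_id, 0, &res, 1, &error) == 0)
		; /* wait for the single completion (sketch only) */
	return flow;
}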

When I tried to create a rule matching on the conntrack pattern, it again gave me a segmentation fault. The rule was:

flow queue 0 create 0 template_table 4 pattern_template 0 actions_template 0 postpone 0 pattern eth / ipv4 / tcp / conntrack mask 8 / end actions queue index 11 / end

I was able to work around this temporarily by applying the following change:

diff --git a/drivers/net/mlx5/hws/mlx5dr_definer.c b/drivers/net/mlx5/hws/mlx5dr_definer.c
index 35a2ed2048..1ba5d727e7 100644
--- a/drivers/net/mlx5/hws/mlx5dr_definer.c
+++ b/drivers/net/mlx5/hws/mlx5dr_definer.c
@@ -2163,7 +2163,7 @@ mlx5dr_definer_conv_item_conntrack(struct mlx5dr_definer_conv_data *cd,

        fc->item_idx = item_idx;
        fc->tag_mask_set = &mlx5dr_definer_conntrack_mask;
-       fc->tag_set = &mlx5dr_definer_conntrack_tag;
+       //fc->tag_set = &mlx5dr_definer_conntrack_tag;

        return 0;
 }
@@ -3564,7 +3564,7 @@ void mlx5dr_definer_create_tag(const struct rte_flow_item *items,
        uint32_t i;

        for (i = 0; i < fc_sz; i++) {
-               fc->tag_set(fc, items[fc->item_idx].spec, tag);
+               fc->tag_set(fc, items[fc->item_idx].mask, tag);
                fc++;
        }
 }
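
Note that the second hunk swaps spec for mask for every item type, so ordinary spec-based rules would then match on their masks. If the root cause is a NULL spec for mask-only items, a narrower guard might be (a sketch, untested):

	/* Sketch: fall back to the mask only when the rule supplied no spec,
	 * leaving spec-based matching untouched for all other items. */
	for (i = 0; i < fc_sz; i++) {
		const void *src = items[fc->item_idx].spec ?
				  items[fc->item_idx].spec :
				  items[fc->item_idx].mask;
		fc->tag_set(fc, src, tag);
		fc++;
	}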

The PMD then allowed me to enqueue the above rules using the conntrack mask item; the packet matched and was redirected to the intended queue.

From my understanding, there might be a bug in the PMD regarding the handling of mask items for conntrack.

Scapy packet:

 sendp( Ether(dst="B8:CE:F6:D2:CD:22",src="aa:bb:cc:dd:ee:ff")/IP()/TCP(dport=5555,flags="SA"), iface="enp134s0f0",count=1)




Regards,
Taha


________________________________
From: Dariusz Sosnowski <dsosnowski@nvidia.com>
Sent: Friday, April 26, 2024 11:57 PM
To: Taha Sami <taha.sami@dreambigsemi.com>; users@dpdk.org <users@dpdk.org>; Bing Zhao <bingz@nvidia.com>; Ori Kam <orika@nvidia.com>
Cc: Asaf Penso <asafp@nvidia.com>; Maayan Kashani <mkashani@nvidia.com>
Subject: RE: connection tracking in mlx5 not working or displaying strange results

Hi,

I've reviewed the cases and I wanted to provide some clarifications about the expected behavior.

There are a few issues I see in all cases:

- There's a bug in the handling of the "conntrack" item in testpmd: it does not add the CONNTRACK item correctly to the pattern list.
  Could you please apply the diff from [1] on testpmd source and recompile?
  We will provide an official fix soon.
- Even with the fix from [1], the pattern template with the conntrack item is specified incorrectly, because only the spec is given.
  Pattern templates define the fields on which matching is done, and specs are ignored.
  In the case of conntrack, the mask is explicitly required.
  As a result, no matching on conntrack state will be done for any rule using this template.
  Conntrack state matching must be configured like so:

        flow pattern_template 0 create ingress relaxed yes pattern_template_id 10 template conntrack mask 0xffff / end

- Flow rules which use the indirect conntrack action do not actually use it, because the actions templates of the corresponding tables
  do not specify any indirect actions. Because of that, during flow creation, only jump and queue are applied.
  You can specify the conntrack action in an actions template like so (see also the C sketch after this list):

        flow actions_template 0 create ingress actions_template_id 10
                template indirect 9 / jump group 5 / end
                mask conntrack / jump group 5 / end

  Specifying conntrack as the action type in the mask is critical for the mlx5 PMD,
  because the PMD requires that the type of an indirect action be known at
  template creation time.
- The connection tracking object created with these testpmd commands is in a state
  where non-zero sequence numbers are expected, but the generated packets have seq=ack=0.
  If such a packet passed through the conntrack object, its state would be INVALID (== 4),
  because it would be considered out of window.
- Regarding case 4: since only the conntrack spec is specified in the pattern template,
  the flow rule will not match on conntrack state.
  If the mask were specified and conntrack state were actually matched, the behavior would be undefined,
  since the conntrack action was not executed on the packet.
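
To make the two template requirements above concrete, a rough C equivalent could be the following (a sketch assuming the standard rte_flow template API; ct_handle stands for the indirect action handle created earlier):

#include <rte_flow.h>

/* Sketch: a pattern template matching only on conntrack state (mask-only,
 * specs ignored), and an actions template whose mask names CONNTRACK so the
 * PMD knows the indirect action's type at template creation time. */
static const struct rte_flow_item_conntrack ct_item_mask = { .flags = 0xffff };
static const struct rte_flow_item ct_pattern[] = {
	{ .type = RTE_FLOW_ITEM_TYPE_CONNTRACK, .mask = &ct_item_mask },
	{ .type = RTE_FLOW_ITEM_TYPE_END },
};
static const struct rte_flow_action_jump jump5 = { .group = 5 };

static void
create_ct_templates(uint16_t port_id, struct rte_flow_action_handle *ct_handle)
{
	const struct rte_flow_pattern_template_attr pt_attr = {
		.relaxed_matching = 1, .ingress = 1,
	};
	const struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_INDIRECT, .conf = ct_handle },
		{ .type = RTE_FLOW_ACTION_TYPE_JUMP, .conf = &jump5 },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};
	const struct rte_flow_action masks[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_CONNTRACK }, /* fixes the type */
		{ .type = RTE_FLOW_ACTION_TYPE_JUMP, .conf = &jump5 },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};
	const struct rte_flow_actions_template_attr at_attr = { .ingress = 1 };
	struct rte_flow_error error;

	rte_flow_pattern_template_create(port_id, &pt_attr, ct_pattern, &error);
	rte_flow_actions_template_create(port_id, &at_attr, actions, masks,
					 &error);
}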

Could you please apply the pattern and actions template changes
as well as the diff from [1], and test?

Please let me know if you have any questions.

Best regards,
Dariusz Sosnowski

---

[1] Quick fix for testpmd conntrack issue:

diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 60ee9337cf..c8d328fb90 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -5797,9 +5797,11 @@ static const struct token token_list[] = {
    [ITEM_CONNTRACK] = {
        .name = "conntrack",
        .help = "conntrack state",
+       .priv = PRIV_ITEM(CONNTRACK, sizeof(struct rte_flow_item_conntrack)),
        .next = NEXT(NEXT_ENTRY(ITEM_NEXT), NEXT_ENTRY(COMMON_UNSIGNED),
                 item_param),
        .args = ARGS(ARGS_ENTRY(struct rte_flow_item_conntrack, flags)),
+       .call = parse_vc,
    },
    [ITEM_PORT_REPRESENTOR] = {
        .name = "port_representor",

> From: Taha Sami <taha.sami@dreambigsemi.com>
> Sent: Friday, April 19, 2024 15:10
> To: users@dpdk.org; Bing Zhao <bingz@nvidia.com>; Ori Kam <orika@nvidia.com>; Dariusz Sosnowski <dsosnowski@nvidia.com>
> Cc: Asaf Penso <asafp@nvidia.com>
> Subject: connection tracking in mlx5 not working or displaying strange results
>
> Test Case 1: Working Fine
>
> port stop all
> flow configure 0 queues_number 9 queues_size 256 conn_tracks_number 4
>
> flow pattern_template 0 create pattern_template_id 2 relaxed true ingress template tcp / end
>
> flow pattern_template 0 create pattern_template_id 4 relaxed true ingress template eth / ipv4 / tcp  / conntrack  spec 1 / end
>
> flow actions_template 0 create actions_template_id 1 template jump group 1 / end mask jump group 1 / end
> flow actions_template 0 create actions_template_id 2 template jump group 5 / end mask jump group 5 / end
> flow actions_template 0 create actions_template_id 4 template queue index 4 / end mask queue index 4 / end
>
> flow template_table 0 create table_id 1 group 0 ingress rules_number 10 pattern_template 2 actions_template 1
> flow template_table 0 create table_id 2 group 1 ingress rules_number 10 pattern_template 2 actions_template 2
> flow template_table 0 create table_id 4 group 5 ingress rules_number 10 pattern_template 4 actions_template 4
>
> port start all
>
> set conntrack com peer 0 is_orig 1 enable 1 live 1 sack 1 cack 0 last_dir 1 liberal 0 state 1 max_ack_win 7 r_lim 5 last_win 510 last_seq 2 last_ack 1 last_end 2 last_index 0x2
>
> set conntrack orig scale 7 fin 0 acked 1 unack_data 0 sent_end 2 reply_end 1 max_win 8192  max_ack 2
> set conntrack rply scale 7 fin 0 acked 1 unack_data 0 sent_end 2 reply_end 1 max_win 8192 max_ack 1
>
> flow queue 0 indirect_action 0 create action_id 9 ingress postpone no action conntrack / end
> flow push 0 queue 0
> flow pull 0 queue 0
>
> flow queue 0 create 0 template_table 1 pattern_template 0 actions_template 0 postpone 0 pattern tcp / end actions jump group 1 / end
> flow push 0 queue 0
> flow pull 0 queue 0
>
> flow queue 0 create 0 template_table 2 pattern_template 0 actions_template 0 postpone 0 pattern tcp  /  end actions indirect 9 / jump group 5 / end
> flow push 0 queue 0
> flow pull 0 queue 0
>
> flow queue 0 create 0 template_table 4 pattern_template 0 actions_template 0 postpone 0 pattern eth / ipv4 / tcp / conntrack spec 1 / end actions queue index 4 / end
> flow push 0 queue 0
> flow pull 0 queue 0
>
> Result:
> It was observed that each packet was sent to the expected queue.
>
> Scapy Packet:
> sendp( Ether(dst="B8:CE:F6:D2:CD:22",src="aa:bb:cc:dd:ee:ff")/IP()/TCP(dport=5555), iface="enp134s0f0",count=1) --->> the packet was redirected to queue 4 by flow rule 3
>
> sendp( Ether(dst="B8:CE:F6:D2:CD:22",src="aa:bb:cc:dd:ee:ff")/IP()/TCP(dport=1234), iface="enp134s0f0",count=1)  --->> the packet was redirected to queue 6 by flow rule 2
>
> Test Case 2:
> This test case showed strange behavior: keeping everything similar to the above test case and only changing the pattern template and the rule involving CT, the match is not found and the packets are dropped.
>
> Note: I tested this case with the same packets.
>
> The change we made was to match exactly on the TCP dst port. The pattern template changed from:
>
> flow pattern_template 0 create pattern_template_id 4 relaxed true ingress template eth / ipv4 / tcp  / conntrack  spec 1 / end
>
> to:
> flow pattern_template 0 create pattern_template_id 4 relaxed true ingress template eth / ipv4 / tcp  dst is 5555  / conntrack  spec 1 / end
>
> For the flow rule, from:
> flow queue 0 create 0 template_table 4 pattern_template 0 actions_template 0 postpone 0 pattern eth / ipv4 / tcp / conntrack spec 1 / end actions queue index 4 / end
> to:
> flow queue 0 create 0 template_table 4 pattern_template 0 actions_template 0 postpone 0 pattern eth / ipv4 / tcp dst is 5555 / conntrack spec 1 / end actions queue index 4 / end
>
> Result:
> The packets were sent to queue 6 but not to queue 4.
>
> Scapy Packet:
> sendp( Ether(dst="B8:CE:F6:D2:CD:22",src="aa:bb:cc:dd:ee:ff")/IP()/TCP(dport=5555), iface="enp134s0f0",count=1) -- was not redirected to queue 4 for flow rule 3
>
> sendp( Ether(dst="B8:CE:F6:D2:CD:22",src="aa:bb:cc:dd:ee:ff")/IP()/TCP(dport=1234), iface="enp134s0f0",count=1) -- was redirected to queue 6 for flow rule 2.
>
>
>
> Test Case 3:
> I kept the same configuration as test 1, except for some minor changes. The pattern template changed from:
>
> flow pattern_template 0 create pattern_template_id 4 relaxed true ingress template eth / ipv4 / tcp  / conntrack  spec 1 / end
> to:
> flow pattern_template 0 create pattern_template_id 4 relaxed true ingress template eth / ipv4 / tcp  / conntrack  is 1 / end
>
> For the flow rule, from:
> flow queue 0 create 0 template_table 4 pattern_template 0 actions_template 0 postpone 0 pattern eth / ipv4 / tcp / conntrack spec 1 / end actions queue index 4 / end
> to:
> flow queue 0 create 0 template_table 4 pattern_template 0 actions_template 0 postpone 0 pattern eth / ipv4 / tcp dst is 5555 / conntrack is 1 / end actions queue index 4 / end
>
>
>
> Result:
> The packets were sent to queue 6 but not to queue 4.
>
> Scapy Packet:
> sendp( Ether(dst="B8:CE:F6:D2:CD:22",src="aa:bb:cc:dd:ee:ff")/IP()/TCP(dport=5555), iface="enp134s0f0",count=1) -- was not redirected to queue 4 for flow rule 3
>
> sendp( Ether(dst="B8:CE:F6:D2:CD:22",src="aa:bb:cc:dd:ee:ff")/IP()/TCP(dport=1234), iface="enp134s0f0",count=1) -- was redirected to queue 6 for flow rule 2.
>
> Test Case 4:
>
> In this test case, I removed the CT object, the indirect action, and the CT-related configuration.
>
> NOTE: please observe that there is no CT object, no indirect action, and no conn_tracks_number setting in the following test.
>
> port stop all
> flow configure 0 queues_number 9 queues_size 256
>
> flow indirect_action 0 destroy action_id 0
>
> flow indirect_action 0 destroy action_id 9
>
>
> flow pattern_template 0 create pattern_template_id 2 relaxed true ingress template tcp / end
> flow pattern_template 0 create pattern_template_id 3 relaxed true ingress template eth / ipv4 / tcp dst is 1234 / end
> flow pattern_template 0 create pattern_template_id 4 relaxed true ingress template eth / ipv4 / tcp / conntrack spec 65535 / end
>
>
>
> flow actions_template 0 create actions_template_id 1 template jump group 1 / end mask jump group 1 / end
> flow actions_template 0 create actions_template_id 2 template jump group 5 / end mask jump group 5 / end
> flow actions_template 0 create actions_template_id 3 template queue index 6 / end mask queue index 6 / end
> flow actions_template 0 create actions_template_id 4 template queue index 4 / end mask queue index 4 / end
>
>
> flow template_table 0 create table_id 1 group 0 ingress rules_number 10 pattern_template 2 actions_template 1
> flow template_table 0 create table_id 2 group 1 ingress rules_number 10 pattern_template 2 actions_template 2
> flow template_table 0 create table_id 3 group 5 ingress rules_number 10 pattern_template 3 actions_template 3
> flow template_table 0 create table_id 4 group 5 ingress rules_number 10 pattern_template 4 actions_template 4
>
> port start all
>
>
>
> flow queue 0 create 0 template_table 1 pattern_template 0 actions_template 0 postpone 0 pattern tcp / end actions jump group 1 / end
> flow push 0 queue 0
> flow pull 0 queue 0
>
> flow queue 0 create 0 template_table 2 pattern_template 0 actions_template 0 postpone 0 pattern tcp  /  end actions jump group 5 / end
> flow push 0 queue 0
> flow pull 0 queue 0
>
> flow queue 0 create 0 template_table 3 pattern_template 0 actions_template 0 postpone 0 pattern eth / ipv4 / tcp dst is 1234 / end actions queue index 6 / end
> flow push 0 queue 0
> flow pull 0 queue 0
>
> flow queue 0 create 0 template_table 4 pattern_template 0 actions_template 0 postpone 0 pattern eth / ipv4 / tcp  / conntrack spec 65535 / end actions queue index 4 / end
> flow push 0 queue 0
> flow pull 0 queue 0
>
>
> set verbose 3
> start
>
>
> Result:
> The packets were sent to queue 6 and queue 4.
> This shouldn't be the case, since no CT object was created for the rules and there was no indirect action.
>
> Scapy Packet:
> sendp( Ether(dst="B8:CE:F6:D2:CD:22",src="aa:bb:cc:dd:ee:ff")/IP()/TCP(dport=5555), iface="enp134s0f0",count=1) -- was redirected to queue 4 for the conntrack flow rule
>
> sendp( Ether(dst="B8:CE:F6:D2:CD:22",src="aa:bb:cc:dd:ee:ff")/IP()/TCP(dport=1234), iface="enp134s0f0",count=1) -- was redirected to queue 6 for flow rule 2.
>
> Summary:
>
> • Test case 1 worked as intended, and packets were redirected to the desired queues.
> • Test cases 2 and 3 did not work as intended when matching on the exact TCP dst port and when the conntrack "is" keyword was used.
> • Test case 4 sent the packets to their desired queues, but it shouldn't have, because no CT context and no indirect action were created.
>
> Regards,
> Taha
>
