From: Dariusz Sosnowski <dsosnowski@nvidia.com>
To: Taha Sami <taha.sami@dreambigsemi.com>,
	"users@dpdk.org" <users@dpdk.org>,  Bing Zhao <bingz@nvidia.com>,
	Ori Kam <orika@nvidia.com>
Cc: Asaf Penso <asafp@nvidia.com>, Maayan Kashani <mkashani@nvidia.com>
Subject: RE: connection tracking in mlx5 not working or displaying strange results
Date: Fri, 26 Apr 2024 18:57:59 +0000
Message-ID: <PH0PR12MB8800AC9B62799EB6A2B21EAAA4162@PH0PR12MB8800.namprd12.prod.outlook.com>
In-Reply-To: <BY5PR22MB198628034342226DB1474B06900D2@BY5PR22MB1986.namprd22.prod.outlook.com>

Hi,

I've reviewed the cases and would like to provide some clarifications about the expected behavior.

There are a few issues I see in all cases:

- There's a bug in the handling of the "conntrack" item in testpmd: testpmd does not add the CONNTRACK item to the pattern list correctly.
  Could you please apply the diff from [1] on testpmd source and recompile?
  We will provide an official fix soon.
- Even with the fix from [1], the pattern template with the conntrack item is specified incorrectly, because only the spec is provided.
  Pattern templates define the fields on which matching is done through masks, and specs are ignored.
  In the case of conntrack, an explicit mask is required.
  As a result, no matching on conntrack state will be done for any rule using this template.
  conntrack state matching must be configured like so (see also the API-level sketch after this list):

	flow pattern_template 0 create ingress relaxed yes pattern_template_id 10 template conntrack mask 0xffff / end

- Flow rules which use the indirect conntrack action do not actually use it, because the actions templates of the corresponding tables
  do not specify any indirect actions. Because of that, during flow creation, only jump and queue are applied.
  You can specify the conntrack action in an actions template like so:

	flow actions_template 0 create ingress actions_template_id 10
		template indirect 9 / jump group 5 / end
		mask conntrack / jump group 5 / end

  Specifying conntrack as the action type in the mask is critical for the mlx5 PMD,
  because the PMD requires that the type of an indirect action be known at
  template creation time.
- Connection tracking objects created with these testpmd commands are in a state
  where non-zero sequence numbers are expected, but the generated packets have seq=ack=0.
  If such a packet passed through the conntrack object, the packet state would be INVALID (==4),
  because the packet would be considered out of window.
- Regarding case 4: since only the conntrack spec is specified in the pattern template,
  the flow rule will not match on conntrack state.
  If the mask were specified and the conntrack state were actually matched, the behavior would be undefined,
  since the conntrack action was not executed on the packet.
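
For reference, below is a rough, untested sketch of what the two corrected
testpmd commands above map to at the rte_flow template API level. The function
name and the concrete field values are illustrative only, error handling is
omitted, and the mapping reflects my reading of how testpmd translates these
commands; take it as a sketch, not a drop-in implementation:

	#include <rte_flow.h>

	static int
	create_ct_templates(uint16_t port_id)
	{
		struct rte_flow_error error;

		/* Pattern template: matching fields are defined by the MASK;
		 * specs are ignored here, so the conntrack item must carry
		 * a non-zero mask. */
		const struct rte_flow_item_conntrack ct_mask = { .flags = 0xffff };
		const struct rte_flow_item pattern[] = {
			{
				.type = RTE_FLOW_ITEM_TYPE_CONNTRACK,
				.mask = &ct_mask,
			},
			{ .type = RTE_FLOW_ITEM_TYPE_END },
		};
		const struct rte_flow_pattern_template_attr pt_attr = {
			.relaxed_matching = 1,
			.ingress = 1,
		};
		struct rte_flow_pattern_template *pt =
			rte_flow_pattern_template_create(port_id, &pt_attr,
							 pattern, &error);

		/* Actions template: the action slot is INDIRECT, while the
		 * corresponding mask entry uses CONNTRACK as its type, so the
		 * PMD knows at template creation time which kind of indirect
		 * action will be plugged in at rule creation. */
		const struct rte_flow_action_jump jump = { .group = 5 };
		const struct rte_flow_action actions[] = {
			{ .type = RTE_FLOW_ACTION_TYPE_INDIRECT },
			{ .type = RTE_FLOW_ACTION_TYPE_JUMP, .conf = &jump },
			{ .type = RTE_FLOW_ACTION_TYPE_END },
		};
		const struct rte_flow_action masks[] = {
			{ .type = RTE_FLOW_ACTION_TYPE_CONNTRACK },
			{ .type = RTE_FLOW_ACTION_TYPE_JUMP, .conf = &jump },
			{ .type = RTE_FLOW_ACTION_TYPE_END },
		};
		const struct rte_flow_actions_template_attr at_attr = {
			.ingress = 1,
		};
		struct rte_flow_actions_template *at =
			rte_flow_actions_template_create(port_id, &at_attr,
							 actions, masks, &error);

		return (pt != NULL && at != NULL) ? 0 : -1;
	}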

Could you please apply the pattern and actions template changes,
as well as the diff from [1], and retest?

Please let me know if you have any questions.

Best regards,
Dariusz Sosnowski

---

[1] Quick fix for testpmd conntrack issue:

diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 60ee9337cf..c8d328fb90 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -5797,9 +5797,11 @@ static const struct token token_list[] = {
    [ITEM_CONNTRACK] = {
        .name = "conntrack",
        .help = "conntrack state",
+       .priv = PRIV_ITEM(CONNTRACK, sizeof(struct rte_flow_item_conntrack)),
        .next = NEXT(NEXT_ENTRY(ITEM_NEXT), NEXT_ENTRY(COMMON_UNSIGNED),
                 item_param),
        .args = ARGS(ARGS_ENTRY(struct rte_flow_item_conntrack, flags)),
+       .call = parse_vc,
    },
    [ITEM_PORT_REPRESENTOR] = {
        .name = "port_representor",

> From: Taha Sami <taha.sami@dreambigsemi.com> 
> Sent: Friday, April 19, 2024 15:10
> To: users@dpdk.org; Bing Zhao <bingz@nvidia.com>; Ori Kam <orika@nvidia.com>; Dariusz Sosnowski <dsosnowski@nvidia.com>
> Cc: Asaf Penso <asafp@nvidia.com>
> Subject: connection tracking in mlx5 not working or displaying strange results
> 
> Test Case 1: Working Fine
> 
> port stop all
> flow configure 0 queues_number 9 queues_size 256 conn_tracks_number 4
> 
> flow pattern_template 0 create pattern_template_id 2 relaxed true ingress template tcp / end
> 
> flow pattern_template 0 create pattern_template_id 4 relaxed true ingress template eth / ipv4 / tcp  / conntrack  spec 1 / end
> 
> flow actions_template 0 create actions_template_id 1 template jump group 1 / end mask jump group 1 / end
> flow actions_template 0 create actions_template_id 2 template jump group 5 / end mask jump group 5 / end
> flow actions_template 0 create actions_template_id 4 template queue index 4 / end mask queue index 4 / end
> 
> flow template_table 0 create table_id 1 group 0 ingress rules_number 10 pattern_template 2 actions_template 1
> flow template_table 0 create table_id 2 group 1 ingress rules_number 10 pattern_template 2 actions_template 2
> flow template_table 0 create table_id 4 group 5 ingress rules_number 10 pattern_template 4 actions_template 4
> 
> port start all
> 
> set conntrack com peer 0 is_orig 1 enable 1 live 1 sack 1 cack 0 last_dir 1 liberal 0 state 1 max_ack_win 7 r_lim 5 last_win 510 last_seq 2 last_ack 1 last_end 2 last_index 0x2
> 
> set conntrack orig scale 7 fin 0 acked 1 unack_data 0 sent_end 2 reply_end 1 max_win 8192  max_ack 2
> set conntrack rply scale 7 fin 0 acked 1 unack_data 0 sent_end 2 reply_end 1 max_win 8192 max_ack 1
> 
> flow queue 0 indirect_action 0 create action_id 9 ingress postpone no action conntrack / end
> flow push 0 queue 0
> flow pull 0 queue 0
> 
> flow queue 0 create 0 template_table 1 pattern_template 0 actions_template 0 postpone 0 pattern tcp / end actions jump group 1 / end
> flow push 0 queue 0
> flow pull 0 queue 0
> 
> flow queue 0 create 0 template_table 2 pattern_template 0 actions_template 0 postpone 0 pattern tcp  /  end actions indirect 9 / jump group 5 / end 
> flow push 0 queue 0
> flow pull 0 queue 0
> 
> flow queue 0 create 0 template_table 4 pattern_template 0 actions_template 0 postpone 0 pattern eth / ipv4 / tcp / conntrack spec 1 / end actions queue index 4 / end
> flow push 0 queue 0
> flow pull 0 queue 0
> 
> Result:
> The packets were sent to the right queues when the matching packets were sent.
> 
> Scapy Packet:
> sendp( Ether(dst="B8:CE:F6:D2:CD:22",src="aa:bb:cc:dd:ee:ff")/IP()/TCP(dport=5555), iface="enp134s0f0",count=1) --->> redirected to queue 4 by flow rule 3
> 
> sendp( Ether(dst="B8:CE:F6:D2:CD:22",src="aa:bb:cc:dd:ee:ff")/IP()/TCP(dport=1234), iface="enp134s0f0",count=1) --->> redirected to queue 6 by flow rule 2
> 
> Test Case 2:
> This test case shows strange behavior: we keep everything the same as in the test case above and only change the pattern template and the rule involving ct, yet the match is not found and the packets are dropped.
> 
> Note: I tested this case with the same packets.
> 
> The change we made was to match exactly on the TCP dst port:
> 
> Old pattern template:
> flow pattern_template 0 create pattern_template_id 4 relaxed true ingress template eth / ipv4 / tcp / conntrack spec 1 / end
> 
> New pattern template:
> flow pattern_template 0 create pattern_template_id 4 relaxed true ingress template eth / ipv4 / tcp dst is 5555 / conntrack spec 1 / end
> 
> Old flow rule:
> flow queue 0 create 0 template_table 4 pattern_template 0 actions_template 0 postpone 0 pattern eth / ipv4 / tcp / conntrack spec 1 / end actions queue index 4 / end
> 
> New flow rule:
> flow queue 0 create 0 template_table 4 pattern_template 0 actions_template 0 postpone 0 pattern eth / ipv4 / tcp dst is 5555 / conntrack spec 1 / end actions queue index 4 / end
> 
> Result:
> The packets were sent to queue 6 but not to queue 4.
> 
> Scapy Packet:
> sendp( Ether(dst="B8:CE:F6:D2:CD:22",src="aa:bb:cc:dd:ee:ff")/IP()/TCP(dport=5555), iface="enp134s0f0",count=1) -- not redirected to queue 4
> 
> sendp( Ether(dst="B8:CE:F6:D2:CD:22",src="aa:bb:cc:dd:ee:ff")/IP()/TCP(dport=1234), iface="enp134s0f0",count=1) -- redirected to queue 6 by flow rule 2.
> 
> 
> 
> Test Case 3:
> I kept the configuration for this test the same as in test 1, except for some minor changes:
> 
> Old pattern template:
> flow pattern_template 0 create pattern_template_id 4 relaxed true ingress template eth / ipv4 / tcp / conntrack spec 1 / end
> 
> New pattern template:
> flow pattern_template 0 create pattern_template_id 4 relaxed true ingress template eth / ipv4 / tcp / conntrack is 1 / end
> 
> Old flow rule:
> flow queue 0 create 0 template_table 4 pattern_template 0 actions_template 0 postpone 0 pattern eth / ipv4 / tcp / conntrack spec 1 / end actions queue index 4 / end
> 
> New flow rule:
> flow queue 0 create 0 template_table 4 pattern_template 0 actions_template 0 postpone 0 pattern eth / ipv4 / tcp dst is 5555 / conntrack is 1 / end actions queue index 4 / end
> 
> 
> 
> Result:
> The packets were sent to queue 6 but not to queue 4.
> 
> Scapy Packet:
> sendp( Ether(dst="B8:CE:F6:D2:CD:22",src="aa:bb:cc:dd:ee:ff")/IP()/TCP(dport=5555), iface="enp134s0f0",count=1) -- not redirected to queue 4
> 
> sendp( Ether(dst="B8:CE:F6:D2:CD:22",src="aa:bb:cc:dd:ee:ff")/IP()/TCP(dport=1234), iface="enp134s0f0",count=1) -- redirected to queue 6 by flow rule 2.
> 
> Test Case 4:
> 
> In this test case, I have removed the ct object, the indirect action, and the configuration related to ct.
> 
> NOTE: Please observe that there are no ct objects, indirect actions, or connection tracking numbers in the following test.
> 
> port stop all
> flow configure 0 queues_number 9 queues_size 256 
> 
> flow indirect_action 0 destroy action_id 0
> 
> flow indirect_action 0 destroy action_id 9
> 
> 
> flow pattern_template 0 create pattern_template_id 2 relaxed true ingress template tcp / end
> flow pattern_template 0 create pattern_template_id 3 relaxed true ingress template eth / ipv4 / tcp dst is 1234 / end
> flow pattern_template 0 create pattern_template_id 4 relaxed true ingress template eth / ipv4 / tcp / conntrack spec 65535 / end
> 
> 
> 
> flow actions_template 0 create actions_template_id 1 template jump group 1 / end mask jump group 1 / end
> flow actions_template 0 create actions_template_id 2 template jump group 5 / end mask jump group 5 / end
> flow actions_template 0 create actions_template_id 3 template queue index 6 / end mask queue index 6 / end
> flow actions_template 0 create actions_template_id 4 template queue index 4 / end mask queue index 4 / end
> 
> 
> flow template_table 0 create table_id 1 group 0 ingress rules_number 10 pattern_template 2 actions_template 1
> flow template_table 0 create table_id 2 group 1 ingress rules_number 10 pattern_template 2 actions_template 2
> flow template_table 0 create table_id 3 group 5 ingress rules_number 10 pattern_template 3 actions_template 3
> flow template_table 0 create table_id 4 group 5 ingress rules_number 10 pattern_template 4 actions_template 4
> 
> port start all
> 
> 
> 
> flow queue 0 create 0 template_table 1 pattern_template 0 actions_template 0 postpone 0 pattern tcp / end actions jump group 1 / end
> flow push 0 queue 0
> flow pull 0 queue 0
> 
> flow queue 0 create 0 template_table 2 pattern_template 0 actions_template 0 postpone 0 pattern tcp  /  end actions jump group 5 / end 
> flow push 0 queue 0
> flow pull 0 queue 0
> 
> flow queue 0 create 0 template_table 3 pattern_template 0 actions_template 0 postpone 0 pattern eth / ipv4 / tcp dst is 1234 / end actions queue index 6 / end
> flow push 0 queue 0
> flow pull 0 queue 0
> 
> flow queue 0 create 0 template_table 4 pattern_template 0 actions_template 0 postpone 0 pattern eth / ipv4 / tcp  / conntrack spec 65535 / end actions queue index 4 / end
> flow push 0 queue 0
> flow pull 0 queue 0
> 
> 
> set verbose 3
> start
> 
> 
> Result:
> The packets were sent to queue 6 and queue 4.
> This shouldn't be the case, since no ct object was created for the rule and there was no indirect action.
> 
> Scapy Packet:
> sendp( Ether(dst="B8:CE:F6:D2:CD:22",src="aa:bb:cc:dd:ee:ff")/IP()/TCP(dport=5555), iface="enp134s0f0",count=1) -- redirected to queue 4
> 
> sendp( Ether(dst="B8:CE:F6:D2:CD:22",src="aa:bb:cc:dd:ee:ff")/IP()/TCP(dport=1234), iface="enp134s0f0",count=1) -- redirected to queue 6 by flow rule 2.
> 
> Summary:
> 
> • Test case 1 worked as intended; packets were redirected to the desired queues.
> • Test cases 2 and 3 did not work as intended when matching on the exact TCP dst port and when the conntrack "is" keyword was used.
> • Test case 4 sent the packets to their desired queues, but it shouldn't have, because no ct context and no indirect action were created.
> 
> Regards,
> Taha
> 
