From: Tony Hart <tony.hart@domainhart.com>
To: Maayan Kashani <mkashani@nvidia.com>
Cc: "users@dpdk.org" <users@dpdk.org>
Subject: Re: rte_flow: total_length field match not working (MLX5)?
Date: Sun, 7 Jan 2024 08:38:09 -0500
Message-ID: <CAC6tBwxmcQU=h+1Q1sqGcOozC34cr9k2E9Pnq0sgB8hB=pFz5w@mail.gmail.com>
In-Reply-To: <IA1PR12MB75448EC21E37D3EDC30C4AC9B2642@IA1PR12MB7544.namprd12.prod.outlook.com>
Hi Maayan,
Thanks for the reply! Yes, I'd opened a case with Nvidia and learned
that I needed to use HWS (rather than SWS) together with the template
API, and that fixes the problem.
Unfortunately, my next problem is that in switching to HWS I lose the
sample/mirror feature, so I'm investigating workarounds for that.
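For the archives, the working setup ended up looking roughly like the
following in testpmd (a sketch only: the ids, queue/table sizes and the
length values are illustrative, the port has to be stopped before
'flow configure', and the group-0 jump rule that steers traffic into
group 1 is omitted):

dpdk-testpmd -a <pci_addr>,dv_flow_en=2 -- -i

testpmd> port stop 0
testpmd> flow configure 0 queues_number 1 queues_size 64 counters_number 16
testpmd> port start 0
testpmd> flow pattern_template 0 create pattern_template_id 1 ingress template eth / ipv4 dst mask 255.255.255.255 length mask 0xffff / end
testpmd> flow actions_template 0 create actions_template_id 1 ingress template count / drop / end mask count / drop / end
testpmd> flow template_table 0 create table_id 1 group 1 ingress rules_number 8 pattern_template 1 actions_template 1
testpmd> flow queue 0 create 0 template_table 1 pattern_template 0 actions_template 0 postpone no pattern eth / ipv4 dst is 1.1.1.1 length spec 1400 length last 1448 / end actions count / drop / end
testpmd> flow push 0 queue 0
testpmd> flow pull 0 queue 0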
regards
Tony
On Sun, Jan 7, 2024 at 3:36 AM Maayan Kashani <mkashani@nvidia.com> wrote:
>
> Hi,
> Sorry for the late response.
> This feature is supported in template mode with the template API (dv_flow_en=2).
> I see the documentation is not clear; we are working on updating it.
>
> Regards,
> Maayan Kashani
>
> > -----Original Message-----
> > From: Tony Hart <tony.hart@domainhart.com>
> > Sent: Saturday, 28 October 2023 16:53
> > To: users@dpdk.org
> > Subject: rte_flow: total_length field match not working (MLX5)?
> >
> > I'm trying to match on the IPv4 total-length field using the feature introduced
> > in DPDK 23.07; however, although the length option is accepted by testpmd, it
> > seems to be ignored by the ConnectX-6.
> >
> > I'm sending small packets. e.g.
> >
> > 09:44:39.617427 IP 100.100.1.14.10000 > 1.1.1.1.80: Flags [S], seq
> > 408572454:408572460, win 8192, length 6: HTTP
> >
> > and have the following flows set in testpmd...
> >
> > flow create 0 ingress pattern end actions jump group 1 / end
> >
> > flow create 0 group 1 ingress pattern eth / ipv4 dst is 1.1.1.1 length spec 1400 length last 1448 / end actions count / drop / end
> >
> > However, the flow is matched even though the packet should fail the length
> > criterion.
> >
> > testpmd> flow list 0
> > ID Group Prio Attr Rule
> > 0 0 0 i-- => JUMP
> > 1 1 0 i-- ETH IPV4 => COUNT DROP
> >
> > testpmd> flow query 0 1 count
> > COUNT:
> > hits_set: 1
> > bytes_set: 1
> > hits: 13
> > bytes: 832
> >
> > Any thoughts?
> > Tony
> >
> > btw, I'm using OFED 5.8 and firmware version 22.35.2000
> >
> > mlnx-ofa_kernel-5.8-OFED.5.8.1.1.2.1.rhel9u1.x86_64
> > kmod-mlnx-ofa_kernel-5.8-OFED.5.8.1.1.2.1.rhel9u1.x86_64
> > mlnx-ofa_kernel-devel-5.8-OFED.5.8.1.1.2.1.rhel9u1.x86_64
> > ofed-scripts-5.8-OFED.5.8.1.1.2.x86_64
> >
> > hca_id: mlx5_0
> > transport: InfiniBand (0)
> > fw_ver: 22.35.2000
> > node_guid: e8eb:d303:00b4:7c86
> > sys_image_guid: e8eb:d303:00b4:7c86
> > vendor_id: 0x02c9
> > vendor_part_id: 4125
> > hw_ver: 0x0
> > board_id: MT_0000000359
--
tony