From: Jan Viktorin <viktorin@cesnet.cz>
To: "Jiawei(Jonny) Wang" <jiaweiw@nvidia.com>
Cc: Slava Ovsiienko <viacheslavo@nvidia.com>,
Asaf Penso <asafp@nvidia.com>, "dev@dpdk.org" <dev@dpdk.org>,
Ori Kam <orika@nvidia.com>
Subject: Re: [dpdk-dev] Duplicating traffic with RTE Flow
Date: Thu, 11 Mar 2021 17:32:52 +0100
Message-ID: <20210311173252.5c3dc2ab@tanguero.localdomain>
In-Reply-To: <BL0PR12MB24193798BC6090CFCEC00DAFC6909@BL0PR12MB2419.namprd12.prod.outlook.com>
On Thu, 11 Mar 2021 02:11:07 +0000
"Jiawei(Jonny) Wang" <jiaweiw@nvidia.com> wrote:
> Hi Jan,
>
> Sorry for the late response,
>
> The first rule is invalid: the port_id action only works in the FDB domain, so the 'transfer' attribute is needed here.
> The second rule should be OK; could you please check whether port 1 is enabled in your DPDK application?
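> For example (a sketch; this is your first rule with only the 'transfer' attribute added):
>
>   flow validate 0 ingress transfer pattern end actions sample ratio 1 index 0 / drop / end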
I assume that it is enabled; see the full transcript:
$ ofed_info
MLNX_OFED_LINUX-5.2-1.0.4.0 (OFED-5.2-1.0.4):
...
$ sudo dpdk-testpmd -v -- -i
EAL: Detected 24 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: RTE Version: 'DPDK 20.11.0'
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'PA'
EAL: No available hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
EAL: Probe PCI driver: mlx5_pci (15b3:1017) device: 0000:04:00.0 (socket 0)
mlx5_pci: No available register for Sampler.
mlx5_pci: Size 0xFFFF is not power of 2, will be aligned to 0x10000.
EAL: Probe PCI driver: mlx5_pci (15b3:1017) device: 0000:04:00.1 (socket 0)
mlx5_pci: No available register for Sampler.
mlx5_pci: Size 0xFFFF is not power of 2, will be aligned to 0x10000.
EAL: No legacy callbacks, legacy socket not created
Interactive-mode selected
testpmd: create a new mbuf pool <mb_pool_0>: n=331456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
Configuring Port 0 (socket 0)
Port 0: B8:59:9F:E2:09:F6
Configuring Port 1 (socket 0)
Port 1: B8:59:9F:E2:09:F7
Checking link statuses...
Done
testpmd> port start 1
Port 1 is now not stopped
Please stop the ports first
Done
testpmd> set sample_actions 0 port_id id 1 / end
testpmd> flow validate 0 ingress transfer pattern end actions sample ratio 1 index 0 / drop / end
port_flow_complain(): Caught PMD error type 1 (cause unspecified): (no stated reason): Operation not supported
testpmd> flow create 0 ingress transfer pattern end actions sample ratio 1 index 0 / drop / end
port_flow_complain(): Caught PMD error type 1 (cause unspecified): (no stated reason): Operation not supported
testpmd>
Stopping port 0...
Stopping ports...
Done
Stopping port 1...
Stopping ports...
Done
Shutting down port 0...
Closing ports...
Port 0 is closed
Done
Shutting down port 1...
Closing ports...
Port 1 is closed
Done
Bye...
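For completeness, here is a minimal sketch of the rule I am trying to validate, written against the rte_flow C API as I read the DPDK 20.11 headers (the helper name and structure are mine, not taken from any existing example):

#include <rte_flow.h>

/* Sketch: validate "ingress transfer, sample ratio 1 -> port 1, then drop"
 * on the given port, i.e. the same rule as the testpmd lines above. */
static int
validate_mirror_to_port1(uint16_t port)
{
	struct rte_flow_error err;
	struct rte_flow_attr attr = { .ingress = 1, .transfer = 1 };
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	/* Sub-action list applied to each sampled (mirrored) packet. */
	struct rte_flow_action_port_id pid = { .id = 1 };
	struct rte_flow_action sub_actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_PORT_ID, .conf = &pid },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};
	/* ratio == 1 duplicates every packet to the sub-action list. */
	struct rte_flow_action_sample sample = {
		.ratio = 1,
		.actions = sub_actions,
	};
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_SAMPLE, .conf = &sample },
		{ .type = RTE_FLOW_ACTION_TYPE_DROP },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};

	return rte_flow_validate(port, &attr, pattern, actions, &err);
}

The attributes and actions are meant to mirror the testpmd command one to one.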
Jan
>
> Thanks.
> Jonny
>
> > -----Original Message-----
> > From: Jan Viktorin <viktorin@cesnet.cz>
> > Sent: Monday, March 1, 2021 10:43 PM
> > To: Slava Ovsiienko <viacheslavo@nvidia.com>
> > Cc: Asaf Penso <asafp@nvidia.com>; dev@dpdk.org; Ori Kam
> > <orika@nvidia.com>; Jiawei(Jonny) Wang <jiaweiw@nvidia.com>
> > Subject: Re: [dpdk-dev] Duplicating traffic with RTE Flow
> >
> > On Mon, 1 Mar 2021 14:34:07 +0000
> > Slava Ovsiienko <viacheslavo@nvidia.com> wrote:
> >
> > > Hi, Jan
> > >
> > > To use the port action (I see it is in your sample action list), the flow
> > > should be applied to the FDB domain, i.e. the "transfer" attribute should
> > > be specified:
> > >
> > > flow validate 0 ingress transfer...
> >
> > As you can see (it is a bit messy in the quoted text below; [1] is easier
> > to read), I tried both: first without transfer and then with it. The first
> > attempt gives the hint "action is valid in transfer mode only", but the
> > second attempt, with transfer, gives "Operation not supported".
> >
> > Jan
> >
> > [1] http://mails.dpdk.org/archives/dev/2021-March/200475.html
> >
> > >
> > > With best regards, Slava
> > >
> > > > -----Original Message-----
> > > > From: Jan Viktorin <viktorin@cesnet.cz>
> > > > Sent: Monday, March 1, 2021 14:21
> > > > To: Asaf Penso <asafp@nvidia.com>
> > > > Cc: dev@dpdk.org; Ori Kam <orika@nvidia.com>; Jiawei(Jonny) Wang
> > > > <jiaweiw@nvidia.com>; Slava Ovsiienko <viacheslavo@nvidia.com>
> > > > Subject: Re: [dpdk-dev] Duplicating traffic with RTE Flow
> > > >
> > > > Hello Asaf,
> > > >
> > > > it has been a while since we were last in touch regarding this topic.
> > > > I am finally trying again to get this feature working. I've seen that
> > > > sampling has already been upstreamed, which is great. However, I am
> > > > not very successful with it. There is almost no documentation, just
> > > > [1]; I found no examples, only commit logs...
> > > >
> > > > I tried:
> > > >
> > > > > set sample_actions 0 port_id id 1 / end
> > > > > flow validate 0 ingress pattern end actions sample ratio 1 index 0 / drop / end
> > > > port_flow_complain(): Caught PMD error type 1 (cause unspecified): port id action is valid in transfer mode only: Operation not supported
> > > > > flow validate 0 ingress transfer pattern end actions sample ratio 1 index 0 / drop / end
> > > > port_flow_complain(): Caught PMD error type 1 (cause unspecified): (no stated reason): Operation not supported
> > > >
> > > > Using CentOS 7, DPDK 20.11.0, OFED-5.2-1.0.4.
> > > > NICs: MT2892 Family [ConnectX-6 Dx] 101d (fw 22.28.1002), MT27800
> > > > Family [ConnectX-5] 1017 (fw 16.27.2008).
> > > >
> > > > My primary goal is to deliver exactly the same packets both to DPDK
> > > > and to the Linux kernel. Doing this at the RTE Flow level would be
> > > > great for performance and transparency.
> > > >
> > > > Jan
> > > >
> > > > [1]
> > > > https://doc.dpdk.org/guides/prog_guide/rte_flow.html#action-sample
> > > >
> > > > On Fri, 18 Sep 2020 14:23:42 +0000
> > > > Asaf Penso <asafp@nvidia.com> wrote:
> > > >
> > > > > Hello Jan,
> > > > >
> > > > > You can have a look at series [1], where we propose adding APIs to
> > > > > DPDK 20.11 for both mirroring and sampling of packets, with
> > > > > additional actions applied to the different traffic.
> > > > >
> > > > > [1]
> > > > > http://patches.dpdk.org/project/dpdk/list/?series=12045
> > > > >
> > > > > Regards,
> > > > > Asaf Penso
> > > > >
> > > > > >-----Original Message-----
> > > > > >From: dev <dev-bounces@dpdk.org> On Behalf Of Jan Viktorin
> > > > > >Sent: Friday, September 18, 2020 3:56 PM
> > > > > >To: dev@dpdk.org
> > > > > >Subject: [dpdk-dev] Duplicating traffic with RTE Flow
> > > > > >
> > > > > >Hello all,
> > > > > >
> > > > > >we are looking for a way to duplicate ingress traffic in hardware.
> > > > > >
> > > > > >There is an example in [1] that suggests inserting two fate actions
> > > > > >into the RTE Flow actions array, like:
> > > > > >
> > > > > > flow create 0 ingress pattern end \
> > > > > > actions queue index 0 / void / queue index 1 / end
> > > > > >
> > > > > >But our experience is that PMDs reject two fate actions (tried
> > > > > >with mlx5). Another similar approach would be to deliver every
> > > > > >single packet into two virtual
> > > > > >functions:
> > > > > >
> > > > > > flow create 0 ingress pattern end \
> > > > > > actions vf index 0 / vf index 1 / end
> > > > > >
> > > > > >A third possibility was to use passthru:
> > > > > >
> > > > > > flow create 0 ingress pattern end \
> > > > > > actions passthru / vf index 0 / end
> > > > > > flow create 0 ingress pattern end \
> > > > > > actions vf index 1 / end
> > > > > >
> > > > > >Again, we tried this on mlx5 and it does not support passthru.
> > > > > >
> > > > > >The last idea was to use isolate with passthru (to deliver packets
> > > > > >both to the DPDK application and to the kernel), but again there
> > > > > >was no support for passthru on mlx5...
> > > > > >
> > > > > > flow isolate 0 true
> > > > > > flow create 0 ingress pattern end actions passthru / rss end / end
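> > > > > >(As a sketch, assuming port 0: the C-level equivalent of the
> > > > > >isolate step is rte_flow_isolate(0, 1, &error), after which the
> > > > > >passthru rule would be created as usual with rte_flow_create().)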
> > > > > >
> > > > > >Is there any other possibility, or a PMD+NIC combination that is
> > > > > >known to solve this issue?
> > > > > >
> > > > > >Thanks
> > > > > >Jan Viktorin
> > > > > >
> > > > > >[1]
> > > > > >https://doc.dpdk.org/guides/prog_guide/rte_flow.html#table-rte-flow-redirect-queue-5-3
> > >
>
Thread overview: 15+ messages
2020-09-18 12:56 Jan Viktorin
2020-09-18 14:23 ` Asaf Penso
2020-09-21 20:03 ` Jan Viktorin
2020-09-23 2:28 ` Jiawei(Jonny) Wang
2020-09-23 8:29 ` Jan Viktorin
2020-09-27 6:33 ` Jiawei(Jonny) Wang
2021-03-01 12:21 ` Jan Viktorin
2021-03-01 14:34 ` Slava Ovsiienko
2021-03-01 14:43 ` Jan Viktorin
2021-03-11 2:11 ` Jiawei(Jonny) Wang
2021-03-11 16:32 ` Jan Viktorin [this message]
2021-03-12 9:32 ` Jiawei(Jonny) Wang
2021-03-15 13:22 ` Jan Viktorin
2021-03-15 17:57 ` Jan Viktorin
2021-03-16 4:00 ` Jiawei(Jonny) Wang