* [RFC] ethdev: add send to kernel action
@ 2022-08-11 11:35 Michael Savisko
2022-08-15 12:02 ` Ori Kam
` (2 more replies)
0 siblings, 3 replies; 24+ messages in thread
From: Michael Savisko @ 2022-08-11 11:35 UTC (permalink / raw)
To: orika, andrew.rybchenko, ferruh.yigit; +Cc: dev, michaelsav
In some cases an application may receive a packet that should have been
received by the kernel. In such cases the application uses KNI or other
means to transfer the packet to the kernel.
This commit introduces an rte_flow action that the application may use
to route the packet to the kernel while it is still in the HW.
Signed-off-by: Michael Savisko <michaelsav@nvidia.com>
---
lib/librte_ethdev/rte_flow.h | 5 +++++
1 file changed, 5 insertions(+)
diff --git a/lib/librte_ethdev/rte_flow.h b/lib/librte_ethdev/rte_flow.h
index f92bef0184..969a607115 100644
--- a/lib/librte_ethdev/rte_flow.h
+++ b/lib/librte_ethdev/rte_flow.h
@@ -2853,6 +2853,11 @@ enum rte_flow_action_type {
* See file rte_mtr.h for MTR profile object configuration.
*/
RTE_FLOW_ACTION_TYPE_METER_MARK,
+
+ /*
+ * Send traffic to kernel.
+ */
+ RTE_FLOW_ACTION_TYPE_SEND_TO_KERNEL,
};
/**
--
2.27.0
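
For illustration, here is a minimal sketch of how an application could attach
the proposed action to a flow rule through the existing rte_flow API. The ICMP
pattern, the group number and the helper name are assumptions made for this
example, not part of the patch:

/* Sketch: steer ingress ICMP traffic back to the kernel driver,
 * assuming the proposed RTE_FLOW_ACTION_TYPE_SEND_TO_KERNEL is available.
 */
#include <stdint.h>
#include <rte_flow.h>

static struct rte_flow *
send_icmp_to_kernel(uint16_t port_id, struct rte_flow_error *error)
{
        struct rte_flow_attr attr = { .ingress = 1, .group = 1 };
        struct rte_flow_item pattern[] = {
                { .type = RTE_FLOW_ITEM_TYPE_ETH },
                { .type = RTE_FLOW_ITEM_TYPE_IPV4 },
                { .type = RTE_FLOW_ITEM_TYPE_ICMP },
                { .type = RTE_FLOW_ITEM_TYPE_END },
        };
        struct rte_flow_action actions[] = {
                /* The proposed action takes no configuration structure. */
                { .type = RTE_FLOW_ACTION_TYPE_SEND_TO_KERNEL },
                { .type = RTE_FLOW_ACTION_TYPE_END },
        };

        return rte_flow_create(port_id, &attr, pattern, actions, error);
}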
* RE: [RFC] ethdev: add send to kernel action
2022-08-11 11:35 [RFC] ethdev: add send to kernel action Michael Savisko
@ 2022-08-15 12:02 ` Ori Kam
2022-08-16 9:50 ` Ferruh Yigit
2022-09-14 9:32 ` [PATCH v2] " Michael Savisko
2 siblings, 0 replies; 24+ messages in thread
From: Ori Kam @ 2022-08-15 12:02 UTC (permalink / raw)
To: Michael Savisko, andrew.rybchenko, ferruh.yigit; +Cc: dev
> -----Original Message-----
> From: Michael Savisko <michaelsav@nvidia.com>
> Sent: Thursday, 11 August 2022 14:36
>
> In some cases application may receive a packet that should have been
> received by the kernel. In this case application uses KNI or other means
> to transfer the packet to the kernel.
> This commit introduces rte flow action that the application may use
> to route the packet to the kernel while still in the HW.
>
> Signed-off-by: Michael Savisko <michaelsav@nvidia.com>
> ---
> lib/librte_ethdev/rte_flow.h | 5 +++++
> 1 file changed, 5 insertions(+)
>
> diff --git a/lib/librte_ethdev/rte_flow.h b/lib/librte_ethdev/rte_flow.h
> index f92bef0184..969a607115 100644
> --- a/lib/librte_ethdev/rte_flow.h
> +++ b/lib/librte_ethdev/rte_flow.h
> @@ -2853,6 +2853,11 @@ enum rte_flow_action_type {
> * See file rte_mtr.h for MTR profile object configuration.
> */
> RTE_FLOW_ACTION_TYPE_METER_MARK,
> +
> + /*
> + * Send traffic to kernel.
> + */
> + RTE_FLOW_ACTION_TYPE_SEND_TO_KERNEL,
> };
>
> /**
> --
> 2.27.0
Acked-by: Ori Kam <orika@nvidia.com>
Best,
Ori
* Re: [RFC] ethdev: add send to kernel action
2022-08-11 11:35 [RFC] ethdev: add send to kernel action Michael Savisko
2022-08-15 12:02 ` Ori Kam
@ 2022-08-16 9:50 ` Ferruh Yigit
2022-09-12 13:32 ` Thomas Monjalon
2022-09-14 9:32 ` [PATCH v2] " Michael Savisko
2 siblings, 1 reply; 24+ messages in thread
From: Ferruh Yigit @ 2022-08-16 9:50 UTC (permalink / raw)
To: Michael Savisko, orika, andrew.rybchenko; +Cc: dev
On 8/11/2022 12:35 PM, Michael Savisko wrote:
> In some cases application may receive a packet that should have been
> received by the kernel. In this case application uses KNI or other means
> to transfer the packet to the kernel.
> This commit introduces rte flow action that the application may use
> to route the packet to the kernel while still in the HW.
>
> Signed-off-by: Michael Savisko <michaelsav@nvidia.com>
I assume this only works for bifurcated drivers, right?
* Re: [RFC] ethdev: add send to kernel action
2022-08-16 9:50 ` Ferruh Yigit
@ 2022-09-12 13:32 ` Thomas Monjalon
2022-09-12 13:39 ` Michael Savisko
0 siblings, 1 reply; 24+ messages in thread
From: Thomas Monjalon @ 2022-09-12 13:32 UTC (permalink / raw)
To: Michael Savisko, orika; +Cc: andrew.rybchenko, dev, Ferruh Yigit, viacheslavo
16/08/2022 11:50, Ferruh Yigit:
> On 8/11/2022 12:35 PM, Michael Savisko wrote:
> > In some cases application may receive a packet that should have been
> > received by the kernel. In this case application uses KNI or other means
> > to transfer the packet to the kernel.
> > This commit introduces rte flow action that the application may use
> > to route the packet to the kernel while still in the HW.
> >
> > Signed-off-by: Michael Savisko <michaelsav@nvidia.com>
>
> I assume this only works for bifurcated drivers, right?
This question has not been answered for a month.
Please let's be more responsive.
* RE: [RFC] ethdev: add send to kernel action
2022-09-12 13:32 ` Thomas Monjalon
@ 2022-09-12 13:39 ` Michael Savisko
2022-09-12 14:41 ` Andrew Rybchenko
0 siblings, 1 reply; 24+ messages in thread
From: Michael Savisko @ 2022-09-12 13:39 UTC (permalink / raw)
To: NBU-Contact-Thomas Monjalon (EXTERNAL), Ori Kam
Cc: andrew.rybchenko, dev, Ferruh Yigit, Slava Ovsiienko
[-- Attachment #1: Type: text/plain, Size: 1486 bytes --]
I replied to it the same day but unfortunately only to the author (see attached). My apologies.
Here's the answer:
"Depends on HW. If it can forward packets to different places then it can also be supported. But in most cases yes - for bifurcated drivers."
Regards,
Michael
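
Since, as noted in the answer above, support for the action depends on the HW,
an application would normally probe the rule before relying on it. Below is a
minimal sketch using the standard rte_flow_validate() call; the attributes,
pattern and function name are illustrative assumptions only:

/* Sketch: check whether the port can offload the proposed
 * send-to-kernel action before actually creating the rule.
 */
#include <stdint.h>
#include <rte_flow.h>

static int
can_send_to_kernel(uint16_t port_id)
{
        struct rte_flow_error error;
        struct rte_flow_attr attr = { .ingress = 1 };
        struct rte_flow_item pattern[] = {
                { .type = RTE_FLOW_ITEM_TYPE_ETH },
                { .type = RTE_FLOW_ITEM_TYPE_END },
        };
        struct rte_flow_action actions[] = {
                { .type = RTE_FLOW_ACTION_TYPE_SEND_TO_KERNEL },
                { .type = RTE_FLOW_ACTION_TYPE_END },
        };

        /* 0 means the rule is valid and could be created as-is. */
        return rte_flow_validate(port_id, &attr, pattern, actions, &error) == 0;
}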
-----Original Message-----
From: Thomas Monjalon <thomas@monjalon.net>
Sent: Monday, 12 September 2022 16:33
To: Michael Savisko <michaelsav@nvidia.com>; Ori Kam <orika@nvidia.com>
Cc: andrew.rybchenko@oktetlabs.ru; dev@dpdk.org; Ferruh Yigit <ferruh.yigit@xilinx.com>; Slava Ovsiienko <viacheslavo@nvidia.com>
Subject: Re: [RFC] ethdev: add send to kernel action
16/08/2022 11:50, Ferruh Yigit:
> On 8/11/2022 12:35 PM, Michael Savisko wrote:
> > In some cases application may receive a packet that should have been
> > received by the kernel. In this case application uses KNI or other
> > means to transfer the packet to the kernel.
> > This commit introduces rte flow action that the application may use
> > to route the packet to the kernel while still in the HW.
> >
> > Signed-off-by: Michael Savisko <michaelsav@nvidia.com>
>
> I assume this only works for bifurcated drivers, right?
This question has not been replied after a month.
Please let's be more reactive.
[-- Attachment #2: Type: message/rfc822, Size: 2406 bytes --]
From: Michael Savisko <michaelsav@nvidia.com>
To: Ferruh Yigit <ferruh.yigit@xilinx.com>
Subject: RE: [RFC] ethdev: add send to kernel action
Date: Tue, 16 Aug 2022 10:29:46 +0000
Message-ID: <DS0PR12MB66072506843C66B78A0AEA37AB6B9@DS0PR12MB6607.namprd12.prod.outlook.com>
Depends on HW. If it can forward packets to different places then it can also be supported. But in most cases yes - for bifurcated drivers.
-----Original Message-----
From: Ferruh Yigit <ferruh.yigit@xilinx.com>
Sent: Tuesday, August 16, 2022 12:51 PM
To: Michael Savisko <michaelsav@nvidia.com>; Ori Kam <orika@nvidia.com>; andrew.rybchenko@oktetlabs.ru
Cc: dev@dpdk.org
Subject: Re: [RFC] ethdev: add send to kernel action
On 8/11/2022 12:35 PM, Michael Savisko wrote:
> In some cases application may receive a packet that should have been
> received by the kernel. In this case application uses KNI or other
> means to transfer the packet to the kernel.
> This commit introduces rte flow action that the application may use to
> route the packet to the kernel while still in the HW.
>
> Signed-off-by: Michael Savisko <michaelsav@nvidia.com>
I assume this only works for bifurcated drivers, right?
* Re: [RFC] ethdev: add send to kernel action
2022-09-12 13:39 ` Michael Savisko
@ 2022-09-12 14:41 ` Andrew Rybchenko
2022-09-13 12:09 ` Michael Savisko
0 siblings, 1 reply; 24+ messages in thread
From: Andrew Rybchenko @ 2022-09-12 14:41 UTC (permalink / raw)
To: Michael Savisko, NBU-Contact-Thomas Monjalon (EXTERNAL), Ori Kam
Cc: dev, Ferruh Yigit, Slava Ovsiienko
On 9/12/22 16:39, Michael Savisko wrote:
>> -----Original Message-----
>> From: Thomas Monjalon <thomas@monjalon.net>
>> Sent: Monday, 12 September 2022 16:33
>> To: Michael Savisko <michaelsav@nvidia.com>; Ori Kam <orika@nvidia.com>
>> Cc: andrew.rybchenko@oktetlabs.ru; dev@dpdk.org; Ferruh Yigit <ferruh.yigit@xilinx.com>; Slava Ovsiienko <viacheslavo@nvidia.com>
>> Subject: Re: [RFC] ethdev: add send to kernel action
>>
>> 16/08/2022 11:50, Ferruh Yigit:
>>> On 8/11/2022 12:35 PM, Michael Savisko wrote:
>>>>
>>>> In some cases application may receive a packet that should have been
>>>> received by the kernel. In this case application uses KNI or other
>>>> means to transfer the packet to the kernel.
>>>> This commit introduces rte flow action that the application may use
>>>> to route the packet to the kernel while still in the HW.
>>>>
>>>> Signed-off-by: Michael Savisko <michaelsav@nvidia.com>
>>>
>>> I assume this only works for bifurcated drivers, right?
>>
>> This question has not been replied after a month.
>> Please let's be more reactive.
>
> Depends on HW. If it can forward packets to different places then it
> can also be supported. But in most cases yes - for bifurcated drivers.
The action sounds like "do some magic". As far as I know we
have no concept of the kernel, or of cooperation with the
kernel, in DPDK yet.
Is it a transfer or non-transfer action?
I guess non-transfer, since otherwise the next question is
which kernel...
In the non-transfer case DPDK has a concept of Rx queues
to which traffic is delivered, and we have the QUEUE and
RSS flow actions to do it.
The patch adds some magic destination "kernel". Don't we
want to control the destination queue? RSS?
Maybe we need dedicated control steps to set up kernel
Rx queues and then use QUEUE/RSS to direct traffic there?
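
For reference, a rough sketch of the existing queue-based steering referred to
above: the QUEUE action directs matching traffic to a specific DPDK Rx queue,
whereas the proposed action would hand the traffic back to the kernel driver.
The IPv4 pattern, the queue index and the function name are arbitrary choices
for this example:

/* Sketch: the existing QUEUE action steers matching traffic
 * to a given DPDK Rx queue.
 */
#include <stdint.h>
#include <rte_flow.h>

static struct rte_flow *
steer_ipv4_to_queue(uint16_t port_id, uint16_t queue_idx,
                    struct rte_flow_error *error)
{
        struct rte_flow_attr attr = { .ingress = 1 };
        struct rte_flow_item pattern[] = {
                { .type = RTE_FLOW_ITEM_TYPE_ETH },
                { .type = RTE_FLOW_ITEM_TYPE_IPV4 },
                { .type = RTE_FLOW_ITEM_TYPE_END },
        };
        struct rte_flow_action_queue queue = { .index = queue_idx };
        struct rte_flow_action actions[] = {
                { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
                { .type = RTE_FLOW_ACTION_TYPE_END },
        };

        return rte_flow_create(port_id, &attr, pattern, actions, error);
}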
* RE: [RFC] ethdev: add send to kernel action
2022-09-12 14:41 ` Andrew Rybchenko
@ 2022-09-13 12:09 ` Michael Savisko
2022-09-14 9:57 ` Thomas Monjalon
0 siblings, 1 reply; 24+ messages in thread
From: Michael Savisko @ 2022-09-13 12:09 UTC (permalink / raw)
To: Andrew Rybchenko, NBU-Contact-Thomas Monjalon (EXTERNAL), Ori Kam
Cc: dev, Ferruh Yigit, Slava Ovsiienko
> -----Original Message-----
> From: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> Sent: Monday, 12 September 2022 17:41
>
> On 9/12/22 16:39, Michael Savisko wrote:
> >> -----Original Message-----
> >> From: Thomas Monjalon <thomas@monjalon.net>
> >> Sent: Monday, 12 September 2022 16:33
> >>
> >> 16/08/2022 11:50, Ferruh Yigit:
> >>> On 8/11/2022 12:35 PM, Michael Savisko wrote:
> >>>>
> >>>> In some cases application may receive a packet that should have
> >>>> been received by the kernel. In this case application uses KNI or
> >>>> other means to transfer the packet to the kernel.
> >>>> This commit introduces rte flow action that the application may use
> >>>> to route the packet to the kernel while still in the HW.
> >>>>
> >>>> Signed-off-by: Michael Savisko <michaelsav@nvidia.com>
> >>>
> >>> I assume this only works for bifurcated drivers, right?
> >>
> >> This question has not been replied after a month.
> >> Please let's be more reactive.
> >
> > Depends on HW. If it can forward packets to different places then it can also
> be supported. But in most cases yes - for bifurcated drivers.
>
> The action sounds like "do some magic". As far as I know we have no concept of
> kernel and cooperation with the kernel in DPDK yet.
There's nothing "magical". The kernel is not part of DPDK, but DPDK can use KNI to transfer messages between the application and the kernel.
With a bifurcated driver we can have a rule to route packets matching a pattern (example: IPv4 packets) to the DPDK application, and the rest of the traffic will be received by the kernel.
But if we want to receive most of the traffic in DPDK except for a specific pattern (example: ICMP packets) that should be processed by the kernel, then it's easier to re-route these packets with a single rule.
The new action I'm suggesting allows the application to route packets directly to the kernel without software involvement; it is a HW offload.
We see it used when working with a bifurcated driver, because the kernel driver and the DPDK driver share the same HW.
> Is it a transfer or non-transfer action?
> I guess non-transfer, since otherwise the next question is which kernel...
This is an ingress action only.
> In the case of non-transfer DPDK has a concept of Rx queues which are used to
> deliver traffic to and we have QUEUE and RSS flow actions to do it.
The idea of this offload action is to route traffic away from the DPDK application.
> The patch adds some magic direction "kernel". Don't we want to control
> destination queue? RSS?
> May be we need dedicated control steps to setup kernel Rx queues and than use
> QUEUE/RSS to direct traffic there?
We have no control over how the kernel is configured.
I will provide a new version of the patch with better documentation. Please feel free to suggest any wording.
Thank you,
Michael
* [PATCH v2] ethdev: add send to kernel action
2022-08-11 11:35 [RFC] ethdev: add send to kernel action Michael Savisko
2022-08-15 12:02 ` Ori Kam
2022-08-16 9:50 ` Ferruh Yigit
@ 2022-09-14 9:32 ` Michael Savisko
2022-09-19 15:50 ` [PATCH v3] " Michael Savisko
2022-09-20 10:57 ` [PATCH v2] " Ori Kam
2 siblings, 2 replies; 24+ messages in thread
From: Michael Savisko @ 2022-09-14 9:32 UTC (permalink / raw)
To: dev
Cc: michaelsav, orika, viacheslavo, asafp, tmonjalon, Aman Singh,
Yuying Zhang, Thomas Monjalon, Ferruh Yigit, Andrew Rybchenko
In some cases an application may receive a packet that should have been
received by the kernel. In such cases the application uses KNI or other
means to transfer the packet to the kernel.
With a bifurcated driver we can have a rule to route packets matching
a pattern (example: IPv4 packets) to the DPDK application while the rest
of the traffic is received by the kernel.
But if we want to receive most of the traffic in DPDK except for a specific
pattern (example: ICMP packets) that should be processed by the kernel,
then it's easier to re-route these packets with a single rule.
This commit introduces a new rte_flow action which allows the application
to re-route packets directly to the kernel without software involvement.
Add a new testpmd rte_flow action 'send_to_kernel'. The application
may use this action to route the packet to the kernel while it is still
in the HW.
Example with testpmd command:
flow create 0 ingress priority 0 group 1 pattern eth type spec 0x0800
type mask 0xffff / end actions send_to_kernel / end
Signed-off-by: Michael Savisko <michaelsav@nvidia.com>
---
app/test-pmd/cmdline_flow.c | 9 +++++++++
doc/guides/testpmd_app_ug/testpmd_funcs.rst | 2 ++
lib/ethdev/rte_flow.c | 1 +
lib/ethdev/rte_flow.h | 9 +++++++++
4 files changed, 21 insertions(+)
diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 7f50028eb7..042f6b34a6 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -612,6 +612,7 @@ enum index {
ACTION_PORT_REPRESENTOR_PORT_ID,
ACTION_REPRESENTED_PORT,
ACTION_REPRESENTED_PORT_ETHDEV_PORT_ID,
+ ACTION_SEND_TO_KERNEL,
};
/** Maximum size for pattern in struct rte_flow_item_raw. */
@@ -1872,6 +1873,7 @@ static const enum index next_action[] = {
ACTION_CONNTRACK_UPDATE,
ACTION_PORT_REPRESENTOR,
ACTION_REPRESENTED_PORT,
+ ACTION_SEND_TO_KERNEL,
ZERO,
};
@@ -6341,6 +6343,13 @@ static const struct token token_list[] = {
.help = "submit a list of associated actions for red",
.next = NEXT(next_action),
},
+ [ACTION_SEND_TO_KERNEL] = {
+ .name = "send_to_kernel",
+ .help = "send packets to kernel",
+ .priv = PRIV_ACTION(SEND_TO_KERNEL, 0),
+ .next = NEXT(NEXT_ENTRY(ACTION_NEXT)),
+ .call = parse_vc,
+ },
/* Top-level command. */
[ADD] = {
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index 330e34427d..c259c8239a 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -4189,6 +4189,8 @@ This section lists supported actions and their attributes, if any.
- ``ethdev_port_id {unsigned}``: ethdev port ID
+- ``send_to_kernel``: send packets to kernel.
+
Destroying flow rules
~~~~~~~~~~~~~~~~~~~~~
diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c
index 501be9d602..627c671ce4 100644
--- a/lib/ethdev/rte_flow.c
+++ b/lib/ethdev/rte_flow.c
@@ -259,6 +259,7 @@ static const struct rte_flow_desc_data rte_flow_desc_action[] = {
MK_FLOW_ACTION(CONNTRACK, sizeof(struct rte_flow_action_conntrack)),
MK_FLOW_ACTION(PORT_REPRESENTOR, sizeof(struct rte_flow_action_ethdev)),
MK_FLOW_ACTION(REPRESENTED_PORT, sizeof(struct rte_flow_action_ethdev)),
+ MK_FLOW_ACTION(SEND_TO_KERNEL, 0),
};
int
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index a79f1e7ef0..a82992a6ae 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -2879,6 +2879,15 @@ enum rte_flow_action_type {
* @see struct rte_flow_action_ethdev
*/
RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT,
+
+ /*
+ * Send packets to the kernel, without going to userspace at all.
+ * The packets will be received by the kernel driver sharing
+ * the same device as the DPDK port.
+ *
+ * No associated configuration structure.
+ */
+ RTE_FLOW_ACTION_TYPE_SEND_TO_KERNEL,
};
/**
--
2.27.0
* Re: [RFC] ethdev: add send to kernel action
2022-09-13 12:09 ` Michael Savisko
@ 2022-09-14 9:57 ` Thomas Monjalon
0 siblings, 0 replies; 24+ messages in thread
From: Thomas Monjalon @ 2022-09-14 9:57 UTC (permalink / raw)
To: Andrew Rybchenko, Ori Kam, Michael Savisko
Cc: dev, Ferruh Yigit, Slava Ovsiienko
13/09/2022 14:09, Michael Savisko:
> From: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> > On 9/12/22 16:39, Michael Savisko wrote:
> > > From: Thomas Monjalon <thomas@monjalon.net>
> > >> 16/08/2022 11:50, Ferruh Yigit:
> > >>> On 8/11/2022 12:35 PM, Michael Savisko wrote:
> > >>>>
> > >>>> In some cases application may receive a packet that should have
> > >>>> been received by the kernel. In this case application uses KNI or
> > >>>> other means to transfer the packet to the kernel.
> > >>>> This commit introduces rte flow action that the application may use
> > >>>> to route the packet to the kernel while still in the HW.
> > >>>>
> > >>>> Signed-off-by: Michael Savisko <michaelsav@nvidia.com>
> > >>>
> > >>> I assume this only works for bifurcated drivers, right?
> > >>
> > >> This question has not been replied after a month.
> > >> Please let's be more reactive.
> > >
> > > Depends on HW. If it can forward packets to different places then it can also
> > be supported. But in most cases yes - for bifurcated drivers.
> >
> > The action sounds like "do some magic". As far as I know we have no concept of
> > kernel and cooperation with the kernel in DPDK yet.
>
> There's nothing "magical". Kernel is not a part of DPDK, but DPDK can use KNI to transfer messages between application and kernel.
> With bifurcated driver we can have a rule to route the packet matching a pattern (example: IPv4 packets) to the DPDK application and the rest of the traffic will be received by the kernel.
> But if we want to receive most of the traffic in DPDK except specific pattern (example: ICMP packets) that should be processed by the kernel, then it's easier to re-route these packets with a single rule.
> The new action I'm suggesting allows application to route packets directly to the kernel without software involvement, it is a HW offload.
> We see it used when working with bifurcated driver, because the kernel driver and the DPDK driver are sharing the same HW.
>
> > Is it a transfer or non-transfer action?
> > I guess non-transfer, since otherwise the next question is which kernel...
>
> This is an ingress action only.
Should we add this note to the doxygen comment?
This is the wording in the v2 sent today:
+ /*
+ * Send packets to the kernel, without going to userspace at all.
+ * The packets will be received by the kernel driver sharing
+ * the same device as the DPDK port.
+ *
+ * No associated configuration structure.
+ */
+ RTE_FLOW_ACTION_TYPE_SEND_TO_KERNEL,
> > In the case of non-transfer DPDK has a concept of Rx queues which are used to
> > deliver traffic to and we have QUEUE and RSS flow actions to do it.
>
> The idea of this offload action is to route traffic away from the DPDK application.
>
> > The patch adds some magic direction "kernel". Don't we want to control
> > destination queue? RSS?
> > May be we need dedicated control steps to setup kernel Rx queues and than use
> > QUEUE/RSS to direct traffic there?
>
> We have no control of how the kernel is configured.
* [PATCH v3] ethdev: add send to kernel action
2022-09-14 9:32 ` [PATCH v2] " Michael Savisko
@ 2022-09-19 15:50 ` Michael Savisko
2022-09-20 11:08 ` Ori Kam
` (2 more replies)
2022-09-20 10:57 ` [PATCH v2] " Ori Kam
1 sibling, 3 replies; 24+ messages in thread
From: Michael Savisko @ 2022-09-19 15:50 UTC (permalink / raw)
To: dev
Cc: michaelsav, orika, viacheslavo, asafp, Aman Singh, Yuying Zhang,
Thomas Monjalon, Ferruh Yigit, Andrew Rybchenko
In some cases an application may receive a packet that should have been
received by the kernel. In such cases the application uses KNI or other
means to transfer the packet to the kernel.
With a bifurcated driver we can have a rule to route packets matching
a pattern (example: IPv4 packets) to the DPDK application while the rest
of the traffic is received by the kernel.
But if we want to receive most of the traffic in DPDK except for a specific
pattern (example: ICMP packets) that should be processed by the kernel,
then it's easier to re-route these packets with a single rule.
This commit introduces a new rte_flow action which allows the application
to re-route packets directly to the kernel without software involvement.
Add a new testpmd rte_flow action 'send_to_kernel'. The application
may use this action to route the packet to the kernel while it is still
in the HW.
Example with testpmd command:
flow create 0 ingress priority 0 group 1 pattern eth type spec 0x0800
type mask 0xffff / end actions send_to_kernel / end
Signed-off-by: Michael Savisko <michaelsav@nvidia.com>
---
app/test-pmd/cmdline_flow.c | 9 +++++++++
doc/guides/testpmd_app_ug/testpmd_funcs.rst | 2 ++
lib/ethdev/rte_flow.c | 1 +
lib/ethdev/rte_flow.h | 10 ++++++++++
4 files changed, 22 insertions(+)
diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 7f50028eb7..042f6b34a6 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -612,6 +612,7 @@ enum index {
ACTION_PORT_REPRESENTOR_PORT_ID,
ACTION_REPRESENTED_PORT,
ACTION_REPRESENTED_PORT_ETHDEV_PORT_ID,
+ ACTION_SEND_TO_KERNEL,
};
/** Maximum size for pattern in struct rte_flow_item_raw. */
@@ -1872,6 +1873,7 @@ static const enum index next_action[] = {
ACTION_CONNTRACK_UPDATE,
ACTION_PORT_REPRESENTOR,
ACTION_REPRESENTED_PORT,
+ ACTION_SEND_TO_KERNEL,
ZERO,
};
@@ -6341,6 +6343,13 @@ static const struct token token_list[] = {
.help = "submit a list of associated actions for red",
.next = NEXT(next_action),
},
+ [ACTION_SEND_TO_KERNEL] = {
+ .name = "send_to_kernel",
+ .help = "send packets to kernel",
+ .priv = PRIV_ACTION(SEND_TO_KERNEL, 0),
+ .next = NEXT(NEXT_ENTRY(ACTION_NEXT)),
+ .call = parse_vc,
+ },
/* Top-level command. */
[ADD] = {
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index 330e34427d..c259c8239a 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -4189,6 +4189,8 @@ This section lists supported actions and their attributes, if any.
- ``ethdev_port_id {unsigned}``: ethdev port ID
+- ``send_to_kernel``: send packets to kernel.
+
Destroying flow rules
~~~~~~~~~~~~~~~~~~~~~
diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c
index 501be9d602..627c671ce4 100644
--- a/lib/ethdev/rte_flow.c
+++ b/lib/ethdev/rte_flow.c
@@ -259,6 +259,7 @@ static const struct rte_flow_desc_data rte_flow_desc_action[] = {
MK_FLOW_ACTION(CONNTRACK, sizeof(struct rte_flow_action_conntrack)),
MK_FLOW_ACTION(PORT_REPRESENTOR, sizeof(struct rte_flow_action_ethdev)),
MK_FLOW_ACTION(REPRESENTED_PORT, sizeof(struct rte_flow_action_ethdev)),
+ MK_FLOW_ACTION(SEND_TO_KERNEL, 0),
};
int
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index a79f1e7ef0..bf076087b3 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -2879,6 +2879,16 @@ enum rte_flow_action_type {
* @see struct rte_flow_action_ethdev
*/
RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT,
+
+ /**
+ * Send packets to the kernel, without going to userspace at all.
+ * The packets will be received by the kernel driver sharing
+ * the same device as the DPDK port.
+ * This is an ingress action only.
+ *
+ * No associated configuration structure.
+ */
+ RTE_FLOW_ACTION_TYPE_SEND_TO_KERNEL,
};
/**
--
2.27.0
* RE: [PATCH v2] ethdev: add send to kernel action
2022-09-14 9:32 ` [PATCH v2] " Michael Savisko
2022-09-19 15:50 ` [PATCH v3] " Michael Savisko
@ 2022-09-20 10:57 ` Ori Kam
1 sibling, 0 replies; 24+ messages in thread
From: Ori Kam @ 2022-09-20 10:57 UTC (permalink / raw)
To: Michael Savisko, dev
Cc: Slava Ovsiienko, Asaf Penso, Thomas Monjalon, Aman Singh,
Yuying Zhang, NBU-Contact-Thomas Monjalon (EXTERNAL),
Ferruh Yigit, Andrew Rybchenko
Hi Michael,
> -----Original Message-----
> From: Michael Savisko <michaelsav@nvidia.com>
> Sent: Wednesday, 14 September 2022 12:32
>
> In some cases application may receive a packet that should have been
> received by the kernel. In this case application uses KNI or other means
> to transfer the packet to the kernel.
>
> With bifurcated driver we can have a rule to route packets matching
> a pattern (example: IPv4 packets) to the DPDK application and the rest
> of the traffic will be received by the kernel.
> But if we want to receive most of the traffic in DPDK except specific
> pattern (example: ICMP packets) that should be processed by the kernel,
> then it's easier to re-route these packets with a single rule.
>
> This commit introduces new rte_flow action which allows application to
> re-route packets directly to the kernel without software involvement.
>
> Add new testpmd rte_flow action 'send_to_kernel'. The application
> may use this action to route the packet to the kernel while still
> in the HW.
>
> Example with testpmd command:
>
> flow create 0 ingress priority 0 group 1 pattern eth type spec 0x0800
> type mask 0xffff / end actions send_to_kernel / end
>
> Signed-off-by: Michael Savisko <michaelsav@nvidia.com>
> ---
> app/test-pmd/cmdline_flow.c | 9 +++++++++
> doc/guides/testpmd_app_ug/testpmd_funcs.rst | 2 ++
> lib/ethdev/rte_flow.c | 1 +
> lib/ethdev/rte_flow.h | 9 +++++++++
> 4 files changed, 21 insertions(+)
>
> diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
> index 7f50028eb7..042f6b34a6 100644
> --- a/app/test-pmd/cmdline_flow.c
> +++ b/app/test-pmd/cmdline_flow.c
> @@ -612,6 +612,7 @@ enum index {
> ACTION_PORT_REPRESENTOR_PORT_ID,
> ACTION_REPRESENTED_PORT,
> ACTION_REPRESENTED_PORT_ETHDEV_PORT_ID,
> + ACTION_SEND_TO_KERNEL,
> };
>
> /** Maximum size for pattern in struct rte_flow_item_raw. */
> @@ -1872,6 +1873,7 @@ static const enum index next_action[] = {
> ACTION_CONNTRACK_UPDATE,
> ACTION_PORT_REPRESENTOR,
> ACTION_REPRESENTED_PORT,
> + ACTION_SEND_TO_KERNEL,
> ZERO,
> };
>
> @@ -6341,6 +6343,13 @@ static const struct token token_list[] = {
> .help = "submit a list of associated actions for red",
> .next = NEXT(next_action),
> },
> + [ACTION_SEND_TO_KERNEL] = {
> + .name = "send_to_kernel",
> + .help = "send packets to kernel",
> + .priv = PRIV_ACTION(SEND_TO_KERNEL, 0),
> + .next = NEXT(NEXT_ENTRY(ACTION_NEXT)),
> + .call = parse_vc,
> + },
>
> /* Top-level command. */
> [ADD] = {
> diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
> b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
> index 330e34427d..c259c8239a 100644
> --- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
> +++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
> @@ -4189,6 +4189,8 @@ This section lists supported actions and their
> attributes, if any.
>
> - ``ethdev_port_id {unsigned}``: ethdev port ID
>
> +- ``send_to_kernel``: send packets to kernel.
> +
> Destroying flow rules
> ~~~~~~~~~~~~~~~~~~~~~
>
> diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c
> index 501be9d602..627c671ce4 100644
> --- a/lib/ethdev/rte_flow.c
> +++ b/lib/ethdev/rte_flow.c
> @@ -259,6 +259,7 @@ static const struct rte_flow_desc_data
> rte_flow_desc_action[] = {
> MK_FLOW_ACTION(CONNTRACK, sizeof(struct
> rte_flow_action_conntrack)),
> MK_FLOW_ACTION(PORT_REPRESENTOR, sizeof(struct
> rte_flow_action_ethdev)),
> MK_FLOW_ACTION(REPRESENTED_PORT, sizeof(struct
> rte_flow_action_ethdev)),
> + MK_FLOW_ACTION(SEND_TO_KERNEL, 0),
> };
>
> int
> diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
> index a79f1e7ef0..a82992a6ae 100644
> --- a/lib/ethdev/rte_flow.h
> +++ b/lib/ethdev/rte_flow.h
> @@ -2879,6 +2879,15 @@ enum rte_flow_action_type {
> * @see struct rte_flow_action_ethdev
> */
> RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT,
> +
> + /*
> + * Send packets to the kernel, without going to userspace at all.
> + * The packets will be received by the kernel driver sharing
> + * the same device as the DPDK port.
> + *
> + * No associated configuration structure.
> + */
> + RTE_FLOW_ACTION_TYPE_SEND_TO_KERNEL,
> };
>
> /**
> --
> 2.27.0
Acked-by: Ori Kam <orika@nvidia.com>
Best,
Ori
* RE: [PATCH v3] ethdev: add send to kernel action
2022-09-19 15:50 ` [PATCH v3] " Michael Savisko
@ 2022-09-20 11:08 ` Ori Kam
2022-09-26 13:06 ` Andrew Rybchenko
2022-09-29 14:54 ` [PATCH v4] " Michael Savisko
2 siblings, 0 replies; 24+ messages in thread
From: Ori Kam @ 2022-09-20 11:08 UTC (permalink / raw)
To: Michael Savisko, dev
Cc: Slava Ovsiienko, Asaf Penso, Aman Singh, Yuying Zhang,
NBU-Contact-Thomas Monjalon (EXTERNAL),
Ferruh Yigit, Andrew Rybchenko
Hi Michael
> -----Original Message-----
> From: Michael Savisko <michaelsav@nvidia.com>
> Sent: Monday, 19 September 2022 18:50
>
> In some cases application may receive a packet that should have been
> received by the kernel. In this case application uses KNI or other means
> to transfer the packet to the kernel.
>
> With bifurcated driver we can have a rule to route packets matching
> a pattern (example: IPv4 packets) to the DPDK application and the rest
> of the traffic will be received by the kernel.
> But if we want to receive most of the traffic in DPDK except specific
> pattern (example: ICMP packets) that should be processed by the kernel,
> then it's easier to re-route these packets with a single rule.
>
> This commit introduces new rte_flow action which allows application to
> re-route packets directly to the kernel without software involvement.
>
> Add new testpmd rte_flow action 'send_to_kernel'. The application
> may use this action to route the packet to the kernel while still
> in the HW.
>
> Example with testpmd command:
>
> flow create 0 ingress priority 0 group 1 pattern eth type spec 0x0800
> type mask 0xffff / end actions send_to_kernel / end
>
> Signed-off-by: Michael Savisko <michaelsav@nvidia.com>
> ---
> app/test-pmd/cmdline_flow.c | 9 +++++++++
> doc/guides/testpmd_app_ug/testpmd_funcs.rst | 2 ++
> lib/ethdev/rte_flow.c | 1 +
> lib/ethdev/rte_flow.h | 10 ++++++++++
> 4 files changed, 22 insertions(+)
>
> diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
> index 7f50028eb7..042f6b34a6 100644
> --- a/app/test-pmd/cmdline_flow.c
> +++ b/app/test-pmd/cmdline_flow.c
> @@ -612,6 +612,7 @@ enum index {
> ACTION_PORT_REPRESENTOR_PORT_ID,
> ACTION_REPRESENTED_PORT,
> ACTION_REPRESENTED_PORT_ETHDEV_PORT_ID,
> + ACTION_SEND_TO_KERNEL,
> };
>
> /** Maximum size for pattern in struct rte_flow_item_raw. */
> @@ -1872,6 +1873,7 @@ static const enum index next_action[] = {
> ACTION_CONNTRACK_UPDATE,
> ACTION_PORT_REPRESENTOR,
> ACTION_REPRESENTED_PORT,
> + ACTION_SEND_TO_KERNEL,
> ZERO,
> };
>
> @@ -6341,6 +6343,13 @@ static const struct token token_list[] = {
> .help = "submit a list of associated actions for red",
> .next = NEXT(next_action),
> },
> + [ACTION_SEND_TO_KERNEL] = {
> + .name = "send_to_kernel",
> + .help = "send packets to kernel",
> + .priv = PRIV_ACTION(SEND_TO_KERNEL, 0),
> + .next = NEXT(NEXT_ENTRY(ACTION_NEXT)),
> + .call = parse_vc,
> + },
>
> /* Top-level command. */
> [ADD] = {
> diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
> b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
> index 330e34427d..c259c8239a 100644
> --- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
> +++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
> @@ -4189,6 +4189,8 @@ This section lists supported actions and their
> attributes, if any.
>
> - ``ethdev_port_id {unsigned}``: ethdev port ID
>
> +- ``send_to_kernel``: send packets to kernel.
> +
> Destroying flow rules
> ~~~~~~~~~~~~~~~~~~~~~
>
> diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c
> index 501be9d602..627c671ce4 100644
> --- a/lib/ethdev/rte_flow.c
> +++ b/lib/ethdev/rte_flow.c
> @@ -259,6 +259,7 @@ static const struct rte_flow_desc_data
> rte_flow_desc_action[] = {
> MK_FLOW_ACTION(CONNTRACK, sizeof(struct
> rte_flow_action_conntrack)),
> MK_FLOW_ACTION(PORT_REPRESENTOR, sizeof(struct
> rte_flow_action_ethdev)),
> MK_FLOW_ACTION(REPRESENTED_PORT, sizeof(struct
> rte_flow_action_ethdev)),
> + MK_FLOW_ACTION(SEND_TO_KERNEL, 0),
> };
>
> int
> diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
> index a79f1e7ef0..bf076087b3 100644
> --- a/lib/ethdev/rte_flow.h
> +++ b/lib/ethdev/rte_flow.h
> @@ -2879,6 +2879,16 @@ enum rte_flow_action_type {
> * @see struct rte_flow_action_ethdev
> */
> RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT,
> +
> + /**
> + * Send packets to the kernel, without going to userspace at all.
> + * The packets will be received by the kernel driver sharing
> + * the same device as the DPDK port.
> + * This is an ingress action only.
> + *
> + * No associated configuration structure.
> + */
> + RTE_FLOW_ACTION_TYPE_SEND_TO_KERNEL,
> };
>
> /**
> --
> 2.27.0
Acked-by: Ori Kam <orika@nvidia.com>
Best,
Ori
* Re: [PATCH v3] ethdev: add send to kernel action
2022-09-19 15:50 ` [PATCH v3] " Michael Savisko
2022-09-20 11:08 ` Ori Kam
@ 2022-09-26 13:06 ` Andrew Rybchenko
2022-09-28 14:30 ` Michael Savisko
2022-09-29 14:54 ` [PATCH v4] " Michael Savisko
2 siblings, 1 reply; 24+ messages in thread
From: Andrew Rybchenko @ 2022-09-26 13:06 UTC (permalink / raw)
To: Michael Savisko, Ferruh Yigit, Thomas Monjalon
Cc: orika, viacheslavo, asafp, Aman Singh, Yuying Zhang, dev
On 9/19/22 18:50, Michael Savisko wrote:
> In some cases application may receive a packet that should have been
> received by the kernel. In this case application uses KNI or other means
> to transfer the packet to the kernel.
>
> With bifurcated driver we can have a rule to route packets matching
> a pattern (example: IPv4 packets) to the DPDK application and the rest
> of the traffic will be received by the kernel.
> But if we want to receive most of the traffic in DPDK except specific
> pattern (example: ICMP packets) that should be processed by the kernel,
> then it's easier to re-route these packets with a single rule.
>
> This commit introduces new rte_flow action which allows application to
> re-route packets directly to the kernel without software involvement.
>
> Add new testpmd rte_flow action 'send_to_kernel'. The application
> may use this action to route the packet to the kernel while still
> in the HW.
>
> Example with testpmd command:
>
> flow create 0 ingress priority 0 group 1 pattern eth type spec 0x0800
> type mask 0xffff / end actions send_to_kernel / end
>
> Signed-off-by: Michael Savisko <michaelsav@nvidia.com>
> ---
> app/test-pmd/cmdline_flow.c | 9 +++++++++
> doc/guides/testpmd_app_ug/testpmd_funcs.rst | 2 ++
> lib/ethdev/rte_flow.c | 1 +
> lib/ethdev/rte_flow.h | 10 ++++++++++
> 4 files changed, 22 insertions(+)
>
> diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
> index 7f50028eb7..042f6b34a6 100644
> --- a/app/test-pmd/cmdline_flow.c
> +++ b/app/test-pmd/cmdline_flow.c
> @@ -612,6 +612,7 @@ enum index {
> ACTION_PORT_REPRESENTOR_PORT_ID,
> ACTION_REPRESENTED_PORT,
> ACTION_REPRESENTED_PORT_ETHDEV_PORT_ID,
> + ACTION_SEND_TO_KERNEL,
> };
>
> /** Maximum size for pattern in struct rte_flow_item_raw. */
> @@ -1872,6 +1873,7 @@ static const enum index next_action[] = {
> ACTION_CONNTRACK_UPDATE,
> ACTION_PORT_REPRESENTOR,
> ACTION_REPRESENTED_PORT,
> + ACTION_SEND_TO_KERNEL,
> ZERO,
> };
>
> @@ -6341,6 +6343,13 @@ static const struct token token_list[] = {
> .help = "submit a list of associated actions for red",
> .next = NEXT(next_action),
> },
> + [ACTION_SEND_TO_KERNEL] = {
> + .name = "send_to_kernel",
> + .help = "send packets to kernel",
> + .priv = PRIV_ACTION(SEND_TO_KERNEL, 0),
> + .next = NEXT(NEXT_ENTRY(ACTION_NEXT)),
> + .call = parse_vc,
> + },
>
> /* Top-level command. */
> [ADD] = {
> diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
> index 330e34427d..c259c8239a 100644
> --- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
> +++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
> @@ -4189,6 +4189,8 @@ This section lists supported actions and their attributes, if any.
>
> - ``ethdev_port_id {unsigned}``: ethdev port ID
>
> +- ``send_to_kernel``: send packets to kernel.
> +
> Destroying flow rules
> ~~~~~~~~~~~~~~~~~~~~~
>
> diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c
> index 501be9d602..627c671ce4 100644
> --- a/lib/ethdev/rte_flow.c
> +++ b/lib/ethdev/rte_flow.c
> @@ -259,6 +259,7 @@ static const struct rte_flow_desc_data rte_flow_desc_action[] = {
> MK_FLOW_ACTION(CONNTRACK, sizeof(struct rte_flow_action_conntrack)),
> MK_FLOW_ACTION(PORT_REPRESENTOR, sizeof(struct rte_flow_action_ethdev)),
> MK_FLOW_ACTION(REPRESENTED_PORT, sizeof(struct rte_flow_action_ethdev)),
> + MK_FLOW_ACTION(SEND_TO_KERNEL, 0),
> };
>
> int
> diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
> index a79f1e7ef0..bf076087b3 100644
> --- a/lib/ethdev/rte_flow.h
> +++ b/lib/ethdev/rte_flow.h
> @@ -2879,6 +2879,16 @@ enum rte_flow_action_type {
> * @see struct rte_flow_action_ethdev
> */
> RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT,
> +
> + /**
> + * Send packets to the kernel, without going to userspace at all.
> + * The packets will be received by the kernel driver sharing
Maybe it is better to mention the bifurcated driver model and
add a reference to the documentation.
> + * the same device as the DPDK port.
Which DPDK port? There is no control structure associated
with the action.
I guess it is the port used to create the flow rule.
If so, it should be documented that it is a non-transfer
action, and it should be highlighted that it is the port
used to create the flow rule.
> + * This is an ingress action only.
> + *
> + * No associated configuration structure.
> + */
> + RTE_FLOW_ACTION_TYPE_SEND_TO_KERNEL,
> };
>
> /**
* RE: [PATCH v3] ethdev: add send to kernel action
2022-09-26 13:06 ` Andrew Rybchenko
@ 2022-09-28 14:30 ` Michael Savisko
0 siblings, 0 replies; 24+ messages in thread
From: Michael Savisko @ 2022-09-28 14:30 UTC (permalink / raw)
To: Andrew Rybchenko, Ferruh Yigit, NBU-Contact-Thomas Monjalon (EXTERNAL)
Cc: Ori Kam, Slava Ovsiienko, Asaf Penso, Aman Singh, Yuying Zhang, dev
> -----Original Message-----
> From: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> Sent: Monday, 26 September 2022 16:07
>
> On 9/19/22 18:50, Michael Savisko wrote:
> > In some cases application may receive a packet that should have been
> > received by the kernel. In this case application uses KNI or other
> > means to transfer the packet to the kernel.
> >
> > With bifurcated driver we can have a rule to route packets matching a
> > pattern (example: IPv4 packets) to the DPDK application and the rest
> > of the traffic will be received by the kernel.
> > But if we want to receive most of the traffic in DPDK except specific
> > pattern (example: ICMP packets) that should be processed by the
> > kernel, then it's easier to re-route these packets with a single rule.
> >
> > This commit introduces new rte_flow action which allows application to
> > re-route packets directly to the kernel without software involvement.
> >
> > Add new testpmd rte_flow action 'send_to_kernel'. The application may
> > use this action to route the packet to the kernel while still in the
> > HW.
> >
> > Example with testpmd command:
> >
> > flow create 0 ingress priority 0 group 1 pattern eth type spec 0x0800
> > type mask 0xffff / end actions send_to_kernel / end
> >
> > Signed-off-by: Michael Savisko <michaelsav@nvidia.com>
> > ---
> > app/test-pmd/cmdline_flow.c | 9 +++++++++
> > doc/guides/testpmd_app_ug/testpmd_funcs.rst | 2 ++
> > lib/ethdev/rte_flow.c | 1 +
> > lib/ethdev/rte_flow.h | 10 ++++++++++
> > 4 files changed, 22 insertions(+)
> >
> > diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
> > index 7f50028eb7..042f6b34a6 100644
> > --- a/app/test-pmd/cmdline_flow.c
> > +++ b/app/test-pmd/cmdline_flow.c
> > @@ -612,6 +612,7 @@ enum index {
> > ACTION_PORT_REPRESENTOR_PORT_ID,
> > ACTION_REPRESENTED_PORT,
> > ACTION_REPRESENTED_PORT_ETHDEV_PORT_ID,
> > + ACTION_SEND_TO_KERNEL,
> > };
> >
> > /** Maximum size for pattern in struct rte_flow_item_raw. */ @@
> > -1872,6 +1873,7 @@ static const enum index next_action[] = {
> > ACTION_CONNTRACK_UPDATE,
> > ACTION_PORT_REPRESENTOR,
> > ACTION_REPRESENTED_PORT,
> > + ACTION_SEND_TO_KERNEL,
> > ZERO,
> > };
> >
> > @@ -6341,6 +6343,13 @@ static const struct token token_list[] = {
> > .help = "submit a list of associated actions for red",
> > .next = NEXT(next_action),
> > },
> > + [ACTION_SEND_TO_KERNEL] = {
> > + .name = "send_to_kernel",
> > + .help = "send packets to kernel",
> > + .priv = PRIV_ACTION(SEND_TO_KERNEL, 0),
> > + .next = NEXT(NEXT_ENTRY(ACTION_NEXT)),
> > + .call = parse_vc,
> > + },
> >
> > /* Top-level command. */
> > [ADD] = {
> > diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
> > b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
> > index 330e34427d..c259c8239a 100644
> > --- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
> > +++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
> > @@ -4189,6 +4189,8 @@ This section lists supported actions and their
> attributes, if any.
> >
> > - ``ethdev_port_id {unsigned}``: ethdev port ID
> >
> > +- ``send_to_kernel``: send packets to kernel.
> > +
> > Destroying flow rules
> > ~~~~~~~~~~~~~~~~~~~~~
> >
> > diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c index
> > 501be9d602..627c671ce4 100644
> > --- a/lib/ethdev/rte_flow.c
> > +++ b/lib/ethdev/rte_flow.c
> > @@ -259,6 +259,7 @@ static const struct rte_flow_desc_data
> rte_flow_desc_action[] = {
> > MK_FLOW_ACTION(CONNTRACK, sizeof(struct
> rte_flow_action_conntrack)),
> > MK_FLOW_ACTION(PORT_REPRESENTOR, sizeof(struct
> rte_flow_action_ethdev)),
> > MK_FLOW_ACTION(REPRESENTED_PORT, sizeof(struct
> > rte_flow_action_ethdev)),
> > + MK_FLOW_ACTION(SEND_TO_KERNEL, 0),
> > };
> >
> > int
> > diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h index
> > a79f1e7ef0..bf076087b3 100644
> > --- a/lib/ethdev/rte_flow.h
> > +++ b/lib/ethdev/rte_flow.h
> > @@ -2879,6 +2879,16 @@ enum rte_flow_action_type {
> > * @see struct rte_flow_action_ethdev
> > */
> > RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT,
> > +
> > + /**
> > + * Send packets to the kernel, without going to userspace at all.
> > + * The packets will be received by the kernel driver sharing
>
> May be it is better to mentioned bifurcated driver model and add a reference to
> the documentation.
Yes, I will add a mention of the bifurcated driver model.
>
> > + * the same device as the DPDK port.
>
> Which DPDK port? There is no control structure associated with the action.
>
> I guess it is the port used to create the flow rule.
> If so, it should be documented that it is non-transfer action and highlighted that
> the port used to create the action.
Yes, it is the port used to create the flow rule. I will add this to the comment.
I will emphasize that it is a non-transfer action as well.
I will provide a new version of the patch with better documentation.
Thank you,
Michael Savisko
>
> > + * This is an ingress action only.
> > + *
> > + * No associated configuration structure.
> > + */
> > + RTE_FLOW_ACTION_TYPE_SEND_TO_KERNEL,
> > };
> >
> > /**
* [PATCH v4] ethdev: add send to kernel action
2022-09-19 15:50 ` [PATCH v3] " Michael Savisko
2022-09-20 11:08 ` Ori Kam
2022-09-26 13:06 ` Andrew Rybchenko
@ 2022-09-29 14:54 ` Michael Savisko
2022-10-03 7:53 ` Andrew Rybchenko
2022-10-03 16:34 ` [PATCH v5] " Michael Savisko
2 siblings, 2 replies; 24+ messages in thread
From: Michael Savisko @ 2022-09-29 14:54 UTC (permalink / raw)
To: dev
Cc: michaelsav, orika, viacheslavo, asafp, Aman Singh, Yuying Zhang,
Thomas Monjalon, Ferruh Yigit, Andrew Rybchenko
In some cases an application may receive a packet that should have been
received by the kernel. In such cases the application uses KNI or other
means to transfer the packet to the kernel.
With a bifurcated driver we can have a rule to route packets matching
a pattern (example: IPv4 packets) to the DPDK application while the rest
of the traffic is received by the kernel.
But if we want to receive most of the traffic in DPDK except for a specific
pattern (example: ICMP packets) that should be processed by the kernel,
then it's easier to re-route these packets with a single rule.
This commit introduces a new rte_flow action which allows the application
to re-route packets directly to the kernel without software involvement.
Add a new testpmd rte_flow action 'send_to_kernel'. The application
may use this action to route the packet to the kernel while it is still
in the HW.
Example with testpmd command:
flow create 0 ingress priority 0 group 1 pattern eth type spec 0x0800
type mask 0xffff / end actions send_to_kernel / end
Signed-off-by: Michael Savisko <michaelsav@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
v4:
- improve description comment above RTE_FLOW_ACTION_TYPE_SEND_TO_KERNEL
v3:
http://patches.dpdk.org/project/dpdk/patch/20220919155013.61473-1-michaelsav@nvidia.com/
v2:
http://patches.dpdk.org/project/dpdk/patch/20220914093219.11728-1-michaelsav@nvidia.com/
---
app/test-pmd/cmdline_flow.c | 9 +++++++++
doc/guides/testpmd_app_ug/testpmd_funcs.rst | 2 ++
lib/ethdev/rte_flow.c | 1 +
lib/ethdev/rte_flow.h | 12 ++++++++++++
4 files changed, 24 insertions(+)
diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 7f50028eb7..042f6b34a6 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -612,6 +612,7 @@ enum index {
ACTION_PORT_REPRESENTOR_PORT_ID,
ACTION_REPRESENTED_PORT,
ACTION_REPRESENTED_PORT_ETHDEV_PORT_ID,
+ ACTION_SEND_TO_KERNEL,
};
/** Maximum size for pattern in struct rte_flow_item_raw. */
@@ -1872,6 +1873,7 @@ static const enum index next_action[] = {
ACTION_CONNTRACK_UPDATE,
ACTION_PORT_REPRESENTOR,
ACTION_REPRESENTED_PORT,
+ ACTION_SEND_TO_KERNEL,
ZERO,
};
@@ -6341,6 +6343,13 @@ static const struct token token_list[] = {
.help = "submit a list of associated actions for red",
.next = NEXT(next_action),
},
+ [ACTION_SEND_TO_KERNEL] = {
+ .name = "send_to_kernel",
+ .help = "send packets to kernel",
+ .priv = PRIV_ACTION(SEND_TO_KERNEL, 0),
+ .next = NEXT(NEXT_ENTRY(ACTION_NEXT)),
+ .call = parse_vc,
+ },
/* Top-level command. */
[ADD] = {
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index 330e34427d..c259c8239a 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -4189,6 +4189,8 @@ This section lists supported actions and their attributes, if any.
- ``ethdev_port_id {unsigned}``: ethdev port ID
+- ``send_to_kernel``: send packets to kernel.
+
Destroying flow rules
~~~~~~~~~~~~~~~~~~~~~
diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c
index 501be9d602..627c671ce4 100644
--- a/lib/ethdev/rte_flow.c
+++ b/lib/ethdev/rte_flow.c
@@ -259,6 +259,7 @@ static const struct rte_flow_desc_data rte_flow_desc_action[] = {
MK_FLOW_ACTION(CONNTRACK, sizeof(struct rte_flow_action_conntrack)),
MK_FLOW_ACTION(PORT_REPRESENTOR, sizeof(struct rte_flow_action_ethdev)),
MK_FLOW_ACTION(REPRESENTED_PORT, sizeof(struct rte_flow_action_ethdev)),
+ MK_FLOW_ACTION(SEND_TO_KERNEL, 0),
};
int
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index a79f1e7ef0..2c15279a3b 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -2879,6 +2879,18 @@ enum rte_flow_action_type {
* @see struct rte_flow_action_ethdev
*/
RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT,
+
+ /**
+ * Send packets to the kernel, without going to userspace at all.
+ * The packets will be received by the kernel driver sharing
+ * the same device as the DPDK port on which this action is
+ * configured. This action mostly suits the bifurcated driver
+ * model.
+ * This is an ingress non-transfer action only.
+ *
+ * No associated configuration structure.
+ */
+ RTE_FLOW_ACTION_TYPE_SEND_TO_KERNEL,
};
/**
--
2.27.0
* Re: [PATCH v4] ethdev: add send to kernel action
2022-09-29 14:54 ` [PATCH v4] " Michael Savisko
@ 2022-10-03 7:53 ` Andrew Rybchenko
2022-10-03 8:23 ` Ori Kam
2022-10-03 16:34 ` [PATCH v5] " Michael Savisko
1 sibling, 1 reply; 24+ messages in thread
From: Andrew Rybchenko @ 2022-10-03 7:53 UTC (permalink / raw)
To: Michael Savisko, dev
Cc: orika, viacheslavo, asafp, Aman Singh, Yuying Zhang,
Thomas Monjalon, Ferruh Yigit
On 9/29/22 17:54, Michael Savisko wrote:
> In some cases application may receive a packet that should have been
> received by the kernel. In this case application uses KNI or other means
> to transfer the packet to the kernel.
>
> With bifurcated driver we can have a rule to route packets matching
> a pattern (example: IPv4 packets) to the DPDK application and the rest
> of the traffic will be received by the kernel.
> But if we want to receive most of the traffic in DPDK except specific
> pattern (example: ICMP packets) that should be processed by the kernel,
> then it's easier to re-route these packets with a single rule.
>
> This commit introduces new rte_flow action which allows application to
> re-route packets directly to the kernel without software involvement.
>
> Add new testpmd rte_flow action 'send_to_kernel'. The application
> may use this action to route the packet to the kernel while still
> in the HW.
>
> Example with testpmd command:
>
> flow create 0 ingress priority 0 group 1 pattern eth type spec 0x0800
> type mask 0xffff / end actions send_to_kernel / end
>
> Signed-off-by: Michael Savisko <michaelsav@nvidia.com>
> Acked-by: Ori Kam <orika@nvidia.com>
> ---
> v4:
> - improve description comment above RTE_FLOW_ACTION_TYPE_SEND_TO_KERNEL
>
> v3:
> http://patches.dpdk.org/project/dpdk/patch/20220919155013.61473-1-michaelsav@nvidia.com/
>
> v2:
> http://patches.dpdk.org/project/dpdk/patch/20220914093219.11728-1-michaelsav@nvidia.com/
>
> ---
> app/test-pmd/cmdline_flow.c | 9 +++++++++
> doc/guides/testpmd_app_ug/testpmd_funcs.rst | 2 ++
> lib/ethdev/rte_flow.c | 1 +
> lib/ethdev/rte_flow.h | 12 ++++++++++++
> 4 files changed, 24 insertions(+)
>
> diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
> index 7f50028eb7..042f6b34a6 100644
> --- a/app/test-pmd/cmdline_flow.c
> +++ b/app/test-pmd/cmdline_flow.c
> @@ -612,6 +612,7 @@ enum index {
> ACTION_PORT_REPRESENTOR_PORT_ID,
> ACTION_REPRESENTED_PORT,
> ACTION_REPRESENTED_PORT_ETHDEV_PORT_ID,
> + ACTION_SEND_TO_KERNEL,
> };
>
> /** Maximum size for pattern in struct rte_flow_item_raw. */
> @@ -1872,6 +1873,7 @@ static const enum index next_action[] = {
> ACTION_CONNTRACK_UPDATE,
> ACTION_PORT_REPRESENTOR,
> ACTION_REPRESENTED_PORT,
> + ACTION_SEND_TO_KERNEL,
> ZERO,
> };
>
> @@ -6341,6 +6343,13 @@ static const struct token token_list[] = {
> .help = "submit a list of associated actions for red",
> .next = NEXT(next_action),
> },
> + [ACTION_SEND_TO_KERNEL] = {
> + .name = "send_to_kernel",
> + .help = "send packets to kernel",
> + .priv = PRIV_ACTION(SEND_TO_KERNEL, 0),
> + .next = NEXT(NEXT_ENTRY(ACTION_NEXT)),
> + .call = parse_vc,
> + },
>
> /* Top-level command. */
> [ADD] = {
> diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
> index 330e34427d..c259c8239a 100644
> --- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
> +++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
> @@ -4189,6 +4189,8 @@ This section lists supported actions and their attributes, if any.
>
> - ``ethdev_port_id {unsigned}``: ethdev port ID
>
> +- ``send_to_kernel``: send packets to kernel.
> +
> Destroying flow rules
> ~~~~~~~~~~~~~~~~~~~~~
>
> diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c
> index 501be9d602..627c671ce4 100644
> --- a/lib/ethdev/rte_flow.c
> +++ b/lib/ethdev/rte_flow.c
> @@ -259,6 +259,7 @@ static const struct rte_flow_desc_data rte_flow_desc_action[] = {
> MK_FLOW_ACTION(CONNTRACK, sizeof(struct rte_flow_action_conntrack)),
> MK_FLOW_ACTION(PORT_REPRESENTOR, sizeof(struct rte_flow_action_ethdev)),
> MK_FLOW_ACTION(REPRESENTED_PORT, sizeof(struct rte_flow_action_ethdev)),
> + MK_FLOW_ACTION(SEND_TO_KERNEL, 0),
> };
>
> int
> diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
> index a79f1e7ef0..2c15279a3b 100644
> --- a/lib/ethdev/rte_flow.h
> +++ b/lib/ethdev/rte_flow.h
> @@ -2879,6 +2879,18 @@ enum rte_flow_action_type {
> * @see struct rte_flow_action_ethdev
> */
> RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT,
> +
> + /**
> + * Send packets to the kernel, without going to userspace at all.
> + * The packets will be received by the kernel driver sharing
> + * the same device as the DPDK port on which this action is
> + * configured. This action is mostly suits bifurcated driver
> + * model.
> + * This is an ingress non-transfer action only.
Maybe we should not limit the definition to ingress only?
It could be useful on egress as a way to reroute a packet
back to the kernel.
> + *
> + * No associated configuration structure.
> + */
> + RTE_FLOW_ACTION_TYPE_SEND_TO_KERNEL,
> };
>
> /**
* RE: [PATCH v4] ethdev: add send to kernel action
2022-10-03 7:53 ` Andrew Rybchenko
@ 2022-10-03 8:23 ` Ori Kam
2022-10-03 9:44 ` Andrew Rybchenko
0 siblings, 1 reply; 24+ messages in thread
From: Ori Kam @ 2022-10-03 8:23 UTC (permalink / raw)
To: Andrew Rybchenko, Michael Savisko, dev
Cc: Slava Ovsiienko, Asaf Penso, Aman Singh, Yuying Zhang,
NBU-Contact-Thomas Monjalon (EXTERNAL),
Ferruh Yigit
Hi Andrew
> -----Original Message-----
> From: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> Sent: Monday, 3 October 2022 10:54
> On 9/29/22 17:54, Michael Savisko wrote:
> > [...]
> > +
> > + /**
> > + * Send packets to the kernel, without going to userspace at all.
> > + * The packets will be received by the kernel driver sharing
> > + * the same device as the DPDK port on which this action is
> > + * configured. This action is mostly suits bifurcated driver
> > + * model.
> > + * This is an ingress non-transfer action only.
>
> Maybe we should not limit the definition to ingress only?
> It could be useful on egress as a way to reroute packets
> back to the kernel.
>
Interesting, but there are no kernel queues on egress that can receive packets (by definition of egress).
Do you mean that this will also do a loopback from the egress back to the ingress of the same port and then
send the packet to the kernel?
If so, I think we need a new action "loop_back".
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [PATCH v4] ethdev: add send to kernel action
2022-10-03 8:23 ` Ori Kam
@ 2022-10-03 9:44 ` Andrew Rybchenko
2022-10-03 9:57 ` Ori Kam
0 siblings, 1 reply; 24+ messages in thread
From: Andrew Rybchenko @ 2022-10-03 9:44 UTC (permalink / raw)
To: Ori Kam, Michael Savisko, dev
Cc: Slava Ovsiienko, Asaf Penso, Aman Singh, Yuying Zhang,
NBU-Contact-Thomas Monjalon (EXTERNAL),
Ferruh Yigit
On 10/3/22 11:23, Ori Kam wrote:
> Hi Andrew
>
> [...]
>
> Interesting, but there are no kernel queues on egress that can receive packets (by definition of egress).
> Do you mean that this will also do a loopback from the egress back to the ingress of the same port and then
> send the packet to the kernel?
> If so, I think we need a new action "loop_back".
Yes, I meant intercepting the packet on egress and sending it to the kernel.
But then we still need loopback+send_to_kernel: loopback itself
cannot send to the kernel. Moreover, it would be two rules:
loopback on egress plus send-to-kernel on ingress. Is it
really worth it? I'm not sure. Yes, it sounds a bit
better from an architectural point of view, but I'm still unsure.
I'd allow send-to-kernel on egress. Up to you.
>
>>
>>> + *
>>> + * No associated configuration structure.
>>> + */
>>> + RTE_FLOW_ACTION_TYPE_SEND_TO_KERNEL,
>>> };
>>>
>>> /**
>
^ permalink raw reply [flat|nested] 24+ messages in thread
* RE: [PATCH v4] ethdev: add send to kernel action
2022-10-03 9:44 ` Andrew Rybchenko
@ 2022-10-03 9:57 ` Ori Kam
2022-10-03 10:47 ` Andrew Rybchenko
0 siblings, 1 reply; 24+ messages in thread
From: Ori Kam @ 2022-10-03 9:57 UTC (permalink / raw)
To: Andrew Rybchenko, Michael Savisko, dev
Cc: Slava Ovsiienko, Asaf Penso, Aman Singh, Yuying Zhang,
NBU-Contact-Thomas Monjalon (EXTERNAL),
Ferruh Yigit
Hi Andrew,
> -----Original Message-----
> From: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> Sent: Monday, 3 October 2022 12:44
>
> On 10/3/22 11:23, Ori Kam wrote:
> [...]
>
> Yes, I meant intercepting the packet on egress and sending it to the kernel.
> But then we still need loopback+send_to_kernel: loopback itself
> cannot send to the kernel. Moreover, it would be two rules:
> loopback on egress plus send-to-kernel on ingress. Is it
> really worth it? I'm not sure. Yes, it sounds a bit
> better from an architectural point of view, but I'm still unsure.
> I'd allow send-to-kernel on egress. Up to you.
>
It looks more correct with loop_back on the egress and send-to-kernel on the ingress.
I suggest keeping the current design,
and if we see that we can merge those two commands, we will change it.
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [PATCH v4] ethdev: add send to kernel action
2022-10-03 9:57 ` Ori Kam
@ 2022-10-03 10:47 ` Andrew Rybchenko
2022-10-03 11:06 ` Ori Kam
0 siblings, 1 reply; 24+ messages in thread
From: Andrew Rybchenko @ 2022-10-03 10:47 UTC (permalink / raw)
To: Ori Kam, Michael Savisko, dev
Cc: Slava Ovsiienko, Asaf Penso, Aman Singh, Yuying Zhang,
NBU-Contact-Thomas Monjalon (EXTERNAL),
Ferruh Yigit
On 10/3/22 12:57, Ori Kam wrote:
> Hi Andrew,
>
> [...]
>
> It looks more correct with loop_back on the egress and send-to-kernel on the ingress.
> I suggest keeping the current design,
> and if we see that we can merge those two commands, we will change it.
OK. And the last question: do we need to announce it in release
notes?
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
^ permalink raw reply [flat|nested] 24+ messages in thread
* RE: [PATCH v4] ethdev: add send to kernel action
2022-10-03 10:47 ` Andrew Rybchenko
@ 2022-10-03 11:06 ` Ori Kam
2022-10-03 11:08 ` Andrew Rybchenko
0 siblings, 1 reply; 24+ messages in thread
From: Ori Kam @ 2022-10-03 11:06 UTC (permalink / raw)
To: Andrew Rybchenko, Michael Savisko, dev
Cc: Slava Ovsiienko, Asaf Penso, Aman Singh, Yuying Zhang,
NBU-Contact-Thomas Monjalon (EXTERNAL),
Ferruh Yigit
Hi Andrew,
> -----Original Message-----
> From: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> Sent: Monday, 3 October 2022 13:47
>
> On 10/3/22 12:57, Ori Kam wrote:
> [...]
>
> OK. And the last question: do we need to announce it in release
> notes?
>
+1
> Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [PATCH v4] ethdev: add send to kernel action
2022-10-03 11:06 ` Ori Kam
@ 2022-10-03 11:08 ` Andrew Rybchenko
0 siblings, 0 replies; 24+ messages in thread
From: Andrew Rybchenko @ 2022-10-03 11:08 UTC (permalink / raw)
To: Ori Kam, Michael Savisko, dev
Cc: Slava Ovsiienko, Asaf Penso, Aman Singh, Yuying Zhang,
NBU-Contact-Thomas Monjalon (EXTERNAL),
Ferruh Yigit
On 10/3/22 14:06, Ori Kam wrote:
> Hi Andrew,
>
>> -----Original Message-----
>> From: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
>> Sent: Monday, 3 October 2022 13:47
>>
>> On 10/3/22 12:57, Ori Kam wrote:
>>> [...]
>>
>> OK. And the last question: do we need to announce it in release
>> notes?
>>
>
> +1
Michael,
please send v5 with the release notes update. Don't forget to
rebase it on current next-net/main, please.
Thanks,
Andrew.
>
>> Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
^ permalink raw reply [flat|nested] 24+ messages in thread
* [PATCH v5] ethdev: add send to kernel action
2022-09-29 14:54 ` [PATCH v4] " Michael Savisko
2022-10-03 7:53 ` Andrew Rybchenko
@ 2022-10-03 16:34 ` Michael Savisko
2022-10-04 7:48 ` Andrew Rybchenko
1 sibling, 1 reply; 24+ messages in thread
From: Michael Savisko @ 2022-10-03 16:34 UTC (permalink / raw)
To: dev
Cc: michaelsav, orika, viacheslavo, asafp, Aman Singh, Yuying Zhang,
Thomas Monjalon, Ferruh Yigit, Andrew Rybchenko
In some cases application may receive a packet that should have been
received by the kernel. In this case application uses KNI or other means
to transfer the packet to the kernel.
With bifurcated driver we can have a rule to route packets matching
a pattern (example: IPv4 packets) to the DPDK application and the rest
of the traffic will be received by the kernel.
But if we want to receive most of the traffic in DPDK except specific
pattern (example: ICMP packets) that should be processed by the kernel,
then it's easier to re-route these packets with a single rule.
This commit introduces new rte_flow action which allows application to
re-route packets directly to the kernel without software involvement.
Add new testpmd rte_flow action 'send_to_kernel'. The application
may use this action to route the packet to the kernel while still
in the HW.
Example with testpmd command:
flow create 0 ingress priority 0 group 1 pattern eth type spec 0x0800
type mask 0xffff / end actions send_to_kernel / end
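
For reference, an application could install an equivalent rule through the
rte_flow API roughly as in the sketch below. This is a minimal illustration
only: the helper name is made up for the example, the port is assumed to be
already configured and started, and error handling is left to the caller.

#include <rte_ethdev.h>
#include <rte_flow.h>
#include <rte_byteorder.h>

/* Route IPv4 traffic (ether type 0x0800) to the kernel while the rest of
 * the traffic keeps going to the DPDK application. */
static struct rte_flow *
ipv4_to_kernel_rule(uint16_t port_id, struct rte_flow_error *error)
{
	struct rte_flow_attr attr = { .group = 1, .priority = 0, .ingress = 1 };
	struct rte_flow_item_eth eth_spec = { .hdr.ether_type = RTE_BE16(0x0800) };
	struct rte_flow_item_eth eth_mask = { .hdr.ether_type = RTE_BE16(0xffff) };
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH, .spec = &eth_spec, .mask = &eth_mask },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	/* SEND_TO_KERNEL takes no configuration structure. */
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_SEND_TO_KERNEL },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};

	return rte_flow_create(port_id, &attr, pattern, actions, error);
}
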
Signed-off-by: Michael Savisko <michaelsav@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
v5:
- added description of the feature to release notes
v4:
- improve description comment above RTE_FLOW_ACTION_TYPE_SEND_TO_KERNEL
http://patches.dpdk.org/project/dpdk/patch/20220929145445.181369-1-michaelsav@nvidia.com/
v3:
http://patches.dpdk.org/project/dpdk/patch/20220919155013.61473-1-michaelsav@nvidia.com/
v2:
http://patches.dpdk.org/project/dpdk/patch/20220914093219.11728-1-michaelsav@nvidia.com/
---
app/test-pmd/cmdline_flow.c | 9 +++++++++
doc/guides/rel_notes/release_22_11.rst | 5 +++++
doc/guides/testpmd_app_ug/testpmd_funcs.rst | 3 +++
lib/ethdev/rte_flow.c | 1 +
lib/ethdev/rte_flow.h | 12 ++++++++++++
5 files changed, 30 insertions(+)
diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 91c6950b60..9e299b8335 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -622,6 +622,7 @@ enum index {
ACTION_PORT_REPRESENTOR_PORT_ID,
ACTION_REPRESENTED_PORT,
ACTION_REPRESENTED_PORT_ETHDEV_PORT_ID,
+ ACTION_SEND_TO_KERNEL,
};
/** Maximum size for pattern in struct rte_flow_item_raw. */
@@ -1888,6 +1889,7 @@ static const enum index next_action[] = {
ACTION_CONNTRACK_UPDATE,
ACTION_PORT_REPRESENTOR,
ACTION_REPRESENTED_PORT,
+ ACTION_SEND_TO_KERNEL,
ZERO,
};
@@ -6098,6 +6100,13 @@ static const struct token token_list[] = {
width)),
.call = parse_vc_conf,
},
+ [ACTION_SEND_TO_KERNEL] = {
+ .name = "send_to_kernel",
+ .help = "send packets to kernel",
+ .priv = PRIV_ACTION(SEND_TO_KERNEL, 0),
+ .next = NEXT(NEXT_ENTRY(ACTION_NEXT)),
+ .call = parse_vc,
+ },
/* Top level command. */
[SET] = {
.name = "set",
diff --git a/doc/guides/rel_notes/release_22_11.rst b/doc/guides/rel_notes/release_22_11.rst
index c6bcb45100..7783eeb489 100644
--- a/doc/guides/rel_notes/release_22_11.rst
+++ b/doc/guides/rel_notes/release_22_11.rst
@@ -114,6 +114,11 @@ New Features
* Added ``rte_event_eth_tx_adapter_queue_stop`` to stop the Tx Adapter
from enqueueing any packets to the Tx queue.
+* **Added new rte_flow action SEND_TO_KERNEL.**
+
+ Added new rte_flow action which allows application to re-route packets
+ directly to the kernel without software involvement.
+
Removed Items
-------------
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index b4e9d978ba..a5b6fb12e3 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -3691,6 +3691,9 @@ This section lists supported pattern items and their attributes, if any.
- ``color {value}``: Meter color value(green/yellow/red).
+- ``send_to_kernel``: send packets to kernel.
+
+
Actions list
^^^^^^^^^^^^
diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c
index fd802f87a2..a6b1bf21c4 100644
--- a/lib/ethdev/rte_flow.c
+++ b/lib/ethdev/rte_flow.c
@@ -257,6 +257,7 @@ static const struct rte_flow_desc_data rte_flow_desc_action[] = {
MK_FLOW_ACTION(PORT_REPRESENTOR, sizeof(struct rte_flow_action_ethdev)),
MK_FLOW_ACTION(REPRESENTED_PORT, sizeof(struct rte_flow_action_ethdev)),
MK_FLOW_ACTION(METER_MARK, sizeof(struct rte_flow_action_meter_mark)),
+ MK_FLOW_ACTION(SEND_TO_KERNEL, 0),
};
int
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index 49aaf05b67..c895c574cd 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -2797,6 +2797,18 @@ enum rte_flow_action_type {
* See file rte_mtr.h for MTR profile object configuration.
*/
RTE_FLOW_ACTION_TYPE_METER_MARK,
+
+ /**
+ * Send packets to the kernel, without going to userspace at all.
+ * The packets will be received by the kernel driver sharing
+ * the same device as the DPDK port on which this action is
+ * configured. This action is mostly suits bifurcated driver
+ * model.
+ * This is an ingress non-transfer action only.
+ *
+ * No associated configuration structure.
+ */
+ RTE_FLOW_ACTION_TYPE_SEND_TO_KERNEL,
};
/**
--
2.27.0
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [PATCH v5] ethdev: add send to kernel action
2022-10-03 16:34 ` [PATCH v5] " Michael Savisko
@ 2022-10-04 7:48 ` Andrew Rybchenko
0 siblings, 0 replies; 24+ messages in thread
From: Andrew Rybchenko @ 2022-10-04 7:48 UTC (permalink / raw)
To: Michael Savisko, dev
Cc: orika, viacheslavo, asafp, Aman Singh, Yuying Zhang,
Thomas Monjalon, Ferruh Yigit
On 10/3/22 19:34, Michael Savisko wrote:
> In some cases application may receive a packet that should have been
> received by the kernel. In this case application uses KNI or other means
> to transfer the packet to the kernel.
>
> With bifurcated driver we can have a rule to route packets matching
> a pattern (example: IPv4 packets) to the DPDK application and the rest
> of the traffic will be received by the kernel.
> But if we want to receive most of the traffic in DPDK except specific
> pattern (example: ICMP packets) that should be processed by the kernel,
> then it's easier to re-route these packets with a single rule.
>
> This commit introduces new rte_flow action which allows application to
> re-route packets directly to the kernel without software involvement.
>
> Add new testpmd rte_flow action 'send_to_kernel'. The application
> may use this action to route the packet to the kernel while still
> in the HW.
>
> Example with testpmd command:
>
> flow create 0 ingress priority 0 group 1 pattern eth type spec 0x0800
> type mask 0xffff / end actions send_to_kernel / end
>
> Signed-off-by: Michael Savisko <michaelsav@nvidia.com>
> Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Applied to dpdk-next-net/main, thanks.
^ permalink raw reply [flat|nested] 24+ messages in thread
end of thread, other threads:[~2022-10-04 7:48 UTC | newest]
Thread overview: 24+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-08-11 11:35 [RFC] ethdev: add send to kernel action Michael Savisko
2022-08-15 12:02 ` Ori Kam
2022-08-16 9:50 ` Ferruh Yigit
2022-09-12 13:32 ` Thomas Monjalon
2022-09-12 13:39 ` Michael Savisko
2022-09-12 14:41 ` Andrew Rybchenko
2022-09-13 12:09 ` Michael Savisko
2022-09-14 9:57 ` Thomas Monjalon
2022-09-14 9:32 ` [PATCH v2] " Michael Savisko
2022-09-19 15:50 ` [PATCH v3] " Michael Savisko
2022-09-20 11:08 ` Ori Kam
2022-09-26 13:06 ` Andrew Rybchenko
2022-09-28 14:30 ` Michael Savisko
2022-09-29 14:54 ` [PATCH v4] " Michael Savisko
2022-10-03 7:53 ` Andrew Rybchenko
2022-10-03 8:23 ` Ori Kam
2022-10-03 9:44 ` Andrew Rybchenko
2022-10-03 9:57 ` Ori Kam
2022-10-03 10:47 ` Andrew Rybchenko
2022-10-03 11:06 ` Ori Kam
2022-10-03 11:08 ` Andrew Rybchenko
2022-10-03 16:34 ` [PATCH v5] " Michael Savisko
2022-10-04 7:48 ` Andrew Rybchenko
2022-09-20 10:57 ` [PATCH v2] " Ori Kam
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).