From: Yongseok Koh <yskoh@mellanox.com>
To: Matan Azrad <matan@mellanox.com>
Cc: "Shahaf Shuler" <shahafs@mellanox.com>,
"Adrien Mazarguil" <adrien.mazarguil@6wind.com>,
"Nélio Laranjeiro" <nelio.laranjeiro@6wind.com>,
"dev@dpdk.org" <dev@dpdk.org>,
"stable@dpdk.org" <stable@dpdk.org>,
"Xueming(Steven) Li" <xuemingl@mellanox.com>
Subject: Re: [dpdk-dev] [PATCH] net/mlx5: fix GRE flow rule
Date: Wed, 23 May 2018 11:34:01 -0700 [thread overview]
Message-ID: <20180523183359.GA13339@yongseok-MBP.local> (raw)
In-Reply-To: <VI1PR0501MB26080435D388537111F9311CD26B0@VI1PR0501MB2608.eurprd05.prod.outlook.com>
On Wed, May 23, 2018 at 04:45:33AM -0700, Matan Azrad wrote:
>
> Hi Yongseok
> + Steven
>
> From: Yongseok Koh
> > On Tue, May 22, 2018 at 10:36:43PM -0700, Matan Azrad wrote:
> > > Hi Yongseok
> > >
> > > From: Yongseok Koh
> > > > Creating a flow having pattern from the middle of a packet is
> > > > allowed. For example,
> > > >
> > > > testpmd> flow create 0 ingress pattern vxlan vni is 20 / end actions ...
> > > >
> > > > Device can parse GRE header but without proper support from library
> > > > and firmware (HAVE_IBV_DEVICE_MPLS_SUPPORT), a field in GRE header
> > > > can't be specified when creating a rule. As a result, the following
> > > > rule will be interpreted as a wildcard rule, which always matches any
> > packet.
> > > >
> > > > testpmd> flow create 0 ingress pattern gre / end actions ...
> > > >
> > > > Fixes: 96c6c65a10d2 ("net/mlx5: support GRE tunnel flow")
> > > > Fixes: 1f106da2bf7b ("net/mlx5: support MPLS-in-GRE and
> > > > MPLS-in-UDP")
> > > > Cc: stable@dpdk.org
> > > >
> > > > Signed-off-by: Yongseok Koh <yskoh@mellanox.com>
> > > > ---
> > > > drivers/net/mlx5/mlx5_flow.c | 6 ++++--
> > > > 1 file changed, 4 insertions(+), 2 deletions(-)
> > > >
> > > > diff --git a/drivers/net/mlx5/mlx5_flow.c
> > > > b/drivers/net/mlx5/mlx5_flow.c index 994be05be..526fe6b0e 100644
> > > > --- a/drivers/net/mlx5/mlx5_flow.c
> > > > +++ b/drivers/net/mlx5/mlx5_flow.c
> > > > @@ -330,9 +330,11 @@ static const enum rte_flow_action_type valid_actions[] = {
> > > >  static const struct mlx5_flow_items mlx5_flow_items[] = {
> > > > [RTE_FLOW_ITEM_TYPE_END] = {
> > > > .items = ITEMS(RTE_FLOW_ITEM_TYPE_ETH,
> > > > +#ifdef HAVE_IBV_DEVICE_MPLS_SUPPORT
> > >
> > > The GRE item was here even before the MPLSoGRE support
> >
> > Yes, this bug has existed before adding MPLSoGRE support.
> >
> > > so I think that this is not the correct fix, and it can even hurt
> > > the GRE support for the current customers who use it.
> >
> > How can it hurt? Please clarify.
>
> Someone who uses the following flow and does not have the new Verbs MPLS version:
> flow create 0 ingress pattern gre / ipv4 src is X / end actions ...
> (ipv4 src, or any other inner specification)
>
> This flow will probably match any supported tunnel packet with inner ipv4 src = X.
Do you think we should compromise? This is logically wrong for sure. Let me give
you a specific example.
If I create the following two flows,
flow create 0 ingress pattern vxlan vni is 10 / end actions queue index 3 / mark id 10 / end
flow create 0 ingress pattern vxlan vni is 20 / end actions queue index 3 / mark id 20 / end
The following two packets will match correctly and have flow ID (10 and 20)
according to VNI.
Ether()/IP()/UDP()/VXLAN(vni=10)/Ether()/IPv6()
Ether()/IP()/UDP()/VXLAN(vni=20)/Ether()/IPv6()
However, if three flows are created as follows,
flow create 0 ingress pattern gre / ipv6 / end actions queue index 3 / mark id 2 / end
flow create 0 ingress pattern vxlan vni is 10 / end actions queue index 3 / mark id 10 / end
flow create 0 ingress pattern vxlan vni is 20 / end actions queue index 3 / mark id 20 / end
The packets will hit the first flow regardless of VNI and get the wrong flow ID.
That's why I have to drop this specific GRE case. Whoever is using this kind of
GRE flow has a buggy rule anyway; they have to know about it and change it.
> It may be enough for the current user (who probably uses only one tunnel type at a time).
Router/switch-like applications can certainly have multiple tunnel types. I'm
not sure who 'the current user' is, but I don't think we can make such an
assumption. I don't want to allow users to create faulty flows.
> > > Looks like you must specify at least one spec in the GRE item to apply it
> > > correctly, as you did for VXLAN. Can you try an empty VXLAN and a full GRE
> > > (with the protocol field)?
> >
> > That's exactly the reason why I'm taking this out. If you look at the code, it
> > doesn't even set any field for GRE if HAVE_IBV_DEVICE_MPLS_SUPPORT isn't
> > defined. Thus, the rule is treated as a wildcard (all-matching) rule. But if
> > HAVE_IBV_DEVICE_MPLS_SUPPORT is defined, such a pattern can be allowed.
>
> Yes, so your GRE flow will not work even if you have MPLS support.
I'm not sure what you meant, but with IBV MPLS support, I think IBV_FLOW_SPEC_GRE
will make things right. Without the support, IBV_FLOW_SPEC_VXLAN_TUNNEL is set
even for GRE flows.
> I think the issue exists generally in all the items:
> You should not configure an item if it has neither at least one
> specification of its own nor a preceding item that points to it via the "next protocol" field.
>
> In the case of VXLAN tunnels, we just don't allow them without a self-specification.
> In the case of GRE, we force the next-protocol field of the previous item, but only when it exists.
> In the case of inner eth, vlan, ipv4, ipv6, udp, and tcp, we don't force anything.
>
> I think we need a global fix for all, this is probably the root cause.
Well, the root cause is that the old device/library doesn't differentiate GRE
from VXLAN when creating flows.
Thanks,
Yongseok
> > Having a 'vxlan' pattern without a VNI isn't allowed by the mlx5 PMD because
> > a zero VNI is never accepted.
> >
> > Thanks,
> > Yongseok
> >
> > > > + RTE_FLOW_ITEM_TYPE_GRE,
> > > > +#endif
> > > > RTE_FLOW_ITEM_TYPE_VXLAN,
> > > > - RTE_FLOW_ITEM_TYPE_VXLAN_GPE,
> > > > - RTE_FLOW_ITEM_TYPE_GRE),
> > > > + RTE_FLOW_ITEM_TYPE_VXLAN_GPE),
> > > > },
> > > > [RTE_FLOW_ITEM_TYPE_ETH] = {
> > > > .items = ITEMS(RTE_FLOW_ITEM_TYPE_VLAN,
> > >
> > >
> > >