DPDK patches and discussions
* [dpdk-dev] [PATCH] net/mlx5: fix the eCPRI common header endianness
@ 2020-11-03  5:41 Bing Zhao
  2020-11-05 15:01 ` Raslan Darawsheh
  2020-11-06 11:19 ` Ferruh Yigit
  0 siblings, 2 replies; 6+ messages in thread
From: Bing Zhao @ 2020-11-03  5:41 UTC (permalink / raw)
  To: viacheslavo, matan; +Cc: dev, orika, rasland, stable

The input header of an RTE flow item is in network byte order. On
a little-endian host, the bit-field order follows the byte order,
so when checking an eCPRI message type, the wrong field would be
selected. Right now, since the whole u32 is checked and the
subsequent handling is the same for all supported types, there is
no functional risk, but it is still an error to fix.

Fixes: daa38a8924a0 ("net/mlx5: add flow translation of eCPRI header")

Cc: stable@dpdk.org

Signed-off-by: Bing Zhao <bingz@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
---
 drivers/net/mlx5/mlx5_flow_dv.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 01b6e7c..7af01e9 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -7798,6 +7798,7 @@ struct mlx5_hlist_entry *
 	struct mlx5_priv *priv = dev->data->dev_private;
 	const struct rte_flow_item_ecpri *ecpri_m = item->mask;
 	const struct rte_flow_item_ecpri *ecpri_v = item->spec;
+	struct rte_ecpri_common_hdr common;
 	void *misc4_m = MLX5_ADDR_OF(fte_match_param, matcher,
 				     misc_parameters_4);
 	void *misc4_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters_4);
@@ -7838,7 +7839,8 @@ struct mlx5_hlist_entry *
 	 * Some wildcard rules only matching type field should be supported.
 	 */
 	if (ecpri_m->hdr.dummy[0]) {
-		switch (ecpri_v->hdr.common.type) {
+		common.u32 = rte_be_to_cpu_32(ecpri_v->hdr.common.u32);
+		switch (common.type) {
 		case RTE_ECPRI_MSG_TYPE_IQ_DATA:
 		case RTE_ECPRI_MSG_TYPE_RTC_CTRL:
 		case RTE_ECPRI_MSG_TYPE_DLY_MSR:
-- 
1.8.3.1
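For reference, the mis-selection described in the commit message can be reproduced with a minimal, self-contained C sketch. `ecpri_common_hdr`, `be_to_cpu_32`, `type_direct` and `type_fixed` below are plain-C stand-ins for illustration, not the real DPDK definitions; the sketch assumes a little-endian host and the GCC/clang low-to-high bit-field allocation that DPDK itself relies on:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Plain-C stand-in for struct rte_ecpri_common_hdr on a little-endian
 * host; the bit-field order mirrors the LE layout and assumes GCC/clang
 * low-to-high bit-field allocation within the uint32_t. */
struct ecpri_common_hdr {
	union {
		uint32_t u32;	/* raw 32 bits, held in network (BE) order */
		struct {
			uint32_t size:16;
			uint32_t type:8;
			uint32_t c:1;
			uint32_t res:3;
			uint32_t revision:4;
		};
	};
};

/* Stand-in for rte_be_to_cpu_32() on a little-endian CPU. */
static uint32_t be_to_cpu_32(uint32_t v)
{
	return ((v & 0x000000ffu) << 24) | ((v & 0x0000ff00u) << 8) |
	       ((v & 0x00ff0000u) >> 8)  | ((v & 0xff000000u) >> 24);
}

/* The old, buggy read: take 'type' straight from the BE-stored u32. */
static uint8_t type_direct(const uint8_t wire[4])
{
	struct ecpri_common_hdr hdr;

	memcpy(&hdr.u32, wire, 4);
	return hdr.type;
}

/* The fixed read: convert to CPU order first, as the patch does. */
static uint8_t type_fixed(const uint8_t wire[4])
{
	struct ecpri_common_hdr hdr;

	memcpy(&hdr.u32, wire, 4);
	hdr.u32 = be_to_cpu_32(hdr.u32);
	return hdr.type;
}
```

For the wire bytes 10 02 00 04 (revision 1, type 2 = RTC_CTRL, size 4), the direct bit-field read lands on the upper byte of the size and returns 0x00, while the converted read returns the real type 0x02.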



* Re: [dpdk-dev] [PATCH] net/mlx5: fix the eCPRI common header endianness
  2020-11-03  5:41 [dpdk-dev] [PATCH] net/mlx5: fix the eCPRI common header endianness Bing Zhao
@ 2020-11-05 15:01 ` Raslan Darawsheh
  2020-11-06 11:19 ` Ferruh Yigit
  1 sibling, 0 replies; 6+ messages in thread
From: Raslan Darawsheh @ 2020-11-05 15:01 UTC (permalink / raw)
  To: Bing Zhao, Slava Ovsiienko, Matan Azrad; +Cc: dev, Ori Kam, stable

Hi,
> -----Original Message-----
> From: Bing Zhao <bingz@nvidia.com>
> Sent: Tuesday, November 3, 2020 7:42 AM
> To: Slava Ovsiienko <viacheslavo@nvidia.com>; Matan Azrad
> <matan@nvidia.com>
> Cc: dev@dpdk.org; Ori Kam <orika@nvidia.com>; Raslan Darawsheh
> <rasland@nvidia.com>; stable@dpdk.org
> Subject: [PATCH] net/mlx5: fix the eCPRI common header endianness
> 
> The input header of an RTE flow item is in network byte order. On
> a little-endian host, the bit-field order follows the byte order,
> so when checking an eCPRI message type, the wrong field would be
> selected. Right now, since the whole u32 is checked and the
> subsequent handling is the same for all supported types, there is
> no functional risk, but it is still an error to fix.
> 
> Fixes: daa38a8924a0 ("net/mlx5: add flow translation of eCPRI header")
> 
> Cc: stable@dpdk.org
> 
> Signed-off-by: Bing Zhao <bingz@nvidia.com>
> Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
> ---
>  drivers/net/mlx5/mlx5_flow_dv.c | 4 +++-
>  1 file changed, 3 insertions(+), 1 deletion(-)
> 
Patch applied to next-net-mlx,

Kindest regards,
Raslan Darawsheh


* Re: [dpdk-dev] [PATCH] net/mlx5: fix the eCPRI common header endianness
  2020-11-03  5:41 [dpdk-dev] [PATCH] net/mlx5: fix the eCPRI common header endianness Bing Zhao
  2020-11-05 15:01 ` Raslan Darawsheh
@ 2020-11-06 11:19 ` Ferruh Yigit
  2020-11-06 14:10   ` Bing Zhao
  1 sibling, 1 reply; 6+ messages in thread
From: Ferruh Yigit @ 2020-11-06 11:19 UTC (permalink / raw)
  To: Bing Zhao, viacheslavo, matan; +Cc: dev, orika, rasland, stable

On 11/3/2020 5:41 AM, Bing Zhao wrote:
> The input header of an RTE flow item is in network byte order. On
> a little-endian host, the bit-field order follows the byte order,
> so when checking an eCPRI message type, the wrong field would be
> selected. Right now, since the whole u32 is checked and the
> subsequent handling is the same for all supported types, there is
> no functional risk, but it is still an error to fix.
 >

Isn't 'ecpri_v' filled by the application (CPU)? Why is there an 
assumption that it is big endian?

And even if it is big endian, it should have been broken previously, 
right? Since it was checking the wrong field as 'type', as you said, why 
was there no functional risk?

> 
> Fixes: daa38a8924a0 ("net/mlx5: add flow translation of eCPRI header")
> 
> Cc: stable@dpdk.org
> 
> Signed-off-by: Bing Zhao <bingz@nvidia.com>
> Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
> ---
>   drivers/net/mlx5/mlx5_flow_dv.c | 4 +++-
>   1 file changed, 3 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
> index 01b6e7c..7af01e9 100644
> --- a/drivers/net/mlx5/mlx5_flow_dv.c
> +++ b/drivers/net/mlx5/mlx5_flow_dv.c
> @@ -7798,6 +7798,7 @@ struct mlx5_hlist_entry *
>   	struct mlx5_priv *priv = dev->data->dev_private;
>   	const struct rte_flow_item_ecpri *ecpri_m = item->mask;
>   	const struct rte_flow_item_ecpri *ecpri_v = item->spec;
> +	struct rte_ecpri_common_hdr common;
>   	void *misc4_m = MLX5_ADDR_OF(fte_match_param, matcher,
>   				     misc_parameters_4);
>   	void *misc4_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters_4);
> @@ -7838,7 +7839,8 @@ struct mlx5_hlist_entry *
>   	 * Some wildcard rules only matching type field should be supported.
>   	 */
>   	if (ecpri_m->hdr.dummy[0]) {
> -		switch (ecpri_v->hdr.common.type) {
> +		common.u32 = rte_be_to_cpu_32(ecpri_v->hdr.common.u32);
> +		switch (common.type) {
>   		case RTE_ECPRI_MSG_TYPE_IQ_DATA:
>   		case RTE_ECPRI_MSG_TYPE_RTC_CTRL:
>   		case RTE_ECPRI_MSG_TYPE_DLY_MSR:
> 



* Re: [dpdk-dev] [PATCH] net/mlx5: fix the eCPRI common header endianness
  2020-11-06 11:19 ` Ferruh Yigit
@ 2020-11-06 14:10   ` Bing Zhao
  2020-11-06 17:41     ` Ferruh Yigit
  0 siblings, 1 reply; 6+ messages in thread
From: Bing Zhao @ 2020-11-06 14:10 UTC (permalink / raw)
  To: Ferruh Yigit, Slava Ovsiienko, Matan Azrad
  Cc: dev, Ori Kam, Raslan Darawsheh, stable

Hi Ferruh,

Thanks for your review and comments, PSB.

> -----Original Message-----
> From: Ferruh Yigit <ferruh.yigit@intel.com>
> Sent: Friday, November 6, 2020 7:20 PM
> To: Bing Zhao <bingz@nvidia.com>; Slava Ovsiienko
> <viacheslavo@nvidia.com>; Matan Azrad <matan@nvidia.com>
> Cc: dev@dpdk.org; Ori Kam <orika@nvidia.com>; Raslan Darawsheh
> <rasland@nvidia.com>; stable@dpdk.org
> Subject: Re: [dpdk-dev] [PATCH] net/mlx5: fix the eCPRI common
> header endianness
> 
> On 11/3/2020 5:41 AM, Bing Zhao wrote:
> > The input header of an RTE flow item is in network byte order. On
> > a little-endian host, the bit-field order follows the byte order,
> > so when checking an eCPRI message type, the wrong field would be
> > selected. Right now, since the whole u32 is checked and the
> > subsequent handling is the same for all supported types, there is
> > no functional risk, but it is still an error to fix.
>  >
> 
> Isn't 'ecpri_v' filled by the application (CPU)? Why is there an
> assumption that it is big endian?
> 

Yes, this is filled by the application SW. I checked the current code of other headers and the current implementation.
1. In the testpmd flow example, all network-stack headers such as IPv4 are translated to big endian via "ARGS_ENTRY_HTON".
2. All fields are typed "rte_be*_t"; even though it is only a typedef, they should be treated as BE.
3. rte_flow just passes the flow item pointers to the PMDs, so in the PMD the headers must be treated as BE.
So, to my understanding, this is not an assumption but a constraint.
Correct me if my understanding is wrong.
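The convention described above can be illustrated with a short plain-C sketch (`cpu_to_be_32` is a local stand-in for `rte_cpu_to_be_32()`, and a little-endian host is assumed): when the application stores the common header in network order, the bytes in memory equal the bytes on the wire, which is what the `rte_be32_t` field type promises the PMD.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Stand-in for rte_cpu_to_be_32() on a little-endian host. */
static uint32_t cpu_to_be_32(uint32_t v)
{
	return ((v & 0x000000ffu) << 24) | ((v & 0x0000ff00u) << 8) |
	       ((v & 0x00ff0000u) >> 8)  | ((v & 0xff000000u) >> 24);
}

/* An application building an eCPRI spec stores the common header in
 * network order: revision=1, type=2, size=4 packs to 0x10020004, and
 * after conversion the memory bytes equal the wire bytes 10 02 00 04. */
static int filled_in_network_order(void)
{
	uint32_t spec_u32 = cpu_to_be_32(0x10020004u);
	uint8_t bytes[4];

	memcpy(bytes, &spec_u32, 4);
	return bytes[0] == 0x10 && bytes[1] == 0x02 &&
	       bytes[2] == 0x00 && bytes[3] == 0x04;
}
```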

> And even if it is big endian, it should have been broken previously,
> right? Since it was checking the wrong field as 'type', as you said,
> why was there no functional risk?

In the PMD, the first u32 of the *value*, which contains the type, is already used for matching, and a check decides whether the following "sub-headers"
need to be matched. Indeed, the first u32 of the *mask* is used to confirm whether the following sub-header needs to be matched.
Since different types lead to different packet variants, the switch on type is used.
But all 3 types currently supported by the PMD have almost the same result (part of the second u32, with the remainder all 0s).
So even if the type read from the value is always "0" by mistake, the cases behave the same and it still works by coincidence.
If more types are supported in the future, this would be problematic.
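The coincidence described above can be reduced to a schematic sketch of the type switch (constant values match RTE_ECPRI_MSG_TYPE_*, but the function itself is illustrative, not the driver's actual code): all three supported types select the same second-u32 matching, so a type misread as 0 (= IQ_DATA) lands on an equivalent case, while any other value falls through and no sub-header is matched.

```c
#include <assert.h>
#include <stdint.h>

/* Constant values match RTE_ECPRI_MSG_TYPE_* from rte_ecpri.h. */
enum {
	MSG_TYPE_IQ_DATA  = 0,	/* RTE_ECPRI_MSG_TYPE_IQ_DATA */
	MSG_TYPE_RTC_CTRL = 2,	/* RTE_ECPRI_MSG_TYPE_RTC_CTRL */
	MSG_TYPE_DLY_MSR  = 5	/* RTE_ECPRI_MSG_TYPE_DLY_MSR */
};

/* Illustrative reduction of the switch in the mlx5 eCPRI translation:
 * return 1 when the PMD would go on to match the second u32
 * (sub-header), 0 when the type falls through the switch. */
static int match_second_u32(uint8_t type)
{
	switch (type) {
	case MSG_TYPE_IQ_DATA:
	case MSG_TYPE_RTC_CTRL:
	case MSG_TYPE_DLY_MSR:
		return 1;
	default:
		return 0;
	}
}
```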

> 
> >
> > Fixes: daa38a8924a0 ("net/mlx5: add flow translation of eCPRI
> header")
> >
> > Cc: stable@dpdk.org
> >
> > Signed-off-by: Bing Zhao <bingz@nvidia.com>
> > Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
> > ---
> >   drivers/net/mlx5/mlx5_flow_dv.c | 4 +++-
> >   1 file changed, 3 insertions(+), 1 deletion(-)
> >
> > diff --git a/drivers/net/mlx5/mlx5_flow_dv.c
> > b/drivers/net/mlx5/mlx5_flow_dv.c index 01b6e7c..7af01e9 100644
> > --- a/drivers/net/mlx5/mlx5_flow_dv.c
> > +++ b/drivers/net/mlx5/mlx5_flow_dv.c
> > @@ -7798,6 +7798,7 @@ struct mlx5_hlist_entry *
> >       struct mlx5_priv *priv = dev->data->dev_private;
> >       const struct rte_flow_item_ecpri *ecpri_m = item->mask;
> >       const struct rte_flow_item_ecpri *ecpri_v = item->spec;
> > +     struct rte_ecpri_common_hdr common;
> >       void *misc4_m = MLX5_ADDR_OF(fte_match_param, matcher,
> >                                    misc_parameters_4);
> >       void *misc4_v = MLX5_ADDR_OF(fte_match_param, key,
> > misc_parameters_4); @@ -7838,7 +7839,8 @@ struct mlx5_hlist_entry
> *
> >        * Some wildcard rules only matching type field should be
> supported.
> >        */
> >       if (ecpri_m->hdr.dummy[0]) {
> > -             switch (ecpri_v->hdr.common.type) {
> > +             common.u32 = rte_be_to_cpu_32(ecpri_v-
> >hdr.common.u32);
> > +             switch (common.type) {
> >               case RTE_ECPRI_MSG_TYPE_IQ_DATA:
> >               case RTE_ECPRI_MSG_TYPE_RTC_CTRL:
> >               case RTE_ECPRI_MSG_TYPE_DLY_MSR:
> >



* Re: [dpdk-dev] [PATCH] net/mlx5: fix the eCPRI common header endianness
  2020-11-06 14:10   ` Bing Zhao
@ 2020-11-06 17:41     ` Ferruh Yigit
  2020-11-09  6:01       ` Bing Zhao
  0 siblings, 1 reply; 6+ messages in thread
From: Ferruh Yigit @ 2020-11-06 17:41 UTC (permalink / raw)
  To: Bing Zhao, Slava Ovsiienko, Matan Azrad
  Cc: dev, Ori Kam, Raslan Darawsheh, stable

On 11/6/2020 2:10 PM, Bing Zhao wrote:
> Hi Ferruh,
> 
> Thanks for your review and comments, PSB.
> 
>> -----Original Message-----
>> From: Ferruh Yigit <ferruh.yigit@intel.com>
>> Sent: Friday, November 6, 2020 7:20 PM
>> To: Bing Zhao <bingz@nvidia.com>; Slava Ovsiienko
>> <viacheslavo@nvidia.com>; Matan Azrad <matan@nvidia.com>
>> Cc: dev@dpdk.org; Ori Kam <orika@nvidia.com>; Raslan Darawsheh
>> <rasland@nvidia.com>; stable@dpdk.org
>> Subject: Re: [dpdk-dev] [PATCH] net/mlx5: fix the eCPRI common
>> header endianness
>>
>> On 11/3/2020 5:41 AM, Bing Zhao wrote:
>>> The input header of an RTE flow item is in network byte order. On
>>> a little-endian host, the bit-field order follows the byte order,
>>> so when checking an eCPRI message type, the wrong field would be
>>> selected. Right now, since the whole u32 is checked and the
>>> subsequent handling is the same for all supported types, there is
>>> no functional risk, but it is still an error to fix.
>>   >
>>
>> Isn't 'ecpri_v' filled by the application (CPU)? Why is there an
>> assumption that it is big endian?
>>
> 
> Yes, this is filled by the application SW. I checked the current code of other headers and the current implementation.
> 1. In the testpmd flow example, all network-stack headers such as IPv4 are translated to big endian via "ARGS_ENTRY_HTON".
> 2. All fields are typed "rte_be*_t"; even though it is only a typedef, they should be treated as BE.
> 3. rte_flow just passes the flow item pointers to the PMDs, so in the PMD the headers must be treated as BE.
> So, to my understanding, this is not an assumption but a constraint.
> Correct me if my understanding is wrong.

I missed the big-endian conversion done in 'cmdline_flow.c' via the 'arg->hton' 
check, so yes, the PMD getting big-endian values makes sense. Thanks for the 
clarification.

> 
>> And even if it is big endian, it should have been broken previously,
>> right? Since it was checking the wrong field as 'type', as you said,
>> why was there no functional risk?
> 
> In the PMD, the first u32 of the *value*, which contains the type, is already used for matching, and a check decides whether the following "sub-headers"
> need to be matched. Indeed, the first u32 of the *mask* is used to confirm whether the following sub-header needs to be matched.
> Since different types lead to different packet variants, the switch on type is used.
> But all 3 types currently supported by the PMD have almost the same result (part of the second u32, with the remainder all 0s).
> So even if the type read from the value is always "0" by mistake, the cases behave the same and it still works by coincidence.

If 'type' is '0', I see that it works. But what if 'type' is something other 
than the values covered in the 'switch'? Then it may fail matching the 
'sub-headers', and I guess that can happen based on the value of the 'size' 
field.

Anyway, since you are already fixing it, will you be OK if I drop the last 
two sentences from the commit log and proceed?


* Re: [dpdk-dev] [PATCH] net/mlx5: fix the eCPRI common header endianness
  2020-11-06 17:41     ` Ferruh Yigit
@ 2020-11-09  6:01       ` Bing Zhao
  0 siblings, 0 replies; 6+ messages in thread
From: Bing Zhao @ 2020-11-09  6:01 UTC (permalink / raw)
  To: Ferruh Yigit, Slava Ovsiienko, Matan Azrad
  Cc: dev, Ori Kam, Raslan Darawsheh, stable

Hi Ferruh,

> -----Original Message-----
> From: Ferruh Yigit <ferruh.yigit@intel.com>
> Sent: Saturday, November 7, 2020 1:41 AM
> To: Bing Zhao <bingz@nvidia.com>; Slava Ovsiienko
> <viacheslavo@nvidia.com>; Matan Azrad <matan@nvidia.com>
> Cc: dev@dpdk.org; Ori Kam <orika@nvidia.com>; Raslan Darawsheh
> <rasland@nvidia.com>; stable@dpdk.org
> Subject: Re: [dpdk-dev] [PATCH] net/mlx5: fix the eCPRI common
> header endianness
> 
> On 11/6/2020 2:10 PM, Bing Zhao wrote:
> > Hi Ferruh,
> >
> > Thanks for your review and comments, PSB.
> >
> >> -----Original Message-----
> >> From: Ferruh Yigit <ferruh.yigit@intel.com>
> >> Sent: Friday, November 6, 2020 7:20 PM
> >> To: Bing Zhao <bingz@nvidia.com>; Slava Ovsiienko
> >> <viacheslavo@nvidia.com>; Matan Azrad <matan@nvidia.com>
> >> Cc: dev@dpdk.org; Ori Kam <orika@nvidia.com>; Raslan Darawsheh
> >> <rasland@nvidia.com>; stable@dpdk.org
> >> Subject: Re: [dpdk-dev] [PATCH] net/mlx5: fix the eCPRI common
> header
> >> endianness
> >>
> >> On 11/3/2020 5:41 AM, Bing Zhao wrote:
> >>> The input header of an RTE flow item is in network byte order.
> >>> On a little-endian host, the bit-field order follows the byte
> >>> order, so when checking an eCPRI message type, the wrong field
> >>> would be selected. Right now, since the whole u32 is checked and
> >>> the subsequent handling is the same for all supported types,
> >>> there is no functional risk, but it is still an error to fix.
> >>   >
> >>
> >> Isn't 'ecpri_v' filled by the application (CPU)? Why is there
> >> an assumption that it is big endian?
> >>
> >
> > Yes, this is filled by the application SW. I checked the current
> > code of other headers and the current implementation.
> > 1. In the testpmd flow example, all network-stack headers such as
> > IPv4 are translated to big endian via "ARGS_ENTRY_HTON".
> > 2. All fields are typed "rte_be*_t"; even though it is only a
> > typedef, they should be treated as BE.
> > 3. rte_flow just passes the flow item pointers to the PMDs, so in
> > the PMD the headers must be treated as BE.
> > So, to my understanding, this is not an assumption but a
> > constraint. Correct me if my understanding is wrong.
> 
> I missed the big-endian conversion done in 'cmdline_flow.c' via the
> 'arg->hton' check, so yes, the PMD getting big-endian values makes
> sense. Thanks for the clarification.
> 
> >
> >> And even if it is big endian, it should have been broken
> >> previously, right? Since it was checking the wrong field as
> >> 'type', as you said, why was there no functional risk?
> >
> > In the PMD, the first u32 of the *value*, which contains the type,
> > is already used for matching, and a check decides whether the
> > following "sub-headers" need to be matched. Indeed, the first u32
> > of the *mask* is used to confirm whether the following sub-header
> > needs to be matched.
> > Since different types lead to different packet variants, the
> > switch on type is used.
> > But all 3 types currently supported by the PMD have almost the
> > same result (part of the second u32, with the remainder all 0s).
> > So even if the type read from the value is always "0" by mistake,
> > the cases behave the same and it still works by coincidence.
> 
> If 'type' is '0', I see that it works. But what if 'type' is
> something other than the values covered in the 'switch'? Then it
> may fail matching the 'sub-headers', and I guess that can happen
> based on the value of the 'size' field.

Yes, there is that risk. In the past, the packet lengths used in testing did not expose this issue. If the upper byte of the size were changed to some value other than 0/2/5, the flow would fail on creation. Thanks for catching this point.
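The failure mode above can be sketched in a few lines of plain C (illustrative stand-ins, little-endian host assumed): with the buggy read, the 'type' bit field effectively returns wire byte 2, which is the upper byte of the big-endian 16-bit 'size', so only sizes whose upper byte happens to equal a supported type value (0, 2 or 5) slipped through the switch.

```c
#include <assert.h>
#include <stdint.h>

/* On a little-endian host, reading 'type' from the BE-stored u32
 * lands on bits 16-23, i.e. wire byte 2 = upper byte of 'size'. */
static uint8_t buggy_type(const uint8_t wire[4])
{
	return wire[2];	/* what ecpri_v->hdr.common.type effectively read */
}

/* Whether the PMD's switch recognizes the type
 * (IQ_DATA = 0, RTC_CTRL = 2, DLY_MSR = 5). */
static int type_supported(uint8_t type)
{
	return type == 0 || type == 2 || type == 5;
}
```

With size 4 (wire 10 02 00 04) the misread value is 0, which coincidentally hits the IQ_DATA case; with size 0x0600 (wire 10 02 06 00) the misread value is 6 and the switch misses, so flow creation would fail.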

> 
> Anyway, since you are already fixing it, will you be OK if I drop
> the last two sentences from the commit log and proceed?

Sure, thanks a lot for your help.

BR. Bing



