* [dpdk-dev] nvgre inner rss problem in mlx5
@ 2021-04-28 3:47 wenxu
2021-04-28 4:22 ` Asaf Penso
0 siblings, 1 reply; 12+ messages in thread
From: wenxu @ 2021-04-28 3:47 UTC (permalink / raw)
To: dev
Hi Mellanox team,
I tested the NVGRE inner RSS action with upstream DPDK using dpdk-testpmd:
# ./dpdk-testpmd -c 0x1f -n 4 -m 4096 -w "0000:19:00.1" --huge-dir=/mnt/ovsdpdk -- --forward-mode=rxonly --rxq=4 --txq=4 --auto-start --nb-cores=4
# testpmd>> flow create 0 ingress pattern eth / ipv4 / nvgre / end actions rss level 2 types ip udp tcp end queues 0 1 2 3 end / end
This adds an rte_flow rule for NVGRE with an inner (level 2) RSS action over queues 0, 1, 2, 3.
I tested with the same underlay tunnel but different inner IP addresses/UDP ports, yet only one queue receives the packets.
If I run the equivalent VXLAN case, it works as expected:
testpmd>> flow create 0 ingress pattern eth / ipv4 / udp / vxlan / end actions rss level 2 types ip udp tcp end queues 0 1 2 3 end / end
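For reference, the NVGRE rule above can also be expressed through the rte_flow C API. Below is a minimal, hypothetical sketch (an illustration added for clarity, not code from this thread); it assumes port 0 is already started with four Rx queues and uses the ETH_RSS_* macros of this DPDK generation:

#include <rte_common.h>
#include <rte_ethdev.h>
#include <rte_flow.h>

/* Equivalent of:
 * flow create 0 ingress pattern eth / ipv4 / nvgre / end
 *   actions rss level 2 types ip udp tcp end queues 0 1 2 3 end / end
 */
static struct rte_flow *
nvgre_inner_rss(uint16_t port_id, struct rte_flow_error *error)
{
        static const uint16_t queues[] = { 0, 1, 2, 3 };
        const struct rte_flow_attr attr = { .ingress = 1 };
        const struct rte_flow_item pattern[] = {
                { .type = RTE_FLOW_ITEM_TYPE_ETH },
                { .type = RTE_FLOW_ITEM_TYPE_IPV4 },
                { .type = RTE_FLOW_ITEM_TYPE_NVGRE },
                { .type = RTE_FLOW_ITEM_TYPE_END },
        };
        const struct rte_flow_action_rss rss = {
                .level = 2,     /* hash on the inner headers */
                .types = ETH_RSS_IP | ETH_RSS_UDP | ETH_RSS_TCP,
                .queue = queues,
                .queue_num = RTE_DIM(queues),
        };
        const struct rte_flow_action actions[] = {
                { .type = RTE_FLOW_ACTION_TYPE_RSS, .conf = &rss },
                { .type = RTE_FLOW_ACTION_TYPE_END },
        };

        return rte_flow_create(port_id, &attr, pattern, actions, error);
}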
# lspci | grep Ether
19:00.0 Ethernet controller: Mellanox Technologies MT27800 Family [ConnectX-5]
19:00.1 Ethernet controller: Mellanox Technologies MT27800 Family [ConnectX-5]
Fw version is 16.29.1016
# ethtool -i net3
driver: mlx5_core
version: 5.12.0-rc4+
firmware-version: 16.29.1016 (MT_0000000080)
Is there any problem with my test case?
BR
wenxu
* Re: [dpdk-dev] nvgre inner rss problem in mlx5
2021-04-28 3:47 [dpdk-dev] nvgre inner rss problem in mlx5 wenxu
@ 2021-04-28 4:22 ` Asaf Penso
2021-04-28 5:48 ` wenxu
0 siblings, 1 reply; 12+ messages in thread
From: Asaf Penso @ 2021-04-28 4:22 UTC (permalink / raw)
To: dev, wenxu
Hello Wenxu,
Thank you for reaching out to us. I would like to know a few more details before I can provide assistance.
Can you share the version numbers for:
rdma-core
OFED
OS
Regards,
Asaf Penso
________________________________
From: dev <dev-bounces@dpdk.org> on behalf of wenxu <wenxu@ucloud.cn>
Sent: Wednesday, April 28, 2021 6:47:45 AM
To: dev@dpdk.org <dev@dpdk.org>
Subject: [dpdk-dev] nvgre inner rss problem in mlx5
Hi Mellanox team,
I tested the NVGRE inner RSS action with upstream DPDK using dpdk-testpmd:
# ./dpdk-testpmd -c 0x1f -n 4 -m 4096 -w "0000:19:00.1" --huge-dir=/mnt/ovsdpdk -- --forward-mode=rxonly --rxq=4 --txq=4 --auto-start --nb-cores=4
# testpmd>> flow create 0 ingress pattern eth / ipv4 / nvgre / end actions rss level 2 types ip udp tcp end queues 0 1 2 3 end / end
This adds an rte_flow rule for NVGRE with an inner (level 2) RSS action over queues 0, 1, 2, 3.
I tested with the same underlay tunnel but different inner IP addresses/UDP ports, yet only one queue receives the packets.
If I run the equivalent VXLAN case, it works as expected:
testpmd>> flow create 0 ingress pattern eth / ipv4 / udp / vxlan / end actions rss level 2 types ip udp tcp end queues 0 1 2 3 end / end
# lspci | grep Ether
19:00.0 Ethernet controller: Mellanox Technologies MT27800 Family [ConnectX-5]
19:00.1 Ethernet controller: Mellanox Technologies MT27800 Family [ConnectX-5]
Fw version is 16.29.1016
# ethtool -i net3
driver: mlx5_core
version: 5.12.0-rc4+
firmware-version: 16.29.1016 (MT_0000000080)
Is there any problem with my test case?
BR
wenxu
* Re: [dpdk-dev] nvgre inner rss problem in mlx5
2021-04-28 4:22 ` Asaf Penso
@ 2021-04-28 5:48 ` wenxu
2021-04-28 9:31 ` Asaf Penso
0 siblings, 1 reply; 12+ messages in thread
From: wenxu @ 2021-04-28 5:48 UTC (permalink / raw)
To: Asaf Penso; +Cc: dev
rdma-core version is: rdma-core-52mlnx1-1.52104.x86_64
From: Asaf Penso <asafp@nvidia.com>
Sent: 2021-04-28 12:22:32
To: "dev@dpdk.org" <dev@dpdk.org>, wenxu <wenxu@ucloud.cn>
Subject: Re: [dpdk-dev] nvgre inner rss problem in mlx5
Hello Wenxu,
Thank you for reaching out to us. I would like to know a few more details before I can provide assistance.
Can you share the version numbers for:
rdma-core
OFED
OS
Regards,
Asaf Penso
From: dev <dev-bounces@dpdk.org> on behalf of wenxu <wenxu@ucloud.cn>
Sent: Wednesday, April 28, 2021 6:47:45 AM
To: dev@dpdk.org <dev@dpdk.org>
Subject: [dpdk-dev] nvgre inner rss problem in mlx5
Hi Mellanox team,
I tested the NVGRE inner RSS action with upstream DPDK using dpdk-testpmd:
# ./dpdk-testpmd -c 0x1f -n 4 -m 4096 -w "0000:19:00.1" --huge-dir=/mnt/ovsdpdk -- --forward-mode=rxonly --rxq=4 --txq=4 --auto-start --nb-cores=4
# testpmd>> flow create 0 ingress pattern eth / ipv4 / nvgre / end actions rss level 2 types ip udp tcp end queues 0 1 2 3 end / end
This adds an rte_flow rule for NVGRE with an inner (level 2) RSS action over queues 0, 1, 2, 3.
I tested with the same underlay tunnel but different inner IP addresses/UDP ports, yet only one queue receives the packets.
If I run the equivalent VXLAN case, it works as expected:
testpmd>> flow create 0 ingress pattern eth / ipv4 / udp / vxlan / end actions rss level 2 types ip udp tcp end queues 0 1 2 3 end / end
# lspci | grep Ether
19:00.0 Ethernet controller: Mellanox Technologies MT27800 Family [ConnectX-5]
19:00.1 Ethernet controller: Mellanox Technologies MT27800 Family [ConnectX-5]
Fw version is 16.29.1016
# ethtool -i net3
driver: mlx5_core
version: 5.12.0-rc4+
firmware-version: 16.29.1016 (MT_0000000080)
Is there any problem with my test case?
BR
wenxu
* Re: [dpdk-dev] nvgre inner rss problem in mlx5
2021-04-28 5:48 ` wenxu
@ 2021-04-28 9:31 ` Asaf Penso
2021-04-29 8:29 ` wenxu
0 siblings, 1 reply; 12+ messages in thread
From: Asaf Penso @ 2021-04-28 9:31 UTC (permalink / raw)
To: wenxu; +Cc: dev
What DPDK version are you using?
Can you try using upstream? We had a fix for a similar issue recently.
Regards,
Asaf Penso
From: wenxu <wenxu@ucloud.cn>
Sent: Wednesday, April 28, 2021 8:48 AM
To: Asaf Penso <asafp@nvidia.com>
Cc: dev@dpdk.org
Subject: Re:Re: [dpdk-dev] nvgre inner rss problem in mlx5
rdma-core version is: rdma-core-52mlnx1-1.52104.x86_64
From: Asaf Penso <asafp@nvidia.com>
Sent: 2021-04-28 12:22:32
To: "dev@dpdk.org" <dev@dpdk.org>, wenxu <wenxu@ucloud.cn>
Subject: Re: [dpdk-dev] nvgre inner rss problem in mlx5
Hello Wenxu,
Thank you for reaching out to us. I would like to know a few more details before I can provide assistance.
Can you share the version numbers for:
rdma-core
OFED
OS
Regards,
Asaf Penso
________________________________
From: dev <dev-bounces@dpdk.org> on behalf of wenxu <wenxu@ucloud.cn>
Sent: Wednesday, April 28, 2021 6:47:45 AM
To: dev@dpdk.org <dev@dpdk.org>
Subject: [dpdk-dev] nvgre inner rss problem in mlx5
Hi Mellanox team,
I tested the NVGRE inner RSS action with upstream DPDK using dpdk-testpmd:
# ./dpdk-testpmd -c 0x1f -n 4 -m 4096 -w "0000:19:00.1" --huge-dir=/mnt/ovsdpdk -- --forward-mode=rxonly --rxq=4 --txq=4 --auto-start --nb-cores=4
# testpmd>> flow create 0 ingress pattern eth / ipv4 / nvgre / end actions rss level 2 types ip udp tcp end queues 0 1 2 3 end / end
This adds an rte_flow rule for NVGRE with an inner (level 2) RSS action over queues 0, 1, 2, 3.
I tested with the same underlay tunnel but different inner IP addresses/UDP ports, yet only one queue receives the packets.
If I run the equivalent VXLAN case, it works as expected:
testpmd>> flow create 0 ingress pattern eth / ipv4 / udp / vxlan / end actions rss level 2 types ip udp tcp end queues 0 1 2 3 end / end
# lspci | grep Ether
19:00.0 Ethernet controller: Mellanox Technologies MT27800 Family [ConnectX-5]
19:00.1 Ethernet controller: Mellanox Technologies MT27800 Family [ConnectX-5]
Fw version is 16.29.1016
# ethtool -i net3
driver: mlx5_core
version: 5.12.0-rc4+
firmware-version: 16.29.1016 (MT_0000000080)
Is there any problem with my test case?
BR
wenxu
* Re: [dpdk-dev] nvgre inner rss problem in mlx5
2021-04-28 9:31 ` Asaf Penso
@ 2021-04-29 8:29 ` wenxu
2021-04-29 9:06 ` Asaf Penso
0 siblings, 1 reply; 12+ messages in thread
From: wenxu @ 2021-04-29 8:29 UTC (permalink / raw)
To: Asaf Penso; +Cc: dev
Hi Asaf,
I am using the upstream DPDK and the issue is the same.
So I think the problem I mentioned is not fixed.
Could you help us handle this?
Br
wenxu
From: Asaf Penso <asafp@nvidia.com>
Sent: 2021-04-28 17:31:03
To: wenxu <wenxu@ucloud.cn>
Cc: "dev@dpdk.org" <dev@dpdk.org>
Subject: RE: Re:Re: [dpdk-dev] nvgre inner rss problem in mlx5
What DPDK version are you using?
Can you try using upstream? We had a fix for a similar issue recently.
Regards,
Asaf Penso
From: wenxu <wenxu@ucloud.cn>
Sent: Wednesday, April 28, 2021 8:48 AM
To: Asaf Penso <asafp@nvidia.com>
Cc: dev@dpdk.org
Subject: Re:Re: [dpdk-dev] nvgre inner rss problem in mlx5
rdma-core version is: rdma-core-52mlnx1-1.52104.x86_64
From: Asaf Penso <asafp@nvidia.com>
Sent: 2021-04-28 12:22:32
To: "dev@dpdk.org" <dev@dpdk.org>, wenxu <wenxu@ucloud.cn>
Subject: Re: [dpdk-dev] nvgre inner rss problem in mlx5
Hello Wenxu,
Thank you for reaching out to us. I would like to know a few more details before I can provide assistance.
Can you share the version numbers for:
rdma-core
OFED
OS
Regards,
Asaf Penso
From: dev <dev-bounces@dpdk.org> on behalf of wenxu <wenxu@ucloud.cn>
Sent: Wednesday, April 28, 2021 6:47:45 AM
To: dev@dpdk.org <dev@dpdk.org>
Subject: [dpdk-dev] nvgre inner rss problem in mlx5
Hi Mellanox team,
I tested the NVGRE inner RSS action with upstream DPDK using dpdk-testpmd:
# ./dpdk-testpmd -c 0x1f -n 4 -m 4096 -w "0000:19:00.1" --huge-dir=/mnt/ovsdpdk -- --forward-mode=rxonly --rxq=4 --txq=4 --auto-start --nb-cores=4
# testpmd>> flow create 0 ingress pattern eth / ipv4 / nvgre / end actions rss level 2 types ip udp tcp end queues 0 1 2 3 end / end
This adds an rte_flow rule for NVGRE with an inner (level 2) RSS action over queues 0, 1, 2, 3.
I tested with the same underlay tunnel but different inner IP addresses/UDP ports, yet only one queue receives the packets.
If I run the equivalent VXLAN case, it works as expected:
testpmd>> flow create 0 ingress pattern eth / ipv4 / udp / vxlan / end actions rss level 2 types ip udp tcp end queues 0 1 2 3 end / end
# lspci | grep Ether
19:00.0 Ethernet controller: Mellanox Technologies MT27800 Family [ConnectX-5]
19:00.1 Ethernet controller: Mellanox Technologies MT27800 Family [ConnectX-5]
Fw version is 16.29.1016
# ethtool -i net3
driver: mlx5_core
version: 5.12.0-rc4+
firmware-version: 16.29.1016 (MT_0000000080)
Is there any problem with my test case?
BR
wenxu
* Re: [dpdk-dev] nvgre inner rss problem in mlx5
2021-04-29 8:29 ` wenxu
@ 2021-04-29 9:06 ` Asaf Penso
2021-05-10 4:53 ` [dpdk-dev] : " wenxu
0 siblings, 1 reply; 12+ messages in thread
From: Asaf Penso @ 2021-04-29 9:06 UTC (permalink / raw)
To: wenxu; +Cc: dev
Sure, let’s take it offline and come back here with updated results.
Regards,
Asaf Penso
From: wenxu <wenxu@ucloud.cn>
Sent: Thursday, April 29, 2021 11:30 AM
To: Asaf Penso <asafp@nvidia.com>
Cc: dev@dpdk.org
Subject: Re:RE: Re:Re: [dpdk-dev] nvgre inner rss problem in mlx5
Hi Asaf,
I am using the upstream DPDK and the issue is the same.
So I think the problem I mentioned is not fixed.
Could you help us handle this?
Br
wenxu
From: Asaf Penso <asafp@nvidia.com>
Sent: 2021-04-28 17:31:03
To: wenxu <wenxu@ucloud.cn>
Cc: "dev@dpdk.org" <dev@dpdk.org>
Subject: RE: Re:Re: [dpdk-dev] nvgre inner rss problem in mlx5
What DPDK version are you using?
Can you try using upstream? We had a fix for a similar issue recently.
Regards,
Asaf Penso
From: wenxu <wenxu@ucloud.cn>
Sent: Wednesday, April 28, 2021 8:48 AM
To: Asaf Penso <asafp@nvidia.com>
Cc: dev@dpdk.org
Subject: Re:Re: [dpdk-dev] nvgre inner rss problem in mlx5
rdma-core version is: rdma-core-52mlnx1-1.52104.x86_64
From: Asaf Penso <asafp@nvidia.com>
Sent: 2021-04-28 12:22:32
To: "dev@dpdk.org" <dev@dpdk.org>, wenxu <wenxu@ucloud.cn>
Subject: Re: [dpdk-dev] nvgre inner rss problem in mlx5
Hello Wenxu,
Thank you for reaching out to us. I would like to know a few more details before I can provide assistance.
Can you share the version numbers for:
rdma-core
OFED
OS
Regards,
Asaf Penso
________________________________
From: dev <dev-bounces@dpdk.org> on behalf of wenxu <wenxu@ucloud.cn>
Sent: Wednesday, April 28, 2021 6:47:45 AM
To: dev@dpdk.org <dev@dpdk.org>
Subject: [dpdk-dev] nvgre inner rss problem in mlx5
Hi Mellanox team,
I tested the NVGRE inner RSS action with upstream DPDK using dpdk-testpmd:
# ./dpdk-testpmd -c 0x1f -n 4 -m 4096 -w "0000:19:00.1" --huge-dir=/mnt/ovsdpdk -- --forward-mode=rxonly --rxq=4 --txq=4 --auto-start --nb-cores=4
# testpmd>> flow create 0 ingress pattern eth / ipv4 / nvgre / end actions rss level 2 types ip udp tcp end queues 0 1 2 3 end / end
This adds an rte_flow rule for NVGRE with an inner (level 2) RSS action over queues 0, 1, 2, 3.
I tested with the same underlay tunnel but different inner IP addresses/UDP ports, yet only one queue receives the packets.
If I run the equivalent VXLAN case, it works as expected:
testpmd>> flow create 0 ingress pattern eth / ipv4 / udp / vxlan / end actions rss level 2 types ip udp tcp end queues 0 1 2 3 end / end
# lspci | grep Ether
19:00.0 Ethernet controller: Mellanox Technologies MT27800 Family [ConnectX-5]
19:00.1 Ethernet controller: Mellanox Technologies MT27800 Family [ConnectX-5]
Fw version is 16.29.1016
# ethtool -i net3
driver: mlx5_core
version: 5.12.0-rc4+
firmware-version: 16.29.1016 (MT_0000000080)
Is there any problem with my test case?
BR
wenxu
* Re: [dpdk-dev] : nvgre inner rss problem in mlx5
2021-04-29 9:06 ` Asaf Penso
@ 2021-05-10 4:53 ` wenxu
2021-05-10 8:05 ` Asaf Penso
0 siblings, 1 reply; 12+ messages in thread
From: wenxu @ 2021-05-10 4:53 UTC (permalink / raw)
To: Asaf Penso; +Cc: dev
Hi Asaf,
Is there any progress on this case?
BR
wenxu
From: Asaf Penso <asafp@nvidia.com>
Sent: 2021-04-29 17:06:52
To: wenxu <wenxu@ucloud.cn>
Cc: "dev@dpdk.org" <dev@dpdk.org>
Subject: RE: Re:RE: Re:Re: [dpdk-dev] nvgre inner rss problem in mlx5
Sure, let’s take it offline and come back here with updated results.
Regards,
Asaf Penso
From: wenxu <wenxu@ucloud.cn>
Sent: Thursday, April 29, 2021 11:30 AM
To: Asaf Penso <asafp@nvidia.com>
Cc: dev@dpdk.org
Subject: Re:RE: Re:Re: [dpdk-dev] nvgre inner rss problem in mlx5
Hi Asaf,
I am using the upstream DPDK and the issue is the same.
So I think the problem I mentioned is not fixed.
Could you help us handle this?
Br
wenxu
From: Asaf Penso <asafp@nvidia.com>
Sent: 2021-04-28 17:31:03
To: wenxu <wenxu@ucloud.cn>
Cc: "dev@dpdk.org" <dev@dpdk.org>
Subject: RE: Re:Re: [dpdk-dev] nvgre inner rss problem in mlx5
What DPDK version are you using?
Can you try using upstream? We had a fix for a similar issue recently.
Regards,
Asaf Penso
From: wenxu <wenxu@ucloud.cn>
Sent: Wednesday, April 28, 2021 8:48 AM
To: Asaf Penso <asafp@nvidia.com>
Cc: dev@dpdk.org
Subject: Re:Re: [dpdk-dev] nvgre inner rss problem in mlx5
rdma-core version is: rdma-core-52mlnx1-1.52104.x86_64
From: Asaf Penso <asafp@nvidia.com>
Sent: 2021-04-28 12:22:32
To: "dev@dpdk.org" <dev@dpdk.org>, wenxu <wenxu@ucloud.cn>
Subject: Re: [dpdk-dev] nvgre inner rss problem in mlx5
Hello Wenxu,
Thank you for reaching out to us. I would like to know a few more details before I can provide assistance.
Can you share the version numbers for:
rdma-core
OFED
OS
Regards,
Asaf Penso
From: dev <dev-bounces@dpdk.org> on behalf of wenxu <wenxu@ucloud.cn>
Sent: Wednesday, April 28, 2021 6:47:45 AM
To: dev@dpdk.org <dev@dpdk.org>
Subject: [dpdk-dev] nvgre inner rss problem in mlx5
Hi Mellanox team,
I tested the NVGRE inner RSS action with upstream DPDK using dpdk-testpmd:
# ./dpdk-testpmd -c 0x1f -n 4 -m 4096 -w "0000:19:00.1" --huge-dir=/mnt/ovsdpdk -- --forward-mode=rxonly --rxq=4 --txq=4 --auto-start --nb-cores=4
# testpmd>> flow create 0 ingress pattern eth / ipv4 / nvgre / end actions rss level 2 types ip udp tcp end queues 0 1 2 3 end / end
This adds an rte_flow rule for NVGRE with an inner (level 2) RSS action over queues 0, 1, 2, 3.
I tested with the same underlay tunnel but different inner IP addresses/UDP ports, yet only one queue receives the packets.
If I run the equivalent VXLAN case, it works as expected:
testpmd>> flow create 0 ingress pattern eth / ipv4 / udp / vxlan / end actions rss level 2 types ip udp tcp end queues 0 1 2 3 end / end
# lspci | grep Ether
19:00.0 Ethernet controller: Mellanox Technologies MT27800 Family [ConnectX-5]
19:00.1 Ethernet controller: Mellanox Technologies MT27800 Family [ConnectX-5]
Fw version is 16.29.1016
# ethtool -i net3
driver: mlx5_core
version: 5.12.0-rc4+
firmware-version: 16.29.1016 (MT_0000000080)
Is there any problem with my test case?
BR
wenxu
* Re: [dpdk-dev] : nvgre inner rss problem in mlx5
2021-05-10 4:53 ` [dpdk-dev] : " wenxu
@ 2021-05-10 8:05 ` Asaf Penso
2021-05-11 3:10 ` wenxu
0 siblings, 1 reply; 12+ messages in thread
From: Asaf Penso @ 2021-05-10 8:05 UTC (permalink / raw)
To: wenxu; +Cc: dev
Hello Wenxu,
Can you please create a new BZ ticket?
Looks like this is not handled properly in our pmd and we’ll handle it and update.
Regards,
Asaf Penso
From: wenxu <wenxu@ucloud.cn>
Sent: Monday, May 10, 2021 7:54 AM
To: Asaf Penso <asafp@nvidia.com>
Cc: dev@dpdk.org
Subject: Re:: [dpdk-dev] nvgre inner rss problem in mlx5
Hi Asaf,
Is there any progress on this case?
BR
wenxu
From: Asaf Penso <asafp@nvidia.com>
Sent: 2021-04-29 17:06:52
To: wenxu <wenxu@ucloud.cn>
Cc: "dev@dpdk.org" <dev@dpdk.org>
Subject: RE: Re:RE: Re:Re: [dpdk-dev] nvgre inner rss problem in mlx5
Sure, let’s take it offline and come back here with updated results.
Regards,
Asaf Penso
From: wenxu <wenxu@ucloud.cn>
Sent: Thursday, April 29, 2021 11:30 AM
To: Asaf Penso <asafp@nvidia.com>
Cc: dev@dpdk.org
Subject: Re:RE: Re:Re: [dpdk-dev] nvgre inner rss problem in mlx5
Hi Asaf,
I am using the upstream DPDK and the issue is the same.
So I think the problem I mentioned is not fixed.
Could you help us handle this?
Br
wenxu
From: Asaf Penso <asafp@nvidia.com>
Sent: 2021-04-28 17:31:03
To: wenxu <wenxu@ucloud.cn>
Cc: "dev@dpdk.org" <dev@dpdk.org>
Subject: RE: Re:Re: [dpdk-dev] nvgre inner rss problem in mlx5
What DPDK version are you using?
Can you try using upstream? We had a fix for a similar issue recently.
Regards,
Asaf Penso
From: wenxu <wenxu@ucloud.cn>
Sent: Wednesday, April 28, 2021 8:48 AM
To: Asaf Penso <asafp@nvidia.com>
Cc: dev@dpdk.org
Subject: Re:Re: [dpdk-dev] nvgre inner rss problem in mlx5
rdma-core version is: rdma-core-52mlnx1-1.52104.x86_64
From: Asaf Penso <asafp@nvidia.com>
Sent: 2021-04-28 12:22:32
To: "dev@dpdk.org" <dev@dpdk.org>, wenxu <wenxu@ucloud.cn>
Subject: Re: [dpdk-dev] nvgre inner rss problem in mlx5
Hello Wenxu,
Thank you for reaching out to us. I would like to know a few more details before I can provide assistance.
Can you share the version numbers for:
rdma-core
OFED
OS
Regards,
Asaf Penso
________________________________
From: dev <dev-bounces@dpdk.org> on behalf of wenxu <wenxu@ucloud.cn>
Sent: Wednesday, April 28, 2021 6:47:45 AM
To: dev@dpdk.org <dev@dpdk.org>
Subject: [dpdk-dev] nvgre inner rss problem in mlx5
Hi Mellanox team,
I tested the NVGRE inner RSS action with upstream DPDK using dpdk-testpmd:
# ./dpdk-testpmd -c 0x1f -n 4 -m 4096 -w "0000:19:00.1" --huge-dir=/mnt/ovsdpdk -- --forward-mode=rxonly --rxq=4 --txq=4 --auto-start --nb-cores=4
# testpmd>> flow create 0 ingress pattern eth / ipv4 / nvgre / end actions rss level 2 types ip udp tcp end queues 0 1 2 3 end / end
This adds an rte_flow rule for NVGRE with an inner (level 2) RSS action over queues 0, 1, 2, 3.
I tested with the same underlay tunnel but different inner IP addresses/UDP ports, yet only one queue receives the packets.
If I run the equivalent VXLAN case, it works as expected:
testpmd>> flow create 0 ingress pattern eth / ipv4 / udp / vxlan / end actions rss level 2 types ip udp tcp end queues 0 1 2 3 end / end
# lspci | grep Ether
19:00.0 Ethernet controller: Mellanox Technologies MT27800 Family [ConnectX-5]
19:00.1 Ethernet controller: Mellanox Technologies MT27800 Family [ConnectX-5]
Fw version is 16.29.1016
# ethtool -i net3
driver: mlx5_core
version: 5.12.0-rc4+
firmware-version: 16.29.1016 (MT_0000000080)
Is there any problem with my test case?
BR
wenxu
* Re: [dpdk-dev] : nvgre inner rss problem in mlx5
2021-05-10 8:05 ` Asaf Penso
@ 2021-05-11 3:10 ` wenxu
2021-05-12 18:45 ` Asaf Penso
0 siblings, 1 reply; 12+ messages in thread
From: wenxu @ 2021-05-11 3:10 UTC (permalink / raw)
To: Asaf Penso; +Cc: dev
Will do. Thanks
BR
wenxu
From: Asaf Penso <asafp@nvidia.com>
Sent: 2021-05-10 16:05:54
To: wenxu <wenxu@ucloud.cn>
Cc: "dev@dpdk.org" <dev@dpdk.org>
Subject: RE: Re:: [dpdk-dev] nvgre inner rss problem in mlx5
Hello Wenxu,
Can you please create a new BZ ticket?
Looks like this is not handled properly in our pmd and we’ll handle it and update.
Regards,
Asaf Penso
From: wenxu <wenxu@ucloud.cn>
Sent: Monday, May 10, 2021 7:54 AM
To: Asaf Penso <asafp@nvidia.com>
Cc: dev@dpdk.org
Subject: Re:: [dpdk-dev] nvgre inner rss problem in mlx5
Hi Asaf,
Is there any progress on this case?
BR
wenxu
From: Asaf Penso <asafp@nvidia.com>
Sent: 2021-04-29 17:06:52
To: wenxu <wenxu@ucloud.cn>
Cc: "dev@dpdk.org" <dev@dpdk.org>
Subject: RE: Re:RE: Re:Re: [dpdk-dev] nvgre inner rss problem in mlx5
Sure, let’s take it offline and come back here with updated results.
Regards,
Asaf Penso
From: wenxu <wenxu@ucloud.cn>
Sent: Thursday, April 29, 2021 11:30 AM
To: Asaf Penso <asafp@nvidia.com>
Cc: dev@dpdk.org
Subject: Re:RE: Re:Re: [dpdk-dev] nvgre inner rss problem in mlx5
Hi Asaf,
I am using the upstream DPDK and the issue is the same.
So I think the problem I mentioned is not fixed.
Could you help us handle this?
Br
wenxu
From: Asaf Penso <asafp@nvidia.com>
Sent: 2021-04-28 17:31:03
To: wenxu <wenxu@ucloud.cn>
Cc: "dev@dpdk.org" <dev@dpdk.org>
Subject: RE: Re:Re: [dpdk-dev] nvgre inner rss problem in mlx5
What DPDK version are you using?
Can you try using upstream? We had a fix for a similar issue recently.
Regards,
Asaf Penso
From: wenxu <wenxu@ucloud.cn>
Sent: Wednesday, April 28, 2021 8:48 AM
To: Asaf Penso <asafp@nvidia.com>
Cc: dev@dpdk.org
Subject: Re:Re: [dpdk-dev] nvgre inner rss problem in mlx5
rdma-core version is: rdma-core-52mlnx1-1.52104.x86_64
From: Asaf Penso <asafp@nvidia.com>
Sent: 2021-04-28 12:22:32
To: "dev@dpdk.org" <dev@dpdk.org>, wenxu <wenxu@ucloud.cn>
Subject: Re: [dpdk-dev] nvgre inner rss problem in mlx5
Hello Wenxu,
Thank you for reaching out to us. I would like to know a few more details before I can provide assistance.
Can you share the version numbers for:
rdma-core
OFED
OS
Regards,
Asaf Penso
From: dev <dev-bounces@dpdk.org> on behalf of wenxu <wenxu@ucloud.cn>
Sent: Wednesday, April 28, 2021 6:47:45 AM
To: dev@dpdk.org <dev@dpdk.org>
Subject: [dpdk-dev] nvgre inner rss problem in mlx5
Hi Mellanox team,
I tested the NVGRE inner RSS action with upstream DPDK using dpdk-testpmd:
# ./dpdk-testpmd -c 0x1f -n 4 -m 4096 -w "0000:19:00.1" --huge-dir=/mnt/ovsdpdk -- --forward-mode=rxonly --rxq=4 --txq=4 --auto-start --nb-cores=4
# testpmd>> flow create 0 ingress pattern eth / ipv4 / nvgre / end actions rss level 2 types ip udp tcp end queues 0 1 2 3 end / end
This adds an rte_flow rule for NVGRE with an inner (level 2) RSS action over queues 0, 1, 2, 3.
I tested with the same underlay tunnel but different inner IP addresses/UDP ports, yet only one queue receives the packets.
If I run the equivalent VXLAN case, it works as expected:
testpmd>> flow create 0 ingress pattern eth / ipv4 / udp / vxlan / end actions rss level 2 types ip udp tcp end queues 0 1 2 3 end / end
# lspci | grep Ether
19:00.0 Ethernet controller: Mellanox Technologies MT27800 Family [ConnectX-5]
19:00.1 Ethernet controller: Mellanox Technologies MT27800 Family [ConnectX-5]
Fw version is 16.29.1016
# ethtool -i net3
driver: mlx5_core
version: 5.12.0-rc4+
firmware-version: 16.29.1016 (MT_0000000080)
Is there any problem with my test case?
BR
wenxu
* Re: [dpdk-dev] : nvgre inner rss problem in mlx5
2021-05-11 3:10 ` wenxu
@ 2021-05-12 18:45 ` Asaf Penso
0 siblings, 0 replies; 12+ messages in thread
From: Asaf Penso @ 2021-05-12 18:45 UTC (permalink / raw)
To: wenxu; +Cc: dev
Hello Wenxu,
We've integrated this fix - https://patchwork.dpdk.org/project/dpdk/patch/20210512102408.7501-1-jiaweiw@nvidia.com/
Could you please confirm it resolves your issue?
BTW, have you opened a BZ ticket? If so, could you please send me the link?
Regards,
Asaf Penso
________________________________
From: wenxu <wenxu@ucloud.cn>
Sent: Tuesday, May 11, 2021 6:10:57 AM
To: Asaf Penso <asafp@nvidia.com>
Cc: dev@dpdk.org <dev@dpdk.org>
Subject: Re:RE: Re:: [dpdk-dev] nvgre inner rss problem in mlx5
Will do. Thanks
BR
wenxu
From: Asaf Penso <asafp@nvidia.com>
Sent: 2021-05-10 16:05:54
To: wenxu <wenxu@ucloud.cn>
Cc: "dev@dpdk.org" <dev@dpdk.org>
Subject: RE: Re:: [dpdk-dev] nvgre inner rss problem in mlx5
Hello Wenxu,
Can you please create a new BZ ticket?
Looks like this is not handled properly in our pmd and we’ll handle it and update.
Regards,
Asaf Penso
From: wenxu <wenxu@ucloud.cn>
Sent: Monday, May 10, 2021 7:54 AM
To: Asaf Penso <asafp@nvidia.com>
Cc: dev@dpdk.org
Subject: Re:: [dpdk-dev] nvgre inner rss problem in mlx5
Hi Asaf,
Is there any progress on this case?
BR
wenxu
From: Asaf Penso <asafp@nvidia.com>
Sent: 2021-04-29 17:06:52
To: wenxu <wenxu@ucloud.cn>
Cc: "dev@dpdk.org" <dev@dpdk.org>
Subject: RE: Re:RE: Re:Re: [dpdk-dev] nvgre inner rss problem in mlx5
Sure, let’s take it offline and come back here with updated results.
Regards,
Asaf Penso
From: wenxu <wenxu@ucloud.cn>
Sent: Thursday, April 29, 2021 11:30 AM
To: Asaf Penso <asafp@nvidia.com>
Cc: dev@dpdk.org
Subject: Re:RE: Re:Re: [dpdk-dev] nvgre inner rss problem in mlx5
Hi Asaf,
I am using the upstream DPDK and the issue is the same.
So I think the problem I mentioned is not fixed.
Could you help us handle this?
Br
wenxu
From: Asaf Penso <asafp@nvidia.com>
Sent: 2021-04-28 17:31:03
To: wenxu <wenxu@ucloud.cn>
Cc: "dev@dpdk.org" <dev@dpdk.org>
Subject: RE: Re:Re: [dpdk-dev] nvgre inner rss problem in mlx5
What DPDK version are you using?
Can you try using upstream? We had a fix for a similar issue recently.
Regards,
Asaf Penso
From: wenxu <wenxu@ucloud.cn>
Sent: Wednesday, April 28, 2021 8:48 AM
To: Asaf Penso <asafp@nvidia.com>
Cc: dev@dpdk.org
Subject: Re:Re: [dpdk-dev] nvgre inner rss problem in mlx5
rdma-core version is: rdma-core-52mlnx1-1.52104.x86_64
From: Asaf Penso <asafp@nvidia.com>
Sent: 2021-04-28 12:22:32
To: "dev@dpdk.org" <dev@dpdk.org>, wenxu <wenxu@ucloud.cn>
Subject: Re: [dpdk-dev] nvgre inner rss problem in mlx5
Hello Wenxu,
Thank you for reaching out to us. I would like to know a few more details before I can provide assistance.
Can you share the version numbers for:
rdma-core
OFED
OS
Regards,
Asaf Penso
________________________________
From: dev <dev-bounces@dpdk.org> on behalf of wenxu <wenxu@ucloud.cn>
Sent: Wednesday, April 28, 2021 6:47:45 AM
To: dev@dpdk.org <dev@dpdk.org>
Subject: [dpdk-dev] nvgre inner rss problem in mlx5
Hi Mellanox team,
I tested the NVGRE inner RSS action with upstream DPDK using dpdk-testpmd:
# ./dpdk-testpmd -c 0x1f -n 4 -m 4096 -w "0000:19:00.1" --huge-dir=/mnt/ovsdpdk -- --forward-mode=rxonly --rxq=4 --txq=4 --auto-start --nb-cores=4
# testpmd>> flow create 0 ingress pattern eth / ipv4 / nvgre / end actions rss level 2 types ip udp tcp end queues 0 1 2 3 end / end
This adds an rte_flow rule for NVGRE with an inner (level 2) RSS action over queues 0, 1, 2, 3.
I tested with the same underlay tunnel but different inner IP addresses/UDP ports, yet only one queue receives the packets.
If I run the equivalent VXLAN case, it works as expected:
testpmd>> flow create 0 ingress pattern eth / ipv4 / udp / vxlan / end actions rss level 2 types ip udp tcp end queues 0 1 2 3 end / end
# lspci | grep Ether
19:00.0 Ethernet controller: Mellanox Technologies MT27800 Family [ConnectX-5]
19:00.1 Ethernet controller: Mellanox Technologies MT27800 Family [ConnectX-5]
Fw version is 16.29.1016
# ethtool -i net3
driver: mlx5_core
version: 5.12.0-rc4+
firmware-version: 16.29.1016 (MT_0000000080)
Is there any problem with my test case?
BR
wenxu
* [dpdk-dev] nvgre inner rss problem in mlx5
@ 2021-08-03 8:44 wenxu
0 siblings, 0 replies; 12+ messages in thread
From: wenxu @ 2021-08-03 8:44 UTC (permalink / raw)
To: asafp; +Cc: dev
Hi NVIDIA team,
I tested VXLAN encap offload with upstream DPDK using dpdk-testpmd.
# lspci | grep Ether
19:00.0 Ethernet controller: Mellanox Technologies MT27800 Family [ConnectX-5]
19:00.1 Ethernet controller: Mellanox Technologies MT27800 Family [ConnectX-5]
Fw version is 16.31.1014
#ethtool -i net2
driver: mlx5_core
version: 5.13.0-rc3+
firmware-version: 16.31.1014 (MT_0000000080)
expansion-rom-version:
bus-info: 0000:19:00.0
Start the E-Switch:
echo 0 > /sys/class/net/net2/device/sriov_numvfs
echo 1 > /sys/class/net/net2/device/sriov_numvfs
echo 0000:19:00.2 > /sys/bus/pci/drivers/mlx5_core/unbind
devlink dev eswitch set pci/0000:19:00.0 mode switchdev
echo 0000:19:00.2 > /sys/bus/pci/drivers/mlx5_core/bind
ip link shows
4: net2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
link/ether 1c:34:da:77:fb:d8 brd ff:ff:ff:ff:ff:ff
vf 0 MAC 4e:41:8f:92:41:44, spoof checking off, link-state disable, trust off, query_rss off
vf 1 MAC 00:00:00:00:00:00, spoof checking off, link-state disable, trust off, query_rss off
8: pf0vf0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
link/ether 4e:41:8f:92:41:44 brd ff:ff:ff:ff:ff:ff
10: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
link/ether 46:87:9e:9e:c8:23 brd ff:ff:ff:ff:ff:ff
net2 is the PF, pf0vf0 is the VF representor, and eth0 is the VF.
Start the PMD:
./dpdk-testpmd -c 0x1f -n 4 -m 4096 --file-prefix=ovs -a "0000:19:00.0,representor=pf0vf0,dv_flow_en=1,dv_esw_en=1,dv_xmeta_en=1" --huge-dir=/mnt/ovsdpdk -- -i --flow-isolate-all --forward-mode=rxonly --rxq=4 --txq=4 --auto-start --nb-cores=4
testpmd> set vxlan ip-version ipv4 vni 1000 udp-src 0 udp-dst 4789 ip-src 172.168.152.50 ip-dst 172.168.152.73 eth-src 1c:34:da:77:fb:d8 eth-dst 3c:fd:fe:bb:1c:0c
testpmd> flow create 1 ingress priority 0 group 0 transfer pattern eth src is 46:87:9e:9e:c8:23 dst is 5a:9e:0f:74:6c:5e type is 0x0800 / ipv4 tos spec 0x0 tos mask 0x3 / end actions count / vxlan_encap / port_id original 0 id 0 / end
port_flow_complain(): Caught PMD error type 16 (specific action): port does not belong to E-Switch being configured: Invalid argument
Adding the rule fails with "port does not belong to E-Switch being configured".
I checked the DPDK code. In the function flow_dv_validate_action_port_id:
        if (act_priv->domain_id != dev_priv->domain_id)
                return rte_flow_error_set
                                (error, EINVAL,
                                 RTE_FLOW_ERROR_TYPE_ACTION, NULL,
                                 "port does not belong to"
                                 " E-Switch being configured");
The domain_id of the VF representor is not the same as the domain_id of the PF.
Checking mlx5_dev_spawn, the domain_id values of the VF representor and the PF will always be different:
mlx5_dev_spawn
        /*
         * Look for sibling devices in order to reuse their switch domain
         * if any, otherwise allocate one.
         */
        MLX5_ETH_FOREACH_DEV(port_id, NULL) {
                const struct mlx5_priv *opriv =
                        rte_eth_devices[port_id].data->dev_private;
                if (!opriv ||
                    opriv->sh != priv->sh ||
                    opriv->domain_id ==
                    RTE_ETH_DEV_SWITCH_DOMAIN_ID_INVALID)
                        continue;
                priv->domain_id = opriv->domain_id;
                break;
        }
        if (priv->domain_id == RTE_ETH_DEV_SWITCH_DOMAIN_ID_INVALID) {
                err = rte_eth_switch_domain_alloc(&priv->domain_id);
The MLX5_ETH_FOREACH_DEV loop will never find the PF eth_dev:
mlx5_eth_find_next(uint16_t port_id, struct rte_device *odev)
{
        while (port_id < RTE_MAX_ETHPORTS) {
                struct rte_eth_dev *dev = &rte_eth_devices[port_id];
                if (dev->state != RTE_ETH_DEV_UNUSED &&
                    dev->device &&
                    (dev->device == odev ||
                     (dev->device->driver &&
                      dev->device->driver->name &&
                      ((strcmp(dev->device->driver->name,
                               MLX5_PCI_DRIVER_NAME) == 0) ||
                       (strcmp(dev->device->driver->name,
                               MLX5_AUXILIARY_DRIVER_NAME) == 0)))))
Although the state of the eth_dev is ATTACHED, the driver is not set.
The driver is only set in rte_pci_probe_one_driver, after all ports
on the same device have been probed.
So at this moment the VF representor will never find the PF, which
leads the VF representor to choose another domain_id.
So in this case the pci_driver should be set in the mlx5 driver probe (mlx5_os_pci_probe).
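As a hypothetical diagnostic (an illustration added for clarity, not part of the original report), the switch domain of every probed port can be dumped via rte_eth_dev_info_get() to confirm that the PF and the VF representor ended up in different switch domains:

#include <stdio.h>
#include <rte_ethdev.h>

/* Print the switch domain id reported by each probed ethdev port. */
static void
dump_switch_domains(void)
{
        uint16_t port_id;

        RTE_ETH_FOREACH_DEV(port_id) {
                struct rte_eth_dev_info info;

                if (rte_eth_dev_info_get(port_id, &info) != 0)
                        continue;
                printf("port %u: switch domain %u (%s)\n",
                       port_id, info.switch_info.domain_id,
                       info.switch_info.name ? info.switch_info.name : "n/a");
        }
}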
BR
wenxu
* [dpdk-dev] nvgre inner rss problem in mlx5
@ 2021-04-28 3:43 wenxu
0 siblings, 0 replies; 12+ messages in thread
From: wenxu @ 2021-04-28 3:43 UTC (permalink / raw)
To: dev
Hi Mellanox team,
I tested the NVGRE inner RSS action with upstream DPDK using dpdk-testpmd:
# ./dpdk-testpmd -c 0x1f -n 4 -m 4096 -w "0000:19:00.1" --huge-dir=/mnt/ovsdpdk -- --forward-mode=rxonly --rxq=4 --txq=4 --auto-start --nb-cores=4
# testpmd>> flow create 0 ingress pattern eth / ipv4 / nvgre / end actions rss level 2 types ip udp tcp end queues 0 1 2 3 end / end
This adds an rte_flow rule for NVGRE with an inner (level 2) RSS action over queues 0, 1, 2, 3.
I tested with the same underlay tunnel but different inner IP addresses/UDP ports, yet only one queue receives the packets.
If I run the equivalent VXLAN case, it works as expected:
testpmd>> flow create 0 ingress pattern eth / ipv4 / udp / vxlan / end actions rss level 2 types ip udp tcp end queues 0 1 2 3 end / end
# lspci | grep Ether
19:00.0 Ethernet controller: Mellanox Technologies MT27800 Family [ConnectX-5]
19:00.1 Ethernet controller: Mellanox Technologies MT27800 Family [ConnectX-5]
Fw version is 16.29.1016
# ethtool -i net3
driver: mlx5_core
version: 5.12.0-rc4+
firmware-version: 16.29.1016 (MT_0000000080)
Is there any problem with my test case?
BR
wenxu
Thread overview: 12+ messages
2021-04-28 3:47 [dpdk-dev] nvgre inner rss problem in mlx5 wenxu
2021-04-28 4:22 ` Asaf Penso
2021-04-28 5:48 ` wenxu
2021-04-28 9:31 ` Asaf Penso
2021-04-29 8:29 ` wenxu
2021-04-29 9:06 ` Asaf Penso
2021-05-10 4:53 ` [dpdk-dev] : " wenxu
2021-05-10 8:05 ` Asaf Penso
2021-05-11 3:10 ` wenxu
2021-05-12 18:45 ` Asaf Penso
-- strict thread matches above, loose matches on Subject: below --
2021-08-03 8:44 [dpdk-dev] " wenxu
2021-04-28 3:43 wenxu