* [dpdk-dev] Testpmd parameter --enable-lro fails on Mellanox ConnectX5
@ 2019-10-15 6:06 Georgios Katsikas
2019-10-15 14:06 ` Asaf Penso
From: Georgios Katsikas @ 2019-10-15 6:06 UTC (permalink / raw)
To: dev; +Cc: Tom Barbette
Hi all,
The latest MLX5 PMD (i.e., DPDK 19.08) lists hardware LRO support among
its new features.
However, I cannot make it work with our dual-port 100GbE Mellanox ConnectX5
(MT27800) card.
Let me first describe the configuration of my testbed: First, I upgraded
OFED to the latest 4.7-1.0.0 with firmware version 16.26.1040
(MT_0000000008) and installed the latest MFT (4.13) to enable DevX
according to the instructions here
<https://doc.dpdk.org/guides/nics/mlx5.html>:
mlxconfig -d <device> set UCTX_EN=1
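(As a sanity check, assuming the change only takes effect after a firmware
reset or power cycle, the value can be queried with something like:
mlxconfig -d <device> query | grep UCTX_EN
which should report UCTX_EN as enabled once applied.)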
My OS is Ubuntu server 18.04 with kernel 4.15.0-65-generic. I also tested
on another server with the same distro but a more recent kernel (5.15),
without any luck.
To test LRO, I simply use testpmd as follows:
sudo ./testpmd -l 0-15 -w 0000:03:00.1 -v -- --txq=16 --rxq=16
--nb-cores=15 --enable-lro -i
The output I get is:
Configuring Port 0 (socket 0)
net_mlx5: Failed to create RQ using DevX
net_mlx5: port 0 Rx queue 8 RQ creation failure
net_mlx5: port 0 Rx queue allocation failed: Cannot allocate memory
Fail to start port 0
Please stop the ports first
Done
Reducing the number of queues does not solve the problem. Specifically, if
I ask for 1 Rx and 1 Tx queue, testpmd fails without an error from the mlx5
driver:
Configuring Port 0 (socket 0)
Fail to start port 0
Please stop the ports first
Done
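(For reference, the single-queue run is just the same command with the
queue counts reduced, i.e. something along the lines of:
sudo ./testpmd -l 0-15 -w 0000:03:00.1 -v -- --txq=1 --rxq=1 --nb-cores=1 --enable-lro -i
)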
I would appreciate your help on this issue.
Best regards,
Georgios
* Re: [dpdk-dev] Testpmd parameter --enable-lro fails on Mellanox ConnectX5
2019-10-15 6:06 [dpdk-dev] Testpmd parameter --enable-lro fails on Mellanox ConnectX5 Georgios Katsikas
@ 2019-10-15 14:06 ` Asaf Penso
2019-10-17 16:20 ` Asaf Penso
From: Asaf Penso @ 2019-10-15 14:06 UTC (permalink / raw)
To: Georgios Katsikas, dev; +Cc: Tom Barbette
Hello Georgios,
Thanks for your mail!
We'll have a deeper look internally and will contact you about our results.
Regards,
Asaf Penso
> -----Original Message-----
> From: dev <dev-bounces@dpdk.org> On Behalf Of Georgios Katsikas
> Sent: Tuesday, October 15, 2019 9:06 AM
> To: dev@dpdk.org
> Cc: Tom Barbette <barbette@kth.se>
> Subject: [dpdk-dev] Testpmd parameter --enable-lro fails on Mellanox
> ConnectX5
>
> Hi all,
>
> In the latest features of the MLX5 PMD (i.e., DPDK 19.08), there is LRO
> hardware support.
> However, I cannot make it work with our dual port 100GbE Mellanox
> ConnectX5
> (MT27800) card.
>
> Let me first describe the configuration of my testbed: First, I upgraded
> OFED to the latest 4.7-1.0.0 with firmware version 16.26.1040
> (MT_0000000008) and installed the latest MFT (4.13) to enable DevX
> according to the instructions here
> <https://doc.dpdk.org/guides/nics/mlx5.html>:
>
> mlxconfig -d <device> set UCTX_EN=1
>
> My OS is Ubuntu server 18.04 with kernel 4.15.0-65-generic. I also tested
> on another server with the same distro but a more recent kernel (5.15),
> without any luck.
>
> To test LRO, I simply use testpmd as follows:
>
> sudo ./testpmd -l 0-15 -w 0000:03:00.1 -v -- --txq=16 --rxq=16
> --nb-cores=15 --enable-lro -i
>
> The output I get is:
>
> Configuring Port 0 (socket 0)
> net_mlx5: Failed to create RQ using DevX
> net_mlx5: port 0 Rx queue 8 RQ creation failure
> net_mlx5: port 0 Rx queue allocation failed: Cannot allocate memory
> Fail to start port 0
> Please stop the ports first
> Done
>
> Reducing the number of queues does not solve the problem. Specifically, if
> I ask for 1 Rx and 1 Tx queue, testpmd fails without an error from the mlx5
> driver:
>
> Configuring Port 0 (socket 0)
> Fail to start port 0
> Please stop the ports first
> Done
>
> I would appreciate your help on this issue.
>
> Best regards,
> Georgios
* Re: [dpdk-dev] Testpmd parameter --enable-lro fails on Mellanox ConnectX5
2019-10-15 14:06 ` Asaf Penso
@ 2019-10-17 16:20 ` Asaf Penso
2019-10-18 6:51 ` Georgios Katsikas
From: Asaf Penso @ 2019-10-17 16:20 UTC (permalink / raw)
To: Asaf Penso, Georgios Katsikas, dev; +Cc: Tom Barbette
Hi,
For the LRO feature to work in our PMD, please add dv_flow_en=1 as follows:
sudo ./testpmd -l 0-15 -w 0000:03:00.1,dv_flow_en=1 -v -- --txq=16 --rxq=16 --nb-cores=15 --enable-lro -i
Can you try this and let us know the result?
Regards,
Asaf Penso
> -----Original Message-----
> From: dev <dev-bounces@dpdk.org> On Behalf Of Asaf Penso
> Sent: Tuesday, October 15, 2019 5:07 PM
> To: Georgios Katsikas <katsikas.gp@gmail.com>; dev@dpdk.org
> Cc: Tom Barbette <barbette@kth.se>
> Subject: Re: [dpdk-dev] Testpmd parameter --enable-lro fails on Mellanox
> ConnectX5
>
> Hello Georgios,
>
> Thanks for your mail!
> We'll have a deeper look internally and will contact you about our results.
>
> Regards,
> Asaf Penso
>
> > -----Original Message-----
> > From: dev <dev-bounces@dpdk.org> On Behalf Of Georgios Katsikas
> > Sent: Tuesday, October 15, 2019 9:06 AM
> > To: dev@dpdk.org
> > Cc: Tom Barbette <barbette@kth.se>
> > Subject: [dpdk-dev] Testpmd parameter --enable-lro fails on Mellanox
> > ConnectX5
> >
> > Hi all,
> >
> > In the latest features of the MLX5 PMD (i.e., DPDK 19.08), there is LRO
> > hardware support.
> > However, I cannot make it work with our dual port 100GbE Mellanox
> > ConnectX5
> > (MT27800) card.
> >
> > Let me first describe the configuration of my testbed: First, I upgraded
> > OFED to the latest 4.7-1.0.0 with firmware version 16.26.1040
> > (MT_0000000008) and installed the latest MFT (4.13) to enable DevX
> > according to the instructions here
> > <https://doc.dpdk.org/guides/nics/mlx5.html>:
> >
> > mlxconfig -d <device> set UCTX_EN=1
> >
> > My OS is Ubuntu server 18.04 with kernel 4.15.0-65-generic. I also tested
> > on another server with the same distro but a more recent kernel (5.15),
> > without any luck.
> >
> > To test LRO, I simply use testpmd as follows:
> >
> > sudo ./testpmd -l 0-15 -w 0000:03:00.1 -v -- --txq=16 --rxq=16
> > --nb-cores=15 --enable-lro -i
> >
> > The output I get is:
> >
> > Configuring Port 0 (socket 0)
> > net_mlx5: Failed to create RQ using DevX
> > net_mlx5: port 0 Rx queue 8 RQ creation failure
> > net_mlx5: port 0 Rx queue allocation failed: Cannot allocate memory
> > Fail to start port 0
> > Please stop the ports first
> > Done
> >
> > Reducing the number of queues does not solve the problem. Specifically, if
> > I ask for 1 Rx and 1 Tx queue, testpmd fails without an error from the mlx5
> > driver:
> >
> > Configuring Port 0 (socket 0)
> > Fail to start port 0
> > Please stop the ports first
> > Done
> >
> > I would appreciate your help on this issue.
> >
> > Best regards,
> > Georgios
* Re: [dpdk-dev] Testpmd parameter --enable-lro fails on Mellanox ConnectX5
2019-10-17 16:20 ` Asaf Penso
@ 2019-10-18 6:51 ` Georgios Katsikas
From: Georgios Katsikas @ 2019-10-18 6:51 UTC (permalink / raw)
To: Asaf Penso; +Cc: dev, Tom Barbette
Hi Asaf,
Unfortunately, this extra parameter did not work; I got the same error as
before.
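(For completeness, the exact command I retried was essentially the one you
suggested:
sudo ./testpmd -l 0-15 -w 0000:03:00.1,dv_flow_en=1 -v -- --txq=16 --rxq=16 --nb-cores=15 --enable-lro -i
)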
However, the output of the command "show port 0 rx_offload configuration"
shows that TCP_LRO is enabled:
testpmd> show port 0 rx_offload configuration
Rx Offloading Configuration of port 0 :
Port : TCP_LRO
Queue[ 0] : TCP_LRO
Queue[ 1] : TCP_LRO
Queue[ 2] : TCP_LRO
Queue[ 3] : TCP_LRO
Queue[ 4] : TCP_LRO
Queue[ 5] : TCP_LRO
Queue[ 6] : TCP_LRO
Queue[ 7] : TCP_LRO
Queue[ 8] : TCP_LRO
Queue[ 9] : TCP_LRO
Queue[10] : TCP_LRO
Queue[11] : TCP_LRO
Queue[12] : TCP_LRO
Queue[13] : TCP_LRO
Queue[14] : TCP_LRO
Queue[15] : TCP_LRO
Also, this feature is advertised by the offload capabilities of the device:
testpmd> show port 0 rx_offload capabilities
Rx Offloading Capabilities of port 0 :
Per Queue : VLAN_STRIP IPV4_CKSUM UDP_CKSUM TCP_CKSUM TCP_LRO JUMBO_FRAME
SCATTER TIMESTAMP KEEP_CRC
Per Port : VLAN_FILTER
Finally, activating LRO from the testpmd command line results in the same error:
testpmd> port config 0 rx_offload tcp_lro off
testpmd> port config 0 rx_offload tcp_lro on
testpmd> port start 0
Configuring Port 0 (socket 0)
net_mlx5: Failed to create RQ using DevX
net_mlx5: port 0 Rx queue 3 RQ creation failure
net_mlx5: port 0 Rx queue allocation failed: Cannot allocate memory
Fail to start port 0
Please stop the ports first
Done
Thanks for your feedback,
Georgios
On Thu, Oct 17, 2019 at 7:20 PM Asaf Penso <asafp@mellanox.com> wrote:
> Hi,
>
> For the LRO feature to work in our PMD, please add dv_flow_en=1 as
> follows:
> sudo ./testpmd -l 0-15 -w 0000:03:00.1,dv_flow_en=1 -v -- --txq=16
> --rxq=16 --nb-cores=15 --enable-lro -i
>
> Can you try this and let us know the result?
>
> Regards,
> Asaf Penso
>
> > -----Original Message-----
> > From: dev <dev-bounces@dpdk.org> On Behalf Of Asaf Penso
> > Sent: Tuesday, October 15, 2019 5:07 PM
> > To: Georgios Katsikas <katsikas.gp@gmail.com>; dev@dpdk.org
> > Cc: Tom Barbette <barbette@kth.se>
> > Subject: Re: [dpdk-dev] Testpmd parameter --enable-lro fails on Mellanox
> > ConnectX5
> >
> > Hello Georgios,
> >
> > Thanks for your mail!
> > We'll have a deeper look internally and will contact you about our
> > results.
> >
> > Regards,
> > Asaf Penso
> >
> > > -----Original Message-----
> > > From: dev <dev-bounces@dpdk.org> On Behalf Of Georgios Katsikas
> > > Sent: Tuesday, October 15, 2019 9:06 AM
> > > To: dev@dpdk.org
> > > Cc: Tom Barbette <barbette@kth.se>
> > > Subject: [dpdk-dev] Testpmd parameter --enable-lro fails on Mellanox
> > > ConnectX5
> > >
> > > Hi all,
> > >
> > > In the latest features of the MLX5 PMD (i.e., DPDK 19.08), there is LRO
> > > hardware support.
> > > However, I cannot make it work with our dual port 100GbE Mellanox
> > > ConnectX5
> > > (MT27800) card.
> > >
> > > Let me first describe the configuration of my testbed: First, I upgraded
> > > OFED to the latest 4.7-1.0.0 with firmware version 16.26.1040
> > > (MT_0000000008) and installed the latest MFT (4.13) to enable DevX
> > > according to the instructions here
> > > <https://doc.dpdk.org/guides/nics/mlx5.html>:
> > >
> > > mlxconfig -d <device> set UCTX_EN=1
> > >
> > > My OS is Ubuntu server 18.04 with kernel 4.15.0-65-generic. I also tested
> > > on another server with the same distro but a more recent kernel (5.15),
> > > without any luck.
> > >
> > > To test LRO, I simply use testpmd as follows:
> > >
> > > sudo ./testpmd -l 0-15 -w 0000:03:00.1 -v -- --txq=16 --rxq=16
> > > --nb-cores=15 --enable-lro -i
> > >
> > > The output I get is:
> > >
> > > Configuring Port 0 (socket 0)
> > > net_mlx5: Failed to create RQ using DevX
> > > net_mlx5: port 0 Rx queue 8 RQ creation failure
> > > net_mlx5: port 0 Rx queue allocation failed: Cannot allocate memory
> > > Fail to start port 0
> > > Please stop the ports first
> > > Done
> > >
> > > Reducing the number of queues does not solve the problem. Specifically,
> > > if I ask for 1 Rx and 1 Tx queue, testpmd fails without an error from
> > > the mlx5 driver:
> > >
> > > Configuring Port 0 (socket 0)
> > > Fail to start port 0
> > > Please stop the ports first
> > > Done
> > >
> > > I would appreciate your help on this issue.
> > >
> > > Best regards,
> > > Georgios
>