DPDK usage discussions
* Re: [dpdk-users] DPDK not working with ConnectX-3 card on Openstack virtual setup
       [not found] <ORIGINAL-RELEASE-1556836841268755868-VI1PR0701MB2142AD6832915BBC2C845323FB340@VI1PR0701MB2142.eurprd07.prod.outlook.com>
@ 2019-05-09  7:39 ` Greg O'Rawe
  0 siblings, 0 replies; 2+ messages in thread
From: Greg O'Rawe @ 2019-05-09  7:39 UTC (permalink / raw)
  To: users

Hi,

I see the same issue was raised in this thread: https://mails.dpdk.org/archives/users/2018-October/003620.html

Any ideas on a possible solution? I didn't see any response on that thread.

Many thanks
Greg

-----Original Message-----
From: users <users-bounces@dpdk.org> On Behalf Of Greg O'Rawe
Sent: 02 May 2019 16:42
To: users@dpdk.org
Subject: [dpdk-users] DPDK not working with ConnectX-3 card on Openstack virtual setup

Hi,



I am trying to get DPDK 17.11.4 to run with a ConnectX-3 card on a virtual environment using Openstack.



This setup uses the VFIO driver, which initialises correctly (albeit in no-IOMMU mode). However, starting DPDK via the testpmd binary fails when trying to add the default flows to the device.

mlx_fe-fe-0$ /root/testpmd -c 0xf -n 4 -w 0000:00:06.0 -w 0000:00:08.0  -- --rxq=2 --txq=2 -i
EAL: Detected 12 lcore(s)
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
EAL: WARNING: cpu flags constant_tsc=yes nonstop_tsc=no -> using unreliable clock cycles !
EAL: PCI device 0000:00:06.0 on NUMA socket -1
EAL:   Invalid NUMA socket, default to 0
EAL:   probe driver: 15b3:1004 net_mlx4
PMD: net_mlx4: PCI information matches, using device "mlx4_0" (VF: true)
PMD: net_mlx4: 1 port(s) detected
PMD: net_mlx4: port 1 MAC address is fa:16:3e:c6:5b:df
EAL: PCI device 0000:00:08.0 on NUMA socket -1
EAL:   Invalid NUMA socket, default to 0
EAL:   probe driver: 15b3:1004 net_mlx4
PMD: net_mlx4: PCI information matches, using device "mlx4_2" (VF: true)
PMD: net_mlx4: 1 port(s) detected
PMD: net_mlx4: port 1 MAC address is fa:16:3e:d9:f9:9d
Interactive-mode selected
USER1: create a new mbuf pool <mbuf_pool_socket_0>: n=171456, size=2176, socket=0
Configuring Port 0 (socket 0)
PMD: net_mlx4: 0x55e3598ee200: cannot attach flow rules (code 12, "Cannot allocate memory"), flow error type 2, cause 0x13de8ee40, message: flow rule rejected by device
Fail to start port 0
Configuring Port 1 (socket 0)
PMD: net_mlx4: 0x55e3598f2280: cannot attach flow rules (code 12, "Cannot allocate memory"), flow error type 2, cause 0x13de8acc0, message: flow rule rejected by device
Fail to start port 1
Please stop the ports first
Done
testpmd>

/var/log/messages contains the following errors:

May  2 15:38:56 mlx_fe-fe-0 kernel: <mlx4_ib> __mlx4_ib_create_flow: mcg table is full. Fail to register network rule.
May  2 15:38:56 mlx_fe-fe-0 testpmd[27582]: PMD: net_mlx4: 0x55e3598ee200: cannot attach flow rules (code 12, "Cannot allocate memory"), flow error type 2, cause 0x13de8ee40, message: flow rule rejected by device
May  2 15:38:56 mlx_fe-fe-0 kernel: <mlx4_ib> __mlx4_ib_create_flow: mcg table is full. Fail to register network rule.
May  2 15:38:56 mlx_fe-fe-0 testpmd[27582]: PMD: net_mlx4: 0x55e3598f2280: cannot attach flow rules (code 12, "Cannot allocate memory"), flow error type 2, cause 0x13de8acc0, message: flow rule rejected by device

It seems that adding default flows fails due to the mcg table being full.
What could be the cause of this error? How is the mcg table configured?
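
[Editorial note: one avenue worth checking, per the DPDK mlx4 PMD guide, is that on ConnectX-3 the MCG table sizing and flow-steering mode are configured host-side through the mlx4_core kernel module parameter log_num_mgm_entry_size; the mlx4 PMD relies on device-managed flow steering (DMFS) for its flow rules. A sketch, with a hypothetical config file path:]

```shell
# Hypothetical /etc/modprobe.d/mlx4_core.conf on the compute host.
# log_num_mgm_entry_size selects the steering mode / MCG table sizing:
#   -1 : device-managed flow steering (DMFS)
#   -7 : DMFS with performance optimisations, if the firmware supports it
#        (the value recommended by the DPDK mlx4 PMD guide)
options mlx4_core log_num_mgm_entry_size=-7
```

[The mlx4_core module would need to be reloaded (or the host rebooted) for the change to take effect; the current value is readable from /sys/module/mlx4_core/parameters/log_num_mgm_entry_size. In an OpenStack VF setup this must be done on the hypervisor host, not inside the guest.]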


Thanks
Greg O'Rawe

This message, including attachments, is CONFIDENTIAL. It may also be privileged or otherwise protected by law. If you received this email by mistake please let us know by reply and then delete it from your system; you should not copy it or disclose its contents to anyone. All messages sent to and from Enea may be monitored to ensure compliance with internal policies and to protect our business. Emails are not secure and cannot be guaranteed to be error free as they can be intercepted, amended, lost or destroyed, or contain viruses. The sender therefore does not accept liability for any errors or omissions in the contents of this message, which arise as a result of email transmission. Anyone who communicates with us by email accepts these risks.


^ permalink raw reply	[flat|nested] 2+ messages in thread

* [dpdk-users] DPDK not working with ConnectX-3 card on Openstack virtual setup
@ 2019-05-02 15:42 Greg O'Rawe
  0 siblings, 0 replies; 2+ messages in thread
From: Greg O'Rawe @ 2019-05-02 15:42 UTC (permalink / raw)
  To: users

Hi,



I am trying to get DPDK 17.11.4 to run with a ConnectX-3 card on a virtual environment using Openstack.



This setup uses the VFIO driver, which initialises correctly (albeit in no-IOMMU mode). However, starting DPDK via the testpmd binary fails when trying to add the default flows to the device.

mlx_fe-fe-0$ /root/testpmd -c 0xf -n 4 -w 0000:00:06.0 -w 0000:00:08.0  -- --rxq=2 --txq=2 -i
EAL: Detected 12 lcore(s)
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
EAL: WARNING: cpu flags constant_tsc=yes nonstop_tsc=no -> using unreliable clock cycles !
EAL: PCI device 0000:00:06.0 on NUMA socket -1
EAL:   Invalid NUMA socket, default to 0
EAL:   probe driver: 15b3:1004 net_mlx4
PMD: net_mlx4: PCI information matches, using device "mlx4_0" (VF: true)
PMD: net_mlx4: 1 port(s) detected
PMD: net_mlx4: port 1 MAC address is fa:16:3e:c6:5b:df
EAL: PCI device 0000:00:08.0 on NUMA socket -1
EAL:   Invalid NUMA socket, default to 0
EAL:   probe driver: 15b3:1004 net_mlx4
PMD: net_mlx4: PCI information matches, using device "mlx4_2" (VF: true)
PMD: net_mlx4: 1 port(s) detected
PMD: net_mlx4: port 1 MAC address is fa:16:3e:d9:f9:9d
Interactive-mode selected
USER1: create a new mbuf pool <mbuf_pool_socket_0>: n=171456, size=2176, socket=0
Configuring Port 0 (socket 0)
PMD: net_mlx4: 0x55e3598ee200: cannot attach flow rules (code 12, "Cannot allocate memory"), flow error type 2, cause 0x13de8ee40, message: flow rule rejected by device
Fail to start port 0
Configuring Port 1 (socket 0)
PMD: net_mlx4: 0x55e3598f2280: cannot attach flow rules (code 12, "Cannot allocate memory"), flow error type 2, cause 0x13de8acc0, message: flow rule rejected by device
Fail to start port 1
Please stop the ports first
Done
testpmd>

/var/log/messages contains the following errors:

May  2 15:38:56 mlx_fe-fe-0 kernel: <mlx4_ib> __mlx4_ib_create_flow: mcg table is full. Fail to register network rule.
May  2 15:38:56 mlx_fe-fe-0 testpmd[27582]: PMD: net_mlx4: 0x55e3598ee200: cannot attach flow rules (code 12, "Cannot allocate memory"), flow error type 2, cause 0x13de8ee40, message: flow rule rejected by device
May  2 15:38:56 mlx_fe-fe-0 kernel: <mlx4_ib> __mlx4_ib_create_flow: mcg table is full. Fail to register network rule.
May  2 15:38:56 mlx_fe-fe-0 testpmd[27582]: PMD: net_mlx4: 0x55e3598f2280: cannot attach flow rules (code 12, "Cannot allocate memory"), flow error type 2, cause 0x13de8acc0, message: flow rule rejected by device

It seems that adding default flows fails due to the mcg table being full.
What could be the cause of this error? How is the mcg table configured?


Thanks
Greg O'Rawe



end of thread, other threads:[~2019-05-09  7:39 UTC | newest]

Thread overview: 2+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
     [not found] <ORIGINAL-RELEASE-1556836841268755868-VI1PR0701MB2142AD6832915BBC2C845323FB340@VI1PR0701MB2142.eurprd07.prod.outlook.com>
2019-05-09  7:39 ` [dpdk-users] DPDK not working with ConnectX-3 card on Openstack virtual setup Greg O'Rawe
2019-05-02 15:42 Greg O'Rawe
