DPDK usage discussions
* [dpdk-users] DPDK mlx4 PMD on Azure VM
@ 2017-12-19  7:14 Hui Ling
  2017-12-19 15:47 ` Thomas Monjalon
  2017-12-19 16:22 ` Thomas Monjalon
  0 siblings, 2 replies; 14+ messages in thread
From: Hui Ling @ 2017-12-19  7:14 UTC (permalink / raw)
  To: users

Hi Folks,

I am trying to get DPDK up and running on my Azure VM. Per the
instructions from MS, I need to build DPDK with the mlx4 PMD. I was able
to compile it, but it doesn't seem to run correctly.

I installed DPDK 17.11 on Ubuntu 16.04, downloaded MLNX OFED
4.2-1.2.0.0, and installed the upstream libs with
./mlnxofedinstall --guest --dpdk --upstream-libs

The mlx4 PMD in DPDK doesn't seem to build against the libs from the
Ubuntu repository, but installing OFED lets me compile it without any
problem.

Then I tried to see if the mlx4 PMD works or not by running:

root@myVM:
./build/app/testpmd -l 1-2 -n 4 -w 0003:00:02.0 -w 0004:00:02.0 --
--rxq=2 --txq=2 -i

EAL: Detected 4 lcore(s)
EAL: 2 hugepages of size 1073741824 reserved, but no mounted hugetlbfs
found for that size
EAL: Probing VFIO support...
EAL: WARNING: cpu flags constant_tsc=yes nonstop_tsc=no -> using
unreliable clock cycles !
EAL: PCI device 0003:00:02.0 on NUMA socket 0
EAL:   probe driver: 15b3:1004 net_mlx4
PMD: mlx4.c:465: mlx4_pci_probe(): PCI information matches, using
device "mlx4_2" (VF: true)
PMD: mlx4.c:492: mlx4_pci_probe(): 1 port(s) detected
PMD: mlx4.c:586: mlx4_pci_probe(): port 1 MAC address is 00:0d:3a:f9:08:0b
EAL: PCI device 0004:00:02.0 on NUMA socket 0
EAL:   probe driver: 15b3:1004 net_mlx4
PMD: mlx4.c:465: mlx4_pci_probe(): PCI information matches, using
device "mlx4_3" (VF: true)
PMD: mlx4.c:492: mlx4_pci_probe(): 1 port(s) detected
PMD: mlx4.c:586: mlx4_pci_probe(): port 1 MAC address is 00:0d:3a:f9:23:63
Interactive-mode selected
USER1: create a new mbuf pool <mbuf_pool_socket_0>: n=155456,
size=2176, socket=0
Configuring Port 0 (socket 0)
PMD: mlx4_rxq.c:811: mlx4_rx_queue_setup(): 0xde0740: MR creation
failure: Operation not permitted
Fail to configure port 0 rx queues
EAL: Error - exiting with code: 1
  Cause: Start ports failed

This is where I am having a problem: I ran testpmd as root, so I
should have all the permissions I need, and yet the error
indicates:
PMD: mlx4_rxq.c:811: mlx4_rx_queue_setup(): 0xde0740: MR creation
failure: Operation not permitted

Is this because I am running on Azure VM?

I also tried DPDK 17.11 on Ubuntu 17.10. It didn't work either:
testpmd hangs forever during "Configuring Port 0".

Can someone from MS or Mellanox help me figure out why, and how to
make the mlx4 PMD work on an Azure VM?

Thank you!


* Re: [dpdk-users] DPDK mlx4 PMD on Azure VM
  2017-12-19  7:14 [dpdk-users] DPDK mlx4 PMD on Azure VM Hui Ling
@ 2017-12-19 15:47 ` Thomas Monjalon
  2017-12-19 16:22 ` Thomas Monjalon
  1 sibling, 0 replies; 14+ messages in thread
From: Thomas Monjalon @ 2017-12-19 15:47 UTC (permalink / raw)
  To: Hui Ling; +Cc: users

Hi,

19/12/2017 08:14, Hui Ling:
> I installed DPDK 17.11 on Ubuntu 16.04. And I downloaded MLNX OFED
> 4.2-1.2.0.0 and installed up-stream libs with
> ./mlnxofedinstall --guest --dpdk --upstream-libs
> 
> MLX4 PMD in DPDK doesn't seem to work with lib from ubuntu repository
> and install OFED allows me to compile DPDK mlx4 PMD without any
> compilation problem.

The recommended setup is using Linux 4.14 with rdma-core v15, not OFED.
Please check this doc:
	http://dpdk.org/doc/guides/nics/mlx4.html#current-rdma-core-package-and-linux-kernel-recommended
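
For anyone following along, a rough sketch of that setup on Ubuntu (an
untested sketch based on the doc above; check the doc for the
authoritative steps):

# build rdma-core v15 from source
sudo apt-get install -y build-essential cmake gcc libudev-dev \
    libnl-3-dev libnl-route-3-dev ninja-build pkg-config valgrind
git clone https://github.com/linux-rdma/rdma-core.git
cd rdma-core && git checkout v15 && bash build.sh
# make the new libs visible to the DPDK build, e.g. by installing
# them system-wide or pointing EXTRA_CFLAGS/EXTRA_LDFLAGS at ./build

# enable the mlx4 PMD (disabled by default) and rebuild DPDK 17.11
sed -i 's/CONFIG_RTE_LIBRTE_MLX4_PMD=n/CONFIG_RTE_LIBRTE_MLX4_PMD=y/' config/common_base
make install T=x86_64-native-linuxapp-gcc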


* Re: [dpdk-users] DPDK mlx4 PMD on Azure VM
  2017-12-19  7:14 [dpdk-users] DPDK mlx4 PMD on Azure VM Hui Ling
  2017-12-19 15:47 ` Thomas Monjalon
@ 2017-12-19 16:22 ` Thomas Monjalon
  2017-12-19 16:29   ` Ophir Munk
  1 sibling, 1 reply; 14+ messages in thread
From: Thomas Monjalon @ 2017-12-19 16:22 UTC (permalink / raw)
  To: Hui Ling; +Cc: users, ophirmu

19/12/2017 08:14, Hui Ling:
> I installed DPDK 17.11 on Ubuntu 16.04. And I downloaded MLNX OFED
> 4.2-1.2.0.0 and installed up-stream libs with
> ./mlnxofedinstall --guest --dpdk --upstream-libs
> 
> MLX4 PMD in DPDK doesn't seem to work with lib from ubuntu repository
> and install OFED allows me to compile DPDK mlx4 PMD without any
> compilation problem.
> 
> Then I tried to see if the mlx4 PMD works or not by running:
> 
> root@myVM:
> ./build/app/testpmd -l 1-2 -n 4 -w 0003:00:02.0 -w 0004:00:02.0 --
> --rxq=2 --txq=2 -i
[...]
> Configuring Port 0 (socket 0)
> PMD: mlx4_rxq.c:811: mlx4_rx_queue_setup(): 0xde0740: MR creation
> failure: Operation not permitted

[...]
> I also tried to run DPDK 17.11 on Ubuntu 17.10. It didn't work either.
> the testpmd hangs during "configuring Port 0" forever.

So you see 2 different errors on Ubuntu 16.04 and 17.10.
What are the Linux kernel versions?
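(For example, the output of "uname -r" on each VM.)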

> Can someone from MS or Mellanox help me figure out why? and how to
> make mlx4 PMD work on Azure VM?

Mellanox will support you.


* Re: [dpdk-users] DPDK mlx4 PMD on Azure VM
  2017-12-19 16:22 ` Thomas Monjalon
@ 2017-12-19 16:29   ` Ophir Munk
  2017-12-20  2:00     ` Hui Ling
  0 siblings, 1 reply; 14+ messages in thread
From: Ophir Munk @ 2017-12-19 16:29 UTC (permalink / raw)
  To: Thomas Monjalon, Hui Ling; +Cc: users

Hi Hui,
Can you please let us know whether you get the same error when running testpmd with just one PCI device?
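
For example, something like this (PCI address taken from your log):

./build/app/testpmd -l 1-2 -n 4 -w 0003:00:02.0 -- --rxq=2 --txq=2 -i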

> -----Original Message-----
> From: Thomas Monjalon [mailto:thomas@monjalon.net]
> Sent: Tuesday, December 19, 2017 6:22 PM
> To: Hui Ling <kelvin.brookletling@gmail.com>
> Cc: users@dpdk.org; Ophir Munk <ophirmu@mellanox.com>
> Subject: Re: [dpdk-users] DPDK mlx4 PMD on Azure VM
> 
> 19/12/2017 08:14, Hui Ling:
> > I installed DPDK 17.11 on Ubuntu 16.04. And I downloaded MLNX OFED
> > 4.2-1.2.0.0 and installed up-stream libs with ./mlnxofedinstall
> > --guest --dpdk --upstream-libs
> >
> > MLX4 PMD in DPDK doesn't seem to work with lib from ubuntu repository
> > and install OFED allows me to compile DPDK mlx4 PMD without any
> > compilation problem.
> >
> > Then I tried to see if the mlx4 PMD works or not by running:
> >
> > root@myVM:
> > ./build/app/testpmd -l 1-2 -n 4 -w 0003:00:02.0 -w 0004:00:02.0 --
> > --rxq=2 --txq=2 -i
> [...]
> > Configuring Port 0 (socket 0)
> > PMD: mlx4_rxq.c:811: mlx4_rx_queue_setup(): 0xde0740: MR creation
> > failure: Operation not permitted
> 
> [...]
> > I also tried to run DPDK 17.11 on Ubuntu 17.10. It didn't work either.
> > the testpmd hangs during "configuring Port 0" forever.
> 
> So you see 2 different errors on Ubuntu 16.04 and 17.10.
> What are the Linux kernel versions?
> 
> > Can someone from MS or Mellanox help me figure out why? and how to
> > make mlx4 PMD work on Azure VM?
> 
> Mellanox will support you.


* Re: [dpdk-users] DPDK mlx4 PMD on Azure VM
  2017-12-19 16:29   ` Ophir Munk
@ 2017-12-20  2:00     ` Hui Ling
  2017-12-20 13:39       ` Andrew Bainbridge
  0 siblings, 1 reply; 14+ messages in thread
From: Hui Ling @ 2017-12-20  2:00 UTC (permalink / raw)
  To: Ophir Munk; +Cc: Thomas Monjalon, users

Ophir,

Here it is:

root@myVM:# ./build/app/testpmd -l 1-2 -n 4 -w 0003:00:02.0 -- -i
--port-topology=chained
EAL: Detected 4 lcore(s)
EAL: 2 hugepages of size 1073741824 reserved, but no mounted hugetlbfs
found for that size
EAL: Probing VFIO support...
EAL: WARNING: cpu flags constant_tsc=yes nonstop_tsc=no -> using
unreliable clock cycles !
EAL: PCI device 0003:00:02.0 on NUMA socket 0
EAL:   probe driver: 15b3:1004 net_mlx4
PMD: mlx4.c:465: mlx4_pci_probe(): PCI information matches, using
device "mlx4_2" (VF: true)
PMD: mlx4.c:492: mlx4_pci_probe(): 1 port(s) detected
PMD: mlx4.c:586: mlx4_pci_probe(): port 1 MAC address is 00:0d:3a:f9:08:0b
Interactive-mode selected
USER1: create a new mbuf pool <mbuf_pool_socket_0>: n=155456,
size=2176, socket=0
Configuring Port 0 (socket 0)
PMD: mlx4_rxq.c:811: mlx4_rx_queue_setup(): 0xde0740: MR creation
failure: Operation not permitted
Fail to configure port 0 rx queues
EAL: Error - exiting with code: 1
  Cause: Start ports failed

Same "Operation not permitted" error as before.

Do you have a working setup for MLX4 with DPDK 17.11? I can start from
scratch and build a new VM.
Given all the information I have gathered so far from DPDK, I can't
seem to make it work in my VM.

This is my VM info in case it is needed.
=======================================================================================================
A Standard_DS3_v2 instance from Azure. (one of the models that supports AN)

Kernel 4.11.0-1016-azure.  (I saw somewhere that a 4.14 kernel is
required for the DPDK solution on MS Azure to work, so I tried
updating the kernel to 4.14, but it didn't seem to solve the MLX4
driver problem.)

Distributor ID: Ubuntu
Description:    Ubuntu 16.04.3 LTS
Release:        16.04
Codename:       xenial

ii  ibverbs-providers:amd64  42mlnx2-1.42120  amd64  User space provider drivers for libibverbs
ii  ibverbs-utils            42mlnx2-1.42120  amd64  Examples for the libibverbs library
ii  libibverbs-dev:amd64     42mlnx2-1.42120  amd64  Development files for the libibverbs library
ii  libibverbs1:amd64        42mlnx2-1.42120  amd64  Library for direct userspace use of RDMA (InfiniBand/iWARP)
ii  librdmacm-dev            42mlnx2-1.42120  amd64  Development files for the librdmacm library
ii  librdmacm1               42mlnx2-1.42120  amd64  Library for managing RDMA connections
ii  rdma-core                42mlnx2-1.42120  amd64  RDMA core userspace infrastructure and documentation
ii  rdmacm-utils             42mlnx2-1.42120  amd64  Examples for the librdmacm library

DPDK 17.11

And some memory info:

HugePages_Total:    1024
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
=========================================================================================================
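
(Side note on the hugepage numbers above: a sketch of how 2 MB pages
are usually reserved and mounted, plus the pagesize=1G mount that the
EAL warning about 1 GB pages refers to; mount points are arbitrary.)

echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
mkdir -p /mnt/huge
mount -t hugetlbfs nodev /mnt/huge
mkdir -p /mnt/huge_1GB
mount -t hugetlbfs -o pagesize=1G nodev /mnt/huge_1GB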

On Wed, Dec 20, 2017 at 12:29 AM, Ophir Munk <ophirmu@mellanox.com> wrote:
> Hi Hui,
> Can you please let know if running testpmd with just one PCI device you are getting the same error?
>
>> -----Original Message-----
>> From: Thomas Monjalon [mailto:thomas@monjalon.net]
>> Sent: Tuesday, December 19, 2017 6:22 PM
>> To: Hui Ling <kelvin.brookletling@gmail.com>
>> Cc: users@dpdk.org; Ophir Munk <ophirmu@mellanox.com>
>> Subject: Re: [dpdk-users] DPDK mlx4 PMD on Azure VM
>>
>> 19/12/2017 08:14, Hui Ling:
>> > I installed DPDK 17.11 on Ubuntu 16.04. And I downloaded MLNX OFED
>> > 4.2-1.2.0.0 and installed up-stream libs with ./mlnxofedinstall
>> > --guest --dpdk --upstream-libs
>> >
>> > MLX4 PMD in DPDK doesn't seem to work with lib from ubuntu repository
>> > and install OFED allows me to compile DPDK mlx4 PMD without any
>> > compilation problem.
>> >
>> > Then I tried to see if the mlx4 PMD works or not by running:
>> >
>> > root@myVM:
>> > ./build/app/testpmd -l 1-2 -n 4 -w 0003:00:02.0 -w 0004:00:02.0 --
>> > --rxq=2 --txq=2 -i
>> [...]
>> > Configuring Port 0 (socket 0)
>> > PMD: mlx4_rxq.c:811: mlx4_rx_queue_setup(): 0xde0740: MR creation
>> > failure: Operation not permitted
>>
>> [...]
>> > I also tried to run DPDK 17.11 on Ubuntu 17.10. It didn't work either.
>> > the testpmd hangs during "configuring Port 0" forever.
>>
>> So you see 2 different errors on Ubuntu 16.04 and 17.10.
>> What are the Linux kernel versions?
>>
>> > Can someone from MS or Mellanox help me figure out why? and how to
>> > make mlx4 PMD work on Azure VM?
>>
>> Mellanox will support you.


* Re: [dpdk-users] DPDK mlx4 PMD on Azure VM
  2017-12-20  2:00     ` Hui Ling
@ 2017-12-20 13:39       ` Andrew Bainbridge
  2017-12-21  7:35         ` Hui Ling
  0 siblings, 1 reply; 14+ messages in thread
From: Andrew Bainbridge @ 2017-12-20 13:39 UTC (permalink / raw)
  To: Hui Ling; +Cc: users

Hi Hui

Did you create your VM in the "Canada East" data center? This page suggests that is a requirement:
https://azure.microsoft.com/en-us/blog/azure-networking-updates-for-fall-2017/

Also, I seem to remember reading that the VM must have at least 8 cores. Sorry, I can't find a reference for that.

- Andy

-----Original Message-----
From: Hui Ling

This is my VM info in case it is needed.
=======================================================================================================
A Standard_DS3_v2 instance from Azure. (one of these models support AN)



* Re: [dpdk-users] DPDK mlx4 PMD on Azure VM
  2017-12-20 13:39       ` Andrew Bainbridge
@ 2017-12-21  7:35         ` Hui Ling
  2018-01-02  4:27           ` Stephen Hemminger
  0 siblings, 1 reply; 14+ messages in thread
From: Hui Ling @ 2017-12-21  7:35 UTC (permalink / raw)
  To: Andrew Bainbridge; +Cc: users

Andy,

My last VM is not in the "Canada East" region, since no AN-capable
instance type was available to me at the time I created it.

I just tried the same type of VM in Canada East, and the location
does seem to make a difference.

This time, I was able to run testpmd without any explicit errors:

root@myVM:/home/hling/dpdk-17.11# build/app/testpmd -l 1-2 -n 4 -w
0004:00:02.0 0002:00:02.0 -- --rxq=2 --txq=2 -i
EAL: Detected 4 lcore(s)
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
EAL: WARNING: cpu flags constant_tsc=yes nonstop_tsc=no -> using
unreliable clock cycles !
EAL: PCI device 0004:00:02.0 on NUMA socket 0
EAL:   probe driver: 15b3:1004 net_mlx4
PMD: mlx4.c:465: mlx4_pci_probe(): PCI information matches, using
device "mlx4_3" (VF: true)
PMD: mlx4.c:492: mlx4_pci_probe(): 1 port(s) detected
PMD: mlx4.c:586: mlx4_pci_probe(): port 1 MAC address is 00:0d:3a:f4:49:c4
Interactive-mode selected
USER1: create a new mbuf pool <mbuf_pool_socket_0>: n=155456,
size=2176, socket=0
Configuring Port 0 (socket 0)
Port 0: 00:0D:3A:F4:49:C4
Checking link statuses...
Done

testpmd> start tx_first
io packet forwarding - ports=1 - cores=1 - streams=2 - NUMA support
enabled, MP over anonymous pages disabled
Logical Core 2 (socket 0) forwards packets on 2 streams:
  RX P=0/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00
  RX P=0/Q=1 (socket 0) -> TX P=0/Q=1 (socket 0) peer=02:00:00:00:00:00

  io packet forwarding - CRC stripping enabled - packets/burst=32
  nb forwarding cores=1 - nb forwarding ports=1
  RX queues=2 - RX desc=128 - RX free threshold=0
  RX threshold registers: pthresh=0 hthresh=0 wthresh=0
  TX queues=2 - TX desc=512 - TX free threshold=0
  TX threshold registers: pthresh=0 hthresh=0 wthresh=0
  TX RS bit threshold=0 - TXQ flags=0x0
testpmd> stop
Telling cores to stop...
Waiting for lcores to finish...

  ------- Forward Stats for RX Port= 0/Queue= 0 -> TX Port= 0/Queue= 0 -------
  RX-packets: 0              TX-packets: 32             TX-dropped: 0
  ------- Forward Stats for RX Port= 0/Queue= 1 -> TX Port= 0/Queue= 1 -------
  RX-packets: 0              TX-packets: 32             TX-dropped: 0
  ---------------------- Forward statistics for port 0  ----------------------
  RX-packets: 0              RX-dropped: 0             RX-total: 0
  TX-packets: 64             TX-dropped: 0             TX-total: 64
  ----------------------------------------------------------------------------

  +++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
  RX-packets: 0              RX-dropped: 0             RX-total: 0
  TX-packets: 64             TX-dropped: 0             TX-total: 64
  ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Done.
testpmd>



Not sure why I don't see any packet transmission, but at least the
MLX4 PMD seems to be able to talk to the mlx4_en driver, or does it?

Will keep digging.

Hui

On Wed, Dec 20, 2017 at 9:39 PM, Andrew Bainbridge
<andbain@microsoft.com> wrote:
> Hi Hui
>
> Did you create your VM in the "Canada East" data center? This page suggests that is a requirement:
> https://azure.microsoft.com/en-us/blog/azure-networking-updates-for-fall-2017/
>
> Also, I seem to remember reading that the VM must have at least 8 cores. Sorry, I can't find a reference for that.
>
> - Andy
>
> -----Original Message-----
> From: Hui Ling
>
> This is my VM info in case it is needed.
> =======================================================================================================
> A Standard_DS3_v2 instance from Azure. (one of these models support AN)
>


* Re: [dpdk-users] DPDK mlx4 PMD on Azure VM
  2017-12-21  7:35         ` Hui Ling
@ 2018-01-02  4:27           ` Stephen Hemminger
  2018-01-05 20:45             ` Stephen Hemminger
  0 siblings, 1 reply; 14+ messages in thread
From: Stephen Hemminger @ 2018-01-02  4:27 UTC (permalink / raw)
  To: Hui Ling; +Cc: Andrew Bainbridge, users

On Thu, 21 Dec 2017 15:35:00 +0800
Hui Ling <kelvin.brookletling@gmail.com> wrote:

> Andy,
> 
> My last VM is not in "Canada East" center since no AN type of instance
> was available to me at the time I created my VM.
> 
> Just tried on a same type VM in Canada East, and it seems that the
> location does make a difference.
> 
> This time, I was able to run testpmd without any explicit errors:
> 
> root@myVM:/home/hling/dpdk-17.11# build/app/testpmd -l 1-2 -n 4 -w
> 0004:00:02.0 0002:00:02.0 -- --rxq=2 --txq=2 -i
> EAL: Detected 4 lcore(s)
> EAL: No free hugepages reported in hugepages-1048576kB
> EAL: Probing VFIO support...
> EAL: WARNING: cpu flags constant_tsc=yes nonstop_tsc=no -> using
> unreliable clock cycles !
> EAL: PCI device 0004:00:02.0 on NUMA socket 0
> EAL:   probe driver: 15b3:1004 net_mlx4
> PMD: mlx4.c:465: mlx4_pci_probe(): PCI information matches, using
> device "mlx4_3" (VF: true)
> PMD: mlx4.c:492: mlx4_pci_probe(): 1 port(s) detected
> PMD: mlx4.c:586: mlx4_pci_probe(): port 1 MAC address is 00:0d:3a:f4:49:c4
> Interactive-mode selected
> USER1: create a new mbuf pool <mbuf_pool_socket_0>: n=155456,
> size=2176, socket=0
> Configuring Port 0 (socket 0)
> Port 0: 00:0D:3A:F4:49:C4
> Checking link statuses...
> Done
> 
> testpmd> start tx_first  
> io packet forwarding - ports=1 - cores=1 - streams=2 - NUMA support
> enabled, MP over anonymous pages disabled
> Logical Core 2 (socket 0) forwards packets on 2 streams:
>   RX P=0/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00
>   RX P=0/Q=1 (socket 0) -> TX P=0/Q=1 (socket 0) peer=02:00:00:00:00:00
> 
>   io packet forwarding - CRC stripping enabled - packets/burst=32
>   nb forwarding cores=1 - nb forwarding ports=1
>   RX queues=2 - RX desc=128 - RX free threshold=0
>   RX threshold registers: pthresh=0 hthresh=0 wthresh=0
>   TX queues=2 - TX desc=512 - TX free threshold=0
>   TX threshold registers: pthresh=0 hthresh=0 wthresh=0
>   TX RS bit threshold=0 - TXQ flags=0x0
> testpmd> stop  
> Telling cores to stop...
> Waiting for lcores to finish...
> 
>   ------- Forward Stats for RX Port= 0/Queue= 0 -> TX Port= 0/Queue= 0 -------
>   RX-packets: 0              TX-packets: 32             TX-dropped: 0
>   ------- Forward Stats for RX Port= 0/Queue= 1 -> TX Port= 0/Queue= 1 -------
>   RX-packets: 0              TX-packets: 32             TX-dropped: 0
>   ---------------------- Forward statistics for port 0  ----------------------
>   RX-packets: 0              RX-dropped: 0             RX-total: 0
>   TX-packets: 64             TX-dropped: 0             TX-total: 64
>   ----------------------------------------------------------------------------
> 
>   +++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
>   RX-packets: 0              RX-dropped: 0             RX-total: 0
>   TX-packets: 64             TX-dropped: 0             TX-total: 64
>   ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
> 
> Done.
> testpmd>  
> 
> 
> 
> Not sure why I don't see any packets transmission, but at least the
> MLX4 PMD seems to be able to talk to the mlx4_en driver, or is it?
> 
> Will keep digging.
> 
> Hui
> 
> On Wed, Dec 20, 2017 at 9:39 PM, Andrew Bainbridge
> <andbain@microsoft.com> wrote:
> > Hi Hui
> >
> > Did you create your VM in the "Canada East" data center? This page suggests that is a requirement:
> > https://azure.microsoft.com/en-us/blog/azure-networking-updates-for-fall-2017/
> >
> > Also, I seem to remember reading that the VM must have at least 8 cores. Sorry, I can't find a reference for that.
> >
> > - Andy
> >
> > -----Original Message-----
> > From: Hui Ling
> >
> > This is my VM info in case it is needed.
> > =======================================================================================================
> > A Standard_DS3_v2 instance from Azure. (one of these models support AN)
> >  

You will need to do a couple of things.
1. Make sure you have a VM capable of accelerated networking, and that your Azure account
   has opted in. Last I checked, it was still in preview until RHEL 7 with AN support was released.

  https://docs.microsoft.com/en-us/azure/virtual-network/virtual-network-create-vm-accelerated-networking

  There are many different regions, and most have AN by now. Which one are you trying?


   Make sure Linux without DPDK is working with AN first.

2. DPDK support requires DPDK 17.11 or later plus the failsafe and TAP PMDs.
   The Mellanox mlx4 device on Azure is only used after a flow is established;
   the initial packets (and broadcast/multicast) show up on the non-accelerated
   netvsc device (see the sketch below). For more detail, see the DPDK User
   Summit in Dublin 2017.

For later releases, if you watch the development mailing list, you will see
the enhancements being made to simplify the TAP/failsafe setup.
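
A rough sketch of the failsafe/TAP combination with testpmd (devargs
per the DPDK 17.11 failsafe and tap PMD guides; the PCI address and
interface name are taken from this thread and will differ on your VM):

./build/app/testpmd -l 1-2 -n 4 \
    --vdev 'net_failsafe0,dev(0003:00:02.0),dev(net_tap0,remote=eth1)' \
    -- -i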


* Re: [dpdk-users] DPDK mlx4 PMD on Azure VM
  2018-01-02  4:27           ` Stephen Hemminger
@ 2018-01-05 20:45             ` Stephen Hemminger
  2018-01-08  3:01               ` Hui Ling
  0 siblings, 1 reply; 14+ messages in thread
From: Stephen Hemminger @ 2018-01-05 20:45 UTC (permalink / raw)
  To: Hui Ling; +Cc: Andrew Bainbridge, users

Accelerated networking is now generally available for Linux (and Windows)
in all regions.

https://azure.microsoft.com/en-us/blog/maximize-your-vm-s-performance-with-accelerated-networking-now-generally-available-for-both-windows-and-linux/

On Mon, Jan 1, 2018 at 8:27 PM, Stephen Hemminger <
stephen@networkplumber.org> wrote:

> On Thu, 21 Dec 2017 15:35:00 +0800
> Hui Ling <kelvin.brookletling@gmail.com> wrote:
>
> > Andy,
> >
> > My last VM is not in "Canada East" center since no AN type of instance
> > was available to me at the time I created my VM.
> >
> > Just tried on a same type VM in Canada East, and it seems that the
> > location does make a difference.
> >
> > This time, I was able to run testpmd without any explicit errors:
> >
> > root@myVM:/home/hling/dpdk-17.11# build/app/testpmd -l 1-2 -n 4 -w
> > 0004:00:02.0 0002:00:02.0 -- --rxq=2 --txq=2 -i
> > EAL: Detected 4 lcore(s)
> > EAL: No free hugepages reported in hugepages-1048576kB
> > EAL: Probing VFIO support...
> > EAL: WARNING: cpu flags constant_tsc=yes nonstop_tsc=no -> using
> > unreliable clock cycles !
> > EAL: PCI device 0004:00:02.0 on NUMA socket 0
> > EAL:   probe driver: 15b3:1004 net_mlx4
> > PMD: mlx4.c:465: mlx4_pci_probe(): PCI information matches, using
> > device "mlx4_3" (VF: true)
> > PMD: mlx4.c:492: mlx4_pci_probe(): 1 port(s) detected
> > PMD: mlx4.c:586: mlx4_pci_probe(): port 1 MAC address is
> 00:0d:3a:f4:49:c4
> > Interactive-mode selected
> > USER1: create a new mbuf pool <mbuf_pool_socket_0>: n=155456,
> > size=2176, socket=0
> > Configuring Port 0 (socket 0)
> > Port 0: 00:0D:3A:F4:49:C4
> > Checking link statuses...
> > Done
> >
> > testpmd> start tx_first
> > io packet forwarding - ports=1 - cores=1 - streams=2 - NUMA support
> > enabled, MP over anonymous pages disabled
> > Logical Core 2 (socket 0) forwards packets on 2 streams:
> >   RX P=0/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00
> >   RX P=0/Q=1 (socket 0) -> TX P=0/Q=1 (socket 0) peer=02:00:00:00:00:00
> >
> >   io packet forwarding - CRC stripping enabled - packets/burst=32
> >   nb forwarding cores=1 - nb forwarding ports=1
> >   RX queues=2 - RX desc=128 - RX free threshold=0
> >   RX threshold registers: pthresh=0 hthresh=0 wthresh=0
> >   TX queues=2 - TX desc=512 - TX free threshold=0
> >   TX threshold registers: pthresh=0 hthresh=0 wthresh=0
> >   TX RS bit threshold=0 - TXQ flags=0x0
> > testpmd> stop
> > Telling cores to stop...
> > Waiting for lcores to finish...
> >
> >   ------- Forward Stats for RX Port= 0/Queue= 0 -> TX Port= 0/Queue= 0
> -------
> >   RX-packets: 0              TX-packets: 32             TX-dropped: 0
> >   ------- Forward Stats for RX Port= 0/Queue= 1 -> TX Port= 0/Queue= 1
> -------
> >   RX-packets: 0              TX-packets: 32             TX-dropped: 0
> >   ---------------------- Forward statistics for port 0
> ----------------------
> >   RX-packets: 0              RX-dropped: 0             RX-total: 0
> >   TX-packets: 64             TX-dropped: 0             TX-total: 64
> >   ------------------------------------------------------------
> ----------------
> >
> >   +++++++++++++++ Accumulated forward statistics for all
> ports+++++++++++++++
> >   RX-packets: 0              RX-dropped: 0             RX-total: 0
> >   TX-packets: 64             TX-dropped: 0             TX-total: 64
> >   ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
> ++++++++++++++++
> >
> > Done.
> > testpmd>
> >
> >
> >
> > Not sure why I don't see any packets transmission, but at least the
> > MLX4 PMD seems to be able to talk to the mlx4_en driver, or is it?
> >
> > Will keep digging.
> >
> > Hui
> >
> > On Wed, Dec 20, 2017 at 9:39 PM, Andrew Bainbridge
> > <andbain@microsoft.com> wrote:
> > > Hi Hui
> > >
> > > Did you create your VM in the "Canada East" data center? This page
> suggests that is a requirement:
> > > https://azure.microsoft.com/en-us/blog/azure-networking-
> updates-for-fall-2017/
> > >
> > > Also, I seem to remember reading that the VM must have at least 8
> cores. Sorry, I can't find a reference for that.
> > >
> > > - Andy
> > >
> > > -----Original Message-----
> > > From: Hui Ling
> > >
> > > This is my VM info in case it is needed.
> > > ============================================================
> ===========================================
> > > A Standard_DS3_v2 instance from Azure. (one of these models support AN)
> > >
>
> You will need to a couple of things.
> 1. Make sure you have a VM capable of accelerated networking, and that
> your Azure account
>    has opt-ed in. Last I checked it was still in preview until RHEL 7 with
> AN support was released.
>
>   https://docs.microsoft.com/en-us/azure/virtual-network/
> virtual-network-create-vm-accelerated-networking
>
>   There are many different regions, and most have AN by now. Which one are
> you trying?
>
>
>    Make sure Linux without DPDK is working with AN first.
>
> 2. DPDK support requires 17.11 or later DPDK and the failsafe and TAP
> PMD's.
>    The Mellanox mlx4 on Azure is only used after a flow is established.
>    The initial packet (and broadcast/multicast) show up on the
> non-accelerated netvsc device.
>    See the DPDK User Summit in Dublin 2017 for more detal.
>
> For later releases if you watch the development mailing list you will see
> the enhancements being done to simplify setup of TAP/failsafe.
>
>


* Re: [dpdk-users] DPDK mlx4 PMD on Azure VM
  2018-01-05 20:45             ` Stephen Hemminger
@ 2018-01-08  3:01               ` Hui Ling
  2018-01-08 15:42                 ` Stephen Hemminger
  0 siblings, 1 reply; 14+ messages in thread
From: Hui Ling @ 2018-01-08  3:01 UTC (permalink / raw)
  To: Stephen Hemminger; +Cc: Andrew Bainbridge, users

Stephen,

My last VM (DS3 v2, with 4 cores and 16 GB memory) is in Canada East,
and I have AN enabled on my subscription, so AN worked just fine.

For DPDK, I saw in your slides from the DPDK Summit last year that the
current DPDK solution on Azure needs:

1) Linux kernel 4.14
2) 8 cores

I am not sure whether these are a must. I tried upgrading my Ubuntu
kernel to 4.14, but then I ran into compilation issues with the DPDK
17.11 mlx4 PMD, so I stayed with the older kernel for Ubuntu 16.04.
With my VM configuration, however, I could not get the failsafe.sh
script I got from MS for Azure to work.

So I am not sure whether it is my VM settings or my VM kernel.

It would be very helpful if MS had a clear guide on how DPDK works on Azure.

Hui



On Sat, Jan 6, 2018 at 4:45 AM, Stephen Hemminger
<stephen@networkplumber.org> wrote:
> Accelerated networking is now generally available for Linux (and Windows) in
> all regions.
>
> https://azure.microsoft.com/en-us/blog/maximize-your-vm-s-performance-with-accelerated-networking-now-generally-available-for-both-windows-and-linux/
>
> On Mon, Jan 1, 2018 at 8:27 PM, Stephen Hemminger
> <stephen@networkplumber.org> wrote:
>>
>> On Thu, 21 Dec 2017 15:35:00 +0800
>> Hui Ling <kelvin.brookletling@gmail.com> wrote:
>>
>> > Andy,
>> >
>> > My last VM is not in "Canada East" center since no AN type of instance
>> > was available to me at the time I created my VM.
>> >
>> > Just tried on a same type VM in Canada East, and it seems that the
>> > location does make a difference.
>> >
>> > This time, I was able to run testpmd without any explicit errors:
>> >
>> > root@myVM:/home/hling/dpdk-17.11# build/app/testpmd -l 1-2 -n 4 -w
>> > 0004:00:02.0 0002:00:02.0 -- --rxq=2 --txq=2 -i
>> > EAL: Detected 4 lcore(s)
>> > EAL: No free hugepages reported in hugepages-1048576kB
>> > EAL: Probing VFIO support...
>> > EAL: WARNING: cpu flags constant_tsc=yes nonstop_tsc=no -> using
>> > unreliable clock cycles !
>> > EAL: PCI device 0004:00:02.0 on NUMA socket 0
>> > EAL:   probe driver: 15b3:1004 net_mlx4
>> > PMD: mlx4.c:465: mlx4_pci_probe(): PCI information matches, using
>> > device "mlx4_3" (VF: true)
>> > PMD: mlx4.c:492: mlx4_pci_probe(): 1 port(s) detected
>> > PMD: mlx4.c:586: mlx4_pci_probe(): port 1 MAC address is
>> > 00:0d:3a:f4:49:c4
>> > Interactive-mode selected
>> > USER1: create a new mbuf pool <mbuf_pool_socket_0>: n=155456,
>> > size=2176, socket=0
>> > Configuring Port 0 (socket 0)
>> > Port 0: 00:0D:3A:F4:49:C4
>> > Checking link statuses...
>> > Done
>> >
>> > testpmd> start tx_first
>> > io packet forwarding - ports=1 - cores=1 - streams=2 - NUMA support
>> > enabled, MP over anonymous pages disabled
>> > Logical Core 2 (socket 0) forwards packets on 2 streams:
>> >   RX P=0/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00
>> >   RX P=0/Q=1 (socket 0) -> TX P=0/Q=1 (socket 0) peer=02:00:00:00:00:00
>> >
>> >   io packet forwarding - CRC stripping enabled - packets/burst=32
>> >   nb forwarding cores=1 - nb forwarding ports=1
>> >   RX queues=2 - RX desc=128 - RX free threshold=0
>> >   RX threshold registers: pthresh=0 hthresh=0 wthresh=0
>> >   TX queues=2 - TX desc=512 - TX free threshold=0
>> >   TX threshold registers: pthresh=0 hthresh=0 wthresh=0
>> >   TX RS bit threshold=0 - TXQ flags=0x0
>> > testpmd> stop
>> > Telling cores to stop...
>> > Waiting for lcores to finish...
>> >
>> >   ------- Forward Stats for RX Port= 0/Queue= 0 -> TX Port= 0/Queue= 0
>> > -------
>> >   RX-packets: 0              TX-packets: 32             TX-dropped: 0
>> >   ------- Forward Stats for RX Port= 0/Queue= 1 -> TX Port= 0/Queue= 1
>> > -------
>> >   RX-packets: 0              TX-packets: 32             TX-dropped: 0
>> >   ---------------------- Forward statistics for port 0
>> > ----------------------
>> >   RX-packets: 0              RX-dropped: 0             RX-total: 0
>> >   TX-packets: 64             TX-dropped: 0             TX-total: 64
>> >
>> > ----------------------------------------------------------------------------
>> >
>> >   +++++++++++++++ Accumulated forward statistics for all
>> > ports+++++++++++++++
>> >   RX-packets: 0              RX-dropped: 0             RX-total: 0
>> >   TX-packets: 64             TX-dropped: 0             TX-total: 64
>> >
>> > ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
>> >
>> > Done.
>> > testpmd>
>> >
>> >
>> >
>> > Not sure why I don't see any packets transmission, but at least the
>> > MLX4 PMD seems to be able to talk to the mlx4_en driver, or is it?
>> >
>> > Will keep digging.
>> >
>> > Hui
>> >
>> > On Wed, Dec 20, 2017 at 9:39 PM, Andrew Bainbridge
>> > <andbain@microsoft.com> wrote:
>> > > Hi Hui
>> > >
>> > > Did you create your VM in the "Canada East" data center? This page
>> > > suggests that is a requirement:
>> > >
>> > > https://azure.microsoft.com/en-us/blog/azure-networking-updates-for-fall-2017/
>> > >
>> > > Also, I seem to remember reading that the VM must have at least 8
>> > > cores. Sorry, I can't find a reference for that.
>> > >
>> > > - Andy
>> > >
>> > > -----Original Message-----
>> > > From: Hui Ling
>> > >
>> > > This is my VM info in case it is needed.
>> > >
>> > > =======================================================================================================
>> > > A Standard_DS3_v2 instance from Azure. (one of these models support
>> > > AN)
>> > >
>>
>> You will need to a couple of things.
>> 1. Make sure you have a VM capable of accelerated networking, and that
>> your Azure account
>>    has opt-ed in. Last I checked it was still in preview until RHEL 7 with
>> AN support was released.
>>
>>
>> https://docs.microsoft.com/en-us/azure/virtual-network/virtual-network-create-vm-accelerated-networking
>>
>>   There are many different regions, and most have AN by now. Which one are
>> you trying?
>>
>>
>>    Make sure Linux without DPDK is working with AN first.
>>
>> 2. DPDK support requires 17.11 or later DPDK and the failsafe and TAP
>> PMD's.
>>    The Mellanox mlx4 on Azure is only used after a flow is established.
>>    The initial packet (and broadcast/multicast) show up on the
>> non-accelerated netvsc device.
>>    See the DPDK User Summit in Dublin 2017 for more detal.
>>
>> For later releases if you watch the development mailing list you will see
>> the enhancements being done to simplify setup of TAP/failsafe.
>>
>


* Re: [dpdk-users] DPDK mlx4 PMD on Azure VM
  2018-01-08  3:01               ` Hui Ling
@ 2018-01-08 15:42                 ` Stephen Hemminger
  2018-04-10  8:40                   ` Hui Ling
  0 siblings, 1 reply; 14+ messages in thread
From: Stephen Hemminger @ 2018-01-08 15:42 UTC (permalink / raw)
  To: Hui Ling; +Cc: Andrew Bainbridge, users

On Mon, 8 Jan 2018 11:01:09 +0800
Hui Ling <kelvin.brookletling@gmail.com> wrote:

> Stephen,
> 
> My last VM (DS3 v2, with 4 cores and 16GB memory) is in Canada East
> and I got AN enabled on my subscription so AN worked just fine.
> 
> For DPDK, I saw from your slides in DPDK submit last year saying
> current DPDK solution in Azure needs:
> 
> 1) Linux kerlnel 4.14

You need transparent bonding support, which is in the kernel netvsc driver.
It has been backported to the Ubuntu Azure (4.13) kernel package, RHEL 7, and SLES.
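
A rough way to check that your kernel has it (a sketch; interface
names vary): the VF should appear as an extra interface carrying the
same MAC address as the synthetic netvsc NIC:

ip link show          # look for a second interface sharing eth0's MAC
dmesg | grep -i vf    # the netvsc driver logs when a VF attaches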



> 2) 8 cores
> 
> I am not sure if these are must. I tried to upgrade my Ubuntu to 4.14,
> but then I ran into compilation issues with DPDK 17.11 for MLX4 PMD.
> So I stayed with older version of kernel for Ubuntu 16.04.
> With my config of VM, however, I could not get the failsafe.sh I got
> from MS for azure to work.

The number-of-cores requirement has been dropped; I think it is
available with 4 now.


* Re: [dpdk-users] DPDK mlx4 PMD on Azure VM
  2018-01-08 15:42                 ` Stephen Hemminger
@ 2018-04-10  8:40                   ` Hui Ling
  2018-04-13 12:47                     ` Andrew Bainbridge
  0 siblings, 1 reply; 14+ messages in thread
From: Hui Ling @ 2018-04-10  8:40 UTC (permalink / raw)
  To: Stephen Hemminger; +Cc: Andrew Bainbridge, users

In case anyone else needs this:

I solved this issue by installing the mlx4-related libraries from the
MLNX OFED source package for Azure.

1) download source from:

http://www.mellanox.com/page/firmware_table_Microsoft?mtag=oem_firmware_download

Note: download the sources (48M), not the other packages. For some
reason, only the source package worked for me.

2) unpack and install
./install.pl --guest --dpdk --upstream-libs

3) after installation completes, run
/etc/init.d/openibd restart

this will load the mlx4-related modules into the kernel.

After that, building DPDK and running testpmd seems to work for me (a
quick sanity check is sketched below).
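
A quick sanity check that the modules and verbs devices are present
(ibv_devinfo comes with the ibverbs-utils package):

lsmod | grep mlx4     # expect mlx4_core and mlx4_en at least
ibv_devinfo           # should list the mlx4 VF devices, e.g. "mlx4_2"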

So I am not sure what the deal is with the MLNX packages for Azure,
but this seems to be the way to make it work for me.

Hope this may help someone else.




On Mon, Jan 8, 2018 at 11:42 PM, Stephen Hemminger
<stephen@networkplumber.org> wrote:
> On Mon, 8 Jan 2018 11:01:09 +0800
> Hui Ling <kelvin.brookletling@gmail.com> wrote:
>
>> Stephen,
>>
>> My last VM (DS3 v2, with 4 cores and 16GB memory) is in Canada East
>> and I got AN enabled on my subscription so AN worked just fine.
>>
>> For DPDK, I saw from your slides in DPDK submit last year saying
>> current DPDK solution in Azure needs:
>>
>> 1) Linux kerlnel 4.14
>
> You need transparent bonding support which is in kernel netvsc driver.
> It has been backported to Ubuntu Azure (4.13) package, RHEL 7 and SLES.
>
>
>
>> 2) 8 cores
>>
>> I am not sure if these are must. I tried to upgrade my Ubuntu to 4.14,
>> but then I ran into compilation issues with DPDK 17.11 for MLX4 PMD.
>> So I stayed with older version of kernel for Ubuntu 16.04.
>> With my config of VM, however, I could not get the failsafe.sh I got
>> from MS for azure to work.
>
> The number of cores requirement has been dropped, think it is available
> with 4 now.


* Re: [dpdk-users] DPDK mlx4 PMD on Azure VM
  2018-04-10  8:40                   ` Hui Ling
@ 2018-04-13 12:47                     ` Andrew Bainbridge
  2018-04-13 14:50                       ` Hui Ling
  0 siblings, 1 reply; 14+ messages in thread
From: Andrew Bainbridge @ 2018-04-13 12:47 UTC (permalink / raw)
  To: Hui Ling, Stephen Hemminger; +Cc: users

Hi Hui

That is very similar to the steps I found I needed to follow.

Another useful fact: I found that sometimes when testpmd gives the "MR creation failure: Operation not permitted" error, stopping the VM (via the Azure portal) and restarting it fixes it. Someone on the Azure team suggested that this is because of a problem on some VM hosts. Stopping and starting the VM is likely to move it to a new host that might not have the problem. I believe they are working on fixing the bad hosts.
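
For reference, the same stop/start cycle can be done from the Azure CLI; it is the deallocate step that releases the host, so the VM can come back up on a different one (resource group and VM name below are placeholders):

az vm deallocate --resource-group myResourceGroup --name myVM
az vm start --resource-group myResourceGroup --name myVM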

________________________________
From: Hui Ling <kelvin.brookletling@gmail.com>
Sent: 10 April 2018 09:40
To: Stephen Hemminger
Cc: Andrew Bainbridge; users@dpdk.org
Subject: Re: [dpdk-users] DPDK mlx4 PMD on Azure VM

In case anyone else might need.

I solved this issue by installing the mlx4 related libraries from mlxn
ofed source for azure.

1) download source from:

http://www.mellanox.com/page/firmware_table_Microsoft?mtag=oem_firmware_download

Note: download the sources (48M), not the others. For some reason,
only the source package works for me.

2) unpack and install
./install.pl --guest --dpdk --upstream-libs

3) after installation completes, run
/etc/init.d/openibd restart

this will insert mlx4 related module into kernels.

After that, build dpdk and run testpmd seems working for me

So not sure what is the deal with the MLNX packages for Azure, but
this seems to be way for me to make it work.

Hope this may help someone else.




On Mon, Jan 8, 2018 at 11:42 PM, Stephen Hemminger
<stephen@networkplumber.org> wrote:
> On Mon, 8 Jan 2018 11:01:09 +0800
> Hui Ling <kelvin.brookletling@gmail.com> wrote:
>
>> Stephen,
>>
>> My last VM (DS3 v2, with 4 cores and 16GB memory) is in Canada East
>> and I got AN enabled on my subscription so AN worked just fine.
>>
>> For DPDK, I saw from your slides in DPDK submit last year saying
>> current DPDK solution in Azure needs:
>>
>> 1) Linux kerlnel 4.14
>
> You need transparent bonding support which is in kernel netvsc driver.
> It has been backported to Ubuntu Azure (4.13) package, RHEL 7 and SLES.
>
>
>
>> 2) 8 cores
>>
>> I am not sure if these are must. I tried to upgrade my Ubuntu to 4.14,
>> but then I ran into compilation issues with DPDK 17.11 for MLX4 PMD.
>> So I stayed with older version of kernel for Ubuntu 16.04.
>> With my config of VM, however, I could not get the failsafe.sh I got
>> from MS for azure to work.
>
> The number of cores requirement has been dropped, think it is available
> with 4 now.


* Re: [dpdk-users] DPDK mlx4 PMD on Azure VM
  2018-04-13 12:47                     ` Andrew Bainbridge
@ 2018-04-13 14:50                       ` Hui Ling
  0 siblings, 0 replies; 14+ messages in thread
From: Hui Ling @ 2018-04-13 14:50 UTC (permalink / raw)
  To: Andrew Bainbridge; +Cc: Stephen Hemminger, users

The thing for me is that on my VM I always ran into the same error,
"MR creation failure: Operation not permitted", until I used the
drivers loaded with the specific commands above.

It could be that the host my VM runs on is only compatible with a
specific version of the MLNX drivers, and the steps I listed might
just happen to work on my VM.

So, as Andrew suggested, if you can't fix the "MR creation failure:
Operation not permitted" error, you may want to ask the Azure team to
check and fix the host your VM runs on first.



On Fri, Apr 13, 2018 at 8:47 PM, Andrew Bainbridge
<andbain@microsoft.com> wrote:
> Hi Hui
>
> That is very similar to the steps I found I needed to follow.
>
> Another useful fact: I found that sometimes when testpmd gives the, "MR
> creation failure: Operation not permitted" error, stopping the VM (via the
> Azure portal) and restarting it fixes it. Someone on the Azure team
> suggested that this is because of a problem on some VM hosts. Stopping and
> starting the VM is likely to move it to a new host that might not have the
> problem. I believe they are working on fixing the bad hosts.
>
> ________________________________
> From: Hui Ling <kelvin.brookletling@gmail.com>
> Sent: 10 April 2018 09:40
> To: Stephen Hemminger
> Cc: Andrew Bainbridge; users@dpdk.org
> Subject: Re: [dpdk-users] DPDK mlx4 PMD on Azure VM
>
> In case anyone else might need.
>
> I solved this issue by installing the mlx4 related libraries from mlxn
> ofed source for azure.
>
> 1) download source from:
>
> http://www.mellanox.com/page/firmware_table_Microsoft?mtag=oem_firmware_download
>
>
> Note: download the sources (48M), not the others. For some reason,
> only the source package works for me.
>
> 2) unpack and install
> ./install.pl --guest --dpdk --upstream-libs
>
> 3) after installation completes, run
> /etc/init.d/openibd restart
>
> this will insert mlx4 related module into kernels.
>
> After that, build dpdk and run testpmd seems working for me
>
> So not sure what is the deal with the MLNX packages for Azure, but
> this seems to be way for me to make it work.
>
> Hope this may help someone else.
>
>
>
>
> On Mon, Jan 8, 2018 at 11:42 PM, Stephen Hemminger
> <stephen@networkplumber.org> wrote:
>> On Mon, 8 Jan 2018 11:01:09 +0800
>> Hui Ling <kelvin.brookletling@gmail.com> wrote:
>>
>>> Stephen,
>>>
>>> My last VM (DS3 v2, with 4 cores and 16GB memory) is in Canada East
>>> and I got AN enabled on my subscription so AN worked just fine.
>>>
>>> For DPDK, I saw from your slides in DPDK submit last year saying
>>> current DPDK solution in Azure needs:
>>>
>>> 1) Linux kerlnel 4.14
>>
>> You need transparent bonding support which is in kernel netvsc driver.
>> It has been backported to Ubuntu Azure (4.13) package, RHEL 7 and SLES.
>>
>>
>>
>>> 2) 8 cores
>>>
>>> I am not sure if these are must. I tried to upgrade my Ubuntu to 4.14,
>>> but then I ran into compilation issues with DPDK 17.11 for MLX4 PMD.
>>> So I stayed with older version of kernel for Ubuntu 16.04.
>>> With my config of VM, however, I could not get the failsafe.sh I got
>>> from MS for azure to work.
>>
>> The number of cores requirement has been dropped, think it is available
>> with 4 now.


Thread overview: 14 messages
2017-12-19  7:14 [dpdk-users] DPDK mlx4 PMD on Azure VM Hui Ling
2017-12-19 15:47 ` Thomas Monjalon
2017-12-19 16:22 ` Thomas Monjalon
2017-12-19 16:29   ` Ophir Munk
2017-12-20  2:00     ` Hui Ling
2017-12-20 13:39       ` Andrew Bainbridge
2017-12-21  7:35         ` Hui Ling
2018-01-02  4:27           ` Stephen Hemminger
2018-01-05 20:45             ` Stephen Hemminger
2018-01-08  3:01               ` Hui Ling
2018-01-08 15:42                 ` Stephen Hemminger
2018-04-10  8:40                   ` Hui Ling
2018-04-13 12:47                     ` Andrew Bainbridge
2018-04-13 14:50                       ` Hui Ling
