DPDK usage discussions
From: Stephen Hemminger <stephen@networkplumber.org>
To: Hui Ling <kelvin.brookletling@gmail.com>
Cc: Andrew Bainbridge <andbain@microsoft.com>,
	"users@dpdk.org" <users@dpdk.org>
Subject: Re: [dpdk-users] DPDK mlx4 PMD on Azure VM
Date: Fri, 5 Jan 2018 12:45:18 -0800
Message-ID: <CAOaVG16R8bbuxUxtH9pm3QT2TJoUZbRMYsx1=P_Lm7jPVzBsLA@mail.gmail.com>
In-Reply-To: <20180101202732.6423d6b9@xeon-e3>

Accelerated networking is now generally available for Linux (and Windows)
in all regions.

https://azure.microsoft.com/en-us/blog/maximize-your-vm-s-performance-with-accelerated-networking-now-generally-available-for-both-windows-and-linux/
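If your VM was created before AN reached its region, it can be enabled on the
existing NIC. Roughly, with the Azure CLI (the resource group, VM, and NIC
names below are placeholders; check the current CLI docs for the exact flags):

  az vm deallocate --resource-group myRG --name myVM
  az network nic update --resource-group myRG --name myNIC \
      --accelerated-networking true
  az vm start --resource-group myRG --name myVM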

On Mon, Jan 1, 2018 at 8:27 PM, Stephen Hemminger <stephen@networkplumber.org> wrote:

> On Thu, 21 Dec 2017 15:35:00 +0800
> Hui Ling <kelvin.brookletling@gmail.com> wrote:
>
> > Andy,
> >
> > My last VM was not in the "Canada East" data center, since no AN-capable
> > instance type was available to me at the time I created it.
> >
> > I just tried a VM of the same type in Canada East, and it seems that
> > the location does make a difference.
> >
> > This time, I was able to run testpmd without any explicit errors:
> >
> > root@myVM:/home/hling/dpdk-17.11# build/app/testpmd -l 1-2 -n 4 -w
> > 0004:00:02.0 0002:00:02.0 -- --rxq=2 --txq=2 -i
> > EAL: Detected 4 lcore(s)
> > EAL: No free hugepages reported in hugepages-1048576kB
> > EAL: Probing VFIO support...
> > EAL: WARNING: cpu flags constant_tsc=yes nonstop_tsc=no -> using
> > unreliable clock cycles !
> > EAL: PCI device 0004:00:02.0 on NUMA socket 0
> > EAL:   probe driver: 15b3:1004 net_mlx4
> > PMD: mlx4.c:465: mlx4_pci_probe(): PCI information matches, using
> > device "mlx4_3" (VF: true)
> > PMD: mlx4.c:492: mlx4_pci_probe(): 1 port(s) detected
> > PMD: mlx4.c:586: mlx4_pci_probe(): port 1 MAC address is 00:0d:3a:f4:49:c4
> > Interactive-mode selected
> > USER1: create a new mbuf pool <mbuf_pool_socket_0>: n=155456,
> > size=2176, socket=0
> > Configuring Port 0 (socket 0)
> > Port 0: 00:0D:3A:F4:49:C4
> > Checking link statuses...
> > Done
> >
> > testpmd> start tx_first
> > io packet forwarding - ports=1 - cores=1 - streams=2 - NUMA support
> > enabled, MP over anonymous pages disabled
> > Logical Core 2 (socket 0) forwards packets on 2 streams:
> >   RX P=0/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00
> >   RX P=0/Q=1 (socket 0) -> TX P=0/Q=1 (socket 0) peer=02:00:00:00:00:00
> >
> >   io packet forwarding - CRC stripping enabled - packets/burst=32
> >   nb forwarding cores=1 - nb forwarding ports=1
> >   RX queues=2 - RX desc=128 - RX free threshold=0
> >   RX threshold registers: pthresh=0 hthresh=0 wthresh=0
> >   TX queues=2 - TX desc=512 - TX free threshold=0
> >   TX threshold registers: pthresh=0 hthresh=0 wthresh=0
> >   TX RS bit threshold=0 - TXQ flags=0x0
> > testpmd> stop
> > Telling cores to stop...
> > Waiting for lcores to finish...
> >
> >   ------- Forward Stats for RX Port= 0/Queue= 0 -> TX Port= 0/Queue= 0 -------
> >   RX-packets: 0              TX-packets: 32             TX-dropped: 0
> >   ------- Forward Stats for RX Port= 0/Queue= 1 -> TX Port= 0/Queue= 1 -------
> >   RX-packets: 0              TX-packets: 32             TX-dropped: 0
> >   ---------------------- Forward statistics for port 0 ----------------------
> >   RX-packets: 0              RX-dropped: 0             RX-total: 0
> >   TX-packets: 64             TX-dropped: 0             TX-total: 64
> >   ----------------------------------------------------------------------------
> >
> >   +++++++++++++++ Accumulated forward statistics for all ports +++++++++++++++
> >   RX-packets: 0              RX-dropped: 0             RX-total: 0
> >   TX-packets: 64             TX-dropped: 0             TX-total: 64
> >   ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
> >
> > Done.
> > testpmd>
> >
> >
> >
> > Not sure why I don't see any packet transmission, but at least the
> > mlx4 PMD seems to be able to talk to the mlx4_en driver, or does it?
> >
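Seeing RX-packets: 0 with only the VF port driven is consistent with point 2
below: the first packet of a flow (and broadcast/multicast) arrives on the
synthetic netvsc device, not the VF. You can watch what the port actually sees
from the testpmd prompt with the standard console commands:

  testpmd> show port info 0      # link state, driver, MAC
  testpmd> set verbose 1         # log every received/transmitted burst
  testpmd> show port stats 0     # raw per-port RX/TX counters
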
> > Will keep digging.
> >
> > Hui
> >
> > On Wed, Dec 20, 2017 at 9:39 PM, Andrew Bainbridge
> > <andbain@microsoft.com> wrote:
> > > Hi Hui
> > >
> > > Did you create your VM in the "Canada East" data center? This page
> > > suggests that is a requirement:
> > > https://azure.microsoft.com/en-us/blog/azure-networking-updates-for-fall-2017/
> > >
> > > Also, I seem to remember reading that the VM must have at least 8
> > > cores. Sorry, I can't find a reference for that.
> > >
> > > - Andy
> > >
> > > -----Original Message-----
> > > From: Hui Ling
> > >
> > > This is my VM info in case it is needed.
> > > =====================================================================
> > > A Standard_DS3_v2 instance from Azure (one of the models that supports AN).
> > >
>
> You will need to do a couple of things.
> 1. Make sure you have a VM capable of accelerated networking, and that
>    your Azure account has opted in. Last I checked it was still in preview
>    until RHEL 7 with AN support was released.
>
>    https://docs.microsoft.com/en-us/azure/virtual-network/virtual-network-create-vm-accelerated-networking
>
>    There are many different regions, and most have AN by now. Which one
>    are you trying?
>
>
>    Make sure Linux without DPDK is working with AN first.
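
A quick sanity check from inside the guest (assuming eth0 is the synthetic
interface; the vf_* counters come from the netvsc driver and only appear once
the VF is bonded in):

  lspci | grep Mellanox          # the ConnectX-3 Virtual Function should be listed
  ethtool -S eth0 | grep vf_     # non-zero vf_rx_*/vf_tx_* counters mean
                                 # traffic is really going over the VF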
>
> 2. DPDK support requires DPDK 17.11 or later, plus the failsafe and TAP
>    PMDs. The Mellanox mlx4 VF on Azure is only used after a flow is
>    established; the initial packet (and broadcast/multicast) show up on
>    the non-accelerated netvsc device. See the DPDK Summit Userspace 2017
>    talks in Dublin for more detail.
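
As a rough sketch of what that invocation can look like on 17.11 (the PCI
address, tap name, and kernel interface are placeholders taken from this
thread, and the exact EAL whitelist handling of the VF may need adjusting;
see the fail-safe and tap PMD guides for the authoritative devargs):

  build/app/testpmd -l 1-2 -n 4 \
    --vdev 'net_failsafe0,dev(0002:00:02.0),dev(net_tap0,iface=dtap0,remote=eth1)' \
    -- --rxq=2 --txq=2 -i

testpmd then drives the fail-safe port: traffic flows over the mlx4 VF while
it is present and falls back to the TAP/netvsc path otherwise.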
>
> For later releases, if you watch the development mailing list you will
> see the enhancements being made to simplify TAP/failsafe setup.
>
>
