DPDK usage discussions
From: Liwu Liu <liwuliu@fortinet.com>
To: Stephen Hemminger <stephen@networkplumber.org>
Cc: "users@dpdk.org" <users@dpdk.org>
Subject: Re: [dpdk-users] Cannot run DPDK applications using Mellanox NIC under Hyper-V
Date: Sat, 5 Jan 2019 00:35:13 +0000	[thread overview]
Message-ID: <02a2fbe93b93458190386e8650c3fb2c@fortinet.com> (raw)
In-Reply-To: <20190104162349.264da710@hermes.lan>

Hi Stephen,

  Yes, we are seeing all of NETVSC, the failsafe PMD, TAP, and the MLX4 VF. We are expecting bifurcated mode, where default traffic goes through the MLX4_EN slave to the NETVSC master and into the Linux network stack, while certain flow-defined traffic goes into a DPDK RX queue rather than an MLX4_EN net-dev queue. Unfortunately, since we cannot program flow rules on the VF NIC, we cannot redirect traffic into DPDK user space.

   In fact, I am looking for a way to steer all RX traffic from the MLX4_EN slave device into a DPDK RX ring.
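For reference, when the PMD does accept flow rules, the bifurcation described above is expressed through rte_flow; in testpmd syntax, a sketch (port index, queue index, and address are placeholders) looks like:

```shell
# Inside a testpmd session: steer IPv4 traffic destined to 10.0.0.1
# into DPDK RX queue 0 of port 0; all other traffic keeps taking the
# kernel path. This is the kind of rule that fails here, because the
# VF is not trusted to program flows.
flow create 0 ingress pattern eth / ipv4 dst is 10.0.0.1 / end actions queue index 0 / end
```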

  Hypervisor: Windows Server + MLNX VPI 5.5; VM: Linux with dpdk-18.11, mlnx-ofed-kernel-4.4, rdma-core-43mlnx1

  Ethernet controller: Mellanox Technologies MT27500 Family [ConnectX-3 Virtual Function]

Best,

Liwu

-----Original Message-----
From: Stephen Hemminger <stephen@networkplumber.org> 
Sent: Friday, January 4, 2019 4:24 PM
To: Liwu Liu <liwuliu@fortinet.com>
Subject: Re: [dpdk-users] Cannot run DPDK applications using Mellanox NIC under Hyper-V

On Sat, 5 Jan 2019 00:10:43 +0000
Liwu Liu <liwuliu@fortinet.com> wrote:

> Hi Stephen,
> 
>    It is SR-IOV
> 
>   Thanks,
> 
> Liwu
> 
> -----Original Message-----
> From: Stephen Hemminger <stephen@networkplumber.org>
> Sent: Friday, January 4, 2019 4:07 PM
> To: Liwu Liu <liwuliu@fortinet.com>
> Cc: users@dpdk.org
> Subject: Re: [dpdk-users] Cannot run DPDK applications using Mellanox 
> NIC under Hyper-V
> 
> On Fri, 4 Jan 2019 20:06:48 +0000
> Liwu Liu <liwuliu@fortinet.com> wrote:
> 
> > Hi Team,
> >     We previously hit a similar problem with an Ubuntu 18.04
> > hypervisor and resolved it by setting log_num_mgm_entry_size=-1
> > (see https://mails.dpdk.org/archives/users/2018-November/003647.html).
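On Linux, that mlx4 module parameter is typically set through a modprobe configuration file; a sketch (file path and module reload step may vary by distribution):

```shell
# Persist the mlx4_core option; -1 selects device-managed flow
# steering, which avoids exhausting the fixed-size mcg table.
echo "options mlx4_core log_num_mgm_entry_size=-1" > /etc/modprobe.d/mlx4.conf
# Reload the mlx4 modules (or reboot) for the option to take effect.
modprobe -r mlx4_en mlx4_ib mlx4_core
modprobe mlx4_core
```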
> > 
> >   For Windows Server with MLNX VPI, however, I do not know where to set this, and DPDK over MLX4 in the Linux VM fails the same way when attaching a flow.
> > 
> >     [  374.568992] <mlx4_ib> __mlx4_ib_create_flow: mcg table is 
> > full. Fail to register network rule. size = 64 (out of memory error 
> > code)
> > 
> >     I would like your help on this. It appears the PF interface is not configured to trust the VF interfaces to add new flow rules.
> > 
> >    Many thanks for your help,
> > 
> > Liwu
> > 
> 
> How are you using the Mellanox device with Windows Server? PCI passthrough or SR-IOV?

If you are using SR-IOV, you can't use the Mellanox device directly; you have to go through the synthetic device. If you use the vdev_netvsc pseudo-device, it sets up a failsafe PMD, a TAP device, and the VF (Mellanox) PMD for you.

The experimental alternative is the netvsc PMD, which will manage the VF itself if one is available.
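As an illustration (interface name, core list, and channel count are placeholders), the vdev_netvsc route is typically enabled with an EAL --vdev argument:

```shell
# Failsafe/TAP/VF setup via the vdev_netvsc driver on Hyper-V/Azure;
# "eth1" names the synthetic netvsc interface inside the guest.
testpmd -l 0-1 -n 4 --vdev="net_vdev_netvsc0,iface=eth1" -- -i
```

The experimental netvsc PMD path instead requires rebinding the synthetic device to the uio_hv_generic kernel driver before launching the application, without any vdev_netvsc argument.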


Thread overview: 9+ messages
2019-01-04 20:06 Liwu Liu
2019-01-05  0:06 ` Stephen Hemminger
2019-01-05  0:12   ` Liwu Liu
     [not found]   ` <596318879eb14b32971e4c182489f537@fortinet.com>
     [not found]     ` <20190104162349.264da710@hermes.lan>
2019-01-05  0:35       ` Liwu Liu [this message]
2019-01-05  1:08         ` Stephen Hemminger
2019-01-08 18:38           ` Liwu Liu
2019-01-04 22:28 Liwu Liu
2019-01-05  9:45 ` Olga Shern
2019-01-07 17:46   ` Liwu Liu
