DPDK patches and discussions
From: Stephen Hemminger <stephen@networkplumber.org>
To: Matan Azrad <matan@mellanox.com>
Cc: "dev@dpdk.org" <dev@dpdk.org>
Subject: Re: [dpdk-dev] RFC - vdev_netvsc automatic blacklisting
Date: Wed, 12 Jun 2019 06:26:37 -0700	[thread overview]
Message-ID: <20190612062637.57e9ab6e@hermes.lan> (raw)
In-Reply-To: <AM0PR0502MB4019CB927BFB60F46D5AF4B0D2EC0@AM0PR0502MB4019.eurprd05.prod.outlook.com>

On Wed, 12 Jun 2019 05:15:47 +0000
Matan Azrad <matan@mellanox.com> wrote:

>  From: Stephen Hemminger 
> > > Hi Stephen
> > >
> > > From: Stephen Hemminger  
> > > > When using DPDK on Azure it is common to have one non-DPDK interface.
> > > > If that non-DPDK interface is present, vdev_netvsc correctly skips it.
> > > > But if the non-DPDK interface has accelerated networking, the Mellanox
> > > > driver will still get associated with DPDK (and break connectivity).
> > > >
> > > > The current process is to tell users to whitelist or blacklist the PCI
> > > > device(s) not used for DPDK. But vdev_netvsc is already doing a lot of
> > > > looking at devices and VF devices.
> > > >
> > > > Could vdev_netvsc just do this automatically by setting devargs for
> > > > the VF to blacklist?  
> > >
> > >
> > > There is a way to blacklist a device by setting a route/IP/IPv6 on it; from the
> > > VDEV_NETVSC doc:
> > > "Not specifying either iface or mac makes this driver attach itself to all
> > > unrouted NetVSC interfaces found on the system. Specifying the device
> > > makes this driver attach itself to the device regardless the device routes."
> > >
> > > So, we are expecting that used VFs will have a route and DPDK VFs will not
> > > have a route.
> > >
> > > Isn't that enough?
> > >
> > >
> > > Matan  
> > 
> > I am talking about the case where eth0 has a route: it gets skipped, but the
> > associated MLX SR-IOV device does not. When the MLX device is then configured
> > for DPDK, it breaks the device for use by the kernel, and therefore
> > connectivity with the VM is lost.
> 
> Ok, I think I got you.
> You want to blacklist the PCI device whose netvsc net-device is detected as routed, correct?
> 
> If so,
> 
> I don't think that probing the PCI device hurts connectivity; only the configuration should hurt it.
> 
> It means that the application configures the device and hurts it.
> Isn't that an application issue?
> 
> Matan

Actually, probing does hurt; it corrupts the MLX driver.
In theory, the driver supports bifurcation, but in practice it is greedy and grabs all flows.
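
For reference, the manual workaround discussed above is the EAL PCI blacklist
option ("-b <VF PCI address>"), which keeps EAL from probing the device at all.
The automatic variant suggested here would amount to vdev_netvsc registering
the same blacklist entry itself once it finds a routed NetVSC interface backed
by a VF. A minimal sketch of that idea follows; this is not actual driver code,
it assumes the VF's PCI address string has already been resolved from the
routed interface, and it uses the rte_devargs API of this DPDK generation:

/*
 * Hypothetical helper: blacklist the VF behind a routed NetVSC
 * interface so that EAL never probes it (equivalent to passing
 * "-b <pci_addr>" on the EAL command line).  Sketch only.
 */
#include <rte_dev.h>
#include <rte_devargs.h>
#include <rte_log.h>

static int
netvsc_blacklist_routed_vf(const char *pci_addr)
{
	int ret;

	ret = rte_devargs_add(RTE_DEVTYPE_BLACKLISTED_PCI, pci_addr);
	if (ret < 0)
		RTE_LOG(WARNING, EAL,
			"cannot blacklist routed VF %s\n", pci_addr);
	return ret;
}

Note that such an entry would only help if it is registered before the PCI bus
probes the device.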


Thread overview: 6+ messages
2019-06-04 19:54 Stephen Hemminger
2019-06-11  6:21 ` Matan Azrad
2019-06-11 14:26   ` Stephen Hemminger
2019-06-12  5:15     ` Matan Azrad
2019-06-12 13:26       ` Stephen Hemminger [this message]
2019-06-12 16:13         ` Matan Azrad
