DPDK usage discussions
From: Cliff Burdick <shaklee3@gmail.com>
To: Stephen Hemminger <stephen@networkplumber.org>
Cc: users <users@dpdk.org>
Subject: Re: [dpdk-users] Mellanox + DPDK + Docker/Kubernetes
Date: Thu, 14 Feb 2019 18:16:41 -0800	[thread overview]
Message-ID: <CA+Gp1nYS3t_rwwUCGEYBHrGCWgLW=UT1XqHe6VAJEoTJak_ANQ@mail.gmail.com> (raw)
In-Reply-To: <20190214181115.416fddc8@shemminger-XPS-13-9360>

This is bare metal (PF). I traced down where it's failing in the Mellanox
driver: it issues an ioctl with the name of the interface, and that call
fails because the device isn't visible to the application (it's not in
/proc/net/dev). The Mellanox driver did, however, successfully pull the
interface name from the ib driver. I think this is probably just not
possible without something like Multus to support multiple devices in an
unofficial way. I'm assuming this would actually work with Intel NICs,
since they aren't visible to the kernel as a regular net device at all and
thus wouldn't need the ioctl.
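
For illustration, here is a minimal sketch of the kind of name-based ioctl
that fails this way; the interface name ("eth0") and the code itself are
assumptions for the example, not the mlx5 PMD's actual implementation. Run
from an isolated container network namespace, the SIOCGIFHWADDR call fails
with ENODEV ("No such device"), matching the probe error quoted below.

    /* Hypothetical sketch: fetch a MAC address by interface name, the way a
     * netdev-name-based ioctl behaves.  If "eth0" is not present in the
     * caller's network namespace, the ioctl fails with ENODEV. */
    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <sys/socket.h>
    #include <net/if.h>

    int main(void)
    {
        struct ifreq ifr;
        int fd = socket(AF_INET, SOCK_DGRAM, 0);  /* any socket fd works for this ioctl */

        memset(&ifr, 0, sizeof(ifr));
        strncpy(ifr.ifr_name, "eth0", IFNAMSIZ - 1);  /* assumed interface name */

        if (ioctl(fd, SIOCGIFHWADDR, &ifr) < 0) {
            /* In an isolated container netns this prints "No such device". */
            printf("SIOCGIFHWADDR(%s) failed: %s\n", ifr.ifr_name, strerror(errno));
        } else {
            unsigned char *mac = (unsigned char *)ifr.ifr_hwaddr.sa_data;
            printf("%s MAC: %02x:%02x:%02x:%02x:%02x:%02x\n", ifr.ifr_name,
                   mac[0], mac[1], mac[2], mac[3], mac[4], mac[5]);
        }

        close(fd);
        return 0;
    }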

On Thu, Feb 14, 2019, 18:11 Stephen Hemminger <stephen@networkplumber.org> wrote:

> On Thu, 14 Feb 2019 13:05:28 -0800
> Cliff Burdick <shaklee3@gmail.com> wrote:
>
> > Hi, I'm trying to get DPDK working inside a container deployed with
> > Kubernetes. It works great if I pass hostNetwork: true (effectively
> > net=host in Docker) so that the container sees all the host interfaces.
> > The problem is that you then lose all normal Kubernetes networking for
> > other, non-DPDK interfaces. If I disable host networking, I get the
> > following error:
> >
> > EAL: Probing VFIO support...
> > EAL: PCI device 0000:01:00.0 on NUMA socket 0
> > EAL:   probe driver: 15b3:1013 net_mlx5
> > net_mlx5: port 0 cannot get MAC address, is mlx5_en loaded? (errno: No such device)
> > net_mlx5: probe of PCI device 0000:01:00.0 aborted after encountering an error: No such device
> > EAL: Requested device 0000:01:00.0 cannot be used
> >
> > I've tried mounting /sys and /dev in the container from the host, and it
> > still doesn't work. Is there something I can do to get the Mellanox mlx5
> > driver to work inside a container if it can't see the host interfaces?
>
> Is this Mellanox on bare metal (i.e. a PF), or a Mellanox VF as used in
> Hyper-V/Azure?
>
>
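
Regarding the hostNetwork: true behavior described in the quoted message: a
quick way to see which netdevs a container's network namespace actually
exposes is to read /proc/net/dev from inside the pod. A minimal sketch
(assumed illustration, not part of DPDK or Kubernetes):

    /* List the network interfaces visible in the current network namespace
     * by reading /proc/net/dev.  With hostNetwork: true the pod shares the
     * host namespace and the Mellanox netdevs appear here; in an isolated
     * pod namespace typically only lo and a CNI-provided eth0 do. */
    #include <stdio.h>

    int main(void)
    {
        char line[512];
        FILE *f = fopen("/proc/net/dev", "r");

        if (f == NULL) {
            perror("fopen(/proc/net/dev)");
            return 1;
        }
        while (fgets(line, sizeof(line), f) != NULL)
            fputs(line, stdout);  /* each data line starts with "<ifname>:" */
        fclose(f);
        return 0;
    }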

Thread overview: 3+ messages
2019-02-14 21:05 Cliff Burdick
2019-02-15  2:11 ` Stephen Hemminger
2019-02-15  2:16   ` Cliff Burdick [this message]
