DPDK patches and discussions
* [dpdk-dev] A Question about the necessity of DPDK VF for Ethernet PMDs
@ 2019-02-04  7:43 Rami Rosen
  2019-02-04 10:22 ` Alejandro Lucero
  0 siblings, 1 reply; 6+ messages in thread
From: Rami Rosen @ 2019-02-04  7:43 UTC (permalink / raw)
  To: dev

 Hello all,



Now that DPDK 19.02 was released three days ago (on time!!), hopefully
there will be time for people to answer the following question:



According to the "DPDK Getting Started Guide",



"If UEFI secure boot is enabled, the Linux kernel may disallow the use of
UIO on the system.

Therefore, devices for use by DPDK should be bound to the vfio-pci kernel
module

rather than igb_uio or uio_pci_generic. For more details see Binding and

Unbinding Network Ports to/from the Kernel Modules."

See:

http://doc.dpdk.org/guides/linux_gsg/sys_reqs.html#bios-setting-prerequisite-on-x86
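
For reference, the usual way to do the binding is
usertools/dpdk-devbind.py --bind=vfio-pci <BDF>; the following is only a
minimal Python sketch of the underlying sysfs mechanism (the BDF is a
placeholder, and it assumes the vfio-pci module is already loaded and
root privileges):

    # Minimal sketch: bind a PCI device to vfio-pci through sysfs.
    # BDF is a placeholder for the actual device address.
    import os

    BDF = "0000:03:00.0"  # placeholder PCI address
    dev = "/sys/bus/pci/devices/" + BDF

    # Tell the PCI core which driver should claim this device.
    with open(os.path.join(dev, "driver_override"), "w") as f:
        f.write("vfio-pci")

    # Unbind from the current driver, if any.
    unbind = os.path.join(dev, "driver", "unbind")
    if os.path.exists(unbind):
        with open(unbind, "w") as f:
            f.write(BDF)

    # Ask the PCI core to reprobe the device; vfio-pci claims it now.
    with open("/sys/bus/pci/drivers_probe", "w") as f:
        f.write(BDF)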



Now, when you bind a PCI device to vfio-pci, the "max_vfs" entry is *not*
created under /sys/bus/pci/devices/<BDF>/ (as opposed to the case when you
bind with igb_uio). This means that you cannot create DPDK VFs in this
case (as you cannot write the desired number of VFs to the non-existent
max_vfs entry). You can, however, create kernel VFs (by echoing into the
sriov_numvfs sysfs entry).
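
To make the difference concrete, here is a minimal Python sketch (BDF
and the VF count are placeholders, and it assumes root privileges) that
writes to max_vfs when the PF is bound to igb_uio, and otherwise falls
back to the standard sriov_numvfs entry:

    # Minimal sketch: create VFs for a PF, depending on how it is bound.
    # BDF and NUM_VFS are placeholders.
    import os

    BDF = "0000:03:00.0"  # placeholder PF address
    NUM_VFS = 2           # placeholder number of VFs to create
    dev = "/sys/bus/pci/devices/" + BDF

    max_vfs = os.path.join(dev, "max_vfs")            # created by igb_uio
    sriov_numvfs = os.path.join(dev, "sriov_numvfs")  # standard SR-IOV

    if os.path.exists(max_vfs):
        # PF bound to igb_uio: DPDK-style VF creation.
        with open(max_vfs, "w") as f:
            f.write(str(NUM_VFS))
    else:
        # PF bound to vfio-pci or a kernel driver: kernel VF creation.
        with open(sriov_numvfs, "w") as f:
            f.write(str(NUM_VFS))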



So I assume there are deployments of DPDK (with secure boot) where the
devices are bound not with igb_uio but with vfio-pci.



So the question is:

As explained above, there are probably setups where you cannot create PMD
VFs. Most Ethernet PMD vendors *do* provide VF PMDs in the official DPDK
repo; but what are the benefits of providing a DPDK VF PMD? Is it
mandatory in some use cases? Is there any advantage to using a DPDK
PF/DPDK VF combination over using a kernel VF?



Regards,

Rami Rosen


* Re: [dpdk-dev] A Question about the necessity of DPDK VF for Ethernet PMDs
  2019-02-04  7:43 [dpdk-dev] A Question about the necessity of DPDK VF for Ethernet PMDs Rami Rosen
@ 2019-02-04 10:22 ` Alejandro Lucero
  2019-02-04 10:44   ` Rami Rosen
  0 siblings, 1 reply; 6+ messages in thread
From: Alejandro Lucero @ 2019-02-04 10:22 UTC (permalink / raw)
  To: Rami Rosen; +Cc: dev

Hi Rami,

Your concern is related to this thread:

http://mails.dpdk.org/archives/dev/2019-January/123466.html

I'm working on solving the problem of the PF needing to be bound to VFIO.
My proposal is to use mediated devices. Although it is not strictly
necessary to rely on the current kernel work in progress on IOMMU-aware
VFIO mediated devices

https://lwn.net/Articles/763793/

using that patch makes the implementation easier. Indeed, this solution
implies changes to the current kernel netdev drivers, and the previously
mentioned kernel patch will likely be accepted upstream before our push.

DPDK changes for supporting an mdev bus will happen soon (I hope). Intel
guys are working on this for the I/O scaling case Intel is interested in,
with Netronome's case, and the one you point out, being specific cases
which can be implemented with those DPDK and kernel changes.


On Mon, Feb 4, 2019 at 7:43 AM Rami Rosen <ramirose@gmail.com> wrote:

> [...]
>
> So the question is:
>
> As explained above, there are probably setups where you cannot create
> PMD VFs. Most Ethernet PMD vendors *do* provide VF PMDs in the official
> DPDK repo; but what are the benefits of providing a DPDK VF PMD? Is it
> mandatory in some use cases? Is there any advantage to using a DPDK
> PF/DPDK VF combination over using a kernel VF?
>
> Regards,
>
> Rami Rosen


* Re: [dpdk-dev] A Question about the necessity of DPDK VF for Ethernet PMDs
  2019-02-04 10:22 ` Alejandro Lucero
@ 2019-02-04 10:44   ` Rami Rosen
  2019-02-04 11:30     ` Alejandro Lucero
  0 siblings, 1 reply; 6+ messages in thread
From: Rami Rosen @ 2019-02-04 10:44 UTC (permalink / raw)
  To: Alejandro Lucero; +Cc: dev

Hi Alejandro,

>Your concern is related to this thread

Thanks for your reply, I was aware of this thread.

Still, with current kernels and the currently available Ethernet DPDK
PMDs, I am not sure about the answers to my questions (I don't think that
mail thread covers them): what are the benefits of providing DPDK VF PMDs,
are they mandatory in some use cases, and is there any advantage to using
a DPDK PF/DPDK VF combination over using a kernel VF?

Regards,

Rami Rosen


* Re: [dpdk-dev] A Question about the necessity of DPDK VF for Ethernet PMDs
  2019-02-04 10:44   ` Rami Rosen
@ 2019-02-04 11:30     ` Alejandro Lucero
  2019-02-04 12:19       ` Rami Rosen
  0 siblings, 1 reply; 6+ messages in thread
From: Alejandro Lucero @ 2019-02-04 11:30 UTC (permalink / raw)
  To: Rami Rosen; +Cc: dev

On Mon, Feb 4, 2019 at 10:44 AM Rami Rosen <ramirose@gmail.com> wrote:

> Hi Alejandro,
>
> >Your concern is related to this thread
>
> Thanks for your reply, I was aware of this thread.
>
OK


> Still, with current kernels and the currently available Ethernet DPDK
> PMDs, I am not sure about the answers to my questions (I don't think
> that mail thread covers them): what are the benefits of providing DPDK
> VF PMDs, are they mandatory in some use cases, and is there any
> advantage to using a DPDK PF/DPDK VF combination over using a kernel VF?
>
That is an interesting discussion. I know there is some interest in this
case from OVS people, exactly for running an OVS instance inside a VM.
I can see other reasons:
   - When SRIOV is used by VMs, the slow path will always be faster (and
have lower latency) with DPDK.
   - When there are more VMs/containers than VFs, DPDK allows using SRIOV
(for higher-priority VMs/containers) and vhost-user (for lower-priority
ones), with the lower-priority path still being faster than going through
the kernel.
   - If SRIOV is not used by VMs, a DPDK forwarding path using vhost-user
along with VF PMDs is faster than going through the kernel.
   - Having the PF managed by user space could potentially mean faster VM
migration.
   - PF flow management (inserting/deleting flow rules) is faster in user
space.


> Regards,
>
> Rami Rosen
>


* Re: [dpdk-dev] A Question about the necessity of DPDK VF for Ethernet PMDs
  2019-02-04 11:30     ` Alejandro Lucero
@ 2019-02-04 12:19       ` Rami Rosen
  2019-02-04 12:53         ` Alejandro Lucero
  0 siblings, 1 reply; 6+ messages in thread
From: Rami Rosen @ 2019-02-04 12:19 UTC (permalink / raw)
  To: Alejandro Lucero; +Cc: dev

Hi, Alejandro,

Thanks for your quick response.

> When SRIOV is used by VMs, the slow path will always be faster (and
> with lower latency) with DPDK.

Yes, I am referring primarily to the SRIOV case in my question, when
assigning a PCI VF to a VM (most likely a QEMU VM).
Can you please explain what you mean by "slow path" in this context?

I am not sure the OVS setup is relevant here; it seems (if I understand
correctly) that VFIO is more commonly used in OVS setups, so this question
does not apply there (unless you use igb_uio, which is less common).
According to
http://docs.openvswitch.org/en/latest/intro/install/dpdk/
"VFIO is prefered to the UIO driver when using recent versions of DPDK. "

Regards,
Rami Rosen


* Re: [dpdk-dev] A Question about the necessity of DPDK VF for Ethernet PMDs
  2019-02-04 12:19       ` Rami Rosen
@ 2019-02-04 12:53         ` Alejandro Lucero
  0 siblings, 0 replies; 6+ messages in thread
From: Alejandro Lucero @ 2019-02-04 12:53 UTC (permalink / raw)
  To: Rami Rosen; +Cc: dev

On Mon, Feb 4, 2019 at 12:19 PM Rami Rosen <ramirose@gmail.com> wrote:

> Hi, Alejandro,
>
> Thanks for your quick response.
>
> >When SRIOV is used by VMs, the slow path will always be faster (and
> >with lower latency) with DPDK.
>
> Yes, I am referring primarily to the SRIOV case in my question, when
> assigning a PCI VF to a VM (most likely a QEMU VM).
> Can you please explain what you mean by "slow path" in this context?
>
>
The first packets of a new VM connection will go through the slow path.
With SRIOV, and assuming OVS offload, that implies the NIC sending the
first packet up to OVS through the PF and back down once OVS decides what
to do with that flow (likely including offloading a flow rule).


> I am not sure the OVS setup is relevant here; it seems (if I understand
> correctly) that VFIO is more commonly used in OVS setups, so this
> question does not apply there (unless you use igb_uio, which is less
> common).
>

I do not understand what you mean. VFIO is preferred because it is the
right way of using the IOMMU, and also the only way if OVS runs as
non-root. If you have VMs using SRIOV, you need something like OVS in the
host, and this is where the potential benefit of OVS-DPDK comes in,
because any required OVS action will likely be faster.


> According to
> http://docs.openvswitch.org/en/latest/intro/install/dpdk/
> "VFIO is prefered to the UIO driver when using recent versions of DPDK. "
> Regards,
> Rami Rosen
>
>
>
>

