From: Ale Mansoor <mansooraa@hotmail.com>
To: Stephen Hemminger <stephen@networkplumber.org>,
"dev@dpdk.org" <dev@dpdk.org>
Subject: Re: [dpdk-dev] Poor SRIOV performance with ESXi Linux guest
Date: Thu, 3 Sep 2015 01:02:17 +0000 [thread overview]
Message-ID: <BLU179-W869419434617642C1CFEEFAC680@phx.gbl> (raw)
In-Reply-To: <20150902153104.65a7d70d@urahara>
Thank you for your input. Earlier, on this same ESXi server, another similar guest was able to push several hundred thousand packets per second using the kernel-mode ixgbevf driver.
I am trying to get my hands on a second, similar physical system for comparison, but the hardware and the ESXi SR-IOV VF mapping into the guest are likely not the issue.
Are there any kernel or DPDK options that could influence performance this significantly under ESXi?
I used the tools/setup.sh script from DPDK 2.1 to build and set up my DPDK environment.
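For reference, the setup script roughly amounts to the manual steps below; the target directory matches the DPDK 2.1 layout, while the PCI address and hugepage count are only example values and would need to match this particular guest:
----------------------------------------------
# Build the target and load the UIO modules
make install T=x86_64-native-linuxapp-gcc
modprobe uio
insmod ./x86_64-native-linuxapp-gcc/kmod/igb_uio.ko

# Reserve 2 MB hugepages and mount hugetlbfs
echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
mkdir -p /mnt/huge
mount -t hugetlbfs nodev /mnt/huge

# Bind the VF to igb_uio (0000:03:00.0 is an example address, check lspci)
./tools/dpdk_nic_bind.py --bind=igb_uio 0000:03:00.0
----------------------------------------------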
My Kernel UIO settings are below:
----------------------------------------------
cat /boot/config-3.19.8-100.fc20.x86_64 |grep -i uio
CONFIG_HID_HUION=m
CONFIG_UIO=m
CONFIG_UIO_CIF=m
# CONFIG_UIO_PDRV_GENIRQ is not set
# CONFIG_UIO_DMEM_GENIRQ is not set
CONFIG_UIO_AEC=m
CONFIG_UIO_SERCOS3=m
CONFIG_UIO_PCI_GENERIC=m
# CONFIG_UIO_NETX is not set
# CONFIG_UIO_MF624 is not set
--
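For completeness, these are the checks I can run inside the guest to confirm the driver binding and hugepage state; the kernel boot parameters shown in the last comment are illustrative values, not necessarily what this VM uses today:
----------------------------------------------
# Show which driver each NIC/VF is currently bound to
./tools/dpdk_nic_bind.py --status

# Confirm hugepages were actually reserved
grep -i huge /proc/meminfo

# Typical guest boot parameters tuned for DPDK (example values only):
#   default_hugepagesz=2M hugepagesz=2M hugepages=1024 isolcpus=1-3
----------------------------------------------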
Regards
Ale
> Date: Wed, 2 Sep 2015 15:31:04 -0700
> From: stephen@networkplumber.org
> To: mansooraa@hotmail.com
> CC: dev@dpdk.org
> Subject: Re: [dpdk-dev] Poor SRIOV performance with ESXi Linux guest
>
> On Wed, 2 Sep 2015 22:18:27 +0000
> Ale Mansoor <mansooraa@hotmail.com> wrote:
>
> > I am getting less than 100 packets per second of throughput between VFs in my Fedora FC20 VM running under ESXi 6.0 with DPDK l2fwd (invoked as ./l2fwd -c 0xf -n 4 -- -p 0x3 -T 1).
>
> That is many orders of magnitude less than expected.
>
>
> > Questions:
> > ---------------
> >
> > Q1) Is DPDK + SR-IOV under ESXi supposed to use the igb_uio driver or the vfio-pci driver inside the Linux guest OS?
>
> You have to use igb_uio; there is no emulated IOMMU in ESXi.
>
> > Q2) What is the expected l2fwd performance when running DPDK in a Linux guest OS under ESXi with SR-IOV?
>
> It depends on many things, but with SR-IOV you should reach 10 Mpps or more.
> Did you try running Linux on bare metal on the same hardware first?
>
> > Q3) Any idea what may be preventing the vfio-pci driver from binding to the VFs inside the guest instance?
>
> vfio-pci needs an IOMMU, which is not available in the guest.
>
> > Q4) Why is igb_uio performing so poorly?
>
> Don't blame igb_uio. It is probably something in the system or in VMware.
>
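Following up on the vfio-pci point: a couple of quick checks inside the guest show whether any IOMMU is visible at all (standard Linux commands; an empty iommu_groups directory means vfio-pci has nothing to bind against):
----------------------------------------------
# No entries here means the guest sees no IOMMU, so vfio-pci will not bind
ls /sys/kernel/iommu_groups

# Kernel messages about DMA remapping / IOMMU initialization, if any
dmesg | grep -i -e DMAR -e IOMMU
----------------------------------------------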