* [dpdk-dev] Performance issue with vmxnet3 pmd
From: Hyunseok @ 2014-07-07 22:22 UTC
To: dev

Hi,

I was testing l2fwd with the vmxnet3 PMD (included in DPDK).

The maximum forwarding rate I got from the vmxnet3 PMD with l2fwd is only
2.5 to 2.8 Gbps.

This is in contrast with the ixgbe PMD, with which I could easily achieve a
10 Gbps forwarding rate.

With the original vmxnet3 driver (non-PMD), I could also achieve close to
10 Gbps with multiple iperf streams. But I can never achieve that rate with
the vmxnet3 PMD...

So basically the vmxnet3 PMD doesn't seem that fast. Is this a known issue?

Thanks,
-Hyunseok
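For context, a setup like the one described above typically boils down to
commands of the following shape; the core mask, channel count, port mask,
and receiver address here are illustrative placeholders, not values taken
from this report:

  # DPDK l2fwd forwarding between two ports; EAL options precede "--"
  ./build/l2fwd -c 0x3 -n 4 -- -p 0x3 -q 1

  # kernel-driver baseline: several parallel iperf streams
  iperf -c <receiver-ip> -P 4 -t 60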
* Re: [dpdk-dev] Performance issue with vmxnet3 pmd
From: Patel, Rashmin N @ 2014-07-07 23:03 UTC
To: hyunseok, dev

Hi Hyunseok,

We should not compare Vmxnet3-PMD with ixgbe-PMD performance, as Vmxnet3 is
a para-virtual device; it is not similar to a device directly assigned to a
VM either. There is a VMEXIT/VMENTRY occurrence at each burst-size boundary,
and that overhead can't be eliminated unless the design of Vmxnet3 is
updated in the future. In addition, the packets are touched in the ESXi
hypervisor's vSwitch layer between the physical NIC and the virtual machine,
which introduces extra overhead that you won't have when a Niantic is used
natively or passed through via VT-d to a virtual machine.

Feature-wise, we can compare it to the Virtio-PMD solution, but again there
are small differences in device handling and backend driver support compared
to the Vmxnet3 device, so the performance comparison won't be apples to
apples.

Thanks,
Rashmin

-----Original Message-----
From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Hyunseok
Sent: Monday, July 07, 2014 3:22 PM
To: dev@dpdk.org
Subject: [dpdk-dev] Performance issue with vmxnet3 pmd

Hi,

I was testing l2fwd with the vmxnet3 PMD (included in DPDK).

The maximum forwarding rate I got from the vmxnet3 PMD with l2fwd is only
2.5 to 2.8 Gbps.

This is in contrast with the ixgbe PMD, with which I could easily achieve a
10 Gbps forwarding rate.

With the original vmxnet3 driver (non-PMD), I could also achieve close to
10 Gbps with multiple iperf streams. But I can never achieve that rate with
the vmxnet3 PMD...

So basically the vmxnet3 PMD doesn't seem that fast. Is this a known issue?

Thanks,
-Hyunseok
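On the burst-size point above: with a para-virtual device the notification
to the backend (and hence the potential VMEXIT) happens once per burst
rather than once per packet, so the exit cost is amortized across the burst.
A minimal sketch of such a forwarding loop, assuming the DPDK 1.x burst API
and l2fwd's default burst size of 32 (port and queue numbers are
placeholders):

  #include <rte_ethdev.h>
  #include <rte_mbuf.h>

  #define BURST_SIZE 32   /* exit/notification cost is paid per burst,
                             not per packet */

  static void
  forward_loop(uint8_t rx_port, uint8_t tx_port)
  {
          struct rte_mbuf *pkts[BURST_SIZE];

          for (;;) {
                  /* poll up to BURST_SIZE packets from the RX ring */
                  uint16_t nb_rx = rte_eth_rx_burst(rx_port, 0,
                                                    pkts, BURST_SIZE);
                  if (nb_rx == 0)
                          continue;

                  /* one TX burst -> one doorbell write to the backend,
                   * amortized over nb_rx packets */
                  uint16_t nb_tx = rte_eth_tx_burst(tx_port, 0,
                                                    pkts, nb_rx);

                  /* free any packets the TX ring did not accept */
                  while (nb_tx < nb_rx)
                          rte_pktmbuf_free(pkts[nb_tx++]);
          }
  }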
* Re: [dpdk-dev] Performance issue with vmxnet3 pmd
From: Hyunseok @ 2014-07-07 23:48 UTC
To: Patel, Rashmin N, dev

Thanks for your response.

I am actually more interested in comparing the stock (non-DPDK) vmxnet3
driver with the vmxnet3 PMD. When I forward packets with the stock vmxnet3
driver, I am able to achieve much higher throughput than with the vmxnet3
PMD. To make the comparison fair, I did not leverage GRO/GSO.

Do any of the overheads you mentioned play a role in this comparison? Here
I am comparing different drivers for the same vmxnet3 interface...

Regards,
Hyunseok

On Jul 7, 2014 7:03 PM, "Patel, Rashmin N" <rashmin.n.patel@intel.com> wrote:
> Hi Hyunseok,
>
> We should not compare Vmxnet3-PMD with ixgbe-PMD performance, as Vmxnet3
> is a para-virtual device; it is not similar to a device directly assigned
> to a VM either. There is a VMEXIT/VMENTRY occurrence at each burst-size
> boundary, and that overhead can't be eliminated unless the design of
> Vmxnet3 is updated in the future. In addition, the packets are touched in
> the ESXi hypervisor's vSwitch layer between the physical NIC and the
> virtual machine, which introduces extra overhead that you won't have when
> a Niantic is used natively or passed through via VT-d to a virtual
> machine.
>
> Feature-wise, we can compare it to the Virtio-PMD solution, but again
> there are small differences in device handling and backend driver support
> compared to the Vmxnet3 device, so the performance comparison won't be
> apples to apples.
>
> Thanks,
> Rashmin
>
> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Hyunseok
> Sent: Monday, July 07, 2014 3:22 PM
> To: dev@dpdk.org
> Subject: [dpdk-dev] Performance issue with vmxnet3 pmd
>
> Hi,
>
> I was testing l2fwd with the vmxnet3 PMD (included in DPDK).
>
> The maximum forwarding rate I got from the vmxnet3 PMD with l2fwd is only
> 2.5 to 2.8 Gbps.
>
> This is in contrast with the ixgbe PMD, with which I could easily achieve
> a 10 Gbps forwarding rate.
>
> With the original vmxnet3 driver (non-PMD), I could also achieve close to
> 10 Gbps with multiple iperf streams. But I can never achieve that rate
> with the vmxnet3 PMD...
>
> So basically the vmxnet3 PMD doesn't seem that fast. Is this a known
> issue?
>
> Thanks,
> -Hyunseok
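The GRO/GSO point matters because the stock driver can coalesce segments
while the PMD processes raw frames. On the kernel-driver side these
offloads would typically be disabled with ethtool along the following
lines; the interface name is a placeholder:

  ethtool -K eth1 gro off gso off tso off lro off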
* Re: [dpdk-dev] Performance issue with vmxnet3 pmd
From: Patel, Rashmin N @ 2014-07-08 0:07 UTC
To: hyunseok, dev; +Cc: Verplanke, Edwin

According to my experiments so far, the bottleneck lies in the backend in
the hypervisor for para-virtual devices, including Vmxnet3, and hence
different front-end drivers (the stock Vmxnet3 driver or Vmxnet3-PMD) would
perform equally well. I don't have solid numbers to show at the moment,
though; I will update you on that.

For now, the main advantage of having the DPDK version of the Vmxnet3
driver is scalability across multiple hypervisors and application
portability, keeping in mind that the backend can be optimized and higher
throughput can be achieved at a later stage.

Thanks,
Rashmin

From: hyunseok.chang@gmail.com [mailto:hyunseok.chang@gmail.com] On Behalf Of Hyunseok
Sent: Monday, July 07, 2014 4:49 PM
To: Patel, Rashmin N; dev@dpdk.org
Subject: RE: [dpdk-dev] Performance issue with vmxnet3 pmd

Thanks for your response.

I am actually more interested in comparing the stock (non-DPDK) vmxnet3
driver with the vmxnet3 PMD. When I forward packets with the stock vmxnet3
driver, I am able to achieve much higher throughput than with the vmxnet3
PMD. To make the comparison fair, I did not leverage GRO/GSO.

Do any of the overheads you mentioned play a role in this comparison? Here
I am comparing different drivers for the same vmxnet3 interface...

Regards,
Hyunseok

On Jul 7, 2014 7:03 PM, "Patel, Rashmin N" <rashmin.n.patel@intel.com> wrote:

Hi Hyunseok,

We should not compare Vmxnet3-PMD with ixgbe-PMD performance, as Vmxnet3 is
a para-virtual device; it is not similar to a device directly assigned to a
VM either. There is a VMEXIT/VMENTRY occurrence at each burst-size boundary,
and that overhead can't be eliminated unless the design of Vmxnet3 is
updated in the future. In addition, the packets are touched in the ESXi
hypervisor's vSwitch layer between the physical NIC and the virtual machine,
which introduces extra overhead that you won't have when a Niantic is used
natively or passed through via VT-d to a virtual machine.

Feature-wise, we can compare it to the Virtio-PMD solution, but again there
are small differences in device handling and backend driver support compared
to the Vmxnet3 device, so the performance comparison won't be apples to
apples.

Thanks,
Rashmin

-----Original Message-----
From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Hyunseok
Sent: Monday, July 07, 2014 3:22 PM
To: dev@dpdk.org
Subject: [dpdk-dev] Performance issue with vmxnet3 pmd

Hi,

I was testing l2fwd with the vmxnet3 PMD (included in DPDK).

The maximum forwarding rate I got from the vmxnet3 PMD with l2fwd is only
2.5 to 2.8 Gbps.

This is in contrast with the ixgbe PMD, with which I could easily achieve a
10 Gbps forwarding rate.

With the original vmxnet3 driver (non-PMD), I could also achieve close to
10 Gbps with multiple iperf streams. But I can never achieve that rate with
the vmxnet3 PMD...

So basically the vmxnet3 PMD doesn't seem that fast. Is this a known issue?

Thanks,
-Hyunseok
* Re: [dpdk-dev] Performance issue with vmxnet3 pmd
From: Thomas Monjalon @ 2014-07-08 7:05 UTC
To: hyunseok; +Cc: dev

Hi,

2014-07-07 18:22, Hyunseok:
> I was testing l2fwd with the vmxnet3 PMD (included in DPDK).

Have you tested vmxnet3-usermap (http://dpdk.org/doc/vmxnet3-usermap)?

> The maximum forwarding rate I got from the vmxnet3 PMD with l2fwd is only
> 2.5 to 2.8 Gbps.

It could be interesting to know your exact testing procedure, with numbers
for vmxnet3-usermap.

Thanks
--
Thomas
* Re: [dpdk-dev] Performance issue with vmxnet3 pmd
From: Hyunseok @ 2014-07-08 15:08 UTC
To: Thomas Monjalon; +Cc: dev

Thomas,

The last time I tried vmxnet3-usermap, a couple of weeks ago, it did not
compile against the latest kernel (3.11). Is that still the case? Or do you
have a newer version that is compatible with recent kernels?

Also, do you have any benchmark numbers for vmxnet3-usermap, by any chance?

Regards,
-Hyunseok

On Tue, Jul 8, 2014 at 3:05 AM, Thomas Monjalon <thomas.monjalon@6wind.com>
wrote:
> Hi,
>
> 2014-07-07 18:22, Hyunseok:
> > I was testing l2fwd with the vmxnet3 PMD (included in DPDK).
>
> Have you tested vmxnet3-usermap (http://dpdk.org/doc/vmxnet3-usermap)?
>
> > The maximum forwarding rate I got from the vmxnet3 PMD with l2fwd is
> > only 2.5 to 2.8 Gbps.
>
> It could be interesting to know your exact testing procedure, with
> numbers for vmxnet3-usermap.
>
> Thanks
> --
> Thomas
* Re: [dpdk-dev] Performance issue with vmxnet3 pmd
From: Alex Markuze @ 2014-08-13 8:13 UTC
To: hyunseok; +Cc: dev

Hi guys,

I will continue this thread. On Ubuntu 14.04 (kernel 3.13) with DPDK 1.7,
vmxnet3-usermap 1.2 doesn't compile.

From the git history, it seems to have last been updated 3 months ago. Is
this project going to be killed? And should I look for different
alternatives?

On Tue, Jul 8, 2014 at 6:08 PM, Hyunseok <hyunseok@ieee.org> wrote:
> Thomas,
>
> The last time I tried vmxnet3-usermap, a couple of weeks ago, it did not
> compile against the latest kernel (3.11). Is that still the case? Or do
> you have a newer version that is compatible with recent kernels?
>
> Also, do you have any benchmark numbers for vmxnet3-usermap, by any
> chance?
>
> Regards,
> -Hyunseok
>
> On Tue, Jul 8, 2014 at 3:05 AM, Thomas Monjalon <thomas.monjalon@6wind.com>
> wrote:
>
>> Hi,
>>
>> 2014-07-07 18:22, Hyunseok:
>> > I was testing l2fwd with the vmxnet3 PMD (included in DPDK).
>>
>> Have you tested vmxnet3-usermap (http://dpdk.org/doc/vmxnet3-usermap)?
>>
>> > The maximum forwarding rate I got from the vmxnet3 PMD with l2fwd is
>> > only 2.5 to 2.8 Gbps.
>>
>> It could be interesting to know your exact testing procedure, with
>> numbers for vmxnet3-usermap.
>>
>> Thanks
>> --
>> Thomas
Thread overview: 7 messages
2014-07-07 22:22 [dpdk-dev] Performance issue with vmxnet3 pmd Hyunseok
2014-07-07 23:03 ` Patel, Rashmin N
2014-07-07 23:48   ` Hyunseok
2014-07-08  0:07     ` Patel, Rashmin N
2014-07-08  7:05 ` Thomas Monjalon
2014-07-08 15:08   ` Hyunseok
2014-08-13  8:13     ` Alex Markuze