DPDK usage discussions
* [dpdk-users] l2fwd performance in VM with SR-IOV
From: Furong @ 2015-12-19  6:23 UTC
  To: users

Hello, everybody.
     I have measured the performance of examples/l2fwd in a VM with SR-IOV.
     My experiment server: CPU: 32-core Intel Xeon E5-4603 v2 @ 2.20GHz,
     NIC: 10G Intel 82599ES, OS: Ubuntu 14.04.3.
     I started the VM with this command:
         # qemu-system-x86_64 -enable-kvm -cpu host -m 4G -smp 4 -net none \
               -device vfio-pci,host=<vf1-pcie-addr> \
               -device vfio-pci,host=<vf2-pcie-addr> \
               -hda vm.img -vnc :1
     In the VM:
         I bound vf1 & vf2 to igb_uio, then started examples/l2fwd (roughly
         as sketched below).
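
     For reference, the bind-and-run steps were roughly like this; a minimal
     sketch assuming hugepages are already set up, with DPDK paths and the
     PCI addresses as placeholders:

         # modprobe uio
         # insmod $RTE_SDK/$RTE_TARGET/kmod/igb_uio.ko
         # $RTE_SDK/tools/dpdk_nic_bind.py --bind=igb_uio <vf1-pcie-addr> <vf2-pcie-addr>
         # ./examples/l2fwd/build/l2fwd -c 0x3 -n 4 -- -p 0x3    # 2 lcores, portmask for both VFs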
     Then I started pktgen on another server (same hardware & OS as this
     server) to send small 64-byte packets.
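
     On the generator side I used something like the following; a minimal
     sketch assuming Pktgen-DPDK, with the binary path and the core/port
     mapping as placeholders for my actual setup:

         # ./app/app/x86_64-native-linuxapp-gcc/pktgen -c 0x1f -n 4 -- -P -m "[1:2].0,[3:4].1"
         Pktgen> set all size 64
         Pktgen> start all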
     The results were:
         1. When I sent packets with pktgen from only one port, the
            throughput (measured by pktgen rx/tx rates) was 7.0 Gbps.
         2. When I sent packets from both ports, the throughput was
            7.2 Gbps (3.6 Gbps per port).

     But I have also measured l2fwd performance in the host with SR-IOV
     (binding vf1 & vf2 to vfio-pci and starting l2fwd in the host, roughly
     as sketched below).
     The result was:
         When I sent packets from both ports, the throughput was 14.4 Gbps
         (7.2 Gbps per port).

     My question: when I run l2fwd in a VM, can I achieve performance
     similar to the host? Or are there methods to tune the performance?

     Thanks a lot!
     Furong
