DPDK patches and discussions
* [dpdk-dev] KNI performance numbers...
@ 2015-06-24  5:58 Vithal S Mohare
  2015-06-25 11:20 ` Maciej Grochowski
  0 siblings, 1 reply; 2+ messages in thread
From: Vithal S Mohare @ 2015-06-24  5:58 UTC (permalink / raw)
  To: dev

Hi,

I am running a DPDK KNI application on a Linux (3.18 kernel) VM (ESXi 5.5), directly connected to another Linux box, and measuring throughput with the iperf tool.  Link speed: 1 Gbps.  The maximum throughput I get is about 50% of line rate (~500 Mbps) with 1470-byte packets.  With 512-byte packets, throughput drops to 282 Mbps.
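For reference, a minimal sketch of the measurement (the peer address is a
placeholder, and UDP is assumed from the fixed packet sizes):

  # on the directly connected Linux box (assumed address 192.168.1.2)
  iperf -s -u

  # on the KNI side: offer 1 Gbps of 1470-byte UDP datagrams for 30 s
  iperf -c 192.168.1.2 -u -b 1000M -l 1470 -t 30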

I also tried the KNI loopback modes (with traffic from an Ixia), but saw no change in throughput.
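(For anyone reproducing this: the loopback modes are selected when loading
the rte_kni kernel module. A sketch, using the lo_mode module parameter
from the KNI docs:)

  # loop mbufs back in the FIFO, bypassing the kernel network stack
  insmod rte_kni.ko lo_mode=lo_mode_fifo

  # or loop back via skb allocation, exercising more of the kernel path
  insmod rte_kni.ko lo_mode=lo_mode_fifo_skb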

KNI is running in single-thread mode: one lcore for rx, one for tx, and another for the KNI kernel thread.
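A sketch of that core layout with the KNI sample app (the core numbers here
are just an example; the --config tuple is (port, lcore_rx, lcore_tx,
lcore_kthread)):

  # one kernel thread shared by all KNI devices
  insmod rte_kni.ko kthread_mode=single

  # port 0: rx on lcore 1, tx on lcore 2, KNI kthread on lcore 3
  ./build/kni -c 0x0f -n 4 -- -P -p 0x1 --config="(0,1,2,3)"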

Is this result expected?  Has anybody got better numbers?  I'd appreciate any input and relevant info.

Thanks,
-Vithal


* Re: [dpdk-dev] KNI performance numbers...
  2015-06-24  5:58 [dpdk-dev] KNI performance numbers Vithal S Mohare
@ 2015-06-25 11:20 ` Maciej Grochowski
  0 siblings, 0 replies; 2+ messages in thread
From: Maciej Grochowski @ 2015-06-25 11:20 UTC (permalink / raw)
  To: Vithal S Mohare; +Cc: dev

I met a similar issue with a KNI-connected VM, but in my case I ran 2 VM
guests attached via KNI and measured network performance between them:

session:

### I just started the demo with kni

./build/kni -c 0xf0 -n 4 -- -P -p 0x3 --config="(0,4,6,8),(1,5,7,9)"
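### --config tuples are (port, lcore_rx, lcore_tx, lcore_kthread):
### port 0 -> rx lcore 4, tx lcore 6, kthread lcore 8; port 1 -> 5, 7, 9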

### starting...

### set KNI vhost on vEthX to connect (as in the example)

echo 1 > /sys/class/net/vEth0_0/sock_en
fd=`cat /sys/class/net/vEth0_0/sock_fd`
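### sock_en enables the KNI vhost backend; sock_fd then exposes the
### socket fd that is handed to kvm below as the tap fd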

## start first guest VM
kvm -nographic -name vm1 -cpu host -m 2048 -smp 1 \
  -hda .../debian_squeeze_amd64.qcow2 \
  -netdev tap,fd=$fd,id=hostnet1,vhost=on \
  -device virtio-net-pci,netdev=hostnet1,id=net1,bus=pci.0,addr=0x4

## start second guest VM
echo 1 > /sys/class/net/vEth1_0/sock_en
fd=`cat /sys/class/net/vEth1_0/sock_fd`

kvm -nographic -name vm2 -cpu host -m 2048 -smp 1 \
  -hda .../debian_squeeze2_amd64.qcow2 \
  -netdev tap,fd=$fd,id=hostnet1,vhost=on \
  -device virtio-net-pci,netdev=hostnet1,id=net1,bus=pci.0,addr=0x4

### END: 2 KVM instances with virtual guests set up
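### guest-side addressing, a sketch (eth0 and 10.0.0.201 are assumed;
### 10.0.0.200 matches the netperf target below)

# in the first guest (netserver side)
ip addr add 10.0.0.200/24 dev eth0 && ip link set eth0 up

# in the second guest (netperf client side)
ip addr add 10.0.0.201/24 dev eth0 && ip link set eth0 up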


### on the first VM guest, start the netperf server
netserver -p 22113

### measure throughput from the second VM guest to the first (server) using netperf

root@debian-amd64:~# netperf -H 10.0.0.200 -p 22113 -t TCP_STREAM
MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to
10.0.0.200 () port 0 AF_INET
Recv   Send    Send
Socket Socket  Message  Elapsed
Size   Size    Size     Time     Throughput
bytes  bytes   bytes    secs.    10^6bits/sec
 87380  16384  16384    10.01     219.86

So I got about 220 Mbit/s between two VMs using KNI, but it was only an
experiment (I didn't analyze it deeply).

On Wed, Jun 24, 2015 at 7:58 AM, Vithal S Mohare <vmohare@arubanetworks.com>
wrote:

> Hi,
>
> I am running a DPDK KNI application on a Linux (3.18 kernel) VM (ESXi 5.5),
> directly connected to another Linux box, and measuring throughput with the
> iperf tool.  Link speed: 1 Gbps.  The maximum throughput I get is about 50%
> of line rate (~500 Mbps) with 1470-byte packets.  With 512-byte packets,
> throughput drops to 282 Mbps.
>
> I also tried the KNI loopback modes (with traffic from an Ixia), but saw no
> change in throughput.
>
> KNI is running in single-thread mode: one lcore for rx, one for tx, and
> another for the KNI kernel thread.
>
> Is this result expected?  Has anybody got better numbers?  I'd appreciate
> any input and relevant info.
>
> Thanks,
> -Vithal
>


end of thread

Thread overview: 2 messages
2015-06-24  5:58 [dpdk-dev] KNI performance numbers Vithal S Mohare
2015-06-25 11:20 ` Maciej Grochowski
