DPDK patches and discussions
From: Stephen Hemminger <stephen@networkplumber.org>
To: James Yu <ypyu2011@gmail.com>
Cc: dev@dpdk.org
Subject: Re: [dpdk-dev] Increasing number of txd and rxd from 256 to 1024 for virtio-net-pmd-1.1
Date: Tue, 26 Nov 2013 22:26:06 -0800
Message-ID: <20131126222606.3b99a80b@nehalam.linuxnetplumber.net>
In-Reply-To: <CAFMB=kByTv2MKmyxS7AsJ-7jA30jxaJiDzFXcnd9MH34ag3urA@mail.gmail.com>

On Tue, 26 Nov 2013 21:15:02 -0800
James Yu <ypyu2011@gmail.com> wrote:

> Running one-directional traffic from a Spirent traffic generator to l2fwd
> running inside a guest OS on a RHEL 6.2 KVM host, I encountered a performance
> issue and need to increase the number of rxd and txd from 256 to 1024.
> There were not enough free slots for packets to be transmitted in this routine:
>
>       virtio_send_packet() {
>       ....
>         /* Not enough free TX descriptors to hold every segment plus one:
>          * give up and report the failure to the caller. */
>         if (tq->freeslots < nseg + 1) {
>                 return -1;
>         }
>       ....
>       }
> 
> How do I solve the performance issue? One of the following (see the sketch
> after this list for what option 1 amounts to):
> 1. Increase the number of rxd and txd from 256 to 1024.
>         This should prevent packets from failing to be stored in the ring
> due to a lack of free slots. But l2fwd then fails to run and indicates the
> number must be equal to 256.
> 2. Increase MAX_PKT_BURST.
>         This is not ideal, since it improves the throughput at the cost of
> increased delay.
> 3. Another mechanism that you know can improve it?
>         Is there any other approach that keeps enough free slots to store
> the packets before passing them down to PCI?
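>
> For reference, here is what option 1 amounts to in code. A minimal sketch,
> assuming DPDK's rte_eth_dev_configure() / rte_eth_rx_queue_setup() /
> rte_eth_tx_queue_setup() calls; the setup_port() helper and the zeroed
> port_conf/rx_conf/tx_conf defaults are hypothetical placeholders:
>
>       #include <rte_ethdev.h>
>       #include <rte_mempool.h>
>
>       #define NB_RXD 1024   /* desired RX descriptors; virtio-net-pmd-1.1 rejects != 256 */
>       #define NB_TXD 1024   /* desired TX descriptors */
>
>       static struct rte_eth_conf port_conf;   /* zeroed: default port settings */
>       static struct rte_eth_rxconf rx_conf;   /* zeroed: default RX thresholds */
>       static struct rte_eth_txconf tx_conf;   /* zeroed: default TX thresholds */
>
>       static int setup_port(uint8_t port_id, struct rte_mempool *mb_pool)
>       {
>               int ret;
>
>               /* One RX and one TX queue, matching l2fwd's -q 1 -p 1 usage. */
>               ret = rte_eth_dev_configure(port_id, 1, 1, &port_conf);
>               if (ret < 0)
>                       return ret;
>
>               /* The PMD validates the descriptor count here; this is where
>                * virtio refuses anything other than the 256 it advertises. */
>               ret = rte_eth_rx_queue_setup(port_id, 0, NB_RXD,
>                                            rte_eth_dev_socket_id(port_id),
>                                            &rx_conf, mb_pool);
>               if (ret < 0)
>                       return ret;
>
>               return rte_eth_tx_queue_setup(port_id, 0, NB_TXD,
>                                             rte_eth_dev_socket_id(port_id),
>                                             &tx_conf);
>       }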
> 
> 
> Thanks
> 
> James
> 
> 
> These are the performance numbers I measured from the l2fwd printout for the
> receiving part. I added code inside l2fwd to handle the TX part.
> ====================================================================================
> vhost-net enabled on the KVM host; # of cache buffers: 4096; Ubuntu 12.04.3
> LTS (3.2.0-53-generic); kvm 1.2.0; libvirtd 0.9.8
> 64 Bytes/pkt from Spirent @ 223k pps, running test for 10 seconds.
> ====================================================================================
> DPDK 1.3 + virtio + 256 txd/rxd + nice -19 priority (l2fwd, guest kvm
> process)
> bash command: nice -n -19
> /root/dpdk/dpdk-1.3.1r2/examples/l2fwd/build/l2fwd -c 3 -n 1 -b 000:00:03.0
> -b 000:00:07.0 -b 000:00:0a.0 -b 000:00:09.0 -d
> /root/dpdk/virtio-net-pmd-1.1/librte_pmd_virtio.so -- -q 1 -p 1
> ====================================================================================
> Spirent -> l2fwd (receiving, 10G) (RX on KVM guest)
>     MAX_PKT_BURST     pps over 10 s (<1% loss)
>     -------------     ------------------------
>     32                 74k pps
>     64                 80k pps
>     128               126k pps
>     256               133k pps
> 
> l2fwd -> Spirent (10G port) (transmitting) (using one-directional, one-port
> (port 0) setup)
>     MAX_PKT_BURST     pps (<1% packet loss)
>     32                 88k pps
> 
> 
> **********************************
> The same test run on e1000 ports
> 
> ====================================================================================
> DPDK 1.3 + e1000 + 1024 txd/rxd + nice -19 priority (l2fwd, guest kvm
> process)
> bash command: nice -n -19
> /root/dpdk/dpdk-1.3.1r2/examples/l2fwd/build/l2fwd -c 3 -n 1 -b 000:00:03.0
> -b 000:00:07.0 -b 000:00:0a.0 -b 000:00:09.0 -- -q 1 -p 1
> ====================================================================================
> Spirent -> l2fwd (RECEIVING, 10G)
>     MAX_PKT_BURST     pps (<=1% packet loss)
>     32                110k pps
> 
> l2fwd -> Spirent (10G port) (TRANSMITTING) (using one-directional, one-port
> (port 0) setup)
>     MAX_PKT_BURST     pkts transmitted by l2fwd
>     32                171k pps (0% dropped)
>     240               203k pps (6% dropped; 130k pps received on eth6,
> assumed on Spirent) **
> **: not enough free slots in the TX ring
> ==> This indicates the effect of a small txd/rxd count (256): when more
> traffic is generated, packets cannot be sent due to a lack of free slots
> in the TX ring. I guess this is the same symptom that occurs with virtio_net.
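>
> As a stopgap on the TX side, instead of dropping a packet as soon as
> virtio_send_packet() returns -1, one could retry briefly so the host gets a
> chance to drain the used ring. A minimal sketch; the queue type, the send
> signature (returning 0 on success, as the -1 error path above suggests), and
> the TX_RETRIES bound are my assumptions, not the PMD's actual API:
>
>       #include <rte_cycles.h>   /* rte_delay_us() */
>       #include <rte_mbuf.h>
>
>       #define TX_RETRIES 4      /* hypothetical bound; trades latency for fewer drops */
>
>       /* Returns 0 on success, -1 if the ring stayed full after all retries. */
>       static int send_with_retry(struct virtio_net_tq *tq, struct rte_mbuf *m)
>       {
>               int i;
>
>               for (i = 0; i < TX_RETRIES; i++) {
>                       if (virtio_send_packet(tq, m) == 0)
>                               return 0;
>                       /* Ring full: give the host a moment to consume buffers. */
>                       rte_delay_us(1);
>               }
>               return -1;
>       }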

The number of slots with virtio is a parameter negotiated with the host,
so unless the host (KVM) gives the device more slots, it won't work.
I have a better virtio driver, and one of the features being added is multiqueue
and merged TX buffer support, which would give a bigger queue.
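
For anyone who wants to see what the host is actually offering: with legacy
virtio-pci the guest selects a queue and reads the ring size the host
advertises; it cannot request a larger one. A minimal standalone probe,
assuming the legacy I/O register layout (VIRTIO_PCI_QUEUE_SEL at offset 14,
VIRTIO_PCI_QUEUE_NUM at offset 12); the 0xc000 I/O base is a placeholder for
the BAR you would read from lspci -v:

    #include <stdint.h>
    #include <stdio.h>
    #include <sys/io.h>     /* inw()/outw()/iopl(); x86 Linux only */

    #define VIRTIO_PCI_QUEUE_NUM 12   /* 16-bit read: size of the selected queue */
    #define VIRTIO_PCI_QUEUE_SEL 14   /* 16-bit write: select a virtqueue */

    /* Read the host-advertised ring size for virtqueue queue_id. */
    static uint16_t virtio_queue_size(uint16_t io_base, uint16_t queue_id)
    {
            outw(queue_id, io_base + VIRTIO_PCI_QUEUE_SEL);
            return inw(io_base + VIRTIO_PCI_QUEUE_NUM);
    }

    int main(void)
    {
            uint16_t io_base = 0xc000;   /* placeholder: use the device's real I/O BAR */

            if (iopl(3) < 0) {           /* need I/O privilege to touch ports */
                    perror("iopl");
                    return 1;
            }
            /* QEMU's virtio-net advertises 256 here; the guest cannot raise it. */
            printf("RX queue (0) size: %u\n", virtio_queue_size(io_base, 0));
            printf("TX queue (1) size: %u\n", virtio_queue_size(io_base, 1));
            return 0;
    }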


Thread overview: 3+ messages
2013-11-27  5:15 James Yu
2013-11-27  6:26 ` Stephen Hemminger [this message]
2013-11-27 20:06   ` James Yu
