From: James Yu
To: Stephen Hemminger
Cc: dev@dpdk.org
Date: Wed, 27 Nov 2013 12:06:57 -0800
In-Reply-To: <20131126222606.3b99a80b@nehalam.linuxnetplumber.net>
Subject: Re: [dpdk-dev] Increasing number of txd and rxd from 256 to 1024 for virtio-net-pmd-1.1

Can you share your virtio driver with me? Do you mean creating multiple
queues, each with 256 txd/rxd? The packets could be stored in the free
slots of those queues, but how would the virtio PMD code feed those slots
down to the hardware so that the packets get delivered?

The other question is that I was using vhost-net on the KVM host. That is
supposed to be transparent to DPDK and the virtio PMD code, but could it
be causing the problem with packet delivery?
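For reference, this is roughly where the rxd/txd counts enter an
l2fwd-style application (a minimal sketch against the rte_ethdev API, not
the actual example code; NB_RXD/NB_TXD, setup_port and the conf/mempool
arguments are placeholders):

    #include <rte_ethdev.h>
    #include <rte_lcore.h>
    #include <rte_mempool.h>

    #define NB_RXD 1024   /* the counts I would like to use ... */
    #define NB_TXD 1024   /* ... but this virtio PMD only accepts its fixed ring size (256) */

    /* Sketch: single-queue setup as an l2fwd-style application does it.
     * The PMD validates nb_rx_desc/nb_tx_desc against what the device
     * supports, which is why asking for 1024 fails with this virtio PMD. */
    static int
    setup_port(uint8_t port_id, struct rte_mempool *pool,
               const struct rte_eth_rxconf *rx_conf,
               const struct rte_eth_txconf *tx_conf)
    {
            int ret;

            ret = rte_eth_rx_queue_setup(port_id, 0, NB_RXD,
                                         rte_socket_id(), rx_conf, pool);
            if (ret < 0)
                    return ret;

            return rte_eth_tx_queue_setup(port_id, 0, NB_TXD,
                                          rte_socket_id(), tx_conf);
    }

With the e1000 PMD the same call accepts 1024 descriptors, which is the
configuration used in the second set of numbers below.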
Thanks

On Tue, Nov 26, 2013 at 10:26 PM, Stephen Hemminger
<stephen@networkplumber.org> wrote:

> On Tue, 26 Nov 2013 21:15:02 -0800
> James Yu wrote:
>
> > Running one-directional traffic from a Spirent traffic generator to
> > l2fwd running inside a guest OS on a RHEL 6.2 KVM host, I ran into a
> > performance issue and need to increase the number of rxd and txd from
> > 256 to 1024. There were not enough free slots for the packets to be
> > transmitted in this routine:
> >
> > virtio_send_packet() {
> >     ....
> >     if (tq->freeslots < nseg + 1) {
> >         return -1;
> >     }
> >     ....
> > }
> >
> > How do I solve the performance issue? Is it by one of the following?
> > 1. Increase the number of rxd and txd from 256 to 1024.
> >    This should prevent packets from being dropped for lack of free
> >    slots in the ring, but l2fwd fails to run and reports that the
> >    number must be equal to 256.
> > 2. Increase MAX_PKT_BURST.
> >    This is not ideal, since it increases the delay while improving
> >    the throughput.
> > 3. Some other mechanism that you know can improve it?
> >    Is there any other approach to have enough free slots to store the
> >    packets before passing them down to PCI?
> >
> > Thanks
> >
> > James
> >
> > These are the performance numbers I measured on the l2fwd printout
> > for the receiving part. I added code inside l2fwd to do the tx part.
> >
> > ====================================================================
> > vhost-net enabled on the KVM host, # of cache buffers 4096, Ubuntu
> > 12.04.3 LTS (3.2.0-53-generic); kvm 1.2.0, libvirtd 0.9.8
> > 64 bytes/pkt from Spirent @ 223k pps, each test run for 10 seconds
> > ====================================================================
> > DPDK 1.3 + virtio + 256 txd/rxd + nice -19 priority (l2fwd, guest kvm
> > process)
> > bash command: nice -n -19
> > /root/dpdk/dpdk-1.3.1r2/examples/l2fwd/build/l2fwd -c 3 -n 1
> > -b 000:00:03.0 -b 000:00:07.0 -b 000:00:0a.0 -b 000:00:09.0
> > -d /root/dpdk/virtio-net-pmd-1.1/librte_pmd_virtio.so -- -q 1 -p 1
> > ====================================================================
> > Spirent -> l2fwd (receiving, 10G) (RX on KVM guest)
> > MAX_PKT_BURST    pps over 10 seconds (<1% loss)
> > -----------------------------------------------
> > 32               74k pps
> > 64               80k pps
> > 128              126k pps
> > 256              133k pps
> >
> > l2fwd -> Spirent (10G port) (transmitting) (one-directional, one-port
> > (port 0) setup)
> > MAX_PKT_BURST    <1% packet loss
> > 32               88k pps
> >
> > **********************************
> > The same test run on e1000 ports
> > ====================================================================
> > DPDK 1.3 + e1000 + 1024 txd/rxd + nice -19 priority (l2fwd, guest kvm
> > process)
> > bash command: nice -n -19
> > /root/dpdk/dpdk-1.3.1r2/examples/l2fwd/build/l2fwd -c 3 -n 1
> > -b 000:00:03.0 -b 000:00:07.0 -b 000:00:0a.0 -b 000:00:09.0 -- -q 1 -p 1
> > ====================================================================
> > Spirent -> l2fwd (RECEIVING, 10G)
> > MAX_PKT_BURST    <=1% packet loss
> > 32               110k pps
> >
> > l2fwd -> Spirent (10G port) (TRANSMITTING) (one-directional, one-port
> > (port 0) setup)
> > MAX_PKT_BURST    pkts transmitted by l2fwd
> > 32               171k pps (0% dropped)
> > 240              203k pps (6% dropped, 130k pps received on eth6,
> >                  assumed on Spirent) **
> > **: not enough free slots in the tx ring
> > ==> This indicates the effect of the small txd/rxd count (256): when
> > more traffic is generated, packets cannot be sent due to the lack of
> > free slots in the tx ring. I guess this is the symptom that occurs
> > with virtio_net.
>
> The number of slots with virtio is a parameter negotiated with the host.
> So unless the host (KVM) gives the device more slots, it won't work.
> I have a better virtio driver, and one of the features being added is
> multiqueue and merged TX buffer support, which would give a bigger queue.
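Stephen's point about negotiation is visible in the legacy virtio-PCI
setup path: the driver reads the ring size from the device and must size
its ring to exactly that value, so the guest cannot simply ask for 1024.
A minimal sketch (not the actual virtio-net-pmd-1.1 code; ioaddr stands
for the device's legacy I/O BAR, and the register offsets come from
<linux/virtio_pci.h>):

    #include <stdint.h>
    #include <sys/io.h>            /* inw()/outw(); assumes x86 and ioperm()/iopl() done elsewhere */
    #include <linux/virtio_pci.h>  /* VIRTIO_PCI_QUEUE_SEL, VIRTIO_PCI_QUEUE_NUM (legacy layout) */

    /* Sketch: ask the host how big virtqueue 'queue_index' is.  The value
     * is chosen by the host side (QEMU/vhost-net) -- 256 for virtio-net in
     * this setup -- and the guest driver has to use exactly that many
     * descriptors. */
    static uint16_t
    virtio_queue_size(uint16_t ioaddr, uint16_t queue_index)
    {
            outw(queue_index, ioaddr + VIRTIO_PCI_QUEUE_SEL);
            return inw(ioaddr + VIRTIO_PCI_QUEUE_NUM);
    }

So a larger ring has to come from the host/QEMU side, or from features
such as the multiqueue and merged TX buffer support Stephen mentions, not
from the PMD alone.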