Date: Tue, 26 Nov 2013 22:26:06 -0800
From: Stephen Hemminger
To: James Yu
Cc: dev@dpdk.org
Subject: Re: [dpdk-dev] Increasing number of txd and rxd from 256 to 1024 for virtio-net-pmd-1.1
Message-ID: <20131126222606.3b99a80b@nehalam.linuxnetplumber.net>
List-Id: patches and discussions about DPDK

On Tue, 26 Nov 2013 21:15:02 -0800 James Yu wrote:

> Running one-directional traffic from a Spirent traffic generator to l2fwd
> running inside a guest OS on a RHEL 6.2 KVM host, I hit a performance
> problem and need to increase the number of rxd and txd from 256 to 1024.
> There were not enough free slots for packets to be transmitted in this
> routine:
>
> virtio_send_packet() {
>     ....
>     if (tq->freeslots < nseg + 1) {
>         return -1;
>     }
>     ....
> }
>
> How do I solve the performance issue? By one of the following:
>
> 1. Increase the number of rxd and txd from 256 to 1024.
>    This should prevent packets from being dropped for lack of free slots
>    in the ring. But l2fwd then fails to run and reports that the number
>    must be equal to 256.
> 2. Increase MAX_PKT_BURST.
>    Not ideal, since it increases delay while improving throughput.
> 3. Some other mechanism that you know can improve it?
>    Is there any other way to have enough free slots to store the packets
>    before passing them down to PCI?
>
> Thanks
>
> James
>
> These are the performance numbers I measured from the l2fwd printout for
> the receiving part.
> I added code inside l2fwd to do the tx part.
> ====================================================================
> vhost-net enabled on the KVM host, cache buffer count 4096,
> Ubuntu 12.04.3 LTS (3.2.0-53-generic); kvm 1.2.0, libvirtd 0.9.8
> 64 bytes/pkt from Spirent @ 223k pps, each test run for 10 seconds
> ====================================================================
> DPDK 1.3 + virtio + 256 txd/rxd + nice -19 priority (l2fwd, guest
> KVM process)
> bash command: nice -n -19 \
>     /root/dpdk/dpdk-1.3.1r2/examples/l2fwd/build/l2fwd -c 3 -n 1 \
>     -b 000:00:03.0 -b 000:00:07.0 -b 000:00:0a.0 -b 000:00:09.0 \
>     -d /root/dpdk/virtio-net-pmd-1.1/librte_pmd_virtio.so -- -q 1 -p 1
> ====================================================================
> Spirent -> l2fwd (receiving, 10G) (RX on KVM guest)
> MAX_PKT_BURST    pps over 10 seconds (<1% loss)
> -----------------------------------------------
>  32               74k pps
>  64               80k pps
> 128              126k pps
> 256              133k pps
>
> l2fwd -> Spirent (10G port) (transmitting; one-directional,
> one-port (port 0) setup)
> MAX_PKT_BURST    pps (<1% packet loss)
> -----------------------------------------------
>  32               88k pps
>
> **********************************
> The same test run on e1000 ports:
> ====================================================================
> DPDK 1.3 + e1000 + 1024 txd/rxd + nice -19 priority (l2fwd, guest
> KVM process)
> bash command: nice -n -19 \
>     /root/dpdk/dpdk-1.3.1r2/examples/l2fwd/build/l2fwd -c 3 -n 1 \
>     -b 000:00:03.0 -b 000:00:07.0 -b 000:00:0a.0 -b 000:00:09.0 \
>     -- -q 1 -p 1
> ====================================================================
> Spirent -> l2fwd (RECEIVING, 10G)
> MAX_PKT_BURST    pps (<=1% packet loss)
> -----------------------------------------------
>  32              110k pps
>
> l2fwd -> Spirent (10G port) (TRANSMITTING; one-directional,
> one-port (port 0) setup)
> MAX_PKT_BURST    pkts transmitted from l2fwd
> -----------------------------------------------
>  32              171k pps (0% dropped)
> 240              203k pps (6% dropped; 130k pps received on eth6,
>                  assumed to be on the Spirent side) **
>
> **: not enough free slots in the tx ring
>
> ==> This indicates the effect of the small txd/rxd count (256): when
> more traffic is generated, packets cannot be sent for lack of free
> slots in the tx ring. I guess this is the symptom occurring in
> virtio_net.

The number of slots with virtio is a parameter negotiated with the host,
so unless the host (KVM) gives the device more slots, it won't work.
I have a better virtio driver, and one of the features being added is
multiqueue and merged TX buffer support, which would give a bigger queue.
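The free-slot exhaustion James quotes can be illustrated with a small, self-contained sketch. The names (`tx_queue`, `virtio_try_send`, `burst_sendable`) are hypothetical stand-ins, not the PMD's real structures; the point is the arithmetic: each packet needs `nseg` data descriptors plus one for the virtio header, so a 256-entry ring can absorb at most 128 single-segment packets if no completions are reclaimed mid-burst.

```c
#include <stdint.h>

/* Hypothetical mirror of the accounting behind the quoted
 * virtio_send_packet() check; illustrative only. */
struct tx_queue {
    uint16_t freeslots;   /* descriptors currently available */
};

/* Same condition as the quoted snippet: nseg data descriptors
 * plus one header descriptor. Returns 0 on success, -1 when the
 * ring is out of free slots. */
static int virtio_try_send(struct tx_queue *tq, uint16_t nseg)
{
    if (tq->freeslots < (uint16_t)(nseg + 1))
        return -1;
    tq->freeslots -= (uint16_t)(nseg + 1);
    return 0;
}

/* How many packets of a burst can be queued before the ring runs
 * dry, assuming no descriptors are reclaimed during the burst. */
int burst_sendable(uint16_t ring_size, uint16_t nseg, int burst)
{
    struct tx_queue tq = { ring_size };
    int sent = 0;
    for (int i = 0; i < burst; i++)
        if (virtio_try_send(&tq, nseg) == 0)
            sent++;
    return sent;
}
```

With a 256-slot ring, `burst_sendable(256, 1, 240)` stalls after 128 packets, consistent with the drops marked `**` above. It also shows the cost of option 2: batching more packets per burst only delays the stall, and at 223k pps a burst of 256 packets represents roughly 256/223000 ≈ 1.1 ms of added queueing delay.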
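Stephen's point about negotiation can be sketched too: under legacy virtio-pci the queue size is read-only for the guest (the `VIRTIO_PCI_QUEUE_NUM` register), so a driver can only validate the requested descriptor count against what the host advertises. The helper below is a hypothetical illustration of that check, not the virtio PMD's actual code.

```c
#include <stdint.h>

/* Hypothetical validation step. host_queue_num stands in for the
 * value read from VIRTIO_PCI_QUEUE_NUM; the guest cannot change
 * it, so a request for a different txd/rxd count can only be
 * refused -- which is why l2fwd insists on 256 when KVM
 * advertises a 256-entry ring. */
int virtio_check_ring_size(uint16_t host_queue_num, uint16_t requested)
{
    if (requested != host_queue_num)
        return -1;   /* e.g. 1024 requested vs. 256 advertised */
    return 0;
}
```

In other words, rebuilding the guest with 1024 descriptors cannot help on its own; the ring can only grow if the host-side device model advertises a larger queue.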