Date: Tue, 26 Nov 2013 21:15:02 -0800
From: James Yu
To: dev@dpdk.org, James Yu
Subject: [dpdk-dev] Increasing number of txd and rxd from 256 to 1024 for virtio-net-pmd-1.1

Running one-directional traffic from a Spirent traffic generator to l2fwd running inside a guest OS on a RHEL 6.2 KVM host, I ran into a performance problem and need to increase the number of rxd and txd from 256 to 1024. There were not enough free slots for the packets to be transmitted in this routine:

    virtio_send_packet() {
        ....
        if (tq->freeslots < nseg + 1) {
            return -1;
        }
        ....
    }

How do I solve the performance problem? The options I can see are:

1. Increase the number of rxd and txd from 256 to 1024.
   This should prevent packets from being dropped because the ring has no free
   slots, but l2fwd then fails to run and reports that the number must be
   equal to 256.

2. Increase MAX_PKT_BURST.
   This is not ideal, since it increases latency while improving throughput.

3. Some other mechanism that you know of that can improve it?

Is there any other approach that leaves enough free slots to store the packets before they are passed down to PCI?

Thanks,
James
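For reference, this is roughly what option 1 amounts to: a minimal sketch of bumping the ring sizes passed to the ethdev queue setup calls. The constant and helper names below are mine (the actual DPDK 1.3 l2fwd sources differ in detail), and the socket id is hard-coded; with virtio-net-pmd-1.1 this setup fails unless both counts are exactly 256.

    #include <rte_ethdev.h>
    #include <rte_mempool.h>

    #define NB_RXD 1024   /* was 256 */
    #define NB_TXD 1024   /* was 256 */

    /* Illustrative sketch: configure one rx and one tx queue on a port with
     * the larger rings.  virtio-net-pmd-1.1 rejects any value other than 256
     * here, which is exactly the limitation described above. */
    static int
    setup_port_queues(uint8_t portid, struct rte_mempool *mbuf_pool,
                      const struct rte_eth_rxconf *rx_conf,
                      const struct rte_eth_txconf *tx_conf)
    {
            int ret;

            ret = rte_eth_rx_queue_setup(portid, 0, NB_RXD, 0 /* socket */,
                                         rx_conf, mbuf_pool);
            if (ret < 0)
                    return ret;

            return rte_eth_tx_queue_setup(portid, 0, NB_TXD, 0 /* socket */,
                                          tx_conf);
    }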
These are the performance numbers I measured from the l2fwd printout for the receiving side; I added code inside l2fwd to exercise the tx side as well.

====================================================================================
vhost-net enabled on the KVM host; number of cache buffers: 4096; Ubuntu 12.04.3 LTS
(3.2.0-53-generic); kvm 1.2.0; libvirtd 0.9.8
64 bytes/pkt from Spirent @ 223k pps, each test run for 10 seconds
====================================================================================
DPDK 1.3 + virtio + 256 txd/rxd + nice -19 priority (l2fwd, guest KVM process)
bash command: nice -n -19 /root/dpdk/dpdk-1.3.1r2/examples/l2fwd/build/l2fwd -c 3 -n 1 -b 000:00:03.0 -b 000:00:07.0 -b 000:00:0a.0 -b 000:00:09.0 -d /root/dpdk/virtio-net-pmd-1.1/librte_pmd_virtio.so -- -q 1 -p 1
====================================================================================

Spirent -> l2fwd (receiving, 10G port) (RX on KVM guest)

    MAX_PKT_BURST    packets per second over 10 seconds (<1% loss)
    -------------    ---------------------------------------------
    32               74k pps
    64               80k pps
    128              126k pps
    256              133k pps

l2fwd -> Spirent (10G port) (transmitting) (one-directional, single-port (port 0) setup)

    MAX_PKT_BURST    packets per second (<1% packet loss)
    -------------    ------------------------------------
    32               88k pps

**********************************
The same tests run on e1000 ports
====================================================================================
DPDK 1.3 + e1000 + 1024 txd/rxd + nice -19 priority (l2fwd, guest KVM process)
bash command: nice -n -19 /root/dpdk/dpdk-1.3.1r2/examples/l2fwd/build/l2fwd -c 3 -n 1 -b 000:00:03.0 -b 000:00:07.0 -b 000:00:0a.0 -b 000:00:09.0 -- -q 1 -p 1
====================================================================================

Spirent -> l2fwd (receiving, 10G port)

    MAX_PKT_BURST    packets per second (<=1% packet loss)
    -------------    -------------------------------------
    32               110k pps

l2fwd -> Spirent (10G port) (transmitting) (one-directional, single-port (port 0) setup)

    MAX_PKT_BURST    packets transmitted on l2fwd
    -------------    ----------------------------
    32               171k pps (0% dropped)
    240              203k pps (6% dropped; 130k pps received on eth6, assumed on Spirent) **

    **: not enough free slots in the tx ring

==> This indicates the effect of a small txd/rxd count (256): when more traffic is generated, packets cannot be sent due to the lack of free slots in the tx ring. I guess this is the same symptom that occurs in the virtio_net PMD.
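On option 3: one thing that might help without touching the ring size is to retry the unsent tail of a burst instead of dropping it when the tx ring has no free slots, giving the host a chance to drain the 256-entry ring. A rough sketch is below; the helper name and retry policy are mine, not something l2fwd does by default, and like a larger MAX_PKT_BURST it trades latency for fewer drops.

    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    /* Retry the part of a burst that did not fit into the tx ring instead of
     * dropping it immediately.  Packets that still do not fit after the retry
     * budget are freed to avoid leaking mbufs. */
    static void
    send_burst_with_retry(uint8_t port_id, uint16_t queue_id,
                          struct rte_mbuf **pkts, uint16_t nb_pkts)
    {
            uint16_t sent = 0;
            unsigned retries = 0;

            while (sent < nb_pkts && retries < 100) {
                    sent += rte_eth_tx_burst(port_id, queue_id,
                                             pkts + sent, nb_pkts - sent);
                    retries++;
            }
            while (sent < nb_pkts)
                    rte_pktmbuf_free(pkts[sent++]);
    }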