From: Andrey Korolyov
Date: Thu, 22 Jan 2015 21:11:56 +0400
To: dev@dpdk.org
Cc: "discuss@openvswitch.org"
Subject: Re: [dpdk-dev] Packet drops during non-exhaustive flood with OVS and 1.8.0

On Wed, Jan 21, 2015 at 8:02 PM, Andrey Korolyov wrote:
> Hello,
>
> I observed that the latest OVS with dpdk-1.8.0 and igb_uio starts to
> drop packets earlier than a regular Linux ixgbe 10G interface. The
> setup is as follows:
>
> receiver/forwarder:
> - 8-core, 2-socket system with E5-2603v2, cores 1-3 given to OVS
>   exclusively
> - n-dpdk-rxqs=6, rx scattering not enabled
> - X520-DA
> - 3.10/3.18 host kernel
> - during 'legacy mode' testing, queue interrupts are spread across
>   all cores
>
> sender:
> - 16-core E5-2630, netmap framework for packet generation
> - pkt-gen -f tx -i eth2 -s 10.6.9.0-10.6.9.255 -d
>   10.6.10.0-10.6.10.255 -S 90:e2:ba:84:19:a0 -D 90:e2:ba:85:06:07 -R
>   11000000, which produces a constant 11 Mpps flood of 60-byte
>   packets for the duration of the test
>
> OVS contains only a single drop rule at the moment:
> ovs-ofctl add-flow br0 in_port=1,actions=DROP
>
> The packet generator was run for tens of seconds against both the
> plain Linux stack and OVS+DPDK. In the Linux case the interface
> showed zero drops/errors, and the pktgen and host interface counters
> matched, i.e. none of the generated packets went unaccounted for.
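For anyone trying to reproduce this, the OVS-DPDK side of such a setup is
typically configured along the following lines. This is only a sketch: the
PCI address and the port name dpdk0 are placeholders, the bind script path
depends on the DPDK tree, and the other_config keys may differ between OVS
builds.

  # bind the X520 to the DPDK-compatible driver (igb_uio, as above);
  # 0000:01:00.0 is a placeholder PCI address
  tools/dpdk_nic_bind.py --bind=igb_uio 0000:01:00.0

  # userspace datapath bridge with one DPDK port
  ovs-vsctl add-br br0 -- set bridge br0 datapath_type=netdev
  ovs-vsctl add-port br0 dpdk0 -- set Interface dpdk0 type=dpdk

  # 6 rx queues per DPDK port, PMD threads pinned to cores 1-3 (mask 0xe),
  # assuming the build exposes these other_config keys
  ovs-vsctl set Open_vSwitch . other_config:n-dpdk-rxqs=6
  ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0xe

  # the single drop rule used for the test
  ovs-ofctl add-flow br0 in_port=1,actions=DROP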
>
> I selected a rate of about 11 Mpps because OVS starts to drop packets
> around this value; after the same short test the interface stats show
> the following:
>
> statistics : {collisions=0, rx_bytes=22003928768,
> rx_crc_err=0, rx_dropped=0, rx_errors=10694693, rx_frame_err=0,
> rx_over_err=0, rx_packets=343811387, tx_bytes=0, tx_dropped=0,
> tx_errors=0, tx_packets=0}
>
> pktgen side:
> Sent 354506080 packets, 60 bytes each, in 32.23 seconds.
> Speed: 11.00 Mpps Bandwidth: 5.28 Gbps (raw 7.39 Gbps)
>
> If the rate is increased to 13-14 Mpps, the error/total ratio rises to
> about one third. Apart from this, OVS on DPDK shows excellent results
> and I do not want to reject the solution because of behaviour like the
> one described, so I'm open to any suggestions for improving the
> situation (except using the 1.7 branch :) ).

At a glance it looks like there is a problem with the pmd threads: they
start to consume about five-thousandths of sys% on their dedicated cores
during the flood, but in theory they should not. Any ideas for debugging
or improving this situation are very welcome!
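For completeness, the simplest way to watch what the PMD threads are doing
is to look at the per-thread CPU usage of ovs-vswitchd; the commands below
are generic Linux tools, nothing OVS-specific:

  # per-thread CPU usage; the DPDK polling threads appear as separate entries
  top -H -p "$(pidof ovs-vswitchd)"

  # or sample the same thing once per second with sysstat
  pidstat -t -p "$(pidof ovs-vswitchd)" 1

(Newer OVS builds also expose per-PMD packet and cycle counters via
'ovs-appctl dpif-netdev/pmd-stats-show', if available.)

As a sanity check on the quoted numbers: 11 Mpps of 60-byte frames is
11e6 * 60 * 8 = 5.28 Gbps of payload, and with the 4-byte CRC, 8-byte
preamble and 12-byte inter-frame gap each frame occupies 84 bytes on the
wire, i.e. 11e6 * 84 * 8 = 7.39 Gbps raw, so the generator figures are
self-consistent.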