From mboxrd@z Thu Jan  1 00:00:00 1970
Message-ID: <561D2498.1060707@intel.com>
Date: Tue, 13 Oct 2015 08:34:48 -0700
From: "Venkatesan, Venky"
To: dev@dpdk.org
References: <20151012221830.6f5f42af@xeon-e3> <20151013135955.GA31844@bricha3-MOBL3>
Subject: Re: [dpdk-dev] IXGBE RX packet loss with 5+ cores

On 10/13/2015 7:47 AM, Sanford, Robert wrote:
>>>> [Robert:]
>>>> 1. The 82599 device supports up to 128 queues. Why do we see trouble
>>>> with as few as 5 queues? What could limit the system (and one port
>>>> controlled by 5+ cores) from receiving at line-rate without loss?
>>>>
>>>> 2. As far as we can tell, the RX path only touches the device
>>>> registers when it updates a Receive Descriptor Tail register (RDT[n]),
>>>> roughly every rx_free_thresh packets. Is there a big difference
>>>> between one core doing this and N cores doing it 1/N as often?
>>> [Stephen:]
>>> As you add cores, there is more traffic on the PCI bus from each core
>>> polling. There is a fixed number of PCI bus transactions per second
>>> possible. Each core is increasing the number of useless (empty)
>>> transactions.
>> [Bruce:]
>> The polling for packets by the core should not be using PCI bandwidth
>> directly, as the ixgbe driver (and other drivers) check for the DD bit
>> being set on the descriptor in memory/cache.
> I was preparing to reply with the same point.
>
>>> [Stephen:] Why do you think adding more cores will help?
> We're using run-to-completion and sometimes spend too many cycles per pkt.
> We realize that we need to move to an io+workers model, but wanted a better
> understanding of the dynamics involved here.
>
>> [Bruce:] However, using an increased number of queues can use PCI
>> bandwidth in other ways; for instance, with more queues you reduce the
>> amount of descriptor coalescing that can be done by the NICs, so that
>> instead of having a single transaction of 4 descriptors to one queue,
>> the NIC may instead have to do 4 transactions, each writing 1 descriptor
>> to 4 different queues. This is possibly why sending all traffic to a
>> single queue works ok - the polling on the other queues is still being
>> done, but has little effect.
> Brilliant! This idea did not occur to me.

To add a little more detail - this ends up being both a bandwidth and a
transaction bottleneck. Not only do you add an increased transaction count,
you also add a huge amount of bandwidth overhead: each 16-byte descriptor
write-back goes out in its own PCI-E TLP, whose header and framing overhead
is about the same size as the descriptor itself.
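To make Bruce's DD-bit point concrete, here is a minimal sketch of what an
RX poll boils down to (the structures and names below are simplified
illustrations, not the actual ixgbe code). The poll itself is just a read of
host memory that hits the CPU cache while the queue is empty; the PCI-E cost
is all on the NIC side, in the descriptor write-backs, plus the occasional
RDT update:

#include <stdint.h>

#define RX_STAT_DD 0x01          /* "descriptor done" bit, set by the NIC on write-back */

struct rx_desc {                 /* simplified 16-byte RX descriptor (write-back format) */
    uint64_t rss_or_addr;
    uint32_t status_error;       /* DD bit lands here when the NIC writes the descriptor back */
    uint32_t length;
};

struct rx_queue {
    volatile struct rx_desc *ring;   /* descriptor ring in host memory */
    uint16_t next_to_check;          /* software consumer index */
    uint16_t mask;                   /* ring size - 1 (power of two) */
};

/* Returns 1 if a packet is ready, 0 if the queue is currently empty. */
static int rx_poll_once(struct rx_queue *rxq)
{
    volatile struct rx_desc *d = &rxq->ring[rxq->next_to_check];

    /*
     * Plain memory read: while nothing has arrived this is served from the
     * CPU cache, so polling an empty queue generates no PCI-E traffic.
     */
    if (!(d->status_error & RX_STAT_DD))
        return 0;

    /*
     * A packet is ready: process it, and only once every rx_free_thresh
     * packets write the RDT tail register - the single MMIO access on the
     * RX fast path.
     */
    rxq->next_to_check = (rxq->next_to_check + 1) & rxq->mask;
    return 1;
}

The real driver is more involved (bulk allocation, vector paths), but the
split between cached descriptor reads and the occasional MMIO tail write is
the point.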
So what ends up happening when the incoming packets are bifurcated to
different queues (one packet per queue) is that you have 2x the number of
transactions (one for the packet data and one for its descriptor), and the
bandwidth used essentially doubles because each descriptor write now carries
its own TLP overhead.

There is a second issue that pops up when coalescing breaks down: testpmd in
iofwd mode essentially just transmits whatever it receives (i.e. Rx(n) ->
Tx(n)). This means that the transmit side also suffers from writing one
descriptor at a time (i.e. when the NIC pulls a descriptor cache line to
transmit, it finds only 1 valid descriptor). When a second descriptor is
transmitted on the same queue, the NIC will again pull the cache line and
find only one valid descriptor. That is another 2x increase in transaction
count as well as PCI-E TLP overhead.

The third hit also comes from the transmit side, when transmitting one
packet at a time: the last part of the transmit process is an MMIO write to
the tail pointer. This is a costly operation in terms of cycles (it is an
un-cacheable memory operation), and it again adds heavy PCI-E overhead (a
full TLP for a 4-byte write) and more transactions on the bus.

Hope that explains all the touch-points behind the drop-off in performance
you see.

>
>
>
> --
> Thanks guys,
> Robert
>
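P.S. On the tail-pointer cost: the usual way to soften it is to batch
transmits so that a single MMIO doorbell covers a whole burst, which is also
what lets the NIC find several valid descriptors per cache-line pull. A rough
sketch, again with simplified stand-ins rather than the real ixgbe TX path:

#include <stdint.h>

struct pkt {                     /* illustrative packet handle */
    uint64_t buf_phys;           /* DMA address of the packet buffer */
    uint16_t len;
};

struct tx_desc {                 /* simplified 16-byte TX descriptor */
    uint64_t buf_addr;
    uint32_t cmd_len;
    uint32_t status;
};

struct tx_queue {
    struct tx_desc *ring;        /* descriptor ring in host memory */
    volatile uint32_t *tdt_reg;  /* mapped TDT tail register (MMIO) */
    uint16_t tail;               /* software producer index */
    uint16_t mask;               /* ring size - 1 (power of two) */
};

static void tx_burst(struct tx_queue *txq, const struct pkt *pkts, uint16_t n)
{
    for (uint16_t i = 0; i < n; i++) {
        struct tx_desc *d = &txq->ring[txq->tail];

        /* Cheap, cacheable writes into host memory. */
        d->buf_addr = pkts[i].buf_phys;
        d->cmd_len  = pkts[i].len;   /* command bits omitted */
        txq->tail = (txq->tail + 1) & txq->mask;
    }

    /*
     * One un-cacheable MMIO doorbell for the whole burst: with n == 32 the
     * TLP and cycle cost of the tail write is paid once per 32 packets
     * instead of once per packet, and the NIC can fetch several valid
     * descriptors per cache-line pull instead of just one.
     */
    *txq->tdt_reg = txq->tail;
}

Whether the application can actually batch like this depends on the
forwarding model, of course - which ties back to the run-to-completion point
above.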