From: Alexander Duyck
To: "Sanford, Robert", Bruce Richardson, Stephen Hemminger, dev@dpdk.org
Date: Tue, 13 Oct 2015 13:24:22 -0700
Subject: Re: [dpdk-dev] IXGBE RX packet loss with 5+ cores

On 10/13/2015 07:47 AM, Sanford, Robert wrote:
>>>> [Robert:]
>>>> 1. The 82599 device supports up to 128 queues. Why do we see trouble
>>>> with as few as 5 queues? What could limit the system (and one port
>>>> controlled by 5+ cores) from receiving at line-rate without loss?
>>>>
>>>> 2. As far as we can tell, the RX path only touches the device
>>>> registers when it updates a Receive Descriptor Tail register (RDT[n]),
>>>> roughly every rx_free_thresh packets. Is there a big difference
>>>> between one core doing this and N cores doing it 1/N as often?
>>>
>>> [Stephen:]
>>> As you add cores, there is more traffic on the PCI bus from each core
>>> polling. There is a fixed number of PCI bus transactions per second
>>> possible, and each core adds more useless (empty) transactions.
>>
>> [Bruce:]
>> The polling for packets by the core should not be using PCI bandwidth
>> directly, as the ixgbe driver (and other drivers) check for the DD bit
>> being set on the descriptor in memory/cache.
>
> I was preparing to reply with the same point.
>
>>> [Stephen:] Why do you think adding more cores will help?
>
> We're using run-to-completion and sometimes spend too many cycles per
> packet. We realize that we need to move to an io+workers model, but
> wanted a better understanding of the dynamics involved here.
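To make Bruce's point concrete: the DD ("descriptor done") check is a plain
memory read of the write-back descriptor in host memory, so an idle poll
costs cache/DRAM bandwidth, not PCIe transactions. A rough sketch of the
idea follows -- the struct layout is illustrative, not the actual ixgbe
descriptor format:

#include <stdint.h>

#define RXD_STAT_DD 0x01 /* NIC sets this bit when it writes a descriptor back */

/* Illustrative write-back descriptor layout (not ixgbe's real one). */
struct rx_desc {
        uint64_t addr;
        uint32_t status_error;  /* DD bit lands here on write-back */
        uint32_t length;
};

static inline int
rx_desc_done(volatile struct rx_desc *ring, uint32_t idx)
{
        /* Pure memory load -- no MMIO read, hence no PCIe transaction. */
        return (ring[idx].status_error & RXD_STAT_DD) != 0;
}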
>> [Bruce:] However, using an increased number of queues can use PCI
>> bandwidth in other ways: for instance, with more queues you reduce the
>> amount of descriptor coalescing that can be done by the NIC, so that
>> instead of having a single transaction of 4 descriptors to one queue,
>> the NIC may instead have to do 4 transactions, each writing 1 descriptor
>> to 4 different queues. This is possibly why sending all traffic to a
>> single queue works OK - the polling on the other queues is still being
>> done, but has little effect.
>
> Brilliant! This idea did not occur to me.

You can actually make the throughput regression disappear by altering the
traffic pattern you are testing with. In the past I have found that sending
traffic in bursts, where 4 frames belong to the same queue before moving on
to the next one, essentially eliminated the packet drops caused by PCIe
bandwidth limits. The trick is to have the Rx descriptor processing work in
batches, so that multiple descriptors are handled for each PCIe read/write.

- Alex
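P.S. In DPDK application terms, that batching falls out naturally if you
poll with a reasonable burst size and rx_free_thresh. A minimal sketch,
assuming the port, queue, and mempool setup are done elsewhere (names and
sizes here are illustrative, not a recommendation):

#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define BURST_SIZE 32

static void
rx_loop(uint16_t port_id, uint16_t queue_id)
{
        struct rte_mbuf *bufs[BURST_SIZE];
        uint16_t i, nb_rx;

        for (;;) {
                /* Pulls up to BURST_SIZE completed descriptors in one call;
                 * the driver recycles descriptors and bumps the RDT doorbell
                 * roughly every rx_free_thresh slots, not once per packet. */
                nb_rx = rte_eth_rx_burst(port_id, queue_id, bufs, BURST_SIZE);
                for (i = 0; i < nb_rx; i++) {
                        /* ... application processing ... */
                        rte_pktmbuf_free(bufs[i]);
                }
        }
}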