Date: Tue, 21 Apr 2015 10:44:54 -0700
From: Matthew Hall
To: Bruce Richardson
Cc: dev@dpdk.org
Subject: Re: [dpdk-dev] DCA

On Tue, Apr 21, 2015 at 10:27:48AM +0100, Bruce Richardson wrote:
> Can you perhaps comment on the use-case where you find this binding
> limiting? Modern platforms have multiple NUMA nodes, but they also
> generally have PCI slots connected to those multiple NUMA nodes, so
> that you can have your NIC ports similarly NUMA-partitioned?

Hi Bruce,

I was wondering if you have tried to do this on COTS (commercial
off-the-shelf) hardware before. Each time I tried it, I found that the
PCIe slots are not as evenly distributed across the NUMA nodes as you'd
expect. Sometimes the PCIe lanes on CPU 0 are partly used up by Super
I/O or other integrated peripherals. Other times the motherboard gives
you two x8 slots when you needed one x16, or a bunch of x4 slots when
you needed x8, and so on.

It's difficult enough just to discover the slot-to-node mapping in the
first place, and even when you do, it's harder still to get the right
slots for your cards. The ixgbe kernel driver will at least print some
cryptic debug messages when the layout has been munged and performance
will suffer, but with the ixgbe PMD you're mostly on your own.
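To make that concrete, below is roughly the kind of check I end up
adding myself -- just a sketch against the standard rte_ethdev /
rte_lcore calls (rte_eth_dev_socket_id(), rte_socket_id()), not
anything that exists in the tree. On Linux the port's socket id is
derived from the device's sysfs numa_node entry, so you can also
cross-check it from the shell via
/sys/bus/pci/devices/<BDF>/numa_node.

#include <stdint.h>
#include <stdio.h>

#include <rte_ethdev.h>
#include <rte_lcore.h>

/*
 * Hypothetical helper; call after rte_eal_init() and port probing.
 * Warns when a port sits on a different NUMA node than the lcore
 * that will poll it, or when the node is unknown (-1), which is what
 * you often see on COTS boards with odd slot wiring.
 */
static void
check_port_numa_locality(void)
{
    uint8_t port;
    int lcore_socket = (int)rte_socket_id();

    for (port = 0; port < rte_eth_dev_count(); port++) {
        int port_socket = rte_eth_dev_socket_id(port);

        if (port_socket < 0)
            printf("port %u: NUMA node unknown (sysfs reports -1)\n",
                   port);
        else if (port_socket != lcore_socket)
            printf("port %u: on node %d, polled from node %d -- "
                   "expect cross-socket PCIe traffic\n",
                   port, port_socket, lcore_socket);
    }
}

Nothing fancy, but something like it would have saved me a lot of
lspci archaeology on the boards I described above.

Matthew.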