From: Bruce Richardson
To: Matthew Hall
Cc: "dev@dpdk.org"
Date: Wed, 22 Apr 2015 10:10:04 +0100
Subject: Re: [dpdk-dev] DCA
Message-ID: <20150422091004.GB5652@bricha3-MOBL3>
In-Reply-To: <20150421174454.GA1172@mhcomputing.net>

On Tue, Apr 21, 2015 at 10:44:54AM -0700, Matthew Hall wrote:
> On Tue, Apr 21, 2015 at 10:27:48AM +0100, Bruce Richardson wrote:
> > Can you perhaps comment on the use-case where you find this binding
> > limiting? Modern platforms have multiple NUMA nodes, but they also
> > generally have PCI slots connected to those multiple NUMA nodes, so
> > that you can have your NIC ports similarly NUMA-partitioned?
>
> Hi Bruce,
>
> I was wondering if you have tried to do this on COTS (commercial
> off-the-shelf) hardware before. What I found each time I tried it was
> that PCIe slots are not very evenly distributed across the NUMA nodes,
> unlike what you'd expect.

I doubt I've tried it on regular commercial boards as much as you guys
have, though it does happen!

> Sometimes the PCIe lanes on CPU 0 get partly used up by Super IO or
> other integrated peripherals. Other times the motherboards give you
> 2 x8 when you needed 1 x16, or they give you a bunch of x4 when you
> needed x8, etc.

Point taken!

> It's actually pretty difficult to find the mapping, for one, and even
> when you do, even harder to get the right slots for your cards and so
> on. In the ixgbe kernel driver you'll sometimes get some cryptic debug
> prints when it's been munged and performance will suffer. But in the
> ixgbe PMD driver you're on your own mostly.

It was to try and make the NUMA mapping of PCI devices clearer that we
added the printing of the NUMA node on PCI scan:

  EAL: PCI device 0000:86:00.0 on NUMA socket 1
  EAL:   probe driver: 8086:154a rte_ixgbe_pmd
  EAL:   PCI memory mapped at 0x7fb452f04000
  EAL:   PCI memory mapped at 0x7fb453004000

Is there something more than this you feel we could do in the PMD to
help with slot identification?

/Bruce

> Matthew.
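For anyone trying to work out the slot-to-node mapping by hand: the NUMA node that EAL prints above is the kernel's view, which Linux exposes per PCI device in sysfs (`/sys/bus/pci/devices/<BDF>/numa_node`). Below is a minimal sketch of reading it; the helper name and the `sysfs_root` parameter are illustrative only, not part of DPDK or any kernel API.

```python
# Not DPDK code: a small sketch showing where the NUMA node that EAL
# prints comes from. On Linux the kernel exposes it per PCI device in
# sysfs. The helper name and sysfs_root parameter are illustrative.
from pathlib import Path


def pci_numa_node(bdf, sysfs_root="/sys/bus/pci/devices"):
    """Return the NUMA node for a PCI address like '0000:86:00.0'.

    Returns -1 when the entry is missing or unreadable; note that
    single-socket systems commonly report -1 here anyway.
    """
    try:
        return int((Path(sysfs_root) / bdf / "numa_node").read_text())
    except (FileNotFoundError, ValueError):
        return -1


if __name__ == "__main__":
    # Same device address as in the EAL output above.
    print("0000:86:00.0 is on NUMA node", pci_numa_node("0000:86:00.0"))
```

Tools such as hwloc's `lstopo` present the same topology graphically, which can help when checking which physical slots hang off which socket.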