Date: Fri, 21 Nov 2014 14:44:30 +0000
From: Bruce Richardson
To: Newman Poborsky
Cc: "dev@dpdk.org"
Message-ID: <20141121144430.GA9404@bricha3-MOBL3>
Subject: Re: [dpdk-dev] one worker reading multiple ports

On Fri, Nov 21, 2014 at 03:03:25PM +0100, Newman Poborsky wrote:
> So, since a mempool is multi-consumer (by default), if one is used to
> configure queues on multiple NICs that have different socket owners, then
> mbuf allocation will fail? But if 2 NICs have the same socket owner,
> everything should work fine? Since I'm talking about 2 ports on the same
> NIC, they must have the same owner, so RX should work with RX queues
> configured with the same mempool, right? But in my case it doesn't, so I
> guess I'm missing something.

Actually, the mempools will work with NICs on multiple sockets - it's just
that performance is likely to suffer due to QPI usage. The mempools being
on one socket or the other is not going to break your application.

>
> Any idea how I can troubleshoot why allocation fails with one mempool and
> works fine with each queue having its own mempool?

At a guess, I'd say that your mempools just aren't big enough. Try doubling
the size of the mempool in the single-pool case and see if it helps things.
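As a back-of-the-envelope rule, the shared pool needs to cover every RX
descriptor of every queue, plus bursts in flight, plus the per-lcore
mempool caches. A rough, untested sketch of that arithmetic (all the
numbers below are made-up placeholders, not recommendations):

#include <rte_lcore.h>
#include <rte_mbuf.h>
#include <rte_mempool.h>

/* Illustrative placeholder values -- adjust to your setup. */
#define NB_PORTS   2
#define NB_QUEUES  2                       /* RX queues per port */
#define NB_RXD     512                     /* descriptors per RX queue */
#define BURST_SIZE 32
#define NB_LCORES  4
#define CACHE_SIZE 256
#define MBUF_SIZE  (2048 + sizeof(struct rte_mbuf) + RTE_PKTMBUF_HEADROOM)

static struct rte_mempool *
create_shared_pool(void)
{
	/* Each RX descriptor of each queue pins one mbuf, and mbufs
	 * sitting in the per-lcore caches are invisible to the NIC,
	 * so budget for rings + in-flight bursts + caches. */
	unsigned nb_mbufs = NB_PORTS * NB_QUEUES * NB_RXD
			+ NB_PORTS * NB_QUEUES * BURST_SIZE
			+ NB_LCORES * CACHE_SIZE;

	return rte_mempool_create("mbuf_pool", nb_mbufs, MBUF_SIZE,
			CACHE_SIZE, sizeof(struct rte_pktmbuf_pool_private),
			rte_pktmbuf_pool_init, NULL,
			rte_pktmbuf_init, NULL,
			rte_socket_id(), 0);
}

If the single pool was sized with only one queue's ring in mind, then the
other queues' setup will drain it, and rte_rxmbuf_alloc() returning NULL
inside ixgbe_recv_pkts() is exactly the symptom you'd expect.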
/Bruce

>
> Thank you,
>
> Newman
>
> On Thu, Nov 20, 2014 at 10:52 PM, Matthew Hall wrote:
>
> > On Thu, Nov 20, 2014 at 05:10:51PM +0100, Newman Poborsky wrote:
> > > Thank you for your answer.
> > >
> > > I just realized that the reason rte_eth_rx_burst() returns 0 is
> > > because inside ixgbe_recv_pkts() this fails:
> > > nmb = rte_rxmbuf_alloc(rxq->mb_pool); => nmb is NULL
> > >
> > > Does this mean that every RX queue should have its own rte_mempool?
> > > If so, are there any optimal values for: the number of RX descriptors,
> > > the per-queue rte_mempool size, and the number of hugepages (from what
> > > I understand, these 3 are correlated)?
> > >
> > > If I'm wrong, please explain why.
> > >
> > > Thanks!
> > >
> > > BR,
> > > Newman
> >
> > Newman,
> >
> > Mempools are created per NUMA node (ordinarily this means per processor
> > socket if sockets > 1).
> >
> > When doing Tx / Rx Queue Setup, one should determine the socket which
> > owns the given PCI NIC, and try to use memory on that same socket to
> > handle traffic for that NIC and its Queues.
> >
> > So, for N cards with Q * N Tx / Rx queues, you only need S mempools
> > (one per socket).
> >
> > Then each of the Q * N queues will use the mempool from the socket
> > closest to the card.
> >
> > Matthew.
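For what it's worth, Matthew's one-pool-per-socket scheme might look
something like the below (untested sketch; the sizing macros are
placeholders as before, and error handling is omitted):

#include <stdio.h>
#include <rte_ethdev.h>
#include <rte_lcore.h>
#include <rte_mbuf.h>
#include <rte_mempool.h>

#define NB_MBUFS   8191    /* 2^n - 1 gives best mempool memory usage */
#define CACHE_SIZE 256
#define MBUF_SIZE  (2048 + sizeof(struct rte_mbuf) + RTE_PKTMBUF_HEADROOM)

/* One mbuf pool per NUMA socket, created lazily on first use. */
static struct rte_mempool *socket_pool[RTE_MAX_NUMA_NODES];

static struct rte_mempool *
pool_for_port(uint8_t port)
{
	int sock = rte_eth_dev_socket_id(port);

	if (sock < 0)		/* socket unknown, e.g. virtual device */
		sock = rte_socket_id();

	if (socket_pool[sock] == NULL) {
		char name[RTE_MEMPOOL_NAMESIZE];

		snprintf(name, sizeof(name), "mbuf_pool_s%d", sock);
		socket_pool[sock] = rte_mempool_create(name, NB_MBUFS,
				MBUF_SIZE, CACHE_SIZE,
				sizeof(struct rte_pktmbuf_pool_private),
				rte_pktmbuf_pool_init, NULL,
				rte_pktmbuf_init, NULL, sock, 0);
	}
	return socket_pool[sock];
}

static int
setup_rx_queues(uint8_t port, uint16_t nb_queues, uint16_t nb_rxd)
{
	struct rte_mempool *pool = pool_for_port(port);
	int sock = rte_eth_dev_socket_id(port);
	uint16_t q;
	int ret;

	for (q = 0; q < nb_queues; q++) {
		/* NULL rxconf = driver defaults; pass an explicit
		 * struct rte_eth_rxconf to tune thresholds. */
		ret = rte_eth_rx_queue_setup(port, q, nb_rxd,
				sock < 0 ? rte_socket_id() : (unsigned)sock,
				NULL, pool);
		if (ret < 0)
			return ret;
	}
	return 0;
}

That way every queue on a port allocates from memory local to the port's
socket, and ports on the same socket share a single pool.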