Date: Thu, 20 Nov 2014 13:52:33 -0800
From: Matthew Hall
To: Newman Poborsky
Cc: "dev@dpdk.org"
Message-ID: <20141120215233.GA15551@mhcomputing.net>
Subject: Re: [dpdk-dev] one worker reading multiple ports

On Thu, Nov 20, 2014 at 05:10:51PM +0100, Newman Poborsky wrote:
> Thank you for your answer.
>
> I just realized that the reason rte_eth_rx_burst() returns 0 is that
> inside ixgbe_recv_pkts() this fails:
> nmb = rte_rxmbuf_alloc(rxq->mb_pool); => nmb is NULL
>
> Does this mean that every RX queue should have its own rte_mempool? If so,
> are there any optimal values for: number of RX descriptors, per-queue
> rte_mempool size, number of hugepages (from what I understand, these 3 are
> correlated)?
>
> If I'm wrong, please explain why.
>
> Thanks!
>
> BR,
> Newman

Newman,

Mempools are created per NUMA node (ordinarily this means per processor
socket, if sockets > 1). When doing Tx / Rx queue setup, one should
determine the socket which owns the given PCI NIC and try to use memory
on that same socket to handle traffic for that NIC and its queues.

So, for N cards with Q * N Tx / Rx queues, you only need S mempools,
where S is the number of NUMA sockets in the system. Each of the Q * N
queues then uses the mempool from the socket closest to its card.

Matthew.
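
A minimal sketch of the per-socket layout described above. The pool and
descriptor sizes, the helper names (pool_for_socket, setup_port) and the
fallback logic are illustrative, not recommended values;
rte_pktmbuf_pool_create() and RTE_MBUF_DEFAULT_BUF_SIZE exist only in
newer DPDK releases, while older ones build the pool with
rte_mempool_create() plus rte_pktmbuf_pool_init / rte_pktmbuf_init:

    /* Sketch: one mbuf pool per NUMA socket, shared by all queues of the
     * NICs attached to that socket.  Sizes below are placeholders. */
    #include <stdio.h>
    #include <rte_ethdev.h>
    #include <rte_mbuf.h>
    #include <rte_mempool.h>
    #include <rte_lcore.h>

    #define NB_MBUF    8192   /* per-socket pool size: illustrative only */
    #define MBUF_CACHE 256
    #define RX_DESC    128
    #define TX_DESC    512

    static struct rte_mempool *pools[RTE_MAX_NUMA_NODES];

    /* Lazily create (or reuse) the mbuf pool that lives on 'socket'. */
    static struct rte_mempool *pool_for_socket(int socket)
    {
        char name[RTE_MEMPOOL_NAMESIZE];

        if (pools[socket] == NULL) {
            snprintf(name, sizeof(name), "mbuf_pool_s%d", socket);
            pools[socket] = rte_pktmbuf_pool_create(name, NB_MBUF,
                    MBUF_CACHE, 0, RTE_MBUF_DEFAULT_BUF_SIZE, socket);
        }
        return pools[socket];
    }

    static int setup_port(uint16_t port, uint16_t nb_queues,
                          const struct rte_eth_conf *conf)
    {
        int socket = rte_eth_dev_socket_id(port);
        struct rte_mempool *mp;
        uint16_t q;
        int ret;

        if (socket < 0)          /* socket unknown: fall back to caller's */
            socket = rte_socket_id();

        mp = pool_for_socket(socket);
        if (mp == NULL)
            return -1;

        ret = rte_eth_dev_configure(port, nb_queues, nb_queues, conf);
        if (ret < 0)
            return ret;

        for (q = 0; q < nb_queues; q++) {
            /* Every queue of this port draws mbufs from the socket-local
             * pool, so descriptors and buffers stay on the NIC's node. */
            ret = rte_eth_rx_queue_setup(port, q, RX_DESC, socket, NULL, mp);
            if (ret < 0)
                return ret;
            ret = rte_eth_tx_queue_setup(port, q, TX_DESC, socket, NULL);
            if (ret < 0)
                return ret;
        }
        return rte_eth_dev_start(port);
    }

Note that a shared per-socket pool has to be sized to cover the RX
descriptors of every queue drawing from it, plus in-flight and per-lcore
cached mbufs; if it is too small, rte_rxmbuf_alloc() returns NULL and
rte_eth_rx_burst() returns 0, which is exactly the symptom reported above.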