From: Newman Poborsky
To: Bruce Richardson
Cc: "dev@dpdk.org"
Date: Fri, 21 Nov 2014 23:55:13 +0100
Subject: Re: [dpdk-dev] one worker reading multiple ports
In-Reply-To: <20141121144430.GA9404@bricha3-MOBL3>
References: <20141120215233.GA15551@mhcomputing.net> <20141121144430.GA9404@bricha3-MOBL3>

Nice guess :)

After adding a check with rte_mempool_empty(), as soon as I enable the second port for reading, it shows that the mempool is empty.

Thank you for the help!
On Fri, Nov 21, 2014 at 3:44 PM, Bruce Richardson <bruce.richardson@intel.com> wrote:

> On Fri, Nov 21, 2014 at 03:03:25PM +0100, Newman Poborsky wrote:
> > So, since a mempool is multi-consumer (by default), if one is used to
> > configure queues on multiple NICs that have different socket owners, then
> > mbuf allocation will fail? But if 2 NICs have the same socket owner,
> > everything should work fine? Since I'm talking about 2 ports on the same
> > NIC, they must have the same owner, so RX should work with RX queues
> > configured with the same mempool, right? But in my case it doesn't, so I
> > guess I'm missing something.
>
> Actually, the mempools will work with NICs on multiple sockets - it's just
> that performance is likely to suffer due to QPI usage. The mempools being
> on one socket or the other is not going to break your application.
>
> > Any idea how I can troubleshoot why allocation fails with one mempool and
> > works fine with each queue having its own mempool?
>
> At a guess, I'd say that your mempools just aren't big enough. Try doubling
> the size of the mempool in the single-pool case and see if it helps things.
>
> /Bruce
>
> > Thank you,
> >
> > Newman
> >
> > On Thu, Nov 20, 2014 at 10:52 PM, Matthew Hall wrote:
> >
> > > On Thu, Nov 20, 2014 at 05:10:51PM +0100, Newman Poborsky wrote:
> > > > Thank you for your answer.
> > > >
> > > > I just realized that the reason rte_eth_rx_burst() returns 0 is
> > > > because inside ixgbe_recv_pkts() this fails:
> > > > nmb = rte_rxmbuf_alloc(rxq->mb_pool); => nmb is NULL
> > > >
> > > > Does this mean that every RX queue should have its own rte_mempool?
> > > > If so, are there any optimal values for: number of RX descriptors,
> > > > per-queue rte_mempool size, number of hugepages (from what I
> > > > understand, these 3 are correlated)?
> > > >
> > > > If I'm wrong, please explain why.
> > > >
> > > > Thanks!
> > > > BR,
> > > > Newman
> > >
> > > Newman,
> > >
> > > Mempools are created per NUMA node (ordinarily this means per processor
> > > socket, if sockets > 1).
> > >
> > > When doing TX/RX queue setup, one should determine the socket which owns
> > > the given PCI NIC, and try to use memory on that same socket to handle
> > > traffic for that NIC and its queues.
> > >
> > > So, for N cards with Q * N TX/RX queues, you only need S mempools (one
> > > per socket).
> > >
> > > Then each of the Q * N queues will use the mempool from the socket
> > > closest to the card.
> > >
> > > Matthew.