Date: Sun, 30 Jan 2022 14:32:47 +0300
From: Dmitry Kozlyuk
To: fwefew 4t4tg <7532yahoo@gmail.com>
Cc: users@dpdk.org
Subject: Re: allocating a mempool w/ rte_pktmbuf_pool_create()
Message-ID: <20220130143247.19aaeba8@sovereign>
References: <20220130042309.5e590857@sovereign>
List-Id: DPDK usage discussions

Hi,

2022-01-29 21:33 (UTC-0500), fwefew 4t4tg:
[...]
> > The other crucial insight is: so long as memory is allocated on the same
> > NUMA node where the RXQ/TXQ that ultimately uses it runs, there is only a
> > marginal performance advantage to having per-core caching of mbufs in a
> > mempool, as provided by the private_data_size formal argument of
> > rte_mempool_create() here:
> >
> > https://doc.dpdk.org/api/rte__mempool_8h.html#a503f2f889043a48ca9995878846db2fd
> >
> > In fact the API doc should really point out the advantage; perhaps it
> > eliminates some cache sloshing to get the last few percent of performance.

Note: "cache sloshing", aka "false sharing", is not what happens here.
When multiple lcores use one mempool (see below why you may want this),
the contention for the mempool ring is true, not false, sharing.
The colloquial term for it is "contention", and per-lcore caching reduces it.
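As an aside, a minimal sketch of creating a pool with a per-lcore cache (note that in rte_pktmbuf_pool_create() it is the cache_size argument, not priv_size, that controls the per-lcore cache; the name and sizes below are just illustrative, and EAL is assumed to be initialized already):

```c
#include <rte_mbuf.h>

/* Sketch only: create an mbuf pool on a given NUMA node with a
 * per-lcore cache, so most alloc/free calls never touch the shared ring. */
static struct rte_mempool *
make_pool(int socket_id)
{
	return rte_pktmbuf_pool_create("pool_example",
			8191,	/* n: pool size; (2^k - 1) is optimal for the ring */
			256,	/* cache_size: per-lcore cache, reduces contention */
			0,	/* priv_size: app private area per mbuf */
			RTE_MBUF_DEFAULT_BUF_SIZE,
			socket_id);	/* allocate on this NUMA node */
}
```

Passing cache_size = 0 disables the cache entirely, which is only sensible when a single lcore uses the pool.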
Later you are talking about the case when a mempool is created for each queue.
The potential issue with this approach is that one queue may quickly deplete
its mempool; say, if it does IPv4 reassembly and holds fragments for a long
time. To counter this, each per-queue mempool must be large, which is a waste
of memory. This is why one mempool is often created for a set of queues
(at least the queues processed on lcores of a single NUMA node). If one queue
then consumes more mbufs than the others, it is no longer a problem, as long
as the mempool as a whole is not depleted. Per-lcore caching optimizes
exactly this case, when many lcores access one mempool. It may be less
relevant for your case. You can run the "mempool_perf_autotest" command of
the app/test/dpdk-test binary to see how the cache influences performance.

See also:
https://doc.dpdk.org/guides/prog_guide/mempool_lib.html#mempool-local-cache

[...]
> > Let's turn then to a larger issue: what happens if different RXQ/TXQs
> > have radically different needs?
> >
> > As the code above illustrates, one merely allocates a size appropriate
> > to an individual RXQ/TXQ by changing the count and size of mbufs,
> > which is as simple as it can get.

Correct. As explained above, it can also be one mempool per queue group.
What do you think is missing here for your use case?