Date: Tue, 11 Jan 2022 10:02:25 -0800
From: Stephen Hemminger
To: Filip Janiszewski
Cc: users@dpdk.org
Subject: Re: rte_pktmbuf_free_bulk vs rte_pktmbuf_free
Message-ID: <20220111100225.53fd75d3@hermes.local>
In-Reply-To: <5ecf22af-ac38-7dd5-b3ce-5b2ccf60b32f@filipjaniszewski.com>
References: <5ecf22af-ac38-7dd5-b3ce-5b2ccf60b32f@filipjaniszewski.com>

On Tue, 11 Jan 2022 13:12:24 +0100
Filip Janiszewski wrote:

> Hi,
>
> Is there any specific reason why using rte_pktmbuf_free_bulk seems to be
> much slower than rte_pktmbuf_free in a loop?
> (DPDK 21.11)
>
> I ran a bunch of tests on a 50GbE link where I'm getting packet drops
> (running with too few RX cores on purpose, for performance
> verification), and when the time comes to release the packets, I made a
> quick change like this:
>
> .
> //rte_pktmbuf_free_bulk( data, pkt_cnt );
> for( idx = 0 ; idx < pkt_cnt ; ++idx ) {
>     rte_pktmbuf_free( data[ idx ] );
> }
> .
>
> And suddenly I'm dropping around 10% fewer packets (the traffic rate is
> around ~95 Mpps). In case that's relevant, RX from the NIC is done on a
> separate core from the one where the pkts are processed and released.
>
> I also did the following experiment: I found the Mpps rate where I get
> around 2-5% drops using rte_pktmbuf_free_bulk and executed a bunch of
> runs where I consistently get drops. Then I switched to the loop with
> rte_pktmbuf_free and ran the same tests again; all of a sudden I can't
> drop anymore.
>
> Isn't this strange? I was sure rte_pktmbuf_free_bulk would be
> optimized for bulk releases so people don't have to loop themselves.
>
> Thanks
>

Is your mbuf pool close to exhausted? How big is your bulk size?

It might be that with larger bulk sizes, the loop is giving packets back
that instantly get consumed by incoming packets.

So either the pool is almost empty, or the non-bulk free is keeping
packets in the cache more.
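
For what it's worth, here is a minimal sketch of how one could check both
of those things from the freeing core. This is not from the original
thread: the helper name log_pool_pressure and the "sample only every few
thousand bursts" idea are illustrative assumptions, and it presumes the
application keeps a pointer to its mbuf pool.

#include <rte_mbuf.h>
#include <rte_mempool.h>
#include <rte_log.h>

/* Sample pool pressure and the observed bulk size on the freeing core.
 * Call this once every few thousand bursts, not per burst: the
 * *_count() helpers walk the ring and the per-lcore caches, so they
 * are not cheap. */
static void
log_pool_pressure(struct rte_mempool *mp, unsigned int pkt_cnt)
{
	unsigned int avail  = rte_mempool_avail_count(mp);
	unsigned int in_use = rte_mempool_in_use_count(mp);

	RTE_LOG(INFO, USER1,
		"free bulk=%u, pool avail=%u, in use=%u of %u\n",
		pkt_cnt, avail, in_use, (unsigned int)mp->size);
}

If avail hovers near zero while the drops occur, the pool itself is the
bottleneck; if it stays high, the difference is more likely in how the
two free paths interact with the per-lcore mempool cache.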