To: Mallesh Koujalagi, dev@dpdk.org
Cc: mtetsuyah@gmail.com
References: <1517627510-60932-1-git-send-email-malleshx.koujalagi@intel.com>
From: Ferruh Yigit
Message-ID: <803da3d7-7443-01e8-a14c-1e899c3a5a17@intel.com>
Date: Mon, 5 Mar 2018 15:24:54 +0000
In-Reply-To: <1517627510-60932-1-git-send-email-malleshx.koujalagi@intel.com>
Subject: Re: [dpdk-dev] [PATCH] net/null: Support bulk alloc and free.
List-Id: DPDK patches and discussions

On 2/3/2018 3:11 AM, Mallesh Koujalagi wrote:
> Bulk allocation and freeing of multiple mbufs increases throughput by
> more than ~2% on a single core.
>
> Signed-off-by: Mallesh Koujalagi
> ---
>  drivers/net/null/rte_eth_null.c | 16 +++++++---------
>  1 file changed, 7 insertions(+), 9 deletions(-)
>
> diff --git a/drivers/net/null/rte_eth_null.c b/drivers/net/null/rte_eth_null.c
> index 9385ffd..247ede0 100644
> --- a/drivers/net/null/rte_eth_null.c
> +++ b/drivers/net/null/rte_eth_null.c
> @@ -130,10 +130,11 @@ eth_null_copy_rx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
>  		return 0;
>
>  	packet_size = h->internals->packet_size;
> +
> +	if (rte_pktmbuf_alloc_bulk(h->mb_pool, bufs, nb_bufs) != 0)
> +		return 0;
> +
>  	for (i = 0; i < nb_bufs; i++) {
> -		bufs[i] = rte_pktmbuf_alloc(h->mb_pool);
> -		if (!bufs[i])
> -			break;
>  		rte_memcpy(rte_pktmbuf_mtod(bufs[i], void *), h->dummy_packet,
>  				packet_size);
>  		bufs[i]->data_len = (uint16_t)packet_size;
> @@ -149,18 +150,15 @@ eth_null_copy_rx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
>  static uint16_t
>  eth_null_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
>  {
> -	int i;
>  	struct null_queue *h = q;
>
>  	if ((q == NULL) || (bufs == NULL))
>  		return 0;
>
> -	for (i = 0; i < nb_bufs; i++)
> -		rte_pktmbuf_free(bufs[i]);
> +	rte_mempool_put_bulk(bufs[0]->pool, (void **)bufs, nb_bufs);

Is it guaranteed that all mbufs will be from the same mempool?

> +	rte_atomic64_add(&h->tx_pkts, nb_bufs);
>
> -	rte_atomic64_add(&(h->tx_pkts), i);
> -
> -	return i;
> +	return nb_bufs;
>  }
>
>  static uint16_t
>