Subject: Re: [PATCH] mempool: test performance with larger bursts
From: fengchengwen
To: Morten Brørup
Date: Mon, 22 Jan 2024 15:10:55 +0800
Message-ID: <01eae3f0-8c74-73cc-0003-087b80f66386@huawei.com>
In-Reply-To: <20240121045249.22465-1-mb@smartsharesystems.com>
List-Id: DPDK patches and discussions
Hi Morten,

On 2024/1/21 12:52, Morten Brørup wrote:
> Bursts of up to 128 packets are not uncommon, so increase the maximum
> tested get and put burst sizes from 32 to 128.

How about also adding 64?

> Some applications keep more than 512 objects, so increase the maximum
> number of kept objects from 512 to 4096.
> This exceeds the typical mempool cache size of 512 objects, so the test
> also exercises the mempool driver.

And how about 2048? (I notice 1024 is already added below.)

PS: with this commit the number of combinations grows a lot, and every
subtest costs 5 seconds, so the total run time will increase greatly.
Could this perf suite support parameters, or a derivative command?
For instance:

REGISTER_PERF_TEST(mempool_perf_autotest, test_mempool_perf);
REGISTER_PERF_TEST(mempool_perf_autotest_keeps256, test_mempool_perf_keeps256);

Thanks.

> Signed-off-by: Morten Brørup
> ---
>  app/test/test_mempool_perf.c | 25 ++++++++++++++++---------
>  1 file changed, 16 insertions(+), 9 deletions(-)
>
> diff --git a/app/test/test_mempool_perf.c b/app/test/test_mempool_perf.c
> index 96de347f04..f52106e833 100644
> --- a/app/test/test_mempool_perf.c
> +++ b/app/test/test_mempool_perf.c
> @@ -1,6 +1,6 @@
>  /* SPDX-License-Identifier: BSD-3-Clause
>   * Copyright(c) 2010-2014 Intel Corporation
> - * Copyright(c) 2022 SmartShare Systems
> + * Copyright(c) 2022-2024 SmartShare Systems
>   */
>
>  #include
>
> @@ -54,22 +54,24 @@
>   *
>   * - Bulk size (*n_get_bulk*, *n_put_bulk*)
>   *
> - *   - Bulk get from 1 to 32
> - *   - Bulk put from 1 to 32
> - *   - Bulk get and put from 1 to 32, compile time constant
> + *   - Bulk get from 1 to 128
> + *   - Bulk put from 1 to 128
> + *   - Bulk get and put from 1 to 128, compile time constant
>   *
>   * - Number of kept objects (*n_keep*)
>   *
>   *   - 32
>   *   - 128
>   *   - 512
> + *   - 1024
> + *   - 4096
>   */
>
>  #define N 65536
>  #define TIME_S 5
>  #define MEMPOOL_ELT_SIZE 2048
> -#define MAX_KEEP 512
> -#define MEMPOOL_SIZE ((rte_lcore_count()*(MAX_KEEP+RTE_MEMPOOL_CACHE_MAX_SIZE))-1)
> +#define MAX_KEEP 4096
> +#define MEMPOOL_SIZE ((rte_lcore_count()*(MAX_KEEP+RTE_MEMPOOL_CACHE_MAX_SIZE*2))-1)
>
>  /* Number of pointers fitting into one cache line. */
>  #define CACHE_LINE_BURST (RTE_CACHE_LINE_SIZE / sizeof(uintptr_t))
>
> @@ -204,6 +206,8 @@ per_lcore_mempool_test(void *arg)
>  			CACHE_LINE_BURST, CACHE_LINE_BURST);
>  		else if (n_get_bulk == 32)
>  			ret = test_loop(mp, cache, n_keep, 32, 32);
> +		else if (n_get_bulk == 128)
> +			ret = test_loop(mp, cache, n_keep, 128, 128);
>  		else
>  			ret = -1;
>
> @@ -289,9 +293,9 @@ launch_cores(struct rte_mempool *mp, unsigned int cores)
>  static int
>  do_one_mempool_test(struct rte_mempool *mp, unsigned int cores)
>  {
> -	unsigned int bulk_tab_get[] = { 1, 4, CACHE_LINE_BURST, 32, 0 };
> -	unsigned int bulk_tab_put[] = { 1, 4, CACHE_LINE_BURST, 32, 0 };
> -	unsigned int keep_tab[] = { 32, 128, 512, 0 };
> +	unsigned int bulk_tab_get[] = { 1, 4, CACHE_LINE_BURST, 32, 128, 0 };
> +	unsigned int bulk_tab_put[] = { 1, 4, CACHE_LINE_BURST, 32, 128, 0 };
> +	unsigned int keep_tab[] = { 32, 128, 512, 1024, 4096, 0 };
>  	unsigned *get_bulk_ptr;
>  	unsigned *put_bulk_ptr;
>  	unsigned *keep_ptr;
>
> @@ -301,6 +305,9 @@ do_one_mempool_test(struct rte_mempool *mp, unsigned int cores)
>  	for (put_bulk_ptr = bulk_tab_put; *put_bulk_ptr; put_bulk_ptr++) {
>  		for (keep_ptr = keep_tab; *keep_ptr; keep_ptr++) {
>
> +			if (*keep_ptr < *get_bulk_ptr || *keep_ptr < *put_bulk_ptr)
> +				continue;
> +
>  			use_constant_values = 0;
>  			n_get_bulk = *get_bulk_ptr;
>  			n_put_bulk = *put_bulk_ptr;
>