From: Olivier Matz <olivier.matz@6wind.com>
To: dev@dpdk.org
Date: Thu, 9 Jul 2015 11:01:29 +0200
Message-Id: <1436432489-31161-1-git-send-email-olivier.matz@6wind.com>
X-Mailer: git-send-email 2.1.4
Subject: [dpdk-dev] [PATCH] test/mempool: decrease size of requested mempool

In the test application, the default size of the allocated mempool is
calculated as follows:

  (RTE_MAX_LCORE * (RTE_MEMPOOL_CACHE_MAX_SIZE + max_kept_objects)) - 1

The objective is to ensure that all cores can fill their cache and keep
'max_kept_objects' objects at the same time. As RTE_MAX_LCORE is 128 and
RTE_MEMPOOL_CACHE_MAX_SIZE is 512 in the default configuration, this can
produce very large mempools (170 MB).

We can replace the maximum number of cores with the dynamic core count,
which drastically reduces the amount of memory needed for this test
(5 MB with 4 cores).

Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
---
 app/test/test_mempool.c      | 2 +-
 app/test/test_mempool_perf.c | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/app/test/test_mempool.c b/app/test/test_mempool.c
index 9eb0d5d..72f8fb6 100644
--- a/app/test/test_mempool.c
+++ b/app/test/test_mempool.c
@@ -75,7 +75,7 @@
 #define TIME_S 5
 #define MEMPOOL_ELT_SIZE 2048
 #define MAX_KEEP 128
-#define MEMPOOL_SIZE ((RTE_MAX_LCORE*(MAX_KEEP+RTE_MEMPOOL_CACHE_MAX_SIZE))-1)
+#define MEMPOOL_SIZE ((rte_lcore_count()*(MAX_KEEP+RTE_MEMPOOL_CACHE_MAX_SIZE))-1)

 static struct rte_mempool *mp;
 static struct rte_mempool *mp_cache, *mp_nocache;
diff --git a/app/test/test_mempool_perf.c b/app/test/test_mempool_perf.c
index 1f7539c..cdc02a0 100644
--- a/app/test/test_mempool_perf.c
+++ b/app/test/test_mempool_perf.c
@@ -94,7 +94,7 @@
 #define TIME_S 5
 #define MEMPOOL_ELT_SIZE 2048
 #define MAX_KEEP 128
-#define MEMPOOL_SIZE ((RTE_MAX_LCORE*(MAX_KEEP+RTE_MEMPOOL_CACHE_MAX_SIZE))-1)
+#define MEMPOOL_SIZE ((rte_lcore_count()*(MAX_KEEP+RTE_MEMPOOL_CACHE_MAX_SIZE))-1)

 static struct rte_mempool *mp;
 static struct rte_mempool *mp_cache, *mp_nocache;
--
2.1.4
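
P.S. For reference, a quick back-of-the-envelope check of the figures
quoted in the commit message. This is a standalone sketch, not part of
the patch: the constants are the default configuration values named
above, the 4-core case stands in for rte_lcore_count(), and only element
payload is counted (the mempool's per-object headers push the first
figure from ~167 MB toward the quoted 170 MB).

#include <stdio.h>

#define RTE_MAX_LCORE              128  /* default config */
#define RTE_MEMPOOL_CACHE_MAX_SIZE 512  /* default config */
#define MAX_KEEP                   128  /* max_kept_objects in the test */
#define MEMPOOL_ELT_SIZE           2048

int main(void)
{
	/* before the patch: sized for the compile-time maximum lcore count */
	unsigned long before = (RTE_MAX_LCORE *
		(MAX_KEEP + RTE_MEMPOOL_CACHE_MAX_SIZE)) - 1;
	/* after the patch: sized for the cores actually present (say, 4) */
	unsigned long after = (4UL *
		(MAX_KEEP + RTE_MEMPOOL_CACHE_MAX_SIZE)) - 1;

	printf("before: %lu elts, ~%lu MB of element payload\n",
	       before, before * MEMPOOL_ELT_SIZE / 1000000);
	printf("after:  %lu elts, ~%lu MB of element payload\n",
	       after, after * MEMPOOL_ELT_SIZE / 1000000);
	return 0;
}

This prints 81919 elements (~167 MB) before and 2559 elements (~5 MB)
after, matching the sizes claimed above.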