From: "Burakov, Anatoly"
To: Alejandro Lucero
Cc: dev, Thomas Monjalon, Bruce Richardson, laszlo.madarassy@ericsson.com,
 laszlo.vadkerti@ericsson.com, andras.kovacs@ericsson.com,
 winnie.tian@ericsson.com, daniel.andrasi@ericsson.com,
 janos.kobor@ericsson.com, geza.koblo@ericsson.com,
 srinath.mannam@broadcom.com, scott.branden@broadcom.com,
 Ajit Khaparde, "Wiles, Keith", Shreyansh Jain, Shahaf Shuler,
 Andrew Rybchenko
Date: Thu, 27 Sep 2018 14:18:42 +0100
Subject: Re: [dpdk-dev] [PATCH v6 03/21] malloc: index heaps using heap ID rather than NUMA node

On 27-Sep-18 2:01 PM, Alejandro Lucero wrote:
> On Thu, Sep 27, 2018 at 11:47 AM Anatoly Burakov wrote:
>
>> Switch over all parts of EAL to use heap ID instead of NUMA node
>> ID to identify heaps. Heap ID for DPDK-internal heaps is the NUMA
>> node's index within the detected NUMA node list. Heap ID for
>> external heaps will be the order of their creation.
>>
>> Signed-off-by: Anatoly Burakov
>> ---
>>  config/common_base                     |  1 +
>>  config/rte_config.h                    |  1 +
>>  .../common/include/rte_eal_memconfig.h |  4 +-
>>  .../common/include/rte_malloc_heap.h   |  1 +
>>  lib/librte_eal/common/malloc_heap.c    | 98 +++++++++++++------
>>  lib/librte_eal/common/malloc_heap.h    |  3 +
>>  lib/librte_eal/common/rte_malloc.c     | 41 +++++---
>>  7 files changed, 106 insertions(+), 43 deletions(-)
>>
>> diff --git a/config/common_base b/config/common_base
>> index 155c7d40e..b52770b27 100644
>> --- a/config/common_base
>> +++ b/config/common_base
>> @@ -61,6 +61,7 @@ CONFIG_RTE_CACHE_LINE_SIZE=64
>>  CONFIG_RTE_LIBRTE_EAL=y
>>  CONFIG_RTE_MAX_LCORE=128
>>  CONFIG_RTE_MAX_NUMA_NODES=8
>> +CONFIG_RTE_MAX_HEAPS=32
>>  CONFIG_RTE_MAX_MEMSEG_LISTS=64
>>  # each memseg list will be limited to either RTE_MAX_MEMSEG_PER_LIST pages
>>  # or RTE_MAX_MEM_MB_PER_LIST megabytes worth of memory, whichever is smaller
>> diff --git a/config/rte_config.h b/config/rte_config.h
>> index 567051b9c..5dd2ac1ad 100644
>> --- a/config/rte_config.h
>> +++ b/config/rte_config.h
>> @@ -24,6 +24,7 @@
>>  #define RTE_BUILD_SHARED_LIB
>>
>>  /* EAL defines */
>> +#define RTE_MAX_HEAPS 32
>>  #define RTE_MAX_MEMSEG_LISTS 128
>>  #define RTE_MAX_MEMSEG_PER_LIST 8192
>>  #define RTE_MAX_MEM_MB_PER_LIST 32768
>> diff --git a/lib/librte_eal/common/include/rte_eal_memconfig.h b/lib/librte_eal/common/include/rte_eal_memconfig.h
>> index 6baa6854f..d7920a4e0 100644
>> --- a/lib/librte_eal/common/include/rte_eal_memconfig.h
>> +++ b/lib/librte_eal/common/include/rte_eal_memconfig.h
>> @@ -72,8 +72,8 @@ struct rte_mem_config {
>>
>>  	struct rte_tailq_head tailq_head[RTE_MAX_TAILQ]; /**< Tailqs for objects */
>>
>> -	/* Heaps of Malloc per socket */
>> -	struct malloc_heap malloc_heaps[RTE_MAX_NUMA_NODES];
>> +	/* Heaps of Malloc */
>> +	struct malloc_heap malloc_heaps[RTE_MAX_HEAPS];
>>
>>  	/* address of mem_config in primary process. used to map shared config into
>>  	 * exact same address the primary process maps it.
>> diff --git a/lib/librte_eal/common/include/rte_malloc_heap.h b/lib/librte_eal/common/include/rte_malloc_heap.h
>> index d43fa9097..e7ac32d42 100644
>> --- a/lib/librte_eal/common/include/rte_malloc_heap.h
>> +++ b/lib/librte_eal/common/include/rte_malloc_heap.h
>> @@ -27,6 +27,7 @@ struct malloc_heap {
>>
>>  	unsigned alloc_count;
>>  	size_t total_size;
>> +	unsigned int socket_id;
>>  } __rte_cache_aligned;
>>
>>  #endif /* _RTE_MALLOC_HEAP_H_ */
>> diff --git a/lib/librte_eal/common/malloc_heap.c b/lib/librte_eal/common/malloc_heap.c
>> index 3c8e2063b..1d1e35708 100644
>> --- a/lib/librte_eal/common/malloc_heap.c
>> +++ b/lib/librte_eal/common/malloc_heap.c
>> @@ -66,6 +66,21 @@ check_hugepage_sz(unsigned flags, uint64_t hugepage_sz)
>>  	return check_flag & flags;
>>  }
>>
>> +int
>> +malloc_socket_to_heap_id(unsigned int socket_id)
>> +{
>> +	struct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config;
>> +	int i;
>> +
>> +	for (i = 0; i < RTE_MAX_HEAPS; i++) {
>> +		struct malloc_heap *heap = &mcfg->malloc_heaps[i];
>> +
>> +		if (heap->socket_id == socket_id)
>> +			return i;
>> +	}
>> +	return -1;
>> +}
>> +
>>  /*
>>   * Expand the heap with a memory area.
>> */ >> @@ -93,12 +108,13 @@ malloc_add_seg(const struct rte_memseg_list *msl, >> struct rte_mem_config *mcfg = >> rte_eal_get_configuration()->mem_config; >> struct rte_memseg_list *found_msl; >> struct malloc_heap *heap; >> - int msl_idx; >> + int msl_idx, heap_idx; >> >> if (msl->external) >> return 0; >> >> - heap = &mcfg->malloc_heaps[msl->socket_id]; >> + heap_idx = malloc_socket_to_heap_id(msl->socket_id); >> > > malloc_socket_to_heap_id can return -1 so it requires to handle that > possibility. > Not really, this is called from memseg walk function - we know the msl and its socket ID are valid. Or at least something has gone *very* wrong if we got a -1 result :) However, i guess this check won't hurt. -- Thanks, Anatoly