Subject: Re: [PATCH v8 7/9] memarea: support backup memory mechanism
To: Mattias Rönnblom, Dmitry Kozlyuk
References: <20220721044648.6817-1-fengchengwen@huawei.com> <20221011121720.2657-1-fengchengwen@huawei.com> <20221011121720.2657-8-fengchengwen@huawei.com> <20221011185832.4aaffcc5@sovereign> <4a8e9496-17ac-d58a-9744-00630c4b7412@lysator.liu.se>
From: fengchengwen
Message-ID: <7cf77cf3-9071-83da-5c6f-271faa9ae2ed@huawei.com>
Date: Wed, 12 Oct 2022 16:23:58 +0800
In-Reply-To: <4a8e9496-17ac-d58a-9744-00630c4b7412@lysator.liu.se>
List-Id: DPDK patches and
 discussions

Hi Mattias,

On 2022/10/12 4:26, Mattias Rönnblom wrote:
> On 2022-10-11 17:58, Dmitry Kozlyuk wrote:
>> 2022-10-11 12:17 (UTC+0000), Chengwen Feng:
>>> This patch adds a memarea backup mechanism, where an allocation request
>>> which cannot be met by the current memarea is deferred to its backup
>>> memarea.
>>
>> This is a controversial feature.
>>
>> 1. It violates the memarea property of freeing all allocated objects
>>    at once when the memarea itself is destroyed. Objects allocated
>>    in the backup memarea through the destroyed one will remain.
>>
>> 2. If there was an API to check that an object belongs to a memarea
>>    (the check from rte_memarea_update_refcnt() in this patch),
>>    it would be trivial to implement this feature on top of the memarea API.
>>
>> Nit: "deferred" is about time -> "forwarded", "delegated", or "handed over".
>>
>> A general note about this series.
>> IMO, libraries should have limited scope and allow composition
>> rather than accumulate features and control their use via options.
>> The core idea of memarea is an allocator within a memory region,
>> a fast one and with low overhead, usable to free all objects at once.
>>
>
> What's a typical use case for a memory region? In a packet processing context.

We used it in a video system:

  udp-packets -> splitter -> stream-1-reorder -> stream-1-decoder --
                   |                                               |
                   |                                               |--> compose-picture -> picture-encoder -> udp-packets
                   |                                               |
                   --> stream-2-reorder -> stream-2-decoder -------

Each stream decoder uses a dedicated memarea, which has the following advantages:
1. There is no global lock, unlike rte_malloc.
2. A memory-object leak only impacts that decoder; it does not impact the global system.
3. It simplifies programming: just destroy the memarea once the session is finished.
As you can see, this is a different approach to memory optimization: pktmbuf_pool uses prealloc plus a per-lcore cache, which reduces lock contention, but it does not confine the impact of a memory leak.

> The ability to instantiate a variable number of heaps/regions seems useful, although it's not clear to me if the application should order that to happen on a per-lcore basis, on a per-NUMA node basis, a per--basis, or something else entirely.
>
> It seems to me that DPDK is lacking a variable-size memory allocator which is efficient and safe to use from lcore threads. My impression is that glibc malloc() and rte_malloc() are too slow for the packet processing threads, and involve code paths taking locks shared with non-EAL threads.
>
>> This is orthogonal to the question of where the memory comes from.
>> HEAP and LIBC sources could be built on top of the USER source,
>> which means that the concept of source is less relevant.
>> The backup mechanism could instead be a way to add memory to the area,
>> in which case HEAP and LIBC memareas would also be expandable.
>> The memarea API could be defined as a structure with callbacks,
>> and different types of memarea could be combined,
>> for example, an interlocked memarea on top of an expandable memarea on top of
>> a memarea with a particular memory management algorithm.
>>
>> I'm not saying we should immediately build all this complexity.
>
> The part with implementing runtime polymorphism using a struct with function pointers, instead of the enum+switch-based-type-test approach, doesn't sound like something that would add complexity. Rather the opposite.
>
> Also, having a clear-cut separation of concern between the-thing-that-allocates-and-frees-the-region and the region-internal memory manager (what's called an algorithm in this patchset) also seems like something that would simplify the code.
>
>> On the contrary, I would merge the basic things first,
>> then try to _stack_ new features on top,
>> then look if interfaces emerge that can be used for composition.
>
> .