Subject: Re: [PATCH v8 7/9] memarea: support backup memory mechanism
To: Dmitry Kozlyuk
References: <20220721044648.6817-1-fengchengwen@huawei.com>
 <20221011121720.2657-1-fengchengwen@huawei.com>
 <20221011121720.2657-8-fengchengwen@huawei.com>
 <20221011185832.4aaffcc5@sovereign>
From: fengchengwen
Message-ID: <8ed10f13-1783-9a2b-ece0-057d68dc2557@huawei.com>
Date: Wed, 12 Oct 2022 15:57:10 +0800
In-Reply-To: <20221011185832.4aaffcc5@sovereign>
List-Id: DPDK patches and discussions

Hi Dmitry,

On 2022/10/11 23:58, Dmitry Kozlyuk wrote:
> 2022-10-11 12:17 (UTC+0000), Chengwen Feng:
>> This patch adds a memarea backup mechanism, where an allocation request
>> which cannot be met by the current memarea is deferred to its backup
>> memarea.
>
> This is a controversial feature.
>
> 1. It violates the memarea property of freeing all allocated objects
> at once when the memarea itself is destroyed. Objects allocated
> in the backup memarea through the destroyed one will remain.
>
> 2. If there were an API to check that an object belongs to a memarea
> (the check from rte_memarea_update_refcnt() in this patch),
> it would be trivial to implement this feature on top of the memarea API.

This patch adds a 'struct memarea *owner' field to each object's metadata,
which is used to free objects that were allocated from the backup memarea.
So this problem does not exist. (See the sketch at the end of this mail.)

> Nit: "Deferred" is about time -> "forwarded", "delegated", or "handed over".

OK.

> A general note about this series.
> IMO, libraries should have limited scope and allow composition
> rather than accumulate features and control their use via options.
> The core idea of memarea is an allocator within a memory region,
> a fast one with low overhead, usable to free all objects at once.
>
> This is orthogonal to the question of where the memory comes from.
> HEAP and LIBC sources could be built on top of the USER source,
> which means that the concept of a source is less relevant.
> A backup mechanism could instead be a way to add memory to the area,
> in which case HEAP and LIBC memareas would also be expandable.
> The memarea API could be defined as a structure with callbacks,
> and different types of memarea could be combined:
> for example, an interlocked memarea on top of an expandable memarea
> on top of a memarea with a particular memory management algorithm.
>
> I'm not saying we should immediately build all this complexity.
> On the contrary, I would merge the basic things first,
> then try to _stack_ new features on top,
> then see whether interfaces emerge that can be used for composition.

Agreed, I will drop this feature in the next version.
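
For reference, here is a minimal, self-contained sketch of the owner-pointer
scheme mentioned above. All names (object_hdr, memarea_alloc, and so on) are
hypothetical stand-ins rather than the actual rte_memarea implementation, and
a trivial bump allocator stands in for the real memory management algorithm:

#include <stdio.h>
#include <stdlib.h>

struct memarea {
	char *base;
	size_t capacity;
	size_t offset;
	struct memarea *backup; /* optional backup memarea */
};

struct object_hdr {
	struct memarea *owner; /* memarea the object actually came from */
};

static void *memarea_alloc(struct memarea *ma, size_t size)
{
	/* Bump allocation; alignment handling is omitted for brevity. */
	if (ma->offset + sizeof(struct object_hdr) + size > ma->capacity)
		/* Hand the request over to the backup memarea, if any. */
		return ma->backup != NULL ?
			memarea_alloc(ma->backup, size) : NULL;

	struct object_hdr *hdr = (struct object_hdr *)(ma->base + ma->offset);
	ma->offset += sizeof(*hdr) + size;
	hdr->owner = ma; /* record the owner in the object's metadata */
	return hdr + 1;
}

static void memarea_free(void *obj)
{
	struct object_hdr *hdr = (struct object_hdr *)obj - 1;
	/* The owner field routes the free to the right memarea even if
	 * the object was handed over to the backup; a bump allocator
	 * cannot reclaim, so just report the owner here. */
	printf("object %p is owned by memarea %p\n", obj, (void *)hdr->owner);
}

int main(void)
{
	struct memarea backup = { malloc(256), 256, 0, NULL };
	struct memarea front = { malloc(64), 64, 0, &backup };

	void *a = memarea_alloc(&front, 32);  /* fits in the front area */
	void *b = memarea_alloc(&front, 128); /* handed over to backup */
	memarea_free(a);
	memarea_free(b);
	free(front.base);
	free(backup.base);
	return 0;
}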
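
And for completeness, a rough sketch of the callback-based composition you
describe, again with hypothetical names (memarea_ops, expandable), not a
proposed API:

#include <stddef.h>

struct memarea_ops {
	void *(*alloc)(struct memarea_ops *self, size_t size);
	void (*free_obj)(struct memarea_ops *self, void *obj);
};

/* An "expandable" decorator: when the inner memarea is exhausted, it
 * adds memory to that area instead of forwarding the request to a
 * separate backup memarea. */
struct expandable {
	struct memarea_ops ops;    /* placed first, so it can act as the base */
	struct memarea_ops *inner; /* any other memarea implementation */
};

static void *expandable_alloc(struct memarea_ops *self, size_t size)
{
	struct expandable *e = (struct expandable *)self;
	void *obj = e->inner->alloc(e->inner, size);
	if (obj == NULL) {
		/* add a new memory region to the inner memarea (elided),
		 * then retry the allocation */
		obj = e->inner->alloc(e->inner, size);
	}
	return obj;
}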