From: Thomas Monjalon
To: Dmitry Kozlyuk
Cc: dev@dpdk.org, Anatoly Burakov
Subject: Re: [PATCH v1 3/6] mem: add dirty malloc element support
Date: Mon, 17 Jan 2022 15:07:08 +0100
Message-ID: <3209882.KgjxqYA5nG@thomas>
In-Reply-To: <20220117080801.481568-4-dkozlyuk@nvidia.com>
References: <20211230143744.3550098-1-dkozlyuk@nvidia.com>
 <20220117080801.481568-1-dkozlyuk@nvidia.com>
 <20220117080801.481568-4-dkozlyuk@nvidia.com>

17/01/2022 09:07, Dmitry Kozlyuk:
> The EAL malloc layer assumed that the content of all free elements
> is filled with zeros ("clean"), as opposed to uninitialized ("dirty").
> This assumption was ensured in two ways:
> 1. The EAL memalloc layer always returned clean memory.
> 2. Freed memory was cleared before being returned to the heap.
>
> Clearing the memory can be as slow as around 14 GiB/s.
> To avoid this cost, the memalloc layer is now allowed to return dirty memory.
> Such segments are marked with RTE_MEMSEG_FLAG_DIRTY.
> The allocator tracks elements that contain dirty memory
> using a new flag in the element header.
> When clean memory is requested via rte_zmalloc*()
> and the suitable element is dirty, it is cleared on allocation.
> When memory is deallocated, the freed element is joined
> with adjacent free elements, and the dirty flag is updated:
>
>     dirty + freed + dirty = dirty  => no need to clean
>             freed + dirty = dirty     the freed memory

It is not said why dirty parts are not cleaned.

>     clean + freed + clean = clean  => freed memory
>     clean + freed         = clean     must be cleared
>             freed + clean = clean
>             freed         = clean
>
> As a result, memory is either cleared on free, as before,
> or it is cleared on allocation if need be, but never twice.

It is not said whether this is a change for everybody, or only when an option is enabled.
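For reference, the joining table quoted above can be sketched as two small predicates. The helper names are hypothetical (this is not the actual DPDK malloc_elem API, only a model of the rule): the merged free element is dirty as soon as any adjacent free neighbor is dirty, and the freed memory is cleared on free only when every adjacent free neighbor is clean.

```c
#include <stdbool.h>

/* Hypothetical model of the joining rule (not DPDK code):
 * dirty + freed + dirty = dirty  => no need to clean the freed memory;
 * a single dirty neighbour is enough to make the merged element dirty. */
static bool
merged_is_dirty(bool prev_free_dirty, bool next_free_dirty)
{
	return prev_free_dirty || next_free_dirty;
}

/* clean + freed + clean = clean  => the freed memory must be cleared,
 * so the whole merged element can keep its "clean" status. */
static bool
must_clear_on_free(bool prev_free_dirty, bool next_free_dirty)
{
	return !merged_is_dirty(prev_free_dirty, next_free_dirty);
}
```

Elements left dirty here are cleared later, in the allocation path, when a clean allocation such as rte_zmalloc*() happens to pick them, so the memory is never cleared twice.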