From: Anatoly Burakov
To: dev@dpdk.org
Cc: Sergio Gonzalez Monroy, stable@dpdk.org
Date: Thu, 21 Dec 2017 16:54:24 +0000
Subject: [dpdk-stable] [PATCH] eal: fix end for bounded malloc elements

In cases when alignment is bigger than the boundary, we may incorrectly
calculate the end of a bounded malloc element.

Consider this: suppose we are allocating a bounded malloc element that
should be 128 bytes in size, bounded to 128 bytes and aligned on a
256-byte boundary. Suppose our malloc element ends at 0x140 - that is,
256 plus one 64-byte cacheline.

Right at the start, we align new_data_start to fit the required element
size and to satisfy the specified alignment - so new_data_start becomes
0. This fails the subsequent bounds check, because our element cannot
extend more than 128 bytes past its start, yet its end is at 0x140
(320).

So we enter the bounds handling branch. There, we align end_pt down to
our 128-byte bound and end up with 0x100 (since 256 is 128-byte
aligned). We recalculate new_data_start and it stays at 0; however, our
end is still at 0x100, which is beyond the 128-byte boundary, so we
report an inability to reserve a bounded element when we could have
reserved one.

This patch adds an end_pt recalculation after the new_data_start
adjustment - we already know that size <= bound, so this is safe - and
we then correctly report that this element can, in fact, be used for a
bounded malloc allocation.
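To make the failure concrete, below is a minimal standalone sketch (not
part of the patch, and not DPDK code - align_floor and check_bounds are
hypothetical helpers written for this example, borrowing only the
semantics of RTE_ALIGN_FLOOR) that replays the arithmetic above with
and without the recalculation:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* same semantics as DPDK's RTE_ALIGN_FLOOR for power-of-two alignments */
static uintptr_t align_floor(uintptr_t v, uintptr_t align)
{
	return v & ~(align - 1);
}

/* replay of the boundary check in elem_start_pt(), with the fix toggled */
static bool check_bounds(uintptr_t end_pt, size_t size, uintptr_t align,
		uintptr_t bound, bool with_fix)
{
	const uintptr_t bmask = ~(bound - 1);
	uintptr_t new_data_start = align_floor(end_pt - size, align);

	if ((new_data_start & bmask) != ((end_pt - 1) & bmask)) {
		end_pt = align_floor(end_pt, bound);
		new_data_start = align_floor(end_pt - size, align);
		if (with_fix)
			end_pt = new_data_start + size; /* the one-line fix */
		if (((end_pt - 1) & bmask) != (new_data_start & bmask))
			return false; /* element rejected */
	}
	return true; /* element usable, data placed at new_data_start */
}

int main(void)
{
	/* numbers from the example: end at 0x140, size 128,
	 * align 256, bound 128 */
	printf("without fix: %s\n",
		check_bounds(0x140, 128, 256, 128, false) ?
			"usable" : "rejected");
	printf("with fix:    %s\n",
		check_bounds(0x140, 128, 256, 128, true) ?
			"usable" : "rejected");
	return 0;
}

Run as-is, the first call rejects the element (end_pt is left at 0x100,
which crosses the 128-byte boundary relative to new_data_start 0),
while the second accepts it, placing the data at [0x0, 0x80).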
Fixes: fafcc11985a2 ("mem: rework memzone to be allocated by malloc")
Cc: sergio.gonzalez.monroy@intel.com
Cc: stable@dpdk.org

Signed-off-by: Anatoly Burakov
---
 lib/librte_eal/common/malloc_elem.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/lib/librte_eal/common/malloc_elem.c b/lib/librte_eal/common/malloc_elem.c
index 98bcd37..f6cbc42 100644
--- a/lib/librte_eal/common/malloc_elem.c
+++ b/lib/librte_eal/common/malloc_elem.c
@@ -98,6 +98,7 @@ elem_start_pt(struct malloc_elem *elem, size_t size, unsigned align,
 	if ((new_data_start & bmask) != ((end_pt - 1) & bmask)) {
 		end_pt = RTE_ALIGN_FLOOR(end_pt, bound);
 		new_data_start = RTE_ALIGN_FLOOR((end_pt - size), align);
+		end_pt = new_data_start + size;
 		if (((end_pt - 1) & bmask) != (new_data_start & bmask))
 			return NULL;
 	}
-- 
2.7.4