From: Jamie Lavigne <lavignen@amazon.com>
To: dev@dpdk.org
Cc: Jamie Lavigne <lavignen@amazon.com>
Subject: [dpdk-dev] [PATCH v2] Correctly handle malloc_elem resize with padding
Date: Wed, 31 May 2017 00:16:58 +0000
Message-ID: <1496189818-2307-1-git-send-email-lavignen@amazon.com>
In-Reply-To: <1496189340-27813-1-git-send-email-lavignen@amazon.com>
References: <1496189340-27813-1-git-send-email-lavignen@amazon.com>

Currently when a malloc_elem is split after resizing, any padding present
in the elem is ignored.  This causes the resized elem to be too small when
padding is present, and user data can overwrite the beginning of the
following malloc_elem.

Solve this by including the size of the padding when computing where to
split the malloc_elem.
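For illustration only (not part of the patch), the split arithmetic can be
modelled in a few lines of standalone C.  HEADER_LEN and the example values
below are made up, and the model ignores the element trailer and the
cache-line rounding done by RTE_PTR_ALIGN_CEIL; it only shows why splitting
at elem + new_size leaves the data region short by elem->pad bytes, while
elem + new_size + elem->pad does not.

/*
 * Simplified model of the split-point arithmetic, not the real DPDK
 * structures.  A padded element lays out roughly as [pad][header][data];
 * elem->size counts the pad but new_size does not, so splitting at
 * elem + new_size shorts the usable data area by pad bytes.
 */
#include <stdio.h>
#include <stdint.h>

#define HEADER_LEN 64	/* stand-in for the per-element header overhead */

int main(void)
{
	uintptr_t elem = 0x1000;	/* hypothetical element start address */
	size_t pad = 64;		/* alignment padding, as recorded in elem->pad */
	size_t request = 1024;		/* bytes asked for in the resize */
	size_t new_size = request + HEADER_LEN;

	uintptr_t data_start = elem + pad + HEADER_LEN;	/* user data begins here */
	uintptr_t split_old = elem + new_size;		/* pre-patch split point */
	uintptr_t split_new = elem + new_size + pad;	/* patched split point */

	printf("old split leaves %zu usable bytes (short by %zu)\n",
	       (size_t)(split_old - data_start), pad);
	printf("new split leaves %zu usable bytes (request was %zu)\n",
	       (size_t)(split_new - data_start), request);
	return 0;
}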
Signed-off-by: Jamie Lavigne <lavignen@amazon.com>
---
 lib/librte_eal/common/malloc_elem.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/lib/librte_eal/common/malloc_elem.c b/lib/librte_eal/common/malloc_elem.c
index 42568e1..8766fa8 100644
--- a/lib/librte_eal/common/malloc_elem.c
+++ b/lib/librte_eal/common/malloc_elem.c
@@ -333,9 +333,11 @@ malloc_elem_resize(struct malloc_elem *elem, size_t size)
 	elem_free_list_remove(next);
 	join_elem(elem, next);
 
-	if (elem->size - new_size >= MIN_DATA_SIZE + MALLOC_ELEM_OVERHEAD){
+	const size_t new_total_size = new_size + elem->pad;
+
+	if (elem->size - new_total_size >= MIN_DATA_SIZE + MALLOC_ELEM_OVERHEAD) {
 		/* now we have a big block together. Lets cut it down a bit, by splitting */
-		struct malloc_elem *split_pt = RTE_PTR_ADD(elem, new_size);
+		struct malloc_elem *split_pt = RTE_PTR_ADD(elem, new_total_size);
 		split_pt = RTE_PTR_ALIGN_CEIL(split_pt, RTE_CACHE_LINE_SIZE);
 		split_elem(elem, split_pt);
 		malloc_elem_free_list_insert(split_pt);
-- 
2.7.3.AMZN
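As a possible way to exercise the fixed path from an application, sketched
under assumptions (whether rte_malloc actually creates a padded element and
whether rte_realloc resizes it in place both depend on the heap state at
runtime, so this is not a deterministic regression test):

/*
 * Sketch: allocate with a large alignment so the element is likely to
 * carry front padding, then grow it so malloc_elem_resize() can take the
 * split path.  Assumes a working EAL environment.
 */
#include <string.h>
#include <rte_eal.h>
#include <rte_malloc.h>

int main(int argc, char **argv)
{
	if (rte_eal_init(argc, argv) < 0)
		return -1;

	/* A large alignment makes front padding in the element likely. */
	char *buf = rte_malloc(NULL, 1024, 4096);
	if (buf == NULL)
		return -1;

	/* Growing the allocation can resize the element in place and split
	 * the joined block when the neighbouring element is free. */
	char *bigger = rte_realloc(buf, 8192, 4096);
	if (bigger == NULL) {
		rte_free(buf);
		return -1;
	}

	/* Before the fix, filling the full new length could clobber the
	 * header of the following element when padding was present. */
	memset(bigger, 0xab, 8192);

	rte_free(bigger);
	return 0;
}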