From mboxrd@z Thu Jan 1 00:00:00 1970
From: Declan Doherty <declan.doherty@intel.com>
To: dev@dpdk.org
Date: Wed, 25 Nov 2015 13:25:10 +0000
Message-Id: <1448457917-27695-4-git-send-email-declan.doherty@intel.com>
X-Mailer: git-send-email 2.5.0
In-Reply-To: <1448457917-27695-1-git-send-email-declan.doherty@intel.com>
References: <1447441090-8129-1-git-send-email-declan.doherty@intel.com>
 <1448457917-27695-1-git-send-email-declan.doherty@intel.com>
Subject: [dpdk-dev] [PATCH v8 03/10] eal: add __rte_packed /__rte_aligned macros
List-Id: patches and discussions about DPDK

Add a new macro, __rte_aligned(a), for specifying the __aligned__
attribute, and update the existing __rte_cache_aligned macro to use it.
Also add a new __rte_packed macro to specify the __packed__ attribute.

Signed-off-by: Declan Doherty
Acked-by: Sergio Gonzalez Monroy
---
 lib/librte_eal/common/include/rte_memory.h | 14 ++++++++++++--
 1 file changed, 12 insertions(+), 2 deletions(-)

diff --git a/lib/librte_eal/common/include/rte_memory.h b/lib/librte_eal/common/include/rte_memory.h
index 067be10..20feed9 100644
--- a/lib/librte_eal/common/include/rte_memory.h
+++ b/lib/librte_eal/common/include/rte_memory.h
@@ -78,9 +78,19 @@ enum rte_page_sizes {
 /**< Return the first cache-aligned value greater or equal to size. */
 
 /**
+ * Force alignment
+ */
+#define __rte_aligned(a) __attribute__((__aligned__(a)))
+
+/**
  * Force alignment to cache line.
  */
-#define __rte_cache_aligned __attribute__((__aligned__(RTE_CACHE_LINE_SIZE)))
+#define __rte_cache_aligned __rte_aligned(RTE_CACHE_LINE_SIZE)
+
+/**
+ * Force a structure to be packed
+ */
+#define __rte_packed __attribute__((__packed__))
 
 typedef uint64_t phys_addr_t; /**< Physical address definition. */
 #define RTE_BAD_PHYS_ADDR ((phys_addr_t)-1)
@@ -106,7 +116,7 @@ struct rte_memseg {
 	/**< store segment MFNs */
 	uint64_t mfn[DOM0_NUM_MEMBLOCK];
 #endif
-} __attribute__((__packed__));
+} __rte_packed;
 
 /**
  * Lock page in physical memory and prevent from swapping.
--
2.5.0
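
For reference, a minimal usage sketch of the new macros (not part of the
patch; the structure names below are made up for illustration), assuming
rte_memory.h is on the include path:

#include <stdint.h>
#include <rte_memory.h>

/* Packed: no compiler-inserted padding between the fields. */
struct example_wire_hdr {
	uint8_t  type;
	uint16_t length;
	uint32_t flags;
} __rte_packed;

/* Explicit 16-byte alignment via the new parameterised macro. */
struct example_key {
	uint8_t data[20];
} __rte_aligned(16);

/* Unchanged behaviour: cache-line alignment, now built on __rte_aligned. */
struct example_stats {
	uint64_t rx_pkts;
	uint64_t tx_pkts;
} __rte_cache_aligned;

With GCC or Clang, sizeof(struct example_wire_hdr) is 7 rather than 8
because the packing removes the padding byte after the uint8_t field.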