From mboxrd@z Thu Jan 1 00:00:00 1970
Received: from mga11.intel.com (mga11.intel.com [192.55.52.93])
 by dpdk.org (Postfix) with ESMTP id A66238E79
 for ; Tue, 10 Nov 2015 18:28:00 +0100 (CET)
Received: from orsmga002.jf.intel.com ([10.7.209.21])
 by fmsmga102.fm.intel.com with ESMTP; 10 Nov 2015 09:27:58 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="5.20,271,1444719600"; d="scan'208";a="847140671"
Received: from dwdohert-dpdk-fedora-20.ir.intel.com ([163.33.213.96])
 by orsmga002.jf.intel.com with ESMTP; 10 Nov 2015 09:27:54 -0800
From: Declan Doherty
To: dev@dpdk.org
Date: Tue, 10 Nov 2015 17:32:36 +0000
Message-Id: <1447176763-19303-4-git-send-email-declan.doherty@intel.com>
X-Mailer: git-send-email 2.4.3
In-Reply-To: <1447176763-19303-1-git-send-email-declan.doherty@intel.com>
References: <1447101259-18972-1-git-send-email-declan.doherty@intel.com>
 <1447176763-19303-1-git-send-email-declan.doherty@intel.com>
Subject: [dpdk-dev] [PATCH v6 03/10] eal: add __rte_packed /__rte_aligned macros
X-BeenThere: dev@dpdk.org
X-Mailman-Version: 2.1.15
Precedence: list
List-Id: patches and discussions about DPDK
X-List-Received-Date: Tue, 10 Nov 2015 17:28:01 -0000

Adding a new macro for specifying the __aligned__ attribute, and updating
the current __rte_cache_aligned macro to use it. Also adding a new macro
to specify the __packed__ attribute.

Acked-by: Sergio Gonzalez Monroy
Signed-off-by: Declan Doherty
---
 lib/librte_eal/common/include/rte_memory.h | 14 ++++++++++++--
 1 file changed, 12 insertions(+), 2 deletions(-)

diff --git a/lib/librte_eal/common/include/rte_memory.h b/lib/librte_eal/common/include/rte_memory.h
index 1bed415..18fd952 100644
--- a/lib/librte_eal/common/include/rte_memory.h
+++ b/lib/librte_eal/common/include/rte_memory.h
@@ -76,9 +76,19 @@ enum rte_page_sizes {
 /**< Return the first cache-aligned value greater or equal to size. */
 
 /**
+ * Force alignment
+ */
+#define __rte_aligned(a) __attribute__((__aligned__(a)))
+
+/**
  * Force alignment to cache line.
  */
-#define __rte_cache_aligned __attribute__((__aligned__(RTE_CACHE_LINE_SIZE)))
+#define __rte_cache_aligned __rte_aligned(RTE_CACHE_LINE_SIZE)
+
+/**
+ * Force a structure to be packed
+ */
+#define __rte_packed __attribute__((__packed__))
 
 typedef uint64_t phys_addr_t; /**< Physical address definition. */
 #define RTE_BAD_PHYS_ADDR ((phys_addr_t)-1)
@@ -104,7 +114,7 @@ struct rte_memseg {
 	/**< store segment MFNs */
 	uint64_t mfn[DOM0_NUM_MEMBLOCK];
 #endif
-} __attribute__((__packed__));
+} __rte_packed;
 
 /**
  * Lock page in physical memory and prevent from swapping.
--
2.4.3
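
For context, a minimal usage sketch of the macros this patch adds or reworks;
the struct names below are hypothetical and are not part of the patch:

#include <stdint.h>
#include <rte_memory.h>

/* Hypothetical example structs, for illustration only. */

/* __rte_packed removes inter-member padding, so sizeof(struct wire_hdr) == 5. */
struct wire_hdr {
	uint8_t  type;
	uint32_t len;
} __rte_packed;

/* __rte_aligned(a) forces alignment of the object to an explicit boundary
 * (here 64 bytes). */
struct stats_block {
	uint64_t rx;
	uint64_t tx;
} __rte_aligned(64);

/* After this patch, __rte_cache_aligned is simply
 * __rte_aligned(RTE_CACHE_LINE_SIZE). */
struct ring_state {
	volatile uint32_t head;
	volatile uint32_t tail;
} __rte_cache_aligned;

Note that packed layouts can make member accesses unaligned, which carries a
performance penalty (or faults) on some architectures, so __rte_packed is
typically reserved for wire or hardware-defined formats.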