From mboxrd@z Thu Jan 1 00:00:00 1970
From: Konstantin Ananyev
To: dev@dpdk.org
Cc: michel@digirati.com.br, olivier.matz@6wind.com, anatoly.burakov@intel.com,
 vipin.varghese@intel.com, Konstantin Ananyev
Date: Fri, 27 Sep 2019 14:50:52 +0100
Message-Id: <20190927135054.20845-2-konstantin.ananyev@intel.com>
X-Mailer: git-send-email 2.18.0
In-Reply-To: <20190927135054.20845-1-konstantin.ananyev@intel.com>
References: <20190816125304.29719-1-konstantin.ananyev@intel.com>
 <20190927135054.20845-1-konstantin.ananyev@intel.com>
Subject: [dpdk-dev] [PATCH v2 1/3] eal: move CACHE and IOVA related definitions
List-Id: DPDK patches and discussions

Right now the RTE_CACHE_ and IOVA definitions are located inside rte_memory.h.
That might cause unwanted inclusions of arch/OS specific header files;
see [1] for a particular example of the problem.
Probably the simplest way to deal with such problems is to move these
definitions into rte_common.h.
Note that this move doesn't introduce any change in functionality.

[1] https://bugs.dpdk.org/show_bug.cgi?id=321

Suggested-by: Vipin Varghese
Signed-off-by: Konstantin Ananyev
---
 lib/librte_eal/common/include/rte_common.h | 44 ++++++++++++++++++++++
 lib/librte_eal/common/include/rte_memory.h | 38 -------------------
 2 files changed, 44 insertions(+), 38 deletions(-)

diff --git a/lib/librte_eal/common/include/rte_common.h b/lib/librte_eal/common/include/rte_common.h
index 05a3a6401..c275093d7 100644
--- a/lib/librte_eal/common/include/rte_common.h
+++ b/lib/librte_eal/common/include/rte_common.h
@@ -291,6 +291,50 @@ rte_is_aligned(void *ptr, unsigned align)
  */
 #define RTE_BUILD_BUG_ON(condition) ((void)sizeof(char[1 - 2*!!(condition)]))
 
+/*********** RTE_CACHE related macros ********/
+
+#define RTE_CACHE_LINE_MASK (RTE_CACHE_LINE_SIZE-1) /**< Cache line mask. */
+
+#define RTE_CACHE_LINE_ROUNDUP(size) \
+	(RTE_CACHE_LINE_SIZE * ((size + RTE_CACHE_LINE_SIZE - 1) / \
+	RTE_CACHE_LINE_SIZE))
+/**< Return the first cache-aligned value greater or equal to size. */
+
+/**< Cache line size in terms of log2 */
+#if RTE_CACHE_LINE_SIZE == 64
+#define RTE_CACHE_LINE_SIZE_LOG2 6
+#elif RTE_CACHE_LINE_SIZE == 128
+#define RTE_CACHE_LINE_SIZE_LOG2 7
+#else
+#error "Unsupported cache line size"
+#endif
+
+#define RTE_CACHE_LINE_MIN_SIZE 64 /**< Minimum Cache line size. */
+
+/**
+ * Force alignment to cache line.
+ */
+#define __rte_cache_aligned __rte_aligned(RTE_CACHE_LINE_SIZE)
+
+/**
+ * Force minimum cache line alignment.
+ */
+#define __rte_cache_min_aligned __rte_aligned(RTE_CACHE_LINE_MIN_SIZE)
+
+/*********** PA/IOVA type definitions ********/
+
+typedef uint64_t phys_addr_t; /**< Physical address. */
+#define RTE_BAD_PHYS_ADDR ((phys_addr_t)-1)
+/**
+ * IO virtual address type.
+ * When the physical addressing mode (IOVA as PA) is in use,
+ * the translation from an IO virtual address (IOVA) to a physical address
+ * is a direct mapping, i.e. the same value.
+ * Otherwise, in virtual mode (IOVA as VA), an IOMMU may do the translation.
+ */
+typedef uint64_t rte_iova_t;
+#define RTE_BAD_IOVA ((rte_iova_t)-1)
+
 /**
  * Combines 32b inputs most significant set bits into the least
  * significant bits to construct a value with the same MSBs as x

diff --git a/lib/librte_eal/common/include/rte_memory.h b/lib/librte_eal/common/include/rte_memory.h
index 4717dcb43..38e00e382 100644
--- a/lib/librte_eal/common/include/rte_memory.h
+++ b/lib/librte_eal/common/include/rte_memory.h
@@ -39,44 +39,6 @@ enum rte_page_sizes {
 };
 
 #define SOCKET_ID_ANY -1 /**< Any NUMA socket. */
-#define RTE_CACHE_LINE_MASK (RTE_CACHE_LINE_SIZE-1) /**< Cache line mask. */
-
-#define RTE_CACHE_LINE_ROUNDUP(size) \
-	(RTE_CACHE_LINE_SIZE * ((size + RTE_CACHE_LINE_SIZE - 1) / RTE_CACHE_LINE_SIZE))
-/**< Return the first cache-aligned value greater or equal to size. */
-
-/**< Cache line size in terms of log2 */
-#if RTE_CACHE_LINE_SIZE == 64
-#define RTE_CACHE_LINE_SIZE_LOG2 6
-#elif RTE_CACHE_LINE_SIZE == 128
-#define RTE_CACHE_LINE_SIZE_LOG2 7
-#else
-#error "Unsupported cache line size"
-#endif
-
-#define RTE_CACHE_LINE_MIN_SIZE 64 /**< Minimum Cache line size. */
-
-/**
- * Force alignment to cache line.
- */
-#define __rte_cache_aligned __rte_aligned(RTE_CACHE_LINE_SIZE)
-
-/**
- * Force minimum cache line alignment.
- */
-#define __rte_cache_min_aligned __rte_aligned(RTE_CACHE_LINE_MIN_SIZE)
-
-typedef uint64_t phys_addr_t; /**< Physical address. */
-#define RTE_BAD_PHYS_ADDR ((phys_addr_t)-1)
-/**
- * IO virtual address type.
- * When the physical addressing mode (IOVA as PA) is in use,
- * the translation from an IO virtual address (IOVA) to a physical address
- * is a direct mapping, i.e. the same value.
- * Otherwise, in virtual mode (IOVA as VA), an IOMMU may do the translation.
- */
-typedef uint64_t rte_iova_t;
-#define RTE_BAD_IOVA ((rte_iova_t)-1)
-
 /**
  * Physical memory segment descriptor.
-- 
2.17.1
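
For illustration only (not part of the patch): after this move, a consumer that
only needs the cache-alignment macros and the PA/IOVA types can include
rte_common.h without pulling in rte_memory.h and its arch/OS specific headers.
A minimal sketch under that assumption follows; the struct, field, and function
names (q_counters, ring_iova, q_counters_mem_size) are hypothetical and chosen
just for the example.

	/* Hypothetical consumer: rte_common.h alone provides these symbols now. */
	#include <stddef.h>
	#include <stdint.h>
	#include <rte_common.h>

	/* Per-queue counters padded to a full cache line to avoid false sharing. */
	struct q_counters {
		uint64_t packets;
		uint64_t bytes;
		rte_iova_t ring_iova;   /* IO address of the queue ring; RTE_BAD_IOVA until mapped */
	} __rte_cache_aligned;

	/* Memory needed for an array of n counters, rounded up to a cache-line multiple. */
	static inline size_t
	q_counters_mem_size(unsigned int n)
	{
		return RTE_CACHE_LINE_ROUNDUP(n * sizeof(struct q_counters));
	}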