From mboxrd@z Thu Jan 1 00:00:00 1970
From: Maxime Coquelin
To: dev@dpdk.org, tiwei.bie@intel.com, david.marchand@redhat.com,
	jfreimann@redhat.com, bruce.richardson@intel.com,
	zhihong.wang@intel.com, konstantin.ananyev@intel.com,
	mattias.ronnblom@ericsson.com
Cc: Maxime Coquelin
Date: Wed, 29 May 2019 15:04:20 +0200
Message-Id: <20190529130420.6428-6-maxime.coquelin@redhat.com>
In-Reply-To: <20190529130420.6428-1-maxime.coquelin@redhat.com>
References: <20190529130420.6428-1-maxime.coquelin@redhat.com>
Subject: [dpdk-dev] [PATCH v3 5/5] eal/x86: force inlining of all memcpy and mov helpers

Some helpers in the header file are forced inline while others are only
hinted inline; this patch forces inlining for all of them. This avoids
them being emitted as standalone functions when they are called multiple
times in the same object file. For example, when packed ring support was
added to the vhost-user library, rte_memcpy_generic was no longer
inlined.

Signed-off-by: Maxime Coquelin
---
 .../common/include/arch/x86/rte_memcpy.h       | 18 +++++++++---------
 1 file changed, 9 insertions(+), 9 deletions(-)

diff --git a/lib/librte_eal/common/include/arch/x86/rte_memcpy.h b/lib/librte_eal/common/include/arch/x86/rte_memcpy.h
index 7b758094df..ba44c4a328 100644
--- a/lib/librte_eal/common/include/arch/x86/rte_memcpy.h
+++ b/lib/librte_eal/common/include/arch/x86/rte_memcpy.h
@@ -115,7 +115,7 @@ rte_mov256(uint8_t *dst, const uint8_t *src)
  * Copy 128-byte blocks from one location to another,
  * locations should not overlap.
  */
-static inline void
+static __rte_always_inline void
 rte_mov128blocks(uint8_t *dst, const uint8_t *src, size_t n)
 {
 	__m512i zmm0, zmm1;
@@ -163,7 +163,7 @@ rte_mov512blocks(uint8_t *dst, const uint8_t *src, size_t n)
 	}
 }

-static inline void *
+static __rte_always_inline void *
 rte_memcpy_generic(void *dst, const void *src, size_t n)
 {
 	uintptr_t dstu = (uintptr_t)dst;
@@ -330,7 +330,7 @@ rte_mov64(uint8_t *dst, const uint8_t *src)
  * Copy 128 bytes from one location to another,
  * locations should not overlap.
  */
-static inline void
+static __rte_always_inline void
 rte_mov128(uint8_t *dst, const uint8_t *src)
 {
 	rte_mov32((uint8_t *)dst + 0 * 32, (const uint8_t *)src + 0 * 32);
@@ -343,7 +343,7 @@ rte_mov128(uint8_t *dst, const uint8_t *src)
  * Copy 128-byte blocks from one location to another,
  * locations should not overlap.
  */
-static inline void
+static __rte_always_inline void
 rte_mov128blocks(uint8_t *dst, const uint8_t *src, size_t n)
 {
 	__m256i ymm0, ymm1, ymm2, ymm3;
@@ -363,7 +363,7 @@ rte_mov128blocks(uint8_t *dst, const uint8_t *src, size_t n)
 	}
 }

-static inline void *
+static __rte_always_inline void *
 rte_memcpy_generic(void *dst, const void *src, size_t n)
 {
 	uintptr_t dstu = (uintptr_t)dst;
@@ -523,7 +523,7 @@ rte_mov64(uint8_t *dst, const uint8_t *src)
  * Copy 128 bytes from one location to another,
  * locations should not overlap.
  */
-static inline void
+static __rte_always_inline void
 rte_mov128(uint8_t *dst, const uint8_t *src)
 {
 	rte_mov16((uint8_t *)dst + 0 * 16, (const uint8_t *)src + 0 * 16);
@@ -655,7 +655,7 @@ __extension__ ({                                                      \
 	}                                                                 \
 })

-static inline void *
+static __rte_always_inline void *
 rte_memcpy_generic(void *dst, const void *src, size_t n)
 {
 	__m128i xmm0, xmm1, xmm2, xmm3, xmm4, xmm5, xmm6, xmm7, xmm8;
@@ -800,7 +800,7 @@ rte_memcpy_generic(void *dst, const void *src, size_t n)

 #endif /* RTE_MACHINE_CPUFLAG */

-static inline void *
+static __rte_always_inline void *
 rte_memcpy_aligned(void *dst, const void *src, size_t n)
 {
 	void *ret = dst;
@@ -860,7 +860,7 @@ rte_memcpy_aligned(void *dst, const void *src, size_t n)
 	return ret;
 }

-static inline void *
+static __rte_always_inline void *
 rte_memcpy(void *dst, const void *src, size_t n)
 {
 	if (!(((uintptr_t)dst | (uintptr_t)src) & ALIGNMENT_MASK))
-- 
2.21.0
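
For readers unfamiliar with the macro, here is a minimal sketch of the
difference the patch relies on, assuming GCC/Clang: plain
`static inline` is only a hint, so a helper called from many sites in
one object file may still be emitted out of line, while
`__attribute__((always_inline))` (which DPDK's `__rte_always_inline`
wraps) forces inlining at every call site. `copy16` below is a
hypothetical helper for illustration, not part of the patch:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Sketch of DPDK's forced-inline macro (rte_common.h defines it
 * along these lines for GCC/Clang). */
#define __rte_always_inline inline __attribute__((always_inline))

/* Hypothetical helper in the style of the rte_mov* functions.
 * With plain "static inline", the compiler may emit this as a
 * real function when it is called many times in one object file;
 * the attribute above forces it to be inlined at each call site. */
static __rte_always_inline void
copy16(uint8_t *dst, const uint8_t *src)
{
	memcpy(dst, src, 16);
}
```

The behavior is identical either way; only the generated code differs,
which is why the patch touches qualifiers without changing any logic.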