* [dpdk-dev] add support for HTM lock elision for x86
@ 2015-06-02 13:11 Roman Dementiev
2015-06-02 13:11 ` [dpdk-dev] [PATCH 1/3] spinlock: " Roman Dementiev
` (5 more replies)
0 siblings, 6 replies; 27+ messages in thread
From: Roman Dementiev @ 2015-06-02 13:11 UTC (permalink / raw)
To: dev
This series of patches adds methods that use hardware memory transactions (HTM)
on the fast path for DPDK locks (a.k.a. lock elision). Here the methods are implemented
for x86 using Restricted Transactional Memory instructions (Intel(r) Transactional
Synchronization Extensions). The implementation falls back to the normal DPDK lock
if HTM is not available or memory transactions fail.
This is not a replacement for lock usages since not all critical sections protected
by locks are friendly to HTM.
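For readers new to lock elision, the following sketch (an editorial illustration, not
part of the patches; the helper name, abort code and retry limit are made up) shows the
pattern the series implements in rte_try_tm(): run the critical section inside an RTM
transaction, abort if the lock is observed taken, and fall back to really acquiring the
lock after repeated failures. It assumes a compiler with RTM support (e.g. gcc with
-mrtm) and the standard <immintrin.h> intrinsics.

/* Illustrative lock-elision helper; returns 1 when a transaction is active
 * (the caller later commits with _xend()), 0 when the caller must take the
 * real lock. "lock" stands for any spinlock-style flag word. */
#include <immintrin.h>

#define MY_XABORT_LOCK_BUSY 0xff

static int elide_lock(volatile int *lock, int max_retries)
{
    while (max_retries--) {
        unsigned int status = _xbegin();

        if (status == _XBEGIN_STARTED) {
            if (*lock)  /* reading the lock adds it to the transaction read set */
                _xabort(MY_XABORT_LOCK_BUSY);
            return 1;   /* critical section now runs transactionally */
        }
        /* aborted: wait until the lock holder is done, then decide on a retry */
        while (*lock)
            _mm_pause();
        if (!(status & _XABORT_RETRY) &&
            !((status & _XABORT_EXPLICIT) &&
              _XABORT_CODE(status) == MY_XABORT_LOCK_BUSY))
            break;      /* the hardware says retrying is unlikely to help */
    }
    return 0;
}

The DPDK version in the patches additionally caches the CPUID RTM flag at startup, so
on CPUs without TSX the whole path degenerates to the plain lock with no extra cost.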
Roman Dementiev (3):
spinlock: add support for HTM lock elision for x86
rwlock: add support for HTM lock elision for x86
test scaling of HTM lock elision protecting rte_hash
app/test/Makefile | 1 +
app/test/test_hash_scaling.c | 223 +++++++++++++++++++++
lib/librte_eal/common/Makefile | 4 +-
.../common/include/arch/ppc_64/rte_rwlock.h | 38 ++++
.../common/include/arch/ppc_64/rte_spinlock.h | 41 ++++
lib/librte_eal/common/include/arch/x86/rte_rtm.h | 73 +++++++
.../common/include/arch/x86/rte_rwlock.h | 82 ++++++++
.../common/include/arch/x86/rte_spinlock.h | 107 ++++++++++
lib/librte_eal/common/include/generic/rte_rwlock.h | 194 ++++++++++++++++++
.../common/include/generic/rte_spinlock.h | 75 +++++++
lib/librte_eal/common/include/rte_rwlock.h | 158 ---------------
11 files changed, 836 insertions(+), 160 deletions(-)
* [dpdk-dev] [PATCH 1/3] spinlock: add support for HTM lock elision for x86
2015-06-02 13:11 [dpdk-dev] add support for HTM lock elision for x86 Roman Dementiev
@ 2015-06-02 13:11 ` Roman Dementiev
2015-06-02 13:11 ` [dpdk-dev] [PATCH 2/3] rwlock: " Roman Dementiev
` (4 subsequent siblings)
5 siblings, 0 replies; 27+ messages in thread
From: Roman Dementiev @ 2015-06-02 13:11 UTC (permalink / raw)
To: dev
This patch adds methods that use hardware memory transactions (HTM)
on the fast path for spinlocks (a.k.a. lock elision). Here the methods
are implemented for x86 using Restricted Transactional Memory
instructions (Intel(r) Transactional Synchronization Extensions).
The implementation falls back to the normal spinlock if HTM is not
available or memory transactions fail.
This is not a replacement for all spinlock usages, since not all
critical sections protected by spinlocks are friendly to HTM.
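A usage sketch (hypothetical caller code, not part of the patch; the table, its size
and the update function are invented for illustration): callers simply switch to the
_tm variants, and a section entered with rte_spinlock_lock_tm() or
rte_spinlock_trylock_tm() must be left with rte_spinlock_unlock_tm(), because the
unlock routine decides whether to commit the transaction or to release the lock
taken on the fall-back path.

#include <stdint.h>
#include <rte_spinlock.h>

#define NB_SLOTS 1024

static rte_spinlock_t table_lock = RTE_SPINLOCK_INITIALIZER;
static uint64_t table[NB_SLOTS];

/* Threads hitting different slots can run concurrently under elision;
 * only transactions touching the same cache lines conflict and fall back. */
static void slot_update(uint32_t slot, uint64_t delta)
{
    rte_spinlock_lock_tm(&table_lock);
    table[slot % NB_SLOTS] += delta;
    /* Pairs with the _tm lock: commits the transaction, or releases the
     * spinlock if the fall-back path was taken. */
    rte_spinlock_unlock_tm(&table_lock);
}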
Signed-off-by: Roman Dementiev <roman.dementiev@intel.com>
---
.../common/include/arch/ppc_64/rte_spinlock.h | 41 ++++++++
lib/librte_eal/common/include/arch/x86/rte_rtm.h | 73 ++++++++++++++
.../common/include/arch/x86/rte_spinlock.h | 107 +++++++++++++++++++++
.../common/include/generic/rte_spinlock.h | 75 +++++++++++++++
4 files changed, 296 insertions(+)
create mode 100644 lib/librte_eal/common/include/arch/x86/rte_rtm.h
diff --git a/lib/librte_eal/common/include/arch/ppc_64/rte_spinlock.h b/lib/librte_eal/common/include/arch/ppc_64/rte_spinlock.h
index cf8b81a..3336435 100644
--- a/lib/librte_eal/common/include/arch/ppc_64/rte_spinlock.h
+++ b/lib/librte_eal/common/include/arch/ppc_64/rte_spinlock.h
@@ -66,6 +66,47 @@ rte_spinlock_trylock(rte_spinlock_t *sl)
#endif
+static inline int rte_tm_supported(void)
+{
+ return 0;
+}
+
+static inline void
+rte_spinlock_lock_tm(rte_spinlock_t *sl)
+{
+ rte_spinlock_lock(sl); /* fall-back */
+}
+
+static inline int
+rte_spinlock_trylock_tm(rte_spinlock_t *sl)
+{
+ return rte_spinlock_trylock(sl);
+}
+
+static inline void
+rte_spinlock_unlock_tm(rte_spinlock_t *sl)
+{
+ rte_spinlock_unlock(sl);
+}
+
+static inline void
+rte_spinlock_recursive_lock_tm(rte_spinlock_recursive_t *slr)
+{
+ rte_spinlock_recursive_lock(slr); /* fall-back */
+}
+
+static inline void
+rte_spinlock_recursive_unlock_tm(rte_spinlock_recursive_t *slr)
+{
+ rte_spinlock_recursive_unlock(slr);
+}
+
+static inline int
+rte_spinlock_recursive_trylock_tm(rte_spinlock_recursive_t *slr)
+{
+ return rte_spinlock_recursive_trylock(slr);
+}
+
#ifdef __cplusplus
}
#endif
diff --git a/lib/librte_eal/common/include/arch/x86/rte_rtm.h b/lib/librte_eal/common/include/arch/x86/rte_rtm.h
new file mode 100644
index 0000000..d935641
--- /dev/null
+++ b/lib/librte_eal/common/include/arch/x86/rte_rtm.h
@@ -0,0 +1,73 @@
+#ifndef _RTE_RTM_H_
+#define _RTE_RTM_H_ 1
+
+/*
+ * Copyright (c) 2012,2013 Intel Corporation
+ * Author: Andi Kleen
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that: (1) source code distributions
+ * retain the above copyright notice and this paragraph in its entirety, (2)
+ * distributions including binary code include the above copyright notice and
+ * this paragraph in its entirety in the documentation or other materials
+ * provided with the distribution
+ *
+ * THIS SOFTWARE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR IMPLIED
+ * WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED WARRANTIES OF
+ * MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+/* Official RTM intrinsics interface matching gcc/icc, but works
+ on older gcc compatible compilers and binutils. */
+
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+
+#define RTE_XBEGIN_STARTED (~0u)
+#define RTE_XABORT_EXPLICIT (1 << 0)
+#define RTE_XABORT_RETRY (1 << 1)
+#define RTE_XABORT_CONFLICT (1 << 2)
+#define RTE_XABORT_CAPACITY (1 << 3)
+#define RTE_XABORT_DEBUG (1 << 4)
+#define RTE_XABORT_NESTED (1 << 5)
+#define RTE_XABORT_CODE(x) (((x) >> 24) & 0xff)
+
+static __attribute__((__always_inline__)) inline
+unsigned int rte_xbegin(void)
+{
+ unsigned int ret = RTE_XBEGIN_STARTED;
+
+ asm volatile(".byte 0xc7,0xf8 ; .long 0" : "+a" (ret) :: "memory");
+ return ret;
+}
+
+static __attribute__((__always_inline__)) inline
+void rte_xend(void)
+{
+ asm volatile(".byte 0x0f,0x01,0xd5" ::: "memory");
+}
+
+static __attribute__((__always_inline__)) inline
+void rte_xabort(const unsigned int status)
+{
+ asm volatile(".byte 0xc6,0xf8,%P0" :: "i" (status) : "memory");
+}
+
+static __attribute__((__always_inline__)) inline
+int rte_xtest(void)
+{
+ unsigned char out;
+
+ asm volatile(".byte 0x0f,0x01,0xd6 ; setnz %0" :
+ "=r" (out) :: "memory");
+ return out;
+}
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_RTM_H_ */
diff --git a/lib/librte_eal/common/include/arch/x86/rte_spinlock.h b/lib/librte_eal/common/include/arch/x86/rte_spinlock.h
index 54fba95..136f25a 100644
--- a/lib/librte_eal/common/include/arch/x86/rte_spinlock.h
+++ b/lib/librte_eal/common/include/arch/x86/rte_spinlock.h
@@ -39,6 +39,13 @@ extern "C" {
#endif
#include "generic/rte_spinlock.h"
+#include "rte_rtm.h"
+#include "rte_cpuflags.h"
+#include "rte_branch_prediction.h"
+#include <rte_common.h>
+
+#define RTE_RTM_MAX_RETRIES (10)
+#define RTE_XABORT_LOCK_BUSY (0xff)
#ifndef RTE_FORCE_INTRINSICS
static inline void
@@ -87,6 +94,106 @@ rte_spinlock_trylock (rte_spinlock_t *sl)
}
#endif
+static uint8_t rtm_supported; /* cache the flag to avoid the overhead
+ of the rte_cpu_get_flag_enabled function */
+
+static inline void __attribute__((constructor))
+rte_rtm_init(void)
+{
+ rtm_supported = rte_cpu_get_flag_enabled(RTE_CPUFLAG_RTM);
+}
+
+static inline int rte_tm_supported(void)
+{
+ return rtm_supported;
+}
+
+static inline int
+rte_try_tm(volatile int *lock)
+{
+ if (!rtm_supported)
+ return 0;
+
+ int retries = RTE_RTM_MAX_RETRIES;
+
+ while (likely(retries--)) {
+
+ unsigned int status = rte_xbegin();
+
+ if (likely(RTE_XBEGIN_STARTED == status)) {
+ if (unlikely(*lock))
+ rte_xabort(RTE_XABORT_LOCK_BUSY);
+ else
+ return 1;
+ }
+ while (*lock)
+ rte_pause();
+
+ if ((status & RTE_XABORT_EXPLICIT) &&
+ (RTE_XABORT_CODE(status) == RTE_XABORT_LOCK_BUSY))
+ continue;
+
+ if ((status & RTE_XABORT_RETRY) == 0) /* do not retry */
+ break;
+ }
+ return 0;
+}
+
+static inline void
+rte_spinlock_lock_tm(rte_spinlock_t *sl)
+{
+ if (likely(rte_try_tm(&sl->locked)))
+ return;
+
+ rte_spinlock_lock(sl); /* fall-back */
+}
+
+static inline int
+rte_spinlock_trylock_tm(rte_spinlock_t *sl)
+{
+ if (likely(rte_try_tm(&sl->locked)))
+ return 1;
+
+ return rte_spinlock_trylock(sl);
+}
+
+static inline void
+rte_spinlock_unlock_tm(rte_spinlock_t *sl)
+{
+ if (unlikely(sl->locked))
+ rte_spinlock_unlock(sl);
+ else
+ rte_xend();
+}
+
+static inline void
+rte_spinlock_recursive_lock_tm(rte_spinlock_recursive_t *slr)
+{
+ if (likely(rte_try_tm(&slr->sl.locked)))
+ return;
+
+ rte_spinlock_recursive_lock(slr); /* fall-back */
+}
+
+static inline void
+rte_spinlock_recursive_unlock_tm(rte_spinlock_recursive_t *slr)
+{
+ if (unlikely(slr->sl.locked))
+ rte_spinlock_recursive_unlock(slr);
+ else
+ rte_xend();
+}
+
+static inline int
+rte_spinlock_recursive_trylock_tm(rte_spinlock_recursive_t *slr)
+{
+ if (likely(rte_try_tm(&slr->sl.locked)))
+ return 1;
+
+ return rte_spinlock_recursive_trylock(slr);
+}
+
+
#ifdef __cplusplus
}
#endif
diff --git a/lib/librte_eal/common/include/generic/rte_spinlock.h b/lib/librte_eal/common/include/generic/rte_spinlock.h
index c7fb0df..ddb79bf 100644
--- a/lib/librte_eal/common/include/generic/rte_spinlock.h
+++ b/lib/librte_eal/common/include/generic/rte_spinlock.h
@@ -145,6 +145,47 @@ static inline int rte_spinlock_is_locked (rte_spinlock_t *sl)
}
/**
+ * Test if hardware transactional memory (lock elision) is supported
+ *
+ * @return
+ * 1 if the hardware transactional memory is supported; 0 otherwise.
+ */
+static inline int rte_tm_supported(void);
+
+/**
+ * Try to execute critical section in a hardware memory transaction,
+ * if it fails or not available take the spinlock.
+ *
+ * @param sl
+ * A pointer to the spinlock.
+ */
+static inline void
+rte_spinlock_lock_tm(rte_spinlock_t *sl);
+
+/**
+ * Commit hardware memory transaction or release the spinlock if
+ * the spinlock is used as a fall-back
+ *
+ * @param sl
+ * A pointer to the spinlock.
+ */
+static inline void
+rte_spinlock_unlock_tm(rte_spinlock_t *sl);
+
+/**
+ * Try to execute critical section in a hardware memory transaction,
+ * if it fails or not available try to take the lock.
+ *
+ * @param sl
+ * A pointer to the spinlock.
+ * @return
+ * 1 if the hardware memory transaction is successfully started
+ * or lock is successfully taken; 0 otherwise.
+ */
+static inline int
+rte_spinlock_trylock_tm(rte_spinlock_t *sl);
+
+/**
* The rte_spinlock_recursive_t type.
*/
typedef struct {
@@ -223,4 +264,38 @@ static inline int rte_spinlock_recursive_trylock(rte_spinlock_recursive_t *slr)
return 1;
}
+
+/**
+ * Try to execute critical section in a hardware memory transaction,
+ * if it fails or not available take the recursive spinlocks
+ *
+ * @param slr
+ * A pointer to the recursive spinlock.
+ */
+static inline void rte_spinlock_recursive_lock_tm(
+ rte_spinlock_recursive_t *slr);
+
+/**
+ * Commit hardware memory transaction or release the recursive spinlock
+ * if the recursive spinlock is used as a fall-back
+ *
+ * @param slr
+ * A pointer to the recursive spinlock.
+ */
+static inline void rte_spinlock_recursive_unlock_tm(
+ rte_spinlock_recursive_t *slr);
+
+/**
+ * Try to execute critical section in a hardware memory transaction,
+ * if it fails or not available try to take the recursive lock
+ *
+ * @param slr
+ * A pointer to the recursive spinlock.
+ * @return
+ * 1 if the hardware memory transaction is successfully started
+ * or lock is successfully taken; 0 otherwise.
+ */
+static inline int rte_spinlock_recursive_trylock_tm(
+ rte_spinlock_recursive_t *slr);
+
#endif /* _RTE_SPINLOCK_H_ */
--
1.9.5.msysgit.0
* [dpdk-dev] [PATCH 2/3] rwlock: add support for HTM lock elision for x86
2015-06-02 13:11 [dpdk-dev] add support for HTM lock elision for x86 Roman Dementiev
2015-06-02 13:11 ` [dpdk-dev] [PATCH 1/3] spinlock: " Roman Dementiev
@ 2015-06-02 13:11 ` Roman Dementiev
2015-06-02 13:11 ` [dpdk-dev] [PATCH 3/3] test scaling of HTM lock elision protecting rte_hash Roman Dementiev
` (3 subsequent siblings)
5 siblings, 0 replies; 27+ messages in thread
From: Roman Dementiev @ 2015-06-02 13:11 UTC (permalink / raw)
To: dev
This patch adds methods that use hardware memory transactions (HTM)
on the fast path for rwlocks (a.k.a. lock elision). Here the methods
are implemented for x86 using Restricted Transactional Memory
instructions (Intel(r) Transactional Synchronization Extensions).
The implementation falls back to the normal rwlock if HTM is not
available or memory transactions fail. This is not a replacement
for all rwlock usages, since not all critical sections protected
by locks are friendly to HTM.
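A usage sketch (hypothetical caller code, not part of the patch; struct my_table and
its lookup/insert helpers are placeholders): with elision, readers and non-conflicting
writers can proceed concurrently, and the same pairing rule applies, a _tm lock call
must be matched by the corresponding _tm unlock call.

#include <stdint.h>
#include <rte_rwlock.h>

struct my_table;    /* placeholder type, defined elsewhere */
int my_table_find(const struct my_table *t, uint32_t key, uint32_t *out);
void my_table_add(struct my_table *t, uint32_t key, uint32_t value);

static rte_rwlock_t tbl_lock = RTE_RWLOCK_INITIALIZER;

static int tbl_lookup(const struct my_table *t, uint32_t key, uint32_t *out)
{
    int ret;

    rte_rwlock_read_lock_tm(&tbl_lock);   /* usually a transaction, not a lock */
    ret = my_table_find(t, key, out);
    rte_rwlock_read_unlock_tm(&tbl_lock); /* commits, or drops the read lock */
    return ret;
}

static void tbl_insert(struct my_table *t, uint32_t key, uint32_t value)
{
    rte_rwlock_write_lock_tm(&tbl_lock);
    my_table_add(t, key, value);
    rte_rwlock_write_unlock_tm(&tbl_lock);
}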
Signed-off-by: Roman Dementiev <roman.dementiev@intel.com>
---
lib/librte_eal/common/Makefile | 4 +-
.../common/include/arch/ppc_64/rte_rwlock.h | 38 ++++
.../common/include/arch/x86/rte_rwlock.h | 82 +++++++++
lib/librte_eal/common/include/generic/rte_rwlock.h | 194 +++++++++++++++++++++
lib/librte_eal/common/include/rte_rwlock.h | 158 -----------------
5 files changed, 316 insertions(+), 160 deletions(-)
create mode 100644 lib/librte_eal/common/include/arch/ppc_64/rte_rwlock.h
create mode 100644 lib/librte_eal/common/include/arch/x86/rte_rwlock.h
create mode 100644 lib/librte_eal/common/include/generic/rte_rwlock.h
delete mode 100644 lib/librte_eal/common/include/rte_rwlock.h
diff --git a/lib/librte_eal/common/Makefile b/lib/librte_eal/common/Makefile
index 3ea3bbf..38772d4 100644
--- a/lib/librte_eal/common/Makefile
+++ b/lib/librte_eal/common/Makefile
@@ -35,7 +35,7 @@ INC := rte_branch_prediction.h rte_common.h
INC += rte_debug.h rte_eal.h rte_errno.h rte_launch.h rte_lcore.h
INC += rte_log.h rte_memory.h rte_memzone.h rte_pci.h
INC += rte_pci_dev_ids.h rte_per_lcore.h rte_random.h
-INC += rte_rwlock.h rte_tailq.h rte_interrupts.h rte_alarm.h
+INC += rte_tailq.h rte_interrupts.h rte_alarm.h
INC += rte_string_fns.h rte_version.h
INC += rte_eal_memconfig.h rte_malloc_heap.h
INC += rte_hexdump.h rte_devargs.h rte_dev.h
@@ -46,7 +46,7 @@ INC += rte_warnings.h
endif
GENERIC_INC := rte_atomic.h rte_byteorder.h rte_cycles.h rte_prefetch.h
-GENERIC_INC += rte_spinlock.h rte_memcpy.h rte_cpuflags.h
+GENERIC_INC += rte_spinlock.h rte_memcpy.h rte_cpuflags.h rte_rwlock.h
# defined in mk/arch/$(RTE_ARCH)/rte.vars.mk
ARCH_DIR ?= $(RTE_ARCH)
ARCH_INC := $(notdir $(wildcard $(RTE_SDK)/lib/librte_eal/common/include/arch/$(ARCH_DIR)/*.h))
diff --git a/lib/librte_eal/common/include/arch/ppc_64/rte_rwlock.h b/lib/librte_eal/common/include/arch/ppc_64/rte_rwlock.h
new file mode 100644
index 0000000..de8af19
--- /dev/null
+++ b/lib/librte_eal/common/include/arch/ppc_64/rte_rwlock.h
@@ -0,0 +1,38 @@
+#ifndef _RTE_RWLOCK_PPC_64_H_
+#define _RTE_RWLOCK_PPC_64_H_
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include "generic/rte_rwlock.h"
+
+static inline void
+rte_rwlock_read_lock_tm(rte_rwlock_t *rwl)
+{
+ rte_rwlock_read_lock(rwl);
+}
+
+static inline void
+rte_rwlock_read_unlock_tm(rte_rwlock_t *rwl)
+{
+ rte_rwlock_read_unlock(rwl);
+}
+
+static inline void
+rte_rwlock_write_lock_tm(rte_rwlock_t *rwl)
+{
+ rte_rwlock_write_lock(rwl);
+}
+
+static inline void
+rte_rwlock_write_unlock_tm(rte_rwlock_t *rwl)
+{
+ rte_rwlock_write_unlock(rwl);
+}
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_RWLOCK_PPC_64_H_ */
diff --git a/lib/librte_eal/common/include/arch/x86/rte_rwlock.h b/lib/librte_eal/common/include/arch/x86/rte_rwlock.h
new file mode 100644
index 0000000..afd1c3c
--- /dev/null
+++ b/lib/librte_eal/common/include/arch/x86/rte_rwlock.h
@@ -0,0 +1,82 @@
+/*-
+ * BSD LICENSE
+ *
+ * Copyright(c) 2015 Intel Corporation. All rights reserved.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ * * Neither the name of Intel Corporation nor the names of its
+ * contributors may be used to endorse or promote products derived
+ * from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _RTE_RWLOCK_X86_64_H_
+#define _RTE_RWLOCK_X86_64_H_
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include "generic/rte_rwlock.h"
+#include "rte_spinlock.h"
+
+static inline void
+rte_rwlock_read_lock_tm(rte_rwlock_t *rwl)
+{
+ if (likely(rte_try_tm(&rwl->cnt)))
+ return;
+ rte_rwlock_read_lock(rwl);
+}
+
+static inline void
+rte_rwlock_read_unlock_tm(rte_rwlock_t *rwl)
+{
+ if (unlikely(rwl->cnt))
+ rte_rwlock_read_unlock(rwl);
+ else
+ rte_xend();
+}
+
+static inline void
+rte_rwlock_write_lock_tm(rte_rwlock_t *rwl)
+{
+ if (likely(rte_try_tm(&rwl->cnt)))
+ return;
+ rte_rwlock_write_lock(rwl);
+}
+
+static inline void
+rte_rwlock_write_unlock_tm(rte_rwlock_t *rwl)
+{
+ if (unlikely(rwl->cnt))
+ rte_rwlock_write_unlock(rwl);
+ else
+ rte_xend();
+}
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_RWLOCK_X86_64_H_ */
diff --git a/lib/librte_eal/common/include/generic/rte_rwlock.h b/lib/librte_eal/common/include/generic/rte_rwlock.h
new file mode 100644
index 0000000..85b0d3a
--- /dev/null
+++ b/lib/librte_eal/common/include/generic/rte_rwlock.h
@@ -0,0 +1,194 @@
+/*-
+ * BSD LICENSE
+ *
+ * Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ * * Neither the name of Intel Corporation nor the names of its
+ * contributors may be used to endorse or promote products derived
+ * from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _RTE_RWLOCK_H_
+#define _RTE_RWLOCK_H_
+
+/**
+ * @file
+ *
+ * RTE Read-Write Locks
+ *
+ * This file defines an API for read-write locks. The lock is used to
+ * protect data that allows multiple readers in parallel, but only
+ * one writer. All readers are blocked until the writer is finished
+ * writing.
+ *
+ */
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <rte_common.h>
+#include <rte_atomic.h>
+
+/**
+ * The rte_rwlock_t type.
+ *
+ * cnt is -1 when write lock is held, and > 0 when read locks are held.
+ */
+typedef struct {
+ volatile int32_t cnt; /**< -1 when W lock held, > 0 when R locks held. */
+} rte_rwlock_t;
+
+/**
+ * A static rwlock initializer.
+ */
+#define RTE_RWLOCK_INITIALIZER { 0 }
+
+/**
+ * Initialize the rwlock to an unlocked state.
+ *
+ * @param rwl
+ * A pointer to the rwlock structure.
+ */
+static inline void
+rte_rwlock_init(rte_rwlock_t *rwl)
+{
+ rwl->cnt = 0;
+}
+
+/**
+ * Take a read lock. Loop until the lock is held.
+ *
+ * @param rwl
+ * A pointer to a rwlock structure.
+ */
+static inline void
+rte_rwlock_read_lock(rte_rwlock_t *rwl)
+{
+ int32_t x;
+ int success = 0;
+
+ while (success == 0) {
+ x = rwl->cnt;
+ /* write lock is held */
+ if (x < 0) {
+ rte_pause();
+ continue;
+ }
+ success = rte_atomic32_cmpset((volatile uint32_t *)&rwl->cnt,
+ x, x + 1);
+ }
+}
+
+/**
+ * Release a read lock.
+ *
+ * @param rwl
+ * A pointer to the rwlock structure.
+ */
+static inline void
+rte_rwlock_read_unlock(rte_rwlock_t *rwl)
+{
+ rte_atomic32_dec((rte_atomic32_t *)(intptr_t)&rwl->cnt);
+}
+
+/**
+ * Take a write lock. Loop until the lock is held.
+ *
+ * @param rwl
+ * A pointer to a rwlock structure.
+ */
+static inline void
+rte_rwlock_write_lock(rte_rwlock_t *rwl)
+{
+ int32_t x;
+ int success = 0;
+
+ while (success == 0) {
+ x = rwl->cnt;
+ /* a lock is held */
+ if (x != 0) {
+ rte_pause();
+ continue;
+ }
+ success = rte_atomic32_cmpset((volatile uint32_t *)&rwl->cnt,
+ 0, -1);
+ }
+}
+
+/**
+ * Release a write lock.
+ *
+ * @param rwl
+ * A pointer to a rwlock structure.
+ */
+static inline void
+rte_rwlock_write_unlock(rte_rwlock_t *rwl)
+{
+ rte_atomic32_inc((rte_atomic32_t *)(intptr_t)&rwl->cnt);
+}
+
+/**
+ * Try to execute critical section in a hardware memory transaction, if it fails or not available take a read lock
+ *
+ * @param rwl
+ * A pointer to a rwlock structure.
+ */
+static inline void
+rte_rwlock_read_lock_tm(rte_rwlock_t *rwl);
+
+/**
+ * Commit hardware memory transaction or release the read lock if the lock is used as a fall-back
+ *
+ * @param rwl
+ * A pointer to the rwlock structure.
+ */
+static inline void
+rte_rwlock_read_unlock_tm(rte_rwlock_t *rwl);
+
+/**
+ * Try to execute critical section in a hardware memory transaction, if it fails or not available take a write lock
+ *
+ * @param rwl
+ * A pointer to a rwlock structure.
+ */
+static inline void
+rte_rwlock_write_lock_tm(rte_rwlock_t *rwl);
+
+/**
+ * Commit hardware memory transaction or release the write lock if the lock is used as a fall-back
+ *
+ * @param rwl
+ * A pointer to a rwlock structure.
+ */
+static inline void
+rte_rwlock_write_unlock_tm(rte_rwlock_t *rwl);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_RWLOCK_H_ */
diff --git a/lib/librte_eal/common/include/rte_rwlock.h b/lib/librte_eal/common/include/rte_rwlock.h
deleted file mode 100644
index 115731d..0000000
--- a/lib/librte_eal/common/include/rte_rwlock.h
+++ /dev/null
@@ -1,158 +0,0 @@
-/*-
- * BSD LICENSE
- *
- * Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
- * All rights reserved.
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions
- * are met:
- *
- * * Redistributions of source code must retain the above copyright
- * notice, this list of conditions and the following disclaimer.
- * * Redistributions in binary form must reproduce the above copyright
- * notice, this list of conditions and the following disclaimer in
- * the documentation and/or other materials provided with the
- * distribution.
- * * Neither the name of Intel Corporation nor the names of its
- * contributors may be used to endorse or promote products derived
- * from this software without specific prior written permission.
- *
- * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
- * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
- * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
- * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
- * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
- * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
- * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
- * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
- * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
- * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
- */
-
-#ifndef _RTE_RWLOCK_H_
-#define _RTE_RWLOCK_H_
-
-/**
- * @file
- *
- * RTE Read-Write Locks
- *
- * This file defines an API for read-write locks. The lock is used to
- * protect data that allows multiple readers in parallel, but only
- * one writer. All readers are blocked until the writer is finished
- * writing.
- *
- */
-
-#ifdef __cplusplus
-extern "C" {
-#endif
-
-#include <rte_common.h>
-#include <rte_atomic.h>
-
-/**
- * The rte_rwlock_t type.
- *
- * cnt is -1 when write lock is held, and > 0 when read locks are held.
- */
-typedef struct {
- volatile int32_t cnt; /**< -1 when W lock held, > 0 when R locks held. */
-} rte_rwlock_t;
-
-/**
- * A static rwlock initializer.
- */
-#define RTE_RWLOCK_INITIALIZER { 0 }
-
-/**
- * Initialize the rwlock to an unlocked state.
- *
- * @param rwl
- * A pointer to the rwlock structure.
- */
-static inline void
-rte_rwlock_init(rte_rwlock_t *rwl)
-{
- rwl->cnt = 0;
-}
-
-/**
- * Take a read lock. Loop until the lock is held.
- *
- * @param rwl
- * A pointer to a rwlock structure.
- */
-static inline void
-rte_rwlock_read_lock(rte_rwlock_t *rwl)
-{
- int32_t x;
- int success = 0;
-
- while (success == 0) {
- x = rwl->cnt;
- /* write lock is held */
- if (x < 0) {
- rte_pause();
- continue;
- }
- success = rte_atomic32_cmpset((volatile uint32_t *)&rwl->cnt,
- x, x + 1);
- }
-}
-
-/**
- * Release a read lock.
- *
- * @param rwl
- * A pointer to the rwlock structure.
- */
-static inline void
-rte_rwlock_read_unlock(rte_rwlock_t *rwl)
-{
- rte_atomic32_dec((rte_atomic32_t *)(intptr_t)&rwl->cnt);
-}
-
-/**
- * Take a write lock. Loop until the lock is held.
- *
- * @param rwl
- * A pointer to a rwlock structure.
- */
-static inline void
-rte_rwlock_write_lock(rte_rwlock_t *rwl)
-{
- int32_t x;
- int success = 0;
-
- while (success == 0) {
- x = rwl->cnt;
- /* a lock is held */
- if (x != 0) {
- rte_pause();
- continue;
- }
- success = rte_atomic32_cmpset((volatile uint32_t *)&rwl->cnt,
- 0, -1);
- }
-}
-
-/**
- * Release a write lock.
- *
- * @param rwl
- * A pointer to a rwlock structure.
- */
-static inline void
-rte_rwlock_write_unlock(rte_rwlock_t *rwl)
-{
- rte_atomic32_inc((rte_atomic32_t *)(intptr_t)&rwl->cnt);
-}
-
-#ifdef __cplusplus
-}
-#endif
-
-#endif /* _RTE_RWLOCK_H_ */
--
1.9.5.msysgit.0
* [dpdk-dev] [PATCH 3/3] test scaling of HTM lock elision protecting rte_hash
2015-06-02 13:11 [dpdk-dev] add support for HTM lock elision for x86 Roman Dementiev
2015-06-02 13:11 ` [dpdk-dev] [PATCH 1/3] spinlock: " Roman Dementiev
2015-06-02 13:11 ` [dpdk-dev] [PATCH 2/3] rwlock: " Roman Dementiev
@ 2015-06-02 13:11 ` Roman Dementiev
[not found] ` <CADNuJVpeKa9-R7WHkoCzw82vpYd=3XmhOoz2JfGsFLzDW+F5UQ@mail.gmail.com>
` (2 subsequent siblings)
5 siblings, 0 replies; 27+ messages in thread
From: Roman Dementiev @ 2015-06-02 13:11 UTC (permalink / raw)
To: dev
This patch adds a new auto-test that measures the scaling of
concurrent inserts into rte_hash when protected by a normal
spinlock vs. a spinlock with HTM lock elision. The test also
benchmarks single-threaded access without any locks.
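For reference, one way to run the new auto-test (assuming the usual app/test workflow;
the build directory and prompt may differ in your setup) is to start the test binary
with the desired core mask and issue the command registered by this patch:

    ./x86_64-native-linuxapp-gcc/app/test -c 0xff -n 4
    RTE>> hash_scaling_autotest

Comparing the reported cycles per operation (or the CSV lines prefixed with ">>>")
across runs with different core masks shows how each locking mode scales.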
Signed-off-by: Roman Dementiev <roman.dementiev@intel.com>
---
app/test/Makefile | 1 +
app/test/test_hash_scaling.c | 223 +++++++++++++++++++++++++++++++++++++++++++
2 files changed, 224 insertions(+)
create mode 100644 app/test/test_hash_scaling.c
diff --git a/app/test/Makefile b/app/test/Makefile
index 3c777bf..6ffe539 100644
--- a/app/test/Makefile
+++ b/app/test/Makefile
@@ -83,6 +83,7 @@ SRCS-y += test_memcpy_perf.c
SRCS-$(CONFIG_RTE_LIBRTE_HASH) += test_hash.c
SRCS-$(CONFIG_RTE_LIBRTE_HASH) += test_hash_perf.c
+SRCS-$(CONFIG_RTE_LIBRTE_HASH) += test_hash_scaling.c
SRCS-$(CONFIG_RTE_LIBRTE_LPM) += test_lpm.c
SRCS-$(CONFIG_RTE_LIBRTE_LPM) += test_lpm6.c
diff --git a/app/test/test_hash_scaling.c b/app/test/test_hash_scaling.c
new file mode 100644
index 0000000..682ae94
--- /dev/null
+++ b/app/test/test_hash_scaling.c
@@ -0,0 +1,223 @@
+/*-
+ * BSD LICENSE
+ *
+ * Copyright(c) 2015 Intel Corporation. All rights reserved.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ * * Neither the name of Intel Corporation nor the names of its
+ * contributors may be used to endorse or promote products derived
+ * from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <rte_cycles.h>
+#include <rte_hash.h>
+#include <rte_hash_crc.h>
+#include <rte_spinlock.h>
+#include <rte_launch.h>
+
+#include "test.h"
+
+/*
+ * Check condition and return an error if true. Assumes that "handle" is the
+ * name of the hash structure pointer to be freed.
+ */
+#define RETURN_IF_ERROR(cond, str, ...) do { \
+ if (cond) { \
+ printf("ERROR line %d: " str "\n", __LINE__, \
+ ##__VA_ARGS__); \
+ if (handle) \
+ rte_hash_free(handle); \
+ return -1; \
+ } \
+} while (0)
+
+enum locking_mode_t {
+ NORMAL_LOCK,
+ LOCK_ELISION,
+ NULL_LOCK
+};
+
+struct {
+ uint32_t num_iterations;
+ struct rte_hash *h;
+ rte_spinlock_t *lock;
+ int locking_mode;
+} tbl_scaling_test_params;
+
+static rte_atomic64_t gcycles;
+
+static int test_hash_scaling_worker(__attribute__((unused)) void *arg)
+{
+ uint64_t i, key;
+ uint32_t thr_id = rte_sys_gettid();
+ uint64_t begin, cycles = 0;
+
+ switch (tbl_scaling_test_params.locking_mode) {
+
+ case NORMAL_LOCK:
+
+ for (i = 0; i < tbl_scaling_test_params.num_iterations; i++) {
+ /* different threads get different keys because
+ we use the thread-id in the key computation
+ */
+ key = rte_hash_crc(&i, sizeof(i), thr_id);
+ begin = rte_rdtsc_precise();
+ rte_spinlock_lock(tbl_scaling_test_params.lock);
+ rte_hash_add_key(tbl_scaling_test_params.h, &key);
+ rte_spinlock_unlock(tbl_scaling_test_params.lock);
+ cycles += rte_rdtsc_precise() - begin;
+ }
+ break;
+
+ case LOCK_ELISION:
+
+ for (i = 0; i < tbl_scaling_test_params.num_iterations; i++) {
+ key = rte_hash_crc(&i, sizeof(i), thr_id);
+ begin = rte_rdtsc_precise();
+ rte_spinlock_lock_tm(tbl_scaling_test_params.lock);
+ rte_hash_add_key(tbl_scaling_test_params.h, &key);
+ rte_spinlock_unlock_tm(tbl_scaling_test_params.lock);
+ cycles += rte_rdtsc_precise() - begin;
+ }
+ break;
+
+ default:
+
+ for (i = 0; i < tbl_scaling_test_params.num_iterations; i++) {
+ key = rte_hash_crc(&i, sizeof(i), thr_id);
+ begin = rte_rdtsc_precise();
+ rte_hash_add_key(tbl_scaling_test_params.h, &key);
+ cycles += rte_rdtsc_precise() - begin;
+ }
+ }
+
+ rte_atomic64_add(&gcycles, cycles);
+
+ return 0;
+}
+
+/*
+ * Do scalability perf tests.
+ */
+static int
+test_hash_scaling(int locking_mode)
+{
+ static unsigned calledCount = 1;
+ uint32_t num_iterations = 1024*1024;
+ uint64_t i, key;
+ struct rte_hash_parameters hash_params = {
+ .entries = num_iterations*2,
+ .bucket_entries = 16,
+ .key_len = sizeof(key),
+ .hash_func = rte_hash_crc,
+ .hash_func_init_val = 0,
+ .socket_id = rte_socket_id(),
+ };
+ struct rte_hash *handle;
+ char name[RTE_HASH_NAMESIZE];
+ rte_spinlock_t lock;
+
+ rte_spinlock_init(&lock);
+
+ snprintf(name, 32, "test%u", calledCount++);
+ hash_params.name = name;
+
+ handle = rte_hash_create(&hash_params);
+ RETURN_IF_ERROR(handle == NULL, "hash creation failed");
+
+ tbl_scaling_test_params.num_iterations =
+ num_iterations/rte_lcore_count();
+ tbl_scaling_test_params.h = handle;
+ tbl_scaling_test_params.lock = &lock;
+ tbl_scaling_test_params.locking_mode = locking_mode;
+
+ rte_atomic64_init(&gcycles);
+ rte_atomic64_clear(&gcycles);
+
+ /* fill up to initial size */
+ for (i = 0; i < num_iterations; i++) {
+ key = rte_hash_crc(&i, sizeof(i), 0xabcdabcd);
+ rte_hash_add_key(tbl_scaling_test_params.h, &key);
+ }
+
+ rte_eal_mp_remote_launch(test_hash_scaling_worker, NULL, CALL_MASTER);
+ rte_eal_mp_wait_lcore();
+
+ unsigned long long int cycles_per_operation =
+ rte_atomic64_read(&gcycles)/
+ (tbl_scaling_test_params.num_iterations*rte_lcore_count());
+ const char *lock_name;
+
+ switch (locking_mode) {
+ case NORMAL_LOCK:
+ lock_name = "normal spinlock";
+ break;
+ case LOCK_ELISION:
+ lock_name = "lock elision";
+ break;
+ default:
+ lock_name = "null lock";
+ }
+ printf("--------------------------------------------------------\n");
+ printf("Cores: %d; %s mode -> cycles per operation: %llu\n",
+ rte_lcore_count(), lock_name, cycles_per_operation);
+ printf("--------------------------------------------------------\n");
+ /* CSV output */
+ printf(">>>%d,%s,%llu\n", rte_lcore_count(), lock_name,
+ cycles_per_operation);
+
+ rte_hash_free(handle);
+ return 0;
+}
+
+static int
+test_hash_scaling_main(void)
+{
+ int r = 0;
+
+ if (rte_lcore_count() == 1)
+ r = test_hash_scaling(NULL_LOCK);
+
+ if (r == 0)
+ r = test_hash_scaling(NORMAL_LOCK);
+
+ if (!rte_tm_supported()) {
+ printf("Hardware transactional memory (lock elision) is NOT supported\n");
+ return r;
+ }
+ printf("Hardware transactional memory (lock elision) is supported\n");
+
+ if (r == 0)
+ r = test_hash_scaling(LOCK_ELISION);
+
+ return r;
+}
+
+
+static struct test_command hash_scaling_cmd = {
+ .command = "hash_scaling_autotest",
+ .callback = test_hash_scaling_main,
+};
+REGISTER_TEST_COMMAND(hash_scaling_cmd);
--
1.9.5.msysgit.0
* Re: [dpdk-dev] add support for HTM lock elision for x86
[not found] ` <CADNuJVpeKa9-R7WHkoCzw82vpYd=3XmhOoz2JfGsFLzDW+F5UQ@mail.gmail.com>
@ 2015-06-02 13:39 ` Dementiev, Roman
2015-06-02 14:55 ` Roman Dementiev
1 sibling, 0 replies; 27+ messages in thread
From: Dementiev, Roman @ 2015-06-02 13:39 UTC (permalink / raw)
To: Jay Rolette; +Cc: DPDK
From yong.liu@intel.com Tue Jun 2 16:09:11 2015
Return-Path: <yong.liu@intel.com>
Received: from mga01.intel.com (mga01.intel.com [192.55.52.88])
by dpdk.org (Postfix) with ESMTP id 23E59C334
for <dev@dpdk.org>; Tue, 2 Jun 2015 16:09:09 +0200 (CEST)
Received: from fmsmga002.fm.intel.com ([10.253.24.26])
by fmsmga101.fm.intel.com with ESMTP; 02 Jun 2015 07:08:47 -0700
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="5.13,540,1427785200"; d="scan'208";a="735637182"
Received: from kmsmsx152.gar.corp.intel.com ([172.21.73.87])
by fmsmga002.fm.intel.com with ESMTP; 02 Jun 2015 07:08:45 -0700
Received: from shsmsx102.ccr.corp.intel.com (10.239.4.154) by
KMSMSX152.gar.corp.intel.com (172.21.73.87) with Microsoft SMTP Server (TLS)
id 14.3.224.2; Tue, 2 Jun 2015 22:08:44 +0800
Received: from shsmsx103.ccr.corp.intel.com ([169.254.4.23]) by
shsmsx102.ccr.corp.intel.com ([169.254.2.109]) with mapi id 14.03.0224.002;
Tue, 2 Jun 2015 22:08:43 +0800
From: "Liu, Yong" <yong.liu@intel.com>
To: "Liang, Cunming" <cunming.liang@intel.com>, "dev@dpdk.org" <dev@dpdk.org>
Thread-Topic: [PATCH v10 00/13] Interrupt mode PMD
Thread-Index: AQHQnQDjRUxaR9oYIk2Xx85CWpKmRJ2ZQSHw
Date: Tue, 2 Jun 2015 14:08:43 +0000
Message-ID: <86228AFD5BCD8E4EBFD2B90117B5E81E10E37490@SHSMSX103.ccr.corp.intel.com>
References: <1432889125-20255-1-git-send-email-cunming.liang@intel.com>
<1433228006-24661-1-git-send-email-cunming.liang@intel.com>
In-Reply-To: <1433228006-24661-1-git-send-email-cunming.liang@intel.com>
Accept-Language: zh-CN, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-originating-ip: [10.239.127.40]
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
Subject: Re: [dpdk-dev] [PATCH v10 00/13] Interrupt mode PMD
X-BeenThere: dev@dpdk.org
X-Mailman-Version: 2.1.15
Precedence: list
List-Id: patches and discussions about DPDK <dev.dpdk.org>
List-Unsubscribe: <http://dpdk.org/ml/options/dev>,
<mailto:dev-request@dpdk.org?subject=unsubscribe>
List-Archive: <http://dpdk.org/ml/archives/dev/>
List-Post: <mailto:dev@dpdk.org>
List-Help: <mailto:dev-request@dpdk.org?subject=help>
List-Subscribe: <http://dpdk.org/ml/listinfo/dev>,
<mailto:dev-request@dpdk.org?subject=subscribe>
X-List-Received-Date: Tue, 02 Jun 2015 14:09:11 -0000
Tested-by: Yong Liu <yong.liu@intel.com>
- Tested Commit: 7c4c66bf666b8059ed0ad2f2478ef349b3272f51
- OS: Fedora20 3.15.5
- GCC: gcc version 4.8.3 20140911
- CPU: Intel(R) Xeon(R) CPU E5-2699 v3 @ 2.30GHz
- NIC: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ [8086:10fb]
- NIC: Intel Corporation I350 Gigabit Network Connection [8086:1521]
- Default x86_64-native-linuxapp-gcc configuration
- Prerequisites: vfio related case request vt-d enable in bios and IOMMU enable in kernel
- Total 17 cases, 17 passed, 0 failed
- Case: pf_lsc_igbuio_legacy
Description: check when pf bound to igb_uio with legacy mode, link status change interrupt can be normally handled
Command / instruction:
Insmod igb_uio driver with legacy interrupt mode
insmod ./x86_64-native-linuxapp-gcc/kmod/igb_uio.ko intr_mode=legacy
Change port config to lsc enable and rxq disable in l3fwd-power/main.c
Build l3fwd-power and start l3fwd-power with 2 ports
l3fwd-power -c 0x6 -n 3 -- -p 0x3 -P --config="(0,0,1),(1,0,2)"
Change tester port0 link down and verify link down detected on dut port0
Port 0: link down
Change tester port0 link up and verify link up detected on dut port0
Port 0: link up
Change tester port1 link down and verify link down detected on dut port1
Port 1: link down
Change tester port1 link up and verify link up detected on dut port1
Port 1: link up
Change port config to lsc enable and rxq enable in l3fwd-power/main.c
Build l3fwd-power and start l3fwd-power with 2 ports
l3fwd-power -c 0x6 -n 3 -- -p 0x3 -P --config="(0,0,1),(1,0,2)"
Verify lsc disabled for can't enable lsc and rxq in the same time when pf bound to igb_uio
lsc won't enable because of no intr multiplex
- Case: pf_lsc_igbuio_msix
Description: check when pf bound to igb_uio with msix mode, link status change interrupt can be normally handled
Command / instruction:
Insmod igb_uio driver with msix interrupt mode
insmod ./x86_64-native-linuxapp-gcc/kmod/igb_uio.ko intr_mode=msix
Verify link status can be normally handled like previous case pf_lsc_igbuio_legacy.
- Case: pf_lsc_vfio_legacy
Description: check when pf bound to vfio with legacy mode, link status change interrupt can be normally handled
Command / instruction:
Do prerequisites for vfio driver then bind device to vfio-driver
modprobe vfio
modprobe vfio-pci
./tools/dpdk_nic_bind.py --bind=vfio-pci 08:00.0 08:00.1
Change port config to lsc enable and rxq disable in l3fwd-power/main.c
Start l3fwd-power with vfio legacy mode
l3fwd-power -c 0x6 -n 3 --vfio-intr=legacy -- -p 0x3 -P --config="(0,0,1),(1,0,2)"
Check link status change interrupt can be normally handled like previous case.
Change port config to lsc enable and rxq enable in l3fwd-power/main.c
Start l3fwd-power with vfio legacy mode
l3fwd-power -c 0x6 -n 3 --vfio-intr=legacy -- -p 0x3 -P --config="(0,0,1),(1,0,2)"
Verify lsc disabled for can't enable lsc and rxq in the same time with legacy mode.
- Case: pf_lsc_vfio_msi
Description: check when pf bound to vfio with msi mode, link status change interrupt can be normally handled
Command / instruction:
Do prerequisites for vfio driver then bind device to vfio-driver
modprobe vfio
modprobe vfio-pci
./tools/dpdk_nic_bind.py --bind=vfio-pci 08:00.0 08:00.1
Change port config to lsc enable and rxq disable in l3fwd-power/main.c
Start l3fwd-power with vfio msi mode
l3fwd-power -c 0x6 -n 3 --vfio-intr=msi -- -p 0x3 -P --config="(0,0,1),(1,0,2)"
Check link status change interrupt can be normally handled like previous case.
Change port config to lsc enable and rxq enable in l3fwd-power/main.c
Start l3fwd-power with vfio msi mode
l3fwd-power -c 0x6 -n 3 --vfio-intr=msi -- -p 0x3 -P --config="(0,0,1),(1,0,2)"
Verify lsc disabled for can't enable lsc and rxq in the same time with legacy mode.
- Case: pf_lsc_vfio_msix
Description: check when pf bound to vfio with msix mode, link status change interrupt can be normally handled
Command / instruction:
Do prerequisites for vfio driver then bind device to vfio-driver
modprobe vfio
modprobe vfio-pci
./tools/dpdk_nic_bind.py --bind=vfio-pci 08:00.0 08:00.1
Change port config to lsc enable and rxq disable in l3fwd-power/main.c
Start l3fwd-power with vfio msix mode
l3fwd-power -c 0x6 -n 3 --vfio-intr=msix -- -p 0x3 -P --config="(0,0,1),(1,0,2)"
Check link status change interrupt can be normally handled like previous case.
Change port config to lsc enable and rxq enable in l3fwd-power/main.c
Start l3fwd-power with vfio msix mode
l3fwd-power -c 0x6 -n 3 --vfio-intr=msix -- -p 0x3 -P --config="(0,0,1),(1,0,2)"
Check link status change interrupt can be normally handled like previous case.
- Case: pf_rxq_on_vfio_msix
Description: check when pf bound to vfio with default msix mode, receive packet interrupt can be normally handled
Command / instruction:
Do prerequisites for vfio driver then bind device to vfio-driver
modprobe vfio
modprobe vfio-pci
./tools/dpdk_nic_bind.py --bind=vfio-pci 08:00.0 08:00.1
Start l3fwd-power with 2 ports and 2 cores.
l3fwd-power -c 0x6 -n 3 -- -p 0x3 -P --config="(0,0,1),(1,0,2)"
Send packet from tester port0 and verify dut core1 wakeup and then sleep.
lcore 1 is waked up from rx interrupt on port 0 queue 0
lcore 1 sleeps until interrupt triggers
Send packet from tester port1 and verify dut core2 wakeup and then sleep.
lcore 2 is waked up from rx interrupt on port 1 queue 0
lcore 2 sleeps until interrupt triggers
- Case: pf_rxq_on_vfio_msi
Description: check when pf bound to vfio with msi mode, receive packet interrupt can be normally handled
Command / instruction:
Do prerequisites for vfio driver then bind device to vfio-driver
modprobe vfio
modprobe vfio-pci
./tools/dpdk_nic_bind.py --bind=vfio-pci 08:00.0 08:00.1
Start l3fwd-power with 2 ports and 2 cores.
l3fwd-power -c 0x6 -n 3 --vfio-intr=msi -- -p 0x3 -P --config="(0,0,1),(1,0,2)"
Verify packet interrupt can be normally handled like previous case pf_rxq_on_vfio_msix.
- Case: pf_rxq_on_vfio_legacy
Description: check when pf bound to vfio with legacy mode, receive packet interrupt can be normally handled
Command / instruction:
Do prerequisites for vfio driver then bind device to vfio-driver
modprobe vfio
modprobe vfio-pci
./tools/dpdk_nic_bind.py --bind=vfio-pci 08:00.0 08:00.1
Start l3fwd-power with 2 ports and 2 cores.
l3fwd-power -c 0x6 -n 3 --vfio-intr=legacy -- -p 0x3 -P --config="(0,0,1),(1,0,2)"
Verify packet interrupt can be normally handled like previous case pf_rxq_on_vfio_msix.
- Case: pf_onecore_on_vfio
Description: check when all pf devices bound to one core, receive packet interrupt can be normally handled
Command / instruction:
Do prerequisites for vfio driver then bind device to vfio-driver
modprobe vfio
modprobe vfio-pci
./tools/dpdk_nic_bind.py --bind=vfio-pci 08:00.0 08:00.1
Start l3fwd-power with 2 ports and 1 cores.
l3fwd-power -c 0x2 -n 3 -- -p 0x3 -P --config="(0,0,1),(1,0,1)"
Verify packet interrupt can be normally handled like previous case pf_rxq_on_vfio_msix.
- Case: pf_multiqueue_on_vfio
Description: check when pf device has mulit queues, receive packet interrupt can be normally handled
Command / instruction:
Start l3fwd-power with 2 ports and 4 cores.
l3fwd-power -c 0x100000e -n 3 -- -p 0x3 -P --config="(0,0,1),(0,1,2),(1,0,3),(1,1,24)"
Send enough packets with different destination ip address.
sendp([Ether()/IP(dst="127.0.0.X")/UDP()/Raw('0'*18)], iface="p786p1")
Verify all cores wakeup and then sleep as expected.
- Case: pf_maxqueue_on_vfio
Description: check when pf device has maximum queues, receive packet interrupt can be normally handled
Command / instruction:
Start l3fwd-power with 2 ports and 32 cores [only for niantic], different nic has different maximum rx queues
l3fwd-power -c 0x3fdfe3fdfe -n 3 -- -p 0x3 -P --config="(0,0,1),(0,1,21),(0,2,2),(0,3,22),\
(0,4,3),(0,5,23),(0,6,24),(0,7,4),(0,8,25),(0,9,5),(0,10,26),(0,11,6),(0,12,27),(0,13,7),\
(0,14,8),(0,15,28),(1,0,10),(1,1,30),(1,2,11),(1,3,31),(1,4,32),(1,5,12),(1,6,33),(1,7,13),\
(1,8,34),(1,9,14),(1,10,35),(1,11,15),(1,12,16),(1,13,36),(1,14,17),(1,15,37),"
Send enough packets with different destination ip address.
sendp([Ether()/IP(dst="127.0.0.X")/UDP()/Raw('0'*18)], iface="p786p1")
Verify all cores wakeup and then sleep as expected.
- Case: pf_rxq_on_igbuio_legacy
Description: check when pf bound to igb_uio with legacy mode, receive packet interrupt can be normally handled
Command / instruction:
Insmod igb_uio driver with legacy interrupt mode
insmod ./x86_64-native-linuxapp-gcc/kmod/igb_uio.ko intr_mode=legacy
./tools/dpdk_nic_bind.py --bind=igb_uio 08:00.0 08:00.1
Start l3fwd-power with 2 ports and 2 cores.
l3fwd-power -c 0x6 -n 3 -- -p 0x3 -P --config="(0,0,1),(1,0,2)"
Verify packet interrupt can be normally handled like previous case pf_rxq_on_vfio_msix.
- Case: pf_rxq_on_igbuio_msix
Description: check when pf bound to igb_uio with msix mode, receive packet interrupt can be normally handled
Command / instruction:
Insmod igb_uio driver with msix interrupt mode
insmod ./x86_64-native-linuxapp-gcc/kmod/igb_uio.ko intr_mode=msix
./tools/dpdk_nic_bind.py --bind=igb_uio 08:00.0 08:00.1
Start l3fwd-power with 2 ports and 2 cores.
l3fwd-power -c 0x6 -n 3 -- -p 0x3 -P --config="(0,0,1),(1,0,2)"
Verify packet interrupt can be normally handled like previous case pf_rxq_on_vfio_msix.
- Case: pf_rxq_on_uiopcigeneric
Description: check when pf bound to uio_pci_generic, receive packet interrupt can be normally handled
Command / instruction:
Insmod uio_pci_generic driver and bind pf device on it.
./tools/dpdk_nic_bind.py --bind=uio_pci_generic 08:00.0 08:00.1
Start l3fwd-power with 2 ports and 2 cores.
l3fwd-power -c 0x6 -n 3 -- -p 0x3 -P --config="(0,0,1),(1,0,2)"
Verify packet interrupt can be normally handled like previous case pf_rxq_on_vfio_msix.
- Case: pf_lsc_on_uiopcigeneric
Description: check when pf bound to uio_pci_generic, link status changed interrupt can be normally handled
Command / instruction:
Insmod uio_pci_generic driver and bind pf device on it.
./tools/dpdk_nic_bind.py --bind=uio_pci_generic 08:00.0 08:00.1
Start l3fwd-power with 2 ports and 2 cores.
l3fwd-power -c 0x6 -n 3 -- -p 0x3 -P --config="(0,0,1),(1,0,2)"
Change tester port0 link down and verify link down detected on dut port0
Port 0: link down
Change tester port0 link up and verify link up detected on dut port0
Port 0: link up
Change tester port1 link down and verify link down detected on dut port1
Port 1: link down
Change tester port1 link up and verify link up detected on dut port1
Port 1: link up
Change port config to lsc enable and rxq enable in l3fwd-power/main.c
Build l3fwd-power and start l3fwd-power with 2 ports
l3fwd-power -c 0x6 -n 3 -- -p 0x3 -P --config="(0,0,1),(1,0,2)"
Verify lsc disabled for can't enable lsc and rxq in the same time when pf bound to uio_pci_generic
lsc won't enable because of no intr multiplex
- Case: vf_in_vm_rxq
Description: check when vf bound to igb_uio in virtual machine, receive packet interrupt can be normally handled
Only support niantic by now.
Command / instruction:
Create vf devices and bound into virtual machine
echo 1 > /sys/bus/pci/devices/0000\:08\:00.0/sriov_numvfs
echo 1 > /sys/bus/pci/devices/0000\:08\:00.1/sriov_numvfs
virsh
virsh # nodedev-dettach pci_0000_08_10_0
virsh # nodedev-dettach pci_0000_08_10_1
Start virtual machine and bind vf devices to driver igb_uio.
./tools/dpdk_nic_bind.py --bind=igb_uio eth1 eth2
Change port config to lsc disable and rxq enable in l3fwd-power/main.c
Start l3fwd-power with 2 ports and 2 cores.
l3fwd-power -c 0x6 -n 3 -- -p 0x3 -P --config="(0,0,1),(1,0,2)"
Send packet from tester port0 with promisc mac and verify vm core1 wakeup and then sleep.
lcore 1 is waked up from rx interrupt on port 0 queue 0
lcore 1 sleeps until interrupt triggers
Send packet from tester port1 with promisc mac and verify vm core2 wakeup and then sleep.
lcore 2 is waked up from rx interrupt on port 1 queue 0
lcore 2 sleeps until interrupt triggers
- Case: vf_in_host_rxq
Description: check when vf bound to vfio with msix mode, receive packet interrupt can be normally handled
Only support niantic by now.
Command / instruction:
Create vf devices and bound to vfio
echo 1 > /sys/bus/pci/devices/0000\:08\:00.0/sriov_numvfs
echo 1 > /sys/bus/pci/devices/0000\:08\:00.1/sriov_numvfs
modprobe vfio
modprobe vfio-pci
./tools/dpdk_nic_bind.py --bind=vfio-pci 08:10.0 08:10.1
Start l3fwd-power with 2 ports and 2 cores.
l3fwd-power -c 0x6 -n 3 -- -p 0x3 -P --config="(0,0,1),(1,0,2)"
Send packet from tester port0 with promisc mac and verify dut core1 wakeup and then sleep.
lcore 1 is waked up from rx interrupt on port 0 queue 0
lcore 1 sleeps until interrupt triggers
Send packet from tester port1 with promisc mac and verify dut core2 wakeup and then sleep.
lcore 2 is waked up from rx interrupt on port 1 queue 0
lcore 2 sleeps until interrupt triggers
> -----Original Message-----
> From: Liang, Cunming
> Sent: Tuesday, June 02, 2015 2:53 PM
> To: dev@dpdk.org
> Cc: shemming@brocade.com; david.marchand@6wind.com;
> thomas.monjalon@6wind.com; Zhou, Danny; Wang, Liang-min; Richardson, Bruce;
> Liu, Yong; nhorman@tuxdriver.com; Liang, Cunming
> Subject: [PATCH v10 00/13] Interrupt mode PMD
>
> v10 changes
> - code rework to return actual error code
> - bug fix for lsc when using uio_pci_generic
>
> v9 changes
> - code rework to fix open comment
> - bug fix for igb lsc when both lsc and rxq are enabled in vfio-msix
> - new patch to turn off the feature by defalut so as to avoid v2.1 abi
> broken
>
> v8 changes
> - remove condition check for only vfio-msix
> - add multiplex intr support when only one intr vector allowed
> - lsc and rxq interrupt runtime enable decision
> - add safe event delete while the event wakeup execution happens
>
> v7 changes
> - decouple epoll event and intr operation
> - add condition check in the case intr vector is disabled
> - renaming some APIs
>
> v6 changes
> - split rte_intr_wait_rx_pkt into two APIs 'wait' and 'set'.
> - rewrite rte_intr_rx_wait/rte_intr_rx_set.
> - using vector number instead of queue_id as interrupt API params.
> - patch reorder and split.
>
> v5 changes
> - Rebase the patchset onto the HEAD
> - Isolate ethdev from EAL for new-added wait-for-rx interrupt function
> - Export wait-for-rx interrupt function for shared libraries
> - Split-off a new patch file for changed struct rte_intr_handle that
> other patches depend on, to avoid breaking git bisect
> - Change sample applicaiton to accomodate EAL function spec change
> accordingly
>
> v4 changes
> - Export interrupt enable/disable functions for shared libraries
> - Adjust position of new-added structure fields and functions to
> avoid breaking ABI
>
> v3 changes
> - Add return value for interrupt enable/disable functions
> - Move spinlok from PMD to L3fwd-power
> - Remove unnecessary variables in e1000_mac_info
> - Fix miscelleous review comments
>
> v2 changes
> - Fix compilation issue in Makefile for missed header file.
> - Consolidate internal and community review comments of v1 patch set.
>
> The patch series introduce low-latency one-shot rx interrupt into DPDK
> with
> polling and interrupt mode switch control example.
>
> DPDK userspace interrupt notification and handling mechanism is based on
> UIO
> with below limitation:
> 1) It is designed to handle LSC interrupt only with inefficient suspended
> pthread wakeup procedure (e.g. UIO wakes up LSC interrupt handling
> thread
> which then wakes up DPDK polling thread). In this way, it introduces
> non-deterministic wakeup latency for DPDK polling thread as well as
> packet
> latency if it is used to handle Rx interrupt.
> 2) UIO only supports a single interrupt vector which has to been shared by
> LSC interrupt and interrupts assigned to dedicated rx queues.
>
> This patchset includes below features:
> 1) Enable one-shot rx queue interrupt in ixgbe PMD(PF & VF) and igb PMD(PF
> only).
> 2) Build on top of the VFIO mechanism instead of UIO, so it could support
> up to 64 interrupt vectors for rx queue interrupts.
> 3) Have 1 DPDK polling thread handle per Rx queue interrupt with a
> dedicated
> VFIO eventfd, which eliminates non-deterministic pthread wakeup latency
> in
> user space.
> 4) Demonstrate interrupts control APIs and userspace NAIP-like
> polling/interrupt
> switch algorithms in L3fwd-power example.
>
> Known limitations:
> 1) It does not work for UIO due to a single interrupt eventfd shared by
> LSC
> and rx queue interrupt handlers causes a mess. [FIXED]
> 2) LSC interrupt is not supported by VF driver, so it is by default
> disabled
> in L3fwd-power now. Feel free to turn in on if you want to support both
> LSC
> and rx queue interrupts on a PF.
>
> Cunming Liang (13):
> eal/linux: add interrupt vectors support in intr_handle
> eal/linux: add rte_epoll_wait/ctl support
> eal/linux: add API to set rx interrupt event monitor
> eal/linux: fix comments typo on vfio msi
> eal/linux: add interrupt vectors handling on VFIO
> eal/linux: standalone intr event fd create support
> eal/linux: fix lsc read error in uio_pci_generic
> eal/bsd: dummy for new intr definition
> ethdev: add rx intr enable, disable and ctl functions
> ixgbe: enable rx queue interrupts for both PF and VF
> igb: enable rx queue interrupts for PF
> l3fwd-power: enable one-shot rx interrupt and polling/interrupt mode
> switch
> abi: fix v2.1 abi broken issue
>
> drivers/net/e1000/igb_ethdev.c | 311 ++++++++++--
> drivers/net/ixgbe/ixgbe_ethdev.c | 519 ++++++++++++++++++++-
> drivers/net/ixgbe/ixgbe_ethdev.h | 4 +
> examples/l3fwd-power/main.c | 206 ++++++--
> lib/librte_eal/bsdapp/eal/eal_interrupts.c | 19 +
> .../bsdapp/eal/include/exec-env/rte_interrupts.h | 81 ++++
> lib/librte_eal/bsdapp/eal/rte_eal_version.map | 5 +
> lib/librte_eal/linuxapp/eal/eal_interrupts.c | 360 ++++++++++++--
> .../linuxapp/eal/include/exec-env/rte_interrupts.h | 219 +++++++++
> lib/librte_eal/linuxapp/eal/rte_eal_version.map | 8 +
> lib/librte_ether/rte_ethdev.c | 109 +++++
> lib/librte_ether/rte_ethdev.h | 132 ++++++
> lib/librte_ether/rte_ether_version.map | 4 +
> 13 files changed, 1852 insertions(+), 125 deletions(-)
>
> --
> 1.8.1.4
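As a rough illustration of point 4 in the quoted summary above, a simplified
polling/interrupt mode switch could look like the sketch below. This is not
taken from the patch set: the idle threshold, burst size and port/queue
handling are assumptions, the blocking wait itself (rte_epoll_wait() on the
queue's event fd in the real l3fwd-power code) is only indicated by a comment,
and exact signatures may differ between DPDK versions.

#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define BURST_SIZE 32
#define IDLE_THRESHOLD 300      /* assumed value, not from the patches */

static void
rx_loop(uint8_t port, uint16_t queue)
{
    struct rte_mbuf *pkts[BURST_SIZE];
    unsigned int idle = 0;
    uint16_t i, n;

    for (;;) {
        n = rte_eth_rx_burst(port, queue, pkts, BURST_SIZE);
        if (n > 0) {
            idle = 0;
            /* a real application would forward the packets here */
            for (i = 0; i < n; i++)
                rte_pktmbuf_free(pkts[i]);
            continue;
        }
        if (++idle < IDLE_THRESHOLD)
            continue;

        /* Idle for a while: arm the one-shot rx interrupt and sleep
         * until the NIC signals the queue, e.g. by waiting on the
         * queue's event fd with rte_epoll_wait() (omitted here).
         * The real code also re-polls once after arming, to close the
         * race with packets that arrived in between. */
        rte_eth_dev_rx_intr_enable(port, queue);
        /* ... block on the rx interrupt event here ... */
        rte_eth_dev_rx_intr_disable(port, queue);
        idle = 0;
    }
}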
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [dpdk-dev] add support for HTM lock elision for x86
[not found] ` <CADNuJVpeKa9-R7WHkoCzw82vpYd=3XmhOoz2JfGsFLzDW+F5UQ@mail.gmail.com>
2015-06-02 13:39 ` [dpdk-dev] add support for HTM lock elision for x86 Dementiev, Roman
@ 2015-06-02 14:55 ` Roman Dementiev
1 sibling, 0 replies; 27+ messages in thread
From: Roman Dementiev @ 2015-06-02 14:55 UTC (permalink / raw)
To: Jay Rolette; +Cc: DPDK
Hello Jay,
Tuesday, June 2, 2015, 3:21:24 PM, you wrote:
> On Tue, Jun 2, 2015 at 8:11 AM, Roman Dementiev <roman.dementiev@intel.com>wrote:
> This series of patches adds methods that use hardware memory transactions (HTM)
> on fast-path for DPDK locks (a.k.a. lock elision). Here the methods are implemented
> for x86 using Restricted Transactional Memory instructions (Intel(r) Transactional
> Synchronization Extensions). The implementation fall-backs to the normal DPDK lock
> if HTM is not available or memory transactions fail.
It provides very good scaling, provided the protected data structure is
friendly to HTM. One example is rte_hash, which is benchmarked in the unit
test I have provided as the last patch.
There are some papers showing additional test results with similar lock
elision implementations: www.intel.com/software/tsx
> This is very interesting. Do you have any summary you can give us
> of what the performance implications are from your test data?
>
> This is not a replacement for lock usages since not all critical sections protected
> by locks are friendly to HTM.
> Any pointers to material that describes where HTM in its current incarnation on x86 is appropriate?
>
>
I meant to say: not a replacement for *ALL* lock usages. You can check the material here: www.intel.com/software/tsx
Please let me know if you need additional information.
> Roman Dementiev (3):
> spinlock: add support for HTM lock elision for x86
> rwlock: add support for HTM lock elision for x86
> test scaling of HTM lock elision protecting rte_hash
>
> app/test/Makefile | 1 +
> app/test/test_hash_scaling.c | 223 +++++++++++++++++++++
> lib/librte_eal/common/Makefile | 4 +-
> .../common/include/arch/ppc_64/rte_rwlock.h | 38 ++++
> .../common/include/arch/ppc_64/rte_spinlock.h | 41 ++++
> lib/librte_eal/common/include/arch/x86/rte_rtm.h | 73 +++++++
> .../common/include/arch/x86/rte_rwlock.h | 82 ++++++++
> .../common/include/arch/x86/rte_spinlock.h | 107 ++++++++++
> lib/librte_eal/common/include/generic/rte_rwlock.h | 194 ++++++++++++++++++
> .../common/include/generic/rte_spinlock.h | 75 +++++++
> lib/librte_eal/common/include/rte_rwlock.h | 158 ---------------
> 11 files changed, 836 insertions(+), 160 deletions(-)
> Thanks!
> Jay
--
Best regards,
Roman mailto:roman.dementiev@intel.com
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [dpdk-dev] add support for HTM lock elision for x86
2015-06-02 13:11 [dpdk-dev] add support for HTM lock elision for x86 Roman Dementiev
` (3 preceding siblings ...)
[not found] ` <CADNuJVpeKa9-R7WHkoCzw82vpYd=3XmhOoz2JfGsFLzDW+F5UQ@mail.gmail.com>
@ 2015-06-03 18:40 ` Stephen Hemminger
2015-06-05 15:12 ` Roman Dementiev
2015-06-16 17:16 ` [dpdk-dev] [PATCH v2 0/3] " Roman Dementiev
5 siblings, 1 reply; 27+ messages in thread
From: Stephen Hemminger @ 2015-06-03 18:40 UTC (permalink / raw)
To: Roman Dementiev; +Cc: dev
On Tue, 2 Jun 2015 15:11:30 +0200
Roman Dementiev <roman.dementiev@intel.com> wrote:
>
> This series of patches adds methods that use hardware memory transactions (HTM)
> on fast-path for DPDK locks (a.k.a. lock elision). Here the methods are implemented
> for x86 using Restricted Transactional Memory instructions (Intel(r) Transactional
> Synchronization Extensions). The implementation fall-backs to the normal DPDK lock
> if HTM is not available or memory transactions fail.
> This is not a replacement for lock usages since not all critical sections protected
> by locks are friendly to HTM.
>
> Roman Dementiev (3):
> spinlock: add support for HTM lock elision for x86
> rwlock: add support for HTM lock elision for x86
> test scaling of HTM lock elision protecting rte_hash
>
> app/test/Makefile | 1 +
> app/test/test_hash_scaling.c | 223 +++++++++++++++++++++
> lib/librte_eal/common/Makefile | 4 +-
> .../common/include/arch/ppc_64/rte_rwlock.h | 38 ++++
> .../common/include/arch/ppc_64/rte_spinlock.h | 41 ++++
> lib/librte_eal/common/include/arch/x86/rte_rtm.h | 73 +++++++
> .../common/include/arch/x86/rte_rwlock.h | 82 ++++++++
> .../common/include/arch/x86/rte_spinlock.h | 107 ++++++++++
> lib/librte_eal/common/include/generic/rte_rwlock.h | 194 ++++++++++++++++++
> .../common/include/generic/rte_spinlock.h | 75 +++++++
> lib/librte_eal/common/include/rte_rwlock.h | 158 ---------------
> 11 files changed, 836 insertions(+), 160 deletions(-)
>
>
You probably want to put a caveat around this: it won't work for people
who expect to use spinlocks to protect I/O operations on hardware,
since I/O operations aren't like memory.
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [dpdk-dev] add support for HTM lock elision for x86
2015-06-03 18:40 ` Stephen Hemminger
@ 2015-06-05 15:12 ` Roman Dementiev
0 siblings, 0 replies; 27+ messages in thread
From: Roman Dementiev @ 2015-06-05 15:12 UTC (permalink / raw)
To: Stephen Hemminger; +Cc: dev
Hello Stephen,
Wednesday, June 3, 2015, 8:40:14 PM, you wrote:
> On Tue, 2 Jun 2015 15:11:30 +0200
> Roman Dementiev <roman.dementiev@intel.com> wrote:
>>
>> This series of patches adds methods that use hardware memory transactions (HTM)
>> on fast-path for DPDK locks (a.k.a. lock elision). Here the methods are implemented
>> for x86 using Restricted Transactional Memory instructions (Intel(r) Transactional
>> Synchronization Extensions). The implementation fall-backs to the normal DPDK lock
>> if HTM is not available or memory transactions fail.
>> This is not a replacement for all lock usages since not all critical sections protected
>> by locks are friendly to HTM.
>>
> You probably want to put a caveat around this, it won't work for people
> that expect to use spinlocks to protect I/O operations on hardware.
> Since I/O operations aren't like memory.
Yes, I/O cannot be rolled back by the CPU should the transaction fail. Thus
HTM transactions protecting I/O operations are always aborted by the
CPU. In Intel TSX, I/O operations (MMIO, outp, etc.) are TSX-unfriendly,
causing an immediate abort.
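To make that concrete, here is a minimal sketch of the recommended split,
keeping PMD I/O outside the elided section. It is illustrative only and not
code from the patches: the flow table, the port/queue arguments and the use of
the mbuf RSS hash are assumptions, and tx_burst return-value handling is
omitted for brevity.

#include <rte_ethdev.h>
#include <rte_mbuf.h>
#include <rte_spinlock.h>

static rte_spinlock_t tbl_lock = RTE_SPINLOCK_INITIALIZER;
static uint64_t flow_hits[1024];      /* assumed shared, memory-only state */

static void
handle_pkt(uint8_t port, uint16_t queue, struct rte_mbuf *pkt)
{
    /* memory-only critical section: a good candidate for lock elision */
    rte_spinlock_lock_tm(&tbl_lock);
    flow_hits[pkt->hash.rss & 1023]++;
    rte_spinlock_unlock_tm(&tbl_lock);

    /* The PMD performs MMIO (tail pointer write) inside this call, so it
     * must stay outside any hardware transaction; otherwise the
     * transaction always aborts and every acquisition pays the
     * abort-plus-fallback cost. */
    rte_eth_tx_burst(port, queue, &pkt, 1);
}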
--
Best regards,
Roman mailto:roman.dementiev@intel.com
^ permalink raw reply [flat|nested] 27+ messages in thread
* [dpdk-dev] [PATCH v2 0/3] add support for HTM lock elision for x86
2015-06-02 13:11 [dpdk-dev] add support for HTM lock elision for x86 Roman Dementiev
` (4 preceding siblings ...)
2015-06-03 18:40 ` Stephen Hemminger
@ 2015-06-16 17:16 ` Roman Dementiev
2015-06-16 17:16 ` [dpdk-dev] [PATCH v2 1/3] spinlock: " Roman Dementiev
` (4 more replies)
5 siblings, 5 replies; 27+ messages in thread
From: Roman Dementiev @ 2015-06-16 17:16 UTC (permalink / raw)
To: dev
This series of patches adds methods that use hardware memory transactions (HTM)
on the fast path for DPDK locks (a.k.a. lock elision). Here the methods are
implemented for x86 using Restricted Transactional Memory instructions (Intel(r)
Transactional Synchronization Extensions). The implementation falls back to
the normal DPDK lock if HTM is not available or the memory transaction fails.
This is not a replacement for ALL lock usages, since not all critical sections
protected by locks are friendly to HTM. For example, an attempt to perform
a HW I/O operation inside a hardware memory transaction always aborts
the transaction, since the CPU is not able to roll back should the transaction
fail. Therefore, hardware transactional locks should not be used around
rte_eth_rx_burst() and rte_eth_tx_burst() calls.
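For readers new to the API, a minimal usage sketch of the elided spinlock
follows. It is not part of the series itself; the shared counter is just a
stand-in for any memory-only critical section.

#include <stdint.h>
#include <rte_spinlock.h>

static rte_spinlock_t lock = RTE_SPINLOCK_INITIALIZER;
static uint64_t counter;              /* assumed shared state */

static void
bump_counter(void)
{
    /* Starts a hardware transaction when RTM is available; otherwise,
     * or after repeated aborts, the normal spinlock is taken. */
    rte_spinlock_lock_tm(&lock);
    counter++;                        /* memory-only work */
    /* Commits the transaction, or unlocks if the fallback was taken. */
    rte_spinlock_unlock_tm(&lock);
}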
v2 changes
-added a documentation note about hardware limitations
Roman Dementiev (3):
spinlock: add support for HTM lock elision for x86
rwlock: add support for HTM lock elision for x86
test scaling of HTM lock elision protecting rte_hash
app/test/Makefile | 1 +
app/test/test_hash_scaling.c | 223 +++++++++++++++++++++
lib/librte_eal/common/Makefile | 4 +-
.../common/include/arch/ppc_64/rte_rwlock.h | 38 ++++
.../common/include/arch/ppc_64/rte_spinlock.h | 41 ++++
lib/librte_eal/common/include/arch/x86/rte_rtm.h | 73 +++++++
.../common/include/arch/x86/rte_rwlock.h | 82 ++++++++
.../common/include/arch/x86/rte_spinlock.h | 107 ++++++++++
lib/librte_eal/common/include/generic/rte_rwlock.h | 208 +++++++++++++++++++
.../common/include/generic/rte_spinlock.h | 99 +++++++++
lib/librte_eal/common/include/rte_rwlock.h | 158 ---------------
11 files changed, 874 insertions(+), 160 deletions(-)
create mode 100644 app/test/test_hash_scaling.c
create mode 100644 lib/librte_eal/common/include/arch/ppc_64/rte_rwlock.h
create mode 100644 lib/librte_eal/common/include/arch/x86/rte_rtm.h
create mode 100644 lib/librte_eal/common/include/arch/x86/rte_rwlock.h
create mode 100644 lib/librte_eal/common/include/generic/rte_rwlock.h
delete mode 100644 lib/librte_eal/common/include/rte_rwlock.h
--
1.9.5.msysgit.0
^ permalink raw reply [flat|nested] 27+ messages in thread
* [dpdk-dev] [PATCH v2 1/3] spinlock: add support for HTM lock elision for x86
2015-06-16 17:16 ` [dpdk-dev] [PATCH v2 0/3] " Roman Dementiev
@ 2015-06-16 17:16 ` Roman Dementiev
2015-06-17 21:29 ` Thomas Monjalon
2015-06-16 17:16 ` [dpdk-dev] [PATCH v2 2/3] rwlock: add support for HTM lock elision for x86 Roman Dementiev
` (3 subsequent siblings)
4 siblings, 1 reply; 27+ messages in thread
From: Roman Dementiev @ 2015-06-16 17:16 UTC (permalink / raw)
To: dev
This patch adds methods that use hardware memory transactions (HTM) on the fast
path for spinlocks (a.k.a. lock elision). Here the methods are implemented for
x86 using Restricted Transactional Memory instructions (Intel(r) Transactional
Synchronization Extensions). The implementation falls back to the normal
spinlock if HTM is not available or the memory transaction fails. This is not
a replacement for all spinlock usages, since not all critical sections protected
by spinlocks are friendly to HTM. For example, an attempt to perform a HW I/O
operation inside a hardware memory transaction always aborts the transaction,
since the CPU is not able to roll back should the transaction fail.
Therefore, hardware transactional locks should not be used around
rte_eth_rx_burst() and rte_eth_tx_burst() calls.
Signed-off-by: Roman Dementiev <roman.dementiev@intel.com>
---
.../common/include/arch/ppc_64/rte_spinlock.h | 41 ++++++++
lib/librte_eal/common/include/arch/x86/rte_rtm.h | 73 ++++++++++++++
.../common/include/arch/x86/rte_spinlock.h | 107 +++++++++++++++++++++
.../common/include/generic/rte_spinlock.h | 99 +++++++++++++++++++
4 files changed, 320 insertions(+)
create mode 100644 lib/librte_eal/common/include/arch/x86/rte_rtm.h
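Before the diff, a small usage note (illustrative, not from the patch):
rte_tm_supported() can be used to report at startup whether elision will
actually be active, and rte_spinlock_trylock_tm() keeps the usual trylock
semantics, returning 1 when either a transaction was started or the fallback
lock was taken.

#include <stdio.h>
#include <rte_spinlock.h>

static rte_spinlock_t sl = RTE_SPINLOCK_INITIALIZER;

static void
fast_path_example(void)
{
    if (!rte_tm_supported())
        printf("RTM not available: *_tm calls fall back to plain locking\n");

    if (rte_spinlock_trylock_tm(&sl)) {
        /* ... memory-only critical section ... */
        rte_spinlock_unlock_tm(&sl);
    }
}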
diff --git a/lib/librte_eal/common/include/arch/ppc_64/rte_spinlock.h b/lib/librte_eal/common/include/arch/ppc_64/rte_spinlock.h
index cf8b81a..3336435 100644
--- a/lib/librte_eal/common/include/arch/ppc_64/rte_spinlock.h
+++ b/lib/librte_eal/common/include/arch/ppc_64/rte_spinlock.h
@@ -66,6 +66,47 @@ rte_spinlock_trylock(rte_spinlock_t *sl)
#endif
+static inline int rte_tm_supported(void)
+{
+ return 0;
+}
+
+static inline void
+rte_spinlock_lock_tm(rte_spinlock_t *sl)
+{
+ rte_spinlock_lock(sl); /* fall-back */
+}
+
+static inline int
+rte_spinlock_trylock_tm(rte_spinlock_t *sl)
+{
+ return rte_spinlock_trylock(sl);
+}
+
+static inline void
+rte_spinlock_unlock_tm(rte_spinlock_t *sl)
+{
+ rte_spinlock_unlock(sl);
+}
+
+static inline void
+rte_spinlock_recursive_lock_tm(rte_spinlock_recursive_t *slr)
+{
+ rte_spinlock_recursive_lock(slr); /* fall-back */
+}
+
+static inline void
+rte_spinlock_recursive_unlock_tm(rte_spinlock_recursive_t *slr)
+{
+ rte_spinlock_recursive_unlock(slr);
+}
+
+static inline int
+rte_spinlock_recursive_trylock_tm(rte_spinlock_recursive_t *slr)
+{
+ return rte_spinlock_recursive_trylock(slr);
+}
+
#ifdef __cplusplus
}
#endif
diff --git a/lib/librte_eal/common/include/arch/x86/rte_rtm.h b/lib/librte_eal/common/include/arch/x86/rte_rtm.h
new file mode 100644
index 0000000..d935641
--- /dev/null
+++ b/lib/librte_eal/common/include/arch/x86/rte_rtm.h
@@ -0,0 +1,73 @@
+#ifndef _RTE_RTM_H_
+#define _RTE_RTM_H_ 1
+
+/*
+ * Copyright (c) 2012,2013 Intel Corporation
+ * Author: Andi Kleen
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that: (1) source code distributions
+ * retain the above copyright notice and this paragraph in its entirety, (2)
+ * distributions including binary code include the above copyright notice and
+ * this paragraph in its entirety in the documentation or other materials
+ * provided with the distribution
+ *
+ * THIS SOFTWARE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR IMPLIED
+ * WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED WARRANTIES OF
+ * MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+/* Official RTM intrinsics interface matching gcc/icc, but works
+ on older gcc compatible compilers and binutils. */
+
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+
+#define RTE_XBEGIN_STARTED (~0u)
+#define RTE_XABORT_EXPLICIT (1 << 0)
+#define RTE_XABORT_RETRY (1 << 1)
+#define RTE_XABORT_CONFLICT (1 << 2)
+#define RTE_XABORT_CAPACITY (1 << 3)
+#define RTE_XABORT_DEBUG (1 << 4)
+#define RTE_XABORT_NESTED (1 << 5)
+#define RTE_XABORT_CODE(x) (((x) >> 24) & 0xff)
+
+static __attribute__((__always_inline__)) inline
+unsigned int rte_xbegin(void)
+{
+ unsigned int ret = RTE_XBEGIN_STARTED;
+
+ asm volatile(".byte 0xc7,0xf8 ; .long 0" : "+a" (ret) :: "memory");
+ return ret;
+}
+
+static __attribute__((__always_inline__)) inline
+void rte_xend(void)
+{
+ asm volatile(".byte 0x0f,0x01,0xd5" ::: "memory");
+}
+
+static __attribute__((__always_inline__)) inline
+void rte_xabort(const unsigned int status)
+{
+ asm volatile(".byte 0xc6,0xf8,%P0" :: "i" (status) : "memory");
+}
+
+static __attribute__((__always_inline__)) inline
+int rte_xtest(void)
+{
+ unsigned char out;
+
+ asm volatile(".byte 0x0f,0x01,0xd6 ; setnz %0" :
+ "=r" (out) :: "memory");
+ return out;
+}
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_RTM_H_ */
diff --git a/lib/librte_eal/common/include/arch/x86/rte_spinlock.h b/lib/librte_eal/common/include/arch/x86/rte_spinlock.h
index 54fba95..136f25a 100644
--- a/lib/librte_eal/common/include/arch/x86/rte_spinlock.h
+++ b/lib/librte_eal/common/include/arch/x86/rte_spinlock.h
@@ -39,6 +39,13 @@ extern "C" {
#endif
#include "generic/rte_spinlock.h"
+#include "rte_rtm.h"
+#include "rte_cpuflags.h"
+#include "rte_branch_prediction.h"
+#include <rte_common.h>
+
+#define RTE_RTM_MAX_RETRIES (10)
+#define RTE_XABORT_LOCK_BUSY (0xff)
#ifndef RTE_FORCE_INTRINSICS
static inline void
@@ -87,6 +94,106 @@ rte_spinlock_trylock (rte_spinlock_t *sl)
}
#endif
+static uint8_t rtm_supported; /* cache the flag to avoid the overhead
+ of the rte_cpu_get_flag_enabled function */
+
+static inline void __attribute__((constructor))
+rte_rtm_init(void)
+{
+ rtm_supported = rte_cpu_get_flag_enabled(RTE_CPUFLAG_RTM);
+}
+
+static inline int rte_tm_supported(void)
+{
+ return rtm_supported;
+}
+
+static inline int
+rte_try_tm(volatile int *lock)
+{
+ if (!rtm_supported)
+ return 0;
+
+ int retries = RTE_RTM_MAX_RETRIES;
+
+ while (likely(retries--)) {
+
+ unsigned int status = rte_xbegin();
+
+ if (likely(RTE_XBEGIN_STARTED == status)) {
+ if (unlikely(*lock))
+ rte_xabort(RTE_XABORT_LOCK_BUSY);
+ else
+ return 1;
+ }
+ while (*lock)
+ rte_pause();
+
+ if ((status & RTE_XABORT_EXPLICIT) &&
+ (RTE_XABORT_CODE(status) == RTE_XABORT_LOCK_BUSY))
+ continue;
+
+ if ((status & RTE_XABORT_RETRY) == 0) /* do not retry */
+ break;
+ }
+ return 0;
+}
+
+static inline void
+rte_spinlock_lock_tm(rte_spinlock_t *sl)
+{
+ if (likely(rte_try_tm(&sl->locked)))
+ return;
+
+ rte_spinlock_lock(sl); /* fall-back */
+}
+
+static inline int
+rte_spinlock_trylock_tm(rte_spinlock_t *sl)
+{
+ if (likely(rte_try_tm(&sl->locked)))
+ return 1;
+
+ return rte_spinlock_trylock(sl);
+}
+
+static inline void
+rte_spinlock_unlock_tm(rte_spinlock_t *sl)
+{
+ if (unlikely(sl->locked))
+ rte_spinlock_unlock(sl);
+ else
+ rte_xend();
+}
+
+static inline void
+rte_spinlock_recursive_lock_tm(rte_spinlock_recursive_t *slr)
+{
+ if (likely(rte_try_tm(&slr->sl.locked)))
+ return;
+
+ rte_spinlock_recursive_lock(slr); /* fall-back */
+}
+
+static inline void
+rte_spinlock_recursive_unlock_tm(rte_spinlock_recursive_t *slr)
+{
+ if (unlikely(slr->sl.locked))
+ rte_spinlock_recursive_unlock(slr);
+ else
+ rte_xend();
+}
+
+static inline int
+rte_spinlock_recursive_trylock_tm(rte_spinlock_recursive_t *slr)
+{
+ if (likely(rte_try_tm(&slr->sl.locked)))
+ return 1;
+
+ return rte_spinlock_recursive_trylock(slr);
+}
+
+
#ifdef __cplusplus
}
#endif
diff --git a/lib/librte_eal/common/include/generic/rte_spinlock.h b/lib/librte_eal/common/include/generic/rte_spinlock.h
index c7fb0df..4e0a3c3 100644
--- a/lib/librte_eal/common/include/generic/rte_spinlock.h
+++ b/lib/librte_eal/common/include/generic/rte_spinlock.h
@@ -145,6 +145,59 @@ static inline int rte_spinlock_is_locked (rte_spinlock_t *sl)
}
/**
+ * Test if hardware transactional memory (lock elision) is supported
+ *
+ * @return
+ * 1 if the hardware transactional memory is supported; 0 otherwise.
+ */
+static inline int rte_tm_supported(void);
+
+/**
+ * Try to execute critical section in a hardware memory transaction,
+ * if it fails or not available take the spinlock.
+ *
+ * NOTE: An attempt to perform a HW I/O operation inside a hardware memory
+ * transaction always aborts the transaction since the CPU is not able to
+ * roll-back should the transaction fail. Therefore, hardware transactional
+ * locks are not advised to be used around rte_eth_rx_burst() and
+ * rte_eth_tx_burst() calls.
+ *
+ * @param sl
+ * A pointer to the spinlock.
+ */
+static inline void
+rte_spinlock_lock_tm(rte_spinlock_t *sl);
+
+/**
+ * Commit hardware memory transaction or release the spinlock if
+ * the spinlock is used as a fall-back
+ *
+ * @param sl
+ * A pointer to the spinlock.
+ */
+static inline void
+rte_spinlock_unlock_tm(rte_spinlock_t *sl);
+
+/**
+ * Try to execute critical section in a hardware memory transaction,
+ * if it fails or not available try to take the lock.
+ *
+ * NOTE: An attempt to perform a HW I/O operation inside a hardware memory
+ * transaction always aborts the transaction since the CPU is not able to
+ * roll-back should the transaction fail. Therefore, hardware transactional
+ * locks are not advised to be used around rte_eth_rx_burst() and
+ * rte_eth_tx_burst() calls.
+ *
+ * @param sl
+ * A pointer to the spinlock.
+ * @return
+ * 1 if the hardware memory transaction is successfully started
+ * or lock is successfully taken; 0 otherwise.
+ */
+static inline int
+rte_spinlock_trylock_tm(rte_spinlock_t *sl);
+
+/**
* The rte_spinlock_recursive_t type.
*/
typedef struct {
@@ -223,4 +276,50 @@ static inline int rte_spinlock_recursive_trylock(rte_spinlock_recursive_t *slr)
return 1;
}
+
+/**
+ * Try to execute critical section in a hardware memory transaction,
+ * if it fails or not available take the recursive spinlocks
+ *
+ * NOTE: An attempt to perform a HW I/O operation inside a hardware memory
+ * transaction always aborts the transaction since the CPU is not able to
+ * roll-back should the transaction fail. Therefore, hardware transactional
+ * locks are not advised to be used around rte_eth_rx_burst() and
+ * rte_eth_tx_burst() calls.
+ *
+ * @param slr
+ * A pointer to the recursive spinlock.
+ */
+static inline void rte_spinlock_recursive_lock_tm(
+ rte_spinlock_recursive_t *slr);
+
+/**
+ * Commit hardware memory transaction or release the recursive spinlock
+ * if the recursive spinlock is used as a fall-back
+ *
+ * @param slr
+ * A pointer to the recursive spinlock.
+ */
+static inline void rte_spinlock_recursive_unlock_tm(
+ rte_spinlock_recursive_t *slr);
+
+/**
+ * Try to execute critical section in a hardware memory transaction,
+ * if it fails or not available try to take the recursive lock
+ *
+ * NOTE: An attempt to perform a HW I/O operation inside a hardware memory
+ * transaction always aborts the transaction since the CPU is not able to
+ * roll-back should the transaction fail. Therefore, hardware transactional
+ * locks are not advised to be used around rte_eth_rx_burst() and
+ * rte_eth_tx_burst() calls.
+ *
+ * @param slr
+ * A pointer to the recursive spinlock.
+ * @return
+ * 1 if the hardware memory transaction is successfully started
+ * or lock is successfully taken; 0 otherwise.
+ */
+static inline int rte_spinlock_recursive_trylock_tm(
+ rte_spinlock_recursive_t *slr);
+
#endif /* _RTE_SPINLOCK_H_ */
--
1.9.5.msysgit.0
^ permalink raw reply [flat|nested] 27+ messages in thread
* [dpdk-dev] [PATCH v2 2/3] rwlock: add support for HTM lock elision for x86
2015-06-16 17:16 ` [dpdk-dev] [PATCH v2 0/3] " Roman Dementiev
2015-06-16 17:16 ` [dpdk-dev] [PATCH v2 1/3] spinlock: " Roman Dementiev
@ 2015-06-16 17:16 ` Roman Dementiev
2015-06-16 17:16 ` [dpdk-dev] [PATCH v2 3/3] test scaling of HTM lock elision protecting rte_hash Roman Dementiev
` (2 subsequent siblings)
4 siblings, 0 replies; 27+ messages in thread
From: Roman Dementiev @ 2015-06-16 17:16 UTC (permalink / raw)
To: dev
This patch adds methods that use hardware memory transactions (HTM) on the
fast path for rwlocks (a.k.a. lock elision). Here the methods are implemented
for x86 using Restricted Transactional Memory instructions (Intel(r)
Transactional Synchronization Extensions). The implementation falls back to
the normal rwlock if HTM is not available or the memory transaction fails.
This is not a replacement for all rwlock usages, since not all critical
sections protected by locks are friendly to HTM. For example, an attempt to
perform a HW I/O operation inside a hardware memory transaction always aborts
the transaction, since the CPU is not able to roll back should the transaction
fail. Therefore, hardware transactional locks should not be used around
rte_eth_rx_burst() and rte_eth_tx_burst() calls.
Signed-off-by: Roman Dementiev <roman.dementiev@intel.com>
---
lib/librte_eal/common/Makefile | 4 +-
.../common/include/arch/ppc_64/rte_rwlock.h | 38 ++++
.../common/include/arch/x86/rte_rwlock.h | 82 ++++++++
lib/librte_eal/common/include/generic/rte_rwlock.h | 208 +++++++++++++++++++++
lib/librte_eal/common/include/rte_rwlock.h | 158 ----------------
5 files changed, 330 insertions(+), 160 deletions(-)
create mode 100644 lib/librte_eal/common/include/arch/ppc_64/rte_rwlock.h
create mode 100644 lib/librte_eal/common/include/arch/x86/rte_rwlock.h
create mode 100644 lib/librte_eal/common/include/generic/rte_rwlock.h
delete mode 100644 lib/librte_eal/common/include/rte_rwlock.h
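Before the diff, a minimal usage sketch of the elided rwlock (illustrative
only; the table is an assumption, not from the patch). The idea behind elision
here is that readers inside a hardware transaction do not write rwl->cnt, so
concurrent readers, and often a non-conflicting writer as well, can proceed
without bouncing the lock cache line.

#include <stdint.h>
#include <rte_rwlock.h>

static rte_rwlock_t tbl_rwl = RTE_RWLOCK_INITIALIZER;
static uint32_t table[256];           /* assumed shared data */

static uint32_t
table_lookup(uint32_t idx)
{
    uint32_t v;

    rte_rwlock_read_lock_tm(&tbl_rwl);    /* elided read lock */
    v = table[idx & 255];
    rte_rwlock_read_unlock_tm(&tbl_rwl);
    return v;
}

static void
table_update(uint32_t idx, uint32_t val)
{
    rte_rwlock_write_lock_tm(&tbl_rwl);   /* elided write lock */
    table[idx & 255] = val;
    rte_rwlock_write_unlock_tm(&tbl_rwl);
}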
diff --git a/lib/librte_eal/common/Makefile b/lib/librte_eal/common/Makefile
index 3ea3bbf..38772d4 100644
--- a/lib/librte_eal/common/Makefile
+++ b/lib/librte_eal/common/Makefile
@@ -35,7 +35,7 @@ INC := rte_branch_prediction.h rte_common.h
INC += rte_debug.h rte_eal.h rte_errno.h rte_launch.h rte_lcore.h
INC += rte_log.h rte_memory.h rte_memzone.h rte_pci.h
INC += rte_pci_dev_ids.h rte_per_lcore.h rte_random.h
-INC += rte_rwlock.h rte_tailq.h rte_interrupts.h rte_alarm.h
+INC += rte_tailq.h rte_interrupts.h rte_alarm.h
INC += rte_string_fns.h rte_version.h
INC += rte_eal_memconfig.h rte_malloc_heap.h
INC += rte_hexdump.h rte_devargs.h rte_dev.h
@@ -46,7 +46,7 @@ INC += rte_warnings.h
endif
GENERIC_INC := rte_atomic.h rte_byteorder.h rte_cycles.h rte_prefetch.h
-GENERIC_INC += rte_spinlock.h rte_memcpy.h rte_cpuflags.h
+GENERIC_INC += rte_spinlock.h rte_memcpy.h rte_cpuflags.h rte_rwlock.h
# defined in mk/arch/$(RTE_ARCH)/rte.vars.mk
ARCH_DIR ?= $(RTE_ARCH)
ARCH_INC := $(notdir $(wildcard $(RTE_SDK)/lib/librte_eal/common/include/arch/$(ARCH_DIR)/*.h))
diff --git a/lib/librte_eal/common/include/arch/ppc_64/rte_rwlock.h b/lib/librte_eal/common/include/arch/ppc_64/rte_rwlock.h
new file mode 100644
index 0000000..de8af19
--- /dev/null
+++ b/lib/librte_eal/common/include/arch/ppc_64/rte_rwlock.h
@@ -0,0 +1,38 @@
+#ifndef _RTE_RWLOCK_PPC_64_H_
+#define _RTE_RWLOCK_PPC_64_H_
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include "generic/rte_rwlock.h"
+
+static inline void
+rte_rwlock_read_lock_tm(rte_rwlock_t *rwl)
+{
+ rte_rwlock_read_lock(rwl);
+}
+
+static inline void
+rte_rwlock_read_unlock_tm(rte_rwlock_t *rwl)
+{
+ rte_rwlock_read_unlock(rwl);
+}
+
+static inline void
+rte_rwlock_write_lock_tm(rte_rwlock_t *rwl)
+{
+ rte_rwlock_write_lock(rwl);
+}
+
+static inline void
+rte_rwlock_write_unlock_tm(rte_rwlock_t *rwl)
+{
+ rte_rwlock_write_unlock(rwl);
+}
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_RWLOCK_PPC_64_H_ */
diff --git a/lib/librte_eal/common/include/arch/x86/rte_rwlock.h b/lib/librte_eal/common/include/arch/x86/rte_rwlock.h
new file mode 100644
index 0000000..afd1c3c
--- /dev/null
+++ b/lib/librte_eal/common/include/arch/x86/rte_rwlock.h
@@ -0,0 +1,82 @@
+/*-
+ * BSD LICENSE
+ *
+ * Copyright(c) 2015 Intel Corporation. All rights reserved.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ * * Neither the name of Intel Corporation nor the names of its
+ * contributors may be used to endorse or promote products derived
+ * from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _RTE_RWLOCK_X86_64_H_
+#define _RTE_RWLOCK_X86_64_H_
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include "generic/rte_rwlock.h"
+#include "rte_spinlock.h"
+
+static inline void
+rte_rwlock_read_lock_tm(rte_rwlock_t *rwl)
+{
+ if (likely(rte_try_tm(&rwl->cnt)))
+ return;
+ rte_rwlock_read_lock(rwl);
+}
+
+static inline void
+rte_rwlock_read_unlock_tm(rte_rwlock_t *rwl)
+{
+ if (unlikely(rwl->cnt))
+ rte_rwlock_read_unlock(rwl);
+ else
+ rte_xend();
+}
+
+static inline void
+rte_rwlock_write_lock_tm(rte_rwlock_t *rwl)
+{
+ if (likely(rte_try_tm(&rwl->cnt)))
+ return;
+ rte_rwlock_write_lock(rwl);
+}
+
+static inline void
+rte_rwlock_write_unlock_tm(rte_rwlock_t *rwl)
+{
+ if (unlikely(rwl->cnt))
+ rte_rwlock_write_unlock(rwl);
+ else
+ rte_xend();
+}
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_RWLOCK_X86_64_H_ */
diff --git a/lib/librte_eal/common/include/generic/rte_rwlock.h b/lib/librte_eal/common/include/generic/rte_rwlock.h
new file mode 100644
index 0000000..7a0fdc5
--- /dev/null
+++ b/lib/librte_eal/common/include/generic/rte_rwlock.h
@@ -0,0 +1,208 @@
+/*-
+ * BSD LICENSE
+ *
+ * Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ * * Neither the name of Intel Corporation nor the names of its
+ * contributors may be used to endorse or promote products derived
+ * from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _RTE_RWLOCK_H_
+#define _RTE_RWLOCK_H_
+
+/**
+ * @file
+ *
+ * RTE Read-Write Locks
+ *
+ * This file defines an API for read-write locks. The lock is used to
+ * protect data that allows multiple readers in parallel, but only
+ * one writer. All readers are blocked until the writer is finished
+ * writing.
+ *
+ */
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <rte_common.h>
+#include <rte_atomic.h>
+
+/**
+ * The rte_rwlock_t type.
+ *
+ * cnt is -1 when write lock is held, and > 0 when read locks are held.
+ */
+typedef struct {
+ volatile int32_t cnt; /**< -1 when W lock held, > 0 when R locks held. */
+} rte_rwlock_t;
+
+/**
+ * A static rwlock initializer.
+ */
+#define RTE_RWLOCK_INITIALIZER { 0 }
+
+/**
+ * Initialize the rwlock to an unlocked state.
+ *
+ * @param rwl
+ * A pointer to the rwlock structure.
+ */
+static inline void
+rte_rwlock_init(rte_rwlock_t *rwl)
+{
+ rwl->cnt = 0;
+}
+
+/**
+ * Take a read lock. Loop until the lock is held.
+ *
+ * @param rwl
+ * A pointer to a rwlock structure.
+ */
+static inline void
+rte_rwlock_read_lock(rte_rwlock_t *rwl)
+{
+ int32_t x;
+ int success = 0;
+
+ while (success == 0) {
+ x = rwl->cnt;
+ /* write lock is held */
+ if (x < 0) {
+ rte_pause();
+ continue;
+ }
+ success = rte_atomic32_cmpset((volatile uint32_t *)&rwl->cnt,
+ x, x + 1);
+ }
+}
+
+/**
+ * Release a read lock.
+ *
+ * @param rwl
+ * A pointer to the rwlock structure.
+ */
+static inline void
+rte_rwlock_read_unlock(rte_rwlock_t *rwl)
+{
+ rte_atomic32_dec((rte_atomic32_t *)(intptr_t)&rwl->cnt);
+}
+
+/**
+ * Take a write lock. Loop until the lock is held.
+ *
+ * @param rwl
+ * A pointer to a rwlock structure.
+ */
+static inline void
+rte_rwlock_write_lock(rte_rwlock_t *rwl)
+{
+ int32_t x;
+ int success = 0;
+
+ while (success == 0) {
+ x = rwl->cnt;
+ /* a lock is held */
+ if (x != 0) {
+ rte_pause();
+ continue;
+ }
+ success = rte_atomic32_cmpset((volatile uint32_t *)&rwl->cnt,
+ 0, -1);
+ }
+}
+
+/**
+ * Release a write lock.
+ *
+ * @param rwl
+ * A pointer to a rwlock structure.
+ */
+static inline void
+rte_rwlock_write_unlock(rte_rwlock_t *rwl)
+{
+ rte_atomic32_inc((rte_atomic32_t *)(intptr_t)&rwl->cnt);
+}
+
+/**
+ * Try to execute critical section in a hardware memory transaction, if it
+ * fails or not available take a read lock
+ *
+ * NOTE: An attempt to perform a HW I/O operation inside a hardware memory
+ * transaction always aborts the transaction since the CPU is not able to
+ * roll-back should the transaction fail. Therefore, hardware transactional
+ * locks are not advised to be used around rte_eth_rx_burst() and
+ * rte_eth_tx_burst() calls.
+ *
+ * @param rwl
+ * A pointer to a rwlock structure.
+ */
+static inline void
+rte_rwlock_read_lock_tm(rte_rwlock_t *rwl);
+
+/**
+ * Commit hardware memory transaction or release the read lock if the lock is used as a fall-back
+ *
+ * @param rwl
+ * A pointer to the rwlock structure.
+ */
+static inline void
+rte_rwlock_read_unlock_tm(rte_rwlock_t *rwl);
+
+/**
+ * Try to execute critical section in a hardware memory transaction, if it
+ * fails or not available take a write lock
+ *
+ * NOTE: An attempt to perform a HW I/O operation inside a hardware memory
+ * transaction always aborts the transaction since the CPU is not able to
+ * roll-back should the transaction fail. Therefore, hardware transactional
+ * locks are not advised to be used around rte_eth_rx_burst() and
+ * rte_eth_tx_burst() calls.
+ *
+ * @param rwl
+ * A pointer to a rwlock structure.
+ */
+static inline void
+rte_rwlock_write_lock_tm(rte_rwlock_t *rwl);
+
+/**
+ * Commit hardware memory transaction or release the write lock if the lock is used as a fall-back
+ *
+ * @param rwl
+ * A pointer to a rwlock structure.
+ */
+static inline void
+rte_rwlock_write_unlock_tm(rte_rwlock_t *rwl);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_RWLOCK_H_ */
diff --git a/lib/librte_eal/common/include/rte_rwlock.h b/lib/librte_eal/common/include/rte_rwlock.h
deleted file mode 100644
index 115731d..0000000
--- a/lib/librte_eal/common/include/rte_rwlock.h
+++ /dev/null
@@ -1,158 +0,0 @@
-/*-
- * BSD LICENSE
- *
- * Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
- * All rights reserved.
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions
- * are met:
- *
- * * Redistributions of source code must retain the above copyright
- * notice, this list of conditions and the following disclaimer.
- * * Redistributions in binary form must reproduce the above copyright
- * notice, this list of conditions and the following disclaimer in
- * the documentation and/or other materials provided with the
- * distribution.
- * * Neither the name of Intel Corporation nor the names of its
- * contributors may be used to endorse or promote products derived
- * from this software without specific prior written permission.
- *
- * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
- * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
- * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
- * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
- * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
- * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
- * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
- * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
- * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
- * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
- */
-
-#ifndef _RTE_RWLOCK_H_
-#define _RTE_RWLOCK_H_
-
-/**
- * @file
- *
- * RTE Read-Write Locks
- *
- * This file defines an API for read-write locks. The lock is used to
- * protect data that allows multiple readers in parallel, but only
- * one writer. All readers are blocked until the writer is finished
- * writing.
- *
- */
-
-#ifdef __cplusplus
-extern "C" {
-#endif
-
-#include <rte_common.h>
-#include <rte_atomic.h>
-
-/**
- * The rte_rwlock_t type.
- *
- * cnt is -1 when write lock is held, and > 0 when read locks are held.
- */
-typedef struct {
- volatile int32_t cnt; /**< -1 when W lock held, > 0 when R locks held. */
-} rte_rwlock_t;
-
-/**
- * A static rwlock initializer.
- */
-#define RTE_RWLOCK_INITIALIZER { 0 }
-
-/**
- * Initialize the rwlock to an unlocked state.
- *
- * @param rwl
- * A pointer to the rwlock structure.
- */
-static inline void
-rte_rwlock_init(rte_rwlock_t *rwl)
-{
- rwl->cnt = 0;
-}
-
-/**
- * Take a read lock. Loop until the lock is held.
- *
- * @param rwl
- * A pointer to a rwlock structure.
- */
-static inline void
-rte_rwlock_read_lock(rte_rwlock_t *rwl)
-{
- int32_t x;
- int success = 0;
-
- while (success == 0) {
- x = rwl->cnt;
- /* write lock is held */
- if (x < 0) {
- rte_pause();
- continue;
- }
- success = rte_atomic32_cmpset((volatile uint32_t *)&rwl->cnt,
- x, x + 1);
- }
-}
-
-/**
- * Release a read lock.
- *
- * @param rwl
- * A pointer to the rwlock structure.
- */
-static inline void
-rte_rwlock_read_unlock(rte_rwlock_t *rwl)
-{
- rte_atomic32_dec((rte_atomic32_t *)(intptr_t)&rwl->cnt);
-}
-
-/**
- * Take a write lock. Loop until the lock is held.
- *
- * @param rwl
- * A pointer to a rwlock structure.
- */
-static inline void
-rte_rwlock_write_lock(rte_rwlock_t *rwl)
-{
- int32_t x;
- int success = 0;
-
- while (success == 0) {
- x = rwl->cnt;
- /* a lock is held */
- if (x != 0) {
- rte_pause();
- continue;
- }
- success = rte_atomic32_cmpset((volatile uint32_t *)&rwl->cnt,
- 0, -1);
- }
-}
-
-/**
- * Release a write lock.
- *
- * @param rwl
- * A pointer to a rwlock structure.
- */
-static inline void
-rte_rwlock_write_unlock(rte_rwlock_t *rwl)
-{
- rte_atomic32_inc((rte_atomic32_t *)(intptr_t)&rwl->cnt);
-}
-
-#ifdef __cplusplus
-}
-#endif
-
-#endif /* _RTE_RWLOCK_H_ */
--
1.9.5.msysgit.0
^ permalink raw reply [flat|nested] 27+ messages in thread
* [dpdk-dev] [PATCH v2 3/3] test scaling of HTM lock elision protecting rte_hash
2015-06-16 17:16 ` [dpdk-dev] [PATCH v2 0/3] " Roman Dementiev
2015-06-16 17:16 ` [dpdk-dev] [PATCH v2 1/3] spinlock: " Roman Dementiev
2015-06-16 17:16 ` [dpdk-dev] [PATCH v2 2/3] rwlock: add support for HTM lock elision for x86 Roman Dementiev
@ 2015-06-16 17:16 ` Roman Dementiev
2015-06-17 13:05 ` [dpdk-dev] [PATCH v2 0/3] add support for HTM lock elision for x86 Bruce Richardson
2015-06-19 11:08 ` [dpdk-dev] [PATCH v3 " Roman Dementiev
4 siblings, 0 replies; 27+ messages in thread
From: Roman Dementiev @ 2015-06-16 17:16 UTC (permalink / raw)
To: dev
This patch adds a new auto-test that measures the scaling
of concurrent inserts into rte_hash when protected by
the normal spinlock vs. the spinlock with HTM lock
elision. The test also benchmarks single-threaded
access without any locks.
Signed-off-by: Roman Dementiev <roman.dementiev@intel.com>
---
app/test/Makefile | 1 +
app/test/test_hash_scaling.c | 223 +++++++++++++++++++++++++++++++++++++++++++
2 files changed, 224 insertions(+)
create mode 100644 app/test/test_hash_scaling.c
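As a usage note: the test registers itself as the hash_scaling_autotest
command (see REGISTER_TEST_COMMAND at the end of the file), so it can be
invoked from the test application's interactive prompt with different numbers
of lcores, and the lines prefixed with ">>>" give CSV output that makes it
easy to collect the cycles-per-operation numbers across core counts.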
diff --git a/app/test/Makefile b/app/test/Makefile
index 3c777bf..6ffe539 100644
--- a/app/test/Makefile
+++ b/app/test/Makefile
@@ -83,6 +83,7 @@ SRCS-y += test_memcpy_perf.c
SRCS-$(CONFIG_RTE_LIBRTE_HASH) += test_hash.c
SRCS-$(CONFIG_RTE_LIBRTE_HASH) += test_hash_perf.c
+SRCS-$(CONFIG_RTE_LIBRTE_HASH) += test_hash_scaling.c
SRCS-$(CONFIG_RTE_LIBRTE_LPM) += test_lpm.c
SRCS-$(CONFIG_RTE_LIBRTE_LPM) += test_lpm6.c
diff --git a/app/test/test_hash_scaling.c b/app/test/test_hash_scaling.c
new file mode 100644
index 0000000..682ae94
--- /dev/null
+++ b/app/test/test_hash_scaling.c
@@ -0,0 +1,223 @@
+/*-
+ * BSD LICENSE
+ *
+ * Copyright(c) 2015 Intel Corporation. All rights reserved.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ * * Neither the name of Intel Corporation nor the names of its
+ * contributors may be used to endorse or promote products derived
+ * from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <rte_cycles.h>
+#include <rte_hash.h>
+#include <rte_hash_crc.h>
+#include <rte_spinlock.h>
+#include <rte_launch.h>
+
+#include "test.h"
+
+/*
+ * Check condition and return an error if true. Assumes that "handle" is the
+ * name of the hash structure pointer to be freed.
+ */
+#define RETURN_IF_ERROR(cond, str, ...) do { \
+ if (cond) { \
+ printf("ERROR line %d: " str "\n", __LINE__, \
+ ##__VA_ARGS__); \
+ if (handle) \
+ rte_hash_free(handle); \
+ return -1; \
+ } \
+} while (0)
+
+enum locking_mode_t {
+ NORMAL_LOCK,
+ LOCK_ELISION,
+ NULL_LOCK
+};
+
+struct {
+ uint32_t num_iterations;
+ struct rte_hash *h;
+ rte_spinlock_t *lock;
+ int locking_mode;
+} tbl_scaling_test_params;
+
+static rte_atomic64_t gcycles;
+
+static int test_hash_scaling_worker(__attribute__((unused)) void *arg)
+{
+ uint64_t i, key;
+ uint32_t thr_id = rte_sys_gettid();
+ uint64_t begin, cycles = 0;
+
+ switch (tbl_scaling_test_params.locking_mode) {
+
+ case NORMAL_LOCK:
+
+ for (i = 0; i < tbl_scaling_test_params.num_iterations; i++) {
+ /* different threads get different keys because
+ we use the thread-id in the key computation
+ */
+ key = rte_hash_crc(&i, sizeof(i), thr_id);
+ begin = rte_rdtsc_precise();
+ rte_spinlock_lock(tbl_scaling_test_params.lock);
+ rte_hash_add_key(tbl_scaling_test_params.h, &key);
+ rte_spinlock_unlock(tbl_scaling_test_params.lock);
+ cycles += rte_rdtsc_precise() - begin;
+ }
+ break;
+
+ case LOCK_ELISION:
+
+ for (i = 0; i < tbl_scaling_test_params.num_iterations; i++) {
+ key = rte_hash_crc(&i, sizeof(i), thr_id);
+ begin = rte_rdtsc_precise();
+ rte_spinlock_lock_tm(tbl_scaling_test_params.lock);
+ rte_hash_add_key(tbl_scaling_test_params.h, &key);
+ rte_spinlock_unlock_tm(tbl_scaling_test_params.lock);
+ cycles += rte_rdtsc_precise() - begin;
+ }
+ break;
+
+ default:
+
+ for (i = 0; i < tbl_scaling_test_params.num_iterations; i++) {
+ key = rte_hash_crc(&i, sizeof(i), thr_id);
+ begin = rte_rdtsc_precise();
+ rte_hash_add_key(tbl_scaling_test_params.h, &key);
+ cycles += rte_rdtsc_precise() - begin;
+ }
+ }
+
+ rte_atomic64_add(&gcycles, cycles);
+
+ return 0;
+}
+
+/*
+ * Do scalability perf tests.
+ */
+static int
+test_hash_scaling(int locking_mode)
+{
+ static unsigned calledCount = 1;
+ uint32_t num_iterations = 1024*1024;
+ uint64_t i, key;
+ struct rte_hash_parameters hash_params = {
+ .entries = num_iterations*2,
+ .bucket_entries = 16,
+ .key_len = sizeof(key),
+ .hash_func = rte_hash_crc,
+ .hash_func_init_val = 0,
+ .socket_id = rte_socket_id(),
+ };
+ struct rte_hash *handle;
+ char name[RTE_HASH_NAMESIZE];
+ rte_spinlock_t lock;
+
+ rte_spinlock_init(&lock);
+
+ snprintf(name, 32, "test%u", calledCount++);
+ hash_params.name = name;
+
+ handle = rte_hash_create(&hash_params);
+ RETURN_IF_ERROR(handle == NULL, "hash creation failed");
+
+ tbl_scaling_test_params.num_iterations =
+ num_iterations/rte_lcore_count();
+ tbl_scaling_test_params.h = handle;
+ tbl_scaling_test_params.lock = &lock;
+ tbl_scaling_test_params.locking_mode = locking_mode;
+
+ rte_atomic64_init(&gcycles);
+ rte_atomic64_clear(&gcycles);
+
+ /* fill up to initial size */
+ for (i = 0; i < num_iterations; i++) {
+ key = rte_hash_crc(&i, sizeof(i), 0xabcdabcd);
+ rte_hash_add_key(tbl_scaling_test_params.h, &key);
+ }
+
+ rte_eal_mp_remote_launch(test_hash_scaling_worker, NULL, CALL_MASTER);
+ rte_eal_mp_wait_lcore();
+
+ unsigned long long int cycles_per_operation =
+ rte_atomic64_read(&gcycles)/
+ (tbl_scaling_test_params.num_iterations*rte_lcore_count());
+ const char *lock_name;
+
+ switch (locking_mode) {
+ case NORMAL_LOCK:
+ lock_name = "normal spinlock";
+ break;
+ case LOCK_ELISION:
+ lock_name = "lock elision";
+ break;
+ default:
+ lock_name = "null lock";
+ }
+ printf("--------------------------------------------------------\n");
+ printf("Cores: %d; %s mode -> cycles per operation: %llu\n",
+ rte_lcore_count(), lock_name, cycles_per_operation);
+ printf("--------------------------------------------------------\n");
+ /* CSV output */
+ printf(">>>%d,%s,%llu\n", rte_lcore_count(), lock_name,
+ cycles_per_operation);
+
+ rte_hash_free(handle);
+ return 0;
+}
+
+static int
+test_hash_scaling_main(void)
+{
+ int r = 0;
+
+ if (rte_lcore_count() == 1)
+ r = test_hash_scaling(NULL_LOCK);
+
+ if (r == 0)
+ r = test_hash_scaling(NORMAL_LOCK);
+
+ if (!rte_tm_supported()) {
+ printf("Hardware transactional memory (lock elision) is NOT supported\n");
+ return r;
+ }
+ printf("Hardware transactional memory (lock elision) is supported\n");
+
+ if (r == 0)
+ r = test_hash_scaling(LOCK_ELISION);
+
+ return r;
+}
+
+
+static struct test_command hash_scaling_cmd = {
+ .command = "hash_scaling_autotest",
+ .callback = test_hash_scaling_main,
+};
+REGISTER_TEST_COMMAND(hash_scaling_cmd);
--
1.9.5.msysgit.0
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [dpdk-dev] [PATCH v2 0/3] add support for HTM lock elision for x86
2015-06-16 17:16 ` [dpdk-dev] [PATCH v2 0/3] " Roman Dementiev
` (2 preceding siblings ...)
2015-06-16 17:16 ` [dpdk-dev] [PATCH v2 3/3] test scaling of HTM lock elision protecting rte_hash Roman Dementiev
@ 2015-06-17 13:05 ` Bruce Richardson
2015-06-17 13:14 ` Thomas Monjalon
2015-06-19 11:08 ` [dpdk-dev] [PATCH v3 " Roman Dementiev
4 siblings, 1 reply; 27+ messages in thread
From: Bruce Richardson @ 2015-06-17 13:05 UTC (permalink / raw)
To: Roman Dementiev; +Cc: dev
On Tue, Jun 16, 2015 at 10:16:43AM -0700, Roman Dementiev wrote:
> This series of patches adds methods that use hardware memory transactions (HTM)
> on fast-path for DPDK locks (a.k.a. lock elision). Here the methods are
> implemented for x86 using Restricted Transactional Memory instructions (Intel(r)
> Transactional Synchronization Extensions). The implementation fall-backs to
> the normal DPDK lock if HTM is not available or memory transactions fail. This
> is not a replacement for ALL lock usages since not all critical sections
> protected by locks are friendly to HTM. For example, an attempt to perform
> a HW I/O operation inside a hardware memory transaction always aborts
> the transaction since the CPU is not able to roll-back should the transaction
> fail. Therefore, hardware transactional locks are not advised to be used around
> rte_eth_rx_burst() and rte_eth_tx_burst() calls.
>
> v2 changes
> -added a documentation note about hardware limitations
>
> Roman Dementiev (3):
> spinlock: add support for HTM lock elision for x86
> rwlock: add support for HTM lock elision for x86
> test scaling of HTM lock elision protecting rte_hash
>
A change that conflicts with this series in the test Makefile was merged last
night; otherwise the patches themselves seem ok.
Thomas, is a V3 needed for this small conflict, or can you handle it when
applying the patches?
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [dpdk-dev] [PATCH v2 0/3] add support for HTM lock elision for x86
2015-06-17 13:05 ` [dpdk-dev] [PATCH v2 0/3] add support for HTM lock elision for x86 Bruce Richardson
@ 2015-06-17 13:14 ` Thomas Monjalon
2015-06-17 13:48 ` Bruce Richardson
0 siblings, 1 reply; 27+ messages in thread
From: Thomas Monjalon @ 2015-06-17 13:14 UTC (permalink / raw)
To: Bruce Richardson; +Cc: dev
2015-06-17 14:05, Bruce Richardson:
> On Tue, Jun 16, 2015 at 10:16:43AM -0700, Roman Dementiev wrote:
> > This series of patches adds methods that use hardware memory transactions (HTM)
> > on fast-path for DPDK locks (a.k.a. lock elision). Here the methods are
> > implemented for x86 using Restricted Transactional Memory instructions (Intel(r)
> > Transactional Synchronization Extensions). The implementation fall-backs to
> > the normal DPDK lock if HTM is not available or memory transactions fail. This
> > is not a replacement for ALL lock usages since not all critical sections
> > protected by locks are friendly to HTM. For example, an attempt to perform
> > a HW I/O operation inside a hardware memory transaction always aborts
> > the transaction since the CPU is not able to roll-back should the transaction
> > fail. Therefore, hardware transactional locks are not advised to be used around
> > rte_eth_rx_burst() and rte_eth_tx_burst() calls.
> >
> > v2 changes
> > -added a documentation note about hardware limitations
> >
> > Roman Dementiev (3):
> > spinlock: add support for HTM lock elision for x86
> > rwlock: add support for HTM lock elision for x86
> > test scaling of HTM lock elision protecting rte_hash
> >
> A change with a conflict in the test makefile was merged last night. However,
> the patches themselves otherwise seem ok.
Does it mean you ack these patches and they can be blindly applied
without double checking?
> Thomas, is a V3 needed for this small conflict, or can you handle it on applying
> the patch?
Don't worry about conflicts.
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [dpdk-dev] [PATCH v2 0/3] add support for HTM lock elision for x86
2015-06-17 13:14 ` Thomas Monjalon
@ 2015-06-17 13:48 ` Bruce Richardson
0 siblings, 0 replies; 27+ messages in thread
From: Bruce Richardson @ 2015-06-17 13:48 UTC (permalink / raw)
To: Thomas Monjalon; +Cc: dev
On Wed, Jun 17, 2015 at 03:14:49PM +0200, Thomas Monjalon wrote:
> 2015-06-17 14:05, Bruce Richardson:
> > On Tue, Jun 16, 2015 at 10:16:43AM -0700, Roman Dementiev wrote:
> > > This series of patches adds methods that use hardware memory transactions (HTM)
> > > on fast-path for DPDK locks (a.k.a. lock elision). Here the methods are
> > > implemented for x86 using Restricted Transactional Memory instructions (Intel(r)
> > > Transactional Synchronization Extensions). The implementation fall-backs to
> > > the normal DPDK lock if HTM is not available or memory transactions fail. This
> > > is not a replacement for ALL lock usages since not all critical sections
> > > protected by locks are friendly to HTM. For example, an attempt to perform
> > > a HW I/O operation inside a hardware memory transaction always aborts
> > > the transaction since the CPU is not able to roll-back should the transaction
> > > fail. Therefore, hardware transactional locks are not advised to be used around
> > > rte_eth_rx_burst() and rte_eth_tx_burst() calls.
> > >
> > > v2 changes
> > > -added a documentation note about hardware limitations
> > >
> > > Roman Dementiev (3):
> > > spinlock: add support for HTM lock elision for x86
> > > rwlock: add support for HTM lock elision for x86
> > > test scaling of HTM lock elision protecting rte_hash
> > >
> > A change with a conflict in the test makefile was merged last night. However,
> > the patches themselves otherwise seem ok.
>
> Does it mean you ack these patches and they can be blindly applied
> without double checking?
>
Series Acked-by: Bruce Richardson <bruce.richardson@intel.com>
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [dpdk-dev] [PATCH v2 1/3] spinlock: add support for HTM lock elision for x86
2015-06-16 17:16 ` [dpdk-dev] [PATCH v2 1/3] spinlock: " Roman Dementiev
@ 2015-06-17 21:29 ` Thomas Monjalon
2015-06-18 10:00 ` Bruce Richardson
0 siblings, 1 reply; 27+ messages in thread
From: Thomas Monjalon @ 2015-06-17 21:29 UTC (permalink / raw)
To: Roman Dementiev; +Cc: dev
2015-06-16 10:16, Roman Dementiev:
> --- a/lib/librte_eal/common/include/arch/x86/rte_spinlock.h
> +++ b/lib/librte_eal/common/include/arch/x86/rte_spinlock.h
> @@ -39,6 +39,13 @@ extern "C" {
> #endif
>
> #include "generic/rte_spinlock.h"
> +#include "rte_rtm.h"
> +#include "rte_cpuflags.h"
> +#include "rte_branch_prediction.h"
> +#include <rte_common.h>
Why use angle brackets for rte_common.h?
Introducing rte_cpuflags.h in this header breaks the compilation of
the mlx4 PMD with CONFIG_RTE_LIBRTE_MLX4_DEBUG=y.
Indeed, that config enables the -pedantic flag, which rte_cpuflags.h does not
support. Maybe it's time to fix this header?
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [dpdk-dev] [PATCH v2 1/3] spinlock: add support for HTM lock elision for x86
2015-06-17 21:29 ` Thomas Monjalon
@ 2015-06-18 10:00 ` Bruce Richardson
2015-06-19 13:35 ` Thomas Monjalon
0 siblings, 1 reply; 27+ messages in thread
From: Bruce Richardson @ 2015-06-18 10:00 UTC (permalink / raw)
To: Thomas Monjalon; +Cc: dev
On Wed, Jun 17, 2015 at 11:29:49PM +0200, Thomas Monjalon wrote:
> 2015-06-16 10:16, Roman Dementiev:
> > --- a/lib/librte_eal/common/include/arch/x86/rte_spinlock.h
> > +++ b/lib/librte_eal/common/include/arch/x86/rte_spinlock.h
> > @@ -39,6 +39,13 @@ extern "C" {
> > #endif
> >
> > #include "generic/rte_spinlock.h"
> > +#include "rte_rtm.h"
> > +#include "rte_cpuflags.h"
> > +#include "rte_branch_prediction.h"
> > +#include <rte_common.h>
>
> Why use angle brackets for rte_common.h?
>
> Introducing rte_cpuflags.h in this header breaks the compilation of
> the mlx4 pmd with CONFIG_RTE_LIBRTE_MLX4_DEBUG=y.
> Indeed, it triggers the -pedantic flag which is not supported by rte_cpuflags.h.
> Maybe it's time to fix this header?
Do all our headers need to support the pedantic C flag? I don't believe this
was a previous requirement for header files. The mlx4 driver appears to be the
only place in the dpdk.org codebase where the flag actually appears - and even
then the flag is disabled in mlx.c where the dpdk headers are actually included.
/* DPDK headers don't like -pedantic. */
#ifdef PEDANTIC
#pragma GCC diagnostic ignored "-pedantic"
#endif
#include <rte_config.h>
.....
I'm just not convinced that rte_cpuflags needs to be fixed at all here.
/Bruce
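For what it's worth, a minimal sketch of how such a workaround can be scoped to
the DPDK includes only, using GCC's diagnostic push/pop (assumes GCC >= 4.6;
newer compilers spell the option "-Wpedantic" rather than "-pedantic"):

/* Sketch only: silence pedantic diagnostics just for the DPDK headers. */
#ifdef PEDANTIC
#pragma GCC diagnostic push
#pragma GCC diagnostic ignored "-pedantic"
#endif
#include <rte_config.h>
#include <rte_spinlock.h>
#ifdef PEDANTIC
#pragma GCC diagnostic pop
#endif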
^ permalink raw reply [flat|nested] 27+ messages in thread
* [dpdk-dev] [PATCH v3 0/3] add support for HTM lock elision for x86
2015-06-16 17:16 ` [dpdk-dev] [PATCH v2 0/3] " Roman Dementiev
` (3 preceding siblings ...)
2015-06-17 13:05 ` [dpdk-dev] [PATCH v2 0/3] add support for HTM lock elision for x86 Bruce Richardson
@ 2015-06-19 11:08 ` Roman Dementiev
2015-06-19 11:08 ` [dpdk-dev] [PATCH v3 1/3] spinlock: " Roman Dementiev
` (3 more replies)
4 siblings, 4 replies; 27+ messages in thread
From: Roman Dementiev @ 2015-06-19 11:08 UTC (permalink / raw)
To: dev
This series of patches adds methods that use hardware memory transactions (HTM)
on fast-path for DPDK locks (a.k.a. lock elision). Here the methods are
implemented for x86 using Restricted Transactional Memory instructions (Intel(r)
Transactional Synchronization Extensions). The implementation falls back to
the normal DPDK lock if HTM is not available or memory transactions fail. This
is not a replacement for ALL lock usages since not all critical sections
protected by locks are friendly to HTM. For example, an attempt to perform
a HW I/O operation inside a hardware memory transaction always aborts
the transaction since the CPU is not able to roll-back should the transaction
fail. Therefore, hardware transactional locks are not advised to be used around
rte_eth_rx_burst() and rte_eth_tx_burst() calls.
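As a minimal usage sketch (the lock, array and index below are placeholders,
not part of this series), a memory-only critical section is elided like this:

#include <stdint.h>
#include <rte_spinlock.h>

static rte_spinlock_t stats_lock = RTE_SPINLOCK_INITIALIZER; /* placeholder */
static uint64_t counters[64];                                /* placeholder */

static void
update_counter(unsigned int idx)
{
        /* Runs as a hardware transaction when RTM is available,
         * otherwise falls back to taking the spinlock. */
        rte_spinlock_lock_tm(&stats_lock);
        counters[idx]++;        /* memory-only work, no HW I/O */
        rte_spinlock_unlock_tm(&stats_lock);
}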
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
v3 changes
-resolved a conflict in app/test/Makefile
-don't use angle brackets for rte_common.h include
v2 changes
-added a documentation note about hardware limitations
Roman Dementiev (3):
spinlock: add support for HTM lock elision for x86
rwlock: add support for HTM lock elision for x86
test scaling of HTM lock elision protecting rte_hash
app/test/Makefile | 1 +
app/test/test_hash_scaling.c | 223 +++++++++++++++++++++
lib/librte_eal/common/Makefile | 4 +-
.../common/include/arch/ppc_64/rte_rwlock.h | 38 ++++
.../common/include/arch/ppc_64/rte_spinlock.h | 41 ++++
lib/librte_eal/common/include/arch/x86/rte_rtm.h | 73 +++++++
.../common/include/arch/x86/rte_rwlock.h | 82 ++++++++
.../common/include/arch/x86/rte_spinlock.h | 107 ++++++++++
lib/librte_eal/common/include/generic/rte_rwlock.h | 208 +++++++++++++++++++
.../common/include/generic/rte_spinlock.h | 99 +++++++++
lib/librte_eal/common/include/rte_rwlock.h | 158 ---------------
11 files changed, 874 insertions(+), 160 deletions(-)
create mode 100644 app/test/test_hash_scaling.c
create mode 100644 lib/librte_eal/common/include/arch/ppc_64/rte_rwlock.h
create mode 100644 lib/librte_eal/common/include/arch/x86/rte_rtm.h
create mode 100644 lib/librte_eal/common/include/arch/x86/rte_rwlock.h
create mode 100644 lib/librte_eal/common/include/generic/rte_rwlock.h
delete mode 100644 lib/librte_eal/common/include/rte_rwlock.h
--
1.9.5.msysgit.0
^ permalink raw reply [flat|nested] 27+ messages in thread
* [dpdk-dev] [PATCH v3 1/3] spinlock: add support for HTM lock elision for x86
2015-06-19 11:08 ` [dpdk-dev] [PATCH v3 " Roman Dementiev
@ 2015-06-19 11:08 ` Roman Dementiev
2015-06-19 11:08 ` [dpdk-dev] [PATCH v3 2/3] rwlock: " Roman Dementiev
` (2 subsequent siblings)
3 siblings, 0 replies; 27+ messages in thread
From: Roman Dementiev @ 2015-06-19 11:08 UTC (permalink / raw)
To: dev
This patch adds methods that use hardware memory transactions (HTM) on fast-path
for spinlocks (a.k.a. lock elision). Here the methods are implemented for x86
using Restricted Transactional Memory instructions (Intel(r) Transactional
Synchronization Extensions). The implementation falls back to the normal
spinlock if HTM is not available or memory transactions fail. This is not
a replacement for all spinlock usages since not all critical sections protected
by spinlocks are friendly to HTM. For example, an attempt to perform a HW I/O
operation inside a hardware memory transaction always aborts the transaction
since the CPU is not able to roll-back should the transaction fail.
Therefore, hardware transactional locks are not advised to be used around
rte_eth_rx_burst() and rte_eth_tx_burst() calls.
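For illustration only (the lock below is a placeholder and the critical section
is assumed to be memory-only), the trylock variant returns 1 both when a
hardware transaction was started and when the lock was taken, so the matching
unlock handles either case:

#include <rte_spinlock.h>

static rte_spinlock_t sl = RTE_SPINLOCK_INITIALIZER; /* placeholder lock */

static void
try_do_work(void)
{
        if (rte_spinlock_trylock_tm(&sl)) {
                /* memory-only critical section goes here */
                rte_spinlock_unlock_tm(&sl);
        }
}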
Signed-off-by: Roman Dementiev <roman.dementiev@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
---
.../common/include/arch/ppc_64/rte_spinlock.h | 41 ++++++++
lib/librte_eal/common/include/arch/x86/rte_rtm.h | 73 ++++++++++++++
.../common/include/arch/x86/rte_spinlock.h | 107 +++++++++++++++++++++
.../common/include/generic/rte_spinlock.h | 99 +++++++++++++++++++
4 files changed, 320 insertions(+)
create mode 100644 lib/librte_eal/common/include/arch/x86/rte_rtm.h
diff --git a/lib/librte_eal/common/include/arch/ppc_64/rte_spinlock.h b/lib/librte_eal/common/include/arch/ppc_64/rte_spinlock.h
index cf8b81a..3336435 100644
--- a/lib/librte_eal/common/include/arch/ppc_64/rte_spinlock.h
+++ b/lib/librte_eal/common/include/arch/ppc_64/rte_spinlock.h
@@ -66,6 +66,47 @@ rte_spinlock_trylock(rte_spinlock_t *sl)
#endif
+static inline int rte_tm_supported(void)
+{
+ return 0;
+}
+
+static inline void
+rte_spinlock_lock_tm(rte_spinlock_t *sl)
+{
+ rte_spinlock_lock(sl); /* fall-back */
+}
+
+static inline int
+rte_spinlock_trylock_tm(rte_spinlock_t *sl)
+{
+ return rte_spinlock_trylock(sl);
+}
+
+static inline void
+rte_spinlock_unlock_tm(rte_spinlock_t *sl)
+{
+ rte_spinlock_unlock(sl);
+}
+
+static inline void
+rte_spinlock_recursive_lock_tm(rte_spinlock_recursive_t *slr)
+{
+ rte_spinlock_recursive_lock(slr); /* fall-back */
+}
+
+static inline void
+rte_spinlock_recursive_unlock_tm(rte_spinlock_recursive_t *slr)
+{
+ rte_spinlock_recursive_unlock(slr);
+}
+
+static inline int
+rte_spinlock_recursive_trylock_tm(rte_spinlock_recursive_t *slr)
+{
+ return rte_spinlock_recursive_trylock(slr);
+}
+
#ifdef __cplusplus
}
#endif
diff --git a/lib/librte_eal/common/include/arch/x86/rte_rtm.h b/lib/librte_eal/common/include/arch/x86/rte_rtm.h
new file mode 100644
index 0000000..d935641
--- /dev/null
+++ b/lib/librte_eal/common/include/arch/x86/rte_rtm.h
@@ -0,0 +1,73 @@
+#ifndef _RTE_RTM_H_
+#define _RTE_RTM_H_ 1
+
+/*
+ * Copyright (c) 2012,2013 Intel Corporation
+ * Author: Andi Kleen
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that: (1) source code distributions
+ * retain the above copyright notice and this paragraph in its entirety, (2)
+ * distributions including binary code include the above copyright notice and
+ * this paragraph in its entirety in the documentation or other materials
+ * provided with the distribution
+ *
+ * THIS SOFTWARE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR IMPLIED
+ * WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED WARRANTIES OF
+ * MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE.
+ */
+
+/* Official RTM intrinsics interface matching gcc/icc, but works
+ on older gcc compatible compilers and binutils. */
+
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+
+#define RTE_XBEGIN_STARTED (~0u)
+#define RTE_XABORT_EXPLICIT (1 << 0)
+#define RTE_XABORT_RETRY (1 << 1)
+#define RTE_XABORT_CONFLICT (1 << 2)
+#define RTE_XABORT_CAPACITY (1 << 3)
+#define RTE_XABORT_DEBUG (1 << 4)
+#define RTE_XABORT_NESTED (1 << 5)
+#define RTE_XABORT_CODE(x) (((x) >> 24) & 0xff)
+
+static __attribute__((__always_inline__)) inline
+unsigned int rte_xbegin(void)
+{
+ unsigned int ret = RTE_XBEGIN_STARTED;
+
+ asm volatile(".byte 0xc7,0xf8 ; .long 0" : "+a" (ret) :: "memory");
+ return ret;
+}
+
+static __attribute__((__always_inline__)) inline
+void rte_xend(void)
+{
+ asm volatile(".byte 0x0f,0x01,0xd5" ::: "memory");
+}
+
+static __attribute__((__always_inline__)) inline
+void rte_xabort(const unsigned int status)
+{
+ asm volatile(".byte 0xc6,0xf8,%P0" :: "i" (status) : "memory");
+}
+
+static __attribute__((__always_inline__)) inline
+int rte_xtest(void)
+{
+ unsigned char out;
+
+ asm volatile(".byte 0x0f,0x01,0xd6 ; setnz %0" :
+ "=r" (out) :: "memory");
+ return out;
+}
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_RTM_H_ */
diff --git a/lib/librte_eal/common/include/arch/x86/rte_spinlock.h b/lib/librte_eal/common/include/arch/x86/rte_spinlock.h
index 54fba95..20ef0a7 100644
--- a/lib/librte_eal/common/include/arch/x86/rte_spinlock.h
+++ b/lib/librte_eal/common/include/arch/x86/rte_spinlock.h
@@ -39,6 +39,13 @@ extern "C" {
#endif
#include "generic/rte_spinlock.h"
+#include "rte_rtm.h"
+#include "rte_cpuflags.h"
+#include "rte_branch_prediction.h"
+#include "rte_common.h"
+
+#define RTE_RTM_MAX_RETRIES (10)
+#define RTE_XABORT_LOCK_BUSY (0xff)
#ifndef RTE_FORCE_INTRINSICS
static inline void
@@ -87,6 +94,106 @@ rte_spinlock_trylock (rte_spinlock_t *sl)
}
#endif
+static uint8_t rtm_supported; /* cache the flag to avoid the overhead
+ of the rte_cpu_get_flag_enabled function */
+
+static inline void __attribute__((constructor))
+rte_rtm_init(void)
+{
+ rtm_supported = rte_cpu_get_flag_enabled(RTE_CPUFLAG_RTM);
+}
+
+static inline int rte_tm_supported(void)
+{
+ return rtm_supported;
+}
+
+static inline int
+rte_try_tm(volatile int *lock)
+{
+ if (!rtm_supported)
+ return 0;
+
+ int retries = RTE_RTM_MAX_RETRIES;
+
+ while (likely(retries--)) {
+
+ unsigned int status = rte_xbegin();
+
+ if (likely(RTE_XBEGIN_STARTED == status)) {
+ if (unlikely(*lock))
+ rte_xabort(RTE_XABORT_LOCK_BUSY);
+ else
+ return 1;
+ }
+ while (*lock)
+ rte_pause();
+
+ if ((status & RTE_XABORT_EXPLICIT) &&
+ (RTE_XABORT_CODE(status) == RTE_XABORT_LOCK_BUSY))
+ continue;
+
+ if ((status & RTE_XABORT_RETRY) == 0) /* do not retry */
+ break;
+ }
+ return 0;
+}
+
+static inline void
+rte_spinlock_lock_tm(rte_spinlock_t *sl)
+{
+ if (likely(rte_try_tm(&sl->locked)))
+ return;
+
+ rte_spinlock_lock(sl); /* fall-back */
+}
+
+static inline int
+rte_spinlock_trylock_tm(rte_spinlock_t *sl)
+{
+ if (likely(rte_try_tm(&sl->locked)))
+ return 1;
+
+ return rte_spinlock_trylock(sl);
+}
+
+static inline void
+rte_spinlock_unlock_tm(rte_spinlock_t *sl)
+{
+ if (unlikely(sl->locked))
+ rte_spinlock_unlock(sl);
+ else
+ rte_xend();
+}
+
+static inline void
+rte_spinlock_recursive_lock_tm(rte_spinlock_recursive_t *slr)
+{
+ if (likely(rte_try_tm(&slr->sl.locked)))
+ return;
+
+ rte_spinlock_recursive_lock(slr); /* fall-back */
+}
+
+static inline void
+rte_spinlock_recursive_unlock_tm(rte_spinlock_recursive_t *slr)
+{
+ if (unlikely(slr->sl.locked))
+ rte_spinlock_recursive_unlock(slr);
+ else
+ rte_xend();
+}
+
+static inline int
+rte_spinlock_recursive_trylock_tm(rte_spinlock_recursive_t *slr)
+{
+ if (likely(rte_try_tm(&slr->sl.locked)))
+ return 1;
+
+ return rte_spinlock_recursive_trylock(slr);
+}
+
+
#ifdef __cplusplus
}
#endif
diff --git a/lib/librte_eal/common/include/generic/rte_spinlock.h b/lib/librte_eal/common/include/generic/rte_spinlock.h
index c7fb0df..4e0a3c3 100644
--- a/lib/librte_eal/common/include/generic/rte_spinlock.h
+++ b/lib/librte_eal/common/include/generic/rte_spinlock.h
@@ -145,6 +145,59 @@ static inline int rte_spinlock_is_locked (rte_spinlock_t *sl)
}
/**
+ * Test if hardware transactional memory (lock elision) is supported
+ *
+ * @return
+ * 1 if the hardware transactional memory is supported; 0 otherwise.
+ */
+static inline int rte_tm_supported(void);
+
+/**
+ * Try to execute critical section in a hardware memory transaction,
+ * if it fails or not available take the spinlock.
+ *
+ * NOTE: An attempt to perform a HW I/O operation inside a hardware memory
+ * transaction always aborts the transaction since the CPU is not able to
+ * roll-back should the transaction fail. Therefore, hardware transactional
+ * locks are not advised to be used around rte_eth_rx_burst() and
+ * rte_eth_tx_burst() calls.
+ *
+ * @param sl
+ * A pointer to the spinlock.
+ */
+static inline void
+rte_spinlock_lock_tm(rte_spinlock_t *sl);
+
+/**
+ * Commit hardware memory transaction or release the spinlock if
+ * the spinlock is used as a fall-back
+ *
+ * @param sl
+ * A pointer to the spinlock.
+ */
+static inline void
+rte_spinlock_unlock_tm(rte_spinlock_t *sl);
+
+/**
+ * Try to execute critical section in a hardware memory transaction,
+ * if it fails or not available try to take the lock.
+ *
+ * NOTE: An attempt to perform a HW I/O operation inside a hardware memory
+ * transaction always aborts the transaction since the CPU is not able to
+ * roll-back should the transaction fail. Therefore, hardware transactional
+ * locks are not advised to be used around rte_eth_rx_burst() and
+ * rte_eth_tx_burst() calls.
+ *
+ * @param sl
+ * A pointer to the spinlock.
+ * @return
+ * 1 if the hardware memory transaction is successfully started
+ * or lock is successfully taken; 0 otherwise.
+ */
+static inline int
+rte_spinlock_trylock_tm(rte_spinlock_t *sl);
+
+/**
* The rte_spinlock_recursive_t type.
*/
typedef struct {
@@ -223,4 +276,50 @@ static inline int rte_spinlock_recursive_trylock(rte_spinlock_recursive_t *slr)
return 1;
}
+
+/**
+ * Try to execute critical section in a hardware memory transaction,
+ * if it fails or not available take the recursive spinlocks
+ *
+ * NOTE: An attempt to perform a HW I/O operation inside a hardware memory
+ * transaction always aborts the transaction since the CPU is not able to
+ * roll-back should the transaction fail. Therefore, hardware transactional
+ * locks are not advised to be used around rte_eth_rx_burst() and
+ * rte_eth_tx_burst() calls.
+ *
+ * @param slr
+ * A pointer to the recursive spinlock.
+ */
+static inline void rte_spinlock_recursive_lock_tm(
+ rte_spinlock_recursive_t *slr);
+
+/**
+ * Commit hardware memory transaction or release the recursive spinlock
+ * if the recursive spinlock is used as a fall-back
+ *
+ * @param slr
+ * A pointer to the recursive spinlock.
+ */
+static inline void rte_spinlock_recursive_unlock_tm(
+ rte_spinlock_recursive_t *slr);
+
+/**
+ * Try to execute critical section in a hardware memory transaction,
+ * if it fails or not available try to take the recursive lock
+ *
+ * NOTE: An attempt to perform a HW I/O operation inside a hardware memory
+ * transaction always aborts the transaction since the CPU is not able to
+ * roll-back should the transaction fail. Therefore, hardware transactional
+ * locks are not advised to be used around rte_eth_rx_burst() and
+ * rte_eth_tx_burst() calls.
+ *
+ * @param slr
+ * A pointer to the recursive spinlock.
+ * @return
+ * 1 if the hardware memory transaction is successfully started
+ * or lock is successfully taken; 0 otherwise.
+ */
+static inline int rte_spinlock_recursive_trylock_tm(
+ rte_spinlock_recursive_t *slr);
+
#endif /* _RTE_SPINLOCK_H_ */
--
1.9.5.msysgit.0
^ permalink raw reply [flat|nested] 27+ messages in thread
* [dpdk-dev] [PATCH v3 2/3] rwlock: add support for HTM lock elision for x86
2015-06-19 11:08 ` [dpdk-dev] [PATCH v3 " Roman Dementiev
2015-06-19 11:08 ` [dpdk-dev] [PATCH v3 1/3] spinlock: " Roman Dementiev
@ 2015-06-19 11:08 ` Roman Dementiev
2015-06-19 11:08 ` [dpdk-dev] [PATCH v3 3/3] test scaling of HTM lock elision protecting rte_hash Roman Dementiev
2015-06-19 14:38 ` [dpdk-dev] [PATCH v3 0/3] add support for HTM lock elision for x86 Thomas Monjalon
3 siblings, 0 replies; 27+ messages in thread
From: Roman Dementiev @ 2015-06-19 11:08 UTC (permalink / raw)
To: dev
This patch adds methods that use hardware memory transactions (HTM) on
fast-path for rwlock (a.k.a. lock elision). Here the methods are implemented
for x86 using Restricted Transactional Memory instructions (Intel(r)
Transactional Synchronization Extensions). The implementation falls back to
the normal rwlock if HTM is not available or memory transactions fail. This is
not a replacement for all rwlock usages since not all critical sections
protected by locks are friendly to HTM. For example, an attempt to perform
a HW I/O operation inside a hardware memory transaction always aborts
the transaction since the CPU is not able to roll-back should the transaction
fail. Therefore, hardware transactional locks are not advised to be used around
rte_eth_rx_burst() and rte_eth_tx_burst() calls.
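A hedged sketch of the typical read-side use (the table type and lookup helper
are hypothetical placeholders, not part of this patch):

#include <stdint.h>
#include <rte_rwlock.h>

struct my_table;                                         /* hypothetical type */
int my_table_find(const struct my_table *t, uint32_t k); /* hypothetical */

static rte_rwlock_t tbl_lock = RTE_RWLOCK_INITIALIZER;   /* placeholder lock */

static int
lookup_entry(const struct my_table *tbl, uint32_t key)
{
        int ret;

        /* Readers proceed concurrently as hardware transactions when RTM
         * is available; otherwise this takes the regular read lock. */
        rte_rwlock_read_lock_tm(&tbl_lock);
        ret = my_table_find(tbl, key);  /* memory-only lookup */
        rte_rwlock_read_unlock_tm(&tbl_lock);

        return ret;
}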
Signed-off-by: Roman Dementiev <roman.dementiev@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
---
lib/librte_eal/common/Makefile | 4 +-
.../common/include/arch/ppc_64/rte_rwlock.h | 38 ++++
.../common/include/arch/x86/rte_rwlock.h | 82 ++++++++
lib/librte_eal/common/include/generic/rte_rwlock.h | 208 +++++++++++++++++++++
lib/librte_eal/common/include/rte_rwlock.h | 158 ----------------
5 files changed, 330 insertions(+), 160 deletions(-)
create mode 100644 lib/librte_eal/common/include/arch/ppc_64/rte_rwlock.h
create mode 100644 lib/librte_eal/common/include/arch/x86/rte_rwlock.h
create mode 100644 lib/librte_eal/common/include/generic/rte_rwlock.h
delete mode 100644 lib/librte_eal/common/include/rte_rwlock.h
diff --git a/lib/librte_eal/common/Makefile b/lib/librte_eal/common/Makefile
index 3ea3bbf..38772d4 100644
--- a/lib/librte_eal/common/Makefile
+++ b/lib/librte_eal/common/Makefile
@@ -35,7 +35,7 @@ INC := rte_branch_prediction.h rte_common.h
INC += rte_debug.h rte_eal.h rte_errno.h rte_launch.h rte_lcore.h
INC += rte_log.h rte_memory.h rte_memzone.h rte_pci.h
INC += rte_pci_dev_ids.h rte_per_lcore.h rte_random.h
-INC += rte_rwlock.h rte_tailq.h rte_interrupts.h rte_alarm.h
+INC += rte_tailq.h rte_interrupts.h rte_alarm.h
INC += rte_string_fns.h rte_version.h
INC += rte_eal_memconfig.h rte_malloc_heap.h
INC += rte_hexdump.h rte_devargs.h rte_dev.h
@@ -46,7 +46,7 @@ INC += rte_warnings.h
endif
GENERIC_INC := rte_atomic.h rte_byteorder.h rte_cycles.h rte_prefetch.h
-GENERIC_INC += rte_spinlock.h rte_memcpy.h rte_cpuflags.h
+GENERIC_INC += rte_spinlock.h rte_memcpy.h rte_cpuflags.h rte_rwlock.h
# defined in mk/arch/$(RTE_ARCH)/rte.vars.mk
ARCH_DIR ?= $(RTE_ARCH)
ARCH_INC := $(notdir $(wildcard $(RTE_SDK)/lib/librte_eal/common/include/arch/$(ARCH_DIR)/*.h))
diff --git a/lib/librte_eal/common/include/arch/ppc_64/rte_rwlock.h b/lib/librte_eal/common/include/arch/ppc_64/rte_rwlock.h
new file mode 100644
index 0000000..de8af19
--- /dev/null
+++ b/lib/librte_eal/common/include/arch/ppc_64/rte_rwlock.h
@@ -0,0 +1,38 @@
+#ifndef _RTE_RWLOCK_PPC_64_H_
+#define _RTE_RWLOCK_PPC_64_H_
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include "generic/rte_rwlock.h"
+
+static inline void
+rte_rwlock_read_lock_tm(rte_rwlock_t *rwl)
+{
+ rte_rwlock_read_lock(rwl);
+}
+
+static inline void
+rte_rwlock_read_unlock_tm(rte_rwlock_t *rwl)
+{
+ rte_rwlock_read_unlock(rwl);
+}
+
+static inline void
+rte_rwlock_write_lock_tm(rte_rwlock_t *rwl)
+{
+ rte_rwlock_write_lock(rwl);
+}
+
+static inline void
+rte_rwlock_write_unlock_tm(rte_rwlock_t *rwl)
+{
+ rte_rwlock_write_unlock(rwl);
+}
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_RWLOCK_PPC_64_H_ */
diff --git a/lib/librte_eal/common/include/arch/x86/rte_rwlock.h b/lib/librte_eal/common/include/arch/x86/rte_rwlock.h
new file mode 100644
index 0000000..afd1c3c
--- /dev/null
+++ b/lib/librte_eal/common/include/arch/x86/rte_rwlock.h
@@ -0,0 +1,82 @@
+/*-
+ * BSD LICENSE
+ *
+ * Copyright(c) 2015 Intel Corporation. All rights reserved.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ * * Neither the name of Intel Corporation nor the names of its
+ * contributors may be used to endorse or promote products derived
+ * from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _RTE_RWLOCK_X86_64_H_
+#define _RTE_RWLOCK_X86_64_H_
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include "generic/rte_rwlock.h"
+#include "rte_spinlock.h"
+
+static inline void
+rte_rwlock_read_lock_tm(rte_rwlock_t *rwl)
+{
+ if (likely(rte_try_tm(&rwl->cnt)))
+ return;
+ rte_rwlock_read_lock(rwl);
+}
+
+static inline void
+rte_rwlock_read_unlock_tm(rte_rwlock_t *rwl)
+{
+ if (unlikely(rwl->cnt))
+ rte_rwlock_read_unlock(rwl);
+ else
+ rte_xend();
+}
+
+static inline void
+rte_rwlock_write_lock_tm(rte_rwlock_t *rwl)
+{
+ if (likely(rte_try_tm(&rwl->cnt)))
+ return;
+ rte_rwlock_write_lock(rwl);
+}
+
+static inline void
+rte_rwlock_write_unlock_tm(rte_rwlock_t *rwl)
+{
+ if (unlikely(rwl->cnt))
+ rte_rwlock_write_unlock(rwl);
+ else
+ rte_xend();
+}
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_RWLOCK_X86_64_H_ */
diff --git a/lib/librte_eal/common/include/generic/rte_rwlock.h b/lib/librte_eal/common/include/generic/rte_rwlock.h
new file mode 100644
index 0000000..7a0fdc5
--- /dev/null
+++ b/lib/librte_eal/common/include/generic/rte_rwlock.h
@@ -0,0 +1,208 @@
+/*-
+ * BSD LICENSE
+ *
+ * Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ * * Neither the name of Intel Corporation nor the names of its
+ * contributors may be used to endorse or promote products derived
+ * from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _RTE_RWLOCK_H_
+#define _RTE_RWLOCK_H_
+
+/**
+ * @file
+ *
+ * RTE Read-Write Locks
+ *
+ * This file defines an API for read-write locks. The lock is used to
+ * protect data that allows multiple readers in parallel, but only
+ * one writer. All readers are blocked until the writer is finished
+ * writing.
+ *
+ */
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <rte_common.h>
+#include <rte_atomic.h>
+
+/**
+ * The rte_rwlock_t type.
+ *
+ * cnt is -1 when write lock is held, and > 0 when read locks are held.
+ */
+typedef struct {
+ volatile int32_t cnt; /**< -1 when W lock held, > 0 when R locks held. */
+} rte_rwlock_t;
+
+/**
+ * A static rwlock initializer.
+ */
+#define RTE_RWLOCK_INITIALIZER { 0 }
+
+/**
+ * Initialize the rwlock to an unlocked state.
+ *
+ * @param rwl
+ * A pointer to the rwlock structure.
+ */
+static inline void
+rte_rwlock_init(rte_rwlock_t *rwl)
+{
+ rwl->cnt = 0;
+}
+
+/**
+ * Take a read lock. Loop until the lock is held.
+ *
+ * @param rwl
+ * A pointer to a rwlock structure.
+ */
+static inline void
+rte_rwlock_read_lock(rte_rwlock_t *rwl)
+{
+ int32_t x;
+ int success = 0;
+
+ while (success == 0) {
+ x = rwl->cnt;
+ /* write lock is held */
+ if (x < 0) {
+ rte_pause();
+ continue;
+ }
+ success = rte_atomic32_cmpset((volatile uint32_t *)&rwl->cnt,
+ x, x + 1);
+ }
+}
+
+/**
+ * Release a read lock.
+ *
+ * @param rwl
+ * A pointer to the rwlock structure.
+ */
+static inline void
+rte_rwlock_read_unlock(rte_rwlock_t *rwl)
+{
+ rte_atomic32_dec((rte_atomic32_t *)(intptr_t)&rwl->cnt);
+}
+
+/**
+ * Take a write lock. Loop until the lock is held.
+ *
+ * @param rwl
+ * A pointer to a rwlock structure.
+ */
+static inline void
+rte_rwlock_write_lock(rte_rwlock_t *rwl)
+{
+ int32_t x;
+ int success = 0;
+
+ while (success == 0) {
+ x = rwl->cnt;
+ /* a lock is held */
+ if (x != 0) {
+ rte_pause();
+ continue;
+ }
+ success = rte_atomic32_cmpset((volatile uint32_t *)&rwl->cnt,
+ 0, -1);
+ }
+}
+
+/**
+ * Release a write lock.
+ *
+ * @param rwl
+ * A pointer to a rwlock structure.
+ */
+static inline void
+rte_rwlock_write_unlock(rte_rwlock_t *rwl)
+{
+ rte_atomic32_inc((rte_atomic32_t *)(intptr_t)&rwl->cnt);
+}
+
+/**
+ * Try to execute critical section in a hardware memory transaction, if it
+ * fails or not available take a read lock
+ *
+ * NOTE: An attempt to perform a HW I/O operation inside a hardware memory
+ * transaction always aborts the transaction since the CPU is not able to
+ * roll-back should the transaction fail. Therefore, hardware transactional
+ * locks are not advised to be used around rte_eth_rx_burst() and
+ * rte_eth_tx_burst() calls.
+ *
+ * @param rwl
+ * A pointer to a rwlock structure.
+ */
+static inline void
+rte_rwlock_read_lock_tm(rte_rwlock_t *rwl);
+
+/**
+ * Commit hardware memory transaction or release the read lock if the lock is used as a fall-back
+ *
+ * @param rwl
+ * A pointer to the rwlock structure.
+ */
+static inline void
+rte_rwlock_read_unlock_tm(rte_rwlock_t *rwl);
+
+/**
+ * Try to execute critical section in a hardware memory transaction, if it
+ * fails or not available take a write lock
+ *
+ * NOTE: An attempt to perform a HW I/O operation inside a hardware memory
+ * transaction always aborts the transaction since the CPU is not able to
+ * roll-back should the transaction fail. Therefore, hardware transactional
+ * locks are not advised to be used around rte_eth_rx_burst() and
+ * rte_eth_tx_burst() calls.
+ *
+ * @param rwl
+ * A pointer to a rwlock structure.
+ */
+static inline void
+rte_rwlock_write_lock_tm(rte_rwlock_t *rwl);
+
+/**
+ * Commit hardware memory transaction or release the write lock if the lock is used as a fall-back
+ *
+ * @param rwl
+ * A pointer to a rwlock structure.
+ */
+static inline void
+rte_rwlock_write_unlock_tm(rte_rwlock_t *rwl);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_RWLOCK_H_ */
diff --git a/lib/librte_eal/common/include/rte_rwlock.h b/lib/librte_eal/common/include/rte_rwlock.h
deleted file mode 100644
index 115731d..0000000
--- a/lib/librte_eal/common/include/rte_rwlock.h
+++ /dev/null
@@ -1,158 +0,0 @@
-/*-
- * BSD LICENSE
- *
- * Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
- * All rights reserved.
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions
- * are met:
- *
- * * Redistributions of source code must retain the above copyright
- * notice, this list of conditions and the following disclaimer.
- * * Redistributions in binary form must reproduce the above copyright
- * notice, this list of conditions and the following disclaimer in
- * the documentation and/or other materials provided with the
- * distribution.
- * * Neither the name of Intel Corporation nor the names of its
- * contributors may be used to endorse or promote products derived
- * from this software without specific prior written permission.
- *
- * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
- * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
- * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
- * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
- * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
- * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
- * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
- * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
- * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
- * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
- */
-
-#ifndef _RTE_RWLOCK_H_
-#define _RTE_RWLOCK_H_
-
-/**
- * @file
- *
- * RTE Read-Write Locks
- *
- * This file defines an API for read-write locks. The lock is used to
- * protect data that allows multiple readers in parallel, but only
- * one writer. All readers are blocked until the writer is finished
- * writing.
- *
- */
-
-#ifdef __cplusplus
-extern "C" {
-#endif
-
-#include <rte_common.h>
-#include <rte_atomic.h>
-
-/**
- * The rte_rwlock_t type.
- *
- * cnt is -1 when write lock is held, and > 0 when read locks are held.
- */
-typedef struct {
- volatile int32_t cnt; /**< -1 when W lock held, > 0 when R locks held. */
-} rte_rwlock_t;
-
-/**
- * A static rwlock initializer.
- */
-#define RTE_RWLOCK_INITIALIZER { 0 }
-
-/**
- * Initialize the rwlock to an unlocked state.
- *
- * @param rwl
- * A pointer to the rwlock structure.
- */
-static inline void
-rte_rwlock_init(rte_rwlock_t *rwl)
-{
- rwl->cnt = 0;
-}
-
-/**
- * Take a read lock. Loop until the lock is held.
- *
- * @param rwl
- * A pointer to a rwlock structure.
- */
-static inline void
-rte_rwlock_read_lock(rte_rwlock_t *rwl)
-{
- int32_t x;
- int success = 0;
-
- while (success == 0) {
- x = rwl->cnt;
- /* write lock is held */
- if (x < 0) {
- rte_pause();
- continue;
- }
- success = rte_atomic32_cmpset((volatile uint32_t *)&rwl->cnt,
- x, x + 1);
- }
-}
-
-/**
- * Release a read lock.
- *
- * @param rwl
- * A pointer to the rwlock structure.
- */
-static inline void
-rte_rwlock_read_unlock(rte_rwlock_t *rwl)
-{
- rte_atomic32_dec((rte_atomic32_t *)(intptr_t)&rwl->cnt);
-}
-
-/**
- * Take a write lock. Loop until the lock is held.
- *
- * @param rwl
- * A pointer to a rwlock structure.
- */
-static inline void
-rte_rwlock_write_lock(rte_rwlock_t *rwl)
-{
- int32_t x;
- int success = 0;
-
- while (success == 0) {
- x = rwl->cnt;
- /* a lock is held */
- if (x != 0) {
- rte_pause();
- continue;
- }
- success = rte_atomic32_cmpset((volatile uint32_t *)&rwl->cnt,
- 0, -1);
- }
-}
-
-/**
- * Release a write lock.
- *
- * @param rwl
- * A pointer to a rwlock structure.
- */
-static inline void
-rte_rwlock_write_unlock(rte_rwlock_t *rwl)
-{
- rte_atomic32_inc((rte_atomic32_t *)(intptr_t)&rwl->cnt);
-}
-
-#ifdef __cplusplus
-}
-#endif
-
-#endif /* _RTE_RWLOCK_H_ */
--
1.9.5.msysgit.0
^ permalink raw reply [flat|nested] 27+ messages in thread
* [dpdk-dev] [PATCH v3 3/3] test scaling of HTM lock elision protecting rte_hash
2015-06-19 11:08 ` [dpdk-dev] [PATCH v3 " Roman Dementiev
2015-06-19 11:08 ` [dpdk-dev] [PATCH v3 1/3] spinlock: " Roman Dementiev
2015-06-19 11:08 ` [dpdk-dev] [PATCH v3 2/3] rwlock: " Roman Dementiev
@ 2015-06-19 11:08 ` Roman Dementiev
2015-06-19 14:38 ` [dpdk-dev] [PATCH v3 0/3] add support for HTM lock elision for x86 Thomas Monjalon
3 siblings, 0 replies; 27+ messages in thread
From: Roman Dementiev @ 2015-06-19 11:08 UTC (permalink / raw)
To: dev
This patch adds a new auto-test for testing the scaling
of concurrent inserts into rte_hash when protected by
the normal spinlock vs. the spinlock with HTM lock
elision. The test also benchmarks single-threaded
access without any locks.
Signed-off-by: Roman Dementiev <roman.dementiev@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
---
app/test/Makefile | 1 +
app/test/test_hash_scaling.c | 223 +++++++++++++++++++++++++++++++++++++++++++
2 files changed, 224 insertions(+)
create mode 100644 app/test/test_hash_scaling.c
diff --git a/app/test/Makefile b/app/test/Makefile
index 5cf8296..2e2758c 100644
--- a/app/test/Makefile
+++ b/app/test/Makefile
@@ -84,6 +84,7 @@ SRCS-y += test_memcpy_perf.c
SRCS-$(CONFIG_RTE_LIBRTE_HASH) += test_hash.c
SRCS-$(CONFIG_RTE_LIBRTE_HASH) += test_hash_perf.c
SRCS-$(CONFIG_RTE_LIBRTE_HASH) += test_hash_functions.c
+SRCS-$(CONFIG_RTE_LIBRTE_HASH) += test_hash_scaling.c
SRCS-$(CONFIG_RTE_LIBRTE_LPM) += test_lpm.c
SRCS-$(CONFIG_RTE_LIBRTE_LPM) += test_lpm6.c
diff --git a/app/test/test_hash_scaling.c b/app/test/test_hash_scaling.c
new file mode 100644
index 0000000..682ae94
--- /dev/null
+++ b/app/test/test_hash_scaling.c
@@ -0,0 +1,223 @@
+/*-
+ * BSD LICENSE
+ *
+ * Copyright(c) 2015 Intel Corporation. All rights reserved.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ * * Neither the name of Intel Corporation nor the names of its
+ * contributors may be used to endorse or promote products derived
+ * from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <rte_cycles.h>
+#include <rte_hash.h>
+#include <rte_hash_crc.h>
+#include <rte_spinlock.h>
+#include <rte_launch.h>
+
+#include "test.h"
+
+/*
+ * Check condition and return an error if true. Assumes that "handle" is the
+ * name of the hash structure pointer to be freed.
+ */
+#define RETURN_IF_ERROR(cond, str, ...) do { \
+ if (cond) { \
+ printf("ERROR line %d: " str "\n", __LINE__, \
+ ##__VA_ARGS__); \
+ if (handle) \
+ rte_hash_free(handle); \
+ return -1; \
+ } \
+} while (0)
+
+enum locking_mode_t {
+ NORMAL_LOCK,
+ LOCK_ELISION,
+ NULL_LOCK
+};
+
+struct {
+ uint32_t num_iterations;
+ struct rte_hash *h;
+ rte_spinlock_t *lock;
+ int locking_mode;
+} tbl_scaling_test_params;
+
+static rte_atomic64_t gcycles;
+
+static int test_hash_scaling_worker(__attribute__((unused)) void *arg)
+{
+ uint64_t i, key;
+ uint32_t thr_id = rte_sys_gettid();
+ uint64_t begin, cycles = 0;
+
+ switch (tbl_scaling_test_params.locking_mode) {
+
+ case NORMAL_LOCK:
+
+ for (i = 0; i < tbl_scaling_test_params.num_iterations; i++) {
+ /* different threads get different keys because
+ we use the thread-id in the key computation
+ */
+ key = rte_hash_crc(&i, sizeof(i), thr_id);
+ begin = rte_rdtsc_precise();
+ rte_spinlock_lock(tbl_scaling_test_params.lock);
+ rte_hash_add_key(tbl_scaling_test_params.h, &key);
+ rte_spinlock_unlock(tbl_scaling_test_params.lock);
+ cycles += rte_rdtsc_precise() - begin;
+ }
+ break;
+
+ case LOCK_ELISION:
+
+ for (i = 0; i < tbl_scaling_test_params.num_iterations; i++) {
+ key = rte_hash_crc(&i, sizeof(i), thr_id);
+ begin = rte_rdtsc_precise();
+ rte_spinlock_lock_tm(tbl_scaling_test_params.lock);
+ rte_hash_add_key(tbl_scaling_test_params.h, &key);
+ rte_spinlock_unlock_tm(tbl_scaling_test_params.lock);
+ cycles += rte_rdtsc_precise() - begin;
+ }
+ break;
+
+ default:
+
+ for (i = 0; i < tbl_scaling_test_params.num_iterations; i++) {
+ key = rte_hash_crc(&i, sizeof(i), thr_id);
+ begin = rte_rdtsc_precise();
+ rte_hash_add_key(tbl_scaling_test_params.h, &key);
+ cycles += rte_rdtsc_precise() - begin;
+ }
+ }
+
+ rte_atomic64_add(&gcycles, cycles);
+
+ return 0;
+}
+
+/*
+ * Do scalability perf tests.
+ */
+static int
+test_hash_scaling(int locking_mode)
+{
+ static unsigned calledCount = 1;
+ uint32_t num_iterations = 1024*1024;
+ uint64_t i, key;
+ struct rte_hash_parameters hash_params = {
+ .entries = num_iterations*2,
+ .bucket_entries = 16,
+ .key_len = sizeof(key),
+ .hash_func = rte_hash_crc,
+ .hash_func_init_val = 0,
+ .socket_id = rte_socket_id(),
+ };
+ struct rte_hash *handle;
+ char name[RTE_HASH_NAMESIZE];
+ rte_spinlock_t lock;
+
+ rte_spinlock_init(&lock);
+
+ snprintf(name, 32, "test%u", calledCount++);
+ hash_params.name = name;
+
+ handle = rte_hash_create(&hash_params);
+ RETURN_IF_ERROR(handle == NULL, "hash creation failed");
+
+ tbl_scaling_test_params.num_iterations =
+ num_iterations/rte_lcore_count();
+ tbl_scaling_test_params.h = handle;
+ tbl_scaling_test_params.lock = &lock;
+ tbl_scaling_test_params.locking_mode = locking_mode;
+
+ rte_atomic64_init(&gcycles);
+ rte_atomic64_clear(&gcycles);
+
+ /* fill up to initial size */
+ for (i = 0; i < num_iterations; i++) {
+ key = rte_hash_crc(&i, sizeof(i), 0xabcdabcd);
+ rte_hash_add_key(tbl_scaling_test_params.h, &key);
+ }
+
+ rte_eal_mp_remote_launch(test_hash_scaling_worker, NULL, CALL_MASTER);
+ rte_eal_mp_wait_lcore();
+
+ unsigned long long int cycles_per_operation =
+ rte_atomic64_read(&gcycles)/
+ (tbl_scaling_test_params.num_iterations*rte_lcore_count());
+ const char *lock_name;
+
+ switch (locking_mode) {
+ case NORMAL_LOCK:
+ lock_name = "normal spinlock";
+ break;
+ case LOCK_ELISION:
+ lock_name = "lock elision";
+ break;
+ default:
+ lock_name = "null lock";
+ }
+ printf("--------------------------------------------------------\n");
+ printf("Cores: %d; %s mode -> cycles per operation: %llu\n",
+ rte_lcore_count(), lock_name, cycles_per_operation);
+ printf("--------------------------------------------------------\n");
+ /* CSV output */
+ printf(">>>%d,%s,%llu\n", rte_lcore_count(), lock_name,
+ cycles_per_operation);
+
+ rte_hash_free(handle);
+ return 0;
+}
+
+static int
+test_hash_scaling_main(void)
+{
+ int r = 0;
+
+ if (rte_lcore_count() == 1)
+ r = test_hash_scaling(NULL_LOCK);
+
+ if (r == 0)
+ r = test_hash_scaling(NORMAL_LOCK);
+
+ if (!rte_tm_supported()) {
+ printf("Hardware transactional memory (lock elision) is NOT supported\n");
+ return r;
+ }
+ printf("Hardware transactional memory (lock elision) is supported\n");
+
+ if (r == 0)
+ r = test_hash_scaling(LOCK_ELISION);
+
+ return r;
+}
+
+
+static struct test_command hash_scaling_cmd = {
+ .command = "hash_scaling_autotest",
+ .callback = test_hash_scaling_main,
+};
+REGISTER_TEST_COMMAND(hash_scaling_cmd);
--
1.9.5.msysgit.0
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [dpdk-dev] [PATCH v2 1/3] spinlock: add support for HTM lock elision for x86
2015-06-18 10:00 ` Bruce Richardson
@ 2015-06-19 13:35 ` Thomas Monjalon
2015-06-22 15:32 ` Adrien Mazarguil
2015-06-29 9:34 ` [dpdk-dev] [PATCH] eal: fix cpu_feature_table[] compilation with -pedantic Adrien Mazarguil
0 siblings, 2 replies; 27+ messages in thread
From: Thomas Monjalon @ 2015-06-19 13:35 UTC (permalink / raw)
To: Bruce Richardson; +Cc: dev
2015-06-18 11:00, Bruce Richardson:
> On Wed, Jun 17, 2015 at 11:29:49PM +0200, Thomas Monjalon wrote:
> > Introducing rte_cpuflags.h in this header breaks the compilation of
> > the mlx4 pmd with CONFIG_RTE_LIBRTE_MLX4_DEBUG=y.
> > Indeed, it triggers the -pedantic flag which is not supported by rte_cpuflags.h.
> > Maybe it's time to fix this header?
>
> Do all our headers need to support the pedantic C flag? I don't believe this
> was a previous requirement for header files. The mlx4 driver appears to be the
> only place in the dpdk.org codebase where the flag actually appears - and even
> > then the flag is disabled in mlx.c where the dpdk headers are actually included.
>
> > /* DPDK headers don't like -pedantic. */
> > #ifdef PEDANTIC
> > #pragma GCC diagnostic ignored "-pedantic"
> > #endif
> > #include <rte_config.h>
> .....
You're right. It seems this disabling doesn't work.
> I'm just not convinced that rte_cpuflags needs to be fixed at all here.
Yes, it's probably simpler to remove the -pedantic flag.
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [dpdk-dev] [PATCH v3 0/3] add support for HTM lock elision for x86
2015-06-19 11:08 ` [dpdk-dev] [PATCH v3 " Roman Dementiev
` (2 preceding siblings ...)
2015-06-19 11:08 ` [dpdk-dev] [PATCH v3 3/3] test scaling of HTM lock elision protecting rte_hash Roman Dementiev
@ 2015-06-19 14:38 ` Thomas Monjalon
3 siblings, 0 replies; 27+ messages in thread
From: Thomas Monjalon @ 2015-06-19 14:38 UTC (permalink / raw)
To: Roman Dementiev; +Cc: dev
2015-06-19 13:08, Roman Dementiev:
> This series of patches adds methods that use hardware memory transactions (HTM)
> on fast-path for DPDK locks (a.k.a. lock elision). Here the methods are
> implemented for x86 using Restricted Transactional Memory instructions (Intel(r)
> Transactional Synchronization Extensions). The implementation falls back to
> the normal DPDK lock if HTM is not available or memory transactions fail. This
> is not a replacement for ALL lock usages since not all critical sections
> protected by locks are friendly to HTM. For example, an attempt to perform
> a HW I/O operation inside a hardware memory transaction always aborts
> the transaction since the CPU is not able to roll-back should the transaction
> fail. Therefore, hardware transactional locks are not advised to be used around
> rte_eth_rx_burst() and rte_eth_tx_burst() calls.
>
> Acked-by: Bruce Richardson <bruce.richardson@intel.com>
>
> v3 changes
> -resolved a conflict in app/test/Makefile
> -don't use angle brackets for rte_common.h include
>
> v2 changes
> -added a documentation note about hardware limitations
>
>
> Roman Dementiev (3):
> spinlock: add support for HTM lock elision for x86
> rwlock: add support for HTM lock elision for x86
> test scaling of HTM lock elision protecting rte_hash
Applied, thanks
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [dpdk-dev] [PATCH v2 1/3] spinlock: add support for HTM lock elision for x86
2015-06-19 13:35 ` Thomas Monjalon
@ 2015-06-22 15:32 ` Adrien Mazarguil
2015-06-29 9:34 ` [dpdk-dev] [PATCH] eal: fix cpu_feature_table[] compilation with -pedantic Adrien Mazarguil
1 sibling, 0 replies; 27+ messages in thread
From: Adrien Mazarguil @ 2015-06-22 15:32 UTC (permalink / raw)
To: Thomas Monjalon; +Cc: dev
On Fri, Jun 19, 2015 at 03:35:38PM +0200, Thomas Monjalon wrote:
> 2015-06-18 11:00, Bruce Richardson:
> > On Wed, Jun 17, 2015 at 11:29:49PM +0200, Thomas Monjalon wrote:
> > > Introducing rte_cpuflags.h in this header breaks the compilation of
> > > the mlx4 pmd with CONFIG_RTE_LIBRTE_MLX4_DEBUG=y.
> > > Indeed, it triggers the -pedantic flag which is not supported by rte_cpuflags.h.
> > > Maybe it's time to fix this header?
> >
> > Do all our headers need to support the pedantic C flag? I don't believe this
> > was a previous requirement for header files. The mlx4 driver appears to be the
> > only place in the dpdk.org codebase where the flag actually appears - and even
> > then the flag is disabled in mlx.c where the dpdk headers are actually included.
> >
> > /* DPDK headers don't like -pedantic. */
> > #ifdef PEDANTIC
> > #pragma GCC diagnostic ignored "-pedantic"
> > #endif
> > #include <rte_config.h>
> > .....
>
> You're right. It seems this disabling doesn't work.
Well, it used to work, at least sufficiently until now.
The mlx4 driver started as an out-of-tree development long ago; this flag has
been there from the beginning and was left around to maintain a clean code base
in the PMD itself. Unfortunately, it had to include a few headers that were
not quite ready to handle such constraints, hence the somewhat ugly #pragma
workarounds left until these headers could be fixed someday.
> > I'm just not convinced that rte_cpuflags needs to be fixed at all here.
>
> Yes, it's probably simpler to remove the -pedantic flag.
I'm not going to argue against that, as a PMD's Makefile is obviously not
the right place to add a -pedantic parameter anyway.
However, outside of PMD usage, I think public API headers (I'm not talking
about the entire DPDK code base, just headers) should handle any kind of
warning a user application might throw at them for its own use (-pedantic and
other -Wstuff, I'd even say -std=c99 for strict ISO C compliance), as is the
case for the C library and most, if not all, system-wide headers.
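A minimal sketch of the kind of strict-mode translation unit meant here (the
compile flags in the comment are an assumption, not a DPDK requirement):

/* Hypothetical consumer built with e.g. -std=c99 -pedantic -Wall;
 * public DPDK headers should compile cleanly in such a unit. */
#include <rte_spinlock.h>

int
main(void)
{
        rte_spinlock_t sl;

        rte_spinlock_init(&sl);
        rte_spinlock_lock(&sl);
        rte_spinlock_unlock(&sl);
        return 0;
}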
--
Adrien Mazarguil
6WIND
^ permalink raw reply [flat|nested] 27+ messages in thread
* [dpdk-dev] [PATCH] eal: fix cpu_feature_table[] compilation with -pedantic
2015-06-19 13:35 ` Thomas Monjalon
2015-06-22 15:32 ` Adrien Mazarguil
@ 2015-06-29 9:34 ` Adrien Mazarguil
2015-06-29 12:10 ` David Marchand
1 sibling, 1 reply; 27+ messages in thread
From: Adrien Mazarguil @ 2015-06-29 9:34 UTC (permalink / raw)
To: Thomas Monjalon; +Cc: dev
Since the commit below includes rte_cpuflags.h in rte_spinlock.h,
compilation of the mlx4 driver fails when CONFIG_RTE_LIBRTE_MLX4_DEBUG=y.
This mode adds -pedantic to the compiler's command line for mlx4, which
complains about the static definition of an empty cpu_feature_table[] in
common rte_cpuflags.h, then about its redefinition as a larger array in
arch-specific rte_cpuflags.h.
While DPDK does not officially support -pedantic internally, external
applications may enable it and include rte_spinlock.h from the public API.
Instead of removing -pedantic from mlx4, this commit fixes rte_cpuflags.h.
Fixes: ba7468997ea6 ("spinlock: add HTM lock elision for x86")
Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
---
lib/librte_eal/common/include/generic/rte_cpuflags.h | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/lib/librte_eal/common/include/generic/rte_cpuflags.h b/lib/librte_eal/common/include/generic/rte_cpuflags.h
index a04e021..61c4db1 100644
--- a/lib/librte_eal/common/include/generic/rte_cpuflags.h
+++ b/lib/librte_eal/common/include/generic/rte_cpuflags.h
@@ -74,8 +74,12 @@ struct feature_entry {
/**
* An array that holds feature entries
+ *
+ * Defined in arch-specific rte_cpuflags.h.
*/
+#ifdef __DOXYGEN__
static const struct feature_entry cpu_feature_table[];
+#endif
/**
* Execute CPUID instruction and get contents of a specific register
--
2.1.0
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [dpdk-dev] [PATCH] eal: fix cpu_feature_table[] compilation with -pedantic
2015-06-29 9:34 ` [dpdk-dev] [PATCH] eal: fix cpu_feature_table[] compilation with -pedantic Adrien Mazarguil
@ 2015-06-29 12:10 ` David Marchand
2015-06-29 12:19 ` Thomas Monjalon
0 siblings, 1 reply; 27+ messages in thread
From: David Marchand @ 2015-06-29 12:10 UTC (permalink / raw)
To: Adrien Mazarguil; +Cc: dev
On Mon, Jun 29, 2015 at 11:34 AM, Adrien Mazarguil <
adrien.mazarguil@6wind.com> wrote:
> Since the commit below includes rte_cpuflags.h in rte_spinlock.h,
> compilation of the mlx4 driver fails when CONFIG_RTE_LIBRTE_MLX4_DEBUG=y.
>
> This mode adds -pedantic to the compiler's command line for mlx4, which
> complains about the static definition of an empty cpu_feature_table[] in
> common rte_cpuflags.h, then about its redefinition as a larger array in
> arch-specific rte_cpuflags.h.
>
> While DPDK does not officially support -pedantic internally, external
> applications may enable it and include rte_spinlock.h from the public API.
>
> Instead of removing -pedantic from mlx4, this commit fixes rte_cpuflags.h.
>
> Fixes: ba7468997ea6 ("spinlock: add HTM lock elision for x86")
>
> Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
>
Acked-by: David Marchand <david.marchand@6wind.com>
--
David Marchand
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [dpdk-dev] [PATCH] eal: fix cpu_feature_table[] compilation with -pedantic
2015-06-29 12:10 ` David Marchand
@ 2015-06-29 12:19 ` Thomas Monjalon
0 siblings, 0 replies; 27+ messages in thread
From: Thomas Monjalon @ 2015-06-29 12:19 UTC (permalink / raw)
To: Adrien Mazarguil; +Cc: dev
2015-06-29 14:10, David Marchand:
> On Mon, Jun 29, 2015 at 11:34 AM, Adrien Mazarguil <
> adrien.mazarguil@6wind.com> wrote:
>
> > Since the commit below includes rte_cpuflags.h in rte_spinlock.h,
> > compilation of the mlx4 driver fails when CONFIG_RTE_LIBRTE_MLX4_DEBUG=y.
> >
> > This mode adds -pedantic to the compiler's command line for mlx4, which
> > complains about the static definition of an empty cpu_feature_table[] in
> > common rte_cpuflags.h, then about its redefinition as a larger array in
> > arch-specific rte_cpuflags.h.
> >
> > While DPDK does not officially support -pedantic internally, external
> > applications may enable it and include rte_spinlock.h from the public API.
> >
> > Instead of removing -pedantic from mlx4, this commit fixes rte_cpuflags.h.
> >
> > Fixes: ba7468997ea6 ("spinlock: add HTM lock elision for x86")
> >
> > Signed-off-by: Adrien Mazarguil <adrien.mazarguil@6wind.com>
>
> Acked-by: David Marchand <david.marchand@6wind.com>
Applied, thanks
^ permalink raw reply [flat|nested] 27+ messages in thread
end of thread, other threads:[~2015-06-29 12:20 UTC | newest]
Thread overview: 27+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2015-06-02 13:11 [dpdk-dev] add support for HTM lock elision for x86 Roman Dementiev
2015-06-02 13:11 ` [dpdk-dev] [PATCH 1/3] spinlock: " Roman Dementiev
2015-06-02 13:11 ` [dpdk-dev] [PATCH 2/3] rwlock: " Roman Dementiev
2015-06-02 13:11 ` [dpdk-dev] [PATCH 3/3] test scaling of HTM lock elision protecting rte_hash Roman Dementiev
[not found] ` <CADNuJVpeKa9-R7WHkoCzw82vpYd=3XmhOoz2JfGsFLzDW+F5UQ@mail.gmail.com>
2015-06-02 13:39 ` [dpdk-dev] add support for HTM lock elision for x86 Dementiev, Roman
2015-06-02 14:55 ` Roman Dementiev
2015-06-03 18:40 ` Stephen Hemminger
2015-06-05 15:12 ` Roman Dementiev
2015-06-16 17:16 ` [dpdk-dev] [PATCH v2 0/3] " Roman Dementiev
2015-06-16 17:16 ` [dpdk-dev] [PATCH v2 1/3] spinlock: " Roman Dementiev
2015-06-17 21:29 ` Thomas Monjalon
2015-06-18 10:00 ` Bruce Richardson
2015-06-19 13:35 ` Thomas Monjalon
2015-06-22 15:32 ` Adrien Mazarguil
2015-06-29 9:34 ` [dpdk-dev] [PATCH] eal: fix cpu_feature_table[] compilation with -pedantic Adrien Mazarguil
2015-06-29 12:10 ` David Marchand
2015-06-29 12:19 ` Thomas Monjalon
2015-06-16 17:16 ` [dpdk-dev] [PATCH v2 2/3] rwlock: add support for HTM lock elision for x86 Roman Dementiev
2015-06-16 17:16 ` [dpdk-dev] [PATCH v2 3/3] test scaling of HTM lock elision protecting rte_hash Roman Dementiev
2015-06-17 13:05 ` [dpdk-dev] [PATCH v2 0/3] add support for HTM lock elision for x86 Bruce Richardson
2015-06-17 13:14 ` Thomas Monjalon
2015-06-17 13:48 ` Bruce Richardson
2015-06-19 11:08 ` [dpdk-dev] [PATCH v3 " Roman Dementiev
2015-06-19 11:08 ` [dpdk-dev] [PATCH v3 1/3] spinlock: " Roman Dementiev
2015-06-19 11:08 ` [dpdk-dev] [PATCH v3 2/3] rwlock: " Roman Dementiev
2015-06-19 11:08 ` [dpdk-dev] [PATCH v3 3/3] test scaling of HTM lock elision protecting rte_hash Roman Dementiev
2015-06-19 14:38 ` [dpdk-dev] [PATCH v3 0/3] add support for HTM lock elision for x86 Thomas Monjalon