From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Sun, 8 May 2022 13:56:00 +0200
Subject: Re: [PATCH v5] eal: add seqlock
To: fengchengwen, Mattias Rönnblom, dev@dpdk.org
Cc: Thomas Monjalon, David Marchand, onar.olsen@ericsson.com, Honnappa.Nagarahalli@arm.com, nd@arm.com, konstantin.ananyev@intel.com, mb@smartsharesystems.com, stephen@networkplumber.org, Ola Liljedahl
References: <3e9b0560-9306-0826-4911-1cf0e72c30d9@lysator.liu.se>
	<20220501140327.265128-1-mattias.ronnblom@ericsson.com>
From: Mattias Rönnblom
List-Id: DPDK patches and discussions

On 2022-05-06 03:26, fengchengwen wrote:
> On 2022/5/1 22:03, Mattias Rönnblom wrote:
>> A sequence lock (seqlock) is a synchronization primitive which allows
>> for data-race free, low-overhead, high-frequency reads, especially for
>> data structures shared across many cores and which are updated
>> relatively infrequently.
>>
>
> ...
>
>> +}
>> +
>> +static void
>> +reader_stop(struct reader *reader)
>> +{
>> +	__atomic_store_n(&reader->stop, 1, __ATOMIC_RELAXED);
>> +}
>> +
>> +#define NUM_WRITERS (2) /* main lcore + one worker */
>> +#define MIN_NUM_READERS (2)
>> +#define MAX_READERS (RTE_MAX_LCORE - NUM_WRITERS - 1)
>
> Why minus 1 ?
> Suggest define MAX_READERS RTE_MAX_LCORE to avoid underflow with small size VM.
>

OK.

>> +#define MIN_LCORE_COUNT (NUM_WRITERS + MIN_NUM_READERS)
>> +
>> +/* Only a compile-time test */
>> +static rte_seqlock_t __rte_unused static_init_lock = RTE_SEQLOCK_INITIALIZER;
>> +
>> +static int
>> +test_seqlock(void)
>> +{
>> +	struct reader readers[MAX_READERS];
>> +	unsigned int num_readers;
>> +	unsigned int num_lcores;
>> +	unsigned int i;
>> +	unsigned int lcore_id;
>> +	unsigned int reader_lcore_ids[MAX_READERS];
>> +	unsigned int worker_writer_lcore_id = 0;
>> +	int rc = TEST_SUCCESS;
>> +
>> +	num_lcores = rte_lcore_count();
>> +
>> +	if (num_lcores < MIN_LCORE_COUNT) {
>> +		printf("Too few cores to run test. Skipping.\n");
>> +		return TEST_SKIPPED;
>> +	}
>> +
>> +	num_readers = num_lcores - NUM_WRITERS;
>> +
>> +	struct data *data = rte_zmalloc(NULL, sizeof(struct data), 0);
>
> Please check whether the value of data is NULL.
>

OK.

>> +
>> +	i = 0;
>> +	RTE_LCORE_FOREACH_WORKER(lcore_id) {
>> +		if (i == 0) {
>> +			rte_eal_remote_launch(writer_run, data, lcore_id);
>> +			worker_writer_lcore_id = lcore_id;
>> +		} else {
>> +			unsigned int reader_idx = i - 1;
>> +			struct reader *reader = &readers[reader_idx];
>> +
>> +			reader->data = data;
>> +			reader->stop = 0;
>> +
>> +			rte_eal_remote_launch(reader_run, reader, lcore_id);
>> +			reader_lcore_ids[reader_idx] = lcore_id;
>> +		}
>> +		i++;
>> +	}
>> +
>> +	if (writer_run(data) != 0 ||
>> +	    rte_eal_wait_lcore(worker_writer_lcore_id) != 0)
>> +		rc = TEST_FAILED;
>> +
>> +	for (i = 0; i < num_readers; i++) {
>> +		reader_stop(&readers[i]);
>> +		if (rte_eal_wait_lcore(reader_lcore_ids[i]) != 0)
>> +			rc = TEST_FAILED;
>> +	}
>> +
>
> Please free data memory.
>

OK.

>> +	return rc;
>> +}
>> +
>> +REGISTER_TEST_COMMAND(seqlock_autotest, test_seqlock);
>> diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
>> index 4245b9635c..f23e33ae30 100644
>> --- a/doc/api/doxy-api-index.md
>> +++ b/doc/api/doxy-api-index.md
>> @@ -77,6 +77,7 @@ The public API headers are grouped by topics:
>>    [rwlock] (@ref rte_rwlock.h),
>>    [spinlock] (@ref rte_spinlock.h),
>>    [ticketlock] (@ref rte_ticketlock.h),
>> +  [seqlock] (@ref rte_seqlock.h),
>>    [RCU] (@ref rte_rcu_qsbr.h)
>>
>
> ...
>
>> + */
>> +__rte_experimental
>> +static inline bool
>> +rte_seqlock_read_retry(const rte_seqlock_t *seqlock, uint32_t begin_sn)
>> +{
>> +	uint32_t end_sn;
>> +
>> +	/* An odd sequence number means the protected data was being
>> +	 * modified already at the point of the rte_seqlock_read_begin()
>> +	 * call.
>> +	 */
>> +	if (unlikely(begin_sn & 1))
>> +		return true;
>> +
>> +	/* make sure the data loads happen before the sn load */
>> +	rte_atomic_thread_fence(__ATOMIC_ACQUIRE);
>
> In ARMv8, rte_atomic_thread_fence(__ATOMIC_ACQUIRE) and rte_smp_rmb() both output 'dmb ishld'.
> Suggest use rte_smp_rmb(), please see below comment.
>
>> +
>> +	end_sn = __atomic_load_n(&seqlock->sn, __ATOMIC_RELAXED);
>> +
>> +	/* A writer incremented the sequence number during this read
>> +	 * critical section.
>> +	 */
>> +	if (unlikely(begin_sn != end_sn))
>> +		return true;
>> +
>> +	return false;
>> +}
>> +
>> +/**
>> + * @warning
>> + * @b EXPERIMENTAL: this API may change without prior notice.
>> + *
>> + * Begin a write-side critical section.
>> + *
>> + * A call to this function acquires the write lock associated with @p
>> + * seqlock, and marks the beginning of a write-side critical section.
>> + *
>> + * After having called this function, the caller may go on to modify
>> + * (both read and write) the protected data, in an atomic or
>> + * non-atomic manner.
>> + *
>> + * After the necessary updates have been performed, the application
>> + * calls rte_seqlock_write_unlock().
>> + *
>> + * This function is not preemption-safe in the sense that preemption
>> + * of the calling thread may block reader progress until the writer
>> + * thread is rescheduled.
>> + *
>> + * Unlike rte_seqlock_read_begin(), each call made to
>> + * rte_seqlock_write_lock() must be matched with an unlock call.
>> + *
>> + * @param seqlock
>> + *   A pointer to the seqlock.
>> + *
>> + * @see rte_seqlock_write_unlock()
>> + */
>> +__rte_experimental
>> +static inline void
>> +rte_seqlock_write_lock(rte_seqlock_t *seqlock)
>> +{
>> +	uint32_t sn;
>> +
>> +	/* to synchronize with other writers */
>> +	rte_spinlock_lock(&seqlock->lock);
>> +
>> +	sn = seqlock->sn + 1;
>> +
>> +	__atomic_store_n(&seqlock->sn, sn, __ATOMIC_RELAXED);
>> +
>> +	/* __ATOMIC_RELEASE to prevent stores after (in program order)
>> +	 * from happening before the sn store.
>> +	 */
>> +	rte_atomic_thread_fence(__ATOMIC_RELEASE);
>
> In ARMv8, rte_atomic_thread_fence(__ATOMIC_RELEASE) will output 'dmb ish', and
> rte_smp_wmb() will output 'dmb ishst'.
> Suggest use rte_smp_wmb(). I think only a store barrier is needed here.
>

(This has already been discussed further down in the mail thread, and I
have nothing to add.)

>> +}
>> +
>> +/**
>> + * @warning
>> + * @b EXPERIMENTAL: this API may change without prior notice.
>> + *
>> + * End a write-side critical section.
>> + *
>> + * A call to this function marks the end of the write-side critical
>> + * section, for @p seqlock. After this call has been made, the protected
>> + * data may no longer be modified.
>> + *
>> + * @param seqlock
>> + *   A pointer to the seqlock.
>> + *
>> + * @see rte_seqlock_write_lock()
>> + */
>> +__rte_experimental
>> +static inline void
>> +rte_seqlock_write_unlock(rte_seqlock_t *seqlock)
>> +{
>> +	uint32_t sn;
>> +
>> +	sn = seqlock->sn + 1;
>> +
>> +	/* synchronizes-with the load acquire in rte_seqlock_read_begin() */
>> +	__atomic_store_n(&seqlock->sn, sn, __ATOMIC_RELEASE);
>> +
>> +	rte_spinlock_unlock(&seqlock->lock);
>> +}
>> +
>> +#ifdef __cplusplus
>> +}
>> +#endif
>> +
>> +#endif /* _RTE_SEQLOCK_H_ */
>> diff --git a/lib/eal/version.map b/lib/eal/version.map
>> index b53eeb30d7..4a9d0ed899 100644
>> --- a/lib/eal/version.map
>> +++ b/lib/eal/version.map
>> @@ -420,6 +420,9 @@ EXPERIMENTAL {
>>  	rte_intr_instance_free;
>>  	rte_intr_type_get;
>>  	rte_intr_type_set;
>> +
>> +	# added in 22.07
>> +	rte_seqlock_init;
>>  };
>>
>>  INTERNAL {
>>
>
> Reviewed-by: Chengwen Feng
>

Thanks a lot for the review!