From mboxrd@z Thu Jan 1 00:00:00 1970
From: Mattias Rönnblom
To: Thomas Monjalon, David Marchand
Cc: Chengwen Feng, Mattias Rönnblom, Ola Liljedahl
Subject: [PATCH v9] eal: add seqlock
Date: Mon, 23 May 2022 16:23:46 +0200
Message-ID: <20220523142346.366902-1-mattias.ronnblom@ericsson.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20220523113111.366599-1-mattias.ronnblom@ericsson.com>
References: <20220523113111.366599-1-mattias.ronnblom@ericsson.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
List-Id: DPDK patches and discussions
A sequence lock (seqlock) is a synchronization primitive which allows
for data-race free, low-overhead, high-frequency reads, suitable for
data structures shared across many cores and which are updated
relatively infrequently.

A seqlock permits multiple parallel readers. A spinlock is used to
serialize writers. In cases where there is only a single writer, or
writer-writer synchronization is done by some external means, the
"raw" sequence counter type (and accompanying rte_seqcount_*()
functions) may be used instead.

To avoid resource reclamation and other issues, the data protected by
a seqlock is best off being self-contained (i.e., no pointers [except
to constant data]).

One way to think about seqlocks is that they provide means to perform
atomic operations on data objects larger than what the native atomic
machine instructions allow for.

DPDK seqlocks (and the underlying sequence counters) are not
preemption safe on the writer side. A thread preemption affects
performance, not correctness.

A seqlock contains a sequence number, which can be thought of as the
generation of the data it protects.

A reader will

  1. Load the sequence number (sn).
  2. Load, in arbitrary order, the seqlock-protected data.
  3. Load the sn again.
  4. Check if the first and second sn are equal, and even numbered.
     If they are not, discard the loaded data, and restart from 1.

The first three steps need to be ordered using suitable memory fences.

A writer will

  1. Take the spinlock, to serialize writer access.
  2. Load the sn.
  3. Store the original sn + 1 as the new sn.
  4. Perform load and stores to the seqlock-protected data.
  5. Store the original sn + 2 as the new sn.
  6. Release the spinlock.

Proper memory fencing is required to make sure the first sn store,
the data stores, and the second sn store appear to the reader in the
mentioned order.

The sn loads and stores must be atomic, but the data loads and stores
need not be.

The original seqlock design and implementation was done by Stephen
Hemminger. This is an independent implementation, using C11 atomics.

For more information on seqlocks, see
https://en.wikipedia.org/wiki/Seqlock
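
To make the above reader and writer procedures concrete, below is a
minimal usage sketch in terms of the new API. The "position" struct
and its fields are invented for illustration only and are not part of
this patch; includes (e.g., <rte_seqlock.h>) are omitted for brevity.

	/* Illustrative sketch only; struct position is hypothetical. */
	struct position {
		rte_seqlock_t lock;
		int64_t x;
		int64_t y;
	};

	static void
	position_update(struct position *pos, int64_t x, int64_t y)
	{
		/* Writer steps 1-3: take spinlock, store sn + 1. */
		rte_seqlock_write_lock(&pos->lock);
		/* Writer step 4: plain (non-atomic) stores are fine. */
		pos->x = x;
		pos->y = y;
		/* Writer steps 5-6: store sn + 2, release spinlock. */
		rte_seqlock_write_unlock(&pos->lock);
	}

	static void
	position_read(const struct position *pos, int64_t *x, int64_t *y)
	{
		uint32_t sn;

		do {
			/* Reader step 1: load the sequence number. */
			sn = rte_seqlock_read_begin(&pos->lock);
			/* Reader step 2: copy out the protected data. */
			*x = pos->x;
			*y = pos->y;
			/* Reader steps 3-4: reload sn and compare. */
		} while (rte_seqlock_read_retry(&pos->lock, sn));
	}

The loads between rte_seqlock_read_begin() and rte_seqlock_read_retry()
must only copy the data; the copies may be acted upon once the retry
function has returned false.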
---

PATCH v9:
  * Include <rte_compat.h> for __rte_experimental. The failure to do so
    caused build failures on 32-bit ARM.

PATCH v8:
  * Move the sequence counter into a separate header file.
  * Move the initialization code into the header files and eliminate
    the tiny rte_seqlock.c.

PATCH v7:
  * Factor out the sequence number into a separate type rte_seqcount_t.

PATCH v6:
  * Check for failed memory allocations in unit test.
  * Fix underflow issue in test case for small RTE_LCORE_MAX values.
  * Fix test case memory leak.

PATCH v5:
  * Add sequence lock section to MAINTAINERS.
  * Add entry in the release notes.
  * Add seqlock reference in the API index.
  * Fix meson build file indentation.
  * Use "increment" to describe how a writer changes the sequence number.
  * Remove compiler barriers from seqlock test.
  * Use appropriate macros (e.g., TEST_SUCCESS) for test return values.

PATCH v4:
  * Reverted to Linux kernel style naming on the read side.
  * Bail out early from the retry function if an odd sequence number
    is encountered.
  * Added experimental warnings in the API documentation.
  * Static initializer now uses named field initialization.
  * Various tweaks to API documentation (including the example).

PATCH v3:
  * Renamed both read and write-side critical section begin/end
    functions to better match rwlock naming, per Ola Liljedahl's
    suggestion.
  * Added 'extern "C"' guards for C++ compatibility.
  * Refer to the main lcore as the main lcore, and nothing else.

PATCH v2:
  * Skip instead of fail unit test in case too few lcores are available.
  * Use main lcore for testing, reducing the minimum number of lcores
    required to run the unit tests to four.
  * Consistently refer to sn field as the "sequence number" in the
    documentation.
  * Fixed spelling mistakes in documentation.

Updates since RFC:
  * Added API documentation.
  * Added link to Wikipedia article in the commit message.
  * Changed seqlock sequence number field from uint64_t (which was
    overkill) to uint32_t. The sn type needs to be sufficiently large
    to assure no reader will read a sn, access the data, and then read
    the same sn again only because the sn has been incremented enough
    times to have wrapped during the read, and arrived back at the
    original value.
  * Added RTE_SEQLOCK_INITIALIZER macro for static initialization.
  * Replaced the rte_seqlock struct + separate rte_seqlock_t typedef
    with an anonymous struct typedef:ed to rte_seqlock_t.

Acked-by: Morten Brørup
Acked-by: Konstantin Ananyev
Reviewed-by: Ola Liljedahl
Reviewed-by: Chengwen Feng
Signed-off-by: Mattias Rönnblom
---
 MAINTAINERS                            |   6 +
 app/test/meson.build                   |   2 +
 app/test/test_seqlock.c                | 190 +++++++++++++++++++
 doc/api/doxy-api-index.md              |   2 +
 doc/guides/rel_notes/release_22_07.rst |  18 ++
 lib/eal/include/meson.build            |   2 +
 lib/eal/include/rte_seqcount.h         | 252 ++++++++++++++++++++++++
 lib/eal/include/rte_seqlock.h          | 253 +++++++++++++++++++++++++
 8 files changed, 725 insertions(+)
 create mode 100644 app/test/test_seqlock.c
 create mode 100644 lib/eal/include/rte_seqcount.h
 create mode 100644 lib/eal/include/rte_seqlock.h

diff --git a/MAINTAINERS b/MAINTAINERS
index 17a0559ee7..458ea7e47c 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -263,6 +263,12 @@ M: Joyce Kong
 F: lib/eal/include/generic/rte_ticketlock.h
 F: app/test/test_ticketlock.c
 
+Sequence Lock
+M: Mattias Rönnblom
+F: lib/eal/include/rte_seqcount.h
+F: lib/eal/include/rte_seqlock.h
+F: app/test/test_seqlock.c
+
 Pseudo-random Number Generation
 M: Mattias Rönnblom
 F: lib/eal/include/rte_random.h
diff --git a/app/test/meson.build b/app/test/meson.build
index 15591ce5cf..48344e2071 100644
--- a/app/test/meson.build
+++ b/app/test/meson.build
@@ -125,6 +125,7 @@ test_sources = files(
         'test_rwlock.c',
         'test_sched.c',
         'test_security.c',
+        'test_seqlock.c',
         'test_service_cores.c',
         'test_spinlock.c',
         'test_stack.c',
@@ -216,6 +217,7 @@ fast_tests = [
         ['rwlock_rde_wro_autotest', true, true],
         ['sched_autotest', true, true],
         ['security_autotest', false, true],
+        ['seqlock_autotest', true, true],
         ['spinlock_autotest', true, true],
         ['stack_autotest', false, true],
         ['stack_lf_autotest', false, true],
diff --git a/app/test/test_seqlock.c b/app/test/test_seqlock.c
new file mode 100644
index 0000000000..cb1c1baa82
--- /dev/null
+++ b/app/test/test_seqlock.c
@@ -0,0 +1,190 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2022 Ericsson AB
+ */
+
+#include <inttypes.h>
+
+#include <rte_cycles.h>
+#include <rte_malloc.h>
+#include <rte_random.h>
+
+#include <rte_seqlock.h>
+
+#include "test.h"
+
+struct data {
+        rte_seqlock_t lock;
+
+        uint64_t a;
+        uint64_t b __rte_cache_aligned;
+        uint64_t c __rte_cache_aligned;
+} __rte_cache_aligned;
+
+struct reader {
+        struct data *data;
+        uint8_t stop;
+};
+
+#define WRITER_RUNTIME (2.0) /* s */
+
+#define WRITER_MAX_DELAY (100) /* us */
+
+#define INTERRUPTED_WRITER_FREQUENCY (1000)
+#define WRITER_INTERRUPT_TIME (1) /* us */
+
+static int
+writer_run(void *arg)
+{
+        struct data *data = arg;
+        uint64_t deadline;
+
+        deadline = rte_get_timer_cycles() +
+                WRITER_RUNTIME * rte_get_timer_hz();
+
+        while (rte_get_timer_cycles() < deadline) {
+                bool interrupted;
+                uint64_t new_value;
+                unsigned int delay;
+
+                new_value = rte_rand();
+
+                interrupted = rte_rand_max(INTERRUPTED_WRITER_FREQUENCY) == 0;
+
+                rte_seqlock_write_lock(&data->lock);
+
+                data->c = new_value;
+                data->b = new_value;
+
+                if (interrupted)
+                        rte_delay_us_block(WRITER_INTERRUPT_TIME);
+
+                data->a = new_value;
+
+                rte_seqlock_write_unlock(&data->lock);
+
+                delay = rte_rand_max(WRITER_MAX_DELAY);
+
+                rte_delay_us_block(delay);
+        }
+
+        return TEST_SUCCESS;
+}
+
+#define INTERRUPTED_READER_FREQUENCY (1000)
+#define READER_INTERRUPT_TIME (1000) /* us */
+
+static int
+reader_run(void *arg)
+{
+        struct reader *r = arg;
+        int rc = TEST_SUCCESS;
+
+        while (__atomic_load_n(&r->stop, __ATOMIC_RELAXED) == 0 &&
+                        rc == TEST_SUCCESS) {
+                struct data *data = r->data;
+                bool interrupted;
+                uint32_t sn;
+                uint64_t a;
+                uint64_t b;
+                uint64_t c;
+
+                interrupted = rte_rand_max(INTERRUPTED_READER_FREQUENCY) == 0;
+
+                do {
+                        sn = rte_seqlock_read_begin(&data->lock);
+
+                        a = data->a;
+                        if (interrupted)
+                                rte_delay_us_block(READER_INTERRUPT_TIME);
+                        c = data->c;
+                        b = data->b;
+
+                } while (rte_seqlock_read_retry(&data->lock, sn));
+
+                if (a != b || b != c) {
+                        printf("Reader observed inconsistent data values "
+                                "%" PRIu64 " %" PRIu64 " %" PRIu64 "\n",
+                                a, b, c);
+                        rc = TEST_FAILED;
+                }
+        }
+
+        return rc;
+}
+
+static void
+reader_stop(struct reader *reader)
+{
+        __atomic_store_n(&reader->stop, 1, __ATOMIC_RELAXED);
+}
+
+#define NUM_WRITERS (2) /* main lcore + one worker */
+#define MIN_NUM_READERS (2)
+#define MIN_LCORE_COUNT (NUM_WRITERS + MIN_NUM_READERS)
+
+/* Only a compile-time test */
+static rte_seqlock_t __rte_unused static_init_lock = RTE_SEQLOCK_INITIALIZER;
+
+static int
+test_seqlock(void)
+{
+        struct reader readers[RTE_MAX_LCORE];
+        unsigned int num_lcores;
+        unsigned int num_readers;
+        struct data *data;
+        unsigned int i;
+        unsigned int lcore_id;
+        unsigned int reader_lcore_ids[RTE_MAX_LCORE];
+        unsigned int worker_writer_lcore_id = 0;
+        int rc = TEST_SUCCESS;
+
+        num_lcores = rte_lcore_count();
+
+        if (num_lcores < MIN_LCORE_COUNT) {
+                printf("Too few cores to run test. Skipping.\n");
+                return TEST_SKIPPED;
+        }
+
+        num_readers = num_lcores - NUM_WRITERS;
+
+        data = rte_zmalloc(NULL, sizeof(struct data), 0);
+
+        if (data == NULL) {
+                printf("Failed to allocate memory for seqlock data\n");
+                return TEST_FAILED;
+        }
+
+        i = 0;
+        RTE_LCORE_FOREACH_WORKER(lcore_id) {
+                if (i == 0) {
+                        rte_eal_remote_launch(writer_run, data, lcore_id);
+                        worker_writer_lcore_id = lcore_id;
+                } else {
+                        unsigned int reader_idx = i - 1;
+                        struct reader *reader = &readers[reader_idx];
+
+                        reader->data = data;
+                        reader->stop = 0;
+
+                        rte_eal_remote_launch(reader_run, reader, lcore_id);
+                        reader_lcore_ids[reader_idx] = lcore_id;
+                }
+                i++;
+        }
+
+        if (writer_run(data) != 0 ||
+                        rte_eal_wait_lcore(worker_writer_lcore_id) != 0)
+                rc = TEST_FAILED;
+
+        for (i = 0; i < num_readers; i++) {
+                reader_stop(&readers[i]);
+                if (rte_eal_wait_lcore(reader_lcore_ids[i]) != 0)
+                        rc = TEST_FAILED;
+        }
+
+        rte_free(data);
+
+        return rc;
+}
+
+REGISTER_TEST_COMMAND(seqlock_autotest, test_seqlock);
diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
index 2b78d796ea..6dd219ef0d 100644
--- a/doc/api/doxy-api-index.md
+++ b/doc/api/doxy-api-index.md
@@ -78,6 +78,8 @@ The public API headers are grouped by topics:
   [rwlock]             (@ref rte_rwlock.h),
   [spinlock]           (@ref rte_spinlock.h),
   [ticketlock]         (@ref rte_ticketlock.h),
+  [seqlock]            (@ref rte_seqlock.h),
+  [seqcount]           (@ref rte_seqcount.h),
   [RCU]                (@ref rte_rcu_qsbr.h)
 
 - **CPU arch**:
diff --git a/doc/guides/rel_notes/release_22_07.rst b/doc/guides/rel_notes/release_22_07.rst
index e49cacecef..9c48769171 100644
--- a/doc/guides/rel_notes/release_22_07.rst
+++ b/doc/guides/rel_notes/release_22_07.rst
@@ -60,6 +60,24 @@ New Features
   Added an API which can get the number of in-flight packets in
   vhost async data path without using lock.
 
+* **Added Sequence Lock.**
+
+  Added a new synchronization primitive: the sequence lock
+  (seqlock). A seqlock allows for low overhead, parallel reads. The
+  DPDK seqlock uses a spinlock to serialize multiple writing threads.
+
+  In particular, seqlocks are useful for protecting data structures
+  which are read very frequently, by threads running on many different
+  cores, and modified relatively infrequently.
+
+  One way to think about seqlocks is that they provide means to
+  perform atomic operations on data objects larger than what the
+  native atomic machine instructions allow for.
+
+  In cases where there is only a single writer, or writer-writer
+  synchronization is performed by some means external to the seqlock,
+  direct use of the underlying sequence counter may be more suitable.
+
 * **Updated Intel iavf driver.**
 
   * Added Tx QoS queue rate limitation support.
diff --git a/lib/eal/include/meson.build b/lib/eal/include/meson.build
index 9700494816..40ebb5b63d 100644
--- a/lib/eal/include/meson.build
+++ b/lib/eal/include/meson.build
@@ -36,6 +36,8 @@ headers += files(
         'rte_per_lcore.h',
         'rte_random.h',
         'rte_reciprocal.h',
+        'rte_seqcount.h',
+        'rte_seqlock.h',
         'rte_service.h',
         'rte_service_component.h',
         'rte_string_fns.h',
diff --git a/lib/eal/include/rte_seqcount.h b/lib/eal/include/rte_seqcount.h
new file mode 100644
index 0000000000..67c7ee03a4
--- /dev/null
+++ b/lib/eal/include/rte_seqcount.h
@@ -0,0 +1,252 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2022 Ericsson AB
+ */
+
+#ifndef _RTE_SEQCOUNT_H_
+#define _RTE_SEQCOUNT_H_
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/**
+ * @file
+ * RTE Seqcount
+ *
+ * The sequence counter synchronizes a single writer with multiple,
+ * parallel readers. It is used as the basis for the RTE sequence
+ * lock.
+ *
+ * @see rte_seqlock.h
+ */
+
+#include <stdbool.h>
+#include <stdint.h>
+
+#include <rte_atomic.h>
+#include <rte_branch_prediction.h>
+#include <rte_compat.h>
+
+/**
+ * The RTE seqcount type.
+ */
+typedef struct {
+        uint32_t sn; /**< A sequence number for the protected data. */
+} rte_seqcount_t;
+
+/**
+ * A static seqcount initializer.
+ */
+#define RTE_SEQCOUNT_INITIALIZER \
+        {                        \
+                .sn = 0          \
+        }
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Initialize the sequence counter.
+ *
+ * @param seqcount
+ *   A pointer to the sequence counter.
+ */
+__rte_experimental
+static inline void
+rte_seqcount_init(rte_seqcount_t *seqcount)
+{
+        seqcount->sn = 0;
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Begin a read-side critical section.
+ *
+ * A call to this function marks the beginning of a read-side critical
+ * section, for @p seqcount.
+ *
+ * rte_seqcount_read_begin() returns a sequence number, which is later
+ * used in rte_seqcount_read_retry() to check if the protected data
+ * underwent any modifications during the read transaction.
+ *
+ * After (in program order) rte_seqcount_read_begin() has been called,
+ * the calling thread reads the protected data, for later use. The
+ * protected data read *must* be copied (either in pristine form, or
+ * in the form of some derivative), since the caller may only read the
+ * data from within the read-side critical section (i.e., after
+ * rte_seqcount_read_begin() and before rte_seqcount_read_retry()),
+ * but must not act upon the retrieved data while in the critical
+ * section, since it does not yet know if it is consistent.
+ *
+ * The protected data may be read using atomic and/or non-atomic
+ * operations.
+ *
+ * After (in program order) all required data loads have been
+ * performed, rte_seqcount_read_retry() should be called, marking
+ * the end of the read-side critical section.
+ *
+ * If rte_seqcount_read_retry() returns true, the just-read data is
+ * inconsistent and should be discarded. The caller has the option to
+ * either restart the whole procedure right away (i.e., calling
+ * rte_seqcount_read_begin() again), or do the same at some later time.
+ *
+ * If rte_seqcount_read_retry() returns false, the data was read
+ * atomically and the copied data is consistent.
+ *
+ * @param seqcount
+ *   A pointer to the sequence counter.
+ * @return
+ *   The seqcount sequence number for this critical section, to
+ *   later be passed to rte_seqcount_read_retry().
+ *
+ * @see rte_seqcount_read_retry()
+ */
+
+__rte_experimental
+static inline uint32_t
+rte_seqcount_read_begin(const rte_seqcount_t *seqcount)
+{
+        /* __ATOMIC_ACQUIRE to prevent loads after (in program order)
+         * from happening before the sn load. Synchronizes-with the
+         * store release in rte_seqcount_write_end().
+         */
+        return __atomic_load_n(&seqcount->sn, __ATOMIC_ACQUIRE);
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * End a read-side critical section.
+ *
+ * A call to this function marks the end of a read-side critical
+ * section, for @p seqcount. The application must supply the sequence
+ * number produced by the corresponding rte_seqcount_read_begin() call.
+ *
+ * After this function has been called, the caller should not access
+ * the protected data.
+ *
+ * In case rte_seqcount_read_retry() returns true, the just-read data
+ * was modified as it was being read and may be inconsistent, and thus
+ * should be discarded.
+ *
+ * In case this function returns false, the data is consistent and the
+ * set of atomic and non-atomic load operations performed between
+ * rte_seqcount_read_begin() and rte_seqcount_read_retry() were atomic,
+ * as a whole.
+ *
+ * @param seqcount
+ *   A pointer to the sequence counter.
+ * @param begin_sn
+ *   The sequence number returned by rte_seqcount_read_begin().
+ * @return
+ *   true or false, if the just-read seqcount-protected data was
+ *   inconsistent or consistent, respectively, at the time it was
+ *   read.
+ *
+ * @see rte_seqcount_read_begin()
+ */
+
+__rte_experimental
+static inline bool
+rte_seqcount_read_retry(const rte_seqcount_t *seqcount, uint32_t begin_sn)
+{
+        uint32_t end_sn;
+
+        /* An odd sequence number means the protected data was being
+         * modified already at the point of the rte_seqcount_read_begin()
+         * call.
+         */
+        if (unlikely(begin_sn & 1))
+                return true;
+
+        /* make sure the data loads happen before the sn load */
+        rte_atomic_thread_fence(__ATOMIC_ACQUIRE);
+
+        end_sn = __atomic_load_n(&seqcount->sn, __ATOMIC_RELAXED);
+
+        /* A writer incremented the sequence number during this read
+         * critical section.
+         */
+        if (unlikely(begin_sn != end_sn))
+                return true;
+
+        return false;
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Begin a write-side critical section.
+ *
+ * A call to this function marks the beginning of a write-side
+ * critical section, after which the caller may go on to modify (both
+ * read and write) the protected data, in an atomic or non-atomic
+ * manner.
+ *
+ * After the necessary updates have been performed, the application
+ * calls rte_seqcount_write_end().
+ *
+ * Multiple, parallel writers must use some external serialization.
+ *
+ * This function is not preemption-safe in the sense that preemption
+ * of the calling thread may block reader progress until the writer
+ * thread is rescheduled.
+ *
+ * @param seqcount
+ *   A pointer to the sequence counter.
+ *
+ * @see rte_seqcount_write_end()
+ */
+
+__rte_experimental
+static inline void
+rte_seqcount_write_begin(rte_seqcount_t *seqcount)
+{
+        uint32_t sn;
+
+        sn = seqcount->sn + 1;
+
+        __atomic_store_n(&seqcount->sn, sn, __ATOMIC_RELAXED);
+
+        /* __ATOMIC_RELEASE to prevent stores after (in program order)
+         * from happening before the sn store.
+         */
+        rte_atomic_thread_fence(__ATOMIC_RELEASE);
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * End a write-side critical section.
+ *
+ * A call to this function marks the end of the write-side critical
+ * section, for @p seqcount. After this call has been made, the
+ * protected data may no longer be modified.
+ *
+ * @param seqcount
+ *   A pointer to the sequence counter.
+ *
+ * @see rte_seqcount_write_begin()
+ */
+__rte_experimental
+static inline void
+rte_seqcount_write_end(rte_seqcount_t *seqcount)
+{
+        uint32_t sn;
+
+        sn = seqcount->sn + 1;
+
+        /* synchronizes-with the load acquire in rte_seqcount_read_begin() */
+        __atomic_store_n(&seqcount->sn, sn, __ATOMIC_RELEASE);
+}
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_SEQCOUNT_H_ */
diff --git a/lib/eal/include/rte_seqlock.h b/lib/eal/include/rte_seqlock.h
new file mode 100644
index 0000000000..5eb9023e31
--- /dev/null
+++ b/lib/eal/include/rte_seqlock.h
@@ -0,0 +1,253 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2022 Ericsson AB
+ */
+
+#ifndef _RTE_SEQLOCK_H_
+#define _RTE_SEQLOCK_H_
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/**
+ * @file
+ * RTE Seqlock
+ *
+ * A sequence lock (seqlock) is a synchronization primitive allowing
+ * multiple, parallel, readers to efficiently and safely (i.e., in a
+ * data-race free manner) access lock-protected data. The RTE seqlock
+ * permits multiple writers as well. A spinlock is used for
+ * writer-writer synchronization.
+ *
+ * A reader never blocks a writer. Very high frequency writes may
+ * prevent readers from making progress.
+ *
+ * A seqlock is not preemption-safe on the writer side. If a writer is
+ * preempted, it may block readers until the writer thread is allowed
+ * to continue. Heavy computations should be kept out of the
+ * writer-side critical section, to avoid delaying readers.
+ *
+ * Seqlocks are useful for data which are read by many cores, at a
+ * high frequency, and relatively infrequently written to.
+ *
+ * One way to think about seqlocks is that they provide means to
+ * perform atomic operations on objects larger than what the native
+ * machine instructions allow for.
+ *
+ * To avoid resource reclamation issues, the data protected by a
+ * seqlock should typically be kept self-contained (e.g., no pointers
+ * to mutable, dynamically allocated data).
+ *
+ * Example usage:
+ * @code{.c}
+ * #define MAX_Y_LEN (16)
+ * // Application-defined example data structure, protected by a seqlock.
+ * struct config {
+ *         rte_seqlock_t lock;
+ *         int param_x;
+ *         char param_y[MAX_Y_LEN];
+ * };
+ *
+ * // Accessor function for reading config fields.
+ * void
+ * config_read(const struct config *config, int *param_x, char *param_y)
+ * {
+ *         uint32_t sn;
+ *
+ *         do {
+ *                 sn = rte_seqlock_read_begin(&config->lock);
+ *
+ *                 // Loads may be atomic or non-atomic, as in this example.
+ *                 *param_x = config->param_x;
+ *                 strcpy(param_y, config->param_y);
+ *                 // An alternative to an immediate retry is to abort and
+ *                 // try again at some later time, assuming progress is
+ *                 // possible without the data.
+ *         } while (rte_seqlock_read_retry(&config->lock, sn));
+ * }
+ *
+ * // Accessor function for writing config fields.
+ * void
+ * config_update(struct config *config, int param_x, const char *param_y)
+ * {
+ *         rte_seqlock_write_lock(&config->lock);
+ *         // Stores may be atomic or non-atomic, as in this example.
+ *         config->param_x = param_x;
+ *         strcpy(config->param_y, param_y);
+ *         rte_seqlock_write_unlock(&config->lock);
+ * }
+ * @endcode
+ *
+ * In case there is only a single writer, or writer-writer
+ * serialization is provided by other means, the use of sequence lock
+ * (i.e., rte_seqlock_t) can be replaced with the use of the "raw"
+ * rte_seqcount_t type instead.
+ *
+ * @see
+ * https://en.wikipedia.org/wiki/Seqlock.
+ */
+
+#include <stdbool.h>
+#include <stdint.h>
+
+#include <rte_atomic.h>
+#include <rte_branch_prediction.h>
+#include <rte_compat.h>
+#include <rte_seqcount.h>
+#include <rte_spinlock.h>
+
+/**
+ * The RTE seqlock type.
+ */
+typedef struct {
+        rte_seqcount_t count; /**< Sequence count for the protected data. */
+        rte_spinlock_t lock; /**< Spinlock used to serialize writers. */
+} rte_seqlock_t;
+
+/**
+ * A static seqlock initializer.
+ */
+#define RTE_SEQLOCK_INITIALIZER                     \
+        {                                           \
+                .count = RTE_SEQCOUNT_INITIALIZER,  \
+                .lock = RTE_SPINLOCK_INITIALIZER    \
+        }
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Initialize the seqlock.
+ *
+ * This function initializes the seqlock, and leaves the writer-side
+ * spinlock unlocked.
+ *
+ * @param seqlock
+ *   A pointer to the seqlock.
+ */
+__rte_experimental
+static inline void
+rte_seqlock_init(rte_seqlock_t *seqlock)
+{
+        rte_seqcount_init(&seqlock->count);
+        rte_spinlock_init(&seqlock->lock);
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Begin a read-side critical section.
+ *
+ * See rte_seqcount_read_retry() for details.
+ *
+ * @param seqlock
+ *   A pointer to the seqlock.
+ * @return
+ *   The seqlock sequence number for this critical section, to
+ *   later be passed to rte_seqlock_read_retry().
+ *
+ * @see rte_seqlock_read_retry()
+ * @see rte_seqcount_read_retry()
+ */
+
+__rte_experimental
+static inline uint32_t
+rte_seqlock_read_begin(const rte_seqlock_t *seqlock)
+{
+        return rte_seqcount_read_begin(&seqlock->count);
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * End a read-side critical section.
+ *
+ * See rte_seqcount_read_retry() for details.
+ *
+ * @param seqlock
+ *   A pointer to the seqlock.
+ * @param begin_sn
+ *   The seqlock sequence number returned by rte_seqlock_read_begin().
+ * @return
+ *   true or false, if the just-read seqlock-protected data was
+ *   inconsistent or consistent, respectively, at the time it was
+ *   read.
+ *
+ * @see rte_seqlock_read_begin()
+ */
+__rte_experimental
+static inline bool
+rte_seqlock_read_retry(const rte_seqlock_t *seqlock, uint32_t begin_sn)
+{
+        return rte_seqcount_read_retry(&seqlock->count, begin_sn);
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Begin a write-side critical section.
+ *
+ * A call to this function acquires the write lock associated with
+ * @p seqlock, and marks the beginning of a write-side critical section.
+ *
+ * After having called this function, the caller may go on to modify
+ * (both read and write) the protected data, in an atomic or
+ * non-atomic manner.
+ *
+ * After the necessary updates have been performed, the application
+ * calls rte_seqlock_write_unlock().
+ *
+ * This function is not preemption-safe in the sense that preemption
+ * of the calling thread may block reader progress until the writer
+ * thread is rescheduled.
+ *
+ * Unlike rte_seqlock_read_begin(), each call made to
+ * rte_seqlock_write_lock() must be matched with an unlock call.
+ *
+ * @param seqlock
+ *   A pointer to the seqlock.
+ *
+ * @see rte_seqlock_write_unlock()
+ */
+__rte_experimental
+static inline void
+rte_seqlock_write_lock(rte_seqlock_t *seqlock)
+{
+        /* to synchronize with other writers */
+        rte_spinlock_lock(&seqlock->lock);
+
+        rte_seqcount_write_begin(&seqlock->count);
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * End a write-side critical section.
+ *
+ * A call to this function marks the end of the write-side critical
+ * section, for @p seqlock. After this call has been made, the protected
+ * data may no longer be modified.
+ *
+ * @param seqlock
+ *   A pointer to the seqlock.
+ *
+ * @see rte_seqlock_write_lock()
+ */
+__rte_experimental
+static inline void
+rte_seqlock_write_unlock(rte_seqlock_t *seqlock)
+{
+        rte_seqcount_write_end(&seqlock->count);
+
+        rte_spinlock_unlock(&seqlock->lock);
+}
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_SEQLOCK_H_ */
-- 
2.25.1
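
For the single-writer case mentioned in the commit message and in the
rte_seqlock.h documentation, a minimal sketch of using the raw sequence
counter directly might look as follows. The "stats" structure and its
fields are invented for illustration and are not part of this patch;
since no spinlock is taken, only one thread may execute the write-side
functions at any given time.

	/* Hypothetical example; struct stats is not part of this patch. */
	struct stats {
		rte_seqcount_t count;
		uint64_t pkts;
		uint64_t bytes;
	};

	/* Called from the single writer thread only. */
	static void
	stats_add(struct stats *stats, uint64_t pkts, uint64_t bytes)
	{
		rte_seqcount_write_begin(&stats->count);
		stats->pkts += pkts;
		stats->bytes += bytes;
		rte_seqcount_write_end(&stats->count);
	}

	/* May be called from any thread. */
	static void
	stats_read(const struct stats *stats, uint64_t *pkts, uint64_t *bytes)
	{
		uint32_t sn;

		do {
			sn = rte_seqcount_read_begin(&stats->count);
			*pkts = stats->pkts;
			*bytes = stats->bytes;
		} while (rte_seqcount_read_retry(&stats->count, sn));
	}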