From mboxrd@z Thu Jan  1 00:00:00 1970
From: Stephen Hemminger <stephen@networkplumber.org>
To: dev@dpdk.org
Cc: Stephen Hemminger <stephen@networkplumber.org>,
	Mattias Rönnblom
Subject: [RFC] random: use per lcore state
Date: Wed, 6 Sep 2023 10:20:13 -0700
Message-Id: <20230906172013.169846-1-stephen@networkplumber.org>

Move the random number state into thread-local storage.
This has several benefits:
- no false cache sharing from CPU prefetching
- fixes initialization of random state for non-DPDK threads
- fixes unsafe usage of random state by non-DPDK threads

The random number state is now initialized lazily, by each lcore on
first use.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
 lib/eal/common/rte_random.c | 38 ++++++++++++++++++++------------------
 1 file changed, 20 insertions(+), 18 deletions(-)

diff --git a/lib/eal/common/rte_random.c b/lib/eal/common/rte_random.c
index 53636331a27b..9657adf6ad3b 100644
--- a/lib/eal/common/rte_random.c
+++ b/lib/eal/common/rte_random.c
@@ -19,13 +19,14 @@ struct rte_rand_state {
 	uint64_t z3;
 	uint64_t z4;
 	uint64_t z5;
-} __rte_cache_aligned;
+	uint64_t seed;
+};
 
-/* One instance each for every lcore id-equipped thread, and one
- * additional instance to be shared by all others threads (i.e., all
- * unregistered non-EAL threads).
- */
-static struct rte_rand_state rand_states[RTE_MAX_LCORE + 1];
+/* Global random seed */
+static uint64_t rte_rand_seed;
+
+/* Per lcore random state. */
+static RTE_DEFINE_PER_LCORE(struct rte_rand_state, rte_rand_state);
 
 static uint32_t
 __rte_rand_lcg32(uint32_t *seed)
@@ -81,11 +82,7 @@ __rte_srand_lfsr258(uint64_t seed, struct rte_rand_state *state)
 void
 rte_srand(uint64_t seed)
 {
-	unsigned int lcore_id;
-
-	/* add lcore_id to seed to avoid having the same sequence */
-	for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++)
-		__rte_srand_lfsr258(seed + lcore_id, &rand_states[lcore_id]);
+	__atomic_store_n(&rte_rand_seed, seed, __ATOMIC_RELAXED);
 }
 
 static __rte_always_inline uint64_t
@@ -119,15 +116,18 @@ __rte_rand_lfsr258(struct rte_rand_state *state)
 static __rte_always_inline
 struct rte_rand_state *__rte_rand_get_state(void)
 {
-	unsigned int idx;
+	struct rte_rand_state *rand_state = &RTE_PER_LCORE(rte_rand_state);
+	uint64_t seed;
 
-	idx = rte_lcore_id();
+	seed = __atomic_load_n(&rte_rand_seed, __ATOMIC_RELAXED);
+	if (unlikely(seed != rand_state->seed)) {
+		rand_state->seed = seed;
 
-	/* last instance reserved for unregistered non-EAL threads */
-	if (unlikely(idx == LCORE_ID_ANY))
-		idx = RTE_MAX_LCORE;
+		seed += rte_thread_self().opaque_id;
+		__rte_srand_lfsr258(seed, rand_state);
+	}
 
-	return &rand_states[idx];
+	return rand_state;
 }
 
 uint64_t
@@ -227,7 +227,9 @@ RTE_INIT(rte_rand_init)
 {
 	uint64_t seed;
 
-	seed = __rte_random_initial_seed();
+	do
+		seed = __rte_random_initial_seed();
+	while (seed == 0);
 
 	rte_srand(seed);
 }
-- 
2.39.2
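
For readers less familiar with the pattern, the scheme in the patch above can
be summarized as: rte_srand() only publishes the new seed, and each thread
lazily rebuilds its own generator state the next time it draws a number and
notices that the published seed differs from the one its state was derived
from. A consequence is that a re-seed takes effect on other threads only at
their next call into the generator. Below is a minimal standalone sketch of
the idea in plain C11; all toy_* names are hypothetical and a toy xorshift64
generator stands in for LFSR258, so this is an illustration of the technique,
not the DPDK code. It also shows why the initial seed must not be zero: a
zero seed would match the zero-initialized per-thread state and the lazy
re-seed would never trigger, which appears to be what the do/while loop in
RTE_INIT guards against.

/* Minimal sketch of lazy per-thread re-seeding (hypothetical toy_* names). */
#include <inttypes.h>
#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

struct toy_rand_state {
	uint64_t value;		/* stand-in for the real LFSR258 state words */
	uint64_t seed;		/* seed this thread's state was derived from */
};

static _Atomic uint64_t toy_global_seed;		/* plays the role of rte_rand_seed */
static _Thread_local struct toy_rand_state toy_state;	/* plays the role of the per-lcore state */

/* Publish a new seed; per-thread state is rebuilt lazily on next use. */
static void
toy_srand(uint64_t seed)
{
	atomic_store_explicit(&toy_global_seed, seed, memory_order_relaxed);
}

static uint64_t
toy_rand(void)
{
	uint64_t seed = atomic_load_explicit(&toy_global_seed,
					     memory_order_relaxed);

	/* Lazy (re)initialization: first call in this thread, or seed changed. */
	if (seed != toy_state.seed) {
		toy_state.seed = seed;
		/* Mix in a per-thread value so threads get distinct streams. */
		toy_state.value = seed ^ (uint64_t)(uintptr_t)&toy_state;
		if (toy_state.value == 0)
			toy_state.value = 1;	/* xorshift state must be non-zero */
	}

	/* Toy xorshift64 step in place of the real LFSR258 generator. */
	toy_state.value ^= toy_state.value << 13;
	toy_state.value ^= toy_state.value >> 7;
	toy_state.value ^= toy_state.value << 17;
	return toy_state.value;
}

int
main(void)
{
	toy_srand(42);	/* non-zero, for the reason described above */
	printf("%" PRIu64 " %" PRIu64 "\n", toy_rand(), toy_rand());
	return 0;
}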