Date: Wed, 31 Jan 2024 17:54:40 +0000
From: Bruce Richardson
To: Mattias Rönnblom
Subject: Re: [PATCH v2 10/11] eventdev: RFC clarify comments on scheduling types
References: <20240118134557.73172-1-bruce.richardson@intel.com>
 <20240119174346.108905-1-bruce.richardson@intel.com>
 <20240119174346.108905-11-bruce.richardson@intel.com>
X-BeenThere: dev@dpdk.org
List-Id: DPDK patches and discussions

On Tue, Jan 23, 2024 at 05:19:18PM +0100, Mattias Rönnblom wrote:
> On 2024-01-19 18:43, Bruce Richardson wrote:
> > The description of ordered and atomic scheduling given in the eventdev
> > doxygen documentation was not always clear. Try and simplify this so
> > that it is clearer for the end-user of the application.
> >
> > Signed-off-by: Bruce Richardson
> > ---
> >
> > NOTE TO REVIEWERS:
> > I've updated this based on my understanding of what these scheduling
> > types are meant to do. It matches my understanding of the support
> > offered by our Intel DLB2 driver, as well as the SW eventdev, and I
> > believe the DSW eventdev too. If it does not match the behaviour of
> > other eventdevs, let's have a discussion to see if we can reach a good
> > definition of the behaviour that is common.
> > ---
> >  lib/eventdev/rte_eventdev.h | 47 ++++++++++++++++++++-----------------
> >  1 file changed, 25 insertions(+), 22 deletions(-)
> >
> > diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
> > index 2c6576e921..cb13602ffb 100644
> > --- a/lib/eventdev/rte_eventdev.h
> > +++ b/lib/eventdev/rte_eventdev.h
> > @@ -1313,26 +1313,24 @@ struct rte_event_vector {
> >  #define RTE_SCHED_TYPE_ORDERED 0
> >  /**< Ordered scheduling
> >   *
> > - * Events from an ordered flow of an event queue can be scheduled to multiple
> > + * Events from an ordered event queue can be scheduled to multiple
>
> What is the rationale for this change?
>
> An implementation that imposes a total order on all events on a
> particular ordered queue will still adhere to the current, more relaxed,
> per-flow ordering semantics.
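The per-flow ordering contract under discussion can be captured in a few lines of C: within one flow of an ordered queue, events keep their enqueue order, while events of different flows may interleave arbitrarily; with a single flow id the contract degenerates into a total order. A self-contained sketch with toy types (`struct toy_event` and `per_flow_order_ok` are illustrative stand-ins, not the real `struct rte_event` or any DPDK API):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define MAX_FLOWS 16 /* arbitrary bound for the sketch */

/* Toy stand-in for the relevant rte_event fields (illustrative only). */
struct toy_event {
	uint32_t flow_id; /* flow within the ordered queue, < MAX_FLOWS */
	uint32_t seq;     /* per-flow enqueue sequence number */
};

/*
 * Return 1 if the observed delivery order ev[0..n-1] satisfies the
 * per-flow ordering contract: each flow's events appear in increasing
 * per-flow sequence order; different flows may interleave freely.
 */
static int
per_flow_order_ok(const struct toy_event *ev, size_t n)
{
	uint32_t next_seq[MAX_FLOWS] = {0};
	size_t i;

	for (i = 0; i < n; i++) {
		if (ev[i].seq != next_seq[ev[i].flow_id])
			return 0; /* out of order within the flow */
		next_seq[ev[i].flow_id]++;
	}
	return 1;
}
```

Under this contract the delivery {f0:0, f1:0, f1:1, f0:1} is valid, while swapping a flow's own events is not; with every event on flow id 0 the check is exactly a total-order check, which is why an application wanting total order can simply use one flow id.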
>
> An application wanting a total order would just set the flow id to 0 on
> all events destined for that queue, and it would work on all event
> devices.
>
> Why don't you just put a note in the DLB driver saying "btw it's total
> order", so any application where per-flow ordering is crucial for
> performance (i.e., where the potentially needless head-of-line blocking
> is an issue) can use multiple queues when running with the DLB.
>
> In the API as written, the app is free to express more relaxed ordering
> requirements (i.e., to have multiple flows) and it's up to the event
> device to figure out if it's in a position where it can translate this
> to lower latency.
>
> >  * ports for concurrent processing while maintaining the original event order.
>
> Maybe it's worth mentioning what the original event order is: "(i.e.,
> the order in which the events were enqueued to the queue)". Especially
> since one would like to specify what ordering guarantees one has for
> events enqueued to the same queue on different ports and by different
> lcores.
>
> I don't know where that information should go though, since it's
> relevant for both atomic and ordered-type queues.
>
> >  * This scheme enables the user to achieve high single flow throughput by
> > - * avoiding SW synchronization for ordering between ports which bound to cores.
> > - *
> > - * The source flow ordering from an event queue is maintained when events are
> > - * enqueued to their destination queue within the same ordered flow context.
> > - * An event port holds the context until application call
> > - * rte_event_dequeue_burst() from the same port, which implicitly releases
> > - * the context.
> > - * User may allow the scheduler to release the context earlier than that
> > - * by invoking rte_event_enqueue_burst() with RTE_EVENT_OP_RELEASE operation.
> > - *
> > - * Events from the source queue appear in their original order when dequeued
> > - * from a destination queue.
> > - * Event ordering is based on the received event(s), but also other
> > - * (newly allocated or stored) events are ordered when enqueued within the same
> > - * ordered context. Events not enqueued (e.g. released or stored) within the
> > - * context are considered missing from reordering and are skipped at this time
> > - * (but can be ordered again within another context).
> > + * avoiding SW synchronization for ordering between ports which are polled by
> > + * different cores.
> > + *
> > + * As events are scheduled to ports/cores, the original event order from the
> > + * source event queue is recorded internally in the scheduler. As events are
> > + * returned (via FORWARD type enqueue) to the scheduler, the original event
> > + * order is restored before the events are enqueued into their new destination
> > + * queue.
>
> Delete the first sentence on implementation.
>
> "As events are re-enqueued to the next queue (with the op field set to
> RTE_EVENT_OP_FORWARD), the event device restores the original event
> order before the events arrive on the destination queue."

This whole section on ordered processing I'm reworking quite extensively
for v3, and hopefully I've taken all your comments into account. I'm
finding it really hard to explain it all simply and clearly. Please
re-review this part when I get the v3 finished and sent!

> > + *
> > + * Any events not forwarded, ie. dropped explicitly via RELEASE or implicitly
> > + * released by the next dequeue from a port, are skipped by the reordering
> > + * stage and do not affect the reordering of returned events.
> > + *
> > + * The ordering behaviour of NEW events with respect to FORWARD events is
> > + * undefined and implementation dependent.
>
> For some reason I find this a little vague. "NEW and FORWARD events
> enqueued to a queue are not ordered in relation to each other (even if
> the flow id is the same)."
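The "record order at schedule time, restore it on FORWARD, skip RELEASEd events" behaviour described in the patch text is essentially a reorder buffer. A toy model of that mechanism (the names, the fixed-size buffer, and the drain logic are assumptions for illustration, not any driver's implementation):

```c
#include <assert.h>
#include <stddef.h>

#define ROB_SIZE 8 /* toy reorder-buffer depth: sequence numbers 0..7 */

enum slot_state { SLOT_PENDING = 0, SLOT_FORWARDED, SLOT_RELEASED };

/* One reorder buffer per source queue: slots are indexed by the
 * sequence number stamped on each event when it was scheduled out. */
struct reorder_buf {
	enum slot_state state[ROB_SIZE];
	size_t head;       /* oldest sequence number not yet retired */
	int out[ROB_SIZE]; /* sequence numbers emitted, in restored order */
	size_t n_out;
};

/* Retire from the head: forwarded events are emitted in original order,
 * released events are skipped, a still-pending event blocks the rest. */
static void
rob_drain(struct reorder_buf *r)
{
	while (r->head < ROB_SIZE && r->state[r->head] != SLOT_PENDING) {
		if (r->state[r->head] == SLOT_FORWARDED)
			r->out[r->n_out++] = (int)r->head;
		r->head++;
	}
}

/* Event re-enqueued (the FORWARD case). */
static void
rob_forward(struct reorder_buf *r, size_t seq)
{
	r->state[seq] = SLOT_FORWARDED;
	rob_drain(r);
}

/* Event dropped: explicit RELEASE, or implicit release on next dequeue. */
static void
rob_release(struct reorder_buf *r, size_t seq)
{
	r->state[seq] = SLOT_RELEASED;
	rob_drain(r);
}
```

Forwarding sequence number 2 first emits nothing, since it is held behind 0 and 1; once 0 is released and 1 forwarded, events 1 and 2 come out in original order with the released event absent, matching the "skipped by the reordering stage" wording.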
> I think I agree that NEW shouldn't be ordered vis-à-vis FORWARD, but
> maybe one should say that an event device should avoid excessive
> reordering of NEW and FORWARD events.
>
> I think it would also be helpful to address port-to-port ordering
> guarantees (or a lack thereof).
>
> "Events enqueued on one port are not ordered in relation to events
> enqueued on some other port."
>
> Or are they? Not in DSW, at least, and I'm not sure I see a use case for
> such a guarantee, but it's a little counter-intuitive to have them
> potentially re-shuffled.
>
> (This is also relevant for atomic queues.)
>
> >  *
> >  * @see rte_event_queue_setup(), rte_event_dequeue_burst(), RTE_EVENT_OP_RELEASE
> >  */
> > @@ -1340,18 +1338,23 @@ struct rte_event_vector {
> >  #define RTE_SCHED_TYPE_ATOMIC 1
> >  /**< Atomic scheduling
> >   *
> > - * Events from an atomic flow of an event queue can be scheduled only to a
> > + * Events from an atomic flow, identified by @ref rte_event.flow_id,
>
> A flow is identified by the combination of queue_id and flow_id, so if
> you reference one you should also reference the other.

This is done in v3. I mention what defines a flow in the comments for
both ordered and atomic.

> > + * of an event queue can be scheduled only to a
> >  * single port at a time. The port is guaranteed to have exclusive (atomic)
> >  * access to the associated flow context, which enables the user to avoid SW
> >  * synchronization. Atomic flows also help to maintain event ordering
>
> "help" here needs to go, I think. It sounds like a best-effort affair.
> The atomic queue ordering guarantees (or the lack thereof) should be
> spelled out.
>
> "Event order in an atomic flow is maintained."
>
> > - * since only one port at a time can process events from a flow of an
> > + * since only one port at a time can process events from each flow of an
> >  * event queue.
>
> Yes, and *but also since* the event device is not reshuffling events
> enqueued to an atomic queue.
> And that's more complicated than just something that falls out of
> atomicity, especially if you assume that FORWARD type enqueues are not
> ordered with other FORWARD type enqueues on a different port.
>
> >  *
> > - * The atomic queue synchronization context is dedicated to the port until
> > + * The atomic queue synchronization context for a flow is dedicated to the port until
>
> What is an "atomic queue synchronization context" (except for something
> that makes for long sentences)?
>
> How about:
> "The atomic flow is locked to the port until /../"
>
> You could also use the word "bound" instead of "locked".

Going with the term "lock" for v3.

> >  * application call rte_event_dequeue_burst() from the same port,
> >  * which implicitly releases the context. User may allow the scheduler to
> >  * release the context earlier than that by invoking rte_event_enqueue_burst()
> > - * with RTE_EVENT_OP_RELEASE operation.
> > + * with RTE_EVENT_OP_RELEASE operation for each event from that flow. The context
> > + * is only released once the last event from the flow, outstanding on the port,
> > + * is released. So long as there is one event from an atomic flow scheduled to
> > + * a port/core (including any events in the port's dequeue queue, not yet read
> > + * by the application), that port will hold the synchronization context.
>
> In case you like the "atomic flow locked/bound to port", this part would
> also need updating.
>
> Maybe here is a good place to add a note on memory ordering and event
> ordering.
>
> "Any memory stores done as a part of event processing will be globally
> visible before the next event in the same atomic flow is dequeued on a
> different lcore."
>
> I.e., enqueue includes a write barrier before the event can be seen.
>
> One should probably mention an rmb in dequeue as well.

Not adding memory ordering in v3. If necessary we can add it later in
another patch.

/Bruce
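The "flow is locked to the port until the last outstanding event is released" semantics settled on above amount to a per-flow refcounted lock. A toy model of that rule (names are illustrative assumptions, not the DPDK API; a real device tracks this per (queue_id, flow_id)):

```c
#include <assert.h>

#define NO_PORT (-1)

/* Toy per-flow state: while any event of the flow is outstanding on a
 * port, the flow stays locked ("dedicated") to that port. */
struct flow_lock {
	int port;                 /* holder port, or NO_PORT when unlocked */
	unsigned int outstanding; /* scheduled but not yet released events */
};

/* Scheduler side: try to schedule one event of this flow to 'port'. */
static int
flow_try_schedule(struct flow_lock *f, int port)
{
	if (f->port != NO_PORT && f->port != port)
		return 0; /* flow is locked to another port */
	f->port = port;
	f->outstanding++;
	return 1;
}

/* Application side: one event of the flow completed (explicit
 * RTE_EVENT_OP_RELEASE, a FORWARD enqueue, or the implicit release on
 * the next dequeue from the port). */
static void
flow_release_one(struct flow_lock *f)
{
	if (--f->outstanding == 0)
		f->port = NO_PORT; /* last outstanding event: lock drops */
}
```

The lock drops only when the count of outstanding events reaches zero, which mirrors the patch wording that the context "is only released once the last event from the flow, outstanding on the port, is released"; until then, every attempt to schedule the flow to a different port fails.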