Date: Wed, 24 Jan 2024 11:21:16 +0000
From: Bruce Richardson
To: Mattias Rönnblom
Subject: Re: [PATCH v2 10/11] eventdev: RFC clarify comments on scheduling types
References: <20240118134557.73172-1-bruce.richardson@intel.com>
 <20240119174346.108905-1-bruce.richardson@intel.com>
 <20240119174346.108905-11-bruce.richardson@intel.com>
List-Id: DPDK patches and discussions

On Tue, Jan 23, 2024 at 05:19:18PM +0100, Mattias Rönnblom wrote:
> On 2024-01-19 18:43, Bruce Richardson wrote:
> > The description of ordered and atomic scheduling given in the eventdev
> > doxygen documentation was not always clear. Try and simplify this so
> > that it is clearer for the end-user of the application.
> >
> > Signed-off-by: Bruce Richardson
> > ---
> >
> > NOTE TO REVIEWERS:
> > I've updated this based on my understanding of what these scheduling
> > types are meant to do. It matches my understanding of the support
> > offered by our Intel DLB2 driver, as well as the SW eventdev, and I
> > believe the DSW eventdev too. If it does not match the behaviour of
> > other eventdevs, let's have a discussion to see if we can reach a good
> > definition of the behaviour that is common.
> > ---
> >  lib/eventdev/rte_eventdev.h | 47 ++++++++++++++++++++-----------------
> >  1 file changed, 25 insertions(+), 22 deletions(-)
> >
> > diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
> > index 2c6576e921..cb13602ffb 100644
> > --- a/lib/eventdev/rte_eventdev.h
> > +++ b/lib/eventdev/rte_eventdev.h
> > @@ -1313,26 +1313,24 @@ struct rte_event_vector {
> >  #define RTE_SCHED_TYPE_ORDERED 0
> >  /**< Ordered scheduling
> >   *
> > - * Events from an ordered flow of an event queue can be scheduled to multiple
> > + * Events from an ordered event queue can be scheduled to multiple
>
> What is the rationale for this change?
>
> An implementation that imposes a total order on all events on a particular
> ordered queue will still adhere to the current, more relaxed, per-flow
> ordering semantics.
> An application wanting a total order would just set the flow id to 0 on all
> events destined for that queue, and it would work on all event devices.
>
> Why don't you just put a note in the DLB driver saying "btw it's total
> order", so any application where per-flow ordering is crucial for
> performance (i.e., where the potentially needless head-of-line blocking is
> an issue) can use multiple queues when running with the DLB.
>
> In the API as-written, the app is free to express more relaxed ordering
> requirements (i.e., to have multiple flows) and it's up to the event device
> to figure out if it's in a position where it can translate this to lower
> latency.
>
Yes, you are right. I'll roll back or rework this change in V3. Keep it
documented that flow-ordering is guaranteed, but note that some
implementations may use total ordering to achieve that.

> >  * ports for concurrent processing while maintaining the original event order.
>
> Maybe it's worth mentioning what the original event order is: "(i.e., the
> order in which the events were enqueued to the queue)". Especially since one
> would like to specify what ordering guarantees one has for events enqueued
> to the same queue on different ports and by different lcores.
>
> I don't know where that information should go though, since it's relevant
> for both atomic and ordered-type queues.
>
It's probably more relevant for ordered, but I'll try and see where it's
best to go.

> >  * This scheme enables the user to achieve high single flow throughput by
> > - * avoiding SW synchronization for ordering between ports which bound to cores.
> > - *
> > - * The source flow ordering from an event queue is maintained when events are
> > - * enqueued to their destination queue within the same ordered flow context.
> > - * An event port holds the context until application call
> > - * rte_event_dequeue_burst() from the same port, which implicitly releases
> > - * the context.
> > - * User may allow the scheduler to release the context earlier than that
> > - * by invoking rte_event_enqueue_burst() with RTE_EVENT_OP_RELEASE operation.
> > - *
> > - * Events from the source queue appear in their original order when dequeued
> > - * from a destination queue.
> > - * Event ordering is based on the received event(s), but also other
> > - * (newly allocated or stored) events are ordered when enqueued within the same
> > - * ordered context. Events not enqueued (e.g. released or stored) within the
> > - * context are considered missing from reordering and are skipped at this time
> > - * (but can be ordered again within another context).
> > + * avoiding SW synchronization for ordering between ports which are polled by
> > + * different cores.
> > + *
> > + * As events are scheduled to ports/cores, the original event order from the
> > + * source event queue is recorded internally in the scheduler. As events are
> > + * returned (via FORWARD type enqueue) to the scheduler, the original event
> > + * order is restored before the events are enqueued into their new destination
> > + * queue.
>
> Delete the first sentence on implementation.
>
> "As events are re-enqueued to the next queue (with the op field set to
> RTE_EVENT_OP_FORWARD), the event device restores the original event order
> before the events arrive on the destination queue."
>
> > + *
> > + * Any events not forwarded, ie. dropped explicitly via RELEASE or implicitly
> > + * released by the next dequeue from a port, are skipped by the reordering
> > + * stage and do not affect the reordering of returned events.
> > + *
> > + * The ordering behaviour of NEW events with respect to FORWARD events is
> > + * undefined and implementation dependent.
>
> For some reason I find this a little vague. "NEW and FORWARD events enqueued
> to a queue are not ordered in relation to each other (even if the flow id is
> the same)."
> I think I agree that NEW shouldn't be ordered vis-à-vis FORWARD, but maybe
> one should say that an event device should avoid excessive reordering of
> NEW and FORWARD events.
>
> I think it would also be helpful to address port-to-port ordering
> guarantees (or the lack thereof).
>
> "Events enqueued on one port are not ordered in relation to events enqueued
> on some other port."
>
> Or are they? Not in DSW, at least, and I'm not sure I see a use case for
> such a guarantee, but it's a little counter-intuitive to have them
> potentially re-shuffled.
>
> (This is also relevant for atomic queues.)
>
Ack.

> >  *
> >  * @see rte_event_queue_setup(), rte_event_dequeue_burst(), RTE_EVENT_OP_RELEASE
> >  */
> > @@ -1340,18 +1338,23 @@ struct rte_event_vector {
> >  #define RTE_SCHED_TYPE_ATOMIC 1
> >  /**< Atomic scheduling
> >   *
> > - * Events from an atomic flow of an event queue can be scheduled only to a
> > + * Events from an atomic flow, identified by @ref rte_event.flow_id,
>
> A flow is identified by the combination of queue_id and flow_id, so if you
> reference one you should also reference the other.
>
Yes, this is probably one to be reflected globally. Also on your previous
comment about priority, I believe that a flow for ordering guarantees should
be a combination of queue_id, flow_id and priority. Two packets with
different priorities should expect to be reordered, since that tends to be
what priority implies.

> > + * of an event queue can be scheduled only to a
> >  * single port at a time. The port is guaranteed to have exclusive (atomic)
> >  * access to the associated flow context, which enables the user to avoid SW
> >  * synchronization. Atomic flows also help to maintain event ordering
>
> "help" here needs to go, I think. It sounds like a best-effort affair. The
> atomic queue ordering guarantees (or the lack thereof) should be spelled
> out.
>
> "Event order in an atomic flow is maintained."

Ack.
> > - * since only one port at a time can process events from a flow of an
> > + * since only one port at a time can process events from each flow of an
> >  * event queue.
>
> Yes, and *but also since* the event device is not reshuffling events
> enqueued to an atomic queue. And that's more complicated than just
> something that falls out of atomicity, especially if you assume that
> FORWARD type enqueues are not ordered with other FORWARD type enqueues on
> a different port.
>
Ack.

> >  *
> > - * The atomic queue synchronization context is dedicated to the port until
> > + * The atomic queue synchronization context for a flow is dedicated to the port until
>
> What is an "atomic queue synchronization context" (except for something
> that makes for long sentences)?
>
Yes, it's rather wordy. I like the idea of using the lock terminology you
suggest. The use of the word "contexts" in relation to atomic/ordered I find
confusing myself too.

> How about:
> "The atomic flow is locked to the port until /../"
>
> You could also use the word "bound" instead of "locked".
>
> >  * application call rte_event_dequeue_burst() from the same port,
> >  * which implicitly releases the context. User may allow the scheduler to
> >  * release the context earlier than that by invoking rte_event_enqueue_burst()
> > - * with RTE_EVENT_OP_RELEASE operation.
> > + * with RTE_EVENT_OP_RELEASE operation for each event from that flow. The context
> > + * is only released once the last event from the flow, outstanding on the port,
> > + * is released. So long as there is one event from an atomic flow scheduled to
> > + * a port/core (including any events in the port's dequeue queue, not yet read
> > + * by the application), that port will hold the synchronization context.
>
> In case you like the "atomic flow locked/bound to port" wording, this part
> would also need updating.
>
> Maybe here is a good place to add a note on memory ordering and event
> ordering.
> "Any memory stores done as a part of event processing will be globally
> visible before the next event in the same atomic flow is dequeued on a
> different lcore."
>
> I.e., enqueue includes a write barrier before the event can be seen.
>
> One should probably mention an rmb in dequeue as well.
>
Do we think that that is necessary? I can add it, but I would have thought
that - as with rings - it could be assumed.

/Bruce