From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Tue, 20 Feb 2024 17:23:23 +0000
From: Bruce Richardson
To: Mattias Rönnblom
CC: Jerin Jacob
Subject: Re: [PATCH v3 09/11] eventdev: improve comments on scheduling types
References: <20240119174346.108905-1-bruce.richardson@intel.com>
 <20240202123953.77166-1-bruce.richardson@intel.com>
 <20240202123953.77166-10-bruce.richardson@intel.com>
 <0a94b2e5-1c66-4f89-8d28-123ce26217f1@lysator.liu.se>
In-Reply-To: <0a94b2e5-1c66-4f89-8d28-123ce26217f1@lysator.liu.se>
Content-Type: text/plain; charset="utf-8"
List-Id: DPDK patches and discussions

On Thu, Feb 08, 2024 at 11:04:03AM +0100, Mattias Rönnblom wrote:
> On 2024-02-08 10:18, Jerin Jacob wrote:
> > On Fri, Feb
> > 2, 2024 at 6:11 PM Bruce Richardson
> > wrote:
> > >
> > > The description of ordered and atomic scheduling given in the eventdev
> > > doxygen documentation was not always clear. Try and simplify this so
> > > that it is clearer for the end-user of the application
> > >
> > > Signed-off-by: Bruce Richardson
> > >
> > > ---
> > > V3: extensive rework following feedback. Please re-review!
> > > ---
> > >  lib/eventdev/rte_eventdev.h | 73 +++++++++++++++++++++++--------------
> > >  1 file changed, 45 insertions(+), 28 deletions(-)
> > >
> > > diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
> > > index a7d8c28015..8d72765ae7 100644
> > > --- a/lib/eventdev/rte_eventdev.h
> > > +++ b/lib/eventdev/rte_eventdev.h
> > > @@ -1347,25 +1347,35 @@ struct rte_event_vector {
> > >  /**< Ordered scheduling
> > >   *
> > >   * Events from an ordered flow of an event queue can be scheduled to multiple
> > > - * ports for concurrent processing while maintaining the original event order.
> > > + * ports for concurrent processing while maintaining the original event order,
> > > + * i.e. the order in which they were first enqueued to that queue.
> > >   * This scheme enables the user to achieve high single flow throughput by
> > > - * avoiding SW synchronization for ordering between ports which bound to cores.
> > > - *
> > > - * The source flow ordering from an event queue is maintained when events are
> > > - * enqueued to their destination queue within the same ordered flow context.
> > > - * An event port holds the context until application call
> > > - * rte_event_dequeue_burst() from the same port, which implicitly releases
> > > - * the context.
> > > - * User may allow the scheduler to release the context earlier than that
> > > - * by invoking rte_event_enqueue_burst() with RTE_EVENT_OP_RELEASE operation.
> > > - *
> > > - * Events from the source queue appear in their original order when dequeued
> > > - * from a destination queue.
> > > - * Event ordering is based on the received event(s), but also other
> > > - * (newly allocated or stored) events are ordered when enqueued within the same
> > > - * ordered context. Events not enqueued (e.g. released or stored) within the
> > > - * context are considered missing from reordering and are skipped at this time
> > > - * (but can be ordered again within another context).
> > > + * avoiding SW synchronization for ordering between ports which are polled
> > > + * by different cores.
> >
> > I prefer the following version to remove "polled" and to be more explicit.
> >
> > avoiding SW synchronization for ordering between ports which are
> > dequeuing events
> > using @ref rte_event_deque_burst() across different cores.
> >
>
> "This scheme allows events pertaining to the same, potentially large flow to
> be processed in parallel on multiple cores without incurring any
> application-level order restoration logic overhead."
>

Ack.

> > > + *
> > > + * After events are dequeued from a set of ports, as those events are re-enqueued
> > > + * to another queue (with the op field set to @ref RTE_EVENT_OP_FORWARD), the event
> > > + * device restores the original event order - including events returned from all
> > > + * ports in the set - before the events arrive on the destination queue.
> >
> > _arrrive_ is bit vague since we have enqueue operation. How about,
> > "before the events actually deposited on the destination queue."
>

I'll use the term "placed" rather than "deposited".

> > > + *
> > > + * Any events not forwarded i.e. dropped explicitly via RELEASE or implicitly
> > > + * released by the next dequeue operation on a port, are skipped by the reordering
> > > + * stage and do not affect the reordering of other returned events.
> > > + *
> > > + * Any NEW events sent on a port are not ordered with respect to FORWARD events sent
> > > + * on the same port, since they have no original event order. They also are not
> > > + * ordered with respect to NEW events enqueued on other ports.
> > > + * However, NEW events to the same destination queue from the same port are guaranteed
> > > + * to be enqueued in the order they were submitted via rte_event_enqueue_burst().
> > > + *
> > > + * NOTE:
> > > + * In restoring event order of forwarded events, the eventdev API guarantees that
> > > + * all events from the same flow (i.e. same @ref rte_event.flow_id,
> > > + * @ref rte_event.priority and @ref rte_event.queue_id) will be put in the original
> > > + * order before being forwarded to the destination queue.
> > > + * Some eventdevs may implement stricter ordering to achieve this aim,
> > > + * for example, restoring the order across *all* flows dequeued from the same ORDERED
> > > + * queue.
> > >   *
> > >   * @see rte_event_queue_setup(), rte_event_dequeue_burst(), RTE_EVENT_OP_RELEASE
> > >   */
> > > @@ -1373,18 +1383,25 @@ struct rte_event_vector {
> > >  #define RTE_SCHED_TYPE_ATOMIC 1
> > >  /**< Atomic scheduling
> > >   *
> > > - * Events from an atomic flow of an event queue can be scheduled only to a
> > > + * Events from an atomic flow, identified by a combination of @ref rte_event.flow_id,
> > > + * @ref rte_event.queue_id and @ref rte_event.priority, can be scheduled only to a
> > >   * single port at a time. The port is guaranteed to have exclusive (atomic)
> > >   * access to the associated flow context, which enables the user to avoid SW
> > > - * synchronization. Atomic flows also help to maintain event ordering
> > > - * since only one port at a time can process events from a flow of an
> > > - * event queue.
> > > - *
> > > - * The atomic queue synchronization context is dedicated to the port until
> > > - * application call rte_event_dequeue_burst() from the same port,
> > > - * which implicitly releases the context. User may allow the scheduler to
> > > - * release the context earlier than that by invoking rte_event_enqueue_burst()
> > > - * with RTE_EVENT_OP_RELEASE operation.
> > > + * synchronization. Atomic flows also maintain event ordering
> > > + * since only one port at a time can process events from each flow of an
> > > + * event queue, and events within a flow are not reordered within the scheduler.
> > > + *
> > > + * An atomic flow is locked to a port when events from that flow are first
> > > + * scheduled to that port. That lock remains in place until the
> > > + * application calls rte_event_dequeue_burst() from the same port,
> > > + * which implicitly releases the lock (if @ref RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL flag is not set).
> > > + * User may allow the scheduler to release the lock earlier than that by invoking
> > > + * rte_event_enqueue_burst() with RTE_EVENT_OP_RELEASE operation for each event from that flow.
> > > + *
> > > + * NOTE: The lock is only released once the last event from the flow, outstanding on the port,
> >
> > I think, Note can start with something like below,
> >
> > When there are multiple atomic events dequeue from @ref
> > rte_event_dequeue_burst()
> > for the same event queue, and it has same flow id then the lock is ....
> >
>
> Yes, or maybe describing the whole lock/unlock state.
>
> "The conceptual per-queue-per-flow lock is in a locked state as long (and
> only as long) as one or more events pertaining to that flow were scheduled
> to the port in question, but are not yet released."
>
> Maybe it needs to be more meaty, describing what released means. I don't
> have the full context of the documentation in my head when I'm writing this.
>

I'd rather not go into what "released" means, but I'll reword this a bit in
v4. As part of that, I'll also put in a reference to forwarding events also
releasing the lock.

/Bruce