Subject: Re: [PATCH v3 1/3] lib: introduce dispatcher library
Date: Fri, 22 Sep 2023 08:32:11 +0200
To: Jerin Jacob, Mattias Rönnblom
Cc: dev@dpdk.org, Jerin Jacob, techboard@dpdk.org, harry.van.haaren@intel.com, Peter Nilsson, Heng Wang, Naga Harish K S V, Pavan Nikhilesh, Gujjar Abhinandan S, Erik Gabriel Carrillo, Shijith Thotton, Hemant Agrawal, Sachin Saxena, Liang Ma, Peter Mccarthy, Zhirun Yan
From: Mattias Rönnblom

On 2023-09-21 20:36, Jerin Jacob wrote:
> On Mon, Sep 4, 2023 at 6:39 PM Mattias Rönnblom wrote:
>>
>> The purpose of the dispatcher library is to help reduce coupling in an
>> Eventdev-based DPDK application.
>>
>> In addition, the dispatcher also provides a convenient and flexible
>> way for the application to use service cores for application-level
>> processing.
>>
>> Signed-off-by: Mattias Rönnblom
>> Tested-by: Peter Nilsson
>> Reviewed-by: Heng Wang
>>
>
>> +static inline void
>> +evd_dispatch_events(struct rte_dispatcher *dispatcher,
>> +		    struct rte_dispatcher_lcore *lcore,
>> +		    struct rte_dispatcher_lcore_port *port,
>> +		    struct rte_event *events, uint16_t num_events)
>> +{
>> +	int i;
>> +	struct rte_event bursts[EVD_MAX_HANDLERS][num_events];
>> +	uint16_t burst_lens[EVD_MAX_HANDLERS] = { 0 };
>> +	uint16_t drop_count = 0;
>> +	uint16_t dispatch_count;
>> +	uint16_t dispatched = 0;
>> +
>> +	for (i = 0; i < num_events; i++) {
>> +		struct rte_event *event = &events[i];
>> +		int handler_idx;
>> +
>> +		handler_idx = evd_lookup_handler_idx(lcore, event);
>> +
>> +		if (unlikely(handler_idx < 0)) {
>> +			drop_count++;
>> +			continue;
>> +		}
>> +
>> +		bursts[handler_idx][burst_lens[handler_idx]] = *event;
>
> Looks like it is caching the events to accumulate them? What if a flow
> or queue is configured as RTE_SCHED_TYPE_ORDERED?

The ordering guarantees (and lack thereof) are covered in detail in the
programming guide.
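For reference, the per-handler "clustering" performed by the quoted
evd_dispatch_events() can be sketched in plain C, with no DPDK
dependencies. All names and types here (struct event, MAX_HANDLERS,
lookup_handler_idx, deliver_burst) are toy stand-ins for the
dispatcher's internals, not the actual API:

```c
#include <stdint.h>

/* Toy stand-ins for the dispatcher's internals. */
#define MAX_HANDLERS 4

struct event {
	uint32_t queue_id;
	uint64_t payload;
};

/* Match function: map an event to a handler index, or -1 for no match. */
static int
lookup_handler_idx(const struct event *ev)
{
	return ev->queue_id < MAX_HANDLERS ? (int)ev->queue_id : -1;
}

/* Per-handler delivery counters, so the effect of clustering is visible. */
static uint16_t delivered[MAX_HANDLERS];

/* Stand-in for invoking a handler's callback with one clustered burst. */
static void
deliver_burst(int handler_idx, const struct event *burst, uint16_t len)
{
	(void)burst;
	delivered[handler_idx] += len;
}

/* Cluster a mixed burst of events into per-handler bursts, then deliver
 * each non-empty burst in a single callback invocation. Returns the
 * number of events dropped for lack of a matching handler. */
static uint16_t
cluster_and_dispatch(const struct event *events, uint16_t num_events)
{
	struct event bursts[MAX_HANDLERS][num_events];
	uint16_t burst_lens[MAX_HANDLERS] = { 0 };
	uint16_t drop_count = 0;
	uint16_t i;
	int idx;

	for (i = 0; i < num_events; i++) {
		idx = lookup_handler_idx(&events[i]);

		if (idx < 0) {
			drop_count++;
			continue;
		}

		bursts[idx][burst_lens[idx]++] = events[i];
	}

	for (idx = 0; idx < MAX_HANDLERS; idx++)
		if (burst_lens[idx] > 0)
			deliver_burst(idx, bursts[idx], burst_lens[idx]);

	return drop_count;
}
```

Note that delivery order within each per-handler burst follows the
original dequeue order; only the interleaving *between* handlers is
changed.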
"Delivery order" (the order in which the callbacks see the events) is
maintained only for events destined for the same handler.

I have considered adding a flags field to the create function, to allow
(now, or in the future) an option for maintaining strict ordering
between handlers.

In my mind, and in the applications where this pattern has been used in
the past, the "clustering" of events going to the same handler is a
feature, not a bug, since it greatly improves temporal cache locality
and provides more opportunity for software prefetching/preloading.
(Prefetching may be done already in the match function.)

If your event device does clustering already, or if the application
implements this pattern already, you will obviously see no gains. If
neither of those is true, the application will likely suffer fewer
cache misses, far outweighing the tiny bit of extra processing required
in the event dispatcher.

This reshuffling ("clustering") of events is the only thing I think
could be offloaded to hardware. The event device is already free to
reshuffle events, as long as it conforms to whatever ordering
guarantees the eventdev scheduling types in question require, but the
event dispatcher relaxes those further, and gives the platform
additional hints about which events are actually related.

> Will it completely lose ordering, as the next rte_event_enqueue_burst()
> will release the context?
>

It is the dequeue operation that will release the context (provided
"implicit release" is not disabled). See the documentation you quote
below. (Total) ordering is guaranteed between dequeue bursts.

>
> Definition of RTE_SCHED_TYPE_ORDERED
>
> #define RTE_SCHED_TYPE_ORDERED 0
> /**< Ordered scheduling
>  *
>  * Events from an ordered flow of an event queue can be scheduled to multiple
>  * ports for concurrent processing while maintaining the original event order.
>  * This scheme enables the user to achieve high single flow throughput by
>  * avoiding SW synchronization for ordering between ports which bound to cores.
>  *
>  * The source flow ordering from an event queue is maintained when events are
>  * enqueued to their destination queue within the same ordered flow context.
>  * An event port holds the context until application call
>  * rte_event_dequeue_burst() from the same port, which implicitly releases
>  * the context.
>  * User may allow the scheduler to release the context earlier than that
>  * by invoking rte_event_enqueue_burst() with RTE_EVENT_OP_RELEASE operation.
>  *
>  * Events from the source queue appear in their original order when dequeued
>  * from a destination queue.
>  * Event ordering is based on the received event(s), but also other
>  * (newly allocated or stored) events are ordered when enqueued within the same
>  * ordered context. Events not enqueued (e.g. released or stored) within the
>  * context are considered missing from reordering and are skipped at this time
>  * (but can be ordered again within another context).
>  *
>  * @see rte_event_queue_setup(), rte_event_dequeue_burst(), RTE_EVENT_OP_RELEASE
>  */
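The "events appear in their original order when dequeued from a
destination queue" guarantee quoted above is, conceptually, a
sequence-number reorder stage: workers may finish events out of order,
but the destination queue releases them in source order. A toy,
DPDK-free model of that idea (all names here are hypothetical, not the
eventdev implementation):

```c
#include <stdint.h>

/* Toy reorder stage: events carry a sequence number assigned at the
 * source queue; completions may arrive out of order, but are released
 * strictly in sequence-number order. */
#define ROB_SIZE 8

struct reorder_buf {
	uint64_t payload[ROB_SIZE];
	int ready[ROB_SIZE];
	unsigned int next_seq; /* next sequence number to release in order */
};

/* A worker reports completion of the event with the given sequence
 * number, possibly out of order. Assumes seq is within ROB_SIZE of
 * next_seq (no window-overflow handling in this sketch). */
static void
rob_complete(struct reorder_buf *r, unsigned int seq, uint64_t payload)
{
	r->payload[seq % ROB_SIZE] = payload;
	r->ready[seq % ROB_SIZE] = 1;
}

/* Release the next in-order event, if it has completed. Returns 1 and
 * stores its payload in *out on success, 0 if the head of the window
 * is still outstanding. */
static int
rob_release(struct reorder_buf *r, uint64_t *out)
{
	unsigned int slot = r->next_seq % ROB_SIZE;

	if (!r->ready[slot])
		return 0;

	*out = r->payload[slot];
	r->ready[slot] = 0;
	r->next_seq++;

	return 1;
}
```

Completing sequence number 1 before 0 yields nothing at the head of the
window; once 0 completes, both are released in original order, which is
the observable behavior an ordered event queue provides.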