From: "Nicolau, Radu"
To: Honnappa Nagarahalli, Bruce Richardson, "Ananyev, Konstantin"
Cc: "Van Haaren, Harry", "dev@dpdk.org", "jerinj@marvell.com", nd
References: <20200908105211.10066-1-radu.nicolau@intel.com> <46118f3466274596a663d7d44abb680a@intel.com> <20200925102805.GD923@bricha3-MOBL.ger.corp.intel.com>
Date: Tue, 29 Sep 2020 10:02:06 +0100
Subject: Re: [dpdk-dev] [PATCH v1] event/sw: performance improvements

On 9/28/2020 5:02 PM, Honnappa Nagarahalli wrote:

>>> Add minimum burst throughout the scheduler pipeline and a flush counter.
>>> Replace ring API calls with local single threaded implementation where
>>> possible.
>>>
>>> Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
>>>
>>> Thanks for the patch, a few comments inline.
>>>
>>> ---
>>>  drivers/event/sw/sw_evdev.h           | 11 +++-
>>>  drivers/event/sw/sw_evdev_scheduler.c | 83 +++++++++++++++++++++++----
>>>  2 files changed, 81 insertions(+), 13 deletions(-)
>>>
>>> diff --git a/drivers/event/sw/sw_evdev.h b/drivers/event/sw/sw_evdev.h
>>> index 7c77b2495..95e51065f 100644
>>> --- a/drivers/event/sw/sw_evdev.h
>>> +++ b/drivers/event/sw/sw_evdev.h
>>> @@ -29,7 +29,13 @@
>>>  /* report dequeue burst sizes in buckets */
>>>  #define SW_DEQ_STAT_BUCKET_SHIFT 2
>>>  /* how many packets pulled from port by sched */
>>> -#define SCHED_DEQUEUE_BURST_SIZE 32
>>> +#define SCHED_DEQUEUE_BURST_SIZE 64
>>> +
>>> +#define SCHED_MIN_BURST_SIZE 8
>>> +#define SCHED_NO_ENQ_CYCLE_FLUSH 256
>>> +/* set SCHED_DEQUEUE_BURST_SIZE to 64 or 128 when setting this to 1*/
>>> +#define SCHED_REFILL_ONCE_PER_CALL 1
>>>
>>> Is it possible to make the above #define a runtime option?
>>> Eg, --vdev event_sw,refill_iter=1
>>>
>>> That would allow packaged versions of DPDK to be usable in both modes.
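
As an aside on the runtime-option suggestion above, here is a rough sketch (not part of the patch) of how such a knob could be wired through the driver's existing rte_kvargs-based vdev argument parsing; the "refill_iter" key, the set_refill_once() handler and the refill_once_per_iter flag are hypothetical names.

#include <errno.h>
#include <stdlib.h>

#include <rte_common.h>
#include <rte_kvargs.h>

/* hypothetical handler for a "refill_iter" vdev argument, modelled on the
 * sched_quanta/credit_quanta handlers already present in sw_evdev.c */
static int
set_refill_once(const char *key __rte_unused, const char *value, void *opaque)
{
        int *refill_once = opaque;

        if (value == NULL || opaque == NULL)
                return -EINVAL;
        *refill_once = (atoi(value) != 0);
        return 0;
}

/* in the probe path, after rte_kvargs_parse() has produced kvlist:
 *
 *     rte_kvargs_process(kvlist, "refill_iter", set_refill_once,
 *                        &refill_once_per_iter);
 *
 * the scheduler would then test this runtime flag instead of the
 * SCHED_REFILL_ONCE_PER_CALL compile-time #define.
 */
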
>>> +
>>>  #define SW_PORT_HIST_LIST (MAX_SW_PROD_Q_DEPTH) /* size of our history list */
>>>  #define NUM_SAMPLES 64 /* how many data points use for average stats */
>>> @@ -214,6 +220,9 @@ struct sw_evdev {
>>>         uint32_t xstats_count_mode_port;
>>>         uint32_t xstats_count_mode_queue;
>>>
>>> +       uint16_t sched_flush_count;
>>> +       uint16_t sched_min_burst;
>>> +
>>>         /* Contains all ports - load balanced and directed */
>>>         struct sw_port ports[SW_PORTS_MAX] __rte_cache_aligned;
>>>
>>> diff --git a/drivers/event/sw/sw_evdev_scheduler.c b/drivers/event/sw/sw_evdev_scheduler.c
>>> index cff747da8..ca6d1caff 100644
>>> --- a/drivers/event/sw/sw_evdev_scheduler.c
>>> +++ b/drivers/event/sw/sw_evdev_scheduler.c
>>> @@ -26,6 +26,29 @@
>>>  /* use cheap bit mixing, we only need to lose a few bits */
>>>  #define SW_HASH_FLOWID(f) (((f) ^ (f >> 10)) & FLOWID_MASK)
>>>
>>> +
>>> +/* single object enq and deq for non MT ring */
>>> +static __rte_always_inline void
>>> +sw_nonmt_ring_dequeue(struct rte_ring *r, void **obj)
>>> +{
>>> +       if ((r->prod.tail - r->cons.tail) < 1)
>>> +               return;
>>> +       void **ring = (void **)&r[1];
>>> +       *obj = ring[r->cons.tail & r->mask];
>>> +       r->cons.tail++;
>>> +}
>>> +
>>> +static __rte_always_inline int
>>> +sw_nonmt_ring_enqueue(struct rte_ring *r, void *obj)
>>> +{
>>> +       if ((r->capacity + r->cons.tail - r->prod.tail) < 1)
>>> +               return 0;
>>> +       void **ring = (void **)&r[1];
>>> +       ring[r->prod.tail & r->mask] = obj;
>>> +       r->prod.tail++;
>>> +       return 1;
>>> +}
>>>
>>> Why not make these APIs part of the rte_ring library? You could further
>>> optimize them by keeping the indices on the same cacheline.
>>>
>>> I'm not sure there is any need for non thread-safe rings outside this
>>> particular case.
>>>
>>> [Honnappa] I think if we add the APIs, we will find the use cases.
>>> But, more than that, I understand that the rte_ring structure is exposed to
>>> the application. The reason for doing that is the inline functions that
>>> rte_ring provides. IMO, we should still maintain modularity and should not
>>> use the internals of the rte_ring structure outside of the library.
>>>
>>> +1 to that.
>>>
>>> BTW, is there any real perf benefit from such micro-optimisation?
>>
>> I'd tend to view these as use-case specific, and I'm not sure we should
>> clutter up the ring library with yet more functions, especially since they
>> can't be mixed with the existing enqueue/dequeue functions, since they don't
>> use the head pointers.
>
> IMO, the ring library is pretty well organized with the recent addition of the
> HTS/RTS modes. This can be one of the modes and should allow us to use the
> existing functions (though additional functions are required as well).
> The other concern I have is that this implementation can be further optimized
> by using a single cache line for the pointers. It uses 2 cache lines just
> because of the layout of the rte_ring structure.
> There was a question earlier about the performance improvement of this patch.
> Are there any % performance improvements that can be shared?
> It is also possible to change the above functions to use the head/tail pointers
> from the producer or the consumer cache line alone to check for perf differences.

I don't have a % figure for the final improvement from this change alone, but
there was a measurable improvement in memory overhead during development, which
very likely gave the whole optimization more headroom.
I agree that this may be further optimized, maybe by having a local implementation of a ring-like container instead.
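
Below is a minimal sketch (not from the patch) of what such a local, single-threaded ring-like container could look like, with the indices and the storage pointer kept on one cache line as suggested above; the sw_st_ring name and field layout are illustrative assumptions only.

#include <stdint.h>
#include <rte_common.h>

/* illustrative: power-of-two sized, single-threaded ring; indices and the
 * backing-array pointer share one cache line */
struct sw_st_ring {
        uint32_t head;  /* next slot to dequeue */
        uint32_t tail;  /* next slot to enqueue */
        uint32_t mask;  /* size - 1, size is a power of two */
        void **slots;   /* backing array of mask + 1 entries */
} __rte_cache_aligned;

static inline int
sw_st_ring_enqueue(struct sw_st_ring *r, void *obj)
{
        if (r->tail - r->head > r->mask) /* full */
                return 0;
        r->slots[r->tail & r->mask] = obj;
        r->tail++;
        return 1;
}

static inline int
sw_st_ring_dequeue(struct sw_st_ring *r, void **obj)
{
        if (r->tail == r->head) /* empty */
                return 0;
        *obj = r->slots[r->head & r->mask];
        r->head++;
        return 1;
}

Because it assumes a single accessing thread, such a container needs no atomics or memory barriers, which is where most of the saving over the MT-safe rte_ring paths would come from.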