Date: Thu, 5 May 2022 16:16:23 -0700
From: Stephen Hemminger
To: Honnappa Nagarahalli
Cc: "dev@dpdk.org", nd
Subject: Re: [RFC] rte_ring: don't use always inline
Message-ID: <20220505161623.584aa41c@hermes.local>
In-Reply-To: <20220505161027.62fa50d8@hermes.local>
References: <20220505224547.394253-1-stephen@networkplumber.org>
 <20220505161027.62fa50d8@hermes.local>
List-Id: DPDK patches and discussions

On Thu, 5 May 2022 16:10:27 -0700
Stephen Hemminger wrote:

> On Thu, 5 May 2022 22:59:32 +0000
> Honnappa Nagarahalli wrote:
>
> > Thanks Stephen. Do you see any performance difference with this change?
> >
> > > -----Original Message-----
> > > From: Stephen Hemminger
> > > Sent: Thursday, May 5, 2022 5:46 PM
> > > To: dev@dpdk.org
> > > Cc: Stephen Hemminger
> > > Subject: [RFC] rte_ring: don't use always inline
> > >
> > > Forcing the compiler to inline with always inline can lead to worse and
> > > sometimes broken code. Better to use the standard inline keyword and
> > > let the compiler have some flexibility.
> > >
> > > Signed-off-by: Stephen Hemminger
> > > ---
> > >
> > > This is RFC because the use of large scale inlining is debatable.
> > > This change may slow things down on some versions of Gcc and
> > > architectures.
> > >
> > > If you follow the Linux kernel list, this has been a debated topic
> > > over the years, with opinions for and against inlining, combined
> > > with bad inlining in various Gcc versions.
> > >
> > >  lib/ring/rte_ring.h         | 36 ++++++++++++++++++------------------
> > >  lib/ring/rte_ring_c11_pvt.h |  6 +++---
> > >  lib/ring/rte_ring_elem.h    | 36 ++++++++++++++++++------------------
> > >  3 files changed, 39 insertions(+), 39 deletions(-)
> > >
> > > diff --git a/lib/ring/rte_ring.h b/lib/ring/rte_ring.h
> > > index 980e92e59493..4a2588beed9e 100644
> > > --- a/lib/ring/rte_ring.h
> > > +++ b/lib/ring/rte_ring.h
> > > @@ -226,7 +226,7 @@ void rte_ring_dump(FILE *f, const struct rte_ring *r);
> > >   * @return
> > >   *   The number of objects enqueued, either 0 or n
> > >   */
> > > -static __rte_always_inline unsigned int
> > > +static inline unsigned int
> > >  rte_ring_mp_enqueue_bulk(struct rte_ring *r, void * const *obj_table,
> > >  			 unsigned int n, unsigned int *free_space)
> > >  {
> > > @@ -249,7 +249,7 @@ rte_ring_mp_enqueue_bulk(struct rte_ring *r, void * const *obj_table,
> > >   * @return
> > >   *   The number of objects enqueued, either 0 or n
> > >   */
> > > -static __rte_always_inline unsigned int
> > > +static inline unsigned int
> > >  rte_ring_sp_enqueue_bulk(struct rte_ring *r, void * const *obj_table,
> > >  			 unsigned int n, unsigned int *free_space)
> > >  {
> > > @@ -276,7 +276,7 @@ rte_ring_sp_enqueue_bulk(struct rte_ring *r, void * const *obj_table,
> > >   * @return
> > >   *   The number of objects enqueued, either 0 or n
> > >   */
> > > -static __rte_always_inline unsigned int
> > > +static inline unsigned int
> > >  rte_ring_enqueue_bulk(struct rte_ring *r, void * const *obj_table,
> > >  		      unsigned int n, unsigned int *free_space)
> > >  {
> > > @@ -298,7 +298,7 @@ rte_ring_enqueue_bulk(struct rte_ring *r, void * const *obj_table,
> > >   *   - 0: Success; objects enqueued.
> > >   *   - -ENOBUFS: Not enough room in the ring to enqueue; no object is
> > >   *     enqueued.
> > >   */
> > > -static __rte_always_inline int
> > > +static inline int
> > >  rte_ring_mp_enqueue(struct rte_ring *r, void *obj)
> > >  {
> > >  	return rte_ring_mp_enqueue_elem(r, &obj, sizeof(void *));
> > > @@ -315,7 +315,7 @@ rte_ring_mp_enqueue(struct rte_ring *r, void *obj)
> > >   *   - 0: Success; objects enqueued.
> > >   *   - -ENOBUFS: Not enough room in the ring to enqueue; no object is
> > >   *     enqueued.
> > >   */
> > > -static __rte_always_inline int
> > > +static inline int
> > >  rte_ring_sp_enqueue(struct rte_ring *r, void *obj)
> > >  {
> > >  	return rte_ring_sp_enqueue_elem(r, &obj, sizeof(void *));
> > > @@ -336,7 +336,7 @@ rte_ring_sp_enqueue(struct rte_ring *r, void *obj)
> > >   *   - 0: Success; objects enqueued.
> > >   *   - -ENOBUFS: Not enough room in the ring to enqueue; no object is
> > >   *     enqueued.
> > >   */
> > > -static __rte_always_inline int
> > > +static inline int
> > >  rte_ring_enqueue(struct rte_ring *r, void *obj)
> > >  {
> > >  	return rte_ring_enqueue_elem(r, &obj, sizeof(void *));
> > > @@ -360,7 +360,7 @@ rte_ring_enqueue(struct rte_ring *r, void *obj)
> > >   * @return
> > >   *   The number of objects dequeued, either 0 or n
> > >   */
> > > -static __rte_always_inline unsigned int
> > > +static inline unsigned int
> > >  rte_ring_mc_dequeue_bulk(struct rte_ring *r, void **obj_table,
> > >  		unsigned int n, unsigned int *available)
> > >  {
> > > @@ -384,7 +384,7 @@ rte_ring_mc_dequeue_bulk(struct rte_ring *r, void **obj_table,
> > >   * @return
> > >   *   The number of objects dequeued, either 0 or n
> > >   */
> > > -static __rte_always_inline unsigned int
> > > +static inline unsigned int
> > >  rte_ring_sc_dequeue_bulk(struct rte_ring *r, void **obj_table,
> > >  		unsigned int n, unsigned int *available)
> > >  {
> > > @@ -411,7 +411,7 @@ rte_ring_sc_dequeue_bulk(struct rte_ring *r, void **obj_table,
> > >   * @return
> > >   *   The number of objects dequeued, either 0 or n
> > >   */
> > > -static __rte_always_inline unsigned int
> > > +static inline unsigned int
> > >  rte_ring_dequeue_bulk(struct rte_ring *r, void **obj_table, unsigned int n,
> > >  		unsigned int *available)
> > >  {
> > > @@ -434,7 +434,7 @@ rte_ring_dequeue_bulk(struct rte_ring *r, void **obj_table, unsigned int n,
> > >   *   - -ENOENT: Not enough entries in the ring to dequeue; no object is
> > >   *     dequeued.
> > >   */
> > > -static __rte_always_inline int
> > > +static inline int
> > >  rte_ring_mc_dequeue(struct rte_ring *r, void **obj_p)
> > >  {
> > >  	return rte_ring_mc_dequeue_elem(r, obj_p, sizeof(void *));
> > > @@ -452,7 +452,7 @@ rte_ring_mc_dequeue(struct rte_ring *r, void **obj_p)
> > >   *   - -ENOENT: Not enough entries in the ring to dequeue, no object is
> > >   *     dequeued.
> > >   */
> > > -static __rte_always_inline int
> > > +static inline int
> > >  rte_ring_sc_dequeue(struct rte_ring *r, void **obj_p)
> > >  {
> > >  	return rte_ring_sc_dequeue_elem(r, obj_p, sizeof(void *));
> > > @@ -474,7 +474,7 @@ rte_ring_sc_dequeue(struct rte_ring *r, void **obj_p)
> > >   *   - -ENOENT: Not enough entries in the ring to dequeue, no object is
> > >   *     dequeued.
> > >   */
> > > -static __rte_always_inline int
> > > +static inline int
> > >  rte_ring_dequeue(struct rte_ring *r, void **obj_p)
> > >  {
> > >  	return rte_ring_dequeue_elem(r, obj_p, sizeof(void *));
> > > @@ -681,7 +681,7 @@ struct rte_ring *rte_ring_lookup(const char *name);
> > >   * @return
> > >   *   - n: Actual number of objects enqueued.
> > >   */
> > > -static __rte_always_inline unsigned int
> > > +static inline unsigned int
> > >  rte_ring_mp_enqueue_burst(struct rte_ring *r, void * const *obj_table,
> > >  			 unsigned int n, unsigned int *free_space)
> > >  {
> > > @@ -704,7 +704,7 @@ rte_ring_mp_enqueue_burst(struct rte_ring *r, void * const *obj_table,
> > >   * @return
> > >   *   - n: Actual number of objects enqueued.
> > >   */
> > > -static __rte_always_inline unsigned int
> > > +static inline unsigned int
> > >  rte_ring_sp_enqueue_burst(struct rte_ring *r, void * const *obj_table,
> > >  			 unsigned int n, unsigned int *free_space)
> > >  {
> > > @@ -731,7 +731,7 @@ rte_ring_sp_enqueue_burst(struct rte_ring *r, void * const *obj_table,
> > >   * @return
> > >   *   - n: Actual number of objects enqueued.
> > >   */
> > > -static __rte_always_inline unsigned int
> > > +static inline unsigned int
> > >  rte_ring_enqueue_burst(struct rte_ring *r, void * const *obj_table,
> > >  		       unsigned int n, unsigned int *free_space)
> > >  {
> > > @@ -759,7 +759,7 @@ rte_ring_enqueue_burst(struct rte_ring *r, void * const *obj_table,
> > >   * @return
> > >   *   - n: Actual number of objects dequeued, 0 if ring is empty
> > >   */
> > > -static __rte_always_inline unsigned int
> > > +static inline unsigned int
> > >  rte_ring_mc_dequeue_burst(struct rte_ring *r, void **obj_table,
> > >  		unsigned int n, unsigned int *available)
> > >  {
> > > @@ -784,7 +784,7 @@ rte_ring_mc_dequeue_burst(struct rte_ring *r, void **obj_table,
> > >   * @return
> > >   *   - n: Actual number of objects dequeued, 0 if ring is empty
> > >   */
> > > -static __rte_always_inline unsigned int
> > > +static inline unsigned int
> > >  rte_ring_sc_dequeue_burst(struct rte_ring *r, void **obj_table,
> > >  		unsigned int n, unsigned int *available)
> > >  {
> > > @@ -811,7 +811,7 @@ rte_ring_sc_dequeue_burst(struct rte_ring *r, void **obj_table,
> > >   * @return
> > >   *   - Number of objects dequeued
> > >   */
> > > -static __rte_always_inline unsigned int
> > > +static inline unsigned int
> > >  rte_ring_dequeue_burst(struct rte_ring *r, void **obj_table,
> > >  		unsigned int n, unsigned int *available)
> > >  {
> > > diff --git a/lib/ring/rte_ring_c11_pvt.h b/lib/ring/rte_ring_c11_pvt.h
> > > index f895950df487..6972a9825cb7 100644
> > > --- a/lib/ring/rte_ring_c11_pvt.h
> > > +++ b/lib/ring/rte_ring_c11_pvt.h
> > > @@ -11,7 +11,7 @@
> > >  #ifndef _RTE_RING_C11_PVT_H_
> > >  #define _RTE_RING_C11_PVT_H_
> > >
> > > -static __rte_always_inline void
> > > +static inline void
> > >  __rte_ring_update_tail(struct rte_ring_headtail *ht, uint32_t old_val,
> > >  		uint32_t new_val, uint32_t single, uint32_t enqueue)
> > >  {
> > > @@ -50,7 +50,7 @@ __rte_ring_update_tail(struct rte_ring_headtail *ht, uint32_t old_val,
> > >   *   Actual number of objects enqueued.
> > >   *   If behavior == RTE_RING_QUEUE_FIXED, this will be 0 or n only.
> > >   */
> > > -static __rte_always_inline unsigned int
> > > +static inline unsigned int
> > >  __rte_ring_move_prod_head(struct rte_ring *r, unsigned int is_sp,
> > >  		unsigned int n, enum rte_ring_queue_behavior behavior,
> > >  		uint32_t *old_head, uint32_t *new_head,
> > > @@ -126,7 +126,7 @@ __rte_ring_move_prod_head(struct rte_ring *r, unsigned int is_sp,
> > >   *   - Actual number of objects dequeued.
> > >   *     If behavior == RTE_RING_QUEUE_FIXED, this will be 0 or n only.
> > >   */
> > > -static __rte_always_inline unsigned int
> > > +static inline unsigned int
> > >  __rte_ring_move_cons_head(struct rte_ring *r, int is_sc,
> > >  		unsigned int n, enum rte_ring_queue_behavior behavior,
> > >  		uint32_t *old_head, uint32_t *new_head,
> > > diff --git a/lib/ring/rte_ring_elem.h b/lib/ring/rte_ring_elem.h
> > > index fb1edc9aad1f..35e110fc5b4b 100644
> > > --- a/lib/ring/rte_ring_elem.h
> > > +++ b/lib/ring/rte_ring_elem.h
> > > @@ -128,7 +128,7 @@ struct rte_ring *rte_ring_create_elem(const char *name, unsigned int esize,
> > >   * @return
> > >   *   The number of objects enqueued, either 0 or n
> > >   */
> > > -static __rte_always_inline unsigned int
> > > +static inline unsigned int
> > >  rte_ring_mp_enqueue_bulk_elem(struct rte_ring *r, const void *obj_table,
> > >  		unsigned int esize, unsigned int n, unsigned int *free_space)
> > >  {
> > > @@ -157,7 +157,7 @@ rte_ring_mp_enqueue_bulk_elem(struct rte_ring *r, const void *obj_table,
> > >   * @return
> > >   *   The number of objects enqueued, either 0 or n
> > >   */
> > > -static __rte_always_inline unsigned int
> > > +static inline unsigned int
> > >  rte_ring_sp_enqueue_bulk_elem(struct rte_ring *r, const void *obj_table,
> > >  		unsigned int esize, unsigned int n, unsigned int *free_space)
> > >  {
> > > @@ -191,7 +191,7 @@ rte_ring_sp_enqueue_bulk_elem(struct rte_ring *r, const void *obj_table,
> > >   * @return
> > >   *   The number of objects enqueued, either 0 or n
> > >   */
> > > -static __rte_always_inline unsigned int
> > > +static inline unsigned int
> > >  rte_ring_enqueue_bulk_elem(struct rte_ring *r, const void *obj_table,
> > >  		unsigned int esize, unsigned int n, unsigned int *free_space)
> > >  {
> > > @@ -235,7 +235,7 @@ rte_ring_enqueue_bulk_elem(struct rte_ring *r, const void *obj_table,
> > >   *   - 0: Success; objects enqueued.
> > >   *   - -ENOBUFS: Not enough room in the ring to enqueue; no object is
> > >   *     enqueued.
> > >   */
> > > -static __rte_always_inline int
> > > +static inline int
> > >  rte_ring_mp_enqueue_elem(struct rte_ring *r, void *obj, unsigned int esize)
> > >  {
> > >  	return rte_ring_mp_enqueue_bulk_elem(r, obj, esize, 1, NULL) ? 0 :
> > > @@ -259,7 +259,7 @@ rte_ring_mp_enqueue_elem(struct rte_ring *r, void *obj, unsigned int esize)
> > >   *   - 0: Success; objects enqueued.
> > >   *   - -ENOBUFS: Not enough room in the ring to enqueue; no object is
> > >   *     enqueued.
> > >   */
> > > -static __rte_always_inline int
> > > +static inline int
> > >  rte_ring_sp_enqueue_elem(struct rte_ring *r, void *obj, unsigned int esize)
> > >  {
> > >  	return rte_ring_sp_enqueue_bulk_elem(r, obj, esize, 1, NULL) ? 0 :
> > > @@ -285,7 +285,7 @@ rte_ring_sp_enqueue_elem(struct rte_ring *r, void *obj, unsigned int esize)
> > >   *   - 0: Success; objects enqueued.
> > >   *   - -ENOBUFS: Not enough room in the ring to enqueue; no object is
> > >   *     enqueued.
> > >   */
> > > -static __rte_always_inline int
> > > +static inline int
> > >  rte_ring_enqueue_elem(struct rte_ring *r, void *obj, unsigned int esize)
> > >  {
> > >  	return rte_ring_enqueue_bulk_elem(r, obj, esize, 1, NULL) ? 0 :
> > > @@ -314,7 +314,7 @@ rte_ring_enqueue_elem(struct rte_ring *r, void *obj, unsigned int esize)
> > >   * @return
> > >   *   The number of objects dequeued, either 0 or n
> > >   */
> > > -static __rte_always_inline unsigned int
> > > +static inline unsigned int
> > >  rte_ring_mc_dequeue_bulk_elem(struct rte_ring *r, void *obj_table,
> > >  		unsigned int esize, unsigned int n, unsigned int *available)
> > >  {
> > > @@ -342,7 +342,7 @@ rte_ring_mc_dequeue_bulk_elem(struct rte_ring *r, void *obj_table,
> > >   * @return
> > >   *   The number of objects dequeued, either 0 or n
> > >   */
> > > -static __rte_always_inline unsigned int
> > > +static inline unsigned int
> > >  rte_ring_sc_dequeue_bulk_elem(struct rte_ring *r, void *obj_table,
> > >  		unsigned int esize, unsigned int n, unsigned int *available)
> > >  {
> > > @@ -373,7 +373,7 @@ rte_ring_sc_dequeue_bulk_elem(struct rte_ring *r, void *obj_table,
> > >   * @return
> > >   *   The number of objects dequeued, either 0 or n
> > >   */
> > > -static __rte_always_inline unsigned int
> > > +static inline unsigned int
> > >  rte_ring_dequeue_bulk_elem(struct rte_ring *r, void *obj_table,
> > >  		unsigned int esize, unsigned int n, unsigned int *available)
> > >  {
> > > @@ -418,7 +418,7 @@ rte_ring_dequeue_bulk_elem(struct rte_ring *r, void *obj_table,
> > >   *   - -ENOENT: Not enough entries in the ring to dequeue; no object is
> > >   *     dequeued.
> > >   */
> > > -static __rte_always_inline int
> > > +static inline int
> > >  rte_ring_mc_dequeue_elem(struct rte_ring *r, void *obj_p,
> > >  		unsigned int esize)
> > >  {
> > > @@ -442,7 +442,7 @@ rte_ring_mc_dequeue_elem(struct rte_ring *r, void *obj_p,
> > >   *   - -ENOENT: Not enough entries in the ring to dequeue, no object is
> > >   *     dequeued.
> > >   */
> > > -static __rte_always_inline int
> > > +static inline int
> > >  rte_ring_sc_dequeue_elem(struct rte_ring *r, void *obj_p,
> > >  		unsigned int esize)
> > >  {
> > > @@ -470,7 +470,7 @@ rte_ring_sc_dequeue_elem(struct rte_ring *r, void *obj_p,
> > >   *   - -ENOENT: Not enough entries in the ring to dequeue, no object is
> > >   *     dequeued.
> > >   */
> > > -static __rte_always_inline int
> > > +static inline int
> > >  rte_ring_dequeue_elem(struct rte_ring *r, void *obj_p, unsigned int esize)
> > >  {
> > >  	return rte_ring_dequeue_bulk_elem(r, obj_p, esize, 1, NULL) ? 0 :
> > > @@ -499,7 +499,7 @@ rte_ring_dequeue_elem(struct rte_ring *r, void *obj_p, unsigned int esize)
> > >   * @return
> > >   *   - n: Actual number of objects enqueued.
> > >   */
> > > -static __rte_always_inline unsigned int
> > > +static inline unsigned int
> > >  rte_ring_mp_enqueue_burst_elem(struct rte_ring *r, const void *obj_table,
> > >  		unsigned int esize, unsigned int n, unsigned int *free_space)
> > >  {
> > > @@ -528,7 +528,7 @@ rte_ring_mp_enqueue_burst_elem(struct rte_ring *r, const void *obj_table,
> > >   * @return
> > >   *   - n: Actual number of objects enqueued.
> > >   */
> > > -static __rte_always_inline unsigned int
> > > +static inline unsigned int
> > >  rte_ring_sp_enqueue_burst_elem(struct rte_ring *r, const void *obj_table,
> > >  		unsigned int esize, unsigned int n, unsigned int *free_space)
> > >  {
> > > @@ -559,7 +559,7 @@ rte_ring_sp_enqueue_burst_elem(struct rte_ring *r, const void *obj_table,
> > >   * @return
> > >   *   - n: Actual number of objects enqueued.
> > >   */
> > > -static __rte_always_inline unsigned int
> > > +static inline unsigned int
> > >  rte_ring_enqueue_burst_elem(struct rte_ring *r, const void *obj_table,
> > >  		unsigned int esize, unsigned int n, unsigned int *free_space)
> > >  {
> > > @@ -609,7 +609,7 @@ rte_ring_enqueue_burst_elem(struct rte_ring *r, const void *obj_table,
> > >   * @return
> > >   *   - n: Actual number of objects dequeued, 0 if ring is empty
> > >   */
> > > -static __rte_always_inline unsigned int
> > > +static inline unsigned int
> > >  rte_ring_mc_dequeue_burst_elem(struct rte_ring *r, void *obj_table,
> > >  		unsigned int esize, unsigned int n, unsigned int *available)
> > >  {
> > > @@ -638,7 +638,7 @@ rte_ring_mc_dequeue_burst_elem(struct rte_ring *r, void *obj_table,
> > >   * @return
> > >   *   - n: Actual number of objects dequeued, 0 if ring is empty
> > >   */
> > > -static __rte_always_inline unsigned int
> > > +static inline unsigned int
> > >  rte_ring_sc_dequeue_burst_elem(struct rte_ring *r, void *obj_table,
> > >  		unsigned int esize, unsigned int n, unsigned int *available)
> > >  {
> > > @@ -669,7 +669,7 @@ rte_ring_sc_dequeue_burst_elem(struct rte_ring *r, void *obj_table,
> > >   * @return
> > >   *   - Number of objects dequeued
> > >   */
> > > -static __rte_always_inline unsigned int
> > > +static inline unsigned int
> > >  rte_ring_dequeue_burst_elem(struct rte_ring *r, void *obj_table,
> > >  		unsigned int esize, unsigned int n, unsigned int *available)
> > >  {
> > > --
> > > 2.35.1
>
> I have not run performance tests and don't have the infrastructure available
> to give a meaningful answer. The application we use doesn't make much use
> of rings.
>
> This was more in response to the RISCV issues.

With GCC there is control over the level of inlining with compiler flags,
but it is definitely in the nerd-knob zone. From the GCC documentation:

  -finline-limit=n
      By default, GCC limits the size of functions that can be inlined.
      This flag allows coarse control of this limit.
      n is the size of functions that can be inlined in number of pseudo
      instructions.

      Inlining is actually controlled by a number of parameters, which may
      be specified individually by using --param name=value. The
      -finline-limit=n option sets some of these parameters as follows:

          max-inline-insns-single is set to n/2.
          max-inline-insns-auto is set to n/2.

      See below for a documentation of the individual parameters
      controlling inlining and for the defaults of these parameters.

      Note: there may be no value to -finline-limit that results in
      default behavior.

      Note: pseudo instruction represents, in this particular context, an
      abstract measurement of a function's size. In no way does it
      represent a count of assembly instructions, and as such its exact
      meaning might change from one release to another.