From: Stephen Hemminger
To: dev@dpdk.org
Cc: Stephen Hemminger
Subject: [RFC] rte_ring: don't use always inline
Date: Thu, 5 May 2022 15:45:47 -0700
Message-Id: <20220505224547.394253-1-stephen@networkplumber.org>
List-Id: DPDK patches and discussions

Forcing the compiler to inline with "always inline" can lead to worse,
and sometimes broken, code. It is better to use the standard inline
keyword and let the compiler have some flexibility.

Signed-off-by: Stephen Hemminger
---
This is an RFC because the value of large-scale inlining is debatable,
and this change may slow things down on some GCC versions and
architectures. If you follow the Linux kernel mailing list, you know
inlining has been a debated topic over the years, with opinions for and
against it, compounded by bad inlining decisions in various GCC
versions.
 lib/ring/rte_ring.h         | 36 ++++++++++++++++++------------------
 lib/ring/rte_ring_c11_pvt.h |  6 +++---
 lib/ring/rte_ring_elem.h    | 36 ++++++++++++++++++------------------
 3 files changed, 39 insertions(+), 39 deletions(-)

diff --git a/lib/ring/rte_ring.h b/lib/ring/rte_ring.h
index 980e92e59493..4a2588beed9e 100644
--- a/lib/ring/rte_ring.h
+++ b/lib/ring/rte_ring.h
@@ -226,7 +226,7 @@ void rte_ring_dump(FILE *f, const struct rte_ring *r);
  * @return
  *   The number of objects enqueued, either 0 or n
  */
-static __rte_always_inline unsigned int
+static inline unsigned int
 rte_ring_mp_enqueue_bulk(struct rte_ring *r, void * const *obj_table,
 			 unsigned int n, unsigned int *free_space)
 {
@@ -249,7 +249,7 @@ rte_ring_mp_enqueue_bulk(struct rte_ring *r, void * const *obj_table,
  * @return
  *   The number of objects enqueued, either 0 or n
  */
-static __rte_always_inline unsigned int
+static inline unsigned int
 rte_ring_sp_enqueue_bulk(struct rte_ring *r, void * const *obj_table,
 			 unsigned int n, unsigned int *free_space)
 {
@@ -276,7 +276,7 @@ rte_ring_sp_enqueue_bulk(struct rte_ring *r, void * const *obj_table,
  * @return
  *   The number of objects enqueued, either 0 or n
  */
-static __rte_always_inline unsigned int
+static inline unsigned int
 rte_ring_enqueue_bulk(struct rte_ring *r, void * const *obj_table,
 		      unsigned int n, unsigned int *free_space)
 {
@@ -298,7 +298,7 @@ rte_ring_enqueue_bulk(struct rte_ring *r, void * const *obj_table,
  *   - 0: Success; objects enqueued.
  *   - -ENOBUFS: Not enough room in the ring to enqueue; no object is enqueued.
  */
-static __rte_always_inline int
+static inline int
 rte_ring_mp_enqueue(struct rte_ring *r, void *obj)
 {
 	return rte_ring_mp_enqueue_elem(r, &obj, sizeof(void *));
@@ -315,7 +315,7 @@ rte_ring_mp_enqueue(struct rte_ring *r, void *obj)
  *   - 0: Success; objects enqueued.
  *   - -ENOBUFS: Not enough room in the ring to enqueue; no object is enqueued.
  */
-static __rte_always_inline int
+static inline int
 rte_ring_sp_enqueue(struct rte_ring *r, void *obj)
 {
 	return rte_ring_sp_enqueue_elem(r, &obj, sizeof(void *));
@@ -336,7 +336,7 @@ rte_ring_sp_enqueue(struct rte_ring *r, void *obj)
  *   - 0: Success; objects enqueued.
  *   - -ENOBUFS: Not enough room in the ring to enqueue; no object is enqueued.
  */
-static __rte_always_inline int
+static inline int
 rte_ring_enqueue(struct rte_ring *r, void *obj)
 {
 	return rte_ring_enqueue_elem(r, &obj, sizeof(void *));
@@ -360,7 +360,7 @@ rte_ring_enqueue(struct rte_ring *r, void *obj)
  * @return
  *   The number of objects dequeued, either 0 or n
  */
-static __rte_always_inline unsigned int
+static inline unsigned int
 rte_ring_mc_dequeue_bulk(struct rte_ring *r, void **obj_table,
 		unsigned int n, unsigned int *available)
 {
@@ -384,7 +384,7 @@ rte_ring_mc_dequeue_bulk(struct rte_ring *r, void **obj_table,
  * @return
  *   The number of objects dequeued, either 0 or n
  */
-static __rte_always_inline unsigned int
+static inline unsigned int
 rte_ring_sc_dequeue_bulk(struct rte_ring *r, void **obj_table,
 		unsigned int n, unsigned int *available)
 {
@@ -411,7 +411,7 @@ rte_ring_sc_dequeue_bulk(struct rte_ring *r, void **obj_table,
  * @return
  *   The number of objects dequeued, either 0 or n
  */
-static __rte_always_inline unsigned int
+static inline unsigned int
 rte_ring_dequeue_bulk(struct rte_ring *r, void **obj_table, unsigned int n,
 		unsigned int *available)
 {
@@ -434,7 +434,7 @@ rte_ring_dequeue_bulk(struct rte_ring *r, void **obj_table, unsigned int n,
  *   - -ENOENT: Not enough entries in the ring to dequeue; no object is
  *     dequeued.
  */
-static __rte_always_inline int
+static inline int
 rte_ring_mc_dequeue(struct rte_ring *r, void **obj_p)
 {
 	return rte_ring_mc_dequeue_elem(r, obj_p, sizeof(void *));
@@ -452,7 +452,7 @@ rte_ring_mc_dequeue(struct rte_ring *r, void **obj_p)
  *   - -ENOENT: Not enough entries in the ring to dequeue, no object is
  *     dequeued.
  */
-static __rte_always_inline int
+static inline int
 rte_ring_sc_dequeue(struct rte_ring *r, void **obj_p)
 {
 	return rte_ring_sc_dequeue_elem(r, obj_p, sizeof(void *));
@@ -474,7 +474,7 @@ rte_ring_sc_dequeue(struct rte_ring *r, void **obj_p)
  *   - -ENOENT: Not enough entries in the ring to dequeue, no object is
  *     dequeued.
  */
-static __rte_always_inline int
+static inline int
 rte_ring_dequeue(struct rte_ring *r, void **obj_p)
 {
 	return rte_ring_dequeue_elem(r, obj_p, sizeof(void *));
@@ -681,7 +681,7 @@ struct rte_ring *rte_ring_lookup(const char *name);
  * @return
  *   - n: Actual number of objects enqueued.
  */
-static __rte_always_inline unsigned int
+static inline unsigned int
 rte_ring_mp_enqueue_burst(struct rte_ring *r, void * const *obj_table,
 			 unsigned int n, unsigned int *free_space)
 {
@@ -704,7 +704,7 @@ rte_ring_mp_enqueue_burst(struct rte_ring *r, void * const *obj_table,
  * @return
  *   - n: Actual number of objects enqueued.
  */
-static __rte_always_inline unsigned int
+static inline unsigned int
 rte_ring_sp_enqueue_burst(struct rte_ring *r, void * const *obj_table,
 			 unsigned int n, unsigned int *free_space)
 {
@@ -731,7 +731,7 @@ rte_ring_sp_enqueue_burst(struct rte_ring *r, void * const *obj_table,
  * @return
  *   - n: Actual number of objects enqueued.
  */
-static __rte_always_inline unsigned int
+static inline unsigned int
 rte_ring_enqueue_burst(struct rte_ring *r, void * const *obj_table,
 		      unsigned int n, unsigned int *free_space)
 {
@@ -759,7 +759,7 @@ rte_ring_enqueue_burst(struct rte_ring *r, void * const *obj_table,
  * @return
  *   - n: Actual number of objects dequeued, 0 if ring is empty
  */
-static __rte_always_inline unsigned int
+static inline unsigned int
 rte_ring_mc_dequeue_burst(struct rte_ring *r, void **obj_table,
 		unsigned int n, unsigned int *available)
 {
@@ -784,7 +784,7 @@ rte_ring_mc_dequeue_burst(struct rte_ring *r, void **obj_table,
  * @return
  *   - n: Actual number of objects dequeued, 0 if ring is empty
  */
-static __rte_always_inline unsigned int
+static inline unsigned int
 rte_ring_sc_dequeue_burst(struct rte_ring *r, void **obj_table,
 		unsigned int n, unsigned int *available)
 {
@@ -811,7 +811,7 @@ rte_ring_sc_dequeue_burst(struct rte_ring *r, void **obj_table,
  * @return
  *   - Number of objects dequeued
  */
-static __rte_always_inline unsigned int
+static inline unsigned int
 rte_ring_dequeue_burst(struct rte_ring *r, void **obj_table,
 		unsigned int n, unsigned int *available)
 {
diff --git a/lib/ring/rte_ring_c11_pvt.h b/lib/ring/rte_ring_c11_pvt.h
index f895950df487..6972a9825cb7 100644
--- a/lib/ring/rte_ring_c11_pvt.h
+++ b/lib/ring/rte_ring_c11_pvt.h
@@ -11,7 +11,7 @@
 #ifndef _RTE_RING_C11_PVT_H_
 #define _RTE_RING_C11_PVT_H_
 
-static __rte_always_inline void
+static inline void
 __rte_ring_update_tail(struct rte_ring_headtail *ht, uint32_t old_val,
 		uint32_t new_val, uint32_t single, uint32_t enqueue)
 {
@@ -50,7 +50,7 @@ __rte_ring_update_tail(struct rte_ring_headtail *ht, uint32_t old_val,
  *   Actual number of objects enqueued.
  *   If behavior == RTE_RING_QUEUE_FIXED, this will be 0 or n only.
  */
-static __rte_always_inline unsigned int
+static inline unsigned int
 __rte_ring_move_prod_head(struct rte_ring *r, unsigned int is_sp,
 		unsigned int n, enum rte_ring_queue_behavior behavior,
 		uint32_t *old_head, uint32_t *new_head,
@@ -126,7 +126,7 @@ __rte_ring_move_prod_head(struct rte_ring *r, unsigned int is_sp,
  *   - Actual number of objects dequeued.
  *     If behavior == RTE_RING_QUEUE_FIXED, this will be 0 or n only.
  */
-static __rte_always_inline unsigned int
+static inline unsigned int
 __rte_ring_move_cons_head(struct rte_ring *r, int is_sc,
 		unsigned int n, enum rte_ring_queue_behavior behavior,
 		uint32_t *old_head, uint32_t *new_head,
diff --git a/lib/ring/rte_ring_elem.h b/lib/ring/rte_ring_elem.h
index fb1edc9aad1f..35e110fc5b4b 100644
--- a/lib/ring/rte_ring_elem.h
+++ b/lib/ring/rte_ring_elem.h
@@ -128,7 +128,7 @@ struct rte_ring *rte_ring_create_elem(const char *name, unsigned int esize,
  * @return
  *   The number of objects enqueued, either 0 or n
  */
-static __rte_always_inline unsigned int
+static inline unsigned int
 rte_ring_mp_enqueue_bulk_elem(struct rte_ring *r, const void *obj_table,
 		unsigned int esize, unsigned int n, unsigned int *free_space)
 {
@@ -157,7 +157,7 @@ rte_ring_mp_enqueue_bulk_elem(struct rte_ring *r, const void *obj_table,
  * @return
  *   The number of objects enqueued, either 0 or n
  */
-static __rte_always_inline unsigned int
+static inline unsigned int
 rte_ring_sp_enqueue_bulk_elem(struct rte_ring *r, const void *obj_table,
 		unsigned int esize, unsigned int n, unsigned int *free_space)
 {
@@ -191,7 +191,7 @@ rte_ring_sp_enqueue_bulk_elem(struct rte_ring *r, const void *obj_table,
  * @return
  *   The number of objects enqueued, either 0 or n
  */
-static __rte_always_inline unsigned int
+static inline unsigned int
 rte_ring_enqueue_bulk_elem(struct rte_ring *r, const void *obj_table,
 		unsigned int esize, unsigned int n, unsigned int *free_space)
 {
@@ -235,7 +235,7 @@ rte_ring_enqueue_bulk_elem(struct rte_ring *r, const void *obj_table,
  *   - 0: Success; objects enqueued.
  *   - -ENOBUFS: Not enough room in the ring to enqueue; no object is enqueued.
  */
-static __rte_always_inline int
+static inline int
 rte_ring_mp_enqueue_elem(struct rte_ring *r, void *obj, unsigned int esize)
 {
 	return rte_ring_mp_enqueue_bulk_elem(r, obj, esize, 1, NULL) ? 0 :
@@ -259,7 +259,7 @@ rte_ring_mp_enqueue_elem(struct rte_ring *r, void *obj, unsigned int esize)
  *   - 0: Success; objects enqueued.
  *   - -ENOBUFS: Not enough room in the ring to enqueue; no object is enqueued.
  */
-static __rte_always_inline int
+static inline int
 rte_ring_sp_enqueue_elem(struct rte_ring *r, void *obj, unsigned int esize)
 {
 	return rte_ring_sp_enqueue_bulk_elem(r, obj, esize, 1, NULL) ? 0 :
@@ -285,7 +285,7 @@ rte_ring_sp_enqueue_elem(struct rte_ring *r, void *obj, unsigned int esize)
  *   - 0: Success; objects enqueued.
  *   - -ENOBUFS: Not enough room in the ring to enqueue; no object is enqueued.
  */
-static __rte_always_inline int
+static inline int
 rte_ring_enqueue_elem(struct rte_ring *r, void *obj, unsigned int esize)
 {
 	return rte_ring_enqueue_bulk_elem(r, obj, esize, 1, NULL) ? 0 :
@@ -314,7 +314,7 @@ rte_ring_enqueue_elem(struct rte_ring *r, void *obj, unsigned int esize)
  * @return
  *   The number of objects dequeued, either 0 or n
  */
-static __rte_always_inline unsigned int
+static inline unsigned int
 rte_ring_mc_dequeue_bulk_elem(struct rte_ring *r, void *obj_table,
 		unsigned int esize, unsigned int n, unsigned int *available)
 {
@@ -342,7 +342,7 @@ rte_ring_mc_dequeue_bulk_elem(struct rte_ring *r, void *obj_table,
  * @return
  *   The number of objects dequeued, either 0 or n
  */
-static __rte_always_inline unsigned int
+static inline unsigned int
 rte_ring_sc_dequeue_bulk_elem(struct rte_ring *r, void *obj_table,
 		unsigned int esize, unsigned int n, unsigned int *available)
 {
@@ -373,7 +373,7 @@ rte_ring_sc_dequeue_bulk_elem(struct rte_ring *r, void *obj_table,
  * @return
  *   The number of objects dequeued, either 0 or n
  */
-static __rte_always_inline unsigned int
+static inline unsigned int
 rte_ring_dequeue_bulk_elem(struct rte_ring *r, void *obj_table,
 		unsigned int esize, unsigned int n, unsigned int *available)
 {
@@ -418,7 +418,7 @@ rte_ring_dequeue_bulk_elem(struct rte_ring *r, void *obj_table,
  *   - -ENOENT: Not enough entries in the ring to dequeue; no object is
  *     dequeued.
  */
-static __rte_always_inline int
+static inline int
 rte_ring_mc_dequeue_elem(struct rte_ring *r, void *obj_p,
 		unsigned int esize)
 {
@@ -442,7 +442,7 @@ rte_ring_mc_dequeue_elem(struct rte_ring *r, void *obj_p,
  *   - -ENOENT: Not enough entries in the ring to dequeue, no object is
  *     dequeued.
  */
-static __rte_always_inline int
+static inline int
 rte_ring_sc_dequeue_elem(struct rte_ring *r, void *obj_p,
 		unsigned int esize)
 {
@@ -470,7 +470,7 @@ rte_ring_sc_dequeue_elem(struct rte_ring *r, void *obj_p,
  *   - -ENOENT: Not enough entries in the ring to dequeue, no object is
  *     dequeued.
  */
-static __rte_always_inline int
+static inline int
 rte_ring_dequeue_elem(struct rte_ring *r, void *obj_p, unsigned int esize)
 {
 	return rte_ring_dequeue_bulk_elem(r, obj_p, esize, 1, NULL) ? 0 :
@@ -499,7 +499,7 @@ rte_ring_dequeue_elem(struct rte_ring *r, void *obj_p, unsigned int esize)
  * @return
  *   - n: Actual number of objects enqueued.
  */
-static __rte_always_inline unsigned int
+static inline unsigned int
 rte_ring_mp_enqueue_burst_elem(struct rte_ring *r, const void *obj_table,
 		unsigned int esize, unsigned int n, unsigned int *free_space)
 {
@@ -528,7 +528,7 @@ rte_ring_mp_enqueue_burst_elem(struct rte_ring *r, const void *obj_table,
  * @return
  *   - n: Actual number of objects enqueued.
  */
-static __rte_always_inline unsigned int
+static inline unsigned int
 rte_ring_sp_enqueue_burst_elem(struct rte_ring *r, const void *obj_table,
 		unsigned int esize, unsigned int n, unsigned int *free_space)
 {
@@ -559,7 +559,7 @@ rte_ring_sp_enqueue_burst_elem(struct rte_ring *r, const void *obj_table,
  * @return
  *   - n: Actual number of objects enqueued.
  */
-static __rte_always_inline unsigned int
+static inline unsigned int
 rte_ring_enqueue_burst_elem(struct rte_ring *r, const void *obj_table,
 		unsigned int esize, unsigned int n, unsigned int *free_space)
 {
@@ -609,7 +609,7 @@ rte_ring_enqueue_burst_elem(struct rte_ring *r, const void *obj_table,
  * @return
  *   - n: Actual number of objects dequeued, 0 if ring is empty
  */
-static __rte_always_inline unsigned int
+static inline unsigned int
 rte_ring_mc_dequeue_burst_elem(struct rte_ring *r, void *obj_table,
 		unsigned int esize, unsigned int n, unsigned int *available)
 {
@@ -638,7 +638,7 @@ rte_ring_mc_dequeue_burst_elem(struct rte_ring *r, void *obj_table,
  * @return
  *   - n: Actual number of objects dequeued, 0 if ring is empty
  */
-static __rte_always_inline unsigned int
+static inline unsigned int
 rte_ring_sc_dequeue_burst_elem(struct rte_ring *r, void *obj_table,
 		unsigned int esize, unsigned int n, unsigned int *available)
 {
@@ -669,7 +669,7 @@ rte_ring_sc_dequeue_burst_elem(struct rte_ring *r, void *obj_table,
  * @return
  *   - Number of objects dequeued
  */
-static __rte_always_inline unsigned int
+static inline unsigned int
 rte_ring_dequeue_burst_elem(struct rte_ring *r, void *obj_table,
 		unsigned int esize, unsigned int n, unsigned int *available)
 {
-- 
2.35.1