From: Bruce Richardson <bruce.richardson@intel.com>
To: olivier.matz@6wind.com
Cc: thomas.monjalon@6wind.com, keith.wiles@intel.com,
	konstantin.ananyev@intel.com, stephen@networkplumber.org,
	dev@dpdk.org, Bruce Richardson <bruce.richardson@intel.com>
Date: Tue, 7 Feb 2017 14:12:55 +0000
Message-Id: <1486476777-24768-18-git-send-email-bruce.richardson@intel.com>
In-Reply-To: <20170125121456.GA24344@bricha3-MOBL3.ger.corp.intel.com>
References: <20170125121456.GA24344@bricha3-MOBL3.ger.corp.intel.com>
Subject: [dpdk-dev] [PATCH RFCv3 17/19] ring: allow macros to work with any type of object

Modify the enqueue and dequeue macros to support copying any type of
object by passing in the exact object type. Since the macros simply
take the address of the end of the ring structure, the placeholder
element array in struct rte_ring is no longer needed; remove it,
leaving the structure as a ring header only.

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
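As a note for reviewers, here is a minimal standalone sketch of the
idea, outside the DPDK tree. All names in it (toy_ring, TOY_COPY_IN)
are hypothetical, and the cache-line alignment, unrolling, and
producer/consumer synchronisation of the real ring are deliberately
omitted: the point is only that the structure is a header, element
storage of any type begins at &r[1], and passing the element type into
the macro gives the indexing the correct stride.

#include <inttypes.h>
#include <stdio.h>
#include <stdlib.h>

struct toy_ring {
	uint32_t size;	/* number of slots, a power of two */
	uint32_t mask;	/* size - 1, for cheap index wrapping */
	/* element storage begins at &r[1]; no placeholder array */
};

/* Copy n objects of obj_type from obj_table into the slots starting
 * at index head; obj_type determines the stride of ring[idx]. */
#define TOY_COPY_IN(r, obj_table, n, head, obj_type) do { \
	obj_type *ring = (obj_type *)&(r)[1]; \
	uint32_t idx = (head) & (r)->mask; \
	unsigned int i; \
	for (i = 0; i < (n); i++, idx = (idx + 1) & (r)->mask) \
		ring[idx] = (obj_table)[i]; \
} while (0)

int main(void)
{
	/* one allocation: the header immediately followed by 8 slots */
	struct toy_ring *r = malloc(sizeof(*r) + 8 * sizeof(uint64_t));
	uint64_t src[3] = { 1, 2, 3 };
	uint64_t *slots;

	if (r == NULL)
		return 1;
	r->size = 8;
	r->mask = 7;
	TOY_COPY_IN(r, src, 3, 0, uint64_t);

	slots = (uint64_t *)&r[1];
	printf("%" PRIu64 " %" PRIu64 " %" PRIu64 "\n",
	       slots[0], slots[1], slots[2]);
	free(r);
	return 0;
}

The real macros below do the same &(r)[1] address computation, but add
the unrolled four-at-a-time copy, the wrap-around split, and the
fallthrough switch for the remainder.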
 lib/librte_ring/rte_ring.h | 68 ++++++++++++++++++++++++----------------------
 1 file changed, 36 insertions(+), 32 deletions(-)

diff --git a/lib/librte_ring/rte_ring.h b/lib/librte_ring/rte_ring.h
index 4a58857..d708c90 100644
--- a/lib/librte_ring/rte_ring.h
+++ b/lib/librte_ring/rte_ring.h
@@ -154,11 +154,7 @@ struct rte_ring {
 	/** Ring consumer status. */
 	struct rte_ring_ht_ptr cons __rte_aligned(RTE_CACHE_LINE_SIZE * 2);
-
-	void *ring[] __rte_cache_aligned; /**< Memory space of ring starts here.
-	                                   * not volatile so need to be careful
-	                                   * about compiler re-ordering */
-};
+} __rte_cache_aligned;
 
 #define RING_F_SP_ENQ 0x0001 /**< The default enqueue is "single-producer". */
 #define RING_F_SC_DEQ 0x0002 /**< The default dequeue is "single-consumer". */
@@ -286,54 +282,62 @@ void rte_ring_dump(FILE *f, const struct rte_ring *r);
 
 /* the actual enqueue of pointers on the ring.
  * Placed here since identical code needed in both
  * single and multi producer enqueue functions */
-#define ENQUEUE_PTRS() do { \
+#define ENQUEUE_PTRS(r, obj_table, n, prod_head, obj_type) do { \
 	unsigned int i; \
-	const uint32_t size = r->size; \
-	uint32_t idx = prod_head & r->mask; \
+	const uint32_t size = (r)->size; \
+	uint32_t idx = prod_head & (r)->mask; \
+	obj_type *ring = (void *)&(r)[1]; \
 	if (likely(idx + n < size)) { \
 		for (i = 0; i < (n & ((~(unsigned)0x3))); i+=4, idx+=4) { \
-			r->ring[idx] = obj_table[i]; \
-			r->ring[idx+1] = obj_table[i+1]; \
-			r->ring[idx+2] = obj_table[i+2]; \
-			r->ring[idx+3] = obj_table[i+3]; \
+			ring[idx] = obj_table[i]; \
+			ring[idx+1] = obj_table[i+1]; \
+			ring[idx+2] = obj_table[i+2]; \
+			ring[idx+3] = obj_table[i+3]; \
 		} \
 		switch (n & 0x3) { \
-		case 3: r->ring[idx++] = obj_table[i++]; \
-		case 2: r->ring[idx++] = obj_table[i++]; \
-		case 1: r->ring[idx++] = obj_table[i++]; \
+		case 3: \
+			ring[idx++] = obj_table[i++]; /* fallthrough */ \
+		case 2: \
+			ring[idx++] = obj_table[i++]; /* fallthrough */ \
+		case 1: \
+			ring[idx++] = obj_table[i++]; \
 		} \
 	} else { \
 		for (i = 0; idx < size; i++, idx++)\
-			r->ring[idx] = obj_table[i]; \
+			ring[idx] = obj_table[i]; \
 		for (idx = 0; i < n; i++, idx++) \
-			r->ring[idx] = obj_table[i]; \
+			ring[idx] = obj_table[i]; \
 	} \
-} while(0)
+} while (0)
 
 /* the actual copy of pointers on the ring to obj_table.
  * Placed here since identical code needed in both
  * single and multi consumer dequeue functions */
-#define DEQUEUE_PTRS() do { \
+#define DEQUEUE_PTRS(r, obj_table, n, cons_head, obj_type) do { \
 	unsigned int i; \
-	uint32_t idx = cons_head & r->mask; \
-	const uint32_t size = r->size; \
+	uint32_t idx = cons_head & (r)->mask; \
+	const uint32_t size = (r)->size; \
+	obj_type *ring = (void *)&(r)[1]; \
 	if (likely(idx + n < size)) { \
 		for (i = 0; i < (n & (~(unsigned)0x3)); i+=4, idx+=4) {\
-			obj_table[i] = r->ring[idx]; \
-			obj_table[i+1] = r->ring[idx+1]; \
-			obj_table[i+2] = r->ring[idx+2]; \
-			obj_table[i+3] = r->ring[idx+3]; \
+			obj_table[i] = ring[idx]; \
+			obj_table[i+1] = ring[idx+1]; \
+			obj_table[i+2] = ring[idx+2]; \
+			obj_table[i+3] = ring[idx+3]; \
 		} \
 		switch (n & 0x3) { \
-		case 3: obj_table[i++] = r->ring[idx++]; \
-		case 2: obj_table[i++] = r->ring[idx++]; \
-		case 1: obj_table[i++] = r->ring[idx++]; \
+		case 3: \
+			obj_table[i++] = ring[idx++]; /* fallthrough */ \
+		case 2: \
+			obj_table[i++] = ring[idx++]; /* fallthrough */ \
+		case 1: \
+			obj_table[i++] = ring[idx++]; \
 		} \
 	} else { \
 		for (i = 0; idx < size; i++, idx++) \
-			obj_table[i] = r->ring[idx]; \
+			obj_table[i] = ring[idx]; \
 		for (idx = 0; i < n; i++, idx++) \
-			obj_table[i] = r->ring[idx]; \
+			obj_table[i] = ring[idx]; \
 	} \
 } while (0)
 
@@ -444,7 +448,7 @@ __rte_ring_do_enqueue(struct rte_ring *r, void * const *obj_table,
 	if (n == 0)
 		goto end;
 
-	ENQUEUE_PTRS();
+	ENQUEUE_PTRS(r, obj_table, n, prod_head, void *);
 	rte_smp_wmb();
 
 	update_tail(&r->prod, prod_head, prod_next);
@@ -547,7 +551,7 @@ __rte_ring_do_dequeue(struct rte_ring *r, void **obj_table,
 	if (n == 0)
 		goto end;
 
-	DEQUEUE_PTRS();
+	DEQUEUE_PTRS(r, obj_table, n, cons_head, void *);
 	rte_smp_rmb();
 
 	update_tail(&r->cons, cons_head, cons_next);
-- 
2.9.3