From: "Wang, Zhihong"
To: "Richardson, Bruce", "olivier.matz@6wind.com"
CC: "dev@dpdk.org", "Richardson, Bruce"
Date: Thu, 13 Apr 2017 06:42:39 +0000
Message-ID: <8F6C2BD409508844A0EFC19955BE0941512656FB@SHSMSX103.ccr.corp.intel.com>
References: <20170328203606.27457-1-bruce.richardson@intel.com>
 <20170329130941.31190-1-bruce.richardson@intel.com>
 <20170329130941.31190-8-bruce.richardson@intel.com>
In-Reply-To: <20170329130941.31190-8-bruce.richardson@intel.com>
Subject: Re: [dpdk-dev] [PATCH v5 07/14] ring: make bulk and burst fn return vals consistent

Hi Bruce,

This patch changes the behavior and causes some existing code to
malfunction. For example, bond_ethdev_stop() will get stuck here:

	while (rte_ring_dequeue(port->rx_ring, &pkt) != -ENOENT)
		rte_pktmbuf_free(pkt);

Another example is virtual_ethdev_stop() in test/test/virtual_pmd.c.

Thanks
Zhihong

> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Bruce Richardson
> Sent: Wednesday, March 29, 2017 9:10 PM
> To: olivier.matz@6wind.com
> Cc: dev@dpdk.org; Richardson, Bruce
> Subject: [dpdk-dev] [PATCH v5 07/14] ring: make bulk and burst fn return
> vals consistent
>
> The bulk functions for rings return 0 when all elements are enqueued and
> a negative value when there is no space. Change that to make them
> consistent with the burst functions by returning the number of elements
> enqueued/dequeued, i.e. 0 or N. This change also allows the return value
> from enqueue/dequeue to be used directly, without a branch for error
> checking.
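[Illustrative note, not part of the patch: with the new convention a caller
that drains a ring should test for success rather than for a specific
errno. Reusing the names from the bonding snippet above, a loop like the
following keeps working under both the old and the new return values,
since rte_ring_dequeue() returns 0 on success in either case:

	void *pkt = NULL;

	/* Dequeue until the ring is empty: stop on any non-zero (error)
	 * return instead of comparing against -ENOENT. */
	while (rte_ring_dequeue(port->rx_ring, &pkt) == 0)
		rte_pktmbuf_free(pkt);
]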
>
> Signed-off-by: Bruce Richardson
> Reviewed-by: Yuanhan Liu
> Acked-by: Olivier Matz
> ---
>  doc/guides/rel_notes/release_17_05.rst        |  11 +++
>  doc/guides/sample_app_ug/server_node_efd.rst  |   2 +-
>  examples/load_balancer/runtime.c              |  16 ++-
>  .../client_server_mp/mp_client/client.c       |   8 +-
>  .../client_server_mp/mp_server/main.c         |   2 +-
>  examples/qos_sched/app_thread.c               |   8 +-
>  examples/server_node_efd/node/node.c          |   2 +-
>  examples/server_node_efd/server/main.c        |   2 +-
>  lib/librte_mempool/rte_mempool_ring.c         |  12 ++-
>  lib/librte_ring/rte_ring.h                    | 109 +++++++--------------
>  test/test-pipeline/pipeline_hash.c            |   2 +-
>  test/test-pipeline/runtime.c                  |   8 +-
>  test/test/test_ring.c                         |  46 +++++----
>  test/test/test_ring_perf.c                    |   8 +-
>  14 files changed, 106 insertions(+), 130 deletions(-)
>
> diff --git a/doc/guides/rel_notes/release_17_05.rst b/doc/guides/rel_notes/release_17_05.rst
> index 084b359..6da2612 100644
> --- a/doc/guides/rel_notes/release_17_05.rst
> +++ b/doc/guides/rel_notes/release_17_05.rst
> @@ -137,6 +137,17 @@ API Changes
>    * removed the build-time setting ``CONFIG_RTE_RING_PAUSE_REP_COUNT``
>    * removed the function ``rte_ring_set_water_mark`` as part of a general
>      removal of watermarks support in the library.
> +  * changed the return value of the enqueue and dequeue bulk functions to
> +    match that of the burst equivalents. In all cases, ring functions which
> +    operate on multiple packets now return the number of elements enqueued
> +    or dequeued, as appropriate. The updated functions are:
> +
> +    - ``rte_ring_mp_enqueue_bulk``
> +    - ``rte_ring_sp_enqueue_bulk``
> +    - ``rte_ring_enqueue_bulk``
> +    - ``rte_ring_mc_dequeue_bulk``
> +    - ``rte_ring_sc_dequeue_bulk``
> +    - ``rte_ring_dequeue_bulk``
>
>  ABI Changes
>  -----------
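[Caller-side sketch of what the release note above means in practice --
purely illustrative, not taken from the patch; handle_enqueue_failure() is
just a placeholder for whatever recovery an application does:

	/* Old convention: 0 meant success, a negative errno meant failure. */
	if (rte_ring_enqueue_bulk(r, (void **)bufs, n) != 0)
		handle_enqueue_failure();

	/* New convention: the return value is the number of objects actually
	 * enqueued, i.e. either n (all of them) or 0 (none of them). */
	if (rte_ring_enqueue_bulk(r, (void **)bufs, n) == 0)
		handle_enqueue_failure();
]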
> diff --git a/doc/guides/sample_app_ug/server_node_efd.rst b/doc/guides/sample_app_ug/server_node_efd.rst
> index 9b69cfe..e3a63c8 100644
> --- a/doc/guides/sample_app_ug/server_node_efd.rst
> +++ b/doc/guides/sample_app_ug/server_node_efd.rst
> @@ -286,7 +286,7 @@ repeated infinitely.
>
>         cl = &nodes[node];
>         if (rte_ring_enqueue_bulk(cl->rx_q, (void **)cl_rx_buf[node].buffer,
> -                cl_rx_buf[node].count) != 0){
> +                cl_rx_buf[node].count) != cl_rx_buf[node].count){
>             for (j = 0; j < cl_rx_buf[node].count; j++)
>                 rte_pktmbuf_free(cl_rx_buf[node].buffer[j]);
>             cl->stats.rx_drop += cl_rx_buf[node].count;
> diff --git a/examples/load_balancer/runtime.c b/examples/load_balancer/runtime.c
> index 6944325..82b10bc 100644
> --- a/examples/load_balancer/runtime.c
> +++ b/examples/load_balancer/runtime.c
> @@ -146,7 +146,7 @@ app_lcore_io_rx_buffer_to_send (
> 		(void **) lp->rx.mbuf_out[worker].array,
> 		bsz);
>
> -	if (unlikely(ret == -ENOBUFS)) {
> +	if (unlikely(ret == 0)) {
> 		uint32_t k;
> 		for (k = 0; k < bsz; k ++) {
> 			struct rte_mbuf *m = lp->rx.mbuf_out[worker].array[k];
> @@ -312,7 +312,7 @@ app_lcore_io_rx_flush(struct app_lcore_params_io *lp, uint32_t n_workers)
> 		(void **) lp->rx.mbuf_out[worker].array,
> 		lp->rx.mbuf_out[worker].n_mbufs);
>
> -	if (unlikely(ret < 0)) {
> +	if (unlikely(ret == 0)) {
> 		uint32_t k;
> 		for (k = 0; k < lp->rx.mbuf_out[worker].n_mbufs; k ++) {
> 			struct rte_mbuf *pkt_to_free = lp->rx.mbuf_out[worker].array[k];
> @@ -349,9 +349,8 @@ app_lcore_io_tx(
> 		(void **) &lp->tx.mbuf_out[port].array[n_mbufs],
> 		bsz_rd);
>
> -	if (unlikely(ret == -ENOENT)) {
> +	if (unlikely(ret == 0))
> 		continue;
> -	}
>
> 	n_mbufs += bsz_rd;
>
> @@ -505,9 +504,8 @@ app_lcore_worker(
> 		(void **) lp->mbuf_in.array,
> 		bsz_rd);
>
> -	if (unlikely(ret == -ENOENT)) {
> +	if (unlikely(ret == 0))
> 		continue;
> -	}
>
> #if APP_WORKER_DROP_ALL_PACKETS
> 	for (j = 0; j < bsz_rd; j ++) {
> @@ -559,7 +557,7 @@ app_lcore_worker(
>
> #if APP_STATS
> 	lp->rings_out_iters[port] ++;
> -	if (ret == 0) {
> +	if (ret > 0) {
> 		lp->rings_out_count[port] += 1;
> 	}
> 	if (lp->rings_out_iters[port] == APP_STATS){
> @@ -572,7 +570,7 @@ app_lcore_worker(
> 	}
> #endif
>
> -	if (unlikely(ret == -ENOBUFS)) {
> +	if (unlikely(ret == 0)) {
> 		uint32_t k;
> 		for (k = 0; k < bsz_wr; k ++) {
> 			struct rte_mbuf *pkt_to_free = lp->mbuf_out[port].array[k];
> @@ -609,7 +607,7 @@ app_lcore_worker_flush(struct app_lcore_params_worker *lp)
> 		(void **) lp->mbuf_out[port].array,
> 		lp->mbuf_out[port].n_mbufs);
>
> -	if (unlikely(ret < 0)) {
> +	if (unlikely(ret == 0)) {
> 		uint32_t k;
> 		for (k = 0; k < lp->mbuf_out[port].n_mbufs; k ++) {
> 			struct rte_mbuf *pkt_to_free = lp->mbuf_out[port].array[k];
> diff --git a/examples/multi_process/client_server_mp/mp_client/client.c b/examples/multi_process/client_server_mp/mp_client/client.c
> index d4f9ca3..dca9eb9 100644
> --- a/examples/multi_process/client_server_mp/mp_client/client.c
> +++ b/examples/multi_process/client_server_mp/mp_client/client.c
> @@ -276,14 +276,10 @@ main(int argc, char *argv[])
> 	printf("[Press Ctrl-C to quit ...]\n");
>
> 	for (;;) {
> -		uint16_t i, rx_pkts = PKT_READ_SIZE;
> +		uint16_t i, rx_pkts;
> 		uint8_t port;
>
> -		/* try dequeuing max possible packets first, if that fails, get the
> -		 * most we can. Loop body should only execute once, maximum */
> -		while (rx_pkts > 0 &&
> -				unlikely(rte_ring_dequeue_bulk(rx_ring, pkts, rx_pkts) != 0))
> -			rx_pkts = (uint16_t)RTE_MIN(rte_ring_count(rx_ring), PKT_READ_SIZE);
> +		rx_pkts = rte_ring_dequeue_burst(rx_ring, pkts, PKT_READ_SIZE);
>
> 		if (unlikely(rx_pkts == 0)){
> 			if (need_flush)
> diff --git a/examples/multi_process/client_server_mp/mp_server/main.c b/examples/multi_process/client_server_mp/mp_server/main.c
> index a6dc12d..19c95b2 100644
> --- a/examples/multi_process/client_server_mp/mp_server/main.c
> +++ b/examples/multi_process/client_server_mp/mp_server/main.c
> @@ -227,7 +227,7 @@ flush_rx_queue(uint16_t client)
>
> 	cl = &clients[client];
> 	if (rte_ring_enqueue_bulk(cl->rx_q, (void **)cl_rx_buf[client].buffer,
> -			cl_rx_buf[client].count) != 0){
> +			cl_rx_buf[client].count) == 0){
> 		for (j = 0; j < cl_rx_buf[client].count; j++)
> 			rte_pktmbuf_free(cl_rx_buf[client].buffer[j]);
> 		cl->stats.rx_drop += cl_rx_buf[client].count;
> diff --git a/examples/qos_sched/app_thread.c b/examples/qos_sched/app_thread.c
> index 70fdcdb..dab4594 100644
> --- a/examples/qos_sched/app_thread.c
> +++ b/examples/qos_sched/app_thread.c
> @@ -107,7 +107,7 @@ app_rx_thread(struct thread_conf **confs)
> 		}
>
> 		if (unlikely(rte_ring_sp_enqueue_bulk(conf->rx_ring,
> -				(void **)rx_mbufs, nb_rx) != 0)) {
> +				(void **)rx_mbufs, nb_rx) == 0)) {
> 			for(i = 0; i < nb_rx; i++) {
> 				rte_pktmbuf_free(rx_mbufs[i]);
>
> @@ -180,7 +180,7 @@ app_tx_thread(struct thread_conf **confs)
> 	while ((conf = confs[conf_idx])) {
> 		retval = rte_ring_sc_dequeue_bulk(conf->tx_ring, (void **)mbufs,
> 					burst_conf.qos_dequeue);
> -		if (likely(retval == 0)) {
> +		if (likely(retval != 0)) {
> 			app_send_packets(conf, mbufs, burst_conf.qos_dequeue);
>
> 			conf->counter = 0; /* reset empty read loop counter */
> @@ -230,7 +230,9 @@ app_worker_thread(struct thread_conf **confs)
> 		nb_pkt = rte_sched_port_dequeue(conf->sched_port, mbufs,
> 					burst_conf.qos_dequeue);
> 		if (likely(nb_pkt > 0))
> -			while (rte_ring_sp_enqueue_bulk(conf->tx_ring, (void **)mbufs, nb_pkt) != 0);
> +			while (rte_ring_sp_enqueue_bulk(conf->tx_ring,
> +					(void **)mbufs, nb_pkt) == 0)
> +				; /* empty body */
>
> 		conf_idx++;
> 		if (confs[conf_idx] == NULL)
> diff --git a/examples/server_node_efd/node/node.c b/examples/server_node_efd/node/node.c
> index a6c0c70..9ec6a05 100644
> --- a/examples/server_node_efd/node/node.c
> +++ b/examples/server_node_efd/node/node.c
> @@ -392,7 +392,7 @@ main(int argc, char *argv[])
> 		 */
> 		while (rx_pkts > 0 &&
> 				unlikely(rte_ring_dequeue_bulk(rx_ring, pkts,
> -					rx_pkts) != 0))
> +					rx_pkts) == 0))
> 			rx_pkts = (uint16_t)RTE_MIN(rte_ring_count(rx_ring),
> 					PKT_READ_SIZE);
>
> diff --git a/examples/server_node_efd/server/main.c b/examples/server_node_efd/server/main.c
> index 1a54d1b..3eb7fac 100644
> --- a/examples/server_node_efd/server/main.c
> +++ b/examples/server_node_efd/server/main.c
> @@ -247,7 +247,7 @@ flush_rx_queue(uint16_t node)
>
> 	cl = &nodes[node];
> 	if (rte_ring_enqueue_bulk(cl->rx_q, (void **)cl_rx_buf[node].buffer,
> -			cl_rx_buf[node].count) != 0){
> +			cl_rx_buf[node].count) != cl_rx_buf[node].count){
> 		for (j = 0; j < cl_rx_buf[node].count; j++)
> 			rte_pktmbuf_free(cl_rx_buf[node].buffer[j]);
> 		cl->stats.rx_drop += cl_rx_buf[node].count;
> diff --git a/lib/librte_mempool/rte_mempool_ring.c b/lib/librte_mempool/rte_mempool_ring.c
> index b9aa64d..409b860 100644
> --- a/lib/librte_mempool/rte_mempool_ring.c
> +++ b/lib/librte_mempool/rte_mempool_ring.c
> @@ -42,26 +42,30 @@ static int
>  common_ring_mp_enqueue(struct rte_mempool *mp, void * const *obj_table,
> 		unsigned n)
>  {
> -	return rte_ring_mp_enqueue_bulk(mp->pool_data, obj_table, n);
> +	return rte_ring_mp_enqueue_bulk(mp->pool_data,
> +			obj_table, n) == 0 ? -ENOBUFS : 0;
>  }
>
>  static int
>  common_ring_sp_enqueue(struct rte_mempool *mp, void * const *obj_table,
> 		unsigned n)
>  {
> -	return rte_ring_sp_enqueue_bulk(mp->pool_data, obj_table, n);
> +	return rte_ring_sp_enqueue_bulk(mp->pool_data,
> +			obj_table, n) == 0 ? -ENOBUFS : 0;
>  }
>
>  static int
>  common_ring_mc_dequeue(struct rte_mempool *mp, void **obj_table, unsigned n)
>  {
> -	return rte_ring_mc_dequeue_bulk(mp->pool_data, obj_table, n);
> +	return rte_ring_mc_dequeue_bulk(mp->pool_data,
> +			obj_table, n) == 0 ? -ENOBUFS : 0;
>  }
>
>  static int
>  common_ring_sc_dequeue(struct rte_mempool *mp, void **obj_table, unsigned n)
>  {
> -	return rte_ring_sc_dequeue_bulk(mp->pool_data, obj_table, n);
> +	return rte_ring_sc_dequeue_bulk(mp->pool_data,
> +			obj_table, n) == 0 ? -ENOBUFS : 0;
>  }
>
>  static unsigned
> diff --git a/lib/librte_ring/rte_ring.h b/lib/librte_ring/rte_ring.h
> index 906e8ae..34b438c 100644
> --- a/lib/librte_ring/rte_ring.h
> +++ b/lib/librte_ring/rte_ring.h
> @@ -349,14 +349,10 @@ void rte_ring_dump(FILE *f, const struct rte_ring *r);
>  *   RTE_RING_QUEUE_FIXED:    Enqueue a fixed number of items from a ring
>  *   RTE_RING_QUEUE_VARIABLE: Enqueue as many items a possible from ring
>  * @return
> - *   Depend on the behavior value
> - *   if behavior = RTE_RING_QUEUE_FIXED
> - *   - 0: Success; objects enqueue.
> - *   - -ENOBUFS: Not enough room in the ring to enqueue, no object is enqueued.
> - *   if behavior = RTE_RING_QUEUE_VARIABLE
> - *   - n: Actual number of objects enqueued.
> + *   Actual number of objects enqueued.
> + *   If behavior == RTE_RING_QUEUE_FIXED, this will be 0 or n only.
>  */
> -static inline int __attribute__((always_inline))
> +static inline unsigned int __attribute__((always_inline))
>  __rte_ring_mp_do_enqueue(struct rte_ring *r, void * const *obj_table,
> 			 unsigned n, enum rte_ring_queue_behavior behavior)
>  {
> @@ -388,7 +384,7 @@ __rte_ring_mp_do_enqueue(struct rte_ring *r, void * const *obj_table,
> 	/* check that we have enough room in ring */
> 	if (unlikely(n > free_entries)) {
> 		if (behavior == RTE_RING_QUEUE_FIXED)
> -			return -ENOBUFS;
> +			return 0;
> 		else {
> 			/* No free entry available */
> 			if (unlikely(free_entries == 0))
> @@ -414,7 +410,7 @@ __rte_ring_mp_do_enqueue(struct rte_ring *r, void * const *obj_table,
> 		rte_pause();
>
> 	r->prod.tail = prod_next;
> -	return (behavior == RTE_RING_QUEUE_FIXED) ? 0 : n;
> +	return n;
>  }
>
>  /**
> @@ -430,14 +426,10 @@ __rte_ring_mp_do_enqueue(struct rte_ring *r, void * const *obj_table,
>  *   RTE_RING_QUEUE_FIXED:    Enqueue a fixed number of items from a ring
>  *   RTE_RING_QUEUE_VARIABLE: Enqueue as many items a possible from ring
>  * @return
> - *   Depend on the behavior value
> - *   if behavior = RTE_RING_QUEUE_FIXED
> - *   - 0: Success; objects enqueue.
> - *   - -ENOBUFS: Not enough room in the ring to enqueue, no object is enqueued.
> - *   if behavior = RTE_RING_QUEUE_VARIABLE
> - *   - n: Actual number of objects enqueued.
> + *   Actual number of objects enqueued.
> + *   If behavior == RTE_RING_QUEUE_FIXED, this will be 0 or n only.
>  */
> -static inline int __attribute__((always_inline))
> +static inline unsigned int __attribute__((always_inline))
>  __rte_ring_sp_do_enqueue(struct rte_ring *r, void * const *obj_table,
> 			 unsigned n, enum rte_ring_queue_behavior behavior)
>  {
> @@ -457,7 +449,7 @@ __rte_ring_sp_do_enqueue(struct rte_ring *r, void * const *obj_table,
> 	/* check that we have enough room in ring */
> 	if (unlikely(n > free_entries)) {
> 		if (behavior == RTE_RING_QUEUE_FIXED)
> -			return -ENOBUFS;
> +			return 0;
> 		else {
> 			/* No free entry available */
> 			if (unlikely(free_entries == 0))
> @@ -474,7 +466,7 @@ __rte_ring_sp_do_enqueue(struct rte_ring *r, void * const *obj_table,
> 	rte_smp_wmb();
>
> 	r->prod.tail = prod_next;
> -	return (behavior == RTE_RING_QUEUE_FIXED) ? 0 : n;
> +	return n;
>  }
>
>  /**
> @@ -495,16 +487,11 @@ __rte_ring_sp_do_enqueue(struct rte_ring *r, void * const *obj_table,
>  *   RTE_RING_QUEUE_FIXED:    Dequeue a fixed number of items from a ring
>  *   RTE_RING_QUEUE_VARIABLE: Dequeue as many items a possible from ring
>  * @return
> - *   Depend on the behavior value
> - *   if behavior = RTE_RING_QUEUE_FIXED
> - *   - 0: Success; objects dequeued.
> - *   - -ENOENT: Not enough entries in the ring to dequeue; no object is
> - *     dequeued.
> - *   if behavior = RTE_RING_QUEUE_VARIABLE
> - *   - n: Actual number of objects dequeued.
> + *   - Actual number of objects dequeued.
> + *     If behavior == RTE_RING_QUEUE_FIXED, this will be 0 or n only.
>  */
>
> -static inline int __attribute__((always_inline))
> +static inline unsigned int __attribute__((always_inline))
>  __rte_ring_mc_do_dequeue(struct rte_ring *r, void **obj_table,
> 		 unsigned n, enum rte_ring_queue_behavior behavior)
>  {
> @@ -536,7 +523,7 @@ __rte_ring_mc_do_dequeue(struct rte_ring *r, void **obj_table,
> 	/* Set the actual entries for dequeue */
> 	if (n > entries) {
> 		if (behavior == RTE_RING_QUEUE_FIXED)
> -			return -ENOENT;
> +			return 0;
> 		else {
> 			if (unlikely(entries == 0))
> 				return 0;
> @@ -562,7 +549,7 @@ __rte_ring_mc_do_dequeue(struct rte_ring *r, void **obj_table,
>
> 	r->cons.tail = cons_next;
>
> -	return behavior == RTE_RING_QUEUE_FIXED ? 0 : n;
> +	return n;
>  }
>
>  /**
> @@ -580,15 +567,10 @@ __rte_ring_mc_do_dequeue(struct rte_ring *r, void **obj_table,
>  *   RTE_RING_QUEUE_FIXED:    Dequeue a fixed number of items from a ring
>  *   RTE_RING_QUEUE_VARIABLE: Dequeue as many items a possible from ring
>  * @return
> - *   Depend on the behavior value
> - *   if behavior = RTE_RING_QUEUE_FIXED
> - *   - 0: Success; objects dequeued.
> - *   - -ENOENT: Not enough entries in the ring to dequeue; no object is
> - *     dequeued.
> - *   if behavior = RTE_RING_QUEUE_VARIABLE
> - *   - n: Actual number of objects dequeued.
> + *   - Actual number of objects dequeued.
> + *     If behavior == RTE_RING_QUEUE_FIXED, this will be 0 or n only.
>  */
> -static inline int __attribute__((always_inline))
> +static inline unsigned int __attribute__((always_inline))
>  __rte_ring_sc_do_dequeue(struct rte_ring *r, void **obj_table,
> 		 unsigned n, enum rte_ring_queue_behavior behavior)
>  {
> @@ -607,7 +589,7 @@ __rte_ring_sc_do_dequeue(struct rte_ring *r, void **obj_table,
>
> 	if (n > entries) {
> 		if (behavior == RTE_RING_QUEUE_FIXED)
> -			return -ENOENT;
> +			return 0;
> 		else {
> 			if (unlikely(entries == 0))
> 				return 0;
> @@ -623,7 +605,7 @@ __rte_ring_sc_do_dequeue(struct rte_ring *r, void **obj_table,
> 	rte_smp_rmb();
>
> 	r->cons.tail = cons_next;
> -	return behavior == RTE_RING_QUEUE_FIXED ? 0 : n;
> +	return n;
>  }
>
>  /**
> @@ -639,10 +621,9 @@ __rte_ring_sc_do_dequeue(struct rte_ring *r, void **obj_table,
>  * @param n
>  *   The number of objects to add in the ring from the obj_table.
>  * @return
> - *   - 0: Success; objects enqueue.
> - *   - -ENOBUFS: Not enough room in the ring to enqueue, no object is enqueued.
> + *   The number of objects enqueued, either 0 or n
>  */
> -static inline int __attribute__((always_inline))
> +static inline unsigned int __attribute__((always_inline))
>  rte_ring_mp_enqueue_bulk(struct rte_ring *r, void * const *obj_table,
> 			 unsigned n)
>  {
> @@ -659,10 +640,9 @@ rte_ring_mp_enqueue_bulk(struct rte_ring *r, void * const *obj_table,
>  * @param n
>  *   The number of objects to add in the ring from the obj_table.
>  * @return
> - *   - 0: Success; objects enqueued.
> - *   - -ENOBUFS: Not enough room in the ring to enqueue; no object is enqueued.
> + *   The number of objects enqueued, either 0 or n
>  */
> -static inline int __attribute__((always_inline))
> +static inline unsigned int __attribute__((always_inline))
>  rte_ring_sp_enqueue_bulk(struct rte_ring *r, void * const *obj_table,
> 			 unsigned n)
>  {
> @@ -683,10 +663,9 @@ rte_ring_sp_enqueue_bulk(struct rte_ring *r, void * const *obj_table,
>  * @param n
>  *   The number of objects to add in the ring from the obj_table.
>  * @return
> - *   - 0: Success; objects enqueued.
> - *   - -ENOBUFS: Not enough room in the ring to enqueue; no object is enqueued.
> + *   The number of objects enqueued, either 0 or n
>  */
> -static inline int __attribute__((always_inline))
> +static inline unsigned int __attribute__((always_inline))
>  rte_ring_enqueue_bulk(struct rte_ring *r, void * const *obj_table,
> 		      unsigned n)
>  {
> @@ -713,7 +692,7 @@ rte_ring_enqueue_bulk(struct rte_ring *r, void * const *obj_table,
>  static inline int __attribute__((always_inline))
>  rte_ring_mp_enqueue(struct rte_ring *r, void *obj)
>  {
> -	return rte_ring_mp_enqueue_bulk(r, &obj, 1);
> +	return rte_ring_mp_enqueue_bulk(r, &obj, 1) ? 0 : -ENOBUFS;
>  }
>
>  /**
> @@ -730,7 +709,7 @@ rte_ring_mp_enqueue(struct rte_ring *r, void *obj)
>  static inline int __attribute__((always_inline))
>  rte_ring_sp_enqueue(struct rte_ring *r, void *obj)
>  {
> -	return rte_ring_sp_enqueue_bulk(r, &obj, 1);
> +	return rte_ring_sp_enqueue_bulk(r, &obj, 1) ? 0 : -ENOBUFS;
>  }
>
>  /**
> @@ -751,10 +730,7 @@ rte_ring_sp_enqueue(struct rte_ring *r, void *obj)
>  static inline int __attribute__((always_inline))
>  rte_ring_enqueue(struct rte_ring *r, void *obj)
>  {
> -	if (r->prod.single)
> -		return rte_ring_sp_enqueue(r, obj);
> -	else
> -		return rte_ring_mp_enqueue(r, obj);
> +	return rte_ring_enqueue_bulk(r, &obj, 1) ? 0 : -ENOBUFS;
>  }
>
>  /**
> @@ -770,11 +746,9 @@ rte_ring_enqueue(struct rte_ring *r, void *obj)
>  * @param n
>  *   The number of objects to dequeue from the ring to the obj_table.
>  * @return
> - *   - 0: Success; objects dequeued.
> - *   - -ENOENT: Not enough entries in the ring to dequeue; no object is
> - *     dequeued.
> + *   The number of objects dequeued, either 0 or n
>  */
> -static inline int __attribute__((always_inline))
> +static inline unsigned int __attribute__((always_inline))
>  rte_ring_mc_dequeue_bulk(struct rte_ring *r, void **obj_table, unsigned n)
>  {
> 	return __rte_ring_mc_do_dequeue(r, obj_table, n, RTE_RING_QUEUE_FIXED);
> @@ -791,11 +765,9 @@ rte_ring_mc_dequeue_bulk(struct rte_ring *r, void **obj_table, unsigned n)
>  *   The number of objects to dequeue from the ring to the obj_table,
>  *   must be strictly positive.
>  * @return
> - *   - 0: Success; objects dequeued.
> - *   - -ENOENT: Not enough entries in the ring to dequeue; no object is
> - *     dequeued.
> + *   The number of objects dequeued, either 0 or n
>  */
> -static inline int __attribute__((always_inline))
> +static inline unsigned int __attribute__((always_inline))
>  rte_ring_sc_dequeue_bulk(struct rte_ring *r, void **obj_table, unsigned n)
>  {
> 	return __rte_ring_sc_do_dequeue(r, obj_table, n, RTE_RING_QUEUE_FIXED);
> @@ -815,11 +787,9 @@ rte_ring_sc_dequeue_bulk(struct rte_ring *r, void **obj_table, unsigned n)
>  * @param n
>  *   The number of objects to dequeue from the ring to the obj_table.
>  * @return
> - *   - 0: Success; objects dequeued.
> - *   - -ENOENT: Not enough entries in the ring to dequeue, no object is
> - *     dequeued.
> + *   The number of objects dequeued, either 0 or n
>  */
> -static inline int __attribute__((always_inline))
> +static inline unsigned int __attribute__((always_inline))
>  rte_ring_dequeue_bulk(struct rte_ring *r, void **obj_table, unsigned n)
>  {
> 	if (r->cons.single)
> @@ -846,7 +816,7 @@ rte_ring_dequeue_bulk(struct rte_ring *r, void **obj_table, unsigned n)
>  static inline int __attribute__((always_inline))
>  rte_ring_mc_dequeue(struct rte_ring *r, void **obj_p)
>  {
> -	return rte_ring_mc_dequeue_bulk(r, obj_p, 1);
> +	return rte_ring_mc_dequeue_bulk(r, obj_p, 1) ? 0 : -ENOBUFS;
>  }
>
>  /**
> @@ -864,7 +834,7 @@ rte_ring_mc_dequeue(struct rte_ring *r, void **obj_p)
>  static inline int __attribute__((always_inline))
>  rte_ring_sc_dequeue(struct rte_ring *r, void **obj_p)
>  {
> -	return rte_ring_sc_dequeue_bulk(r, obj_p, 1);
> +	return rte_ring_sc_dequeue_bulk(r, obj_p, 1) ? 0 : -ENOBUFS;
>  }
>
>  /**
> @@ -886,10 +856,7 @@ rte_ring_sc_dequeue(struct rte_ring *r, void **obj_p)
>  static inline int __attribute__((always_inline))
>  rte_ring_dequeue(struct rte_ring *r, void **obj_p)
>  {
> -	if (r->cons.single)
> -		return rte_ring_sc_dequeue(r, obj_p);
> -	else
> -		return rte_ring_mc_dequeue(r, obj_p);
> +	return rte_ring_dequeue_bulk(r, obj_p, 1) ? 0 : -ENOBUFS;
>  }
>
>  /**
> diff --git a/test/test-pipeline/pipeline_hash.c b/test/test-pipeline/pipeline_hash.c
> index 10d2869..1ac0aa8 100644
> --- a/test/test-pipeline/pipeline_hash.c
> +++ b/test/test-pipeline/pipeline_hash.c
> @@ -547,6 +547,6 @@ app_main_loop_rx_metadata(void) {
> 				app.rings_rx[i],
> 				(void **) app.mbuf_rx.array,
> 				n_mbufs);
> -		} while (ret < 0);
> +		} while (ret == 0);
> 	}
>  }
> diff --git a/test/test-pipeline/runtime.c b/test/test-pipeline/runtime.c
> index 42a6142..4e20669 100644
> --- a/test/test-pipeline/runtime.c
> +++ b/test/test-pipeline/runtime.c
> @@ -98,7 +98,7 @@ app_main_loop_rx(void) {
> 				app.rings_rx[i],
> 				(void **) app.mbuf_rx.array,
> 				n_mbufs);
> -		} while (ret < 0);
> +		} while (ret == 0);
> 	}
>  }
>
> @@ -123,7 +123,7 @@ app_main_loop_worker(void) {
> 			(void **) worker_mbuf->array,
> 			app.burst_size_worker_read);
>
> -		if (ret == -ENOENT)
> +		if (ret == 0)
> 			continue;
>
> 		do {
> @@ -131,7 +131,7 @@ app_main_loop_worker(void) {
> 				app.rings_tx[i ^ 1],
> 				(void **) worker_mbuf->array,
> 				app.burst_size_worker_write);
> -		} while (ret < 0);
> +		} while (ret == 0);
> 	}
>  }
>
> @@ -152,7 +152,7 @@ app_main_loop_tx(void) {
> 			(void **) &app.mbuf_tx[i].array[n_mbufs],
> 			app.burst_size_tx_read);
>
> -		if (ret == -ENOENT)
> +		if (ret == 0)
> 			continue;
>
> 		n_mbufs += app.burst_size_tx_read;
> diff --git a/test/test/test_ring.c b/test/test/test_ring.c
> index 666a451..112433b 100644
> --- a/test/test/test_ring.c
> +++ b/test/test/test_ring.c
> @@ -117,20 +117,18 @@ test_ring_basic_full_empty(void * const src[], void *dst[])
> 		rand = RTE_MAX(rte_rand() % RING_SIZE, 1UL);
> 		printf("%s: iteration %u, random shift: %u;\n",
> 		    __func__, i, rand);
> -		TEST_RING_VERIFY(-ENOBUFS != rte_ring_enqueue_bulk(r, src,
> -				rand));
> -		TEST_RING_VERIFY(0 == rte_ring_dequeue_bulk(r, dst, rand));
> +		TEST_RING_VERIFY(rte_ring_enqueue_bulk(r, src, rand) != 0);
> +		TEST_RING_VERIFY(rte_ring_dequeue_bulk(r, dst, rand) == rand);
>
> 		/* fill the ring */
> -		TEST_RING_VERIFY(-ENOBUFS != rte_ring_enqueue_bulk(r, src,
> -				rsz));
> +		TEST_RING_VERIFY(rte_ring_enqueue_bulk(r, src, rsz) != 0);
> 		TEST_RING_VERIFY(0 == rte_ring_free_count(r));
> 		TEST_RING_VERIFY(rsz == rte_ring_count(r));
> 		TEST_RING_VERIFY(rte_ring_full(r));
> 		TEST_RING_VERIFY(0 == rte_ring_empty(r));
>
> 		/* empty the ring */
> -		TEST_RING_VERIFY(0 == rte_ring_dequeue_bulk(r, dst, rsz));
> +		TEST_RING_VERIFY(rte_ring_dequeue_bulk(r, dst, rsz) == rsz);
> 		TEST_RING_VERIFY(rsz == rte_ring_free_count(r));
> 		TEST_RING_VERIFY(0 == rte_ring_count(r));
> 		TEST_RING_VERIFY(0 == rte_ring_full(r));
> @@ -171,37 +169,37 @@ test_ring_basic(void)
> 	printf("enqueue 1 obj\n");
> 	ret = rte_ring_sp_enqueue_bulk(r, cur_src, 1);
> 	cur_src += 1;
> -	if (ret != 0)
> +	if (ret == 0)
> 		goto fail;
>
> 	printf("enqueue 2 objs\n");
> 	ret = rte_ring_sp_enqueue_bulk(r, cur_src, 2);
> 	cur_src += 2;
> -	if (ret != 0)
> +	if (ret == 0)
> 		goto fail;
>
> 	printf("enqueue MAX_BULK objs\n");
> 	ret = rte_ring_sp_enqueue_bulk(r, cur_src, MAX_BULK);
> 	cur_src += MAX_BULK;
> -	if (ret != 0)
> +	if (ret == 0)
> 		goto fail;
>
> 	printf("dequeue 1 obj\n");
> 	ret = rte_ring_sc_dequeue_bulk(r, cur_dst, 1);
> 	cur_dst += 1;
> -	if (ret != 0)
> +	if (ret == 0)
> 		goto fail;
>
> 	printf("dequeue 2 objs\n");
> 	ret = rte_ring_sc_dequeue_bulk(r, cur_dst, 2);
> 	cur_dst += 2;
> -	if (ret != 0)
> +	if (ret == 0)
> 		goto fail;
>
> 	printf("dequeue MAX_BULK objs\n");
> 	ret = rte_ring_sc_dequeue_bulk(r, cur_dst, MAX_BULK);
> 	cur_dst += MAX_BULK;
> -	if (ret != 0)
> +	if (ret == 0)
> 		goto fail;
>
> 	/* check data */
> @@ -217,37 +215,37 @@ test_ring_basic(void)
> 	printf("enqueue 1 obj\n");
> 	ret = rte_ring_mp_enqueue_bulk(r, cur_src, 1);
> 	cur_src += 1;
> -	if (ret != 0)
> +	if (ret == 0)
> 		goto fail;
>
> 	printf("enqueue 2 objs\n");
> 	ret = rte_ring_mp_enqueue_bulk(r, cur_src, 2);
> 	cur_src += 2;
> -	if (ret != 0)
> +	if (ret == 0)
> 		goto fail;
>
> 	printf("enqueue MAX_BULK objs\n");
> 	ret = rte_ring_mp_enqueue_bulk(r, cur_src, MAX_BULK);
> 	cur_src += MAX_BULK;
> -	if (ret != 0)
> +	if (ret == 0)
> 		goto fail;
>
> 	printf("dequeue 1 obj\n");
> 	ret = rte_ring_mc_dequeue_bulk(r, cur_dst, 1);
> 	cur_dst += 1;
> -	if (ret != 0)
> +	if (ret == 0)
> 		goto fail;
>
> 	printf("dequeue 2 objs\n");
> 	ret = rte_ring_mc_dequeue_bulk(r, cur_dst, 2);
> 	cur_dst += 2;
> -	if (ret != 0)
> +	if (ret == 0)
> 		goto fail;
>
> 	printf("dequeue MAX_BULK objs\n");
> 	ret = rte_ring_mc_dequeue_bulk(r, cur_dst, MAX_BULK);
> 	cur_dst += MAX_BULK;
> -	if (ret != 0)
> +	if (ret == 0)
> 		goto fail;
>
> 	/* check data */
> @@ -264,11 +262,11 @@ test_ring_basic(void)
> 	for (i = 0; i<RING_SIZE/MAX_BULK; i++) {
> 		ret = rte_ring_mp_enqueue_bulk(r, cur_src, MAX_BULK);
> 		cur_src += MAX_BULK;
> -		if (ret != 0)
> +		if (ret == 0)
> 			goto fail;
> 		ret = rte_ring_mc_dequeue_bulk(r, cur_dst, MAX_BULK);
> 		cur_dst += MAX_BULK;
> -		if (ret != 0)
> +		if (ret == 0)
> 			goto fail;
> 	}
>
> @@ -294,25 +292,25 @@ test_ring_basic(void)
>
> 	ret = rte_ring_enqueue_bulk(r, cur_src, num_elems);
> 	cur_src += num_elems;
> -	if (ret != 0) {
> +	if (ret == 0) {
> 		printf("Cannot enqueue\n");
> 		goto fail;
> 	}
> 	ret = rte_ring_enqueue_bulk(r, cur_src, num_elems);
> 	cur_src += num_elems;
> -	if (ret != 0) {
> +	if (ret == 0) {
> 		printf("Cannot enqueue\n");
> 		goto fail;
> 	}
> 	ret = rte_ring_dequeue_bulk(r, cur_dst, num_elems);
> 	cur_dst += num_elems;
> -	if (ret != 0) {
> +	if (ret == 0) {
> 		printf("Cannot dequeue\n");
> 		goto fail;
> 	}
> 	ret = rte_ring_dequeue_bulk(r, cur_dst, num_elems);
> 	cur_dst += num_elems;
> -	if (ret != 0) {
> +	if (ret == 0) {
> 		printf("Cannot dequeue2\n");
> 		goto fail;
> 	}
> diff --git a/test/test/test_ring_perf.c b/test/test/test_ring_perf.c
> index 320c20c..8ccbdef 100644
> --- a/test/test/test_ring_perf.c
> +++ b/test/test/test_ring_perf.c
> @@ -195,13 +195,13 @@ enqueue_bulk(void *p)
>
> 	const uint64_t sp_start = rte_rdtsc();
> 	for (i = 0; i < iterations; i++)
> -		while (rte_ring_sp_enqueue_bulk(r, burst, size) != 0)
> +		while (rte_ring_sp_enqueue_bulk(r, burst, size) == 0)
> 			rte_pause();
> 	const uint64_t sp_end = rte_rdtsc();
>
> 	const uint64_t mp_start = rte_rdtsc();
> 	for (i = 0; i < iterations; i++)
> -		while (rte_ring_mp_enqueue_bulk(r, burst, size) != 0)
> +		while (rte_ring_mp_enqueue_bulk(r, burst, size) == 0)
> 			rte_pause();
> 	const uint64_t mp_end = rte_rdtsc();
>
> @@ -230,13 +230,13 @@ dequeue_bulk(void *p)
>
> 	const uint64_t sc_start = rte_rdtsc();
> 	for (i = 0; i < iterations; i++)
> -		while (rte_ring_sc_dequeue_bulk(r, burst, size) != 0)
> +		while (rte_ring_sc_dequeue_bulk(r, burst, size) == 0)
> 			rte_pause();
> 	const uint64_t sc_end = rte_rdtsc();
>
> 	const uint64_t mc_start = rte_rdtsc();
> 	for (i = 0; i < iterations; i++)
> -		while (rte_ring_mc_dequeue_bulk(r, burst, size) != 0)
> +		while (rte_ring_mc_dequeue_bulk(r, burst, size) == 0)
> 			rte_pause();
> 	const uint64_t mc_end = rte_rdtsc();
>
> --
> 2.9.3