From mboxrd@z Thu Jan  1 00:00:00 1970
From: Konstantin Ananyev
To: dev@dpdk.org
Cc: olivier.matz@6wind.com, arybchenko@solarflare.com, jielong.zjl@antfin.com,
 gage.eads@intel.com, thomas@monjalon.net, Konstantin Ananyev
Date: Wed, 15 Jul 2020 15:58:15 +0100
Message-Id: <20200715145815.27132-3-konstantin.ananyev@intel.com>
X-Mailer: git-send-email 2.18.0
In-Reply-To: <20200715145815.27132-1-konstantin.ananyev@intel.com>
References: <20200713155050.27743-1-konstantin.ananyev@intel.com>
 <20200715145815.27132-1-konstantin.ananyev@intel.com>
Subject: [dpdk-dev] [PATCH v5 2/2] mempool/ring: add support for new ring sync modes
List-Id: DPDK patches and discussions
Two new sync modes were introduced into rte_ring:
relaxed tail sync (RTS) and head/tail sync (HTS).
This change provides the user with the ability to select these
modes for a ring-based mempool via the mempool ops API.

Signed-off-by: Konstantin Ananyev
Acked-by: Gage Eads
---
 doc/guides/mempool/ring.rst             | 22 ++++++-
 doc/guides/prog_guide/ring_lib.rst      |  8 +++
 doc/guides/rel_notes/release_20_08.rst  |  6 ++
 drivers/mempool/ring/rte_mempool_ring.c | 88 ++++++++++++++++++++++---
 4 files changed, 113 insertions(+), 11 deletions(-)

diff --git a/doc/guides/mempool/ring.rst b/doc/guides/mempool/ring.rst
index b8659c03f..ca03180ea 100644
--- a/doc/guides/mempool/ring.rst
+++ b/doc/guides/mempool/ring.rst
@@ -12,12 +12,14 @@ and can be selected via mempool ops API:
 - ``ring_mp_mc``
 
   Underlying **rte_ring** operates in multi-thread producer,
-  multi-thread consumer sync mode.
+  multi-thread consumer sync mode. For more information please refer to:
+  :ref:`Ring_Library_MPMC_Mode`.
 
 - ``ring_sp_sc``
 
   Underlying **rte_ring** operates in single-thread producer,
-  single-thread consumer sync mode.
+  single-thread consumer sync mode. For more information please refer to:
+  :ref:`Ring_Library_SPSC_Mode`.
 
 - ``ring_sp_mc``
 
@@ -29,6 +31,22 @@ and can be selected via mempool ops API:
   Underlying **rte_ring** operates in multi-thread producer,
   single-thread consumer sync mode.
 
+- ``ring_mt_rts``
+
+  For underlying **rte_ring** both producer and consumer operate in
+  multi-thread Relaxed Tail Sync (RTS) mode. For more information please
+  refer to: :ref:`Ring_Library_MT_RTS_Mode`.
+
+- ``ring_mt_hts``
+
+  For underlying **rte_ring** both producer and consumer operate in
+  multi-thread Head-Tail Sync (HTS) mode. For more information please
+  refer to: :ref:`Ring_Library_MT_HTS_Mode`.
+
+For 'classic' DPDK deployments (with one thread per core) ``ring_mp_mc``
+mode is usually the most suitable and the fastest one. For overcommitted
+scenarios (multiple threads share same set of cores) ``ring_mt_rts`` or
+``ring_mt_hts`` usually provide a better alternative.
 
 For more information about ``rte_ring`` structure, behaviour and available
 synchronisation modes please refer to: :doc:`../prog_guide/ring_lib`.
diff --git a/doc/guides/prog_guide/ring_lib.rst b/doc/guides/prog_guide/ring_lib.rst
index f0a5a78b0..895484d95 100644
--- a/doc/guides/prog_guide/ring_lib.rst
+++ b/doc/guides/prog_guide/ring_lib.rst
@@ -359,6 +359,8 @@ That should help users to configure ring in the most suitable way for his
 specific usage scenarios.
 Currently supported modes:
 
+.. _Ring_Library_MPMC_Mode:
+
 MP/MC (default one)
 ~~~~~~~~~~~~~~~~~~~
 
@@ -369,11 +371,15 @@ per core) this is usually the most suitable and fastest synchronization mode.
 As a well known limitation - it can perform quite pure on some overcommitted
 scenarios.
 
+.. _Ring_Library_SPSC_Mode:
+
 SP/SC
 ~~~~~
 Single-producer (/single-consumer) mode.
 In this mode only one thread at a time is allowed to enqueue (/dequeue)
 objects to (/from) the ring.
 
+.. _Ring_Library_MT_RTS_Mode:
+
 MP_RTS/MC_RTS
 ~~~~~~~~~~~~~
 
@@ -390,6 +396,8 @@ one for head update, second for tail update.
 In comparison the original MP/MC algorithm requires one 32-bit CAS for head
 update and waiting/spinning on tail value.
 
+.. _Ring_Library_MT_HTS_Mode:
+
 MP_HTS/MC_HTS
 ~~~~~~~~~~~~~
diff --git a/doc/guides/rel_notes/release_20_08.rst b/doc/guides/rel_notes/release_20_08.rst
index 17d70e7c1..db25c6f9c 100644
--- a/doc/guides/rel_notes/release_20_08.rst
+++ b/doc/guides/rel_notes/release_20_08.rst
@@ -69,6 +69,12 @@ New Features
   barriers. rte_*mb APIs, for ARMv8 platforms, are changed to use DMB
   instruction to reflect this.
+* **Added support for new sync modes into mempool ring driver.**
+
+  Added ability to select new ring synchronisation modes:
+  ``relaxed tail sync (ring_mt_rts)`` and ``head/tail sync (ring_mt_hts)``
+  via mempool ops API.
+
 * **Added the support for vfio-pci new VF token interface.**
 
   From Linux 5.7, vfio-pci supports to bind both SR-IOV PF and the created
   VFs,
diff --git a/drivers/mempool/ring/rte_mempool_ring.c b/drivers/mempool/ring/rte_mempool_ring.c
index bc123fc52..b1f09ff28 100644
--- a/drivers/mempool/ring/rte_mempool_ring.c
+++ b/drivers/mempool/ring/rte_mempool_ring.c
@@ -25,6 +25,22 @@ common_ring_sp_enqueue(struct rte_mempool *mp, void * const *obj_table,
 		obj_table, n, NULL) == 0 ? -ENOBUFS : 0;
 }
 
+static int
+rts_ring_mp_enqueue(struct rte_mempool *mp, void * const *obj_table,
+	unsigned int n)
+{
+	return rte_ring_mp_rts_enqueue_bulk(mp->pool_data,
+			obj_table, n, NULL) == 0 ? -ENOBUFS : 0;
+}
+
+static int
+hts_ring_mp_enqueue(struct rte_mempool *mp, void * const *obj_table,
+	unsigned int n)
+{
+	return rte_ring_mp_hts_enqueue_bulk(mp->pool_data,
+			obj_table, n, NULL) == 0 ? -ENOBUFS : 0;
+}
+
 static int
 common_ring_mc_dequeue(struct rte_mempool *mp, void **obj_table, unsigned n)
 {
@@ -39,17 +55,30 @@ common_ring_sc_dequeue(struct rte_mempool *mp, void **obj_table, unsigned n)
 		obj_table, n, NULL) == 0 ? -ENOBUFS : 0;
 }
 
+static int
+rts_ring_mc_dequeue(struct rte_mempool *mp, void **obj_table, unsigned int n)
+{
+	return rte_ring_mc_rts_dequeue_bulk(mp->pool_data,
+			obj_table, n, NULL) == 0 ? -ENOBUFS : 0;
+}
+
+static int
+hts_ring_mc_dequeue(struct rte_mempool *mp, void **obj_table, unsigned int n)
+{
+	return rte_ring_mc_hts_dequeue_bulk(mp->pool_data,
+			obj_table, n, NULL) == 0 ? -ENOBUFS : 0;
+}
+
 static unsigned
 common_ring_get_count(const struct rte_mempool *mp)
 {
 	return rte_ring_count(mp->pool_data);
 }
 
-
 static int
-common_ring_alloc(struct rte_mempool *mp)
+ring_alloc(struct rte_mempool *mp, uint32_t rg_flags)
 {
-	int rg_flags = 0, ret;
+	int ret;
 	char rg_name[RTE_RING_NAMESIZE];
 	struct rte_ring *r;
 
@@ -60,12 +89,6 @@ common_ring_alloc(struct rte_mempool *mp)
 		return -rte_errno;
 	}
 
-	/* ring flags */
-	if (mp->flags & MEMPOOL_F_SP_PUT)
-		rg_flags |= RING_F_SP_ENQ;
-	if (mp->flags & MEMPOOL_F_SC_GET)
-		rg_flags |= RING_F_SC_DEQ;
-
 	/*
 	 * Allocate the ring that will be used to store objects.
 	 * Ring functions will return appropriate errors if we are
@@ -82,6 +105,31 @@ common_ring_alloc(struct rte_mempool *mp)
 	return 0;
 }
 
+static int
+common_ring_alloc(struct rte_mempool *mp)
+{
+	uint32_t rg_flags = 0;
+
+	if (mp->flags & MEMPOOL_F_SP_PUT)
+		rg_flags |= RING_F_SP_ENQ;
+	if (mp->flags & MEMPOOL_F_SC_GET)
+		rg_flags |= RING_F_SC_DEQ;
+
+	return ring_alloc(mp, rg_flags);
+}
+
+static int
+rts_ring_alloc(struct rte_mempool *mp)
+{
+	return ring_alloc(mp, RING_F_MP_RTS_ENQ | RING_F_MC_RTS_DEQ);
+}
+
+static int
+hts_ring_alloc(struct rte_mempool *mp)
+{
+	return ring_alloc(mp, RING_F_MP_HTS_ENQ | RING_F_MC_HTS_DEQ);
+}
+
 static void
 common_ring_free(struct rte_mempool *mp)
 {
@@ -130,7 +178,29 @@ static const struct rte_mempool_ops ops_sp_mc = {
 	.get_count = common_ring_get_count,
 };
 
+/* ops for mempool with ring in MT_RTS sync mode */
+static const struct rte_mempool_ops ops_mt_rts = {
+	.name = "ring_mt_rts",
+	.alloc = rts_ring_alloc,
+	.free = common_ring_free,
+	.enqueue = rts_ring_mp_enqueue,
+	.dequeue = rts_ring_mc_dequeue,
+	.get_count = common_ring_get_count,
+};
+
+/* ops for mempool with ring in MT_HTS sync mode */
+static const struct rte_mempool_ops ops_mt_hts = {
+	.name = "ring_mt_hts",
+	.alloc = hts_ring_alloc,
+	.free = common_ring_free,
+	.enqueue = hts_ring_mp_enqueue,
+	.dequeue = hts_ring_mc_dequeue,
+	.get_count = common_ring_get_count,
+};
+
 MEMPOOL_REGISTER_OPS(ops_mp_mc);
 MEMPOOL_REGISTER_OPS(ops_sp_sc);
 MEMPOOL_REGISTER_OPS(ops_mp_sc);
 MEMPOOL_REGISTER_OPS(ops_sp_mc);
+MEMPOOL_REGISTER_OPS(ops_mt_rts);
+MEMPOOL_REGISTER_OPS(ops_mt_hts);
-- 
2.17.1