From mboxrd@z Thu Jan 1 00:00:00 1970
From: Honnappa Nagarahalli <Honnappa.Nagarahalli@arm.com>
To: Konstantin Ananyev, "dev@dpdk.org"
CC: "david.marchand@redhat.com", "jielong.zjl@antfin.com", nd
Subject: Re: [dpdk-dev] [PATCH v5 3/9] ring: introduce RTS ring mode
Date: Sun, 19 Apr 2020 02:31:14 +0000
References: <20200417133639.14019-1-konstantin.ananyev@intel.com> <20200418163225.17635-1-konstantin.ananyev@intel.com> <20200418163225.17635-4-konstantin.ananyev@intel.com>
In-Reply-To: <20200418163225.17635-4-konstantin.ananyev@intel.com>
List-Id: DPDK patches and discussions

>
> Introduce relaxed tail sync (RTS) mode for MT ring synchronization.
> Aim to reduce stall times in case when ring is used on overcommitted cpus
> (multiple active threads on the same cpu).
> The main difference from original MP/MC algorithm is that tail value is
> increased not by every thread that finished enqueue/dequeue, but only by the
> last one.
> That allows threads to avoid spinning on ring tail value, leaving actual tail
> value change to the last thread in the update queue.
>
> Signed-off-by: Konstantin Ananyev

Few nits, otherwise
Acked-by: Honnappa Nagarahalli

> ---
>
> check-abi.sh reports what I believe is a false-positive about ring cons/prod
> changes. As a workaround, devtools/libabigail.abignore is updated to
> suppress *struct ring* related errors.
>
>  devtools/libabigail.abignore           |   7 +
>  lib/librte_ring/Makefile               |   4 +-
>  lib/librte_ring/meson.build            |   7 +-
>  lib/librte_ring/rte_ring.c             | 100 +++++-
>  lib/librte_ring/rte_ring.h             |  70 +++-
>  lib/librte_ring/rte_ring_core.h        |  36 +-
>  lib/librte_ring/rte_ring_elem.h        |  90 ++++-
>  lib/librte_ring/rte_ring_rts.h         | 439 +++++++++++++++++++++++++
>  lib/librte_ring/rte_ring_rts_c11_mem.h | 179 ++++++++++
>  9 files changed, 902 insertions(+), 30 deletions(-)
>  create mode 100644 lib/librte_ring/rte_ring_rts.h
>  create mode 100644 lib/librte_ring/rte_ring_rts_c11_mem.h
>
> diff --git a/devtools/libabigail.abignore b/devtools/libabigail.abignore
> index a59df8f13..cd86d89ca 100644
> --- a/devtools/libabigail.abignore
> +++ b/devtools/libabigail.abignore
> @@ -11,3 +11,10 @@
>  	type_kind = enum
>  	name = rte_crypto_asym_xform_type
>  	changed_enumerators = RTE_CRYPTO_ASYM_XFORM_TYPE_LIST_END
> +; Ignore updates of ring prod/cons
> +[suppress_type]
> +	type_kind = struct
> +	name = rte_ring
> +[suppress_type]
> +	type_kind = struct
> +	name = rte_event_ring
> diff --git a/lib/librte_ring/Makefile b/lib/librte_ring/Makefile
> index 6572768c9..04e446e37 100644
> --- a/lib/librte_ring/Makefile
> +++ b/lib/librte_ring/Makefile
> @@ -19,6 +19,8 @@ SYMLINK-$(CONFIG_RTE_LIBRTE_RING)-include := rte_ring.h \
>  	rte_ring_core.h \
>  	rte_ring_elem.h \
>  	rte_ring_generic.h \
> -	rte_ring_c11_mem.h
> +	rte_ring_c11_mem.h \
> +	rte_ring_rts.h \
> +	rte_ring_rts_c11_mem.h
>
>  include $(RTE_SDK)/mk/rte.lib.mk
> diff --git a/lib/librte_ring/meson.build b/lib/librte_ring/meson.build
> index c656781da..a95598032 100644
> --- a/lib/librte_ring/meson.build
> +++ b/lib/librte_ring/meson.build
> @@ -6,4 +6,9 @@ headers = files('rte_ring.h',
>  		'rte_ring_core.h',
>  		'rte_ring_elem.h',
>  		'rte_ring_c11_mem.h',
> -		'rte_ring_generic.h')
> +		'rte_ring_generic.h',
> +		'rte_ring_rts.h',
> +		'rte_ring_rts_c11_mem.h')
> +
> +# rte_ring_create_elem and rte_ring_get_memsize_elem are experimental
> +allow_experimental_apis = true
> diff --git a/lib/librte_ring/rte_ring.c b/lib/librte_ring/rte_ring.c
> index fa5733907..222eec0fb 100644
> --- a/lib/librte_ring/rte_ring.c
> +++ b/lib/librte_ring/rte_ring.c
> @@ -45,6 +45,9 @@ EAL_REGISTER_TAILQ(rte_ring_tailq)
>  /* true if x is a power of 2 */
>  #define POWEROF2(x) ((((x)-1) & (x)) == 0)
>
> +/* by default set head/tail distance as 1/8 of ring capacity */
> +#define HTD_MAX_DEF	8
> +
>  /* return the size of memory occupied by a ring */
>  ssize_t
>  rte_ring_get_memsize_elem(unsigned int esize, unsigned int count)
> @@ -79,11 +82,84 @@ rte_ring_get_memsize(unsigned int count)
>  	return rte_ring_get_memsize_elem(sizeof(void *), count);
>  }
>
> +/*
> + * internal helper function to reset prod/cons head-tail values.
> + */
> +static void
> +reset_headtail(void *p)

The internal functions have used __rte prefix in ring library. I think we should follow the same here.
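For instance, something along these lines (an untested sketch only; the exact name __rte_ring_reset_headtail is just an example, not something this patch already defines):

	-static void
	-reset_headtail(void *p)
	+static void
	+__rte_ring_reset_headtail(void *p)

The same idea would apply to the other new internal helpers in this file.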
> +{
> +	struct rte_ring_headtail *ht;
> +	struct rte_ring_rts_headtail *ht_rts;
> +
> +	ht = p;
> +	ht_rts = p;
> +
> +	switch (ht->sync_type) {
> +	case RTE_RING_SYNC_MT:
> +	case RTE_RING_SYNC_ST:
> +		ht->head = 0;
> +		ht->tail = 0;
> +		break;
> +	case RTE_RING_SYNC_MT_RTS:
> +		ht_rts->head.raw = 0;
> +		ht_rts->tail.raw = 0;
> +		break;
> +	default:
> +		/* unknown sync mode */
> +		RTE_ASSERT(0);
> +	}
> +}
> +
>  void
>  rte_ring_reset(struct rte_ring *r)
>  {
> -	r->prod.head = r->cons.head = 0;
> -	r->prod.tail = r->cons.tail = 0;
> +	reset_headtail(&r->prod);
> +	reset_headtail(&r->cons);
> +}
> +
> +/*
> + * helper function, calculates sync_type values for prod and cons
> + * based on input flags. Returns zero at success or negative
> + * errno value otherwise.
> + */
> +static int
> +get_sync_type(uint32_t flags, enum rte_ring_sync_type *prod_st,
> +	enum rte_ring_sync_type *cons_st)

The internal functions have used __rte prefix in ring library. I think we should follow the same here. Also, it will help avoid symbol clashes.

> +{
> +	static const uint32_t prod_st_flags =
> +		(RING_F_SP_ENQ | RING_F_MP_RTS_ENQ);
> +	static const uint32_t cons_st_flags =
> +		(RING_F_SC_DEQ | RING_F_MC_RTS_DEQ);
> +
> +	switch (flags & prod_st_flags) {
> +	case 0:
> +		*prod_st = RTE_RING_SYNC_MT;
> +		break;
> +	case RING_F_SP_ENQ:
> +		*prod_st = RTE_RING_SYNC_ST;
> +		break;
> +	case RING_F_MP_RTS_ENQ:
> +		*prod_st = RTE_RING_SYNC_MT_RTS;
> +		break;
> +	default:
> +		return -EINVAL;
> +	}
> +
> +	switch (flags & cons_st_flags) {
> +	case 0:
> +		*cons_st = RTE_RING_SYNC_MT;
> +		break;
> +	case RING_F_SC_DEQ:
> +		*cons_st = RTE_RING_SYNC_ST;
> +		break;
> +	case RING_F_MC_RTS_DEQ:
> +		*cons_st = RTE_RING_SYNC_MT_RTS;
> +		break;
> +	default:
> +		return -EINVAL;
> +	}
> +
> +	return 0;
>  }
>
>  int
> @@ -100,16 +176,20 @@ rte_ring_init(struct rte_ring *r, const char *name, unsigned count,
>  	RTE_BUILD_BUG_ON((offsetof(struct rte_ring, prod) &
>  			  RTE_CACHE_LINE_MASK) != 0);
>
> +	RTE_BUILD_BUG_ON(offsetof(struct rte_ring_headtail, sync_type) !=
> +		offsetof(struct rte_ring_rts_headtail, sync_type));
> +	RTE_BUILD_BUG_ON(offsetof(struct rte_ring_headtail, tail) !=
> +		offsetof(struct rte_ring_rts_headtail, tail.val.pos));
> +
>  	/* init the ring structure */
>  	memset(r, 0, sizeof(*r));
>  	ret = strlcpy(r->name, name, sizeof(r->name));
>  	if (ret < 0 || ret >= (int)sizeof(r->name))
>  		return -ENAMETOOLONG;
>  	r->flags = flags;
> -	r->prod.sync_type = (flags & RING_F_SP_ENQ) ?
> -		RTE_RING_SYNC_ST : RTE_RING_SYNC_MT;
> -	r->cons.sync_type = (flags & RING_F_SC_DEQ) ?
> - RTE_RING_SYNC_ST : RTE_RING_SYNC_MT; > + ret =3D get_sync_type(flags, &r->prod.sync_type, &r->cons.sync_type); > + if (ret !=3D 0) > + return ret; >=20 > if (flags & RING_F_EXACT_SZ) { > r->size =3D rte_align32pow2(count + 1); @@ -126,8 +206,12 > @@ rte_ring_init(struct rte_ring *r, const char *name, unsigned count, > r->mask =3D count - 1; > r->capacity =3D r->mask; > } > - r->prod.head =3D r->cons.head =3D 0; > - r->prod.tail =3D r->cons.tail =3D 0; > + > + /* set default values for head-tail distance */ > + if (flags & RING_F_MP_RTS_ENQ) > + rte_ring_set_prod_htd_max(r, r->capacity / HTD_MAX_DEF); > + if (flags & RING_F_MC_RTS_DEQ) > + rte_ring_set_cons_htd_max(r, r->capacity / HTD_MAX_DEF); >=20 > return 0; > } > diff --git a/lib/librte_ring/rte_ring.h b/lib/librte_ring/rte_ring.h inde= x > 35ee4491c..77f206ca7 100644 > --- a/lib/librte_ring/rte_ring.h > +++ b/lib/librte_ring/rte_ring.h > @@ -1,6 +1,6 @@ > /* SPDX-License-Identifier: BSD-3-Clause > * > - * Copyright (c) 2010-2017 Intel Corporation > + * Copyright (c) 2010-2020 Intel Corporation > * Copyright (c) 2007-2009 Kip Macy kmacy@freebsd.org > * All rights reserved. > * Derived from FreeBSD's bufring.h > @@ -389,8 +389,21 @@ static __rte_always_inline unsigned int > rte_ring_enqueue_bulk(struct rte_ring *r, void * const *obj_table, > unsigned int n, unsigned int *free_space) { > - return __rte_ring_do_enqueue(r, obj_table, n, > RTE_RING_QUEUE_FIXED, > - r->prod.sync_type, free_space); > + switch (r->prod.sync_type) { > + case RTE_RING_SYNC_MT: > + return rte_ring_mp_enqueue_bulk(r, obj_table, n, free_space); > + case RTE_RING_SYNC_ST: > + return rte_ring_sp_enqueue_bulk(r, obj_table, n, free_space); > #ifdef > +ALLOW_EXPERIMENTAL_API > + case RTE_RING_SYNC_MT_RTS: > + return rte_ring_mp_rts_enqueue_bulk(r, obj_table, n, > + free_space); > +#endif > + } > + > + /* valid ring should never reach this point */ > + RTE_ASSERT(0); > + return 0; > } >=20 > /** > @@ -524,8 +537,20 @@ static __rte_always_inline unsigned int > rte_ring_dequeue_bulk(struct rte_ring *r, void **obj_table, unsigned int = n, > unsigned int *available) > { > - return __rte_ring_do_dequeue(r, obj_table, n, > RTE_RING_QUEUE_FIXED, > - r->cons.sync_type, available); > + switch (r->cons.sync_type) { > + case RTE_RING_SYNC_MT: > + return rte_ring_mc_dequeue_bulk(r, obj_table, n, available); > + case RTE_RING_SYNC_ST: > + return rte_ring_sc_dequeue_bulk(r, obj_table, n, available); > #ifdef > +ALLOW_EXPERIMENTAL_API > + case RTE_RING_SYNC_MT_RTS: > + return rte_ring_mc_rts_dequeue_bulk(r, obj_table, n, > available); > +#endif > + } > + > + /* valid ring should never reach this point */ > + RTE_ASSERT(0); > + return 0; > } >=20 > /** > @@ -845,8 +870,21 @@ static __rte_always_inline unsigned > rte_ring_enqueue_burst(struct rte_ring *r, void * const *obj_table, > unsigned int n, unsigned int *free_space) { > - return __rte_ring_do_enqueue(r, obj_table, n, > RTE_RING_QUEUE_VARIABLE, > - r->prod.sync_type, free_space); > + switch (r->prod.sync_type) { > + case RTE_RING_SYNC_MT: > + return rte_ring_mp_enqueue_burst(r, obj_table, n, > free_space); > + case RTE_RING_SYNC_ST: > + return rte_ring_sp_enqueue_burst(r, obj_table, n, free_space); > #ifdef > +ALLOW_EXPERIMENTAL_API > + case RTE_RING_SYNC_MT_RTS: > + return rte_ring_mp_rts_enqueue_burst(r, obj_table, n, > + free_space); > +#endif > + } > + > + /* valid ring should never reach this point */ > + RTE_ASSERT(0); > + return 0; > } >=20 > /** > @@ -925,9 +963,21 @@ static __rte_always_inline unsigned > 
rte_ring_dequeue_burst(struct rte_ring *r, void **obj_table,
> 		unsigned int n, unsigned int *available)
>  {
> -	return __rte_ring_do_dequeue(r, obj_table, n,
> -			RTE_RING_QUEUE_VARIABLE,
> -			r->cons.sync_type, available);
> +	switch (r->cons.sync_type) {
> +	case RTE_RING_SYNC_MT:
> +		return rte_ring_mc_dequeue_burst(r, obj_table, n, available);
> +	case RTE_RING_SYNC_ST:
> +		return rte_ring_sc_dequeue_burst(r, obj_table, n, available);
> +#ifdef ALLOW_EXPERIMENTAL_API
> +	case RTE_RING_SYNC_MT_RTS:
> +		return rte_ring_mc_rts_dequeue_burst(r, obj_table, n,
> +				available);
> +#endif
> +	}
> +
> +	/* valid ring should never reach this point */
> +	RTE_ASSERT(0);
> +	return 0;
>  }
>
>  #ifdef __cplusplus
> diff --git a/lib/librte_ring/rte_ring_core.h b/lib/librte_ring/rte_ring_core.h
> index d9cef763f..ded0fa0b7 100644
> --- a/lib/librte_ring/rte_ring_core.h
> +++ b/lib/librte_ring/rte_ring_core.h
> @@ -57,6 +57,9 @@ enum rte_ring_queue_behavior {
>  enum rte_ring_sync_type {
>  	RTE_RING_SYNC_MT,     /**< multi-thread safe (default mode) */
>  	RTE_RING_SYNC_ST,     /**< single thread only */
> +#ifdef ALLOW_EXPERIMENTAL_API
> +	RTE_RING_SYNC_MT_RTS, /**< multi-thread relaxed tail sync */

These need to be documented in rte_ring_init, rte_ring_create, rte_ring_create_elem API comments.
Also, please check if you want to update the file description in rte_ring.h in brief to capture the new features.

>  #endif
>  };
>
>  /**
> @@ -76,6 +79,22 @@ struct rte_ring_headtail {
>  	};
>  };
>
> +union rte_ring_rts_poscnt {

I think this is internal structure, prefix can be __rte

> +	/** raw 8B value to read/write *cnt* and *pos* as one atomic op */
> +	uint64_t raw __rte_aligned(8);
> +	struct {
> +		uint32_t cnt; /**< head/tail reference counter */
> +		uint32_t pos; /**< head/tail position */
> +	} val;
> +};
> +
> +struct rte_ring_rts_headtail {

Same here, the prefix can be __rte

> +	volatile union rte_ring_rts_poscnt tail;
> +	enum rte_ring_sync_type sync_type; /**< sync type of prod/cons */
> +	uint32_t htd_max; /**< max allowed distance between head/tail */
> +	volatile union rte_ring_rts_poscnt head;
> +};
> +
>  /**
>   * An RTE ring structure.
>   *
> @@ -104,11 +123,21 @@ struct rte_ring {
>  	char pad0 __rte_cache_aligned; /**< empty cache line */
>
>  	/** Ring producer status. */
> -	struct rte_ring_headtail prod __rte_cache_aligned;
> +	RTE_STD_C11
> +	union {
> +		struct rte_ring_headtail prod;
> +		struct rte_ring_rts_headtail rts_prod;
> +	} __rte_cache_aligned;
> +
>  	char pad1 __rte_cache_aligned; /**< empty cache line */
>
>  	/** Ring consumer status. */
> -	struct rte_ring_headtail cons __rte_cache_aligned;
> +	RTE_STD_C11
> +	union {
> +		struct rte_ring_headtail cons;
> +		struct rte_ring_rts_headtail rts_cons;
> +	} __rte_cache_aligned;
> +
>  	char pad2 __rte_cache_aligned; /**< empty cache line */
>  };
>
> @@ -125,6 +154,9 @@ struct rte_ring {
>  #define RING_F_EXACT_SZ 0x0004
>  #define RTE_RING_SZ_MASK  (0x7fffffffU) /**< Ring size mask */
>
> +#define RING_F_MP_RTS_ENQ 0x0008 /**< The default enqueue is "MP RTS". */
> +#define RING_F_MC_RTS_DEQ 0x0010 /**< The default dequeue is "MC RTS".
*/ > + > #ifdef __cplusplus > } > #endif > diff --git a/lib/librte_ring/rte_ring_elem.h b/lib/librte_ring/rte_ring_e= lem.h > index 7406c0b0f..6da0a917b 100644 > --- a/lib/librte_ring/rte_ring_elem.h > +++ b/lib/librte_ring/rte_ring_elem.h > @@ -528,6 +528,10 @@ rte_ring_sp_enqueue_bulk_elem(struct rte_ring *r, > const void *obj_table, > RTE_RING_QUEUE_FIXED, RTE_RING_SYNC_ST, > free_space); } >=20 > +#ifdef ALLOW_EXPERIMENTAL_API > +#include > +#endif > + > /** > * Enqueue several objects on a ring. > * > @@ -557,6 +561,26 @@ rte_ring_enqueue_bulk_elem(struct rte_ring *r, > const void *obj_table, { > return __rte_ring_do_enqueue_elem(r, obj_table, esize, n, > RTE_RING_QUEUE_FIXED, r->prod.sync_type, > free_space); > + > + switch (r->prod.sync_type) { > + case RTE_RING_SYNC_MT: > + return rte_ring_mp_enqueue_bulk_elem(r, obj_table, esize, n, > + free_space); > + case RTE_RING_SYNC_ST: > + return rte_ring_sp_enqueue_bulk_elem(r, obj_table, esize, n, > + free_space); > +#ifdef ALLOW_EXPERIMENTAL_API > + case RTE_RING_SYNC_MT_RTS: > + return rte_ring_mp_rts_enqueue_bulk_elem(r, obj_table, > esize, n, > + free_space); > +#endif > + } > + > + /* valid ring should never reach this point */ > + RTE_ASSERT(0); > + if (free_space !=3D NULL) > + *free_space =3D 0; > + return 0; > } >=20 > /** > @@ -661,7 +685,7 @@ rte_ring_mc_dequeue_bulk_elem(struct rte_ring *r, > void *obj_table, > unsigned int esize, unsigned int n, unsigned int *available) { > return __rte_ring_do_dequeue_elem(r, obj_table, esize, n, > - RTE_RING_QUEUE_FIXED, > RTE_RING_SYNC_MT, available); > + RTE_RING_QUEUE_FIXED, RTE_RING_SYNC_MT, > available); > } >=20 > /** > @@ -719,8 +743,25 @@ static __rte_always_inline unsigned int > rte_ring_dequeue_bulk_elem(struct rte_ring *r, void *obj_table, > unsigned int esize, unsigned int n, unsigned int *available) { > - return __rte_ring_do_dequeue_elem(r, obj_table, esize, n, > - RTE_RING_QUEUE_FIXED, r->cons.sync_type, > available); > + switch (r->cons.sync_type) { > + case RTE_RING_SYNC_MT: > + return rte_ring_mc_dequeue_bulk_elem(r, obj_table, esize, n, > + available); > + case RTE_RING_SYNC_ST: > + return rte_ring_sc_dequeue_bulk_elem(r, obj_table, esize, n, > + available); > +#ifdef ALLOW_EXPERIMENTAL_API > + case RTE_RING_SYNC_MT_RTS: > + return rte_ring_mc_rts_dequeue_bulk_elem(r, obj_table, > esize, > + n, available); > +#endif > + } > + > + /* valid ring should never reach this point */ > + RTE_ASSERT(0); > + if (available !=3D NULL) > + *available =3D 0; > + return 0; > } >=20 > /** > @@ -887,8 +928,25 @@ static __rte_always_inline unsigned > rte_ring_enqueue_burst_elem(struct rte_ring *r, const void *obj_table, > unsigned int esize, unsigned int n, unsigned int *free_space) { > - return __rte_ring_do_enqueue_elem(r, obj_table, esize, n, > - RTE_RING_QUEUE_VARIABLE, r->prod.sync_type, > free_space); > + switch (r->prod.sync_type) { > + case RTE_RING_SYNC_MT: > + return rte_ring_mp_enqueue_burst_elem(r, obj_table, esize, > n, > + free_space); > + case RTE_RING_SYNC_ST: > + return rte_ring_sp_enqueue_burst_elem(r, obj_table, esize, n, > + free_space); > +#ifdef ALLOW_EXPERIMENTAL_API > + case RTE_RING_SYNC_MT_RTS: > + return rte_ring_mp_rts_enqueue_burst_elem(r, obj_table, > esize, > + n, free_space); > +#endif > + } > + > + /* valid ring should never reach this point */ > + RTE_ASSERT(0); > + if (free_space !=3D NULL) > + *free_space =3D 0; > + return 0; > } >=20 > /** > @@ -979,9 +1037,25 @@ static __rte_always_inline unsigned int > rte_ring_dequeue_burst_elem(struct rte_ring *r, 
void *obj_table, > unsigned int esize, unsigned int n, unsigned int *available) { > - return __rte_ring_do_dequeue_elem(r, obj_table, esize, n, > - RTE_RING_QUEUE_VARIABLE, > - r->cons.sync_type, available); > + switch (r->cons.sync_type) { > + case RTE_RING_SYNC_MT: > + return rte_ring_mc_dequeue_burst_elem(r, obj_table, esize, n, > + available); > + case RTE_RING_SYNC_ST: > + return rte_ring_sc_dequeue_burst_elem(r, obj_table, esize, n, > + available); > +#ifdef ALLOW_EXPERIMENTAL_API > + case RTE_RING_SYNC_MT_RTS: > + return rte_ring_mc_rts_dequeue_burst_elem(r, obj_table, > esize, > + n, available); > +#endif > + } > + > + /* valid ring should never reach this point */ > + RTE_ASSERT(0); > + if (available !=3D NULL) > + *available =3D 0; > + return 0; > } >=20 > #include > diff --git a/lib/librte_ring/rte_ring_rts.h b/lib/librte_ring/rte_ring_rt= s.h new > file mode 100644 index 000000000..8ced07096 > --- /dev/null > +++ b/lib/librte_ring/rte_ring_rts.h > @@ -0,0 +1,439 @@ > +/* SPDX-License-Identifier: BSD-3-Clause > + * > + * Copyright (c) 2010-2020 Intel Corporation > + * Copyright (c) 2007-2009 Kip Macy kmacy@freebsd.org > + * All rights reserved. > + * Derived from FreeBSD's bufring.h > + * Used as BSD-3 Licensed with permission from Kip Macy. > + */ > + > +#ifndef _RTE_RING_RTS_H_ > +#define _RTE_RING_RTS_H_ > + > +/** > + * @file rte_ring_rts.h > + * @b EXPERIMENTAL: this API may change without prior notice > + * It is not recommended to include this file directly. > + * Please include instead. > + * > + * Contains functions for Relaxed Tail Sync (RTS) ring mode. > + * The main idea remains the same as for our original MP/MC > +synchronization > + * mechanism. > + * The main difference is that tail value is increased not > + * by every thread that finished enqueue/dequeue, > + * but only by the current last one doing enqueue/dequeue. > + * That allows threads to skip spinning on tail value, > + * leaving actual tail value change to last thread at a given instance. > + * RTS requires 2 64-bit CAS for each enqueue(/dequeue) operation: > + * one for head update, second for tail update. > + * As a gain it allows thread to avoid spinning/waiting on tail value. > + * In comparision original MP/MC algorithm requires one 32-bit CAS > + * for head update and waiting/spinning on tail value. > + * > + * Brief outline: > + * - introduce update counter (cnt) for both head and tail. > + * - increment head.cnt for each head.value update > + * - write head.value and head.cnt atomically (64-bit CAS) > + * - move tail.value ahead only when tail.cnt + 1 =3D=3D head.cnt > + * (indicating that this is the last thread updating the tail) > + * - increment tail.cnt when each enqueue/dequeue op finishes > + * (no matter if tail.value going to change or not) > + * - write tail.value and tail.cnt atomically (64-bit CAS) > + * > + * To avoid producer/consumer starvation: > + * - limit max allowed distance between head and tail value (HTD_MAX). > + * I.E. thread is allowed to proceed with changing head.value, > + * only when: head.value - tail.value <=3D HTD_MAX > + * HTD_MAX is an optional parameter. > + * With HTD_MAX =3D=3D 0 we'll have fully serialized ring - > + * i.e. only one thread at a time will be able to enqueue/dequeue > + * to/from the ring. > + * With HTD_MAX >=3D ring.capacity - no limitation. > + * By default HTD_MAX =3D=3D ring.capacity / 8. > + */ > + > +#ifdef __cplusplus > +extern "C" { > +#endif > + > +#include > + > +/** > + * @internal Enqueue several objects on the RTS ring. 
> + * > + * @param r > + * A pointer to the ring structure. > + * @param obj_table > + * A pointer to a table of objects. > + * @param esize > + * The size of ring element, in bytes. It must be a multiple of 4. > + * This must be the same value used while creating the ring. Otherwise > + * the results are undefined. > + * @param n > + * The number of objects to add in the ring from the obj_table. > + * @param behavior > + * RTE_RING_QUEUE_FIXED: Enqueue a fixed number of items from a rin= g > + * RTE_RING_QUEUE_VARIABLE: Enqueue as many items as possible from > ring > + * @param free_space > + * returns the amount of space after the enqueue operation has finishe= d > + * @return > + * Actual number of objects enqueued. > + * If behavior =3D=3D RTE_RING_QUEUE_FIXED, this will be 0 or n only. > + */ > +static __rte_always_inline unsigned int > +__rte_ring_do_rts_enqueue_elem(struct rte_ring *r, const void *obj_table= , > + uint32_t esize, uint32_t n, enum rte_ring_queue_behavior behavior, > + uint32_t *free_space) > +{ > + uint32_t free, head; > + > + n =3D __rte_ring_rts_move_prod_head(r, n, behavior, &head, &free); > + > + if (n !=3D 0) { > + __rte_ring_enqueue_elems(r, head, obj_table, esize, n); > + __rte_ring_rts_update_tail(&r->rts_prod); > + } > + > + if (free_space !=3D NULL) > + *free_space =3D free - n; > + return n; > +} > + > +/** > + * @internal Dequeue several objects from the RTS ring. > + * > + * @param r > + * A pointer to the ring structure. > + * @param obj_table > + * A pointer to a table of objects. > + * @param esize > + * The size of ring element, in bytes. It must be a multiple of 4. > + * This must be the same value used while creating the ring. Otherwise > + * the results are undefined. > + * @param n > + * The number of objects to pull from the ring. > + * @param behavior > + * RTE_RING_QUEUE_FIXED: Dequeue a fixed number of items from a > ring > + * RTE_RING_QUEUE_VARIABLE: Dequeue as many items as possible from > ring > + * @param available > + * returns the number of remaining ring entries after the dequeue has > finished > + * @return > + * - Actual number of objects dequeued. > + * If behavior =3D=3D RTE_RING_QUEUE_FIXED, this will be 0 or n only= . > + */ > +static __rte_always_inline unsigned int > +__rte_ring_do_rts_dequeue_elem(struct rte_ring *r, void *obj_table, > + uint32_t esize, uint32_t n, enum rte_ring_queue_behavior behavior, > + uint32_t *available) > +{ > + uint32_t entries, head; > + > + n =3D __rte_ring_rts_move_cons_head(r, n, behavior, &head, &entries); > + > + if (n !=3D 0) { > + __rte_ring_dequeue_elems(r, head, obj_table, esize, n); > + __rte_ring_rts_update_tail(&r->rts_cons); > + } > + > + if (available !=3D NULL) > + *available =3D entries - n; > + return n; > +} > + > +/** > + * Enqueue several objects on the RTS ring (multi-producers safe). > + * > + * @param r > + * A pointer to the ring structure. > + * @param obj_table > + * A pointer to a table of objects. > + * @param esize > + * The size of ring element, in bytes. It must be a multiple of 4. > + * This must be the same value used while creating the ring. Otherwise > + * the results are undefined. > + * @param n > + * The number of objects to add in the ring from the obj_table. > + * @param free_space > + * if non-NULL, returns the amount of space in the ring after the > + * enqueue operation has finished. 
> + * @return > + * The number of objects enqueued, either 0 or n > + */ > +__rte_experimental > +static __rte_always_inline unsigned int > +rte_ring_mp_rts_enqueue_bulk_elem(struct rte_ring *r, const void > *obj_table, > + unsigned int esize, unsigned int n, unsigned int *free_space) { > + return __rte_ring_do_rts_enqueue_elem(r, obj_table, esize, n, > + RTE_RING_QUEUE_FIXED, free_space); > +} > + > +/** > + * Dequeue several objects from an RTS ring (multi-consumers safe). > + * > + * @param r > + * A pointer to the ring structure. > + * @param obj_table > + * A pointer to a table of objects that will be filled. > + * @param esize > + * The size of ring element, in bytes. It must be a multiple of 4. > + * This must be the same value used while creating the ring. Otherwise > + * the results are undefined. > + * @param n > + * The number of objects to dequeue from the ring to the obj_table. > + * @param available > + * If non-NULL, returns the number of remaining ring entries after the > + * dequeue has finished. > + * @return > + * The number of objects dequeued, either 0 or n > + */ > +__rte_experimental > +static __rte_always_inline unsigned int > +rte_ring_mc_rts_dequeue_bulk_elem(struct rte_ring *r, void *obj_table, > + unsigned int esize, unsigned int n, unsigned int *available) { > + return __rte_ring_do_rts_dequeue_elem(r, obj_table, esize, n, > + RTE_RING_QUEUE_FIXED, available); > +} > + > +/** > + * Enqueue several objects on the RTS ring (multi-producers safe). > + * > + * @param r > + * A pointer to the ring structure. > + * @param obj_table > + * A pointer to a table of objects. > + * @param esize > + * The size of ring element, in bytes. It must be a multiple of 4. > + * This must be the same value used while creating the ring. Otherwise > + * the results are undefined. > + * @param n > + * The number of objects to add in the ring from the obj_table. > + * @param free_space > + * if non-NULL, returns the amount of space in the ring after the > + * enqueue operation has finished. > + * @return > + * - n: Actual number of objects enqueued. > + */ > +__rte_experimental > +static __rte_always_inline unsigned > +rte_ring_mp_rts_enqueue_burst_elem(struct rte_ring *r, const void > *obj_table, > + unsigned int esize, unsigned int n, unsigned int *free_space) { > + return __rte_ring_do_rts_enqueue_elem(r, obj_table, esize, n, > + RTE_RING_QUEUE_VARIABLE, free_space); } > + > +/** > + * Dequeue several objects from an RTS ring (multi-consumers safe). > + * When the requested objects are more than the available objects, > + * only dequeue the actual number of objects. > + * > + * @param r > + * A pointer to the ring structure. > + * @param obj_table > + * A pointer to a table of objects that will be filled. > + * @param esize > + * The size of ring element, in bytes. It must be a multiple of 4. > + * This must be the same value used while creating the ring. Otherwise > + * the results are undefined. > + * @param n > + * The number of objects to dequeue from the ring to the obj_table. > + * @param available > + * If non-NULL, returns the number of remaining ring entries after the > + * dequeue has finished. 
> + * @return > + * - n: Actual number of objects dequeued, 0 if ring is empty > + */ > +__rte_experimental > +static __rte_always_inline unsigned > +rte_ring_mc_rts_dequeue_burst_elem(struct rte_ring *r, void *obj_table, > + unsigned int esize, unsigned int n, unsigned int *available) { > + return __rte_ring_do_rts_dequeue_elem(r, obj_table, esize, n, > + RTE_RING_QUEUE_VARIABLE, available); } > + > +/** > + * Enqueue several objects on the RTS ring (multi-producers safe). > + * > + * @param r > + * A pointer to the ring structure. > + * @param obj_table > + * A pointer to a table of void * pointers (objects). > + * @param n > + * The number of objects to add in the ring from the obj_table. > + * @param free_space > + * if non-NULL, returns the amount of space in the ring after the > + * enqueue operation has finished. > + * @return > + * The number of objects enqueued, either 0 or n > + */ > +__rte_experimental > +static __rte_always_inline unsigned int > +rte_ring_mp_rts_enqueue_bulk(struct rte_ring *r, void * const *obj_table= , > + unsigned int n, unsigned int *free_space) { > + return rte_ring_mp_rts_enqueue_bulk_elem(r, obj_table, > + sizeof(uintptr_t), n, free_space); > +} > + > +/** > + * Dequeue several objects from an RTS ring (multi-consumers safe). > + * > + * @param r > + * A pointer to the ring structure. > + * @param obj_table > + * A pointer to a table of void * pointers (objects) that will be fill= ed. > + * @param n > + * The number of objects to dequeue from the ring to the obj_table. > + * @param available > + * If non-NULL, returns the number of remaining ring entries after the > + * dequeue has finished. > + * @return > + * The number of objects dequeued, either 0 or n > + */ > +__rte_experimental > +static __rte_always_inline unsigned int > +rte_ring_mc_rts_dequeue_bulk(struct rte_ring *r, void **obj_table, > + unsigned int n, unsigned int *available) { > + return rte_ring_mc_rts_dequeue_bulk_elem(r, obj_table, > + sizeof(uintptr_t), n, available); > +} > + > +/** > + * Enqueue several objects on the RTS ring (multi-producers safe). > + * > + * @param r > + * A pointer to the ring structure. > + * @param obj_table > + * A pointer to a table of void * pointers (objects). > + * @param n > + * The number of objects to add in the ring from the obj_table. > + * @param free_space > + * if non-NULL, returns the amount of space in the ring after the > + * enqueue operation has finished. > + * @return > + * - n: Actual number of objects enqueued. > + */ > +__rte_experimental > +static __rte_always_inline unsigned > +rte_ring_mp_rts_enqueue_burst(struct rte_ring *r, void * const *obj_tabl= e, > + unsigned int n, unsigned int *free_space) { > + return rte_ring_mp_rts_enqueue_burst_elem(r, obj_table, > + sizeof(uintptr_t), n, free_space); > +} > + > +/** > + * Dequeue several objects from an RTS ring (multi-consumers safe). > + * When the requested objects are more than the available objects, > + * only dequeue the actual number of objects. > + * > + * @param r > + * A pointer to the ring structure. > + * @param obj_table > + * A pointer to a table of void * pointers (objects) that will be fill= ed. > + * @param n > + * The number of objects to dequeue from the ring to the obj_table. > + * @param available > + * If non-NULL, returns the number of remaining ring entries after the > + * dequeue has finished. 
> + * @return > + * - n: Actual number of objects dequeued, 0 if ring is empty > + */ > +__rte_experimental > +static __rte_always_inline unsigned > +rte_ring_mc_rts_dequeue_burst(struct rte_ring *r, void **obj_table, > + unsigned int n, unsigned int *available) { > + return rte_ring_mc_rts_dequeue_burst_elem(r, obj_table, > + sizeof(uintptr_t), n, available); > +} > + > +/** > + * Return producer max Head-Tail-Distance (HTD). > + * > + * @param r > + * A pointer to the ring structure. > + * @return > + * Producer HTD value, if producer is set in appropriate sync mode, > + * or UINT32_MAX otherwise. > + */ > +__rte_experimental > +static inline uint32_t > +rte_ring_get_prod_htd_max(const struct rte_ring *r) { > + if (r->prod.sync_type =3D=3D RTE_RING_SYNC_MT_RTS) > + return r->rts_prod.htd_max; > + return UINT32_MAX; > +} > + > +/** > + * Set producer max Head-Tail-Distance (HTD). > + * Note that producer has to use appropriate sync mode (RTS). > + * > + * @param r > + * A pointer to the ring structure. > + * @param v > + * new HTD value to setup. > + * @return > + * Zero on success, or negative error code otherwise. > + */ > +__rte_experimental > +static inline int > +rte_ring_set_prod_htd_max(struct rte_ring *r, uint32_t v) { > + if (r->prod.sync_type !=3D RTE_RING_SYNC_MT_RTS) > + return -ENOTSUP; > + > + r->rts_prod.htd_max =3D v; > + return 0; > +} > + > +/** > + * Return consumer max Head-Tail-Distance (HTD). > + * > + * @param r > + * A pointer to the ring structure. > + * @return > + * Consumer HTD value, if consumer is set in appropriate sync mode, > + * or UINT32_MAX otherwise. > + */ > +__rte_experimental > +static inline uint32_t > +rte_ring_get_cons_htd_max(const struct rte_ring *r) { > + if (r->cons.sync_type =3D=3D RTE_RING_SYNC_MT_RTS) > + return r->rts_cons.htd_max; > + return UINT32_MAX; > +} > + > +/** > + * Set consumer max Head-Tail-Distance (HTD). > + * Note that consumer has to use appropriate sync mode (RTS). > + * > + * @param r > + * A pointer to the ring structure. > + * @param v > + * new HTD value to setup. > + * @return > + * Zero on success, or negative error code otherwise. > + */ > +__rte_experimental > +static inline int > +rte_ring_set_cons_htd_max(struct rte_ring *r, uint32_t v) { > + if (r->cons.sync_type !=3D RTE_RING_SYNC_MT_RTS) > + return -ENOTSUP; > + > + r->rts_cons.htd_max =3D v; > + return 0; > +} > + > +#ifdef __cplusplus > +} > +#endif > + > +#endif /* _RTE_RING_RTS_H_ */ > diff --git a/lib/librte_ring/rte_ring_rts_c11_mem.h > b/lib/librte_ring/rte_ring_rts_c11_mem.h > new file mode 100644 > index 000000000..9f26817c0 > --- /dev/null > +++ b/lib/librte_ring/rte_ring_rts_c11_mem.h > @@ -0,0 +1,179 @@ > +/* SPDX-License-Identifier: BSD-3-Clause > + * > + * Copyright (c) 2010-2020 Intel Corporation > + * Copyright (c) 2007-2009 Kip Macy kmacy@freebsd.org > + * All rights reserved. > + * Derived from FreeBSD's bufring.h > + * Used as BSD-3 Licensed with permission from Kip Macy. > + */ > + > +#ifndef _RTE_RING_RTS_C11_MEM_H_ > +#define _RTE_RING_RTS_C11_MEM_H_ > + > +/** > + * @file rte_ring_rts_c11_mem.h > + * It is not recommended to include this file directly, > + * include instead. > + * Contains internal helper functions for Relaxed Tail Sync (RTS) ring m= ode. > + * For more information please refer to . > + */ > + > +/** > + * @internal This function updates tail values. 
> + */ > +static __rte_always_inline void > +__rte_ring_rts_update_tail(struct rte_ring_rts_headtail *ht) { > + union rte_ring_rts_poscnt h, ot, nt; > + > + /* > + * If there are other enqueues/dequeues in progress that > + * might preceded us, then don't update tail with new value. > + */ > + > + ot.raw =3D __atomic_load_n(&ht->tail.raw, __ATOMIC_ACQUIRE); > + > + do { > + /* on 32-bit systems we have to do atomic read here */ > + h.raw =3D __atomic_load_n(&ht->head.raw, > __ATOMIC_RELAXED); > + > + nt.raw =3D ot.raw; > + if (++nt.val.cnt =3D=3D h.val.cnt) > + nt.val.pos =3D h.val.pos; > + > + } while (__atomic_compare_exchange_n(&ht->tail.raw, &ot.raw, > nt.raw, > + 0, __ATOMIC_RELEASE, __ATOMIC_ACQUIRE) =3D=3D 0); } > + > +/** > + * @internal This function waits till head/tail distance wouldn't > + * exceed pre-defined max value. > + */ > +static __rte_always_inline void > +__rte_ring_rts_head_wait(const struct rte_ring_rts_headtail *ht, > + union rte_ring_rts_poscnt *h) > +{ > + uint32_t max; > + > + max =3D ht->htd_max; > + > + while (h->val.pos - ht->tail.val.pos > max) { > + rte_pause(); > + h->raw =3D __atomic_load_n(&ht->head.raw, > __ATOMIC_ACQUIRE); > + } > +} > + > +/** > + * @internal This function updates the producer head for enqueue. > + */ > +static __rte_always_inline uint32_t > +__rte_ring_rts_move_prod_head(struct rte_ring *r, uint32_t num, > + enum rte_ring_queue_behavior behavior, uint32_t *old_head, > + uint32_t *free_entries) > +{ > + uint32_t n; > + union rte_ring_rts_poscnt nh, oh; > + > + const uint32_t capacity =3D r->capacity; > + > + oh.raw =3D __atomic_load_n(&r->rts_prod.head.raw, > __ATOMIC_ACQUIRE); > + > + do { > + /* Reset n to the initial burst count */ > + n =3D num; > + > + /* > + * wait for prod head/tail distance, > + * make sure that we read prod head *before* > + * reading cons tail. > + */ > + __rte_ring_rts_head_wait(&r->rts_prod, &oh); > + > + /* > + * The subtraction is done between two unsigned 32bits > value > + * (the result is always modulo 32 bits even if we have > + * *old_head > cons_tail). So 'free_entries' is always between > 0 > + * and capacity (which is < size). > + */ > + *free_entries =3D capacity + r->cons.tail - oh.val.pos; > + > + /* check that we have enough room in ring */ > + if (unlikely(n > *free_entries)) > + n =3D (behavior =3D=3D RTE_RING_QUEUE_FIXED) ? > + 0 : *free_entries; > + > + if (n =3D=3D 0) > + break; > + > + nh.val.pos =3D oh.val.pos + n; > + nh.val.cnt =3D oh.val.cnt + 1; > + > + /* > + * this CAS(ACQUIRE, ACQUIRE) serves as a hoist barrier to prevent: > + * - OOO reads of cons tail value > + * - OOO copy of elems to the ring > + */ > + } while (__atomic_compare_exchange_n(&r->rts_prod.head.raw, > + &oh.raw, nh.raw, > + 0, __ATOMIC_ACQUIRE, __ATOMIC_ACQUIRE) =3D=3D 0); > + > + *old_head =3D oh.val.pos; > + return n; > +} > + > +/** > + * @internal This function updates the consumer head for dequeue */ > +static __rte_always_inline unsigned int > +__rte_ring_rts_move_cons_head(struct rte_ring *r, uint32_t num, > + enum rte_ring_queue_behavior behavior, uint32_t *old_head, > + uint32_t *entries) > +{ > + uint32_t n; > + union rte_ring_rts_poscnt nh, oh; > + > + oh.raw =3D __atomic_load_n(&r->rts_cons.head.raw, > __ATOMIC_ACQUIRE); > + > + /* move cons.head atomically */ > + do { > + /* Restore n as it may change every loop */ > + n =3D num; > + > + /* > + * wait for cons head/tail distance, > + * make sure that we read cons head *before* > + * reading prod tail. 
> + */ > + __rte_ring_rts_head_wait(&r->rts_cons, &oh); > + > + /* The subtraction is done between two unsigned 32bits value > + * (the result is always modulo 32 bits even if we have > + * cons_head > prod_tail). So 'entries' is always between 0 > + * and size(ring)-1. > + */ > + *entries =3D r->prod.tail - oh.val.pos; > + > + /* Set the actual entries for dequeue */ > + if (n > *entries) > + n =3D (behavior =3D=3D RTE_RING_QUEUE_FIXED) ? 0 : > *entries; > + > + if (unlikely(n =3D=3D 0)) > + break; > + > + nh.val.pos =3D oh.val.pos + n; > + nh.val.cnt =3D oh.val.cnt + 1; > + > + /* > + * this CAS(ACQUIRE, ACQUIRE) serves as a hoist barrier to prevent: > + * - OOO reads of prod tail value > + * - OOO copy of elems from the ring > + */ > + } while (__atomic_compare_exchange_n(&r->rts_cons.head.raw, > + &oh.raw, nh.raw, > + 0, __ATOMIC_ACQUIRE, __ATOMIC_ACQUIRE) =3D=3D 0); > + > + *old_head =3D oh.val.pos; > + return n; > +} > + > +#endif /* _RTE_RING_RTS_C11_MEM_H_ */ > -- > 2.17.1
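For readers of the archive, a minimal usage sketch of the new mode as proposed in this patch (untested, and assuming the flag and function names stay as they are in v5; the ring name and sizes below are arbitrary). The RTS entry points are experimental, so the application has to build with experimental APIs allowed:

	#include <rte_ring.h>

	static struct rte_ring *
	create_rts_ring(void)
	{
		/* both enqueue and dequeue default to RTS mode */
		struct rte_ring *r = rte_ring_create("rts_ring", 1024,
				rte_socket_id(),
				RING_F_MP_RTS_ENQ | RING_F_MC_RTS_DEQ);
		if (r == NULL)
			return NULL;

		/* optionally tighten the allowed head/tail distance;
		 * the default set by rte_ring_init() is capacity / 8 */
		rte_ring_set_prod_htd_max(r, 32);
		rte_ring_set_cons_htd_max(r, 32);
		return r;
	}

	static void
	use_rts_ring(struct rte_ring *r, void **objs, unsigned int n)
	{
		/* the generic calls dispatch to the _rts_ variants based on
		 * the sync_type stored in the ring at init time */
		unsigned int done = rte_ring_enqueue_burst(r, objs, n, NULL);
		done = rte_ring_dequeue_burst(r, objs, done, NULL);
		(void)done;
	}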