From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: <dev-bounces@dpdk.org>
Received: from dpdk.org (dpdk.org [92.243.14.124])
	by inbox.dpdk.org (Postfix) with ESMTP id 303F7A04C3;
	Mon, 28 Sep 2020 17:12:48 +0200 (CEST)
From: Ruifeng Wang <Ruifeng.Wang@arm.com>
To: Adam Dybkowski <adamx.dybkowski@intel.com>, "dev@dpdk.org" <dev@dpdk.org>, 
 "fiona.trahe@intel.com" <fiona.trahe@intel.com>, "Akhil.goyal@nxp.com"
 <akhil.goyal@nxp.com>
CC: Fan Zhang <roy.fan.zhang@intel.com>, nd <nd@arm.com>
Thread-Topic: [dpdk-dev] [PATCH v2 1/1] crypto/scheduler: rename slave to
 worker
Thread-Index: AQHWlaJXRxbvClbHg0WFJtLTrQAGy6l+J8mg
Date: Mon, 28 Sep 2020 15:12:27 +0000
Message-ID: <VI1PR0802MB2351C78B98AF0B794538B27D9E350@VI1PR0802MB2351.eurprd08.prod.outlook.com>
References: <20200826153412.1041-1-adamx.dybkowski@intel.com>
 <20200928141633.396-1-adamx.dybkowski@intel.com>
 <20200928141633.396-2-adamx.dybkowski@intel.com>
In-Reply-To: <20200928141633.396-2-adamx.dybkowski@intel.com>
Accept-Language: en-US
Content-Language: en-US
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
Subject: Re: [dpdk-dev] [PATCH v2 1/1] crypto/scheduler: rename slave to
 worker
X-BeenThere: dev@dpdk.org
X-Mailman-Version: 2.1.15
Precedence: list
List-Id: DPDK patches and discussions <dev.dpdk.org>
List-Unsubscribe: <https://mails.dpdk.org/options/dev>,
 <mailto:dev-request@dpdk.org?subject=unsubscribe>
List-Archive: <http://mails.dpdk.org/archives/dev/>
List-Post: <mailto:dev@dpdk.org>
List-Help: <mailto:dev-request@dpdk.org?subject=help>
List-Subscribe: <https://mails.dpdk.org/listinfo/dev>,
 <mailto:dev-request@dpdk.org?subject=subscribe>
Errors-To: dev-bounces@dpdk.org
Sender: "dev" <dev-bounces@dpdk.org>


> -----Original Message-----
> From: dev <dev-bounces@dpdk.org> On Behalf Of Adam Dybkowski
> Sent: Monday, September 28, 2020 10:17 PM
> To: dev@dpdk.org; fiona.trahe@intel.com; Akhil.goyal@nxp.com
> Cc: Adam Dybkowski <adamx.dybkowski@intel.com>; Fan Zhang
> <roy.fan.zhang@intel.com>
> Subject: [dpdk-dev] [PATCH v2 1/1] crypto/scheduler: rename slave to
> worker
>
> This patch replaces the usage of the word 'slave' with more
> appropriate word 'worker' in QAT PMD and Scheduler PMD
> as well as in their docs. Also the test app was modified
> to use the new wording.
>
> The Scheduler PMD's public API was modified according to the
> previous deprecation notice:
> rte_cryptodev_scheduler_slave_attach is now called
> rte_cryptodev_scheduler_worker_attach,
> rte_cryptodev_scheduler_slave_detach is
> rte_cryptodev_scheduler_worker_detach,
> rte_cryptodev_scheduler_slaves_get is
> rte_cryptodev_scheduler_workers_get.
>
> Also, the configuration value
> RTE_CRYPTODEV_SCHEDULER_MAX_NB_SLAVES
> was renamed to RTE_CRYPTODEV_SCHEDULER_MAX_NB_WORKERS.
>
> Signed-off-by: Adam Dybkowski <adamx.dybkowski@intel.com>
> Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
> ---
>  app/test-crypto-perf/main.c                   |   2 +-
>  app/test/test_cryptodev.c                     |  20 +-
>  doc/guides/cryptodevs/qat.rst                 |   2 +-
>  doc/guides/cryptodevs/scheduler.rst           |  40 ++--
>  doc/guides/rel_notes/deprecation.rst          |   7 -
>  doc/guides/rel_notes/release_20_11.rst        |  11 +
>  .../scheduler/rte_cryptodev_scheduler.c       | 114 +++++-----
>  .../scheduler/rte_cryptodev_scheduler.h       |  35 ++-
>  .../rte_cryptodev_scheduler_operations.h      |  12 +-
>  .../rte_pmd_crypto_scheduler_version.map      |   6 +-
>  drivers/crypto/scheduler/scheduler_failover.c |  83 +++----
>  .../crypto/scheduler/scheduler_multicore.c    |  54 ++---
>  .../scheduler/scheduler_pkt_size_distr.c      | 142 ++++++------
>  drivers/crypto/scheduler/scheduler_pmd.c      |  54 ++---
>  drivers/crypto/scheduler/scheduler_pmd_ops.c  | 204 +++++++++---------
>  .../crypto/scheduler/scheduler_pmd_private.h  |  12 +-
>  .../crypto/scheduler/scheduler_roundrobin.c   |  87 ++++----
>  examples/l2fwd-crypto/main.c                  |   6 +-
>  18 files changed, 449 insertions(+), 442 deletions(-)
>
> diff --git a/app/test-crypto-perf/main.c b/app/test-crypto-perf/main.c
> index 8f8e580e4..62ae6048b 100644
> --- a/app/test-crypto-perf/main.c
> +++ b/app/test-crypto-perf/main.c
> @@ -240,7 +240,7 @@ cperf_initialize_cryptodev(struct cperf_options *opts,
> uint8_t *enabled_cdevs)
>  					"crypto_scheduler")) {
>  #ifdef RTE_LIBRTE_PMD_CRYPTO_SCHEDULER
>  			uint32_t nb_slaves =
> -
> 	rte_cryptodev_scheduler_slaves_get(cdev_id,
> +
> 	rte_cryptodev_scheduler_workers_get(cdev_id,
>  								NULL);
>
>  			sessions_needed = enabled_cdev_count *
> diff --git a/app/test/test_cryptodev.c b/app/test/test_cryptodev.c
> index 70bf6fe2c..255fb7525 100644
> --- a/app/test/test_cryptodev.c
> +++ b/app/test/test_cryptodev.c
> @@ -479,29 +479,29 @@ testsuite_setup(void)
>  	char vdev_args[VDEV_ARGS_SIZE] = {""};
>  	char temp_str[VDEV_ARGS_SIZE] = {"mode=multi-core,"
>
> 	"ordering=enable,name=cryptodev_test_scheduler,corelist="};
> -	uint16_t slave_core_count = 0;
> +	uint16_t worker_core_count = 0;
>  	uint16_t socket_id = 0;
>
>  	if (gbl_driver_id == rte_cryptodev_driver_id_get(
>  			RTE_STR(CRYPTODEV_NAME_SCHEDULER_PMD))) {
>
> -		/* Identify the Slave Cores
> -		 * Use 2 slave cores for the device args
> +		/* Identify the Worker Cores
> +		 * Use 2 worker cores for the device args
>  		 */
>  		RTE_LCORE_FOREACH_SLAVE(i) {
> -			if (slave_core_count > 1)
> +			if (worker_core_count > 1)
>  				break;
>  			snprintf(vdev_args, sizeof(vdev_args),
>  					"%s%d", temp_str, i);
>  			strcpy(temp_str, vdev_args);
>  			strlcat(temp_str, ";", sizeof(temp_str));
> -			slave_core_count++;
> +			worker_core_count++;
>  			socket_id = rte_lcore_to_socket_id(i);
>  		}
> -		if (slave_core_count != 2) {
> +		if (worker_core_count != 2) {
>  			RTE_LOG(ERR, USER1,
>  				"Cryptodev scheduler test require at least "
> -				"two slave cores to run. "
> +				"two worker cores to run. "
>  				"Please use the correct coremask.\n");
>  			return TEST_FAILED;
>  		}
> @@ -11712,7 +11712,7 @@
> test_chacha20_poly1305_decrypt_test_case_rfc8439(void)
>
>  #ifdef RTE_LIBRTE_PMD_CRYPTO_SCHEDULER
>
> -/* global AESNI slave IDs for the scheduler test */
> +/* global AESNI worker IDs for the scheduler test */
>  uint8_t aesni_ids[2];
>
>  static int
> @@ -11810,7 +11810,7 @@ test_scheduler_attach_slave_op(void)
>  		ts_params->qp_conf.mp_session_private =
>  				ts_params->session_priv_mpool;
>=20
> -		ret = rte_cryptodev_scheduler_slave_attach(sched_id,
> +		ret = rte_cryptodev_scheduler_worker_attach(sched_id,
>  				(uint8_t)i);
>
>  		TEST_ASSERT(ret == 0,
> @@ -11834,7 +11834,7 @@ test_scheduler_detach_slave_op(void)
>  	int ret;
>
>  	for (i = 0; i < 2; i++) {
> -		ret = rte_cryptodev_scheduler_slave_detach(sched_id,
> +		ret = rte_cryptodev_scheduler_worker_detach(sched_id,
>  				aesni_ids[i]);
>  		TEST_ASSERT(ret == 0,
>  			"Failed to detach device %u", aesni_ids[i]);
> diff --git a/doc/guides/cryptodevs/qat.rst b/doc/guides/cryptodevs/qat.rst
> index e5d2cf499..ee0bd3a0d 100644
> --- a/doc/guides/cryptodevs/qat.rst
> +++ b/doc/guides/cryptodevs/qat.rst
> @@ -328,7 +328,7 @@ The "rte_cryptodev_devices_get()" returns the
> devices exposed by either of these
>
>  	The cryptodev driver name is passed to the dpdk-test-crypto-perf
> tool in the "-devtype" parameter.
>=20
> -	The qat crypto device name is in the format of the slave parameter
> passed to the crypto scheduler.
> +	The qat crypto device name is in the format of the worker parameter
> passed to the crypto scheduler.
>
>  * The qat compressdev driver name is "compress_qat".
>    The rte_compressdev_devices_get() returns the devices exposed by this
> driver.
> diff --git a/doc/guides/cryptodevs/scheduler.rst
> b/doc/guides/cryptodevs/scheduler.rst
> index 7004ca431..565de40f3 100644
> --- a/doc/guides/cryptodevs/scheduler.rst
> +++ b/doc/guides/cryptodevs/scheduler.rst
> @@ -16,12 +16,12 @@ crypto ops among them in a certain manner.
>  The Cryptodev Scheduler PMD library (**librte_pmd_crypto_scheduler**)
> acts as
>  a software crypto PMD and shares the same API provided by
> librte_cryptodev.
>  The PMD supports attaching multiple crypto PMDs, software or hardware, as
> -slaves, and distributes the crypto workload to them with certain behavior.
> +workers, and distributes the crypto workload to them with certain behavior.
>  The behaviors are categorizes as different "modes". Basically, a scheduling
> -mode defines certain actions for scheduling crypto ops to its slaves.
> +mode defines certain actions for scheduling crypto ops to its workers.
>
>  The librte_pmd_crypto_scheduler library exports a C API which provides an
> API
> -for attaching/detaching slaves, set/get scheduling modes, and
> enable/disable
> +for attaching/detaching workers, set/get scheduling modes, and
> enable/disable
>  crypto ops reordering.
>
>  Limitations
> @@ -62,7 +62,7 @@ two calls:
>    created. This value may be overwritten internally if there are too
>    many devices are attached.
>=20
> -* slave: If a cryptodev has been initialized with specific name, it can be
> +* worker: If a cryptodev has been initialized with specific name, it can be
>    attached to the scheduler using this parameter, simply filling the name
>    here. Multiple cryptodevs can be attached initially by presenting this
>    parameter multiple times.
> @@ -84,13 +84,13 @@ Example:
>
>  .. code-block:: console
>
> -    ... --vdev "crypto_aesni_mb0,name=aesni_mb_1" --vdev
> "crypto_aesni_mb1,name=aesni_mb_2" --vdev
> "crypto_scheduler,slave=aesni_mb_1,slave=aesni_mb_2" ...
> +    ... --vdev "crypto_aesni_mb0,name=aesni_mb_1" --vdev
> "crypto_aesni_mb1,name=aesni_mb_2" --vdev
> "crypto_scheduler,worker=aesni_mb_1,worker=aesni_mb_2" ...
>
>  .. note::
>=20
>      * The scheduler cryptodev cannot be started unless the scheduling mode
> -      is set and at least one slave is attached. Also, to configure the
> -      scheduler in the run-time, like attach/detach slave(s), change
> +      is set and at least one worker is attached. Also, to configure the
> +      scheduler in the run-time, like attach/detach worker(s), change
>        scheduling mode, or enable/disable crypto op ordering, one should stop
>        the scheduler first, otherwise an error will be returned.
>
> @@ -111,7 +111,7 @@ operation:
>     *Initialization mode parameter*: **round-robin**
>
>     Round-robin mode, which distributes the enqueued burst of crypto ops
> -   among its slaves in a round-robin manner. This mode may help to fill
> +   among its workers in a round-robin manner. This mode may help to fill
>     the throughput gap between the physical core and the existing cryptodevs
>     to increase the overall performance.
>
> @@ -119,15 +119,15 @@ operation:
>
>     *Initialization mode parameter*: **packet-size-distr**
>
> -   Packet-size based distribution mode, which works with 2 slaves, the
> primary
> -   slave and the secondary slave, and distributes the enqueued crypto
> +   Packet-size based distribution mode, which works with 2 workers, the
> primary
> +   worker and the secondary worker, and distributes the enqueued crypto
>     operations to them based on their data lengths. A crypto operation will be
> -   distributed to the primary slave if its data length is equal to or bigger
> +   distributed to the primary worker if its data length is equal to or bigger
>     than the designated threshold, otherwise it will be handled by the
> secondary
> -   slave.
> +   worker.
>
>     A typical usecase in this mode is with the QAT cryptodev as the primary and
> -   a software cryptodev as the secondary slave. This may help applications to
> +   a software cryptodev as the secondary worker. This may help applications
> to
>     process additional crypto workload than what the QAT cryptodev can
> handle on
>     its own, by making use of the available CPU cycles to deal with smaller
>     crypto workloads.
> @@ -148,11 +148,11 @@ operation:
>
>     *Initialization mode parameter*: **fail-over**
>
> -   Fail-over mode, which works with 2 slaves, the primary slave and the
> -   secondary slave. In this mode, the scheduler will enqueue the incoming
> -   crypto operation burst to the primary slave. When one or more crypto
> +   Fail-over mode, which works with 2 workers, the primary worker and the
> +   secondary worker. In this mode, the scheduler will enqueue the incoming
> +   crypto operation burst to the primary worker. When one or more crypto
>     operations fail to be enqueued, then they will be enqueued to the
> secondary
> -   slave.
> +   worker.
>
>  *   **CDEV_SCHED_MODE_MULTICORE:**
>
> @@ -167,16 +167,16 @@ operation:
>     For mixed traffic (IMIX) the optimal number of worker cores is around 2-3.
>     For large packets (1.5 kbytes) scheduler shows linear scaling in
> performance
>     up to eight cores.
> -   Each worker uses its own slave cryptodev. Only software cryptodevs
> +   Each worker uses its own cryptodev. Only software cryptodevs
>     are supported. Only the same type of cryptodevs should be used
> concurrently.
>
>     The multi-core mode uses one extra parameter:
>
>     * corelist: Semicolon-separated list of logical cores to be used as workers.
> -     The number of worker cores should be equal to the number of slave
> cryptodevs.
> +     The number of worker cores should be equal to the number of worker
> cryptodevs.
>       These cores should be present in EAL core list parameter and
>       should not be used by the application or any other process.
>
>     Example:
>      ... --vdev "crypto_aesni_mb1,name=aesni_mb_1" --vdev
> "crypto_aesni_mb_pmd2,name=aesni_mb_2" \
> -    --vdev
> "crypto_scheduler,slave=aesni_mb_1,slave=aesni_mb_2,mode=multi-
> core,corelist=23;24" ...
> +    --vdev
> "crypto_scheduler,worker=aesni_mb_1,worker=aesni_mb_2,mode=multi-
> core,corelist=23;24" ...
> diff --git a/doc/guides/rel_notes/deprecation.rst
> b/doc/guides/rel_notes/deprecation.rst
> index f9b72acb8..7621210b0 100644
> --- a/doc/guides/rel_notes/deprecation.rst
> +++ b/doc/guides/rel_notes/deprecation.rst
> @@ -241,13 +241,6 @@ Deprecation Notices
>   to one it means it represents IV, when is set to zero it means J0 is used
>    directly, in this case 16 bytes of J0 need to be passed.
>
> -* scheduler: The functions ``rte_cryptodev_scheduler_slave_attach``,
> -  ``rte_cryptodev_scheduler_slave_detach`` and
> -  ``rte_cryptodev_scheduler_slaves_get`` will be replaced in 20.11 by
> -  ``rte_cryptodev_scheduler_worker_attach``,
> -  ``rte_cryptodev_scheduler_worker_detach`` and
> -  ``rte_cryptodev_scheduler_workers_get`` accordingly.
> -
>  * eventdev: Following structures will be modified to support DLB PMD
>    and future extensions:
>
> diff --git a/doc/guides/rel_notes/release_20_11.rst
> b/doc/guides/rel_notes/release_20_11.rst
> index 73ac08fb0..3e06ad7c3 100644
> --- a/doc/guides/rel_notes/release_20_11.rst
> +++ b/doc/guides/rel_notes/release_20_11.rst
> @@ -133,6 +133,17 @@ API Changes
>    and the function ``rte_rawdev_queue_conf_get()``
>   from ``void`` to ``int`` allowing the return of error codes from drivers.
>
> +* scheduler: Renamed functions ``rte_cryptodev_scheduler_slave_attach``,
> +  ``rte_cryptodev_scheduler_slave_detach`` and
> +  ``rte_cryptodev_scheduler_slaves_get`` to
> +  ``rte_cryptodev_scheduler_worker_attach``,
> +  ``rte_cryptodev_scheduler_worker_detach`` and
> +  ``rte_cryptodev_scheduler_workers_get`` accordingly.
> +
> +* scheduler: Renamed the configuration value
> +  ``RTE_CRYPTODEV_SCHEDULER_MAX_NB_SLAVES`` to
> +  ``RTE_CRYPTODEV_SCHEDULER_MAX_NB_WORKERS``.
> +
>  * bpf: ``RTE_BPF_XTYPE_NUM`` has been dropped from ``rte_bpf_xtype``.
>
>
> diff --git a/drivers/crypto/scheduler/rte_cryptodev_scheduler.c
> b/drivers/crypto/scheduler/rte_cryptodev_scheduler.c
> index 730504dab..9367a0e91 100644
> --- a/drivers/crypto/scheduler/rte_cryptodev_scheduler.c
> +++ b/drivers/crypto/scheduler/rte_cryptodev_scheduler.c
> @@ -13,31 +13,31 @@
>  /** update the scheduler pmd's capability with attaching device's
>   *  capability.
>   *  For each device to be attached, the scheduler's capability should be
> - *  the common capability set of all slaves
> + *  the common capability set of all workers
>   **/
>  static uint32_t
>  sync_caps(struct rte_cryptodev_capabilities *caps,
>  		uint32_t nb_caps,
> -		const struct rte_cryptodev_capabilities *slave_caps)
> +		const struct rte_cryptodev_capabilities *worker_caps)
>  {
> -	uint32_t sync_nb_caps = nb_caps, nb_slave_caps = 0;
> +	uint32_t sync_nb_caps = nb_caps, nb_worker_caps = 0;
>  	uint32_t i;
>
> -	while (slave_caps[nb_slave_caps].op !=
> RTE_CRYPTO_OP_TYPE_UNDEFINED)
> -		nb_slave_caps++;
> +	while (worker_caps[nb_worker_caps].op !=
> RTE_CRYPTO_OP_TYPE_UNDEFINED)
> +		nb_worker_caps++;
>
>  	if (nb_caps == 0) {
> -		rte_memcpy(caps, slave_caps, sizeof(*caps) *
> nb_slave_caps);
> -		return nb_slave_caps;
> +		rte_memcpy(caps, worker_caps, sizeof(*caps) *
> nb_worker_caps);
> +		return nb_worker_caps;
>  	}
>
>  	for (i = 0; i < sync_nb_caps; i++) {
>  		struct rte_cryptodev_capabilities *cap = &caps[i];
>  		uint32_t j;
>
> -		for (j = 0; j < nb_slave_caps; j++) {
> +		for (j = 0; j < nb_worker_caps; j++) {
>  			const struct rte_cryptodev_capabilities *s_cap =
> -					&slave_caps[j];
> +					&worker_caps[j];
>
>  			if (s_cap->op != cap->op || s_cap-
> >sym.xform_type !=
>  					cap->sym.xform_type)
> @@ -72,7 +72,7 @@ sync_caps(struct rte_cryptodev_capabilities *caps,
>  			break;
>  		}
>
> -		if (j < nb_slave_caps)
> +		if (j < nb_worker_caps)
>  			continue;
>
>  		/* remove a uncommon cap from the array */
> @@ -97,10 +97,10 @@ update_scheduler_capability(struct scheduler_ctx
> *sched_ctx)
>  		sched_ctx->capabilities = NULL;
>  	}
>
> -	for (i = 0; i < sched_ctx->nb_slaves; i++) {
> +	for (i = 0; i < sched_ctx->nb_workers; i++) {
>  		struct rte_cryptodev_info dev_info;
>
> -		rte_cryptodev_info_get(sched_ctx->slaves[i].dev_id,
> &dev_info);
> +		rte_cryptodev_info_get(sched_ctx->workers[i].dev_id,
> &dev_info);
>
>  		nb_caps = sync_caps(tmp_caps, nb_caps,
> dev_info.capabilities);
>  		if (nb_caps == 0)
> @@ -127,10 +127,10 @@ update_scheduler_feature_flag(struct
> rte_cryptodev *dev)
>
>  	dev->feature_flags = 0;
>
> -	for (i = 0; i < sched_ctx->nb_slaves; i++) {
> +	for (i = 0; i < sched_ctx->nb_workers; i++) {
>  		struct rte_cryptodev_info dev_info;
>
> -		rte_cryptodev_info_get(sched_ctx->slaves[i].dev_id,
> &dev_info);
> +		rte_cryptodev_info_get(sched_ctx->workers[i].dev_id,
> &dev_info);
>
>  		dev->feature_flags |= dev_info.feature_flags;
>  	}
> @@ -142,15 +142,15 @@ update_max_nb_qp(struct scheduler_ctx
> *sched_ctx)
>  	uint32_t i;
>  	uint32_t max_nb_qp;
>
> -	if (!sched_ctx->nb_slaves)
> +	if (!sched_ctx->nb_workers)
>  		return;
>
> -	max_nb_qp = sched_ctx->nb_slaves ? UINT32_MAX : 0;
> +	max_nb_qp = sched_ctx->nb_workers ? UINT32_MAX : 0;
>
> -	for (i = 0; i < sched_ctx->nb_slaves; i++) {
> +	for (i = 0; i < sched_ctx->nb_workers; i++) {
>  		struct rte_cryptodev_info dev_info;
>
> -		rte_cryptodev_info_get(sched_ctx->slaves[i].dev_id,
> &dev_info);
> +		rte_cryptodev_info_get(sched_ctx->workers[i].dev_id,
> &dev_info);
>  		max_nb_qp = dev_info.max_nb_queue_pairs < max_nb_qp ?
>  				dev_info.max_nb_queue_pairs : max_nb_qp;
>  	}
> @@ -160,11 +160,11 @@ update_max_nb_qp(struct scheduler_ctx
> *sched_ctx)
>
>  /** Attach a device to the scheduler. */
>  int
> -rte_cryptodev_scheduler_slave_attach(uint8_t scheduler_id, uint8_t
> slave_id)
> +rte_cryptodev_scheduler_worker_attach(uint8_t scheduler_id, uint8_t
> worker_id)
>  {
>  	struct rte_cryptodev *dev =
> rte_cryptodev_pmd_get_dev(scheduler_id);
>  	struct scheduler_ctx *sched_ctx;
> -	struct scheduler_slave *slave;
> +	struct scheduler_worker *worker;
>  	struct rte_cryptodev_info dev_info;
>  	uint32_t i;
>
> @@ -184,30 +184,30 @@ rte_cryptodev_scheduler_slave_attach(uint8_t
> scheduler_id, uint8_t slave_id)
>  	}
>
>  	sched_ctx = dev->data->dev_private;
> -	if (sched_ctx->nb_slaves >=
> -			RTE_CRYPTODEV_SCHEDULER_MAX_NB_SLAVES) {
> -		CR_SCHED_LOG(ERR, "Too many slaves attached");
> +	if (sched_ctx->nb_workers >=
> +			RTE_CRYPTODEV_SCHEDULER_MAX_NB_WORKERS)
> {
> +		CR_SCHED_LOG(ERR, "Too many workers attached");
>  		return -ENOMEM;
>  	}
>
> -	for (i = 0; i < sched_ctx->nb_slaves; i++)
> -		if (sched_ctx->slaves[i].dev_id == slave_id) {
> -			CR_SCHED_LOG(ERR, "Slave already added");
> +	for (i = 0; i < sched_ctx->nb_workers; i++)
> +		if (sched_ctx->workers[i].dev_id == worker_id) {
> +			CR_SCHED_LOG(ERR, "Worker already added");
>  			return -ENOTSUP;
>  		}
>
> -	slave = &sched_ctx->slaves[sched_ctx->nb_slaves];
> +	worker = &sched_ctx->workers[sched_ctx->nb_workers];
>
> -	rte_cryptodev_info_get(slave_id, &dev_info);
> +	rte_cryptodev_info_get(worker_id, &dev_info);
>
> -	slave->dev_id = slave_id;
> -	slave->driver_id = dev_info.driver_id;
> -	sched_ctx->nb_slaves++;
> +	worker->dev_id = worker_id;
> +	worker->driver_id = dev_info.driver_id;
> +	sched_ctx->nb_workers++;
>
>  	if (update_scheduler_capability(sched_ctx) < 0) {
> -		slave->dev_id = 0;
> -		slave->driver_id = 0;
> -		sched_ctx->nb_slaves--;
> +		worker->dev_id = 0;
> +		worker->driver_id = 0;
> +		sched_ctx->nb_workers--;
>
>  		CR_SCHED_LOG(ERR, "capabilities update failed");
>  		return -ENOTSUP;
> @@ -221,11 +221,11 @@ rte_cryptodev_scheduler_slave_attach(uint8_t
> scheduler_id, uint8_t slave_id)
>  }
>
>  int
> -rte_cryptodev_scheduler_slave_detach(uint8_t scheduler_id, uint8_t
> slave_id)
> +rte_cryptodev_scheduler_worker_detach(uint8_t scheduler_id, uint8_t
> worker_id)
>  {
>  	struct rte_cryptodev *dev =
> rte_cryptodev_pmd_get_dev(scheduler_id);
>  	struct scheduler_ctx *sched_ctx;
> -	uint32_t i, slave_pos;
> +	uint32_t i, worker_pos;
>
>  	if (!dev) {
>  		CR_SCHED_LOG(ERR, "Operation not supported");
> @@ -244,26 +244,26 @@ rte_cryptodev_scheduler_slave_detach(uint8_t
> scheduler_id, uint8_t slave_id)
>=20
>  	sched_ctx =3D dev->data->dev_private;
>=20
> -	for (slave_pos =3D 0; slave_pos < sched_ctx->nb_slaves; slave_pos++)
> -		if (sched_ctx->slaves[slave_pos].dev_id =3D=3D slave_id)
> +	for (worker_pos =3D 0; worker_pos < sched_ctx->nb_workers;
> worker_pos++)
> +		if (sched_ctx->workers[worker_pos].dev_id =3D=3D worker_id)
>  			break;
> -	if (slave_pos =3D=3D sched_ctx->nb_slaves) {
> -		CR_SCHED_LOG(ERR, "Cannot find slave");
> +	if (worker_pos =3D=3D sched_ctx->nb_workers) {
> +		CR_SCHED_LOG(ERR, "Cannot find worker");
>  		return -ENOTSUP;
>  	}
>=20
> -	if (sched_ctx->ops.slave_detach(dev, slave_id) < 0) {
> -		CR_SCHED_LOG(ERR, "Failed to detach slave");
> +	if (sched_ctx->ops.worker_detach(dev, worker_id) < 0) {
> +		CR_SCHED_LOG(ERR, "Failed to detach worker");
>  		return -ENOTSUP;
>  	}
>=20
> -	for (i =3D slave_pos; i < sched_ctx->nb_slaves - 1; i++) {
> -		memcpy(&sched_ctx->slaves[i], &sched_ctx->slaves[i+1],
> -				sizeof(struct scheduler_slave));
> +	for (i =3D worker_pos; i < sched_ctx->nb_workers - 1; i++) {
> +		memcpy(&sched_ctx->workers[i], &sched_ctx-
> >workers[i+1],
> +				sizeof(struct scheduler_worker));
>  	}
> -	memset(&sched_ctx->slaves[sched_ctx->nb_slaves - 1], 0,
> -			sizeof(struct scheduler_slave));
> -	sched_ctx->nb_slaves--;
> +	memset(&sched_ctx->workers[sched_ctx->nb_workers - 1], 0,
> +			sizeof(struct scheduler_worker));
> +	sched_ctx->nb_workers--;
>=20
>  	if (update_scheduler_capability(sched_ctx) < 0) {
>  		CR_SCHED_LOG(ERR, "capabilities update failed");
> @@ -459,8 +459,8 @@ rte_cryptodev_scheduler_load_user_scheduler(uint8_t scheduler_id,
>  	sched_ctx->ops.create_private_ctx = scheduler->ops->create_private_ctx;
>  	sched_ctx->ops.scheduler_start = scheduler->ops->scheduler_start;
>  	sched_ctx->ops.scheduler_stop = scheduler->ops->scheduler_stop;
> -	sched_ctx->ops.slave_attach = scheduler->ops->slave_attach;
> -	sched_ctx->ops.slave_detach = scheduler->ops->slave_detach;
> +	sched_ctx->ops.worker_attach = scheduler->ops->worker_attach;
> +	sched_ctx->ops.worker_detach = scheduler->ops->worker_detach;
>  	sched_ctx->ops.option_set = scheduler->ops->option_set;
>  	sched_ctx->ops.option_get = scheduler->ops->option_get;
>
> @@ -485,11 +485,11 @@ rte_cryptodev_scheduler_load_user_scheduler(uint8_t scheduler_id,
>  }
>
>  int
> -rte_cryptodev_scheduler_slaves_get(uint8_t scheduler_id, uint8_t *slaves)
> +rte_cryptodev_scheduler_workers_get(uint8_t scheduler_id, uint8_t *workers)
>  {
>  	struct rte_cryptodev *dev = rte_cryptodev_pmd_get_dev(scheduler_id);
>  	struct scheduler_ctx *sched_ctx;
> -	uint32_t nb_slaves = 0;
> +	uint32_t nb_workers = 0;
>
>  	if (!dev) {
>  		CR_SCHED_LOG(ERR, "Operation not supported");
> @@ -503,16 +503,16 @@ rte_cryptodev_scheduler_slaves_get(uint8_t scheduler_id, uint8_t *slaves)
>
>  	sched_ctx = dev->data->dev_private;
>
> -	nb_slaves = sched_ctx->nb_slaves;
> +	nb_workers = sched_ctx->nb_workers;
>
> -	if (slaves && nb_slaves) {
> +	if (workers && nb_workers) {
>  		uint32_t i;
>
> -		for (i = 0; i < nb_slaves; i++)
> -			slaves[i] = sched_ctx->slaves[i].dev_id;
> +		for (i = 0; i < nb_workers; i++)
> +			workers[i] = sched_ctx->workers[i].dev_id;
>  	}
>
> -	return (int)nb_slaves;
> +	return (int)nb_workers;
>  }
>
>  int
> diff --git a/drivers/crypto/scheduler/rte_cryptodev_scheduler.h b/drivers/crypto/scheduler/rte_cryptodev_scheduler.h
> index 9a72a90ae..88da8368e 100644
> --- a/drivers/crypto/scheduler/rte_cryptodev_scheduler.h
> +++ b/drivers/crypto/scheduler/rte_cryptodev_scheduler.h
> @@ -10,9 +10,9 @@
>   *
>   * RTE Cryptodev Scheduler Device
>   *
> - * The RTE Cryptodev Scheduler Device allows the aggregation of multiple (slave)
> + * The RTE Cryptodev Scheduler Device allows the aggregation of multiple worker
>   * Cryptodevs into a single logical crypto device, and the scheduling the
> - * crypto operations to the slaves based on the mode of the specified mode of
> + * crypto operations to the workers based on the mode of the specified mode of
>   * operation specified and supported. This implementation supports 3 modes of
>   * operation: round robin, packet-size based, and fail-over.
>   */
> @@ -25,8 +25,8 @@ extern "C" {
>  #endif
>
>  /** Maximum number of bonded devices per device */
> -#ifndef RTE_CRYPTODEV_SCHEDULER_MAX_NB_SLAVES
> -#define RTE_CRYPTODEV_SCHEDULER_MAX_NB_SLAVES	(8)
> +#ifndef RTE_CRYPTODEV_SCHEDULER_MAX_NB_WORKERS
> +#define RTE_CRYPTODEV_SCHEDULER_MAX_NB_WORKERS	(8)
>  #endif
>  /** Maximum number of multi-core worker cores */
> @@ -106,34 +106,33 @@ rte_cryptodev_scheduler_load_user_scheduler(uint8_t scheduler_id,
>   *
>   * @param scheduler_id
>   *   The target scheduler device ID
> - * @param slave_id
> + * @param worker_id
>   *   Crypto device ID to be attached
>   *
>   * @return
> - *   - 0 if the slave is attached.
> + *   - 0 if the worker is attached.
>   *   - -ENOTSUP if the operation is not supported.
>   *   - -EBUSY if device is started.
> - *   - -ENOMEM if the scheduler's slave list is full.
> + *   - -ENOMEM if the scheduler's worker list is full.
>   */
>  int
> -rte_cryptodev_scheduler_slave_attach(uint8_t scheduler_id, uint8_t slave_id);
> +rte_cryptodev_scheduler_worker_attach(uint8_t scheduler_id, uint8_t worker_id);
>
>  /**
>   * Detach a crypto device from the scheduler
>   *
>   * @param scheduler_id
>   *   The target scheduler device ID
> - * @param slave_id
> + * @param worker_id
>   *   Crypto device ID to be detached
>   *
>   * @return
> - *   - 0 if the slave is detached.
> + *   - 0 if the worker is detached.
>   *   - -ENOTSUP if the operation is not supported.
>   *   - -EBUSY if device is started.
>   */
>  int
> -rte_cryptodev_scheduler_slave_detach(uint8_t scheduler_id, uint8_t slave_id);
> -
> +rte_cryptodev_scheduler_worker_detach(uint8_t scheduler_id, uint8_t worker_id);
>
>  /**
>   * Set the scheduling mode
> @@ -199,21 +198,21 @@ int
>  rte_cryptodev_scheduler_ordering_get(uint8_t scheduler_id);
>
>  /**
> - * Get the attached slaves' count and/or ID
> + * Get the attached workers' count and/or ID
>   *
>   * @param scheduler_id
>   *   The target scheduler device ID
> - * @param slaves
> - *   If successful, the function will write back all slaves' device IDs to it.
> + * @param workers
> + *   If successful, the function will write back all workers' device IDs to it.
>   *   This parameter will either be an uint8_t array of
> - *   RTE_CRYPTODEV_SCHEDULER_MAX_NB_SLAVES elements or NULL.
> + *   RTE_CRYPTODEV_SCHEDULER_MAX_NB_WORKERS elements or NULL.
>   *
>   * @return
> - *   - non-negative number: the number of slaves attached
> + *   - non-negative number: the number of workers attached
>   *   - -ENOTSUP if the operation is not supported.
>   */
>  int
> -rte_cryptodev_scheduler_slaves_get(uint8_t scheduler_id, uint8_t *slaves);
> +rte_cryptodev_scheduler_workers_get(uint8_t scheduler_id, uint8_t *workers);
>
>  /**
>   * Set the mode specific option
> diff --git a/drivers/crypto/scheduler/rte_cryptodev_scheduler_operations.h b/drivers/crypto/scheduler/rte_cryptodev_scheduler_operations.h
> index c43695894..f8726c009 100644
> --- a/drivers/crypto/scheduler/rte_cryptodev_scheduler_operations.h
> +++ b/drivers/crypto/scheduler/rte_cryptodev_scheduler_operations.h
> @@ -11,10 +11,10 @@
>  extern "C" {
>  #endif
>
> -typedef int (*rte_cryptodev_scheduler_slave_attach_t)(
> -		struct rte_cryptodev *dev, uint8_t slave_id);
> -typedef int (*rte_cryptodev_scheduler_slave_detach_t)(
> -		struct rte_cryptodev *dev, uint8_t slave_id);
> +typedef int (*rte_cryptodev_scheduler_worker_attach_t)(
> +		struct rte_cryptodev *dev, uint8_t worker_id);
> +typedef int (*rte_cryptodev_scheduler_worker_detach_t)(
> +		struct rte_cryptodev *dev, uint8_t worker_id);
>
>  typedef int (*rte_cryptodev_scheduler_start_t)(struct rte_cryptodev *dev);
>  typedef int (*rte_cryptodev_scheduler_stop_t)(struct rte_cryptodev *dev);
> @@ -36,8 +36,8 @@ typedef int (*rte_cryptodev_scheduler_config_option_get)(
>  		void *option);
>
>  struct rte_cryptodev_scheduler_ops {
> -	rte_cryptodev_scheduler_slave_attach_t slave_attach;
> -	rte_cryptodev_scheduler_slave_attach_t slave_detach;
> +	rte_cryptodev_scheduler_worker_attach_t worker_attach;
> +	rte_cryptodev_scheduler_worker_detach_t worker_detach;
>
>  	rte_cryptodev_scheduler_start_t scheduler_start;
>  	rte_cryptodev_scheduler_stop_t scheduler_stop;
> diff --git a/drivers/crypto/scheduler/rte_pmd_crypto_scheduler_version.map b/drivers/crypto/scheduler/rte_pmd_crypto_scheduler_version.map
> index ca6f102d9..ab7d50562 100644
> --- a/drivers/crypto/scheduler/rte_pmd_crypto_scheduler_version.map
> +++ b/drivers/crypto/scheduler/rte_pmd_crypto_scheduler_version.map
> @@ -8,9 +8,9 @@ DPDK_21 {
>  	rte_cryptodev_scheduler_option_set;
>  	rte_cryptodev_scheduler_ordering_get;
>  	rte_cryptodev_scheduler_ordering_set;
> -	rte_cryptodev_scheduler_slave_attach;
> -	rte_cryptodev_scheduler_slave_detach;
> -	rte_cryptodev_scheduler_slaves_get;
> +	rte_cryptodev_scheduler_worker_attach;
> +	rte_cryptodev_scheduler_worker_detach;
> +	rte_cryptodev_scheduler_workers_get;
>
>  	local: *;
>  };
> diff --git a/drivers/crypto/scheduler/scheduler_failover.c b/drivers/crypto/scheduler/scheduler_failover.c
> index 3a023b8ad..844312dd1 100644
> --- a/drivers/crypto/scheduler/scheduler_failover.c
> +++ b/drivers/crypto/scheduler/scheduler_failover.c
> @@ -8,20 +8,20 @@
>  #include "rte_cryptodev_scheduler_operations.h"
>  #include "scheduler_pmd_private.h"
>
> -#define PRIMARY_SLAVE_IDX	0
> -#define SECONDARY_SLAVE_IDX	1
> -#define NB_FAILOVER_SLAVES	2
> -#define SLAVE_SWITCH_MASK	(0x01)
> +#define PRIMARY_WORKER_IDX	0
> +#define SECONDARY_WORKER_IDX	1
> +#define NB_FAILOVER_WORKERS	2
> +#define WORKER_SWITCH_MASK	(0x01)
>
>  struct fo_scheduler_qp_ctx {
> -	struct scheduler_slave primary_slave;
> -	struct scheduler_slave secondary_slave;
> +	struct scheduler_worker primary_worker;
> +	struct scheduler_worker secondary_worker;
>
>  	uint8_t deq_idx;
>  };
>
>  static __rte_always_inline uint16_t
> -failover_slave_enqueue(struct scheduler_slave *slave,
> +failover_worker_enqueue(struct scheduler_worker *worker,
>  		struct rte_crypto_op **ops, uint16_t nb_ops)
>  {
>  	uint16_t i, processed_ops;
> @@ -29,9 +29,9 @@ failover_slave_enqueue(struct scheduler_slave *slave,
>  	for (i = 0; i < nb_ops && i < 4; i++)
>  		rte_prefetch0(ops[i]->sym->session);
>
> -	processed_ops = rte_cryptodev_enqueue_burst(slave->dev_id,
> -			slave->qp_id, ops, nb_ops);
> -	slave->nb_inflight_cops += processed_ops;
> +	processed_ops = rte_cryptodev_enqueue_burst(worker->dev_id,
> +			worker->qp_id, ops, nb_ops);
> +	worker->nb_inflight_cops += processed_ops;
>
>  	return processed_ops;
>  }
> @@ -46,11 +46,12 @@ schedule_enqueue(void *qp, struct rte_crypto_op **ops, uint16_t nb_ops)
>  	if (unlikely(nb_ops == 0))
>  		return 0;
>
> -	enqueued_ops = failover_slave_enqueue(&qp_ctx->primary_slave,
> +	enqueued_ops = failover_worker_enqueue(&qp_ctx->primary_worker,
>  			ops, nb_ops);
>
>  	if (enqueued_ops < nb_ops)
> -		enqueued_ops += failover_slave_enqueue(&qp_ctx->secondary_slave,
> +		enqueued_ops += failover_worker_enqueue(
> +				&qp_ctx->secondary_worker,
>  				&ops[enqueued_ops],
>  				nb_ops - enqueued_ops);
>
> @@ -79,28 +80,28 @@ schedule_dequeue(void *qp, struct rte_crypto_op **ops, uint16_t nb_ops)
>  {
>  	struct fo_scheduler_qp_ctx *qp_ctx =
>  			((struct scheduler_qp_ctx *)qp)->private_qp_ctx;
> -	struct scheduler_slave *slaves[NB_FAILOVER_SLAVES] = {
> -			&qp_ctx->primary_slave, &qp_ctx->secondary_slave};
> -	struct scheduler_slave *slave = slaves[qp_ctx->deq_idx];
> +	struct scheduler_worker *workers[NB_FAILOVER_WORKERS] = {
> +			&qp_ctx->primary_worker, &qp_ctx->secondary_worker};
> +	struct scheduler_worker *worker = workers[qp_ctx->deq_idx];
>  	uint16_t nb_deq_ops = 0, nb_deq_ops2 = 0;
>
> -	if (slave->nb_inflight_cops) {
> -		nb_deq_ops = rte_cryptodev_dequeue_burst(slave->dev_id,
> -			slave->qp_id, ops, nb_ops);
> -		slave->nb_inflight_cops -= nb_deq_ops;
> +	if (worker->nb_inflight_cops) {
> +		nb_deq_ops = rte_cryptodev_dequeue_burst(worker->dev_id,
> +			worker->qp_id, ops, nb_ops);
> +		worker->nb_inflight_cops -= nb_deq_ops;
>  	}
>
> -	qp_ctx->deq_idx = (~qp_ctx->deq_idx) & SLAVE_SWITCH_MASK;
> +	qp_ctx->deq_idx = (~qp_ctx->deq_idx) & WORKER_SWITCH_MASK;
>
>  	if (nb_deq_ops == nb_ops)
>  		return nb_deq_ops;
>
> -	slave = slaves[qp_ctx->deq_idx];
> +	worker = workers[qp_ctx->deq_idx];
>
> -	if (slave->nb_inflight_cops) {
> -		nb_deq_ops2 = rte_cryptodev_dequeue_burst(slave->dev_id,
> -			slave->qp_id, &ops[nb_deq_ops], nb_ops - nb_deq_ops);
> -		slave->nb_inflight_cops -= nb_deq_ops2;
> +	if (worker->nb_inflight_cops) {
> +		nb_deq_ops2 = rte_cryptodev_dequeue_burst(worker->dev_id,
> +			worker->qp_id, &ops[nb_deq_ops], nb_ops - nb_deq_ops);
> +		worker->nb_inflight_cops -= nb_deq_ops2;
>  	}
>
>  	return nb_deq_ops + nb_deq_ops2;
> @@ -119,15 +120,15 @@ schedule_dequeue_ordering(void *qp, struct rte_crypto_op **ops,
>  }
>
>  static int
> -slave_attach(__rte_unused struct rte_cryptodev *dev,
> -		__rte_unused uint8_t slave_id)
> +worker_attach(__rte_unused struct rte_cryptodev *dev,
> +		__rte_unused uint8_t worker_id)
>  {
>  	return 0;
>  }
>
>  static int
> -slave_detach(__rte_unused struct rte_cryptodev *dev,
> -		__rte_unused uint8_t slave_id)
> +worker_detach(__rte_unused struct rte_cryptodev *dev,
> +		__rte_unused uint8_t worker_id)
>  {
>  	return 0;
>  }
> @@ -138,8 +139,8 @@ scheduler_start(struct rte_cryptodev *dev)
>  	struct scheduler_ctx *sched_ctx = dev->data->dev_private;
>  	uint16_t i;
>
> -	if (sched_ctx->nb_slaves < 2) {
> -		CR_SCHED_LOG(ERR, "Number of slaves shall no less than 2");
> +	if (sched_ctx->nb_workers < 2) {
> +		CR_SCHED_LOG(ERR, "Number of workers shall be no less than 2");
>  		return -ENOMEM;
>  	}
>
> @@ -156,12 +157,12 @@ scheduler_start(struct rte_cryptodev *dev)
>  			((struct scheduler_qp_ctx *)
>  				dev->data->queue_pairs[i])->private_qp_ctx;
>
> -		rte_memcpy(&qp_ctx->primary_slave,
> -				&sched_ctx->slaves[PRIMARY_SLAVE_IDX],
> -				sizeof(struct scheduler_slave));
> -		rte_memcpy(&qp_ctx->secondary_slave,
> -				&sched_ctx->slaves[SECONDARY_SLAVE_IDX],
> -				sizeof(struct scheduler_slave));
> +		rte_memcpy(&qp_ctx->primary_worker,
> +				&sched_ctx->workers[PRIMARY_WORKER_IDX],
> +				sizeof(struct scheduler_worker));
> +		rte_memcpy(&qp_ctx->secondary_worker,
> +				&sched_ctx->workers[SECONDARY_WORKER_IDX],
> +				sizeof(struct scheduler_worker));
>  	}
>
>  	return 0;
> @@ -198,8 +199,8 @@ scheduler_create_private_ctx(__rte_unused struct rte_cryptodev *dev)
>  }
>
>  static struct rte_cryptodev_scheduler_ops scheduler_fo_ops = {
> -	slave_attach,
> -	slave_detach,
> +	worker_attach,
> +	worker_detach,
>  	scheduler_start,
>  	scheduler_stop,
>  	scheduler_config_qp,
> @@ -210,8 +211,8 @@ static struct rte_cryptodev_scheduler_ops scheduler_fo_ops = {
>
>  static struct rte_cryptodev_scheduler fo_scheduler = {
>  		.name = "failover-scheduler",
> -		.description = "scheduler which enqueues to the primary slave, "
> -				"and only then enqueues to the secondary slave "
> +		.description = "scheduler which enqueues to the primary worker, "
> +				"and only then enqueues to the secondary worker "
>  				"upon failing on enqueuing to primary",
>  		.mode = CDEV_SCHED_MODE_FAILOVER,
>  		.ops = &scheduler_fo_ops
> diff --git a/drivers/crypto/scheduler/scheduler_multicore.c b/drivers/crypto/scheduler/scheduler_multicore.c
> index 2d6790bb3..1e2e8dbf9 100644
> --- a/drivers/crypto/scheduler/scheduler_multicore.c
> +++ b/drivers/crypto/scheduler/scheduler_multicore.c
> @@ -26,8 +26,8 @@ struct mc_scheduler_ctx {
>  };
>
>  struct mc_scheduler_qp_ctx {
> -	struct scheduler_slave slaves[RTE_CRYPTODEV_SCHEDULER_MAX_NB_SLAVES];
> -	uint32_t nb_slaves;
> +	struct scheduler_worker workers[RTE_CRYPTODEV_SCHEDULER_MAX_NB_WORKERS];
> +	uint32_t nb_workers;
>
>  	uint32_t last_enq_worker_idx;
>  	uint32_t last_deq_worker_idx;
> @@ -132,15 +132,15 @@ schedule_dequeue_ordering(void *qp, struct rte_crypto_op **ops,
>  }
>
>  static int
> -slave_attach(__rte_unused struct rte_cryptodev *dev,
> -		__rte_unused uint8_t slave_id)
> +worker_attach(__rte_unused struct rte_cryptodev *dev,
> +		__rte_unused uint8_t worker_id)
>  {
>  	return 0;
>  }
>
>  static int
> -slave_detach(__rte_unused struct rte_cryptodev *dev,
> -		__rte_unused uint8_t slave_id)
> +worker_detach(__rte_unused struct rte_cryptodev *dev,
> +		__rte_unused uint8_t worker_id)
>  {
>  	return 0;
>  }
> @@ -154,7 +154,7 @@ mc_scheduler_worker(struct rte_cryptodev *dev)
>  	struct rte_ring *deq_ring;
>  	uint32_t core_id = rte_lcore_id();
>  	int i, worker_idx = -1;
> -	struct scheduler_slave *slave;
> +	struct scheduler_worker *worker;
>  	struct rte_crypto_op *enq_ops[MC_SCHED_BUFFER_SIZE];
>  	struct rte_crypto_op *deq_ops[MC_SCHED_BUFFER_SIZE];
>  	uint16_t processed_ops;
> @@ -177,15 +177,16 @@ mc_scheduler_worker(struct rte_cryptodev *dev)
>  		return -1;
>  	}
>
> -	slave = &sched_ctx->slaves[worker_idx];
> +	worker = &sched_ctx->workers[worker_idx];
>  	enq_ring = mc_ctx->sched_enq_ring[worker_idx];
>  	deq_ring = mc_ctx->sched_deq_ring[worker_idx];
>
>  	while (!mc_ctx->stop_signal) {
>  		if (pending_enq_ops) {
>  			processed_ops =
> -				rte_cryptodev_enqueue_burst(slave->dev_id,
> -					slave->qp_id, &enq_ops[pending_enq_ops_idx],
> +				rte_cryptodev_enqueue_burst(worker->dev_id,
> +					worker->qp_id,
> +					&enq_ops[pending_enq_ops_idx],
>  					pending_enq_ops);
>  			pending_enq_ops -= processed_ops;
>  			pending_enq_ops_idx += processed_ops;
> @@ -195,8 +196,8 @@ mc_scheduler_worker(struct rte_cryptodev *dev)
>
>  					MC_SCHED_BUFFER_SIZE, NULL);
>  			if (processed_ops) {
>  				pending_enq_ops_idx = rte_cryptodev_enqueue_burst(
> -							slave->dev_id, slave->qp_id,
> -							enq_ops, processed_ops);
> +						worker->dev_id, worker->qp_id,
> +						enq_ops, processed_ops);
>  				pending_enq_ops = processed_ops - pending_enq_ops_idx;
>  				inflight_ops += pending_enq_ops_idx;
>  			}
> @@ -209,8 +210,9 @@ mc_scheduler_worker(struct rte_cryptodev *dev)
>  			pending_deq_ops -= processed_ops;
>  			pending_deq_ops_idx += processed_ops;
>  		} else if (inflight_ops) {
> -			processed_ops = rte_cryptodev_dequeue_burst(slave->dev_id,
> -					slave->qp_id, deq_ops, MC_SCHED_BUFFER_SIZE);
> +			processed_ops = rte_cryptodev_dequeue_burst(
> +					worker->dev_id, worker->qp_id, deq_ops,
> +					MC_SCHED_BUFFER_SIZE);
>  			if (processed_ops) {
>  				inflight_ops -= processed_ops;
>  				if (reordering_enabled) {
> @@ -264,16 +266,16 @@ scheduler_start(struct rte_cryptodev *dev)
>  				qp_ctx->private_qp_ctx;
>  		uint32_t j;
>
> -		memset(mc_qp_ctx->slaves, 0,
> -				RTE_CRYPTODEV_SCHEDULER_MAX_NB_SLAVES *
> -				sizeof(struct scheduler_slave));
> -		for (j = 0; j < sched_ctx->nb_slaves; j++) {
> -			mc_qp_ctx->slaves[j].dev_id =
> -					sched_ctx->slaves[j].dev_id;
> -			mc_qp_ctx->slaves[j].qp_id = i;
> +		memset(mc_qp_ctx->workers, 0,
> +				RTE_CRYPTODEV_SCHEDULER_MAX_NB_WORKERS *
> +				sizeof(struct scheduler_worker));
> +		for (j = 0; j < sched_ctx->nb_workers; j++) {
> +			mc_qp_ctx->workers[j].dev_id =
> +					sched_ctx->workers[j].dev_id;
> +			mc_qp_ctx->workers[j].qp_id = i;
>  		}
>
> -		mc_qp_ctx->nb_slaves = sched_ctx->nb_slaves;
> +		mc_qp_ctx->nb_workers = sched_ctx->nb_workers;
>
>  		mc_qp_ctx->last_enq_worker_idx = 0;
>  		mc_qp_ctx->last_deq_worker_idx = 0;
> @@ -347,7 +349,7 @@ scheduler_create_private_ctx(struct rte_cryptodev *dev)
>  		mc_ctx->sched_enq_ring[i] = rte_ring_lookup(r_name);
>  		if (!mc_ctx->sched_enq_ring[i]) {
>  			mc_ctx->sched_enq_ring[i] = rte_ring_create(r_name,
> -						PER_SLAVE_BUFF_SIZE,
> +						PER_WORKER_BUFF_SIZE,
>  						rte_socket_id(),
>  						RING_F_SC_DEQ | RING_F_SP_ENQ);
>  			if (!mc_ctx->sched_enq_ring[i]) {
> @@ -361,7 +363,7 @@ scheduler_create_private_ctx(struct rte_cryptodev *dev)
>  		mc_ctx->sched_deq_ring[i] = rte_ring_lookup(r_name);
>  		if (!mc_ctx->sched_deq_ring[i]) {
>  			mc_ctx->sched_deq_ring[i] = rte_ring_create(r_name,
> -						PER_SLAVE_BUFF_SIZE,
> +						PER_WORKER_BUFF_SIZE,
>  						rte_socket_id(),
>  						RING_F_SC_DEQ | RING_F_SP_ENQ);
>  			if (!mc_ctx->sched_deq_ring[i]) {
> @@ -387,8 +389,8 @@ scheduler_create_private_ctx(struct rte_cryptodev *dev)
>  }
>
>  static struct rte_cryptodev_scheduler_ops scheduler_mc_ops = {
> -	slave_attach,
> -	slave_detach,
> +	worker_attach,
> +	worker_detach,
>  	scheduler_start,
>  	scheduler_stop,
>  	scheduler_config_qp,
> diff --git a/drivers/crypto/scheduler/scheduler_pkt_size_distr.c b/drivers/crypto/scheduler/scheduler_pkt_size_distr.c
> index 45c8dceb4..57e330a74 100644
> --- a/drivers/crypto/scheduler/scheduler_pkt_size_distr.c
> +++ b/drivers/crypto/scheduler/scheduler_pkt_size_distr.c
> @@ -9,10 +9,10 @@
>  #include "scheduler_pmd_private.h"
>
>  #define DEF_PKT_SIZE_THRESHOLD			(0xffffff80)
> -#define SLAVE_IDX_SWITCH_MASK			(0x01)
> -#define PRIMARY_SLAVE_IDX			0
> -#define SECONDARY_SLAVE_IDX			1
> -#define NB_PKT_SIZE_SLAVES			2
> +#define WORKER_IDX_SWITCH_MASK			(0x01)
> +#define PRIMARY_WORKER_IDX			0
> +#define SECONDARY_WORKER_IDX			1
> +#define NB_PKT_SIZE_WORKERS			2
>
>  /** pkt size based scheduler context */
>  struct psd_scheduler_ctx {
> @@ -21,15 +21,15 @@ struct psd_scheduler_ctx {
>
>  /** pkt size based scheduler queue pair context */
>  struct psd_scheduler_qp_ctx {
> -	struct scheduler_slave primary_slave;
> -	struct scheduler_slave secondary_slave;
> +	struct scheduler_worker primary_worker;
> +	struct scheduler_worker secondary_worker;
>  	uint32_t threshold;
>  	uint8_t deq_idx;
>  } __rte_cache_aligned;
>
>  /** scheduling operation variables' wrapping */
>  struct psd_schedule_op {
> -	uint8_t slave_idx;
> +	uint8_t worker_idx;
>  	uint16_t pos;
>  };
>
> @@ -38,13 +38,13 @@ schedule_enqueue(void *qp, struct rte_crypto_op **ops, uint16_t nb_ops)
>  {
>  	struct scheduler_qp_ctx *qp_ctx = qp;
>  	struct psd_scheduler_qp_ctx *psd_qp_ctx = qp_ctx->private_qp_ctx;
> -	struct rte_crypto_op *sched_ops[NB_PKT_SIZE_SLAVES][nb_ops];
> -	uint32_t in_flight_ops[NB_PKT_SIZE_SLAVES] = {
> -			psd_qp_ctx->primary_slave.nb_inflight_cops,
> -			psd_qp_ctx->secondary_slave.nb_inflight_cops
> +	struct rte_crypto_op *sched_ops[NB_PKT_SIZE_WORKERS][nb_ops];
> +	uint32_t in_flight_ops[NB_PKT_SIZE_WORKERS] = {
> +			psd_qp_ctx->primary_worker.nb_inflight_cops,
> +			psd_qp_ctx->secondary_worker.nb_inflight_cops
>  	};
> -	struct psd_schedule_op enq_ops[NB_PKT_SIZE_SLAVES] = {
> -		{PRIMARY_SLAVE_IDX, 0}, {SECONDARY_SLAVE_IDX, 0}
> +	struct psd_schedule_op enq_ops[NB_PKT_SIZE_WORKERS] = {
> +		{PRIMARY_WORKER_IDX, 0}, {SECONDARY_WORKER_IDX, 0}
>  	};
>  	struct psd_schedule_op *p_enq_op;
>  	uint16_t i, processed_ops_pri = 0, processed_ops_sec = 0;
> @@ -80,13 +80,13 @@ schedule_enqueue(void *qp, struct rte_crypto_op **ops, uint16_t nb_ops)
>  		/* stop schedule cops before the queue is full, this shall
>  		 * prevent the failed enqueue
>  		 */
> -		if (p_enq_op->pos + in_flight_ops[p_enq_op->slave_idx] ==
> +		if (p_enq_op->pos + in_flight_ops[p_enq_op->worker_idx] ==
>  				qp_ctx->max_nb_objs) {
>  			i = nb_ops;
>  			break;
>  		}
>
> -		sched_ops[p_enq_op->slave_idx][p_enq_op->pos] = ops[i];
> +		sched_ops[p_enq_op->worker_idx][p_enq_op->pos] = ops[i];
>  		p_enq_op->pos++;
>
>  		job_len = ops[i+1]->sym->cipher.data.length;
> @@ -94,13 +94,13 @@ schedule_enqueue(void *qp, struct rte_crypto_op **ops, uint16_t nb_ops)
>  				ops[i+1]->sym->auth.data.length;
>  		p_enq_op = &enq_ops[!(job_len & psd_qp_ctx->threshold)];
>
> -		if (p_enq_op->pos + in_flight_ops[p_enq_op->slave_idx] ==
> +		if (p_enq_op->pos + in_flight_ops[p_enq_op->worker_idx] ==
>  				qp_ctx->max_nb_objs) {
>  			i = nb_ops;
>  			break;
>  		}
>
> -		sched_ops[p_enq_op->slave_idx][p_enq_op->pos] = ops[i+1];
> +		sched_ops[p_enq_op->worker_idx][p_enq_op->pos] = ops[i+1];
>  		p_enq_op->pos++;
>
>  		job_len = ops[i+2]->sym->cipher.data.length;
> @@ -108,13 +108,13 @@ schedule_enqueue(void *qp, struct rte_crypto_op **ops, uint16_t nb_ops)
>  				ops[i+2]->sym->auth.data.length;
>  		p_enq_op = &enq_ops[!(job_len & psd_qp_ctx->threshold)];
>
> -		if (p_enq_op->pos + in_flight_ops[p_enq_op->slave_idx] ==
> +		if (p_enq_op->pos + in_flight_ops[p_enq_op->worker_idx] ==
>  				qp_ctx->max_nb_objs) {
>  			i = nb_ops;
>  			break;
>  		}
>
> -		sched_ops[p_enq_op->slave_idx][p_enq_op->pos] = ops[i+2];
> +		sched_ops[p_enq_op->worker_idx][p_enq_op->pos] = ops[i+2];
>  		p_enq_op->pos++;
>
>  		job_len = ops[i+3]->sym->cipher.data.length;
> @@ -122,13 +122,13 @@ schedule_enqueue(void *qp, struct rte_crypto_op **ops, uint16_t nb_ops)
>  				ops[i+3]->sym->auth.data.length;
>  		p_enq_op = &enq_ops[!(job_len & psd_qp_ctx->threshold)];
>
> -		if (p_enq_op->pos + in_flight_ops[p_enq_op->slave_idx] ==
> +		if (p_enq_op->pos + in_flight_ops[p_enq_op->worker_idx] ==
>  				qp_ctx->max_nb_objs) {
>  			i = nb_ops;
>  			break;
>  		}
>
> -		sched_ops[p_enq_op->slave_idx][p_enq_op->pos] = ops[i+3];
> +		sched_ops[p_enq_op->worker_idx][p_enq_op->pos] = ops[i+3];
>  		p_enq_op->pos++;
>  	}
>
> @@ -138,34 +138,34 @@ schedule_enqueue(void *qp, struct rte_crypto_op **ops, uint16_t nb_ops)
>  				ops[i]->sym->auth.data.length;
>  		p_enq_op = &enq_ops[!(job_len & psd_qp_ctx->threshold)];
>
> -		if (p_enq_op->pos + in_flight_ops[p_enq_op->slave_idx] ==
> +		if (p_enq_op->pos + in_flight_ops[p_enq_op->worker_idx] ==
>  				qp_ctx->max_nb_objs) {
>  			i = nb_ops;
>  			break;
>  		}
>
> -		sched_ops[p_enq_op->slave_idx][p_enq_op->pos] = ops[i];
> +		sched_ops[p_enq_op->worker_idx][p_enq_op->pos] = ops[i];
>  		p_enq_op->pos++;
>  	}
>
>  	processed_ops_pri = rte_cryptodev_enqueue_burst(
> -			psd_qp_ctx->primary_slave.dev_id,
> -			psd_qp_ctx->primary_slave.qp_id,
> -			sched_ops[PRIMARY_SLAVE_IDX],
> -			enq_ops[PRIMARY_SLAVE_IDX].pos);
> -	/* enqueue shall not fail as the slave queue is monitored */
> -	RTE_ASSERT(processed_ops_pri == enq_ops[PRIMARY_SLAVE_IDX].pos);
> +			psd_qp_ctx->primary_worker.dev_id,
> +			psd_qp_ctx->primary_worker.qp_id,
> +			sched_ops[PRIMARY_WORKER_IDX],
> +			enq_ops[PRIMARY_WORKER_IDX].pos);
> +	/* enqueue shall not fail as the worker queue is monitored */
> +	RTE_ASSERT(processed_ops_pri == enq_ops[PRIMARY_WORKER_IDX].pos);
>
> -	psd_qp_ctx->primary_slave.nb_inflight_cops += processed_ops_pri;
> +	psd_qp_ctx->primary_worker.nb_inflight_cops += processed_ops_pri;
>
>  	processed_ops_sec = rte_cryptodev_enqueue_burst(
> -			psd_qp_ctx->secondary_slave.dev_id,
> -			psd_qp_ctx->secondary_slave.qp_id,
> -			sched_ops[SECONDARY_SLAVE_IDX],
> -			enq_ops[SECONDARY_SLAVE_IDX].pos);
> -	RTE_ASSERT(processed_ops_sec == enq_ops[SECONDARY_SLAVE_IDX].pos);
> +			psd_qp_ctx->secondary_worker.dev_id,
> +			psd_qp_ctx->secondary_worker.qp_id,
> +			sched_ops[SECONDARY_WORKER_IDX],
> +			enq_ops[SECONDARY_WORKER_IDX].pos);
> +	RTE_ASSERT(processed_ops_sec == enq_ops[SECONDARY_WORKER_IDX].pos);
>
> -	psd_qp_ctx->secondary_slave.nb_inflight_cops += processed_ops_sec;
> +	psd_qp_ctx->secondary_worker.nb_inflight_cops += processed_ops_sec;
>
>  	return processed_ops_pri + processed_ops_sec;
>  }
> @@ -191,33 +191,33 @@ schedule_dequeue(void *qp, struct rte_crypto_op **ops, uint16_t nb_ops)
>  {
>  	struct psd_scheduler_qp_ctx *qp_ctx =
>  			((struct scheduler_qp_ctx *)qp)->private_qp_ctx;
> -	struct scheduler_slave *slaves[NB_PKT_SIZE_SLAVES] = {
> -			&qp_ctx->primary_slave, &qp_ctx->secondary_slave};
> -	struct scheduler_slave *slave = slaves[qp_ctx->deq_idx];
> +	struct scheduler_worker *workers[NB_PKT_SIZE_WORKERS] = {
> +			&qp_ctx->primary_worker, &qp_ctx->secondary_worker};
> +	struct scheduler_worker *worker = workers[qp_ctx->deq_idx];
>  	uint16_t nb_deq_ops_pri = 0, nb_deq_ops_sec = 0;
>
> -	if (slave->nb_inflight_cops) {
> -		nb_deq_ops_pri = rte_cryptodev_dequeue_burst(slave->dev_id,
> -			slave->qp_id, ops, nb_ops);
> -		slave->nb_inflight_cops -= nb_deq_ops_pri;
> +	if (worker->nb_inflight_cops) {
> +		nb_deq_ops_pri = rte_cryptodev_dequeue_burst(worker->dev_id,
> +			worker->qp_id, ops, nb_ops);
> +		worker->nb_inflight_cops -= nb_deq_ops_pri;
>  	}
>
> -	qp_ctx->deq_idx = (~qp_ctx->deq_idx) & SLAVE_IDX_SWITCH_MASK;
> +	qp_ctx->deq_idx = (~qp_ctx->deq_idx) & WORKER_IDX_SWITCH_MASK;
>
>  	if (nb_deq_ops_pri == nb_ops)
>  		return nb_deq_ops_pri;
>
> -	slave = slaves[qp_ctx->deq_idx];
> +	worker = workers[qp_ctx->deq_idx];
>
> -	if (slave->nb_inflight_cops) {
> -		nb_deq_ops_sec = rte_cryptodev_dequeue_burst(slave->dev_id,
> -				slave->qp_id, &ops[nb_deq_ops_pri],
> +	if (worker->nb_inflight_cops) {
> +		nb_deq_ops_sec = rte_cryptodev_dequeue_burst(worker->dev_id,
> +				worker->qp_id, &ops[nb_deq_ops_pri],
>  				nb_ops - nb_deq_ops_pri);
> -		slave->nb_inflight_cops -= nb_deq_ops_sec;
> +		worker->nb_inflight_cops -= nb_deq_ops_sec;
>
> -		if (!slave->nb_inflight_cops)
> +		if (!worker->nb_inflight_cops)
>  			qp_ctx->deq_idx = (~qp_ctx->deq_idx) &
> -					SLAVE_IDX_SWITCH_MASK;
> +					WORKER_IDX_SWITCH_MASK;
>  	}
>
>  	return nb_deq_ops_pri + nb_deq_ops_sec;
> @@ -236,15 +236,15 @@ schedule_dequeue_ordering(void *qp, struct rte_crypto_op **ops,
>  }
>
>  static int
> -slave_attach(__rte_unused struct rte_cryptodev *dev,
> -		__rte_unused uint8_t slave_id)
> +worker_attach(__rte_unused struct rte_cryptodev *dev,
> +		__rte_unused uint8_t worker_id)
>  {
>  	return 0;
>  }
>
>  static int
> -slave_detach(__rte_unused struct rte_cryptodev *dev,
> -		__rte_unused uint8_t slave_id)
> +worker_detach(__rte_unused struct rte_cryptodev *dev,
> +		__rte_unused uint8_t worker_id)
>  {
>  	return 0;
>  }
> @@ -256,9 +256,9 @@ scheduler_start(struct rte_cryptodev *dev)
>  	struct psd_scheduler_ctx *psd_ctx = sched_ctx->private_ctx;
>  	uint16_t i;
>
> -	/* for packet size based scheduler, nb_slaves have to >= 2 */
> -	if (sched_ctx->nb_slaves < NB_PKT_SIZE_SLAVES) {
> -		CR_SCHED_LOG(ERR, "not enough slaves to start");
> +	/* for packet size based scheduler, nb_workers has to be >= 2 */
> +	if (sched_ctx->nb_workers < NB_PKT_SIZE_WORKERS) {
> +		CR_SCHED_LOG(ERR, "not enough workers to start");
>  		return -1;
>  	}
>
> @@ -267,15 +267,15 @@ scheduler_start(struct rte_cryptodev *dev)
>  		struct psd_scheduler_qp_ctx *ps_qp_ctx =
>  				qp_ctx->private_qp_ctx;
>
> -		ps_qp_ctx->primary_slave.dev_id =
> -				sched_ctx->slaves[PRIMARY_SLAVE_IDX].dev_id;
> -		ps_qp_ctx->primary_slave.qp_id = i;
> -		ps_qp_ctx->primary_slave.nb_inflight_cops = 0;
> +		ps_qp_ctx->primary_worker.dev_id =
> +				sched_ctx->workers[PRIMARY_WORKER_IDX].dev_id;
> +		ps_qp_ctx->primary_worker.qp_id = i;
> +		ps_qp_ctx->primary_worker.nb_inflight_cops = 0;
>
> -		ps_qp_ctx->secondary_slave.dev_id =
> -				sched_ctx->slaves[SECONDARY_SLAVE_IDX].dev_id;
> -		ps_qp_ctx->secondary_slave.qp_id = i;
> -		ps_qp_ctx->secondary_slave.nb_inflight_cops = 0;
> +		ps_qp_ctx->secondary_worker.dev_id =
> +				sched_ctx->workers[SECONDARY_WORKER_IDX].dev_id;
> +		ps_qp_ctx->secondary_worker.qp_id = i;
> +		ps_qp_ctx->secondary_worker.nb_inflight_cops = 0;
>
>  		ps_qp_ctx->threshold = psd_ctx->threshold;
>  	}
> @@ -300,9 +300,9 @@ scheduler_stop(struct rte_cryptodev *dev)
>  		struct scheduler_qp_ctx *qp_ctx = dev->data->queue_pairs[i];
>  		struct psd_scheduler_qp_ctx *ps_qp_ctx = qp_ctx->private_qp_ctx;
>
> -		if (ps_qp_ctx->primary_slave.nb_inflight_cops +
> -				ps_qp_ctx->secondary_slave.nb_inflight_cops) {
> -			CR_SCHED_LOG(ERR, "Some crypto ops left in slave queue");
> +		if (ps_qp_ctx->primary_worker.nb_inflight_cops +
> +				ps_qp_ctx->secondary_worker.nb_inflight_cops) {
> +			CR_SCHED_LOG(ERR, "Some crypto ops left in worker queue");
>  			return -1;
>  		}
>  	}
> @@ -399,8 +399,8 @@ scheduler_option_get(struct rte_cryptodev *dev, uint32_t option_type,
>  }
>
>  static struct rte_cryptodev_scheduler_ops scheduler_ps_ops = {
> -	slave_attach,
> -	slave_detach,
> +	worker_attach,
> +	worker_detach,
>  	scheduler_start,
>  	scheduler_stop,
>  	scheduler_config_qp,
> diff --git a/drivers/crypto/scheduler/scheduler_pmd.c b/drivers/crypto/scheduler/scheduler_pmd.c
> index a1632a2b9..632197833 100644
> --- a/drivers/crypto/scheduler/scheduler_pmd.c
> +++ b/drivers/crypto/scheduler/scheduler_pmd.c
> @@ -18,18 +18,18 @@ uint8_t cryptodev_scheduler_driver_id;
>
>  struct scheduler_init_params {
>  	struct rte_cryptodev_pmd_init_params def_p;
> -	uint32_t nb_slaves;
> +	uint32_t nb_workers;
>  	enum rte_cryptodev_scheduler_mode mode;
>  	char mode_param_str[RTE_CRYPTODEV_SCHEDULER_NAME_MAX_LEN];
>  	uint32_t enable_ordering;
>  	uint16_t wc_pool[RTE_MAX_LCORE];
>  	uint16_t nb_wc;
> -	char slave_names[RTE_CRYPTODEV_SCHEDULER_MAX_NB_SLAVES]
> +	char worker_names[RTE_CRYPTODEV_SCHEDULER_MAX_NB_WORKERS]
>  			[RTE_CRYPTODEV_SCHEDULER_NAME_MAX_LEN];
>  };
>
>  #define RTE_CRYPTODEV_VDEV_NAME			("name")
> -#define RTE_CRYPTODEV_VDEV_SLAVE		("slave")
> +#define RTE_CRYPTODEV_VDEV_WORKER		("worker")
>  #define RTE_CRYPTODEV_VDEV_MODE			("mode")
>  #define RTE_CRYPTODEV_VDEV_MODE_PARAM		("mode_param")
>  #define RTE_CRYPTODEV_VDEV_ORDERING		("ordering")
> @@ -40,7 +40,7 @@ struct scheduler_init_params {
>
>  static const char * const scheduler_valid_params[] = {
>  	RTE_CRYPTODEV_VDEV_NAME,
> -	RTE_CRYPTODEV_VDEV_SLAVE,
> +	RTE_CRYPTODEV_VDEV_WORKER,
>  	RTE_CRYPTODEV_VDEV_MODE,
>  	RTE_CRYPTODEV_VDEV_MODE_PARAM,
>  	RTE_CRYPTODEV_VDEV_ORDERING,
> @@ -193,31 +193,31 @@ cryptodev_scheduler_create(const char *name,
>  		break;
>  	}
>
> -	for (i = 0; i < init_params->nb_slaves; i++) {
> -		sched_ctx->init_slave_names[sched_ctx->nb_init_slaves] =
> +	for (i = 0; i < init_params->nb_workers; i++) {
> +		sched_ctx->init_worker_names[sched_ctx->nb_init_workers] =
>  			rte_zmalloc_socket(
>  				NULL,
>  				RTE_CRYPTODEV_SCHEDULER_NAME_MAX_LEN, 0,
>  				SOCKET_ID_ANY);
>
> -		if (!sched_ctx->init_slave_names[
> -				sched_ctx->nb_init_slaves]) {
> +		if (!sched_ctx->init_worker_names[
> +				sched_ctx->nb_init_workers]) {
>  			CR_SCHED_LOG(ERR, "driver %s: Insufficient memory",
>  					name);
>  			return -ENOMEM;
>  		}
>
> -		strncpy(sched_ctx->init_slave_names[
> -					sched_ctx->nb_init_slaves],
> -				init_params->slave_names[i],
> +		strncpy(sched_ctx->init_worker_names[
> +					sched_ctx->nb_init_workers],
> +				init_params->worker_names[i],
>  				RTE_CRYPTODEV_SCHEDULER_NAME_MAX_LEN - 1);
>
> -		sched_ctx->nb_init_slaves++;
> +		sched_ctx->nb_init_workers++;
>  	}
>
>  	/*
>  	 * Initialize capabilities structure as an empty structure,
> -	 * in case device information is requested when no slaves are attached
> +	 * in case device information is requested when no workers are attached
>  	 */
>  	sched_ctx->capabilities = rte_zmalloc_socket(NULL,
>  			sizeof(struct rte_cryptodev_capabilities),
> @@ -249,12 +249,12 @@ cryptodev_scheduler_remove(struct rte_vdev_device *vdev)
>
>  	sched_ctx = dev->data->dev_private;
>
> -	if (sched_ctx->nb_slaves) {
> +	if (sched_ctx->nb_workers) {
>  		uint32_t i;
>
> -		for (i = 0; i < sched_ctx->nb_slaves; i++)
> -			rte_cryptodev_scheduler_slave_detach(dev->data->dev_id,
> -					sched_ctx->slaves[i].dev_id);
> +		for (i = 0; i < sched_ctx->nb_workers; i++)
> +			rte_cryptodev_scheduler_worker_detach(dev->data->dev_id,
> +					sched_ctx->workers[i].dev_id);
>  	}
>
>  	return rte_cryptodev_pmd_destroy(dev);
> @@ -374,19 +374,19 @@ parse_name_arg(const char *key __rte_unused,
>  	return 0;
>  }
>
> -/** Parse slave */
> +/** Parse worker */
>  static int
> -parse_slave_arg(const char *key __rte_unused,
> +parse_worker_arg(const char *key __rte_unused,
>  		const char *value, void *extra_args)
>  {
>  	struct scheduler_init_params *param = extra_args;
>
> -	if (param->nb_slaves >= RTE_CRYPTODEV_SCHEDULER_MAX_NB_SLAVES) {
> -		CR_SCHED_LOG(ERR, "Too many slaves.");
> +	if (param->nb_workers >= RTE_CRYPTODEV_SCHEDULER_MAX_NB_WORKERS) {
> +		CR_SCHED_LOG(ERR, "Too many workers.");
>  		return -ENOMEM;
>  	}
>
> -	strncpy(param->slave_names[param->nb_slaves++], value,
> +	strncpy(param->worker_names[param->nb_workers++], value,
>  			RTE_CRYPTODEV_SCHEDULER_NAME_MAX_LEN - 1);
>
>  	return 0;
> @@ -498,8 +498,8 @@ scheduler_parse_init_params(struct scheduler_init_params *params,
>  		if (ret < 0)
>  			goto free_kvlist;
>
> -		ret = rte_kvargs_process(kvlist, RTE_CRYPTODEV_VDEV_SLAVE,
> -				&parse_slave_arg, params);
> +		ret = rte_kvargs_process(kvlist, RTE_CRYPTODEV_VDEV_WORKER,
> +				&parse_worker_arg, params);
>  		if (ret < 0)
>  			goto free_kvlist;
>
> @@ -534,10 +534,10 @@ cryptodev_scheduler_probe(struct rte_vdev_device *vdev)
>  			rte_socket_id(),
>  			RTE_CRYPTODEV_PMD_DEFAULT_MAX_NB_QUEUE_PAIRS
>  		},
> -		.nb_slaves = 0,
> +		.nb_workers = 0,
>  		.mode = CDEV_SCHED_MODE_NOT_SET,
>  		.enable_ordering = 0,
> -		.slave_names = { {0} }
> +		.worker_names = { {0} }
>  	};
>  	const char *name;
>
> @@ -566,7 +566,7 @@ RTE_PMD_REGISTER_VDEV(CRYPTODEV_NAME_SCHEDULER_PMD,
>  RTE_PMD_REGISTER_PARAM_STRING(CRYPTODEV_NAME_SCHEDULER_PMD,
>  	"max_nb_queue_pairs=<int> "
>  	"socket_id=<int> "
> -	"slave=<name>");
> +	"worker=<name>");
>  RTE_PMD_REGISTER_CRYPTO_DRIVER(scheduler_crypto_drv,
>  		cryptodev_scheduler_pmd_drv.driver,
>  		cryptodev_scheduler_driver_id);
> diff --git a/drivers/crypto/scheduler/scheduler_pmd_ops.c b/drivers/crypto/scheduler/scheduler_pmd_ops.c
> index 14e5a3712..cb125e802 100644
> --- a/drivers/crypto/scheduler/scheduler_pmd_ops.c
> +++ b/drivers/crypto/scheduler/scheduler_pmd_ops.c
> @@ -12,43 +12,43 @@
>
>  #include "scheduler_pmd_private.h"
>
> -/** attaching the slaves predefined by scheduler's EAL options */
> +/** attaching the workers predefined by scheduler's EAL options */
>  static int
> -scheduler_attach_init_slave(struct rte_cryptodev *dev)
> +scheduler_attach_init_worker(struct rte_cryptodev *dev)
>  {
>  	struct scheduler_ctx *sched_ctx = dev->data->dev_private;
>  	uint8_t scheduler_id = dev->data->dev_id;
>  	int i;
>
> -	for (i = sched_ctx->nb_init_slaves - 1; i >= 0; i--) {
> -		const char *dev_name = sched_ctx->init_slave_names[i];
> -		struct rte_cryptodev *slave_dev =
> +	for (i = sched_ctx->nb_init_workers - 1; i >= 0; i--) {
> +		const char *dev_name = sched_ctx->init_worker_names[i];
> +		struct rte_cryptodev *worker_dev =
>  				rte_cryptodev_pmd_get_named_dev(dev_name);
>  		int status;
>
> -		if (!slave_dev) {
> -			CR_SCHED_LOG(ERR, "Failed to locate slave dev %s",
> +		if (!worker_dev) {
> +			CR_SCHED_LOG(ERR, "Failed to locate worker dev %s",
>  					dev_name);
>  			return -EINVAL;
>  		}
>
> -		status = rte_cryptodev_scheduler_slave_attach(
> -				scheduler_id, slave_dev->data->dev_id);
> +		status = rte_cryptodev_scheduler_worker_attach(
> +				scheduler_id, worker_dev->data->dev_id);
>
>  		if (status < 0) {
> -			CR_SCHED_LOG(ERR, "Failed to attach slave cryptodev %u",
> -					slave_dev->data->dev_id);
> +			CR_SCHED_LOG(ERR, "Failed to attach worker cryptodev %u",
> +					worker_dev->data->dev_id);
>  			return status;
>  		}
>
> -		CR_SCHED_LOG(INFO, "Scheduler %s attached slave %s",
> +		CR_SCHED_LOG(INFO, "Scheduler %s attached worker %s",
>  				dev->data->name,
> -				sched_ctx->init_slave_names[i]);
> +				sched_ctx->init_worker_names[i]);
>
> -		rte_free(sched_ctx->init_slave_names[i]);
> -		sched_ctx->init_slave_names[i] = NULL;
> +		rte_free(sched_ctx->init_worker_names[i]);
> +		sched_ctx->init_worker_names[i] = NULL;
>
> -		sched_ctx->nb_init_slaves -= 1;
> +		sched_ctx->nb_init_workers -= 1;
>  	}
>
>  	return 0;
> @@ -62,17 +62,17 @@ scheduler_pmd_config(struct rte_cryptodev *dev,
>  	uint32_t i;
>  	int ret;
>
> -	/* although scheduler_attach_init_slave presents multiple times,
> +	/* although scheduler_attach_init_worker presents multiple times,
>  	 * there will be only 1 meaningful execution.
>  	 */
> -	ret = scheduler_attach_init_slave(dev);
> +	ret = scheduler_attach_init_worker(dev);
>  	if (ret < 0)
>  		return ret;
>
> -	for (i = 0; i < sched_ctx->nb_slaves; i++) {
> -		uint8_t slave_dev_id = sched_ctx->slaves[i].dev_id;
> +	for (i = 0; i < sched_ctx->nb_workers; i++) {
> +		uint8_t worker_dev_id = sched_ctx->workers[i].dev_id;
>
> -		ret = rte_cryptodev_configure(slave_dev_id, config);
> +		ret = rte_cryptodev_configure(worker_dev_id, config);
>  		if (ret < 0)
>  			break;
>  	}
> @@ -89,7 +89,7 @@ update_order_ring(struct rte_cryptodev *dev, uint16_t qp_id)
>  	if (sched_ctx->reordering_enabled) {
>  		char order_ring_name[RTE_CRYPTODEV_NAME_MAX_LEN];
>  		uint32_t buff_size = rte_align32pow2(
> -			sched_ctx->nb_slaves * PER_SLAVE_BUFF_SIZE);
> +			sched_ctx->nb_workers * PER_WORKER_BUFF_SIZE);
>
>  		if (qp_ctx->order_ring) {
>  			rte_ring_free(qp_ctx->order_ring);
> @@ -135,10 +135,10 @@ scheduler_pmd_start(struct rte_cryptodev *dev)
>  	if (dev->data->dev_started)
>  		return 0;
>
> -	/* although scheduler_attach_init_slave presents multiple times,
> +	/* although scheduler_attach_init_worker presents multiple times,
>  	 * there will be only 1 meaningful execution.
>  	 */
> -	ret = scheduler_attach_init_slave(dev);
> +	ret = scheduler_attach_init_worker(dev);
>  	if (ret < 0)
>  		return ret;
>
> @@ -155,18 +155,18 @@ scheduler_pmd_start(struct rte_cryptodev *dev)
>  		return -1;
>  	}
>
> -	if (!sched_ctx->nb_slaves) {
> -		CR_SCHED_LOG(ERR, "No slave in the scheduler");
> +	if (!sched_ctx->nb_workers) {
> +		CR_SCHED_LOG(ERR, "No worker in the scheduler");
>  		return -1;
>  	}
>
> -	RTE_FUNC_PTR_OR_ERR_RET(*sched_ctx->ops.slave_attach, -ENOTSUP);
> +	RTE_FUNC_PTR_OR_ERR_RET(*sched_ctx->ops.worker_attach, -ENOTSUP);
>
> -	for (i = 0; i < sched_ctx->nb_slaves; i++) {
> -		uint8_t slave_dev_id = sched_ctx->slaves[i].dev_id;
> +	for (i = 0; i < sched_ctx->nb_workers; i++) {
> +		uint8_t worker_dev_id = sched_ctx->workers[i].dev_id;
>
> -		if ((*sched_ctx->ops.slave_attach)(dev, slave_dev_id) < 0) {
> -			CR_SCHED_LOG(ERR, "Failed to attach slave");
> +		if ((*sched_ctx->ops.worker_attach)(dev, worker_dev_id) < 0) {
> +			CR_SCHED_LOG(ERR, "Failed to attach worker");
>  			return -ENOTSUP;
>  		}
>  	}
> @@ -178,16 +178,16 @@ scheduler_pmd_start(struct rte_cryptodev *dev)
>  		return -1;
>  	}
>
> -	/* start all slaves */
> -	for (i = 0; i < sched_ctx->nb_slaves; i++) {
> -		uint8_t slave_dev_id = sched_ctx->slaves[i].dev_id;
> -		struct rte_cryptodev *slave_dev =
> -				rte_cryptodev_pmd_get_dev(slave_dev_id);
> +	/* start all workers */
> +	for (i = 0; i < sched_ctx->nb_workers; i++) {
> +		uint8_t worker_dev_id = sched_ctx->workers[i].dev_id;
> +		struct rte_cryptodev *worker_dev =
> +				rte_cryptodev_pmd_get_dev(worker_dev_id);
>
> -		ret = (*slave_dev->dev_ops->dev_start)(slave_dev);
> +		ret = (*worker_dev->dev_ops->dev_start)(worker_dev);
>  		if (ret < 0) {
> -			CR_SCHED_LOG(ERR, "Failed to start slave dev %u",
> -					slave_dev_id);
> +			CR_SCHED_LOG(ERR, "Failed to start worker dev %u",
> +					worker_dev_id);
>  			return ret;
>  		}
>  	}
> @@ -205,23 +205,23 @@ scheduler_pmd_stop(struct rte_cryptodev *dev)
>  	if (!dev->data->dev_started)
>  		return;
>
> -	/* stop all slaves first */
> -	for (i = 0; i < sched_ctx->nb_slaves; i++) {
> -		uint8_t slave_dev_id = sched_ctx->slaves[i].dev_id;
> -		struct rte_cryptodev *slave_dev =
> -				rte_cryptodev_pmd_get_dev(slave_dev_id);
> +	/* stop all workers first */
> +	for (i = 0; i < sched_ctx->nb_workers; i++) {
> +		uint8_t worker_dev_id = sched_ctx->workers[i].dev_id;
> +		struct rte_cryptodev *worker_dev =
> +				rte_cryptodev_pmd_get_dev(worker_dev_id);
>
> -		(*slave_dev->dev_ops->dev_stop)(slave_dev);
> +		(*worker_dev->dev_ops->dev_stop)(worker_dev);
>  	}
>
>  	if (*sched_ctx->ops.scheduler_stop)
>  		(*sched_ctx->ops.scheduler_stop)(dev);
>
> -	for (i = 0; i < sched_ctx->nb_slaves; i++) {
> -		uint8_t slave_dev_id = sched_ctx->slaves[i].dev_id;
> +	for (i = 0; i < sched_ctx->nb_workers; i++) {
> +		uint8_t worker_dev_id = sched_ctx->workers[i].dev_id;
>
> -		if (*sched_ctx->ops.slave_detach)
> -			(*sched_ctx->ops.slave_detach)(dev, slave_dev_id);
> +		if (*sched_ctx->ops.worker_detach)
> +			(*sched_ctx->ops.worker_detach)(dev, worker_dev_id);
>  	}
>  }
>
> @@ -237,13 +237,13 @@ scheduler_pmd_close(struct rte_cryptodev *dev)
>  	if (dev->data->dev_started)
>  		return -EBUSY;
>
> -	/* close all slaves first */
> -	for (i = 0; i < sched_ctx->nb_slaves; i++) {
> -		uint8_t slave_dev_id = sched_ctx->slaves[i].dev_id;
> -		struct rte_cryptodev *slave_dev =
> -				rte_cryptodev_pmd_get_dev(slave_dev_id);
> +	/* close all workers first */
> +	for (i = 0; i < sched_ctx->nb_workers; i++) {
> +		uint8_t worker_dev_id = sched_ctx->workers[i].dev_id;
> +		struct rte_cryptodev *worker_dev =
> +				rte_cryptodev_pmd_get_dev(worker_dev_id);
>
> -		ret = (*slave_dev->dev_ops->dev_close)(slave_dev);
> +		ret = (*worker_dev->dev_ops->dev_close)(worker_dev);
>  		if (ret < 0)
>  			return ret;
>  	}
> @@ -283,19 +283,19 @@ scheduler_pmd_stats_get(struct rte_cryptodev *dev,
>  	struct scheduler_ctx *sched_ctx = dev->data->dev_private;
>  	uint32_t i;
>
> -	for (i = 0; i < sched_ctx->nb_slaves; i++) {
> -		uint8_t slave_dev_id = sched_ctx->slaves[i].dev_id;
> -		struct rte_cryptodev *slave_dev =
> -				rte_cryptodev_pmd_get_dev(slave_dev_id);
> -		struct rte_cryptodev_stats slave_stats = {0};
> +	for (i = 0; i < sched_ctx->nb_workers; i++) {
> +		uint8_t worker_dev_id = sched_ctx->workers[i].dev_id;
> +		struct rte_cryptodev *worker_dev =
> +				rte_cryptodev_pmd_get_dev(worker_dev_id);
> +		struct rte_cryptodev_stats worker_stats = {0};
>
> -		(*slave_dev->dev_ops->stats_get)(slave_dev, &slave_stats);
> +		(*worker_dev->dev_ops->stats_get)(worker_dev, &worker_stats);
>
> -		stats->enqueued_count += slave_stats.enqueued_count;
> -		stats->dequeued_count += slave_stats.dequeued_count;
> +		stats->enqueued_count += worker_stats.enqueued_count;
> +		stats->dequeued_count += worker_stats.dequeued_count;
>
> -		stats->enqueue_err_count += slave_stats.enqueue_err_count;
> -		stats->dequeue_err_count += slave_stats.dequeue_err_count;
> +		stats->enqueue_err_count += worker_stats.enqueue_err_count;
> +		stats->dequeue_err_count += worker_stats.dequeue_err_count;
>  	}
>  }
>
> @@ -306,12 +306,12 @@ scheduler_pmd_stats_reset(struct rte_cryptodev *dev)
>  	struct scheduler_ctx *sched_ctx = dev->data->dev_private;
>  	uint32_t i;
>
> -	for (i = 0; i < sched_ctx->nb_slaves; i++) {
> -		uint8_t slave_dev_id = sched_ctx->slaves[i].dev_id;
> -		struct rte_cryptodev *slave_dev =
> -				rte_cryptodev_pmd_get_dev(slave_dev_id);
> +	for (i = 0; i < sched_ctx->nb_workers; i++) {
> +		uint8_t worker_dev_id = sched_ctx->workers[i].dev_id;
> +		struct rte_cryptodev *worker_dev =
> +				rte_cryptodev_pmd_get_dev(worker_dev_id);
>
> -		(*slave_dev->dev_ops->stats_reset)(slave_dev);
> +		(*worker_dev->dev_ops->stats_reset)(worker_dev);
>  	}
>  }
>
> @@ -329,32 +329,32 @@ scheduler_pmd_info_get(struct rte_cryptodev *dev,
>  	if (!dev_info)
>  		return;
>
> -	/* although scheduler_attach_init_slave presents multiple times,
> +	/* although scheduler_attach_init_worker presents multiple times,
>  	 * there will be only 1 meaningful execution.
>  	 */
> -	scheduler_attach_init_slave(dev);
> +	scheduler_attach_init_worker(dev);
>
> -	for (i = 0; i < sched_ctx->nb_slaves; i++) {
> -		uint8_t slave_dev_id = sched_ctx->slaves[i].dev_id;
> -		struct rte_cryptodev_info slave_info;
> +	for (i = 0; i < sched_ctx->nb_workers; i++) {
> +		uint8_t worker_dev_id = sched_ctx->workers[i].dev_id;
> +		struct rte_cryptodev_info worker_info;
>
> -		rte_cryptodev_info_get(slave_dev_id, &slave_info);
> -		uint32_t dev_max_sess = slave_info.sym.max_nb_sessions;
> +		rte_cryptodev_info_get(worker_dev_id, &worker_info);
> +		uint32_t dev_max_sess = worker_info.sym.max_nb_sessions;
>  		if (dev_max_sess != 0) {
>  			if (max_nb_sess == 0 ||	dev_max_sess < max_nb_sess)
> -				max_nb_sess = slave_info.sym.max_nb_sessions;
> +				max_nb_sess = worker_info.sym.max_nb_sessions;
>  		}
>
> -		/* Get the max headroom requirement among slave PMDs */
> -		headroom_sz = slave_info.min_mbuf_headroom_req >
> +		/* Get the max headroom requirement among worker PMDs */
> +		headroom_sz = worker_info.min_mbuf_headroom_req >
>  				headroom_sz ?
> -				slave_info.min_mbuf_headroom_req :
> +				worker_info.min_mbuf_headroom_req :
>  				headroom_sz;
>
> -		/* Get the max tailroom requirement among slave PMDs */
> -		tailroom_sz = slave_info.min_mbuf_tailroom_req >
> +		/* Get the max tailroom requirement among worker PMDs */
> +		tailroom_sz = worker_info.min_mbuf_tailroom_req >
>  				tailroom_sz ?
> -				slave_info.min_mbuf_tailroom_req :
> +				worker_info.min_mbuf_tailroom_req :
>  				tailroom_sz;
>  	}
>
> @@ -409,15 +409,15 @@ scheduler_pmd_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
>  	if (dev->data->queue_pairs[qp_id] != NULL)
>  		scheduler_pmd_qp_release(dev, qp_id);
>
> -	for (i = 0; i < sched_ctx->nb_slaves; i++) {
> -		uint8_t slave_id = sched_ctx->slaves[i].dev_id;
> +	for (i = 0; i < sched_ctx->nb_workers; i++) {
> +		uint8_t worker_id = sched_ctx->workers[i].dev_id;
>
>  		/*
> -		 * All slaves will share the same session mempool
> +		 * All workers will share the same session mempool
>  		 * for session-less operations, so the objects
>  		 * must be big enough for all the drivers used.
>  		 */
> -		ret = rte_cryptodev_queue_pair_setup(slave_id, qp_id,
> +		ret = rte_cryptodev_queue_pair_setup(worker_id, qp_id,
>  				qp_conf, socket_id);
>  		if (ret < 0)
>  			return ret;
> @@ -434,12 +434,12 @@ scheduler_pmd_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
>
>  	dev->data->queue_pairs[qp_id] = qp_ctx;
>
> -	/* although scheduler_attach_init_slave presents multiple times,
> +	/* although scheduler_attach_init_worker presents multiple times,
>  	 * there will be only 1 meaningful execution.
>  	 */
> -	ret = scheduler_attach_init_slave(dev);
> +	ret = scheduler_attach_init_worker(dev);
>  	if (ret < 0) {
> -		CR_SCHED_LOG(ERR, "Failed to attach slave");
> +		CR_SCHED_LOG(ERR, "Failed to attach worker");
>  		scheduler_pmd_qp_release(dev, qp_id);
>  		return ret;
>  	}
> @@ -461,10 +461,10 @@ scheduler_pmd_sym_session_get_size(struct rte_cryptodev *dev __rte_unused)
>  	uint8_t i = 0;
>  	uint32_t max_priv_sess_size = 0;
>
> -	/* Check what is the maximum private session size for all slaves */
> -	for (i = 0; i < sched_ctx->nb_slaves; i++) {
> -		uint8_t slave_dev_id = sched_ctx->slaves[i].dev_id;
> -		struct rte_cryptodev *dev = &rte_cryptodevs[slave_dev_id];
> +	/* Check what is the maximum private session size for all workers */
> +	for (i = 0; i < sched_ctx->nb_workers; i++) {
> +		uint8_t worker_dev_id = sched_ctx->workers[i].dev_id;
> +		struct rte_cryptodev *dev = &rte_cryptodevs[worker_dev_id];
>  		uint32_t priv_sess_size = (*dev->dev_ops->sym_session_get_size)(dev);
>
>  		if (max_priv_sess_size < priv_sess_size)
> @@ -484,10 +484,10 @@ scheduler_pmd_sym_session_configure(struct rte_cryptodev *dev,
>  	uint32_t i;
>  	int ret;
>
> -	for (i = 0; i < sched_ctx->nb_slaves; i++) {
> -		struct scheduler_slave *slave = &sched_ctx->slaves[i];
> +	for (i = 0; i < sched_ctx->nb_workers; i++) {
> +		struct scheduler_worker *worker = &sched_ctx->workers[i];
>
> -		ret = rte_cryptodev_sym_session_init(slave->dev_id, sess,
> +		ret = rte_cryptodev_sym_session_init(worker->dev_id, sess,
>  					xform, mempool);
>  		if (ret < 0) {
>  			CR_SCHED_LOG(ERR, "unable to config sym session");
> @@ -506,11 +506,11 @@ scheduler_pmd_sym_session_clear(struct rte_cryptodev *dev,
>  	struct scheduler_ctx *sched_ctx = dev->data->dev_private;
>  	uint32_t i;
>
> -	/* Clear private data of slaves */
> -	for (i = 0; i < sched_ctx->nb_slaves; i++) {
> -		struct scheduler_slave *slave = &sched_ctx->slaves[i];
> +	/* Clear private data of workers */
> +	for (i = 0; i < sched_ctx->nb_workers; i++) {
> +		struct scheduler_worker *worker = &sched_ctx->workers[i];
>
> -		rte_cryptodev_sym_session_clear(slave->dev_id, sess);
> +		rte_cryptodev_sym_session_clear(worker->dev_id, sess);
>  	}
>  }
>
> diff --git a/drivers/crypto/scheduler/scheduler_pmd_private.h b/drivers/crypto/scheduler/scheduler_pmd_private.h
> index e1531d1da..adb4eb063 100644
> --- a/drivers/crypto/scheduler/scheduler_pmd_private.h
> +++ b/drivers/crypto/scheduler/scheduler_pmd_private.h
> @@ -10,7 +10,7 @@
>  #define CRYPTODEV_NAME_SCHEDULER_PMD	crypto_scheduler
>  /**< Scheduler Crypto PMD device name */
>
> -#define PER_SLAVE_BUFF_SIZE			(256)
> +#define PER_WORKER_BUFF_SIZE			(256)
>
>  extern int scheduler_logtype_driver;
>
> @@ -18,7 +18,7 @@ extern int scheduler_logtype_driver;
>  	rte_log(RTE_LOG_ ## level, scheduler_logtype_driver,		\
>  			"%s() line %u: "fmt "\n", __func__, __LINE__, ##args)
>
> -struct scheduler_slave {
> +struct scheduler_worker {
>  	uint8_t dev_id;
>  	uint16_t qp_id;
>  	uint32_t nb_inflight_cops;
> @@ -35,8 +35,8 @@ struct scheduler_ctx {
>
>  	uint32_t max_nb_queue_pairs;
>
> -	struct scheduler_slave slaves[RTE_CRYPTODEV_SCHEDULER_MAX_NB_SLAVES];
> -	uint32_t nb_slaves;
> +	struct scheduler_worker workers[RTE_CRYPTODEV_SCHEDULER_MAX_NB_WORKERS];
> +	uint32_t nb_workers;
>
>  	enum rte_cryptodev_scheduler_mode mode;
>
> @@ -49,8 +49,8 @@ struct scheduler_ctx {
>  	uint16_t wc_pool[RTE_MAX_LCORE];
>  	uint16_t nb_wc;
>
> -	char *init_slave_names[RTE_CRYPTODEV_SCHEDULER_MAX_NB_SLAVES];
> -	int nb_init_slaves;
> +	char *init_worker_names[RTE_CRYPTODEV_SCHEDULER_MAX_NB_WORKERS];
> +	int nb_init_workers;
>  } __rte_cache_aligned;
>
>  struct scheduler_qp_ctx {
> diff --git a/drivers/crypto/scheduler/scheduler_roundrobin.c b/drivers/crypto/scheduler/scheduler_roundrobin.c
> index 9b891d978..bc4a63210 100644
> --- a/drivers/crypto/scheduler/scheduler_roundrobin.c
> +++ b/drivers/crypto/scheduler/scheduler_roundrobin.c
> @@ -9,11 +9,11 @@
>  #include "scheduler_pmd_private.h"
>
>  struct rr_scheduler_qp_ctx {
> -	struct scheduler_slave slaves[RTE_CRYPTODEV_SCHEDULER_MAX_NB_SLAVES];
> -	uint32_t nb_slaves;
> +	struct scheduler_worker workers[RTE_CRYPTODEV_SCHEDULER_MAX_NB_WORKERS];
> +	uint32_t nb_workers;
>
> -	uint32_t last_enq_slave_idx;
> -	uint32_t last_deq_slave_idx;
> +	uint32_t last_enq_worker_idx;
> +	uint32_t last_deq_worker_idx;
>  };
>
>  static uint16_t
> @@ -21,8 +21,8 @@ schedule_enqueue(void *qp, struct rte_crypto_op **ops, uint16_t nb_ops)
>  {
>  	struct rr_scheduler_qp_ctx *rr_qp_ctx =
>  			((struct scheduler_qp_ctx *)qp)->private_qp_ctx;
> -	uint32_t slave_idx = rr_qp_ctx->last_enq_slave_idx;
> -	struct scheduler_slave *slave = &rr_qp_ctx->slaves[slave_idx];
> +	uint32_t worker_idx = rr_qp_ctx->last_enq_worker_idx;
> +	struct scheduler_worker *worker = &rr_qp_ctx->workers[worker_idx];
>  	uint16_t i, processed_ops;
>
>  	if (unlikely(nb_ops == 0))
> @@ -31,13 +31,13 @@ schedule_enqueue(void *qp, struct rte_crypto_op **ops, uint16_t nb_ops)
>  	for (i = 0; i < nb_ops && i < 4; i++)
>  		rte_prefetch0(ops[i]->sym->session);
>
> -	processed_ops = rte_cryptodev_enqueue_burst(slave->dev_id,
> -			slave->qp_id, ops, nb_ops);
> +	processed_ops = rte_cryptodev_enqueue_burst(worker->dev_id,
> +			worker->qp_id, ops, nb_ops);
>
> -	slave->nb_inflight_cops += processed_ops;
> +	worker->nb_inflight_cops += processed_ops;
>
> -	rr_qp_ctx->last_enq_slave_idx += 1;
> -	rr_qp_ctx->last_enq_slave_idx %= rr_qp_ctx->nb_slaves;
> +	rr_qp_ctx->last_enq_worker_idx += 1;
> +	rr_qp_ctx->last_enq_worker_idx %= rr_qp_ctx->nb_workers;
>
>  	return processed_ops;
>  }
> @@ -64,34 +64,35 @@ schedule_dequeue(void *qp, struct rte_crypto_op **ops, uint16_t nb_ops)
>  {
>  	struct rr_scheduler_qp_ctx *rr_qp_ctx =
>  			((struct scheduler_qp_ctx *)qp)->private_qp_ctx;
> -	struct scheduler_slave *slave;
> -	uint32_t last_slave_idx = rr_qp_ctx->last_deq_slave_idx;
> +	struct scheduler_worker *worker;
> +	uint32_t last_worker_idx = rr_qp_ctx->last_deq_worker_idx;
>  	uint16_t nb_deq_ops;
>
> -	if (unlikely(rr_qp_ctx->slaves[last_slave_idx].nb_inflight_cops == 0)) {
> +	if (unlikely(rr_qp_ctx->workers[last_worker_idx].nb_inflight_cops
> +			== 0)) {
>  		do {
> -			last_slave_idx += 1;
> +			last_worker_idx += 1;
>
> -			if (unlikely(last_slave_idx >= rr_qp_ctx->nb_slaves))
> -				last_slave_idx = 0;
> +			if (unlikely(last_worker_idx >= rr_qp_ctx->nb_workers))
> +				last_worker_idx = 0;
>  			/* looped back, means no inflight cops in the queue */
> -			if (last_slave_idx == rr_qp_ctx->last_deq_slave_idx)
> +			if (last_worker_idx == rr_qp_ctx->last_deq_worker_idx)
>  				return 0;
> -		} while (rr_qp_ctx->slaves[last_slave_idx].nb_inflight_cops
> +		} while (rr_qp_ctx->workers[last_worker_idx].nb_inflight_cops
>  				== 0);
>  	}
>
> -	slave = &rr_qp_ctx->slaves[last_slave_idx];
> +	worker = &rr_qp_ctx->workers[last_worker_idx];
>
> -	nb_deq_ops = rte_cryptodev_dequeue_burst(slave->dev_id,
> -			slave->qp_id, ops, nb_ops);
> +	nb_deq_ops = rte_cryptodev_dequeue_burst(worker->dev_id,
> +			worker->qp_id, ops, nb_ops);
>
> -	last_slave_idx += 1;
> -	last_slave_idx %= rr_qp_ctx->nb_slaves;
> +	last_worker_idx += 1;
> +	last_worker_idx %= rr_qp_ctx->nb_workers;
>
> -	rr_qp_ctx->last_deq_slave_idx = last_slave_idx;
> +	rr_qp_ctx->last_deq_worker_idx = last_worker_idx;
>
> -	slave->nb_inflight_cops -= nb_deq_ops;
> +	worker->nb_inflight_cops -= nb_deq_ops;
>
>  	return nb_deq_ops;
>  }
> @@ -109,15 +110,15 @@ schedule_dequeue_ordering(void *qp, struct rte_crypto_op **ops,
>  }
>
>  static int
> -slave_attach(__rte_unused struct rte_cryptodev *dev,
> -		__rte_unused uint8_t slave_id)
> +worker_attach(__rte_unused struct rte_cryptodev *dev,
> +		__rte_unused uint8_t worker_id)
>  {
>  	return 0;
>  }
>
>  static int
> -slave_detach(__rte_unused struct rte_cryptodev *dev,
> -		__rte_unused uint8_t slave_id)
> +worker_detach(__rte_unused struct rte_cryptodev *dev,
> +		__rte_unused uint8_t worker_id)
>  {
>  	return 0;
>  }
> @@ -142,19 +143,19 @@ scheduler_start(struct rte_cryptodev *dev)
>  				qp_ctx->private_qp_ctx;
>  		uint32_t j;
>
> -		memset(rr_qp_ctx->slaves, 0,
> -				RTE_CRYPTODEV_SCHEDULER_MAX_NB_SLAVES *
> -				sizeof(struct scheduler_slave));
> -		for (j = 0; j < sched_ctx->nb_slaves; j++) {
> -			rr_qp_ctx->slaves[j].dev_id =
> -					sched_ctx->slaves[j].dev_id;
> -			rr_qp_ctx->slaves[j].qp_id = i;
> +		memset(rr_qp_ctx->workers, 0,
> +				RTE_CRYPTODEV_SCHEDULER_MAX_NB_WORKERS *
> +				sizeof(struct scheduler_worker));
> +		for (j = 0; j < sched_ctx->nb_workers; j++) {
> +			rr_qp_ctx->workers[j].dev_id =
> +					sched_ctx->workers[j].dev_id;
> +			rr_qp_ctx->workers[j].qp_id = i;
>  		}
>
> -		rr_qp_ctx->nb_slaves = sched_ctx->nb_slaves;
> +		rr_qp_ctx->nb_workers = sched_ctx->nb_workers;
>
> -		rr_qp_ctx->last_enq_slave_idx = 0;
> -		rr_qp_ctx->last_deq_slave_idx = 0;
> +		rr_qp_ctx->last_enq_worker_idx = 0;
> +		rr_qp_ctx->last_deq_worker_idx = 0;
>  	}
>
>  	return 0;
> @@ -191,8 +192,8 @@ scheduler_create_private_ctx(__rte_unused struct rte_cryptodev *dev)
>  }
>
>  static struct rte_cryptodev_scheduler_ops scheduler_rr_ops = {
> -	slave_attach,
> -	slave_detach,
> +	worker_attach,
> +	worker_detach,
>  	scheduler_start,
>  	scheduler_stop,
>  	scheduler_config_qp,
> @@ -204,7 +205,7 @@ static struct rte_cryptodev_scheduler_ops scheduler_rr_ops = {
>  static struct rte_cryptodev_scheduler scheduler = {
>  		.name = "roundrobin-scheduler",
>  		.description = "scheduler which will round robin burst across "
> -				"slave crypto devices",
> +				"worker crypto devices",
>  		.mode = CDEV_SCHED_MODE_ROUNDROBIN,
>  		.ops = &scheduler_rr_ops
>  };
> diff --git a/examples/l2fwd-crypto/main.c b/examples/l2fwd-crypto/main.c
> index 827da9b3e..42e80bc3f 100644
> --- a/examples/l2fwd-crypto/main.c
> +++ b/examples/l2fwd-crypto/main.c
> @@ -2277,11 +2277,11 @@ initialize_cryptodevs(struct l2fwd_crypto_options *options, unsigned nb_ports,
>  		 */
>  		if (!strcmp(dev_info.driver_name, "crypto_scheduler")) {
>  #ifdef RTE_LIBRTE_PMD_CRYPTO_SCHEDULER
> -			uint32_t nb_slaves =
> -				rte_cryptodev_scheduler_slaves_get(cdev_id,
> +			uint32_t nb_workers =
> +				rte_cryptodev_scheduler_workers_get(cdev_id,
>  								NULL);
>
> -			sessions_needed = enabled_cdev_count * nb_slaves;
> +			sessions_needed = enabled_cdev_count * nb_workers;
>  #endif
>  		} else
>  			sessions_needed = enabled_cdev_count;
> --
> 2.25.1

Looks good from ABI perspective.

Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>