From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: <Pavan.Bhagavatula@cavium.com>
Received: from NAM03-CO1-obe.outbound.protection.outlook.com
 (mail-co1nam03on0065.outbound.protection.outlook.com [104.47.40.65])
 by dpdk.org (Postfix) with ESMTP id 14FAB1B654
 for <dev@dpdk.org>; Mon, 23 Oct 2017 18:29:58 +0200 (CEST)
Received: from localhost.localdomain (103.16.71.47) by
 MWHPR07MB3472.namprd07.prod.outlook.com (10.164.192.23) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384_P256) id
 15.20.156.4; Mon, 23 Oct 2017 16:29:55 +0000
From: Pavan Nikhilesh <pbhagavatula@caviumnetworks.com>
To: harry.van.haaren@intel.com, hemant.agrawal@nxp.com,
 jerin.jacob@caviumnetworks.com
Cc: dev@dpdk.org,
	Pavan Bhagavatula <pbhagavatula@caviumnetworks.com>
Date: Mon, 23 Oct 2017 21:59:35 +0530
Message-Id: <1508776177-11264-1-git-send-email-pbhagavatula@caviumnetworks.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1507814147-8223-1-git-send-email-pbhagavatula@caviumnetworks.com>
References: <1507814147-8223-1-git-send-email-pbhagavatula@caviumnetworks.com>
MIME-Version: 1.0
Content-Type: text/plain
Subject: [dpdk-dev] [PATCH v2 1/3] eventdev: fix inconsistency in event queue
	config
X-BeenThere: dev@dpdk.org
X-Mailman-Version: 2.1.15
Precedence: list
List-Id: DPDK patches and discussions <dev.dpdk.org>
List-Unsubscribe: <http://dpdk.org/ml/options/dev>,
 <mailto:dev-request@dpdk.org?subject=unsubscribe>
List-Archive: <http://dpdk.org/ml/archives/dev/>
List-Post: <mailto:dev@dpdk.org>
List-Help: <mailto:dev-request@dpdk.org?subject=help>
List-Subscribe: <http://dpdk.org/ml/listinfo/dev>,
 <mailto:dev-request@dpdk.org?subject=subscribe>
X-List-Received-Date: Mon, 23 Oct 2017 16:29:59 -0000

From: Pavan Bhagavatula <pbhagavatula@caviumnetworks.com>

With the current scheme of event queue configuration, the config-time
schedule type macros (RTE_EVENT_QUEUE_CFG_*_ONLY) are inconsistent with
the event schedule types (RTE_SCHED_TYPE_*). This forces unnecessary
conversions between the fastpath and slowpath APIs when scheduling
events or configuring event queues.
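
For example, with the old macros an application had to carry two
different constants for the same scheduling behaviour. A rough,
illustrative sketch (dev_id, queue_id, port_id and ev are assumed to
be set up elsewhere; this snippet is not part of the patch):

    /* slowpath: queue created with a CFG_*_ONLY flag */
    struct rte_event_queue_conf conf = {
        .event_queue_cfg = RTE_EVENT_QUEUE_CFG_ATOMIC_ONLY,
    };
    rte_event_queue_setup(dev_id, queue_id, &conf);

    /* fastpath: the same behaviour is named by a different macro */
    ev.sched_type = RTE_SCHED_TYPE_ATOMIC;
    rte_event_enqueue_burst(dev_id, port_id, &ev, 1);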

This patch fixes the inconsistency by using the event schedule types
(RTE_SCHED_TYPE_*) directly for event queue configuration.
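
With this change the same RTE_SCHED_TYPE_* constant flows through both
paths, so no conversion helper is needed (same assumptions as the
sketch above):

    struct rte_event_queue_conf conf = {
        .schedule_type = RTE_SCHED_TYPE_ATOMIC,
    };
    rte_event_queue_setup(dev_id, queue_id, &conf);

    ev.sched_type = RTE_SCHED_TYPE_ATOMIC; /* same constant, both paths */
    rte_event_enqueue_burst(dev_id, port_id, &ev, 1);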

This patch also fixes examples/eventdev_pipeline_sw_pmd, which does not
convert RTE_EVENT_QUEUE_CFG_*_ONLY to RTE_SCHED_TYPE_* and therefore
enqueues events with an improper schedule type to the eventdev.
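
The mismatch is easy to see from the macro values: the example stores
RTE_EVENT_QUEUE_CFG_ORDERED_ONLY (2ULL << 0 == 2) in cdata.queue_type
and later uses that value where a schedule type is expected, but 2 is
RTE_SCHED_TYPE_PARALLEL, not RTE_SCHED_TYPE_ORDERED (0). A rough
sketch of the failure mode:

    /* old behaviour: queue_type holds a CFG_*_ONLY value */
    ev.sched_type = cdata.queue_type;
    /* with -o this is 2 == RTE_SCHED_TYPE_PARALLEL (wrong type);
     * with -p it is 3, which is not a valid schedule type at all */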

Fixes: adb5d5486c39 ("examples/eventdev_pipeline_sw_pmd: add sample app")

Signed-off-by: Pavan Nikhilesh <pbhagavatula@caviumnetworks.com>
---
 app/test-eventdev/evt_common.h           | 21 -------------
 app/test-eventdev/test_order_queue.c     |  4 +--
 app/test-eventdev/test_perf_queue.c      |  4 +--
 drivers/event/dpaa2/dpaa2_eventdev.c     |  4 +--
 drivers/event/sw/sw_evdev.c              | 28 +++++------------
 examples/eventdev_pipeline_sw_pmd/main.c | 18 +++++------
 lib/librte_eventdev/rte_eventdev.c       | 20 +++++-------
 lib/librte_eventdev/rte_eventdev.h       | 54 ++++++++++----------------------
 test/test/test_eventdev.c                | 12 +++----
 test/test/test_eventdev_sw.c             | 16 +++++-----
 10 files changed, 60 insertions(+), 121 deletions(-)

diff --git a/app/test-eventdev/evt_common.h b/app/test-eventdev/evt_common.h
index 4102076..ee896a2 100644
--- a/app/test-eventdev/evt_common.h
+++ b/app/test-eventdev/evt_common.h
@@ -92,25 +92,4 @@ evt_has_all_types_queue(uint8_t dev_id)
 			true : false;
 }
 
-static inline uint32_t
-evt_sched_type2queue_cfg(uint8_t sched_type)
-{
-	uint32_t ret;
-
-	switch (sched_type) {
-	case RTE_SCHED_TYPE_ATOMIC:
-		ret = RTE_EVENT_QUEUE_CFG_ATOMIC_ONLY;
-		break;
-	case RTE_SCHED_TYPE_ORDERED:
-		ret = RTE_EVENT_QUEUE_CFG_ORDERED_ONLY;
-		break;
-	case RTE_SCHED_TYPE_PARALLEL:
-		ret = RTE_EVENT_QUEUE_CFG_PARALLEL_ONLY;
-		break;
-	default:
-		rte_panic("Invalid sched_type %d\n", sched_type);
-	}
-	return ret;
-}
-
 #endif /*  _EVT_COMMON_*/
diff --git a/app/test-eventdev/test_order_queue.c b/app/test-eventdev/test_order_queue.c
index beadd9c..1fa4082 100644
--- a/app/test-eventdev/test_order_queue.c
+++ b/app/test-eventdev/test_order_queue.c
@@ -164,7 +164,7 @@ order_queue_eventdev_setup(struct evt_test *test, struct evt_options *opt)
 	/* q0 (ordered queue) configuration */
 	struct rte_event_queue_conf q0_ordered_conf = {
 			.priority = RTE_EVENT_DEV_PRIORITY_NORMAL,
-			.event_queue_cfg = RTE_EVENT_QUEUE_CFG_ORDERED_ONLY,
+			.schedule_type = RTE_SCHED_TYPE_ORDERED,
 			.nb_atomic_flows = opt->nb_flows,
 			.nb_atomic_order_sequences = opt->nb_flows,
 	};
@@ -177,7 +177,7 @@ order_queue_eventdev_setup(struct evt_test *test, struct evt_options *opt)
 	/* q1 (atomic queue) configuration */
 	struct rte_event_queue_conf q1_atomic_conf = {
 			.priority = RTE_EVENT_DEV_PRIORITY_NORMAL,
-			.event_queue_cfg = RTE_EVENT_QUEUE_CFG_ATOMIC_ONLY,
+			.schedule_type = RTE_SCHED_TYPE_ATOMIC,
 			.nb_atomic_flows = opt->nb_flows,
 			.nb_atomic_order_sequences = opt->nb_flows,
 	};
diff --git a/app/test-eventdev/test_perf_queue.c b/app/test-eventdev/test_perf_queue.c
index 658c08a..28c2096 100644
--- a/app/test-eventdev/test_perf_queue.c
+++ b/app/test-eventdev/test_perf_queue.c
@@ -205,8 +205,8 @@ perf_queue_eventdev_setup(struct evt_test *test, struct evt_options *opt)
 	};
 	/* queue configurations */
 	for (queue = 0; queue < perf_queue_nb_event_queues(opt); queue++) {
-		q_conf.event_queue_cfg =  evt_sched_type2queue_cfg
-				(opt->sched_type_list[queue % nb_stages]);
+		q_conf.schedule_type =
+			opt->sched_type_list[queue % nb_stages];
 
 		if (opt->q_priority) {
 			uint8_t stage_pos = queue % nb_stages;
diff --git a/drivers/event/dpaa2/dpaa2_eventdev.c b/drivers/event/dpaa2/dpaa2_eventdev.c
index 81286a8..3dbc337 100644
--- a/drivers/event/dpaa2/dpaa2_eventdev.c
+++ b/drivers/event/dpaa2/dpaa2_eventdev.c
@@ -378,8 +378,8 @@ dpaa2_eventdev_queue_def_conf(struct rte_eventdev *dev, uint8_t queue_id,
 	RTE_SET_USED(queue_conf);
 
 	queue_conf->nb_atomic_flows = DPAA2_EVENT_QUEUE_ATOMIC_FLOWS;
-	queue_conf->event_queue_cfg = RTE_EVENT_QUEUE_CFG_ATOMIC_ONLY |
-				      RTE_EVENT_QUEUE_CFG_PARALLEL_ONLY;
+	/* sched types are values, not flags; default to atomic scheduling */
+	queue_conf->schedule_type = RTE_SCHED_TYPE_ATOMIC;
 	queue_conf->priority = RTE_EVENT_DEV_PRIORITY_NORMAL;
 }
 
diff --git a/drivers/event/sw/sw_evdev.c b/drivers/event/sw/sw_evdev.c
index aed8b72..522cd71 100644
--- a/drivers/event/sw/sw_evdev.c
+++ b/drivers/event/sw/sw_evdev.c
@@ -345,28 +345,14 @@ sw_queue_setup(struct rte_eventdev *dev, uint8_t queue_id,
 {
 	int type;
 
-	/* SINGLE_LINK can be OR-ed with other types, so handle first */
+	type = conf->schedule_type;
+
 	if (RTE_EVENT_QUEUE_CFG_SINGLE_LINK & conf->event_queue_cfg) {
 		type = SW_SCHED_TYPE_DIRECT;
-	} else {
-		switch (conf->event_queue_cfg) {
-		case RTE_EVENT_QUEUE_CFG_ATOMIC_ONLY:
-			type = RTE_SCHED_TYPE_ATOMIC;
-			break;
-		case RTE_EVENT_QUEUE_CFG_ORDERED_ONLY:
-			type = RTE_SCHED_TYPE_ORDERED;
-			break;
-		case RTE_EVENT_QUEUE_CFG_PARALLEL_ONLY:
-			type = RTE_SCHED_TYPE_PARALLEL;
-			break;
-		case RTE_EVENT_QUEUE_CFG_ALL_TYPES:
-			SW_LOG_ERR("QUEUE_CFG_ALL_TYPES not supported\n");
-			return -ENOTSUP;
-		default:
-			SW_LOG_ERR("Unknown queue type %d requested\n",
-				   conf->event_queue_cfg);
-			return -EINVAL;
-		}
+	} else if (RTE_EVENT_QUEUE_CFG_ALL_TYPES
+			& conf->event_queue_cfg) {
+		SW_LOG_ERR("QUEUE_CFG_ALL_TYPES not supported\n");
+		return -ENOTSUP;
 	}
 
 	struct sw_evdev *sw = sw_pmd_priv(dev);
@@ -400,7 +386,7 @@ sw_queue_def_conf(struct rte_eventdev *dev, uint8_t queue_id,
 	static const struct rte_event_queue_conf default_conf = {
 		.nb_atomic_flows = 4096,
 		.nb_atomic_order_sequences = 1,
-		.event_queue_cfg = RTE_EVENT_QUEUE_CFG_ATOMIC_ONLY,
+		.schedule_type = RTE_SCHED_TYPE_ATOMIC,
 		.priority = RTE_EVENT_DEV_PRIORITY_NORMAL,
 	};
 
diff --git a/examples/eventdev_pipeline_sw_pmd/main.c b/examples/eventdev_pipeline_sw_pmd/main.c
index 09b90c3..2e6787b 100644
--- a/examples/eventdev_pipeline_sw_pmd/main.c
+++ b/examples/eventdev_pipeline_sw_pmd/main.c
@@ -108,7 +108,7 @@ struct config_data {
 static struct config_data cdata = {
 	.num_packets = (1L << 25), /* do ~32M packets */
 	.num_fids = 512,
-	.queue_type = RTE_EVENT_QUEUE_CFG_ATOMIC_ONLY,
+	.queue_type = RTE_SCHED_TYPE_ATOMIC,
 	.next_qid = {-1},
 	.qid = {-1},
 	.num_stages = 1,
@@ -490,10 +490,10 @@ parse_app_args(int argc, char **argv)
 			cdata.enable_queue_priorities = 1;
 			break;
 		case 'o':
-			cdata.queue_type = RTE_EVENT_QUEUE_CFG_ORDERED_ONLY;
+			cdata.queue_type = RTE_SCHED_TYPE_ORDERED;
 			break;
 		case 'p':
-			cdata.queue_type = RTE_EVENT_QUEUE_CFG_PARALLEL_ONLY;
+			cdata.queue_type = RTE_SCHED_TYPE_PARALLEL;
 			break;
 		case 'q':
 			cdata.quiet = 1;
@@ -684,7 +684,7 @@ setup_eventdev(struct prod_data *prod_data,
 			.new_event_threshold = 4096,
 	};
 	struct rte_event_queue_conf wkr_q_conf = {
-			.event_queue_cfg = cdata.queue_type,
+			.schedule_type = cdata.queue_type,
 			.priority = RTE_EVENT_DEV_PRIORITY_NORMAL,
 			.nb_atomic_flows = 1024,
 			.nb_atomic_order_sequences = 1024,
@@ -751,11 +751,11 @@ setup_eventdev(struct prod_data *prod_data,
 		}
 
 		const char *type_str = "Atomic";
-		switch (wkr_q_conf.event_queue_cfg) {
-		case RTE_EVENT_QUEUE_CFG_ORDERED_ONLY:
+		switch (wkr_q_conf.schedule_type) {
+		case RTE_SCHED_TYPE_ORDERED:
 			type_str = "Ordered";
 			break;
-		case RTE_EVENT_QUEUE_CFG_PARALLEL_ONLY:
+		case RTE_SCHED_TYPE_PARALLEL:
 			type_str = "Parallel";
 			break;
 		}
@@ -907,9 +907,9 @@ main(int argc, char **argv)
 		printf("\tworkers: %u\n", cdata.num_workers);
 		printf("\tpackets: %"PRIi64"\n", cdata.num_packets);
 		printf("\tQueue-prio: %u\n", cdata.enable_queue_priorities);
-		if (cdata.queue_type == RTE_EVENT_QUEUE_CFG_ORDERED_ONLY)
+		if (cdata.queue_type == RTE_SCHED_TYPE_ORDERED)
 			printf("\tqid0 type: ordered\n");
-		if (cdata.queue_type == RTE_EVENT_QUEUE_CFG_ATOMIC_ONLY)
+		if (cdata.queue_type == RTE_SCHED_TYPE_ATOMIC)
 			printf("\tqid0 type: atomic\n");
 		printf("\tCores available: %u\n", rte_lcore_count());
 		printf("\tCores used: %u\n", cores_needed);
diff --git a/lib/librte_eventdev/rte_eventdev.c b/lib/librte_eventdev/rte_eventdev.c
index 378ccb5..db96552 100644
--- a/lib/librte_eventdev/rte_eventdev.c
+++ b/lib/librte_eventdev/rte_eventdev.c
@@ -517,13 +517,11 @@ is_valid_atomic_queue_conf(const struct rte_event_queue_conf *queue_conf)
 {
 	if (queue_conf &&
 		!(queue_conf->event_queue_cfg &
-		  RTE_EVENT_QUEUE_CFG_SINGLE_LINK) && (
+		  RTE_EVENT_QUEUE_CFG_SINGLE_LINK) &&
 		((queue_conf->event_queue_cfg &
-			RTE_EVENT_QUEUE_CFG_TYPE_MASK)
-			== RTE_EVENT_QUEUE_CFG_ALL_TYPES) ||
-		((queue_conf->event_queue_cfg &
-			RTE_EVENT_QUEUE_CFG_TYPE_MASK)
-			== RTE_EVENT_QUEUE_CFG_ATOMIC_ONLY)
+			 RTE_EVENT_QUEUE_CFG_ALL_TYPES) ||
+		(queue_conf->schedule_type
+			== RTE_SCHED_TYPE_ATOMIC)
 		))
 		return 1;
 	else
@@ -535,13 +533,11 @@ is_valid_ordered_queue_conf(const struct rte_event_queue_conf *queue_conf)
 {
 	if (queue_conf &&
 		!(queue_conf->event_queue_cfg &
-		  RTE_EVENT_QUEUE_CFG_SINGLE_LINK) && (
-		((queue_conf->event_queue_cfg &
-			RTE_EVENT_QUEUE_CFG_TYPE_MASK)
-			== RTE_EVENT_QUEUE_CFG_ALL_TYPES) ||
+		  RTE_EVENT_QUEUE_CFG_SINGLE_LINK) &&
 		((queue_conf->event_queue_cfg &
-			RTE_EVENT_QUEUE_CFG_TYPE_MASK)
-			== RTE_EVENT_QUEUE_CFG_ORDERED_ONLY)
+			 RTE_EVENT_QUEUE_CFG_ALL_TYPES) ||
+		(queue_conf->schedule_type
+			== RTE_SCHED_TYPE_ORDERED)
 		))
 		return 1;
 	else
diff --git a/lib/librte_eventdev/rte_eventdev.h b/lib/librte_eventdev/rte_eventdev.h
index 1dbc872..fa16f82 100644
--- a/lib/librte_eventdev/rte_eventdev.h
+++ b/lib/librte_eventdev/rte_eventdev.h
@@ -270,9 +270,9 @@ struct rte_mbuf; /* we just use mbuf pointers; no need to include rte_mbuf.h */
 #define RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES     (1ULL << 3)
 /**< Event device is capable of enqueuing events of any type to any queue.
  * If this capability is not set, the queue only supports events of the
- *  *RTE_EVENT_QUEUE_CFG_* type that it was created with.
+ *  *RTE_SCHED_TYPE_* type that it was created with.
  *
- * @see RTE_EVENT_QUEUE_CFG_* values
+ * @see RTE_SCHED_TYPE_* values
  */
 #define RTE_EVENT_DEV_CAP_BURST_MODE          (1ULL << 4)
 /**< Event device is capable of operating in burst mode for enqueue(forward,
@@ -515,39 +515,13 @@ rte_event_dev_configure(uint8_t dev_id,
 /* Event queue specific APIs */
 
 /* Event queue configuration bitmap flags */
-#define RTE_EVENT_QUEUE_CFG_TYPE_MASK          (3ULL << 0)
-/**< Mask for event queue schedule type configuration request */
-#define RTE_EVENT_QUEUE_CFG_ALL_TYPES          (0ULL << 0)
+#define RTE_EVENT_QUEUE_CFG_ALL_TYPES          (1ULL << 0)
 /**< Allow ATOMIC,ORDERED,PARALLEL schedule type enqueue
  *
  * @see RTE_SCHED_TYPE_ORDERED, RTE_SCHED_TYPE_ATOMIC, RTE_SCHED_TYPE_PARALLEL
  * @see rte_event_enqueue_burst()
  */
-#define RTE_EVENT_QUEUE_CFG_ATOMIC_ONLY        (1ULL << 0)
-/**< Allow only ATOMIC schedule type enqueue
- *
- * The rte_event_enqueue_burst() result is undefined if the queue configured
- * with ATOMIC only and sched_type != RTE_SCHED_TYPE_ATOMIC
- *
- * @see RTE_SCHED_TYPE_ATOMIC, rte_event_enqueue_burst()
- */
-#define RTE_EVENT_QUEUE_CFG_ORDERED_ONLY       (2ULL << 0)
-/**< Allow only ORDERED schedule type enqueue
- *
- * The rte_event_enqueue_burst() result is undefined if the queue configured
- * with ORDERED only and sched_type != RTE_SCHED_TYPE_ORDERED
- *
- * @see RTE_SCHED_TYPE_ORDERED, rte_event_enqueue_burst()
- */
-#define RTE_EVENT_QUEUE_CFG_PARALLEL_ONLY      (3ULL << 0)
-/**< Allow only PARALLEL schedule type enqueue
- *
- * The rte_event_enqueue_burst() result is undefined if the queue configured
- * with PARALLEL only and sched_type != RTE_SCHED_TYPE_PARALLEL
- *
- * @see RTE_SCHED_TYPE_PARALLEL, rte_event_enqueue_burst()
- */
-#define RTE_EVENT_QUEUE_CFG_SINGLE_LINK        (1ULL << 2)
+#define RTE_EVENT_QUEUE_CFG_SINGLE_LINK        (1ULL << 1)
 /**< This event queue links only to a single event port.
  *
  *  @see rte_event_port_setup(), rte_event_port_link()
@@ -558,8 +532,8 @@ struct rte_event_queue_conf {
 	uint32_t nb_atomic_flows;
 	/**< The maximum number of active flows this queue can track at any
 	 * given time. If the queue is configured for atomic scheduling (by
-	 * applying the RTE_EVENT_QUEUE_CFG_ALL_TYPES or
-	 * RTE_EVENT_QUEUE_CFG_ATOMIC_ONLY flags to event_queue_cfg), then the
+	 * applying the RTE_EVENT_QUEUE_CFG_ALL_TYPES flag to event_queue_cfg
+	 * or setting RTE_SCHED_TYPE_ATOMIC in schedule_type), then the
 	 * value must be in the range of [1, nb_event_queue_flows], which was
 	 * previously provided in rte_event_dev_configure().
 	 */
@@ -572,12 +546,18 @@ struct rte_event_queue_conf {
 	 * event will be returned from dequeue until one or more entries are
 	 * freed up/released.
 	 * If the queue is configured for ordered scheduling (by applying the
-	 * RTE_EVENT_QUEUE_CFG_ALL_TYPES or RTE_EVENT_QUEUE_CFG_ORDERED_ONLY
-	 * flags to event_queue_cfg), then the value must be in the range of
-	 * [1, nb_event_queue_flows], which was previously supplied to
-	 * rte_event_dev_configure().
+	 * RTE_EVENT_QUEUE_CFG_ALL_TYPES flag to event_queue_cfg or
+	 * setting RTE_SCHED_TYPE_ORDERED in schedule_type), then the value must
+	 * be in the range of [1, nb_event_queue_flows], which was
+	 * previously supplied to rte_event_dev_configure().
+	 */
+	uint32_t event_queue_cfg;
+	/**< Queue cfg flags (RTE_EVENT_QUEUE_CFG_*) */
+	uint8_t schedule_type;
+	/**< Queue schedule type (RTE_SCHED_TYPE_*).
+	 * Valid when RTE_EVENT_QUEUE_CFG_ALL_TYPES bit is not set in
+	 * event_queue_cfg.
 	 */
-	uint32_t event_queue_cfg; /**< Queue cfg flags(EVENT_QUEUE_CFG_) */
 	uint8_t priority;
 	/**< Priority for this event queue relative to other event queues.
 	 * The requested priority should in the range of
diff --git a/test/test/test_eventdev.c b/test/test/test_eventdev.c
index d6ade78..4118b75 100644
--- a/test/test/test_eventdev.c
+++ b/test/test/test_eventdev.c
@@ -300,15 +300,13 @@ test_eventdev_queue_setup(void)
 	/* Negative cases */
 	ret = rte_event_queue_default_conf_get(TEST_DEV_ID, 0, &qconf);
 	TEST_ASSERT_SUCCESS(ret, "Failed to get queue0 info");
-	qconf.event_queue_cfg =	(RTE_EVENT_QUEUE_CFG_ALL_TYPES &
-		 RTE_EVENT_QUEUE_CFG_TYPE_MASK);
+	qconf.event_queue_cfg =	RTE_EVENT_QUEUE_CFG_ALL_TYPES;
 	qconf.nb_atomic_flows = info.max_event_queue_flows + 1;
 	ret = rte_event_queue_setup(TEST_DEV_ID, 0, &qconf);
 	TEST_ASSERT(ret == -EINVAL, "Expected -EINVAL, %d", ret);
 
 	qconf.nb_atomic_flows = info.max_event_queue_flows;
-	qconf.event_queue_cfg =	(RTE_EVENT_QUEUE_CFG_ORDERED_ONLY &
-		 RTE_EVENT_QUEUE_CFG_TYPE_MASK);
+	qconf.schedule_type = RTE_SCHED_TYPE_ORDERED;
 	qconf.nb_atomic_order_sequences = info.max_event_queue_flows + 1;
 	ret = rte_event_queue_setup(TEST_DEV_ID, 0, &qconf);
 	TEST_ASSERT(ret == -EINVAL, "Expected -EINVAL, %d", ret);
@@ -423,7 +421,7 @@ test_eventdev_queue_attr_nb_atomic_flows(void)
 		/* Assume PMD doesn't support atomic flows, return early */
 		return -ENOTSUP;
 
-	qconf.event_queue_cfg = RTE_EVENT_QUEUE_CFG_ATOMIC_ONLY;
+	qconf.schedule_type = RTE_SCHED_TYPE_ATOMIC;
 
 	for (i = 0; i < (int)queue_count; i++) {
 		ret = rte_event_queue_setup(TEST_DEV_ID, i, &qconf);
@@ -466,7 +464,7 @@ test_eventdev_queue_attr_nb_atomic_order_sequences(void)
 		/* Assume PMD doesn't support reordering */
 		return -ENOTSUP;
 
-	qconf.event_queue_cfg = RTE_EVENT_QUEUE_CFG_ORDERED_ONLY;
+	qconf.schedule_type = RTE_SCHED_TYPE_ORDERED;
 
 	for (i = 0; i < (int)queue_count; i++) {
 		ret = rte_event_queue_setup(TEST_DEV_ID, i, &qconf);
@@ -507,7 +505,7 @@ test_eventdev_queue_attr_event_queue_cfg(void)
 	ret = rte_event_queue_default_conf_get(TEST_DEV_ID, 0, &qconf);
 	TEST_ASSERT_SUCCESS(ret, "Failed to get queue0 def conf");
 
-	qconf.event_queue_cfg = RTE_EVENT_QUEUE_CFG_PARALLEL_ONLY;
+	qconf.event_queue_cfg = RTE_EVENT_QUEUE_CFG_SINGLE_LINK;
 
 	for (i = 0; i < (int)queue_count; i++) {
 		ret = rte_event_queue_setup(TEST_DEV_ID, i, &qconf);
diff --git a/test/test/test_eventdev_sw.c b/test/test/test_eventdev_sw.c
index 7219886..dea302f 100644
--- a/test/test/test_eventdev_sw.c
+++ b/test/test/test_eventdev_sw.c
@@ -219,7 +219,7 @@ create_lb_qids(struct test *t, int num_qids, uint32_t flags)
 
 	/* Q creation */
 	const struct rte_event_queue_conf conf = {
-			.event_queue_cfg = flags,
+			.schedule_type = flags,
 			.priority = RTE_EVENT_DEV_PRIORITY_NORMAL,
 			.nb_atomic_flows = 1024,
 			.nb_atomic_order_sequences = 1024,
@@ -242,20 +242,20 @@ create_lb_qids(struct test *t, int num_qids, uint32_t flags)
 static inline int
 create_atomic_qids(struct test *t, int num_qids)
 {
-	return create_lb_qids(t, num_qids, RTE_EVENT_QUEUE_CFG_ATOMIC_ONLY);
+	return create_lb_qids(t, num_qids, RTE_SCHED_TYPE_ATOMIC);
 }
 
 static inline int
 create_ordered_qids(struct test *t, int num_qids)
 {
-	return create_lb_qids(t, num_qids, RTE_EVENT_QUEUE_CFG_ORDERED_ONLY);
+	return create_lb_qids(t, num_qids, RTE_SCHED_TYPE_ORDERED);
 }
 
 
 static inline int
 create_unordered_qids(struct test *t, int num_qids)
 {
-	return create_lb_qids(t, num_qids, RTE_EVENT_QUEUE_CFG_PARALLEL_ONLY);
+	return create_lb_qids(t, num_qids, RTE_SCHED_TYPE_PARALLEL);
 }
 
 static inline int
@@ -1238,7 +1238,7 @@ port_reconfig_credits(struct test *t)
 	const uint32_t NUM_ITERS = 32;
 	for (i = 0; i < NUM_ITERS; i++) {
 		const struct rte_event_queue_conf conf = {
-			.event_queue_cfg = RTE_EVENT_QUEUE_CFG_ATOMIC_ONLY,
+			.schedule_type = RTE_SCHED_TYPE_ATOMIC,
 			.priority = RTE_EVENT_DEV_PRIORITY_NORMAL,
 			.nb_atomic_flows = 1024,
 			.nb_atomic_order_sequences = 1024,
@@ -1320,7 +1320,7 @@ port_single_lb_reconfig(struct test *t)
 
 	static const struct rte_event_queue_conf conf_lb_atomic = {
 		.priority = RTE_EVENT_DEV_PRIORITY_NORMAL,
-		.event_queue_cfg = RTE_EVENT_QUEUE_CFG_ATOMIC_ONLY,
+		.schedule_type = RTE_SCHED_TYPE_ATOMIC,
 		.nb_atomic_flows = 1024,
 		.nb_atomic_order_sequences = 1024,
 	};
@@ -1818,7 +1818,7 @@ ordered_reconfigure(struct test *t)
 	}
 
 	const struct rte_event_queue_conf conf = {
-			.event_queue_cfg = RTE_EVENT_QUEUE_CFG_ORDERED_ONLY,
+			.schedule_type = RTE_SCHED_TYPE_ORDERED,
 			.priority = RTE_EVENT_DEV_PRIORITY_NORMAL,
 			.nb_atomic_flows = 1024,
 			.nb_atomic_order_sequences = 1024,
@@ -1865,7 +1865,7 @@ qid_priorities(struct test *t)
 	for (i = 0; i < 3; i++) {
 		/* Create QID */
 		const struct rte_event_queue_conf conf = {
-			.event_queue_cfg = RTE_EVENT_QUEUE_CFG_ATOMIC_ONLY,
+			.schedule_type = RTE_SCHED_TYPE_ATOMIC,
 			/* increase priority (0 == highest), as we go */
 			.priority = RTE_EVENT_DEV_PRIORITY_NORMAL - i,
 			.nb_atomic_flows = 1024,
-- 
2.7.4