From mboxrd@z Thu Jan 1 00:00:00 1970
From: Pavan Nikhilesh
To: harry.van.haaren@intel.com, hemant.agrawal@nxp.com,
	jerin.jacob@caviumnetworks.com
Cc: dev@dpdk.org, Pavan Nikhilesh
Date: Wed, 25 Oct 2017 19:51:42 +0530
Message-Id: <1508941304-11596-1-git-send-email-pbhagavatula@caviumnetworks.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1507814147-8223-1-git-send-email-pbhagavatula@caviumnetworks.com>
References: <1507814147-8223-1-git-send-email-pbhagavatula@caviumnetworks.com>
MIME-Version: 1.0
Content-Type: text/plain
Subject: [dpdk-dev] [PATCH v4 1/3] eventdev: fix inconsistency in event queue config
List-Id: DPDK patches and discussions

In the current scheme of event queue configuration, the config schedule
type macros (RTE_EVENT_QUEUE_CFG_*_ONLY) are inconsistent with the event
schedule types (RTE_SCHED_TYPE_*). This forces unnecessary conversions
between the fastpath and slowpath APIs when scheduling events or
configuring event queues.

This patch fixes the inconsistency by using the event schedule types
(RTE_SCHED_TYPE_*) for event queue configuration as well.

It also fixes examples/eventdev_pipeline_sw_pmd, which doesn't convert
RTE_EVENT_QUEUE_CFG_*_ONLY to RTE_SCHED_TYPE_* and therefore enqueues
events with the wrong schedule type to the eventdev.

Fixes: adb5d5486c39 ("examples/eventdev_pipeline_sw_pmd: add sample app")

Signed-off-by: Pavan Nikhilesh
Acked-by: Harry van Haaren
---
 v3 changes:
  - fix app/test_perf_queue using an invalid queue configuration, i.e.
    setting the schedule type in event_queue_cfg instead of schedule_type.
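
For illustration, this is what configuring an ordered queue looks like
under the new scheme. A minimal sketch, not part of this patch; the
helper name setup_ordered_queue and its nb_flows parameter are
hypothetical:

#include <rte_eventdev.h>

/* Sketch: set up an ordered event queue using the unified
 * RTE_SCHED_TYPE_* values. Previously this required
 * .event_queue_cfg = RTE_EVENT_QUEUE_CFG_ORDERED_ONLY on the slowpath
 * and a conversion back to RTE_SCHED_TYPE_ORDERED on the fastpath.
 */
static int
setup_ordered_queue(uint8_t dev_id, uint8_t queue_id, uint32_t nb_flows)
{
	struct rte_event_queue_conf conf = {
		.priority = RTE_EVENT_DEV_PRIORITY_NORMAL,
		.schedule_type = RTE_SCHED_TYPE_ORDERED,
		.nb_atomic_flows = nb_flows,
		.nb_atomic_order_sequences = nb_flows,
	};

	/* Called after rte_event_dev_configure() and before
	 * rte_event_dev_start(). */
	return rte_event_queue_setup(dev_id, queue_id, &conf);
}

The same RTE_SCHED_TYPE_ORDERED value can then be used directly in
ev.sched_type when enqueuing, with no slowpath-to-fastpath conversion.
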
 app/test-eventdev/evt_common.h           | 21 -------------
 app/test-eventdev/test_order_queue.c     |  4 +--
 app/test-eventdev/test_perf_queue.c      |  4 +--
 drivers/event/dpaa2/dpaa2_eventdev.c     |  4 +--
 drivers/event/sw/sw_evdev.c              | 28 +++++------------
 examples/eventdev_pipeline_sw_pmd/main.c | 18 +++++------
 lib/librte_eventdev/rte_eventdev.c       | 20 +++++-------
 lib/librte_eventdev/rte_eventdev.h       | 54 ++++++++++----------------------
 test/test/test_eventdev.c                | 12 +++----
 test/test/test_eventdev_sw.c             | 16 +++++-----
 10 files changed, 60 insertions(+), 121 deletions(-)

diff --git a/app/test-eventdev/evt_common.h b/app/test-eventdev/evt_common.h
index 4102076..ee896a2 100644
--- a/app/test-eventdev/evt_common.h
+++ b/app/test-eventdev/evt_common.h
@@ -92,25 +92,4 @@ evt_has_all_types_queue(uint8_t dev_id)
 			true : false;
 }
 
-static inline uint32_t
-evt_sched_type2queue_cfg(uint8_t sched_type)
-{
-	uint32_t ret;
-
-	switch (sched_type) {
-	case RTE_SCHED_TYPE_ATOMIC:
-		ret = RTE_EVENT_QUEUE_CFG_ATOMIC_ONLY;
-		break;
-	case RTE_SCHED_TYPE_ORDERED:
-		ret = RTE_EVENT_QUEUE_CFG_ORDERED_ONLY;
-		break;
-	case RTE_SCHED_TYPE_PARALLEL:
-		ret = RTE_EVENT_QUEUE_CFG_PARALLEL_ONLY;
-		break;
-	default:
-		rte_panic("Invalid sched_type %d\n", sched_type);
-	}
-	return ret;
-}
-
 #endif /* _EVT_COMMON_*/
diff --git a/app/test-eventdev/test_order_queue.c b/app/test-eventdev/test_order_queue.c
index beadd9c..1fa4082 100644
--- a/app/test-eventdev/test_order_queue.c
+++ b/app/test-eventdev/test_order_queue.c
@@ -164,7 +164,7 @@ order_queue_eventdev_setup(struct evt_test *test, struct evt_options *opt)
 	/* q0 (ordered queue) configuration */
 	struct rte_event_queue_conf q0_ordered_conf = {
 			.priority = RTE_EVENT_DEV_PRIORITY_NORMAL,
-			.event_queue_cfg = RTE_EVENT_QUEUE_CFG_ORDERED_ONLY,
+			.schedule_type = RTE_SCHED_TYPE_ORDERED,
 			.nb_atomic_flows = opt->nb_flows,
 			.nb_atomic_order_sequences = opt->nb_flows,
 	};
@@ -177,7 +177,7 @@ order_queue_eventdev_setup(struct evt_test *test, struct evt_options *opt)
 	/* q1 (atomic queue) configuration */
 	struct rte_event_queue_conf q1_atomic_conf = {
 			.priority = RTE_EVENT_DEV_PRIORITY_NORMAL,
-			.event_queue_cfg = RTE_EVENT_QUEUE_CFG_ATOMIC_ONLY,
+			.schedule_type = RTE_SCHED_TYPE_ATOMIC,
 			.nb_atomic_flows = opt->nb_flows,
 			.nb_atomic_order_sequences = opt->nb_flows,
 	};
diff --git a/app/test-eventdev/test_perf_queue.c b/app/test-eventdev/test_perf_queue.c
index 658c08a..a7a2b1f 100644
--- a/app/test-eventdev/test_perf_queue.c
+++ b/app/test-eventdev/test_perf_queue.c
@@ -205,8 +205,8 @@ perf_queue_eventdev_setup(struct evt_test *test, struct evt_options *opt)
 	};
 	/* queue configurations */
 	for (queue = 0; queue < perf_queue_nb_event_queues(opt); queue++) {
-		q_conf.event_queue_cfg = evt_sched_type2queue_cfg
-			(opt->sched_type_list[queue % nb_stages]);
+		q_conf.schedule_type =
+			(opt->sched_type_list[queue % nb_stages]);
 
 		if (opt->q_priority) {
 			uint8_t stage_pos = queue % nb_stages;
diff --git a/drivers/event/dpaa2/dpaa2_eventdev.c b/drivers/event/dpaa2/dpaa2_eventdev.c
index 81286a8..3dbc337 100644
--- a/drivers/event/dpaa2/dpaa2_eventdev.c
+++ b/drivers/event/dpaa2/dpaa2_eventdev.c
@@ -378,8 +378,8 @@ dpaa2_eventdev_queue_def_conf(struct rte_eventdev *dev, uint8_t queue_id,
 	RTE_SET_USED(queue_conf);
 
 	queue_conf->nb_atomic_flows = DPAA2_EVENT_QUEUE_ATOMIC_FLOWS;
-	queue_conf->event_queue_cfg = RTE_EVENT_QUEUE_CFG_ATOMIC_ONLY |
-				      RTE_EVENT_QUEUE_CFG_PARALLEL_ONLY;
+	queue_conf->schedule_type = RTE_SCHED_TYPE_ATOMIC |
+				      RTE_SCHED_TYPE_PARALLEL;
 	queue_conf->priority = RTE_EVENT_DEV_PRIORITY_NORMAL;
 }
diff --git a/drivers/event/sw/sw_evdev.c b/drivers/event/sw/sw_evdev.c
index aed8b72..522cd71 100644
--- a/drivers/event/sw/sw_evdev.c
+++ b/drivers/event/sw/sw_evdev.c
@@ -345,28 +345,14 @@ sw_queue_setup(struct rte_eventdev *dev, uint8_t queue_id,
 {
 	int type;
 
-	/* SINGLE_LINK can be OR-ed with other types, so handle first */
+	type = conf->schedule_type;
+
 	if (RTE_EVENT_QUEUE_CFG_SINGLE_LINK & conf->event_queue_cfg) {
 		type = SW_SCHED_TYPE_DIRECT;
-	} else {
-		switch (conf->event_queue_cfg) {
-		case RTE_EVENT_QUEUE_CFG_ATOMIC_ONLY:
-			type = RTE_SCHED_TYPE_ATOMIC;
-			break;
-		case RTE_EVENT_QUEUE_CFG_ORDERED_ONLY:
-			type = RTE_SCHED_TYPE_ORDERED;
-			break;
-		case RTE_EVENT_QUEUE_CFG_PARALLEL_ONLY:
-			type = RTE_SCHED_TYPE_PARALLEL;
-			break;
-		case RTE_EVENT_QUEUE_CFG_ALL_TYPES:
-			SW_LOG_ERR("QUEUE_CFG_ALL_TYPES not supported\n");
-			return -ENOTSUP;
-		default:
-			SW_LOG_ERR("Unknown queue type %d requested\n",
-					conf->event_queue_cfg);
-			return -EINVAL;
-		}
+	} else if (RTE_EVENT_QUEUE_CFG_ALL_TYPES
+			& conf->event_queue_cfg) {
+		SW_LOG_ERR("QUEUE_CFG_ALL_TYPES not supported\n");
+		return -ENOTSUP;
 	}
 
 	struct sw_evdev *sw = sw_pmd_priv(dev);
@@ -400,7 +386,7 @@ sw_queue_def_conf(struct rte_eventdev *dev, uint8_t queue_id,
 	static const struct rte_event_queue_conf default_conf = {
 		.nb_atomic_flows = 4096,
 		.nb_atomic_order_sequences = 1,
-		.event_queue_cfg = RTE_EVENT_QUEUE_CFG_ATOMIC_ONLY,
+		.schedule_type = RTE_SCHED_TYPE_ATOMIC,
 		.priority = RTE_EVENT_DEV_PRIORITY_NORMAL,
 	};
diff --git a/examples/eventdev_pipeline_sw_pmd/main.c b/examples/eventdev_pipeline_sw_pmd/main.c
index 09b90c3..2e6787b 100644
--- a/examples/eventdev_pipeline_sw_pmd/main.c
+++ b/examples/eventdev_pipeline_sw_pmd/main.c
@@ -108,7 +108,7 @@ struct config_data {
 static struct config_data cdata = {
 	.num_packets = (1L << 25), /* do ~32M packets */
 	.num_fids = 512,
-	.queue_type = RTE_EVENT_QUEUE_CFG_ATOMIC_ONLY,
+	.queue_type = RTE_SCHED_TYPE_ATOMIC,
 	.next_qid = {-1},
 	.qid = {-1},
 	.num_stages = 1,
@@ -490,10 +490,10 @@ parse_app_args(int argc, char **argv)
 			cdata.enable_queue_priorities = 1;
 			break;
 		case 'o':
-			cdata.queue_type = RTE_EVENT_QUEUE_CFG_ORDERED_ONLY;
+			cdata.queue_type = RTE_SCHED_TYPE_ORDERED;
 			break;
 		case 'p':
-			cdata.queue_type = RTE_EVENT_QUEUE_CFG_PARALLEL_ONLY;
+			cdata.queue_type = RTE_SCHED_TYPE_PARALLEL;
 			break;
 		case 'q':
 			cdata.quiet = 1;
@@ -684,7 +684,7 @@ setup_eventdev(struct prod_data *prod_data,
 			.new_event_threshold = 4096,
 	};
 	struct rte_event_queue_conf wkr_q_conf = {
-			.event_queue_cfg = cdata.queue_type,
+			.schedule_type = cdata.queue_type,
 			.priority = RTE_EVENT_DEV_PRIORITY_NORMAL,
 			.nb_atomic_flows = 1024,
 			.nb_atomic_order_sequences = 1024,
@@ -751,11 +751,11 @@ setup_eventdev(struct prod_data *prod_data,
 	}
 
 	const char *type_str = "Atomic";
-	switch (wkr_q_conf.event_queue_cfg) {
-	case RTE_EVENT_QUEUE_CFG_ORDERED_ONLY:
+	switch (wkr_q_conf.schedule_type) {
+	case RTE_SCHED_TYPE_ORDERED:
 		type_str = "Ordered";
 		break;
-	case RTE_EVENT_QUEUE_CFG_PARALLEL_ONLY:
+	case RTE_SCHED_TYPE_PARALLEL:
 		type_str = "Parallel";
 		break;
 	}
@@ -907,9 +907,9 @@ main(int argc, char **argv)
 	printf("\tworkers: %u\n", cdata.num_workers);
 	printf("\tpackets: %"PRIi64"\n", cdata.num_packets);
 	printf("\tQueue-prio: %u\n", cdata.enable_queue_priorities);
-	if (cdata.queue_type == RTE_EVENT_QUEUE_CFG_ORDERED_ONLY)
+	if (cdata.queue_type == RTE_SCHED_TYPE_ORDERED)
 		printf("\tqid0 type: ordered\n");
-	if (cdata.queue_type == RTE_EVENT_QUEUE_CFG_ATOMIC_ONLY)
+	if (cdata.queue_type == RTE_SCHED_TYPE_ATOMIC)
 		printf("\tqid0 type: atomic\n");
 	printf("\tCores available: %u\n", rte_lcore_count());
 	printf("\tCores used: %u\n", cores_needed);
diff --git a/lib/librte_eventdev/rte_eventdev.c b/lib/librte_eventdev/rte_eventdev.c
index 378ccb5..db96552 100644
--- a/lib/librte_eventdev/rte_eventdev.c
+++ b/lib/librte_eventdev/rte_eventdev.c
@@ -517,13 +517,11 @@ is_valid_atomic_queue_conf(const struct rte_event_queue_conf *queue_conf)
 {
 	if (queue_conf &&
 		!(queue_conf->event_queue_cfg &
-		RTE_EVENT_QUEUE_CFG_SINGLE_LINK) && (
+		RTE_EVENT_QUEUE_CFG_SINGLE_LINK) &&
 		((queue_conf->event_queue_cfg &
-			RTE_EVENT_QUEUE_CFG_TYPE_MASK)
-			== RTE_EVENT_QUEUE_CFG_ALL_TYPES) ||
-		((queue_conf->event_queue_cfg &
-			RTE_EVENT_QUEUE_CFG_TYPE_MASK)
-			== RTE_EVENT_QUEUE_CFG_ATOMIC_ONLY)
+			RTE_EVENT_QUEUE_CFG_ALL_TYPES) ||
+		(queue_conf->schedule_type
+			== RTE_SCHED_TYPE_ATOMIC)
 		))
 		return 1;
 	else
@@ -535,13 +533,11 @@ is_valid_ordered_queue_conf(const struct rte_event_queue_conf *queue_conf)
 {
 	if (queue_conf &&
 		!(queue_conf->event_queue_cfg &
-		RTE_EVENT_QUEUE_CFG_SINGLE_LINK) && (
-		((queue_conf->event_queue_cfg &
-			RTE_EVENT_QUEUE_CFG_TYPE_MASK)
-			== RTE_EVENT_QUEUE_CFG_ALL_TYPES) ||
+		RTE_EVENT_QUEUE_CFG_SINGLE_LINK) &&
 		((queue_conf->event_queue_cfg &
-			RTE_EVENT_QUEUE_CFG_TYPE_MASK)
-			== RTE_EVENT_QUEUE_CFG_ORDERED_ONLY)
+			RTE_EVENT_QUEUE_CFG_ALL_TYPES) ||
+		(queue_conf->schedule_type
+			== RTE_SCHED_TYPE_ORDERED)
 		))
 		return 1;
 	else
diff --git a/lib/librte_eventdev/rte_eventdev.h b/lib/librte_eventdev/rte_eventdev.h
index 1dbc872..fa16f82 100644
--- a/lib/librte_eventdev/rte_eventdev.h
+++ b/lib/librte_eventdev/rte_eventdev.h
@@ -270,9 +270,9 @@ struct rte_mbuf; /* we just use mbuf pointers; no need to include rte_mbuf.h */
 #define RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES     (1ULL << 3)
 /**< Event device is capable of enqueuing events of any type to any queue.
  * If this capability is not set, the queue only supports events of the
- *  *RTE_EVENT_QUEUE_CFG_* type that it was created with.
+ *  *RTE_SCHED_TYPE_* type that it was created with.
  *
- * @see RTE_EVENT_QUEUE_CFG_* values
+ * @see RTE_SCHED_TYPE_* values
  */
 #define RTE_EVENT_DEV_CAP_BURST_MODE          (1ULL << 4)
 /**< Event device is capable of operating in burst mode for enqueue(forward,
@@ -515,39 +515,13 @@ rte_event_dev_configure(uint8_t dev_id,
 /* Event queue specific APIs */
 
 /* Event queue configuration bitmap flags */
-#define RTE_EVENT_QUEUE_CFG_TYPE_MASK          (3ULL << 0)
-/**< Mask for event queue schedule type configuration request */
-#define RTE_EVENT_QUEUE_CFG_ALL_TYPES          (0ULL << 0)
+#define RTE_EVENT_QUEUE_CFG_ALL_TYPES          (1ULL << 0)
 /**< Allow ATOMIC,ORDERED,PARALLEL schedule type enqueue
  *
  * @see RTE_SCHED_TYPE_ORDERED, RTE_SCHED_TYPE_ATOMIC, RTE_SCHED_TYPE_PARALLEL
  * @see rte_event_enqueue_burst()
 */
-#define RTE_EVENT_QUEUE_CFG_ATOMIC_ONLY        (1ULL << 0)
-/**< Allow only ATOMIC schedule type enqueue
- *
- * The rte_event_enqueue_burst() result is undefined if the queue configured
- * with ATOMIC only and sched_type != RTE_SCHED_TYPE_ATOMIC
- *
- * @see RTE_SCHED_TYPE_ATOMIC, rte_event_enqueue_burst()
- */
-#define RTE_EVENT_QUEUE_CFG_ORDERED_ONLY       (2ULL << 0)
-/**< Allow only ORDERED schedule type enqueue
- *
- * The rte_event_enqueue_burst() result is undefined if the queue configured
- * with ORDERED only and sched_type != RTE_SCHED_TYPE_ORDERED
- *
- * @see RTE_SCHED_TYPE_ORDERED, rte_event_enqueue_burst()
- */
-#define RTE_EVENT_QUEUE_CFG_PARALLEL_ONLY      (3ULL << 0)
-/**< Allow only PARALLEL schedule type enqueue
- *
- * The rte_event_enqueue_burst() result is undefined if the queue configured
- * with PARALLEL only and sched_type != RTE_SCHED_TYPE_PARALLEL
- *
- * @see RTE_SCHED_TYPE_PARALLEL, rte_event_enqueue_burst()
- */
-#define RTE_EVENT_QUEUE_CFG_SINGLE_LINK        (1ULL << 2)
+#define RTE_EVENT_QUEUE_CFG_SINGLE_LINK        (1ULL << 1)
 /**< This event queue links only to a single event port.
  *
  *  @see rte_event_port_setup(), rte_event_port_link()
@@ -558,8 +532,8 @@ struct rte_event_queue_conf {
 	uint32_t nb_atomic_flows;
 	/**< The maximum number of active flows this queue can track at any
 	 * given time. If the queue is configured for atomic scheduling (by
-	 * applying the RTE_EVENT_QUEUE_CFG_ALL_TYPES or
-	 * RTE_EVENT_QUEUE_CFG_ATOMIC_ONLY flags to event_queue_cfg), then the
+	 * applying the RTE_EVENT_QUEUE_CFG_ALL_TYPES flag to event_queue_cfg
+	 * or RTE_SCHED_TYPE_ATOMIC flag to schedule_type), then the
 	 * value must be in the range of [1, nb_event_queue_flows], which was
 	 * previously provided in rte_event_dev_configure().
 	 */
@@ -572,12 +546,18 @@ struct rte_event_queue_conf {
 	 * event will be returned from dequeue until one or more entries are
 	 * freed up/released.
 	 * If the queue is configured for ordered scheduling (by applying the
-	 * RTE_EVENT_QUEUE_CFG_ALL_TYPES or RTE_EVENT_QUEUE_CFG_ORDERED_ONLY
-	 * flags to event_queue_cfg), then the value must be in the range of
-	 * [1, nb_event_queue_flows], which was previously supplied to
-	 * rte_event_dev_configure().
+	 * RTE_EVENT_QUEUE_CFG_ALL_TYPES flag to event_queue_cfg or
+	 * RTE_SCHED_TYPE_ORDERED flag to schedule_type), then the value must
+	 * be in the range of [1, nb_event_queue_flows], which was
+	 * previously supplied to rte_event_dev_configure().
+	 */
+	uint32_t event_queue_cfg;
+	/**< Queue cfg flags(EVENT_QUEUE_CFG_) */
+	uint8_t schedule_type;
+	/**< Queue schedule type(RTE_SCHED_TYPE_*).
+	 * Valid when RTE_EVENT_QUEUE_CFG_ALL_TYPES bit is not set in
+	 * event_queue_cfg.
 	 */
-	uint32_t event_queue_cfg; /**< Queue cfg flags(EVENT_QUEUE_CFG_) */
 	uint8_t priority;
 	/**< Priority for this event queue relative to other event queues.
 	 * The requested priority should in the range of
diff --git a/test/test/test_eventdev.c b/test/test/test_eventdev.c
index d6ade78..4118b75 100644
--- a/test/test/test_eventdev.c
+++ b/test/test/test_eventdev.c
@@ -300,15 +300,13 @@ test_eventdev_queue_setup(void)
 	/* Negative cases */
 	ret = rte_event_queue_default_conf_get(TEST_DEV_ID, 0, &qconf);
 	TEST_ASSERT_SUCCESS(ret, "Failed to get queue0 info");
-	qconf.event_queue_cfg =	(RTE_EVENT_QUEUE_CFG_ALL_TYPES &
-		 RTE_EVENT_QUEUE_CFG_TYPE_MASK);
+	qconf.event_queue_cfg = RTE_EVENT_QUEUE_CFG_ALL_TYPES;
 	qconf.nb_atomic_flows = info.max_event_queue_flows + 1;
 	ret = rte_event_queue_setup(TEST_DEV_ID, 0, &qconf);
 	TEST_ASSERT(ret == -EINVAL, "Expected -EINVAL, %d", ret);
 
 	qconf.nb_atomic_flows = info.max_event_queue_flows;
-	qconf.event_queue_cfg =	(RTE_EVENT_QUEUE_CFG_ORDERED_ONLY &
-		 RTE_EVENT_QUEUE_CFG_TYPE_MASK);
+	qconf.schedule_type = RTE_SCHED_TYPE_ORDERED;
 	qconf.nb_atomic_order_sequences = info.max_event_queue_flows + 1;
 	ret = rte_event_queue_setup(TEST_DEV_ID, 0, &qconf);
 	TEST_ASSERT(ret == -EINVAL, "Expected -EINVAL, %d", ret);
@@ -423,7 +421,7 @@ test_eventdev_queue_attr_nb_atomic_flows(void)
 		/* Assume PMD doesn't support atomic flows, return early */
 		return -ENOTSUP;
 
-	qconf.event_queue_cfg = RTE_EVENT_QUEUE_CFG_ATOMIC_ONLY;
+	qconf.schedule_type = RTE_SCHED_TYPE_ATOMIC;
 
 	for (i = 0; i < (int)queue_count; i++) {
 		ret = rte_event_queue_setup(TEST_DEV_ID, i, &qconf);
@@ -466,7 +464,7 @@ test_eventdev_queue_attr_nb_atomic_order_sequences(void)
 		/* Assume PMD doesn't support reordering */
 		return -ENOTSUP;
 
-	qconf.event_queue_cfg = RTE_EVENT_QUEUE_CFG_ORDERED_ONLY;
+	qconf.schedule_type = RTE_SCHED_TYPE_ORDERED;
 
 	for (i = 0; i < (int)queue_count; i++) {
 		ret = rte_event_queue_setup(TEST_DEV_ID, i, &qconf);
@@ -507,7 +505,7 @@ test_eventdev_queue_attr_event_queue_cfg(void)
 	ret = rte_event_queue_default_conf_get(TEST_DEV_ID, 0, &qconf);
 	TEST_ASSERT_SUCCESS(ret, "Failed to get queue0 def conf");
 
-	qconf.event_queue_cfg = RTE_EVENT_QUEUE_CFG_PARALLEL_ONLY;
+	qconf.event_queue_cfg = RTE_EVENT_QUEUE_CFG_SINGLE_LINK;
 
 	for (i = 0; i < (int)queue_count; i++) {
 		ret = rte_event_queue_setup(TEST_DEV_ID, i, &qconf);
diff --git a/test/test/test_eventdev_sw.c b/test/test/test_eventdev_sw.c
index 7219886..dea302f 100644
--- a/test/test/test_eventdev_sw.c
+++ b/test/test/test_eventdev_sw.c
@@ -219,7 +219,7 @@ create_lb_qids(struct test *t, int num_qids, uint32_t flags)
 
 	/* Q creation */
 	const struct rte_event_queue_conf conf = {
-			.event_queue_cfg = flags,
+			.schedule_type = flags,
 			.priority = RTE_EVENT_DEV_PRIORITY_NORMAL,
 			.nb_atomic_flows = 1024,
 			.nb_atomic_order_sequences = 1024,
@@ -242,20 +242,20 @@ create_lb_qids(struct test *t, int num_qids, uint32_t flags)
 static inline int
 create_atomic_qids(struct test *t, int num_qids)
 {
-	return create_lb_qids(t, num_qids, RTE_EVENT_QUEUE_CFG_ATOMIC_ONLY);
+	return create_lb_qids(t, num_qids, RTE_SCHED_TYPE_ATOMIC);
 }
 
 static inline int
 create_ordered_qids(struct test *t, int num_qids)
 {
-	return create_lb_qids(t, num_qids, RTE_EVENT_QUEUE_CFG_ORDERED_ONLY);
+	return create_lb_qids(t, num_qids, RTE_SCHED_TYPE_ORDERED);
 }
 
 
 static inline int
 create_unordered_qids(struct test *t, int num_qids)
 {
-	return create_lb_qids(t, num_qids, RTE_EVENT_QUEUE_CFG_PARALLEL_ONLY);
+	return create_lb_qids(t, num_qids, RTE_SCHED_TYPE_PARALLEL);
 }
 
 static inline int
@@ -1238,7 +1238,7 @@ port_reconfig_credits(struct test *t)
 	const uint32_t NUM_ITERS = 32;
 	for (i = 0; i < NUM_ITERS; i++) {
 		const struct rte_event_queue_conf conf = {
-			.event_queue_cfg = RTE_EVENT_QUEUE_CFG_ATOMIC_ONLY,
+			.schedule_type = RTE_SCHED_TYPE_ATOMIC,
 			.priority = RTE_EVENT_DEV_PRIORITY_NORMAL,
 			.nb_atomic_flows = 1024,
 			.nb_atomic_order_sequences = 1024,
@@ -1320,7 +1320,7 @@ port_single_lb_reconfig(struct test *t)
 
 	static const struct rte_event_queue_conf conf_lb_atomic = {
 		.priority = RTE_EVENT_DEV_PRIORITY_NORMAL,
-		.event_queue_cfg = RTE_EVENT_QUEUE_CFG_ATOMIC_ONLY,
+		.schedule_type = RTE_SCHED_TYPE_ATOMIC,
 		.nb_atomic_flows = 1024,
 		.nb_atomic_order_sequences = 1024,
 	};
@@ -1818,7 +1818,7 @@ ordered_reconfigure(struct test *t)
 	}
 
 	const struct rte_event_queue_conf conf = {
-			.event_queue_cfg = RTE_EVENT_QUEUE_CFG_ORDERED_ONLY,
+			.schedule_type = RTE_SCHED_TYPE_ORDERED,
 			.priority = RTE_EVENT_DEV_PRIORITY_NORMAL,
 			.nb_atomic_flows = 1024,
 			.nb_atomic_order_sequences = 1024,
@@ -1865,7 +1865,7 @@ qid_priorities(struct test *t)
 	for (i = 0; i < 3; i++) {
 		/* Create QID */
 		const struct rte_event_queue_conf conf = {
-			.event_queue_cfg = RTE_EVENT_QUEUE_CFG_ATOMIC_ONLY,
+			.schedule_type = RTE_SCHED_TYPE_ATOMIC,
 			/* increase priority (0 == highest), as we go */
 			.priority = RTE_EVENT_DEV_PRIORITY_NORMAL - i,
 			.nb_atomic_flows = 1024,
-- 
2.7.4
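
A usage note on the reworked flags: since RTE_EVENT_QUEUE_CFG_ALL_TYPES
is now a standalone bit rather than a value under a type mask, an
application can probe the RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES capability
and fall back to a fixed schedule type when the PMD lacks it. A minimal
sketch, not part of this patch (setup_queue_any_type is a hypothetical
helper):

#include <rte_eventdev.h>

/* Sketch: request an any-type queue when the PMD advertises
 * RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES, otherwise pin the queue to
 * atomic scheduling. schedule_type is only consulted when the
 * RTE_EVENT_QUEUE_CFG_ALL_TYPES bit is not set in event_queue_cfg.
 */
static int
setup_queue_any_type(uint8_t dev_id, uint8_t queue_id)
{
	struct rte_event_dev_info info;
	struct rte_event_queue_conf conf;
	int ret;

	ret = rte_event_dev_info_get(dev_id, &info);
	if (ret < 0)
		return ret;

	ret = rte_event_queue_default_conf_get(dev_id, queue_id, &conf);
	if (ret < 0)
		return ret;

	if (info.event_dev_cap & RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES)
		conf.event_queue_cfg |= RTE_EVENT_QUEUE_CFG_ALL_TYPES;
	else
		conf.schedule_type = RTE_SCHED_TYPE_ATOMIC;

	return rte_event_queue_setup(dev_id, queue_id, &conf);
}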