Date: Tue, 28 Mar 2017 16:13:02 +0530
From: Jerin Jacob <jerin.jacob@caviumnetworks.com>
To: "Van Haaren, Harry"
Cc: "dev@dpdk.org", "Richardson, Bruce"
Message-ID: <20170328104301.ysxnlgyxvnqfv674@localhost.localdomain>
Subject: Re: [dpdk-dev] [PATCH v5 06/20] event/sw: add support for event queues
List-Id: DPDK patches and discussions

On Mon, Mar 27, 2017 at 03:17:48PM +0000, Van Haaren, Harry wrote:
> > From: Jerin Jacob [mailto:jerin.jacob@caviumnetworks.com]
> > Sent: Monday, March 27, 2017 8:45 AM
> > To: Van Haaren, Harry
> > Cc: dev@dpdk.org; Richardson, Bruce
> > Subject: Re: [PATCH v5 06/20] event/sw: add support
> > for event queues
> >
> > On Fri, Mar 24, 2017 at 04:53:01PM +0000, Harry van Haaren wrote:
> > > From: Bruce Richardson
> > >
> > > Add in the data structures for the event queues, and the eventdev
> > > functions to create and destroy those queues.
> > >
> > > Signed-off-by: Bruce Richardson
> > > Signed-off-by: Harry van Haaren
> > > ---
> > >
> > > +static int32_t
> > > +qid_init(struct sw_evdev *sw, unsigned int idx, int type,
> > > +		const struct rte_event_queue_conf *queue_conf)
> > > +{
> > > +	unsigned int i;
> > > +	int dev_id = sw->data->dev_id;
> > > +	int socket_id = sw->data->socket_id;
> > > +	char buf[IQ_RING_NAMESIZE];
> > > +	struct sw_qid *qid = &sw->qids[idx];
> > > +
> > > +	for (i = 0; i < SW_IQS_MAX; i++) {
> >
> > Just for my understanding, are 4 (SW_IQS_MAX) iq rings created to
> > address a different priority for each enqueue operation? What is the
> > significance of 4 (SW_IQS_MAX) here?
>
> Yes, each IQ represents a priority level. There is a compile-time define
> (SW_IQS_MAX) which allows setting the number of internal queues at each
> queue stage. The default number of priorities is currently 4.

OK. The reason I asked is because, if I understood it correctly,
PRIO_TO_IQ is not normalizing the priority correctly when
SW_IQS_MAX == 4.

I thought the following mapping would be the correct normalization for
SW_IQS_MAX == 4. What do you think?

priority     -> iq
  0 -  63    -> 0
 64 - 127    -> 1
128 - 191    -> 2
192 - 255    -> 3

Snippet from the header file:

	uint8_t priority;
	/**< Event priority relative to other events in the
	 * event queue. The requested priority should be in the
	 * range of [RTE_EVENT_DEV_PRIORITY_HIGHEST,
	 * RTE_EVENT_DEV_PRIORITY_LOWEST].
	 * The implementation shall normalize the requested
	 * priority to supported priority value.
	 * Valid when the device has
	 * RTE_EVENT_DEV_CAP_EVENT_QOS capability.
	 */
> > > +static int
> > > +sw_queue_setup(struct rte_eventdev *dev, uint8_t queue_id,
> > > +		const struct rte_event_queue_conf *conf)
> > > +{
> > > +	int type;
> > > +
> > > +	switch (conf->event_queue_cfg) {
> > > +	case RTE_EVENT_QUEUE_CFG_SINGLE_LINK:
> > > +		type = SW_SCHED_TYPE_DIRECT;
> > > +		break;
> >
> > event_queue_cfg is a bitmap. It is valid to have
> > RTE_EVENT_QUEUE_CFG_SINGLE_LINK | RTE_EVENT_QUEUE_CFG_ATOMIC_ONLY,
> > i.e. an atomic schedule type queue that has only one port linked to
> > dequeue the events.
> > So in the above context, the switch case is not correct, i.e. the
> > combination falls through to the default case. Right? Is this
> > intentional?
> >
> > If I understand it correctly, based on the use case (queue-based
> > event pipelining) you have shared in the documentation patch,
> > RTE_EVENT_QUEUE_CFG_SINGLE_LINK is used for the last stage (last
> > queue). One option is: if the SW PMD cannot support
> > RTE_EVENT_QUEUE_CFG_SINGLE_LINK | RTE_EVENT_QUEUE_CFG_ATOMIC_ONLY
> > mode, then even though the application sets
> > RTE_EVENT_QUEUE_CFG_SINGLE_LINK | RTE_EVENT_QUEUE_CFG_ATOMIC_ONLY,
> > the driver can ignore RTE_EVENT_QUEUE_CFG_ATOMIC_ONLY. But I am not
> > sure about the case where the application sets
> > RTE_EVENT_QUEUE_CFG_SINGLE_LINK in the middle of the pipeline.
> >
> > Thoughts?
>
> I don't like the idea of the SW PMD ignoring flags for queues - the PMD
> has no idea if the queue is the final or middle of the pipeline, as it
> is the application's usage which defines that.
>
> Does anybody have a need for a queue to be both Atomic *and*
> Single-link? I understand the current API doesn't prohibit it, but I
> don't see the actual use-case in which that may be useful. Atomic
> implies load-balancing is occurring; single link implies there is only
> one consuming core. Those seem like opposites to me?
> Unless anybody sees value in queues having both, I suggest we update
> the documentation to specify that a queue is either load balanced or
> single-link, and that setting both flags will result in -ENOTSUP being
> returned. (This check can be added to the EventDev layer if consistent
> for all PMDs.)

If I understand it correctly (based on the previous discussions), HW
implementations (Cavium or NXP) do not need to use the
RTE_EVENT_QUEUE_CFG_* flags for their operations (the sched type will be
derived from event.sched_type on enqueue). So that means we are free to
tailor the header file based on the SW PMD requirements here. But
semantically it has to be in line with the rest of the header file. We
can work together to make it happen.

A few questions for everyone's benefit:

1) Does RTE_EVENT_QUEUE_CFG_SINGLE_LINK have any meaning other than "an
event queue linked only to a single port"? Based on the discussions, it
was added to the header file so that the SW PMD can know upfront that
only a single port will be linked to the given event queue. It was added
as an optimization for the SW PMD. Does it have any functional
expectation?

2) Based on the following topology given in the documentation patch for
queue-based event pipelining:

  rx_port    w1_port
         \  /       \
          qid0 - w2_port - qid1
              \          /     \
               w3_port          tx_port

a) I understand rx_port is feeding events to qid0.

b) But do you see any issue with the following model? IMO, it scales
well linearly based on the number of cores available to work (since it
is ATOMIC to ATOMIC). Nothing wrong with qid1 just connecting to
tx_port; I am just trying to understand the rationale behind it.

  rx_port    w1_port         w1_port
         \  /       \       /
          qid0 - w2_port - qid1 - w2_port
              \          /      \
               w3_port           w3_port

3)
> Does anybody have a need for a queue to be both Atomic *and*
> Single-link? I understand the current API doesn't prohibit it, but I
> don't see the actual use-case in which that may be useful. Atomic
> implies load-balancing is occurring; single link implies there is only
> one consuming core. Those seem like opposites to me?
I can think of the following use case:

Topology:

  rx_port    w1_port
         \  /       \
          qid0 - w2_port - qid1
              \          /     \
               w3_port          tx_port

Use case: queue-based event pipelining, an ORDERED (stage 1) to ATOMIC
(stage 2) pipeline:
- for ingress order maintenance
- for executing stage 1 in parallel for better scaling

i.e. a fat flow can spray over N cores while still maintaining the
ingress order when it is sent out on the wire (after consuming from
tx_port).

I am not sure how the SW PMD works for the ingress order maintenance
use case, but the HW and the header file expect this form.

Snippet from the header file:
--
 * The source flow ordering from an event queue is maintained when events are
 * enqueued to their destination queue within the same ordered flow context.
 *
 * Events from the source queue appear in their original order when dequeued
 * from a destination queue.
--

Here qid0 is the source queue with ORDERED sched_type and qid1 is the
destination queue with ATOMIC sched_type. qid1 can be linked to only one
port (tx_port).

Are we on the same page? If not, let me know the differences and we will
try to accommodate them in the header file.

> Counter-thoughts?

> > > +	case RTE_EVENT_QUEUE_CFG_ATOMIC_ONLY:
> > > +		type = RTE_SCHED_TYPE_ATOMIC;
> > > +		break;
> > > +	case RTE_EVENT_QUEUE_CFG_ORDERED_ONLY:
> > > +		type = RTE_SCHED_TYPE_ORDERED;
> > > +		break;
> > > +	case RTE_EVENT_QUEUE_CFG_PARALLEL_ONLY:
> > > +		type = RTE_SCHED_TYPE_PARALLEL;
> > > +		break;
> > > +	case RTE_EVENT_QUEUE_CFG_ALL_TYPES:
> > > +		SW_LOG_ERR("QUEUE_CFG_ALL_TYPES not supported\n");
> > > +		return -ENOTSUP;
> > > +	default:
> > > +		SW_LOG_ERR("Unknown queue type %d requested\n",
> > > +				conf->event_queue_cfg);
> > > +		return -EINVAL;
> > > +	}
> > > +
> > > +	struct sw_evdev *sw = sw_pmd_priv(dev);
> > > +	return qid_init(sw, queue_id, type, conf);
> > > +}