From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Wed, 23 Nov 2016 01:30:23 +0530
From: Jerin Jacob
To: "Eads, Gage"
CC: "dev@dpdk.org", "Richardson, Bruce", "Van Haaren, Harry", "hemant.agrawal@nxp.com"
Message-ID: <20161122200022.GA12168@svelivela-lt.caveonetworks.com>
In-Reply-To: <9184057F7FC11744A2107296B6B8EB1E01E32F3E@FMSMSX108.amr.corp.intel.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
User-Agent: Mutt/1.7.1 (2016-10-04)
Subject: Re: [dpdk-dev] [PATCH 2/4] eventdev: implement the northbound APIs
List-Id: patches and discussions about DPDK

On Tue, Nov 22, 2016 at 07:43:03PM +0000, Eads, Gage wrote:
> > > > > One open issue I noticed is that the "typical workflow" description
> > > > > starting at rte_eventdev.h:204 conflicts with the centralized
> > > > > software PMD that Harry posted last week. Specifically, that PMD
> > > > > expects a single core to call the schedule function. We could
> > > > > extend the documentation to account for this alternative style of
> > > > > scheduler invocation, or discuss ways to make the software PMD work
> > > > > with the documented workflow. I prefer the former, but either way I
> > > > > think we ought to expose the scheduler's expected usage to the
> > > > > user -- perhaps through an RTE_EVENT_DEV_CAP flag?
> > > >
> > > > I prefer the former too; you can propose the documentation change
> > > > required for the software PMD.
> > >
> > > Sure, proposal follows. The "typical workflow" isn't the most optimal,
> > > having a conditional in the fast path, of course, but it demonstrates
> > > the idea simply.
> > >
> > > (line 204)
> > >  * An event driven based application has following typical workflow on
> > >  * fastpath:
> > >  * \code{.c}
> > >  *	while (1) {
> > >  *
> > >  *		if (dev_info.event_dev_cap &
> > >  *			RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED)
> > >  *			rte_event_schedule(dev_id);
> >
> > Yes, I like the idea of RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED.
> > It can be input to the application/subsystem to launch separate
> > core(s) for the schedule functions.
> > But I think the "dev_info.event_dev_cap &
> > RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED" check can be moved inside the
> > implementation (to make better decisions and avoid consuming cycles
> > on HW based schedulers).
>
> How would this check work? Wouldn't it prevent any core from running
> the software scheduler in the centralized case?

I guess we may not need an RTE_EVENT_DEV_CAP flag here; instead, a flag
for device configure would do:

#define RTE_EVENT_DEV_CFG_DISTRIBUTED_SCHED (1ULL << 1)

struct rte_event_dev_config config;
config.event_dev_cfg = RTE_EVENT_DEV_CFG_DISTRIBUTED_SCHED;
rte_event_dev_configure(.., &config);

On the driver side, on configure:

if (config.event_dev_cfg & RTE_EVENT_DEV_CFG_DISTRIBUTED_SCHED)
	eventdev->schedule = NULL;
else /* centralized case */
	eventdev->schedule = your_centralized_schedule_function;

Does that work?

> > >  *
> > >  *		rte_event_dequeue(...);
> > >  *
> > >  *		(event processing)
> > >  *
> > >  *		rte_event_enqueue(...);
> > >  *	}
> > >  * \endcode
> > >  *
> > >  * The *schedule* operation is intended to do event scheduling, and
> > >  * the *dequeue* operation returns the scheduled events. An
> > >  * implementation is free to define the semantics between *schedule*
> > >  * and *dequeue*. For example, a system based on a hardware scheduler
> > >  * can define its rte_event_schedule() to be a NOOP, whereas a
> > >  * software scheduler can use the *schedule* operation to schedule
> > >  * events. The RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED capability flag
> > >  * indicates whether rte_event_schedule() should be called by all
> > >  * cores or by a single (typically dedicated) core.
> > >
> > > (line 308)
> > > #define RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED (1ULL << 2)
> > > /**< Event scheduling implementation is distributed and all cores
> > >  * must execute rte_event_schedule(). If unset, the implementation
> > >  * is centralized and a single core must execute the schedule
> > >  * operation.
> > >  *
> > >  * \see rte_event_schedule()
> > >  */
> > >
> > > > > On the same note, if the software PMD based workflow needs
> > > > > separate core(s) for the schedule function, can we hide that from
> > > > > the API specification and pass an argument to the SW PMD to
> > > > > define the scheduling core(s)?
> > > > >
> > > > > Something like --vdev=eventsw0,schedule_cmask=0x2
> > >
> > > An API for controlling the scheduler coremask instead of (or perhaps
> > > in addition to) the vdev argument would be good, to allow runtime
> > > control. I can imagine apps that scale the number of cores based on
> > > load, and in doing so may want to migrate the scheduler to a
> > > different core.
> >
> > Yes, an API for the number of scheduler cores looks OK. But if we are
> > going to have the service core approach, then we just need to specify
> > it in one place, as the application will not be creating the service
> > functions.
> >
> > > > Just a thought,
> > > >
> > > > Perhaps we could introduce a generic "service" cores concept in
> > > > DPDK to hide the requirement where the implementation needs a
> > > > dedicated core to do certain work. I guess it would be useful for
> > > > other NPU integration in DPDK.
> > >
> > > That's an interesting idea. As you suggested in the other thread,
> > > this concept could be extended to the "producer" code in the example
> > > for configurations where the NIC requires software to feed into the
> > > eventdev. And to the other subsystems mentioned in your original
> > > PDF, crypto and timer.
> >
> > Yes. Producers should come in the service core category. I think that
> > enables us to have better NPU integration (same application code for
> > NPU vs non-NPU).