From mboxrd@z Thu Jan  1 00:00:00 1970
From: Jerin Jacob
Date: Thu, 8 Dec 2016 10:11:40 +0530
To: "Eads, Gage"
CC: "dev@dpdk.org", "Richardson, Bruce", "Van Haaren, Harry",
 "hemant.agrawal@nxp.com"
Subject: Re: [dpdk-dev] [RFC PATCH] eventdev: add buffered enqueue and flush APIs
Message-ID: <20161208044139.GA24793@svelivela-lt.caveonetworks.com>
References: <1480707956-17187-1-git-send-email-gage.eads@intel.com>
 <1480707956-17187-2-git-send-email-gage.eads@intel.com>
 <20161202211847.GA14577@localhost.localdomain>
 <60DABA4C-E3E8-4768-B2E4-BB97C6421A50@intel.com>
In-Reply-To: <60DABA4C-E3E8-4768-B2E4-BB97C6421A50@intel.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
User-Agent: Mutt/1.7.1 (2016-10-04)
List-Id: DPDK patches and discussions

On Mon, Dec 05, 2016 at 11:30:46PM +0000, Eads, Gage wrote:
> 
> > On Dec 3, 2016, at 5:18 AM, Jerin Jacob wrote:
> > 
> >> On Fri, Dec 02, 2016 at 01:45:56PM -0600, Gage Eads wrote:
> >> This commit adds buffered enqueue functionality to the eventdev API.
> >> It is conceptually similar to the ethdev API's tx buffering, however
> >> with a smaller API surface and no dropping of events.
> > 
> > Hello Gage,
> > Different implementations may have different strategies to hold the
> > buffers.
> 
> A benefit of inlining the buffering logic in the header is that we avoid
> the overhead of entering the PMD for what is a fairly simple operation
> (common case: add an event to an array, increment a counter). If we make
> this implementation-defined (i.e. use PMD callbacks), we lose that
> benefit.

In general, I agree from the system perspective.
But there are a few general issues with the eventdev integration part:

1) What if the burst has ATOMIC flows and we are NOT enqueuing to the
implementation? Then other event ports won't get the packets from the
same ATOMIC tag. BAD. Right?
2) At least in our HW implementation, the event buffer strategy is more
like: only if you enqueue to the HW do you get further events from
dequeue, provided op == RTE_EVENT_OP_FORWARD. So buffering will create a
deadlock, i.e. the application cannot hold events with
RTE_EVENT_OP_FORWARD.
3) So, considering the above case, there is nothing like a flush for us.
4) In a real high-throughput benchmark case, we will get the packets at
the max burst rate, and then we always need to memcpy before we flush.
Otherwise there will be an ordering issue, as a burst can give us packets
from different flows (unlike polling mode).

> 
> > and some do not need to hold the buffers if it is DDR backed.
> 
> Though DDR-backed hardware doesn't need to buffer in software, doing so
> would reduce the software overhead of enqueueing. Compared to N
> individual calls to enqueue, buffering N events then calling enqueue
> burst once can benefit from amortized (or parallelized) PMD-specific
> bookkeeping and error-checking across the set of events, and will
> definitely benefit from the amortized function call overhead and better
> I-cache behavior. (Essentially this is VPP from the fd.io project.) This
> should result in higher overall event throughput (agnostic of the
> underlying device).

See above. I am not against burst processing in the "application". The
flush does not make sense for us from the HW perspective, and it is
costly for us if we try to generalize it.

> I'm skeptical that other buffering strategies would emerge, but I can
> only speculate on Cavium/NXP/etc. NPU software.
> 
> > IMHO, this may not be the candidate for common code. I guess you can
> > move this to the driver side and abstract it under the SW driver's
> > enqueue_burst.
> 
> I don't think that will work without adding a flush API, otherwise we
> could have indefinitely buffered events. I see three ways forward:

I agree. A more portable way is to move the "flush" to the implementation
and "flush" whenever it makes sense to the PMD.

> 
> - The proposed approach
> - Add the proposed functions but make them implementation-specific.
> - Require the application to write its own buffering logic (i.e. no API
>   change)

I think, if the additional function call overhead cost is too much for
the SW implementation, then we can think of an implementation-specific
API or a custom application flow based on the SW driver. But I am not a
fan of that (though tempted to nowadays); if we take that route, we have
a truckload of custom implementation-specific APIs, and then we try to
hide all the black magic under enqueue/dequeue to make it portable at
some expense.
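
To make the third option above concrete, here is a minimal sketch of
application-level buffering built only on rte_event_enqueue_burst().
APP_BURST_SIZE, struct app_enq_buffer and the helper names are
illustrative, not part of any proposed API:

#include <errno.h>
#include <string.h>
#include <rte_eventdev.h>

#define APP_BURST_SIZE 16 /* illustrative burst size */

struct app_enq_buffer {
	uint16_t count;
	struct rte_event events[APP_BURST_SIZE];
};

/* Drain the buffer; returns how many events the PMD accepted. */
static inline uint16_t
app_enq_flush(uint8_t dev_id, uint8_t port_id, struct app_enq_buffer *buf)
{
	uint16_t n = rte_event_enqueue_burst(dev_id, port_id,
					     buf->events, buf->count);

	/* Keep the events the PMD did not accept (backpressure). */
	if (n < buf->count)
		memmove(buf->events, &buf->events[n],
			(buf->count - n) * sizeof(buf->events[0]));
	buf->count -= n;
	return n;
}

/* Buffer one event, draining the buffer first if it is full. */
static inline int
app_buffered_enqueue(uint8_t dev_id, uint8_t port_id,
		     struct app_enq_buffer *buf, const struct rte_event *ev)
{
	if (buf->count == APP_BURST_SIZE &&
	    app_enq_flush(dev_id, port_id, buf) == 0)
		return -ENOSPC; /* nothing drained, ev was not buffered */

	buf->events[buf->count++] = *ev;
	return 0;
}

Such a sketch has the same caveats as points 1) and 2) above: on HW like
ours, the application still must not sit on RTE_EVENT_OP_FORWARD events.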
> 
> Thanks,
> Gage
> 
> > 
> >> 
> >> Signed-off-by: Gage Eads
> >> ---
> >>  lib/librte_eventdev/rte_eventdev.c |  29 ++++++++++
> >>  lib/librte_eventdev/rte_eventdev.h | 106 +++++++++++++++++++++++++++++++++++++
> >>  2 files changed, 135 insertions(+)
> >> 
> >> diff --git a/lib/librte_eventdev/rte_eventdev.c b/lib/librte_eventdev/rte_eventdev.c
> >> index 17ce5c3..564573f 100644
> >> --- a/lib/librte_eventdev/rte_eventdev.c
> >> +++ b/lib/librte_eventdev/rte_eventdev.c
> >> @@ -219,6 +219,7 @@
> >>  	uint16_t *links_map;
> >>  	uint8_t *ports_dequeue_depth;
> >>  	uint8_t *ports_enqueue_depth;
> >> +	struct rte_eventdev_enqueue_buffer *port_buffers;
> >>  	unsigned int i;
> >> 
> >>  	EDEV_LOG_DEBUG("Setup %d ports on device %u", nb_ports,
> >> @@ -272,6 +273,19 @@
> >>  				"nb_ports %u", nb_ports);
> >>  			return -(ENOMEM);
> >>  		}
> >> +
> >> +		/* Allocate memory to store port enqueue buffers */
> >> +		dev->data->port_buffers =
> >> +			rte_zmalloc_socket("eventdev->port_buffers",
> >> +			sizeof(dev->data->port_buffers[0]) * nb_ports,
> >> +			RTE_CACHE_LINE_SIZE, dev->data->socket_id);
> >> +		if (dev->data->port_buffers == NULL) {
> >> +			dev->data->nb_ports = 0;
> >> +			EDEV_LOG_ERR("failed to get memory for port enq"
> >> +					" buffers, nb_ports %u", nb_ports);
> >> +			return -(ENOMEM);
> >> +		}
> >> +
> >>  	} else if (dev->data->ports != NULL && nb_ports != 0) {/* re-config */
> >>  		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->port_release, -ENOTSUP);
> >> 
> >> @@ -279,6 +293,7 @@
> >>  		ports_dequeue_depth = dev->data->ports_dequeue_depth;
> >>  		ports_enqueue_depth = dev->data->ports_enqueue_depth;
> >>  		links_map = dev->data->links_map;
> >> +		port_buffers = dev->data->port_buffers;
> >> 
> >>  		for (i = nb_ports; i < old_nb_ports; i++)
> >>  			(*dev->dev_ops->port_release)(ports[i]);
> >> @@ -324,6 +339,17 @@
> >>  			return -(ENOMEM);
> >>  		}
> >> 
> >> +		/* Realloc memory to store port enqueue buffers */
> >> +		port_buffers = rte_realloc(dev->data->port_buffers,
> >> +			sizeof(dev->data->port_buffers[0]) * nb_ports,
> >> +			RTE_CACHE_LINE_SIZE);
> >> +		if (port_buffers == NULL) {
> >> +			dev->data->nb_ports = 0;
> >> +			EDEV_LOG_ERR("failed to realloc mem for port enq"
> >> +					" buffers, nb_ports %u", nb_ports);
> >> +			return -(ENOMEM);
> >> +		}
> >> +
> >>  		if (nb_ports > old_nb_ports) {
> >>  			uint8_t new_ps = nb_ports - old_nb_ports;
> >> 
> >> @@ -336,12 +362,15 @@
> >>  			memset(links_map +
> >>  				(old_nb_ports * RTE_EVENT_MAX_QUEUES_PER_DEV),
> >>  				0, sizeof(ports_enqueue_depth[0]) * new_ps);
> >> +			memset(port_buffers + old_nb_ports, 0,
> >> +				sizeof(port_buffers[0]) * new_ps);
> >>  		}
> >> 
> >>  		dev->data->ports = ports;
> >>  		dev->data->ports_dequeue_depth = ports_dequeue_depth;
> >>  		dev->data->ports_enqueue_depth = ports_enqueue_depth;
> >>  		dev->data->links_map = links_map;
> >> +		dev->data->port_buffers = port_buffers;
> >>  	} else if (dev->data->ports != NULL && nb_ports == 0) {
> >>  		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->port_release, -ENOTSUP);
> >> 
> >> diff --git a/lib/librte_eventdev/rte_eventdev.h b/lib/librte_eventdev/rte_eventdev.h
> >> index 778d6dc..3f24342 100644
> >> --- a/lib/librte_eventdev/rte_eventdev.h
> >> +++ b/lib/librte_eventdev/rte_eventdev.h
> >> @@ -246,6 +246,7 @@
> >>  #include 
> >>  #include 
> >>  #include 
> >> +#include 
> >> 
> >>  #define EVENTDEV_NAME_SKELETON_PMD event_skeleton
> >>  /**< Skeleton event device PMD name */
> >> @@ -965,6 +966,26 @@ typedef uint16_t (*event_dequeue_burst_t)(void *port, struct rte_event ev[],
> >> 
> >>  #define RTE_EVENTDEV_NAME_MAX_LEN	(64)
> >>  /**< @internal Max length of name of event PMD */
> >> 
> >> +#define RTE_EVENT_BUF_MAX 16
> >> +/**< Maximum number of events in an enqueue buffer. */
> >> +
> >> +/**
> >> + * @internal
> >> + * An enqueue buffer for each port.
> >> + *
> >> + * The reason this struct is in the header is for inlining the function calls
> >> + * to enqueue, as doing a function call per packet would incur significant
> >> + * performance overhead.
> >> + *
> >> + * \see rte_event_enqueue_buffer(), rte_event_enqueue_buffer_flush()
> >> + */
> >> +struct rte_eventdev_enqueue_buffer {
> >> +	/**< Count of events in this buffer */
> >> +	uint16_t count;
> >> +	/**< Array of events in this buffer */
> >> +	struct rte_event events[RTE_EVENT_BUF_MAX];
> >> +} __rte_cache_aligned;
> >> +
> >>  /**
> >>   * @internal
> >>   * The data part, with no function pointers, associated with each device.
> >> @@ -983,6 +1004,8 @@ struct rte_eventdev_data {
> >>  	/**< Number of event ports. */
> >>  	void **ports;
> >>  	/**< Array of pointers to ports. */
> >> +	struct rte_eventdev_enqueue_buffer *port_buffers;
> >> +	/**< Array of port enqueue buffers. */
> >>  	uint8_t *ports_dequeue_depth;
> >>  	/**< Array of port dequeue depth. */
> >>  	uint8_t *ports_enqueue_depth;
> >> @@ -1132,6 +1155,89 @@ struct rte_eventdev {
> >>  }
> >> 
> >>  /**
> >> + * Flush the enqueue buffer of the event port specified by *port_id*, in the
> >> + * event device specified by *dev_id*.
> >> + *
> >> + * This function attempts to flush as many of the buffered events as possible,
> >> + * and returns the number of flushed events. Any unflushed events remain in
> >> + * the buffer.
> >> + *
> >> + * @param dev_id
> >> + *   The identifier of the device.
> >> + * @param port_id
> >> + *   The identifier of the event port.
> >> + *
> >> + * @return
> >> + *   The number of event objects actually flushed to the event device.
> >> + *
> >> + * \see rte_event_enqueue_buffer(), rte_event_enqueue_burst()
> >> + * \see rte_event_port_enqueue_depth()
> >> + */
> >> +static inline int
> >> +rte_event_enqueue_buffer_flush(uint8_t dev_id, uint8_t port_id)
> >> +{
> >> +	struct rte_eventdev *dev = &rte_eventdevs[dev_id];
> >> +	struct rte_eventdev_enqueue_buffer *buf =
> >> +			&dev->data->port_buffers[port_id];
> >> +	int n;
> >> +
> >> +	n = rte_event_enqueue_burst(dev_id, port_id, buf->events, buf->count);
> >> +
> >> +	if (n != buf->count)
> >> +		memmove(buf->events, &buf->events[n],
> >> +			(buf->count - n) * sizeof(struct rte_event));
> >> +
> >> +	buf->count -= n;
> >> +
> >> +	return n;
> >> +}
> >> +
> >> +/**
> >> + * Buffer an event object supplied in *rte_event* structure for future
> >> + * enqueueing on an event device designated by its *dev_id* through the event
> >> + * port specified by *port_id*.
> >> + *
> >> + * This function takes a single event and buffers it for later enqueuing to the
> >> + * queue specified in the event structure. If the buffer is full, the
> >> + * function will attempt to flush the buffer before buffering the event.
> >> + * If the flush operation fails, the previously buffered events remain in the
> >> + * buffer and an error is returned to the user to indicate that *ev* was not
> >> + * buffered.
> >> + *
> >> + * @param dev_id
> >> + *   The identifier of the device.
> >> + * @param port_id
> >> + *   The identifier of the event port.
> >> + * @param ev
> >> + *   Pointer to struct rte_event
> >> + *
> >> + * @return
> >> + *   - 0 on success
> >> + *   - <0 on failure. Failure can occur if the event port's output queue is
> >> + *     backpressured, for instance.
> >> + *
> >> + * \see rte_event_enqueue_buffer_flush(), rte_event_enqueue_burst()
> >> + * \see rte_event_port_enqueue_depth()
> >> + */
> >> +static inline int
> >> +rte_event_enqueue_buffer(uint8_t dev_id, uint8_t port_id, struct rte_event *ev)
> >> +{
> >> +	struct rte_eventdev *dev = &rte_eventdevs[dev_id];
> >> +	struct rte_eventdev_enqueue_buffer *buf =
> >> +			&dev->data->port_buffers[port_id];
> >> +	int ret;
> >> +
> >> +	/* If necessary, flush the enqueue buffer to make space for ev. */
> >> +	if (buf->count == RTE_EVENT_BUF_MAX) {
> >> +		ret = rte_event_enqueue_buffer_flush(dev_id, port_id);
> >> +		if (ret == 0)
> >> +			return -ENOSPC;
> >> +	}
> >> +
> >> +	rte_memcpy(&buf->events[buf->count++], ev, sizeof(struct rte_event));
> >> +	return 0;
> >> +}
> >> +
> >> +/**
> >>   * Converts nanoseconds to *wait* value for rte_event_dequeue()
> >>   *
> >>   * If the device is configured with RTE_EVENT_DEV_CFG_PER_DEQUEUE_WAIT flag then
> >> -- 
> >> 1.9.1
> >> 
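
For reference, a sketch of how a worker loop might drive the proposed
rte_event_enqueue_buffer()/rte_event_enqueue_buffer_flush() pair from the
patch above. The worker function, the done flag and the single-event
dequeue are assumptions made for the example, not part of the RFC:

#include <stdbool.h>
#include <rte_eventdev.h>

static volatile bool done; /* illustrative termination flag */

static void
worker(uint8_t dev_id, uint8_t port_id)
{
	struct rte_event ev;

	while (!done) {
		/* Dequeue one event at a time (wait == 0). */
		if (rte_event_dequeue_burst(dev_id, port_id, &ev, 1, 0) == 0)
			continue;

		/* ... process the event ... */
		ev.op = RTE_EVENT_OP_FORWARD;

		/*
		 * Buffered enqueue; flushes internally once the buffer
		 * fills. -ENOSPC means the port is backpressured, so retry.
		 * A real worker would likely back off instead of spinning.
		 */
		while (rte_event_enqueue_buffer(dev_id, port_id, &ev) != 0)
			;
	}

	/* Push out whatever is still sitting in the enqueue buffer. */
	while (rte_event_enqueue_buffer_flush(dev_id, port_id) > 0)
		;
}

This also shows why flush placement matters: anything left in the buffer
at loop exit stays invisible to the scheduler until it is flushed.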