From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Wed, 11 Jan 2017 19:26:03 +0530
From: Jerin Jacob
To: Cristian Dumitrescu
Message-ID: <20170111135600.GA25163@localhost.localdomain>
References: <1480529810-95280-1-git-send-email-cristian.dumitrescu@intel.com>
In-Reply-To: <1480529810-95280-1-git-send-email-cristian.dumitrescu@intel.com>
User-Agent: Mutt/1.7.1 (2016-10-04)
Subject: Re: [dpdk-dev] [RFC] ethdev: abstraction layer for QoS hierarchical scheduler

On Wed, Nov 30, 2016 at 06:16:50PM +0000, Cristian Dumitrescu wrote:
> This RFC proposes an ethdev-based abstraction layer for Quality of Service (QoS)
> hierarchical scheduler. The goal of the abstraction layer is to provide a simple
> generic API that is agnostic of the underlying HW, SW or mixed HW-SW complex
> implementation.

Thanks Cristian for bringing up this RFC. This will help in integrating
NPU QoS hierarchical schedulers into DPDK. Overall the RFC looks very
good as a generic traffic manager. However, as an NPU HW vendor, we feel
we need to expose some of the HW constraints and HW-specific features in
a generic way in this specification so that it can be used effectively
with HW-based implementations.
I will describe the HW constraints and HW features associated with the
hardware-based hierarchical scheduler found in Cavium SoCs inline. IMO,
if other HW vendors share their constraints on "hardware-based
hierarchical scheduler" then we could arrive at a realistic HW/SW
abstraction for the hierarchical scheduler.

>
> Q1: What is the benefit for having an abstraction layer for QoS hierarchical
> layer?
> A1: There is growing interest in the industry for handling various HW-based,
> SW-based or mixed hierarchical scheduler implementations using a unified DPDK
> API.

Yes.

> Q4: Why have this abstraction layer into ethdev as opposed to a new type of
> device (e.g. scheddev) similar to ethdev, cryptodev, eventdev, etc?
> A4: Packets are sent to the Ethernet device using the ethdev API
> rte_eth_tx_burst() function, with the hierarchical scheduling taking place
> automatically (i.e. no SW intervention) in HW implementations. Basically, the
> hierarchical scheduler is done as part of packet TX operation.
> The hierarchical scheduler is typically the last stage before packet TX and it
> is tightly integrated with the TX stage. The hierarchical scheduler is just
> another offload feature of the Ethernet device, which needs to be accommodated
> by the ethdev API similar to any other offload feature (such as RSS, DCB,
> flow director, etc).
> Once the decision to schedule a specific packet has been taken, this packet
> cannot be dropped and it has to be sent over the wire as is, otherwise what
> takes place on the wire is not what was planned at scheduling time, so the
> scheduling is not accurate (Note: there are some devices which allow prepending
> headers to the packet after the scheduling stage at the expense of sending
> correction requests back to the scheduler, but this only strengthens the bond
> between scheduling and TX).

Makes sense.

>
> Q5: Given that the packet scheduling takes place automatically for pure HW
> implementations, how does packet scheduling take place for poll-mode SW
> implementations?
> A5: The API provided function rte_sched_run() is designed to take care of this.
> For HW implementations, this function typically does nothing. For SW
> implementations, this function is typically expected to perform dequeue of
> packets from the hierarchical scheduler and their write to Ethernet device TX
> queue, periodic flush of any buffers on enqueue-side into the hierarchical
> scheduler for burst-oriented implementations, etc.
>

Yes. In addition to that, if rte_sched_run() does nothing (as in a HW
implementation) then the _application_ should not call it at all. I think
we need to introduce a "service core" concept in DPDK to make this fully
transparent from an application perspective.
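Just to illustrate the point, here is a rough sketch of the polling loop
an application would otherwise have to run itself. rte_sched_run() is the
name from the RFC; its exact signature, the force_quit flag and the lcore
id used here are assumptions for illustration only:

#include <stdbool.h>
#include <stdint.h>
#include <rte_lcore.h>

static volatile bool force_quit;
static uint8_t sched_port_id;

static int
sched_service_loop(void *arg)
{
	(void)arg;
	/* For a pure HW scheduler this loop spins for nothing, which is
	 * why the application needs a way to know it is not required. */
	while (!force_quit)
		rte_sched_run(sched_port_id);
	return 0;
}

/* In main(), after the port has been started:
 *	rte_eal_remote_launch(sched_service_loop, NULL, service_lcore_id);
 */

A dedicated service core hiding this loop inside the framework would keep
the fast path identical for HW and SW implementations.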
> Q6: Which are the scheduling algorithms supported?
> A6: The fundamental scheduling algorithms that are supported are Strict Priority
> (SP) and Weighted Fair Queuing (WFQ). The SP and WFQ algorithms are supported at
> the level of each node of the scheduling hierarchy, regardless of the node
> level/position in the tree. The SP algorithm is used to schedule between sibling
> nodes with different priority, while WFQ is used to schedule between groups of
> siblings that have the same priority.
> Algorithms such as Weighted Round Robin (WRR), byte-level WRR, Deficit WRR
> (DWRR), etc are considered approximations of the ideal WFQ and are therefore
> assimilated to WFQ, although an associated implementation-dependent accuracy,
> performance and resource usage trade-off might exist.

Makes sense.

>
> Q7: Which are the supported congestion management algorithms?
> A7: Tail drop, head drop and Weighted Random Early Detection (WRED). They are
> available for every leaf node in the hierarchy, subject to the specific
> implementation supporting them.

We don't support tail drop, head drop or WRED per leaf node in the
hierarchy. Instead, in some sense, it is integrated into the HW mempool
block at ingress. So maybe we can have some sort of capability or info
API that exposes the scheduler capabilities to the application, so that
it gets the big picture instead of trying individual resource APIs from
the spec.

We do have support for querying the available free entries in a leaf
queue to figure out the load, but it may not be worthwhile to start a
service core (rte_sched_run()) just to implement this part of the spec,
due to the multi-core communication overhead. Instead, using the HW
support (a means to get the depth of the leaf queue), the
application/library can do congestion management itself. Thoughts? Does
any other HW vendor support egress congestion management in HW?

>
> Q8: Is traffic shaping supported?
> A8: Yes, there are a number of shapers (rate limiters) that can be supported for
> each node in the hierarchy (built-in limit is currently set to 4 per node). Each
> shaper can be private to a node (used only by that node) or shared between
> multiple nodes.

Makes sense. We have dual-rate shapers (very similar to RFC 2697 and
RFC 2698) at all the nodes (obviously, only a single rate at the last
node, the one closest to the physical port). Just to understand: when we
say 4 shapers per node, is that four different rate limiters per node? Is
there an RFC for a four-rate limiter, like the single-rate (RFC 2697) and
two-rate (RFC 2698) ones?

>
> Q9: What is the purpose of having shaper profiles and WRED profiles?
> A9: In most implementations, many shapers typically share the same configuration
> parameters, so defining shaper profiles simplifies the configuration task. Same
> considerations apply to WRED contexts and profiles.

Makes sense.

> Q11: Are on-the-fly changes of the scheduling hierarchy allowed by the API?
> A11: Yes. The actual changes take place subject to the specific implementation
> supporting them, otherwise error code is returned.

On-the-fly changes of the scheduling hierarchy are tricky in a HW
implementation and come with a lot of constraints. Returning an error
code is fine, but we need to define what it takes to reconfigure the
hierarchy if on-the-fly reconfiguration is not supported.

The high-level constraints for reconfiguring the hierarchy in our HW are:
1) Stop adding additional packets to the leaf nodes
2) Wait for the packets to drain out of the nodes

Point (2) is internal to the implementation, so we can manage it. For
point (1), I guess the application may need to know the constraint.

>
> Q13: Which are the possible options for the user when the Ethernet port does not
> support the scheduling hierarchy required by the user?
> A13: The following options are available to the user:
> i) abort
> ii) try out a new hierarchy (e.g. with less leaf nodes), if acceptable

As mentioned earlier, an additional API to get the capabilities would
help here. Some other capabilities that we believe would be useful to
applications:
1) maximum number of levels
2) maximum number of nodes per level
3) whether congestion management is supported
4) maximum priority per node

At the very least this would be useful for writing the example
application.
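Something along these lines would be enough for us (a sketch only; none
of these names exist in the RFC):

#include <stdint.h>

struct rte_eth_sched_capabilities {
	uint32_t max_levels;          /* maximum depth of the hierarchy */
	uint32_t max_nodes_per_level; /* maximum number of nodes on any level */
	uint32_t max_priority;        /* maximum SP priority accepted per node */
	int cman_supported;           /* tail drop/head drop/WRED available? */
};

int rte_eth_sched_capabilities_get(uint8_t port_id,
	struct rte_eth_sched_capabilities *cap);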
> iii) wrap the Ethernet device into a new type of Ethernet device that has a SW
> front-end implementing the hierarchical scheduler (e.g. existing DPDK library
> librte_sched); instantiate the new device type on-the-fly and check if the
> hierarchy requirements can be met by the new device.

Do we want to wrap it into a new Ethernet device, or let the application
use the software library directly? If it is the former, are we planning a
generic SW-based driver for this, so that NICs without HW support can
just reuse the SW driver instead of duplicating the code in all the PMD
drivers?

>
>
> Signed-off-by: Cristian Dumitrescu
> ---
> lib/librte_ether/rte_ethdev.h | 794 ++++++++++++++++++++++++++++++++++++++++++
> 1 file changed, 794 insertions(+)
> mode change 100644 => 100755 lib/librte_ether/rte_ethdev.h
>
> diff --git a/lib/librte_ether/rte_ethdev.h b/lib/librte_ether/rte_ethdev.h
> old mode 100644
> new mode 100755
> index 9678179..d4d8604
> --- a/lib/librte_ether/rte_ethdev.h
> +++ b/lib/librte_ether/rte_ethdev.h
> @@ -182,6 +182,8 @@ extern "C" {
> #include
> #include
> #include
> +#include
> +#include

[snip]

> +
> +enum rte_eth_sched_stats_counter {
> +	/**< Number of packets scheduled from current node. */
> +	RTE_ETH_SCHED_STATS_COUNTER_N_PKTS = 1 << 0,
> +	/**< Number of bytes scheduled from current node. */
> +	RTE_ETH_SCHED_STATS_COUNTER_N_BYTES = 1 << 1,
> +	RTE_ETH_SCHED_STATS_COUNTER_N_PKTS_DROPPED = 1 << 2,
> +	RTE_ETH_SCHED_STATS_COUNTER_N_BYTES_DROPPED = 1 << 3,
> +	/**< Number of packets currently waiting in the packet queue of current
> +	leaf node. */
> +	RTE_ETH_SCHED_STATS_COUNTER_N_PKTS_QUEUED = 1 << 4,
> +	/**< Number of bytes currently waiting in the packet queue of current
> +	leaf node. */
> +	RTE_ETH_SCHED_STATS_COUNTER_N_BYTES_QUEUED = 1 << 5,

Some other counters seen in HW implementations, coming from the shapers
(rate limiters), are RED_PACKETS, RED_BYTES, YELLOW_PACKETS, YELLOW_BYTES,
GREEN_PACKETS and GREEN_BYTES.
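For example (a sketch only; these names are invented here and not part of
the RFC), the counter enum could grow color-aware entries in the same
bitmask style, continuing after the existing 1 << 5 entry:

	/* possible additions to enum rte_eth_sched_stats_counter */
	RTE_ETH_SCHED_STATS_COUNTER_N_PKTS_GREEN    = 1 << 6,
	RTE_ETH_SCHED_STATS_COUNTER_N_BYTES_GREEN   = 1 << 7,
	RTE_ETH_SCHED_STATS_COUNTER_N_PKTS_YELLOW   = 1 << 8,
	RTE_ETH_SCHED_STATS_COUNTER_N_BYTES_YELLOW  = 1 << 9,
	RTE_ETH_SCHED_STATS_COUNTER_N_PKTS_RED      = 1 << 10,
	RTE_ETH_SCHED_STATS_COUNTER_N_BYTES_RED     = 1 << 11,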
> +};
> +
> +/**
> + * Node statistics counters
> + */
> +struct rte_eth_sched_node_stats {
> +	/**< Number of packets scheduled from current node. */
> +	uint64_t n_pkts;
> +	/**< Number of bytes scheduled from current node. */
> +	uint64_t n_bytes;
> +	/**< Statistics counters for leaf nodes only */

We don't have support for stats on all the nodes. Since you have
rte_eth_sched_node_stats_get_enabled(), we are good.

> +	struct {
> +		/**< Number of packets dropped by current leaf node. */
> +		uint64_t n_pkts_dropped;
> +		/**< Number of bytes dropped by current leaf node. */
> +		uint64_t n_bytes_dropped;
> +		/**< Number of packets currently waiting in the packet queue of
> +		current leaf node. */
> +		uint64_t n_pkts_queued;
> +		/**< Number of bytes currently waiting in the packet queue of
> +		current leaf node. */
> +		uint64_t n_bytes_queued;
> +	} leaf;

The leaf stats look good to us.

> +};
> +
> /**
> + * Scheduler WRED profile add
> + *
> + * Create a new WRED profile with ID set to *wred_profile_id*. The new profile
> + * is used to create one or several WRED contexts.
> + *
> + * @param port_id
> + *   The port identifier of the Ethernet device.
> + * @param wred_profile_id
> + *   WRED profile ID for the new profile. Needs to be unused.
> + * @param profile
> + *   WRED profile parameters. Needs to be pre-allocated and valid.
> + * @return
> + *   0 on success, non-zero error code otherwise.
> + */
> +int rte_eth_sched_wred_profile_add(uint8_t port_id,
> +	uint32_t wred_profile_id,
> +	struct rte_eth_sched_wred_params *profile);

How about returning wred_profile_id from the driver? That looks like the
easier way to manage it from the driver perspective (the driver can pass
the same handle for similar profiles and embed some other information in
an opaque number), and it is kind of the norm, i.e.:

int rte_eth_sched_wred_profile_add(uint8_t port_id,
	struct rte_eth_sched_wred_params *profile);

> +/**
> + * Scheduler node add or remap
> + *
> + * When *node_id* is not a valid node ID, a new node with this ID is created and
> + * connected as child to the existing node identified by *parent_node_id*.
> + *
> + * When *node_id* is a valid node ID, this node is disconnected from its current
> + * parent and connected as child to another existing node identified by
> + * *parent_node_id*.
> + *
> + * This function can be called during port initialization phase (before the
> + * Ethernet port is started) for building the scheduler start-up hierarchy.
> + * Subject to the specific Ethernet port supporting on-the-fly scheduler
> + * hierarchy updates, this function can also be called during run-time (after
> + * the Ethernet port is started).
> + *
> + * @param port_id
> + *   The port identifier of the Ethernet device.
> + * @param node_id
> + *   Node ID
> + * @param parent_node_id
> + *   Parent node ID. Needs to be the valid.
> + * @param params
> + *   Node parameters. Needs to be pre-allocated and valid.
> + * @return
> + *   0 on success, non-zero error code otherwise.

IMO, we need an explicit error number to differentiate a configuration
error caused by the Ethernet port having already been started. And on
receiving such an error code, we need to define the procedure for
reconfiguring the topology. The recent rte_flow spec has its own error
codes to give more visibility into the failure, so that the application
can choose better attributes for configuration. For example, some of the
limitations in our HW are:
1) priorities are from 0 to 9 (error type: PRIORITY_NOT_SUPPORTED)
2) DWRR is applicable only to one set of equal priorities per
child-to-parent connection; e.g. sibling priorities 0-1-1-1-2-3 are
valid, while 0-1-1-1-3-2-(2) is invalid.
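Something like the following, modelled loosely on the rte_flow error
reporting, would give enough visibility (a sketch only; none of these
names are part of the RFC):

enum rte_eth_sched_error_type {
	RTE_ETH_SCHED_ERROR_TYPE_NONE = 0,
	RTE_ETH_SCHED_ERROR_TYPE_PORT_STARTED,     /* hierarchy frozen */
	RTE_ETH_SCHED_ERROR_TYPE_PRIORITY_NOT_SUPPORTED,
	RTE_ETH_SCHED_ERROR_TYPE_WFQ_GROUPING_NOT_SUPPORTED,
	RTE_ETH_SCHED_ERROR_TYPE_LEVELS_EXCEEDED,
};

struct rte_eth_sched_error {
	enum rte_eth_sched_error_type type;
	const void *cause;   /* object that triggered the error, if any */
	const char *message; /* human-readable message, may be NULL */
};

/* rte_eth_sched_node_add() and friends could then take an optional
 * struct rte_eth_sched_error * out parameter, as rte_flow does. */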
> + */
> +int rte_eth_sched_node_add(uint8_t port_id,
> +	uint32_t node_id,
> +	uint32_t parent_node_id,
> +	struct rte_eth_sched_node_params *params);
> +
> +/**
> + *
> + * @param port_id
> + *   The port identifier of the Ethernet device.
> + * @param node_id
> + *   Node ID. Needs to be valid.
> + * @param queue_id
> + *   Queue ID. Needs to be valid.
> + * @return
> + *   0 on success, non-zero error code otherwise.
> + */
> +int rte_eth_sched_node_queue_set(uint8_t port_id,
> +	uint32_t node_id,
> +	uint32_t queue_id);
> +

In a HW-based implementation, the leaf node id is the same as the
tx_queue_id, since the hierarchical scheduling is tightly coupled with
the TX queues (i.e. the leaf nodes). Do we need such a translation, i.e.
specifying "queue_id" in struct rte_eth_sched_node_params, given that the
TX queues are already expressed as 0..n? How about making the leaf node
id the same as the TX queue id? There is no such translation in our HW,
so it may be difficult to implement. Do we really need this translation?

Other points:

The HW can't understand any SW marking schemes applied at the ingress
classification level. For us, at the leaf node level all packets are in
color-unaware mode, with the input color set to green (aka color-blind
mode). On the subsequent levels, the HW adds color metadata to the packet
based on the shapers. With the above scheme, we have a few features where
we need to figure out how to abstract them in a generic way, based on the
SW implementation or other HW vendors' constraints:

1) If the last level's color metadata is YELLOW, the HW can mark (write)
3 bits in the packet. This is useful for sharing the color info across
two different systems (e.g. updating the IP diffserv bits).

2) The need for an additional shaping parameter called _adjust_.
Typically the conditioning and scheduling algorithms are measured in
bytes of IP packets per second. We have a _signed_ adjust field (-255 to
255), and it looks like other HW implementations have one as well, to
express the packet length with reference to the L2 length: a positive
value to include the L1 header (typically 20B of Ethernet preamble and
Inter Frame Gap), and a negative value to strip the L2 + VLAN headers and
take only the IP length, etc.

/Jerin
Cavium