From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Tue, 28 Mar 2017 23:06:11 +0530
From: Jerin Jacob <jerin.jacob@caviumnetworks.com>
To: "Van Haaren, Harry" <harry.van.haaren@intel.com>
Cc: "dev@dpdk.org", "Richardson, Bruce"
Message-ID: <20170328173610.3hi6wyqvdpx2lo7e@localhost.localdomain>
References: <489175012-101439-1-git-send-email-harry.van.haaren@intel.com>
 <1490374395-149320-1-git-send-email-harry.van.haaren@intel.com>
 <1490374395-149320-7-git-send-email-harry.van.haaren@intel.com>
 <20170327074011.fgodyrhquabj54r2@localhost.localdomain>
 <20170328104301.ysxnlgyxvnqfv674@localhost.localdomain>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
User-Agent: NeoMutt/20170306 (1.8.0)
Subject: Re: [dpdk-dev] [PATCH v5 06/20] event/sw: add support for event queues
List-Id: DPDK patches and discussions

On Tue, Mar 28, 2017 at 12:42:27PM +0000, Van Haaren, Harry wrote:
> > From: Jerin Jacob [mailto:jerin.jacob@caviumnetworks.com]
> > Sent: Tuesday, March 28, 2017 11:43 AM
> > To: Van Haaren, Harry
> > Cc: dev@dpdk.org; Richardson, Bruce
> > Subject: Re: [PATCH v5 06/20] event/sw: add support for event queues
> >
> > On Mon, Mar 27, 2017 at 03:17:48PM +0000, Van Haaren, Harry wrote:
> > > > From: Jerin Jacob [mailto:jerin.jacob@caviumnetworks.com]
> > > > Sent: Monday, March 27, 2017 8:45 AM
> > > > To: Van Haaren, Harry
> > > > Cc: dev@dpdk.org; Richardson, Bruce
> > > > Subject: Re: [PATCH v5 06/20] event/sw: add support for event queues
> > > >
> > > > Just for my understanding: are the 4 (SW_IQS_MAX) IQ rings created to address
> > > > a different priority for each enqueue operation? What is the significance of
> > > > 4 (SW_IQS_MAX) here?
> > >
> > > Yes, each IQ represents a priority level. There is a compile-time define (SW_IQS_MAX) which
> > > allows setting the number of internal queues at each queue stage. The default number of
> > > priorities is currently 4.
> >
> > OK. The reason I asked is that, if I understood it correctly, PRIO_TO_IQ
> > does not normalize the priority correctly when SW_IQS_MAX == 4.
> >
> > I thought the following mapping would be the correct normalization for
> > SW_IQS_MAX == 4.
> >
> > What do you think?
>
> Good catch - agreed, will fix.
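
(For illustration only: one normalization that spreads the 8-bit event priority
range evenly over the internal queues would look like the sketch below. The macro
name PRIO_TO_IQ is the one discussed above, but the arithmetic here is an
editorial sketch, not necessarily the mapping proposed in the thread or the fix
applied in the driver.)

    #include <stdint.h>

    #define SW_IQS_MAX 4	/* number of internal queues (priority levels) */

    /*
     * Sketch: map the 8-bit rte_event priority (0 = highest, 255 = lowest)
     * onto SW_IQS_MAX internal queues so each IQ covers an equal share of
     * the range. For SW_IQS_MAX == 4 this gives:
     *   0..63 -> IQ 0, 64..127 -> IQ 1, 128..191 -> IQ 2, 192..255 -> IQ 3
     */
    #define PRIO_TO_IQ(prio) (((uint32_t)(prio) * SW_IQS_MAX) / 256)
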
>
> > > > > +static int
> > > > > +sw_queue_setup(struct rte_eventdev *dev, uint8_t queue_id,
> > > > > +		const struct rte_event_queue_conf *conf)
> > > > > +{
> > > > > +	int type;
> > > > > +
> > > > > +	switch (conf->event_queue_cfg) {
> > > > > +	case RTE_EVENT_QUEUE_CFG_SINGLE_LINK:
> > > > > +		type = SW_SCHED_TYPE_DIRECT;
> > > > > +		break;
> > > >
> > > > event_queue_cfg is a bitmap. It is valid to have
> > > > RTE_EVENT_QUEUE_CFG_SINGLE_LINK | RTE_EVENT_QUEUE_CFG_ATOMIC_ONLY,
> > > > i.e. an atomic schedule type queue that has only one port linked to
> > > > dequeue the events.
> > > > So in the above context, the switch case is not correct, i.e.
> > > > it goes to the default condition. Right?
> > > > Is this intentional?
> > > >
> > > > If I understand it correctly, based on the use case (group-based event
> > > > pipelining) you have shared in the documentation patch,
> > > > RTE_EVENT_QUEUE_CFG_SINGLE_LINK is used for the last stage (last queue).
> > > > One option is, if the SW PMD cannot support the
> > > > RTE_EVENT_QUEUE_CFG_SINGLE_LINK | RTE_EVENT_QUEUE_CFG_ATOMIC_ONLY mode,
> > > > then even though the application sets RTE_EVENT_QUEUE_CFG_SINGLE_LINK |
> > > > RTE_EVENT_QUEUE_CFG_ATOMIC_ONLY, the driver can ignore
> > > > RTE_EVENT_QUEUE_CFG_ATOMIC_ONLY. But I am not sure about the case where the
> > > > application sets RTE_EVENT_QUEUE_CFG_SINGLE_LINK in the middle of the pipeline.
> > > >
> > > > Thoughts?
> > >
> > > I don't like the idea of the SW PMD ignoring flags for queues - the PMD has no idea if the
> > > queue is the final or middle stage of the pipeline, as it is the application's usage which
> > > defines that.
> > >
> > > Does anybody have a need for a queue to be both Atomic *and* Single-link? I understand the
> > > current API doesn't prohibit it, but I don't see the actual use-case in which that may be
> > > useful. Atomic implies load-balancing is occurring, single link implies there is only one
> > > consuming core. Those seem like opposites to me?
> > >
> > > Unless anybody sees value in queues having both, I suggest we update the documentation to
> > > specify that a queue is either load balanced or single-link, and that setting both flags
> > > will result in -ENOTSUP being returned. (This check can be added to the EventDev layer if
> > > consistent for all PMDs.)
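
(To make the bitmap point concrete, a minimal sketch of flag handling for queue
setup is shown below. The EX_* constants and the helper name are placeholders:
the real RTE_EVENT_QUEUE_CFG_* values may share a bit-field rather than being
independent bits, and the actual sw driver code may differ.)

    #include <errno.h>
    #include <stdint.h>

    #define EX_QUEUE_CFG_ATOMIC_ONLY  (1u << 0)   /* placeholder flag values */
    #define EX_QUEUE_CFG_SINGLE_LINK  (1u << 1)

    #define EX_SCHED_TYPE_ATOMIC 0
    #define EX_SCHED_TYPE_DIRECT 1

    /*
     * Treat the queue configuration as a bitmap: test individual flags
     * rather than switching on the whole value, and reject the
     * combination the SW PMD cannot support with -ENOTSUP.
     */
    static int
    example_queue_type(uint32_t event_queue_cfg)
    {
    	int single_link = !!(event_queue_cfg & EX_QUEUE_CFG_SINGLE_LINK);
    	int atomic_only = !!(event_queue_cfg & EX_QUEUE_CFG_ATOMIC_ONLY);

    	if (single_link && atomic_only)
    		return -ENOTSUP;		/* unsupported combination */
    	if (single_link)
    		return EX_SCHED_TYPE_DIRECT;	/* bypass load balancing */
    	return EX_SCHED_TYPE_ATOMIC;		/* default load-balanced type */
    }
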
> >
> > If I understand it correctly (based on the previous discussions),
> > HW implementations (Cavium or NXP) do not
> > need to use the RTE_EVENT_QUEUE_CFG_* flags for their operations (the sched type
> > will be derived from event.sched_type on enqueue). So that means we are
> > free to tailor the header file based on the SW PMD requirement on this.
> > But semantically it has to be in line with the rest of the header file. We can
> > work together to make it happen.
>
> OK :)
>
> > A few questions for everyone's benefit:
> >
> > 1) Does RTE_EVENT_QUEUE_CFG_SINGLE_LINK have any meaning other than an
> > event queue linked to only a single port? Based on the discussions, it was
> > added to the header file so that the SW PMD can know upfront that only a single
> > port will be linked to the given event queue. It is added as an optimization for
> > the SW PMD. Does it have any functional expectation?
>
> In the context of the SW PMD, SINGLE_LINK means that a specific queue and port have a unique
> relationship in that there is only one connection. This allows bypassing of the Atomic, Ordering
> and Load-Balancing code. The result is a good performance increase, particularly if the worker
> port dequeue depth is large, as then large bursts of packets can be dequeued with little overhead.
>
> As a result, (ATOMIC | SINGLE_LINK) is not a supported combination for the sw pmd queue types.
> To be more precise, a SINGLE_LINK is its own queue type, and can not be OR-ed with any other type.
>
> > 2) Based on the following topology given in the documentation patch for queue-based
> > event pipelining:
> >
> >    rx_port    w1_port
> >           \  /       \
> >            qid0 - w2_port - qid1
> >               \       /         \
> >                w3_port           tx_port
> >
> > a) I understand that rx_port is feeding events to qid0.
> > b) But do you see any issue with the following model? IMO, it scales
> > linearly based on the number of cores available to work (since it is ATOMIC to
> > ATOMIC). There is nothing wrong with qid1 connecting only to tx_port; I am just
> > trying to understand the rationale behind it.
> >
> >    rx_port    w1_port       w1_port
> >           \  /       \     /
> >            qid0 - w2_port - qid1 - w2_port
> >               \       /         \
> >                w3_port           w3_port
>
> This is also a valid model from the SW eventdev.

OK. If I understand it correctly, in the above topology, even if you make qid2 ATOMIC,
the SW PMD will not maintain ingress order when events come out of qid1 on different workers.
A SINGLE_LINK queue with one port attached is required at the end of the pipeline, or wherever
ordering has to be maintained. Is my understanding correct?

> The value of using a SINGLE_LINK at the end of a pipeline is:
> A) it can TX all traffic on a single core (using a single queue)
> B) re-ordering of traffic from the previous stage is possible
>
> To illustrate (B), a very simple pipeline here:
>
> RX port -> QID #1 (Ordered) -> workers (eg 4 ports) -> QID #2 (SINGLE_LINK to tx) -> TX port
>
> Here, QID #1 is allowed to send the packets out of order to the 4 worker ports - because they
> are later passed back to the eventdev for re-ordering before they get to the SINGLE_LINK stage,
> and then TX in the correct order.
>
> > 3)
> > > Does anybody have a need for a queue to be both Atomic *and* Single-link? I understand the
> > > current API doesn't prohibit it, but I don't see the actual use-case in which that may be
> > > useful. Atomic implies load-balancing is occurring, single link implies there is only one
> > > consuming core. Those seem like opposites to me?
> >
> > I can think about the following use case:
> >
> > topology:
> >
> >    rx_port    w1_port
> >           \  /       \
> >            qid0 - w2_port - qid1
> >               \       /         \
> >                w3_port           tx_port
> >
> > Use case:
> >
> > Queue-based event pipelining:
> > ORDERED (Stage 1) to ATOMIC (Stage 2) pipeline:
> > - For ingress order maintenance
> > - For executing Stage 1 in parallel for better scaling,
> >   i.e. a fat flow can spray over N cores while maintaining the ingress
> >   order when it sends out on the wire (after consuming from tx_port)
> >
> > I am not sure how the SW PMD works in the use case of ingress order maintenance.
>
> I think my illustration of (B) above is the same use-case as you have here.
> Instead of using an ATOMIC stage2, the SW PMD benefits from using the SINGLE_LINK port/queue,
> and the SINGLE_LINK queue ensures ingress order is also egress order to the TX port.
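
(A minimal configuration sketch of the pipeline illustrated above: an ORDERED first
stage plus a SINGLE_LINK final stage linked to one TX port. The queue and port
numbers, the reorder-window size and the use of RTE_EVENT_QUEUE_CFG_ORDERED_ONLY
are illustrative assumptions against the eventdev API of this period, not code from
the patch series.)

    #include <rte_eventdev.h>

    /* Illustrative numbering: queue 0 = ORDERED stage, queue 1 = SINGLE_LINK
     * stage drained by a single TX port (port 4 here). */
    #define ORDERED_QID	0
    #define TX_QID		1
    #define TX_PORT_ID	4

    static int
    setup_pipeline_queues(uint8_t dev_id)
    {
    	struct rte_event_queue_conf ordered_conf = {
    		.event_queue_cfg = RTE_EVENT_QUEUE_CFG_ORDERED_ONLY,
    		.nb_atomic_order_sequences = 1024,	/* example reorder window */
    		.priority = RTE_EVENT_DEV_PRIORITY_NORMAL,
    	};
    	struct rte_event_queue_conf tx_conf = {
    		.event_queue_cfg = RTE_EVENT_QUEUE_CFG_SINGLE_LINK,
    		.priority = RTE_EVENT_DEV_PRIORITY_NORMAL,
    	};
    	uint8_t tx_qid = TX_QID;
    	int ret;

    	ret = rte_event_queue_setup(dev_id, ORDERED_QID, &ordered_conf);
    	if (ret < 0)
    		return ret;
    	ret = rte_event_queue_setup(dev_id, TX_QID, &tx_conf);
    	if (ret < 0)
    		return ret;

    	/* A SINGLE_LINK queue may have exactly one port linked to it. */
    	ret = rte_event_port_link(dev_id, TX_PORT_ID, &tx_qid, NULL, 1);
    	return ret < 0 ? ret : 0;
    }
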
>
> > But the HW and the header file expect this form. Snippet from the header file:
> >
> > --
> >  * The source flow ordering from an event queue is maintained when events are
> >  * enqueued to their destination queue within the same ordered flow context.
> >  *
> >  * Events from the source queue appear in their original order when dequeued
> >  * from a destination queue.
> > --
> >
> > Here qid0 is the source queue with ORDERED sched_type and qid1 is the destination
> > queue with ATOMIC sched_type. qid1 can be linked to only one port (tx_port).
> >
> > Are we on the same page? If not, let me know the differences and we will try to
> > accommodate them in the header file.
>
> Yes, I think we are saying the same thing, using slightly different words.
>
> To summarize:
> - The SW PMD sees SINGLE_LINK as its own queue type, which does not support load-balanced
>   (Atomic, Ordered, Parallel) queue functionality.
> - The SW PMD would use a SINGLE_LINK queue/port for the final stage of a pipeline
>   A) to allow re-ordering to happen if required
>   B) to merge traffic from multiple ports into a single stream for TX
>
> A possible solution:
> 1) The application creates a SINGLE_LINK for the purpose of ensuring re-ordering is taking
>    place as expected, and linking only one port for TX.

The only issue is that in the low-end-cores case it won't scale. The TX core will become the
bottleneck, and we would need different pipelines based on the amount of traffic (40G or 10G)
a core can handle.

> 2) SW PMDs can create a SINGLE_LINK queue type, and benefit from the optimization.

Yes.

> 3) HW PMDs can ignore the "SINGLE_LINK" aspect and use an ATOMIC instead (as per your example
>    in 3) above).

But then the topology will be fixed for both HW and SW. An extra port and an extra core need to
be wasted on the ordering business in the HW case. Right?

I think we can roll out something based on capability.

> The application doesn't have to change anything, and just configures its pipeline. The PMD is
> able to optimize if it makes sense (SW) or just use another queue type to provide the same
> functionality to the application (HW).
>
> Thoughts? -Harry
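
(As an illustration of the pipeline summarized above, a sketch of a worker loop:
dequeue a burst from the worker's port, process each event, and forward it to the
SINGLE_LINK TX queue so it is transmitted in ingress order. Port/queue numbers and
the burst size are assumptions; the calls are the public rte_eventdev burst API,
nothing SW-PMD-specific.)

    #include <rte_eventdev.h>

    #define BURST_SIZE 32	/* example burst size */

    /*
     * Worker loop: dequeue a burst from this worker's port, do the
     * per-event processing, then forward each event to the SINGLE_LINK
     * TX queue identified by tx_qid.
     */
    static void
    worker_loop(uint8_t dev_id, uint8_t port_id, uint8_t tx_qid)
    {
    	struct rte_event ev[BURST_SIZE];

    	for (;;) {
    		uint16_t i, nb;

    		nb = rte_event_dequeue_burst(dev_id, port_id, ev,
    					     BURST_SIZE, 0);
    		for (i = 0; i < nb; i++) {
    			/* ... application processing of ev[i].mbuf ... */
    			ev[i].queue_id = tx_qid;
    			ev[i].op = RTE_EVENT_OP_FORWARD;
    		}
    		/* A complete worker would retry on partial enqueue. */
    		if (nb > 0)
    			rte_event_enqueue_burst(dev_id, port_id, ev, nb);
    	}
    }
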