From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Thu, 5 Sep 2019 05:44:39 +0000
From: Ori Kam
To: "Wu, Jingjing", Thomas Monjalon, "Yigit, Ferruh",
	"arybchenko@solarflare.com", Shahaf Shuler, Slava Ovsiienko,
	Alex Rosenbaum
CC: "dev@dpdk.org"
Thread-Topic: [dpdk-dev] [RFC] ethdev: support hairpin queue
References: <1565703468-55617-1-git-send-email-orika@mellanox.com>
	<9BB6961774997848B5B42BEC655768F81150C0CA@SHSMSX103.ccr.corp.intel.com>
In-Reply-To: <9BB6961774997848B5B42BEC655768F81150C0CA@SHSMSX103.ccr.corp.intel.com>
Subject: Re: [dpdk-dev] [RFC] ethdev: support hairpin queue
List-Id: DPDK patches and discussions

Hi Wu,

Thanks for your comments, PSB.
Ori

> -----Original Message-----
> From: Wu, Jingjing
> Sent: Thursday, September 5, 2019 7:01 AM
> To: Ori Kam; Thomas Monjalon; Yigit, Ferruh; arybchenko@solarflare.com;
> Shahaf Shuler; Slava Ovsiienko; Alex Rosenbaum
> Cc: dev@dpdk.org
> Subject: RE: [dpdk-dev] [RFC] ethdev: support hairpin queue
>
> > -----Original Message-----
> > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Ori Kam
> > Sent: Tuesday, August 13, 2019 9:38 PM
> > To: thomas@monjalon.net; Yigit, Ferruh; arybchenko@solarflare.com;
> > shahafs@mellanox.com; viacheslavo@mellanox.com; alexr@mellanox.com
> > Cc: dev@dpdk.org; orika@mellanox.com
> > Subject: [dpdk-dev] [RFC] ethdev: support hairpin queue
> >
> > This RFC replaces RFC[1].
> >
> > The hairpin feature (a different name could be "forward") acts as a
> > "bump on the wire", meaning that a packet received from the wire can
> > be modified using offloaded actions and then sent back to the wire
> > without application intervention, which saves CPU cycles.
> >
> > The hairpin is the inverse function of loopback, in which the
> > application sends a packet and then receives it again without it
> > being sent to the wire.
> >
> > The hairpin can be used by a number of different VNFs, for example
> > load balancers, gateways and so on.
> >
> > As can be seen from the hairpin description, a hairpin is basically
> > an RX queue connected to a TX queue.
> >
> > During the design phase I was thinking of two ways to implement this
> > feature: the first one is adding a new rte_flow action, and the
> > second one is creating a special kind of queue.
> >
> > The advantages of using the queue approach:
> > 1. More control for the application: queue depth (the memory size
> >    that should be used).
> > 2. Enable QoS. QoS is normally a parameter of a queue, so in this
> >    approach it will be easy to integrate with such a system.
>
>
> Which kind of QoS?

For example latency and packet rate; those kinds of parameters make sense
at the queue level. I know we don't have any current support, but I think
we will have it during the next year.

>
> > 3. Native integration with the rte_flow API. Just setting the target
> >    queue/rss to a hairpin queue will result in the traffic being
> >    routed to the hairpin queue.
> > 4. Enable queue offloading.
> >
> Looks like the hairpin queue is just a hardware queue; it has no
> relationship with host memory. It makes the queue concept a little bit
> confusing. And why do we need to set up queues, maybe some info in
> eth_conf is enough?

Like stated above, it makes sense to have queue-related parameters. For
example, I can think of an application where most packets go through the
hairpin queue, but some control packets come from the application. So the
application can configure the QoS between those two queues. In addition,
this will enable the application to use the queue like a normal queue from
rte_flow (see comment below) and in every other aspect.

>
> Not sure how your hardware makes the hairpin work? Use rte_flow for
> packet modification offload? Then how does HW distribute packets to
> those hardware queues, classification? If so, why not just extend
> rte_flow with the hairpin action?
>

You are correct, the application uses rte_flow and just points the traffic
to the requested hairpin queue/rss (a rough sketch of such a flow rule is
shown further below). We could have added a new rte_flow command. The
reasons we didn't:
1. Like stated above, some of the hairpin parameters make sense at the
   queue level.
2. In the near future we will also want to support hairpin between
   different ports. This makes much more sense using queues.

> > Each hairpin Rxq can be connected to a Txq or a number of Txqs, which
> > can belong to different ports, assuming the PMD supports it. The same
> > goes the other way: each hairpin Txq can be connected to one or more
> > Rxqs. This is the reason that both the Txq setup and the Rxq setup
> > receive the hairpin configuration structure.
> >
> > From the PMD perspective the number of Rxqs/Txqs is the total of
> > standard queues + hairpin queues.
> >
> > To configure a hairpin queue the user should call
> > rte_eth_rx_hairpin_queue_setup / rte_eth_tx_hairpin_queue_setup
> > instead of the normal queue setup functions.
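
To make the intended usage a bit more concrete, below is a rough sketch of
how an application might bind a hairpin Rx queue to a hairpin Tx queue on
the same port. The function names follow the RFC; the layout of the hairpin
configuration structure (the peer count and peer port/queue fields) is only
an assumption for illustration and is not final:

    #include <rte_ethdev.h>

    /*
     * Sketch only: bind one hairpin Rx queue to one hairpin Tx queue on
     * the same port. The rte_eth_hairpin_conf field names (peer_n,
     * peers[].port, peers[].queue) are assumptions for illustration; the
     * final structure may differ.
     */
    static int
    setup_hairpin_pair(uint16_t port_id, uint16_t rx_queue,
                       uint16_t tx_queue, uint16_t nb_desc)
    {
        struct rte_eth_hairpin_conf conf = { .peer_n = 1 };
        int ret;

        /* The Rx hairpin queue points at its Tx peer ... */
        conf.peers[0].port = port_id;
        conf.peers[0].queue = tx_queue;
        ret = rte_eth_rx_hairpin_queue_setup(port_id, rx_queue,
                                             nb_desc, &conf);
        if (ret != 0)
            return ret;

        /* ... and the Tx hairpin queue points back at its Rx peer. */
        conf.peers[0].queue = rx_queue;
        return rte_eth_tx_hairpin_queue_setup(port_id, tx_queue,
                                              nb_desc, &conf);
    }

The hairpin queue indices follow the standard queues in the same index
space, so rte_eth_dev_configure() must account for the total queue count
(standard + hairpin), as described above.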
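
For the steering side mentioned earlier, redirecting traffic to a hairpin
queue is expected to look exactly like steering to a normal queue with
today's rte_flow API; only the queue index refers to a hairpin queue. A
minimal sketch:

    #include <rte_flow.h>

    /*
     * Sketch: steer ingress IPv4/UDP traffic to the hairpin Rx queue.
     * Nothing hairpin-specific is needed in the rte_flow rule itself; the
     * queue index simply refers to a queue created with
     * rte_eth_rx_hairpin_queue_setup().
     */
    static struct rte_flow *
    steer_to_hairpin(uint16_t port_id, uint16_t hairpin_rxq,
                     struct rte_flow_error *error)
    {
        struct rte_flow_attr attr = { .ingress = 1 };
        struct rte_flow_item pattern[] = {
            { .type = RTE_FLOW_ITEM_TYPE_ETH },
            { .type = RTE_FLOW_ITEM_TYPE_IPV4 },
            { .type = RTE_FLOW_ITEM_TYPE_UDP },
            { .type = RTE_FLOW_ITEM_TYPE_END },
        };
        struct rte_flow_action_queue queue = { .index = hairpin_rxq };
        struct rte_flow_action actions[] = {
            { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
            { .type = RTE_FLOW_ACTION_TYPE_END },
        };

        return rte_flow_create(port_id, &attr, pattern, actions, error);
    }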
>
> If the new API is introduced to avoid the ABI change, would one API,
> rte_eth_rx_hairpin_setup, be enough?

I'm not sure I understand your comment.
The rx_hairpin_setup was created for two main reasons:
1. To avoid an API change.
2. I think it is more correct to use a different API, since the parameters
   are different.

The reason we have both Rx and Tx setup functions is that we want the user
to have control over binding the two queues. It is most important when we
advance to hairpin between ports.

>
> Thanks
> Jingjing

Thanks,
Ori