From: Thomas Monjalon
To: Dariusz Sosnowski
Cc: Ferruh Yigit, Andrew Rybchenko, dev@dpdk.org, Viacheslav Ovsiienko, Matan Azrad, Ori Kam, Wisam Jaddo, Aman Singh, Yuying Zhang
Subject: Re: [PATCH v2 0/8] ethdev: introduce hairpin memory capabilities
Date: Sat, 08 Oct 2022 18:31:31 +0200
Message-ID: <37847984.10thIPus4b@thomas>
In-Reply-To: <20221006110105.2986966-1-dsosnowski@nvidia.com>
References: <20221006110105.2986966-1-dsosnowski@nvidia.com>
List-Id: DPDK patches and discussions

06/10/2022 13:00, Dariusz Sosnowski:
> The hairpin queues are used to transmit packets received on the wire, back to the wire.
> How hairpin queues are implemented and configured is decided internally by the PMD, and
> applications have no control over the configuration of Rx and Tx hairpin queues.
> This patchset addresses that by:
>
> - Extending hairpin queue capabilities reported by PMDs.
> - Exposing new configuration options for Rx and Tx hairpin queues.
>
> The main goal of this patchset is to allow applications to provide configuration hints
> regarding memory placement of hairpin queues.
> These hints specify whether buffers of hairpin queues should be placed in host memory
> or in dedicated device memory.
>
> For example, in the context of NVIDIA ConnectX and BlueField devices,
> this distinction is important for several reasons:
>
> - By default, data buffers and packet descriptors are placed in the device memory region
>   which is shared with other resources (e.g. flow rules).
>   This results in memory contention on the device,
>   which may lead to degraded performance under heavy load.
> - Placing hairpin queues in dedicated device memory can decrease the latency of hairpinned traffic,
>   since hairpin queue processing will not be memory-starved by other operations.
>   A side effect of this memory configuration is that it leaves less memory for other resources,
>   possibly causing memory contention in non-hairpin traffic.
> - Placing hairpin queues in host memory can increase the throughput of hairpinned
>   traffic at the cost of increased latency.
>   Each packet processed by hairpin queues will incur additional PCI transactions (an increase in latency),
>   but memory contention on the device is avoided.
>
> Depending on the workload and whether throughput or latency has the higher priority,
> it would be beneficial if developers could choose the best hairpin configuration for their use case.
>
> To address that, this patchset adds the following configuration options (in the rte_eth_hairpin_conf struct):
>
> - use_locked_device_memory - If set, the PMD will allocate specialized on-device memory for the queue.
> - use_rte_memory - If set, the PMD will use DPDK-managed memory for the queue.
> - force_memory - If set, the PMD will be forced to use the provided memory configuration.
>   If no appropriate resources are available, the queue allocation will fail.
>   If unset and no appropriate resources are available, the PMD will fall back to its default behavior.
>
> Implementing support for these flags is optional, and applications should be allowed to leave all of them unset.
> This will result in the default memory configuration provided by the PMD.
> Application developers should consult the PMD documentation in that case.
>
> These changes were originally proposed in http://patches.dpdk.org/project/dpdk/patch/20220811120530.191683-1-dsosnowski@nvidia.com/.
>
> Dariusz Sosnowski (8):
>   ethdev: introduce hairpin memory capabilities
>   common/mlx5: add hairpin SQ buffer type capabilities
>   common/mlx5: add hairpin RQ buffer type capabilities
>   net/mlx5: allow hairpin Tx queue in RTE memory
>   net/mlx5: allow hairpin Rx queue in locked memory
>   doc: add notes for hairpin to mlx5 documentation
>   app/testpmd: add hairpin queues memory modes
>   app/flow-perf: add hairpin queues memory config

Doc squashed in mlx5 commits.

Applied, thanks.