From mboxrd@z Thu Jan 1 00:00:00 1970
From: Thomas Monjalon
To: Bing Zhao
Cc: orika@nvidia.com, ferruh.yigit@intel.com, arybchenko@solarflare.com,
 mdr@ashroe.eu, nhorman@tuxdriver.com, bernard.iremonger@intel.com,
 beilei.xing@intel.com, wenzhuo.lu@intel.com, dev@dpdk.org
Date: Thu, 15 Oct 2020 12:46:19 +0200
Message-ID: <25852134.RkOs9RG3c4@thomas>
In-Reply-To: <1602740124-397688-3-git-send-email-bingz@nvidia.com>
References: <1601511962-21532-1-git-send-email-bingz@nvidia.com>
 <1602740124-397688-1-git-send-email-bingz@nvidia.com>
 <1602740124-397688-3-git-send-email-bingz@nvidia.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Subject: Re: [dpdk-dev] [PATCH v5 2/5] ethdev: add new attributes to hairpin config
List-Id: DPDK patches and discussions

15/10/2020 07:35, Bing Zhao:
> To support two ports hairpin mode and keep the backward compatibility
> for the application, two new attribute
> members of the hairpin queue configuration structure will be added.
>
> `tx_explicit` means whether the application itself will insert the TX
> part flow rules. If not set, the PMD will insert the rules implicitly.
> `manual_bind` means whether the hairpin TX queue and its peer RX queue
> will be bound automatically during the device start stage.
>
> Different TX and RX queue pairs could have different values, but it
> is highly recommended that all paired queues between one egress port
> and its peer ingress port have the same values, in order not to bring
> any chaos to the system. The actual support of these attribute
> parameters will be checked and decided by the PMD drivers.
>
> In the single port hairpin, if both are zero without any setting, the
> behavior will remain the same as before. It means that no bind API
> needs to be called and no TX flow rules need to be inserted manually
> by the application.
>
> Signed-off-by: Bing Zhao
> Acked-by: Ori Kam
> ---
> v4: squash document update and more info for the two new attributes
> v2: optimize the structure and remove unused macros
> ---

Acked-by: Thomas Monjalon

Minor comments below.

> struct rte_eth_hairpin_conf {
> -	uint16_t peer_count; /**< The number of peers. */
> +	uint32_t peer_count:16; /**< The number of peers. */
> +
> +	/**
> +	 * Explicit TX flow rule mode. One hairpin pair of queues should have
> +	 * the same attribute. The actual support depends on the PMD.

The second sentence should be on a separate line.

About the third sentence, implementation is always PMD-specific.
The PMD will reject an unsupported conf, as usual.
I think this comment is not needed in the API description.

> +	 *
> +	 * - When set, the user is responsible for inserting the hairpin
> +	 *   TX part flows and removing them.
> +	 * - When clear, the PMD will try to handle the TX part of the flows,
> +	 *   e.g., by splitting one flow into two parts.
> +	 */
> +	uint32_t tx_explicit:1;
> +
> +	/**
> +	 * Manually bind hairpin queues.
> +	 * One hairpin pair of queues should have
> +	 * the same attribute. The actual support depends on the PMD.

Same here.

> +	 *
> +	 * - When set, to enable hairpin, the user should call the hairpin bind
> +	 *   API after all the queues are set up properly and the ports are
> +	 *   started. Also, the hairpin unbind API should be called accordingly
> +	 *   before stopping a port that has hairpin configured.
> +	 * - When clear, the PMD will try to enable the hairpin with the queues
> +	 *   configured automatically during port start.
> +	 */
> +	uint32_t manual_bind:1;
> +	uint32_t reserved:14; /**< Reserved bits. */
> 	struct rte_eth_hairpin_peer peers[RTE_ETH_MAX_HAIRPIN_PEERS];
> };