From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: <dev-bounces@dpdk.org>
Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124])
	by inbox.dpdk.org (Postfix) with ESMTP id DDFAB430EC;
	Thu, 24 Aug 2023 08:10:15 +0200 (CEST)
Received: from mails.dpdk.org (localhost [127.0.0.1])
	by mails.dpdk.org (Postfix) with ESMTP id C42B5410EE;
	Thu, 24 Aug 2023 08:10:15 +0200 (CEST)
Received: from EUR03-DBA-obe.outbound.protection.outlook.com
 (mail-dbaeur03on2068.outbound.protection.outlook.com [40.107.104.68])
 by mails.dpdk.org (Postfix) with ESMTP id 5582840EE1
 for <dev@dpdk.org>; Thu, 24 Aug 2023 08:10:14 +0200 (CEST)
Received: from AS8PR08MB7718.eurprd08.prod.outlook.com (2603:10a6:20b:50a::22)
 by AM0PR08MB5315.eurprd08.prod.outlook.com (2603:10a6:208:18e::23)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6699.24; Thu, 24 Aug
 2023 06:10:08 +0000
Received: from AS8PR08MB7718.eurprd08.prod.outlook.com
 ([fe80::70e8:2daa:5a39:dc50]) by AS8PR08MB7718.eurprd08.prod.outlook.com
 ([fe80::70e8:2daa:5a39:dc50%4]) with mapi id 15.20.6699.027; Thu, 24 Aug 2023
 06:10:08 +0000
From: Feifei Wang <Feifei.Wang2@arm.com>
To: Feifei Wang <Feifei.Wang2@arm.com>, Konstantin Ananyev
 <konstantin.v.ananyev@yandex.ru>
CC: "dev@dpdk.org" <dev@dpdk.org>, nd <nd@arm.com>, Honnappa Nagarahalli
 <Honnappa.Nagarahalli@arm.com>, Ruifeng Wang <Ruifeng.Wang@arm.com>, Yuying
 Zhang <Yuying.Zhang@intel.com>, Beilei Xing <beilei.xing@intel.com>, nd
 <nd@arm.com>
Subject: RE: [PATCH v11 2/4] net/i40e: implement mbufs recycle mode
Thread-Topic: [PATCH v11 2/4] net/i40e: implement mbufs recycle mode
Thread-Index: AQHZ1Mog0nujzlh9U0CHKeg6D9zv8K/48qdw
Date: Thu, 24 Aug 2023 06:10:08 +0000
Message-ID: <AS8PR08MB77189BD70AD1E420E4D05E2EC81DA@AS8PR08MB7718.eurprd08.prod.outlook.com>
References: <20220420081650.2043183-1-feifei.wang2@arm.com>
 <20230822072710.1945027-1-feifei.wang2@arm.com>
 <20230822072710.1945027-3-feifei.wang2@arm.com>
In-Reply-To: <20230822072710.1945027-3-feifei.wang2@arm.com>
Accept-Language: zh-CN, en-US
Content-Language: zh-CN
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-BeenThere: dev@dpdk.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: DPDK patches and discussions <dev.dpdk.org>
List-Unsubscribe: <https://mails.dpdk.org/options/dev>,
 <mailto:dev-request@dpdk.org?subject=unsubscribe>
List-Archive: <http://mails.dpdk.org/archives/dev/>
List-Post: <mailto:dev@dpdk.org>
List-Help: <mailto:dev-request@dpdk.org?subject=help>
List-Subscribe: <https://mails.dpdk.org/listinfo/dev>,
 <mailto:dev-request@dpdk.org?subject=subscribe>
Errors-To: dev-bounces@dpdk.org

For Konstantin:

> -----Original Message-----
> From: Feifei Wang <feifei.wang2@arm.com>
> Sent: Tuesday, August 22, 2023 3:27 PM
> To: Yuying Zhang <Yuying.Zhang@intel.com>; Beilei Xing
> <beilei.xing@intel.com>
> Cc: dev@dpdk.org; nd <nd@arm.com>; Feifei Wang
> <Feifei.Wang2@arm.com>; Honnappa Nagarahalli
> <Honnappa.Nagarahalli@arm.com>; Ruifeng Wang
> <Ruifeng.Wang@arm.com>
> Subject: [PATCH v11 2/4] net/i40e: implement mbufs recycle mode
>
> Define specific function implementation for i40e driver.
> Currently, mbufs recycle mode can support 128bit vector path and avx2 path.
> And can be enabled both in fast free and no fast free mode.
>
> Suggested-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
> Signed-off-by: Feifei Wang <feifei.wang2@arm.com>
> Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
> Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
> ---
>  drivers/net/i40e/i40e_ethdev.c                |   1 +
>  drivers/net/i40e/i40e_ethdev.h                |   2 +
>  .../net/i40e/i40e_recycle_mbufs_vec_common.c  | 147
> ++++++++++++++++++
>  drivers/net/i40e/i40e_rxtx.c                  |  32 ++++
>  drivers/net/i40e/i40e_rxtx.h                  |   4 +
>  drivers/net/i40e/meson.build                  |   1 +
>  6 files changed, 187 insertions(+)
>  create mode 100644 drivers/net/i40e/i40e_recycle_mbufs_vec_common.c
>
> diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
> index 8271bbb394..50ba9aac94 100644
> --- a/drivers/net/i40e/i40e_ethdev.c
> +++ b/drivers/net/i40e/i40e_ethdev.c
> @@ -496,6 +496,7 @@ static const struct eth_dev_ops i40e_eth_dev_ops = {
>  	.flow_ops_get                 = i40e_dev_flow_ops_get,
>  	.rxq_info_get                 = i40e_rxq_info_get,
>  	.txq_info_get                 = i40e_txq_info_get,
> +	.recycle_rxq_info_get         = i40e_recycle_rxq_info_get,
>  	.rx_burst_mode_get            = i40e_rx_burst_mode_get,
>  	.tx_burst_mode_get            = i40e_tx_burst_mode_get,
>  	.timesync_enable              = i40e_timesync_enable,
> diff --git a/drivers/net/i40e/i40e_ethdev.h b/drivers/net/i40e/i40e_ethdev.h
> index 6f65d5e0ac..af758798e1 100644
> --- a/drivers/net/i40e/i40e_ethdev.h
> +++ b/drivers/net/i40e/i40e_ethdev.h
> @@ -1355,6 +1355,8 @@ void i40e_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
>  	struct rte_eth_rxq_info *qinfo);
>  void i40e_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
>  	struct rte_eth_txq_info *qinfo);
> +void i40e_recycle_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
> +	struct rte_eth_recycle_rxq_info *recycle_rxq_info);
>  int i40e_rx_burst_mode_get(struct rte_eth_dev *dev, uint16_t queue_id,
>  			   struct rte_eth_burst_mode *mode);
>  int i40e_tx_burst_mode_get(struct rte_eth_dev *dev, uint16_t queue_id,
> diff --git a/drivers/net/i40e/i40e_recycle_mbufs_vec_common.c b/drivers/net/i40e/i40e_recycle_mbufs_vec_common.c
> new file mode 100644
> index 0000000000..5663ecccde
> --- /dev/null
> +++ b/drivers/net/i40e/i40e_recycle_mbufs_vec_common.c
> @@ -0,0 +1,147 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright (c) 2023 Arm Limited.
> + */
> +
> +#include <stdint.h>
> +#include <ethdev_driver.h>
> +
> +#include "base/i40e_prototype.h"
> +#include "base/i40e_type.h"
> +#include "i40e_ethdev.h"
> +#include "i40e_rxtx.h"
> +
> +#pragma GCC diagnostic ignored "-Wcast-qual"
> +
> +void
> +i40e_recycle_rx_descriptors_refill_vec(void *rx_queue, uint16_t nb_mbufs)
> +{
> +	struct i40e_rx_queue *rxq = rx_queue;
> +	struct i40e_rx_entry *rxep;
> +	volatile union i40e_rx_desc *rxdp;
> +	uint16_t rx_id;
> +	uint64_t paddr;
> +	uint64_t dma_addr;
> +	uint16_t i;
> +
> +	rxdp = rxq->rx_ring + rxq->rxrearm_start;
> +	rxep = &rxq->sw_ring[rxq->rxrearm_start];
> +
> +	for (i = 0; i < nb_mbufs; i++) {
> +		/* Initialize rxdp descs. */
> +		paddr = (rxep[i].mbuf)->buf_iova + RTE_PKTMBUF_HEADROOM;
> +		dma_addr = rte_cpu_to_le_64(paddr);
> +		/* flush desc with pa dma_addr */
> +		rxdp[i].read.hdr_addr = 0;
> +		rxdp[i].read.pkt_addr = dma_addr;
> +	}
> +
> +	/* Update the descriptor initializer index */
> +	rxq->rxrearm_start += nb_mbufs;
> +	rx_id = rxq->rxrearm_start - 1;
> +
> +	if (unlikely(rxq->rxrearm_start >= rxq->nb_rx_desc)) {
> +		rxq->rxrearm_start = 0;
> +		rx_id = rxq->nb_rx_desc - 1;
> +	}
> +
> +	rxq->rxrearm_nb -= nb_mbufs;
> +
> +	rte_io_wmb();
> +	/* Update the tail pointer on the NIC */
> +	I40E_PCI_REG_WRITE_RELAXED(rxq->qrx_tail, rx_id);
> +}
> +
> +uint16_t
> +i40e_recycle_tx_mbufs_reuse_vec(void *tx_queue,
> +	struct rte_eth_recycle_rxq_info *recycle_rxq_info)
> +{
> +	struct i40e_tx_queue *txq = tx_queue;
> +	struct i40e_tx_entry *txep;
> +	struct rte_mbuf **rxep;
> +	int i, n;
> +	uint16_t nb_recycle_mbufs;
> +	uint16_t avail = 0;
> +	uint16_t mbuf_ring_size = recycle_rxq_info->mbuf_ring_size;
> +	uint16_t mask = recycle_rxq_info->mbuf_ring_size - 1;
> +	uint16_t refill_requirement = recycle_rxq_info->refill_requirement;
> +	uint16_t refill_head = *recycle_rxq_info->refill_head;
> +	uint16_t receive_tail = *recycle_rxq_info->receive_tail;
> +
> +	/* Get available recycling Rx buffers. */
> +	avail = (mbuf_ring_size - (refill_head - receive_tail)) & mask;
> +
> +	/* Check Tx free thresh and Rx available space. */
> +	if (txq->nb_tx_free > txq->tx_free_thresh || avail <= txq->tx_rs_thresh)
> +		return 0;
> +
> +	/* check DD bits on threshold descriptor */
> +	if ((txq->tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
> +			rte_cpu_to_le_64(I40E_TXD_QW1_DTYPE_MASK)) !=
> +			rte_cpu_to_le_64(I40E_TX_DESC_DTYPE_DESC_DONE))
> +		return 0;
> +
> +	n = txq->tx_rs_thresh;
> +	nb_recycle_mbufs = n;
> +
> +	/* Mbufs recycle mode can only support no ring buffer wrapping around.
> +	 * Two case for this:
> +	 *
> +	 * case 1: The refill head of Rx buffer ring needs to be aligned with
> +	 * mbuf ring size. In this case, the number of Tx freeing buffers
> +	 * should be equal to refill_requirement.
> +	 *
> +	 * case 2: The refill head of Rx ring buffer does not need to be aligned
> +	 * with mbuf ring size. In this case, the update of refill head can not
> +	 * exceed the Rx mbuf ring size.
> +	 */
> +	if (refill_requirement != n ||
> +		(!refill_requirement && (refill_head + n > mbuf_ring_size)))
> +		return 0;
> +
> +	/* First buffer to free from S/W ring is at index
> +	 * tx_next_dd - (tx_rs_thresh-1).
> +	 */
> +	txep = &txq->sw_ring[txq->tx_next_dd - (n - 1)];
> +	rxep = recycle_rxq_info->mbuf_ring;
> +	rxep += refill_head;
> +
> +	if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) {
> +		/* Avoid txq contains buffers from unexpected mempool. */
> +		if (unlikely(recycle_rxq_info->mp
> +					!= txep[0].mbuf->pool))
> +			return 0;
> +
> +		/* Directly put mbufs from Tx to Rx. */
> +		for (i = 0; i < n; i++)
> +			rxep[i] = txep[i].mbuf;
> +	} else {
> +		for (i = 0; i < n; i++) {
> +			rxep[i] = rte_pktmbuf_prefree_seg(txep[i].mbuf);
> +
> +			/* If Tx buffers are not the last reference or from
> +			 * unexpected mempool, previous copied buffers are
> +			 * considered as invalid.
> +			 */
> +			if (unlikely((rxep[i] == NULL && refill_requirement) ||
[Konstantin]
Could you pls remind me why it is ok to have rxep[i]==NULL when
refill_requirement is not set?

If refill_requirement is not zero, it means each Tx freed buffer must be valid
and can be put into the Rx sw_ring. Then the refill head of the Rx buffer ring
can stay aligned with the mbuf ring size. Briefly speaking, the number of valid
Tx freed buffers must be equal to the Rx refill_requirement. For example, the
i40e driver.

If refill_requirement is zero, it means that the refill head of the Rx buffer
ring does not need to be aligned with the mbuf ring size. Thus, if Tx has n
valid freed buffers, we just need to put these n buffers into the Rx sw_ring;
n does not have to be equal to the Rx rearm number. For example, the mlx5
driver.

In conclusion, the above difference comes from PMD drivers having different
strategies to update their Rx rearm (refill) head. For the i40e driver, if
rearm_head exceeds 1024, it is reset to 0, because the number of buffers per
rearm is a fixed value by default. For the mlx5 driver, rearm_head can exceed
1024, and a mask is applied to get the real index; thus its rearm number can
vary.

> +					recycle_rxq_info->mp != txep[i].mbuf->pool))
> +				nb_recycle_mbufs = 0;
> +		}
> +		/* If Tx buffers are not the last reference or
> +		 * from unexpected mempool, all recycled buffers
> +		 * are put into mempool.
> +		 */
> +		if (nb_recycle_mbufs == 0)
> +			for (i = 0; i < n; i++) {
> +				if (rxep[i] != NULL)
> +					rte_mempool_put(rxep[i]->pool, rxep[i]);
> +			}
> +	}
> +
> +	/* Update counters for Tx. */
> +	txq->nb_tx_free = (uint16_t)(txq->nb_tx_free + txq->tx_rs_thresh);
> +	txq->tx_next_dd = (uint16_t)(txq->tx_next_dd + txq->tx_rs_thresh);
> +	if (txq->tx_next_dd >= txq->nb_tx_desc)
> +		txq->tx_next_dd = (uint16_t)(txq->tx_rs_thresh - 1);
> +
> +	return nb_recycle_mbufs;
> +}
> diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
> index b4f65b58fa..a9c9eb331c 100644
> --- a/drivers/net/i40e/i40e_rxtx.c
> +++ b/drivers/net/i40e/i40e_rxtx.c
> @@ -3199,6 +3199,30 @@ i40e_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
>  	qinfo->conf.offloads = txq->offloads;
>  }
>
> +void
> +i40e_recycle_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
> +	struct rte_eth_recycle_rxq_info *recycle_rxq_info)
> +{
> +	struct i40e_rx_queue *rxq;
> +	struct i40e_adapter *ad =
> +		I40E_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
> +
> +	rxq = dev->data->rx_queues[queue_id];
> +
> +	recycle_rxq_info->mbuf_ring = (void *)rxq->sw_ring;
> +	recycle_rxq_info->mp = rxq->mp;
> +	recycle_rxq_info->mbuf_ring_size = rxq->nb_rx_desc;
> +	recycle_rxq_info->receive_tail = &rxq->rx_tail;
> +
> +	if (ad->rx_vec_allowed) {
> +		recycle_rxq_info->refill_requirement = RTE_I40E_RXQ_REARM_THRESH;
> +		recycle_rxq_info->refill_head = &rxq->rxrearm_start;
> +	} else {
> +		recycle_rxq_info->refill_requirement = rxq->rx_free_thresh;
> +		recycle_rxq_info->refill_head = &rxq->rx_free_trigger;
> +	}
> +}
> +
>  #ifdef RTE_ARCH_X86
>  static inline bool
>  get_avx_supported(bool request_avx512)
> @@ -3293,6 +3317,8 @@ i40e_set_rx_function(struct rte_eth_dev *dev)
>  				dev->rx_pkt_burst = ad->rx_use_avx2 ?
>  					i40e_recv_scattered_pkts_vec_avx2 :
>  					i40e_recv_scattered_pkts_vec;
> +				dev->recycle_rx_descriptors_refill =
> +					i40e_recycle_rx_descriptors_refill_vec;
>  			}
>  		} else {
>  			if (ad->rx_use_avx512) {
> @@ -3311,9 +3337,12 @@ i40e_set_rx_function(struct rte_eth_dev *dev)
>  				dev->rx_pkt_burst = ad->rx_use_avx2 ?
>  					i40e_recv_pkts_vec_avx2 :
>  					i40e_recv_pkts_vec;
> +				dev->recycle_rx_descriptors_refill =
> +					i40e_recycle_rx_descriptors_refill_vec;
>  			}
>  		}
>  #else /* RTE_ARCH_X86 */
> +		dev->recycle_rx_descriptors_refill = i40e_recycle_rx_descriptors_refill_vec;
>  		if (dev->data->scattered_rx) {
>  			PMD_INIT_LOG(DEBUG,
>  				     "Using Vector Scattered Rx (port %d).",
> @@ -3481,15 +3510,18 @@ i40e_set_tx_function(struct rte_eth_dev *dev)
>  				dev->tx_pkt_burst = ad->tx_use_avx2 ?
>  						    i40e_xmit_pkts_vec_avx2 :
>  						    i40e_xmit_pkts_vec;
> +				dev->recycle_tx_mbufs_reuse = i40e_recycle_tx_mbufs_reuse_vec;
>  			}
>  #else /* RTE_ARCH_X86 */
>  			PMD_INIT_LOG(DEBUG, "Using Vector Tx (port %d).",
>  				     dev->data->port_id);
>  			dev->tx_pkt_burst = i40e_xmit_pkts_vec;
> +			dev->recycle_tx_mbufs_reuse = i40e_recycle_tx_mbufs_reuse_vec;
>  #endif /* RTE_ARCH_X86 */
>  		} else {
>  			PMD_INIT_LOG(DEBUG, "Simple tx finally be used.");
>  			dev->tx_pkt_burst = i40e_xmit_pkts_simple;
> +			dev->recycle_tx_mbufs_reuse = i40e_recycle_tx_mbufs_reuse_vec;
>  		}
>  		dev->tx_pkt_prepare = i40e_simple_prep_pkts;
>  	} else {
> diff --git a/drivers/net/i40e/i40e_rxtx.h b/drivers/net/i40e/i40e_rxtx.h
> index a8686224e5..b191f23e1f 100644
> --- a/drivers/net/i40e/i40e_rxtx.h
> +++ b/drivers/net/i40e/i40e_rxtx.h
> @@ -236,6 +236,10 @@ uint32_t i40e_dev_rx_queue_count(void *rx_queue);
>  int i40e_dev_rx_descriptor_status(void *rx_queue, uint16_t offset);
>  int i40e_dev_tx_descriptor_status(void *tx_queue, uint16_t offset);
>
> +uint16_t i40e_recycle_tx_mbufs_reuse_vec(void *tx_queue,
> +		struct rte_eth_recycle_rxq_info *recycle_rxq_info);
> +void i40e_recycle_rx_descriptors_refill_vec(void *rx_queue, uint16_t nb_mbufs);
> +
>  uint16_t i40e_recv_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
>  			    uint16_t nb_pkts);
>  uint16_t i40e_recv_scattered_pkts_vec(void *rx_queue,
> diff --git a/drivers/net/i40e/meson.build b/drivers/net/i40e/meson.build
> index 8e53b87a65..3b1a233c84 100644
> --- a/drivers/net/i40e/meson.build
> +++ b/drivers/net/i40e/meson.build
> @@ -34,6 +34,7 @@ sources = files(
>          'i40e_tm.c',
>          'i40e_hash.c',
>          'i40e_vf_representor.c',
> +	'i40e_recycle_mbufs_vec_common.c',
>          'rte_pmd_i40e.c',
>  )
>
> --
> 2.25.1