From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <4a6e9e73-f311-854f-98c8-8fb4d0df07de@amd.com>
Date: Wed, 7 Jun 2023 01:00:42 +0100
From: Ferruh Yigit
Subject: Re: [PATCH v6 1/4] ethdev: add API for mbufs recycle mode
To: Konstantin Ananyev, Feifei Wang, Константин Ананьев,
 thomas@monjalon.net, Andrew Rybchenko
Cc: dev@dpdk.org, nd, Honnappa Nagarahalli, Ruifeng Wang
References: <20211224164613.32569-1-feifei.wang2@arm.com>
 <20230525094541.331338-1-feifei.wang2@arm.com>
 <20230525094541.331338-2-feifei.wang2@arm.com>
 <658741685969010@mail.yandex.ru>
 <07e46ddd2d6d4e3ca7f9958ecc1fa5b7@huawei.com>
In-Reply-To: <07e46ddd2d6d4e3ca7f9958ecc1fa5b7@huawei.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0

On 6/6/2023 9:34 AM, Konstantin Ananyev wrote:
>
>
>>
>> [...]
>>>> Probably I am missing something, but why it is not possible to do something
>>> like that:
>>>>
>>>> rte_eth_recycle_mbufs(rx_port_id=X, rx_queue_id=Y, tx_port_id=N,
>>>> tx_queue_id=M, ...); ....
>>>> rte_eth_recycle_mbufs(rx_port_id=X, rx_queue_id=Y, tx_port_id=N,
>>>> tx_queue_id=K, ...);
>>>>
>>>> I.E. feed rx queue from 2 tx queues?
>>>>
>>>> Two problems for this:
>>>> 1. If we have 2 tx queues for rx, the thread should make the extra
>>>> judgement to decide which one to choose in the driver layer.
>>>
>>> Not sure, why on the driver layer?
>>> The example I gave above - the decision is made on the application layer.
>>> Let's say the first call didn't free enough mbufs, so the app decided to use
>>> the second txq for rearm.
>> [Feifei] I think currently mbuf recycle mode can support this usage. For example:
>> n = rte_eth_recycle_mbufs(rx_port_id=X, rx_queue_id=Y, tx_port_id=N, tx_queue_id=M, ...);
>> if (n < planned_number)
>> rte_eth_recycle_mbufs(rx_port_id=X, rx_queue_id=Y, tx_port_id=N, tx_queue_id=K, ...);
>>
>> Thus, if users want, they can do like this.
>
> Yes, that was my thought, that's why I was surprised that in the comments we have:
> " Currently, the rte_eth_recycle_mbufs() function can only support one-time pairing
> * between the receive queue and transmit queue. Do not pair one receive queue with
> * multiple transmit queues or pair one transmit queue with multiple receive queues,
> * in order to avoid memory error rewriting."
>

I guess that is from previous versions of the set, it can be good to
address the limitations/restrictions again with the latest version.
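
To make the usage discussed above concrete, below is an application-level
sketch of the two-txq case as I would expect it to look with the proposed API.
The helper name, the queue ids and the 'planned' threshold are only
illustrative, and I assume 'recycle_rxq_info' was filled beforehand via
rte_eth_recycle_rx_queue_info_get() for the Rx queue in question:

#include <rte_ethdev.h>

/* Try to refill Rx queue (rx_port, rx_queue) from two Tx queues of tx_port,
 * falling back to the second Tx queue only when the first one did not free
 * enough mbufs. Returns the total number of recycled mbufs.
 */
static uint16_t
recycle_from_two_txqs(uint16_t rx_port, uint16_t rx_queue,
		uint16_t tx_port, uint16_t txq_m, uint16_t txq_k,
		uint16_t planned,
		struct rte_eth_recycle_rxq_info *recycle_rxq_info)
{
	uint16_t n;

	/* First recycle from Tx queue M ... */
	n = rte_eth_recycle_mbufs(rx_port, rx_queue, tx_port, txq_m,
			recycle_rxq_info);

	/* ... and only if that did not free enough, also from Tx queue K. */
	if (n < planned)
		n += rte_eth_recycle_mbufs(rx_port, rx_queue, tx_port, txq_k,
				recycle_rxq_info);

	return n;
}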
>>
>>>
>>>> On the other hand, the current mechanism can support users switching from one
>>>> txq to another timely in the application layer. If the user wants to choose
>>>> another txq, they just need to change the txq_queue_id parameter in the API.
>>>> 2. If you want one rxq to support two txqs at the same time, this needs
>>>> to add a spinlock on a guard variable to avoid multi-thread conflict.
>>>> A spinlock will decrease the data-path performance greatly. Thus, we do
>>>> not consider 1 rxq mapping multiple txqs here.
>>>
>>> I am talking about the situation when one thread controls 2 tx queues.
>>>
>>>> + *
>>>> + * @param rx_port_id
>>>> + *   Port identifying the receive side.
>>>> + * @param rx_queue_id
>>>> + *   The index of the receive queue identifying the receive side.
>>>> + *   The value must be in the range [0, nb_rx_queue - 1] previously supplied
>>>> + *   to rte_eth_dev_configure().
>>>> + * @param tx_port_id
>>>> + *   Port identifying the transmit side.
>>>> + * @param tx_queue_id
>>>> + *   The index of the transmit queue identifying the transmit side.
>>>> + *   The value must be in the range [0, nb_tx_queue - 1] previously supplied
>>>> + *   to rte_eth_dev_configure().
>>>> + * @param recycle_rxq_info
>>>> + *   A pointer to a structure of type *rte_eth_recycle_rxq_info* which contains
>>>> + *   the information of the Rx queue mbuf ring.
>>>> + * @return
>>>> + *   The number of recycling mbufs.
>>>> + */
>>>> +__rte_experimental
>>>> +static inline uint16_t
>>>> +rte_eth_recycle_mbufs(uint16_t rx_port_id, uint16_t rx_queue_id,
>>>> +		uint16_t tx_port_id, uint16_t tx_queue_id,
>>>> +		struct rte_eth_recycle_rxq_info *recycle_rxq_info)
>>>> +{
>>>> +	struct rte_eth_fp_ops *p;
>>>> +	void *qd;
>>>> +	uint16_t nb_mbufs;
>>>> +
>>>> +#ifdef RTE_ETHDEV_DEBUG_TX
>>>> +	if (tx_port_id >= RTE_MAX_ETHPORTS ||
>>>> +			tx_queue_id >= RTE_MAX_QUEUES_PER_PORT) {
>>>> +		RTE_ETHDEV_LOG(ERR,
>>>> +			"Invalid tx_port_id=%u or tx_queue_id=%u\n",
>>>> +			tx_port_id, tx_queue_id);
>>>> +		return 0;
>>>> +	}
>>>> +#endif
>>>> +
>>>> +	/* fetch pointer to queue data */
>>>> +	p = &rte_eth_fp_ops[tx_port_id];
>>>> +	qd = p->txq.data[tx_queue_id];
>>>> +
>>>> +#ifdef RTE_ETHDEV_DEBUG_TX
>>>> +	RTE_ETH_VALID_PORTID_OR_ERR_RET(tx_port_id, 0);
>>>> +
>>>> +	if (qd == NULL) {
>>>> +		RTE_ETHDEV_LOG(ERR, "Invalid Tx queue_id=%u for port_id=%u\n",
>>>> +			tx_queue_id, tx_port_id);
>>>> +		return 0;
>>>> +	}
>>>> +#endif
>>>> +
>>>> +	if (p->recycle_tx_mbufs_reuse == NULL)
>>>> +		return 0;
>>>> +
>>>> +	/* Copy used *rte_mbuf* buffer pointers from Tx mbuf ring
>>>> +	 * into Rx mbuf ring.
>>>> +	 */
>>>> +	nb_mbufs = p->recycle_tx_mbufs_reuse(qd, recycle_rxq_info);
>>>> +
>>>> +	/* If no recycling mbufs, return 0. */
>>>> +	if (nb_mbufs == 0)
>>>> +		return 0;
>>>> +
>>>> +#ifdef RTE_ETHDEV_DEBUG_RX
>>>> +	if (rx_port_id >= RTE_MAX_ETHPORTS ||
>>>> +			rx_queue_id >= RTE_MAX_QUEUES_PER_PORT) {
>>>> +		RTE_ETHDEV_LOG(ERR,
>>>> +			"Invalid rx_port_id=%u or rx_queue_id=%u\n",
>>>> +			rx_port_id, rx_queue_id);
>>>> +		return 0;
>>>> +	}
>>>> +#endif
>>>> +
>>>> +	/* fetch pointer to queue data */
>>>> +	p = &rte_eth_fp_ops[rx_port_id];
>>>> +	qd = p->rxq.data[rx_queue_id];
>>>> +
>>>> +#ifdef RTE_ETHDEV_DEBUG_RX
>>>> +	RTE_ETH_VALID_PORTID_OR_ERR_RET(rx_port_id, 0);
>>>> +
>>>> +	if (qd == NULL) {
>>>> +		RTE_ETHDEV_LOG(ERR, "Invalid Rx queue_id=%u for port_id=%u\n",
>>>> +			rx_queue_id, rx_port_id);
>>>> +		return 0;
>>>> +	}
>>>> +#endif
>>>> +
>>>> +	if (p->recycle_rx_descriptors_refill == NULL)
>>>> +		return 0;
>>>> +
>>>> +	/* Replenish the Rx descriptors with the recycling
>>>> +	 * into Rx mbuf ring.
>>>> +	 */
>>>> +	p->recycle_rx_descriptors_refill(qd, nb_mbufs);
>>>> +
>>>> +	return nb_mbufs;
>>>> +}
>>>> +
>>>>  /**
>>>>   * @warning
>>>>   * @b EXPERIMENTAL: this API may change without prior notice
>>>> diff --git a/lib/ethdev/rte_ethdev_core.h b/lib/ethdev/rte_ethdev_core.h
>>>> index dcf8adab92..a2e6ea6b6c 100644
>>>> --- a/lib/ethdev/rte_ethdev_core.h
>>>> +++ b/lib/ethdev/rte_ethdev_core.h
>>>> @@ -56,6 +56,13 @@ typedef int (*eth_rx_descriptor_status_t)(void *rxq, uint16_t offset);
>>>>  /** @internal Check the status of a Tx descriptor */
>>>>  typedef int (*eth_tx_descriptor_status_t)(void *txq, uint16_t offset);
>>>>
>>>> +/** @internal Copy used mbufs from Tx mbuf ring into Rx mbuf ring */
>>>> +typedef uint16_t (*eth_recycle_tx_mbufs_reuse_t)(void *txq,
>>>> +		struct rte_eth_recycle_rxq_info *recycle_rxq_info);
>>>> +
>>>> +/** @internal Refill Rx descriptors with the recycling mbufs */
>>>> +typedef void (*eth_recycle_rx_descriptors_refill_t)(void *rxq, uint16_t nb);
>>>> +
>>>>  /**
>>>>   * @internal
>>>>   * Structure used to hold opaque pointers to internal ethdev Rx/Tx
>>>> @@ -90,9 +97,11 @@ struct rte_eth_fp_ops {
>>>>  	eth_rx_queue_count_t rx_queue_count;
>>>>  	/** Check the status of a Rx descriptor. */
>>>>  	eth_rx_descriptor_status_t rx_descriptor_status;
>>>> +	/** Refill Rx descriptors with the recycling mbufs. */
>>>> +	eth_recycle_rx_descriptors_refill_t recycle_rx_descriptors_refill;
>>>> I am afraid we can't put new fields here without ABI breakage.
>>>>
>>>> Agree
>>>>
>>>> It has to be below rxq.
>>>> Now thinking about it, the current layout is probably not the best one, and when
>>>> introducing this struct, I should probably have put rxq either on the top
>>>> of the struct, or on the next cache line.
>>>> But such a change is not possible right now anyway.
>>>> Same story for txq.
>>>>
>>>> Thus we should rearrange the structure like below:
>>>> struct rte_eth_fp_ops {
>>>> 	struct rte_ethdev_qdata rxq;
>>>> 	eth_rx_burst_t rx_pkt_burst;
>>>> 	eth_rx_queue_count_t rx_queue_count;
>>>> 	eth_rx_descriptor_status_t rx_descriptor_status;
>>>> 	eth_recycle_rx_descriptors_refill_t recycle_rx_descriptors_refill;
>>>> 	uintptr_t reserved1[2];
>>>> }
>>>
>>> Yes, I think such a layout will be better.
>>> The only problem here - we have to wait for 23.11 for that.
>>>
>> Ok, if not this change, maybe we still need to wait, because mbufs_recycle has other
>> ABI breakage, such as the change for 'struct rte_eth_dev'.
>
> Ok by me.
>
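
For readers following along, the rearrangement being discussed would give
roughly the layout below. The Rx half is as proposed above; the Tx half is my
assumption that it would be mirrored the same way. This is only a sketch of
the 23.11 idea, not a committed layout, and it relies on the existing typedefs
in rte_ethdev_core.h:

/* Sketch of the discussed 23.11 rearrangement: queue data first in each half,
 * then the fast-path callbacks, with reserved space shrunk from 3 to 2 to
 * make room for the new recycle callbacks without growing the struct.
 */
struct rte_eth_fp_ops {
	/* Rx half */
	struct rte_ethdev_qdata rxq;
	eth_rx_burst_t rx_pkt_burst;
	eth_rx_queue_count_t rx_queue_count;
	eth_rx_descriptor_status_t rx_descriptor_status;
	eth_recycle_rx_descriptors_refill_t recycle_rx_descriptors_refill;
	uintptr_t reserved1[2];

	/* Tx half (assumed to mirror the Rx half) */
	struct rte_ethdev_qdata txq;
	eth_tx_burst_t tx_pkt_burst;
	eth_tx_prep_t tx_pkt_prepare;
	eth_tx_descriptor_status_t tx_descriptor_status;
	eth_recycle_tx_mbufs_reuse_t recycle_tx_mbufs_reuse;
	uintptr_t reserved2[2];
} __rte_cache_aligned;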
>>>>
>>>>
>>>>  	/** Rx queues data. */
>>>>  	struct rte_ethdev_qdata rxq;
>>>> -	uintptr_t reserved1[3];
>>>> +	uintptr_t reserved1[2];
>>>>  	/**@}*/
>>>>
>>>>  	/**@{*/
>>>> @@ -106,9 +115,11 @@ struct rte_eth_fp_ops {
>>>>  	eth_tx_prep_t tx_pkt_prepare;
>>>>  	/** Check the status of a Tx descriptor. */
>>>>  	eth_tx_descriptor_status_t tx_descriptor_status;
>>>> +	/** Copy used mbufs from Tx mbuf ring into Rx. */
>>>> +	eth_recycle_tx_mbufs_reuse_t recycle_tx_mbufs_reuse;
>>>>  	/** Tx queues data. */
>>>>  	struct rte_ethdev_qdata txq;
>>>> -	uintptr_t reserved2[3];
>>>> +	uintptr_t reserved2[2];
>>>>  	/**@}*/
>>>>
>>>>  } __rte_cache_aligned;
>>>> diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
>>>> index 357d1a88c0..45c417f6bd 100644
>>>> --- a/lib/ethdev/version.map
>>>> +++ b/lib/ethdev/version.map
>>>> @@ -299,6 +299,10 @@ EXPERIMENTAL {
>>>>  	rte_flow_action_handle_query_update;
>>>>  	rte_flow_async_action_handle_query_update;
>>>>  	rte_flow_async_create_by_index;
>>>> +
>>>> +	# added in 23.07
>>>> +	rte_eth_recycle_mbufs;
>>>> +	rte_eth_recycle_rx_queue_info_get;
>>>>  };
>>>>
>>>>  INTERNAL {
>>>> --
>>>> 2.25.1
>>>>
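
As a closing note for anyone trying the series out, below is roughly how I
would expect an application to wire the two new experimental calls into a
simple rx/tx forwarding loop. The port/queue ids, the burst size and the
assumption that rte_eth_recycle_rx_queue_info_get() follows the existing
rte_eth_rx_queue_info_get() convention (returning 0 on success) are mine,
not part of the patch:

#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define BURST_SIZE 32

/* Hypothetical single-core forwarding loop with mbuf recycle enabled.
 * Rx queue (rx_port, 0) is paired with Tx queue (tx_port, 0) only, as the
 * API documentation above requires for the simple one-to-one case.
 */
static void
forward_loop(uint16_t rx_port, uint16_t tx_port)
{
	struct rte_eth_recycle_rxq_info rxq_info;
	struct rte_mbuf *pkts[BURST_SIZE];
	uint16_t nb_rx, nb_tx;

	/* Query the Rx mbuf-ring information once, before entering the loop. */
	if (rte_eth_recycle_rx_queue_info_get(rx_port, 0, &rxq_info) != 0)
		return;

	for (;;) {
		/* Move already-transmitted mbufs from the Tx ring straight
		 * back into the Rx descriptor ring, bypassing the mempool.
		 */
		rte_eth_recycle_mbufs(rx_port, 0, tx_port, 0, &rxq_info);

		nb_rx = rte_eth_rx_burst(rx_port, 0, pkts, BURST_SIZE);
		if (nb_rx == 0)
			continue;

		nb_tx = rte_eth_tx_burst(tx_port, 0, pkts, nb_rx);

		/* Free anything the Tx queue could not accept. */
		while (nb_tx < nb_rx)
			rte_pktmbuf_free(pkts[nb_tx++]);
	}
}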