Date: Wed, 21 Dec 2022 13:48:58 +0000
From: Ferruh Yigit
To: "You, KaisenX", dev@dpdk.org, "Burakov, Anatoly", David Marchand
Cc: stable@dpdk.org, "Yang, Qiming", "Zhou, YidingX", "Wu, Jingjing",
 "Xing, Beilei", "Zhang, Qi Z", Luca Boccassi, "Mcnamara, John",
 Kevin Traynor
Subject: Re: [PATCH] net/iavf:fix slow memory allocation
References: <20221117065726.277672-1-kaisenx.you@intel.com>
 <3ad04278-59c0-0c60-5c8c-9e57f33bb0de@amd.com>
Content-Type: text/plain; charset=UTF-8
List-Id: patches for DPDK stable branches

On 12/20/2022 6:52 AM, You, KaisenX wrote:
>
>
>> -----Original Message-----
>> From: Ferruh Yigit
>> Sent: December 13, 2022 21:28
>> Subject: Re: [PATCH] net/iavf:fix slow memory allocation
>>
>> On 12/13/2022 9:35 AM, Ferruh Yigit wrote:
>>> On 12/13/2022 7:52 AM, You, KaisenX wrote:
>>>>
>>>>
>>>>> -----Original Message-----
>>>>> From: Ferruh Yigit
>>>>> Sent: December 8, 2022 23:04
>>>>> Subject: Re: [PATCH] net/iavf:fix slow memory allocation
>>>>>
>>>>> On 11/17/2022 6:57 AM, Kaisen You wrote:
>>>>>> In some cases,
>>>>>> DPDK does not allocate hugepage heap memory for some sockets
>>>>>> because of the user-supplied parameters (e.g. -l 40-79, so SOCKET 0
>>>>>> has no memory).
>>>>>> When the interrupt thread runs on a core of such a socket, each
>>>>>> allocation/release executes a whole set of heap allocation/release
>>>>>> operations, resulting in poor performance.
>>>>>> Instead we call malloc() to get memory from the system's heap space
>>>>>> to fix this problem.
>>>>>>
>>>>>
>>>>> Hi Kaisen,
>>>>>
>>>>> Using libc malloc can improve performance for this case, but I would
>>>>> like to understand the root cause of the problem.
>>>>>
>>>>> As far as I can see, interrupt callbacks are run by the interrupt
>>>>> thread ("eal-intr-thread"), and the interrupt thread is created by
>>>>> the 'rte_ctrl_thread_create()' API.
>>>>>
>>>>> The 'rte_ctrl_thread_create()' comment mentions that "CPU affinity is
>>>>> retrieved at the time 'rte_eal_init()' was called,"
>>>>>
>>>>> And 'rte_eal_init()' is run on the main lcore, which is the first
>>>>> lcore in the core list (unless otherwise defined with --main-lcore).
>>>>>
>>>>> So the interrupts should be running on a core that has hugepages
>>>>> allocated for it; am I missing something here?
>>>>>
>>>>>
>>>> Thanks for your comments. Let me try to explain the root cause here:
>>>> the socket that the eal-intr-thread CPU belongs to has no memory pool
>>>> created for it.
>>>> That results in frequent memory creation/destruction afterwards.
>>>>
>>>> When testpmd is started, the parameter (e.g. -l 40-79) is set.
>>>> Different OSes have different topologies. Some OSes, like SUSE, only
>>>> create a memory pool for one CPU socket, while other systems create
>>>> them for two.
>>>> That is why the problem occurs when using memory on different OSes.
>>>
>>>
>>> It is the testpmd application that decides which socket to allocate
>>> memory from, right? This is nothing specific to the OS.
>>>
>>> As far as I remember, testpmd logic is to allocate from the sockets
>>> whose cores are used (provided with the -l parameter), and from the
>>> socket that the device is attached to.
>>>
>>> So, in a dual-socket system, if all used cores are in socket 1 and the
>>> NIC is in socket 1, no memory is allocated for socket 0. This is to
>>> optimize memory consumption.
>>>
>>>
>>> Can you please confirm that the problem you are observing is because
>>> the interrupt handler is running on a CPU which doesn't have memory
>>> allocated for its socket?
>>>
>>> In this case, what I don't understand is why interrupts are not running
>>> on the main lcore, which should be the first core in the list; for the
>>> "-l 40-79" sample it should be lcore 40.
>>> For your case, is the interrupt handler run on core 0, or on an
>>> arbitrary core?
>>> If so, can you please confirm whether providing the core list as
>>> "-l 0,40-79" fixes the issue?
>>>
> First of all, sorry to reply to you so late.
> I can confirm that the problem I observed is because the interrupt
> handler is running on a CPU which doesn't have memory allocated for its
> socket.
>
> In my case, the interrupt handler is running on core 0.
> I tried providing "-l 0,40-79" as a startup parameter, and this
> resolves the issue.
>
> I correct my previous statement: this problem does not only occur on
> the SUSE system. On any OS, this problem occurs as long as the core
> range in the startup parameters is only on node 1.
>
>>>
>>>>>
>>>>>
>>>>> And what about using the 'rte_malloc_socket()' API (instead of
>>>>> rte_malloc), which takes 'socket' as a parameter, and providing the
>>>>> socket that the device is on as the parameter to this API? Is it
>>>>> possible to test this?
>>>>>
>>>>>
>>>> As to the reason for not using rte_malloc_socket: I thought
>>>> rte_malloc_socket() could solve the problem too. And the appropriate
>>>> parameter should be the socket_id that created the memory pool during
>>>> DPDK initialization.
>>>> Assuming that the socket_id of the initially allocated memory is 1,
>>>> first let the eal_intr_thread determine whether it is on that
>>>> socket_id, then record this socket_id in the eal_intr_thread and pass
>>>> it to the iavf_event_thread. But there seems to be no way to link
>>>> this parameter to the iavf_dev_event_post() function. That is why
>>>> rte_malloc_socket is not used.
>>>>
>>>
>>> I was thinking the socket id of the device could be used, but that
>>> won't help if the core that the interrupt handler runs on is in a
>>> different socket.
>>> And I also don't know if there is a way to get the socket that the
>>> interrupt thread is on. @David may help perhaps.
>>>
>>> So the question is why the interrupt thread is not running on the main
>>> lcore.
>>>
>>
>> OK, after some talk with David, what I was missing is that
>> 'rte_ctrl_thread_create()' does NOT run on the main lcore; it can run
>> on any core except the data plane cores.
>>
>> The driver "iavf-event-thread" thread (iavf_dev_event_handle()) and the
>> interrupt thread (so the driver interrupt callback
>> iavf_dev_event_post()) can run on any core, making it hard to manage.
>> And it seems it is not possible to control where the interrupt thread
>> runs.
>>
>> One option can be allocating hugepages for all sockets, but this
>> requires user involvement and can't happen transparently.
>>
>> The other option can be to control where "iavf-event-thread" runs, like
>> using 'rte_thread_create()' to create the thread and providing an
>> attribute to run it on the main lcore
>> (rte_lcore_cpuset(rte_get_main_lcore()))?
>>
>> Can you please test the above option?
>>
>>
> The first option can solve this issue, but to borrow from your previous
> saying, "in a dual socket system, if all used cores are in socket 1 and
> the NIC is in socket 1, no memory is allocated for socket 0. This is to
> optimize memory consumption."
> I think it is unreasonable to do so.
>
> About the other option:
> In the 'rte_eal_intr_init' function, after the thread is created,
> I set the thread affinity for eal-intr-thread, but it does not solve
> this issue.

Hi Kaisen,

There are two threads involved.

The first one is the interrupt thread, "eal-intr-thread", created by
'rte_eal_intr_init()'.
The second one is the iavf event handler, "iavf-event-thread", created by
'iavf_dev_event_handler_init()'.

The first one is triggered by an interrupt and puts a message on a list;
the second one consumes from the list and processes the message.

So I assume the two threads being in different sockets, or memory being
allocated in a different socket than the cores running, causes the
performance issue.

Did you test with the second thread, "iavf-event-thread", affinitized to
the main core? (by creating the thread using the 'rte_thread_create()'
API)

>>
>>>> Let me know if there is anything else unclear.
>>>>>
>>>>>> Fixes: cb5c1b91f76f ("net/iavf: add thread for event callbacks")
>>>>>> Cc: stable@dpdk.org
>>>>>>
>>>>>> Signed-off-by: Kaisen You
>>>>>> ---
>>>>>>  drivers/net/iavf/iavf_vchnl.c | 8 +++-----
>>>>>>  1 file changed, 3 insertions(+), 5 deletions(-)
>>>>>>
>>>>>> diff --git a/drivers/net/iavf/iavf_vchnl.c
>>>>>> b/drivers/net/iavf/iavf_vchnl.c
>>>>>> index f92daf97f2..a05791fe48 100644
>>>>>> --- a/drivers/net/iavf/iavf_vchnl.c
>>>>>> +++ b/drivers/net/iavf/iavf_vchnl.c
>>>>>> @@ -36,7 +36,6 @@ struct iavf_event_element {
>>>>>>  	struct rte_eth_dev *dev;
>>>>>>  	enum rte_eth_event_type event;
>>>>>>  	void *param;
>>>>>> -	size_t param_alloc_size;
>>>>>>  	uint8_t param_alloc_data[0];
>>>>>>  };
>>>>>>
>>>>>> @@ -80,7 +79,7 @@ iavf_dev_event_handle(void *param __rte_unused)
>>>>>>  		TAILQ_FOREACH_SAFE(pos, &pending, next, save_next) {
>>>>>>  			TAILQ_REMOVE(&pending, pos, next);
>>>>>>  			rte_eth_dev_callback_process(pos->dev, pos->event, pos->param);
>>>>>> -			rte_free(pos);
>>>>>> +			free(pos);
>>>>>>  		}
>>>>>>  	}
>>>>>>
>>>>>> @@ -94,14 +93,13 @@ iavf_dev_event_post(struct rte_eth_dev *dev,
>>>>>>  {
>>>>>>  	struct iavf_event_handler *handler =
>>>>>> &event_handler;
>>>>>>  	char notify_byte;
>>>>>> -	struct iavf_event_element *elem = rte_malloc(NULL, sizeof(*elem) + param_alloc_size, 0);
>>>>>> +	struct iavf_event_element *elem = malloc(sizeof(*elem) + param_alloc_size);
>>>>>>  	if (!elem)
>>>>>>  		return;
>>>>>>
>>>>>>  	elem->dev = dev;
>>>>>>  	elem->event = event;
>>>>>>  	elem->param = param;
>>>>>> -	elem->param_alloc_size = param_alloc_size;
>>>>>>  	if (param && param_alloc_size) {
>>>>>>  		rte_memcpy(elem->param_alloc_data, param, param_alloc_size);
>>>>>>  		elem->param = elem->param_alloc_data;
>>>>>> @@ -165,7 +163,7 @@ iavf_dev_event_handler_fini(void)
>>>>>>  	struct iavf_event_element *pos, *save_next;
>>>>>>  	TAILQ_FOREACH_SAFE(pos, &handler->pending, next, save_next) {
>>>>>>  		TAILQ_REMOVE(&handler->pending, pos, next);
>>>>>> -		rte_free(pos);
>>>>>> +		free(pos);
>>>>>>  	}
>>>>>>  }
>>>>>>
>>>>
>>>
>
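For readers following the thread-affinity suggestion discussed above: the mechanism of creating a thread restricted to one CPU can be illustrated with plain POSIX threads. This is a minimal sketch of the concept only, not the DPDK 'rte_thread_create()' / 'rte_lcore_cpuset()' calls themselves, and 'run_pinned'/'worker' are names invented for this example:

```c
#define _GNU_SOURCE
#include <assert.h>
#include <pthread.h>
#include <sched.h>

static void *worker(void *arg)
{
	/* Record which CPU the pinned thread actually ran on. */
	*(int *)arg = sched_getcpu();
	return NULL;
}

/* Create a thread whose affinity is restricted to a single CPU via a
 * pthread attribute, analogous to creating the event thread with the
 * main lcore's cpuset. Returns the CPU the thread observed, or -1 on
 * error or mismatch. */
int run_pinned(void)
{
	cpu_set_t allowed;
	int cpu = -1;

	/* Pick the first CPU the calling process is allowed to use. */
	if (sched_getaffinity(0, sizeof(allowed), &allowed) != 0)
		return -1;
	for (int i = 0; i < CPU_SETSIZE; i++) {
		if (CPU_ISSET(i, &allowed)) {
			cpu = i;
			break;
		}
	}
	if (cpu < 0)
		return -1;

	cpu_set_t set;
	CPU_ZERO(&set);
	CPU_SET(cpu, &set);

	pthread_attr_t attr;
	pthread_attr_init(&attr);
	pthread_attr_setaffinity_np(&attr, sizeof(set), &set);

	int cpu_seen = -1;
	pthread_t tid;
	if (pthread_create(&tid, &attr, worker, &cpu_seen) != 0) {
		pthread_attr_destroy(&attr);
		return -1;
	}
	pthread_join(tid, NULL);
	pthread_attr_destroy(&attr);

	return cpu_seen == cpu ? cpu : -1;
}
```

In DPDK terms, the same shape would be a 'rte_thread_attr_t' carrying the cpuset of the main lcore, passed to 'rte_thread_create()'.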
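The allocation pattern the patch moves from rte_malloc() to libc malloc()/free() can also be sketched in isolation. The struct and function names below mirror the driver's but are simplified stand-ins (no DPDK types), with the flexible array member carrying a copy of the parameter just as 'param_alloc_data' does in the patch:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>
#include <sys/queue.h>

/* Simplified stand-in for struct iavf_event_element from the patch. */
struct event_element {
	TAILQ_ENTRY(event_element) next;
	int event;                        /* stands in for rte_eth_event_type */
	void *param;
	unsigned char param_alloc_data[]; /* flexible array member */
};

TAILQ_HEAD(event_list, event_element);

/* Allocate with libc malloc() (the fix) rather than rte_malloc(),
 * copying the caller's parameter into the trailing flexible array so
 * the queued element owns its own copy. Returns the element, or NULL
 * on allocation failure. */
struct event_element *
event_post(struct event_list *list, int event,
	   const void *param, size_t param_alloc_size)
{
	struct event_element *elem = malloc(sizeof(*elem) + param_alloc_size);

	if (elem == NULL)
		return NULL;
	elem->event = event;
	elem->param = NULL;
	if (param != NULL && param_alloc_size > 0) {
		memcpy(elem->param_alloc_data, param, param_alloc_size);
		elem->param = elem->param_alloc_data;
	}
	TAILQ_INSERT_TAIL(list, elem, next);
	return elem;
}

/* Drain the list, releasing each element with free() to match malloc(). */
void
event_drain(struct event_list *list)
{
	struct event_element *pos;

	while ((pos = TAILQ_FIRST(list)) != NULL) {
		TAILQ_REMOVE(list, pos, next);
		free(pos);
	}
}
```

Because the element's lifetime is malloc-managed, the allocation no longer touches the per-socket hugepage heap at all, which is what avoids the slow path when the calling thread's socket has no heap.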