From: Ferruh Yigit
To: Yasin CANER
Cc: Stephen Hemminger, users@dpdk.org
Subject: Re: DPDK 22.11 - How to fix memory leak for KNI - How to debug
Date: Fri, 19 May 2023 19:43:26 +0100
Message-ID: <28c13351-994f-1898-8227-6d6875ed4812@amd.com>
References: <20230508091845.646caf17@hermes.local>
 <4f53f3be-0bae-e204-5737-7735b4a2ba5b@amd.com>
List-Id: DPDK usage discussions

On 5/19/2023 6:47 PM, Yasin CANER wrote:
> Hello,
>

Hi,

Can you please bottom-post? The combination of both makes the discussion
very hard to follow.

> I tested all day, both before and after patching.
>
> I could not determine whether it is a memory leak or not. Maybe it
> needs optimization. You lead, I follow.
>
> 1-) You are right, alloc_q is never bigger than 1024. But it always
> allocates 32 units, then more than 1024 are being freed. Maybe it
> takes time, I don't know.
>

At least alloc_q is only freed on kni release, so mbufs in that fifo can
sit there for as long as the application is running.

> 2-) I tested tx_rs_thresh via ping. After 210 sec, the allocated
> memory goes back to the mempool (most of it). (The driver is virtio
> and the eth devices are bound via igb_uio.) It really takes time, so
> it is better to increase the size of the mempool.
> (https://doc.dpdk.org/guides/prog_guide/poll_mode_drv.html)
>
> 3-) I tried to list the mempool state at random intervals:
>

It looks like the number of mbufs in use is increasing, but in the worst
case both alloc_q and free_q can be full, which makes 2048 mbufs, and in
the tests below the number of used mbufs is not bigger than this value,
so it looks OK.

If you run your test for a longer duration, do you observe the used mbuf
count going much above this number?

Also, what is the 'num' parameter to the 'rte_kni_tx_burst()' API? If it
is bigger than 'MAX_MBUF_BURST_NUM', that may lead mbufs to accumulate
in the free_q fifo (a capped TX loop is sketched below).

As an experiment, it is possible to decrease the KNI fifo sizes and
observe the result.
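
For illustration only, a minimal sketch of such a capped TX path. It
assumes a local KNI_BURST_MAX of 32, since the real MAX_MBUF_BURST_NUM
is internal to lib/kni/rte_kni.c and cannot be referenced by
applications; kni_tx_capped() is a hypothetical helper, not code from
this thread:

#include <rte_common.h>
#include <rte_kni.h>
#include <rte_mbuf.h>

#define KNI_BURST_MAX 32u  /* assumption: mirrors MAX_MBUF_BURST_NUM */

/* Push nb_pkts mbufs towards the kernel in chunks of at most
 * KNI_BURST_MAX, freeing whatever the fifo rejects. */
static void
kni_tx_capped(struct rte_kni *kni, struct rte_mbuf **pkts,
              unsigned int nb_pkts)
{
	unsigned int sent = 0;

	while (sent < nb_pkts) {
		unsigned int n = RTE_MIN(nb_pkts - sent, KNI_BURST_MAX);
		unsigned int ret = rte_kni_tx_burst(kni, &pkts[sent], n);

		sent += ret;
		if (ret < n) {
			/* Fifo is full: the caller still owns the
			 * unsent mbufs and must free them, otherwise
			 * they really do leak. */
			while (sent < nb_pkts)
				rte_pktmbuf_free(pkts[sent++]);
			return;
		}
	}
}

The point is only that each rte_kni_tx_burst() call stays within the
per-burst limit and that rejected mbufs go back to the pool.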
> Test 1 -) (old code) ICMP testing. The whole mempool size is about
> 10,350. So after the FIFO reaches its max size of 1024, about 10% of
> the mempool is in use. But, little by little, memory stays in use and
> doesn't go back to the pool. I could not find the reason.
>
> MBUF_POOL    448   9,951    4.31% [|....................]
> MBUF_POOL  1,947   8,452   18.72% [||||.................]
> MBUF_POOL  1,803   8,596   17.34% [||||.................]
> MBUF_POOL  1,941   8,458   18.67% [||||.................]
> MBUF_POOL  1,900   8,499   18.27% [||||.................]
> MBUF_POOL  1,999   8,400   19.22% [||||.................]
> MBUF_POOL  1,724   8,675   16.58% [||||.................]
> MBUF_POOL  1,811   8,588   17.42% [||||.................]
> MBUF_POOL  1,978   8,421   19.02% [||||.................]
> MBUF_POOL  2,008   8,391   19.31% [||||.................]
> MBUF_POOL  1,854   8,545   17.83% [||||.................]
> MBUF_POOL  1,922   8,477   18.48% [||||.................]
> MBUF_POOL  1,892   8,507   18.19% [||||.................]
> MBUF_POOL  1,957   8,442   18.82% [||||.................]
>
> Test 2 -) (old code) Ran iperf3 UDP testing from the kernel to the eth
> device. Waited 4 minutes to see what happens: memory doesn't go back
> to the mempool; little by little, memory usage increases.
>
> MBUF_POOL    512   9,887    4.92% [|....................]
> MBUF_POOL  1,411   8,988   13.57% [|||..................]
> MBUF_POOL  1,390   9,009   13.37% [|||..................]
> MBUF_POOL  1,558   8,841   14.98% [|||..................]
> MBUF_POOL  1,453   8,946   13.97% [|||..................]
> MBUF_POOL  1,525   8,874   14.66% [|||..................]
> MBUF_POOL  1,592   8,807   15.31% [||||.................]
> MBUF_POOL  1,639   8,760   15.76% [||||.................]
> MBUF_POOL  1,624   8,775   15.62% [||||.................]
> MBUF_POOL  1,618   8,781   15.56% [||||.................]
> MBUF_POOL  1,708   8,691   16.42% [||||.................]
> iperf is STOPPED to tx_fresh for 4 min
> MBUF_POOL  1,709   8,690   16.43% [||||.................]
> iperf is STOPPED to tx_fresh for 4 min
> MBUF_POOL  1,709   8,690   16.43% [||||.................]
> MBUF_POOL  1,683   8,716   16.18% [||||.................]
> MBUF_POOL  1,563   8,836   15.03% [||||.................]
> MBUF_POOL  1,726   8,673   16.60% [||||.................]
> MBUF_POOL  1,589   8,810   15.28% [||||.................]
> MBUF_POOL  1,556   8,843   14.96% [|||..................]
> MBUF_POOL  1,610   8,789   15.48% [||||.................]
> MBUF_POOL  1,616   8,783   15.54% [||||.................]
> MBUF_POOL  1,709   8,690   16.43% [||||.................]
> MBUF_POOL  1,740   8,659   16.73% [||||.................]
> MBUF_POOL  1,546   8,853   14.87% [|||..................]
> MBUF_POOL  1,710   8,689   16.44% [||||.................]
> MBUF_POOL  1,787   8,612   17.18% [||||.................]
> MBUF_POOL  1,579   8,820   15.18% [||||.................]
> MBUF_POOL  1,780   8,619   17.12% [||||.................]
> MBUF_POOL  1,679   8,720   16.15% [||||.................]
> MBUF_POOL  1,604   8,795   15.42% [||||.................]
> MBUF_POOL  1,761   8,638   16.93% [||||.................]
> MBUF_POOL  1,773   8,626   17.05% [||||.................]
>
> Test 3 -) (after patching) Ran iperf3 UDP testing from the kernel to
> the eth device. Looks stable after patching.
>
> MBUF_POOL     76  10,323    0.73% [|....................]
> MBUF_POOL    193  10,206    1.86% [|....................]
> MBUF_POOL     96  10,303    0.92% [|....................]
> MBUF_POOL    269  10,130    2.59% [|....................]
> MBUF_POOL    102  10,297    0.98% [|....................]
> MBUF_POOL    235  10,164    2.26% [|....................]
> MBUF_POOL     87  10,312    0.84% [|....................]
> MBUF_POOL    293  10,106    2.82% [|....................]
> MBUF_POOL     99  10,300    0.95% [|....................]
> MBUF_POOL    296  10,103    2.85% [|....................]
> MBUF_POOL     90  10,309    0.87% [|....................]
> MBUF_POOL    299  10,100    2.88% [|....................]
> MBUF_POOL     86  10,313    0.83% [|....................]
> MBUF_POOL    262  10,137    2.52% [|....................]
> MBUF_POOL     81  10,318    0.78% [|....................]
> MBUF_POOL     81  10,318    0.78% [|....................]
> MBUF_POOL     87  10,312    0.84% [|....................]
> MBUF_POOL    252  10,147    2.42% [|....................]
> MBUF_POOL     97  10,302    0.93% [|....................]
> iperf is STOPPED to tx_fresh for 4 min
> MBUF_POOL    296  10,103    2.85% [|....................]
> MBUF_POOL     95  10,304    0.91% [|....................]
> MBUF_POOL    269  10,130    2.59% [|....................]
> MBUF_POOL    302  10,097    2.90% [|....................]
> MBUF_POOL     88  10,311    0.85% [|....................]
> MBUF_POOL    305  10,094    2.93% [|....................]
> MBUF_POOL     88  10,311    0.85% [|....................]
> MBUF_POOL    290  10,109    2.79% [|....................]
> MBUF_POOL     84  10,315    0.81% [|....................]
> MBUF_POOL     85  10,314    0.82% [|....................]
> MBUF_POOL    291  10,108    2.80% [|....................]
> MBUF_POOL    303  10,096    2.91% [|....................]
> MBUF_POOL     92  10,307    0.88% [|....................]
>
> Best regards.
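
In the listings above the columns are: pool name, mbufs in use, mbufs
available, percent in use. For reference, a minimal sketch of how such a
line can be produced; print_pool_usage() is a hypothetical helper, while
the two counters are standard librte_mempool APIs:

#include <stdio.h>
#include <rte_mempool.h>

/* Print one usage line for a mempool, in the spirit of the
 * MBUF_POOL listings above. */
static void
print_pool_usage(const struct rte_mempool *mp)
{
	unsigned int used = rte_mempool_in_use_count(mp);
	unsigned int avail = rte_mempool_avail_count(mp);

	printf("%-12s %8u %8u %6.2f%%\n", mp->name, used, avail,
	       100.0 * used / (used + avail));
}

Calling it periodically, e.g. with rte_mempool_lookup("MBUF_POOL"),
would reproduce a table like the ones quoted here.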
>
> On Thu, 18 May 2023 at 17:56, Ferruh Yigit wrote:
>
> > On 5/18/2023 9:14 AM, Yasin CANER wrote:
> > > Hello Ferruh,
> > >
> > > Thanks for your kind response. Also thanks to Stephen.
> > >
> > > Even if only 1 packet is consumed by the kernel, each time rx_kni
> > > allocates another 32 units. After a while, the whole mempool is
> > > used up in the alloc_q of kni; there is no room left in it.
> > >
> >
> > What you described continues until 'alloc_q' is full; by default the
> > fifo length is 1024 (KNI_FIFO_COUNT_MAX). Do you allocate fewer
> > mbufs in your mempool?
> >
> > You can consider either increasing the mempool size or decreasing
> > the 'alloc_q' fifo length, but reducing the fifo size may cause
> > performance issues, so you need to evaluate that option.
> >
> > > Do you think my mistake is using one common mempool for both kni
> > > and eth?
> > >
> >
> > Using the same mempool for both is fine.
> >
> > > If it needs a separate mempool, I'd like that noted in the docs.
> > >
> > > Best regards.
> > >
> > > On Wed, 17 May 2023 at 20:53, Ferruh Yigit wrote:
> > >
> > > > On 5/9/2023 12:13 PM, Yasin CANER wrote:
> > > > > Hello,
> > > > >
> > > > > I drew a flow via asciiflow to explain myself better. The
> > > > > problem is that after transmitting packets (mbufs), they are
> > > > > never put into kni->free_q to go back to the original pool.
> > > > > Each cycle, it allocates another 32 units, which causes leaks.
> > > > > Or I am missing something.
> > > > >
> > > > > I already tried the rte_eth_tx_done_cleanup() function, but it
> > > > > didn't fix anything.
> > > > >
> > > > > I am working on a patch to fix this issue, but I am not sure
> > > > > if there is another way.
> > > > >
> > > > > Best regards.
> > > > >
> > > > > https://pastebin.ubuntu.com/p/s4h5psqtgZ/
> > > > >
> > > > > unsigned
> > > > > rte_kni_rx_burst(struct rte_kni *kni, struct rte_mbuf **mbufs,
> > > > >                  unsigned int num)
> > > > > {
> > > > >         unsigned int ret = kni_fifo_get(kni->tx_q,
> > > > >                                         (void **)mbufs, num);
> > > > >
> > > > >         /* If buffers removed, allocate mbufs and then put
> > > > >          * them into alloc_q */
> > > > >         /* Question: how to test whether buffers were removed
> > > > >          * or not? */
> > > > >         if (ret)
> > > > >                 kni_allocate_mbufs(kni);
> > > > >
> > > > >         return ret;
> > > > > }
> > > > >
> > > >
> > > > Selam Yasin,
> > > >
> > > > You can expect the 'kni->alloc_q' fifo to be full; this is not a
> > > > memory leak.
> > > >
> > > > As you pointed out, the number of mbufs consumed by the kernel
> > > > from 'alloc_q' and the number of mbufs added to 'alloc_q' are
> > > > not equal, and this is expected.
> > > >
> > > > The target here is to prevent buffer underflow from the kernel's
> > > > perspective, so that it always has mbufs available for new
> > > > packets. That is why new mbufs are added to 'alloc_q' at worst
> > > > at the same, and sometimes at a higher, rate than they are
> > > > consumed.
> > > >
> > > > You should calculate your mbuf requirement with the assumption
> > > > that 'kni->alloc_q' will be full of mbufs.
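
As a back-of-the-envelope illustration of this sizing advice (every
constant below is an assumption for illustration, not a value from this
thread), a mempool can be dimensioned so that both KNI fifos being full
at once is already budgeted for:

#define KNI_FIFO_LEN  1024               /* default KNI_FIFO_COUNT_MAX */
#define NB_RXD         512               /* NIC RX descriptor ring */
#define NB_TXD         512               /* NIC TX descriptor ring */
#define NB_CORES         4
#define BURST_SIZE      32

#define NB_MBUFS (2 * KNI_FIFO_LEN       /* alloc_q + free_q both full */ \
		+ NB_RXD + NB_TXD        /* descriptor rings */           \
		+ NB_CORES * BURST_SIZE  /* in-flight bursts */           \
		+ 1024)                  /* per-lcore caches, headroom */

With these made-up numbers NB_MBUFS comes to about 4,224, well below the
~10,350 pool used in the tests above, which is consistent with the pool
never running dry there.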
> > > > 'kni->alloc_q' is freed when kni is removed.
> > > > Since 'alloc_q' holds the physical addresses of the mbufs, it is
> > > > a little challenging to free them in userspace; that is why the
> > > > kernel first tries to move the mbufs to the 'kni->free_q' fifo,
> > > > please check 'kni_net_release_fifo_phy()' for it.
> > > >
> > > > If all are moved to the 'free_q' fifo, nothing is left in
> > > > 'alloc_q'; if not, userspace frees 'alloc_q' in
> > > > 'rte_kni_release()' with the following call:
> > > > `kni_free_fifo_phy(kni->pktmbuf_pool, kni->alloc_q);`
> > > >
> > > > I can see you have submitted fixes for this issue; although, as
> > > > explained above, I don't think a defect exists, I will review
> > > > them today/tomorrow.
> > > >
> > > > Regards,
> > > > Ferruh
> > > >
> > > > > On Mon, 8 May 2023 at 19:18, Stephen Hemminger wrote:
> > > > >
> > > > > > On Mon, 8 May 2023 09:01:41 +0300
> > > > > > Yasin CANER wrote:
> > > > > >
> > > > > > > Hello Stephen,
> > > > > > >
> > > > > > > Thank you for the response, it helps me a lot. I
> > > > > > > understand the problem better.
> > > > > > >
> > > > > > > After reading the mbuf library documentation
> > > > > > > (https://doc.dpdk.org/guides/prog_guide/mempool_lib.html),
> > > > > > > I realized that the 31-unit memory allocation doesn't
> > > > > > > return to the pool!
> > > > > >
> > > > > > If receive burst returns 1 mbuf, the other 31 pointers in
> > > > > > the array are not valid. They do not point to mbufs.
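
To make that concrete, a minimal sketch (illustrative, not code from the
thread; kni_rx_and_drop() is a hypothetical helper): only the first
'ret' entries of the array are valid after a burst, so only those may be
freed or forwarded.

#include <rte_kni.h>
#include <rte_mbuf.h>

#define PKT_BURST_SZ 32

/* Drain up to one burst from the kni device and drop the packets. */
static void
kni_rx_and_drop(struct rte_kni *kni)
{
	struct rte_mbuf *pkts[PKT_BURST_SZ];
	unsigned int ret = rte_kni_rx_burst(kni, pkts, PKT_BURST_SZ);
	unsigned int i;

	/* pkts[ret] onwards is uninitialized: never free or read it */
	for (i = 0; i < ret; i++)
		rte_pktmbuf_free(pkts[i]);
}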
> > > > > > > A single mbuf can be freed via rte_pktmbuf_free, so it can
> > > > > > > go back to the pool.
> > > > > > >
> > > > > > > The main problem is that the allocation doesn't return to
> > > > > > > the original pool and acts as used. So, after following
> > > > > > > the rte_pktmbuf_free function, I realized that there are 2
> > > > > > > functions that help mbufs back to the pool. These are
> > > > > > > rte_mbuf_raw_free and rte_pktmbuf_free_seg. I will focus
> > > > > > > on them.
> > > > > > >
> > > > > > > If there is another suggestion, I will be very pleased.
> > > > > > >
> > > > > > > Best regards.
> > > > > > >
> > > > > > > Yasin CANER
> > > > > > > Ulak