Subject: Re: rte_pktmbuf_free_bulk vs rte_pktmbuf_free
From: Filip Janiszewski
To: Stephen Hemminger
Cc: users@dpdk.org
Date: Wed, 12 Jan 2022 07:32:29 +0100
In-Reply-To: <20220111100225.53fd75d3@hermes.local>
References: <5ecf22af-ac38-7dd5-b3ce-5b2ccf60b32f@filipjaniszewski.com> <20220111100225.53fd75d3@hermes.local>
List-Id: DPDK usage discussions

On 1/11/22 7:02 PM, Stephen Hemminger wrote:
> On Tue, 11 Jan 2022 13:12:24 +0100
> Filip Janiszewski wrote:
>
>> Hi,
>>
>> Is there any specific reason why using rte_pktmbuf_free_bulk seems to be
>> much slower than rte_pktmbuf_free in a loop? (DPDK 21.11)
>>
>> I ran a bunch of tests on a 50GbE link where I'm getting packet drops
>> (running with too few RX cores on purpose, to do some performance
>> verification), and when the time came to release the packets, I made a
>> quick change like this:
>>
>> .
>> //rte_pktmbuf_free_bulk( data, pkt_cnt );
>> for( idx = 0 ; idx < pkt_cnt ; ++idx ) {
>>     rte_pktmbuf_free( data[ idx ] );
>> }
>> .
>>
>> And suddenly I'm dropping around 10% fewer packets (the traffic rate is
>> around ~95 Mpps). In case it's relevant, RX from the NIC is done on a
>> separate core from the one where the packets are processed and released.
>>
>> I also ran the following experiment: I found the Mpps rate at which I
>> get around 2-5% drops using rte_pktmbuf_free_bulk, and took a series of
>> readings where I consistently saw drops. Then I switched to the loop
>> with rte_pktmbuf_free and ran the same tests again; all of a sudden I
>> can't drop anymore.
>>
>> Isn't this strange? I was sure rte_pktmbuf_free_bulk would be
>> optimized for bulk releases so people don't have to loop themselves.
>>
>> Thanks
>>
>
> Is your mbuf pool close to exhausted? How big is your bulk size?
> It might be that with larger bulk sizes, the loop is giving packets
> back that instantly get consumed by incoming packets.
> So either the pool is almost empty, or the non-bulk is keeping packets
> in cache more.
>

Well, yes, once it starts dropping the pool is full, but for quite a while before that event the memory usage is pretty low. In fact, I've added a few diagnostics; here they are for the rte_pktmbuf_free_bulk test:

.
Mem usage:  0.244141%, captured 0 pkts
Mem usage:  0.244141%, captured 0 pkts
Mem usage:  0.241852%, captured 11,681,034 pkts
Mem usage:  0.243807%, captured 44,327,015 pkts
Mem usage:  0.243855%, captured 78,834,947 pkts
Mem usage:  0.243235%, captured 113,343,787 pkts
Mem usage:  0.246191%, captured 147,867,507 pkts
Mem usage:  0.264502%, captured 182,367,926 pkts
Mem usage:  0.244856%, captured 216,917,982 pkts
Mem usage:  0.248837%, captured 251,445,720 pkts
Mem usage:  0.257087%, captured 285,985,575 pkts
Mem usage:  0.338078%, captured 320,509,279 pkts
Mem usage:  0.362778%, captured 355,016,693 pkts
Mem usage:  0.692415%, captured 389,521,441 pkts
Mem usage: 52.050495%, captured 424,066,179 pkts
Mem usage: 99.960041%, captured 456,936,573 pkts // DROPPING FROM HERE
Mem usage: 99.962330%, captured 485,568,660 pkts
Mem usage:  0.241208%, captured 491,178,294 pkts
Mem usage:  0.241208%, captured 491,178,294 pkts
.

The % value is the pool usage; it's an 8M-item pool. As you can see, it sharply gets exhausted all of a sudden and never recovers (the test stops at 500M packets). Note the prints have a 500ms interval.

Attempting the same test with 1 billion packets leads to a similar result: the pool is exhausted after a while and there are plenty of drops:

.
Mem usage:  0.244141%, captured 0 pkts
Mem usage:  0.244141%, captured 0 pkts
Mem usage:  0.242686%, captured 1,994,944 pkts
Mem usage:  0.243521%, captured 23,094,546 pkts
Mem usage:  0.350094%, captured 57,594,139 pkts
Mem usage:  0.245333%, captured 92,103,632 pkts
Mem usage:  0.243330%, captured 126,616,534 pkts
Mem usage:  0.244308%, captured 161,136,760 pkts
Mem usage:  0.244093%, captured 195,633,863 pkts
Mem usage:  0.245523%, captured 230,149,916 pkts
Mem usage:  0.249910%, captured 264,648,839 pkts
Mem usage:  0.258422%, captured 299,165,901 pkts
Mem usage:  0.301266%, captured 333,678,228 pkts
Mem usage:  0.425720%, captured 368,197,372 pkts
Mem usage:  0.542426%, captured 402,699,822 pkts
Mem usage: 21.447337%, captured 437,244,879 pkts
Mem usage: 86.296201%, captured 471,804,014 pkts
Mem usage: 99.958158%, captured 501,730,958 pkts // DROPPING FROM HERE
Mem usage: 99.954629%, captured 529,462,253 pkts
Mem usage: 99.958587%, captured 556,391,644 pkts
Mem usage: 99.932027%, captured 582,999,427 pkts
Mem usage: 99.959493%, captured 609,456,194 pkts
Mem usage: 99.959779%, captured 635,641,696 pkts
Mem usage: 99.958920%, captured 661,792,475 pkts
Mem usage: 99.954844%, captured 687,919,194 pkts
Mem usage: 99.957728%, captured 713,992,293 pkts
Mem usage: 99.960685%, captured 740,042,732 pkts
Mem usage: 99.956965%, captured 766,240,304 pkts
Mem usage: 99.960780%, captured 792,423,477 pkts
Mem usage: 99.960351%, captured 818,629,881 pkts
Mem usage: 99.959016%, captured 844,904,955 pkts
Mem usage: 99.960637%, captured 871,162,327 pkts
Mem usage:  0.241995%, captured 878,826,100 pkts
Mem usage:  0.241995%, captured 878,826,100 pkts
.

I can fix the issue by switching from rte_pktmbuf_free_bulk to rte_pktmbuf_free in a loop (no drops at all, no matter how many packets I capture). I would like to understand this issue better. Also, I'm a little confused about why the performance degrades after a while; you're suggesting there is some packet caching going on, can you elaborate on that?
Thanks

-- 
BR, Filip
+48 666 369 823