From mboxrd@z Thu Jan 1 00:00:00 1970
From: siddarth rai
Date: Tue, 4 Feb 2020 17:37:09 +0530
To: "Burakov, Anatoly"
Cc: David Marchand, dev
Subject: Re: [dpdk-dev] Big spike in DPDK VSZ
List-Id: DPDK patches and discussions

Hi Anatoly,

You mentioned that the maximum size of a mempool is limited. Can you tell
me what the limit is and where it is specified?

Regards,
Siddarth

On Tue, Feb 4, 2020 at 5:27 PM siddarth rai wrote:

> Hi,
>
> Thanks for the clarification.
>
> Regards,
> Siddarth
>
> On Tue, Feb 4, 2020 at 4:43 PM Burakov, Anatoly wrote:
>
>> On 04-Feb-20 10:55 AM, siddarth rai wrote:
>> > Hi Anatoly,
>> >
>> > I don't need a secondary process.
>>
>> I understand that you don't; however, that doesn't negate the fact that
>> the codepath expects that you do.
>>
>> > I tried out Julien's suggestion and set the param 'RTE_MAX_MEM_MB'
>> > value to 8192 (the original value was over 500K). This works as a cap:
>> > the virtual size dropped to less than 8G, so this seems to be working
>> > for me.
>> >
>> > I have a few queries/concerns, though. Is it safe to reduce
>> > RTE_MAX_MEM_MB to such a low value? Can I reduce it further? What
>> > would be the impact of doing so? Will it limit the maximum size of the
>> > mbuf pool I create?
>>
>> It depends on your use case. The maximum size of a mempool is limited as
>> it is; the better question is where to place that limit.
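For reference, the limits being discussed are compile-time constants in DPDK 19.08's build configuration. The following is an approximate sketch of the relevant knobs from `config/rte_config.h` in a meson build (the legacy make build carries the same knobs as `CONFIG_RTE_*` entries in `config/common_base`); the default values shown here are from memory and should be verified against an actual 19.08 tree:

```c
/* config/rte_config.h -- memory-subsystem limits (defaults approximate) */
#define RTE_MAX_MEMSEG_LISTS 64          /* number of internal page tables (memseg lists) */
#define RTE_MAX_MEMSEG_PER_LIST 8192     /* max hugepages per list */
#define RTE_MAX_MEM_MB_PER_LIST 32768    /* max memory per list, in MB */
#define RTE_MAX_MEMSEG_PER_TYPE 32768    /* max hugepages per (socket, page size) type */
#define RTE_MAX_MEM_MB_PER_TYPE 65536    /* max memory per type, in MB */
#define RTE_MAX_MEM_MB 524288            /* global cap (~512G by default) */
```

Lowering the last constant, `RTE_MAX_MEM_MB`, from its ~512G default to 8192 is what capped the virtual size near 8G in the experiment above.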
>> In my experience, testpmd mempools are typically around 400MB per
>> socket, so an 8G upper limit should not interfere with testpmd very
>> much. However, depending on what else is there and what kind of
>> allocations you may do, it may have other effects.
>>
>> Currently, the size of each internal per-NUMA-node, per-page-size page
>> table is dictated by three constraints: the maximum amount of memory per
>> page table (so that we don't attempt to reserve thousands of 1G pages),
>> the maximum number of pages per page table (so that we aren't left with
>> a few hundred megabytes' worth of 2M pages), and the total maximum
>> amount of memory (which places an upper limit on the sum of all page
>> tables' memory amounts).
>>
>> You have lowered the latter to 8G, which means that, depending on your
>> system configuration, you will have at most 2G to 4G per page table. It
>> is not possible to limit it further (for example, to skip reservation on
>> certain nodes or certain page sizes). Whether it will have an effect on
>> your actual workload will depend on your use case.
>>
>> > Regards,
>> > Siddarth
>> >
>> > On Tue, Feb 4, 2020 at 3:53 PM Burakov, Anatoly wrote:
>> >
>> > On 30-Jan-20 8:51 AM, David Marchand wrote:
>> > > On Thu, Jan 30, 2020 at 8:48 AM siddarth rai wrote:
>> > >> I have been using DPDK 19.08 and I notice the process VSZ is huge.
>> > >>
>> > >> I tried running the test PMD. It takes 64G VSZ, and if I use the
>> > >> '--in-memory' option it takes up to 188G.
>> > >>
>> > >> Is there any way to disable allocation of such huge VSZ in DPDK?
>> > >
>> > > *Disclaimer* I don't know the arcana of the mem subsystem.
>> > >
>> > > I suppose this is due to the memory allocator in dpdk that reserves
>> > > unused virtual space (for memory hotplug + multiprocess).
>> >
>> > Yes, that's correct. In order to guarantee that memory reservation
>> > succeeds at all times, we need to reserve all possible memory in
>> > advance.
>> > Otherwise, we may end up in a situation where the primary process
>> > has allocated a page, but the secondary can't map it because the
>> > address space is already occupied by something else.
>> >
>> > > If this is the case, maybe we could do something to enhance the
>> > > situation for applications that don't care about multiprocess,
>> > > like informing dpdk that the application won't use multiprocess
>> > > and skipping those reservations.
>> >
>> > You're welcome to try this, but I assure you, avoiding these
>> > reservations is a lot of work, because you'd be adding yet another
>> > path to an already overly complex allocator :)
>> >
>> > > Or another idea would be to limit those reservations to what is
>> > > passed via --socket-limit.
>> > >
>> > > Anatoly?
>> >
>> > I have a patchset in the works that does this and was planning to
>> > submit it to 19.08, but things got in the way and it's still sitting
>> > there collecting bit rot. This may be reason enough to resurrect it
>> > and finish it up :)
>> >
>> > > --
>> > > David Marchand
>> >
>> > --
>> > Thanks,
>> > Anatoly
>>
>> --
>> Thanks,
>> Anatoly