From mboxrd@z Thu Jan 1 00:00:00 1970
From: siddarth rai
Date: Tue, 4 Feb 2020 17:27:09 +0530
To: "Burakov, Anatoly"
Cc: David Marchand, dev
Subject: Re: [dpdk-dev] Big spike in DPDK VSZ
List-Id: DPDK patches and discussions
In-Reply-To: <66cc7e86-8a4e-b976-032b-52e2950fd243@intel.com>

Hi,

Thanks for the clarification.

Regards,
Siddarth

On Tue, Feb 4, 2020 at 4:43 PM Burakov, Anatoly wrote:

> On 04-Feb-20 10:55 AM, siddarth rai wrote:
> > Hi Anatoly,
> >
> > I don't need a secondary process.
>
> I understand that you don't, however that doesn't negate the fact that
> the code path expects that you do.
>
> > I tried out Julien's suggestion and set the parameter 'RTE_MAX_MEM_MB'
> > to 8192 (the original value was over 500K). This works as a cap: the
> > virtual size dropped to less than 8G, so this seems to be working
> > for me.
> >
> > I have a few queries/concerns, though. Is it safe to reduce
> > RTE_MAX_MEM_MB to such a low value? Can I reduce it further? What
> > would be the impact of doing so? Will it limit the maximum size of
> > the mbuf pools I create?
>
> It depends on your use case. The maximum size of a mempool is limited
> as is; the better question is where to place that limit. In my
> experience, testpmd mempools are typically around 400MB per socket, so
> an 8G upper limit should not interfere with testpmd very much. However,
> depending on what else is there and what kinds of allocations you do,
> it may have other effects.
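For readers following the thread, the effect of lowering RTE_MAX_MEM_MB can be sketched with some simple arithmetic. The even split of the budget across (NUMA node, page size) pairs is an assumption for illustration, not the EAL's actual code; only the constant name comes from DPDK's config:

```python
# Sketch of how DPDK's RTE_MAX_MEM_MB budget is spread across its
# internal page tables (one per NUMA node, per hugepage size). The
# even split below is a simplifying assumption for illustration.

RTE_MAX_MEM_MB = 8192          # total cap, lowered from the default 524288

def per_table_budget_mb(numa_nodes, page_sizes):
    # The total budget is divided across every (node, page size) pair.
    tables = numa_nodes * len(page_sizes)
    return RTE_MAX_MEM_MB // tables

# One socket, 2M and 1G hugepages: 4G per page table.
print(per_table_budget_mb(1, ["2M", "1G"]))   # 4096
# Two sockets, 2M and 1G hugepages: 2G per page table.
print(per_table_budget_mb(2, ["2M", "1G"]))   # 2048
```

This matches the "at most 2G to 4G per page table" figure given below for an 8G cap.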
>
> Currently, the size of each internal per-NUMA-node, per-page-size page
> table is dictated by three constraints: the maximum amount of memory
> per page table (so that we don't attempt to reserve thousands of 1G
> pages), the maximum number of pages per page table (so that we aren't
> left with a few hundred megabytes' worth of 2M pages), and the total
> maximum amount of memory (which places an upper limit on the sum of
> all page tables' memory amounts).
>
> You have lowered the last of these to 8G, which means that, depending
> on your system configuration, you will have at most 2G to 4G per page
> table. It is not possible to limit it further (for example, to skip
> reservation on certain nodes or certain page sizes). Whether it will
> have an effect on your actual workload depends on your use case.
>
> > Regards,
> > Siddarth
> >
> > On Tue, Feb 4, 2020 at 3:53 PM Burakov, Anatoly wrote:
> >
> >     On 30-Jan-20 8:51 AM, David Marchand wrote:
> >     > On Thu, Jan 30, 2020 at 8:48 AM siddarth rai wrote:
> >     >> I have been using DPDK 19.08 and I notice the process VSZ is
> >     >> huge. I tried running testpmd: it takes 64G of VSZ, and if I
> >     >> use the '--in-memory' option it takes up to 188G.
> >     >>
> >     >> Is there any way to disable the allocation of such a huge VSZ
> >     >> in DPDK?
> >     >
> >     > *Disclaimer* I don't know the arcana of the mem subsystem.
> >     >
> >     > I suppose this is due to the memory allocator in DPDK reserving
> >     > unused virtual space (for memory hotplug + multiprocess).
> >
> >     Yes, that's correct. In order to guarantee that memory
> >     reservation succeeds at all times, we need to reserve all
> >     possible memory in advance. Otherwise we may end up in a
> >     situation where the primary process has allocated a page, but the
> >     secondary can't map it because the address space is already
> >     occupied by something else.
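The three constraints described above can be sketched with the default values from DPDK's config/rte_config.h (as of the 19.x releases); the combining logic here is a simplified illustration of the EAL's sizing, not the actual code:

```python
# Default sizing constants from DPDK's config/rte_config.h (19.x era).
RTE_MAX_MEM_MB_PER_TYPE = 65536    # max memory per page table (64G)
RTE_MAX_MEMSEG_PER_TYPE = 32768    # max number of pages per page table
RTE_MAX_MEM_MB = 524288            # global cap across all page tables

def reserved_mb_for_table(page_size_mb):
    # A single per-node, per-page-size table reserves the smaller of
    # its memory cap and its page-count cap times the page size.
    return min(RTE_MAX_MEM_MB_PER_TYPE,
               RTE_MAX_MEMSEG_PER_TYPE * page_size_mb)

# 2M hugepages: 32768 pages x 2M = 64G, so both caps coincide.
print(reserved_mb_for_table(2))      # 65536
# 1G hugepages: the memory cap binds long before the page-count cap.
print(reserved_mb_for_table(1024))   # 65536
```

With the defaults, a one-node system with a single hugepage size reserves roughly 64G of virtual address space, which lines up with the 64G testpmd VSZ reported earlier in the thread; the original "over 500K" value Siddarth mentions is the 524288 MB global cap.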
> >
> >     > If this is the case, maybe we could do something to improve
> >     > the situation for applications that don't care about
> >     > multiprocess, like informing DPDK that the application won't
> >     > use multiprocess and skipping those reservations.
> >
> >     You're welcome to try this, but I assure you, avoiding these
> >     reservations is a lot of work, because you'd be adding yet
> >     another path to an already overly complex allocator :)
> >
> >     > Or another idea would be to limit those reservations to what
> >     > is passed via --socket-limit.
> >     >
> >     > Anatoly?
> >
> >     I have a patchset in the works that does this and was planning
> >     to submit it for 19.08, but things got in the way and it's still
> >     sitting there collecting bit rot. This may be reason enough to
> >     resurrect it and finish it up :)
> >
> >     > --
> >     > David Marchand
> >
> >     --
> >     Thanks,
> >     Anatoly
>
> --
> Thanks,
> Anatoly
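The up-front reservation discussed in the thread is, conceptually, the ordinary trick of mmap'ing address space with PROT_NONE: VSZ grows, but no physical memory is committed. A minimal Linux-only sketch via ctypes (flag values are hard-coded for Linux x86-64 and this is an illustration, not DPDK's actual EAL code):

```python
import ctypes
import ctypes.util

# Reserve 1 GiB of virtual address space without backing it with
# physical memory. VSZ grows by ~1 GiB; RSS does not. DPDK's EAL does
# conceptually the same thing, at a much larger scale, so a secondary
# process can later map hugepages at the same virtual addresses.
libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)
libc.mmap.restype = ctypes.c_void_p
libc.mmap.argtypes = [ctypes.c_void_p, ctypes.c_size_t, ctypes.c_int,
                      ctypes.c_int, ctypes.c_int, ctypes.c_long]

PROT_NONE = 0x0        # no access: a pure address-space reservation
MAP_PRIVATE = 0x02     # Linux flag values; they differ on other OSes
MAP_ANONYMOUS = 0x20

SIZE = 1 << 30         # 1 GiB of address space, zero pages committed
addr = libc.mmap(None, SIZE, PROT_NONE,
                 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0)
MAP_FAILED = (1 << 64) - 1   # (void *)-1 on a 64-bit system
assert addr is not None and addr != MAP_FAILED
print(f"reserved 1 GiB of VA at 0x{addr:x}")
```

This also explains why the reservation cannot simply be skipped for single-process users without a new code path: the guarantee that a secondary can map at the same address only holds if the region was claimed before anything else could occupy it.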