From mboxrd@z Thu Jan  1 00:00:00 1970
From: Venumadhav Josyula
Date: Thu, 14 Nov 2019 15:23:03 +0530
To: "Burakov, Anatoly"
Cc: Bruce Richardson, users@dpdk.org, dev@dpdk.org, Venumadhav Josyula
Subject: Re: [dpdk-dev] time taken for allocation of mempool.

Hi Anatoly,

> I would also suggest using --limit-mem if you desire to limit the
> maximum amount of memory DPDK will be able to allocate.

We are already using that.
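For reference, this is roughly how we pass those flags at init. A minimal
sketch only -- the flag values are illustrative rather than our production
numbers, and it assumes the 18.11-era rte_eal_init() interface:

    #include <rte_eal.h>

    int init_eal(void)
    {
            /* Placeholder argv; a real app would forward main()'s
             * argc/argv and let EAL consume its options first. */
            char *eal_argv[] = {
                    "app",                 /* program name placeholder */
                    "--iova-mode=va",      /* force IOVA as VA (vfio + IOMMU) */
                    "--limit-mem", "2048", /* cap allocations, per the above */
            };
            int eal_argc = sizeof(eal_argv) / sizeof(eal_argv[0]);

            /* Returns < 0 if EAL initialization fails. */
            return rte_eal_init(eal_argc, eal_argv);
    }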
Thanks and regards,
Venu

On Thu, 14 Nov 2019 at 15:19, Burakov, Anatoly wrote:

> On 14-Nov-19 8:12 AM, Venumadhav Josyula wrote:
> > Hi Olivier, Bruce,
> >
> > * We were using the --socket-mem EAL flag.
> > * We wanted to avoid going back to legacy mode.
> > * We also wanted to avoid 1G huge-pages.
> >
> > Thanks for your inputs.
> >
> > Hi Anatoly,
> >
> > We were using vfio with iommu, but by default it is iova-mode=pa. After
> > changing to iova-mode=va via EAL, the allocation time(s) for our
> > mempools came down drastically, from ~4.4 sec to 0.165254 sec.
> >
> > Thanks and regards,
> > Venu
>
> That's great to hear.
>
> As a final note, --socket-mem is no longer necessary, because 18.11 will
> allocate memory as needed. It is, however, still advisable to use it if
> you see yourself ending up in a situation where the runtime allocation
> could conceivably fail (such as when you have other applications running
> on your system and DPDK has to compete for hugepage memory).
>
> I would also suggest using --limit-mem if you desire to limit the
> maximum amount of memory DPDK will be able to allocate. This will make
> DPDK behave similarly to older releases, in that it will not attempt to
> allocate more memory than you allow it.
>
> > On Wed, 13 Nov 2019 at 22:56, Burakov, Anatoly wrote:
> >
> >     On 13-Nov-19 9:19 AM, Bruce Richardson wrote:
> >     > On Wed, Nov 13, 2019 at 10:37:57AM +0530, Venumadhav Josyula wrote:
> >     >> Hi,
> >     >> We are using 'rte_mempool_create' for allocation of flow memory.
> >     >> This has been there for a while. We just migrated to dpdk-18.11
> >     >> from dpdk-17.05. Now here is the problem statement.
> >     >>
> >     >> Problem statement:
> >     >> In the new dpdk (18.11), 'rte_mempool_create' takes approximately
> >     >> ~4.4 sec for allocation, compared to the older dpdk (17.05). We
> >     >> have some 8-9 mempools for our entire product. We do upfront
> >     >> allocation for all of them (i.e. when the dpdk application is
> >     >> coming up). Our application is a run-to-completion model.
> >     >>
> >     >> Questions:
> >     >> i) Is that acceptable / has anybody seen such a thing?
> >     >> ii) What has changed between the two dpdk versions (18.11 vs
> >     >> 17.05) from a memory perspective?
> >     >>
> >     >> Any pointers are welcome.
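> >     >>
> >     >> For reference, the creation call looks roughly like this (a
> >     >> sketch with illustrative sizes, not our exact production values):
> >     >>
> >     >>     #include <rte_lcore.h>
> >     >>     #include <rte_mempool.h>
> >     >>
> >     >>     static struct rte_mempool *create_flow_pool(void)
> >     >>     {
> >     >>             return rte_mempool_create(
> >     >>                     "flow_pool",     /* pool name */
> >     >>                     1 << 20,         /* element count (illustrative) */
> >     >>                     256,             /* element size, bytes (illustrative) */
> >     >>                     256,             /* per-lcore cache size */
> >     >>                     0,               /* private data size */
> >     >>                     NULL, NULL,      /* pool constructor + argument */
> >     >>                     NULL, NULL,      /* per-object init + argument */
> >     >>                     rte_socket_id(), /* NUMA socket to allocate on */
> >     >>                     0);              /* flags */
> >     >>     }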
> >     >
> >     > Hi,
> >     >
> >     > From 17.05 to 18.11 there was a change in the default memory model
> >     > for DPDK. In 17.05 all DPDK memory was allocated statically
> >     > upfront, and that was then used for the memory pools. With 18.11,
> >     > no large blocks of memory are allocated at init time; instead, the
> >     > memory is requested from the kernel as the app needs it. This makes
> >     > the initial startup of an app faster, but the allocation of new
> >     > objects like mempools slower, and it could be this you are seeing.
> >     >
> >     > Some things to try:
> >     > 1. Use the "--socket-mem" EAL flag to do an upfront allocation of
> >     > memory for use by your memory pools and see if it improves things.
> >     > 2. Try using the "--legacy-mem" flag to revert to the old memory
> >     > model.
> >     >
> >     > Regards,
> >     > /Bruce
> >
> >     I would also add to this the fact that the mempool will, by default,
> >     attempt to allocate IOVA-contiguous memory, with a fallback to
> >     non-IOVA contiguous memory whenever getting IOVA-contiguous memory
> >     isn't possible.
> >
> >     If you are running in IOVA as PA mode (as would be the case if you
> >     are using the igb_uio kernel driver), then, since it is now
> >     impossible to preallocate large PA-contiguous chunks in advance,
> >     what will likely happen is that the mempool will try to allocate
> >     IOVA-contiguous memory, fail, and retry with non-IOVA contiguous
> >     memory (essentially allocating memory twice). For large mempools (or
> >     a large number of mempools) that can take a bit of time.
> >
> >     The obvious workaround is using VFIO and IOVA as VA mode. This
> >     allows the allocator to get IOVA-contiguous memory at the outset,
> >     and allocation will complete faster.
> >
> >     The other two alternatives, already suggested in this thread by
> >     Bruce and Olivier, are:
> >
> >     1) use bigger page sizes (such as 1G)
> >     2) use legacy mode (and lose out on all of the benefits provided by
> >     the new memory model)
> >
> >     The recommended solution is to use VFIO/IOMMU and IOVA as VA mode.
> >
> >     --
> >     Thanks,
> >     Anatoly
>
> --
> Thanks,
> Anatoly
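
P.S. For anyone else hitting this: a quick way to confirm which IOVA mode
EAL actually picked at runtime is rte_eal_iova_mode(). A small sketch,
assuming the 18.11 API; call it after rte_eal_init() succeeds:

    #include <stdio.h>
    #include <rte_eal.h>

    static void report_iova_mode(void)
    {
            /* RTE_IOVA_VA: mempools can get IOVA-contiguous memory on
             * the first attempt. Otherwise, the allocate-and-retry
             * fallback described above may kick in. */
            if (rte_eal_iova_mode() == RTE_IOVA_VA)
                    printf("EAL is running in IOVA-as-VA mode\n");
            else
                    printf("EAL is not in IOVA-as-VA mode\n");
    }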