From: Matt Laswell <laswell@infiniteio.com>
To: Matthew Hall
Cc: "dev@dpdk.org"
Date: Tue, 9 Dec 2014 16:05:16 -0600
Subject: Re: [dpdk-dev] A question about hugepage initialization time
In-Reply-To: <20141209190649.GA6886@mhcomputing.net>

Hey Everybody,

Thanks for the feedback. Yeah, we're pretty sure that the amount of memory
we work with is atypical, and we're hitting something that isn't an issue
for most DPDK users.

To clarify: yes, we're using 1GB hugepages, and we set them up via
hugepagesz= and hugepages= on our kernel's GRUB command line. We find that
with four 1GB hugepages, EAL memory init takes a couple of seconds, which
is no big deal. With 128 1GB pages, though, memory init can take several
minutes. The concern is that we will very likely use even more memory in
the future. Right now the long boot time is mostly a nuisance; nonlinear
growth in memory init time could turn it into a real problem.

We've had to disable transparent hugepages due to latency issues with
in-memory databases, so that suggestion unfortunately isn't an option for
us. I'll have to look at the possibility of alternative memset
implementations. Perhaps some profiler time is in my future.

Again, thanks to everybody for the useful information.

--
Matt Laswell
laswell@infiniteio.com
infinite io, inc.

On Tue, Dec 9, 2014 at 1:06 PM, Matthew Hall wrote:

> On Tue, Dec 09, 2014 at 10:33:59AM -0600, Matt Laswell wrote:
> > Our DPDK application deals with very large in-memory data structures,
> > and can potentially use tens or even hundreds of gigabytes of hugepage
> > memory.
>
> What you're doing is an unusual use case, and this is open source code
> that nobody may have tested and QA'ed at this scale yet.
>
> So my recommendation would be adding some rte_log statements to measure
> the various steps in the process to see what's going on.
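As a concrete sketch of that suggestion: a minimal timing helper built on
DPDK's rte_cycles and rte_log APIs might look like the following. The
helper itself is hypothetical, not existing DPDK code; only
rte_get_timer_cycles(), rte_get_timer_hz(), and RTE_LOG() are real APIs.

    #include <rte_cycles.h>
    #include <rte_log.h>

    /* Hypothetical helper: run one init step and log its duration.
     * Wrapping each phase of memory setup in something like this
     * shows quickly which phase dominates the init time. */
    static void
    time_init_step(const char *name, int (*step)(void))
    {
            uint64_t start = rte_get_timer_cycles();
            int ret = step();
            uint64_t cycles = rte_get_timer_cycles() - start;

            RTE_LOG(INFO, EAL, "%s returned %d after %.3f seconds\n",
                    name, ret, (double)cycles / rte_get_timer_hz());
    }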
> Also using the Linux Perf framework to do low-overhead sampling-based
> profiling, and making sure you've got everything compiled with debug
> symbols so you can see what's consuming the execution time.
>
> You might find that it makes sense to use some custom allocators like
> jemalloc alongside the DPDK allocators, including perhaps "transparent
> hugepage mode" in your process, and some larger page sizes to reduce the
> number of pages.
>
> You can also use these handy kernel options: hugepagesz= hugepages=N.
> This creates guaranteed-contiguous, known-good hugepages during boot,
> which initialize much more quickly and with fewer glitches in my
> experience.
>
> https://www.kernel.org/doc/Documentation/vm/hugetlbpage.txt
> https://www.kernel.org/doc/Documentation/vm/transhuge.txt
>
> There is no one-size-fits-all solution, but these are some possibilities.
>
> Good Luck,
> Matthew.
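To make those suggestions concrete, a few sketches follow; the names and
paths in them are examples, not things the thread specifies. First, a
typical perf sampling workflow, assuming the application binary is ./app
and was built with debug symbols (with DPDK's make-based build of that
era, e.g. EXTRA_CFLAGS="-g"):

    # record call-graph samples across startup, then browse the hotspots
    perf record -g -- ./app <EAL args>
    perf report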
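Second, trying jemalloc alongside the DPDK allocators needs no code
changes; the usual approach is LD_PRELOAD (the library path varies by
distro and is only an example here):

    # route the process's malloc/free through jemalloc for comparison
    LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libjemalloc.so.2 ./app <EAL args>

Note that this only redirects libc malloc/free; allocations made from
DPDK's own hugepage-backed rte_malloc pools are unaffected.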
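Finally, the boot-time reservation Matthew describes goes on the kernel
command line (on GRUB2 systems, in GRUB_CMDLINE_LINUX in
/etc/default/grub, then regenerate the grub config; the page count below
is just an example):

    default_hugepagesz=1G hugepagesz=1G hugepages=128

    # after reboot, verify the reservation
    grep -i huge /proc/meminfo
    cat /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages

Transparent hugepages are toggled independently of this, via the standard
sysfs knob:

    echo never > /sys/kernel/mm/transparent_hugepage/enabled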