Subject: Re: [dpdk-dev] mmap fails with more than 40000 hugepages
From: Jay Rolette
To: "Damjan Marion (damarion)"
Cc: "dev@dpdk.org"
Date: Thu, 5 Feb 2015 07:22:51 -0600

On Thu, Feb 5, 2015 at 6:00 AM, Damjan Marion (damarion) wrote:

> Hi,
>
> I have a system with 2 NUMA nodes and 256G of RAM total. I noticed that
> DPDK crashes in rte_eal_init() when the number of available hugepages
> is around 40000 or above. Everything works fine with lower values
> (e.g. 30000).
>
> I also tried allocating 40000 hugepages on node0 and 0 on node1; the
> same crash happens.
>
> Any idea what might be causing this?

Any reason you can't switch to using 1GB hugepages? You'll get better
performance and your init time will be shorter. The systems we run on are
similar (256GB, 2 NUMA nodes) and that works fine for us.

Not directly related, but if you have to stick with 2MB hugepages, you
might want to take a look at a patch I submitted that fixes the O(n^2)
algorithm used in initializing hugepages.

Jay
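P.S. If it helps with debugging, here is a minimal standalone sketch (my
own code, not DPDK's; the hugetlbfs mount at /mnt/huge and the file naming
are assumptions) that maps 2MB hugepages one file at a time, roughly the
way the EAL does during init, and reports where mmap() starts failing:

#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define HUGEPAGE_SZ (2UL * 1024 * 1024)

int main(int argc, char **argv)
{
    unsigned long i, n = argc > 1 ? strtoul(argv[1], NULL, 0) : 40000;

    for (i = 0; i < n; i++) {
        char path[64];
        int fd;
        void *va;

        snprintf(path, sizeof(path), "/mnt/huge/page_%lu", i);
        fd = open(path, O_CREAT | O_RDWR, 0600);
        if (fd < 0) {
            fprintf(stderr, "open(%s): %s\n", path, strerror(errno));
            return 1;
        }

        /* Each 2MB page gets its own file and its own mapping, so a
         * large pool means tens of thousands of separate VMAs. */
        va = mmap(NULL, HUGEPAGE_SZ, PROT_READ | PROT_WRITE,
                  MAP_SHARED, fd, 0);
        close(fd);
        if (va == MAP_FAILED) {
            fprintf(stderr, "mmap failed at page %lu: %s\n",
                    i, strerror(errno));
            return 1;
        }
        *(volatile char *)va = 0; /* touch to force allocation */
    }

    printf("mapped %lu hugepages\n", n);
    return 0;
}

And to give a rough idea of the sort fix I mentioned above (struct layout
and function names here are illustrative, not DPDK's actual internals):
the change is to order the per-page table by physical address with the
standard library's qsort() instead of a quadratic pass. With n = 40000
pages, an O(n^2) sort is on the order of 1.6 billion comparisons, while
qsort() needs roughly n*log2(n), i.e. around 600 thousand:

#include <stdint.h>
#include <stdlib.h>

/* Illustrative stand-in for a per-hugepage bookkeeping entry; the real
 * structure in the EAL has more fields. */
struct hugepage_entry {
    void     *virt_addr;  /* where the page is mapped */
    uint64_t  phys_addr;  /* physical address backing it */
};

/* Comparator for qsort(): order entries by physical address. Comparing
 * rather than subtracting avoids overflow on 64-bit addresses. */
static int
cmp_phys_addr(const void *a, const void *b)
{
    const struct hugepage_entry *pa = a;
    const struct hugepage_entry *pb = b;

    if (pa->phys_addr < pb->phys_addr)
        return -1;
    if (pa->phys_addr > pb->phys_addr)
        return 1;
    return 0;
}

static void
sort_hugepages(struct hugepage_entry *table, size_t n)
{
    qsort(table, n, sizeof(table[0]), cmp_phys_addr);
}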