From: "Damjan Marion (damarion)"
To: "De Lara Guarch, Pablo"
Cc: "dev@dpdk.org"
Subject: Re: [dpdk-dev] mmap fails with more than 40000 hugepages
Date: Fri, 6 Feb 2015 10:31:09 +0000
Message-ID: <7C277056-2D98-4266-BD62-8C68F1428152@cisco.com>
References: <736BD68D-C5DF-4883-A720-DAD8A2A866BE@cisco.com> <54D4211C.3050703@huawei.com>

> On 06 Feb 2015, at 11:26, De Lara Guarch, Pablo wrote:
>
>> -----Original Message-----
>> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Linhaifeng
>> Sent: Friday, February 06, 2015 2:04 AM
>> To: Damjan Marion (damarion); dev@dpdk.org
>> Subject: Re: [dpdk-dev] mmap fails with more than 40000 hugepages
>>
>> On 2015/2/5 20:00, Damjan Marion (damarion) wrote:
>>> Hi,
>>>
>>> I have a system with 2 NUMA nodes and 256G RAM in total. I noticed that
>>> DPDK crashes in rte_eal_init() when the number of available hugepages
>>> is around 40000 or above. Everything works fine with lower values
>>> (e.g. 30000).
>>>
>>> I also tried allocating 40000 hugepages on node0 and 0 on node1; the
>>> same crash happens.
>>>
>>> Any idea what might be causing this?
>>>
>>> Thanks,
>>>
>>> Damjan
>>>
>>
>> cat /proc/sys/vm/max_map_count
>
> I just checked on my board, and with 40k hugepages you need that value to
> be more than double that (a value of 81k should work).
>
> Let us know if that fixes the problem!

Yes, it works now. Thanks!
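For reference, the check and the fix discussed above come down to a couple of
sysctl operations. This is only a rough sketch: the 81920 figure is just the
"more than double 40000 hugepages" example from Pablo's reply, so adjust it to
your own hugepage count.

    # how many hugepages are configured (each one needs its own mapping)
    grep HugePages_Total /proc/meminfo

    # current per-process limit on memory mappings
    cat /proc/sys/vm/max_map_count

    # raise the limit on the running system (needs root);
    # it should be more than twice the number of hugepages DPDK will mmap
    sysctl -w vm.max_map_count=81920

    # make the setting persistent across reboots
    echo "vm.max_map_count = 81920" >> /etc/sysctl.conf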