From: James Huang
Date: Tue, 23 Feb 2021 11:22:28 -0800
To: users@dpdk.org
Subject: Re: [dpdk-users] DPDK program huge core file size
List-Id: DPDK usage discussions

UPDATE: the 'kill -6' command does not dump the hugepage memory zone into the core file. Is there a way to keep the hugepage memory zone out of the core file when running the gcore command?

On Fri, Feb 19, 2021 at 11:18 AM James Huang wrote:

> On CentOS 7, we observed that a program based on DPDK 19.11 creates a
> huge core file, i.e. 100+ GB, far larger than the expected <4 GB, even
> though the system has only 16 GB of memory installed and reserves 1 GB
> of hugepages at boot time. This happens whether the core file is created
> by a program crash (segfault) or with the gcore tool.
>
> On CentOS 6, with the same program based on DPDK 17.05, the core file is
> the expected size.
>
> On CentOS 7, we tried different coredump_filter bit combinations and
> found that only clearing bit 0 avoids the huge core size. However, the
> small core file (200 MB) generated with bit 0 cleared is useless for
> debugging purposes, i.e. the gdb bt command produces no output.
>
> Is there a way to avoid dumping the hugepage memory while keeping the
> other memory in the core file?
>
> The following is a comparison of the program's pmap output.
> On CentOS 6, the hugepage resides in the process's user space:
> ...
> 00007f4e80000000  1048576K rw-s- /mnt/huge_1GB/rtemap_0
> 00007f4ec0000000     2048K rw-s- /sys/devices/pci0000:00/0000:00:02.0/0000:04:00.0/resource0
> 00007f4ec0200000       16K rw-s- /sys/devices/pci0000:00/0000:00:02.0/0000:04:00.0/resource4
> 00007f4ec0204000     2048K rw-s- /sys/devices/pci0000:00/0000:00:02.0/0000:04:00.1/resource0
> 00007f4ec0404000       16K rw-s- /sys/devices/pci0000:00/0000:00:02.0/0000:04:00.1/resource4
> ...
>
> On CentOS 7, the hugepage resides in the process's system space:
> ...
> 0000000100000000       20K rw-s- config
> 0000000100005000      184K rw-s- fbarray_memzone
> 0000000100033000        4K rw-s- fbarray_memseg-1048576k-0-0
> 0000000140000000  1048576K rw-s- rtemap_0
> 0000000180000000 32505856K r---- [ anon ]
> 0000000940000000        4K rw-s- fbarray_memseg-1048576k-0-1
> 0000000980000000 33554432K r---- [ anon ]
> 0000001180000000        4K rw-s- fbarray_memseg-1048576k-0-2
> 00000011c0000000 33554432K r---- [ anon ]
> 00000019c0000000        4K rw-s- fbarray_memseg-1048576k-0-3
> 0000001a00000000 33554432K r---- [ anon ]
> 0000002200000000     1024K rw-s- resource0
> 0000002200100000       16K rw-s- resource3
> 0000002200104000     1024K rw-s- resource0
> 0000002200204000       16K rw-s- resource3
> ...
>
> Thanks,
> -James
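[Editor's note] For reference, the per-process dump policy lives in /proc/<pid>/coredump_filter and its bits are documented in core(5): bit 0 is anonymous private memory, bit 5 is private hugetlb, bit 6 is shared hugetlb. Note that in the CentOS 7 pmap above the largest regions are `[ anon ]` reservations, not hugetlb mappings, which would explain why only bit 0 changed the core size. A minimal sketch of inspecting and adjusting the filter (the 0x13 mask, i.e. the usual 0x33 default with the hugetlb bits cleared, is an illustrative choice, not a recommendation from the thread):

```shell
# Inspect this shell's core dump filter; the value is a hex bitmask,
# typically 00000033 by default (see core(5) for bit meanings).
cat /proc/self/coredump_filter
# Bit 0: anon private   Bit 1: anon shared    Bit 2: file private
# Bit 3: file shared    Bit 4: ELF headers    Bit 5: hugetlb private
# Bit 6: hugetlb shared
#
# Keep anonymous memory (so gdb backtraces still work) but clear the
# hugetlb bits 5 and 6; children inherit the filter across fork/exec,
# so setting it in a launcher shell covers the DPDK process too.
echo 0x13 > /proc/self/coredump_filter
cat /proc/self/coredump_filter
```

One caveat, stated with some uncertainty: gdb's gcore only started honoring coredump_filter with the "set use-coredump-filter" knob in relatively recent gdb releases, and the gdb shipped with CentOS 7 may predate it, which would match the observation that gcore dumps everything regardless.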
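[Editor's note] The multi-gigabyte `r---- [ anon ]` regions in the CentOS 7 pmap look like the virtual address space that DPDK 18.05+ reserves up front for dynamic memory; if I recall correctly, later DPDK releases exclude such reservations from core files with madvise(MADV_DONTDUMP), which 19.11 may lack. As a hypothetical sketch (not DPDK's actual code), an application can apply the same advice to its own large reservations; `reserve_nodump` is an illustrative helper name:

```c
#include <stddef.h>
#include <stdio.h>
#include <sys/mman.h>

/* Reserve an anonymous address range and mark it MADV_DONTDUMP so the
 * kernel skips it when writing a core file. PROT_NONE mimics a bare
 * virtual-address reservation. Returns the mapping, or NULL on failure. */
static void *reserve_nodump(size_t len)
{
    void *va = mmap(NULL, len, PROT_NONE,
                    MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (va == MAP_FAILED)
        return NULL;
    if (madvise(va, len, MADV_DONTDUMP) != 0) {
        munmap(va, len);
        return NULL;
    }
    return va;
}

int main(void)
{
    /* 1 GB, the size of one rtemap_N segment in the pmap above. */
    void *va = reserve_nodump((size_t)1 << 30);
    if (va == NULL) {
        perror("reserve_nodump");
        return 1;
    }
    puts("reserved 1 GB and marked it MADV_DONTDUMP");
    munmap(va, (size_t)1 << 30);
    return 0;
}
```

MADV_DONTDUMP (Linux 3.4+, so available on CentOS 7's 3.10 kernel) overrides coredump_filter for the advised range, which makes it a per-region alternative when the filter bits are too coarse.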