From: Li Feng
Date: Wed, 24 Feb 2021 11:59:52 +0800
To: James Huang
Cc: users@dpdk.org
Subject: Re: [dpdk-users] DPDK program huge core file size

I think you should update your DPDK to the latest version; I fixed this
issue a few months ago:

d72e4042c - mem: exclude unused memory from core dump

Thanks,
Feng Li
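For reference, below is a minimal standalone sketch of the MADV_DONTDUMP
mechanism that a fix like this can rely on (illustrative code, not the DPDK
patch itself): address space that is reserved but not yet backed by memory
in use is marked MADV_DONTDUMP so the kernel leaves it out of core dumps,
and marked MADV_DODUMP again once it is actually put to use.

    /* dontdump.c - exclude a reserved-but-unused mapping from core dumps.
     * Build with: gcc -Wall -o dontdump dontdump.c
     */
    #define _DEFAULT_SOURCE
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/mman.h>

    int main(void)
    {
        size_t len = 1UL << 30;      /* reserve 1 GB of virtual address space */
        void *va = mmap(NULL, len, PROT_NONE,
                        MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (va == MAP_FAILED) {
            perror("mmap");
            return EXIT_FAILURE;
        }

        /* Ask the kernel not to include this unused region in core dumps. */
        if (madvise(va, len, MADV_DONTDUMP) != 0)
            perror("madvise(MADV_DONTDUMP)");

        /* When (part of) the region is later backed by real memory and
         * used, dumping can be re-enabled for that range. */
        if (madvise(va, len, MADV_DODUMP) != 0)
            perror("madvise(MADV_DODUMP)");

        munmap(va, len);
        return EXIT_SUCCESS;
    }

This matches the large read-only "[ anon ]" reservations visible in the
CentOS 7 pmap output quoted below, which is likely where most of the
100+ GB core file comes from.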
On Wed, Feb 24, 2021 at 3:22 AM, James Huang wrote:
>
> UPDATE: the 'kill -6' command does not dump the hugepage memory zone into
> the core file.
>
> Is there a way to skip dumping the hugepage memory zone into the core file
> when running the gcore command?
>
> On Fri, Feb 19, 2021 at 11:18 AM James Huang wrote:
>
> > On CentOS 7, we observed that the program (based on DPDK 19.11) creates
> > a huge core file, i.e. 100+ GB, far larger than the expected <4 GB, even
> > though the system has only 16 GB of memory installed and allocates 1 GB
> > hugepages at boot time. This happens whether the core file is created by
> > a program crash (segfault) or by running the gcore tool.
> >
> > On CentOS 6, with the program based on DPDK 17.05, the core file is the
> > expected size.
> >
> > On CentOS 7, we tried various coredump_filter bit combinations for the
> > process and found that only clearing bit 0 avoids the huge core size.
> > However, with bit 0 cleared the core file is small (200 MB) but useless
> > for debugging, i.e. the gdb bt command produces no output.
> >
> > Is there a way to avoid dumping the hugepage memory while keeping the
> > other memory in the core file?
> >
> > The following is a comparison of the program's pmap output.
> >
> > On CentOS 6, the hugepage resides in the process user address space:
> > ...
> > 00007f4e80000000 1048576K rw-s- /mnt/huge_1GB/rtemap_0
> > 00007f4ec0000000    2048K rw-s- /sys/devices/pci0000:00/0000:00:02.0/0000:04:00.0/resource0
> > 00007f4ec0200000      16K rw-s- /sys/devices/pci0000:00/0000:00:02.0/0000:04:00.0/resource4
> > 00007f4ec0204000    2048K rw-s- /sys/devices/pci0000:00/0000:00:02.0/0000:04:00.1/resource0
> > 00007f4ec0404000      16K rw-s- /sys/devices/pci0000:00/0000:00:02.0/0000:04:00.1/resource4
> > ...
> >
> > On CentOS 7, the hugepage resides at low addresses in the process
> > address space:
> > ...
> > 0000000100000000       20K rw-s- config
> > 0000000100005000      184K rw-s- fbarray_memzone
> > 0000000100033000        4K rw-s- fbarray_memseg-1048576k-0-0
> > 0000000140000000  1048576K rw-s- rtemap_0
> > 0000000180000000 32505856K r---- [ anon ]
> > 0000000940000000        4K rw-s- fbarray_memseg-1048576k-0-1
> > 0000000980000000 33554432K r---- [ anon ]
> > 0000001180000000        4K rw-s- fbarray_memseg-1048576k-0-2
> > 00000011c0000000 33554432K r---- [ anon ]
> > 00000019c0000000        4K rw-s- fbarray_memseg-1048576k-0-3
> > 0000001a00000000 33554432K r---- [ anon ]
> > 0000002200000000     1024K rw-s- resource0
> > 0000002200100000       16K rw-s- resource3
> > 0000002200104000     1024K rw-s- resource0
> > 0000002200204000       16K rw-s- resource3
> > ...
> >
> > Thanks,
> > -James
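For completeness, /proc/<pid>/coredump_filter (the knob mentioned above) can
be read and written programmatically as well as with echo; the bit meanings
are documented in the core(5) man page. A small sketch, assuming a kernel
that implements the hugetlb filter bits (5 and 6), which clears those bits
so hugetlbfs mappings such as rtemap_0 are skipped:

    /* filter.c - drop hugetlb mappings from this process's core dumps.
     * Build with: gcc -Wall -o filter filter.c
     */
    #include <stdio.h>

    int main(void)
    {
        unsigned int filter;
        FILE *f = fopen("/proc/self/coredump_filter", "r");

        if (f == NULL) {
            perror("open coredump_filter");
            return 1;
        }
        if (fscanf(f, "%x", &filter) != 1) {
            fprintf(stderr, "could not parse coredump_filter\n");
            return 1;
        }
        fclose(f);

        /* Bit 5: private hugetlb, bit 6: shared hugetlb (see core(5)). */
        filter &= ~((1u << 5) | (1u << 6));

        f = fopen("/proc/self/coredump_filter", "w");
        if (f == NULL || fprintf(f, "0x%x", filter) < 0) {
            perror("write coredump_filter");
            return 1;
        }
        fclose(f);

        printf("coredump_filter is now 0x%x\n", filter);
        return 0;
    }

Note that coredump_filter only selects by mapping type, so it cannot tell
DPDK's unused reserved anonymous regions apart from ordinary anonymous
memory such as the heap and stacks; that is consistent with the observation
above that only clearing bit 0 shrinks the core file, at the cost of making
it useless for debugging. A per-region approach such as the MADV_DONTDUMP
sketch earlier in this thread avoids that trade-off.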