DPDK patches and discussions
From: "humin (Q)" <humin29@huawei.com>
To: "dmarchan@redhat.com" <dmarchan@redhat.com>,
	"dev@dpdk.org" <dev@dpdk.org>
Cc: "Zhouchang (Forest)" <forest.zhouchang@huawei.com>,
	Xiedonghui <xiedonghui@huawei.com>,
	"liudongdong (C)" <liudongdong3@huawei.com>,
	lihuisong <lihuisong@huawei.com>,
	"Huwei (Xavier)" <huwei87@hisilicon.com>
Subject: [dpdk-dev] [RFC]app/testpmd: time-consuming question of mlockall function execution
Date: Mon, 24 Feb 2020 06:35:20 +0000	[thread overview]
Message-ID: <c70a37abc92c4ddf83d99c8b58da88b5@huawei.com> (raw)

We found that when the OS transparent hugepage setting is anything other than 'always', the mlockall() call in testpmd's main() takes more than 25 s to execute.

The results are the same on both x86 and ARM. Such a long stall at startup is unreasonable and effectively fatal for the application. The transparent hugepage setting can be checked with the following command:

[root@X6000-C23-1U dpdk]#cat /sys/kernel/mm/transparent_hugepage/enabled

always [madvise] never
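
For reference, the stalling call can be timed with a minimal standalone sketch (plain libc, not testpmd code; the 25 s is observed inside testpmd, where the EAL has already mapped large virtual address ranges, so a bare program like this may not reproduce the full delay, it only shows the call pattern and how to time it):

/*
 * Minimal standalone sketch: time the same mlockall(MCL_CURRENT | MCL_FUTURE)
 * call that testpmd makes at startup.
 * Build (file name is only illustrative): gcc -O2 -o lock_time lock_time.c
 */
#include <stdio.h>
#include <sys/mman.h>
#include <time.h>

int main(void)
{
	struct timespec start, end;

	clock_gettime(CLOCK_MONOTONIC, &start);
	if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0) {
		perror("mlockall");
		return 1;
	}
	clock_gettime(CLOCK_MONOTONIC, &end);

	printf("mlockall took %.3f s\n",
	       (end.tv_sec - start.tv_sec) +
	       (end.tv_nsec - start.tv_nsec) / 1e9);
	return 0;
}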



With transparent hugepage on ARM configured as 'madvise', 'never', or 'always', testpmd runs under strace as follows:

******************************* Transparent hugepage is configured as 'madvise' *******************************
[root@X6000-C23-1U dpdk]# strace -T -e trace=mlockall ./testpmd -l 1-4 -w 0000:7d:01.0 --iova-mode=va -- -i

EAL: Detected 96 lcore(s)

EAL: Detected 4 NUMA nodes

EAL: Multi-process socket /var/run/dpdk/rte/mp_socket

EAL: Selected IOVA mode 'VA'

EAL: No available hugepages reported in hugepages-2048kB

EAL: No available hugepages reported in hugepages-32768kB

EAL: No available hugepages reported in hugepages-64kB

EAL: Probing VFIO support...

EAL: VFIO support initialized

EAL: PCI device 0000:7d:01.0 on NUMA socket 0

EAL:   probe driver: 19e5:a22f net_hns3_vf

EAL:   Expecting 'PA' IOVA mode but current mode is 'VA', not initializing

EAL: Requested device 0000:7d:01.0 cannot be used

testpmd: No probed ethernet devices

Interactive-mode selected

mlockall(MCL_CURRENT|MCL_FUTURE)        = 0 <25.736362>

<---------------------- Hang for 25 seconds

testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=171456, size=2176, socket=0

testpmd: preferred mempool ops selected: ring_mp_mc
Done

testpmd>

testpmd> quit



Bye...

+++ exited with 0 +++



***************************** Transparent hugepage is configured as 'never' *********************************
[root@X6000-C23-1U dpdk]# strace -T -e trace=mlockall ./testpmd -l 1-4 -w 0000:7d:01.0 --iova-mode=va -- -i

EAL: Detected 96 lcore(s)

EAL: Detected 4 NUMA nodes

EAL: Multi-process socket /var/run/dpdk/rte/mp_socket

EAL: Selected IOVA mode 'VA'

EAL: No available hugepages reported in hugepages-2048kB

EAL: No available hugepages reported in hugepages-32768kB

EAL: No available hugepages reported in hugepages-64kB

EAL: Probing VFIO support...

EAL: VFIO support initialized

EAL: PCI device 0000:7d:01.0 on NUMA socket 0

EAL:   probe driver: 19e5:a22f net_hns3_vf

EAL:   Expecting 'PA' IOVA mode but current mode is 'VA', not initializing

EAL: Requested device 0000:7d:01.0 cannot be used

testpmd: No probed ethernet devices

Interactive-mode selected

mlockall(MCL_CURRENT|MCL_FUTURE)        = 0 <25.737757>

<---------------------- Hang for 25 seconds

testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=171456, size=2176, socket=0

testpmd: preferred mempool ops selected: ring_mp_mc
Done

testpmd> quit



Bye...

+++ exited with 0 +++



***************************** Transparent hugepage is configured as 'always' *********************************
[root@X6000-C23-1U dpdk]# strace -T -e trace=mlockall testpmd -l 1-4 -w 0000:7d:01.0 --iova-mode=va -- -i

strace: Can't stat 'testpmd': No such file or directory

[root@X6000-C23-1U dpdk]# strace -T -e trace=mlockall ./testpmd -l 1-4 -w 0000:7d:01.0 --iova-mode=va -- -i

EAL: Detected 96 lcore(s)

EAL: Detected 4 NUMA nodes

EAL: Multi-process socket /var/run/dpdk/rte/mp_socket

EAL: Selected IOVA mode 'VA'

EAL: No available hugepages reported in hugepages-2048kB

EAL: No available hugepages reported in hugepages-32768kB

EAL: No available hugepages reported in hugepages-64kB

EAL: Probing VFIO support...

EAL: VFIO support initialized

EAL: PCI device 0000:7d:01.0 on NUMA socket 0

EAL:   probe driver: 19e5:a22f net_hns3_vf

EAL:   Expecting 'PA' IOVA mode but current mode is 'VA', not initializing

EAL: Requested device 0000:7d:01.0 cannot be used

testpmd: No probed ethernet devices

Interactive-mode selected

mlockall(MCL_CURRENT|MCL_FUTURE)        = 0 <0.208571>

<---------------------- No Hang

testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=171456, size=2176, socket=0

testpmd: preferred mempool ops selected: ring_mp_mc
Done

testpmd> quit



Bye...

+++ exited with 0 +++

**********************************************************************************************************************



We have also seen some discussion of this issue on the following page:

https://bugzilla.redhat.com/show_bug.cgi?id=1786923



David Marchand has a related patch; see the following page:

https://github.com/david-marchand/dpdk/commit/f9e1b9fa101c9f4f16c0717401a55790aecc6484

but this patch doesn't seem to have been submitted to the community.
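
Judging from the strace output below, the change amounts to adding MCL_ONFAULT to the mlockall() flags. A minimal sketch of that call (our reading of the trace, not the actual patch) is:

/*
 * Sketch of our reading of the change (not the actual patch): the only
 * visible difference in the trace below is the added MCL_ONFAULT flag,
 * which defers locking each page until it is first faulted in instead of
 * touching every mapped page up front.  Requires Linux >= 4.4 and a libc
 * that defines MCL_ONFAULT.
 */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
	/* Same flags testpmd uses, plus MCL_ONFAULT. */
	if (mlockall(MCL_CURRENT | MCL_FUTURE | MCL_ONFAULT) != 0) {
		fprintf(stderr, "mlockall() failed: %s\n", strerror(errno));
		return 1;
	}
	printf("memory locked (lock on fault)\n");
	return 0;
}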

With this patch applied and transparent hugepage on ARM configured as 'madvise' or 'never', testpmd runs under strace as follows (note the added MCL_ONFAULT flag):

*******************************************************

[root@X6000-C23-1U app]# strace -T -e trace=mlockall ./testpmd -l 1-4 -w 0000:7d:01.0 --iova-mode=va -- -i

EAL: Detected 96 lcore(s)

EAL: Detected 4 NUMA nodes

EAL: Multi-process socket /var/run/dpdk/rte/mp_socket

EAL: Selected IOVA mode 'VA'

EAL: No available hugepages reported in hugepages-2048kB

EAL: No available hugepages reported in hugepages-32768kB

EAL: No available hugepages reported in hugepages-64kB

EAL: Probing VFIO support...

EAL: VFIO support initialized

EAL: PCI device 0000:7d:01.0 on NUMA socket 0

EAL:   probe driver: 19e5:a22f net_hns3_vf

EAL:   Expecting 'PA' IOVA mode but current mode is 'VA', not initializing

EAL: Requested device 0000:7d:01.0 cannot be used

testpmd: No probed ethernet devices

Interactive-mode selected

mlockall(MCL_CURRENT|MCL_FUTURE|MCL_ONFAULT) = 0 <1.955947>

<---------------------- Hang for less than 2 seconds

testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=171456, size=2176, socket=0

testpmd: preferred mempool ops selected: ring_mp_mc
Done

testpmd> quit



Bye...

+++ exited with 0 +++



We would like to know the current status of work on this issue in the DPDK community. Thanks.



Best Regards




Thread overview: 6+ messages
2020-02-24  6:35 humin (Q) [this message]
2020-02-24  8:48 ` David Marchand
2020-02-26  3:59   ` [dpdk-dev] Re: " humin (Q)
2020-03-06 17:49     ` [dpdk-dev] " David Marchand
2020-03-10 15:28       ` David Marchand
2020-03-13  8:25         ` [dpdk-dev] Re: " humin (Q)
