DPDK patches and discussions
* [dpdk-dev] [RFC]app/testpmd: time-consuming question of mlockall function execution
@ 2020-02-24  6:35 humin (Q)
  2020-02-24  8:48 ` David Marchand
From: humin (Q) @ 2020-02-24  6:35 UTC (permalink / raw)
  To: dmarchan, dev
  Cc: Zhouchang (Forest), Xiedonghui, liudongdong (C),
	lihuisong, Huwei (Xavier)

We found that when the OS transparent hugepage setting is anything other than 'always', the mlockall() call in testpmd's main() takes more than 25 seconds to execute.

The results are the same on both x86 and ARM. This delay is unreasonable and severely hurts startup time. The transparent hugepage setting of the OS can be checked with the following command:

[root@X6000-C23-1U dpdk]#cat /sys/kernel/mm/transparent_hugepage/enabled

always [madvise] never
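
For reference, the call that stalls is essentially the following (a minimal standalone sketch, not the exact testpmd code). Presumably mlockall() with MCL_CURRENT|MCL_FUTURE has to pre-fault and lock every mapping up front, and without transparent hugepages that means touching memory one 4 KB page at a time:

#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
	/* Lock all current and future mappings into RAM. With
	 * transparent hugepage set to 'madvise' or 'never', this is
	 * the call where testpmd stalls for ~25 seconds. */
	if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0) {
		perror("mlockall");
		return 1;
	}
	return 0;
}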



With transparent hugepage on ARM configured as 'madvise', 'never', and 'always' in turn, testpmd was run under strace as follows:

******************************* Transparent hugepage is configured as 'madvise' *******************************

[root@X6000-C23-1U dpdk]# strace -T -e trace=mlockall ./testpmd -l 1-4 -w 0000:7d:01.0 --iova-mode=va -- -i

EAL: Detected 96 lcore(s)

EAL: Detected 4 NUMA nodes

EAL: Multi-process socket /var/run/dpdk/rte/mp_socket

EAL: Selected IOVA mode 'VA'

EAL: No available hugepages reported in hugepages-2048kB

EAL: No available hugepages reported in hugepages-32768kB

EAL: No available hugepages reported in hugepages-64kB

EAL: Probing VFIO support...

EAL: VFIO support initialized

EAL: PCI device 0000:7d:01.0 on NUMA socket 0

EAL:   probe driver: 19e5:a22f net_hns3_vf

EAL:   Expecting 'PA' IOVA mode but current mode is 'VA', not initializing

EAL: Requested device 0000:7d:01.0 cannot be used

testpmd: No probed ethernet devices

Interactive-mode selected

mlockall(MCL_CURRENT|MCL_FUTURE)        = 0 <25.736362>

<---------------------- Hang for 25 seconds

testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=171456, size=2176, socket=0

testpmd: preferred mempool ops selected: ring_mp_mc

Done

testpmd>

testpmd> quit



Bye...

+++ exited with 0 +++



*****************************  Transparent hugepage is configured as 'never'  *********************************

[root@X6000-C23-1U dpdk]# strace -T -e trace=mlockall ./testpmd -l 1-4 -w 0000:7d:01.0 --iova-mode=va -- -i

EAL: Detected 96 lcore(s)

EAL: Detected 4 NUMA nodes

EAL: Multi-process socket /var/run/dpdk/rte/mp_socket

EAL: Selected IOVA mode 'VA'

EAL: No available hugepages reported in hugepages-2048kB

EAL: No available hugepages reported in hugepages-32768kB

EAL: No available hugepages reported in hugepages-64kB

EAL: Probing VFIO support...

EAL: VFIO support initialized

EAL: PCI device 0000:7d:01.0 on NUMA socket 0

EAL:   probe driver: 19e5:a22f net_hns3_vf

EAL:   Expecting 'PA' IOVA mode but current mode is 'VA', not initializing

EAL: Requested device 0000:7d:01.0 cannot be used

testpmd: No probed ethernet devices

Interactive-mode selected

mlockall(MCL_CURRENT|MCL_FUTURE)        = 0 <25.737757>

<---------------------- Hang for 25 seconds

testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=171456, size=2176, socket=0

testpmd: preferred mempool ops selected: ring_mp_mc

Done

testpmd> quit



Bye...

+++ exited with 0 +++



*****************************  Transparent hugepage is configured as 'always'  *********************************

[root@X6000-C23-1U dpdk]# strace -T -e trace=mlockall ./testpmd -l 1-4 -w 0000:7d:01.0 --iova-mode=va -- -i

EAL: Detected 96 lcore(s)

EAL: Detected 4 NUMA nodes

EAL: Multi-process socket /var/run/dpdk/rte/mp_socket

EAL: Selected IOVA mode 'VA'

EAL: No available hugepages reported in hugepages-2048kB

EAL: No available hugepages reported in hugepages-32768kB

EAL: No available hugepages reported in hugepages-64kB

EAL: Probing VFIO support...

EAL: VFIO support initialized

EAL: PCI device 0000:7d:01.0 on NUMA socket 0

EAL:   probe driver: 19e5:a22f net_hns3_vf

EAL:   Expecting 'PA' IOVA mode but current mode is 'VA', not initializing

EAL: Requested device 0000:7d:01.0 cannot be used

testpmd: No probed ethernet devices

Interactive-mode selected

mlockall(MCL_CURRENT|MCL_FUTURE)        = 0 <0.208571>

<---------------------- No Hang

testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=171456, size=2176, socket=0

testpmd: preferred mempool ops selected: ring_mp_mc

Done

testpmd> quit



Bye...

+++ exited with 0 +++

**********************************************************************************************************************



We have also seen some discussion of this issue on the following page:

https://bugzilla.redhat.com/show_bug.cgi?id=1786923



David Marchand has a related patch at the following page:

https://github.com/david-marchand/dpdk/commit/f9e1b9fa101c9f4f16c0717401a55790aecc6484

but this patch doesn't seem to have been submitted to the community.
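
Judging by the mlockall flags in the trace below, the change boils down to adding MCL_ONFAULT (available since Linux 4.4), which locks pages as they are faulted in instead of pre-faulting everything up front. A minimal sketch of that approach (our reading of the commit, not the exact patch) is:

#include <stdio.h>
#include <sys/mman.h>

#ifndef MCL_ONFAULT
#define MCL_ONFAULT 4	/* value on x86 and ARM; requires Linux >= 4.4 */
#endif

int main(void)
{
	/* Lock pages on fault rather than all at once; this is what
	 * cuts the stall from ~25s to under 2s in the trace below. */
	if (mlockall(MCL_CURRENT | MCL_FUTURE | MCL_ONFAULT) != 0) {
		/* Fall back for kernels that do not support MCL_ONFAULT. */
		if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0) {
			perror("mlockall");
			return 1;
		}
	}
	return 0;
}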

With this patch applied and transparent hugepage on ARM configured as 'madvise' or 'never', testpmd runs under strace as follows:

*******************************************************

[root@X6000-C23-1U app]# strace -T -e trace=mlockall ./testpmd -l 1-4 -w 0000:7d:01.0 --iova-mode=va -- -i

EAL: Detected 96 lcore(s)

EAL: Detected 4 NUMA nodes

EAL: Multi-process socket /var/run/dpdk/rte/mp_socket

EAL: Selected IOVA mode 'VA'

EAL: No available hugepages reported in hugepages-2048kB

EAL: No available hugepages reported in hugepages-32768kB

EAL: No available hugepages reported in hugepages-64kB

EAL: Probing VFIO support...

EAL: VFIO support initialized

EAL: PCI device 0000:7d:01.0 on NUMA socket 0

EAL:   probe driver: 19e5:a22f net_hns3_vf

EAL:   Expecting 'PA' IOVA mode but current mode is 'VA', not initializing

EAL: Requested device 0000:7d:01.0 cannot be used

testpmd: No probed ethernet devices

Interactive-mode selected

mlockall(MCL_CURRENT|MCL_FUTURE|MCL_ONFAULT) = 0 <1.955947>

<---------------------- Hang for less than 2 seconds

testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=171456, size=2176, socket=0

testpmd: preferred mempool ops selected: ring_mp_mc

Done

testpmd> quit



Bye...

+++ exited with 0 +++



We'd like to know the current status of this issue in the DPDK community. Thanks.



Best Regards



