DPDK patches and discussions
* [dpdk-dev] [RFC]app/testpmd: time-consuming question of mlockall function execution
@ 2020-02-24  6:35 humin (Q)
  2020-02-24  8:48 ` David Marchand
  0 siblings, 1 reply; 6+ messages in thread
From: humin (Q) @ 2020-02-24  6:35 UTC (permalink / raw)
  To: dmarchan, dev
  Cc: Zhouchang (Forest), Xiedonghui, liudongdong (C),
	lihuisong, Huwei (Xavier)

We found that when the OS transparent hugepage setting is anything other than 'always', the mlockall() call in testpmd's main() takes more than 25 seconds to execute.

The results are the same on both x86 and ARM. This is unreasonable and crippling. The current transparent hugepage setting can be checked with the following command.

[root@X6000-C23-1U dpdk]#cat /sys/kernel/mm/transparent_hugepage/enabled

always [madvise] never
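The active mode is the bracketed word in that output. For convenience it can also be read programmatically; the sketch below is ours (not part of DPDK), assumes the standard sysfs path, and falls back to 'unknown' when the kernel lacks THP support:

```python
from pathlib import Path

THP_ENABLED = "/sys/kernel/mm/transparent_hugepage/enabled"

def thp_mode(path=THP_ENABLED):
    """Return the active THP mode, e.g. 'madvise' from 'always [madvise] never'."""
    p = Path(path)
    if not p.exists():
        # Kernel built without CONFIG_TRANSPARENT_HUGEPAGE, or non-Linux.
        return "unknown"
    text = p.read_text()
    # The active mode is the word enclosed in square brackets.
    return text[text.index("[") + 1 : text.index("]")]
```

Switching modes requires root, e.g. `echo always > /sys/kernel/mm/transparent_hugepage/enabled`.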



With transparent hugepage on ARM configured as 'madvise', 'never', or 'always', we ran testpmd under strace as follows:

******************************* Transparent hugepage is configured as 'madvise'  *******************************
[root@X6000-C23-1U dpdk]# strace -T -e trace=mlockall ./testpmd -l 1-4 -w 0000:7d:01.0 --iova-mode=va -- -i

EAL: Detected 96 lcore(s)

EAL: Detected 4 NUMA nodes

EAL: Multi-process socket /var/run/dpdk/rte/mp_socket

EAL: Selected IOVA mode 'VA'

EAL: No available hugepages reported in hugepages-2048kB

EAL: No available hugepages reported in hugepages-32768kB

EAL: No available hugepages reported in hugepages-64kB

EAL: Probing VFIO support...

EAL: VFIO support initialized

EAL: PCI device 0000:7d:01.0 on NUMA socket 0

EAL:   probe driver: 19e5:a22f net_hns3_vf

EAL:   Expecting 'PA' IOVA mode but current mode is 'VA', not initializing

EAL: Requested device 0000:7d:01.0 cannot be used

testpmd: No probed ethernet devices

Interactive-mode selected

mlockall(MCL_CURRENT|MCL_FUTURE)        = 0 <25.736362>

<---------------------- Hang for 25 seconds

testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=171456, size=2176, socket=0

testpmd: preferred mempool ops selected: ring_mp_mc
Done

testpmd>

testpmd> quit



Bye...

+++ exited with 0 +++



*****************************  Transparent hugepage is configured as 'never'  *********************************
[root@X6000-C23-1U dpdk]# strace -T -e trace=mlockall ./testpmd -l 1-4 -w 0000:7d:01.0 --iova-mode=va -- -i

EAL: Detected 96 lcore(s)

EAL: Detected 4 NUMA nodes

EAL: Multi-process socket /var/run/dpdk/rte/mp_socket

EAL: Selected IOVA mode 'VA'

EAL: No available hugepages reported in hugepages-2048kB

EAL: No available hugepages reported in hugepages-32768kB

EAL: No available hugepages reported in hugepages-64kB

EAL: Probing VFIO support...

EAL: VFIO support initialized

EAL: PCI device 0000:7d:01.0 on NUMA socket 0

EAL:   probe driver: 19e5:a22f net_hns3_vf

EAL:   Expecting 'PA' IOVA mode but current mode is 'VA', not initializing

EAL: Requested device 0000:7d:01.0 cannot be used

testpmd: No probed ethernet devices

Interactive-mode selected

mlockall(MCL_CURRENT|MCL_FUTURE)        = 0 <25.737757>

<---------------------- Hang for 25 seconds

testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=171456, size=2176, socket=0

testpmd: preferred mempool ops selected: ring_mp_mc
Done

testpmd> quit



Bye...

+++ exited with 0 +++



*****************************  Transparent hugepage is configured as 'always'  *********************************
[root@X6000-C23-1U dpdk]# strace -T -e trace=mlockall testpmd -l 1-4 -w

0000:7d:01.0 --iova-mode=va -- -i

strace: Can't stat 'testpmd': No such file or directory
[root@X6000-C23-1U dpdk]# strace -T -e trace=mlockall ./testpmd -l 1-4 -w 0000:7d:01.0 --iova-mode=va -- -i

EAL: Detected 96 lcore(s)

EAL: Detected 4 NUMA nodes

EAL: Multi-process socket /var/run/dpdk/rte/mp_socket

EAL: Selected IOVA mode 'VA'

EAL: No available hugepages reported in hugepages-2048kB

EAL: No available hugepages reported in hugepages-32768kB

EAL: No available hugepages reported in hugepages-64kB

EAL: Probing VFIO support...

EAL: VFIO support initialized

EAL: PCI device 0000:7d:01.0 on NUMA socket 0

EAL:   probe driver: 19e5:a22f net_hns3_vf

EAL:   Expecting 'PA' IOVA mode but current mode is 'VA', not initializing

EAL: Requested device 0000:7d:01.0 cannot be used

testpmd: No probed ethernet devices

Interactive-mode selected

mlockall(MCL_CURRENT|MCL_FUTURE)        = 0 <0.208571>

<---------------------- No Hang

testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=171456, size=2176, socket=0

testpmd: preferred mempool ops selected: ring_mp_mc
Done

testpmd> quit



Bye...

+++ exited with 0 +++

**********************************************************************************************************************



We have also seen some discussion of this issue on the following page:

https://bugzilla.redhat.com/show_bug.cgi?id=1786923



David Marchand has a related patch, available at the following page:

https://github.com/david-marchand/dpdk/commit/f9e1b9fa101c9f4f16c0717401a55790aecc6484

but this patch doesn't seem to have been submitted to the community.
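For context, the difference shown in the traces below comes down to the flags passed to mlockall(2): MCL_ONFAULT (Linux >= 4.4) locks pages lazily as they are faulted in, instead of touching every mapped page up front. A minimal ctypes sketch of the two call shapes (our illustration, not David's patch; the flag values are the common Linux x86/arm64 ones from <sys/mman.h>):

```python
import ctypes
import ctypes.util

# Common Linux flag values (x86/arm64); verify against <sys/mman.h> on your arch.
MCL_CURRENT = 0x1
MCL_FUTURE = 0x2
MCL_ONFAULT = 0x4  # lock pages lazily, on first fault (Linux >= 4.4)

def try_mlockall(onfault=True):
    """Call mlockall(); return 0 on success, otherwise the errno value."""
    libc = ctypes.CDLL(ctypes.util.find_library("c") or "libc.so.6",
                       use_errno=True)
    flags = MCL_CURRENT | MCL_FUTURE | (MCL_ONFAULT if onfault else 0)
    if libc.mlockall(flags) != 0:
        # Typically EPERM/ENOMEM without CAP_IPC_LOCK, EINVAL on old kernels.
        return ctypes.get_errno()
    libc.munlockall()  # undo immediately; this is only a demo
    return 0
```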

With this patch applied and transparent hugepage on ARM configured as 'madvise' or 'never', testpmd runs under strace as follows:

*******************************************************

[root@X6000-C23-1U app]# strace -T -e trace=mlockall ./testpmd -l 1-4 -w

0000:7d:01.0 --iova-mode=va -- -i

EAL: Detected 96 lcore(s)

EAL: Detected 4 NUMA nodes

EAL: Multi-process socket /var/run/dpdk/rte/mp_socket

EAL: Selected IOVA mode 'VA'

EAL: No available hugepages reported in hugepages-2048kB

EAL: No available hugepages reported in hugepages-32768kB

EAL: No available hugepages reported in hugepages-64kB

EAL: Probing VFIO support...

EAL: VFIO support initialized

EAL: PCI device 0000:7d:01.0 on NUMA socket 0

EAL:   probe driver: 19e5:a22f net_hns3_vf

EAL:   Expecting 'PA' IOVA mode but current mode is 'VA', not initializing

EAL: Requested device 0000:7d:01.0 cannot be used

testpmd: No probed ethernet devices

Interactive-mode selected

mlockall(MCL_CURRENT|MCL_FUTURE|MCL_ONFAULT) = 0 <1.955947>

<---------------------- Hang for less than 2 seconds

testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=171456, size=2176, socket=0

testpmd: preferred mempool ops selected: ring_mp_mc
Done

testpmd> quit



Bye...

+++ exited with 0 +++



We'd like to know the current state of development on this issue in the DPDK community. Thanks.



Best Regards




^ permalink raw reply	[flat|nested] 6+ messages in thread

* Re: [dpdk-dev] [RFC]app/testpmd: time-consuming question of mlockall function execution
  2020-02-24  6:35 [dpdk-dev] [RFC]app/testpmd: time-consuming question of mlockall function execution humin (Q)
@ 2020-02-24  8:48 ` David Marchand
  2020-02-26  3:59   ` [dpdk-dev] Re: " humin (Q)
  0 siblings, 1 reply; 6+ messages in thread
From: David Marchand @ 2020-02-24  8:48 UTC (permalink / raw)
  To: humin (Q)
  Cc: dev, Zhouchang (Forest), Xiedonghui, liudongdong (C),
	lihuisong, Huwei (Xavier),
	Burakov, Anatoly, Thomas Monjalon, Maxime Coquelin

Hello,

On Mon, Feb 24, 2020 at 7:35 AM humin (Q) <humin29@huawei.com> wrote:
> We found that if OS transparent hugepage uses non 'always', mlockall function in the main function of testpmd takes more than 25s to execute.
> The results of running on both x86 and ARM are the same. It's very unreasonable and deadly. The enable status of transparent hugepage on OS can be viewed by the following command.
> [root@X6000-C23-1U dpdk]#cat /sys/kernel/mm/transparent_hugepage/enabled
> always [madvise] never
>
> Transparent hugepage on ARM is configured as 'madvise', 'never' or 'always', testpmd runs with using strace as follows:
> ******************************* Transparent hugepage is configured as 'madvise'  ******************************* [root@X6000-C23-1U dpdk]# strace -T -e trace=mlockall ./testpmd -l 1-4 -w 0000:7d:01.0 --iova-mode=va -- -i
[snip]
> Interactive-mode selected
> mlockall(MCL_CURRENT|MCL_FUTURE)        = 0 <25.736362>
> <---------------------- Hang for 25 seconds
[snip]
>
> *****************************  Transparent hugepage is configured as 'never'  ********************************* [root@X6000-C23-1U dpdk]# strace -T -e trace=mlockall ./testpmd -l 1-4 -w 0000:7d:01.0 --iova-mode=va -- -i
[snip]
> Interactive-mode selected
> mlockall(MCL_CURRENT|MCL_FUTURE)        = 0 <25.737757>
> <---------------------- Hang for 25 seconds
[snip]
>
> *****************************  Transparent hugepage is configured as 'always'  ********************************* [root@X6000-C23-1U dpdk]# strace -T -e trace=mlockall testpmd -l 1-4 -w
> 0000:7d:01.0 --iova-mode=va -- -i
> strace: Can't stat 'testpmd': No such file or directory [root@X6000-C23-1U dpdk]# strace -T -e trace=mlockall ./testpmd -l 1-4 -w 0000:7d:01.0 --iova-mode=va -- -i
[snip]
> Interactive-mode selected
> mlockall(MCL_CURRENT|MCL_FUTURE)        = 0 <0.208571>
> <---------------------- No Hang
[snip]

>
> We have also seen some discussions on this issue in following page:
> https://bugzilla.redhat.com/show_bug.cgi?id=1786923
>
> David Marchand has a related patch, as following page:
> https://github.com/david-marchand/dpdk/commit/f9e1b9fa101c9f4f16c0717401a55790aecc6484
> but this patch doesn't seem to have been submitted to the community.

Yes, this is not ready, I worked on this locally since then (and the
last version is not pushed to my github).
The main problem is that, so far, I have not been able to reproduce Eelco's
issue, which justified the addition of mlockall().


> Transparent hugepage on ARM is configured as 'madvise' or 'never', testpmd runs with using strace as follows:
> *******************************************************
> [root@X6000-C23-1U app]# strace -T -e trace=mlockall ./testpmd -l 1-4 -w
> 0000:7d:01.0 --iova-mode=va -- -i
[snip]
> Interactive-mode selected
> mlockall(MCL_CURRENT|MCL_FUTURE|MCL_ONFAULT) = 0 <1.955947>
> <---------------------- Hang for less than 2 seconds
> testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=171456, size=2176, socket=0
> testpmd: preferred mempool ops selected: ring_mp_mc Done
> testpmd> quit
>
> Bye...
> +++ exited with 0 +++
>
> We'd like to know what is the current development on this issue in dpdk community. Thanks

There is also a report about coredumps being huge.
On this topic, there might be something to do with madvise + MADV_DONTDUMP.
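To make the MADV_DONTDUMP idea concrete: it marks a mapping to be skipped when the kernel writes a core dump. A hedged sketch (ours, not a proposed DPDK change; the value 16 is Linux-specific):

```python
import ctypes
import ctypes.util
import mmap

MADV_DONTDUMP = 16  # Linux-specific value; see <sys/mman.h>

def exclude_from_coredump(length=1 << 20):
    """Map an anonymous region and mark it as excluded from core dumps."""
    libc = ctypes.CDLL(ctypes.util.find_library("c") or "libc.so.6",
                       use_errno=True)
    region = mmap.mmap(-1, length)  # anonymous, page-aligned mapping
    addr = ctypes.addressof(ctypes.c_char.from_buffer(region))
    rc = libc.madvise(ctypes.c_void_p(addr), ctypes.c_size_t(length),
                      MADV_DONTDUMP)
    return region, rc  # rc == 0 on success
```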


I see various options from the hardest to the easiest:
- drop multiprocess support,
- rework the memory allocator so that this kind of side effect
(mapping a huge number of unused pages) is not hit,
- change testpmd so that it locks only the pages of interest (this is
the patch that I had been looking at for now),
- change testpmd so that it does not call mlockall by default,

The last option is likely the quickest workaround.

I write "options", but the last two points feel like "band aid" fixes.
And it does not solve the problem for other applications that will
have to implement similar workarounds.

Anatoly warned that touching the memory allocator is going to be hell.
Quite sad to reach this state because it feels like people are just
starting to hit the changes that entered dpdk 18.05.


Of course, other ideas welcome!

--
David Marchand



* [dpdk-dev] Re: [RFC]app/testpmd: time-consuming question of mlockall function execution
  2020-02-24  8:48 ` David Marchand
@ 2020-02-26  3:59   ` humin (Q)
  2020-03-06 17:49     ` [dpdk-dev] " David Marchand
  0 siblings, 1 reply; 6+ messages in thread
From: humin (Q) @ 2020-02-26  3:59 UTC (permalink / raw)
  To: David Marchand
  Cc: dev, Zhouchang (Forest), Xiedonghui, liudongdong (C),
	lihuisong, Huwei (Xavier),
	Burakov, Anatoly, Thomas Monjalon, Maxime Coquelin

Hello, David Marchand,

Thanks for your quick response.
 
We have another question about your patch. It seems that mlockall() still takes about two seconds after applying this patch (about 0.2 seconds before the patch) when we use the 'always' option for the transparent hugepage configuration. Is this reasonable and acceptable?
If so, when will the patch be submitted to the DPDK community?
We look forward to your reply.
					Min Hu

-----Original Message-----
From: David Marchand [mailto:david.marchand@redhat.com]
Sent: February 24, 2020 16:48
To: humin (Q) <humin29@huawei.com>
Cc: dev@dpdk.org; Zhouchang (Forest) <forest.zhouchang@huawei.com>; Xiedonghui <xiedonghui@huawei.com>; liudongdong (C) <liudongdong3@huawei.com>; lihuisong <lihuisong@huawei.com>; Huwei (Xavier) <huwei87@hisilicon.com>; Burakov, Anatoly <anatoly.burakov@intel.com>; Thomas Monjalon <thomas@monjalon.net>; Maxime Coquelin <maxime.coquelin@redhat.com>
Subject: Re: [RFC]app/testpmd: time-consuming question of mlockall function execution

Hello,

On Mon, Feb 24, 2020 at 7:35 AM humin (Q) <humin29@huawei.com> wrote:
> We found that if OS transparent hugepage uses non 'always', mlockall function in the main function of testpmd takes more than 25s to execute.
> The results of running on both x86 and ARM are the same. It's very unreasonable and deadly. The enable status of transparent hugepage on OS can be viewed by the following command.
> [root@X6000-C23-1U dpdk]#cat 
> /sys/kernel/mm/transparent_hugepage/enabled
> always [madvise] never
>
> Transparent hugepage on ARM is configured as 'madvise', 'never' or 
> 'always', testpmd runs with using strace as follows:
> ******************************* Transparent hugepage is configured as 
> 'madvise'  ******************************* [root@X6000-C23-1U dpdk]# 
> strace -T -e trace=mlockall ./testpmd -l 1-4 -w 0000:7d:01.0 
> --iova-mode=va -- -i
[snip]
> Interactive-mode selected
> mlockall(MCL_CURRENT|MCL_FUTURE)        = 0 <25.736362>
> <---------------------- Hang for 25 seconds
[snip]
>
> *****************************  Transparent hugepage is configured as 
> 'never'  ********************************* [root@X6000-C23-1U dpdk]# 
> strace -T -e trace=mlockall ./testpmd -l 1-4 -w 0000:7d:01.0 
> --iova-mode=va -- -i
[snip]
> Interactive-mode selected
> mlockall(MCL_CURRENT|MCL_FUTURE)        = 0 <25.737757>
> <---------------------- Hang for 25 seconds
[snip]
>
> *****************************  Transparent hugepage is configured as 
> 'always'  ********************************* [root@X6000-C23-1U dpdk]# 
> strace -T -e trace=mlockall testpmd -l 1-4 -w
> 0000:7d:01.0 --iova-mode=va -- -i
> strace: Can't stat 'testpmd': No such file or directory 
> [root@X6000-C23-1U dpdk]# strace -T -e trace=mlockall ./testpmd -l 1-4 
> -w 0000:7d:01.0 --iova-mode=va -- -i
[snip]
> Interactive-mode selected
> mlockall(MCL_CURRENT|MCL_FUTURE)        = 0 <0.208571>
> <---------------------- No Hang
[snip]

>
> We have also seen some discussions on this issue in following page:
> https://bugzilla.redhat.com/show_bug.cgi?id=1786923
>
> David Marchand has a related patch, as following page:
> https://github.com/david-marchand/dpdk/commit/f9e1b9fa101c9f4f16c07174
> 01a55790aecc6484 but this patch doesn't seem to have been submitted to 
> the community.

Yes, this is not ready, I worked on this locally since then (and the last version is not pushed to my github).
The main problem is that I have not been able to reproduce Eelco issue so far which justified the addition of mlockall().


> Transparent hugepage on ARM is configured as 'madvise' or 'never', 
> testpmd runs with using strace as follows:
> *******************************************************
> [root@X6000-C23-1U app]# strace -T -e trace=mlockall ./testpmd -l 1-4 
> -w
> 0000:7d:01.0 --iova-mode=va -- -i
[snip]
> Interactive-mode selected
> mlockall(MCL_CURRENT|MCL_FUTURE|MCL_ONFAULT) = 0 <1.955947>
> <---------------------- Hang for less than 2 seconds
> testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=171456, 
> size=2176, socket=0
> testpmd: preferred mempool ops selected: ring_mp_mc Done
> testpmd> quit
>
> Bye...
> +++ exited with 0 +++
>
> We'd like to know what is the current development on this issue in 
> dpdk community. Thanks

There is also a report about coredumps being huge.
On this topic, there might be something to do with madvise + MADV_DONTDUMP.


I see various options from the hardest to the easiest:
- drop multiprocess support,
- rework the memory allocator so that this kind of side effect (mapping a huge number of unused pages) is not hit,
- change testpmd so that it locks only the pages of interest (this is the patch that I had been looking at for now),
- change testpmd so that it does not call mlockall by default,

The last option is likely the quickest workaround.

I write "options", but the last two points feel like "band aid" fixes.
And it does not solve the problem for other applications that will have to implement similar workarounds.

Anatoly warned that touching the memory allocator is going to be hell.
Quite sad to reach this state because it feels like people are just starting to hit the changes that entered dpdk 18.05.


Of course, other ideas welcome!

--
David Marchand



* Re: [dpdk-dev] [RFC]app/testpmd: time-consuming question of mlockall function execution
  2020-02-26  3:59   ` [dpdk-dev] Re: " humin (Q)
@ 2020-03-06 17:49     ` David Marchand
  2020-03-10 15:28       ` David Marchand
  0 siblings, 1 reply; 6+ messages in thread
From: David Marchand @ 2020-03-06 17:49 UTC (permalink / raw)
  To: humin (Q)
  Cc: dev, Zhouchang (Forest), Xiedonghui, liudongdong (C),
	lihuisong, Huwei (Xavier),
	Burakov, Anatoly, Thomas Monjalon, Maxime Coquelin

On Wed, Feb 26, 2020 at 4:59 AM humin (Q) <humin29@huawei.com> wrote:
> We have another question about your patch. It seems that mlockall() also takes about two seconds after using this patch(about 0.2 seconds before using this patch), if we use "always" option for transparent hugepage configration. Is this reasonable and acceptable?

Hard to tell what reasonable/acceptable mean.
I'd say "it depends" :-).


> If that is ok, when will the patch be uploaded to DPDK community?

I sent an RFC earlier that makes testpmd lock only the pages containing text.
Can you have a try and report your findings?
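To make "lock only pages containing text" concrete, here is a rough stand-in (our sketch, not the actual RFC): walk /proc/self/maps and mlock(2) only the executable mappings, skipping any range the kernel refuses (e.g. due to RLIMIT_MEMLOCK).

```python
import ctypes
import ctypes.util

def lock_text_pages():
    """mlock() only the executable (x-permission) mappings of this process.

    Returns the list of (start, end) address ranges successfully locked.
    """
    libc = ctypes.CDLL(ctypes.util.find_library("c") or "libc.so.6",
                       use_errno=True)
    locked = []
    with open("/proc/self/maps") as maps:
        for line in maps:
            addr_range, perms = line.split()[:2]
            if "x" not in perms or "r" not in perms:
                continue  # not a readable text segment
            start, end = (int(x, 16) for x in addr_range.split("-"))
            # Locking can fail (e.g. RLIMit_MEMLOCK exceeded); skip those.
            if libc.mlock(ctypes.c_void_p(start),
                          ctypes.c_size_t(end - start)) == 0:
                locked.append((start, end))
    return locked
```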


Thanks.

-- 
David Marchand



* Re: [dpdk-dev] [RFC]app/testpmd: time-consuming question of mlockall function execution
  2020-03-06 17:49     ` [dpdk-dev] " David Marchand
@ 2020-03-10 15:28       ` David Marchand
  2020-03-13  8:25         ` [dpdk-dev] Re: " humin (Q)
  0 siblings, 1 reply; 6+ messages in thread
From: David Marchand @ 2020-03-10 15:28 UTC (permalink / raw)
  To: humin (Q)
  Cc: dev, Zhouchang (Forest), Xiedonghui, liudongdong (C),
	lihuisong, Huwei (Xavier),
	Burakov, Anatoly, Thomas Monjalon, Maxime Coquelin

On Fri, Mar 6, 2020 at 6:49 PM David Marchand <david.marchand@redhat.com> wrote:
>
> On Wed, Feb 26, 2020 at 4:59 AM humin (Q) <humin29@huawei.com> wrote:
> > We have another question about your patch. It seems that mlockall() also takes about two seconds after using this patch(about 0.2 seconds before using this patch), if we use "always" option for transparent hugepage configration. Is this reasonable and acceptable?
>
> Hard to tell what reasonable/acceptable mean.
> I'd say "it depends" :-).
>
>
> > If that is ok, when will the patch be uploaded to DPDK community?
>
> I sent a RFC earlier, that makes testpmd only lock pages containing text.
> Can you have a try and report your findings?

Let's forget about the change in testpmd.
I came up with a different fix in EAL.
Can you have a try?

http://patchwork.dpdk.org/patch/66469/


-- 
David Marchand



* [dpdk-dev] Re: [RFC]app/testpmd: time-consuming question of mlockall function execution
  2020-03-10 15:28       ` David Marchand
@ 2020-03-13  8:25         ` humin (Q)
  0 siblings, 0 replies; 6+ messages in thread
From: humin (Q) @ 2020-03-13  8:25 UTC (permalink / raw)
  To: David Marchand
  Cc: dev, Zhouchang (Forest), Xiedonghui, liudongdong (C),
	lihuisong, Huwei (Xavier),
	Burakov, Anatoly, Thomas Monjalon, Maxime Coquelin

Hello, here is a report of my findings:

mlockall() takes a very short time under every transparent hugepage configuration after applying the fix in EAL. Logs as follows:

****** transparent_hugepage is madvise*******
[root@X6000-C23-3U dpdk-19.11]# strace -T -e trace=mlockall ./build/app/testpmd -l 0-1 -n 4 -w 0000:7d:01.0 --file-prefix=rte_lee --log-level=6 -- -i --rxq=1 --txq=1 --rxd=1024 --rxd=1024
EAL: No available hugepages reported in hugepages-2048kB
EAL: No available hugepages reported in hugepages-32768kB
EAL: No available hugepages reported in hugepages-64kB
EAL: VFIO support initialized
EAL:   using IOMMU type 1 (Type 1)
Interactive-mode selected
mlockall(MCL_CURRENT|MCL_FUTURE)        = 0 <0.014386>

Warning! port-topology=paired and odd forward ports number, the last port will pair with itself.

Configuring Port 0 (socket 0)
Port 0: 82:33:17:B1:43:57
Checking link statuses...
Done
Error during enabling promiscuous mode for port 0: Operation not supported - ignore
testpmd>
testpmd>
testpmd> quit

Stopping port 0...
Stopping ports...
Done

Shutting down port 0...
Closing ports...
0000:7d:01.0 hns3vf_dev_close(): Close port 0 finished
Done

Bye...
+++ exited with 0 +++

****** transparent_hugepage is never*******
[root@X6000-C23-3U dpdk-19.11]# strace -T -e trace=mlockall ./build/app/testpmd -l 0-8 -n 4 -w 0000:7d:01.0 --file-prefix=rte_lee --log-level=6 -- -i --rxq=8 --txq=8
EAL: No available hugepages reported in hugepages-2048kB
EAL: No available hugepages reported in hugepages-32768kB
EAL: No available hugepages reported in hugepages-64kB
EAL: VFIO support initialized
EAL:   using IOMMU type 1 (Type 1)
Interactive-mode selected
mlockall(MCL_CURRENT|MCL_FUTURE)        = 0 <0.031902>

Warning! port-topology=paired and odd forward ports number, the last port will pair with itself.

Configuring Port 0 (socket 0)
Port 0: 1E:46:8C:BC:7A:E7
Checking link statuses...
Done
Error during enabling promiscuous mode for port 0: Operation not supported - ignore
testpmd>
testpmd>
testpmd> quit

Stopping port 0...
Stopping ports...
Done

Shutting down port 0...
Closing ports...
0000:7d:01.0 hns3vf_dev_close(): Close port 0 finished
Done

Bye...
+++ exited with 0 +++


****** transparent_hugepage is always *******
[root@X6000-C23-3U dpdk-19.11]# strace -T -e trace=mlockall ./build/app/testpmd -l 0-8 -n 4 -w 0000:7d:01.0 --file-prefix=rte_lee --log-level=6 -- -i --rxq=8 --txq=8
EAL: No available hugepages reported in hugepages-2048kB
EAL: No available hugepages reported in hugepages-32768kB
EAL: No available hugepages reported in hugepages-64kB
EAL: VFIO support initialized
EAL:   using IOMMU type 1 (Type 1)
Interactive-mode selected
mlockall(MCL_CURRENT|MCL_FUTURE)        = 0 <0.017960>

Warning! port-topology=paired and odd forward ports number, the last port will pair with itself.

Configuring Port 0 (socket 0)
Port 0: EA:84:12:62:A5:98
Checking link statuses...
Done
Error during enabling promiscuous mode for port 0: Operation not supported - ignore
testpmd> quit

Stopping port 0...
Stopping ports...
Done

Shutting down port 0...
Closing ports...
0000:7d:01.0 hns3vf_dev_close(): Close port 0 finished
Done

Bye...
+++ exited with 0 +++

-----Original Message-----
From: David Marchand [mailto:david.marchand@redhat.com]
Sent: March 10, 2020 23:28
To: humin (Q) <humin29@huawei.com>
Cc: dev@dpdk.org; Zhouchang (Forest) <forest.zhouchang@huawei.com>; Xiedonghui <xiedonghui@huawei.com>; liudongdong (C) <liudongdong3@huawei.com>; lihuisong <lihuisong@huawei.com>; Huwei (Xavier) <huwei87@hisilicon.com>; Burakov, Anatoly <anatoly.burakov@intel.com>; Thomas Monjalon <thomas@monjalon.net>; Maxime Coquelin <maxime.coquelin@redhat.com>
Subject: Re: [RFC]app/testpmd: time-consuming question of mlockall function execution

On Fri, Mar 6, 2020 at 6:49 PM David Marchand <david.marchand@redhat.com> wrote:
>
> On Wed, Feb 26, 2020 at 4:59 AM humin (Q) <humin29@huawei.com> wrote:
> > We have another question about your patch. It seems that mlockall() also takes about two seconds after using this patch(about 0.2 seconds before using this patch), if we use "always" option for transparent hugepage configration. Is this reasonable and acceptable?
>
> Hard to tell what reasonable/acceptable mean.
> I'd say "it depends" :-).
>
>
> > If that is ok, when will the patch be uploaded to DPDK community?
>
> I sent a RFC earlier, that makes testpmd only lock pages containing text.
> Can you have a try and report your findings?

Let's forget about the change in testpmd.
I came with a different fix in EAL.
Can you have a try?

http://patchwork.dpdk.org/patch/66469/


-- 
David Marchand



end of thread, other threads:[~2020-03-13  8:25 UTC | newest]

Thread overview: 6+ messages
2020-02-24  6:35 [dpdk-dev] [RFC]app/testpmd: time-consuming question of mlockall function execution humin (Q)
2020-02-24  8:48 ` David Marchand
2020-02-26  3:59   ` [dpdk-dev] Re: " humin (Q)
2020-03-06 17:49     ` [dpdk-dev] " David Marchand
2020-03-10 15:28       ` David Marchand
2020-03-13  8:25         ` [dpdk-dev] Re: " humin (Q)
