https://bugs.dpdk.org/show_bug.cgi?id=443
Bug ID: 443
Summary: spp primary takes up the complete hugepages
Product: SPP
Version: unspecified
Hardware: All
URL: http://doc.dpdk.org/spp/setup/howto_use.html
OS: All
Status: UNCONFIRMED
Severity: normal
Priority: Normal
Component: main
Assignee: yasufum.o@gmail.com
Reporter: vipin.varghese@intel.com
CC: spp@dpdk.org
Target Milestone: ---

Absence of the '--socket-limit' option causes memory bloating in SPP.
Correction requested: add the '--socket-limit' option to the command shown at the URL above, since the recommended DPDK version is the mainline fetched via `git clone`.
-- You are receiving this mail because: You are on the CC list for the bug.
Hideyuki Yamashita (yamashita.hideyuki@ntt-tx.co.jp) changed:
What|Removed|Added
CC| |yamashita.hideyuki@ntt-tx.co.jp

--- Comment #1 from Hideyuki Yamashita (yamashita.hideyuki@ntt-tx.co.jp) ---
Hello Vipin,

Thanks for your report. I would like to ask some questions, as follows.

Q1. Which SPP version did you use?
Q2. Which DPDK version did you use?
Q3. Can you share terminal logs? I would like to know what you did on the console and what happened.
Q4. What is the real problem described by this ticket? Does the primary process use too much memory?

Thanks
Hideyuki
--- Comment #2 from Vipin Varghese (vipin.varghese@intel.com) ---
Hello Hideyuki,

There is no option to raise this under `documentation`.

From the documentation:
URL: https://doc.dpdk.org/spp/setup/getting_started.html#install-dpdk-and-spp
Section: 1.2.1
DPDK version: git clone http://dpdk.org/git/dpdk

In all the examples I have tested with DPDK 18.11 onwards, not passing `--socket-limit` to the primary takes up all huge pages.

URL: https://doc.dpdk.org/spp/setup/howto_use.html
Section: 2.2
Primary command:
`sudo ./src/primary/x86_64-native-linuxapp-gcc/spp_primary \
 -l 1 -n 4 \
 --socket-mem 512,512 \
 --huge-dir=/dev/hugepages \
 --proc-type=primary \
 -- \
 -p 0x03 \
 -n 10 \
 -s 192.168.1.100:5555`

Once you pass '--socket-limit' together with '--socket-mem', you can limit the usage. Please update the documentation to reflect the new memory requirements.
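A minimal sketch of the documented launch command with the proposed option added. The 512 MB limit values are illustrative, not taken from the SPP documentation; pick limits that match your deployment.

```shell
# Documented spp_primary launch, extended with --socket-limit (values
# illustrative). --socket-mem pre-allocates 512 MB per NUMA node at startup;
# --socket-limit caps each node's total hugepage usage at the same amount,
# so the process cannot grow beyond it at runtime.
sudo ./src/primary/x86_64-native-linuxapp-gcc/spp_primary \
    -l 1 -n 4 \
    --socket-mem 512,512 \
    --socket-limit 512,512 \
    --huge-dir=/dev/hugepages \
    --proc-type=primary \
    -- \
    -p 0x03 \
    -n 10 \
    -s 192.168.1.100:5555
```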
--- Comment #3 from Hideyuki Yamashita (yamashita.hideyuki@ntt-tx.co.jp) ---
Thanks for your info. I will confirm what you are saying first.
--- Comment #4 from Hideyuki Yamashita (yamashita.hideyuki@ntt-tx.co.jp) ---
Hello Vipin,

Please let me know a bit more.

1. http://doc.dpdk.org/guides/linux_gsg/linux_eal_parameters.html
I understand that --socket-limit is a newly added EAL parameter which is used together with the existing --socket-mem parameter. But my problem is that I cannot picture how those parameters are used together. If you have a good example, please let me know.

2. Are you suggesting SPP issue a documentation patch for the existing tag (19.08)? Or is it sufficient to update the master branch?

BR and Thanks,
Hideyuki
--- Comment #5 from Vipin Varghese (vipin.varghese@intel.com) ---
Hi Hideyuki,

But my problem is that I cannot picture how those parameters are used together.
Answer> Are you able to run any DPDK example and observe the issue at all? If yes, please run SPP and check the same.
--- Comment #6 from Hideyuki Yamashita (yamashita.hideyuki@ntt-tx.co.jp) ---
Hello Vipin,

Thanks for your response. Unfortunately I could not observe the issue using DPDK 19.11 + SPP 19.11.

[Hugepage usage]
Before invoking the primary process:
$ cat /proc/meminfo | grep -i HugePages
AnonHugePages: 0 kB
ShmemHugePages: 0 kB
HugePages_Total: 16
HugePages_Free: 16
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 1048576 kB

After invoking the primary process:
$ cat /proc/meminfo | grep -i HugePages
AnonHugePages: 0 kB
ShmemHugePages: 0 kB
HugePages_Total: 16
HugePages_Free: 14
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 1048576 kB

It looks like only 2 pages are used per primary invocation.

[SPP primary startup parameters]
sudo ./src/primary/x86_64-native-linuxapp-gcc/spp_primary \
 -l 1 \
 -n 4 \
 --socket-mem 512,512 \
 --huge-dir /mnt/huge1G \
 --proc-type primary \
 -- \
 -p 0x03 \
 -n 10 \
 -s 127.0.0.1:5555

[How to reserve hugepages]
/etc/default/grub:
GRUB_CMDLINE_LINUX="DEBCONF_DEBUG=5 ksdevice=bootif default_hugepagesz=1G hugepagesz=1G hugepages=16"

Hugepage mount directory: /mnt/huge_1GB
sudo mount | grep huge
nodev on /mnt/huge1G type hugetlbfs (rw,relatime,pagesize=1024M)

Thanks,
Hideyuki
Vipin Varghese (vipin.varghese@intel.com) changed:
What|Removed|Added
Priority|Normal|High
Severity|normal|critical

--- Comment #7 from Vipin Varghese (vipin.varghese@intel.com) ---
Hi Hideyuki,

I am not sure whether it is the download or missing information in the documentation.

URL: https://doc.dpdk.org/spp/setup/getting_started.html#setup
DPDK version: mainline from git
SPP version: spp-19.11.zip

Observation: the primary is the resource allocator (huge pages); without a secondary it consumes 1 GB (1 page from the hugepages). A secondary started without ```--socket-limit``` is free to consume the next huge page.

Memory Usage:
# numastat -p spp_
Per-node process memory usage (in MBs)
PID Node 0 Node 1 Total
------------------- --------------- --------------- ---------------
15191 (gdb) 54.12 3.23 57.35
15201 (spp_primary) 1040.76 2.12 1042.88
15208 (spp_nfv) 1039.82 2.29 1042.11
------------------- --------------- --------------- ---------------
Total 2134.70 7.64 2142.34

Note: I am increasing the priority.
--- Comment #8 from Vipin Varghese (vipin.varghese@intel.com) ---
On stopping the secondary we have
```
# numastat -p spp_
Per-node process memory usage (in MBs)
PID Node 0 Node 1 Total
------------------- --------------- --------------- ---------------
15191 (gdb) 54.12 3.23 57.35
15201 (spp_primary) 1040.76 2.12 1042.88
------------------- --------------- --------------- ---------------
Total 1094.88 5.35 1100.23
```
--- Comment #9 from Vipin Varghese (vipin.varghese@intel.com) ---
Primary:
```
x86_64-native-linuxapp-gcc/spp_primary -l 1-5 -n 4 --socket-mem=100,0 --proc-type=primary -w 0000:08:00.1 -w 0000:08:00.2 -- -p 0x3 -n 10 -s 127.0.0.1:5555
```
Secondary:
```
./src/nfv/x86_64-native-linuxapp-gcc/spp_nfv --proc-type=secondary -l 10-14 -- -n 10 -s 127.0.0.1:6666
```
Vipin Varghese (vipin.varghese@intel.com) changed:
What|Removed|Added
OS|All|Linux
Hardware|All|x86
masahiro nemoto (masahiro.nemoto.es@s1.ntt-tx.co.jp) changed:
What|Removed|Added
CC| |masahiro.nemoto.es@s1.ntt-tx.co.jp

--- Comment #10 from masahiro nemoto (masahiro.nemoto.es@s1.ntt-tx.co.jp) ---
Hello Vipin,

I tried to re-create the situation you mentioned in your reply. Before the result, I would like to confirm that you pointed out the following two points.

[Point 1]
Primary:
```
x86_64-native-linuxapp-gcc/spp_primary -l 1-5 -n 4 --socket-mem=100,0 --proc-type=primary -w 0000:08:00.1 -w 0000:08:00.2 -- -p 0x3 -n 10 -s 127.0.0.1:5555
```
Even if the --socket-mem option is specified when invoking the primary process, it is ignored.

[Point 2]
By adding the new EAL parameter named --socket-limit, the situation changes: the total amount of memory allocated is limited to the value specified with --socket-limit.

[Result 1]
I confirmed that Point 1 can be re-created in my environment.
sudo ./spp_primary -l 1-2 -n 4 --socket-mem=100,100 --proc-type=primary --huge-dir=/mnt/huge1G -- -p 0x3 -n 10 -s 127.0.0.1:5555
By invoking the above, 1 GB per socket is allocated. As you mentioned, the value of --socket-mem is IGNORED.

tx_h-yamashita@r740n15:~$ sudo numastat -p spp_
Per-node process memory usage (in MBs)
PID Node 0 Node 1 Total
------------------- --------------- --------------- ---------------
38174 (sudo) 0.69 6.93 7.61
38175 (spp_primary) 1039.67 1027.32 2067.00
38183 (sudo) 0.54 6.95 7.48
------------------- --------------- --------------- ---------------
Total 1040.89 1041.20 2082.09

[Result 2]
sudo ./spp_primary -l 1-2 -n 4 --socket-mem=100,100 --socket-limit=10,10 --proc-type=primary --huge-dir=/mnt/huge1G -- -p 0x3 -n 10 -s 127.0.0.1:5555
With the --socket-limit parameter, the situation did NOT change.
Expectation: 10 MB per socket is allocated.
Result: 1000 MB per socket is allocated.
tx_h-yamashita@r740n15:~$ sudo numastat -p spp_
Per-node process memory usage (in MBs)
PID Node 0 Node 1 Total
------------------- --------------- --------------- ---------------
38187 (sudo) 0.68 6.95 7.63
38188 (spp_primary) 1039.61 1027.14 2066.75
38198 (sudo) 0.54 6.75 7.30
------------------- --------------- --------------- ---------------
Total 1040.84 1040.84 2081.68
--- Comment #11 from Vipin Varghese (vipin.varghese@intel.com) ---
Hi Hideyuki,

Thanks for your updates. Summarizing your findings:
1. The issue can be reproduced.
2. The parameters ```--socket-mem=100,100 --socket-limit=10,10``` did not make a difference for you.

My answers:
1. You can move the status of the bug to `confirmed`.
2. The parameter usage ```--socket-mem=100,100 --socket-limit=10,10``` is not correct. Let me explain:
a. ```--socket-mem``` defines the minimum amount of memory to be used from huge pages.
b. ```--socket-limit``` defines the maximum amount of memory to be used from huge pages.

Hence, if you want to use memory from
1. NUMA-0, it should be ```--socket-limit=100,1 --socket-mem=1024,1```
2. NUMA-1, it should be ```--socket-limit=1,100 --socket-mem=1,1024```
3. NUMA-0 and NUMA-1, it should be ```--socket-limit=100,100 --socket-mem=1024,1024```

This prevents the primary or secondaries from hijacking huge pages beyond ```--socket-limit```.
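For reference, the DPDK EAL documents `--socket-limit` as a per-socket upper bound on memory use (a value of 0 disables the limit on that socket), while `--socket-mem` pre-allocates memory at startup. A sketch of the per-node combinations, with illustrative values chosen so the limit is not below the pre-allocation:

```shell
# Illustrative EAL option combinations (example values, not from the SPP
# docs). --socket-mem pre-allocates at startup; --socket-limit caps total
# per-socket usage; 0 in --socket-limit disables the cap on that socket.

# NUMA-0 only: pre-allocate 512 MB, cap at 1024 MB on node 0
NODE0_OPTS="--socket-mem=512,0 --socket-limit=1024,0"

# NUMA-1 only
NODE1_OPTS="--socket-mem=0,512 --socket-limit=0,1024"

# Both nodes
BOTH_OPTS="--socket-mem=512,512 --socket-limit=1024,1024"

echo "$NODE0_OPTS"
```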
masahiro nemoto (masahiro.nemoto.es@s1.ntt-tx.co.jp) changed:
What|Removed|Added
Ever confirmed|0|1
Status|UNCONFIRMED|CONFIRMED

--- Comment #12 from masahiro nemoto (masahiro.nemoto.es@s1.ntt-tx.co.jp) ---
Hello Vipin,

Thanks for your advice.
1. Since we have acknowledged what you are saying, I am changing this ticket's status to "confirmed".
2. I will add the --socket-limit parameter to the SPP documents wherever the --socket-mem parameter is used. The update will be done together with other document improvements, since the description is not urgent.
--- Comment #13 from masahiro nemoto (masahiro.nemoto.es@s1.ntt-tx.co.jp) ---
Hello Vipin,

I found a related document on DPDK memory management:
https://software.intel.com/content/www/us/en/develop/articles/memory-in-dpdk-part-4-1811-and-beyond.html

Now I think I understand the meaning of '--socket-mem' and '--socket-limit'. But at the same time, new questions pop up. Can you answer these basic questions?

Q1. If the user specifies enough memory with '--socket-mem' (e.g. 5 GB), then there is no need to think about '--socket-limit'. (However, this may risk a situation like the following: a large amount of memory is pre-allocated but almost all of it goes unused, which is a waste of memory from the system perspective.)

Q2. '--socket-limit' specifies an upper limit on memory usage regardless of the '--socket-mem' value. Then what is the risk (e.g. performance) when the user specifies '--socket-limit' without '--socket-mem'?

In general, I understand that it is 'safe' to specify both parameters rather than only one of them. But it depends on the 'characteristics of the DPDK application', right? At least SPP does NOT need additional memory at runtime (rather, the primary process allocates memory only during initialization). In addition, adding many parameters may lead to confusion or mistyping, and requires much work to update the documents.

Your advice is highly appreciated. What do you think?
--- Comment #14 from Vipin Varghese (vipin.varghese@intel.com) ---
Hello Masahiro,

Thank you for taking the time to understand the details. My answers are shared below.

1. Memory management of DPDK
[Answer] For the final decision and the right setup, please reach out to the DPDK memory management maintainers. I can only suggest what I know.

Q1. If the user specifies enough memory with '--socket-mem' (e.g. 5 GB), then there is no need to think about '--socket-limit'.
[Answer] This is not the right understanding, because `--socket-limit` is the upper (maximum) limit. This has been explained in comment 1.

Q2. '--socket-limit' specifies an upper limit on memory usage regardless of the '--socket-mem' value. Then what is the risk (e.g. performance) when the user specifies '--socket-limit' without '--socket-mem'?
[Answer] Yes, this is a good argument, but only valid when there are no more than two primary applications. In the use cases where SPP is mostly deployed, there are VMs and Docker containers. So if one intends to ensure the SPP primary gets a minimum of 512 MB, use `--socket-mem`; to ensure that dynamic allocations in SPP do not cross 2048 MB, use `--socket-limit`.

4. But it depends on the 'characteristics of the DPDK application', right?
[Answer] Yes, I have explained the use case scenario. But for SPP this is not the case.

At least SPP does NOT need additional memory at runtime (rather, the primary process allocates memory only during initialization).
[Answer] I think this is an incorrect understanding. If SPP is only a standalone application, with no Docker container, VM, or Pod that will dynamically grow or resize, then you are safe not using it.

In addition, adding many parameters may lead to 'confusion' or 'mistyping' and requires much work to update the documents.
[Answer] I find this a very inconsistent argument, because if the intention is not to confuse users with arguments, SPP should be hiding or abstracting all EAL arguments from the end user, which it is failing to do.

Your advice is highly appreciated. What do you think?
[Answer] I think you should take this up with `Hideyuki Yamashita` (maintainer) regarding the direction of the discussion and fixes.

Note: Would you not agree that, for a confirmed issue where the direction of the fix is not clear,
1. there should have been a meeting or brainstorming invite for the same;
2. if the goal is not to confuse the end user with options, SPP should be abstracting or templating the EAL arguments with zero maintenance.

Hope all these will be fixed soon.
--- Comment #15 from Hideyuki Yamashita (yamashita.hideyuki@ntt-tx.co.jp) ---
Hello Vipin,

Thanks for your kindness and guidance.

First of all, I think about how the "--socket-limit" option is used for DPDK applications in general. SPP is an application which assumes the following use cases:
https://doc.dpdk.org/spp/setup/use_cases.html
https://doc.dpdk.org/spp/spp_vf/use_cases/index.html

SPP rarely allocates memory dynamically, and thus the SPP documentation guides users to allocate enough memory by using the "--socket-mem" option. If you find a problem while using SPP, we are open to improving the documentation; please let us know the specific use case.

BR,
Hideyuki Yamashita
--- Comment #16 from Vipin Varghese (vipin.varghese@intel.com) ---
Hi Hideyuki,

Please find the information which shows how hugepages are dynamically consumed in SPP:
```
Vipin Varghese 2020-04-19 08:50:36 CEST

Hi Hideyuki,

I am not sure whether it is the download or missing information in the documentation.

URL: https://doc.dpdk.org/spp/setup/getting_started.html#setup
DPDK version: mainline from git
SPP version: spp-19.11.zip

Observation: the primary is the resource allocator (huge pages); without a secondary it consumes 1 GB (1 page from the hugepages). A secondary started without socket-limit is free to consume the next huge page.

Memory Usage:
# numastat -p spp_
Per-node process memory usage (in MBs)
PID Node 0 Node 1 Total
------------------- --------------- --------------- ---------------
15191 (gdb) 54.12 3.23 57.35
15201 (spp_primary) 1040.76 2.12 1042.88
15208 (spp_nfv) 1039.82 2.29 1042.11
------------------- --------------- --------------- ---------------
Total 2134.70 7.64 2142.34
```
Please note this was confirmed and shared on `2020-04-19 08:50:36` in this same ticket.

`SPP rarely allocates memory dynamically and thus the SPP document guides users to allocate enough memory by using the "--socket-mem" option.`
[Request] Can you please confirm you are not seeing this?
--- Comment #17 from Hideyuki Yamashita (yamashita.hideyuki@ntt-tx.co.jp) ---
Hello Vipin,

I have tried what you said in the following comment:
https://bugs.dpdk.org/show_bug.cgi?id=443#c8

--Result--
1. Hugepage usage only changes when the primary process is started.
2. Hugepage usage did NOT change with the number of secondaries.
Note that we measured hugepage usage not only with numastat but also via /proc/meminfo.

What do you think?

0. Initial state (no primary, no secondary)
tx_h-yamashita@R730n10:~$ grep HugePages_ /proc/meminfo
HugePages_Total: 16
HugePages_Free: 16
HugePages_Rsvd: 0
HugePages_Surp: 0

tx_h-yamashita@R730n10:~$ numastat -s
Per-node numastat info (in MBs):
Node 0 Node 1 Total
--------------- --------------- ---------------
Numa_Hit 1170.66 3359.65 4530.31
Local_Node 678.14 3210.07 3888.21
Other_Node 492.52 149.58 642.10
Interleave_Hit 146.16 143.30 289.46
Numa_Miss 0.00 0.00 0.00
Numa_Foreign 0.00 0.00 0.00

tx_h-yamashita@R730n10:~$ numastat -p spp_
Found no processes containing pattern: "spp_"

tx_h-yamashita@R730n10:~$ numastat -s
Per-node numastat info (in MBs):
Node 0 Node 1 Total
--------------- --------------- ---------------
Numa_Hit 1171.93 3361.25 4533.18
Numa_Miss 0.00 0.00 0.00
Numa_Foreign 0.00 0.00 0.00
Interleave_Hit 146.16 143.30 289.46
Local_Node 679.41 3211.66 3891.08
Other_Node 492.52 149.58 642.10

1. State 1 (primary started, no secondary)
tx_h-yamashita@R730n10:~$ grep HugePages_ /proc/meminfo
HugePages_Total: 16
HugePages_Free: 14
HugePages_Rsvd: 0
HugePages_Surp: 0

tx_h-yamashita@R730n10:~$ numastat -p spp_
Per-node process memory usage (in MBs)
Can't read /proc/10178/numa_maps: Permission denied
Can't read /proc/10179/numa_maps: Permission denied

tx_h-yamashita@R730n10:~$ sudo numastat -p spp_
[sudo] password for tx_h-yamashita:
Per-node process memory usage (in MBs)
PID Node 0 Node 1 Total
------------------- --------------- --------------- ---------------
10178 (sudo) 0.67 6.81 7.48
10179 (spp_primary) 1033.96 1030.59 2064.54
10193 (sudo) 0.39 9.16 9.55
------------------- --------------- --------------- ---------------
Total 1035.02 1046.55 2081.57

2. State 2 (primary started, 1 secondary started)
tx_h-yamashita@R730n10:~$ grep HugePages_ /proc/meminfo
HugePages_Total: 16
HugePages_Free: 14
HugePages_Rsvd: 0
HugePages_Surp: 0

tx_h-yamashita@R730n10:~$ sudo numastat -p spp_
Per-node process memory usage (in MBs)
PID Node 0 Node 1 Total
------------------- --------------- --------------- ---------------
10178 (sudo) 0.67 6.81 7.48
10179 (spp_primary) 1033.96 1030.59 2064.55
10213 (sudo) 0.62 6.77 7.40
10214 (spp_nfv) 1024.95 1039.67 2064.61
10219 (sudo) 0.39 7.01 7.39
------------------- --------------- --------------- ---------------
Total 2060.59 2090.85 4151.43

3. State 3 (primary started, 2 secondaries started)
tx_h-yamashita@R730n10:~$ grep HugePages_ /proc/meminfo
HugePages_Total: 16
HugePages_Free: 14
HugePages_Rsvd: 0
HugePages_Surp: 0

tx_h-yamashita@R730n10:~$ sudo numastat -p spp_
Per-node process memory usage (in MBs)
PID Node 0 Node 1 Total
------------------- --------------- --------------- ---------------
10178 (sudo) 0.67 6.81 7.48
10179 (spp_primary) 1033.96 1030.59 2064.55
10213 (sudo) 0.62 6.77 7.40
10214 (spp_nfv) 1024.95 1039.67 2064.61
10225 (sudo) 0.44 7.04 7.48
10226 (spp_nfv) 1025.09 1039.74 2064.83
10243 (sudo) 0.45 7.04 7.48
------------------- --------------- --------------- ---------------
Total 3086.17 3137.66 6223.83

Thanks!
Best Regards,
Hideyuki Yamashita
--- Comment #18 from Vipin Varghese (vipin.varghese@intel.com) ---
Hi Hideyuki,

I will wait for your update with `numastat -p spp_` for the primary-only and primary-plus-secondary running scenarios.

Question: which version are you running? Have there been any modifications to the secondary application (spp_nfv)?
--- Comment #19 from Hideyuki Yamashita (yamashita.hideyuki@ntt-tx.co.jp) ---
Hello Vipin,

The result of "numastat -p spp_" with the primary and secondaries running is already described at the following link:
https://bugs.dpdk.org/show_bug.cgi?id=443#c17
DPDK 20.02 / SPP 20.02

Best Regards,
Hideyuki Yamashita
--- Comment #20 from Vipin Varghese (vipin.varghese@intel.com) ---
Hi Hideyuki,

The only information I find is
```
tx_h-yamashita@R730n10:~$ numastat -p spp_
Found no processes containing pattern: "spp_"

tx_h-yamashita@R730n10:~$ grep HugePages_ /proc/meminfo
HugePages_Total: 16
HugePages_Free: 14
HugePages_Rsvd: 0
HugePages_Surp: 0

tx_h-yamashita@R730n10:~$ sudo numastat -p spp_
Per-node process memory usage (in MBs)
PID Node 0 Node 1 Total
------------------ --------------- --------------- ---------------
10178 (sudo) 0.67 6.81 7.48
10179 (spp_primary) 1033.96 1030.59 2064.55
10213 (sudo) 0.62 6.77 7.40
10214 (spp_nfv) 1024.95 1039.67 2064.61
10219 (sudo) 0.39 7.01 7.39

Per-node process memory usage (in MBs)
PID Node 0 Node 1 Total
------------------- --------------- --------------- ---------------
10178 (sudo) 0.67 6.81 7.48
10179 (spp_primary) 1033.96 1030.59 2064.55
10213 (sudo) 0.62 6.77 7.40
10214 (spp_nfv) 1024.95 1039.67 2064.61
10225 (sudo) 0.44 7.04 7.48
10226 (spp_nfv) 1025.09 1039.74 2064.83
10243 (sudo) 0.45 7.04 7.48
------------------- --------------- --------------- ---------------
Total 3086.17 3137.66 6223.83
```
In my testing:
```
Memory Usage:
# numastat -p spp_
Per-node process memory usage (in MBs)
PID Node 0 Node 1 Total
------------------- --------------- --------------- ---------------
15191 (gdb) 54.12 3.23 57.35
15201 (spp_primary) 1040.76 2.12 1042.88
15208 (spp_nfv) 1039.82 2.29 1042.11
------------------- --------------- --------------- ---------------
Total 2134.70 7.64 2142.34
```
All I see is that my primary allocated a 1 GB page, and when the secondary was started it allocated another 1 GB.
From your logs this is the same:
```
0 secondaries
10179 (spp_primary) 1033.96 1030.59 2064.54

1 secondary
10179 (spp_primary) 1033.96 1030.59 2064.55
10214 (spp_nfv) 1024.95 1039.67 2064.61

2 secondaries
10179 (spp_primary) 1033.96 1030.59 2064.55
10214 (spp_nfv) 1024.95 1039.67 2064.61
10226 (spp_nfv) 1025.09 1039.74 2064.83
```
I believe this is the same information I have been sharing in multiple comments. On each subsequent start of a secondary, a minimum of 1 GB is requested from the primary to be added to the virtual map area. Hence `--socket-limit` has to be exercised together with `--socket-mem` to prevent unnecessary bloating when running with multiple DPDK primaries.
--- Comment #21 from Hideyuki Yamashita (yamashita.hideyuki@ntt-tx.co.jp) ---
Hello Vipin,

I understand that this ticket is related to "hugepage usage", as the ticket title says. I understand that hugepage usage does NOT increase regardless of the number of secondary processes.

The following is the result:
- hugepages in the system total 16 GB
- the primary uses 8 GB on node 0 and 8 GB on node 1
- two secondary processes can start without increasing hugepage usage

What is the problem?

tx_h-yamashita@R730n10:~$ grep HugePages_ /proc/meminfo
HugePages_Total: 16
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0

tx_h-yamashita@R730n10:~$ sudo numastat -p spp_
[sudo] password for tx_h-yamashita:
Per-node process memory usage (in MBs)
PID Node 0 Node 1 Total
------------------- --------------- --------------- ---------------
36244 (sudo) 1.23 6.30 7.53
36245 (spp_primary) 8196.33 8203.80 16400.13
36255 (sudo) 0.20 9.52 9.73
------------------- --------------- --------------- ---------------
Total 8197.76 8219.62 16417.38

tx_h-yamashita@R730n10:~$ grep HugePages_ /proc/meminfo
HugePages_Total: 16
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0

tx_h-yamashita@R730n10:~$ sudo numastat -p spp_
Per-node process memory usage (in MBs)
PID Node 0 Node 1 Total
------------------- --------------- --------------- ---------------
36244 (sudo) 1.23 6.30 7.53
36245 (spp_primary) 8196.33 8203.80 16400.13
36261 (sudo) 0.20 9.52 9.71
36262 (spp_nfv) 8206.14 8194.36 16400.50
36267 (sudo) 0.33 7.02 7.34
------------------- --------------- --------------- ---------------
Total 16404.22 16420.99 32825.21

tx_h-yamashita@R730n10:~$ grep HugePages_ /proc/meminfo
HugePages_Total: 16
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0

tx_h-yamashita@R730n10:~$ sudo numastat -p spp_
Per-node process memory usage (in MBs)
PID Node 0 Node 1 Total
------------------- --------------- --------------- ---------------
36244 (sudo) 1.23 6.30 7.53
36245 (spp_primary) 8196.33 8203.80 16400.13
36261 (sudo) 0.20 9.52 9.71
36262 (spp_nfv) 8206.14 8194.36 16400.50
36273 (sudo) 0.20 9.68 9.89
36274 (spp_nfv) 8206.15 8194.17 16400.32
36279 (sudo) 1.23 6.29 7.52
------------------- --------------- --------------- ---------------
Total 24611.48 24624.12 49235.60

BR,
Hideyuki Yamashita
--- Comment #22 from Vipin Varghese (vipin.varghese@intel.com) ---
I understand that this ticket is related to "hugepage usage", as the ticket title says.
[VV] Thanks for agreeing.

I understand that hugepage usage does NOT increase regardless of the number of secondary processes.
[VV] Thanks for confirming the same.

The following is the result: hugepages in the system total 16 GB.
[VV] Are you stating that SPP requires 16 GB of memory as hugepages to work? The only place mentioning memory is `https://doc.dpdk.org/spp/setup/getting_started.html#setup`.

Problem 1: But there it contradicts itself, with `1GB is 8` versus `2MB is 1024`. This is the first concern: no minimum requirement is stated.
Problem 2: Without setting a lower limit via `--socket-mem` and an upper limit via `--socket-limit`, other DPDK applications can consume the huge pages, leading to exhaustion of pages.
Problem 3: In earlier interactions, `Masahiro Nemoto` and `Hideyuki Yamashita` did not see a change in huge pages or memory allocation from a secondary, from the day the ticket was opened (2020-04-06 08:25:15). But I am happy to understand that we all now agree that starting a secondary leads to 1 GB consumption.

In my humble opinion, the right actions for better use of SPP and for the SPP community would be to:
a. confirm this has been present in earlier and current releases;
b. update the documentation with the minimum memory requirement for running the primary and secondaries;
c. update the documentation to state that the minimum memory utilization for launching secondaries is 1 GB each;
d. edit the startup or EAL args to reflect the user configuration (1 GB for the primary + 1 GB * n secondaries);
e. update the EAL args of spp_primary with --socket-limit to prevent using more than 1 GB for the primary + 1 GB * n secondaries.

Please feel free to say if you have a difference of opinion.
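Points d and e amount to a simple sizing rule. A hypothetical sketch of it: the 1 GB-per-process figure comes from the numastat observations in this thread, and the variable names and the single-node option string are illustrative, not from the SPP docs.

```shell
# Hypothetical sizing helper: derive a --socket-limit value (in MB) for one
# primary plus N secondaries, assuming each process maps up to 1 GB of
# hugepages on its local NUMA node (the figure observed in this thread).
N_SECONDARIES=2
LIMIT_MB=$(( 1024 * (1 + N_SECONDARIES) ))

# Example single-node EAL fragment built from the computed cap.
echo "--socket-mem=1024,0 --socket-limit=${LIMIT_MB},0"
```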
--- Comment #23 from Hideyuki Yamashita (yamashita.hideyuki@ntt-tx.co.jp) ---
Hello Vipin,

1. The description in SPP is just a "sample" to let users understand how they can start SPP. Describing the minimum memory required may be done in a future documentation update.
2. About "Problem 3", please let us know why you are saying "we all agree starting secondary leads to 1GB consumption". From the hugepage consumption viewpoint, there is no additional memory consumption when starting a secondary.

BR,
Hideyuki Yamashita
--- Comment #24 from Vipin Varghese (vipin.varghese@intel.com) ---
Hi Hideyuki,

I tried my best to help your project SPP and its missing documentation. I cannot force my perspective.

Thanks
Vipin Varghese