* [dpdk-dev] CONFIG_RTE_MAX_MEM_MB fails in DPDK18.05 @ 2019-12-07 17:01 Kamaraj P 2019-12-10 10:23 ` Burakov, Anatoly 0 siblings, 1 reply; 13+ messages in thread From: Kamaraj P @ 2019-12-07 17:01 UTC (permalink / raw) To: dev; +Cc: Nageswara Rao Penumarthy, Kamaraj P (kamp) Hello All, Currently, we are facing a memory allocation failure in memseg_primary_init(). We configured CONFIG_RTE_MAX_MEM_MB to 512MB and set the number of huge pages for our platform accordingly, but the virtual memory allocation is failing. It appears that it is trying to allocate CONFIG_RTE_MAX_MEMSEG_PER_LIST * huge page size (i.e. 8192 * 2MB = 0x400000000) of virtual memory, and that allocation fails. We also tried changing CONFIG_RTE_MAX_MEMSEG_PER_LIST to 64, with which the virtual memory allocation succeeds for 128MB (64 * 2MB). But 128MB of memory is not enough, and it causes a PCIe enumeration failure. We are not able to allocate virtual memory beyond 128MB by increasing CONFIG_RTE_MAX_MEMSEG_PER_LIST beyond 64. Are there any settings (arguments) we need to pass as part of rte_eal_init() to make the virtual memory allocation succeed? Please advise. Thanks, Kamaraj ^ permalink raw reply [flat|nested] 13+ messages in thread
* Re: [dpdk-dev] CONFIG_RTE_MAX_MEM_MB fails in DPDK18.05 2019-12-07 17:01 [dpdk-dev] CONFIG_RTE_MAX_MEM_MB fails in DPDK18.05 Kamaraj P @ 2019-12-10 10:23 ` Burakov, Anatoly 2020-02-17 9:57 ` Kamaraj P 0 siblings, 1 reply; 13+ messages in thread From: Burakov, Anatoly @ 2019-12-10 10:23 UTC (permalink / raw) To: Kamaraj P, dev; +Cc: Nageswara Rao Penumarthy, Kamaraj P (kamp) On 07-Dec-19 5:01 PM, Kamaraj P wrote: > Hello All, > > Currently, we are facing an issue with memory allocation failure > in memseg_primary_init(). > When we configure the CONFIG_RTE_MAX_MEM_MB to 512MB and correspondingly > configured the number of huge pages for our platform. But the virtual > memory allocation is failing. > > It appears that its trying to allocate CONFIG_RTE_MAX_MEMSEG_PER_LIST * > Huge page size (i.e. 8192 * 2MB = 0x400000000) and virtual memory > allocation is failing. > > Also tried changing the CONFIG_RTE_MAX_MEMSEG_PER_LIST to 64 with which > virtual memory allocation is passing for the 128MB (64 * 2MB). But looks > like 128MB memory is not enough and it is causing the PCIe enumeration > failure. > Not able allocate virtual memory beyond 128MB by increasing the > CONFIG_RTE_MAX_MEMSEG_PER_LIST beyond 64. > > Is there are any settings(argument) which we need to pass as part of > rte_eal_init() > to get success in the virtual memory allocation? > Please advise. > > Thanks, > Kamaraj > I don't think there are, as the allocator wasn't designed with such memory constrained use cases in mind. You may want to try --legacy-mem option. -- Thanks, Anatoly ^ permalink raw reply [flat|nested] 13+ messages in thread
* Re: [dpdk-dev] CONFIG_RTE_MAX_MEM_MB fails in DPDK18.05 2019-12-10 10:23 ` Burakov, Anatoly @ 2020-02-17 9:57 ` Kamaraj P 2020-02-19 10:23 ` Burakov, Anatoly 0 siblings, 1 reply; 13+ messages in thread From: Kamaraj P @ 2020-02-17 9:57 UTC (permalink / raw) To: Burakov, Anatoly; +Cc: dev, Nageswara Rao Penumarthy, Kamaraj P (kamp) Hi Anatoly, Thanks for the clarifications. Currently we are migrating to the new DPDK 18.11 (from 17.05). Here is our configuration: ======================================================================= We have configured the "--legacy-mem" option and changed CONFIG_RTE_MAX_MEM_MB to 2048 (we are passing 188 2MB huge pages and no 1G hugepages in the bootargs). Our application deployment has 2G RAM. ======================================================================= We are observing a hang with the above configuration. Please see the logs below: EAL: Detected lcore 0 as core 0 on socket 0 EAL: Support maximum 128 logical core(s) by configuration. EAL: Detected 1 lcore(s) EAL: Detected 1 NUMA nodes EAL: open shared lib /usr/lib64/librte_pmd_ixgbe.so.2.1 EAL: open shared lib /usr/lib64/librte_pmd_e1000.so.1.1 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) EAL: VFIO PCI modules not loaded EAL: No free hugepages reported in hugepages-1048576kB EAL: No free hugepages reported in hugepages-1048576kB EAL: Probing VFIO support... EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) EAL: VFIO modules not loaded, skipping VFIO support... EAL: Ask a virtual area of 0x2e000 bytes EAL: Virtual area found at 0x100000000 (size = 0x2e000) EAL: Setting up physically contiguous memory... 
EAL: Setting maximum number of open files to 4096 EAL: Detected memory type: socket_id:0 hugepage_sz:1073741824 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 EAL: Creating 1 segment lists: n_segs:1 socket_id:0 hugepage_sz:1073741824 EAL: Ask a virtual area of 0x1000 bytes EAL: Virtual area found at 0x10002e000 (size = 0x1000) EAL: Memseg list allocated: 0x100000kB at socket 0 EAL: Ask a virtual area of 0x40000000 bytes <<< --- stuck here --- >>> Are there any other DPDK options through which we can resolve the above issue? Any thoughts? For example, would passing the *--socket-limit* and *-m* parameters during EAL init help? Please advise. Thanks, Kamaraj On Tue, Dec 10, 2019 at 3:53 PM Burakov, Anatoly <anatoly.burakov@intel.com> wrote: > On 07-Dec-19 5:01 PM, Kamaraj P wrote: > > Hello All, > > > > Currently, we are facing an issue with memory allocation failure > > in memseg_primary_init(). > > When we configure the CONFIG_RTE_MAX_MEM_MB to 512MB and correspondingly > > configured the number of huge pages for our platform. But the virtual > > memory allocation is failing. > > > > It appears that its trying to allocate CONFIG_RTE_MAX_MEMSEG_PER_LIST * > > Huge page size (i.e. 8192 * 2MB = 0x400000000) and virtual memory > > allocation is failing. > > > > Also tried changing the CONFIG_RTE_MAX_MEMSEG_PER_LIST to 64 with which > > virtual memory allocation is passing for the 128MB (64 * 2MB). But looks > > like 128MB memory is not enough and it is causing the PCIe enumeration > > failure. > > Not able allocate virtual memory beyond 128MB by increasing the > > CONFIG_RTE_MAX_MEMSEG_PER_LIST beyond 64. > > > > Is there are any settings(argument) which we need to pass as part of > > rte_eal_init() > > to get success in the virtual memory allocation? > > Please advise. > > > > Thanks, > > Kamaraj > > > > I don't think there are, as the allocator wasn't designed with such > memory constrained use cases in mind. 
You may want to try --legacy-mem > option. > > -- > Thanks, > Anatoly > ^ permalink raw reply [flat|nested] 13+ messages in thread
* Re: [dpdk-dev] CONFIG_RTE_MAX_MEM_MB fails in DPDK18.05 2020-02-17 9:57 ` Kamaraj P @ 2020-02-19 10:23 ` Burakov, Anatoly 2020-02-19 10:56 ` Kevin Traynor 0 siblings, 1 reply; 13+ messages in thread From: Burakov, Anatoly @ 2020-02-19 10:23 UTC (permalink / raw) To: Kamaraj P; +Cc: dev, Nageswara Rao Penumarthy, Kamaraj P (kamp) On 17-Feb-20 9:57 AM, Kamaraj P wrote: > Hi Anatoly, > Thanks for the clarifications. > > Currently we are migrating to the new DPDK 18.11 ( from 17.05). Here is > our configuration: > ======================================================================= > We have configured the "--legacy-mem" option and changed the > CONFIG_RTE_MAX_MEM_MB to 2048 (and we are passing 2MB huge page 188 and > no 1G hugepages in the bootargs). > Our application deployment as 2G RAM > ======================================================================= > We are observing the hang issue, with above configuration. > Please see the below logs: > EAL: Detected lcore 0 as core 0 on socket 0 > EAL: Support maximum 128 logical core(s) by configuration. > EAL: Detected 1 lcore(s) > EAL: Detected 1 NUMA nodes > EAL: open shared lib /usr/lib64/librte_pmd_ixgbe.so.2.1 > EAL: open shared lib /usr/lib64/librte_pmd_e1000.so.1.1 > EAL: Multi-process socket /var/run/dpdk/rte/mp_socket > EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or > directory) > EAL: VFIO PCI modules not loaded > EAL: No free hugepages reported in hugepages-1048576kB > EAL: No free hugepages reported in hugepages-1048576kB > EAL: Probing VFIO support... > EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) > EAL: VFIO modules not loaded, skipping VFIO support... > EAL: Ask a virtual area of 0x2e000 bytes > EAL: Virtual area found at 0x100000000 (size = 0x2e000) > EAL: Setting up physically contiguous memory... 
> EAL: Setting maximum number of open files to 4096 > EAL: Detected memory type: socket_id:0 hugepage_sz:1073741824 > EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 > EAL: Creating 1 segment lists: n_segs:1 socket_id:0 hugepage_sz:1073741824 > EAL: Ask a virtual area of 0x1000 bytes > EAL: Virtual area found at 0x10002e000 (size = 0x1000) > EAL: Memseg list allocated: 0x100000kB at socket 0 > EAL: Ask a virtual area of 0x40000000 bytes > <<< --- struck here ---> >>>> > > > Is there any other dpdk options thro which we can resolve the above > issue ? Any thoughts ? > Like passing the *--socket-limit* and *--m *parameter etc during the EAL > Init (could help ???). > Please suggest us. > It sounds like it hangs in eal_get_virtual_area() - we've had a similar issue before, not sure if the fix was backported to 18.11. Is this patch present in your code? http://patches.dpdk.org/patch/51943/ If not, it would be of great help if you could find the exact spot where the hang happens. -- Thanks, Anatoly ^ permalink raw reply [flat|nested] 13+ messages in thread
* Re: [dpdk-dev] CONFIG_RTE_MAX_MEM_MB fails in DPDK18.05 2020-02-19 10:23 ` Burakov, Anatoly @ 2020-02-19 10:56 ` Kevin Traynor 2020-02-19 11:16 ` Kamaraj P 0 siblings, 1 reply; 13+ messages in thread From: Kevin Traynor @ 2020-02-19 10:56 UTC (permalink / raw) To: Burakov, Anatoly, Kamaraj P Cc: dev, Nageswara Rao Penumarthy, Kamaraj P (kamp) On 19/02/2020 10:23, Burakov, Anatoly wrote: > On 17-Feb-20 9:57 AM, Kamaraj P wrote: >> Hi Anatoly, >> Thanks for the clarifications. >> >> Currently we are migrating to the new DPDK 18.11 ( from 17.05). Here is >> our configuration: >> ======================================================================= >> We have configured the "--legacy-mem" option and changed the >> CONFIG_RTE_MAX_MEM_MB to 2048 (and we are passing 2MB huge page 188 and >> no 1G hugepages in the bootargs). >> Our application deployment as 2G RAM >> ======================================================================= >> We are observing the hang issue, with above configuration. >> Please see the below logs: >> EAL: Detected lcore 0 as core 0 on socket 0 >> EAL: Support maximum 128 logical core(s) by configuration. >> EAL: Detected 1 lcore(s) >> EAL: Detected 1 NUMA nodes >> EAL: open shared lib /usr/lib64/librte_pmd_ixgbe.so.2.1 >> EAL: open shared lib /usr/lib64/librte_pmd_e1000.so.1.1 >> EAL: Multi-process socket /var/run/dpdk/rte/mp_socket >> EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or >> directory) >> EAL: VFIO PCI modules not loaded >> EAL: No free hugepages reported in hugepages-1048576kB >> EAL: No free hugepages reported in hugepages-1048576kB >> EAL: Probing VFIO support... >> EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) >> EAL: VFIO modules not loaded, skipping VFIO support... >> EAL: Ask a virtual area of 0x2e000 bytes >> EAL: Virtual area found at 0x100000000 (size = 0x2e000) >> EAL: Setting up physically contiguous memory... 
>> EAL: Setting maximum number of open files to 4096 >> EAL: Detected memory type: socket_id:0 hugepage_sz:1073741824 >> EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 >> EAL: Creating 1 segment lists: n_segs:1 socket_id:0 hugepage_sz:1073741824 >> EAL: Ask a virtual area of 0x1000 bytes >> EAL: Virtual area found at 0x10002e000 (size = 0x1000) >> EAL: Memseg list allocated: 0x100000kB at socket 0 >> EAL: Ask a virtual area of 0x40000000 bytes >> <<< --- struck here ---> >>>> >> >> >> Is there any other dpdk options thro which we can resolve the above >> issue ? Any thoughts ? >> Like passing the *--socket-limit* and *--m *parameter etc during the EAL >> Init (could help ???). >> Please suggest us. >> > > It sounds like it hangs in eal_get_virtual_area() - we've had a similar > issue before, not sure if the fix was backported to 18.11. Is this patch > present in your code? > > http://patches.dpdk.org/patch/51943/ > In 18.11 LTS releases since v18.11.2. Current release is v18.11.6. commit 558509fbb2b0a0f5803f348634e4956ff8cb5214 Author: Shahaf Shuler <shahafs@mellanox.com> Date: Sun Mar 31 11:43:48 2019 +0300 mem: limit use of address hint [ upstream commit 237060c4ad15b4ee9002be3c0e56ac3070eceb48 ] > If not, it would be of great help if you could find the exact spot where > the hang happens. > ^ permalink raw reply [flat|nested] 13+ messages in thread
* Re: [dpdk-dev] CONFIG_RTE_MAX_MEM_MB fails in DPDK18.05 2020-02-19 10:56 ` Kevin Traynor @ 2020-02-19 11:16 ` Kamaraj P 2020-02-19 14:23 ` Burakov, Anatoly 0 siblings, 1 reply; 13+ messages in thread From: Kamaraj P @ 2020-02-19 11:16 UTC (permalink / raw) To: Kevin Traynor Cc: Burakov, Anatoly, dev, Nageswara Rao Penumarthy, Kamaraj P (kamp) Hi Kevin/Anatoly, Yes, we already have the patch included in our code base. Looks like it gets stuck in the following piece of code: mapped_addr = mmap(requested_addr, (size_t)map_sz, PROT_READ, mmap_flags, -1, 0); Could you please share your thoughts on this? Thanks, Kamaraj On Wed, Feb 19, 2020 at 4:26 PM Kevin Traynor <ktraynor@redhat.com> wrote: > On 19/02/2020 10:23, Burakov, Anatoly wrote: > > On 17-Feb-20 9:57 AM, Kamaraj P wrote: > >> Hi Anatoly, > >> Thanks for the clarifications. > >> > >> Currently we are migrating to the new DPDK 18.11 ( from 17.05). Here > is > >> our configuration: > >> ======================================================================= > >> We have configured the "--legacy-mem" option and changed the > >> CONFIG_RTE_MAX_MEM_MB to 2048 (and we are passing 2MB huge page 188 and > >> no 1G hugepages in the bootargs). > >> Our application deployment as 2G RAM > >> ======================================================================= > >> We are observing the hang issue, with above configuration. > >> Please see the below logs: > >> EAL: Detected lcore 0 as core 0 on socket 0 > >> EAL: Support maximum 128 logical core(s) by configuration. > >> EAL: Detected 1 lcore(s) > >> EAL: Detected 1 NUMA nodes > >> EAL: open shared lib /usr/lib64/librte_pmd_ixgbe.so.2.1 > >> EAL: open shared lib /usr/lib64/librte_pmd_e1000.so.1.1 > >> EAL: Multi-process socket /var/run/dpdk/rte/mp_socket > >> EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or > >> directory) > >> EAL: VFIO PCI modules not loaded > >> EAL: No free hugepages reported in hugepages-1048576kB > >> EAL: No free hugepages reported in hugepages-1048576kB > >> EAL: Probing VFIO support... > >> EAL: Module /sys/module/vfio not found! error 2 (No such file or > directory) > >> EAL: VFIO modules not loaded, skipping VFIO support... > >> EAL: Ask a virtual area of 0x2e000 bytes > >> EAL: Virtual area found at 0x100000000 (size = 0x2e000) > >> EAL: Setting up physically contiguous memory... > >> EAL: Setting maximum number of open files to 4096 > >> EAL: Detected memory type: socket_id:0 hugepage_sz:1073741824 > >> EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 > >> EAL: Creating 1 segment lists: n_segs:1 socket_id:0 > hugepage_sz:1073741824 > >> EAL: Ask a virtual area of 0x1000 bytes > >> EAL: Virtual area found at 0x10002e000 (size = 0x1000) > >> EAL: Memseg list allocated: 0x100000kB at socket 0 > >> EAL: Ask a virtual area of 0x40000000 bytes > >> <<< --- struck here ---> >>>> > >> > >> > >> Is there any other dpdk options thro which we can resolve the above > >> issue ? Any thoughts ? > >> Like passing the *--socket-limit* and *--m *parameter etc during the > EAL > >> Init (could help ???). > >> Please suggest us. > >> > > > > It sounds like it hangs in eal_get_virtual_area() - we've had a similar > > issue before, not sure if the fix was backported to 18.11. Is this patch > > present in your code? > > > > http://patches.dpdk.org/patch/51943/ > > > > In 18.11 LTS releases since v18.11.2. Current release is v18.11.6. > > commit 558509fbb2b0a0f5803f348634e4956ff8cb5214 > Author: Shahaf Shuler <shahafs@mellanox.com> > Date: Sun Mar 31 11:43:48 2019 +0300 > > mem: limit use of address hint > > [ upstream commit 237060c4ad15b4ee9002be3c0e56ac3070eceb48 ] > > > If not, it would be of great help if you could find the exact spot where > > the hang happens. 
> > > > > > > ^ permalink raw reply [flat|nested] 13+ messages in thread
* Re: [dpdk-dev] CONFIG_RTE_MAX_MEM_MB fails in DPDK18.05 2020-02-19 11:16 ` Kamaraj P @ 2020-02-19 14:23 ` Burakov, Anatoly 2020-02-19 15:02 ` Kamaraj P 0 siblings, 1 reply; 13+ messages in thread From: Burakov, Anatoly @ 2020-02-19 14:23 UTC (permalink / raw) To: Kamaraj P, Kevin Traynor; +Cc: dev, Nageswara Rao Penumarthy, Kamaraj P (kamp) On 19-Feb-20 11:16 AM, Kamaraj P wrote: > Hi Kevin/Anatoly, > > Yes we have the patch already included in our code base. > > Looks like it get struck in the below piece of the code: > mapped_addr = mmap(requested_addr, (size_t)map_sz, PROT_READ, > mmap_flags, -1, 0); > > Could you please share your thoughts on this? > > Thanks, > Kamaraj > Hi, If it's stuck mapping, that probably means it is pinning the memory. Did you call mlockall() (or equivalent) before EAL initialization? -- Thanks, Anatoly ^ permalink raw reply [flat|nested] 13+ messages in thread
* Re: [dpdk-dev] CONFIG_RTE_MAX_MEM_MB fails in DPDK18.05 2020-02-19 14:23 ` Burakov, Anatoly @ 2020-02-19 15:02 ` Kamaraj P 2020-02-19 15:28 ` Burakov, Anatoly 0 siblings, 1 reply; 13+ messages in thread From: Kamaraj P @ 2020-02-19 15:02 UTC (permalink / raw) To: Burakov, Anatoly Cc: Kevin Traynor, dev, Nageswara Rao Penumarthy, Kamaraj P (kamp), mtang2 Thanks for the suggestions. We didn't have the --mlockall parameter option in rte_eal_init(). We have just tried the option and our application says *unrecognized option*. Let us check further on this and let you know. Thanks, Kamaraj On Wed, Feb 19, 2020 at 7:53 PM Burakov, Anatoly <anatoly.burakov@intel.com> wrote: > On 19-Feb-20 11:16 AM, Kamaraj P wrote: > > Hi Kevin/Anatoly, > > > > Yes we have the patch already included in our code base. > > > > Looks like it get struck in the below piece of the code: > > mapped_addr = mmap(requested_addr, (size_t)map_sz, PROT_READ, > > mmap_flags, -1, 0); > > > > Could you please share your thoughts on this? > > > > Thanks, > > Kamaraj > > > > Hi, > > If it's stuck mapping, that probably means it is pinning the memory. Did > you call mlockall() (or equivalent) before EAL initialization? > > -- > Thanks, > Anatoly > ^ permalink raw reply [flat|nested] 13+ messages in thread
* Re: [dpdk-dev] CONFIG_RTE_MAX_MEM_MB fails in DPDK18.05 2020-02-19 15:02 ` Kamaraj P @ 2020-02-19 15:28 ` Burakov, Anatoly 2020-02-19 15:42 ` Kamaraj P 0 siblings, 1 reply; 13+ messages in thread From: Burakov, Anatoly @ 2020-02-19 15:28 UTC (permalink / raw) To: Kamaraj P Cc: Kevin Traynor, dev, Nageswara Rao Penumarthy, Kamaraj P (kamp), mtang2 On 19-Feb-20 3:02 PM, Kamaraj P wrote: > Thanks for the suggestions. We didnt have --mlockall parameter option in > the rte_eal_init(). > we have just tried the option and our application says an *unrecognized > option*. > Lets us check further on this and let you know. > > Thanks, > Kamaraj > No, that's not an EAL option, that's a testpmd option. However, that's not really what I was asking. If you have a custom application, and that application called mlockall() (with appropriate flags) before EAL init, that would make all pages pinned, present and future. That means, if you mmap() anonymous memory (like EAL init does), it will take a long time, because all of that memory will be pinned; and since at that point we are not yet using hugepages, these are 4K pages, which will indeed take a long time. -- Thanks, Anatoly ^ permalink raw reply [flat|nested] 13+ messages in thread
* Re: [dpdk-dev] CONFIG_RTE_MAX_MEM_MB fails in DPDK18.05 2020-02-19 15:28 ` Burakov, Anatoly @ 2020-02-19 15:42 ` Kamaraj P 2020-02-19 16:00 ` Burakov, Anatoly 0 siblings, 1 reply; 13+ messages in thread From: Kamaraj P @ 2020-02-19 15:42 UTC (permalink / raw) To: Burakov, Anatoly Cc: Kevin Traynor, dev, Nageswara Rao Penumarthy, Kamaraj P (kamp), mtang2 Hi Anatoly, Thanks for the suggestions. We have just changed our application to invoke mlockall() before rte_eal_init(). Looks like it does not help either. if (mlockall(MCL_CURRENT | MCL_FUTURE)) { printf("Failed mlockall !! ******\n"); } ret = rte_eal_init(argc, args); Looks like we are still observing the stuck issue when allocating virtual pages. EAL: Detected lcore 0 as core 0 on socket 0 EAL: Support maximum 128 logical core(s) by configuration. EAL: Detected 1 lcore(s) EAL: Detected 1 NUMA nodes EAL: open shared lib /usr/lib64/librte_pmd_ixgbe.so.2.1 EAL: open shared lib /usr/lib64/librte_pmd_e1000.so.1.1 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) EAL: VFIO PCI modules not loaded EAL: Probing VFIO support... EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) EAL: VFIO modules not loaded, skipping VFIO support... EAL: Ask a virtual area of 0x2e000 bytes EAL: Virtual area found at 0x100000000 (size = 0x2e000) EAL: Setting up physically contiguous memory... EAL: Setting maximum number of open files to 4096 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 EAL: Creating 32 segment lists: n_segs:1024 socket_id:0 hugepage_sz:2097152 EAL: Ask a virtual area of 0xd000 bytes EAL: Virtual area found at 0x10002e000 (size = 0xd000) EAL: Memseg list allocated: 0x800kB at socket 0 EAL: Ask a virtual area of 0x80000000 bytes Could you please suggest if there is any other option which we need to try out. 
Thanks, Kamaraj On Wed, Feb 19, 2020 at 8:58 PM Burakov, Anatoly <anatoly.burakov@intel.com> wrote: > On 19-Feb-20 3:02 PM, Kamaraj P wrote: > > Thanks for the suggestions. We didnt have --mlockall parameter option in > > the rte_eal_init(). > > we have just tried the option and our application says an *unrecognized > > option*. > > Lets us check further on this and let you know. > > > > Thanks, > > Kamaraj > > > > No, that's not an EAL option, that's a testpmd option. However, that's > not really what i was asking. > > If you have a custom application, and that application called mlockall() > (with appropriate flags) before EAL init, that would make all pages > pinned, present and future. That means, if you mmap() anonymous memory > (like EAL init does), it will take a long time because all of that > memory will be pinned (and since it's 4K pages because at that point, > we're not using hugepages yet, that will indeed take a long time). > > -- > Thanks, > Anatoly > ^ permalink raw reply [flat|nested] 13+ messages in thread
* Re: [dpdk-dev] CONFIG_RTE_MAX_MEM_MB fails in DPDK18.05 2020-02-19 15:42 ` Kamaraj P @ 2020-02-19 16:00 ` Burakov, Anatoly 2020-02-19 16:20 ` Kamaraj P 0 siblings, 1 reply; 13+ messages in thread From: Burakov, Anatoly @ 2020-02-19 16:00 UTC (permalink / raw) To: Kamaraj P Cc: Kevin Traynor, dev, Nageswara Rao Penumarthy, Kamaraj P (kamp), mtang2 On 19-Feb-20 3:42 PM, Kamaraj P wrote: > Hi Anatoly, > Thanks for the suggestions. Yeah we have just changed in our application > to invoke mlockall() before rte_eal_init(). Looks like it does not help > either. > > if (mlockall(MCL_CURRENT | MCL_FUTURE)) { > printf("Failed mlockall !! ******\n"); > } > ret = rte_eal_init(argc, args); > > Looks like still observing the struck issue when allocating virtual pages. > EAL: Detected lcore 0 as core 0 on socket 0 > EAL: Support maximum 128 logical core(s) by configuration. > EAL: Detected 1 lcore(s) > EAL: Detected 1 NUMA nodes > EAL: open shared lib /usr/lib64/librte_pmd_ixgbe.so.2.1 > EAL: open shared lib /usr/lib64/librte_pmd_e1000.so.1.1 > EAL: Multi-process socket /var/run/dpdk/rte/mp_socket > EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or > directory) > EAL: VFIO PCI modules not loaded > EAL: Probing VFIO support... > EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) > EAL: VFIO modules not loaded, skipping VFIO support... > EAL: Ask a virtual area of 0x2e000 bytes > EAL: Virtual area found at 0x100000000 (size = 0x2e000) > EAL: Setting up physically contiguous memory... 
> EAL: Setting maximum number of open files to 4096 > EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 > EAL: Creating 32 segment lists: n_segs:1024 socket_id:0 hugepage_sz:2097152 > EAL: Ask a virtual area of 0xd000 bytes > EAL: Virtual area found at 0x10002e000 (size = 0xd000) > EAL: Memseg list allocated: 0x800kB at socket 0 > EAL: Ask a virtual area of 0x80000000 bytes > > Could you please suggest if there is any other option which we need to > try it out. Does this only happen with your application, or does it happen with DPDK example applications or test/testpmd apps? -- Thanks, Anatoly ^ permalink raw reply [flat|nested] 13+ messages in thread
* Re: [dpdk-dev] CONFIG_RTE_MAX_MEM_MB fails in DPDK18.05 2020-02-19 16:00 ` Burakov, Anatoly @ 2020-02-19 16:20 ` Kamaraj P 2020-02-20 10:02 ` Burakov, Anatoly 0 siblings, 1 reply; 13+ messages in thread From: Kamaraj P @ 2020-02-19 16:20 UTC (permalink / raw) To: Burakov, Anatoly Cc: Kevin Traynor, dev, Nageswara Rao Penumarthy, Kamaraj P (kamp), mtang2 Hi Anatoly, Yes, we are facing the issue only with our custom application. Earlier we tried the l2fwd DPDK application and did not see any issue with memory initialization. Not sure whether we missed any other options. BTW, when we tried the l2fwd application, it did not seem to hang during memory initialization, whereas our application is stuck when getting memory. Do we need to tune any config parameters in DPDK? Please advise. Please see the screenshot below for reference: ubuntu@client-server:~/dpdk/dpdk-stable/app/test-pmd/build/app$ sudo ./testpmd -l 0-3 -n 4 --log-level eal,8 -- -i --portmask=0x1 --nb-cores=2 EAL: Detected lcore 0 as core 0 on socket 0 EAL: Detected lcore 1 as core 0 on socket 0 EAL: Detected lcore 2 as core 0 on socket 0 EAL: Detected lcore 3 as core 0 on socket 0 EAL: Support maximum 128 logical core(s) by configuration. EAL: Detected 4 lcore(s) EAL: Detected 1 NUMA nodes EAL: Multi-process socket /var/run/dpdk/rte/mp_socket EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) EAL: VFIO PCI modules not loaded EAL: DPAA Bus not present. Skipping. EAL: Probing VFIO support... EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) EAL: VFIO modules not loaded, skipping VFIO support... EAL: Ask a virtual area of 0x2e000 bytes EAL: Virtual area found at 0x100000000 (size = 0x2e000) EAL: Setting up physically contiguous memory... 
EAL: Setting maximum number of open files to 1048576 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 EAL: Ask a virtual area of 0x61000 bytes EAL: Virtual area found at 0x10002e000 (size = 0x61000) EAL: Memseg list allocated: 0x800kB at socket 0 EAL: Ask a virtual area of 0x400000000 bytes EAL: Virtual area found at 0x100200000 (size = 0x400000000) EAL: Ask a virtual area of 0x61000 bytes EAL: Virtual area found at 0x500200000 (size = 0x61000) EAL: Memseg list allocated: 0x800kB at socket 0 EAL: Ask a virtual area of 0x400000000 bytes EAL: Virtual area found at 0x500400000 (size = 0x400000000) EAL: Ask a virtual area of 0x61000 bytes EAL: Virtual area found at 0x900400000 (size = 0x61000) EAL: Memseg list allocated: 0x800kB at socket 0 EAL: Ask a virtual area of 0x400000000 bytes EAL: Virtual area found at 0x900600000 (size = 0x400000000) EAL: Ask a virtual area of 0x61000 bytes EAL: Virtual area found at 0xd00600000 (size = 0x61000) EAL: Memseg list allocated: 0x800kB at socket 0 EAL: Ask a virtual area of 0x400000000 bytes EAL: Virtual area found at 0xd00800000 (size = 0x400000000) EAL: TSC frequency is ~2094950 KHz EAL: Master lcore 0 is ready (tid=7f6507906c00;cpuset=[0]) EAL: lcore 1 is ready (tid=7f6505727700;cpuset=[1]) EAL: lcore 2 is ready (tid=7f6504f26700;cpuset=[2]) EAL: lcore 3 is ready (tid=7f6504725700;cpuset=[3]) EAL: Trying to obtain current memory policy. 
EAL: Setting policy MPOL_PREFERRED for socket 0 EAL: Restoring previous memory policy: 0 EAL: request: mp_malloc_sync EAL: Heap on socket 0 was expanded by 2MB EAL: PCI device 0000:03:00.0 on NUMA socket -1 EAL: Invalid NUMA socket, default to 0 EAL: probe driver: 15ad:7b0 net_vmxnet3 EAL: Not managed by a supported kernel driver, skipped EAL: PCI device 0000:0b:00.0 on NUMA socket -1 EAL: Invalid NUMA socket, default to 0 EAL: probe driver: 15ad:7b0 net_vmxnet3 EAL: Not managed by a supported kernel driver, skipped EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) testpmd: No probed ethernet devices EAL: Trying to obtain current memory policy. EAL: Setting policy MPOL_PREFERRED for socket 0 EAL: Restoring previous memory policy: 0 EAL: request: mp_malloc_sync EAL: Heap on socket 0 was expanded by 138MB Interactive-mode selected testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=171456, size=2176, socket=0 testpmd: preferred mempool ops selected: ring_mp_mc EAL: Trying to obtain current memory policy. EAL: Setting policy MPOL_PREFERRED for socket 0 EAL: Restoring previous memory policy: 0 EAL: request: mp_malloc_sync EAL: Heap on socket 0 was expanded by 4MB EAL: Trying to obtain current memory policy. EAL: Setting policy MPOL_PREFERRED for socket 0 EAL: alloc_seg(): mmap() failed: Cannot allocate memory EAL: Ask a virtual area of 0x200000 bytes EAL: Virtual area found at 0x110200000 (size = 0x200000) EAL: attempted to allocate 194 segments, but only 56 were allocated EAL: Restoring previous memory policy: 0 EAL: Trying to obtain current memory policy. 
EAL: Setting policy MPOL_PREFERRED for socket 0 EAL: alloc_seg(): mmap() failed: Cannot allocate memory EAL: Ask a virtual area of 0x200000 bytes EAL: Virtual area found at 0x110200000 (size = 0x200000) EAL: attempted to allocate 195 segments, but only 56 were allocated EAL: Restoring previous memory policy: 0 EAL: request: mp_malloc_sync EAL: Heap on socket 0 was shrunk by 4MB EAL: Error - exiting with code: 1 Cause: Creation of mbuf pool for socket 0 failed: Cannot allocate memory Thanks, Kamaraj On Wed, Feb 19, 2020 at 9:30 PM Burakov, Anatoly <anatoly.burakov@intel.com> wrote: > On 19-Feb-20 3:42 PM, Kamaraj P wrote: > > Hi Anatoly, > > Thanks for the suggestions. Yeah we have just changed in our application > > to invoke mlockall() before rte_eal_init(). Looks like it does not help > > either. > > > > if (mlockall(MCL_CURRENT | MCL_FUTURE)) { > > printf("Failed mlockall !! ******\n"); > > } > > ret = rte_eal_init(argc, args); > > > > Looks like still observing the struck issue when allocating virtual > pages. > > EAL: Detected lcore 0 as core 0 on socket 0 > > EAL: Support maximum 128 logical core(s) by configuration. > > EAL: Detected 1 lcore(s) > > EAL: Detected 1 NUMA nodes > > EAL: open shared lib /usr/lib64/librte_pmd_ixgbe.so.2.1 > > EAL: open shared lib /usr/lib64/librte_pmd_e1000.so.1.1 > > EAL: Multi-process socket /var/run/dpdk/rte/mp_socket > > EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or > > directory) > > EAL: VFIO PCI modules not loaded > > EAL: Probing VFIO support... > > EAL: Module /sys/module/vfio not found! error 2 (No such file or > directory) > > EAL: VFIO modules not loaded, skipping VFIO support... > > EAL: Ask a virtual area of 0x2e000 bytes > > EAL: Virtual area found at 0x100000000 (size = 0x2e000) > > EAL: Setting up physically contiguous memory... 
> > EAL: Setting maximum number of open files to 4096 > > EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 > > EAL: Creating 32 segment lists: n_segs:1024 socket_id:0 > hugepage_sz:2097152 > > EAL: Ask a virtual area of 0xd000 bytes > > EAL: Virtual area found at 0x10002e000 (size = 0xd000) > > EAL: Memseg list allocated: 0x800kB at socket 0 > > EAL: Ask a virtual area of 0x80000000 bytes > > > > Could you please suggest if there is any other option which we need to > > try it out. > Does this only happen with your application, or does it happen with DPDK > example applications or test/testpmd apps? > > -- > Thanks, > Anatoly > ^ permalink raw reply [flat|nested] 13+ messages in thread
* Re: [dpdk-dev] CONFIG_RTE_MAX_MEM_MB fails in DPDK18.05 2020-02-19 16:20 ` Kamaraj P @ 2020-02-20 10:02 ` Burakov, Anatoly 0 siblings, 0 replies; 13+ messages in thread From: Burakov, Anatoly @ 2020-02-20 10:02 UTC (permalink / raw) To: Kamaraj P Cc: Kevin Traynor, dev, Nageswara Rao Penumarthy, Kamaraj P (kamp), mtang2 On 19-Feb-20 4:20 PM, Kamaraj P wrote: > Hi Anatoly, > > Yes we are facing an issue with our custom applications. > Earlier we have tried with l2fwd DPDK application and does not see any > issue with memory initialization. > Not sure whether we missed any other options. > > BTW when we tried with l2fwd application, the application does not seem > to hang during the memory initialization where as our application is > kind of struck when getting memory. > Do we tune any config parameters from DPDK ? > Please advise. > Hi, This doesn't look like an issue with DPDK. It is more likely that something else triggers this (similar to mlockall()). Are you sure there are no other mem lock calls anywhere in your application before EAL init (i.e. inside the libraries you use, etc.)? Because so far, page pinning is the only thing I can think of that would cause this sort of behavior. -- Thanks, Anatoly ^ permalink raw reply [flat|nested] 13+ messages in thread
end of thread, other threads:[~2020-02-20 10:02 UTC | newest] Thread overview: 13+ messages (download: mbox.gz / follow: Atom feed) -- links below jump to the message on this page -- 2019-12-07 17:01 [dpdk-dev] CONFIG_RTE_MAX_MEM_MB fails in DPDK18.05 Kamaraj P 2019-12-10 10:23 ` Burakov, Anatoly 2020-02-17 9:57 ` Kamaraj P 2020-02-19 10:23 ` Burakov, Anatoly 2020-02-19 10:56 ` Kevin Traynor 2020-02-19 11:16 ` Kamaraj P 2020-02-19 14:23 ` Burakov, Anatoly 2020-02-19 15:02 ` Kamaraj P 2020-02-19 15:28 ` Burakov, Anatoly 2020-02-19 15:42 ` Kamaraj P 2020-02-19 16:00 ` Burakov, Anatoly 2020-02-19 16:20 ` Kamaraj P 2020-02-20 10:02 ` Burakov, Anatoly