DPDK usage discussions
* [dpdk-users] segfault with dpdk 16.07 in rte_mempool_populate_phys
@ 2016-08-17 15:03 martin_curran-gray
  2016-08-19  6:10 ` Shreyansh Jain
  0 siblings, 1 reply; 7+ messages in thread
From: martin_curran-gray @ 2016-08-17 15:03 UTC (permalink / raw)
  To: users

Hi All,

Trying to move an application from 2.2.0 to 16.07

Tested out l2fwd in 16.07, quite happy with that (in fact very happy with the performance improvement I measure over 2.2.0)

But now trying to get our app moved over, and coming unstuck.

As well as the main packet mbuf pools etc, our app has a little pool for error messages

When that pool is created with "rte_mempool_create", I get a segmentation fault from rte_mempool_populate_phys

It seems this call in rte_mempool.c has nothing valid to work with:

    ret = rte_mempool_ops_alloc(mp);

I can see that section 5.5 of the user guide talks about Mempool Handlers, and a new API involving rte_mempool_create_empty and rte_mempool_set_ops_byname
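
For what it's worth, my reading of that section is that the two-step equivalent of what we do would look roughly like the sketch below (the name and sizes just mirror our little pool, so treat it as illustrative rather than tested code):

    struct rte_mempool *mp;

    mp = rte_mempool_create_empty("Error Ind Mempool", 8, 256, 4,
                                  sizeof(struct rte_pktmbuf_pool_private),
                                  SOCKET_ID_ANY, 0);
    if (mp == NULL)
        rte_exit(EXIT_FAILURE, "cannot create empty mempool\n");

    /* "ring_mp_mc" is what flags == 0 is supposed to give you anyway */
    if (rte_mempool_set_ops_byname(mp, "ring_mp_mc", NULL) != 0)
        rte_exit(EXIT_FAILURE, "cannot set mempool handler\n");

    rte_pktmbuf_pool_init(mp, NULL);                    /* mp_init  */

    if (rte_mempool_populate_default(mp) < 0)
        rte_exit(EXIT_FAILURE, "cannot populate mempool\n");

    rte_mempool_obj_iter(mp, rte_pktmbuf_init, NULL);   /* obj_init */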

But the code of rte_mempool_create seems to call rte_mempool_create_empty and rte_mempool_set_ops_byname itself

Then it calls mp_init before rte_mempool_populate_default, inside which the call to rte_mempool_populate_phys eventually cores

I've tried building and running the ip_reassembly example program, and I can see that it uses rte_mempool_create in a similar, although admittedly slightly different, fashion.
It has flags set as MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET whereas I have 0,
but the code in rte_mempool_create uses the flags to call set_ops_byname slightly differently depending on which flags you have.
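
As far as I can tell (paraphrasing the 16.07 source from memory, so this may not be exact), the relevant part of rte_mempool_create looks something like:

    mp = rte_mempool_create_empty(name, n, elt_size, cache_size,
                                  private_data_size, socket_id, flags);
    if (mp == NULL)
        return NULL;

    /* pick the ring handler based on the SP/SC flags; 0 -> "ring_mp_mc" */
    if ((flags & MEMPOOL_F_SP_PUT) && (flags & MEMPOOL_F_SC_GET))
        rte_mempool_set_ops_byname(mp, "ring_sp_sc", NULL);
    else if (flags & MEMPOOL_F_SP_PUT)
        rte_mempool_set_ops_byname(mp, "ring_sp_mc", NULL);
    else if (flags & MEMPOOL_F_SC_GET)
        rte_mempool_set_ops_byname(mp, "ring_mp_sc", NULL);
    else
        rte_mempool_set_ops_byname(mp, "ring_mp_mc", NULL);

    if (mp_init)
        mp_init(mp, mp_init_arg);

    if (rte_mempool_populate_default(mp) < 0)   /* <-- this is where mine cores */
        goto fail;

so whichever flags I use, some handler should get attached before the populate step.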

I tried changing my app to use the same parameters in the rte_mempool_create call as the example program, but I still get a segmentation fault

Is there something else I'm missing??

I looked through the ip_reassembly program, but couldn't see it making any extra calls to do anything to the pool before the create is called.

Any help gratefully received

Program terminated with signal 11, Segmentation fault.
#0  0x0000000000000000 in ?? ()

#0  0x0000000000000000 in ?? ()
#1  0x00007f9fa586cdde in rte_mempool_populate_phys (mp=0x7f9f9485dd00,
    vaddr=0x7f9f9485d2c0 <Address 0x7f9f9485d2c0 out of bounds>, paddr=8995066560, len=2560,
    free_cb=0x7f9fa586cbe0 <rte_mempool_memchunk_mz_free>, opaque=0x7f9fb40e4cb4)
    at /root/######/dpdk-16.07/lib/librte_mempool/rte_mempool.c:363
#2  0x00007f9fa586da4a in rte_mempool_populate_default (mp=0x7f9f9485dd00)
    at /root/######/dpdk-16.07/lib/librte_mempool/rte_mempool.c:583
#3  0x00007f9fa586dd49 in rte_mempool_create (name=0x7f9fa588fb56 "Error Ind Mempool", n=<value optimized out>,
    elt_size=256, cache_size=<value optimized out>, private_data_size=<value optimized out>,
    mp_init=0x7f9fa586c2b0 <rte_pktmbuf_pool_init>, mp_init_arg=0x0, obj_init=0x7f9fa586c1c0 <rte_pktmbuf_init>,
    obj_init_arg=0x0, socket_id=-1, flags=0) at /root/######/dpdk-16.07/lib/librte_mempool/rte_mempool.c:909



Thanks

Martin








Martin Curran-Gray
HW/FPGA/SW Engineer
Keysight Technologies UK Ltd


* Re: [dpdk-users] segfault with dpdk 16.07 in rte_mempool_populate_phys
  2016-08-17 15:03 [dpdk-users] segfault with dpdk 16.07 in rte_mempool_populate_phys martin_curran-gray
@ 2016-08-19  6:10 ` Shreyansh Jain
  2016-08-19  8:28   ` martin_curran-gray
  0 siblings, 1 reply; 7+ messages in thread
From: Shreyansh Jain @ 2016-08-19  6:10 UTC (permalink / raw)
  To: martin_curran-gray, users

Hi Martin,

> -----Original Message-----
> From: users [mailto:users-bounces@dpdk.org] On Behalf Of martin_curran-
> gray@keysight.com
> Sent: Wednesday, August 17, 2016 8:34 PM
> To: users@dpdk.org
> Subject: [dpdk-users] segfault with dpdk 16.07 in rte_mempool_populate_phys
> 
> Hi All,
> 
> Trying to move an application from 2.2.0 to 16.07
> 
> Tested out l2fwd in 16.07, quite happy with that ( in fact very happy with
> the performance improvement I measure  over 2.2.0 )
> 
> But now trying to get our app moved over, and coming un-stuck.
> 
> As well as the main packet mbuf pools etc, our app has a little pool for
> error messages
> 
> When that is created with "rte_mempool_create" I get a segmentation fault
> from rte_mempool_populate_phys
> 
> It seems I have nothing valid for this call in rte_mempool.c to work with
> 
>     ret = rte_mempool_ops_alloc(mp);
> 
> I can see in the user guide in section 5.5 it talks about Mempool Handlers,
> and new API to do with rte_mempool_create_empty and
> rte_mempool_set_ops_byname

Indeed there are changes to the mempool allocation system. 16.07 includes support for pluggable pool handlers.
But it should not matter to you if you are directly calling mempool_create (with flags=0), because the default handler "ring_mp_mc" would be attached to your pool.
You can read more about how ring_mp_mc behaves in rte_mempool_ring.c.
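
Roughly (quoting from memory, so the exact helper names may differ), rte_mempool_ring.c defines and registers that handler like this:

    static const struct rte_mempool_ops ops_mp_mc = {
        .name      = "ring_mp_mc",
        .alloc     = common_ring_alloc,
        .free      = common_ring_free,
        .enqueue   = common_ring_mp_enqueue,
        .dequeue   = common_ring_mc_dequeue,
        .get_count = common_ring_get_count,
    };

    /* registered into the ops table at startup via a constructor */
    MEMPOOL_REGISTER_OPS(ops_mp_mc);

The alloc callback here is what rte_mempool_ops_alloc ends up calling for your pool.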

> 
> But the code of rte_mempool_create seems to call rte_mempool_create_empty
> and rte_mempool_set_ops_byname

As you are passing flags=0, I am assuming it would fall back to the default handler - ring_mp_mc (multi-producer/multi-consumer).

> 
> Then it calls the mp_init  before the rte_mempool_populate_default, down in
> which the call to rte_mempool_populate_phys eventually cores
> 
> I've tried building and running the ip_reassembly example program, and I can
> see that it uses rte_mempool_create in a similar, although admittedly
> slightly different fashion.
> it has flags set as  MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET whereas I have 0
> but the code in rte_mempool_create uses the flags to call the set_ops_by_name
> slightly differently depending on what flags you have.

The value of flags depends on the way you use the mempool: single/multiple producer, or single/multiple consumer [for rte_mempool_create and the default available handlers].
But flags=0 should still be a safe bet AFAIK.

> 
> I tried changing my app to use the same parameters in the rte_mempool_create
> call as the example program, but I still get a segmentation fault
> 
> Is there something else I'm missing??

Somehow I feel that the problem is not with the flags, but that there is not enough memory available, because of which rte_memzone_reserve_aligned might have failed. But I am not sure.

> 
> I looked through the ip_reassembly program, but couldn't see it making any
> extra calls to do anything to the pool before the create is called?
> 
> Any help gratefully received
> 
> Program terminated with signal 11, Segmentation fault.
> #0  0x0000000000000000 in ?? ()
> 
> #0  0x0000000000000000 in ?? ()
> #1  0x00007f9fa586cdde in rte_mempool_populate_phys (mp=0x7f9f9485dd00,
>     vaddr=0x7f9f9485d2c0 <Address 0x7f9f9485d2c0 out of bounds>,
> paddr=8995066560, len=2560,
>     free_cb=0x7f9fa586cbe0 <rte_mempool_memchunk_mz_free>,
> opaque=0x7f9fb40e4cb4)
>     at /root/######/dpdk-16.07/lib/librte_mempool/rte_mempool.c:363
> #2  0x00007f9fa586da4a in rte_mempool_populate_default (mp=0x7f9f9485dd00)
>     at /root/######/dpdk-16.07/lib/librte_mempool/rte_mempool.c:583
> #3  0x00007f9fa586dd49 in rte_mempool_create (name=0x7f9fa588fb56 "Error Ind
> Mempool", n=<value optimized out>,
>     elt_size=256, cache_size=<value optimized out>, private_data_size=<value
> optimized out>,
>     mp_init=0x7f9fa586c2b0 <rte_pktmbuf_pool_init>, mp_init_arg=0x0,
> obj_init=0x7f9fa586c1c0 <rte_pktmbuf_init>,
>     obj_init_arg=0x0, socket_id=-1, flags=0) at /root/######/dpdk-
> 16.07/lib/librte_mempool/rte_mempool.c:909

Would it be possible for you to paste the application startup logs? They might give us some more hints.

> 
> 
> 
> Thanks
> 
> Martin
> 
> 
> 
> 
> 
> 
> 
> 
> Martin Curran-Gray
> HW/FPGA/SW Engineer
> Keysight Technologies UK Ltd

-
Shreyansh


* Re: [dpdk-users] segfault with dpdk 16.07 in rte_mempool_populate_phys
  2016-08-19  6:10 ` Shreyansh Jain
@ 2016-08-19  8:28   ` martin_curran-gray
  2016-08-22 13:58     ` Shreyansh Jain
  0 siblings, 1 reply; 7+ messages in thread
From: martin_curran-gray @ 2016-08-19  8:28 UTC (permalink / raw)
  To: shreyansh.jain, users

Hi Shreyansh,

Thanks for your reply, 

Hmmm, I had wondered if the debug output from 16.07 was reduced compared to 2.2.0, but perhaps this is what I should have been concentrating on, rather than the core dump later


On a vm running our app using 2.2.0 at startup, I see:

dpdk: In dpdk_init_eal core_mask is  79, master_core_id  is 0
EAL: Detected lcore 0 as core 0 on socket 0
EAL: Detected lcore 1 as core 0 on socket 0
EAL: Detected lcore 2 as core 0 on socket 0
EAL: Detected lcore 3 as core 0 on socket 0
EAL: Detected lcore 4 as core 0 on socket 0
EAL: Detected lcore 5 as core 0 on socket 0
EAL: Detected lcore 6 as core 0 on socket 0
EAL: Support maximum 32 logical core(s) by configuration.
EAL: Detected 7 lcore(s)
EAL: Setting up physically contiguous memory...
EAL: Ask a virtual area of 0x40000000 bytes
EAL: Virtual area found at 0x7f2735600000 (size = 0x40000000)
EAL: Requesting 512 pages of size 2MB from socket 0
EAL: TSC frequency is ~2094950 KHz
EAL: WARNING: cpu flags constant_tsc=yes nonstop_tsc=no -> using unreliable clock cycles !
EAL: Master lcore 0 is ready (tid=9a11c720;cpuset=[0])
EAL: Failed to set thread name for interrupt handling
EAL: Cannot set name for lcore thread
EAL: Cannot set name for lcore thread
EAL: Cannot set name for lcore thread
EAL: Cannot set name for lcore thread
EAL: lcore 4 is ready (tid=33ff7700;cpuset=[4])
EAL: lcore 3 is ready (tid=349f8700;cpuset=[3])
EAL: lcore 6 is ready (tid=32bf5700;cpuset=[6])
EAL: lcore 5 is ready (tid=335f6700;cpuset=[5])
EAL: PCI device 0000:00:07.0 on NUMA socket -1
EAL:   probe driver: 8086:1521 rte_igb_pmd
EAL:   Not managed by a supported kernel driver, skipped
EAL: PCI device 0000:00:08.0 on NUMA socket -1
EAL:   probe driver: 8086:1572 rte_i40e_pmd
EAL:   PCI memory mapped at 0x7f27319f5000
EAL:   PCI memory mapped at 0x7f279a33c000
PMD: eth_i40e_dev_init(): FW 5.0 API 1.5 NVM 05.00.02 eetrack 8000224e

However, on my vm running our app but with 16.07 I see much less EAL output; the other stuff is printf output I put in the dpdk code to try and figure out where it was going wrong

dpdk: In dpdk_init_eal core_mask is  79, master_core_id  is 0
EAL: Detected 7 lcore(s)
EAL: WARNING: cpu flags constant_tsc=yes nonstop_tsc=no -> using unreliable clock cycles !

dpdk_init_memory_pools  position 1
dpdk_init_memory_pools  position 2
dpdk_init_memory_pools  position 3

  about to call ret_mempool_create

  name               Error Ind Mempool
  number             8
  element size       256
  cache size         4
  private data size  4
  mp_init            1158173360
  mp_init_arg        0
  obj_init           1158173120
  obj_init_arg       0
  socket_id          4294967295
  flags              0


 at start of rte_mempool_create
 at start of rte_mempool_populate_default
 at start of rte_mempool_populate_phys


Is this just down to a change of the debug output from within the EAL, or is something going fundamentally wrong?

There is output about the individual detected lcores, but there is no output about setting up physically contiguous memory, etc.

However, if my call to rte_eal_init hadn't worked, I shouldn't have got as far as trying to call rte_mempool_create

We check for a return of rte_eal_init of < 0 and if so, we rte_exit.
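
(It is just the usual pattern, roughly:

    ret = rte_eal_init(argc, argv);
    if (ret < 0)
        rte_exit(EXIT_FAILURE, "rte_eal_init failed\n");

and we get past that check, so EAL init itself appears to succeed.)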

I'll have a look over the newer documentation for the debug output

Thanks

Martin


* Re: [dpdk-users] segfault with dpdk 16.07 in rte_mempool_populate_phys
  2016-08-19  8:28   ` martin_curran-gray
@ 2016-08-22 13:58     ` Shreyansh Jain
  2016-08-22 14:06       ` martin_curran-gray
  0 siblings, 1 reply; 7+ messages in thread
From: Shreyansh Jain @ 2016-08-22 13:58 UTC (permalink / raw)
  To: martin_curran-gray, users

Hi Martin,

See inline.
(Also, please don't remove mail thread text in replies as it loses context.)

> -----Original Message-----
> From: martin_curran-gray@keysight.com [mailto:martin_curran-
> gray@keysight.com]
> Sent: Friday, August 19, 2016 1:58 PM
> To: Shreyansh Jain <shreyansh.jain@nxp.com>; users@dpdk.org
> Subject: RE: segfault with dpdk 16.07 in rte_mempool_populate_phys
> 
> Hi Shreyansh,
> 
> Thanks for your reply,
> 
> Hmmm, I had wondered if the debug output from 16.07 was reduced compared to
> 2.2.0, but perhaps this is what I should have been concentrating on, rather
> than the core later
> 
> 
> On a vm running our app using 2.2.0 at startup, I see:
> 
> dpdk: In dpdk_init_eal core_mask is  79, master_core_id  is 0
> EAL: Detected lcore 0 as core 0 on socket 0
> EAL: Detected lcore 1 as core 0 on socket 0
> EAL: Detected lcore 2 as core 0 on socket 0
> EAL: Detected lcore 3 as core 0 on socket 0
> EAL: Detected lcore 4 as core 0 on socket 0
> EAL: Detected lcore 5 as core 0 on socket 0
> EAL: Detected lcore 6 as core 0 on socket 0
> EAL: Support maximum 32 logical core(s) by configuration.
> EAL: Detected 7 lcore(s)
> EAL: Setting up physically contiguous memory...
> EAL: Ask a virtual area of 0x40000000 bytes
> EAL: Virtual area found at 0x7f2735600000 (size = 0x40000000)
> EAL: Requesting 512 pages of size 2MB from socket 0
> EAL: TSC frequency is ~2094950 KHz
> EAL: WARNING: cpu flags constant_tsc=yes nonstop_tsc=no -> using unreliable
> clock cycles !
> EAL: Master lcore 0 is ready (tid=9a11c720;cpuset=[0])
> EAL: Failed to set thread name for interrupt handling
> EAL: Cannot set name for lcore thread
> EAL: Cannot set name for lcore thread
> EAL: Cannot set name for lcore thread
> EAL: Cannot set name for lcore thread
> EAL: lcore 4 is ready (tid=33ff7700;cpuset=[4])
> EAL: lcore 3 is ready (tid=349f8700;cpuset=[3])
> EAL: lcore 6 is ready (tid=32bf5700;cpuset=[6])
> EAL: lcore 5 is ready (tid=335f6700;cpuset=[5])
> EAL: PCI device 0000:00:07.0 on NUMA socket -1
> EAL:   probe driver: 8086:1521 rte_igb_pmd
> EAL:   Not managed by a supported kernel driver, skipped
> EAL: PCI device 0000:00:08.0 on NUMA socket -1
> EAL:   probe driver: 8086:1572 rte_i40e_pmd
> EAL:   PCI memory mapped at 0x7f27319f5000
> EAL:   PCI memory mapped at 0x7f279a33c000
> PMD: eth_i40e_dev_init(): FW 5.0 API 1.5 NVM 05.00.02 eetrack 8000224e
> 
> However on my vm running our app but with 16.07 I see much less EAL output,
> the other stuff is printf output I put in the dpdk code to try and figure out
> where it was going wrong
> 
> dpdk: In dpdk_init_eal core_mask is  79, master_core_id  is 0
> EAL: Detected 7 lcore(s)
> EAL: WARNING: cpu flags constant_tsc=yes nonstop_tsc=no -> using unreliable
> clock cycles !
> 
> dpdk_init_memory_pools  position 1
> dpdk_init_memory_pools  position 2
> dpdk_init_memory_pools  position 3
> 
>   about to call ret_mempool_create
> 
>   name               Error Ind Mempool
>   number             8
>   element size       256
>   cache size         4
>   private data size  4
>   mp_init            1158173360
>   mp_init_arg        0
>   obj_init           1158173120
>   obj_init_arg       0
>   socket_id          4294967295
>   flags              0
> 
> 
>  at start of rte_mempool_create
>  at start of rte_mempool_populate_default
>  at start of rte_mempool_populate_phys
> 
> 
> Is this just down to a change of the debug output from within the EAL , or is
> something going fundamentally wrong.

The number of messages (especially the lcore detection, etc.) has definitely been reduced in 16.07.
From what I remember, lcore detection, VFIO support and eventually application-specific logs were what was getting printed. As soon as I have access to a vanilla 16.07 app, I will post the output (on host only). But it seems fine to me as of now.

> 
> There is output about the individual detected lcores, there is no output
> about the setting up physically contiguous memory.. etc

Which is OK, I think. Most of the INFO messages have been moved to DEBUG, which is why you won't see the 2.2.0 messages.

> 
> However if my call to rte_eal_init hadn't worked, I shouldn't have got as far
> as trying to call rte_mempool_create
> 
> We check for a return of rte_eal_init of < 0 and if so, we rte_exit.
> 
> I'll have a look over the newer documentation for the debug output

For the stack trace that you dumped in the previous email, would it be possible to recompile without the optimization flags and dump it again?
It is possible that the core is hitting some path because of which a clean exit is not happening.

> 
> Thanks
> 
> Martin
> 

-
Shreyansh


* Re: [dpdk-users] segfault with dpdk 16.07 in rte_mempool_populate_phys
  2016-08-22 13:58     ` Shreyansh Jain
@ 2016-08-22 14:06       ` martin_curran-gray
  0 siblings, 0 replies; 7+ messages in thread
From: martin_curran-gray @ 2016-08-22 14:06 UTC (permalink / raw)
  To: shreyansh.jain, users

Hi Shreyansh,

Thanks for the update

I got a bit further: the ops->alloc function pointer is not being set up, it is left at 0

I'm trying to figure out what happens in the ip_reassembly example that I'm not doing, since the ip_reassembly program works fine

Debug from my app

	 at start of rte_mempool_create
	 at start of rte_mempool_populate_default
	 at start of rte_mempool_populate_phys

	rte_mempool_ops_alloc pos 1
	 mp pointer here is 2781207808
	 ops is 3069532288
	rte_mempool_ops_alloc pos 2
	   does ops->alloc exist?
	   ops->alloc is 0

When it then tries to call ops->alloc, it segfaults
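
That would explain the 0x0000000000000000 frame at the top of the backtrace - as far as I can see (paraphrasing the 16.07 source), rte_mempool_ops_alloc is essentially just:

    struct rte_mempool_ops *ops;

    ops = rte_mempool_get_ops(mp->ops_index);
    return ops->alloc(mp);   /* jumps to address 0 if alloc was never registered */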

Debug from ip_reassembly

at start of rte_mempool_create
	 at start of rte_mempool_populate_default
	 at start of rte_mempool_populate_phys

	rte_mempool_ops_alloc pos 1
	 mp pointer here is 2410384960
	 ops is 8063296

	rte_mempool_ops_alloc pos 2
	   does ops->alloc exist?
	   ops->alloc is 4526304
	 at end of rte_mempool_populate_phys
	 at end of rte_mempool_populate_default
	 at end of rte_mempool_create




-----Original Message-----
From: Shreyansh Jain [mailto:shreyansh.jain@nxp.com] 
Sent: 22 August 2016 14:59
To: CURRAN-GRAY,MARTIN (K-Scotland,ex1) <martin_curran-gray@keysight.com>; users@dpdk.org
Subject: RE: segfault with dpdk 16.07 in rte_mempool_populate_phys

Hi Martin,

See inline.
(Also, please don't remove mail thread text in replies as it loses context.)

> -----Original Message-----
> From: martin_curran-gray@keysight.com [mailto:martin_curran- 
> gray@keysight.com]
> Sent: Friday, August 19, 2016 1:58 PM
> To: Shreyansh Jain <shreyansh.jain@nxp.com>; users@dpdk.org
> Subject: RE: segfault with dpdk 16.07 in rte_mempool_populate_phys
> 
> Hi Shreyansh,
> 
> Thanks for your reply,
> 
> Hmmm, I had wondered if the debug output from 16.07 was reduced
> compared to 2.2.0, but perhaps this is what I should have been 
> concentrating on, rather than the core later
> 
> 
> On a vm running our app using 2.2.0 at startup, I see:
> 
> dpdk: In dpdk_init_eal core_mask is  79, master_core_id  is 0
> EAL: Detected lcore 0 as core 0 on socket 0
> EAL: Detected lcore 1 as core 0 on socket 0
> EAL: Detected lcore 2 as core 0 on socket 0
> EAL: Detected lcore 3 as core 0 on socket 0
> EAL: Detected lcore 4 as core 0 on socket 0
> EAL: Detected lcore 5 as core 0 on socket 0
> EAL: Detected lcore 6 as core 0 on socket 0
> EAL: Support maximum 32 logical core(s) by configuration.
> EAL: Detected 7 lcore(s)
> EAL: Setting up physically contiguous memory...
> EAL: Ask a virtual area of 0x40000000 bytes
> EAL: Virtual area found at 0x7f2735600000 (size = 0x40000000)
> EAL: Requesting 512 pages of size 2MB from socket 0
> EAL: TSC frequency is ~2094950 KHz
> EAL: WARNING: cpu flags constant_tsc=yes nonstop_tsc=no -> using 
> unreliable clock cycles !
> EAL: Master lcore 0 is ready (tid=9a11c720;cpuset=[0])
> EAL: Failed to set thread name for interrupt handling
> EAL: Cannot set name for lcore thread
> EAL: Cannot set name for lcore thread
> EAL: Cannot set name for lcore thread
> EAL: Cannot set name for lcore thread
> EAL: lcore 4 is ready (tid=33ff7700;cpuset=[4])
> EAL: lcore 3 is ready (tid=349f8700;cpuset=[3])
> EAL: lcore 6 is ready (tid=32bf5700;cpuset=[6])
> EAL: lcore 5 is ready (tid=335f6700;cpuset=[5])
> EAL: PCI device 0000:00:07.0 on NUMA socket -1
> EAL:   probe driver: 8086:1521 rte_igb_pmd
> EAL:   Not managed by a supported kernel driver, skipped
> EAL: PCI device 0000:00:08.0 on NUMA socket -1
> EAL:   probe driver: 8086:1572 rte_i40e_pmd
> EAL:   PCI memory mapped at 0x7f27319f5000
> EAL:   PCI memory mapped at 0x7f279a33c000
> PMD: eth_i40e_dev_init(): FW 5.0 API 1.5 NVM 05.00.02 eetrack 8000224e
> 
> However on my vm running our app but with 16.07 I see much less EAL
> output, the other stuff is printf output I put in the dpdk code to try 
> and figure out where it was going wrong
> 
> dpdk: In dpdk_init_eal core_mask is  79, master_core_id  is 0
> EAL: Detected 7 lcore(s)
> EAL: WARNING: cpu flags constant_tsc=yes nonstop_tsc=no -> using 
> unreliable clock cycles !
> 
> dpdk_init_memory_pools  position 1
> dpdk_init_memory_pools  position 2
> dpdk_init_memory_pools  position 3
> 
>   about to call ret_mempool_create
> 
>   name               Error Ind Mempool
>   number             8
>   element size       256
>   cache size         4
>   private data size  4
>   mp_init            1158173360
>   mp_init_arg        0
>   obj_init           1158173120
>   obj_init_arg       0
>   socket_id          4294967295
>   flags              0
> 
> 
>  at start of rte_mempool_create
>  at start of rte_mempool_populate_default
>  at start of rte_mempool_populate_phys
> 
> 
> Is this just down to a change of the debug output from within the EAL 
> , or is something going fundamentally wrong.

The number of messages (especially the lcore detection, etc.) has definitely been reduced in 16.07.
From what I remember, lcore detection, VFIO support and eventually application-specific logs were what was getting printed. As soon as I have access to a vanilla 16.07 app, I will post the output (on host only). But it seems fine to me as of now.

> 
> There is output about the individual detected lcores, there is no 
> output about the setting up physically contiguous memory.. etc

Which is OK, I think. Most of the INFO messages have been moved to DEBUG, which is why you won't see the 2.2.0 messages.

> 
> However if my call to rte_eal_init hadn't worked, I shouldn't have got
> as far as trying to call rte_mempool_create
> 
> We check for a return of rte_eal_init of < 0 and if so, we rte_exit.
> 
> I'll have a look over the newer documentation for the debug output

For the stack trace that you dumped in the previous email, would it be possible to recompile without the optimization flags and dump it again?
It is possible that the core is hitting some path because of which a clean exit is not happening.

> 
> Thanks
> 
> Martin
> 

-
Shreyansh


* Re: [dpdk-users] segfault with dpdk 16.07 in rte_mempool_populate_phys
  2016-08-24  6:48 martin_curran-gray
@ 2016-08-24  7:23 ` Shreyansh Jain
  0 siblings, 0 replies; 7+ messages in thread
From: Shreyansh Jain @ 2016-08-24  7:23 UTC (permalink / raw)
  To: martin_curran-gray; +Cc: users

Hi Martin,

(Sorry. I have converted the email to plain-text)

Happy to hear this. Probably next time I too will keep in mind the importance of init/constructors.

====
> From: martin_curran-gray@keysight.com [mailto:martin_curran-gray@keysight.com] 
> Sent: Wednesday, August 24, 2016 12:18 PM
> To: users@dpdk.org; Shreyansh Jain <shreyansh.jain@nxp.com>
> Cc: martin_curran-gray@keysight.com
> Subject: Re: [dpdk-users] segfault with dpdk 16.07 in rte_mempool_populate_phys
> 
> Hi Shreyansh
> 
> Found my problem: because of the way the dpdk is integrated into our sw as a prebuilt library, we don’t get the automatic initialisation of objects the way I think the dpdk examples benefit from.
> I’d forgotten this, we have explicit calls to things like 
> 
>        devinitfn_rte_i40e_driver();
> 
> so after I added an explicit call to 
> 
>       mp_hdlr_init_ops_mp_mc();
>
> my function pointer deep down below  rte_mempool_create is now valid.
> 
> 
> Seem to have a nice performance improvement of about 40% in the rate my app can handle at the front door compared to 2.2.0

[Shreyansh] That is a good increase.

> 
> Nice
>
> Now I just need to find out why when I turn on vector mode, I never get any mbuffs returned to the pool ☺

[Shreyansh] I am not sure what the above means (vector mode), so no comments from my side.

>
> Thanks for your help
> Martin
[..]

-
Shreyansh


* Re: [dpdk-users] segfault with dpdk 16.07 in rte_mempool_populate_phys
@ 2016-08-24  6:48 martin_curran-gray
  2016-08-24  7:23 ` Shreyansh Jain
  0 siblings, 1 reply; 7+ messages in thread
From: martin_curran-gray @ 2016-08-24  6:48 UTC (permalink / raw)
  To: users, shreyansh.jain; +Cc: martin_curran-gray

Hi Shreyansh

Found my problem: because of the way the dpdk is integrated into our sw as a prebuilt library, we don't get the automatic initialisation of objects the way I think the dpdk examples benefit from.
I'd forgotten this; we have explicit calls to things like

devinitfn_rte_i40e_driver();

so after I added an explicit call to

mp_hdlr_init_ops_mp_mc();

my function pointer deep down below  rte_mempool_create is now valid.
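
(For anyone else who hits this: as I understand it, MEMPOOL_REGISTER_OPS normally handles the registration at program start-up by expanding to roughly the constructor below, which never runs for us because of the prebuilt-library setup - hence the explicit call. Paraphrased from memory, so the exact macro body may differ.)

    /* roughly what MEMPOOL_REGISTER_OPS(ops_mp_mc) expands to in rte_mempool.h */
    void mp_hdlr_init_ops_mp_mc(void);
    void __attribute__((constructor, used)) mp_hdlr_init_ops_mp_mc(void)
    {
        rte_mempool_register_ops(&ops_mp_mc);
    }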

Seem to have a nice performance improvement of about 40% in the rate my app can handle at the front door compared to 2.2.0

Nice

Now I just need to find out why when I turn on vector mode, I never get any mbuffs returned to the pool :)

Thanks for your help

Martin



Martin Curran-Gray
HW/FPGA/SW Engineer
Keysight Technologies UK Ltd


end of thread, other threads:[~2016-08-24  7:23 UTC | newest]

Thread overview: 7+ messages
2016-08-17 15:03 [dpdk-users] segfault with dpdk 16.07 in rte_mempool_populate_phys martin_curran-gray
2016-08-19  6:10 ` Shreyansh Jain
2016-08-19  8:28   ` martin_curran-gray
2016-08-22 13:58     ` Shreyansh Jain
2016-08-22 14:06       ` martin_curran-gray
2016-08-24  6:48 martin_curran-gray
2016-08-24  7:23 ` Shreyansh Jain
