DPDK usage discussions
From: Shreyansh Jain <shreyansh.jain@nxp.com>
To: "martin_curran-gray@keysight.com"
	<martin_curran-gray@keysight.com>,
	"users@dpdk.org" <users@dpdk.org>
Subject: Re: [dpdk-users] segfault with dpdk 16.07 in rte_mempool_populate_phys
Date: Fri, 19 Aug 2016 06:10:01 +0000	[thread overview]
Message-ID: <DB5PR0401MB2054C29AE2C86DCA191A48C490160@DB5PR0401MB2054.eurprd04.prod.outlook.com> (raw)
In-Reply-To: <22C95CA62CBADB498D32A348F0F073BC20AE2583@wcosexch02k.cos.is.keysight.com>

Hi Martin,

> -----Original Message-----
> From: users [mailto:users-bounces@dpdk.org] On Behalf Of martin_curran-
> gray@keysight.com
> Sent: Wednesday, August 17, 2016 8:34 PM
> To: users@dpdk.org
> Subject: [dpdk-users] segfault with dpdk 16.07 in rte_mempool_populate_phys
> 
> Hi All,
> 
> Trying to move an application from 2.2.0 to 16.07
> 
> Tested out l2fwd in 16.07, quite happy with that (in fact very happy with
> the performance improvement I measure over 2.2.0)
> 
> But now trying to get our app moved over, and coming unstuck.
> 
> As well as the main packet mbuf pools etc, our app has a little pool for
> error messages
> 
> When that is created with "rte_mempool_create" I get a segmentation fault
> from rte_mempool_populate_phys
> 
> It seems I have nothing valid for this call in rte_mempool.c to work with
> 
>     ret = rte_mempool_ops_alloc(mp);
> 
> I can see in the user guide, in section 5.5, that it talks about Mempool
> Handlers, and a new API involving rte_mempool_create_empty and
> rte_mempool_set_ops_byname

Indeed, there are changes to the mempool allocation system: 16.07 adds support for pluggable mempool handlers.
But it should not matter to you if you are calling rte_mempool_create directly (with flags=0), because the default handler, "ring_mp_mc", is attached to your pool in that case.
You can read more about how ring_mp_mc behaves in rte_mempool_ring.c.
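For reference, this is roughly how that handler registers itself in rte_mempool_ring.c (abridged and paraphrased from memory, so treat it as a sketch rather than the exact 16.07 source); the .alloc callback below is what rte_mempool_ops_alloc() ends up invoking during pool population:

    /* Sketch of lib/librte_mempool/rte_mempool_ring.c (abridged) */
    static const struct rte_mempool_ops ops_mp_mc = {
            .name      = "ring_mp_mc",
            .alloc     = common_ring_alloc,      /* creates the backing rte_ring */
            .free      = common_ring_free,
            .enqueue   = common_ring_mp_enqueue, /* multi-producer put */
            .dequeue   = common_ring_mc_dequeue, /* multi-consumer get */
            .get_count = common_ring_get_count,
    };

    /* Constructor-time registration; afterwards the handler can be looked up
     * by name via rte_mempool_set_ops_byname(mp, "ring_mp_mc", NULL). */
    MEMPOOL_REGISTER_OPS(ops_mp_mc);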

> 
> But the code of rte_mempool_create seems to call rte_mempool_create_empty
> and rte_mempool_set_ops_byname

As you are passing flags=0, I assume it falls back to the default handler, ring_mp_mc (multi-producer/multi-consumer).
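With flags=0, the old-style call expands to roughly the sequence below. This is only a sketch of what rte_mempool_create() does internally in 16.07; the pool name, element size and init callbacks match your backtrace, but the element count and cache size are assumptions on my part:

    #include <rte_mempool.h>
    #include <rte_mbuf.h>

    static struct rte_mempool *
    create_err_pool(void)
    {
            struct rte_mempool *mp;

            /* 1. Allocate an empty pool: no handler, no memory attached yet. */
            mp = rte_mempool_create_empty("Error Ind Mempool",
                                          512,  /* n, assumed          */
                                          256,  /* elt_size            */
                                          32,   /* cache size, assumed */
                                          sizeof(struct rte_pktmbuf_pool_private),
                                          SOCKET_ID_ANY, 0 /* flags */);
            if (mp == NULL)
                    return NULL;

            /* 2. flags == 0 -> default multi-producer/multi-consumer handler. */
            rte_mempool_set_ops_byname(mp, "ring_mp_mc", NULL);

            /* 3. Pool private-area init, then reserve and populate the memory
             *    (this is the path that reaches rte_mempool_populate_phys()). */
            rte_pktmbuf_pool_init(mp, NULL);
            if (rte_mempool_populate_default(mp) < 0) {
                    rte_mempool_free(mp);
                    return NULL;
            }

            /* 4. Per-object init. */
            rte_mempool_obj_iter(mp, rte_pktmbuf_init, NULL);
            return mp;
    }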

> 
> Then it calls mp_init before rte_mempool_populate_default, down in
> which the call to rte_mempool_populate_phys eventually dumps core
> 
> I've tried building and running the ip_reassembly example program, and I can
> see that it uses rte_mempool_create in a similar, although admittedly
> slightly different, fashion.
> It has flags set as MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET, whereas I have 0,
> but the code in rte_mempool_create calls rte_mempool_set_ops_byname
> slightly differently depending on which flags you pass.

The value of flags depends on the way you use the mempool: single or multiple producers, and single or multiple consumers [for rte_mempool_create and the default available handlers].
But flags=0 should still be a safe bet, AFAIK.
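For reference, the flag-to-handler selection inside rte_mempool_create() in 16.07 is roughly this (paraphrased from rte_mempool.c):

    /* Inside rte_mempool_create(), after rte_mempool_create_empty(): */
    if ((flags & MEMPOOL_F_SP_PUT) && (flags & MEMPOOL_F_SC_GET))
            rte_mempool_set_ops_byname(mp, "ring_sp_sc", NULL);
    else if (flags & MEMPOOL_F_SP_PUT)
            rte_mempool_set_ops_byname(mp, "ring_sp_mc", NULL);
    else if (flags & MEMPOOL_F_SC_GET)
            rte_mempool_set_ops_byname(mp, "ring_mp_sc", NULL);
    else    /* flags == 0: your case */
            rte_mempool_set_ops_byname(mp, "ring_mp_mc", NULL);

So ip_reassembly ends up on "ring_sp_sc" and your pool on "ring_mp_mc", but both go through the same rte_mempool_set_ops_byname() path, which is why I don't think the flags alone explain the crash.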

> 
> I tried changing my app to use the same parameters in the rte_mempool_create
> call as the example program, but I still get a segmentation fault
> 
> Is there something else I'm missing??

Somehow I feel that the problem is not with the flags; more likely there is not enough memory available, in which case rte_memzone_reserve_aligned might have failed. But I am not sure.
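If you want to rule that out quickly, something along these lines might help (a sketch only; it just dumps what EAL has mapped and which memzones are already reserved, and is meant to run just before the error pool is created):

    #include <stdio.h>
    #include <inttypes.h>
    #include <rte_memory.h>
    #include <rte_memzone.h>

    /* Quick memory sanity check to run just before creating the error pool. */
    static void
    dump_mem_state(void)
    {
            printf("physmem mapped by EAL: %" PRIu64 " bytes\n",
                   rte_eal_get_physmem_size());
            rte_memzone_dump(stdout);  /* lists every memzone reserved so far */
    }

And if rte_mempool_create() returns NULL instead of crashing, rte_strerror(rte_errno) will tell you whether the failure was ENOMEM from the memzone reservation.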

> 
> I looked through the ip_reassembly program, but couldn't see it making any
> extra calls to do anything to the pool before the create is called.
> 
> Any help gratefully received
> 
> Program terminated with signal 11, Segmentation fault.
> #0  0x0000000000000000 in ?? ()
> 
> #0  0x0000000000000000 in ?? ()
> #1  0x00007f9fa586cdde in rte_mempool_populate_phys (mp=0x7f9f9485dd00,
>     vaddr=0x7f9f9485d2c0 <Address 0x7f9f9485d2c0 out of bounds>,
> paddr=8995066560, len=2560,
>     free_cb=0x7f9fa586cbe0 <rte_mempool_memchunk_mz_free>,
> opaque=0x7f9fb40e4cb4)
>     at /root/######/dpdk-16.07/lib/librte_mempool/rte_mempool.c:363
> #2  0x00007f9fa586da4a in rte_mempool_populate_default (mp=0x7f9f9485dd00)
>     at /root/######/dpdk-16.07/lib/librte_mempool/rte_mempool.c:583
> #3  0x00007f9fa586dd49 in rte_mempool_create (name=0x7f9fa588fb56 "Error Ind
> Mempool", n=<value optimized out>,
>     elt_size=256, cache_size=<value optimized out>, private_data_size=<value
> optimized out>,
>     mp_init=0x7f9fa586c2b0 <rte_pktmbuf_pool_init>, mp_init_arg=0x0,
> obj_init=0x7f9fa586c1c0 <rte_pktmbuf_init>,
>     obj_init_arg=0x0, socket_id=-1, flags=0) at /root/######/dpdk-
> 16.07/lib/librte_mempool/rte_mempool.c:909

Would it be possible for you to paste the application startup logs? They might give us some more hints.

> 
> Thanks
> 
> Martin
> 
> Martin Curran-Gray
> HW/FPGA/SW Engineer
> Keysight Technologies UK Ltd

-
Shreyansh
