DPDK patches and discussions
From: Olivier Matz <olivier.matz@6wind.com>
To: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Cc: "Dey, Souvik" <sodey@sonusnet.com>, "dev@dpdk.org" <dev@dpdk.org>,
	Thomas Monjalon <thomas.monjalon@6wind.com>
Subject: Re: [dpdk-dev] Getting crash in rte_pktmbuf_alloc with 16.11 DPDK
Date: Mon, 20 Mar 2017 14:05:15 +0100
Message-ID: <20170320140515.45c91b61@platinum>
In-Reply-To: <20170317050917.GX18844@yliu-dev.sh.intel.com>

Hi,

On Fri, 17 Mar 2017 13:09:17 +0800, Yuanhan Liu <yuanhan.liu@linux.intel.com> wrote:
> On Fri, Mar 17, 2017 at 03:46:53AM +0000, Dey, Souvik wrote:
> > Hi,
> > I am trying to do rte_pktmbuf_alloc() from a mempool within a secondary process, after doing an rte_mempool_lookup() for the same mempool. But rte_pktmbuf_alloc() crashes with the backtrace below:
> 
> I believe it's yet another "accessing a local process pointer in
> shared memory" issue in the multi-process model. Here is a similar
> issue I just fixed for the virtio PMD in the last release.
> 
>     commit 6d890f8ab51295045a53f41c4d2654bb1f01cf38
>     Author: Yuanhan Liu <yuanhan.liu@linux.intel.com>
>     Date:   Fri Jan 6 18:16:19 2017 +0800
>     
>         net/virtio: fix multiple process support
>     

Another idea is that your two processes (primary and secondary) do not
share the same configuration or build options. This was discussed a bit
here:

http://dpdk.org/dev/patchwork/patch/16868/
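
To check whether this is what happens on your side, it may be worth
dumping which ops each process resolves for that pool; something like
the snippet below (ops_index and rte_mempool_get_ops() are what the
alloc path uses internally since 16.07, if I recall correctly):

#include <stdio.h>
#include <rte_mempool.h>

/* Print the mempool ops this process resolves for the pool. Call it
 * in the primary after creating the pool and in the secondary after
 * rte_mempool_lookup(): both should report the same handler name and
 * a non-NULL dequeue pointer. */
static void
dump_mempool_ops(const struct rte_mempool *mp)
{
	struct rte_mempool_ops *ops = rte_mempool_get_ops(mp->ops_index);

	printf("pool %s: ops_index=%d ops_name=%s dequeue=%p\n",
	       mp->name, mp->ops_index, ops->name, (void *)ops->dequeue);
}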

Can you provide a minimal example application that reproduces the
issue?
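
For reference, something along these lines should be enough (one
binary, run once with --proc-type=primary and once with
--proc-type=secondary; the pool name and sizes below are only
examples, create the pool the same way your application does):

#include <stdio.h>
#include <rte_eal.h>
#include <rte_lcore.h>
#include <rte_errno.h>
#include <rte_mempool.h>
#include <rte_mbuf.h>

#define POOL_NAME "test_pool"

int
main(int argc, char **argv)
{
	struct rte_mempool *mp;
	struct rte_mbuf *m;

	if (rte_eal_init(argc, argv) < 0) {
		fprintf(stderr, "EAL init failed\n");
		return -1;
	}

	if (rte_eal_process_type() == RTE_PROC_PRIMARY)
		mp = rte_pktmbuf_pool_create(POOL_NAME, 8192, 256, 0,
				RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());
	else
		mp = rte_mempool_lookup(POOL_NAME);

	if (mp == NULL) {
		fprintf(stderr, "no mempool: %s\n", rte_strerror(rte_errno));
		return -1;
	}

	m = rte_pktmbuf_alloc(mp); /* the call that crashes for you */
	printf("alloc: %s\n", m != NULL ? "ok" : "failed");
	rte_pktmbuf_free(m);

	return 0;
}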

Regards,
Olivier


> 
> 	--yliu
> > 
> > #0  0x0000000000000000 in ?? ()
> > #1  0x0000000000423da2 in rte_mempool_ops_dequeue_bulk (n=1, obj_table=0x7fffffffd8e0, mp=0x7fe910fbd540) at /sonus/p4/ws/sodey/cmn_thirdparty.cloud_dev_5_1/Intel/DPDK/dist
> > #2  __mempool_generic_get (flags=<optimized out>, cache=<optimized out>, n=<optimized out>, obj_table=<optimized out>, mp=<optimized out>)
> >     at /sonus/p4/ws/sodey/cmn_thirdparty.cloud_dev_5_1/Intel/DPDK/distrib_upd/x86_64-native-linuxapp-gcc/include/rte_mempool.h:1296
> > #3  rte_mempool_generic_get (flags=<optimized out>, cache=<optimized out>, n=<optimized out>, obj_table=<optimized out>, mp=<optimized out>)
> >     at /sonus/p4/ws/sodey/cmn_thirdparty.cloud_dev_5_1/Intel/DPDK/distrib_upd/x86_64-native-linuxapp-gcc/include/rte_mempool.h:1334
> > #4  rte_mempool_get_bulk (n=1, obj_table=0x7fffffffd8e0, mp=0x7fe910fbd540) at /sonus/p4/ws/sodey/cmn_thirdparty.cloud_dev_5_1/Intel/DPDK/distrib_upd/x86_64-native-linuxapp
> > #5  rte_mempool_get (obj_p=0x7fffffffd8e0, mp=0x7fe910fbd540) at /sonus/p4/ws/sodey/cmn_thirdparty.cloud_dev_5_1/Intel/DPDK/distrib_upd/x86_64-native-linuxapp-gcc/include/r
> > #6  rte_mbuf_raw_alloc (mp=0x7fe910fbd540) at /sonus/p4/ws/sodey/cmn_thirdparty.cloud_dev_5_1/Intel/DPDK/distrib_upd/x86_64-native-linuxapp-gcc/include/rte_mbuf.h:761
> > #7  rte_pktmbuf_alloc (mp=0x7fe910fbd540) at /sonus/p4/ws/sodey/cmn_thirdparty.cloud_dev_5_1/Intel/DPDK/distrib_upd/x86_64-native-linuxapp-gcc/include/rte_mbuf.h:1046
> >   
> > From the trace it looks like ops->dequeue is failing because the ops is not set properly.
> > In the primary process I call rte_mempool_create() with flags set to 0 (i.e. the mp_mc option), which should have taken care of setting the ops properly. Also, the rte_pktmbuf_alloc() calls in the primary do not give any issues.
> > Both the primary and secondary application code were working fine with DPDK 2.1, but when I link against newer DPDK versions like 16.07/16.11, it crashes. There are no changes in the application code.
> > I do see that the rte_mempool code changed completely between 2.1 and 16.07, but I could not find any obvious reason for the crash. Is my usage wrong, or do we need to pass a new flag to make this work?
> > 
> > Has anyone faced a similar issue? Any help here would be great for my debugging. Thanks in advance.
> > 
> > --
> > Regards,
> > Souvik  
