Date: Fri, 17 Mar 2017 13:09:17 +0800
From: Yuanhan Liu
To: "Dey, Souvik"
Cc: dev@dpdk.org, Olivier Matz, Thomas Monjalon
Message-ID: <20170317050917.GX18844@yliu-dev.sh.intel.com>
Subject: Re: [dpdk-dev] Getting crash in rte_pktmbuf_alloc with 16.11 DPDK

On Fri, Mar 17, 2017 at 03:46:53AM +0000, Dey, Souvik wrote:
> Hi,
> I am trying to do rte_pktmbuf_alloc from a mempool within a secondary
> process, after doing a rte_mempool_lookup for the same mempool, but
> rte_pktmbuf_alloc crashes with the backtrace below.

I believe it's yet another "accessing a local process pointer in shared
memory" issue in the multi-process model. Here is a similar issue I
fixed for the virtio PMD in the last release:

    commit 6d890f8ab51295045a53f41c4d2654bb1f01cf38
    Author: Yuanhan Liu
    Date:   Fri Jan 6 18:16:19 2017 +0800

        net/virtio: fix multiple process support
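Roughly, this class of bug looks like the sketch below. This is
illustrative C only, not the actual mempool internals; every name in it
is made up:

    /* A function pointer kept in memory that is shared between the
     * primary and secondary processes. */
    struct shared_obj {
        int (*dequeue)(void **obj_table, unsigned int n);
    };

    static int real_dequeue(void **obj_table, unsigned int n)
    {
        /* ... pop n objects into obj_table ... */
        (void)obj_table;
        (void)n;
        return 0;
    }

    /* Primary process: stores an address that is only meaningful in
     * its own address space. */
    void primary_init(struct shared_obj *o)
    {
        o->dequeue = real_dequeue;
    }

    /* Secondary process: reads the same shared struct, but the code of
     * real_dequeue may sit at a different address here (or may not be
     * mapped at all), so the indirect call jumps into garbage -- e.g.
     * address 0x0, which matches frame #0 of the backtrace below. */
    int secondary_use(struct shared_obj *o, void **obj_table)
    {
        return o->dequeue(obj_table, 1);
    }

The usual cure is to resolve such a pointer per process (for example,
through an index into a per-process table) instead of dereferencing the
copy stored in shared memory.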
	--yliu

>
> #0  0x0000000000000000 in ?? ()
> #1  0x0000000000423da2 in rte_mempool_ops_dequeue_bulk (n=1, obj_table=0x7fffffffd8e0, mp=0x7fe910fbd540) at /sonus/p4/ws/sodey/cmn_thirdparty.cloud_dev_5_1/Intel/DPDK/dist
> #2  __mempool_generic_get (flags=<optimized out>, cache=<optimized out>, n=<optimized out>, obj_table=<optimized out>, mp=<optimized out>)
>     at /sonus/p4/ws/sodey/cmn_thirdparty.cloud_dev_5_1/Intel/DPDK/distrib_upd/x86_64-native-linuxapp-gcc/include/rte_mempool.h:1296
> #3  rte_mempool_generic_get (flags=<optimized out>, cache=<optimized out>, n=<optimized out>, obj_table=<optimized out>, mp=<optimized out>)
>     at /sonus/p4/ws/sodey/cmn_thirdparty.cloud_dev_5_1/Intel/DPDK/distrib_upd/x86_64-native-linuxapp-gcc/include/rte_mempool.h:1334
> #4  rte_mempool_get_bulk (n=1, obj_table=0x7fffffffd8e0, mp=0x7fe910fbd540) at /sonus/p4/ws/sodey/cmn_thirdparty.cloud_dev_5_1/Intel/DPDK/distrib_upd/x86_64-native-linuxapp
> #5  rte_mempool_get (obj_p=0x7fffffffd8e0, mp=0x7fe910fbd540) at /sonus/p4/ws/sodey/cmn_thirdparty.cloud_dev_5_1/Intel/DPDK/distrib_upd/x86_64-native-linuxapp-gcc/include/r
> #6  rte_mbuf_raw_alloc (mp=0x7fe910fbd540) at /sonus/p4/ws/sodey/cmn_thirdparty.cloud_dev_5_1/Intel/DPDK/distrib_upd/x86_64-native-linuxapp-gcc/include/rte_mbuf.h:761
> #7  rte_pktmbuf_alloc (mp=0x7fe910fbd540) at /sonus/p4/ws/sodey/cmn_thirdparty.cloud_dev_5_1/Intel/DPDK/distrib_upd/x86_64-native-linuxapp-gcc/include/rte_mbuf.h:1046
>
> From the trace it looks like ops->dequeue is failing because ops is
> not set properly.
> In the primary process I did rte_mempool_create with the flags passed
> as 0 (indicating the mp_mc option), which should have taken care of
> setting the ops properly. Also, the rte_pktmbuf_alloc calls in the
> primary do not give any issues. (A minimal sketch of this calling
> pattern is given after the quote.)
> Both the primary and secondary DPDK app code worked fine with DPDK
> 2.1, but now that I link against newer DPDK versions such as
> 16.07/16.11 it crashes, with no changes made to the app code.
> I do see that the rte_mempool code changed completely between 2.1 and
> 16.07, but I could not find any obvious reason for the crash. Is my
> usage wrong, or do we need to pass a new flag to make this work?
>
> Has anyone faced a similar issue? Any help with this debugging would
> be great. Thanks in advance for the help.
>
> --
> Regards,
> Souvik
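For reference, the calling pattern described in the quoted mail boils
down to something like the sketch below. This is a reconstruction, not
the original application code: the pool name "mbuf_pool" and the error
handling are assumptions.

    #include <stdlib.h>

    #include <rte_eal.h>
    #include <rte_debug.h>
    #include <rte_mempool.h>
    #include <rte_mbuf.h>

    int main(int argc, char **argv)
    {
        /* Run with --proc-type=secondary so that EAL attaches to the
         * primary process's hugepage memory. */
        if (rte_eal_init(argc, argv) < 0)
            rte_exit(EXIT_FAILURE, "EAL init failed\n");

        /* Find the pool the primary created with rte_mempool_create();
         * "mbuf_pool" is an assumed name. */
        struct rte_mempool *mp = rte_mempool_lookup("mbuf_pool");
        if (mp == NULL)
            rte_exit(EXIT_FAILURE, "mempool not found\n");

        /* The call that crashes in the reported setup. */
        struct rte_mbuf *m = rte_pktmbuf_alloc(mp);
        if (m == NULL)
            rte_exit(EXIT_FAILURE, "mbuf allocation failed\n");

        rte_pktmbuf_free(m);
        return 0;
    }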