DPDK patches and discussions
* [dpdk-dev] Getting crash in rte_pktmbuf_alloc with 16.11 DPDK
@ 2017-03-17  3:46 Dey, Souvik
  2017-03-17  5:09 ` Yuanhan Liu
  0 siblings, 1 reply; 4+ messages in thread
From: Dey, Souvik @ 2017-03-17  3:46 UTC (permalink / raw)
  To: dev; +Cc: Dey, Souvik

Hi,
              I am trying to call rte_pktmbuf_alloc on a mempool from within a secondary process, after looking up that mempool with rte_mempool_lookup. The rte_pktmbuf_alloc call crashes with the backtrace below:

#0  0x0000000000000000 in ?? ()
#1  0x0000000000423da2 in rte_mempool_ops_dequeue_bulk (n=1, obj_table=0x7fffffffd8e0, mp=0x7fe910fbd540) at /sonus/p4/ws/sodey/cmn_thirdparty.cloud_dev_5_1/Intel/DPDK/dist
#2  __mempool_generic_get (flags=<optimized out>, cache=<optimized out>, n=<optimized out>, obj_table=<optimized out>, mp=<optimized out>)
    at /sonus/p4/ws/sodey/cmn_thirdparty.cloud_dev_5_1/Intel/DPDK/distrib_upd/x86_64-native-linuxapp-gcc/include/rte_mempool.h:1296
#3  rte_mempool_generic_get (flags=<optimized out>, cache=<optimized out>, n=<optimized out>, obj_table=<optimized out>, mp=<optimized out>)
    at /sonus/p4/ws/sodey/cmn_thirdparty.cloud_dev_5_1/Intel/DPDK/distrib_upd/x86_64-native-linuxapp-gcc/include/rte_mempool.h:1334
#4  rte_mempool_get_bulk (n=1, obj_table=0x7fffffffd8e0, mp=0x7fe910fbd540) at /sonus/p4/ws/sodey/cmn_thirdparty.cloud_dev_5_1/Intel/DPDK/distrib_upd/x86_64-native-linuxapp
#5  rte_mempool_get (obj_p=0x7fffffffd8e0, mp=0x7fe910fbd540) at /sonus/p4/ws/sodey/cmn_thirdparty.cloud_dev_5_1/Intel/DPDK/distrib_upd/x86_64-native-linuxapp-gcc/include/r
#6  rte_mbuf_raw_alloc (mp=0x7fe910fbd540) at /sonus/p4/ws/sodey/cmn_thirdparty.cloud_dev_5_1/Intel/DPDK/distrib_upd/x86_64-native-linuxapp-gcc/include/rte_mbuf.h:761
#7  rte_pktmbuf_alloc (mp=0x7fe910fbd540) at /sonus/p4/ws/sodey/cmn_thirdparty.cloud_dev_5_1/Intel/DPDK/distrib_upd/x86_64-native-linuxapp-gcc/include/rte_mbuf.h:1046
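
For reference, the secondary-process code is essentially doing the following (a minimal sketch; the pool name "mbuf_pool" and the error handling are illustrative, not my actual code):

    /* secondary process, launched with --proc-type=secondary,
     * after rte_eal_init() has completed successfully;
     * needs rte_eal.h, rte_mempool.h and rte_mbuf.h */
    struct rte_mempool *mp = rte_mempool_lookup("mbuf_pool");
    if (mp == NULL)
        rte_exit(EXIT_FAILURE, "cannot find mempool mbuf_pool\n");

    struct rte_mbuf *m = rte_pktmbuf_alloc(mp); /* <-- crashes here */
    if (m == NULL)
        rte_exit(EXIT_FAILURE, "mbuf allocation failed\n");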

From the trace it looks like ops->dequeue is failing because the ops is not set properly.
In the primary process I call rte_mempool_create with flags set to 0 (i.e. the default multi-producer/multi-consumer behaviour), which should take care of setting the ops properly. The rte_pktmbuf_alloc calls in the primary process also work without any issue.
Both the primary and secondary DPDK application code worked fine with DPDK 2.1, but when I link against a newer DPDK version such as 16.07 or 16.11, it crashes. No changes were made to the application code.
I can see that the rte_mempool code was completely reworked between 2.1 and 16.07, but I could not find any obvious reason for the crash. Is my usage wrong, or do we need to pass a new flag to make this work?
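
The pool creation in the primary looks roughly like this (again a sketch; the counts, sizes and pool name are illustrative, following the usual pre-16.07 pattern):

    /* primary process: flags = 0 selects the default
     * multi-producer/multi-consumer ring handler */
    #define NB_MBUF   8192
    #define MBUF_SIZE (2048 + sizeof(struct rte_mbuf) + RTE_PKTMBUF_HEADROOM)

    struct rte_mempool *mp = rte_mempool_create("mbuf_pool", NB_MBUF,
            MBUF_SIZE, 32 /* cache_size */,
            sizeof(struct rte_pktmbuf_pool_private),
            rte_pktmbuf_pool_init, NULL,
            rte_pktmbuf_init, NULL,
            rte_socket_id(), 0 /* flags */);
    if (mp == NULL)
        rte_exit(EXIT_FAILURE, "cannot create mempool\n");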

Has anyone faced a similar issue? Any help with this would be great for my debugging. Thanks in advance.

--
Regards,
Souvik

^ permalink raw reply	[flat|nested] 4+ messages in thread

end of thread

Thread overview: 4+ messages
-- links below jump to the message on this page --
2017-03-17  3:46 [dpdk-dev] Getting crash in rte_pktmbuf_alloc with 16.11 DPDK Dey, Souvik
2017-03-17  5:09 ` Yuanhan Liu
2017-03-20 13:05   ` Olivier Matz
