DPDK usage discussions
* [dpdk-users] Segfault while freeing mbuf in the primary process
From: Farbod @ 2020-10-23 14:38 UTC
  To: users

Hi,

I am using DPDK multi-process mode to send packets from one
application (secondary) to another application (primary) over rings.
The primary application's role is simply to free each packet with
`rte_pktmbuf_free()`.
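
For reference, the primary's receive-and-free loop is essentially the
following (a simplified sketch, not my real code; the ring name
"PKT_RING" and the burst size are placeholders):

```
#include <rte_ring.h>
#include <rte_mbuf.h>

#define BURST_SIZE 32

/* Receive-and-free loop in the primary process. */
static void
free_loop(void)
{
	struct rte_ring *ring = rte_ring_lookup("PKT_RING");
	struct rte_mbuf *bufs[BURST_SIZE];
	unsigned int i, n;

	if (ring == NULL)
		return;

	for (;;) {
		n = rte_ring_dequeue_burst(ring, (void **)bufs,
					   BURST_SIZE, NULL);
		for (i = 0; i < n; i++)
			rte_pktmbuf_free(bufs[i]); /* <- segfaults here */
	}
}
```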

I encounter a segmentation fault (SIGSEGV) at the call to
`rte_pktmbuf_free()`.

My DPDK version is 19.11.1.

```

Signal: 11 (Segmentation fault), si_code: 1 (SEGV_MAPERR: address not 
mapped to object)
Backtrace (recent calls first) ---
(0): (+0x81a1ee) [0x562d35df51ee]
     bucket_enqueue_single at 
---/dpdk-19.11.1/drivers/mempool/bucket/rte_mempool_bucket.c:111 
(discriminator 3)
          108:   addr &= bd->bucket_page_mask;
          109:   hdr = (struct bucket_header *)addr;
          110:
       -> 111:   if (likely(hdr->lcore_id == lcore_id)) {
          112:           if (hdr->fill_cnt < bd->obj_per_bucket - 1) {
          113:                   hdr->fill_cnt++;
          114:           } else {
      (inlined by) bucket_enqueue at 
---/dpdk-19.11.1/drivers/mempool/bucket/rte_mempool_bucket.c:148 
(discriminator 3)
          145:   int rc = 0;
          146:
          147:   for (i = 0; i < n; i++) {
       -> 148:           rc = bucket_enqueue_single(bd, obj_table[i]);
          149:           RTE_ASSERT(rc == 0);
          150:   }
[0x562d35cd2f8c]
     rte_mempool_ops_enqueue_bulk at 
---/dpdk-19.11.1/build/include/rte_mempool.h:786
       -> 786:   return ops->enqueue(mp, obj_table, n);
      (inlined by) __mempool_generic_put at 
---/dpdk-19.11.1/build/include/rte_mempool.h:1329
       -> 1329:          rte_mempool_ops_enqueue_bulk(mp, 
&cache->objs[cache->size],
      (inlined by) rte_mempool_generic_put at 
---/dpdk-19.11.1/build/include/rte_mempool.h:1365
       -> 1365:  __mempool_generic_put(mp, obj_table, n, cache);
      (inlined by) rte_mempool_put_bulk at 
---/dpdk-19.11.1/build/include/rte_mempool.h:1388
       -> 1388:  rte_mempool_generic_put(mp, obj_table, n, cache);
      (inlined by) rte_mempool_put at 
---/dpdk-19.11.1/build/include/rte_mempool.h:1406
       -> 1406:  rte_mempool_put_bulk(mp, &obj, 1);
      (inlined by) rte_mbuf_raw_free at 
---/dpdk-19.11.1/build/include/rte_mbuf.h:579
       -> 579:   rte_mempool_put(m->pool, m);
      (inlined by) rte_pktmbuf_free_seg at 
---/dpdk-19.11.1/build/include/rte_mbuf.h:1223
       -> 1223:          rte_mbuf_raw_free(m);
      (inlined by) rte_pktmbuf_free at ---/rte_mbuf.h:1244
       -> 1244:         rte_pktmbuf_free_seg(m);
      (inlined by) ?? at --- /packet.h:199
       -> 199:    rte_pktmbuf_free(reinterpret_cast<struct rte_mbuf *>(pkt));

```

I have made sure that the primary and secondary processes do not share
any CPU cores. The packets received in the primary application are
valid and the data inside them is readable. The crash only happens
when I try to free the mbuf structures.
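
For what it's worth, one way to rule out header corruption before the
free (a sketch, not something from my original code) is DPDK's
`rte_mbuf_sanity_check()`, which panics with a reason if the mbuf
header is inconsistent:

```
#include <rte_mbuf.h>

/* Aborts with a diagnostic (via rte_panic) if the mbuf header is
 * inconsistent, e.g. a bad nb_segs or a zero reference count. */
static void
checked_free(struct rte_mbuf *m)
{
	rte_mbuf_sanity_check(m, 1); /* 1 = m is a packet header mbuf */
	rte_pktmbuf_free(m);
}
```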

I would like to mention that the applications are in two different 
projects and are built separately.

Thank you



* Re: [dpdk-users] Segfault while freeing mbuf in the primary process
From: Farbod @ 2020-10-24 14:24 UTC
  To: users

Hi,

Regarding the segfault question I asked yesterday: while searching the
internet, I stumbled upon an email on the DPDK users list that explains
the solution to a similar problem. (Link to the email:
https://inbox.dpdk.org/users/8AC28827-5B04-4392-AFB3-AD259DFBECA9@fireeye.com/t/#m3f14d7e92c0a0b3c530717632ca5047420083112
)

As Andrew Rybchenko mentioned in that email, the way the two
applications are built can cause problems with the mempool operations.
I do not fully understand how the mempool handlers work or how the
order of shared libraries affects DPDK, but I can confirm that after
changing one of the applications' Makefiles I got past this problem,
and things are working properly now.
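
From what I gathered afterwards, the reason the order matters is
roughly the following (a simplified sketch of the DPDK 19.11
internals, not the verbatim source):

```
#include <rte_mempool.h>

/* Each handler library adds its ops to a per-process table from a
 * constructor at load time, roughly:
 *
 *     MEMPOOL_REGISTER_OPS(ops);   // e.g. the "bucket" handler
 *
 * The mempool itself lives in shared memory and stores only an integer
 * index (mp->ops_index) into that per-process table. An enqueue (free)
 * resolves the index locally: */
static int
enqueue_via_ops(struct rte_mempool *mp, void * const *obj_table,
		unsigned int n)
{
	struct rte_mempool_ops *ops = rte_mempool_get_ops(mp->ops_index);

	/* If the two processes loaded the handler libraries in different
	 * orders, their tables differ, and the same ops_index resolves to
	 * different handlers in each process -- which is how the bucket
	 * handler ended up enqueueing into a pool it does not manage. */
	return ops->enqueue(mp, obj_table, n);
}
```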

It looks like there is a note about this in the DPDK programmer's
guide. (Link to the guide:
https://doc.dpdk.org/guides/prog_guide/mempool_lib.html )
An excerpt from the guide:

```
When running a multi-process application with shared libraries, the -d
arguments for mempool handlers *must be specified in the same order for
all processes* to ensure correct operation.
```

I am not using `-d` when running my applications; maybe I should. I
just wanted to share my findings with you, and to thank you all for
creating such a community.
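
For anyone who hits the same thing, a hypothetical pair of launch
commands (the binary and library names are placeholders that depend on
your build; the only point is that the `-d` arguments appear in the
same order for both processes):

```
./primary_app   -l 0-1 --proc-type=primary \
    -d librte_mempool_ring.so -d librte_mempool_bucket.so
./secondary_app -l 2-3 --proc-type=secondary \
    -d librte_mempool_ring.so -d librte_mempool_bucket.so
```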

Thank you
~ Farbod



