DPDK usage discussions
From: Farbod <fshahinfar1@gmail.com>
To: users@dpdk.org
Subject: Re: [dpdk-users] Segfault while freeing mbuf in the primary process
Date: Sat, 24 Oct 2020 17:54:28 +0330	[thread overview]
Message-ID: <de7dd650-eece-0015-5392-5912a60f0cdc@gmail.com> (raw)
In-Reply-To: <0123fbc4-ceda-e67f-dbef-959129a95303@gmail.com>

Hi,

Regarding the segfault question I asked yesterday: while searching the 
internet I stumbled upon an email on the DPDK users list that explains 
the solution to a similar problem. (Link to the 
email: 
https://inbox.dpdk.org/users/8AC28827-5B04-4392-AFB3-AD259DFBECA9@fireeye.com/t/#m3f14d7e92c0a0b3c530717632ca5047420083112 
)

As Andrew Rybchenko mentioned in that email, the way the two 
applications are built can cause problems with mempool operations. I do 
not fully understand how the mempool handlers work or how the order of 
shared libraries affects the DPDK system, but I can confirm that after 
changing one of the applications' Makefiles I got past this problem and 
things are working properly now.

It looks like there is a note about this in the DPDK guide. (Link to 
the DPDK guide: https://doc.dpdk.org/guides/prog_guide/mempool_lib.html )
Excerpt from the guide:
```
When running a multi-process application with shared libraries, the -d
arguments for mempool handlers /must be specified in the same order for
all processes/ to ensure correct operation.
```
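
Concretely, I believe that means launching every process with identical 
`-d` flags in identical order; something like the following (the binary 
names and library file names here are only placeholders for illustration):

```shell
# Hypothetical example: both processes pass the mempool handler
# drivers with -d in the SAME order, so each process builds the
# same ops table and the ops index stored in shared memory stays
# valid in both.

# primary (placeholder binary name)
./primary-app -l 0-1 --proc-type=primary \
    -d librte_mempool_ring.so \
    -d librte_mempool_bucket.so

# secondary (placeholder binary name) -- note the same -d order
./secondary-app -l 2-3 --proc-type=secondary \
    -d librte_mempool_ring.so \
    -d librte_mempool_bucket.so
```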

I am not passing `-d` when running my applications; maybe I should. I 
just wanted to share my findings with you, and to thank you all for 
building such a community.

Thank you
~ Farbod


On 10/23/20 6:08 PM, Farbod wrote:
> Hi,
>
> I am using DPDK multi-process mode to send packets from one 
> application (the secondary) to another (the primary) over rings.
> The primary application's role is simply to free each packet with 
> `rte_pktmbuf_free()`.
>
> I hit a segmentation fault at the `rte_pktmbuf_free()` call.
>
> My DPDK version is 19.11.1.
>
> ```
> Signal: 11 (Segmentation fault), si_code: 1 (SEGV_MAPERR: address not mapped to object)
> Backtrace (recent calls first) ---
> (0): (+0x81a1ee) [0x562d35df51ee]
>     bucket_enqueue_single at ---/dpdk-19.11.1/drivers/mempool/bucket/rte_mempool_bucket.c:111 (discriminator 3)
>          108:   addr &= bd->bucket_page_mask;
>          109:   hdr = (struct bucket_header *)addr;
>          110:
>       -> 111:   if (likely(hdr->lcore_id == lcore_id)) {
>          112:           if (hdr->fill_cnt < bd->obj_per_bucket - 1) {
>          113:                   hdr->fill_cnt++;
>          114:           } else {
>      (inlined by) bucket_enqueue at ---/dpdk-19.11.1/drivers/mempool/bucket/rte_mempool_bucket.c:148 (discriminator 3)
>          145:   int rc = 0;
>          146:
>          147:   for (i = 0; i < n; i++) {
>       -> 148:           rc = bucket_enqueue_single(bd, obj_table[i]);
>          149:           RTE_ASSERT(rc == 0);
>          150:   }
>          151:   if (local_stack->top > bd->bucket_stack_thresh) {
> [0x562d35cd2f8c]
>     rte_mempool_ops_enqueue_bulk at ---/dpdk-19.11.1/build/include/rte_mempool.h:786
>       -> 786:   return ops->enqueue(mp, obj_table, n);
>      (inlined by) __mempool_generic_put at ---/dpdk-19.11.1/build/include/rte_mempool.h:1329
>       -> 1329:  rte_mempool_ops_enqueue_bulk(mp, &cache->objs[cache->size],
>      (inlined by) rte_mempool_generic_put at ---/dpdk-19.11.1/build/include/rte_mempool.h:1365
>       -> 1365:  __mempool_generic_put(mp, obj_table, n, cache);
>      (inlined by) rte_mempool_put_bulk at ---/dpdk-19.11.1/build/include/rte_mempool.h:1388
>       -> 1388:  rte_mempool_generic_put(mp, obj_table, n, cache);
>      (inlined by) rte_mempool_put at ---/dpdk-19.11.1/build/include/rte_mempool.h:1406
>       -> 1406:  rte_mempool_put_bulk(mp, &obj, 1);
>      (inlined by) rte_mbuf_raw_free at ---/dpdk-19.11.1/build/include/rte_mbuf.h:579
>       -> 579:   rte_mempool_put(m->pool, m);
>      (inlined by) rte_pktmbuf_free_seg at ---/dpdk-19.11.1/build/include/rte_mbuf.h:1223
>       -> 1223:  rte_mbuf_raw_free(m);
>      (inlined by) rte_pktmbuf_free at ---/rte_mbuf.h:1244
>       -> 1244:  rte_pktmbuf_free_seg(m);
>      (inlined by) ?? at ---/packet.h:199
>       -> 199:   rte_pktmbuf_free(reinterpret_cast<struct rte_mbuf *>(pkt));
> ```
>
> I have made sure that the primary and secondary processes do not 
> share any CPU core. The packets received in the primary application 
> are valid and the data inside them is readable. It only runs into the 
> segfault when I try to free the mbuf structures.
>
> I would like to mention that the applications are in two different 
> projects and are built separately.
>
> Thank you
>

Thread overview: 2+ messages
2020-10-23 14:38 Farbod
2020-10-24 14:24 ` Farbod [this message]
