From: Farbod
To: users@dpdk.org
Date: Sat, 24 Oct 2020 17:54:28 +0330
Subject: Re: [dpdk-users] Segfault while freeing mbuf in the primary process

Hi,

Regarding the SEGFAULT question I asked yesterday: while searching the
internet I stumbled upon an email on the DPDK users list that explains the
solution to a similar problem. (Link to the email:
https://inbox.dpdk.org/users/8AC28827-5B04-4392-AFB3-AD259DFBECA9@fireeye.com/t/#m3f14d7e92c0a0b3c530717632ca5047420083112 )

As Andrew Rybchenko mentioned in that email, the way the two applications
are built can result in issues with mempool operations. I do not understand
how the mempool handlers work or how the order of shared libraries affects
the DPDK system, but I can confirm that after changing one of the
applications' Makefiles I got past this problem, and things are working
properly now.

There is a note in the DPDK guide about this subject.
(Link to the DPDK guide: https://doc.dpdk.org/guides/prog_guide/mempool_lib.html )

Excerpt from the DPDK guide:

```
When running a multi-process application with shared libraries, the -d
arguments for mempool handlers must be specified in the same order for
all processes to ensure correct operation.
```

I am not passing `-d` when I run my applications; maybe I should.

I just wanted to share my findings with you, and to thank you all for
creating such a community.

Thank you
~ Farbod

On 10/23/20 6:08 PM, Farbod wrote:
> Hi,
>
> I am using DPDK multi-process mode to send packets from one
> application (the secondary) to another (the primary) using rings. The
> primary application's role is just to free the packets with
> `rte_pktmbuf_free()`.
>
> I encounter a SEGFAULT in the call to `rte_pktmbuf_free()`.
>
> My DPDK version is 19.11.1.
>
> ```
> Signal: 11 (Segmentation fault), si_code: 1 (SEGV_MAPERR: address not
> mapped to object)
> Backtrace (recent calls first) ---
> (0): (+0x81a1ee) [0x562d35df51ee]
>     bucket_enqueue_single at
> ---/dpdk-19.11.1/drivers/mempool/bucket/rte_mempool_bucket.c:111
> (discriminator 3)
>          108:   addr &= bd->bucket_page_mask;
>          109:   hdr = (struct bucket_header *)addr;
>          110:
>       -> 111:   if (likely(hdr->lcore_id == lcore_id)) {
>          112:           if (hdr->fill_cnt < bd->obj_per_bucket - 1) {
>          113:                   hdr->fill_cnt++;
>          114:           } else {
>      (inlined by) bucket_enqueue at
> ---/dpdk-19.11.1/drivers/mempool/bucket/rte_mempool_bucket.c:148
> (discriminator 3)
>          145:   int rc = 0;
>          146:
>          147:   for (i = 0; i < n; i++) {
>       -> 148:           rc = bucket_enqueue_single(bd, obj_table[i]);
>          149:           RTE_ASSERT(rc == 0);
>          150:   }
>          151:   if (local_stack->top > bd->bucket_stack_thresh) {
> [0x562d35cd2f8c]
>     rte_mempool_ops_enqueue_bulk at
> ---/dpdk-19.11.1/build/include/rte_mempool.h:786
>       -> 786:   return ops->enqueue(mp, obj_table, n);
>      (inlined by) __mempool_generic_put at
> ---/dpdk-19.11.1/build/include/rte_mempool.h:1329
>       -> 1329:          rte_mempool_ops_enqueue_bulk(mp,
> &cache->objs[cache->size],
>      (inlined by) rte_mempool_generic_put at
> ---/dpdk-19.11.1/build/include/rte_mempool.h:1365
>       -> 1365:  __mempool_generic_put(mp, obj_table, n, cache);
>      (inlined by) rte_mempool_put_bulk at
> ---/dpdk-19.11.1/build/include/rte_mempool.h:1388
>       -> 1388:  rte_mempool_generic_put(mp, obj_table, n, cache);
>      (inlined by) rte_mempool_put at
> ---/dpdk-19.11.1/build/include/rte_mempool.h:1406
>       -> 1406:  rte_mempool_put_bulk(mp, &obj, 1);
>      (inlined by) rte_mbuf_raw_free at
> ---/dpdk-19.11.1/build/include/rte_mbuf.h:579
>       -> 579:   rte_mempool_put(m->pool, m);
>      (inlined by) rte_pktmbuf_free_seg at
> ---/dpdk-19.11.1/build/include/rte_mbuf.h:1223
>       -> 1223:          rte_mbuf_raw_free(m);
>      (inlined by) rte_pktmbuf_free at ---/rte_mbuf.h:1244
>       -> 1244:          rte_pktmbuf_free_seg(m);
>      (inlined by) ?? at ---/packet.h:199
>       -> 199:     rte_pktmbuf_free(reinterpret_cast *>(pkt));
> ```
>
> I have made sure that the primary and secondary processes do not share
> any CPU core. The packets received in the primary application are
> valid and the information inside them is readable. It only hits the
> SEGFAULT when I try to free the mbuf structures.
>
> I would like to mention that the two applications are in different
> projects and are built separately.
>
> Thank you
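
[Editor's note] For anyone hitting the same crash, the guide's advice quoted
above might look like the following in practice. This is only a sketch: the
binary names, core lists, and whether these particular handler libraries are
in use are assumptions, not taken from the thread. The point is that every
process of the multi-process group passes the same `-d` handler libraries in
the same order, so both processes resolve `ops->enqueue` to the same handler
index.

```shell
# Hypothetical launch commands (binary names and core lists are
# placeholders). Both processes load the same mempool handler drivers,
# in the same order, via the EAL -d option.

# Primary process:
./primary_app -l 0-1 --proc-type=primary \
    -d librte_mempool_ring.so \
    -d librte_mempool_bucket.so

# Secondary process -- identical -d list, identical order:
./secondary_app -l 2-3 --proc-type=secondary \
    -d librte_mempool_ring.so \
    -d librte_mempool_bucket.so
```

If the two processes loaded these libraries in different orders, the mempool
ops table indices would disagree between processes, and a free in the primary
could dispatch the enqueue through the wrong handler, as apparently happened
in the backtrace above.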