From: Andrew Rybchenko
To: Kyle Ames, "users@dpdk.org"
Date: Tue, 20 Aug 2019 23:49:57 +0300
Message-ID: <7ea67bbc-cf5d-7352-0fe2-0ad126629ab2@solarflare.com>
In-Reply-To: <8AC28827-5B04-4392-AFB3-AD259DFBECA9@fireeye.com>
References: <8AC28827-5B04-4392-AFB3-AD259DFBECA9@fireeye.com>
Subject: Re: [dpdk-users] Mbuf Free Segfault in Secondary Application
List-Id: DPDK usage discussions

Hello,

On 8/20/19 7:23 PM, Kyle Ames wrote:
> I'm running into an issue with primary/secondary DPDK processes. I am
> using DPDK 19.02.
>
> I'm trying to explore a setup where one process pulls packets off the
> NIC and then sends them over an rte_ring for additional processing.
> Unlike the client_server_mp example, I don't need to send the packets
> out a given port in the client. Once the client is done with them,
> they can just go back into the mbuf mempool. In order to test this, I
> took the mp_client example and modified it to immediately call
> rte_pktmbuf_free on the packet, and not do anything else with it,
> after receiving the packet over the shared ring.
>
> This works fine for the first 1.5*N packets, where N is the value set
> for the per-lcore cache. Calling rte_pktmbuf_free on the next packet
> will segfault in bucket_enqueue. (backtrace from GDB below)
>
> Program received signal SIGSEGV, Segmentation fault.
> 0x0000000000593822 in bucket_enqueue ()
> Missing separate debuginfos, use: debuginfo-install
> glibc-2.17-196.el7_4.2.x86_64 libgcc-4.8.5-16.el7.x86_64
> numactl-libs-2.0.9-6.el7_2.x86_64
> (gdb) backtrace
> #0  0x0000000000593822 in bucket_enqueue ()

I doubt that the bucket mempool is used intentionally. If it is not, my
guess is that shared libraries are in use, the mempool libraries are
picked up in a different order by the two processes, and the drivers
therefore got different mempool ops indexes. As far as I remember, the
documentation says that shared libraries should be specified in the
same order in the primary and secondary processes. A small diagnostic
sketch at the end of this message shows one way to check this.

Andrew.

> #1  0x00000000004769f1 in rte_mempool_ops_enqueue_bulk (n=1,
>     obj_table=0x7fffffffe398, mp=<optimized out>)
>     at /home/kames/code/3rdparty/dpdk-hack/dpdk/build/include/rte_mempool.h:704
> #2  __mempool_generic_put (cache=<optimized out>, n=1,
>     obj_table=0x7fffffffe398, mp=<optimized out>)
>     at /home/kames/code/3rdparty/dpdk-hack/dpdk/build/include/rte_mempool.h:1263
> #3  rte_mempool_generic_put (cache=<optimized out>, n=1,
>     obj_table=0x7fffffffe398, mp=<optimized out>)
>     at /home/kames/code/3rdparty/dpdk-hack/dpdk/build/include/rte_mempool.h:1285
> #4  rte_mempool_put_bulk (n=1, obj_table=0x7fffffffe398,
>     mp=<optimized out>)
>     at /home/kames/code/3rdparty/dpdk-hack/dpdk/build/include/rte_mempool.h:1308
> #5  rte_mempool_put (obj=0x100800040, mp=<optimized out>)
>     at /home/kames/code/3rdparty/dpdk-hack/dpdk/build/include/rte_mempool.h:1326
> #6  rte_mbuf_raw_free (m=0x100800040)
>     at /home/kames/code/3rdparty/dpdk-hack/dpdk/build/include/rte_mbuf.h:1185
> #7  rte_pktmbuf_free_seg (m=<optimized out>)
>     at /home/kames/code/3rdparty/dpdk-hack/dpdk/build/include/rte_mbuf.h:1807
> #8  rte_pktmbuf_free (m=0x100800040)
>     at /home/kames/code/3rdparty/dpdk-hack/dpdk/build/include/rte_mbuf.h:1828
> #9  main (argc=<optimized out>, argv=<optimized out>)
>     at /home/kames/code/3rdparty/dpdk-hack/dpdk/examples/multi_process/client_server_mp/mp_client/client.c:90
>
> I changed the cache size a few times, and the packet in the client
> that segfaults on free is always the 1.5*N'th packet. This happens
> even if I set the cache_size to zero on mbuf pool creation. (The first
> mbuf free immediately segfaults.)
>
> I'm a bit stuck at the moment. There's clearly a pattern/interaction
> of some sort, but I don't know what it is or what to do about it. Is
> this even the right approach for such a scenario?
>
> -Kyle Ames
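
For reference, a minimal sketch of the modified client loop described
above: dequeue mbufs from the shared ring and free them immediately.
This is a reconstruction based on the client_server_mp example, not the
actual code; the ring variable, burst size and helper name are
assumptions.

#include <rte_ring.h>
#include <rte_mbuf.h>

#define PKT_READ_SIZE 32  /* burst size used by the mp_client example */

/* Drain the ring filled by the primary process and immediately return
 * every mbuf to its mempool instead of forwarding it. */
static void
drain_ring(struct rte_ring *rx_ring)
{
        void *pkts[PKT_READ_SIZE];
        unsigned int i, n;

        for (;;) {
                n = rte_ring_dequeue_burst(rx_ring, pkts,
                                           PKT_READ_SIZE, NULL);

                /* rte_pktmbuf_free() ends up calling the pool's enqueue
                 * op, which is where bucket_enqueue() shows up in the
                 * backtrace above. */
                for (i = 0; i < n; i++)
                        rte_pktmbuf_free((struct rte_mbuf *)pkts[i]);
        }
}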
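
And a sketch of the check suggested in the reply above: in both the
primary and the secondary process, look up the mbuf pool and print which
mempool ops its ops_index resolves to. If the two processes report
different names (for example "bucket" in one and "ring_mp_mc" in the
other), the mempool driver libraries were registered in a different
order. This is an untested illustration; the pool name "mbuf_pool" is
an assumption.

#include <stdio.h>
#include <rte_mempool.h>

static void
print_mempool_ops(const char *pool_name)
{
        struct rte_mempool *mp = rte_mempool_lookup(pool_name);

        if (mp == NULL) {
                printf("mempool %s not found\n", pool_name);
                return;
        }

        /* ops_index lives in the shared rte_mempool structure, but the
         * rte_mempool_ops_table it indexes is filled per process, in the
         * order the mempool driver libraries register themselves. */
        printf("%s: ops_index=%d ops_name=%s\n", pool_name,
               mp->ops_index, rte_mempool_get_ops(mp->ops_index)->name);
}

/* Call print_mempool_ops("mbuf_pool") in both processes after EAL init.
 * If the names differ, start (or link) both processes with the same
 * libraries in the same order so the indexes line up. */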