From: Matteo Lanzuisi
To: dev@dpdk.org
Date: Wed, 8 Aug 2018 11:56:43 +0200
Subject: [dpdk-dev] Multi-thread mempool usage

Hi all,

I recently began using the "dpdk-17.11-11.el7.x86_64" rpm (RedHat rpm) on RedHat 7.5, kernel 3.10.0-862.6.3.el7.x86_64, while porting an application from RH6 to RH7. On RH6 I used dpdk-2.2.0.
This application is made up of one or more threads (each on a different logical core) reading packets from i40e interfaces. Each thread can run the following code when it receives a specific packet:

    RTE_LCORE_FOREACH(lcore_id) {
        /* mempools are created one per logical core */
        result = rte_mempool_get(cea_main_lcore_conf[lcore_id].de_conf.cmd_pool,
                                 (VOID_P *) &new_work);
        /* debug print: on my server this should never fire, but with
         * multiple threads it always fires on the last logical core!!!! */
        if (((uint64_t)(new_work)) < 0x7f0000000000)
            printf("Result %d, starting lcore %u, receiving lcore %u, pointer %p\n",
                   result, rte_lcore_id(), lcore_id, new_work);
        if (result == 0)
        {
            /* use of the memory obtained from the mempool...
             * <<<<<- this is where the application crashes!!!! */
            new_work->command = command;
            /* enqueue the obtained buffer on the ring of each lcore */
            result = rte_ring_enqueue(cea_main_lcore_conf[lcore_id].de_conf.cmd_ring,
                                      (VOID_P) new_work);
            /* check result value ... */
        }
        else
        {
            /* do something if result != 0 ... */
        }
    }

This code worked perfectly (never had an issue) on dpdk-2.2.0, but on dpdk-17.11, if more than one thread runs these operations, after some time the "new_work" pointer is no longer valid and the application crashes when it is dereferenced. It seems these calls cannot be made by more than one thread simultaneously. I also tried many 2017 and 2018 DPDK versions without success.

Is this usage still possible on the newer DPDK versions, or do I have to change my application so that this code is called by only one lcore at a time?

Matteo