From: "Wiles, Keith"
To: Matteo Lanzuisi
CC: Olivier Matz, "dev@dpdk.org"
Date: Fri, 24 Aug 2018 16:47:33 +0000
Message-ID: <9CABB611-8AD2-4E4F-BDD9-6BEB803CBB22@intel.com>
In-Reply-To: <2d1b12b7-62c9-bfd8-d899-db9990f4f44d@resi.it>
Subject: Re: [dpdk-dev] Multi-thread mempool usage

> On Aug 24, 2018, at 9:44 AM, Matteo Lanzuisi wrote:
> 
> Hi,
> 
> I used valgrind again for a very long time, and it told me nothing strange is happening in my code.
> After that, I changed my code this way:
> 
> unsigned lcore_id_start = rte_lcore_id();
> RTE_LCORE_FOREACH(lcore_id)
> {
>     if (lcore_id_start != lcore_id)    // <--------- before this change, every lcore could use its own mempool and enqueue to its own ring

Something in the back of my head tells me this is correct, but I have no real reason :-(

If this works then I guess it is OK, but it would be nice to understand why it works with this fix. Unless you have another thread running on this lcore doing a get/put, I do not see the problem.

>     {
>         new_work = NULL;
>         result = rte_mempool_get(cea_main_lcore_conf[lcore_id].de_conf.cmd_pool, (VOID_P *) &new_work);    // mempools are created one for each logical core
>         if (result == 0)
>         {
>             if (((uint64_t)(new_work)) < 0x7f0000000000)
>                 printf("Result %d, lcore di partenza %u, lcore di ricezione %u, pointer %p\n", result, rte_lcore_id(), lcore_id, new_work);    // debug print, on my server it should never happen but with multi-thread it happens always on the last logical core!!!!
>             new_work->command = command;    // usage of the memory gotten from the mempool... <<<<<- here is where the application crashes!!!!
>             result = rte_ring_enqueue(cea_main_lcore_conf[lcore_id].de_conf.cmd_ring, (VOID_P) new_work);    // enqueues the gotten buffer on the rings of all lcores
>             // check on result value ...
>         }
>         else
>         {
>             // do something if result != 0 ...
>         }
>     }
>     else
>     {
>         // don't use the mempool but call a function instead ....
>     }
> }
> 
> and now it all goes well.
> Is it possible that sending to itself could generate this issue?
> 
> Regards,
> Matteo
>
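For what it is worth, the get/enqueue sequence with the checks Olivier suggested earlier in the thread would look roughly like the sketch below. It is only a sketch built around the names from your snippet (CEA_DECODE_CMD_T, the per-lcore pool and ring); the rte_mempool_put() on a failed enqueue is an extra safety I would add so the object is not leaked when a ring fills up.

    #include <stdint.h>
    #include <rte_mempool.h>
    #include <rte_ring.h>

    /* Sketch only: CEA_DECODE_CMD_T and its "command" field come from the
     * application code above; the uint32_t type for the command value is a guess. */
    static int
    send_cmd(struct rte_mempool *cmd_pool, struct rte_ring *cmd_ring, uint32_t command)
    {
        CEA_DECODE_CMD_T *new_work = NULL;

        if (rte_mempool_get(cmd_pool, (void **)&new_work) != 0 || new_work == NULL)
            return -1;                      /* pool exhausted: handle the failure */

        new_work->command = command;        /* touch the object only after a successful get */

        if (rte_ring_enqueue(cmd_ring, new_work) != 0) {
            /* ring full (or error): give the object back so it is not leaked */
            rte_mempool_put(cmd_pool, new_work);
            return -1;
        }
        return 0;
    }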
> On 21/08/2018 16:46, Matteo Lanzuisi wrote:
>> On 21/08/2018 14:51, Wiles, Keith wrote:
>>> 
>>>> On Aug 21, 2018, at 7:44 AM, Matteo Lanzuisi wrote:
>>>> 
>>>> On 21/08/2018 14:17, Wiles, Keith wrote:
>>>>>> On Aug 21, 2018, at 7:01 AM, Matteo Lanzuisi wrote:
>>>>>> 
>>>>>> Hi
>>>>>> 
>>>>>> On 20/08/2018 18:03, Wiles, Keith wrote:
>>>>>>>> On Aug 20, 2018, at 9:47 AM, Matteo Lanzuisi wrote:
>>>>>>>> 
>>>>>>>> Hello Olivier,
>>>>>>>> 
>>>>>>>> On 13/08/2018 23:54, Olivier Matz wrote:
>>>>>>>> 
>>>>>>>>> Hello Matteo,
>>>>>>>>> 
>>>>>>>>> On Mon, Aug 13, 2018 at 03:20:44PM +0200, Matteo Lanzuisi wrote:
>>>>>>>>> 
>>>>>>>>>> Any suggestion? Any idea about this behaviour?
>>>>>>>>>> 
>>>>>>>>>> On 08/08/2018 11:56, Matteo Lanzuisi wrote:
>>>>>>>>>> 
>>>>>>>>>>> Hi all,
>>>>>>>>>>> 
>>>>>>>>>>> recently I began using the "dpdk-17.11-11.el7.x86_64" rpm (RedHat rpm) on
>>>>>>>>>>> RedHat 7.5 kernel 3.10.0-862.6.3.el7.x86_64 as a porting of an
>>>>>>>>>>> application from RH6 to RH7. On RH6 I used dpdk-2.2.0.
>>>>>>>>>>> 
>>>>>>>>>>> This application is made up of one or more threads (each one on a
>>>>>>>>>>> different logical core) reading packets from i40e interfaces.
>>>>>>>>>>> 
>>>>>>>>>>> Each thread can call the following code lines when receiving a specific
>>>>>>>>>>> packet:
>>>>>>>>>>> 
>>>>>>>>>>> RTE_LCORE_FOREACH(lcore_id)
>>>>>>>>>>> {
>>>>>>>>>>>         result =
>>>>>>>>>>> rte_mempool_get(cea_main_lcore_conf[lcore_id].de_conf.cmd_pool, (VOID_P
>>>>>>>>>>> *) &new_work);        // mempools are created one for each logical core
>>>>>>>>>>>         if (((uint64_t)(new_work)) < 0x7f0000000000)
>>>>>>>>>>>             printf("Result %d, lcore di partenza %u, lcore di ricezione
>>>>>>>>>>> %u, pointer %p\n", result, rte_lcore_id(), lcore_id, new_work);    //
>>>>>>>>>>> debug print, on my server it should never happen but with multi-thread
>>>>>>>>>>> happens always on the last logical core!!!!
>>>>>>>>>>> 
>>>>>>>>> Here, checking the value of new_work looks wrong to me, before
>>>>>>>>> ensuring that result == 0. At least, new_work should be set to
>>>>>>>>> NULL before calling rte_mempool_get().
>>>>>>>>> 
>>>>>>>> I put the check after result == 0, and just before the rte_mempool_get() I set new_work to NULL, but nothing changed.
>>>>>>>> The first time something goes wrong the print is
>>>>>>>> 
>>>>>>>> Result 0, lcore di partenza 1, lcore di ricezione 2, counter 635, pointer 0x880002
>>>>>>>> 
>>>>>>>> Sorry for the Italian-language print :) it means that the application is sending a message from logical core 1 to logical core 2, it's the 635th time, the result is 0 and the pointer is 0x880002, while all pointers before were 0x7ffxxxxxx.
>>>>>>>> One strange thing is that this behaviour happens always from logical core 1 to logical core 2 when the counter is 635!!!
>>>>>>>> (Sending messages from 2 to 1, or 1 to 1, or 2 to 2 is all OK.)
>>>>>>>> Another strange thing is that the pointers from counter 636 to 640 are NULL, and from 641 they begin to be good again... as you can see here below (I attached the result of a test without the "if" check on the value of new_work, and only for messages from lcore 1 to lcore 2)
>>>>>>>> 
>>>>>>>> Result 0, lcore di partenza 1, lcore di ricezione 2, counter 627, pointer 0x7ffe8a261880
>>>>>>>> Result 0, lcore di partenza 1, lcore di ricezione 2, counter 628, pointer 0x7ffe8a261900
>>>>>>>> Result 0, lcore di partenza 1, lcore di ricezione 2, counter 629, pointer 0x7ffe8a261980
>>>>>>>> Result 0, lcore di partenza 1, lcore di ricezione 2, counter 630, pointer 0x7ffe8a261a00
>>>>>>>> Result 0, lcore di partenza 1, lcore di ricezione 2, counter 631, pointer 0x7ffe8a261a80
>>>>>>>> Result 0, lcore di partenza 1, lcore di ricezione 2, counter 632, pointer 0x7ffe8a261b00
>>>>>>>> Result 0, lcore di partenza 1, lcore di ricezione 2, counter 633, pointer 0x7ffe8a261b80
>>>>>>>> Result 0, lcore di partenza 1, lcore di ricezione 2, counter 634, pointer 0x7ffe8a261c00
>>>>>>>> Result 0, lcore di partenza 1, lcore di ricezione 2, counter 635, pointer 0x880002
>>>>>>>> Result 0, lcore di partenza 1, lcore di ricezione 2, counter 636, pointer (nil)
>>>>>>>> Result 0, lcore di partenza 1, lcore di ricezione 2, counter 637, pointer (nil)
>>>>>>>> Result 0, lcore di partenza 1, lcore di ricezione 2, counter 638, pointer (nil)
>>>>>>>> Result 0, lcore di partenza 1, lcore di ricezione 2, counter 639, pointer (nil)
>>>>>>>> Result 0, lcore di partenza 1, lcore di ricezione 2, counter 640, pointer (nil)
>>>>>>>> 
>>>>>>> This sure does seem like a memory overwrite problem, with maybe a memset(0) in the mix as well.
>>>>>>> Have you tried using hardware breakpoints with the 0x880002 or 0x00 being written into this range?
>>>>>> I put some breakpoints and found this:
>>>>>> 
>>>>>> 1 - using pointer 0x880002, the output is (the pointer comes in the middle of two rwlocks):
>>>>>> 
>>>>>> (gdb) awatch *0x880002
>>>>>> Hardware access (read/write) watchpoint 1: *0x880002
>>>>>> (gdb) c
>>>>>> Continuing.
>>>>>> [New Thread 0x7fffeded5700 (LWP 19969)]
>>>>>> [New Thread 0x7fffed6d4700 (LWP 19970)]
>>>>>> [New Thread 0x7fffeced3700 (LWP 19971)]
>>>>>> [New Thread 0x7fffec6d2700 (LWP 19972)]
>>>>>> [New Thread 0x7fffebed1700 (LWP 19973)]
>>>>>> [New Thread 0x7fffeb6d0700 (LWP 19974)]
>>>>>> Hardware access (read/write) watchpoint 1: *0x880002
>>>>>> 
>>>>>> Value = 0
>>>>>> rte_rwlock_init (rwl=0x880000 )
>>>>>>     at /usr/share/dpdk/x86_64-default-linuxapp-gcc/include/generic/rte_rwlock.h:81
>>>>>> 81      }
>>>>>> (gdb) c
>>>>>> Continuing.
>>>>>> Hardware access (read/write) watchpoint 1: *0x880002
>>>>> These are most likely false positive hits and not the issue.
>>>>>> Value = 0
>>>>>> rte_rwlock_init (rwl=0x880004 )
>>>>>>     at /usr/share/dpdk/x86_64-default-linuxapp-gcc/include/generic/rte_rwlock.h:81
>>>>>> 81      }
>>>>>> (gdb) c
>>>>>> Continuing.
>>>>>> 
>>>>>> 2 - when using pointers less than or equal to 0x7ffe8a261d64 (in the range of the mempool), gdb tells me nothing about them (I don't use them, I just get them from the pool and then put them back in the pool);
>>>>>> 
>>>>>> 3 - when using pointer 0x7ffe8a261d65 or greater, this is the output of gdb:
>>>>>> 
>>>>>> (gdb) awatch *(int *)0x7ffe8a261d65
>>>>>> Hardware access (read/write) watchpoint 1: *(int *)0x7ffe8a261d65
>>>>>> (gdb) c
>>>>>> Continuing.
>>>>>> [New Thread 0x7fffeded5700 (LWP 17689)]
>>>>>> [New Thread 0x7fffed6d4700 (LWP 17690)]
>>>>>> [New Thread 0x7fffeced3700 (LWP 17691)]
>>>>>> [New Thread 0x7fffec6d2700 (LWP 17692)]
>>>>>> [New Thread 0x7fffebed1700 (LWP 17693)]
>>>>>> [New Thread 0x7fffeb6d0700 (LWP 17694)]
>>>>>> Hardware access (read/write) watchpoint 1: *(int *)0x7ffe8a261d65
>>>>>> 
>>>>>> Value = 0
>>>>>> 0x00007ffff3798c21 in mempool_add_elem (mp=mp@entry=0x7ffebfd8d6c0, obj=obj@entry=0x7ffe8a261d80,
>>>>>>     iova=iova@entry=4465237376) at /usr/src/debug/dpdk-17.11/lib/librte_mempool/rte_mempool.c:140
>>>>>> 140             STAILQ_INSERT_TAIL(&mp->elt_list, hdr, next);
>>>>>> (gdb) where
>>>>>> #0  0x00007ffff3798c21 in mempool_add_elem (mp=mp@entry=0x7ffebfd8d6c0, obj=obj@entry=0x7ffe8a261d80,
>>>>>>     iova=iova@entry=4465237376) at /usr/src/debug/dpdk-17.11/lib/librte_mempool/rte_mempool.c:140
>>>>>> #1  0x00007ffff37990f0 in rte_mempool_populate_iova (mp=0x7ffebfd8d6c0, vaddr=0x7ffe8a23d540 "",
>>>>>>     iova=4465087808, len=8388480, free_cb=, opaque=)
>>>>>>     at /usr/src/debug/dpdk-17.11/lib/librte_mempool/rte_mempool.c:424
>>>>>> #2  0x00007ffff379967d in rte_mempool_populate_default (mp=mp@entry=0x7ffebfd8d6c0)
>>>>>>     at /usr/src/debug/dpdk-17.11/lib/librte_mempool/rte_mempool.c:624
>>>>>> #3  0x00007ffff3799e89 in rte_mempool_create (name=, n=,
>>>>>>     elt_size=, cache_size=, private_data_size=,
>>>>>>     mp_init=0x7ffff444e410 , mp_init_arg=0x0,
>>>>>>     obj_init=0x7ffff444e330 , obj_init_arg=0x0, socket_id=0, flags=0)
>>>>>>     at /usr/src/debug/dpdk-17.11/lib/librte_mempool/rte_mempool.c:952
>>>>>> #4  0x0000000000548a52 in main (argc=16, argv=0x7fffffffe3c8)
>>>>>>     at /root/gemini-cea-4.6.0/msrc/sys/com/linux-dpdk/cea-app/../../../../sys/com/linux-dpdk/cea-app/main.c:2360
>>>>>> (gdb) c
>>>>>> Continuing.
>>>>>> Hardware access (read/write) watchpoint 1: *(int *)0x7ffe8a261d65
>>>>> This seems to be just creating a pktmbuf pool. The STAILQ_INSERT_TAIL is just putting the mempool on the main tailq list for mempools in DPDK.
>>>>> 
>>>>>> Old value = 0
>>>>>> New value = -402653184
>>>>>> 0x00007ffff3798c24 in mempool_add_elem (mp=mp@entry=0x7ffebfd8d6c0, obj=obj@entry=0x7ffe8a261e00,
>>>>>>     iova=iova@entry=4465237504) at /usr/src/debug/dpdk-17.11/lib/librte_mempool/rte_mempool.c:140
>>>>>> 140             STAILQ_INSERT_TAIL(&mp->elt_list, hdr, next);
>>>>>> (gdb) where
>>>>>> #0  0x00007ffff3798c24 in mempool_add_elem (mp=mp@entry=0x7ffebfd8d6c0, obj=obj@entry=0x7ffe8a261e00,
>>>>>>     iova=iova@entry=4465237504) at /usr/src/debug/dpdk-17.11/lib/librte_mempool/rte_mempool.c:140
>>>>>> #1  0x00007ffff37990f0 in rte_mempool_populate_iova (mp=0x7ffebfd8d6c0, vaddr=0x7ffe8a23d540 "",
>>>>>>     iova=4465087808, len=8388480, free_cb=, opaque=)
>>>>>>     at /usr/src/debug/dpdk-17.11/lib/librte_mempool/rte_mempool.c:424
>>>>>> #2  0x00007ffff379967d in rte_mempool_populate_default (mp=mp@entry=0x7ffebfd8d6c0)
>>>>>>     at /usr/src/debug/dpdk-17.11/lib/librte_mempool/rte_mempool.c:624
>>>>>> #3  0x00007ffff3799e89 in rte_mempool_create (name=, n=,
>>>>>>     elt_size=, cache_size=, private_data_size=,
>>>>>>     mp_init=0x7ffff444e410 , mp_init_arg=0x0,
>>>>>>     obj_init=0x7ffff444e330 , obj_init_arg=0x0, socket_id=0, flags=0)
>>>>>>     at /usr/src/debug/dpdk-17.11/lib/librte_mempool/rte_mempool.c:952
>>>>>> #4  0x0000000000548a52 in main (argc=16, argv=0x7fffffffe3c8)
>>>>>>     at /root/gemini-cea-4.6.0/msrc/sys/com/linux-dpdk/cea-app/../../../../sys/com/linux-dpdk/cea-app/main.c:2360
>>>>>> (gdb) c
>>>>>> Continuing.
>>>>>> 
>>>>>> What do you think? Is it normal that mempool_add_elem is called only on certain pointers of the mempool?
>>>>>> I attached the initialization of the mempool. Can this be wrong?
>>>>> All mempools with a cache size will have two queues to put memory on; one is the per-lcore list, and that one is used as a fast access queue. When the cache becomes empty, or has more entries than the cache was created with, then it pushes the extra entries to the main list of mempool data.
>>>> Why do you say "mempools with a cache size"? In my initialization this mempool has cache_size = 0.
>>> If you give a cache size then you will have a cache list per lcore; in your case you do not have a cache. BTW, not having a cache will affect performance a great deal.
>>> 
>>>>> The only time that rwlock is touched is to get/put items on the main mempool.
>>>>> 
>>>>> Just as a data point, have you tried this app on 18.08 yet? I do not see the problem yet, sorry.
>>>> I'll try 18.08 and let you know
>> 
>> Hi,
>> 
>> I tried 18.08 but nothing changed about the described behaviour.
>> I'm thinking about some overflow in my code lines, but using valgrind on my application tells me nothing more, and it seems strange to me.
>> Is there any particular way to debug memory issues on a dpdk application apart from valgrind?
>> 
>> Regards,
>> Matteo
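On the debugging question: apart from valgrind, the mempool library has some checking of its own. Rebuilding DPDK with CONFIG_RTE_LIBRTE_MEMPOOL_DEBUG=y adds per-object cookies that are verified on every get/put, and the application can also audit a pool at runtime. A minimal sketch of the kind of check I mean, assuming "pool" is one of your cea_main_lcore_conf[...].de_conf.cmd_pool pointers (check_cmd_pool is a made-up helper name):

    #include <stdio.h>
    #include <rte_mempool.h>

    /* Sketch only: call this from a safe point (e.g. a slow control path) for
     * each per-lcore command pool.  With CONFIG_RTE_LIBRTE_MEMPOOL_DEBUG=y the
     * audit also verifies the per-object cookies and panics on corruption. */
    static void
    check_cmd_pool(struct rte_mempool *pool)
    {
        rte_mempool_audit(pool);         /* consistency check of the pool bookkeeping */
        rte_mempool_dump(stdout, pool);  /* prints counts, flags and populated areas */

        printf("%s: %u objects in use, %u available\n", pool->name,
               rte_mempool_in_use_count(pool), rte_mempool_avail_count(pool));
    }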
>> 
>>>>>>>> Result 0, lcore di partenza 1, lcore di ricezione 2, counter 641, pointer 0x7ffe8a262b00
>>>>>>>> Result 0, lcore di partenza 1, lcore di ricezione 2, counter 642, pointer 0x7ffe8a262b80
>>>>>>>> Result 0, lcore di partenza 1, lcore di ricezione 2, counter 643, pointer 0x7ffe8a262d00
>>>>>>>> Result 0, lcore di partenza 1, lcore di ricezione 2, counter 644, pointer 0x7ffe8a262d80
>>>>>>>> Result 0, lcore di partenza 1, lcore di ricezione 2, counter 645, pointer 0x7ffe8a262e00
>>>>>>>> 
>>>>>>>>>>>         if (result == 0)
>>>>>>>>>>>         {
>>>>>>>>>>>             new_work->command = command;    // usage of the memory gotten
>>>>>>>>>>> from the mempool... <<<<<- here is where the application crashes!!!!
>>>>>>>>>>> 
>>>>>>>>> Do you know why it crashes? Is it that new_work is NULL?
>>>>>>>>> 
>>>>>>>> The pointer is not NULL but it is not sequential to the others (0x880002, as written before in this email). It seems to be in a memory zone not in the DPDK hugepages, or something similar.
>>>>>>>> If I use this pointer the application crashes.
>>>>>>>> 
>>>>>>>>> Can you check how the mempool is initialized? It should be in multi
>>>>>>>>> consumer and, depending on your use case, single or multi producer.
>>>>>>>>> 
>>>>>>>> Here is the initialization of this mempool
>>>>>>>> 
>>>>>>>> cea_main_cmd_pool[i] = rte_mempool_create(pool_name,
>>>>>>>>             (unsigned int) (ikco_cmd_buffers - 1), // 65536 - 1 in this case
>>>>>>>>             sizeof (CEA_DECODE_CMD_T), // 24 bytes
>>>>>>>>             0, 0,
>>>>>>>>             rte_pktmbuf_pool_init, NULL,
>>>>>>>>             rte_pktmbuf_init, NULL,
>>>>>>>>             rte_socket_id(), 0);
>>>>>>>> 
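One thing I would double-check in this creation call, though I cannot promise it is the cause of the crash: rte_pktmbuf_pool_init and rte_pktmbuf_init are the constructors meant for mbuf pools, and rte_pktmbuf_init writes a struct rte_mbuf header into every object, which is larger than a 24-byte command element. For a pool of plain command structures I would expect NULL (or application-specific) constructors, roughly as in the sketch below; the helper name and parameters are mine, the rest mirrors your call:

    #include <rte_mempool.h>
    #include <rte_lcore.h>

    /* Sketch only: CEA_DECODE_CMD_T and ikco_cmd_buffers are the application's
     * own names from the snippet above; create_cmd_pool() is a made-up helper. */
    static struct rte_mempool *
    create_cmd_pool(const char *pool_name, unsigned int ikco_cmd_buffers)
    {
        return rte_mempool_create(pool_name,
                ikco_cmd_buffers - 1,       /* 65536 - 1 objects in this case */
                sizeof(CEA_DECODE_CMD_T),   /* plain 24-byte command struct */
                0,                          /* cache_size: 0 means every get/put hits the shared pool */
                0,                          /* private_data_size */
                NULL, NULL,                 /* no mbuf-style pool constructor */
                NULL, NULL,                 /* no mbuf-style per-object constructor */
                rte_socket_id(),
                0);                         /* default flags: multi-producer / multi-consumer safe */
    }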
>>>>>>>>> Another thing that could be checked: at all the places where you
>>>>>>>>> return your work object to the mempool, you should add a check
>>>>>>>>> that it is not NULL. Or just enabling RTE_LIBRTE_MEMPOOL_DEBUG
>>>>>>>>> could do the trick: it adds some additional checks when doing
>>>>>>>>> mempool operations.
>>>>>>>>> 
>>>>>>>> I think I have already answered this point with the prints up in the email.
>>>>>>>> 
>>>>>>>> What do you think about this behaviour?
>>>>>>>> 
>>>>>>>> Regards,
>>>>>>>> Matteo
>>>>>>>> 
>>>>>>>>>>>             result =
>>>>>>>>>>> rte_ring_enqueue(cea_main_lcore_conf[lcore_id].de_conf.cmd_ring,
>>>>>>>>>>> (VOID_P) new_work);    // enqueues the gotten buffer on the rings of all
>>>>>>>>>>> lcores
>>>>>>>>>>>             // check on result value ...
>>>>>>>>>>>         }
>>>>>>>>>>>         else
>>>>>>>>>>>         {
>>>>>>>>>>>             // do something if result != 0 ...
>>>>>>>>>>>         }
>>>>>>>>>>> }
>>>>>>>>>>> 
>>>>>>>>>>> This code worked perfectly (never had an issue) on dpdk-2.2.0, while if
>>>>>>>>>>> I use more than 1 thread doing these operations on dpdk-17.11 it happens
>>>>>>>>>>> that after some time the "new_work" pointer is not a good one, and the
>>>>>>>>>>> application crashes when using that pointer.
>>>>>>>>>>> 
>>>>>>>>>>> It seems that these lines cannot be used by more than one thread
>>>>>>>>>>> simultaneously. I also used many 2017 and 2018 dpdk versions without
>>>>>>>>>>> success.
>>>>>>>>>>> 
>>>>>>>>>>> Is this code possible on the new dpdk versions? Or do I have to change my
>>>>>>>>>>> application so that this code is called just by one lcore at a time?
>>>>>>>>>>> 
>>>>>>>>> Assuming the mempool is properly initialized, I don't see any reason
>>>>>>>>> why it would not work. There have been a lot of changes in mempool between
>>>>>>>>> dpdk-2.2.0 and dpdk-17.11, but this behavior should remain the same.
>>>>>>>>> 
>>>>>>>>> If the comments above do not help to solve the issue, it could be helpful
>>>>>>>>> to try to reproduce the issue in a minimal program, so we can help to
>>>>>>>>> review it.
>>>>>>>>> 
>>>>>>>>> Regards,
>>>>>>>>> Olivier
>>>>>>>>> 
>>>>>>> Regards,
>>>>>>> Keith
>>>>>>> 
>>>>>> Regards,
>>>>>> Matteo
>>>>>> 
>>>>> Regards,
>>>>> Keith
>>>>> 
>>>> Regards,
>>>> Matteo
>>> Regards,
>>> Keith
>>> 
>> 

Regards,
Keith