DPDK usage discussions
* [dpdk-users] Mbuf Free Segfault in Secondary Application
@ 2019-08-20 16:23 Kyle Ames
  2019-08-20 20:49 ` Andrew Rybchenko
  0 siblings, 1 reply; 3+ messages in thread
From: Kyle Ames @ 2019-08-20 16:23 UTC (permalink / raw)
  To: users

I'm running into an issue with primary/secondary DPDK processes. I am using DPDK 19.02.

I'm trying to explore a setup where one process pulls packets off the NIC and then sends them over an rte_ring for additional processing. Unlike the client_server_mp example, I don't need to send the packets out a given port in the client. Once the client is done with them, they can just go back into the mbuf mempool. In order to test this, I took the mp_client example and modified it to immediately call rte_pktmbuf_free on each packet after receiving it over the shared ring, and to do nothing else with it.
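
For clarity, the modified client loop boils down to something like the sketch below; rx_ring, PKT_READ_SIZE, and drain_ring are illustrative names rather than the exact identifiers from the example.

#include <rte_ring.h>
#include <rte_mbuf.h>

#define PKT_READ_SIZE 32

/* Dequeue a burst of mbufs from the shared ring and free each one straight
 * back to its mempool instead of forwarding it anywhere. */
static void
drain_ring(struct rte_ring *rx_ring)
{
	struct rte_mbuf *bufs[PKT_READ_SIZE];
	unsigned int i, nb;

	nb = rte_ring_dequeue_burst(rx_ring, (void **)bufs, PKT_READ_SIZE, NULL);
	for (i = 0; i < nb; i++)
		rte_pktmbuf_free(bufs[i]);
}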

This works fine for the first 1.5*N packets, where N is the value set for the per-lcore cache. Calling rte_pktmbuf_free on the next packet will segfault in bucket_enqueue. (backtrace from GDB below)

Program received signal SIGSEGV, Segmentation fault.
0x0000000000593822 in bucket_enqueue ()
Missing separate debuginfos, use: debuginfo-install glibc-2.17-196.el7_4.2.x86_64 libgcc-4.8.5-16.el7.x86_64 numactl-libs-2.0.9-6.el7_2.x86_64
(gdb) backtrace
#0  0x0000000000593822 in bucket_enqueue ()
#1  0x00000000004769f1 in rte_mempool_ops_enqueue_bulk (n=1, obj_table=0x7fffffffe398, mp=<optimized out>)
    at /home/kames/code/3rdparty/dpdk-hack/dpdk/build/include/rte_mempool.h:704
#2  __mempool_generic_put (cache=<optimized out>, n=1, obj_table=0x7fffffffe398, mp=<optimized out>)
    at /home/kames/code/3rdparty/dpdk-hack/dpdk/build/include/rte_mempool.h:1263
#3  rte_mempool_generic_put (cache=<optimized out>, n=1, obj_table=0x7fffffffe398, mp=<optimized out>)
    at /home/kames/code/3rdparty/dpdk-hack/dpdk/build/include/rte_mempool.h:1285
#4  rte_mempool_put_bulk (n=1, obj_table=0x7fffffffe398, mp=<optimized out>) at /home/kames/code/3rdparty/dpdk-hack/dpdk/build/include/rte_mempool.h:1308
#5  rte_mempool_put (obj=0x100800040, mp=<optimized out>) at /home/kames/code/3rdparty/dpdk-hack/dpdk/build/include/rte_mempool.h:1326
#6  rte_mbuf_raw_free (m=0x100800040) at /home/kames/code/3rdparty/dpdk-hack/dpdk/build/include/rte_mbuf.h:1185
#7  rte_pktmbuf_free_seg (m=<optimized out>) at /home/kames/code/3rdparty/dpdk-hack/dpdk/build/include/rte_mbuf.h:1807
#8  rte_pktmbuf_free (m=0x100800040) at /home/kames/code/3rdparty/dpdk-hack/dpdk/build/include/rte_mbuf.h:1828
#9  main (argc=<optimized out>, argv=<optimized out>)
    at /home/kames/code/3rdparty/dpdk-hack/dpdk/examples/multi_process/client_server_mp/mp_client/client.c:90

I changed the size a few times, and the packet in the client that segfaults on free is always the (1.5*N)'th packet. This happens even if I set the cache_size to zero on mbuf pool creation; in that case the very first mbuf free segfaults immediately.
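
For reference, the per-lcore cache I'm talking about is the cache_size argument passed when the server creates the mbuf pool, roughly as below; the sizes are placeholders rather than my actual values.

#include <rte_lcore.h>
#include <rte_mbuf.h>
#include <rte_mempool.h>

#define NUM_MBUFS       8192  /* placeholder pool size */
#define MBUF_CACHE_SIZE  256  /* the "N" in the 1.5*N pattern above */

static struct rte_mempool *
create_pool(void)
{
	return rte_pktmbuf_pool_create("MBUF_POOL", NUM_MBUFS,
			MBUF_CACHE_SIZE, 0 /* priv_size */,
			RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());
}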

I'm a bit stuck at the moment. There's clearly a pattern/interaction of some sort, but I don't know what it is or what to do about it. Is this even the right approach for such a scenario?

-Kyle Ames


* Re: [dpdk-users] Mbuf Free Segfault in Secondary Application
  2019-08-20 16:23 [dpdk-users] Mbuf Free Segfault in Secondary Application Kyle Ames
@ 2019-08-20 20:49 ` Andrew Rybchenko
  2019-08-21 17:32   ` Kyle Ames
  0 siblings, 1 reply; 3+ messages in thread
From: Andrew Rybchenko @ 2019-08-20 20:49 UTC (permalink / raw)
  To: Kyle Ames, users

Hello,

On 8/20/19 7:23 PM, Kyle Ames wrote:
> I'm running into an issue with primary/secondary DPDK processes. I am using DPDK 19.02.
>
> I'm trying to explore a setup where one process pulls packets off the NIC and then sends them over an rte_ring for additional processing. Unlike the client_server_mp example, I don't need to send the packets out a given port in the client. Once the client is done with them, they can just go back into the mbuf mempool. In order to test this, I took the mp_client example and modified it to immediately call rte_pktmbuf_free on each packet after receiving it over the shared ring, and to do nothing else with it.
>
> This works fine for the first 1.5*N packets, where N is the value set for the per-lcore cache. Calling rte_pktmbuf_free on the next packet will segfault in bucket_enqueue. (backtrace from GDB below)
>
> Program received signal SIGSEGV, Segmentation fault.
> 0x0000000000593822 in bucket_enqueue ()
> Missing separate debuginfos, use: debuginfo-install glibc-2.17-196.el7_4.2.x86_64 libgcc-4.8.5-16.el7.x86_64 numactl-libs-2.0.9-6.el7_2.x86_64
> (gdb) backtrace
> #0  0x0000000000593822 in bucket_enqueue ()

I doubt that the bucket mempool is used intentionally. If so, I guess
shared libraries are used, the mempool libraries are picked up in a
different order, and the drivers got different mempool ops indexes. As
far as I remember, there is documentation which says that shared
libraries should be specified in the same order in the primary and
secondary process.
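
Roughly speaking, the enqueue dispatch works like the simplified sketch
below (a paraphrase of rte_mempool.h, not the exact code): the ops index
stored in the shared mempool is resolved against each process's local
ops table, which is filled in the order the mempool drivers were
registered.

#include <rte_mempool.h>

/*
 * Simplified paraphrase of rte_mempool_ops_enqueue_bulk(): mp->ops_index
 * lives in shared memory and was chosen by the primary process, but it is
 * looked up in this process's own rte_mempool_ops_table, which is filled
 * in the order the mempool drivers were loaded/registered here. If that
 * order differs between the processes, the index can point at a different
 * driver (e.g. bucket instead of ring), hence the crash in bucket_enqueue.
 */
static int
enqueue_sketch(struct rte_mempool *mp, void * const *obj_table,
	       unsigned int n)
{
	struct rte_mempool_ops *ops = rte_mempool_get_ops(mp->ops_index);

	return ops->enqueue(mp, obj_table, n);
}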

Andrew.

> #1  0x00000000004769f1 in rte_mempool_ops_enqueue_bulk (n=1, obj_table=0x7fffffffe398, mp=<optimized out>)
>      at /home/kames/code/3rdparty/dpdk-hack/dpdk/build/include/rte_mempool.h:704
> #2  __mempool_generic_put (cache=<optimized out>, n=1, obj_table=0x7fffffffe398, mp=<optimized out>)
>      at /home/kames/code/3rdparty/dpdk-hack/dpdk/build/include/rte_mempool.h:1263
> #3  rte_mempool_generic_put (cache=<optimized out>, n=1, obj_table=0x7fffffffe398, mp=<optimized out>)
>      at /home/kames/code/3rdparty/dpdk-hack/dpdk/build/include/rte_mempool.h:1285
> #4  rte_mempool_put_bulk (n=1, obj_table=0x7fffffffe398, mp=<optimized out>) at /home/kames/code/3rdparty/dpdk-hack/dpdk/build/include/rte_mempool.h:1308
> #5  rte_mempool_put (obj=0x100800040, mp=<optimized out>) at /home/kames/code/3rdparty/dpdk-hack/dpdk/build/include/rte_mempool.h:1326
> #6  rte_mbuf_raw_free (m=0x100800040) at /home/kames/code/3rdparty/dpdk-hack/dpdk/build/include/rte_mbuf.h:1185
> #7  rte_pktmbuf_free_seg (m=<optimized out>) at /home/kames/code/3rdparty/dpdk-hack/dpdk/build/include/rte_mbuf.h:1807
> #8  rte_pktmbuf_free (m=0x100800040) at /home/kames/code/3rdparty/dpdk-hack/dpdk/build/include/rte_mbuf.h:1828
> #9  main (argc=<optimized out>, argv=<optimized out>)
>      at /home/kames/code/3rdparty/dpdk-hack/dpdk/examples/multi_process/client_server_mp/mp_client/client.c:90
>
> I changed the size a few times, and the packet in the client that segfaults on free is always the 1.5N'th packet. This happens even if I set the cache_size to zero on mbuf pool creation. (The first mbuf free immediately segfaults)
>
> I'm a bit stuck at the moment. There's clearly a pattern/interaction of some sort, but I don't know what it is or what to do about it. Is this even the right approach for such a scenario?
>
> -Kyle Ames
>

* Re: [dpdk-users] Mbuf Free Segfault in Secondary Application
  2019-08-20 20:49 ` Andrew Rybchenko
@ 2019-08-21 17:32   ` Kyle Ames
  0 siblings, 0 replies; 3+ messages in thread
From: Kyle Ames @ 2019-08-21 17:32 UTC (permalink / raw)
  To: Andrew Rybchenko, users

Andrew,

Yep, that was exactly it. I was building the client straight out of the main source tree and the other application from a shared library. As soon as I made sure to build both the same way, everything worked out perfectly.

Thanks!!

-Kyle

On 8/20/19, 4:50 PM, "Andrew Rybchenko" <arybchenko@solarflare.com> wrote:

    Hello,

    On 8/20/19 7:23 PM, Kyle Ames wrote:
    > I'm running into an issue with primary/secondary DPDK processes. I am using DPDK 19.02.
    >
    > I'm trying to explore a setup where one process pulls packets off the NIC and then sends them over an rte_ring for additional processing. Unlike the client_server_mp example, I don't need to send the packets out a given port in the client. Once the client is done with them, they can just go back into the mbuf mempool. In order to test this, I took the mp_client example and modified it to immediately call rte_pktmbuf_free on each packet after receiving it over the shared ring, and to do nothing else with it.
    >
    > This works fine for the first 1.5*N packets, where N is the value set for the per-lcore cache. Calling rte_pktmbuf_free on the next packet will segfault in bucket_enqueue. (backtrace from GDB below)
    >
    > Program received signal SIGSEGV, Segmentation fault.
    > 0x0000000000593822 in bucket_enqueue ()
    > Missing separate debuginfos, use: debuginfo-install glibc-2.17-196.el7_4.2.x86_64 libgcc-4.8.5-16.el7.x86_64 numactl-libs-2.0.9-6.el7_2.x86_64
    > (gdb) backtrace
    > #0  0x0000000000593822 in bucket_enqueue ()

    I doubt that the bucket mempool is used intentionally. If so, I guess
    shared libraries are used, the mempool libraries are picked up in a
    different order, and the drivers got different mempool ops indexes. As
    far as I remember, there is documentation which says that shared
    libraries should be specified in the same order in the primary and
    secondary process.

    Andrew.

    > #1  0x00000000004769f1 in rte_mempool_ops_enqueue_bulk (n=1, obj_table=0x7fffffffe398, mp=<optimized out>)
    >      at /home/kames/code/3rdparty/dpdk-hack/dpdk/build/include/rte_mempool.h:704
    > #2  __mempool_generic_put (cache=<optimized out>, n=1, obj_table=0x7fffffffe398, mp=<optimized out>)
    >      at /home/kames/code/3rdparty/dpdk-hack/dpdk/build/include/rte_mempool.h:1263
    > #3  rte_mempool_generic_put (cache=<optimized out>, n=1, obj_table=0x7fffffffe398, mp=<optimized out>)
    >      at /home/kames/code/3rdparty/dpdk-hack/dpdk/build/include/rte_mempool.h:1285
    > #4  rte_mempool_put_bulk (n=1, obj_table=0x7fffffffe398, mp=<optimized out>) at /home/kames/code/3rdparty/dpdk-hack/dpdk/build/include/rte_mempool.h:1308
    > #5  rte_mempool_put (obj=0x100800040, mp=<optimized out>) at /home/kames/code/3rdparty/dpdk-hack/dpdk/build/include/rte_mempool.h:1326
    > #6  rte_mbuf_raw_free (m=0x100800040) at /home/kames/code/3rdparty/dpdk-hack/dpdk/build/include/rte_mbuf.h:1185
    > #7  rte_pktmbuf_free_seg (m=<optimized out>) at /home/kames/code/3rdparty/dpdk-hack/dpdk/build/include/rte_mbuf.h:1807
    > #8  rte_pktmbuf_free (m=0x100800040) at /home/kames/code/3rdparty/dpdk-hack/dpdk/build/include/rte_mbuf.h:1828
    > #9  main (argc=<optimized out>, argv=<optimized out>)
    >      at /home/kames/code/3rdparty/dpdk-hack/dpdk/examples/multi_process/client_server_mp/mp_client/client.c:90
    >
    > I changed the size a few times, and the packet in the client that segfaults on free is always the 1.5N'th packet. This happens even if I set the cache_size to zero on mbuf pool creation. (The first mbuf free immediately segfaults)
    >
    > I'm a bit stuck at the moment. There's clearly a pattern/interaction of some sort, but I don't know what it is or what to do about it. Is this even the right approach for such a scenario?
    >
    > -Kyle Ames


