DPDK usage discussions
* [dpdk-users] New to DPDK
@ 2021-09-15 12:11 Jeevan Nailwal
  2021-09-15 12:22 ` Van Haaren, Harry
  0 siblings, 1 reply; 3+ messages in thread
From: Jeevan Nailwal @ 2021-09-15 12:11 UTC (permalink / raw)
  To: users

Hi everyone, I am new to DPDK and trying to learn how to use it. I am hitting a
SEGFAULT while sending a single packet through it.
Please find a snippet of my code below:


------------ Started my program with the initial pool configuration:

ret = rte_eal_init(argc, argv);
if (ret < 0)
        rte_exit(EXIT_FAILURE, "Error with EAL initialization\n");
argc -= ret;
argv += ret;

rte_log_set_global_level(RTE_LOG_NOTICE);

/* parse app arguments */
if (rte_lcore_count() > RTE_MAX_LCORE)
        rte_exit(EXIT_FAILURE, "Not enough cores\n");

create_mbuf_pool(128000, 128);

ret = rte_vhost_driver_register(sockpath, 0);
if (ret != 0)
        rte_exit(EXIT_FAILURE, "vhost driver register failure.\n");

rte_vhost_driver_callback_register(sockpath, &virtio_net_device_ops);
chmod(sockpath, 0777);
rte_vhost_driver_start(sockpath);



-------- Afterwards I created a thread to run my TX path, i.e. to receive
packets from DPDK:

ret = pthread_create(&proc_tx, NULL, (void *)tx_process, NULL);
if (ret != 0)
        rte_exit(EXIT_FAILURE, "Cannot create TX thread\n");

------- Finally, in my tx_process, I fetch the data whenever it is
available:

ret = rte_vhost_dequeue_burst (port, VIRTIO_TXQ, mbuf_pool, &pkt, 1);
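For context, a typical dequeue loop (a sketch only, assuming the DPDK 19.11 vhost API, a valid `mbuf_pool`, and the app-defined `VIRTIO_TXQ` queue index; `drain_vhost_tx` and `MAX_BURST` are hypothetical names) processes each packet and then returns the mbufs to the pool:

```c
#include <rte_mbuf.h>
#include <rte_vhost.h>

#define MAX_BURST 32

/* Sketch: pull up to MAX_BURST packets from the guest's TX queue,
 * process them, then free the mbufs back to their pool. */
static void drain_vhost_tx(int vid, struct rte_mempool *mbuf_pool)
{
        struct rte_mbuf *pkts[MAX_BURST];
        uint16_t i, n;

        n = rte_vhost_dequeue_burst(vid, VIRTIO_TXQ, mbuf_pool,
                                    pkts, MAX_BURST);
        for (i = 0; i < n; i++) {
                /* ... inspect or forward pkts[i] here ... */
                rte_pktmbuf_free(pkts[i]);
        }
}
```

Dequeuing a burst rather than one packet at a time amortizes the per-call cost; every dequeued mbuf must eventually be freed or the pool will drain.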


I am getting a segfault as soon as the dequeue_burst API is called. Please
find the stack trace below:

Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0x2aaaad642700 (LWP 288924)]
0x000000000071ce74 in common_ring_mc_dequeue ()
Missing separate debuginfos, use: debuginfo-install
glibc-2.17-260.el7_6.3.x86_64 libgcc-4.8.5-36.el7.x86_64
ncurses-libs-5.9-14.20130511.el7_4.x86_64 numactl-libs-2.0.9-7.el7.x86_64
(gdb) bt
#0 0x000000000071ce74 in common_ring_mc_dequeue ()
#1 0x00000000006d0a37 in virtio_dev_tx_split ()
#2 0x00000000006d2395 in rte_vhost_dequeue_burst ()
#3 0x000000000064c3ac in tx_process ()
#4 0x00002aaaaba41dd5 in start_thread () from /lib64/libpthread.so.0
#5 0x00002aaaabd53ead in clone () from /lib64/libc.so.6

Please help me here.


* Re: [dpdk-users] New to DPDK
  2021-09-15 12:11 [dpdk-users] New to DPDK Jeevan Nailwal
@ 2021-09-15 12:22 ` Van Haaren, Harry
       [not found]   ` <CA+NhQKwOvbHBv_eadQDMjarGv7Qs+1dna8s9_-JyP0eZthwTZQ@mail.gmail.com>
  0 siblings, 1 reply; 3+ messages in thread
From: Van Haaren, Harry @ 2021-09-15 12:22 UTC (permalink / raw)
  To: Jeevan Nailwal, users

> -----Original Message-----
> From: users <users-bounces@dpdk.org> On Behalf Of Jeevan Nailwal
> Sent: Wednesday, September 15, 2021 1:11 PM
> To: users@dpdk.org
> Subject: [dpdk-users] New to DPDK
> 
> Hi Everyone, I am new to DPDK and trying to learn its usage. I am facing a
> SEGFAULT while sending a single packet via this.
> Please find the snippet of my code below:

Hi Jeevan,

> ------------Started my program with initial pool configuration:
> ret = rte_eal_init(argc, argv);
> if (ret < 0)
> rte_exit(EXIT_FAILURE, "Error with EAL initialization\n");
>     argc -= ret;
>     argv += ret;
> 
>     rte_log_set_global_level(RTE_LOG_NOTICE);
> 
> /* parse app arguments */
> if (rte_lcore_count() > RTE_MAX_LCORE)
> rte_exit(EXIT_FAILURE,"Not enough cores\n");
> 
> create_mbuf_pool(128000, 128);
> 
> ret = rte_vhost_driver_register(sockpath, 0);
> if (ret != 0)
> rte_exit(EXIT_FAILURE, "vhost driver register failure.\n");
> 
> 
> rte_vhost_driver_callback_register (sockpath, &virtio_net_device_ops);
> chmod (sockpath, 0777);
> rte_vhost_driver_start(sockpath);

Could I suggest starting with a simpler program rather than going straight for vhost?
Some basic forwarding using e.g. the pcap PMD would simplify the
problem... and help get something small working correctly before attempting something big.


> -------- afterwards i created a thread to instantiate m TX.. i.e. receive
> packet from DPDK:
> 
> ret = pthread_create (&proc_tx, NULL, (void *)tx_process, NULL);
> if (ret != 0)
> {
> rte_exit (EXIT_FAILURE, "Cannot create TX thread\n");
> }

DPDK handles thread creation, and assigns "lcore ids" and various other pieces of thread-local state to each thread.
These are later used in e.g. the mempool library for its optimized per-thread cache data structures. Using a "raw"
pthread will not work with DPDK function calls, nor is it expected to.

Have a look at examples/helloworld to see how lcores are launched using DPDK.
Then perhaps look at examples/skeleton to see how launched lcores can use rx/tx burst APIs correctly.
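For illustration, a minimal sketch of that helloworld launch pattern (assuming DPDK 19.11 headers; note tx_process here is adapted to the `lcore_function_t` signature, which differs from a pthread start routine):

```c
#include <stdio.h>
#include <stdlib.h>

#include <rte_debug.h>
#include <rte_eal.h>
#include <rte_lcore.h>

/* lcore entry point: must be int (*)(void *), not void *(*)(void *) */
static int
tx_process(void *arg)
{
        printf("tx_process running on lcore %u\n", rte_lcore_id());
        /* ... the rte_vhost_dequeue_burst() loop would go here ... */
        return 0;
}

int
main(int argc, char **argv)
{
        unsigned int lcore_id;

        if (rte_eal_init(argc, argv) < 0)
                rte_exit(EXIT_FAILURE, "Error with EAL initialization\n");

        /* Launch tx_process on each worker lcore. EAL owns these threads,
         * so per-lcore state (lcore id, mempool caches) is set up for them,
         * unlike for a thread made directly with pthread_create(). */
        RTE_LCORE_FOREACH_SLAVE(lcore_id)
                rte_eal_remote_launch(tx_process, NULL, lcore_id);

        rte_eal_mp_wait_lcore();
        return 0;
}
```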

Hope that helps! -Harry


* [dpdk-users] Fwd:  New to DPDK
       [not found]   ` <CA+NhQKwOvbHBv_eadQDMjarGv7Qs+1dna8s9_-JyP0eZthwTZQ@mail.gmail.com>
@ 2021-09-16  5:08     ` Jeevan Nailwal
  0 siblings, 0 replies; 3+ messages in thread
From: Jeevan Nailwal @ 2021-09-16  5:08 UTC (permalink / raw)
  To: users, Van Haaren, Harry

Thanks for the inputs, Harry.
I had started with the basics first; the reason for moving quickly to a vhost
application was my prior experience with the same protocol under qemu :)

I tried the same code with the rte_eal_remote_launch API too, but still landed
in the same error. However, this time I built my DPDK with the -g option and got
a bigger stack trace. Can someone please look into this and let me know:

bt
#0  0x000000000075cd6f in __rte_ring_move_cons_head
(entries=0x2aaaac8311b4, new_head=0x2aaaac8311b8, old_head=0x2aaaac8311bc,
    behavior=RTE_RING_QUEUE_FIXED, n=1, is_sc=0, r=0x6d00000000000000)
    at
/lan/cva/pdapp/jeevann//dpdk-stable-19.11.9/x86_64-native-linuxapp-gcc/include/rte_ring_generic.h:139
#1  __rte_ring_do_dequeue (available=0x0, is_sc=0,
behavior=RTE_RING_QUEUE_FIXED, n=1, obj_table=0x2aaaac8312d8,
r=0x6d00000000000000)
    at
/lan/cva/pdapp/jeevann//dpdk-stable-19.11.9/x86_64-native-linuxapp-gcc/include/rte_ring.h:384
#2  rte_ring_mc_dequeue_bulk (available=0x0, n=1, obj_table=0x2aaaac8312d8,
r=0x6d00000000000000)
    at
/lan/cva/pdapp/jeevann//dpdk-stable-19.11.9/x86_64-native-linuxapp-gcc/include/rte_ring.h:555
#3  common_ring_mc_dequeue (mp=0x100000001, obj_table=0x2aaaac8312d8, n=1)
    at
/lan/cva/pdapp/jeevann//dpdk-stable-19.11.9/drivers/mempool/ring/rte_mempool_ring.c:31
#4  0x0000000000640bd9 in rte_mempool_ops_dequeue_bulk (mp=0x100000001,
obj_table=0x2aaaac8312d8, n=1)
    at
/lan/cva/pdapp/jeevann//dpdk-stable-19.11.9/x86_64-native-linuxapp-gcc/include/rte_mempool.h:739
#5  0x0000000000641062 in __mempool_generic_get (cache=0x0, n=1,
obj_table=0x2aaaac8312d8, mp=0x100000001)
    at
/lan/cva/pdapp/jeevann//dpdk-stable-19.11.9/x86_64-native-linuxapp-gcc/include/rte_mempool.h:1471
#6  rte_mempool_generic_get (cache=0x0, n=1, obj_table=0x2aaaac8312d8,
mp=0x100000001)
    at
/lan/cva/pdapp/jeevann//dpdk-stable-19.11.9/x86_64-native-linuxapp-gcc/include/rte_mempool.h:1506
#7  rte_mempool_get_bulk (n=1, obj_table=0x2aaaac8312d8, mp=0x100000001)
    at
/lan/cva/pdapp/jeevann//dpdk-stable-19.11.9/x86_64-native-linuxapp-gcc/include/rte_mempool.h:1539
#8  rte_mempool_get (obj_p=0x2aaaac8312d8, mp=0x100000001)
    at
/lan/cva/pdapp/jeevann//dpdk-stable-19.11.9/x86_64-native-linuxapp-gcc/include/rte_mempool.h:1565
#9  rte_mbuf_raw_alloc (mp=0x100000001)
    at
/lan/cva/pdapp/jeevann//dpdk-stable-19.11.9/x86_64-native-linuxapp-gcc/include/rte_mbuf.h:551
#10 0x00000000006411b1 in rte_pktmbuf_alloc (mp=0x100000001)
    at
/lan/cva/pdapp/jeevann//dpdk-stable-19.11.9/x86_64-native-linuxapp-gcc/include/rte_mbuf.h:804
#11 0x000000000065d1dd in virtio_dev_pktmbuf_alloc (data_len=110,
mp=0x100000001, dev=0x227fffba80)
    at
/lan/cva/pdapp/jeevann//dpdk-stable-19.11.9/lib/librte_vhost/virtio_net.c:1637
#12 virtio_dev_tx_split (dev=0x227fffba80, vq=0x227ffcf280,
mbuf_pool=0x100000001, pkts=0x2aaaac8394e8, count=1)
    at
/lan/cva/pdapp/jeevann//dpdk-stable-19.11.9/lib/librte_vhost/virtio_net.c:1734
#13 0x000000000066fae6 in rte_vhost_dequeue_burst (vid=0, queue_id=1,
mbuf_pool=0x100000001, pkts=0x2aaaac8394e8, count=1)
    at
/lan/cva/pdapp/jeevann//dpdk-stable-19.11.9/lib/librte_vhost/virtio_net.c:2278
#14 0x000000000048b88c in tx_process ()
#15 0x000000000076acf8 in eal_thread_loop (arg=0x0)


As per Harry's inputs, the ring address, i.e. r=0x6d00000000000000 in the frame #0
arguments, seems weird. Can somebody please comment on this?
As per my code, I am not creating any ring explicitly for RX/TX. Do I need
to create them manually? I thought creating the vhost socket and registering
it would take care of that internally. Please correct me if I am wrong.

Regards,
Jeevan



On Wed, Sep 15, 2021 at 5:52 PM Van Haaren, Harry <
harry.van.haaren@intel.com> wrote:

> > -----Original Message-----
> > From: users <users-bounces@dpdk.org> On Behalf Of Jeevan Nailwal
> > Sent: Wednesday, September 15, 2021 1:11 PM
> > To: users@dpdk.org
> > Subject: [dpdk-users] New to DPDK
> >
> > Hi Everyone, I am new to DPDK and trying to learn its usage. I am facing
> a
> > SEGFAULT while sending a single packet via this.
> > Please find the snippet of my code below:
>
> Hi Jeevan,
>
> > ------------Started my program with initial pool configuration:
> > ret = rte_eal_init(argc, argv);
> > if (ret < 0)
> > rte_exit(EXIT_FAILURE, "Error with EAL initialization\n");
> >     argc -= ret;
> >     argv += ret;
> >
> >     rte_log_set_global_level(RTE_LOG_NOTICE);
> >
> > /* parse app arguments */
> > if (rte_lcore_count() > RTE_MAX_LCORE)
> > rte_exit(EXIT_FAILURE,"Not enough cores\n");
> >
> > create_mbuf_pool(128000, 128);
> >
> > ret = rte_vhost_driver_register(sockpath, 0);
> > if (ret != 0)
> > rte_exit(EXIT_FAILURE, "vhost driver register failure.\n");
> >
> >
> > rte_vhost_driver_callback_register (sockpath, &virtio_net_device_ops);
> > chmod (sockpath, 0777);
> > rte_vhost_driver_start(sockpath);
>
> Could I suggest to start with a simpler program than going straight for
> vhost?
> Some basic forwarding using e.g. pcap PMD or something would simplify the
> problem... and help get something small working correctly before
> attempting big.
>
>
> > -------- afterwards i created a thread to instantiate m TX.. i.e. receive
> > packet from DPDK:
> >
> > ret = pthread_create (&proc_tx, NULL, (void *)tx_process, NULL);
> > if (ret != 0)
> > {
> > rte_exit (EXIT_FAILURE, "Cannot create TX thread\n");
> > }
>
> DPDK handles thread creation, assigns "lcore ids" and various other
> thread-local specific things to the thread.
> These are later used in e.g. mempool library for optimized per-thread
> cache data structures. Using a "raw"
> pthread will not work with DPDK function-calls, nor is it expected to.
>
> Have a look at examples/helloworld to see how lcores are launch using DPDK.
> Then perhaps look at examples/skeleton to see how launched lcores can use
> rx/tx burst APIs correctly.
>
> Hope that helps! -Harry
>
