DPDK usage discussions
From: Jeevan Nailwal <jeevan.ginnie@gmail.com>
To: users@dpdk.org, "Van Haaren, Harry" <harry.van.haaren@intel.com>
Subject: [dpdk-users] Fwd:  New to DPDK
Date: Thu, 16 Sep 2021 10:38:21 +0530	[thread overview]
Message-ID: <CA+NhQKxkD4s87zWjrcrwDsBnAZxB2dBAhcq5f1mM54-a-ZFQvA@mail.gmail.com> (raw)
In-Reply-To: <CA+NhQKwOvbHBv_eadQDMjarGv7Qs+1dna8s9_-JyP0eZthwTZQ@mail.gmail.com>

Thanks for the inputs, Harry.
I had started with the basics first; the reason for moving quickly to a vhost
application was my prior knowledge of the same protocol with qemu :)

I tried the same code with the rte_eal_remote_launch API too, but still landed
on the same error. However, this time I built DPDK with the -g option and got a
bigger stack trace. Can someone please look into this and let me know:

bt
#0  0x000000000075cd6f in __rte_ring_move_cons_head (entries=0x2aaaac8311b4, new_head=0x2aaaac8311b8, old_head=0x2aaaac8311bc, behavior=RTE_RING_QUEUE_FIXED, n=1, is_sc=0, r=0x6d00000000000000)
    at /lan/cva/pdapp/jeevann//dpdk-stable-19.11.9/x86_64-native-linuxapp-gcc/include/rte_ring_generic.h:139
#1  __rte_ring_do_dequeue (available=0x0, is_sc=0, behavior=RTE_RING_QUEUE_FIXED, n=1, obj_table=0x2aaaac8312d8, r=0x6d00000000000000)
    at /lan/cva/pdapp/jeevann//dpdk-stable-19.11.9/x86_64-native-linuxapp-gcc/include/rte_ring.h:384
#2  rte_ring_mc_dequeue_bulk (available=0x0, n=1, obj_table=0x2aaaac8312d8, r=0x6d00000000000000)
    at /lan/cva/pdapp/jeevann//dpdk-stable-19.11.9/x86_64-native-linuxapp-gcc/include/rte_ring.h:555
#3  common_ring_mc_dequeue (mp=0x100000001, obj_table=0x2aaaac8312d8, n=1)
    at /lan/cva/pdapp/jeevann//dpdk-stable-19.11.9/drivers/mempool/ring/rte_mempool_ring.c:31
#4  0x0000000000640bd9 in rte_mempool_ops_dequeue_bulk (mp=0x100000001, obj_table=0x2aaaac8312d8, n=1)
    at /lan/cva/pdapp/jeevann//dpdk-stable-19.11.9/x86_64-native-linuxapp-gcc/include/rte_mempool.h:739
#5  0x0000000000641062 in __mempool_generic_get (cache=0x0, n=1, obj_table=0x2aaaac8312d8, mp=0x100000001)
    at /lan/cva/pdapp/jeevann//dpdk-stable-19.11.9/x86_64-native-linuxapp-gcc/include/rte_mempool.h:1471
#6  rte_mempool_generic_get (cache=0x0, n=1, obj_table=0x2aaaac8312d8, mp=0x100000001)
    at /lan/cva/pdapp/jeevann//dpdk-stable-19.11.9/x86_64-native-linuxapp-gcc/include/rte_mempool.h:1506
#7  rte_mempool_get_bulk (n=1, obj_table=0x2aaaac8312d8, mp=0x100000001)
    at /lan/cva/pdapp/jeevann//dpdk-stable-19.11.9/x86_64-native-linuxapp-gcc/include/rte_mempool.h:1539
#8  rte_mempool_get (obj_p=0x2aaaac8312d8, mp=0x100000001)
    at /lan/cva/pdapp/jeevann//dpdk-stable-19.11.9/x86_64-native-linuxapp-gcc/include/rte_mempool.h:1565
#9  rte_mbuf_raw_alloc (mp=0x100000001)
    at /lan/cva/pdapp/jeevann//dpdk-stable-19.11.9/x86_64-native-linuxapp-gcc/include/rte_mbuf.h:551
#10 0x00000000006411b1 in rte_pktmbuf_alloc (mp=0x100000001)
    at /lan/cva/pdapp/jeevann//dpdk-stable-19.11.9/x86_64-native-linuxapp-gcc/include/rte_mbuf.h:804
#11 0x000000000065d1dd in virtio_dev_pktmbuf_alloc (data_len=110, mp=0x100000001, dev=0x227fffba80)
    at /lan/cva/pdapp/jeevann//dpdk-stable-19.11.9/lib/librte_vhost/virtio_net.c:1637
#12 virtio_dev_tx_split (dev=0x227fffba80, vq=0x227ffcf280, mbuf_pool=0x100000001, pkts=0x2aaaac8394e8, count=1)
    at /lan/cva/pdapp/jeevann//dpdk-stable-19.11.9/lib/librte_vhost/virtio_net.c:1734
#13 0x000000000066fae6 in rte_vhost_dequeue_burst (vid=0, queue_id=1, mbuf_pool=0x100000001, pkts=0x2aaaac8394e8, count=1)
    at /lan/cva/pdapp/jeevann//dpdk-stable-19.11.9/lib/librte_vhost/virtio_net.c:2278
#14 0x000000000048b88c in tx_process ()
#15 0x000000000076acf8 in eal_thread_loop (arg=0x0)


As per Harry's inputs, the ring address, i.e. r=0x6d00000000000000 in the
frame #0 arguments, looks weird. Can somebody please comment on this?
In my code I am not creating any ring explicitly for RX/TX. Do I need to
create them manually? I thought creating the vhost socket and registering it
would take care of that internally. Please correct me if I am wrong.
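For what it's worth, note that every mempool frame in the trace shows mp=0x100000001, which is not a plausible pointer; the mbuf pool pointer reaching tx_process appears corrupt rather than a ring being missing. A minimal sketch of what a TX loop launched the DPDK way could look like (all names here are illustrative assumptions, not the original code; queue id 1 and the burst size are placeholders):

```c
/* Hypothetical sketch: launch the vhost TX loop on an EAL worker lcore
 * instead of a raw pthread, and pass the mempool through a variable that
 * is valid for the lifetime of the loop. */
#include <rte_lcore.h>
#include <rte_mbuf.h>
#include <rte_mempool.h>
#include <rte_vhost.h>

#define BURST_SZ 32

static struct rte_mempool *g_mbuf_pool;  /* set once after pool creation */

static int
tx_process(void *arg)
{
	int vid = *(int *)arg;   /* vhost device id, e.g. from new_device() */
	struct rte_mbuf *pkts[BURST_SZ];

	for (;;) {
		/* Dequeue packets the guest transmitted on virtqueue 1. */
		uint16_t n = rte_vhost_dequeue_burst(vid, 1, g_mbuf_pool,
						     pkts, BURST_SZ);
		for (uint16_t i = 0; i < n; i++)
			rte_pktmbuf_free(pkts[i]);  /* or forward the mbufs */
	}
	return 0;
}

/* In main(), after rte_eal_init() and create_mbuf_pool():
 *     rte_eal_remote_launch(tx_process, &vid, worker_lcore_id);
 */
```

The key difference from a pthread_create() approach is that rte_eal_remote_launch() runs the function on a thread EAL set up at init time, so per-lcore state such as the mempool cache index is valid inside the loop.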

Regards,
Jeevan



On Wed, Sep 15, 2021 at 5:52 PM Van Haaren, Harry <
harry.van.haaren@intel.com> wrote:

> > -----Original Message-----
> > From: users <users-bounces@dpdk.org> On Behalf Of Jeevan Nailwal
> > Sent: Wednesday, September 15, 2021 1:11 PM
> > To: users@dpdk.org
> > Subject: [dpdk-users] New to DPDK
> >
> > Hi Everyone, I am new to DPDK and trying to learn its usage. I am facing
> a
> > SEGFAULT while sending a single packet via this.
> > Please find the snippet of my code below:
>
> Hi Jeevan,
>
> > ------------Started my program with initial pool configuration:
> > ret = rte_eal_init(argc, argv);
> > if (ret < 0)
> > rte_exit(EXIT_FAILURE, "Error with EAL initialization\n");
> >     argc -= ret;
> >     argv += ret;
> >
> >     rte_log_set_global_level(RTE_LOG_NOTICE);
> >
> > /* parse app arguments */
> > if (rte_lcore_count() > RTE_MAX_LCORE)
> > rte_exit(EXIT_FAILURE,"Not enough cores\n");
> >
> > create_mbuf_pool(128000, 128);
> >
> > ret = rte_vhost_driver_register(sockpath, 0);
> > if (ret != 0)
> > rte_exit(EXIT_FAILURE, "vhost driver register failure.\n");
> >
> >
> > rte_vhost_driver_callback_register (sockpath, &virtio_net_device_ops);
> > chmod (sockpath, 0777);
> > rte_vhost_driver_start(sockpath);
>
> Could I suggest to start with a simpler program than going straight for
> vhost?
> Some basic forwarding using e.g. pcap PMD or something would simplify the
> problem... and help get something small working correctly before
> attempting big.
>
>
> > -------- afterwards i created a thread to instantiate my TX.. i.e. receive
> > packet from DPDK:
> >
> > ret = pthread_create (&proc_tx, NULL, (void *)tx_process, NULL);
> > if (ret != 0)
> > {
> > rte_exit (EXIT_FAILURE, "Cannot create TX thread\n");
> > }
>
> DPDK handles thread creation, assigns "lcore ids" and various other
> thread-local specific things to the thread.
> These are later used in e.g. mempool library for optimized per-thread
> cache data structures. Using a "raw"
> pthread will not work with DPDK function-calls, nor is it expected to.
>
> Have a look at examples/helloworld to see how lcores are launched using DPDK.
> Then perhaps look at examples/skeleton to see how launched lcores can use
> rx/tx burst APIs correctly.
>
> Hope that helps! -Harry
>
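[For readers following the thread: the lcore-launch pattern Harry points to in examples/helloworld looks roughly like this. This is a sketch of the example's structure, not the poster's application; the RTE_LCORE_FOREACH_SLAVE naming matches the 19.11 release in use here.]

```c
/* Minimal DPDK lcore launch, in the style of examples/helloworld. */
#include <stdio.h>
#include <rte_eal.h>
#include <rte_launch.h>
#include <rte_lcore.h>

/* Runs on each worker lcore; these threads were created by EAL at init,
 * so rte_lcore_id() and other per-lcore state are valid inside them. */
static int
lcore_hello(void *arg)
{
	(void)arg;
	printf("hello from lcore %u\n", rte_lcore_id());
	return 0;
}

int
main(int argc, char **argv)
{
	if (rte_eal_init(argc, argv) < 0)
		return -1;

	unsigned lcore_id;
	/* Launch the function on every worker lcore. */
	RTE_LCORE_FOREACH_SLAVE(lcore_id)
		rte_eal_remote_launch(lcore_hello, NULL, lcore_id);

	lcore_hello(NULL);        /* main lcore runs it too */
	rte_eal_mp_wait_lcore();  /* wait for all workers to return */
	return 0;
}
```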


Thread overview: 3+ messages
2021-09-15 12:11 [dpdk-users] " Jeevan Nailwal
2021-09-15 12:22 ` Van Haaren, Harry
     [not found]   ` <CA+NhQKwOvbHBv_eadQDMjarGv7Qs+1dna8s9_-JyP0eZthwTZQ@mail.gmail.com>
2021-09-16  5:08     ` Jeevan Nailwal [this message]
