DPDK patches and discussions
From: Stephen Hemminger <stephen@networkplumber.org>
To: Dor Green <dorgreen1@gmail.com>
Cc: dev@dpdk.org
Subject: Re: [dpdk-dev] rte_ring's dequeue appears to be slow
Date: Mon, 6 Apr 2015 13:43:29 -0700	[thread overview]
Message-ID: <20150406134329.1f613e92@urahara> (raw)
In-Reply-To: <CAKedurzQLk-gJCJw4uM6ObZwj+J+vPrZwBb7K22box-s-iFrzA@mail.gmail.com>

On Mon, 6 Apr 2015 15:18:21 +0300
Dor Green <dorgreen1@gmail.com> wrote:

> I have an app which captures packets on a single core and then passes
> them to multiple workers on different lcores, using rte_ring queues.
> 
> While I manage to capture packets at 10Gbps, there is substantial
> packet loss once I send them to the processing lcores. At first I
> figured it was the processing I do on the packets and optimized that,
> which helped a little but did not solve the problem.
> 
> I used Intel VTune Amplifier to profile the program, and in every
> profiling run the majority of the time is spent in
> "__rte_ring_sc_do_dequeue" (about 70%). I was wondering if anyone can
> tell me how to optimize this, whether I'm using the queues incorrectly,
> or whether I'm doing the profiling wrong (because I do find it weird
> that this dequeuing is so slow).
> 
> My program architecture is as follows (constants replaced with their
> actual values):
> 
> A queue is created for each processing lcore:
>       rte_ring_create(qname, swsize /* 1024*1024 */, NUMA_SOCKET,
>                       RING_F_SP_ENQ | RING_F_SC_DEQ);
> 
> The capture core enqueues packets one by one into each of the queues
> (the receive burst size is 256):
>      rte_ring_sp_enqueue(lc[queue_index].queue, (void *)pkts[i]);
> 
> The packets are then dequeued in bulk on the processing lcores:
>      rte_ring_sc_dequeue_bulk(lc->queue, (void**) &mbufs, 128);
> 
> I'm using 16 1GB hugepages, running the new 2.0 version. If there's
> any further info required about the program, let me know.
> 
> Thank you.

First off, make sure you are enqueuing and dequeuing in bursts where
possible; that amortizes the fixed per-call cost of the ring operations
across many packets.
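
Roughly something like this on the capture side (untested sketch; the
names just mirror the ones in your mail). rte_ring_sp_enqueue_burst()
returns how many objects it actually managed to enqueue:

    #include <rte_ring.h>
    #include <rte_mbuf.h>

    static void
    forward_burst(struct rte_ring *q, struct rte_mbuf **pkts, unsigned nb_rx)
    {
        /* hand the whole received burst to the worker ring in one call */
        unsigned sent = rte_ring_sp_enqueue_burst(q, (void **)pkts, nb_rx);

        /* free whatever did not fit so the mbufs are not leaked */
        while (sent < nb_rx)
            rte_pktmbuf_free(pkts[sent++]);
    }

One enqueue call per received burst replaces 256 separate calls, so the
ring indices are only updated once per burst.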

Also, in polling applications the dequeue function can be falsely
blamed for consuming CPU when most polls find no data: the time shows
up in the dequeue path even though the core is really just spinning on
an empty ring.
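
A quick way to check is to count polls that return nothing. A rough,
untested sketch (the names here are made up, not from your program),
using the burst variant, which returns however many entries happen to
be available, rather than the bulk call, which (at least in 2.0) either
dequeues all 128 or nothing:

    #include <stdio.h>
    #include <inttypes.h>
    #include <rte_ring.h>
    #include <rte_mbuf.h>

    #define BURST_SIZE 128

    static volatile int quit;    /* assumed stop flag, set elsewhere */

    static void
    worker_loop(struct rte_ring *q)
    {
        struct rte_mbuf *mbufs[BURST_SIZE];
        uint64_t polls = 0, empty_polls = 0;

        while (!quit) {
            unsigned n = rte_ring_sc_dequeue_burst(q, (void **)mbufs,
                                                   BURST_SIZE);
            polls++;
            if (n == 0) {
                empty_polls++;    /* idle spin, not real dequeue work */
                continue;
            }
            for (unsigned i = 0; i < n; i++) {
                /* ... process the packet ... */
                rte_pktmbuf_free(mbufs[i]);
            }
        }
        printf("polls=%" PRIu64 " empty=%" PRIu64 "\n", polls, empty_polls);
    }

If empty_polls dominates, the profile is mostly measuring the idle spin
inside the dequeue path, not useful per-packet work.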

Thread overview: 3+ messages
2015-04-06 12:18 Dor Green
2015-04-06 20:43 ` Stephen Hemminger [this message]
2015-04-14 11:58   ` Dor Green
