DPDK usage discussions
From: Arvind Narayanan <webguru2688@gmail.com>
To: stephen@networkplumber.org
Cc: keith.wiles@intel.com, users@dpdk.org
Subject: Re: [dpdk-users] How to use software prefetching for custom structures to increase throughput on the fast path
Date: Tue, 11 Sep 2018 13:39:24 -0500	[thread overview]
Message-ID: <CAHJJQSXNaV7ixaTTZzyaaXCiX7mtujgadyfZUxvtxKDu6hOwpQ@mail.gmail.com> (raw)
In-Reply-To: <20180911110744.7ef55fc2@xeon-e3>

Stephen, thanks!

That is it! Not sure if there is any workaround.

So, essentially, what I am doing is this: core 0 gets a burst of my_packet(s)
from its pre-allocated mempool and then bulk-enqueues them into an rte_ring.
Core 1 then bulk-dequeues from this ring, and when it accesses the data
pointed to by a ring element (i.e. my_packet->tag1), this memory access
latency issue shows up. I cannot advance the prefetch any earlier. Is there
any clever workaround (or hack) to overcome this other than using the same
core for all the functions? For example, can I prefetch the packets on core 0
into core 1's cache (could be a dumb question!)?
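
For reference, here is a minimal sketch of what the two cores do. The struct
layout, burst size and function names are simplified placeholders; only the
tag1 field and the mempool/ring/hash calls reflect what is discussed in this
thread:

```
#include <rte_mempool.h>
#include <rte_ring.h>
#include <rte_hash.h>
#include <rte_prefetch.h>

#define BURST 32                 /* illustrative burst size */

struct my_packet {
    int tag1;                    /* key looked up on core 1 */
    /* ... rest of the custom structure ... */
};

/* Core 0: grab a burst from the pre-allocated mempool and hand it to core 1. */
static void
producer_core(struct rte_mempool *mp, struct rte_ring *ring)
{
    void *objs[BURST];

    if (rte_mempool_get_bulk(mp, objs, BURST) == 0) {
        /* ... fill in each my_packet, including tag1 ... */
        rte_ring_enqueue_burst(ring, objs, BURST, NULL);
    }
}

/* Core 1: dequeue a burst; the first dereference of each element
 * (my_packet->tag1) is where the cache miss shows up. */
static void
consumer_core(struct rte_ring *ring, const struct rte_hash *rx_table,
              void *val[])
{
    void *objs[BURST];
    unsigned int i, n;

    n = rte_ring_dequeue_burst(ring, objs, BURST, NULL);
    for (i = 0; i < n; i++) {
        struct my_packet *p = objs[i];

        if (rte_hash_lookup_data(rx_table, &p->tag1, &val[i]) < 0) {
            /* tag not found in the table */
        }
    }
}
```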

Thanks,
Arvind

On Tue, Sep 11, 2018 at 1:07 PM Stephen Hemminger <stephen@networkplumber.org> wrote:

> On Tue, 11 Sep 2018 12:18:42 -0500
> Arvind Narayanan <webguru2688@gmail.com> wrote:
>
> > If I don't do any processing, I easily get 10G. It is only when I access
> > the tag that the throughput drops.
> > What confuses me is that if I use the following snippet, it works at
> > line rate.
> >
> > ```
> > int temp_key = 1; // declared outside of the for loop
> >
> > for (i = 0; i < pkt_count; i++) {
> >     if (rte_hash_lookup_data(rx_table, &(temp_key), (void **)&val[i]) < 0) {
> >     }
> > }
> > ```
> >
> > But as soon as I replace `temp_key` with `my_packet->tag1`, I see a drop in
> > throughput (which in a way confirms the issue is due to cache misses).
>
> Your packet data is not in cache.
> Prefetching can help, but it is very timing sensitive. If the prefetch is
> done before the data is available, it won't help. And if the prefetch is
> done just before the data is used, there aren't enough cycles to get it
> from memory into the cache.
>
>
>

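A common way to buy the prefetch those extra cycles, in line with the timing
constraint described above (and in the spirit of the prefetch loops in DPDK's
l3fwd example), is to issue the prefetch a fixed number of elements ahead of
the one currently being processed. A sketch, reusing the struct my_packet
placeholder from the earlier sketch; PREFETCH_OFFSET is illustrative and has
to be tuned for the workload:

```
#define BURST           32       /* illustrative burst size */
#define PREFETCH_OFFSET  4       /* illustrative; tune for the workload */

static void
consumer_with_prefetch(struct rte_ring *ring, const struct rte_hash *rx_table,
                       void *val[])
{
    void *objs[BURST];
    unsigned int i, n;

    n = rte_ring_dequeue_burst(ring, objs, BURST, NULL);

    /* Warm up: start fetching the first few elements now. */
    for (i = 0; i < PREFETCH_OFFSET && i < n; i++)
        rte_prefetch0(objs[i]);

    /* While element i is being processed, element i + PREFETCH_OFFSET is
     * already on its way from memory into the cache. */
    for (i = 0; i < n; i++) {
        if (i + PREFETCH_OFFSET < n)
            rte_prefetch0(objs[i + PREFETCH_OFFSET]);

        struct my_packet *p = objs[i];

        if (rte_hash_lookup_data(rx_table, &p->tag1, &val[i]) < 0) {
            /* tag not found in the table */
        }
    }
}
```

Note that this only hides the miss within a burst on core 1; a prefetch issued
on core 0 pulls data into core 0's own cache, not into core 1's.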

Thread overview: 11+ messages
2018-09-11  8:15 Arvind Narayanan
2018-09-11 14:20 ` Wiles, Keith
2018-09-11 15:42   ` Arvind Narayanan
2018-09-11 16:52     ` Wiles, Keith
2018-09-11 17:18       ` Arvind Narayanan
2018-09-11 18:07         ` Stephen Hemminger
2018-09-11 18:39           ` Arvind Narayanan [this message]
2018-09-11 19:12             ` Stephen Hemminger
2018-09-12  8:22             ` Van Haaren, Harry
2018-09-11 19:36           ` Pierre Laurent
2018-09-11 21:49             ` Arvind Narayanan
