DPDK usage discussions
From: Antonio Di Bacco <a.dibacco.ks@gmail.com>
To: "Kinsella, Ray" <ray.kinsella@intel.com>
Cc: "Sanford, Robert" <rsanford@akamai.com>,
	"users@dpdk.org" <users@dpdk.org>
Subject: Re: DPDK performances surprise
Date: Thu, 19 May 2022 11:04:04 +0200
Message-ID: <CAO8pfF=xmm_SeOUP82ox-3Dv+rXeMjhOH9Uh3gbRLth7yz4y1w@mail.gmail.com>
In-Reply-To: <PH0PR11MB477631876B9858FD0747A67E90D09@PH0PR11MB4776.namprd11.prod.outlook.com>

This tool seems awesome! Better than VTune?
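
A minimal way to try the tool, assuming a plain checkout of that repo
builds with make (the binary has been named pcm-memory.x in older
releases and pcm-memory in newer ones), might look like:

  git clone https://github.com/opcm/pcm
  cd pcm && make
  # report per-channel and total DDR read/write bandwidth, refreshing
  # every 1 second (needs root to access the uncore counters)
  sudo ./pcm-memory.x 1

Running it alongside l2fwd is a direct way to see how much of the packet
flow actually reaches DRAM.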

On Thu, May 19, 2022 at 10:29 AM Kinsella, Ray <ray.kinsella@intel.com>
wrote:

> I’d say that is likely yes.
>
> FYI - pcm-memory is a very handy tool for looking at memory traffic.
>
> https://github.com/opcm/pcm
>
> Thanks,
>
> Ray K
>
> From: Sanford, Robert <rsanford@akamai.com>
> Sent: Wednesday, 18 May 2022 17:53
> To: Antonio Di Bacco <a.dibacco.ks@gmail.com>; users@dpdk.org
> Subject: Re: DPDK performances surprise
>
> My guess is that most of the packet data has a short life in the L3 cache
> (before being overwritten by newer packets), but is never flushed to memory.
>
> From: Antonio Di Bacco <a.dibacco.ks@gmail.com>
> Date: Wednesday, May 18, 2022 at 12:40 PM
> To: "users@dpdk.org" <users@dpdk.org>
> Subject: DPDK performances surprise
>
> I recently read a performance test where l2fwd was able to receive packets
> (8000B) from a 100 Gbps card, swap the L2 addresses, and send them back out
> the same port to be received by an Ethernet analyzer. The throughput
> achieved was close to 100 Gbps on a Xeon machine (Intel(R) Xeon(R) Platinum
> 8176 CPU @ 2.10GHz). This is the same processor I have, and I know that if
> I try to write blocks of around 8000B to the attached DDR4 (2666 MT/s) on an
> allocated 1 GB hugepage, I get a maximum throughput of around 20 GB/s.
>
> Now, a 100 Gbps link generates a flow of around 12 GB/s. These packets have
> to be written to the DDR and then read back to swap the L2 addresses, which
> leads to a cumulative bandwidth on the DDR of around 2x12 GB/s, more than
> the 20 GB/s of available bandwidth on the DDR4.
>
> How can this be possible?
>
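
A quick sanity check of the arithmetic in the question, taking 8000B frames
at full line rate and ignoring protocol overhead:

  100 Gbit/s / 8             ~ 12.5 GB/s of packet data
  DDR write + DDR read back  = 2 x 12.5 ~ 25 GB/s of would-be DDR traffic
  25 GB/s                    > ~20 GB/s measured DDR4-2666 write throughput

So if every packet really did round-trip through DRAM, the observed 100 Gbps
would not fit in the measured memory bandwidth. The resolution is the one
Robert suggests above: on this class of Xeon the NIC DMAs packet buffers
directly into the last-level cache (Intel DDIO), so a packet that is
received, has its MAC addresses swapped, and is retransmitted while still
cache-resident never touches DRAM at all.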

Thread overview: 8+ messages
2022-05-18 16:40 Antonio Di Bacco
2022-05-18 16:53 ` Sanford, Robert
2022-05-18 17:04   ` Stephen Hemminger
2022-05-19  9:03     ` Antonio Di Bacco
2022-05-19  8:29   ` Kinsella, Ray
2022-05-19  9:04     ` Antonio Di Bacco [this message]
2022-05-19  9:07       ` Kinsella, Ray
2022-05-19 15:05         ` Stephen Hemminger
