DPDK patches and discussions
From: Jay Rolette <rolette@infiniteio.com>
To: Luke Gorrie <luke@snabb.co>
Cc: "dev@dpdk.org" <dev@dpdk.org>
Subject: Re: [dpdk-dev] [PATCH 0/4] DPDK memcpy optimization
Date: Thu, 22 Jan 2015 13:36:26 -0600
Message-ID: <CADNuJVp9Z-Cp2iJ72SwZPLcq+tJeg797E2-_D0aPmWBYuoO0rw@mail.gmail.com>
In-Reply-To: <CAA2XHbfcFj=RE-__=6Gjetthp+XyxRRuj91G6FD=j4B=f9Je=Q@mail.gmail.com>

On Thu, Jan 22, 2015 at 12:27 PM, Luke Gorrie <luke@snabb.co> wrote:

> On 22 January 2015 at 14:29, Jay Rolette <rolette@infiniteio.com> wrote:
>
>> Microseconds matter. Scaling up to 100GbE, nanoseconds matter.
>>
>
> True. Is there a cut-off point though?
>

There are always engineering trade-offs that have to be made. If I'm
optimizing something today, I'm certainly not starting with an operation
that only takes 1ns, not when the app is doing L4-7 processing. It's all
about profiling and figuring out where the bottlenecks are.

For past networking products I've built, there was a lot of traffic the
software didn't have to do much with. Minimal L2/L3 checks, then forward
the packet. It didn't even have to parse the headers, because that was
offloaded to an FPGA. The only way to make those packets go faster was to
turn them around in the FPGA and not send them to the CPU at all. That
change improved small-packet performance by ~30%, and that was on high-end
network processors that are significantly faster than Intel processors at
packet handling.

It's a strange thing to realize that just getting the packets into the
CPU is expensive, never mind what you do with them after that.

> Does one nanosecond matter?
>

You just have to be careful when talking about things like a nanosecond.
It sounds really small, but the inter-packet gap (IPG) on a 10G link is
only 9.6ns. It's all relative.
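
To put numbers on that, back-of-the-envelope: the standard Ethernet
inter-frame gap is 12 byte-times (96 bits), and a minimum-size packet
occupies 64B of frame + 8B of preamble/SFD + the 12B gap on the wire:

  IPG       =  96 bits / 10 Gbit/s = 9.6 ns
  min frame = 672 bits / 10 Gbit/s = 67.2 ns  (~14.88 Mpps)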

> AVX512 will fit a 64-byte packet in one register and move that to or from
> memory with one instruction. L1/L2 cache bandwidth per server is growing on
> a double-exponential curve (both bandwidth per core and cores per CPU). I
> wonder if moving data around in cache will soon be too cheap for us to
> justify worrying about.
>

Adding cores helps with aggregate performance, but doesn't really help
with latency on a single packet. That said, I'll take advantage of
anything I can from the hardware, whether it lets me scale up how much
traffic I can handle or add more features at the same performance level!
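
For what it's worth, the one-register move Luke describes looks roughly
like this with the AVX-512F intrinsics. Untested sketch on my part;
copy64 is just an illustrative name, and it assumes a CPU with AVX-512F
(build with -mavx512f):

#include <immintrin.h>

/* Sketch only: move a 64-byte packet with a single 512-bit load
 * and a single 512-bit store (the unaligned variants, so dst/src
 * need no particular alignment). */
static inline void
copy64(void *dst, const void *src)
{
        __m512i v = _mm512_loadu_si512(src);
        _mm512_storeu_si512(dst, v);
}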

Jay

Thread overview: 48+ messages
2015-01-19  1:53 zhihong.wang
2015-01-19  1:53 ` [dpdk-dev] [PATCH 1/4] app/test: Disabled VTA for memcpy test in app/test/Makefile zhihong.wang
2015-01-19  1:53 ` [dpdk-dev] [PATCH 2/4] app/test: Removed unnecessary test cases in test_memcpy.c zhihong.wang
2015-01-19  1:53 ` [dpdk-dev] [PATCH 3/4] app/test: Extended test coverage in test_memcpy_perf.c zhihong.wang
2015-01-19  1:53 ` [dpdk-dev] [PATCH 4/4] lib/librte_eal: Optimized memcpy in arch/x86/rte_memcpy.h for both SSE and AVX platforms zhihong.wang
2015-01-20 17:15   ` Stephen Hemminger
2015-01-20 19:16     ` Neil Horman
2015-01-21  3:18       ` Wang, Zhihong
2015-01-25 20:02     ` Jim Thompson
2015-01-26 14:43   ` Wodkowski, PawelX
2015-01-27  5:12     ` Wang, Zhihong
2015-01-19 13:02 ` [dpdk-dev] [PATCH 0/4] DPDK memcpy optimization Neil Horman
2015-01-20  3:01   ` Wang, Zhihong
2015-01-20 15:11     ` Neil Horman
2015-01-20 16:14       ` Bruce Richardson
2015-01-21  3:44         ` Wang, Zhihong
2015-01-21 11:40           ` Bruce Richardson
2015-01-21 12:02           ` Ananyev, Konstantin
2015-01-21 12:38             ` Neil Horman
2015-01-23  3:26               ` Wang, Zhihong
2015-01-21 12:36           ` Marc Sune
2015-01-21 13:02             ` Bruce Richardson
2015-01-21 13:21               ` Marc Sune
2015-01-21 13:26                 ` Bruce Richardson
2015-01-21 19:49                   ` Stephen Hemminger
2015-01-21 20:54                     ` Neil Horman
2015-01-21 21:25                       ` Jim Thompson
2015-01-22  0:53                         ` Stephen Hemminger
2015-01-22  9:06                         ` Luke Gorrie
2015-01-22 13:29                           ` Jay Rolette
2015-01-22 18:27                             ` Luke Gorrie
2015-01-22 19:36                               ` Jay Rolette [this message]
2015-01-22 18:21                       ` EDMISON, Kelvin (Kelvin)
2015-01-27  8:22                         ` Wang, Zhihong
2015-01-28 21:48                           ` EDMISON, Kelvin (Kelvin)
2015-01-29  1:53                             ` Wang, Zhihong
2015-01-23  6:52                   ` Wang, Zhihong
2015-01-26 18:29                     ` Ananyev, Konstantin
2015-01-27  1:42                       ` Wang, Zhihong
2015-01-27 11:30                         ` Ananyev, Konstantin
2015-01-27 12:19                           ` Ananyev, Konstantin
2015-01-28  2:06                             ` Wang, Zhihong
2015-01-25 14:50 ` Luke Gorrie
2015-01-26  1:30   ` Wang, Zhihong
2015-01-26  8:03     ` Luke Gorrie
2015-01-27  7:19       ` Wang, Zhihong
2015-01-27 13:57         ` [dpdk-dev] [snabb-devel] " Luke Gorrie
2015-01-29  3:42 ` [dpdk-dev] " Fu, JingguoX
