DPDK patches and discussions
From: Thomas Monjalon <thomas.monjalon@6wind.com>
To: "Damjan Marion (damarion)" <damarion@cisco.com>
Cc: dev@dpdk.org
Subject: Re: [dpdk-dev] rte_mbuf.next in 2nd cacheline
Date: Wed, 17 Jun 2015 18:32:24 +0200
Message-ID: <5029156.q8l1qJC5K0@xps13>
In-Reply-To: <56928EA5-A3DB-44B3-B0ED-54E6FC0AE361@cisco.com>

2015-06-17 14:23, Damjan Marion:
> 
> > On 17 Jun 2015, at 16:06, Bruce Richardson <bruce.richardson@intel.com> wrote:
> > 
> > On Wed, Jun 17, 2015 at 01:55:57PM +0000, Damjan Marion (damarion) wrote:
> >> 
> >>> On 15 Jun 2015, at 16:12, Bruce Richardson <bruce.richardson@intel.com> wrote:
> >>> 
> >>> The next pointers always start out as NULL when the mbuf pool is created. The
> >>> only time it is set to non-NULL is when we have chained mbufs. If we never have
> >>> any chained mbufs, we never need to touch the next field, or even read it - since
> >>> we have the num-segments count in the first cache line. If we do have a multi-segment
> >>> mbuf, it's likely to be a big packet, so we have more processing time available
> >>> and we can then take the hit of setting the next pointer.
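
For illustration, here is a minimal sketch of that idea, using a simplified
layout and a hypothetical put_back_to_pool() helper rather than the real
rte_mbuf/mempool API; only the placement of nb_segs and next matters:

  #include <stdint.h>
  #include <stddef.h>

  /* Simplified layout, not the real struct rte_mbuf: the segment count
   * sits in the first cache line, the 'next' pointer in the second. */
  struct mbuf_sketch {
          /* --- first cache line: hot fields --- */
          void     *buf_addr;
          uint16_t  data_len;
          uint8_t   nb_segs;                /* segments in this chain  */
          uint8_t   pad[53];                /* rest of the first line  */
          /* --- second cache line: cold fields --- */
          struct mbuf_sketch *next;         /* NULL unless nb_segs > 1 */
  } __attribute__((aligned(64)));

  /* Hypothetical pool helper, stubbed so the sketch is self-contained. */
  static void put_back_to_pool(struct mbuf_sketch *m) { (void)m; }

  static void free_chain(struct mbuf_sketch *m)
  {
          if (m->nb_segs == 1) {
                  /* Fast path: 'next' is known to be NULL, so the
                   * second cache line is never read. */
                  put_back_to_pool(m);
                  return;
          }
          /* Slow path: chained mbuf, pay the cost of walking 'next'. */
          while (m != NULL) {
                  struct mbuf_sketch *n = m->next;
                  put_back_to_pool(m);
                  m = n;
          }
  }
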
> >> 
> >> There are applications which do not use rx offload but do deal with chained mbufs.
> >> Why are they less important than the ones using rx offload? This is something people
> >> should be able to configure at build time.
> > 
> > It's not that they are less important, it's that the packet-processing cycle count
> > budget is going to be greater. A packet which is 64 or 128 bytes in size
> > can make use of a number of RX offloads to reduce its processing time. However,
> > a 64/128-byte packet is not going to be split across multiple buffers [unless we
> > are dealing with a very unusual setup!].
> > 
> > To handle 64-byte packets at 40G line rate, one has 50 cycles per core per packet
> > when running at 3GHz [3000000000 cycles / 59.5 mpps].
> > If we assume that we are dealing with fairly small buffers here, and that any
> > packet greater than 1k is chained, we still have 626 cycles per 3GHz core per
> > packet to work with for that 1k packet. Given that "normal" DPDK buffers are 2k
> > in size, we have over a thousand cycles per packet for any packet that is split.
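
Working through that arithmetic explicitly (assuming the usual 20 bytes of
per-frame preamble + inter-frame-gap overhead, so this is back-of-the-envelope,
not a measurement):

  #include <stdio.h>

  int main(void)
  {
          const double link_bps  = 40e9;   /* 40G line rate            */
          const double core_hz   = 3e9;    /* 3 GHz core               */
          const double overhead  = 20.0;   /* preamble + IFG, in bytes */
          const double sizes[]   = { 64, 1024, 2048 };

          for (int i = 0; i < 3; i++) {
                  double pps    = link_bps / ((sizes[i] + overhead) * 8.0);
                  double cycles = core_hz / pps;
                  printf("%4.0f-byte frames: %6.2f Mpps, %7.1f cycles/packet\n",
                         sizes[i], pps / 1e6, cycles);
          }
          return 0;
  }

which prints roughly 59.5 Mpps / 50 cycles for 64-byte frames, 4.8 Mpps /
626 cycles for 1k, and 2.4 Mpps / 1241 cycles for 2k.
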
> > 
> > In summary, packets spread across multiple buffers are large packets, and so have
> > larger per-packet cycle count budgets, so they can absorb the cost of touching a
> > second cache line in the mbuf much better than a 64-byte packet can. Therefore,
> > we optimize for the 64B packet case.
> 
> This makes sense if there is no other work to do on the same core.
> Otherwise it is better to spend those cycles on actual work instead of waiting for
> the 2nd cache line...

You're probably right.
I wonder whether this flexibility could be implemented only in static lib builds?
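
For example, a purely hypothetical build-time switch (the macro name below is
made up, not an existing DPDK option) could move 'next' back into the first
cache line for applications that chain mbufs but do not use rx offloads:

  #include <stdint.h>

  struct mbuf_layout_sketch {
          void     *buf_addr;
          uint16_t  data_len;
          uint8_t   nb_segs;
  #ifdef RTE_MBUF_NEXT_IN_FIRST_CACHELINE    /* hypothetical option */
          struct mbuf_layout_sketch *next;   /* apps chaining mbufs, no rx offloads */
  #endif
          /* ... remaining first-cache-line fields ... */
          /* --- second cache line --- */
  #ifndef RTE_MBUF_NEXT_IN_FIRST_CACHELINE
          struct mbuf_layout_sketch *next;   /* default: keep the hot line for offloads */
  #endif
  };

Since this changes the mbuf layout (and hence the ABI), the application and the
drivers would have to be built with the same flag, which is why it only seems
practical for static lib builds.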


Thread overview: 26+ messages
2015-06-10 21:47 Damjan Marion (damarion)
2015-06-15 13:20 ` Olivier MATZ
2015-06-15 13:44   ` Bruce Richardson
2015-06-15 13:54     ` Ananyev, Konstantin
2015-06-15 14:05       ` Olivier MATZ
2015-06-15 14:12         ` Bruce Richardson
2015-06-15 14:30           ` Olivier MATZ
2015-06-15 14:46             ` Bruce Richardson
2015-06-15 14:52             ` Ananyev, Konstantin
2015-06-15 15:19               ` Olivier MATZ
2015-06-15 15:23                 ` Bruce Richardson
2015-06-15 15:28                   ` Ananyev, Konstantin
2015-06-15 15:39                     ` Bruce Richardson
2015-06-15 15:59                       ` Ananyev, Konstantin
2015-06-15 16:02                         ` Bruce Richardson
2015-06-15 16:10                           ` Ananyev, Konstantin
2015-06-15 16:23                             ` Bruce Richardson
2015-06-15 18:34                               ` Ananyev, Konstantin
2015-06-15 20:47                                 ` Stephen Hemminger
2015-06-16  8:20                                   ` Bruce Richardson
2015-06-17 13:55           ` Damjan Marion (damarion)
2015-06-17 14:04             ` Thomas Monjalon
2015-06-17 14:06             ` Bruce Richardson
2015-06-17 14:23               ` Damjan Marion (damarion)
2015-06-17 16:32                 ` Thomas Monjalon [this message]
     [not found]               ` <0DE313B5-C9F0-4879-9D92-838ED088202C@cisco.com>
     [not found]                 ` <27EA8870B328F74E88180827A0F816396BD43720@xmb-aln-x10.cisco.com>
     [not found]                   ` <59AF69C657FD0841A61C55336867B5B0345592CD@IRSMSX103.ger.corp.intel.com>
     [not found]                     ` <1FD9B82B8BF2CF418D9A1000154491D97450B186@ORSMSX102.amr.corp.intel.com>
     [not found]                       ` <27EA8870B328F74E88180827A0F816396BD43891@xmb-aln-x10.cisco.com>
     [not found]                         ` <2601191342CEEE43887BDE71AB97725836A1237C@irsmsx105.ger.corp.intel.com>
2015-06-17 18:50                           ` Ananyev, Konstantin
