Date: Tue, 25 Oct 2016 13:20:09 +0100
From: Bruce Richardson
To: Morten Brørup
Cc: Adrien Mazarguil, "Wiles, Keith", dev@dpdk.org, Olivier Matz, Oleg Kuporosov
Message-ID: <20161025122008.GA54704@bricha3-MOBL3.ger.corp.intel.com>
In-Reply-To: <98CBD80474FA8B44BF855DF32C47DC359EA8BA@smartserver.smartshare.dk>
References: <98CBD80474FA8B44BF855DF32C47DC359EA8B1@smartserver.smartshare.dk>
 <7910CF2F-7087-4307-A9AC-DE0287104185@intel.com>
 <20161024162538.GA34988@bricha3-MOBL3.ger.corp.intel.com>
 <20161025093915.GJ5733@6wind.com>
 <98CBD80474FA8B44BF855DF32C47DC359EA8B7@smartserver.smartshare.dk>
 <20161025110444.GK5733@6wind.com>
 <20161025111357.GA43504@bricha3-MOBL3.ger.corp.intel.com>
 <98CBD80474FA8B44BF855DF32C47DC359EA8BA@smartserver.smartshare.dk>
Organization: Intel Research and Development Ireland Ltd.
User-Agent: Mutt/1.7.1 (2016-10-04)
Subject: Re: [dpdk-dev] mbuf changes

On Tue, Oct 25, 2016 at 02:16:29PM +0200, Morten Brørup wrote:
> Comments inline.
>
> > -----Original Message-----
> > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Bruce Richardson
> > Sent: Tuesday, October 25, 2016 1:14 PM
> > To: Adrien Mazarguil
> > Cc: Morten Brørup; Wiles, Keith; dev@dpdk.org; Olivier Matz; Oleg Kuporosov
> > Subject: Re: [dpdk-dev] mbuf changes
> >
> > On Tue, Oct 25, 2016 at 01:04:44PM +0200, Adrien Mazarguil wrote:
> > > On Tue, Oct 25, 2016 at 12:11:04PM +0200, Morten Brørup wrote:
> > > > Comments inline.
> > > >
> > > > Med venlig hilsen / kind regards
> > > > - Morten Brørup
> > > >
> > > > > -----Original Message-----
> > > > > From: Adrien Mazarguil [mailto:adrien.mazarguil@6wind.com]
> > > > > Sent: Tuesday, October 25, 2016 11:39 AM
> > > > > To: Bruce Richardson
> > > > > Cc: Wiles, Keith; Morten Brørup; dev@dpdk.org; Olivier Matz; Oleg Kuporosov
> > > > > Subject: Re: [dpdk-dev] mbuf changes
> > > > >
> > > > > On Mon, Oct 24, 2016 at 05:25:38PM +0100, Bruce Richardson wrote:
> > > > > > On Mon, Oct 24, 2016 at 04:11:33PM +0000, Wiles, Keith wrote:
> > > > > [...]
> > > > > > > > On Oct 24, 2016, at 10:49 AM, Morten Brørup wrote:
> > > > > [...]
> > > > > > > > 5.
> > > > > > > >
> > > > > > > > And here’s something new to think about:
> > > > > > > >
> > > > > > > > m->next already reveals if there are more segments to a packet.
> > > > > > > > Which purpose does m->nb_segs serve that is not already covered
> > > > > > > > by m->next?
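
To make the duplication concrete: the segment count is always recoverable
from m->next alone, along the lines of the sketch below (hypothetical helper,
not an existing DPDK API), at the cost of touching every segment's second
cache line.

#include <rte_mbuf.h>

/* Hypothetical helper, for illustration only: recompute what
 * m->nb_segs already caches by walking the m->next chain. Each
 * iteration reads the next pointer from cache line 1 of a segment.
 */
static inline unsigned int
count_segs(const struct rte_mbuf *m)
{
        unsigned int n = 0;

        while (m != NULL) {
                n++;
                m = m->next;
        }
        return n;
}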
> > > > > > It is duplicate info, but nb_segs can be used to check the
> > > > > > validity of the next pointer without having to read the second
> > > > > > mbuf cacheline.
> > > > > >
> > > > > > Whether it's worth having is something I'm happy enough to
> > > > > > discuss, though.
> > > > >
> > > > > Although slower in some cases than a full blown "next packet"
> > > > > pointer, nb_segs can also be conveniently abused to link several
> > > > > packets and their segments in the same list without wasting space.
> > > >
> > > > I don’t understand that; can you please elaborate? Are you abusing
> > > > m->nb_segs as an index into an array in your application? If that is
> > > > the case, and it is endorsed by the community, we should get rid of
> > > > m->nb_segs and add a member for application specific use instead.
> > >
> > > Well, that's just an idea, I'm not aware of any application using
> > > this, however the ability to link several packets with segments seems
> > > useful to me (e.g. buffering packets). Here's a diagram:
> > >
> > >  .-----------.   .-----------.   .-----------.   .-----------.   .------
> > >  |   pkt 0   |   |   seg 1   |   |   seg 2   |   |   pkt 1   |   |  pkt 2
> > >  |      next --->|      next --->|      next --->|      next --->|   ...
> > >  | nb_segs 3 |   | nb_segs 1 |   | nb_segs 1 |   | nb_segs 1 |   |
> > >  `-----------'   `-----------'   `-----------'   `-----------'   `------
>
> I see. It makes it possible to refer to a burst of packets (with segments
> or not) by a single mbuf reference, as an alternative to the current
> design pattern of using an array and length (struct rte_mbuf **mbufs,
> unsigned count).
>
> This would require implementation in the PMDs etc.
>
> And even in this case, m->nb_segs does not need to be an integer, but
> could be replaced by a single bit indicating if the segment is a
> continuation of a packet or the beginning (alternatively the end) of a
> packet, i.e. the bit can be set for either the first or the last segment
> in the packet.
>
> It is an almost equivalent alternative to the fundamental design pattern
> of using an array of mbufs with a count, which is widely implemented in
> DPDK. And m->next still lives in the second cache line, so I don't see
> any gain from this.
>
> I still don't get how m->nb_segs can be abused without m->next.
>
> > > > > > One other point I'll mention is that we need to have a
> > > > > > discussion on how/where to add in a timestamp value into the
> > > > > > mbuf. Personally, I think it can be in a union with the sequence
> > > > > > number value, but I also suspect that 32-bits of a timestamp is
> > > > > > not going to be enough for many.
> > > > > >
> > > > > > Thoughts?
> > > > >
> > > > > If we consider that timestamp representation should use nanosecond
> > > > > granularity, a 32-bit value may likely wrap around too quickly to
> > > > > be useful. We can also assume that applications requesting
> > > > > timestamps may care more about latency than throughput; Oleg found
> > > > > that using the second cache line for this purpose had a noticeable
> > > > > impact [1].
> > > > >
> > > > > [1] http://dpdk.org/ml/archives/dev/2016-October/049237.html
> > > >
> > > > I agree with Oleg about the latency vs. throughput importance for
> > > > such applications.
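
To put a number on the wrap-around concern: 2^32 nanoseconds is only about
4.3 seconds, while a 64-bit nanosecond counter runs for centuries. A
throwaway calculation (nothing DPDK-specific, just arithmetic):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
        /* Time until a free-running nanosecond counter wraps around. */
        printf("32-bit: %.2f seconds\n", (double)UINT32_MAX / 1e9);
        printf("64-bit: %.0f years\n",
               (double)UINT64_MAX / 1e9 / (3600.0 * 24 * 365));
        return 0;
}

which prints roughly 4.29 seconds and 585 years respectively.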
> > > > If you need high resolution timestamps, consider them to be
> > > > generated by the NIC RX driver, possibly by the hardware itself
> > > > (http://w3new.napatech.com/features/time-precision/hardware-time-stamp),
> > > > so the timestamp belongs in the first cache line. And I am proposing
> > > > that it should have the highest possible accuracy, which makes the
> > > > value hardware dependent.
> > > >
> > > > Furthermore, I am arguing that we leave it up to the application to
> > > > keep track of the slowly moving bits (i.e. counting whole seconds,
> > > > hours and calendar date) out of band, so we don't use precious space
> > > > in the mbuf. The application doesn't need the NIC RX driver's fast
> > > > path to capture which date (or even which second) a packet was
> > > > received. Yes, it adds complexity to the application, but we can't
> > > > set aside 64 bits for a generic timestamp. Or as a weird tradeoff:
> > > > put the fast moving 32 bits in the first cache line and the slow
> > > > moving 32 bits in the second cache line, as a placeholder for the
> > > > application to fill out if needed. Yes, it means that the
> > > > application needs to check the time and update its variable holding
> > > > the slow moving time once every second or so; but that should be
> > > > doable without significant effort.
> > >
> > > That's a good point, however without a 64 bit value, elapsed time
> > > between two arbitrary mbufs cannot be measured reliably due to not
> > > enough context; one way or another the low resolution value is also
> > > needed.
> > >
> > > Obviously latency-sensitive applications are unlikely to perform
> > > lengthy buffering and require this, but I'm not sure about all the
> > > possible use-cases. Considering many NICs expose 64 bit timestamps, I
> > > suggest we do not truncate them.
> > >
> > > I'm not a fan of the weird tradeoff either; PMDs will be tempted to
> > > fill the extra 32 bits whenever they can and negate the performance
> > > improvement of the first cache line.
> >
> > I would tend to agree, and I don't really see any convenient way to
> > avoid putting in a 64-bit field for the timestamp in cache-line 0. If
> > we are ok with having this overlap/partially overlap with the sequence
> > number, it will use up an extra 4B of storage in that cacheline.
>
> I agree about the lack of convenience! And Adrien certainly has a point
> about PMD temptations.
>
> However, I still don't think that a NIC's ability to date-stamp a packet
> is sufficient reason to put a date-stamp in cache line 0 of the mbuf.
> Storing only the fast moving 32 bits in cache line 0 seems like a good
> compromise to me.
>
> Maybe you can find just one more byte, so it can hold 17 minutes with
> nanosecond resolution. (I'm joking!)
>
> Please don't sacrifice the sequence number for the seconds/hours/days
> part of a timestamp. Maybe it could be configurable to use a 32 bit or
> 64 bit timestamp.
>
Do you see both timestamps and sequence numbers being used together? I
would have thought that apps would use either one or the other. However,
your suggestion is workable in any case: allow the sequence number to
overlap just the high 32 bits of the timestamp, rather than the low.

/Bruce
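
PS: In case it helps, the overlap I'm describing would look something like
the rough sketch below. This is illustrative only, not the actual struct
rte_mbuf definition; the field names are made up and a little-endian layout
is assumed so that the sequence number lands on the high half of the 64-bit
timestamp.

#include <stdint.h>

/*
 * Sketch only, NOT the real rte_mbuf layout: a 64-bit nanosecond
 * timestamp sharing storage with the 32-bit reorder sequence number.
 * On a little-endian machine the first 32-bit member overlays the low
 * half of the 64-bit value, so seqn here overlaps the slow-moving high
 * bits of the timestamp rather than the fast-moving low bits.
 */
union timestamp_or_seqn {
        uint64_t timestamp;         /* full 64-bit timestamp            */
        struct {
                uint32_t ts_low;    /* fast-moving low 32 bits          */
                uint32_t seqn;      /* sequence number, overlaps the    */
                                    /* high 32 bits of the timestamp    */
        } parts;
};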