From: "Ananyev, Konstantin" <konstantin.ananyev@intel.com>
To: "Shaw, Jeffrey B" <jeffrey.b.shaw@intel.com>, Olivier MATZ
 <olivier.matz@6wind.com>, "dev@dpdk.org" <dev@dpdk.org>
Date: Wed, 14 May 2014 14:07:10 +0000
Subject: Re: [dpdk-dev] [PATCH RFC 05/11] mbuf: merge physaddr and buf_len in a bitfield

Hi Olivier,

Apart from the performance impact, one more concern:
As far as I know, the theoretical limit for physical addresses on Intel is 52 bits (MAXPHYADDR).
I understand that these days no-one uses more than 48 bits, and that will probably stay true for the next few years.
Still, if we occupy these (MAXPHYADDR - 48) bits now, it can become a problem in the future.
After all, the savings from that change are not that big: only 2 bytes.
As I understand it, you already save an extra 7 bytes with the other proposed modifications of the mbuf.
That's enough to add the TSO-related information into the mbuf.
So my suggestion would be to keep phys_addr 64 bits long.
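To illustrate the concern (just a sketch in C; the field names and exact widths are my reading of the RFC, not necessarily the final layout):

#include <stdint.h>

/* Packing under discussion: buf_len is stolen from the high bits
 * of the 64-bit physical address word. */
struct mbuf_addr_len_sketch {
	uint64_t buf_physaddr:48; /* fine while platforms expose <= 48 PA bits */
	uint64_t buf_len:16;      /* overlaps PA bits 48-51 if MAXPHYADDR grows to 52 */
};

On a platform that actually used address bits above 47, buf_len would overlap the address, and there would be no way to widen the field later without another mbuf layout change.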
Thanks
Konstantin

-----Original Message-----
From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Shaw, Jeffrey B
Sent: Friday, May 09, 2014 5:12 PM
To: Olivier MATZ; dev@dpdk.org
Subject: Re: [dpdk-dev] [PATCH RFC 05/11] mbuf: merge physaddr and buf_len in a bitfield

I agree, we should wait for comments, then test the performance when the patches have settled.


-----Original Message-----
From: Olivier MATZ [mailto:olivier.matz@6wind.com]
Sent: Friday, May 09, 2014 9:06 AM
To: Shaw, Jeffrey B; dev@dpdk.org
Subject: Re: [dpdk-dev] [PATCH RFC 05/11] mbuf: merge physaddr and buf_len in a bitfield

Hi Jeff,

Thank you for your comment.

On 05/09/2014 05:39 PM, Shaw, Jeffrey B wrote:
> Have you tested this patch to see if there is a negative impact to
> performance?

Yes, but not with testpmd. I ran our internal non-regression performance tests and they show no difference (or a difference below the error margin), even with low-overhead processing like forwarding, whatever the number of cores I use.

> Wouldn't the processor have to mask the high bytes of the physical
> address when it is used, for example, to populate descriptors with
> buffer addresses?  When compute-bound, this could steal CPU cycles
> away from packet processing.  I think we should understand the
> performance trade-off in order to save these 2 bytes.

I would naively say that the cost is negligible: accessing the length is the same as before (it's a 16-bit field), and accessing the physical address is just a mask or a shift, which should not cost much on an Intel processor (1 cycle?). This is to be compared with the number of cycles per packet in io-fwd mode, which is probably around 150 or 200.
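To make that concrete, the accesses look roughly like this (a sketch with assumed field positions, not the exact patch code):

#include <stdint.h>

#define MBUF_PA_BITS 48
#define MBUF_PA_MASK ((UINT64_C(1) << MBUF_PA_BITS) - 1)

/* Assuming the low 48 bits hold the physical address and the high
 * 16 bits hold buf_len, each access is a single ALU operation. */
static inline uint64_t sketch_physaddr(uint64_t word)
{
	return word & MBUF_PA_MASK;              /* one AND */
}

static inline uint16_t sketch_buf_len(uint64_t word)
{
	return (uint16_t)(word >> MBUF_PA_BITS); /* one SHR */
}

Both helpers compile to a single instruction on x86-64, so the overhead is in the noise compared to the per-packet budget above.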

> It would be interesting to see how throughput is impacted when the
> workload is core-bound.  This could be accomplished by running testpmd
> in io-fwd mode across 4x 10G ports.

I agree, this is something we could check. If you agree, let's first wait for some other comments and see if we find a consensus on the patches.

Regards,
Olivier