From mboxrd@z Thu Jan  1 00:00:00 1970
Message-ID: <536CFCEF.4080704@6wind.com>
Date: Fri, 09 May 2014 18:06:07 +0200
From: Olivier MATZ
To: "Shaw, Jeffrey B", dev@dpdk.org
References: <1399647038-15095-1-git-send-email-olivier.matz@6wind.com>
 <1399647038-15095-6-git-send-email-olivier.matz@6wind.com>
 <4032A54B6BB5F04B8C08B6CFF08C59285542081E@FMSMSX103.amr.corp.intel.com>
In-Reply-To: <4032A54B6BB5F04B8C08B6CFF08C59285542081E@FMSMSX103.amr.corp.intel.com>
Subject: Re: [dpdk-dev] [PATCH RFC 05/11] mbuf: merge physaddr and buf_len
 in a bitfield

Hi Jeff,

Thank you for your comment.

On 05/09/2014 05:39 PM, Shaw, Jeffrey B wrote:
> have you tested this patch to see if there is a negative impact to
> performance?

Yes, but not with testpmd. I ran our internal non-regression
performance tests and they show no difference (or one below the error
margin), even with low-overhead processing such as plain forwarding,
whatever the number of cores I use.

> Wouldn't the processor have to mask the high bytes of the physical
> address when it is used, for example, to populate descriptors with
> buffer addresses?
> When compute bound, this could steal CPU cycles
> away from packet processing. I think we should understand the
> performance trade-off in order to save these 2 bytes.

I would naively say that the cost is negligible: accessing the length
is the same as before (it is a 16-bit field), and accessing the
physical address is just a mask or a shift, which should not take long
on an Intel processor (one cycle?). This is to be compared with the
number of cycles per packet in io-fwd mode, which is probably around
150 or 200.

> It would be interesting to see how throughput is impacted when the
> workload is core-bound. This could be accomplished by running testpmd
> in io-fwd mode across 4x 10G ports.

I agree, this is something we could check. If you agree, let's first
wait for some other comments and see whether we reach a consensus on
the patches.

Regards,
Olivier