From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Mon, 19 Dec 2016 14:27:36 +0800
From: Yuanhan Liu
To: "Yang, Zhiyong"
Cc: "Richardson, Bruce", "Ananyev, Konstantin", Thomas Monjalon,
 "dev@dpdk.org", "De Lara Guarch, Pablo", "Wang, Zhihong"
Message-ID: <20161219062736.GO18991@yliu-dev.sh.intel.com>
References: <1480926387-63838-2-git-send-email-zhiyong.yang@intel.com>
 <7223515.9TZuZb6buy@xps13>
 <2601191342CEEE43887BDE71AB9772583F0E55B0@irsmsx105.ger.corp.intel.com>
 <2601191342CEEE43887BDE71AB9772583F0E568B@irsmsx105.ger.corp.intel.com>
 <20161215101242.GA125588@bricha3-MOBL3.ger.corp.intel.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
User-Agent: Mutt/1.5.23 (2014-03-12)
Subject: Re: [dpdk-dev] [PATCH 1/4] eal/common: introduce rte_memset on IA platform
List-Id: DPDK patches and discussions

On Fri, Dec 16, 2016 at 10:19:43AM +0000, Yang, Zhiyong wrote:
> > > I ran the same virtio/vhost loopback tests without a NIC.
> > > I can see a throughput drop when choosing functions at run time,
> > > compared to the original code, as follows on the same platform
> > > (my machine is Haswell):
> > >
> > >     Packet size    perf drop
> > >     64             -4%
> > >     256            -5.4%
> > >     1024           -5%
> > >     1500           -2.5%
> > >
> > > Another thing: when I run memcpy_perf_autotest, the rte_memcpy
> > > perf gains almost disappear for N <= 128 when choosing functions
> > > at run time. For other values of N, the perf gains become narrower.
> >
> > How narrow? How significant is the improvement we gain from having
> > to maintain our own copy of memcpy? If the libc version is nearly
> > as good, we should just use that.
> >
> > /Bruce
>
> Zhihong sent a patch about rte_memcpy. From the patch, we can see
> that the memcpy optimization work brings obvious perf improvements
> over glibc for DPDK.

Just a clarification: it's better than the __original DPDK__ rte_memcpy,
but not the glibc one.

That makes me think: has anyone tested the memcpy with big packets?
Does the one from DPDK outperform the one from glibc even for big
packets?

	--yliu

> http://www.dpdk.org/dev/patchwork/patch/17753/
> git log as follows:
> This patch is tested on Ivy Bridge, Haswell and Skylake, it provides
> up to 20% gain for Virtio Vhost PVP traffic, with packet size ranging
> from 64 to 1500 bytes.
>
> thanks
> Zhiyong
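
For readers following the run-time selection discussion above, here is a
minimal illustrative sketch of dispatching a copy routine through a
function pointer chosen once at startup. All names here (memcpy_fn_t,
memcpy_generic, memcpy_avx2, dispatch_memcpy) are hypothetical and this
is not the code from the patch; it only shows the indirect,
non-inlinable call that plausibly erodes the gains for small copies
(N <= 128) reported above.

/*
 * Illustrative sketch only: pick a memcpy implementation at run time
 * based on CPU features, then route every copy through a function
 * pointer. Unlike an inline, compile-time-selected rte_memcpy, the
 * indirect call cannot be inlined, so small copies pay the overhead.
 */
#include <stddef.h>
#include <string.h>

typedef void *(*memcpy_fn_t)(void *dst, const void *src, size_t n);

static void *memcpy_generic(void *dst, const void *src, size_t n)
{
	return memcpy(dst, src, n);    /* fall back to libc */
}

static void *memcpy_avx2(void *dst, const void *src, size_t n)
{
	/* a vectorized copy would live here; plain memcpy keeps the
	 * sketch self-contained and runnable */
	return memcpy(dst, src, n);
}

static memcpy_fn_t memcpy_ptr = memcpy_generic;

__attribute__((constructor))
static void memcpy_select(void)
{
	__builtin_cpu_init();          /* required before cpu_supports in a constructor (GCC) */
	if (__builtin_cpu_supports("avx2"))
		memcpy_ptr = memcpy_avx2;
}

/* every call now goes through an indirect, non-inlinable call */
static inline void *dispatch_memcpy(void *dst, const void *src, size_t n)
{
	return memcpy_ptr(dst, src, n);
}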