From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: <thomas@monjalon.net>
Received: from out4-smtp.messagingengine.com (out4-smtp.messagingengine.com
 [66.111.4.28]) by dpdk.org (Postfix) with ESMTP id 837687CB3
 for <dev@dpdk.org>; Thu, 19 Oct 2017 08:58:57 +0200 (CEST)
Received: from compute1.internal (compute1.nyi.internal [10.202.2.41])
 by mailout.nyi.internal (Postfix) with ESMTP id 187A520AF8;
 Thu, 19 Oct 2017 02:58:57 -0400 (EDT)
Received: from frontend1 ([10.202.2.160])
 by compute1.internal (MEProxy); Thu, 19 Oct 2017 02:58:57 -0400
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=monjalon.net; h=
 cc:content-transfer-encoding:content-type:date:from:in-reply-to
 :message-id:mime-version:references:subject:to:x-me-sender
 :x-me-sender:x-sasl-enc; s=mesmtp; bh=kDh0XrYKBx1/m5VKhF2prb8epD
 wfSIAPaFrjVd5sA08=; b=eon3+rS5P+nG2+ltz4s2cYYFYlfE2yMl2Cz/tzs/HX
 H/7tAVAddZqUdSdbjew2jCAWUzXTn8BD2zmQnJ79eS2IXYJyCrbnlSARrwyVZD46
 j6ztBSpvECqgruPWDTNwFhG74tl/Ue5lLQJ+noY6p3f4tOFdlcdBFmhgpeTa7G8F
 4=
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
 messagingengine.com; h=cc:content-transfer-encoding:content-type
 :date:from:in-reply-to:message-id:mime-version:references
 :subject:to:x-me-sender:x-me-sender:x-sasl-enc; s=fm1; bh=kDh0Xr
 YKBx1/m5VKhF2prb8epDwfSIAPaFrjVd5sA08=; b=Lb16Jy2GKCbEvf8t+CwGto
 WM67/EyIbB9QALh25QrMiNJU0CtFfKUGtV9hNPyu2tc3w50oSsq0Py0Pq/SlRks4
 tCb8UfeP3Lj9kBopWG84a5/sLsxMFO/nM8f3Q9gcMfBarppu4mfR38fQRzKO+46S
 9nbtzI4tjsCEvCGazRdrGqMXOWHAEaOr/xqmik9FiAYkPSKYcynj0Cqcsirm3HqU
 FSHFNXQF1X6xshMNTHo2HesfAKLgm/Kxi6Tl92qzqjbAZqzYpEli5BDvCpATKu6k
 vh8wpJrkk73iAjtknIZDnVOCVqZlS9+sY0xjy0J92JkmTILK7pMykYiYMrmEbfCA
 ==
X-ME-Sender: <xms:MU3oWf123fA8jFg5R7Pf1MhnbjrXaqLV0zS7C-4fQKjLeHL-m39FjQ>
Received: from xps.localnet (184.203.134.77.rev.sfr.net [77.134.203.184])
 by mail.messagingengine.com (Postfix) with ESMTPA id B86967E3E2;
 Thu, 19 Oct 2017 02:58:56 -0400 (EDT)
From: Thomas Monjalon <thomas@monjalon.net>
To: "Li, Xiaoyun" <xiaoyun.li@intel.com>
Cc: "Ananyev, Konstantin" <konstantin.ananyev@intel.com>, "Richardson,
 Bruce" <bruce.richardson@intel.com>, dev@dpdk.org, "Lu,
 Wenzhuo" <wenzhuo.lu@intel.com>, "Zhang, Helin" <helin.zhang@intel.com>,
 "ophirmu@mellanox.com" <ophirmu@mellanox.com>
Date: Thu, 19 Oct 2017 08:58:54 +0200
Message-ID: <1661434.2yK3chXuTC@xps>
In-Reply-To: <B9E724F4CB7543449049E7AE7669D82F47FD6D@SHSMSX101.ccr.corp.intel.com>
References: <1507206794-79941-1-git-send-email-xiaoyun.li@intel.com>
 <B9E724F4CB7543449049E7AE7669D82F47F814@SHSMSX101.ccr.corp.intel.com>
 <B9E724F4CB7543449049E7AE7669D82F47FD6D@SHSMSX101.ccr.corp.intel.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 7Bit
Content-Type: text/plain; charset="us-ascii"
Subject: Re: [dpdk-dev] [PATCH v8 1/3] eal/x86: run-time dispatch over memcpy
X-BeenThere: dev@dpdk.org
X-Mailman-Version: 2.1.15
Precedence: list
List-Id: DPDK patches and discussions <dev.dpdk.org>
List-Unsubscribe: <http://dpdk.org/ml/options/dev>,
 <mailto:dev-request@dpdk.org?subject=unsubscribe>
List-Archive: <http://dpdk.org/ml/archives/dev/>
List-Post: <mailto:dev@dpdk.org>
List-Help: <mailto:dev-request@dpdk.org?subject=help>
List-Subscribe: <http://dpdk.org/ml/listinfo/dev>,
 <mailto:dev-request@dpdk.org?subject=subscribe>
X-List-Received-Date: Thu, 19 Oct 2017 06:58:57 -0000

19/10/2017 04:45, Li, Xiaoyun:
> Hi
> > > >
> > > > The significant change of this patch is to call a function pointer
> > > > for packet size > 128 (RTE_X86_MEMCPY_THRESH).
> > > The perf drop is due to a function call replacing the inlined copy.
> > >
> > > > Please could you provide some benchmark numbers?
> > > I ran memcpy_perf_test which would show the time cost of memcpy. I ran
> > > it on broadwell with sse and avx2.
> > > But I just draw pictures and looked at the trend not computed the
> > > exact percentage. Sorry about that.
> > > The picture shows results of copy size of 2, 4, 6, 8, 9, 12, 16, 32,
> > > 64, 128, 192, 256, 320, 384, 448, 512, 768, 1024, 1518, 1522, 1536,
> > > 1600, 2048, 2560, 3072, 3584, 4096, 4608, 5120, 5632, 6144, 6656, 7168,
> > 7680, 8192.
> > > In my tests, as the size grows, the drop diminishes. (I use copy time
> > > to indicate the perf.) The trend picture shows that when the size is
> > > smaller than 128 bytes, the perf drops a lot, almost 50%. Above 128
> > > bytes, it approaches the original DPDK.
> > > I have now computed it exactly: for sizes greater than 128 bytes and
> > > smaller than 1024 bytes, the perf drops about 15%. Above 1024
> > > bytes, the perf drops about 4%.
> > >
> > > > From a test done at Mellanox, there might be a performance
> > > > degradation of about 15% in testpmd txonly with AVX2.
> > 
> 
> I did tests on X710, XXV710, X540 and MT27710 NICs but did not see any performance degradation.
> 
> I used the command "./x86_64-native-linuxapp-gcc/app/testpmd -c 0xf -n 4 -- -i" and set fwd to txonly.
> I tested it on v17.11-rc1, then reverted my patch and tested again.
> I ran "show port stats all" and looked at the throughput in pps. The results are similar, with no drop.
> 
> Did I miss something?

I do not understand. Yesterday you confirmed a 15% drop with buffers between
128 and 1024 bytes.
But you do not see this drop in your txonly tests, right?

> > Another thing: in the coming days I will test testpmd txonly with Intel
> > and Mellanox NICs.
> > I will also try adjusting RTE_X86_MEMCPY_THRESH to see if there is any
> > improvement.
> > 
> > > > Is there someone else seeing a performance degradation?