From: Thomas Monjalon <thomas@monjalon.net>
To: "Li, Xiaoyun"
Cc: dev@dpdk.org, "Richardson, Bruce", "Ananyev, Konstantin",
 "Lu, Wenzhuo", "Zhang, Helin", "ophirmu@mellanox.com"
Date: Sun, 29 Oct 2017 09:49:59 +0100
Message-ID: <6811801.9Gdy4CqsrT@xps>
In-Reply-To: <8258746.M4EytlN248@xps>
References: <1507206794-79941-1-git-send-email-xiaoyun.li@intel.com>
 <8258746.M4EytlN248@xps>
Subject: Re: [dpdk-dev] [PATCH v8 1/3] eal/x86: run-time dispatch over memcpy

25/10/2017 09:25, Thomas Monjalon:
> 25/10/2017 08:55, Li, Xiaoyun:
> > From: Li, Xiaoyun
> > > From: Richardson, Bruce
> > > > On Thu, Oct 19, 2017 at 11:00:33AM +0200, Thomas Monjalon wrote:
> > > > > 19/10/2017 10:50, Li, Xiaoyun:
> > > > > > From: Thomas Monjalon
> > > > > > > 19/10/2017 09:51, Li, Xiaoyun:
> > > > > > > > From: Thomas Monjalon [mailto:thomas@monjalon.net]
> > > > > > > > > 19/10/2017 04:45, Li, Xiaoyun:
> > > > > > > > > > Hi
> > > > > > > > > >
> > > > > > > > > > > > > The significant change of this patch is to call a
> > > > > > > > > > > > > function pointer for packet size > 128
> > > > > > > > > > > > > (RTE_X86_MEMCPY_THRESH).
> > > > > > > > > > > > The perf drop is due to the function call replacing
> > > > > > > > > > > > the inline code.
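
[For reference, the dispatch pattern under discussion looks roughly like the
sketch below. It is a minimal illustration, not the patch itself: only
RTE_X86_MEMCPY_THRESH comes from this thread; the other names are assumed
for the example.]

/*
 * Run-time dispatch sketch: sizes up to the threshold stay on an inline
 * path, larger copies go through a function pointer bound once at startup
 * according to the CPU's SIMD support.
 */
#include <stddef.h>
#include <string.h>

#define RTE_X86_MEMCPY_THRESH 128

static void *copy_sse(void *dst, const void *src, size_t n)
{
        /* An SSE-optimized copy would live here; plain memcpy as a stand-in. */
        return memcpy(dst, src, n);
}

static void *copy_avx2(void *dst, const void *src, size_t n)
{
        /* An AVX2-optimized copy would live here; plain memcpy as a stand-in. */
        return memcpy(dst, src, n);
}

/* Selected once, e.g. during startup/EAL init. */
static void *(*big_copy_fn)(void *, const void *, size_t) = copy_sse;

static void __attribute__((constructor)) copy_dispatch_init(void)
{
        if (__builtin_cpu_supports("avx2"))
                big_copy_fn = copy_avx2;
}

static inline void *dispatched_memcpy(void *dst, const void *src, size_t n)
{
        if (n <= RTE_X86_MEMCPY_THRESH)
                return memcpy(dst, src, n);  /* small sizes: inline path */
        return big_copy_fn(dst, src, n);     /* large sizes: indirect call */
}
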
> > > > > > > > > > > >
> > > > > > > > > > > > > Please could you provide some benchmark numbers?
> > > > > > > > > > > > I ran memcpy_perf_test which shows the time cost of
> > > > > > > > > > > > memcpy. I ran it on Broadwell with SSE and AVX2.
> > > > > > > > > > > > But I just drew pictures and looked at the trend, I
> > > > > > > > > > > > did not compute the exact percentage. Sorry about that.
> > > > > > > > > > > > The picture shows results for copy sizes of 2, 4, 6, 8,
> > > > > > > > > > > > 9, 12, 16, 32, 64, 128, 192, 256, 320, 384, 448, 512,
> > > > > > > > > > > > 768, 1024, 1518, 1522, 1536, 1600, 2048, 2560, 3072,
> > > > > > > > > > > > 3584, 4096, 4608, 5120, 5632, 6144, 6656, 7168, 7680,
> > > > > > > > > > > > 8192.
> > > > > > > > > > > > In my test, as the size grows, the drop decreases.
> > > > > > > > > > > > (Using the copy time to indicate the perf.)
> > > > > > > > > > > > From the trend picture, when the size is smaller than
> > > > > > > > > > > > 128 bytes, the perf drops a lot, almost 50%. And above
> > > > > > > > > > > > 128 bytes, it approaches the original DPDK.
> > > > > > > > > > > > I computed it right now: when greater than 128 bytes
> > > > > > > > > > > > and smaller than 1024 bytes, the perf drops about 15%.
> > > > > > > > > > > > When above 1024 bytes, the perf drops about 4%.
> > > > > > > > > > > >
> > > > > > > > > > > > > From a test done at Mellanox, there might be a
> > > > > > > > > > > > > performance degradation of about 15% in testpmd
> > > > > > > > > > > > > txonly with AVX2.
> > > > > > > > > >
> > > > > > > > > > I did tests on X710, XXV710, X540 and MT27710 but didn't
> > > > > > > > > > see performance degradation.
> > > > > > > > > > I used the command
> > > > > > > > > > "./x86_64-native-linuxapp-gcc/app/testpmd -c 0xf -n 4 -- -I"
> > > > > > > > > > and set fwd txonly.
> > > > > > > > > > I tested it on v17.11-rc1, then reverted my patch and
> > > > > > > > > > tested it again.
> > > > > > > > > > Show port stats all and see the throughput pps. But the
> > > > > > > > > > results are similar and there is no drop.
> > > > > > > > > >
> > > > > > > > > > Did I miss something?
> > > > > > > > >
> > > > > > > > > I do not understand. Yesterday you confirmed a 15% drop with
> > > > > > > > > buffers between 128 and 1024 bytes.
> > > > > > > > > But you do not see this drop in your txonly tests, right?
> > > > > > > >
> > > > > > > > Yes. The drop is seen with the test app.
> > > > > > > > Using the command "make test -j" and then
> > > > > > > > "./build/app/test -c f -n 4", then run "memcpy_perf_autotest".
> > > > > > > > The results are the cycles that the memory copy costs.
> > > > > > > > But I just use it to show the trend, because I heard that it's
> > > > > > > > not recommended to use micro-benchmarks like test_memcpy_perf
> > > > > > > > for memcpy performance reports, as they aren't likely to
> > > > > > > > reflect the performance of real world applications.
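
[A rough idea of what such a cycle-counting loop looks like; a simplified
sketch assuming DPDK's rte_rdtsc() and rte_memcpy(), not the actual
memcpy_perf_autotest code.]

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>
#include <rte_cycles.h>
#include <rte_memcpy.h>

#define ITERATIONS 100000

static uint8_t src_buf[8192], dst_buf[8192];

static void bench_size(size_t n)
{
        uint64_t start, cycles;
        unsigned int i;

        start = rte_rdtsc();
        for (i = 0; i < ITERATIONS; i++)
                rte_memcpy(dst_buf, src_buf, n);
        cycles = rte_rdtsc() - start;

        /* A real test also defeats compiler elimination and cache effects. */
        printf("size %zu: %.1f cycles/copy\n", n, (double)cycles / ITERATIONS);
}
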
> > > > > > >
> > > > > > > Yes, real applications can hide the memcpy cost.
> > > > > > > Sometimes, the cost appears for real :)
> > > > > > >
> > > > > > > > Details can be seen at
> > > > > > > > https://software.intel.com/en-us/articles/performance-optimization-of-memcpy-in-dpdk
> > > > > > > >
> > > > > > > > And I didn't see a drop in the testpmd txonly test. Maybe it's
> > > > > > > > because there are not a lot of memcpy calls.
> > > > > > >
> > > > > > > It has been seen in a mlx4 use-case using more memcpy.
> > > > > > > I think 15% in a micro-benchmark is too much.
> > > > > > > What can we do? Raise the threshold?
> > > > > >
> > > > > > I think so. If there is a big drop, we can try raising the
> > > > > > threshold. Maybe 1024? But not sure.
> > > > > > But I didn't reproduce the 15% drop on Mellanox and I'm not sure
> > > > > > how to verify it.
> > > > >
> > > > > I think we should focus on the micro-benchmark and find a
> > > > > reasonable threshold for a reasonable drop tradeoff.
> > > >
> > > > Sadly, it may not be that simple. What shows best performance for
> > > > micro-benchmarks may not show the same effect in a real application.
> > > >
> > > > /Bruce
> > >
> > > Then how to measure the performance?
> > >
> > > And I cannot reproduce the 15% drop on Mellanox.
> > > Could the person who tested the 15% drop help to do the test again
> > > with a 1024 threshold and see if there is any improvement?
> >
> > As Bruce said, the best performance on a micro-benchmark may not show
> > the same effect in real applications.
>
> Yes, real applications may hide the impact.
> You keep saying that it is a reason to allow degrading memcpy raw perf.
> But can you see better performance with buffers of 256 bytes with
> any application thanks to your patch?
> I am not sure whether there is a benefit in keeping code which implies
> a significant drop in micro-benchmarks.
>
> > And I cannot reproduce the 15% drop.
> > And I don't know if raising the threshold can improve the perf or not.
> > Could the person who tested the 15% drop help to do the test again with
> > a 1024 threshold and see if there is any improvement?
>
> We will test an increased threshold today.

Sorry, I forgot to update. It seems that increasing the threshold from 128
to 1024 has no impact. We can recover the 15% drop only by reverting the
patch. I don't know what is creating this drop exactly. When doing
different tests on different environments, we do not see this drop.
If nobody else can see such an issue, I guess we can ignore it.