From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Xu, Qian Q"
To: Maxime Coquelin, "Wang, Zhihong", Yuanhan Liu
Cc: mst@redhat.com, dev@dpdk.org, vkaplans@redhat.com
Thread-Topic: [dpdk-dev] [PATCH v4] vhost: Add indirect descriptors support to the TX path
Date: Fri, 4 Nov 2016 06:18:24 +0000
Message-ID: <82F45D86ADE5454A95A89742C8D1410E3923865A@shsmsx102.ccr.corp.intel.com>
Subject: Re: [dpdk-dev] [PATCH v4] vhost: Add indirect descriptors support to the TX path
List-Id: patches and discussions about DPDK

-----Original Message-----
From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Maxime Coquelin
Sent: Thursday, November 3, 2016 4:11 PM
To: Wang, Zhihong; Yuanhan Liu
Cc: mst@redhat.com; dev@dpdk.org; vkaplans@redhat.com
Subject: Re: [dpdk-dev] [PATCH v4] vhost: Add indirect descriptors support to the TX path

> The strange thing with both of our figures is that this is below what
> I obtain with my SandyBridge machine. The SB CPU frequency is 4%
> higher, but that doesn't explain the gap between the measurements.
>
> I'm continuing the investigation on my side.
> Maybe we should fix a deadline, and decide to disable indirect in
> Virtio PMD if the root cause is not identified/fixed by then?
>
> Yuanhan, what do you think?

I have done some measurements using perf, and now understand better what happens.

With indirect descriptors, I can see a cache miss when fetching the descriptors in the indirect table. Actually, this is expected, so we prefetch the first desc as soon as possible, but still not soon enough to make it transparent.
In the direct descriptors case, the desc in the virtqueue seems to remain in the cache from its previous use, so we have a hit.

That said, in a realistic use-case, I think we should not have a hit, even with direct descriptors.
Indeed, the test case uses testpmd on the guest side with forwarding set in IO mode. It means the packet content is never accessed by the guest.

In my experiments, I usually set the "macswap" forwarding mode, which swaps the src and dest MAC addresses in the packet. I find it more realistic, because I don't see the point in sending packets to the guest if they are never accessed (not even their headers).

I tried the test case again, this time setting the forwarding mode to macswap in the guest. This time, I get the same performance with both direct and indirect (indirect even a little better with a small optimization, consisting in systematically prefetching the first 2 descs, as we know they are contiguous).

Do you agree we should assume that the packet (header and/or buf) will always be accessed by the guest application?

---- Maybe it's true in many real use cases. But we also need to ensure there is no performance drop for "io fwd". As far as I know, the OVS-DPDK team will do their performance benchmark of the virtio part based on "IO fwd", so they would also see some performance drop. We just thought it might be possible to make the feature off by default, so that anyone who wants to use it can turn it on. People can choose whether they want to use the feature, just like vhost dequeue zero copy.

If so, do you agree we should keep indirect descs enabled, and maybe update the test cases?

Thanks,
Maxime