Date: Mon, 10 Oct 2016 11:37:44 +0800
From: Yuanhan Liu
To: "Michael S. Tsirkin"
Cc: Maxime Coquelin, Stephen Hemminger, dev@dpdk.org, qemu-devel@nongnu.org, "Wang, Zhihong"
Message-ID: <20161010033744.GW1597@yliu-dev.sh.intel.com>
In-Reply-To: <20160929231252-mutt-send-email-mst@kernel.org>
Subject: Re: [dpdk-dev] [Qemu-devel] [PATCH 1/2] vhost: enable any layout feature

On Thu, Sep 29, 2016 at 11:21:48PM +0300, Michael S. Tsirkin wrote:
> On Thu, Sep 29, 2016 at 10:05:22PM +0200, Maxime Coquelin wrote:
> >
> >
> > On 09/29/2016 07:57 PM, Michael S. Tsirkin wrote:
> Yes but two points.
>
> 1. why is this memset expensive?
I don't have the exact answer, just some rough thoughts: it's an
external C library function, so there is a call, and the instruction
pointer will bounce back and forth. Besides, it's overkill to use
memset() for resetting a 14-byte structure. A trick like

    *(struct virtio_net_hdr *)hdr = (struct virtio_net_hdr){ 0 };

or even

    hdr->xxx = 0;
    hdr->yyy = 0;

should behave better.

There was an example: the vhost enqueue optimization patchset from
Zhihong [0] uses memset, and it introduced a more than 15% drop (IIRC)
on my Ivybridge server; it had no such issue on his server, though.

[0]: http://dpdk.org/ml/archives/dev/2016-August/045272.html

	--yliu

> Is the test completely skipping looking
> at the packet otherwise?
>
> 2. As long as we are doing this, see
>        Alignment vs. Networking
>        ========================
>    in Documentation/unaligned-memory-access.txt
>
>
> > From the micro-benchmarks results, we can expect +10% compared to
> > indirect descriptors, and +5% compared to using 2 descs in the
> > virtqueue.
> > Also, it should have the same benefits as indirect descriptors for 0%
> > pkt loss (as we can fill 2x more packets in the virtqueue).
> >
> > What do you think?
> >
> > Thanks,
> > Maxime