To: 王志克, users@dpdk.org
From: "Tan, Jianfeng"
Message-ID: <44eae57a-d487-1ff7-0a72-5a7325beb0da@intel.com>
In-Reply-To: <6DAF063A35010343823807B082E5681F41D31C79@mbx05.360buyAD.local>
Date: Wed, 1 Nov 2017 10:58:09 +0800
Subject: Re: [dpdk-users] VIRTIO for containers

Hi Zhike,

On 10/31/2017 12:25 PM, 王志克 wrote:
> Hi,
>
> I tested KNI and compared it with virtio-user. The result is beyond
> my expectation: KNI performance is better (+30%) in a simple netperf
> test with TCP and with UDP of different sizes. I thought they would
> have similar performance, but KNI performed better in my test. Not
> sure why.

This is expected. KNI has a better thread model: its kthread only
processes the user->kernel path, while the kernel->user path is
handled in the ksoftirq thread.

> Note that in my test I did not enable checksum/GSO/... offloads or
> multi-queue, since we need to do VXLAN encapsulation in software. I
> am using OVS 2.8.1 and DPDK 17.05.2. Below is the feature table.

Note that mainstream OVS does not integrate LRO/TSO etc. so far. (The
virtio-user invocation I have in mind here is sketched in the P.S.
below.)

                                KNI    virtio-user
  Multi-seg (user->kernel)      Y      Y
  Multi-seg (kernel->user)      N      Y
  Multi-queue                   N      Y
  Csum offload (user->kernel)   Y      Y
  Csum offload (kernel->user)   N      Y
  Zero copy (user->kernel)      N      Experimental
  Zero copy (kernel->user)      N      N

> In addition, each queue pair on virtio-user creates one vhost
> kthread. If we have many containers, it seems hard to manage the CPU
> usage. Is there any proposal or practice to limit the CPU resource
> of the vhost kthreads?

Yes, this is another thread-model problem. There is a proposal from
Red Hat and IBM on this:
http://events.linuxfoundation.org/sites/events/files/slides/kvm_forum_2015_vhost_sharing_is_better.pdf
But I am not sure when it will be ready. (A cgroup-based stopgap is
sketched in the P.P.S. below.)

Thanks,
Jianfeng

> Br,
> Wang Zhike
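
P.S. For reference, the virtio-user setup discussed above would be
created with a vdev along these lines (a minimal sketch only; the
core list, memory size and queue_size are illustrative values, not
tuned recommendations):

  # Inside the container: attach a virtio-user port backed by the
  # kernel vhost-net module, i.e. the exception path to the kernel.
  testpmd -l 1-2 --socket-mem 1024 --no-pci \
      --vdev=virtio_user0,path=/dev/vhost-net,queues=1,queue_size=1024 \
      -- -i

Opening /dev/vhost-net this way is what spawns the per-queue-pair
vhost kthread you observed.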
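
P.P.S. Until such a proposal lands, a rough stopgap is to confine the
vhost kthreads with the cpu cgroup controller. The sketch below
assumes cgroup v1 mounted at /sys/fs/cgroup/cpu and kthreads named
vhost-<owner-pid>; the quota numbers are illustrative:

  # Create a group capped at 20% of one CPU.
  mkdir /sys/fs/cgroup/cpu/vhost-limit
  echo 100000 > /sys/fs/cgroup/cpu/vhost-limit/cpu.cfs_period_us
  echo 20000  > /sys/fs/cgroup/cpu/vhost-limit/cpu.cfs_quota_us

  # Move every vhost kthread into the group.
  for p in $(pgrep vhost); do
      echo "$p" > /sys/fs/cgroup/cpu/vhost-limit/tasks
  done

As far as I know, a vhost worker also inherits the cgroups of the
process that opens /dev/vhost-net, so placing the container's DPDK
process in such a group before it creates the virtio-user port should
achieve the same effect.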