From: "Tan, Jianfeng"
To: 王志克 (Wang Zhike), "avi.cohen@huawei.com", "users@dpdk.org"
Date: Fri, 27 Oct 2017 09:58:33 +0800
Subject: Re: [dpdk-users] VIRTIO for containers

Hi Zhike,

On 10/26/2017 8:53 PM, 王志克 wrote:
> Hi,
>
> Thanks for the reply.
>
> As for putting TCP/IP rx into the app thread: actually, we might
> avoid that with a small change to the tap driver. Currently tap
> receives via netif_rx()/netif_receive_skb(), which can result in the
> packet going up the TCP/IP stack in the vhost kthread. Instead, we
> could backlog the packets onto another CPU (the application thread's
> CPU?).
>
> [Wang Zhike] Then in that case, another kthread such as ksoftirqd
> will be kicked, right?
>
> In my understanding, the advantage is that rx performance can be
> further improved, while the disadvantage is that more CPU resources
> are used and another queue is needed. If it can be done in a smart
> way, e.g. take this path when the system has idle CPUs and otherwise
> fall back to a single kernel thread, that would be best. Just my 2
> cents.

Yes, that makes sense. We need a smart mechanism to decide whether a
packet is handled in the vhost kthread or in the ksoftirqd kthread
(rough sketch in the P.S. below). Also, we could even avoid forking a
vhost kthread at all, to reduce the number of context switches.

Thanks,
Jianfeng
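
P.S. A rough sketch of the rx-path choice being discussed, just to make
the idea concrete. This is illustrative only: tap_rx_deliver() and
target_cpu are made-up names, and enqueue_to_backlog() is static in
net/core/dev.c today, so a real patch would need to export it or reuse
the RPS plumbing instead.

    #include <linux/netdevice.h>
    #include <linux/skbuff.h>
    #include <linux/smp.h>

    /* Hypothetical helper, not mainline code. */
    static void tap_rx_deliver(struct sk_buff *skb, int target_cpu)
    {
            unsigned int qtail;

            if (target_cpu < 0 || target_cpu == smp_processor_id()) {
                    /*
                     * Inline path: the whole TCP/IP stack runs right
                     * here, i.e. in the vhost kthread that called us.
                     */
                    netif_receive_skb(skb);
            } else {
                    /*
                     * Backlog path: queue the skb on target_cpu's
                     * softnet backlog so that NET_RX_SOFTIRQ is raised
                     * there, and protocol processing runs in that
                     * CPU's softirq/ksoftirqd context (e.g. the
                     * application thread's CPU).
                     */
                    enqueue_to_backlog(skb, target_cpu, &qtail);
            }
    }

The branch condition is where the "smart mechanism" would go, e.g. only
pick a remote target_cpu when that CPU is idle, and otherwise fall back
to processing inline in the vhost kthread.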