From: "Tan, Jianfeng"
To: Tetsuya Mukawa, Yuanhan Liu
Cc: dev@dpdk.org, huawei.xie@intel.com, Thomas Monjalon, David Marchand, nakajima.yoshihiro@lab.ntt.co.jp
Subject: Re: [dpdk-dev] [PATCH v5 0/6] Virtio-net PMD: QEMU QTest extension for container
Date: Mon, 6 Jun 2016 18:35:50 +0800
In-Reply-To: <8b323417-a4d2-2b92-db4c-42ada2cfcc54@igel.co.jp>

Hi,

On 6/6/2016 5:28 PM, Tetsuya Mukawa wrote:
> On 2016/06/06 17:03, Tan, Jianfeng wrote:
>> Hi,
>>
>> On 6/6/2016 1:10 PM, Tetsuya Mukawa wrote:
>>> Hi Yuanhan,
>>>
>>> Sorry for the late reply.
>>>
>>> On 2016/06/03 13:17, Yuanhan Liu wrote:
>>>> On Thu, Jun 02, 2016 at 06:30:18PM +0900, Tetsuya Mukawa wrote:
>>>>> Hi Yuanhan,
>>>>>
>>>>> On 2016/06/02 16:31, Yuanhan Liu wrote:
>>>>>> But still, I'd ask: do we really need 2 virtio-for-container solutions?
>>>>> I appreciate your comments.
>>>> No, I appreciate your effort in contributing to DPDK! The vhost-pmd stuff
>>>> is just brilliant!
>>>>
>>>>> Let me have time to discuss it with our team.
>>>> I'm wondering whether we could have only one solution. IMO, the drawback
>>>> of having two (quite different) solutions might outweigh the benefit it
>>>> brings. Say, it might just confuse users.
>>> I agree with this.
>>> If we have 2 solutions, it would confuse the DPDK users.
>>>
>>>> OTOH, I'm wondering whether you could adapt to Jianfeng's solution. If
>>>> not, what are the missing parts, and could we fix them? I'm thinking
>>>> that having one unified solution will keep our energy/focus on one
>>>> thing, making it better and better! Having two just splits the energy;
>>>> it also introduces an extra maintenance burden.
>>> Of course, I can basically adopt Jianfeng's solution.
>>> Actually, his solution is almost the same as what I tried to implement
>>> at first.
>>>
>>> I guess here are the pros/cons of the 2 solutions.
>>>
>>> [Jianfeng's solution]
>>> - Pros
>>>   No need to invoke a QEMU process.
>>> - Cons
>>>   If the virtio-net specification is changed, we need to implement it
>>>   by ourselves.
>> It will barely require any change when the virtio-net specification
>> changes, as far as I can see.
>> The only part we care about is how the desc, avail, and used rings are
>> laid out in memory, which is a very small part.
> That's good news, because then we don't need to spend much effort
> following the latest virtio-net specification.
>
>> It's true that my solution currently depends heavily on the vhost-user
>> protocol, which is defined in QEMU. I cannot see a big problem there
>> so far.
>>
>>> Also, the LSC interrupt and control queue functions are not
>>> supported yet.
>>> I agree both functions may not be so important, and if we need them
>>> we can implement them, but we would need to spend energy to implement
>>> them.
>> LSC is really less important than the rxq interrupt (IMO). We don't know
>> how long it will take for the virtio rxq interrupt to become available
>> in QEMU, but we can accelerate it if we avoid using QEMU.
>>
>> Actually, if the vhost backend is vhost-user (the main use case),
>> current QEMU has limited control queue support, because it needs
>> support from the vhost-user backend.
>>
>> Let me add one more con of my solution:
>> - It needs extra logic to support other virtio devices (say,
>>   virtio-scsi). Would it be easier for Tetsuya's solution to do that?
>>
> Probably, my solution would make that easier.
> My solution has enough facilities to access the io port and PCI
> configuration space of a virtio-scsi device in QEMU.
> So, if you invoke QEMU with virtio-scsi, all you need to do is change
> the PCI interface of the current virtio-scsi PMD.
> (I am just assuming we currently have a virtio-scsi PMD.)
> If the virtio-scsi PMD works on QEMU, the same code should work after
> only changing the PCI interface.
>
>>> [My solution]
>>> - Pros
>>>   The basic principle of my implementation is not to reinvent the
>>>   wheel. We can use the virtio-net device of the QEMU implementation,
>>>   which means we don't need to maintain a virtio-net device by
>>>   ourselves, and we can use all of the functions supported by the QEMU
>>>   virtio-net device.
>>> - Cons
>>>   Need to invoke a QEMU process.
>> Two more possible cons:
>> a) This solution also needs to maintain the qtest utility, right?
> Yes, but the spec of qtest will be more stable than virtio-net.
>
>> b) There's still the address range restriction, right? We can use
>> "--base-virtaddr=0x400000000" to relieve this issue, but what about
>> when there are 2 or more devices? (By the way, is there still an
>> address range requirement for 32-bit systems?)
> Our solution is a virtio-net driver, plus a vhost-user backend driver
> that needs to access the memory allocated by the virtio-net driver.
> If an application has 2 devices, it means 2 vhost-user backend PMDs
> need to access the same application memory, right?
> Also, currently each virtio-net device has its own QEMU process.
> So, I am not sure what the problem would be if we had 2 devices.

OK, my bad. Multiple devices should need just one
"--base-virtaddr=0x400000000".

> BTW, the 44-bit limitation comes from the current QEMU implementation
> itself.
> (Actually, if a modern virtio device is used, we should be able to
> remove the restriction.)

Good to know.

>> c) Actually, IMO this solution is sensitive to any virtio spec change
>> (io port, PCI configuration space).
> In that case, the virtio-net PMD itself will need to be fixed.
> Then my implementation will also be fixed in the same way.
> The current implementation has only the PCI abstraction that Yuanhan
> introduced, so you may think my solution depends on the above things,
> but actually my implementation depends only on how to access the io
> port and PCI configuration space. This is what "qtest.h" provides.

Gotcha.

Thanks,
Jianfeng
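
P.S. For reference, below is a minimal sketch of the split-ring layout
discussed above (the desc, avail, and used structures), following the
virtio 1.0 spec. Take the struct and field names as illustrative; they
are not copied from any particular DPDK or QEMU header.

    #include <stdint.h>

    /* Sketch of the virtio 1.0 split-ring structures (illustrative). */

    /* One descriptor: a guest buffer, optionally chained via `next`. */
    struct vring_desc {
            uint64_t addr;   /* guest-physical address of the buffer */
            uint32_t len;    /* buffer length in bytes */
            uint16_t flags;  /* VRING_DESC_F_NEXT / _WRITE / _INDIRECT */
            uint16_t next;   /* next descriptor index if F_NEXT is set */
    };

    /* Driver -> device: indices of descriptor chains ready to process. */
    struct vring_avail {
            uint16_t flags;
            uint16_t idx;    /* where the driver puts the next entry */
            uint16_t ring[]; /* queue-size entries */
    };

    /* One completed chain: head descriptor index and bytes written. */
    struct vring_used_elem {
            uint32_t id;
            uint32_t len;
    };

    /* Device -> driver: descriptor chains the device is done with. */
    struct vring_used {
            uint16_t flags;
            uint16_t idx;    /* where the device puts the next entry */
            struct vring_used_elem ring[];
    };

As long as a spec revision keeps this layout, the driver's ring handling
needs no rework; that is the sense in which the exposure to spec changes
is small.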