From: Maxime Coquelin <maxime.coquelin@redhat.com>
To: Shahaf Shuler, tiwei.bie@intel.com, zhihong.wang@intel.com,
    amorenoz@redhat.com, xiao.w.wang@intel.com, dev@dpdk.org,
    jfreimann@redhat.com
Cc: stable@dpdk.org, Matan Azrad
References: <20190829080000.20806-1-maxime.coquelin@redhat.com>
    <9edf9ca8-6ecf-ff24-2db6-311c00e678ce@redhat.com>
Subject: Re: [dpdk-dev] [PATCH 00/15] Introduce Virtio vDPA driver
Date: Wed, 11 Sep 2019 09:15:20 +0200

On 9/11/19 7:15 AM, Shahaf Shuler wrote:
> Tuesday, September 10, 2019 4:56 PM, Maxime Coquelin:
>> Subject: Re: [dpdk-dev] [PATCH 00/15] Introduce Virtio vDPA driver
>> On 9/10/19 3:44 PM, Shahaf Shuler wrote:
>>> Tuesday, September 10, 2019 10:46 AM, Maxime Coquelin:
>>>> Subject: Re: [dpdk-dev] [PATCH 00/15] Introduce Virtio vDPA driver
>
> [...]
>
>>>> Hi Shahaf,
>>>>
>>>> IMHO, I see two use cases where it can make sense to use vDPA with a
>>>> full offload HW device:
>>>> 1. Live-migration support: it makes it possible to switch to rings
>>>> processing in SW during the migration, as Virtio HW does not support
>>>> dirty pages logging.
>>>
>>> Can you elaborate why specifically using the virtio_vdpa PMD enables
>>> this SW relay during migration?
>>> E.g. the vDPA PMD of Intel that runs on top of a VF does that today
>>> as well.
>>
>> I think there was a misunderstanding. When I said:
>> "I see two use cases where it can make sense to use vDPA with a full
>> offload HW device"
>>
>> I meant: I see two use cases where it can make sense to use vDPA with a
>> full offload HW device, instead of using the Virtio PMD directly on the
>> full offload HW device.
>>
>> In other words, I think it is preferable to only offload the datapath,
>> so that it is possible to support SW live-migration.
>>
>>>> 2. It can be used to provide a single standard interface (the
>>>> vhost-user socket) to containers in the scope of CNFs. Doing so, the
>>>> container does not need to be modified, whatever the HW NIC: Virtio
>>>> datapath offload only, full Virtio offload, or no offload at all. In
>>>> the latter case, it would not be optimal as it implies forwarding
>>>> between the Vhost PMD and the HW NIC PMD, but it would work.
>>>
>>> It is not clear to me how the interfaces map in such a system.
>>> From what I understand, the container will have a virtio-user i/f and
>>> the host will have a virtio i/f.
>>> Then the virtio i/f can be programmed to work w/ vDPA or not.
>>> For full emulation, I guess you will need to expose the netdev of the
>>> fully emulated virtio device to the container?
>>>
>>> I am trying to map when it is beneficial to use this virtio_vdpa PMD
>>> and when it is better to use the vendor specific vDPA PMD on top of a
>>> VF.
>>
>> I think that with the above clarification, I made it clear that the
>> goal of this driver is not to replace the vendors' vDPA drivers (their
>> control paths may not even be compatible), but instead to provide a
>> generic driver that can be used either within a guest with a
>> para-virtualized Virtio-net device, or with a HW NIC that fully
>> offloads Virtio (both data and control paths).
>
> Thanks Maxime, it is clearer now.
> From what I understand, this driver is to be used w/ vDPA when the
> underlying device is virtio.
>
> I can perfectly understand the para-virt (+ nested virtualization /
> container inside VM) use case.
>
> Regarding the fully emulated virtio device on the host (instead of a
> plain VF) - the benefit is still not clear to me: if you have HW that
> can expose a VF, why not use a VF + the vendor specific vDPA driver?

If you need a vendor specific vDPA driver for the VF, then you definitely
want to use the vendor specific driver. However, if there is a HW device or
VF that implements the Virtio spec even for the control path (i.e. the PCI
registers layout), one may be tempted to do device assignment directly to
the guest and use the Virtio PMD. The downside of doing that is that it
won't support live-migration. The benefit of using vDPA with the virtio
vDPA driver in this case is to provide a way to support live-migration (by
switching to SW ring processing and performing dirty pages logging).

> Anyway - for the series,
> Acked-by: Shahaf Shuler

Thanks!
Maxime
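
To illustrate the "single standard interface" point above: the DPDK
application inside the container only sees a vhost-user socket and attaches
to it with the virtio-user PMD, regardless of whether the backend is the
virtio vDPA driver, a vendor specific vDPA driver, or a pure SW vhost port.
A minimal testpmd sketch (the socket path and core list below are made up
for this example; hugepage/memory options are omitted):

    # Container side: attach to the vhost-user socket exposed by the host.
    # /var/run/vdpa/vhost-user-0 is a hypothetical socket path.
    testpmd -l 0-1 --no-pci \
        --vdev=virtio_user0,path=/var/run/vdpa/vhost-user-0 \
        -- -i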