From: Maxime Coquelin <maxime.coquelin@redhat.com>
To: Shahaf Shuler <shahafs@mellanox.com>,
"tiwei.bie@intel.com" <tiwei.bie@intel.com>,
"zhihong.wang@intel.com" <zhihong.wang@intel.com>,
"amorenoz@redhat.com" <amorenoz@redhat.com>,
"xiao.w.wang@intel.com" <xiao.w.wang@intel.com>,
"dev@dpdk.org" <dev@dpdk.org>,
"jfreimann@redhat.com" <jfreimann@redhat.com>
Cc: "stable@dpdk.org" <stable@dpdk.org>, Matan Azrad <matan@mellanox.com>
Subject: Re: [dpdk-dev] [PATCH 00/15] Introduce Virtio vDPA driver
Date: Tue, 10 Sep 2019 15:56:00 +0200 [thread overview]
Message-ID: <9edf9ca8-6ecf-ff24-2db6-311c00e678ce@redhat.com> (raw)
In-Reply-To: <AM0PR0502MB3795900F00F4150392B4B2EEC3B60@AM0PR0502MB3795.eurprd05.prod.outlook.com>
On 9/10/19 3:44 PM, Shahaf Shuler wrote:
> Tuesday, September 10, 2019 10:46 AM, Maxime Coquelin:
>> Subject: Re: [dpdk-dev] [PATCH 00/15] Introduce Virtio vDPA driver
>>
>> Hi Shahaf,
>>
>> On 9/9/19 1:55 PM, Shahaf Shuler wrote:
>>> Hi Maxime,
>>>
>>> Thursday, August 29, 2019 11:00 AM, Maxime Coquelin:
>>>> Subject: [dpdk-dev] [PATCH 00/15] Introduce Virtio vDPA driver
>>>>
>>>> vDPA allows offloading Virtio datapath processing to supported NICs,
>>>> such as IFCVF.
>>>>
>>>> The control path has to be handled by a dedicated vDPA driver, so
>>>> that it can translate Vhost-user protocol requests into proprietary
>>>> NIC register accesses.
>>>>
>>>> This driver is the vDPA driver for Virtio devices, meaning that
>>>> Vhost-user protocol requests get translated into Virtio register
>>>> accesses as defined in the Virtio spec.
>>>>
>>>> Basically, it can be used within a guest with a para-virtualized
>>>> Virtio-net device, or even with a full Virtio HW offload NIC directly
>>>> on the host.
>>>
>>> Can you elaborate more on the use cases for such a driver?
>>>
>>> 1. If the underlying HW can support a full virtio device, why do we need
>>> to work with it in vDPA mode? Why not provide it to the VM as a
>>> passthrough device?
>>> 2. Why is it preferable to work with a virtio device as the backend
>>> device for vDPA vs. working with the underlying HW VF?
>>
>>
>> IMHO, I see two use cases where it can make sense to use vDPA with a full
>> offload HW device:
>> 1. Live-migration support: It makes it possible to switch to ring
>> processing in SW during the migration, as Virtio HW does not support
>> dirty page logging.
>
> Can you elaborate on why specifically using the virtio_vdpa PMD enables this SW relay during migration?
> e.g. Intel's vDPA PMD that runs on top of a VF does that today as well.
I think there was a misunderstanding. When I said:
"
I see two use cases where it can make sense to use vDPA with a full
offload HW device
"
I meant that I see two use cases where it can make sense to use vDPA with
a full offload HW device, instead of using the Virtio PMD on top of the
full offload HW device.
In other words, I think it is preferable to only offload the datapath,
so that it is possible to support SW live-migration.
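To illustrate what the SW fallback implies, here is a minimal, hypothetical
sketch (not part of this series; the helper name and its parameters are
made up, while rte_vhost_log_write() and rte_vhost_log_used_vring() are
existing vhost library logging helpers): this is the dirty-page logging a
SW relay has to perform for every buffer it completes during migration,
and which a Virtio HW datapath cannot do on its own.

#include <stdint.h>
#include <rte_vhost.h>

/* Hypothetical helper (not from this series): the per-completion
 * obligation of a SW relay while live migration is in progress.
 * Both the guest buffer that was written and the used-ring update
 * must be logged as dirty, which the HW datapath cannot do. */
static void
relay_log_completion(int vid, uint16_t qid,
		     uint64_t buf_gpa, uint32_t buf_len,
		     uint64_t used_off, uint64_t used_len)
{
	/* Log the guest buffer that was just written. */
	rte_vhost_log_write(vid, buf_gpa, buf_len);
	/* Log the used-ring entry that was just updated. */
	rte_vhost_log_used_vring(vid, qid, used_off, used_len);
}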
>>
>> 2. Can be used to provide a single standard interface (the vhost-user
>> socket) to containers in the scope of CNFs. Doing so, the container
>> does not need to be modified, whatever the HW NIC: Virtio datapath
>> offload only, full Virtio offload, or no offload at all. In the
>> latter case, it would not be optimal as it implies forwarding between
>> the Vhost PMD and the HW NIC PMD but it would work.
>
> It is not clear to me how the interfaces map in such a system.
> From what I understand, the container will have a virtio-user i/f and the host will have a virtio i/f. Then the virtio i/f can be programmed to work w/ vDPA or not.
> For full emulation, I guess you will need to expose the netdev of the fully emulated virtio device to the container?
>
> I am trying to map when it is beneficial to use this virtio_vdpa PMD and when it is better to use the vendor-specific vDPA PMD on top of a VF.
I think that with the above clarification, I made it clear that the goal
of this driver is not to replace vendors' vDPA drivers (their control
paths may not even be compatible), but instead to provide a generic
driver that can be used either within a guest with a para-virtualized
Virtio-net device or with a HW NIC that fully offloads Virtio (both data
and control paths).
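To make the intended usage concrete, here is a minimal sketch (not taken
from this series; the PCI address, socket path, flags and error handling
are placeholders) of how a host application could expose such a vDPA
device, virtio or vendor specific alike, behind a standard vhost-user
socket using the existing rte_vdpa/rte_vhost APIs:

#include <rte_pci.h>
#include <rte_vdpa.h>
#include <rte_vhost.h>

/* Hypothetical example: whatever vDPA driver registered the device,
 * the application only deals with a generic device id and a standard
 * vhost-user socket consumed by the container or VM. */
static int
expose_vdpa_socket(void)
{
	struct rte_vdpa_dev_addr addr = {
		.type = PCI_ADDR,
		/* Placeholder PCI address of the device bound to
		 * the vDPA driver. */
		.pci_addr = { .domain = 0, .bus = 5, .devid = 0,
			      .function = 0 },
	};
	const char *path = "/tmp/vdpa0.sock";	/* placeholder path */
	int did;

	/* Look up the vDPA device id registered by the driver. */
	did = rte_vdpa_find_device_id(&addr);
	if (did < 0)
		return -1;

	/* Register a vhost-user socket (flags 0: server mode, as a
	 * placeholder) and attach the vDPA device to it, so the
	 * datapath is offloaded while the socket stays a standard
	 * vhost-user interface. */
	if (rte_vhost_driver_register(path, 0) < 0)
		return -1;
	if (rte_vhost_driver_attach_vdpa_device(path, did) < 0)
		return -1;

	return rte_vhost_driver_start(path);
}

Whether the device behind 'addr' is a full Virtio offload NIC handled by
this driver or a vendor VF handled by its own vDPA driver makes no
difference to the code above, which is the point about keeping a single
standard interface.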
Thread overview: 51+ messages
2019-08-29 7:59 Maxime Coquelin
2019-08-29 7:59 ` [dpdk-dev] [PATCH 01/15] vhost: remove vhost kernel header inclusion Maxime Coquelin
2019-09-02 6:03 ` Tiwei Bie
2019-09-03 7:24 ` Maxime Coquelin
2019-08-29 7:59 ` [dpdk-dev] [PATCH 02/15] vhost: configure vDPA as soon as the device is ready Maxime Coquelin
2019-09-02 8:34 ` Ye Xiaolong
2019-09-02 9:02 ` Wang, Xiao W
2019-09-03 7:34 ` Maxime Coquelin
2019-09-03 10:58 ` Wang, Xiao W
2019-08-29 7:59 ` [dpdk-dev] [PATCH 03/15] net/virtio: move control path fonctions in virtqueue file Maxime Coquelin
2019-09-02 6:05 ` Tiwei Bie
2019-08-29 7:59 ` [dpdk-dev] [PATCH 04/15] net/virtio: add virtio PCI subsystem device ID declaration Maxime Coquelin
2019-09-02 6:14 ` Tiwei Bie
2019-09-03 7:25 ` Maxime Coquelin
2019-08-29 7:59 ` [dpdk-dev] [PATCH 05/15] net/virtio: save notify bar ID in virtio HW struct Maxime Coquelin
2019-09-02 6:17 ` Tiwei Bie
2019-08-29 7:59 ` [dpdk-dev] [PATCH 06/15] net/virtio: add skeleton for virtio vDPA driver Maxime Coquelin
2019-09-02 6:27 ` Tiwei Bie
2019-09-03 7:25 ` Maxime Coquelin
2019-08-29 7:59 ` [dpdk-dev] [PATCH 07/15] net/virtio: add vDPA ops to get number of queue Maxime Coquelin
2019-09-02 6:32 ` Tiwei Bie
2019-08-29 7:59 ` [dpdk-dev] [PATCH 08/15] net/virtio: add virtio vDPA op to get features Maxime Coquelin
2019-09-02 6:43 ` Tiwei Bie
2019-09-03 7:27 ` Maxime Coquelin
2019-08-29 7:59 ` [dpdk-dev] [PATCH 09/15] net/virtio: add virtio vDPA op to get protocol features Maxime Coquelin
2019-09-02 6:46 ` Tiwei Bie
2019-08-29 7:59 ` [dpdk-dev] [PATCH 10/15] net/virtio: add vDPA op to configure and start the device Maxime Coquelin
2019-09-03 5:30 ` Tiwei Bie
2019-09-03 7:40 ` Maxime Coquelin
2019-09-03 8:49 ` Tiwei Bie
2019-09-04 4:06 ` Jason Wang
2019-09-04 6:56 ` Maxime Coquelin
2019-09-05 2:57 ` Tiwei Bie
2019-08-29 7:59 ` [dpdk-dev] [PATCH 11/15] net/virtio: add vDPA op to stop and close " Maxime Coquelin
2019-09-02 7:07 ` Tiwei Bie
2019-09-03 7:30 ` Maxime Coquelin
2019-08-29 7:59 ` [dpdk-dev] [PATCH 12/15] net/virtio: add vDPA op to set features Maxime Coquelin
2019-08-29 7:59 ` [dpdk-dev] [PATCH 13/15] net/virtio: add vDPA ops to get VFIO FDs Maxime Coquelin
2019-09-03 4:47 ` Tiwei Bie
2019-08-29 7:59 ` [dpdk-dev] [PATCH 14/15] net/virtio: add vDPA op to get notification area Maxime Coquelin
2019-09-03 5:02 ` Tiwei Bie
2019-09-03 7:36 ` Maxime Coquelin
2019-09-03 8:40 ` Tiwei Bie
2019-08-29 8:00 ` [dpdk-dev] [PATCH 15/15] doc: add documentation for Virtio vDPA driver Maxime Coquelin
2019-09-09 11:55 ` [dpdk-dev] [PATCH 00/15] Introduce " Shahaf Shuler
2019-09-10 7:46 ` Maxime Coquelin
2019-09-10 13:44 ` Shahaf Shuler
2019-09-10 13:56 ` Maxime Coquelin [this message]
2019-09-11 5:15 ` Shahaf Shuler
2019-09-11 7:15 ` Maxime Coquelin
2019-10-24 6:32 ` Maxime Coquelin