* Re: [dpdk-dev] discussion: creating a new class for vdpa drivers
@ 2019-12-16 8:29 Matan Azrad
2019-12-16 8:46 ` Andrew Rybchenko
2019-12-16 8:47 ` Maxime Coquelin
0 siblings, 2 replies; 8+ messages in thread
From: Matan Azrad @ 2019-12-16 8:29 UTC (permalink / raw)
To: Tiwei Bie, Thomas Monjalon
Cc: Andrew Rybchenko, Ori Kam, Liang, Cunming, Wang, Xiao W,
maxime.coquelin, Wang, Zhihong, Yigit, Ferruh, Shahaf Shuler,
dev, Slava Ovsiienko, Asaf Penso, Olga Shern
Hi all
I understand that all of you agree with, or at least do not object to,
the new class for vdpa drivers.
Based on that, I'm going to start it.
From: Tiwei Bie
> On Tue, Dec 10, 2019 at 09:00:33AM +0100, Thomas Monjalon wrote:
> > 10/12/2019 03:41, Tiwei Bie:
> > > On Mon, Dec 09, 2019 at 02:22:27PM +0300, Andrew Rybchenko wrote:
> > > > On 12/9/19 1:41 PM, Ori Kam wrote:
> > > > > From: Andrew Rybchenko
> > > > >> On 12/8/19 10:06 AM, Matan Azrad wrote:
> > > > >>> From: Andrew Rybchenko
> > > > >>>> On 12/6/19 8:32 AM, Liang, Cunming wrote:
> > > > >>>>> From: Bie, Tiwei <tiwei.bie@intel.com>
> > > > >>>>>> On Thu, Dec 05, 2019 at 01:26:36PM +0000, Matan Azrad wrote:
> > > > >>>>>>> Hi all
> > > > >>>>>>>
> > > > >>>>>>> As described in the RFC “[RFC] net: new vdpa PMD for Mellanox
> > > > >>>>>>> devices”, a new vdpa driver is going to be added for
> > > > >>>>>>> Mellanox devices – mlx5_vdpa.
> > > > >>>>>>>
> > > > >>>>>>> The only vdpa driver now is the IFC driver, which is located
> > > > >>>>>>> in the net directory.
> > > > >>>>>>>
> > > > >>>>>>> The IFC driver and the new mlx5_vdpa driver provide the
> > > > >>>>>>> vdpa ops and not the eth_dev ops.
> > > > >>>>>>>
> > > > >>>>>>> All the other drivers in net provide the eth_dev ops.
> > > > >>>>>>>
> > > > >>>>>>> I suggest creating a new class for vdpa drivers, moving
> > > > >>>>>>> IFC to this class, and adding mlx5_vdpa to this class too.
> > > > >>>>>>>
> > > > >>>>>>> Later, all the new drivers that implement the vdpa ops
> > > > >>>>>>> will be added to the vdpa class.
> > > > >>>>>>
> > > > >>>>>> +1. Sounds like a good idea to me.
> > > > >>>>> +1
> > > > >>>>
> > > > >>>> vDPA drivers are vendor-specific and expected to talk to a
> > > > >>>> vendor NIC, i.e. there are significant chances to share code
> > > > >>>> with network drivers (e.g. a base driver). Should the base
> > > > >>>> driver be moved to drivers/common in this case, or is it still
> > > > >>>> allowed to have the vdpa driver in drivers/net together with
> > > > >>>> the ethdev driver?
> > > > >>>
> > > > >>> Yes, I think this should be the method: shared code should be
> > > > >>> moved to the drivers/common directory.
> > > > >>> I think there is a precedent for this in common, which already
> > > > >>> shares vendor-specific code between crypto and net.
> > > > >>
> > > > >> I see the motivation behind drivers/vdpa. However, vdpa and net
> > > > >> drivers are tightly related and I would prefer to avoid extra
> > > > >> complexity here. Right now simplicity outweighs it for me.
> > > > >> No strong opinion on the topic, but it definitely requires
> > > > >> better and clearer motivation why a new class should be
> > > > >> introduced and existing drivers restructured.
> > > > >>
> > > > >
> > > > > Why do you think there is extra complexity?
> > > >
> > > > Even grep becomes a bit more complicated :-)
> > > >
> > > > > I think that, in terms of design correctness, it is better to
> > > > > create a dedicated class, for the following reasons:
> > > > > 1. Each of the classes implements a different set of ops. For
> > > > > example, cryptodev has a defined set of ops, and the same goes
> > > > > for the compress driver and the ethdev driver. Each one of them
> > > > > has different ops. By this definition, since vDPA has a different
> > > > > set of ops, it makes sense for it to be in a different class.
> > > > >
> > > > > 2. Even though both kinds of driver (eth/vdpa) handle traffic
> > > > > from the NIC, most of the code is different (the difference also
> > > > > depends on the manufacturer). So the shared code base is not
> > > > > large and can be shared via the common directory. For example,
> > > > > ifc doesn't have any shared code.
> > > > >
> > > > > What do you think?
> > > >
> > > > The real question is whether the difference in implemented ops is
> > > > the key criterion that should enforce location in different
> > > > directories, or whether the underlying device class is the key.
> > > > Roughly:
> > > > - a net driver is control + data path
> > > > - a vdpa driver is control path only
> > > > My fear is that the control path will grow more and more (Rx mode,
> > > > RSS, filters and so on).
> > >
> > > I think this is a reasonable concern.
> > >
> > > One thing that needs to be clarified is that the control path (ops)
> > > in a vdpa driver is very different from the control path in a net
> > > driver. vdpa is very generic (or, more precisely, vhost-related)
> > > rather than network-related:
> > >
> > > https://github.com/DPDK/dpdk/blob/aef1d0733179/lib/librte_vhost/rte_vdpa.h#L40-L78
> > >
> > > It's built on top of the vhost-user protocol and manipulates
> > > virtqueues, virtio/vhost features, the memory table, ...
> > >
> > > Technically, it's possible to have blk, crypto, ...
> > > vdpa devices as well (we already have vhost-user-blk,
> > > vhost-user-crypto, ...).
> > >
> > > But network-specific features will come eventually, e.g. RSS. One
> > > possible way to solve this is to define a generic event callback in
> > > the vdpa ops, so that the vdpa driver can request the corresponding
> > > info from vhost based on the event received.
> > >
> > > Another thing that needs to be clarified is that the control path
> > > used directly by DPDK apps in vdpa should always be generic; it
> > > should just be something like:
> > >
> > > int rte_vdpa_find_device_id(struct rte_vdpa_dev_addr *addr);
> > > int rte_vhost_driver_attach_vdpa_device(const char *path, int did);
> > > int rte_vhost_driver_detach_vdpa_device(const char *path);
> > > ...
> > >
> > > That is to say, users just need to bind the vdpa device to the
> > > vhost connection. The control path ops in vdpa are supposed to be
> > > called by the vhost library transparently, based on the events on
> > > the vhost-user connection, i.e. the vdpa device will be configured
> > > (including RSS) by the guest driver in QEMU "directly" via the
> > > vhost-user protocol rather than by the DPDK app in the host.
> >
> > Tiwei, just to be clear:
> > do you think vDPA drivers should be in the drivers/vdpa directory?
>
> I was just trying to clarify two facts about vDPA to address Andrew's
> concern. And back to the question: to make sure we don't miss anything
> important (although it is maybe not very related), it might be helpful
> to also clarify how to support vDPA in OvS at the same time, which
> isn't quite clear to me yet.
>
> Regards,
> Tiwei
>
> >
> >
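For context, here is a minimal sketch (not from the thread itself) of
how a host application could drive the generic control path Tiwei
describes: look up the vdpa device id of a probed device, then bind it
to a vhost-user socket. The socket path and PCI address are
placeholder inputs, the calls are the rte_vdpa/rte_vhost ones listed
above, and error handling is abbreviated:

#include <rte_pci.h>
#include <rte_vdpa.h>
#include <rte_vhost.h>

/* Bind the vdpa device at the given PCI address to a vhost-user
 * socket. Once the socket is started, the driver's vdpa ops are
 * invoked by the vhost library as vhost-user messages arrive from
 * the guest; the application never calls the ops directly. */
static int
attach_vdpa_to_vhost(const char *sock_path, const struct rte_pci_addr *pci)
{
        struct rte_vdpa_dev_addr addr = {
                .type = PCI_ADDR,       /* address type from rte_vdpa.h */
                .pci_addr = *pci,
        };
        int did = rte_vdpa_find_device_id(&addr);

        if (did < 0)
                return did;     /* no vdpa driver probed this device */
        if (rte_vhost_driver_register(sock_path, 0) < 0)
                return -1;
        if (rte_vhost_driver_attach_vdpa_device(sock_path, did) < 0)
                return -1;
        return rte_vhost_driver_start(sock_path);
}

Nothing in this flow is vendor-specific; the vendor code sits entirely
behind the vdpa ops, which is the set of ops the proposed class would
group.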
^ permalink raw reply [flat|nested] 8+ messages in thread
* Re: [dpdk-dev] discussion: creating a new class for vdpa drivers
2019-12-16 8:29 [dpdk-dev] discussion: creating a new class for vdpa drivers Matan Azrad
@ 2019-12-16 8:46 ` Andrew Rybchenko
2019-12-16 8:50 ` Maxime Coquelin
2019-12-16 8:47 ` Maxime Coquelin
1 sibling, 1 reply; 8+ messages in thread
From: Andrew Rybchenko @ 2019-12-16 8:46 UTC (permalink / raw)
To: Matan Azrad, Tiwei Bie, Thomas Monjalon
Cc: Ori Kam, Liang, Cunming, Wang, Xiao W, maxime.coquelin, Wang,
Zhihong, Yigit, Ferruh, Shahaf Shuler, dev, Slava Ovsiienko,
Asaf Penso, Olga Shern
On 12/16/19 11:29 AM, Matan Azrad wrote:
>
> Hi all
>
> I understand that all of you agree with, or at least do not object
> to, the new class for vdpa drivers.
I have two control questions:
1. If so, is it allowed to have a vDPA driver in
drivers/net/<driver> if that is better from a code-sharing
point of view?
2. If drivers/common is used, are the exported functions and data
structures which are used by drivers/net/<driver> and
drivers/vdpa/<driver> part of the public API/ABI? Hopefully not,
but I'd like to double-check and ensure that this is solved in
the case of shared-library builds.
> Based on that, I'm going to start it.
>
> From: Tiwei Bie
> [...]
^ permalink raw reply [flat|nested] 8+ messages in thread
* Re: [dpdk-dev] discussion: creating a new class for vdpa drivers
2019-12-16 8:46 ` Andrew Rybchenko
@ 2019-12-16 8:50 ` Maxime Coquelin
2019-12-16 9:39 ` Andrew Rybchenko
0 siblings, 1 reply; 8+ messages in thread
From: Maxime Coquelin @ 2019-12-16 8:50 UTC (permalink / raw)
To: Andrew Rybchenko, Matan Azrad, Tiwei Bie, Thomas Monjalon
Cc: Ori Kam, Liang, Cunming, Wang, Xiao W, Wang, Zhihong, Yigit,
Ferruh, Shahaf Shuler, dev, Slava Ovsiienko, Asaf Penso,
Olga Shern
Hi Andrew,
On 12/16/19 9:46 AM, Andrew Rybchenko wrote:
> On 12/16/19 11:29 AM, Matan Azrad wrote:
>>
>> Hi all
>>
>> I understand that all of you agree with, or at least do not object
>> to, the new class for vdpa drivers.
>
> I have two control questions:
>
> 1. If so, is it allowed to have a vDPA driver in
> drivers/net/<driver> if that is better from a code-sharing
> point of view?
If it has something to share, I think we should move the common bits
to the common directory.
> 2. If drivers/common is used, are the exported functions and data
> structures which are used by drivers/net/<driver> and
> drivers/vdpa/<driver> part of the public API/ABI? Hopefully not,
> but I'd like to double-check and ensure that this is solved in
> the case of shared-library builds.
I agree that common functions and data should not be part of the API/ABI.
I guess we should use relative paths for including the common headers.
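As a purely hypothetical illustration of the above (the "foo" vendor
and paths are invented for this sketch, not an existing driver), both
driver classes would include the shared base code from drivers/common
by relative path, so the header is never installed and its
declarations never become part of the public API/ABI:

/* drivers/net/foo/foo_ethdev.c -- hypothetical vendor "foo" */
#include "../../common/foo/foo_base.h"  /* shared base-driver code */

/* drivers/vdpa/foo/foo_vdpa.c -- the same vendor's vDPA driver */
#include "../../common/foo/foo_base.h"  /* same header: internal only,
                                         * never installed for apps */

Note that in a shared-library build the common code still has to
export its symbols from its own library so that both drivers can link
against it, which is exactly the case Andrew asks about in question 2.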
>> Based on that, I'm going to start it.
>>
>> From: Tiwei Bie
>> [...]
^ permalink raw reply [flat|nested] 8+ messages in thread
* Re: [dpdk-dev] discussion: creating a new class for vdpa drivers
2019-12-16 8:50 ` Maxime Coquelin
@ 2019-12-16 9:39 ` Andrew Rybchenko
2019-12-16 10:04 ` Maxime Coquelin
0 siblings, 1 reply; 8+ messages in thread
From: Andrew Rybchenko @ 2019-12-16 9:39 UTC (permalink / raw)
To: Maxime Coquelin, Matan Azrad, Tiwei Bie, Thomas Monjalon
Cc: Ori Kam, Liang, Cunming, Wang, Xiao W, Wang, Zhihong, Yigit,
Ferruh, Shahaf Shuler, dev, Slava Ovsiienko, Asaf Penso,
Olga Shern
Hi Maxime,
On 12/16/19 11:50 AM, Maxime Coquelin wrote:
> Hi Andrew,
>
> On 12/16/19 9:46 AM, Andrew Rybchenko wrote:
>> On 12/16/19 11:29 AM, Matan Azrad wrote:
>>>
>>> Hi all
>>>
>>> I understand that all of you agree with, or at least do not object
>>> to, the new class for vdpa drivers.
>>
>> I have two control questions:
>>
>> 1. If so, is it allowed to have a vDPA driver in
>> drivers/net/<driver> if that is better from a code-sharing
>> point of view?
>
> If it has something to share, I think we should move the common bits
> to the common directory.
Does it mean that it is *not* allowed to have a vdpa driver in
drivers/net/<driver>, and that vDPA drivers *must* live in
drivers/vdpa only?
>> 2. If drivers/common is used, are the exported functions and data
>> structures which are used by drivers/net/<driver> and
>> drivers/vdpa/<driver> part of the public API/ABI? Hopefully not,
>> but I'd like to double-check and ensure that this is solved in
>> the case of shared-library builds.
>
> I agree that common functions and data should not be part of the API/ABI.
> I guess we should use relative paths for including the common headers.
Hopefully via include_directories() with a relative path in the
case of meson.build.
>>> Based on that, I'm going to start it.
>>>
>>> From: Tiwei Bie
>>> [...]
^ permalink raw reply [flat|nested] 8+ messages in thread
* Re: [dpdk-dev] discussion: creating a new class for vdpa drivers
2019-12-16 9:39 ` Andrew Rybchenko
@ 2019-12-16 10:04 ` Maxime Coquelin
2019-12-16 10:19 ` Andrew Rybchenko
0 siblings, 1 reply; 8+ messages in thread
From: Maxime Coquelin @ 2019-12-16 10:04 UTC (permalink / raw)
To: Andrew Rybchenko, Matan Azrad, Tiwei Bie, Thomas Monjalon
Cc: Ori Kam, Liang, Cunming, Wang, Xiao W, Wang, Zhihong, Yigit,
Ferruh, Shahaf Shuler, dev, Slava Ovsiienko, Asaf Penso,
Olga Shern
On 12/16/19 10:39 AM, Andrew Rybchenko wrote:
> Hi Maxime,
>
> On 12/16/19 11:50 AM, Maxime Coquelin wrote:
>> Hi Andrew,
>>
>> On 12/16/19 9:46 AM, Andrew Rybchenko wrote:
>>> On 12/16/19 11:29 AM, Matan Azrad wrote:
>>>>
>>>> Hi all
>>>>
>>>> I understand that all of you agree with, or at least do not object
>>>> to, the new class for vdpa drivers.
>>>
>>> I have two control questions:
>>>
>>> 1. If so, is it allowed to have a vDPA driver in
>>> drivers/net/<driver> if that is better from a code-sharing
>>> point of view?
>>
>> If it has something to share, I think we should move the common bits
>> to the common directory.
>
> Does it mean that it is *not* allowed to have a vdpa driver in
> drivers/net/<driver>, and that vDPA drivers *must* live in
> drivers/vdpa only?
I would say yes, for consistency.
But that's just my point of view.
Do you have an argument in favor of not enforcing it?
Thanks,
Maxime
>>> 2. If drivers/common is used, are the exported functions and data
>>> structures which are used by drivers/net/<driver> and
>>> drivers/vdpa/<driver> part of the public API/ABI? Hopefully not,
>>> but I'd like to double-check and ensure that this is solved in
>>> the case of shared-library builds.
>>
>> I agree that common functions and data should not be part of the API/ABI.
>> I guess we should use relative paths for including the common headers.
>
> Hopefully via include_directories() with a relative path in the
> case of meson.build.
>
^ permalink raw reply [flat|nested] 8+ messages in thread
* Re: [dpdk-dev] discussion: creating a new class for vdpa drivers
2019-12-16 10:04 ` Maxime Coquelin
@ 2019-12-16 10:19 ` Andrew Rybchenko
2019-12-16 10:26 ` Maxime Coquelin
0 siblings, 1 reply; 8+ messages in thread
From: Andrew Rybchenko @ 2019-12-16 10:19 UTC (permalink / raw)
To: Maxime Coquelin, Matan Azrad, Tiwei Bie, Thomas Monjalon
Cc: Ori Kam, Liang, Cunming, Wang, Xiao W, Wang, Zhihong, Yigit,
Ferruh, Shahaf Shuler, dev, Slava Ovsiienko, Asaf Penso,
Olga Shern
On 12/16/19 1:04 PM, Maxime Coquelin wrote:
>
>
> On 12/16/19 10:39 AM, Andrew Rybchenko wrote:
>> Hi Maxime,
>>
>> On 12/16/19 11:50 AM, Maxime Coquelin wrote:
>>> Hi Andrew,
>>>
>>> On 12/16/19 9:46 AM, Andrew Rybchenko wrote:
>>>> On 12/16/19 11:29 AM, Matan Azrad wrote:
>>>>>
>>>>> Hi all
>>>>>
>>>>> I understand that all of you agree with, or at least do not object
>>>>> to, the new class for vdpa drivers.
>>>>
>>>> I have two control questions:
>>>>
>>>> 1. If so, is it allowed to have a vDPA driver in
>>>> drivers/net/<driver> if that is better from a code-sharing
>>>> point of view?
>>>
>>> If it has something to share, I think we should move the common bits
>>> to the common directory.
>>
>> Does it mean that it is *not* allowed to have a vdpa driver in
>> drivers/net/<driver>, and that vDPA drivers *must* live in
>> drivers/vdpa only?
>
> I would say yes, for consistency.
OK, it makes sense. Consistency is good.
> But that's just my point of view.
> Do you have an argument in favor of not enforcing it?
I simply expect (a storm of) patches that factor out or move code,
etc. No strong opinion right now; I am just clarifying the suggested
policy.
Thanks,
Andrew.
> Thanks,
> Maxime
>
>>>> 2. If drivers/common is used, are the exported functions and data
>>>> structures which are used by drivers/net/<driver> and
>>>> drivers/vdpa/<driver> part of the public API/ABI? Hopefully not,
>>>> but I'd like to double-check and ensure that this is solved in
>>>> the case of shared-library builds.
>>>
>>> I agree that common functions and data should not be part of the API/ABI.
>>> I guess we should use relative paths for including the common headers.
>>
>> Hopefully via include_directories() with a relative path in the
>> case of meson.build.
>>
>
^ permalink raw reply [flat|nested] 8+ messages in thread
* Re: [dpdk-dev] discussion: creating a new class for vdpa drivers
2019-12-16 10:19 ` Andrew Rybchenko
@ 2019-12-16 10:26 ` Maxime Coquelin
0 siblings, 0 replies; 8+ messages in thread
From: Maxime Coquelin @ 2019-12-16 10:26 UTC (permalink / raw)
To: Andrew Rybchenko, Matan Azrad, Tiwei Bie, Thomas Monjalon
Cc: Ori Kam, Liang, Cunming, Wang, Xiao W, Wang, Zhihong, Yigit,
Ferruh, Shahaf Shuler, dev, Slava Ovsiienko, Asaf Penso,
Olga Shern
On 12/16/19 11:19 AM, Andrew Rybchenko wrote:
> On 12/16/19 1:04 PM, Maxime Coquelin wrote:
>>
>>
>> On 12/16/19 10:39 AM, Andrew Rybchenko wrote:
>>> Hi Maxime,
>>>
>>> On 12/16/19 11:50 AM, Maxime Coquelin wrote:
>>>> Hi Andrew,
>>>>
>>>> On 12/16/19 9:46 AM, Andrew Rybchenko wrote:
>>>>> On 12/16/19 11:29 AM, Matan Azrad wrote:
>>>>>>
>>>>>> Hi all
>>>>>>
>>>>>> I understand that all of you agree with, or at least do not object
>>>>>> to, the new class for vdpa drivers.
>>>>>
>>>>> I have two control questions:
>>>>>
>>>>> 1. If so, is it allowed to have a vDPA driver in
>>>>> drivers/net/<driver> if that is better from a code-sharing
>>>>> point of view?
>>>>
>>>> If it has something to share, I think we should move the common bits
>>>> to the common directory.
>>>
>>> Does it mean that it is *not* allowed to have a vdpa driver in
>>> drivers/net/<driver>, and that vDPA drivers *must* live in
>>> drivers/vdpa only?
>>
>> I would say yes, for consistency.
>
> OK, it makes sense. Consistency is good.
>
>> But that's just my point of view.
>> Do you have an argument in favor of not enforcing it?
>
> I simply expect (a storm of) patches that factor out or move code,
> etc. No strong opinion right now; I am just clarifying the suggested
> policy.
You're right; maybe we could request that the technical board agree on
that. The next tech board meeting is this Wednesday.
I'm sending the request to the techboard ML.
Thanks,
Maxime
> Thanks,
> Andrew.
>
>> Thanks,
>> Maxime
>>
>>>>> 2. If drivers/common is used, are the exported functions and data
>>>>> structures which are used by drivers/net/<driver> and
>>>>> drivers/vdpa/<driver> part of the public API/ABI? Hopefully not,
>>>>> but I'd like to double-check and ensure that this is solved in
>>>>> the case of shared-library builds.
>>>>
>>>> I agree that common functions and data should not be part of the API/ABI.
>>>> I guess we should use relative paths for including the common headers.
>>>
>>> Hopefully via include_directories() with a relative path in the
>>> case of meson.build.
>>>
>>
>
^ permalink raw reply [flat|nested] 8+ messages in thread
* Re: [dpdk-dev] discussion: creating a new class for vdpa drivers
2019-12-16 8:29 [dpdk-dev] discussion: creating a new class for vdpa drivers Matan Azrad
2019-12-16 8:46 ` Andrew Rybchenko
@ 2019-12-16 8:47 ` Maxime Coquelin
1 sibling, 0 replies; 8+ messages in thread
From: Maxime Coquelin @ 2019-12-16 8:47 UTC (permalink / raw)
To: Matan Azrad, Tiwei Bie, Thomas Monjalon
Cc: Andrew Rybchenko, Ori Kam, Liang, Cunming, Wang, Xiao W, Wang,
Zhihong, Yigit, Ferruh, Shahaf Shuler, dev, Slava Ovsiienko,
Asaf Penso, Olga Shern
Hi,
On 12/16/19 9:29 AM, Matan Azrad wrote:
>
> Hi all
>
> I understand that all of you agree with, or at least do not object
> to, the new class for vdpa drivers.
>
> Based on that, I'm going to start it.
Please go ahead, I'll rebase my virtio vDPA series on top of it.
Thanks,
Maxime
> From: Tiwei Bie
> [...]
^ permalink raw reply [flat|nested] 8+ messages in thread
Thread overview: 8+ messages
2019-12-16 8:29 [dpdk-dev] discussion: creating a new class for vdpa drivers Matan Azrad
2019-12-16 8:46 ` Andrew Rybchenko
2019-12-16 8:50 ` Maxime Coquelin
2019-12-16 9:39 ` Andrew Rybchenko
2019-12-16 10:04 ` Maxime Coquelin
2019-12-16 10:19 ` Andrew Rybchenko
2019-12-16 10:26 ` Maxime Coquelin
2019-12-16 8:47 ` Maxime Coquelin