DPDK patches and discussions
From: Andrew Rybchenko <arybchenko@solarflare.com>
To: Maxime Coquelin <maxime.coquelin@redhat.com>,
	Matan Azrad <matan@mellanox.com>, Tiwei Bie <tiwei.bie@intel.com>,
	Thomas Monjalon <thomas@monjalon.net>
Cc: Ori Kam <orika@mellanox.com>,
	"Liang, Cunming" <cunming.liang@intel.com>,
	 "Wang, Xiao W" <xiao.w.wang@intel.com>,
	"Wang, Zhihong" <zhihong.wang@intel.com>,
	"Yigit, Ferruh" <ferruh.yigit@intel.com>,
	"Shahaf Shuler" <shahafs@mellanox.com>,
	"dev@dpdk.org" <dev@dpdk.org>,
	"Slava Ovsiienko" <viacheslavo@mellanox.com>,
	Asaf Penso <asafp@mellanox.com>,
	"Olga Shern" <olgas@mellanox.com>
Subject: Re: [dpdk-dev] discussion: creating a new class for vdpa drivers
Date: Mon, 16 Dec 2019 12:39:42 +0300	[thread overview]
Message-ID: <7a420216-1e20-3ffc-de09-ec62bba592d3@solarflare.com> (raw)
In-Reply-To: <186093dc-5330-965e-5283-4432b7e996f3@redhat.com>

Hi Maxime,

On 12/16/19 11:50 AM, Maxime Coquelin wrote:
> Hi Andrew,
> 
> On 12/16/19 9:46 AM, Andrew Rybchenko wrote:
>> On 12/16/19 11:29 AM, Matan Azrad wrote:
>>>
>>> Hi all
>>>
>>> I understand that all of you agree with, or at least do not object to,
>>> the new class for vdpa drivers.
>>
>> I have two control questions:
>>
>> 1. If so, is it allowed to have a vDPA driver in
>>    drivers/net/<driver> if that is better from a code-sharing
>>    point of view?
> 
> If it has something to share, I think we should move the common bits
> to the common directory.

Does it mean that it is *not* allowed to have a vdpa driver in
drivers/net/<driver>, and that vDPA drivers *must* live in
drivers/vdpa only?

>> 2. If drivers/common is used, are the exported functions and
>>    data structures shared by drivers/net/<driver> and
>>    drivers/vdpa/<driver> part of the public API/ABI? Hopefully
>>    not, but I'd like to double-check and ensure that it is
>>    handled in the case of a shared-library build.
> 
> Common functions and data should not be part of the API/ABI, I agree.
> I guess we should use relative paths for including the common headers.

Hopefully include_directories() with a relative path in the case
of meson.build.

>>> Based on that, I'm going to start it.
>>>
>>> From: Tiwei Bie
>>>> On Tue, Dec 10, 2019 at 09:00:33AM +0100, Thomas Monjalon wrote:
>>>>> 10/12/2019 03:41, Tiwei Bie:
>>>>>> On Mon, Dec 09, 2019 at 02:22:27PM +0300, Andrew Rybchenko wrote:
>>>>>>> On 12/9/19 1:41 PM, Ori Kam wrote:
>>>>>>>> From: Andrew Rybchenko
>>>>>>>>> On 12/8/19 10:06 AM, Matan Azrad wrote:
>>>>>>>>>> From: Andrew Rybchenko
>>>>>>>>>>> On 12/6/19 8:32 AM, Liang, Cunming wrote:
>>>>>>>>>>>> From: Bie, Tiwei <tiwei.bie@intel.com>
>>>>>>>>>>>>> On Thu, Dec 05, 2019 at 01:26:36PM +0000, Matan Azrad wrote:
>>>>>>>>>>>>>> Hi all
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> As described in the RFC “[RFC] net: new vdpa PMD for Mellanox
>>>>>>>>>>>>>> devices”, a new vdpa driver is going to be added for
>>>>>>>>>>>>>> Mellanox devices – mlx5_vdpa.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> The only vdpa driver now is the IFC driver, which is located
>>>>>>>>>>>>>> in the net directory.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> The IFC driver and the new mlx5_vdpa driver provide the vdpa
>>>>>>>>>>>>>> ops and not the eth_dev ops.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> All the other drivers in net provide the eth_dev ops.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> I suggest creating a new class for vdpa drivers, moving
>>>>>>>>>>>>>> IFC to this class, and adding mlx5_vdpa to it too.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Later, all the new drivers that implement the vdpa ops
>>>>>>>>>>>>>> will be added to the vdpa class.
>>>>>>>>>>>>>
>>>>>>>>>>>>> +1. Sounds like a good idea to me.
>>>>>>>>>>>> +1
>>>>>>>>>>>
>>>>>>>>>>> vDPA drivers are vendor-specific and expected to talk to the
>>>>>>>>>>> vendor's NIC, i.e. there are significant chances to share code
>>>>>>>>>>> with network drivers (e.g. the base driver). Should the base
>>>>>>>>>>> driver be moved to drivers/common in this case, or is it still
>>>>>>>>>>> allowed to have the vdpa driver in drivers/net together with
>>>>>>>>>>> the ethdev driver?
>>>>>>>>>>
>>>>>>>>>> Yes, I think this should be the method: shared code should be
>>>>>>>>>> moved to the drivers/common directory.
>>>>>>>>>> I think there is a precedent with shared code in common which
>>>>>>>>>> shares vendor-specific code between crypto and net.
>>>>>>>>>
>>>>>>>>> I see the motivation behind drivers/vdpa. However, vdpa and net
>>>>>>>>> drivers are tightly related and I would prefer to avoid extra
>>>>>>>>> complexity here. Right now simplicity outweighs it for me.
>>>>>>>>> No strong opinion on the topic, but it definitely requires
>>>>>>>>> better and clearer motivation for why a new class should be
>>>>>>>>> introduced and existing drivers restructured.
>>>>>>>>>
>>>>>>>>
>>>>>>>> Why do you think there is extra complexity?
>>>>>>>
>>>>>>> Even grep becomes a bit more complicated :-)
>>>>>>>
>>>>>>>> I think from a design-correctness point of view it is more correct
>>>>>>>> to create a dedicated class, for the following reasons:
>>>>>>>> 1. All of the classes implement a different set of ops. For
>>>>>>>> example, cryptodev has a defined set of ops, and the same goes for
>>>>>>>> the compress driver and the ethdev driver. Each one of them has
>>>>>>>> different ops. Going by this definition, since vDPA has a different
>>>>>>>> set of ops, it makes sense for it to be in a different class.
>>>>>>>>
>>>>>>>> 2. Even though both drivers (eth/vdpa) handle traffic from the
>>>>>>>> NIC, most of the code is different (the difference also depends on
>>>>>>>> the manufacturer). So the shared code base is not large and can be
>>>>>>>> shared using the common directory. For example, ifc doesn't have
>>>>>>>> any shared code.
>>>>>>>>
>>>>>>>> What do you think?
>>>>>>>
>>>>>>> The real question is whether the difference in the ops implemented
>>>>>>> is the key difference which should dictate location in different
>>>>>>> directories, or whether the underlying device class is the key.
>>>>>>> Roughly:
>>>>>>>  - net driver is control + data path
>>>>>>>  - vdpa driver is control path only
>>>>>>> My fear is that the control path will grow more and more
>>>>>>> (Rx mode, RSS, filters and so on).
>>>>>>
>>>>>> I think this is a reasonable concern.
>>>>>>
>>>>>> One thing that needs to be clarified is that the control path (ops)
>>>>>> in a vdpa driver is very different from the control path in a net
>>>>>> driver. vdpa is very generic (or more precisely vhost-related),
>>>>>> rather than network-related:
>>>>>>
>>>>>> https://github.com/DPDK/dpdk/blob/aef1d0733179/lib/librte_vhost/rte_vdpa.h#L40-L78
>>>>>>
>>>>>> It's built on top of the vhost-user protocol and manipulates
>>>>>> virtqueues, virtio/vhost features, the memory table, ...
>>>>>>
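
For reference, here is an abridged sketch of the rte_vdpa_dev_ops
structure the link above points to (reproduced from memory, so exact
fields may differ; the link is authoritative). Note that all of it is
vhost/virtio plumbing rather than ethdev-style network configuration:

#include <stdint.h>

/* Abridged sketch of struct rte_vdpa_dev_ops from
 * lib/librte_vhost/rte_vdpa.h (from memory; details may differ).
 * 'did' is the vdpa device id, 'vid' is the vhost connection id.
 */
struct rte_vdpa_dev_ops {
	/* device capabilities */
	int (*get_queue_num)(int did, uint32_t *queue_num);
	int (*get_features)(int did, uint64_t *features);
	int (*get_protocol_features)(int did, uint64_t *protocol_features);

	/* driven by vhost-user events on a connection */
	int (*dev_conf)(int vid);
	int (*dev_close)(int vid);
	int (*set_vring_state)(int vid, int vring, int state);
	int (*set_features)(int vid);
	int (*migration_done)(int vid);

	/* datapath passthrough helpers */
	int (*get_vfio_group_fd)(int vid);
	int (*get_vfio_device_fd)(int vid);
	int (*get_notify_area)(int vid, int qid,
			       uint64_t *offset, uint64_t *size);
	/* ... */
};
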
>>>>>> Technically, it's possible to have blk, crypto, ...
>>>>>> vdpa devices as well (we already have vhost-user-blk,
>>>>>> vhost-user-crypto, ...).
>>>>>>
>>>>>> But network-specific features will come eventually, e.g. RSS. One
>>>>>> possible way to solve this is to define a generic event callback in
>>>>>> the vdpa ops, and the vdpa driver can request the corresponding info
>>>>>> from vhost based on the event received.
>>>>>>
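A purely hypothetical sketch of such a generic callback (none of these
names exist in DPDK; they only illustrate the idea of pulling details
from vhost when an event fires, instead of adding one op per feature):

#include <stdint.h>

/* Hypothetical only: a single generic notification instead of a
 * dedicated op per network feature (RSS, Rx mode, filters, ...).
 */
enum rte_vdpa_event_type {
	RTE_VDPA_EVENT_RSS_UPDATE,      /* guest reconfigured RSS */
	RTE_VDPA_EVENT_RX_MODE_UPDATE,  /* promisc/allmulti changed */
	RTE_VDPA_EVENT_FILTER_UPDATE,   /* MAC/VLAN filters changed */
};

struct rte_vdpa_dev_ops_ext {
	/* The driver queries the vhost library for the event-specific
	 * data instead of having it passed in the ops signature.
	 */
	int (*dev_event)(int did, enum rte_vdpa_event_type event);
};
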
>>>>>> Another thing that needs to be clarified is that the control path
>>>>>> supposed to be used directly by DPDK apps in vdpa should always be
>>>>>> generic; it should just be something like:
>>>>>>
>>>>>> int rte_vdpa_find_device_id(struct rte_vdpa_dev_addr *addr);
>>>>>> int rte_vhost_driver_attach_vdpa_device(const char *path, int did);
>>>>>> int rte_vhost_driver_detach_vdpa_device(const char *path);
>>>>>> ...
>>>>>>
>>>>>> That is to say, users just need to bind the vdpa device to the vhost
>>>>>> connection. The control path ops in vdpa are supposed to be called
>>>>>> by the vhost library transparently, based on the events on the
>>>>>> vhost-user connection, i.e. the vdpa device will be configured
>>>>>> (including RSS) by the guest driver in QEMU "directly" via the
>>>>>> vhost-user protocol instead of by the DPDK app in the host.
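
As a minimal sketch of that application-side flow (error handling
omitted; the socket path and PCI address are placeholders, and the
address-type enum value is from memory):

#include <rte_pci.h>
#include <rte_vdpa.h>
#include <rte_vhost.h>

/* Bind an already-probed vdpa device to a vhost-user socket; from here
 * on the vdpa ops are invoked by the vhost library in response to the
 * vhost-user messages coming from QEMU, not by the application.
 */
static int
attach_vdpa(const char *socket_path, const char *pci_bdf)
{
	struct rte_vdpa_dev_addr addr = { .type = PCI_ADDR };
	int did;

	if (rte_pci_addr_parse(pci_bdf, &addr.pci_addr) != 0)
		return -1;

	did = rte_vdpa_find_device_id(&addr);
	if (did < 0)
		return -1;

	if (rte_vhost_driver_register(socket_path, 0) != 0)
		return -1;
	if (rte_vhost_driver_attach_vdpa_device(socket_path, did) != 0)
		return -1;

	return rte_vhost_driver_start(socket_path);
}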
>>>>>
>>>>> Tiwei, just to be clear:
>>>>> you think vDPA drivers should be in the drivers/vdpa directory?
>>>>
>>>> I was just trying to clarify two facts about vDPA to address Andrew's
>>>> concern. And back to the question: to make sure that we don't miss
>>>> anything important (although maybe not very related), it might be
>>>> helpful to also clarify how to support vDPA in OvS at the same time,
>>>> which isn't quite clear to me yet.
>>>>
>>>> Regards,
>>>> Tiwei
>>>>
>>>>>
>>>>>
>>
> 


Thread overview: 8+ messages
2019-12-16  8:29 Matan Azrad
2019-12-16  8:46 ` Andrew Rybchenko
2019-12-16  8:50   ` Maxime Coquelin
2019-12-16  9:39     ` Andrew Rybchenko [this message]
2019-12-16 10:04       ` Maxime Coquelin
2019-12-16 10:19         ` Andrew Rybchenko
2019-12-16 10:26           ` Maxime Coquelin
2019-12-16  8:47 ` Maxime Coquelin
