From: Tiwei Bie <tiwei.bie@intel.com>
To: Andrew Rybchenko <arybchenko@solarflare.com>
Cc: Ori Kam <orika@mellanox.com>, Matan Azrad <matan@mellanox.com>,
"Liang, Cunming" <cunming.liang@intel.com>,
"Wang, Xiao W" <xiao.w.wang@intel.com>,
Thomas Monjalon <thomas@monjalon.net>,
"maxime.coquelin@redhat.com" <maxime.coquelin@redhat.com>,
"Wang, Zhihong" <zhihong.wang@intel.com>,
"Yigit, Ferruh" <ferruh.yigit@intel.com>,
Shahaf Shuler <shahafs@mellanox.com>,
"dev@dpdk.org" <dev@dpdk.org>,
Slava Ovsiienko <viacheslavo@mellanox.com>,
Asaf Penso <asafp@mellanox.com>, Olga Shern <olgas@mellanox.com>
Subject: Re: [dpdk-dev] discussion: creating a new class for vdpa drivers
Date: Tue, 10 Dec 2019 10:41:11 +0800
Message-ID: <20191210024111.GA104968@___>
In-Reply-To: <2bf414d6-f1dc-24c7-a17d-743fc4e8474d@solarflare.com>
On Mon, Dec 09, 2019 at 02:22:27PM +0300, Andrew Rybchenko wrote:
> On 12/9/19 1:41 PM, Ori Kam wrote:
> > Hi
> >
> >> -----Original Message-----
> >> From: dev <dev-bounces@dpdk.org> On Behalf Of Andrew Rybchenko
> >> Sent: Monday, December 9, 2019 9:44 AM
> >> To: Matan Azrad <matan@mellanox.com>; Liang, Cunming
> >> <cunming.liang@intel.com>; Bie, Tiwei <tiwei.bie@intel.com>
> >> Cc: Wang, Xiao W <xiao.w.wang@intel.com>; Thomas Monjalon
> >> <thomas@monjalon.net>; maxime.coquelin@redhat.com; Wang, Zhihong
> >> <zhihong.wang@intel.com>; Yigit, Ferruh <ferruh.yigit@intel.com>; Shahaf
> >> Shuler <shahafs@mellanox.com>; Ori Kam <orika@mellanox.com>;
> >> dev@dpdk.org; Slava Ovsiienko <viacheslavo@mellanox.com>; Asaf Penso
> >> <asafp@mellanox.com>; Olga Shern <olgas@mellanox.com>
> >> Subject: Re: [dpdk-dev] discussion: creating a new class for vdpa
> >> drivers
> >>
> >> On 12/8/19 10:06 AM, Matan Azrad wrote:
> >>> From: Andrew Rybchenko
> >>>> On 12/6/19 8:32 AM, Liang, Cunming wrote:
> >>>>>
> >>>>>
> >>>>>> -----Original Message-----
> >>>>>> From: Bie, Tiwei <tiwei.bie@intel.com>
> >>>>>> Sent: Friday, December 6, 2019 12:28 PM
> >>>>>> To: Matan Azrad <matan@mellanox.com>
> >>>>>> Cc: Wang, Xiao W <xiao.w.wang@intel.com>; Thomas Monjalon
> >>>>>> <thomas@monjalon.net>; maxime.coquelin@redhat.com; Wang,
> >>>> Zhihong
> >>>>>> <zhihong.wang@intel.com>; Yigit, Ferruh <ferruh.yigit@intel.com>;
> >>>>>> Shahaf Shuler <shahafs@mellanox.com>; Ori Kam
> >>>> <orika@mellanox.com>;
> >>>>>> dev@dpdk.org; Slava Ovsiienko <viacheslavo@mellanox.com>; Asaf
> >>>> Penso
> >>>>>> <asafp@mellanox.com>; Olga Shern <olgas@mellanox.com>; Liang,
> >>>> Cunming
> >>>>>> <cunming.liang@intel.com>
> >>>>>> Subject: Re: discussion: creating a new class for vdpa
> >>>>>> drivers
> >>>>>>
> >>>>>> On Thu, Dec 05, 2019 at 01:26:36PM +0000, Matan Azrad wrote:
> >>>>>>> Hi all
> >>>>>>>
> >>>>>>> As described in RFC “[RFC] net: new vdpa PMD for Mellanox devices”,
> >>>>>>> a new vdpa driver is going to be added for Mellanox devices –
> >>>>>>> mlx5_vdpa.
> >>>>>>>
> >>>>>>> The only vdpa driver now is the IFC driver, which is located in
> >>>>>>> the net directory.
> >>>>>>>
> >>>>>>> The IFC driver and the new mlx5_vdpa driver provide the vdpa
> >>>>>>> ops and not the eth_dev ops.
> >>>>>>>
> >>>>>>> All the other drivers in net provide the eth_dev ops.
> >>>>>>>
> >>>>>>> I suggest creating a new class for vdpa drivers, moving IFC to
> >>>>>>> this class, and adding mlx5_vdpa to it as well.
> >>>>>>>
> >>>>>>> Later, all the new drivers that implement the vdpa ops will be
> >>>>>>> added to the vdpa class.
> >>>>>>
> >>>>>> +1. Sounds like a good idea to me.
> >>>>> +1
> >>>>
> >>>> vDPA drivers are vendor-specific and expected to talk to a vendor
> >>>> NIC, i.e. there are significant chances of sharing code with
> >>>> network drivers (e.g. the base driver). Should the base driver be
> >>>> moved to drivers/common in this case, or is it still allowed to
> >>>> have the vdpa driver in drivers/net together with the ethdev
> >>>> driver?
> >>>
> >>> Yes, I think this should be the method: shared code should be moved
> >>> to the drivers/common directory.
> >>> I think there is a precedent with shared code in common, which
> >>> shares vendor-specific code between crypto and net.
> >>
> >> I see the motivation behind drivers/vdpa. However, vdpa and net
> >> drivers are tightly related and I would prefer to avoid extra
> >> complexity here. Right now simplicity wins out for me.
> >> No strong opinion on the topic, but it definitely requires
> >> better and clearer motivation for why a new class should be
> >> introduced and existing drivers restructured.
> >>
> >
> > Why do you think there is extra complexity?
>
> Even grep becomes a bit more complicated :-)
>
> > I think, from a design-correctness standpoint, it is more correct to create a dedicated class, for the following reasons:
> > 1. Each class implements a different set of ops. For example, cryptodev has a defined set of ops, and the same goes for the compress and ethdev drivers. Since vDPA
> > has a different set of ops, by this definition it makes sense for it to be a separate class.
> >
> > 2. Even though both drivers (eth/vdpa) handle traffic from the NIC, most of the code is different (the difference also depends on the manufacturer),
> > so the shared code base is not large and can be shared via the common directory. For example, ifc doesn't have any shared code.
> >
> > What do you think?
>
> The real question is whether the difference in the ops
> implemented is the key difference which should enforce
> location in different directories, or whether the
> underlying device class is the key.
> Roughly:
> - a net driver is control + data path
> - a vdpa driver is control path only
> My fear is that the control path will grow more and more
> (Rx mode, RSS, filters and so on)
I think this is a reasonable concern.
One thing that needs to be clarified is that the control
path (ops) in a vdpa driver is something very different
from the control path in a net driver. vdpa is very
generic (or, more precisely, vhost-related) rather
than network-related:
https://github.com/DPDK/dpdk/blob/aef1d0733179/lib/librte_vhost/rte_vdpa.h#L40-L78
It's built on top of the vhost-user protocol, and
manipulates virtqueues, virtio/vhost features, the
memory table, ...
Technically, it's possible to have blk, crypto, ...
vdpa devices as well (we already have vhost-user-blk,
vhost-user-crypto, ...).
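For reference, a minimal, abridged sketch of those ops
(paraphrased from the rte_vdpa.h linked above; see the
file itself for the authoritative definition). Note that
nothing in it is network-specific:

struct rte_vdpa_dev_ops {
	/* Queried when the vhost-user connection is set up. */
	int (*get_queue_num)(int did, uint32_t *queue_num);
	int (*get_features)(int did, uint64_t *features);
	int (*get_protocol_features)(int did, uint64_t *protocol_features);

	/* Driven by events on the vhost-user connection. */
	int (*dev_conf)(int vid);
	int (*dev_close)(int vid);
	int (*set_vring_state)(int vid, int vring, int state);
	int (*set_features)(int vid);
	/* ... */
};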
But network-specific features will come eventually,
e.g. RSS. One possible way to solve this is to define a
generic event callback in the vdpa ops, so that the vdpa
driver can request the corresponding info from vhost based
on the event received.
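To make the idea concrete, a purely hypothetical sketch
(neither the event type nor the op below exists in DPDK
today; the names are made up for illustration):

/* Hypothetical extension of the vdpa ops, not existing API. */
enum rte_vdpa_event_type {
	RTE_VDPA_EVENT_RSS_UPDATE, /* guest changed RSS config */
	/* ... other network-specific events ... */
};

struct rte_vdpa_dev_ops {
	/* ... existing generic ops ... */

	/* One generic hook instead of one op per feature: the
	 * driver fetches the event details from vhost itself. */
	int (*dev_event)(int vid, enum rte_vdpa_event_type event);
};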
Another thing that needs to be clarified is that the part
of the control path which is supposed to be used by DPDK
apps directly in vdpa should always be generic; it should
just be something like:
int rte_vdpa_find_device_id(struct rte_vdpa_dev_addr *addr);
int rte_vhost_driver_attach_vdpa_device(const char *path, int did);
int rte_vhost_driver_detach_vdpa_device(const char *path);
...
That is to say, users just need to bind the vdpa device
to the vhost connection. The control path ops in vdpa are
supposed to be called by the vhost library transparently,
based on the events on the vhost-user connection; i.e.
the vdpa device will be configured (including RSS) by
the guest driver in QEMU "directly" via the vhost-user
protocol rather than by the DPDK app on the host.
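For example, a minimal sketch of that binding from the
app's point of view (the socket path is arbitrary; EAL
setup is omitted and the vdpa device is assumed to be
probed already):

#include <rte_vdpa.h>
#include <rte_vhost.h>

static int
bind_vdpa_to_vhost(const char *path, struct rte_vdpa_dev_addr *addr)
{
	/* Look up the vdpa device probed for this address. */
	int did = rte_vdpa_find_device_id(addr);
	if (did < 0)
		return -1;

	/* Create the vhost-user socket and attach the vdpa device;
	 * from here on, the vdpa ops are invoked by the vhost
	 * library based on vhost-user events, not by the app. */
	if (rte_vhost_driver_register(path, RTE_VHOST_USER_CLIENT) != 0)
		return -1;
	if (rte_vhost_driver_attach_vdpa_device(path, did) != 0)
		return -1;
	return rte_vhost_driver_start(path);
}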
> and more DPDK-specific
> (not base driver) code could be shared, and it is a bit
> easier to share if the vdpa driver lives near the net
> driver, since both implement the net driver control path,
> but in different terms (ops).
>
> Again, I have no strong opinion; I just want to
> foresee the future at least a bit and make sure
> that the decision is motivated, with all concerns
> discussed and resolved.
>
> net/ifc and net/virtio are existing stakeholders.
>
> >>> Actually, my plan is to share mlx5 vdpa code with mlx5 net code
> >>> via the drivers/common dir (see the RFC).
> >>
> >> I see.
>
Thread overview: 13+ messages
2019-12-05 13:26 Matan Azrad
2019-12-06 4:27 ` Tiwei Bie
2019-12-06 5:32 ` Liang, Cunming
2019-12-06 9:05 ` Andrew Rybchenko
2019-12-06 10:04 ` Maxime Coquelin
2019-12-06 11:22 ` Tiwei Bie
2019-12-08 7:06 ` Matan Azrad
2019-12-09 7:44 ` Andrew Rybchenko
2019-12-09 10:41 ` Ori Kam
2019-12-09 11:22 ` Andrew Rybchenko
2019-12-10 2:41 ` Tiwei Bie [this message]
2019-12-10 8:00 ` Thomas Monjalon
2019-12-10 13:24 ` Tiwei Bie