From: Ilya Maximets <i.maximets@samsung.com>
To: Flavio Leitner <fbl@sysclose.org>,
Yuanhan Liu <yuanhan.liu@linux.intel.com>
Cc: "dev@dpdk.org" <dev@dpdk.org>,
Sergey Dyasly <s.dyasly@samsung.com>,
Thomas Monjalon <thomas.monjalon@6wind.com>,
"Xie, Huawei" <huawei.xie@intel.com>
Subject: Re: [dpdk-dev] [RFC] vhost-user public struct refactor (was Re: [PATCH RFC 2/4] vhost: make buf vector for scatter RX) local.
Date: Wed, 06 Apr 2016 05:11:01 +0000 (GMT)
Message-ID: <506067238.174981459919460416.JavaMail.weblogic@eumlwas01>
------- Original Message -------
Sender : Flavio Leitner <fbl@sysclose.org>
Date : Apr 06, 2016 07:14 (GMT+03:00)
Title : Re: [RFC] vhost-user public struct refactor (was Re: [dpdk-dev] [PATCH RFC 2/4] vhost: make buf vector for scatter RX) local.
On Tue, Apr 05, 2016 at 01:47:33PM +0800, Yuanhan Liu wrote:
> On Fri, Feb 19, 2016 at 03:06:50PM +0800, Yuanhan Liu wrote:
> > On Fri, Feb 19, 2016 at 09:32:41AM +0300, Ilya Maximets wrote:
> > > The array of buf_vector's is just an array for temporarily storing
> > > information about available descriptors. It is used only locally in
> > > virtio_dev_merge_rx(), so there is no reason for it to be shared.
> > >
> > > Fix that by allocating local buf_vec inside virtio_dev_merge_rx().
> > >
> > > Signed-off-by: Ilya Maximets
> > > ---
> > > lib/librte_vhost/rte_virtio_net.h | 1 -
> > > lib/librte_vhost/vhost_rxtx.c | 45 ++++++++++++++++++++-------------------
> > > 2 files changed, 23 insertions(+), 23 deletions(-)
> > >
> > > diff --git a/lib/librte_vhost/rte_virtio_net.h b/lib/librte_vhost/rte_virtio_net.h
> > > index 10dcb90..ae1e4fb 100644
> > > --- a/lib/librte_vhost/rte_virtio_net.h
> > > +++ b/lib/librte_vhost/rte_virtio_net.h
> > > @@ -91,7 +91,6 @@ struct vhost_virtqueue {
> > > int kickfd; /**< Currently unused as polling mode is enabled. */
> > > int enabled;
> > > uint64_t reserved[16]; /**< Reserve some spaces for future extension. */
> > > - struct buf_vector buf_vec[BUF_VECTOR_MAX]; /**< for scatter RX. */
> > > } __rte_cache_aligned;
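For illustration, a minimal sketch of the other half of that change,
assuming the current shape of virtio_dev_merge_rx() (body abridged; only
the buf_vec move is shown):

/* Sketch: buf_vec becomes a stack array in the only function that
 * uses it, instead of living in the shared vhost_virtqueue. */
static inline uint32_t
virtio_dev_merge_rx(struct virtio_net *dev, uint16_t queue_id,
                    struct rte_mbuf **pkts, uint32_t count)
{
        struct buf_vector buf_vec[BUF_VECTOR_MAX]; /* now local */

        /* ... fill buf_vec from the avail ring, copy mbuf data,
         * update the used ring -- unchanged from the original ... */
        return 0; /* abridged */
}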
> >
> > I like this kind of cleanup, however, it breaks ABI.
>
> So, I have recently been considering adding vhost-user Tx delayed-copy
> (or zero-copy) support, which leads to yet another ABI violation, as we
> need to add a new field to the virtio_memory_regions struct to do guest
> phys addr to host phys addr translation. You may ask, however, why we
> need to expose the virtio_memory_regions struct to users at all.
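For illustration only, the kind of lookup that new field would enable
might look like the sketch below; the host_phys_address field is
hypothetical here, while the guest_phys_address bounds follow the
existing region layout:

/* Hypothetical sketch: guest-physical to host-physical translation.
 * host_phys_address is the proposed (ABI-breaking) new field. */
static uint64_t
gpa_to_hpa(struct virtio_memory_regions *reg, uint64_t gpa)
{
        if (gpa >= reg->guest_phys_address &&
            gpa < reg->guest_phys_address_end)
                return gpa - reg->guest_phys_address +
                       reg->host_phys_address;
        return 0; /* no mapping found */
}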
>
> You are right, we don't have to. And here is the thing: we have exposed
> far more fields (and even structures) than necessary. Say, the
> vhost_virtqueue struct should NOT be exposed to users at all: an
> application just needs to pass the right queue id to locate a specific
> queue, and that's all. The structure should be defined in an internal
> header file. With that, we could make any changes to it we want,
> without worrying that we may offend the painful ABI rules.
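As a sketch of why hiding the struct costs applications nothing: the
public enqueue path already locates the queue by id, with the struct
lookup happening inside the library (internals paraphrased):

/* Callers keep passing a queue id; with vhost_virtqueue moved to an
 * internal header, only the library dereferences it. */
uint16_t
rte_vhost_enqueue_burst(struct virtio_net *dev, uint16_t queue_id,
                        struct rte_mbuf **pkts, uint16_t count)
{
        struct vhost_virtqueue *vq = dev->virtqueue[queue_id];

        /* ... validate vq, place packets on the ring, return the
         * number actually enqueued ... */
        (void)vq;
        return 0; /* abridged */
}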
>
> Similar changes could be made to the virtio_net struct as well,
> exposing only the few fields that are necessary and moving all the
> others to an internal structure.
>
> Huawei then suggested a more radical yet much cleaner option: exposing
> only a virtio_net handle to the application, just like the way the
> kernel exposes an fd to userspace for locating a specific file. However,
> it's more than an ABI change; it's also an API change: some fields, such
> as flags and virt_qp_nb, are referenced by applications. We could expose
> some new accessor functions for them, though.
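A couple of purely illustrative accessor shapes, assuming an fd-like
integer handle (none of these names are settled API):

/* Hypothetical accessors replacing direct field reads; "vid" is an
 * fd-like handle identifying the device. */
uint64_t rte_vhost_get_flags(int vid);     /* instead of dev->flags */
uint32_t rte_vhost_get_queue_num(int vid); /* instead of dev->virt_qp_nb */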
>
> I'd vote for this one, as it sounds very clean to me. It would also
> resolve the issue blocking this patch. Though it would break OVS, I
> think that'd be okay, as OVS depends on a specific DPDK version: all we
> need to do is send a few patches to OVS and have it point to the next
> release, say DPDK v16.07. Flavio, please correct me if I'm wrong.
> There is a plan to use the vhost PMD, so from OVS's point of view the
> virtio stuff would be hidden, because the vhost PMD would look just
> like a regular ethernet device, right?
But we still need access to the virtqueue enable/disable notifications
to work properly. How will this be done if the virtqueue is hidden from
the user?
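One possible shape, sketched under the opaque-handle assumption: the
existing vring_state_changed callback in struct virtio_net_device_ops
already identifies the queue by id rather than by struct pointer, so it
could survive the refactor with only its first argument changed to a
handle (the int-handle form below is an assumption, not current API):

/* Hypothetical handle-based form of the existing callback. The
 * application marks queue 'queue_id' of device 'vid' usable or not;
 * no virtqueue internals are needed. */
static int
vring_state_changed(int vid, uint16_t queue_id, int enable)
{
        /* e.g. OVS would flag the corresponding txq/rxq here */
        return 0;
}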
Best regards, Ilya Maximets.