From: Yuanhan Liu <yuanhan.liu@linux.intel.com>
To: "Xie, Huawei" <huawei.xie@intel.com>
Cc: "dev@dpdk.org" <dev@dpdk.org>,
"ann.zhuangyanying@huawei.com" <ann.zhuangyanying@huawei.com>
Subject: Re: [dpdk-dev] [PATCH] vhost: Fix wrong handling of virtqueue array index
Date: Tue, 27 Oct 2015 16:57:32 +0800 [thread overview]
Message-ID: <20151027085732.GH3115@yliu-dev.sh.intel.com> (raw)
In-Reply-To: <C37D651A908B024F974696C65296B57B4B145CA6@SHSMSX101.ccr.corp.intel.com>
On Tue, Oct 27, 2015 at 08:46:48AM +0000, Xie, Huawei wrote:
> On 10/27/2015 4:39 PM, Yuanhan Liu wrote:
> > On Tue, Oct 27, 2015 at 08:24:00AM +0000, Xie, Huawei wrote:
> >> On 10/27/2015 3:52 PM, Tetsuya Mukawa wrote:
> >>> This patch fixes the wrong handling of the virtqueue array index when
> >>> the GET_VRING_BASE message arrives. The vhost backend receives the
> >>> message per virtqueue.
> >>> Also, we should call the destroy callback handler only after both the
> >>> RXQ and the TXQ have received the message.
> >>>
> >>> Signed-off-by: Tetsuya Mukawa <mukawa@igel.co.jp>
> >>> ---
> >>> lib/librte_vhost/vhost_user/virtio-net-user.c | 20 ++++++++++----------
> >>> 1 file changed, 10 insertions(+), 10 deletions(-)
> >>>
> >>> diff --git a/lib/librte_vhost/vhost_user/virtio-net-user.c b/lib/librte_vhost/vhost_user/virtio-net-user.c
> >>> index a998ad8..99c075f 100644
> >>> --- a/lib/librte_vhost/vhost_user/virtio-net-user.c
> >>> +++ b/lib/librte_vhost/vhost_user/virtio-net-user.c
> >>> @@ -283,12 +283,10 @@ user_get_vring_base(struct vhost_device_ctx ctx,
> >>> struct vhost_vring_state *state)
> >>> {
> >>> struct virtio_net *dev = get_device(ctx);
> >>> + uint16_t base_idx = state->index / VIRTIO_QNUM * VIRTIO_QNUM;
> >>>
> >>> if (dev == NULL)
> >>> return -1;
> >>> - /* We have to stop the queue (virtio) if it is running. */
> >>> - if (dev->flags & VIRTIO_DEV_RUNNING)
> >>> - notify_ops->destroy_device(dev);
> >> Hi Tetsuya:
> >> I don't understand why we move it to the end of the function.
> >> If we don't tell the application to remove the virtio device from the
> > As you stated, he just moved it to the end of the function: it
> > still does invoke notify_ops->destroy_device() in the end.
> The problem is that, before calling destroy_device, we shouldn't modify
> the virtio_net data structure, as the data plane is also using it.
Right then, maybe we should not move it to the end.
> >
> > And the reason he moved it to the end is that he wants to invoke the
> > callback only when the second GET_VRING_BASE message is received
> > for the queue pair.
> I don't get it. What issue does it fix?
I guess Tetsuya thinks that'd be a more proper time to invoke the
callback, but in fact, it's not, as we have MQ enabled :)
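
To illustrate (just a rough sketch, not part of Tetsuya's patch): with MQ,
the destroy callback would only be safe to fire once every queue pair of
the device has been stopped, along the lines of the helper below. The field
names (virt_qp_nb, virtqueue[], kickfd) are meant to follow the current
vhost structures, but treat this purely as an illustration:

    static int
    all_vrings_stopped(struct virtio_net *dev)
    {
        uint32_t qp;

        /* A vring counts as stopped once its kickfd has been closed. */
        for (qp = 0; qp < dev->virt_qp_nb; qp++) {
            uint32_t base = qp * VIRTIO_QNUM;

            if (dev->virtqueue[base + VIRTIO_RXQ]->kickfd >= 0 ||
                dev->virtqueue[base + VIRTIO_TXQ]->kickfd >= 0)
                return 0;
        }
        return 1;
    }

    /* then, in user_get_vring_base(), instead of checking one pair: */
    if ((dev->flags & VIRTIO_DEV_RUNNING) && all_vrings_stopped(dev))
        notify_ops->destroy_device(dev);

Whether delaying destroy_device that long is acceptable is exactly the
concern Huawei raises above, since the data plane may still be using the
device by then.
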
--yliu
> > And on second thought, it's not necessary: since we do the
> > "flags & VIRTIO_DEV_RUNNING" check first, it doesn't matter on
> > which virtqueue we invoke the callback.
> >
> >
> > --yliu
> >
> >> data plane, then the vhost application is still operating on that
> >> device, so we shouldn't do anything to the virtio_net device.
> >> For this case, as vhost doesn't use the kickfd, it will not cause an
> >> issue, but I think it is best practice to first remove the device from
> >> the data plane through destroy_device.
> >>
> >> I think we could call destroy_device the first time we receive this
> >> message. Currently we don't have per-queue granularity control to
> >> remove only one queue from the data plane.
> >>
> >> I am okay with only closing the kickfd for the specified queue index.
> >>
> >> Btw, did you hit an issue with the previous implementation?
> >>>
> >>> /* Here we are safe to get the last used index */
> >>> ops->get_vring_base(ctx, state->index, state);
> >>> @@ -300,15 +298,17 @@ user_get_vring_base(struct vhost_device_ctx ctx,
> >>> * sent and only sent in vhost_vring_stop.
> >>> * TODO: cleanup the vring, it isn't usable since here.
> >>> */
> >>> - if (dev->virtqueue[state->index + VIRTIO_RXQ]->kickfd >= 0) {
> >>> - close(dev->virtqueue[state->index + VIRTIO_RXQ]->kickfd);
> >>> - dev->virtqueue[state->index + VIRTIO_RXQ]->kickfd = -1;
> >>> - }
> >>> - if (dev->virtqueue[state->index + VIRTIO_TXQ]->kickfd >= 0) {
> >>> - close(dev->virtqueue[state->index + VIRTIO_TXQ]->kickfd);
> >>> - dev->virtqueue[state->index + VIRTIO_TXQ]->kickfd = -1;
> >>> + if (dev->virtqueue[state->index]->kickfd >= 0) {
> >>> + close(dev->virtqueue[state->index]->kickfd);
> >>> + dev->virtqueue[state->index]->kickfd = -1;
> >>> }
> >>>
> >>> + /* We have to stop the queue (virtio) if it is running. */
> >>> + if ((dev->flags & VIRTIO_DEV_RUNNING) &&
> >>> + (dev->virtqueue[base_idx + VIRTIO_RXQ]->kickfd == -1) &&
> >>> + (dev->virtqueue[base_idx + VIRTIO_TXQ]->kickfd == -1))
> >>> + notify_ops->destroy_device(dev);
> >>> +
> >>> return 0;
> >>> }
> >>>
>