From: "Xia, Chenbo" <chenbo.xia@intel.com>
To: Matan Azrad <matan@mellanox.com>,
Maxime Coquelin <maxime.coquelin@redhat.com>
Cc: "dev@dpdk.org" <dev@dpdk.org>,
"Wang, Xiao W" <xiao.w.wang@intel.com>,
"stable@dpdk.org" <stable@dpdk.org>,
"Wang, Zhihong" <zhihong.wang@intel.com>
Subject: Re: [dpdk-stable] [dpdk-dev] [PATCH] vhost: fix host notifier configuration error flow
Date: Sun, 14 Jun 2020 13:11:14 +0000 [thread overview]
Message-ID: <MN2PR11MB406372C206DA84FBB2AB83999C9F0@MN2PR11MB4063.namprd11.prod.outlook.com> (raw)
In-Reply-To: <AM0PR0502MB40197A658DA425C33E1CB50DD2810@AM0PR0502MB4019.eurprd05.prod.outlook.com>
Hi Matan,
> -----Original Message-----
> From: Matan Azrad <matan@mellanox.com>
> Sent: Friday, June 12, 2020 7:06 PM
> To: Xia, Chenbo <chenbo.xia@intel.com>; Maxime Coquelin
> <maxime.coquelin@redhat.com>
> Cc: dev@dpdk.org; Wang, Xiao W <xiao.w.wang@intel.com>; Tiwei Bie
> <tiwei.bie@intel.com>; stable@dpdk.org
> Subject: RE: [dpdk-dev] [PATCH] vhost: fix host notifier configuration error flow
>
>
> Hi Xia
> Thanks for the fast review.
> Please see inline.
>
> From: Xia, Chenbo:
> > Hi Matan,
> >
> > > -----Original Message-----
> > > From: dev <dev-bounces@dpdk.org> On Behalf Of Matan Azrad
> > > Sent: Thursday, June 11, 2020 11:48 PM
> > > To: Maxime Coquelin <maxime.coquelin@redhat.com>
> > > Cc: dev@dpdk.org; Wang, Xiao W <xiao.w.wang@intel.com>; Tiwei Bie
> > > <tiwei.bie@intel.com>; stable@dpdk.org
> > > Subject: [dpdk-dev] [PATCH] vhost: fix host notifier configuration
> > > error flow
> > >
> > > A vDPA driver can configure its device FD to be notified directly
> > > by the guest memory mapping, using the `rte_vhost_host_notifier_ctrl`
> > > API (a driver-side sketch follows the quoted patch below).
> > >
> > > The driver request is handled by the DPDK vhost layer and is
> > > forwarded to QEMU; the vhost message includes a reply request in
> > > order to be sure that the memory mapping was done correctly by
> > > QEMU.
> > >
> > > When QEMU finishes the configuration, it marks that its reply is
> > > valid in the slave FD using the VHOST_USER_REPLY_MASK flag.
> > > The flag is set only on success and when the slave FD includes the
> > > reply data.
> > >
> > > The vhost library didn't validate the above flag before accessing
> > > the slave FD, which caused the thread to block forever on the
> > > recvmsg call in case QEMU had problems in the notifier
> > > configuration (see the poll() sketch after the quoted hunk below).
> > >
> > > Handle the VHOST_USER_REPLY_MASK flag to validate that the slave
> > > FD includes reply data.
> > >
> > > Fixes: d90cf7d111ac ("vhost: support host notifier")
> > > Cc: stable@dpdk.org
> > >
> > > Signed-off-by: Matan Azrad <matan@mellanox.com>
> > > ---
> > > lib/librte_vhost/vhost_user.c | 10 ++++++++--
> > > 1 file changed, 8 insertions(+), 2 deletions(-)
> > >
> > > diff --git a/lib/librte_vhost/vhost_user.c b/lib/librte_vhost/vhost_user.c
> > > index 84bebad..aa19d15 100644
> > > --- a/lib/librte_vhost/vhost_user.c
> > > +++ b/lib/librte_vhost/vhost_user.c
> > > @@ -2833,8 +2833,14 @@ static int process_slave_message_reply(struct virtio_net *dev,
> > > 	struct VhostUserMsg msg_reply;
> > > 	int ret;
> > >
> > > -	if ((msg->flags & VHOST_USER_NEED_REPLY) == 0)
> > > -		return 0;
> > > +	if (!(msg->flags & VHOST_USER_REPLY_MASK)) {
> > > +		if (msg->flags & VHOST_USER_NEED_REPLY) {
> > > +			ret = -1;
> > > +			goto out;
> > > +		} else {
> > > +			return 0;
> > > +		}
> > > +	}
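As an illustration of the blocking described in the commit log, here is a
minimal, hypothetical sketch (not part of this patch) of guarding the
slave-fd read with poll(), so that a QEMU which never answers cannot hang
the vhost thread forever:

#include <poll.h>

/* Hypothetical helper: wait up to timeout_ms for reply data on the
 * slave fd before calling recvmsg(). */
static int
wait_for_slave_reply(int slave_fd, int timeout_ms)
{
	struct pollfd pfd = { .fd = slave_fd, .events = POLLIN };
	int ret = poll(&pfd, 1, timeout_ms);

	if (ret == 0)
		return -1;	/* timed out: no reply arrived */
	if (ret < 0 || (pfd.revents & (POLLERR | POLLHUP)))
		return -1;	/* error or peer closed the channel */
	return 0;		/* data ready; recvmsg() will not block */
}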
> >
> > Based on your commit log, I think you want to check the reply msg
> > sent from QEMU, but here msg is the request sent from the vhost-user
> > side.
>
> Yes, looks like I missed here something.
>
> >
> > Also, could you clarify the problem based on that? I see in QEMU
> > v5.0.0 that if the vhost request has VHOST_USER_NEED_REPLY_MASK set,
> > QEMU will set VHOST_USER_REPLY_MASK (and clear
> > VHOST_USER_NEED_REPLY_MASK) in the reply no matter whether the
> > handling succeeded. Am I missing something? Please correct me.
>
> I got stuck in this function while receiving the message back from
> QEMU (4.2.0).
> Looks like QEMU printed "Failed to read from slave. ".
> In this case it looks like QEMU doesn't reply, and therefore the DPDK
> thread was stuck.
> Any idea?
I see in the QEMU code that this error is caused by a recv size
mismatch; could you check the send/recv on the slave channel?
Checking the DPDK code, normally all slave messages should have a size
larger than VHOST_USER_HDR_SIZE. One possibility is that DPDK closed
its socket for some reason, so that recvmsg returned 0. But I am not
sure. It's better to check the send/recv of the slave channel :)
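To illustrate my earlier point about checking the reply rather than the
request, here is a rough sketch (mine, not the actual fix) of how
process_slave_message_reply() could validate the flags of the message
read back from QEMU; the request.slave comparison is an extra sanity
check I added, so treat it as an assumption:

static int
process_slave_message_reply(struct virtio_net *dev,
		const struct VhostUserMsg *msg)
{
	struct VhostUserMsg msg_reply;
	int ret;

	/* No reply was requested, nothing to read back. */
	if ((msg->flags & VHOST_USER_NEED_REPLY) == 0)
		return 0;

	ret = read_vhost_message(dev->slave_req_fd, &msg_reply);
	if (ret <= 0)
		return -1;	/* read failed or the peer closed the socket */

	/* Check the flags of the *reply*: QEMU sets VHOST_USER_REPLY_MASK
	 * in the message it sends back, not in our request. */
	if ((msg_reply.flags & VHOST_USER_REPLY_MASK) == 0 ||
			msg_reply.request.slave != msg->request.slave)
		return -1;

	/* A non-zero u64 payload signals failure on the QEMU side. */
	return msg_reply.payload.u64 ? -1 : 0;
}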
BRs,
Chenbo
>
>
>
>
> > Thanks!
> > Chenbo
> >
> > >
> > > 	ret = read_vhost_message(dev->slave_req_fd, &msg_reply);
> > > 	if (ret <= 0) {
> > > --
> > > 1.8.3.1
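For context, a hedged sketch of the driver-side call mentioned in the
commit log, assuming the DPDK 20.05 signature
rte_vhost_host_notifier_ctrl(int vid, bool enable); the callback name
and error handling here are illustrative, not taken from a real vDPA
driver:

#include <stdbool.h>
#include <rte_vdpa.h>

/* Illustrative vDPA dev_conf callback: ask the vhost library to send
 * VHOST_USER_SLAVE_VRING_HOST_NOTIFIER so that guest doorbell writes
 * land directly on the device FD. */
static int
example_vdpa_dev_conf(int vid)
{
	if (rte_vhost_host_notifier_ctrl(vid, true) != 0) {
		/* Mapping failed; a real driver would typically fall back
		 * to the software notification relay. */
		return -1;
	}
	return 0;
}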
Thread overview: 4+ messages
2020-06-11 15:47 [dpdk-stable] " Matan Azrad
2020-06-12 7:25 ` [dpdk-stable] [dpdk-dev] " Xia, Chenbo
2020-06-12 11:05 ` Matan Azrad
2020-06-14 13:11 ` Xia, Chenbo [this message]