From: Maxime Coquelin <maxime.coquelin@redhat.com>
To: Thomas Monjalon <thomas@monjalon.net>,
Claudio Fontana <cfontana@suse.de>
Cc: Chenbo Xia <chenbo.xia@intel.com>, dev@dpdk.org
Subject: Re: [PATCH v3 1/2] vhost: check for nr_vec == 0 in desc_to_mbuf, mbuf_to_desc
Date: Fri, 30 Sep 2022 12:22:41 +0200
Message-ID: <e22ed5c8-4d1d-9106-0e5d-d384c3df86f4@redhat.com>
In-Reply-To: <5662486.1B3tZ46Xf9@thomas>
On 9/28/22 18:03, Thomas Monjalon wrote:
> 28/09/2022 17:21, Claudio Fontana:
>> On 9/28/22 16:37, Maxime Coquelin wrote:
>>> The title should be reworded, maybe something like below?
>>> "vhost: fix possible out of bound access in buffer vectors"
>>
>> Possible, I leave it to you and the other maintainers to figure out.
>
> Maxime is suggesting a rewording for you to use in your next version.
>
>>> On 8/2/22 02:49, Claudio Fontana wrote:
> [...]
>>>> This should fix errors that have been reported on multiple occasions
>>>> by telcos to the DPDK, OVS and QEMU projects, as it particularly
>>>> affects the Open vSwitch/DPDK plus QEMU vhost-user setup when the
>>>> guest DPDK application abruptly goes away via SIGKILL and then
>>>> reconnects.
>
> What are the "multiple occasions"? Is there an entry in bugs.dpdk.org?
>
> [...]
>>> I'm going to try to reproduce the issue, but I'm not sure I will
>>> succeed. Could you please share the Vhost logs when the issue is
>>> reproduced and you face the crash?
>>
>> With vacations and lab work, it's unlikely anything can be done from my side for the next 2-3 weeks.
>
> We can probably wait 3 more weeks.
Yes please, because I have failed to reproduce it so far (Fedora 35 on
the host, Ubuntu 18.04 in the guest).
What I can see is that when the guest testpmd crashes, the host backend
receives VHOST_USER_GET_VRING_BASE requests that stop the vring
processing. On reconnect, the rings start being processed again only
once the backend has received all the configuration requests from QEMU.
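To make that flow explicit, here is a toy model of what I described
above (only the names in the comments match the vhost-user
specification; every identifier is invented for this sketch, it is not
the actual backend code):

#include <stdbool.h>
#include <stdio.h>

/* Toy model of the reconnect flow described above. */
enum toy_request {
	TOY_GET_VRING_BASE,	/* stands in for VHOST_USER_GET_VRING_BASE */
	TOY_SET_VRING_KICK,	/* stands in for VHOST_USER_SET_VRING_KICK */
};

struct toy_vring {
	bool enabled;          /* may the datapath process this ring? */
	bool fully_configured; /* has QEMU replayed all config messages? */
};

static void
handle_request(struct toy_vring *vq, enum toy_request req)
{
	switch (req) {
	case TOY_GET_VRING_BASE:
		/* Guest went away: ring processing stops here. */
		vq->enabled = false;
		break;
	case TOY_SET_VRING_KICK:
		/* On reconnect, the ring only becomes usable again once
		 * the whole configuration has been received. */
		vq->enabled = vq->fully_configured;
		break;
	}
}

int
main(void)
{
	struct toy_vring vq = { .enabled = true, .fully_configured = false };

	handle_request(&vq, TOY_GET_VRING_BASE); /* guest testpmd crash */
	printf("after GET_VRING_BASE: enabled=%d\n", vq.enabled);

	vq.fully_configured = true;              /* QEMU replayed config */
	handle_request(&vq, TOY_SET_VRING_KICK);
	printf("after reconnect: enabled=%d\n", vq.enabled);
	return 0;
}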
Note that I tested with VFIO in the guest because I could not find the
uio_pci_generic module in the Ubuntu 18.04 cloud image.
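For reference, the kind of guard the patch title refers to could look
roughly like the sketch below. The struct is a minimal stand-in for
DPDK's buf_vector and consume_vec() is invented; the real check would
sit at the top of desc_to_mbuf()/mbuf_to_desc() in
lib/vhost/virtio_net.c, whose signatures differ across versions:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Minimal stand-in for DPDK's struct buf_vector, for illustration. */
struct buf_vector {
	uint64_t buf_addr;
	uint32_t buf_len;
};

/* Sketch of the nr_vec == 0 guard: bail out before dereferencing
 * buf_vec[0] when the buffer vector is empty, instead of reading out
 * of bounds. consume_vec() is not a real DPDK function. */
static int
consume_vec(const struct buf_vector *buf_vec, uint16_t nr_vec)
{
	if (nr_vec == 0)
		return -1; /* the added check */
	printf("first vec: addr=0x%" PRIx64 " len=%u\n",
	       buf_vec[0].buf_addr, buf_vec[0].buf_len);
	return 0;
}

int
main(void)
{
	struct buf_vector vec = { .buf_addr = 0x1000, .buf_len = 64 };

	consume_vec(&vec, 1); /* normal case */
	if (consume_vec(&vec, 0) < 0)
		printf("empty vector rejected, no out-of-bounds read\n");
	return 0;
}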
>> This issue has been reported multiple times by multiple telco customers over the past year; it's all over the mailing lists
>> of OVS, DPDK and QEMU, with all the details.
>
> What was the reply on the DPDK mailing list? Any link?
Please provide the links; it may help us understand the root cause if
these mail threads contain logs.
>
>> Something in the governance of these Open Source projects is clearly not working right, probably too much inward focus among a small number of companies, but I digress.
>
> Contributors to DPDK are from multiple companies,
> but I agree we may need more help.
> Thank you for bringing your help with this fix.
>
>> I think Chenbo Xia already knows the context, and I suspect this is now considered a "security issue". The problem is, the information about all of this has already been public for a year.
>
> OK
>
>> I will again repost how to reproduce here:
>
> Thanks, it helps to have all the info in the same place.
>
> [...]
>
>>> It is a fix, so it should contain the Fixes tag, and also Cc
>>> stable@dpdk.org.
>>
>> After one year, it is time for Red Hat and Intel, or whoever the governance of this project is,
>
> The DPDK governance is not owned by any company.
> If you think there is an issue in a decision,
> you can alert the Technical Board at techboard@dpdk.org.
>
>> to mention whether there is any intention to fix this or not,
>> before I or anyone else at SUSE invests any more of our time and effort here.
>
> I don't understand why we would not fix an issue.
> I think the project is quite dynamic in fixing issues;
> I am sorry if you have a different opinion.
>
>> Any tags you need you can add as required.
>
> It would be nice to add the suggested tags in the next version.
> The most important would be to know where the issue comes from.
> If you can identify the original commit introducing the bug,
> you can mark it with:
> Fixes: <commit-sha1> ("<commit-title>")
> This way, maintainers and users know where it should be backported.
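For example (the SHA and title below are made up, purely to illustrate
the format):

    Fixes: 0123456789ab ("vhost: hypothetical offending commit")
    Cc: stable@dpdk.org

Running "git log --oneline -- lib/vhost", or "git blame" on the lines
your patch touches, can help identify the commit that introduced the
bug.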
>
> If you have any questions, feel free to ask; we are here to help.
> Thanks for the effort
>
>
Thread overview: 16+ messages
2022-08-02 0:49 [PATCH v3 0/2] vhost fixes for OVS SIGSEGV in PMD Claudio Fontana
2022-08-02 0:49 ` [PATCH v3 1/2] vhost: check for nr_vec == 0 in desc_to_mbuf, mbuf_to_desc Claudio Fontana
2022-08-02 1:34 ` Stephen Hemminger
2022-09-28 14:37 ` Maxime Coquelin
2022-09-28 15:21 ` Claudio Fontana
2022-09-28 16:03 ` Thomas Monjalon
2022-09-30 10:22 ` Maxime Coquelin [this message]
2022-10-05 15:06 ` Maxime Coquelin
2022-11-02 10:34 ` Claudio Fontana
2022-12-20 12:23 ` Claudio Fontana
2022-08-02 0:49 ` [PATCH v3 2/2] vhost: improve error handling in desc_to_mbuf Claudio Fontana
2022-10-05 12:57 ` Maxime Coquelin
2022-08-02 1:40 ` [PATCH v3 0/2] vhost fixes for OVS SIGSEGV in PMD Stephen Hemminger
2022-08-02 17:20 ` Claudio Fontana
2022-08-04 10:32 ` Claudio Fontana
2022-08-09 12:39 ` Claudio Fontana