From: Claudio Fontana <cfontana@suse.de>
To: Stephen Hemminger <stephen@networkplumber.org>
Cc: Maxime Coquelin <maxime.coquelin@redhat.com>,
Chenbo Xia <chenbo.xia@intel.com>,
dev@dpdk.org
Subject: Re: [PATCH v3 0/2] vhost fixes for OVS SIGSEGV in PMD
Date: Thu, 4 Aug 2022 12:32:38 +0200
Message-ID: <ffafc164-cd8c-6603-b8bf-c75e01e098ef@suse.de>
In-Reply-To: <25754484-a44b-65a1-8504-7d268d47c984@suse.de>
On 8/2/22 19:20, Claudio Fontana wrote:
> On 8/2/22 03:40, Stephen Hemminger wrote:
>> On Tue, 2 Aug 2022 02:49:36 +0200
>> Claudio Fontana <cfontana@suse.de> wrote:
>>
>>> This is an alternative, more general fix compared with PATCH v1,
>>> and fixes style issues in v2.
>>>
>>> The series fixes a segmentation fault in the OVS PMD thread when
>>> resynchronizing with QEMU after the guest application has been killed
>>> with SIGKILL (patch 1/2).
>>>
>>> The segmentation fault can be triggered by the guest DPDK application,
>>> which is thereby able to crash the OVS process on the host;
>>> see the backtrace in patch 1/2.
>>>
>>> Patch 2/2 is an additional improvement in the current error handling.
>>
>> Checking for NULL and 0 is good on the host side.
>> But the guest should probably not be sending such a useless request?
>
>
> Right, I focused on hardening the host side, as that is what the customer required.
>
> This happens specifically when the guest application goes away abruptly with no chance to signal anything (SIGKILL),
> and at restart issues a virtio reset on the device, which in QEMU also triggers two virtio_net set_status calls, each attempting to stop the queues (so they are stopped twice).
>
> At that point DPDK decides it needs to drain the queue, and tries to process MAX_PKT_BURST buffers
> ("about to dequeue 32 buffers"),
>
> then calls fill_vec_buf_split and gets back nothing at all (see the sketch after this message).
>
> I think this should also address the reports in this thread:
>
> https://inbox.dpdk.org/dev/SA1PR08MB713373B0D19329C38C7527BB839A9@SA1PR08MB7133.namprd08.prod.outlook.com/
>
> in addition to my specific customer's request.
>
> Thanks,
>
> Claudio
Is anything more required from my side? Do you need a respin without the "Tested-by" tag?
Thanks,
Claudio
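[Editor's note: for illustration, here is a minimal C sketch of the kind of
guard patch 1/2 describes. It is not the actual DPDK code: the real
desc_to_mbuf()/mbuf_to_desc() in lib/vhost/virtio_net.c take more
parameters, and the struct layout and function name below are trimmed,
hypothetical stand-ins.]

    #include <stdint.h>
    #include <stddef.h>

    /* Trimmed stand-in for DPDK's struct buf_vector. */
    struct buf_vector {
            uint64_t buf_addr; /* translated guest buffer address */
            uint32_t buf_len;  /* length of this segment */
    };

    /*
     * Hypothetical, simplified stand-in for desc_to_mbuf(). The point
     * of patch 1/2 is the early return: after the guest is SIGKILLed
     * and QEMU stops the queues during the virtio reset, the drain
     * path still asks for up to MAX_PKT_BURST (32) buffers,
     * fill_vec_buf_split() can hand back an empty vector, and without
     * the check the code would dereference buf_vec[0] and crash the
     * OVS PMD thread.
     */
    static int
    desc_to_mbuf_sketch(struct buf_vector *buf_vec, uint16_t nr_vec,
                        void *mbuf)
    {
            if (buf_vec == NULL || nr_vec == 0 || mbuf == NULL)
                    return -1; /* nothing to dequeue: error out
                                * instead of faulting */

            uint64_t addr = buf_vec[0].buf_addr; /* safe only after
                                                  * the check above */
            uint32_t len = buf_vec[0].buf_len;
            (void)addr;
            (void)len;
            /* ... copy the descriptor segments into the mbuf ... */
            return 0;
    }

In the actual series the checks live in desc_to_mbuf and mbuf_to_desc
themselves, per the patch 1/2 subject in the thread overview below.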
Thread overview: 16+ messages
2022-08-02 0:49 Claudio Fontana
2022-08-02 0:49 ` [PATCH v3 1/2] vhost: check for nr_vec == 0 in desc_to_mbuf, mbuf_to_desc Claudio Fontana
2022-08-02 1:34 ` Stephen Hemminger
2022-09-28 14:37 ` Maxime Coquelin
2022-09-28 15:21 ` Claudio Fontana
2022-09-28 16:03 ` Thomas Monjalon
2022-09-30 10:22 ` Maxime Coquelin
2022-10-05 15:06 ` Maxime Coquelin
2022-11-02 10:34 ` Claudio Fontana
2022-12-20 12:23 ` Claudio Fontana
2022-08-02 0:49 ` [PATCH v3 2/2] vhost: improve error handling in desc_to_mbuf Claudio Fontana
2022-10-05 12:57 ` Maxime Coquelin
2022-08-02 1:40 ` [PATCH v3 0/2] vhost fixes for OVS SIGSEGV in PMD Stephen Hemminger
2022-08-02 17:20 ` Claudio Fontana
2022-08-04 10:32 ` Claudio Fontana [this message]
2022-08-09 12:39 ` Claudio Fontana