patches for DPDK stable branches
From: Gongming Chen <chengm11@chinatelecom.cn>
To: Maxime Coquelin <maxime.coquelin@redhat.com>
Cc: chenbox@nvidia.com, dev@dpdk.org, stable@dpdk.org
Subject: Re: [PATCH v1] vhost: fix crash caused by accessing a freed vsocket
Date: Tue, 9 Jul 2024 07:50:14 +0800
Message-ID: <7813527B-514C-4B05-B568-7C2D9F49214E@chinatelecom.cn>
In-Reply-To: <1c002c66-08a7-465c-bb56-a22abb7c840e@redhat.com>



> On Jul 2, 2024, at 3:48 PM, Maxime Coquelin <maxime.coquelin@redhat.com> wrote:
> 
> Hi Gongming,
> 
> On 5/10/24 09:28, Gongming Chen wrote:
>> Hi Maxime and Chenbo,
>> Do you have any suggestions for how to address this?
>> Looking forward to hearing from you!
> 
> Could you please try with the latest DPDK main branch,
> and if it reproduces, rebase your series on top of it.
> 
> I don't think it has been fixed, but we've made significant changes to
> fdman in this release, so we need a rebase anyway.
> 
> Thanks in advance,
> Maxime

Hi Maxime,

This bug still exists. I have rebased onto the latest main branch and submitted a v4.
Thank you for your review; looking forward to hearing from you!

Thanks,
Gongming

> 
>> Thanks,
>> Gongming
>>> On Apr 3, 2024, at 11:52 PM, Gongming Chen <chengongming1900@outlook.com> wrote:
>>> 
>>> Hi Maxime,
>>> Thanks for review.
>>> 
>>>> On Apr 3, 2024, at 5:39 PM, Maxime Coquelin <maxime.coquelin@redhat.com> wrote:
>>>> 
>>>> Hi Gongming,
>>>> 
>>>> It's the 9th time the patch has been sent.
>>>> I'm not sure whether there are changes between them or these are just
>>>> re-sends, but that's something to avoid.
>>>> 
>>> 
>>> Sorry, there was something wrong with my mailbox.
>>> I will resend it as v1; the patches are actually identical.
>>> 
>>>> If there are differences, you should use versioning to highlight them.
>>>> If unsure, please check the contribution guidelines first.
>>>> 
>>>> Regarding the patch itself, I don't know if this is avoidable, but I
>>>> would prefer we do not introduce yet another lock in there.
>>>> 
>>>> Thanks,
>>>> Maxime
>>>> 
>>> 
>>> I totally agree with you.
>>> That is why I initially hoped to solve this problem without
>>> introducing a new lock. However, that did not work out:
>>> 
>>> 1. The vsocket is shared between the event and reconnect threads by
>>> passing the vsocket pointer around, so there is no way to protect the
>>> vsocket with a simple per-vsocket lock.
>>> 
>>> 2. The event and reconnect threads can pass vsocket pointers to each
>>> other, so locking the two threads separately cannot guarantee that
>>> the vsocket is no longer accessed.
>>> 
>>> 3. Therefore, with respect to the vsocket resource, the event and
>>> reconnect threads share a single critical section. Only by locking
>>> both threads out at the same time can we guarantee that the vsocket
>>> is no longer accessed and can be freed safely.
>>> 
>>> Currently, the app config, event, and reconnect threads each have a
>>> lock protecting the resources they maintain: vhost_user.mutex,
>>> pfdset->fd_mutex, and reconn_list.mutex, respectively.
>>> 
>>> I think a thread-level lock is missing here to protect the critical
>>> section shared between the threads, similar to RCU-style protection.
>>> 
>>> Once the app config thread acquires the write lock, it is guaranteed
>>> that the event and reconnect threads are outside the critical
>>> section, so it can completely clean up the resources associated with
>>> the vsocket and free it safely.
>>> 
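>>> To make this concrete, here is a minimal sketch of the idea (not the
>>> actual patch: the lock name and the helper functions are
>>> hypothetical, and error handling is omitted):
>>> 
>>>   #include <pthread.h>
>>>   #include <stdlib.h>
>>> 
>>>   struct vhost_user_socket;
>>> 
>>>   /* Hypothetical thread-level rwlock guarding the critical section
>>>    * shared by the event and reconnect threads.
>>>    */
>>>   static pthread_rwlock_t vhost_thread_lock = PTHREAD_RWLOCK_INITIALIZER;
>>> 
>>>   /* Event/reconnect threads hold the read lock while running their
>>>    * critical section, so any vsocket they dereference stays alive.
>>>    */
>>>   static void *
>>>   event_or_reconnect_loop(void *arg)
>>>   {
>>>       (void)arg; /* unused in this sketch */
>>> 
>>>       for (;;) {
>>>           pthread_rwlock_rdlock(&vhost_thread_lock);
>>>           /* ... dispatch fd events / retry reconnections,
>>>            * possibly dereferencing vsocket pointers ...
>>>            */
>>>           pthread_rwlock_unlock(&vhost_thread_lock);
>>>       }
>>>       return NULL;
>>>   }
>>> 
>>>   /* App config thread: taking the write lock guarantees that both
>>>    * threads are outside the critical section, so the vsocket can be
>>>    * cleaned up and freed safely.
>>>    */
>>>   static void
>>>   vsocket_destroy(struct vhost_user_socket *vsocket)
>>>   {
>>>       pthread_rwlock_wrlock(&vhost_thread_lock);
>>>       /* ... remove fds from the pfdset, drop reconnection entries ... */
>>>       free(vsocket);
>>>       pthread_rwlock_unlock(&vhost_thread_lock);
>>>   }
>>> 
>>> In the real code, the read-side sections would wrap the bodies of the
>>> existing fd-event dispatch and reconnect loops, and the write-side
>>> section would sit in the unregister path.
>>> 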
>>> Looking ahead, if more resources like the vsocket are introduced,
>>> this thread-level lock can likewise ensure that they are released
>>> only after a complete cleanup.
>>> 
>>> This way, the threading model becomes clearer, and the complicated
>>> try-lock approach is no longer needed.
>>> 
>>> Thanks,
>>> Gongming
> 
> 



Thread overview: 8+ messages
2024-04-03 16:05 Gongming Chen
2024-05-10  7:28 ` Gongming Chen
2024-07-02  7:48   ` Maxime Coquelin
2024-07-08 23:50     ` Gongming Chen [this message]
2024-07-08  2:17 ` [PATCH v2] " Gongming Chen
2024-07-08  2:23 ` [PATCH v3] " Gongming Chen
2024-07-08  4:41 ` [PATCH v4] " Gongming Chen
2024-07-09  7:26   ` David Marchand
