From: Maxime Coquelin
To: Matan Azrad, David Marchand
Cc: dev, dpdk stable
Date: Tue, 26 Jan 2021 14:00:10 +0100
Subject: Re: [dpdk-dev] [dpdk-stable] [PATCH] vdpa/mlx5: fix configuration mutex cleanup
References: <1609915409-272126-1-git-send-email-matan@nvidia.com>
 <746e905a-c394-44df-2c49-2afd59c05d9f@redhat.com>
 <1052520c-61e9-2135-bbad-9d009f52ce4b@redhat.com>
 <1c1fdabf-2588-2fd7-f5c4-dcb4e029ac35@redhat.com>
 <16c7d5ef-3113-b40b-d398-8d5d19e9fd60@redhat.com>

On 1/26/21 11:45 AM, Matan Azrad wrote:
>
> From: Maxime Coquelin
>>> From: Maxime Coquelin
>>>> On 1/14/21 4:23 PM, Matan Azrad wrote:
>>>>>
>>>>> From: Maxime Coquelin
>>>>>> On 1/14/21 2:09 PM, Matan Azrad wrote:
>>>>>>>
>>>>>>> From: Maxime Coquelin
>>>>>>>> Hi Matan,
>>>>>>>>
>>>>>>>> On 1/14/21 12:49 PM, Matan Azrad wrote:
>>>>>>>>> Hi Maxime and David,
>>>>>>>>>
>>>>>>>>> Thank you for the review.
>>>>>>>>>
>>>>>>>>> From: David Marchand
>>>>>>>>>> On Fri, Jan 8, 2021 at 9:48 AM David Marchand wrote:
>>>>>>>>>>>> I wonder if it would be possible and cleaner to disable
>>>>>>>>>>>> cancellation on the thread while the mutex is held?
>>>>>>>>>
>>>>>>>>> Yes, we can cause the thread to return by syncing on some global
>>>>>>>>> variable. It is the same logic.
>>>>>>>>
>>>>>>>> No, that was not my suggestion. My suggestion is to block the
>>>>>>>> thread cancellation while in the critical section, using
>>>>>>>> pthread_setcancelstate().
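(For illustration, the suggestion would look something like the sketch
below. This is a minimal example only, not the actual mlx5 timer thread
code; the mutex name and loop body are placeholders.)

    #include <pthread.h>

    static pthread_mutex_t timer_lock = PTHREAD_MUTEX_INITIALIZER;

    static void *
    timer_thread(void *arg)
    {
            int old;

            for (;;) {
                    /* Hold any pthread_cancel() request pending while
                     * the mutex is taken, so the thread can never be
                     * cancelled with the lock held. */
                    pthread_setcancelstate(PTHREAD_CANCEL_DISABLE, &old);
                    pthread_mutex_lock(&timer_lock);
                    /* ... critical section: trigger guest notifications ... */
                    pthread_mutex_unlock(&timer_lock);
                    pthread_setcancelstate(old, &old);
                    /* Explicit cancellation point: a cancel requested
                     * during the critical section takes effect here. */
                    pthread_testcancel();
            }
            return arg;
    }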
>>>>>>>
>>>>>>> Yes, generally it is better to let the thread control its own
>>>>>>> cancellation, either by cancelling itself or by enabling/disabling
>>>>>>> cancellation.
>>>>>>>
>>>>>>> I don't see a reason to wait for the thread in the current logic -
>>>>>>> completing the critical section is not important here.
>>>>>>
>>>>>> The reason I see is that quite a few things are done in this
>>>>>> critical section. And if tomorrow someone adds new things to it,
>>>>>> they may not know the thread can be cancelled at any time, which
>>>>>> could cause hard-to-debug issues.
>>>>>
>>>>> As I said, it is not needed here; this thread is designed only to
>>>>> trigger guest notifications.
>>>>>
>>>>> Such a future developer mistake could also be made outside the
>>>>> critical section, in any other place - we cannot protect against it.
>>>>>
>>>>> The design choice is to close the thread fast.
>>>>
>>>> But why is it so urgent that it cannot be stopped cleanly?
>>>> I don't think doing it the clean way would add seconds of delay.
>>>
>>> We have system calls there per queue.
>>> There is no need for this optional delay just for mutex cleanup.
>>
>> OK, up to you...
>>
>> And what about the timer lock?
>
> The existing code initializes it before reusing it...

OK, so why not apply the same logic to both mutexes?

> Thanks.
>
>>>> Thanks,
>>>> Maxime
>>>>
>>>>>>> We just want to close the thread and clean up the mutex.
>>>>>>>
>>>>>>>>>>> +1
>>>>>>>>>>
>>>>>>>>>> IEEE Std 1003.1-2001/Cor 2-2004, item XBD/TC2/D6/26 is applied,
>>>>>>>>>> adding pthread_t to the list of types that are not required to
>>>>>>>>>> be arithmetic types, thus allowing pthread_t to be defined as a
>>>>>>>>>> structure.
>>>>>>>>>>
>>>>>>>>>> It would be better to leave pthread_t alone and not interpret it:
>>>>>>>>>>
>>>>>>>>>>     if (priv->timer_tid) {
>>>>>>>>>>             pthread_cancel(priv->timer_tid);
>>>>>>>>>>             pthread_join(priv->timer_tid, &status);
>>>>>>>>>>     }
>>>>>>>>>>     priv->timer_tid = 0;
>>>>>>>>>
>>>>>>>>> I'm not sure why you think it is better in this specific case.
>>>>>>>>> Cancellation closes the thread faster; there is no need to wait
>>>>>>>>> for the thread to close itself.
>>>>>>>>>
>>>>>>>>>> --
>>>>>>>>>> David Marchand
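(Since pthread_t is not required to be an arithmetic type, one way to
avoid interpreting it at all is to track the thread's validity in a
separate flag and to join before destroying the mutex. A sketch only -
the structure and field names below are placeholders, not the actual
mlx5 vdpa code:)

    #include <pthread.h>
    #include <stdbool.h>

    struct timer_priv {
            pthread_t timer_tid;
            bool timer_tid_valid;   /* Set when the thread is created. */
            pthread_mutex_t timer_lock;
    };

    static void
    timer_release(struct timer_priv *priv)
    {
            void *status;

            if (priv->timer_tid_valid) {
                    pthread_cancel(priv->timer_tid);
                    /* Wait until the cancellation has completed, so the
                     * mutex is guaranteed to be unowned before it is
                     * destroyed. */
                    pthread_join(priv->timer_tid, &status);
                    priv->timer_tid_valid = false;
            }
            pthread_mutex_destroy(&priv->timer_lock);
    }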