References: <1677782682-27200-1-git-send-email-roretzla@linux.microsoft.com>
 <1678925224-2706-1-git-send-email-roretzla@linux.microsoft.com>
 <1678925224-2706-3-git-send-email-roretzla@linux.microsoft.com>
 <CAJFAV8yk-APRVYO-v953DUc=YKx=54mMCRPsCXUa_y43vs1q9A@mail.gmail.com>
 <20230317144931.GA29683@linuxonhyperv3.guj3yctzbm1etfxqx2vob5hsef.xx.internal.cloudapp.net>
In-Reply-To: <20230317144931.GA29683@linuxonhyperv3.guj3yctzbm1etfxqx2vob5hsef.xx.internal.cloudapp.net>
From: David Marchand <david.marchand@redhat.com>
Date: Fri, 17 Mar 2023 19:51:25 +0100
Message-ID: <CAJFAV8zAfOFpK8JpR_f_VGV43XfMGkRF_EL1o2xOo5FRShDSXA@mail.gmail.com>
Subject: Re: [PATCH v5 2/2] eal: fix failure path race setting new thread
 affinity
To: Tyler Retzlaff <roretzla@linux.microsoft.com>
Cc: dev@dpdk.org, thomas@monjalon.net, stephen@networkplumber.org, 
 stable@dpdk.org, Dodji Seketeli <dodji@redhat.com>

On Fri, Mar 17, 2023 at 3:50 PM Tyler Retzlaff
<roretzla@linux.microsoft.com> wrote:
> > > -struct thread_routine_ctx {
> > > +struct thread_start_context {
> > >         rte_thread_func thread_func;
> > > -       void *routine_args;
> > > +       void *thread_args;
> > > +       const rte_thread_attr_t *thread_attr;
> > > +       pthread_mutex_t wrapper_mutex;
> > > +       pthread_cond_t wrapper_cond;
> > > +       int wrapper_ret;
> > > +       volatile int wrapper_done;
> >
> > One question.
> >
> > I see that wrapper_done is accessed under wrapper_mutex.
> > Is volatile needed?
>
> I'm not entirely certain. I'm being cautious, since I can conceive of the
> load in the loop being optimized into a single load by the compiler. But
> again, I'm not sure; I always like to learn if someone knows better.

After an interesting discussion with Dodji on C99 and side effects
(5.1.2.3/2 and 5.1.2.3/3), I am a bit more convinced that we don't
need this volatile.


>
> >
> > (nit: a boolean is probably enough too)
>
> I have no issue with it being a _Bool; if you want to adjust it for that,
> I certainly don't object. Ordinarily I would use _Bool, but a lot of DPDK
> code seems to prefer int, so that's why I chose it. If we use the macro
> bool, then we should include stdbool.h directly in this translation
> unit.
>
> >
> > I was thinking of squashing below diff:
>
> Yeah, no objection. You can decide whether to keep the volatile or
> not, and add the stdbool.h include.
>
> Thanks for reviewing, appreciate it.

This is a fix, but this v5 had an additional change in affinity setting
(switching to rte_thread_set_affinity()).
To be on the safe side wrt backport, I'll also revert to calling
rte_thread_set_affinity_by_id(), as this is what was being used before.
This also removes the need for patch 1.

Sending a v6 soon, so that it goes through the CI before rc3.


-- 
David Marchand