From: Tyler Retzlaff <roretzla@linux.microsoft.com>
To: David Marchand <david.marchand@redhat.com>
Cc: dev@dpdk.org, thomas@monjalon.net, stephen@networkplumber.org,
stable@dpdk.org
Subject: Re: [PATCH v5 2/2] eal: fix failure path race setting new thread affinity
Date: Fri, 17 Mar 2023 07:49:31 -0700
Message-ID: <20230317144931.GA29683@linuxonhyperv3.guj3yctzbm1etfxqx2vob5hsef.xx.internal.cloudapp.net>
In-Reply-To: <CAJFAV8yk-APRVYO-v953DUc=YKx=54mMCRPsCXUa_y43vs1q9A@mail.gmail.com>
On Fri, Mar 17, 2023 at 11:45:08AM +0100, David Marchand wrote:
> On Thu, Mar 16, 2023 at 1:07 AM Tyler Retzlaff
> <roretzla@linux.microsoft.com> wrote:
> >
> > In rte_thread_create setting affinity after pthread_create may fail.
> > Such a failure should result in the entire rte_thread_create failing
> > but doesn't.
> >
> > Additionally, if there is a failure to set affinity, a race exists where
> > the creating thread will free ctx and, depending on the scheduling of the
> > new thread, it may also free ctx (double free).
> >
> > Resolve the above by setting the affinity from the newly created thread,
> > using a condition variable to signal that the thread start wrapper has
> > completed.
> >
> > Since we are now waiting for the thread start wrapper to complete, we can
> > allocate the thread start wrapper context on the stack. While here, clean
> > up the variable naming in the context to better highlight which fields of
> > the context require synchronization between the creating and created
> > threads.
> >
> > Fixes: ce6e911d20f6 ("eal: add thread lifetime API")
> > Cc: stable@dpdk.org
> >
> > Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
> > ---
> > lib/eal/unix/rte_thread.c | 70 +++++++++++++++++++++++++++++------------------
> > 1 file changed, 43 insertions(+), 27 deletions(-)
> >
> > diff --git a/lib/eal/unix/rte_thread.c b/lib/eal/unix/rte_thread.c
> > index 37ebfcf..5992b04 100644
> > --- a/lib/eal/unix/rte_thread.c
> > +++ b/lib/eal/unix/rte_thread.c
> > @@ -16,9 +16,14 @@ struct eal_tls_key {
> > pthread_key_t thread_index;
> > };
> >
> > -struct thread_routine_ctx {
> > +struct thread_start_context {
> > rte_thread_func thread_func;
> > - void *routine_args;
> > + void *thread_args;
> > + const rte_thread_attr_t *thread_attr;
> > + pthread_mutex_t wrapper_mutex;
> > + pthread_cond_t wrapper_cond;
> > + int wrapper_ret;
> > + volatile int wrapper_done;
>
> One question.
>
> I see that wrapper_done is accessed under wrapper_mutex.
> Is volatile needed?
I'm not entirely certain. I'm being cautious since I can conceive of the
load in the loop being optimized into a single load by the compiler. But
again, I'm not sure; I'm always happy to learn if someone knows better.
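
For reference, here is a minimal sketch of the handshake pattern as I
understand it (names are illustrative, not the exact rte_thread.c code). In
the textbook condvar idiom the flag is only read and written while holding
the mutex, so the pthread_mutex_lock()/unlock() pair is what provides the
ordering and visibility, which I believe is why the volatile shouldn't be
strictly needed, though I'd still welcome a second opinion:

  #include <pthread.h>
  #include <stdbool.h>

  struct start_ctx {
          pthread_mutex_t mutex;
          pthread_cond_t cond;
          bool done;
          int ret;
  };

  static void *
  start_wrapper(void *arg)
  {
          struct start_ctx *ctx = arg;
          int ret = 0; /* e.g. the result of applying the affinity here */

          pthread_mutex_lock(&ctx->mutex);
          ctx->ret = ret;
          ctx->done = true;                 /* written only under the mutex */
          pthread_cond_signal(&ctx->cond);
          pthread_mutex_unlock(&ctx->mutex);
          /* creator may return now; ctx is on its stack, don't touch it */
          return NULL;
  }

  static int
  create_and_wait(pthread_t *t)
  {
          struct start_ctx ctx = {
                  .mutex = PTHREAD_MUTEX_INITIALIZER,
                  .cond = PTHREAD_COND_INITIALIZER,
                  .done = false,
          };
          int ret = pthread_create(t, NULL, start_wrapper, &ctx);

          if (ret != 0)
                  return ret;

          pthread_mutex_lock(&ctx.mutex);
          while (!ctx.done)                 /* read only under the mutex */
                  pthread_cond_wait(&ctx.cond, &ctx.mutex);
          ret = ctx.ret;
          pthread_mutex_unlock(&ctx.mutex);
          return ret;
  }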
>
> (nit: a boolean is probably enough too)
I have no issue with it being a _Bool; if you want to adjust it for that I
certainly don't object. Ordinarily I would use _Bool, but a lot of DPDK code
seems to prefer int, so that's why I chose it. If we use the bool macro then
we should include stdbool.h directly in this translation unit.
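
If we go that way, I'd expect the result to look roughly like this
(illustrative only; just the struct from this patch with the include added):

  #include <stdbool.h>  /* needed once wrapper_done becomes a bool */

  struct thread_start_context {
          rte_thread_func thread_func;
          void *thread_args;
          const rte_thread_attr_t *thread_attr;
          pthread_mutex_t wrapper_mutex;
          pthread_cond_t wrapper_cond;
          int wrapper_ret;
          bool wrapper_done;
  };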
>
> I was thinking of squashing below diff:
Yeah, no objection. You can decide whether to keep the volatile or not and
add the stdbool.h include.
Thanks for reviewing, appreciate it.
>
> diff --git a/lib/eal/unix/rte_thread.c b/lib/eal/unix/rte_thread.c
> index 5992b04a45..5ab5267ca3 100644
> --- a/lib/eal/unix/rte_thread.c
> +++ b/lib/eal/unix/rte_thread.c
> @@ -23,7 +23,7 @@ struct thread_start_context {
> pthread_mutex_t wrapper_mutex;
> pthread_cond_t wrapper_cond;
> int wrapper_ret;
> - volatile int wrapper_done;
> + bool wrapper_done;
> };
>
> static int
> @@ -101,7 +101,7 @@ thread_start_wrapper(void *arg)
>
> pthread_mutex_lock(&ctx->wrapper_mutex);
> ctx->wrapper_ret = ret;
> - ctx->wrapper_done = 1;
> + ctx->wrapper_done = true;
> pthread_cond_signal(&ctx->wrapper_cond);
> pthread_mutex_unlock(&ctx->wrapper_mutex);
>
> @@ -127,6 +127,7 @@ rte_thread_create(rte_thread_t *thread_id,
> .thread_func = thread_func,
> .thread_args = args,
> .thread_attr = thread_attr,
> + .wrapper_done = false,
> .wrapper_mutex = PTHREAD_MUTEX_INITIALIZER,
> .wrapper_cond = PTHREAD_COND_INITIALIZER,
> };
> @@ -151,7 +152,6 @@ rte_thread_create(rte_thread_t *thread_id,
> goto cleanup;
> }
>
> -
> if (thread_attr->priority ==
> RTE_THREAD_PRIORITY_REALTIME_CRITICAL) {
> ret = ENOTSUP;
> @@ -183,7 +183,7 @@ rte_thread_create(rte_thread_t *thread_id,
> }
>
> pthread_mutex_lock(&ctx.wrapper_mutex);
> - while (ctx.wrapper_done != 1)
> + while (!ctx.wrapper_done)
> pthread_cond_wait(&ctx.wrapper_cond, &ctx.wrapper_mutex);
> ret = ctx.wrapper_ret;
> pthread_mutex_unlock(&ctx.wrapper_mutex);
>
>
> The rest lgtm, thanks Tyler.
>
>
>
> --
> David Marchand
Thread overview: 34+ messages
2023-03-02 18:44 [PATCH 1/2] eal: fix failure race and behavior of thread create Tyler Retzlaff
2023-03-02 18:44 ` [PATCH 2/2] eal/windows: fix create thread failure behavior Tyler Retzlaff
2023-03-07 14:33 ` [PATCH 1/2] eal: fix failure race and behavior of thread create David Marchand
2023-03-09 9:17 ` David Marchand
2023-03-09 9:58 ` Thomas Monjalon
2023-03-09 20:49 ` Tyler Retzlaff
2023-03-09 21:05 ` David Marchand
2023-03-13 23:31 ` [PATCH v2 0/2] fix race in rte_thread_create failure path Tyler Retzlaff
2023-03-13 23:31 ` [PATCH v2 1/2] eal: make cpusetp to rte thread set affinity const Tyler Retzlaff
2023-03-13 23:31 ` [PATCH v2 2/2] eal: fix failure path race setting new thread affinity Tyler Retzlaff
2023-03-14 11:47 ` [PATCH v2 0/2] fix race in rte_thread_create failure path David Marchand
2023-03-14 13:59 ` Tyler Retzlaff
2023-03-14 22:44 ` [PATCH v3 " Tyler Retzlaff
2023-03-14 22:44 ` [PATCH v3 1/2] eal: make cpusetp to rte thread set affinity const Tyler Retzlaff
2023-03-14 22:44 ` [PATCH v3 2/2] eal: fix failure path race setting new thread affinity Tyler Retzlaff
2023-03-14 22:50 ` [PATCH v4 0/2] fix race in rte_thread_create failure path Tyler Retzlaff
2023-03-14 22:50 ` [PATCH v4 1/2] eal: make cpusetp to rte thread set affinity const Tyler Retzlaff
2023-03-14 22:50 ` [PATCH v4 2/2] eal: fix failure path race setting new thread affinity Tyler Retzlaff
2023-03-15 1:20 ` Stephen Hemminger
2023-03-15 1:26 ` Tyler Retzlaff
2023-03-16 0:04 ` [PATCH v4 0/2] fix race in rte_thread_create failure path Tyler Retzlaff
2023-03-16 0:04 ` [PATCH v4 1/2] eal: make cpusetp to rte thread set affinity const Tyler Retzlaff
2023-03-16 0:04 ` [PATCH v4 2/2] eal: fix failure path race setting new thread affinity Tyler Retzlaff
2023-03-16 0:07 ` [PATCH v5 0/2] fix race in rte_thread_create failure path Tyler Retzlaff
2023-03-16 0:07 ` [PATCH v5 1/2] eal: make cpusetp to rte thread set affinity const Tyler Retzlaff
2023-03-16 0:07 ` [PATCH v5 2/2] eal: fix failure path race setting new thread affinity Tyler Retzlaff
2023-03-17 10:45 ` David Marchand
2023-03-17 14:49 ` Tyler Retzlaff [this message]
2023-03-17 18:51 ` David Marchand
2023-03-17 21:20 ` Tyler Retzlaff
2023-03-17 18:52 ` [PATCH v6] eal/unix: fix thread creation David Marchand
2023-03-17 21:24 ` Tyler Retzlaff
2023-03-18 18:26 ` David Marchand
2023-03-18 18:26 ` David Marchand