From: Maxime Coquelin <maxime.coquelin@redhat.com>
To: Matthias Gatto <matthias.gatto@outscale.com>, dev@dpdk.org
Cc: tiwei.bie@intel.com, zhihong.wang@intel.com
Subject: Re: [dpdk-dev] [PATCH] vhost: fix race condition in fdset_add
Date: Tue, 11 Dec 2018 19:11:02 +0100 [thread overview]
Message-ID: <a5a9c7bb-3950-5248-84d3-cfe3a2961fe6@redhat.com> (raw)
In-Reply-To: <1544112007-23177-1-git-send-email-matthias.gatto@outscale.com>
Hi Matthias,
On 12/6/18 5:00 PM, Matthias Gatto wrote:
> fdset_add can call fdset_shrink_nolock, which calls fdset_move
> concurrently with the poll that is called in fdset_event_dispatch.
>
> This patch adds a mutex to protect poll from being called at the same
> time as fdset_add calls fdset_shrink_nolock.
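Just to check my understanding of the race and of the fix, here is a
minimal standalone sketch of the locking pattern as I read it (my own
simplification: demo_fdset, dispatch_thread and shrink are made-up names
and the bodies are placeholders, only the lock usage mirrors the patch):

#include <poll.h>
#include <pthread.h>
#include <unistd.h>

#define DEMO_MAX_FDS 8

struct demo_fdset {
	struct pollfd rwfds[DEMO_MAX_FDS];
	pthread_mutex_t fd_mutex;         /* protects num and the fd arrays */
	pthread_mutex_t fd_pooling_mutex; /* serializes poll() vs. compaction */
	int num;
};

static struct demo_fdset fdset = {
	.fd_mutex = PTHREAD_MUTEX_INITIALIZER,
	.fd_pooling_mutex = PTHREAD_MUTEX_INITIALIZER,
	.num = 0,
};

/* Dispatch side: snapshot num under fd_mutex, then poll under the
 * dedicated pooling mutex, as fdset_event_dispatch() does in the patch. */
static void *dispatch_thread(void *arg)
{
	(void)arg;
	for (int i = 0; i < 100; i++) {
		pthread_mutex_lock(&fdset.fd_mutex);
		int numfds = fdset.num;
		pthread_mutex_unlock(&fdset.fd_mutex);

		pthread_mutex_lock(&fdset.fd_pooling_mutex);
		poll(fdset.rwfds, numfds, 10 /* ms; 1000 in fd_man.c */);
		pthread_mutex_unlock(&fdset.fd_pooling_mutex);
	}
	return NULL;
}

/* Add side: the fdset_shrink_nolock() analogue; taking the pooling mutex
 * guarantees poll() is not reading rwfds[] while it is being compacted. */
static void shrink(void)
{
	pthread_mutex_lock(&fdset.fd_mutex);
	pthread_mutex_lock(&fdset.fd_pooling_mutex);
	/* compact rwfds[] / fd[] here: safe, poll() cannot run concurrently */
	pthread_mutex_unlock(&fdset.fd_pooling_mutex);
	pthread_mutex_unlock(&fdset.fd_mutex);
}

int main(void)
{
	pthread_t t;
	pthread_create(&t, NULL, dispatch_thread, NULL);
	for (int i = 0; i < 100; i++) {
		shrink();
		usleep(1000);
	}
	pthread_join(&t, NULL);
	return 0;
}

So the new mutex only has to keep poll() and the compaction of rwfds[]
from overlapping, while fd_mutex keeps protecting num and the arrays
everywhere else.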
>
> Signed-off-by: Matthias Gatto <matthias.gatto@outscale.com>
> ---
> lib/librte_vhost/fd_man.c | 4 ++++
> lib/librte_vhost/fd_man.h | 1 +
> lib/librte_vhost/socket.c | 1 +
> 3 files changed, 6 insertions(+)
>
> diff --git a/lib/librte_vhost/fd_man.c b/lib/librte_vhost/fd_man.c
> index 38347ab..55d4856 100644
> --- a/lib/librte_vhost/fd_man.c
> +++ b/lib/librte_vhost/fd_man.c
> @@ -129,7 +129,9 @@
> pthread_mutex_lock(&pfdset->fd_mutex);
> i = pfdset->num < MAX_FDS ? pfdset->num++ : -1;
> if (i == -1) {
> + pthread_mutex_lock(&pfdset->fd_pooling_mutex);
> fdset_shrink_nolock(pfdset);
> + pthread_mutex_unlock(&pfdset->fd_pooling_mutex);
> i = pfdset->num < MAX_FDS ? pfdset->num++ : -1;
> if (i == -1) {
> pthread_mutex_unlock(&pfdset->fd_mutex);
> @@ -246,7 +248,9 @@
> numfds = pfdset->num;
> pthread_mutex_unlock(&pfdset->fd_mutex);
>
> + pthread_mutex_lock(&pfdset->fd_pooling_mutex);
> val = poll(pfdset->rwfds, numfds, 1000 /* millisecs */);
> + pthread_mutex_unlock(&pfdset->fd_pooling_mutex);
Any reason we cannot use the existing fd_mutex?
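To make the question concrete, reusing fd_mutex would look roughly like
this hypothetical variant of dispatch_thread() from the sketch above
(not something the patch proposes):

/*
 * Hypothetical variant: reuse fd_mutex instead of a dedicated pooling
 * mutex. fd_mutex would then stay held across the whole poll() timeout
 * (1000 ms in fd_man.c), and fdset_add()/fdset_del() would block on it
 * for up to that long, which may be the reason for a separate lock.
 */
static void *dispatch_thread_fd_mutex_only(void *arg)
{
	(void)arg;
	for (int i = 0; i < 100; i++) {
		pthread_mutex_lock(&fdset.fd_mutex);
		poll(fdset.rwfds, fdset.num, 10 /* ms */);
		pthread_mutex_unlock(&fdset.fd_mutex);
	}
	return NULL;
}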
> if (val < 0)
> continue;
>
> diff --git a/lib/librte_vhost/fd_man.h b/lib/librte_vhost/fd_man.h
> index 3331bcd..3ab5cfd 100644
> --- a/lib/librte_vhost/fd_man.h
> +++ b/lib/librte_vhost/fd_man.h
> @@ -24,6 +24,7 @@ struct fdset {
> struct pollfd rwfds[MAX_FDS];
> struct fdentry fd[MAX_FDS];
> pthread_mutex_t fd_mutex;
> + pthread_mutex_t fd_pooling_mutex;
> int num; /* current fd number of this fdset */
>
> union pipefds {
> diff --git a/lib/librte_vhost/socket.c b/lib/librte_vhost/socket.c
> index d630317..cc4e748 100644
> --- a/lib/librte_vhost/socket.c
> +++ b/lib/librte_vhost/socket.c
> @@ -88,6 +88,7 @@ struct vhost_user {
> .fdset = {
> .fd = { [0 ... MAX_FDS - 1] = {-1, NULL, NULL, NULL, 0} },
> .fd_mutex = PTHREAD_MUTEX_INITIALIZER,
> + .fd_pooling_mutex = PTHREAD_MUTEX_INITIALIZER,
> .num = 0
> },
> .vsocket_cnt = 0,
>
Thread overview: 8+ messages
2018-12-06 16:00 Matthias Gatto
2018-12-11 18:11 ` Maxime Coquelin [this message]
2018-12-14 9:32 ` Matthias Gatto
2018-12-14 9:51 ` Maxime Coquelin
2018-12-14 9:53 ` Maxime Coquelin
2018-12-14 10:07 ` Matthias Gatto
2018-12-14 10:08 ` Maxime Coquelin
2018-12-18 14:01 ` Maxime Coquelin