* [PATCH] mempool: fix rte_mempool_avail_count may segment fault when used in multiprocess
@ 2022-11-14 7:14 changfengnan
2022-11-14 9:55 ` Morten Brørup
` (2 more replies)
0 siblings, 3 replies; 9+ messages in thread
From: changfengnan @ 2022-11-14 7:14 UTC (permalink / raw)
To: olivier.matz, andrew.rybchenko, dev; +Cc: changfengnan
rte_mempool_create puts the tailq entry into the rte_mempool_tailq list before
populating, and pool_data is only set during populate. So in multi-process mode, if
process A creates a mempool, process B can get the mempool through
rte_mempool_lookup before pool_data is set; if B calls rte_mempool_lookup,
it will cause a segmentation fault.
Fix this by putting the tailq entry into rte_mempool_tailq after populate.
Signed-off-by: changfengnan <changfengnan@bytedance.com>
---
lib/mempool/rte_mempool.c | 40 +++++++++++++++++++--------------------
1 file changed, 19 insertions(+), 21 deletions(-)
diff --git a/lib/mempool/rte_mempool.c b/lib/mempool/rte_mempool.c
index 4c78071a34..b23d6138ff 100644
--- a/lib/mempool/rte_mempool.c
+++ b/lib/mempool/rte_mempool.c
@@ -798,9 +798,7 @@ rte_mempool_create_empty(const char *name, unsigned n, unsigned elt_size,
int socket_id, unsigned flags)
{
char mz_name[RTE_MEMZONE_NAMESIZE];
- struct rte_mempool_list *mempool_list;
struct rte_mempool *mp = NULL;
- struct rte_tailq_entry *te = NULL;
const struct rte_memzone *mz = NULL;
size_t mempool_size;
unsigned int mz_flags = RTE_MEMZONE_1GB|RTE_MEMZONE_SIZE_HINT_ONLY;
@@ -820,8 +818,6 @@ rte_mempool_create_empty(const char *name, unsigned n, unsigned elt_size,
RTE_CACHE_LINE_MASK) != 0);
#endif
- mempool_list = RTE_TAILQ_CAST(rte_mempool_tailq.head, rte_mempool_list);
-
/* asked for zero items */
if (n == 0) {
rte_errno = EINVAL;
@@ -866,14 +862,6 @@ rte_mempool_create_empty(const char *name, unsigned n, unsigned elt_size,
private_data_size = (private_data_size +
RTE_MEMPOOL_ALIGN_MASK) & (~RTE_MEMPOOL_ALIGN_MASK);
-
- /* try to allocate tailq entry */
- te = rte_zmalloc("MEMPOOL_TAILQ_ENTRY", sizeof(*te), 0);
- if (te == NULL) {
- RTE_LOG(ERR, MEMPOOL, "Cannot allocate tailq entry!\n");
- goto exit_unlock;
- }
-
mempool_size = RTE_MEMPOOL_HEADER_SIZE(mp, cache_size);
mempool_size += private_data_size;
mempool_size = RTE_ALIGN_CEIL(mempool_size, RTE_MEMPOOL_ALIGN);
@@ -908,7 +896,6 @@ rte_mempool_create_empty(const char *name, unsigned n, unsigned elt_size,
mp->private_data_size = private_data_size;
STAILQ_INIT(&mp->elt_list);
STAILQ_INIT(&mp->mem_list);
-
/*
* local_cache pointer is set even if cache_size is zero.
* The local_cache points to just past the elt_pa[] array.
@@ -922,12 +909,6 @@ rte_mempool_create_empty(const char *name, unsigned n, unsigned elt_size,
mempool_cache_init(&mp->local_cache[lcore_id],
cache_size);
}
-
- te->data = mp;
-
- rte_mcfg_tailq_write_lock();
- TAILQ_INSERT_TAIL(mempool_list, te, next);
- rte_mcfg_tailq_write_unlock();
rte_mcfg_mempool_write_unlock();
rte_mempool_trace_create_empty(name, n, elt_size, cache_size,
@@ -936,7 +917,6 @@ rte_mempool_create_empty(const char *name, unsigned n, unsigned elt_size,
exit_unlock:
rte_mcfg_mempool_write_unlock();
- rte_free(te);
rte_mempool_free(mp);
return NULL;
}
@@ -951,11 +931,22 @@ rte_mempool_create(const char *name, unsigned n, unsigned elt_size,
{
int ret;
struct rte_mempool *mp;
+ struct rte_mempool_list *mempool_list;
+ struct rte_tailq_entry *te = NULL;
+
+ /* try to allocate tailq entry */
+ te = rte_zmalloc("MEMPOOL_TAILQ_ENTRY", sizeof(*te), 0);
+ if (te == NULL) {
+ RTE_LOG(ERR, MEMPOOL, "Cannot allocate tailq entry!\n");
+ return NULL;
+ }
mp = rte_mempool_create_empty(name, n, elt_size, cache_size,
private_data_size, socket_id, flags);
- if (mp == NULL)
+ if (mp == NULL) {
+ rte_free(te);
return NULL;
+ }
/*
* Since we have 4 combinations of the SP/SC/MP/MC examine the flags to
@@ -984,12 +975,19 @@ rte_mempool_create(const char *name, unsigned n, unsigned elt_size,
if (obj_init)
rte_mempool_obj_iter(mp, obj_init, obj_init_arg);
+ te->data = mp;
+ mempool_list = RTE_TAILQ_CAST(rte_mempool_tailq.head, rte_mempool_list);
+ rte_mcfg_tailq_write_lock();
+ TAILQ_INSERT_TAIL(mempool_list, te, next);
+ rte_mcfg_tailq_write_unlock();
+
rte_mempool_trace_create(name, n, elt_size, cache_size,
private_data_size, mp_init, mp_init_arg, obj_init,
obj_init_arg, flags, mp);
return mp;
fail:
+ rte_free(te);
rte_mempool_free(mp);
return NULL;
}
--
2.37.0 (Apple Git-136)
* RE: [PATCH] mempool: fix rte_mempool_avail_count may segment fault when used in multiprocess
2022-11-14 7:14 [PATCH] mempool: fix rte_mempool_avail_count may segment fault when used in multiprocess changfengnan
@ 2022-11-14 9:55 ` Morten Brørup
2022-11-14 11:23 ` changfengnan
2022-11-14 20:43 ` David Marchand
2 siblings, 0 replies; 9+ messages in thread
From: Morten Brørup @ 2022-11-14 9:55 UTC (permalink / raw)
To: changfengnan, olivier.matz, andrew.rybchenko, dev
> From: changfengnan [mailto:changfengnan@bytedance.com]
> Sent: Monday, 14 November 2022 08.15
>
> rte_mempool_create put tailq entry into rte_mempool_tailq list before
> populate, and pool_data set when populate. So in multi process, if
> process A create mempool, and process B can get mempool through
> rte_mempool_lookup before pool_data set, if B call rte_mempool_lookup,
> it will cause segment fault.
> Fix this by put tailq entry into rte_mempool_tailq after populate.
>
> Signed-off-by: changfengnan <changfengnan@bytedance.com>
> ---
Good catch.
You must use your real name (not your username) before the email address in the sign-off line, or the patch cannot be accepted. Please refer to: https://doc.dpdk.org/guides/contributing/patches.html#commit-messages-body
With a proper sign-off line,
Acked-by: Morten Brørup <mb@smartsharesystems.com>
* Re: [PATCH] mempool: fix rte_mempool_avail_count may segment fault when used in multiprocess
2022-11-14 7:14 [PATCH] mempool: fix rte_mempool_avail_count may segment fault when used in multiprocess changfengnan
2022-11-14 9:55 ` Morten Brørup
@ 2022-11-14 11:23 ` changfengnan
2022-11-14 20:43 ` David Marchand
2 siblings, 0 replies; 9+ messages in thread
From: changfengnan @ 2022-11-14 11:23 UTC (permalink / raw)
To: olivier.matz, andrew.rybchenko, dev
It seems this modification overlooks certain cases, for example calling
rte_mempool_create_empty directly. Maybe adding a new flag bit to indicate
when to put the tailq entry into the rte_mempool_tailq list is a better way?
Looking forward to any suggestions.
Thanks.
Fengnan Chang.
changfengnan <changfengnan@bytedance.com> 于2022年11月14日周一 15:14写道:
>
> rte_mempool_create put tailq entry into rte_mempool_tailq list before
> populate, and pool_data set when populate. So in multi process, if
> process A create mempool, and process B can get mempool through
> rte_mempool_lookup before pool_data set, if B call rte_mempool_lookup,
> it will cause segment fault.
> Fix this by put tailq entry into rte_mempool_tailq after populate.
>
> Signed-off-by: changfengnan <changfengnan@bytedance.com>
* Re: [PATCH] mempool: fix rte_mempool_avail_count may segment fault when used in multiprocess
2022-11-14 7:14 [PATCH] mempool: fix rte_mempool_avail_count may segment fault when used in multiprocess changfengnan
2022-11-14 9:55 ` Morten Brørup
2022-11-14 11:23 ` changfengnan
@ 2022-11-14 20:43 ` David Marchand
2022-11-15 1:51 ` [External] " Fengnan Chang
2 siblings, 1 reply; 9+ messages in thread
From: David Marchand @ 2022-11-14 20:43 UTC (permalink / raw)
To: changfengnan; +Cc: olivier.matz, andrew.rybchenko, dev, Morten Brørup
On Mon, Nov 14, 2022 at 9:13 AM changfengnan <changfengnan@bytedance.com> wrote:
>
> rte_mempool_create put tailq entry into rte_mempool_tailq list before
> populate, and pool_data set when populate. So in multi process, if
> process A create mempool, and process B can get mempool through
> rte_mempool_lookup before pool_data set, if B call rte_mempool_lookup,
> it will cause segment fault.
I fail to see how pool_data impacts rte_mempool_lookup.
Something is fishy about this commitlog.
> Fix this by put tailq entry into rte_mempool_tailq after populate.
Moving the tailq manipulation to rte_mempool_create only is probably incorrect.
An application is allowed to call rte_mempool_create_empty() and
rte_mempool_populate().
I did not look in depth, but it is likely the reason why testpmd (as
run with devtools/test-null.sh) won't pass anymore.
The CI reported this issue in various envs.
We can't take this patch.
>
> Signed-off-by: changfengnan <changfengnan@bytedance.com>
Please use your real name.
--
David Marchand
* Re: [External] Re: [PATCH] mempool: fix rte_mempool_avail_count may segment fault when used in multiprocess
2022-11-14 20:43 ` David Marchand
@ 2022-11-15 1:51 ` Fengnan Chang
2022-11-15 7:23 ` Morten Brørup
2022-11-15 7:47 ` David Marchand
0 siblings, 2 replies; 9+ messages in thread
From: Fengnan Chang @ 2022-11-15 1:51 UTC (permalink / raw)
To: David Marchand; +Cc: olivier.matz, andrew.rybchenko, dev, Morten Brørup
David Marchand <david.marchand@redhat.com> 于2022年11月15日周二 04:44写道:
>
> On Mon, Nov 14, 2022 at 9:13 AM changfengnan <changfengnan@bytedance.com> wrote:
> >
> > rte_mempool_create put tailq entry into rte_mempool_tailq list before
> > populate, and pool_data set when populate. So in multi process, if
> > process A create mempool, and process B can get mempool through
> > rte_mempool_lookup before pool_data set, if B call rte_mempool_lookup,
> > it will cause segment fault.
>
> I fail to see how pool_data impacts rte_mempool_lookup.
> Something is fishy about this commitlog.
Oh, that's my mistake in the commit log. Correction: if B can get the mempool through
rte_mempool_lookup before pool_data is set and calls rte_mempool_avail_count,
it will cause a segmentation fault.
>
>
> > Fix this by put tailq entry into rte_mempool_tailq after populate.
>
> Moving tailq manipulation to rte_mempool_create only, is probably incorrect.
> An application is allowed to call rte_mempool_create_empty() and
> rte_mempool_populate().
>
> I did not look in depth, but It is likely the reason why testpmd (as
> run with devtools/test-null.sh) won't pass anymore.
> The CI reported this issue in various envs.
>
> We can't take this patch.
Yeah, this version makes the CI fail.
I didn't notice before that rte_mempool_create_empty can be called directly. Maybe
adding a new flag bit to indicate when to put the tailq entry into the
rte_mempool_tailq list is a better way. If there is no better idea, I'll send a new version.
>
>
> >
> > Signed-off-by: changfengnan <changfengnan@bytedance.com>
>
> Please use your real name.
It's my real name.
>
>
> --
> David Marchand
>
* RE: [External] Re: [PATCH] mempool: fix rte_mempool_avail_count may segment fault when used in multiprocess
2022-11-15 1:51 ` [External] " Fengnan Chang
@ 2022-11-15 7:23 ` Morten Brørup
2022-11-15 7:47 ` David Marchand
1 sibling, 0 replies; 9+ messages in thread
From: Morten Brørup @ 2022-11-15 7:23 UTC (permalink / raw)
To: Fengnan Chang, David Marchand; +Cc: olivier.matz, andrew.rybchenko, dev
> From: Fengnan Chang [mailto:changfengnan@bytedance.com]
> Sent: Tuesday, 15 November 2022 02.51
>
> David Marchand <david.marchand@redhat.com> 于2022年11月15日周二
> 04:44写道:
> >
> > On Mon, Nov 14, 2022 at 9:13 AM changfengnan
> <changfengnan@bytedance.com> wrote:
[...]
> > > Signed-off-by: changfengnan <changfengnan@bytedance.com>
> >
> > Please use your real name.
>
> It's my real name.
The name in the sign-off line must be Fengnan Chang <changfengnan@bytedance.com>, not changfengnan <changfengnan@bytedance.com>.
-Morten
* Re: [External] Re: [PATCH] mempool: fix rte_mempool_avail_count may segment fault when used in multiprocess
2022-11-15 1:51 ` [External] " Fengnan Chang
2022-11-15 7:23 ` Morten Brørup
@ 2022-11-15 7:47 ` David Marchand
2022-11-15 8:29 ` Olivier Matz
1 sibling, 1 reply; 9+ messages in thread
From: David Marchand @ 2022-11-15 7:47 UTC (permalink / raw)
To: Fengnan Chang, olivier.matz, andrew.rybchenko; +Cc: dev, Morten Brørup
On Tue, Nov 15, 2022 at 2:51 AM Fengnan Chang
<changfengnan@bytedance.com> wrote:
>
> David Marchand <david.marchand@redhat.com> 于2022年11月15日周二 04:44写道:
> >
> > On Mon, Nov 14, 2022 at 9:13 AM changfengnan <changfengnan@bytedance.com> wrote:
> > >
> > > rte_mempool_create put tailq entry into rte_mempool_tailq list before
> > > populate, and pool_data set when populate. So in multi process, if
> > > process A create mempool, and process B can get mempool through
> > > rte_mempool_lookup before pool_data set, if B call rte_mempool_lookup,
> > > it will cause segment fault.
> >
> > I fail to see how pool_data impacts rte_mempool_lookup.
> > Something is fishy about this commitlog.
>
> oh, it's my fault about this commit. correct: if B can get mempool through
> rte_mempool_lookup before pool_data set, and call rte_mempool_avail_count,
> it will cause segment fault.
Ok, now it makes more sense :-).
>
> >
> >
> > > Fix this by put tailq entry into rte_mempool_tailq after populate.
> >
> > Moving tailq manipulation to rte_mempool_create only, is probably incorrect.
> > An application is allowed to call rte_mempool_create_empty() and
> > rte_mempool_populate().
> >
> > I did not look in depth, but It is likely the reason why testpmd (as
> > run with devtools/test-null.sh) won't pass anymore.
> > The CI reported this issue in various envs.
> >
> > We can't take this patch.
>
> Yeah, this version makes CI fail.
> I didn't notice rte_mempool_create_empty will called directly before, maybe
> add a new flag bit to indicate when to put tailq entry into rte_mempool_tailq
> list is a better way. If no better idea, I'll send a new version.
I don't think we need another flag.
Can we "publish" the mempool at the mempool_ops_alloc_once stage?
>
> >
> >
> > >
> > > Signed-off-by: changfengnan <changfengnan@bytedance.com>
> >
> > Please use your real name.
>
> It's my real name.
Sorry, I meant your full name, like Fengnan Chang <changfengnan@bytedance.com>
--
David Marchand
* Re: [External] Re: [PATCH] mempool: fix rte_mempool_avail_count may segment fault when used in multiprocess
2022-11-15 7:47 ` David Marchand
@ 2022-11-15 8:29 ` Olivier Matz
2022-11-15 11:30 ` Fengnan Chang
0 siblings, 1 reply; 9+ messages in thread
From: Olivier Matz @ 2022-11-15 8:29 UTC (permalink / raw)
To: David Marchand; +Cc: Fengnan Chang, andrew.rybchenko, dev, Morten Brørup
Hi,
On Tue, Nov 15, 2022 at 08:47:15AM +0100, David Marchand wrote:
> On Tue, Nov 15, 2022 at 2:51 AM Fengnan Chang
> <changfengnan@bytedance.com> wrote:
> >
> > David Marchand <david.marchand@redhat.com> 于2022年11月15日周二 04:44写道:
> > >
> > > On Mon, Nov 14, 2022 at 9:13 AM changfengnan <changfengnan@bytedance.com> wrote:
> > > >
> > > > rte_mempool_create put tailq entry into rte_mempool_tailq list before
> > > > populate, and pool_data set when populate. So in multi process, if
> > > > process A create mempool, and process B can get mempool through
> > > > rte_mempool_lookup before pool_data set, if B call rte_mempool_lookup,
> > > > it will cause segment fault.
> > >
> > > I fail to see how pool_data impacts rte_mempool_lookup.
> > > Something is fishy about this commitlog.
> >
> > oh, it's my fault about this commit. correct: if B can get mempool through
> > rte_mempool_lookup before pool_data set, and call rte_mempool_avail_count,
> > it will cause segment fault.
>
> Ok, now it makes more sense :-).
>
> >
> > >
> > >
> > > > Fix this by put tailq entry into rte_mempool_tailq after populate.
> > >
> > > Moving tailq manipulation to rte_mempool_create only, is probably incorrect.
> > > An application is allowed to call rte_mempool_create_empty() and
> > > rte_mempool_populate().
> > >
> > > I did not look in depth, but It is likely the reason why testpmd (as
> > > run with devtools/test-null.sh) won't pass anymore.
> > > The CI reported this issue in various envs.
> > >
> > > We can't take this patch.
> >
> > Yeah, this version makes CI fail.
> > I didn't notice rte_mempool_create_empty will called directly before, maybe
> > add a new flag bit to indicate when to put tailq entry into rte_mempool_tailq
> > list is a better way. If no better idea, I'll send a new version.
>
> I don't think we need an other flag.
> Can we "publish" the mempool at the mempool_ops_alloc_once stage?
Yes, mempool_ops_alloc_once() seems to be the proper place.
Alternatively, I suppose this issue can be fixed in the secondary
application:
- it can wait until the flag RTE_MEMPOOL_F_POOL_CREATED is set before
using the mempool.
- or it can wait for the RTE_MEMPOOL_EVENT_READY event.
- or it can wait until the whole initialization of the primary
application has finished by some other means (a sort of lock). I don't know
the exact use case, but to me it looks sane to do that; it would
protect from other similar issues.
Olivier
>
>
> >
> > >
> > >
> > > >
> > > > Signed-off-by: changfengnan <changfengnan@bytedance.com>
> > >
> > > Please use your real name.
> >
> > It's my real name.
>
> Sorry, I meant your full name, like Fengnan Chang <changfengnan@bytedance.com>
>
>
> --
> David Marchand
>
* Re: [External] Re: [PATCH] mempool: fix rte_mempool_avail_count may segment fault when used in multiprocess
2022-11-15 8:29 ` Olivier Matz
@ 2022-11-15 11:30 ` Fengnan Chang
0 siblings, 0 replies; 9+ messages in thread
From: Fengnan Chang @ 2022-11-15 11:30 UTC (permalink / raw)
To: Olivier Matz; +Cc: David Marchand, andrew.rybchenko, dev, Morten Brørup
Olivier Matz <olivier.matz@6wind.com> 于2022年11月15日周二 16:29写道:
>
> Hi,
>
> On Tue, Nov 15, 2022 at 08:47:15AM +0100, David Marchand wrote:
> > On Tue, Nov 15, 2022 at 2:51 AM Fengnan Chang
> > <changfengnan@bytedance.com> wrote:
> > >
> > > David Marchand <david.marchand@redhat.com> 于2022年11月15日周二 04:44写道:
> > > >
> > > > On Mon, Nov 14, 2022 at 9:13 AM changfengnan <changfengnan@bytedance.com> wrote:
> > > > >
> > > > > rte_mempool_create put tailq entry into rte_mempool_tailq list before
> > > > > populate, and pool_data set when populate. So in multi process, if
> > > > > process A create mempool, and process B can get mempool through
> > > > > rte_mempool_lookup before pool_data set, if B call rte_mempool_lookup,
> > > > > it will cause segment fault.
> > > >
> > > > I fail to see how pool_data impacts rte_mempool_lookup.
> > > > Something is fishy about this commitlog.
> > >
> > > oh, it's my fault about this commit. correct: if B can get mempool through
> > > rte_mempool_lookup before pool_data set, and call rte_mempool_avail_count,
> > > it will cause segment fault.
> >
> > Ok, now it makes more sense :-).
> >
> > >
> > > >
> > > >
> > > > > Fix this by put tailq entry into rte_mempool_tailq after populate.
> > > >
> > > > Moving tailq manipulation to rte_mempool_create only, is probably incorrect.
> > > > An application is allowed to call rte_mempool_create_empty() and
> > > > rte_mempool_populate().
> > > >
> > > > I did not look in depth, but It is likely the reason why testpmd (as
> > > > run with devtools/test-null.sh) won't pass anymore.
> > > > The CI reported this issue in various envs.
> > > >
> > > > We can't take this patch.
> > >
> > > Yeah, this version makes CI fail.
> > > I didn't notice rte_mempool_create_empty will called directly before, maybe
> > > add a new flag bit to indicate when to put tailq entry into rte_mempool_tailq
> > > list is a better way. If no better idea, I'll send a new version.
> >
> > I don't think we need an other flag.
> > Can we "publish" the mempool at the mempool_ops_alloc_once stage?
>
> The mempool_ops_alloc_once() seems it is the proper place, yes.
>
> Alternatively, I suppose this issue can be fixed in the secondary
> application:
I found this problem in SPDK; if it is fixed in the secondary application,
SPDK needs to be modified too.
The following options would work, but for SPDK I prefer to publish the mempool at
the mempool_ops_alloc_once stage, so I'll send a new version doing that.
>
> - it can wait that the flag RTE_MEMPOOL_F_POOL_CREATED is present before
> using the mempool.
>
> - or it can wait the RTE_MEMPOOL_EVENT_READY
>
> - or it can wait that the whole initialization of the primary
> application is finished by another mean (a sort of lock). I don't know
> the exact use case, but to me, it looks sane to do that, it would
> protect from other similar issues.
>
>
> Olivier
>
> >
> >
> > >
> > > >
> > > >
> > > > >
> > > > > Signed-off-by: changfengnan <changfengnan@bytedance.com>
> > > >
> > > > Please use your real name.
> > >
> > > It's my real name.
> >
> > Sorry, I meant your full name, like Fengnan Chang <changfengnan@bytedance.com>
> >
> >
> > --
> > David Marchand
> >