* [dpdk-dev] [PATCH] mempool: fix mempool obj alignment for non x86 @ 2019-12-19 13:42 jerinj 2019-12-20 3:26 ` Gavin Hu ` (2 more replies) 0 siblings, 3 replies; 18+ messages in thread From: jerinj @ 2019-12-19 13:42 UTC (permalink / raw) To: dev Cc: thomas, olivier.matz, arybchenko, bruce.richardson, konstantin.ananyev, hemant.agrawal, shahafs, honnappa.nagarahalli, gavin.hu, viktorin, drc, anatoly.burakov, Jerin Jacob, stable From: Jerin Jacob <jerinj@marvell.com> The exiting optimize_object_size() function address the memory object alignment constraint on x86 for better performance. Different (Mirco) architecture may have different memory alignment constraint for better performance and it not same as the existing optimize_object_size() function. Some use, XOR(kind of CRC) scheme to enable DRAM channel distribution based on the address and some may have a different formula. Introducing arch_mem_object_align() function to abstract the differences in different (mirco) architectures and avoid wasting memory for mempool object alignment for the architecture the existing optimize_object_size() is not valid. Additional details: https://www.mail-archive.com/dev@dpdk.org/msg149157.html Fixes: af75078fece3 ("first public release") Cc: stable@dpdk.org Signed-off-by: Jerin Jacob <jerinj@marvell.com> --- doc/guides/prog_guide/mempool_lib.rst | 6 +++--- lib/librte_mempool/rte_mempool.c | 17 +++++++++++++---- 2 files changed, 16 insertions(+), 7 deletions(-) diff --git a/doc/guides/prog_guide/mempool_lib.rst b/doc/guides/prog_guide/mempool_lib.rst index 3bb84b0a6..eea7a2906 100644 --- a/doc/guides/prog_guide/mempool_lib.rst +++ b/doc/guides/prog_guide/mempool_lib.rst @@ -27,10 +27,10 @@ In debug mode (CONFIG_RTE_LIBRTE_MEMPOOL_DEBUG is enabled), statistics about get from/put in the pool are stored in the mempool structure. Statistics are per-lcore to avoid concurrent access to statistics counters. -Memory Alignment Constraints ----------------------------- +Memory Alignment Constraints on X86 architecture +------------------------------------------------ -Depending on hardware memory configuration, performance can be greatly improved by adding a specific padding between objects. +Depending on hardware memory configuration on X86 architecture, performance can be greatly improved by adding a specific padding between objects. The objective is to ensure that the beginning of each object starts on a different channel and rank in memory so that all channels are equally loaded. This is particularly true for packet buffers when doing L3 forwarding or flow classification. diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c index 78d8eb941..871894525 100644 --- a/lib/librte_mempool/rte_mempool.c +++ b/lib/librte_mempool/rte_mempool.c @@ -45,6 +45,7 @@ EAL_REGISTER_TAILQ(rte_mempool_tailq) #define CALC_CACHE_FLUSHTHRESH(c) \ ((typeof(c))((c) * CACHE_FLUSHTHRESH_MULTIPLIER)) +#if defined(RTE_ARCH_X86) /* * return the greatest common divisor between a and b (fast algorithm) * @@ -74,12 +75,13 @@ static unsigned get_gcd(unsigned a, unsigned b) } /* - * Depending on memory configuration, objects addresses are spread + * Depending on memory configuration on x86 arch, objects addresses are spread * between channels and ranks in RAM: the pool allocator will add * padding between objects. This function return the new size of the * object. 
*/ -static unsigned optimize_object_size(unsigned obj_size) +static unsigned +arch_mem_object_align(unsigned obj_size) { unsigned nrank, nchan; unsigned new_obj_size; @@ -99,6 +101,13 @@ static unsigned optimize_object_size(unsigned obj_size) new_obj_size++; return new_obj_size * RTE_MEMPOOL_ALIGN; } +#else +static unsigned +arch_mem_object_align(unsigned obj_size) +{ + return obj_size; +} +#endif struct pagesz_walk_arg { int socket_id; @@ -234,8 +243,8 @@ rte_mempool_calc_obj_size(uint32_t elt_size, uint32_t flags, */ if ((flags & MEMPOOL_F_NO_SPREAD) == 0) { unsigned new_size; - new_size = optimize_object_size(sz->header_size + sz->elt_size + - sz->trailer_size); + new_size = arch_mem_object_align + (sz->header_size + sz->elt_size + sz->trailer_size); sz->trailer_size = new_size - sz->header_size - sz->elt_size; } -- 2.24.1 ^ permalink raw reply [flat|nested] 18+ messages in thread
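To make the spreading logic concrete: the padding that optimize_object_size() (renamed arch_mem_object_align() above) applies on x86 grows the object, in cache-line units, until its size is coprime with nchan * nrank, so that consecutive objects start on different channels and ranks. The following is a minimal standalone sketch of that calculation, not the DPDK code itself; ALIGN, NCHAN, NRANK and spread_align are stand-in names, and the values are the fallback defaults used when the memory configuration cannot be detected (4 channels, 1 rank, 64-byte RTE_MEMPOOL_ALIGN).

/*
 * Standalone sketch (not the DPDK source) of the x86 spreading: grow the
 * object, in cache-line units, until its size is coprime with
 * nchan * nrank so that consecutive objects start on different channels
 * and ranks.
 */
#include <stdio.h>

#define ALIGN 64u   /* stand-in for RTE_MEMPOOL_ALIGN (one cache line) */
#define NCHAN 4u    /* default when rte_memory_get_nchannel() returns 0 */
#define NRANK 1u    /* default when rte_memory_get_nrank() returns 0 */

static unsigned int gcd(unsigned int a, unsigned int b)
{
	while (b != 0) {
		unsigned int t = a % b;
		a = b;
		b = t;
	}
	return a;
}

static unsigned int spread_align(unsigned int obj_size)
{
	/* round up to whole cache lines, then search for a coprime multiple */
	unsigned int n = (obj_size + ALIGN - 1) / ALIGN;

	while (gcd(n, NCHAN * NRANK) != 1)
		n++;
	return n * ALIGN;
}

int main(void)
{
	unsigned int sizes[] = { 64, 128, 192, 256, 320, 384, 2304 };
	unsigned int i;

	for (i = 0; i < sizeof(sizes) / sizeof(sizes[0]); i++)
		printf("%4u -> %4u\n", sizes[i], spread_align(sizes[i]));
	return 0;
}

With these defaults a 128-byte object is padded to 192 bytes and a 2304-byte one to 2368, which is exactly the overhead the patch avoids on architectures where this spreading does not help.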
* Re: [dpdk-dev] [PATCH] mempool: fix mempool obj alignment for non x86 2019-12-19 13:42 [dpdk-dev] [PATCH] mempool: fix mempool obj alignment for non x86 jerinj @ 2019-12-20 3:26 ` Gavin Hu 2019-12-20 3:45 ` Jerin Jacob 2019-12-20 15:55 ` [dpdk-dev] [PATCH] mempool: fix mempool obj alignment for non x86 Honnappa Nagarahalli 2020-01-11 13:34 ` [dpdk-dev] [PATCH v2] " jerinj 2 siblings, 1 reply; 18+ messages in thread From: Gavin Hu @ 2019-12-20 3:26 UTC (permalink / raw) To: jerinj, dev Cc: thomas, olivier.matz, arybchenko, bruce.richardson, konstantin.ananyev, hemant.agrawal, shahafs, Honnappa Nagarahalli, viktorin, drc, anatoly.burakov, jerinj, stable, Jerry Hao, nd Hi Jerin, It got two coding style warnings, otherwise, Reviewed-by: Gavin Hu <gavin.hu@arm.com> WARNING:UNSPECIFIED_INT: Prefer 'unsigned int' to bare use of 'unsigned' #144: FILE: lib/librte_mempool/rte_mempool.c:84: +arch_mem_object_align(unsigned obj_size) WARNING:UNSPECIFIED_INT: Prefer 'unsigned int' to bare use of 'unsigned' #154: FILE: lib/librte_mempool/rte_mempool.c:106: +arch_mem_object_align(unsigned obj_size) ^ permalink raw reply [flat|nested] 18+ messages in thread
* Re: [dpdk-dev] [PATCH] mempool: fix mempool obj alignment for non x86 2019-12-20 3:26 ` Gavin Hu @ 2019-12-20 3:45 ` Jerin Jacob 2019-12-20 10:54 ` [dpdk-dev] [PATCH] mempool: fix mempool obj alignment for nonx86 Morten Brørup 0 siblings, 1 reply; 18+ messages in thread From: Jerin Jacob @ 2019-12-20 3:45 UTC (permalink / raw) To: Gavin Hu Cc: jerinj, dev, thomas, olivier.matz, arybchenko, bruce.richardson, konstantin.ananyev, hemant.agrawal, shahafs, Honnappa Nagarahalli, viktorin, drc, anatoly.burakov, stable, Jerry Hao, nd On Fri, Dec 20, 2019 at 8:56 AM Gavin Hu <Gavin.Hu@arm.com> wrote: > > Hi Jerin, > > It got two coding style warnings, otherwise, > Reviewed-by: Gavin Hu <gavin.hu@arm.com> Thanks Gavin for review. Since the existing code is using "unsigned" in that file, I thought of not change by this patch. If someone thinks, It is better to change then I can send v2 by fixing "unsigned" to "unsigned int" across the file as a first patch in the series. > > WARNING:UNSPECIFIED_INT: Prefer 'unsigned int' to bare use of 'unsigned' > #144: FILE: lib/librte_mempool/rte_mempool.c:84: > +arch_mem_object_align(unsigned obj_size) > > WARNING:UNSPECIFIED_INT: Prefer 'unsigned int' to bare use of 'unsigned' > #154: FILE: lib/librte_mempool/rte_mempool.c:106: > +arch_mem_object_align(unsigned obj_size) ^ permalink raw reply [flat|nested] 18+ messages in thread
* Re: [dpdk-dev] [PATCH] mempool: fix mempool obj alignment for nonx86 2019-12-20 3:45 ` Jerin Jacob @ 2019-12-20 10:54 ` Morten Brørup 0 siblings, 0 replies; 18+ messages in thread From: Morten Brørup @ 2019-12-20 10:54 UTC (permalink / raw) To: Jerin Jacob, Gavin Hu Cc: jerinj, dev, thomas, olivier.matz, arybchenko, bruce.richardson, konstantin.ananyev, hemant.agrawal, shahafs, Honnappa Nagarahalli, viktorin, drc, anatoly.burakov, stable, Jerry Hao, nd > -----Original Message----- > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Jerin Jacob > Sent: Friday, December 20, 2019 4:45 AM > > On Fri, Dec 20, 2019 at 8:56 AM Gavin Hu <Gavin.Hu@arm.com> wrote: > > > > Hi Jerin, > > > > It got two coding style warnings, otherwise, > > Reviewed-by: Gavin Hu <gavin.hu@arm.com> > > Thanks Gavin for review. Since the existing code is using "unsigned" > in that file, I thought of not change by this patch. > If someone thinks, It is better to change then I can send v2 by fixing > "unsigned" to "unsigned int" across the file as a first patch in the > series. > The use of the type "unsigned" is a general issue with older DPDK libraries. Anyone touching any of these libraries will get these warnings. They should all be fixed. I'm not saying you should do it; I'm only suggesting that someone should create a dedicated patch to fix them all. > > > > WARNING:UNSPECIFIED_INT: Prefer 'unsigned int' to bare use of > 'unsigned' > > #144: FILE: lib/librte_mempool/rte_mempool.c:84: > > +arch_mem_object_align(unsigned obj_size) > > > > WARNING:UNSPECIFIED_INT: Prefer 'unsigned int' to bare use of > 'unsigned' > > #154: FILE: lib/librte_mempool/rte_mempool.c:106: > > +arch_mem_object_align(unsigned obj_size) ^ permalink raw reply [flat|nested] 18+ messages in thread
* Re: [dpdk-dev] [PATCH] mempool: fix mempool obj alignment for non x86 2019-12-19 13:42 [dpdk-dev] [PATCH] mempool: fix mempool obj alignment for non x86 jerinj 2019-12-20 3:26 ` Gavin Hu @ 2019-12-20 15:55 ` Honnappa Nagarahalli 2019-12-20 16:55 ` Jerin Jacob 2020-01-11 13:34 ` [dpdk-dev] [PATCH v2] " jerinj 2 siblings, 1 reply; 18+ messages in thread From: Honnappa Nagarahalli @ 2019-12-20 15:55 UTC (permalink / raw) To: jerinj, dev Cc: thomas, olivier.matz, arybchenko, bruce.richardson, konstantin.ananyev, hemant.agrawal, shahafs, Gavin Hu, viktorin, drc, anatoly.burakov, jerinj, stable, nd, nd <snip> > > From: Jerin Jacob <jerinj@marvell.com> > > The exiting optimize_object_size() function address the memory object > alignment constraint on x86 for better performance. > > Different (Mirco) architecture may have different memory alignment > constraint for better performance and it not same as the existing > optimize_object_size() function. Some use, XOR(kind of CRC) scheme to > enable DRAM channel distribution based on the address and some may have > a different formula. If I understand correctly, address interleaving is the characteristic of the memory controller and not the CPU. For ex: different SoCs using the same Arm architecture might have different memory controllers. So, the solution should not be architecture specific, but SoC specific. > > Introducing arch_mem_object_align() function to abstract the differences in > different (mirco) architectures and avoid wasting memory for mempool > object alignment for the architecture the existing optimize_object_size() is > not valid. > > Additional details: > https://www.mail-archive.com/dev@dpdk.org/msg149157.html > > Fixes: af75078fece3 ("first public release") > Cc: stable@dpdk.org > > Signed-off-by: Jerin Jacob <jerinj@marvell.com> > --- > doc/guides/prog_guide/mempool_lib.rst | 6 +++--- > lib/librte_mempool/rte_mempool.c | 17 +++++++++++++---- > 2 files changed, 16 insertions(+), 7 deletions(-) > > diff --git a/doc/guides/prog_guide/mempool_lib.rst > b/doc/guides/prog_guide/mempool_lib.rst > index 3bb84b0a6..eea7a2906 100644 > --- a/doc/guides/prog_guide/mempool_lib.rst > +++ b/doc/guides/prog_guide/mempool_lib.rst > @@ -27,10 +27,10 @@ In debug mode > (CONFIG_RTE_LIBRTE_MEMPOOL_DEBUG is enabled), statistics about get > from/put in the pool are stored in the mempool structure. > Statistics are per-lcore to avoid concurrent access to statistics counters. > > -Memory Alignment Constraints > ----------------------------- > +Memory Alignment Constraints on X86 architecture > +------------------------------------------------ > > -Depending on hardware memory configuration, performance can be greatly > improved by adding a specific padding between objects. > +Depending on hardware memory configuration on X86 architecture, > performance can be greatly improved by adding a specific padding between > objects. > The objective is to ensure that the beginning of each object starts on a > different channel and rank in memory so that all channels are equally loaded. > > This is particularly true for packet buffers when doing L3 forwarding or flow > classification. 
> diff --git a/lib/librte_mempool/rte_mempool.c > b/lib/librte_mempool/rte_mempool.c > index 78d8eb941..871894525 100644 > --- a/lib/librte_mempool/rte_mempool.c > +++ b/lib/librte_mempool/rte_mempool.c > @@ -45,6 +45,7 @@ EAL_REGISTER_TAILQ(rte_mempool_tailq) > #define CALC_CACHE_FLUSHTHRESH(c) \ > ((typeof(c))((c) * CACHE_FLUSHTHRESH_MULTIPLIER)) > > +#if defined(RTE_ARCH_X86) > /* > * return the greatest common divisor between a and b (fast algorithm) > * > @@ -74,12 +75,13 @@ static unsigned get_gcd(unsigned a, unsigned b) } > > /* > - * Depending on memory configuration, objects addresses are spread > + * Depending on memory configuration on x86 arch, objects addresses are > + spread > * between channels and ranks in RAM: the pool allocator will add > * padding between objects. This function return the new size of the > * object. > */ > -static unsigned optimize_object_size(unsigned obj_size) > +static unsigned > +arch_mem_object_align(unsigned obj_size) > { > unsigned nrank, nchan; > unsigned new_obj_size; > @@ -99,6 +101,13 @@ static unsigned optimize_object_size(unsigned > obj_size) > new_obj_size++; > return new_obj_size * RTE_MEMPOOL_ALIGN; } > +#else This applies to add Arm (PPC as well) SoCs which might have different schemes depending on the memory controller. IMO, this should not be architecture specific. > +static unsigned > +arch_mem_object_align(unsigned obj_size) { > + return obj_size; > +} > +#endif > > struct pagesz_walk_arg { > int socket_id; > @@ -234,8 +243,8 @@ rte_mempool_calc_obj_size(uint32_t elt_size, > uint32_t flags, > */ > if ((flags & MEMPOOL_F_NO_SPREAD) == 0) { > unsigned new_size; > - new_size = optimize_object_size(sz->header_size + sz- > >elt_size + > - sz->trailer_size); > + new_size = arch_mem_object_align > + (sz->header_size + sz->elt_size + sz->trailer_size); > sz->trailer_size = new_size - sz->header_size - sz->elt_size; > } > > -- > 2.24.1 ^ permalink raw reply [flat|nested] 18+ messages in thread
* Re: [dpdk-dev] [PATCH] mempool: fix mempool obj alignment for non x86 2019-12-20 15:55 ` [dpdk-dev] [PATCH] mempool: fix mempool obj alignment for non x86 Honnappa Nagarahalli @ 2019-12-20 16:55 ` Jerin Jacob 2019-12-20 21:07 ` Honnappa Nagarahalli 0 siblings, 1 reply; 18+ messages in thread From: Jerin Jacob @ 2019-12-20 16:55 UTC (permalink / raw) To: Honnappa Nagarahalli Cc: jerinj, dev, thomas, olivier.matz, arybchenko, bruce.richardson, konstantin.ananyev, hemant.agrawal, shahafs, Gavin Hu, viktorin, drc, anatoly.burakov, stable, nd On Fri, Dec 20, 2019 at 9:25 PM Honnappa Nagarahalli <Honnappa.Nagarahalli@arm.com> wrote: > > <snip> > > > > > From: Jerin Jacob <jerinj@marvell.com> > > > > The exiting optimize_object_size() function address the memory object > > alignment constraint on x86 for better performance. > > > > Different (Mirco) architecture may have different memory alignment > > constraint for better performance and it not same as the existing > > optimize_object_size() function. Some use, XOR(kind of CRC) scheme to > > enable DRAM channel distribution based on the address and some may have > > a different formula. > If I understand correctly, address interleaving is the characteristic of the memory controller and not the CPU. > For ex: different SoCs using the same Arm architecture might have different memory controllers. So, the solution should not be architecture specific, but SoC specific. Yes. See below. > > -static unsigned optimize_object_size(unsigned obj_size) > > +static unsigned > > +arch_mem_object_align(unsigned obj_size) > > { > > unsigned nrank, nchan; > > unsigned new_obj_size; > > @@ -99,6 +101,13 @@ static unsigned optimize_object_size(unsigned > > obj_size) > > new_obj_size++; > > return new_obj_size * RTE_MEMPOOL_ALIGN; } > > +#else > This applies to add Arm (PPC as well) SoCs which might have different schemes depending on the memory controller. IMO, this should not be architecture specific. I agree in principle. I will summarize the https://www.mail-archive.com/dev@dpdk.org/msg149157.html feedback: 1) For x86 arch, it is architecture-specific 2) For power PC arch, It is architecture-specific 3) For the ARM case, it will be the memory controller specific. 4) For the ARM case, The memory controller is not using the existing x86 arch formula. 5) If it is memory/arch-specific, Can userspace code find the optimal alignment? In the case of octeontx2/arm64, the memory controller does XOR on PA address which userspace code doesn't have much control. This patch address the known case of (1), (2), (4) and (5). (2) can be added to this framework when POWER9 folks want it. We can extend this patch to address (3) if there is a case. Without the actual requirement(If some can share the formula of alignment which is the memory controller specific and it does not come under (4))) then we can create extra layer for the memory controller and abstraction to probe it. Again there is no standard way of probing the memory controller in userspace and we need platform #define, which won't work for distribution build. So solution needs to be arch-specific and then fine-tune to memory controller if possible. I can work on creating an extra layer of code if some can provide the details of the memory controller and probing mechanism or this patch be extended to support such case if it arises in future. Thoughts? 
> > > +static unsigned > > +arch_mem_object_align(unsigned obj_size) { > > + return obj_size; > > +} > > +#endif > > > > struct pagesz_walk_arg { > > int socket_id; > > @@ -234,8 +243,8 @@ rte_mempool_calc_obj_size(uint32_t elt_size, > > uint32_t flags, > > */ > > if ((flags & MEMPOOL_F_NO_SPREAD) == 0) { > > unsigned new_size; > > - new_size = optimize_object_size(sz->header_size + sz- > > >elt_size + > > - sz->trailer_size); > > + new_size = arch_mem_object_align > > + (sz->header_size + sz->elt_size + sz->trailer_size); > > sz->trailer_size = new_size - sz->header_size - sz->elt_size; > > } > > > > -- > > 2.24.1 > ^ permalink raw reply [flat|nested] 18+ messages in thread
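For points (4) and (5) above, a purely illustrative model of what an XOR-folded channel hash means for the mempool: if the memory controller picks the DRAM channel by XORing physical-address bits, objects laid out at a plain stride already land on different channels, so the gcd-style padding only adds trailer bytes without improving the distribution. The hash and the bit positions below are invented for illustration; the real hash is controller specific and, as noted above, not something userspace can generally query.

/*
 * Illustrative-only model of an XOR-folded DRAM channel hash, the kind
 * of scheme referred to above.  The bit positions are made up; a real
 * controller's hash is implementation specific.
 */
#include <stdio.h>
#include <stdint.h>

/* hypothetical: "channel" = XOR of a few physical-address bits */
static unsigned int xor_channel(uint64_t pa)
{
	unsigned int b0 = (pa >> 8) & 1;
	unsigned int b1 = (pa >> 14) & 1;
	unsigned int b2 = (pa >> 20) & 1;

	return (b0 ^ b1) | ((b2 ^ b0) << 1);   /* 4 "channels" */
}

int main(void)
{
	unsigned int hist[4] = { 0 };
	unsigned int i, obj_size = 2 * 64;      /* 2 cache lines, no padding */
	uint64_t pa;

	/* walk 1024 consecutive, unpadded objects and count channel hits */
	for (i = 0, pa = 0x100000000ULL; i < 1024; i++, pa += obj_size)
		hist[xor_channel(pa)]++;

	for (i = 0; i < 4; i++)
		printf("channel %u: %u objects\n", i, hist[i]);
	return 0;
}

Running the sketch, the four counters come out equal even though no padding was inserted, which is why padding the object size buys nothing on such a controller.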
* Re: [dpdk-dev] [PATCH] mempool: fix mempool obj alignment for non x86 2019-12-20 16:55 ` Jerin Jacob @ 2019-12-20 21:07 ` Honnappa Nagarahalli 2019-12-21 5:06 ` Jerin Jacob 0 siblings, 1 reply; 18+ messages in thread From: Honnappa Nagarahalli @ 2019-12-20 21:07 UTC (permalink / raw) To: Jerin Jacob Cc: jerinj, dev, thomas, olivier.matz, arybchenko, bruce.richardson, konstantin.ananyev, hemant.agrawal, shahafs, Gavin Hu, viktorin, drc, anatoly.burakov, stable, nd, Honnappa Nagarahalli, nd <snip> > > > From: Jerin Jacob <jerinj@marvell.com> > > > > > > The exiting optimize_object_size() function address the memory > > > object alignment constraint on x86 for better performance. > > > > > > Different (Mirco) architecture may have different memory alignment > > > constraint for better performance and it not same as the existing > > > optimize_object_size() function. Some use, XOR(kind of CRC) scheme > > > to enable DRAM channel distribution based on the address and some > > > may have a different formula. > > If I understand correctly, address interleaving is the characteristic of the > memory controller and not the CPU. > > For ex: different SoCs using the same Arm architecture might have different > memory controllers. So, the solution should not be architecture specific, but > SoC specific. > > Yes. See below. > > > > -static unsigned optimize_object_size(unsigned obj_size) > > > +static unsigned > > > +arch_mem_object_align(unsigned obj_size) > > > { > > > unsigned nrank, nchan; > > > unsigned new_obj_size; > > > @@ -99,6 +101,13 @@ static unsigned optimize_object_size(unsigned > > > obj_size) > > > new_obj_size++; > > > return new_obj_size * RTE_MEMPOOL_ALIGN; } > > > +#else > > This applies to add Arm (PPC as well) SoCs which might have different > schemes depending on the memory controller. IMO, this should not be > architecture specific. > > I agree in principle. > I will summarize the > https://www.mail-archive.com/dev@dpdk.org/msg149157.html feedback: > > 1) For x86 arch, it is architecture-specific > 2) For power PC arch, It is architecture-specific > 3) For the ARM case, it will be the memory controller specific. > 4) For the ARM case, The memory controller is not using the existing > x86 arch formula. > 5) If it is memory/arch-specific, Can userspace code find the optimal > alignment? In the case of octeontx2/arm64, the memory controller does XOR > on PA address which userspace code doesn't have much control. > > This patch address the known case of (1), (2), (4) and (5). (2) can be added to > this framework when POWER9 folks want it. > > We can extend this patch to address (3) if there is a case. Without the actual > requirement(If some can share the formula of alignment which is the > memory controller specific and it does not come under (4))) then we can > create extra layer for the memory controller and abstraction to probe it. > Again there is no standard way of probing the memory controller in > userspace and we need platform #define, which won't work for distribution > build. > So solution needs to be arch-specific and then fine-tune to memory controller > if possible. > > I can work on creating an extra layer of code if some can provide the details > of the memory controller and probing mechanism or this patch be extended Inputs for BlueField, DPAAx, ThunderX2 would be helpful. > to support such case if it arises in future. > > Thoughts? How much memory will this save for your platform? Is it affecting performance? 
> > > > > > +static unsigned > > > +arch_mem_object_align(unsigned obj_size) { > > > + return obj_size; > > > +} > > > +#endif > > > > > > struct pagesz_walk_arg { > > > int socket_id; > > > @@ -234,8 +243,8 @@ rte_mempool_calc_obj_size(uint32_t elt_size, > > > uint32_t flags, > > > */ > > > if ((flags & MEMPOOL_F_NO_SPREAD) == 0) { > > > unsigned new_size; > > > - new_size = optimize_object_size(sz->header_size + sz- > > > >elt_size + > > > - sz->trailer_size); > > > + new_size = arch_mem_object_align > > > + (sz->header_size + sz->elt_size + > > > + sz->trailer_size); > > > sz->trailer_size = new_size - sz->header_size - sz->elt_size; > > > } > > > > > > -- > > > 2.24.1 > > ^ permalink raw reply [flat|nested] 18+ messages in thread
* Re: [dpdk-dev] [PATCH] mempool: fix mempool obj alignment for non x86 2019-12-20 21:07 ` Honnappa Nagarahalli @ 2019-12-21 5:06 ` Jerin Jacob 2019-12-27 15:54 ` Olivier Matz 0 siblings, 1 reply; 18+ messages in thread From: Jerin Jacob @ 2019-12-21 5:06 UTC (permalink / raw) To: Honnappa Nagarahalli Cc: jerinj, dev, thomas, olivier.matz, arybchenko, bruce.richardson, konstantin.ananyev, hemant.agrawal, shahafs, Gavin Hu, viktorin, drc, anatoly.burakov, stable, nd On Sat, Dec 21, 2019 at 2:37 AM Honnappa Nagarahalli <Honnappa.Nagarahalli@arm.com> wrote: > > <snip> > > > > From: Jerin Jacob <jerinj@marvell.com> > > > > > > > > The exiting optimize_object_size() function address the memory > > > > object alignment constraint on x86 for better performance. > > > > > > > > Different (Mirco) architecture may have different memory alignment > > > > constraint for better performance and it not same as the existing > > > > optimize_object_size() function. Some use, XOR(kind of CRC) scheme > > > > to enable DRAM channel distribution based on the address and some > > > > may have a different formula. > > > If I understand correctly, address interleaving is the characteristic of the > > memory controller and not the CPU. > > > For ex: different SoCs using the same Arm architecture might have different > > memory controllers. So, the solution should not be architecture specific, but > > SoC specific. > > > > Yes. See below. > > > > > > -static unsigned optimize_object_size(unsigned obj_size) > > > > +static unsigned > > > > +arch_mem_object_align(unsigned obj_size) > > > > { > > > > unsigned nrank, nchan; > > > > unsigned new_obj_size; > > > > @@ -99,6 +101,13 @@ static unsigned optimize_object_size(unsigned > > > > obj_size) > > > > new_obj_size++; > > > > return new_obj_size * RTE_MEMPOOL_ALIGN; } > > > > +#else > > > This applies to add Arm (PPC as well) SoCs which might have different > > schemes depending on the memory controller. IMO, this should not be > > architecture specific. > > > > I agree in principle. > > I will summarize the > > https://www.mail-archive.com/dev@dpdk.org/msg149157.html feedback: > > > > 1) For x86 arch, it is architecture-specific > > 2) For power PC arch, It is architecture-specific > > 3) For the ARM case, it will be the memory controller specific. > > 4) For the ARM case, The memory controller is not using the existing > > x86 arch formula. > > 5) If it is memory/arch-specific, Can userspace code find the optimal > > alignment? In the case of octeontx2/arm64, the memory controller does XOR > > on PA address which userspace code doesn't have much control. > > > > This patch address the known case of (1), (2), (4) and (5). (2) can be added to > > this framework when POWER9 folks want it. > > > > We can extend this patch to address (3) if there is a case. Without the actual > > requirement(If some can share the formula of alignment which is the > > memory controller specific and it does not come under (4))) then we can > > create extra layer for the memory controller and abstraction to probe it. > > Again there is no standard way of probing the memory controller in > > userspace and we need platform #define, which won't work for distribution > > build. > > So solution needs to be arch-specific and then fine-tune to memory controller > > if possible. > > > > I can work on creating an extra layer of code if some can provide the details > > of the memory controller and probing mechanism or this patch be extended > Inputs for BlueField, DPAAx, ThunderX2 would be helpful. 
Yes. Probably memory controller used in n1sdp SoC also. > > > to support such case if it arises in future. > > > > Thoughts? > How much memory will this save for your platform? Is it affecting performance? No performance difference. The existing code adding the tailer for each objs. Additional space/Trailer space will be function of number of objects in mempool and its obj_size, its alignment and selected rte_memory_get_nchannel() and rte_memory_get_nrank() I will wait for inputs from Bluefield, DPAAx, ThunderX2 and n1sdp(if any) for any rework on the patch. > > > > > > > > > > +static unsigned > > > > +arch_mem_object_align(unsigned obj_size) { > > > > + return obj_size; > > > > +} > > > > +#endif > > > > > > > > struct pagesz_walk_arg { > > > > int socket_id; > > > > @@ -234,8 +243,8 @@ rte_mempool_calc_obj_size(uint32_t elt_size, > > > > uint32_t flags, > > > > */ > > > > if ((flags & MEMPOOL_F_NO_SPREAD) == 0) { > > > > unsigned new_size; > > > > - new_size = optimize_object_size(sz->header_size + sz- > > > > >elt_size + > > > > - sz->trailer_size); > > > > + new_size = arch_mem_object_align > > > > + (sz->header_size + sz->elt_size + > > > > + sz->trailer_size); > > > > sz->trailer_size = new_size - sz->header_size - sz->elt_size; > > > > } > > > > > > > > -- > > > > 2.24.1 > > > ^ permalink raw reply [flat|nested] 18+ messages in thread
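To put a rough number on the trailer-space point above: with the default nchan=4/nrank=1, a ~2304-byte mbuf object is padded to 2368 bytes (see the size table in the next message), i.e. one extra cache line per object, and the total cost scales with the object count. A back-of-the-envelope sketch; the pool size is an arbitrary example, not a figure from the thread.

/*
 * Rough cost of the spreading for a typical mbuf-sized object, using the
 * 2304 -> 2368 figure from the discussion (nchan=4, nrank=1, 64-byte
 * cache lines).  The pool size is an arbitrary example.
 */
#include <stdio.h>

int main(void)
{
	unsigned int nb_objs = 262144;            /* example: 256K mbufs */
	unsigned int obj_size = 2304;             /* hdr + elt + trailer */
	unsigned int padded   = 2368;             /* size after spreading */
	unsigned long long waste =
		(unsigned long long)(padded - obj_size) * nb_objs;

	printf("extra trailer space: %llu bytes (%.1f MiB)\n",
	       waste, waste / (1024.0 * 1024.0));
	return 0;
}

That is 16 MiB for this example pool, which is the kind of saving the patch recovers on architectures where the spreading has no effect.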
* Re: [dpdk-dev] [PATCH] mempool: fix mempool obj alignment for non x86 2019-12-21 5:06 ` Jerin Jacob @ 2019-12-27 15:54 ` Olivier Matz 0 siblings, 0 replies; 18+ messages in thread From: Olivier Matz @ 2019-12-27 15:54 UTC (permalink / raw) To: Jerin Jacob Cc: Honnappa Nagarahalli, jerinj, dev, thomas, arybchenko, bruce.richardson, konstantin.ananyev, hemant.agrawal, shahafs, Gavin Hu, viktorin, drc, anatoly.burakov, stable, nd Hi, On Sat, Dec 21, 2019 at 10:36:15AM +0530, Jerin Jacob wrote: > On Sat, Dec 21, 2019 at 2:37 AM Honnappa Nagarahalli > <Honnappa.Nagarahalli@arm.com> wrote: > > > > <snip> > > > > > From: Jerin Jacob <jerinj@marvell.com> > > > > > > > > > > The exiting optimize_object_size() function address the memory > > > > > object alignment constraint on x86 for better performance. > > > > > > > > > > Different (Mirco) architecture may have different memory alignment > > > > > constraint for better performance and it not same as the existing > > > > > optimize_object_size() function. Some use, XOR(kind of CRC) scheme > > > > > to enable DRAM channel distribution based on the address and some > > > > > may have a different formula. typo: Mirco -> Micro Maybe the whole sentence can be reworded a bit (I think a word is missing). > > > > If I understand correctly, address interleaving is the characteristic of the > > > memory controller and not the CPU. > > > > For ex: different SoCs using the same Arm architecture might have different > > > memory controllers. So, the solution should not be architecture specific, but > > > SoC specific. > > > > > > Yes. See below. > > > > > > > > -static unsigned optimize_object_size(unsigned obj_size) > > > > > +static unsigned > > > > > +arch_mem_object_align(unsigned obj_size) > > > > > { > > > > > unsigned nrank, nchan; > > > > > unsigned new_obj_size; > > > > > @@ -99,6 +101,13 @@ static unsigned optimize_object_size(unsigned > > > > > obj_size) > > > > > new_obj_size++; > > > > > return new_obj_size * RTE_MEMPOOL_ALIGN; } > > > > > +#else > > > > This applies to add Arm (PPC as well) SoCs which might have different > > > schemes depending on the memory controller. IMO, this should not be > > > architecture specific. > > > > > > I agree in principle. > > > I will summarize the > > > https://www.mail-archive.com/dev@dpdk.org/msg149157.html feedback: > > > > > > 1) For x86 arch, it is architecture-specific > > > 2) For power PC arch, It is architecture-specific > > > 3) For the ARM case, it will be the memory controller specific. > > > 4) For the ARM case, The memory controller is not using the existing > > > x86 arch formula. > > > 5) If it is memory/arch-specific, Can userspace code find the optimal > > > alignment? In the case of octeontx2/arm64, the memory controller does XOR > > > on PA address which userspace code doesn't have much control. > > > > > > This patch address the known case of (1), (2), (4) and (5). (2) can be added to > > > this framework when POWER9 folks want it. > > > > > > We can extend this patch to address (3) if there is a case. Without the actual > > > requirement(If some can share the formula of alignment which is the > > > memory controller specific and it does not come under (4))) then we can > > > create extra layer for the memory controller and abstraction to probe it. > > > Again there is no standard way of probing the memory controller in > > > userspace and we need platform #define, which won't work for distribution > > > build. 
> > > So solution needs to be arch-specific and then fine-tune to memory controller > > > if possible. > > > > > > I can work on creating an extra layer of code if some can provide the details > > > of the memory controller and probing mechanism or this patch be extended > > Inputs for BlueField, DPAAx, ThunderX2 would be helpful. > > Yes. Probably memory controller used in n1sdp SoC also. > > > > > > to support such case if it arises in future. > > > > > > Thoughts? > > How much memory will this save for your platform? Is it affecting performance? Currently, I think Arm-based architectures use the default (nchan=4, nrank=1). The worst case is for object whose size (including mempool header) is 2 cache lines, where it is optimized to 3 cache lines (+50%). Examples for cache lines size = 64: orig optimized 64 -> 64 +0% 128 -> 192 +50% 192 -> 192 +0% 256 -> 320 +25% 320 -> 320 +0% 384 -> 448 +16% ... 2304 -> 2368 +2.7% (~mbuf size) > No performance difference. > > The existing code adding the tailer for each objs. > Additional space/Trailer space will be function of number of objects > in mempool and its obj_size, its alignment and selected > rte_memory_get_nchannel() and rte_memory_get_nrank() > > I will wait for inputs from Bluefield, DPAAx, ThunderX2 and n1sdp(if > any) for any rework on the patch. If there is no performance impact on other supporter Arm-based architectures, I think it is a step in a right direction. > > > > > +static unsigned > > > > > +arch_mem_object_align(unsigned obj_size) { > > > > > + return obj_size; > > > > > +} > > > > > +#endif I'd prefer "unsigned int" for new code. Also, the opening brace should be on a separate line. The documentation of the MEMPOOL_F_NO_SPREAD flag in the .h could be slightly modified, as you did for the comment above arch_mem_object_align(). > > > > > > > > > > struct pagesz_walk_arg { > > > > > int socket_id; > > > > > @@ -234,8 +243,8 @@ rte_mempool_calc_obj_size(uint32_t elt_size, > > > > > uint32_t flags, > > > > > */ > > > > > if ((flags & MEMPOOL_F_NO_SPREAD) == 0) { > > > > > unsigned new_size; > > > > > - new_size = optimize_object_size(sz->header_size + sz- > > > > > >elt_size + > > > > > - sz->trailer_size); > > > > > + new_size = arch_mem_object_align > > > > > + (sz->header_size + sz->elt_size + > > > > > + sz->trailer_size); > > > > > sz->trailer_size = new_size - sz->header_size - sz->elt_size; > > > > > } > > > > > > > > > > -- > > > > > 2.24.1 > > > > ^ permalink raw reply [flat|nested] 18+ messages in thread
* [dpdk-dev] [PATCH v2] mempool: fix mempool obj alignment for non x86 2019-12-19 13:42 [dpdk-dev] [PATCH] mempool: fix mempool obj alignment for non x86 jerinj 2019-12-20 3:26 ` Gavin Hu 2019-12-20 15:55 ` [dpdk-dev] [PATCH] mempool: fix mempool obj alignment for non x86 Honnappa Nagarahalli @ 2020-01-11 13:34 ` jerinj 2020-01-11 17:41 ` Stephen Hemminger 2020-01-13 6:49 ` [dpdk-dev] [PATCH v3] " jerinj 2 siblings, 2 replies; 18+ messages in thread From: jerinj @ 2020-01-11 13:34 UTC (permalink / raw) To: dev Cc: thomas, olivier.matz, arybchenko, bruce.richardson, konstantin.ananyev, hemant.agrawal, shahafs, honnappa.nagarahalli, gavin.hu, viktorin, drc, anatoly.burakov, Jerin Jacob, stable From: Jerin Jacob <jerinj@marvell.com> The existing optimize_object_size() function address the memory object alignment constraint on x86 for better performance. Different (micro) architecture may have different memory alignment constraint for better performance and it not the same as the existing optimize_object_size(). Some use, XOR(kind of CRC) scheme to enable DRAM channel distribution based on the address and some may have a different formula. Introducing arch_mem_object_align() function to abstract the difference between different (micro) architectures to avoid wasting memory for mempool object alignment for the architecture that it is not required to do so. Details on the amount of memory saving: Currently, arm64 based architectures use the default (nchan=4, nrank=1). The worst case is for an object whose size (including mempool header) is 2 cache lines, where it is optimized to 3 cache lines (+50%). Examples for cache lines size = 64: orig optimized 64 -> 64 +0% 128 -> 192 +50% 192 -> 192 +0% 256 -> 320 +25% 320 -> 320 +0% 384 -> 448 +16% ... 2304 -> 2368 +2.7% (~mbuf size) Additional details: https://www.mail-archive.com/dev@dpdk.org/msg149157.html Fixes: af75078fece3 ("first public release") Cc: stable@dpdk.org Signed-off-by: Jerin Jacob <jerinj@marvell.com> Reviewed-by: Gavin Hu <gavin.hu@arm.com> --- v2: - Changed the return type of arch_mem_object_align() to "unsigned int" from "unsigned" to fix the checkpatch issues (Olivier Matz) - Updated the comments for MEMPOOL_F_NO_SPREAD (Olivier Matz) - Update the git comments to share the memory saving details. doc/guides/prog_guide/mempool_lib.rst | 6 +++--- lib/librte_mempool/rte_mempool.c | 17 +++++++++++++---- lib/librte_mempool/rte_mempool.h | 6 +++++- 3 files changed, 21 insertions(+), 8 deletions(-) diff --git a/doc/guides/prog_guide/mempool_lib.rst b/doc/guides/prog_guide/mempool_lib.rst index 3bb84b0a6..eea7a2906 100644 --- a/doc/guides/prog_guide/mempool_lib.rst +++ b/doc/guides/prog_guide/mempool_lib.rst @@ -27,10 +27,10 @@ In debug mode (CONFIG_RTE_LIBRTE_MEMPOOL_DEBUG is enabled), statistics about get from/put in the pool are stored in the mempool structure. Statistics are per-lcore to avoid concurrent access to statistics counters. -Memory Alignment Constraints ----------------------------- +Memory Alignment Constraints on X86 architecture +------------------------------------------------ -Depending on hardware memory configuration, performance can be greatly improved by adding a specific padding between objects. +Depending on hardware memory configuration on X86 architecture, performance can be greatly improved by adding a specific padding between objects. The objective is to ensure that the beginning of each object starts on a different channel and rank in memory so that all channels are equally loaded. 
This is particularly true for packet buffers when doing L3 forwarding or flow classification. diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c index 78d8eb941..1909998e8 100644 --- a/lib/librte_mempool/rte_mempool.c +++ b/lib/librte_mempool/rte_mempool.c @@ -45,6 +45,7 @@ EAL_REGISTER_TAILQ(rte_mempool_tailq) #define CALC_CACHE_FLUSHTHRESH(c) \ ((typeof(c))((c) * CACHE_FLUSHTHRESH_MULTIPLIER)) +#if defined(RTE_ARCH_X86) /* * return the greatest common divisor between a and b (fast algorithm) * @@ -74,12 +75,13 @@ static unsigned get_gcd(unsigned a, unsigned b) } /* - * Depending on memory configuration, objects addresses are spread + * Depending on memory configuration on x86 arch, objects addresses are spread * between channels and ranks in RAM: the pool allocator will add * padding between objects. This function return the new size of the * object. */ -static unsigned optimize_object_size(unsigned obj_size) +static unsigned int +arch_mem_object_align(unsigned int obj_size) { unsigned nrank, nchan; unsigned new_obj_size; @@ -99,6 +101,13 @@ static unsigned optimize_object_size(unsigned obj_size) new_obj_size++; return new_obj_size * RTE_MEMPOOL_ALIGN; } +#else +static unsigned int +arch_mem_object_align(unsigned int obj_size) +{ + return obj_size; +} +#endif struct pagesz_walk_arg { int socket_id; @@ -234,8 +243,8 @@ rte_mempool_calc_obj_size(uint32_t elt_size, uint32_t flags, */ if ((flags & MEMPOOL_F_NO_SPREAD) == 0) { unsigned new_size; - new_size = optimize_object_size(sz->header_size + sz->elt_size + - sz->trailer_size); + new_size = arch_mem_object_align + (sz->header_size + sz->elt_size + sz->trailer_size); sz->trailer_size = new_size - sz->header_size - sz->elt_size; } diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h index f81152af9..9a4411110 100644 --- a/lib/librte_mempool/rte_mempool.h +++ b/lib/librte_mempool/rte_mempool.h @@ -260,7 +260,11 @@ struct rte_mempool { #endif } __rte_cache_aligned; -#define MEMPOOL_F_NO_SPREAD 0x0001 /**< Do not spread among memory channels. */ +#define MEMPOOL_F_NO_SPREAD 0x0001 +/**< Do not spread among memory channels. It is a hint to the library, + * library honor this hint only when, if it is required by the + * (micro) architecture. + */ #define MEMPOOL_F_NO_CACHE_ALIGN 0x0002 /**< Do not align objs on cache lines.*/ #define MEMPOOL_F_SP_PUT 0x0004 /**< Default put is "single-producer".*/ #define MEMPOOL_F_SC_GET 0x0008 /**< Default get is "single-consumer".*/ -- 2.24.1 ^ permalink raw reply [flat|nested] 18+ messages in thread
* Re: [dpdk-dev] [PATCH v2] mempool: fix mempool obj alignment for non x86 2020-01-11 13:34 ` [dpdk-dev] [PATCH v2] " jerinj @ 2020-01-11 17:41 ` Stephen Hemminger 2020-01-13 6:49 ` [dpdk-dev] [PATCH v3] " jerinj 1 sibling, 0 replies; 18+ messages in thread From: Stephen Hemminger @ 2020-01-11 17:41 UTC (permalink / raw) To: jerinj Cc: dev, thomas, olivier.matz, arybchenko, bruce.richardson, konstantin.ananyev, hemant.agrawal, shahafs, honnappa.nagarahalli, gavin.hu, viktorin, drc, anatoly.burakov, stable On Sat, 11 Jan 2020 19:04:10 +0530 <jerinj@marvell.com> wrote: > -#define MEMPOOL_F_NO_SPREAD 0x0001 /**< Do not spread among memory channels. */ > +#define MEMPOOL_F_NO_SPREAD 0x0001 > +/**< Do not spread among memory channels. It is a hint to the library, > + * library honor this hint only when, if it is required by the > + * (micro) architecture. > + */ That text is awkward for me to read. There may be other reasons in future for mempool to ignore the flag. I prefer the minor change to the original comment: > -#define MEMPOOL_F_NO_SPREAD 0x0001 /**< Spreading among memory channels not required. */ ^ permalink raw reply [flat|nested] 18+ messages in thread
* [dpdk-dev] [PATCH v3] mempool: fix mempool obj alignment for non x86 2020-01-11 13:34 ` [dpdk-dev] [PATCH v2] " jerinj 2020-01-11 17:41 ` Stephen Hemminger @ 2020-01-13 6:49 ` jerinj 2020-01-13 9:46 ` [dpdk-dev] [dpdk-stable] " David Marchand 2020-01-14 21:06 ` [dpdk-dev] [PATCH v4] mempool: remove memory wastage on " jerinj 1 sibling, 2 replies; 18+ messages in thread From: jerinj @ 2020-01-13 6:49 UTC (permalink / raw) To: dev Cc: thomas, olivier.matz, arybchenko, bruce.richardson, konstantin.ananyev, hemant.agrawal, shahafs, honnappa.nagarahalli, gavin.hu, viktorin, drc, anatoly.burakov, Jerin Jacob, stable From: Jerin Jacob <jerinj@marvell.com> The existing optimize_object_size() function address the memory object alignment constraint on x86 for better performance. Different (micro) architecture may have different memory alignment constraint for better performance and it not the same as the existing optimize_object_size(). Some use, XOR(kind of CRC) scheme to enable DRAM channel distribution based on the address and some may have a different formula. Introducing arch_mem_object_align() function to abstract the difference between different (micro) architectures to avoid wasting memory for mempool object alignment for the architecture that it is not required to do so. Details on the amount of memory saving: Currently, arm64 based architectures use the default (nchan=4, nrank=1). The worst case is for an object whose size (including mempool header) is 2 cache lines, where it is optimized to 3 cache lines (+50%). Examples for cache lines size = 64: orig optimized 64 -> 64 +0% 128 -> 192 +50% 192 -> 192 +0% 256 -> 320 +25% 320 -> 320 +0% 384 -> 448 +16% ... 2304 -> 2368 +2.7% (~mbuf size) Additional details: https://www.mail-archive.com/dev@dpdk.org/msg149157.html Fixes: af75078fece3 ("first public release") Cc: stable@dpdk.org Signed-off-by: Jerin Jacob <jerinj@marvell.com> Reviewed-by: Gavin Hu <gavin.hu@arm.com> --- v3: - Change comment for MEMPOOL_F_NO_SPREAD flag as " Spreading among memory channels not required." (Stephen Hemminger) v2: - Changed the return type of arch_mem_object_align() to "unsigned int" from "unsigned" to fix the checkpatch issues (Olivier Matz) - Updated the comments for MEMPOOL_F_NO_SPREAD (Olivier Matz) - Update the git comments to share the memory saving details. doc/guides/prog_guide/mempool_lib.rst | 6 +++--- lib/librte_mempool/rte_mempool.c | 17 +++++++++++++---- lib/librte_mempool/rte_mempool.h | 3 ++- 3 files changed, 18 insertions(+), 8 deletions(-) diff --git a/doc/guides/prog_guide/mempool_lib.rst b/doc/guides/prog_guide/mempool_lib.rst index 3bb84b0a6..eea7a2906 100644 --- a/doc/guides/prog_guide/mempool_lib.rst +++ b/doc/guides/prog_guide/mempool_lib.rst @@ -27,10 +27,10 @@ In debug mode (CONFIG_RTE_LIBRTE_MEMPOOL_DEBUG is enabled), statistics about get from/put in the pool are stored in the mempool structure. Statistics are per-lcore to avoid concurrent access to statistics counters. -Memory Alignment Constraints ----------------------------- +Memory Alignment Constraints on X86 architecture +------------------------------------------------ -Depending on hardware memory configuration, performance can be greatly improved by adding a specific padding between objects. +Depending on hardware memory configuration on X86 architecture, performance can be greatly improved by adding a specific padding between objects. 
The objective is to ensure that the beginning of each object starts on a different channel and rank in memory so that all channels are equally loaded. This is particularly true for packet buffers when doing L3 forwarding or flow classification. diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c index 78d8eb941..1909998e8 100644 --- a/lib/librte_mempool/rte_mempool.c +++ b/lib/librte_mempool/rte_mempool.c @@ -45,6 +45,7 @@ EAL_REGISTER_TAILQ(rte_mempool_tailq) #define CALC_CACHE_FLUSHTHRESH(c) \ ((typeof(c))((c) * CACHE_FLUSHTHRESH_MULTIPLIER)) +#if defined(RTE_ARCH_X86) /* * return the greatest common divisor between a and b (fast algorithm) * @@ -74,12 +75,13 @@ static unsigned get_gcd(unsigned a, unsigned b) } /* - * Depending on memory configuration, objects addresses are spread + * Depending on memory configuration on x86 arch, objects addresses are spread * between channels and ranks in RAM: the pool allocator will add * padding between objects. This function return the new size of the * object. */ -static unsigned optimize_object_size(unsigned obj_size) +static unsigned int +arch_mem_object_align(unsigned int obj_size) { unsigned nrank, nchan; unsigned new_obj_size; @@ -99,6 +101,13 @@ static unsigned optimize_object_size(unsigned obj_size) new_obj_size++; return new_obj_size * RTE_MEMPOOL_ALIGN; } +#else +static unsigned int +arch_mem_object_align(unsigned int obj_size) +{ + return obj_size; +} +#endif struct pagesz_walk_arg { int socket_id; @@ -234,8 +243,8 @@ rte_mempool_calc_obj_size(uint32_t elt_size, uint32_t flags, */ if ((flags & MEMPOOL_F_NO_SPREAD) == 0) { unsigned new_size; - new_size = optimize_object_size(sz->header_size + sz->elt_size + - sz->trailer_size); + new_size = arch_mem_object_align + (sz->header_size + sz->elt_size + sz->trailer_size); sz->trailer_size = new_size - sz->header_size - sz->elt_size; } diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h index f81152af9..f48c94def 100644 --- a/lib/librte_mempool/rte_mempool.h +++ b/lib/librte_mempool/rte_mempool.h @@ -260,7 +260,8 @@ struct rte_mempool { #endif } __rte_cache_aligned; -#define MEMPOOL_F_NO_SPREAD 0x0001 /**< Do not spread among memory channels. */ +#define MEMPOOL_F_NO_SPREAD 0x0001 +/**< Spreading among memory channels not required. */ #define MEMPOOL_F_NO_CACHE_ALIGN 0x0002 /**< Do not align objs on cache lines.*/ #define MEMPOOL_F_SP_PUT 0x0004 /**< Default put is "single-producer".*/ #define MEMPOOL_F_SC_GET 0x0008 /**< Default get is "single-consumer".*/ -- 2.24.1 ^ permalink raw reply [flat|nested] 18+ messages in thread
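One way to observe the effect of this change without instrumenting the library is to compare the object layout that rte_mempool_calc_obj_size() computes with and without MEMPOOL_F_NO_SPREAD. A small sketch follows; the element size is an arbitrary example, and with this patch applied the two results should already match on a non-x86 build, while differing on x86.

/*
 * Compare the computed mempool object layout with and without the
 * MEMPOOL_F_NO_SPREAD flag.  Element size is an arbitrary example.
 */
#include <stdio.h>
#include <rte_eal.h>
#include <rte_mempool.h>

int main(int argc, char **argv)
{
	struct rte_mempool_objsz sz_spread, sz_no_spread;
	uint32_t elt = 2048 + 128;   /* example element size, ~mbuf sized */

	if (rte_eal_init(argc, argv) < 0)
		return -1;

	rte_mempool_calc_obj_size(elt, 0, &sz_spread);
	rte_mempool_calc_obj_size(elt, MEMPOOL_F_NO_SPREAD, &sz_no_spread);

	printf("spread:    total %u, trailer %u\n",
	       sz_spread.total_size, sz_spread.trailer_size);
	printf("no spread: total %u, trailer %u\n",
	       sz_no_spread.total_size, sz_no_spread.trailer_size);
	return 0;
}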
* Re: [dpdk-dev] [dpdk-stable] [PATCH v3] mempool: fix mempool obj alignment for non x86 2020-01-13 6:49 ` [dpdk-dev] [PATCH v3] " jerinj @ 2020-01-13 9:46 ` David Marchand 2020-01-13 11:46 ` [dpdk-dev] [EXT] " Jerin Jacob Kollanukkaran 2020-01-14 21:06 ` [dpdk-dev] [PATCH v4] mempool: remove memory wastage on " jerinj 1 sibling, 1 reply; 18+ messages in thread From: David Marchand @ 2020-01-13 9:46 UTC (permalink / raw) To: Jerin Jacob Kollanukkaran Cc: dev, Thomas Monjalon, Olivier Matz, Andrew Rybchenko, Bruce Richardson, Ananyev, Konstantin, Hemant Agrawal, Shahaf Shuler, Honnappa Nagarahalli, Gavin Hu, viktorin, David Christensen, Burakov, Anatoly, dpdk stable, Kevin Traynor, Luca Boccassi On Mon, Jan 13, 2020 at 7:49 AM <jerinj@marvell.com> wrote: > > From: Jerin Jacob <jerinj@marvell.com> > > The existing optimize_object_size() function address the memory object > alignment constraint on x86 for better performance. > > Different (micro) architecture may have different memory alignment > constraint for better performance and it not the same as the existing > optimize_object_size(). > > Some use, XOR(kind of CRC) scheme to enable DRAM channel distribution > based on the address and some may have a different formula. > > Introducing arch_mem_object_align() function to abstract > the difference between different (micro) architectures to avoid > wasting memory for mempool object alignment for the architecture > that it is not required to do so. > > Details on the amount of memory saving: > > Currently, arm64 based architectures use the default (nchan=4, > nrank=1). The worst case is for an object whose size (including mempool > header) is 2 cache lines, where it is optimized to 3 cache lines (+50%). > > Examples for cache lines size = 64: > orig optimized > 64 -> 64 +0% > 128 -> 192 +50% > 192 -> 192 +0% > 256 -> 320 +25% > 320 -> 320 +0% > 384 -> 448 +16% > ... > 2304 -> 2368 +2.7% (~mbuf size) > > Additional details: > https://www.mail-archive.com/dev@dpdk.org/msg149157.html > > Fixes: af75078fece3 ("first public release") Weird to flag this as a problem in this sha1. x86 was the only architecture supported at the time. Either we mark the introduction of new architectures as the point of backport, or we remove this tag and just let Cc: stable@dpdk.org > Cc: stable@dpdk.org It seems more an optimisation than a fix to me, but in any case, the stable maintainers will be the judges. > > Signed-off-by: Jerin Jacob <jerinj@marvell.com> > Reviewed-by: Gavin Hu <gavin.hu@arm.com> > --- > v3: > - Change comment for MEMPOOL_F_NO_SPREAD flag as > " Spreading among memory channels not required." (Stephen Hemminger) > > v2: > - Changed the return type of arch_mem_object_align() to "unsigned int" from > "unsigned" to fix the checkpatch issues (Olivier Matz) > - Updated the comments for MEMPOOL_F_NO_SPREAD (Olivier Matz) > - Update the git comments to share the memory saving details. > > doc/guides/prog_guide/mempool_lib.rst | 6 +++--- > lib/librte_mempool/rte_mempool.c | 17 +++++++++++++---- > lib/librte_mempool/rte_mempool.h | 3 ++- > 3 files changed, 18 insertions(+), 8 deletions(-) > > diff --git a/doc/guides/prog_guide/mempool_lib.rst b/doc/guides/prog_guide/mempool_lib.rst > index 3bb84b0a6..eea7a2906 100644 > --- a/doc/guides/prog_guide/mempool_lib.rst > +++ b/doc/guides/prog_guide/mempool_lib.rst > @@ -27,10 +27,10 @@ In debug mode (CONFIG_RTE_LIBRTE_MEMPOOL_DEBUG is enabled), > statistics about get from/put in the pool are stored in the mempool structure. 
> Statistics are per-lcore to avoid concurrent access to statistics counters. > > -Memory Alignment Constraints > ----------------------------- > +Memory Alignment Constraints on X86 architecture > +------------------------------------------------ Nit: afaics in the docs, x86 is preferred to X86. > > -Depending on hardware memory configuration, performance can be greatly improved by adding a specific padding between objects. > +Depending on hardware memory configuration on X86 architecture, performance can be greatly improved by adding a specific padding between objects. > The objective is to ensure that the beginning of each object starts on a different channel and rank in memory so that all channels are equally loaded. > > This is particularly true for packet buffers when doing L3 forwarding or flow classification. -- David Marchand ^ permalink raw reply [flat|nested] 18+ messages in thread
* Re: [dpdk-dev] [EXT] Re: [dpdk-stable] [PATCH v3] mempool: fix mempool obj alignment for non x86 2020-01-13 9:46 ` [dpdk-dev] [dpdk-stable] " David Marchand @ 2020-01-13 11:46 ` Jerin Jacob Kollanukkaran 0 siblings, 0 replies; 18+ messages in thread From: Jerin Jacob Kollanukkaran @ 2020-01-13 11:46 UTC (permalink / raw) To: David Marchand Cc: dev, Thomas Monjalon, Olivier Matz, Andrew Rybchenko, Bruce Richardson, Ananyev, Konstantin, Hemant Agrawal, Shahaf Shuler, Honnappa Nagarahalli, Gavin Hu, viktorin, David Christensen, Burakov, Anatoly, dpdk stable, Kevin Traynor, Luca Boccassi > -----Original Message----- > From: David Marchand <david.marchand@redhat.com> > Sent: Monday, January 13, 2020 3:17 PM > To: Jerin Jacob Kollanukkaran <jerinj@marvell.com> > Cc: dev <dev@dpdk.org>; Thomas Monjalon <thomas@monjalon.net>; Olivier > Matz <olivier.matz@6wind.com>; Andrew Rybchenko > <arybchenko@solarflare.com>; Bruce Richardson > <bruce.richardson@intel.com>; Ananyev, Konstantin > <konstantin.ananyev@intel.com>; Hemant Agrawal > <hemant.agrawal@nxp.com>; Shahaf Shuler <shahafs@mellanox.com>; > Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>; Gavin Hu > <gavin.hu@arm.com>; viktorin@rehivetech.com; David Christensen > <drc@linux.vnet.ibm.com>; Burakov, Anatoly <anatoly.burakov@intel.com>; > dpdk stable <stable@dpdk.org>; Kevin Traynor <ktraynor@redhat.com>; Luca > Boccassi <bluca@debian.org> > Subject: [EXT] Re: [dpdk-stable] [dpdk-dev] [PATCH v3] mempool: fix mempool > obj alignment for non x86 > > External Email > > ---------------------------------------------------------------------- > On Mon, Jan 13, 2020 at 7:49 AM <jerinj@marvell.com> wrote: > > > > From: Jerin Jacob <jerinj@marvell.com> > > > > The existing optimize_object_size() function address the memory object > > alignment constraint on x86 for better performance. > > > > Different (micro) architecture may have different memory alignment > > constraint for better performance and it not the same as the existing > > optimize_object_size(). > > > > Some use, XOR(kind of CRC) scheme to enable DRAM channel distribution > > based on the address and some may have a different formula. > > > > Introducing arch_mem_object_align() function to abstract the > > difference between different (micro) architectures to avoid wasting > > memory for mempool object alignment for the architecture that it is > > not required to do so. > > > > Details on the amount of memory saving: > > > > Currently, arm64 based architectures use the default (nchan=4, > > nrank=1). The worst case is for an object whose size (including > > mempool > > header) is 2 cache lines, where it is optimized to 3 cache lines (+50%). > > > > Examples for cache lines size = 64: > > orig optimized > > 64 -> 64 +0% > > 128 -> 192 +50% > > 192 -> 192 +0% > > 256 -> 320 +25% > > 320 -> 320 +0% > > 384 -> 448 +16% > > ... > > 2304 -> 2368 +2.7% (~mbuf size) > > > > Additional details: > > https://urldefense.proofpoint.com/v2/url?u=https-3A__www.mail-2Darchiv > > e.com_dev- > 40dpdk.org_msg149157.html&d=DwIFaQ&c=nKjWec2b6R0mOyPaz7xtfQ& > > > r=1DGob4H4rxz6H8uITozGOCa0s5f4wCNtTa4UUKvcsvI&m=VKkiHhyflsqwipCoE > MtdUR > > SXuHSq2neWGqTRmxVfjr8&s=y-LYGZ- > 2MsAfrGo3r5aADQnr2mUcsP7LxXT5XEmTuwE&e= > > > > Fixes: af75078fece3 ("first public release") > > Weird to flag this as a problem in this sha1. > x86 was the only architecture supported at the time. 
> Either we mark the introduction of new architectures as the point of backport, > or we remove this tag and just let Cc: stable@dpdk.org While committing the maintainer can take either one of the decision. No issues/opinion on this from my side. > > > Cc: stable@dpdk.org > > It seems more an optimisation than a fix to me, but in any case, the stable > maintainers will be the judges. OK. No issues. > > > > > > Signed-off-by: Jerin Jacob <jerinj@marvell.com> > > Reviewed-by: Gavin Hu <gavin.hu@arm.com> > > --- > > v3: > > - Change comment for MEMPOOL_F_NO_SPREAD flag as " Spreading among > > memory channels not required." (Stephen Hemminger) > > > > v2: > > - Changed the return type of arch_mem_object_align() to "unsigned int" from > > "unsigned" to fix the checkpatch issues (Olivier Matz) > > - Updated the comments for MEMPOOL_F_NO_SPREAD (Olivier Matz) > > - Update the git comments to share the memory saving details. > > > > doc/guides/prog_guide/mempool_lib.rst | 6 +++--- > > lib/librte_mempool/rte_mempool.c | 17 +++++++++++++---- > > lib/librte_mempool/rte_mempool.h | 3 ++- > > 3 files changed, 18 insertions(+), 8 deletions(-) > > > > diff --git a/doc/guides/prog_guide/mempool_lib.rst > > b/doc/guides/prog_guide/mempool_lib.rst > > index 3bb84b0a6..eea7a2906 100644 > > --- a/doc/guides/prog_guide/mempool_lib.rst > > +++ b/doc/guides/prog_guide/mempool_lib.rst > > @@ -27,10 +27,10 @@ In debug mode > (CONFIG_RTE_LIBRTE_MEMPOOL_DEBUG is > > enabled), statistics about get from/put in the pool are stored in the mempool > structure. > > Statistics are per-lcore to avoid concurrent access to statistics counters. > > > > -Memory Alignment Constraints > > ----------------------------- > > +Memory Alignment Constraints on X86 architecture > > +------------------------------------------------ > > Nit: afaics in the docs, x86 is preferred to X86. > > > > > > -Depending on hardware memory configuration, performance can be greatly > improved by adding a specific padding between objects. > > +Depending on hardware memory configuration on X86 architecture, > performance can be greatly improved by adding a specific padding between > objects. > > The objective is to ensure that the beginning of each object starts on a > different channel and rank in memory so that all channels are equally loaded. > > > > This is particularly true for packet buffers when doing L3 forwarding or flow > classification. > > > -- > David Marchand ^ permalink raw reply [flat|nested] 18+ messages in thread
* [dpdk-dev] [PATCH v4] mempool: remove memory wastage on non x86 2020-01-13 6:49 ` [dpdk-dev] [PATCH v3] " jerinj 2020-01-13 9:46 ` [dpdk-dev] [dpdk-stable] " David Marchand @ 2020-01-14 21:06 ` jerinj 2020-01-16 13:10 ` Jerin Jacob 1 sibling, 1 reply; 18+ messages in thread From: jerinj @ 2020-01-14 21:06 UTC (permalink / raw) To: dev Cc: thomas, olivier.matz, arybchenko, bruce.richardson, konstantin.ananyev, hemant.agrawal, shahafs, honnappa.nagarahalli, gavin.hu, viktorin, drc, anatoly.burakov, Jerin Jacob From: Jerin Jacob <jerinj@marvell.com> The existing optimize_object_size() function address the memory object alignment constraint on x86 for better performance. Different (micro) architecture may have different memory alignment constraint for better performance and it not the same as the existing optimize_object_size(). Some use, XOR(kind of CRC) scheme to enable DRAM channel distribution based on the address and some may have a different formula. Introducing arch_mem_object_align() function to abstract the difference between different (micro) architectures to avoid wasting memory for mempool object alignment for the architecture that it is not required to do so. Details on the amount of memory saving: Currently, arm64 based architectures use the default (nchan=4, nrank=1). The worst case is for an object whose size (including mempool header) is 2 cache lines, where it is optimized to 3 cache lines (+50%). Examples for cache lines size = 64: orig optimized 64 -> 64 +0% 128 -> 192 +50% 192 -> 192 +0% 256 -> 320 +25% 320 -> 320 +0% 384 -> 448 +16% ... 2304 -> 2368 +2.7% (~mbuf size) Additional details: https://www.mail-archive.com/dev@dpdk.org/msg149157.html Signed-off-by: Jerin Jacob <jerinj@marvell.com> Reviewed-by: Gavin Hu <gavin.hu@arm.com> --- v4: - s/X86/x86 in the doc/guides/prog_guide/mempool_lib.rst file (David) - Remove Fixes and CC: stable@dpdk.org as well, as this is just a improvement.(David) - Udpate the patch heading to remove checkpatch warning due to above change. v3: - Change comment for MEMPOOL_F_NO_SPREAD flag as " Spreading among memory channels not required." (Stephen Hemminger) v2: - Changed the return type of arch_mem_object_align() to "unsigned int" from "unsigned" to fix the checkpatch issues (Olivier Matz) - Updated the comments for MEMPOOL_F_NO_SPREAD (Olivier Matz) - Update the git comments to share the memory saving details. doc/guides/prog_guide/mempool_lib.rst | 6 +++--- lib/librte_mempool/rte_mempool.c | 17 +++++++++++++---- lib/librte_mempool/rte_mempool.h | 3 ++- 3 files changed, 18 insertions(+), 8 deletions(-) diff --git a/doc/guides/prog_guide/mempool_lib.rst b/doc/guides/prog_guide/mempool_lib.rst index 3bb84b0a6..f8b430d65 100644 --- a/doc/guides/prog_guide/mempool_lib.rst +++ b/doc/guides/prog_guide/mempool_lib.rst @@ -27,10 +27,10 @@ In debug mode (CONFIG_RTE_LIBRTE_MEMPOOL_DEBUG is enabled), statistics about get from/put in the pool are stored in the mempool structure. Statistics are per-lcore to avoid concurrent access to statistics counters. -Memory Alignment Constraints ----------------------------- +Memory Alignment Constraints on x86 architecture +------------------------------------------------ -Depending on hardware memory configuration, performance can be greatly improved by adding a specific padding between objects. +Depending on hardware memory configuration on X86 architecture, performance can be greatly improved by adding a specific padding between objects. 
The objective is to ensure that the beginning of each object starts on a different channel and rank in memory so that all channels are equally loaded. This is particularly true for packet buffers when doing L3 forwarding or flow classification. diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c index 78d8eb941..1909998e8 100644 --- a/lib/librte_mempool/rte_mempool.c +++ b/lib/librte_mempool/rte_mempool.c @@ -45,6 +45,7 @@ EAL_REGISTER_TAILQ(rte_mempool_tailq) #define CALC_CACHE_FLUSHTHRESH(c) \ ((typeof(c))((c) * CACHE_FLUSHTHRESH_MULTIPLIER)) +#if defined(RTE_ARCH_X86) /* * return the greatest common divisor between a and b (fast algorithm) * @@ -74,12 +75,13 @@ static unsigned get_gcd(unsigned a, unsigned b) } /* - * Depending on memory configuration, objects addresses are spread + * Depending on memory configuration on x86 arch, objects addresses are spread * between channels and ranks in RAM: the pool allocator will add * padding between objects. This function return the new size of the * object. */ -static unsigned optimize_object_size(unsigned obj_size) +static unsigned int +arch_mem_object_align(unsigned int obj_size) { unsigned nrank, nchan; unsigned new_obj_size; @@ -99,6 +101,13 @@ static unsigned optimize_object_size(unsigned obj_size) new_obj_size++; return new_obj_size * RTE_MEMPOOL_ALIGN; } +#else +static unsigned int +arch_mem_object_align(unsigned int obj_size) +{ + return obj_size; +} +#endif struct pagesz_walk_arg { int socket_id; @@ -234,8 +243,8 @@ rte_mempool_calc_obj_size(uint32_t elt_size, uint32_t flags, */ if ((flags & MEMPOOL_F_NO_SPREAD) == 0) { unsigned new_size; - new_size = optimize_object_size(sz->header_size + sz->elt_size + - sz->trailer_size); + new_size = arch_mem_object_align + (sz->header_size + sz->elt_size + sz->trailer_size); sz->trailer_size = new_size - sz->header_size - sz->elt_size; } diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h index f81152af9..f48c94def 100644 --- a/lib/librte_mempool/rte_mempool.h +++ b/lib/librte_mempool/rte_mempool.h @@ -260,7 +260,8 @@ struct rte_mempool { #endif } __rte_cache_aligned; -#define MEMPOOL_F_NO_SPREAD 0x0001 /**< Do not spread among memory channels. */ +#define MEMPOOL_F_NO_SPREAD 0x0001 +/**< Spreading among memory channels not required. */ #define MEMPOOL_F_NO_CACHE_ALIGN 0x0002 /**< Do not align objs on cache lines.*/ #define MEMPOOL_F_SP_PUT 0x0004 /**< Default put is "single-producer".*/ #define MEMPOOL_F_SC_GET 0x0008 /**< Default get is "single-consumer".*/ -- 2.24.1 ^ permalink raw reply [flat|nested] 18+ messages in thread
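A minimal, self-contained sketch of the behaviour this v4 adds (it is not the DPDK source itself: MEMPOOL_ALIGN and the hard-coded nchan/nrank defaults stand in for RTE_MEMPOOL_ALIGN, rte_memory_get_nchannel() and rte_memory_get_nrank(), and the tiny main() stands in for rte_mempool_calc_obj_size()):

#include <stdio.h>

#define MEMPOOL_ALIGN 64u	/* stand-in for RTE_MEMPOOL_ALIGN (cache line) */

static unsigned int get_gcd(unsigned int a, unsigned int b)
{
	/* plain Euclid; equivalent to the DPDK helper for these inputs */
	while (b != 0) {
		unsigned int c = a % b;
		a = b;
		b = c;
	}
	return a;
}

#if defined(RTE_ARCH_X86)
static unsigned int arch_mem_object_align(unsigned int obj_size)
{
	unsigned int nchan = 4, nrank = 1;	/* defaults when EAL reports 0 */
	unsigned int new_obj_size;

	/* round up to cache lines, then grow until coprime with nchan*nrank */
	new_obj_size = (obj_size + MEMPOOL_ALIGN - 1) / MEMPOOL_ALIGN;
	while (get_gcd(new_obj_size, nchan * nrank) != 1)
		new_obj_size++;
	return new_obj_size * MEMPOOL_ALIGN;
}
#else
static unsigned int arch_mem_object_align(unsigned int obj_size)
{
	return obj_size;	/* no spreading padding outside x86 */
}
#endif

int main(void)
{
	/* prints 192 when built with -DRTE_ARCH_X86, 128 otherwise */
	printf("%u\n", arch_mem_object_align(128));
	return 0;
}

Keeping the #else branch as a plain pass-through is what produces the savings listed in the commit message: non-x86 builds simply stop inflating the trailer, at no extra runtime cost.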
* Re: [dpdk-dev] [PATCH v4] mempool: remove memory wastage on non x86 2020-01-14 21:06 ` [dpdk-dev] [PATCH v4] mempool: remove memory wastage on " jerinj @ 2020-01-16 13:10 ` Jerin Jacob 2020-01-20 12:24 ` Olivier Matz 0 siblings, 1 reply; 18+ messages in thread From: Jerin Jacob @ 2020-01-16 13:10 UTC (permalink / raw) To: Jerin Jacob Cc: dpdk-dev, Thomas Monjalon, Olivier Matz, Andrew Rybchenko, Richardson, Bruce, Ananyev, Konstantin, Hemant Agrawal, Shahaf Shuler, Honnappa Nagarahalli, Gavin Hu, Jan Viktorin, David Christensen, Anatoly Burakov On Wed, Jan 15, 2020 at 2:35 AM <jerinj@marvell.com> wrote: > > From: Jerin Jacob <jerinj@marvell.com> > > The existing optimize_object_size() function address the memory object > alignment constraint on x86 for better performance. > > Different (micro) architecture may have different memory alignment > constraint for better performance and it not the same as the existing > optimize_object_size(). > > Some use, XOR(kind of CRC) scheme to enable DRAM channel distribution > based on the address and some may have a different formula. > > Introducing arch_mem_object_align() function to abstract > the difference between different (micro) architectures to avoid > wasting memory for mempool object alignment for the architecture > that it is not required to do so. > > Details on the amount of memory saving: > > Currently, arm64 based architectures use the default (nchan=4, > nrank=1). The worst case is for an object whose size (including mempool > header) is 2 cache lines, where it is optimized to 3 cache lines (+50%). > > Examples for cache lines size = 64: > orig optimized > 64 -> 64 +0% > 128 -> 192 +50% > 192 -> 192 +0% > 256 -> 320 +25% > 320 -> 320 +0% > 384 -> 448 +16% > ... > 2304 -> 2368 +2.7% (~mbuf size) > > Additional details: > https://www.mail-archive.com/dev@dpdk.org/msg149157.html > > Signed-off-by: Jerin Jacob <jerinj@marvell.com> > Reviewed-by: Gavin Hu <gavin.hu@arm.com> Ping for merge. > --- > v4: > > - s/X86/x86 in the doc/guides/prog_guide/mempool_lib.rst file (David) > - Remove Fixes and CC: stable@dpdk.org as well, as this is just a > improvement.(David) > - Udpate the patch heading to remove checkpatch warning due to above > change. > > v3: > - Change comment for MEMPOOL_F_NO_SPREAD flag as > " Spreading among memory channels not required." (Stephen Hemminger) > > v2: > - Changed the return type of arch_mem_object_align() to "unsigned int" from > "unsigned" to fix the checkpatch issues (Olivier Matz) > - Updated the comments for MEMPOOL_F_NO_SPREAD (Olivier Matz) > - Update the git comments to share the memory saving details. > > doc/guides/prog_guide/mempool_lib.rst | 6 +++--- > lib/librte_mempool/rte_mempool.c | 17 +++++++++++++---- > lib/librte_mempool/rte_mempool.h | 3 ++- > 3 files changed, 18 insertions(+), 8 deletions(-) > > diff --git a/doc/guides/prog_guide/mempool_lib.rst b/doc/guides/prog_guide/mempool_lib.rst > index 3bb84b0a6..f8b430d65 100644 > --- a/doc/guides/prog_guide/mempool_lib.rst > +++ b/doc/guides/prog_guide/mempool_lib.rst > @@ -27,10 +27,10 @@ In debug mode (CONFIG_RTE_LIBRTE_MEMPOOL_DEBUG is enabled), > statistics about get from/put in the pool are stored in the mempool structure. > Statistics are per-lcore to avoid concurrent access to statistics counters. 
> > -Memory Alignment Constraints > ----------------------------- > +Memory Alignment Constraints on x86 architecture > +------------------------------------------------ > > -Depending on hardware memory configuration, performance can be greatly improved by adding a specific padding between objects. > +Depending on hardware memory configuration on X86 architecture, performance can be greatly improved by adding a specific padding between objects. > The objective is to ensure that the beginning of each object starts on a different channel and rank in memory so that all channels are equally loaded. > > This is particularly true for packet buffers when doing L3 forwarding or flow classification. > diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c > index 78d8eb941..1909998e8 100644 > --- a/lib/librte_mempool/rte_mempool.c > +++ b/lib/librte_mempool/rte_mempool.c > @@ -45,6 +45,7 @@ EAL_REGISTER_TAILQ(rte_mempool_tailq) > #define CALC_CACHE_FLUSHTHRESH(c) \ > ((typeof(c))((c) * CACHE_FLUSHTHRESH_MULTIPLIER)) > > +#if defined(RTE_ARCH_X86) > /* > * return the greatest common divisor between a and b (fast algorithm) > * > @@ -74,12 +75,13 @@ static unsigned get_gcd(unsigned a, unsigned b) > } > > /* > - * Depending on memory configuration, objects addresses are spread > + * Depending on memory configuration on x86 arch, objects addresses are spread > * between channels and ranks in RAM: the pool allocator will add > * padding between objects. This function return the new size of the > * object. > */ > -static unsigned optimize_object_size(unsigned obj_size) > +static unsigned int > +arch_mem_object_align(unsigned int obj_size) > { > unsigned nrank, nchan; > unsigned new_obj_size; > @@ -99,6 +101,13 @@ static unsigned optimize_object_size(unsigned obj_size) > new_obj_size++; > return new_obj_size * RTE_MEMPOOL_ALIGN; > } > +#else > +static unsigned int > +arch_mem_object_align(unsigned int obj_size) > +{ > + return obj_size; > +} > +#endif > > struct pagesz_walk_arg { > int socket_id; > @@ -234,8 +243,8 @@ rte_mempool_calc_obj_size(uint32_t elt_size, uint32_t flags, > */ > if ((flags & MEMPOOL_F_NO_SPREAD) == 0) { > unsigned new_size; > - new_size = optimize_object_size(sz->header_size + sz->elt_size + > - sz->trailer_size); > + new_size = arch_mem_object_align > + (sz->header_size + sz->elt_size + sz->trailer_size); > sz->trailer_size = new_size - sz->header_size - sz->elt_size; > } > > diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h > index f81152af9..f48c94def 100644 > --- a/lib/librte_mempool/rte_mempool.h > +++ b/lib/librte_mempool/rte_mempool.h > @@ -260,7 +260,8 @@ struct rte_mempool { > #endif > } __rte_cache_aligned; > > -#define MEMPOOL_F_NO_SPREAD 0x0001 /**< Do not spread among memory channels. */ > +#define MEMPOOL_F_NO_SPREAD 0x0001 > +/**< Spreading among memory channels not required. */ > #define MEMPOOL_F_NO_CACHE_ALIGN 0x0002 /**< Do not align objs on cache lines.*/ > #define MEMPOOL_F_SP_PUT 0x0004 /**< Default put is "single-producer".*/ > #define MEMPOOL_F_SC_GET 0x0008 /**< Default get is "single-consumer".*/ > -- > 2.24.1 > ^ permalink raw reply [flat|nested] 18+ messages in thread
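The orig/optimized figures quoted above follow directly from the arm64 defaults named in the commit message (nchan=4, nrank=1) and a 64-byte cache line: gcd(lines, 4) == 1 exactly when the size in cache lines is odd, so even sizes gain one line of padding. A few lines of standalone C (an illustration, not DPDK code) reproduce the table:

#include <stdio.h>

int main(void)
{
	unsigned int sizes[] = { 64, 128, 192, 256, 320, 384, 2304 };
	unsigned int i, lines;

	for (i = 0; i < sizeof(sizes) / sizeof(sizes[0]); i++) {
		/* round up to cache lines, then make the count odd */
		lines = (sizes[i] + 63) / 64;
		if (lines % 2 == 0)
			lines++;
		printf("%4u -> %4u\n", sizes[i], lines * 64);
	}
	return 0;	/* prints the orig/optimized pairs shown above */
}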
* Re: [dpdk-dev] [PATCH v4] mempool: remove memory wastage on non x86 2020-01-16 13:10 ` Jerin Jacob @ 2020-01-20 12:24 ` Olivier Matz 2020-01-20 12:29 ` Thomas Monjalon 0 siblings, 1 reply; 18+ messages in thread From: Olivier Matz @ 2020-01-20 12:24 UTC (permalink / raw) To: Jerin Jacob Cc: Jerin Jacob, dpdk-dev, Thomas Monjalon, Andrew Rybchenko, Richardson, Bruce, Ananyev, Konstantin, Hemant Agrawal, Shahaf Shuler, Honnappa Nagarahalli, Gavin Hu, Jan Viktorin, David Christensen, Anatoly Burakov On Thu, Jan 16, 2020 at 06:40:23PM +0530, Jerin Jacob wrote: > On Wed, Jan 15, 2020 at 2:35 AM <jerinj@marvell.com> wrote: > > > > From: Jerin Jacob <jerinj@marvell.com> > > > > The existing optimize_object_size() function address the memory object > > alignment constraint on x86 for better performance. > > > > Different (micro) architecture may have different memory alignment > > constraint for better performance and it not the same as the existing > > optimize_object_size(). > > > > Some use, XOR(kind of CRC) scheme to enable DRAM channel distribution > > based on the address and some may have a different formula. > > > > Introducing arch_mem_object_align() function to abstract > > the difference between different (micro) architectures to avoid > > wasting memory for mempool object alignment for the architecture > > that it is not required to do so. > > > > Details on the amount of memory saving: > > > > Currently, arm64 based architectures use the default (nchan=4, > > nrank=1). The worst case is for an object whose size (including mempool > > header) is 2 cache lines, where it is optimized to 3 cache lines (+50%). > > > > Examples for cache lines size = 64: > > orig optimized > > 64 -> 64 +0% > > 128 -> 192 +50% > > 192 -> 192 +0% > > 256 -> 320 +25% > > 320 -> 320 +0% > > 384 -> 448 +16% > > ... > > 2304 -> 2368 +2.7% (~mbuf size) > > > > Additional details: > > https://www.mail-archive.com/dev@dpdk.org/msg149157.html > > > > Signed-off-by: Jerin Jacob <jerinj@marvell.com> > > Reviewed-by: Gavin Hu <gavin.hu@arm.com> > > Ping for merge. Acked-by: Olivier Matz <olivier.matz@6wind.com> ^ permalink raw reply [flat|nested] 18+ messages in thread
* Re: [dpdk-dev] [PATCH v4] mempool: remove memory wastage on non x86 2020-01-20 12:24 ` Olivier Matz @ 2020-01-20 12:29 ` Thomas Monjalon 0 siblings, 0 replies; 18+ messages in thread From: Thomas Monjalon @ 2020-01-20 12:29 UTC (permalink / raw) To: Jerin Jacob Cc: Jerin Jacob, dev, dpdk-dev, Andrew Rybchenko, Richardson, Bruce, Ananyev, Konstantin, Hemant Agrawal, Shahaf Shuler, Honnappa Nagarahalli, Gavin Hu, Jan Viktorin, David Christensen, Anatoly Burakov, Olivier Matz 20/01/2020 13:24, Olivier Matz: > On Thu, Jan 16, 2020 at 06:40:23PM +0530, Jerin Jacob wrote: > > On Wed, Jan 15, 2020 at 2:35 AM <jerinj@marvell.com> wrote: > > > > > > From: Jerin Jacob <jerinj@marvell.com> > > > > > > The existing optimize_object_size() function address the memory object > > > alignment constraint on x86 for better performance. > > > > > > Different (micro) architecture may have different memory alignment > > > constraint for better performance and it not the same as the existing > > > optimize_object_size(). > > > > > > Some use, XOR(kind of CRC) scheme to enable DRAM channel distribution > > > based on the address and some may have a different formula. > > > > > > Introducing arch_mem_object_align() function to abstract > > > the difference between different (micro) architectures to avoid > > > wasting memory for mempool object alignment for the architecture > > > that it is not required to do so. > > > > > > Details on the amount of memory saving: > > > > > > Currently, arm64 based architectures use the default (nchan=4, > > > nrank=1). The worst case is for an object whose size (including mempool > > > header) is 2 cache lines, where it is optimized to 3 cache lines (+50%). > > > > > > Examples for cache lines size = 64: > > > orig optimized > > > 64 -> 64 +0% > > > 128 -> 192 +50% > > > 192 -> 192 +0% > > > 256 -> 320 +25% > > > 320 -> 320 +0% > > > 384 -> 448 +16% > > > ... > > > 2304 -> 2368 +2.7% (~mbuf size) > > > > > > Additional details: > > > https://www.mail-archive.com/dev@dpdk.org/msg149157.html > > > > > > Signed-off-by: Jerin Jacob <jerinj@marvell.com> > > > Reviewed-by: Gavin Hu <gavin.hu@arm.com> > > > > Ping for merge. > > Acked-by: Olivier Matz <olivier.matz@6wind.com> Applied, thanks ^ permalink raw reply [flat|nested] 18+ messages in thread
Thread overview: 18+ messages
2019-12-19 13:42 [dpdk-dev] [PATCH] mempool: fix mempool obj alignment for non x86 jerinj
2019-12-20  3:26 ` Gavin Hu
2019-12-20  3:45 ` Jerin Jacob
2019-12-20 10:54 ` [dpdk-dev] [PATCH] mempool: fix mempool obj alignment for nonx86 Morten Brørup
2019-12-20 15:55 ` [dpdk-dev] [PATCH] mempool: fix mempool obj alignment for non x86 Honnappa Nagarahalli
2019-12-20 16:55 ` Jerin Jacob
2019-12-20 21:07 ` Honnappa Nagarahalli
2019-12-21  5:06 ` Jerin Jacob
2019-12-27 15:54 ` Olivier Matz
2020-01-11 13:34 ` [dpdk-dev] [PATCH v2] " jerinj
2020-01-11 17:41 ` Stephen Hemminger
2020-01-13  6:49 ` [dpdk-dev] [PATCH v3] " jerinj
2020-01-13  9:46 ` [dpdk-dev] [dpdk-stable] " David Marchand
2020-01-13 11:46 ` [dpdk-dev] [EXT] " Jerin Jacob Kollanukkaran
2020-01-14 21:06 ` [dpdk-dev] [PATCH v4] mempool: remove memory wastage on " jerinj
2020-01-16 13:10 ` Jerin Jacob
2020-01-20 12:24 ` Olivier Matz
2020-01-20 12:29 ` Thomas Monjalon