From: "Wiles, Keith" <keith.wiles@intel.com>
To: Olivier MATZ <olivier.matz@6wind.com>, "dev@dpdk.org" <dev@dpdk.org>
Subject: Re: [dpdk-dev] [PATCH v2] mempool: reduce rte_mempool structure size
Date: Wed, 10 Feb 2016 18:01:29 +0000
Message-ID: <069176D7-5C83-42E0-B3EC-3119469592BA@intel.com>
>>Hi Keith,
>>
>>Thank you for adding the RTE_NEXT_ABI. I think this is the way
>>described in the process. Your changes will be available in next
>>version (16.04) for people compiling with RTE_NEXT_ABI=y, and in
>>16.07 without the option (I'm just surprised that RTE_NEXT_ABI=y is
>>the default in the configs...).
>>
>>I think a deprecation notice should also be added in this commit
>>in doc/guides/rel_notes/deprecation.rst.
>
>Will add the text.
>>
>>Please also find comments below.
>>
>>On 02/09/2016 06:30 PM, Keith Wiles wrote:
>>
>>> diff --git a/config/defconfig_x86_64-native-linuxapp-gcc b/config/defconfig_x86_64-native-linuxapp-gcc
>>> index 60baf5b..02e9ace 100644
>>> --- a/config/defconfig_x86_64-native-linuxapp-gcc
>>> +++ b/config/defconfig_x86_64-native-linuxapp-gcc
>>> @@ -40,3 +40,8 @@ CONFIG_RTE_ARCH_64=y
>>>
>>> CONFIG_RTE_TOOLCHAIN="gcc"
>>> CONFIG_RTE_TOOLCHAIN_GCC=y
>>> +CONFIG_RTE_BUILD_SHARED_LIB=y
>>> +CONFIG_RTE_NEXT_ABI=n
>>> +CONFIG_RTE_EAL_IGB_UIO=n
>>> +CONFIG_RTE_LIBRTE_KNI=n
>>> +CONFIG_RTE_KNI_KMOD=n
>
>Hmm, not sure where this came from, but will remove it.
I think this came from the ABI checker I ran; the tool should have left the repo in its original state.
>>
>>I think this should not be part of the patch.
>>
>>> @@ -672,6 +704,24 @@ rte_mempool_count(const struct rte_mempool *mp)
>>> static unsigned
>>> rte_mempool_dump_cache(FILE *f, const struct rte_mempool *mp)
>>> {
>>> +#ifdef RTE_NEXT_ABI
>>> + unsigned lcore_id;
>>> + unsigned count = 0;
>>> + unsigned cache_count;
>>> +
>>> + fprintf(f, " cache infos:\n");
>>> + fprintf(f, " cache_size=%"PRIu32"\n", mp->cache_size);
>>> + if (mp->cache_size == 0)
>>> + return count;
>>> +
>>> + for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
>>> + cache_count = mp->local_cache[lcore_id].len;
>>> + fprintf(f, " cache_count[%u]=%u\n", lcore_id, cache_count);
>>> + count += cache_count;
>>> + }
>>> + fprintf(f, " total_cache_count=%u\n", count);
>>> + return count;
>>> +#else
>>> #if RTE_MEMPOOL_CACHE_MAX_SIZE > 0
>>> unsigned lcore_id;
>>> unsigned count = 0;
>>
>>I think in this case we could avoid duplicating the code, without
>>being unclear, by using the proper #ifdefs:
>
>I was struggling with how it should be done. I like to see clear ifdefs and be able to see the complete code for a given case. In these cases I wanted to make it simple to remove the code quickly by just deleting lines instead of editing lines. I will follow your suggestion.
>>
>>#if RTE_MEMPOOL_CACHE_MAX_SIZE > 0 || defined(RTE_NEXT_ABI)
>> /* common code */
>>#ifdef RTE_NEXT_ABI
>> if (mp->cache_size == 0)
>> return count;
>>#endif
>> /* common code */
>>#else
>>...
>>#endif
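
Something along these lines is what I will try for rte_mempool_dump_cache(), based on your suggestion and the hunk above. This is only a rough sketch, not compile-tested, and the #else branch below is just a placeholder for whatever the current no-cache code does today:

static unsigned
rte_mempool_dump_cache(FILE *f, const struct rte_mempool *mp)
{
#if RTE_MEMPOOL_CACHE_MAX_SIZE > 0 || defined(RTE_NEXT_ABI)
        unsigned lcore_id;
        unsigned count = 0;
        unsigned cache_count;

        fprintf(f, "  cache infos:\n");
        fprintf(f, "    cache_size=%"PRIu32"\n", mp->cache_size);
#ifdef RTE_NEXT_ABI
        /* with RTE_NEXT_ABI the cache can be disabled at runtime */
        if (mp->cache_size == 0)
                return count;
#endif

        for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
                cache_count = mp->local_cache[lcore_id].len;
                fprintf(f, "    cache_count[%u]=%u\n", lcore_id, cache_count);
                count += cache_count;
        }
        fprintf(f, "    total_cache_count=%u\n", count);
        return count;
#else
        /* placeholder: keep whatever the existing no-cache branch does */
        RTE_SET_USED(f);
        RTE_SET_USED(mp);
        return 0;
#endif
}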
>>
>>
>>> @@ -755,6 +806,26 @@ mempool_audit_cookies(const struct rte_mempool *mp)
>>> #define mempool_audit_cookies(mp) do {} while(0)
>>> #endif
>>>
>>> +#ifdef RTE_NEXT_ABI
>>> +/* check cookies before and after objects */
>>> +static void
>>> +mempool_audit_cache(const struct rte_mempool *mp)
>>> +{
>>> + /* check cache size consistency */
>>> + unsigned lcore_id;
>>> +
>>> + if (mp->cache_size == 0)
>>> + return;
>>> +
>>> + for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
>>> + if (mp->local_cache[lcore_id].len > mp->cache_flushthresh) {
>>> + RTE_LOG(CRIT, MEMPOOL, "badness on cache[%u]\n",
>>> + lcore_id);
>>> + rte_panic("MEMPOOL: invalid cache len\n");
>>> + }
>>> + }
>>> +}
>>> +#else
>>
>>same here
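
Right, mempool_audit_cache() can be merged the same way. Roughly (again just a sketch; I am assuming the existing empty stub looks like the mempool_audit_cookies() one above, and the cache_size == 0 early return may need to stay under #ifdef RTE_NEXT_ABI if the current code does not have it):

#if RTE_MEMPOOL_CACHE_MAX_SIZE > 0 || defined(RTE_NEXT_ABI)
static void
mempool_audit_cache(const struct rte_mempool *mp)
{
        /* check cache size consistency */
        unsigned lcore_id;

        if (mp->cache_size == 0)
                return;

        for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
                if (mp->local_cache[lcore_id].len > mp->cache_flushthresh) {
                        RTE_LOG(CRIT, MEMPOOL, "badness on cache[%u]\n",
                                lcore_id);
                        rte_panic("MEMPOOL: invalid cache len\n");
                }
        }
}
#else
#define mempool_audit_cache(mp) do {} while(0)
#endif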
>>
>>> diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
>>> index 6e2390a..fc9b595 100644
>>> --- a/lib/librte_mempool/rte_mempool.h
>>> +++ b/lib/librte_mempool/rte_mempool.h
>>> @@ -95,6 +95,19 @@ struct rte_mempool_debug_stats {
>>> } __rte_cache_aligned;
>>> #endif
>>>
>>> +#ifdef RTE_NEXT_ABI
>>> +/**
>>> + * A structure that stores a per-core object cache.
>>> + */
>>> +struct rte_mempool_cache {
>>> + unsigned len; /**< Cache len */
>>> + /*
>>> + * Cache is allocated to this size to allow it to overflow in certain
>>> + * cases to avoid needless emptying of cache.
>>> + */
>>> + void *objs[RTE_MEMPOOL_CACHE_MAX_SIZE * 3]; /**< Cache objects */
>>> +} __rte_cache_aligned;
>>> +#else
>>
>>same here
>>
>>
>>
>>> @@ -755,19 +793,25 @@ static inline void __attribute__((always_inline))
>>> __mempool_put_bulk(struct rte_mempool *mp, void * const *obj_table,
>>> unsigned n, int is_mp)
>>> {
>>> +#ifndef RTE_NEXT_ABI /* Note: ifndef */
>>> #if RTE_MEMPOOL_CACHE_MAX_SIZE > 0
>>> +#endif /* RTE_NEXT_ABI */
>>> struct rte_mempool_cache *cache;
>>> uint32_t index;
>>> void **cache_objs;
>>> unsigned lcore_id = rte_lcore_id();
>>> uint32_t cache_size = mp->cache_size;
>>> uint32_t flushthresh = mp->cache_flushthresh;
>>> +#ifndef RTE_NEXT_ABI /* Note: ifndef */
>>> #endif /* RTE_MEMPOOL_CACHE_MAX_SIZE > 0 */
>>> +#endif /* RTE_NEXT_ABI */
>>
>>this looks strange... I think it does not work properly.
>>Why not
>>#if RTE_MEMPOOL_CACHE_MAX_SIZE > 0 || defined(RTE_NEXT_ABI)
>
>Yes, it is strange :-(
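
What I will do instead is put the declarations (and later the fast path up to the ring_enqueue:/ring_dequeue: labels) under one combined guard, so there is a single #if/#endif pair per block rather than the nested #ifndef/#endif pairs. A fragment of what I have in mind, not compile-tested:

#if RTE_MEMPOOL_CACHE_MAX_SIZE > 0 || defined(RTE_NEXT_ABI)
        struct rte_mempool_cache *cache;
        uint32_t index;
        void **cache_objs;
        unsigned lcore_id = rte_lcore_id();
        uint32_t cache_size = mp->cache_size;
        uint32_t flushthresh = mp->cache_flushthresh;
#endif /* RTE_MEMPOOL_CACHE_MAX_SIZE > 0 || RTE_NEXT_ABI */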
>>
>>> /* increment stat now, adding in mempool always success */
>>> __MEMPOOL_STAT_ADD(mp, put, n);
>>>
>>> +#ifndef RTE_NEXT_ABI /* Note: ifndef */
>>> #if RTE_MEMPOOL_CACHE_MAX_SIZE > 0
>>> +#endif /* RTE_NEXT_ABI */
>>> /* cache is not enabled or single producer or non-EAL thread */
>>> if (unlikely(cache_size == 0 || is_mp == 0 ||
>>> lcore_id >= RTE_MAX_LCORE))
>>> @@ -802,7 +846,9 @@ __mempool_put_bulk(struct rte_mempool *mp, void * const *obj_table,
>>> return;
>>>
>>> ring_enqueue:
>>> +#ifndef RTE_NEXT_ABI /* Note: ifndef */
>>> #endif /* RTE_MEMPOOL_CACHE_MAX_SIZE > 0 */
>>> +#endif /* RTE_NEXT_ABI */
>>>
>>> /* push remaining objects in ring */
>>> #ifdef RTE_LIBRTE_MEMPOOL_DEBUG
>>> @@ -946,7 +992,9 @@ __mempool_get_bulk(struct rte_mempool *mp, void **obj_table,
>>> unsigned n, int is_mc)
>>> {
>>> int ret;
>>> +#ifndef RTE_NEXT_ABI /* Note: ifndef */
>>> #if RTE_MEMPOOL_CACHE_MAX_SIZE > 0
>>> +#endif /* RTE_NEXT_ABI */
>>> struct rte_mempool_cache *cache;
>>> uint32_t index, len;
>>> void **cache_objs;
>>> @@ -992,7 +1040,9 @@ __mempool_get_bulk(struct rte_mempool *mp, void **obj_table,
>>> return 0;
>>>
>>> ring_dequeue:
>>> +#ifndef RTE_NEXT_ABI /* Note: ifndef */
>>> #endif /* RTE_MEMPOOL_CACHE_MAX_SIZE > 0 */
>>> +#endif /* RTE_NEXT_ABI */
>>>
>>> /* get remaining objects from ring */
>>> if (is_mc)
>>
>>Same in those cases.
>>
>>
>>
>>Regards,
>>Olivier
>>
>
>
>Regards,
>Keith
Regards,
Keith