From: Olivier MATZ <olivier.matz@6wind.com>
To: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Cc: dev@dpdk.org, thomas.monjalon@6wind.com,
bruce.richardson@intel.com, konstantin.ananyev@intel.com
Subject: Re: [dpdk-dev] [PATCH v2] mempool: replace c memcpy code semantics with optimized rte_memcpy
Date: Tue, 31 May 2016 23:05:30 +0200 [thread overview]
Message-ID: <574DFC9A.2050304@6wind.com> (raw)
In-Reply-To: <20160531125822.GA10995@localhost.localdomain>
Hi Jerin,
>>> /* Add elements back into the cache */
>>> - for (index = 0; index < n; ++index, obj_table++)
>>> - cache_objs[index] = *obj_table;
>>> + rte_memcpy(&cache_objs[0], obj_table, sizeof(void *) * n);
>>>
>>> cache->len += n;
>>>
>>>
>>
>> I also checked in the get_bulk() function, which looks like that:
>>
>> /* Now fill in the response ... */
>> for (index = 0, len = cache->len - 1;
>> index < n;
>> ++index, len--, obj_table++)
>> *obj_table = cache_objs[len];
>>
>> I think we could replace it by something like:
>>
>> rte_memcpy(obj_table, &cache_objs[len - n], sizeof(void *) * n);
>>
>> The only difference is that it won't reverse the pointers in the
>> table, but I don't see any problem with that.
>>
>> What do you think?
>
> In true sense, it will _not_ be LIFO. Not sure about cache usage implications
> on the specific use cases.
Today, the object pointers are reversed only in the get(). It means
that this code:

  rte_mempool_get_bulk(mp, table, 4);
  for (i = 0; i < 4; i++)
          printf("obj = %p\n", table[i]);
  rte_mempool_put_bulk(mp, table, 4);
  printf("-----\n");
  rte_mempool_get_bulk(mp, table, 4);
  for (i = 0; i < 4; i++)
          printf("obj = %p\n", table[i]);
  rte_mempool_put_bulk(mp, table, 4);
prints:
addr1
addr2
addr3
addr4
-----
addr4
addr3
addr2
addr1
Which is quite strange.
I don't think it would be an issue to replace the loop with an
rte_memcpy(): it may increase the copy speed, and it would be more
consistent with the put().
Olivier
Thread overview: 32+ messages
2016-05-24 14:50 [dpdk-dev] [PATCH] mbuf: " Jerin Jacob
2016-05-24 14:59 ` Olivier Matz
2016-05-24 15:17 ` Jerin Jacob
2016-05-27 10:24 ` Hunt, David
2016-05-27 11:42 ` Jerin Jacob
2016-05-27 15:05 ` Thomas Monjalon
2016-05-30 8:44 ` Olivier Matz
2016-05-27 13:45 ` Hunt, David
2016-06-24 15:56 ` Hunt, David
2016-06-24 16:02 ` Olivier Matz
2016-05-26 8:07 ` [dpdk-dev] [PATCH v2] mempool: " Jerin Jacob
2016-05-30 8:45 ` Olivier Matz
2016-05-31 12:58 ` Jerin Jacob
2016-05-31 21:05 ` Olivier MATZ [this message]
2016-06-01 7:00 ` Jerin Jacob
2016-06-02 7:36 ` Olivier MATZ
2016-06-02 9:39 ` Jerin Jacob
2016-06-02 21:16 ` Olivier MATZ
2016-06-03 7:02 ` Jerin Jacob
2016-06-17 10:40 ` Olivier Matz
2016-06-24 16:04 ` Olivier Matz
2016-06-30 9:41 ` Thomas Monjalon
2016-06-30 11:38 ` Jerin Jacob
2016-06-30 12:16 ` [dpdk-dev] [PATCH v3] " Jerin Jacob
2016-06-30 17:28 ` Thomas Monjalon
2016-07-05 8:43 ` Ferruh Yigit
2016-07-05 11:32 ` Yuanhan Liu
2016-07-05 13:13 ` Jerin Jacob
2016-07-05 13:42 ` Yuanhan Liu
2016-07-05 14:09 ` Ferruh Yigit
2016-07-06 16:21 ` Ferruh Yigit
2016-07-07 13:51 ` Ferruh Yigit