From: Hemant Agrawal <hemant.agrawal@nxp.com>
To: "Ananyev, Konstantin" <konstantin.ananyev@intel.com>,
"dev@dpdk.org" <dev@dpdk.org>
Cc: "ashish.jain@nxp.com" <ashish.jain@nxp.com>
Subject: Re: [dpdk-dev] [PATCH] examples/ip_reassembly: use pktmbuf to create pool
Date: Thu, 12 Oct 2017 16:32:52 +0530 [thread overview]
Message-ID: <d2b60c33-9273-086b-ac03-5c64b71418fe@nxp.com> (raw)
In-Reply-To: <2601191342CEEE43887BDE71AB9772585FAA80B6@IRSMSX103.ger.corp.intel.com>
On 10/12/2017 4:08 PM, Ananyev, Konstantin wrote:
>
>
>> -----Original Message-----
>> From: Hemant Agrawal [mailto:hemant.agrawal@nxp.com]
>> Sent: Wednesday, September 6, 2017 10:34 AM
>> To: dev@dpdk.org
>> Cc: Ananyev, Konstantin <konstantin.ananyev@intel.com>; ashish.jain@nxp.com
>> Subject: [PATCH] examples/ip_reassembly: use pktmbuf to create pool
>>
>> From: Ashish Jain <ashish.jain@nxp.com>
>>
>> Replace the use of rte_mempool_create() with the helper provided in
>> librte_mbuf: rte_pktmbuf_pool_create().
>> This is the preferred way to create a mbuf pool; otherwise the pool
>> may not work on implementations using a HW buffer pool.
>>
>> Signed-off-by: Ashish Jain <ashish.jain@nxp.com>
>> ---
>> examples/ip_reassembly/main.c | 13 ++++++-------
>> 1 file changed, 6 insertions(+), 7 deletions(-)
>>
>> diff --git a/examples/ip_reassembly/main.c b/examples/ip_reassembly/main.c
>> index e62636c..20caeb3 100644
>> --- a/examples/ip_reassembly/main.c
>> +++ b/examples/ip_reassembly/main.c
>> @@ -84,8 +84,7 @@
>> #define MAX_JUMBO_PKT_LEN 9600
>>
>> #define BUF_SIZE RTE_MBUF_DEFAULT_DATAROOM
>> -#define MBUF_SIZE \
>> - (BUF_SIZE + sizeof(struct rte_mbuf) + RTE_PKTMBUF_HEADROOM)
>> +#define MBUF_DATA_SIZE RTE_MBUF_DEFAULT_BUF_SIZE
>>
>> #define NB_MBUF 8192
>>
>> @@ -909,11 +908,11 @@ setup_queue_tbl(struct rx_queue *rxq, uint32_t lcore, uint32_t queue)
>>
>> snprintf(buf, sizeof(buf), "mbuf_pool_%u_%u", lcore, queue);
>>
>> - if ((rxq->pool = rte_mempool_create(buf, nb_mbuf, MBUF_SIZE, 0,
>> - sizeof(struct rte_pktmbuf_pool_private),
>> - rte_pktmbuf_pool_init, NULL, rte_pktmbuf_init, NULL,
>> - socket, MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET)) == NULL) {
>> - RTE_LOG(ERR, IP_RSMBL, "mempool_create(%s) failed", buf);
>> + rxq->pool = rte_pktmbuf_pool_create(buf, nb_mbuf, 0, 0,
>> + MBUF_DATA_SIZE, socket);
>
> As we can't pass SC|SP anymore can we then setup mempool cache size to some non-zero value?
> Konstantin
>
How about:
#define MEMPOOL_CACHE_SIZE 256
Do you think 256 will be OK?
>> + if (rxq->pool == NULL) {
>> + RTE_LOG(ERR, IP_RSMBL,
>> + "rte_pktmbuf_pool_create(%s) failed", buf);
>> return -1;
>> }
>>
>> --
>> 2.7.4
>
>
Thread overview: 8+ messages
2017-09-06 9:34 Hemant Agrawal
2017-10-12 4:51 ` Hemant Agrawal
2017-10-12 10:38 ` Ananyev, Konstantin
2017-10-12 11:02 ` Hemant Agrawal [this message]
2017-10-12 11:38 ` Ananyev, Konstantin
2017-10-12 13:25 ` [dpdk-dev] [PATCH v2] " Hemant Agrawal
2017-10-12 13:29 ` Ananyev, Konstantin
2017-10-13 22:56 ` Thomas Monjalon