DPDK patches and discussions
From: Jerin Jacob <jerin.jacob@caviumnetworks.com>
To: Hemant Agrawal <hemant.agrawal@nxp.com>
Cc: David Hunt <david.hunt@intel.com>, "dev@dpdk.org" <dev@dpdk.org>,
	Thomas Monjalon <thomas.monjalon@6wind.com>,
	"olivier.matz@6wind.com" <olivier.matz@6wind.com>,
	"viktorin@rehivetech.com" <viktorin@rehivetech.com>,
	Shreyansh Jain <shreyansh.jain@nxp.com>
Subject: Re: [dpdk-dev] usages issue with external mempool
Date: Wed, 27 Jul 2016 15:21:29 +0530	[thread overview]
Message-ID: <20160727095128.GA11679@localhost.localdomain> (raw)
In-Reply-To: <DB5PR04MB1605210D71D8CA61C01C78B6890E0@DB5PR04MB1605.eurprd04.prod.outlook.com>

On Tue, Jul 26, 2016 at 10:11:13AM +0000, Hemant Agrawal wrote:
> Hi,
> There were lengthy discussions w.r.t. the external mempool patches. However, I am still finding a usage issue with the agreed approach.
> 
> The existing API to create a packet mempool, "rte_pktmbuf_pool_create", does not provide the option to change the object init iterator. This may be the reason that many applications (e.g. OVS) are using rte_mempool_create to create a packet mempool with their own object iterator (e.g. ovs_rte_pktmbuf_init).
> 
> e.g. the existing usage is:
>         dmp->mp = rte_mempool_create(mp_name, mp_size, MBUF_SIZE(mtu),
>                                      MP_CACHE_SZ,
>                                      sizeof(struct rte_pktmbuf_pool_private),
>                                      rte_pktmbuf_pool_init, NULL,
>                                      ovs_rte_pktmbuf_init, NULL,
>                                      socket_id, 0);
> 
> 
> With the new API set for packet pool creation, this needs to be changed to:
> 
>         dmp->mp = rte_mempool_create_empty(mp_name, mp_size, MBUF_SIZE(mtu),
>                                            MP_CACHE_SZ,
>                                            sizeof(struct rte_pktmbuf_pool_private),
>                                            socket_id, 0);
>         if (dmp->mp == NULL)
>                 break;
> 
>         ret = rte_mempool_set_ops_byname(dmp->mp,
>                                          RTE_MBUF_DEFAULT_MEMPOOL_OPS, NULL);
>         if (ret != 0) {
>                 RTE_LOG(ERR, MBUF, "error setting mempool handler\n");
>                 rte_errno = -ret;
>                 return NULL;
>         }
>         rte_pktmbuf_pool_init(dmp->mp, NULL);
> 
>         ret = rte_mempool_populate_default(dmp->mp);
>         if (ret < 0) {
>                 rte_mempool_free(dmp->mp);
>                 rte_errno = -ret;
>                 return NULL;
>         }
> 
>         rte_mempool_obj_iter(dmp->mp, ovs_rte_pktmbuf_init, NULL);
> 
> It is not a user-friendly approach to ask applications to change 1 API call into 6 new API calls. Or am I missing something?

I agree. To me, this is very bad, and I have raised this concern earlier
as well.

Since applications like OVS go through "rte_mempool_create" even for
packet buffer pool creation, IMO it makes sense to extend
"rte_mempool_create" to take one more argument to provide the external pool
handler name (NULL for the default). I don't see any valid technical reason
to treat mempool creation with an external pool handler differently from
creation with the default handler.
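
Something along the lines of the sketch below (purely illustrative; the
extra "ops_name" argument is hypothetical, not an existing API):

        /*
         * Hypothetical extension of rte_mempool_create(): one extra
         * argument naming the pool handler. Passing NULL would select
         * the default handler, so existing callers change only by the
         * added argument.
         */
        struct rte_mempool *
        rte_mempool_create(const char *name, unsigned n, unsigned elt_size,
                           unsigned cache_size, unsigned private_data_size,
                           rte_mempool_ctor_t *mp_init, void *mp_init_arg,
                           rte_mempool_obj_cb_t *obj_init, void *obj_init_arg,
                           int socket_id, unsigned flags,
                           const char *ops_name);

OVS could then keep its single rte_mempool_create() call and simply pass
the handler name (or NULL) as the last argument.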

Olivier, David,

Thoughts?

If we agree on this, then maybe I can send the API deprecation notice for
rte_mempool_create for v16.11.

Jerin


> 
> I think we should do one of the following:
> 
> 1. Enhance "rte_pktmbuf_pool_create" to optionally accept "rte_mempool_obj_cb_t *obj_init, void *obj_init_arg" as inputs. If obj_init is not provided, the default can be used.
> 2. Create a new wrapper API (e.g. rte_pktmbuf_pool_create_new) with the above said behavior, e.g. (a usage sketch follows the list below):
> /* helper to create a mbuf pool */
> struct rte_mempool *
> rte_pktmbuf_pool_create_new(const char *name, unsigned n,
>                unsigned cache_size, uint16_t priv_size, uint16_t data_room_size,
>                rte_mempool_obj_cb_t *obj_init, void *obj_init_arg,
>                int socket_id)
> 3. Let the existing rte_mempool_create accept a flag such as "MEMPOOL_F_HW_PKT_POOL". Obviously, if this flag is set, all other flag values should be ignored. This was discussed earlier as well.
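> 
> With option 2, for instance, the OVS snippet at the top would collapse back
> into a single call, roughly as below (just a sketch; "data_room_size" here is
> a placeholder derived from the MTU):
> 
>         dmp->mp = rte_pktmbuf_pool_create_new(mp_name, mp_size,
>                                               MP_CACHE_SZ, 0 /* priv_size */,
>                                               data_room_size,
>                                               ovs_rte_pktmbuf_init, NULL,
>                                               socket_id);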
> 
> Please share your opinion.
> 
> Regards,
> Hemant
> 
> 


Thread overview: 10+ messages
2016-07-26 10:11 Hemant Agrawal
2016-07-27  9:51 ` Jerin Jacob [this message]
2016-07-27 10:00   ` Thomas Monjalon
2016-07-27 13:23     ` Hemant Agrawal
2016-07-27 13:35       ` Thomas Monjalon
2016-07-27 16:52         ` Hemant Agrawal
2016-07-28  7:09           ` Thomas Monjalon
2016-07-28  8:32   ` Olivier MATZ
2016-07-28 10:25     ` Jerin Jacob
2016-07-29 10:09     ` Hemant Agrawal
