From: Olivier MATZ <olivier.matz@6wind.com>
To: Jerin Jacob <jerin.jacob@caviumnetworks.com>,
	Hemant Agrawal <hemant.agrawal@nxp.com>
Cc: David Hunt <david.hunt@intel.com>, "dev@dpdk.org" <dev@dpdk.org>,
	Thomas Monjalon <thomas.monjalon@6wind.com>,
	"viktorin@rehivetech.com" <viktorin@rehivetech.com>,
	Shreyansh Jain <shreyansh.jain@nxp.com>
Subject: Re: [dpdk-dev] usages issue with external mempool
Date: Thu, 28 Jul 2016 10:32:44 +0200	[thread overview]
Message-ID: <5799C32C.3050907@6wind.com> (raw)
In-Reply-To: <20160727095128.GA11679@localhost.localdomain>

Hi Hemant, Jerin,

On 07/27/2016 11:51 AM, Jerin Jacob wrote:
> On Tue, Jul 26, 2016 at 10:11:13AM +0000, Hemant Agrawal wrote:
>> Hi,
>> There were lengthy discussions w.r.t. the external mempool patches. However, I am still finding usage issues with the agreed approach.
>>
>> The existing API to create a packet mempool, "rte_pktmbuf_pool_create", does not provide an option to change the object init callback. This may be the reason why many applications (e.g. OVS) use rte_mempool_create to create a packet mempool with their own object initializer (e.g. ovs_rte_pktmbuf_init).
>>
>> e.g. the existing usage is:
>>
>> 	dmp->mp = rte_mempool_create(mp_name, mp_size, MBUF_SIZE(mtu),
>> 				MP_CACHE_SZ,
>> 				sizeof(struct rte_pktmbuf_pool_private),
>> 				rte_pktmbuf_pool_init, NULL,
>> 				ovs_rte_pktmbuf_init, NULL,
>> 				socket_id, 0);
>>
>>
>> With the new API set for packet pool creation, this needs to be changed to:
>>
>> 	dmp->mp = rte_mempool_create_empty(mp_name, mp_size, MBUF_SIZE(mtu),
>> 				MP_CACHE_SZ,
>> 				sizeof(struct rte_pktmbuf_pool_private),
>> 				socket_id, 0);
>> 	if (dmp->mp == NULL)
>> 		break;
>>
>> 	rte_errno = rte_mempool_set_ops_byname(dmp->mp,
>> 				RTE_MBUF_DEFAULT_MEMPOOL_OPS, NULL);
>> 	if (rte_errno != 0) {
>> 		RTE_LOG(ERR, MBUF, "error setting mempool handler\n");
>> 		return NULL;
>> 	}
>> 	rte_pktmbuf_pool_init(dmp->mp, NULL);
>>
>> 	ret = rte_mempool_populate_default(dmp->mp);
>> 	if (ret < 0) {
>> 		rte_mempool_free(dmp->mp);
>> 		rte_errno = -ret;
>> 		return NULL;
>> 	}
>>
>> 	rte_mempool_obj_iter(dmp->mp, ovs_rte_pktmbuf_init, NULL);
>>
>> Asking users to replace 1 API with 6 new APIs is not a user-friendly approach. Or am I missing something?

The first example you give still works today, right?

Since the mempool rework, as the objects are linked together in the 
mempool, it is also possible to use rte_pktmbuf_pool_create() and call 
another iterator afterwards, like below:

	mp = rte_pktmbuf_pool_create(name, size, cache_size, priv_size,
		data_room_size, socket_id);
	if (mp == NULL)
		handle_error();
	rte_mempool_obj_iter(mp, ovs_rte_pktmbuf_init, NULL);

By the way, rte_mempool_set_ops_byname() is not needed in your example 
above since it sets the default ops.
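
It would only be needed to select a non-default handler. For instance 
(the handler name below is just an illustration, assuming a 
single-producer/single-consumer ring suits your use case):

	/* must be called after rte_mempool_create_empty() and
	 * before rte_mempool_populate_default() */
	ret = rte_mempool_set_ops_byname(mp, "ring_sp_sc", NULL);
	if (ret != 0)
		handle_error();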

>
> I agree. To me, this is very bad. I have raised this concern earlier
> as well.
>
> Since applications like OVS go through "rte_mempool_create" even for
> packet buffer pool creation, IMO it makes sense to extend
> "rte_mempool_create" to take one more argument providing the external
> pool handler name (NULL for default). I don't see any valid technical
> reason to treat mempool creation with an external pool handler
> differently from creation with the default handler.

I disagree that changing from one do_many_stuff(11 args) function to 
several do_one_stuff(few args) functions is a regression.

I don't feel that a new function with 12 args would solve anything.
What is the problem with having 20 lines of code to initialize an mbuf 
pool? The new API gives more flexibility, and it allows an application 
to define its own function if the default one cannot be used (see the 
sketch after the list below).

I think the names of the functions describe pretty well what they do:

   rte_mempool_create_empty(): create an empty mempool
   rte_mempool_set_ops_byname(): set the mempool handler from its name
   rte_pktmbuf_pool_init(): initialize the mempool as a packet pool
   rte_mempool_populate_default(): populate the pool with objects
   rte_mempool_obj_iter(): call a function for each object
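
Put together, an application that cannot use rte_pktmbuf_pool_create() 
can wrap these calls in its own helper. A rough sketch (the helper 
name, argument list and error handling are mine, not a proposed API):

	struct rte_mempool *
	app_pktmbuf_pool_create(const char *name, unsigned n,
		unsigned cache_size, uint16_t priv_size,
		uint16_t data_room_size, rte_mempool_obj_cb_t *obj_init,
		void *obj_init_arg, int socket_id)
	{
		struct rte_pktmbuf_pool_private mbp_priv;
		struct rte_mempool *mp;
		unsigned elt_size;

		elt_size = sizeof(struct rte_mbuf) + priv_size + data_room_size;
		mp = rte_mempool_create_empty(name, n, elt_size, cache_size,
			sizeof(struct rte_pktmbuf_pool_private), socket_id, 0);
		if (mp == NULL)
			return NULL;

		/* default ops are fine here, so no set_ops_byname() */
		mbp_priv.mbuf_data_room_size = data_room_size;
		mbp_priv.mbuf_priv_size = priv_size;
		rte_pktmbuf_pool_init(mp, &mbp_priv);

		if (rte_mempool_populate_default(mp) < 0) {
			rte_mempool_free(mp);
			return NULL;
		}

		/* base mbuf init, then the application callback */
		rte_mempool_obj_iter(mp, rte_pktmbuf_init, NULL);
		if (obj_init != NULL)
			rte_mempool_obj_iter(mp, obj_init, obj_init_arg);
		return mp;
	}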

>> I think, we should do one of the following:
>>
>> 1. Enhance "rte_pktmbuf_pool_create" to optionally accept "rte_mempool_obj_cb_t *obj_init, void *obj_init_arg" as inputs. If obj_init is not present, the default can be used.

This function was introduced to simplify the creation of mbuf pools 
compared to rte_mempool_create().
As I said above, you can still call rte_mempool_obj_iter() afterwards.

>> 2. Create a new wrapper API (e.g. rte_pktmbuf_pool_create_new) with the behavior described above, e.g.:
>> /* helper to create a mbuf pool */
>> struct rte_mempool *
>> rte_pktmbuf_pool_create_new(const char *name, unsigned n,
>> 		unsigned cache_size, uint16_t priv_size,
>> 		uint16_t data_room_size, rte_mempool_obj_cb_t *obj_init,
>> 		void *obj_init_arg, int socket_id)

Same comment here.

>> 3. Let the existing rte_mempool_create accept a flag such as "MEMPOOL_F_HW_PKT_POOL". Obviously, if this flag is set, all other flag values should be ignored. This was discussed earlier as well.

You say we should do one of these points. But what is the link with 
points 1/ and 2/?

You say we should add a flag which:
   - (obviously) will make all other flag values be ignored
   - (probably also obviously) will prevent using 
rte_mempool_set_ops_byname() later


So to conclude, as I understand it, your issue is having 20 lines of 
code to initialize an mbuf pool, and you would prefer one function with 
all possible parameters, is that correct? If so, sorry, but I feel it 
is clearer to have shorter functions.


Regards,
Olivier

