* [dpdk-dev] usages issue with external mempool
@ 2016-07-26 10:11 Hemant Agrawal
  2016-07-27  9:51 ` Jerin Jacob
  0 siblings, 1 reply; 10+ messages in thread
From: Hemant Agrawal @ 2016-07-26 10:11 UTC
To: David Hunt, dev, Thomas Monjalon
Cc: olivier.matz, viktorin, jerin.jacob, Shreyansh Jain

Hi,

There were lengthy discussions w.r.t. the external mempool patches. However, I am still finding usage issues with the agreed approach.

The existing API to create a packet mempool, "rte_pktmbuf_pool_create", does not provide an option to change the object init iterator. This may be the reason that many applications (e.g. OVS) use rte_mempool_create to create a packet mempool with their own object iterator (e.g. ovs_rte_pktmbuf_init).

e.g. the existing usage is:

	dmp->mp = rte_mempool_create(mp_name, mp_size, MBUF_SIZE(mtu),
	                             MP_CACHE_SZ,
	                             sizeof(struct rte_pktmbuf_pool_private),
	                             rte_pktmbuf_pool_init, NULL,
	                             ovs_rte_pktmbuf_init, NULL,
	                             socket_id, 0);

With the new API set for packet pool creation, this needs to be changed to:

	dmp->mp = rte_mempool_create_empty(mp_name, mp_size, MBUF_SIZE(mtu),
	                                   MP_CACHE_SZ,
	                                   sizeof(struct rte_pktmbuf_pool_private),
	                                   socket_id, 0);
	if (dmp->mp == NULL)
		break;

	rte_errno = rte_mempool_set_ops_byname(dmp->mp,
	                                       RTE_MBUF_DEFAULT_MEMPOOL_OPS, NULL);
	if (rte_errno != 0) {
		RTE_LOG(ERR, MBUF, "error setting mempool handler\n");
		return NULL;
	}
	rte_pktmbuf_pool_init(dmp->mp, NULL);

	ret = rte_mempool_populate_default(dmp->mp);
	if (ret < 0) {
		rte_mempool_free(dmp->mp);
		rte_errno = -ret;
		return NULL;
	}

	rte_mempool_obj_iter(dmp->mp, ovs_rte_pktmbuf_init, NULL);

This is not a user-friendly approach: it asks users to change 1 API call into 6 new API calls. Or am I missing something?

I think we should do one of the following:

1. Enhance "rte_pktmbuf_pool_create" to optionally accept "rte_mempool_obj_cb_t *obj_init, void *obj_init_arg" as inputs. If obj_init is not present, the default can be used.
2. Create a new wrapper API (e.g. rte_pktmbuf_pool_create_new) with the above-said behavior, e.g.:

	/* helper to create a mbuf pool */
	struct rte_mempool *
	rte_pktmbuf_pool_create_new(const char *name, unsigned n,
		unsigned cache_size, uint16_t priv_size, uint16_t data_room_size,
		rte_mempool_obj_cb_t *obj_init, void *obj_init_arg,
		int socket_id)

3. Let the existing rte_mempool_create accept a "MEMPOOL_F_HW_PKT_POOL" flag. Obviously, if this flag is set, all other flag values should be ignored. This was discussed earlier also.

Please share your opinion.

Regards,
Hemant
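[Editorial note: for reference, option 2 above amounts to a thin wrapper over the new five-step API shown earlier in this message. The sketch below is a hypothetical, untested illustration (it is not a DPDK patch); the element-size computation, the use of rte_pktmbuf_init as the fallback obj_init, and the private-area setup are assumptions modeled on rte_pktmbuf_pool_create, not part of the original proposal.]

```c
/* Hypothetical wrapper (sketch only, not in DPDK): recreate the
 * one-call convenience of rte_mempool_create() on top of the new
 * mempool API, with a caller-supplied per-object init callback. */
struct rte_mempool *
rte_pktmbuf_pool_create_new(const char *name, unsigned n,
	unsigned cache_size, uint16_t priv_size, uint16_t data_room_size,
	rte_mempool_obj_cb_t *obj_init, void *obj_init_arg,
	int socket_id)
{
	struct rte_pktmbuf_pool_private mbp_priv;
	struct rte_mempool *mp;
	unsigned elt_size;
	int ret;

	/* Assumed element size: mbuf header + private area + data room. */
	elt_size = sizeof(struct rte_mbuf) + priv_size + data_room_size;

	mp = rte_mempool_create_empty(name, n, elt_size, cache_size,
			sizeof(struct rte_pktmbuf_pool_private), socket_id, 0);
	if (mp == NULL)
		return NULL;

	ret = rte_mempool_set_ops_byname(mp, RTE_MBUF_DEFAULT_MEMPOOL_OPS, NULL);
	if (ret != 0) {
		rte_mempool_free(mp);
		rte_errno = -ret;
		return NULL;
	}

	/* Initialize the pool private area as a packet pool. */
	mbp_priv.mbuf_data_room_size = data_room_size;
	mbp_priv.mbuf_priv_size = priv_size;
	rte_pktmbuf_pool_init(mp, &mbp_priv);

	ret = rte_mempool_populate_default(mp);
	if (ret < 0) {
		rte_mempool_free(mp);
		rte_errno = -ret;
		return NULL;
	}

	/* Run the caller's object init, falling back to the default. */
	rte_mempool_obj_iter(mp,
			obj_init != NULL ? obj_init : rte_pktmbuf_init,
			obj_init_arg);
	return mp;
}
```

With such a wrapper, the OVS example above would shrink back to a single call, passing ovs_rte_pktmbuf_init as obj_init.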
* Re: [dpdk-dev] usages issue with external mempool
  2016-07-26 10:11 [dpdk-dev] usages issue with external mempool Hemant Agrawal
@ 2016-07-27  9:51 ` Jerin Jacob
  2016-07-27 10:00   ` Thomas Monjalon
  2016-07-28  8:32   ` Olivier MATZ
  0 siblings, 2 replies; 10+ messages in thread
From: Jerin Jacob @ 2016-07-27 9:51 UTC
To: Hemant Agrawal
Cc: David Hunt, dev, Thomas Monjalon, olivier.matz, viktorin, Shreyansh Jain

On Tue, Jul 26, 2016 at 10:11:13AM +0000, Hemant Agrawal wrote:
> Hi,
> There were lengthy discussions w.r.t. the external mempool patches. However, I am still finding usage issues with the agreed approach.
>
> The existing API to create a packet mempool, "rte_pktmbuf_pool_create", does not provide an option to change the object init iterator. This may be the reason that many applications (e.g. OVS) use rte_mempool_create to create a packet mempool with their own object iterator (e.g. ovs_rte_pktmbuf_init).
>
> e.g. the existing usage is:
>
> 	dmp->mp = rte_mempool_create(mp_name, mp_size, MBUF_SIZE(mtu),
> 	                             MP_CACHE_SZ,
> 	                             sizeof(struct rte_pktmbuf_pool_private),
> 	                             rte_pktmbuf_pool_init, NULL,
> 	                             ovs_rte_pktmbuf_init, NULL,
> 	                             socket_id, 0);
>
> With the new API set for packet pool creation, this needs to be changed to:
>
> 	dmp->mp = rte_mempool_create_empty(mp_name, mp_size, MBUF_SIZE(mtu),
> 	                                   MP_CACHE_SZ,
> 	                                   sizeof(struct rte_pktmbuf_pool_private),
> 	                                   socket_id, 0);
> 	if (dmp->mp == NULL)
> 		break;
>
> 	rte_errno = rte_mempool_set_ops_byname(dmp->mp,
> 	                                       RTE_MBUF_DEFAULT_MEMPOOL_OPS, NULL);
> 	if (rte_errno != 0) {
> 		RTE_LOG(ERR, MBUF, "error setting mempool handler\n");
> 		return NULL;
> 	}
> 	rte_pktmbuf_pool_init(dmp->mp, NULL);
>
> 	ret = rte_mempool_populate_default(dmp->mp);
> 	if (ret < 0) {
> 		rte_mempool_free(dmp->mp);
> 		rte_errno = -ret;
> 		return NULL;
> 	}
>
> 	rte_mempool_obj_iter(dmp->mp, ovs_rte_pktmbuf_init, NULL);
>
> This is not a user-friendly approach: it asks users to change 1 API call into 6 new API calls. Or am I missing something?
I agree; to me, this is very bad. I have raised this concern earlier as well.

Since applications like OVS go through "rte_mempool_create" even for packet buffer pool creation, IMO it makes sense to extend "rte_mempool_create" to take one more argument providing the external pool handler name (NULL for default). I don't see any valid technical reason to treat the external-pool-handler-based mempool creation API differently from the default handler.

Olivier, David,

Thoughts?

If we agree on this, then maybe I can send the API deprecation notice for rte_mempool_create for v16.11.

Jerin

> I think we should do one of the following:
>
> 1. Enhance "rte_pktmbuf_pool_create" to optionally accept "rte_mempool_obj_cb_t *obj_init, void *obj_init_arg" as inputs. If obj_init is not present, the default can be used.
> 2. Create a new wrapper API (e.g. rte_pktmbuf_pool_create_new) with the above-said behavior, e.g.:
>
> 	/* helper to create a mbuf pool */
> 	struct rte_mempool *
> 	rte_pktmbuf_pool_create_new(const char *name, unsigned n,
> 		unsigned cache_size, uint16_t priv_size, uint16_t data_room_size,
> 		rte_mempool_obj_cb_t *obj_init, void *obj_init_arg,
> 		int socket_id)
>
> 3. Let the existing rte_mempool_create accept a "MEMPOOL_F_HW_PKT_POOL" flag. Obviously, if this flag is set, all other flag values should be ignored. This was discussed earlier also.
>
> Please share your opinion.
>
> Regards,
> Hemant
* Re: [dpdk-dev] usages issue with external mempool
  2016-07-27  9:51 ` Jerin Jacob
@ 2016-07-27 10:00   ` Thomas Monjalon
  2016-07-27 13:23     ` Hemant Agrawal
  1 sibling, 1 reply; 10+ messages in thread
From: Thomas Monjalon @ 2016-07-27 10:00 UTC
To: Jerin Jacob, Hemant Agrawal
Cc: David Hunt, dev, olivier.matz, viktorin, Shreyansh Jain

2016-07-27 15:21, Jerin Jacob:
> On Tue, Jul 26, 2016 at 10:11:13AM +0000, Hemant Agrawal wrote:
> > This is not a user-friendly approach: it asks users to change 1 API call into 6 new API calls. Or am I missing something?
>
> I agree; to me, this is very bad. I have raised this concern earlier as well.
>
> Since applications like OVS go through "rte_mempool_create" even for packet buffer pool creation, IMO it makes sense to extend "rte_mempool_create" to take one more argument providing the external pool handler name (NULL for default). I don't see any valid technical reason to treat the external-pool-handler-based mempool creation API differently from the default handler.
>
> Olivier, David,
>
> Thoughts?
>
> If we agree on this, then maybe I can send the API deprecation notice for rte_mempool_create for v16.11.

It would have been a lot better to send a patch during the 16.07 cycle to avoid breaking the API again.
I'm afraid it will even be too late for the deprecation notice.
* Re: [dpdk-dev] usages issue with external mempool
  2016-07-27 10:00 ` Thomas Monjalon
@ 2016-07-27 13:23   ` Hemant Agrawal
  2016-07-27 13:35     ` Thomas Monjalon
  0 siblings, 1 reply; 10+ messages in thread
From: Hemant Agrawal @ 2016-07-27 13:23 UTC
To: Thomas Monjalon, Jerin Jacob
Cc: David Hunt, dev, olivier.matz, viktorin, Shreyansh Jain

> -----Original Message-----
> From: Thomas Monjalon [mailto:thomas.monjalon@6wind.com]
> 2016-07-27 15:21, Jerin Jacob:
> > On Tue, Jul 26, 2016 at 10:11:13AM +0000, Hemant Agrawal wrote:
> > > This is not a user-friendly approach: it asks users to change 1 API call into 6 new API calls. Or am I missing something?
> >
> > I agree; to me, this is very bad. I have raised this concern earlier as well.
> >
> > Since applications like OVS go through "rte_mempool_create" even for packet buffer pool creation, IMO it makes sense to extend "rte_mempool_create" to take one more argument providing the external pool handler name (NULL for default). I don't see any valid technical reason to treat the external-pool-handler-based mempool creation API differently from the default handler.

[Hemant] That is better. However, I would also suggest upgrading "rte_pktmbuf_pool_create" with obj_init; this would give applications a consistent way to use it instead of rte_mempool_create.

> > Olivier, David,
> >
> > Thoughts?
> >
> > If we agree on this, then maybe I can send the API deprecation notice for rte_mempool_create for v16.11.
>
> It would have been a lot better to send a patch during the 16.07 cycle to avoid breaking the API again.
> I'm afraid it will even be too late for the deprecation notice.

[Hemant] Yes! It is late. We can make these changes immediately after 16.07.
* Re: [dpdk-dev] usages issue with external mempool
  2016-07-27 13:23 ` Hemant Agrawal
@ 2016-07-27 13:35   ` Thomas Monjalon
  2016-07-27 16:52     ` Hemant Agrawal
  0 siblings, 1 reply; 10+ messages in thread
From: Thomas Monjalon @ 2016-07-27 13:35 UTC
To: Hemant Agrawal
Cc: Jerin Jacob, David Hunt, dev, olivier.matz, viktorin, Shreyansh Jain

2016-07-27 13:23, Hemant Agrawal:
> From: Thomas Monjalon [mailto:thomas.monjalon@6wind.com]
> > 2016-07-27 15:21, Jerin Jacob:
> > > If we agree on this, then maybe I can send the API deprecation notice for rte_mempool_create for v16.11.
> >
> > It would have been a lot better to send a patch during the 16.07 cycle to avoid breaking the API again.
> > I'm afraid it will even be too late for the deprecation notice.
>
> [Hemant] Yes! It is late. We can make these changes immediately after 16.07.

You cannot change the API in 16.11 without having a deprecation agreement in 16.07.
* Re: [dpdk-dev] usages issue with external mempool
  2016-07-27 13:35 ` Thomas Monjalon
@ 2016-07-27 16:52   ` Hemant Agrawal
  2016-07-28  7:09     ` Thomas Monjalon
  0 siblings, 1 reply; 10+ messages in thread
From: Hemant Agrawal @ 2016-07-27 16:52 UTC
To: Thomas Monjalon
Cc: Jerin Jacob, David Hunt, dev, olivier.matz, viktorin, Shreyansh Jain

> -----Original Message-----
> From: Thomas Monjalon [mailto:thomas.monjalon@6wind.com]
> Sent: Wednesday, July 27, 2016 7:05 PM
> To: Hemant Agrawal <hemant.agrawal@nxp.com>
> Cc: Jerin Jacob <jerin.jacob@caviumnetworks.com>; David Hunt <david.hunt@intel.com>; dev@dpdk.org; olivier.matz@6wind.com; viktorin@rehivetech.com; Shreyansh Jain <shreyansh.jain@nxp.com>
> Subject: Re: usages issue with external mempool
>
> 2016-07-27 13:23, Hemant Agrawal:
> > From: Thomas Monjalon [mailto:thomas.monjalon@6wind.com]
> > > 2016-07-27 15:21, Jerin Jacob:
> > > > If we agree on this, then maybe I can send the API deprecation notice for rte_mempool_create for v16.11.
> > >
> > > It would have been a lot better to send a patch during the 16.07 cycle to avoid breaking the API again.
> > > I'm afraid it will even be too late for the deprecation notice.
> >
> > [Hemant] Yes! It is late. We can make these changes immediately after 16.07.
>
> You cannot change the API in 16.11 without having a deprecation agreement in 16.07.

[Hemant] Can you still accommodate the deprecation notice for 16.07?
* Re: [dpdk-dev] usages issue with external mempool
  2016-07-27 16:52 ` Hemant Agrawal
@ 2016-07-28  7:09   ` Thomas Monjalon
  0 siblings, 0 replies; 10+ messages in thread
From: Thomas Monjalon @ 2016-07-28 7:09 UTC
To: Hemant Agrawal
Cc: Jerin Jacob, David Hunt, dev, olivier.matz, viktorin, Shreyansh Jain

2016-07-27 16:52, Hemant Agrawal:
> From: Thomas Monjalon [mailto:thomas.monjalon@6wind.com]
> > 2016-07-27 13:23, Hemant Agrawal:
> > > From: Thomas Monjalon [mailto:thomas.monjalon@6wind.com]
> > > > 2016-07-27 15:21, Jerin Jacob:
> > > > > If we agree on this, then maybe I can send the API deprecation notice for rte_mempool_create for v16.11.
> > > >
> > > > It would have been a lot better to send a patch during the 16.07 cycle to avoid breaking the API again.
> > > > I'm afraid it will even be too late for the deprecation notice.
> > >
> > > [Hemant] Yes! It is late. We can make these changes immediately after 16.07.
> >
> > You cannot change the API in 16.11 without having a deprecation agreement in 16.07.
> [Hemant] Can you still accommodate the deprecation notice for 16.07?

No, I cannot write a deprecation notice for you. And you must give some time to other developers to review your proposal and accept it.
You failed to write a patch to modify the new API in 16.07, and you failed to prepare and discuss a deprecation notice for 16.11.
So I think your best bet is to work now on a change for 17.02.
* Re: [dpdk-dev] usages issue with external mempool
  2016-07-27  9:51 ` Jerin Jacob
  2016-07-27 10:00   ` Thomas Monjalon
@ 2016-07-28  8:32   ` Olivier MATZ
  2016-07-28 10:25     ` Jerin Jacob
  2016-07-29 10:09     ` Hemant Agrawal
  1 sibling, 2 replies; 10+ messages in thread
From: Olivier MATZ @ 2016-07-28 8:32 UTC
To: Jerin Jacob, Hemant Agrawal
Cc: David Hunt, dev, Thomas Monjalon, viktorin, Shreyansh Jain

Hi Hemant, Jerin,

On 07/27/2016 11:51 AM, Jerin Jacob wrote:
> On Tue, Jul 26, 2016 at 10:11:13AM +0000, Hemant Agrawal wrote:
>> Hi,
>> There were lengthy discussions w.r.t. the external mempool patches. However, I am still finding usage issues with the agreed approach.
>>
>> The existing API to create a packet mempool, "rte_pktmbuf_pool_create", does not provide an option to change the object init iterator. This may be the reason that many applications (e.g. OVS) use rte_mempool_create to create a packet mempool with their own object iterator (e.g. ovs_rte_pktmbuf_init).
>>
>> e.g. the existing usage is:
>>
>> 	dmp->mp = rte_mempool_create(mp_name, mp_size, MBUF_SIZE(mtu),
>> 	                             MP_CACHE_SZ,
>> 	                             sizeof(struct rte_pktmbuf_pool_private),
>> 	                             rte_pktmbuf_pool_init, NULL,
>> 	                             ovs_rte_pktmbuf_init, NULL,
>> 	                             socket_id, 0);
>>
>> With the new API set for packet pool creation, this needs to be changed to:
>>
>> 	dmp->mp = rte_mempool_create_empty(mp_name, mp_size, MBUF_SIZE(mtu),
>> 	                                   MP_CACHE_SZ,
>> 	                                   sizeof(struct rte_pktmbuf_pool_private),
>> 	                                   socket_id, 0);
>> 	if (dmp->mp == NULL)
>> 		break;
>>
>> 	rte_errno = rte_mempool_set_ops_byname(dmp->mp,
>> 	                                       RTE_MBUF_DEFAULT_MEMPOOL_OPS, NULL);
>> 	if (rte_errno != 0) {
>> 		RTE_LOG(ERR, MBUF, "error setting mempool handler\n");
>> 		return NULL;
>> 	}
>> 	rte_pktmbuf_pool_init(dmp->mp, NULL);
>>
>> 	ret = rte_mempool_populate_default(dmp->mp);
>> 	if (ret < 0) {
>> 		rte_mempool_free(dmp->mp);
>> 		rte_errno = -ret;
>> 		return NULL;
>> 	}
>>
>> 	rte_mempool_obj_iter(dmp->mp, ovs_rte_pktmbuf_init, NULL);
>>
>> This is not a user-friendly approach: it asks users to change 1 API call into 6 new API calls. Or am I missing something?

The example you are giving first still works today, right?

Since the mempool rework, as the objects are linked together in the mempool, it is also possible to use rte_pktmbuf_pool_create() and call another iterator after, like below:

	mp = rte_pktmbuf_pool_create(name, size, cache_size, priv_size,
		data_room_size, socket_id);
	if (mp == NULL)
		handle_error();
	rte_mempool_obj_iter(mp, ovs_rte_pktmbuf_init, NULL);

By the way, rte_mempool_set_ops_byname() is not needed in your example above since it sets the default ops.

> I agree; to me, this is very bad. I have raised this concern earlier as well.
>
> Since applications like OVS go through "rte_mempool_create" even for packet buffer pool creation, IMO it makes sense to extend "rte_mempool_create" to take one more argument providing the external pool handler name (NULL for default). I don't see any valid technical reason to treat the external-pool-handler-based mempool creation API differently from the default handler.

I disagree that changing from one function do_many_stuff(11 args) to several do_one_stuff(few args) functions is a regression.

I don't feel that having a new function with 12 args solves anything. What is the problem with having 20 lines of code for initializing a mbuf pool? The new API gives more flexibility, and it allows an application to define its own function if the default one cannot be used.

I think that the names of the functions define pretty well what they do:

	rte_mempool_create_empty():     create an empty mempool
	rte_mempool_set_ops_byname():   set the mempool handler from its name
	rte_pktmbuf_pool_init():        initialize the mempool as a packet pool
	rte_mempool_populate_default(): populate the pool with objects
	rte_mempool_obj_iter():         call a function for each object

>> I think we should do one of the following:
>>
>> 1. Enhance "rte_pktmbuf_pool_create" to optionally accept "rte_mempool_obj_cb_t *obj_init, void *obj_init_arg" as inputs. If obj_init is not present, the default can be used.

This function was introduced to simplify the creation of mbuf pools compared to mempool_create(). As I said above, you can still call rte_mempool_obj_iter() after.

>> 2. Create a new wrapper API (e.g. rte_pktmbuf_pool_create_new) with the above-said behavior, e.g.:
>>
>> 	/* helper to create a mbuf pool */
>> 	struct rte_mempool *
>> 	rte_pktmbuf_pool_create_new(const char *name, unsigned n,
>> 		unsigned cache_size, uint16_t priv_size, uint16_t data_room_size,
>> 		rte_mempool_obj_cb_t *obj_init, void *obj_init_arg,
>> 		int socket_id)

Same comment here.

>> 3. Let the existing rte_mempool_create accept a "MEMPOOL_F_HW_PKT_POOL" flag. Obviously, if this flag is set, all other flag values should be ignored. This was discussed earlier also.

You say we should do one of these points. But what is the link with points 1/ and 2/?

You say we should add a flag which:
- (obviously) will make all other flag values be ignored
- (probably also obviously) will prevent the use of rte_mempool_set_ops_byname() later

So to conclude, as I understand it, your issue is having 20 lines of code to initialize a mbuf pool, and you would prefer to have one function with all possible parameters, is that correct? If that's the case, sorry, but I feel it's clearer to have shorter functions.

Regards,
Olivier
* Re: [dpdk-dev] usages issue with external mempool
  2016-07-28  8:32 ` Olivier MATZ
@ 2016-07-28 10:25   ` Jerin Jacob
  0 siblings, 0 replies; 10+ messages in thread
From: Jerin Jacob @ 2016-07-28 10:25 UTC
To: Olivier MATZ
Cc: Hemant Agrawal, David Hunt, dev, Thomas Monjalon, viktorin, Shreyansh Jain

On Thu, Jul 28, 2016 at 10:32:44AM +0200, Olivier MATZ wrote:

Hi Olivier,

> Hi Hemant, Jerin,
>
> On 07/27/2016 11:51 AM, Jerin Jacob wrote:
> > On Tue, Jul 26, 2016 at 10:11:13AM +0000, Hemant Agrawal wrote:
> >
> > I agree; to me, this is very bad. I have raised this concern earlier as well.
> >
> > Since applications like OVS go through "rte_mempool_create" even for packet buffer pool creation, IMO it makes sense to extend "rte_mempool_create" to take one more argument providing the external pool handler name (NULL for default). I don't see any valid technical reason to treat the external-pool-handler-based mempool creation API differently from the default handler.
>
> I disagree that changing from one function do_many_stuff(11 args) to several do_one_stuff(few args) functions is a regression.
>
> I don't feel that having a new function with 12 args solves anything. What is the problem with having 20 lines of code for initializing a mbuf pool? The new API gives more flexibility, and it allows an application to define its own function if the default one cannot be used.

The problem I have with this scheme is that there is no visibility into converging the external- vs. default-handler-based pool creation/usage. Can we deprecate the original (11-arg) API and have ONLY ONE WAY to create/use the mempool, irrespective of external or internal handler? That way, from the application perspective, it is converged, and it is just a matter of selecting the handler name. I believe deprecating the (11-arg) API is the only way that can happen. Any other thoughts on converging it?

For example, look at cryptodev: the selection and usage of SW-based virtual or HW-based physical devices are identical from the APPLICATION perspective.

> I think that the names of the functions define pretty well what they do:
>
> 	rte_mempool_create_empty():     create an empty mempool
> 	rte_mempool_set_ops_byname():   set the mempool handler from its name
> 	rte_pktmbuf_pool_init():        initialize the mempool as a packet pool
> 	rte_mempool_populate_default(): populate the pool with objects
> 	rte_mempool_obj_iter():         call a function for each object

I agree. The new APIs are great. See above.

Jerin
* Re: [dpdk-dev] usages issue with external mempool
  2016-07-28  8:32 ` Olivier MATZ
@ 2016-07-29 10:09   ` Hemant Agrawal
  0 siblings, 0 replies; 10+ messages in thread
From: Hemant Agrawal @ 2016-07-29 10:09 UTC
To: Olivier MATZ, Jerin Jacob
Cc: David Hunt, dev, Thomas Monjalon, viktorin, Shreyansh Jain

Hi Olivier,

> -----Original Message-----
> From: Olivier MATZ [mailto:olivier.matz@6wind.com]
> Sent: Thursday, July 28, 2016 2:03 PM
>
> On 07/27/2016 11:51 AM, Jerin Jacob wrote:
> > On Tue, Jul 26, 2016 at 10:11:13AM +0000, Hemant Agrawal wrote:
> >> Hi,
> >> There were lengthy discussions w.r.t. the external mempool patches. However, I am still finding usage issues with the agreed approach.
> >>
> >> The existing API to create a packet mempool, "rte_pktmbuf_pool_create", does not provide an option to change the object init iterator. This may be the reason that many applications (e.g. OVS) use rte_mempool_create to create a packet mempool with their own object iterator (e.g. ovs_rte_pktmbuf_init).
> >>
> >> e.g. the existing usage is:
> >>
> >> 	dmp->mp = rte_mempool_create(mp_name, mp_size, MBUF_SIZE(mtu),
> >> 	                             MP_CACHE_SZ,
> >> 	                             sizeof(struct rte_pktmbuf_pool_private),
> >> 	                             rte_pktmbuf_pool_init, NULL,
> >> 	                             ovs_rte_pktmbuf_init, NULL,
> >> 	                             socket_id, 0);
> >>
> >> With the new API set for packet pool creation, this needs to be changed to:
> >>
> >> 	dmp->mp = rte_mempool_create_empty(mp_name, mp_size, MBUF_SIZE(mtu),
> >> 	                                   MP_CACHE_SZ,
> >> 	                                   sizeof(struct rte_pktmbuf_pool_private),
> >> 	                                   socket_id, 0);
> >> 	if (dmp->mp == NULL)
> >> 		break;
> >>
> >> 	rte_errno = rte_mempool_set_ops_byname(dmp->mp,
> >> 	                                       RTE_MBUF_DEFAULT_MEMPOOL_OPS, NULL);
> >> 	if (rte_errno != 0) {
> >> 		RTE_LOG(ERR, MBUF, "error setting mempool handler\n");
> >> 		return NULL;
> >> 	}
> >> 	rte_pktmbuf_pool_init(dmp->mp, NULL);
> >>
> >> 	ret = rte_mempool_populate_default(dmp->mp);
> >> 	if (ret < 0) {
> >> 		rte_mempool_free(dmp->mp);
> >> 		rte_errno = -ret;
> >> 		return NULL;
> >> 	}
> >>
> >> 	rte_mempool_obj_iter(dmp->mp, ovs_rte_pktmbuf_init, NULL);
> >>
> >> This is not a user-friendly approach: it asks users to change 1 API call into 6 new API calls. Or am I missing something?
>
> The example you are giving first still works today, right?

[Hemant] No. rte_mempool_create may not work with an offloaded mempool. In the current code, the default option in rte_mempool_create is hardcoded as "ring_mp_mc", so it will not use any other mempool handler specified in the config file. One possible solution I see is that the default option in rte_mempool_create could be changed from "ring_mp_mc" to CONFIG_RTE_MBUF_DEFAULT_MEMPOOL_OPS.

> Since the mempool rework, as the objects are linked together in the mempool, it is also possible to use rte_pktmbuf_pool_create() and call another iterator after, like below:
>
> 	mp = rte_pktmbuf_pool_create(name, size, cache_size, priv_size,
> 		data_room_size, socket_id);
> 	if (mp == NULL)
> 		handle_error();
> 	rte_mempool_obj_iter(mp, ovs_rte_pktmbuf_init, NULL);

[Hemant] Why? The purpose of rte_pktmbuf_pool_create was to provide a usable wrapper. If applications are not able to use it, we should retrospect and upgrade it to match common usage. Your ease-of-use definition may differ from mine.

> By the way, rte_mempool_set_ops_byname() is not needed in your example above since it sets the default ops.

[Hemant] I think it is needed. I could not find a MEMPOOL_REGISTER_OPS for a "default" value. The default may not be "ring_mp_mc".

> > I agree; to me, this is very bad. I have raised this concern earlier as well.
> >
> > Since applications like OVS go through "rte_mempool_create" even for packet buffer pool creation, IMO it makes sense to extend "rte_mempool_create" to take one more argument providing the external pool handler name (NULL for default). I don't see any valid technical reason to treat the external-pool-handler-based mempool creation API differently from the default handler.
>
> I disagree that changing from one function do_many_stuff(11 args) to several do_one_stuff(few args) functions is a regression.
>
> I don't feel that having a new function with 12 args solves anything. What is the problem with having 20 lines of code for initializing a mbuf pool? The new API gives more flexibility, and it allows an application to define its own function if the default one cannot be used.
>
> I think that the names of the functions define pretty well what they do:
>
> 	rte_mempool_create_empty():     create an empty mempool
> 	rte_mempool_set_ops_byname():   set the mempool handler from its name
> 	rte_pktmbuf_pool_init():        initialize the mempool as a packet pool
> 	rte_mempool_populate_default(): populate the pool with objects
> 	rte_mempool_obj_iter():         call a function for each object
>
> >> I think we should do one of the following:
> >>
> >> 1. Enhance "rte_pktmbuf_pool_create" to optionally accept "rte_mempool_obj_cb_t *obj_init, void *obj_init_arg" as inputs. If obj_init is not present, the default can be used.
>
> This function was introduced to simplify the creation of mbuf pools compared to mempool_create(). As I said above, you can still call rte_mempool_obj_iter() after.

[Hemant] See comment above.

> >> 2. Create a new wrapper API (e.g. rte_pktmbuf_pool_create_new) with the above-said behavior, e.g.:
> >>
> >> 	/* helper to create a mbuf pool */
> >> 	struct rte_mempool *
> >> 	rte_pktmbuf_pool_create_new(const char *name, unsigned n,
> >> 		unsigned cache_size, uint16_t priv_size, uint16_t data_room_size,
> >> 		rte_mempool_obj_cb_t *obj_init, void *obj_init_arg,
> >> 		int socket_id)
>
> Same comment here.
>
> >> 3. Let the existing rte_mempool_create accept a "MEMPOOL_F_HW_PKT_POOL" flag. Obviously, if this flag is set, all other flag values should be ignored. This was discussed earlier also.
>
> You say we should do one of these points. But what is the link with points 1/ and 2/?
>
> You say we should add a flag which:
> - (obviously) will make all other flag values be ignored
> - (probably also obviously) will prevent the use of rte_mempool_set_ops_byname() later
>
> So to conclude, as I understand it, your issue is having 20 lines of code to initialize a mbuf pool, and you would prefer to have one function with all possible parameters, is that correct? If that's the case, sorry, but I feel it's clearer to have shorter functions.

[Hemant] The new shorter functions are good. However, providing a usable wrapper will help.

[Hemant] Also, rte_mempool_create should have been deprecated, or it should be enhanced to work with external mempools. A 12th argument or a flag — these are just different ways to make it work. The sad story is that it does not work today with external mempools, and it is not deprecated.

Regards,
Hemant