* [dpdk-dev] [PATCH 0/2] Allow application set mempool handle @ 2017-06-01 8:05 Santosh Shukla 2017-06-01 8:05 ` [dpdk-dev] [PATCH 1/2] eal: Introducing option to " Santosh Shukla ` (3 more replies) 0 siblings, 4 replies; 72+ messages in thread From: Santosh Shukla @ 2017-06-01 8:05 UTC (permalink / raw) To: olivier.matz, dev; +Cc: hemant.agrawal, jerin.jacob, Santosh Shukla Some platforms can have two different NICs, for example an external PCI Intel 40G card and an integrated NIC like vNIC/octeontx/dpaa2. Each NIC would like to use its preferred pool: the external PCI card's/vNIC's preferred pool would be the ring-based pool, while octeontx/dpaa2 would prefer ext-mempool. Right now the framework doesn't support such a case; only one pool can be used across two different NICs, and for that the user has to statically set CONFIG_RTE_MEMPOOL_DEFAULT_OPS=<pool-name>. So proposing two approaches: Patch 1) Introduce the eal option --pkt-mempool=<pool-name>. Patch 2) Introduce an ethdev API called _get_preferred_pool(), where the PMD driver gets a chance to advertise its pool capability to the application; based on that hint, the application creates pools for that driver.
Santosh Shukla (2): eal: Introducing option to set mempool handle ether/ethdev: Allow pmd to advertise preferred pool capability lib/librte_eal/bsdapp/eal/eal.c | 9 +++++++ lib/librte_eal/bsdapp/eal/rte_eal_version.map | 7 +++++ lib/librte_eal/common/eal_common_options.c | 3 +++ lib/librte_eal/common/eal_internal_cfg.h | 2 ++ lib/librte_eal/common/eal_options.h | 2 ++ lib/librte_eal/common/include/rte_eal.h | 9 +++++++ lib/librte_eal/linuxapp/eal/eal.c | 36 +++++++++++++++++++++++++ lib/librte_eal/linuxapp/eal/rte_eal_version.map | 7 +++++ lib/librte_ether/rte_ethdev.c | 16 +++++++++++ lib/librte_ether/rte_ethdev.h | 21 +++++++++++++++ lib/librte_ether/rte_ether_version.map | 7 +++++ lib/librte_mbuf/rte_mbuf.c | 8 ++++-- 12 files changed, 125 insertions(+), 2 deletions(-) -- 2.11.0 ^ permalink raw reply [flat|nested] 72+ messages in thread
* [dpdk-dev] [PATCH 1/2] eal: Introducing option to set mempool handle 2017-06-01 8:05 [dpdk-dev] [PATCH 0/2] Allow application set mempool handle Santosh Shukla @ 2017-06-01 8:05 ` Santosh Shukla 2017-06-30 14:12 ` Olivier Matz 2017-06-01 8:05 ` [dpdk-dev] [PATCH 2/2] ether/ethdev: Allow pmd to advertise preferred pool capability Santosh Shukla ` (2 subsequent siblings) 3 siblings, 1 reply; 72+ messages in thread From: Santosh Shukla @ 2017-06-01 8:05 UTC (permalink / raw) To: olivier.matz, dev; +Cc: hemant.agrawal, jerin.jacob, Santosh Shukla Platforms can have external PCI cards like the Intel 40G card and integrated NICs like OcteonTX, where each NIC has its preferred pool handle. Example: the Intel 40G NIC's preferred pool is ring_mp_mc, while OcteonTX's preferred pool handle would be ext-mempool's handle named 'octeontx-fpavf'. There is no way for either NIC to use its choice of mempool handle, because currently the mempool handle is programmed statically, i.e. the user has to set the pool handle name in CONFIG_RTE_MEMPOOL_OPS_DEFAULT=''. So introducing the eal option --pkt-mempool="".
Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com> --- lib/librte_eal/bsdapp/eal/eal.c | 9 +++++++ lib/librte_eal/bsdapp/eal/rte_eal_version.map | 7 +++++ lib/librte_eal/common/eal_common_options.c | 3 +++ lib/librte_eal/common/eal_internal_cfg.h | 2 ++ lib/librte_eal/common/eal_options.h | 2 ++ lib/librte_eal/common/include/rte_eal.h | 9 +++++++ lib/librte_eal/linuxapp/eal/eal.c | 36 +++++++++++++++++++++++++ lib/librte_eal/linuxapp/eal/rte_eal_version.map | 7 +++++ lib/librte_mbuf/rte_mbuf.c | 8 ++++-- 9 files changed, 81 insertions(+), 2 deletions(-) diff --git a/lib/librte_eal/bsdapp/eal/eal.c b/lib/librte_eal/bsdapp/eal/eal.c index 05f0c1f90..7d8824707 100644 --- a/lib/librte_eal/bsdapp/eal/eal.c +++ b/lib/librte_eal/bsdapp/eal/eal.c @@ -113,6 +113,15 @@ struct internal_config internal_config; /* used by rte_rdtsc() */ int rte_cycles_vmware_tsc_map; +char *__rte_unused +rte_eal_get_mp_name(void) +{ + if (internal_config.mp_name[0] == 0x0) + return NULL; + else + return internal_config.mp_name; +} + /* Return a pointer to the configuration structure */ struct rte_config * rte_eal_get_configuration(void) diff --git a/lib/librte_eal/bsdapp/eal/rte_eal_version.map b/lib/librte_eal/bsdapp/eal/rte_eal_version.map index 2e48a7366..a1e9ad95f 100644 --- a/lib/librte_eal/bsdapp/eal/rte_eal_version.map +++ b/lib/librte_eal/bsdapp/eal/rte_eal_version.map @@ -193,3 +193,10 @@ DPDK_17.05 { vfio_get_group_no; } DPDK_17.02; + +DPDK_17.08 { + global: + + rte_eal_get_mp_name; + +} DPDK_17.05; diff --git a/lib/librte_eal/common/eal_common_options.c b/lib/librte_eal/common/eal_common_options.c index f470195f3..1c147a696 100644 --- a/lib/librte_eal/common/eal_common_options.c +++ b/lib/librte_eal/common/eal_common_options.c @@ -95,6 +95,7 @@ eal_long_options[] = { {OPT_VFIO_INTR, 1, NULL, OPT_VFIO_INTR_NUM }, {OPT_VMWARE_TSC_MAP, 0, NULL, OPT_VMWARE_TSC_MAP_NUM }, {OPT_XEN_DOM0, 0, NULL, OPT_XEN_DOM0_NUM }, + {OPT_PKT_MEMPOOL, 1, NULL, OPT_PKT_MEMPOOL_NUM }, {0, 0, 
NULL, 0 } }; @@ -161,6 +162,7 @@ eal_reset_internal_config(struct internal_config *internal_cfg) #endif internal_cfg->vmware_tsc_map = 0; internal_cfg->create_uio_dev = 0; + memset(&internal_cfg->mp_name[0], 0x0, MAX_POOL_NAME_LEN); } static int @@ -1083,5 +1085,6 @@ eal_common_usage(void) " --"OPT_NO_PCI" Disable PCI\n" " --"OPT_NO_HPET" Disable HPET\n" " --"OPT_NO_SHCONF" No shared config (mmap'd files)\n" + " --"OPT_PKT_MEMPOOL" Use pool name as mempool for port\n" "\n", RTE_MAX_LCORE); } diff --git a/lib/librte_eal/common/eal_internal_cfg.h b/lib/librte_eal/common/eal_internal_cfg.h index 7b7e8c887..b8fedd2e6 100644 --- a/lib/librte_eal/common/eal_internal_cfg.h +++ b/lib/librte_eal/common/eal_internal_cfg.h @@ -43,6 +43,7 @@ #include <rte_pci_dev_feature_defs.h> #define MAX_HUGEPAGE_SIZES 3 /**< support up to 3 page sizes */ +#define MAX_POOL_NAME_LEN 256 /**< Max len of a pool name */ /* * internal configuration structure for the number, size and @@ -84,6 +85,7 @@ struct internal_config { const char *hugepage_dir; /**< specific hugetlbfs directory to use */ unsigned num_hugepage_sizes; /**< how many sizes on this system */ + char mp_name[MAX_POOL_NAME_LEN]; /**< mempool handle name */ struct hugepage_info hugepage_info[MAX_HUGEPAGE_SIZES]; }; extern struct internal_config internal_config; /**< Global EAL configuration. 
*/ diff --git a/lib/librte_eal/common/eal_options.h b/lib/librte_eal/common/eal_options.h index a881c62e2..4e52ee255 100644 --- a/lib/librte_eal/common/eal_options.h +++ b/lib/librte_eal/common/eal_options.h @@ -83,6 +83,8 @@ enum { OPT_VMWARE_TSC_MAP_NUM, #define OPT_XEN_DOM0 "xen-dom0" OPT_XEN_DOM0_NUM, +#define OPT_PKT_MEMPOOL "pkt-mempool" + OPT_PKT_MEMPOOL_NUM, OPT_LONG_MAX_NUM }; diff --git a/lib/librte_eal/common/include/rte_eal.h b/lib/librte_eal/common/include/rte_eal.h index abf020bf9..c2f696a3d 100644 --- a/lib/librte_eal/common/include/rte_eal.h +++ b/lib/librte_eal/common/include/rte_eal.h @@ -283,6 +283,15 @@ static inline int rte_gettid(void) return RTE_PER_LCORE(_thread_id); } +/** + * Get mempool name from cmdline. + * + * @return + * On success, returns the pool name. + * On Failure, returs NULL. + */ +char *rte_eal_get_mp_name(void); + #define RTE_INIT(func) \ static void __attribute__((constructor, used)) func(void) diff --git a/lib/librte_eal/linuxapp/eal/eal.c b/lib/librte_eal/linuxapp/eal/eal.c index 7c78f2dc2..b2a6c8068 100644 --- a/lib/librte_eal/linuxapp/eal/eal.c +++ b/lib/librte_eal/linuxapp/eal/eal.c @@ -122,6 +122,15 @@ struct internal_config internal_config; /* used by rte_rdtsc() */ int rte_cycles_vmware_tsc_map; +char * +rte_eal_get_mp_name(void) +{ + if (internal_config.mp_name[0] == 0x0) + return NULL; + else + return internal_config.mp_name; +} + /* Return a pointer to the configuration structure */ struct rte_config * rte_eal_get_configuration(void) @@ -478,6 +487,23 @@ eal_parse_vfio_intr(const char *mode) return -1; } +static int +eal_parse_mp_name(const char *name) +{ + int len; + + if (name == NULL) + return -1; + + len = strlen(name); + if (len >= MAX_POOL_NAME_LEN) + return -1; + + strcpy(internal_config.mp_name, name); + + return 0; +} + /* Parse the arguments for --log-level only */ static void eal_log_level_parse(int argc, char **argv) @@ -611,6 +637,16 @@ eal_parse_args(int argc, char **argv) 
internal_config.create_uio_dev = 1; break; + case OPT_PKT_MEMPOOL_NUM: + if (eal_parse_mp_name(optarg) < 0) { + RTE_LOG(ERR, EAL, "invalid parameters for --" + OPT_PKT_MEMPOOL "\n"); + eal_usage(prgname); + ret = -1; + goto out; + } + break; + default: if (opt < OPT_LONG_MIN_NUM && isprint(opt)) { RTE_LOG(ERR, EAL, "Option %c is not supported " diff --git a/lib/librte_eal/linuxapp/eal/rte_eal_version.map b/lib/librte_eal/linuxapp/eal/rte_eal_version.map index 670bab3a5..e57330bec 100644 --- a/lib/librte_eal/linuxapp/eal/rte_eal_version.map +++ b/lib/librte_eal/linuxapp/eal/rte_eal_version.map @@ -198,3 +198,10 @@ DPDK_17.05 { vfio_get_group_no; } DPDK_17.02; + +DPDK_17.08 { + global: + + rte_eal_get_mp_name; + +} DPDK_17.05 diff --git a/lib/librte_mbuf/rte_mbuf.c b/lib/librte_mbuf/rte_mbuf.c index 0e3e36a58..38f4b3de0 100644 --- a/lib/librte_mbuf/rte_mbuf.c +++ b/lib/librte_mbuf/rte_mbuf.c @@ -158,6 +158,7 @@ rte_pktmbuf_pool_create(const char *name, unsigned n, { struct rte_mempool *mp; struct rte_pktmbuf_pool_private mbp_priv; + const char *mp_name = NULL; unsigned elt_size; int ret; @@ -177,8 +178,11 @@ rte_pktmbuf_pool_create(const char *name, unsigned n, if (mp == NULL) return NULL; - ret = rte_mempool_set_ops_byname(mp, - RTE_MBUF_DEFAULT_MEMPOOL_OPS, NULL); + mp_name = rte_eal_get_mp_name(); + if (mp_name == NULL) + mp_name = RTE_MBUF_DEFAULT_MEMPOOL_OPS; + + ret = rte_mempool_set_ops_byname(mp, mp_name, NULL); if (ret != 0) { RTE_LOG(ERR, MBUF, "error setting mempool handler\n"); rte_mempool_free(mp); -- 2.11.0 ^ permalink raw reply [flat|nested] 72+ messages in thread
* Re: [dpdk-dev] [PATCH 1/2] eal: Introducing option to set mempool handle 2017-06-01 8:05 ` [dpdk-dev] [PATCH 1/2] eal: Introducing option to " Santosh Shukla @ 2017-06-30 14:12 ` Olivier Matz 2017-07-04 12:33 ` santosh 0 siblings, 1 reply; 72+ messages in thread From: Olivier Matz @ 2017-06-30 14:12 UTC (permalink / raw) To: Santosh Shukla; +Cc: dev, hemant.agrawal, jerin.jacob Hi Santosh, On Thu, 1 Jun 2017 13:35:58 +0530, Santosh Shukla <santosh.shukla@caviumnetworks.com> wrote: > Platform can have external PCI cards like Intel 40G card and > Integrated NIC like OcteoTX. Where both NIC has their > preferred pool handle. Example: Intel 40G NIC preferred pool > is ring_mp_mc and OcteonTX preferred pool handle would be > ext-mempool's handle named 'octeontx-fpavf'. > > There is no way that either of NIC's could use their choice > of mempool handle. > > Because Currently mempool handle programmed in static way i.e. > User has to set pool handle name in CONFIG_RTE_MEMPOOL_OPS_DEFAULT='' > > So introducing eal option --pkt-mempool="". 
> > Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com> > --- > lib/librte_eal/bsdapp/eal/eal.c | 9 +++++++ > lib/librte_eal/bsdapp/eal/rte_eal_version.map | 7 +++++ > lib/librte_eal/common/eal_common_options.c | 3 +++ > lib/librte_eal/common/eal_internal_cfg.h | 2 ++ > lib/librte_eal/common/eal_options.h | 2 ++ > lib/librte_eal/common/include/rte_eal.h | 9 +++++++ > lib/librte_eal/linuxapp/eal/eal.c | 36 +++++++++++++++++++++++++ > lib/librte_eal/linuxapp/eal/rte_eal_version.map | 7 +++++ > lib/librte_mbuf/rte_mbuf.c | 8 ++++-- > 9 files changed, 81 insertions(+), 2 deletions(-) > > diff --git a/lib/librte_eal/bsdapp/eal/eal.c b/lib/librte_eal/bsdapp/eal/eal.c > index 05f0c1f90..7d8824707 100644 > --- a/lib/librte_eal/bsdapp/eal/eal.c > +++ b/lib/librte_eal/bsdapp/eal/eal.c > @@ -113,6 +113,15 @@ struct internal_config internal_config; > /* used by rte_rdtsc() */ > int rte_cycles_vmware_tsc_map; > > +char *__rte_unused why __rte_unused? I'll also add a const > +rte_eal_get_mp_name(void) I suggest rte_eal_mbuf_default_mempool_ops() It's longer but it shows it's only about mbufs, and it is consistent with the #define. > +{ > + if (internal_config.mp_name[0] == 0x0) > + return NULL; > + else > + return internal_config.mp_name; > +} > + Would returning RTE_MBUF_DEFAULT_MEMPOOL_OPS instead of NULL make sense? 
> /* Return a pointer to the configuration structure */ > struct rte_config * > rte_eal_get_configuration(void) > diff --git a/lib/librte_eal/bsdapp/eal/rte_eal_version.map b/lib/librte_eal/bsdapp/eal/rte_eal_version.map > index 2e48a7366..a1e9ad95f 100644 > --- a/lib/librte_eal/bsdapp/eal/rte_eal_version.map > +++ b/lib/librte_eal/bsdapp/eal/rte_eal_version.map > @@ -193,3 +193,10 @@ DPDK_17.05 { > vfio_get_group_no; > > } DPDK_17.02; > + > +DPDK_17.08 { > + global: > + > + rte_eal_get_mp_name; > + > +} DPDK_17.05; > diff --git a/lib/librte_eal/common/eal_common_options.c b/lib/librte_eal/common/eal_common_options.c > index f470195f3..1c147a696 100644 > --- a/lib/librte_eal/common/eal_common_options.c > +++ b/lib/librte_eal/common/eal_common_options.c > @@ -95,6 +95,7 @@ eal_long_options[] = { > {OPT_VFIO_INTR, 1, NULL, OPT_VFIO_INTR_NUM }, > {OPT_VMWARE_TSC_MAP, 0, NULL, OPT_VMWARE_TSC_MAP_NUM }, > {OPT_XEN_DOM0, 0, NULL, OPT_XEN_DOM0_NUM }, > + {OPT_PKT_MEMPOOL, 1, NULL, OPT_PKT_MEMPOOL_NUM }, > {0, 0, NULL, 0 } > }; > > @@ -161,6 +162,7 @@ eal_reset_internal_config(struct internal_config *internal_cfg) > #endif > internal_cfg->vmware_tsc_map = 0; > internal_cfg->create_uio_dev = 0; > + memset(&internal_cfg->mp_name[0], 0x0, MAX_POOL_NAME_LEN); > } > > static int Or it could be initialized to RTE_MBUF_DEFAULT_MEMPOOL_OPS here? 
> @@ -1083,5 +1085,6 @@ eal_common_usage(void) > " --"OPT_NO_PCI" Disable PCI\n" > " --"OPT_NO_HPET" Disable HPET\n" > " --"OPT_NO_SHCONF" No shared config (mmap'd files)\n" > + " --"OPT_PKT_MEMPOOL" Use pool name as mempool for port\n" > "\n", RTE_MAX_LCORE); > } > diff --git a/lib/librte_eal/common/eal_internal_cfg.h b/lib/librte_eal/common/eal_internal_cfg.h > index 7b7e8c887..b8fedd2e6 100644 > --- a/lib/librte_eal/common/eal_internal_cfg.h > +++ b/lib/librte_eal/common/eal_internal_cfg.h > @@ -43,6 +43,7 @@ > #include <rte_pci_dev_feature_defs.h> > > #define MAX_HUGEPAGE_SIZES 3 /**< support up to 3 page sizes */ > +#define MAX_POOL_NAME_LEN 256 /**< Max len of a pool name */ > > /* > * internal configuration structure for the number, size and Shouldn't we have the same length than RTE_MEMPOOL_OPS_NAMESIZE? I think we should try to avoid introducing new defines without RTE_ prefix. > @@ -84,6 +85,7 @@ struct internal_config { > const char *hugepage_dir; /**< specific hugetlbfs directory to use */ > > unsigned num_hugepage_sizes; /**< how many sizes on this system */ > + char mp_name[MAX_POOL_NAME_LEN]; /**< mempool handle name */ > struct hugepage_info hugepage_info[MAX_HUGEPAGE_SIZES]; > }; If it's in internal config, I think we can use a char pointer instead of a table. What is the expected behavior in case of multiprocess? > extern struct internal_config internal_config; /**< Global EAL configuration. 
*/ > diff --git a/lib/librte_eal/common/eal_options.h b/lib/librte_eal/common/eal_options.h > index a881c62e2..4e52ee255 100644 > --- a/lib/librte_eal/common/eal_options.h > +++ b/lib/librte_eal/common/eal_options.h > @@ -83,6 +83,8 @@ enum { > OPT_VMWARE_TSC_MAP_NUM, > #define OPT_XEN_DOM0 "xen-dom0" > OPT_XEN_DOM0_NUM, > +#define OPT_PKT_MEMPOOL "pkt-mempool" > + OPT_PKT_MEMPOOL_NUM, > OPT_LONG_MAX_NUM > }; > While "pkt-mempool" is probably ok, here are some other suggestions, in case you feel it's clearer: pkt-pool-ops mbuf-pool-ops mbuf-default-mempool-ops (too long, but consistent with #define and api) > diff --git a/lib/librte_eal/common/include/rte_eal.h b/lib/librte_eal/common/include/rte_eal.h > index abf020bf9..c2f696a3d 100644 > --- a/lib/librte_eal/common/include/rte_eal.h > +++ b/lib/librte_eal/common/include/rte_eal.h > @@ -283,6 +283,15 @@ static inline int rte_gettid(void) > return RTE_PER_LCORE(_thread_id); > } > > +/** > + * Get mempool name from cmdline. > + * > + * @return > + * On success, returns the pool name. > + * On Failure, returs NULL. 
> + */ > +char *rte_eal_get_mp_name(void); > + > #define RTE_INIT(func) \ > static void __attribute__((constructor, used)) func(void) > > diff --git a/lib/librte_eal/linuxapp/eal/eal.c b/lib/librte_eal/linuxapp/eal/eal.c > index 7c78f2dc2..b2a6c8068 100644 > --- a/lib/librte_eal/linuxapp/eal/eal.c > +++ b/lib/librte_eal/linuxapp/eal/eal.c > @@ -122,6 +122,15 @@ struct internal_config internal_config; > /* used by rte_rdtsc() */ > int rte_cycles_vmware_tsc_map; > > +char * > +rte_eal_get_mp_name(void) > +{ > + if (internal_config.mp_name[0] == 0x0) > + return NULL; > + else > + return internal_config.mp_name; > +} > + > /* Return a pointer to the configuration structure */ > struct rte_config * > rte_eal_get_configuration(void) > @@ -478,6 +487,23 @@ eal_parse_vfio_intr(const char *mode) > return -1; > } > > +static int > +eal_parse_mp_name(const char *name) > +{ > + int len; > + > + if (name == NULL) > + return -1; > + > + len = strlen(name); > + if (len >= MAX_POOL_NAME_LEN) > + return -1; > + > + strcpy(internal_config.mp_name, name); > + > + return 0; > +} > + Why is it linuxapp only? Using strcpy() is not a good habit, I suggest to use snprintf instead. Also, prefer to use sizeof() instead of using the #define size. 
ret = snprintf(internal_config.mp_name, sizeof(internal_config.mp_name), "%s", name); if (ret < 0 || ret >= sizeof(internal_config.mp_name)) return -1; return 0; > /* Parse the arguments for --log-level only */ > static void > eal_log_level_parse(int argc, char **argv) > @@ -611,6 +637,16 @@ eal_parse_args(int argc, char **argv) > internal_config.create_uio_dev = 1; > break; > > + case OPT_PKT_MEMPOOL_NUM: > + if (eal_parse_mp_name(optarg) < 0) { > + RTE_LOG(ERR, EAL, "invalid parameters for --" > + OPT_PKT_MEMPOOL "\n"); > + eal_usage(prgname); > + ret = -1; > + goto out; > + } > + break; > + > default: > if (opt < OPT_LONG_MIN_NUM && isprint(opt)) { > RTE_LOG(ERR, EAL, "Option %c is not supported " > diff --git a/lib/librte_eal/linuxapp/eal/rte_eal_version.map b/lib/librte_eal/linuxapp/eal/rte_eal_version.map > index 670bab3a5..e57330bec 100644 > --- a/lib/librte_eal/linuxapp/eal/rte_eal_version.map > +++ b/lib/librte_eal/linuxapp/eal/rte_eal_version.map > @@ -198,3 +198,10 @@ DPDK_17.05 { > vfio_get_group_no; > > } DPDK_17.02; > + > +DPDK_17.08 { > + global: > + > + rte_eal_get_mp_name; > + > +} DPDK_17.05 > diff --git a/lib/librte_mbuf/rte_mbuf.c b/lib/librte_mbuf/rte_mbuf.c > index 0e3e36a58..38f4b3de0 100644 > --- a/lib/librte_mbuf/rte_mbuf.c > +++ b/lib/librte_mbuf/rte_mbuf.c > @@ -158,6 +158,7 @@ rte_pktmbuf_pool_create(const char *name, unsigned n, > { > struct rte_mempool *mp; > struct rte_pktmbuf_pool_private mbp_priv; > + const char *mp_name = NULL; > unsigned elt_size; > int ret; > > @@ -177,8 +178,11 @@ rte_pktmbuf_pool_create(const char *name, unsigned n, > if (mp == NULL) > return NULL; > > - ret = rte_mempool_set_ops_byname(mp, > - RTE_MBUF_DEFAULT_MEMPOOL_OPS, NULL); > + mp_name = rte_eal_get_mp_name(); > + if (mp_name == NULL) > + mp_name = RTE_MBUF_DEFAULT_MEMPOOL_OPS; > + > + ret = rte_mempool_set_ops_byname(mp, mp_name, NULL); > if (ret != 0) { > RTE_LOG(ERR, MBUF, "error setting mempool handler\n"); > rte_mempool_free(mp); If NULL is never 
returned, the code can be simplified a bit. Olivier ^ permalink raw reply [flat|nested] 72+ messages in thread
* Re: [dpdk-dev] [PATCH 1/2] eal: Introducing option to set mempool handle 2017-06-30 14:12 ` Olivier Matz @ 2017-07-04 12:33 ` santosh 0 siblings, 0 replies; 72+ messages in thread From: santosh @ 2017-07-04 12:33 UTC (permalink / raw) To: Olivier Matz; +Cc: dev, hemant.agrawal, jerin.jacob Hi Olivier, On Friday 30 June 2017 07:42 PM, Olivier Matz wrote: > Hi Santosh, > > On Thu, 1 Jun 2017 13:35:58 +0530, Santosh Shukla <santosh.shukla@caviumnetworks.com> wrote: >> Platform can have external PCI cards like Intel 40G card and >> Integrated NIC like OcteoTX. Where both NIC has their >> preferred pool handle. Example: Intel 40G NIC preferred pool >> is ring_mp_mc and OcteonTX preferred pool handle would be >> ext-mempool's handle named 'octeontx-fpavf'. >> >> There is no way that either of NIC's could use their choice >> of mempool handle. >> >> Because Currently mempool handle programmed in static way i.e. >> User has to set pool handle name in CONFIG_RTE_MEMPOOL_OPS_DEFAULT='' >> >> So introducing eal option --pkt-mempool="". 
>> >> Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com> >> --- >> lib/librte_eal/bsdapp/eal/eal.c | 9 +++++++ >> lib/librte_eal/bsdapp/eal/rte_eal_version.map | 7 +++++ >> lib/librte_eal/common/eal_common_options.c | 3 +++ >> lib/librte_eal/common/eal_internal_cfg.h | 2 ++ >> lib/librte_eal/common/eal_options.h | 2 ++ >> lib/librte_eal/common/include/rte_eal.h | 9 +++++++ >> lib/librte_eal/linuxapp/eal/eal.c | 36 +++++++++++++++++++++++++ >> lib/librte_eal/linuxapp/eal/rte_eal_version.map | 7 +++++ >> lib/librte_mbuf/rte_mbuf.c | 8 ++++-- >> 9 files changed, 81 insertions(+), 2 deletions(-) >> >> diff --git a/lib/librte_eal/bsdapp/eal/eal.c b/lib/librte_eal/bsdapp/eal/eal.c >> index 05f0c1f90..7d8824707 100644 >> --- a/lib/librte_eal/bsdapp/eal/eal.c >> +++ b/lib/librte_eal/bsdapp/eal/eal.c >> @@ -113,6 +113,15 @@ struct internal_config internal_config; >> /* used by rte_rdtsc() */ >> int rte_cycles_vmware_tsc_map; >> >> +char *__rte_unused > why __rte_unused? > I'll also add a const ok. >> +rte_eal_get_mp_name(void) > I suggest rte_eal_mbuf_default_mempool_ops() > It's longer but it shows it's only about mbufs, and it is consistent > with the #define. > no strong opinion. In v2. >> +{ >> + if (internal_config.mp_name[0] == 0x0) >> + return NULL; >> + else >> + return internal_config.mp_name; >> +} >> + > Would returning RTE_MBUF_DEFAULT_MEMPOOL_OPS instead of NULL > make sense? > ok. 
>> /* Return a pointer to the configuration structure */ >> struct rte_config * >> rte_eal_get_configuration(void) >> diff --git a/lib/librte_eal/bsdapp/eal/rte_eal_version.map b/lib/librte_eal/bsdapp/eal/rte_eal_version.map >> index 2e48a7366..a1e9ad95f 100644 >> --- a/lib/librte_eal/bsdapp/eal/rte_eal_version.map >> +++ b/lib/librte_eal/bsdapp/eal/rte_eal_version.map >> @@ -193,3 +193,10 @@ DPDK_17.05 { >> vfio_get_group_no; >> >> } DPDK_17.02; >> + >> +DPDK_17.08 { >> + global: >> + >> + rte_eal_get_mp_name; >> + >> +} DPDK_17.05; >> diff --git a/lib/librte_eal/common/eal_common_options.c b/lib/librte_eal/common/eal_common_options.c >> index f470195f3..1c147a696 100644 >> --- a/lib/librte_eal/common/eal_common_options.c >> +++ b/lib/librte_eal/common/eal_common_options.c >> @@ -95,6 +95,7 @@ eal_long_options[] = { >> {OPT_VFIO_INTR, 1, NULL, OPT_VFIO_INTR_NUM }, >> {OPT_VMWARE_TSC_MAP, 0, NULL, OPT_VMWARE_TSC_MAP_NUM }, >> {OPT_XEN_DOM0, 0, NULL, OPT_XEN_DOM0_NUM }, >> + {OPT_PKT_MEMPOOL, 1, NULL, OPT_PKT_MEMPOOL_NUM }, >> {0, 0, NULL, 0 } >> }; >> >> @@ -161,6 +162,7 @@ eal_reset_internal_config(struct internal_config *internal_cfg) >> #endif >> internal_cfg->vmware_tsc_map = 0; >> internal_cfg->create_uio_dev = 0; >> + memset(&internal_cfg->mp_name[0], 0x0, MAX_POOL_NAME_LEN); >> } >> >> static int > Or it could be initialized to RTE_MBUF_DEFAULT_MEMPOOL_OPS here? even better, ok. 
>> @@ -1083,5 +1085,6 @@ eal_common_usage(void) >> " --"OPT_NO_PCI" Disable PCI\n" >> " --"OPT_NO_HPET" Disable HPET\n" >> " --"OPT_NO_SHCONF" No shared config (mmap'd files)\n" >> + " --"OPT_PKT_MEMPOOL" Use pool name as mempool for port\n" >> "\n", RTE_MAX_LCORE); >> } >> diff --git a/lib/librte_eal/common/eal_internal_cfg.h b/lib/librte_eal/common/eal_internal_cfg.h >> index 7b7e8c887..b8fedd2e6 100644 >> --- a/lib/librte_eal/common/eal_internal_cfg.h >> +++ b/lib/librte_eal/common/eal_internal_cfg.h >> @@ -43,6 +43,7 @@ >> #include <rte_pci_dev_feature_defs.h> >> >> #define MAX_HUGEPAGE_SIZES 3 /**< support up to 3 page sizes */ >> +#define MAX_POOL_NAME_LEN 256 /**< Max len of a pool name */ >> >> /* >> * internal configuration structure for the number, size and > Shouldn't we have the same length than RTE_MEMPOOL_OPS_NAMESIZE? > I think we should try to avoid introducing new defines without > RTE_ prefix. > ok. Hoping that future pool_handle name don't cross >32 limit. >> @@ -84,6 +85,7 @@ struct internal_config { >> const char *hugepage_dir; /**< specific hugetlbfs directory to use */ >> >> unsigned num_hugepage_sizes; /**< how many sizes on this system */ >> + char mp_name[MAX_POOL_NAME_LEN]; /**< mempool handle name */ >> struct hugepage_info hugepage_info[MAX_HUGEPAGE_SIZES]; >> }; > If it's in internal config, I think we can use a char pointer > instead of a table. > > What is the expected behavior in case of multiprocess? > ok. >> extern struct internal_config internal_config; /**< Global EAL configuration. 
*/ >> diff --git a/lib/librte_eal/common/eal_options.h b/lib/librte_eal/common/eal_options.h >> index a881c62e2..4e52ee255 100644 >> --- a/lib/librte_eal/common/eal_options.h >> +++ b/lib/librte_eal/common/eal_options.h >> @@ -83,6 +83,8 @@ enum { >> OPT_VMWARE_TSC_MAP_NUM, >> #define OPT_XEN_DOM0 "xen-dom0" >> OPT_XEN_DOM0_NUM, >> +#define OPT_PKT_MEMPOOL "pkt-mempool" >> + OPT_PKT_MEMPOOL_NUM, >> OPT_LONG_MAX_NUM >> }; >> > While "pkt-mempool" is probably ok, here are some other suggestions, > in case you feel it's clearer: > > pkt-pool-ops > mbuf-pool-ops > mbuf-default-mempool-ops (too long, but consistent with #define and api) > No strong opinion on naming convention. I'll choose last one, for consistency reasons. >> diff --git a/lib/librte_eal/common/include/rte_eal.h b/lib/librte_eal/common/include/rte_eal.h >> index abf020bf9..c2f696a3d 100644 >> --- a/lib/librte_eal/common/include/rte_eal.h >> +++ b/lib/librte_eal/common/include/rte_eal.h >> @@ -283,6 +283,15 @@ static inline int rte_gettid(void) >> return RTE_PER_LCORE(_thread_id); >> } >> >> +/** >> + * Get mempool name from cmdline. >> + * >> + * @return >> + * On success, returns the pool name. >> + * On Failure, returs NULL. 
>> + */ >> +char *rte_eal_get_mp_name(void); >> + >> #define RTE_INIT(func) \ >> static void __attribute__((constructor, used)) func(void) >> >> diff --git a/lib/librte_eal/linuxapp/eal/eal.c b/lib/librte_eal/linuxapp/eal/eal.c >> index 7c78f2dc2..b2a6c8068 100644 >> --- a/lib/librte_eal/linuxapp/eal/eal.c >> +++ b/lib/librte_eal/linuxapp/eal/eal.c >> @@ -122,6 +122,15 @@ struct internal_config internal_config; >> /* used by rte_rdtsc() */ >> int rte_cycles_vmware_tsc_map; >> >> +char * >> +rte_eal_get_mp_name(void) >> +{ >> + if (internal_config.mp_name[0] == 0x0) >> + return NULL; >> + else >> + return internal_config.mp_name; >> +} >> + >> /* Return a pointer to the configuration structure */ >> struct rte_config * >> rte_eal_get_configuration(void) >> @@ -478,6 +487,23 @@ eal_parse_vfio_intr(const char *mode) >> return -1; >> } >> >> +static int >> +eal_parse_mp_name(const char *name) >> +{ >> + int len; >> + >> + if (name == NULL) >> + return -1; >> + >> + len = strlen(name); >> + if (len >= MAX_POOL_NAME_LEN) >> + return -1; >> + >> + strcpy(internal_config.mp_name, name); >> + >> + return 0; >> +} >> + > Why is it linuxapp only? We'll add in v2. > Using strcpy() is not a good habbit, I suggest to use snprintf > instead. Also, prefer to use sizeof() instead of using the #define > size. > > > ret = snprintf(internal_config.mp_name, sizeof(internal_config.mp_name), > "%s", name); > if (ret < 0 || ret >= sizeof(internal_config.mp_name)) > return -1; > return 0; > in v2. 
>> /* Parse the arguments for --log-level only */ >> static void >> eal_log_level_parse(int argc, char **argv) >> @@ -611,6 +637,16 @@ eal_parse_args(int argc, char **argv) >> internal_config.create_uio_dev = 1; >> break; >> >> + case OPT_PKT_MEMPOOL_NUM: >> + if (eal_parse_mp_name(optarg) < 0) { >> + RTE_LOG(ERR, EAL, "invalid parameters for --" >> + OPT_PKT_MEMPOOL "\n"); >> + eal_usage(prgname); >> + ret = -1; >> + goto out; >> + } >> + break; >> + >> default: >> if (opt < OPT_LONG_MIN_NUM && isprint(opt)) { >> RTE_LOG(ERR, EAL, "Option %c is not supported " >> diff --git a/lib/librte_eal/linuxapp/eal/rte_eal_version.map b/lib/librte_eal/linuxapp/eal/rte_eal_version.map >> index 670bab3a5..e57330bec 100644 >> --- a/lib/librte_eal/linuxapp/eal/rte_eal_version.map >> +++ b/lib/librte_eal/linuxapp/eal/rte_eal_version.map >> @@ -198,3 +198,10 @@ DPDK_17.05 { >> vfio_get_group_no; >> >> } DPDK_17.02; >> + >> +DPDK_17.08 { >> + global: >> + >> + rte_eal_get_mp_name; >> + >> +} DPDK_17.05 >> diff --git a/lib/librte_mbuf/rte_mbuf.c b/lib/librte_mbuf/rte_mbuf.c >> index 0e3e36a58..38f4b3de0 100644 >> --- a/lib/librte_mbuf/rte_mbuf.c >> +++ b/lib/librte_mbuf/rte_mbuf.c >> @@ -158,6 +158,7 @@ rte_pktmbuf_pool_create(const char *name, unsigned n, >> { >> struct rte_mempool *mp; >> struct rte_pktmbuf_pool_private mbp_priv; >> + const char *mp_name = NULL; >> unsigned elt_size; >> int ret; >> >> @@ -177,8 +178,11 @@ rte_pktmbuf_pool_create(const char *name, unsigned n, >> if (mp == NULL) >> return NULL; >> >> - ret = rte_mempool_set_ops_byname(mp, >> - RTE_MBUF_DEFAULT_MEMPOOL_OPS, NULL); >> + mp_name = rte_eal_get_mp_name(); >> + if (mp_name == NULL) >> + mp_name = RTE_MBUF_DEFAULT_MEMPOOL_OPS; >> + >> + ret = rte_mempool_set_ops_byname(mp, mp_name, NULL); >> if (ret != 0) { >> RTE_LOG(ERR, MBUF, "error setting mempool handler\n"); >> rte_mempool_free(mp); > If NULL is never returned, the code can be simplified a bit. Yes. Thanks for review comment. 
> > Olivier ^ permalink raw reply [flat|nested] 72+ messages in thread
* [dpdk-dev] [PATCH 2/2] ether/ethdev: Allow pmd to advertise preferred pool capability 2017-06-01 8:05 [dpdk-dev] [PATCH 0/2] Allow application set mempool handle Santosh Shukla 2017-06-01 8:05 ` [dpdk-dev] [PATCH 1/2] eal: Introducing option to " Santosh Shukla @ 2017-06-01 8:05 ` Santosh Shukla 2017-06-30 14:13 ` Olivier Matz 2017-06-19 11:52 ` [dpdk-dev] [PATCH 0/2] Allow application set mempool handle Hemant Agrawal 2017-07-20 7:06 ` [dpdk-dev] [PATCH v2 0/2] Dynamically configure " Santosh Shukla 3 siblings, 1 reply; 72+ messages in thread From: Santosh Shukla @ 2017-06-01 8:05 UTC (permalink / raw) To: olivier.matz, dev; +Cc: hemant.agrawal, jerin.jacob, Santosh Shukla A platform with two different NICs, like an external PCI NIC and an integrated NIC, may want to use their preferred pool handles. Right now there is no way that two different NICs on the same board could use their choice of a pool; both NICs are forced to use the same pool, which is statically configured by setting CONFIG_RTE_MEMPOOL_DEFAULT_OPS=<pool-name>. So introducing the get_preferred_pool() API, which allows a PMD driver to advertise its pool capability to the application. Based on that hint, the application creates a separate pool for that driver.
Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com> --- lib/librte_ether/rte_ethdev.c | 16 ++++++++++++++++ lib/librte_ether/rte_ethdev.h | 21 +++++++++++++++++++++ lib/librte_ether/rte_ether_version.map | 7 +++++++ 3 files changed, 44 insertions(+) diff --git a/lib/librte_ether/rte_ethdev.c b/lib/librte_ether/rte_ethdev.c index 83898a8f7..4068a05b1 100644 --- a/lib/librte_ether/rte_ethdev.c +++ b/lib/librte_ether/rte_ethdev.c @@ -3472,3 +3472,19 @@ rte_eth_dev_l2_tunnel_offload_set(uint8_t port_id, -ENOTSUP); return (*dev->dev_ops->l2_tunnel_offload_set)(dev, l2_tunnel, mask, en); } + +int +rte_eth_dev_get_preferred_pool(uint8_t port_id, const char *pool) +{ + struct rte_eth_dev *dev; + + RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV); + + dev = &rte_eth_devices[port_id]; + + if (*dev->dev_ops->get_preferred_pool == NULL) { + pool = RTE_MBUF_DEFAULT_MEMPOOL_OPS; + return 0; + } + return (*dev->dev_ops->get_preferred_pool)(dev, pool); +} diff --git a/lib/librte_ether/rte_ethdev.h b/lib/librte_ether/rte_ethdev.h index 0f38b45f8..8e5b06af7 100644 --- a/lib/librte_ether/rte_ethdev.h +++ b/lib/librte_ether/rte_ethdev.h @@ -1381,6 +1381,10 @@ typedef int (*eth_l2_tunnel_offload_set_t) uint8_t en); /**< @internal enable/disable the l2 tunnel offload functions */ +typedef int (*eth_get_preferred_pool_t)(struct rte_eth_dev *dev, + const char *pool); +/**< @internal Get preferred pool handler for a device */ + #ifdef RTE_NIC_BYPASS enum { @@ -1573,6 +1577,8 @@ struct eth_dev_ops { /**< Get extended device statistic values by ID. */ eth_xstats_get_names_by_id_t xstats_get_names_by_id; /**< Get name of extended device statistics by ID. 
*/ + eth_get_preferred_pool_t get_preferred_pool; + /**< Get preferred pool handler for a device */ }; /** @@ -4607,6 +4613,21 @@ rte_eth_dev_get_port_by_name(const char *name, uint8_t *port_id); int rte_eth_dev_get_name_by_port(uint8_t port_id, char *name); +/** + * Get preferred pool handle for a device + * + * @param port_id + * port identifier of the device + * @param [out] pool + * Preferred pool handle for this device. + * Pool len shouldn't more than 256B. Allocated by pmd driver. + * @return + * - (0) if successful. + * - (-EINVAL) on failure. + */ +int +rte_eth_dev_get_preferred_pool(uint8_t port_id, const char *pool); + #ifdef __cplusplus } #endif diff --git a/lib/librte_ether/rte_ether_version.map b/lib/librte_ether/rte_ether_version.map index d6726bb1b..819fe800e 100644 --- a/lib/librte_ether/rte_ether_version.map +++ b/lib/librte_ether/rte_ether_version.map @@ -156,3 +156,10 @@ DPDK_17.05 { rte_eth_xstats_get_names_by_id; } DPDK_17.02; + +DPDK_17.08 { + global: + + rte_eth_dev_get_preferred_pool; + +} DPDK_17.05; -- 2.11.0 ^ permalink raw reply [flat|nested] 72+ messages in thread
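For readers following the patch, the fallback behaviour above can be modelled outside of DPDK. The sketch below is a stand-alone simplification — the types and names are stubs, not the real rte_ethdev structures, and the driver op is hypothetical — of what rte_eth_dev_get_preferred_pool() does depending on whether a driver implements the op:

```c
#include <assert.h>
#include <string.h>

/* Stand-in for RTE_MBUF_DEFAULT_MEMPOOL_OPS. */
#define DEFAULT_MEMPOOL_OPS "ring_mp_mc"

struct eth_dev;
typedef int (*get_preferred_pool_t)(struct eth_dev *dev, char *pool);

/* Greatly simplified stand-in for struct rte_eth_dev. */
struct eth_dev {
	get_preferred_pool_t get_preferred_pool; /* NULL if not implemented */
};

/* Mirrors the logic of rte_eth_dev_get_preferred_pool() in the patch.
 * Note the output buffer is writable (char *) here; the const qualifier
 * on the parameter in the posted patch is questioned later in the thread. */
static int
dev_get_preferred_pool(struct eth_dev *dev, char *pool)
{
	if (dev->get_preferred_pool == NULL) {
		/* Driver has no preference: report the default ops. */
		strcpy(pool, DEFAULT_MEMPOOL_OPS);
		return 0;
	}
	return dev->get_preferred_pool(dev, pool);
}

/* Hypothetical driver op for an integrated NIC with a HW-backed pool. */
static int
hw_nic_get_preferred_pool(struct eth_dev *dev, char *pool)
{
	(void)dev;
	strcpy(pool, "octeontx_fpavf");
	return 0;
}
```

With a driver that leaves the op NULL the caller gets the default ops name back; with the stub HW-NIC op it gets that driver's preference.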
* Re: [dpdk-dev] [PATCH 2/2] ether/ethdev: Allow pmd to advertise preferred pool capability 2017-06-01 8:05 ` [dpdk-dev] [PATCH 2/2] ether/ethdev: Allow pmd to advertise preferred pool capability Santosh Shukla @ 2017-06-30 14:13 ` Olivier Matz 2017-07-04 12:39 ` santosh 0 siblings, 1 reply; 72+ messages in thread From: Olivier Matz @ 2017-06-30 14:13 UTC (permalink / raw) To: Santosh Shukla; +Cc: dev, hemant.agrawal, jerin.jacob On Thu, 1 Jun 2017 13:35:59 +0530, Santosh Shukla <santosh.shukla@caviumnetworks.com> wrote: > Platform with two different NICs like external PCI NIC and > Integrated NIC, May want to use their preferred pool handle. > Right now there is no way that two different NICs on same board, > Could use their choice of a pool. > Both NICs forced to use same pool, Which is statically configured > by setting CONFIG_RTE_MEMPOOL_DEFAULT_OPS=<pool-name>. > > So Introducing get_preferred_pool() API. Which allows PMD driver > to advertise their pool capability to Application. > Based on that hint, Application creates separate pool for > That driver. 
> > Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com> > --- > lib/librte_ether/rte_ethdev.c | 16 ++++++++++++++++ > lib/librte_ether/rte_ethdev.h | 21 +++++++++++++++++++++ > lib/librte_ether/rte_ether_version.map | 7 +++++++ > 3 files changed, 44 insertions(+) > > diff --git a/lib/librte_ether/rte_ethdev.c b/lib/librte_ether/rte_ethdev.c > index 83898a8f7..4068a05b1 100644 > --- a/lib/librte_ether/rte_ethdev.c > +++ b/lib/librte_ether/rte_ethdev.c > @@ -3472,3 +3472,19 @@ rte_eth_dev_l2_tunnel_offload_set(uint8_t port_id, > -ENOTSUP); > return (*dev->dev_ops->l2_tunnel_offload_set)(dev, l2_tunnel, mask, en); > } > + > +int > +rte_eth_dev_get_preferred_pool(uint8_t port_id, const char *pool) > +{ > + struct rte_eth_dev *dev; > + > + RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV); > + > + dev = &rte_eth_devices[port_id]; > + > + if (*dev->dev_ops->get_preferred_pool == NULL) { > + pool = RTE_MBUF_DEFAULT_MEMPOOL_OPS; > + return 0; > + } > + return (*dev->dev_ops->get_preferred_pool)(dev, pool); > +} Instead of this, what about: /* * Return values: * - -ENOTSUP: error, pool type is not supported * - on success, return the priority of the mempool (0 = highest) */ int rte_eth_dev_pool_ops_supported(uint8_t port_id, const char *pool) By default, always return 0 (i.e. all pools are supported). With this API, we can announce several supported pools (not only one preferred), and order them by preference. I also wonder if we should use a ops_index instead of a pool name for the second argument. 
> diff --git a/lib/librte_ether/rte_ethdev.h b/lib/librte_ether/rte_ethdev.h > index 0f38b45f8..8e5b06af7 100644 > --- a/lib/librte_ether/rte_ethdev.h > +++ b/lib/librte_ether/rte_ethdev.h > @@ -1381,6 +1381,10 @@ typedef int (*eth_l2_tunnel_offload_set_t) > uint8_t en); > /**< @internal enable/disable the l2 tunnel offload functions */ > > +typedef int (*eth_get_preferred_pool_t)(struct rte_eth_dev *dev, > + const char *pool); > +/**< @internal Get preferred pool handler for a device */ > + > #ifdef RTE_NIC_BYPASS > > enum { > @@ -1573,6 +1577,8 @@ struct eth_dev_ops { > /**< Get extended device statistic values by ID. */ > eth_xstats_get_names_by_id_t xstats_get_names_by_id; > /**< Get name of extended device statistics by ID. */ > + eth_get_preferred_pool_t get_preferred_pool; > + /**< Get preferred pool handler for a device */ > }; > > /** > @@ -4607,6 +4613,21 @@ rte_eth_dev_get_port_by_name(const char *name, uint8_t *port_id); > int > rte_eth_dev_get_name_by_port(uint8_t port_id, char *name); > > +/** > + * Get preferred pool handle for a device > + * > + * @param port_id > + * port identifier of the device > + * @param [out] pool > + * Preferred pool handle for this device. > + * Pool len shouldn't more than 256B. Allocated by pmd driver. [out] ?? I don't get why it is allocated by the driver > + * @return > + * - (0) if successful. > + * - (-EINVAL) on failure. > + */ > +int > +rte_eth_dev_get_preferred_pool(uint8_t port_id, const char *pool); > + > #ifdef __cplusplus > } > #endif > diff --git a/lib/librte_ether/rte_ether_version.map b/lib/librte_ether/rte_ether_version.map > index d6726bb1b..819fe800e 100644 > --- a/lib/librte_ether/rte_ether_version.map > +++ b/lib/librte_ether/rte_ether_version.map > @@ -156,3 +156,10 @@ DPDK_17.05 { > rte_eth_xstats_get_names_by_id; > > } DPDK_17.02; > + > +DPDK_17.08 { > + global: > + > + rte_eth_dev_get_preferred_pool; > + > +} DPDK_17.05; ^ permalink raw reply [flat|nested] 72+ messages in thread
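Olivier's alternative can be prototyped the same way. In the sketch below — entirely hypothetical, with the probe callback standing in for the proposed rte_eth_dev_pool_ops_supported() and the candidate names purely illustrative — the application offers a list of pool ops names and keeps the one the port ranks best (0 = highest priority, -ENOTSUP = unsupported):

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>
#include <string.h>

/* Stand-in for the proposed per-port probe: returns the priority of the
 * given pool ops name for the port (0 = highest), or -ENOTSUP. */
typedef int (*pool_ops_supported_t)(const char *pool);

/* Pick the supported candidate with the best (lowest) priority value.
 * Returns its index, or -1 if the port supports none of them. */
static int
best_pool_ops(pool_ops_supported_t supported, const char **candidates,
	      size_t n)
{
	int best = -1, best_prio = 0;
	size_t i;

	for (i = 0; i < n; i++) {
		int prio = supported(candidates[i]);

		if (prio < 0)
			continue; /* -ENOTSUP: not usable on this port */
		if (best == -1 || prio < best_prio) {
			best = (int)i;
			best_prio = prio;
		}
	}
	return best;
}

/* Example port: a HW pool ranked first, the SW ring supported but worse. */
static int
example_port_supported(const char *pool)
{
	if (strcmp(pool, "dpaa2") == 0)
		return 0;
	if (strcmp(pool, "ring_mp_mc") == 0)
		return 1;
	return -ENOTSUP;
}
```

This is the property Olivier is after: the PMD can rank several usable pools instead of naming exactly one, and the application still makes the final choice.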
* Re: [dpdk-dev] [PATCH 2/2] ether/ethdev: Allow pmd to advertise preferred pool capability 2017-06-30 14:13 ` Olivier Matz @ 2017-07-04 12:39 ` santosh 2017-07-04 13:07 ` Olivier Matz 0 siblings, 1 reply; 72+ messages in thread From: santosh @ 2017-07-04 12:39 UTC (permalink / raw) To: Olivier Matz; +Cc: dev, hemant.agrawal, jerin.jacob On Friday 30 June 2017 07:43 PM, Olivier Matz wrote: > On Thu, 1 Jun 2017 13:35:59 +0530, Santosh Shukla <santosh.shukla@caviumnetworks.com> wrote: >> Platform with two different NICs like external PCI NIC and >> Integrated NIC, May want to use their preferred pool handle. >> Right now there is no way that two different NICs on same board, >> Could use their choice of a pool. >> Both NICs forced to use same pool, Which is statically configured >> by setting CONFIG_RTE_MEMPOOL_DEFAULT_OPS=<pool-name>. >> >> So Introducing get_preferred_pool() API. Which allows PMD driver >> to advertise their pool capability to Application. >> Based on that hint, Application creates separate pool for >> That driver. 
>> >> Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com> >> --- >> lib/librte_ether/rte_ethdev.c | 16 ++++++++++++++++ >> lib/librte_ether/rte_ethdev.h | 21 +++++++++++++++++++++ >> lib/librte_ether/rte_ether_version.map | 7 +++++++ >> 3 files changed, 44 insertions(+) >> >> diff --git a/lib/librte_ether/rte_ethdev.c b/lib/librte_ether/rte_ethdev.c >> index 83898a8f7..4068a05b1 100644 >> --- a/lib/librte_ether/rte_ethdev.c >> +++ b/lib/librte_ether/rte_ethdev.c >> @@ -3472,3 +3472,19 @@ rte_eth_dev_l2_tunnel_offload_set(uint8_t port_id, >> -ENOTSUP); >> return (*dev->dev_ops->l2_tunnel_offload_set)(dev, l2_tunnel, mask, en); >> } >> + >> +int >> +rte_eth_dev_get_preferred_pool(uint8_t port_id, const char *pool) >> +{ >> + struct rte_eth_dev *dev; >> + >> + RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV); >> + >> + dev = &rte_eth_devices[port_id]; >> + >> + if (*dev->dev_ops->get_preferred_pool == NULL) { >> + pool = RTE_MBUF_DEFAULT_MEMPOOL_OPS; >> + return 0; >> + } >> + return (*dev->dev_ops->get_preferred_pool)(dev, pool); >> +} > Instead of this, what about: > > /* > * Return values: > * - -ENOTSUP: error, pool type is not supported > * - on success, return the priority of the mempool (0 = highest) > */ > int > rte_eth_dev_pool_ops_supported(uint8_t port_id, const char *pool) > > By default, always return 0 (i.e. all pools are supported). > > With this API, we can announce several supported pools (not only > one preferred), and order them by preference. IMO: We should let application to decide on pool preference. Driver only to advice his preferred or supported pool handle to application, and its upto application to decide on pool selection scheme. > I also wonder if we should use a ops_index instead of a pool name > for the second argument. 
> > > >> diff --git a/lib/librte_ether/rte_ethdev.h b/lib/librte_ether/rte_ethdev.h >> index 0f38b45f8..8e5b06af7 100644 >> --- a/lib/librte_ether/rte_ethdev.h >> +++ b/lib/librte_ether/rte_ethdev.h >> @@ -1381,6 +1381,10 @@ typedef int (*eth_l2_tunnel_offload_set_t) >> uint8_t en); >> /**< @internal enable/disable the l2 tunnel offload functions */ >> >> +typedef int (*eth_get_preferred_pool_t)(struct rte_eth_dev *dev, >> + const char *pool); >> +/**< @internal Get preferred pool handler for a device */ >> + >> #ifdef RTE_NIC_BYPASS >> >> enum { >> @@ -1573,6 +1577,8 @@ struct eth_dev_ops { >> /**< Get extended device statistic values by ID. */ >> eth_xstats_get_names_by_id_t xstats_get_names_by_id; >> /**< Get name of extended device statistics by ID. */ >> + eth_get_preferred_pool_t get_preferred_pool; >> + /**< Get preferred pool handler for a device */ >> }; >> >> /** >> @@ -4607,6 +4613,21 @@ rte_eth_dev_get_port_by_name(const char *name, uint8_t *port_id); >> int >> rte_eth_dev_get_name_by_port(uint8_t port_id, char *name); >> >> +/** >> + * Get preferred pool handle for a device >> + * >> + * @param port_id >> + * port identifier of the device >> + * @param [out] pool >> + * Preferred pool handle for this device. >> + * Pool len shouldn't more than 256B. Allocated by pmd driver. > [out] ?? > I don't get why it is allocated by the driver > Driver to advice his preferred pool to application. That's why out. Thanks. > >> + * @return >> + * - (0) if successful. >> + * - (-EINVAL) on failure. 
>> + */ >> +int >> +rte_eth_dev_get_preferred_pool(uint8_t port_id, const char *pool); >> + >> #ifdef __cplusplus >> } >> #endif >> diff --git a/lib/librte_ether/rte_ether_version.map b/lib/librte_ether/rte_ether_version.map >> index d6726bb1b..819fe800e 100644 >> --- a/lib/librte_ether/rte_ether_version.map >> +++ b/lib/librte_ether/rte_ether_version.map >> @@ -156,3 +156,10 @@ DPDK_17.05 { >> rte_eth_xstats_get_names_by_id; >> >> } DPDK_17.02; >> + >> +DPDK_17.08 { >> + global: >> + >> + rte_eth_dev_get_preferred_pool; >> + >> +} DPDK_17.05; ^ permalink raw reply [flat|nested] 72+ messages in thread
* Re: [dpdk-dev] [PATCH 2/2] ether/ethdev: Allow pmd to advertise preferred pool capability 2017-07-04 12:39 ` santosh @ 2017-07-04 13:07 ` Olivier Matz 2017-07-04 14:12 ` Jerin Jacob 0 siblings, 1 reply; 72+ messages in thread From: Olivier Matz @ 2017-07-04 13:07 UTC (permalink / raw) To: santosh; +Cc: dev, hemant.agrawal, jerin.jacob On Tue, 4 Jul 2017 18:09:33 +0530, santosh <santosh.shukla@caviumnetworks.com> wrote: > On Friday 30 June 2017 07:43 PM, Olivier Matz wrote: > > > On Thu, 1 Jun 2017 13:35:59 +0530, Santosh Shukla <santosh.shukla@caviumnetworks.com> wrote: > >> Platform with two different NICs like external PCI NIC and > >> Integrated NIC, May want to use their preferred pool handle. > >> Right now there is no way that two different NICs on same board, > >> Could use their choice of a pool. > >> Both NICs forced to use same pool, Which is statically configured > >> by setting CONFIG_RTE_MEMPOOL_DEFAULT_OPS=<pool-name>. > >> > >> So Introducing get_preferred_pool() API. Which allows PMD driver > >> to advertise their pool capability to Application. > >> Based on that hint, Application creates separate pool for > >> That driver. 
> >> > >> Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com> > >> --- > >> lib/librte_ether/rte_ethdev.c | 16 ++++++++++++++++ > >> lib/librte_ether/rte_ethdev.h | 21 +++++++++++++++++++++ > >> lib/librte_ether/rte_ether_version.map | 7 +++++++ > >> 3 files changed, 44 insertions(+) > >> > >> diff --git a/lib/librte_ether/rte_ethdev.c b/lib/librte_ether/rte_ethdev.c > >> index 83898a8f7..4068a05b1 100644 > >> --- a/lib/librte_ether/rte_ethdev.c > >> +++ b/lib/librte_ether/rte_ethdev.c > >> @@ -3472,3 +3472,19 @@ rte_eth_dev_l2_tunnel_offload_set(uint8_t port_id, > >> -ENOTSUP); > >> return (*dev->dev_ops->l2_tunnel_offload_set)(dev, l2_tunnel, mask, en); > >> } > >> + > >> +int > >> +rte_eth_dev_get_preferred_pool(uint8_t port_id, const char *pool) > >> +{ > >> + struct rte_eth_dev *dev; > >> + > >> + RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV); > >> + > >> + dev = &rte_eth_devices[port_id]; > >> + > >> + if (*dev->dev_ops->get_preferred_pool == NULL) { > >> + pool = RTE_MBUF_DEFAULT_MEMPOOL_OPS; > >> + return 0; > >> + } > >> + return (*dev->dev_ops->get_preferred_pool)(dev, pool); > >> +} > > Instead of this, what about: > > > > /* > > * Return values: > > * - -ENOTSUP: error, pool type is not supported > > * - on success, return the priority of the mempool (0 = highest) > > */ > > int > > rte_eth_dev_pool_ops_supported(uint8_t port_id, const char *pool) > > > > By default, always return 0 (i.e. all pools are supported). > > > > With this API, we can announce several supported pools (not only > > one preferred), and order them by preference. > > IMO: We should let application to decide on pool preference. Driver > only to advice his preferred or supported pool handle to application, > and its upto application to decide on pool selection scheme. The api I'm proposing does not prevent the application from taking the decision. 
On the contrary, it gives more clues to the application: an ordered list of supported pools, instead of just the preferred pool. > > > I also wonder if we should use a ops_index instead of a pool name > > for the second argument. > > > > > > > >> diff --git a/lib/librte_ether/rte_ethdev.h b/lib/librte_ether/rte_ethdev.h > >> index 0f38b45f8..8e5b06af7 100644 > >> --- a/lib/librte_ether/rte_ethdev.h > >> +++ b/lib/librte_ether/rte_ethdev.h > >> @@ -1381,6 +1381,10 @@ typedef int (*eth_l2_tunnel_offload_set_t) > >> uint8_t en); > >> /**< @internal enable/disable the l2 tunnel offload functions */ > >> > >> +typedef int (*eth_get_preferred_pool_t)(struct rte_eth_dev *dev, > >> + const char *pool); > >> +/**< @internal Get preferred pool handler for a device */ > >> + > >> #ifdef RTE_NIC_BYPASS > >> > >> enum { > >> @@ -1573,6 +1577,8 @@ struct eth_dev_ops { > >> /**< Get extended device statistic values by ID. */ > >> eth_xstats_get_names_by_id_t xstats_get_names_by_id; > >> /**< Get name of extended device statistics by ID. */ > >> + eth_get_preferred_pool_t get_preferred_pool; > >> + /**< Get preferred pool handler for a device */ > >> }; > >> > >> /** > >> @@ -4607,6 +4613,21 @@ rte_eth_dev_get_port_by_name(const char *name, uint8_t *port_id); > >> int > >> rte_eth_dev_get_name_by_port(uint8_t port_id, char *name); > >> > >> +/** > >> + * Get preferred pool handle for a device > >> + * > >> + * @param port_id > >> + * port identifier of the device > >> + * @param [out] pool > >> + * Preferred pool handle for this device. > >> + * Pool len shouldn't more than 256B. Allocated by pmd driver. > > [out] ?? > > I don't get why it is allocated by the driver > > > Driver to advice his preferred pool to application. That's why out. So how can it be const? Did you tried it? > > Thanks. > > > > >> + * @return > >> + * - (0) if successful. > >> + * - (-EINVAL) on failure. 
> >> + */ > >> +int > >> +rte_eth_dev_get_preferred_pool(uint8_t port_id, const char *pool); > >> + > >> #ifdef __cplusplus > >> } > >> #endif > >> diff --git a/lib/librte_ether/rte_ether_version.map b/lib/librte_ether/rte_ether_version.map > >> index d6726bb1b..819fe800e 100644 > >> --- a/lib/librte_ether/rte_ether_version.map > >> +++ b/lib/librte_ether/rte_ether_version.map > >> @@ -156,3 +156,10 @@ DPDK_17.05 { > >> rte_eth_xstats_get_names_by_id; > >> > >> } DPDK_17.02; > >> + > >> +DPDK_17.08 { > >> + global: > >> + > >> + rte_eth_dev_get_preferred_pool; > >> + > >> +} DPDK_17.05; > ^ permalink raw reply [flat|nested] 72+ messages in thread
* Re: [dpdk-dev] [PATCH 2/2] ether/ethdev: Allow pmd to advertise preferred pool capability 2017-07-04 13:07 ` Olivier Matz @ 2017-07-04 14:12 ` Jerin Jacob 0 siblings, 0 replies; 72+ messages in thread From: Jerin Jacob @ 2017-07-04 14:12 UTC (permalink / raw) To: Olivier Matz; +Cc: santosh, dev, hemant.agrawal -----Original Message----- > Date: Tue, 4 Jul 2017 15:07:14 +0200 > From: Olivier Matz <olivier.matz@6wind.com> > To: santosh <santosh.shukla@caviumnetworks.com> > Cc: dev@dpdk.org, hemant.agrawal@nxp.com, jerin.jacob@caviumnetworks.com > Subject: Re: [PATCH 2/2] ether/ethdev: Allow pmd to advertise preferred > pool capability > X-Mailer: Claws Mail 3.14.1 (GTK+ 2.24.31; x86_64-pc-linux-gnu) > > On Tue, 4 Jul 2017 18:09:33 +0530, santosh <santosh.shukla@caviumnetworks.com> wrote: > > On Friday 30 June 2017 07:43 PM, Olivier Matz wrote: Hi Olivier, > > > > >> + > > >> +int > > >> +rte_eth_dev_get_preferred_pool(uint8_t port_id, const char *pool) > > >> +{ > > >> + struct rte_eth_dev *dev; > > >> + > > >> + RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV); > > >> + > > >> + dev = &rte_eth_devices[port_id]; > > >> + > > >> + if (*dev->dev_ops->get_preferred_pool == NULL) { > > >> + pool = RTE_MBUF_DEFAULT_MEMPOOL_OPS; > > >> + return 0; > > >> + } > > >> + return (*dev->dev_ops->get_preferred_pool)(dev, pool); > > >> +} > > > Instead of this, what about: > > > > > > /* > > > * Return values: > > > * - -ENOTSUP: error, pool type is not supported > > > * - on success, return the priority of the mempool (0 = highest) > > > */ > > > int > > > rte_eth_dev_pool_ops_supported(uint8_t port_id, const char *pool) > > > > > > By default, always return 0 (i.e. all pools are supported). > > > > > > With this API, we can announce several supported pools (not only > > > one preferred), and order them by preference. > > > > IMO: We should let application to decide on pool preference. 
Driver > > only to advice his preferred or supported pool handle to application, > > and its upto application to decide on pool selection scheme. > > The api I'm proposing does not prevent the application from taking > the decision. On the contrary, it gives more clues to the application: > an ordered list of supported pools, instead of just the preferred pool. Does it complicate the mempool selection procedure from the application perspective? I have a real-world use case; we can take this as a base for brainstorming. A system with two ethdev ports: - Port 0 # traditional NIC # Preferred mempool handler: ring - Port 1 # integrated NIC # Preferred mempool handler: a_HW_based_ring Some characteristics of the HW-based ring: - TX buffer recycling is done by HW (packet allocation and free are done by HW; no software intervention is required). - It will _not_ be fast when used with a traditional NIC, as a traditional NIC does packet alloc and free in SW, which goes through the mempool per-CPU caches, unlike the HW ring solution. - So an integrated NIC with a HW-based ring does not really make sense with the SW ring handler, and the other way around too. So in this context, all the application wants to know is the preferred handler for the given ethdev port, and any other non-preferred handlers are _equally_ bad. I'm not sure what the preference for taking one or another would be if _the_ preferred handler is not available to the integrated NIC. From the application perspective, approach 1: char pref_mempool[128]; rte_eth_dev_pool_ops_supported(ethdev_port_id, pref_mempool /* out */); create_mempool_by_name(pref_mempool); eth_dev_rx_configure(pref_mempool); Approach 2 is very complicated. The first problem is an API to get the available pools. Unlike ethdev, mempool uses a compiler-constructor scheme to register mempool PMDs, and a normal build will contain every mempool PMD even when it is not used or applicable. Isn't that complicating external mempool usage from the application perspective? 
If there is any real-world use for giving a set of pools for a given eth port, then it makes sense to add the complication in the application. Does anyone have such a _real world_ use case? /Jerin ^ permalink raw reply [flat|nested] 72+ messages in thread
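Jerin's approach-1 pseudocode, filled out, would look roughly like the following. Everything here is a hedged stub sketch: get_preferred_pool(), create_mempool_by_name() and eth_rx_configure() are placeholder names for the API under discussion and for whatever pool-creation and rx-setup calls the application would really use, not actual DPDK functions.

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Stub: per-port query filling the preferred pool ops name (stands in
 * for the API discussed in this thread). */
static int
get_preferred_pool(unsigned int port_id, char *pool)
{
	/* Pretend port 0 is a traditional NIC, port 1 an integrated one. */
	strcpy(pool, port_id == 0 ? "ring_mp_mc" : "a_HW_based_ring");
	return 0;
}

struct mempool {
	char ops_name[128];
};

/* Stub for creating a packet pool backed by the named ops. */
static int
create_mempool_by_name(const char *pool, struct mempool *mp)
{
	snprintf(mp->ops_name, sizeof(mp->ops_name), "%s", pool);
	return 0;
}

/* Stub rx configuration: just records which pool feeds the port. */
static const struct mempool *rx_pool[2];

static int
eth_rx_configure(unsigned int port_id, const struct mempool *mp)
{
	rx_pool[port_id] = mp;
	return 0;
}

/* Approach 1 from the thread, per port: query, create, configure. */
static int
setup_port(unsigned int port_id, struct mempool *mp)
{
	char pref_mempool[128];

	if (get_preferred_pool(port_id, pref_mempool) != 0)
		return -1;
	if (create_mempool_by_name(pref_mempool, mp) != 0)
		return -1;
	return eth_rx_configure(port_id, mp);
}
```

The point of the sketch is the shape of the flow: one query plus one pool per port, with no need for the application to enumerate every registered mempool PMD.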
* Re: [dpdk-dev] [PATCH 0/2] Allow application set mempool handle 2017-06-01 8:05 [dpdk-dev] [PATCH 0/2] Allow application set mempool handle Santosh Shukla 2017-06-01 8:05 ` [dpdk-dev] [PATCH 1/2] eal: Introducing option to " Santosh Shukla 2017-06-01 8:05 ` [dpdk-dev] [PATCH 2/2] ether/ethdev: Allow pmd to advertise preferred pool capability Santosh Shukla @ 2017-06-19 11:52 ` Hemant Agrawal 2017-06-19 13:01 ` Jerin Jacob 2017-07-20 7:06 ` [dpdk-dev] [PATCH v2 0/2] Dynamically configure " Santosh Shukla 3 siblings, 1 reply; 72+ messages in thread From: Hemant Agrawal @ 2017-06-19 11:52 UTC (permalink / raw) To: Santosh Shukla, olivier.matz, dev; +Cc: jerin.jacob On 6/1/2017 1:35 PM, Santosh Shukla wrote: > Some platform can have two different NICs for example external PCI Intel > 40G card and Integrated NIC like vNIC/octeontx/dpaa2. > > Both NICs like to use their preferred pool e.g. external PCI card/ vNIC's > preferred pool would be the ring based pool and octeontx/dpaa2 preferred would > be ext-mempools. > Right now, Framework doesn't support such case. Only one pool can be > used across two different NIC's. For that, user has to statically set > CONFIG_RTE_MEMPOOL_DEFAULT_OPS=<pool-name>. > > So proposing two approaches: > Patch 1) Introducing eal option --pkt-mempool=<pool-name> > Patch 2) Introducing ethdev API called _get_preferred_pool(), where PMD driver > gets a chance to advertise their pool capability to the application. And based > on that hint- application creates pools for that driver. > The idea is good. it will help the vendors with hw mempool support. On a similar line, I also submitted a patch to check the existence of a mempool instance. http://dpdk.org/dev/patchwork/patch/15877/ Option 1) requires manual knowledge of underlying NIC and different commands for different machines. Option 2) this will help more as it allows the application to take decision autonomously. In addition to it, we can also extend the overall MEMPOOL_OPS support. 
3) Currently we support defining only one "RTE_MBUF_DEFAULT_MEMPOOL_OPS". This can be extended to publish a priority list of MEMPOOL_OPS in config; if one is not available, the application can try the next one in the priority list as supported by the platform. 4) We can also try something where existing applications can be supported as well: - the default mempool is configured as an alias, with empty ops; - based on the mempool detection on the bus, the bus configures the mempool ops internally with the actual ones. > Santosh Shukla (2): > eal: Introducing option to set mempool handle > ether/ethdev: Allow pmd to advertise preferred pool capability > > lib/librte_eal/bsdapp/eal/eal.c | 9 +++++++ > lib/librte_eal/bsdapp/eal/rte_eal_version.map | 7 +++++ > lib/librte_eal/common/eal_common_options.c | 3 +++ > lib/librte_eal/common/eal_internal_cfg.h | 2 ++ > lib/librte_eal/common/eal_options.h | 2 ++ > lib/librte_eal/common/include/rte_eal.h | 9 +++++++ > lib/librte_eal/linuxapp/eal/eal.c | 36 +++++++++++++++++++++++++ > lib/librte_eal/linuxapp/eal/rte_eal_version.map | 7 +++++ > lib/librte_ether/rte_ethdev.c | 16 +++++++++++ > lib/librte_ether/rte_ethdev.h | 21 +++++++++++++++ > lib/librte_ether/rte_ether_version.map | 7 +++++ > lib/librte_mbuf/rte_mbuf.c | 8 ++++-- > 12 files changed, 125 insertions(+), 2 deletions(-) > ^ permalink raw reply [flat|nested] 72+ messages in thread
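Hemant's point 3 — replacing the single RTE_MBUF_DEFAULT_MEMPOOL_OPS with a priority list — could be sketched as below. The list contents and the availability callback are illustrative stand-ins; in a real build the list would come from config and availability from the registered mempool ops table.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical compile-time priority list replacing the single
 * RTE_MBUF_DEFAULT_MEMPOOL_OPS setting. */
static const char *const default_ops_priority[] = {
	"dpaa2", "octeontx_fpavf", "ring_mp_mc",
};

/* Return the first entry the platform actually provides, or NULL.
 * The availability check is injected so the sketch stays self-contained. */
static const char *
pick_default_ops(int (*ops_available)(const char *))
{
	const size_t n = sizeof(default_ops_priority) /
			 sizeof(default_ops_priority[0]);
	size_t i;

	for (i = 0; i < n; i++)
		if (ops_available(default_ops_priority[i]))
			return default_ops_priority[i];
	return NULL;
}

/* Example platform: no HW pools registered, only the SW ring. */
static int
sw_only_platform(const char *name)
{
	return strcmp(name, "ring_mp_mc") == 0;
}
```

On a platform with only the SW ring registered, the HW entries are skipped and "ring_mp_mc" is selected; on a DPAA2 platform the first entry would win instead.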
* Re: [dpdk-dev] [PATCH 0/2] Allow application set mempool handle 2017-06-19 11:52 ` [dpdk-dev] [PATCH 0/2] Allow application set mempool handle Hemant Agrawal @ 2017-06-19 13:01 ` Jerin Jacob 2017-06-20 10:37 ` Hemant Agrawal 0 siblings, 1 reply; 72+ messages in thread From: Jerin Jacob @ 2017-06-19 13:01 UTC (permalink / raw) To: Hemant Agrawal; +Cc: Santosh Shukla, olivier.matz, dev -----Original Message----- > Date: Mon, 19 Jun 2017 17:22:46 +0530 > From: Hemant Agrawal <hemant.agrawal@nxp.com> > To: Santosh Shukla <santosh.shukla@caviumnetworks.com>, > olivier.matz@6wind.com, dev@dpdk.org > CC: jerin.jacob@caviumnetworks.com > Subject: Re: [PATCH 0/2] Allow application set mempool handle > User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:45.0) Gecko/20100101 > Thunderbird/45.8.0 > > On 6/1/2017 1:35 PM, Santosh Shukla wrote: > > Some platform can have two different NICs for example external PCI Intel > > 40G card and Integrated NIC like vNIC/octeontx/dpaa2. > > > > Both NICs like to use their preferred pool e.g. external PCI card/ vNIC's > > preferred pool would be the ring based pool and octeontx/dpaa2 preferred would > > be ext-mempools. > > Right now, Framework doesn't support such case. Only one pool can be > > used across two different NIC's. For that, user has to statically set > > CONFIG_RTE_MEMPOOL_DEFAULT_OPS=<pool-name>. > > > > So proposing two approaches: > > Patch 1) Introducing eal option --pkt-mempool=<pool-name> > > Patch 2) Introducing ethdev API called _get_preferred_pool(), where PMD driver > > gets a chance to advertise their pool capability to the application. And based > > on that hint- application creates pools for that driver. > > > The idea is good. it will help the vendors with hw mempool support. > > On a similar line, I also submitted a patch to check the existence of a > mempool instance. 
> http://dpdk.org/dev/patchwork/patch/15877/ > > Option 1) requires manual knowledge of underlying NIC and different commands > for different machines. > > Option 2) this will help more as it allows the application to take decision > autonomously. > > In addition to it, we can also extend the overall MEMPOOL_OPS support. > 3) currently we support defining only one "RTE_MBUF_DEFAULT_MEMPOOL_OPS" > this can be supported to publish a priority list of MEMPOOL_OPS in > config. if one is not available, application can try the next one in > priority list as supported by the platform. > > 4) we can also try something, where the existing application can also be > supported. > - default mempool is configured as alias. This is with empty ops. > - based on the mempool detections on the bus, the bus configure the > mempool ops internally with the actual ones. What if both HW are on PCIe bus(That the case for us)? Any scheme to fix that? > > > > Santosh Shukla (2): > > eal: Introducing option to set mempool handle > > ether/ethdev: Allow pmd to advertise preferred pool capability > > > > lib/librte_eal/bsdapp/eal/eal.c | 9 +++++++ > > lib/librte_eal/bsdapp/eal/rte_eal_version.map | 7 +++++ > > lib/librte_eal/common/eal_common_options.c | 3 +++ > > lib/librte_eal/common/eal_internal_cfg.h | 2 ++ > > lib/librte_eal/common/eal_options.h | 2 ++ > > lib/librte_eal/common/include/rte_eal.h | 9 +++++++ > > lib/librte_eal/linuxapp/eal/eal.c | 36 +++++++++++++++++++++++++ > > lib/librte_eal/linuxapp/eal/rte_eal_version.map | 7 +++++ > > lib/librte_ether/rte_ethdev.c | 16 +++++++++++ > > lib/librte_ether/rte_ethdev.h | 21 +++++++++++++++ > > lib/librte_ether/rte_ether_version.map | 7 +++++ > > lib/librte_mbuf/rte_mbuf.c | 8 ++++-- > > 12 files changed, 125 insertions(+), 2 deletions(-) > > > > ^ permalink raw reply [flat|nested] 72+ messages in thread
* Re: [dpdk-dev] [PATCH 0/2] Allow application set mempool handle 2017-06-19 13:01 ` Jerin Jacob @ 2017-06-20 10:37 ` Hemant Agrawal 2017-06-20 14:04 ` Jerin Jacob 0 siblings, 1 reply; 72+ messages in thread From: Hemant Agrawal @ 2017-06-20 10:37 UTC (permalink / raw) To: Jerin Jacob; +Cc: Santosh Shukla, olivier.matz, dev On 6/19/2017 6:31 PM, Jerin Jacob wrote: > -----Original Message----- >> Date: Mon, 19 Jun 2017 17:22:46 +0530 >> From: Hemant Agrawal <hemant.agrawal@nxp.com> >> To: Santosh Shukla <santosh.shukla@caviumnetworks.com>, >> olivier.matz@6wind.com, dev@dpdk.org >> CC: jerin.jacob@caviumnetworks.com >> Subject: Re: [PATCH 0/2] Allow application set mempool handle >> User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:45.0) Gecko/20100101 >> Thunderbird/45.8.0 >> >> On 6/1/2017 1:35 PM, Santosh Shukla wrote: >>> Some platform can have two different NICs for example external PCI Intel >>> 40G card and Integrated NIC like vNIC/octeontx/dpaa2. >>> >>> Both NICs like to use their preferred pool e.g. external PCI card/ vNIC's >>> preferred pool would be the ring based pool and octeontx/dpaa2 preferred would >>> be ext-mempools. >>> Right now, Framework doesn't support such case. Only one pool can be >>> used across two different NIC's. For that, user has to statically set >>> CONFIG_RTE_MEMPOOL_DEFAULT_OPS=<pool-name>. >>> >>> So proposing two approaches: >>> Patch 1) Introducing eal option --pkt-mempool=<pool-name> >>> Patch 2) Introducing ethdev API called _get_preferred_pool(), where PMD driver >>> gets a chance to advertise their pool capability to the application. And based >>> on that hint- application creates pools for that driver. If the system has more than one heterogeneous ethernet device with a different mempool each, the application has to create a different mempool for each ethernet device. However, let's take a case: the system has a DPAA2 eth device, which only works with dpaa2 mempools. 
The system also detects a standard PCI NIC, which can work with any software mempool (e.g. ring_mp_mc) or with the dpaa2 mempool. Given the preference, the PCI NIC will prefer the software mempool. How will the application choose between these if it wants to create only one mempool? Or, how will the scheme work if the application wants to create only one mempool? >>> >> The idea is good. it will help the vendors with hw mempool support. >> >> On a similar line, I also submitted a patch to check the existence of a >> mempool instance. >> http://dpdk.org/dev/patchwork/patch/15877/ >> >> Option 1) requires manual knowledge of underlying NIC and different commands >> for different machines. >> >> Option 2) this will help more as it allows the application to take decision >> autonomously. >> >> In addition to it, we can also extend the overall MEMPOOL_OPS support. >> 3) currently we support defining only one "RTE_MBUF_DEFAULT_MEMPOOL_OPS" >> this can be supported to publish a priority list of MEMPOOL_OPS in >> config. if one is not available, application can try the next one in >> priority list as supported by the platform. >> >> 4) we can also try something, where the existing application can also be >> supported. >> - default mempool is configured as alias. This is with empty ops. >> - based on the mempool detections on the bus, the bus configure the >> mempool ops internally with the actual ones. > > What if both HW are on PCIe bus(That the case for us)? Any scheme to fix > that? Nothing without a hackish approach. In our case, there is only one mempool type on one type of platform-specific bus. 
> > >> >> >>> Santosh Shukla (2): >>> eal: Introducing option to set mempool handle >>> ether/ethdev: Allow pmd to advertise preferred pool capability >>> >>> lib/librte_eal/bsdapp/eal/eal.c | 9 +++++++ >>> lib/librte_eal/bsdapp/eal/rte_eal_version.map | 7 +++++ >>> lib/librte_eal/common/eal_common_options.c | 3 +++ >>> lib/librte_eal/common/eal_internal_cfg.h | 2 ++ >>> lib/librte_eal/common/eal_options.h | 2 ++ >>> lib/librte_eal/common/include/rte_eal.h | 9 +++++++ >>> lib/librte_eal/linuxapp/eal/eal.c | 36 +++++++++++++++++++++++++ >>> lib/librte_eal/linuxapp/eal/rte_eal_version.map | 7 +++++ >>> lib/librte_ether/rte_ethdev.c | 16 +++++++++++ >>> lib/librte_ether/rte_ethdev.h | 21 +++++++++++++++ >>> lib/librte_ether/rte_ether_version.map | 7 +++++ >>> lib/librte_mbuf/rte_mbuf.c | 8 ++++-- >>> 12 files changed, 125 insertions(+), 2 deletions(-) >>> >> >> > ^ permalink raw reply [flat|nested] 72+ messages in thread
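Hemant's "only one mempool" case can be answered by intersecting support across ports, assuming an API in the spirit of Olivier's rte_eth_dev_pool_ops_supported(). The sketch below is a hypothetical application-side policy, with stub probe callbacks modelling the DPAA2 port and the PCI NIC from the example: keep only the candidates every port supports, then minimize the worst rank any port assigns.

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>
#include <string.h>

/* Stand-in for a per-port priority probe (0 = best, -ENOTSUP = unusable). */
typedef int (*port_probe_t)(const char *pool);

/* Hypothetical policy for "one pool for all ports": among candidates,
 * keep only those every port supports, then pick the one whose worst
 * per-port rank is smallest. Returns the candidate index or -1. */
static int
common_pool_ops(port_probe_t *ports, size_t nports,
		const char **candidates, size_t ncand)
{
	int best = -1, best_worst = 0;
	size_t c, p;

	for (c = 0; c < ncand; c++) {
		int worst = 0, usable = 1;

		for (p = 0; p < nports; p++) {
			int prio = ports[p](candidates[c]);

			if (prio < 0) {
				usable = 0; /* one port rejects it */
				break;
			}
			if (prio > worst)
				worst = prio;
		}
		if (usable && (best == -1 || worst < best_worst)) {
			best = (int)c;
			best_worst = worst;
		}
	}
	return best;
}

/* Hemant's example: a DPAA2 port that only works with dpaa2 pools... */
static int
dpaa2_port(const char *pool)
{
	return strcmp(pool, "dpaa2") == 0 ? 0 : -ENOTSUP;
}

/* ...and a PCI NIC that prefers the SW ring but can also use dpaa2. */
static int
pci_port(const char *pool)
{
	if (strcmp(pool, "ring_mp_mc") == 0)
		return 0;
	if (strcmp(pool, "dpaa2") == 0)
		return 1;
	return -ENOTSUP;
}
```

For this pair of ports the SW ring is ruled out by the DPAA2 port, so the single shared pool ends up being dpaa2 — the PCI NIC's second choice, but the only handler both can use.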
* Re: [dpdk-dev] [PATCH 0/2] Allow application set mempool handle 2017-06-20 10:37 ` Hemant Agrawal @ 2017-06-20 14:04 ` Jerin Jacob 2017-06-30 14:12 ` Olivier Matz 0 siblings, 1 reply; 72+ messages in thread From: Jerin Jacob @ 2017-06-20 14:04 UTC (permalink / raw) To: Hemant Agrawal; +Cc: Santosh Shukla, olivier.matz, dev -----Original Message----- > Date: Tue, 20 Jun 2017 16:07:17 +0530 > From: Hemant Agrawal <hemant.agrawal@nxp.com> > To: Jerin Jacob <jerin.jacob@caviumnetworks.com> > CC: Santosh Shukla <santosh.shukla@caviumnetworks.com>, > olivier.matz@6wind.com, dev@dpdk.org > Subject: Re: [PATCH 0/2] Allow application set mempool handle > User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:45.0) Gecko/20100101 > Thunderbird/45.8.0 > > On 6/19/2017 6:31 PM, Jerin Jacob wrote: > > -----Original Message----- > > > Date: Mon, 19 Jun 2017 17:22:46 +0530 > > > From: Hemant Agrawal <hemant.agrawal@nxp.com> > > > To: Santosh Shukla <santosh.shukla@caviumnetworks.com>, > > > olivier.matz@6wind.com, dev@dpdk.org > > > CC: jerin.jacob@caviumnetworks.com > > > Subject: Re: [PATCH 0/2] Allow application set mempool handle > > > User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:45.0) Gecko/20100101 > > > Thunderbird/45.8.0 > > > > > > On 6/1/2017 1:35 PM, Santosh Shukla wrote: > > > > Some platform can have two different NICs for example external PCI Intel > > > > 40G card and Integrated NIC like vNIC/octeontx/dpaa2. > > > > > > > > Both NICs like to use their preferred pool e.g. external PCI card/ vNIC's > > > > preferred pool would be the ring based pool and octeontx/dpaa2 preferred would > > > > be ext-mempools. > > > > Right now, Framework doesn't support such case. Only one pool can be > > > > used across two different NIC's. For that, user has to statically set > > > > CONFIG_RTE_MEMPOOL_DEFAULT_OPS=<pool-name>. 
> > > > > > > > So proposing two approaches: > > > > Patch 1) Introducing eal option --pkt-mempool=<pool-name> > > > > Patch 2) Introducing ethdev API called _get_preferred_pool(), where PMD driver > > > > gets a chance to advertise their pool capability to the application. And based > > > > on that hint- application creates pools for that driver. > > If the system is having more than one heterogeneous ethernet device with > different mempool, the application has to create different mempool for each > of the ethernet device. > > However, let's take a case > As system has a DPAA2 eth device, which only work with dpaa2 mempools. The dpaa2 ethdev will return the dpaa2 mempool as the preferred handler. > System also detect a standard PCI NIC, which can work with any software > mempool (e.g ring_mp_mc) or with dpaa2 mempool. Given the preference, PCI > NIC will have preferred as software mempool. > how the application will choose between these, if it want to create only one > mempool? We need to add some policy in common code to help the application choose, in case the application is interested in creating only one pool for heterogeneous cases. It is more of an application problem; ethdev can return the preferred handler and let the application choose the one it is interested in. ethdev depends on the specific mempool, not on any other object. We will provide option 1 (the eal-argument-based one) as one policy. More sophisticated policies need to be added in the application. > Or, how the scheme will work if the application want to create only one > mempool? Option 1 (eal-argument-based), or we need to change the application to choose from the available ethdev count and each device's preferred mempool handler. > > > > > > > > > The idea is good. it will help the vendors with hw mempool support. > > > > > > On a similar line, I also submitted a patch to check the existence of a > > > mempool instance. 
> > > http://dpdk.org/dev/patchwork/patch/15877/ > > > > > > Option 1) requires manual knowledge of underlying NIC and different commands > > > for different machines. > > > > > > Option 2) this will help more as it allows the application to take decision > > > autonomously. > > > > > > In addition to it, we can also extend the overall MEMPOOL_OPS support. > > > 3) currently we support defining only one "RTE_MBUF_DEFAULT_MEMPOOL_OPS" > > > this can be supported to publish a priority list of MEMPOOL_OPS in > > > config. if one is not available, application can try the next one in > > > priority list as supported by the platform. > > > > > > 4) we can also try something, where the existing application can also be > > > supported. > > > - default mempool is configured as alias. This is with empty ops. > > > - based on the mempool detections on the bus, the bus configure the > > > mempool ops internally with the actual ones. > > > > What if both HW are on PCIe bus(That the case for us)? Any scheme to fix > > that? > > Nothing without a hackish approach. In our case, there is only a one mempool > type on one type of platform specific bus. Yes. But in the future a bus may have >1 mempool device (which is already the case for PCI now). Could you send a patch for your scheme? We can review it. Maybe we can add it as one policy. I think this may help to fix the problem while avoiding application changes for specific use cases. I think it is difficult to integrate an external mempool without application change. We can add some policy like option 1 or the bus scheme, but it may not work in all the use cases without application change (option 2, the new API). So I think more policies plus the API (option 2) will fix the problem. Let the application choose what it wants. 
> > > > > > > > > > > > > > > Santosh Shukla (2): > > > > eal: Introducing option to set mempool handle > > > > ether/ethdev: Allow pmd to advertise preferred pool capability > > > > > > > > lib/librte_eal/bsdapp/eal/eal.c | 9 +++++++ > > > > lib/librte_eal/bsdapp/eal/rte_eal_version.map | 7 +++++ > > > > lib/librte_eal/common/eal_common_options.c | 3 +++ > > > > lib/librte_eal/common/eal_internal_cfg.h | 2 ++ > > > > lib/librte_eal/common/eal_options.h | 2 ++ > > > > lib/librte_eal/common/include/rte_eal.h | 9 +++++++ > > > > lib/librte_eal/linuxapp/eal/eal.c | 36 +++++++++++++++++++++++++ > > > > lib/librte_eal/linuxapp/eal/rte_eal_version.map | 7 +++++ > > > > lib/librte_ether/rte_ethdev.c | 16 +++++++++++ > > > > lib/librte_ether/rte_ethdev.h | 21 +++++++++++++++ > > > > lib/librte_ether/rte_ether_version.map | 7 +++++ > > > > lib/librte_mbuf/rte_mbuf.c | 8 ++++-- > > > > 12 files changed, 125 insertions(+), 2 deletions(-) > > > > > > > > > > > > > > ^ permalink raw reply [flat|nested] 72+ messages in thread
* Re: [dpdk-dev] [PATCH 0/2] Allow application set mempool handle 2017-06-20 14:04 ` Jerin Jacob @ 2017-06-30 14:12 ` Olivier Matz 2017-07-04 12:25 ` santosh 0 siblings, 1 reply; 72+ messages in thread From: Olivier Matz @ 2017-06-30 14:12 UTC (permalink / raw) To: Jerin Jacob; +Cc: Hemant Agrawal, Santosh Shukla, dev Hi, On Tue, 20 Jun 2017 19:34:15 +0530, Jerin Jacob <jerin.jacob@caviumnetworks.com> wrote: > -----Original Message----- > > Date: Tue, 20 Jun 2017 16:07:17 +0530 > > From: Hemant Agrawal <hemant.agrawal@nxp.com> > > To: Jerin Jacob <jerin.jacob@caviumnetworks.com> > > CC: Santosh Shukla <santosh.shukla@caviumnetworks.com>, > > olivier.matz@6wind.com, dev@dpdk.org > > Subject: Re: [PATCH 0/2] Allow application set mempool handle > > User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:45.0) Gecko/20100101 > > Thunderbird/45.8.0 > > > > On 6/19/2017 6:31 PM, Jerin Jacob wrote: > > > -----Original Message----- > > > > Date: Mon, 19 Jun 2017 17:22:46 +0530 > > > > From: Hemant Agrawal <hemant.agrawal@nxp.com> > > > > To: Santosh Shukla <santosh.shukla@caviumnetworks.com>, > > > > olivier.matz@6wind.com, dev@dpdk.org > > > > CC: jerin.jacob@caviumnetworks.com > > > > Subject: Re: [PATCH 0/2] Allow application set mempool handle > > > > User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:45.0) Gecko/20100101 > > > > Thunderbird/45.8.0 > > > > > > > > On 6/1/2017 1:35 PM, Santosh Shukla wrote: > > > > > Some platform can have two different NICs for example external PCI Intel > > > > > 40G card and Integrated NIC like vNIC/octeontx/dpaa2. > > > > > > > > > > Both NICs like to use their preferred pool e.g. external PCI card/ vNIC's > > > > > preferred pool would be the ring based pool and octeontx/dpaa2 preferred would > > > > > be ext-mempools. > > > > > Right now, Framework doesn't support such case. Only one pool can be > > > > > used across two different NIC's. For that, user has to statically set > > > > > CONFIG_RTE_MEMPOOL_DEFAULT_OPS=<pool-name>. 
> > > > > > > > > > So proposing two approaches: > > > > > Patch 1) Introducing eal option --pkt-mempool=<pool-name> > > > > > Patch 2) Introducing ethdev API called _get_preferred_pool(), where PMD driver > > > > > gets a chance to advertise their pool capability to the application. And based > > > > > on that hint- application creates pools for that driver. > > > > If the system is having more than one heterogeneous ethernet device with > > different mempool, the application has to create different mempool for each > > of the ethernet device. > > > > However, let's take a case > > As system has a DPAA2 eth device, which only work with dpaa2 mempools. > > dpaa2 ethdev will return dpaa2 mempool as preferred handler. > > > System also detect a standard PCI NIC, which can work with any software > > mempool (e.g ring_mp_mc) or with dpaa2 mempool. Given the preference, PCI > > NIC will have preferred as software mempool. > > how the application will choose between these, if it want to create only one > > mempool? > > We need add some policy in common code to help application to choose > in case if application interested in creating in one pool for > heterogeneous cases. It is more of application problem, ethdev can > return the preferred handler, let application choose interested in one. > ethdev is depended on the specific mempool not any other object. > > We will provide option 1(eal argument based one) as one policy.More sophisticated > policies we need add in application. > > > > Or, how the scheme will work if the application want to create only one > > mempool? > > option 1 (eal argument based) or we need to change the application to > choose from available ethdev count and its preferred mempool handler. I also think the approach in this patchset is not that bad: - The first step is to allow the user to specify the mempool dynamically (eal arg). One thing I don't really like is to have a mempool-related argument inside eal. 
It would be better if eal could provide a framework so that each library (ex: mbuf, mempool) can register its arguments that could be changed through the command line or through an API. Without this, it introduces a sort of dependency between eal and mempool, which I don't think is sane. - The second step is to be able to ask the eth devices which mempool they prefer. If there is only one kind of port, it's quite easy. As suggested, more complexity could go in the application if required, or some helpers could be provided in the future. I'm sending some comments as replies to the patches. ^ permalink raw reply [flat|nested] 72+ messages in thread
* Re: [dpdk-dev] [PATCH 0/2] Allow application set mempool handle 2017-06-30 14:12 ` Olivier Matz @ 2017-07-04 12:25 ` santosh 2017-07-04 15:59 ` Olivier Matz 0 siblings, 1 reply; 72+ messages in thread From: santosh @ 2017-07-04 12:25 UTC (permalink / raw) To: Olivier Matz, Jerin Jacob; +Cc: Hemant Agrawal, dev Hi Olivier, On Friday 30 June 2017 07:42 PM, Olivier Matz wrote: > Hi, > > On Tue, 20 Jun 2017 19:34:15 +0530, Jerin Jacob <jerin.jacob@caviumnetworks.com> wrote: >> -----Original Message----- >>> Date: Tue, 20 Jun 2017 16:07:17 +0530 >>> From: Hemant Agrawal <hemant.agrawal@nxp.com> >>> To: Jerin Jacob <jerin.jacob@caviumnetworks.com> >>> CC: Santosh Shukla <santosh.shukla@caviumnetworks.com>, >>> olivier.matz@6wind.com, dev@dpdk.org >>> Subject: Re: [PATCH 0/2] Allow application set mempool handle >>> User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:45.0) Gecko/20100101 >>> Thunderbird/45.8.0 >>> >>> On 6/19/2017 6:31 PM, Jerin Jacob wrote: >>>> -----Original Message----- >>>>> Date: Mon, 19 Jun 2017 17:22:46 +0530 >>>>> From: Hemant Agrawal <hemant.agrawal@nxp.com> >>>>> To: Santosh Shukla <santosh.shukla@caviumnetworks.com>, >>>>> olivier.matz@6wind.com, dev@dpdk.org >>>>> CC: jerin.jacob@caviumnetworks.com >>>>> Subject: Re: [PATCH 0/2] Allow application set mempool handle >>>>> User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:45.0) Gecko/20100101 >>>>> Thunderbird/45.8.0 >>>>> >>>>> On 6/1/2017 1:35 PM, Santosh Shukla wrote: >>>>>> Some platform can have two different NICs for example external PCI Intel >>>>>> 40G card and Integrated NIC like vNIC/octeontx/dpaa2. >>>>>> >>>>>> Both NICs like to use their preferred pool e.g. external PCI card/ vNIC's >>>>>> preferred pool would be the ring based pool and octeontx/dpaa2 preferred would >>>>>> be ext-mempools. >>>>>> Right now, Framework doesn't support such case. Only one pool can be >>>>>> used across two different NIC's. 
For that, user has to statically set >>>>>> CONFIG_RTE_MEMPOOL_DEFAULT_OPS=<pool-name>. >>>>>> >>>>>> So proposing two approaches: >>>>>> Patch 1) Introducing eal option --pkt-mempool=<pool-name> >>>>>> Patch 2) Introducing ethdev API called _get_preferred_pool(), where PMD driver >>>>>> gets a chance to advertise their pool capability to the application. And based >>>>>> on that hint- application creates pools for that driver. >>> If the system is having more than one heterogeneous ethernet device with >>> different mempool, the application has to create different mempool for each >>> of the ethernet device. >>> >>> However, let's take a case >>> As system has a DPAA2 eth device, which only work with dpaa2 mempools. >> dpaa2 ethdev will return dpaa2 mempool as preferred handler. >> >>> System also detect a standard PCI NIC, which can work with any software >>> mempool (e.g ring_mp_mc) or with dpaa2 mempool. Given the preference, PCI >>> NIC will have preferred as software mempool. >>> how the application will choose between these, if it want to create only one >>> mempool? >> We need add some policy in common code to help application to choose >> in case if application interested in creating in one pool for >> heterogeneous cases. It is more of application problem, ethdev can >> return the preferred handler, let application choose interested in one. >> ethdev is depended on the specific mempool not any other object. >> >> We will provide option 1(eal argument based one) as one policy.More sophisticated >> policies we need add in application. >> >> >>> Or, how the scheme will work if the application want to create only one >>> mempool? >> option 1 (eal argument based) or we need to change the application to >> choose from available ethdev count and its preferred mempool handler. > I also think the approach in this patchset is not that bad: > > - The first step is to allow the user to specify the mempool > dynamically (eal arg). 
> > One thing I don't really like is to have a mempool-related argument > inside eal. It would be better if eal could provide a framework so > that each libraries (ex: mbuf, mempool) can register their argument > that could be changed through the command line or trough an API. > > Without this, it introduces a sort of dependency between eal and > mempool, which I don't think is sane. Yes, eal has no such framework for the non-eal libraries. IIUC, you are looking at something like below: - All non-eal libraries register their callback functions with eal. - EAL iterates through the registered callbacks and calls them one by one. - EAL doesn't do the parsing; the non-eal libs do the parsing. - EAL passes the char *string arg as input to those registered callback functions. - It is up to those callback functions to parse and find out whether the i/p arg is correct or incorrect. - Having said that, in the mempool case we would need to add a new API to list the supported mempool handles (by name) and then compare/match the i/p string with the mempool handles (by name). Are you referring to such a framework? Did I catch everything alright? > - The second step is to be able to ask to the eth devices which > mempool they prefer. If there is only one kind of port, it's > quite easy. > > As suggested, more complexity could go in the application if > required, or some helpers could be provided in the future. > > > I'm sending some comments as replies to the patches. > If the above eal framework approach meets your expectation, does [1/4] need rework? Or do you want to keep the [1/4] patch, and I'll send a v2 patch incorporating your inline review comments; which one do you prefer? ^ permalink raw reply [flat|nested] 72+ messages in thread
* Re: [dpdk-dev] [PATCH 0/2] Allow application set mempool handle 2017-07-04 12:25 ` santosh @ 2017-07-04 15:59 ` Olivier Matz 2017-07-05 7:48 ` santosh 0 siblings, 1 reply; 72+ messages in thread From: Olivier Matz @ 2017-07-04 15:59 UTC (permalink / raw) To: santosh; +Cc: Jerin Jacob, Hemant Agrawal, dev Hi Santosh, On Tue, 4 Jul 2017 17:55:54 +0530, santosh <santosh.shukla@caviumnetworks.com> wrote: > Hi Olivier, > > On Friday 30 June 2017 07:42 PM, Olivier Matz wrote: > > > Hi, > > > > On Tue, 20 Jun 2017 19:34:15 +0530, Jerin Jacob <jerin.jacob@caviumnetworks.com> wrote: > >> -----Original Message----- > >>> Date: Tue, 20 Jun 2017 16:07:17 +0530 > >>> From: Hemant Agrawal <hemant.agrawal@nxp.com> > >>> To: Jerin Jacob <jerin.jacob@caviumnetworks.com> > >>> CC: Santosh Shukla <santosh.shukla@caviumnetworks.com>, > >>> olivier.matz@6wind.com, dev@dpdk.org > >>> Subject: Re: [PATCH 0/2] Allow application set mempool handle > >>> User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:45.0) Gecko/20100101 > >>> Thunderbird/45.8.0 > >>> > >>> On 6/19/2017 6:31 PM, Jerin Jacob wrote: > >>>> -----Original Message----- > >>>>> Date: Mon, 19 Jun 2017 17:22:46 +0530 > >>>>> From: Hemant Agrawal <hemant.agrawal@nxp.com> > >>>>> To: Santosh Shukla <santosh.shukla@caviumnetworks.com>, > >>>>> olivier.matz@6wind.com, dev@dpdk.org > >>>>> CC: jerin.jacob@caviumnetworks.com > >>>>> Subject: Re: [PATCH 0/2] Allow application set mempool handle > >>>>> User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:45.0) Gecko/20100101 > >>>>> Thunderbird/45.8.0 > >>>>> > >>>>> On 6/1/2017 1:35 PM, Santosh Shukla wrote: > >>>>>> Some platform can have two different NICs for example external PCI Intel > >>>>>> 40G card and Integrated NIC like vNIC/octeontx/dpaa2. > >>>>>> > >>>>>> Both NICs like to use their preferred pool e.g. external PCI card/ vNIC's > >>>>>> preferred pool would be the ring based pool and octeontx/dpaa2 preferred would > >>>>>> be ext-mempools. 
> >>>>>> Right now, Framework doesn't support such case. Only one pool can be > >>>>>> used across two different NIC's. For that, user has to statically set > >>>>>> CONFIG_RTE_MEMPOOL_DEFAULT_OPS=<pool-name>. > >>>>>> > >>>>>> So proposing two approaches: > >>>>>> Patch 1) Introducing eal option --pkt-mempool=<pool-name> > >>>>>> Patch 2) Introducing ethdev API called _get_preferred_pool(), where PMD driver > >>>>>> gets a chance to advertise their pool capability to the application. And based > >>>>>> on that hint- application creates pools for that driver. > >>> If the system is having more than one heterogeneous ethernet device with > >>> different mempool, the application has to create different mempool for each > >>> of the ethernet device. > >>> > >>> However, let's take a case > >>> As system has a DPAA2 eth device, which only work with dpaa2 mempools. > >> dpaa2 ethdev will return dpaa2 mempool as preferred handler. > >> > >>> System also detect a standard PCI NIC, which can work with any software > >>> mempool (e.g ring_mp_mc) or with dpaa2 mempool. Given the preference, PCI > >>> NIC will have preferred as software mempool. > >>> how the application will choose between these, if it want to create only one > >>> mempool? > >> We need add some policy in common code to help application to choose > >> in case if application interested in creating in one pool for > >> heterogeneous cases. It is more of application problem, ethdev can > >> return the preferred handler, let application choose interested in one. > >> ethdev is depended on the specific mempool not any other object. > >> > >> We will provide option 1(eal argument based one) as one policy.More sophisticated > >> policies we need add in application. > >> > >> > >>> Or, how the scheme will work if the application want to create only one > >>> mempool? > >> option 1 (eal argument based) or we need to change the application to > >> choose from available ethdev count and its preferred mempool handler. 
> > I also think the approach in this patchset is not that bad: > > > > - The first step is to allow the user to specify the mempool > > dynamically (eal arg). > > > > One thing I don't really like is to have a mempool-related argument > > inside eal. It would be better if eal could provide a framework so > > that each libraries (ex: mbuf, mempool) can register their argument > > that could be changed through the command line or trough an API. > > > > Without this, it introduces a sort of dependency between eal and > > mempool, which I don't think is sane. > > Yes, eal has no such framework for the non-eal library. > > IIUC, then are you looking at something like below: > - All non-eal library to register their callback function with eal. > - EAL iterates through registered callbacks and calls them one by one. > - EAL don't do the parsing and those non-eal libs do the parsing. > - EAL passes char *string arg as input to those registered callback function. > - It is up to those callback function to parse and find out i/p arg is correct > or incorrect. > - Having said that, then in the mempool case; We need to add new API to list > the number of supported mempool handles(by name) and then compare/match > i/p string with mempool handle(byname). > > Are you referring to such framework? did I catch everything alright? Here is how I see this feature (very high level). The first step would be quite simple (no registration). The EAL manages a key value database, and provides a key/value API like this: /* return NULL if key is not in database */ const char *rte_eal_cfg_get(const char *key); /* value can be NULL to delete the key, return 0 on success */ int rte_eal_cfg_set(const char *key, const char *value); At startup, the EAL parses the arguments like this: --cfg=key:value Example: --cfg=mbuf.default_pool:ring Another way to set these options could be a config file (maybe the librte_cfgfile could be useful for that, I don't know). 
Probably something like: --cfgfile=file.conf The EAL parsing layer calls rte_eal_cfg_set() Then, a library like librte_mbuf can query a specific key through rte_eal_cfg_get("mbuf.default_pool"). No registration would be needed. We'd need to define a convention for the key names. It could be extented in a second step by adding a registration in the constructor of the library: /* check_cb is a function that is called to check if the parsing is * correct. Maybe an opaque arg could be added too. */ rte_eal_register_cfg(const char *key, rte_eal_cfg_check_cb_t check_cb); I'm sure many people will have an opinion on this topic, which could be different than mine. > > > - The second step is to be able to ask to the eth devices which > > mempool they prefer. If there is only one kind of port, it's > > quite easy. > > > > As suggested, more complexity could go in the application if > > required, or some helpers could be provided in the future. > > > > > > I'm sending some comments as replies to the patches. > > > If above eal framework approach is meeting your expectation then [1/4] need rework? > Or you want to keep [1/4] patch and I'll send v2 patch incorporating > your inline review comment, which one you prefer? Adding a specific EAL argument --pkt-mempool could do the job for now. But I'd be happy to see someone working on a generic cfg framework in EAL, which seems to be a longer term solution, and helpful for other libs. Some parts of EAL have currently no maintainer, which is a problem to get a good feedback. But I guess a proposition on this topic would trigger many comments. Regards, Olivier ^ permalink raw reply [flat|nested] 72+ messages in thread
* Re: [dpdk-dev] [PATCH 0/2] Allow application set mempool handle 2017-07-04 15:59 ` Olivier Matz @ 2017-07-05 7:48 ` santosh 0 siblings, 0 replies; 72+ messages in thread From: santosh @ 2017-07-05 7:48 UTC (permalink / raw) To: Olivier Matz; +Cc: Jerin Jacob, Hemant Agrawal, dev Hi Olivier, On Tuesday 04 July 2017 09:29 PM, Olivier Matz wrote: > Hi Santosh, > > On Tue, 4 Jul 2017 17:55:54 +0530, santosh <santosh.shukla@caviumnetworks.com> wrote: >> Hi Olivier, >> >> On Friday 30 June 2017 07:42 PM, Olivier Matz wrote: >> >>> Hi, >>> >>> On Tue, 20 Jun 2017 19:34:15 +0530, Jerin Jacob <jerin.jacob@caviumnetworks.com> wrote: >>>> -----Original Message----- >>>>> Date: Tue, 20 Jun 2017 16:07:17 +0530 >>>>> From: Hemant Agrawal <hemant.agrawal@nxp.com> >>>>> To: Jerin Jacob <jerin.jacob@caviumnetworks.com> >>>>> CC: Santosh Shukla <santosh.shukla@caviumnetworks.com>, >>>>> olivier.matz@6wind.com, dev@dpdk.org >>>>> Subject: Re: [PATCH 0/2] Allow application set mempool handle >>>>> User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:45.0) Gecko/20100101 >>>>> Thunderbird/45.8.0 >>>>> >>>>> On 6/19/2017 6:31 PM, Jerin Jacob wrote: >>>>>> -----Original Message----- >>>>>>> Date: Mon, 19 Jun 2017 17:22:46 +0530 >>>>>>> From: Hemant Agrawal <hemant.agrawal@nxp.com> >>>>>>> To: Santosh Shukla <santosh.shukla@caviumnetworks.com>, >>>>>>> olivier.matz@6wind.com, dev@dpdk.org >>>>>>> CC: jerin.jacob@caviumnetworks.com >>>>>>> Subject: Re: [PATCH 0/2] Allow application set mempool handle >>>>>>> User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:45.0) Gecko/20100101 >>>>>>> Thunderbird/45.8.0 >>>>>>> >>>>>>> On 6/1/2017 1:35 PM, Santosh Shukla wrote: >>>>>>>> Some platform can have two different NICs for example external PCI Intel >>>>>>>> 40G card and Integrated NIC like vNIC/octeontx/dpaa2. >>>>>>>> >>>>>>>> Both NICs like to use their preferred pool e.g. 
external PCI card/ vNIC's >>>>>>>> preferred pool would be the ring based pool and octeontx/dpaa2 preferred would >>>>>>>> be ext-mempools. >>>>>>>> Right now, Framework doesn't support such case. Only one pool can be >>>>>>>> used across two different NIC's. For that, user has to statically set >>>>>>>> CONFIG_RTE_MEMPOOL_DEFAULT_OPS=<pool-name>. >>>>>>>> >>>>>>>> So proposing two approaches: >>>>>>>> Patch 1) Introducing eal option --pkt-mempool=<pool-name> >>>>>>>> Patch 2) Introducing ethdev API called _get_preferred_pool(), where PMD driver >>>>>>>> gets a chance to advertise their pool capability to the application. And based >>>>>>>> on that hint- application creates pools for that driver. >>>>> If the system is having more than one heterogeneous ethernet device with >>>>> different mempool, the application has to create different mempool for each >>>>> of the ethernet device. >>>>> >>>>> However, let's take a case >>>>> As system has a DPAA2 eth device, which only work with dpaa2 mempools. >>>> dpaa2 ethdev will return dpaa2 mempool as preferred handler. >>>> >>>>> System also detect a standard PCI NIC, which can work with any software >>>>> mempool (e.g ring_mp_mc) or with dpaa2 mempool. Given the preference, PCI >>>>> NIC will have preferred as software mempool. >>>>> how the application will choose between these, if it want to create only one >>>>> mempool? >>>> We need add some policy in common code to help application to choose >>>> in case if application interested in creating in one pool for >>>> heterogeneous cases. It is more of application problem, ethdev can >>>> return the preferred handler, let application choose interested in one. >>>> ethdev is depended on the specific mempool not any other object. >>>> >>>> We will provide option 1(eal argument based one) as one policy.More sophisticated >>>> policies we need add in application. >>>> >>>> >>>>> Or, how the scheme will work if the application want to create only one >>>>> mempool? 
>>>> option 1 (eal argument based) or we need to change the application to >>>> choose from available ethdev count and its preferred mempool handler. >>> I also think the approach in this patchset is not that bad: >>> >>> - The first step is to allow the user to specify the mempool >>> dynamically (eal arg). >>> >>> One thing I don't really like is to have a mempool-related argument >>> inside eal. It would be better if eal could provide a framework so >>> that each libraries (ex: mbuf, mempool) can register their argument >>> that could be changed through the command line or trough an API. >>> >>> Without this, it introduces a sort of dependency between eal and >>> mempool, which I don't think is sane. >> Yes, eal has no such framework for the non-eal library. >> >> IIUC, then are you looking at something like below: >> - All non-eal library to register their callback function with eal. >> - EAL iterates through registered callbacks and calls them one by one. >> - EAL don't do the parsing and those non-eal libs do the parsing. >> - EAL passes char *string arg as input to those registered callback function. >> - It is up to those callback function to parse and find out i/p arg is correct >> or incorrect. >> - Having said that, then in the mempool case; We need to add new API to list >> the number of supported mempool handles(by name) and then compare/match >> i/p string with mempool handle(byname). >> >> Are you referring to such framework? did I catch everything alright? > Here is how I see this feature (very high level). > The first step would be quite simple (no registration). 
> The EAL manages a key value database, and provides a key/value API like this: > > /* return NULL if key is not in database */ > const char *rte_eal_cfg_get(const char *key); > /* value can be NULL to delete the key, return 0 on success */ > int rte_eal_cfg_set(const char *key, const char *value); > > At startup, the EAL parses the arguments like this: > --cfg=key:value > Example: > --cfg=mbuf.default_pool:ring > > Another way to set these options could be a config file (maybe the > librte_cfgfile could be useful for that, I don't know). Probably > something like: > --cfgfile=file.conf > > The EAL parsing layer calls rte_eal_cfg_set() > > Then, a library like librte_mbuf can query a specific key > through rte_eal_cfg_get("mbuf.default_pool"). No registration would > be needed. We'd need to define a convention for the key names. > > It could be extented in a second step by adding a registration in > the constructor of the library: > /* check_cb is a function that is called to check if the parsing is > * correct. Maybe an opaque arg could be added too. */ > rte_eal_register_cfg(const char *key, rte_eal_cfg_check_cb_t check_cb); > > > I'm sure many people will have an opinion on this topic, which could > be different than mine. > Thanks for approach. But I think we should take up as separate topic. >>> - The second step is to be able to ask to the eth devices which >>> mempool they prefer. If there is only one kind of port, it's >>> quite easy. >>> >>> As suggested, more complexity could go in the application if >>> required, or some helpers could be provided in the future. >>> >>> >>> I'm sending some comments as replies to the patches. >>> >> If above eal framework approach is meeting your expectation then [1/4] need rework? >> Or you want to keep [1/4] patch and I'll send v2 patch incorporating >> your inline review comment, which one you prefer? > Adding a specific EAL argument --pkt-mempool could do the job for now. 
> But I'd be happy to see someone working on a generic cfg framework in EAL, > which seems to be a longer term solution, and helpful for other libs. > > Some parts of EAL have currently no maintainer, which is a problem > to get a good feedback. But I guess a proposition on this topic > would trigger many comments. > I may take this up (bandwidth permitting), but for now I prefer to follow the --pkt-mempool approach. I guess --pkt-mempool will address the heterogeneous pool-handle use-case, so it's the priority (imo). Thanks for the feedback, Olivier. > Regards, > Olivier ^ permalink raw reply [flat|nested] 72+ messages in thread
* [dpdk-dev] [PATCH v2 0/2] Dynamically configure mempool handle 2017-06-01 8:05 [dpdk-dev] [PATCH 0/2] Allow application set mempool handle Santosh Shukla ` (2 preceding siblings ...) 2017-06-19 11:52 ` [dpdk-dev] [PATCH 0/2] Allow application set mempool handle Hemant Agrawal @ 2017-07-20 7:06 ` Santosh Shukla 2017-07-20 7:06 ` [dpdk-dev] [PATCH v2 1/2] eal: allow user to override default pool handle Santosh Shukla 2017-07-20 7:06 ` [dpdk-dev] [PATCH v2 2/2] ethdev: allow pmd to advertise pool handle Santosh Shukla 3 siblings, 2 replies; 72+ messages in thread From: Santosh Shukla @ 2017-07-20 7:06 UTC (permalink / raw) To: thomas, dev, olivier.matz; +Cc: jerin.jacob, hemant.agrawal, Santosh Shukla DPDK has support for hw and sw mempools, and a given mempool can be optimal for a specific PMD. Example: the sw ring based pool for Intel NIC PMDs, the dpaa2 HW mempool manager for the dpaa2 PMD, and the fpa HW mempool manager for the octeontx PMD. There could be a use-case where NICs from different vendors are used on the same platform, and the user would like to configure mempools in such a way that each of those NICs uses its preferred mempool (for performance reasons). The current mempool infrastructure doesn't support such a use-case. This patchset tries to address that problem in 2 steps: 0) Allowing the user to dynamically configure the mempool handle by passing the pool handle as an eal arg: `--mbuf-pool-ops=<pool-handle>`. 1) Allowing PMDs to advertise their preferred pool to an application. From an application point of view: - The application must ask the PMD about its preferred pool. - The PMD responds with its preferred pool; otherwise CONFIG_RTE_MEMPOOL_DEFAULT_OPS will be used for that PMD. 
* The application programming sequence would be char pref_mempool[RTE_MEMPOOL_OPS_NAMESIZE]; rte_eth_dev_get_preferred_pool_ops(ethdev_port_id, pref_mempool /* out */); rte_mempool_create_empty(); rte_mempool_set_ops_byname( , pref_mempool, ); rte_mempool_populate_default(); Change History: v1 --> v2: - Renamed rte_eal_get_mempool_name to rte_eal_mbuf_default_mempool_ops(). (suggested by Olivier) - Renamed _get_preferred_pool to _get_preferred_pool_ops(). - Updated the API description and changed the return val from -EINVAL to -ENOTSUP. (Suggested by Olivier) * Specific details on the v1-->v2 change summary are described in each patch. Checkpatch status: - None. Work History: * Refer [1] for v1. Thanks. [1] http://dpdk.org/ml/archives/dev/2017-June/067022.html Santosh Shukla (2): eal: allow user to override default pool handle ethdev: allow pmd to advertise pool handle lib/librte_eal/bsdapp/eal/eal.c | 17 +++++++++++++++++ lib/librte_eal/bsdapp/eal/rte_eal_version.map | 1 + lib/librte_eal/common/eal_common_options.c | 5 +++++ lib/librte_eal/common/eal_internal_cfg.h | 1 + lib/librte_eal/common/eal_options.h | 2 ++ lib/librte_eal/common/include/rte_eal.h | 11 +++++++++++ lib/librte_eal/linuxapp/eal/eal.c | 18 ++++++++++++++++++ lib/librte_eal/linuxapp/eal/rte_eal_version.map | 1 + lib/librte_ether/rte_ethdev.c | 18 ++++++++++++++++++ lib/librte_ether/rte_ethdev.h | 21 +++++++++++++++++++++ lib/librte_ether/rte_ether_version.map | 1 + lib/librte_mbuf/rte_mbuf.c | 5 +++-- 12 files changed, 99 insertions(+), 2 deletions(-) -- 2.11.0 ^ permalink raw reply [flat|nested] 72+ messages in thread
* [dpdk-dev] [PATCH v2 1/2] eal: allow user to override default pool handle 2017-07-20 7:06 ` [dpdk-dev] [PATCH v2 0/2] Dynamically configure " Santosh Shukla @ 2017-07-20 7:06 ` Santosh Shukla 2017-08-15 8:07 ` [dpdk-dev] [PATCH v3 0/2] Dynamically configure mempool handle Santosh Shukla 2017-07-20 7:06 ` [dpdk-dev] [PATCH v2 2/2] ethdev: allow pmd to advertise pool handle Santosh Shukla 1 sibling, 1 reply; 72+ messages in thread From: Santosh Shukla @ 2017-07-20 7:06 UTC (permalink / raw) To: thomas, dev, olivier.matz; +Cc: jerin.jacob, hemant.agrawal, Santosh Shukla DPDK has support for both sw and hw mempools, but currently the user is limited to the ring_mp_mc pool. If the user wants to use another pool handle, they need to update the RTE_MEMPOOL_OPS_DEFAULT config, then rebuild and run with the desired pool handle. Introducing an eal option to override the default pool handle. Now the user can override RTE_MEMPOOL_OPS_DEFAULT by passing a pool handle to eal via `--mbuf-pool-ops=""`. Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com> --- v1 --> v2: (Changes per review feedback from Olivier) - Renamed rte_eal_get_mempool_name to rte_eal_mbuf_default_mempool_ops(). - Added support in bsdapp too. A few changes considered while working on v2: - Chose the eal arg name 'mbuf-pool-ops' out of the 3 names proposed by Olivier. Initially thought to use the eal arg 'mbuf-default-mempool-ops', but since it's way too long, users may find it difficult to use. That's why 'mbuf-pool-ops' was chosen, as that name is very close to the API name and #define. - Added a new NAMESIZE define in rte_eal.h, 'RTE_MBUF_POOL_OPS_NAMESIZE'. 
lib/librte_eal/bsdapp/eal/eal.c | 17 +++++++++++++++++ lib/librte_eal/bsdapp/eal/rte_eal_version.map | 1 + lib/librte_eal/common/eal_common_options.c | 5 +++++ lib/librte_eal/common/eal_internal_cfg.h | 1 + lib/librte_eal/common/eal_options.h | 2 ++ lib/librte_eal/common/include/rte_eal.h | 11 +++++++++++ lib/librte_eal/linuxapp/eal/eal.c | 18 ++++++++++++++++++ lib/librte_eal/linuxapp/eal/rte_eal_version.map | 1 + lib/librte_mbuf/rte_mbuf.c | 5 +++-- 9 files changed, 59 insertions(+), 2 deletions(-) diff --git a/lib/librte_eal/bsdapp/eal/eal.c b/lib/librte_eal/bsdapp/eal/eal.c index 80fe21de3..461d81bde 100644 --- a/lib/librte_eal/bsdapp/eal/eal.c +++ b/lib/librte_eal/bsdapp/eal/eal.c @@ -112,6 +112,13 @@ struct internal_config internal_config; /* used by rte_rdtsc() */ int rte_cycles_vmware_tsc_map; +/* Return mbuf pool name */ +const char * +rte_eal_mbuf_default_mempool_ops(void) +{ + return internal_config.mbuf_pool_name; +} + /* Return a pointer to the configuration structure */ struct rte_config * rte_eal_get_configuration(void) @@ -385,6 +392,16 @@ eal_parse_args(int argc, char **argv) continue; switch (opt) { + case OPT_MBUF_POOL_OPS_NUM: + ret = snprintf(internal_config.mbuf_pool_name, + sizeof(internal_config.mbuf_pool_name), + "%s", optarg); + if (ret < 0 || (uint32_t)ret >= + sizeof(internal_config.mbuf_pool_name)) { + ret = -1; + goto out; + } + break; case 'h': eal_usage(prgname); exit(EXIT_SUCCESS); diff --git a/lib/librte_eal/bsdapp/eal/rte_eal_version.map b/lib/librte_eal/bsdapp/eal/rte_eal_version.map index f689f0c8f..ed5f329cd 100644 --- a/lib/librte_eal/bsdapp/eal/rte_eal_version.map +++ b/lib/librte_eal/bsdapp/eal/rte_eal_version.map @@ -200,6 +200,7 @@ DPDK_17.08 { rte_bus_find; rte_bus_find_by_device; rte_bus_find_by_name; + rte_eal_mbuf_default_mempool_ops; } DPDK_17.05; diff --git a/lib/librte_eal/common/eal_common_options.c b/lib/librte_eal/common/eal_common_options.c index 56c368cde..9b0e73504 100644 --- 
a/lib/librte_eal/common/eal_common_options.c +++ b/lib/librte_eal/common/eal_common_options.c @@ -97,6 +97,7 @@ eal_long_options[] = { {OPT_VFIO_INTR, 1, NULL, OPT_VFIO_INTR_NUM }, {OPT_VMWARE_TSC_MAP, 0, NULL, OPT_VMWARE_TSC_MAP_NUM }, {OPT_XEN_DOM0, 0, NULL, OPT_XEN_DOM0_NUM }, + {OPT_MBUF_POOL_OPS, 1, NULL, OPT_MBUF_POOL_OPS_NUM}, {0, 0, NULL, 0 } }; @@ -163,6 +164,9 @@ eal_reset_internal_config(struct internal_config *internal_cfg) #endif internal_cfg->vmware_tsc_map = 0; internal_cfg->create_uio_dev = 0; + snprintf(internal_cfg->mbuf_pool_name, + sizeof(internal_cfg->mbuf_pool_name), "%s", + RTE_MBUF_DEFAULT_MEMPOOL_OPS); } static int @@ -1246,5 +1250,6 @@ eal_common_usage(void) " --"OPT_NO_PCI" Disable PCI\n" " --"OPT_NO_HPET" Disable HPET\n" " --"OPT_NO_SHCONF" No shared config (mmap'd files)\n" + " --"OPT_MBUF_POOL_OPS" Pool ops name for mbuf to use\n" "\n", RTE_MAX_LCORE); } diff --git a/lib/librte_eal/common/eal_internal_cfg.h b/lib/librte_eal/common/eal_internal_cfg.h index 7b7e8c887..e8a770a42 100644 --- a/lib/librte_eal/common/eal_internal_cfg.h +++ b/lib/librte_eal/common/eal_internal_cfg.h @@ -83,6 +83,7 @@ struct internal_config { const char *hugefile_prefix; /**< the base filename of hugetlbfs files */ const char *hugepage_dir; /**< specific hugetlbfs directory to use */ + char mbuf_pool_name[RTE_MBUF_POOL_OPS_NAMESIZE]; /**< mbuf pool name */ unsigned num_hugepage_sizes; /**< how many sizes on this system */ struct hugepage_info hugepage_info[MAX_HUGEPAGE_SIZES]; }; diff --git a/lib/librte_eal/common/eal_options.h b/lib/librte_eal/common/eal_options.h index a881c62e2..34e6417e4 100644 --- a/lib/librte_eal/common/eal_options.h +++ b/lib/librte_eal/common/eal_options.h @@ -83,6 +83,8 @@ enum { OPT_VMWARE_TSC_MAP_NUM, #define OPT_XEN_DOM0 "xen-dom0" OPT_XEN_DOM0_NUM, +#define OPT_MBUF_POOL_OPS "mbuf-pool-ops" + OPT_MBUF_POOL_OPS_NUM, OPT_LONG_MAX_NUM }; diff --git a/lib/librte_eal/common/include/rte_eal.h b/lib/librte_eal/common/include/rte_eal.h 
index 0e7363d77..fcd212843 100644 --- a/lib/librte_eal/common/include/rte_eal.h +++ b/lib/librte_eal/common/include/rte_eal.h @@ -54,6 +54,8 @@ extern "C" { /* Maximum thread_name length. */ #define RTE_MAX_THREAD_NAME_LEN 16 +/* Maximum length of mbuf pool ops name. */ +#define RTE_MBUF_POOL_OPS_NAMESIZE 32 /** * The lcore role (used in RTE or not). @@ -287,6 +289,15 @@ static inline int rte_gettid(void) return RTE_PER_LCORE(_thread_id); } +/** + * Ops to get default pool name for mbuf + * + * @return + * returns default pool name. + */ +const char * +rte_eal_mbuf_default_mempool_ops(void); + #define RTE_INIT(func) \ static void __attribute__((constructor, used)) func(void) diff --git a/lib/librte_eal/linuxapp/eal/eal.c b/lib/librte_eal/linuxapp/eal/eal.c index b28bbab54..1bf4e882f 100644 --- a/lib/librte_eal/linuxapp/eal/eal.c +++ b/lib/librte_eal/linuxapp/eal/eal.c @@ -121,6 +121,13 @@ struct internal_config internal_config; /* used by rte_rdtsc() */ int rte_cycles_vmware_tsc_map; +/* Return mbuf pool name */ +const char * +rte_eal_mbuf_default_mempool_ops(void) +{ + return internal_config.mbuf_pool_name; +} + /* Return a pointer to the configuration structure */ struct rte_config * rte_eal_get_configuration(void) @@ -610,6 +617,17 @@ eal_parse_args(int argc, char **argv) internal_config.create_uio_dev = 1; break; + case OPT_MBUF_POOL_OPS_NUM: + ret = snprintf(internal_config.mbuf_pool_name, + sizeof(internal_config.mbuf_pool_name), + "%s", optarg); + if (ret < 0 || (uint32_t)ret >= + sizeof(internal_config.mbuf_pool_name)) { + ret = -1; + goto out; + } + break; + default: if (opt < OPT_LONG_MIN_NUM && isprint(opt)) { RTE_LOG(ERR, EAL, "Option %c is not supported " diff --git a/lib/librte_eal/linuxapp/eal/rte_eal_version.map b/lib/librte_eal/linuxapp/eal/rte_eal_version.map index 202072189..e90e0e062 100644 --- a/lib/librte_eal/linuxapp/eal/rte_eal_version.map +++ b/lib/librte_eal/linuxapp/eal/rte_eal_version.map @@ -205,6 +205,7 @@ DPDK_17.08 { rte_bus_find; 
rte_bus_find_by_device; rte_bus_find_by_name; + rte_eal_mbuf_default_mempool_ops; } DPDK_17.05; diff --git a/lib/librte_mbuf/rte_mbuf.c b/lib/librte_mbuf/rte_mbuf.c index 26a62b8e1..e1fc90ef3 100644 --- a/lib/librte_mbuf/rte_mbuf.c +++ b/lib/librte_mbuf/rte_mbuf.c @@ -157,6 +157,7 @@ rte_pktmbuf_pool_create(const char *name, unsigned n, { struct rte_mempool *mp; struct rte_pktmbuf_pool_private mbp_priv; + const char *mp_name; unsigned elt_size; int ret; @@ -176,8 +177,8 @@ rte_pktmbuf_pool_create(const char *name, unsigned n, if (mp == NULL) return NULL; - ret = rte_mempool_set_ops_byname(mp, - RTE_MBUF_DEFAULT_MEMPOOL_OPS, NULL); + mp_name = rte_eal_mbuf_default_mempool_ops(); + ret = rte_mempool_set_ops_byname(mp, mp_name, NULL); if (ret != 0) { RTE_LOG(ERR, MBUF, "error setting mempool handler\n"); rte_mempool_free(mp); -- 2.11.0 ^ permalink raw reply [flat|nested] 72+ messages in thread
* [dpdk-dev] [PATCH v3 0/2] Dynamically configure mempool handle 2017-07-20 7:06 ` [dpdk-dev] [PATCH v2 1/2] eal: allow user to override default pool handle Santosh Shukla @ 2017-08-15 8:07 ` Santosh Shukla 2017-08-15 8:07 ` [dpdk-dev] [PATCH v3 1/2] eal: allow user to override default pool handle Santosh Shukla ` (3 more replies) 0 siblings, 4 replies; 72+ messages in thread From: Santosh Shukla @ 2017-08-15 8:07 UTC (permalink / raw) To: olivier.matz, dev; +Cc: thomas, jerin.jacob, hemant.agrawal, Santosh Shukla v3: - Rebased on top of v17.11-rc0 - Updated version.map entry to v17.11. v2: DPDK has support for hw and sw mempools, and a given mempool can be optimal for a specific PMD. Example: the sw ring based pool for Intel NIC PMDs, the dpaa2 HW mempool manager for the dpaa2 PMD, and the fpa HW mempool manager for the octeontx PMD. There could be a use-case where NICs from different vendors are used on the same platform, and the user would like to configure mempools in such a way that each of those NICs uses its preferred mempool (for performance reasons). The current mempool infrastructure doesn't support such a use-case. This patchset tries to address that problem in 2 steps: 0) Allowing the user to dynamically configure the mempool handle by passing the pool handle as an eal arg: `--mbuf-pool-ops=<pool-handle>`. 1) Allowing PMDs to advertise their preferred pool to an application. From an application point of view: - The application must ask the PMD about its preferred pool. - The PMD responds with its preferred pool; otherwise CONFIG_RTE_MEMPOOL_DEFAULT_OPS will be used for that PMD. * The application programming sequence would be char pref_mempool[RTE_MEMPOOL_OPS_NAMESIZE]; rte_eth_dev_get_preferred_pool_ops(ethdev_port_id, pref_mempool /* out */); rte_mempool_create_empty(); rte_mempool_set_ops_byname( , pref_mempool, ); rte_mempool_populate_default(); Change History: v2 --> v3: - Changed version.map from DPDK_17.08 to DPDK_17.11. v1 --> v2: - Renamed rte_eal_get_mempool_name to rte_eal_mbuf_default_mempool_ops(). 
(suggested by Olivier) - Renamed _get_preferred_pool to _get_preferred_pool_ops(). - Updated the API description and changed the return val from -EINVAL to -ENOTSUP. (Suggested by Olivier) * Specific details on the v1-->v2 change summary are described in each patch. Checkpatch status: - None. Work History: * Refer [1] for v1. Thanks. [1] http://dpdk.org/ml/archives/dev/2017-June/067022.html Santosh Shukla (2): eal: allow user to override default pool handle ethdev: allow pmd to advertise pool handle lib/librte_eal/bsdapp/eal/eal.c | 17 +++++++++++++++++ lib/librte_eal/bsdapp/eal/rte_eal_version.map | 7 +++++++ lib/librte_eal/common/eal_common_options.c | 5 +++++ lib/librte_eal/common/eal_internal_cfg.h | 1 + lib/librte_eal/common/eal_options.h | 2 ++ lib/librte_eal/common/include/rte_eal.h | 11 +++++++++++ lib/librte_eal/linuxapp/eal/eal.c | 18 ++++++++++++++++++ lib/librte_eal/linuxapp/eal/rte_eal_version.map | 7 +++++++ lib/librte_ether/rte_ethdev.c | 18 ++++++++++++++++++ lib/librte_ether/rte_ethdev.h | 21 +++++++++++++++++++++ lib/librte_ether/rte_ether_version.map | 7 +++++++ lib/librte_mbuf/rte_mbuf.c | 5 +++-- 12 files changed, 117 insertions(+), 2 deletions(-) -- 2.11.0 ^ permalink raw reply [flat|nested] 72+ messages in thread
* [dpdk-dev] [PATCH v3 1/2] eal: allow user to override default pool handle 2017-08-15 8:07 ` [dpdk-dev] [PATCH v3 0/2] Dynamically configure mempool handle Santosh Shukla @ 2017-08-15 8:07 ` Santosh Shukla 2017-09-04 11:46 ` Olivier MATZ 2017-09-07 9:25 ` Hemant Agrawal 2017-08-15 8:07 ` [dpdk-dev] [PATCH v3 2/2] ethdev: allow pmd to advertise " Santosh Shukla ` (2 subsequent siblings) 3 siblings, 2 replies; 72+ messages in thread From: Santosh Shukla @ 2017-08-15 8:07 UTC (permalink / raw) To: olivier.matz, dev; +Cc: thomas, jerin.jacob, hemant.agrawal, Santosh Shukla DPDK has support for both sw and hw mempools, but currently the user is limited to the ring_mp_mc pool. If the user wants to use another pool handle, they need to update the RTE_MEMPOOL_OPS_DEFAULT config, then rebuild and run with the desired pool handle. Introducing an eal option to override the default pool handle. Now the user can override RTE_MEMPOOL_OPS_DEFAULT by passing a pool handle to eal via `--mbuf-pool-ops=""`. Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com> --- v2 --> v3: - Updated version.map entry to v17.11. v1 --> v2: (Changes per review feedback from Olivier) - Renamed rte_eal_get_mempool_name to rte_eal_mbuf_default_mempool_ops(). - Added support in bsdapp too. A few changes considered while working on v2: - Chose the eal arg name 'mbuf-pool-ops' out of the 3 names proposed by Olivier. Initially thought to use the eal arg 'mbuf-default-mempool-ops', but since it's way too long, users may find it difficult to use. That's why 'mbuf-pool-ops' was chosen, as that name is very close to the API name and #define. - Added a new NAMESIZE define in rte_eal.h, 'RTE_MBUF_POOL_OPS_NAMESIZE'. 
lib/librte_eal/bsdapp/eal/eal.c | 17 +++++++++++++++++ lib/librte_eal/bsdapp/eal/rte_eal_version.map | 7 +++++++ lib/librte_eal/common/eal_common_options.c | 5 +++++ lib/librte_eal/common/eal_internal_cfg.h | 1 + lib/librte_eal/common/eal_options.h | 2 ++ lib/librte_eal/common/include/rte_eal.h | 11 +++++++++++ lib/librte_eal/linuxapp/eal/eal.c | 18 ++++++++++++++++++ lib/librte_eal/linuxapp/eal/rte_eal_version.map | 7 +++++++ lib/librte_mbuf/rte_mbuf.c | 5 +++-- 9 files changed, 71 insertions(+), 2 deletions(-) diff --git a/lib/librte_eal/bsdapp/eal/eal.c b/lib/librte_eal/bsdapp/eal/eal.c index 5fa598842..1def123be 100644 --- a/lib/librte_eal/bsdapp/eal/eal.c +++ b/lib/librte_eal/bsdapp/eal/eal.c @@ -112,6 +112,13 @@ struct internal_config internal_config; /* used by rte_rdtsc() */ int rte_cycles_vmware_tsc_map; +/* Return mbuf pool name */ +const char * +rte_eal_mbuf_default_mempool_ops(void) +{ + return internal_config.mbuf_pool_name; +} + /* Return a pointer to the configuration structure */ struct rte_config * rte_eal_get_configuration(void) @@ -385,6 +392,16 @@ eal_parse_args(int argc, char **argv) continue; switch (opt) { + case OPT_MBUF_POOL_OPS_NUM: + ret = snprintf(internal_config.mbuf_pool_name, + sizeof(internal_config.mbuf_pool_name), + "%s", optarg); + if (ret < 0 || (uint32_t)ret >= + sizeof(internal_config.mbuf_pool_name)) { + ret = -1; + goto out; + } + break; case 'h': eal_usage(prgname); exit(EXIT_SUCCESS); diff --git a/lib/librte_eal/bsdapp/eal/rte_eal_version.map b/lib/librte_eal/bsdapp/eal/rte_eal_version.map index aac6fd776..172a28bbf 100644 --- a/lib/librte_eal/bsdapp/eal/rte_eal_version.map +++ b/lib/librte_eal/bsdapp/eal/rte_eal_version.map @@ -237,3 +237,10 @@ EXPERIMENTAL { rte_service_unregister; } DPDK_17.08; + +DPDK_17.11 { + global: + + rte_eal_mbuf_default_mempool_ops; + +} DPDK_17.08; diff --git a/lib/librte_eal/common/eal_common_options.c b/lib/librte_eal/common/eal_common_options.c index 1da185e59..78db639e2 100644 --- 
a/lib/librte_eal/common/eal_common_options.c +++ b/lib/librte_eal/common/eal_common_options.c @@ -98,6 +98,7 @@ eal_long_options[] = { {OPT_VFIO_INTR, 1, NULL, OPT_VFIO_INTR_NUM }, {OPT_VMWARE_TSC_MAP, 0, NULL, OPT_VMWARE_TSC_MAP_NUM }, {OPT_XEN_DOM0, 0, NULL, OPT_XEN_DOM0_NUM }, + {OPT_MBUF_POOL_OPS, 1, NULL, OPT_MBUF_POOL_OPS_NUM}, {0, 0, NULL, 0 } }; @@ -220,6 +221,9 @@ eal_reset_internal_config(struct internal_config *internal_cfg) #endif internal_cfg->vmware_tsc_map = 0; internal_cfg->create_uio_dev = 0; + snprintf(internal_cfg->mbuf_pool_name, + sizeof(internal_cfg->mbuf_pool_name), "%s", + RTE_MBUF_DEFAULT_MEMPOOL_OPS); } static int @@ -1309,5 +1313,6 @@ eal_common_usage(void) " --"OPT_NO_PCI" Disable PCI\n" " --"OPT_NO_HPET" Disable HPET\n" " --"OPT_NO_SHCONF" No shared config (mmap'd files)\n" + " --"OPT_MBUF_POOL_OPS" Pool ops name for mbuf to use\n" "\n", RTE_MAX_LCORE); } diff --git a/lib/librte_eal/common/eal_internal_cfg.h b/lib/librte_eal/common/eal_internal_cfg.h index 7b7e8c887..e8a770a42 100644 --- a/lib/librte_eal/common/eal_internal_cfg.h +++ b/lib/librte_eal/common/eal_internal_cfg.h @@ -83,6 +83,7 @@ struct internal_config { const char *hugefile_prefix; /**< the base filename of hugetlbfs files */ const char *hugepage_dir; /**< specific hugetlbfs directory to use */ + char mbuf_pool_name[RTE_MBUF_POOL_OPS_NAMESIZE]; /**< mbuf pool name */ unsigned num_hugepage_sizes; /**< how many sizes on this system */ struct hugepage_info hugepage_info[MAX_HUGEPAGE_SIZES]; }; diff --git a/lib/librte_eal/common/eal_options.h b/lib/librte_eal/common/eal_options.h index 439a26104..9c893aa2d 100644 --- a/lib/librte_eal/common/eal_options.h +++ b/lib/librte_eal/common/eal_options.h @@ -83,6 +83,8 @@ enum { OPT_VMWARE_TSC_MAP_NUM, #define OPT_XEN_DOM0 "xen-dom0" OPT_XEN_DOM0_NUM, +#define OPT_MBUF_POOL_OPS "mbuf-pool-ops" + OPT_MBUF_POOL_OPS_NUM, OPT_LONG_MAX_NUM }; diff --git a/lib/librte_eal/common/include/rte_eal.h b/lib/librte_eal/common/include/rte_eal.h 
index 0e7363d77..fcd212843 100644 --- a/lib/librte_eal/common/include/rte_eal.h +++ b/lib/librte_eal/common/include/rte_eal.h @@ -54,6 +54,8 @@ extern "C" { /* Maximum thread_name length. */ #define RTE_MAX_THREAD_NAME_LEN 16 +/* Maximum length of mbuf pool ops name. */ +#define RTE_MBUF_POOL_OPS_NAMESIZE 32 /** * The lcore role (used in RTE or not). @@ -287,6 +289,15 @@ static inline int rte_gettid(void) return RTE_PER_LCORE(_thread_id); } +/** + * Ops to get default pool name for mbuf + * + * @return + * returns default pool name. + */ +const char * +rte_eal_mbuf_default_mempool_ops(void); + #define RTE_INIT(func) \ static void __attribute__((constructor, used)) func(void) diff --git a/lib/librte_eal/linuxapp/eal/eal.c b/lib/librte_eal/linuxapp/eal/eal.c index 48f12f44c..99c2a14d4 100644 --- a/lib/librte_eal/linuxapp/eal/eal.c +++ b/lib/librte_eal/linuxapp/eal/eal.c @@ -121,6 +121,13 @@ struct internal_config internal_config; /* used by rte_rdtsc() */ int rte_cycles_vmware_tsc_map; +/* Return mbuf pool name */ +const char * +rte_eal_mbuf_default_mempool_ops(void) +{ + return internal_config.mbuf_pool_name; +} + /* Return a pointer to the configuration structure */ struct rte_config * rte_eal_get_configuration(void) @@ -610,6 +617,17 @@ eal_parse_args(int argc, char **argv) internal_config.create_uio_dev = 1; break; + case OPT_MBUF_POOL_OPS_NUM: + ret = snprintf(internal_config.mbuf_pool_name, + sizeof(internal_config.mbuf_pool_name), + "%s", optarg); + if (ret < 0 || (uint32_t)ret >= + sizeof(internal_config.mbuf_pool_name)) { + ret = -1; + goto out; + } + break; + default: if (opt < OPT_LONG_MIN_NUM && isprint(opt)) { RTE_LOG(ERR, EAL, "Option %c is not supported " diff --git a/lib/librte_eal/linuxapp/eal/rte_eal_version.map b/lib/librte_eal/linuxapp/eal/rte_eal_version.map index 3a8f15406..5b48fcd37 100644 --- a/lib/librte_eal/linuxapp/eal/rte_eal_version.map +++ b/lib/librte_eal/linuxapp/eal/rte_eal_version.map @@ -242,3 +242,10 @@ EXPERIMENTAL { 
rte_service_unregister; } DPDK_17.08; + +DPDK_17.11 { + global: + + rte_eal_mbuf_default_mempool_ops; + +} DPDK_17.08; diff --git a/lib/librte_mbuf/rte_mbuf.c b/lib/librte_mbuf/rte_mbuf.c index 26a62b8e1..e1fc90ef3 100644 --- a/lib/librte_mbuf/rte_mbuf.c +++ b/lib/librte_mbuf/rte_mbuf.c @@ -157,6 +157,7 @@ rte_pktmbuf_pool_create(const char *name, unsigned n, { struct rte_mempool *mp; struct rte_pktmbuf_pool_private mbp_priv; + const char *mp_name; unsigned elt_size; int ret; @@ -176,8 +177,8 @@ rte_pktmbuf_pool_create(const char *name, unsigned n, if (mp == NULL) return NULL; - ret = rte_mempool_set_ops_byname(mp, - RTE_MBUF_DEFAULT_MEMPOOL_OPS, NULL); + mp_name = rte_eal_mbuf_default_mempool_ops(); + ret = rte_mempool_set_ops_byname(mp, mp_name, NULL); if (ret != 0) { RTE_LOG(ERR, MBUF, "error setting mempool handler\n"); rte_mempool_free(mp); -- 2.11.0 ^ permalink raw reply [flat|nested] 72+ messages in thread
* Re: [dpdk-dev] [PATCH v3 1/2] eal: allow user to override default pool handle 2017-08-15 8:07 ` [dpdk-dev] [PATCH v3 1/2] eal: allow user to override default pool handle Santosh Shukla @ 2017-09-04 11:46 ` Olivier MATZ 2017-09-07 9:25 ` Hemant Agrawal 1 sibling, 0 replies; 72+ messages in thread From: Olivier MATZ @ 2017-09-04 11:46 UTC (permalink / raw) To: Santosh Shukla; +Cc: dev, thomas, jerin.jacob, hemant.agrawal Hi Santosh, On Tue, Aug 15, 2017 at 01:37:16PM +0530, Santosh Shukla wrote: > --- a/lib/librte_eal/common/eal_internal_cfg.h > +++ b/lib/librte_eal/common/eal_internal_cfg.h > @@ -83,6 +83,7 @@ struct internal_config { > const char *hugefile_prefix; /**< the base filename of hugetlbfs files */ > const char *hugepage_dir; /**< specific hugetlbfs directory to use */ > > + char mbuf_pool_name[RTE_MBUF_POOL_OPS_NAMESIZE]; /**< mbuf pool name */ > unsigned num_hugepage_sizes; /**< how many sizes on this system */ > struct hugepage_info hugepage_info[MAX_HUGEPAGE_SIZES]; > }; > --- a/lib/librte_eal/common/include/rte_eal.h > +++ b/lib/librte_eal/common/include/rte_eal.h > @@ -54,6 +54,8 @@ extern "C" { > > /* Maximum thread_name length. */ > #define RTE_MAX_THREAD_NAME_LEN 16 > +/* Maximum length of mbuf pool ops name. */ > +#define RTE_MBUF_POOL_OPS_NAMESIZE 32 > To avoid to define a new constant, is there something preventing to use RTE_MEMPOOL_OPS_NAMESIZE? Or even better, is it possible to use a 'const char *', like it's done for hugepage_dir and hugepage_prefix? Thanks, Olivier ^ permalink raw reply [flat|nested] 72+ messages in thread
* Re: [dpdk-dev] [PATCH v3 1/2] eal: allow user to override default pool handle 2017-08-15 8:07 ` [dpdk-dev] [PATCH v3 1/2] eal: allow user to override default pool handle Santosh Shukla 2017-09-04 11:46 ` Olivier MATZ @ 2017-09-07 9:25 ` Hemant Agrawal 1 sibling, 0 replies; 72+ messages in thread From: Hemant Agrawal @ 2017-09-07 9:25 UTC (permalink / raw) To: Santosh Shukla, olivier.matz, dev; +Cc: thomas, jerin.jacob On 8/15/2017 1:37 PM, Santosh Shukla wrote: > DPDK has support for both sw and hw mempool and > currently user is limited to use ring_mp_mc pool. > In case user want to use other pool handle, > need to update config RTE_MEMPOOL_OPS_DEFAULT, then > build and run with desired pool handle. > > Introducing eal option to override default pool handle. > > Now user can override the RTE_MEMPOOL_OPS_DEFAULT by passing > pool handle to eal `--mbuf-pool-ops=""`. > > Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com> > --- > v2 --> v3: > - Updated version.map entry to v17.11. > > v1 --> v2: > (Changes per review feedback from Olivier) > - Renamed rte_eal_get_mempool_name to rte_eal_mbuf_default_mempool_ops(). > - Added support in bsdapp too. > > Few changes considered while working on v2 like: > - Choosing eal arg name 'mbuf-pool-ops' out of 3 proposed names by Olivier. > Initialy thought to use eal arg 'mbuf-default-mempool-ops' But > since its way too long and user may find difficult to use it. That's why > chose 'mbuf-pool-ops', As that name very close to api name and #define. > - Adding new RTE_ for NAMESIZE in rte_eal.h called > 'RTE_MBUF_DEFAULT_MEMPOOL_OPS'. 
> > lib/librte_eal/bsdapp/eal/eal.c | 17 +++++++++++++++++ > lib/librte_eal/bsdapp/eal/rte_eal_version.map | 7 +++++++ > lib/librte_eal/common/eal_common_options.c | 5 +++++ > lib/librte_eal/common/eal_internal_cfg.h | 1 + > lib/librte_eal/common/eal_options.h | 2 ++ > lib/librte_eal/common/include/rte_eal.h | 11 +++++++++++ > lib/librte_eal/linuxapp/eal/eal.c | 18 ++++++++++++++++++ > lib/librte_eal/linuxapp/eal/rte_eal_version.map | 7 +++++++ > lib/librte_mbuf/rte_mbuf.c | 5 +++-- > 9 files changed, 71 insertions(+), 2 deletions(-) > > diff --git a/lib/librte_eal/bsdapp/eal/eal.c b/lib/librte_eal/bsdapp/eal/eal.c > index 5fa598842..1def123be 100644 > --- a/lib/librte_eal/bsdapp/eal/eal.c > +++ b/lib/librte_eal/bsdapp/eal/eal.c > @@ -112,6 +112,13 @@ struct internal_config internal_config; > /* used by rte_rdtsc() */ > int rte_cycles_vmware_tsc_map; > > +/* Return mbuf pool name */ > +const char * > +rte_eal_mbuf_default_mempool_ops(void) > +{ > + return internal_config.mbuf_pool_name; > +} > + > /* Return a pointer to the configuration structure */ > struct rte_config * > rte_eal_get_configuration(void) > @@ -385,6 +392,16 @@ eal_parse_args(int argc, char **argv) > continue; > > switch (opt) { > + case OPT_MBUF_POOL_OPS_NUM: > + ret = snprintf(internal_config.mbuf_pool_name, > + sizeof(internal_config.mbuf_pool_name), > + "%s", optarg); > + if (ret < 0 || (uint32_t)ret >= > + sizeof(internal_config.mbuf_pool_name)) { > + ret = -1; > + goto out; > + } > + break; > case 'h': > eal_usage(prgname); > exit(EXIT_SUCCESS); > diff --git a/lib/librte_eal/bsdapp/eal/rte_eal_version.map b/lib/librte_eal/bsdapp/eal/rte_eal_version.map > index aac6fd776..172a28bbf 100644 > --- a/lib/librte_eal/bsdapp/eal/rte_eal_version.map > +++ b/lib/librte_eal/bsdapp/eal/rte_eal_version.map > @@ -237,3 +237,10 @@ EXPERIMENTAL { > rte_service_unregister; > > } DPDK_17.08; > + > +DPDK_17.11 { > + global: > + > + rte_eal_mbuf_default_mempool_ops; > + > +} DPDK_17.08; > diff --git 
a/lib/librte_eal/common/eal_common_options.c b/lib/librte_eal/common/eal_common_options.c > index 1da185e59..78db639e2 100644 > --- a/lib/librte_eal/common/eal_common_options.c > +++ b/lib/librte_eal/common/eal_common_options.c > @@ -98,6 +98,7 @@ eal_long_options[] = { > {OPT_VFIO_INTR, 1, NULL, OPT_VFIO_INTR_NUM }, > {OPT_VMWARE_TSC_MAP, 0, NULL, OPT_VMWARE_TSC_MAP_NUM }, > {OPT_XEN_DOM0, 0, NULL, OPT_XEN_DOM0_NUM }, > + {OPT_MBUF_POOL_OPS, 1, NULL, OPT_MBUF_POOL_OPS_NUM}, > {0, 0, NULL, 0 } > }; > > @@ -220,6 +221,9 @@ eal_reset_internal_config(struct internal_config *internal_cfg) > #endif > internal_cfg->vmware_tsc_map = 0; > internal_cfg->create_uio_dev = 0; > + snprintf(internal_cfg->mbuf_pool_name, > + sizeof(internal_cfg->mbuf_pool_name), "%s", > + RTE_MBUF_DEFAULT_MEMPOOL_OPS); > } > > static int > @@ -1309,5 +1313,6 @@ eal_common_usage(void) > " --"OPT_NO_PCI" Disable PCI\n" > " --"OPT_NO_HPET" Disable HPET\n" > " --"OPT_NO_SHCONF" No shared config (mmap'd files)\n" > + " --"OPT_MBUF_POOL_OPS" Pool ops name for mbuf to use\n" > "\n", RTE_MAX_LCORE); > } > diff --git a/lib/librte_eal/common/eal_internal_cfg.h b/lib/librte_eal/common/eal_internal_cfg.h > index 7b7e8c887..e8a770a42 100644 > --- a/lib/librte_eal/common/eal_internal_cfg.h > +++ b/lib/librte_eal/common/eal_internal_cfg.h > @@ -83,6 +83,7 @@ struct internal_config { > const char *hugefile_prefix; /**< the base filename of hugetlbfs files */ > const char *hugepage_dir; /**< specific hugetlbfs directory to use */ > > + char mbuf_pool_name[RTE_MBUF_POOL_OPS_NAMESIZE]; /**< mbuf pool name */ > unsigned num_hugepage_sizes; /**< how many sizes on this system */ > struct hugepage_info hugepage_info[MAX_HUGEPAGE_SIZES]; > }; > diff --git a/lib/librte_eal/common/eal_options.h b/lib/librte_eal/common/eal_options.h > index 439a26104..9c893aa2d 100644 > --- a/lib/librte_eal/common/eal_options.h > +++ b/lib/librte_eal/common/eal_options.h > @@ -83,6 +83,8 @@ enum { > OPT_VMWARE_TSC_MAP_NUM, > #define 
OPT_XEN_DOM0 "xen-dom0" > OPT_XEN_DOM0_NUM, > +#define OPT_MBUF_POOL_OPS "mbuf-pool-ops" > + OPT_MBUF_POOL_OPS_NUM, > OPT_LONG_MAX_NUM > }; > > diff --git a/lib/librte_eal/common/include/rte_eal.h b/lib/librte_eal/common/include/rte_eal.h > index 0e7363d77..fcd212843 100644 > --- a/lib/librte_eal/common/include/rte_eal.h > +++ b/lib/librte_eal/common/include/rte_eal.h > @@ -54,6 +54,8 @@ extern "C" { > > /* Maximum thread_name length. */ > #define RTE_MAX_THREAD_NAME_LEN 16 > +/* Maximum length of mbuf pool ops name. */ > +#define RTE_MBUF_POOL_OPS_NAMESIZE 32 > > /** > * The lcore role (used in RTE or not). > @@ -287,6 +289,15 @@ static inline int rte_gettid(void) > return RTE_PER_LCORE(_thread_id); > } > > +/** > + * Ops to get default pool name for mbuf > + * > + * @return > + * returns default pool name. > + */ > +const char * > +rte_eal_mbuf_default_mempool_ops(void); > + > #define RTE_INIT(func) \ > static void __attribute__((constructor, used)) func(void) > > diff --git a/lib/librte_eal/linuxapp/eal/eal.c b/lib/librte_eal/linuxapp/eal/eal.c > index 48f12f44c..99c2a14d4 100644 > --- a/lib/librte_eal/linuxapp/eal/eal.c > +++ b/lib/librte_eal/linuxapp/eal/eal.c > @@ -121,6 +121,13 @@ struct internal_config internal_config; > /* used by rte_rdtsc() */ > int rte_cycles_vmware_tsc_map; > > +/* Return mbuf pool name */ > +const char * > +rte_eal_mbuf_default_mempool_ops(void) > +{ > + return internal_config.mbuf_pool_name; > +} > + > /* Return a pointer to the configuration structure */ > struct rte_config * > rte_eal_get_configuration(void) > @@ -610,6 +617,17 @@ eal_parse_args(int argc, char **argv) > internal_config.create_uio_dev = 1; > break; > > + case OPT_MBUF_POOL_OPS_NUM: > + ret = snprintf(internal_config.mbuf_pool_name, > + sizeof(internal_config.mbuf_pool_name), > + "%s", optarg); > + if (ret < 0 || (uint32_t)ret >= > + sizeof(internal_config.mbuf_pool_name)) { > + ret = -1; > + goto out; > + } > + break; > + > default: > if (opt < OPT_LONG_MIN_NUM && 
isprint(opt)) { > RTE_LOG(ERR, EAL, "Option %c is not supported " > diff --git a/lib/librte_eal/linuxapp/eal/rte_eal_version.map b/lib/librte_eal/linuxapp/eal/rte_eal_version.map > index 3a8f15406..5b48fcd37 100644 > --- a/lib/librte_eal/linuxapp/eal/rte_eal_version.map > +++ b/lib/librte_eal/linuxapp/eal/rte_eal_version.map > @@ -242,3 +242,10 @@ EXPERIMENTAL { > rte_service_unregister; > > } DPDK_17.08; > + > +DPDK_17.11 { > + global: > + > + rte_eal_mbuf_default_mempool_ops; > + > +} DPDK_17.08; > diff --git a/lib/librte_mbuf/rte_mbuf.c b/lib/librte_mbuf/rte_mbuf.c > index 26a62b8e1..e1fc90ef3 100644 > --- a/lib/librte_mbuf/rte_mbuf.c > +++ b/lib/librte_mbuf/rte_mbuf.c > @@ -157,6 +157,7 @@ rte_pktmbuf_pool_create(const char *name, unsigned n, > { > struct rte_mempool *mp; > struct rte_pktmbuf_pool_private mbp_priv; > + const char *mp_name; > unsigned elt_size; > int ret; > > @@ -176,8 +177,8 @@ rte_pktmbuf_pool_create(const char *name, unsigned n, > if (mp == NULL) > return NULL; > > - ret = rte_mempool_set_ops_byname(mp, > - RTE_MBUF_DEFAULT_MEMPOOL_OPS, NULL); > + mp_name = rte_eal_mbuf_default_mempool_ops(); > + ret = rte_mempool_set_ops_byname(mp, mp_name, NULL); > if (ret != 0) { > RTE_LOG(ERR, MBUF, "error setting mempool handler\n"); > rte_mempool_free(mp); > I guess you will take care of Olivier's comments. With that, Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com> ^ permalink raw reply [flat|nested] 72+ messages in thread
* [dpdk-dev] [PATCH v3 2/2] ethdev: allow pmd to advertise pool handle 2017-08-15 8:07 ` [dpdk-dev] [PATCH v3 0/2] Dynamically configure mempool handle Santosh Shukla 2017-08-15 8:07 ` [dpdk-dev] [PATCH v3 1/2] eal: allow user to override default pool handle Santosh Shukla @ 2017-08-15 8:07 ` Santosh Shukla 2017-09-04 12:11 ` Olivier MATZ 2017-09-04 9:41 ` [dpdk-dev] [PATCH v3 0/2] Dynamically configure mempool handle Sergio Gonzalez Monroy 2017-09-11 15:18 ` [dpdk-dev] [PATCH v4 " Santosh Shukla 3 siblings, 1 reply; 72+ messages in thread From: Santosh Shukla @ 2017-08-15 8:07 UTC (permalink / raw) To: olivier.matz, dev; +Cc: thomas, jerin.jacob, hemant.agrawal, Santosh Shukla Now that dpdk supports more than one mempool driver, and each mempool driver works best for a specific PMD, for example: - sw ring based mempool for Intel PMD drivers - dpaa2 HW mempool manager for the dpaa2 PMD driver. - fpa HW mempool manager for the Octeontx PMD driver. An application would like to know the `preferred mempool vs PMD driver` information in advance, before port setup. Introducing the rte_eth_dev_get_preferred_pool_ops() API, which allows a PMD driver to advertise its pool capability to the application. The application-side programming sequence would be: char pref_mempool[RTE_MEMPOOL_OPS_NAMESIZE]; rte_eth_dev_get_preferred_pool_ops(ethdev_port_id, pref_mempool /*out*/); rte_mempool_create_empty(); rte_mempool_set_ops_byname( , pref_mempool, ); rte_mempool_populate_default(); Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com> --- v2 --> v3: - Updated version.map entry to DPDK_v17.11. v1 --> v2: - Renamed _get_preferred_pool to _get_preferred_pool_ops(). Per v1 review feedback, Olivier suggested renaming the API to rte_eth_dev_pool_ops_supported(), considering that the 2nd param for that API would return the pool handle 'priority' for that port. 
However, per v1 [1], we're opting for approach 1) where ethdev API returns _preferred_ pool handle to application and Its upto application to decide on policy - whether application wants to create pool with received preferred pool handle or not. For more discussion details on this topic refer [1]. [1] http://dpdk.org/dev/patchwork/patch/24944/ - Removed const qualifier from _get_preferred_pool_ops() param. - Updated API description and changes return val from -EINVAL to _ENOTSUP (suggested by Olivier) lib/librte_ether/rte_ethdev.c | 18 ++++++++++++++++++ lib/librte_ether/rte_ethdev.h | 21 +++++++++++++++++++++ lib/librte_ether/rte_ether_version.map | 7 +++++++ 3 files changed, 46 insertions(+) diff --git a/lib/librte_ether/rte_ethdev.c b/lib/librte_ether/rte_ethdev.c index 0597641ee..6a0fe51fc 100644 --- a/lib/librte_ether/rte_ethdev.c +++ b/lib/librte_ether/rte_ethdev.c @@ -3409,3 +3409,21 @@ rte_eth_dev_adjust_nb_rx_tx_desc(uint8_t port_id, return 0; } + +int +rte_eth_dev_get_preferred_pool_ops(uint8_t port_id, char *pool) +{ + struct rte_eth_dev *dev; + const char *tmp; + + RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV); + + dev = &rte_eth_devices[port_id]; + + if (*dev->dev_ops->get_preferred_pool_ops == NULL) { + tmp = rte_eal_mbuf_default_mempool_ops(); + snprintf(pool, RTE_MBUF_POOL_OPS_NAMESIZE, "%s", tmp); + return 0; + } + return (*dev->dev_ops->get_preferred_pool_ops)(dev, pool); +} diff --git a/lib/librte_ether/rte_ethdev.h b/lib/librte_ether/rte_ethdev.h index 0adf3274a..afbce0b23 100644 --- a/lib/librte_ether/rte_ethdev.h +++ b/lib/librte_ether/rte_ethdev.h @@ -1425,6 +1425,10 @@ typedef int (*eth_get_dcb_info)(struct rte_eth_dev *dev, struct rte_eth_dcb_info *dcb_info); /**< @internal Get dcb information on an Ethernet device */ +typedef int (*eth_get_preferred_pool_ops_t)(struct rte_eth_dev *dev, + char *pool); +/**< @internal Get preferred pool handle for port */ + /** * @internal A structure containing the functions exported by an Ethernet 
driver. */ @@ -1544,6 +1548,8 @@ struct eth_dev_ops { eth_tm_ops_get_t tm_ops_get; /**< Get Traffic Management (TM) operations. */ + eth_get_preferred_pool_ops_t get_preferred_pool_ops; + /**< Get preferred pool handle for port */ }; /** @@ -4436,6 +4442,21 @@ int rte_eth_dev_adjust_nb_rx_tx_desc(uint8_t port_id, uint16_t *nb_rx_desc, uint16_t *nb_tx_desc); +/** + * Get preferred pool handle for port + * + * @param port_id + * port identifier of the device + * @param [out] pool + * Preferred pool handle for this port. + * Maximum length of preferred pool handle is RTE_MBUF_POOL_OPS_NAMESIZE. + * @return + * - (0) if successful. + * - (-ENOTSUP, -ENODEV or -EINVAL) on failure. + */ +int +rte_eth_dev_get_preferred_pool_ops(uint8_t port_id, char *pool); + #ifdef __cplusplus } #endif diff --git a/lib/librte_ether/rte_ether_version.map b/lib/librte_ether/rte_ether_version.map index 42837285e..5d97d6299 100644 --- a/lib/librte_ether/rte_ether_version.map +++ b/lib/librte_ether/rte_ether_version.map @@ -187,3 +187,10 @@ DPDK_17.08 { rte_tm_wred_profile_delete; } DPDK_17.05; + +DPDK_17.11 { + global: + + rte_eth_dev_get_preferred_pool_ops; + +} DPDK_17.08; -- 2.11.0 ^ permalink raw reply [flat|nested] 72+ messages in thread
* Re: [dpdk-dev] [PATCH v3 2/2] ethdev: allow pmd to advertise pool handle 2017-08-15 8:07 ` [dpdk-dev] [PATCH v3 2/2] ethdev: allow pmd to advertise " Santosh Shukla @ 2017-09-04 12:11 ` Olivier MATZ 2017-09-04 13:14 ` santosh 0 siblings, 1 reply; 72+ messages in thread From: Olivier MATZ @ 2017-09-04 12:11 UTC (permalink / raw) To: Santosh Shukla; +Cc: dev, thomas, jerin.jacob, hemant.agrawal Hi Santosh, On Tue, Aug 15, 2017 at 01:37:17PM +0530, Santosh Shukla wrote: > Now that dpdk supports more than one mempool drivers and > each mempool driver works best for specific PMD, example: > - sw ring based mempool for Intel PMD drivers > - dpaa2 HW mempool manager for dpaa2 PMD driver. > - fpa HW mempool manager for Octeontx PMD driver. > > Application like to know `preferred mempool vs PMD driver` > information in advance before port setup. > > Introducing rte_eth_dev_get_preferred_pool_ops() API, > which allows PMD driver to advertise their pool capability to application. > > Application side programing sequence would be: > > char pref_mempool[RTE_MEMPOOL_OPS_NAMESIZE]; > rte_eth_dev_get_preferred_pool_ops(ethdev_port_id, pref_mempoolx /*out*/); > rte_mempool_create_empty(); > rte_mempool_set_ops_byname( , pref_memppol, ); > rte_mempool_populate_default(); > > Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com> > --- > v2 --> v3: > - Updated version.map entry to DPDK_v17.11. > > v1 --> v2: > - Renamed _get_preferred_pool to _get_preferred_pool_ops(). > Per v1 review feedback, Olivier suggested to rename api > to rte_eth_dev_pool_ops_supported(), considering that 2nd param > for that api will return pool handle 'priority' for that port. > However, per v1 [1], we're opting for approach 1) where > ethdev API returns _preferred_ pool handle to application and Its upto > application to decide on policy - whether application wants to create > pool with received preferred pool handle or not. For more discussion details > on this topic refer [1]. 
Well, I still think it would be more flexible to have an API like rte_eth_dev_pool_ops_supported(uint8_t port_id, const char *pool) It supports the easy case (= one preferred mempool) without much pain, and provides a more precise manner to describe what is supported or not by the driver. Example: "pmd_foo" prefers "mempool_foo" (best perf), but also supports "mempool_stack" and "mempool_ring", while "mempool_bar" won't work at all. Having only one preferred pool_ops also prevents smoothly renaming a pool (supporting both for some time) or having 2 names for different variants of the same pool_ops (ex: ring_mp_mc, ring_sp_sc). But if the users (I guess at least Cavium and NXP) are happy with what you propose, I'm fine with it. > --- a/lib/librte_ether/rte_ethdev.c > +++ b/lib/librte_ether/rte_ethdev.c > @@ -3409,3 +3409,21 @@ rte_eth_dev_adjust_nb_rx_tx_desc(uint8_t port_id, > > return 0; > } > + > +int > +rte_eth_dev_get_preferred_pool_ops(uint8_t port_id, char *pool) > +{ > + struct rte_eth_dev *dev; > + const char *tmp; > + > + RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV); > + > + dev = &rte_eth_devices[port_id]; > + > + if (*dev->dev_ops->get_preferred_pool_ops == NULL) { > + tmp = rte_eal_mbuf_default_mempool_ops(); > + snprintf(pool, RTE_MBUF_POOL_OPS_NAMESIZE, "%s", tmp); > + return 0; > + } > + return (*dev->dev_ops->get_preferred_pool_ops)(dev, pool); > +} I think adding the length of the pool buffer to the function arguments would be better: only documenting that the length is RTE_MBUF_POOL_OPS_NAMESIZE looks a bit weak to me, because if one day it changes to another value, the users of the function may not notice it (no ABI/API change). One more comment: it would be helpful to have one user of this API in the example apps or testpmd. Olivier ^ permalink raw reply [flat|nested] 72+ messages in thread
* Re: [dpdk-dev] [PATCH v3 2/2] ethdev: allow pmd to advertise pool handle 2017-09-04 12:11 ` Olivier MATZ @ 2017-09-04 13:14 ` santosh 2017-09-07 9:21 ` Hemant Agrawal 0 siblings, 1 reply; 72+ messages in thread From: santosh @ 2017-09-04 13:14 UTC (permalink / raw) To: Olivier MATZ; +Cc: dev, thomas, jerin.jacob, hemant.agrawal Hi Olivier, On Monday 04 September 2017 05:41 PM, Olivier MATZ wrote: > Hi Santosh, > > On Tue, Aug 15, 2017 at 01:37:17PM +0530, Santosh Shukla wrote: >> Now that dpdk supports more than one mempool drivers and >> each mempool driver works best for specific PMD, example: >> - sw ring based mempool for Intel PMD drivers >> - dpaa2 HW mempool manager for dpaa2 PMD driver. >> - fpa HW mempool manager for Octeontx PMD driver. >> >> Application like to know `preferred mempool vs PMD driver` >> information in advance before port setup. >> >> Introducing rte_eth_dev_get_preferred_pool_ops() API, >> which allows PMD driver to advertise their pool capability to application. >> >> Application side programing sequence would be: >> >> char pref_mempool[RTE_MEMPOOL_OPS_NAMESIZE]; >> rte_eth_dev_get_preferred_pool_ops(ethdev_port_id, pref_mempoolx /*out*/); >> rte_mempool_create_empty(); >> rte_mempool_set_ops_byname( , pref_memppol, ); >> rte_mempool_populate_default(); >> >> Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com> >> --- >> v2 --> v3: >> - Updated version.map entry to DPDK_v17.11. >> >> v1 --> v2: >> - Renamed _get_preferred_pool to _get_preferred_pool_ops(). >> Per v1 review feedback, Olivier suggested to rename api >> to rte_eth_dev_pool_ops_supported(), considering that 2nd param >> for that api will return pool handle 'priority' for that port. >> However, per v1 [1], we're opting for approach 1) where >> ethdev API returns _preferred_ pool handle to application and Its upto >> application to decide on policy - whether application wants to create >> pool with received preferred pool handle or not. 
For more discussion details >> on this topic refer [1]. > Well, I still think it would be more flexible to have an API like > rte_eth_dev_pool_ops_supported(uint8_t port_id, const char *pool) > > It supports the easy case (= one preferred mempool) without much pain, > and provides a more precise manner to describe what is supported or not > by the driver. Example: "pmd_foo" prefers "mempool_foo" (best perf), but > also supporst "mempool_stack" and "mempool_ring", but "mempool_bar" > won't work at all. > > Having only one preferred pool_ops also prevents from smoothly renaming > a pool (supporting both during some time) or to have 2 names for > different variants of the same pool_ops (ex: ring_mp_mc, ring_sp_sc). > > But if the users (I guess at least Cavium and NXP) are happy with > what you propose, I'm fine with it. preferred handle based upon real world use-case and same thing raised at [1]. Hi Hemant, Are you ok with proposed preferred API? [1] http://dpdk.org/dev/patchwork/patch/24944/ >> --- a/lib/librte_ether/rte_ethdev.c >> +++ b/lib/librte_ether/rte_ethdev.c >> @@ -3409,3 +3409,21 @@ rte_eth_dev_adjust_nb_rx_tx_desc(uint8_t port_id, >> >> return 0; >> } >> + >> +int >> +rte_eth_dev_get_preferred_pool_ops(uint8_t port_id, char *pool) >> +{ >> + struct rte_eth_dev *dev; >> + const char *tmp; >> + >> + RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV); >> + >> + dev = &rte_eth_devices[port_id]; >> + >> + if (*dev->dev_ops->get_preferred_pool_ops == NULL) { >> + tmp = rte_eal_mbuf_default_mempool_ops(); >> + snprintf(pool, RTE_MBUF_POOL_OPS_NAMESIZE, "%s", tmp); >> + return 0; >> + } >> + return (*dev->dev_ops->get_preferred_pool_ops)(dev, pool); >> +} > I think adding the length of the pool buffer to the function arguments > would be better: only documenting that the length is > RTE_MBUF_POOL_OPS_NAMESIZE looks a bit weak to me, because if one day it > changes to another value, the users of the function may not notice it > (no ABI/API change). 
> > > One more comment: it would be helpful to have one user of this API in > the example apps or testpmd. Yes. I will add in v3. Thanks. > Olivier ^ permalink raw reply [flat|nested] 72+ messages in thread
* Re: [dpdk-dev] [PATCH v3 2/2] ethdev: allow pmd to advertise pool handle 2017-09-04 13:14 ` santosh @ 2017-09-07 9:21 ` Hemant Agrawal 2017-09-07 10:06 ` santosh 0 siblings, 1 reply; 72+ messages in thread From: Hemant Agrawal @ 2017-09-07 9:21 UTC (permalink / raw) To: santosh, Olivier MATZ; +Cc: dev, thomas, jerin.jacob On 9/4/2017 6:44 PM, santosh wrote: > Hi Olivier, > > > On Monday 04 September 2017 05:41 PM, Olivier MATZ wrote: >> Hi Santosh, >> >> On Tue, Aug 15, 2017 at 01:37:17PM +0530, Santosh Shukla wrote: >>> Now that dpdk supports more than one mempool drivers and >>> each mempool driver works best for specific PMD, example: >>> - sw ring based mempool for Intel PMD drivers >>> - dpaa2 HW mempool manager for dpaa2 PMD driver. >>> - fpa HW mempool manager for Octeontx PMD driver. >>> >>> Application like to know `preferred mempool vs PMD driver` >>> information in advance before port setup. >>> >>> Introducing rte_eth_dev_get_preferred_pool_ops() API, >>> which allows PMD driver to advertise their pool capability to application. >>> >>> Application side programing sequence would be: >>> >>> char pref_mempool[RTE_MEMPOOL_OPS_NAMESIZE]; >>> rte_eth_dev_get_preferred_pool_ops(ethdev_port_id, pref_mempoolx /*out*/); >>> rte_mempool_create_empty(); >>> rte_mempool_set_ops_byname( , pref_memppol, ); >>> rte_mempool_populate_default(); >>> >>> Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com> >>> --- >>> v2 --> v3: >>> - Updated version.map entry to DPDK_v17.11. >>> >>> v1 --> v2: >>> - Renamed _get_preferred_pool to _get_preferred_pool_ops(). >>> Per v1 review feedback, Olivier suggested to rename api >>> to rte_eth_dev_pool_ops_supported(), considering that 2nd param >>> for that api will return pool handle 'priority' for that port. 
>>> However, per v1 [1], we're opting for approach 1) where >>> ethdev API returns _preferred_ pool handle to application and Its upto >>> application to decide on policy - whether application wants to create >>> pool with received preferred pool handle or not. For more discussion details >>> on this topic refer [1]. >> Well, I still think it would be more flexible to have an API like >> rte_eth_dev_pool_ops_supported(uint8_t port_id, const char *pool) >> >> It supports the easy case (= one preferred mempool) without much pain, >> and provides a more precise manner to describe what is supported or not >> by the driver. Example: "pmd_foo" prefers "mempool_foo" (best perf), but >> also supporst "mempool_stack" and "mempool_ring", but "mempool_bar" >> won't work at all. >> >> Having only one preferred pool_ops also prevents from smoothly renaming >> a pool (supporting both during some time) or to have 2 names for >> different variants of the same pool_ops (ex: ring_mp_mc, ring_sp_sc). >> >> But if the users (I guess at least Cavium and NXP) are happy with >> what you propose, I'm fine with it. > > preferred handle based upon real world use-case and same thing raised > at [1]. > > Hi Hemant, Are you ok with proposed preferred API? > > [1] http://dpdk.org/dev/patchwork/patch/24944/ > The current patch is ok, but it is better if you can extend it to provide list of preferred pools (in preference order) instead of just one pool. This will become helpful. I will avoid providing list of not-supported pools etc. A driver can have more than one preferred pool, depend on the resource availability one or other can be used. I am also proposing this in my proposal[1]. 
[1] http://dpdk.org/dev/patchwork/patch/26377/ >>> --- a/lib/librte_ether/rte_ethdev.c >>> +++ b/lib/librte_ether/rte_ethdev.c >>> @@ -3409,3 +3409,21 @@ rte_eth_dev_adjust_nb_rx_tx_desc(uint8_t port_id, >>> >>> return 0; >>> } >>> + >>> +int >>> +rte_eth_dev_get_preferred_pool_ops(uint8_t port_id, char *pool) >>> +{ >>> + struct rte_eth_dev *dev; >>> + const char *tmp; >>> + >>> + RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV); >>> + >>> + dev = &rte_eth_devices[port_id]; >>> + >>> + if (*dev->dev_ops->get_preferred_pool_ops == NULL) { >>> + tmp = rte_eal_mbuf_default_mempool_ops(); >>> + snprintf(pool, RTE_MBUF_POOL_OPS_NAMESIZE, "%s", tmp); >>> + return 0; >>> + } >>> + return (*dev->dev_ops->get_preferred_pool_ops)(dev, pool); >>> +} >> I think adding the length of the pool buffer to the function arguments >> would be better: only documenting that the length is >> RTE_MBUF_POOL_OPS_NAMESIZE looks a bit weak to me, because if one day it >> changes to another value, the users of the function may not notice it >> (no ABI/API change). >> >> >> One more comment: it would be helpful to have one user of this API in >> the example apps or testpmd. > > Yes. I will add in v3. Thanks. > >> Olivier > > ^ permalink raw reply [flat|nested] 72+ messages in thread
* Re: [dpdk-dev] [PATCH v3 2/2] ethdev: allow pmd to advertise pool handle 2017-09-07 9:21 ` Hemant Agrawal @ 2017-09-07 10:06 ` santosh 2017-09-07 10:11 ` santosh 0 siblings, 1 reply; 72+ messages in thread From: santosh @ 2017-09-07 10:06 UTC (permalink / raw) To: Hemant Agrawal, Olivier MATZ; +Cc: dev, thomas, jerin.jacob Hi Hemant, On Thursday 07 September 2017 02:51 PM, Hemant Agrawal wrote: > On 9/4/2017 6:44 PM, santosh wrote: >> Hi Olivier, >> >> >> On Monday 04 September 2017 05:41 PM, Olivier MATZ wrote: >>> Hi Santosh, >>> >>> On Tue, Aug 15, 2017 at 01:37:17PM +0530, Santosh Shukla wrote: >>>> Now that dpdk supports more than one mempool drivers and >>>> each mempool driver works best for specific PMD, example: >>>> - sw ring based mempool for Intel PMD drivers >>>> - dpaa2 HW mempool manager for dpaa2 PMD driver. >>>> - fpa HW mempool manager for Octeontx PMD driver. >>>> >>>> Application like to know `preferred mempool vs PMD driver` >>>> information in advance before port setup. >>>> >>>> Introducing rte_eth_dev_get_preferred_pool_ops() API, >>>> which allows PMD driver to advertise their pool capability to application. >>>> >>>> Application side programing sequence would be: >>>> >>>> char pref_mempool[RTE_MEMPOOL_OPS_NAMESIZE]; >>>> rte_eth_dev_get_preferred_pool_ops(ethdev_port_id, pref_mempoolx /*out*/); >>>> rte_mempool_create_empty(); >>>> rte_mempool_set_ops_byname( , pref_memppol, ); >>>> rte_mempool_populate_default(); >>>> >>>> Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com> >>>> --- >>>> v2 --> v3: >>>> - Updated version.map entry to DPDK_v17.11. >>>> >>>> v1 --> v2: >>>> - Renamed _get_preferred_pool to _get_preferred_pool_ops(). >>>> Per v1 review feedback, Olivier suggested to rename api >>>> to rte_eth_dev_pool_ops_supported(), considering that 2nd param >>>> for that api will return pool handle 'priority' for that port. 
>>>> However, per v1 [1], we're opting for approach 1) where >>>> ethdev API returns _preferred_ pool handle to application and Its upto >>>> application to decide on policy - whether application wants to create >>>> pool with received preferred pool handle or not. For more discussion details >>>> on this topic refer [1]. >>> Well, I still think it would be more flexible to have an API like >>> rte_eth_dev_pool_ops_supported(uint8_t port_id, const char *pool) >>> >>> It supports the easy case (= one preferred mempool) without much pain, >>> and provides a more precise manner to describe what is supported or not >>> by the driver. Example: "pmd_foo" prefers "mempool_foo" (best perf), but >>> also supporst "mempool_stack" and "mempool_ring", but "mempool_bar" >>> won't work at all. >>> >>> Having only one preferred pool_ops also prevents from smoothly renaming >>> a pool (supporting both during some time) or to have 2 names for >>> different variants of the same pool_ops (ex: ring_mp_mc, ring_sp_sc). >>> >>> But if the users (I guess at least Cavium and NXP) are happy with >>> what you propose, I'm fine with it. >> >> preferred handle based upon real world use-case and same thing raised >> at [1]. >> >> Hi Hemant, Are you ok with proposed preferred API? >> >> [1] http://dpdk.org/dev/patchwork/patch/24944/ >> > > The current patch is ok, but it is better if you can extend it to provide list of preferred pools (in preference order) instead of just one pool. This will become helpful. I will avoid providing list of not-supported pools etc. > > A driver can have more than one preferred pool, depend on the resource availability one or other can be used. I am also proposing this in my proposal[1]. > > [1] http://dpdk.org/dev/patchwork/patch/26377/ > Ok, then sticking to Olivier api but slight change in param, example: /** * Get list of supported pools for a port * * @param port_id [in] * Pointer to port identifier of the device. 
* @param pools [out] * Pointer to the list of supported pools for that port. * Returns with array of pool ops name handler of size * RTE_MEMPOOL_OPS_NAMESIZE. * @return * >=0: Success; PMD has updated supported pool list. * <0: Failure; */ int rte_eth_dev_pools_ops_supported(uint8_t port_id, char **pools) Hemant, Olivier: Does above api make sense? Pl. confirm. Thanks. > > >>>> --- a/lib/librte_ether/rte_ethdev.c >>>> +++ b/lib/librte_ether/rte_ethdev.c >>>> @@ -3409,3 +3409,21 @@ rte_eth_dev_adjust_nb_rx_tx_desc(uint8_t port_id, >>>> >>>> return 0; >>>> } >>>> + >>>> +int >>>> +rte_eth_dev_get_preferred_pool_ops(uint8_t port_id, char *pool) >>>> +{ >>>> + struct rte_eth_dev *dev; >>>> + const char *tmp; >>>> + >>>> + RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV); >>>> + >>>> + dev = &rte_eth_devices[port_id]; >>>> + >>>> + if (*dev->dev_ops->get_preferred_pool_ops == NULL) { >>>> + tmp = rte_eal_mbuf_default_mempool_ops(); >>>> + snprintf(pool, RTE_MBUF_POOL_OPS_NAMESIZE, "%s", tmp); >>>> + return 0; >>>> + } >>>> + return (*dev->dev_ops->get_preferred_pool_ops)(dev, pool); >>>> +} >>> I think adding the length of the pool buffer to the function arguments >>> would be better: only documenting that the length is >>> RTE_MBUF_POOL_OPS_NAMESIZE looks a bit weak to me, because if one day it >>> changes to another value, the users of the function may not notice it >>> (no ABI/API change). >>> >>> >>> One more comment: it would be helpful to have one user of this API in >>> the example apps or testpmd. >> >> Yes. I will add in v3. Thanks. >> >>> Olivier >> >> > ^ permalink raw reply [flat|nested] 72+ messages in thread
* Re: [dpdk-dev] [PATCH v3 2/2] ethdev: allow pmd to advertise pool handle 2017-09-07 10:06 ` santosh @ 2017-09-07 10:11 ` santosh 2017-09-07 11:08 ` Hemant Agrawal 0 siblings, 1 reply; 72+ messages in thread From: santosh @ 2017-09-07 10:11 UTC (permalink / raw) To: Hemant Agrawal, Olivier MATZ; +Cc: dev, thomas, jerin.jacob On Thursday 07 September 2017 03:36 PM, santosh wrote: > Hi Hemant, > > > On Thursday 07 September 2017 02:51 PM, Hemant Agrawal wrote: >> On 9/4/2017 6:44 PM, santosh wrote: >>> Hi Olivier, >>> >>> >>> On Monday 04 September 2017 05:41 PM, Olivier MATZ wrote: >>>> Hi Santosh, >>>> >>>> On Tue, Aug 15, 2017 at 01:37:17PM +0530, Santosh Shukla wrote: >>>>> Now that dpdk supports more than one mempool drivers and >>>>> each mempool driver works best for specific PMD, example: >>>>> - sw ring based mempool for Intel PMD drivers >>>>> - dpaa2 HW mempool manager for dpaa2 PMD driver. >>>>> - fpa HW mempool manager for Octeontx PMD driver. >>>>> >>>>> Application like to know `preferred mempool vs PMD driver` >>>>> information in advance before port setup. >>>>> >>>>> Introducing rte_eth_dev_get_preferred_pool_ops() API, >>>>> which allows PMD driver to advertise their pool capability to application. >>>>> >>>>> Application side programing sequence would be: >>>>> >>>>> char pref_mempool[RTE_MEMPOOL_OPS_NAMESIZE]; >>>>> rte_eth_dev_get_preferred_pool_ops(ethdev_port_id, pref_mempoolx /*out*/); >>>>> rte_mempool_create_empty(); >>>>> rte_mempool_set_ops_byname( , pref_memppol, ); >>>>> rte_mempool_populate_default(); >>>>> >>>>> Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com> >>>>> --- >>>>> v2 --> v3: >>>>> - Updated version.map entry to DPDK_v17.11. >>>>> >>>>> v1 --> v2: >>>>> - Renamed _get_preferred_pool to _get_preferred_pool_ops(). 
>>>>> Per v1 review feedback, Olivier suggested to rename api >>>>> to rte_eth_dev_pool_ops_supported(), considering that 2nd param >>>>> for that api will return pool handle 'priority' for that port. >>>>> However, per v1 [1], we're opting for approach 1) where >>>>> ethdev API returns _preferred_ pool handle to application and Its upto >>>>> application to decide on policy - whether application wants to create >>>>> pool with received preferred pool handle or not. For more discussion details >>>>> on this topic refer [1]. >>>> Well, I still think it would be more flexible to have an API like >>>> rte_eth_dev_pool_ops_supported(uint8_t port_id, const char *pool) >>>> >>>> It supports the easy case (= one preferred mempool) without much pain, >>>> and provides a more precise manner to describe what is supported or not >>>> by the driver. Example: "pmd_foo" prefers "mempool_foo" (best perf), but >>>> also supporst "mempool_stack" and "mempool_ring", but "mempool_bar" >>>> won't work at all. >>>> >>>> Having only one preferred pool_ops also prevents from smoothly renaming >>>> a pool (supporting both during some time) or to have 2 names for >>>> different variants of the same pool_ops (ex: ring_mp_mc, ring_sp_sc). >>>> >>>> But if the users (I guess at least Cavium and NXP) are happy with >>>> what you propose, I'm fine with it. >>> preferred handle based upon real world use-case and same thing raised >>> at [1]. >>> >>> Hi Hemant, Are you ok with proposed preferred API? >>> >>> [1] http://dpdk.org/dev/patchwork/patch/24944/ >>> >> The current patch is ok, but it is better if you can extend it to provide list of preferred pools (in preference order) instead of just one pool. This will become helpful. I will avoid providing list of not-supported pools etc. >> >> A driver can have more than one preferred pool, depend on the resource availability one or other can be used. I am also proposing this in my proposal[1]. 
>> >> [1] http://dpdk.org/dev/patchwork/patch/26377/ >> > Ok, then sticking to Olivier api but slight change in param, > example: > /** * Get list of supported pools for a port * * @param port_id [in] * Pointer to port identifier of the device. * @param pools [out] * Pointer to the list of supported pools for that port. * Returns with array of pool ops name handler of size * RTE_MEMPOOL_OPS_NAMESIZE. * @return * >=0: Success; PMD has updated supported pool list. * <0: Failure; */ int rte_eth_dev_pools_ops_supported(uint8_t port_id, char **pools) Hemant, Olivier: Does above api make sense? Pl. confirm. Thanks. Sorry for the font, resending proposed API: /** * Get list of supported pools for a port * @param port_id [in] * Pointer to port identifier of the device. * @param pools [out] * Pointer to the list of supported pools for that port. * Returns with array of pool ops name handler of size * RTE_MEMPOOL_OPS_NAMESIZE. * @return * >=0: Success; PMD has updated supported pool list. * <0: Failure; */ int rte_eth_dev_pools_ops_supported(uint8_t port_id, char **pools) Hemant, Olivier: Does above api make sense? Pl. confirm. Thanks. 
>> >>>>> --- a/lib/librte_ether/rte_ethdev.c >>>>> +++ b/lib/librte_ether/rte_ethdev.c >>>>> @@ -3409,3 +3409,21 @@ rte_eth_dev_adjust_nb_rx_tx_desc(uint8_t port_id, >>>>> >>>>> return 0; >>>>> } >>>>> + >>>>> +int >>>>> +rte_eth_dev_get_preferred_pool_ops(uint8_t port_id, char *pool) >>>>> +{ >>>>> + struct rte_eth_dev *dev; >>>>> + const char *tmp; >>>>> + >>>>> + RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV); >>>>> + >>>>> + dev = &rte_eth_devices[port_id]; >>>>> + >>>>> + if (*dev->dev_ops->get_preferred_pool_ops == NULL) { >>>>> + tmp = rte_eal_mbuf_default_mempool_ops(); >>>>> + snprintf(pool, RTE_MBUF_POOL_OPS_NAMESIZE, "%s", tmp); >>>>> + return 0; >>>>> + } >>>>> + return (*dev->dev_ops->get_preferred_pool_ops)(dev, pool); >>>>> +} >>>> I think adding the length of the pool buffer to the function arguments >>>> would be better: only documenting that the length is >>>> RTE_MBUF_POOL_OPS_NAMESIZE looks a bit weak to me, because if one day it >>>> changes to another value, the users of the function may not notice it >>>> (no ABI/API change). >>>> >>>> >>>> One more comment: it would be helpful to have one user of this API in >>>> the example apps or testpmd. >>> Yes. I will add in v3. Thanks. >>> >>>> Olivier >>> ^ permalink raw reply [flat|nested] 72+ messages in thread
* Re: [dpdk-dev] [PATCH v3 2/2] ethdev: allow pmd to advertise pool handle 2017-09-07 10:11 ` santosh @ 2017-09-07 11:08 ` Hemant Agrawal 2017-09-11 9:33 ` Olivier MATZ 0 siblings, 1 reply; 72+ messages in thread From: Hemant Agrawal @ 2017-09-07 11:08 UTC (permalink / raw) To: santosh, Olivier MATZ; +Cc: dev, thomas, jerin.jacob On 9/7/2017 3:41 PM, santosh wrote: > > > On Thursday 07 September 2017 03:36 PM, santosh wrote: >> Hi Hemant, >> >> >> On Thursday 07 September 2017 02:51 PM, Hemant Agrawal wrote: >>> On 9/4/2017 6:44 PM, santosh wrote: >>>> Hi Olivier, >>>> >>>> >>>> On Monday 04 September 2017 05:41 PM, Olivier MATZ wrote: >>>>> Hi Santosh, >>>>> >>>>> On Tue, Aug 15, 2017 at 01:37:17PM +0530, Santosh Shukla wrote: >>>>>> Now that dpdk supports more than one mempool drivers and >>>>>> each mempool driver works best for specific PMD, example: >>>>>> - sw ring based mempool for Intel PMD drivers >>>>>> - dpaa2 HW mempool manager for dpaa2 PMD driver. >>>>>> - fpa HW mempool manager for Octeontx PMD driver. >>>>>> >>>>>> Application like to know `preferred mempool vs PMD driver` >>>>>> information in advance before port setup. >>>>>> >>>>>> Introducing rte_eth_dev_get_preferred_pool_ops() API, >>>>>> which allows PMD driver to advertise their pool capability to application. >>>>>> >>>>>> Application side programing sequence would be: >>>>>> >>>>>> char pref_mempool[RTE_MEMPOOL_OPS_NAMESIZE]; >>>>>> rte_eth_dev_get_preferred_pool_ops(ethdev_port_id, pref_mempoolx /*out*/); >>>>>> rte_mempool_create_empty(); >>>>>> rte_mempool_set_ops_byname( , pref_memppol, ); >>>>>> rte_mempool_populate_default(); >>>>>> >>>>>> Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com> >>>>>> --- >>>>>> v2 --> v3: >>>>>> - Updated version.map entry to DPDK_v17.11. >>>>>> >>>>>> v1 --> v2: >>>>>> - Renamed _get_preferred_pool to _get_preferred_pool_ops(). 
>>>>>> Per v1 review feedback, Olivier suggested to rename api >>>>>> to rte_eth_dev_pool_ops_supported(), considering that 2nd param >>>>>> for that api will return pool handle 'priority' for that port. >>>>>> However, per v1 [1], we're opting for approach 1) where >>>>>> ethdev API returns _preferred_ pool handle to application and Its upto >>>>>> application to decide on policy - whether application wants to create >>>>>> pool with received preferred pool handle or not. For more discussion details >>>>>> on this topic refer [1]. >>>>> Well, I still think it would be more flexible to have an API like >>>>> rte_eth_dev_pool_ops_supported(uint8_t port_id, const char *pool) >>>>> >>>>> It supports the easy case (= one preferred mempool) without much pain, >>>>> and provides a more precise manner to describe what is supported or not >>>>> by the driver. Example: "pmd_foo" prefers "mempool_foo" (best perf), but >>>>> also supporst "mempool_stack" and "mempool_ring", but "mempool_bar" >>>>> won't work at all. >>>>> >>>>> Having only one preferred pool_ops also prevents from smoothly renaming >>>>> a pool (supporting both during some time) or to have 2 names for >>>>> different variants of the same pool_ops (ex: ring_mp_mc, ring_sp_sc). >>>>> >>>>> But if the users (I guess at least Cavium and NXP) are happy with >>>>> what you propose, I'm fine with it. >>>> preferred handle based upon real world use-case and same thing raised >>>> at [1]. >>>> >>>> Hi Hemant, Are you ok with proposed preferred API? >>>> >>>> [1] http://dpdk.org/dev/patchwork/patch/24944/ >>>> >>> The current patch is ok, but it is better if you can extend it to provide list of preferred pools (in preference order) instead of just one pool. This will become helpful. I will avoid providing list of not-supported pools etc. >>> >>> A driver can have more than one preferred pool, depend on the resource availability one or other can be used. I am also proposing this in my proposal[1]. 
>>>
>>> [1] http://dpdk.org/dev/patchwork/patch/26377/
>>>
>> Ok, then sticking to Olivier api but slight change in param,
>> example:
>>
>> /**
>>  * Get list of supported pools for a port
>>  *
>>  * @param port_id [in]
>>  *   Pointer to port identifier of the device.
>>  * @param pools [out]
>>  *   Pointer to the list of supported pools for that port.
>>  *   Returns with array of pool ops name handler of size
>>  *   RTE_MEMPOOL_OPS_NAMESIZE.
>>  * @return
>>  *   >=0: Success; PMD has updated supported pool list.
>>  *   <0: Failure;
>>  */
>> int rte_eth_dev_pools_ops_supported(uint8_t port_id, char **pools)
>>
>> Hemant, Olivier: Does above api make sense? Pl. confirm. Thanks.
>
> Sorry for the font, resending proposed API:
>
> /**
>  * Get list of supported pools for a port
>  * @param port_id [in]
>  *   Pointer to port identifier of the device.
>  * @param pools [out]
>  *   Pointer to the list of supported pools for that port.
>  *   Returns with array of pool ops name handler of size
>  *   RTE_MEMPOOL_OPS_NAMESIZE.
>  * @return
>  *   >=0: Success; PMD has updated supported pool list.
>  *   <0: Failure;
>  */
>
> int rte_eth_dev_pools_ops_supported(uint8_t port_id, char **pools)
>
> Hemant, Olivier: Does above api make sense? Pl. confirm. Thanks.
>
looks ok to me.
>>> >>>>>> --- a/lib/librte_ether/rte_ethdev.c >>>>>> +++ b/lib/librte_ether/rte_ethdev.c >>>>>> @@ -3409,3 +3409,21 @@ rte_eth_dev_adjust_nb_rx_tx_desc(uint8_t port_id, >>>>>> >>>>>> return 0; >>>>>> } >>>>>> + >>>>>> +int >>>>>> +rte_eth_dev_get_preferred_pool_ops(uint8_t port_id, char *pool) >>>>>> +{ >>>>>> + struct rte_eth_dev *dev; >>>>>> + const char *tmp; >>>>>> + >>>>>> + RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV); >>>>>> + >>>>>> + dev = &rte_eth_devices[port_id]; >>>>>> + >>>>>> + if (*dev->dev_ops->get_preferred_pool_ops == NULL) { >>>>>> + tmp = rte_eal_mbuf_default_mempool_ops(); >>>>>> + snprintf(pool, RTE_MBUF_POOL_OPS_NAMESIZE, "%s", tmp); >>>>>> + return 0; >>>>>> + } >>>>>> + return (*dev->dev_ops->get_preferred_pool_ops)(dev, pool); >>>>>> +} >>>>> I think adding the length of the pool buffer to the function arguments >>>>> would be better: only documenting that the length is >>>>> RTE_MBUF_POOL_OPS_NAMESIZE looks a bit weak to me, because if one day it >>>>> changes to another value, the users of the function may not notice it >>>>> (no ABI/API change). >>>>> >>>>> >>>>> One more comment: it would be helpful to have one user of this API in >>>>> the example apps or testpmd. >>>> Yes. I will add in v3. Thanks. >>>> >>>>> Olivier >>>> > > ^ permalink raw reply [flat|nested] 72+ messages in thread
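The `rte_eth_dev_pools_ops_supported(uint8_t port_id, char **pools)` proposal above leaves the allocation contract of `char **pools` implicit. A plausible reading, sketched below with an invented support list and a `POOL_OPS_NAMESIZE` stand-in, is that the caller supplies an array of fixed-size buffers and the PMD fills them:

```c
#include <stdint.h>
#include <stdio.h>

#define POOL_OPS_NAMESIZE 32	/* stand-in for RTE_MEMPOOL_OPS_NAMESIZE */

/* Toy version of the proposed list API, assuming the caller provides an
 * array of POOL_OPS_NAMESIZE-byte buffers and passes its capacity. The
 * per-port support list is invented for illustration. Returns the number
 * of names written, i.e. the ">=0: Success" case of the proposal. */
static int pools_ops_supported(uint8_t port_id, char **pools, int max)
{
	static const char *supported[] = { "octeontx_fpavf", "ring_mp_mc" };
	int n = (int)(sizeof(supported) / sizeof(supported[0]));
	int i;

	(void)port_id; /* a real PMD would consult per-port capabilities */
	if (n > max)
		n = max;
	for (i = 0; i < n; i++)
		snprintf(pools[i], POOL_OPS_NAMESIZE, "%s", supported[i]);
	return n;
}
```

Note that the caller has to know the buffer geometry out of band, which is part of why the int-returning variant discussed later in the thread is easier to consume.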
* Re: [dpdk-dev] [PATCH v3 2/2] ethdev: allow pmd to advertise pool handle 2017-09-07 11:08 ` Hemant Agrawal @ 2017-09-11 9:33 ` Olivier MATZ 2017-09-11 12:40 ` santosh 0 siblings, 1 reply; 72+ messages in thread From: Olivier MATZ @ 2017-09-11 9:33 UTC (permalink / raw) To: Hemant Agrawal; +Cc: santosh, dev, thomas, jerin.jacob On Thu, Sep 07, 2017 at 04:38:39PM +0530, Hemant Agrawal wrote: > On 9/7/2017 3:41 PM, santosh wrote: > > > > > > On Thursday 07 September 2017 03:36 PM, santosh wrote: > > > Hi Hemant, > > > > > > > > > On Thursday 07 September 2017 02:51 PM, Hemant Agrawal wrote: > > > > On 9/4/2017 6:44 PM, santosh wrote: > > > > > Hi Olivier, > > > > > > > > > > > > > > > On Monday 04 September 2017 05:41 PM, Olivier MATZ wrote: > > > > > > Hi Santosh, > > > > > > > > > > > > On Tue, Aug 15, 2017 at 01:37:17PM +0530, Santosh Shukla wrote: > > > > > > > Now that dpdk supports more than one mempool drivers and > > > > > > > each mempool driver works best for specific PMD, example: > > > > > > > - sw ring based mempool for Intel PMD drivers > > > > > > > - dpaa2 HW mempool manager for dpaa2 PMD driver. > > > > > > > - fpa HW mempool manager for Octeontx PMD driver. > > > > > > > > > > > > > > Application like to know `preferred mempool vs PMD driver` > > > > > > > information in advance before port setup. > > > > > > > > > > > > > > Introducing rte_eth_dev_get_preferred_pool_ops() API, > > > > > > > which allows PMD driver to advertise their pool capability to application. 
> > > > > > > > > > > > > > Application side programing sequence would be: > > > > > > > > > > > > > > char pref_mempool[RTE_MEMPOOL_OPS_NAMESIZE]; > > > > > > > rte_eth_dev_get_preferred_pool_ops(ethdev_port_id, pref_mempoolx /*out*/); > > > > > > > rte_mempool_create_empty(); > > > > > > > rte_mempool_set_ops_byname( , pref_memppol, ); > > > > > > > rte_mempool_populate_default(); > > > > > > > > > > > > > > Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com> > > > > > > > --- > > > > > > > v2 --> v3: > > > > > > > - Updated version.map entry to DPDK_v17.11. > > > > > > > > > > > > > > v1 --> v2: > > > > > > > - Renamed _get_preferred_pool to _get_preferred_pool_ops(). > > > > > > > Per v1 review feedback, Olivier suggested to rename api > > > > > > > to rte_eth_dev_pool_ops_supported(), considering that 2nd param > > > > > > > for that api will return pool handle 'priority' for that port. > > > > > > > However, per v1 [1], we're opting for approach 1) where > > > > > > > ethdev API returns _preferred_ pool handle to application and Its upto > > > > > > > application to decide on policy - whether application wants to create > > > > > > > pool with received preferred pool handle or not. For more discussion details > > > > > > > on this topic refer [1]. > > > > > > Well, I still think it would be more flexible to have an API like > > > > > > rte_eth_dev_pool_ops_supported(uint8_t port_id, const char *pool) > > > > > > > > > > > > It supports the easy case (= one preferred mempool) without much pain, > > > > > > and provides a more precise manner to describe what is supported or not > > > > > > by the driver. Example: "pmd_foo" prefers "mempool_foo" (best perf), but > > > > > > also supporst "mempool_stack" and "mempool_ring", but "mempool_bar" > > > > > > won't work at all. 
> > > > > > > > > > > > Having only one preferred pool_ops also prevents from smoothly renaming > > > > > > a pool (supporting both during some time) or to have 2 names for > > > > > > different variants of the same pool_ops (ex: ring_mp_mc, ring_sp_sc). > > > > > > > > > > > > But if the users (I guess at least Cavium and NXP) are happy with > > > > > > what you propose, I'm fine with it. > > > > > preferred handle based upon real world use-case and same thing raised > > > > > at [1]. > > > > > > > > > > Hi Hemant, Are you ok with proposed preferred API? > > > > > > > > > > [1] http://dpdk.org/dev/patchwork/patch/24944/ > > > > > > > > > The current patch is ok, but it is better if you can extend it to provide list of preferred pools (in preference order) instead of just one pool. This will become helpful. I will avoid providing list of not-supported pools etc. > > > > > > > > A driver can have more than one preferred pool, depend on the resource availability one or other can be used. I am also proposing this in my proposal[1]. > > > > > > > > [1] http://dpdk.org/dev/patchwork/patch/26377/ > > > > > > > Ok, then sticking to Olivier api but slight change in param, > > > example: > > > /** * Get list of supported pools for a port * * @param port_id [in] * Pointer to port identifier of the device. * @param pools [out] * Pointer to the list of supported pools for that port. * Returns with array of pool ops name handler of size * RTE_MEMPOOL_OPS_NAMESIZE. * @return * >=0: Success; PMD has updated supported pool list. * <0: Failure; */ int rte_eth_dev_pools_ops_supported(uint8_t port_id, char **pools) Hemant, Olivier: Does above api make sense? Pl. confirm. Thanks. > > > > Sorry for the font, resending proposed API: > > > > /** > > * Get list of supported pools for a port > > * @param port_id [in] > > * Pointer to port identifier of the device. > > * @param pools [out] > > * Pointer to the list of supported pools for that port. 
> > * Returns with array of pool ops name handler of size
> > * RTE_MEMPOOL_OPS_NAMESIZE.
> > * @return
> > * >=0: Success; PMD has updated supported pool list.
> > * <0: Failure;
> > */
> >
> > int rte_eth_dev_pools_ops_supported(uint8_t port_id, char **pools)
> >
> > Hemant, Olivier: Does above api make sense? Pl. confirm. Thanks.
> >
>
> looks ok to me.

I think that returning a list is harder to use in an application than an
API that just returns an int (priority):

  int rte_eth_dev_pool_ops_supported(uint8_t port_id, const char *pool)

The possible returned values are:
  ENOTSUP: mempool ops not supported
  < 0: any other error
  0: best mempool ops choice for this pmd
  1: this mempool ops are supported

Let's take an example. Our application wants to select ops that
will match all pmds. The pseudo code would be like this:

  best_score = -1
  best_ops = NULL
  for ops in mempool_ops:
      score = 0
      for p in ports:
          ret = rte_eth_dev_pool_ops_supported(p, ops.name)
          if ret < 0:
              score = -1
              break
          score += ret
      if score == -1:
          continue
      if best_score == -1 || score < best_score:
          best_score = score
          best_ops = ops
  if best_score == -1:
      print "no matching mempool ops"
  else:
      print "selected ops: %s", best_ops.name

You can do the exercise with the API you are proposing, but I think
it would be harder.

Olivier

^ permalink raw reply	[flat|nested] 72+ messages in thread
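Transcribed into C, the selection loop from that pseudo code looks like the sketch below. The two-port support matrix is hypothetical and stands in for the real `rte_eth_dev_pool_ops_supported()` call; only the scoring logic mirrors the proposal:

```c
#include <stddef.h>
#include <string.h>

/* Hypothetical stand-in for rte_eth_dev_pool_ops_supported():
 * port 0 prefers "ring_mp_mc" and also supports "stack";
 * port 1 prefers "stack" and also supports "ring_mp_mc".
 * Return values mirror the proposal: 0 = best, 1 = supported, -1 = not. */
static int pool_ops_supported(int port_id, const char *ops)
{
	if (strcmp(ops, "ring_mp_mc") == 0)
		return port_id == 0 ? 0 : 1;
	if (strcmp(ops, "stack") == 0)
		return port_id == 1 ? 0 : 1;
	return -1;
}

/* The selection loop from the pseudo code: keep the ops with the lowest
 * accumulated score over all ports; NULL if every ops is rejected by
 * at least one port. */
static const char *select_mempool_ops(const char **ops_names, int nb_ops,
				      int nb_ports)
{
	const char *best_ops = NULL;
	int best_score = -1;
	int i, p;

	for (i = 0; i < nb_ops; i++) {
		int score = 0;

		for (p = 0; p < nb_ports; p++) {
			int ret = pool_ops_supported(p, ops_names[i]);

			if (ret < 0) {
				score = -1;
				break;
			}
			score += ret;
		}
		if (score == -1)
			continue;
		if (best_score == -1 || score < best_score) {
			best_score = score;
			best_ops = ops_names[i];
		}
	}
	return best_ops;
}
```

With the matrix above, "ring_mp_mc" and "stack" both score 1 across the two ports, so the first one examined wins; an ops name rejected by any port is excluded entirely.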
* Re: [dpdk-dev] [PATCH v3 2/2] ethdev: allow pmd to advertise pool handle 2017-09-11 9:33 ` Olivier MATZ @ 2017-09-11 12:40 ` santosh 2017-09-11 13:00 ` Olivier MATZ 0 siblings, 1 reply; 72+ messages in thread From: santosh @ 2017-09-11 12:40 UTC (permalink / raw) To: Olivier MATZ, Hemant Agrawal; +Cc: dev, thomas, jerin.jacob Hi Olivier, On Monday 11 September 2017 03:03 PM, Olivier MATZ wrote: > On Thu, Sep 07, 2017 at 04:38:39PM +0530, Hemant Agrawal wrote: >> On 9/7/2017 3:41 PM, santosh wrote: >>> Sorry for the font, resending proposed API: >>> >>> /** >>> * Get list of supported pools for a port >>> * @param port_id [in] >>> * Pointer to port identifier of the device. >>> * @param pools [out] >>> * Pointer to the list of supported pools for that port. >>> * Returns with array of pool ops name handler of size >>> * RTE_MEMPOOL_OPS_NAMESIZE. >>> * @return >>> * >=0: Success; PMD has updated supported pool list. >>> * <0: Failure; >>> */ >>> >>> int rte_eth_dev_pools_ops_supported(uint8_t port_id, char **pools) >>> >>> Hemant, Olivier: Does above api make sense? Pl. confirm. Thanks. >>> >> looks ok to me. > I think that returning a list is harder to use in an application, instead of an > api that just returns an int (priority): > > int rte_eth_dev_pool_ops_supported(uint8_t port_id, const char *pool) > > The possible returned values are: > ENOTSUP: mempool ops not supported > < 0: any other error > 0: best mempool ops choice for this pmd > 1: this mempool ops are supported > > Let's take an example. Our application wants to select ops that > will match all pmds. 
The pseudo code would be like this: > > best_score = -1 > best_ops = NULL > for ops in mempool_ops: > score = 0 > for p in ports: > ret = rte_eth_dev_pools_ops_supported(p, ops.name) > if ret < 0: > score = -1 > break > score += ret > if score == -1: > continue > if best_score == -1 || score < best_score: > best_score = score > best_ops = ops > if best_score == -1: > print "no matching mempool ops" > else: > print "selected ops: %s", best_ops.name > > > You can do the exercise with the API you are proposing, but I think > it would be harder. > > Olivier Proposed model looks okay and I'll implement _pool_ops_supported() api. But I cant promise the testpmd related changes with this series, Since rc1 is closing so let the api go in -rc1 release and testpmd changes will follow then. is it ok with you? Thanks. ^ permalink raw reply [flat|nested] 72+ messages in thread
* Re: [dpdk-dev] [PATCH v3 2/2] ethdev: allow pmd to advertise pool handle 2017-09-11 12:40 ` santosh @ 2017-09-11 13:00 ` Olivier MATZ 0 siblings, 0 replies; 72+ messages in thread From: Olivier MATZ @ 2017-09-11 13:00 UTC (permalink / raw) To: santosh; +Cc: Hemant Agrawal, dev, thomas, jerin.jacob On Mon, Sep 11, 2017 at 06:10:36PM +0530, santosh wrote: > Hi Olivier, > > > On Monday 11 September 2017 03:03 PM, Olivier MATZ wrote: > > On Thu, Sep 07, 2017 at 04:38:39PM +0530, Hemant Agrawal wrote: > >> On 9/7/2017 3:41 PM, santosh wrote: > >>> Sorry for the font, resending proposed API: > >>> > >>> /** > >>> * Get list of supported pools for a port > >>> * @param port_id [in] > >>> * Pointer to port identifier of the device. > >>> * @param pools [out] > >>> * Pointer to the list of supported pools for that port. > >>> * Returns with array of pool ops name handler of size > >>> * RTE_MEMPOOL_OPS_NAMESIZE. > >>> * @return > >>> * >=0: Success; PMD has updated supported pool list. > >>> * <0: Failure; > >>> */ > >>> > >>> int rte_eth_dev_pools_ops_supported(uint8_t port_id, char **pools) > >>> > >>> Hemant, Olivier: Does above api make sense? Pl. confirm. Thanks. > >>> > >> looks ok to me. > > I think that returning a list is harder to use in an application, instead of an > > api that just returns an int (priority): > > > > int rte_eth_dev_pool_ops_supported(uint8_t port_id, const char *pool) > > > > The possible returned values are: > > ENOTSUP: mempool ops not supported > > < 0: any other error > > 0: best mempool ops choice for this pmd > > 1: this mempool ops are supported > > > > Let's take an example. Our application wants to select ops that > > will match all pmds. 
The pseudo code would be like this: > > > > best_score = -1 > > best_ops = NULL > > for ops in mempool_ops: > > score = 0 > > for p in ports: > > ret = rte_eth_dev_pools_ops_supported(p, ops.name) > > if ret < 0: > > score = -1 > > break > > score += ret > > if score == -1: > > continue > > if best_score == -1 || score < best_score: > > best_score = score > > best_ops = ops > > if best_score == -1: > > print "no matching mempool ops" > > else: > > print "selected ops: %s", best_ops.name > > > > > > You can do the exercise with the API you are proposing, but I think > > it would be harder. > > > > Olivier > > Proposed model looks okay and I'll implement _pool_ops_supported() api. > But I cant promise the testpmd related changes with this series, > Since rc1 is closing so let the api go in -rc1 release and testpmd changes will > follow then. is it ok with you? It's ok for me. ^ permalink raw reply [flat|nested] 72+ messages in thread
* Re: [dpdk-dev] [PATCH v3 0/2] Dynamically configure mempool handle 2017-08-15 8:07 ` [dpdk-dev] [PATCH v3 0/2] Dynamically configure mempool handle Santosh Shukla 2017-08-15 8:07 ` [dpdk-dev] [PATCH v3 1/2] eal: allow user to override default pool handle Santosh Shukla 2017-08-15 8:07 ` [dpdk-dev] [PATCH v3 2/2] ethdev: allow pmd to advertise " Santosh Shukla @ 2017-09-04 9:41 ` Sergio Gonzalez Monroy 2017-09-04 13:20 ` santosh 2017-09-04 13:34 ` Olivier MATZ 2017-09-11 15:18 ` [dpdk-dev] [PATCH v4 " Santosh Shukla 3 siblings, 2 replies; 72+ messages in thread From: Sergio Gonzalez Monroy @ 2017-09-04 9:41 UTC (permalink / raw) To: Santosh Shukla, olivier.matz, dev; +Cc: thomas, jerin.jacob, hemant.agrawal On 15/08/2017 09:07, Santosh Shukla wrote: > v3: > - Rebased on top of v17.11-rc0 > - Updated version.map entry to v17.11. > > v2: > > DPDK has support for hw and sw mempool. Those mempool > can work optimal for specific PMD's. > Example: > sw ring based PMD for Intel NICs. > HW mempool manager dpaa2 for dpaa2 PMD. > HW mempool manager fpa for octeontx PMD. > > There could be a use-case where different vendor NIC's used > on the same platform and User like to configure mempool in such a way that > each of those NIC's should use their preferred mempool(For performance reasons). > > Current mempool infrastrucure don't support such use-case. > > This patchset tries to address that problem in 2 steps: > > 0) Allowing user to dynamically configure mempool handle by > passing pool handle as eal arg to `--mbuf-pool-ops=<pool-handle>`. > > 1) Allowing PMD's to advertise their preferred pool to an application. > From an application point of view: > - The application must ask PMD about their preferred pool. > - PMD to respond back with preferred pool otherwise > CONFIG_RTE_MEMPOOL_DEFAULT_OPS will be used for that PMD. 
> > * Application programming sequencing would be > char pref_mempool[RTE_MEMPOOL_OPS_NAMESIZE]; > rte_eth_dev_get_preferred_pool_ops(ethdev_port_id, pref_mempool /* out */); > rte_mempool_create_empty(); > rte_mempool_set_ops_byname( , pref_memppol, ); > rte_mempool_populate_default(); What about introducing an API like: rte_pktmbuf_poll_create_with_ops (..., ops_name, config_pool); I think that API would help for the case the application wants an mbuf pool with ie. stack handler. Sure we can do the empty/set_ops/populate sequence, but the only thing we want to change from default pktmbuf_pool_create API is the pool handler. Application just needs to decide the ops handler to use, either default or one suggested by PMD? I think ideally we would have similar APIs: - rte_mempool_create_with_ops (...) - rte_memppol_xmem_create_with_ops (...) Thanks, Sergio > Change History: > v2 --> v3: > - Changed version.map from DPDK_17.08 to DPDK_17.11. > > v1 --> v2: > - Renamed rte_eal_get_mempool_name to rte_eal_mbuf_default_mempool_ops(). > (suggested by Olivier) > - Renamed _get_preferred_pool to _get_preferred_pool_ops(). > - Updated API description and changes return val from -EINVAL to -ENOTSUP. > (Suggested by Olivier) > * Specific details on v1-->v2 change summary described in each patch. > > Checkpatch status: > - None. > > Work History: > * Refer [1] for v1. > > Thanks. 
> > [1] http://dpdk.org/ml/archives/dev/2017-June/067022.html > > > Santosh Shukla (2): > eal: allow user to override default pool handle > ethdev: allow pmd to advertise pool handle > > lib/librte_eal/bsdapp/eal/eal.c | 17 +++++++++++++++++ > lib/librte_eal/bsdapp/eal/rte_eal_version.map | 7 +++++++ > lib/librte_eal/common/eal_common_options.c | 5 +++++ > lib/librte_eal/common/eal_internal_cfg.h | 1 + > lib/librte_eal/common/eal_options.h | 2 ++ > lib/librte_eal/common/include/rte_eal.h | 11 +++++++++++ > lib/librte_eal/linuxapp/eal/eal.c | 18 ++++++++++++++++++ > lib/librte_eal/linuxapp/eal/rte_eal_version.map | 7 +++++++ > lib/librte_ether/rte_ethdev.c | 18 ++++++++++++++++++ > lib/librte_ether/rte_ethdev.h | 21 +++++++++++++++++++++ > lib/librte_ether/rte_ether_version.map | 7 +++++++ > lib/librte_mbuf/rte_mbuf.c | 5 +++-- > 12 files changed, 117 insertions(+), 2 deletions(-) > ^ permalink raw reply [flat|nested] 72+ messages in thread
* Re: [dpdk-dev] [PATCH v3 0/2] Dynamically configure mempool handle 2017-09-04 9:41 ` [dpdk-dev] [PATCH v3 0/2] Dynamically configure mempool handle Sergio Gonzalez Monroy @ 2017-09-04 13:20 ` santosh 2017-09-04 13:34 ` Olivier MATZ 1 sibling, 0 replies; 72+ messages in thread From: santosh @ 2017-09-04 13:20 UTC (permalink / raw) To: Sergio Gonzalez Monroy, olivier.matz, dev Cc: thomas, jerin.jacob, hemant.agrawal Hi Sergio, On Monday 04 September 2017 03:11 PM, Sergio Gonzalez Monroy wrote: > On 15/08/2017 09:07, Santosh Shukla wrote: >> v3: >> - Rebased on top of v17.11-rc0 >> - Updated version.map entry to v17.11. >> >> v2: >> >> DPDK has support for hw and sw mempool. Those mempool >> can work optimal for specific PMD's. >> Example: >> sw ring based PMD for Intel NICs. >> HW mempool manager dpaa2 for dpaa2 PMD. >> HW mempool manager fpa for octeontx PMD. >> >> There could be a use-case where different vendor NIC's used >> on the same platform and User like to configure mempool in such a way that >> each of those NIC's should use their preferred mempool(For performance reasons). >> >> Current mempool infrastrucure don't support such use-case. >> >> This patchset tries to address that problem in 2 steps: >> >> 0) Allowing user to dynamically configure mempool handle by >> passing pool handle as eal arg to `--mbuf-pool-ops=<pool-handle>`. >> >> 1) Allowing PMD's to advertise their preferred pool to an application. >> From an application point of view: >> - The application must ask PMD about their preferred pool. >> - PMD to respond back with preferred pool otherwise >> CONFIG_RTE_MEMPOOL_DEFAULT_OPS will be used for that PMD. 
>> >> * Application programming sequencing would be >> char pref_mempool[RTE_MEMPOOL_OPS_NAMESIZE]; >> rte_eth_dev_get_preferred_pool_ops(ethdev_port_id, pref_mempool /* out */); >> rte_mempool_create_empty(); >> rte_mempool_set_ops_byname( , pref_memppol, ); >> rte_mempool_populate_default(); > > What about introducing an API like: > rte_pktmbuf_poll_create_with_ops (..., ops_name, config_pool); > > I think that API would help for the case the application wants an mbuf pool with ie. stack handler. > Sure we can do the empty/set_ops/populate sequence, but the only thing we want to change from default pktmbuf_pool_create API is the pool handler. > > Application just needs to decide the ops handler to use, either default or one suggested by PMD? > > I think ideally we would have similar APIs: > - rte_mempool_create_with_ops (...) > - rte_memppol_xmem_create_with_ops (...) > Olivier, What do you think? No strong opion but Can we please close/agree on API? I don't want to keep rebasing and pushing new variant in every other version, This whole series and its dependent series ie..octeontx PMD/FPA PMD taregeted for -rc1-v17.11 release therefore if we close on api and get it reviewed by this week, would really appreciate. Thanks. > > Thanks, > Sergio > >> Change History: >> v2 --> v3: >> - Changed version.map from DPDK_17.08 to DPDK_17.11. >> >> v1 --> v2: >> - Renamed rte_eal_get_mempool_name to rte_eal_mbuf_default_mempool_ops(). >> (suggested by Olivier) >> - Renamed _get_preferred_pool to _get_preferred_pool_ops(). >> - Updated API description and changes return val from -EINVAL to -ENOTSUP. >> (Suggested by Olivier) >> * Specific details on v1-->v2 change summary described in each patch. >> >> Checkpatch status: >> - None. >> >> Work History: >> * Refer [1] for v1. >> >> Thanks. 
>> >> [1] http://dpdk.org/ml/archives/dev/2017-June/067022.html >> >> >> Santosh Shukla (2): >> eal: allow user to override default pool handle >> ethdev: allow pmd to advertise pool handle >> >> lib/librte_eal/bsdapp/eal/eal.c | 17 +++++++++++++++++ >> lib/librte_eal/bsdapp/eal/rte_eal_version.map | 7 +++++++ >> lib/librte_eal/common/eal_common_options.c | 5 +++++ >> lib/librte_eal/common/eal_internal_cfg.h | 1 + >> lib/librte_eal/common/eal_options.h | 2 ++ >> lib/librte_eal/common/include/rte_eal.h | 11 +++++++++++ >> lib/librte_eal/linuxapp/eal/eal.c | 18 ++++++++++++++++++ >> lib/librte_eal/linuxapp/eal/rte_eal_version.map | 7 +++++++ >> lib/librte_ether/rte_ethdev.c | 18 ++++++++++++++++++ >> lib/librte_ether/rte_ethdev.h | 21 +++++++++++++++++++++ >> lib/librte_ether/rte_ether_version.map | 7 +++++++ >> lib/librte_mbuf/rte_mbuf.c | 5 +++-- >> 12 files changed, 117 insertions(+), 2 deletions(-) >> > ^ permalink raw reply [flat|nested] 72+ messages in thread
* Re: [dpdk-dev] [PATCH v3 0/2] Dynamically configure mempool handle 2017-09-04 9:41 ` [dpdk-dev] [PATCH v3 0/2] Dynamically configure mempool handle Sergio Gonzalez Monroy 2017-09-04 13:20 ` santosh @ 2017-09-04 13:34 ` Olivier MATZ 2017-09-04 14:24 ` Sergio Gonzalez Monroy 1 sibling, 1 reply; 72+ messages in thread From: Olivier MATZ @ 2017-09-04 13:34 UTC (permalink / raw) To: Sergio Gonzalez Monroy Cc: Santosh Shukla, dev, thomas, jerin.jacob, hemant.agrawal Hi Sergio, On Mon, Sep 04, 2017 at 10:41:56AM +0100, Sergio Gonzalez Monroy wrote: > On 15/08/2017 09:07, Santosh Shukla wrote: > > v3: > > - Rebased on top of v17.11-rc0 > > - Updated version.map entry to v17.11. > > > > v2: > > > > DPDK has support for hw and sw mempool. Those mempool > > can work optimal for specific PMD's. > > Example: > > sw ring based PMD for Intel NICs. > > HW mempool manager dpaa2 for dpaa2 PMD. > > HW mempool manager fpa for octeontx PMD. > > > > There could be a use-case where different vendor NIC's used > > on the same platform and User like to configure mempool in such a way that > > each of those NIC's should use their preferred mempool(For performance reasons). > > > > Current mempool infrastrucure don't support such use-case. > > > > This patchset tries to address that problem in 2 steps: > > > > 0) Allowing user to dynamically configure mempool handle by > > passing pool handle as eal arg to `--mbuf-pool-ops=<pool-handle>`. > > > > 1) Allowing PMD's to advertise their preferred pool to an application. > > From an application point of view: > > - The application must ask PMD about their preferred pool. > > - PMD to respond back with preferred pool otherwise > > CONFIG_RTE_MEMPOOL_DEFAULT_OPS will be used for that PMD. 
> > > > * Application programming sequencing would be > > char pref_mempool[RTE_MEMPOOL_OPS_NAMESIZE]; > > rte_eth_dev_get_preferred_pool_ops(ethdev_port_id, pref_mempool /* out */); > > rte_mempool_create_empty(); > > rte_mempool_set_ops_byname( , pref_memppol, ); > > rte_mempool_populate_default(); > > What about introducing an API like: > rte_pktmbuf_poll_create_with_ops (..., ops_name, config_pool); > > I think that API would help for the case the application wants an mbuf pool > with ie. stack handler. > Sure we can do the empty/set_ops/populate sequence, but the only thing we > want to change from default pktmbuf_pool_create API is the pool handler. > > Application just needs to decide the ops handler to use, either default or > one suggested by PMD? > > I think ideally we would have similar APIs: > - rte_mempool_create_with_ops (...) > - rte_memppol_xmem_create_with_ops (...) Today, we may only want to change the mempool handler, but if we need to change something else tomorrow, we would need to add another parameter again, breaking the ABI. If we pass a config structure, adding a new field in it would also break the ABI, except if the structure is opaque, with accessors. These accessors would be functions (ex: mempool_cfg_new, mempool_cfg_set_pool_ops, ...). This is not so much different than what we have now. The advantage I can see of working on a config structure instead of directly on a mempool is that the api can be reused to build a default config. That said, I think it's quite orthogonal to this patch since we still require the ethdev api. Olivier ^ permalink raw reply [flat|nested] 72+ messages in thread
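The opaque-config idea Olivier sketches (accessor functions such as `mempool_cfg_new`, `mempool_cfg_set_pool_ops`) could look roughly like the following. None of these names exist in DPDK; in a real library the struct definition would live in a private `.c` file so applications only ever hold a pointer, which is what lets fields be added later without breaking the ABI:

```c
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical sketch of an opaque mempool config with accessors.
 * Applications see only "struct mempool_cfg;" in the public header;
 * the definition below would be private to the library. */
struct mempool_cfg {
	char ops_name[32];
	unsigned int cache_size;
};

struct mempool_cfg *mempool_cfg_new(void)
{
	struct mempool_cfg *cfg = calloc(1, sizeof(*cfg));

	if (cfg != NULL)	/* start from a built-in default handler */
		snprintf(cfg->ops_name, sizeof(cfg->ops_name), "%s",
			 "ring_mp_mc");
	return cfg;
}

int mempool_cfg_set_pool_ops(struct mempool_cfg *cfg, const char *name)
{
	int ret = snprintf(cfg->ops_name, sizeof(cfg->ops_name), "%s", name);

	/* reject names that would be truncated */
	return (ret >= 0 && (size_t)ret < sizeof(cfg->ops_name)) ? 0 : -1;
}

const char *mempool_cfg_pool_ops(const struct mempool_cfg *cfg)
{
	return cfg->ops_name;
}

void mempool_cfg_free(struct mempool_cfg *cfg)
{
	free(cfg);
}
```

A new option then becomes one more accessor pair rather than a new function parameter, which is the ABI argument being made above.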
* Re: [dpdk-dev] [PATCH v3 0/2] Dynamically configure mempool handle 2017-09-04 13:34 ` Olivier MATZ @ 2017-09-04 14:24 ` Sergio Gonzalez Monroy 2017-09-05 7:47 ` Olivier MATZ 0 siblings, 1 reply; 72+ messages in thread From: Sergio Gonzalez Monroy @ 2017-09-04 14:24 UTC (permalink / raw) To: Olivier MATZ; +Cc: Santosh Shukla, dev, thomas, jerin.jacob, hemant.agrawal Hi Olivier, On 04/09/2017 14:34, Olivier MATZ wrote: > Hi Sergio, > > On Mon, Sep 04, 2017 at 10:41:56AM +0100, Sergio Gonzalez Monroy wrote: >> On 15/08/2017 09:07, Santosh Shukla wrote: >>> v3: >>> - Rebased on top of v17.11-rc0 >>> - Updated version.map entry to v17.11. >>> >>> v2: >>> >>> DPDK has support for hw and sw mempool. Those mempool >>> can work optimal for specific PMD's. >>> Example: >>> sw ring based PMD for Intel NICs. >>> HW mempool manager dpaa2 for dpaa2 PMD. >>> HW mempool manager fpa for octeontx PMD. >>> >>> There could be a use-case where different vendor NIC's used >>> on the same platform and User like to configure mempool in such a way that >>> each of those NIC's should use their preferred mempool(For performance reasons). >>> >>> Current mempool infrastrucure don't support such use-case. >>> >>> This patchset tries to address that problem in 2 steps: >>> >>> 0) Allowing user to dynamically configure mempool handle by >>> passing pool handle as eal arg to `--mbuf-pool-ops=<pool-handle>`. >>> >>> 1) Allowing PMD's to advertise their preferred pool to an application. >>> From an application point of view: >>> - The application must ask PMD about their preferred pool. >>> - PMD to respond back with preferred pool otherwise >>> CONFIG_RTE_MEMPOOL_DEFAULT_OPS will be used for that PMD. 
>>> >>> * Application programming sequencing would be >>> char pref_mempool[RTE_MEMPOOL_OPS_NAMESIZE]; >>> rte_eth_dev_get_preferred_pool_ops(ethdev_port_id, pref_mempool /* out */); >>> rte_mempool_create_empty(); >>> rte_mempool_set_ops_byname( , pref_memppol, ); >>> rte_mempool_populate_default(); >> What about introducing an API like: >> rte_pktmbuf_poll_create_with_ops (..., ops_name, config_pool); >> >> I think that API would help for the case the application wants an mbuf pool >> with ie. stack handler. >> Sure we can do the empty/set_ops/populate sequence, but the only thing we >> want to change from default pktmbuf_pool_create API is the pool handler. >> >> Application just needs to decide the ops handler to use, either default or >> one suggested by PMD? >> >> I think ideally we would have similar APIs: >> - rte_mempool_create_with_ops (...) >> - rte_memppol_xmem_create_with_ops (...) > Today, we may only want to change the mempool handler, but if we > need to change something else tomorrow, we would need to add another > parameter again, breaking the ABI. > > If we pass a config structure, adding a new field in it would also break the > ABI, except if the structure is opaque, with accessors. These accessors would be > functions (ex: mempool_cfg_new, mempool_cfg_set_pool_ops, ...). This is not so > much different than what we have now. > > The advantage I can see of working on a config structure instead of directly on > a mempool is that the api can be reused to build a default config. > > That said, I think it's quite orthogonal to this patch since we still require > the ethdev api. > > Olivier Fair enough. Just to double check that we are on the same page: - rte_mempool_create is just there for backwards compatibility and a sequence of create_empty -> set_ops (optional) ->init -> populate_default -> obj_iter (optional) is the recommended way of creating mempools. 
- if an application wants to use rte_mempool_xmem_create with a different
  pool handler, it needs to replicate the function and add the set_ops step.

Now if rte_pktmbuf_pool_create is still the preferred mechanism for
applications to create mbuf pools, wouldn't it make sense to offer the
option of either passing the ops_name as suggested before, or letting the
application just set a different pool handler? I understand that it is
either breaking API or introducing a new API, but is the solution to
basically "copy" the whole function in the application and add an optional
step (set_ops)?

With these patches we can:
- change the default pool handler with the EAL option, which does *not*
  allow pktmbuf_pool_create with different handlers per pool.
- check the PMDs' preferred/supported handlers, but then we need to
  implement the whole create sequence ourselves and cannot use
  pktmbuf_pool_create.

It looks to me then that any application that wants to use pool handlers
other than default/ring needs to implement its own create_pool with
set_ops.

Thanks,
Sergio

^ permalink raw reply	[flat|nested] 72+ messages in thread
* Re: [dpdk-dev] [PATCH v3 0/2] Dynamically configure mempool handle 2017-09-04 14:24 ` Sergio Gonzalez Monroy @ 2017-09-05 7:47 ` Olivier MATZ 2017-09-05 8:11 ` Jerin Jacob 0 siblings, 1 reply; 72+ messages in thread From: Olivier MATZ @ 2017-09-05 7:47 UTC (permalink / raw) To: Sergio Gonzalez Monroy Cc: Santosh Shukla, dev, thomas, jerin.jacob, hemant.agrawal On Mon, Sep 04, 2017 at 03:24:38PM +0100, Sergio Gonzalez Monroy wrote: > Hi Olivier, > > On 04/09/2017 14:34, Olivier MATZ wrote: > > Hi Sergio, > > > > On Mon, Sep 04, 2017 at 10:41:56AM +0100, Sergio Gonzalez Monroy wrote: > > > On 15/08/2017 09:07, Santosh Shukla wrote: > > > > * Application programming sequencing would be > > > > char pref_mempool[RTE_MEMPOOL_OPS_NAMESIZE]; > > > > rte_eth_dev_get_preferred_pool_ops(ethdev_port_id, pref_mempool /* out */); > > > > rte_mempool_create_empty(); > > > > rte_mempool_set_ops_byname( , pref_memppol, ); > > > > rte_mempool_populate_default(); > > > What about introducing an API like: > > > rte_pktmbuf_poll_create_with_ops (..., ops_name, config_pool); > > > > > > I think that API would help for the case the application wants an mbuf pool > > > with ie. stack handler. > > > Sure we can do the empty/set_ops/populate sequence, but the only thing we > > > want to change from default pktmbuf_pool_create API is the pool handler. > > > > > > Application just needs to decide the ops handler to use, either default or > > > one suggested by PMD? > > > > > > I think ideally we would have similar APIs: > > > - rte_mempool_create_with_ops (...) > > > - rte_memppol_xmem_create_with_ops (...) > > Today, we may only want to change the mempool handler, but if we > > need to change something else tomorrow, we would need to add another > > parameter again, breaking the ABI. > > > > If we pass a config structure, adding a new field in it would also break the > > ABI, except if the structure is opaque, with accessors. 
These accessors would be > > functions (ex: mempool_cfg_new, mempool_cfg_set_pool_ops, ...). This is not so > > much different than what we have now. > > > > The advantage I can see of working on a config structure instead of directly on > > a mempool is that the api can be reused to build a default config. > > > > That said, I think it's quite orthogonal to this patch since we still require > > the ethdev api. > > Fair enough. > > Just to double check that we are on the same page: > - rte_mempool_create is just there for backwards compatibility and a > sequence of create_empty -> set_ops (optional) ->init -> populate_default -> > obj_iter (optional) is the recommended way of creating mempools. Yes, I think rte_mempool_create() has too many arguments. > - if application wants to use rte_mempool_xmem_create with different pool > handler needs to replicate function and add set_ops step. Yes. And now that xen support is going to be removed, I'm wondering if this function is still required, given the API is not that easy to use. Calling rte_mempool_populate_phys() several times looks more flexible. But this is also another topic. > Now if rte_pktmbuf_pool_create is still the preferred mechanism for > applications to create mbuf pools, wouldn't it make sense to offer the > option of either pass the ops_name as suggested before or for the > application to just set a different pool handler? I understand the it is > either breaking API or introducing a new API, but is the solution to > basically "copy" the whole function in the application and add an optional > step (set_ops)? I was quite reticent about introducing rte_pktmbuf_pool_create_with_ops() because for the same reasons, we would also want to introduce functions to create a mempool that use a different pktmbuf_init() or pool_init() callback, or to create the pool in external memory, ... and we would end up with a functions with too many arguments. 
I like the approach of having several simple functions, because it's easier to read (even if the code is longer), and it's easily extensible. Now if we feel that the mempool ops is more important than the other parameters, we can consider to add it in rte_pktmbuf_pool_create(). > With these patches we can: > - change the default pool handler with EAL option, which does *not* allow > for pktmbuf_pool_create with different handlers. > - We can check the PMDs preferred/supported handlers but then we need to > implement function with whole sequence, cannot use pktmbuf_pool_create. > > It looks to me then that any application that wants to use different pool > handlers than default/ring need to implement their own create_pool with > set_ops. ^ permalink raw reply [flat|nested] 72+ messages in thread
* Re: [dpdk-dev] [PATCH v3 0/2] Dynamically configure mempool handle 2017-09-05 7:47 ` Olivier MATZ @ 2017-09-05 8:11 ` Jerin Jacob 0 siblings, 0 replies; 72+ messages in thread From: Jerin Jacob @ 2017-09-05 8:11 UTC (permalink / raw) To: Olivier MATZ Cc: Sergio Gonzalez Monroy, Santosh Shukla, dev, thomas, hemant.agrawal -----Original Message----- > Date: Tue, 5 Sep 2017 09:47:26 +0200 > From: Olivier MATZ <olivier.matz@6wind.com> > To: Sergio Gonzalez Monroy <sergio.gonzalez.monroy@intel.com> > CC: Santosh Shukla <santosh.shukla@caviumnetworks.com>, dev@dpdk.org, > thomas@monjalon.net, jerin.jacob@caviumnetworks.com, > hemant.agrawal@nxp.com > Subject: Re: [dpdk-dev] [PATCH v3 0/2] Dynamically configure mempool handle > User-Agent: NeoMutt/20170113 (1.7.2) > > On Mon, Sep 04, 2017 at 03:24:38PM +0100, Sergio Gonzalez Monroy wrote: > > Hi Olivier, > > > > On 04/09/2017 14:34, Olivier MATZ wrote: > > > Hi Sergio, > > > > > > On Mon, Sep 04, 2017 at 10:41:56AM +0100, Sergio Gonzalez Monroy wrote: > > > > On 15/08/2017 09:07, Santosh Shukla wrote: > > > > > * Application programming sequencing would be > > > > > char pref_mempool[RTE_MEMPOOL_OPS_NAMESIZE]; > > > > > rte_eth_dev_get_preferred_pool_ops(ethdev_port_id, pref_mempool /* out */); > > > > > rte_mempool_create_empty(); > > > > > rte_mempool_set_ops_byname( , pref_memppol, ); > > > > > rte_mempool_populate_default(); > > > > What about introducing an API like: > > > > rte_pktmbuf_poll_create_with_ops (..., ops_name, config_pool); > > > > > > > > I think that API would help for the case the application wants an mbuf pool > > > > with ie. stack handler. > > > > Sure we can do the empty/set_ops/populate sequence, but the only thing we > > > > want to change from default pktmbuf_pool_create API is the pool handler. > > > > > > > > Application just needs to decide the ops handler to use, either default or > > > > one suggested by PMD? 
> > > > > > > > I think ideally we would have similar APIs: > > > > - rte_mempool_create_with_ops (...) > > > > - rte_memppol_xmem_create_with_ops (...) > > > Today, we may only want to change the mempool handler, but if we > > > need to change something else tomorrow, we would need to add another > > > parameter again, breaking the ABI. > > > > > > If we pass a config structure, adding a new field in it would also break the > > > ABI, except if the structure is opaque, with accessors. These accessors would be > > > functions (ex: mempool_cfg_new, mempool_cfg_set_pool_ops, ...). This is not so > > > much different than what we have now. > > > > > > The advantage I can see of working on a config structure instead of directly on > > > a mempool is that the api can be reused to build a default config. > > > > > > That said, I think it's quite orthogonal to this patch since we still require > > > the ethdev api. > > > > Fair enough. > > > > Just to double check that we are on the same page: > > - rte_mempool_create is just there for backwards compatibility and a > > sequence of create_empty -> set_ops (optional) ->init -> populate_default -> > > obj_iter (optional) is the recommended way of creating mempools. > > Yes, I think rte_mempool_create() has too many arguments. > > > - if application wants to use rte_mempool_xmem_create with different pool > > handler needs to replicate function and add set_ops step. > > Yes. And now that xen support is going to be removed, I'm wondering if > this function is still required, given the API is not that easy to > use. Calling rte_mempool_populate_phys() several times looks more > flexible. But this is also another topic. > > > Now if rte_pktmbuf_pool_create is still the preferred mechanism for > > applications to create mbuf pools, wouldn't it make sense to offer the > > option of either pass the ops_name as suggested before or for the > > application to just set a different pool handler? 
I understand the it is > > either breaking API or introducing a new API, but is the solution to > > basically "copy" the whole function in the application and add an optional > > step (set_ops)? > > I was quite reticent about introducing > rte_pktmbuf_pool_create_with_ops() because for the same reasons, we > would also want to introduce functions to create a mempool that use a > different pktmbuf_init() or pool_init() callback, or to create the pool > in external memory, ... and we would end up with a functions with too > many arguments. > > I like the approach of having several simple functions, because it's > easier to read (even if the code is longer), and it's easily extensible. > > Now if we feel that the mempool ops is more important than the other > parameters, we can consider to add it in rte_pktmbuf_pool_create(). Yes. I agree with Sergio here. Something we could target for v18.02. ^ permalink raw reply [flat|nested] 72+ messages in thread
* [dpdk-dev] [PATCH v4 0/2] Dynamically configure mempool handle 2017-08-15 8:07 ` [dpdk-dev] [PATCH v3 0/2] Dynamically configure mempool handle Santosh Shukla ` (2 preceding siblings ...) 2017-09-04 9:41 ` [dpdk-dev] [PATCH v3 0/2] Dynamically configure mempool handle Sergio Gonzalez Monroy @ 2017-09-11 15:18 ` Santosh Shukla 2017-09-11 15:18 ` [dpdk-dev] [PATCH v4 1/2] eal: allow user to override default pool handle Santosh Shukla ` (3 more replies) 3 siblings, 4 replies; 72+ messages in thread From: Santosh Shukla @ 2017-09-11 15:18 UTC (permalink / raw) To: olivier.matz, dev; +Cc: thomas, jerin.jacob, hemant.agrawal, Santosh Shukla v4: - Includes v3 review comment changes. Patches rebased on 06791a4bce: ethdev: get the supported pools for a port v3: - Rebased on top of v17.11-rc0 - Updated version.map entry to v17.11. v2: DPDK has support for hw and sw mempools. These mempools work best with specific PMDs. Example: sw ring based mempool for Intel NIC PMDs. HW mempool manager dpaa2 for the dpaa2 PMD. HW mempool manager fpa for the octeontx PMD. There could be a use-case where NICs from different vendors are used on the same platform, and the user would like to configure mempools in such a way that each of those NICs uses its preferred mempool (for performance reasons). The current mempool infrastructure doesn't support such a use-case. This patchset tries to address that problem in 2 steps: 0) Allowing the user to dynamically configure the mempool handle by passing the pool handle as an EAL arg to `--mbuf-pool-ops=<pool-handle>`. 1) Allowing PMDs to advertise their pool capability to an application. From an application point of view: - The application must ask the PMD about supported pools. - The PMD will advertise its pool capability in priority order: '0' - Best match mempool ops for _that_ port (highest priority) '1' - PMD supports this mempool ops. - Based on this input from the PMD, the application chooses the pool handle and does the pool create. 
Change History: v3 --> v4: - Removed RTE_MBUF_XXX_POOL_NAMESIZE macro and replaced mbuf_pool_name with const. (Suggested by Olivier) - Added eal arg info in doc guide. - Removed _preferred_pool() api and replaced with rte_eth_dev_pools_ops_supported() api (Suggested by Olivier) v2 --> v3: - Changed version.map from DPDK_17.08 to DPDK_17.11. v1 --> v2: - Renamed rte_eal_get_mempool_name to rte_eal_mbuf_default_mempool_ops(). (suggested by Olivier) - Renamed _get_preferred_pool to _get_preferred_pool_ops(). - Updated API description and changes return val from -EINVAL to -ENOTSUP. (Suggested by Olivier) * Specific details on v1-->v2 change summary described in each patch. Checkpatch status: - None. Work History: * Refer [1] for v1. Thanks. [1] http://dpdk.org/ml/archives/dev/2017-June/067022.html Santosh Shukla (2): eal: allow user to override default pool handle ethdev: get the supported pools for a port doc/guides/freebsd_gsg/build_sample_apps.rst | 3 +++ doc/guides/linux_gsg/build_sample_apps.rst | 3 +++ doc/guides/testpmd_app_ug/run_app.rst | 4 ++++ lib/librte_eal/bsdapp/eal/eal.c | 10 ++++++++++ lib/librte_eal/bsdapp/eal/rte_eal_version.map | 7 +++++++ lib/librte_eal/common/eal_common_options.c | 3 +++ lib/librte_eal/common/eal_internal_cfg.h | 2 +- lib/librte_eal/common/eal_options.h | 2 ++ lib/librte_eal/common/include/rte_eal.h | 9 +++++++++ lib/librte_eal/linuxapp/eal/eal.c | 11 +++++++++++ lib/librte_eal/linuxapp/eal/rte_eal_version.map | 7 +++++++ lib/librte_ether/rte_ethdev.c | 24 ++++++++++++++++++++++++ lib/librte_ether/rte_ethdev.h | 24 ++++++++++++++++++++++++ lib/librte_ether/rte_ethdev_version.map | 7 +++++++ lib/librte_mbuf/rte_mbuf.c | 5 +++-- 15 files changed, 118 insertions(+), 3 deletions(-) -- 2.14.1 ^ permalink raw reply [flat|nested] 72+ messages in thread
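The application-side selection described in the cover letter (query the PMD for each candidate ops name, take a priority-0 "best match" immediately, otherwise fall back to a priority-1 "supported" one) can be reduced to a standalone sketch. Here `query_ops_supported()` is a stand-in for `rte_eth_dev_pools_ops_supported()`, and the candidate ops names are illustrative:

```c
#include <string.h>

/* Stand-in for rte_eth_dev_pools_ops_supported(): returns 0 for the
 * port's best-match ops, 1 for merely supported ops, <0 otherwise. */
static int query_ops_supported(const char *pool_ops)
{
	if (strcmp(pool_ops, "octeontx_fpavf") == 0)
		return 0;	/* best match for this port */
	if (strcmp(pool_ops, "ring_mp_mc") == 0)
		return 1;	/* supported, but not preferred */
	return -1;		/* stands in for -ENOTSUP */
}

/* Pick the first priority-0 ops; fall back to any priority-1 ops;
 * return NULL if nothing is supported. */
static const char *choose_pool_ops(const char *const *candidates, int n)
{
	const char *fallback = NULL;
	int i;

	for (i = 0; i < n; i++) {
		int prio = query_ops_supported(candidates[i]);
		if (prio == 0)
			return candidates[i];	/* best: take it immediately */
		if (prio == 1 && fallback == NULL)
			fallback = candidates[i]; /* remember a supported one */
	}
	return fallback;
}
```

The chosen name would then be passed to `rte_mempool_set_ops_byname()` in the create_empty/set_ops/populate sequence discussed earlier in the thread.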
* [dpdk-dev] [PATCH v4 1/2] eal: allow user to override default pool handle 2017-09-11 15:18 ` [dpdk-dev] [PATCH v4 " Santosh Shukla @ 2017-09-11 15:18 ` Santosh Shukla 2017-09-25 7:28 ` Olivier MATZ 2017-09-11 15:18 ` [dpdk-dev] [PATCH v4 2/2] ethdev: get the supported pools for a port Santosh Shukla ` (2 subsequent siblings) 3 siblings, 1 reply; 72+ messages in thread From: Santosh Shukla @ 2017-09-11 15:18 UTC (permalink / raw) To: olivier.matz, dev; +Cc: thomas, jerin.jacob, hemant.agrawal, Santosh Shukla DPDK has support for both sw and hw mempools, and currently the user is limited to the ring_mp_mc pool. In case the user wants to use another pool handle, they need to update the config option RTE_MEMPOOL_OPS_DEFAULT, then build and run with the desired pool handle. This patch introduces an EAL option to override the default pool handle. Now the user can override RTE_MEMPOOL_OPS_DEFAULT by passing a pool handle to EAL via `--mbuf-pool-ops=""`. Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com> Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com> --- v3 --> v4: - Removed RTE_MBUF_XX_POOL_NAMESIZE macro. (Suggested by Olivier) - Replaced char * with const char * for 'mbuf_pool_name' var. (Olivier) - Added eal arg info in release guide. 
doc/guides/freebsd_gsg/build_sample_apps.rst | 3 +++ doc/guides/linux_gsg/build_sample_apps.rst | 3 +++ doc/guides/testpmd_app_ug/run_app.rst | 4 ++++ lib/librte_eal/bsdapp/eal/eal.c | 10 ++++++++++ lib/librte_eal/bsdapp/eal/rte_eal_version.map | 7 +++++++ lib/librte_eal/common/eal_common_options.c | 3 +++ lib/librte_eal/common/eal_internal_cfg.h | 2 +- lib/librte_eal/common/eal_options.h | 2 ++ lib/librte_eal/common/include/rte_eal.h | 9 +++++++++ lib/librte_eal/linuxapp/eal/eal.c | 11 +++++++++++ lib/librte_eal/linuxapp/eal/rte_eal_version.map | 7 +++++++ lib/librte_mbuf/rte_mbuf.c | 5 +++-- 12 files changed, 63 insertions(+), 3 deletions(-) diff --git a/doc/guides/freebsd_gsg/build_sample_apps.rst b/doc/guides/freebsd_gsg/build_sample_apps.rst index 9faa0e6e5..bd69aa9f5 100644 --- a/doc/guides/freebsd_gsg/build_sample_apps.rst +++ b/doc/guides/freebsd_gsg/build_sample_apps.rst @@ -163,6 +163,9 @@ Other options, specific to Linux and are not supported under FreeBSD are as foll * ``--huge-dir``: The directory where hugetlbfs is mounted. +* ``mbuf-pool-ops``: + Pool ops name for mbuf to use. + * ``--file-prefix``: The prefix text used for hugepage filenames. diff --git a/doc/guides/linux_gsg/build_sample_apps.rst b/doc/guides/linux_gsg/build_sample_apps.rst index 0cc5fd173..7cba22e2a 100644 --- a/doc/guides/linux_gsg/build_sample_apps.rst +++ b/doc/guides/linux_gsg/build_sample_apps.rst @@ -157,6 +157,9 @@ The EAL options are as follows: * ``--huge-dir``: The directory where hugetlbfs is mounted. +* ``mbuf-pool-ops``: + Pool ops name for mbuf to use. + * ``--file-prefix``: The prefix text used for hugepage filenames. diff --git a/doc/guides/testpmd_app_ug/run_app.rst b/doc/guides/testpmd_app_ug/run_app.rst index e8303f3ba..61138957c 100644 --- a/doc/guides/testpmd_app_ug/run_app.rst +++ b/doc/guides/testpmd_app_ug/run_app.rst @@ -110,6 +110,10 @@ See the DPDK Getting Started Guides for more information on these options. 
Specify the directory where the hugetlbfs is mounted. +* ``mbuf-pool-ops``: + + Pool ops name for mbuf to use. + * ``--proc-type`` Set the type of the current process. diff --git a/lib/librte_eal/bsdapp/eal/eal.c b/lib/librte_eal/bsdapp/eal/eal.c index 5fa598842..150d3ccc4 100644 --- a/lib/librte_eal/bsdapp/eal/eal.c +++ b/lib/librte_eal/bsdapp/eal/eal.c @@ -112,6 +112,13 @@ struct internal_config internal_config; /* used by rte_rdtsc() */ int rte_cycles_vmware_tsc_map; +/* Return mbuf pool name */ +const char * +rte_eal_mbuf_default_mempool_ops(void) +{ + return internal_config.mbuf_pool_name; +} + /* Return a pointer to the configuration structure */ struct rte_config * rte_eal_get_configuration(void) @@ -385,6 +392,9 @@ eal_parse_args(int argc, char **argv) continue; switch (opt) { + case OPT_MBUF_POOL_OPS_NUM: + internal_config.mbuf_pool_name = optarg; + break; case 'h': eal_usage(prgname); exit(EXIT_SUCCESS); diff --git a/lib/librte_eal/bsdapp/eal/rte_eal_version.map b/lib/librte_eal/bsdapp/eal/rte_eal_version.map index aac6fd776..172a28bbf 100644 --- a/lib/librte_eal/bsdapp/eal/rte_eal_version.map +++ b/lib/librte_eal/bsdapp/eal/rte_eal_version.map @@ -237,3 +237,10 @@ EXPERIMENTAL { rte_service_unregister; } DPDK_17.08; + +DPDK_17.11 { + global: + + rte_eal_mbuf_default_mempool_ops; + +} DPDK_17.08; diff --git a/lib/librte_eal/common/eal_common_options.c b/lib/librte_eal/common/eal_common_options.c index 1da185e59..cf49db990 100644 --- a/lib/librte_eal/common/eal_common_options.c +++ b/lib/librte_eal/common/eal_common_options.c @@ -98,6 +98,7 @@ eal_long_options[] = { {OPT_VFIO_INTR, 1, NULL, OPT_VFIO_INTR_NUM }, {OPT_VMWARE_TSC_MAP, 0, NULL, OPT_VMWARE_TSC_MAP_NUM }, {OPT_XEN_DOM0, 0, NULL, OPT_XEN_DOM0_NUM }, + {OPT_MBUF_POOL_OPS, 1, NULL, OPT_MBUF_POOL_OPS_NUM}, {0, 0, NULL, 0 } }; @@ -220,6 +221,7 @@ eal_reset_internal_config(struct internal_config *internal_cfg) #endif internal_cfg->vmware_tsc_map = 0; internal_cfg->create_uio_dev = 0; + 
internal_cfg->mbuf_pool_name = RTE_MBUF_DEFAULT_MEMPOOL_OPS; } static int @@ -1309,5 +1311,6 @@ eal_common_usage(void) " --"OPT_NO_PCI" Disable PCI\n" " --"OPT_NO_HPET" Disable HPET\n" " --"OPT_NO_SHCONF" No shared config (mmap'd files)\n" + " --"OPT_MBUF_POOL_OPS" Pool ops name for mbuf to use\n" "\n", RTE_MAX_LCORE); } diff --git a/lib/librte_eal/common/eal_internal_cfg.h b/lib/librte_eal/common/eal_internal_cfg.h index 7b7e8c887..00c3bd227 100644 --- a/lib/librte_eal/common/eal_internal_cfg.h +++ b/lib/librte_eal/common/eal_internal_cfg.h @@ -82,7 +82,7 @@ struct internal_config { volatile enum rte_intr_mode vfio_intr_mode; const char *hugefile_prefix; /**< the base filename of hugetlbfs files */ const char *hugepage_dir; /**< specific hugetlbfs directory to use */ - + const char *mbuf_pool_name; /**< mbuf pool name */ unsigned num_hugepage_sizes; /**< how many sizes on this system */ struct hugepage_info hugepage_info[MAX_HUGEPAGE_SIZES]; }; diff --git a/lib/librte_eal/common/eal_options.h b/lib/librte_eal/common/eal_options.h index 439a26104..9c893aa2d 100644 --- a/lib/librte_eal/common/eal_options.h +++ b/lib/librte_eal/common/eal_options.h @@ -83,6 +83,8 @@ enum { OPT_VMWARE_TSC_MAP_NUM, #define OPT_XEN_DOM0 "xen-dom0" OPT_XEN_DOM0_NUM, +#define OPT_MBUF_POOL_OPS "mbuf-pool-ops" + OPT_MBUF_POOL_OPS_NUM, OPT_LONG_MAX_NUM }; diff --git a/lib/librte_eal/common/include/rte_eal.h b/lib/librte_eal/common/include/rte_eal.h index 0e7363d77..1a719cc6b 100644 --- a/lib/librte_eal/common/include/rte_eal.h +++ b/lib/librte_eal/common/include/rte_eal.h @@ -287,6 +287,15 @@ static inline int rte_gettid(void) return RTE_PER_LCORE(_thread_id); } +/** + * Get default pool name for mbuf + * + * @return + * returns default pool name. 
+ */ +const char * +rte_eal_mbuf_default_mempool_ops(void); + #define RTE_INIT(func) \ static void __attribute__((constructor, used)) func(void) diff --git a/lib/librte_eal/linuxapp/eal/eal.c b/lib/librte_eal/linuxapp/eal/eal.c index 48f12f44c..a44454129 100644 --- a/lib/librte_eal/linuxapp/eal/eal.c +++ b/lib/librte_eal/linuxapp/eal/eal.c @@ -121,6 +121,13 @@ struct internal_config internal_config; /* used by rte_rdtsc() */ int rte_cycles_vmware_tsc_map; +/* Return mbuf pool name */ +const char * +rte_eal_mbuf_default_mempool_ops(void) +{ + return internal_config.mbuf_pool_name; +} + /* Return a pointer to the configuration structure */ struct rte_config * rte_eal_get_configuration(void) @@ -610,6 +617,10 @@ eal_parse_args(int argc, char **argv) internal_config.create_uio_dev = 1; break; + case OPT_MBUF_POOL_OPS_NUM: + internal_config.mbuf_pool_name = optarg; + break; + default: if (opt < OPT_LONG_MIN_NUM && isprint(opt)) { RTE_LOG(ERR, EAL, "Option %c is not supported " diff --git a/lib/librte_eal/linuxapp/eal/rte_eal_version.map b/lib/librte_eal/linuxapp/eal/rte_eal_version.map index 3a8f15406..5b48fcd37 100644 --- a/lib/librte_eal/linuxapp/eal/rte_eal_version.map +++ b/lib/librte_eal/linuxapp/eal/rte_eal_version.map @@ -242,3 +242,10 @@ EXPERIMENTAL { rte_service_unregister; } DPDK_17.08; + +DPDK_17.11 { + global: + + rte_eal_mbuf_default_mempool_ops; + +} DPDK_17.08; diff --git a/lib/librte_mbuf/rte_mbuf.c b/lib/librte_mbuf/rte_mbuf.c index 26a62b8e1..e1fc90ef3 100644 --- a/lib/librte_mbuf/rte_mbuf.c +++ b/lib/librte_mbuf/rte_mbuf.c @@ -157,6 +157,7 @@ rte_pktmbuf_pool_create(const char *name, unsigned n, { struct rte_mempool *mp; struct rte_pktmbuf_pool_private mbp_priv; + const char *mp_name; unsigned elt_size; int ret; @@ -176,8 +177,8 @@ rte_pktmbuf_pool_create(const char *name, unsigned n, if (mp == NULL) return NULL; - ret = rte_mempool_set_ops_byname(mp, - RTE_MBUF_DEFAULT_MEMPOOL_OPS, NULL); + mp_name = rte_eal_mbuf_default_mempool_ops(); + ret = 
rte_mempool_set_ops_byname(mp, mp_name, NULL); if (ret != 0) { RTE_LOG(ERR, MBUF, "error setting mempool handler\n"); rte_mempool_free(mp); -- 2.14.1 ^ permalink raw reply [flat|nested] 72+ messages in thread
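The EAL changes in this patch boil down to a long-option entry plus a case in the parse loop. The core of that mechanism can be shown in a standalone sketch using `getopt_long` (the option name matches the patch; the function name and the option-number constant here are illustrative, and the real EAL keeps the value in `internal_config` rather than returning it):

```c
#include <getopt.h>
#include <stddef.h>

#define OPT_MBUF_POOL_OPS_NUM 256	/* any value above the char range */

/* Minimal stand-in for the EAL parse loop: scan argv and return the
 * value of --mbuf-pool-ops, or the built-in default when absent. */
static const char *parse_mbuf_pool_ops(int argc, char **argv,
				       const char *default_ops)
{
	static const struct option long_opts[] = {
		{ "mbuf-pool-ops", required_argument, NULL,
		  OPT_MBUF_POOL_OPS_NUM },
		{ 0, 0, NULL, 0 }
	};
	const char *ops = default_ops;
	int opt;

	optind = 1;	/* reset in case getopt_long was used before */
	while ((opt = getopt_long(argc, argv, "", long_opts, NULL)) != -1) {
		if (opt == OPT_MBUF_POOL_OPS_NUM)
			ops = optarg;
	}
	return ops;
}
```

This mirrors how `rte_eal_mbuf_default_mempool_ops()` ends up returning either the compile-time default or the user-supplied ops name, which `rte_pktmbuf_pool_create()` then feeds to `rte_mempool_set_ops_byname()`.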
* Re: [dpdk-dev] [PATCH v4 1/2] eal: allow user to override default pool handle 2017-09-11 15:18 ` [dpdk-dev] [PATCH v4 1/2] eal: allow user to override default pool handle Santosh Shukla @ 2017-09-25 7:28 ` Olivier MATZ 2017-09-25 21:23 ` santosh 0 siblings, 1 reply; 72+ messages in thread From: Olivier MATZ @ 2017-09-25 7:28 UTC (permalink / raw) To: Santosh Shukla; +Cc: dev, thomas, jerin.jacob, hemant.agrawal On Mon, Sep 11, 2017 at 08:48:36PM +0530, Santosh Shukla wrote: > DPDK has support for both sw and hw mempool and > currently user is limited to use ring_mp_mc pool. > In case user want to use other pool handle, > need to update config RTE_MEMPOOL_OPS_DEFAULT, then > build and run with desired pool handle. > > Introducing eal option to override default pool handle. > > Now user can override the RTE_MEMPOOL_OPS_DEFAULT by passing > pool handle to eal `--mbuf-pool-ops=""`. > > Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com> > Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com> > > [...] > > --- a/lib/librte_eal/common/eal_internal_cfg.h > +++ b/lib/librte_eal/common/eal_internal_cfg.h > @@ -82,7 +82,7 @@ struct internal_config { > volatile enum rte_intr_mode vfio_intr_mode; > const char *hugefile_prefix; /**< the base filename of hugetlbfs files */ > const char *hugepage_dir; /**< specific hugetlbfs directory to use */ > - > + const char *mbuf_pool_name; /**< mbuf pool name */ > unsigned num_hugepage_sizes; /**< how many sizes on this system */ > struct hugepage_info hugepage_info[MAX_HUGEPAGE_SIZES]; > }; What do you think about mbuf_pool_ops_name instead? I'm afraid of the confusion we could have with the name of the mempool. ^ permalink raw reply [flat|nested] 72+ messages in thread
* Re: [dpdk-dev] [PATCH v4 1/2] eal: allow user to override default pool handle 2017-09-25 7:28 ` Olivier MATZ @ 2017-09-25 21:23 ` santosh 0 siblings, 0 replies; 72+ messages in thread From: santosh @ 2017-09-25 21:23 UTC (permalink / raw) To: Olivier MATZ; +Cc: dev, thomas, jerin.jacob, hemant.agrawal On Monday 25 September 2017 08:28 AM, Olivier MATZ wrote: > On Mon, Sep 11, 2017 at 08:48:36PM +0530, Santosh Shukla wrote: >> DPDK has support for both sw and hw mempool and >> currently user is limited to use ring_mp_mc pool. >> In case user want to use other pool handle, >> need to update config RTE_MEMPOOL_OPS_DEFAULT, then >> build and run with desired pool handle. >> >> Introducing eal option to override default pool handle. >> >> Now user can override the RTE_MEMPOOL_OPS_DEFAULT by passing >> pool handle to eal `--mbuf-pool-ops=""`. >> >> Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com> >> Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com> >> >> [...] >> >> --- a/lib/librte_eal/common/eal_internal_cfg.h >> +++ b/lib/librte_eal/common/eal_internal_cfg.h >> @@ -82,7 +82,7 @@ struct internal_config { >> volatile enum rte_intr_mode vfio_intr_mode; >> const char *hugefile_prefix; /**< the base filename of hugetlbfs files */ >> const char *hugepage_dir; /**< specific hugetlbfs directory to use */ >> - >> + const char *mbuf_pool_name; /**< mbuf pool name */ >> unsigned num_hugepage_sizes; /**< how many sizes on this system */ >> struct hugepage_info hugepage_info[MAX_HUGEPAGE_SIZES]; >> }; > What do you think about mbuf_pool_ops_name instead? > > I'm afraid of the confusion we could have with the name > of the mempool. > Hoping that we're doing final renaming on handle, Its the third time and so for other mempool series. queued for v5. ^ permalink raw reply [flat|nested] 72+ messages in thread
* [dpdk-dev] [PATCH v4 2/2] ethdev: get the supported pools for a port 2017-09-11 15:18 ` [dpdk-dev] [PATCH v4 " Santosh Shukla 2017-09-11 15:18 ` [dpdk-dev] [PATCH v4 1/2] eal: allow user to override default pool handle Santosh Shukla @ 2017-09-11 15:18 ` Santosh Shukla 2017-09-25 7:37 ` Olivier MATZ 2017-09-13 10:00 ` [dpdk-dev] [PATCH v4 0/2] Dynamically configure mempool handle santosh 2017-10-01 9:14 ` [dpdk-dev] [PATCH v5 " Santosh Shukla 3 siblings, 1 reply; 72+ messages in thread From: Santosh Shukla @ 2017-09-11 15:18 UTC (permalink / raw) To: olivier.matz, dev; +Cc: thomas, jerin.jacob, hemant.agrawal, Santosh Shukla Now that dpdk supports more than one mempool drivers and each mempool driver works best for specific PMD, example: - sw ring based mempool for Intel PMD drivers. - dpaa2 HW mempool manager for dpaa2 PMD driver. - fpa HW mempool manager for Octeontx PMD driver. Application would like to know the best mempool handle for any port. Introducing rte_eth_dev_pools_ops_supported() API, which allows PMD driver to advertise his supported pools capability to the application. Supported pools are categorized in below priority:- - Best mempool handle for this port (Highest priority '0') - Port supports this mempool handle (Priority '1') Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com> --- v3 --> v4: - Replaced __preferred_pool() with rte_eth_dev_pools_ops_supported() (suggested by Olivier) History, Refer [1]. 
[1] http://dpdk.org/dev/patchwork/patch/27610/ lib/librte_ether/rte_ethdev.c | 24 ++++++++++++++++++++++++ lib/librte_ether/rte_ethdev.h | 24 ++++++++++++++++++++++++ lib/librte_ether/rte_ethdev_version.map | 7 +++++++ 3 files changed, 55 insertions(+) diff --git a/lib/librte_ether/rte_ethdev.c b/lib/librte_ether/rte_ethdev.c index 0597641ee..0fa0cc3fa 100644 --- a/lib/librte_ether/rte_ethdev.c +++ b/lib/librte_ether/rte_ethdev.c @@ -3409,3 +3409,27 @@ rte_eth_dev_adjust_nb_rx_tx_desc(uint8_t port_id, return 0; } + +int +rte_eth_dev_pools_ops_supported(uint8_t port_id, const char *pool) +{ + struct rte_eth_dev *dev; + const char *tmp; + + RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV); + + if (pool == NULL) + return -EINVAL; + + dev = &rte_eth_devices[port_id]; + + if (*dev->dev_ops->pools_ops_supported == NULL) { + tmp = rte_eal_mbuf_default_mempool_ops(); + if (!strcmp(tmp, pool)) + return 0; + else + return -ENOTSUP; + } + + return (*dev->dev_ops->pools_ops_supported)(dev, pool); +} diff --git a/lib/librte_ether/rte_ethdev.h b/lib/librte_ether/rte_ethdev.h index 0adf3274a..d90029b1e 100644 --- a/lib/librte_ether/rte_ethdev.h +++ b/lib/librte_ether/rte_ethdev.h @@ -1425,6 +1425,10 @@ typedef int (*eth_get_dcb_info)(struct rte_eth_dev *dev, struct rte_eth_dcb_info *dcb_info); /**< @internal Get dcb information on an Ethernet device */ +typedef int (*eth_pools_ops_supported_t)(struct rte_eth_dev *dev, + const char *pool); +/**< @internal Get the supported pools for a port */ + /** * @internal A structure containing the functions exported by an Ethernet driver. */ @@ -1544,6 +1548,8 @@ struct eth_dev_ops { eth_tm_ops_get_t tm_ops_get; /**< Get Traffic Management (TM) operations. 
*/ + eth_pools_ops_supported_t pools_ops_supported; + /**< Get the supported pools for a port */ }; /** @@ -4436,6 +4442,24 @@ int rte_eth_dev_adjust_nb_rx_tx_desc(uint8_t port_id, uint16_t *nb_rx_desc, uint16_t *nb_tx_desc); + +/** + * Get the supported pools for a port + * + * @param port_id + * Port identifier of the Ethernet device. + * @param [in] pool + * The supported pool handle for this port. + * Maximum length of pool handle name is RTE_MEMPOOL_OPS_NAMESIZE. + * @return + * - 0: best mempool ops choice for this port. + * - 1: mempool ops are supported for this port. + * - -ENOTSUP: mempool ops not supported for this port. + * - <0: (-ENODEV, EINVAL) on failure. + */ +int +rte_eth_dev_pools_ops_supported(uint8_t port_id, const char *pool); + #ifdef __cplusplus } #endif diff --git a/lib/librte_ether/rte_ethdev_version.map b/lib/librte_ether/rte_ethdev_version.map index 42837285e..644705cf1 100644 --- a/lib/librte_ether/rte_ethdev_version.map +++ b/lib/librte_ether/rte_ethdev_version.map @@ -187,3 +187,10 @@ DPDK_17.08 { rte_tm_wred_profile_delete; } DPDK_17.05; + +DPDK_17.11 { + global: + + rte_eth_dev_pools_ops_supported; + +} DPDK_17.08; -- 2.14.1 ^ permalink raw reply [flat|nested] 72+ messages in thread
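The fallback behaviour this patch implements when a PMD does not provide the `pools_ops_supported` callback — compare the requested ops against the EAL default and report "best match" only on an exact match — can be reduced to a standalone sketch. `cb` and `default_ops` stand in for the driver callback and `rte_eal_mbuf_default_mempool_ops()`; this is the logic Olivier questions in the review that follows:

```c
#include <string.h>
#include <errno.h>

typedef int (*pools_ops_supported_t)(const char *pool_ops);

/* Mirror of the patch's logic: if the driver provides no callback,
 * report "best match" (0) only for the EAL default ops name,
 * otherwise -ENOTSUP; with a callback, defer to the driver. */
static int pool_ops_supported(pools_ops_supported_t cb,
			      const char *default_ops,
			      const char *pool_ops)
{
	if (pool_ops == NULL)
		return -EINVAL;
	if (cb == NULL)
		return strcmp(default_ops, pool_ops) == 0 ? 0 : -ENOTSUP;
	return cb(pool_ops);
}
```

Note how the result for a callback-less PMD depends on the EAL default — exactly the coupling the review below objects to, suggesting such PMDs should simply return 1 ("supported") for any ops name.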
* Re: [dpdk-dev] [PATCH v4 2/2] ethdev: get the supported pools for a port 2017-09-11 15:18 ` [dpdk-dev] [PATCH v4 2/2] ethdev: get the supported pools for a port Santosh Shukla @ 2017-09-25 7:37 ` Olivier MATZ 2017-09-25 21:52 ` santosh 0 siblings, 1 reply; 72+ messages in thread From: Olivier MATZ @ 2017-09-25 7:37 UTC (permalink / raw) To: Santosh Shukla; +Cc: dev, thomas, jerin.jacob, hemant.agrawal Hi, On Mon, Sep 11, 2017 at 08:48:37PM +0530, Santosh Shukla wrote: > Now that dpdk supports more than one mempool drivers and > each mempool driver works best for specific PMD, example: > - sw ring based mempool for Intel PMD drivers. > - dpaa2 HW mempool manager for dpaa2 PMD driver. > - fpa HW mempool manager for Octeontx PMD driver. > > Application would like to know the best mempool handle > for any port. > > Introducing rte_eth_dev_pools_ops_supported() API, > which allows PMD driver to advertise > his supported pools capability to the application. > > Supported pools are categorized in below priority:- > - Best mempool handle for this port (Highest priority '0') > - Port supports this mempool handle (Priority '1') > > Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com> > > [...] > > +int > +rte_eth_dev_pools_ops_supported(uint8_t port_id, const char *pool) pools -> pool? > +{ > + struct rte_eth_dev *dev; > + const char *tmp; > + > + RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV); > + > + if (pool == NULL) > + return -EINVAL; > + > + dev = &rte_eth_devices[port_id]; > + > + if (*dev->dev_ops->pools_ops_supported == NULL) { > + tmp = rte_eal_mbuf_default_mempool_ops(); > + if (!strcmp(tmp, pool)) > + return 0; > + else > + return -ENOTSUP; I don't understand why we are comparing with rte_eal_mbuf_default_mempool_ops(). It means that the result of the function would be influenced by the parameter given by the user. I think that a PMD that does not implement ->pools_ops_supported should always return 1 (mempool is supported). 
> + } > + > + return (*dev->dev_ops->pools_ops_supported)(dev, pool); > +} > diff --git a/lib/librte_ether/rte_ethdev.h b/lib/librte_ether/rte_ethdev.h > index 0adf3274a..d90029b1e 100644 > --- a/lib/librte_ether/rte_ethdev.h > +++ b/lib/librte_ether/rte_ethdev.h > @@ -1425,6 +1425,10 @@ typedef int (*eth_get_dcb_info)(struct rte_eth_dev *dev, > struct rte_eth_dcb_info *dcb_info); > /**< @internal Get dcb information on an Ethernet device */ > > +typedef int (*eth_pools_ops_supported_t)(struct rte_eth_dev *dev, > + const char *pool); > +/**< @internal Get the supported pools for a port */ > + The comment should be something like: Test if a port supports specific mempool ops. > /** > * @internal A structure containing the functions exported by an Ethernet driver. > */ > @@ -1544,6 +1548,8 @@ struct eth_dev_ops { > > eth_tm_ops_get_t tm_ops_get; > /**< Get Traffic Management (TM) operations. */ > + eth_pools_ops_supported_t pools_ops_supported; > + /**< Get the supported pools for a port */ Same > }; > > /** > @@ -4436,6 +4442,24 @@ int rte_eth_dev_adjust_nb_rx_tx_desc(uint8_t port_id, > uint16_t *nb_rx_desc, > uint16_t *nb_tx_desc); > > + > +/** > + * Get the supported pools for a port Same > + * > + * @param port_id > + * Port identifier of the Ethernet device. > + * @param [in] pool > + * The supported pool handle for this port. The name of the pool operations to test > + * Maximum length of pool handle name is RTE_MEMPOOL_OPS_NAMESIZE. I don't think we should keep this Thanks, Olivier ^ permalink raw reply [flat|nested] 72+ messages in thread
* Re: [dpdk-dev] [PATCH v4 2/2] ethdev: get the supported pools for a port 2017-09-25 7:37 ` Olivier MATZ @ 2017-09-25 21:52 ` santosh 2017-09-29 5:00 ` santosh 2017-09-29 8:32 ` Olivier MATZ 0 siblings, 2 replies; 72+ messages in thread From: santosh @ 2017-09-25 21:52 UTC (permalink / raw) To: Olivier MATZ; +Cc: dev, thomas, jerin.jacob, hemant.agrawal Hi Olivier, On Monday 25 September 2017 08:37 AM, Olivier MATZ wrote: > Hi, > > On Mon, Sep 11, 2017 at 08:48:37PM +0530, Santosh Shukla wrote: >> Now that dpdk supports more than one mempool drivers and >> each mempool driver works best for specific PMD, example: >> - sw ring based mempool for Intel PMD drivers. >> - dpaa2 HW mempool manager for dpaa2 PMD driver. >> - fpa HW mempool manager for Octeontx PMD driver. >> >> Application would like to know the best mempool handle >> for any port. >> >> Introducing rte_eth_dev_pools_ops_supported() API, >> which allows PMD driver to advertise >> his supported pools capability to the application. >> >> Supported pools are categorized in below priority:- >> - Best mempool handle for this port (Highest priority '0') >> - Port supports this mempool handle (Priority '1') >> >> Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com> >> >> [...] >> >> +int >> +rte_eth_dev_pools_ops_supported(uint8_t port_id, const char *pool) > pools -> pool? ok. >> +{ >> + struct rte_eth_dev *dev; >> + const char *tmp; >> + >> + RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV); >> + >> + if (pool == NULL) >> + return -EINVAL; >> + >> + dev = &rte_eth_devices[port_id]; >> + >> + if (*dev->dev_ops->pools_ops_supported == NULL) { >> + tmp = rte_eal_mbuf_default_mempool_ops(); >> + if (!strcmp(tmp, pool)) >> + return 0; >> + else >> + return -ENOTSUP; > I don't understand why we are comparing with > rte_eal_mbuf_default_mempool_ops(). > > It means that the result of the function would be influenced > by the parameter given by the user. 
But that applies only to the ops-not-supported case, and in that case the function _must_ make sure that, if the input parameter is the default ops name, it returns the correct support information (whether it returns '0': best ops, or '1': ops is supported, is arguable — one can say the default ops handle is the best possible handle, or merely one of the handles the platform supports). > I think that a PMD that does not implement ->pools_ops_supported > should always return 1 (mempool is supported). Returning 1 says: the PMD supports this ops. So if the ops is not supported and the function returns 1, which ops should the application use? If that ops is the default ops, how will the application distinguish when to use the default ops versus the parameter ops? In both cases the function returns 1. The approach in this patch takes care of that condition: the function returns -ENOTSUP if (ops not supported || input parameter does not match the default ops), and otherwise returns 0 or 1. On the application side: for the error case (-ENOTSUP), it is up to the application to use the default ops or exit; for the good case (0 or 1), the function guarantees that the handle is either the best handle for the port or one the port supports. However, with your suggestion, if the ops-not-supported case returns 1, the application cannot tell which ops to use: the default ops or the input ops given to the function. Does that make sense? 
>> + } >> + >> + return (*dev->dev_ops->pools_ops_supported)(dev, pool); >> +} >> diff --git a/lib/librte_ether/rte_ethdev.h b/lib/librte_ether/rte_ethdev.h >> index 0adf3274a..d90029b1e 100644 >> --- a/lib/librte_ether/rte_ethdev.h >> +++ b/lib/librte_ether/rte_ethdev.h >> @@ -1425,6 +1425,10 @@ typedef int (*eth_get_dcb_info)(struct rte_eth_dev *dev, >> struct rte_eth_dcb_info *dcb_info); >> /**< @internal Get dcb information on an Ethernet device */ >> >> +typedef int (*eth_pools_ops_supported_t)(struct rte_eth_dev *dev, >> + const char *pool); >> +/**< @internal Get the supported pools for a port */ >> + > The comment should be something like: > Test if a port supports specific mempool ops. > >> /** >> * @internal A structure containing the functions exported by an Ethernet driver. >> */ >> @@ -1544,6 +1548,8 @@ struct eth_dev_ops { >> >> eth_tm_ops_get_t tm_ops_get; >> /**< Get Traffic Management (TM) operations. */ >> + eth_pools_ops_supported_t pools_ops_supported; >> + /**< Get the supported pools for a port */ > Same > >> }; >> >> /** >> @@ -4436,6 +4442,24 @@ int rte_eth_dev_adjust_nb_rx_tx_desc(uint8_t port_id, >> uint16_t *nb_rx_desc, >> uint16_t *nb_tx_desc); >> >> + >> +/** >> + * Get the supported pools for a port > Same > >> + * >> + * @param port_id >> + * Port identifier of the Ethernet device. >> + * @param [in] pool >> + * The supported pool handle for this port. > The name of the pool operations to test ok, v5. >> + * Maximum length of pool handle name is RTE_MEMPOOL_OPS_NAMESIZE. > I don't think we should keep this > Ok > > Thanks, > Olivier ^ permalink raw reply [flat|nested] 72+ messages in thread
* Re: [dpdk-dev] [PATCH v4 2/2] ethdev: get the supported pools for a port 2017-09-25 21:52 ` santosh @ 2017-09-29 5:00 ` santosh 2017-09-29 8:32 ` Olivier MATZ 1 sibling, 0 replies; 72+ messages in thread From: santosh @ 2017-09-29 5:00 UTC (permalink / raw) To: Olivier MATZ; +Cc: dev, thomas, jerin.jacob, hemant.agrawal Hi Olivier, On Monday 25 September 2017 10:52 PM, santosh wrote: > Hi Olivier, > > > On Monday 25 September 2017 08:37 AM, Olivier MATZ wrote: >>> +{ >>> + struct rte_eth_dev *dev; >>> + const char *tmp; >>> + >>> + RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV); >>> + >>> + if (pool == NULL) >>> + return -EINVAL; >>> + >>> + dev = &rte_eth_devices[port_id]; >>> + >>> + if (*dev->dev_ops->pools_ops_supported == NULL) { >>> + tmp = rte_eal_mbuf_default_mempool_ops(); >>> + if (!strcmp(tmp, pool)) >>> + return 0; >>> + else >>> + return -ENOTSUP; >> I don't understand why we are comparing with >> rte_eal_mbuf_default_mempool_ops(). >> >> It means that the result of the function would be influenced >> by the parameter given by the user. > But that will be only for ops not supported case and in that case, > function _must_ make sure that if inputted param is _default_ops_name > then function should return ops supported correct info (whether > returning '0' : Best ops or '1': ops does support > , this part is arguable.. meaning One can say that default_ops ='handle-name' is best possible handle Or > one of handle which platform supports). > >> I think that a PMD that does not implement ->pools_ops_supported >> should always return 1 (mempool is supported). > Return 1 says: PMD support this ops.. > > So if ops is not supported and func returns with 1, then which ops application will use? > If that ops is default_ops.. then How application will distinguish when to use default ops or > param ops?.. as because in both cases func will return with value 1. 
> > The approach in the patch takes care of that condition and func will return -ENOTSUP > if (ops not support || inputted param not matching with default ops) otherwise will return > 0 or 1. > > At application side; > For error case: In case of -ENOTSUP, its upto application to use _default_ops or exit. > For good case: 0 or 1 case, func gaurantee that handle is either best handle for pool or pool supports > that handle.. However in your suggestion if ops not supported case returns 1 then application is not > sure which ops to use.. default_ops Or input ops given to func. > > make sense? Ping? Are you fine with this explanation? If not, could you share your thoughts on the case where ops-not-supported simply returns 1: how would the application differentiate whether to use the default ops or the input parameter ops? Return value 1 would hold true for both cases, so how would the application pick which ops to use? Can we please close on this? I need to send v5 for the -rc1 release. Thanks. ^ permalink raw reply [flat|nested] 72+ messages in thread
* Re: [dpdk-dev] [PATCH v4 2/2] ethdev: get the supported pools for a port 2017-09-25 21:52 ` santosh 2017-09-29 5:00 ` santosh @ 2017-09-29 8:32 ` Olivier MATZ 2017-09-29 10:16 ` santosh 1 sibling, 1 reply; 72+ messages in thread From: Olivier MATZ @ 2017-09-29 8:32 UTC (permalink / raw) To: santosh; +Cc: dev, thomas, jerin.jacob, hemant.agrawal On Mon, Sep 25, 2017 at 10:52:31PM +0100, santosh wrote: > Hi Olivier, > > > On Monday 25 September 2017 08:37 AM, Olivier MATZ wrote: > > Hi, > > > > On Mon, Sep 11, 2017 at 08:48:37PM +0530, Santosh Shukla wrote: > >> Now that dpdk supports more than one mempool drivers and > >> each mempool driver works best for specific PMD, example: > >> - sw ring based mempool for Intel PMD drivers. > >> - dpaa2 HW mempool manager for dpaa2 PMD driver. > >> - fpa HW mempool manager for Octeontx PMD driver. > >> > >> Application would like to know the best mempool handle > >> for any port. > >> > >> Introducing rte_eth_dev_pools_ops_supported() API, > >> which allows PMD driver to advertise > >> his supported pools capability to the application. > >> > >> Supported pools are categorized in below priority:- > >> - Best mempool handle for this port (Highest priority '0') > >> - Port supports this mempool handle (Priority '1') > >> > >> Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com> > >> > >> [...] > >> > >> +int > >> +rte_eth_dev_pools_ops_supported(uint8_t port_id, const char *pool) > > pools -> pool? > > ok. 
> > >> +{ > >> + struct rte_eth_dev *dev; > >> + const char *tmp; > >> + > >> + RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV); > >> + > >> + if (pool == NULL) > >> + return -EINVAL; > >> + > >> + dev = &rte_eth_devices[port_id]; > >> + > >> + if (*dev->dev_ops->pools_ops_supported == NULL) { > >> + tmp = rte_eal_mbuf_default_mempool_ops(); > >> + if (!strcmp(tmp, pool)) > >> + return 0; > >> + else > >> + return -ENOTSUP; > > I don't understand why we are comparing with > > rte_eal_mbuf_default_mempool_ops(). > > > > It means that the result of the function would be influenced > > by the parameter given by the user. > > But that will be only for ops not supported case and in that case, > function _must_ make sure that if inputted param is _default_ops_name > then function should return ops supported correct info (whether > returning '0' : Best ops or '1': ops does support > , this part is arguable.. meaning One can say that default_ops ='handle-name' is best possible handle Or > one of handle which platform supports). > > > I think that a PMD that does not implement ->pools_ops_supported > > should always return 1 (mempool is supported). > I don't agree. The result of this API (mempool ops supported or not by a PMD) should not depend on what user asks for. > Return 1 says: PMD support this ops.. > > So if ops is not supported and func returns with 1, then which ops application will use? > If that ops is default_ops.. then How application will distinguish when to use default ops or > param ops?.. as because in both cases func will return with value 1. > > The approach in the patch takes care of that condition and func will return -ENOTSUP > if (ops not support || inputted param not matching with default ops) otherwise will return > 0 or 1. > > At application side; > For error case: In case of -ENOTSUP, its upto application to use _default_ops or exit. 
> For good case: 0 or 1 case, func gaurantee that handle is either best handle for pool or pool supports > that handle.. However in your suggestion if ops not supported case returns 1 then application is not > sure which ops to use.. default_ops Or input ops given to func. > > make sense? My proposition is: - what a PMD returns does not depend on the parameter the user passed: - 0: best support - 1: support - -ENOTSUP: not supported - if a PMD does not implement the _ops_supported() API, assume all pools are supported (returns 1) - if the user does not pass a specific mempool ops, the application asks the PMDs, finds the best mempool ops, and uses it. This could even be done in rte_pktmbuf_pool_create() as discussed at the summit. - if the user passes a specific mempool ops, we don't need to call the _ops_supported() api, we just try to use this pool. The _ops_supported() returns a property of a PMD, in my opinion it should not be impacted by a user argument. ^ permalink raw reply [flat|nested] 72+ messages in thread
* Re: [dpdk-dev] [PATCH v4 2/2] ethdev: get the supported pools for a port 2017-09-29 8:32 ` Olivier MATZ @ 2017-09-29 10:16 ` santosh 2017-09-29 11:21 ` santosh 2017-09-29 11:23 ` Olivier MATZ 0 siblings, 2 replies; 72+ messages in thread From: santosh @ 2017-09-29 10:16 UTC (permalink / raw) To: Olivier MATZ; +Cc: dev, thomas, jerin.jacob, hemant.agrawal Hi Olivier, On Friday 29 September 2017 09:32 AM, Olivier MATZ wrote: > On Mon, Sep 25, 2017 at 10:52:31PM +0100, santosh wrote: >> Hi Olivier, >> >> >> On Monday 25 September 2017 08:37 AM, Olivier MATZ wrote: >> >>>> +{ >>>> + struct rte_eth_dev *dev; >>>> + const char *tmp; >>>> + >>>> + RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV); >>>> + >>>> + if (pool == NULL) >>>> + return -EINVAL; >>>> + >>>> + dev = &rte_eth_devices[port_id]; >>>> + >>>> + if (*dev->dev_ops->pools_ops_supported == NULL) { >>>> + tmp = rte_eal_mbuf_default_mempool_ops(); >>>> + if (!strcmp(tmp, pool)) >>>> + return 0; >>>> + else >>>> + return -ENOTSUP; >>> I don't understand why we are comparing with >>> rte_eal_mbuf_default_mempool_ops(). >>> >>> It means that the result of the function would be influenced >>> by the parameter given by the user. >> But that will be only for ops not supported case and in that case, >> function _must_ make sure that if inputted param is _default_ops_name >> then function should return ops supported correct info (whether >> returning '0' : Best ops or '1': ops does support >> , this part is arguable.. meaning One can say that default_ops ='handle-name' is best possible handle Or >> one of handle which platform supports). >> >>> I think that a PMD that does not implement ->pools_ops_supported >>> should always return 1 (mempool is supported). > I don't agree. > The result of this API (mempool ops supported or not by a PMD) > should not depend on what user asks for. > >> Return 1 says: PMD support this ops.. >> >> So if ops is not supported and func returns with 1, then which ops application will use? 
>> If that ops is default_ops.. then How application will distinguish when to use default ops or >> param ops?.. as because in both cases func will return with value 1. >> >> The approach in the patch takes care of that condition and func will return -ENOTSUP >> if (ops not support || inputted param not matching with default ops) otherwise will return >> 0 or 1. >> >> At application side; >> For error case: In case of -ENOTSUP, its upto application to use _default_ops or exit. >> For good case: 0 or 1 case, func gaurantee that handle is either best handle for pool or pool supports >> that handle.. However in your suggestion if ops not supported case returns 1 then application is not >> sure which ops to use.. default_ops Or input ops given to func. >> >> make sense? > My proposition is: > > - what a PMD returns does not depend on used parameter: > - 0: best support > - 1: support > - -ENOTSUP: not supported > > - if a PMD does not implement the _ops_supported() API, assume all pools > are supported (returns 1) If the API returns 1 when the PMD does not implement _ops_supported(), do we really need -ENOTSUP at all? If so, I am wondering in which case the API would return the -ENOTSUP error. > - if the user does not pass a specific mempool ops, the application asks > the PMDs, finds the best mempool ops, and use it. This could even be > done in rte_pktmbuf_pool_create() as discussed at the summit. Right. > - if the user passes a specific mempool ops, we don't need to call the > _ops_supported() api, we just try to use this pool. > > > The _ops_supported() returns a property of a PMD, in my opinion it > should not be impacted by a user argument. > Thanks. ^ permalink raw reply [flat|nested] 72+ messages in thread
* Re: [dpdk-dev] [PATCH v4 2/2] ethdev: get the supported pools for a port 2017-09-29 10:16 ` santosh @ 2017-09-29 11:21 ` santosh 2017-09-29 11:23 ` Olivier MATZ 1 sibling, 0 replies; 72+ messages in thread From: santosh @ 2017-09-29 11:21 UTC (permalink / raw) To: Olivier MATZ; +Cc: dev, thomas, jerin.jacob, hemant.agrawal On Friday 29 September 2017 11:16 AM, santosh wrote: > Hi Olivier, > > > On Friday 29 September 2017 09:32 AM, Olivier MATZ wrote: >> On Mon, Sep 25, 2017 at 10:52:31PM +0100, santosh wrote: >>> Hi Olivier, >>> >>> >>> On Monday 25 September 2017 08:37 AM, Olivier MATZ wrote: >>> >>>>> +{ >>>>> + struct rte_eth_dev *dev; >>>>> + const char *tmp; >>>>> + >>>>> + RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV); >>>>> + >>>>> + if (pool == NULL) >>>>> + return -EINVAL; >>>>> + >>>>> + dev = &rte_eth_devices[port_id]; >>>>> + >>>>> + if (*dev->dev_ops->pools_ops_supported == NULL) { >>>>> + tmp = rte_eal_mbuf_default_mempool_ops(); >>>>> + if (!strcmp(tmp, pool)) >>>>> + return 0; >>>>> + else >>>>> + return -ENOTSUP; >>>> I don't understand why we are comparing with >>>> rte_eal_mbuf_default_mempool_ops(). >>>> >>>> It means that the result of the function would be influenced >>>> by the parameter given by the user. >>> But that will be only for ops not supported case and in that case, >>> function _must_ make sure that if inputted param is _default_ops_name >>> then function should return ops supported correct info (whether >>> returning '0' : Best ops or '1': ops does support >>> , this part is arguable.. meaning One can say that default_ops ='handle-name' is best possible handle Or >>> one of handle which platform supports). >>> >>>> I think that a PMD that does not implement ->pools_ops_supported >>>> should always return 1 (mempool is supported). >> I don't agree. >> The result of this API (mempool ops supported or not by a PMD) >> should not depend on what user asks for. >> >>> Return 1 says: PMD support this ops.. 
>>> So if ops is not supported and func returns with 1, then which ops application will use? >>> If that ops is default_ops.. then How application will distinguish when to use default ops or >>> param ops?.. as because in both cases func will return with value 1. >>> >>> The approach in the patch takes care of that condition and func will return -ENOTSUP >>> if (ops not support || inputted param not matching with default ops) otherwise will return >>> 0 or 1. >>> >>> At application side; >>> For error case: In case of -ENOTSUP, its upto application to use _default_ops or exit. >>> For good case: 0 or 1 case, func gaurantee that handle is either best handle for pool or pool supports >>> that handle.. However in your suggestion if ops not supported case returns 1 then application is not >>> sure which ops to use.. default_ops Or input ops given to func. >>> >>> make sense? >> My proposition is: >> >> - what a PMD returns does not depend on used parameter: >> - 0: best support >> - 1: support >> - -ENOTSUP: not supported >> >> - if a PMD does not implement the _ops_supported() API, assume all pools >> are supported (returns 1) > If API returns 1 for PMD does not implement _ops_supported() case, > then do we really need -ENOTSUP? > > If so then in which case API will return -ENOTSUP error, wondering. To summarize the return values of the API (especially for the error cases): - 0: best support - 1: the pool supports this ops, or _that_ PMD did not implement _ops_supported(). - -ENODEV: invalid port id. - -EINVAL: pool is NULL. So to me, we don't need the -ENOTSUP error code, since return value 1, per your input, addresses both cases, i.e. (the pool supports this ops, or _that_ PMD does not implement _ops_supported()). Do the above return values and their explanations make sense to you? If not, could you please be more explicit about the error-code cases and say more about the return value '1' case? As I see it, return value 1 is wearing two hats, addressing two cases. 
Correct me if I have misunderstood your proposition. Thanks. ^ permalink raw reply [flat|nested] 72+ messages in thread
* Re: [dpdk-dev] [PATCH v4 2/2] ethdev: get the supported pools for a port 2017-09-29 10:16 ` santosh 2017-09-29 11:21 ` santosh @ 2017-09-29 11:23 ` Olivier MATZ 2017-09-29 11:31 ` santosh 1 sibling, 1 reply; 72+ messages in thread From: Olivier MATZ @ 2017-09-29 11:23 UTC (permalink / raw) To: santosh; +Cc: dev, thomas, jerin.jacob, hemant.agrawal On Fri, Sep 29, 2017 at 11:16:20AM +0100, santosh wrote: > Hi Olivier, > > > On Friday 29 September 2017 09:32 AM, Olivier MATZ wrote: > > On Mon, Sep 25, 2017 at 10:52:31PM +0100, santosh wrote: > >> Hi Olivier, > >> > >> > >> On Monday 25 September 2017 08:37 AM, Olivier MATZ wrote: > >> > >>>> +{ > >>>> + struct rte_eth_dev *dev; > >>>> + const char *tmp; > >>>> + > >>>> + RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV); > >>>> + > >>>> + if (pool == NULL) > >>>> + return -EINVAL; > >>>> + > >>>> + dev = &rte_eth_devices[port_id]; > >>>> + > >>>> + if (*dev->dev_ops->pools_ops_supported == NULL) { > >>>> + tmp = rte_eal_mbuf_default_mempool_ops(); > >>>> + if (!strcmp(tmp, pool)) > >>>> + return 0; > >>>> + else > >>>> + return -ENOTSUP; > >>> I don't understand why we are comparing with > >>> rte_eal_mbuf_default_mempool_ops(). > >>> > >>> It means that the result of the function would be influenced > >>> by the parameter given by the user. > >> But that will be only for ops not supported case and in that case, > >> function _must_ make sure that if inputted param is _default_ops_name > >> then function should return ops supported correct info (whether > >> returning '0' : Best ops or '1': ops does support > >> , this part is arguable.. meaning One can say that default_ops ='handle-name' is best possible handle Or > >> one of handle which platform supports). > >> > >>> I think that a PMD that does not implement ->pools_ops_supported > >>> should always return 1 (mempool is supported). > > I don't agree. > > The result of this API (mempool ops supported or not by a PMD) > > should not depend on what user asks for. 
> > > >> Return 1 says: PMD support this ops.. > >> > >> So if ops is not supported and func returns with 1, then which ops application will use? > >> If that ops is default_ops.. then How application will distinguish when to use default ops or > >> param ops?.. as because in both cases func will return with value 1. > >> > >> The approach in the patch takes care of that condition and func will return -ENOTSUP > >> if (ops not support || inputted param not matching with default ops) otherwise will return > >> 0 or 1. > >> > >> At application side; > >> For error case: In case of -ENOTSUP, its upto application to use _default_ops or exit. > >> For good case: 0 or 1 case, func gaurantee that handle is either best handle for pool or pool supports > >> that handle.. However in your suggestion if ops not supported case returns 1 then application is not > >> sure which ops to use.. default_ops Or input ops given to func. > >> > >> make sense? > > My proposition is: > > > > - what a PMD returns does not depend on used parameter: > > - 0: best support > > - 1: support > > - -ENOTSUP: not supported > > > > - if a PMD does not implement the _ops_supported() API, assume all pools > > are supported (returns 1) > > If API returns 1 for PMD does not implement _ops_supported() case, > then do we really need -ENOTSUP? When the PMD explicitly does not support a mempool ops. Not implementing the API means, like today: "I don't care what the mempool ops is, so I a priori support all the ones available, without preference". ^ permalink raw reply [flat|nested] 72+ messages in thread
* Re: [dpdk-dev] [PATCH v4 2/2] ethdev: get the supported pools for a port 2017-09-29 11:23 ` Olivier MATZ @ 2017-09-29 11:31 ` santosh 0 siblings, 0 replies; 72+ messages in thread From: santosh @ 2017-09-29 11:31 UTC (permalink / raw) To: Olivier MATZ; +Cc: dev, thomas, jerin.jacob, hemant.agrawal On Friday 29 September 2017 12:23 PM, Olivier MATZ wrote: > On Fri, Sep 29, 2017 at 11:16:20AM +0100, santosh wrote: >> Hi Olivier, >> >> >> On Friday 29 September 2017 09:32 AM, Olivier MATZ wrote: >>> On Mon, Sep 25, 2017 at 10:52:31PM +0100, santosh wrote: >>>> Hi Olivier, >>>> >>>> >>>> On Monday 25 September 2017 08:37 AM, Olivier MATZ wrote: >>>> >>>>>> +{ >>>>>> + struct rte_eth_dev *dev; >>>>>> + const char *tmp; >>>>>> + >>>>>> + RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV); >>>>>> + >>>>>> + if (pool == NULL) >>>>>> + return -EINVAL; >>>>>> + >>>>>> + dev = &rte_eth_devices[port_id]; >>>>>> + >>>>>> + if (*dev->dev_ops->pools_ops_supported == NULL) { >>>>>> + tmp = rte_eal_mbuf_default_mempool_ops(); >>>>>> + if (!strcmp(tmp, pool)) >>>>>> + return 0; >>>>>> + else >>>>>> + return -ENOTSUP; >>>>> I don't understand why we are comparing with >>>>> rte_eal_mbuf_default_mempool_ops(). >>>>> >>>>> It means that the result of the function would be influenced >>>>> by the parameter given by the user. >>>> But that will be only for ops not supported case and in that case, >>>> function _must_ make sure that if inputted param is _default_ops_name >>>> then function should return ops supported correct info (whether >>>> returning '0' : Best ops or '1': ops does support >>>> , this part is arguable.. meaning One can say that default_ops ='handle-name' is best possible handle Or >>>> one of handle which platform supports). >>>> >>>>> I think that a PMD that does not implement ->pools_ops_supported >>>>> should always return 1 (mempool is supported). >>> I don't agree. 
>>> The result of this API (mempool ops supported or not by a PMD) >>> should not depend on what user asks for. >>> >>>> Return 1 says: PMD support this ops.. >>>> >>>> So if ops is not supported and func returns with 1, then which ops application will use? >>>> If that ops is default_ops.. then How application will distinguish when to use default ops or >>>> param ops?.. as because in both cases func will return with value 1. >>>> >>>> The approach in the patch takes care of that condition and func will return -ENOTSUP >>>> if (ops not support || inputted param not matching with default ops) otherwise will return >>>> 0 or 1. >>>> >>>> At application side; >>>> For error case: In case of -ENOTSUP, its upto application to use _default_ops or exit. >>>> For good case: 0 or 1 case, func gaurantee that handle is either best handle for pool or pool supports >>>> that handle.. However in your suggestion if ops not supported case returns 1 then application is not >>>> sure which ops to use.. default_ops Or input ops given to func. >>>> >>>> make sense? >>> My proposition is: >>> >>> - what a PMD returns does not depend on used parameter: >>> - 0: best support >>> - 1: support >>> - -ENOTSUP: not supported >>> >>> - if a PMD does not implement the _ops_supported() API, assume all pools >>> are supported (returns 1) >> If API returns 1 for PMD does not implement _ops_supported() case, >> then do we really need -ENOTSUP? > When the PMD explicitly does not support a mempool ops. > > Not implementing the API means, like today: "I don't care what > the mempool ops is, so I a priori support all the ones available, > without preference". > This is more confusing to me. If the PMD does not implement the mempool ops callback then, per your proposition: if (_ops_supported() == NULL) return 1; So I don't see why we need the -ENOTSUP error code in the API at all. ^ permalink raw reply [flat|nested] 72+ messages in thread
* Re: [dpdk-dev] [PATCH v4 0/2] Dynamically configure mempool handle 2017-09-11 15:18 ` [dpdk-dev] [PATCH v4 " Santosh Shukla 2017-09-11 15:18 ` [dpdk-dev] [PATCH v4 1/2] eal: allow user to override default pool handle Santosh Shukla 2017-09-11 15:18 ` [dpdk-dev] [PATCH v4 2/2] ethdev: get the supported pools for a port Santosh Shukla @ 2017-09-13 10:00 ` santosh 2017-09-19 8:28 ` santosh 2017-10-01 9:14 ` [dpdk-dev] [PATCH v5 " Santosh Shukla 3 siblings, 1 reply; 72+ messages in thread From: santosh @ 2017-09-13 10:00 UTC (permalink / raw) To: olivier.matz, dev; +Cc: thomas, jerin.jacob, hemant.agrawal Hi Olivier, On Monday 11 September 2017 08:48 PM, Santosh Shukla wrote: > v4: > - Includes v3 review coment changes. > Patches rebased on 06791a4bce: ethdev: get the supported pools for a port are you fine with v4? Thanks. ^ permalink raw reply [flat|nested] 72+ messages in thread
* Re: [dpdk-dev] [PATCH v4 0/2] Dynamically configure mempool handle 2017-09-13 10:00 ` [dpdk-dev] [PATCH v4 0/2] Dynamically configure mempool handle santosh @ 2017-09-19 8:28 ` santosh 2017-09-25 7:24 ` Olivier MATZ 0 siblings, 1 reply; 72+ messages in thread From: santosh @ 2017-09-19 8:28 UTC (permalink / raw) To: olivier.matz, dev; +Cc: thomas, jerin.jacob, hemant.agrawal Hi Olivier, On Wednesday 13 September 2017 03:30 PM, santosh wrote: > Hi Olivier, > > > On Monday 11 September 2017 08:48 PM, Santosh Shukla wrote: >> v4: >> - Includes v3 review coment changes. >> Patches rebased on 06791a4bce: ethdev: get the supported pools for a port > are you fine with v4? Thanks. > Ping? Thanks. ^ permalink raw reply [flat|nested] 72+ messages in thread
* Re: [dpdk-dev] [PATCH v4 0/2] Dynamically configure mempool handle 2017-09-19 8:28 ` santosh @ 2017-09-25 7:24 ` Olivier MATZ 2017-09-25 21:58 ` santosh 0 siblings, 1 reply; 72+ messages in thread From: Olivier MATZ @ 2017-09-25 7:24 UTC (permalink / raw) To: santosh; +Cc: dev, thomas, jerin.jacob, hemant.agrawal Hi Santosh, On Tue, Sep 19, 2017 at 01:58:14PM +0530, santosh wrote: > Hi Olivier, > > > On Wednesday 13 September 2017 03:30 PM, santosh wrote: > > Hi Olivier, > > > > > > On Monday 11 September 2017 08:48 PM, Santosh Shukla wrote: > >> v4: > >> - Includes v3 review coment changes. > >> Patches rebased on 06791a4bce: ethdev: get the supported pools for a port > > are you fine with v4? Thanks. > > > Ping? Thanks. Really sorry for the late review. I'm sending some minor comments, hoping the patches can be integrated in RC1. Olivier ^ permalink raw reply [flat|nested] 72+ messages in thread
* Re: [dpdk-dev] [PATCH v4 0/2] Dynamically configure mempool handle 2017-09-25 7:24 ` Olivier MATZ @ 2017-09-25 21:58 ` santosh 0 siblings, 0 replies; 72+ messages in thread From: santosh @ 2017-09-25 21:58 UTC (permalink / raw) To: Olivier MATZ; +Cc: dev, thomas, jerin.jacob, hemant.agrawal On Monday 25 September 2017 08:24 AM, Olivier MATZ wrote: > Hi Santosh, > > On Tue, Sep 19, 2017 at 01:58:14PM +0530, santosh wrote: >> Hi Olivier, >> >> >> On Wednesday 13 September 2017 03:30 PM, santosh wrote: >>> Hi Olivier, >>> >>> >>> On Monday 11 September 2017 08:48 PM, Santosh Shukla wrote: >>>> v4: >>>> - Includes v3 review coment changes. >>>> Patches rebased on 06791a4bce: ethdev: get the supported pools for a port >>> are you fine with v4? Thanks. >>> >> Ping? Thanks. > Really sorry for the late review. > I'm sending some minor comments, hoping the patches can be > integrated in RC1. The mempool nat_align series and this series are blocking the octeontx PMD, so they must get into -rc1. I would appreciate it if you could spend a little time reviewing: the design part of these series has been addressed, and only style and renaming are under discussion. The few remaining style comments have been arriving over a span of weeks, which is blocking the patches' progress, so a quick review would be appreciated. Thanks. > Olivier ^ permalink raw reply [flat|nested] 72+ messages in thread
* [dpdk-dev] [PATCH v5 0/2] Dynamically configure mempool handle 2017-09-11 15:18 ` [dpdk-dev] [PATCH v4 " Santosh Shukla ` (2 preceding siblings ...) 2017-09-13 10:00 ` [dpdk-dev] [PATCH v4 0/2] Dynamically configure mempool handle santosh @ 2017-10-01 9:14 ` Santosh Shukla 2017-10-01 9:14 ` [dpdk-dev] [PATCH v5 1/2] eal: allow user to override default pool handle Santosh Shukla ` (2 more replies) 3 siblings, 3 replies; 72+ messages in thread From: Santosh Shukla @ 2017-10-01 9:14 UTC (permalink / raw) To: olivier.matz, dev; +Cc: thomas, jerin.jacob, hemant.agrawal, Santosh Shukla v5: - Includes v4 minor review comments. Patches rebased on upstream tip / commit id:5dce9fcdb2 v4: - Includes v3 review comment changes. Patches rebased on 06791a4bce: ethdev: get the supported pools for a port v3: - Rebased on top of v17.11-rc0 - Updated version.map entry to v17.11. v2: DPDK has support for hw and sw mempools. Those mempools can work optimally for specific PMDs. Example: sw ring based PMD for Intel NICs. HW mempool manager dpaa2 for the dpaa2 PMD. HW mempool manager fpa for the octeontx PMD. There could be a use-case where NICs from different vendors are used on the same platform, and the user would like to configure mempools in such a way that each of those NICs uses its preferred mempool (for performance reasons). The current mempool infrastructure doesn't support such a use-case. This patchset tries to address that problem in 2 steps: 0) Allowing the user to dynamically configure the mempool handle by passing the pool handle as an eal arg to `--mbuf-pool-ops-name=<pool-handle>`. 1) Allowing PMDs to advertise their pool capability to an application. From an application point of view: - The application must ask the PMD about supported pools. - The PMD will advertise its pool capability in priority order: '0' - Best match mempool ops for _that_ port (highest priority) '1' - PMD supports this mempool ops. - Based on that input from the PMD, the application chooses a pool handle and creates the pool. 
Change History: v4 --> v5: - Renamed mbuf_pool_name to mbuf_pool_ops_name in [01/02]. (Suggested by Olivier) - Incorporated review comment suggested by Olivier for [02/02]. Refer to the specific patch for v5 change history. v3 --> v4: - Removed RTE_MBUF_XXX_POOL_NAMESIZE macro and replaced mbuf_pool_name with const. (Suggested by Olivier) - Added eal arg info in doc guide. - Removed _preferred_pool() api and replaced with rte_eth_dev_pools_ops_supported() api (Suggested by Olivier) v2 --> v3: - Changed version.map from DPDK_17.08 to DPDK_17.11. v1 --> v2: - Renamed rte_eal_get_mempool_name to rte_eal_mbuf_default_mempool_ops(). (suggested by Olivier) - Renamed _get_preferred_pool to _get_preferred_pool_ops(). - Updated API description and changed return val from -EINVAL to -ENOTSUP. (Suggested by Olivier) * Specific details on v1-->v2 change summary described in each patch. Checkpatch status: - None. Work History: * Refer [1] for v1. Thanks. [1] http://dpdk.org/ml/archives/dev/2017-June/067022.html Santosh Shukla (2): eal: allow user to override default pool handle ethdev: get the supported pool for a port doc/guides/freebsd_gsg/build_sample_apps.rst | 3 +++ doc/guides/linux_gsg/build_sample_apps.rst | 3 +++ doc/guides/testpmd_app_ug/run_app.rst | 4 ++++ lib/librte_eal/bsdapp/eal/eal.c | 10 ++++++++++ lib/librte_eal/bsdapp/eal/rte_eal_version.map | 7 +++++++ lib/librte_eal/common/eal_common_options.c | 3 +++ lib/librte_eal/common/eal_internal_cfg.h | 2 +- lib/librte_eal/common/eal_options.h | 2 ++ lib/librte_eal/common/include/rte_eal.h | 9 +++++++++ lib/librte_eal/linuxapp/eal/eal.c | 11 +++++++++++ lib/librte_eal/linuxapp/eal/rte_eal_version.map | 7 +++++++ lib/librte_ether/rte_ethdev.c | 18 ++++++++++++++++++ lib/librte_ether/rte_ethdev.h | 24 ++++++++++++++++++++++++ lib/librte_ether/rte_ethdev_version.map | 1 + lib/librte_mbuf/rte_mbuf.c | 5 +++-- 15 files changed, 106 insertions(+), 3 deletions(-) -- 2.14.1 ^ permalink raw reply [flat|nested] 72+ messages in thread
* [dpdk-dev] [PATCH v5 1/2] eal: allow user to override default pool handle 2017-10-01 9:14 ` [dpdk-dev] [PATCH v5 " Santosh Shukla @ 2017-10-01 9:14 ` Santosh Shukla 2017-10-02 14:29 ` Olivier MATZ ` (2 more replies) 2017-10-01 9:14 ` [dpdk-dev] [PATCH v5 2/2] ethdev: get the supported pool for a port Santosh Shukla 2017-10-02 8:37 ` [dpdk-dev] [PATCH v5 0/2] Dynamically configure mempool handle santosh 2 siblings, 3 replies; 72+ messages in thread From: Santosh Shukla @ 2017-10-01 9:14 UTC (permalink / raw) To: olivier.matz, dev; +Cc: thomas, jerin.jacob, hemant.agrawal, Santosh Shukla DPDK has support for both sw and hw mempools, but currently the user is limited to the ring_mp_mc pool. To use another pool handle, the user needs to update the config option RTE_MEMPOOL_OPS_DEFAULT, then rebuild and run with the desired pool handle. This patch introduces an eal option to override the default pool handle. Now the user can override RTE_MEMPOOL_OPS_DEFAULT by passing a pool handle to the eal option `--mbuf-pool-ops-name=""`. Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com> Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com> --- v4 --> v5: - Renamed mbuf_pool_name to mbuf_pool_ops_name, same change reflected across the patch in respective areas. (Suggested by Olivier). v3 --> v4: - Removed RTE_MBUF_XX_POOL_NAMESIZE macro. (Suggested by Olivier) - Replaced char * with const char * for the 'mbuf_pool_name' var. (Olivier) - Added eal arg info in the release guide. 
doc/guides/freebsd_gsg/build_sample_apps.rst | 3 +++ doc/guides/linux_gsg/build_sample_apps.rst | 3 +++ doc/guides/testpmd_app_ug/run_app.rst | 4 ++++ lib/librte_eal/bsdapp/eal/eal.c | 10 ++++++++++ lib/librte_eal/bsdapp/eal/rte_eal_version.map | 7 +++++++ lib/librte_eal/common/eal_common_options.c | 3 +++ lib/librte_eal/common/eal_internal_cfg.h | 2 +- lib/librte_eal/common/eal_options.h | 2 ++ lib/librte_eal/common/include/rte_eal.h | 9 +++++++++ lib/librte_eal/linuxapp/eal/eal.c | 11 +++++++++++ lib/librte_eal/linuxapp/eal/rte_eal_version.map | 7 +++++++ lib/librte_mbuf/rte_mbuf.c | 5 +++-- 12 files changed, 63 insertions(+), 3 deletions(-) diff --git a/doc/guides/freebsd_gsg/build_sample_apps.rst b/doc/guides/freebsd_gsg/build_sample_apps.rst index 9faa0e6e5..d84f15b82 100644 --- a/doc/guides/freebsd_gsg/build_sample_apps.rst +++ b/doc/guides/freebsd_gsg/build_sample_apps.rst @@ -163,6 +163,9 @@ Other options, specific to Linux and are not supported under FreeBSD are as foll * ``--huge-dir``: The directory where hugetlbfs is mounted. +* ``mbuf-pool-ops-name``: + Pool ops name for mbuf to use. + * ``--file-prefix``: The prefix text used for hugepage filenames. diff --git a/doc/guides/linux_gsg/build_sample_apps.rst b/doc/guides/linux_gsg/build_sample_apps.rst index 0cc5fd173..ec0a9ec50 100644 --- a/doc/guides/linux_gsg/build_sample_apps.rst +++ b/doc/guides/linux_gsg/build_sample_apps.rst @@ -157,6 +157,9 @@ The EAL options are as follows: * ``--huge-dir``: The directory where hugetlbfs is mounted. +* ``mbuf-pool-ops-name``: + Pool ops name for mbuf to use. + * ``--file-prefix``: The prefix text used for hugepage filenames. diff --git a/doc/guides/testpmd_app_ug/run_app.rst b/doc/guides/testpmd_app_ug/run_app.rst index e8303f3ba..10fec60f9 100644 --- a/doc/guides/testpmd_app_ug/run_app.rst +++ b/doc/guides/testpmd_app_ug/run_app.rst @@ -110,6 +110,10 @@ See the DPDK Getting Started Guides for more information on these options. 
Specify the directory where the hugetlbfs is mounted. +* ``mbuf-pool-ops-name``: + + Pool ops name for mbuf to use. + * ``--proc-type`` Set the type of the current process. diff --git a/lib/librte_eal/bsdapp/eal/eal.c b/lib/librte_eal/bsdapp/eal/eal.c index 5fa598842..2991fcbcf 100644 --- a/lib/librte_eal/bsdapp/eal/eal.c +++ b/lib/librte_eal/bsdapp/eal/eal.c @@ -112,6 +112,13 @@ struct internal_config internal_config; /* used by rte_rdtsc() */ int rte_cycles_vmware_tsc_map; +/* Return mbuf pool ops name */ +const char * +rte_eal_mbuf_default_mempool_ops(void) +{ + return internal_config.mbuf_pool_ops_name; +} + /* Return a pointer to the configuration structure */ struct rte_config * rte_eal_get_configuration(void) @@ -385,6 +392,9 @@ eal_parse_args(int argc, char **argv) continue; switch (opt) { + case OPT_MBUF_POOL_OPS_NAME_NUM: + internal_config.mbuf_pool_ops_name = optarg; + break; case 'h': eal_usage(prgname); exit(EXIT_SUCCESS); diff --git a/lib/librte_eal/bsdapp/eal/rte_eal_version.map b/lib/librte_eal/bsdapp/eal/rte_eal_version.map index 47a09ea7f..c30f25fe3 100644 --- a/lib/librte_eal/bsdapp/eal/rte_eal_version.map +++ b/lib/librte_eal/bsdapp/eal/rte_eal_version.map @@ -238,3 +238,10 @@ EXPERIMENTAL { rte_service_start_with_defaults; } DPDK_17.08; + +DPDK_17.11 { + global: + + rte_eal_mbuf_default_mempool_ops; + +} DPDK_17.08; diff --git a/lib/librte_eal/common/eal_common_options.c b/lib/librte_eal/common/eal_common_options.c index 1da185e59..1ad3b6c59 100644 --- a/lib/librte_eal/common/eal_common_options.c +++ b/lib/librte_eal/common/eal_common_options.c @@ -98,6 +98,7 @@ eal_long_options[] = { {OPT_VFIO_INTR, 1, NULL, OPT_VFIO_INTR_NUM }, {OPT_VMWARE_TSC_MAP, 0, NULL, OPT_VMWARE_TSC_MAP_NUM }, {OPT_XEN_DOM0, 0, NULL, OPT_XEN_DOM0_NUM }, + {OPT_MBUF_POOL_OPS_NAME, 1, NULL, OPT_MBUF_POOL_OPS_NAME_NUM}, {0, 0, NULL, 0 } }; @@ -220,6 +221,7 @@ eal_reset_internal_config(struct internal_config *internal_cfg) #endif internal_cfg->vmware_tsc_map = 0; 
internal_cfg->create_uio_dev = 0; + internal_cfg->mbuf_pool_ops_name = RTE_MBUF_DEFAULT_MEMPOOL_OPS; } static int @@ -1309,5 +1311,6 @@ eal_common_usage(void) " --"OPT_NO_PCI" Disable PCI\n" " --"OPT_NO_HPET" Disable HPET\n" " --"OPT_NO_SHCONF" No shared config (mmap'd files)\n" + " --"OPT_MBUF_POOL_OPS_NAME" Pool ops name for mbuf to use\n" "\n", RTE_MAX_LCORE); } diff --git a/lib/librte_eal/common/eal_internal_cfg.h b/lib/librte_eal/common/eal_internal_cfg.h index 7b7e8c887..658783db3 100644 --- a/lib/librte_eal/common/eal_internal_cfg.h +++ b/lib/librte_eal/common/eal_internal_cfg.h @@ -82,7 +82,7 @@ struct internal_config { volatile enum rte_intr_mode vfio_intr_mode; const char *hugefile_prefix; /**< the base filename of hugetlbfs files */ const char *hugepage_dir; /**< specific hugetlbfs directory to use */ - + const char *mbuf_pool_ops_name; /**< mbuf pool ops name */ unsigned num_hugepage_sizes; /**< how many sizes on this system */ struct hugepage_info hugepage_info[MAX_HUGEPAGE_SIZES]; }; diff --git a/lib/librte_eal/common/eal_options.h b/lib/librte_eal/common/eal_options.h index 439a26104..8cdc25208 100644 --- a/lib/librte_eal/common/eal_options.h +++ b/lib/librte_eal/common/eal_options.h @@ -83,6 +83,8 @@ enum { OPT_VMWARE_TSC_MAP_NUM, #define OPT_XEN_DOM0 "xen-dom0" OPT_XEN_DOM0_NUM, +#define OPT_MBUF_POOL_OPS_NAME "mbuf-pool-ops-name" + OPT_MBUF_POOL_OPS_NAME_NUM, OPT_LONG_MAX_NUM }; diff --git a/lib/librte_eal/common/include/rte_eal.h b/lib/librte_eal/common/include/rte_eal.h index 0e7363d77..022f92302 100644 --- a/lib/librte_eal/common/include/rte_eal.h +++ b/lib/librte_eal/common/include/rte_eal.h @@ -287,6 +287,15 @@ static inline int rte_gettid(void) return RTE_PER_LCORE(_thread_id); } +/** + * Get default pool ops name for mbuf + * + * @return + * returns default pool ops name. 
+ */ +const char * +rte_eal_mbuf_default_mempool_ops(void); + #define RTE_INIT(func) \ static void __attribute__((constructor, used)) func(void) diff --git a/lib/librte_eal/linuxapp/eal/eal.c b/lib/librte_eal/linuxapp/eal/eal.c index 48f12f44c..2401f7bf4 100644 --- a/lib/librte_eal/linuxapp/eal/eal.c +++ b/lib/librte_eal/linuxapp/eal/eal.c @@ -121,6 +121,13 @@ struct internal_config internal_config; /* used by rte_rdtsc() */ int rte_cycles_vmware_tsc_map; +/* Return mbuf pool ops name */ +const char * +rte_eal_mbuf_default_mempool_ops(void) +{ + return internal_config.mbuf_pool_ops_name; +} + /* Return a pointer to the configuration structure */ struct rte_config * rte_eal_get_configuration(void) @@ -610,6 +617,10 @@ eal_parse_args(int argc, char **argv) internal_config.create_uio_dev = 1; break; + case OPT_MBUF_POOL_OPS_NAME_NUM: + internal_config.mbuf_pool_ops_name = optarg; + break; + default: if (opt < OPT_LONG_MIN_NUM && isprint(opt)) { RTE_LOG(ERR, EAL, "Option %c is not supported " diff --git a/lib/librte_eal/linuxapp/eal/rte_eal_version.map b/lib/librte_eal/linuxapp/eal/rte_eal_version.map index 8c08b8d1e..cbbb6f332 100644 --- a/lib/librte_eal/linuxapp/eal/rte_eal_version.map +++ b/lib/librte_eal/linuxapp/eal/rte_eal_version.map @@ -243,3 +243,10 @@ EXPERIMENTAL { rte_service_start_with_defaults; } DPDK_17.08; + +DPDK_17.11 { + global: + + rte_eal_mbuf_default_mempool_ops; + +} DPDK_17.08; diff --git a/lib/librte_mbuf/rte_mbuf.c b/lib/librte_mbuf/rte_mbuf.c index 26a62b8e1..5a81c8a4f 100644 --- a/lib/librte_mbuf/rte_mbuf.c +++ b/lib/librte_mbuf/rte_mbuf.c @@ -157,6 +157,7 @@ rte_pktmbuf_pool_create(const char *name, unsigned n, { struct rte_mempool *mp; struct rte_pktmbuf_pool_private mbp_priv; + const char *mp_ops_name; unsigned elt_size; int ret; @@ -176,8 +177,8 @@ rte_pktmbuf_pool_create(const char *name, unsigned n, if (mp == NULL) return NULL; - ret = rte_mempool_set_ops_byname(mp, - RTE_MBUF_DEFAULT_MEMPOOL_OPS, NULL); + mp_ops_name = 
rte_eal_mbuf_default_mempool_ops(); + ret = rte_mempool_set_ops_byname(mp, mp_ops_name, NULL); if (ret != 0) { RTE_LOG(ERR, MBUF, "error setting mempool handler\n"); rte_mempool_free(mp); -- 2.14.1 ^ permalink raw reply [flat|nested] 72+ messages in thread
* Re: [dpdk-dev] [PATCH v5 1/2] eal: allow user to override default pool handle 2017-10-01 9:14 ` [dpdk-dev] [PATCH v5 1/2] eal: allow user to override default pool handle Santosh Shukla @ 2017-10-02 14:29 ` Olivier MATZ 2017-10-06 0:29 ` Thomas Monjalon 2017-10-06 7:45 ` [dpdk-dev] [PATCH v6 0/2] Dynamically configure mempool handle Santosh Shukla 2 siblings, 0 replies; 72+ messages in thread From: Olivier MATZ @ 2017-10-02 14:29 UTC (permalink / raw) To: Santosh Shukla; +Cc: dev, thomas, jerin.jacob, hemant.agrawal On Sun, Oct 01, 2017 at 02:44:39PM +0530, Santosh Shukla wrote: > DPDK has support for both sw and hw mempool and > currently user is limited to use ring_mp_mc pool. > In case user want to use other pool handle, > need to update config RTE_MEMPOOL_OPS_DEFAULT, then > build and run with desired pool handle. > > Introducing eal option to override default pool handle. > > Now user can override the RTE_MEMPOOL_OPS_DEFAULT by passing > pool handle to eal `--mbuf-pool-ops-name=""`. > > Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com> > Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com> Acked-by: Olivier Matz <olivier.matz@6wind.com> ^ permalink raw reply [flat|nested] 72+ messages in thread
* Re: [dpdk-dev] [PATCH v5 1/2] eal: allow user to override default pool handle 2017-10-01 9:14 ` [dpdk-dev] [PATCH v5 1/2] eal: allow user to override default pool handle Santosh Shukla 2017-10-02 14:29 ` Olivier MATZ @ 2017-10-06 0:29 ` Thomas Monjalon 2017-10-06 3:31 ` santosh 2017-10-06 7:45 ` [dpdk-dev] [PATCH v6 0/2] Dynamically configure mempool handle Santosh Shukla 2 siblings, 1 reply; 72+ messages in thread From: Thomas Monjalon @ 2017-10-06 0:29 UTC (permalink / raw) To: Santosh Shukla; +Cc: dev, olivier.matz, jerin.jacob, hemant.agrawal 01/10/2017 11:14, Santosh Shukla: > --- a/lib/librte_eal/common/eal_common_options.c > +++ b/lib/librte_eal/common/eal_common_options.c > @@ -98,6 +98,7 @@ eal_long_options[] = { > {OPT_VFIO_INTR, 1, NULL, OPT_VFIO_INTR_NUM }, > {OPT_VMWARE_TSC_MAP, 0, NULL, OPT_VMWARE_TSC_MAP_NUM }, > {OPT_XEN_DOM0, 0, NULL, OPT_XEN_DOM0_NUM }, > + {OPT_MBUF_POOL_OPS_NAME, 1, NULL, OPT_MBUF_POOL_OPS_NAME_NUM}, > {0, 0, NULL, 0 } > }; I think the options were sorted alphabetically. [...] > --- a/lib/librte_eal/common/eal_internal_cfg.h > +++ b/lib/librte_eal/common/eal_internal_cfg.h > @@ -82,7 +82,7 @@ struct internal_config { > volatile enum rte_intr_mode vfio_intr_mode; > const char *hugefile_prefix; /**< the base filename of hugetlbfs files */ > const char *hugepage_dir; /**< specific hugetlbfs directory to use */ > - > + const char *mbuf_pool_ops_name; /**< mbuf pool ops name */ Why this config is not stored in mbuf.c? ^ permalink raw reply [flat|nested] 72+ messages in thread
* Re: [dpdk-dev] [PATCH v5 1/2] eal: allow user to override default pool handle 2017-10-06 0:29 ` Thomas Monjalon @ 2017-10-06 3:31 ` santosh 2017-10-06 8:39 ` Thomas Monjalon 0 siblings, 1 reply; 72+ messages in thread From: santosh @ 2017-10-06 3:31 UTC (permalink / raw) To: Thomas Monjalon; +Cc: dev, olivier.matz, jerin.jacob, hemant.agrawal On Friday 06 October 2017 05:59 AM, Thomas Monjalon wrote: > 01/10/2017 11:14, Santosh Shukla: >> --- a/lib/librte_eal/common/eal_common_options.c >> +++ b/lib/librte_eal/common/eal_common_options.c >> @@ -98,6 +98,7 @@ eal_long_options[] = { >> {OPT_VFIO_INTR, 1, NULL, OPT_VFIO_INTR_NUM }, >> {OPT_VMWARE_TSC_MAP, 0, NULL, OPT_VMWARE_TSC_MAP_NUM }, >> {OPT_XEN_DOM0, 0, NULL, OPT_XEN_DOM0_NUM }, >> + {OPT_MBUF_POOL_OPS_NAME, 1, NULL, OPT_MBUF_POOL_OPS_NAME_NUM}, >> {0, 0, NULL, 0 } >> }; > I think the options were sorted alphabetically. This is most logical comment so far I got from you. Yes' will do. posting v6. Thanks. > [...] >> --- a/lib/librte_eal/common/eal_internal_cfg.h >> +++ b/lib/librte_eal/common/eal_internal_cfg.h >> @@ -82,7 +82,7 @@ struct internal_config { >> volatile enum rte_intr_mode vfio_intr_mode; >> const char *hugefile_prefix; /**< the base filename of hugetlbfs files */ >> const char *hugepage_dir; /**< specific hugetlbfs directory to use */ >> - >> + const char *mbuf_pool_ops_name; /**< mbuf pool ops name */ > Why this config is not stored in mbuf.c? > Why the config not stored for vfio? hugepage? etc..in that case applicable too. This is correct place to keep for now, unless as discussed in dpdksummit about eal parsing abstraction approach.. plugin style approach so that each module has its own parser. till then It should sit here like other, Its blocker for external-mempool in general case: Where users are forced to hard-code their handle in _OPS_DEFAULT_=. ^ permalink raw reply [flat|nested] 72+ messages in thread
* Re: [dpdk-dev] [PATCH v5 1/2] eal: allow user to override default pool handle 2017-10-06 3:31 ` santosh @ 2017-10-06 8:39 ` Thomas Monjalon 0 siblings, 0 replies; 72+ messages in thread From: Thomas Monjalon @ 2017-10-06 8:39 UTC (permalink / raw) To: santosh; +Cc: dev, olivier.matz, jerin.jacob, hemant.agrawal 06/10/2017 05:31, santosh: > > On Friday 06 October 2017 05:59 AM, Thomas Monjalon wrote: > > 01/10/2017 11:14, Santosh Shukla: > >> --- a/lib/librte_eal/common/eal_common_options.c > >> +++ b/lib/librte_eal/common/eal_common_options.c > >> @@ -98,6 +98,7 @@ eal_long_options[] = { > >> {OPT_VFIO_INTR, 1, NULL, OPT_VFIO_INTR_NUM }, > >> {OPT_VMWARE_TSC_MAP, 0, NULL, OPT_VMWARE_TSC_MAP_NUM }, > >> {OPT_XEN_DOM0, 0, NULL, OPT_XEN_DOM0_NUM }, > >> + {OPT_MBUF_POOL_OPS_NAME, 1, NULL, OPT_MBUF_POOL_OPS_NAME_NUM}, > >> {0, 0, NULL, 0 } > >> }; > > I think the options were sorted alphabetically. > > This is most logical comment so far I got from you. I will imagine you did not really write this. > Yes' will do. posting v6. Thanks. > > > [...] > >> --- a/lib/librte_eal/common/eal_internal_cfg.h > >> +++ b/lib/librte_eal/common/eal_internal_cfg.h > >> @@ -82,7 +82,7 @@ struct internal_config { > >> volatile enum rte_intr_mode vfio_intr_mode; > >> const char *hugefile_prefix; /**< the base filename of hugetlbfs files */ > >> const char *hugepage_dir; /**< specific hugetlbfs directory to use */ > >> - > >> + const char *mbuf_pool_ops_name; /**< mbuf pool ops name */ > > Why this config is not stored in mbuf.c? > > > Why the config not stored for vfio? hugepage? etc..in that case applicable too. All other configs are related to EAL features. > This is correct place to keep for now, unless as discussed in dpdksummit about eal > parsing abstraction approach.. plugin style approach so that each module has its own > parser. 
till then It should sit here like other, Its blocker for external-mempool > in general case: Where users are forced to hard-code their handle in _OPS_DEFAULT_=. You probably missed that mbuf is not part of EAL. I'm not talking about parsers, just where to save a variable. You store mbuf info in EAL, and later, call rte_eal_mbuf_default_mempool_ops() from mbuf lib. It would be saner to directly save it in mbuf lib. But Olivier agreed to save mbuf config in EAL. I won't discuss it anymore if you don't want to change. ^ permalink raw reply [flat|nested] 72+ messages in thread
* [dpdk-dev] [PATCH v6 0/2] Dynamically configure mempool handle 2017-10-01 9:14 ` [dpdk-dev] [PATCH v5 1/2] eal: allow user to override default pool handle Santosh Shukla 2017-10-02 14:29 ` Olivier MATZ 2017-10-06 0:29 ` Thomas Monjalon @ 2017-10-06 7:45 ` Santosh Shukla 2017-10-06 7:45 ` [dpdk-dev] [PATCH v6 1/2] eal: allow user to override default pool handle Santosh Shukla ` (2 more replies) 2 siblings, 3 replies; 72+ messages in thread From: Santosh Shukla @ 2017-10-06 7:45 UTC (permalink / raw) To: dev, thomas; +Cc: olivier.matz, jerin.jacob, hemant.agrawal, Santosh Shukla v6: - Includes v5 minor comments from Thomas. Patches rebased on upstream tip / commit id:16bbc98a3e6305766c79fb42e55ca05b323d6e4d v5: - Includes v4 minor review comments. Patches rebased on upstream tip / commit id:5dce9fcdb2 v4: - Includes v3 review comment changes. Patches rebased on 06791a4bce: ethdev: get the supported pools for a port v3: - Rebased on top of v17.11-rc0 - Updated version.map entry to v17.11. v2: DPDK has support for hw and sw mempools. Those mempools can work optimally for specific PMDs. Example: sw ring based PMD for Intel NICs. HW mempool manager dpaa2 for the dpaa2 PMD. HW mempool manager fpa for the octeontx PMD. There could be a use-case where NICs from different vendors are used on the same platform, and the user would like to configure mempools in such a way that each of those NICs uses its preferred mempool (for performance reasons). The current mempool infrastructure doesn't support such a use-case. This patchset tries to address that problem in 2 steps: 0) Allowing the user to dynamically configure the mempool handle by passing the pool handle as an eal arg to `--mbuf-pool-ops-name=<pool-handle>`. 1) Allowing PMDs to advertise their pool capability to an application. From an application point of view: - The application must ask the PMD about supported pools. - The PMD will advertise its pool capability in priority order: '0' - Best match mempool ops for _that_ port (highest priority) '1' - PMD supports this mempool ops. 
- Based on that input from the PMD, the application chooses a pool handle and creates the pool. Change History: v5 --> v6: - Added Olivier Acked-by in series. - Arranged alphabetical order for 'OPT_MBUF_POOL_OPS_NAME' (Suggested by Thomas) - Placed 'rte_eth_dev_pool_ops_supported' in alphabetical order in version.map file. v4 --> v5: - Renamed mbuf_pool_name to mbuf_pool_ops_name in [01/02]. (Suggested by Olivier) - Incorporated review comment suggested by Olivier for [02/02]. Refer to the specific patch for v5 change history. v3 --> v4: - Removed RTE_MBUF_XXX_POOL_NAMESIZE macro and replaced mbuf_pool_name with const. (Suggested by Olivier) - Added eal arg info in doc guide. - Removed _preferred_pool() api and replaced with rte_eth_dev_pools_ops_supported() api (Suggested by Olivier) v2 --> v3: - Changed version.map from DPDK_17.08 to DPDK_17.11. v1 --> v2: - Renamed rte_eal_get_mempool_name to rte_eal_mbuf_default_mempool_ops(). (suggested by Olivier) - Renamed _get_preferred_pool to _get_preferred_pool_ops(). - Updated API description and changed return val from -EINVAL to -ENOTSUP. (Suggested by Olivier) * Specific details on v1-->v2 change summary described in each patch. Checkpatch status: - None. Work History: * Refer [1] for v1. Thanks. 
[1] http://dpdk.org/ml/archives/dev/2017-June/067022.html Santosh Shukla (2): eal: allow user to override default pool handle ethdev: get the supported pool for a port doc/guides/freebsd_gsg/build_sample_apps.rst | 3 +++ doc/guides/linux_gsg/build_sample_apps.rst | 3 +++ doc/guides/testpmd_app_ug/run_app.rst | 4 ++++ lib/librte_eal/bsdapp/eal/eal.c | 10 ++++++++++ lib/librte_eal/bsdapp/eal/rte_eal_version.map | 7 +++++++ lib/librte_eal/common/eal_common_options.c | 3 +++ lib/librte_eal/common/eal_internal_cfg.h | 2 +- lib/librte_eal/common/eal_options.h | 2 ++ lib/librte_eal/common/include/rte_eal.h | 9 +++++++++ lib/librte_eal/linuxapp/eal/eal.c | 11 +++++++++++ lib/librte_eal/linuxapp/eal/rte_eal_version.map | 7 +++++++ lib/librte_ether/rte_ethdev.c | 18 ++++++++++++++++++ lib/librte_ether/rte_ethdev.h | 24 ++++++++++++++++++++++++ lib/librte_ether/rte_ethdev_version.map | 1 + lib/librte_mbuf/rte_mbuf.c | 5 +++-- 15 files changed, 106 insertions(+), 3 deletions(-) -- 2.14.1 ^ permalink raw reply [flat|nested] 72+ messages in thread
* [dpdk-dev] [PATCH v6 1/2] eal: allow user to override default pool handle 2017-10-06 7:45 ` [dpdk-dev] [PATCH v6 0/2] Dynamically configure mempool handle Santosh Shukla @ 2017-10-06 7:45 ` Santosh Shukla 2017-10-06 7:45 ` [dpdk-dev] [PATCH v6 2/2] ethdev: get the supported pool for a port Santosh Shukla 2017-10-06 18:58 ` [dpdk-dev] [PATCH v6 0/2] Dynamically configure mempool handle Thomas Monjalon 2 siblings, 0 replies; 72+ messages in thread From: Santosh Shukla @ 2017-10-06 7:45 UTC (permalink / raw) To: dev, thomas; +Cc: olivier.matz, jerin.jacob, hemant.agrawal, Santosh Shukla DPDK has support for both sw and hw mempools, but currently the user is limited to the ring_mp_mc pool. To use another pool handle, the user needs to update the config option RTE_MEMPOOL_OPS_DEFAULT, then rebuild and run with the desired pool handle. This patch introduces an eal option to override the default pool handle. Now the user can override RTE_MEMPOOL_OPS_DEFAULT by passing a pool handle to the eal option `--mbuf-pool-ops-name=""`. Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com> Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com> Acked-by: Olivier Matz <olivier.matz@6wind.com> --- v5 --> v6: - Arranged alphabetical order for OPT_MBUF_POOL_OPS_NAME_NUM. (Suggested by Thomas) v4 --> v5: - Renamed mbuf_pool_name to mbuf_pool_ops_name, same change reflected across the patch in respective areas. (Suggested by Olivier). v3 --> v4: - Removed RTE_MBUF_XX_POOL_NAMESIZE macro. (Suggested by Olivier) - Replaced char * with const char * for the 'mbuf_pool_name' var. (Olivier) - Added eal arg info in the release guide. 
doc/guides/freebsd_gsg/build_sample_apps.rst | 3 +++ doc/guides/linux_gsg/build_sample_apps.rst | 3 +++ doc/guides/testpmd_app_ug/run_app.rst | 4 ++++ lib/librte_eal/bsdapp/eal/eal.c | 10 ++++++++++ lib/librte_eal/bsdapp/eal/rte_eal_version.map | 7 +++++++ lib/librte_eal/common/eal_common_options.c | 3 +++ lib/librte_eal/common/eal_internal_cfg.h | 2 +- lib/librte_eal/common/eal_options.h | 2 ++ lib/librte_eal/common/include/rte_eal.h | 9 +++++++++ lib/librte_eal/linuxapp/eal/eal.c | 11 +++++++++++ lib/librte_eal/linuxapp/eal/rte_eal_version.map | 7 +++++++ lib/librte_mbuf/rte_mbuf.c | 5 +++-- 12 files changed, 63 insertions(+), 3 deletions(-) diff --git a/doc/guides/freebsd_gsg/build_sample_apps.rst b/doc/guides/freebsd_gsg/build_sample_apps.rst index 9faa0e6e5..d84f15b82 100644 --- a/doc/guides/freebsd_gsg/build_sample_apps.rst +++ b/doc/guides/freebsd_gsg/build_sample_apps.rst @@ -163,6 +163,9 @@ Other options, specific to Linux and are not supported under FreeBSD are as foll * ``--huge-dir``: The directory where hugetlbfs is mounted. +* ``mbuf-pool-ops-name``: + Pool ops name for mbuf to use. + * ``--file-prefix``: The prefix text used for hugepage filenames. diff --git a/doc/guides/linux_gsg/build_sample_apps.rst b/doc/guides/linux_gsg/build_sample_apps.rst index 0cc5fd173..ec0a9ec50 100644 --- a/doc/guides/linux_gsg/build_sample_apps.rst +++ b/doc/guides/linux_gsg/build_sample_apps.rst @@ -157,6 +157,9 @@ The EAL options are as follows: * ``--huge-dir``: The directory where hugetlbfs is mounted. +* ``mbuf-pool-ops-name``: + Pool ops name for mbuf to use. + * ``--file-prefix``: The prefix text used for hugepage filenames. diff --git a/doc/guides/testpmd_app_ug/run_app.rst b/doc/guides/testpmd_app_ug/run_app.rst index e8303f3ba..10fec60f9 100644 --- a/doc/guides/testpmd_app_ug/run_app.rst +++ b/doc/guides/testpmd_app_ug/run_app.rst @@ -110,6 +110,10 @@ See the DPDK Getting Started Guides for more information on these options. 
Specify the directory where the hugetlbfs is mounted. +* ``mbuf-pool-ops-name``: + + Pool ops name for mbuf to use. + * ``--proc-type`` Set the type of the current process. diff --git a/lib/librte_eal/bsdapp/eal/eal.c b/lib/librte_eal/bsdapp/eal/eal.c index 5fa598842..2991fcbcf 100644 --- a/lib/librte_eal/bsdapp/eal/eal.c +++ b/lib/librte_eal/bsdapp/eal/eal.c @@ -112,6 +112,13 @@ struct internal_config internal_config; /* used by rte_rdtsc() */ int rte_cycles_vmware_tsc_map; +/* Return mbuf pool ops name */ +const char * +rte_eal_mbuf_default_mempool_ops(void) +{ + return internal_config.mbuf_pool_ops_name; +} + /* Return a pointer to the configuration structure */ struct rte_config * rte_eal_get_configuration(void) @@ -385,6 +392,9 @@ eal_parse_args(int argc, char **argv) continue; switch (opt) { + case OPT_MBUF_POOL_OPS_NAME_NUM: + internal_config.mbuf_pool_ops_name = optarg; + break; case 'h': eal_usage(prgname); exit(EXIT_SUCCESS); diff --git a/lib/librte_eal/bsdapp/eal/rte_eal_version.map b/lib/librte_eal/bsdapp/eal/rte_eal_version.map index 47a09ea7f..c30f25fe3 100644 --- a/lib/librte_eal/bsdapp/eal/rte_eal_version.map +++ b/lib/librte_eal/bsdapp/eal/rte_eal_version.map @@ -238,3 +238,10 @@ EXPERIMENTAL { rte_service_start_with_defaults; } DPDK_17.08; + +DPDK_17.11 { + global: + + rte_eal_mbuf_default_mempool_ops; + +} DPDK_17.08; diff --git a/lib/librte_eal/common/eal_common_options.c b/lib/librte_eal/common/eal_common_options.c index 1da185e59..f406592f3 100644 --- a/lib/librte_eal/common/eal_common_options.c +++ b/lib/librte_eal/common/eal_common_options.c @@ -85,6 +85,7 @@ eal_long_options[] = { {OPT_LCORES, 1, NULL, OPT_LCORES_NUM }, {OPT_LOG_LEVEL, 1, NULL, OPT_LOG_LEVEL_NUM }, {OPT_MASTER_LCORE, 1, NULL, OPT_MASTER_LCORE_NUM }, + {OPT_MBUF_POOL_OPS_NAME, 1, NULL, OPT_MBUF_POOL_OPS_NAME_NUM}, {OPT_NO_HPET, 0, NULL, OPT_NO_HPET_NUM }, {OPT_NO_HUGE, 0, NULL, OPT_NO_HUGE_NUM }, {OPT_NO_PCI, 0, NULL, OPT_NO_PCI_NUM }, @@ -220,6 +221,7 @@ 
eal_reset_internal_config(struct internal_config *internal_cfg) #endif internal_cfg->vmware_tsc_map = 0; internal_cfg->create_uio_dev = 0; + internal_cfg->mbuf_pool_ops_name = RTE_MBUF_DEFAULT_MEMPOOL_OPS; } static int @@ -1279,6 +1281,7 @@ eal_common_usage(void) " '@' can be omitted if cpus and lcores have the same value\n" " -s SERVICE COREMASK Hexadecimal bitmask of cores to be used as service cores\n" " --"OPT_MASTER_LCORE" ID Core ID that is used as master\n" + " --"OPT_MBUF_POOL_OPS_NAME" Pool ops name for mbuf to use\n" " -n CHANNELS Number of memory channels\n" " -m MB Memory to allocate (see also --"OPT_SOCKET_MEM")\n" " -r RANKS Force number of memory ranks (don't detect)\n" diff --git a/lib/librte_eal/common/eal_internal_cfg.h b/lib/librte_eal/common/eal_internal_cfg.h index 7b7e8c887..658783db3 100644 --- a/lib/librte_eal/common/eal_internal_cfg.h +++ b/lib/librte_eal/common/eal_internal_cfg.h @@ -82,7 +82,7 @@ struct internal_config { volatile enum rte_intr_mode vfio_intr_mode; const char *hugefile_prefix; /**< the base filename of hugetlbfs files */ const char *hugepage_dir; /**< specific hugetlbfs directory to use */ - + const char *mbuf_pool_ops_name; /**< mbuf pool ops name */ unsigned num_hugepage_sizes; /**< how many sizes on this system */ struct hugepage_info hugepage_info[MAX_HUGEPAGE_SIZES]; }; diff --git a/lib/librte_eal/common/eal_options.h b/lib/librte_eal/common/eal_options.h index 439a26104..79410bd6a 100644 --- a/lib/librte_eal/common/eal_options.h +++ b/lib/librte_eal/common/eal_options.h @@ -61,6 +61,8 @@ enum { OPT_LOG_LEVEL_NUM, #define OPT_MASTER_LCORE "master-lcore" OPT_MASTER_LCORE_NUM, +#define OPT_MBUF_POOL_OPS_NAME "mbuf-pool-ops-name" + OPT_MBUF_POOL_OPS_NAME_NUM, #define OPT_PROC_TYPE "proc-type" OPT_PROC_TYPE_NUM, #define OPT_NO_HPET "no-hpet" diff --git a/lib/librte_eal/common/include/rte_eal.h b/lib/librte_eal/common/include/rte_eal.h index 559d2308e..0964e49a0 100644 --- a/lib/librte_eal/common/include/rte_eal.h +++ 
b/lib/librte_eal/common/include/rte_eal.h @@ -287,6 +287,15 @@ static inline int rte_gettid(void) return RTE_PER_LCORE(_thread_id); } +/** + * Get default pool ops name for mbuf + * + * @return + * returns default pool ops name. + */ +const char * +rte_eal_mbuf_default_mempool_ops(void); + /** * Run function before main() with low priority. * diff --git a/lib/librte_eal/linuxapp/eal/eal.c b/lib/librte_eal/linuxapp/eal/eal.c index 48f12f44c..2401f7bf4 100644 --- a/lib/librte_eal/linuxapp/eal/eal.c +++ b/lib/librte_eal/linuxapp/eal/eal.c @@ -121,6 +121,13 @@ struct internal_config internal_config; /* used by rte_rdtsc() */ int rte_cycles_vmware_tsc_map; +/* Return mbuf pool ops name */ +const char * +rte_eal_mbuf_default_mempool_ops(void) +{ + return internal_config.mbuf_pool_ops_name; +} + /* Return a pointer to the configuration structure */ struct rte_config * rte_eal_get_configuration(void) @@ -610,6 +617,10 @@ eal_parse_args(int argc, char **argv) internal_config.create_uio_dev = 1; break; + case OPT_MBUF_POOL_OPS_NAME_NUM: + internal_config.mbuf_pool_ops_name = optarg; + break; + default: if (opt < OPT_LONG_MIN_NUM && isprint(opt)) { RTE_LOG(ERR, EAL, "Option %c is not supported " diff --git a/lib/librte_eal/linuxapp/eal/rte_eal_version.map b/lib/librte_eal/linuxapp/eal/rte_eal_version.map index 8c08b8d1e..cbbb6f332 100644 --- a/lib/librte_eal/linuxapp/eal/rte_eal_version.map +++ b/lib/librte_eal/linuxapp/eal/rte_eal_version.map @@ -243,3 +243,10 @@ EXPERIMENTAL { rte_service_start_with_defaults; } DPDK_17.08; + +DPDK_17.11 { + global: + + rte_eal_mbuf_default_mempool_ops; + +} DPDK_17.08; diff --git a/lib/librte_mbuf/rte_mbuf.c b/lib/librte_mbuf/rte_mbuf.c index 26a62b8e1..5a81c8a4f 100644 --- a/lib/librte_mbuf/rte_mbuf.c +++ b/lib/librte_mbuf/rte_mbuf.c @@ -157,6 +157,7 @@ rte_pktmbuf_pool_create(const char *name, unsigned n, { struct rte_mempool *mp; struct rte_pktmbuf_pool_private mbp_priv; + const char *mp_ops_name; unsigned elt_size; int ret; @@ -176,8 
+177,8 @@ rte_pktmbuf_pool_create(const char *name, unsigned n, if (mp == NULL) return NULL; - ret = rte_mempool_set_ops_byname(mp, - RTE_MBUF_DEFAULT_MEMPOOL_OPS, NULL); + mp_ops_name = rte_eal_mbuf_default_mempool_ops(); + ret = rte_mempool_set_ops_byname(mp, mp_ops_name, NULL); if (ret != 0) { RTE_LOG(ERR, MBUF, "error setting mempool handler\n"); rte_mempool_free(mp); -- 2.14.1 ^ permalink raw reply [flat|nested] 72+ messages in thread
* [dpdk-dev] [PATCH v6 2/2] ethdev: get the supported pool for a port
  2017-10-06 7:45 ` [dpdk-dev] [PATCH v6 0/2] Dynamically configure mempool handle Santosh Shukla
  2017-10-06 7:45 ` [dpdk-dev] [PATCH v6 1/2] eal: allow user to override default pool handle Santosh Shukla
@ 2017-10-06 7:45 ` Santosh Shukla
  2017-10-06 18:58 ` [dpdk-dev] [PATCH v6 0/2] Dynamically configure mempool handle Thomas Monjalon
  2 siblings, 0 replies; 72+ messages in thread
From: Santosh Shukla @ 2017-10-06 7:45 UTC (permalink / raw)
  To: dev, thomas; +Cc: olivier.matz, jerin.jacob, hemant.agrawal, Santosh Shukla

Now that DPDK supports more than one mempool driver, and each mempool
driver works best for a specific PMD, for example:
- sw ring based mempool for Intel PMD drivers.
- dpaa2 HW mempool manager for the dpaa2 PMD driver.
- fpa HW mempool manager for the Octeontx PMD driver.

An application would like to know the best mempool handle for any port.

Introducing the rte_eth_dev_pool_ops_supported() API, which allows a PMD
driver to advertise its supported pool capability to the application.

Supported pools are categorized by the priorities below:
- Best mempool handle for this port (highest priority '0')
- Port supports this mempool handle (priority '1')

Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
Acked-by: Olivier Matz <olivier.matz@6wind.com>
---
v5 --> v6:
- Arranged rte_eth_dev_pool_ops_supported in alphabetical order.
  (Suggested by Thomas)
v4 --> v5:
- Incorporated wording comments per v4 feedback, refer [1].
  (Suggested by Olivier)
- Note: the implementation assumes that if a PMD does not implement
  _pool_ops_supported(), the library returns '1', i.e. the PMD supports
  all the pool ops. (Proposed by Olivier)
[1] http://dpdk.org/dev/patchwork/patch/28596/
v3 --> v4:
- Replaced __preferred_pool() with rte_eth_dev_pools_ops_supported()
  (suggested by Olivier)
History, refer [2].
[2] http://dpdk.org/dev/patchwork/patch/27610/ lib/librte_ether/rte_ethdev.c | 18 ++++++++++++++++++ lib/librte_ether/rte_ethdev.h | 24 ++++++++++++++++++++++++ lib/librte_ether/rte_ethdev_version.map | 1 + 3 files changed, 43 insertions(+) diff --git a/lib/librte_ether/rte_ethdev.c b/lib/librte_ether/rte_ethdev.c index 1849a3bdd..f0b647e10 100644 --- a/lib/librte_ether/rte_ethdev.c +++ b/lib/librte_ether/rte_ethdev.c @@ -3437,3 +3437,21 @@ rte_eth_dev_adjust_nb_rx_tx_desc(uint8_t port_id, return 0; } + +int +rte_eth_dev_pool_ops_supported(uint8_t port_id, const char *pool) +{ + struct rte_eth_dev *dev; + + RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV); + + if (pool == NULL) + return -EINVAL; + + dev = &rte_eth_devices[port_id]; + + if (*dev->dev_ops->pool_ops_supported == NULL) + return 1; /* all pools are supported */ + + return (*dev->dev_ops->pool_ops_supported)(dev, pool); +} diff --git a/lib/librte_ether/rte_ethdev.h b/lib/librte_ether/rte_ethdev.h index 99cdd54d4..c65b64d40 100644 --- a/lib/librte_ether/rte_ethdev.h +++ b/lib/librte_ether/rte_ethdev.h @@ -1428,6 +1428,10 @@ typedef int (*eth_get_dcb_info)(struct rte_eth_dev *dev, struct rte_eth_dcb_info *dcb_info); /**< @internal Get dcb information on an Ethernet device */ +typedef int (*eth_pool_ops_supported_t)(struct rte_eth_dev *dev, + const char *pool); +/**< @internal Test if a port supports specific mempool ops */ + /** * @internal A structure containing the functions exported by an Ethernet driver. */ @@ -1548,6 +1552,8 @@ struct eth_dev_ops { eth_tm_ops_get_t tm_ops_get; /**< Get Traffic Management (TM) operations. */ + eth_pool_ops_supported_t pool_ops_supported; + /**< Test if a port supports specific mempool ops */ }; /** @@ -4470,6 +4476,24 @@ int rte_eth_dev_adjust_nb_rx_tx_desc(uint8_t port_id, uint16_t *nb_rx_desc, uint16_t *nb_tx_desc); + +/** + * Test if a port supports specific mempool ops. + * + * @param port_id + * Port identifier of the Ethernet device. 
+ * @param [in] pool + * The name of the pool operations to test. + * @return + * - 0: best mempool ops choice for this port. + * - 1: mempool ops are supported for this port. + * - -ENOTSUP: mempool ops not supported for this port. + * - -ENODEV: Invalid port Identifier. + * - -EINVAL: Pool param is null. + */ +int +rte_eth_dev_pool_ops_supported(uint8_t port_id, const char *pool); + #ifdef __cplusplus } #endif diff --git a/lib/librte_ether/rte_ethdev_version.map b/lib/librte_ether/rte_ethdev_version.map index 07f9e17f6..92c9e2908 100644 --- a/lib/librte_ether/rte_ethdev_version.map +++ b/lib/librte_ether/rte_ethdev_version.map @@ -191,6 +191,7 @@ DPDK_17.08 { DPDK_17.11 { global: + rte_eth_dev_pool_ops_supported; rte_eth_dev_reset; } DPDK_17.08; -- 2.14.1 ^ permalink raw reply [flat|nested] 72+ messages in thread
* Re: [dpdk-dev] [PATCH v6 0/2] Dynamically configure mempool handle
  2017-10-06 7:45 ` [dpdk-dev] [PATCH v6 0/2] Dynamically configure mempool handle Santosh Shukla
  2017-10-06 7:45 ` [dpdk-dev] [PATCH v6 1/2] eal: allow user to override default pool handle Santosh Shukla
  2017-10-06 7:45 ` [dpdk-dev] [PATCH v6 2/2] ethdev: get the supported pool for a port Santosh Shukla
@ 2017-10-06 18:58 ` Thomas Monjalon
  2 siblings, 0 replies; 72+ messages in thread
From: Thomas Monjalon @ 2017-10-06 18:58 UTC (permalink / raw)
  To: Santosh Shukla; +Cc: dev, olivier.matz, jerin.jacob, hemant.agrawal

> Santosh Shukla (2):
>   eal: allow user to override default pool handle
>   ethdev: get the supported pool for a port

I had not reviewed it before yesterday. It seems functionally ready.
I have some reservations about naming and placement, but I could try to
address them later.

Applied, thanks

^ permalink raw reply	[flat|nested] 72+ messages in thread
* [dpdk-dev] [PATCH v5 2/2] ethdev: get the supported pool for a port
  2017-10-01 9:14 ` [dpdk-dev] [PATCH v5 " Santosh Shukla
  2017-10-01 9:14 ` [dpdk-dev] [PATCH v5 1/2] eal: allow user to override default pool handle Santosh Shukla
@ 2017-10-01 9:14 ` Santosh Shukla
  2017-10-02 14:31 ` Olivier MATZ
  2017-10-06 0:30 ` Thomas Monjalon
  2017-10-02 8:37 ` [dpdk-dev] [PATCH v5 0/2] Dynamically configure mempool handle santosh
  2 siblings, 2 replies; 72+ messages in thread
From: Santosh Shukla @ 2017-10-01 9:14 UTC (permalink / raw)
  To: olivier.matz, dev; +Cc: thomas, jerin.jacob, hemant.agrawal, Santosh Shukla

Now that DPDK supports more than one mempool driver, and each mempool
driver works best for a specific PMD, for example:
- sw ring based mempool for Intel PMD drivers.
- dpaa2 HW mempool manager for the dpaa2 PMD driver.
- fpa HW mempool manager for the Octeontx PMD driver.

An application would like to know the best mempool handle for any port.

Introducing the rte_eth_dev_pool_ops_supported() API, which allows a PMD
driver to advertise its supported pool capability to the application.

Supported pools are categorized by the priorities below:
- Best mempool handle for this port (highest priority '0')
- Port supports this mempool handle (priority '1')

Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
---
v4 --> v5:
- Incorporated wording comments per v4 feedback, refer [1].
  (Suggested by Olivier)
- Note: the implementation assumes that if a PMD does not implement
  _pool_ops_supported(), the library returns '1', i.e. the PMD supports
  all the pool ops. (Proposed by Olivier)
[1] http://dpdk.org/dev/patchwork/patch/28596/
v3 --> v4:
- Replaced __preferred_pool() with rte_eth_dev_pools_ops_supported()
  (suggested by Olivier)
History, refer [2].
[2] http://dpdk.org/dev/patchwork/patch/27610/ lib/librte_ether/rte_ethdev.c | 18 ++++++++++++++++++ lib/librte_ether/rte_ethdev.h | 24 ++++++++++++++++++++++++ lib/librte_ether/rte_ethdev_version.map | 1 + 3 files changed, 43 insertions(+) diff --git a/lib/librte_ether/rte_ethdev.c b/lib/librte_ether/rte_ethdev.c index 1849a3bdd..f0b647e10 100644 --- a/lib/librte_ether/rte_ethdev.c +++ b/lib/librte_ether/rte_ethdev.c @@ -3437,3 +3437,21 @@ rte_eth_dev_adjust_nb_rx_tx_desc(uint8_t port_id, return 0; } + +int +rte_eth_dev_pool_ops_supported(uint8_t port_id, const char *pool) +{ + struct rte_eth_dev *dev; + + RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV); + + if (pool == NULL) + return -EINVAL; + + dev = &rte_eth_devices[port_id]; + + if (*dev->dev_ops->pool_ops_supported == NULL) + return 1; /* all pools are supported */ + + return (*dev->dev_ops->pool_ops_supported)(dev, pool); +} diff --git a/lib/librte_ether/rte_ethdev.h b/lib/librte_ether/rte_ethdev.h index 99cdd54d4..c65b64d40 100644 --- a/lib/librte_ether/rte_ethdev.h +++ b/lib/librte_ether/rte_ethdev.h @@ -1428,6 +1428,10 @@ typedef int (*eth_get_dcb_info)(struct rte_eth_dev *dev, struct rte_eth_dcb_info *dcb_info); /**< @internal Get dcb information on an Ethernet device */ +typedef int (*eth_pool_ops_supported_t)(struct rte_eth_dev *dev, + const char *pool); +/**< @internal Test if a port supports specific mempool ops */ + /** * @internal A structure containing the functions exported by an Ethernet driver. */ @@ -1548,6 +1552,8 @@ struct eth_dev_ops { eth_tm_ops_get_t tm_ops_get; /**< Get Traffic Management (TM) operations. */ + eth_pool_ops_supported_t pool_ops_supported; + /**< Test if a port supports specific mempool ops */ }; /** @@ -4470,6 +4476,24 @@ int rte_eth_dev_adjust_nb_rx_tx_desc(uint8_t port_id, uint16_t *nb_rx_desc, uint16_t *nb_tx_desc); + +/** + * Test if a port supports specific mempool ops. + * + * @param port_id + * Port identifier of the Ethernet device. 
+ * @param [in] pool + * The name of the pool operations to test. + * @return + * - 0: best mempool ops choice for this port. + * - 1: mempool ops are supported for this port. + * - -ENOTSUP: mempool ops not supported for this port. + * - -ENODEV: Invalid port Identifier. + * - -EINVAL: Pool param is null. + */ +int +rte_eth_dev_pool_ops_supported(uint8_t port_id, const char *pool); + #ifdef __cplusplus } #endif diff --git a/lib/librte_ether/rte_ethdev_version.map b/lib/librte_ether/rte_ethdev_version.map index 07f9e17f6..6b217119b 100644 --- a/lib/librte_ether/rte_ethdev_version.map +++ b/lib/librte_ether/rte_ethdev_version.map @@ -192,5 +192,6 @@ DPDK_17.11 { global: rte_eth_dev_reset; + rte_eth_dev_pool_ops_supported; } DPDK_17.08; -- 2.14.1 ^ permalink raw reply [flat|nested] 72+ messages in thread
* Re: [dpdk-dev] [PATCH v5 2/2] ethdev: get the supported pool for a port 2017-10-01 9:14 ` [dpdk-dev] [PATCH v5 2/2] ethdev: get the supported pool for a port Santosh Shukla @ 2017-10-02 14:31 ` Olivier MATZ 2017-10-06 0:30 ` Thomas Monjalon 1 sibling, 0 replies; 72+ messages in thread From: Olivier MATZ @ 2017-10-02 14:31 UTC (permalink / raw) To: Santosh Shukla; +Cc: dev, thomas, jerin.jacob, hemant.agrawal On Sun, Oct 01, 2017 at 02:44:40PM +0530, Santosh Shukla wrote: > Now that dpdk supports more than one mempool drivers and > each mempool driver works best for specific PMD, example: > - sw ring based mempool for Intel PMD drivers. > - dpaa2 HW mempool manager for dpaa2 PMD driver. > - fpa HW mempool manager for Octeontx PMD driver. > > Application would like to know the best mempool handle > for any port. > > Introducing rte_eth_dev_pool_ops_supported() API, > which allows PMD driver to advertise > his supported pool capability to the application. > > Supported pools are categorized in below priority:- > - Best mempool handle for this port (Highest priority '0') > - Port supports this mempool handle (Priority '1') > > Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com> Acked-by: Olivier Matz <olivier.matz@6wind.com> ^ permalink raw reply [flat|nested] 72+ messages in thread
* Re: [dpdk-dev] [PATCH v5 2/2] ethdev: get the supported pool for a port 2017-10-01 9:14 ` [dpdk-dev] [PATCH v5 2/2] ethdev: get the supported pool for a port Santosh Shukla 2017-10-02 14:31 ` Olivier MATZ @ 2017-10-06 0:30 ` Thomas Monjalon 2017-10-06 3:32 ` santosh 1 sibling, 1 reply; 72+ messages in thread From: Thomas Monjalon @ 2017-10-06 0:30 UTC (permalink / raw) To: Santosh Shukla; +Cc: dev, olivier.matz, jerin.jacob, hemant.agrawal 01/10/2017 11:14, Santosh Shukla: > --- a/lib/librte_ether/rte_ethdev_version.map > +++ b/lib/librte_ether/rte_ethdev_version.map > @@ -192,5 +192,6 @@ DPDK_17.11 { > global: > > rte_eth_dev_reset; > + rte_eth_dev_pool_ops_supported; You missed the alphabetical order here :) ^ permalink raw reply [flat|nested] 72+ messages in thread
* Re: [dpdk-dev] [PATCH v5 2/2] ethdev: get the supported pool for a port 2017-10-06 0:30 ` Thomas Monjalon @ 2017-10-06 3:32 ` santosh 0 siblings, 0 replies; 72+ messages in thread From: santosh @ 2017-10-06 3:32 UTC (permalink / raw) To: Thomas Monjalon; +Cc: dev, olivier.matz, jerin.jacob, hemant.agrawal On Friday 06 October 2017 06:00 AM, Thomas Monjalon wrote: > 01/10/2017 11:14, Santosh Shukla: >> --- a/lib/librte_ether/rte_ethdev_version.map >> +++ b/lib/librte_ether/rte_ethdev_version.map >> @@ -192,5 +192,6 @@ DPDK_17.11 { >> global: >> >> rte_eth_dev_reset; >> + rte_eth_dev_pool_ops_supported; > You missed the alphabetical order here :) > Another valid comment, Sending v6. Need this series in -rc1 a _big_ note. Thanks. ^ permalink raw reply [flat|nested] 72+ messages in thread
* Re: [dpdk-dev] [PATCH v5 0/2] Dynamically configure mempool handle 2017-10-01 9:14 ` [dpdk-dev] [PATCH v5 " Santosh Shukla 2017-10-01 9:14 ` [dpdk-dev] [PATCH v5 1/2] eal: allow user to override default pool handle Santosh Shukla 2017-10-01 9:14 ` [dpdk-dev] [PATCH v5 2/2] ethdev: get the supported pool for a port Santosh Shukla @ 2017-10-02 8:37 ` santosh 2 siblings, 0 replies; 72+ messages in thread From: santosh @ 2017-10-02 8:37 UTC (permalink / raw) To: olivier.matz, dev; +Cc: thomas, jerin.jacob, hemant.agrawal On Sunday 01 October 2017 02:44 PM, Santosh Shukla wrote: > v5: > - Includes v4 minor review comment. > Patches rebased on upstream tip / commit id:5dce9fcdb2 ping, Thanks. ^ permalink raw reply [flat|nested] 72+ messages in thread
* [dpdk-dev] [PATCH v2 2/2] ethdev: allow pmd to advertise pool handle
  2017-07-20 7:06 ` [dpdk-dev] [PATCH v2 0/2] Dynamically configure " Santosh Shukla
  2017-07-20 7:06 ` [dpdk-dev] [PATCH v2 1/2] eal: allow user to override default pool handle Santosh Shukla
@ 2017-07-20 7:06 ` Santosh Shukla
  1 sibling, 0 replies; 72+ messages in thread
From: Santosh Shukla @ 2017-07-20 7:06 UTC (permalink / raw)
  To: thomas, dev, olivier.matz; +Cc: jerin.jacob, hemant.agrawal, Santosh Shukla

Now that DPDK supports more than one mempool driver, and each mempool
driver works best for a specific PMD, for example:
- sw ring based mempool for Intel PMD drivers
- dpaa2 HW mempool manager for the dpaa2 PMD driver.
- fpa HW mempool manager for the Octeontx PMD driver.

An application would like to know the `preferred mempool vs PMD driver`
information in advance, before port setup.

Introducing the rte_eth_dev_get_preferred_pool_ops() API, which allows a
PMD driver to advertise its pool capability to the application.

The application-side programming sequence would be:

char pref_mempool[RTE_MEMPOOL_OPS_NAMESIZE];
rte_eth_dev_get_preferred_pool_ops(ethdev_port_id, pref_mempool /* out */);
rte_mempool_create_empty();
rte_mempool_set_ops_byname( , pref_mempool, );
rte_mempool_populate_default();

Signed-off-by: Santosh Shukla <santosh.shukla@caviumnetworks.com>
---
v1 --> v2:
- Renamed _get_preferred_pool to _get_preferred_pool_ops().
  Per v1 review feedback, Olivier suggested renaming the API to
  rte_eth_dev_pool_ops_supported(), considering that the 2nd param for
  that API would return the pool handle 'priority' for that port.
  However, per v1 [1], we're opting for approach 1) where the ethdev API
  returns the _preferred_ pool handle to the application, and it is up
  to the application to decide whether or not to create a pool with the
  received preferred pool handle. For more discussion details on this
  topic, refer [1].
[1] http://dpdk.org/dev/patchwork/patch/24944/
- Removed const qualifier from _get_preferred_pool_ops() param.
- Updated API description and changes return val from -EINVAL to _ENOTSUP (suggested by Olivier) lib/librte_ether/rte_ethdev.c | 18 ++++++++++++++++++ lib/librte_ether/rte_ethdev.h | 21 +++++++++++++++++++++ lib/librte_ether/rte_ether_version.map | 1 + 3 files changed, 40 insertions(+) diff --git a/lib/librte_ether/rte_ethdev.c b/lib/librte_ether/rte_ethdev.c index a1b744704..3ac87f617 100644 --- a/lib/librte_ether/rte_ethdev.c +++ b/lib/librte_ether/rte_ethdev.c @@ -3415,3 +3415,21 @@ rte_eth_dev_adjust_nb_rx_tx_desc(uint8_t port_id, return 0; } + +int +rte_eth_dev_get_preferred_pool_ops(uint8_t port_id, char *pool) +{ + struct rte_eth_dev *dev; + const char *tmp; + + RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV); + + dev = &rte_eth_devices[port_id]; + + if (*dev->dev_ops->get_preferred_pool_ops == NULL) { + tmp = rte_eal_mbuf_default_mempool_ops(); + snprintf(pool, RTE_MBUF_POOL_OPS_NAMESIZE, "%s", tmp); + return 0; + } + return (*dev->dev_ops->get_preferred_pool_ops)(dev, pool); +} diff --git a/lib/librte_ether/rte_ethdev.h b/lib/librte_ether/rte_ethdev.h index 55fda9583..d27e34e2e 100644 --- a/lib/librte_ether/rte_ethdev.h +++ b/lib/librte_ether/rte_ethdev.h @@ -1425,6 +1425,10 @@ typedef int (*eth_get_dcb_info)(struct rte_eth_dev *dev, struct rte_eth_dcb_info *dcb_info); /**< @internal Get dcb information on an Ethernet device */ +typedef int (*eth_get_preferred_pool_ops_t)(struct rte_eth_dev *dev, + char *pool); +/**< @internal Get preferred pool handle for port */ + /** * @internal A structure containing the functions exported by an Ethernet driver. */ @@ -1544,6 +1548,8 @@ struct eth_dev_ops { eth_tm_ops_get_t tm_ops_get; /**< Get Traffic Management (TM) operations. 
*/ + eth_get_preferred_pool_ops_t get_preferred_pool_ops; + /**< Get preferred pool handle for port */ }; /** @@ -4435,6 +4441,21 @@ int rte_eth_dev_adjust_nb_rx_tx_desc(uint8_t port_id, uint16_t *nb_rx_desc, uint16_t *nb_tx_desc); +/** + * Get preferred pool handle for port + * + * @param port_id + * port identifier of the device + * @param [out] pool + * Preferred pool handle for this port. + * Maximum length of preferred pool handle is RTE_MBUF_POOL_OPS_NAMESIZE. + * @return + * - (0) if successful. + * - (-ENOTSUP, -ENODEV or -EINVAL) on failure. + */ +int +rte_eth_dev_get_preferred_pool_ops(uint8_t port_id, char *pool); + #ifdef __cplusplus } #endif diff --git a/lib/librte_ether/rte_ether_version.map b/lib/librte_ether/rte_ether_version.map index 42837285e..ac35da6f8 100644 --- a/lib/librte_ether/rte_ether_version.map +++ b/lib/librte_ether/rte_ether_version.map @@ -185,5 +185,6 @@ DPDK_17.08 { rte_tm_shared_wred_context_delete; rte_tm_wred_profile_add; rte_tm_wred_profile_delete; + rte_eth_dev_get_preferred_pool_ops; } DPDK_17.05; -- 2.11.0 ^ permalink raw reply [flat|nested] 72+ messages in thread
end of thread, other threads:[~2017-10-06 18:58 UTC | newest] Thread overview: 72+ messages (download: mbox.gz / follow: Atom feed) -- links below jump to the message on this page -- 2017-06-01 8:05 [dpdk-dev] [PATCH 0/2] Allow application set mempool handle Santosh Shukla 2017-06-01 8:05 ` [dpdk-dev] [PATCH 1/2] eal: Introducing option to " Santosh Shukla 2017-06-30 14:12 ` Olivier Matz 2017-07-04 12:33 ` santosh 2017-06-01 8:05 ` [dpdk-dev] [PATCH 2/2] ether/ethdev: Allow pmd to advertise preferred pool capability Santosh Shukla 2017-06-30 14:13 ` Olivier Matz 2017-07-04 12:39 ` santosh 2017-07-04 13:07 ` Olivier Matz 2017-07-04 14:12 ` Jerin Jacob 2017-06-19 11:52 ` [dpdk-dev] [PATCH 0/2] Allow application set mempool handle Hemant Agrawal 2017-06-19 13:01 ` Jerin Jacob 2017-06-20 10:37 ` Hemant Agrawal 2017-06-20 14:04 ` Jerin Jacob 2017-06-30 14:12 ` Olivier Matz 2017-07-04 12:25 ` santosh 2017-07-04 15:59 ` Olivier Matz 2017-07-05 7:48 ` santosh 2017-07-20 7:06 ` [dpdk-dev] [PATCH v2 0/2] Dynamically configure " Santosh Shukla 2017-07-20 7:06 ` [dpdk-dev] [PATCH v2 1/2] eal: allow user to override default pool handle Santosh Shukla 2017-08-15 8:07 ` [dpdk-dev] [PATCH v3 0/2] Dynamically configure mempool handle Santosh Shukla 2017-08-15 8:07 ` [dpdk-dev] [PATCH v3 1/2] eal: allow user to override default pool handle Santosh Shukla 2017-09-04 11:46 ` Olivier MATZ 2017-09-07 9:25 ` Hemant Agrawal 2017-08-15 8:07 ` [dpdk-dev] [PATCH v3 2/2] ethdev: allow pmd to advertise " Santosh Shukla 2017-09-04 12:11 ` Olivier MATZ 2017-09-04 13:14 ` santosh 2017-09-07 9:21 ` Hemant Agrawal 2017-09-07 10:06 ` santosh 2017-09-07 10:11 ` santosh 2017-09-07 11:08 ` Hemant Agrawal 2017-09-11 9:33 ` Olivier MATZ 2017-09-11 12:40 ` santosh 2017-09-11 13:00 ` Olivier MATZ 2017-09-04 9:41 ` [dpdk-dev] [PATCH v3 0/2] Dynamically configure mempool handle Sergio Gonzalez Monroy 2017-09-04 13:20 ` santosh 2017-09-04 13:34 ` Olivier MATZ 2017-09-04 14:24 ` Sergio Gonzalez Monroy 
2017-09-05 7:47 ` Olivier MATZ 2017-09-05 8:11 ` Jerin Jacob 2017-09-11 15:18 ` [dpdk-dev] [PATCH v4 " Santosh Shukla 2017-09-11 15:18 ` [dpdk-dev] [PATCH v4 1/2] eal: allow user to override default pool handle Santosh Shukla 2017-09-25 7:28 ` Olivier MATZ 2017-09-25 21:23 ` santosh 2017-09-11 15:18 ` [dpdk-dev] [PATCH v4 2/2] ethdev: get the supported pools for a port Santosh Shukla 2017-09-25 7:37 ` Olivier MATZ 2017-09-25 21:52 ` santosh 2017-09-29 5:00 ` santosh 2017-09-29 8:32 ` Olivier MATZ 2017-09-29 10:16 ` santosh 2017-09-29 11:21 ` santosh 2017-09-29 11:23 ` Olivier MATZ 2017-09-29 11:31 ` santosh 2017-09-13 10:00 ` [dpdk-dev] [PATCH v4 0/2] Dynamically configure mempool handle santosh 2017-09-19 8:28 ` santosh 2017-09-25 7:24 ` Olivier MATZ 2017-09-25 21:58 ` santosh 2017-10-01 9:14 ` [dpdk-dev] [PATCH v5 " Santosh Shukla 2017-10-01 9:14 ` [dpdk-dev] [PATCH v5 1/2] eal: allow user to override default pool handle Santosh Shukla 2017-10-02 14:29 ` Olivier MATZ 2017-10-06 0:29 ` Thomas Monjalon 2017-10-06 3:31 ` santosh 2017-10-06 8:39 ` Thomas Monjalon 2017-10-06 7:45 ` [dpdk-dev] [PATCH v6 0/2] Dynamically configure mempool handle Santosh Shukla 2017-10-06 7:45 ` [dpdk-dev] [PATCH v6 1/2] eal: allow user to override default pool handle Santosh Shukla 2017-10-06 7:45 ` [dpdk-dev] [PATCH v6 2/2] ethdev: get the supported pool for a port Santosh Shukla 2017-10-06 18:58 ` [dpdk-dev] [PATCH v6 0/2] Dynamically configure mempool handle Thomas Monjalon 2017-10-01 9:14 ` [dpdk-dev] [PATCH v5 2/2] ethdev: get the supported pool for a port Santosh Shukla 2017-10-02 14:31 ` Olivier MATZ 2017-10-06 0:30 ` Thomas Monjalon 2017-10-06 3:32 ` santosh 2017-10-02 8:37 ` [dpdk-dev] [PATCH v5 0/2] Dynamically configure mempool handle santosh 2017-07-20 7:06 ` [dpdk-dev] [PATCH v2 2/2] ethdev: allow pmd to advertise pool handle Santosh Shukla