DPDK patches and discussions
From: Santosh Shukla <santosh.shukla@caviumnetworks.com>
To: Hemant Agrawal <hemant.agrawal@nxp.com>
Cc: <olivier.matz@6wind.com>, <dev@dpdk.org>
Subject: Re: [dpdk-dev] [PATCH] mbuf: use pktmbuf helper to create the pool
Date: Tue, 17 Jan 2017 19:01:23 +0530	[thread overview]
Message-ID: <20170117133121.GA27592@santosh-Latitude-E5530-non-vPro>
In-Reply-To: <1484678576-3925-1-git-send-email-hemant.agrawal@nxp.com>

Hi Hemant,

On Wed, Jan 18, 2017 at 12:12:56AM +0530, Hemant Agrawal wrote:
> When possible, replace the uses of rte_mempool_create() with
> the helper provided in librte_mbuf: rte_pktmbuf_pool_create().
> 
> This is the preferred way to create a mbuf pool.
> 
> This also updates the documentation.


> Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
> Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
> ---
> This patch is derived from the RFC from Olivier:
> http://dpdk.org/dev/patchwork/patch/15925/

The mempool test cases are still left on rte_mempool_create(); they are not
converted either to rte_mempool_create_empty()/rte_mempool_populate() or to
rte_pktmbuf_pool_create() by this patch. Do you plan to take them up, or
shall I post the patches? Is the same change needed on the OVS side too?
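
For reference, the conversion I have in mind for the test cases is roughly
the sketch below (the helper name and sizes are placeholders, not taken from
this patch; the sequence mirrors what rte_pktmbuf_pool_create() does
internally):

	#include <rte_lcore.h>
	#include <rte_mbuf.h>
	#include <rte_mempool.h>

	/* Hypothetical test helper: build a pktmbuf pool with the
	 * create_empty/populate sequence instead of rte_mempool_create(). */
	static struct rte_mempool *
	test_pktmbuf_pool_create(unsigned int nb_mbufs, uint16_t data_room_size)
	{
		struct rte_mempool *mp;
		unsigned int elt_size = sizeof(struct rte_mbuf) + data_room_size;

		mp = rte_mempool_create_empty("test_pktmbuf_pool", nb_mbufs,
			elt_size, 0 /* cache size */,
			sizeof(struct rte_pktmbuf_pool_private),
			rte_socket_id(), 0);
		if (mp == NULL)
			return NULL;

		/* "ring_mp_mc" is the usual default handler
		 * (RTE_MBUF_DEFAULT_MEMPOOL_OPS) */
		if (rte_mempool_set_ops_byname(mp, "ring_mp_mc", NULL) != 0) {
			rte_mempool_free(mp);
			return NULL;
		}

		/* with a NULL opaque arg, the data room size is derived
		 * from elt_size */
		rte_pktmbuf_pool_init(mp, NULL);

		if (rte_mempool_populate_default(mp) < 0) {
			rte_mempool_free(mp);
			return NULL;
		}

		/* initialize each mbuf, as rte_pktmbuf_pool_create() does */
		rte_mempool_obj_iter(mp, rte_pktmbuf_init, NULL);
		return mp;
	}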

Thanks,

>  app/test/test_link_bonding_rssconf.c               | 11 ++++----
>  doc/guides/sample_app_ug/ip_reassembly.rst         | 13 +++++----
>  doc/guides/sample_app_ug/ipv4_multicast.rst        | 12 ++++----
>  doc/guides/sample_app_ug/l2_forward_job_stats.rst  | 33 ++++++++--------------
>  .../sample_app_ug/l2_forward_real_virtual.rst      | 26 +++++++----------
>  doc/guides/sample_app_ug/ptpclient.rst             | 11 ++------
>  doc/guides/sample_app_ug/quota_watermark.rst       | 26 ++++++-----------
>  drivers/net/bonding/rte_eth_bond_8023ad.c          | 13 ++++-----
>  examples/ip_pipeline/init.c                        | 18 ++++++------
>  examples/ip_reassembly/main.c                      | 16 +++++------
>  examples/multi_process/l2fwd_fork/main.c           | 13 +++------
>  examples/tep_termination/main.c                    | 16 +++++------
>  lib/librte_mbuf/rte_mbuf.c                         |  7 +++--
>  lib/librte_mbuf/rte_mbuf.h                         | 29 +++++++++++--------
>  14 files changed, 106 insertions(+), 138 deletions(-)
> 
> diff --git a/app/test/test_link_bonding_rssconf.c b/app/test/test_link_bonding_rssconf.c
> index 34f1c16..9034f62 100644
> --- a/app/test/test_link_bonding_rssconf.c
> +++ b/app/test/test_link_bonding_rssconf.c
> @@ -67,7 +67,7 @@
>  #define SLAVE_RXTX_QUEUE_FMT      ("rssconf_slave%d_q%d")
>  
>  #define NUM_MBUFS 8191
> -#define MBUF_SIZE (1600 + sizeof(struct rte_mbuf) + RTE_PKTMBUF_HEADROOM)
> +#define MBUF_SIZE (1600 + RTE_PKTMBUF_HEADROOM)
>  #define MBUF_CACHE_SIZE 250
>  #define BURST_SIZE 32
>  
> @@ -536,13 +536,12 @@ struct link_bonding_rssconf_unittest_params {
>  
>  	if (test_params.mbuf_pool == NULL) {
>  
> -		test_params.mbuf_pool = rte_mempool_create("RSS_MBUF_POOL", NUM_MBUFS *
> -				SLAVE_COUNT, MBUF_SIZE, MBUF_CACHE_SIZE,
> -				sizeof(struct rte_pktmbuf_pool_private), rte_pktmbuf_pool_init,
> -				NULL, rte_pktmbuf_init, NULL, rte_socket_id(), 0);
> +		test_params.mbuf_pool = rte_pktmbuf_pool_create(
> +			"RSS_MBUF_POOL", NUM_MBUFS * SLAVE_COUNT,
> +			MBUF_CACHE_SIZE, 0, MBUF_SIZE, rte_socket_id());
>  
>  		TEST_ASSERT(test_params.mbuf_pool != NULL,
> -				"rte_mempool_create failed\n");
> +				"rte_pktmbuf_pool_create failed\n");
>  	}
>  
>  	/* Create / initialize ring eth devs. */
> diff --git a/doc/guides/sample_app_ug/ip_reassembly.rst b/doc/guides/sample_app_ug/ip_reassembly.rst
> index 3c5cc70..d5097c6 100644
> --- a/doc/guides/sample_app_ug/ip_reassembly.rst
> +++ b/doc/guides/sample_app_ug/ip_reassembly.rst
> @@ -223,11 +223,14 @@ each RX queue uses its own mempool.
>  
>      snprintf(buf, sizeof(buf), "mbuf_pool_%u_%u", lcore, queue);
>  
> -    if ((rxq->pool = rte_mempool_create(buf, nb_mbuf, MBUF_SIZE, 0, sizeof(struct rte_pktmbuf_pool_private), rte_pktmbuf_pool_init, NULL,
> -        rte_pktmbuf_init, NULL, socket, MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET)) == NULL) {
> -
> -            RTE_LOG(ERR, IP_RSMBL, "mempool_create(%s) failed", buf);
> -            return -1;
> +    rxq->pool = rte_pktmbuf_pool_create(buf, nb_mbuf,
> +    	0, /* cache size */
> +    	0, /* priv size */
> +    	MBUF_DATA_SIZE, socket);
> +    if (rxq->pool == NULL) {
> +    	RTE_LOG(ERR, IP_RSMBL,
> +    		"rte_pktmbuf_pool_create(%s) failed", buf);
> +    	return -1;
>      }
>  
>  Packet Reassembly and Forwarding
> diff --git a/doc/guides/sample_app_ug/ipv4_multicast.rst b/doc/guides/sample_app_ug/ipv4_multicast.rst
> index 72da8c4..d9ff249 100644
> --- a/doc/guides/sample_app_ug/ipv4_multicast.rst
> +++ b/doc/guides/sample_app_ug/ipv4_multicast.rst
> @@ -145,12 +145,12 @@ Memory pools for indirect buffers are initialized differently from the memory po
>  
>  .. code-block:: c
>  
> -    packet_pool = rte_mempool_create("packet_pool", NB_PKT_MBUF, PKT_MBUF_SIZE, 32, sizeof(struct rte_pktmbuf_pool_private),
> -                                     rte_pktmbuf_pool_init, NULL, rte_pktmbuf_init, NULL, rte_socket_id(), 0);
> -
> -    header_pool = rte_mempool_create("header_pool", NB_HDR_MBUF, HDR_MBUF_SIZE, 32, 0, NULL, NULL, rte_pktmbuf_init, NULL, rte_socket_id(), 0);
> -    clone_pool = rte_mempool_create("clone_pool", NB_CLONE_MBUF,
> -    CLONE_MBUF_SIZE, 32, 0, NULL, NULL, rte_pktmbuf_init, NULL, rte_socket_id(), 0);
> +    packet_pool = rte_pktmbuf_pool_create("packet_pool", NB_PKT_MBUF, 32,
> +    	0, PKT_MBUF_DATA_SIZE, rte_socket_id());
> +    header_pool = rte_pktmbuf_pool_create("header_pool", NB_HDR_MBUF, 32,
> +    	0, HDR_MBUF_DATA_SIZE, rte_socket_id());
> +    clone_pool = rte_pktmbuf_pool_create("clone_pool", NB_CLONE_MBUF, 32,
> +    	0, 0, rte_socket_id());
>  
>  The reason for this is because indirect buffers are not supposed to hold any packet data and
>  therefore can be initialized with lower amount of reserved memory for each buffer.
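
Side note, not part of the patch: the zero data room for clone_pool works
because the clones are indirect mbufs that only reference data owned by
another mbuf. A hypothetical use, with pkt assumed to come from packet_pool:

	/* the clone carries no packet data of its own; it attaches to
	 * pkt's buffer */
	struct rte_mbuf *clone = rte_pktmbuf_clone(pkt, clone_pool);
	if (clone == NULL)
		rte_panic("cannot clone mbuf\n");
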
> diff --git a/doc/guides/sample_app_ug/l2_forward_job_stats.rst b/doc/guides/sample_app_ug/l2_forward_job_stats.rst
> index 2444e36..a606b86 100644
> --- a/doc/guides/sample_app_ug/l2_forward_job_stats.rst
> +++ b/doc/guides/sample_app_ug/l2_forward_job_stats.rst
> @@ -193,36 +193,25 @@ and the application to store network packet data:
>  .. code-block:: c
>  
>      /* create the mbuf pool */
> -    l2fwd_pktmbuf_pool =
> -        rte_mempool_create("mbuf_pool", NB_MBUF,
> -                   MBUF_SIZE, 32,
> -                   sizeof(struct rte_pktmbuf_pool_private),
> -                   rte_pktmbuf_pool_init, NULL,
> -                   rte_pktmbuf_init, NULL,
> -                   rte_socket_id(), 0);
> +    l2fwd_pktmbuf_pool = rte_pktmbuf_pool_create("mbuf_pool", NB_MBUF,
> +    	MEMPOOL_CACHE_SIZE, 0, RTE_MBUF_DEFAULT_BUF_SIZE,
> +    	rte_socket_id());
>  
>      if (l2fwd_pktmbuf_pool == NULL)
>          rte_exit(EXIT_FAILURE, "Cannot init mbuf pool\n");
>  
>  The rte_mempool is a generic structure used to handle pools of objects.
> -In this case, it is necessary to create a pool that will be used by the driver,
> -which expects to have some reserved space in the mempool structure,
> -sizeof(struct rte_pktmbuf_pool_private) bytes.
> -The number of allocated pkt mbufs is NB_MBUF, with a size of MBUF_SIZE each.
> -A per-lcore cache of 32 mbufs is kept.
> +In this case, it is necessary to create a pool that will be used by the driver.
> +The number of allocated pkt mbufs is NB_MBUF, with a data room size of
> +RTE_MBUF_DEFAULT_BUF_SIZE each.
> +A per-lcore cache of MEMPOOL_CACHE_SIZE mbufs is kept.
>  The memory is allocated in rte_socket_id() socket,
>  but it is possible to extend this code to allocate one mbuf pool per socket.
>  
> -Two callback pointers are also given to the rte_mempool_create() function:
> -
> -*   The first callback pointer is to rte_pktmbuf_pool_init() and is used
> -    to initialize the private data of the mempool, which is needed by the driver.
> -    This function is provided by the mbuf API, but can be copied and extended by the developer.
> -
> -*   The second callback pointer given to rte_mempool_create() is the mbuf initializer.
> -    The default is used, that is, rte_pktmbuf_init(), which is provided in the rte_mbuf library.
> -    If a more complex application wants to extend the rte_pktmbuf structure for its own needs,
> -    a new function derived from rte_pktmbuf_init( ) can be created.
> +The rte_pktmbuf_pool_create() function uses the default mbuf pool and mbuf
> +initializers, respectively rte_pktmbuf_pool_init() and rte_pktmbuf_init().
> +An advanced application may want to use the mempool API to create the
> +mbuf pool with more control.
>  
>  Driver Initialization
>  ~~~~~~~~~~~~~~~~~~~~~
> diff --git a/doc/guides/sample_app_ug/l2_forward_real_virtual.rst b/doc/guides/sample_app_ug/l2_forward_real_virtual.rst
> index cf15d1c..de86ac8 100644
> --- a/doc/guides/sample_app_ug/l2_forward_real_virtual.rst
> +++ b/doc/guides/sample_app_ug/l2_forward_real_virtual.rst
> @@ -207,31 +207,25 @@ and the application to store network packet data:
>  
>      /* create the mbuf pool */
>  
> -    l2fwd_pktmbuf_pool = rte_mempool_create("mbuf_pool", NB_MBUF, MBUF_SIZE, 32, sizeof(struct rte_pktmbuf_pool_private),
> -        rte_pktmbuf_pool_init, NULL, rte_pktmbuf_init, NULL, SOCKET0, 0);
> +    l2fwd_pktmbuf_pool = rte_pktmbuf_pool_create("mbuf_pool", NB_MBUF,
> +    	MEMPOOL_CACHE_SIZE, 0, RTE_MBUF_DEFAULT_BUF_SIZE,
> +    	rte_socket_id());
>  
>      if (l2fwd_pktmbuf_pool == NULL)
>          rte_panic("Cannot init mbuf pool\n");
>  
>  The rte_mempool is a generic structure used to handle pools of objects.
> -In this case, it is necessary to create a pool that will be used by the driver,
> -which expects to have some reserved space in the mempool structure,
> -sizeof(struct rte_pktmbuf_pool_private) bytes.
> -The number of allocated pkt mbufs is NB_MBUF, with a size of MBUF_SIZE each.
> +In this case, it is necessary to create a pool that will be used by the driver.
> +The number of allocated pkt mbufs is NB_MBUF, with a data room size of
> +RTE_MBUF_DEFAULT_BUF_SIZE each.
>  A per-lcore cache of 32 mbufs is kept.
>  The memory is allocated in NUMA socket 0,
>  but it is possible to extend this code to allocate one mbuf pool per socket.
>  
> -Two callback pointers are also given to the rte_mempool_create() function:
> -
> -*   The first callback pointer is to rte_pktmbuf_pool_init() and is used
> -    to initialize the private data of the mempool, which is needed by the driver.
> -    This function is provided by the mbuf API, but can be copied and extended by the developer.
> -
> -*   The second callback pointer given to rte_mempool_create() is the mbuf initializer.
> -    The default is used, that is, rte_pktmbuf_init(), which is provided in the rte_mbuf library.
> -    If a more complex application wants to extend the rte_pktmbuf structure for its own needs,
> -    a new function derived from rte_pktmbuf_init( ) can be created.
> +The rte_pktmbuf_pool_create() function uses the default mbuf pool and mbuf
> +initializers, respectively rte_pktmbuf_pool_init() and rte_pktmbuf_init().
> +An advanced application may want to use the mempool API to create the
> +mbuf pool with more control.
>  
>  .. _l2_fwd_app_dvr_init:
>  
> diff --git a/doc/guides/sample_app_ug/ptpclient.rst b/doc/guides/sample_app_ug/ptpclient.rst
> index 6e425b7..405a267 100644
> --- a/doc/guides/sample_app_ug/ptpclient.rst
> +++ b/doc/guides/sample_app_ug/ptpclient.rst
> @@ -171,15 +171,8 @@ used by the application:
>  
>  .. code-block:: c
>  
> -    mbuf_pool = rte_mempool_create("MBUF_POOL",
> -                                   NUM_MBUFS * nb_ports,
> -                                   MBUF_SIZE,
> -                                   MBUF_CACHE_SIZE,
> -                                   sizeof(struct rte_pktmbuf_pool_private),
> -                                   rte_pktmbuf_pool_init, NULL,
> -                                   rte_pktmbuf_init,      NULL,
> -                                   rte_socket_id(),
> -                                   0);
> +    mbuf_pool = rte_pktmbuf_pool_create("MBUF_POOL", NUM_MBUFS * nb_ports,
> +    	MBUF_CACHE_SIZE, 0, RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());
>  
>  Mbufs are the packet buffer structure used by DPDK. They are explained in
>  detail in the "Mbuf Library" section of the *DPDK Programmer's Guide*.
> diff --git a/doc/guides/sample_app_ug/quota_watermark.rst b/doc/guides/sample_app_ug/quota_watermark.rst
> index c56683a..a0da8fe 100644
> --- a/doc/guides/sample_app_ug/quota_watermark.rst
> +++ b/doc/guides/sample_app_ug/quota_watermark.rst
> @@ -254,32 +254,24 @@ It contains a set of mbuf objects that are used by the driver and the applicatio
>  .. code-block:: c
>  
>      /* Create a pool of mbuf to store packets */
> -
> -    mbuf_pool = rte_mempool_create("mbuf_pool", MBUF_PER_POOL, MBUF_SIZE, 32, sizeof(struct rte_pktmbuf_pool_private),
> -        rte_pktmbuf_pool_init, NULL, rte_pktmbuf_init, NULL, rte_socket_id(), 0);
> +    mbuf_pool = rte_pktmbuf_pool_create("mbuf_pool", MBUF_PER_POOL, 32, 0,
> +    	MBUF_DATA_SIZE, rte_socket_id());
>  
>      if (mbuf_pool == NULL)
>          rte_panic("%s\n", rte_strerror(rte_errno));
>  
>  The rte_mempool is a generic structure used to handle pools of objects.
> -In this case, it is necessary to create a pool that will be used by the driver,
> -which expects to have some reserved space in the mempool structure, sizeof(struct rte_pktmbuf_pool_private) bytes.
> +In this case, it is necessary to create a pool that will be used by the driver.
>  
> -The number of allocated pkt mbufs is MBUF_PER_POOL, with a size of MBUF_SIZE each.
> +The number of allocated pkt mbufs is MBUF_PER_POOL, with a data room size
> +of MBUF_DATA_SIZE each.
>  A per-lcore cache of 32 mbufs is kept.
>  The memory is allocated in on the master lcore's socket, but it is possible to extend this code to allocate one mbuf pool per socket.
>  
> -Two callback pointers are also given to the rte_mempool_create() function:
> -
> -*   The first callback pointer is to rte_pktmbuf_pool_init() and is used to initialize the private data of the mempool,
> -    which is needed by the driver.
> -    This function is provided by the mbuf API, but can be copied and extended by the developer.
> -
> -*   The second callback pointer given to rte_mempool_create() is the mbuf initializer.
> -
> -The default is used, that is, rte_pktmbuf_init(), which is provided in the rte_mbuf library.
> -If a more complex application wants to extend the rte_pktmbuf structure for its own needs,
> -a new function derived from rte_pktmbuf_init() can be created.
> +The rte_pktmbuf_pool_create() function uses the default mbuf pool and mbuf
> +initializers, respectively rte_pktmbuf_pool_init() and rte_pktmbuf_init().
> +An advanced application may want to use the mempool API to create the
> +mbuf pool with more control.
>  
>  Ports Configuration and Pairing
>  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> diff --git a/drivers/net/bonding/rte_eth_bond_8023ad.c b/drivers/net/bonding/rte_eth_bond_8023ad.c
> index 2f7ae70..af211ca 100644
> --- a/drivers/net/bonding/rte_eth_bond_8023ad.c
> +++ b/drivers/net/bonding/rte_eth_bond_8023ad.c
> @@ -888,8 +888,8 @@
>  	RTE_ASSERT(port->tx_ring == NULL);
>  	socket_id = rte_eth_devices[slave_id].data->numa_node;
>  
> -	element_size = sizeof(struct slow_protocol_frame) + sizeof(struct rte_mbuf)
> -				+ RTE_PKTMBUF_HEADROOM;
> +	element_size = sizeof(struct slow_protocol_frame) +
> +		RTE_PKTMBUF_HEADROOM;
>  
>  	/* The size of the mempool should be at least:
>  	 * the sum of the TX descriptors + BOND_MODE_8023AX_SLAVE_TX_PKTS */
> @@ -900,11 +900,10 @@
>  	}
>  
>  	snprintf(mem_name, RTE_DIM(mem_name), "slave_port%u_pool", slave_id);
> -	port->mbuf_pool = rte_mempool_create(mem_name,
> -		total_tx_desc, element_size,
> -		RTE_MEMPOOL_CACHE_MAX_SIZE >= 32 ? 32 : RTE_MEMPOOL_CACHE_MAX_SIZE,
> -		sizeof(struct rte_pktmbuf_pool_private), rte_pktmbuf_pool_init,
> -		NULL, rte_pktmbuf_init, NULL, socket_id, MEMPOOL_F_NO_SPREAD);
> +	port->mbuf_pool = rte_pktmbuf_pool_create(mem_name, total_tx_desc,
> +		RTE_MEMPOOL_CACHE_MAX_SIZE >= 32 ?
> +			32 : RTE_MEMPOOL_CACHE_MAX_SIZE,
> +		0, element_size, socket_id);
>  
>  	/* Any memory allocation failure in initalization is critical because
>  	 * resources can't be free, so reinitialization is impossible. */
> diff --git a/examples/ip_pipeline/init.c b/examples/ip_pipeline/init.c
> index 3b36b53..d55c3b4 100644
> --- a/examples/ip_pipeline/init.c
> +++ b/examples/ip_pipeline/init.c
> @@ -324,16 +324,14 @@
>  		struct app_mempool_params *p = &app->mempool_params[i];
>  
>  		APP_LOG(app, HIGH, "Initializing %s ...", p->name);
> -		app->mempool[i] = rte_mempool_create(
> -				p->name,
> -				p->pool_size,
> -				p->buffer_size,
> -				p->cache_size,
> -				sizeof(struct rte_pktmbuf_pool_private),
> -				rte_pktmbuf_pool_init, NULL,
> -				rte_pktmbuf_init, NULL,
> -				p->cpu_socket_id,
> -				0);
> +		app->mempool[i] = rte_pktmbuf_pool_create(
> +			p->name,
> +			p->pool_size,
> +			p->cache_size,
> +			0, /* priv_size */
> +			p->buffer_size -
> +				sizeof(struct rte_mbuf), /* mbuf data size */
> +			p->cpu_socket_id);
>  
>  		if (app->mempool[i] == NULL)
>  			rte_panic("%s init error\n", p->name);
> diff --git a/examples/ip_reassembly/main.c b/examples/ip_reassembly/main.c
> index 50fe422..f6378bf 100644
> --- a/examples/ip_reassembly/main.c
> +++ b/examples/ip_reassembly/main.c
> @@ -84,9 +84,7 @@
>  
>  #define MAX_JUMBO_PKT_LEN  9600
>  
> -#define	BUF_SIZE	RTE_MBUF_DEFAULT_DATAROOM
> -#define MBUF_SIZE	\
> -	(BUF_SIZE + sizeof(struct rte_mbuf) + RTE_PKTMBUF_HEADROOM)
> +#define	MBUF_DATA_SIZE	RTE_MBUF_DEFAULT_BUF_SIZE
>  
>  #define NB_MBUF 8192
>  
> @@ -909,11 +907,13 @@ struct rte_lpm6_config lpm6_config = {
>  
>  	snprintf(buf, sizeof(buf), "mbuf_pool_%u_%u", lcore, queue);
>  
> -	if ((rxq->pool = rte_mempool_create(buf, nb_mbuf, MBUF_SIZE, 0,
> -			sizeof(struct rte_pktmbuf_pool_private),
> -			rte_pktmbuf_pool_init, NULL, rte_pktmbuf_init, NULL,
> -			socket, MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET)) == NULL) {
> -		RTE_LOG(ERR, IP_RSMBL, "mempool_create(%s) failed", buf);
> +	rxq->pool = rte_pktmbuf_pool_create(buf, nb_mbuf,
> +		0, /* cache size */
> +		0, /* priv size */
> +		MBUF_DATA_SIZE, socket);
> +	if (rxq->pool == NULL) {
> +		RTE_LOG(ERR, IP_RSMBL,
> +			"rte_pktmbuf_pool_create(%s) failed", buf);
>  		return -1;
>  	}
>  
> diff --git a/examples/multi_process/l2fwd_fork/main.c b/examples/multi_process/l2fwd_fork/main.c
> index 2d951d9..b34916e 100644
> --- a/examples/multi_process/l2fwd_fork/main.c
> +++ b/examples/multi_process/l2fwd_fork/main.c
> @@ -77,8 +77,7 @@
>  
>  #define RTE_LOGTYPE_L2FWD RTE_LOGTYPE_USER1
>  #define MBUF_NAME	"mbuf_pool_%d"
> -#define MBUF_SIZE	\
> -(RTE_MBUF_DEFAULT_DATAROOM + sizeof(struct rte_mbuf) + RTE_PKTMBUF_HEADROOM)
> +#define MBUF_DATA_SIZE	RTE_MBUF_DEFAULT_BUF_SIZE
>  #define NB_MBUF   8192
>  #define RING_MASTER_NAME	"l2fwd_ring_m2s_"
>  #define RING_SLAVE_NAME		"l2fwd_ring_s2m_"
> @@ -989,14 +988,10 @@ struct l2fwd_port_statistics {
>  		flags = MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET;
>  		snprintf(buf_name, RTE_MEMPOOL_NAMESIZE, MBUF_NAME, portid);
>  		l2fwd_pktmbuf_pool[portid] =
> -			rte_mempool_create(buf_name, NB_MBUF,
> -					   MBUF_SIZE, 32,
> -					   sizeof(struct rte_pktmbuf_pool_private),
> -					   rte_pktmbuf_pool_init, NULL,
> -					   rte_pktmbuf_init, NULL,
> -					   rte_socket_id(), flags);
> +			rte_pktmbuf_pool_create(buf_name, NB_MBUF, 32,
> +				0, MBUF_DATA_SIZE, rte_socket_id());
>  		if (l2fwd_pktmbuf_pool[portid] == NULL)
> -			rte_exit(EXIT_FAILURE, "Cannot init mbuf pool\n");
> +			rte_exit(EXIT_FAILURE, "Cannot create mbuf pool\n");
>  
>  		printf("Create mbuf %s\n", buf_name);
>  	}
> diff --git a/examples/tep_termination/main.c b/examples/tep_termination/main.c
> index bd1dc96..20dafdb 100644
> --- a/examples/tep_termination/main.c
> +++ b/examples/tep_termination/main.c
> @@ -68,7 +68,7 @@
>  				(nb_switching_cores * MBUF_CACHE_SIZE))
>  
>  #define MBUF_CACHE_SIZE 128
> -#define MBUF_SIZE (2048 + sizeof(struct rte_mbuf) + RTE_PKTMBUF_HEADROOM)
> +#define MBUF_DATA_SIZE RTE_MBUF_DEFAULT_BUF_SIZE
>  
>  #define MAX_PKT_BURST 32	/* Max burst size for RX/TX */
>  #define BURST_TX_DRAIN_US 100	/* TX drain every ~100us */
> @@ -1199,15 +1199,13 @@ static inline void __attribute__((always_inline))
>  			MAX_SUP_PORTS);
>  	}
>  	/* Create the mbuf pool. */
> -	mbuf_pool = rte_mempool_create(
> +	mbuf_pool = rte_pktmbuf_pool_create(
>  			"MBUF_POOL",
> -			NUM_MBUFS_PER_PORT
> -			* valid_nb_ports,
> -			MBUF_SIZE, MBUF_CACHE_SIZE,
> -			sizeof(struct rte_pktmbuf_pool_private),
> -			rte_pktmbuf_pool_init, NULL,
> -			rte_pktmbuf_init, NULL,
> -			rte_socket_id(), 0);
> +			NUM_MBUFS_PER_PORT * valid_nb_ports,
> +			MBUF_CACHE_SIZE,
> +			0,
> +			MBUF_DATA_SIZE,
> +			rte_socket_id());
>  	if (mbuf_pool == NULL)
>  		rte_exit(EXIT_FAILURE, "Cannot create mbuf pool\n");
>  
> diff --git a/lib/librte_mbuf/rte_mbuf.c b/lib/librte_mbuf/rte_mbuf.c
> index 72ad91e..3fb2700 100644
> --- a/lib/librte_mbuf/rte_mbuf.c
> +++ b/lib/librte_mbuf/rte_mbuf.c
> @@ -62,7 +62,7 @@
>  
>  /*
>   * ctrlmbuf constructor, given as a callback function to
> - * rte_mempool_create()
> + * rte_mempool_obj_iter() or rte_mempool_create()
>   */
>  void
>  rte_ctrlmbuf_init(struct rte_mempool *mp,
> @@ -77,7 +77,8 @@
>  
>  /*
>   * pktmbuf pool constructor, given as a callback function to
> - * rte_mempool_create()
> + * rte_mempool_create(), or called directly if using
> + * rte_mempool_create_empty()/rte_mempool_populate()
>   */
>  void
>  rte_pktmbuf_pool_init(struct rte_mempool *mp, void *opaque_arg)
> @@ -110,7 +111,7 @@
>  
>  /*
>   * pktmbuf constructor, given as a callback function to
> - * rte_mempool_create().
> + * rte_mempool_obj_iter() or rte_mempool_create().
>   * Set the fields of a packet mbuf to their default values.
>   */
>  void
> diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
> index bfce9f4..b1d4ccb 100644
> --- a/lib/librte_mbuf/rte_mbuf.h
> +++ b/lib/librte_mbuf/rte_mbuf.h
> @@ -44,6 +44,13 @@
>   * buffers. The message buffers are stored in a mempool, using the
>   * RTE mempool library.
>   *
> + * The preferred way to create a mbuf pool is to use
> + * rte_pktmbuf_pool_create(). However, in some situations, an
> + * application may want to have more control (ex: populate the pool with
> + * specific memory), in this case it is possible to use functions from
> + * rte_mempool. See how rte_pktmbuf_pool_create() is implemented for
> + * details.
> + *
>   * This library provides an API to allocate/free packet mbufs, which are
>   * used to carry network packets.
>   *
> @@ -810,14 +817,14 @@ static inline void __attribute__((always_inline))
>   * This function initializes some fields in an mbuf structure that are
>   * not modified by the user once created (mbuf type, origin pool, buffer
>   * start address, and so on). This function is given as a callback function
> - * to rte_mempool_create() at pool creation time.
> + * to rte_mempool_obj_iter() or rte_mempool_create() at pool creation time.
>   *
>   * @param mp
>   *   The mempool from which the mbuf is allocated.
>   * @param opaque_arg
>   *   A pointer that can be used by the user to retrieve useful information
> - *   for mbuf initialization. This pointer comes from the ``init_arg``
> - *   parameter of rte_mempool_create().
> + *   for mbuf initialization. This pointer is the opaque argument passed to
> + *   rte_mempool_obj_iter() or rte_mempool_create().
>   * @param m
>   *   The mbuf to initialize.
>   * @param i
> @@ -891,14 +898,14 @@ void rte_ctrlmbuf_init(struct rte_mempool *mp, void *opaque_arg,
>   * This function initializes some fields in the mbuf structure that are
>   * not modified by the user once created (origin pool, buffer start
>   * address, and so on). This function is given as a callback function to
> - * rte_mempool_create() at pool creation time.
> + * rte_mempool_obj_iter() or rte_mempool_create() at pool creation time.
>   *
>   * @param mp
>   *   The mempool from which mbufs originate.
>   * @param opaque_arg
>   *   A pointer that can be used by the user to retrieve useful information
> - *   for mbuf initialization. This pointer comes from the ``init_arg``
> - *   parameter of rte_mempool_create().
> + *   for mbuf initialization. This pointer is the opaque argument passed to
> + *   rte_mempool_obj_iter() or rte_mempool_create().
>   * @param m
>   *   The mbuf to initialize.
>   * @param i
> @@ -913,7 +920,8 @@ void rte_pktmbuf_init(struct rte_mempool *mp, void *opaque_arg,
>   *
>   * This function initializes the mempool private data in the case of a
>   * pktmbuf pool. This private data is needed by the driver. The
> - * function is given as a callback function to rte_mempool_create() at
> + * function must be called on the mempool before it is used, or it
> + * can be given as a callback function to rte_mempool_create() at
>   * pool creation. It can be extended by the user, for example, to
>   * provide another packet size.
>   *
> @@ -921,8 +929,8 @@ void rte_pktmbuf_init(struct rte_mempool *mp, void *opaque_arg,
>   *   The mempool from which mbufs originate.
>   * @param opaque_arg
>   *   A pointer that can be used by the user to retrieve useful information
> - *   for mbuf initialization. This pointer comes from the ``init_arg``
> - *   parameter of rte_mempool_create().
> + *   for mbuf initialization. This pointer is the opaque argument passed to
> + *   rte_mempool_create().
>   */
>  void rte_pktmbuf_pool_init(struct rte_mempool *mp, void *opaque_arg);
>  
> @@ -930,8 +938,7 @@ void rte_pktmbuf_init(struct rte_mempool *mp, void *opaque_arg,
>   * Create a mbuf pool.
>   *
>   * This function creates and initializes a packet mbuf pool. It is
> - * a wrapper to rte_mempool_create() with the proper packet constructor
> - * and mempool constructor.
> + * a wrapper to rte_mempool functions.
>   *
>   * @param name
>   *   The name of the mbuf pool.
> -- 
> 1.9.1
> 

Thread overview: 12+ messages
2017-01-17 18:42 Hemant Agrawal
2017-01-17 13:31 ` Santosh Shukla [this message]
2017-01-18  6:01   ` Hemant Agrawal
2017-01-31 10:32   ` Olivier Matz
2017-01-20  7:11 ` [dpdk-dev] [PATCH v2] " Hemant Agrawal
2017-01-31  9:55   ` Olivier Matz
2017-02-01 19:31     ` Hemant Agrawal
2017-02-14 22:00   ` [dpdk-dev] [PATCH v3] " Hemant Agrawal
2017-03-14  9:14     ` [dpdk-dev] [PATCH v4] " Olivier Matz
2017-03-15 12:57       ` Thomas Monjalon
2017-01-17 18:52 [dpdk-dev] [PATCHv4 00/33] NXP DPAA2 PMD Hemant Agrawal
2017-01-19 13:23 ` [dpdk-dev] [PATCHv5 " Hemant Agrawal
2017-01-19 13:23   ` [dpdk-dev] [PATCH] mbuf: use pktmbuf helper to create the pool Hemant Agrawal
2017-01-19 13:27     ` Hemant Agrawal
