From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Mon, 16 Jan 2017 21:00:37 +0530
From: Santosh Shukla <Santosh.Shukla@cavium.com>
To: Olivier Matz
Message-ID: <20170116153022.GA8179@santosh-Latitude-E5530-non-vPro>
References: <1474292567-21912-1-git-send-email-olivier.matz@6wind.com>
 <1474292567-21912-3-git-send-email-olivier.matz@6wind.com>
In-Reply-To: <1474292567-21912-3-git-send-email-olivier.matz@6wind.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
User-Agent: Mutt/1.5.21 (2010-09-15)
Subject: Re: [dpdk-dev] [RFC 2/7] mbuf: use helper to create the pool
X-BeenThere: dev@dpdk.org
List-Id: DPDK patches and discussions

Hi Olivier,

On Mon, Sep 19, 2016 at 03:42:42PM +0200, Olivier Matz wrote:
> When possible, replace the uses of rte_mempool_create() with
> the helper provided in librte_mbuf: rte_pktmbuf_pool_create().
>
> This is the preferred way to create a mbuf pool.
>
> By the way, also update the documentation.
>

I am working on an ext-mempool PMD driver for the Cavium SoC, so I am
interested in this thread. I am wondering why it was not followed up:
is it because we don't want to deprecate rte_mempool_create()? And if we
do want to, which release are you targeting?

Besides that, some high-level comments:

- Your changeset is missing the mempool test applications,
  i.e. test_mempool.c / test_mempool_perf.c. Do you plan to accommodate
  them?

- An ext-mempool does not necessarily need MBUF_CACHE_SIZE. Let the HW
  manager hand buffers directly to the application rather than caching
  the same buffers per core; it will save some cycles. What do you think?

- I found that the ext-mempool API does not map well onto the Cavium
  hardware, for the following reason. Say the application calls:

    rte_pktmbuf_pool_create()
      --> rte_mempool_create_empty()
      --> rte_mempool_ops_byname()
      --> rte_mempool_populate_default()
        --> rte_mempool_ops_alloc()
          --> ext-mempool-specific pool-create handler

  In my case, the pool-create handler needs the hugepage-mapped
  mz->vaddr/paddr in order to program the HW manager with the start/end
  address of the pool, and the current ext-mempool API does not support
  that. I therefore chose to add a new op, something like the code below,
  which addresses this case; we'll post the patch soon.

/**
 * Set the memzone va/pa addr range in the external pool.
 */
typedef void (*rte_mempool_populate_mz_range_t)(const struct rte_memzone *mz);

/** Structure defining mempool operations structure */
struct rte_mempool_ops {
	char name[RTE_MEMPOOL_OPS_NAMESIZE]; /**< Name of mempool ops struct. */
	rte_mempool_alloc_t alloc;           /**< Allocate private data. */
	rte_mempool_free_t free;             /**< Free the external pool. */
	rte_mempool_enqueue_t enqueue;       /**< Enqueue an object. */
	rte_mempool_dequeue_t dequeue;       /**< Dequeue an object. */
	rte_mempool_get_count get_count;     /**< Get qty of available objs. */
	rte_mempool_populate_mz_range_t populate_mz_range;
	/**< Set the memzone range for the pool. */
} __rte_cache_aligned;

Let me know your opinion.

Thanks.
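For illustration, here is a rough sketch of how a driver could register an
ops structure carrying the proposed callback. The handler names (cvm_fpa_*)
are placeholders, and it assumes the new populate_mz_range member has been
added to struct rte_mempool_ops as above:

#include <errno.h>

#include <rte_mempool.h>
#include <rte_memzone.h>

/* Placeholder handlers -- a real driver would talk to the HW pool manager. */
static int cvm_fpa_alloc(struct rte_mempool *mp) { (void)mp; return 0; }
static void cvm_fpa_free(struct rte_mempool *mp) { (void)mp; }
static int cvm_fpa_enqueue(struct rte_mempool *mp, void * const *obj_table,
			   unsigned int n)
{ (void)mp; (void)obj_table; (void)n; return 0; }
static int cvm_fpa_dequeue(struct rte_mempool *mp, void **obj_table,
			   unsigned int n)
{ (void)mp; (void)obj_table; (void)n; return -ENOBUFS; }
static unsigned int cvm_fpa_get_count(const struct rte_mempool *mp)
{ (void)mp; return 0; }

/*
 * Proposed new op: called with the memzone backing the pool, so the
 * driver can program the HW manager with the start/end address range.
 */
static void cvm_fpa_populate_mz_range(const struct rte_memzone *mz)
{
	/* program HW with [mz->addr, mz->addr + mz->len) and mz->phys_addr */
	(void)mz;
}

static const struct rte_mempool_ops cvm_fpa_ops = {
	.name = "cvm_fpa",
	.alloc = cvm_fpa_alloc,
	.free = cvm_fpa_free,
	.enqueue = cvm_fpa_enqueue,
	.dequeue = cvm_fpa_dequeue,
	.get_count = cvm_fpa_get_count,
	.populate_mz_range = cvm_fpa_populate_mz_range, /* proposed member */
};

MEMPOOL_REGISTER_OPS(cvm_fpa_ops);

The existing MEMPOOL_REGISTER_OPS() path stays untouched; only mempool
drivers that need the memzone range would fill in the new member.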
> Signed-off-by: Olivier Matz
> ---
>  app/test/test_link_bonding_rssconf.c | 11 ++++----
>  doc/guides/prog_guide/mbuf_lib.rst | 2 +-
>  doc/guides/sample_app_ug/ip_reassembly.rst | 13 +++++----
>  doc/guides/sample_app_ug/ipv4_multicast.rst | 12 ++++----
>  doc/guides/sample_app_ug/l2_forward_job_stats.rst | 33 ++++++++--------------
>  .../sample_app_ug/l2_forward_real_virtual.rst | 26 +++++++----------
>  doc/guides/sample_app_ug/ptpclient.rst | 12 ++------
>  doc/guides/sample_app_ug/quota_watermark.rst | 26 ++++++-----------
>  drivers/net/bonding/rte_eth_bond_8023ad.c | 13 ++++-----
>  examples/ip_pipeline/init.c | 19 ++++++-------
>  examples/ip_reassembly/main.c | 16 +++++------
>  examples/multi_process/l2fwd_fork/main.c | 14 ++++-----
>  examples/tep_termination/main.c | 17 ++++++-----
>  lib/librte_mbuf/rte_mbuf.c | 7 +++--
>  lib/librte_mbuf/rte_mbuf.h | 29 +++++++++++--------
>  15 files changed, 111 insertions(+), 139 deletions(-)
>
> diff --git a/app/test/test_link_bonding_rssconf.c b/app/test/test_link_bonding_rssconf.c
> index 34f1c16..dd1bcc7 100644
> --- a/app/test/test_link_bonding_rssconf.c
> +++ b/app/test/test_link_bonding_rssconf.c
> @@ -67,7 +67,7 @@
>  #define SLAVE_RXTX_QUEUE_FMT ("rssconf_slave%d_q%d")
>
>  #define NUM_MBUFS 8191
> -#define MBUF_SIZE (1600 + sizeof(struct rte_mbuf) + RTE_PKTMBUF_HEADROOM)
> +#define MBUF_SIZE (1600 + RTE_PKTMBUF_HEADROOM)
>  #define MBUF_CACHE_SIZE 250
>  #define BURST_SIZE 32
>
> @@ -536,13 +536,12 @@ test_setup(void)
>
>  	if (test_params.mbuf_pool == NULL) {
>
> -		test_params.mbuf_pool = rte_mempool_create("RSS_MBUF_POOL", NUM_MBUFS *
> -			SLAVE_COUNT, MBUF_SIZE, MBUF_CACHE_SIZE,
> -			sizeof(struct rte_pktmbuf_pool_private), rte_pktmbuf_pool_init,
> -			NULL, rte_pktmbuf_init, NULL, rte_socket_id(), 0);
> +		test_params.mbuf_pool = rte_pktmbuf_pool_create(
> +			"RSS_MBUF_POOL", NUM_MBUFS * SLAVE_COUNT,
> +			MBUF_CACHE_SIZE, 0, MBUF_SIZE, rte_socket_id(), NULL);
>
>  		TEST_ASSERT(test_params.mbuf_pool != NULL,
> -			"rte_mempool_create failed\n");
> +			"rte_pktmbuf_pool_create failed\n");
>  	}
>
>  	/*
Create / initialize ring eth devs. */ > diff --git a/doc/guides/prog_guide/mbuf_lib.rst b/doc/guides/prog_guide/mbuf_lib.rst > index 8e61682..b366e04 100644 > --- a/doc/guides/prog_guide/mbuf_lib.rst > +++ b/doc/guides/prog_guide/mbuf_lib.rst > @@ -103,7 +103,7 @@ Constructors > Packet and control mbuf constructors are provided by the API. > The rte_pktmbuf_init() and rte_ctrlmbuf_init() functions initialize some fields in the mbuf structure that > are not modified by the user once created (mbuf type, origin pool, buffer start address, and so on). > -This function is given as a callback function to the rte_mempool_create() function at pool creation time. > +This function is given as a callback function to the rte_pktmbuf_pool_create() or the rte_mempool_create() function at pool creation time. > > Allocating and Freeing mbufs > ---------------------------- > diff --git a/doc/guides/sample_app_ug/ip_reassembly.rst b/doc/guides/sample_app_ug/ip_reassembly.rst > index 3c5cc70..4b6023a 100644 > --- a/doc/guides/sample_app_ug/ip_reassembly.rst > +++ b/doc/guides/sample_app_ug/ip_reassembly.rst > @@ -223,11 +223,14 @@ each RX queue uses its own mempool. > > snprintf(buf, sizeof(buf), "mbuf_pool_%u_%u", lcore, queue); > > - if ((rxq->pool = rte_mempool_create(buf, nb_mbuf, MBUF_SIZE, 0, sizeof(struct rte_pktmbuf_pool_private), rte_pktmbuf_pool_init, NULL, > - rte_pktmbuf_init, NULL, socket, MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET)) == NULL) { > - > - RTE_LOG(ERR, IP_RSMBL, "mempool_create(%s) failed", buf); > - return -1; > + rxq->pool = rte_pktmbuf_pool_create(buf, nb_mbuf, > + 0, /* cache size */ > + 0, /* priv size */ > + MBUF_DATA_SIZE, socket, "ring_sp_sc"); > + if (rxq->pool == NULL) { > + RTE_LOG(ERR, IP_RSMBL, > + "rte_pktmbuf_pool_create(%s) failed", buf); > + return -1; > } > > Packet Reassembly and Forwarding > diff --git a/doc/guides/sample_app_ug/ipv4_multicast.rst b/doc/guides/sample_app_ug/ipv4_multicast.rst > index 72da8c4..099d61a 100644 > --- a/doc/guides/sample_app_ug/ipv4_multicast.rst > +++ b/doc/guides/sample_app_ug/ipv4_multicast.rst > @@ -145,12 +145,12 @@ Memory pools for indirect buffers are initialized differently from the memory po > > .. code-block:: c > > - packet_pool = rte_mempool_create("packet_pool", NB_PKT_MBUF, PKT_MBUF_SIZE, 32, sizeof(struct rte_pktmbuf_pool_private), > - rte_pktmbuf_pool_init, NULL, rte_pktmbuf_init, NULL, rte_socket_id(), 0); > - > - header_pool = rte_mempool_create("header_pool", NB_HDR_MBUF, HDR_MBUF_SIZE, 32, 0, NULL, NULL, rte_pktmbuf_init, NULL, rte_socket_id(), 0); > - clone_pool = rte_mempool_create("clone_pool", NB_CLONE_MBUF, > - CLONE_MBUF_SIZE, 32, 0, NULL, NULL, rte_pktmbuf_init, NULL, rte_socket_id(), 0); > + packet_pool = rte_pktmbuf_pool_create("packet_pool", NB_PKT_MBUF, 32, > + 0, PKT_MBUF_DATA_SIZE, rte_socket_id(), NULL); > + header_pool = rte_pktmbuf_pool_create("header_pool", NB_HDR_MBUF, 32, > + 0, HDR_MBUF_DATA_SIZE, rte_socket_id(), NULL); > + clone_pool = rte_pktmbuf_pool_create("clone_pool", NB_CLONE_MBUF, 32, > + 0, 0, rte_socket_id(), NULL); > > The reason for this is because indirect buffers are not supposed to hold any packet data and > therefore can be initialized with lower amount of reserved memory for each buffer. 
> diff --git a/doc/guides/sample_app_ug/l2_forward_job_stats.rst b/doc/guides/sample_app_ug/l2_forward_job_stats.rst > index 2444e36..a1b3f43 100644 > --- a/doc/guides/sample_app_ug/l2_forward_job_stats.rst > +++ b/doc/guides/sample_app_ug/l2_forward_job_stats.rst > @@ -193,36 +193,25 @@ and the application to store network packet data: > .. code-block:: c > > /* create the mbuf pool */ > - l2fwd_pktmbuf_pool = > - rte_mempool_create("mbuf_pool", NB_MBUF, > - MBUF_SIZE, 32, > - sizeof(struct rte_pktmbuf_pool_private), > - rte_pktmbuf_pool_init, NULL, > - rte_pktmbuf_init, NULL, > - rte_socket_id(), 0); > + l2fwd_pktmbuf_pool = rte_pktmbuf_pool_create("mbuf_pool", NB_MBUF, > + MEMPOOL_CACHE_SIZE, 0, RTE_MBUF_DEFAULT_BUF_SIZE, > + rte_socket_id(), NULL); > > if (l2fwd_pktmbuf_pool == NULL) > rte_exit(EXIT_FAILURE, "Cannot init mbuf pool\n"); > > The rte_mempool is a generic structure used to handle pools of objects. > -In this case, it is necessary to create a pool that will be used by the driver, > -which expects to have some reserved space in the mempool structure, > -sizeof(struct rte_pktmbuf_pool_private) bytes. > -The number of allocated pkt mbufs is NB_MBUF, with a size of MBUF_SIZE each. > -A per-lcore cache of 32 mbufs is kept. > +In this case, it is necessary to create a pool that will be used by the driver. > +The number of allocated pkt mbufs is NB_MBUF, with a data room size of > +RTE_MBUF_DEFAULT_BUF_SIZE each. > +A per-lcore cache of MEMPOOL_CACHE_SIZE mbufs is kept. > The memory is allocated in rte_socket_id() socket, > but it is possible to extend this code to allocate one mbuf pool per socket. > > -Two callback pointers are also given to the rte_mempool_create() function: > - > -* The first callback pointer is to rte_pktmbuf_pool_init() and is used > - to initialize the private data of the mempool, which is needed by the driver. > - This function is provided by the mbuf API, but can be copied and extended by the developer. > - > -* The second callback pointer given to rte_mempool_create() is the mbuf initializer. > - The default is used, that is, rte_pktmbuf_init(), which is provided in the rte_mbuf library. > - If a more complex application wants to extend the rte_pktmbuf structure for its own needs, > - a new function derived from rte_pktmbuf_init( ) can be created. > +The rte_pktmbuf_pool_create() function uses the default mbuf pool and mbuf > +initializers, respectively rte_pktmbuf_pool_init() and rte_pktmbuf_init(). > +An advanced application may want to use the mempool API to create the > +mbuf pool with more control. > > Driver Initialization > ~~~~~~~~~~~~~~~~~~~~~ > diff --git a/doc/guides/sample_app_ug/l2_forward_real_virtual.rst b/doc/guides/sample_app_ug/l2_forward_real_virtual.rst > index a1c10c0..2330148 100644 > --- a/doc/guides/sample_app_ug/l2_forward_real_virtual.rst > +++ b/doc/guides/sample_app_ug/l2_forward_real_virtual.rst > @@ -197,31 +197,25 @@ and the application to store network packet data: > > /* create the mbuf pool */ > > - l2fwd_pktmbuf_pool = rte_mempool_create("mbuf_pool", NB_MBUF, MBUF_SIZE, 32, sizeof(struct rte_pktmbuf_pool_private), > - rte_pktmbuf_pool_init, NULL, rte_pktmbuf_init, NULL, SOCKET0, 0); > + l2fwd_pktmbuf_pool = rte_pktmbuf_pool_create("mbuf_pool", NB_MBUF, > + MEMPOOL_CACHE_SIZE, 0, RTE_MBUF_DEFAULT_BUF_SIZE, > + rte_socket_id(), NULL); > > if (l2fwd_pktmbuf_pool == NULL) > rte_panic("Cannot init mbuf pool\n"); > > The rte_mempool is a generic structure used to handle pools of objects. 
> -In this case, it is necessary to create a pool that will be used by the driver, > -which expects to have some reserved space in the mempool structure, > -sizeof(struct rte_pktmbuf_pool_private) bytes. > -The number of allocated pkt mbufs is NB_MBUF, with a size of MBUF_SIZE each. > +In this case, it is necessary to create a pool that will be used by the driver. > +The number of allocated pkt mbufs is NB_MBUF, with a data room size of > +RTE_MBUF_DEFAULT_BUF_SIZE each. > A per-lcore cache of 32 mbufs is kept. > The memory is allocated in NUMA socket 0, > but it is possible to extend this code to allocate one mbuf pool per socket. > > -Two callback pointers are also given to the rte_mempool_create() function: > - > -* The first callback pointer is to rte_pktmbuf_pool_init() and is used > - to initialize the private data of the mempool, which is needed by the driver. > - This function is provided by the mbuf API, but can be copied and extended by the developer. > - > -* The second callback pointer given to rte_mempool_create() is the mbuf initializer. > - The default is used, that is, rte_pktmbuf_init(), which is provided in the rte_mbuf library. > - If a more complex application wants to extend the rte_pktmbuf structure for its own needs, > - a new function derived from rte_pktmbuf_init( ) can be created. > +The rte_pktmbuf_pool_create() function uses the default mbuf pool and mbuf > +initializers, respectively rte_pktmbuf_pool_init() and rte_pktmbuf_init(). > +An advanced application may want to use the mempool API to create the > +mbuf pool with more control. > > .. _l2_fwd_app_dvr_init: > > diff --git a/doc/guides/sample_app_ug/ptpclient.rst b/doc/guides/sample_app_ug/ptpclient.rst > index 6e425b7..4bd87c2 100644 > --- a/doc/guides/sample_app_ug/ptpclient.rst > +++ b/doc/guides/sample_app_ug/ptpclient.rst > @@ -171,15 +171,9 @@ used by the application: > > .. code-block:: c > > - mbuf_pool = rte_mempool_create("MBUF_POOL", > - NUM_MBUFS * nb_ports, > - MBUF_SIZE, > - MBUF_CACHE_SIZE, > - sizeof(struct rte_pktmbuf_pool_private), > - rte_pktmbuf_pool_init, NULL, > - rte_pktmbuf_init, NULL, > - rte_socket_id(), > - 0); > + mbuf_pool = rte_pktmbuf_pool_create("MBUF_POOL", NUM_MBUFS * nb_ports, > + MBUF_CACHE_SIZE, 0, RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id(), > + NULL); > > Mbufs are the packet buffer structure used by DPDK. They are explained in > detail in the "Mbuf Library" section of the *DPDK Programmer's Guide*. > diff --git a/doc/guides/sample_app_ug/quota_watermark.rst b/doc/guides/sample_app_ug/quota_watermark.rst > index c56683a..f3a6624 100644 > --- a/doc/guides/sample_app_ug/quota_watermark.rst > +++ b/doc/guides/sample_app_ug/quota_watermark.rst > @@ -254,32 +254,24 @@ It contains a set of mbuf objects that are used by the driver and the applicatio > .. code-block:: c > > /* Create a pool of mbuf to store packets */ > - > - mbuf_pool = rte_mempool_create("mbuf_pool", MBUF_PER_POOL, MBUF_SIZE, 32, sizeof(struct rte_pktmbuf_pool_private), > - rte_pktmbuf_pool_init, NULL, rte_pktmbuf_init, NULL, rte_socket_id(), 0); > + mbuf_pool = rte_pktmbuf_pool_create("mbuf_pool", MBUF_PER_POOL, 32, 0, > + MBUF_DATA_SIZE, rte_socket_id(), NULL); > > if (mbuf_pool == NULL) > rte_panic("%s\n", rte_strerror(rte_errno)); > > The rte_mempool is a generic structure used to handle pools of objects. > -In this case, it is necessary to create a pool that will be used by the driver, > -which expects to have some reserved space in the mempool structure, sizeof(struct rte_pktmbuf_pool_private) bytes. 
> +In this case, it is necessary to create a pool that will be used by the driver. > > -The number of allocated pkt mbufs is MBUF_PER_POOL, with a size of MBUF_SIZE each. > +The number of allocated pkt mbufs is MBUF_PER_POOL, with a data room size > +of MBUF_DATA_SIZE each. > A per-lcore cache of 32 mbufs is kept. > The memory is allocated in on the master lcore's socket, but it is possible to extend this code to allocate one mbuf pool per socket. > > -Two callback pointers are also given to the rte_mempool_create() function: > - > -* The first callback pointer is to rte_pktmbuf_pool_init() and is used to initialize the private data of the mempool, > - which is needed by the driver. > - This function is provided by the mbuf API, but can be copied and extended by the developer. > - > -* The second callback pointer given to rte_mempool_create() is the mbuf initializer. > - > -The default is used, that is, rte_pktmbuf_init(), which is provided in the rte_mbuf library. > -If a more complex application wants to extend the rte_pktmbuf structure for its own needs, > -a new function derived from rte_pktmbuf_init() can be created. > +The rte_pktmbuf_pool_create() function uses the default mbuf pool and mbuf > +initializers, respectively rte_pktmbuf_pool_init() and rte_pktmbuf_init(). > +An advanced application may want to use the mempool API to create the > +mbuf pool with more control. > > Ports Configuration and Pairing > ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ > diff --git a/drivers/net/bonding/rte_eth_bond_8023ad.c b/drivers/net/bonding/rte_eth_bond_8023ad.c > index 2f7ae70..e234c63 100644 > --- a/drivers/net/bonding/rte_eth_bond_8023ad.c > +++ b/drivers/net/bonding/rte_eth_bond_8023ad.c > @@ -888,8 +888,8 @@ bond_mode_8023ad_activate_slave(struct rte_eth_dev *bond_dev, uint8_t slave_id) > RTE_ASSERT(port->tx_ring == NULL); > socket_id = rte_eth_devices[slave_id].data->numa_node; > > - element_size = sizeof(struct slow_protocol_frame) + sizeof(struct rte_mbuf) > - + RTE_PKTMBUF_HEADROOM; > + element_size = sizeof(struct slow_protocol_frame) + > + RTE_PKTMBUF_HEADROOM; > > /* The size of the mempool should be at least: > * the sum of the TX descriptors + BOND_MODE_8023AX_SLAVE_TX_PKTS */ > @@ -900,11 +900,10 @@ bond_mode_8023ad_activate_slave(struct rte_eth_dev *bond_dev, uint8_t slave_id) > } > > snprintf(mem_name, RTE_DIM(mem_name), "slave_port%u_pool", slave_id); > - port->mbuf_pool = rte_mempool_create(mem_name, > - total_tx_desc, element_size, > - RTE_MEMPOOL_CACHE_MAX_SIZE >= 32 ? 32 : RTE_MEMPOOL_CACHE_MAX_SIZE, > - sizeof(struct rte_pktmbuf_pool_private), rte_pktmbuf_pool_init, > - NULL, rte_pktmbuf_init, NULL, socket_id, MEMPOOL_F_NO_SPREAD); > + port->mbuf_pool = rte_pktmbuf_pool_create(mem_name, total_tx_desc, > + RTE_MEMPOOL_CACHE_MAX_SIZE >= 32 ? > + 32 : RTE_MEMPOOL_CACHE_MAX_SIZE, > + 0, element_size, socket_id, NULL); > > /* Any memory allocation failure in initalization is critical because > * resources can't be free, so reinitialization is impossible. 
*/ > diff --git a/examples/ip_pipeline/init.c b/examples/ip_pipeline/init.c > index cd167f6..d86aa86 100644 > --- a/examples/ip_pipeline/init.c > +++ b/examples/ip_pipeline/init.c > @@ -316,16 +316,15 @@ app_init_mempool(struct app_params *app) > struct app_mempool_params *p = &app->mempool_params[i]; > > APP_LOG(app, HIGH, "Initializing %s ...", p->name); > - app->mempool[i] = rte_mempool_create( > - p->name, > - p->pool_size, > - p->buffer_size, > - p->cache_size, > - sizeof(struct rte_pktmbuf_pool_private), > - rte_pktmbuf_pool_init, NULL, > - rte_pktmbuf_init, NULL, > - p->cpu_socket_id, > - 0); > + app->mempool[i] = rte_pktmbuf_pool_create( > + p->name, > + p->pool_size, > + p->cache_size, > + 0, /* priv_size */ > + p->buffer_size - > + sizeof(struct rte_mbuf), /* mbuf data size */ > + p->cpu_socket_id, > + NULL); > > if (app->mempool[i] == NULL) > rte_panic("%s init error\n", p->name); > diff --git a/examples/ip_reassembly/main.c b/examples/ip_reassembly/main.c > index 50fe422..8648161 100644 > --- a/examples/ip_reassembly/main.c > +++ b/examples/ip_reassembly/main.c > @@ -84,9 +84,7 @@ > > #define MAX_JUMBO_PKT_LEN 9600 > > -#define BUF_SIZE RTE_MBUF_DEFAULT_DATAROOM > -#define MBUF_SIZE \ > - (BUF_SIZE + sizeof(struct rte_mbuf) + RTE_PKTMBUF_HEADROOM) > +#define MBUF_DATA_SIZE RTE_MBUF_DEFAULT_BUF_SIZE > > #define NB_MBUF 8192 > > @@ -909,11 +907,13 @@ setup_queue_tbl(struct rx_queue *rxq, uint32_t lcore, uint32_t queue) > > snprintf(buf, sizeof(buf), "mbuf_pool_%u_%u", lcore, queue); > > - if ((rxq->pool = rte_mempool_create(buf, nb_mbuf, MBUF_SIZE, 0, > - sizeof(struct rte_pktmbuf_pool_private), > - rte_pktmbuf_pool_init, NULL, rte_pktmbuf_init, NULL, > - socket, MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET)) == NULL) { > - RTE_LOG(ERR, IP_RSMBL, "mempool_create(%s) failed", buf); > + rxq->pool = rte_pktmbuf_pool_create(buf, nb_mbuf, > + 0, /* cache size */ > + 0, /* priv size */ > + MBUF_DATA_SIZE, socket, "ring_sp_sc"); > + if (rxq->pool == NULL) { > + RTE_LOG(ERR, IP_RSMBL, > + "rte_pktmbuf_pool_create(%s) failed", buf); > return -1; > } > > diff --git a/examples/multi_process/l2fwd_fork/main.c b/examples/multi_process/l2fwd_fork/main.c > index 2d951d9..358a760 100644 > --- a/examples/multi_process/l2fwd_fork/main.c > +++ b/examples/multi_process/l2fwd_fork/main.c > @@ -77,8 +77,7 @@ > > #define RTE_LOGTYPE_L2FWD RTE_LOGTYPE_USER1 > #define MBUF_NAME "mbuf_pool_%d" > -#define MBUF_SIZE \ > -(RTE_MBUF_DEFAULT_DATAROOM + sizeof(struct rte_mbuf) + RTE_PKTMBUF_HEADROOM) > +#define MBUF_DATA_SIZE RTE_MBUF_DEFAULT_BUF_SIZE > #define NB_MBUF 8192 > #define RING_MASTER_NAME "l2fwd_ring_m2s_" > #define RING_SLAVE_NAME "l2fwd_ring_s2m_" > @@ -989,14 +988,11 @@ main(int argc, char **argv) > flags = MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET; > snprintf(buf_name, RTE_MEMPOOL_NAMESIZE, MBUF_NAME, portid); > l2fwd_pktmbuf_pool[portid] = > - rte_mempool_create(buf_name, NB_MBUF, > - MBUF_SIZE, 32, > - sizeof(struct rte_pktmbuf_pool_private), > - rte_pktmbuf_pool_init, NULL, > - rte_pktmbuf_init, NULL, > - rte_socket_id(), flags); > + rte_pktmbuf_pool_create(buf_name, NB_MBUF, 32, > + 0, MBUF_DATA_SIZE, rte_socket_id(), > + NULL); > if (l2fwd_pktmbuf_pool[portid] == NULL) > - rte_exit(EXIT_FAILURE, "Cannot init mbuf pool\n"); > + rte_exit(EXIT_FAILURE, "Cannot create mbuf pool\n"); > > printf("Create mbuf %s\n", buf_name); > } > diff --git a/examples/tep_termination/main.c b/examples/tep_termination/main.c > index 622f248..2b786c5 100644 > --- a/examples/tep_termination/main.c > +++ 
b/examples/tep_termination/main.c > @@ -68,7 +68,7 @@ > (nb_switching_cores * MBUF_CACHE_SIZE)) > > #define MBUF_CACHE_SIZE 128 > -#define MBUF_SIZE (2048 + sizeof(struct rte_mbuf) + RTE_PKTMBUF_HEADROOM) > +#define MBUF_DATA_SIZE RTE_MBUF_DEFAULT_BUF_SIZE > > #define MAX_PKT_BURST 32 /* Max burst size for RX/TX */ > #define BURST_TX_DRAIN_US 100 /* TX drain every ~100us */ > @@ -1200,15 +1200,14 @@ main(int argc, char *argv[]) > MAX_SUP_PORTS); > } > /* Create the mbuf pool. */ > - mbuf_pool = rte_mempool_create( > + mbuf_pool = rte_pktmbuf_pool_create( > "MBUF_POOL", > - NUM_MBUFS_PER_PORT > - * valid_nb_ports, > - MBUF_SIZE, MBUF_CACHE_SIZE, > - sizeof(struct rte_pktmbuf_pool_private), > - rte_pktmbuf_pool_init, NULL, > - rte_pktmbuf_init, NULL, > - rte_socket_id(), 0); > + NUM_MBUFS_PER_PORT * valid_nb_ports, > + MBUF_CACHE_SIZE, > + 0, > + MBUF_DATA_SIZE, > + rte_socket_id(), > + NULL); > if (mbuf_pool == NULL) > rte_exit(EXIT_FAILURE, "Cannot create mbuf pool\n"); > > diff --git a/lib/librte_mbuf/rte_mbuf.c b/lib/librte_mbuf/rte_mbuf.c > index 3e9cbb6..4b871ca 100644 > --- a/lib/librte_mbuf/rte_mbuf.c > +++ b/lib/librte_mbuf/rte_mbuf.c > @@ -62,7 +62,7 @@ > > /* > * ctrlmbuf constructor, given as a callback function to > - * rte_mempool_create() > + * rte_mempool_obj_iter() or rte_mempool_create() > */ > void > rte_ctrlmbuf_init(struct rte_mempool *mp, > @@ -77,7 +77,8 @@ rte_ctrlmbuf_init(struct rte_mempool *mp, > > /* > * pktmbuf pool constructor, given as a callback function to > - * rte_mempool_create() > + * rte_mempool_create(), or called directly if using > + * rte_mempool_create_empty()/rte_mempool_populate() > */ > void > rte_pktmbuf_pool_init(struct rte_mempool *mp, void *opaque_arg) > @@ -110,7 +111,7 @@ rte_pktmbuf_pool_init(struct rte_mempool *mp, void *opaque_arg) > > /* > * pktmbuf constructor, given as a callback function to > - * rte_mempool_create(). > + * rte_mempool_obj_iter() or rte_mempool_create(). > * Set the fields of a packet mbuf to their default values. > */ > void > diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h > index 774e071..352fa02 100644 > --- a/lib/librte_mbuf/rte_mbuf.h > +++ b/lib/librte_mbuf/rte_mbuf.h > @@ -44,6 +44,13 @@ > * buffers. The message buffers are stored in a mempool, using the > * RTE mempool library. > * > + * The preferred way to create a mbuf pool is to use > + * rte_pktmbuf_pool_create(). However, in some situations, an > + * application may want to have more control (ex: populate the pool with > + * specific memory), in this case it is possible to use functions from > + * rte_mempool. See how rte_pktmbuf_pool_create() is implemented for > + * details. > + * > * This library provide an API to allocate/free packet mbufs, which are > * used to carry network packets. > * > @@ -1189,14 +1196,14 @@ __rte_mbuf_raw_free(struct rte_mbuf *m) > * This function initializes some fields in an mbuf structure that are > * not modified by the user once created (mbuf type, origin pool, buffer > * start address, and so on). This function is given as a callback function > - * to rte_mempool_create() at pool creation time. > + * to rte_mempool_obj_iter() or rte_mempool_create() at pool creation time. > * > * @param mp > * The mempool from which the mbuf is allocated. > * @param opaque_arg > * A pointer that can be used by the user to retrieve useful information > - * for mbuf initialization. This pointer comes from the ``init_arg`` > - * parameter of rte_mempool_create(). > + * for mbuf initialization. 
This pointer is the opaque argument passed to > + * rte_mempool_obj_iter() or rte_mempool_create(). > * @param m > * The mbuf to initialize. > * @param i > @@ -1270,14 +1277,14 @@ rte_is_ctrlmbuf(struct rte_mbuf *m) > * This function initializes some fields in the mbuf structure that are > * not modified by the user once created (origin pool, buffer start > * address, and so on). This function is given as a callback function to > - * rte_mempool_create() at pool creation time. > + * rte_mempool_obj_iter() or rte_mempool_create() at pool creation time. > * > * @param mp > * The mempool from which mbufs originate. > * @param opaque_arg > * A pointer that can be used by the user to retrieve useful information > - * for mbuf initialization. This pointer comes from the ``init_arg`` > - * parameter of rte_mempool_create(). > + * for mbuf initialization. This pointer is the opaque argument passed to > + * rte_mempool_obj_iter() or rte_mempool_create(). > * @param m > * The mbuf to initialize. > * @param i > @@ -1292,7 +1299,8 @@ void rte_pktmbuf_init(struct rte_mempool *mp, void *opaque_arg, > * > * This function initializes the mempool private data in the case of a > * pktmbuf pool. This private data is needed by the driver. The > - * function is given as a callback function to rte_mempool_create() at > + * function must be called on the mempool before it is used, or it > + * can be given as a callback function to rte_mempool_create() at > * pool creation. It can be extended by the user, for example, to > * provide another packet size. > * > @@ -1300,8 +1308,8 @@ void rte_pktmbuf_init(struct rte_mempool *mp, void *opaque_arg, > * The mempool from which mbufs originate. > * @param opaque_arg > * A pointer that can be used by the user to retrieve useful information > - * for mbuf initialization. This pointer comes from the ``init_arg`` > - * parameter of rte_mempool_create(). > + * for mbuf initialization. This pointer is the opaque argument passed to > + * rte_mempool_create(). > */ > void rte_pktmbuf_pool_init(struct rte_mempool *mp, void *opaque_arg); > > @@ -1309,8 +1317,7 @@ void rte_pktmbuf_pool_init(struct rte_mempool *mp, void *opaque_arg); > * Create a mbuf pool. > * > * This function creates and initializes a packet mbuf pool. It is > - * a wrapper to rte_mempool_create() with the proper packet constructor > - * and mempool constructor. > + * a wrapper to rte_mempool functions. > * > * @param name > * The name of the mbuf pool. > -- > 2.8.1 >
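
As a quick usage reference, this is roughly what a converted application
ends up with. It follows the RFC signature quoted above, where the extra
trailing argument names the mempool ops to use (NULL keeps the default
handler, "ring_sp_sc" forces the SP/SC ring, and an external HW pool
manager could be selected the same way); the sizes are illustrative only:

#include <stdlib.h>

#include <rte_debug.h>
#include <rte_errno.h>
#include <rte_lcore.h>
#include <rte_mbuf.h>

#define NB_MBUF         8192
#define MBUF_CACHE_SIZE 32

/* Create a pktmbuf pool with the helper instead of rte_mempool_create(). */
static struct rte_mempool *
create_pktmbuf_pool(const char *name)
{
	struct rte_mempool *mp;

	mp = rte_pktmbuf_pool_create(name, NB_MBUF, MBUF_CACHE_SIZE,
			0,                         /* priv_size */
			RTE_MBUF_DEFAULT_BUF_SIZE, /* data room size */
			rte_socket_id(),
			NULL);                     /* default mempool ops */
	if (mp == NULL)
		rte_exit(EXIT_FAILURE, "cannot create mbuf pool %s: %s\n",
			 name, rte_strerror(rte_errno));

	return mp;
}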