From mboxrd@z Thu Jan 1 00:00:00 1970
From: Hemant Agrawal
To: Santosh Shukla
Date: Wed, 18 Jan 2017 11:31:58 +0530
References: <1484678576-3925-1-git-send-email-hemant.agrawal@nxp.com>
 <20170117133121.GA27592@santosh-Latitude-E5530-non-vPro>
In-Reply-To: <20170117133121.GA27592@santosh-Latitude-E5530-non-vPro>
Subject: Re: [dpdk-dev] [PATCH] mbuf: use pktmbuf helper to create the pool
List-Id: DPDK patches and discussions

On 1/17/2017 7:01 PM, Santosh Shukla wrote:
> Hi Hemant,
>
> On Wed, Jan 18, 2017 at 12:12:56AM +0530, Hemant Agrawal wrote:
>> When possible, replace the uses of rte_mempool_create() with
>> the helper provided in librte_mbuf: rte_pktmbuf_pool_create().
>>
>> This is the preferred way to create a mbuf pool.
>>
>> This also updates the documentation.
>
>> Signed-off-by: Olivier Matz
>> Signed-off-by: Hemant Agrawal
>> ---
>> This patch is derived from the RFC from Olivier:
>> http://dpdk.org/dev/patchwork/patch/15925/
>
> The rte_mempool_create to _empty/populate (or rte_pktmbuf_pool_create)
> changes are still missing for the mempool test cases. Do you plan to
> take them up, or shall I post the patches? Is the same change also
> needed on the OVS side?
>
Please feel free to post the patches. Copy me on the OVS-side patches as
well; I will review them.
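For the test cases that want the _empty/populate path rather than the
helper, the conversion would look roughly like the body of
rte_pktmbuf_pool_create() itself. A rough, untested sketch (the wrapper
name test_pktmbuf_pool_create() is illustrative only):

  #include <rte_mbuf.h>
  #include <rte_mempool.h>

  /* Illustrative only: mirrors what rte_pktmbuf_pool_create() does
   * internally, so a test can hook each step separately. */
  static struct rte_mempool *
  test_pktmbuf_pool_create(const char *name, unsigned int n,
          unsigned int cache_size, uint16_t priv_size,
          uint16_t data_room_size, int socket_id)
  {
          struct rte_pktmbuf_pool_private mbp_priv;
          struct rte_mempool *mp;
          unsigned int elt_size;

          /* element = mbuf header + per-mbuf private area + data room */
          elt_size = sizeof(struct rte_mbuf) + priv_size + data_room_size;
          mbp_priv.mbuf_data_room_size = data_room_size;
          mbp_priv.mbuf_priv_size = priv_size;

          mp = rte_mempool_create_empty(name, n, elt_size, cache_size,
                  sizeof(struct rte_pktmbuf_pool_private), socket_id, 0);
          if (mp == NULL)
                  return NULL;

          /* select the mempool handler (default ops from the build
           * config) before populating the pool */
          if (rte_mempool_set_ops_byname(mp, RTE_MBUF_DEFAULT_MEMPOOL_OPS,
                          NULL) != 0) {
                  rte_mempool_free(mp);
                  return NULL;
          }

          /* pool private data first, then memory, then per-mbuf init */
          rte_pktmbuf_pool_init(mp, &mbp_priv);
          if (rte_mempool_populate_default(mp) < 0) {
                  rte_mempool_free(mp);
                  return NULL;
          }
          rte_mempool_obj_iter(mp, rte_pktmbuf_init, NULL);

          return mp;
  }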
> Thanks,
>
>>  app/test/test_link_bonding_rssconf.c | 11 ++++----
>>  doc/guides/sample_app_ug/ip_reassembly.rst | 13 +++++----
>>  doc/guides/sample_app_ug/ipv4_multicast.rst | 12 ++++----
>>  doc/guides/sample_app_ug/l2_forward_job_stats.rst | 33 ++++++++--------------
>>  .../sample_app_ug/l2_forward_real_virtual.rst | 26 +++++++----------
>>  doc/guides/sample_app_ug/ptpclient.rst | 11 ++------
>>  doc/guides/sample_app_ug/quota_watermark.rst | 26 ++++++-----------
>>  drivers/net/bonding/rte_eth_bond_8023ad.c | 13 ++++-----
>>  examples/ip_pipeline/init.c | 18 ++++++------
>>  examples/ip_reassembly/main.c | 16 +++++------
>>  examples/multi_process/l2fwd_fork/main.c | 13 +++------
>>  examples/tep_termination/main.c | 16 +++++------
>>  lib/librte_mbuf/rte_mbuf.c | 7 +++--
>>  lib/librte_mbuf/rte_mbuf.h | 29 +++++++++++--------
>>  14 files changed, 106 insertions(+), 138 deletions(-)
>>
>> diff --git a/app/test/test_link_bonding_rssconf.c b/app/test/test_link_bonding_rssconf.c
>> index 34f1c16..9034f62 100644
>> --- a/app/test/test_link_bonding_rssconf.c
>> +++ b/app/test/test_link_bonding_rssconf.c
>> @@ -67,7 +67,7 @@
>>  #define SLAVE_RXTX_QUEUE_FMT ("rssconf_slave%d_q%d")
>>
>>  #define NUM_MBUFS 8191
>> -#define MBUF_SIZE (1600 + sizeof(struct rte_mbuf) + RTE_PKTMBUF_HEADROOM)
>> +#define MBUF_SIZE (1600 + RTE_PKTMBUF_HEADROOM)
>>  #define MBUF_CACHE_SIZE 250
>>  #define BURST_SIZE 32
>>
>> @@ -536,13 +536,12 @@ struct link_bonding_rssconf_unittest_params {
>>
>>      if (test_params.mbuf_pool == NULL) {
>>
>> -        test_params.mbuf_pool = rte_mempool_create("RSS_MBUF_POOL", NUM_MBUFS *
>> -            SLAVE_COUNT, MBUF_SIZE, MBUF_CACHE_SIZE,
>> -            sizeof(struct rte_pktmbuf_pool_private), rte_pktmbuf_pool_init,
>> -            NULL, rte_pktmbuf_init, NULL, rte_socket_id(), 0);
>> +        test_params.mbuf_pool = rte_pktmbuf_pool_create(
>> +            "RSS_MBUF_POOL", NUM_MBUFS * SLAVE_COUNT,
>> +            MBUF_CACHE_SIZE, 0, MBUF_SIZE, rte_socket_id());
>>
>>          TEST_ASSERT(test_params.mbuf_pool != NULL,
>> -            "rte_mempool_create failed\n");
>> +            "rte_pktmbuf_pool_create failed\n");
>>      }
>>
>>      /* Create / initialize ring eth devs. */
>> diff --git a/doc/guides/sample_app_ug/ip_reassembly.rst b/doc/guides/sample_app_ug/ip_reassembly.rst
>> index 3c5cc70..d5097c6 100644
>> --- a/doc/guides/sample_app_ug/ip_reassembly.rst
>> +++ b/doc/guides/sample_app_ug/ip_reassembly.rst
>> @@ -223,11 +223,14 @@ each RX queue uses its own mempool.
>>
>>      snprintf(buf, sizeof(buf), "mbuf_pool_%u_%u", lcore, queue);
>>
>> -    if ((rxq->pool = rte_mempool_create(buf, nb_mbuf, MBUF_SIZE, 0, sizeof(struct rte_pktmbuf_pool_private), rte_pktmbuf_pool_init, NULL,
>> -        rte_pktmbuf_init, NULL, socket, MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET)) == NULL) {
>> -
>> -        RTE_LOG(ERR, IP_RSMBL, "mempool_create(%s) failed", buf);
>> -        return -1;
>> +    rxq->pool = rte_pktmbuf_pool_create(buf, nb_mbuf,
>> +        0, /* cache size */
>> +        0, /* priv size */
>> +        MBUF_DATA_SIZE, socket);
>> +    if (rxq->pool == NULL) {
>> +        RTE_LOG(ERR, IP_RSMBL,
>> +            "rte_pktmbuf_pool_create(%s) failed", buf);
>> +        return -1;
>>      }
>>
>>  Packet Reassembly and Forwarding
>> diff --git a/doc/guides/sample_app_ug/ipv4_multicast.rst b/doc/guides/sample_app_ug/ipv4_multicast.rst
>> index 72da8c4..d9ff249 100644
>> --- a/doc/guides/sample_app_ug/ipv4_multicast.rst
>> +++ b/doc/guides/sample_app_ug/ipv4_multicast.rst
>> @@ -145,12 +145,12 @@ Memory pools for indirect buffers are initialized differently from the memory po
>>
>>  .. code-block:: c
>>
>> -    packet_pool = rte_mempool_create("packet_pool", NB_PKT_MBUF, PKT_MBUF_SIZE, 32, sizeof(struct rte_pktmbuf_pool_private),
>> -        rte_pktmbuf_pool_init, NULL, rte_pktmbuf_init, NULL, rte_socket_id(), 0);
>> -
>> -    header_pool = rte_mempool_create("header_pool", NB_HDR_MBUF, HDR_MBUF_SIZE, 32, 0, NULL, NULL, rte_pktmbuf_init, NULL, rte_socket_id(), 0);
>> -    clone_pool = rte_mempool_create("clone_pool", NB_CLONE_MBUF,
>> -        CLONE_MBUF_SIZE, 32, 0, NULL, NULL, rte_pktmbuf_init, NULL, rte_socket_id(), 0);
>> +    packet_pool = rte_pktmbuf_pool_create("packet_pool", NB_PKT_MBUF, 32,
>> +        0, PKT_MBUF_DATA_SIZE, rte_socket_id());
>> +    header_pool = rte_pktmbuf_pool_create("header_pool", NB_HDR_MBUF, 32,
>> +        0, HDR_MBUF_DATA_SIZE, rte_socket_id());
>> +    clone_pool = rte_pktmbuf_pool_create("clone_pool", NB_CLONE_MBUF, 32,
>> +        0, 0, rte_socket_id());
>>
>>  The reason for this is because indirect buffers are not supposed to hold any packet data and
>>  therefore can be initialized with lower amount of reserved memory for each buffer.
>> diff --git a/doc/guides/sample_app_ug/l2_forward_job_stats.rst b/doc/guides/sample_app_ug/l2_forward_job_stats.rst
>> index 2444e36..a606b86 100644
>> --- a/doc/guides/sample_app_ug/l2_forward_job_stats.rst
>> +++ b/doc/guides/sample_app_ug/l2_forward_job_stats.rst
>> @@ -193,36 +193,25 @@ and the application to store network packet data:
>>  .. code-block:: c
>>
>>      /* create the mbuf pool */
>> -    l2fwd_pktmbuf_pool =
>> -        rte_mempool_create("mbuf_pool", NB_MBUF,
>> -                   MBUF_SIZE, 32,
>> -                   sizeof(struct rte_pktmbuf_pool_private),
>> -                   rte_pktmbuf_pool_init, NULL,
>> -                   rte_pktmbuf_init, NULL,
>> -                   rte_socket_id(), 0);
>> +    l2fwd_pktmbuf_pool = rte_pktmbuf_pool_create("mbuf_pool", NB_MBUF,
>> +        MEMPOOL_CACHE_SIZE, 0, RTE_MBUF_DEFAULT_BUF_SIZE,
>> +        rte_socket_id());
>>
>>      if (l2fwd_pktmbuf_pool == NULL)
>>          rte_exit(EXIT_FAILURE, "Cannot init mbuf pool\n");
>>
>>  The rte_mempool is a generic structure used to handle pools of objects.
>> -In this case, it is necessary to create a pool that will be used by the driver,
>> -which expects to have some reserved space in the mempool structure,
>> -sizeof(struct rte_pktmbuf_pool_private) bytes.
>> -The number of allocated pkt mbufs is NB_MBUF, with a size of MBUF_SIZE each.
>> -A per-lcore cache of 32 mbufs is kept.
>> +In this case, it is necessary to create a pool that will be used by the driver.
>> +The number of allocated pkt mbufs is NB_MBUF, with a data room size of
>> +RTE_MBUF_DEFAULT_BUF_SIZE each.
>> +A per-lcore cache of MEMPOOL_CACHE_SIZE mbufs is kept.
>>  The memory is allocated in rte_socket_id() socket,
>>  but it is possible to extend this code to allocate one mbuf pool per socket.
>>
>> -Two callback pointers are also given to the rte_mempool_create() function:
>> -
>> -* The first callback pointer is to rte_pktmbuf_pool_init() and is used
>> -  to initialize the private data of the mempool, which is needed by the driver.
>> -  This function is provided by the mbuf API, but can be copied and extended by the developer.
>> -
>> -* The second callback pointer given to rte_mempool_create() is the mbuf initializer.
>> -  The default is used, that is, rte_pktmbuf_init(), which is provided in the rte_mbuf library.
>> -  If a more complex application wants to extend the rte_pktmbuf structure for its own needs,
>> -  a new function derived from rte_pktmbuf_init( ) can be created.
>> +The rte_pktmbuf_pool_create() function uses the default mbuf pool and mbuf
>> +initializers, respectively rte_pktmbuf_pool_init() and rte_pktmbuf_init().
>> +An advanced application may want to use the mempool API to create the
>> +mbuf pool with more control.
>>
>>  Driver Initialization
>>  ~~~~~~~~~~~~~~~~~~~~~
>> diff --git a/doc/guides/sample_app_ug/l2_forward_real_virtual.rst b/doc/guides/sample_app_ug/l2_forward_real_virtual.rst
>> index cf15d1c..de86ac8 100644
>> --- a/doc/guides/sample_app_ug/l2_forward_real_virtual.rst
>> +++ b/doc/guides/sample_app_ug/l2_forward_real_virtual.rst
>> @@ -207,31 +207,25 @@ and the application to store network packet data:
>>
>>      /* create the mbuf pool */
>>
>> -    l2fwd_pktmbuf_pool = rte_mempool_create("mbuf_pool", NB_MBUF, MBUF_SIZE, 32, sizeof(struct rte_pktmbuf_pool_private),
>> -        rte_pktmbuf_pool_init, NULL, rte_pktmbuf_init, NULL, SOCKET0, 0);
>> +    l2fwd_pktmbuf_pool = rte_pktmbuf_pool_create("mbuf_pool", NB_MBUF,
>> +        MEMPOOL_CACHE_SIZE, 0, RTE_MBUF_DEFAULT_BUF_SIZE,
>> +        rte_socket_id());
>>
>>      if (l2fwd_pktmbuf_pool == NULL)
>>          rte_panic("Cannot init mbuf pool\n");
>>
>>  The rte_mempool is a generic structure used to handle pools of objects.
>> -In this case, it is necessary to create a pool that will be used by the driver,
>> -which expects to have some reserved space in the mempool structure,
>> -sizeof(struct rte_pktmbuf_pool_private) bytes.
>> -The number of allocated pkt mbufs is NB_MBUF, with a size of MBUF_SIZE each.
>> +In this case, it is necessary to create a pool that will be used by the driver.
>> +The number of allocated pkt mbufs is NB_MBUF, with a data room size of
>> +RTE_MBUF_DEFAULT_BUF_SIZE each.
>>  A per-lcore cache of 32 mbufs is kept.
>>  The memory is allocated in NUMA socket 0,
>>  but it is possible to extend this code to allocate one mbuf pool per socket.
>>
>> -Two callback pointers are also given to the rte_mempool_create() function:
>> -
>> -* The first callback pointer is to rte_pktmbuf_pool_init() and is used
>> -  to initialize the private data of the mempool, which is needed by the driver.
>> -  This function is provided by the mbuf API, but can be copied and extended by the developer.
>> -
>> -* The second callback pointer given to rte_mempool_create() is the mbuf initializer.
>> -  The default is used, that is, rte_pktmbuf_init(), which is provided in the rte_mbuf library.
>> -  If a more complex application wants to extend the rte_pktmbuf structure for its own needs,
>> -  a new function derived from rte_pktmbuf_init( ) can be created.
>> +The rte_pktmbuf_pool_create() function uses the default mbuf pool and mbuf
>> +initializers, respectively rte_pktmbuf_pool_init() and rte_pktmbuf_init().
>> +An advanced application may want to use the mempool API to create the
>> +mbuf pool with more control.
>>
>>  .. _l2_fwd_app_dvr_init:
>>
>> diff --git a/doc/guides/sample_app_ug/ptpclient.rst b/doc/guides/sample_app_ug/ptpclient.rst
>> index 6e425b7..405a267 100644
>> --- a/doc/guides/sample_app_ug/ptpclient.rst
>> +++ b/doc/guides/sample_app_ug/ptpclient.rst
>> @@ -171,15 +171,8 @@ used by the application:
>>
>>  .. code-block:: c
>>
>> -    mbuf_pool = rte_mempool_create("MBUF_POOL",
>> -                       NUM_MBUFS * nb_ports,
>> -                       MBUF_SIZE,
>> -                       MBUF_CACHE_SIZE,
>> -                       sizeof(struct rte_pktmbuf_pool_private),
>> -                       rte_pktmbuf_pool_init, NULL,
>> -                       rte_pktmbuf_init, NULL,
>> -                       rte_socket_id(),
>> -                       0);
>> +    mbuf_pool = rte_pktmbuf_pool_create("MBUF_POOL", NUM_MBUFS * nb_ports,
>> +        MBUF_CACHE_SIZE, 0, RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());
>>
>>  Mbufs are the packet buffer structure used by DPDK. They are explained in
>>  detail in the "Mbuf Library" section of the *DPDK Programmer's Guide*.
>> diff --git a/doc/guides/sample_app_ug/quota_watermark.rst b/doc/guides/sample_app_ug/quota_watermark.rst
>> index c56683a..a0da8fe 100644
>> --- a/doc/guides/sample_app_ug/quota_watermark.rst
>> +++ b/doc/guides/sample_app_ug/quota_watermark.rst
>> @@ -254,32 +254,24 @@ It contains a set of mbuf objects that are used by the driver and the applicatio
>>  .. code-block:: c
>>
>>      /* Create a pool of mbuf to store packets */
>> -
>> -    mbuf_pool = rte_mempool_create("mbuf_pool", MBUF_PER_POOL, MBUF_SIZE, 32, sizeof(struct rte_pktmbuf_pool_private),
>> -        rte_pktmbuf_pool_init, NULL, rte_pktmbuf_init, NULL, rte_socket_id(), 0);
>> +    mbuf_pool = rte_pktmbuf_pool_create("mbuf_pool", MBUF_PER_POOL, 32, 0,
>> +        MBUF_DATA_SIZE, rte_socket_id());
>>
>>      if (mbuf_pool == NULL)
>>          rte_panic("%s\n", rte_strerror(rte_errno));
>>
>>  The rte_mempool is a generic structure used to handle pools of objects.
>> -In this case, it is necessary to create a pool that will be used by the driver,
>> -which expects to have some reserved space in the mempool structure, sizeof(struct rte_pktmbuf_pool_private) bytes.
>> +In this case, it is necessary to create a pool that will be used by the driver.
>>
>> -The number of allocated pkt mbufs is MBUF_PER_POOL, with a size of MBUF_SIZE each.
>> +The number of allocated pkt mbufs is MBUF_PER_POOL, with a data room size
>> +of MBUF_DATA_SIZE each.
>>  A per-lcore cache of 32 mbufs is kept.
>>  The memory is allocated in on the master lcore's socket, but it is possible to extend this code to allocate one mbuf pool per socket.
>>
>> -Two callback pointers are also given to the rte_mempool_create() function:
>> -
>> -* The first callback pointer is to rte_pktmbuf_pool_init() and is used to initialize the private data of the mempool,
>> -  which is needed by the driver.
>> -  This function is provided by the mbuf API, but can be copied and extended by the developer.
>> -
>> -* The second callback pointer given to rte_mempool_create() is the mbuf initializer.
>> -
>> -The default is used, that is, rte_pktmbuf_init(), which is provided in the rte_mbuf library.
>> -If a more complex application wants to extend the rte_pktmbuf structure for its own needs,
>> -a new function derived from rte_pktmbuf_init() can be created.
>> +The rte_pktmbuf_pool_create() function uses the default mbuf pool and mbuf
>> +initializers, respectively rte_pktmbuf_pool_init() and rte_pktmbuf_init().
>> +An advanced application may want to use the mempool API to create the
>> +mbuf pool with more control.
>>
>>  Ports Configuration and Pairing
>>  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>> diff --git a/drivers/net/bonding/rte_eth_bond_8023ad.c b/drivers/net/bonding/rte_eth_bond_8023ad.c
>> index 2f7ae70..af211ca 100644
>> --- a/drivers/net/bonding/rte_eth_bond_8023ad.c
>> +++ b/drivers/net/bonding/rte_eth_bond_8023ad.c
>> @@ -888,8 +888,8 @@
>>      RTE_ASSERT(port->tx_ring == NULL);
>>      socket_id = rte_eth_devices[slave_id].data->numa_node;
>>
>> -    element_size = sizeof(struct slow_protocol_frame) + sizeof(struct rte_mbuf)
>> -            + RTE_PKTMBUF_HEADROOM;
>> +    element_size = sizeof(struct slow_protocol_frame) +
>> +            RTE_PKTMBUF_HEADROOM;
>>
>>      /* The size of the mempool should be at least:
>>       * the sum of the TX descriptors + BOND_MODE_8023AX_SLAVE_TX_PKTS */
>> @@ -900,11 +900,10 @@
>>      }
>>
>>      snprintf(mem_name, RTE_DIM(mem_name), "slave_port%u_pool", slave_id);
>> -    port->mbuf_pool = rte_mempool_create(mem_name,
>> -        total_tx_desc, element_size,
>> -        RTE_MEMPOOL_CACHE_MAX_SIZE >= 32 ? 32 : RTE_MEMPOOL_CACHE_MAX_SIZE,
>> -        sizeof(struct rte_pktmbuf_pool_private), rte_pktmbuf_pool_init,
>> -        NULL, rte_pktmbuf_init, NULL, socket_id, MEMPOOL_F_NO_SPREAD);
>> +    port->mbuf_pool = rte_pktmbuf_pool_create(mem_name, total_tx_desc,
>> +        RTE_MEMPOOL_CACHE_MAX_SIZE >= 32 ?
>> +            32 : RTE_MEMPOOL_CACHE_MAX_SIZE,
>> +        0, element_size, socket_id);
>>
>>      /* Any memory allocation failure in initalization is critical because
>>       * resources can't be free, so reinitialization is impossible. */
>> diff --git a/examples/ip_pipeline/init.c b/examples/ip_pipeline/init.c
>> index 3b36b53..d55c3b4 100644
>> --- a/examples/ip_pipeline/init.c
>> +++ b/examples/ip_pipeline/init.c
>> @@ -324,16 +324,14 @@
>>          struct app_mempool_params *p = &app->mempool_params[i];
>>
>>          APP_LOG(app, HIGH, "Initializing %s ...", p->name);
>> -        app->mempool[i] = rte_mempool_create(
>> -            p->name,
>> -            p->pool_size,
>> -            p->buffer_size,
>> -            p->cache_size,
>> -            sizeof(struct rte_pktmbuf_pool_private),
>> -            rte_pktmbuf_pool_init, NULL,
>> -            rte_pktmbuf_init, NULL,
>> -            p->cpu_socket_id,
>> -            0);
>> +        app->mempool[i] = rte_pktmbuf_pool_create(
>> +            p->name,
>> +            p->pool_size,
>> +            p->cache_size,
>> +            0, /* priv_size */
>> +            p->buffer_size -
>> +                sizeof(struct rte_mbuf), /* mbuf data size */
>> +            p->cpu_socket_id);
>>
>>          if (app->mempool[i] == NULL)
>>              rte_panic("%s init error\n", p->name);
>> diff --git a/examples/ip_reassembly/main.c b/examples/ip_reassembly/main.c
>> index 50fe422..f6378bf 100644
>> --- a/examples/ip_reassembly/main.c
>> +++ b/examples/ip_reassembly/main.c
>> @@ -84,9 +84,7 @@
>>
>>  #define MAX_JUMBO_PKT_LEN 9600
>>
>> -#define BUF_SIZE RTE_MBUF_DEFAULT_DATAROOM
>> -#define MBUF_SIZE \
>> -    (BUF_SIZE + sizeof(struct rte_mbuf) + RTE_PKTMBUF_HEADROOM)
>> +#define MBUF_DATA_SIZE RTE_MBUF_DEFAULT_BUF_SIZE
>>
>>  #define NB_MBUF 8192
>>
>> @@ -909,11 +907,13 @@ struct rte_lpm6_config lpm6_config = {
>>
>>      snprintf(buf, sizeof(buf), "mbuf_pool_%u_%u", lcore, queue);
>>
>> -    if ((rxq->pool = rte_mempool_create(buf, nb_mbuf, MBUF_SIZE, 0,
>> -            sizeof(struct rte_pktmbuf_pool_private),
>> -            rte_pktmbuf_pool_init, NULL, rte_pktmbuf_init, NULL,
>> -            socket, MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET)) == NULL) {
>> -        RTE_LOG(ERR, IP_RSMBL, "mempool_create(%s) failed", buf);
>> +    rxq->pool = rte_pktmbuf_pool_create(buf, nb_mbuf,
>> +        0, /* cache size */
>> +        0, /* priv size */
>> +        MBUF_DATA_SIZE, socket);
>> +    if (rxq->pool == NULL) {
>> +        RTE_LOG(ERR, IP_RSMBL,
>> +            "rte_pktmbuf_pool_create(%s) failed", buf);
>>          return -1;
>>      }
>>
>> diff --git a/examples/multi_process/l2fwd_fork/main.c b/examples/multi_process/l2fwd_fork/main.c
>> index 2d951d9..b34916e 100644
>> --- a/examples/multi_process/l2fwd_fork/main.c
>> +++ b/examples/multi_process/l2fwd_fork/main.c
>> @@ -77,8 +77,7 @@
>>
>>  #define RTE_LOGTYPE_L2FWD RTE_LOGTYPE_USER1
>>  #define MBUF_NAME "mbuf_pool_%d"
>> -#define MBUF_SIZE \
>> -    (RTE_MBUF_DEFAULT_DATAROOM + sizeof(struct rte_mbuf) + RTE_PKTMBUF_HEADROOM)
>> +#define MBUF_DATA_SIZE RTE_MBUF_DEFAULT_BUF_SIZE
>>  #define NB_MBUF 8192
>>  #define RING_MASTER_NAME "l2fwd_ring_m2s_"
>>  #define RING_SLAVE_NAME "l2fwd_ring_s2m_"
>> @@ -989,14 +988,10 @@ struct l2fwd_port_statistics {
>>          flags = MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET;
>>          snprintf(buf_name, RTE_MEMPOOL_NAMESIZE, MBUF_NAME, portid);
>>          l2fwd_pktmbuf_pool[portid] =
>> -            rte_mempool_create(buf_name, NB_MBUF,
>> -                       MBUF_SIZE, 32,
>> -                       sizeof(struct rte_pktmbuf_pool_private),
>> -                       rte_pktmbuf_pool_init, NULL,
>> -                       rte_pktmbuf_init, NULL,
>> -                       rte_socket_id(), flags);
>> +            rte_pktmbuf_pool_create(buf_name, NB_MBUF, 32,
>> +                0, MBUF_DATA_SIZE, rte_socket_id());
>>          if (l2fwd_pktmbuf_pool[portid] == NULL)
>> -            rte_exit(EXIT_FAILURE, "Cannot init mbuf pool\n");
>> +            rte_exit(EXIT_FAILURE, "Cannot create mbuf pool\n");
>>
>>          printf("Create mbuf %s\n", buf_name);
>>      }
>> diff --git a/examples/tep_termination/main.c b/examples/tep_termination/main.c
>> index bd1dc96..20dafdb 100644
>> --- a/examples/tep_termination/main.c
>> +++ b/examples/tep_termination/main.c
>> @@ -68,7 +68,7 @@
>>      (nb_switching_cores * MBUF_CACHE_SIZE))
>>
>>  #define MBUF_CACHE_SIZE 128
>> -#define MBUF_SIZE (2048 + sizeof(struct rte_mbuf) + RTE_PKTMBUF_HEADROOM)
>> +#define MBUF_DATA_SIZE RTE_MBUF_DEFAULT_BUF_SIZE
>>
>>  #define MAX_PKT_BURST 32 /* Max burst size for RX/TX */
>>  #define BURST_TX_DRAIN_US 100 /* TX drain every ~100us */
>> @@ -1199,15 +1199,13 @@ static inline void __attribute__((always_inline))
>>              MAX_SUP_PORTS);
>>      }
>>      /* Create the mbuf pool. */
>> -    mbuf_pool = rte_mempool_create(
>> +    mbuf_pool = rte_pktmbuf_pool_create(
>>          "MBUF_POOL",
>> -        NUM_MBUFS_PER_PORT
>> -        * valid_nb_ports,
>> -        MBUF_SIZE, MBUF_CACHE_SIZE,
>> -        sizeof(struct rte_pktmbuf_pool_private),
>> -        rte_pktmbuf_pool_init, NULL,
>> -        rte_pktmbuf_init, NULL,
>> -        rte_socket_id(), 0);
>> +        NUM_MBUFS_PER_PORT * valid_nb_ports,
>> +        MBUF_CACHE_SIZE,
>> +        0,
>> +        MBUF_DATA_SIZE,
>> +        rte_socket_id());
>>      if (mbuf_pool == NULL)
>>          rte_exit(EXIT_FAILURE, "Cannot create mbuf pool\n");
>>
>> diff --git a/lib/librte_mbuf/rte_mbuf.c b/lib/librte_mbuf/rte_mbuf.c
>> index 72ad91e..3fb2700 100644
>> --- a/lib/librte_mbuf/rte_mbuf.c
>> +++ b/lib/librte_mbuf/rte_mbuf.c
>> @@ -62,7 +62,7 @@
>>
>>  /*
>>   * ctrlmbuf constructor, given as a callback function to
>> - * rte_mempool_create()
>> + * rte_mempool_obj_iter() or rte_mempool_create()
>>   */
>>  void
>>  rte_ctrlmbuf_init(struct rte_mempool *mp,
>> @@ -77,7 +77,8 @@
>>
>>  /*
>>   * pktmbuf pool constructor, given as a callback function to
>> - * rte_mempool_create()
>> + * rte_mempool_create(), or called directly if using
>> + * rte_mempool_create_empty()/rte_mempool_populate()
>>   */
>>  void
>>  rte_pktmbuf_pool_init(struct rte_mempool *mp, void *opaque_arg)
>> @@ -110,7 +111,7 @@
>>
>>  /*
>>   * pktmbuf constructor, given as a callback function to
>> - * rte_mempool_create().
>> + * rte_mempool_obj_iter() or rte_mempool_create().
>>   * Set the fields of a packet mbuf to their default values.
>>   */
>>  void
>> diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
>> index bfce9f4..b1d4ccb 100644
>> --- a/lib/librte_mbuf/rte_mbuf.h
>> +++ b/lib/librte_mbuf/rte_mbuf.h
>> @@ -44,6 +44,13 @@
>>   * buffers. The message buffers are stored in a mempool, using the
>>   * RTE mempool library.
>>   *
>> + * The preferred way to create a mbuf pool is to use
>> + * rte_pktmbuf_pool_create(). However, in some situations, an
>> + * application may want to have more control (ex: populate the pool with
>> + * specific memory), in this case it is possible to use functions from
>> + * rte_mempool. See how rte_pktmbuf_pool_create() is implemented for
>> + * details.
>> + *
>>   * This library provides an API to allocate/free packet mbufs, which are
>>   * used to carry network packets.
>>   *
>> @@ -810,14 +817,14 @@ static inline void __attribute__((always_inline))
>>   * This function initializes some fields in an mbuf structure that are
>>   * not modified by the user once created (mbuf type, origin pool, buffer
>>   * start address, and so on). This function is given as a callback function
>> - * to rte_mempool_create() at pool creation time.
>> + * to rte_mempool_obj_iter() or rte_mempool_create() at pool creation time.
>>   *
>>   * @param mp
>>   *   The mempool from which the mbuf is allocated.
>>   * @param opaque_arg
>>   *   A pointer that can be used by the user to retrieve useful information
>> - *   for mbuf initialization. This pointer comes from the ``init_arg``
>> - *   parameter of rte_mempool_create().
>> + *   for mbuf initialization. This pointer is the opaque argument passed to
>> + *   rte_mempool_obj_iter() or rte_mempool_create().
>>   * @param m
>>   *   The mbuf to initialize.
>>   * @param i
>> @@ -891,14 +898,14 @@ void rte_ctrlmbuf_init(struct rte_mempool *mp, void *opaque_arg,
>>   * This function initializes some fields in the mbuf structure that are
>>   * not modified by the user once created (origin pool, buffer start
>>   * address, and so on). This function is given as a callback function to
>> - * rte_mempool_create() at pool creation time.
>> + * rte_mempool_obj_iter() or rte_mempool_create() at pool creation time.
>>   *
>>   * @param mp
>>   *   The mempool from which mbufs originate.
>>   * @param opaque_arg
>>   *   A pointer that can be used by the user to retrieve useful information
>> - *   for mbuf initialization. This pointer comes from the ``init_arg``
>> - *   parameter of rte_mempool_create().
>> + *   for mbuf initialization. This pointer is the opaque argument passed to
>> + *   rte_mempool_obj_iter() or rte_mempool_create().
>>   * @param m
>>   *   The mbuf to initialize.
>>   * @param i
>> @@ -913,7 +920,8 @@ void rte_pktmbuf_init(struct rte_mempool *mp, void *opaque_arg,
>>   *
>>   * This function initializes the mempool private data in the case of a
>>   * pktmbuf pool. This private data is needed by the driver. The
>> - * function is given as a callback function to rte_mempool_create() at
>> + * function must be called on the mempool before it is used, or it
>> + * can be given as a callback function to rte_mempool_create() at
>>   * pool creation. It can be extended by the user, for example, to
>>   * provide another packet size.
>>   *
>> @@ -921,8 +929,8 @@ void rte_pktmbuf_init(struct rte_mempool *mp, void *opaque_arg,
>>   *   The mempool from which mbufs originate.
>>   * @param opaque_arg
>>   *   A pointer that can be used by the user to retrieve useful information
>> - *   for mbuf initialization. This pointer comes from the ``init_arg``
>> - *   parameter of rte_mempool_create().
>> + *   for mbuf initialization. This pointer is the opaque argument passed to
>> + *   rte_mempool_create().
>>   */
>>  void rte_pktmbuf_pool_init(struct rte_mempool *mp, void *opaque_arg);
>>
>> @@ -930,8 +938,7 @@ void rte_pktmbuf_init(struct rte_mempool *mp, void *opaque_arg,
>>   * Create a mbuf pool.
>>   *
>>   * This function creates and initializes a packet mbuf pool. It is
>> - * a wrapper to rte_mempool_create() with the proper packet constructor
>> - * and mempool constructor.
>> + * a wrapper to rte_mempool functions.
>>   *
>>   * @param name
>>   *   The name of the mbuf pool.
>> --
>> 1.9.1
>>
>
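As a quick reference while converting the remaining call sites, minimal
standalone usage of the helper would be something like this (pool name and
sizes are illustrative only):

  #include <stdio.h>
  #include <rte_eal.h>
  #include <rte_errno.h>
  #include <rte_lcore.h>
  #include <rte_mbuf.h>

  int
  main(int argc, char **argv)
  {
          struct rte_mempool *pool;

          if (rte_eal_init(argc, argv) < 0)
                  return -1;

          /* name, mbuf count, per-lcore cache, per-mbuf app private
           * bytes, data room size, NUMA socket: this one call replaces
           * the old rte_mempool_create() plus its two init callbacks. */
          pool = rte_pktmbuf_pool_create("pkt_pool", 8191, 256, 0,
                  RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());
          if (pool == NULL) {
                  fprintf(stderr, "mbuf pool: %s\n",
                          rte_strerror(rte_errno));
                  return -1;
          }

          return 0;
  }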