From mboxrd@z Thu Jan 1 00:00:00 1970
From: Hemant Agrawal
To: Olivier MATZ
Date: Wed, 8 Mar 2017 18:22:06 +0530
Message-ID: <521bf2d1-0393-c4ac-422e-96134880502a@nxp.com>
In-Reply-To: <20170308100553.14ca39a7@glumotte.dev.6wind.com>
References: <1487205586-6785-1-git-send-email-hemant.agrawal@nxp.com>
 <1488545223-25739-1-git-send-email-hemant.agrawal@nxp.com>
 <1488545223-25739-20-git-send-email-hemant.agrawal@nxp.com>
 <20170308100553.14ca39a7@glumotte.dev.6wind.com>
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:45.0) Gecko/20100101 Thunderbird/45.7.1
MIME-Version: 1.0
Content-Type: text/plain; charset="windows-1252"; format=flowed
Content-Transfer-Encoding: 7bit
Subject: Re: [dpdk-dev] [PATCHv8 19/46] pool/dpaa2: add DPAA2 hardware offloaded mempool
List-Id: DPDK patches and discussions

Hi Olivier,
   Thanks for your detailed review. Please see inline...

On 3/8/2017 2:35 PM, Olivier MATZ wrote:
> Hi Hemant,
>
> On Fri, 3 Mar 2017 18:16:36 +0530, Hemant Agrawal wrote:
>> Adding NXP DPAA2 architecture specific mempool support.
>>
>> This patch also registers a dpaa2 type MEMPOOL OPS
>>
>> Signed-off-by: Hemant Agrawal
>> ---
>>  MAINTAINERS                                   |   1 +
>>  config/common_base                            |   5 +
>>  config/defconfig_arm64-dpaa2-linuxapp-gcc     |   8 +
>>  drivers/Makefile                              |   1 +
>>  drivers/pool/Makefile                         |  40 +++
>>  drivers/pool/dpaa2/Makefile                   |  72 ++++++
>>  drivers/pool/dpaa2/dpaa2_hw_mempool.c         | 339 ++++++++++++++++++++++++++
>>  drivers/pool/dpaa2/dpaa2_hw_mempool.h         |  95 ++++++++
>>  drivers/pool/dpaa2/rte_pool_dpaa2_version.map |   8 +
>
> I think the current mempool handlers should be moved first in a
> separate patch.

Are you seeing any benefit in making it a separate patch series? It
will be difficult and tricky for us: dpaa2_pool depends on the MC bus
patches, and dpaa2_pmd depends on both dpaa2_pool and the MC bus. We
would have to split the work into three patch series, which becomes
cumbersome to manage.

> I'd prefer drivers/mempool instead of drivers/pool (more precise and
> more consistent with librte_mempool).

We will take care of it in the next revision.

>> [...]
>>
>> +
>> +struct dpaa2_bp_info rte_dpaa2_bpid_info[MAX_BPID];
>> +static struct dpaa2_bp_list *h_bp_list;
>> +
>> +static int
>> +hw_mbuf_create_pool(struct rte_mempool *mp)
>
> Would it work for something else than mbufs?
> The initial approach of the mempool is to work for any kind of object.
> The specialization in mbufs is done by the mbuf layer.

I think we did discuss that hw-offloaded mempools are mainly meant for
packet buffers, i.e. mbufs, and currently we only support mbuf-type
objects. Ideally a hw buffer pool can back any kind of mempool, but
that is not the best way to use it: allocating buffers from hardware
has a higher latency than doing it in software. The main advantage the
SoC gets from a hw pool is that it works seamlessly with the MAC layer.

>> +{
>> +	struct dpaa2_bp_list *bp_list;
>> +	struct dpaa2_dpbp_dev *avail_dpbp;
>> +	struct dpbp_attr dpbp_attr;
>> +	uint32_t bpid;
>> +	int ret;
>> +
>> +	avail_dpbp = dpaa2_alloc_dpbp_dev();
>> +
>> +	if (!avail_dpbp) {
>> +		PMD_DRV_LOG(ERR, "DPAA2 resources not available");
>> +		return -1;
>> +	}
>
> The other pool handlers return a -errno instead of -1. I think it
> should be the same here.
>
> The same comment applies to other locations/functions.

We will fix it, here and in the other places.
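For illustration, the check quoted above would then become something
like the following (just a sketch; -ENOENT is an errno I picked as an
example, the final value is still to be decided):

	avail_dpbp = dpaa2_alloc_dpbp_dev();
	if (!avail_dpbp) {
		PMD_DRV_LOG(ERR, "DPAA2 resources not available");
		return -ENOENT;	/* a -errno value instead of a bare -1 */
	}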
>> [...]
>> +
>> +	/* Set parameters of buffer pool list */
>> +	bp_list->buf_pool.num_bufs = mp->size;
>> +	bp_list->buf_pool.size = mp->elt_size
>> +			- sizeof(struct rte_mbuf) - rte_pktmbuf_priv_size(mp);
>> +	bp_list->buf_pool.bpid = dpbp_attr.bpid;
>> +	bp_list->buf_pool.h_bpool_mem = NULL;
>> +	bp_list->buf_pool.mp = mp;
>> +	bp_list->buf_pool.dpbp_node = avail_dpbp;
>> +	bp_list->next = h_bp_list;
>> +
>> +	bpid = dpbp_attr.bpid;
>> +
>> +
>> +	rte_dpaa2_bpid_info[bpid].meta_data_size = sizeof(struct rte_mbuf)
>> +				+ rte_pktmbuf_priv_size(mp);
>
> Are the 2 empty lines garbage?

Yes, we will fix it.

>> +	rte_dpaa2_bpid_info[bpid].bp_list = bp_list;
>> +	rte_dpaa2_bpid_info[bpid].bpid = bpid;
>> +
>> +	mp->pool_data = (void *)&rte_dpaa2_bpid_info[bpid];
>> +
>> +	PMD_INIT_LOG(DEBUG, "BP List created for bpid =%d", dpbp_attr.bpid);
>> +
>> +	h_bp_list = bp_list;
>> +	/* Identification for our offloaded pool_data structure
>> +	 */
>> +	mp->flags |= MEMPOOL_F_HW_PKT_POOL;
>
> I think this flag should be declared in rte_mempool.h,
> not in drivers/bus/fslmc/portal/dpaa2_hw_pvt.h.
>
> It should also be documented. What does this flag mean?

We need a way to tell whether a mempool is a hw-allocated packet
buffer pool or a software-based one, and a string comparison on the
ops name is costly. This flag was discussed during David's hw mempool
patches, and not everyone was in favor of keeping it in
librte_mempool, so we hid it inside our offloaded mempool.

>> [...]
>>
>> +static
>> +void rte_dpaa2_mbuf_release(struct rte_mempool *pool __rte_unused,
>> +			void * const *obj_table,
>> +			uint32_t bpid,
>> +			uint32_t meta_data_size,
>> +			int count)
>
> Is there a reason why some functions are prefixed with rte_dpaa2_ and
> others with hw_mbuf_?

The initial reason was to use the rte_ prefix only for exported
functions. We can fix it.

>> +{
>> +	struct qbman_release_desc releasedesc;
>> +	struct qbman_swp *swp;
>> +	int ret;
>> +	int i, n;
>> +	uint64_t bufs[DPAA2_MBUF_MAX_ACQ_REL];
>> +
>> +	if (unlikely(!DPAA2_PER_LCORE_DPIO)) {
>> +		ret = dpaa2_affine_qbman_swp();
>> +		if (ret != 0) {
>> +			RTE_LOG(ERR, PMD, "Failed to allocate IO portal");
>> +			return;
>> +		}
>> +	}
>> +	swp = DPAA2_PER_LCORE_PORTAL;
>> +
>> +	/* Create a release descriptor required for releasing
>> +	 * buffers into QBMAN
>> +	 */
>> +	qbman_release_desc_clear(&releasedesc);
>> +	qbman_release_desc_set_bpid(&releasedesc, bpid);
>> +
>> +	n = count % DPAA2_MBUF_MAX_ACQ_REL;
>> +
>> +	/* convert mbuf to buffers for the remainder*/
>
> bad spaces

OK, will fix.

>> +	for (i = 0; i < n ; i++)
>> +		bufs[i] = (uint64_t)obj_table[i] + meta_data_size;
>> +
>> +	/* feed them to bman*/
>
> missing space at the end

OK, will fix.

>> +	do {
>> +		ret = qbman_swp_release(swp, &releasedesc, bufs, n);
>> +	} while (ret == -EBUSY);
>> +
>> +	/* if there are more buffers to free */
>> +	while (n < count) {
>> +		/* convert mbuf to buffers */
>> +		for (i = 0; i < DPAA2_MBUF_MAX_ACQ_REL; i++)
>> +			bufs[i] = (uint64_t)obj_table[n + i] + meta_data_size;
>> +
>> +	do {
>> +		ret = qbman_swp_release(swp, &releasedesc, bufs,
>> +					DPAA2_MBUF_MAX_ACQ_REL);
>> +	} while (ret == -EBUSY);
>
> The while is not properly indented

OK, will fix.
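One more note on the release path above: here is a sketch of the
object layout it assumes (my annotation for clarity, not part of the
patch):

	/*
	 * obj_table[i]               obj_table[i] + meta_data_size
	 *      |                                 |
	 *      v                                 v
	 *      +-----------------+---------------+--------------------+
	 *      | struct rte_mbuf | private data  | data area (QBMAN)  |
	 *      +-----------------+---------------+--------------------+
	 *      |<------- meta_data_size -------->|
	 *
	 * meta_data_size = sizeof(struct rte_mbuf) + rte_pktmbuf_priv_size(mp);
	 * QBMAN is handed the data-area address, so release adds
	 * meta_data_size to the object pointer and acquire subtracts it.
	 */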
>> [...]
>>
>> +int rte_dpaa2_mbuf_alloc_bulk(struct rte_mempool *pool,
>> +			void **obj_table, unsigned int count)
>> +{
>> +#ifdef RTE_LIBRTE_DPAA2_DEBUG_DRIVER
>> +	static int alloc;
>> +#endif
>> +	struct qbman_swp *swp;
>> +	uint32_t mbuf_size;
>> +	uint16_t bpid;
>> +	uint64_t bufs[DPAA2_MBUF_MAX_ACQ_REL];
>> +	int i, ret;
>> +	unsigned int n = 0;
>> +	struct dpaa2_bp_info *bp_info;
>> +
>> +	bp_info = mempool_to_bpinfo(pool);
>> +
>> +	if (!(bp_info->bp_list)) {
>> +		RTE_LOG(ERR, PMD, "DPAA2 buffer pool not configured\n");
>> +		return -2;
>> +	}
>> +
>> +	bpid = bp_info->bpid;
>> +
>> +	if (unlikely(!DPAA2_PER_LCORE_DPIO)) {
>> +		ret = dpaa2_affine_qbman_swp();
>> +		if (ret != 0) {
>> +			RTE_LOG(ERR, PMD, "Failed to allocate IO portal");
>> +			return -1;
>> +		}
>> +	}
>> +	swp = DPAA2_PER_LCORE_PORTAL;
>> +
>> +	mbuf_size = sizeof(struct rte_mbuf) + rte_pktmbuf_priv_size(pool);
>> +	while (n < count) {
>> +		/* Acquire is all-or-nothing, so we drain in 7s,
>> +		 * then the remainder.
>> +		 */
>> +		if ((count - n) > DPAA2_MBUF_MAX_ACQ_REL) {
>> +			ret = qbman_swp_acquire(swp, bpid, bufs,
>> +						DPAA2_MBUF_MAX_ACQ_REL);
>> +		} else {
>> +			ret = qbman_swp_acquire(swp, bpid, bufs,
>> +						count - n);
>> +		}
>> +		/* In case of less than requested number of buffers available
>> +		 * in pool, qbman_swp_acquire returns 0
>> +		 */
>> +		if (ret <= 0) {
>> +			PMD_TX_LOG(ERR, "Buffer acquire failed with"
>> +				   " err code: %d", ret);
>> +			/* The API expect the exact number of requested bufs */
>> +			/* Releasing all buffers allocated */
>> +			rte_dpaa2_mbuf_release(pool, obj_table, bpid,
>> +					       bp_info->meta_data_size, n);
>> +			return -1;
>> +		}
>> +		/* assigning mbuf from the acquired objects */
>> +		for (i = 0; (i < ret) && bufs[i]; i++) {
>> +			/* TODO-errata - observed that bufs may be null
>> +			 * i.e. first buffer is valid,
>> +			 * remaining 6 buffers may be null
>> +			 */
>> +			obj_table[n] = (struct rte_mbuf *)(bufs[i] - mbuf_size);
>> +			rte_mbuf_refcnt_set((struct rte_mbuf *)obj_table[n], 0);
>
> I think we should not assume the objects are mbufs.

Currently we only support packet buffers, i.e. mbufs.

> But even if we do it, I don't see why we need to set the refcnt.

rte_mbuf_raw_alloc() has the check
RTE_ASSERT(rte_mbuf_refcnt_read(m) == 0), so objects sitting in a
mempool are expected to have refcnt '0'. In our case, on transmit the
NIC releases the buffer back to the hw pool without core intervention,
and the hardware has no way to reset specific mbuf fields. The refcnt
could be reset per packet during transmission, but that would add to
the packet-processing cost. In simple forwarding the NIC gets packets
directly from hw and releases them directly back to hw, so we reset
refcnt here at acquire time to keep that cost out of the data path.

> What is returned in the buf[] table? In rte_dpaa2_mbuf_release(), it
> looks like you are using buf[i] = obj_table[i] + bp_info->meta_data_size

meta_data_size = sizeof(struct rte_mbuf) + rte_pktmbuf_priv_size(mp).
The address we configure in the hardware is the buffer (data-area)
address, i.e. obj_table[i] + meta_data_size (see the layout sketch
above).

>> [...]
>> +
>> +static unsigned
>> +hw_mbuf_get_count(const struct rte_mempool *mp __rte_unused)
>> +{
>> +	return 0;
>
> Looks like this is not implemented. This would give wrong mempool
> statistics when calling rte_mempool_dump().

Our MC bus now supports querying the buffer counts, so we can
implement it in the next version.

> Adding a test for this handler in app/test may highlight these issues.

Yes, we know it fails; this was not supported by our bus objects
earlier. Thanks for the suggestion. We plan to add these test cases
and will put them on our pending items list.
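In the meantime, here is a rough sketch of what we have in mind for
get_count (a sketch only: it assumes the dpbp_get_num_free_bufs() MC
call now exposed through our MC bus; the exact API and error handling
are still to be finalized):

static unsigned int
hw_mbuf_get_count(const struct rte_mempool *mp)
{
	uint32_t num_free_bufs = 0;
	struct dpaa2_bp_info *bp_info;
	struct dpaa2_dpbp_dev *dpbp_node;

	if (!mp || !mp->pool_data) {
		RTE_LOG(ERR, PMD, "Invalid mempool provided\n");
		return 0;
	}

	bp_info = (struct dpaa2_bp_info *)mp->pool_data;
	dpbp_node = bp_info->bp_list->buf_pool.dpbp_node;

	/* Query the free buffer count from the MC firmware */
	if (dpbp_get_num_free_bufs(&dpbp_node->dpbp, CMD_PRI_LOW,
				   dpbp_node->token, &num_free_bufs) != 0)
		return 0;

	return num_free_bufs;
}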
>> [...]
>> +
>> +#define DPAA2_MAX_BUF_POOLS	8
>> +
>> +struct buf_pool_cfg {
>> +	void *addr; /*!< The address from where DPAA2 will carve out the
>> +		     * buffers. 'addr' should be 'NULL' if user wants
>> +		     * to create buffers from the memory which user
>> +		     * asked DPAA2 to reserve during 'nadk init'
>> +		     */
>> +	phys_addr_t phys_addr; /*!< corresponding physical address
>> +				* of the memory provided in addr
>> +				*/
>> +	uint32_t num; /*!< number of buffers */
>> +	uint32_t size; /*!< size of each buffer. 'size' should include
>> +			* any headroom to be reserved and alignment
>> +			*/
>> +	uint16_t align; /*!< Buffer alignment (in bytes) */
>> +	uint16_t bpid; /*!< The buffer pool id. This will be filled
>> +			* in by DPAA2 for each buffer pool
>> +			*/
>> +};
>
> I think the usual doxygen comment in dpdk is "/**" instead of "/*!"

Yes, we will fix it.

> Regards,
> Olivier

Regards,
Hemant