To: <xiangxia.m.yue@gmail.com>, <olivier.matz@6wind.com>,
 <gage.eads@intel.com>, <artem.andreev@oktetlabs.ru>, <jerinj@marvell.com>,
 <ndabilpuram@marvell.com>, <vattunuru@marvell.com>, <hemant.agrawal@nxp.com>, 
 <david.marchand@redhat.com>, <anatoly.burakov@intel.com>,
 <bruce.richardson@intel.com>
CC: <dev@dpdk.org>
References: <1583114253-15345-1-git-send-email-xiangxia.m.yue@gmail.com>
 <1586787714-13890-1-git-send-email-xiangxia.m.yue@gmail.com>
 <1586787714-13890-2-git-send-email-xiangxia.m.yue@gmail.com>
From: Andrew Rybchenko <arybchenko@solarflare.com>
Message-ID: <3b1cb032-a1c2-6dee-af72-8ed690a9ad8c@solarflare.com>
Date: Thu, 23 Apr 2020 16:38:44 +0300
In-Reply-To: <1586787714-13890-2-git-send-email-xiangxia.m.yue@gmail.com>
Subject: Re: [dpdk-dev] [PATCH dpdk-dev v3 2/2] mempool: use shared memzone
 for rte_mempool_ops

On 4/13/20 5:21 PM, xiangxia.m.yue@gmail.com wrote:
> From: Tonghao Zhang <xiangxia.m.yue@gmail.com>
> 
> The order of mempool initialization affects the mempool index in
> rte_mempool_ops_table. For example, when building applications with:
> 
> $ gcc -lrte_mempool_bucket -lrte_mempool_ring ...
> 
> the "bucket" mempool is registered first, so its index in the table
> is 0 while the index of the "ring" mempool is 1. DPDK uses
> mk/rte.app.mk to build its applications, while others, for example
> Open vSwitch, link against libdpdk.a or libdpdk.so. Hence the set
> and order of mempool libraries linked into DPDK applications and
> into Open vSwitch may differ.
> 
> A mempool can be shared between primary and secondary processes,
> e.g. dpdk-pdump and a pdump-enabled PMD/Open vSwitch. A crash occurs
> because dpdk-pdump creates the "ring_mp_mc" ring, whose index in its
> ops table is 0, while in Open vSwitch index 0 belongs to the "bucket"
> mempool. When Open vSwitch uses index 0 to look up the mempool ops
> and allocate memory from the mempool, it crashes:
> 
>     bucket_dequeue (access null and crash)
>     rte_mempool_get_ops (should get "ring_mp_mc",
>                          but get "bucket" mempool)
>     rte_mempool_ops_dequeue_bulk
>     ...
>     rte_pktmbuf_alloc
>     rte_pktmbuf_copy
>     pdump_copy
>     pdump_rx
>     rte_eth_rx_burst
> 
> To avoid the crash, there are several possible solutions:
> * constructor priority: each mempool driver uses a different
>   priority in RTE_INIT, but this is not easy to maintain.
> 
> * change mk/rte.app.mk: change the link order in mk/rte.app.mk to
>   match libdpdk.a/libdpdk.so, but whenever a new mempool driver is
>   added in the future, we must take care to keep the order in sync.
> 
> * register mempools in a defined order: sort the mempool ops when
>   registering, so the libraries linked in do not affect the index
>   in the ops table; however, the number of mempool libraries may
>   still differ between processes.
> 
> * shared memzone: the primary process allocates a struct in shared
>   memory (a memzone). When registering a mempool ops, we first
>   obtain the name and id from the shared struct: with the lock held,
>   look up the registered name and return its index if found;
>   otherwise take the next free id and copy the name into the struct.
> 
> Previous discussion: https://mails.dpdk.org/archives/dev/2020-March/159354.html
> 
> Suggested-by: Olivier Matz <olivier.matz@6wind.com>
> Suggested-by: Jerin Jacob <jerinj@marvell.com>
> Signed-off-by: Tonghao Zhang <xiangxia.m.yue@gmail.com>
> ---
> v2:
> * fix checkpatch warning
> ---
>  lib/librte_mempool/rte_mempool.h     | 28 +++++++++++-
>  lib/librte_mempool/rte_mempool_ops.c | 89 ++++++++++++++++++++++++++++--------
>  2 files changed, 96 insertions(+), 21 deletions(-)
> 
> diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
> index c90cf31467b2..2709b9e1d51b 100644
> --- a/lib/librte_mempool/rte_mempool.h
> +++ b/lib/librte_mempool/rte_mempool.h
> @@ -50,6 +50,7 @@
>  #include <rte_ring.h>
>  #include <rte_memcpy.h>
>  #include <rte_common.h>
> +#include <rte_init.h>
>  
>  #ifdef __cplusplus
>  extern "C" {
> @@ -678,7 +679,6 @@ struct rte_mempool_ops {
>   */
>  struct rte_mempool_ops_table {
>  	rte_spinlock_t sl;     /**< Spinlock for add/delete. */
> -	uint32_t num_ops;      /**< Number of used ops structs in the table. */
>  	/**
>  	 * Storage for all possible ops structs.
>  	 */
> @@ -910,6 +910,30 @@ int rte_mempool_ops_get_info(const struct rte_mempool *mp,
>   */
>  int rte_mempool_register_ops(const struct rte_mempool_ops *ops);
>  
> +struct rte_mempool_shared_ops {
> +	size_t num_mempool_ops;

Is there any specific reason to change the type from the uint32_t
used above to size_t? I think that uint32_t is better here, since
it is just a count, not a memory size or a related value.
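
I.e. simply:

	uint32_t num_mempool_ops;      /**< Number of used ops structs. */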

> +	struct {
> +		char name[RTE_MEMPOOL_OPS_NAMESIZE];
> +	} mempool_ops[RTE_MEMPOOL_MAX_OPS_IDX];
> +
> +	rte_spinlock_t mempool;
> +};
> +
> +static inline int
> +mempool_ops_register_cb(const void *arg)
> +{
> +	const struct rte_mempool_ops *h = (const struct rte_mempool_ops *)arg;
> +
> +	return rte_mempool_register_ops(h);
> +}
> +
> +static inline void
> +mempool_ops_register(const struct rte_mempool_ops *ops)
> +{
> +	rte_init_register(mempool_ops_register_cb, (const void *)ops,
> +			  RTE_INIT_PRE);
> +}
> +
>  /**
>   * Macro to statically register the ops of a mempool handler.
>   * Note that the rte_mempool_register_ops fails silently here when
> @@ -918,7 +942,7 @@ int rte_mempool_ops_get_info(const struct rte_mempool *mp,
>  #define MEMPOOL_REGISTER_OPS(ops)				\
>  	RTE_INIT(mp_hdlr_init_##ops)				\
>  	{							\
> -		rte_mempool_register_ops(&ops);			\
> +		mempool_ops_register(&ops);			\
>  	}
>  
>  /**
> diff --git a/lib/librte_mempool/rte_mempool_ops.c b/lib/librte_mempool/rte_mempool_ops.c
> index 22c5251eb068..b10fda662db6 100644
> --- a/lib/librte_mempool/rte_mempool_ops.c
> +++ b/lib/librte_mempool/rte_mempool_ops.c
> @@ -14,43 +14,95 @@
>  /* indirect jump table to support external memory pools. */
>  struct rte_mempool_ops_table rte_mempool_ops_table = {
>  	.sl =  RTE_SPINLOCK_INITIALIZER,
> -	.num_ops = 0
>  };
>  
> -/* add a new ops struct in rte_mempool_ops_table, return its index. */
> -int
> -rte_mempool_register_ops(const struct rte_mempool_ops *h)
> +static int
> +rte_mempool_register_shared_ops(const char *name)
>  {
> -	struct rte_mempool_ops *ops;
> -	int16_t ops_index;
> +	static bool mempool_shared_ops_inited;
> +	struct rte_mempool_shared_ops *shared_ops;
> +	const struct rte_memzone *mz;
> +	uint32_t ops_index = 0;
> +

I think we should sanity check 'name' here to reject an empty
string (see review notes below).
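
Something along these lines (an untested sketch):

	if (name == NULL || name[0] == '\0')
		return -EINVAL;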

> +	if (rte_eal_process_type() == RTE_PROC_PRIMARY &&
> +	    !mempool_shared_ops_inited) {
> +
> +		mz = rte_memzone_reserve("mempool_ops_shared",
> +					 sizeof(*shared_ops), 0, 0);
> +		if (mz == NULL)
> +			return -ENOMEM;
> +
> +		shared_ops = mz->addr;
> +		shared_ops->num_mempool_ops = 0;
> +		rte_spinlock_init(&shared_ops->mempool);
> +
> +		mempool_shared_ops_inited = true;
> +	} else {
> +		mz = rte_memzone_lookup("mempool_ops_shared");
> +		if (mz == NULL)
> +			return -ENOMEM;
> +
> +		shared_ops = mz->addr;
> +	}
>  
> -	rte_spinlock_lock(&rte_mempool_ops_table.sl);
> +	rte_spinlock_lock(&shared_ops->mempool);
>  
> -	if (rte_mempool_ops_table.num_ops >=
> -			RTE_MEMPOOL_MAX_OPS_IDX) {
> -		rte_spinlock_unlock(&rte_mempool_ops_table.sl);
> +	if (shared_ops->num_mempool_ops >= RTE_MEMPOOL_MAX_OPS_IDX) {
> +		rte_spinlock_unlock(&shared_ops->mempool);
>  		RTE_LOG(ERR, MEMPOOL,
>  			"Maximum number of mempool ops structs exceeded\n");
>  		return -ENOSPC;
>  	}
>  
> +	while (shared_ops->mempool_ops[ops_index].name[0]) {

Please compare with '\0' explicitly, as the DPDK style guide says.
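
I.e.:

	while (shared_ops->mempool_ops[ops_index].name[0] != '\0') {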

> +		if (!strcmp(name, shared_ops->mempool_ops[ops_index].name)) {
> +			rte_spinlock_unlock(&shared_ops->mempool);
> +			return ops_index;
> +		}
> +
> +		ops_index++;
> +	}
> +
> +	strlcpy(shared_ops->mempool_ops[ops_index].name, name,
> +		sizeof(shared_ops->mempool_ops[0].name));
> +
> +	shared_ops->num_mempool_ops++;
> +
> +	rte_spinlock_unlock(&shared_ops->mempool);
> +	return ops_index;
> +}
> +
> +/* add a new ops struct in rte_mempool_ops_table, return its index. */
> +int
> +rte_mempool_register_ops(const struct rte_mempool_ops *h)
> +{
> +	struct rte_mempool_ops *ops;
> +	int16_t ops_index;
> +
>  	if (h->alloc == NULL || h->enqueue == NULL ||
> -			h->dequeue == NULL || h->get_count == NULL) {
> -		rte_spinlock_unlock(&rte_mempool_ops_table.sl);
> +	    h->dequeue == NULL || h->get_count == NULL) {

Changing the formatting just makes review a bit harder.

>  		RTE_LOG(ERR, MEMPOOL,
>  			"Missing callback while registering mempool ops\n");
> +		rte_errno = EINVAL;

Why is this done in this patch? To me it looks like a logically
separate change, if it is really required at all (it should be
properly motivated).

>  		return -EINVAL;
>  	}
>  
>  	if (strlen(h->name) >= sizeof(ops->name) - 1) {
> -		rte_spinlock_unlock(&rte_mempool_ops_table.sl);
> -		RTE_LOG(DEBUG, EAL, "%s(): mempool_ops <%s>: name too long\n",
> -				__func__, h->name);
> +		RTE_LOG(ERR, MEMPOOL,
> +			"The registering  mempool_ops <%s>: name too long\n",
> +			h->name);

Why do you change the log level from DEBUG to ERR here? It is not
directly related to the purpose of the patch.

>  		rte_errno = EEXIST;
>  		return -EEXIST;
>  	}
>  
> -	ops_index = rte_mempool_ops_table.num_ops++;
> +	ops_index = rte_mempool_register_shared_ops(h->name);
> +	if (ops_index < 0) {
> +		rte_errno = -ops_index;
> +		return ops_index;
> +	}
> +
> +	rte_spinlock_lock(&rte_mempool_ops_table.sl);
> +
>  	ops = &rte_mempool_ops_table.ops[ops_index];
>  	strlcpy(ops->name, h->name, sizeof(ops->name));
>  	ops->alloc = h->alloc;
> @@ -165,9 +217,8 @@ struct rte_mempool_ops_table rte_mempool_ops_table = {
>  	if (mp->flags & MEMPOOL_F_POOL_CREATED)
>  		return -EEXIST;
>  
> -	for (i = 0; i < rte_mempool_ops_table.num_ops; i++) {
> -		if (!strcmp(name,
> -				rte_mempool_ops_table.ops[i].name)) {
> +	for (i = 0; i < RTE_MEMPOOL_MAX_OPS_IDX; i++) {
> +		if (!strcmp(name, rte_mempool_ops_table.ops[i].name)) {

Since rte_mempool_ops_table is filled in with zeros, the name string
is empty by default. So, a request with an empty name will match the
first unused entry. I guess that is not what we want here. I think
we should handle the empty string before the loop and return
-EINVAL.
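
The same kind of check as suggested above would do (untested):

	if (name == NULL || name[0] == '\0')
		return -EINVAL;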

>  			ops = &rte_mempool_ops_table.ops[i];
>  			break;
>  		}
>