Date: Thu, 5 Mar 2020 17:57:39 +0100
From: Olivier Matz
To: xiangxia.m.yue@gmail.com
Cc: dev@dpdk.org, arybchenko@solarflare.com, gage.eads@intel.com,
 artem.andreev@oktetlabs.ru, jerinj@marvell.com, ndabilpuram@marvell.com,
 vattunuru@marvell.com, hemant.agrawal@nxp.com
Message-ID: <20200305165739.GH13822@platinum>
References: <1583114253-15345-1-git-send-email-xiangxia.m.yue@gmail.com>
 <1583396440-25485-1-git-send-email-xiangxia.m.yue@gmail.com>
In-Reply-To: <1583396440-25485-1-git-send-email-xiangxia.m.yue@gmail.com>
Subject: Re: [dpdk-dev] [PATCH dpdk-dev v2] mempool: sort the rte_mempool_ops
 by name

Hi,

On Thu, Mar 05, 2020 at 04:20:40PM +0800, xiangxia.m.yue@gmail.com wrote:
> From: Tonghao Zhang
>
> The order of mempool initiation affects mempool index in the
> rte_mempool_ops_table. For example, when building APPs with:
>
> $ gcc -lrte_mempool_bucket -lrte_mempool_ring ...
>
> The "bucket" mempool will be registered firstly, and its index
> in table is 0 while the index of "ring" mempool is 1. DPDK
> uses the mk/rte.app.mk to build APPs, and others, for example,
> Open vSwitch, use the libdpdk.a or libdpdk.so to build it.
> The mempool lib linked in dpdk and Open vSwitch is different.
>
> The mempool can be used between primary and secondary process,
> such as dpdk-pdump and pdump-pmd/Open vSwitch(pdump enabled).
> There will be a crash because dpdk-pdump creates the "ring_mp_mc"
> ring which index in table is 0, but the index of "bucket" ring
> is 0 in Open vSwitch. If Open vSwitch use the index 0 to get
> mempool ops and malloc memory from mempool. The crash will occur:
>
>   bucket_dequeue (access null and crash)
>   rte_mempool_get_ops (should get "ring_mp_mc",
>                        but get "bucket" mempool)
>   rte_mempool_ops_dequeue_bulk
>   ...
>   rte_pktmbuf_alloc
>   rte_pktmbuf_copy
>   pdump_copy
>   pdump_rx
>   rte_eth_rx_burst
>
> To avoid the crash, there are some solution:
> * constructor priority: Different mempool uses different
>   priority in RTE_INIT, but it's not easy to maintain.
>
> * change mk/rte.app.mk: Change the order in mk/rte.app.mk to
>   be same as libdpdk.a/libdpdk.so, but when adding a new mempool
>   driver in future, we must make sure the order.
>
> * register mempool orderly: Sort the mempool when registering,
>   so the lib linked will not affect the index in mempool table.
>
> Signed-off-by: Tonghao Zhang

Sorting the pool drivers certainly makes things better than they are
today. However, there is still an issue as soon as the mempool list is
not the same on both sides. I don't see any better solution anyway.
Storing the ops pointers in the shared memory won't work, since
function addresses won't be the same in the two processes.

Just one minor comment below.

> ---
> v2:
> 1. use the qsort to sort the mempool_ops.
> 2. tested: https://travis-ci.com/ovn-open-virtual-networks/dpdk-next-net/builds/151894026
> ---
>  lib/librte_mempool/rte_mempool_ops.c | 14 ++++++++++++++
>  1 file changed, 14 insertions(+)
>
> diff --git a/lib/librte_mempool/rte_mempool_ops.c b/lib/librte_mempool/rte_mempool_ops.c
> index 22c5251..e9113cf 100644
> --- a/lib/librte_mempool/rte_mempool_ops.c
> +++ b/lib/librte_mempool/rte_mempool_ops.c
> @@ -17,6 +17,15 @@ struct rte_mempool_ops_table rte_mempool_ops_table = {
>  	.num_ops = 0
>  };
>
> +static int
> +compare_mempool_ops(const void *a, const void *b)
> +{
> +	const struct rte_mempool_ops *m_a = a;
> +	const struct rte_mempool_ops *m_b = b;
> +
> +	return strcmp(m_a->name, m_b->name);
> +}
> +
>  /* add a new ops struct in rte_mempool_ops_table, return its index. */
>  int
>  rte_mempool_register_ops(const struct rte_mempool_ops *h)
> @@ -63,6 +72,11 @@ struct rte_mempool_ops_table rte_mempool_ops_table = {
>  	ops->get_info = h->get_info;
>  	ops->dequeue_contig_blocks = h->dequeue_contig_blocks;
>
> +	/* sort the rte_mempool_ops by name. the order of the mempool
> +	 * lib initiation will not affect rte_mempool_ops index. */

initiation -> initialization

there is also a checkpatch comment:

  WARNING:BLOCK_COMMENT_STYLE: Block comments use a trailing */ on a
  separate line
  #153: FILE: lib/librte_mempool/rte_mempool_ops.c:76:
  +	 * lib initiation will not affect rte_mempool_ops index. */

> +	qsort(rte_mempool_ops_table.ops, rte_mempool_ops_table.num_ops,
> +	      sizeof(rte_mempool_ops_table.ops[0]), compare_mempool_ops);
> +
>  	rte_spinlock_unlock(&rte_mempool_ops_table.sl);
>
>  	return ops_index;
> --
> 1.8.3.1
>

Then,
Acked-by: Olivier Matz

Thanks!
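
To make the effect of the sort concrete, here is a minimal standalone
sketch, not DPDK code: the ops_table struct, register_ops()/lookup()
helpers and the driver names below are invented for illustration only.
It shows the property the patch relies on: once the registration table
is re-sorted by name with qsort()/strcmp(), the index of a given ops
name no longer depends on the order in which the drivers were
registered.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define MAX_OPS 16

/* toy stand-in for one registered mempool ops entry */
struct ops {
	char name[32];
};

/* toy stand-in for rte_mempool_ops_table */
struct ops_table {
	unsigned int num_ops;
	struct ops ops[MAX_OPS];
};

static int
compare_ops(const void *a, const void *b)
{
	const struct ops *m_a = a;
	const struct ops *m_b = b;

	return strcmp(m_a->name, m_b->name);
}

/* register one driver, then re-sort by name as the patch does */
static void
register_ops(struct ops_table *t, const char *name)
{
	snprintf(t->ops[t->num_ops].name, sizeof(t->ops[0].name),
		 "%s", name);
	t->num_ops++;
	qsort(t->ops, t->num_ops, sizeof(t->ops[0]), compare_ops);
}

/* return the table index of a driver name, or -1 if not found */
static int
lookup(const struct ops_table *t, const char *name)
{
	unsigned int i;

	for (i = 0; i < t->num_ops; i++)
		if (strcmp(t->ops[i].name, name) == 0)
			return (int)i;
	return -1;
}

int
main(void)
{
	struct ops_table primary = { 0 };
	struct ops_table secondary = { 0 };

	/* "primary" process: "bucket" happens to be registered first */
	register_ops(&primary, "bucket");
	register_ops(&primary, "ring_mp_mc");

	/* "secondary" process: opposite registration order */
	register_ops(&secondary, "ring_mp_mc");
	register_ops(&secondary, "bucket");

	/* after sorting, the index of a given name matches on both sides */
	printf("ring_mp_mc: primary=%d secondary=%d\n",
	       lookup(&primary, "ring_mp_mc"),
	       lookup(&secondary, "ring_mp_mc"));
	return 0;
}

Running it prints the same index for "ring_mp_mc" on both simulated
sides, which is the property the shared-memory index lookup between
primary and secondary processes needs.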