From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: <dev-bounces@dpdk.org>
Received: from dpdk.org (dpdk.org [92.243.14.124])
	by inbox.dpdk.org (Postfix) with ESMTP id C13B8A0573;
	Wed,  4 Mar 2020 15:47:14 +0100 (CET)
Received: from [92.243.14.124] (localhost [127.0.0.1])
	by dpdk.org (Postfix) with ESMTP id 179712C16;
	Wed,  4 Mar 2020 15:47:14 +0100 (CET)
Received: from mail-qv1-f68.google.com (mail-qv1-f68.google.com
 [209.85.219.68]) by dpdk.org (Postfix) with ESMTP id 343032C02
 for <dev@dpdk.org>; Wed,  4 Mar 2020 15:47:13 +0100 (CET)
Received: by mail-qv1-f68.google.com with SMTP id g16so877770qvz.5
 for <dev@dpdk.org>; Wed, 04 Mar 2020 06:47:13 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=mime-version:references:in-reply-to:from:date:message-id:subject:to
 :cc; bh=5NfMMcpQc+MaklRdg68ZcFM3VR+1fitCykKULCeNFvg=;
 b=AA8orFiwUr+W7x6l0ll+gyCAmoG6nnRQg3cBeZAMvG036FHXTq/Lzdm17Tt2IneJQ5
 0yQ+7MVwG7YSytRhtZuYrXY1oVpEMoWOAVMFSEhu60+O8eZ9e/SnourcZaX18LQCqlSH
 BMdhCZS2CCz9AznuwvUsVz9A9HmFYxBZA6IePXmwi8W/MMNShWD6Czh7bx5HPWbd6SNr
 +nMWaA7eV86UKReaEAcQ97g4aza+BXJ59LokKuMZTGV0L7DTl1XsahUa4XxOjUx+5OM2
 iHCkfBXOhiwN8DPyKGcmrZRJXjitu8e8DnDkxXnnM6kLOTlyhbd8c+sYzuflnwFU1oRC
 wRVw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:references:in-reply-to:from:date
 :message-id:subject:to:cc;
 bh=5NfMMcpQc+MaklRdg68ZcFM3VR+1fitCykKULCeNFvg=;
 b=uZ+C6ristAx97r3ytc2wZ7fD0/mFfV1cpKRGUQaWEloPasFyOXlUg7VmZUGiazKOW+
 xVOEc0gBnzZ9tz/HpIdINnG+oEvcVftZkJPPrkuyjVUdf6tly2/CKzSp7jp5FdnfY83w
 vV63rvfYs1+v33pnfqQTLhq/z9eFBu66zH+kxMqVPeGgQkvKxm2/+5ru7MNDNvSpzpX6
 uUGxe6CYiVFCh9/dp9Em87aGRCyMqLJ3TAK/uEpiJFQfGIFRs3+8eJouokuZokQ8xy/O
 0mqTlZrplXuhoiyVrnYmGtSQrnqRdHlKjBHiFXIbYvltMWHQNcIHWzbBKxZixL7Xjy7K
 F34A==
X-Gm-Message-State: ANhLgQ0BkldmFzBIPcvmNYEuWfe4HM3HIAPE89sd+WwHu29X9idmQ7wt
 IeMCnbyq7nplsUACe4YN4fcT82kd2Aqrsu19MyU=
X-Google-Smtp-Source: ADFU+vvz9/UBRlFDLrfKALImfefxPzbduDXttGT+Qvr5mS9JRUSSVNRSmqyjeDQnxzFNPr7UmCEIaw30ryWEN/eUgRk=
X-Received: by 2002:a05:6214:a91:: with SMTP id
 ev17mr2328182qvb.112.1583333232372; 
 Wed, 04 Mar 2020 06:47:12 -0800 (PST)
MIME-Version: 1.0
References: <1583114253-15345-1-git-send-email-xiangxia.m.yue@gmail.com>
 <CALBAE1PEZUsOaMh-YjmNc6G+f_SvU1N+hoKV8Vq8OL=399BEEQ@mail.gmail.com>
 <CAMDZJNXatfOfh6AuUcTJ7JJUGJs_KNB8T5D-uLsvfO4oS7N_cg@mail.gmail.com>
 <CALBAE1NKhRokNU4xkNVEz8N0JmUymDKkQ9ziCd76CXWBh8P3tg@mail.gmail.com>
In-Reply-To: <CALBAE1NKhRokNU4xkNVEz8N0JmUymDKkQ9ziCd76CXWBh8P3tg@mail.gmail.com>
From: Tonghao Zhang <xiangxia.m.yue@gmail.com>
Date: Wed, 4 Mar 2020 22:46:34 +0800
Message-ID: <CAMDZJNUmgkg0JPawmG4vcORdbobtWw528no8AxzMzayn+Rk5XQ@mail.gmail.com>
To: Jerin Jacob <jerinjacobk@gmail.com>
Cc: dpdk-dev <dev@dpdk.org>, Olivier Matz <olivier.matz@6wind.com>, 
 Andrew Rybchenko <arybchenko@solarflare.com>, Gage Eads <gage.eads@intel.com>, 
 "Artem V. Andreev" <artem.andreev@oktetlabs.ru>,
 Jerin Jacob <jerinj@marvell.com>, 
 Nithin Dabilpuram <ndabilpuram@marvell.com>,
 Vamsi Attunuru <vattunuru@marvell.com>, 
 Hemant Agrawal <hemant.agrawal@nxp.com>
Content-Type: text/plain; charset="UTF-8"
Subject: Re: [dpdk-dev] [PATCH] mempool: sort the rte_mempool_ops by name
X-BeenThere: dev@dpdk.org
X-Mailman-Version: 2.1.15
Precedence: list
List-Id: DPDK patches and discussions <dev.dpdk.org>
List-Unsubscribe: <https://mails.dpdk.org/options/dev>,
 <mailto:dev-request@dpdk.org?subject=unsubscribe>
List-Archive: <http://mails.dpdk.org/archives/dev/>
List-Post: <mailto:dev@dpdk.org>
List-Help: <mailto:dev-request@dpdk.org?subject=help>
List-Subscribe: <https://mails.dpdk.org/listinfo/dev>,
 <mailto:dev-request@dpdk.org?subject=subscribe>
Errors-To: dev-bounces@dpdk.org
Sender: "dev" <dev-bounces@dpdk.org>

On Wed, Mar 4, 2020 at 9:33 PM Jerin Jacob <jerinjacobk@gmail.com> wrote:
>
> On Wed, Mar 4, 2020 at 6:48 PM Tonghao Zhang <xiangxia.m.yue@gmail.com> wrote:
> >
> > On Mon, Mar 2, 2020 at 9:45 PM Jerin Jacob <jerinjacobk@gmail.com> wrote:
> > >
> > > On Mon, Mar 2, 2020 at 7:27 AM <xiangxia.m.yue@gmail.com> wrote:
> > > >
> > > > From: Tonghao Zhang <xiangxia.m.yue@gmail.com>
> > > >
> > > > The order of mempool initialization affects the mempool index in
> > > > the rte_mempool_ops_table. For example, when building APPs with:
> > > >
> > > > $ gcc -lrte_mempool_bucket -lrte_mempool_ring ...
> > > >
> > > > the "bucket" mempool will be registered first, and its index in
> > > > the table is 0, while the index of the "ring" mempool is 1. DPDK
> > > > uses mk/rte.app.mk to build APPs, while others, for example
> > > > Open vSwitch, link against libdpdk.a or libdpdk.so. The mempool
> > > > link order in DPDK and in Open vSwitch can therefore differ.
> > > >
> > > > A mempool can be shared between a primary and a secondary process,
> > > > e.g. dpdk-pdump and a pdump-enabled PMD/Open vSwitch. A crash can
> > > > occur because dpdk-pdump creates the "ring_mp_mc" ring, whose index
> > > > in its table is 0, while in Open vSwitch the "bucket" mempool has
> > > > index 0. When Open vSwitch uses index 0 to look up the mempool ops
> > > > and allocate memory from the mempool, it crashes:
> > > >
> > > >     bucket_dequeue (access null and crash)
> > > >     rte_mempool_get_ops (should get "ring_mp_mc",
> > > >                          but get "bucket" mempool)
> > > >     rte_mempool_ops_dequeue_bulk
> > > >     ...
> > > >     rte_pktmbuf_alloc
> > > >     rte_pktmbuf_copy
> > > >     pdump_copy
> > > >     pdump_rx
> > > >     rte_eth_rx_burst
> > > >
> > > > To avoid the crash, there are several solutions:
> > > > * constructor priority: different mempool drivers could use
> > > >   different priorities in RTE_INIT, but that is not easy to
> > > >   maintain.
> > > >
> > > > * change mk/rte.app.mk: change the order in mk/rte.app.mk to
> > > >   match libdpdk.a/libdpdk.so, but whenever a new mempool driver
> > > >   is added in the future, we must keep the order consistent.
> > > >
> > > > * register mempool ops in sorted order: sort the mempool ops when
> > > >   registering them, so the link order does not affect the index
> > > >   in the mempool ops table.
> > > >
> > > > Signed-off-by: Tonghao Zhang <xiangxia.m.yue@gmail.com>
> > > > ---
> > > >  lib/librte_mempool/rte_mempool_ops.c | 18 ++++++++++++++++--
> > > >  1 file changed, 16 insertions(+), 2 deletions(-)
> > > >
> > > > diff --git a/lib/librte_mempool/rte_mempool_ops.c b/lib/librte_mempool/rte_mempool_ops.c
> > > > index 22c5251..06dfe16 100644
> > > > --- a/lib/librte_mempool/rte_mempool_ops.c
> > > > +++ b/lib/librte_mempool/rte_mempool_ops.c
> > > > @@ -22,7 +22,7 @@ struct rte_mempool_ops_table rte_mempool_ops_table = {
> > > >  rte_mempool_register_ops(const struct rte_mempool_ops *h)
> > > >  {
> > > >         struct rte_mempool_ops *ops;
> > > > -       int16_t ops_index;
> > > > +       unsigned ops_index, i;
> > > >
> > > >         rte_spinlock_lock(&rte_mempool_ops_table.sl);
> > > >
> > > > @@ -50,7 +50,19 @@ struct rte_mempool_ops_table rte_mempool_ops_table = {
> > > >                 return -EEXIST;
> > > >         }
> > > >
> > > > -       ops_index = rte_mempool_ops_table.num_ops++;
> > > > +       /* Sort the rte_mempool_ops by name so the order of mempool
> > > > +        * lib initialization does not affect the ops index. */
> > >
> > > +1 for the fix.
> > > For the implementation, why not use qsort_r() for sorting?
> > The implementation is simple, and the number of mempool drivers is
> > not large. But we can use qsort_r() to implement it.
>
> Since it is in a slow path, IMO, better to use standard sort functions
> for better readability.
Agreed. Could you help me review this patch:

diff --git a/lib/librte_mempool/rte_mempool_ops.c
b/lib/librte_mempool/rte_mempool_ops.c
index 22c5251..1acee58 100644
--- a/lib/librte_mempool/rte_mempool_ops.c
+++ b/lib/librte_mempool/rte_mempool_ops.c
@@ -17,6 +17,15 @@ struct rte_mempool_ops_table rte_mempool_ops_table = {
        .num_ops = 0
 };

+static int
+compare_mempool_ops(const void *a, const void *b)
+{
+       const struct rte_mempool_ops *m_a = a;
+       const struct rte_mempool_ops *m_b = b;
+
+       return strcmp(m_a->name, m_b->name);
+}
+
 /* add a new ops struct in rte_mempool_ops_table, return its index. */
 int
 rte_mempool_register_ops(const struct rte_mempool_ops *h)
@@ -63,6 +72,9 @@ struct rte_mempool_ops_table rte_mempool_ops_table = {
        ops->get_info = h->get_info;
        ops->dequeue_contig_blocks = h->dequeue_contig_blocks;

+       qsort(rte_mempool_ops_table.ops, rte_mempool_ops_table.num_ops,
+             sizeof(rte_mempool_ops_table.ops[0]), compare_mempool_ops);
+
        rte_spinlock_unlock(&rte_mempool_ops_table.sl);

        return ops_index;


>
> > >
> > > > +       ops_index = rte_mempool_ops_table.num_ops;
> > > > +       for (i = 0; i < rte_mempool_ops_table.num_ops; i++) {
> > > > +               if (strcmp(h->name, rte_mempool_ops_table.ops[i].name) < 0) {
> > > > +                       do {
> > > > +                               rte_mempool_ops_table.ops[ops_index] =
> > > > +                                       rte_mempool_ops_table.ops[ops_index -1];
> > > > +                       } while (--ops_index > i);
> > > > +                       break;
> > > > +               }
> > > > +       }
> > > > +
> > > >         ops = &rte_mempool_ops_table.ops[ops_index];
> > > >         strlcpy(ops->name, h->name, sizeof(ops->name));
> > > >         ops->alloc = h->alloc;
> > > > @@ -63,6 +75,8 @@ struct rte_mempool_ops_table rte_mempool_ops_table = {
> > > >         ops->get_info = h->get_info;
> > > >         ops->dequeue_contig_blocks = h->dequeue_contig_blocks;
> > > >
> > > > +       rte_mempool_ops_table.num_ops++;
> > > > +
> > > >         rte_spinlock_unlock(&rte_mempool_ops_table.sl);
> > > >
> > > >         return ops_index;
> > > > --
> > > > 1.8.3.1
> > > >
> >
> >
> >
> > --
> > Thanks,
> > Tonghao



--
Thanks,
Tonghao