From: Jerin Jacob
Date: Wed, 4 Mar 2020 20:44:28 +0530
To: Tonghao Zhang
Cc: dpdk-dev, Olivier Matz, Andrew Rybchenko, Gage Eads, "Artem V. Andreev",
 Jerin Jacob, Nithin Dabilpuram, Vamsi Attunuru, Hemant Agrawal
Subject: Re: [dpdk-dev] [PATCH] mempool: sort the rte_mempool_ops by name

On Wed, Mar 4, 2020 at 8:17 PM Tonghao Zhang wrote:
>
> On Wed, Mar 4, 2020 at 9:33 PM Jerin Jacob wrote:
> >
> > On Wed, Mar 4, 2020 at 6:48 PM Tonghao Zhang wrote:
> > >
> > > On Mon, Mar 2, 2020 at 9:45 PM Jerin Jacob wrote:
> > > >
> > > > On Mon, Mar 2, 2020 at 7:27 AM wrote:
> > > > >
> > > > > From: Tonghao Zhang
> > > > >
> > > > > The order of mempool initialization affects the mempool index in
> > > > > rte_mempool_ops_table. For example, when building APPs with:
> > > > >
> > > > > $ gcc -lrte_mempool_bucket -lrte_mempool_ring ...
> > > > >
> > > > > the "bucket" mempool will be registered first, and its index
> > > > > in the table is 0, while the index of the "ring" mempool is 1.
> > > > > DPDK uses mk/rte.app.mk to build APPs, while others, for example
> > > > > Open vSwitch, use libdpdk.a or libdpdk.so to build.
> > > > > The mempool libs linked into a DPDK APP and into Open vSwitch can
> > > > > therefore be linked in a different order.
> > > > >
> > > > > A mempool can be shared between a primary and a secondary process,
> > > > > such as dpdk-pdump and pdump-pmd/Open vSwitch (pdump enabled).
> > > > > A crash occurs because dpdk-pdump creates the "ring_mp_mc" ring,
> > > > > whose index in its table is 0, while index 0 in Open vSwitch is
> > > > > the "bucket" ops. When Open vSwitch uses index 0 to get the
> > > > > mempool ops and allocate memory from the mempool, it crashes:
> > > > >
> > > > >   bucket_dequeue (access null and crash)
> > > > >   rte_mempool_get_ops (should get "ring_mp_mc",
> > > > >                        but gets "bucket" mempool)
> > > > >   rte_mempool_ops_dequeue_bulk
> > > > >   ...
> > > > >   rte_pktmbuf_alloc
> > > > >   rte_pktmbuf_copy
> > > > >   pdump_copy
> > > > >   pdump_rx
> > > > >   rte_eth_rx_burst
> > > > >
> > > > > To avoid the crash, there are several solutions:
> > > > > * constructor priority: different mempools use different
> > > > >   priorities in RTE_INIT, but it's not easy to maintain.
> > > > >
> > > > > * change mk/rte.app.mk: change the order in mk/rte.app.mk to
> > > > >   match libdpdk.a/libdpdk.so, but when adding a new mempool
> > > > >   driver in the future, we must keep that order correct.
> > > > >
> > > > > * register mempools in order: sort the mempool ops when
> > > > >   registering, so the libs linked will not affect the index
> > > > >   in the mempool ops table.
> > > > >
> > > > > Signed-off-by: Tonghao Zhang
> > > > > ---
> > > > >  lib/librte_mempool/rte_mempool_ops.c | 18 ++++++++++++++++--
> > > > >  1 file changed, 16 insertions(+), 2 deletions(-)
> > > > >
> > > > > diff --git a/lib/librte_mempool/rte_mempool_ops.c b/lib/librte_mempool/rte_mempool_ops.c
> > > > > index 22c5251..06dfe16 100644
> > > > > --- a/lib/librte_mempool/rte_mempool_ops.c
> > > > > +++ b/lib/librte_mempool/rte_mempool_ops.c
> > > > > @@ -22,7 +22,7 @@ struct rte_mempool_ops_table rte_mempool_ops_table = {
> > > > >  rte_mempool_register_ops(const struct rte_mempool_ops *h)
> > > > >  {
> > > > >         struct rte_mempool_ops *ops;
> > > > > -       int16_t ops_index;
> > > > > +       unsigned ops_index, i;
> > > > >
> > > > >         rte_spinlock_lock(&rte_mempool_ops_table.sl);
> > > > >
> > > > > @@ -50,7 +50,19 @@ struct rte_mempool_ops_table rte_mempool_ops_table = {
> > > > >                 return -EEXIST;
> > > > >         }
> > > > >
> > > > > -       ops_index = rte_mempool_ops_table.num_ops++;
> > > > > +       /* sort the rte_mempool_ops by name. the order of the mempool
> > > > > +        * lib initiation will not affect rte_mempool_ops index. */
> > > > +1 for the fix.
> > > > For the implementation, why not use qsort_r() for sorting?
> > > The implementation is easy, and the number of mempool drivers is not
> > > too large. But we can use qsort_r to implement it.
> > Since it is in a slow path, IMO, better to use standard sort functions
> > for better readability.
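
For illustration only: the link-order problem described in the commit message,
and the effect of sorting the ops table by name, can be reproduced with a small
standalone program. This is not DPDK code; the fake_ops struct, the helper
functions, and the two-entry tables are invented for the example, and only the
sort-by-name idea matches the patch under discussion.

/*
 * Standalone illustration -- not DPDK code. Two processes that register
 * the same ops in a different link order end up with different indices
 * for the same name; sorting the table by name makes the index the same
 * in both. struct fake_ops and the names are stand-ins for rte_mempool_ops.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct fake_ops { char name[32]; };

static int
cmp_ops(const void *a, const void *b)
{
        const struct fake_ops *m_a = a;
        const struct fake_ops *m_b = b;

        return strcmp(m_a->name, m_b->name);
}

static int
index_of(const struct fake_ops *tbl, int n, const char *name)
{
        int i;

        for (i = 0; i < n; i++)
                if (strcmp(tbl[i].name, name) == 0)
                        return i;
        return -1;
}

int
main(void)
{
        /* process A links "bucket" first, process B links "ring_mp_mc" first */
        struct fake_ops proc_a[] = { { "bucket" }, { "ring_mp_mc" } };
        struct fake_ops proc_b[] = { { "ring_mp_mc" }, { "bucket" } };

        printf("before sort: ring_mp_mc is index %d in A, index %d in B\n",
               index_of(proc_a, 2, "ring_mp_mc"),
               index_of(proc_b, 2, "ring_mp_mc"));

        /* sort both tables by name, as the patch does for the ops table */
        qsort(proc_a, 2, sizeof(proc_a[0]), cmp_ops);
        qsort(proc_b, 2, sizeof(proc_b[0]), cmp_ops);

        printf("after sort:  ring_mp_mc is index %d in A, index %d in B\n",
               index_of(proc_a, 2, "ring_mp_mc"),
               index_of(proc_b, 2, "ring_mp_mc"));
        return 0;
}

Before sorting, "ring_mp_mc" sits at index 1 in one table and index 0 in the
other; after sorting by name it sits at index 1 in both, which is the
deterministic behavior the patch is after.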
> Agree, can you help me review the patch:
>
> diff --git a/lib/librte_mempool/rte_mempool_ops.c b/lib/librte_mempool/rte_mempool_ops.c
> index 22c5251..1acee58 100644
> --- a/lib/librte_mempool/rte_mempool_ops.c
> +++ b/lib/librte_mempool/rte_mempool_ops.c
> @@ -17,6 +17,15 @@ struct rte_mempool_ops_table rte_mempool_ops_table = {
>         .num_ops = 0
>  };
>
> +static int
> +compare_mempool_ops(const void *a, const void *b)
> +{
> +       const struct rte_mempool_ops *m_a = a;
> +       const struct rte_mempool_ops *m_b = b;
> +
> +       return strcmp(m_a->name, m_b->name);
> +}
> +
>  /* add a new ops struct in rte_mempool_ops_table, return its index. */
>  int
>  rte_mempool_register_ops(const struct rte_mempool_ops *h)
> @@ -63,6 +72,9 @@ struct rte_mempool_ops_table rte_mempool_ops_table = {
>         ops->get_info = h->get_info;
>         ops->dequeue_contig_blocks = h->dequeue_contig_blocks;
>
> +       qsort(rte_mempool_ops_table.ops, rte_mempool_ops_table.num_ops,
> +               sizeof(rte_mempool_ops_table.ops[0]), compare_mempool_ops);

Looks good. Not tested.
Please check the qsort behavior for the rte_mempool_ops_table.num_ops == 0
case (a quick standalone check is sketched after the quoted text below).

> +
>         rte_spinlock_unlock(&rte_mempool_ops_table.sl);
>
>         return ops_index;
>
> >
> > > > >
> > > > > +       ops_index = rte_mempool_ops_table.num_ops;
> > > > > +       for (i = 0; i < rte_mempool_ops_table.num_ops; i++) {
> > > > > +               if (strcmp(h->name, rte_mempool_ops_table.ops[i].name) < 0) {
> > > > > +                       do {
> > > > > +                               rte_mempool_ops_table.ops[ops_index] =
> > > > > +                                       rte_mempool_ops_table.ops[ops_index - 1];
> > > > > +                       } while (--ops_index > i);
> > > > > +                       break;
> > > > > +               }
> > > > > +       }
> > > > > +
> > > > >         ops = &rte_mempool_ops_table.ops[ops_index];
> > > > >         strlcpy(ops->name, h->name, sizeof(ops->name));
> > > > >         ops->alloc = h->alloc;
> > > > > @@ -63,6 +75,8 @@ struct rte_mempool_ops_table rte_mempool_ops_table = {
> > > > >         ops->get_info = h->get_info;
> > > > >         ops->dequeue_contig_blocks = h->dequeue_contig_blocks;
> > > > >
> > > > > +       rte_mempool_ops_table.num_ops++;
> > > > > +
> > > > >         rte_spinlock_unlock(&rte_mempool_ops_table.sl);
> > > > >
> > > > >         return ops_index;
> > > > > --
> > > > > 1.8.3.1
> > > >
> > >
> > > --
> > > Thanks,
> > > Tonghao
> >
>
> --
> Thanks,
> Tonghao
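
On the num_ops == 0 question above, a quick standalone check (again not DPDK
code; the fake_ops struct and names are invented for the example) suggests
there is nothing to guard: qsort() with an element count of zero has no
elements to compare, so the comparator is not expected to be called and the
array is left untouched. In the v2 diff above, num_ops also appears to be
incremented earlier in rte_mempool_register_ops() (the "num_ops++" line from
the base code, removed only in the v1 patch), so the qsort() call there should
never actually see a count of zero.

/*
 * Standalone check of the count == 0 case -- not DPDK code; struct fake_ops
 * and the names are invented for the example. With a valid base pointer and
 * an element count of 0, qsort() has nothing to compare: the comparator is
 * not called and the array is not rearranged.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct fake_ops { char name[32]; };

static int
cmp_ops(const void *a, const void *b)
{
        puts("comparator called");        /* not expected for count == 0 */
        return strcmp(((const struct fake_ops *)a)->name,
                      ((const struct fake_ops *)b)->name);
}

int
main(void)
{
        struct fake_ops table[4] = { { "ring_mp_mc" }, { "bucket" } };

        qsort(table, 0, sizeof(table[0]), cmp_ops);   /* count 0: no-op */
        printf("after count-0 sort: %s, %s\n", table[0].name, table[1].name);

        qsort(table, 2, sizeof(table[0]), cmp_ops);   /* normal sort by name */
        printf("after count-2 sort: %s, %s\n", table[0].name, table[1].name);
        return 0;
}

The first printf still shows the original order with no comparator output; the
second shows "bucket" moved ahead of "ring_mp_mc".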