From: Jerin Jacob
Date: Mon, 2 Mar 2020 19:15:11 +0530
Subject: Re: [dpdk-dev] [PATCH] mempool: sort the rte_mempool_ops by name
To: xiangxia.m.yue@gmail.com
Cc: dpdk-dev, Olivier Matz, Andrew Rybchenko, Gage Eads,
    "Artem V. Andreev", Jerin Jacob, Nithin Dabilpuram,
    Vamsi Attunuru, Hemant Agrawal
In-Reply-To: <1583114253-15345-1-git-send-email-xiangxia.m.yue@gmail.com>
List-Id: DPDK patches and discussions <dev@dpdk.org>

On Mon, Mar 2, 2020 at 7:27 AM <xiangxia.m.yue@gmail.com> wrote:
>
> From: Tonghao Zhang <xiangxia.m.yue@gmail.com>
>
> The order of mempool initialization affects the mempool index in
> rte_mempool_ops_table. For example, when building apps with:
>
>     $ gcc -lrte_mempool_bucket -lrte_mempool_ring ...
>
> the "bucket" mempool is registered first and its index in the table
> is 0, while the index of the "ring" mempool is 1. DPDK uses
> mk/rte.app.mk to build apps, while others, for example Open vSwitch,
> link against libdpdk.a or libdpdk.so instead. The link order of the
> mempool libraries in DPDK and Open vSwitch therefore differs.
>
> A mempool can be shared between primary and secondary processes,
> such as dpdk-pdump and pdump-pmd/Open vSwitch (pdump enabled).
> There will be a crash because dpdk-pdump creates the "ring_mp_mc"
> ring, whose index in its table is 0, while in Open vSwitch index 0
> belongs to the "bucket" ops. If Open vSwitch uses index 0 to look up
> the mempool ops and allocate memory from the mempool, the crash
> occurs:
>
>   bucket_dequeue (accesses NULL and crashes)
>   rte_mempool_get_ops (should get "ring_mp_mc",
>                        but gets the "bucket" mempool ops)
>   rte_mempool_ops_dequeue_bulk
>   ...
>   rte_pktmbuf_alloc
>   rte_pktmbuf_copy
>   pdump_copy
>   pdump_rx
>   rte_eth_rx_burst
>
> To avoid the crash, there are several possible solutions:
>
> * Constructor priority: give each mempool driver a different
>   priority in RTE_INIT, but that is not easy to maintain.
>
> * Change mk/rte.app.mk: make its link order the same as
>   libdpdk.a/libdpdk.so, but whenever a new mempool driver is added
>   we must take care of the order again.
>
> * Register mempools in sorted order: sort the ops by name at
>   registration time, so the link order of the libraries no longer
>   affects the index in the mempool ops table.
>
> Signed-off-by: Tonghao Zhang <xiangxia.m.yue@gmail.com>
> ---
>  lib/librte_mempool/rte_mempool_ops.c | 18 ++++++++++++++++--
>  1 file changed, 16 insertions(+), 2 deletions(-)
>
> diff --git a/lib/librte_mempool/rte_mempool_ops.c b/lib/librte_mempool/rte_mempool_ops.c
> index 22c5251..06dfe16 100644
> --- a/lib/librte_mempool/rte_mempool_ops.c
> +++ b/lib/librte_mempool/rte_mempool_ops.c
> @@ -22,7 +22,7 @@ struct rte_mempool_ops_table rte_mempool_ops_table = {
>  rte_mempool_register_ops(const struct rte_mempool_ops *h)
>  {
>  	struct rte_mempool_ops *ops;
> -	int16_t ops_index;
> +	unsigned ops_index, i;
>
>  	rte_spinlock_lock(&rte_mempool_ops_table.sl);
>
> @@ -50,7 +50,19 @@ struct rte_mempool_ops_table rte_mempool_ops_table = {
>  		return -EEXIST;
>  	}
>
> -	ops_index = rte_mempool_ops_table.num_ops++;
> +	/* Sort the rte_mempool_ops by name, so the order of mempool
> +	 * library initialization will not affect the rte_mempool_ops
> +	 * index. */

+1 for the fix. For the implementation, why not use qsort_r() for sorting?

> +	ops_index = rte_mempool_ops_table.num_ops;
> +	for (i = 0; i < rte_mempool_ops_table.num_ops; i++) {
> +		if (strcmp(h->name, rte_mempool_ops_table.ops[i].name) < 0) {
> +			do {
> +				rte_mempool_ops_table.ops[ops_index] =
> +					rte_mempool_ops_table.ops[ops_index - 1];
> +			} while (--ops_index > i);
> +			break;
> +		}
> +	}
> +
>  	ops = &rte_mempool_ops_table.ops[ops_index];
>  	strlcpy(ops->name, h->name, sizeof(ops->name));
>  	ops->alloc = h->alloc;
> @@ -63,6 +75,8 @@ struct rte_mempool_ops_table rte_mempool_ops_table = {
>  	ops->get_info = h->get_info;
>  	ops->dequeue_contig_blocks = h->dequeue_contig_blocks;
>
> +	rte_mempool_ops_table.num_ops++;
> +
>  	rte_spinlock_unlock(&rte_mempool_ops_table.sl);
>
>  	return ops_index;
> --
> 1.8.3.1
>
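
To make the root cause concrete, a minimal standalone sketch (not DPDK
code; ops_names, register_ops and the constructor functions are
hypothetical stand-ins for rte_mempool_ops_table and
RTE_MEMPOOL_REGISTER_OPS). In real DPDK the two constructors live in
separate driver libraries, so their run order, and hence each ops
index, follows the binary's link order; they share one file here only
for brevity:

    /* Hypothetical stand-in for the mempool ops table; not DPDK code. */
    #include <stdio.h>
    #include <string.h>

    static char ops_names[16][32];
    static unsigned int num_ops;

    static int
    register_ops(const char *name)
    {
    	/* Mimics rte_mempool_register_ops(): append, return the index. */
    	strncpy(ops_names[num_ops], name, sizeof(ops_names[0]) - 1);
    	return (int)num_ops++;
    }

    /* In DPDK, each of these would be an RTE_MEMPOOL_REGISTER_OPS
     * constructor in its own library, run in link order. */
    __attribute__((constructor)) static void bucket_init(void)
    {
    	register_ops("bucket");
    }

    __attribute__((constructor)) static void ring_init(void)
    {
    	register_ops("ring_mp_mc");
    }

    int
    main(void)
    {
    	unsigned int i;

    	/* A secondary process that recorded "ring_mp_mc" at index 0
    	 * would invoke "bucket"'s callbacks here if the primary was
    	 * linked the other way around. */
    	for (i = 0; i < num_ops; i++)
    		printf("index %u: %s\n", i, ops_names[i]);
    	return 0;
    }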
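
And as a rough sketch of the qsort()-based alternative suggested above
(illustrative only; compare_ops_name and register_ops_sorted are
made-up names and the table is simplified): append the new entry,
re-sort the whole table by name, and return the new entry's post-sort
index. Since the comparator needs no extra state, plain qsort() is
enough; qsort_r() would only matter if context had to be passed in:

    #include <stdlib.h>
    #include <string.h>

    #define MAX_OPS  16
    #define NAMESIZE 32

    struct ops_entry {
    	char name[NAMESIZE];
    	/* function pointers elided for brevity */
    };

    static struct ops_entry ops_table[MAX_OPS];
    static unsigned int num_ops;

    static int
    compare_ops_name(const void *a, const void *b)
    {
    	const struct ops_entry *oa = a;
    	const struct ops_entry *ob = b;

    	return strcmp(oa->name, ob->name);
    }

    static int
    register_ops_sorted(const char *name)
    {
    	unsigned int i;

    	/* Append, then keep the whole table sorted by name. */
    	strncpy(ops_table[num_ops].name, name, NAMESIZE - 1);
    	num_ops++;
    	qsort(ops_table, num_ops, sizeof(ops_table[0]), compare_ops_name);

    	/* Return the post-sort index of the entry just added. */
    	for (i = 0; i < num_ops; i++)
    		if (strcmp(ops_table[i].name, name) == 0)
    			return (int)i;
    	return -1;
    }

Either way (insertion sort at registration or a full re-sort), the
resulting index depends only on the set of registered names rather
than on link order, and since registration runs from constructors
before any mempool records its ops index, re-shuffling the table at
that point should be safe.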