From: Tonghao Zhang
Date: Tue, 24 Mar 2020 20:41:35 +0800
In-Reply-To: <21e64c42-3fb2-a7cf-41c4-8df951b467f9@solarflare.com>
To: Andrew Rybchenko
Cc: Olivier Matz, Jerin Jacob, dpdk-dev, Gage Eads, "Artem V.
 Andreev", Jerin Jacob, Nithin Dabilpuram, Vamsi Attunuru, Hemant Agrawal
Subject: Re: [dpdk-dev] [PATCH dpdk-dev v3] mempool: sort the rte_mempool_ops by name
List-Id: DPDK patches and discussions

On Tue, Mar 24, 2020 at 5:36 PM Andrew Rybchenko wrote:
>
> On 3/9/20 11:27 AM, Olivier Matz wrote:
> > Hi,
> >
> > On Mon, Mar 09, 2020 at 11:01:25AM +0800, Tonghao Zhang wrote:
> >> On Sat, Mar 7, 2020 at 8:54 PM Andrew Rybchenko wrote:
> >>>
> >>> On 3/7/20 3:51 PM, Andrew Rybchenko wrote:
> >>>> On 3/6/20 4:37 PM, Jerin Jacob wrote:
> >>>>> On Fri, Mar 6, 2020 at 7:06 PM wrote:
> >>>>>> From: Tonghao Zhang
> >>>>>>
> >>>>>> The order of mempool initialization affects the mempool index in
> >>>>>> rte_mempool_ops_table. For example, when building apps with:
> >>>>>>
> >>>>>> $ gcc -lrte_mempool_bucket -lrte_mempool_ring ...
> >>>>>>
> >>>>>> the "bucket" mempool is registered first, so its index in the
> >>>>>> table is 0, while the index of the "ring" mempool is 1. DPDK
> >>>>>> uses mk/rte.app.mk to build its apps, while others, for example
> >>>>>> Open vSwitch, link against libdpdk.a or libdpdk.so. The link
> >>>>>> order of the mempool libs in DPDK and in Open vSwitch differs.
> >>>>>>
> >>>>>> A mempool can be shared between primary and secondary processes,
> >>>>>> e.g. dpdk-pdump and a pdump-enabled PMD/Open vSwitch. A crash
> >>>>>> occurs because dpdk-pdump creates the "ring_mp_mc" ring, whose
> >>>>>> index in its table is 0, while in Open vSwitch index 0 belongs
> >>>>>> to the "bucket" ops. When Open vSwitch uses index 0 to look up
> >>>>>> the mempool ops and allocate memory from the mempool, it crashes:
> >>>>>>
> >>>>>> bucket_dequeue (access null and crash)
> >>>>>> rte_mempool_get_ops (should get "ring_mp_mc",
> >>>>>>                      but gets "bucket" ops)
> >>>>>> rte_mempool_ops_dequeue_bulk
> >>>>>> ...
> >>>>>> rte_pktmbuf_alloc
> >>>>>> rte_pktmbuf_copy
> >>>>>> pdump_copy
> >>>>>> pdump_rx
> >>>>>> rte_eth_rx_burst
> >>>>>>
> >>>>>> To avoid the crash, there are several possible solutions:
> >>>>>> * constructor priority: different mempools use different
> >>>>>>   priorities in RTE_INIT, but that is not easy to maintain.
> >>>>>>
> >>>>>> * change mk/rte.app.mk: change the order in mk/rte.app.mk to
> >>>>>>   match libdpdk.a/libdpdk.so, but every time a new mempool
> >>>>>>   driver is added we must take care to keep the order in sync.
> >>>>>>
> >>>>>> * register mempool ops in sorted order: sort the ops by name at
> >>>>>>   registration time, so the link order no longer affects the
> >>>>>>   index in the mempool ops table.
> >>>>>>
> >>>>>> Signed-off-by: Tonghao Zhang
> >>>>>> Acked-by: Olivier Matz
> >>>>> Acked-by: Jerin Jacob
> >>>>
> >>>> The patch is OK, but the fact that the ops index changes during
> >>>> the mempool driver lifetime is frightening. In fact it breaks
> >>>> the rte_mempool_register_ops() return value semantics (read as
> >>>> an API break). The return value is not used in DPDK, but it is
> >>>> a public function. If I'm not mistaken, this should be taken
> >>>> into account.
> >
> > Good points.
> >
> > The fact that the ops index changes during the mempool driver
> > lifetime is indeed frightening, especially knowing that this is a
> > dynamic registration that could happen at any moment in the life of
> > the application. Also, breaking the ABI is not desirable.
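(To make the failure mode concrete: the ops table is per-process, and the
lookup is by raw index. Below is a simplified sketch after rte_mempool.h;
the *_sketch names are made up for illustration, not real DPDK symbols:

    #include <rte_mempool.h>

    static inline struct rte_mempool_ops *
    mempool_get_ops_sketch(int ops_index)
    {
            /* rte_mempool_ops_table is a per-process global. ops_index
             * was stored in the shared struct rte_mempool by the process
             * that created the pool (e.g. dpdk-pdump), but it indexes
             * this process's private table: with a different link order,
             * slot 0 is "ring_mp_mc" in one process and "bucket" in the
             * other. */
            return &rte_mempool_ops_table.ops[ops_index];
    }

    static inline int
    mempool_deq_sketch(struct rte_mempool *mp, void **obj_table,
                       unsigned int n)
    {
            struct rte_mempool_ops *ops =
                    mempool_get_ops_sketch(mp->ops_index);

            /* Mismatched ops here means e.g. bucket_dequeue() running on
             * a pool laid out by the ring ops, hence the crash above. */
            return ops->dequeue(mp, obj_table, n);
    }

Sorting by name makes the index a function of the registered names only,
so every process computes the same table regardless of link order.)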
> >
> > Let me try to propose something else to solve your issue:
> >
> > 1/ At init, the primary process allocates a struct in shared memory
> >    (named memzone):
> >
> >    struct rte_mempool_shared_ops {
> >            size_t num_mempool_ops;
> >            struct {
> >                    char name[RTE_MEMPOOL_OPS_NAMESIZE];
> >            } mempool_ops[RTE_MEMPOOL_MAX_OPS_IDX];
> >            char *mempool_ops_name[RTE_MEMPOOL_MAX_OPS_IDX];
> >            rte_spinlock_t mempool;
> >    };
> >
> > 2/ When we register a mempool ops, we first get a name and id from
> >    the shared struct: with the lock held, look up the registered
> >    name and return its index; otherwise take the next free id and
> >    copy the name into the struct.
> >
> > 3/ Then do as before (in the per-process global table), except that
> >    we reuse the registered id.
> >
> > We can remove the num_ops field from rte_mempool_ops_table.
> >
> > Thoughts?
>
> I like the solution.

The patch will be sent, thanks.

-- 
Best regards, Tonghao
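P.S. A rough sketch of how step 2/ could look: the lookup-or-allocate under
the shared lock. The helper name mempool_ops_shared_get_id() is made up for
illustration; the fields are the ones from the struct above:

    #include <errno.h>
    #include <string.h>
    #include <rte_mempool.h>
    #include <rte_spinlock.h>
    #include <rte_string_fns.h>

    static int
    mempool_ops_shared_get_id(struct rte_mempool_shared_ops *shared,
                              const char *name)
    {
            size_t i;
            int id;

            rte_spinlock_lock(&shared->mempool);

            /* Already registered, possibly by another process: reuse
             * the index so all processes agree on the id for this name. */
            for (i = 0; i < shared->num_mempool_ops; i++) {
                    if (strcmp(shared->mempool_ops[i].name, name) == 0) {
                            rte_spinlock_unlock(&shared->mempool);
                            return (int)i;
                    }
            }

            /* First registration of this name: take the next free slot. */
            if (shared->num_mempool_ops >= RTE_MEMPOOL_MAX_OPS_IDX) {
                    rte_spinlock_unlock(&shared->mempool);
                    return -ENOSPC;
            }
            id = (int)shared->num_mempool_ops++;
            rte_strlcpy(shared->mempool_ops[id].name, name,
                        RTE_MEMPOOL_OPS_NAMESIZE);

            rte_spinlock_unlock(&shared->mempool);
            return id;
    }

rte_mempool_register_ops() would then store the ops at the returned id in
the per-process table (step 3/) instead of taking num_ops++.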