Date: Wed, 13 Nov 2019 10:30:03 +0100
From: Olivier Matz
To: Venumadhav Josyula
Cc: users@dpdk.org, dev@dpdk.org, Venumadhav Josyula
Message-ID: <20191113093003.GD4841@platinum>
References: <20191113083217.GC4841@platinum>
Subject: Re: [dpdk-dev] time taken for allocation of mempool.
List-Id: DPDK patches and discussions

Hi Venu,

On Wed, Nov 13, 2019 at 02:41:04PM +0530, Venumadhav Josyula wrote:
> Hi Olivier,
>
> > Could you give some more details about your use case? (hugepage size,
> > number of objects, object size, additional mempool flags, ...)
>
> Ours is a telecom product; we support multiple RATs. Let us take the
> example of a 4G case where we act as a GTP-U proxy.
>
> - Hugepage size: 2 MB
> - rte_mempool_create() parameters:
>     { name="gtpu-mem",
>       n=1500000,
>       elt_size=224,
>       cache_size=0,
>       private_data_size=0,
>       mp_init=NULL,
>       mp_init_arg=NULL,
>       obj_init=NULL,
>       obj_init_arg=NULL,
>       socket_id=rte_socket_id(),
>       flags=MEMPOOL_F_SP_PUT }

OK, those are quite big mempools (~300 MB), but I don't think it should
take that much time.
I suspect that using 1G hugepages could help, in case it is related to
the memory allocator.

> > Did you manage to reproduce it in a small test example? We could do
> > some profiling to investigate.
>
> No, I would love to try that. Are there examples?

The simplest way for me is to hack the unit tests. Add this code (not
tested) at the beginning of test_mempool.c:test_mempool():

	int i;

	for (i = 0; i < 100; i++) {
		struct rte_mempool *mp;

		mp = rte_mempool_create("test", 1500000, 224, 0, 0,
					NULL, NULL, NULL, NULL,
					SOCKET_ID_ANY, MEMPOOL_F_SP_PUT);
		if (mp == NULL) {
			printf("rte_mempool_create() failed\n");
			return -1;
		}
		rte_mempool_free(mp);
	}
	return 0;

Then, you can launch the test application and run your test with
"mempool_autotest". I suggest compiling with EXTRA_CFLAGS="-g", so you
can run "perf top" (https://perf.wiki.kernel.org/index.php/Main_Page)
to see where the time is spent. With "perf record" / "perf report" you
can also analyze the call stack.

Please share your results, especially a comparison between 17.05 and
18.11.

Thanks,
Olivier

> Thanks,
> Regards,
> Venu
>
> On Wed, 13 Nov 2019 at 14:02, Olivier Matz wrote:
>
> > Hi Venu,
> >
> > On Wed, Nov 13, 2019 at 10:42:07AM +0530, Venumadhav Josyula wrote:
> > > Hi,
> > >
> > > A few more points:
> > >
> > > Operating system: CentOS 7.6
> > > Logging mechanism: syslog
> > >
> > > We have logged using syslog before the call and after the call.
> > >
> > > Thanks & Regards
> > > Venu
> > >
> > > On Wed, 13 Nov 2019 at 10:37, Venumadhav Josyula wrote:
> > >
> > > > Hi,
> > > > We are using 'rte_mempool_create' for allocation of flow memory.
> > > > This has been there for a while. We just migrated to dpdk-18.11
> > > > from dpdk-17.05. Now here is the problem statement.
> > > >
> > > > Problem statement:
> > > > In the new dpdk (18.11), 'rte_mempool_create' takes approximately
> > > > ~4.4 sec for allocation compared to the older dpdk (17.05).
> > > > We have some 8-9 mempools for our entire product. We do upfront
> > > > allocation for all of them (i.e. when the dpdk application is
> > > > coming up). Our application is a run-to-completion model.
> > > >
> > > > Questions:
> > > > i) Is that acceptable / has anybody seen such a thing?
> > > > ii) What has changed between the two dpdk versions (18.11 vs
> > > > 17.05) from a memory perspective?
> >
> > Could you give some more details about your use case? (hugepage size,
> > number of objects, object size, additional mempool flags, ...)
> >
> > Did you manage to reproduce it in a small test example? We could do
> > some profiling to investigate.
> >
> > Thanks for the feedback.
> > Olivier