DPDK usage discussions
From: Olivier Matz <olivier.matz@6wind.com>
To: Venumadhav Josyula <vjosyula@gmail.com>
Cc: users@dpdk.org, dev@dpdk.org,
	Venumadhav Josyula <vjosyula@parallelwireless.com>
Subject: Re: [dpdk-users] [dpdk-dev] time taken for allocation of mempool.
Date: Wed, 13 Nov 2019 10:30:03 +0100
Message-ID: <20191113093003.GD4841@platinum>
In-Reply-To: <CA+i0PGV3SfNVP0hb8nhMH693e-DUZTgxZeoxrO4wVzMurPv31Q@mail.gmail.com>

Hi Venu,

On Wed, Nov 13, 2019 at 02:41:04PM +0530, Venumadhav Josyula wrote:
> Hi Olivier,
> 
> > Could you give some more details about your use case? (hugepage size,
> > number of objects, object size, additional mempool flags, ...)
> 
> Ours is a telecom product; we support multiple RATs (radio access
> technologies). Let us take the example of the 4G case, where we act as a
> GTP-U proxy.
> 
> - Hugepage size: 2 MB
> 
> - rte_mempool_create() parameters:
> 
>     { name="gtpu-mem",
>       n=1500000,
>       elt_size=224,
>       cache_size=0,
>       private_data_size=0,
>       mp_init=NULL,
>       mp_init_arg=NULL,
>       obj_init=NULL,
>       obj_init_arg=NULL,
>       socket_id=rte_socket_id(),
>       flags=MEMPOOL_F_SP_PUT }
> 

OK, those are quite big mempools (over 300 MB each), but I don't think
creating one should take that much time.
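
For reference, a rough footprint estimate (ignoring per-object
header/trailer overhead and padding):

	1,500,000 objects * 224 B/object ~= 336 MB per mempool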

I suspect that using 1G hugepages could help, in case the slowdown is
related to the memory allocator.
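
If you want to check how the mempool memory actually ends up being laid
out (how many chunks it is built from, and how big they are), here is a
rough sketch (untested) using rte_mempool_mem_iter(), to be run after the
pool has been created:

	static void
	dump_chunk(struct rte_mempool *mp, void *opaque,
		   struct rte_mempool_memhdr *memhdr, unsigned mem_idx)
	{
		unsigned *n_chunks = opaque;

		(*n_chunks)++;
		printf("%s chunk %u: addr=%p len=%zu\n",
		       mp->name, mem_idx, memhdr->addr, memhdr->len);
	}

	/* ... after rte_mempool_create() succeeds ... */
	unsigned n_chunks = 0;

	rte_mempool_mem_iter(mp, dump_chunk, &n_chunks);
	printf("%s is backed by %u memory chunk(s)\n", mp->name, n_chunks);

With 2MB hugepages I would expect many small chunks; with 1GB hugepages,
only a few large ones.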

> > Did you manage to reproduce it in a small test example? We could do some
> > profiling to investigate.
> 
> No. I would love to try that. Are there examples?

The simplest way for me is to hack the unit tests. Add this code (not
tested) at the beginning of test_mempool.c:test_mempool():

	int i;

	/* create and free a large mempool repeatedly, so that the
	 * allocation cost dominates the profile */
	for (i = 0; i < 100; i++) {
		struct rte_mempool *mp;

		/* roughly your production parameters: 1.5M objects of
		 * 224 bytes, no per-lcore cache, MEMPOOL_F_SP_PUT */
		mp = rte_mempool_create("test", 1500000,
					224, 0, 0, NULL, NULL,
					NULL, NULL, SOCKET_ID_ANY,
					MEMPOOL_F_SP_PUT);
		if (mp == NULL) {
			printf("rte_mempool_create() failed\n");
			return -1;
		}
		rte_mempool_free(mp);
	}
	return 0;
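
If you also want per-call numbers without a profiler, you could time each
call with the TSC helpers from rte_cycles.h, something like this inside
the loop (again, not tested):

	uint64_t start, end;

	start = rte_get_timer_cycles();
	mp = rte_mempool_create("test", 1500000,
				224, 0, 0, NULL, NULL,
				NULL, NULL, SOCKET_ID_ANY,
				MEMPOOL_F_SP_PUT);
	end = rte_get_timer_cycles();
	if (mp == NULL) {
		printf("rte_mempool_create() failed\n");
		return -1;
	}
	/* convert TSC cycles to seconds */
	printf("iteration %d: create took %.3f s\n", i,
	       (double)(end - start) / rte_get_timer_hz());
	rte_mempool_free(mp);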

Then, you can launch the test application and run your test with
"mempool_autotest". I suggest compiling with EXTRA_CFLAGS="-g", so you
can run "perf top" (https://perf.wiki.kernel.org/index.php/Main_Page) to
see where the time is spent. With "perf record -g" and "perf report" you
can also analyze the call stacks.

Please share your results, especially the comparison between 17.05 and 18.11.

Thanks,
Olivier


> Thanks,
> Regards,
> Venu
> 
> On Wed, 13 Nov 2019 at 14:02, Olivier Matz <olivier.matz@6wind.com> wrote:
> 
> > Hi Venu,
> >
> > On Wed, Nov 13, 2019 at 10:42:07AM +0530, Venumadhav Josyula wrote:
> > > Hi,
> > >
> > > A few more points:
> > >
> > > Operating system : CentOS 7.6
> > > Logging mechanism: syslog
> > >
> > > We have logged via syslog immediately before and after the call.
> > >
> > > Thanks & Regards
> > > Venu
> > >
> > > On Wed, 13 Nov 2019 at 10:37, Venumadhav Josyula <vjosyula@gmail.com>
> > > wrote:
> > >
> > > > Hi,
> > > > We are using 'rte_mempool_create' for allocation of flow memory. This
> > > > has been there for a while. We just migrated to dpdk-18.11 from
> > > > dpdk-17.05. Now here is the problem statement.
> > > >
> > > > Problem statement:
> > > > In the new DPDK (18.11), 'rte_mempool_create' takes approximately
> > > > 4.4 seconds for an allocation, compared to the older DPDK (17.05).
> > > > We have some 8-9 mempools for our entire product. We allocate all of
> > > > them up front (i.e. when the DPDK application comes up). Our
> > > > application follows a run-to-completion model.
> > > >
> > > > Questions:
> > > > i)  Is that acceptable / has anybody seen such a thing?
> > > > ii) What has changed between the two DPDK versions (18.11 vs. 17.05)
> > > > from a memory perspective?
> >
> > Could you give some more details about your use case? (hugepage size, number
> > of objects, object size, additional mempool flags, ...)
> >
> > Did you manage to reproduce it in a small test example? We could do some
> > profiling to investigate.
> >
> > Thanks for the feedback.
> > Olivier
> >


Thread overview: 18+ messages
2019-11-13  5:07 [dpdk-users] " Venumadhav Josyula
2019-11-13  5:12 ` Venumadhav Josyula
2019-11-13  8:32   ` [dpdk-users] [dpdk-dev] " Olivier Matz
2019-11-13  9:11     ` Venumadhav Josyula
2019-11-13  9:30       ` Olivier Matz [this message]
2019-11-13  9:19 ` Bruce Richardson
2019-11-13 17:26   ` Burakov, Anatoly
2019-11-13 21:01     ` Venumadhav Josyula
2019-11-14  9:44       ` Burakov, Anatoly
2019-11-14  9:50         ` Venumadhav Josyula
2019-11-14  9:57           ` Burakov, Anatoly
2019-11-18 16:43             ` Venumadhav Josyula
2019-12-06 10:47               ` Burakov, Anatoly
2019-12-06 10:49                 ` Venumadhav Josyula
2019-11-14  8:12     ` Venumadhav Josyula
2019-11-14  9:49       ` Burakov, Anatoly
2019-11-14  9:53         ` Venumadhav Josyula
2019-11-18 16:45 ` [dpdk-users] " Venumadhav Josyula
