From: Shreyansh Jain <shreyansh.jain@nxp.com>
To: "Ananyev, Konstantin" <konstantin.ananyev@intel.com>,
"Ruifeng Wang (Arm Technology China)" <Ruifeng.Wang@arm.com>,
"dev@dpdk.org" <dev@dpdk.org>
Cc: nd <nd@arm.com>
Subject: Re: [dpdk-dev] [PATCH] examples/l3fwd: support separate buffer pool per port
Date: Tue, 16 Apr 2019 16:00:44 +0000
Message-ID: <VI1PR04MB46882E992789C61F624A8DCC90240@VI1PR04MB4688.eurprd04.prod.outlook.com>

Hi Ananyev,
[...]
> > As you have stated below, it's just the same thing with two different
> > views.
> >
> > > I think it would be plausible for both cases:
> > > - one port per core (your case).
> > > - multiple ports per core.
> >
> > Indeed. For this particular patch, I just chose the first one.
> > Probably because that is the most general use-case I come across.
> > I am sure the second too has equal number of possible use-cases - but
> > probably someone with access to that kind of scenario would be
> > better suited for validating what is the performance increase.
> > Do you think it would be OK to have that in and then sometime in
> > future enable the second option?
>
> What I am trying to say - if we'll have mempool per lcore (not per
> port), then it would cover both cases above.
> So wouldn't need to make extra changes.
> Konstantin
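For reference, a minimal sketch of what the mempool-per-lcore scheme described above could look like. Only rte_pktmbuf_pool_create() and rte_lcore_to_socket_id() are real DPDK APIs here; NB_MBUF, MEMPOOL_CACHE_SIZE, lcore_pool[] and the function name are illustrative assumptions, not code from the patch:

    #include <stdio.h>
    #include <rte_lcore.h>
    #include <rte_mbuf.h>
    #include <rte_mempool.h>

    #define NB_MBUF             8192  /* illustrative pool size */
    #define MEMPOOL_CACHE_SIZE  256   /* illustrative per-lcore cache */

    static struct rte_mempool *lcore_pool[RTE_MAX_LCORE];

    static int
    init_pools_per_lcore(void)
    {
            unsigned int lcore_id;
            char name[RTE_MEMPOOL_NAMESIZE];

            RTE_LCORE_FOREACH(lcore_id) {
                    snprintf(name, sizeof(name), "mbuf_pool_c%u", lcore_id);
                    /* One pool per polling lcore, on its local socket.
                     * Every RX queue this lcore serves allocates from it,
                     * whichever port the queue belongs to - so it covers
                     * both one-port-per-core and many-ports-per-core. */
                    lcore_pool[lcore_id] = rte_pktmbuf_pool_create(name,
                            NB_MBUF, MEMPOOL_CACHE_SIZE, 0,
                            RTE_MBUF_DEFAULT_BUF_SIZE,
                            rte_lcore_to_socket_id(lcore_id));
                    if (lcore_pool[lcore_id] == NULL)
                            return -1;
            }
            return 0;
    }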
What you are suggesting would end up as a 1:N mapping of port:pool (when multiple queues are used for a port, each affined to a different core). In my observation, or rather in the cases I generally see, that would end up reducing performance, especially since hardware pools work best when pool and port are co-located.
At least for me, this option of setting buffer pools against lcores in l3fwd is NOT the preferred use-case. That leads me to conclude that we would anyway need both mappings, pool-per-port and pool-per-core, to cover the larger set of use-cases (at least, yours and mine).
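For contrast, a minimal sketch of the pool-per-port mapping argued for above. Again, only rte_pktmbuf_pool_create() and rte_eth_dev_socket_id() are real DPDK calls; the sizing constants, port_pool[] and the function name are illustrative:

    #include <stdio.h>
    #include <rte_ethdev.h>
    #include <rte_mbuf.h>
    #include <rte_mempool.h>

    #define NB_MBUF             8192  /* illustrative pool size */
    #define MEMPOOL_CACHE_SIZE  256   /* illustrative cache size */

    static struct rte_mempool *port_pool[RTE_MAX_ETHPORTS];

    static int
    init_pools_per_port(uint16_t nb_ports)
    {
            uint16_t portid;
            char name[RTE_MEMPOOL_NAMESIZE];

            for (portid = 0; portid < nb_ports; portid++) {
                    snprintf(name, sizeof(name), "mbuf_pool_p%u",
                            (unsigned int)portid);
                    /* One pool per port, created on the NUMA socket of
                     * the port's device, so a hardware-backed mempool
                     * stays co-located with the port it feeds. */
                    port_pool[portid] = rte_pktmbuf_pool_create(name,
                            NB_MBUF, MEMPOOL_CACHE_SIZE, 0,
                            RTE_MBUF_DEFAULT_BUF_SIZE,
                            rte_eth_dev_socket_id(portid));
                    if (port_pool[portid] == NULL)
                            return -1;
            }
            return 0;
    }

The two sketches differ only in the loop variable and the socket hint: per-port creation lets a hardware mempool sit local to the device it feeds, while per-lcore creation keeps buffers local to the polling core across all of its queues.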
Regards,
Shreyansh
Thread overview: 27+ messages
2019-04-16 16:00 Shreyansh Jain [this message]
2019-04-17 11:21 ` Ananyev, Konstantin
-- strict thread matches above, loose matches on Subject: below --
2019-04-16 12:47 Shreyansh Jain
2019-04-16 12:54 ` Ananyev, Konstantin
2019-04-15 10:29 Shreyansh Jain
2019-04-15 6:48 Shreyansh Jain
2019-04-15 7:58 ` Ruifeng Wang (Arm Technology China)
2019-01-03 11:30 Shreyansh Jain
2019-04-04 11:54 ` Hemant Agrawal
2019-04-08 6:10 ` Ruifeng Wang (Arm Technology China)
2019-04-08 9:29 ` Ananyev, Konstantin
2019-04-12 9:24 ` Shreyansh Jain
2019-04-14 9:13 ` Ruifeng Wang (Arm Technology China)
2019-04-15 12:05 ` Ananyev, Konstantin