From: Stephen Hemminger <stephen@networkplumber.org>
To: Konstantin Ananyev <konstantin.v.ananyev@yandex.ru>
Cc: "Honnappa Nagarahalli" <Honnappa.Nagarahalli@arm.com>,
"Andrew Rybchenko" <andrew.rybchenko@oktetlabs.ru>,
"Morten Brørup" <mb@smartsharesystems.com>,
"Jerin Jacob" <jerinjacobk@gmail.com>, dpdk-dev <dev@dpdk.org>,
"techboard@dpdk.org" <techboard@dpdk.org>, nd <nd@arm.com>
Subject: Re: Optimizations are not features
Date: Mon, 4 Jul 2022 09:33:11 -0700 [thread overview]
Message-ID: <20220704093311.0582d592@hermes.local> (raw)
In-Reply-To: <91f748cd-14c1-91ca-befe-64db36789346@yandex.ru>
On Sun, 3 Jul 2022 20:38:21 +0100
Konstantin Ananyev <konstantin.v.ananyev@yandex.ru> wrote:
> >
> > The base/existing design for DPDK was done with one particular HW
> > architecture in mind, one with an abundance of resources. Unfortunately,
> > that HW landscape is evolving fast, and DPDK is being adopted in use cases
> > where those kinds of resources are not available. For example, efficiency
> > cores are now being introduced by every CPU vendor, and soon enough we will
> > see big.LITTLE-style architectures in networking as well. The existing PMD
> > design introduces 512B of stores (256B for copying to a stack variable and
> > 256B to store to the lcore cache), plus a 256B load/store on the RX side,
> > for every 32 packets, back to back. It doesn't make sense to do that kind
> > of memory copying on little/efficiency cores just for the driver code.
>
> I don't object to specific use-case optimizations,
> especially if the use case is a common one.
> But I think such changes have to be as transparent to the user as
> possible and shouldn't cause further DPDK code fragmentation
> (new CONFIG options, etc.).
> I understand that it is not always possible, but for pure SW-based
> optimizations, I think it is a reasonable expectation.
Great discussion.
Also, if you look back at the mailing list history, you can see that lots of users
use DPDK because it is "go fast" secret sauce and have no understanding of the internals.
My concern is that if one untestable optimization goes in for one hardware platform,
users will enable it all the time, thinking it makes any and all use cases faster.
Try explaining to a Linux user that the real-time kernel is *not* faster than
the normal kernel...
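
For readers who have not looked at the internals, the copies Honnappa describes
above can be pictured roughly as follows. This is a minimal, hypothetical sketch
of the pattern, not actual DPDK driver or rte_mempool code; the type and function
names (lcore_cache, tx_free_burst, rx_alloc_burst) are made up for illustration,
and it assumes 8-byte pointers and a 32-packet burst.

/*
 * Illustrative sketch only (not DPDK code): a burst of 32 mbuf pointers is
 * first gathered into a stack-local array, then copied again into the
 * per-lcore mempool cache.  With 8-byte pointers that is 2 * 32 * 8 = 512
 * bytes of stores per 32 packets, plus another 256 bytes copied back out
 * when the RX side allocates a fresh burst from the cache.
 */
#include <stdint.h>
#include <string.h>

#define BURST      32
#define CACHE_SIZE 512

struct mbuf;                        /* stand-in for struct rte_mbuf */

struct lcore_cache {                /* stand-in for the per-lcore mempool cache */
	uint32_t len;
	struct mbuf *objs[CACHE_SIZE];
};

static void
tx_free_burst(struct lcore_cache *cache, struct mbuf * const *completed)
{
	struct mbuf *stack_copy[BURST];

	/* 1st copy: 32 * 8 = 256 B stored to a stack-local array */
	memcpy(stack_copy, completed, sizeof(stack_copy));

	/* 2nd copy: another 256 B stored into the lcore cache */
	memcpy(&cache->objs[cache->len], stack_copy, sizeof(stack_copy));
	cache->len += BURST;
}

static void
rx_alloc_burst(struct lcore_cache *cache, struct mbuf **rx_ring)
{
	/* RX refill: 256 B loaded from the cache, 256 B stored to the ring */
	cache->len -= BURST;
	memcpy(rx_ring, &cache->objs[cache->len], BURST * sizeof(*rx_ring));
}

int main(void)
{
	static struct mbuf pkts[BURST];
	struct mbuf *completed[BURST], *rx_ring[BURST];
	struct lcore_cache cache = { .len = 0 };

	for (int i = 0; i < BURST; i++)
		completed[i] = &pkts[i];

	tx_free_burst(&cache, completed);
	rx_alloc_burst(&cache, rx_ring);
	return rx_ring[0] == completed[0] ? 0 : 1;
}

Whether those copies matter depends on the core: on a wide out-of-order core they
largely disappear into the store buffers, which is exactly why an optimization that
removes them may show no benefit, or a regression, on the hardware it was not
designed for.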
Thread overview: 14+ messages
2022-06-04 9:09 Morten Brørup
2022-06-04 9:33 ` Jerin Jacob
2022-06-04 10:00 ` Andrew Rybchenko
2022-06-04 11:10 ` Jerin Jacob
2022-06-04 12:19 ` Morten Brørup
2022-06-04 12:51 ` Andrew Rybchenko
2022-06-05 8:15 ` Morten Brørup
2022-06-05 16:05 ` Stephen Hemminger
2022-06-06 9:35 ` Konstantin Ananyev
2022-06-29 20:44 ` Honnappa Nagarahalli
2022-06-30 15:39 ` Morten Brørup
2022-07-03 19:38 ` Konstantin Ananyev
2022-07-04 16:33 ` Stephen Hemminger [this message]
2022-07-04 22:06 ` Morten Brørup