DPDK patches and discussions
From: Konstantin Ananyev <konstantin.ananyev@huawei.com>
To: Dariusz Sosnowski <dsosnowski@nvidia.com>,
	Stephen Hemminger <stephen@networkplumber.org>
Cc: "NBU-Contact-Thomas Monjalon (EXTERNAL)" <thomas@monjalon.net>,
	"Ferruh Yigit" <ferruh.yigit@amd.com>,
	Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>,
	Ori Kam <orika@nvidia.com>, "dev@dpdk.org" <dev@dpdk.org>
Subject: RE: [RFC] ethdev: fast path async flow API
Date: Thu, 4 Jan 2024 08:47:02 +0000	[thread overview]
Message-ID: <4efb00a7f6f3406ab819424ac7a25542@huawei.com> (raw)
In-Reply-To: <IA1PR12MB8311C2B7D70FA8E1157A94E1A460A@IA1PR12MB8311.namprd12.prod.outlook.com>



> > This is a blocker, showstopper for me.
> +1
> 
> > Have you considered having something like
> >    rte_flow_create_bulk()
> >
> > or better yet a Linux io_uring style API?
> >
> > A ring-style API would allow for better mixing of operations across the board and
> > get rid of the I-cache overhead, which is the root cause of the need for inlining.
> The existing async flow API is somewhat close to the io_uring interface.
> The difference is that the queue is not directly exposed to the application.
> The application interacts with the queue through the rte_flow_async_* APIs (e.g., it places operations in the queue and pushes them to the HW).
> Such a design has some benefits over a flow API which exposes the queue to the user:
> - Easier to use - applications do not manage the queue directly, they do it through the exposed APIs.
> - Consistent with other DPDK APIs - in other libraries, queues are manipulated through an API, not directly by the application.
> - Lower memory usage - only HW primitives are needed (e.g., a HW queue on the PMD side); there is no need to allocate separate
> application queues.
> 
> Bulking of flow operations is a tricky subject.
> Packet processing tries to keep the manipulation of raw packet data to a minimum (e.g., only packet
> headers are accessed),
> whereas during flow rule creation all items and actions must be processed by the PMD to build the rule.
> The amount of memory consumed by the items and actions themselves during this process might be non-negligible.
> If flow rule operations were bulked, the size of the working set would increase, which could have negative consequences on
> cache behavior.
> So it might be the case that bulking removes the I-cache overhead but adds a D-cache overhead instead.

Is the rte_flow struct really that big?
We do bulk processing for mbufs, crypto_ops, etc., and usually bulk processing improves performance rather than degrading it.
Of course, the bulk size has to be kept reasonable.
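
For reference, a minimal sketch of how an application can already batch operations with the existing async flow API
(illustrative only, not taken from the RFC; the port, queue, template table and matching pattern/actions arrays are
assumed to be set up elsewhere):

#include <stdint.h>
#include <rte_flow.h>

#define BURST 32

/* Sketch: "tbl" is assumed to have been created with
 * rte_flow_template_table_create(), and "pattern"/"actions" are assumed to
 * match its pattern/actions templates at index 0. The returned rte_flow
 * handles are dropped for brevity; a real application must keep them. */
static int
enqueue_flow_burst(uint16_t port_id, uint32_t queue_id,
		   struct rte_flow_template_table *tbl,
		   const struct rte_flow_item pattern[],
		   const struct rte_flow_action actions[])
{
	const struct rte_flow_op_attr attr = { .postpone = 1 };
	struct rte_flow_op_result res[BURST];
	struct rte_flow_error err;
	int i;

	for (i = 0; i < BURST; i++)
		/* Enqueue only; .postpone = 1 defers the doorbell. */
		rte_flow_async_create(port_id, queue_id, &attr, tbl,
				      pattern, 0, actions, 0,
				      (void *)(uintptr_t)i, &err);

	/* Ring the doorbell once for the whole burst. */
	rte_flow_push(port_id, queue_id, &err);

	/* Poll for completions; user_data identifies each operation. */
	return rte_flow_pull(port_id, queue_id, res, BURST, &err);
}

The postpone/push pair already amortizes the doorbell cost over a burst; what it does not remove is the per-operation
library call, which is the overhead the inlining proposal below is aimed at.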

> On the other hand, creating (or enqueuing) flow rule operations one by one enables applications to reuse the
> same memory for different flow rules.
> 
> In summary, in my opinion, extending the async flow API with bulking capabilities or exposing the queue directly to the application is
> not desirable.
> This proposal aims to reduce the I-cache overhead in the async flow API by reusing an existing DPDK design pattern - fast path
> functions are inlined into the application code and call cached PMD callbacks.
> 
> Best regards,
> Dariusz Sosnowski
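
For readers who have not seen the design pattern referred to above: ethdev's rte_eth_rx_burst() is a static inline
function that loads a per-port structure of cached driver callbacks (rte_eth_fp_ops[]) and calls straight into the
PMD. A rough sketch of applying the same scheme to the flow queue fast path could look like the snippet below; all
names in it are hypothetical placeholders, not definitions from the RFC, and the argument list is shortened (the
real rte_flow_async_create() takes the full op_attr/template table/pattern/actions set).

#include <stdint.h>

#define MAX_FLOW_PORTS 32	/* stand-in for RTE_MAX_ETHPORTS in this sketch */

/* Hypothetical per-port callback cache, analogous to ethdev's rte_eth_fp_ops:
 * the library defines the array, each PMD fills in its slot at device start. */
struct flow_fp_ops {
	int (*async_create)(void *ctx, uint32_t queue_id, void *user_data);
	void *ctx;	/* PMD private per-port context */
};

extern struct flow_fp_ops flow_fp_ops[MAX_FLOW_PORTS];

/* The fast path entry point becomes a static inline wrapper compiled into the
 * application: one load of the cached ops, then a direct call into the PMD
 * callback, with no intermediate call through the library. */
static inline int
flow_async_create_fast(uint16_t port_id, uint32_t queue_id, void *user_data)
{
	const struct flow_fp_ops *ops = &flow_fp_ops[port_id];

	return ops->async_create(ops->ctx, queue_id, user_data);
}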

Thread overview: 19+ messages
2023-12-27 10:57 Dariusz Sosnowski
2023-12-27 17:39 ` Stephen Hemminger
2023-12-27 17:41 ` Stephen Hemminger
2023-12-28 13:53   ` Dariusz Sosnowski
2023-12-28 14:10     ` Ivan Malov
2024-01-03 18:01       ` Dariusz Sosnowski
2024-01-03 18:29         ` Ivan Malov
2024-01-04 13:13           ` Dariusz Sosnowski
2023-12-28 17:16 ` Stephen Hemminger
2024-01-03 19:14   ` Dariusz Sosnowski
2024-01-04  1:07     ` Stephen Hemminger
2024-01-23 11:37       ` Dariusz Sosnowski
2024-01-29 13:38         ` Dariusz Sosnowski
2024-01-29 17:36           ` Ferruh Yigit
2024-01-30 12:06             ` Dariusz Sosnowski
2024-01-30 12:17               ` Ferruh Yigit
2024-01-30 16:08                 ` Dariusz Sosnowski
2024-01-04  8:47     ` Konstantin Ananyev [this message]
2024-01-04 16:08       ` Dariusz Sosnowski
