DPDK patches and discussions
From: Dariusz Sosnowski <dsosnowski@nvidia.com>
To: Stephen Hemminger <stephen@networkplumber.org>
Cc: "NBU-Contact-Thomas Monjalon (EXTERNAL)" <thomas@monjalon.net>,
	Ferruh Yigit <ferruh.yigit@amd.com>,
	Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>,
	Ori Kam <orika@nvidia.com>, "dev@dpdk.org" <dev@dpdk.org>
Subject: RE: [RFC] ethdev: fast path async flow API
Date: Thu, 28 Dec 2023 13:53:57 +0000	[thread overview]
Message-ID: <IA1PR12MB831187A9641D5C6BD9FE6EB6A49EA@IA1PR12MB8311.namprd12.prod.outlook.com> (raw)
In-Reply-To: <20231227094117.678ce828@hermes.local>

Hi Stephen,

> > +/**
> > + * @internal
> > + *
> > + * Fast path async flow functions and related data are held in a flat array, one entry per ethdev.
> > + * It is assumed that each entry is read-only and cache aligned.
> > + */
> > +struct rte_flow_fp_ops {
> > +     void *ctx;
> > +     rte_flow_async_create_t async_create;
> > +     rte_flow_async_create_by_index_t async_create_by_index;
> > +     rte_flow_async_actions_update_t async_actions_update;
> > +     rte_flow_async_destroy_t async_destroy;
> > +     rte_flow_push_t push;
> > +     rte_flow_pull_t pull;
> > +     rte_flow_async_action_handle_create_t async_action_handle_create;
> > +     rte_flow_async_action_handle_destroy_t async_action_handle_destroy;
> > +     rte_flow_async_action_handle_update_t async_action_handle_update;
> > +     rte_flow_async_action_handle_query_t async_action_handle_query;
> > +     rte_flow_async_action_handle_query_update_t async_action_handle_query_update;
> > +     rte_flow_async_action_list_handle_create_t async_action_list_handle_create;
> > +     rte_flow_async_action_list_handle_destroy_t async_action_list_handle_destroy;
> > +     rte_flow_async_action_list_handle_query_update_t async_action_list_handle_query_update;
> > +} __rte_cache_aligned;
> 
> If you go ahead with this then all usage should be const pointer.
> Still think it is bad idea and creates even more technical debt.
Could you please elaborate a bit more on why you think this is a bad idea and how it creates technical debt?
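
For context, here is a minimal, self-contained sketch of the flat-array dispatch the quoted struct describes. All names and signatures below are hypothetical simplifications for illustration, not the actual DPDK definitions:

```c
/* Sketch of per-port fast-path ops dispatch; simplified, hypothetical
 * names -- NOT the real DPDK definitions. */
#include <assert.h>
#include <stddef.h>

#define MAX_PORTS 4

struct flow_rule; /* opaque rule handle */

/* One read-mostly entry of driver callbacks per ethdev port. */
struct flow_fp_ops {
	void *ctx; /* driver-private context passed back on every call */
	struct flow_rule *(*async_create)(void *ctx, unsigned int queue_id);
};

/* Flat array indexed directly by port_id. */
static struct flow_fp_ops flow_fp_ops_table[MAX_PORTS];

/* Dummy driver callback: echoes its context back, for the demo only. */
static struct flow_rule *
dummy_async_create(void *ctx, unsigned int queue_id)
{
	(void)queue_id;
	return (struct flow_rule *)ctx;
}

/* Fast-path wrapper: one indexed load plus an indirect call; no ethdev
 * lookup or per-call parameter validation on the hot path. */
static inline struct flow_rule *
flow_async_create(unsigned int port_id, unsigned int queue_id)
{
	const struct flow_fp_ops *ops = &flow_fp_ops_table[port_id];
	return ops->async_create(ops->ctx, queue_id);
}
```

The point of the flat array is that the per-call cost collapses to an indexed load and an indirect call, which is why each entry being read-only and cache aligned matters.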

> > **Future considerations**
> >
> > A case can be made about converting some of the existing stable API 
> > functions to fast path versions in future LTS releases.
> > I don't have any hard data on how such changes would affect 
> > performance of these APIs, but I assume that the improvement would be noticeable.
> 
> The problem is that inline functions create future ABI problems.
Agreed, this is why such a change can only be introduced when a new major ABI version is released.
However, even though inlining could reduce function call overhead, I'm not sure such a change is needed for the synchronous flow API.
I mentioned it here as a point to consider.

> Usually, there are other ways to get the same performance benefit.
> Often batching updates to hardware will do the trick.
This is true, but we still have some library-level overhead.
To elaborate more, the current state of flow API is as follows:
- With sync flow API:
  - Applications cannot batch flow operations.
- With async flow APIs:
  - Applications can enqueue multiple flow operations and PMDs can issue batches to HW.
  - But there is still one function call per enqueued flow operation.
    The overall API overhead in the datapath may be non-negligible, considering that applications may enqueue a flow rule creation operation for every packet received in SW.
This proposal specifically aims at reducing API overhead for async flow API in a case mentioned above.
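
To illustrate the distinction, here is a toy sketch of the enqueue-then-push usage pattern described above. The queue structure and function names are hypothetical stand-ins, not the real rte_flow API:

```c
/* Toy model of an async flow queue: operations are staged in SW and
 * committed to "HW" in one batch by push(). Hypothetical API for
 * illustration only. */
#include <assert.h>

#define QUEUE_DEPTH 64

struct flow_queue {
	unsigned int enqueued;  /* ops staged in SW, not yet visible to HW */
	unsigned int committed; /* ops posted to HW by toy_push() */
};

/* Enqueue one flow-create operation; no HW interaction yet. */
static int
toy_async_create(struct flow_queue *q)
{
	if (q->enqueued >= QUEUE_DEPTH)
		return -1; /* queue full */
	q->enqueued++;
	return 0;
}

/* Doorbell: commit all staged operations to HW in one batch. */
static void
toy_push(struct flow_queue *q)
{
	q->committed += q->enqueued;
	q->enqueued = 0;
}
```

Even though toy_push() batches the HW work, each toy_async_create() is still one function call per enqueued operation; that per-call library overhead is what this proposal targets.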

As a side note, we plan to push changes to the mlx5 PMD in 24.03 which will reduce PMD-level overhead in such scenarios.
This proposal, in contrast, targets the overhead of the async flow API at the library level.

Best regards,
Dariusz Sosnowski


Thread overview: 19+ messages
2023-12-27 10:57 Dariusz Sosnowski
2023-12-27 17:39 ` Stephen Hemminger
2023-12-27 17:41 ` Stephen Hemminger
2023-12-28 13:53   ` Dariusz Sosnowski [this message]
2023-12-28 14:10     ` Ivan Malov
2024-01-03 18:01       ` Dariusz Sosnowski
2024-01-03 18:29         ` Ivan Malov
2024-01-04 13:13           ` Dariusz Sosnowski
2023-12-28 17:16 ` Stephen Hemminger
2024-01-03 19:14   ` Dariusz Sosnowski
2024-01-04  1:07     ` Stephen Hemminger
2024-01-23 11:37       ` Dariusz Sosnowski
2024-01-29 13:38         ` Dariusz Sosnowski
2024-01-29 17:36           ` Ferruh Yigit
2024-01-30 12:06             ` Dariusz Sosnowski
2024-01-30 12:17               ` Ferruh Yigit
2024-01-30 16:08                 ` Dariusz Sosnowski
2024-01-04  8:47     ` Konstantin Ananyev
2024-01-04 16:08       ` Dariusz Sosnowski
