DPDK patches and discussions
From: "Etelson, Gregory" <getelson@nvidia.com>
To: Ferruh Yigit <ferruh.yigit@amd.com>
Cc: Gregory Etelson <getelson@nvidia.com>,
	"dev@dpdk.org" <dev@dpdk.org>,
	 Maayan Kashani <mkashani@nvidia.com>, Ori Kam <orika@nvidia.com>,
	 Aman Singh <aman.deep.singh@intel.com>,
	 Yuying Zhang <yuying.zhang@intel.com>,
	 "NBU-Contact-Thomas Monjalon (EXTERNAL)" <thomas@monjalon.net>,
	 Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Subject: Re: [PATCH] ethdev: add template table resize API
Date: Fri, 9 Feb 2024 07:55:59 +0200 (IST)
Message-ID: <4bd8d9d2-4c52-2af2-27f6-925000444c4d@nvidia.com>
In-Reply-To: <5bafdaf7-2d8c-41dd-90bd-043c54ff1d7f@amd.com>

Hello Ferruh,

I am working on a patch update with a detailed user guide for
the table resize API.

Regards,
Gregory


>>> So, by design, the driver will keep the old table when it is resized.
>>> - Can this have a performance impact, e.g. when rules are
>>> updated/removed/inserted the driver will need to look up more tables?
>>> - Or can this cause additional capacity complexity? The total number
>>> of rules will be the sum of the rules in all tables, but new rules can
>>> only be added to the latest table, so the number of rules can exceed
>>> the size of the latest table.
>>> - Or the user can add more flows after resize(), which may not leave
>>> enough room to move old rules to the new table; what is the expected
>>> behavior in this case?
>>> - Or if the user did not update rules at all after resize(), won't the
>>> driver need to check after each rule deletion whether the old table has
>>> emptied and needs to be freed?
>>> - Or can the user call the resize() API multiple times, causing the
>>> driver to maintain multiple tables? How much memory overhead may this
>>> bring?
>>
>> After the "resize, update, complete" sequence, table performance must
>> be the same as before the resize.
>> If the application skipped the updates or the resize completion,
>> performance is undefined.
>>
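
For reference, a rough sketch of the intended application-side sequence.
The function names follow the proposed patch; the exact signatures may
still change, and the helper and its arguments below are only
illustrative:

#include <rte_flow.h>

/* Illustrative helper: 'flows' holds the handles of all flows the
 * application created before the resize call returned. */
static int
app_grow_table(uint16_t port_id, uint32_t queue_id,
               struct rte_flow_template_table *table,
               uint32_t new_nb_rules,
               struct rte_flow **flows, uint32_t nb_flows)
{
        const struct rte_flow_op_attr op_attr = { .postpone = 0 };
        struct rte_flow_error error;
        uint32_t i;

        /* 1. Resize: grow the table capacity; existing flows keep working. */
        if (rte_flow_template_table_resize(port_id, table, new_nb_rules,
                                           &error) < 0)
                return -1;

        /* 2. Update: move every flow created before the resize returned. */
        for (i = 0; i < nb_flows; i++)
                if (rte_flow_async_update_resized(port_id, queue_id, &op_attr,
                                                  flows[i], NULL, &error) < 0)
                        return -1;
        /* ... drain completions with rte_flow_push()/rte_flow_pull() ... */

        /* 3. Complete: let the PMD release the old table resources. */
        return rte_flow_template_table_resize_complete(port_id, table, &error);
}
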
>
> According to the update below, it is expected that some time will pass
> until the user updates the flows, and during this time there may be
> some performance impact. Does it make sense to mention it in the API
> documentation?
>
>
>> The driver must verify that the total number of flows does not exceed
>> the capacity set in the table resize.
>>
>
> OK, this adds more complexity to the driver.
>
>
> Are multiple resize() calls supported?
>
> In the worst case, think about an application that, for each rule, first
> increases the table size by one and then adds that rule. Is this
> supported, and should there be a limit on how many resize() calls can be
> done?
>
>
>>> The 'rte_flow_async_update_resized()' API is called per flow; won't
>>> this force the application to track which flows were created in the
>>> new table and which in the old table, pushing additional work to the
>>> application?
>>
>> The application must track which flows require an update after the
>> table resize.
>> As a general rule, all flows that were created before the table resize
>> call returned must be updated:
>>
>> "old" flows |<-----------------resize------------>| "new" flows
>>   update                                              keep
>>                unknown flow location: update
>>
>> ----------------------------TIME-------------------------------------->
>>
>> In the MLX5 PMD, if update is called with a "new" flow, the call
>> returns success without changing the flow.
>>
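
To make the bookkeeping concrete, one possible application-side pattern
(purely illustrative, not part of the API) is to stamp each flow with a
table "generation" counter that is bumped when resize() returns, so that
only flows stamped with an older generation are passed to
rte_flow_async_update_resized():

#include <stdbool.h>
#include <stdint.h>
#include <rte_flow.h>

/* Hypothetical application bookkeeping; all names are illustrative. */
struct app_flow {
        struct rte_flow *handle;
        uint32_t table_gen;    /* generation the flow was created under */
};

static uint32_t cur_table_gen; /* incremented after each successful resize() */

static bool
app_flow_needs_update(const struct app_flow *f)
{
        /* Flows created before the resize returned must be moved;
         * "new" flows created afterwards can be left as they are. */
        return f->table_gen < cur_table_gen;
}
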
>
> At least this makes life easier for the application; can we document
> this behavior as an API requirement?
>
> But still, in a case where a high amount of flow insertion and deletion
> is happening in parallel, how does the application call the update()? It
> may still need to keep a list of flows to update.
>
>>>
>>> Or what will happen if update() fails in the middle of the update?
>>> Should the user retry, or should the PMD restore the moved rules?
>>>
>>
>> If the flow update call fails, it is treated as a failure during flow
>> create, update or destroy.
>>
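
In other words, the application learns about the failure the same way as
for any other enqueued operation, from the completion status it pulls
from the queue. A rough sketch using the existing async completion calls
(the handling policy itself is up to the application):

#include <rte_flow.h>

/* Drain completions and flag failed operations; illustrative only. */
static void
app_drain_completions(uint16_t port_id, uint32_t queue_id)
{
        struct rte_flow_op_result res[32];
        struct rte_flow_error error;
        int n, i;

        rte_flow_push(port_id, queue_id, &error);
        n = rte_flow_pull(port_id, queue_id, res, 32, &error);
        for (i = 0; i < n; i++) {
                if (res[i].status == RTE_FLOW_OP_ERROR) {
                        /* res[i].user_data identifies the failed flow;
                         * handle it as any failed create/update/destroy. */
                }
        }
}
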
>>>
>>> I understand the logic behind dividing the responsibility across
>>> multiple APIs, and it makes sense, but it brings the complexities
>>> above and more work to the application.
>>> Would it be possible to have a monolithic API where only the resize()
>>> part is blocking, and the update() part and the later removal of the
>>> old table are done asynchronously?
>>>
>>
>> A table resize and a single flow update consume approximately the same
>> amount of time.
>>
>
> Ah, thanks. I was missing this bit of information.
>
>
>> An update of a table with 1,000,000 flows would occupy the driver for
>> too much time.
>> During that time the application would not be able to create, destroy
>> or update existing "old" flows.
>> Such an operation must be coordinated with the application.
>>
>
> Got it. Briefly, I was trying to highlight the complexities that multiple
> APIs bring and the responsibility pushed to the application,
> but the performance impact above explains the design decision.
>
> Eventually the application needs to take this hit; how is the application
> expected to use these APIs?
> It first calls resize(); after this point, how should the application
> handle updating the flows?
>
>
> Could something like an asynchronous flow update work? For example, the
> driver returns success but starts a background thread that moves flows
> in a loop, taking and releasing the lock per flow, so that other threads
> can grab the lock for other flow-related operations?
>
>> A driver could provide a batch flow update, but I don't see how it helps.
>> It's either update one or update all, and the latter does not scale.
>>
>>>
>>> I will also put more comments on the patch based on my latest understanding.
>>>
>>>
>
>

Thread overview: 33+ messages
2023-12-17  9:32 Gregory Etelson
2024-01-29 14:24 ` Ferruh Yigit
2024-01-29 15:08   ` Etelson, Gregory
2024-01-30  8:58     ` Ferruh Yigit
2024-01-30 12:46       ` Etelson, Gregory
2024-01-30 14:34         ` Ferruh Yigit
2024-01-30 18:15           ` Etelson, Gregory
2024-02-08 12:46             ` Ferruh Yigit
2024-02-09  5:55               ` Etelson, Gregory [this message]
2024-01-30 14:56 ` Ferruh Yigit
2024-01-30 18:49   ` Etelson, Gregory
2024-01-31  9:59 ` [PATCH v2] " Gregory Etelson
2024-02-06 22:31   ` Thomas Monjalon
2024-02-07  7:09     ` Etelson, Gregory
2024-02-07  7:03 ` [PATCH v3] " Gregory Etelson
2024-02-07 17:36 ` [PATCH v4] " Gregory Etelson
2024-02-11  9:30 ` [PATCH v5] " Gregory Etelson
2024-02-12 14:02   ` Thomas Monjalon
2024-02-12 14:48     ` Etelson, Gregory
2024-02-12 14:14   ` Ferruh Yigit
2024-02-12 15:01     ` Etelson, Gregory
2024-02-12 15:07       ` Ferruh Yigit
2024-02-12 18:12 ` [PATCH v6] " Gregory Etelson
2024-02-12 20:30   ` Ferruh Yigit
2024-02-13 11:51   ` Thomas Monjalon
2024-02-14 14:32 ` [PATCH v7] " Gregory Etelson
2024-02-14 14:42   ` Thomas Monjalon
2024-02-14 15:56   ` Ferruh Yigit
2024-02-14 17:07     ` Etelson, Gregory
2024-02-14 21:59       ` Ferruh Yigit
2024-02-15  5:41         ` Etelson, Gregory
2024-02-15  6:13 ` [PATCH v8] " Gregory Etelson
2024-02-15 13:13   ` Ferruh Yigit
