From: Raslan Darawsheh <rasland@nvidia.com>
To: "Jiawei(Jonny) Wang" <jiaweiw@nvidia.com>,
Slava Ovsiienko <viacheslavo@nvidia.com>,
Matan Azrad <matan@nvidia.com>, Ori Kam <orika@nvidia.com>,
NBU-Contact-Thomas Monjalon <thomas@monjalon.net>
Cc: "dev@dpdk.org" <dev@dpdk.org>
Subject: Re: [dpdk-dev] [PATCH v2] net/mlx5: optimize the device spawn time with representors
Date: Tue, 26 Oct 2021 12:40:18 +0000 [thread overview]
Message-ID: <DM4PR12MB5312CC3E6B6318C8E56434B2CF849@DM4PR12MB5312.namprd12.prod.outlook.com> (raw)
In-Reply-To: <20210930120047.37448-1-jiaweiw@nvidia.com>
Hi,
> -----Original Message-----
> From: Jiawei(Jonny) Wang <jiaweiw@nvidia.com>
> Sent: Thursday, September 30, 2021 3:01 PM
> To: Slava Ovsiienko <viacheslavo@nvidia.com>; Matan Azrad
> <matan@nvidia.com>; Ori Kam <orika@nvidia.com>; NBU-Contact-Thomas
> Monjalon <thomas@monjalon.net>
> Cc: dev@dpdk.org; Raslan Darawsheh <rasland@nvidia.com>
> Subject: [PATCH v2] net/mlx5: optimize the device spawn time with
> representors
>
> During the device spawn process, the mlx5 PMD queried the available flow
> priorities by calling mlx5_flow_discover_priorities, queried whether
> the DR drop action was supported on the root table by calling
> mlx5_flow_discover_dr_action_support, and queried the availability of
> metadata register C by calling mlx5_flow_discover_mreg_c.
>
> These functions created test flows to probe the supported fields and
> destroyed them at the end. The test flows in the first two functions
> were created on the root table.
> If the device was spawned with multiple representors, these test flows
> were created and destroyed on each representor as well. These operations
> took a significant amount of initialization time during the device spawn.
>
> This patch optimizes the device discovery functions: if a device with
> multiple representors (VF/SF) is being spawned, the priority, drop action,
> and metadata register support checks are done only once, and the results
> are shared across all representors.
>
> Signed-off-by: Jiawei Wang <jiaweiw@nvidia.com>
> Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
> ---
> v2: Fix the CI warning
>
> ---
> drivers/net/mlx5/linux/mlx5_os.c   | 33 +++++++++++++++++++++---------
> drivers/net/mlx5/mlx5.h            | 10 ++++++---
> drivers/net/mlx5/mlx5_flow.c       | 31 ++++++++++++++--------------
> drivers/net/mlx5/mlx5_flow_verbs.c |  4 ++--
> drivers/net/mlx5/windows/mlx5_os.c | 12 ++++++-----
> 5 files changed, 54 insertions(+), 36 deletions(-)
>
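For readers skimming the archive: the gist of the change quoted above is to run the expensive test-flow discovery once and let every subsequently spawned representor reuse the cached results. Below is a minimal, hypothetical C sketch of that "discover once, share across representors" pattern. The names (spawn_shared_ctx, probe_*, spawn_representor) are illustrative only and are not the actual mlx5 PMD symbols touched by the patch.

    /*
     * Sketch of the caching pattern: the first spawned port performs the
     * probes that create and destroy test flows; later representors reuse
     * the cached results from the shared context.
     */
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    struct spawn_shared_ctx {
        bool discovery_done;      /* true once the first port has probed */
        uint32_t flow_priorities; /* cached flow-priority probe result   */
        bool dr_drop_supported;   /* cached DR drop action support       */
        uint32_t mreg_c_mask;     /* cached metadata register C bitmap   */
    };

    /* Stubs standing in for the probes that create/destroy test flows. */
    static uint32_t probe_priorities(void) { return 16; }
    static bool     probe_dr_drop(void)    { return true; }
    static uint32_t probe_mreg_c(void)     { return 0xffu; }

    static void
    spawn_representor(struct spawn_shared_ctx *sh, int rep_id)
    {
        if (!sh->discovery_done) {
            /* Only the first spawned port pays the test-flow cost. */
            sh->flow_priorities = probe_priorities();
            sh->dr_drop_supported = probe_dr_drop();
            sh->mreg_c_mask = probe_mreg_c();
            sh->discovery_done = true;
        }
        /* Later representors reuse the cached results directly. */
        printf("rep %d: prio=%u drop=%d mreg_c=0x%x\n", rep_id,
               sh->flow_priorities, sh->dr_drop_supported, sh->mreg_c_mask);
    }

    int
    main(void)
    {
        struct spawn_shared_ctx sh = { 0 };
        int i;

        for (i = 0; i < 4; i++) /* e.g. PF plus three representors */
            spawn_representor(&sh, i);
        return 0;
    }

With many representors this turns N rounds of test-flow creation and destruction into one, which is where the reported spawn-time saving comes from.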
Patch rebased and applied to next-net-mlx,
Kindest regards,
Raslan Darawsheh
Thread overview: 5+ messages
2021-09-30 11:52 [dpdk-dev] [PATCH] " Jiawei Wang
2021-09-30 12:00 ` [dpdk-dev] [PATCH v2] " Jiawei Wang
2021-10-26 12:40 ` Raslan Darawsheh [this message]
2021-10-27 10:35 ` [dpdk-dev] [PATCH v3] " Jiawei Wang
2021-10-27 12:06 ` Raslan Darawsheh