patches for DPDK stable branches
From: Raslan Darawsheh <rasland@nvidia.com>
To: Dmitry Kozlyuk <dkozlyuk@nvidia.com>, "dev@dpdk.org" <dev@dpdk.org>
Cc: "stable@dpdk.org" <stable@dpdk.org>,
	Matan Azrad <matan@nvidia.com>,
	Slava Ovsiienko <viacheslavo@nvidia.com>
Subject: RE: [PATCH v2] common/mlx5: fix non-expandable global MR cache
Date: Mon, 4 Jul 2022 16:20:01 +0000	[thread overview]
Message-ID: <BYAPR12MB3078C5490602DE59DB17DEC8CFBE9@BYAPR12MB3078.namprd12.prod.outlook.com> (raw)
In-Reply-To: <20220629220800.751719-1-dkozlyuk@nvidia.com>

Hi,

> -----Original Message-----
> From: Dmitry Kozlyuk <dkozlyuk@nvidia.com>
> Sent: Thursday, June 30, 2022 1:08 AM
> To: dev@dpdk.org
> Cc: Raslan Darawsheh <rasland@nvidia.com>; stable@dpdk.org; Matan
> Azrad <matan@nvidia.com>; Slava Ovsiienko <viacheslavo@nvidia.com>
> Subject: [PATCH v2] common/mlx5: fix non-expandable global MR cache
> 
> The number of memory regions (MRs) that the MLX5 PMD can use
> was limited to 512 per IB device, the size of the global MR cache,
> which was fixed at compile time.
> The cache allows searching for an MR LKey by address efficiently,
> so it is the last place searched on the data path
> (the global MR database, which would be slow, is skipped).
> If the application logic caused the PMD to create more than 512 MRs,
> which can be the case with external memory,
> those MRs would never be found on the data path
> and would later cause a HW failure.
> 
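
To illustrate the kind of address-to-LKey lookup described above, here is a
minimal sketch; the entry layout, names, and binary search are illustrative
assumptions, not the driver's actual code:

#include <stdint.h>

#define LKEY_INVALID UINT32_MAX  /* miss: caller falls back to the slow path */

struct mr_cache_ent {
    uintptr_t start;  /* first byte covered by the MR */
    uintptr_t end;    /* one past the last byte covered */
    uint32_t lkey;    /* LKey to put into the WQE */
};

/* Binary search over non-overlapping ranges sorted by start address. */
uint32_t
mr_cache_lookup(const struct mr_cache_ent *tbl, uint32_t n, uintptr_t addr)
{
    uint32_t lo = 0, hi = n;

    while (lo < hi) {
        uint32_t mid = lo + (hi - lo) / 2;

        if (addr < tbl[mid].start)
            hi = mid;
        else if (addr >= tbl[mid].end)
            lo = mid + 1;
        else
            return tbl[mid].lkey;  /* hit */
    }
    return LKEY_INVALID;  /* not cached: must be resolved on the slow path */
}

An MR that never makes it into this table is invisible to the lookup, which is
why overflowing the table is fatal for traffic using that memory.
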
> The cache size was fixed because, at the time of overflow,
> the EAL memory hotplug lock may be held,
> which prohibits allocating a larger cache
> (it must reside in DPDK memory for multi-process support).
> This patch adds logic to release the necessary locks,
> extend the cache, and repeat the attempt to insert new entries.
> 
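
A minimal sketch of the release-expand-retry idea, not the actual patch: the
lock name and helpers below are hypothetical, and the cache would also need
its own synchronization during reallocation, which is omitted here for
brevity.

#include <stdint.h>
#include <stdlib.h>
#include <pthread.h>

struct mr_cache {
    uint32_t len;        /* entries in use */
    uint32_t size;       /* allocated capacity */
    uintptr_t *entries;  /* stand-in for the real cache entries */
};

/* Stand-in for the lock under which memory allocation is prohibited. */
static pthread_mutex_t no_alloc_lock = PTHREAD_MUTEX_INITIALIZER;

static int
cache_try_insert(struct mr_cache *c, uintptr_t entry)
{
    if (c->len == c->size)
        return -1;               /* full: caller must expand */
    c->entries[c->len++] = entry;
    return 0;
}

static int
cache_expand(struct mr_cache *c)
{
    uint32_t new_size = c->size ? c->size * 2 : 8;
    uintptr_t *tmp = realloc(c->entries, new_size * sizeof(*tmp));

    if (tmp == NULL)
        return -1;
    c->entries = tmp;
    c->size = new_size;
    return 0;
}

int
cache_insert(struct mr_cache *c, uintptr_t entry)
{
    pthread_mutex_lock(&no_alloc_lock);
    while (cache_try_insert(c, entry) < 0) {
        /* Release the lock so that allocation becomes possible... */
        pthread_mutex_unlock(&no_alloc_lock);
        if (cache_expand(c) < 0)
            return -1;
        /* ...then re-take it and repeat the insertion attempt. */
        pthread_mutex_lock(&no_alloc_lock);
    }
    pthread_mutex_unlock(&no_alloc_lock);
    return 0;
}
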
> The `mlx5_mr_btree` structure had an `overflow` field
> that was set when a cache (not only the global one)
> could not accept new entries.
> However, it was only checked for the global cache,
> because the caches of the upper layers were dynamically expandable.
> With the global cache size limitation removed, this field is not needed.
> The cache size was previously limited by its 16-bit indices.
> Use the space in the structure previously filled by the `overflow` field
> to extend the indices to 32 bits.
> With this patch, only the HW and RAM limit the number of MRs.
> 
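
Roughly, that structure change amounts to the following; this is a sketch,
not the exact definition from the driver, and the field comments and
entry-table member are guesses:

#include <stdint.h>

/* Before (approximate): 16-bit indices plus an overflow flag. */
struct mlx5_mr_btree_before {
    uint16_t len;   /* number of entries */
    uint16_t size;  /* capacity, at most 65535 entries */
    int overflow;   /* set when the table could not accept an entry */
    /* entry table follows */
};

/* After (approximate): `overflow` is gone and its space holds 32-bit indices. */
struct mlx5_mr_btree_after {
    uint32_t len;
    uint32_t size;
    /* entry table follows */
};
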
> Fixes: 974f1e7ef146 ("net/mlx5: add new memory region support")
> Cc: stable@dpdk.org
> 
> Signed-off-by: Dmitry Kozlyuk <dkozlyuk@nvidia.com>
> ---
> v2: fix warnings in debug mode and with assertions enabled (Raslan).
> 

Patch applied to next-net-mlx,

Kindest regards,
Raslan Darawsheh

Thread overview: 4+ messages
2022-06-24 20:35 [PATCH] " Dmitry Kozlyuk
2022-06-29 22:08 ` [PATCH v2] " Dmitry Kozlyuk
2022-07-04 14:22   ` Matan Azrad
2022-07-04 16:20   ` Raslan Darawsheh [this message]

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=BYAPR12MB3078C5490602DE59DB17DEC8CFBE9@BYAPR12MB3078.namprd12.prod.outlook.com \
    --to=rasland@nvidia.com \
    --cc=dev@dpdk.org \
    --cc=dkozlyuk@nvidia.com \
    --cc=matan@nvidia.com \
    --cc=stable@dpdk.org \
    --cc=viacheslavo@nvidia.com \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html
