From: "Nélio Laranjeiro" <nelio.laranjeiro@6wind.com>
To: Sagi Grimberg <sagi@grimberg.me>
Cc: dev@dpdk.org, Shahaf Shuler <shahafs@mellanox.com>,
Yongseok Koh <yskoh@mellanox.com>,
Roy Shterman <roys@lightbitslabs.com>,
Alexander Solganik <sashas@lightbitslabs.com>
Subject: Re: [dpdk-dev] Question on mlx5 PMD txq memory registration
Date: Mon, 17 Jul 2017 23:02:24 +0200
Message-ID: <20170717210222.j4dwxiujqdlqhlp2@shalom>
In-Reply-To: <75d08202-1882-7660-924c-b6dbb4455b88@grimberg.me>

Hello Sagi,
On Mon, Jul 17, 2017 at 04:29:34PM +0300, Sagi Grimberg wrote:
> Hi,
>
> Looking at the code, it looks like mlx5 keeps a MR cache per TX queue
> (each MR registers a rte_mempool).
>
> Once a TX queue is created, mlx5 scans existing mempools and
> pre-registers a MR for each mempool it meets (using rte_mempool_walk).
> For each MR registration that exceeds the TX queue cache, it removes the
> first entry from the cache and deregisters that MR (in txq_mp2mr_reg).
>
> Now on TX burst, if the driver sees an mbuf from an unknown mempool, it
> registers the mempool on the fly and again *deregisters* the first MR in
> the cache.
>
> My question is, what guarantees that there are no in-flight send operations
> posted on the TX queue when we deregister and remove an MR from the cache?
There is none. If you send a burst of 9 packets, each one coming from a
different mempool, the first one will be dropped.
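
To make the failure mode concrete, here is a rough sketch of the kind of
fixed-size, evict-oldest per-queue cache being discussed. All names
(txq_mr_cache, txq_mr_lookup, MR_CACHE_SIZE, ...) are made up for
illustration, this is not the actual mlx5 PMD code:

#include <stdint.h>
#include <string.h>

#define MR_CACHE_SIZE 8 /* assumed cache depth, configurable in the PMD */

struct mr_cache_entry {
    const void *mp;  /* mempool covered by the MR */
    uint32_t lkey;   /* local key placed in the send WQE */
};

struct txq_mr_cache {
    struct mr_cache_entry entries[MR_CACHE_SIZE];
    unsigned int n;  /* number of valid entries */
};

/* Return the lkey to use for a mempool, registering it on a miss. */
static uint32_t
txq_mr_lookup(struct txq_mr_cache *c, const void *mp,
              uint32_t (*register_mr)(const void *mp),
              void (*deregister_mr)(uint32_t lkey))
{
    unsigned int i;

    for (i = 0; i < c->n; ++i)
        if (c->entries[i].mp == mp)
            return c->entries[i].lkey;
    if (c->n == MR_CACHE_SIZE) {
        /*
         * Cache full: evict the oldest entry. If a send posted with
         * entries[0].lkey has not completed yet, the HCA will report
         * a local protection error once the MR is deregistered --
         * this is the race you are describing.
         */
        deregister_mr(c->entries[0].lkey);
        memmove(&c->entries[0], &c->entries[1],
                (MR_CACHE_SIZE - 1) * sizeof(c->entries[0]));
        c->n--;
    }
    c->entries[c->n].mp = mp;
    c->entries[c->n].lkey = register_mr(mp);
    return c->entries[c->n++].lkey;
}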
> AFAICT, it is the driver's responsibility to guarantee that it never
> deregisters a memory region that has in-flight send operations posted,
> otherwise the send operation *will* complete with a local protection
> error. Is that taken care of?
Up to now we have assumed that the user knows their application's needs
and can increase this cache size accordingly through the configuration
item. Under that assumption the limit is never exceeded and the
guarantee holds.
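
For reference, the configuration item mentioned above is a build-time
option; the name below is from memory for this DPDK version, so please
double check it in config/common_base:

# Maximum number of mempools whose MR is cached per TX queue
# (option name assumed, verify against your DPDK release).
CONFIG_RTE_LIBRTE_MLX5_TX_MP_CACHE=8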
> Another question, why is the MR cache maintained per TX queue and not
> per device? If the application starts N TX queues then a single mempool
> will be registered N times instead of just once. Having lots of MR
> instances will pollute the device ICMC pretty badly. Am I missing
> something?
Having this cache per device would require a lock on the device
structure while threads are sending packets, and such locks cost
cycles; that is why the cache is per queue. Another point is that
having several mempools per device is common, whereas having several
mempools per queue is not. For those two reasons it seems logical to
keep this cache per queue.
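
To illustrate the cost, a per-device cache would look something like
this (again a made up sketch, not PMD code), with every TX queue of the
port taking the same lock on the hot path:

#include <stdint.h>
#include <rte_spinlock.h>

struct dev_mr_cache {
    rte_spinlock_t lock;   /* shared by all TX queues of the port */
    const void *mp[8];     /* mempool -> lkey table (size assumed) */
    uint32_t lkey[8];
    unsigned int n;
};

static uint32_t
dev_mr_lookup(struct dev_mr_cache *c, const void *mp)
{
    uint32_t lkey = UINT32_MAX; /* UINT32_MAX means miss */
    unsigned int i;

    rte_spinlock_lock(&c->lock);  /* paid on every TX burst */
    for (i = 0; i < c->n; ++i)
        if (c->mp[i] == mp) {
            lkey = c->lkey[i];
            break;
        }
    rte_spinlock_unlock(&c->lock);
    return lkey;
}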
I am currently re-working this part of the code to improve it by using
reference counters instead. The cache will remain for performance
purposes. This will fix the issues you are pointing out.
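
The idea with reference counters, roughly (an illustration only, not
the actual rework), is that an MR stays alive as long as a posted send
may still use it, and is only deregistered once it has been evicted
from the cache and its count has dropped to zero:

#include <stdbool.h>
#include <stdint.h>

struct mr_entry {
    const void *mp;
    uint32_t lkey;
    uint32_t refcnt;  /* in-flight sends referencing this MR */
    bool evicted;     /* removed from the cache, release pending */
};

static void
mr_release(struct mr_entry *mr, void (*deregister_mr)(uint32_t lkey))
{
    if (mr->evicted && mr->refcnt == 0)
        deregister_mr(mr->lkey);
}

/* TX burst: take a reference before posting the WQE. */
static void
mr_get(struct mr_entry *mr)
{
    mr->refcnt++;
}

/* TX completion: drop the reference, possibly deregister. */
static void
mr_put(struct mr_entry *mr, void (*deregister_mr)(uint32_t lkey))
{
    mr->refcnt--;
    mr_release(mr, deregister_mr);
}

/* Cache eviction: defer deregistration until no send uses the MR. */
static void
mr_evict(struct mr_entry *mr, void (*deregister_mr)(uint32_t lkey))
{
    mr->evicted = true;
    mr_release(mr, deregister_mr);
}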
Are you facing some kind of issue? Maybe you can share it; it could
help to improve things.
Thanks,
--
Nélio Laranjeiro
6WIND