From: Sagi Grimberg <sagi@grimberg.me>
To: "Nélio Laranjeiro" <nelio.laranjeiro@6wind.com>
Cc: dev@dpdk.org, Shahaf Shuler <shahafs@mellanox.com>,
	Yongseok Koh <yskoh@mellanox.com>,
	Roy Shterman <roys@lightbitslabs.com>,
	Alexander Solganik <sashas@lightbitslabs.com>,
	Leon Romanovsky <leonro@mellanox.com>
Subject: Re: [dpdk-dev] Question on mlx5 PMD txq memory registration
Date: Wed, 19 Jul 2017 09:21:39 +0300	[thread overview]
Message-ID: <85c0b1d9-bbf3-c6ab-727f-f508c5e5f584@grimberg.me> (raw)
In-Reply-To: <20170717210222.j4dwxiujqdlqhlp2@shalom>


> There is none; if you send a burst of 9 packets, each one coming from a
> different mempool, the first one will be dropped.

It's worse than just a drop: without debug enabled the error completion
is ignored, the wqe_pi is taken from an invalid field, and that leads to
freeing bogus mbufs (elts_tail is not valid).
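To make the failure mode concrete, the completion opcode has to be
checked before wqe_counter is used; roughly something like this (a
sketch only, the names and layout below are my assumptions, not the
PMD's actual code):

  #include <stdint.h>
  #include <arpa/inet.h>

  #define CQE_REQ_ERR  0x0d   /* requester error opcode, per the mlx5 PRM */
  #define CQE_RESP_ERR 0x0e   /* responder error opcode */

  struct cqe {                     /* minimal, assumed view of a completion */
          uint16_t wqe_counter;    /* big-endian; meaningful only on success */
          uint8_t  op_own;         /* opcode in the high nibble */
  };

  static int
  tx_complete(const struct cqe *cqe, uint16_t *elts_tail)
  {
          uint8_t op = cqe->op_own >> 4;

          if (op == CQE_REQ_ERR || op == CQE_RESP_ERR) {
                  /* error completion: wqe_counter is not valid here,
                   * so do not advance elts_tail from it */
                  return -1;
          }
          *elts_tail = ntohs(cqe->wqe_counter);
          return 0;
  }

Today, with debug disabled, the error branch above effectively does not
exist.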

>> AFAICT, it is the driver's responsibility to guarantee that it never
>> deregisters a memory region that has in-flight send operations posted;
>> otherwise the send operation *will* complete with a local protection
>> error. Is that taken care of?
> 
> Up to now we have assumed that the user knows their application's needs
> and can increase this cache size accordingly through the configuration
> item.
> This way the limit and the guarantee hold.

That is an undocumented assumption.

>> Another question, why is the MR cache maintained per TX queue and not
>> per device? If the application starts N TX queues then a single mempool
>> will be registered N times instead of just once. Having lots of MR
>> instances will pollute the device ICMC pretty badly. Am I missing
>> something?
> 
> Having this cache per device needs a lock on the device structure while
> threads are sending packets.

Not sure why it needs a lock at all. It *may* need RCU protection or a
rwlock, if anything.
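Just to sketch what I have in mind (the struct and helper names here are
made up for illustration, this is not the PMD's code): a per-device
cache only needs a shared lock on the lookup path, e.g. with DPDK's
rwlock:

  #include <stdint.h>
  #include <rte_rwlock.h>
  #include <rte_mempool.h>

  struct mr_entry {                       /* hypothetical cache entry */
          const struct rte_mempool *mp;
          uint32_t lkey;
  };

  struct mr_cache {                       /* hypothetical per-device cache */
          rte_rwlock_t lock;
          unsigned int n;
          struct mr_entry entries[64];
  };

  /* hot path: readers only take the shared lock, so TX threads do not
   * serialize each other; the write lock is needed only when a new
   * mempool shows up, which is rare */
  static uint32_t
  mr_cache_lookup(struct mr_cache *c, const struct rte_mempool *mp)
  {
          uint32_t lkey = UINT32_MAX;     /* "not found" */
          unsigned int i;

          rte_rwlock_read_lock(&c->lock);
          for (i = 0; i != c->n; ++i) {
                  if (c->entries[i].mp == mp) {
                          lkey = c->entries[i].lkey;
                          break;
                  }
          }
          rte_rwlock_read_unlock(&c->lock);
          return lkey;
  }

Registering a new mempool (under the write lock) happens once per
mempool lifetime, so the cost to the TX threads should be negligible.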

> Having such locks costs cycles, which is why the cache is per queue.
> Another point: having several mempools per device is common, whereas
> having several mempools per queue is not, so it seems logical to have
> this cache per queue for those two reasons.
> 
> 
> I am currently re-working this part of the code to improve it using
> reference counters instead. The cache will remain for performance
> reasons.  This will fix the issues you are pointing out.

AFAICT, all this caching mechanism is just working around the fact
that mlx5 allocates resources on top of the existing verbs interface.
I think it should work like any other PMD, i.e. use the mbuf physical
addresses.

The mlx5 device (like all other rdma devices) has a global DMA lkey that
spans the entire physical address space. Just about all the kernel
drivers heavily use this lkey. IMO, the mlx5_pmd driver should be able
to query the kernel for this lkey and ask the kernel to create the QP
with the privilege level needed to post send/recv operations with that
lkey.

And then mlx5_pmd becomes like the other drivers, working with physical
addresses instead of sub-optimally working around memory registration.
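With that in place the TX data path would not need any registration
machinery at all; something like the sketch below, under the assumption
that such a reserved lkey can be obtained from the kernel (which is
exactly the missing piece today); the data segment layout is also just
an assumption, not the PMD's definition:

  #include <rte_mbuf.h>
  #include <rte_byteorder.h>

  /* hypothetical: queried once from the kernel at queue setup time */
  static uint32_t reserved_lkey;

  struct wqe_data_seg {            /* assumed layout: all fields big-endian */
          uint32_t byte_count;
          uint32_t lkey;
          uint64_t addr;
  };

  static void
  fill_dseg(struct wqe_data_seg *dseg, const struct rte_mbuf *m)
  {
          dseg->byte_count = rte_cpu_to_be_32(rte_pktmbuf_data_len(m));
          dseg->lkey = rte_cpu_to_be_32(reserved_lkey);
          /* the mbuf physical address is all the device needs */
          dseg->addr = rte_cpu_to_be_64(rte_pktmbuf_mtophys(m));
  }

No mempool registration, no cache and no lookup on the hot path.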

And while we're on the subject, what is the plan for detaching mlx5_pmd
from its MLNX_OFED dependency? Mellanox has been doing a good job
upstreaming the needed features (rdma-core). CC'ing Leon (who is
co-maintaining the user-space rdma tree).


Thread overview: 11+ messages
2017-07-17 13:29 Sagi Grimberg
2017-07-17 21:02 ` Nélio Laranjeiro
2017-07-19  6:21   ` Sagi Grimberg [this message]
2017-07-20 13:55     ` Nélio Laranjeiro
2017-07-20 14:06       ` Sagi Grimberg
2017-07-20 15:20         ` Shahaf Shuler
2017-07-20 16:22           ` Sagi Grimberg
2017-07-23  8:17             ` Shahaf Shuler
2017-07-23  9:03               ` Sagi Grimberg
2017-07-24 13:44                 ` Bruce Richardson
2017-07-27 10:48                   ` Sagi Grimberg
