From: Sagi Grimberg
To: Nélio Laranjeiro
Cc: dev@dpdk.org, Shahaf Shuler, Yongseok Koh, Roy Shterman, Alexander Solganik, Leon Romanovsky
Date: Wed, 19 Jul 2017 09:21:39 +0300
Subject: Re: [dpdk-dev] Question on mlx5 PMD txq memory registration
Message-ID: <85c0b1d9-bbf3-c6ab-727f-f508c5e5f584@grimberg.me>
In-Reply-To: <20170717210222.j4dwxiujqdlqhlp2@shalom>
References: <75d08202-1882-7660-924c-b6dbb4455b88@grimberg.me> <20170717210222.j4dwxiujqdlqhlp2@shalom>
List-Id: DPDK patches and discussions

> There is none, if you send a burst of 9 packets each one coming from a
> different mempool the first one will be dropped.

It's worse than just a drop: without debug enabled the error completion
is ignored, the wqe_pi is taken from an invalid field, and that leads to
bogus mbuf frees (elts_tail is not valid).

>> AFAICT, it is the driver's responsibility to guarantee to never
>> deregister a memory region that has inflight send operations posted,
>> otherwise the send operation *will* complete with a local protection
>> error. Is that taken care of?
>
> Up to now we have assumed that the user knows his application's needs
> and can increase this cache size accordingly through the configuration
> item. This way this limit and guarantee become true.

That is an undocumented assumption.
>> Another question: why is the MR cache maintained per TX queue and not
>> per device? If the application starts N TX queues then a single
>> mempool will be registered N times instead of just once. Having lots
>> of MR instances will pollute the device ICMC pretty badly. Am I
>> missing something?
>
> Having this cache per device needs a lock on the device structure while
> threads are sending packets.

Not sure why it needs a lock at all; it *may* need RCU protection or an
rwlock, if anything.

> Having such locks costs cycles, that is why the cache is per queue.
> Another point is, having several mempools per device is something
> common, whereas having several mempools per queue is not, so it seems
> logical to have this cache per queue for those two reasons.
>
> I am currently re-working this part of the code to improve it using
> reference counters instead. The cache will remain for performance
> purposes. This will fix the issues you are pointing out.

AFAICT, all this caching mechanism is just working around the fact that
mlx5 allocates resources on top of the existing verbs interface. I think
it should work like any other PMD, i.e. use the mbuf physical addresses.
The mlx5 device (like all other RDMA devices) has a global DMA lkey that
spans the entire physical address space, and just about all the kernel
drivers use this lkey heavily. IMO, mlx5_pmd should be able to query the
kernel for this lkey and ask the kernel to create the QP with the
privilege level to post send/recv operations with that lkey. Then
mlx5_pmd becomes like the other drivers, working with physical addresses
instead of working around memory registration sub-optimally.

And while we're on the subject, what is the plan for detaching mlx5_pmd
from its MLNX_OFED dependency? Mellanox has been doing a good job
upstreaming the needed features (rdma-core).

CC'ing Leon (who is co-maintaining the user-space rdma tree).