From mboxrd@z Thu Jan 1 00:00:00 1970
From: Andrew Rybchenko
To: Olivier Matz, Vamsi Krishna Attunuru, dev@dpdk.org
CC: Thomas Monjalon, Anatoly Burakov, Jerin Jacob Kollanukkaran,
 Kokkilagadda, Ferruh Yigit
Date: Wed, 7 Aug 2019 18:21:01 +0300
Subject: Re: [dpdk-dev] [RFC 0/4] mempool: avoid objects allocations across pages
In-Reply-To: <20190719133845.32432-1-olivier.matz@6wind.com>
References: <20190719133845.32432-1-olivier.matz@6wind.com>
List-Id: DPDK patches and discussions

On 7/19/19 4:38 PM, Olivier Matz wrote:
> When IOVA mode is VA, a mempool can be created with objects that
> are not physically contiguous, which breaks KNI.
>
> To solve this, this patchset changes the default behavior of the
> mempool populate function to prevent objects from being located
> across pages.

I'll provide top-level review notes on the individual patches, but what
I don't understand in general is why we add a rule to respect page
boundaries in all cases, even when it is not absolutely required. It may
add holes. Can it have a negative impact on performance?
I think the KNI VA-mode requirements are very specific: it is VA mode,
but page boundaries should be respected even if the VA range is
contiguous.

> Olivier Matz (4):
>   mempool: clarify default populate function
>   mempool: unalign size when calculating required mem amount
>   mempool: introduce function to get mempool page size
>   mempool: prevent objects from being across pages
>
>  lib/librte_mempool/rte_mempool.c             | 106 +++++++++++----------
>  lib/librte_mempool/rte_mempool.h             |   8 +-
>  lib/librte_mempool/rte_mempool_ops.c         |   4 +-
>  lib/librte_mempool/rte_mempool_ops_default.c |  39 +++++++++-
>  4 files changed, 90 insertions(+), 67 deletions(-)
>
> ---
>
> Hi,
>
>> @Olivier,
>> Any suggestions..?
>
> I took some time to go a bit deeper. I still think we can change the
> default behavior to avoid objects being located across pages, but it
> is more complex than I expected.
>
> I made a draft patchset that, in short:
> - cleans up / renames variables
> - removes the optimistic fully IOVA-contiguous allocation
> - changes the return value of calc_mem_size to return the unaligned
>   size, so the allocation is smaller in the case of big hugepages
> - changes rte_mempool_op_populate_default() to prevent allocation
>   of objects across multiple pages
>
> Andrew, Anatoly, did I miss something?
> Vamsi, can you check if it solves your issue?
>
> Anyway, even if we validate the patchset and make it work, I'm afraid
> this is not something that could go into 19.08.
>
> The only alternative I see is a specific mempool allocation function
> for the iova=va mode + KNI case, as you proposed previously.
>
> It can probably be implemented without adding a flag, starting from
> rte_mempool_create() and replacing rte_mempool_populate_default(mp)
> with something else: allocate pages one by one, and call
> rte_mempool_populate_iova() for each of them.
>
> Hope it helps. Unfortunately, I may not have too much time to spend on
> it in the coming days.
>
> Regards,
> Olivier
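
For illustration, a minimal sketch of the placement rule being discussed
(not the patchset code; the function name and parameters are made up, and
per-object headers/trailers are folded into total_elt_sz): objects are laid
out sequentially and pushed to the start of the next page whenever they
would otherwise straddle a boundary. The skipped bytes are the "holes"
mentioned in the review above.

#include <stddef.h>

/* Count how many objects of total_elt_sz bytes fit in a chunk of len
 * bytes when no object may cross a pg_sz boundary.  Objects larger than
 * one page are not handled here. */
static unsigned int
count_objs_no_page_cross(size_t len, size_t total_elt_sz, size_t pg_sz)
{
	size_t off = 0;
	unsigned int n = 0;

	while (off + total_elt_sz <= len) {
		size_t pg_off = off % pg_sz;

		if (pg_off + total_elt_sz > pg_sz) {
			/* Object would straddle a page: leave a hole and
			 * retry from the start of the next page. */
			off += pg_sz - pg_off;
			continue;
		}
		off += total_elt_sz;
		n++;
	}
	return n;
}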
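
And a rough sketch of the KNI-specific alternative Olivier describes:
reserve one page-sized, IOVA-contiguous memzone at a time and populate the
mempool from it, so no object can straddle a page. The helper name
(kni_mempool_create) and the pg_sz parameter are hypothetical; a real
pktmbuf pool would also need private data and the mbuf initializers, and
the error path below leaks the reserved memzones for brevity.

#include <stdio.h>

#include <rte_mempool.h>
#include <rte_memzone.h>

static struct rte_mempool *
kni_mempool_create(const char *name, unsigned int n, unsigned int elt_size,
		   unsigned int cache_size, int socket_id, size_t pg_sz)
{
	struct rte_mempool *mp;
	unsigned int populated = 0;
	unsigned int i;

	mp = rte_mempool_create_empty(name, n, elt_size, cache_size,
				      0, socket_id, 0);
	if (mp == NULL)
		return NULL;

	if (rte_mempool_set_ops_byname(mp, "ring_mp_mc", NULL) < 0)
		goto fail;

	/* One memzone per page: each chunk is pg_sz bytes, pg_sz aligned,
	 * so every object is fully contained in a single page. */
	for (i = 0; populated < n; i++) {
		char mz_name[RTE_MEMZONE_NAMESIZE];
		const struct rte_memzone *mz;
		int ret;

		snprintf(mz_name, sizeof(mz_name), "%s_pg%u", name, i);
		mz = rte_memzone_reserve_aligned(mz_name, pg_sz, socket_id,
						 RTE_MEMZONE_IOVA_CONTIG,
						 pg_sz);
		if (mz == NULL)
			goto fail;

		ret = rte_mempool_populate_iova(mp, mz->addr, mz->iova,
						mz->len, NULL, NULL);
		if (ret <= 0) /* error, or no object fits in one page */
			goto fail;
		populated += ret;
	}

	return mp;

fail:
	rte_mempool_free(mp);
	return NULL;
}

Because each chunk is exactly one page, the no-crossing guarantee holds
regardless of what the default populate function does, which is why this
path would not need a new mempool flag.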