From: Andrew Rybchenko
To: Olivier Matz
CC: Anatoly Burakov, Ferruh Yigit, "Giridharan, Ganesan", Jerin Jacob Kollanukkaran, Kiran Kumar Kokkilagadda, Stephen Hemminger, Thomas Monjalon, Vamsi Krishna Attunuru
Date: Wed, 30 Oct 2019 10:44:04 +0300
In-Reply-To: <79e7adee-32c0-2559-3aec-6d311f3dddfa@solarflare.com>
References: <20190719133845.32432-1-olivier.matz@6wind.com>
 <20191028140122.9592-1-olivier.matz@6wind.com>
 <20191028140122.9592-4-olivier.matz@6wind.com>
 <24990d86-4ef8-4dce-113c-b824fe55e3f5@solarflare.com>
 <20191029172018.wvz57dbuhyy4bd4k@platinum>
 <79e7adee-32c0-2559-3aec-6d311f3dddfa@solarflare.com>
Subject: Re: [dpdk-dev] [PATCH 3/5] mempool: remove optimistic IOVA-contiguous allocation

On 10/30/19 10:36 AM, Andrew Rybchenko wrote:
> On 10/29/19 8:20 PM, Olivier Matz wrote:
>> On Tue, Oct 29, 2019 at 01:25:10PM +0300, Andrew Rybchenko wrote:
>>> On 10/28/19 5:01 PM, Olivier Matz wrote:
>>>> The previous commit reduced the amount of required memory when
>>>> populating the mempool with non iova-contiguous memory.
>>>>
>>>> Since there is no big advantage to have a fully iova-contiguous mempool
>>>> if it is not explicitly asked, remove this code, it simplifies the
>>>> populate function.
>>>>
>>>> Signed-off-by: Olivier Matz
>>> One comment below, other than that
>>> Reviewed-by: Andrew Rybchenko
>>>
>>>> ---
>>>>   lib/librte_mempool/rte_mempool.c | 47 ++++++--------------------------
>>>>   1 file changed, 8 insertions(+), 39 deletions(-)
>>>>
>>>> diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
>>>> index 3275e48c9..5e1f202dc 100644
>>>> --- a/lib/librte_mempool/rte_mempool.c
>>>> +++ b/lib/librte_mempool/rte_mempool.c
>>> [snip]
>>>
>>>> @@ -531,36 +521,15 @@ rte_mempool_populate_default(struct rte_mempool *mp)
>>>>                goto fail;
>>>>            }
>>>> -        flags = mz_flags;
>>>> -
>>>>            /* if we're trying to reserve contiguous memory, add appropriate
>>>>             * memzone flag.
>>>>             */
>>>> -        if (try_iova_contig_mempool)
>>>> -            flags |= RTE_MEMZONE_IOVA_CONTIG;
>>>> +        if (min_chunk_size == (size_t)mem_size)
>>>> +            mz_flags |= RTE_MEMZONE_IOVA_CONTIG;
>>>>            mz = rte_memzone_reserve_aligned(mz_name, mem_size,
>>>> -                mp->socket_id, flags, align);
>>>> -
>>>> -        /* if we were trying to allocate contiguous memory, failed and
>>>> -         * minimum required contiguous chunk fits minimum page, adjust
>>>> -         * memzone size to the page size, and try again.
>>>> -         */
>>>> -        if (mz == NULL && try_iova_contig_mempool &&
>>>> -                min_chunk_size <= pg_sz) {
>>>> -            try_iova_contig_mempool = false;
>>>> -            flags &= ~RTE_MEMZONE_IOVA_CONTIG;
>>>> -
>>>> -            mem_size = rte_mempool_ops_calc_mem_size(mp, n,
>>>> -                    pg_shift, &min_chunk_size, &align);
>>>> -            if (mem_size < 0) {
>>>> -                ret = mem_size;
>>>> -                goto fail;
>>>> -            }
>>>> +                mp->socket_id, mz_flags, align);
>>>> -            mz = rte_memzone_reserve_aligned(mz_name, mem_size,
>>>> -                mp->socket_id, flags, align);
>>>> -        }
>>>>            /* don't try reserving with 0 size if we were asked to reserve
>>>>             * IOVA-contiguous memory.
>>>>             */
>>> [snip]
>>>
>>>> @@ -587,7 +556,7 @@ rte_mempool_populate_default(struct rte_mempool *mp)
>>>>            else
>>>>                iova = RTE_BAD_IOVA;
>>>> -        if (try_iova_contig_mempool || pg_sz == 0)
>>>> +        if (pg_sz == 0)
>>> I think (mz_flags & RTE_MEMZONE_IOVA_CONTIG) is lost here.
>> Do you mean we should do instead
>>     if (pg_sz == 0 || (mz_flags & RTE_MEMZONE_IOVA_CONTIG))
>>         populate_iova()
>>     else
>>         populate_virt()
>> ?
>
> Yes.
>
>> I would say yes, but it should be removed in patch 5/5.
>> At the end, we don't want objects to be across pages, even
>> with RTE_MEMZONE_IOVA_CONTIG.

Thinking more about it, I don't understand why. If we know that the memzone
is IOVA-contiguous, why should we not populate it this way? It does not
prevent patch 5/5 from doing its job.

>>>>                ret = rte_mempool_populate_iova(mp, mz->addr,
>>>>                    iova, mz->len,
>>>>                    rte_mempool_memchunk_mz_free,
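
For reference, a minimal sketch of the populate step with the condition
Olivier spelled out above (illustrative only, not the exact code from the
patch; it assumes the surrounding context of rte_mempool_populate_default(),
i.e. that mp, mz, iova, pg_sz and mz_flags are already set up as in the hunks
quoted above, and the trailing free callback / opaque arguments are assumed to
match the existing populate_iova() call):

    /* Populate by IOVA in one go when the memzone was reserved as
     * IOVA-contiguous, or when page boundaries do not matter (pg_sz == 0);
     * otherwise populate by virtual address so that chunks are split on
     * page boundaries.
     */
    if (pg_sz == 0 || (mz_flags & RTE_MEMZONE_IOVA_CONTIG))
        ret = rte_mempool_populate_iova(mp, mz->addr,
                iova, mz->len,
                rte_mempool_memchunk_mz_free,
                (void *)(uintptr_t)mz);
    else
        ret = rte_mempool_populate_virt(mp, mz->addr,
                mz->len, pg_sz,
                rte_mempool_memchunk_mz_free,
                (void *)(uintptr_t)mz);
    if (ret < 0)
        goto fail;

Whether this IOVA-contiguous branch should stay once patch 5/5 prevents
objects from being across pages is the open question in this thread.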