Date: Tue, 12 May 2015 18:30:55 +0200
From: Olivier MATZ
To: Sergio Gonzalez Monroy, dev@dpdk.org
Subject: Re: [dpdk-dev] [RFC PATCH 0/2] dynamic memzones
Message-ID: <55522ABF.602@6wind.com>
In-Reply-To: <1431103079-18096-1-git-send-email-sergio.gonzalez.monroy@intel.com>

Hi Sergio,

On 05/08/2015 06:37 PM, Sergio Gonzalez Monroy wrote:
> Please NOTE that this series is meant to illustrate an idea/approach
> and to start discussion on the topic.
>
> The current implementation allows reserving/creating memzones but not
> the opposite (unreserve/delete). This affects mempools and other
> memzone-based objects.
>
> From my point of view, implementing unreserve functionality for
> memzones would look like malloc over memsegs.
> Thus, this approach moves malloc inside the EAL (which in turn removes
> a circular dependency), where malloc heaps are composed of memsegs.
> We keep both the malloc and memzone APIs as they are, but memzones
> allocate their memory by calling malloc_heap_alloc (there would be
> some ABI changes, see below).
> Some extra functionality is required in malloc to allow for
> boundary-constrained memory requests.
> In summary: currently malloc is based on memzones; with this approach,
> memzones are based on malloc.
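To make the layering concrete, here is a minimal sketch of what
reserve/unreserve could look like on top of a malloc heap. All types,
helper names and signatures below are assumed stand-ins for
illustration; they are not taken from the patches:

  /*
   * Hypothetical sketch only: memzone reserve/unreserve layered on
   * the EAL malloc heap, as described above. Every type and helper
   * here is an assumed stand-in, not the actual patch code.
   */
  #include <stddef.h>

  struct malloc_heap;                    /* per-socket heap in the EAL */
  struct rte_memzone { char name[32]; void *addr; size_t len; };

  /* assumed internal entry points */
  void *malloc_heap_alloc(struct malloc_heap *heap, size_t size,
                          unsigned int align);
  void rte_malloc_free(void *addr);
  struct malloc_heap *heap_for_socket(int socket_id);
  const struct rte_memzone *memzone_register(const char *name,
                                             void *addr, size_t len);
  const struct rte_memzone *memzone_lookup(const char *name);
  void memzone_unregister(const struct rte_memzone *mz);

  const struct rte_memzone *
  rte_memzone_reserve(const char *name, size_t len, int socket_id)
  {
          /* The backing memory now comes from the heap instead of a
           * dedicated memseg carved out at init time. */
          void *addr = malloc_heap_alloc(heap_for_socket(socket_id),
                                         len, 64);
          if (addr == NULL)
                  return NULL;
          return memzone_register(name, addr, len);
  }

  int
  rte_memzone_unreserve(const char *name)
  {
          const struct rte_memzone *mz = memzone_lookup(name);
          if (mz == NULL)
                  return -1;
          /* Unreserve becomes a plain free back into the heap. */
          rte_malloc_free(mz->addr);
          memzone_unregister(mz);
          return 0;
  }

A mempool_delete built on top of this (second item in the TODO list
below) would follow the same pattern: look up the memzone(s) backing
the pool and unreserve them.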
> An alternative would be to move the malloc internals (malloc_heap,
> malloc_elem) to the EAL while keeping the malloc library as is, where
> malloc is based on memzones. This way we could avoid ABI changes while
> keeping the existing circular dependency between malloc and the EAL.
>
> TODOs:
> - Implement memzone_unreserve, which simply calls rte_malloc_free.
> - Implement mempool_delete, which simply calls rte_memzone_unreserve.
> - Init heaps with all available memsegs at once.
> - Review the symbols in the version map.
>
> ABI changes:
> - Removed support for rte_memzone_reserve_xxxx with len=0 (not
>   needed?).
> - Removed librte_malloc as a single library (linker script as a
>   workaround?).
>
> IDEAS FOR FUTURE WORK:
> - More control over the requested memory, i.e. shared/private,
>   phys_contig, etc. One of the goals would be to reduce the need for
>   physically contiguous memory when it is not required.
> - Attach/detach hugepages at runtime (faster VM migration).
> - Improve the malloc algorithm? E.g. jemalloc (or any other).
>
> Any comments/thoughts and/or different approaches are welcome.

I like the idea and I don't see any issue with it in principle. It
will clearly help to have dynamic pools or rings.

(I didn't dive very deeply into the second patch; this is just a
high-level thought.)

Regards,
Olivier