Date: Mon, 19 Mar 2018 18:33:41 +0100
From: Olivier Matz
To: Anatoly Burakov
Cc: dev@dpdk.org, keith.wiles@intel.com, jianfeng.tan@intel.com,
 andras.kovacs@ericsson.com, laszlo.vadkeri@ericsson.com,
 benjamin.walker@intel.com, bruce.richardson@intel.com, thomas@monjalon.net,
 konstantin.ananyev@intel.com, kuralamudhan.ramakrishnan@intel.com,
 louise.m.daly@intel.com, nelio.laranjeiro@6wind.com, yskoh@mellanox.com,
 pepperjo@japf.ch, jerin.jacob@caviumnetworks.com, hemant.agrawal@nxp.com
Message-ID: <20180319173341.waytqvcr6blrgein@platinum>
In-Reply-To: <5c17cc4b8824b59ad817ba5253369f1e69bd04d3.1520083504.git.anatoly.burakov@intel.com>
Subject: Re: [dpdk-dev] [PATCH 03/41] eal: make malloc heap a doubly-linked list

On Sat, Mar 03, 2018 at 01:45:51PM +0000, Anatoly Burakov wrote:
> As we are preparing for dynamic memory allocation, we need to be
> able to handle holes in our malloc heap, hence we're switching to
> a doubly linked list, and preparing the infrastructure to support it.
>
> Since our heap is now aware of where its first and last elements are,
> there is no longer any need to have a dummy element at the end of
> each heap, so get rid of that as well. Instead, let insert/remove/
> join/split operations handle end-of-list conditions automatically.
>
> Signed-off-by: Anatoly Burakov
> ---
>  lib/librte_eal/common/include/rte_malloc_heap.h |   6 +
>  lib/librte_eal/common/malloc_elem.c             | 200 +++++++++++++++++++-----
>  lib/librte_eal/common/malloc_elem.h             |  14 +-
>  lib/librte_eal/common/malloc_heap.c             |   8 +-
>  4 files changed, 179 insertions(+), 49 deletions(-)
>
> diff --git a/lib/librte_eal/common/include/rte_malloc_heap.h b/lib/librte_eal/common/include/rte_malloc_heap.h
> index ba99ed9..9ec4b62 100644
> --- a/lib/librte_eal/common/include/rte_malloc_heap.h
> +++ b/lib/librte_eal/common/include/rte_malloc_heap.h
> @@ -13,12 +13,18 @@
>  /* Number of free lists per heap, grouped by size. */
>  #define RTE_HEAP_NUM_FREELISTS 13
>
> +/* dummy definition, for pointers */
> +struct malloc_elem;
> +
>  /**
>   * Structure to hold malloc heap
>   */
>  struct malloc_heap {
>          rte_spinlock_t lock;
>          LIST_HEAD(, malloc_elem) free_head[RTE_HEAP_NUM_FREELISTS];
> +        struct malloc_elem *first;
> +        struct malloc_elem *last;
> +
>          unsigned alloc_count;
>          size_t total_size;
>  } __rte_cache_aligned;
>
> diff --git a/lib/librte_eal/common/malloc_elem.c b/lib/librte_eal/common/malloc_elem.c
> index ea041e2..eb41200 100644
> --- a/lib/librte_eal/common/malloc_elem.c
> +++ b/lib/librte_eal/common/malloc_elem.c
> @@ -31,6 +31,7 @@ malloc_elem_init(struct malloc_elem *elem,
>          elem->heap = heap;
>          elem->ms = ms;
>          elem->prev = NULL;
> +        elem->next = NULL;
>          memset(&elem->free_list, 0, sizeof(elem->free_list));
>          elem->state = ELEM_FREE;
>          elem->size = size;
> @@ -39,15 +40,56 @@ malloc_elem_init(struct malloc_elem *elem,
>          set_trailer(elem);
>  }
>
> -/*
> - * Initialize a dummy malloc_elem header for the end-of-memseg marker
> - */
>  void
> -malloc_elem_mkend(struct malloc_elem *elem, struct malloc_elem *prev)
> +malloc_elem_insert(struct malloc_elem *elem)
>  {
> -        malloc_elem_init(elem, prev->heap, prev->ms, 0);
> -        elem->prev = prev;
> -        elem->state = ELEM_BUSY; /* mark busy so its never merged */
> +        struct malloc_elem *prev_elem, *next_elem;
> +        struct malloc_heap *heap = elem->heap;
> +
> +        if (heap->first == NULL && heap->last == NULL) {
> +                /* if empty heap */
> +                heap->first = elem;
> +                heap->last = elem;
> +                prev_elem = NULL;
> +                next_elem = NULL;
> +        } else if (elem < heap->first) {
> +                /* if lower than start */
> +                prev_elem = NULL;
> +                next_elem = heap->first;
> +                heap->first = elem;
> +        } else if (elem > heap->last) {
> +                /* if higher than end */
> +                prev_elem = heap->last;
> +                next_elem = NULL;
> +                heap->last = elem;
> +        } else {
> +                /* the new memory is somewhere inbetween start and end */
> +                uint64_t dist_from_start, dist_from_end;
> +
> +                dist_from_end = RTE_PTR_DIFF(heap->last, elem);
> +                dist_from_start = RTE_PTR_DIFF(elem, heap->first);
> +
> +                /* check which is closer, and find closest list entries */
> +                if (dist_from_start < dist_from_end) {
> +                        prev_elem = heap->first;
> +                        while (prev_elem->next < elem)
> +                                prev_elem = prev_elem->next;
> +                        next_elem = prev_elem->next;
> +                } else {
> +                        next_elem = heap->last;
> +                        while (next_elem->prev > elem)
> +                                next_elem = next_elem->prev;
> +                        prev_elem = next_elem->prev;
> +                }
> +        }
> +
> +        /* insert new element */
> +        elem->prev = prev_elem;
> +        elem->next = next_elem;
> +        if (prev_elem)
> +                prev_elem->next = elem;
> +        if (next_elem)
> +                next_elem->prev = elem;
>  }

Would it be possible here to use a TAILQ? If yes, it could be easier to read.
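
For instance, a rough sketch (untested) of what an address-sorted insert could
look like with a TAILQ from <sys/queue.h>. The tailq_entry and elem_list field
names are made up, the structs are trimmed to what the list needs, and this
version drops your "start from the closest end" optimization (a
TAILQ_FOREACH_REVERSE walk could bring part of it back):

  #include <sys/queue.h>

  struct malloc_elem {
          TAILQ_ENTRY(malloc_elem) tailq_entry; /* replaces prev/next */
          /* ... other fields ... */
  };

  TAILQ_HEAD(malloc_elem_list, malloc_elem);

  struct malloc_heap {
          struct malloc_elem_list elem_list; /* replaces first/last */
          /* ... other fields ... */
  };

  static void
  malloc_elem_insert_tailq(struct malloc_heap *heap, struct malloc_elem *elem)
  {
          struct malloc_elem *cur;

          /* keep the list sorted by address: stop at the first
           * element located after the new one */
          TAILQ_FOREACH(cur, &heap->elem_list, tailq_entry) {
                  if (cur > elem) {
                          TAILQ_INSERT_BEFORE(cur, elem, tailq_entry);
                          return;
                  }
          }
          /* empty list, or elem has the highest address */
          TAILQ_INSERT_TAIL(&heap->elem_list, elem, tailq_entry);
  }

The head would need a TAILQ_INIT() when the heap is set up, and
heap->first/heap->last would become TAILQ_FIRST()/TAILQ_LAST(), so most of the
end-of-list special cases in insert/remove/join should disappear.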