* [PATCH] common/mlx5: Optimize mlx5 mempool get extmem
@ 2023-09-26 20:59 Aaron Conole
  2023-10-10 14:38 ` [PATCH v2] " Aaron Conole
  0 siblings, 1 reply; 6+ messages in thread
From: Aaron Conole @ 2023-09-26 20:59 UTC (permalink / raw)
  To: dev
  Cc: John Romein, Dmitry Kozlyuk, Matan Azrad, Viacheslav Ovsiienko,
	Ori Kam, Suanming Mou

From: John Romein <romein@astron.nl>

This patch reduces the time to allocate and register tens of gigabytes
of GPU memory from hours to seconds, by sorting the heap only once
instead of for each object in the mempool.

Fixes: 690b2a8 ("common/mlx5: add mempool registration facilities")

Signed-off-by: John Romein <romein@astron.nl>
---
NOTE: this is a post of https://github.com/DPDK/dpdk/pull/70 to the
mailing list.

 drivers/common/mlx5/mlx5_common_mr.c | 69 ++++++++--------------------
 1 file changed, 20 insertions(+), 49 deletions(-)

diff --git a/drivers/common/mlx5/mlx5_common_mr.c b/drivers/common/mlx5/mlx5_common_mr.c
index 40ff9153bd8..3f95438045f 100644
--- a/drivers/common/mlx5/mlx5_common_mr.c
+++ b/drivers/common/mlx5/mlx5_common_mr.c
@@ -1389,63 +1389,23 @@ mlx5_mempool_get_chunks(struct rte_mempool *mp, struct mlx5_range **out,
 	return 0;
 }
 
-struct mlx5_mempool_get_extmem_data {
-	struct mlx5_range *heap;
-	unsigned int heap_size;
-	int ret;
-};
-
 static void
 mlx5_mempool_get_extmem_cb(struct rte_mempool *mp, void *opaque,
 			   void *obj, unsigned int obj_idx)
 {
-	struct mlx5_mempool_get_extmem_data *data = opaque;
+	struct mlx5_range *heap = opaque;
 	struct rte_mbuf *mbuf = obj;
 	uintptr_t addr = (uintptr_t)mbuf->buf_addr;
-	struct mlx5_range *seg, *heap;
 	struct rte_memseg_list *msl;
 	size_t page_size;
 	uintptr_t page_start;
-	unsigned int pos = 0, len = data->heap_size, delta;
 
 	RTE_SET_USED(mp);
-	RTE_SET_USED(obj_idx);
-	if (data->ret < 0)
-		return;
-	/* Binary search for an already visited page. */
-	while (len > 1) {
-		delta = len / 2;
-		if (addr < data->heap[pos + delta].start) {
-			len = delta;
-		} else {
-			pos += delta;
-			len -= delta;
-		}
-	}
-	if (data->heap != NULL) {
-		seg = &data->heap[pos];
-		if (seg->start <= addr && addr < seg->end)
-			return;
-	}
-	/* Determine the page boundaries and remember them. */
-	heap = realloc(data->heap, sizeof(heap[0]) * (data->heap_size + 1));
-	if (heap == NULL) {
-		free(data->heap);
-		data->heap = NULL;
-		data->ret = -1;
-		return;
-	}
-	data->heap = heap;
-	data->heap_size++;
-	seg = &heap[data->heap_size - 1];
 	msl = rte_mem_virt2memseg_list((void *)addr);
 	page_size = msl != NULL ? msl->page_sz : rte_mem_page_size();
 	page_start = RTE_PTR_ALIGN_FLOOR(addr, page_size);
-	seg->start = page_start;
-	seg->end = page_start + page_size;
-	/* Maintain the heap order. */
-	qsort(data->heap, data->heap_size, sizeof(heap[0]),
-	      mlx5_range_compare_start);
+	heap[obj_idx].start = page_start;
+	heap[obj_idx].end = page_start + page_size;
 }
 
 /**
@@ -1457,15 +1417,26 @@ static int
 mlx5_mempool_get_extmem(struct rte_mempool *mp, struct mlx5_range **out,
 			unsigned int *out_n)
 {
-	struct mlx5_mempool_get_extmem_data data;
+	unsigned out_size = 1;
+	struct mlx5_range *heap;
 
 	DRV_LOG(DEBUG, "Recovering external pinned pages of mempool %s",
 		mp->name);
-	memset(&data, 0, sizeof(data));
-	rte_mempool_obj_iter(mp, mlx5_mempool_get_extmem_cb, &data);
-	*out = data.heap;
-	*out_n = data.heap_size;
-	return data.ret;
+	heap = malloc(mp->size * sizeof(struct mlx5_range));
+	if (heap == NULL)
+		return -1;
+	rte_mempool_obj_iter(mp, mlx5_mempool_get_extmem_cb, heap);
+	qsort(heap, mp->size, sizeof heap[0], mlx5_range_compare_start);
+	/* remove duplicates */
+	for (unsigned i = 1; i < mp->size; i ++)
+		if (heap[out_size - 1].start != heap[i].start)
+			heap[out_size ++] = heap[i];
+	heap = realloc(heap, out_size * sizeof(struct mlx5_range));
+	if (heap == NULL)
+		return -1;
+	*out = heap;
+	*out_n = out_size;
+	return 0;
 }
 
 /**
-- 
2.40.1
* [PATCH v2] common/mlx5: Optimize mlx5 mempool get extmem
  2023-09-26 20:59 [PATCH] common/mlx5: Optimize mlx5 mempool get extmem Aaron Conole
@ 2023-10-10 14:38 ` Aaron Conole
  2023-11-01  8:29 ` Slava Ovsiienko
  0 siblings, 1 reply; 6+ messages in thread
From: Aaron Conole @ 2023-10-10 14:38 UTC (permalink / raw)
  To: dev
  Cc: John Romein, Raslan Darawsheh, Elena Agostini, Dmitry Kozlyuk,
	Matan Azrad, Viacheslav Ovsiienko, Ori Kam, Suanming Mou

From: John Romein <romein@astron.nl>

This patch reduces the time to allocate and register tens of gigabytes
of GPU memory from hours to seconds, by sorting the heap only once
instead of for each object in the mempool.

Fixes: 690b2a88c2f7 ("common/mlx5: add mempool registration facilities")

Signed-off-by: John Romein <romein@astron.nl>
---
 drivers/common/mlx5/mlx5_common_mr.c | 69 ++++++++--------------------
 1 file changed, 20 insertions(+), 49 deletions(-)

diff --git a/drivers/common/mlx5/mlx5_common_mr.c b/drivers/common/mlx5/mlx5_common_mr.c
index 40ff9153bd..77b66e444b 100644
--- a/drivers/common/mlx5/mlx5_common_mr.c
+++ b/drivers/common/mlx5/mlx5_common_mr.c
@@ -1389,63 +1389,23 @@ mlx5_mempool_get_chunks(struct rte_mempool *mp, struct mlx5_range **out,
 	return 0;
 }
 
-struct mlx5_mempool_get_extmem_data {
-	struct mlx5_range *heap;
-	unsigned int heap_size;
-	int ret;
-};
-
 static void
 mlx5_mempool_get_extmem_cb(struct rte_mempool *mp, void *opaque,
 			   void *obj, unsigned int obj_idx)
 {
-	struct mlx5_mempool_get_extmem_data *data = opaque;
+	struct mlx5_range *heap = opaque;
 	struct rte_mbuf *mbuf = obj;
 	uintptr_t addr = (uintptr_t)mbuf->buf_addr;
-	struct mlx5_range *seg, *heap;
 	struct rte_memseg_list *msl;
 	size_t page_size;
 	uintptr_t page_start;
-	unsigned int pos = 0, len = data->heap_size, delta;
 
 	RTE_SET_USED(mp);
-	RTE_SET_USED(obj_idx);
-	if (data->ret < 0)
-		return;
-	/* Binary search for an already visited page. */
-	while (len > 1) {
-		delta = len / 2;
-		if (addr < data->heap[pos + delta].start) {
-			len = delta;
-		} else {
-			pos += delta;
-			len -= delta;
-		}
-	}
-	if (data->heap != NULL) {
-		seg = &data->heap[pos];
-		if (seg->start <= addr && addr < seg->end)
-			return;
-	}
-	/* Determine the page boundaries and remember them. */
-	heap = realloc(data->heap, sizeof(heap[0]) * (data->heap_size + 1));
-	if (heap == NULL) {
-		free(data->heap);
-		data->heap = NULL;
-		data->ret = -1;
-		return;
-	}
-	data->heap = heap;
-	data->heap_size++;
-	seg = &heap[data->heap_size - 1];
 	msl = rte_mem_virt2memseg_list((void *)addr);
 	page_size = msl != NULL ? msl->page_sz : rte_mem_page_size();
 	page_start = RTE_PTR_ALIGN_FLOOR(addr, page_size);
-	seg->start = page_start;
-	seg->end = page_start + page_size;
-	/* Maintain the heap order. */
-	qsort(data->heap, data->heap_size, sizeof(heap[0]),
-	      mlx5_range_compare_start);
+	heap[obj_idx].start = page_start;
+	heap[obj_idx].end = page_start + page_size;
 }
 
 /**
@@ -1457,15 +1417,26 @@ static int
 mlx5_mempool_get_extmem(struct rte_mempool *mp, struct mlx5_range **out,
 			unsigned int *out_n)
 {
-	struct mlx5_mempool_get_extmem_data data;
+	unsigned int out_size = 1;
+	struct mlx5_range *heap;
 
 	DRV_LOG(DEBUG, "Recovering external pinned pages of mempool %s",
 		mp->name);
-	memset(&data, 0, sizeof(data));
-	rte_mempool_obj_iter(mp, mlx5_mempool_get_extmem_cb, &data);
-	*out = data.heap;
-	*out_n = data.heap_size;
-	return data.ret;
+	heap = malloc(mp->size * sizeof(struct mlx5_range));
+	if (heap == NULL)
+		return -1;
+	rte_mempool_obj_iter(mp, mlx5_mempool_get_extmem_cb, heap);
+	qsort(heap, mp->size, sizeof(heap[0]), mlx5_range_compare_start);
+	/* remove duplicates */
+	for (unsigned int i = 1; i < mp->size; i++)
+		if (heap[out_size - 1].start != heap[i].start)
+			heap[out_size++] = heap[i];
+	heap = realloc(heap, out_size * sizeof(struct mlx5_range));
+	if (heap == NULL)
+		return -1;
+	*out = heap;
+	*out_n = out_size;
+	return 0;
 }
 
 /**
-- 
2.41.0
* RE: [PATCH v2] common/mlx5: Optimize mlx5 mempool get extmem
  2023-10-10 14:38 ` [PATCH v2] " Aaron Conole
@ 2023-11-01  8:29 ` Slava Ovsiienko
  2023-11-01 21:21 ` John Romein
  0 siblings, 1 reply; 6+ messages in thread
From: Slava Ovsiienko @ 2023-11-01 8:29 UTC (permalink / raw)
  To: Aaron Conole, dev
  Cc: John Romein, Raslan Darawsheh, Elena Agostini, Dmitry Kozlyuk,
	Matan Azrad, Ori Kam, Suanming Mou

Hi,

Thank you for this optimizing patch.
My concern is this line:

> +	heap = malloc(mp->size * sizeof(struct mlx5_range));

The pool size can be huge, and this might cause a large memory
allocation (on the host CPU side).

What is the reason it takes "hours" to register? Reallocs for each pool
element? The mp struct has a "struct rte_mempool_memhdr_list mem_list"
member. I think we should consider populating this list with data from
"struct rte_pktmbuf_extmem *ext_mem" on pool creation.

It seems the rte_mempool_mem_iter() functionality is completely broken
for pools with external memory, and that's why mlx5 implemented the
dedicated branch to handle their registration.

With best regards,
Slava

> -----Original Message-----
> From: Aaron Conole <aconole@redhat.com>
> Sent: Tuesday, October 10, 2023 5:38 PM
> To: dev@dpdk.org
> Cc: John Romein <romein@astron.nl>; Raslan Darawsheh
> <rasland@nvidia.com>; Elena Agostini <eagostini@nvidia.com>; Dmitry
> Kozlyuk <dkozlyuk@nvidia.com>; Matan Azrad <matan@nvidia.com>; Slava
> Ovsiienko <viacheslavo@nvidia.com>; Ori Kam <orika@nvidia.com>;
> Suanming Mou <suanmingm@nvidia.com>
> Subject: [PATCH v2] common/mlx5: Optimize mlx5 mempool get extmem
>
> From: John Romein <romein@astron.nl>
>
> This patch reduces the time to allocate and register tens of gigabytes
> of GPU memory from hours to seconds, by sorting the heap only once
> instead of for each object in the mempool.
>
> Fixes: 690b2a88c2f7 ("common/mlx5: add mempool registration facilities")
>
> Signed-off-by: John Romein <romein@astron.nl>
> [...]
* Re: [PATCH v2] common/mlx5: Optimize mlx5 mempool get extmem
  2023-11-01  8:29 ` Slava Ovsiienko
@ 2023-11-01 21:21 ` John Romein
  2024-10-04 22:16 ` Stephen Hemminger
  0 siblings, 1 reply; 6+ messages in thread
From: John Romein @ 2023-11-01 21:21 UTC (permalink / raw)
  To: Slava Ovsiienko, Aaron Conole, dev
  Cc: Raslan Darawsheh, Elena Agostini, Dmitry Kozlyuk, Matan Azrad,
	Ori Kam, Suanming Mou

Dear Slava,

Thank you for looking at the patch. With the original code, I saw that
the application spent literally hours in this function during program
start-up when tens of gigabytes of GPU memory were registered. This was
due to qsort being invoked for every newly added item (to keep the list
sorted). So I tried to write equivalent code that sorts the list only
once, after all items have been added. At least for our application this
works well and is /much/ faster, as the complexity decreased from
O(n^2 log n) to O(n log n). But I must admit that I have no idea /what/
is being sorted, or why; I only understand this isolated piece of code
(or at least I think so). So if you think there are better ways to
initialize the list, I am sure you are right, but I will not be able to
implement them, as I do not understand the full context of the code.

Kind regards, John

On 01-11-2023 09:29, Slava Ovsiienko wrote:
> Hi,
>
> Thank you for this optimizing patch.
> My concern is this line:
>> +	heap = malloc(mp->size * sizeof(struct mlx5_range));
> The pool size can be huge and it might cause the large memory allocation
> (on host CPU side).
>
> What is the reason causing "hours" of registering? Reallocs per each pool element?
> The mp struct has "struct rte_mempool_memhdr_list mem_list" member.
> I think we should consider populating this list with data from
> "struct rte_pktmbuf_extmem *ext_mem" on pool creation.
>
> Because of it seems the rte_mempool_mem_iter() functionality is
> completely broken for the pools with external memory, and that's why
> mlx5 implemented the dedicated branch to handle their registration.
>
> With best regards,
> Slava
>
> [...]
* Re: [PATCH v2] common/mlx5: Optimize mlx5 mempool get extmem
  2023-11-01 21:21 ` John Romein
@ 2024-10-04 22:16 ` Stephen Hemminger
  2024-10-07 12:47 ` John Romein
  0 siblings, 1 reply; 6+ messages in thread
From: Stephen Hemminger @ 2024-10-04 22:16 UTC (permalink / raw)
  To: John Romein
  Cc: Slava Ovsiienko, Aaron Conole, dev, Raslan Darawsheh,
	Elena Agostini, Dmitry Kozlyuk, Matan Azrad, Ori Kam, Suanming Mou

On Wed, 1 Nov 2023 22:21:16 +0100
John Romein <romein@astron.nl> wrote:

> Dear Slava,
>
> Thank you for looking at the patch. With the original code, I saw that
> the application spent literally hours in this function during program
> start up, if tens of gigabytes of GPU memory are registered. This was
> due to qsort being invoked for every new added item (to keep the list
> sorted). So I tried to write equivalent code that sorts the list only
> once, after all items were added. At least for our application, this
> works well and is /much/ faster, as the complexity decreased from n^2
> log(n) to n log(n). [...]
>
> Kind Regards, John

It looks like the problem remains, but the patch has been sitting around
for 11 months. Was this resolved?
* Re: [PATCH v2] common/mlx5: Optimize mlx5 mempool get extmem
  2024-10-04 22:16 ` Stephen Hemminger
@ 2024-10-07 12:47 ` John Romein
  0 siblings, 0 replies; 6+ messages in thread
From: John Romein @ 2024-10-07 12:47 UTC (permalink / raw)
  To: Stephen Hemminger
  Cc: Slava Ovsiienko, Aaron Conole, dev, Raslan Darawsheh,
	Elena Agostini, Dmitry Kozlyuk, Matan Azrad, Ori Kam, Suanming Mou

Dear Stephen,

The problem has not been solved, but I found a workaround. According to
the documentation (https://doc.dpdk.org/guides/prog_guide/gpudev.html,
sec. 11.3), rte_extmem_register should be invoked with GPU_PAGE_SIZE as
an argument. If GPU_PAGE_SIZE is set to 2 MB instead of 64 kB,
registration of 72 GB of GPU memory (on a Grace Hopper) takes about ten
seconds, not hours.

    rte_extmem_register(ext_mem.buf_ptr, ext_mem.buf_len, NULL,
                        ext_mem.buf_iova, GPU_PAGE_SIZE);

Thanks, John Romein

On 05-10-2024 00:16, Stephen Hemminger wrote:
> On Wed, 1 Nov 2023 22:21:16 +0100
> John Romein <romein@astron.nl> wrote:
>
>> [...]
>>
>> Kind Regards, John
>
> Looks like the problem remains but patch has been sitting around for 11 months.
> Was this resolved?