From: "lihuisong (C)" <lihuisong@huawei.com>
To: <dev@dpdk.org>
Cc: <thomas@monjalon.net>, <david.hunt@intel.com>,
<anatoly.burakov@intel.com>, <sivaprasad.tummala@amd.com>,
<liuyonglong@huawei.com>,
Stephen Hemminger <stephen@networkplumber.org>
Subject: Re: [PATCH] power: use hugepage memory for queue list entry structure
Date: Thu, 20 Feb 2025 17:01:53 +0800 [thread overview]
Message-ID: <01d163c6-6d18-03e8-ac67-e7907d27bd08@huawei.com> (raw)
In-Reply-To: <20241219075319.8874-1-lihuisong@huawei.com>
Hi all,
Kindly ping for review.
On 2024/12/19 15:53, Huisong Li wrote:
> The queue_list_entry structure is used in the Rx callback on the I/O
> path when PMD power management is enabled. However, its memory is
> currently allocated from the normal heap. For better performance, use
> hugepage memory for it instead.
>
> Signed-off-by: Huisong Li <lihuisong@huawei.com>
> ---
> lib/power/rte_power_pmd_mgmt.c | 10 +++++-----
> 1 file changed, 5 insertions(+), 5 deletions(-)
>
> diff --git a/lib/power/rte_power_pmd_mgmt.c b/lib/power/rte_power_pmd_mgmt.c
> index a2fff3b765..c7bf57a910 100644
> --- a/lib/power/rte_power_pmd_mgmt.c
> +++ b/lib/power/rte_power_pmd_mgmt.c
> @@ -97,7 +97,7 @@ queue_list_find(const struct pmd_core_cfg *cfg, const union queue *q)
>  }
>  
>  static int
> -queue_list_add(struct pmd_core_cfg *cfg, const union queue *q)
> +queue_list_add(struct pmd_core_cfg *cfg, const union queue *q, unsigned int lcore_id)
>  {
>  	struct queue_list_entry *qle;
>  
> @@ -105,10 +105,10 @@ queue_list_add(struct pmd_core_cfg *cfg, const union queue *q)
>  	if (queue_list_find(cfg, q) != NULL)
>  		return -EEXIST;
>  
> -	qle = malloc(sizeof(*qle));
> +	qle = rte_zmalloc_socket(NULL, sizeof(*qle), RTE_CACHE_LINE_SIZE,
> +				 rte_lcore_to_socket_id(lcore_id));
>  	if (qle == NULL)
>  		return -ENOMEM;
> -	memset(qle, 0, sizeof(*qle));
>  
>  	queue_copy(&qle->queue, q);
>  	TAILQ_INSERT_TAIL(&cfg->head, qle, next);
> @@ -570,7 +570,7 @@ rte_power_ethdev_pmgmt_queue_enable(unsigned int lcore_id, uint16_t port_id,
>  		goto end;
>  	}
>  	/* add this queue to the list */
> -	ret = queue_list_add(lcore_cfg, &qdata);
> +	ret = queue_list_add(lcore_cfg, &qdata, lcore_id);
>  	if (ret < 0) {
>  		POWER_LOG(DEBUG, "Failed to add queue to list: %s",
>  				strerror(-ret));
> @@ -664,7 +664,7 @@ rte_power_ethdev_pmgmt_queue_disable(unsigned int lcore_id,
>  	 * callbacks can be freed. we're intentionally casting away const-ness.
>  	 */
>  	rte_free((void *)(uintptr_t)queue_cfg->cb);
> -	free(queue_cfg);
> +	rte_free(queue_cfg);
>  
>  	return 0;
>  }
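
For reference, here is a minimal standalone sketch of the allocation pattern the patch switches to: a zeroed, cache-line-aligned object allocated from DPDK hugepage memory on the NUMA socket of the polling lcore, later released with rte_free(). The struct and function names below are illustrative only, not taken from the patch.

/* Illustrative sketch (not part of the patch): allocate a zeroed,
 * cache-line-aligned object from hugepage memory on the NUMA node of a
 * given lcore, then release it with rte_free().
 */
#include <stdint.h>

#include <rte_common.h>
#include <rte_eal.h>
#include <rte_lcore.h>
#include <rte_malloc.h>

/* Hypothetical per-queue bookkeeping structure. */
struct example_entry {
	uint64_t empty_polls;
};

static struct example_entry *
entry_alloc(unsigned int lcore_id)
{
	/* rte_zmalloc_socket() returns zeroed memory, so no memset() is
	 * needed, and the socket argument keeps the allocation local to
	 * the lcore that will poll the queue.
	 */
	return rte_zmalloc_socket(NULL, sizeof(struct example_entry),
			RTE_CACHE_LINE_SIZE,
			rte_lcore_to_socket_id(lcore_id));
}

int
main(int argc, char **argv)
{
	struct example_entry *e;

	if (rte_eal_init(argc, argv) < 0)
		return -1;

	e = entry_alloc(rte_lcore_id());
	if (e != NULL)
		rte_free(e);

	return rte_eal_cleanup();
}

The main difference from plain malloc()/free() is that the memory comes from the EAL hugepage heaps, so the object can be placed on the socket where it will actually be accessed from the Rx callback.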
Thread overview: 8+ messages
2024-12-19 7:53 Huisong Li
2025-02-20 9:01 ` lihuisong (C) [this message]
2025-02-20 9:41 ` Konstantin Ananyev
2025-02-20 16:11 ` Stephen Hemminger
2025-02-20 16:39 ` Konstantin Ananyev
2025-02-20 16:45 ` Stephen Hemminger
2025-02-20 16:58 ` Konstantin Ananyev
2025-02-21 11:21 ` Burakov, Anatoly