DPDK patches and discussions
From: "lihuisong (C)" <lihuisong@huawei.com>
To: Konstantin Ananyev <konstantin.ananyev@huawei.com>
Cc: "dev@dpdk.org" <dev@dpdk.org>,
	"thomas@monjalon.net" <thomas@monjalon.net>,
	"david.hunt@intel.com" <david.hunt@intel.com>,
	"anatoly.burakov@intel.com" <anatoly.burakov@intel.com>,
	"sivaprasad.tummala@amd.com" <sivaprasad.tummala@amd.com>,
	liuyonglong <liuyonglong@huawei.com>,
	Stephen Hemminger <stephen@networkplumber.org>
Subject: Re: [PATCH] power: use hugepage memory for queue list entry structure
Date: Mon, 24 Feb 2025 20:47:44 +0800
Message-ID: <c51b6c63-9898-b5e3-7531-97414a12f240@huawei.com>
In-Reply-To: <cebfbfb33329429a9a7e28ec717f80cf@huawei.com>


On 2025/2/24 19:12, Konstantin Ananyev wrote:
>
>> On 2025/2/20 17:41, Konstantin Ananyev wrote:
>>> Hi
>>>
>>>> Hi all,
>>>>
>>>> Kindly ping for review.
>>>>
>>>>
>>>> On 2024/12/19 15:53, Huisong Li wrote:
>>>>> The queue_list_entry structure is used in the Rx callback on the I/O path
>>>>> when PMD power management is enabled. However, its memory currently comes
>>>>> from the normal heap. For better performance, allocate it from hugepage
>>>>> memory instead.
>>> Makes sense to me.
>>> Acked-by: Konstantin Ananyev <konstantin.ananyev@huawei.com>
>>>
>>> I suppose it would also help if you could provide some numbers,
>>> i.e. how much exactly is it 'better'?
>>> Did you see any changes in throughput/latency numbers, etc.?
>> This patch was based purely on my knowledge of DPDK.
>>
>> I don't know what a good way to evaluate the performance of l3fwd-power is.
>>
>> But I did run a test after you suggested it.
>>
>> I found that the throughput with malloc was better than with rte_malloc in
>> the continuous packet flow case. 😮
>>
>> Can you test this patch on your platform?
> I did a quick test and didn't see any difference in performance (packet flooding) with or without the patch.
Thanks for testing. So let's drop this patch.
>   
>>>>> Signed-off-by: Huisong Li <lihuisong@huawei.com>
>>>>> ---
>>>>>     lib/power/rte_power_pmd_mgmt.c | 10 +++++-----
>>>>>     1 file changed, 5 insertions(+), 5 deletions(-)
>>>>>
>>>>> diff --git a/lib/power/rte_power_pmd_mgmt.c b/lib/power/rte_power_pmd_mgmt.c
>>>>> index a2fff3b765..c7bf57a910 100644
>>>>> --- a/lib/power/rte_power_pmd_mgmt.c
>>>>> +++ b/lib/power/rte_power_pmd_mgmt.c
>>>>> @@ -97,7 +97,7 @@ queue_list_find(const struct pmd_core_cfg *cfg, const union queue *q)
>>>>>     }
>>>>>
>>>>>     static int
>>>>> -queue_list_add(struct pmd_core_cfg *cfg, const union queue *q)
>>>>> +queue_list_add(struct pmd_core_cfg *cfg, const union queue *q, unsigned int lcore_id)
>>>>>     {
>>>>>     	struct queue_list_entry *qle;
>>>>>
>>>>> @@ -105,10 +105,10 @@ queue_list_add(struct pmd_core_cfg *cfg, const union queue *q)
>>>>>     	if (queue_list_find(cfg, q) != NULL)
>>>>>     		return -EEXIST;
>>>>>
>>>>> -	qle = malloc(sizeof(*qle));
>>>>> +	qle = rte_zmalloc_socket(NULL, sizeof(*qle), RTE_CACHE_LINE_SIZE,
>>>>> +				 rte_lcore_to_socket_id(lcore_id));
>>>>>     	if (qle == NULL)
>>>>>     		return -ENOMEM;
>>>>> -	memset(qle, 0, sizeof(*qle));
>>>>>
>>>>>     	queue_copy(&qle->queue, q);
>>>>>     	TAILQ_INSERT_TAIL(&cfg->head, qle, next);
>>>>> @@ -570,7 +570,7 @@ rte_power_ethdev_pmgmt_queue_enable(unsigned int lcore_id, uint16_t port_id,
>>>>>     		goto end;
>>>>>     	}
>>>>>     	/* add this queue to the list */
>>>>> -	ret = queue_list_add(lcore_cfg, &qdata);
>>>>> +	ret = queue_list_add(lcore_cfg, &qdata, lcore_id);
>>>>>     	if (ret < 0) {
>>>>>     		POWER_LOG(DEBUG, "Failed to add queue to list: %s",
>>>>>     				strerror(-ret));
>>>>> @@ -664,7 +664,7 @@ rte_power_ethdev_pmgmt_queue_disable(unsigned int lcore_id,
>>>>>     	 * callbacks can be freed. we're intentionally casting away const-ness.
>>>>>     	 */
>>>>>     	rte_free((void *)(uintptr_t)queue_cfg->cb);
>>>>> -	free(queue_cfg);
>>>>> +	rte_free(queue_cfg);
>>>>>
>>>>>     	return 0;
>>>>>     }
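
For readers not familiar with the DPDK allocator API discussed above, here is a minimal, standalone sketch of the allocation pattern the quoted patch applies: a per-lcore entry placed in NUMA-local, hugepage-backed memory with rte_zmalloc_socket() and released with rte_free(). The struct and function names (example_entry, example_entry_alloc, example_entry_free) are hypothetical and exist only for illustration; they are not part of the patch.

#include <stdint.h>

#include <rte_common.h>
#include <rte_lcore.h>
#include <rte_malloc.h>

/* Hypothetical per-lcore entry, standing in for struct queue_list_entry. */
struct example_entry {
	uint64_t n_pkts;
};

static struct example_entry *
example_entry_alloc(unsigned int lcore_id)
{
	/*
	 * Zeroed, cache-line-aligned allocation from hugepage-backed memory
	 * on the NUMA node that hosts the given lcore, instead of
	 * malloc() + memset() from the libc heap.
	 */
	return rte_zmalloc_socket(NULL, sizeof(struct example_entry),
				  RTE_CACHE_LINE_SIZE,
				  rte_lcore_to_socket_id(lcore_id));
}

static void
example_entry_free(struct example_entry *entry)
{
	/* Memory from rte_zmalloc_socket() must be freed with rte_free(). */
	rte_free(entry);
}

Note that memory obtained from rte_zmalloc_socket() must be released with rte_free(), never with libc free(); this is why the patch also changes free(queue_cfg) to rte_free(queue_cfg).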

Thread overview: 11+ messages
2024-12-19  7:53 Huisong Li
2025-02-20  9:01 ` lihuisong (C)
2025-02-20  9:41   ` Konstantin Ananyev
2025-02-24  9:23     ` lihuisong (C)
2025-02-24 11:12       ` Konstantin Ananyev
2025-02-24 12:47         ` lihuisong (C) [this message]
2025-02-20 16:11   ` Stephen Hemminger
2025-02-20 16:39     ` Konstantin Ananyev
2025-02-20 16:45       ` Stephen Hemminger
2025-02-20 16:58         ` Konstantin Ananyev
2025-02-21 11:21         ` Burakov, Anatoly
