DPDK patches and discussions
From: Ferruh Yigit <ferruh.yigit@amd.com>
To: Bruce Richardson <bruce.richardson@intel.com>
Cc: dev@dpdk.org, stable@dpdk.org,
	Padraig Connolly <padraig.j.connolly@intel.com>
Subject: Re: [PATCH] ethdev: fix device init without socket-local memory
Date: Sun, 21 Jul 2024 23:56:08 +0100
Message-ID: <92bc58fd-4a77-464e-bc49-e012b428834f@amd.com>
In-Reply-To: <ZpqQB_kB7lRLOVJh@bricha3-mobl1.ger.corp.intel.com>

On 7/19/2024 5:10 PM, Bruce Richardson wrote:
> On Fri, Jul 19, 2024 at 04:31:11PM +0100, Ferruh Yigit wrote:
>> On 7/19/2024 2:22 PM, Bruce Richardson wrote:
>>> On Fri, Jul 19, 2024 at 12:10:24PM +0100, Ferruh Yigit wrote:
>>>> One option could be adding a warning log to the fallback case, saying that
>>>> memory was allocated from a non-local socket and performance will be lower.
>>>> Although this message may not mean much to a new user, it may still help
>>>> via a support engineer or an internet search...
>>>>
>>>
>>> Yes, we could add a warning, but that is probably better in the app itself.
>>> Thinking about where we get issues, they primarily stem from running the
>>> cores on a different NUMA node. Since the private_data struct is accessed
>>> by cores, not devices, any perf degradation will come from having it remote
>>> to the cores. Because of that, I actually think the original implementation
>>> should really have used "rte_socket_id()" rather than the device socket id
>>> for the allocation.
>>>
>>
>> Yes, I guess private_data is not accessed by the device, but it may be
>> accessed by the cores that are running the datapath.
>>
>> This API may be called by a core that is not involved in the datapath, so
>> it may not be correct to allocate memory from its NUMA node.
>>
>> Would it be wrong to assume that the cores used for the datapath will be on
>> the same NUMA node as the device? If so, allocating memory from that node
>> (the one the device is on) makes more sense. Am I missing something?
>>
> 
> It depends on which you think is more likely for the polling cores:
> - they will be on the same socket as the device, but different socket to
>   the main lcore.
> - they will be on the same socket as the main lcore, but different socket
>   to the device.
> 
> Personally, I'd suspect both to be unlikely, but also both to be possible.
> For the first scenario, I don't see anything being broken or affected by
> the proposed fix here, since priority is still being given to memory on the
> same socket as the device. It just opens up the possibility of scenario
> two.
> 

My comment was on the suggestion to use "rte_socket_id()" rather than the
device socket id.
If both nodes have memory, memory should be allocated from the node the
device is on: although the device doesn't use private_data, the polling
cores will, and the polling cores will most likely be on the same node as
the device and its memory, while we don't know which node the main core is
on.
So I think the first attempt at memory allocation should be the node the
device is on, which is what the existing code does.

If the node that has the device doesn't have any memory attached, your change
enables this case; since there is memory on only one node anyway, it doesn't
matter whether we check the device node or the main core node.
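
For illustration only, a minimal sketch of the allocation order being
discussed; the helper name and log type are hypothetical, this is not the
actual ethdev code or the patch:

#include <rte_common.h>
#include <rte_malloc.h>
#include <rte_log.h>
#include <rte_ethdev.h>

/* Hypothetical helper: try the device's NUMA node first, then fall back
 * to any node that has memory, with a warning in the fallback path. */
static void *
alloc_port_private_data(uint16_t port_id, size_t size)
{
	int dev_socket = rte_eth_dev_socket_id(port_id);
	void *priv;

	/* First attempt: the node the device is attached to. */
	priv = rte_zmalloc_socket("port_private_data", size,
			RTE_CACHE_LINE_SIZE, dev_socket);
	if (priv == NULL) {
		/* Fallback: any node with free memory, but warn, since the
		 * polling cores may now access it across sockets. */
		RTE_LOG(WARNING, USER1,
			"port %u: no memory on socket %d, falling back to any socket\n",
			port_id, dev_socket);
		priv = rte_zmalloc_socket("port_private_data", size,
				RTE_CACHE_LINE_SIZE, SOCKET_ID_ANY);
	}
	return priv;
}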


Briefly, I am OK with the current patch plus a warning log in the fallback,
but not with the "rte_socket_id()" change.



Thread overview: 12+ messages
2024-07-11 12:35 Bruce Richardson
2024-07-19  8:59 ` Ferruh Yigit
2024-07-19  9:57   ` Bruce Richardson
2024-07-19 11:10     ` Ferruh Yigit
2024-07-19 13:22       ` Bruce Richardson
2024-07-19 15:31         ` Ferruh Yigit
2024-07-19 16:10           ` Bruce Richardson
2024-07-21 22:56             ` Ferruh Yigit [this message]
2024-07-22 10:06               ` Bruce Richardson
2024-07-19 10:41   ` Bruce Richardson
2024-07-22 10:02 ` [PATCH v2] " Bruce Richardson
2024-07-22 13:24   ` Ferruh Yigit
