patches for DPDK stable branches
From: Bruce Richardson <bruce.richardson@intel.com>
To: Ferruh Yigit <ferruh.yigit@amd.com>
Cc: <dev@dpdk.org>, <stable@dpdk.org>,
	Padraig Connolly <padraig.j.connolly@intel.com>
Subject: Re: [PATCH] ethdev: fix device init without socket-local memory
Date: Fri, 19 Jul 2024 17:10:47 +0100	[thread overview]
Message-ID: <ZpqQB_kB7lRLOVJh@bricha3-mobl1.ger.corp.intel.com> (raw)
In-Reply-To: <e22166ca-04ce-42a0-9435-14935c91d8b2@amd.com>

On Fri, Jul 19, 2024 at 04:31:11PM +0100, Ferruh Yigit wrote:
> On 7/19/2024 2:22 PM, Bruce Richardson wrote:
> > On Fri, Jul 19, 2024 at 12:10:24PM +0100, Ferruh Yigit wrote:
> >> One option could be adding a warning log to the fallback case, saying
> >> that memory was allocated from a non-local socket and performance will
> >> be lower. Although this message may not mean much to a new user, it may
> >> still help via a support engineer or an internet search...
> >>
> > 
> > Yes, we could add a warning, but that is probably better in the app itself.
> > Thinking about where we get issues, they primarily stem from running the
> > cores on a different NUMA node. Since the private_data struct is accessed
> > by cores, not devices, any perf degradation will come from having it
> > remote to the cores. Because of that, I actually think the original
> > implementation should really have used "rte_socket_id()" rather than the
> > device socket id for the allocation.
> >
> 
> Yes, I guess private_data is not accessed by the device, but it may be
> accessed by the cores that are running the datapath.
> 
> This API may be called by a core that is not involved in the datapath, so
> it may not be correct to allocate memory from its NUMA node.
> 
> Would it be wrong to assume that the cores used for the datapath will be
> on the same NUMA node as the device? If so, allocating memory from that
> node (the one the device is in) makes more sense. Am I missing something?
> 

It depends on which you think is more likely for the polling cores:
- they will be on the same socket as the device, but a different socket to
  the main lcore.
- they will be on the same socket as the main lcore, but a different socket
  to the device.

Personally, I'd suspect both to be unlikely, but also both to be possible.
For the first scenario, I don't see anything being broken or affected by
the proposed fix here, since priority is still being given to memory on the
same socket as the device. It just opens up the possibility of scenario
two.
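
For reference, a minimal sketch of the allocate-with-fallback pattern being
discussed (not the actual patch; the helper name and call site are made up,
only the rte_malloc/rte_ethdev APIs used are real):

#include <rte_ethdev.h>
#include <rte_malloc.h>

/* Hypothetical helper: prefer memory on the device's NUMA socket, but
 * fall back to any socket instead of failing device init when that
 * socket has no memory available. */
static void *
alloc_private_data(uint16_t port_id, size_t size)
{
	/* may be SOCKET_ID_ANY (-1) if the device's NUMA node is unknown */
	int socket = rte_eth_dev_socket_id(port_id);

	void *p = rte_zmalloc_socket("ethdev_private", size,
			RTE_CACHE_LINE_SIZE, socket);
	if (p == NULL && socket != SOCKET_ID_ANY) {
		/* No socket-local memory; take whatever is available.
		 * This is where a warning log could be emitted. */
		p = rte_zmalloc_socket("ethdev_private", size,
				RTE_CACHE_LINE_SIZE, SOCKET_ID_ANY);
	}
	return p;
}

Substituting rte_socket_id() for rte_eth_dev_socket_id() in the first call
would instead give preference to the calling core's socket, which is the
alternative raised above.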

/Bruce

Thread overview: 12+ messages
2024-07-11 12:35 Bruce Richardson
2024-07-19  8:59 ` Ferruh Yigit
2024-07-19  9:57   ` Bruce Richardson
2024-07-19 11:10     ` Ferruh Yigit
2024-07-19 13:22       ` Bruce Richardson
2024-07-19 15:31         ` Ferruh Yigit
2024-07-19 16:10           ` Bruce Richardson [this message]
2024-07-21 22:56             ` Ferruh Yigit
2024-07-22 10:06               ` Bruce Richardson
2024-07-19 10:41   ` Bruce Richardson
2024-07-22 10:02 ` [PATCH v2] " Bruce Richardson
2024-07-22 13:24   ` Ferruh Yigit
