DPDK patches and discussions
From: Ferruh Yigit <ferruh.yigit@intel.com>
To: phil.yang@arm.com, dev@dpdk.org
Cc: nd@arm.com, anatoly.burakov@intel.com
Subject: Re: [dpdk-dev] [PATCH] app/testpmd: fix vdev socket initialization
Date: Fri, 12 Oct 2018 18:13:29 +0100	[thread overview]
Message-ID: <4b4aeed5-ba6e-8df3-386c-191b05a73586@intel.com> (raw)
In-Reply-To: <1539336895-22691-1-git-send-email-phil.yang@arm.com>

On 10/12/2018 10:34 AM, phil.yang@arm.com wrote:
> The command-line settings of port-numa-config and rxring-numa-config are
> flushed by the subsequent init_config. If port-numa-config is not set,
> the virtual device allocates its ports to socket 0, which causes a
> failure when socket 0 is unavailable.
> 
> eg:
> testpmd -l <cores from socket 1> --vdev net_pcap0,iface=lo
> --socket-mem=64 -- --numa --port-numa-config="(0,1)"
> --ring-numa-config="(0,1,1),(0,2,1)" -i
> 
> ...
> Configuring Port 0 (socket 0)
> Failed to setup RX queue:No mempool allocation on the socket 0
> EAL: Error - exiting with code: 1
>   Cause: Start ports failed
> 
> Fix by allocating the device ports to the first available socket, or to
> the socket configured in port-numa-config.

I confirm this fixes the issue by making vdev allocate from an available socket
instead of the hardcoded socket 0; overall this makes sense.

But currently there is no way to request a mempool from "socket 0" if only
cores from "socket 1" are provided in "-l", even with "port-numa-config" and
"rxring-numa-config".
Both this behavior and the problem this patch fixes were introduced by
commit dbfb8ec7094c ("app/testpmd: optimize mbuf pool allocation").

It is good to have optimized mempool allocation, but I think it shouldn't limit
the tool. If the user wants mempools from a specific socket, let them have them.

What about changing the default behavior to:
1- Allocate mempools only from the sockets of the cores provided in the
coremask (current approach)
2- Plus, allocate mempools from the sockets of attached devices (this is an
alternative solution to this patch; your solution seems better for virtual
devices, but for physical devices allocating from the socket the device is
attached to can be better)
3- Plus, allocate mempools from the sockets provided in "port-numa-config" and
"rxring-numa-config"

What do you think?


> 
> Fixes: 487f9a5 ("app/testpmd: fix NUMA structures initialization")
> 
> Signed-off-by: Phil Yang <phil.yang@arm.com>
> Reviewed-by: Gavin Hu <Gavin.Hu@arm.com>

<...>

  reply	other threads:[~2018-10-12 17:13 UTC|newest]

Thread overview: 6+ messages
2018-10-12  9:34 Phil Yang
2018-10-12 17:13 ` Ferruh Yigit [this message]
2018-10-15  9:51   ` Phil Yang (Arm Technology China)
2018-10-15 10:41     ` Ferruh Yigit
2018-10-16  8:58       ` Phil Yang (Arm Technology China)
2018-10-15 10:58 ` Ferruh Yigit
