patches for DPDK stable branches
From: "You, KaisenX" <kaisenx.you@intel.com>
To: David Marchand <david.marchand@redhat.com>
Cc: Ferruh Yigit <ferruh.yigit@amd.com>,
	"dev@dpdk.org" <dev@dpdk.org>,
	"Burakov, Anatoly" <anatoly.burakov@intel.com>,
	"stable@dpdk.org" <stable@dpdk.org>,
	"Yang, Qiming" <qiming.yang@intel.com>,
	"Zhou, YidingX" <yidingx.zhou@intel.com>,
	"Wu, Jingjing" <jingjing.wu@intel.com>,
	"Xing, Beilei" <beilei.xing@intel.com>,
	"Zhang, Qi Z" <qi.z.zhang@intel.com>,
	"Luca Boccassi" <bluca@debian.org>,
	"Mcnamara, John" <john.mcnamara@intel.com>,
	Kevin Traynor <ktraynor@redhat.com>
Subject: RE: [PATCH] net/iavf: fix slow memory allocation
Date: Wed, 21 Dec 2022 09:12:45 +0000
Message-ID: <SJ0PR11MB6765B8816B7C5BC4BC35A919E1EB9@SJ0PR11MB6765.namprd11.prod.outlook.com>
In-Reply-To: <CAJFAV8wQxoAiv9pAsuEdFj0vd4JzTQ1-6TQZbcGAwjjAb1SWvg@mail.gmail.com>



> -----Original Message-----
> From: David Marchand <david.marchand@redhat.com>
> Sent: December 20, 2022 18:33
> To: You, KaisenX <kaisenx.you@intel.com>
> Cc: Ferruh Yigit <ferruh.yigit@amd.com>; dev@dpdk.org; Burakov, Anatoly
> <anatoly.burakov@intel.com>; stable@dpdk.org; Yang, Qiming
> <qiming.yang@intel.com>; Zhou, YidingX <yidingx.zhou@intel.com>; Wu,
> Jingjing <jingjing.wu@intel.com>; Xing, Beilei <beilei.xing@intel.com>; Zhang,
> Qi Z <qi.z.zhang@intel.com>; Luca Boccassi <bluca@debian.org>; Mcnamara,
> John <john.mcnamara@intel.com>; Kevin Traynor <ktraynor@redhat.com>
> Subject: Re: [PATCH] net/iavf: fix slow memory allocation
> 
> On Tue, Dec 20, 2022 at 11:12 AM You, KaisenX <kaisenx.you@intel.com>
> wrote:
> > > I tried to play a bit with an E810 NIC on a dual-NUMA system and I
> > > can't see anything wrong so far.
> > > Can you provide a simple and small reproducer of your issue?
> > >
> > > Thanks.
> > >
> > This is my environment; "lscpu" reports the following NUMA topology:
> > NUMA:
> >         NUMA node(s):      2
> >         NUMA node0 CPU(s): 0-27,56-83
> >         NUMA node1 CPU(s): 28-55,84-111
> >
> > Steps to reproduce the issue:
> >
> > 1. Create a VF and bind it to DPDK:
> > echo 1 > /sys/bus/pci/devices/0000\:ca\:00.0/sriov_numvfs
> > ./usertools/dpdk-devbind.py -b vfio-pci 0000:ca:01.0
> >
> > 2. Launch testpmd:
> > ./x86_64-native-linuxapp-clang/app/dpdk-testpmd -l 28-48 -n 4 \
> >     -a 0000:ca:01.0 --file-prefix=dpdk_525342_20221104042659 -- -i \
> >     --rxq=256 --txq=256 --total-num-mbufs=500000
> >
> > Parameter description:
> >  "-l 28-48": the cores passed to "-l" must all be on "NUMA node1 CPU(s)"
> >  "0000:ca:01.0": the NIC sits on node 1 (a quick check is shown below)
> 
> - Using 256 queues is not supported upstream...
> iavf_dev_configure(): large VF is not supported
> 
> I would really love for Intel to stop building/testing features with this
> out-of-tree driver.
> We have lost, and still lose, so much time because of it.
> 
> 
> - Back to your topic.
> Can you try this simple hack:
> 
> diff --git a/lib/eal/common/eal_common_thread.c b/lib/eal/common/eal_common_thread.c
> index c5d8b4327d..92160c7fa6 100644
> --- a/lib/eal/common/eal_common_thread.c
> +++ b/lib/eal/common/eal_common_thread.c
> @@ -253,6 +253,7 @@ static void *ctrl_thread_init(void *arg)
>         void *routine_arg = params->arg;
> 
>         __rte_thread_init(rte_lcore_id(), cpuset);
> +       RTE_PER_LCORE(_socket_id) = SOCKET_ID_ANY;
>         params->ret = pthread_setaffinity_np(pthread_self(), sizeof(*cpuset),
>                 cpuset);
>         if (params->ret != 0) {
> 
Thanks for your advice.

But the issue still persists after I tried this change.
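
A minimal probe along the following lines can show which socket id an EAL
control thread observes before and after the hack (a sketch only: it assumes
a standard libdpdk build found via pkg-config, and the file and thread names
are illustrative; rte_ctrl_thread_create() and rte_socket_id() are stock EAL
calls):

#include <stdio.h>
#include <pthread.h>
#include <rte_eal.h>
#include <rte_lcore.h>

/* Runs on the EAL control-thread cpuset, like the iavf event handler
 * thread this patch is concerned with. */
static void *probe(void *arg)
{
	(void)arg;
	/* With the hack applied, rte_socket_id() here should return
	 * SOCKET_ID_ANY (-1); without it, it reflects whatever socket
	 * the control thread's cpuset resolved to. */
	printf("control thread socket id: %d\n", (int)rte_socket_id());
	return NULL;
}

int main(int argc, char **argv)
{
	pthread_t tid;

	if (rte_eal_init(argc, argv) < 0)
		return 1;
	if (rte_ctrl_thread_create(&tid, "sock-probe", NULL, probe, NULL) != 0)
		return 1;
	pthread_join(tid, NULL);
	return rte_eal_cleanup();
}

Build and run, e.g.: gcc probe.c $(pkg-config --cflags --libs libdpdk),
then pass the same EAL -l/-a arguments as in the testpmd command above.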
> 
> --
> David Marchand


Thread overview: 22+ messages
2022-11-17  6:57 Kaisen You
2022-11-18  8:22 ` Jiale, SongX
2022-12-07  9:07 ` You, KaisenX
2022-12-08  8:46 ` Wu, Jingjing
2022-12-08 15:04 ` Ferruh Yigit
2022-12-13  7:52   ` You, KaisenX
2022-12-13  9:35     ` Ferruh Yigit
2022-12-13 13:27       ` Ferruh Yigit
2022-12-20  6:52         ` You, KaisenX
2022-12-20  9:33           ` David Marchand
2022-12-20 10:11             ` You, KaisenX
2022-12-20 10:33               ` David Marchand
2022-12-21  9:12                 ` You, KaisenX [this message]
2022-12-21 10:50                   ` David Marchand
2022-12-22  6:42                     ` You, KaisenX
2022-12-27  6:06                       ` You, KaisenX
2023-01-10 10:16                         ` David Marchand
2023-01-13  6:24                           ` You, KaisenX
2022-12-21 13:48           ` Ferruh Yigit
2022-12-22  7:23             ` You, KaisenX
2022-12-22 12:06               ` Ferruh Yigit
2022-12-26  2:17                 ` Zhang, Qi Z

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save this message as an mbox file, import it into your mail client,
  and reply-to-all from there.

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=SJ0PR11MB6765B8816B7C5BC4BC35A919E1EB9@SJ0PR11MB6765.namprd11.prod.outlook.com \
    --to=kaisenx.you@intel.com \
    --cc=anatoly.burakov@intel.com \
    --cc=beilei.xing@intel.com \
    --cc=bluca@debian.org \
    --cc=david.marchand@redhat.com \
    --cc=dev@dpdk.org \
    --cc=ferruh.yigit@amd.com \
    --cc=jingjing.wu@intel.com \
    --cc=john.mcnamara@intel.com \
    --cc=ktraynor@redhat.com \
    --cc=qi.z.zhang@intel.com \
    --cc=qiming.yang@intel.com \
    --cc=stable@dpdk.org \
    --cc=yidingx.zhou@intel.com \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

Be sure your reply has a Subject: header at the top and a blank line before the message body.