From: David Marchand
Date: Tue, 30 Jul 2019 15:47:41 +0200
To: Johan Källström
Cc: "dev@dpdk.org", "anatoly.burakov@intel.com", "olivier.matz@6wind.com", "stable@dpdk.org"
Subject: Re: [dpdk-dev] [PATCH] eal: fix ctrl thread affinity with --lcores

On Tue, Jul 30, 2019 at 1:38 PM Johan Källström wrote:
> The CPU failsafe is nice to have as you could set the thread affinity to offline cpus.

Created a "dpdk" cpuset and put cpus 4-7 into it (my system is mono numa
with 8 cpus)

# cd /sys/fs/cgroup/cpuset/
# mkdir dpdk
# cd dpdk
# echo 4-7 > cpuset.cpus
# echo 0 > cpuset.mems

Disabled cpu 5.

# echo 0 > /sys/bus/cpu/devices/cpu5/online

Put my shell that starts testpmd in this dpdk cpuset

# echo 4439 > tasks

EAL refuses an offline core when parsing the thread affinities and this
did not change.

$ ./master/app/testpmd --master-lcore 0 --lcores '(0,7)@(7,4,5)' --log-level *:debug --no-huge --no-pci -m 512 -- -i --total-num-mbufs=2048
EAL: Detected lcore 0 as core 0 on socket 0
EAL: Detected lcore 1 as core 1 on socket 0
EAL: Detected lcore 2 as core 2 on socket 0
EAL: Detected lcore 3 as core 3 on socket 0
EAL: Detected lcore 4 as core 0 on socket 0
EAL: Detected lcore 6 as core 2 on socket 0
EAL: Detected lcore 7 as core 3 on socket 0
EAL: Support maximum 128 logical core(s) by configuration.
EAL: Detected 7 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: core 5 unavailable
EAL: invalid parameter for --lcores

What did I miss?
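
For reference, the offline check that fires above can be reproduced outside
of EAL. The sketch below is a hypothetical standalone program, not DPDK's
actual EAL code; it only assumes the Linux sysfs layout used in the commands
above, where cpu5 was written offline:

/* Hypothetical sketch, not DPDK's EAL implementation: report whether a cpu
 * id is usable, in the spirit of the "EAL: core 5 unavailable" check above.
 * A cpu counts as usable if its sysfs directory exists and, when an
 * "online" attribute is present, that attribute reads 1. After
 * "echo 0 > /sys/bus/cpu/devices/cpu5/online", cpu5 reads 0. */
#include <stdio.h>
#include <unistd.h>

static int cpu_usable(unsigned int cpu)
{
	char path[64];
	char buf[4] = "";
	FILE *f;

	snprintf(path, sizeof(path), "/sys/devices/system/cpu/cpu%u", cpu);
	if (access(path, F_OK) != 0)
		return 0;	/* no such cpu id on this system */

	snprintf(path, sizeof(path), "/sys/devices/system/cpu/cpu%u/online", cpu);
	f = fopen(path, "r");
	if (f == NULL)
		return 1;	/* no online attribute (e.g. cpu0): always on */

	if (fgets(buf, sizeof(buf), f) == NULL)
		buf[0] = '0';
	fclose(f);
	return buf[0] == '1';
}

int main(void)
{
	unsigned int cpu;

	for (cpu = 0; cpu < 8; cpu++)
		printf("cpu%u: %s\n", cpu,
		       cpu_usable(cpu) ? "usable" : "unavailable");
	return 0;
}

On the setup above this should report cpu5 as unavailable while cpus 0-7
otherwise exist, which is what makes EAL reject the '(0,7)@(7,4,5)' mapping.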
>
> Maybe also add the example I gave you to trigger the bug? https://bugs.dpdk.org/show_bug.cgi?id=322#c12

I managed to reproduce your error with the setup above (without relying on
the cset tool that is not available on RHEL afaics), I can add it to the
commitlog, yes.

> This also shows how to set the default_affinity mask and proves that the calculation will result in threads inside the cpuset on Linux.
>
> /Johan
>
> On tis, 2019-07-30 at 11:35 +0200, David Marchand wrote:
> > When using -l/-c options, each lcore is mapped to a physical cpu in a
> > 1:1 fashion.
> > On the contrary, when using --lcores, each lcore has its own cpuset
>
> Use "thread affinity" instead of cpuset when we talk about setting the thread affinity.
>
> I know that the term cpuset is used in the data structure, but it is not a cpuset as described by 'man cpuset' (on Linux). This comment can be seen as cosmetic, but I think that it could be good to have a clear definition to minimize confusion.

Indeed, using cpuset is inappropriate.
I will update the commitlog and the comment.

--
David Marchand
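
As an illustration of the default affinity calculation discussed above, here
is a rough standalone sketch. It is not the actual patch, and the
worker_cpus placement is made up for the example: the idea is to start from
the affinity the process inherited (which on Linux already reflects the
cpuset cgroup) and clear the cpus taken by worker lcores, so control threads
stay inside the allowed set.

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void)
{
	cpu_set_t process_affinity, ctrl_affinity;
	/* Illustrative worker placement only: lcores pinned on cpus 4 and 7. */
	const int worker_cpus[] = { 4, 7 };
	unsigned int i;
	int cpu;

	/* The affinity inherited at startup already honours the cpuset cgroup
	 * (cpus 4-7 in the setup above, minus any offlined cpu). */
	if (sched_getaffinity(0, sizeof(process_affinity), &process_affinity) != 0) {
		perror("sched_getaffinity");
		return 1;
	}

	/* Control threads get whatever the process may use, minus worker cpus. */
	ctrl_affinity = process_affinity;
	for (i = 0; i < sizeof(worker_cpus) / sizeof(worker_cpus[0]); i++)
		CPU_CLR(worker_cpus[i], &ctrl_affinity);

	/* If the workers claimed everything, fall back to the full process mask. */
	if (CPU_COUNT(&ctrl_affinity) == 0)
		ctrl_affinity = process_affinity;

	for (cpu = 0; cpu < CPU_SETSIZE; cpu++)
		if (CPU_ISSET(cpu, &ctrl_affinity))
			printf("control threads may run on cpu %d\n", cpu);
	return 0;
}

With the dpdk cpuset shown earlier in the thread (cpus 4-7, cpu5 offline) and
this made-up worker placement, that would leave only cpu 6 for control
threads, i.e. they stay inside the cpuset as Johan's example is meant to
demonstrate.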