From mboxrd@z Thu Jan 1 00:00:00 1970
To: Chao Zhu
References: <1470486765-2672-1-git-send-email-gowrishankar.m@linux.vnet.ibm.com>
 <1470486765-2672-4-git-send-email-gowrishankar.m@linux.vnet.ibm.com>
 <000101d1f21d$82985610$87c90230$@linux.vnet.ibm.com>
 <02d3d480-85cf-619f-eb92-630adb5881c3@linux.vnet.ibm.com>
 <000101d1f3bb$32fefa60$98fcef20$@linux.vnet.ibm.com>
Cc: dev@dpdk.org, 'Bruce Richardson', 'Konstantin Ananyev', 'Thomas Monjalon', 'Cristian Dumitrescu', 'Pradeep'
From: gowrishankar muthukrishnan <gowrishankar.m@linux.vnet.ibm.com>
Date: Thu, 11 Aug 2016 17:31:55 +0530
In-Reply-To: <000101d1f3bb$32fefa60$98fcef20$@linux.vnet.ibm.com>
Message-Id: <0b0a3246-5a85-6b70-c258-04141ccc3fa1@linux.vnet.ibm.com>
Subject: Re: [dpdk-dev] [PATCH v4 3/6] ip_pipeline: fix lcore mapping for varying SMT threads as in ppc64
List-Id: patches and discussions about DPDK <dev.dpdk.org>

On Thursday 11 August 2016 03:59 PM, Chao Zhu wrote:
> Gowrishankar,
>
> Thanks for the detail.
> If my understanding is correct, Power8 has different chips. Some of the
> OpenPOWER chips have 8 cores per socket, and the max threads per core is 8.
> Should we support this in cpu_core_map_init()?
>
> Here's a dump from the OpenPOWER system.
> ======================================
> # lscpu
> Architecture:          ppc64le
> Byte Order:            Little Endian
> CPU(s):                64
> On-line CPU(s) list:   0,8,16,24,32,40,48,56
> Off-line CPU(s) list:  1-7,9-15,17-23,25-31,33-39,41-47,49-55,57-63
> Thread(s) per core:    1
> Core(s) per socket:    8
> Socket(s):             1
> NUMA node(s):          1
> Model:                 unknown
> L1d cache:             64K
> L1i cache:             32K
> L2 cache:              512K
> L3 cache:              8192K
> NUMA node0 CPU(s):     0,8,16,24,32,40,48,56
> =========================================
>
>> +#if defined(RTE_ARCH_PPC_64)
>> +	app->core_map = cpu_core_map_init(2, 5, 1, 0);
>> +#else
>>
>> This value seems quite strange. Can you give more detail?

Based on the config of the tested server (output below):

CPU(s):                80
On-line CPU(s) list:   0,8,16,24,32,40,48,56,64,72
Off-line CPU(s) list:  1-7,9-15,17-23,25-31,33-39,41-47,49-55,57-63,65-71,73-79
Thread(s) per core:    1    <<<
Core(s) per socket:    5    <<<
Socket(s):             2    <<<
NUMA node(s):          2

the cpu_core_map_init parameters (2, 5, 1, 0) were prepared. Instead, we can
cap max socket/core/ht counts at the maximum possible supported today.

Regards,
Gowrishankar

>>
>>  	app->core_map = cpu_core_map_init(4, 32, 4, 0);
>> +#endif
>
> -----Original Message-----
> From: gowrishankar muthukrishnan [mailto:gowrishankar.m@linux.vnet.ibm.com]
> Sent: 9 August 2016 19:14
> To: Chao Zhu; dev@dpdk.org
> Cc: 'Bruce Richardson'; 'Konstantin Ananyev'; 'Thomas Monjalon'; 'Cristian Dumitrescu'; 'Pradeep'
> Subject: Re: [PATCH v4 3/6] ip_pipeline: fix lcore mapping for varying SMT threads as in ppc64
>
> Hi Chao,
> Sure. Please find below one.
>
> This patch fixes an ip_pipeline panic in app_init_core_map while preparing
> the cpu core map in powerpc with SMT off. cpu_core_map_compute_linux
> currently prepares the core mapping based on file existence in sysfs, i.e.
>
> /sys/devices/system/cpu/cpu<N>/topology/physical_package_id
> /sys/devices/system/cpu/cpu<N>/topology/core_id
>
> These files do not exist for lcores which are offline for any reason (as in
> powerpc, while SMT is off). In this situation, the function should continue
> preparing the map for the other online lcores instead of returning -1 for
> the first unavailable lcore.
>
> Also, in the SMT=off scenario for powerpc, lcore ids cannot always be
> indexed from 0 up to 'number of cores present'
> (/sys/devices/system/cpu/present). For example, for an online lcore 32, the
> core_id returned in sysfs is 112 where the online lcores are 10 (as in one
> configuration), hence the sysfs lcore id cannot be checked by indexing the
> lcore number before positioning the lcore map array.
>
> Thanks,
> Gowrishankar
>
> On Tuesday 09 August 2016 02:37 PM, Chao Zhu wrote:
>> Gowrishankar,
>>
>> Can you give more description about this patch?
>> Thank you!
>>
>> -----Original Message-----
>> From: Gowrishankar Muthukrishnan [mailto:gowrishankar.m@linux.vnet.ibm.com]
>> Sent: 6 August 2016 20:33
>> To: dev@dpdk.org
>> Cc: Chao Zhu; Bruce Richardson; Konstantin Ananyev; Thomas Monjalon; Cristian Dumitrescu; Pradeep; gowrishankar
>> Subject: [PATCH v4 3/6] ip_pipeline: fix lcore mapping for varying SMT threads as in ppc64
>>
>> From: gowrishankar
>>
>> offline lcore would still refer to original core id and this has to be
>> considered while creating cpu core mask.
>>
>> Signed-off-by: Gowrishankar
>> ---
>>  config/defconfig_ppc_64-power8-linuxapp-gcc |  3 ---
>>  examples/ip_pipeline/cpu_core_map.c         | 12 +-----------
>>  examples/ip_pipeline/init.c                 |  4 ++++
>>  3 files changed, 5 insertions(+), 14 deletions(-)
>>
>> diff --git a/config/defconfig_ppc_64-power8-linuxapp-gcc b/config/defconfig_ppc_64-power8-linuxapp-gcc
>> index dede34f..a084672 100644
>> --- a/config/defconfig_ppc_64-power8-linuxapp-gcc
>> +++ b/config/defconfig_ppc_64-power8-linuxapp-gcc
>> @@ -58,6 +58,3 @@ CONFIG_RTE_LIBRTE_FM10K_PMD=n
>>
>>  # This following libraries are not available on Power. So they're turned off.
>>  CONFIG_RTE_LIBRTE_SCHED=n
>> -CONFIG_RTE_LIBRTE_PORT=n
>> -CONFIG_RTE_LIBRTE_TABLE=n
>> -CONFIG_RTE_LIBRTE_PIPELINE=n
>> diff --git a/examples/ip_pipeline/cpu_core_map.c b/examples/ip_pipeline/cpu_core_map.c
>> index cb088b1..482e68e 100644
>> --- a/examples/ip_pipeline/cpu_core_map.c
>> +++ b/examples/ip_pipeline/cpu_core_map.c
>> @@ -351,9 +351,6 @@ cpu_core_map_compute_linux(struct cpu_core_map *map)
>>  			int lcore_socket_id =
>>  				cpu_core_map_get_socket_id_linux(lcore_id);
>>
>> -			if (lcore_socket_id < 0)
>> -				return -1;
>> -
>>  			if (((uint32_t) lcore_socket_id) == socket_id)
>>  				n_detected++;
>>  		}
>> @@ -368,18 +365,11 @@ cpu_core_map_compute_linux(struct cpu_core_map *map)
>>  					cpu_core_map_get_socket_id_linux(
>>  					lcore_id);
>>
>> -				if (lcore_socket_id < 0)
>> -					return -1;
>> -
>>
>> Why remove the lcore_socket_id check?
>>
>>  				int lcore_core_id =
>>  					cpu_core_map_get_core_id_linux(
>>  					lcore_id);
>>
>> -				if (lcore_core_id < 0)
>> -					return -1;
>> -
>> -				if (((uint32_t) lcore_socket_id == socket_id) &&
>> -					((uint32_t) lcore_core_id == core_id)) {
>> +				if ((uint32_t) lcore_socket_id == socket_id) {
>>  					uint32_t pos = cpu_core_map_pos(map,
>>  						socket_id,
>>  						core_id_contig,
>> diff --git a/examples/ip_pipeline/init.c b/examples/ip_pipeline/init.c
>> index cd167f6..60c931f 100644
>> --- a/examples/ip_pipeline/init.c
>> +++ b/examples/ip_pipeline/init.c
>> @@ -61,7 +61,11 @@ static void
>>  app_init_core_map(struct app_params *app)
>>  {
>>  	APP_LOG(app, HIGH, "Initializing CPU core map ...");
>> +#if defined(RTE_ARCH_PPC_64)
>> +	app->core_map = cpu_core_map_init(2, 5, 1, 0);
>> +#else
>>
>> This value seems quite strange. Can you give more detail?
>>
>>  	app->core_map = cpu_core_map_init(4, 32, 4, 0);
>> +#endif
>>
>>  	if (app->core_map == NULL)
>>  		rte_panic("Cannot create CPU core map\n");
>> --
>> 1.9.1
>>
>>
>
>