From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Chao Zhu" <chaozhu@linux.vnet.ibm.com>
To: "'gowrishankar muthukrishnan'" <gowrishankar.m@linux.vnet.ibm.com>
Cc: <dev@dpdk.org>, "'Bruce Richardson'", "'Konstantin Ananyev'",
	"'Thomas Monjalon'", "'Cristian Dumitrescu'", "'Pradeep'"
References: <1470486765-2672-1-git-send-email-gowrishankar.m@linux.vnet.ibm.com>
	<1470486765-2672-4-git-send-email-gowrishankar.m@linux.vnet.ibm.com>
	<000101d1f21d$82985610$87c90230$@linux.vnet.ibm.com>
	<02d3d480-85cf-619f-eb92-630adb5881c3@linux.vnet.ibm.com>
	<000101d1f3bb$32fefa60$98fcef20$@linux.vnet.ibm.com>
	<0b0a3246-5a85-6b70-c258-04141ccc3fa1@linux.vnet.ibm.com>
	<000001d1f475$b5460170$1fd20450$@linux.vnet.ibm.com>
	<8904e228-bf41-042f-0956-c531b5d2af93@linux.vnet.ibm.com>
In-Reply-To: <8904e228-bf41-042f-0956-c531b5d2af93@linux.vnet.ibm.com>
Date: Fri, 12 Aug 2016 18:15:27 +0800
Message-Id: <000201d1f482$76c2aa90$6447ffb0$@linux.vnet.ibm.com>
Subject: Re: [dpdk-dev] [PATCH v4 3/6] ip_pipeline: fix lcore mapping for
	varying SMT threads as in ppc64
Another comment: commenting out the lcore_socket_id check will influence other
architectures. If possible, I would like to make this change Power-specific.

-----Original Message-----
From: gowrishankar muthukrishnan [mailto:gowrishankar.m@linux.vnet.ibm.com]
Sent: 12 August 2016 17:00
To: Chao Zhu
Cc: dev@dpdk.org; 'Bruce Richardson'; 'Konstantin Ananyev'; 'Thomas Monjalon';
'Cristian Dumitrescu'; 'Pradeep'
Subject: Re: [PATCH v4 3/6] ip_pipeline: fix lcore mapping for varying SMT
threads as in ppc64

On Friday 12 August 2016 02:14 PM, Chao Zhu wrote:
> Gowrishankar,
>
> I suggest setting the following values:
>
> n_max_cores_per_socket = 8
> n_max_ht_per_core = 8
>
> This will cover most of the Power8 servers.
> Any comments?

Sure, Chao. I will include this change in v5. If there are no other comments,
I can spin out v5 with the changes in this patch.

Regards,
Gowrishankar
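(For illustration only, not part of the patch: a minimal sketch of what the v5
change mentioned above could look like in app_init_core_map(), assuming
cpu_core_map_init() takes (max sockets, max cores per socket, max HT threads
per core, offset), as the parameter names used in this thread suggest.)

#if defined(RTE_ARCH_PPC_64)
	/* Cap at 2 sockets, 8 cores per socket and 8 SMT threads per core,
	 * which covers most Power8 servers per the discussion in this thread. */
	app->core_map = cpu_core_map_init(2, 8, 8, 0);
#else
	app->core_map = cpu_core_map_init(4, 32, 4, 0);
#endif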
>
> -----Original Message-----
> From: gowrishankar muthukrishnan
> [mailto:gowrishankar.m@linux.vnet.ibm.com]
> Sent: 11 August 2016 20:02
> To: Chao Zhu
> Cc: dev@dpdk.org; 'Bruce Richardson'; 'Konstantin Ananyev'; 'Thomas Monjalon';
> 'Cristian Dumitrescu'; 'Pradeep'
> Subject: Re: [PATCH v4 3/6] ip_pipeline: fix lcore mapping for varying
> SMT threads as in ppc64
>
> On Thursday 11 August 2016 03:59 PM, Chao Zhu wrote:
>> Gowrishankar,
>>
>> Thanks for the detail.
>> If my understanding is correct, Power8 has different chips: some of the
>> OpenPOWER chips have 8 cores per socket, and the max threads per core is 8.
>> Should we support this in cpu_core_map_init()?
>>
>> Here's a dump from the OpenPOWER system.
>> =======================================
>> # lscpu
>> Architecture:          ppc64le
>> Byte Order:            Little Endian
>> CPU(s):                64
>> On-line CPU(s) list:   0,8,16,24,32,40,48,56
>> Off-line CPU(s) list:  1-7,9-15,17-23,25-31,33-39,41-47,49-55,57-63
>> Thread(s) per core:    1
>> Core(s) per socket:    8
>> Socket(s):             1
>> NUMA node(s):          1
>> Model:                 unknown
>> L1d cache:             64K
>> L1i cache:             32K
>> L2 cache:              512K
>> L3 cache:              8192K
>> NUMA node0 CPU(s):     0,8,16,24,32,40,48,56
>> =======================================
>>
>>
>>> +#if defined(RTE_ARCH_PPC_64)
>>> +	app->core_map = cpu_core_map_init(2, 5, 1, 0);
>>> +#else
>>>
>>> This value seems quite strange. Can you give more detail?
>
> Based on the config of the tested server (output below),
>
> CPU(s):                80
> On-line CPU(s) list:   0,8,16,24,32,40,48,56,64,72
> Off-line CPU(s) list:  1-7,9-15,17-23,25-31,33-39,41-47,49-55,57-63,65-71,73-79
> Thread(s) per core:    1   <<<
> Core(s) per socket:    5   <<<
> Socket(s):             2   <<<
> NUMA node(s):          2
>
> the cpu_core_map_init parameters (2, 5, 1, 0) were prepared. Instead, we can
> cap the max socket/core/HT counts at the maximum supported today.
>
> Regards,
> Gowrishankar
>>> 	app->core_map = cpu_core_map_init(4, 32, 4, 0);
>>> +#endif
>> -----Original Message-----
>> From: gowrishankar muthukrishnan
>> [mailto:gowrishankar.m@linux.vnet.ibm.com]
>> Sent: 9 August 2016 19:14
>> To: Chao Zhu; dev@dpdk.org
>> Cc: 'Bruce Richardson'; 'Konstantin Ananyev'; 'Thomas Monjalon';
>> 'Cristian Dumitrescu'; 'Pradeep'
>> Subject: Re: [PATCH v4 3/6] ip_pipeline: fix lcore mapping for
>> varying SMT threads as in ppc64
>>
>> Hi Chao,
>> Sure. Please find below one.
>>
>> This patch fixes an ip_pipeline panic in app_init_core_map while preparing
>> the CPU core map on powerpc with SMT off. cpu_core_map_compute_linux
>> currently prepares the core mapping based on the existence of the following
>> sysfs files:
>>
>> /sys/devices/system/cpu/cpu<N>/topology/physical_package_id
>> /sys/devices/system/cpu/cpu<N>/topology/core_id
>>
>> These files do not exist for lcores which are offline for any reason (as on
>> powerpc while SMT is off). In this situation, the function should continue
>> preparing the map for the remaining online lcores instead of returning -1
>> at the first unavailable lcore.
>>
>> Also, in the SMT=off scenario on powerpc, lcore IDs cannot always be indexed
>> from 0 up to the 'number of cores present' (/sys/devices/system/cpu/present).
>> For example, for online lcore 32 the core_id returned in sysfs is 112, while
>> only 10 lcores are online (as in one configuration), so the sysfs lcore id
>> cannot be checked against the lcore index before positioning it in the lcore
>> map array.
>>
>> Thanks,
>> Gowrishankar
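(A small self-contained sketch, not taken from the patch, of the behaviour
described above: probe the per-lcore sysfs topology files and skip lcores whose
files are missing (offline) instead of aborting. The MAX_LCORES bound and the
helper name are illustrative only.)

#include <stdio.h>

#define MAX_LCORES 128	/* illustrative upper bound, not from the patch */

/* Return the socket id for an lcore, or -1 if the lcore is offline
 * (its topology directory is missing in sysfs). */
static int get_socket_id(unsigned int lcore_id)
{
	char path[256];
	FILE *f;
	int id;

	snprintf(path, sizeof(path),
		"/sys/devices/system/cpu/cpu%u/topology/physical_package_id",
		lcore_id);
	f = fopen(path, "r");
	if (f == NULL)
		return -1;	/* offline lcore: report it, do not abort */
	if (fscanf(f, "%d", &id) != 1)
		id = -1;
	fclose(f);
	return id;
}

int main(void)
{
	unsigned int lcore_id;

	for (lcore_id = 0; lcore_id < MAX_LCORES; lcore_id++) {
		int socket_id = get_socket_id(lcore_id);

		if (socket_id < 0)
			continue;	/* offline: keep scanning the rest */
		printf("lcore %u -> socket %d\n", lcore_id, socket_id);
	}
	return 0;
}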
>> On Tuesday 09 August 2016 02:37 PM, Chao Zhu wrote:
>>> Gowrishankar,
>>>
>>> Can you give more description about this patch?
>>> Thank you!
>>>
>>> -----Original Message-----
>>> From: Gowrishankar Muthukrishnan [mailto:gowrishankar.m@linux.vnet.ibm.com]
>>> Sent: 6 August 2016 20:33
>>> To: dev@dpdk.org
>>> Cc: Chao Zhu; Bruce Richardson; Konstantin Ananyev; Thomas Monjalon;
>>> Cristian Dumitrescu; Pradeep; gowrishankar
>>> Subject: [PATCH v4 3/6] ip_pipeline: fix lcore mapping for varying
>>> SMT threads as in ppc64
>>>
>>> From: gowrishankar
>>>
>>> An offline lcore would still refer to its original core id, and this has
>>> to be considered while creating the cpu core mask.
>>>
>>> Signed-off-by: Gowrishankar
>>> ---
>>>  config/defconfig_ppc_64-power8-linuxapp-gcc |  3 ---
>>>  examples/ip_pipeline/cpu_core_map.c         | 12 +-----------
>>>  examples/ip_pipeline/init.c                 |  4 ++++
>>>  3 files changed, 5 insertions(+), 14 deletions(-)
>>>
>>> diff --git a/config/defconfig_ppc_64-power8-linuxapp-gcc b/config/defconfig_ppc_64-power8-linuxapp-gcc
>>> index dede34f..a084672 100644
>>> --- a/config/defconfig_ppc_64-power8-linuxapp-gcc
>>> +++ b/config/defconfig_ppc_64-power8-linuxapp-gcc
>>> @@ -58,6 +58,3 @@ CONFIG_RTE_LIBRTE_FM10K_PMD=n
>>>
>>>  # This following libraries are not available on Power. So they're turned off.
>>>  CONFIG_RTE_LIBRTE_SCHED=n
>>> -CONFIG_RTE_LIBRTE_PORT=n
>>> -CONFIG_RTE_LIBRTE_TABLE=n
>>> -CONFIG_RTE_LIBRTE_PIPELINE=n
>>> diff --git a/examples/ip_pipeline/cpu_core_map.c b/examples/ip_pipeline/cpu_core_map.c
>>> index cb088b1..482e68e 100644
>>> --- a/examples/ip_pipeline/cpu_core_map.c
>>> +++ b/examples/ip_pipeline/cpu_core_map.c
>>> @@ -351,9 +351,6 @@ cpu_core_map_compute_linux(struct cpu_core_map *map)
>>>  			int lcore_socket_id =
>>>  				cpu_core_map_get_socket_id_linux(lcore_id);
>>>
>>> -			if (lcore_socket_id < 0)
>>> -				return -1;
>>> -
>>>  			if (((uint32_t) lcore_socket_id) == socket_id)
>>>  				n_detected++;
>>>  		}
>>> @@ -368,18 +365,11 @@ cpu_core_map_compute_linux(struct cpu_core_map *map)
>>>  					cpu_core_map_get_socket_id_linux(
>>>  					lcore_id);
>>>
>>> -				if (lcore_socket_id < 0)
>>> -					return -1;
>>>
>>> Why remove the lcore_socket_id check?
>>>
>>>  				int lcore_core_id =
>>>  					cpu_core_map_get_core_id_linux(
>>>  					lcore_id);
>>>
>>> -				if (lcore_core_id < 0)
>>> -					return -1;
>>> -
>>> -				if (((uint32_t) lcore_socket_id == socket_id) &&
>>> -					((uint32_t) lcore_core_id == core_id)) {
>>> +				if ((uint32_t) lcore_socket_id == socket_id) {
>>>  					uint32_t pos = cpu_core_map_pos(map,
>>>  						socket_id,
>>>  						core_id_contig,
>>> diff --git a/examples/ip_pipeline/init.c b/examples/ip_pipeline/init.c
>>> index cd167f6..60c931f 100644
>>> --- a/examples/ip_pipeline/init.c
>>> +++ b/examples/ip_pipeline/init.c
>>> @@ -61,7 +61,11 @@ static void
>>>  app_init_core_map(struct app_params *app) {
>>>  	APP_LOG(app, HIGH, "Initializing CPU core map ...");
>>> +#if defined(RTE_ARCH_PPC_64)
>>> +	app->core_map = cpu_core_map_init(2, 5, 1, 0);
>>> +#else
>>>
>>> This value seems quite strange. Can you give more detail?
>>>
>>>  	app->core_map = cpu_core_map_init(4, 32, 4, 0);
>>> +#endif
>>>
>>>  	if (app->core_map == NULL)
>>>  		rte_panic("Cannot create CPU core map\n");
>>> --
>>> 1.9.1
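(Following up on the first comment in this mail, a hypothetical sketch, not
part of this patch, of how the lcore_socket_id check removed in the
cpu_core_map.c hunk above could be relaxed only on Power while keeping the
existing behaviour on other architectures:)

	int lcore_socket_id =
		cpu_core_map_get_socket_id_linux(lcore_id);

	if (lcore_socket_id < 0) {
#if defined(RTE_ARCH_PPC_64)
		/* Offline lcore on Power (e.g. SMT off): skip it and
		 * keep scanning the remaining lcores. */
		continue;
#else
		/* Keep the original behaviour elsewhere. */
		return -1;
#endif
	}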