From: gowrishankar muthukrishnan <gowrishankar.m@linux.vnet.ibm.com>
To: Chao Zhu
Cc: dev@dpdk.org, 'Bruce Richardson', 'Konstantin Ananyev', 'Thomas Monjalon', 'Cristian Dumitrescu', 'Pradeep'
Date: Fri, 12 Aug 2016 16:04:03 +0530
Subject: Re: [dpdk-dev] [PATCH v4 3/6] ip_pipeline: fix lcore mapping for varying SMT threads as in ppc64
In-Reply-To: <000201d1f482$76c2aa90$6447ffb0$@linux.vnet.ibm.com>
List-Id: patches and discussions about DPDK

On Friday 12 August 2016 03:45 PM, Chao Zhu wrote:
> Another comment is, commenting out the lcore_socket_id check will
> influence other architectures. If possible, I would like to make this
> change Power-specific.
Hi Chao,
I am revisiting the cpu_core_map_init() function. I realize that all it
handles are the maximum socket/core/HT values, so ideally the function
should not break on any architecture. In my quick test without adjusting
the previous maximum values (4, 32, 4, 0), creating the CPU map panics on
ppc only when ht > 1; i.e. (4, 32, 1, 0) is able to create the core map
and the app runs. I will continue to debug this and fix this example app
separately.

So I will be sending the powerpc-specific enablement of lpm, acl, port,
table and sched in v5. The ip_pipeline example will be fixed in a
separate patch.

Thanks,
Gowrishankar

> -----Original Message-----
> From: gowrishankar muthukrishnan
> [mailto:gowrishankar.m@linux.vnet.ibm.com]
> Sent: August 12, 2016 17:00
> To: Chao Zhu
> Cc: dev@dpdk.org; 'Bruce Richardson'; 'Konstantin Ananyev';
> 'Thomas Monjalon'; 'Cristian Dumitrescu'; 'Pradeep'
> Subject: Re: [PATCH v4 3/6] ip_pipeline: fix lcore mapping for varying
> SMT threads as in ppc64
>
> On Friday 12 August 2016 02:14 PM, Chao Zhu wrote:
>> Gowrishankar,
>>
>> I suggest to set the following values:
>>
>> n_max_cores_per_socket = 8
>> n_max_ht_per_core = 8
>>
>> This will cover most of the Power8 servers.
>> Any comments?
>
> Sure, Chao. I will include this change in v5. If there are no other
> comments, I can spin out v5 with the changes in this patch.
>
> Regards,
> Gowrishankar
>
>> -----Original Message-----
>> From: gowrishankar muthukrishnan
>> [mailto:gowrishankar.m@linux.vnet.ibm.com]
>> Sent: August 11, 2016 20:02
>> To: Chao Zhu
>> Cc: dev@dpdk.org; 'Bruce Richardson'; 'Konstantin Ananyev';
>> 'Thomas Monjalon'; 'Cristian Dumitrescu'; 'Pradeep'
>> Subject: Re: [PATCH v4 3/6] ip_pipeline: fix lcore mapping for varying
>> SMT threads as in ppc64
>>
>> On Thursday 11 August 2016 03:59 PM, Chao Zhu wrote:
>>> Gowrishankar,
>>>
>>> Thanks for the detail.
>>> If my understanding is correct, Power8 has different chips. Some of
>>> the OpenPOWER chips have 8 cores per socket, and the max threads per
>>> core is 8. Should we support this in cpu_core_map_init()?
>>>
>>> Here's a dump from the OpenPOWER system.
>>> ======================================
>>> # lscpu
>>> Architecture:          ppc64le
>>> Byte Order:            Little Endian
>>> CPU(s):                64
>>> On-line CPU(s) list:   0,8,16,24,32,40,48,56
>>> Off-line CPU(s) list:  1-7,9-15,17-23,25-31,33-39,41-47,49-55,57-63
>>> Thread(s) per core:    1
>>> Core(s) per socket:    8
>>> Socket(s):             1
>>> NUMA node(s):          1
>>> Model:                 unknown
>>> L1d cache:             64K
>>> L1i cache:             32K
>>> L2 cache:              512K
>>> L3 cache:              8192K
>>> NUMA node0 CPU(s):     0,8,16,24,32,40,48,56
>>> =========================================
>>>
>>>> +#if defined(RTE_ARCH_PPC_64)
>>>> +	app->core_map = cpu_core_map_init(2, 5, 1, 0);
>>>> +#else
>>>>
>>>> This value seems quite strange. Can you give more detail?
>>
>> Based on the config of the tested server (output below):
>>
>> CPU(s):                80
>> On-line CPU(s) list:   0,8,16,24,32,40,48,56,64,72
>> Off-line CPU(s) list:  1-7,9-15,17-23,25-31,33-39,41-47,49-55,57-63,65-71,73-79
>> Thread(s) per core:    1  <<<
>> Core(s) per socket:    5  <<<
>> Socket(s):             2  <<<
>> NUMA node(s):          2
>>
>> the cpu_core_map_init parameters (2, 5, 1, 0) were prepared. Instead,
>> we can cap the max socket/core/HT counts to the possible maximum
>> supported today.
>>
>> Regards,
>> Gowrishankar
>>>>	app->core_map = cpu_core_map_init(4, 32, 4, 0);
>>>> +#endif
>>>
>>> -----Original Message-----
>>> From: gowrishankar muthukrishnan
>>> [mailto:gowrishankar.m@linux.vnet.ibm.com]
>>> Sent: August 9, 2016 19:14
>>> To: Chao Zhu; dev@dpdk.org
>>> Cc: 'Bruce Richardson'; 'Konstantin Ananyev'; 'Thomas Monjalon';
>>> 'Cristian Dumitrescu'; 'Pradeep'
>>> Subject: Re: [PATCH v4 3/6] ip_pipeline: fix lcore mapping for
>>> varying SMT threads as in ppc64
>>>
>>> Hi Chao,
>>> Sure. Please find below one.
>>>
>>> This patch fixes the ip_pipeline panic in app_init_core_map while
>>> preparing the cpu core map on powerpc with SMT off.
>>> cpu_core_map_compute_linux currently prepares the core mapping based
>>> on the existence of the following files in sysfs, i.e.
>>>
>>> /sys/devices/system/cpu/cpu<N>/topology/physical_package_id
>>> /sys/devices/system/cpu/cpu<N>/topology/core_id
>>>
>>> These files do not exist for lcores which are offline for any reason
>>> (as on powerpc, while SMT is off). In this situation, the function
>>> should continue preparing the map for the other online lcores instead
>>> of returning -1 at the first unavailable lcore.
>>>
>>> Also, in the SMT=off scenario on powerpc, lcore ids cannot always be
>>> indexed from 0 up to 'number of cores present'
>>> (/sys/devices/system/cpu/present). For example, for online lcore 32,
>>> the core_id returned in sysfs is 112 whereas the number of online
>>> lcores is 10 (as in one configuration); hence the sysfs lcore id
>>> cannot be checked against the indexing lcore number before
>>> positioning the lcore map array.
>>>
>>> Thanks,
>>> Gowrishankar
>>>
>>> On Tuesday 09 August 2016 02:37 PM, Chao Zhu wrote:
>>>> Gowrishankar,
>>>>
>>>> Can you give more description about this patch?
>>>> Thank you!
>>>>
>>>> -----Original Message-----
>>>> From: Gowrishankar Muthukrishnan
>>>> [mailto:gowrishankar.m@linux.vnet.ibm.com]
>>>> Sent: August 6, 2016 20:33
>>>> To: dev@dpdk.org
>>>> Cc: Chao Zhu; Bruce Richardson; Konstantin Ananyev;
>>>> Thomas Monjalon; Cristian Dumitrescu; Pradeep; gowrishankar
>>>> Subject: [PATCH v4 3/6] ip_pipeline: fix lcore mapping for varying
>>>> SMT threads as in ppc64
>>>>
>>>> From: gowrishankar
>>>>
>>>> An offline lcore would still refer to its original core id, and this
>>>> has to be considered while creating the cpu core mask.
>>>>
>>>> Signed-off-by: Gowrishankar
>>>> ---
>>>>  config/defconfig_ppc_64-power8-linuxapp-gcc |  3 ---
>>>>  examples/ip_pipeline/cpu_core_map.c         | 12 +-----------
>>>>  examples/ip_pipeline/init.c                 |  4 ++++
>>>>  3 files changed, 5 insertions(+), 14 deletions(-)
>>>>
>>>> diff --git a/config/defconfig_ppc_64-power8-linuxapp-gcc
>>>> b/config/defconfig_ppc_64-power8-linuxapp-gcc
>>>> index dede34f..a084672 100644
>>>> --- a/config/defconfig_ppc_64-power8-linuxapp-gcc
>>>> +++ b/config/defconfig_ppc_64-power8-linuxapp-gcc
>>>> @@ -58,6 +58,3 @@ CONFIG_RTE_LIBRTE_FM10K_PMD=n
>>>>
>>>>  # This following libraries are not available on Power. So they're turned off.
>>>>  CONFIG_RTE_LIBRTE_SCHED=n
>>>> -CONFIG_RTE_LIBRTE_PORT=n
>>>> -CONFIG_RTE_LIBRTE_TABLE=n
>>>> -CONFIG_RTE_LIBRTE_PIPELINE=n
>>>> diff --git a/examples/ip_pipeline/cpu_core_map.c
>>>> b/examples/ip_pipeline/cpu_core_map.c
>>>> index cb088b1..482e68e 100644
>>>> --- a/examples/ip_pipeline/cpu_core_map.c
>>>> +++ b/examples/ip_pipeline/cpu_core_map.c
>>>> @@ -351,9 +351,6 @@ cpu_core_map_compute_linux(struct cpu_core_map *map)
>>>>  			int lcore_socket_id =
>>>>  				cpu_core_map_get_socket_id_linux(lcore_id);
>>>>
>>>> -			if (lcore_socket_id < 0)
>>>> -				return -1;
>>>> -
>>>>  			if (((uint32_t) lcore_socket_id) == socket_id)
>>>>  				n_detected++;
>>>>  		}
>>>> @@ -368,18 +365,11 @@ cpu_core_map_compute_linux(struct cpu_core_map *map)
>>>>  					cpu_core_map_get_socket_id_linux(
>>>>  					lcore_id);
>>>>
>>>> -				if (lcore_socket_id < 0)
>>>> -					return -1;
>>>> -
>>>>
>>>> Why remove the lcore_socket_id check?
>>>>
>>>>  				int lcore_core_id =
>>>>  					cpu_core_map_get_core_id_linux(
>>>>  					lcore_id);
>>>>
>>>> -				if (lcore_core_id < 0)
>>>> -					return -1;
>>>> -
>>>> -				if (((uint32_t) lcore_socket_id ==
>>>> -					socket_id) &&
>>>> -					((uint32_t) lcore_core_id ==
>>>> -					core_id)) {
>>>> +				if ((uint32_t) lcore_socket_id == socket_id) {
>>>>  					uint32_t pos = cpu_core_map_pos(map,
>>>>  						socket_id,
>>>>  						core_id_contig,
>>>> diff --git a/examples/ip_pipeline/init.c
>>>> b/examples/ip_pipeline/init.c
>>>> index cd167f6..60c931f 100644
>>>> --- a/examples/ip_pipeline/init.c
>>>> +++ b/examples/ip_pipeline/init.c
>>>> @@ -61,7 +61,11 @@ static void
>>>>  app_init_core_map(struct app_params *app) {
>>>>  	APP_LOG(app, HIGH, "Initializing CPU core map ...");
>>>> +#if defined(RTE_ARCH_PPC_64)
>>>> +	app->core_map = cpu_core_map_init(2, 5, 1, 0);
>>>> +#else
>>>>
>>>> This value seems quite strange. Can you give more detail?
>>>>
>>>>  	app->core_map = cpu_core_map_init(4, 32, 4, 0);
>>>> +#endif
>>>>
>>>>  	if (app->core_map == NULL)
>>>>  		rte_panic("Cannot create CPU core map\n");
>>>> --
>>>> 1.9.1