From: "Xu, Qian Q"
To: Thomas Monjalon
Cc: ci@dpdk.org; "Wei, FangfangX"; "Liu, Yong"
Date: Wed, 30 Nov 2016 06:49:28 +0000
Subject: Re: [dpdk-ci] Intel PerPatch Build

See below inline, thx.

> -----Original Message-----
> From: Thomas Monjalon [mailto:thomas.monjalon@6wind.com]
> Sent: Tuesday, November 29, 2016 5:21 PM
> To: Xu, Qian Q
> Cc: ci@dpdk.org; Wei, FangfangX; Liu, Yong
> Subject: Re: Intel PerPatch Build
>
> 2016-11-29 03:56, Xu, Qian Q:
> > I feel we need to split the report.
> > What do you think of having a report per OS? or a report per build?
> > It would show easily how big is the failure by looking at the counters and
> > descriptions in patchwork.
> > The email volume would be bigger but is it a problem?
> > ---- the current report is a per-patch build report, and for each patch we
> > now have 18 builds. We send out 1 build report for 1 patch. If we send
> > 18 reports for 1 patch, then it may be too many reports.
>
> Why is it too many reports?

For each patch we run 18 build tests, so what you propose would mean 18 reports per patch. We average ~30 patches per day (maybe not accurate), which gives 30 x 18 = 540 report mails every day; with 100 patches in one day, that becomes 1800 reports. That is what I mean by too many reports; it would be a disaster for the mailing list. I'm not sure you would like thousands of mails from one mailing list. Or do I misunderstand your point?
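To make the mail volume concrete, here is a quick back-of-the-envelope sketch in Python (the 18-builds figure and the 30/100 patches-per-day rates are the ones quoted above; the helper name is only illustrative):

    # One report per build instead of one per patch:
    # daily mail volume = patches/day x builds/patch.
    def daily_reports(patches_per_day, builds_per_patch=18):
        return patches_per_day * builds_per_patch

    for patches in (30, 100):
        print("%3d patches/day -> %4d report mails"
              % (patches, daily_reports(patches)))
    #  30 patches/day ->  540 report mails
    # 100 patches/day -> 1800 report mails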
> In one build test, there can be several errors (even if we stop at the first error).
> And one error can be seen in several build tests.
> So we cannot really count the number of real errors, but we can count the
> number of tests which are failing.
> Yes we could add the number of failed tests in a test report.

Yes, we can show the total number of failed tests in one test report, and that is also very simple to do.

> I just thought that it would be simpler to understand if sending 1 report per test.

I'm not sure it is simpler, but I can see the mail spam if we send 1 report per test.

> An example of the benefit of splitting:
> An error with recent GCC is a failure.
> An error with ICC or an old GCC may be just a warning.

Currently ICC and old GCC are not included in the per-patch build system.

> Finer is the grain of the reports, better is the patchwork overview.

Agreed, but maybe there is another way: fetch the failure numbers from the report itself. I also wonder whether, in the future, there will be IBM or ARM builds, or even more regression functional test reports. If the Intel IA builds take 18 lines to track how big a failure is, then IBM or ARM may add more lines; and what about functional test reports? If we run 100 functional tests per patch, we would need 100 test reports for each patch; with 100 patches in one day, that is 10k mails in one day...

From our monitoring of build errors, there are not many real errors at the moment; most failures are patch-apply failures, plus some build errors in certain configurations (gcc-debug, gcc-combined). The build is relatively stable now, so I am not sure how much value we would get from splitting the patch report. We used to provide a per-patchset report, and we have already split that into a per-patch report, which is already a finer grain of reporting.

> Today, we have few tests only. But later we could have more and more test
> instances sending the same kind of reports.
>
> Another example:
> If a test is failing for some reasons, it will fail for every patches.
> We must have a way to ignore it when reading the results.

How do we ignore it? By not sending the failure report? We need to analyze the failure reason and then decide whether the failure is the same for every patch, so the analysis is still needed here.

> If it is a separate report, it is easier to ignore.
>
> PS: please Qian, could you configure your email client to use reply quoting?

Yes, is it better now?
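For the "fetch the failure numbers from the report" idea above, a minimal sketch of what a patchwork-side script could do, assuming the combined per-patch report carries one result line per build (this line format is hypothetical, not the actual Intel report format):

    import re

    # Hypothetical combined per-patch report body, one result line per
    # build; the real report layout may differ.
    report = """\
    Build#1: OS=FC24 GCC=6.2 Result=PASS
    Build#2: OS=UB1604 GCC=5.4 Result=FAIL
    Build#3: OS=FreeBSD10 CLANG=3.8 Result=FAIL
    """

    total = len(report.splitlines())
    failed = len(re.findall(r"Result=FAIL", report))
    print("%d of %d builds failed" % (failed, total))  # -> 2 of 3 builds failed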