From: "Xu, Qian Q" <qian.q.xu@intel.com>
To: Thomas Monjalon <thomas.monjalon@6wind.com>
CC: "ci@dpdk.org" <ci@dpdk.org>, "Wei, FangfangX" <fangfangx.wei@intel.com>,
 "Liu, Yong" <yong.liu@intel.com>
Date: Wed, 30 Nov 2016 09:25:47 +0000
Subject: Re: [dpdk-ci] Intel PerPatch Build

Then the conclusion is that we just keep the current model, one report
per patch, right?
Also, let's keep in mind the points about the error comparison; we can
discuss it again if we see many such issues.
As for the next step, I think Fangfang can check how to push the
per-patch build result to patchwork, to make the result more
visible/readable on the website.
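For reference, here is a rough sketch of how one per-patch result could
be pushed to patchwork as a "check" (the URL, token, context label and
patch id are placeholders, and the checks REST endpoint and field names
are an assumption about our patchwork instance, not something verified):

import requests

PATCHWORK_API = "https://patches.dpdk.org/api"   # placeholder URL
TOKEN = "xxxxxxxx"                               # CI account API token

def post_check(patch_id, passed, log_url):
    payload = {
        "context": "Intel-per-patch-build",       # label shown in patchwork
        "state": "success" if passed else "fail",
        "target_url": log_url,                    # link to the full build log
        "description": "18/18 builds passed" if passed else "build failure",
    }
    r = requests.post(
        "%s/patches/%d/checks/" % (PATCHWORK_API, patch_id),
        headers={"Authorization": "Token %s" % TOKEN},
        data=payload,
    )
    r.raise_for_status()
    return r.json()

# example: post_check(12345, passed=False, log_url="http://.../build.log")

Then the pass/fail state and a link to the build log would show up
directly next to the patch in the patchwork web interface.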

> -----Original Message-----
> From: Thomas Monjalon [mailto:thomas.monjalon@6wind.com]
> Sent: Wednesday, November 30, 2016 4:33 PM
> To: Xu, Qian Q <qian.q.xu@intel.com>
> Cc: ci@dpdk.org; Wei, FangfangX <fangfangx.wei@intel.com>; Liu, Yong
> <yong.liu@intel.com>
> Subject: Re: Intel PerPatch Build
>
> 2016-11-30 06:49, Xu, Qian Q:
> > From: Thomas Monjalon [mailto:thomas.monjalon@6wind.com]
> > > 2016-11-29 03:56, Xu, Qian Q:
> > > > I feel we need to split the report.
> > > > What do you think of having a report per OS? Or a report per build?
> > > > It would easily show how big the failure is, by looking at the
> > > > counters and descriptions in patchwork.
> > > > The email volume would be bigger, but is it a problem?
> > > > ---- The current report is a per-patch build report, and for each
> > > > patch we now have 18 builds. We send out 1 build report per patch.
> > > > If we sent 18 reports per patch, it may be too many reports.
> > >
> > > Why is it too many reports?
> > >
> > For each patch, we have 18 build tests, and what you want is 18
> > reports per patch. If so, we normally have on average ~30 patches
> > (maybe not accurate) every day, so we would have 30x18=540 report
> > mails per day; and if we have 100 patches in one day, then we would
> > have 1800 reports. So I mean it would be too many reports. It would
> > be a disaster for the mailing list.
> > I'm not sure you would like thousands of mails from one mailing
> > list. Or do I misunderstand your point?
>
> I think this mailing list does not aim to be human readable.
> The per-patch reports feed patchwork, and the daily reports should
> feed another monitoring tool.
> That's why I wonder whether it is important to minimize the number of
> emails.
>
> > > In one build test, there can be several errors (even if we stop at
> > > the first error).
> > > And one error can be seen in several build tests.
> > > So we cannot really count the number of real errors, but we can
> > > count the number of tests which are failing.
> > > Yes, we could add the number of failed tests in a test report.
> >
> > Yes, we can show the total number of failed tests in one test
> > report, and it's also very simple.
> >
> > > I just thought that it would be simpler to understand if sending
> > > 1 report per test.
> >
> > Not sure if it's simpler, but I can see mail spam if we send 1
> > report per test.
>
> It is not spam if you are not subscribed to the mailing list :)
>
> > > An example of the benefit of splitting:
> > > An error with recent GCC is a failure.
> > > An error with ICC or an old GCC may be just a warning.
> >
> > Currently, ICC and old GCC are not included in the per-patch build
> > system.
> >
> > > The finer the grain of the reports, the better the patchwork
> > > overview.
> >
> > Agreed, but maybe there is another way: fetch the failure numbers
> > from the report. I also wonder whether, in the future, there will be
> > IBM or ARM builds or even more regression functional test reports.
> > If we use 18 lines for the Intel IA build to track how big the
> > failure is, then IBM or ARM may add more lines; and what about
> > functional test reports? If we run 100 functional tests per patch,
> > then we need 100 test reports for each patch, and with 100 patches
> > in one day we would have 10k mails per day...
> >
> > From our monitoring of the build errors, there are not many errors
> > currently; most failures are due to patch apply failures and build
> > errors in some configurations (gcc-debug, gcc-combined). The build
> > is relatively stable now, so I am not sure how much value we can get
> > from splitting the patch report. Before, we provided a patchset
> > report, and now we have already split it into per-patch reports,
> > which is also an improved grain of the report.
>
> Yes, good point. OK to continue with only one report for the Intel
> build tests, as errors are rare.
>
> > > Today, we have only a few tests. But later we could have more and
> > > more test instances sending the same kind of reports.
> > >
> > > Another example:
> > > If a test is failing for some reason, it will fail for every patch.
> > > We must have a way to ignore it when reading the results.
> >
> > How to ignore it? Don't send the failure report?
>
> I meant visually ignore it in the list of tests for a patch.
> But there is probably a better approach, like comparing with the
> results of the previous tests when making the report.
> I suggest keeping it in mind for later, if we see this kind of issue.
>
> > We need to analyze the failure reason and then decide whether the
> > failure is the same for every patch, so some analysis is needed here.
>
> > > If it is a separate report, it is easier to ignore.
> >
> > > PS: please Qian, could you configure your email client to use
> > > reply quoting?
> > Yes, is it better now?
>
> Yes, a lot better, thanks.