DPDK CI discussions
From: "Xu, Qian Q" <qian.q.xu@intel.com>
To: Thomas Monjalon <thomas.monjalon@6wind.com>
Cc: "ci@dpdk.org" <ci@dpdk.org>,
	"Wei, FangfangX" <fangfangx.wei@intel.com>,
	"Liu, Yong" <yong.liu@intel.com>
Subject: Re: [dpdk-ci] Intel PerPatch Build
Date: Wed, 30 Nov 2016 06:49:28 +0000	[thread overview]
Message-ID: <82F45D86ADE5454A95A89742C8D1410E39277FB3@shsmsx102.ccr.corp.intel.com> (raw)
In-Reply-To: <8972903.ZmvP4EtD2m@xps13>

See below inline, thx. 

> -----Original Message-----
> From: Thomas Monjalon [mailto:thomas.monjalon@6wind.com]
> Sent: Tuesday, November 29, 2016 5:21 PM
> To: Xu, Qian Q <qian.q.xu@intel.com>
> Cc: ci@dpdk.org; Wei, FangfangX <fangfangx.wei@intel.com>; Liu, Yong
> <yong.liu@intel.com>
> Subject: Re: Intel PerPatch Build
> 
> 2016-11-29 03:56, Xu, Qian Q:
> > I feel we need to split the report.
> > What do you think of having a report per OS? or a report per build?
> > It would show easily how big is the failure by looking at the counters and
> descriptions in patchwork.
> > The email volume would be bigger but is it a problem?
> > ---- Currently the report is a per-patch build report; for each patch we now
> > have 18 builds, and we send one build report per patch. If we send 18 reports
> > for one patch, it may be too many reports.
> 
> Why is it too many reports?
> 
For each patch we run 18 build tests, and what you propose is 18 reports per patch. If so, with an average of roughly ~30 patches per day (maybe not accurate), we would send
30 x 18 = 540 report mails every day; on a day with 100 patches, that becomes 1800 reports. That is what I mean by too many reports: it would be a disaster for the mailing list.
I'm not sure you would like thousands of mails from one mailing list. Or do I misunderstand your point?
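To make the arithmetic concrete, here is a rough back-of-envelope sketch in Python (all counts are the estimates above, not measured figures):

    # Rough estimate of daily report-mail volume; all counts are assumptions.
    def mails_per_day(patches_per_day, reports_per_patch):
        return patches_per_day * reports_per_patch

    print(mails_per_day(30, 1))    # one report per patch:   30 mails/day
    print(mails_per_day(30, 18))   # one report per build:  540 mails/day
    print(mails_per_day(100, 18))  # a 100-patch day:      1800 mails/day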

> In one build test, there can be several errors (even if we stop at the first error).
> And one error can be seen in several build tests.
> So we cannot really count the number of real errors, but we can count the
> number of tests which are failing.
> Yes we could add the number of failed tests in a test report.
Yes, we can show the total number of failed tests in one test report; that is also simple to do.
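As an illustrative sketch only (the result data, the "gcc-default" name, and the exact output format here are hypothetical, not our real report generator), aggregating all builds into one report with a failure counter could look like:

    # Illustrative only: aggregate per-build results into one report summary.
    # The (config, passed) pairs below are hypothetical sample data.
    results = [
        ("gcc-debug", False),
        ("gcc-combined", False),
        ("gcc-default", True),
        # ... the remaining build configurations
    ]
    failed = [cfg for cfg, ok in results if not ok]
    print("Total builds: %d, Failed: %d" % (len(results), len(failed)))
    print("Failing configs: %s" % ", ".join(failed))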

> I just thought that it would be simpler to understand if sending 1 report per test.
I'm not sure it is simpler, but I can foresee mail spam if we send one report per test.

> 
> An example of the benefit of splitting:
> An error with recent GCC is a failure.
> An error with ICC or an old GCC may be just a warning.
Currently, ICC and older GCC versions are not included in the per-patch build system.

> The finer the grain of the reports, the better the patchwork overview.
Agreed, but maybe there is another way: fetch the failure counts from the report itself. I also wonder whether, in the future, there will be IBM or ARM builds, or even more functional regression test reports.
If the Intel IA build already takes 18 lines to track how big a failure is, IBM or ARM may need even more lines, and what about functional test reports? If we run 100 functional tests per patch,
we would need 100 test reports for each patch; with 100 patches in one day, that is 10,000 mails a day.
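For contrast, the same back-of-envelope arithmetic for split versus consolidated reporting in that scenario (hypothetical numbers from above):

    # Hypothetical scaling: split per-test reports vs one report per patch.
    patches, tests = 100, 100
    print(patches * tests)  # split reports:        10000 mails/day
    print(patches)          # consolidated reports:   100 mails/day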

From our monitoring of build errors, there are not many at the moment; most failures are due to patch-apply failures, plus some build errors in particular configurations (gcc-debug, gcc-combined). The build is
relatively stable now, so I'm not sure how much value we would get from splitting the patch report further. We previously provided one report per patch set, and we have already split that into one report
per patch, which is itself a finer grain of reporting.

> Today, we have few tests only. But later we could have more and more test
> instances sending the same kind of reports.
> 
> Another example:
> If a test is failing for some reason, it will fail for every patch.
> We must have a way to ignore it when reading the results.
How would we ignore it? By not sending the failure report? We need to analyze the failure reason and then decide whether the failure is the same for every patch, so analysis is required here.
> If it is a separate report, it is easier to ignore.



> PS: please Qian, could you configure your email client to use reply quoting?
Yes, is it better now? 


Thread overview: 11+ messages
2016-11-28  7:30 ` [dpdk-ci] [dpdk-test-report] [Intel PerPatch Build] |SUCCESS| pw17274 [PATCH, v2] maintainers: update testpmd maintainer Xu, Qian Q
2016-11-28 10:08   ` [dpdk-ci] Intel PerPatch Build Thomas Monjalon
2016-11-29  3:56     ` Xu, Qian Q
2016-11-29  9:20       ` Thomas Monjalon
2016-11-30  6:49         ` Xu, Qian Q [this message]
2016-11-30  8:33           ` Thomas Monjalon
2016-11-30  9:25             ` Xu, Qian Q
2016-11-30  9:46               ` Thomas Monjalon
2016-12-01  9:00                 ` Xu, Qian Q
2016-12-01  9:26                   ` Thomas Monjalon
2016-12-01  9:29                     ` Wei, FangfangX
