* Re: [dpdk-ci] [dpdk-test-report] [Intel PerPatch Build] |SUCCESS| pw17274 [PATCH, v2] maintainers: update testpmd maintainer
[not found] <587443$9qu68@orsmga002.jf.intel.com>
@ 2016-11-28 7:30 ` Xu, Qian Q
2016-11-28 10:08 ` [dpdk-ci] Intel PerPatch Build Thomas Monjalon
0 siblings, 1 reply; 11+ messages in thread
From: Xu, Qian Q @ 2016-11-28 7:30 UTC (permalink / raw)
To: test-report, Thomas Monjalon; +Cc: ci
Hi Thomas,
Please note that we have updated the Intel per-patch compilation check report; it is now a per-patch build report. You can find the test status in the report below. If you have any comments on the report, feel free to let us know.
-----Original Message-----
From: test-report [mailto:test-report-bounces@dpdk.org] On Behalf Of sys_stv@intel.com
Sent: Monday, November 28, 2016 2:38 PM
To: Wu, Jingjing <jingjing.wu@intel.com>; test-report@dpdk.org
Subject: [dpdk-test-report] [Intel PerPatch Build] |SUCCESS| pw17274 [PATCH, v2] maintainers: update testpmd maintainer
Test-Label: Intel Per-patch compilation check
Test-Status: SUCCESS
http://www.dpdk.org/dev/patchwork/patch/17274
Submitter: Jingjing Wu <jingjing.wu@intel.com>
Date: Mon, 28 Nov 2016 14:00:50 +0800
DPDK git baseline: Repo:dpdk, Branch:master, CommitID:3549dd67152d26e46b136286ee0296a9aaa1923d
Patch17274-17274 --> compile pass
Build Summary: 18 Builds Done, 18 Successful, 0 Failures
Test environment and configuration as below:
OS: FreeBSD10.3_64
Kernel Version:10.3-RELEASE
CPU info:CPU: Intel(R) Xeon(R) CPU E5-2699 v3 @ 2.30GHz (2294.75-MHz K8-class CPU)
GCC Version:gcc (FreeBSD Ports Collection) 4.8.5
Clang Version:3.4.1
x86_64-native-bsdapp-clang
x86_64-native-bsdapp-gcc
OS: RHEL7.2_64
Kernel Version:3.10.0-327.el7.x86_64
CPU info:Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz
GCC Version:gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-4)
Clang Version:3.4.2
i686-native-linuxapp-gcc
x86_64-native-linuxapp-gcc
x86_64-native-linuxapp-gcc-shared
OS: UB1604_64
Kernel Version:4.4.0-47-generic
CPU info:Intel(R) Xeon(R) CPU E5-2699 v3 @ 2.30GHz
GCC Version:gcc (Ubuntu 5.4.0-6ubuntu1~16.04.4) 5.4.0 20160609
Clang Version:3.8.0
i686-native-linuxapp-gcc
x86_64-native-linuxapp-clang
x86_64-native-linuxapp-gcc-shared
x86_64-native-linuxapp-gcc
OS: CentOS7_64
Kernel Version:3.10.0-327.el7.x86_64
CPU info:Intel(R) Xeon(R) CPU E5-2699 v3 @ 2.30GHz
GCC Version:gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-4)
Clang Version:3.4.2
i686-native-linuxapp-gcc
x86_64-native-linuxapp-clang
x86_64-native-linuxapp-gcc-shared
x86_64-native-linuxapp-gcc
OS: FC24_64
Kernel Version:4.8.6-201.fc24.x86_64
CPU info:Intel(R) Xeon(R) CPU E5-2699 v3 @ 2.30GHz
GCC Version:gcc (GCC) 6.2.1 20160916 (Red Hat 6.2.1-2)
Clang Version:3.8.0
x86_64-native-linuxapp-gcc-debug
i686-native-linuxapp-gcc
x86_64-native-linuxapp-clang
x86_64-native-linuxapp-gcc-shared
x86_64-native-linuxapp-gcc
DPDK STV team
* Re: [dpdk-ci] Intel PerPatch Build
2016-11-28 7:30 ` [dpdk-ci] [dpdk-test-report] [Intel PerPatch Build] |SUCCESS| pw17274 [PATCH, v2] maintainers: update testpmd maintainer Xu, Qian Q
@ 2016-11-28 10:08 ` Thomas Monjalon
2016-11-29 3:56 ` Xu, Qian Q
0 siblings, 1 reply; 11+ messages in thread
From: Thomas Monjalon @ 2016-11-28 10:08 UTC (permalink / raw)
To: Xu, Qian Q; +Cc: ci, fangfangx.wei
2016-11-28 07:30, Xu, Qian Q:
> Hi Thomas,
> Please note that we have updated the Intel per-patch compilation
> check report; it is now a per-patch build report. You can find the
> test status in the report below. If you have any comments on the
> report, feel free to let us know.
Thanks for improving the report.
I have a few comments.
The list of test reports is easier to read if every report title starts with
[dpdk-test-report] |SUCCESS|
(or |FAILURE|)
I think you can remove [Intel PerPatch Build] from the title.
I feel we need to split the report.
What do you think of having a report per OS, or a report per build?
It would make the scale of a failure easy to see from the counters
and descriptions in patchwork.
The email volume would be bigger, but is that a problem?
You need to use the script send-patch-report.sh.
It will get your reports integrated into patchwork (a minimal example follows below).
If I understand correctly, you test all patches of a series at once and send the
report for the last patch of the series?
I think it is a good option while waiting for real series support in patchwork
and dpdk-ci scripts.
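For illustration, a minimal sketch of wiring one build result into that script
(the flag names are assumptions for illustration, not a confirmed interface;
check the script's usage in the dpdk-ci repository before relying on them):

    #!/bin/sh
    # Hedged sketch: send one build report through dpdk-ci's
    # send-patch-report.sh so it lands in patchwork.
    PATCH_ID=17274                       # patchwork id of the tested patch
    LABEL="Intel Per-patch compilation check"
    STATUS=SUCCESS                       # or FAILURE
    ./tools/send-patch-report.sh -p "$PATCH_ID" -l "$LABEL" -s "$STATUS" \
        < build-report.txt               # report body from stdin (assumed)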
> -----Original Message-----
> From: test-report [mailto:test-report-bounces@dpdk.org] On Behalf Of sys_stv@intel.com
> Sent: Monday, November 28, 2016 2:38 PM
> To: Wu, Jingjing <jingjing.wu@intel.com>; test-report@dpdk.org
> Subject: [dpdk-test-report] [Intel PerPatch Build] |SUCCESS| pw17274 [PATCH, v2] maintainers: update testpmd maintainer
>
> Test-Label: Intel Per-patch compilation check
> Test-Status: SUCCESS
>
> http://www.dpdk.org/dev/patchwork/patch/17274
> Submitter: Jingjing Wu <jingjing.wu@intel.com>
> Date: Mon, 28 Nov 2016 14:00:50 +0800
> DPDK git baseline: Repo:dpdk, Branch:master, CommitID:3549dd67152d26e46b136286ee0296a9aaa1923d
>
> Patch17274-17274 --> compile pass
> Build Summary: 18 Builds Done, 18 Successful, 0 Failures
>
> Test environment and configuration as below:
> OS: FreeBSD10.3_64
> Kernel Version:10.3-RELEASE
> CPU info:CPU: Intel(R) Xeon(R) CPU E5-2699 v3 @ 2.30GHz (2294.75-MHz K8-class CPU)
> GCC Version:gcc (FreeBSD Ports Collection) 4.8.5
> Clang Version:3.4.1
> x86_64-native-bsdapp-clang
> x86_64-native-bsdapp-gcc
> OS: RHEL7.2_64
> Kernel Version:3.10.0-327.el7.x86_64
> CPU info:Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz
> GCC Version:gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-4)
> Clang Version:3.4.2
> i686-native-linuxapp-gcc
> x86_64-native-linuxapp-gcc
> x86_64-native-linuxapp-gcc-shared
> OS: UB1604_64
> Kernel Version:4.4.0-47-generic
> CPU info:Intel(R) Xeon(R) CPU E5-2699 v3 @ 2.30GHz
> GCC Version:gcc (Ubuntu 5.4.0-6ubuntu1~16.04.4) 5.4.0 20160609
> Clang Version:3.8.0
> i686-native-linuxapp-gcc
> x86_64-native-linuxapp-clang
> x86_64-native-linuxapp-gcc-shared
> x86_64-native-linuxapp-gcc
> OS: CentOS7_64
> Kernel Version:3.10.0-327.el7.x86_64
> CPU info:Intel(R) Xeon(R) CPU E5-2699 v3 @ 2.30GHz
> GCC Version:gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-4)
> Clang Version:3.4.2
> i686-native-linuxapp-gcc
> x86_64-native-linuxapp-clang
> x86_64-native-linuxapp-gcc-shared
> x86_64-native-linuxapp-gcc
> OS: FC24_64
> Kernel Version:4.8.6-201.fc24.x86_64
> CPU info:Intel(R) Xeon(R) CPU E5-2699 v3 @ 2.30GHz
> GCC Version:gcc (GCC) 6.2.1 20160916 (Red Hat 6.2.1-2)
> Clang Version:3.8.0
> x86_64-native-linuxapp-gcc-debug
> i686-native-linuxapp-gcc
> x86_64-native-linuxapp-clang
> x86_64-native-linuxapp-gcc-shared
> x86_64-native-linuxapp-gcc
>
> DPDK STV team
* Re: [dpdk-ci] Intel PerPatch Build
2016-11-28 10:08 ` [dpdk-ci] Intel PerPatch Build Thomas Monjalon
@ 2016-11-29 3:56 ` Xu, Qian Q
2016-11-29 9:20 ` Thomas Monjalon
0 siblings, 1 reply; 11+ messages in thread
From: Xu, Qian Q @ 2016-11-29 3:56 UTC (permalink / raw)
To: Thomas Monjalon; +Cc: ci, Wei, FangfangX, Liu, Yong
See my replies inline below, thanks.
-----Original Message-----
From: Thomas Monjalon [mailto:thomas.monjalon@6wind.com]
Sent: Monday, November 28, 2016 6:09 PM
To: Xu, Qian Q <qian.q.xu@intel.com>
Cc: ci@dpdk.org; Wei, FangfangX <fangfangx.wei@intel.com>
Subject: Re: Intel PerPatch Build
2016-11-28 07:30, Xu, Qian Q:
> Hi Thomas,
> Please note that we have updated the Intel per-patch compilation
> check report; it is now a per-patch build report. You can find the
> test status in the report below. If you have any comments on the
> report, feel free to let us know.
Thanks for improving the report.
I have a few comments.
The list of test reports is easier to read if every report title starts with [dpdk-test-report] |SUCCESS| (or |FAILURE|). I think you can remove [Intel PerPatch Build] from the title.
---Sure, Fangfang will change it.
I feel we need to split the report.
What do you think of having a report per OS, or a report per build?
It would make the scale of a failure easy to see from the counters and descriptions in patchwork.
The email volume would be bigger, but is that a problem?
----The current report is a per-patch build report, and for each patch we now have 18 builds. We send out one build report per patch. If we sent 18 reports per patch, that might be too many reports.
If you want to check how many builds failed (for example, 18 builds with 1 failure), there are two ways to check:
1. You can get the failure count from the Build Summary.
> Patch17274-17274 --> compile pass
> Build Summary: 18 Builds Done, 18 Successful, 0 Failures
2. We can add the failure count to the title, such as [dpdk-test-report] |2 FAILURE|xxxxx (a rough sketch of composing such a title follows below).
Does that make sense?
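For illustration, a minimal sketch of option 2, assuming a hypothetical
results.txt listing one per-build status per line:

    #!/bin/sh
    # Hedged sketch: derive the report subject from the failure count.
    # results.txt is an assumed input with one build status per line;
    # the patch id and title are example values from the report above.
    PATCH_ID=17274
    PATCH_TITLE="maintainers: update testpmd maintainer"
    FAILS=$(grep -c FAILURE results.txt)
    if [ "$FAILS" -eq 0 ]; then
        STATUS="SUCCESS"
    else
        STATUS="$FAILS FAILURE"
    fi
    echo "[dpdk-test-report] |$STATUS| pw$PATCH_ID $PATCH_TITLE"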
You need to use the script send-patch-report.sh.
It will get your reports integrated into patchwork.
---Fangfang will check your scripts and try them.
If I understand correctly, you test all patches of a series at once and send the report for the last patch of the series?
----No. We send a report for each patch of the series, not only the last one. The per-patch build check process is as follows.
1. We pull the patchset; say it has 10 patches. We apply all the patches to the git tree.
If they fail to apply on one tree, we try other trees (such as next-virtio, next-crypto, next-net); if any tree accepts them, we
consider the apply step OK. If no tree accepts them, we take it as a failure and send out 10 patch reports, each marked Failure.
2. If the patches apply cleanly, we run the per-patch build check one patch at a time. After all 10 patches have been built, we send the reports one by one.
For example, for 10 patches in one patchset we run 18 builds per patch, 180 builds in total for the patchset, then send a report for each patch after the last patch is done, as sketched below.
Test 1: patch 1: 18 builds
Test 2: patch 1 + patch 2: 18 builds
...
Test 10: patch 1 + ... + patch 10: 18 builds
Is it clear?
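For illustration, a minimal sketch of that cumulative loop, assuming a
hypothetical build_all_configs helper standing in for the 18 build
configurations:

    #!/bin/sh
    # Hedged sketch of the cumulative per-patch build check described above:
    # each iteration builds the tree with one more patch of the series applied.
    # build_all_configs is hypothetical; the real system runs 18 builds here.
    set -e
    git checkout master
    i=0
    for patch in patchset/*.patch; do
        i=$((i + 1))
        git am "$patch"            # tree now holds patches 1 .. i
        build_all_configs "$i"     # hypothetical: run the 18 build configs
    done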
I think it is a good option while waiting for real series support in patchwork and dpdk-ci scripts.
--We can provide per-patch results; there is no need for a patchset-level result.
> Patch17274-17274 --> compile pass
> Build Summary: 18 Builds Done, 18 Successful, 0 Failures
* Re: [dpdk-ci] Intel PerPatch Build
2016-11-29 3:56 ` Xu, Qian Q
@ 2016-11-29 9:20 ` Thomas Monjalon
2016-11-30 6:49 ` Xu, Qian Q
0 siblings, 1 reply; 11+ messages in thread
From: Thomas Monjalon @ 2016-11-29 9:20 UTC (permalink / raw)
To: Xu, Qian Q; +Cc: ci, Wei, FangfangX, Liu, Yong
2016-11-29 03:56, Xu, Qian Q:
> I feel we need to split the report.
> What do you think of having a report per OS, or a report per build?
> It would make the scale of a failure easy to see from the counters and descriptions in patchwork.
> The email volume would be bigger, but is that a problem?
> ----The current report is a per-patch build report, and for each patch we now have 18 builds. We send out one build report per patch. If we sent 18 reports per patch, that might be too many reports.
Why is it too many reports?
> If you want to check how many builds failed (for example, 18 builds with 1 failure), there are two ways to check:
> 1. You can get the failure count from the Build Summary.
> > Patch17274-17274 --> compile pass
> > Build Summary: 18 Builds Done, 18 Successful, 0 Failures
>
> 2. We can add the failure count to the title, such as [dpdk-test-report] |2 FAILURE|xxxxx
>
> Does that make sense?
In one build test, there can be several errors (even if we stop at the first error).
And one error can be seen in several build tests.
So we cannot really count the number of real errors, but we can count the number
of tests which are failing.
Yes, we could add the number of failed tests to a test report.
I just thought that it would be simpler to understand if we sent one report per test.
An example of the benefit of splitting:
An error with a recent GCC is a failure.
An error with ICC or an old GCC may be just a warning.
The finer the grain of the reports, the better the patchwork overview.
Today we have only a few tests. But later we could have more and more
test instances sending the same kind of reports.
Another example:
If a test is failing for some reason, it will fail for every patch.
We must have a way to ignore it when reading the results.
If it is a separate report, it is easier to ignore.
> If I understand correctly, you test all patches of a series at once and send the report for the last patch of the series?
> ----No. We send a report for each patch of the series, not only the last one. The per-patch build check process is as follows.
> 1. We pull the patchset; say it has 10 patches. We apply all the patches to the git tree.
> If they fail to apply on one tree, we try other trees (such as next-virtio, next-crypto, next-net); if any tree accepts them, we
> consider the apply step OK. If no tree accepts them, we take it as a failure and send out 10 patch reports, each marked Failure.
>
> 2. If the patches apply cleanly, we run the per-patch build check one patch at a time. After all 10 patches have been built, we send the reports one by one.
> For example, for 10 patches in one patchset we run 18 builds per patch, 180 builds in total for the patchset, then send a report for each patch after the last patch is done.
> Test 1: patch 1: 18 builds
> Test 2: patch 1 + patch 2: 18 builds
> ...
> Test 10: patch 1 + ... + patch 10: 18 builds
>
> Is it clear?
OK, perfect, thanks.
PS: please Qian, could you configure your email client to use reply quoting?
* Re: [dpdk-ci] Intel PerPatch Build
2016-11-29 9:20 ` Thomas Monjalon
@ 2016-11-30 6:49 ` Xu, Qian Q
2016-11-30 8:33 ` Thomas Monjalon
0 siblings, 1 reply; 11+ messages in thread
From: Xu, Qian Q @ 2016-11-30 6:49 UTC (permalink / raw)
To: Thomas Monjalon; +Cc: ci, Wei, FangfangX, Liu, Yong
See my replies inline below, thanks.
> -----Original Message-----
> From: Thomas Monjalon [mailto:thomas.monjalon@6wind.com]
> Sent: Tuesday, November 29, 2016 5:21 PM
> To: Xu, Qian Q <qian.q.xu@intel.com>
> Cc: ci@dpdk.org; Wei, FangfangX <fangfangx.wei@intel.com>; Liu, Yong
> <yong.liu@intel.com>
> Subject: Re: Intel PerPatch Build
>
> 2016-11-29 03:56, Xu, Qian Q:
> > I feel we need to split the report.
> What do you think of having a report per OS, or a report per build?
> It would make the scale of a failure easy to see from the counters and
> descriptions in patchwork.
> The email volume would be bigger, but is that a problem?
> ----The current report is a per-patch build report, and for each patch we now
> have 18 builds. We send out one build report per patch. If we sent 18 reports
> per patch, that might be too many reports.
>
> Why is it too many reports?
>
For each patch we have 18 build tests, and what you want is 18 reports per patch. If so: we normally average ~30 patches (maybe not accurate) per day, so
we would have 30 x 18 = 540 report mails every day; with 100 patches in one day, we would have 1800 reports. That is what I mean by too many reports; it would be a disaster for the mailing list.
I'm not sure you would like thousands of mails from one mailing list. Or do I misunderstand your point?
> In one build test, there can be several errors (even if we stop at the first error).
> And one error can be seen in several build tests.
> So we cannot really count the number of real errors, but we can count the
> number of tests which are failing.
> Yes, we could add the number of failed tests to a test report.
Yes, we can show the total number of failed tests in one test report; that is also very simple.
> I just thought that it would be simpler to understand if we sent one report per test.
Not sure it is simpler, but I can foresee mail spam with one report per test.
>
> An example of the benefit of splitting:
> An error with a recent GCC is a failure.
> An error with ICC or an old GCC may be just a warning.
Currently ICC and old GCC are not included in the per-patch build system.
> The finer the grain of the reports, the better the patchwork overview.
Agreed, but maybe there is another way: fetch the failure count from the report. I also wonder whether, in the future, there will be IBM or ARM builds or even more regression functional test reports.
If the Intel IA build takes 18 lines to track how big a failure is, then IBM or ARM may take more lines; and what about functional test reports? If we ran 100 functional tests per patch, then
we would need 100 test reports for each patch; with 100 patches in one day, that is 10k mails a day...
From our monitoring of build errors, there are not many errors currently; most failures are due to patch-apply failures and build errors in some configurations (gcc-debug, gcc-combined). The build is relatively
stable now, so I am not sure how much value we would get from splitting the patch report. Before, we provided a patchset report; now we have already split it into per-patch reports, which is already an
improved grain of report.
> Today we have only a few tests. But later we could have more and more test
> instances sending the same kind of reports.
>
> Another example:
> If a test is failing for some reason, it will fail for every patch.
> We must have a way to ignore it when reading the results.
How do we ignore it? By not sending the failure report? We would need to analyze the failure reason and then decide whether the failure is the same for every patch, so analysis is needed here.
> If it is a separate report, it is easier to ignore.
> PS: please Qian, could you configure your email client to use reply quoting?
Yes, is it better now?
* Re: [dpdk-ci] Intel PerPatch Build
2016-11-30 6:49 ` Xu, Qian Q
@ 2016-11-30 8:33 ` Thomas Monjalon
2016-11-30 9:25 ` Xu, Qian Q
0 siblings, 1 reply; 11+ messages in thread
From: Thomas Monjalon @ 2016-11-30 8:33 UTC (permalink / raw)
To: Xu, Qian Q; +Cc: ci, Wei, FangfangX, Liu, Yong
2016-11-30 06:49, Xu, Qian Q:
> From: Thomas Monjalon [mailto:thomas.monjalon@6wind.com]
> > 2016-11-29 03:56, Xu, Qian Q:
> > > I feel we need to split the report.
> > > What do you think of having a report per OS, or a report per build?
> > > It would make the scale of a failure easy to see from the counters and
> > > descriptions in patchwork.
> > > The email volume would be bigger, but is that a problem?
> > > ----The current report is a per-patch build report, and for each patch we now
> > > have 18 builds. We send out one build report per patch. If we sent 18 reports
> > > per patch, that might be too many reports.
> >
> > Why is it too many reports?
> >
> For each patch we have 18 build tests, and what you want is 18 reports per patch. If so: we normally average ~30 patches (maybe not accurate) per day, so
> we would have 30 x 18 = 540 report mails every day; with 100 patches in one day, we would have 1800 reports. That is what I mean by too many reports; it would be a disaster for the mailing list.
> I'm not sure you would like thousands of mails from one mailing list. Or do I misunderstand your point?
I think this mailing list does not aim to be human-readable.
The per-patch reports feed patchwork, and the daily reports should feed
another monitoring tool.
That's why I wonder whether it is important to minimize the number of emails.
> > In one build test, there can be several errors (even if we stop at the first error).
> > And one error can be seen in several build tests.
> > So we cannot really count the number of real errors, but we can count the
> > number of tests which are failing.
> > Yes, we could add the number of failed tests to a test report.
>
> Yes, we can show the total number of failed tests in one test report; that is also very simple.
>
> > I just thought that it would be simpler to understand if we sent one report per test.
>
> Not sure it is simpler, but I can foresee mail spam with one report per test.
It is not spam if you are not subscribed to the mailing list :)
> > An example of the benefit of splitting:
> > An error with a recent GCC is a failure.
> > An error with ICC or an old GCC may be just a warning.
>
> Currently ICC and old GCC are not included in the per-patch build system.
>
> > The finer the grain of the reports, the better the patchwork overview.
>
> Agreed, but maybe there is another way: fetch the failure count from the report. I also wonder whether, in the future, there will be IBM or ARM builds or even more regression functional test reports.
> If the Intel IA build takes 18 lines to track how big a failure is, then IBM or ARM may take more lines; and what about functional test reports? If we ran 100 functional tests per patch, then
> we would need 100 test reports for each patch; with 100 patches in one day, that is 10k mails a day...
>
> From our monitoring of build errors, there are not many errors currently; most failures are due to patch-apply failures and build errors in some configurations (gcc-debug, gcc-combined). The build is relatively
> stable now, so I am not sure how much value we would get from splitting the patch report. Before, we provided a patchset report; now we have already split it into per-patch reports, which is already an
> improved grain of report.
Yes, good point. OK to continue with only one report for the Intel build tests,
as errors are rare.
> > Today we have only a few tests. But later we could have more and more test
> > instances sending the same kind of reports.
> >
> > Another example:
> > If a test is failing for some reason, it will fail for every patch.
> > We must have a way to ignore it when reading the results.
>
> How do we ignore it? By not sending the failure report?
I meant visually ignoring it in the list of tests for a patch.
But there is probably a better approach, like comparing with the results
of the previous runs when making the report (a rough sketch below).
I suggest we keep it in mind for later, if we see this kind of issue.
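For illustration, a minimal sketch of that comparison, assuming hypothetical
prev_results.txt and curr_results.txt files with one "config status" pair per
line (e.g. "x86_64-native-linuxapp-gcc FAILURE"):

    #!/bin/sh
    # Hedged sketch: a failure also present in the previous run is likely
    # environmental, not caused by the patch under test.
    grep FAILURE curr_results.txt | while read -r config status; do
        if grep -q "^$config FAILURE$" prev_results.txt; then
            echo "$config: pre-existing failure, can be ignored for this patch"
        else
            echo "$config: new failure, report it"
        fi
    done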
> We would need to analyze the failure reason and then decide whether the failure is the same
> for every patch, so analysis is needed here.
> > If it is a separate report, it is easier to ignore.
>
>
>
> > PS: please Qian, could you configure your email client to use reply quoting?
> Yes, is it better now?
Yes, a lot better, thanks.
* Re: [dpdk-ci] Intel PerPatch Build
2016-11-30 8:33 ` Thomas Monjalon
@ 2016-11-30 9:25 ` Xu, Qian Q
2016-11-30 9:46 ` Thomas Monjalon
0 siblings, 1 reply; 11+ messages in thread
From: Xu, Qian Q @ 2016-11-30 9:25 UTC (permalink / raw)
To: Thomas Monjalon; +Cc: ci, Wei, FangfangX, Liu, Yong
Then the conclusion is that we keep the current model, one report per patch, right?
We will also keep in mind the point about error comparison; we can discuss it again if we see many of these issues.
As the next step, I think Fangfang can check how to push the per-patch build results to patchwork to make them more visible/readable on the website.
> -----Original Message-----
> From: Thomas Monjalon [mailto:thomas.monjalon@6wind.com]
> Sent: Wednesday, November 30, 2016 4:33 PM
> To: Xu, Qian Q <qian.q.xu@intel.com>
> Cc: ci@dpdk.org; Wei, FangfangX <fangfangx.wei@intel.com>; Liu, Yong
> <yong.liu@intel.com>
> Subject: Re: Intel PerPatch Build
>
> 2016-11-30 06:49, Xu, Qian Q:
> > From: Thomas Monjalon [mailto:thomas.monjalon@6wind.com]
> > > 2016-11-29 03:56, Xu, Qian Q:
> > > > I feel we need to split the report.
> > > > What do you think of having a report per OS, or a report per build?
> > > > It would make the scale of a failure easy to see from the
> > > > counters and descriptions in patchwork.
> > > > The email volume would be bigger, but is that a problem?
> > > > ----The current report is a per-patch build report, and for each
> > > > patch, we now have 18 builds. We send out one build report per patch.
> > > > If we sent 18 reports per patch, that might be too many reports.
> > >
> > > Why is it too many reports?
> > >
> > For each patch we have 18 build tests, and what you want is 18 reports
> > per patch. If so: we normally average ~30 patches (maybe not accurate)
> > per day, so we would have 30 x 18 = 540 report mails every day; with 100
> > patches in one day, we would have 1800 reports. That is what I mean by
> > too many reports; it would be a disaster for the mailing list.
> > I'm not sure you would like thousands of mails from one mailing list. Or do I
> > misunderstand your point?
>
> I think this mailing list does not aim to be human-readable.
> The per-patch reports feed patchwork, and the daily reports should feed another
> monitoring tool.
> That's why I wonder whether it is important to minimize the number of emails.
>
> > > In one build test, there can be several errors (even if we stop at the first
> > > error).
> > > And one error can be seen in several build tests.
> > > So we cannot really count the number of real errors, but we can
> > > count the number of tests which are failing.
> > > Yes, we could add the number of failed tests to a test report.
> >
> > Yes, we can show the total number of failed tests in one test report; that
> > is also very simple.
> >
> > > I just thought that it would be simpler to understand if we sent one report
> > > per test.
> >
> > Not sure it is simpler, but I can foresee mail spam with one report per test.
>
> It is not spam if you are not subscribed to the mailing list :)
>
> > > An example of the benefit of splitting:
> > > An error with a recent GCC is a failure.
> > > An error with ICC or an old GCC may be just a warning.
> >
> > Currently ICC and old GCC are not included in the per-patch build system.
> >
> > > The finer the grain of the reports, the better the patchwork overview.
> >
> > Agreed, but maybe there is another way: fetch the failure count from the
> > report. I also wonder whether, in the future, there will be IBM or ARM builds
> > or even more regression functional test reports.
> > If the Intel IA build takes 18 lines to track how big a failure is,
> > then IBM or ARM may take more lines; and what about functional test reports?
> > If we ran 100 functional tests per patch, then we would need 100 test reports
> > for each patch; with 100 patches in one day, that is 10k mails a day...
> >
> > From our monitoring of build errors, there are not many errors currently;
> > most failures are due to patch-apply failures and build errors in some
> > configurations (gcc-debug, gcc-combined). The build is relatively stable
> > now, so I am not sure how much value we would get from splitting the patch
> > report. Before, we provided a patchset report; now we have already split it
> > into per-patch reports, which is already an improved grain of report.
>
> Yes, good point. OK to continue with only one report for the Intel build tests,
> as errors are rare.
>
> > > Today we have only a few tests. But later we could have more and more
> > > test instances sending the same kind of reports.
> > >
> > > Another example:
> > > If a test is failing for some reason, it will fail for every patch.
> > > We must have a way to ignore it when reading the results.
> >
> > How do we ignore it? By not sending the failure report?
>
> I meant visually ignoring it in the list of tests for a patch.
> But there is probably a better approach, like comparing with the results of the
> previous runs when making the report.
> I suggest we keep it in mind for later, if we see this kind of issue.
>
> > We would need to analyze the failure reason and then decide whether the
> > failure is the same for every patch, so analysis is needed here.
>
> > > If it is a separate report, it is easier to ignore.
> >
> >
> >
> > > PS: please Qian, could you configure your email client to use reply quoting?
> > Yes, is it better now?
>
> Yes, a lot better, thanks.
* Re: [dpdk-ci] Intel PerPatch Build
2016-11-30 9:25 ` Xu, Qian Q
@ 2016-11-30 9:46 ` Thomas Monjalon
2016-12-01 9:00 ` Xu, Qian Q
0 siblings, 1 reply; 11+ messages in thread
From: Thomas Monjalon @ 2016-11-30 9:46 UTC (permalink / raw)
To: Xu, Qian Q; +Cc: ci, Wei, FangfangX, Liu, Yong
2016-11-30 09:25, Xu, Qian Q:
> Then the conclusion is that we keep the current model, one report per patch, right?
Yes
Thanks for the discussion
(I keep this discussion in mind in case it needs some improvement)
> We will also keep in mind the point about error comparison; we can discuss it again if we see many of these issues.
Yes
> As the next step, I think Fangfang can check how to push the per-patch build results to patchwork to make them more visible/readable on the website.
Yes please.
I'm also waiting to apply the patches. Should I apply them right now?
> > -----Original Message-----
> > From: Thomas Monjalon [mailto:thomas.monjalon@6wind.com]
> > Sent: Wednesday, November 30, 2016 4:33 PM
> > To: Xu, Qian Q <qian.q.xu@intel.com>
> > Cc: ci@dpdk.org; Wei, FangfangX <fangfangx.wei@intel.com>; Liu, Yong
> > <yong.liu@intel.com>
> > Subject: Re: Intel PerPatch Build
> >
> > 2016-11-30 06:49, Xu, Qian Q:
> > > From: Thomas Monjalon [mailto:thomas.monjalon@6wind.com]
> > > > 2016-11-29 03:56, Xu, Qian Q:
> > > > > I feel we need to split the report.
> > > > > What do you think of having a report per OS, or a report per build?
> > > > > It would make the scale of a failure easy to see from the
> > > > > counters and descriptions in patchwork.
> > > > > The email volume would be bigger, but is that a problem?
> > > > > ----The current report is a per-patch build report, and for each
> > > > > patch, we now have 18 builds. We send out one build report per patch.
> > > > > If we sent 18 reports per patch, that might be too many reports.
> > > >
> > > > Why is it too many reports?
> > > >
> > > For each patch we have 18 build tests, and what you want is 18 reports
> > > per patch. If so: we normally average ~30 patches (maybe not accurate)
> > > per day, so we would have 30 x 18 = 540 report mails every day; with 100
> > > patches in one day, we would have 1800 reports. That is what I mean by
> > > too many reports; it would be a disaster for the mailing list.
> > > I'm not sure you would like thousands of mails from one mailing list. Or do I
> > > misunderstand your point?
> >
> > I think this mailing list does not aim to be human-readable.
> > The per-patch reports feed patchwork, and the daily reports should feed another
> > monitoring tool.
> > That's why I wonder whether it is important to minimize the number of emails.
> >
> > > > In one build test, there can be several errors (even if we stop at the first
> > > > error).
> > > > And one error can be seen in several build tests.
> > > > So we cannot really count the number of real errors, but we can
> > > > count the number of tests which are failing.
> > > > Yes, we could add the number of failed tests to a test report.
> > >
> > > Yes, we can show the total number of failed tests in one test report; that
> > > is also very simple.
> > >
> > > > I just thought that it would be simpler to understand if we sent one report
> > > > per test.
> > >
> > > Not sure it is simpler, but I can foresee mail spam with one report per test.
> >
> > It is not spam if you are not subscribed to the mailing list :)
> >
> > > > An example of the benefit of splitting:
> > > > An error with a recent GCC is a failure.
> > > > An error with ICC or an old GCC may be just a warning.
> > >
> > > Currently ICC and old GCC are not included in the per-patch build system.
> > >
> > > > The finer the grain of the reports, the better the patchwork overview.
> > >
> > > Agreed, but maybe there is another way: fetch the failure count from the
> > > report. I also wonder whether, in the future, there will be IBM or ARM builds
> > > or even more regression functional test reports.
> > > If the Intel IA build takes 18 lines to track how big a failure is,
> > > then IBM or ARM may take more lines; and what about functional test reports?
> > > If we ran 100 functional tests per patch, then we would need 100 test reports
> > > for each patch; with 100 patches in one day, that is 10k mails a day...
> > >
> > > From our monitoring of build errors, there are not many errors currently;
> > > most failures are due to patch-apply failures and build errors in some
> > > configurations (gcc-debug, gcc-combined). The build is relatively stable
> > > now, so I am not sure how much value we would get from splitting the patch
> > > report. Before, we provided a patchset report; now we have already split it
> > > into per-patch reports, which is already an improved grain of report.
> >
> > Yes, good point. OK to continue with only one report for the Intel build tests,
> > as errors are rare.
> >
> > > > Today we have only a few tests. But later we could have more and more
> > > > test instances sending the same kind of reports.
> > > >
> > > > Another example:
> > > > If a test is failing for some reason, it will fail for every patch.
> > > > We must have a way to ignore it when reading the results.
> > >
> > > How do we ignore it? By not sending the failure report?
> >
> > I meant visually ignoring it in the list of tests for a patch.
> > But there is probably a better approach, like comparing with the results of the
> > previous runs when making the report.
> > I suggest we keep it in mind for later, if we see this kind of issue.
> >
> > > We would need to analyze the failure reason and then decide whether the
> > > failure is the same for every patch, so analysis is needed here.
> >
> > > > If it is a separate report, it is easier to ignore.
> > >
> > >
> > >
> > > > PS: please Qian, could you configure your email client to use reply quoting?
> > > Yes, is it better now?
> >
> > Yes, a lot better, thanks.
* Re: [dpdk-ci] Intel PerPatch Build
2016-11-30 9:46 ` Thomas Monjalon
@ 2016-12-01 9:00 ` Xu, Qian Q
2016-12-01 9:26 ` Thomas Monjalon
0 siblings, 1 reply; 11+ messages in thread
From: Xu, Qian Q @ 2016-12-01 9:00 UTC (permalink / raw)
To: Thomas Monjalon; +Cc: ci, Wei, FangfangX, Liu, Yong
> -----Original Message-----
> From: Thomas Monjalon [mailto:thomas.monjalon@6wind.com]
> Sent: Wednesday, November 30, 2016 5:47 PM
> To: Xu, Qian Q <qian.q.xu@intel.com>
> Cc: ci@dpdk.org; Wei, FangfangX <fangfangx.wei@intel.com>; Liu, Yong
> <yong.liu@intel.com>
> Subject: Re: Intel PerPatch Build
>
> 2016-11-30 09:25, Xu, Qian Q:
> > Then the conclusion is that we keep the current model, one report per patch,
> > right?
>
> Yes
> Thanks for the discussion
> (I keep this discussion in mind in case it needs some improvement)
>
> > We will also keep in mind the point about error comparison; we can discuss it
> > again if we see many of these issues.
>
> Yes
>
> > As the next step, I think Fangfang can check how to push the per-patch build
> > results to patchwork to make them more visible/readable on the website.
>
> Yes please.
> I'm also waiting to apply the patches. Should I apply them right now?
>
Do you mean your script patches? I think Fangfang may give some comments here.
* Re: [dpdk-ci] Intel PerPatch Build
2016-12-01 9:00 ` Xu, Qian Q
@ 2016-12-01 9:26 ` Thomas Monjalon
2016-12-01 9:29 ` Wei, FangfangX
0 siblings, 1 reply; 11+ messages in thread
From: Thomas Monjalon @ 2016-12-01 9:26 UTC (permalink / raw)
To: Xu, Qian Q; +Cc: ci, Wei, FangfangX, Liu, Yong
2016-12-01 09:00, Xu, Qian Q:
> From: Thomas Monjalon [mailto:thomas.monjalon@6wind.com]
> > 2016-11-30 09:25, Xu, Qian Q:
> > > Then the conclusion is that we keep the current model, one report per patch,
> > > right?
> >
> > Yes
> > Thanks for the discussion
> > (I keep this discussion in mind in case it needs some improvement)
> >
> > > We will also keep in mind the point about error comparison; we can discuss it
> > > again if we see many of these issues.
> >
> > Yes
> >
> > > As the next step, I think Fangfang can check how to push the per-patch build
> > > results to patchwork to make them more visible/readable on the website.
> >
> > Yes please.
> > I'm also waiting to apply the patches. Should I apply them right now?
> >
> Do you mean your script patches? I think Fangfang may give some comments here.
Yes please
* Re: [dpdk-ci] Intel PerPatch Build
2016-12-01 9:26 ` Thomas Monjalon
@ 2016-12-01 9:29 ` Wei, FangfangX
0 siblings, 0 replies; 11+ messages in thread
From: Wei, FangfangX @ 2016-12-01 9:29 UTC (permalink / raw)
To: Thomas Monjalon, Xu, Qian Q; +Cc: ci, Liu, Yong
Hi Thomas,
Sorry for the late reply; I have been working on other tasks these days and will start trying your scripts as soon as possible.
Best Regards
Fangfang Wei
-----Original Message-----
From: Thomas Monjalon [mailto:thomas.monjalon@6wind.com]
Sent: Thursday, December 1, 2016 5:27 PM
To: Xu, Qian Q <qian.q.xu@intel.com>
Cc: ci@dpdk.org; Wei, FangfangX <fangfangx.wei@intel.com>; Liu, Yong <yong.liu@intel.com>
Subject: Re: Intel PerPatch Build
2016-12-01 09:00, Xu, Qian Q:
> From: Thomas Monjalon [mailto:thomas.monjalon@6wind.com]
> > 2016-11-30 09:25, Xu, Qian Q:
> > > Then the conclusion is that we keep the current model, one report per patch,
> > > right?
> >
> > Yes
> > Thanks for the discussion
> > (I keep this discussion in mind in case it needs some improvement)
> >
> > > We will also keep in mind the point about error comparison; we can discuss it
> > > again if we see many of these issues.
> >
> > Yes
> >
> > > As the next step, I think Fangfang can check how to push the per-patch build
> > > results to patchwork to make them more visible/readable on the website.
> >
> > Yes please.
> > I'm also waiting to apply the patches. Should I apply them right now?
> >
> Do you mean your script patches? I think Fangfang may give some comments here.
Yes please