DPDK CI discussions
* Re: [dpdk-ci] [dpdk-moving] proposal for DPDK CI improvement
       [not found]   ` <58200E0A.4010804@intel.com>
@ 2016-11-07  9:59     ` Thomas Monjalon
  2016-11-07 14:59       ` Liu, Yong
  0 siblings, 1 reply; 10+ messages in thread
From: Thomas Monjalon @ 2016-11-07  9:59 UTC (permalink / raw)
  To: Liu, Yong; +Cc: moving, ci

For discussing the details of the CI, I suggest we move to the new
mailing list ci@dpdk.org (CC) and stop bothering recipients of
moving@dpdk.org.

2016-11-07 13:15, Liu, Yong:
> My main concern about patchwork is that its page can't show
> enough information.
> Developers may need the detailed build log or functional test log to
> narrow down an issue.

Your concern should already be addressed:
the detailed reports are available at http://dpdk.org/ml/archives/test-report/.
When viewing a patch in patchwork, the tests are listed with a brief result,
and the detailed report of each test is linked there.
Example:
	dpdk.org/patch/16960
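
As an illustration, a minimal sketch of how a test result could be attached
to a patch through the patchwork REST "checks" endpoint; the URL, token
handling, and availability of this API depend on the patchwork version, so
treat the details as assumptions to verify:

    import requests

    # Hypothetical endpoint and values; adjust to the actual patchwork
    # instance and API version before use.
    api = "https://patches.dpdk.org/api/patches/16960/checks/"
    payload = {
        "state": "success",              # one of: pending, success, warning, fail
        "context": "intel-compilation",  # which test produced this result
        "description": "Compilation OK",
        "target_url": "http://dpdk.org/ml/archives/test-report/",  # full report
    }
    resp = requests.post(api, json=payload,
                         headers={"Authorization": "Token <api-token>"})
    resp.raise_for_status()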


* Re: [dpdk-ci] [dpdk-moving] proposal for DPDK CI improvement
       [not found]   ` <82F45D86ADE5454A95A89742C8D1410E3923B784@shsmsx102.ccr.corp.intel.com>
@ 2016-11-07 10:17     ` Thomas Monjalon
  2016-11-07 10:26       ` Jerome Tollet (jtollet)
  2016-11-07 12:20       ` Xu, Qian Q
  0 siblings, 2 replies; 10+ messages in thread
From: Thomas Monjalon @ 2016-11-07 10:17 UTC (permalink / raw)
  To: Xu, Qian Q; +Cc: moving, Liu, Yong, ci

Hi Qian,

2016-11-07 07:55, Xu, Qian Q:
> I think the discussion about CI is a good start. I agree with the general ideas:
> 1. It's good to have more contributors for CI; it's a community effort.
> 2. Building a distributed CI system is good and necessary.
> 3. "When and Where" are the basic and most important questions.
> 
> Let me add my 2 cents here.
> 1. Distributed tests vs a centralized lab
> We can run the build and functional tests in our distributed labs. As for
> performance, as we all know, performance is key to DPDK, so I suggest a
> centralized lab for the performance testing. Some comments below:
> a). Do we want to publish performance reports for different platforms with
> different HW/NICs? Is anyone against publishing performance numbers?
> b). If the answer to the first question is "Yes", how do we ensure others
> trust the numbers, and how can they reproduce them without the platforms/HW?
> As Marvin said, transparency and independence are the advantages of an open
> centralized lab. Besides, we can demonstrate DPDK performance to any
> audience with the lab. Of course, we need control of the system and cannot
> allow random access; access control is a separate topic. The lab could even
> serve as a training or demo lab when we hold community trainings or
> performance demo days (to name a couple of possible events).
> 
> 2. Besides "When and Where", there are "What" and "How".
> When:
> 	- regularly on a git tree -- what tests need to be done here? I propose a
> daily build, daily functional regression, and daily performance regression.
> 	- after each patch submission -> report available via patchwork -- what
> tests need to be done? The build test first; maybe we can add functional or
> performance tests in the future.
>
> How do we collect and display the results?
> Thanks Thomas for the hard work on the patchwork upgrade. It's good to see
> the checkpatch display there.
> IMHO, building the complete distributed system needs a very big effort.
> Thomas, any effort estimate and schedule for it?

It must be a collective effort.
I plan to publish a new git repository really soon to help build a test lab.
The first version will allow sending correctly formatted test reports.
The next step will be to help with applying patches (on the right branch, with series support).
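
As a hedged sketch of what the report-sending part could look like (the
Test-Label/Test-Status field names, addresses, and subject layout are
assumptions for illustration; the real format will be defined by the scripts
in that repository):

    import smtplib
    from email.message import EmailMessage

    # Hypothetical report layout, for illustration only.
    msg = EmailMessage()
    msg["From"] = "ci-robot@example.com"
    msg["To"] = "test-report@dpdk.org"
    msg["Subject"] = "| SUCCESS | pw16960 [PATCH] example"
    msg.set_content(
        "Test-Label: intel-compilation\n"
        "Test-Status: SUCCESS\n"
        "http://dpdk.org/patch/16960\n"
    )
    with smtplib.SMTP("localhost") as smtp:
        smtp.send_message(msg)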

> a). Currently, there are only "S/W/F" (Success/Warning/Fail) counters in tests, so do they refer to build tests, functional tests, or performance tests?

It can be any test, including performance ones. A major performance regression
must be seen as a failed test.
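
For example, one hypothetical mapping from a measured throughput delta to a
test status (the thresholds are invented for illustration, not an agreed
policy):

    # Map a performance delta against a baseline to an S/W/F status.
    def perf_status(baseline_mpps: float, measured_mpps: float) -> str:
        delta_pct = (measured_mpps - baseline_mpps) / baseline_mpps * 100
        if delta_pct <= -5.0:   # major regression: report a failed test
            return "FAILURE"
        if delta_pct <= -1.0:   # small drop, possibly noise: a warning
            return "WARNING"
        return "SUCCESS"

    # perf_status(14.88, 13.90) -> "FAILURE" (about a 6.6% drop)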

> If it only refers to the build test, then you may need to change the title
> to Build S/W/F. And how many architectures or platforms are covered by the
> builds? For example, we support the Intel IA build, the ARM build, and the
> IBM POWER build. Then we may need to collect build results from
> Intel/IBM/ARM etc. to show the total S/W/F. For example, if the build
> passes on IA but fails on IBM, we need to record it as 1S/0W/1F. I don't
> know whether we need to collect the warning information here.

The difference between warnings and failures is a matter of severity.
The checkpatch errors are reported as warnings.
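
The counter arithmetic from the 1S/0W/1F example above is easy to sketch
(platform names and result labels are illustrative):

    from collections import Counter

    # One build result per platform, aggregated into the S/W/F counters
    # shown by patchwork (pass on IA, fail on IBM POWER -> 1S/0W/1F).
    results = [("IA", "SUCCESS"), ("POWER", "FAILURE")]
    counts = Counter(status for _platform, status in results)
    print("%dS/%dW/%dF" % (counts["SUCCESS"], counts["WARNING"],
                           counts["FAILURE"]))
    # -> 1S/0W/1F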

> b). What about displaying performance results on the website? Whether the
> lab is distributed or centralized, we need a place to show the performance
> numbers or trends to ensure there is no regression. Do you have any plan to
> implement it?

No, I have no plan, but I expect it to be solved by those working on
performance tests, maybe you? :)
If a private lab can publish some web graphs of performance evolution, it is great.
If we can do it in a centralized lab, it is also great.
If we can have a web interface gathering every performance number and graph,
it is really, really great!

> 3. A proposal: a CI mailing list for people working on CI, with regular
> meetings discussing only CI? Maybe we can have more frequent meetings at
> first to get aligned, then reduce the frequency once the solution settles
> down. The current call covers many other topics. What do you think?

The mailing list is now created: ci@dpdk.org.
About meetings, I feel we can start working through ci@dpdk.org and see
how efficient it is. Though if you need a meeting, feel free to propose one.


* Re: [dpdk-ci] [dpdk-moving] proposal for DPDK CI improvement
  2016-11-07 10:17     ` Thomas Monjalon
@ 2016-11-07 10:26       ` Jerome Tollet (jtollet)
  2016-11-07 10:34         ` O'Driscoll, Tim
  2016-11-07 12:20       ` Xu, Qian Q
  1 sibling, 1 reply; 10+ messages in thread
From: Jerome Tollet (jtollet) @ 2016-11-07 10:26 UTC (permalink / raw)
  To: Thomas Monjalon, Xu, Qian Q; +Cc: moving, Liu, Yong, ci

Hi Thomas & Qian,
IMHO, performance results should be centralized and executed in a trusted & controlled environment.
If official DPDK numbers come from vendors' private labs, the perception might be that they are not 100% neutral. That would probably not help the DPDK community to be seen as open & transparent.

Jerome


* Re: [dpdk-ci] [dpdk-moving] proposal for DPDK CI improvement
  2016-11-07 10:26       ` Jerome Tollet (jtollet)
@ 2016-11-07 10:34         ` O'Driscoll, Tim
  2016-11-07 10:47           ` Arnon Warshavsky
  2016-11-07 10:56           ` Thomas Monjalon
  0 siblings, 2 replies; 10+ messages in thread
From: O'Driscoll, Tim @ 2016-11-07 10:34 UTC (permalink / raw)
  To: Jerome Tollet (jtollet), Thomas Monjalon, Xu, Qian Q
  Cc: moving, Liu, Yong, ci


> -----Original Message-----
> From: moving [mailto:moving-bounces@dpdk.org] On Behalf Of Jerome Tollet
> (jtollet)
> Sent: Monday, November 7, 2016 10:27 AM
> To: Thomas Monjalon <thomas.monjalon@6wind.com>; Xu, Qian Q
> <qian.q.xu@intel.com>
> Cc: moving@dpdk.org; Liu, Yong <yong.liu@intel.com>; ci@dpdk.org
> Subject: Re: [dpdk-moving] proposal for DPDK CI improvement
> 
> Hi Thomas & Qian,
> IMHO, performance results should be centralized and executed in a
> trusted & controlled environment.
> If official DPDK numbers come from vendors' private labs, the perception
> might be that they are not 100% neutral. That would probably not help
> the DPDK community to be seen as open & transparent.

+1

Somebody (Jan Blunck I think) also said on last week's call that performance testing was a higher priority than CI for a centralized lab. A model with centralized performance testing and distributed CI might work well.


* Re: [dpdk-ci] [dpdk-moving] proposal for DPDK CI improvement
  2016-11-07 10:34         ` O'Driscoll, Tim
@ 2016-11-07 10:47           ` Arnon Warshavsky
  2016-11-07 10:56           ` Thomas Monjalon
  1 sibling, 0 replies; 10+ messages in thread
From: Arnon Warshavsky @ 2016-11-07 10:47 UTC (permalink / raw)
  To: O'Driscoll, Tim
  Cc: Jerome Tollet (jtollet),
	Thomas Monjalon, Xu, Qian Q, moving, Liu, Yong, ci


On Mon, Nov 7, 2016 at 12:34 PM, O'Driscoll, Tim <tim.odriscoll@intel.com>
wrote:

>
> > From: Jerome Tollet (jtollet)
> >
> > Hi Thomas & Qian,
> > IMHO, performance results should be centralized and executed in a
> > trusted & controlled environment.
> > If official DPDK numbers come from vendors' private labs, the perception
> > might be that they are not 100% neutral. That would probably not help
> > the DPDK community to be seen as open & transparent.
>
> +1
>
> Somebody (Jan Blunck I think) also said on last week's call that
> performance testing was a higher priority than CI for a centralized lab. A
> model with centralized performance testing and distributed CI might
> work well.



+1 to the above approach, yet I still see value in publishing both types
of performance results as long as they are clearly separated.
This might need a way to retroactively mark some results as "proved
invalid", but on the other hand it would encourage a cycle of promoting
distributed tests that prove correct and unbiased into the central tests.

/Arnon



* Re: [dpdk-ci] [dpdk-moving] proposal for DPDK CI improvement
  2016-11-07 10:34         ` O'Driscoll, Tim
  2016-11-07 10:47           ` Arnon Warshavsky
@ 2016-11-07 10:56           ` Thomas Monjalon
  1 sibling, 0 replies; 10+ messages in thread
From: Thomas Monjalon @ 2016-11-07 10:56 UTC (permalink / raw)
  To: O'Driscoll, Tim, Jerome Tollet (jtollet)
  Cc: Xu, Qian Q, moving, Liu, Yong, ci

2016-11-07 10:34, O'Driscoll, Tim:
> From: Jerome Tollet
> > 
> > Hi Thomas & Qian,
> > IMHO, performance results should be centralized and executed in a
> > trusted & controlled environment.
> > If official DPDK numbers come from vendors' private labs, the perception
> > might be that they are not 100% neutral. That would probably not help
> > the DPDK community to be seen as open & transparent.
> 
> +1
> 
> Somebody (Jan Blunck I think) also said on last week's call that
> performance testing was a higher priority than CI for a centralized lab.
> A model with centralized performance testing and distributed CI
> might work well.

+1

Having some trusted performance numbers is a top priority.
I hope a budget in the foundation can solve it.

I was just trying to say that numbers from private labs can bring some
diversity and may also be valuable.


* Re: [dpdk-ci] [dpdk-moving] proposal for DPDK CI improvement
  2016-11-07 10:17     ` Thomas Monjalon
  2016-11-07 10:26       ` Jerome Tollet (jtollet)
@ 2016-11-07 12:20       ` Xu, Qian Q
  2016-11-07 12:51         ` Thomas Monjalon
  1 sibling, 1 reply; 10+ messages in thread
From: Xu, Qian Q @ 2016-11-07 12:20 UTC (permalink / raw)
  To: Thomas Monjalon; +Cc: moving, Liu, Yong, ci

Thomas,
Thanks for your quick response. See my replies inline below.

-----Original Message-----
From: Thomas Monjalon [mailto:thomas.monjalon@6wind.com] 
Sent: Monday, November 7, 2016 6:18 PM
To: Xu, Qian Q <qian.q.xu@intel.com>
Cc: moving@dpdk.org; Liu, Yong <yong.liu@intel.com>; ci@dpdk.org
Subject: Re: [dpdk-moving] proposal for DPDK CI improvement


> a). Currently, there are only "S/W/F" (Success/Warning/Fail) counters in tests, so do they refer to build tests, functional tests, or performance tests?

It can be any test, including performance ones. A major performance regression must be seen as a failed test. 

[Qian] If it can refer to any test, how do we know which tests have been done? For example, some patches may only get a build test, some a performance test, and some a functional test.
How do we differentiate these test executions?

> If it only refers to the build test, then you may need to change the title
> to Build S/W/F. And how many architectures or platforms are covered by the
> builds? For example, we support the Intel IA build, the ARM build, and the IBM POWER build. Then we may need to collect build results from Intel/IBM/ARM etc. to show the total S/W/F. For example, if the build passes on IA but fails on IBM, we need to record it as 1S/0W/1F. I don't know whether we need to collect the warning information here.

The difference between warnings and failures is a matter of severity.
The checkpatch errors are reported as warnings.

> b). What about displaying performance results on the website? Whether the
> lab is distributed or centralized, we need a place to show the performance numbers or trends to ensure there is no regression. Do you have any plan to implement it?

No, I have no plan, but I expect it to be solved by those working on performance tests, maybe you? :) If a private lab can publish some web graphs of performance evolution, it is great.
If we can do it in a centralized lab, it is also great.
If we can have a web interface gathering every performance number and graph, it is really, really great!

[Qian] Internally at Intel, we have put some effort into the design and implementation of a performance web interface, but it has only just started. If the community needs our effort on the performance
report center, we should discuss the ownership, resources, requirements, and plan.

> 3. A proposal: a CI mailing list for people working on CI, with regular
> meetings discussing only CI? Maybe we can have more frequent meetings at first to get aligned, then reduce the frequency once the solution settles down. The current call covers many other topics. What do you think?

The mailing list is now created: ci@dpdk.org.
About meetings, I feel we can start working through ci@dpdk.org and see how efficient it is. Though if you need a meeting, feel free to propose one.
[Qian] I saw there is a community call tomorrow night at 11pm PRC time. I wonder whether all the CI people are in the EU; if there are no US people, we might prefer an earlier slot such as 9-10pm PRC time.
We can join this week's community call and decide whether to have a separate CI-only meeting next week, more focused and more efficient.


* Re: [dpdk-ci] [dpdk-moving] proposal for DPDK CI improvement
  2016-11-07 12:20       ` Xu, Qian Q
@ 2016-11-07 12:51         ` Thomas Monjalon
  2016-11-07 14:22           ` Xu, Qian Q
  0 siblings, 1 reply; 10+ messages in thread
From: Thomas Monjalon @ 2016-11-07 12:51 UTC (permalink / raw)
  To: Xu, Qian Q; +Cc: Liu, Yong, ci

I'm removing the list moving@dpdk.org as we are discussing implementation details.

2016-11-07 12:20, Xu, Qian Q:
> > a). Currently, there are only "S/W/F" (Success/Warning/Fail) counters
> > in tests, so do they refer to build tests, functional tests, or
> > performance tests?
> 
> It can be any test, including performance ones.
> A major performance regression must be seen as a failed test. 
> 
> [Qian] If it can refer to any test, how do we know which tests have
> been done? For example, some patches may only get a build test, some a
> performance test, and some a functional test.
> How do we differentiate these test executions?

Why do you want to differentiate them in patchwork?

There are 3 views of the test reports:
1/ On a patchwork page listing patches, we see the S/W/F counters, so
we can give attention to patches with warnings or failures.
2/ On a patchwork page showing only one patch, we see the list of
tests, the results, and the links to the reports.
3/ In a detailed report from the test-report ml archives, we can see
exactly what went wrong or what was tested successfully (and what
the numbers are in the case of a performance test).
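
One way the per-patch view can distinguish test types is to report each test
under its own context, so the page lists each result separately instead of
one aggregated counter; a hedged sketch, with invented context names and a
hypothetical post_check() helper in the spirit of the REST example earlier
in this thread:

    # One check per test type; contexts are illustrative, not an agreed
    # naming scheme. post_check() is a hypothetical wrapper around the
    # REST call sketched earlier.
    for context, state in [("build-ia", "success"),
                           ("build-power", "fail"),
                           ("func-regression", "warning")]:
        post_check(patch_id=16960, context=context, state=state)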


* Re: [dpdk-ci] [dpdk-moving] proposal for DPDK CI improvement
  2016-11-07 12:51         ` Thomas Monjalon
@ 2016-11-07 14:22           ` Xu, Qian Q
  0 siblings, 0 replies; 10+ messages in thread
From: Xu, Qian Q @ 2016-11-07 14:22 UTC (permalink / raw)
  To: Thomas Monjalon; +Cc: Liu, Yong, ci

See below. 

-----Original Message-----
From: Thomas Monjalon [mailto:thomas.monjalon@6wind.com] 
Sent: Monday, November 7, 2016 8:51 PM
To: Xu, Qian Q <qian.q.xu@intel.com>
Cc: Liu, Yong <yong.liu@intel.com>; ci@dpdk.org
Subject: Re: [dpdk-moving] proposal for DPDK CI improvement

I'm removing the list moving@dpdk.org as we are discussing implementation details.

2016-11-07 12:20, Xu, Qian Q:
> > a). Currently, there are only "S/W/F" (Success/Warning/Fail) counters
> > in tests, so do they refer to build tests, functional tests, or
> > performance tests?
> 
> It can be any test, including performance ones.
> A major performance regression must be seen as a failed test. 
> 
> [Qian] If it can refer to any test, how do we know which tests have
> been done? For example, some patches may only get a build test, some a
> performance test, and some a functional test.
> How do we differentiate these test executions?

Why do you want to differentiate them in patchwork?
There are 3 views of the test reports:
1/ On a patchwork page listing patches, we see the S/W/F counters, so we can give attention to patches with warnings or failures.
2/ On a patchwork page showing only one patch, we see the list of tests, the results, and the links to the reports.
3/ In a detailed report from the test-report ml archives, we can see exactly what went wrong or what was tested successfully (and what the numbers are in the case of a performance test).

---Differentiating the tests (build, functional, and performance) would give a clear view of which tests were run and which failed, without having to click
through to the report.
Now all tests share one column for pass/fail/warning. For example, when we see that a patch's tests failed, we don't know which test failed, or even
which tests were executed: if the build fails and no functional tests run, it shows as Failed; and if the build passes but one functional test fails,
it also shows as Failed. So from "Failed" alone we can't tell at which level the failure occurred.
For one patch, I think we may have different requirements for build, functional, and performance test failures.
Passing the build is the first priority; no errors are allowed. For functional tests, some failures may be acceptable if the pass rate is over 90%
(see the sketch below). For performance, the failure criteria are not clear yet, since performance fluctuates and some features bring a performance
drop as the cost of their implementation. If we can't differentiate the tests, we can't easily judge whether a failure is really critical, and we
don't even know which tests were executed.
Similarly, for a patch we have Acked-by, Tested-by, and Reviewed-by, which distinguish the different aspects of the patch review status. If we just
put it as Pass or Failed, how could we know whether it failed review, testing, or acking? We wouldn't even know which activity was done. Even if
there is a link to click for more details about Acked-by, Tested-by, and Reviewed-by, it's not so convenient. The same reasoning applies to
differentiating the tests.
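
A sketch of such per-level criteria (the thresholds are examples for
discussion, not an agreed community policy):

    # Hypothetical policy: build must be perfect; functional tests
    # tolerate a small failure rate, reported as a warning.
    def functional_status(passed: int, total: int) -> str:
        rate = passed / total
        if rate == 1.0:
            return "SUCCESS"
        return "WARNING" if rate > 0.9 else "FAILURE"

    # functional_status(19, 20) -> "WARNING" (95% pass rate)
    # functional_status(15, 20) -> "FAILURE" (75% pass rate)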

Btw, do you want every patch to get functional and performance tests? As we discussed before, we currently have 4 repos, so we may not be sure which repo a patch should go to.
That's fine for the build, but for the functional tests, a patch applied to the wrong repo may fail the tests, and we need to think about how to solve that issue.
In the short term, I suggest we keep only the build test for each patch, and run the functional and performance tests on a daily basis on the git trees.


* Re: [dpdk-ci] [dpdk-moving] proposal for DPDK CI improvement
  2016-11-07  9:59     ` [dpdk-ci] [dpdk-moving] proposal for DPDK CI improvement Thomas Monjalon
@ 2016-11-07 14:59       ` Liu, Yong
  0 siblings, 0 replies; 10+ messages in thread
From: Liu, Yong @ 2016-11-07 14:59 UTC (permalink / raw)
  To: Thomas Monjalon; +Cc: moving, ci

> -----Original Message-----
> From: Thomas Monjalon [mailto:thomas.monjalon@6wind.com]
> Sent: Monday, November 07, 2016 6:00 PM
> To: Liu, Yong
> Cc: moving@dpdk.org; ci@dpdk.org
> Subject: Re: [dpdk-moving] proposal for DPDK CI improvement
> 
> 
> Your concern should already be addressed:
> the detailed reports are available at http://dpdk.org/ml/archives/test-
> report/.
> When viewing a patch in patchwork, the tests are listed with a brief
> result, and the detailed report of each test is linked there.
> Example:
> 	dpdk.org/patch/16960

Thanks, Thomas. I hadn't noticed that the checkpatch function had been integrated into patchwork.
It looks like we can easily enhance the per-patch-set status display by adding more columns like "Checks".


end of thread

Thread overview: 10+ messages
     [not found] <86228AFD5BCD8E4EBFD2B90117B5E81E60310FA1@SHSMSX103.ccr.corp.intel.com>
     [not found] ` <3804736.OkjAMiHs6v@xps13>
     [not found]   ` <58200E0A.4010804@intel.com>
2016-11-07  9:59     ` [dpdk-ci] [dpdk-moving] proposal for DPDK CI improvement Thomas Monjalon
2016-11-07 14:59       ` Liu, Yong
     [not found]   ` <82F45D86ADE5454A95A89742C8D1410E3923B784@shsmsx102.ccr.corp.intel.com>
2016-11-07 10:17     ` Thomas Monjalon
2016-11-07 10:26       ` Jerome Tollet (jtollet)
2016-11-07 10:34         ` O'Driscoll, Tim
2016-11-07 10:47           ` Arnon Warshavsky
2016-11-07 10:56           ` Thomas Monjalon
2016-11-07 12:20       ` Xu, Qian Q
2016-11-07 12:51         ` Thomas Monjalon
2016-11-07 14:22           ` Xu, Qian Q
