DPDK community structure changes
* [dpdk-moving] proposal for DPDK CI improvement
@ 2016-11-05  4:47 Liu, Yong
  2016-11-05 19:15 ` Thomas Monjalon
  0 siblings, 1 reply; 12+ messages in thread
From: Liu, Yong @ 2016-11-05  4:47 UTC (permalink / raw)
  To: moving

I'm Marvin, writing on behalf of the Intel STV team. As we move to the Linux
Foundation, this is a good opportunity to discuss how to enhance the DPDK CI
infrastructure.

Currently, DPDK CI is done in a distributed way: several companies run their
own CI tests internally. Some of them (Intel and IBM) provide their test
reports to the mailing list, but others keep their test results for internal
use only.

There are two possible approaches that we can consider for improving DPDK CI:

1. Create a centralized DPDK CI lab and build up the required infrastructure.
2. Continue with a distributed model but improve reporting and visibility.

We think the main advantages of a centralized approach are:
Transparency: Everybody can see and access the test infrastructure: exactly
              how the servers are configured, what tests have been run, and
              their results. The community can review and agree collectively
              when new tests are required.
Flexibility:  Testing can be performed on demand. Instead of submitting a
              patch, having it tested by a distributed CI infrastructure, and
              then getting the test results, a developer can access the CI
              infrastructure and trigger the tests manually before submitting
              the patch, thus speeding up development and shortening the test
              cycle.
Independence: Instead of each vendor providing their own performance results,
              having these generated in a centralized lab run by an
              independent body will increase the confidence that DPDK users
              have in the test results.

There is an example of how this was done for another project, FD.io CSIT
(https://wiki.fd.io/view/CSIT).

Their wiki page shows how the servers are configured and how test cases are
run on them. The test reports for all releases can be found on that page, and
you can browse the detailed test report for each release by following the
link. Their Jenkins link shows the trend of the project's status.
              
The main disadvantage of a centralized approach is that relocating equipment
from separate vendor labs will require a project budget. The available budget
will determine which infrastructure can be deployed in the public test lab.

In the distributed model, we essentially continue as we are at present.
Vendors can independently choose the CI tests that they run, and the reports
that they choose to make public. We can add enhancements to Patchwork to
display test results and make tracking easier, and can also look at other
ways to make test reports more visible.

The main advantage of a distributed approach is:
There's no requirement for a project budget.

The disadvantages of a distributed approach are:
We lose the benefits of transparency, independence and the ability to run
tests on demand that are described under the centralized approach above. CI
testing and the publication of the results remain under the control of
vendors (or others who choose to run CI tests).

Based on the above, we would like to propose a centralized CI lab.
Details of the required budget and funding for this will obviously need to be
determined, but for now our proposal will focus purely on the technical scope
and benefits.

------------------------------------------------------------------------------

At the moment, the hardware requirements below are identified only for Intel
platforms. We'd like to encourage input from other vendors on additional
hardware requirements to support their platforms.

The items below are prioritized so that we can determine the cut-off point
based on the final budget that is available for this activity.

  Daily Performance Test
    Scope: An l3fwd performance test will be run once per day on the master
    branch to identify any performance degradation. The test will use a
    software packet generator (could be pktgen or TRex).
  
    Test execution time:
      About 60 minutes for RFC2544 (see the sketch after this item)
    Priority: 1
  
    Hardware Requirements: 
      For x86 this will require two 2U servers, one to run the packet
      generator and one to act as the device under test (DUT). 

      In order to make sure the test results are consistent from one run to
      the next, we recommend allocating dedicated servers. If the budget
      doesn't allow that many servers, the performance test beds can be
      shared with other testing, such as the regression unit and build tests,
      to maximize the utilization of the infrastructure.
    
      Hardware requirements for other CPU architectures need to be determined
      by the relevant vendors.
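
    As a rough illustration of where the ~60-minute figure comes from:
    RFC2544 searches for the highest zero-loss rate, with each trial running
    for a fixed duration. Below is a minimal Python sketch of that search
    loop; the send_at_rate() hook and its numbers are hypothetical stand-ins
    for a real pktgen/TRex control API, not part of any actual generator
    interface.

      # Minimal sketch of an RFC2544-style zero-loss throughput search.
      def send_at_rate(rate_mpps, duration_s):
          """Placeholder for the packet-generator hook (pktgen or TRex
          would be driven here). For illustration, pretend the device
          under test starts dropping packets above 10 Mpps."""
          return 0 if rate_mpps <= 10.0 else 1

      def rfc2544_search(max_rate_mpps, trial_seconds=60,
                         precision_mpps=0.01):
          """Binary-search the highest rate with zero packet loss."""
          low, high, best = 0.0, max_rate_mpps, 0.0
          while high - low > precision_mpps:
              rate = (low + high) / 2
              if send_at_rate(rate, trial_seconds) == 0:
                  best, low = rate, rate   # no loss: try a higher rate
              else:
                  high = rate              # loss seen: back off
          return best

      # 14.88 Mpps is the 64-byte line rate at 10GbE.
      print(rfc2544_search(max_rate_mpps=14.88))

    That search needs roughly eleven 60-second trials per frame size, so
    over a handful of frame sizes this roughly accounts for the quoted hour.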

  Automated Patch Compilation Test (Pre check-in Patch Testing):
    Scope: When developers submit patches to the dev@dpdk.org mailing list, a
    back-end automation script will be triggered automatically to run a
    compilation test (a sketch of this flow follows at the end of this item).
    The results will be sent to the test-report@dpdk.org mailing list and to
    the patch author. To deliver a timely report for each patch, the
    automated patch compilation test only verifies the patch on a few OSVs,
    so it can't achieve the same coverage as the daily compilation test on
    the master branch.

    Testing should ideally be performed on all supported CPU platforms (x86,
    ARM, Power 8, TILE-Gx etc.), but this will depend on which vendors are
    willing to contribute hardware platforms to the DPDK CI lab.
    Testing will be performed on multiple operating systems (Fedora, Ubuntu,
    RHEL etc.), kernel versions (latest stable Linux kernel + the default
    kernel in each OSV) and compiler versions (GCC/ICC/Clang).

    Test execution time: 
      5 mins per patch, average 30 min per patch set 
    Priority: 2

    Hardware Requirements: 
      For x86 this will require one dedicated 2U server. Because the tests
      will be run frequently (every time a new patch or patch set is
      submitted), it's not realistic to run this testing on shared servers. 
      Combinations of operating systems, kernels and compiler versions will be
      configured as separate VMs on the server.

    Hardware requirements for other CPU architectures need to be determined
    by the relevant vendors.
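
    A minimal Python sketch of the trigger-and-build step described above.
    The repository path and build invocation are illustrative assumptions,
    not the actual dpdk-ci tooling; the tree is assumed to be configured
    beforehand.

      import subprocess

      REPO = "/srv/dpdk"   # assumed local clone of dpdk.git

      def compile_test(patch_mbox):
          """Apply one patch (an mbox file) on master and try to build."""
          subprocess.run(["git", "-C", REPO, "checkout", "-f", "master"],
                         check=True)
          if subprocess.run(["git", "-C", REPO, "am", patch_mbox]).returncode:
              subprocess.run(["git", "-C", REPO, "am", "--abort"])
              return False, "patch does not apply"
          build = subprocess.run(["make", "-C", REPO, "-j8"],
                                 capture_output=True, text=True)
          return build.returncode == 0, build.stdout + build.stderr

    The (ok, log) result would then be mailed to test-report@dpdk.org and
    the patch author in whatever report format the community agrees on.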
 
  Automated Daily Compilation Test:
    Scope: This is similar to the previous item but run once per day on the
    master branch and on the latest stable/LTS release branches.
    Code merges can sometimes break compilation on the master branch; the
    automated daily compilation test is used to monitor for and catch this
    kind of issue. In general, it will verify the latest branch with 3
    compilers (ICC/GCC/Clang) and 4 build options on each mainstream OSV
    (see the build-matrix sketch after this item). Currently, Intel's daily
    build test is performed on almost 14 OSVs with different Linux kernels.
    It covers all the mainstream operating systems, including Ubuntu,
    Red Hat, Fedora, SUSE, CentOS, Wind River, FreeBSD and MontaVista Linux.
    
    Test execution time: 
      30 minutes per platform; build testing on different OSVs can be
      performed in parallel.
    Priority:  3
    
    Hardware Requirements:
      For x86 this will require one 2U server. Because the tests will be run
      at the same time every day, they could be scheduled to run on a shared
      server (approximate current test duration is ~2 hours on 2.6GHz CPU with
      64GB RAM). Combinations of operating systems, kernels and compiler
      versions will be configured as separate VMs on the server.
    
    Hardware requirements for other CPU architectures need to be determined by
    the relevant vendors.
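
    To make the daily job's scale concrete, here is a sketch of the build
    matrix it implies. The compiler list comes from the description above;
    the build-option and OSV names, and the run_build() dispatch, are
    hypothetical placeholders.

      from concurrent.futures import ThreadPoolExecutor

      COMPILERS = ["gcc", "clang", "icc"]
      BUILD_OPTIONS = ["shared", "static", "debug", "combined"]  # illustrative
      OSVS = ["ubuntu-16.04", "fedora-24", "rhel-7.2", "freebsd-10"]  # subset

      def run_build(osv, compiler, option):
          """Placeholder: a real runner would dispatch this build to the
          VM hosting the given OSV."""
          print("building on %s with %s (%s)" % (osv, compiler, option))

      def daily_matrix():
          # OSVs build in parallel, as the text suggests; the 12
          # compiler/option combinations run serially inside each VM.
          with ThreadPoolExecutor(max_workers=len(OSVS)) as pool:
              for osv in OSVS:
                  pool.submit(lambda o=osv: [run_build(o, c, b)
                                             for c in COMPILERS
                                             for b in BUILD_OPTIONS])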
 
  Regression Unit Test:
    Scope: Unit tests will be run once per day on the master branch and on the
    latest stable/LTS release branches with one mainstream operating system.
    
    Test execution time:
      2 hours to complete all automated unit tests.
    Priority: 4

    Hardware Requirements: For x86 this will require one 2U server.
      Because the tests will be run at the same time every day, they could be
      scheduled to run on a shared server (approximate current test duration
      is ~1 hour on 2.6GHz CPU with 64GB RAM).  

  Regression Function Test:
    Since functional testing depends on NIC features, it's difficult to
    define standard test plans and cases for different platforms. This test
    can be executed in the distributed labs. After platform owners complete
    testing, they can provide their reports and test plans to the maintainer
    for review.


* Re: [dpdk-moving] proposal for DPDK CI improvement
  2016-11-05  4:47 [dpdk-moving] proposal for DPDK CI improvement Liu, Yong
@ 2016-11-05 19:15 ` Thomas Monjalon
  2016-11-07  5:15   ` Liu, Yong
  2016-11-07  7:55   ` Xu, Qian Q
  0 siblings, 2 replies; 12+ messages in thread
From: Thomas Monjalon @ 2016-11-05 19:15 UTC (permalink / raw)
  To: moving, Liu, Yong

2016-11-05 04:47, Liu, Yong:
> Currently, DPDK CI is done in a distributed way: several companies run their
> own CI tests internally. Some of them (Intel and IBM) provide their test
> reports to the mailing list, but others keep their test results for internal
> use only.

I'm confident we'll have more contributors to the distributed CI when it
is well advertised (see below).

> There are two possible approaches that we can consider for improving DPDK CI:
> 
> 1. Create a centralized DPDK CI lab and build up the required infrastructure.
> 2. Continue with a distributed model but improve reporting and visibility.

I think these two approaches are good:
1. The centralized open lab can help as a reference
2. The distributed CI instances will bring more diversity

> We think the main advantages of a centralized approach are:
> Transparency: Everybody can see and access the test infrastructure: exactly
>               how the servers are configured, what tests have been run, and
>               their results. The community can review and agree collectively
>               when new tests are required.
> Flexibility:  Testing can be performed on demand. Instead of submitting a
>               patch, having it tested by a distributed CI infrastructure,
>               and then getting the test results, a developer can access the
>               CI infrastructure and trigger the tests manually before
>               submitting the patch, thus speeding up development and
>               shortening the test cycle.

It is possible to offer such flexibility in private CI labs by offering
an email address where patches can be sent to be tested.
However, there can be an issue of hardware bandwidth/availability to solve.
This is the same issue for open/centralized labs as for private labs.
A test lab accepting any private request can be abused. That's why I think
requiring patches to be sent publicly to the mailing list is a good policy.

> Independence: Instead of each vendor providing their own performance results,
>               having these generated in a centralized lab run by an
>               independent body will increase the confidence that DPDK users
>               have in the test results.
> 
> There is an example of how this was done for another project, FD.io CSIT
> (https://wiki.fd.io/view/CSIT).
> 
> Their wiki page shows how the servers are configured and how test cases are
> run on them. The test reports for all releases can be found on that page,
> and you can browse the detailed test report for each release by following
> the link. Their Jenkins link shows the trend of the project's status.

I do not see an explanation of how to use the CSIT lab on demand (what you
described as "Flexibility"). How does it work?

> The main disadvantage of a centralized approach is that relocating equipment
> from separate vendor labs will require a project budget. The available
> budget will determine which infrastructure can be deployed in the public
> test lab.
> 
> In the distributed model, we essentially continue as we are at present.
> Vendors can independently choose the CI tests that they run, and the reports
> that they choose to make public. We can add enhancements to Patchwork to
> display test results and make tracking easier, and can also look at other
> ways to make test reports more visible.

Yes, I'm working on it.
One year ago, we discussed a CI integration in patchwork:
	https://lists.ozlabs.org/pipermail/patchwork/2015-July/001363.html
It is now implemented in patchwork and available on dpdk.org:
	http://dpdk.org/ml/archives/dev/2016-September/046282.html
A first basic test (checkpatch) is integrated. See this example:
	http://dpdk.org/patch/16953
The detailed report in the test-report mailing list archives is referenced
with a hyperlink (in the "Description" column).
The next step (work in progress) is to publish some scripts in a new git
repository, dpdk-ci, to help integrate more test labs into patchwork.
Basically, it only requires receiving and sending some emails from the lab.
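
To make that concrete, here is a minimal Python sketch of the lab side:
run a test, then mail a result that a patchwork hook can pick up. The body
fields and subject convention below are illustrative assumptions, not the
documented dpdk-ci format.

    import smtplib
    from email.mime.text import MIMEText

    def send_test_report(patchwork_id, label, status, details_url):
        """Mail one per-patch test result to the report list."""
        body = "\n".join([
            "Test-Label: %s" % label,     # e.g. "Intel-compilation" (assumed)
            "Test-Status: %s" % status,   # SUCCESS / WARNING / FAILURE
            "Patch: http://dpdk.org/patch/%d" % patchwork_id,
            "Details: %s" % details_url,
        ])
        msg = MIMEText(body)
        msg["Subject"] = "|%s| pw%d" % (status, patchwork_id)  # assumed
        msg["From"] = "lab-robot@example.org"
        msg["To"] = "test-report@dpdk.org"
        with smtplib.SMTP("localhost") as s:
            s.send_message(msg)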

Note that any open or centralized lab can also be integrated in patchwork.

Note also that this patchwork integration covers only the tests run when a
patch is submitted. The tests run regularly on a git tree won't appear
in this interface.

> The main advantage of a distributed approach is:
> There's no requirement for a project budget.

The other major advantage is better test coverage.
Many companies need to run some internal DPDK tests for their own needs.
If they use them to provide some public reports, they can avoid
regressions in their specific use cases or hardware.
That's why the distributed CI approach is a win-win.

> The disadvantages of a distributed approach are:
> We lose the benefits of transparency, independence and the ability to run
> tests on demand that are described under the centralized approach above. CI
> testing and the publication of the results remain under the control of
> vendors (or others who choose to run CI tests).
> 
> Based on the above, we would like to propose a centralized CI lab.
> Details of the required budget and funding for this will obviously need to
> be determined, but for now our proposal will focus purely on the technical
> scope and benefits.

Thanks for the detailed description of the tests that you expect.
I think the CI discussion must be framed around two major questions:
	When? and Where?
When:
	- regularly on a git tree
	- after each patch submission -> report available via patchwork
Where:
	- in a private lab
	- in a foundation lab
Both private and foundation labs can be more or less open.

My conclusion: every kind of test has some benefits, and all are welcome!


* Re: [dpdk-moving] proposal for DPDK CI improvement
  2016-11-05 19:15 ` Thomas Monjalon
@ 2016-11-07  5:15   ` Liu, Yong
  2016-11-07  9:59     ` Thomas Monjalon
  2016-11-07  7:55   ` Xu, Qian Q
  1 sibling, 1 reply; 12+ messages in thread
From: Liu, Yong @ 2016-11-07  5:15 UTC (permalink / raw)
  To: Thomas Monjalon, moving

Thanks, Thomas.

On 11/06/2016 03:15 AM, Thomas Monjalon wrote:
> 2016-11-05 04:47, Liu, Yong:
>> Currently, DPDK CI is done in a distributed way: several companies run their
>> own CI tests internally. Some of them (Intel and IBM) provide their test
>> reports to the mailing list, but others keep their test results for internal
>> use only.
> I'm confident we'll have more contributors to the distributed CI when it
> is well advertised (see below).
Agree.
>
>> There are two possible approaches that we can consider for improving DPDK CI:
>>
>> 1. Create a centralized DPDK CI lab and build up the required infrastructure.
>> 2. Continue with a distributed model but improve reporting and visibility.
> I think these two approaches are good:
> 1. The centralized open lab can help as a reference
> 2. The distributed CI instances will bring more diversity
>
>> We think the main advantages of a centralized approach are:
>> Transparency: Everybody can see and access the test infrastructure: exactly
>>                how the servers are configured, what tests have been run,
>>                and their results. The community can review and agree
>>                collectively when new tests are required.
>> Flexibility:  Testing can be performed on demand. Instead of submitting a
>>                patch, having it tested by a distributed CI infrastructure,
>>                and then getting the test results, a developer can access
>>                the CI infrastructure and trigger the tests manually before
>>                submitting the patch, thus speeding up development and
>>                shortening the test cycle.
> It is possible to offer such flexibility in private CI labs by offering
> an email address where patches can be sent to be tested.
> However, there can be an issue of hardware bandwidth/availability to solve.
> This is the same issue for open/centralized labs as for private labs.
> A test lab accepting any private request can be abused. That's why I think
> requiring patches to be sent publicly to the mailing list is a good policy.
>
>> Independence: Instead of each vendor providing their own performance results,
>>                having these generated in a centralized lab run by an
>>                independent body will increase the confidence that DPDK users
>>                have in the test results.
>>
>> There is an example of how this was done for another project, FD.io CSIT
>> (https://wiki.fd.io/view/CSIT).
>>
>> Their wiki page shows how the servers are configured and how test cases are
>> run on them. The test reports for all releases can be found on that page,
>> and you can browse the detailed test report for each release by following
>> the link. Their Jenkins link shows the trend of the project's status.
> I do not see an explanation of how to use the CSIT lab on demand (what you
> described as "Flexibility"). How does it work?
As far as I know, the CSIT lab has a weekly meeting that everyone can join.
They discuss the requirements in the meeting and decide on
what to do in the next step.
We are supporting the CSIT team in creating a DPDK performance regression
on the x86 platform.
Maybe they can share their reports with us.
>> The main disadvantage of a centralized approach is that relocating equipment
>> from separate vendor labs will require a project budget. The available
>> budget will determine which infrastructure can be deployed in the public
>> test lab.
>>
>> In the distributed model, we essentially continue as we are at present.
>> Vendors can independently choose the CI tests that they run, and the reports
>> that they choose to make public. We can add enhancements to Patchwork to
>> display test results and make tracking easier, and can also look at other
>> ways to make test reports more visible.
> Yes, I'm working on it.
> One year ago, we discussed a CI integration in patchwork:
> 	https://lists.ozlabs.org/pipermail/patchwork/2015-July/001363.html
> It is now implemented in patchwork and available on dpdk.org:
> 	http://dpdk.org/ml/archives/dev/2016-September/046282.html
> A first basic test (checkpatch) is integrated. See this example:
> 	http://dpdk.org/patch/16953
> The detailed report in the test-report mailing list archives is referenced
> with a hyperlink (in the "Description" column).
> The next step (work in progress) is to publish some scripts in a new git
> repository, dpdk-ci, to help integrate more test labs into patchwork.
> Basically, it only requires receiving and sending some emails from the lab.
>
> Note that any open or centralized lab can also be integrated in patchwork.
>
> Note also that this patchwork integration covers only the tests run when a
> patch is submitted. The tests run regularly on a git tree won't appear
> in this interface.
My main concern about patchwork is that its page can't show
enough information.
Developers may need a detailed build log or functional test log to narrow
down an issue.

As you mentioned, the daily regression status for master/LTS/other
development branches also needs to be shown to the community.


>> The main advantage of a distributed approach is:
>> There's no requirement for a project budget.
> The other major advantage is better test coverage.
> Many companies need to run some internal DPDK tests for their own needs.
> If they use them to provide some public reports, they can avoid
> regressions in their specific use cases or hardware.
> That's why the distributed CI approach is a win-win.
Agreed, we need to align on the format of the public reports.
>
>> The disadvantages of a distributed approach are:
>> We lose the benefits of transparency, independence and the ability to run
>> tests on demand that are described under the centralized approach above. CI
>> testing and the publication of the results remain under the control of
>> vendors (or others who choose to run CI tests).
>>
>> Based on the above, we would like to propose a centralized CI lab.
>> Details of the required budget and funding for this will obviously need to
>> be determined, but for now our proposal will focus purely on the technical
>> scope and benefits.
> Thanks for the detailed description of the tests that you expect.
> I think the CI discussion must be framed around two major questions:
> 	When? and Where?
> When:
> 	- regularly on a git tree
> 	- after each patch submission -> report available via patchwork
> Where:
> 	- in a private lab
> 	- in a foundation lab
> Both private and foundation labs can be more or less open.
>
> My conclusion: every kind of test has some benefits, and all are welcome!
>
Daily Performance Test:
   When:
        - regularly on master
   Where:
        - proposed for the foundation lab, with guest access enabled

Automated Patch Compilation Test (Pre check-in Patch Testing):
   When:
        - after each patch-set submission
   Where:
        - either in the foundation lab or a private lab

Automated Daily Compilation Test:
   When:
        - regularly on master/LTS/development branches
   Where:
        - either in the foundation lab or a private lab

Regression Unit Test:
   When:
        - regularly on master/LTS/development branches
   Where:
        - either in the foundation lab or a private lab

Regression Function Test:
   When:
        - regularly on master/LTS/development branches
   Where:
        - either in the foundation lab or a private lab

Again, deployment of the foundation lab depends on the budget.


* Re: [dpdk-moving] proposal for DPDK CI improvement
  2016-11-05 19:15 ` Thomas Monjalon
  2016-11-07  5:15   ` Liu, Yong
@ 2016-11-07  7:55   ` Xu, Qian Q
  2016-11-07 10:17     ` Thomas Monjalon
  1 sibling, 1 reply; 12+ messages in thread
From: Xu, Qian Q @ 2016-11-07  7:55 UTC (permalink / raw)
  To: Thomas Monjalon, moving, Liu, Yong

I think the discussion about CI is a good start. I agree with the general ideas:
1. It's good to have more contributors for CI; it's a community effort.
2. Building a distributed CI system is good and necessary.
3. "When and where" are the very basic and important questions.

Adding my 2 cents here.
1. Distributed test vs centralized lab
We can put the build and functional tests in the distributed labs. As for performance: as we all know, performance is key to DPDK.
So I suggest we have a centralized lab for the performance testing, with some comments below:
a) Do we want to publish performance reports for different platforms with different HW/NICs? Is anyone against publishing performance numbers?
b) If the answer to the first question is "yes", how do we ensure others trust the performance numbers, and how can they reproduce them if they don't have the platforms/HW?
As Marvin said, transparency and independence are the advantages of an open centralized lab. Besides, we can demonstrate DPDK performance to the whole audience with the
lab. Of course, we need control of the system and shouldn't allow others to access it at random; access control is another topic. The lab could even be used as
a training or demo lab when we have community training or performance demo days (I just made up those event names).

2. Besides "when and where", the next questions are "what" and "how"
When:
	- regularly on a git tree -- what tests need to be done here? I propose the daily build, daily functional regression and daily performance regression.
	- after each patch submission -> report available via patchwork -- what tests need to be done? The build test first; maybe we can add functional or performance tests in the future.

How do we collect and display the results?
Thanks Thomas for the hard work on the patchwork upgrade. It's good to see the checkpatch display there.
IMHO, building the complete distributed system needs a very big effort. Thomas, any effort estimate and schedule for it?
a) Currently there are only "S/W/F" (Success/Warning/Fail) counters in the tests; do they refer to the build test, the functional test or the performance test?
If they only refer to the build test, then you may need to change the title to "Build S/W/F". And how many architectures or platforms are there for the builds? For example, we support the Intel IA build,
the ARM build and the IBM POWER build. Then we may need to collect build results from Intel/IBM/ARM etc. to show the total S/W/F: if the build passes on IA but fails on IBM, we
need to record it as 1S/0W/1F. I don't know if we need to collect the warning information here.
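
As a tiny Python sketch of that counting scheme (the platform names are
illustrative):

    from collections import Counter

    def swf(results):
        """results: mapping of platform -> 'S' | 'W' | 'F'."""
        c = Counter(results.values())
        return "%dS/%dW/%dF" % (c["S"], c["W"], c["F"])

    # The example above: build passed on IA but failed on IBM POWER.
    print(swf({"intel-ia": "S", "ibm-power": "F"}))   # -> 1S/0W/1F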

b) How about displaying performance results on the website? Whether the lab is distributed or centralized, we need a place to show the performance numbers and trends to
ensure there is no performance regression. Do you have any plan to implement it?

3. A proposal: should we have a CI mailing list, and regular meetings for the people working on CI that discuss only CI? Maybe we can have more frequent meetings at first to get aligned, then
we can reduce the frequency once the solution settles down. The current call covers many other topics. What do you think?

Thanks. Any comments are welcome.

-----Original Message-----
From: moving [mailto:moving-bounces@dpdk.org] On Behalf Of Thomas Monjalon
Sent: Sunday, November 6, 2016 3:15 AM
To: moving@dpdk.org; Liu, Yong <yong.liu@intel.com>
Subject: Re: [dpdk-moving] proposal for DPDK CI improvement

[Thomas's full reply, quoted in its entirety above, snipped]


* Re: [dpdk-moving] proposal for DPDK CI improvement
  2016-11-07  5:15   ` Liu, Yong
@ 2016-11-07  9:59     ` Thomas Monjalon
  2016-11-07 14:59       ` Liu, Yong
  0 siblings, 1 reply; 12+ messages in thread
From: Thomas Monjalon @ 2016-11-07  9:59 UTC (permalink / raw)
  To: Liu, Yong; +Cc: moving, ci

For discussing the details of the CI, I suggest we move to the new
mailing list ci@dpdk.org (CC) and stop bothering recipients of
moving@dpdk.org.

2016-11-07 13:15, Liu, Yong:
> My main concern about patchwork is that its page can't show
> enough information.
> Developers may need a detailed build log or functional test log to narrow
> down an issue.

Your concern should be solved:
The detailed reports are available in http://dpdk.org/ml/archives/test-report/.
When viewing a patch in patchwork, the tests are listed with a brief result.
The detailed report of each test is linked here.
Example:
	dpdk.org/patch/16960


* Re: [dpdk-moving] proposal for DPDK CI improvement
  2016-11-07  7:55   ` Xu, Qian Q
@ 2016-11-07 10:17     ` Thomas Monjalon
  2016-11-07 10:26       ` Jerome Tollet (jtollet)
  2016-11-07 12:20       ` Xu, Qian Q
  0 siblings, 2 replies; 12+ messages in thread
From: Thomas Monjalon @ 2016-11-07 10:17 UTC (permalink / raw)
  To: Xu, Qian Q; +Cc: moving, Liu, Yong, ci

Hi Qian,

2016-11-07 07:55, Xu, Qian Q:
> I think the discussion about CI is a good start. I agree with the general ideas:
> 1. It's good to have more contributors for CI; it's a community effort.
> 2. Building a distributed CI system is good and necessary.
> 3. "When and where" are the very basic and important questions.
> 
> Adding my 2 cents here.
> 1. Distributed test vs centralized lab
> We can put the build and functional tests in the distributed labs. As for performance: as we all know, performance is key to DPDK.
> So I suggest we have a centralized lab for the performance testing, with some comments below:
> a) Do we want to publish performance reports for different platforms with different HW/NICs? Is anyone against publishing performance numbers?
> b) If the answer to the first question is "yes", how do we ensure others trust the performance numbers, and how can they reproduce them if they don't have the platforms/HW?
> As Marvin said, transparency and independence are the advantages of an open centralized lab. Besides, we can demonstrate DPDK performance to the whole audience with the
> lab. Of course, we need control of the system and shouldn't allow others to access it at random; access control is another topic. The lab could even be used as
> a training or demo lab when we have community training or performance demo days (I just made up those event names).
> 
> 2. Besides "when and where", the next questions are "what" and "how"
> When:
> 	- regularly on a git tree -- what tests need to be done here? I propose the daily build, daily functional regression and daily performance regression.
> 	- after each patch submission -> report available via patchwork -- what tests need to be done? The build test first; maybe we can add functional or performance tests in the future.
> 
> How do we collect and display the results?
> Thanks Thomas for the hard work on the patchwork upgrade. It's good to see the checkpatch display there.
> IMHO, building the complete distributed system needs a very big effort. Thomas, any effort estimate and schedule for it?

It must be a collective effort.
I plan to publish a new git repository really soon to help build a test lab.
The first version will allow sending correctly formatted test reports.
The next step will be to help apply patches (on the right branch, with series support).

> a) Currently there are only "S/W/F" (Success/Warning/Fail) counters in the tests; do they refer to the build test, the functional test or the performance test?

It can be any test, including performance ones. A major performance regression
must be seen as a failed test.

> If they only refer to the build test, then you may need to change the title to "Build S/W/F". And how many architectures or platforms are there for the builds? For example, we support the Intel IA build,
> the ARM build and the IBM POWER build. Then we may need to collect build results from Intel/IBM/ARM etc. to show the total S/W/F: if the build passes on IA but fails on IBM, we
> need to record it as 1S/0W/1F. I don't know if we need to collect the warning information here.

The difference between warnings and failures is a matter of severity.
The checkpatch errors are reported as warnings.

> b) How about displaying performance results on the website? Whether the lab is distributed or centralized, we need a place to show the performance numbers and trends to
> ensure there is no performance regression. Do you have any plan to implement it?

No, I have no plan, but I expect it to be solved by those working on
performance tests, maybe you? :)
If a private lab can publish some web graphs of performance evolution, that is great.
If we can do it in a centralized lab, it is also great.
If we can have a web interface gathering all the performance numbers and graphs,
it is really, really great!

> 3. A proposal: should we have a CI mailing list, and regular meetings for the people working on CI that discuss only CI? Maybe we can have more frequent meetings at first to get aligned, then
> we can reduce the frequency once the solution settles down. The current call covers many other topics. What do you think?

The mailing list is now created: ci@dpdk.org.
About meetings, I feel we can start working through ci@dpdk.org and see
how efficient it is. Though if you need a meeting, feel free to propose.


* Re: [dpdk-moving] proposal for DPDK CI improvement
  2016-11-07 10:17     ` Thomas Monjalon
@ 2016-11-07 10:26       ` Jerome Tollet (jtollet)
  2016-11-07 10:34         ` O'Driscoll, Tim
  2016-11-07 12:20       ` Xu, Qian Q
  1 sibling, 1 reply; 12+ messages in thread
From: Jerome Tollet (jtollet) @ 2016-11-07 10:26 UTC (permalink / raw)
  To: Thomas Monjalon, Xu, Qian Q; +Cc: moving, Liu, Yong, ci

Hi Thomas & Qian,
IMHO, performance results should be centralized and executed in a trusted & controlled environment.
If official DPDK numbers come from vendors' private labs, the perception might be that they are not 100% neutral. That would probably not help the DPDK community to be seen as open & transparent.

Jerome

On 07/11/2016 11:17, "moving on behalf of Thomas Monjalon" <moving-bounces@dpdk.org on behalf of thomas.monjalon@6wind.com> wrote:

    [Thomas's full reply, quoted in its entirety above, snipped]



* Re: [dpdk-moving] proposal for DPDK CI improvement
  2016-11-07 10:26       ` Jerome Tollet (jtollet)
@ 2016-11-07 10:34         ` O'Driscoll, Tim
  2016-11-07 10:47           ` Arnon Warshavsky
  2016-11-07 10:56           ` Thomas Monjalon
  0 siblings, 2 replies; 12+ messages in thread
From: O'Driscoll, Tim @ 2016-11-07 10:34 UTC (permalink / raw)
  To: Jerome Tollet (jtollet), Thomas Monjalon, Xu, Qian Q
  Cc: moving, Liu, Yong, ci


> -----Original Message-----
> From: moving [mailto:moving-bounces@dpdk.org] On Behalf Of Jerome Tollet
> (jtollet)
> Sent: Monday, November 7, 2016 10:27 AM
> To: Thomas Monjalon <thomas.monjalon@6wind.com>; Xu, Qian Q
> <qian.q.xu@intel.com>
> Cc: moving@dpdk.org; Liu, Yong <yong.liu@intel.com>; ci@dpdk.org
> Subject: Re: [dpdk-moving] proposal for DPDK CI improvement
> 
> Hi Thomas & Qian,
> IMHO, performance results should be centralized and executed in a
> trusted & controlled environment.
> If official DPDK numbers come from vendors' private labs, the
> perception might be that they are not 100% neutral. That would probably
> not help the DPDK community to be seen as open & transparent.

+1

Somebody (Jan Blunck, I think) also said on last week's call that performance testing was a higher priority than CI for a centralized lab. A model with centralized performance testing and distributed CI might work well.

> 
> Jerome
> 
> [remainder of the quoted thread snipped]



* Re: [dpdk-moving] proposal for DPDK CI improvement
  2016-11-07 10:34         ` O'Driscoll, Tim
@ 2016-11-07 10:47           ` Arnon Warshavsky
  2016-11-07 10:56           ` Thomas Monjalon
  1 sibling, 0 replies; 12+ messages in thread
From: Arnon Warshavsky @ 2016-11-07 10:47 UTC (permalink / raw)
  To: O'Driscoll, Tim
  Cc: Jerome Tollet (jtollet),
	Thomas Monjalon, Xu, Qian Q, moving, Liu, Yong, ci


On Mon, Nov 7, 2016 at 12:34 PM, O'Driscoll, Tim <tim.odriscoll@intel.com>
wrote:

>
> > -----Original Message-----
> > From: moving [mailto:moving-bounces@dpdk.org] On Behalf Of Jerome Tollet
> > (jtollet)
> > Sent: Monday, November 7, 2016 10:27 AM
> > To: Thomas Monjalon <thomas.monjalon@6wind.com>; Xu, Qian Q
> > <qian.q.xu@intel.com>
> > Cc: moving@dpdk.org; Liu, Yong <yong.liu@intel.com>; ci@dpdk.org
> > Subject: Re: [dpdk-moving] proposal for DPDK CI improvement
> >
> > Hi Thomas & Qian,
> > IMHO, performance results should be centralized and executed in a
> > trusted & controlled environment.
> > If official DPDK numbers are coming from private lab’s vendors,
> > perception might be that they are not 100% neutral. That would probably
> > not help DPDK community to be seen open & transparent.
>
> +1
>
> Somebody (Jan Blunck I think) also said on last week's call that
> performance testing was a higher priority than CI for a centralized lab. A
> model where we have centralized performance test and distributed CI might
> work well.



+1 to the above approach, yet I still see value in publishing both types
of performance results as long as they are clearly separated.
This might need a way to retroactively mark some results as "proved
invalid", but on the other hand it would encourage a cycle of promoting
distributed tests that prove correct and unbiased into the central tests.

/Arnon



* Re: [dpdk-moving] proposal for DPDK CI improvement
  2016-11-07 10:34         ` O'Driscoll, Tim
  2016-11-07 10:47           ` Arnon Warshavsky
@ 2016-11-07 10:56           ` Thomas Monjalon
  1 sibling, 0 replies; 12+ messages in thread
From: Thomas Monjalon @ 2016-11-07 10:56 UTC (permalink / raw)
  To: O'Driscoll, Tim, Jerome Tollet (jtollet)
  Cc: Xu, Qian Q, moving, Liu, Yong, ci

2016-11-07 10:34, O'Driscoll, Tim:
> From: Jerome Tollet
> > 
> > Hi Thomas & Qian,
> > IMHO, performance results should be centralized and executed in a
> > trusted & controlled environment.
> > If official DPDK numbers come from vendors' private labs, the
> > perception might be that they are not 100% neutral. That would probably
> > not help the DPDK community to be seen as open & transparent.
> 
> +1
> 
> Somebody (Jan Blunck, I think) also said on last week's call that
> performance testing was a higher priority than CI for a centralized lab.
> A model with centralized performance testing and distributed CI
> might work well.

+1

Having some trusted performance numbers is a top priority.
I hope a budget in the foundation can solve it.

I was just trying to say that numbers from private labs can bring some
diversity and may also be valuable.


* Re: [dpdk-moving] proposal for DPDK CI improvement
  2016-11-07 10:17     ` Thomas Monjalon
  2016-11-07 10:26       ` Jerome Tollet (jtollet)
@ 2016-11-07 12:20       ` Xu, Qian Q
  1 sibling, 0 replies; 12+ messages in thread
From: Xu, Qian Q @ 2016-11-07 12:20 UTC (permalink / raw)
  To: Thomas Monjalon; +Cc: moving, Liu, Yong, ci

Thomas, 
Thx for your quick response. See my reply inline below. 

-----Original Message-----
From: Thomas Monjalon [mailto:thomas.monjalon@6wind.com] 
Sent: Monday, November 7, 2016 6:18 PM
To: Xu, Qian Q <qian.q.xu@intel.com>
Cc: moving@dpdk.org; Liu, Yong <yong.liu@intel.com>; ci@dpdk.org
Subject: Re: [dpdk-moving] proposal for DPDK CI improvement


> a) Currently there are only "S/W/F" (Success/Warning/Fail) counters in the tests; do they refer to the build test, the functional test or the performance test?

It can be any test, including performance ones. A major performance regression must be seen as a failed test. 

[Qian] If it can refer to any test, how do we know which tests have been done? For example, some patches may only get a build test, some a performance test and some a functional test.
How do we differentiate these test executions?

> If they only refer to the build test, then you may need to change the title
> to "Build S/W/F". And how many architectures or platforms are there for the
> builds? For example, we support the Intel IA build, the ARM build and the IBM POWER build. Then we may need to collect build results from Intel/IBM/ARM etc. to show the total S/W/F: if the build passes on IA but fails on IBM, we need to record it as 1S/0W/1F. I don't know if we need to collect the warning information here.

The difference between warnings and failures is a matter of severity.
The checkpatch errors are reported as warnings.

> b) How about displaying performance results on the website? Whether the lab is
> distributed or centralized, we need a place to show the performance numbers and trends to ensure there is no performance regression. Do you have any plan to implement it?

No, I have no plan, but I expect it to be solved by those working on performance tests, maybe you? :) If a private lab can publish some web graphs of performance evolution, that is great.
If we can do it in a centralized lab, it is also great.
If we can have a web interface gathering all the performance numbers and graphs, it is really, really great!

[Qian] Internally at Intel, we have put some effort into the design and implementation of a performance web interface, but it has only just started. If the community needs our effort on a performance
report center, we should discuss the ownership, resources, requirements and plan.

> 3. A proposal: should we have a CI mailing list, and regular meetings for the people working on CI
> that discuss only CI? Maybe we can have more frequent meetings at first to get aligned, then we can reduce the frequency once the solution settles down. The current call covers many other topics. What do you think?

The mailing list is now created: ci@dpdk.org.
About meetings, I feel we can start working through ci@dpdk.org and see how efficient it is. Though if you need a meeting, feel free to propose.
[Qian] I saw there is a community call tomorrow night at 11pm PRC time. I wonder whether all the CI people are in the EU; if there are no US people, we might prefer an earlier time such as 9-10pm PRC time.
We can join this week's community call and decide whether to have a separate meeting next week just for CI: more focused and more efficient.


* Re: [dpdk-moving] proposal for DPDK CI improvement
  2016-11-07  9:59     ` Thomas Monjalon
@ 2016-11-07 14:59       ` Liu, Yong
  0 siblings, 0 replies; 12+ messages in thread
From: Liu, Yong @ 2016-11-07 14:59 UTC (permalink / raw)
  To: Thomas Monjalon; +Cc: moving, ci

> -----Original Message-----
> From: Thomas Monjalon [mailto:thomas.monjalon@6wind.com]
> Sent: Monday, November 07, 2016 6:00 PM
> To: Liu, Yong
> Cc: moving@dpdk.org; ci@dpdk.org
> Subject: Re: [dpdk-moving] proposal for DPDK CI improvement
> 
> 
> Your concern should be solved:
> The detailed reports are available in http://dpdk.org/ml/archives/test-
> report/.
> When viewing a patch in patchwork, the tests are listed with a brief
> result.
> The detailed report of each test is linked here.
> Example:
> 	dpdk.org/patch/16960

Thanks, Thomas. I hadn't noticed that the checkpatch function had been integrated into patchwork.
Looks like we can easily enhance the per-patch-set status display by adding more columns like "Checks".

