DPDK CI discussions
* [dpdk-ci] Minutes of DPDK Lab Meeting, July 17th
@ 2018-07-19 14:54 O'Driscoll, Tim
  2018-08-16  9:33 ` Yigit, Ferruh
  0 siblings, 1 reply; 4+ messages in thread
From: O'Driscoll, Tim @ 2018-07-19 14:54 UTC (permalink / raw)
  To: 'ci@dpdk.org'
  Cc: 'Bob Noseworthy',
	Mcnamara, John, 'Shepard Siegel',
	'Thomas Monjalon', 'Erez Scop',
	'Shreyansh Jain',
	Xu, Qian Q, 'pmacarth@iol.unh.edu',
	'Matt Spencer', 'George Zhao',
	'Mishra, Shishir', 'Lixuming',
	Tkachuk, Georgii, 'Trishan de Lanerolle',
	'Sean Campbell', 'Ali Alnubani',
	'May Chen', 'Lodha, Nishant',
	Zhang, Chun, 'Malla, Malathi', 'khemendra kumar',
	'graeme.gregory@linaro.org',
	Yigit, Ferruh, Tu, Lijuan

[-- Attachment #1: Type: text/plain, Size: 1029 bytes --]

UNH Policies and Procedures Doc:
Document is available at: https://docs.google.com/document/d/1rtSwpNKltVNyDKNWgeTV5gaYeDoAPlK-sfm8XE7o_5s/edit?usp=sharing
I reviewed it and it looks good to me.
Please add any comments to the Google doc, or send them directly to Bob (ren@iol.unh.edu).

Dashboard:
Dashboard is in much better shape now and most tests are passing.
We do need to monitor the results and determine whether the tolerance is right. If it's too high, all tests will pass and we'll miss genuine issues; if it's too low, too many tests will fail when there isn't really a problem. It was agreed that tuning the tolerance is the responsibility of each vendor.
For Intel, we need to add the specific test config to the results page. We need to confirm the config details with Patrick, and he'll add them to the results pages. Other vendors may want to do something similar.
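As a rough illustration of the tolerance trade-off discussed above, a pass/fail check might look like the following sketch (the function name and the percent-based threshold are illustrative assumptions, not the lab's actual scripts):

```python
def within_tolerance(measured, baseline, tolerance_pct):
    """Return True if the measured throughput has not dropped more than
    tolerance_pct percent below the stored baseline.

    A large tolerance_pct lets everything pass (genuine regressions are
    missed); a small one flags normal run-to-run noise as failures.
    """
    if baseline <= 0:
        raise ValueError("baseline must be positive")
    drop_pct = (baseline - measured) / baseline * 100.0
    return drop_pct <= tolerance_pct
```

In this scheme, each vendor would tune `tolerance_pct` to sit just above the run-to-run noise of their own hardware.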

Hardware:
Shreyansh is almost ready to ship the NXP hardware. It should arrive at UNH in 2-3 weeks.

[-- Attachment #2: Type: text/html, Size: 5952 bytes --]


* Re: [dpdk-ci] Minutes of DPDK Lab Meeting, July 17th
  2018-07-19 14:54 [dpdk-ci] Minutes of DPDK Lab Meeting, July 17th O'Driscoll, Tim
@ 2018-08-16  9:33 ` Yigit, Ferruh
  2018-08-16 19:58   ` Jeremy Plsek
  0 siblings, 1 reply; 4+ messages in thread
From: Yigit, Ferruh @ 2018-08-16  9:33 UTC (permalink / raw)
  To: ren; +Cc: O'Driscoll, Tim, 'ci@dpdk.org'

[-- Attachment #1: Type: text/plain, Size: 4458 bytes --]

Hi Bob,

A few comments/questions on the "UNH Policies and Procedures Doc":

- Objective and Scope of Project
                - 2: "DPDK-enabled applications": what are these applications? Do they refer to test applications like testpmd/l2fwd, or to other products that use DPDK, such as OVS/VPP? If the latter, will they be run for all vendors, or only for the vendor that requested them?
                - 3: "Demonstrate any new feature performance of DPDK": is this done by updating the test scripts? If not, how will these new tests be run, and how will the results be shared?
                - What is the plan for using the lab for continuous integration, adding more sanity checks alongside the performance checks over time? As far as I know, this was one of the initial goals of the lab.

- DPDK Branch(s) to test
                - 5.1.1: "master": a little background. In DPDK development there are multiple sub-trees; certain patches target a specific tree, and the sub-trees are merged into the main tree before each release candidate, with the aim of doing regular integrations from them.
As a result of this process, a patch sent for the next-net sub-tree, for example, may not apply cleanly to the main repo, so it won't be tested. That patch will still be applied to the next-net tree, and a week later next-net will be merged into the main tree; if the patch affects performance, the regression won't be detected at that point. Later, when a patch arrives that does apply to the main tree and reveals the performance issue, the wrong patch will be suspected, and the problematic one will already be merged. We need a solution for this. There are 5 sub-trees merged into the main tree, and more than half of all patches reach the main repo through them.

- Private DPDK-Member-only Dashboard Specification
                - 5.6.1.2: "The delta-values of the script output, per test performed.": on the member-only dashboard, why not show the base value too? Since the baseline will be updated regularly via the "--update-expected" argument, it would be good to see both the current baseline and the diff.
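The suggested dashboard row could be sketched like this, showing the baseline alongside the delta rather than the delta alone (field names are illustrative assumptions, not the actual dashboard schema):

```python
def report_entry(test_name, measured, expected):
    """Build one dashboard row that shows the current baseline, the
    measured value, and the delta, instead of only the delta."""
    delta = measured - expected
    return {
        "test": test_name,
        "expected": expected,   # current baseline, updated via --update-expected
        "measured": measured,
        "delta": delta,
        "delta_pct": round(delta / expected * 100.0, 2),
    }
```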

- I am in favor of defining a change management system. With multiple vendors and multiple requests, it would be good to trace, discuss, and record the outcome for all of them systematically.

Thanks,
ferruh

[-- Attachment #2: Type: text/html, Size: 10096 bytes --]


* Re: [dpdk-ci] Minutes of DPDK Lab Meeting, July 17th
  2018-08-16  9:33 ` Yigit, Ferruh
@ 2018-08-16 19:58   ` Jeremy Plsek
  2018-08-17 16:30     ` Yigit, Ferruh
  0 siblings, 1 reply; 4+ messages in thread
From: Jeremy Plsek @ 2018-08-16 19:58 UTC (permalink / raw)
  To: Yigit, Ferruh; +Cc: Bob Noseworthy, O'Driscoll, Tim, ci

Hi Ferruh,

Regarding "DPDK Branch(s) to test", I agree with testing the
patch against the appropriate branch.

For some history, we originally tried to apply patches to the
appropriate branches (before Patchwork 2.0) based on heuristics from
the subject line, but we ended up with a lot of apply errors. After
a meeting, the group agreed to test against master for now.

But with Patchwork 2.0, is there a way to tell what branch a
patch/series is supposed to be applied against? If not, how can we
figure this out in a reliable programmatic way?
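For reference, the kind of subject-line heuristic described above might look like the sketch below. The prefix-to-tree mapping is purely illustrative (a real mapping would have to be agreed with the tree maintainers), and, as noted, this approach proved unreliable in practice:

```python
# Hypothetical mapping from a patch's path prefix to a likely DPDK
# sub-tree; the prefixes and tree names here are illustrative only.
SUBTREE_BY_PREFIX = {
    "net/": "dpdk-next-net",
    "crypto/": "dpdk-next-crypto",
    "eventdev": "dpdk-next-eventdev",
}

def guess_target_tree(subject):
    """Guess the target sub-tree from a patch subject such as
    '[PATCH v2] net/ixgbe: fix Rx'; fall back to the main repo."""
    # Strip leading bracketed tags like [dpdk-dev] or [PATCH v2 1/3].
    while subject.startswith("["):
        end = subject.find("]")
        if end < 0:
            break
        subject = subject[end + 1:].lstrip()
    for prefix, tree in SUBTREE_BY_PREFIX.items():
        if subject.startswith(prefix):
            return tree
    return "dpdk (main)"
```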

On Thu, Aug 16, 2018 at 5:33 AM Yigit, Ferruh <ferruh.yigit@intel.com> wrote:
> [snip]



--
Jeremy Plsek
UNH InterOperability Laboratory


* Re: [dpdk-ci] Minutes of DPDK Lab Meeting, July 17th
  2018-08-16 19:58   ` Jeremy Plsek
@ 2018-08-17 16:30     ` Yigit, Ferruh
  0 siblings, 0 replies; 4+ messages in thread
From: Yigit, Ferruh @ 2018-08-17 16:30 UTC (permalink / raw)
  To: Jeremy Plsek; +Cc: Bob Noseworthy, O'Driscoll, Tim, ci

Hi Jeremy,

Testing multiple sub-trees would make things more complex, which is why I was reluctant to suggest it; I think it is better to stick to the single main repo.
And there is no clear way of knowing which tree a given patch targets.

What about triggering the regression test from the main repo for a given range of commits?
This could be used when a sub-tree is merged into the main repo: the main repo can then be tested for the new commits.

There is still the downside that the patch is already merged by then, which I agree is bad.
But it will at least identify the correct patch that caused the issue, and it will catch it early, or at least earlier than the RC.
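A sketch of this commit-range scheme, assuming the CI records the last commit of the main repo it has tested (function and variable names are illustrative):

```python
def commits_to_test(rev_list, last_tested):
    """Given the main repo's history newest-first (as from `git rev-list`)
    and the last commit already tested, return the commits added since
    then, oldest first, so they can be tested one by one after a
    sub-tree merge."""
    new = []
    for sha in rev_list:
        if sha == last_tested:
            break
        new.append(sha)
    return list(reversed(new))
```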

Regards,
ferruh

> -----Original Message-----
> From: Jeremy Plsek [mailto:jplsek@iol.unh.edu]
> Sent: Thursday, August 16, 2018 8:58 PM
> Subject: Re: [dpdk-ci] Minutes of DPDK Lab Meeting, July 17th
>
> [snip]


end of thread, other threads:[~2018-08-17 16:30 UTC | newest]

Thread overview: 4+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-07-19 14:54 [dpdk-ci] Minutes of DPDK Lab Meeting, July 17th O'Driscoll, Tim
2018-08-16  9:33 ` Yigit, Ferruh
2018-08-16 19:58   ` Jeremy Plsek
2018-08-17 16:30     ` Yigit, Ferruh
