* [dpdk-ci] Minutes of DPDK Lab Meeting, February 26th @ 2019-02-28 14:49 O'Driscoll, Tim 2019-02-28 15:20 ` Thomas Monjalon 0 siblings, 1 reply; 11+ messages in thread From: O'Driscoll, Tim @ 2019-02-28 14:49 UTC (permalink / raw) To: ci OVS Tests: - Jeremy and Aaron are working on setup of the temporary hardware. - There are two options for hardware to run this on when the setup is complete: 1) use existing vendor hardware; 2) obtain standalone servers for OVS testing. The OVS team's preference is option 2. It's not realistic to expect a vendor to provide hardware to run a competitor's products so we'd need to find a different way to procure this. Aaron will check with Rashid to see if budget is available from Red Hat. I'll check with Trishan to see if the DPDK project budget could cover this. - The OVS focus is on functional tests, not performance tests. The DPDK lab is currently set up so that each vendor has complete control over performance tests & results on their hardware. If we use separate hardware for the OVS tests, we need to ensure that we restrict scope to functional tests so that it does not conflict with this principle in future. Expanding DPDK Test Coverage: - While the OVS testing is good and helps to increase coverage, it doesn't cover everything. We still need to focus on expanding the scope of DPDK performance tests in the lab. - Erez has verified with Trishan and budget is available for a contractor to help. We can discuss this further at the next meeting. CI Scripts: - Jeremy is working on getting the scripts ready to be made public. - Jeremy has tested Ali's script to identify the correct sub-tree for a patch set and it works as expected. He'll integrate this. Zhaoyan raised some concerns that the script doesn't work in all cases. Further discussion will happen in Bugzilla. - Ferruh will check with Thomas on the status of the Travis patches. ^ permalink raw reply [flat|nested] 11+ messages in thread
* Re: [dpdk-ci] Minutes of DPDK Lab Meeting, February 26th 2019-02-28 14:49 [dpdk-ci] Minutes of DPDK Lab Meeting, February 26th O'Driscoll, Tim @ 2019-02-28 15:20 ` Thomas Monjalon 2019-02-28 15:24 ` O'Driscoll, Tim 0 siblings, 1 reply; 11+ messages in thread From: Thomas Monjalon @ 2019-02-28 15:20 UTC (permalink / raw) To: ci; +Cc: O'Driscoll, Tim Hi, 28/02/2019 15:49, O'Driscoll, Tim: > OVS Tests: > - Jeremy and Aaron are working on setup of the temporary hardware. > - There are two options for hardware to run this on when the setup is complete: 1) use existing vendor hardware; 2) obtain standalone servers for OVS testing. The OVS team's preference is option 2. It's not realistic to expect a vendor to provide hardware to run a competitor's products so we'd need to find a different way to procure this. Aaron will check with Rashid to see if budget is available from Red Hat. I'll check with Trishan to see if the DPDK project budget could cover this. > - The OVS focus is on functional tests, not performance tests. The DPDK lab is currently set up so that each vendor has complete control over performance tests & results on their hardware. If we use separate hardware for the OVS tests, we need to ensure that we restrict scope to functional tests so that it does not conflict with this principle in future. I am not sure to understand. In my opinion, the purpose of this lab is to have properly tuned hardware for running a large set of tests. We should be able to run various tests on the same machine. So the OVS tests, like any new test scenario, should be run on the same machine as the performance tests. I think we just need to have a job queue to run tests one by one, avoiding a test to disturb results of another one. Why are we looking for additional machines? ^ permalink raw reply [flat|nested] 11+ messages in thread
* Re: [dpdk-ci] Minutes of DPDK Lab Meeting, February 26th 2019-02-28 15:20 ` Thomas Monjalon @ 2019-02-28 15:24 ` O'Driscoll, Tim 2019-03-04 8:06 ` Stokes, Ian 2019-03-04 13:06 ` Aaron Conole 0 siblings, 2 replies; 11+ messages in thread From: O'Driscoll, Tim @ 2019-02-28 15:24 UTC (permalink / raw) To: Thomas Monjalon, ci; +Cc: Aaron Conole, Stokes, Ian > -----Original Message----- > From: Thomas Monjalon [mailto:thomas@monjalon.net] > Sent: Thursday, February 28, 2019 3:20 PM > To: ci@dpdk.org > Cc: O'Driscoll, Tim <tim.odriscoll@intel.com> > Subject: Re: [dpdk-ci] Minutes of DPDK Lab Meeting, February 26th > > Hi, > > 28/02/2019 15:49, O'Driscoll, Tim: > > OVS Tests: > > - Jeremy and Aaron are working on setup of the temporary hardware. > > - There are two options for hardware to run this on when the setup is > complete: 1) use existing vendor hardware; 2) obtain standalone servers > for OVS testing. The OVS team's preference is option 2. It's not > realistic to expect a vendor to provide hardware to run a competitor's > products so we'd need to find a different way to procure this. Aaron > will check with Rashid to see if budget is available from Red Hat. I'll > check with Trishan to see if the DPDK project budget could cover this. > > - The OVS focus is on functional tests, not performance tests. The > DPDK lab is currently set up so that each vendor has complete control > over performance tests & results on their hardware. If we use separate > hardware for the OVS tests, we need to ensure that we restrict scope to > functional tests so that it does not conflict with this principle in > future. > > I am not sure to understand. > In my opinion, the purpose of this lab is to have properly tuned > hardware > for running a large set of tests. We should be able to run various > tests > on the same machine. So the OVS tests, like any new test scenario, > should be run on the same machine as the performance tests. 
> I think we just need to have a job queue to run tests one by one, > avoiding a test to disturb results of another one. > > Why are we looking for additional machines? That was my assumption too. I believe the reason is that the OVS team want to validate with multiple vendor NICs to be sure that nothing is broken. We only have Intel and Mellanox hardware in our lab at present, so we don't cover all vendors. Aaron and Ian can provide more details. ^ permalink raw reply [flat|nested] 11+ messages in thread
* Re: [dpdk-ci] Minutes of DPDK Lab Meeting, February 26th 2019-02-28 15:24 ` O'Driscoll, Tim @ 2019-03-04 8:06 ` Stokes, Ian 2019-03-04 8:40 ` Thomas Monjalon 2019-03-04 13:06 ` Aaron Conole 1 sibling, 1 reply; 11+ messages in thread From: Stokes, Ian @ 2019-03-04 8:06 UTC (permalink / raw) To: O'Driscoll, Tim, Thomas Monjalon, ci; +Cc: Aaron Conole > -----Original Message----- > From: O'Driscoll, Tim > Sent: Thursday, February 28, 2019 3:25 PM > To: Thomas Monjalon <thomas@monjalon.net>; ci@dpdk.org > Cc: Aaron Conole <aconole@redhat.com>; Stokes, Ian <ian.stokes@intel.com> > Subject: RE: [dpdk-ci] Minutes of DPDK Lab Meeting, February 26th > > > -----Original Message----- > > From: Thomas Monjalon [mailto:thomas@monjalon.net] > > Sent: Thursday, February 28, 2019 3:20 PM > > To: ci@dpdk.org > > Cc: O'Driscoll, Tim <tim.odriscoll@intel.com> > > Subject: Re: [dpdk-ci] Minutes of DPDK Lab Meeting, February 26th > > > > Hi, > > > > 28/02/2019 15:49, O'Driscoll, Tim: > > > OVS Tests: > > > - Jeremy and Aaron are working on setup of the temporary hardware. > > > - There are two options for hardware to run this on when the setup > > > is > > complete: 1) use existing vendor hardware; 2) obtain standalone > > servers for OVS testing. The OVS team's preference is option 2. It's > > not realistic to expect a vendor to provide hardware to run a > > competitor's products so we'd need to find a different way to procure > > this. Aaron will check with Rashid to see if budget is available from > > Red Hat. I'll check with Trishan to see if the DPDK project budget could > cover this. > > > - The OVS focus is on functional tests, not performance tests. The > > DPDK lab is currently set up so that each vendor has complete control > > over performance tests & results on their hardware. If we use separate > > hardware for the OVS tests, we need to ensure that we restrict scope > > to functional tests so that it does not conflict with this principle > > in future. 
> > > > I am not sure to understand. > > In my opinion, the purpose of this lab is to have properly tuned > > hardware for running a large set of tests. We should be able to run > > various tests on the same machine. So the OVS tests, like any new test > > scenario, should be run on the same machine as the performance tests. > > I think we just need to have a job queue to run tests one by one, > > avoiding a test to disturb results of another one. > > > > Why are we looking for additional machines? > > That was my assumption too. I believe the reason is that the OVS team want > to validate with multiple vendor NICs to be sure that nothing is broken. > We only have Intel and Mellanox hardware in our lab at present, so we > don't cover all vendors. > > Aaron and Ian can provide more details. Hi Thomas, So from the OVS point of view, one of the challenges when consuming DPDK is ensuring device compatibility across the community, in particular with the ongoing/upcoming HWOL development work. There is a risk that the implementation of HWOL for vendor X may not be compatible or suitable for vendor Y, etc. To help address this risk, it was proposed back at DPDK Userspace 2018 that if the OVS community could provide a server, it could be used to co-locate a variety of vendor NICs. We could then leverage the OVS Zero Day robot to apply and conduct functional testing for OVS development patches and ensure patches do not break existing functionality. To date, Aaron has received a number of NICs from various vendors; however, a server (possibly two) would still be needed to deploy the NICs. It was proposed that the DPDK Lab in UNL could possibly aid with this. The aim here is purely functional and the system would not be used to benchmark the NICs in question. It would be purely to stop regressions being introduced into OVS DPDK and also act as feedback to the DPDK community if changes were needed in DPDK itself. 
It might be possible to run the tests on the existing hardware in UNL, but I suspect this would not cover all the NIC vendors Aaron has received NICs from to date. I also wonder whether it would interrupt the existing DPDK workloads on those servers, so there was an open question on whether OVS DPDK should be deployed on a separate board. @Aaron, have I missed anything from your side? Thanks Ian ^ permalink raw reply [flat|nested] 11+ messages in thread
* Re: [dpdk-ci] Minutes of DPDK Lab Meeting, February 26th 2019-03-04 8:06 ` Stokes, Ian @ 2019-03-04 8:40 ` Thomas Monjalon 2019-03-04 8:49 ` Stokes, Ian 0 siblings, 1 reply; 11+ messages in thread From: Thomas Monjalon @ 2019-03-04 8:40 UTC (permalink / raw) To: Stokes, Ian; +Cc: O'Driscoll, Tim, ci, Aaron Conole 04/03/2019 09:06, Stokes, Ian: > > -----Original Message----- > > From: O'Driscoll, Tim > > Sent: Thursday, February 28, 2019 3:25 PM > > To: Thomas Monjalon <thomas@monjalon.net>; ci@dpdk.org > > Cc: Aaron Conole <aconole@redhat.com>; Stokes, Ian <ian.stokes@intel.com> > > Subject: RE: [dpdk-ci] Minutes of DPDK Lab Meeting, February 26th > > > > > -----Original Message----- > > > From: Thomas Monjalon [mailto:thomas@monjalon.net] > > > Sent: Thursday, February 28, 2019 3:20 PM > > > To: ci@dpdk.org > > > Cc: O'Driscoll, Tim <tim.odriscoll@intel.com> > > > Subject: Re: [dpdk-ci] Minutes of DPDK Lab Meeting, February 26th > > > > > > Hi, > > > > > > 28/02/2019 15:49, O'Driscoll, Tim: > > > > OVS Tests: > > > > - Jeremy and Aaron are working on setup of the temporary hardware. > > > > - There are two options for hardware to run this on when the setup > > > > is > > > complete: 1) use existing vendor hardware; 2) obtain standalone > > > servers for OVS testing. The OVS team's preference is option 2. It's > > > not realistic to expect a vendor to provide hardware to run a > > > competitor's products so we'd need to find a different way to procure > > > this. Aaron will check with Rashid to see if budget is available from > > > Red Hat. I'll check with Trishan to see if the DPDK project budget could > > cover this. > > > > - The OVS focus is on functional tests, not performance tests. The > > > DPDK lab is currently set up so that each vendor has complete control > > > over performance tests & results on their hardware. 
If we use separate > > > hardware for the OVS tests, we need to ensure that we restrict scope > > > to functional tests so that it does not conflict with this principle > > > in future. > > > > > > I am not sure to understand. > > > In my opinion, the purpose of this lab is to have properly tuned > > > hardware for running a large set of tests. We should be able to run > > > various tests on the same machine. So the OVS tests, like any new test > > > scenario, should be run on the same machine as the performance tests. > > > I think we just need to have a job queue to run tests one by one, > > > avoiding a test to disturb results of another one. > > > > > > Why are we looking for additional machines? > > > > That was my assumption too. I believe the reason is that the OVS team want > > to validate with multiple vendor NICs to be sure that nothing is broken. > > We only have Intel and Mellanox hardware in our lab at present, so we > > don't cover all vendors. > > > > Aaron and Ian can provide more details. > > Hi Thomas, > > So from the OVS point of view, one of challenges when consuming DPDK is ensuring device compatibility across the community, in particular with the ongoing/upcoming HWOL development work. There is a risk that the implementation for HWOL for vendor x may not be compatible or suitable for vendor y etc. > > To help address this risk, it was proposed back in DPDK userspace 2018 that if the OVS community could provide a server, it could be used to co-locate a variety of vendor NICs. We could then leverage the OVS Zero Day robot to apply and conduct functional testing for OVS development patches and ensure patches do not break existing functionality. Yes it seems to be the scope of the DPDK Community Lab. > To date Aaron has received a number of NICs from various vendors, however a server (possibly 2) would still be needed to deploy the NICS. > > It was proposed that possibly the DPDK Lab in UNL aid with this. 
> > The aim here is purely functional and the system would not be used to benchmark the NICs in question. It would be purely to stop regressions being introduced into OVS DPDK and also act as a feedback to the DPDK community if changes were needed in DPDK itself. So far I don't see the need for new servers. > It might be possible to run the tests on the existing hardware in UNL but I guess this might not cover the NIC vendors Aaron has received to date. I wonder would it interrupt the existing DPDK workloads on those servers also so there was an open question on whether OVS DPDK should be deployed on a separate board. Which vendor is not available in the DPDK Community Lab? > @Aaron, have I missed anything from your side? > > Thanks > Ian ^ permalink raw reply [flat|nested] 11+ messages in thread
* Re: [dpdk-ci] Minutes of DPDK Lab Meeting, February 26th 2019-03-04 8:40 ` Thomas Monjalon @ 2019-03-04 8:49 ` Stokes, Ian 0 siblings, 0 replies; 11+ messages in thread From: Stokes, Ian @ 2019-03-04 8:49 UTC (permalink / raw) To: Thomas Monjalon; +Cc: O'Driscoll, Tim, ci, Aaron Conole > -----Original Message----- > From: Thomas Monjalon [mailto:thomas@monjalon.net] > Sent: Monday, March 4, 2019 8:40 AM > To: Stokes, Ian <ian.stokes@intel.com> > Cc: O'Driscoll, Tim <tim.odriscoll@intel.com>; ci@dpdk.org; Aaron Conole > <aconole@redhat.com> > Subject: Re: [dpdk-ci] Minutes of DPDK Lab Meeting, February 26th > > 04/03/2019 09:06, Stokes, Ian: > > > -----Original Message----- > > > From: O'Driscoll, Tim > > > Sent: Thursday, February 28, 2019 3:25 PM > > > To: Thomas Monjalon <thomas@monjalon.net>; ci@dpdk.org > > > Cc: Aaron Conole <aconole@redhat.com>; Stokes, Ian > > > <ian.stokes@intel.com> > > > Subject: RE: [dpdk-ci] Minutes of DPDK Lab Meeting, February 26th > > > > > > > -----Original Message----- > > > > From: Thomas Monjalon [mailto:thomas@monjalon.net] > > > > Sent: Thursday, February 28, 2019 3:20 PM > > > > To: ci@dpdk.org > > > > Cc: O'Driscoll, Tim <tim.odriscoll@intel.com> > > > > Subject: Re: [dpdk-ci] Minutes of DPDK Lab Meeting, February 26th > > > > > > > > Hi, > > > > > > > > 28/02/2019 15:49, O'Driscoll, Tim: > > > > > OVS Tests: > > > > > - Jeremy and Aaron are working on setup of the temporary hardware. > > > > > - There are two options for hardware to run this on when the > > > > > setup is > > > > complete: 1) use existing vendor hardware; 2) obtain standalone > > > > servers for OVS testing. The OVS team's preference is option 2. > > > > It's not realistic to expect a vendor to provide hardware to run a > > > > competitor's products so we'd need to find a different way to > > > > procure this. Aaron will check with Rashid to see if budget is > > > > available from Red Hat. 
I'll check with Trishan to see if the DPDK > > > > project budget could > > > cover this. > > > > > - The OVS focus is on functional tests, not performance tests. > > > > > The > > > > DPDK lab is currently set up so that each vendor has complete > > > > control over performance tests & results on their hardware. If we > > > > use separate hardware for the OVS tests, we need to ensure that we > > > > restrict scope to functional tests so that it does not conflict > > > > with this principle in future. > > > > > > > > I am not sure to understand. > > > > In my opinion, the purpose of this lab is to have properly tuned > > > > hardware for running a large set of tests. We should be able to > > > > run various tests on the same machine. So the OVS tests, like any > > > > new test scenario, should be run on the same machine as the > performance tests. > > > > I think we just need to have a job queue to run tests one by one, > > > > avoiding a test to disturb results of another one. > > > > > > > > Why are we looking for additional machines? > > > > > > That was my assumption too. I believe the reason is that the OVS > > > team want to validate with multiple vendor NICs to be sure that > nothing is broken. > > > We only have Intel and Mellanox hardware in our lab at present, so > > > we don't cover all vendors. > > > > > > Aaron and Ian can provide more details. > > > > Hi Thomas, > > > > So from the OVS point of view, one of challenges when consuming DPDK is > ensuring device compatibility across the community, in particular with the > ongoing/upcoming HWOL development work. There is a risk that the > implementation for HWOL for vendor x may not be compatible or suitable for > vendor y etc. > > > > To help address this risk, it was proposed back in DPDK userspace 2018 > that if the OVS community could provide a server, it could be used to co- > locate a variety of vendor NICs. 
We could then leverage the OVS Zero Day > robot to apply and conduct functional testing for OVS development patches > and ensure patches do not break existing functionality. > > Yes it seems to be the scope of the DPDK Community Lab. > > > To date Aaron has received a number of NICs from various vendors, > however a server (possibly 2) would still be needed to deploy the NICS. > > > > It was proposed that possibly the DPDK Lab in UNL aid with this. > > > > The aim here is purely functional and the system would not be used to > benchmark the NICs in question. It would be purely to stop regressions > being introduced into OVS DPDK and also act as a feedback to the DPDK > community if changes were needed in DPDK itself. > > So far I don't see the need for new servers. Maybe there isn't; I guess when this was originally proposed, knowledge of what HW was available was limited. > > > It might be possible to run the tests on the existing hardware in UNL > but I guess this might not cover the NIC vendors Aaron has received to > date. I wonder would it interrupt the existing DPDK workloads on those > servers also so there was an open question on whether OVS DPDK should be > deployed on a separate board. > > Which vendor is not available in the DPDK Community Lab? > I'm not sure which vendors Aaron already has NICs for; if this can be shared, it should be compared against which vendors are already in the lab. Thanks Ian > > @Aaron, have I missed anything from your side? > > > > Thanks > > Ian > ^ permalink raw reply [flat|nested] 11+ messages in thread
* Re: [dpdk-ci] Minutes of DPDK Lab Meeting, February 26th 2019-02-28 15:24 ` O'Driscoll, Tim 2019-03-04 8:06 ` Stokes, Ian @ 2019-03-04 13:06 ` Aaron Conole 2019-03-04 16:59 ` Lincoln Lavoie 1 sibling, 1 reply; 11+ messages in thread From: Aaron Conole @ 2019-03-04 13:06 UTC (permalink / raw) To: O'Driscoll, Tim; +Cc: Thomas Monjalon, ci, Stokes, Ian, Rashid Khan "O'Driscoll, Tim" <tim.odriscoll@intel.com> writes: >> -----Original Message----- >> From: Thomas Monjalon [mailto:thomas@monjalon.net] >> Sent: Thursday, February 28, 2019 3:20 PM >> To: ci@dpdk.org >> Cc: O'Driscoll, Tim <tim.odriscoll@intel.com> >> Subject: Re: [dpdk-ci] Minutes of DPDK Lab Meeting, February 26th >> >> Hi, >> >> 28/02/2019 15:49, O'Driscoll, Tim: >> > OVS Tests: >> > - Jeremy and Aaron are working on setup of the temporary hardware. >> > - There are two options for hardware to run this on when the setup is >> complete: 1) use existing vendor hardware; 2) obtain standalone servers >> for OVS testing. The OVS team's preference is option 2. It's not >> realistic to expect a vendor to provide hardware to run a competitor's >> products so we'd need to find a different way to procure this. Aaron >> will check with Rashid to see if budget is available from Red Hat. I'll >> check with Trishan to see if the DPDK project budget could cover this. >> > - The OVS focus is on functional tests, not performance tests. The >> DPDK lab is currently set up so that each vendor has complete control >> over performance tests & results on their hardware. If we use separate >> hardware for the OVS tests, we need to ensure that we restrict scope to >> functional tests so that it does not conflict with this principle in >> future. >> >> I am not sure to understand. >> In my opinion, the purpose of this lab is to have properly tuned >> hardware >> for running a large set of tests. We should be able to run various >> tests >> on the same machine. 
So the OVS tests, like any new test scenario, >> should be run on the same machine as the performance tests. This is definitely something I support as well. >> I think we just need to have a job queue to run tests one by one, >> avoiding a test to disturb results of another one. >> >> Why are we looking for additional machines? I think because there is no such kind of job queue available, currently? I don't recall if an exact reason was given other than the nebulous fear of "breaking the existing setups". > That was my assumption too. I believe the reason is that the OVS team > want to validate with multiple vendor NICs to be sure that nothing is > broken. We only have Intel and Mellanox hardware in our lab at > present, so we don't cover all vendors. > > Aaron and Ian can provide more details. ^ permalink raw reply [flat|nested] 11+ messages in thread
* Re: [dpdk-ci] Minutes of DPDK Lab Meeting, February 26th 2019-03-04 13:06 ` Aaron Conole @ 2019-03-04 16:59 ` Lincoln Lavoie 2019-03-04 17:40 ` Thomas Monjalon 0 siblings, 1 reply; 11+ messages in thread From: Lincoln Lavoie @ 2019-03-04 16:59 UTC (permalink / raw) To: Aaron Conole Cc: O'Driscoll, Tim, Thomas Monjalon, ci, Stokes, Ian, Rashid Khan Hi All, The reason we selected loaner machines (UNH provided) for the development was to avoid interference with the existing setup, i.e. don't break or degrade the performance-tuned systems. The deployed testing (i.e. once we have the OVS tests developed and integrated with the lab dashboard) can be done either on the existing hardware or on a standalone setup with multiple NICs. I think this was proposed because functional testing with multiple NICs would have more hardware coverage than the two vendor performance systems right now. That might also be a lower bar for some hardware vendors, who would only need to provide a NIC, etc. If we choose "option A" and use the existing performance setups, we would serialize the testing so the performance jobs run independently, but I don't think that was really the question. Cheers, Lincoln On Mon, Mar 4, 2019 at 8:06 AM Aaron Conole <aconole@redhat.com> wrote: > "O'Driscoll, Tim" <tim.odriscoll@intel.com> writes: > > >> -----Original Message----- > >> From: Thomas Monjalon [mailto:thomas@monjalon.net] > >> Sent: Thursday, February 28, 2019 3:20 PM > >> To: ci@dpdk.org > >> Cc: O'Driscoll, Tim <tim.odriscoll@intel.com> > >> Subject: Re: [dpdk-ci] Minutes of DPDK Lab Meeting, February 26th > >> > >> Hi, > >> > >> 28/02/2019 15:49, O'Driscoll, Tim: > >> > OVS Tests: > >> > - Jeremy and Aaron are working on setup of the temporary hardware. > >> > - There are two options for hardware to run this on when the setup is > >> complete: 1) use existing vendor hardware; 2) obtain standalone servers > >> for OVS testing. 
The OVS team's preference is option 2. It's not > >> realistic to expect a vendor to provide hardware to run a competitor's > >> products so we'd need to find a different way to procure this. Aaron > >> will check with Rashid to see if budget is available from Red Hat. I'll > >> check with Trishan to see if the DPDK project budget could cover this. > >> > - The OVS focus is on functional tests, not performance tests. The > >> DPDK lab is currently set up so that each vendor has complete control > >> over performance tests & results on their hardware. If we use separate > >> hardware for the OVS tests, we need to ensure that we restrict scope to > >> functional tests so that it does not conflict with this principle in > >> future. > >> > >> I am not sure to understand. > >> In my opinion, the purpose of this lab is to have properly tuned > >> hardware > >> for running a large set of tests. We should be able to run various > >> tests > >> on the same machine. So the OVS tests, like any new test scenario, > >> should be run on the same machine as the performance tests. > > This is definitely something I support as well. > > >> I think we just need to have a job queue to run tests one by one, > >> avoiding a test to disturb results of another one. > >> > >> Why are we looking for additional machines? > > I think because there is no such kind of job queue available, currently? > I don't recall if an exact reason was given other than the nebulous fear > of "breaking the existing setups". > > > That was my assumption too. I believe the reason is that the OVS team > > want to validate with multiple vendor NICs to be sure that nothing is > > broken. We only have Intel and Mellanox hardware in our lab at > > present, so we don't cover all vendors. > > > > Aaron and Ian can provide more details. > -- *Lincoln Lavoie* Senior Engineer, Broadband Technologies 21 Madbury Rd., Ste. 
100, Durham, NH 03824 lylavoie@iol.unh.edu https://www.iol.unh.edu +1-603-674-2755 (m) <https://www.iol.unh.edu/> ^ permalink raw reply [flat|nested] 11+ messages in thread
* Re: [dpdk-ci] Minutes of DPDK Lab Meeting, February 26th 2019-03-04 16:59 ` Lincoln Lavoie @ 2019-03-04 17:40 ` Thomas Monjalon 2019-03-08 21:24 ` Aaron Conole 0 siblings, 1 reply; 11+ messages in thread From: Thomas Monjalon @ 2019-03-04 17:40 UTC (permalink / raw) To: Lincoln Lavoie Cc: Aaron Conole, O'Driscoll, Tim, ci, Stokes, Ian, Rashid Khan 04/03/2019 17:59, Lincoln Lavoie: > Hi All, > > The reason we selection loaner machines (UNH provided) for the development > was to avoid interference with the existing setup, i.e. don't break or > degrade the performance tuned systems. > > For the deployed testing (i.e. once we have the OVS developed and > integrated with the lab dashboard) can be done either on the existing > hardware, or a stand alone setup with multiple NICs. I think this was > proposed, because function testing with multiple NICs would had more > hardware coverage than the two vendor performance systems right now. That > might also be a lower bar for some hardware vendors to only provide a NIC, > etc. Either a vendor participates fully in the lab with properly set-up HW, or not at all. We did not plan for half participation. Adding more tests should encourage vendors to participate. > In we choose the "option A" to use the existing performance setups, we > would serialize the testing, so the performance jobs run independently, but > I don't think that was really the question. Yes, it is absolutely necessary to have a serialized job queue, in order to have multiple kinds of tests on the same machine. I think we need some priority levels in the queue. 
> On Mon, Mar 4, 2019 at 8:06 AM Aaron Conole <aconole@redhat.com> wrote: > > > "O'Driscoll, Tim" <tim.odriscoll@intel.com> writes: > > > > >> -----Original Message----- > > >> From: Thomas Monjalon [mailto:thomas@monjalon.net] > > >> Sent: Thursday, February 28, 2019 3:20 PM > > >> To: ci@dpdk.org > > >> Cc: O'Driscoll, Tim <tim.odriscoll@intel.com> > > >> Subject: Re: [dpdk-ci] Minutes of DPDK Lab Meeting, February 26th > > >> > > >> Hi, > > >> > > >> 28/02/2019 15:49, O'Driscoll, Tim: > > >> > OVS Tests: > > >> > - Jeremy and Aaron are working on setup of the temporary hardware. > > >> > - There are two options for hardware to run this on when the setup is > > >> complete: 1) use existing vendor hardware; 2) obtain standalone servers > > >> for OVS testing. The OVS team's preference is option 2. It's not > > >> realistic to expect a vendor to provide hardware to run a competitor's > > >> products so we'd need to find a different way to procure this. Aaron > > >> will check with Rashid to see if budget is available from Red Hat. I'll > > >> check with Trishan to see if the DPDK project budget could cover this. > > >> > - The OVS focus is on functional tests, not performance tests. The > > >> DPDK lab is currently set up so that each vendor has complete control > > >> over performance tests & results on their hardware. If we use separate > > >> hardware for the OVS tests, we need to ensure that we restrict scope to > > >> functional tests so that it does not conflict with this principle in > > >> future. > > >> > > >> I am not sure to understand. > > >> In my opinion, the purpose of this lab is to have properly tuned > > >> hardware > > >> for running a large set of tests. We should be able to run various > > >> tests > > >> on the same machine. So the OVS tests, like any new test scenario, > > >> should be run on the same machine as the performance tests. > > > > This is definitely something I support as well. 
> > > > >> I think we just need to have a job queue to run tests one by one, > > >> avoiding a test to disturb results of another one. > > >> > > >> Why are we looking for additional machines? > > > > I think because there is no such kind of job queue available, currently? > > I don't recall if an exact reason was given other than the nebulous fear > > of "breaking the existing setups". > > > > > That was my assumption too. I believe the reason is that the OVS team > > > want to validate with multiple vendor NICs to be sure that nothing is > > > broken. We only have Intel and Mellanox hardware in our lab at > > > present, so we don't cover all vendors. > > > > > > Aaron and Ian can provide more details. ^ permalink raw reply [flat|nested] 11+ messages in thread
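The serialized job queue with priority levels that Thomas describes could be sketched roughly like this (a minimal illustration only — the class, job names, and priority values below are hypothetical, not the lab's actual scheduler):

```python
import heapq
import itertools

class SerialJobQueue:
    """Run one job at a time; lower priority value runs first, FIFO within a priority."""
    def __init__(self):
        self._heap = []
        self._order = itertools.count()  # tie-breaker preserves submission order

    def submit(self, priority, job):
        heapq.heappush(self._heap, (priority, next(self._order), job))

    def run_all(self):
        results = []
        while self._heap:
            _, _, job = heapq.heappop(self._heap)
            # Strictly serialized: the next job starts only after this one returns,
            # so one test cannot disturb the results of another.
            results.append(job())
        return results

queue = SerialJobQueue()
queue.submit(1, lambda: "perf-test-intel")     # performance jobs at higher priority
queue.submit(2, lambda: "ovs-functional")      # functional jobs run when the machine is free
queue.submit(1, lambda: "perf-test-mellanox")
print(queue.run_all())  # → ['perf-test-intel', 'perf-test-mellanox', 'ovs-functional']
```

In a real lab the queue would dispatch each job to a machine agent rather than call it inline, but the essential properties are the same: strict serialization plus priority ordering.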
* Re: [dpdk-ci] Minutes of DPDK Lab Meeting, February 26th 2019-03-04 17:40 ` Thomas Monjalon @ 2019-03-08 21:24 ` Aaron Conole 2019-03-08 23:22 ` Thomas Monjalon 0 siblings, 1 reply; 11+ messages in thread From: Aaron Conole @ 2019-03-08 21:24 UTC (permalink / raw) To: Thomas Monjalon Cc: Lincoln Lavoie, O'Driscoll, Tim, ci, Stokes, Ian, Rashid Khan Thomas Monjalon <thomas@monjalon.net> writes: > 04/03/2019 17:59, Lincoln Lavoie: >> Hi All, >> >> The reason we selection loaner machines (UNH provided) for the development >> was to avoid interference with the existing setup, i.e. don't break or >> degrade the performance tuned systems. >> >> For the deployed testing (i.e. once we have the OVS developed and >> integrated with the lab dashboard) can be done either on the existing >> hardware, or a stand alone setup with multiple NICs. I think this was >> proposed, because function testing with multiple NICs would had more >> hardware coverage than the two vendor performance systems right now. That >> might also be a lower bar for some hardware vendors to only provide a NIC, >> etc. > > Either a vendor participate fully in the lab with properly setup HW, > or not at all. We did not plan to have half participation. > Adding more tests should encourage to participate. > >> In we choose the "option A" to use the existing performance setups, we >> would serialize the testing, so the performance jobs run independently, but >> I don't think that was really the question. > > Yes, it is absolutely necessary to have a serialized job queue, > in order to have multiple kinds of tests on the same machine. > I think we need some priority levels in the queue. One problem that we will run into is the length of time currently set for running the OVS PVP tests. The total run time is roughly (time per stream size) * (# of stream sizes) * (# of flow counts) * 2 (L2 + L3 flow caching), so it can take a full day for the ovs_perf tests to run. That would be a long time on a per-patch-set basis. 
It might make sense to restrict it to a smaller subset of streams,
flows, etc. We'll need to figure out what makes sense (for example,
maybe we only do 10 minutes of 64-byte and 1514-byte packets with 1M
flows, L2 + L3) from a testing perspective to give us a good mix of
test coverage without tying up the machines for too many cycles.

>
>> On Mon, Mar 4, 2019 at 8:06 AM Aaron Conole <aconole@redhat.com> wrote:
>>
>> > "O'Driscoll, Tim" <tim.odriscoll@intel.com> writes:
>> >
>> > >> -----Original Message-----
>> > >> From: Thomas Monjalon [mailto:thomas@monjalon.net]
>> > >> Sent: Thursday, February 28, 2019 3:20 PM
>> > >> To: ci@dpdk.org
>> > >> Cc: O'Driscoll, Tim <tim.odriscoll@intel.com>
>> > >> Subject: Re: [dpdk-ci] Minutes of DPDK Lab Meeting, February 26th
>> > >>
>> > >> Hi,
>> > >>
>> > >> 28/02/2019 15:49, O'Driscoll, Tim:
>> > >> > OVS Tests:
>> > >> > - Jeremy and Aaron are working on setup of the temporary hardware.
>> > >> > - There are two options for hardware to run this on when the setup
>> > >> > is complete: 1) use existing vendor hardware; 2) obtain standalone
>> > >> > servers for OVS testing. The OVS team's preference is option 2.
>> > >> > It's not realistic to expect a vendor to provide hardware to run a
>> > >> > competitor's products, so we'd need to find a different way to
>> > >> > procure this. Aaron will check with Rashid to see if budget is
>> > >> > available from Red Hat. I'll check with Trishan to see if the DPDK
>> > >> > project budget could cover this.
>> > >> > - The OVS focus is on functional tests, not performance tests. The
>> > >> > DPDK lab is currently set up so that each vendor has complete
>> > >> > control over performance tests & results on their hardware. If we
>> > >> > use separate hardware for the OVS tests, we need to ensure that we
>> > >> > restrict scope to functional tests so that it does not conflict
>> > >> > with this principle in future.
>> > >>
>> > >> I am not sure I understand.
>> > >> In my opinion, the purpose of this lab is to have properly tuned
>> > >> hardware for running a large set of tests. We should be able to run
>> > >> various tests on the same machine. So the OVS tests, like any new
>> > >> test scenario, should be run on the same machine as the performance
>> > >> tests.
>> >
>> > This is definitely something I support as well.
>> >
>> > >> I think we just need to have a job queue to run tests one by one,
>> > >> avoiding a test to disturb results of another one.
>> > >>
>> > >> Why are we looking for additional machines?
>> >
>> > I think because there is no such kind of job queue available, currently?
>> > I don't recall if an exact reason was given other than the nebulous fear
>> > of "breaking the existing setups".
>> >
>> > > That was my assumption too. I believe the reason is that the OVS team
>> > > want to validate with multiple vendor NICs to be sure that nothing is
>> > > broken. We only have Intel and Mellanox hardware in our lab at
>> > > present, so we don't cover all vendors.
>> > >
>> > > Aaron and Ian can provide more details.
* Re: [dpdk-ci] Minutes of DPDK Lab Meeting, February 26th
  2019-03-08 21:24 ` Aaron Conole
@ 2019-03-08 23:22 ` Thomas Monjalon
  0 siblings, 0 replies; 11+ messages in thread
From: Thomas Monjalon @ 2019-03-08 23:22 UTC (permalink / raw)
To: Aaron Conole
Cc: Lincoln Lavoie, O'Driscoll, Tim, ci, Stokes, Ian, Rashid Khan

08/03/2019 22:24, Aaron Conole:
> Thomas Monjalon <thomas@monjalon.net> writes:
>
> > 04/03/2019 17:59, Lincoln Lavoie:
> >> Hi All,
> >>
> >> The reason we selected loaner machines (UNH provided) for the
> >> development was to avoid interference with the existing setup, i.e.
> >> to not break or degrade the performance-tuned systems.
> >>
> >> The deployed testing (i.e. once we have the OVS tests developed and
> >> integrated with the lab dashboard) can be done either on the existing
> >> hardware or on a standalone setup with multiple NICs. I think this
> >> was proposed because functional testing with multiple NICs would have
> >> more hardware coverage than the two vendor performance systems right
> >> now. That might also be a lower bar for some hardware vendors, who
> >> would only need to provide a NIC, etc.
> >
> > Either a vendor participates fully in the lab with properly set up HW,
> > or not at all. We did not plan to have half participation.
> > Adding more tests should encourage vendors to participate.
> >
> >> If we choose "option A" and use the existing performance setups, we
> >> would serialize the testing so the performance jobs run
> >> independently, but I don't think that was really the question.
> >
> > Yes, it is absolutely necessary to have a serialized job queue,
> > in order to have multiple kinds of tests on the same machine.
> > I think we need some priority levels in the queue.
>
> One problem that we will run into is the length of time currently set
> for running the OVS PVP tests. Each stream size runs for a fixed length
> of time, so the total is time per run * # of stream sizes * # of flows
> * 2 (L2 + L3 flow caching) - it can take a full day for the ovs_perf
> tests to run. That would be a long time on a per-patch-set basis.
>
> It might make sense to restrict it to a smaller subset of streams,
> flows, etc. We'll need to figure out what makes sense (for example,
> maybe we only do 10 minutes of 64-byte and 1514-byte packets with 1M
> flows, L2 + L3) from a testing perspective to give us a good mix of
> test coverage without tying up the machines for too many cycles.

Right, the tests must be limited to a reasonable time.
10 minutes might be a maximum.
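The runtime formula Aaron gives (time per run * # of stream sizes * # of flows * 2 for L2 + L3 flow caching) is easy to check with back-of-the-envelope numbers. The stream sizes and flow counts below are illustrative placeholders, not the lab's actual test matrix:

```python
def pvp_runtime_hours(minutes_per_run, stream_sizes, flow_counts):
    """Total OVS PVP runtime: one run per (stream size, flow count, cache mode),
    where cache mode is L2 or L3 flow caching (the factor of 2)."""
    runs = len(stream_sizes) * len(flow_counts) * 2
    return runs * minutes_per_run / 60

# A hypothetical full matrix: 7 packet sizes, 4 flow counts, 10 minutes each.
full = pvp_runtime_hours(10, [64, 128, 256, 512, 1024, 1280, 1514],
                         [10, 1000, 10000, 1000000])

# The reduced subset Aaron suggests: 64B and 1514B only, 1M flows only.
subset = pvp_runtime_hours(10, [64, 1514], [1000000])

print(round(full, 2), round(subset, 2))  # → 9.33 0.67
```

Even these modest assumed numbers show why the full matrix is impractical per patch set, while the two-size, single-flow-count subset fits comfortably inside the 10-minutes-per-run budget discussed above.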
end of thread, other threads: [~2019-03-08 23:22 UTC | newest]

Thread overview: 11+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-02-28 14:49 [dpdk-ci] Minutes of DPDK Lab Meeting, February 26th O'Driscoll, Tim
2019-02-28 15:20 ` Thomas Monjalon
2019-02-28 15:24   ` O'Driscoll, Tim
2019-03-04  8:06     ` Stokes, Ian
2019-03-04  8:40       ` Thomas Monjalon
2019-03-04  8:49         ` Stokes, Ian
2019-03-04 13:06           ` Aaron Conole
2019-03-04 16:59             ` Lincoln Lavoie
2019-03-04 17:40               ` Thomas Monjalon
2019-03-08 21:24                 ` Aaron Conole
2019-03-08 23:22                   ` Thomas Monjalon