Hi All,

The reason we selected loaner machines (UNH provided) for the development was to avoid interference with the existing setup, i.e. not breaking or degrading the performance-tuned systems.

The deployed testing (i.e. once we have the OVS tests developed and integrated with the lab dashboard) can be done either on the existing hardware or on a standalone setup with multiple NICs.  I think the latter was proposed because functional testing with multiple NICs would have more hardware coverage than the two vendor performance systems we have right now.  That might also be a lower bar for some hardware vendors, who would only need to provide a NIC, etc.

If we choose "option A" and use the existing performance setups, we would serialize the testing so the performance jobs run independently, but I don't think that was really the question.
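If we do end up sharing the performance machines, even something as simple as an exclusive lock around each job would give us that serialization.  Here is a minimal sketch in Python, assuming a per-testbed lock file; the lock path and the wrapper name are hypothetical, not anything in the lab today:

    import fcntl
    import subprocess
    import sys

    # Hypothetical per-testbed lock file; one per performance machine.
    LOCK_PATH = "/var/lock/dpdk-lab-testbed.lock"

    def run_exclusive(cmd):
        """Run a test job while holding an exclusive lock on the testbed,
        so a functional job never overlaps with a performance job."""
        with open(LOCK_PATH, "w") as lock:
            fcntl.flock(lock, fcntl.LOCK_EX)  # blocks until the machine is free
            try:
                return subprocess.run(cmd, check=True)
            finally:
                fcntl.flock(lock, fcntl.LOCK_UN)

    if __name__ == "__main__":
        run_exclusive(sys.argv[1:])

Each job runner would invoke its test command through a wrapper like this, so whichever job grabs the lock first runs to completion before the next one starts.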

Cheers,
Lincoln

On Mon, Mar 4, 2019 at 8:06 AM Aaron Conole <aconole@redhat.com> wrote:
"O'Driscoll, Tim" <tim.odriscoll@intel.com> writes:

>> -----Original Message-----
>> From: Thomas Monjalon [mailto:thomas@monjalon.net]
>> Sent: Thursday, February 28, 2019 3:20 PM
>> To: ci@dpdk.org
>> Cc: O'Driscoll, Tim <tim.odriscoll@intel.com>
>> Subject: Re: [dpdk-ci] Minutes of DPDK Lab Meeting, February 26th
>>
>> Hi,
>>
>> 28/02/2019 15:49, O'Driscoll, Tim:
>> > OVS Tests:
>> > - Jeremy and Aaron are working on setup of the temporary hardware.
>> > - There are two options for hardware to run this on when the setup is
>> complete: 1) use existing vendor hardware; 2) obtain standalone servers
>> for OVS testing. The OVS team's preference is option 2. It's not
>> realistic to expect a vendor to provide hardware to run a competitor's
>> products so we'd need to find a different way to procure this. Aaron
>> will check with Rashid to see if budget is available from Red Hat. I'll
>> check with Trishan to see if the DPDK project budget could cover this.
>> > - The OVS focus is on functional tests, not performance tests. The
>> DPDK lab is currently set up so that each vendor has complete control
>> over performance tests & results on their hardware. If we use separate
>> hardware for the OVS tests, we need to ensure that we restrict scope to
>> functional tests so that it does not conflict with this principle in
>> future.
>>
>> I am not sure I understand.
>> In my opinion, the purpose of this lab is to have properly tuned
>> hardware for running a large set of tests. We should be able to run
>> various tests on the same machine. So the OVS tests, like any new
>> test scenario, should be run on the same machine as the performance
>> tests.

This is definitely something I support as well.

>> I think we just need to have a job queue to run tests one by one,
>> so that one test does not disturb the results of another.
>>
>> Why are we looking for additional machines?

I think because there is no such job queue available currently?
I don't recall if an exact reason was given other than the nebulous fear
of "breaking the existing setups".

> That was my assumption too. I believe the reason is that the OVS team
> want to validate with multiple vendor NICs to be sure that nothing is
> broken. We only have Intel and Mellanox hardware in our lab at
> present, so we don't cover all vendors.
>
> Aaron and Ian can provide more details.


--
Lincoln Lavoie
Senior Engineer, Broadband Technologies
21 Madbury Rd., Ste. 100, Durham, NH 03824
+1-603-674-2755 (m)