From: Thomas Monjalon <firstname.lastname@example.org>
To: Lincoln Lavoie <email@example.com>
Cc: Aaron Conole <firstname.lastname@example.org>, "O'Driscoll, Tim" <email@example.com>,
 "firstname.lastname@example.org" <email@example.com>, "Stokes, Ian" <firstname.lastname@example.org>,
 Rashid Khan <email@example.com>
Subject: Re: [dpdk-ci] Minutes of DPDK Lab Meeting, February 26th
Date: Mon, 04 Mar 2019 18:40:59 +0100
Message-ID: <5602454.1Rb4ADb6Bi@xps> (raw)
In-Reply-To: <CAOE1vsP9tqkN+J3fS5Xmum5xf_8UYQ7a6xiq6uN58jNCukKSXg@mail.gmail.com>

04/03/2019 17:59, Lincoln Lavoie:
> Hi All,
>
> The reason we selected loaner machines (UNH provided) for the development
> was to avoid interference with the existing setup, i.e. to avoid breaking
> or degrading the performance-tuned systems.
>
> The deployed testing (i.e. once we have the OVS tests developed and
> integrated with the lab dashboard) can be done either on the existing
> hardware or on a standalone setup with multiple NICs. I think this was
> proposed because functional testing with multiple NICs would have more
> hardware coverage than the two vendor performance systems right now. That
> might also be a lower bar for some hardware vendors, who would only need
> to provide a NIC.

Either a vendor participates fully in the lab with properly set up hardware,
or not at all. We did not plan for half participation.
Adding more tests should encourage vendors to participate.

> If we choose "option A" and use the existing performance setups, we
> would serialize the testing so the performance jobs run independently,
> but I don't think that was really the question.

Yes, it is absolutely necessary to have a serialized job queue, in order to
run multiple kinds of tests on the same machine.
I think we need some priority levels in the queue.
> On Mon, Mar 4, 2019 at 8:06 AM Aaron Conole <firstname.lastname@example.org> wrote:
>
> > "O'Driscoll, Tim" <email@example.com> writes:
> >
> > >> -----Original Message-----
> > >> From: Thomas Monjalon [mailto:firstname.lastname@example.org]
> > >> Sent: Thursday, February 28, 2019 3:20 PM
> > >> To: email@example.com
> > >> Cc: O'Driscoll, Tim <firstname.lastname@example.org>
> > >> Subject: Re: [dpdk-ci] Minutes of DPDK Lab Meeting, February 26th
> > >>
> > >> Hi,
> > >>
> > >> 28/02/2019 15:49, O'Driscoll, Tim:
> > >> > OVS Tests:
> > >> > - Jeremy and Aaron are working on setup of the temporary hardware.
> > >> > - There are two options for hardware to run this on when the setup
> > >> > is complete: 1) use existing vendor hardware; 2) obtain standalone
> > >> > servers for OVS testing. The OVS team's preference is option 2.
> > >> > It's not realistic to expect a vendor to provide hardware to run a
> > >> > competitor's products, so we'd need to find a different way to
> > >> > procure this. Aaron will check with Rashid to see if budget is
> > >> > available from Red Hat. I'll check with Trishan to see if the DPDK
> > >> > project budget could cover this.
> > >> > - The OVS focus is on functional tests, not performance tests. The
> > >> > DPDK lab is currently set up so that each vendor has complete
> > >> > control over performance tests & results on their hardware. If we
> > >> > use separate hardware for the OVS tests, we need to ensure that we
> > >> > restrict the scope to functional tests, so that it does not
> > >> > conflict with this principle in future.
> > >>
> > >> I am not sure I understand.
> > >> In my opinion, the purpose of this lab is to have properly tuned
> > >> hardware for running a large set of tests. We should be able to run
> > >> various tests on the same machine. So the OVS tests, like any new
> > >> test scenario, should be run on the same machine as the performance
> > >> tests.
> >
> > This is definitely something I support as well.
> >
> > >> I think we just need a job queue to run tests one by one, so that
> > >> one test does not disturb the results of another.
> > >>
> > >> Why are we looking for additional machines?
> >
> > I think because there is no such job queue available currently?
> > I don't recall if an exact reason was given, other than the nebulous
> > fear of "breaking the existing setups".
> >
> > > That was my assumption too. I believe the reason is that the OVS team
> > > want to validate with multiple vendor NICs to be sure that nothing is
> > > broken. We only have Intel and Mellanox hardware in our lab at
> > > present, so we don't cover all vendors.
> > >
> > > Aaron and Ian can provide more details.
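The serialized job queue with priority levels discussed above could be sketched as follows. This is only an illustration of the idea, not the actual lab infrastructure: the priority names, the `SerializedJobQueue` class, and the single-worker design are all assumptions made for the example.

```python
import heapq
import itertools
import threading

# Hypothetical priority levels; lower value = higher priority.
PRIO_PERFORMANCE = 0   # tuned performance runs take precedence
PRIO_FUNCTIONAL = 1    # functional tests (e.g. OVS) run when the machine is free

class SerializedJobQueue:
    """Run jobs strictly one at a time, highest priority first.

    Serializing all jobs on a machine means a functional test can
    never run concurrently with (and disturb) a performance result.
    """

    def __init__(self):
        self._heap = []
        self._counter = itertools.count()  # FIFO order within a priority level
        self._cv = threading.Condition()

    def submit(self, priority, job):
        """Queue a callable with the given priority."""
        with self._cv:
            heapq.heappush(self._heap, (priority, next(self._counter), job))
            self._cv.notify()

    def run_next(self):
        """Pop the highest-priority job and run it; blocks until one exists."""
        with self._cv:
            while not self._heap:
                self._cv.wait()
            _, _, job = heapq.heappop(self._heap)
        return job()  # run outside the lock; still serialized by the caller

# Example: a performance job submitted later still runs before a
# functional job, because of its higher priority.
q = SerializedJobQueue()
results = []
q.submit(PRIO_FUNCTIONAL, lambda: results.append("ovs-functional"))
q.submit(PRIO_PERFORMANCE, lambda: results.append("perf-run"))
q.run_next()
q.run_next()
# results == ["perf-run", "ovs-functional"]
```

In a real deployment the `run_next()` loop would live in a single worker per test machine, which is what guarantees the serialization; the unique counter in each heap entry keeps submission order among jobs of equal priority.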
Thread overview: 11+ messages
  2019-02-28 14:49 O'Driscoll, Tim
  2019-02-28 15:20 ` Thomas Monjalon
  2019-02-28 15:24   ` O'Driscoll, Tim
  2019-03-04  8:06     ` Stokes, Ian
  2019-03-04  8:40       ` Thomas Monjalon
  2019-03-04  8:49         ` Stokes, Ian
  2019-03-04 13:06           ` Aaron Conole
  2019-03-04 16:59             ` Lincoln Lavoie
  2019-03-04 17:40               ` Thomas Monjalon [this message]
  2019-03-08 21:24                 ` Aaron Conole
  2019-03-08 23:22                   ` Thomas Monjalon