DPDK CI discussions
From: Thomas Monjalon <thomas@monjalon.net>
To: Lincoln Lavoie <lylavoie@iol.unh.edu>
Cc: Aaron Conole <aconole@redhat.com>,
	"O'Driscoll, Tim" <tim.odriscoll@intel.com>,
	"ci@dpdk.org" <ci@dpdk.org>, "Stokes, Ian" <ian.stokes@intel.com>,
	Rashid Khan <rkhan@redhat.com>
Subject: Re: [dpdk-ci] Minutes of DPDK Lab Meeting, February 26th
Date: Mon, 04 Mar 2019 18:40:59 +0100	[thread overview]
Message-ID: <5602454.1Rb4ADb6Bi@xps> (raw)
In-Reply-To: <CAOE1vsP9tqkN+J3fS5Xmum5xf_8UYQ7a6xiq6uN58jNCukKSXg@mail.gmail.com>

04/03/2019 17:59, Lincoln Lavoie:
> Hi All,
> 
> The reason we selected loaner machines (UNH provided) for the development
> was to avoid interference with the existing setup, i.e. to avoid breaking
> or degrading the performance-tuned systems.
> 
> The deployed testing (i.e. once we have the OVS tests developed and
> integrated with the lab dashboard) can be done either on the existing
> hardware or on a standalone setup with multiple NICs.  I think this was
> proposed because functional testing with multiple NICs would have more
> hardware coverage than the two vendor performance systems right now.  That
> might also be a lower bar for some hardware vendors, who would only need
> to provide a NIC, etc.

Either a vendor participates fully in the lab with properly set up HW,
or not at all. We did not plan for half participation.
Adding more tests should encourage vendors to participate.

> If we choose "option A" to use the existing performance setups, we
> would serialize the testing so the performance jobs run independently, but
> I don't think that was really the question.

Yes, it is absolutely necessary to have a serialized job queue,
in order to run multiple kinds of tests on the same machine.
I think we need some priority levels in the queue.
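Such a serialized queue with priority levels could be sketched as below. This is purely an illustrative sketch, not the lab's actual implementation; the class and job names are hypothetical:

```python
# Illustrative sketch only -- not the DPDK lab's actual implementation.
# A serialized job queue with priority levels: tests on one machine run
# strictly one by one, and higher-priority jobs (e.g. performance runs)
# are dequeued ahead of lower-priority ones (e.g. functional OVS tests).
import heapq

class SerializedTestQueue:
    def __init__(self):
        self._heap = []
        self._seq = 0  # tie-breaker: same-priority jobs stay FIFO

    def submit(self, priority, name, test_fn):
        # Lower number = higher priority.
        heapq.heappush(self._heap, (priority, self._seq, name, test_fn))
        self._seq += 1

    def run_all(self):
        order = []
        while self._heap:
            _, _, name, test_fn = heapq.heappop(self._heap)
            test_fn()  # next job starts only after this one returns
            order.append(name)
        return order

q = SerializedTestQueue()
q.submit(1, "ovs-functional", lambda: None)
q.submit(0, "perf-baseline", lambda: None)
order = q.run_all()  # ['perf-baseline', 'ovs-functional']
```

Because only one job runs at a time, a functional test cannot disturb the measurements of a performance test sharing the same machine.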


> On Mon, Mar 4, 2019 at 8:06 AM Aaron Conole <aconole@redhat.com> wrote:
> 
> > "O'Driscoll, Tim" <tim.odriscoll@intel.com> writes:
> >
> > >> -----Original Message-----
> > >> From: Thomas Monjalon [mailto:thomas@monjalon.net]
> > >> Sent: Thursday, February 28, 2019 3:20 PM
> > >> To: ci@dpdk.org
> > >> Cc: O'Driscoll, Tim <tim.odriscoll@intel.com>
> > >> Subject: Re: [dpdk-ci] Minutes of DPDK Lab Meeting, February 26th
> > >>
> > >> Hi,
> > >>
> > >> 28/02/2019 15:49, O'Driscoll, Tim:
> > >> > OVS Tests:
> > >> > - Jeremy and Aaron are working on setup of the temporary hardware.
> > >> > - There are two options for hardware to run this on when the setup is
> > >> complete: 1) use existing vendor hardware; 2) obtain standalone servers
> > >> for OVS testing. The OVS team's preference is option 2. It's not
> > >> realistic to expect a vendor to provide hardware to run a competitor's
> > >> products so we'd need to find a different way to procure this. Aaron
> > >> will check with Rashid to see if budget is available from Red Hat. I'll
> > >> check with Trishan to see if the DPDK project budget could cover this.
> > >> > - The OVS focus is on functional tests, not performance tests. The
> > >> DPDK lab is currently set up so that each vendor has complete control
> > >> over performance tests & results on their hardware. If we use separate
> > >> hardware for the OVS tests, we need to ensure that we restrict scope to
> > >> functional tests so that it does not conflict with this principle in
> > >> future.
> > >>
> > >> I am not sure I understand.
> > >> In my opinion, the purpose of this lab is to have properly tuned
> > >> hardware
> > >> for running a large set of tests. We should be able to run various
> > >> tests
> > >> on the same machine. So the OVS tests, like any new test scenario,
> > >> should be run on the same machine as the performance tests.
> >
> > This is definitely something I support as well.
> >
> > >> I think we just need to have a job queue to run tests one by one,
> > >> preventing one test from disturbing the results of another.
> > >>
> > >> Why are we looking for additional machines?
> >
> > I think it is because no such job queue is available currently?
> > I don't recall if an exact reason was given other than the nebulous fear
> > of "breaking the existing setups".
> >
> > > That was my assumption too. I believe the reason is that the OVS team
> > > want to validate with multiple vendor NICs to be sure that nothing is
> > > broken. We only have Intel and Mellanox hardware in our lab at
> > > present, so we don't cover all vendors.
> > >
> > > Aaron and Ian can provide more details.

Thread overview: 11+ messages
2019-02-28 14:49 O'Driscoll, Tim
2019-02-28 15:20 ` Thomas Monjalon
2019-02-28 15:24   ` O'Driscoll, Tim
2019-03-04  8:06     ` Stokes, Ian
2019-03-04  8:40       ` Thomas Monjalon
2019-03-04  8:49         ` Stokes, Ian
2019-03-04 13:06     ` Aaron Conole
2019-03-04 16:59       ` Lincoln Lavoie
2019-03-04 17:40         ` Thomas Monjalon [this message]
2019-03-08 21:24           ` Aaron Conole
2019-03-08 23:22             ` Thomas Monjalon
