From: Thomas Monjalon
To: Aaron Conole
Cc: Lincoln Lavoie, "O'Driscoll, Tim", ci@dpdk.org, "Stokes, Ian", Rashid Khan
Date: Sat, 09 Mar 2019 00:22:39 +0100
Message-ID: <28236099.lslQNaqCNh@xps>
Subject: Re: [dpdk-ci] Minutes of DPDK Lab Meeting, February 26th

08/03/2019 22:24, Aaron Conole:
> Thomas Monjalon writes:
>
> > 04/03/2019 17:59, Lincoln Lavoie:
> >> Hi All,
> >>
> >> The reason we selected loaner machines (UNH provided) for the development
> >> was to avoid interference with the existing setup, i.e. don't break or
> >> degrade the performance-tuned systems.
> >>
> >> The deployed testing (i.e. once we have the OVS testing developed and
> >> integrated with the lab dashboard) can be done either on the existing
> >> hardware or on a stand-alone setup with multiple NICs.
> >> I think this was
> >> proposed because functional testing with multiple NICs would give more
> >> hardware coverage than the two vendor performance systems right now. That
> >> might also be a lower bar for some hardware vendors, who would only need
> >> to provide a NIC, etc.
> >
> > Either a vendor participates fully in the lab with properly set up HW,
> > or not at all. We did not plan to have half participation.
> > Adding more tests should encourage them to participate.
> >
> >> If we choose "option A" to use the existing performance setups, we
> >> would serialize the testing, so the performance jobs run independently,
> >> but I don't think that was really the question.
> >
> > Yes, it is absolutely necessary to have a serialized job queue,
> > in order to run multiple kinds of tests on the same machine.
> > I think we need some priority levels in the queue.
>
> One problem that we will run into is the length of time currently set
> for running the OVS PVP tests. Each stream size runs for a fixed length
> of time, so the total is roughly: run time per point * # of stream sizes
> * # of flow counts * 2 (L2 + L3 flow caching) - it can take a full day
> for the ovs_perf tests to run. That would be a long time on a
> per-patch-set basis.
>
> It might make sense to restrict it to a smaller subset of streams,
> flows, etc. We'll need to figure out what makes sense from a testing
> perspective (for example, maybe we only do 10 minutes of 64-byte and
> 1514-byte packets with 1M flows, L2 + L3) to give us a good mix of test
> coverage without spending too many cycles tying up the machines.

Right, the tests must be limited to a reasonable time.
10 minutes might be a maximum.
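
For illustration, here is a rough version of that duration arithmetic as a
small Python sketch. The per-point run time, packet sizes, and flow counts
are assumed placeholders, not the lab's actual ovs_perf configuration; the
point is only how the full matrix reaches a full day while the reduced
subset stays in the tens of minutes.

# Rough duration estimate: total = run_time * #stream_sizes * #flow_counts * 2
# (L2 + L3 flow caching). All numbers below are assumed placeholders.

def total_minutes(run_time_min, stream_sizes, flow_counts, caching_modes=2):
    """Total wall-clock minutes for one full PVP sweep."""
    return run_time_min * len(stream_sizes) * len(flow_counts) * caching_modes

# Hypothetical full matrix: 6 packet sizes, 5 flow counts, 20 min per point.
full = total_minutes(20, [64, 128, 256, 512, 1024, 1514],
                     [10, 1_000, 10_000, 100_000, 1_000_000])

# Reduced subset from the thread: 64B and 1514B only, 1M flows, 10 min each.
reduced = total_minutes(10, [64, 1514], [1_000_000])

print(f"full sweep:  {full} min (~{full / 60:.0f} h)")  # 1200 min (~20 h)
print(f"reduced set: {reduced} min")                    # 40 min

With these assumed numbers the full sweep is about 20 hours, while the
reduced 64-byte/1514-byte, 1M-flow subset fits in roughly 40 minutes per
patch set.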
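
On the serialized job queue with priority levels, a minimal sketch of the
idea (the job names, priorities, and worker layout are hypothetical, not an
existing lab component): one worker per test machine drains a priority
queue, so jobs never overlap on the same host and higher-priority jobs run
first.

import queue
import threading
from dataclasses import dataclass, field
from typing import Callable

@dataclass(order=True)
class TestJob:
    priority: int                              # lower value = runs earlier
    name: str = field(compare=False)
    run: Callable[[], None] = field(compare=False)

def worker(jobs):
    """One worker per machine: jobs run strictly one at a time."""
    while True:
        job = jobs.get()
        try:
            job.run()
        finally:
            jobs.task_done()

jobs = queue.PriorityQueue()
threading.Thread(target=worker, args=(jobs,), daemon=True).start()

# Hypothetical jobs: per-patch performance runs ahead of long functional runs.
jobs.put(TestJob(0, "dpdk-perf-patch", lambda: print("performance run")))
jobs.put(TestJob(2, "ovs-pvp-reduced", lambda: print("reduced OVS PVP run")))
jobs.join()

A real implementation would live inside the lab's CI scheduler rather than a
standalone script, but the shape is the same: one serialized consumer per
machine plus priority ordering of the pending test jobs.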