It was also discussed in this meeting, after Tim had to leave for another
meeting, that we should be saving absolute results instead of delta results.

In past discussions, it was agreed to save only the delta result to the
UNH-IOL database that drives the dashboard, with the baseline (the absolute
result) kept on the respective system and old absolute baselines discarded.
Additionally, keeping absolute data in the database would make it possible
to produce accurate charts of performance over time and better show trends.
However, that will require agreement on what data can be made public versus
what is available only to logged-in users (i.e. logged-in users seeing
their own absolute performance numbers).

At this time, the only change would be to start storing the numbers in the
database, with only UNH-IOL having access to the data. Nothing else about
the systems would change in this first step. The public view will still
show a delta for patches, until we decide later whether to show these
absolute numbers in private or also in public views.

If there are any issues with going forward with this decision, I'd like
this thread to serve as the public discussion of those issues. Ideally, we
would like to reach a definite go / no-go on this change by the next team
meeting on February 12. If there are no replies or other participation in
this thread, we will take that as acceptance of the proposal to store the
absolute numbers in the database. Thanks.

On Tue, Jan 29, 2019 at 1:58 PM O'Driscoll, Tim wrote:
> Red Hat Test Cases:
> - We had a separate meeting with Red Hat after our lab meeting today.
> Aaron and Rashid explained that they have a suite of DPDK tests, some of
> which use OVS and some of which test DPDK in isolation. These can be
> easily set up in the DPDK lab.
> - We agreed to proceed with this. It's a great way to get additional test
> coverage, which we've been struggling with for a while. Aaron will work
> with Jeremy on the implementation.
> - The initial goal is to get tests running and results into the database.
> After that, we can decide how to represent the results in the dashboard
> and patchwork.
> - We will need to make sure that running these tests has no impact on the
> current test cases. The assumption is that current utilization is low
> enough that this should not be a problem.
>
> DTS Improvements:
> - Thomas will solicit community feedback on this. Community help will
> also be required to implement any changes.
>
> Applying Patches to Sub-Trees:
> - The plan is to apply patch sets to master first. If that fails, then
> we'll use a script to determine which sub-tree to apply the patch set to.
> Thomas and Ali hope to provide a script this week.
> - If a patch set is applied to a sub-tree, then we need to agree whether
> the baseline for the performance comparison should be master or the
> sub-tree. Agreed that we should compare to master.

--
Jeremy Plsek
UNH InterOperability Laboratory
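
P.S. To make the storage proposal above concrete, here is a minimal sketch
of how absolute results could be stored while the public view stays a
derived delta. This is not the actual dashboard schema; the table name,
column names, units, and numbers are all made up for illustration:

    import sqlite3

    conn = sqlite3.connect(":memory:")  # stand-in for the UNH-IOL database

    # Store absolute numbers per run (hypothetical schema).
    conn.execute("""
        CREATE TABLE measurement (
            patch_id    INTEGER,
            baseline    TEXT,     -- identifies the baseline build
            throughput  REAL,     -- absolute result, Mpps (example unit)
            is_baseline INTEGER   -- 1 for the baseline run, 0 for the patch
        )""")

    # Made-up example numbers; only UNH-IOL would see these rows at first.
    conn.execute("INSERT INTO measurement VALUES (1001, 'v19.02-rc1', 24.31, 1)")
    conn.execute("INSERT INTO measurement VALUES (1001, 'v19.02-rc1', 24.05, 0)")

    # The public dashboard keeps showing only the delta, computed on the fly.
    (delta,) = conn.execute("""
        SELECT MAX(CASE WHEN is_baseline = 0 THEN throughput END)
             - MAX(CASE WHEN is_baseline = 1 THEN throughput END)
        FROM measurement WHERE patch_id = 1001""").fetchone()
    print(f"public delta for patch 1001: {delta:+.2f} Mpps")

The point of the sketch: a delta is always derivable from stored absolutes,
but absolutes can never be recovered from stored deltas, which is why the
first step is simply to start writing the absolute numbers to the database.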
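P.P.S. On the sub-tree question in Tim's notes: the script from Thomas and
Ali doesn't exist yet, so the following is only a rough sketch of the
"try master first, then fall back" idea. The sub-tree list, the local clone
paths, the mbox file name, and the git-am approach are all my assumptions,
not a description of the real script:

    import subprocess

    # Hypothetical local clones, tried in order: master first, then sub-trees.
    TREES = ["dpdk", "dpdk-next-net", "dpdk-next-crypto", "dpdk-next-virtio"]

    def try_apply(tree, mbox):
        """Attempt to apply a patch set with 'git am'; roll back on failure."""
        result = subprocess.run(["git", "-C", tree, "am", mbox],
                                capture_output=True)
        if result.returncode != 0:
            # Leave the tree clean before trying the next candidate.
            subprocess.run(["git", "-C", tree, "am", "--abort"],
                           capture_output=True)
            return False
        return True

    def pick_tree(mbox):
        for tree in TREES:
            if try_apply(tree, mbox):
                return tree
        return None

    if __name__ == "__main__":
        tree = pick_tree("patchset.mbox")  # hypothetical input file
        print(f"applied to: {tree}" if tree else "does not apply anywhere")

Note that this only settles where the patch set applies; per the meeting,
the performance baseline for the comparison would still be master even when
the patch set is applied to a sub-tree.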