DPDK CI discussions
* [dpdk-ci] Community CI Meeting Minutes - February 25, 2021
@ 2021-02-25 16:05 Lincoln Lavoie
From: Lincoln Lavoie @ 2021-02-25 16:05 UTC (permalink / raw)
  To: ci

Hello All,

Here are the minutes from today's meeting.

February 25, 2021

########################################################
Attendees

1. Lincoln Lavoie
2. Ali Alnubani
3. Brandon Lo
4. David Liu
5. Juraj Linkeš
6. Lijuan Tu
7. Michael Santana
8. Owen Hilyard

########################################################
Agenda

1. Performance Testing Metrics
2. CI Status
3. Bugzilla Status
4. Test Development
5. Any other business

########################################################
Minutes

========================================================
Performance Testing Metrics

* BUG-640 (https://bugs.dpdk.org/show_bug.cgi?id=640)
* Should performance testing pass / fail requirements be percentage based?
* What is an unacceptable percentage change, i.e. what is a failure?
* Run-to-run variations depend not only on the DUT, but also on the test
environment, e.g. using TRex compared to a dedicated hardware traffic
generator.
* One option is to move to a larger percentage, and tighten this up over
time, i.e. start with 5%, move to 4%, etc.
* Should DTS be changed to record / average multiple measurements from
TRex?  This might provide better statistics without significantly
increasing testing time.
* Need to ensure the infrastructure is configured to minimize run-to-run
variations, e.g. pinning the CPU clock frequency / turbo boost frequency,
etc.
* Group agreed to move forward with proposing changes to DTS to:
   * Change the pass / fail determination to be percentage based, with a
default value of 10%, where the value can be adjusted by the configuration.
   * Investigate making multiple traffic measurements (or reports from
TRex) during the test run to collect better statistics, being careful not
to increase the total run time excessively.
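
The proposed percentage-based verdict could be sketched roughly as below.
This is a hypothetical illustration, not actual DTS code: the function and
parameter names are invented, and the 10% default mirrors the proposed
configurable tolerance.

```python
# Hypothetical sketch of a percentage-based pass/fail check for a
# DTS-style performance test; names and defaults are illustrative.
from statistics import mean


def performance_verdict(samples_mpps, baseline_mpps, tolerance_pct=10.0):
    """Average several traffic-generator samples and compare the result
    against an agreed baseline, failing only when the drop exceeds the
    configured percentage tolerance."""
    measured = mean(samples_mpps)
    delta_pct = (measured - baseline_mpps) / baseline_mpps * 100.0
    # A gain, or a loss within tolerance, counts as a pass.
    passed = delta_pct >= -tolerance_pct
    return passed, delta_pct


ok, delta = performance_verdict([23.8, 24.1, 23.9], baseline_mpps=25.0)
print(ok, round(delta, 2))  # -> True -4.27 (within the 10% tolerance)
```

Averaging several samples before comparing, as proposed above, damps
run-to-run noise from the traffic generator without repeating the whole
test.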

========================================================
CI Status

--------------------------------------------------------
UNH-IOL Community CI Lab

* There will be some downtime this afternoon (East Coast USA) and tomorrow
on the bare-metal test systems (performance and functional testing).  This
will likely be followed by a similar downtime next week to finalize the
rollout of the new Jenkins pipelines.
* Environment subscriptions: users no longer have to subscribe to all
environments in a group.
* More statistics are available for the lab, detailing running, passing,
and failing tests over time: https://lab.dpdk.org/results/dashboard/stats/

--------------------------------------------------------
Intel Lab

* Finished the updates to the CI so that patch testing continues even if
parts (e.g. one OS) fail; the remaining parts proceed and report overall
results to Patchwork.

--------------------------------------------------------
Travis CI / Github Actions

* No updates, Aaron couldn’t make this meeting.

========================================================
Bugzilla Status

*
https://bugs.dpdk.org/buglist.cgi?bug_status=UNCONFIRMED&bug_status=CONFIRMED&bug_status=IN_PROGRESS&columnlist=product%2Ccomponent%2Cpriority%2Cbug_status%2Cassigned_to%2Cshort_desc%2Cchangeddate&component=job%20scripts&component=UNH%20infra&component=Intel%20Lab&component=Travis%20CI&list_id=3419&order=priority%2Cchangeddate%20DESC&product=lab&query_format=advanced&resolution=---

========================================================
Test Development

The following has been developed with input from the tech board and DPDK
leadership for 2021 priorities and goals.  These goals should shape the
primary focus of lab efforts for the remainder of 2021.

1. Objective 1: UNH-IOL will host the DPDK Performance/Regression Test Lab
and provide the test infrastructure to execute, record and report the tests
developed and defined by the DPDK Project.
   1. Maintain continuous performance and function testing systems,
reporting results into the Patchwork system. Tests are defined as the
following:
      1. Compile Testing: Complete DPDK build / compile run, monitoring for
errors / warnings and ABI test.
      2. Unit Testing: Hardware and infrastructure independent tests run
through the make / compile tool chains
      3. Functional Tests: Hardware and infrastructure dependent tests,
focused on verification of features or functionality of DPDK, reported per
hardware system
      4. Performance Tests: Hardware and infrastructure dependent tests
focused on throughput, reported per hardware system
      5. Tests will be run per-patch as patches become available
      6. Tests will be run weekly against the official 'main' and 'next-'
branches. The tech board will provide a specific list of these branches to
test.
      7. Tests will be run weekly against supported LTS branches (e.g.
20.11 and 19.11 stable).
   2. Maintain graphical results report/dashboard. The graphic results
should report the following items:
      1. Per-NIC performance trend over time graph, based on periodic (not
per-patch) testing of the main DPDK branch. Note, the reported data shall
be relative (i.e. performance lost or gained relative to an agreed
baseline).
      2. Number of tests run within the lab per time (day / week),
including test outcomes (i.e. pass, fail, etc.).
   3. Report/display periodic branch tests in the dashboard by the end of Q2
   4. Continue to support DPDK participants in deploying new hardware into
the test lab.

2. Objective 2: Increase involvement and visibility of community lab to
community
   1. Host bi-weekly meeting of lab stakeholders with Techboard
participation
   2. Appoint Technical and Govboard liaisons for lab oversight
   3. Publish the meeting minutes from the lab meeting to CI and dev
mailing list
   4. Publish quarterly lab update on blog and the DPDK announce email list
   5. Document every job as part of the dashboard or provide links to
appropriate documentation (i.e. into a git repo)
   6. Publish the scripts and make them easy to consume for other labs
      1. Utilize repositories hosted by DPDK for DPDK-CI (common work),
UNH-jenkins (lab jobs and runners), Dashboard (ui code)
      2. Aim to have scripts generalized and available in DPDK-CI
repository for work items that are common across labs and different testing
infrastructure.

3. Objective 3: Maintain existing tests (limit regressions) as new tests
are developed
   1. Expedite/prioritize fixes for any regressions introduced by new test
development that break existing DTS tests running in the lab.
   2. Enable a monitoring system to raise an alert to the community for
repeating failures (more than 5) of the same test, and automatically notify
the CI technical lead for appropriate follow-up or tracking, by the end of
Q3.
      1. The CI technical lead, and other members of the lab, should have
some ability to pause testing or disable tests in the case that
infrastructure has failed.
   3. Maintain compile and unit testing on the following operating systems:
FreeBSD latest, Windows Server latest, Redhat 8 & 7, Ubuntu latest LTS and
LTS-1, SUSE latest, CentOS Stream
   4. Break out the iol-testing test report into some smaller reports:
      1. Per-OS would be best (Linux, BSD, and Windows) by end of Q3.
      2. This will also need to account for architecture (better support
of x86_64 and aarch64).
   5. Review and generate a report on execution time and pipeline flow of
the various tests running in the community lab.
      1. Techboard will review and provide specific recommendations for
testing time targets, such as a goal to deliver x results within y hours of
a patch submission. Note, some backlog will always occur when multiple
simultaneous patch submissions occur.
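
The repeated-failure alert in Objective 3 could be sketched as follows.
This is an assumption-laden illustration, not the lab's actual monitoring
code: the threshold constant and the notify() hook are invented for the
example.

```python
# Illustrative sketch of the repeated-failure alert from Objective 3;
# the threshold and notification hook are assumptions, not real lab code.
from collections import defaultdict

FAILURE_THRESHOLD = 5  # alert once a test fails more than 5 times in a row


def process_results(results, notify):
    """Track consecutive failures per test name and invoke notify()
    (e.g. mail the CI technical lead) once per streak when a test
    keeps failing past the threshold."""
    streaks = defaultdict(int)
    for test_name, passed in results:
        if passed:
            streaks[test_name] = 0  # a pass resets the streak
            continue
        streaks[test_name] += 1
        if streaks[test_name] == FAILURE_THRESHOLD + 1:
            notify(test_name, streaks[test_name])
    return streaks
```

A pass resets the streak, so a transient infrastructure hiccup does not
page anyone; only a test that fails repeatedly triggers the notification.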

4. Objective 4: Test Coverage: add more checks for static analysis
(Coverity), ABI, and compilation
   1. Compilation:
      1. Use lcov to generate a report on testing code coverage (Q2)
      2. Utilize ubsan and asan for early bug detection during test
execution (Q3)
   2. Add ABI check tasks on all OS distributions, except Windows (Q1)
   3. Add spell checker to patch tests for documentation (needs input from
tech board to define official DPDK documentation language requirement) by
end of Q2
   4. Investigate alternatives to Coverity and make a recommendation to the
tech board by end of Q1.
      1. List: Cppcheck, SonarQube, Clang Static Analyzer, Flawfinder,
Splint, Infer
      2. Add 3 more static analyzers, as directed by the tech board, by the
end of Q4.

5. Objective 5: Test Coverage: enable the unit tests (maintain functional
tests & enable micro-benchmarks)
   1. Work with community to enable the other included unit test suites
(perf, debug, etc) (Q1)
   2. Use compilation profilers (where available, such as gcov) and report
performance information (Q2)

========================================================
Any other business

* Next Meeting: March 11, 2021

-- 
*Lincoln Lavoie*
Principal Engineer, Broadband Technologies
21 Madbury Rd., Ste. 100, Durham, NH 03824
lylavoie@iol.unh.edu
https://www.iol.unh.edu
+1-603-674-2755 (m)