DPDK CI discussions
* Email Based Re-Testing Framework
@ 2023-06-06 16:56 Patrick Robb
  2023-06-06 17:53 ` Ferruh Yigit
                   ` (2 more replies)
  0 siblings, 3 replies; 13+ messages in thread
From: Patrick Robb @ 2023-06-06 16:56 UTC (permalink / raw)
  To: ci; +Cc: Tu, Lijuan, Aaron Conole, zhoumin, Michael Santana, Lincoln Lavoie


Hello all,

I'd like to revive the conversation about a request from the community for
an email-based re-testing framework. The idea is that, using one
standardized format, DPDK developers could email the test-report mailing
list, requesting a rerun on their patch series for "X" set of tests at "Y"
lab. I think that since patchwork testing labels (e.g.
iol-broadcom-Performance, github-robot: build, loongarch-compilation) are
already visible on patch pages in patchwork, those labels are the most
reasonable ones to expect developers to use when requesting a re-test. We
probably wouldn't want to go any more general than that (say, rerunning
all CI testing for a specific patch series at a specific lab), since that
would waste a significant amount of testing capacity.

The standard email format those of us at the Community Lab have in mind is
below. Developers would request retests by emailing the test-report
mailing list with bodies like:

[RETEST UNH-IOL]
iol-abi-testing
iol-broadcom-Performance

[RETEST Intel]
intel-Functional

[RETEST Loongson]
loongarch-compilation

[RETEST GHA]
github-robot: build

From there, it would be up to the various labs to poll the test-report
mailing list archive (or use a similar method) to check for such requests,
and trigger a CI testing rerun based on the labels provided in the re-test
email. If there is interest from other labs, UNH might also be able to host
the entire set of re-test requests, allowing other labs to poll a curated
list hosted by UNH. One simple approach would be for labs to download all
emails sent to test-report and parse with regex to determine the re-test
list for their specific lab. But if anyone has better ideas for
aggregating the emails to be parsed, suggestions are welcome! If this
approach sounds reasonable to everyone, we could set a timeline by which
labs would implement the functionality needed to trigger re-tests. Or we
can simply add re-testing for each lab if and when it adds this
functionality, whichever is better. Happy to discuss at the CI meeting on
Thursday.
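To make the parsing concrete, a lab-side helper could look something like
this. It's just a sketch: the lab name and labels are the examples above,
and a real implementation would also need to identify which patch series
the request applies to:

```shell
# parse_retest LAB: read one raw mail body on stdin and print the test
# labels requested for LAB. Lab names and labels here are only the
# examples from this mail, not a fixed list.
parse_retest() {
    awk -v lab="$1" '
        /^\[RETEST / { in_section = ($0 == "[RETEST " lab "]"); next }
        /^$/         { in_section = 0 }   # a blank line ends a section
        in_section   { print }            # section body lines are labels
    '
}

# e.g.: parse_retest UNH-IOL < mail.txt
```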

-- 

Patrick Robb

Technical Service Manager

UNH InterOperability Laboratory

21 Madbury Rd, Suite 100, Durham, NH 03824

www.iol.unh.edu



* Re: Email Based Re-Testing Framework
  2023-06-06 16:56 Email Based Re-Testing Framework Patrick Robb
@ 2023-06-06 17:53 ` Ferruh Yigit
  2023-06-06 19:27   ` Patrick Robb
  2023-06-07  7:04 ` Thomas Monjalon
  2023-06-21 16:21 ` Ali Alnubani
  2 siblings, 1 reply; 13+ messages in thread
From: Ferruh Yigit @ 2023-06-06 17:53 UTC (permalink / raw)
  To: Patrick Robb, ci
  Cc: Tu, Lijuan, Aaron Conole, zhoumin, Michael Santana, Lincoln Lavoie

On 6/6/2023 5:56 PM, Patrick Robb wrote:
> Hello all,
> 
> I'd like to revive the conversation about a request from the community
> for an email based re-testing framework. The idea is that using one
> standardized format, dpdk developers could email the test-report mailing
> list, requesting a rerun on their patch series for "X" set of tests at
> "Y" lab. I think that since patchwork testing labels (ie.
> iol-broadcom-Performance, github-robot: build, loongarch-compilation)
> are already visible on patch pages on patchwork, those labels are the
> most reasonable ones to expect developers to use when requesting a
> re-test. We probably wouldn't want to get any more general than that,
> like, say, rerunning all CI testing for a specific patch series at a
> specific lab, since it would result in a significant amount of "wasted"
> testing capacity.
> 
> The standard email format those of us at the Community Lab are thinking
> of is like below. Developers would request retests by emailing the
> test-report mailing list with email bodies like:
> 
> [RETEST UNH-IOL]
> iol-abi-testing
> iol-broadcom-Performance
> 
> [RETEST Intel]
> intel-Functional
> 
> [RETEST Loongson]
> loongarch-compilation
> 
> [RETEST GHA]
> github-robot: build
> 
> From there, it would be up to the various labs to poll the test-report
> mailing list archive (or use a similar method) to check for such
> requests, and trigger a CI testing rerun based on the labels provided in
> the re-test email. If there is interest from other labs, UNH might also
> be able to host the entire set of re-test requests, allowing other labs
> to poll a curated list hosted by UNH. One simple approach would be for
> labs to download all emails sent to test-report and parse with regex to
> determine the re-test list for their specific lab. But, if anyone has
> any better ideas for aggregating the emails to be parsed, suggestions
> are welcome! If this approach sounds reasonable to everyone, we could
> determine a timeline by which labs would implement the functionality
> needed to trigger re-tests. Or, we can just add re-testing for various
> labs if/when they add this functionality - whatever is better. Happy to
> discuss at the CI meeting on Thursday.
> 

+1 to re-testing framework.


Also it can be useful to run daily sub-tree testing by request, if possible.



* Re: Email Based Re-Testing Framework
  2023-06-06 17:53 ` Ferruh Yigit
@ 2023-06-06 19:27   ` Patrick Robb
  2023-06-06 21:40     ` Ferruh Yigit
  2023-06-07 12:53     ` Aaron Conole
  0 siblings, 2 replies; 13+ messages in thread
From: Patrick Robb @ 2023-06-06 19:27 UTC (permalink / raw)
  To: Ferruh Yigit
  Cc: ci, Tu, Lijuan, Aaron Conole, zhoumin, Michael Santana, Lincoln Lavoie


>
> Also it can be useful to run daily sub-tree testing by request, if
> possible.
>

That wouldn't be too difficult. I'll make a ticket for this. Although, for
testing on the daily sub-trees, since that's UNH-IOL specific, it wouldn't
necessarily have to be done via an email-based testing request framework.
We could also just add a button to our dashboard which triggers a sub-tree
CI run. That would help narrow the scope of what the email-based retesting
framework is for. But either email or a dashboard button would work.

On Tue, Jun 6, 2023 at 1:53 PM Ferruh Yigit <ferruh.yigit@amd.com> wrote:

> On 6/6/2023 5:56 PM, Patrick Robb wrote:
> > Hello all,
> >
> > I'd like to revive the conversation about a request from the community
> > for an email based re-testing framework. The idea is that using one
> > standardized format, dpdk developers could email the test-report mailing
> > list, requesting a rerun on their patch series for "X" set of tests at
> > "Y" lab. I think that since patchwork testing labels (ie.
> > iol-broadcom-Performance, github-robot: build, loongarch-compilation)
> > are already visible on patch pages on patchwork, those labels are the
> > most reasonable ones to expect developers to use when requesting a
> > re-test. We probably wouldn't want to get any more general than that,
> > like, say, rerunning all CI testing for a specific patch series at a
> > specific lab, since it would result in a significant amount of "wasted"
> > testing capacity.
> >
> > The standard email format those of us at the Community Lab are thinking
> > of is like below. Developers would request retests by emailing the
> > test-report mailing list with email bodies like:
> >
> > [RETEST UNH-IOL]
> > iol-abi-testing
> > iol-broadcom-Performance
> >
> > [RETEST Intel]
> > intel-Functional
> >
> > [RETEST Loongson]
> > loongarch-compilation
> >
> > [RETEST GHA]
> > github-robot: build
> >
> > From there, it would be up to the various labs to poll the test-report
> > mailing list archive (or use a similar method) to check for such
> > requests, and trigger a CI testing rerun based on the labels provided in
> > the re-test email. If there is interest from other labs, UNH might also
> > be able to host the entire set of re-test requests, allowing other labs
> > to poll a curated list hosted by UNH. One simple approach would be for
> > labs to download all emails sent to test-report and parse with regex to
> > determine the re-test list for their specific lab. But, if anyone has
> > any better ideas for aggregating the emails to be parsed, suggestions
> > are welcome! If this approach sounds reasonable to everyone, we could
> > determine a timeline by which labs would implement the functionality
> > needed to trigger re-tests. Or, we can just add re-testing for various
> > labs if/when they add this functionality - whatever is better. Happy to
> > discuss at the CI meeting on Thursday.
> >
>
> +1 to re-testing framework.
>
>
> Also it can be useful to run daily sub-tree testing by request, if
> possible.
>
>

-- 

Patrick Robb

Technical Service Manager

UNH InterOperability Laboratory

21 Madbury Rd, Suite 100, Durham, NH 03824

www.iol.unh.edu



* Re: Email Based Re-Testing Framework
  2023-06-06 19:27   ` Patrick Robb
@ 2023-06-06 21:40     ` Ferruh Yigit
  2023-06-07 12:53     ` Aaron Conole
  1 sibling, 0 replies; 13+ messages in thread
From: Ferruh Yigit @ 2023-06-06 21:40 UTC (permalink / raw)
  To: Patrick Robb
  Cc: ci, Tu, Lijuan, Aaron Conole, zhoumin, Michael Santana, Lincoln Lavoie

On 6/6/2023 8:27 PM, Patrick Robb wrote:
>     Also it can be useful to run daily sub-tree testing by request, if
>     possible.
> 
> 
> That wouldn't be too difficult. I'll make a ticket for this. Although,
> for testing on the daily sub-trees, since that's UNH-IOL specific, that
> wouldn't necessarily have to be done via an email based testing request
> framework. We could also just add a button to our dashboard which
> triggers a sub-tree ci run. That would help keep narrow the scope of
> what the email based retesting framework is for. But, both email or a
> dashboard button would both work. 
> 

Thanks, agree that for sub-trees a button on dashboard is sufficient.


> On Tue, Jun 6, 2023 at 1:53 PM Ferruh Yigit <ferruh.yigit@amd.com
> <mailto:ferruh.yigit@amd.com>> wrote:
> 
>     On 6/6/2023 5:56 PM, Patrick Robb wrote:
>     > Hello all,
>     >
>     > I'd like to revive the conversation about a request from the community
>     > for an email based re-testing framework. The idea is that using one
>     > standardized format, dpdk developers could email the test-report
>     mailing
>     > list, requesting a rerun on their patch series for "X" set of tests at
>     > "Y" lab. I think that since patchwork testing labels (ie.
>     > iol-broadcom-Performance, github-robot: build, loongarch-compilation)
>     > are already visible on patch pages on patchwork, those labels are the
>     > most reasonable ones to expect developers to use when requesting a
>     > re-test. We probably wouldn't want to get any more general than that,
>     > like, say, rerunning all CI testing for a specific patch series at a
>     > specific lab, since it would result in a significant amount of
>     "wasted"
>     > testing capacity.
>     >
>     > The standard email format those of us at the Community Lab are
>     thinking
>     > of is like below. Developers would request retests by emailing the
>     > test-report mailing list with email bodies like:
>     >
>     > [RETEST UNH-IOL]
>     > iol-abi-testing
>     > iol-broadcom-Performance
>     >
>     > [RETEST Intel]
>     > intel-Functional
>     >
>     > [RETEST Loongson]
>     > loongarch-compilation
>     >
>     > [RETEST GHA]
>     > github-robot: build
>     >
>     > From there, it would be up to the various labs to poll the test-report
>     > mailing list archive (or use a similar method) to check for such
>     > requests, and trigger a CI testing rerun based on the labels
>     provided in
>     > the re-test email. If there is interest from other labs, UNH might
>     also
>     > be able to host the entire set of re-test requests, allowing other
>     labs
>     > to poll a curated list hosted by UNH. One simple approach would be for
>     > labs to download all emails sent to test-report and parse with
>     regex to
>     > determine the re-test list for their specific lab. But, if anyone has
>     > any better ideas for aggregating the emails to be parsed, suggestions
>     > are welcome! If this approach sounds reasonable to everyone, we could
>     > determine a timeline by which labs would implement the functionality
>     > needed to trigger re-tests. Or, we can just add re-testing for various
>     > labs if/when they add this functionality - whatever is better.
>     Happy to
>     > discuss at the CI meeting on Thursday.
>     >
> 
>     +1 to re-testing framework.
> 
> 
>     Also it can be useful to run daily sub-tree testing by request, if
>     possible.
> 
> 
> 
> -- 
> 
> Patrick Robb
> 
> Technical Service Manager
> 
> UNH InterOperability Laboratory
> 
> 21 Madbury Rd, Suite 100, Durham, NH 03824
> 
> www.iol.unh.edu <http://www.iol.unh.edu/>
> 
> 



* Re: Email Based Re-Testing Framework
  2023-06-06 16:56 Email Based Re-Testing Framework Patrick Robb
  2023-06-06 17:53 ` Ferruh Yigit
@ 2023-06-07  7:04 ` Thomas Monjalon
  2023-06-21 16:21 ` Ali Alnubani
  2 siblings, 0 replies; 13+ messages in thread
From: Thomas Monjalon @ 2023-06-07  7:04 UTC (permalink / raw)
  To: Patrick Robb
  Cc: ci, Tu, Lijuan, Aaron Conole, zhoumin, Michael Santana, Lincoln Lavoie

06/06/2023 18:56, Patrick Robb:
> Hello all,
> 
> I'd like to revive the conversation about a request from the community for
> an email based re-testing framework. The idea is that using one
> standardized format, dpdk developers could email the test-report mailing
> list, requesting a rerun on their patch series for "X" set of tests at "Y"
> lab. I think that since patchwork testing labels (ie.
> iol-broadcom-Performance, github-robot: build, loongarch-compilation) are
> already visible on patch pages on patchwork, those labels are the most
> reasonable ones to expect developers to use when requesting a re-test. We
> probably wouldn't want to get any more general than that, like, say,
> rerunning all CI testing for a specific patch series at a specific lab,
> since it would result in a significant amount of "wasted" testing capacity.
> 
> The standard email format those of us at the Community Lab are thinking of
> is like below. Developers would request retests by emailing the test-report
> mailing list with email bodies like:
> 
> [RETEST UNH-IOL]
> iol-abi-testing
> iol-broadcom-Performance

What would be the purpose of [RETEST UNH-IOL]?

We need to specify the patchwork identifier of the patch.

We could make a script similar to the checkpatch run on dpdk.org:
https://git.dpdk.org/tools/dpdk-ci/tree/tests/checkpatch.sh
The easiest way to run it is to make the script as the receiver of the mail.
If the lab can receive the mails from the mailing list,
then just need to filter the retest requests for its own lab.
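For example, the receiver could be a small filter like this sketch (the
lab tag follows the proposal above, and the printed message stands in for
the lab's real CI trigger):

```shell
# handle_mail LAB: read one raw mail on stdin and act only when it
# contains a retest request for LAB. The tag format is the one proposed
# above; the echo is a placeholder for invoking the lab's CI trigger.
handle_mail() {
    if grep -q "^\[RETEST $1\]" -; then
        echo "retest requested for $1"
    fi
}
```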




* Re: Email Based Re-Testing Framework
  2023-06-06 19:27   ` Patrick Robb
  2023-06-06 21:40     ` Ferruh Yigit
@ 2023-06-07 12:53     ` Aaron Conole
  2023-06-08  1:14       ` Patrick Robb
  2023-06-08  1:47       ` Patrick Robb
  1 sibling, 2 replies; 13+ messages in thread
From: Aaron Conole @ 2023-06-07 12:53 UTC (permalink / raw)
  To: Patrick Robb
  Cc: Ferruh Yigit, ci, Tu, Lijuan, zhoumin, Michael Santana, Lincoln Lavoie

Patrick Robb <probb@iol.unh.edu> writes:

>  Also it can be useful to run daily sub-tree testing by request, if possible.
>
> That wouldn't be too difficult. I'll make a ticket for this. Although, for testing on the daily sub-trees, since that's
> UNH-IOL specific, that wouldn't necessarily have to be done via an email based testing request framework. We
> could also just add a button to our dashboard which triggers a sub-tree ci run. That would help keep narrow
> the scope of what the email based retesting framework is for. But, both email or a dashboard button would
> both work. 

We had discussed this long ago - including agreeing on a format, IIRC.

See the thread starting here:
  https://mails.dpdk.org/archives/ci/2021-May/001189.html

The idea was to have a line like:

Recheck-request: <test names>

where <test names> was the tests in the check labels.  In fact, what
started the discussion was a patch for the pw-ci scripts that
implemented part of it.

I don't see how to make your proposal as easily parsed.
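For comparison, the trailer format can be extracted with a couple of
standard tools. A sketch (comma-separated test names are an assumption
here; that detail was left open in the old thread):

```shell
# get_recheck_tests: read a mail body on stdin and print one requested
# test name per line. Assumes names are comma-separated after the
# "Recheck-request:" trailer; the exact separator is still open.
get_recheck_tests() {
    sed -n 's/^Recheck-request: *//p' | tr ',' '\n' | sed 's/^ *//; s/ *$//'
}
```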

WDYT?  Can you re-read that thread and come up with comments?

> On Tue, Jun 6, 2023 at 1:53 PM Ferruh Yigit <ferruh.yigit@amd.com> wrote:
>
>  On 6/6/2023 5:56 PM, Patrick Robb wrote:
>  > Hello all,
>  > 
>  > I'd like to revive the conversation about a request from the community
>  > for an email based re-testing framework. The idea is that using one
>  > standardized format, dpdk developers could email the test-report mailing
>  > list, requesting a rerun on their patch series for "X" set of tests at
>  > "Y" lab. I think that since patchwork testing labels (ie.
>  > iol-broadcom-Performance, github-robot: build, loongarch-compilation)
>  > are already visible on patch pages on patchwork, those labels are the
>  > most reasonable ones to expect developers to use when requesting a
>  > re-test. We probably wouldn't want to get any more general than that,
>  > like, say, rerunning all CI testing for a specific patch series at a
>  > specific lab, since it would result in a significant amount of "wasted"
>  > testing capacity.
>  > 
>  > The standard email format those of us at the Community Lab are thinking
>  > of is like below. Developers would request retests by emailing the
>  > test-report mailing list with email bodies like:
>  > 
>  > [RETEST UNH-IOL]
>  > iol-abi-testing
>  > iol-broadcom-Performance
>  > 
>  > [RETEST Intel]
>  > intel-Functional
>  > 
>  > [RETEST Loongson]
>  > loongarch-compilation
>  > 
>  > [RETEST GHA]
>  > github-robot: build
>  > 
>  > From there, it would be up to the various labs to poll the test-report
>  > mailing list archive (or use a similar method) to check for such
>  > requests, and trigger a CI testing rerun based on the labels provided in
>  > the re-test email. If there is interest from other labs, UNH might also
>  > be able to host the entire set of re-test requests, allowing other labs
>  > to poll a curated list hosted by UNH. One simple approach would be for
>  > labs to download all emails sent to test-report and parse with regex to
>  > determine the re-test list for their specific lab. But, if anyone has
>  > any better ideas for aggregating the emails to be parsed, suggestions
>  > are welcome! If this approach sounds reasonable to everyone, we could
>  > determine a timeline by which labs would implement the functionality
>  > needed to trigger re-tests. Or, we can just add re-testing for various
>  > labs if/when they add this functionality - whatever is better. Happy to
>  > discuss at the CI meeting on Thursday.
>  > 
>
>  +1 to re-testing framework.
>
>  Also it can be useful to run daily sub-tree testing by request, if possible.



* Re: Email Based Re-Testing Framework
  2023-06-07 12:53     ` Aaron Conole
@ 2023-06-08  1:14       ` Patrick Robb
  2023-06-08  1:47       ` Patrick Robb
  1 sibling, 0 replies; 13+ messages in thread
From: Patrick Robb @ 2023-06-08  1:14 UTC (permalink / raw)
  To: Aaron Conole
  Cc: Ferruh Yigit, ci, Tu, Lijuan, zhoumin, Michael Santana, Lincoln Lavoie


>
> What would be the purpose of [RETEST UNH-IOL]?
>
Agreed, this is redundant, provided we use the labels/contexts from
patchwork. That seems to be the idea behind Aaron's proposed format, and I
think we should adopt it, since it previously reached some consensus and
is easy to parse.


> We need to specify the patchwork identifier of the patch.
>
> We could make a script similar to the checkpatch run on dpdk.org:
> https://git.dpdk.org/tools/dpdk-ci/tree/tests/checkpatch.sh
> The easiest way to run it is to make the script as the receiver of the
> mail.
> If the lab can receive the mails from the mailing list,
> then just need to filter the retest requests for its own lab.

Yes, I think this is reasonable. I don't think this process is likely to
change much, and if we can provide a script to live on the dpdk-ci repo
which checks for retest requests, we can reasonably expect the labs to
separately set up an environment to handle running that script and
triggering their re-tests. Thanks Thomas.



On Wed, Jun 7, 2023 at 8:53 AM Aaron Conole <aconole@redhat.com> wrote:

> Patrick Robb <probb@iol.unh.edu> writes:
>
> >  Also it can be useful to run daily sub-tree testing by request, if
> possible.
> >
> > That wouldn't be too difficult. I'll make a ticket for this. Although,
> for testing on the daily sub-trees, since that's
> > UNH-IOL specific, that wouldn't necessarily have to be done via an email
> based testing request framework. We
> > could also just add a button to our dashboard which triggers a sub-tree
> ci run. That would help keep narrow
> > the scope of what the email based retesting framework is for. But, both
> email or a dashboard button would
> > both work.
>
> We had discussed this long ago - including agreeing on a format, IIRC.
>
> See the thread starting here:
>   https://mails.dpdk.org/archives/ci/2021-May/001189.html
>
> The idea was to have a line like:
>
> Recheck-request: <test names>
>
> where <test names> was the tests in the check labels.  In fact, what
> started the discussion was a patch for the pw-ci scripts that
> implemented part of it.
>
> I don't see how to make your proposal as easily parsed.
>
> WDYT?  Can you re-read that thread and come up with comments?
>
> > On Tue, Jun 6, 2023 at 1:53 PM Ferruh Yigit <ferruh.yigit@amd.com>
> wrote:
> >
> >  On 6/6/2023 5:56 PM, Patrick Robb wrote:
> >  > Hello all,
> >  >
> >  > I'd like to revive the conversation about a request from the community
> >  > for an email based re-testing framework. The idea is that using one
> >  > standardized format, dpdk developers could email the test-report
> mailing
> >  > list, requesting a rerun on their patch series for "X" set of tests at
> >  > "Y" lab. I think that since patchwork testing labels (ie.
> >  > iol-broadcom-Performance, github-robot: build, loongarch-compilation)
> >  > are already visible on patch pages on patchwork, those labels are the
> >  > most reasonable ones to expect developers to use when requesting a
> >  > re-test. We probably wouldn't want to get any more general than that,
> >  > like, say, rerunning all CI testing for a specific patch series at a
> >  > specific lab, since it would result in a significant amount of
> "wasted"
> >  > testing capacity.
> >  >
> >  > The standard email format those of us at the Community Lab are
> thinking
> >  > of is like below. Developers would request retests by emailing the
> >  > test-report mailing list with email bodies like:
> >  >
> >  > [RETEST UNH-IOL]
> >  > iol-abi-testing
> >  > iol-broadcom-Performance
> >  >
> >  > [RETEST Intel]
> >  > intel-Functional
> >  >
> >  > [RETEST Loongson]
> >  > loongarch-compilation
> >  >
> >  > [RETEST GHA]
> >  > github-robot: build
> >  >
> >  > From there, it would be up to the various labs to poll the test-report
> >  > mailing list archive (or use a similar method) to check for such
> >  > requests, and trigger a CI testing rerun based on the labels provided
> in
> >  > the re-test email. If there is interest from other labs, UNH might
> also
> >  > be able to host the entire set of re-test requests, allowing other
> labs
> >  > to poll a curated list hosted by UNH. One simple approach would be for
> >  > labs to download all emails sent to test-report and parse with regex
> to
> >  > determine the re-test list for their specific lab. But, if anyone has
> >  > any better ideas for aggregating the emails to be parsed, suggestions
> >  > are welcome! If this approach sounds reasonable to everyone, we could
> >  > determine a timeline by which labs would implement the functionality
> >  > needed to trigger re-tests. Or, we can just add re-testing for various
> >  > labs if/when they add this functionality - whatever is better. Happy
> to
> >  > discuss at the CI meeting on Thursday.
> >  >
> >
> >  +1 to re-testing framework.
> >
> >  Also it can be useful to run daily sub-tree testing by request, if
> possible.
>
>

-- 

Patrick Robb

Technical Service Manager

UNH InterOperability Laboratory

21 Madbury Rd, Suite 100, Durham, NH 03824

www.iol.unh.edu



* Re: Email Based Re-Testing Framework
  2023-06-07 12:53     ` Aaron Conole
  2023-06-08  1:14       ` Patrick Robb
@ 2023-06-08  1:47       ` Patrick Robb
  2023-06-12 15:01         ` Aaron Conole
  1 sibling, 1 reply; 13+ messages in thread
From: Patrick Robb @ 2023-06-08  1:47 UTC (permalink / raw)
  To: Aaron Conole
  Cc: Ferruh Yigit, ci, Tu, Lijuan, zhoumin, Michael Santana, Lincoln Lavoie


On Wed, Jun 7, 2023 at 8:53 AM Aaron Conole <aconole@redhat.com> wrote:

> Patrick Robb <probb@iol.unh.edu> writes:
>
> >  Also it can be useful to run daily sub-tree testing by request, if
> possible.
> >
> > That wouldn't be too difficult. I'll make a ticket for this. Although,
> for testing on the daily sub-trees, since that's
> > UNH-IOL specific, that wouldn't necessarily have to be done via an email
> based testing request framework. We
> > could also just add a button to our dashboard which triggers a sub-tree
> ci run. That would help keep narrow
> > the scope of what the email based retesting framework is for. But, both
> email or a dashboard button would
> > both work.
>
> We had discussed this long ago - including agreeing on a format, IIRC.
>
> See the thread starting here:
>   https://mails.dpdk.org/archives/ci/2021-May/001189.html
>
> The idea was to have a line like:
>
> Recheck-request: <test names>
>
I like this simpler format, which is easier to parse. As Thomas pointed
out, specifying labs does not really add information if we are already
going to request by label/context, which already specifies the lab.

>
> where <test names> was the tests in the check labels.  In fact, what
> started the discussion was a patch for the pw-ci scripts that
> implemented part of it.
>
> I don't see how to make your proposal as easily parsed.
>
> WDYT?  Can you re-read that thread and come up with comments?

 Will do. And thanks, this thread is very informative.

> It is important to use the 'msgid' field to distinguish recheck
> requests.  Otherwise, we will continuously reparse the same
> recheck request and loop forever.  Additionally, we've discussed using a
> counter to limit the recheck requests to a single 'recheck' per test
> name.
>
We can track message IDs to avoid considering a single retest request
twice. Perhaps we can accomplish the same thing by tracking retested
patch series IDs and their total number of requested retests (which could
be capped at 1 retest per patch series).
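For instance, the de-duplication could be as simple as a per-lab file of
seen message IDs (a sketch; the file name is arbitrary):

```shell
# Track processed recheck requests by msgid so a request is never acted
# on twice. SEEN_FILE is an arbitrary name for this sketch.
SEEN_FILE="${SEEN_FILE:-seen-msgids.txt}"

already_seen() {           # already_seen MSGID -> exit 0 if processed
    grep -qxF "$1" "$SEEN_FILE" 2>/dev/null
}

mark_seen() {              # mark_seen MSGID
    printf '%s\n' "$1" >> "$SEEN_FILE"
}
```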

> +function check_series_needs_retest() {
> +    local pw_instance="$1"
> +
> +    series_get_active_branches "$pw_instance" | while IFS=\| read -r series_id project url repo branchname; do
> +        local patch_comments_url=$(curl -s "$userpw" "$url" | jq -rc '.comments')
> +        if [ "Xnull" != "X$patch_comments_url" ]; then
> +            local comments_json=$(curl -s "$userpw" "$patch_comments_url")
> +            local seq_end=$(echo "$comments_json" | jq -rc 'length')
> +            if [ "$seq_end" -a $seq_end -gt 0 ]; then
> +                seq_end=$((seq_end-1))
> +                for comment_id in $(seq 0 $seq_end); do
> +                    local recheck_requested=$(echo "$comments_json" | jq -rc ".[$comment_id].content" | grep "^Recheck-request: ")
> +                    if [ "X$recheck_requested" != "X" ]; then
> +                        local msgid=$(echo "$comments_json" | jq -rc ".[$comment_id].msgid")
> +                        run_recheck "$pw_instance" "$series_id" "$project" "$url" "$repo" "$branchname" "$recheck_requested" "$msgid"
> +                    fi
> +                done
> +            fi
> +        fi
> +    done
> +}
This is already a superior approach to what I had in mind for acquiring
comments. Unless you're opposed, I think at the Community Lab we can
experiment from this starting point to verify the process is sound, but I
don't see any problems here.

> I think that if we're able to specify multiple contexts, then there's not
> really any reason to run multiple rechecks per patchset.
>
Agreed.

> There was also an ask on filtering requesters (only maintainers and
> patch authors can ask for a recheck).

Using the maintainers file as a single source of truth is convenient and
stays accurate as the list of maintainers changes. But I also think retest
request permission should be extended to the submitter. They may want to
initiate a re-run without engaging a maintainer, and it's not likely to
cause a big increase in test load for us or other labs, so there's no harm
there.

> No, an explicit list is actually better.
> When a new check is added, for someone looking at the mails (maybe 2/3
> weeks later), and reading just "all", he would have to know what
> checks were available at the time.

Contexts/labels rarely change, so I don't think this concern is too
serious. But if people don't mind comma-separating an entire list of
contexts, that's fine.

Thanks,
Patrick

-- 

Patrick Robb

Technical Service Manager

UNH InterOperability Laboratory

21 Madbury Rd, Suite 100, Durham, NH 03824

www.iol.unh.edu



* Re: Email Based Re-Testing Framework
  2023-06-08  1:47       ` Patrick Robb
@ 2023-06-12 15:01         ` Aaron Conole
  2023-06-13 13:28           ` Patrick Robb
  0 siblings, 1 reply; 13+ messages in thread
From: Aaron Conole @ 2023-06-12 15:01 UTC (permalink / raw)
  To: Patrick Robb
  Cc: Ferruh Yigit, ci, Tu, Lijuan, zhoumin, Michael Santana, Lincoln Lavoie

Patrick Robb <probb@iol.unh.edu> writes:

> On Wed, Jun 7, 2023 at 8:53 AM Aaron Conole <aconole@redhat.com> wrote:
>
>  Patrick Robb <probb@iol.unh.edu> writes:
>
>  >  Also it can be useful to run daily sub-tree testing by request, if possible.
>  >
>  > That wouldn't be too difficult. I'll make a ticket for this. Although, for testing on the daily sub-trees,
>  since that's
>  > UNH-IOL specific, that wouldn't necessarily have to be done via an email based testing request
>  framework. We
>  > could also just add a button to our dashboard which triggers a sub-tree ci run. That would help keep
>  narrow
>  > the scope of what the email based retesting framework is for. But, both email or a dashboard button
>  would
>  > both work. 
>
>  We had discussed this long ago - including agreeing on a format, IIRC.
>
>  See the thread starting here:
>    https://mails.dpdk.org/archives/ci/2021-May/001189.html
>
>  The idea was to have a line like:
>
>  Recheck-request: <test names>
>
> I like using this simpler format, which is easier to parse. As Thomas pointed out, specifying labs does not really
> provide extra information if we are already going to request by label/context, which already specifies the
> lab.  

One thing we haven't discussed or determined is if we should have the
ability to re-apply the series or simply to rerun the patches based on
the original sha sum.  There are two cases I can think of:

1. Ephemeral lab/network failures, or flaky unit tests that sometimes
   fail.  In this case, we probably just want to re-run the tree as-is.

2. Failing tree before apply.  In this case, we have applied the series
   to a tree, but the tree isn't in a good state and will fail
   regardless of the series being applied.

WDYT?  Does case (2) warrant any consideration as a possible reason to
retest?  If so, what is the right way of handling that situation?

>  where <test names> was the tests in the check labels.  In fact, what
>  started the discussion was a patch for the pw-ci scripts that
>  implemented part of it.
>
>  I don't see how to make your proposal as easily parsed.
>
>  WDYT?  Can you re-read that thread and come up with comments?
>
>  Will do. And thanks, this thread is very informative. 
>
>  It is important to use the 'msgid' field to distinguish recheck
>  requests.  Otherwise, we will continuously reparse the same
>  recheck request and loop forever.  Additionally, we've discussed using a
>  counter to limit the recheck requests to a single 'recheck' per test
>  name.
>
> We can track message ids to avoid considering a single retest request twice. Perhaps we can accomplish the
> same thing by tracking retested patchseries ids and their total number of requested retests (which could be 1
> retest per patchseries). 
>
>  +function check_series_needs_retest() {
> +    local pw_instance="$1"
> +
> +    series_get_active_branches "$pw_instance" | while IFS=\| read -r series_id project url repo branchname; do
> +        local patch_comments_url=$(curl -s "$userpw" "$url" | jq -rc '.comments')
> +        if [ "Xnull" != "X$patch_comments_url" ]; then
> +            local comments_json=$(curl -s "$userpw" "$patch_comments_url")
> +            local seq_end=$(echo "$comments_json" | jq -rc 'length')
> +            if [ "$seq_end" -a $seq_end -gt 0 ]; then
> +                seq_end=$((seq_end-1))
> +                for comment_id in $(seq 0 $seq_end); do
> +                    local recheck_requested=$(echo "$comments_json" | jq -rc ".[$comment_id].content" | grep "^Recheck-request: ")
> +                    if [ "X$recheck_requested" != "X" ]; then
> +                        local msgid=$(echo "$comments_json" | jq -rc ".[$comment_id].msgid")
> +                        run_recheck "$pw_instance" "$series_id" "$project" "$url" "$repo" "$branchname" "$recheck_requested" "$msgid"
> +                    fi
> +                done
> +            fi
> +        fi
> +    done
> +}
> This is already a superior approach to what I had in mind for acquiring comments. Unless you're opposed, I
> think at the community lab we can experiment based on this starting point to verify the process is sound, but I
> don't see any problems here. 
>
>  I think that if we're able to specify multiple contexts, then there's not really any reason to run multiple
>  rechecks per patchset.
>
> Agreed.
>
>  There was also an ask on filtering requesters (only maintainers and
>
>  patch authors can ask for a recheck). 
>
> If we can use the maintainers file as a single source of truth, that is convenient and stays accurate as the list of
> maintainers changes. But I also think retesting request permission should be extended to the submitter.
> They may want to initiate a re-run without engaging a maintainer. It's not likely to cause a big increase in test
> load for us or other labs, so there's no harm there. 
>
>  No, an explicit list is actually better.
>
>  When a new check is added, for someone looking at the mails (maybe 2/3
>
>  weeks later), and reading just "all", he would have to know what
>
>  checks were available at the time. 
>
> Context/Labels rarely change, so I don't think this concern is too serious. But, if people don't mind
> comma-separating an entire list of contexts, that's fine. 
>
> Thanks,
> Patrick


^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: Email Based Re-Testing Framework
  2023-06-12 15:01         ` Aaron Conole
@ 2023-06-13 13:28           ` Patrick Robb
  2023-06-20 14:01             ` Aaron Conole
  0 siblings, 1 reply; 13+ messages in thread
From: Patrick Robb @ 2023-06-13 13:28 UTC (permalink / raw)
  To: Aaron Conole
  Cc: Ferruh Yigit, ci, Tu, Lijuan, zhoumin, Michael Santana, Lincoln Lavoie

[-- Attachment #1: Type: text/plain, Size: 6967 bytes --]

>
> 1. Ephemeral lab/network failures, or flaky unit tests that sometimes
>    fail.  In this case, we probably just want to re-run the tree as-is.
>
> 2. Failing tree before apply.  In this case, we have applied the series
>    to a tree, but the tree isn't in a good state and will fail
>    regardless of the series being applied.
>

I had originally thought of this only as rerunning the tree as-is, though
if the community wants the 2nd option to be available, that works too. It
does increase the scope of the task and the complexity of the request
format submitters have to understand. Personally, I don't think there is
enough additional value to justify doing more than rerunning as-is. Plus,
if we do end up with a bad tree, submitters can resubmit their patches
when the tree is in a good state again, right? Alternatively, lab managers
may be aware of the situation and able to order a rerun covering the last
x days (the duration of the bad tree) on the most up-to-date tree.

On Mon, Jun 12, 2023 at 11:01 AM Aaron Conole <aconole@redhat.com> wrote:

> Patrick Robb <probb@iol.unh.edu> writes:
>
> > On Wed, Jun 7, 2023 at 8:53 AM Aaron Conole <aconole@redhat.com> wrote:
> >
> >  Patrick Robb <probb@iol.unh.edu> writes:
> >
> >  >  Also it can be useful to run daily sub-tree testing by request, if
> possible.
> >  >
> >  > That wouldn't be too difficult. I'll make a ticket for this.
> Although, for testing on the daily sub-trees,
> >  since that's
> >  > UNH-IOL specific, that wouldn't necessarily have to be done via an
> email based testing request
> >  framework. We
> >  > could also just add a button to our dashboard which triggers a
> sub-tree ci run. That would help keep
> >  narrow
> >  > the scope of what the email based retesting framework is for. But,
> both email or a dashboard button
> >  would
> >  > both work.
> >
> >  We had discussed this long ago - including agreeing on a format, IIRC.
> >
> >  See the thread starting here:
> >    https://mails.dpdk.org/archives/ci/2021-May/001189.html
> >
> >  The idea was to have a line like:
> >
> >  Recheck-request: <test names>
> >
> > I like using this simpler format which is easier to parse. As Thomas
> pointed out, specifying labs does not really
> > provide extra information if we are already going to request by
> label/context, which is already specifies the
> > lab.
>
> One thing we haven't discussed or determined is if we should have the
> ability to re-apply the series or simply to rerun the patches based on
> the original sha sum.  There are two cases I can think of:
>
> 1. Ephemeral lab/network failures, or flaky unit tests that sometimes
>    fail.  In this case, we probably just want to re-run the tree as-is.
>
> 2. Failing tree before apply.  In this case, we have applied the series
>    to a tree, but the tree isn't in a good state and will fail
>    regardless of the series being applied.
>
> WDYT?  Does (2) case warrant any consideration as a possible reason to
> retest?  If so, what is the right way of handling that situation?
>
> >  where <test names> was the tests in the check labels.  In fact, what
> >  started the discussion was a patch for the pw-ci scripts that
> >  implemented part of it.
> >
> >  I don't see how to make your proposal as easily parsed.
> >
> >  WDYT?  Can you re-read that thread and come up with comments?
> >
> >  Will do. And thanks, this thread is very informative.
> >
> >  It is important to use the 'msgid' field to distinguish recheck
> >  requests.  Otherwise, we will continuously reparse the same
> >  recheck request and loop forever.  Additionally, we've discussed using a
> >  counter to limit the recheck requests to a single 'recheck' per test
> >  name.
> >
> > We can track message ids to avoid considering a single retest request
> twice. Perhaps we can accomplish the
> > same thing by tracking retested patchseries ids and their total number
> of requested retests (which could be 1
> > retest per patchseries).
> >
> >   +function check_series_needs_retest() {
> >
> > +    local pw_instance="$1"
> > +
> > +    series_get_active_branches "$pw_instance" | while IFS=\| read -r
> series_id project url repo branchname; do
> > +        local patch_comments_url=$(curl -s "$userpw" "$url" | jq -rc
> '.comments')
> > +        if [ "Xnull" != "X$patch_comments_url" ]; then
> > +            local comments_json=$(curl -s "$userpw"
> "$patch_comments_url")
> > +            local seq_end=$(echo "$comments_json" | jq -rc 'length')
> > +            if [ "$seq_end" -a $seq_end -gt 0 ]; then
> > +                seq_end=$((seq_end-1))
> > +                for comment_id in $(seq 0 $seq_end); do
> > +                    local recheck_requested=$(echo "$comments_json" |
> jq -rc ".[$comment_id].content" | grep
> > "^Recheck-request: ")
> > +                    if [ "X$recheck_requested" != "X" ]; then
> > +                        local msgid=$(echo "$comments_json" | jq -rc
> ".[$comment_id].msgid")
> > +                        run_recheck "$pw_instance" "$series_id"
> "$project" "$url" "$repo" "$branchname"
> > "$recheck_requested" "$msgid"
> > +                    fi
> > +                done
> > +            fi
> > +        fi
> > +    done
> > +}
> > This is already a superior approach to what I had in mind for acquiring
> comments. Unless you're opposed, I
> > think at the communit lab we can experiment based on this starting point
> to verify the process is sound, but I
> > don't see any problems here.
> >
> >  I think that if we're able to specify multiple contexts, then there's
> not really any reason to run multiple
> >  rechecks per patchset.
> >
> > Agreed.
> >
> >  There was also an ask on filtering requesters (only maintainers and
> >
> >  patch authors can ask for a recheck).
> >
> > If we can use the maintainers file as a single source of truth that is
> convenient and stable as the list of
> > maintainers changes. But, also I think retesting request permission
> should be extended to the submitter too.
> > They may want to initiate a re-run without engaging a maintainer. It's
> not likely to cause a big increase in test
> > load for us or other labs, so there's no harm there.
> >
> >  No, an explicit list is actually better.
> >
> >  When a new check is added, for someone looking at the mails (maybe 2/3
> >
> >  weeks later), and reading just "all", he would have to know what
> >
> >  checks were available at the time.
> >
> > Context/Labels rarely change, so I don't think this concern is too
> serious. But, if people dont mind comma
> > separating an entire list of contexts, that's fine.
> >
> > Thanks,
> > Patrick
>
>

-- 

Patrick Robb

Technical Service Manager

UNH InterOperability Laboratory

21 Madbury Rd, Suite 100, Durham, NH 03824

www.iol.unh.edu

[-- Attachment #2: Type: text/html, Size: 10656 bytes --]

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: Email Based Re-Testing Framework
  2023-06-13 13:28           ` Patrick Robb
@ 2023-06-20 14:01             ` Aaron Conole
  0 siblings, 0 replies; 13+ messages in thread
From: Aaron Conole @ 2023-06-20 14:01 UTC (permalink / raw)
  To: Patrick Robb
  Cc: Ferruh Yigit, ci, Tu, Lijuan, zhoumin, Michael Santana, Lincoln Lavoie

Patrick Robb <probb@iol.unh.edu> writes:

>  1. Ephemeral lab/network failures, or flaky unit tests that sometimes
>     fail.  In this case, we probably just want to re-run the tree as-is.
>
>  2. Failing tree before apply.  In this case, we have applied the series
>     to a tree, but the tree isn't in a good state and will fail
>     regardless of the series being applied.
>
> I had originally thought of this only as rerunning the tree as-is, though if the community wants the 2nd option
> to be available that works too. It does increase the scope of the task and complexity of the request text format
> to be understood for submitters. Personally, where I fall is that there isn't enough additional value to justify
> doing more than rerunning as-is. Plus, if we do end up with a bad tree, submitters can resubmit their patches
> when the tree is in a good state again, right? Or alternatively, lab managers may be aware of this situation
> and be able to order a rerun for the last x days (duration of bad tree) on the most up to date tree. 

Re: submitters resubmitting, that is currently one way to get an
automatic "retest," right?  The thought is to avoid cluttering patchwork
with even more series, but I can see that it might not make sense just
yet.  It's probably fine to start with the retest "as-is" and add the
feature later.

> On Mon, Jun 12, 2023 at 11:01 AM Aaron Conole <aconole@redhat.com> wrote:
>
>  Patrick Robb <probb@iol.unh.edu> writes:
>
>  > On Wed, Jun 7, 2023 at 8:53 AM Aaron Conole <aconole@redhat.com> wrote:
>  >
>  >  Patrick Robb <probb@iol.unh.edu> writes:
>  >
>  >  >  Also it can be useful to run daily sub-tree testing by request, if possible.
>  >  >
>  >  > That wouldn't be too difficult. I'll make a ticket for this. Although, for testing on the daily sub-trees,
>  >  since that's
>  >  > UNH-IOL specific, that wouldn't necessarily have to be done via an email based testing request
>  >  framework. We
>  >  > could also just add a button to our dashboard which triggers a sub-tree ci run. That would help keep
>  >  narrow
>  >  > the scope of what the email based retesting framework is for. But, both email or a dashboard button
>  >  would
>  >  > both work. 
>  >
>  >  We had discussed this long ago - including agreeing on a format, IIRC.
>  >
>  >  See the thread starting here:
>  >    https://mails.dpdk.org/archives/ci/2021-May/001189.html
>  >
>  >  The idea was to have a line like:
>  >
>  >  Recheck-request: <test names>
>  >
>  > I like using this simpler format which is easier to parse. As Thomas pointed out, specifying labs does
>  not really
>  > provide extra information if we are already going to request by label/context, which is already specifies
>  the
>  > lab.  
>
>  One thing we haven't discussed or determined is if we should have the
>  ability to re-apply the series or simply to rerun the patches based on
>  the original sha sum.  There are two cases I can think of:
>
>  1. Ephemeral lab/network failures, or flaky unit tests that sometimes
>     fail.  In this case, we probably just want to re-run the tree as-is.
>
>  2. Failing tree before apply.  In this case, we have applied the series
>     to a tree, but the tree isn't in a good state and will fail
>     regardless of the series being applied.
>
>  WDYT?  Does (2) case warrant any consideration as a possible reason to
>  retest?  If so, what is the right way of handling that situation?
>
>  >  where <test names> was the tests in the check labels.  In fact, what
>  >  started the discussion was a patch for the pw-ci scripts that
>  >  implemented part of it.
>  >
>  >  I don't see how to make your proposal as easily parsed.
>  >
>  >  WDYT?  Can you re-read that thread and come up with comments?
>  >
>  >  Will do. And thanks, this thread is very informative. 
>  >
>  >  It is important to use the 'msgid' field to distinguish recheck
>  >  requests.  Otherwise, we will continuously reparse the same
>  >  recheck request and loop forever.  Additionally, we've discussed using a
>  >  counter to limit the recheck requests to a single 'recheck' per test
>  >  name.
>  >
>  > We can track message ids to avoid considering a single retest request twice. Perhaps we can
>  accomplish the
>  > same thing by tracking retested patchseries ids and their total number of requested retests (which
>  could be 1
>  > retest per patchseries). 
>  >
>  >   +function check_series_needs_retest() {
>  >
>  > +    local pw_instance="$1"
>  > +
>  > +    series_get_active_branches "$pw_instance" | while IFS=\| read -r series_id project url repo
>  branchname; do
>  > +        local patch_comments_url=$(curl -s "$userpw" "$url" | jq -rc '.comments')
>  > +        if [ "Xnull" != "X$patch_comments_url" ]; then
>  > +            local comments_json=$(curl -s "$userpw" "$patch_comments_url")
>  > +            local seq_end=$(echo "$comments_json" | jq -rc 'length')
>  > +            if [ "$seq_end" -a $seq_end -gt 0 ]; then
>  > +                seq_end=$((seq_end-1))
>  > +                for comment_id in $(seq 0 $seq_end); do
>  > +                    local recheck_requested=$(echo "$comments_json" | jq -rc ".[$comment_id].content" |
>  grep
>  > "^Recheck-request: ")
>  > +                    if [ "X$recheck_requested" != "X" ]; then
>  > +                        local msgid=$(echo "$comments_json" | jq -rc ".[$comment_id].msgid")
>  > +                        run_recheck "$pw_instance" "$series_id" "$project" "$url" "$repo" "$branchname"
>  > "$recheck_requested" "$msgid"
>  > +                    fi
>  > +                done
>  > +            fi
>  > +        fi
>  > +    done
>  > +}
>  > This is already a superior approach to what I had in mind for acquiring comments. Unless you're
>  opposed, I
>  > think at the communit lab we can experiment based on this starting point to verify the process is
>  sound, but I
>  > don't see any problems here. 
>  >
>  >  I think that if we're able to specify multiple contexts, then there's not really any reason to run multiple
>  >  rechecks per patchset.
>  >
>  > Agreed.
>  >
>  >  There was also an ask on filtering requesters (only maintainers and
>  >
>  >  patch authors can ask for a recheck). 
>  >
>  > If we can use the maintainers file as a single source of truth that is convenient and stable as the list of
>  > maintainers changes. But, also I think retesting request permission should be extended to the
>  submitter too.
>  > They may want to initiate a re-run without engaging a maintainer. It's not likely to cause a big increase
>  in test
>  > load for us or other labs, so there's no harm there. 
>  >
>  >  No, an explicit list is actually better.
>  >
>  >  When a new check is added, for someone looking at the mails (maybe 2/3
>  >
>  >  weeks later), and reading just "all", he would have to know what
>  >
>  >  checks were available at the time. 
>  >
>  > Context/Labels rarely change, so I don't think this concern is too serious. But, if people dont mind
>  comma
>  > separating an entire list of contexts, that's fine. 
>  >
>  > Thanks,
>  > Patrick


^ permalink raw reply	[flat|nested] 13+ messages in thread

* RE: Email Based Re-Testing Framework
  2023-06-06 16:56 Email Based Re-Testing Framework Patrick Robb
  2023-06-06 17:53 ` Ferruh Yigit
  2023-06-07  7:04 ` Thomas Monjalon
@ 2023-06-21 16:21 ` Ali Alnubani
  2023-07-10 21:16   ` Jeremy Spewock
  2 siblings, 1 reply; 13+ messages in thread
From: Ali Alnubani @ 2023-06-21 16:21 UTC (permalink / raw)
  To: Patrick Robb, Aaron Conole, Tu,  Lijuan, ci
  Cc: zhoumin, Michael Santana, Lincoln Lavoie,
	NBU-Contact-Thomas Monjalon (EXTERNAL)

On 6/6/2023 7:57 PM, Patrick Robb wrote:
> Hello all,
> 
> I'd like to revive the conversation about a request from the community
> for an email based re-testing framework. The idea is that using one
> standardized format, dpdk developers could email the test-report mailing
> list, requesting a rerun on their patch series for "X" set of tests at
> "Y" lab. I think that since patchwork testing labels (ie.
> iol-broadcom-Performance, github-robot: build, loongarch-compilation)
> are already visible on patch pages on patchwork, those labels are the
> most reasonable ones to expect developers to use when requesting a
> re-test. We probably wouldn't want to get any more general than that,
> like, say, rerunning all CI testing for a specific patch series at a
> specific lab, since it would result in a significant amount of "wasted"
> testing capacity.
> 
> The standard email format those of us at the Community Lab are thinking
> of is like below. Developers would request retests by emailing the
> test-report mailing list with email bodies like:
> 
> [RETEST UNH-IOL]
> iol-abi-testing
> iol-broadcom-Performance
> 
> [RETEST Intel]
> intel-Functional
> 
> [RETEST Loongson]
> loongarch-compilation
> 
> [RETEST GHA]
> github-robot: build
> 
> From there, it would be up to the various labs to poll the test-report
> mailing list archive (or use a similar method) to check for such
> requests, and trigger a CI testing rerun based on the labels provided in
> the re-test email. If there is interest from other labs, UNH might also
> be able to host the entire set of re-test requests, allowing other labs
> to poll a curated list hosted by UNH. One simple approach would be for
> labs to download all emails sent to test-report and parse with regex to
> determine the re-test list for their specific lab. But, if anyone has
> any better ideas for aggregating the emails to be parsed, suggestions
> are welcome! If this approach sounds reasonable to everyone, we could
> determine a timeline by which labs would implement the functionality
> needed to trigger re-tests. Or, we can just add re-testing for various
> labs if/when they add this functionality - whatever is better. Happy to
> discuss at the CI meeting on Thursday.
>

Hello,

For context, and as discussed in the last community CI meeting, going through every new patch to look for new comments that trigger retests might take too long and potentially slow down the server.

I will upgrade Patchwork to v3.1 right after the v23.07 release.
The new version adds two new events to the /events API: cover-comment-created and patch-comment-created.[1]

An example event schema:

"""
{
        [..]
        "category": "patch-comment-created",
        "project": {
            [..]
        },
        "date": "string",
        "actor": {
            [..]
        },
        "payload": {
            "patch": {
                [..]
            },
            "comment": {
                [..]
                "url": "https://patches.dpdk.org/api/patches/X/comments/Y/",
                [..]
            }
        }
    }
"""

The comments body/contents can be extracted from the "content" property after fetching the comment's api url. Example schema:

"""
{
    [..]
    "subject": "string",
    "submitter": {
        [..]
    },
    "content": "string",
    "headers": {
        [..]
    },
    "addressed": null
}
"""

[1] https://patchwork.readthedocs.io/en/latest/releases/hessian/#relnotes-v3-1-0-stable-3-1-new-features
Also see:
https://patchwork.readthedocs.io/en/latest/api/rest/schemas/v1.2/#get--api-1.2-events-
https://patchwork.readthedocs.io/en/latest/api/rest/schemas/v1.2/#get--api-1.2-patches-id-comments-
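To make the flow concrete, here is a minimal sketch of how a consumer could use these events. The function names are mine, not part of the Patchwork API; only the payload shape shown above is assumed:

```python
import json
import urllib.request

def comment_url_from_event(event: dict) -> str:
    # The *-comment-created payload carries the comment's API URL,
    # per the schema above.
    return event["payload"]["comment"]["url"]

def fetch_comment_content(url: str) -> str:
    # Fetch the comment object and return its "content" property,
    # which holds the raw comment body (where a retest request would live).
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)["content"]

# Offline demonstration against a stub event shaped like the schema:
stub = {
    "category": "patch-comment-created",
    "payload": {
        "patch": {"id": 12345},
        "comment": {"url": "https://patches.dpdk.org/api/patches/X/comments/Y/"},
    },
}
print(comment_url_from_event(stub))
# -> https://patches.dpdk.org/api/patches/X/comments/Y/
```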

Let me know if you have any questions.

Regards,
Ali

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: Email Based Re-Testing Framework
  2023-06-21 16:21 ` Ali Alnubani
@ 2023-07-10 21:16   ` Jeremy Spewock
  0 siblings, 0 replies; 13+ messages in thread
From: Jeremy Spewock @ 2023-07-10 21:16 UTC (permalink / raw)
  To: Ali Alnubani
  Cc: Patrick Robb, Aaron Conole, Tu, Lijuan, ci, zhoumin,
	Michael Santana, Lincoln Lavoie,
	NBU-Contact-Thomas Monjalon (EXTERNAL)

[-- Attachment #1: Type: text/plain, Size: 11235 bytes --]

Hello all,

I know Aaron and I talked a bit about the script for this and how we were
going to handle it during the CI meeting, so I figured I would raise it on
the mailing list so anyone can provide input. One of the points we
discussed was whether we wanted to use pwclient for this or make it a
separate script dedicated to collecting retest requests. I'm not
completely sure which would be the better option, but as I mentioned
during the meeting, I wrote a rough draft of a dedicated script that I
thought would be worth bringing up. The code below is a simple Python
script that lets you pass in the names of the contexts you would like to
collect for retesting; it follows the event schema listed in the previous
email. The thought process is that every lab could maintain a list of
labels it would like to capture, along with the timestamp of the last time
it ran the script and gathered retest requests, and use that to run this
script periodically.

There are a couple of things to consider before the script is completely
polished, like where to send the output and how people want to handle it,
or whether we even want to use it rather than write something around
pwclient to handle collecting these requests. I also wasn't sure whether a
comma- or space-separated list would be preferred for input, so I'm open
to suggestions on that as well if we decide to use this script. I wrote
something that handles both comma and space delimiters, but it required
flattening the list and a little extra complexity, so I left it out of the
code below. Let me know if anyone has thoughts on the matter and how we
want to handle collecting the requests.
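
As a sketch of the bookkeeping I mean (the file name and the default value
are placeholders, not anything we've agreed on), the "timestamp since the
last run" could be as small as:

```python
from datetime import datetime, timezone
from pathlib import Path

STAMP_FILE = Path("last_poll.txt")  # placeholder location

def read_last_poll(default: str = "2023-01-01T00:00:00") -> str:
    # Return the ISO-8601 timestamp recorded by the previous run,
    # or a default for the very first run.
    if STAMP_FILE.exists():
        return STAMP_FILE.read_text().strip()
    return default

def write_last_poll() -> str:
    # Record "now" (UTC) for the next run and return it.
    now = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S")
    STAMP_FILE.write_text(now)
    return now

# A lab's periodic job would then do, roughly:
#   since = read_last_poll()
#   ...run the retest script with --time-since {since}...
#   write_last_poll()
```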

Thanks,
Jeremy

# SPDX-License-Identifier: BSD-3-Clause
# Copyright(c) 2023 University of New Hampshire

import argparse
import re

import requests


class RerunProcessor:
    """Class for finding rerun requests in comments using the Patchwork events API.

    The idea of this class is to use a regex to find the pattern that marks
    desired contexts to rerun.
    """

    # ^ is start of line and $ is end
    # ((?:[a-zA-Z-]+(?:, ?\n?)?)+) is a capture group that gets all test
    # labels after "Recheck-request: "
    #   (?:[a-zA-Z-]+(?:, ?\n?)?)+ means 1 or more of the first match group
    #       [a-zA-Z-]+ means 1 or more of any character in the ranges
    #           a-z or A-Z, or the character '-'
    #       (?:, ?\n?)? means 1 or none of this match group, which expects
    #           exactly 1 comma followed by 1 or no spaces followed by
    #           1 or no newlines.
    # VALID MATCHES:
    #   Recheck-request: iol-unit-testing, iol-something-else, iol-one-more, intel-example-testing
    #   Recheck-request: iol-unit-testing,iol-something-else,iol-one-more, intel-example-testing,
    #   Recheck-request: iol-unit-testing, iol-example, iol-another-example, intel-example-testing,
    #   more-intel-testing
    # INVALID MATCHES:
    #   Recheck-request: iol-unit-testing, iol-something-else,iol-one-more,  intel-example-testing
    #   Recheck-request: iol-unit-testing iol-something-else,iol-one-more, intel-example-testing,
    #   Recheck-request: iol-unit-testing,iol-something-else,iol-one-more, intel-example-testing,
    #
    #   more-intel-testing
    regex: str = r"^Recheck-request: ((?:[a-zA-Z-]+(?:, ?\n?)?)+)"

    def __init__(self, desired_contexts: list[str]) -> None:
        self.desired_contexts = desired_contexts
        self.collection_of_retests: dict = {}

    def process_comment_info(self, list_of_comment_blobs: list[dict]) -> None:
        """Takes the list of JSON blobs of comment information and associates
        them with their patches.

        Collects retest labels from a list of comments on patches represented
        in list_of_comment_blobs and creates a dictionary that associates them
        with their corresponding patch ID. The labels that need to be retested
        are collected by passing the comment's body into the get_test_names()
        method.

        Args:
            list_of_comment_blobs: a list of JSON blobs that represent comment
                information
        """
        for comment in list_of_comment_blobs:
            comment_info = requests.get(comment["payload"]["comment"]["url"])
            labels_to_rerun = self.get_test_names(comment_info.json()["content"])
            # cover-comment-created events carry a "cover" object instead of
            # a "patch" object in their payload
            parent = comment["payload"].get("patch") or comment["payload"].get("cover")
            patch_id = parent["id"]
            # appending to the list if it already exists, or creating it if it doesn't
            self.collection_of_retests[patch_id] = [
                *self.collection_of_retests.get(patch_id, []),
                *labels_to_rerun,
            ]

    def get_test_names(self, email_body: str) -> list[str]:
        """Uses the regex in the class to get the information from the email.

        When it gets the test names from the email, they will all be in one
        capture group. We expect a comma-separated list of patchwork labels to
        be retested.

        Returns:
            A list of contexts found in the email that match your list of
            desired contexts to capture
        """
        rerun_section = re.findall(self.regex, email_body, re.MULTILINE)
        if not rerun_section:
            return []
        rerun_list = list(map(str.strip, rerun_section[0].split(",")))
        valid_test_labels = []
        for test_name in rerun_list:
            if not test_name:  # handle the capturing of empty strings
                continue
            if test_name in self.desired_contexts:
                valid_test_labels.append(test_name)
        return valid_test_labels


if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Help text for getting reruns")
    parser.add_argument("-ts", "--time-since", dest="time_since", required=True,
                        help="Get all patches since this timestamp")
    parser.add_argument("--no-cover-comment", dest="no_cover_comment",
                        action="store_true",
                        help="Option to ignore comments on cover letters")
    parser.add_argument("--no-patch-comment", dest="no_patch_comment",
                        action="store_true",
                        help="Option to ignore comments on patch emails")
    parser.add_argument("--contexts", dest="contexts_to_capture", nargs="*",
                        required=True,
                        help="List of patchwork contexts you would like to capture")
    args = parser.parse_args()
    rerun_processor = RerunProcessor(args.contexts_to_capture)
    patchwork_url = f"http://patches.dpdk.org/api/events/?since={args.time_since}"
    if args.no_cover_comment and args.no_patch_comment:
        exit(0)
    if not args.no_cover_comment:
        patchwork_url += "&category=cover-comment-created"
    if not args.no_patch_comment:
        patchwork_url += "&category=patch-comment-created"
    comment_request_info = requests.get(patchwork_url)
    rerun_processor.process_comment_info(comment_request_info.json())
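
To show what the filtering does, here is a self-contained example of the
pattern in action (the context names are invented, and the character class
is written as a-zA-Z):

```python
import re

# Same pattern idea as in RerunProcessor above.
RECHECK_RE = r"^Recheck-request: ((?:[a-zA-Z-]+(?:, ?\n?)?)+)"

body = """Some reply text here.
Recheck-request: iol-unit-testing, iol-compile-testing, loongarch-compilation
"""

groups = re.findall(RECHECK_RE, body, re.MULTILINE)
requested = [name.strip() for name in groups[0].split(",") if name.strip()]

# A lab would keep only the contexts it owns:
desired = {"iol-unit-testing", "loongarch-compilation"}
print([name for name in requested if name in desired])
# -> ['iol-unit-testing', 'loongarch-compilation']
```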

On Wed, Jun 21, 2023 at 12:21 PM Ali Alnubani <alialnu@nvidia.com> wrote:

> On 6/6/2023 7:57 PM, Patrick Robb wrote:
> > Hello all,
> >
> > I'd like to revive the conversation about a request from the community
> > for an email based re-testing framework. The idea is that using one
> > standardized format, dpdk developers could email the test-report mailing
> > list, requesting a rerun on their patch series for "X" set of tests at
> > "Y" lab. I think that since patchwork testing labels (ie.
> > iol-broadcom-Performance, github-robot: build, loongarch-compilation)
> > are already visible on patch pages on patchwork, those labels are the
> > most reasonable ones to expect developers to use when requesting a
> > re-test. We probably wouldn't want to get any more general than that,
> > like, say, rerunning all CI testing for a specific patch series at a
> > specific lab, since it would result in a significant amount of "wasted"
> > testing capacity.
> >
> > The standard email format those of us at the Community Lab are thinking
> > of is like below. Developers would request retests by emailing the
> > test-report mailing list with email bodies like:
> >
> > [RETEST UNH-IOL]
> > iol-abi-testing
> > iol-broadcom-Performance
> >
> > [RETEST Intel]
> > intel-Functional
> >
> > [RETEST Loongson]
> > loongarch-compilation
> >
> > [RETEST GHA]
> > github-robot: build
> >
> > From there, it would be up to the various labs to poll the test-report
> > mailing list archive (or use a similar method) to check for such
> > requests, and trigger a CI testing rerun based on the labels provided in
> > the re-test email. If there is interest from other labs, UNH might also
> > be able to host the entire set of re-test requests, allowing other labs
> > to poll a curated list hosted by UNH. One simple approach would be for
> > labs to download all emails sent to test-report and parse with regex to
> > determine the re-test list for their specific lab. But, if anyone has
> > any better ideas for aggregating the emails to be parsed, suggestions
> > are welcome! If this approach sounds reasonable to everyone, we could
> > determine a timeline by which labs would implement the functionality
> > needed to trigger re-tests. Or, we can just add re-testing for various
> > labs if/when they add this functionality - whatever is better. Happy to
> > discuss at the CI meeting on Thursday.
> >
>
> Hello,
>
> For context, and as discussed in the last community CI meeting, going
> through every new patch to look for new comments that trigger retests might
> take too long and potentially slow down the server.
>
> I will upgrade Patchwork to v3.1 right after the v23.07 release.
> The new version adds two new events to the /events API:
> cover-comment-created and patch-comment-created.[1]
>
> An example event schema:
>
> """
> {
>         [..]
>         "category": "patch-comment-created",
>         "project": {
>             [..]
>         },
>         "date": "string",
>         "actor": {
>             [..]
>         },
>         "payload": {
>             "patch": {
>                 [..]
>             },
>             "comment": {
>                 [..]
>                 "url": "https://patches.dpdk.org/api/patches/X/comments/Y/",
>                 [..]
>             }
>         }
>     }
> """
>
> The comments body/contents can be extracted from the "content" property
> after fetching the comment's api url. Example schema:
>
> """
> {
>     [..]
>     "subject": "string",
>     "submitter": {
>         [..]
>     },
>     "content": "string",
>     "headers": {
>         [..]
>     },
>     "addressed": null
> }
> """
>
> [1]
> https://patchwork.readthedocs.io/en/latest/releases/hessian/#relnotes-v3-1-0-stable-3-1-new-features
> Also see:
>
> https://patchwork.readthedocs.io/en/latest/api/rest/schemas/v1.2/#get--api-1.2-events-
>
> https://patchwork.readthedocs.io/en/latest/api/rest/schemas/v1.2/#get--api-1.2-patches-id-comments-
>
> Let me know if you have any questions.
>
> Regards,
> Ali
>
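
To illustrate the approach Ali describes, a lab-side poller could filter the /events results down to the comment URLs it needs to fetch. This is only a sketch: the field names follow the schemas quoted above, but the function name and sample data are hypothetical:

```python
def comment_urls_from_events(events):
    """Collect comment API URLs from patchwork /events results,
    keeping only the two comment-created categories added in Patchwork v3.1."""
    wanted = {"cover-comment-created", "patch-comment-created"}
    return [
        ev["payload"]["comment"]["url"]
        for ev in events
        if ev.get("category") in wanted
    ]

# Hypothetical sample of what the events endpoint might return:
events = [
    {"category": "patch-comment-created",
     "payload": {"comment": {"url": "https://patches.dpdk.org/api/patches/1/comments/2/"}}},
    {"category": "series-created", "payload": {}},
]
print(comment_urls_from_events(events))
# -> ['https://patches.dpdk.org/api/patches/1/comments/2/']
```

Each returned URL would then be fetched and its "content" property scanned for the retest format, so only comments in the two new categories are ever inspected rather than every patch on the server.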

[-- Attachment #2: Type: text/html, Size: 14474 bytes --]

^ permalink raw reply	[flat|nested] 13+ messages in thread

end of thread, other threads:[~2023-07-10 21:16 UTC | newest]

Thread overview: 13+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-06-06 16:56 Email Based Re-Testing Framework Patrick Robb
2023-06-06 17:53 ` Ferruh Yigit
2023-06-06 19:27   ` Patrick Robb
2023-06-06 21:40     ` Ferruh Yigit
2023-06-07 12:53     ` Aaron Conole
2023-06-08  1:14       ` Patrick Robb
2023-06-08  1:47       ` Patrick Robb
2023-06-12 15:01         ` Aaron Conole
2023-06-13 13:28           ` Patrick Robb
2023-06-20 14:01             ` Aaron Conole
2023-06-07  7:04 ` Thomas Monjalon
2023-06-21 16:21 ` Ali Alnubani
2023-07-10 21:16   ` Jeremy Spewock

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).