On Mon, Nov 22, 2021 at 3:17 AM David Marchand <david.marchand@redhat.com> wrote:
On Fri, Nov 19, 2021 at 5:54 PM Dumitrescu, Cristian
<cristian.dumitrescu@intel.com> wrote:
> On a different point, we should probably tweak our autotests to differentiate between logical failures and failures caused by resources not being available, and flag the test result accordingly in the report. For example, if a memory allocation fails, the test should be flagged as "Not enough resources" instead of simply "Failed". In the first case, the next step is to fix the test setup; in the second case, to fix the code. What do people think about this?

In such a case, the test must return TEST_SKIPPED.

If the purpose of the component / function being tested is to get / create / reserve the resource(s), the failure might be valid, so this can't be applied across the board. But in places where the test is checking other functionality, it might at least prevent some failures that are transient (i.e., dependent on what the test could "get" from the system at that moment in time).
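
As an illustration, here is a minimal sketch of that pattern, assuming the TEST_SUCCESS / TEST_FAILED / TEST_SKIPPED return codes from app/test/test.h; the test body and its resource requirements are hypothetical:

#include <rte_lcore.h>
#include <rte_malloc.h>

#include "test.h" /* TEST_SUCCESS / TEST_FAILED / TEST_SKIPPED */

/* Hypothetical autotest: skip when the setup lacks resources,
 * fail only when the code under test misbehaves. */
static int
test_resource_aware(void)
{
	void *buf;

	/* Too few lcores is a setup problem, not a code bug. */
	if (rte_lcore_count() < 2)
		return TEST_SKIPPED;

	/* A memory shortage during setup is also a setup problem. */
	buf = rte_malloc(NULL, 1 << 20, 0);
	if (buf == NULL)
		return TEST_SKIPPED;

	/* ... exercise the component under test here; a logical
	 * error at this point is a genuine TEST_FAILED ... */

	rte_free(buf);
	return TEST_SUCCESS;
}

Of course, if the purpose of the test were the allocation itself, the rte_malloc() failure above would be a real failure rather than a skip.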

I did a pass for core counts / specific hw requirements some time ago.
See https://git.dpdk.org/dpdk/commit/?id=e0f4a0ed4237


--
David Marchand



--
Lincoln Lavoie
Principal Engineer, Broadband Technologies
21 Madbury Rd., Ste. 100, Durham, NH 03824
+1-603-674-2755 (m)