Hello all,

I've been working on tests for the flow API recently, and I've hit a point where I think runtime code generation would be useful.

For context, the test suite is currently architected so that testpmd is started once in the set_up_all function, and then every flow rule under test gets its own test case. This keeps the benefit of only starting testpmd once while preserving the granularity of knowing exactly which rule failed.

Right now, I have a script which generates all 54 pattern-matching test cases and prints them out, so that I can copy and paste the generated test cases into the test suite. My concern is that whoever maintains this test suite afterward would not know that the cases were generated, or how to regenerate them.

The simple solution I see is to modify the test suite to add the test cases at runtime. I'm reaching out before doing this and submitting it to make sure there are no strong objections. It would probably involve the use of exec to avoid needing to hook parts of the interpreter to generate the symbol tree directly. Performance isn't an issue, since generating the cases takes roughly 3 ms.

More details on the plan for the test suite can be found at https://docs.google.com/document/d/1_jEciQFZ-Lj1ASF_mQbCnB3U5FefdW2Z84HP1y-GJek/edit?usp=sharing

I'd appreciate any concerns or suggestions on this.

Owen Hilyard
UNH InterOperability Laboratory
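
P.S. To make the proposal concrete, here is a minimal sketch of attaching generated test cases to a suite at runtime. It uses plain unittest and setattr instead of exec, and the rule strings, class, and method names are all hypothetical stand-ins, not the actual DTS suite; the real version would drive testpmd rather than a stub validator.

```python
import unittest

# Hypothetical flow rules; the real script generates 54 of these.
FLOW_RULES = [
    "ingress pattern eth / end actions drop / end",
    "ingress pattern eth / ipv4 / end actions drop / end",
]

def make_test(rule):
    # Each generated method validates exactly one flow rule, so a
    # failure report names the specific rule that broke.
    def test(self):
        self.assertTrue(self.validate_rule(rule))
    return test

class TestFlowRules(unittest.TestCase):
    def validate_rule(self, rule):
        # Stub standing in for sending the rule to testpmd and
        # checking that it validates.
        return "pattern" in rule

# Attach one test method per rule at import time. Using setattr on the
# class avoids exec entirely while still producing distinct test cases.
for i, rule in enumerate(FLOW_RULES):
    setattr(TestFlowRules, f"test_flow_rule_{i}", make_test(rule))
```

A standard unittest runner then discovers test_flow_rule_0, test_flow_rule_1, etc. as if they had been written by hand.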