test suite reviews and discussions
From: "Tu, Lijuan" <lijuan.tu@intel.com>
To: "dts@dpdk.org" <dts@dpdk.org>
Subject: Re: [dts] [PATCH V2 0/5] rework test plan for runtime vf queue number
Date: Fri, 1 Feb 2019 05:13:04 +0000	[thread overview]
Message-ID: <8CE3E05A3F976642AAB0F4675D0AD20E0BA1FCB8@SHSMSX101.ccr.corp.intel.com> (raw)
In-Reply-To: <1548957413-109741-1-git-send-email-lijuan.tu@intel.com>

Applied all patches in the patch set. Thanks.

> -----Original Message-----
> From: Tu, Lijuan
> Sent: Friday, February 1, 2019 1:57 AM
> To: dts@dpdk.org
> Cc: Tu, Lijuan <lijuan.tu@intel.com>
> Subject: [PATCH V2 0/5] rework test plan for runtime vf queue number
> 
> DPDK 19.02 redefined the EAL parameter "queue-num-per-vf" to reserve a
> queue number per VF. At the same time, it extended the maximum number of
> queues that a VF can request from the PF to 16. Furthermore, it supports
> both DPDK PF and kernel PF.
> 
> * removed the old test plan
> * split the new test plan into a series of 3 files to simplify the test scripts
> * attached a picture to describe the test topology
> 
>   - v2: fix format errors
> 
> Lijuan Tu (5):
>   test_plans: remove test plan for runtime queue number
>   test_plans: add a picture to clarify test topology
>   test_plans: add the main test plan for runtime vf queue number
>   test_plans: add supplementary runtime vf queue number test plan
>   test_plans: add test plan for runtime vf queue number( Kernel PF +
>     DPDK VF)
> 
>  test_plans/image/2vf1pf.png                        | Bin 0 -> 24020 bytes
>  test_plans/runtime_queue_number_test_plan.rst      | 465 ---------------------
>  .../runtime_vf_queue_number_kernel_test_plan.rst   | 232 ++++++++++
>  .../runtime_vf_queue_number_maxinum_test_plan.rst  | 133 ++++++
>  test_plans/runtime_vf_queue_number_test_plan.rst   | 372 +++++++++++++++++
>  5 files changed, 737 insertions(+), 465 deletions(-)
>  create mode 100644 test_plans/image/2vf1pf.png
>  delete mode 100644 test_plans/runtime_queue_number_test_plan.rst
>  create mode 100644 test_plans/runtime_vf_queue_number_kernel_test_plan.rst
>  create mode 100644 test_plans/runtime_vf_queue_number_maxinum_test_plan.rst
>  create mode 100644 test_plans/runtime_vf_queue_number_test_plan.rst
> 
> --
> 1.8.3.1


Thread overview: 7+ messages
2019-01-31 17:56 Lijuan Tu
2019-01-31 17:56 ` [dts] [PATCH V2 1/5] test_plans: remove test plan for runtime " Lijuan Tu
2019-01-31 17:56 ` [dts] [PATCH V2 2/5] test_plans: add a picture to clarify test topology Lijuan Tu
2019-01-31 17:56 ` [dts] [PATCH V2 3/5] test_plans: add the main test plan for runtime vf queue number Lijuan Tu
2019-01-31 17:56 ` [dts] [PATCH V2 4/5] test_plans: add supplementary runtime vf queue number test plan Lijuan Tu
2019-01-31 17:56 ` [dts] [PATCH V2 5/5] test_plans: add test plan for runtime vf queue number( Kernel PF + DPDK VF) Lijuan Tu
2019-02-01  5:13 ` Tu, Lijuan [this message]
