From: "Tu, Lijuan" <lijuan.tu@intel.com>
To: "Wang, Yinan" <yinan.wang@intel.com>, "dts@dpdk.org" <dts@dpdk.org>
Cc: "Wang, Yinan" <yinan.wang@intel.com>
Subject: Re: [dts] [PATCH v1] test_plans: add packed ring vectorized cases in vm2vm_virtio_user test
Date: Mon, 27 Apr 2020 07:50:42 +0000
Message-ID: <8CE3E05A3F976642AAB0F4675D0AD20E0BC12A2B@SHSMSX101.ccr.corp.intel.com>
In-Reply-To: <20200425173740.48722-1-yinan.wang@intel.com>

Applied, thanks

> -----Original Message-----
> From: dts [mailto:dts-bounces@dpdk.org] On Behalf Of Yinan
> Sent: Sunday, April 26, 2020 1:38 AM
> To: dts@dpdk.org
> Cc: Wang, Yinan <yinan.wang@intel.com>
> Subject: [dts] [PATCH v1] test_plans: add packed ring vectorized cases in vm2vm_virtio_user test
> 
> From: Wang Yinan <yinan.wang@intel.com>
> 
> Signed-off-by: Wang Yinan <yinan.wang@intel.com>
> ---
>  test_plans/vm2vm_virtio_user_test_plan.rst | 87 ++++++++++++++++++----
>  1 file changed, 74 insertions(+), 13 deletions(-)
> 
> diff --git a/test_plans/vm2vm_virtio_user_test_plan.rst b/test_plans/vm2vm_virtio_user_test_plan.rst
> index 0aa2501..de5bbb6 100644
> --- a/test_plans/vm2vm_virtio_user_test_plan.rst
> +++ b/test_plans/vm2vm_virtio_user_test_plan.rst
> @@ -38,10 +38,6 @@ Description
>  ===========
> 
>  This test plan includes split virtqueue vm2vm in-order mergeable, in-order non-mergeable, mergeable, non-mergeable, vector_rx path test, and packed virtqueue vm2vm in-order mergeable, in-order non-mergeable, mergeable, non-mergeable, vectorized path test. This plan also check the payload of packets is accurate.
> -Note: Packed virtqueue vectorized path need below three initial requirements:
> -    1. AVX512 is allowed in config file and supported by compiler
> -    2. Host cpu support AVX512F
> -    3. ring size is power of two
> 
>  Prerequisites
>  =============
> @@ -628,7 +624,7 @@ Test Case 9: split virtqueue vm2vm vector_rx path test
> 
>      ./testpmd -n 4 -l 7-8 --socket-mem 1024,1024 \
>      --no-pci --file-prefix=virtio1 \
> -    --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=1,packed_vq=0,mrg_rxbuf=0,in_order=0 \
> +    --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=1,packed_vq=0,mrg_rxbuf=0,in_order=0,vectorized=1,queue_size=256 \
>      -- -i --nb-cores=1 --txd=256 --rxd=256
> 
>  3. Attach pdump secondary process to primary process by same file-prefix::
> @@ -639,7 +635,7 @@ Test Case 9: split virtqueue vm2vm vector_rx path test
> 
>      ./testpmd -n 4 -l 5-6 --socket-mem 1024,1024 \
>      --no-pci --file-prefix=virtio \
> -    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=1,packed_vq=0,mrg_rxbuf=0,in_order=0 \
> +    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=1,packed_vq=0,mrg_rxbuf=0,in_order=0,vectorized=1,queue_size=256 \
>      -- -i --nb-cores=1 --txd=256 --rxd=256
>      testpmd>set burst 1
>      testpmd>start tx_first 27
> @@ -668,7 +664,7 @@ Test Case 9: split virtqueue vm2vm vector_rx path test
> 
>      ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 7-8 --socket-mem 1024,1024 \
>      --no-pci \
> -    --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=1,packed_vq=0,mrg_rxbuf=0,in_order=0 \
> +    --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=1,packed_vq=0,mrg_rxbuf=0,in_order=0,vectorized=1,queue_size=256 \
>      -- -i --nb-cores=1 --txd=256 --rxd=256
>      testpmd>set burst 1
>      testpmd>start tx_first 27
> @@ -683,15 +679,15 @@ Test Case 10: packed virtqueue vm2vm vectorized path test
> 
>  1. Launch testpmd by below command::
> 
> -    ./testpmd -l 1-2 -n 4 --socket-mem 1024,1024 --no-pci \
> +    ./x86_64-native-linuxapp-gcc/app/testpmd -l 1-2 -n 4 --socket-mem 1024,1024 --no-pci \
>      --vdev 'eth_vhost0,iface=vhost-net,queues=1' --vdev 'eth_vhost1,iface=vhost-net1,queues=1' -- \
>      -i --nb-cores=1 --no-flush-rx
> 
>  2. Launch virtio-user1 by below command::
> 
> -    ./testpmd -n 4 -l 7-8 --socket-mem 1024,1024 \
> +    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 7-8 --socket-mem 1024,1024 \
>      --no-pci --file-prefix=virtio1 \
> -    --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=1,packed_vq=1,mrg_rxbuf=0,in_order=1,packed_vec=1 \
> +    --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=1,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1,queue_size=256 \
>      -- -i --nb-cores=1 --txd=256 --rxd=256
>      testpmd>set fwd rxonly
>      testpmd>start
> @@ -702,9 +698,9 @@ Test Case 10: packed virtqueue vm2vm vectorized path test
> 
>  4. Launch virtio-user0 and send 8k length packets::
> 
> -    ./testpmd -n 4 -l 5-6 --socket-mem 1024,1024 \
> +    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 5-6 --socket-mem 1024,1024 \
>      --no-pci --file-prefix=virtio \
> -    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=1,packed_vq=1,mrg_rxbuf=0,in_order=1,packed_vec=1 \
> +    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=1,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1,queue_size=256 \
>      -- -i --nb-cores=1 --txd=256 --rxd=256
>      testpmd>set burst 1
>      testpmd>start tx_first 27
> @@ -733,7 +729,7 @@ Test Case 10: packed virtqueue vm2vm vectorized path test
> 
>      ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 7-8 --socket-mem 1024,1024 \
>      --no-pci \
> -    --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=1,packed_vq=1,mrg_rxbuf=0,in_order=1,packed_vec=1 \
> +    --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=1,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1,queue_size=256 \
>      -- -i --nb-cores=1 --txd=256 --rxd=256
>      testpmd>set burst 1
>      testpmd>start tx_first 27
> @@ -741,4 +737,69 @@ Test Case 10: packed virtqueue vm2vm vectorized path test
>      testpmd>set burst 32
>      testpmd>start tx_first 7
> 
> +9. Quit pdump,vhost received packets in pdump-vhost-rx.pcap,check headers and payload of all packets in pdump-virtio-rx.pcap and pdump-vhost-rx.pcap and ensure the content are same.
> +
> +Test Case 11: packed virtqueue vm2vm vectorized path test with ring size not power of 2
> +==========================================================================================
> +
> +1. Launch testpmd by below command::
> +
> +    ./x86_64-native-linuxapp-gcc/app/testpmd -l 1-2 -n 4 --socket-mem 1024,1024 --no-pci \
> +    --vdev 'eth_vhost0,iface=vhost-net,queues=1' --vdev 'eth_vhost1,iface=vhost-net1,queues=1' -- \
> +    -i --nb-cores=1 --no-flush-rx
> +
> +2. Launch virtio-user1 by below command::
> +
> +    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 7-8 --socket-mem 1024,1024 \
> +    --no-pci --file-prefix=virtio1 \
> +    --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=1,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1,queue_size=255 \
> +    -- -i --nb-cores=1 --txd=255 --rxd=255
> +    testpmd>set fwd rxonly
> +    testpmd>start
> +
> +3. Attach pdump secondary process to primary process by same file-prefix::
> +
> +    ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio1 -- --pdump 'device_id=net_virtio_user1,queue=*,rx-dev=/root/pdump-rx.pcap,mbuf-size=8000'
> +
> +4. Launch virtio-user0 and send 8k length packets::
> +
> +    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 5-6 --socket-mem 1024,1024 \
> +    --no-pci --file-prefix=virtio \
> +    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=1,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1,queue_size=255 \
> +    -- -i --nb-cores=1 --txd=255 --rxd=255
> +    testpmd>set burst 1
> +    testpmd>start tx_first 27
> +    testpmd>stop
> +    testpmd>set burst 32
> +    testpmd>start tx_first 7
> +    testpmd>stop
> +    testpmd>set txpkts 2000,2000,2000,2000
> +    testpmd>start tx_first 1
> +
> +5. Start vhost, then quit pdump and three testpmd, get 251 packets received by virtio-user1 in pdump-virtio-rx.pcap.
> +
> +6. Launch testpmd by below command::
> +
> +    ./x86_64-native-linuxapp-gcc/app/testpmd -l 1-2 -n 4 --socket-mem 1024,1024 --no-pci --file-prefix=vhost  \
> +    --vdev 'eth_vhost1,iface=vhost-net1,queues=1' -- \
> +    -i --nb-cores=1 --no-flush-rx
> +    testpmd>set fwd rxonly
> +    testpmd>start
> +
> +7. Attach pdump secondary process to primary process by same file-prefix::
> +
> +    ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=vhost -- --pdump 'port=0,queue=*,rx-dev=/root/pdump-vhost-rx.pcap,mbuf-size=8000'
> +
> +8. Launch virtio-user1 by below command::
> +
> +    ./x86_64-native-linuxapp-gcc/app/testpmd -n 4 -l 7-8 --socket-mem 1024,1024 \
> +    --no-pci \
> +    --vdev=net_virtio_user1,mac=00:01:02:03:04:05,path=./vhost-net1,queues=1,packed_vq=1,mrg_rxbuf=0,in_order=1,vectorized=1,queue_size=255 \
> +    -- -i --nb-cores=1 --txd=255 --rxd=255
> +    testpmd>set burst 1
> +    testpmd>start tx_first 27
> +    testpmd>stop
> +    testpmd>set burst 32
> +    testpmd>start tx_first 7
> +
>  9. Quit pdump,vhost received packets in pdump-vhost-rx.pcap,check headers and payload of all packets in pdump-virtio-rx.pcap and pdump-vhost-rx.pcap and ensure the content are same.
> \ No newline at end of file
> --
> 2.17.1
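The note removed at the top of this patch listed host AVX512F support as one precondition of the packed virtqueue vectorized path. A minimal, hypothetical pre-check before running the packed ring vectorized cases, assuming a Linux host where /proc/cpuinfo is readable (this is not part of the test plan itself):

    # Hypothetical pre-check, not part of the test plan: the packed ring
    # vectorized path is only taken when the host CPU reports AVX512F.
    with open("/proc/cpuinfo") as f:
        cpu_flags = f.read()

    if "avx512f" in cpu_flags:
        print("avx512f present, packed ring vectorized cases can be exercised")
    else:
        print("host CPU lacks avx512f, the vectorized path will not be taken")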


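Step 9 of both packed ring cases asks the tester to check that the headers and payload of every packet in pdump-virtio-rx.pcap and pdump-vhost-rx.pcap are the same. A minimal sketch of that comparison, assuming scapy is installed and using the pcap names as worded in step 9 (adjust them to the rx-dev paths actually passed to dpdk-pdump):

    # Hypothetical checker, not part of the test plan: compare the two
    # captures from step 9 byte for byte (headers plus payload).
    from scapy.all import rdpcap

    virtio_pkts = rdpcap("/root/pdump-virtio-rx.pcap")
    vhost_pkts = rdpcap("/root/pdump-vhost-rx.pcap")

    assert len(virtio_pkts) == len(vhost_pkts), "packet counts differ"
    for idx, (p_virtio, p_vhost) in enumerate(zip(virtio_pkts, vhost_pkts)):
        # bytes() yields the raw frame, so headers and payload are both covered
        assert bytes(p_virtio) == bytes(p_vhost), "packet %d differs" % idx
    print("all %d packets match" % len(virtio_pkts))

Comparing the raw frame bytes covers the Ethernet header and the payload in one step, which is what the plan asks the tester to verify by hand.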