test suite reviews and discussions
* Re: [dts] [PATCH v1] test_plans/vhost_cbdma_test_plan: add one cbdma performance case
  2021-06-09 11:46 [dts] [PATCH v1] test_plans/vhost_cbdma_test_plan: add one cbdma performance case Yinan Wang
@ 2021-06-09  8:26 ` Tu, Lijuan
  0 siblings, 0 replies; 2+ messages in thread
From: Tu, Lijuan @ 2021-06-09  8:26 UTC (permalink / raw)
  To: Wang, Yinan, dts; +Cc: Wang, Yinan



> -----Original Message-----
> From: dts <dts-bounces@dpdk.org> On Behalf Of Yinan Wang
> Sent: 2021年6月9日 19:47
> To: dts@dpdk.org
> Cc: Wang, Yinan <yinan.wang@intel.com>
> Subject: [dts] [PATCH v1] test_plans/vhost_cbdma_test_plan: add one cbdma
> performance case
> 

I see two major changes in your patch: one adds a test case, but it also modifies other cases that are not mentioned in your commit message. Please refine it.

> Signed-off-by: Yinan Wang <yinan.wang@intel.com>
> ---
>  test_plans/vhost_cbdma_test_plan.rst | 65 +++++++++++++++++++++++++++-
>  1 file changed, 63 insertions(+), 2 deletions(-)
> 
> diff --git a/test_plans/vhost_cbdma_test_plan.rst
> b/test_plans/vhost_cbdma_test_plan.rst
> index c827adaa..ce0fdc3e 100644
> --- a/test_plans/vhost_cbdma_test_plan.rst
> +++ b/test_plans/vhost_cbdma_test_plan.rst
> @@ -73,7 +73,7 @@ TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG
> 
>  1. Bind one cbdma port and one nic port to igb_uio, then launch vhost by below
> command::
> 
> -    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=1,dmas=[txq0@80:04.0],dmathr=1024' \
> +    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=1,dmas=[txq0@80:01.0],dmathr=1024' \
>      -- -i --nb-cores=1 --txd=1024 --rxd=1024

Why change the PCI address?

>      >set fwd mac
>      >start
> @@ -145,7 +145,7 @@ Test Case2: Split ring dynamic queue number test for DMA-accelerated vhost Tx op
>      >set fwd mac
>      >start
> 
> -3. Send packets with packet size [64,1518] from packet generator with random ip, check perforamnce can get target.
> +3. Send imix packets from packet generator with random ip, check performance can get target.
> 
>  4. Stop vhost port, check vhost RX and TX direction both exist packtes in two queues from vhost log.
> 
> @@ -355,3 +355,64 @@ Test Case5: Packed ring dynamic queue number test for DMA-accelerated vhost Tx o
>       >start
> 
>  11. Stop vhost port, check vhost RX and TX direction both exist packets in two queues from vhost log.
> +
> +Test Case 6: Compare PVP split ring performance between CPU copy, CBDMA copy and Sync copy
> +==========================================================================================
> +
> +CPU copy means vhost enqueue without a CBDMA channel; CBDMA copy means vhost enqueue with a CBDMA channel enabled by the 'dmas' parameter; Sync copy means vhost enqueue with a CBDMA channel whose threshold (adjustable by changing the value of f.async_threshold in the DPDK code) is larger than the forwarded packet length.
> +
> +1. Bind one cbdma port and one nic port which are on the same NUMA node to igb_uio, then launch vhost by below command::
> +
> +    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=1,client=1,dmas=[txq0@00:01.0],dmathr=1024' \
> +    -- -i --nb-cores=1 --txd=1024 --rxd=1024
> +    >set fwd mac
> +    >start
> +
> +2. Launch virtio-user with inorder mergeable path::
> +
> +    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
> +    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,queues=1,server=1 \
> +    -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --nb-cores=1 --txd=1024 --rxd=1024
> +    >set fwd mac
> +    >start
> +
> +3. Send packets with 64b and 1518b separately from packet generator, record the throughput as sync copy throughput for 64b and cbdma copy for 1518b::
> +
> +    testpmd>show port stats all
> +
> +4. Quit vhost side, relaunch with below command::
> +
> +    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=1,client=1,dmas=[txq0@00:01.0],dmathr=2000' \
> +    -- -i --nb-cores=1 --txd=1024 --rxd=1024
> +    >set fwd mac
> +    >start
> +
> +5. Send packets with 1518b from packet generator, record the throughput as sync copy throughput for 1518b::
> +
> +    testpmd>show port stats all
> +
> +6. Quit two testpmd, relaunch vhost by below command::
> +
> +    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=1' \
> +    -- -i --nb-cores=1 --txd=1024 --rxd=1024
> +    >set fwd mac
> +    >start
> +
> +7. Launch virtio-user with inorder mergeable path::
> +
> +    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
> +    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,queues=1 \
> +    -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --nb-cores=1 --txd=1024 --rxd=1024
> +    >set fwd mac
> +    >start
> +
> +8. Send packets with 64b from packet generator, record the throughput as cpu copy for 64b::
> +
> +    testpmd>show port stats all
> +
> +9. Check performance can meet below requirement::
> +
> +   (1) CPU copy vs. sync copy delta < 10% for 64B packet size
> +   (2) CBDMA copy vs. sync copy delta > 5% for 1518B packet size
> --
> 2.25.1
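For reference when reviewing the new case: the copy-path selection that steps 3-5 rely on can be sketched as below. This is only an illustration of the rule stated in the test plan (a packet shorter than the dmathr threshold is copied synchronously by the CPU, otherwise the copy is offloaded to the CBDMA engine), not actual DPDK code; the function name is made up for the sketch.

```python
# Illustration of the dmathr rule described in the test plan (not DPDK code):
# packets shorter than the threshold take the CPU sync-copy path,
# packets at or above it are offloaded to the CBDMA engine.
def copy_path(pkt_len: int, dmathr: int) -> str:
    return "cpu_sync" if pkt_len < dmathr else "cbdma"

# Step 3: dmathr=1024 -> 64B packets take the sync path, 1518B take CBDMA.
assert copy_path(64, 1024) == "cpu_sync"
assert copy_path(1518, 1024) == "cbdma"
# Steps 4-5: dmathr=2000 -> even 1518B packets take the sync path.
assert copy_path(1518, 2000) == "cpu_sync"
```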



* [dts] [PATCH v1] test_plans/vhost_cbdma_test_plan: add one cbdma performance case
@ 2021-06-09 11:46 Yinan Wang
  2021-06-09  8:26 ` Tu, Lijuan
  0 siblings, 1 reply; 2+ messages in thread
From: Yinan Wang @ 2021-06-09 11:46 UTC (permalink / raw)
  To: dts; +Cc: Yinan Wang

Signed-off-by: Yinan Wang <yinan.wang@intel.com>
---
 test_plans/vhost_cbdma_test_plan.rst | 65 +++++++++++++++++++++++++++-
 1 file changed, 63 insertions(+), 2 deletions(-)

diff --git a/test_plans/vhost_cbdma_test_plan.rst b/test_plans/vhost_cbdma_test_plan.rst
index c827adaa..ce0fdc3e 100644
--- a/test_plans/vhost_cbdma_test_plan.rst
+++ b/test_plans/vhost_cbdma_test_plan.rst
@@ -73,7 +73,7 @@ TG --> NIC --> Vhost --> Virtio--> Vhost --> NIC --> TG
 
 1. Bind one cbdma port and one nic port to igb_uio, then launch vhost by below command::
 
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=1,dmas=[txq0@80:04.0],dmathr=1024' \
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=1,dmas=[txq0@80:01.0],dmathr=1024' \
     -- -i --nb-cores=1 --txd=1024 --rxd=1024
     >set fwd mac
     >start
@@ -145,7 +145,7 @@ Test Case2: Split ring dynamic queue number test for DMA-accelerated vhost Tx op
     >set fwd mac
     >start
 
-3. Send packets with packet size [64,1518] from packet generator with random ip, check perforamnce can get target.
+3. Send imix packets from packet generator with random ip, check performance can get target.
 
 4. Stop vhost port, check vhost RX and TX direction both exist packtes in two queues from vhost log.
 
@@ -355,3 +355,64 @@ Test Case5: Packed ring dynamic queue number test for DMA-accelerated vhost Tx o
      >start
 
 11. Stop vhost port, check vhost RX and TX direction both exist packets in two queues from vhost log.
+
+Test Case 6: Compare PVP split ring performance between CPU copy, CBDMA copy and Sync copy
+==========================================================================================
+
+CPU copy means vhost enqueue without a CBDMA channel; CBDMA copy means vhost enqueue with a CBDMA
+channel enabled by the 'dmas' parameter; Sync copy means vhost enqueue with a CBDMA channel whose
+threshold (adjustable by changing the value of f.async_threshold in the DPDK code) is larger than the forwarded packet length.
+
+1. Bind one cbdma port and one nic port which are on the same NUMA node to igb_uio, then launch vhost by below command::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=1,client=1,dmas=[txq0@00:01.0],dmathr=1024' \
+    -- -i --nb-cores=1 --txd=1024 --rxd=1024
+    >set fwd mac
+    >start
+
+2. Launch virtio-user with inorder mergeable path::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,queues=1,server=1 \
+    -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --nb-cores=1 --txd=1024 --rxd=1024
+    >set fwd mac
+    >start
+
+3. Send packets with 64b and 1518b separately from packet generator, record the throughput as sync copy throughput for 64b and cbdma copy for 1518b::
+
+    testpmd>show port stats all
+
+4. Quit vhost side, relaunch with below command::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=1,client=1,dmas=[txq0@00:01.0],dmathr=2000' \
+    -- -i --nb-cores=1 --txd=1024 --rxd=1024
+    >set fwd mac
+    >start
+
+5. Send packets with 1518b from packet generator, record the throughput as sync copy throughput for 1518b::
+
+    testpmd>show port stats all
+
+6. Quit two testpmd, relaunch vhost by below command::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 2-3 --file-prefix=vhost --vdev 'net_vhost0,iface=/tmp/s0,queues=1' \
+    -- -i --nb-cores=1 --txd=1024 --rxd=1024
+    >set fwd mac
+    >start
+
+7. Launch virtio-user with inorder mergeable path::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -n 4 -l 5-6 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=/tmp/s0,mrg_rxbuf=1,in_order=1,queues=1 \
+    -- -i --tx-offloads=0x0 --enable-hw-vlan-strip --nb-cores=1 --txd=1024 --rxd=1024
+    >set fwd mac
+    >start
+
+8. Send packets with 64b from packet generator, record the throughput as cpu copy for 64b::
+
+    testpmd>show port stats all
+
+9. Check performance can meet below requirement::
+
+   (1) CPU copy vs. sync copy delta < 10% for 64B packet size
+   (2) CBDMA copy vs. sync copy delta > 5% for 1518B packet size
-- 
2.25.1
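The pass criteria in step 9 amount to a simple relative-delta check on the throughputs recorded in steps 3, 5 and 8. A minimal sketch follows; the throughput numbers are hypothetical placeholders standing in for values read from `show port stats all`, not measured results.

```python
# Hypothetical throughput values (e.g. Mpps) standing in for the numbers
# recorded via `show port stats all` in steps 3, 5 and 8 of Test Case 6.
cpu_copy_64b, sync_copy_64b = 7.0, 6.6
cbdma_copy_1518b, sync_copy_1518b = 3.3, 3.0

def delta_pct(measured: float, baseline: float) -> float:
    # Relative difference of `measured` vs. `baseline`, in percent.
    return abs(measured - baseline) / baseline * 100.0

# (1) CPU copy vs. sync copy delta < 10% for 64B packets.
assert delta_pct(cpu_copy_64b, sync_copy_64b) < 10.0
# (2) CBDMA copy vs. sync copy delta > 5% for 1518B packets.
assert delta_pct(cbdma_copy_1518b, sync_copy_1518b) > 5.0
```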


