From: Yinan <yinan.wang@intel.com>
To: dts@dpdk.org
Cc: Wang Yinan <yinan.wang@intel.com>
Subject: [dts] [PATCH 07/11 v1] test_plans: add packed ring test cases for vhost_user_live_migration
Date: Fri, 28 Feb 2020 06:09:43 +0000 [thread overview]
Message-ID: <20200228060947.26001-8-yinan.wang@intel.com> (raw)
In-Reply-To: <20200228060947.26001-1-yinan.wang@intel.com>
From: Wang Yinan <yinan.wang@intel.com>
Signed-off-by: Wang Yinan <yinan.wang@intel.com>
---
.../vhost_user_live_migration_test_plan.rst | 398 +++++++++++++++++-
1 file changed, 390 insertions(+), 8 deletions(-)
diff --git a/test_plans/vhost_user_live_migration_test_plan.rst b/test_plans/vhost_user_live_migration_test_plan.rst
index ec32e82..2626f7a 100644
--- a/test_plans/vhost_user_live_migration_test_plan.rst
+++ b/test_plans/vhost_user_live_migration_test_plan.rst
@@ -35,6 +35,7 @@ Vhost User Live Migration Tests
===============================
This feature is to make sure vhost user live migration works based on testpmd.
+For the packed virtqueue tests, QEMU version > 4.2.0 is required.
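+
+A quick way to confirm the QEMU version on both servers (assumes qemu-system-x86_64 is in the PATH; adjust to your local build)::
+
+ host server# qemu-system-x86_64 --version
+ backup server# qemu-system-x86_64 --version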
Prerequisites
-------------
@@ -63,8 +64,8 @@ NFS configuration
backup# mount -t nfs -o nolock,vers=4 host-ip:/home/osimg/live_mig /mnt/nfs
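+
+If the share is not exported yet, a minimal sketch of the host-side NFS export (the export options are an assumption, adjust to the local policy)::
+
+ host# echo "/home/osimg/live_mig *(rw,sync,no_root_squash)" >> /etc/exports
+ host# exportfs -a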
-Test Case 1: migrate with virtio-pmd
-====================================
+Test Case 1: migrate with split ring virtio-pmd
+===============================================
On host server side:
@@ -163,8 +164,8 @@ On the backup server, run the vhost testpmd on the host and launch VM:
backup server # ssh -p 5555 127.0.0.1
backup VM # screen -r vm
-Test Case 2: migrate with virtio-pmd zero-copy enabled
-======================================================
+Test Case 2: migrate with split ring virtio-pmd zero-copy enabled
+=================================================================
On host server side:
@@ -263,8 +264,8 @@ On the backup server, run the vhost testpmd on the host and launch VM:
backup server # ssh -p 5555 127.0.0.1
backup VM # screen -r vm
-Test Case 3: migrate with virtio-net
-====================================
+Test Case 3: migrate with split ring virtio-net
+===============================================
On host server side:
@@ -351,8 +352,8 @@ On the backup server, run the vhost testpmd on the host and launch VM:
backup server # ssh -p 5555 127.0.0.1
backup VM # screen -r vm
-Test Case 4: adjust virtio-net queue numbers while migrating with virtio-net
-============================================================================
+Test Case 4: adjust split ring virtio-net queue numbers while migrating with virtio-net
+=======================================================================================
On host server side:
@@ -442,3 +443,384 @@ On the backup server, run the vhost testpmd on the host and launch VM:
backup server # ssh -p 5555 127.0.0.1
backup VM # screen -r vm
+
+Test Case 5: migrate with packed ring virtio-pmd
+================================================
+
+On host server side:
+
+1. Create enough hugepages for testpmd and qemu backend memory::
+
+ host server# mkdir /mnt/huge
+ host server# mount -t hugetlbfs hugetlbfs /mnt/huge
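+
+If hugepages are not reserved yet, a sketch of reserving 2MB pages (the page count is an assumption; size it for testpmd plus the 2G qemu backend)::
+
+ host server# echo 8192 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages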
+
+2. Bind host port to igb_uio and start testpmd with vhost port::
+
+ host server# ./tools/dpdk-devbind.py -b igb_uio 82:00.1
+ host server# ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xc0000 -n 4 --vdev 'eth_vhost0,iface=./vhost-net,queues=1' --socket-mem 1024,1024 -- -i
+ host server# testpmd>start
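+
+Before launching qemu, it can help to confirm that the vhost-user socket has been created (path taken from the --vdev argument above)::
+
+ host server# ls -l ./vhost-net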
+
+3. Start the VM on the host; here we use 5432 as the serial port, 3333 as the qemu monitor port, and 5555 as the SSH port::
+
+ qemu-system-x86_64 -name vm1 \
+ -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on \
+ -numa node,memdev=mem -mem-prealloc -smp 2 -cpu host -drive file=/home/osimg/live_mig/ubuntu16.img \
+ -net nic,model=e1000,addr=1f -net user,hostfwd=tcp:127.0.0.1:5555-:22 \
+ -chardev socket,id=char0,path=./vhost-net \
+ -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
+ -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,packed=on \
+ -monitor telnet::3333,server,nowait \
+ -serial telnet:localhost:5432,server,nowait \
+ -vnc :10 -daemonize
+
+On the backup server, run the vhost testpmd and launch the VM:
+
+4. Set up hugepages, bind one port to igb_uio and run testpmd on the backup server; the commands are very similar to those on the host::
+
+ backup server # mkdir /mnt/huge
+ backup server # mount -t hugetlbfs hugetlbfs /mnt/huge
+ backup server # ./tools/dpdk-devbind.py -b igb_uio 82:00.0
+ backup server # ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xc0000 -n 4 --vdev 'eth_vhost0,iface=./vhost-net,queues=1' --socket-mem 1024,1024 -- -i
+ backup server # testpmd>start
+
+5. Launch the VM on the backup server. The script is similar to the host one, but add " -incoming tcp:0:4444 " for live migration and make sure the VM image is the one in the NFS-mounted folder, i.e. the exact image used on the host server::
+
+ qemu-system-x86_64 -name vm1 \
+ -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on \
+ -numa node,memdev=mem -mem-prealloc -smp 2 -cpu host -drive file=/home/osimg/live_mig/ubuntu16.img \
+ -net nic,model=e1000,addr=1f -net user,hostfwd=tcp:127.0.0.1:5555-:22 \
+ -chardev socket,id=char0,path=./vhost-net \
+ -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
+ -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,packed=on \
+ -monitor telnet::3333,server,nowait \
+ -serial telnet:localhost:5432,server,nowait \
+ -incoming tcp:0:4444 \
+ -vnc :10 -daemonize
+
+6. SSH to host VM and scp the DPDK folder from host to VM::
+
+ host server# ssh -p 5555 127.0.0.1
+ host server# scp -P 5555 -r <dpdk_folder>/ 127.0.0.1:/root
+
+7. Run testpmd in VM::
+
+ host VM# cd /root/<dpdk_folder>
+ host VM# make -j 110 install T=x86_64-native-linuxapp-gcc
+ host VM# modprobe uio
+ host VM# insmod ./x86_64-native-linuxapp-gcc/kmod/igb_uio.ko
+ host VM# ./tools/dpdk_nic_bind.py --bind=igb_uio 00:03.0
+ host VM# echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
+ host VM# screen -S vm
+ host VM# ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x3 -n 4 -- -i
+ host VM# testpmd>set fwd rxonly
+ host VM# testpmd>set verbose 1
+ host VM# testpmd>start
+
+8. Send continuous packets with the physical port's MAC (e.g. 90:E2:BA:69:C9:C9) as the destination from the tester port::
+
+ tester# scapy
+ tester# p = Ether(dst="90:E2:BA:69:C9:C9")/IP()/UDP()/Raw('x'*20)
+ tester# sendp(p, iface="p5p1", inter=1, loop=1)
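+
+If the physical port's MAC is unknown after binding to igb_uio, it can be read from the host testpmd (the port index of the physical NIC is an assumption)::
+
+ host server# testpmd>show port info 0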
+
+9. Check that the virtio-pmd can receive the packet, then detach the screen session so it can be re-attached on the backup server::
+
+ host VM# testpmd>port 0/queue 0: received 1 packets
+ host VM# ctrl+a+d
+
+10. Start live migration and ensure the traffic is continuous::
+
+ host server # telnet localhost 3333
+ host server # (qemu)migrate -d tcp:<backup server ip>:4444
+ host server # (qemu)info migrate
+ host server # Check if the migration is active and has not failed.
+
+11. Query the migration status in the monitor; when the status shows completed, the migration is done::
+
+ host server # (qemu)info migrate
+ host server # (qemu)Migration status: completed
+
+12. After live migration, go to the backup server and check if the virtio-pmd can continue to receive packets::
+
+ backup server # ssh -p 5555 127.0.0.1
+ backup VM # screen -r vm
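+
+A quick way to confirm packets keep arriving after re-attaching the screen session (testpmd keeps running inside the migrated VM)::
+
+ backup VM # testpmd>show port stats 0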
+
+Test Case 6: migrate with packed ring virtio-pmd zero-copy enabled
+==================================================================
+
+On host server side:
+
+1. Create enough hugepages for testpmd and qemu backend memory::
+
+ host server# mkdir /mnt/huge
+ host server# mount -t hugetlbfs hugetlbfs /mnt/huge
+
+2. Bind host port to igb_uio and start testpmd with the vhost port; note: do not run the testpmd "start" command before launching qemu::
+
+ host server# ./tools/dpdk-devbind.py -b igb_uio 82:00.1
+ host server# ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xc0000 -n 4 --vdev 'eth_vhost0,iface=./vhost-net,queues=1,dequeue-zero-copy=1' --socket-mem 1024,1024 -- -i
+
+3. Start the VM on the host; here we use 5432 as the serial port, 3333 as the qemu monitor port, and 5555 as the SSH port::
+
+ qemu-system-x86_64 -name vm1 \
+ -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on \
+ -numa node,memdev=mem -mem-prealloc -smp 2 -cpu host -drive file=/home/osimg/live_mig/ubuntu16.img \
+ -net nic,model=e1000,addr=1f -net user,hostfwd=tcp:127.0.0.1:5555-:22 \
+ -chardev socket,id=char0,path=./vhost-net \
+ -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
+ -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,packed=on \
+ -monitor telnet::3333,server,nowait \
+ -serial telnet:localhost:5432,server,nowait \
+ -vnc :10 -daemonize
+
+On the backup server, run the vhost testpmd and launch the VM:
+
+4. Set up hugepages, bind one port to igb_uio and run testpmd on the backup server; the commands are very similar to those on the host::
+
+ backup server # mkdir /mnt/huge
+ backup server # mount -t hugetlbfs hugetlbfs /mnt/huge
+ backup server # ./tools/dpdk-devbind.py -b igb_uio 82:00.0
+ backup server # ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xc0000 -n 4 --vdev 'eth_vhost0,iface=./vhost-net,queues=1,dequeue-zero-copy=1' --socket-mem 1024,1024 -- -i
+
+5. Launch the VM on the backup server. The script is similar to the host one, but add " -incoming tcp:0:4444 " for live migration and make sure the VM image is the one in the NFS-mounted folder, i.e. the exact image used on the host server::
+
+ qemu-system-x86_64 -name vm1 \
+ -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on \
+ -numa node,memdev=mem -mem-prealloc -smp 2 -cpu host -drive file=/home/osimg/live_mig/ubuntu16.img \
+ -net nic,model=e1000,addr=1f -net user,hostfwd=tcp:127.0.0.1:5555-:22 \
+ -chardev socket,id=char0,path=./vhost-net \
+ -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
+ -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,packed=on \
+ -monitor telnet::3333,server,nowait \
+ -serial telnet:localhost:5432,server,nowait \
+ -incoming tcp:0:4444 \
+ -vnc :10 -daemonize
+
+6. SSH to host VM and scp the DPDK folder from host to VM::
+
+ host server# ssh -p 5555 127.0.0.1
+ host server# scp -P 5555 -r <dpdk_folder>/ 127.0.0.1:/root
+
+7. Run testpmd in VM::
+
+ host VM# cd /root/<dpdk_folder>
+ host VM# make -j 110 install T=x86_64-native-linuxapp-gcc
+ host VM# modprobe uio
+ host VM# insmod ./x86_64-native-linuxapp-gcc/kmod/igb_uio.ko
+ host VM# ./tools/dpdk_nic_bind.py --bind=igb_uio 00:03.0
+ host VM# echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
+ host VM# screen -S vm
+ host VM# ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x3 -n 4 -- -i
+ host VM# testpmd>set fwd rxonly
+ host VM# testpmd>set verbose 1
+ host VM# testpmd>start
+
+8. Start the vhost testpmd on the host, then send continuous packets with the physical port's MAC (e.g. 90:E2:BA:69:C9:C9) as the destination from the tester port::
+
+ host# testpmd>start
+ tester# scapy
+ tester# p = Ether(dst="90:E2:BA:69:C9:C9")/IP()/UDP()/Raw('x'*20)
+ tester# sendp(p, iface="p5p1", inter=1, loop=1)
+
+9. Check that the virtio-pmd can receive packets, then detach the screen session so it can be re-attached on the backup server::
+
+ host VM# testpmd>port 0/queue 0: received 1 packets
+ host VM# ctrl+a+d
+
+10. Start live migration and ensure the traffic is continuous::
+
+ host server # telnet localhost 3333
+ host server # (qemu)migrate -d tcp:<backup server ip>:4444
+ host server # (qemu)info migrate
+ host server # Check if the migration is active and has not failed.
+
+11. Query the migration status in the monitor; when the status shows completed, the migration is done::
+
+ host server # (qemu)info migrate
+ host server # (qemu)Migration status: completed
+
+12. After live migration, go to the backup server, start the vhost testpmd and check if the virtio-pmd can continue to receive packets::
+
+ backup server # testpmd>start
+ backup server # ssh -p 5555 127.0.0.1
+ backup VM # screen -r vm
+
+Test Case 7: migrate with packed ring virtio-net
+================================================
+
+On host server side:
+
+1. Create enough hugepages for testpmd and qemu backend memory::
+
+ host server# mkdir /mnt/huge
+ host server# mount -t hugetlbfs hugetlbfs /mnt/huge
+
+2. Bind host port to igb_uio and start testpmd with vhost port::
+
+ host server# ./tools/dpdk-devbind.py -b igb_uio 82:00.1
+ host server# ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xc0000 -n 4 --vdev 'eth_vhost0,iface=./vhost-net,queues=1' --socket-mem 1024,1024 -- -i
+ host server# testpmd>start
+
+3. Start the VM on the host; here we use 5432 as the serial port, 3333 as the qemu monitor port, and 5555 as the SSH port::
+
+ qemu-system-x86_64 -name vm1 \
+ -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on \
+ -numa node,memdev=mem -mem-prealloc -smp 2 -cpu host -drive file=/home/osimg/live_mig/ubuntu16.img \
+ -net nic,model=e1000,addr=1f -net user,hostfwd=tcp:127.0.0.1:5555-:22 \
+ -chardev socket,id=char0,path=./vhost-net \
+ -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
+ -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,packed=on \
+ -monitor telnet::3333,server,nowait \
+ -serial telnet:localhost:5432,server,nowait \
+ -vnc :10 -daemonize
+
+On the backup server, run the vhost testpmd and launch the VM:
+
+4. Set up hugepages, bind one port to igb_uio and run testpmd on the backup server; the commands are very similar to those on the host::
+
+ backup server # mkdir /mnt/huge
+ backup server # mount -t hugetlbfs hugetlbfs /mnt/huge
+ backup server # ./tools/dpdk-devbind.py -b igb_uio 82:00.0
+ backup server # ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xc0000 -n 4 --vdev 'eth_vhost0,iface=./vhost-net,queues=1' --socket-mem 1024,1024 -- -i
+ backup server # testpmd>start
+
+5. Launch the VM on the backup server. The script is similar to the host one, but add " -incoming tcp:0:4444 " for live migration and make sure the VM image is the one in the NFS-mounted folder, i.e. the exact image used on the host server::
+
+ qemu-system-x86_64 -name vm1 \
+ -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on \
+ -numa node,memdev=mem -mem-prealloc -smp 2 -cpu host -drive file=/home/osimg/live_mig/ubuntu16.img \
+ -net nic,model=e1000,addr=1f -net user,hostfwd=tcp:127.0.0.1:5555-:22 \
+ -chardev socket,id=char0,path=./vhost-net \
+ -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
+ -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,packed=on \
+ -monitor telnet::3333,server,nowait \
+ -serial telnet:localhost:5432,server,nowait \
+ -incoming tcp:0:4444 \
+ -vnc :10 -daemonize
+
+6. SSH to the host VM and bring the virtio-net link up::
+
+ host server# ssh -p 5555 127.0.0.1
+ host VM# ifconfig eth0 up
+ host VM# screen -S vm
+ host VM# tcpdump -i eth0
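+
+Note: the guest interface name may differ from eth0 (e.g. ens3); a quick check before bringing the link up (assumes iproute2 is available in the guest)::
+
+ host VM# ip link show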
+
+7. Send continuous packets with the physical port's MAC (e.g. 90:E2:BA:69:C9:C9) as the destination from the tester port::
+
+ tester# scapy
+ tester# p = Ether(dst="90:E2:BA:69:C9:C9")/IP()/UDP()/Raw('x'*20)
+ tester# sendp(p, iface="p5p1", inter=1, loop=1)
+
+8. Check that the virtio-net can receive the packets in the tcpdump output, then detach the screen session so it can be re-attached on the backup server::
+
+ host VM# check packets are received in the tcpdump output
+ host VM# ctrl+a+d
+
+9. Start live migration and ensure the traffic is continuous::
+
+ host server # telnet localhost 3333
+ host server # (qemu)migrate -d tcp:<backup server ip>:4444
+ host server # (qemu)info migrate
+ host server # Check if the migration is active and has not failed.
+
+10. Query the migration status in the monitor; when the status shows completed, the migration is done::
+
+ host server # (qemu)info migrate
+ host server # (qemu)Migration status: completed
+
+11. After live migration, go to the backup server and check if the virtio-net can continue to receive packets::
+
+ backup server # ssh -p 5555 127.0.0.1
+ backup VM # screen -r vm
+
+Test Case 8: adjust packed ring virtio-net queue numbers while migrating with virtio-net
+=========================================================================================
+
+On host server side:
+
+1. Create enough hugepages for testpmd and qemu backend memory::
+
+ host server# mkdir /mnt/huge
+ host server# mount -t hugetlbfs hugetlbfs /mnt/huge
+
+2. Bind host port to igb_uio and start testpmd with vhost port::
+
+ host server# ./tools/dpdk-devbind.py -b igb_uio 82:00.1
+ host server# ./x86_64-native-linuxapp-gcc/app/testpmd -l 2-6 -n 4 --vdev 'net_vhost0,iface=./vhost-net,queues=4' --socket-mem 1024,1024 -- -i --nb-cores=4 --rxq=4 --txq=4
+ host server# testpmd>start
+
+3. Start the VM on the host; here we use 5432 as the serial port, 3333 as the qemu monitor port, and 5555 as the SSH port::
+
+ qemu-system-x86_64 -name vm1 \
+ -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on \
+ -numa node,memdev=mem -mem-prealloc -smp 2 -cpu host -drive file=/home/osimg/live_mig/ubuntu16.img \
+ -net nic,model=e1000,addr=1f -net user,hostfwd=tcp:127.0.0.1:5555-:22 \
+ -chardev socket,id=char0,path=./vhost-net \
+ -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=4 \
+ -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,mrg_rxbuf=on,mq=on,vectors=10,packed=on \
+ -monitor telnet::3333,server,nowait \
+ -serial telnet:localhost:5432,server,nowait \
+ -vnc :10 -daemonize
+
+On the backup server, run the vhost testpmd and launch the VM:
+
+4. Set up hugepages, bind one port to igb_uio and run testpmd on the backup server; the commands are very similar to those on the host::
+
+ backup server # mkdir /mnt/huge
+ backup server # mount -t hugetlbfs hugetlbfs /mnt/huge
+ backup server # ./tools/dpdk-devbind.py -b igb_uio 82:00.0
+ backup server # ./x86_64-native-linuxapp-gcc/app/testpmd -l 2-6 -n 4 --vdev 'net_vhost0,iface=./vhost-net,queues=4' --socket-mem 1024,1024 -- -i --nb-cores=4 --rxq=4 --txq=4
+ backup server # testpmd>start
+
+5. Launch the VM on the backup server. The script is similar to the host one, but add " -incoming tcp:0:4444 " for live migration and make sure the VM image is the one in the NFS-mounted folder, i.e. the exact image used on the host server::
+
+ qemu-system-x86_64 -name vm1 \
+ -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on \
+ -numa node,memdev=mem -mem-prealloc -smp 2 -cpu host -drive file=/home/osimg/live_mig/ubuntu16.img \
+ -net nic,model=e1000,addr=1f -net user,hostfwd=tcp:127.0.0.1:5555-:22 \
+ -chardev socket,id=char0,path=./vhost-net \
+ -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=4 \
+ -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,mrg_rxbuf=on,mq=on,vectors=10,packed=on \
+ -monitor telnet::3333,server,nowait \
+ -serial telnet:localhost:5432,server,nowait \
+ -incoming tcp:0:4444 \
+ -vnc :10 -daemonize
+
+6. SSH to the host VM and bring the virtio-net link up::
+
+ host server# ssh -p 5555 127.0.0.1
+ host VM# ifconfig eth0 up
+ host VM# screen -S vm
+ host VM# tcpdump -i eth0
+
+7. Send continuous packets with the physical port's MAC (e.g. 90:E2:BA:69:C9:C9) as the destination from the tester port::
+
+ tester# scapy
+ tester# p = Ether(dst="90:E2:BA:69:C9:C9")/IP()/UDP()/Raw('x'*20)
+ tester# sendp(p, iface="p5p1", inter=1, loop=1)
+
+8. Check that the virtio-net can receive the packets in the tcpdump output, then detach the screen session so it can be re-attached on the backup server::
+
+ host VM# check packets are received in the tcpdump output
+ host VM# ctrl+a+d
+
+9. Start live migration and ensure the traffic is continuous::
+
+ host server # telnet localhost 3333
+ host server # (qemu)migrate -d tcp:<backup server ip>:4444
+ host server # (qemu)info migrate
+ host server # Check if the migration is active and has not failed.
+
+10. Change virtio-net queue numbers from 1 to 4 while migrating::
+
+ host server # ethtool -L ens3 combined 4
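+
+To confirm the queue number change took effect, an optional check (run on the same node as the command above)::
+
+ host server # ethtool -l ens3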
+
+11. Query the migration status in the monitor; when the status shows completed, the migration is done::
+
+ host server # (qemu)info migrate
+ host server # (qemu)Migration status: completed
+
+12. After live migration, go to the backup server and check if the virtio-net can continue to receive packets::
+
+ backup server # ssh -p 5555 127.0.0.1
+ backup VM # screen -r vm
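+
+Optionally, confirm inside the backup VM that the queue number change survived the migration (this check is an assumption, mirroring step 10)::
+
+ backup VM # ethtool -l ens3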
+
--
2.17.1