From mboxrd@z Thu Jan 1 00:00:00 1970
From: Wei Ling
To: dts@dpdk.org
Cc: Wei Ling
Subject: [dts][PATCH V1 1/2] test_plans/pvp_vhost_user_reconnect_test_plan: adjust testplan's format
Date: Fri, 23 Dec 2022 13:47:22 +0800
Message-Id: <20221223054722.754772-1-weix.ling@intel.com>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Adjust testplan's format. Signed-off-by: Wei Ling --- .../pvp_vhost_user_reconnect_test_plan.rst | 178 +++++++++++------- 1 file changed, 112 insertions(+), 66 deletions(-) diff --git a/test_plans/pvp_vhost_user_reconnect_test_plan.rst b/test_plans/pvp_vhost_user_reconnect_test_plan.rst index 6877aec4..ee9d136a 100644 --- a/test_plans/pvp_vhost_user_reconnect_test_plan.rst +++ b/test_plans/pvp_vhost_user_reconnect_test_plan.rst @@ -26,13 +26,15 @@ Vhost-user uses Unix domain sockets for passing messages. This means the DPDK vh Note that QEMU version v2.7 or above is required for split ring cases, and QEMU version v4.2.0 or above is required for packed ring cases. -Test Case1: vhost-user/virtio-pmd pvp split ring reconnect from vhost-user -========================================================================== +Test Case 1: vhost-user/virtio-pmd pvp split ring reconnect from vhost-user +=========================================================================== Flow: TG--> NIC --> Vhost --> Virtio --> Vhost--> NIC--> TG -1. Bind one port to vfio-pci, then launch vhost with client mode by below commands:: +1. Bind 1 NIC port to vfio-pci, then launch vhost with client mode by below commands:: - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x30 -n 4 --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=1' -- -i --nb-cores=1 + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x30 -n 4 \ + --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=1' \ + -- -i --nb-cores=1 testpmd>set fwd mac testpmd>start @@ -52,7 +54,8 @@ Flow: TG--> NIC --> Vhost --> Virtio --> Vhost--> NIC--> TG 3.
On VM, bind virtio net to vfio-pci and run testpmd:: - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 -- -i --nb-cores=1 --txd=1024 --rxd=1024 + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 \ + -- -i --nb-cores=1 --txd=1024 --rxd=1024 testpmd>set fwd mac testpmd>start @@ -63,7 +66,9 @@ Flow: TG--> NIC --> Vhost --> Virtio --> Vhost--> NIC--> TG 5. On host, quit vhost-user, then re-launch the vhost-user with below command:: testpmd>quit - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x30 -n 4 --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=1' -- -i --nb-cores=1 + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x30 -n 4 \ + --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=1' \ + -- -i --nb-cores=1 testpmd>set fwd mac testpmd>start @@ -71,13 +76,15 @@ Flow: TG--> NIC --> Vhost --> Virtio --> Vhost--> NIC--> TG testpmd>show port stats all -Test Case2: vhost-user/virtio-pmd pvp split ring reconnect from VM -================================================================== +Test Case 2: vhost-user/virtio-pmd pvp split ring reconnect from VM +=================================================================== Flow: TG--> NIC --> Vhost --> Virtio --> Vhost--> NIC--> TG -1. Bind one port to vfio-pci, then launch vhost with client mode by below commands:: +1. Bind 1 NIC port to vfio-pci, then launch vhost with client mode by below commands:: - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x30 -n 4 --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=1' -- -i --nb-cores=1 + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x30 -n 4 \ + --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=1' \ + -- -i --nb-cores=1 testpmd>set fwd mac testpmd>start @@ -107,22 +114,24 @@ Flow: TG--> NIC --> Vhost --> Virtio --> Vhost--> NIC--> TG 5. Reboot the VM, rerun step2-step4, check the reconnection can be established. 
-Test Case3: vhost-user/virtio-pmd pvp split ring reconnect stability test -========================================================================= +Test Case 3: vhost-user/virtio-pmd pvp split ring reconnect stability test +========================================================================== Flow: TG--> NIC --> Vhost --> Virtio --> Vhost--> NIC--> TG -Similar as Test Case1, all steps are similar except step 5, 6. +Similar to Test Case 1; all steps are the same except steps 5 and 6. -5. Quit vhost-user, then re-launch, repeat it 5-8 times, check if the reconnect can work and ensure the traffic can continue. +5. Quit vhost-user, then re-launch, repeat it 5 times, check if the reconnect can work and ensure the traffic can continue. -6. Reboot VM, then re-launch VM, repeat it 3-5 times, check if the reconnect can work and ensure the traffic can continue. +6. Reboot VM, then re-launch VM, repeat it 5 times, check if the reconnect can work and ensure the traffic can continue. Test Case 4: vhost-user/virtio-pmd pvp split ring with multi VMs reconnect from vhost-user ========================================================================================== -1. Bind one port to vfio-pci, launch the vhost by below command:: +1. Bind 1 NIC port to vfio-pci, launch the vhost by below command:: - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x30 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1' -- -i --port-topology=chained --nb-cores=1 --txd=1024 --rxd=1024 + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x30 -n 4 --file-prefix=vhost \ + --vdev 'net_vhost0,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1' \ + -- -i --port-topology=chained --nb-cores=1 --txd=1024 --rxd=1024 testpmd>set fwd mac testpmd>start @@ -154,13 +163,15 @@ Test Case 4: vhost-user/virtio-pmd pvp split ring with multi VMs reconnect from 3.
On VM1, bind virtio1 to vfio-pci and run testpmd:: - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 -- -i --port-topology=chained --nb-cores=1 --txd=1024 --rxd=1024 + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 \ + -- -i --port-topology=chained --nb-cores=1 --txd=1024 --rxd=1024 testpmd>set fwd mac testpmd>start 4. On VM2, bind virtio2 to vfio-pci and run testpmd:: - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 -- -i --port-topology=chained --nb-cores=1 --txd=1024 --rxd=1024 + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 \ + -- -i --port-topology=chained --nb-cores=1 --txd=1024 --rxd=1024 testpmd>set fwd mac testpmd>start @@ -171,7 +182,9 @@ Test Case 4: vhost-user/virtio-pmd pvp split ring with multi VMs reconnect from 6. On host, quit vhost-user, then re-launch the vhost-user with below command:: testpmd>quit - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x30 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1' -- -i --port-topology=chained --nb-cores=1 --txd=1024 --rxd=1024 + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x30 -n 4 --file-prefix=vhost \ + --vdev 'net_vhost0,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1' \ + -- -i --port-topology=chained --nb-cores=1 --txd=1024 --rxd=1024 testpmd>set fwd mac testpmd>start @@ -182,9 +195,11 @@ Test Case 4: vhost-user/virtio-pmd pvp split ring with multi VMs reconnect from Test Case 5: vhost-user/virtio-pmd pvp split ring with multi VMs reconnect from VMs =================================================================================== -1. Bind one port to vfio-pci, launch the vhost by below command:: +1. 
Bind 1 NIC port to vfio-pci, launch the vhost by below command:: - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x30 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1' -- -i --port-topology=chained --nb-cores=1 --txd=1024 --rxd=1024 + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x30 -n 4 --file-prefix=vhost \ + --vdev 'net_vhost0,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1' \ + -- -i --port-topology=chained --nb-cores=1 --txd=1024 --rxd=1024 testpmd>set fwd mac testpmd>start @@ -216,13 +231,15 @@ Test Case 5: vhost-user/virtio-pmd pvp split ring with multi VMs reconnect from 3. On VM1, bind virtio1 to vfio-pci and run testpmd:: - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 -- -i --port-topology=chained --nb-cores=1 --txd=1024 --rxd=1024 + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 \ + -- -i --port-topology=chained --nb-cores=1 --txd=1024 --rxd=1024 testpmd>set fwd mac testpmd>start 4. On VM2, bind virtio2 to vfio-pci and run testpmd:: - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 -- -i --port-topology=chained --port-topology=chain --nb-cores=1 --txd=1024 --rxd=1024 + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 \ + -- -i --port-topology=chained --nb-cores=1 --txd=1024 --rxd=1024 testpmd>set fwd mac testpmd>start @@ -239,11 +256,11 @@ Test Case 5: vhost-user/virtio-pmd pvp split ring with multi VMs reconnect from Test Case 6: vhost-user/virtio-pmd pvp split ring with multi VMs reconnect stability test ========================================================================================= -Similar as Test Case 4, all steps are similar except step 6, 7. +Similar to Test Case 4; all steps are the same except steps 6 and 7. -6. Quit vhost-user, then re-launch, repeat it 5-8 times, check if the reconnect can work and ensure the traffic can continue. +6.
Quit vhost-user, then re-launch, repeat it 5 times, check if the reconnect can work and ensure the traffic can continue. -7. Reboot VMs, then re-launch VMs, repeat it 3-5 times, check if the reconnect can work and ensure the traffic can continue. +7. Reboot VMs, then re-launch VMs, repeat it 5 times, check if the reconnect can work and ensure the traffic can continue. Test Case 7: vhost-user/virtio-net VM2VM split ring reconnect from vhost-user ============================================================================= @@ -251,7 +268,9 @@ Flow: Virtio-net1 --> Vhost-user --> Virtio-net2 1. Launch the vhost by below commands, enable the client mode and tso:: - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x30 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024 + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x30 -n 4 --no-pci --file-prefix=vhost \ + --vdev 'net_vhost,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1' \ + -- -i --nb-cores=1 --txd=1024 --rxd=1024 testpmd>start 3. Launch VM1 and VM2:: @@ -295,7 +314,9 @@ Flow: Virtio-net1 --> Vhost-user --> Virtio-net2 6. Kill the vhost-user, then re-launch the vhost-user:: testpmd>quit - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x30 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024 + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x30 -n 4 --no-pci --file-prefix=vhost \ + --vdev 'net_vhost,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1' \ + -- -i --nb-cores=1 --txd=1024 --rxd=1024 testpmd>start 7. Rerun step5, ensure the vhost-user can reconnect to VM again, and the iperf traffic can be continue. 
@@ -306,7 +327,9 @@ Flow: Virtio-net1 --> Vhost-user --> Virtio-net2 1. Launch the vhost by below commands, enable the client mode and tso:: - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x30 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024 + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x30 -n 4 --no-pci --file-prefix=vhost \ + --vdev 'net_vhost,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1' \ + -- -i --nb-cores=1 --txd=1024 --rxd=1024 testpmd>start 3. Launch VM1 and VM2:: @@ -353,19 +376,21 @@ Test Case 9: vhost-user/virtio-net VM2VM split ring reconnect stability test ============================================================================ Flow: Virtio-net1 --> Vhost-user --> Virtio-net2 -Similar as Test Case 7, all steps are similar except step 6. +Similar to Test Case 7; all steps are the same except step 6. -6. Quit vhost-user, then re-launch, repeat it 5-8 times, check if the reconnect can work and ensure the traffic can continue. +6. Quit vhost-user, then re-launch, repeat it 5 times, check if the reconnect can work and ensure the traffic can continue. -7. Reboot two VMs, then re-launch VMs, repeat it 3-5 times, check if the reconnect can work and ensure the traffic can continue. +7. Reboot two VMs, then re-launch VMs, repeat it 5 times, check if the reconnect can work and ensure the traffic can continue. -Test Case10: vhost-user/virtio-pmd pvp packed ring reconnect from vhost-user -============================================================================ +Test Case 10: vhost-user/virtio-pmd pvp packed ring reconnect from vhost-user +============================================================================= Flow: TG--> NIC --> Vhost --> Virtio --> Vhost--> NIC--> TG -1. Bind one port to vfio-pci, then launch vhost with client mode by below commands:: +1.
Bind 1 NIC port to vfio-pci, then launch vhost with client mode by below commands:: - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x30 -n 4 --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=1' -- -i --nb-cores=1 + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x30 -n 4 \ + --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=1' \ + -- -i --nb-cores=1 testpmd>set fwd mac testpmd>start @@ -385,7 +410,8 @@ Flow: TG--> NIC --> Vhost --> Virtio --> Vhost--> NIC--> TG 3. On VM, bind virtio net to vfio-pci and run testpmd:: - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 -- -i --nb-cores=1 --txd=1024 --rxd=1024 + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 \ + -- -i --nb-cores=1 --txd=1024 --rxd=1024 testpmd>set fwd mac testpmd>start @@ -396,7 +422,9 @@ Flow: TG--> NIC --> Vhost --> Virtio --> Vhost--> NIC--> TG 5. On host, quit vhost-user, then re-launch the vhost-user with below command:: testpmd>quit - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x30 -n 4 --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=1' -- -i --nb-cores=1 + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x30 -n 4 \ + --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=1' \ + -- -i --nb-cores=1 testpmd>set fwd mac testpmd>start @@ -404,13 +432,15 @@ Flow: TG--> NIC --> Vhost --> Virtio --> Vhost--> NIC--> TG testpmd>show port stats all -Test Case11: vhost-user/virtio-pmd pvp packed ring reconnect from VM -==================================================================== +Test Case 11: vhost-user/virtio-pmd pvp packed ring reconnect from VM +===================================================================== Flow: TG--> NIC --> Vhost --> Virtio --> Vhost--> NIC--> TG -1. Bind one port to vfio-pci, then launch vhost with client mode by below commands:: +1. 
Bind 1 NIC port to vfio-pci, then launch vhost with client mode by below commands:: - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x30 -n 4 --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=1' -- -i --nb-cores=1 + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x30 -n 4 \ + --vdev 'eth_vhost0,iface=vhost-net,client=1,queues=1' \ + -- -i --nb-cores=1 testpmd>set fwd mac testpmd>start @@ -440,22 +470,24 @@ Flow: TG--> NIC --> Vhost --> Virtio --> Vhost--> NIC--> TG 5. Reboot the VM, rerun step2-step4, check the reconnection can be established. -Test Case12: vhost-user/virtio-pmd pvp packed ring reconnect stability test -=========================================================================== +Test Case 12: vhost-user/virtio-pmd pvp packed ring reconnect stability test +============================================================================ Flow: TG--> NIC --> Vhost --> Virtio --> Vhost--> NIC--> TG -Similar as Test Case1, all steps are similar except step 5, 6. +Similar to Test Case 1; all steps are the same except steps 5 and 6. -5. Quit vhost-user, then re-launch, repeat it 5-8 times, check if the reconnect can work and ensure the traffic can continue. +5. Quit vhost-user, then re-launch, repeat it 5 times, check if the reconnect can work and ensure the traffic can continue. -6. Reboot VM, then re-launch VM, repeat it 3-5 times, check if the reconnect can work and ensure the traffic can continue. +6. Reboot VM, then re-launch VM, repeat it 5 times, check if the reconnect can work and ensure the traffic can continue. Test Case 13: vhost-user/virtio-pmd pvp packed ring with multi VMs reconnect from vhost-user ============================================================================================ -1. Bind one port to vfio-pci, launch the vhost by below command:: +1.
Bind 1 NIC port to vfio-pci, launch the vhost by below command:: - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x30 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1' -- -i --port-topology=chained --nb-cores=1 --txd=1024 --rxd=1024 + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x30 -n 4 --file-prefix=vhost \ + --vdev 'net_vhost0,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1' \ + -- -i --port-topology=chained --nb-cores=1 --txd=1024 --rxd=1024 testpmd>set fwd mac testpmd>start @@ -487,13 +519,15 @@ Test Case 13: vhost-user/virtio-pmd pvp packed ring with multi VMs reconnect fro 3. On VM1, bind virtio1 to vfio-pci and run testpmd:: - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 -- -i --port-topology=chained --nb-cores=1 --txd=1024 --rxd=1024 + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 \ + -- -i --port-topology=chained --nb-cores=1 --txd=1024 --rxd=1024 testpmd>set fwd mac testpmd>start 4. On VM2, bind virtio2 to vfio-pci and run testpmd:: - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 -- -i --port-topology=chained --nb-cores=1 --txd=1024 --rxd=1024 + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 \ + -- -i --port-topology=chained --nb-cores=1 --txd=1024 --rxd=1024 testpmd>set fwd mac testpmd>start @@ -504,7 +538,9 @@ Test Case 13: vhost-user/virtio-pmd pvp packed ring with multi VMs reconnect fro 6. 
On host, quit vhost-user, then re-launch the vhost-user with below command:: testpmd>quit - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x30 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1' -- -i --port-topology=chained --nb-cores=1 --txd=1024 --rxd=1024 + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x30 -n 4 --file-prefix=vhost \ + --vdev 'net_vhost0,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1' \ + -- -i --port-topology=chained --nb-cores=1 --txd=1024 --rxd=1024 testpmd>set fwd mac testpmd>start @@ -515,9 +551,11 @@ Test Case 13: vhost-user/virtio-pmd pvp packed ring with multi VMs reconnect fro Test Case 14: vhost-user/virtio-pmd pvp packed ring with multi VMs reconnect from VMs ===================================================================================== -1. Bind one port to vfio-pci, launch the vhost by below command:: +1. Bind 1 NIC port to vfio-pci, launch the vhost by below command:: - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x30 -n 4 --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1' -- -i --port-topology=chained --nb-cores=1 --txd=1024 --rxd=1024 + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x30 -n 4 --file-prefix=vhost \ + --vdev 'net_vhost0,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1' \ + -- -i --port-topology=chained --nb-cores=1 --txd=1024 --rxd=1024 testpmd>set fwd mac testpmd>start @@ -549,13 +587,15 @@ Test Case 14: vhost-user/virtio-pmd pvp packed ring with multi VMs reconnect fro 3. 
On VM1, bind virtio1 to vfio-pci and run testpmd:: - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 -- -i --port-topology=chained --nb-cores=1 --txd=1024 --rxd=1024 + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 \ + -- -i --port-topology=chained --nb-cores=1 --txd=1024 --rxd=1024 testpmd>set fwd mac testpmd>start 4. On VM2, bind virtio2 to vfio-pci and run testpmd:: - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 -- -i --port-topology=chained --port-topology=chain --nb-cores=1 --txd=1024 --rxd=1024 + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 4 \ + -- -i --port-topology=chained --nb-cores=1 --txd=1024 --rxd=1024 testpmd>set fwd mac testpmd>start @@ -572,11 +612,11 @@ Test Case 14: vhost-user/virtio-pmd pvp packed ring with multi VMs reconnect fro Test Case 15: vhost-user/virtio-pmd pvp packed ring with multi VMs reconnect stability test =========================================================================================== -Similar as Test Case 4, all steps are similar except step 6, 7. +Similar to Test Case 4; all steps are the same except steps 6 and 7. -6. Quit vhost-user, then re-launch, repeat it 5-8 times, check if the reconnect can work and ensure the traffic can continue. +6. Quit vhost-user, then re-launch, repeat it 5 times, check if the reconnect can work and ensure the traffic can continue. -7. Reboot VMs, then re-launch VMs, repeat it 3-5 times, check if the reconnect can work and ensure the traffic can continue. +7. Reboot VMs, then re-launch VMs, repeat it 5 times, check if the reconnect can work and ensure the traffic can continue. Test Case 16: vhost-user/virtio-net VM2VM packed ring reconnect from vhost-user =============================================================================== @@ -584,7 +624,9 @@ Flow: Virtio-net1 --> Vhost-user --> Virtio-net2 1.
Launch the vhost by below commands, enable the client mode and tso:: - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x30 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024 + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x30 -n 4 --no-pci --file-prefix=vhost \ + --vdev 'net_vhost,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1' \ + -- -i --nb-cores=1 --txd=1024 --rxd=1024 testpmd>start 3. Launch VM1 and VM2:: @@ -628,7 +670,9 @@ Flow: Virtio-net1 --> Vhost-user --> Virtio-net2 6. Kill the vhost-user, then re-launch the vhost-user:: testpmd>quit - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x30 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024 + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x30 -n 4 --no-pci --file-prefix=vhost \ + --vdev 'net_vhost,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1' \ + -- -i --nb-cores=1 --txd=1024 --rxd=1024 testpmd>start 7. Rerun step5, ensure the vhost-user can reconnect to VM again, and the iperf traffic can be continue. @@ -639,7 +683,9 @@ Flow: Virtio-net1 --> Vhost-user --> Virtio-net2 1. Launch the vhost by below commands, enable the client mode and tso:: - ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x30 -n 4 --no-pci --file-prefix=vhost --vdev 'net_vhost,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024 + ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x30 -n 4 --no-pci --file-prefix=vhost \ + --vdev 'net_vhost,iface=vhost-net,client=1,queues=1' --vdev 'net_vhost1,iface=vhost-net1,client=1,queues=1' \ + -- -i --nb-cores=1 --txd=1024 --rxd=1024 testpmd>start 3. 
Launch VM1 and VM2:: @@ -686,8 +732,8 @@ Test Case 18: vhost-user/virtio-net VM2VM packed ring reconnect stability test ============================================================================== Flow: Virtio-net1 --> Vhost-user --> Virtio-net2 -Similar as Test Case 7, all steps are similar except step 6. +Similar to Test Case 7; all steps are the same except step 6. -6. Quit vhost-user, then re-launch, repeat it 5-8 times, check if the reconnect can work and ensure the traffic can continue. +6. Quit vhost-user, then re-launch, repeat it 5 times, check if the reconnect can work and ensure the traffic can continue. -7. Reboot two VMs, then re-launch VMs, repeat it 3-5 times, check if the reconnect can work and ensure the traffic can continue. \ No newline at end of file +7. Reboot two VMs, then re-launch VMs, repeat it 5 times, check if the reconnect can work and ensure the traffic can continue. \ No newline at end of file -- 2.25.1
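The test cases in this plan all begin with "Bind 1 NIC port to vfio-pci" but never show the bind step itself; it is normally done with DPDK's `usertools/dpdk-devbind.py` helper. A minimal sketch of that step, using a hypothetical PCI address `0000:18:00.0` (substitute the address reported by `dpdk-devbind.py --status` on your DUT); the commands are printed as a dry run so the sketch is safe to execute anywhere:

```shell
#!/bin/sh
# Hypothetical PCI address of the NIC under test; replace with the one
# reported by `./usertools/dpdk-devbind.py --status` on your DUT.
PCI_ADDR="0000:18:00.0"

# The bind step the test cases assume (run as root on the DUT):
# load the vfio-pci kernel module, then rebind the port to it.
BIND_CMD="./usertools/dpdk-devbind.py -b vfio-pci $PCI_ADDR"
echo "modprobe vfio-pci"
echo "$BIND_CMD"
```

To revert after the test, the same helper rebinds the port to its kernel driver, e.g. `dpdk-devbind.py -b ice $PCI_ADDR` (driver name depends on the NIC).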