From mboxrd@z Thu Jan 1 00:00:00 1970 Return-Path: Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id A13C6A0548; Wed, 8 Jun 2022 06:52:29 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 85BF74021F; Wed, 8 Jun 2022 06:52:29 +0200 (CEST) Received: from mga06.intel.com (mga06b.intel.com [134.134.136.31]) by mails.dpdk.org (Postfix) with ESMTP id B1E3B4021D for ; Wed, 8 Jun 2022 06:52:27 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1654663947; x=1686199947; h=from:to:cc:subject:date:message-id:mime-version: content-transfer-encoding; bh=foib2rPupYpCGSyyL4ahoKT5mEraFpd+wwZbv8FMPMA=; b=Ljn/TOKkm9M9mo+/RAVzTDiSp7NKLJIvXzt4GMPC2gy01tzcCB+4G/Bl 4X2vGTP+P/JQG7KRNb/173TB+6zvcZn2hpt3JEJNVPJ2uH5U448EyhzR/ xMbxonoMOypnAv0Qm6NIjpkulwadcbP/9jvXVchvtIMs9vVmy0S7nHsHb JPqgmqh/P9pTdfeMrwqDtMXCALQ/BRPOZRVx4c/KtSwEboW9qMP6sKMTh PYf4kVThRwveyBxcbDvYaCdhHol5STGvvOYQGbzQya7nx6iDDZyGpnwEd WtItp+lj/v6+I4ytSupHfk10cNKDDH+1hZb65LBEWYvPRAC4m7wX4C9L8 g==; X-IronPort-AV: E=McAfee;i="6400,9594,10371"; a="338526877" X-IronPort-AV: E=Sophos;i="5.91,285,1647327600"; d="scan'208";a="338526877" Received: from fmsmga003.fm.intel.com ([10.253.24.29]) by orsmga104.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 07 Jun 2022 21:52:26 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.91,285,1647327600"; d="scan'208";a="670346725" Received: from dpdk-yaqi.sh.intel.com ([10.67.118.178]) by FMSMGA003.fm.intel.com with ESMTP; 07 Jun 2022 21:52:25 -0700 From: Yaqi Tang To: dts@dpdk.org Cc: Yaqi Tang Subject: [dts][PATCH V1] test_plans/vm_hotplug: adjust test plan format Date: Wed, 8 Jun 2022 04:52:22 +0000 Message-Id: <20220608045222.350643-1-yaqi.tang@intel.com> X-Mailer: git-send-email 2.25.1 MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit X-BeenThere: dts@dpdk.org 
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: test suite reviews and discussions
List-Unsubscribe: ,
List-Archive: 
List-Post: 
List-Help: 
List-Subscribe: ,
Errors-To: dts-bounces@dpdk.org

Adjust the format of the test plan according to the test plan template,
and add step numbers.

Signed-off-by: Yaqi Tang 
---
 test_plans/vm_hotplug_test_plan.rst | 174 ++++++++++++++--------------
 1 file changed, 88 insertions(+), 86 deletions(-)

diff --git a/test_plans/vm_hotplug_test_plan.rst b/test_plans/vm_hotplug_test_plan.rst
index 941ace7b..92a2a55a 100644
--- a/test_plans/vm_hotplug_test_plan.rst
+++ b/test_plans/vm_hotplug_test_plan.rst
@@ -16,17 +16,20 @@ by failsafe PMD. So "plug out/in the NIC" typically does not the case
 that physically plug out/in a NIC from/to server, it should be case that
 remove/add a qemu device from/to a VM.
 
+Prerequisites
+=============
+
 Hardware
-========
+--------
 Ixgbe and i40e NICs
 
 Note
-====
+----
 Known issue for UIO in dpdk/doc/guides/rel_notes/known_issues.rst as below,
 This test plan only test VFIO scenario.
 
 Kernel crash when hot-unplug igb_uio device while DPDK application is running
------------------------------------------------------------------------------
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
 **Description**:
    When device has been bound to igb_uio driver and application is running,
@@ -46,15 +49,15 @@ Kernel crash when hot-unplug igb_uio device while DPDK application is running
    ``igb_uio`` module.
 
 
-Test Case: one device
-=====================
-Bind host PF port 0 to vfio_pci::
+Test Case 1: one device
+-----------------------
+1. Bind host PF port 0 to vfio_pci::
 
     modprobe vfio_pci
    ./usertools/dpdk-devbind.py -b vfio_pci 18:00.0
 
-Passthrough PF and start qemu script as below, using “-monitor stdio”
-will send the monitor to the standard output::
+2. 
Passthrough PF and start qemu script as below, using “-monitor stdio”
+   will send the monitor to the standard output::
 
     taskset -c 0-7 qemu-system-x86_64 -enable-kvm \
     -m 4096 -cpu host -smp 8 -name qemu-vm1 \
@@ -65,7 +68,7 @@ will send the monitor to the standard output::
     -device rtl8139,netdev=hostnet1,id=net0,mac=00:00:00:14:c4:31,bus=pci.0,addr=0x1f \
     -vnc :5
 
-Log in VM, bind passthrough port 0 to vfio-pci::
+3. Log in VM, bind passthrough port 0 to vfio-pci::
 
     modprobe -r vfio_iommu_type1
     modprobe -r vfio
@@ -73,57 +76,57 @@ Log in VM, bind passthrough port 0 to vfio-pci::
     modprobe vfio-pci
     ./usertools/dpdk-devbind.py -b vfio-pci 00:03.0
 
-Start testpmd with "--hot-plug" enable, set rxonly forward mode
-and enable verbose output::
+4. Start testpmd with "--hot-plug" enable, set rxonly forward mode
+   and enable verbose output::
 
     ./dpdk-testpmd -c f -n 4 -- -i --hot-plug
     testpmd> set fwd rxonly
     testpmd> set verbose 1
     testpmd> start
 
-Send packets from tester, check RX could work successfully
+5. Send packets from tester, check RX could work successfully.
 
-Set txonly forward mode, send packet from testpmd, check TX could
-work successfully::
+6. Set txonly forward mode, send packet from testpmd, check TX could
+   work successfully::
 
     testpmd> set fwd txonly
     testpmd> start
 
-Remove device from qemu interface::
+7. Remove device from qemu interface::
 
-   (qemu) device_del dev1
+    (qemu) device_del dev1
 
-Check device is removed, no system hange and core dump::
+8. Check device is removed, no system hang or core dump::
 
-   ./usertools/dpdk-devbind.py -s
+    ./usertools/dpdk-devbind.py -s
 
-Add device from qemu interface::
+9. Add device from qemu interface::
 
     (qemu) device_add vfio-pci,host=18:00.0,id=dev1
 
-Check driver adds the device, bind port to vfio-pci
+10. Check driver adds the device, bind port to vfio-pci.
 
-Attach the VF from testpmd::
+11. 
Attach the VF from testpmd::
 
     testpmd> port attach 00:03.0
     testpmd> port start all
 
-Check testpmd adds the device successfully, no hange and core dump
+12. Check testpmd adds the device successfully, no hang or core dump.
 
-Check RX/TX could work successfully
+13. Check RX/TX could work successfully.
 
-Repeat above steps for 3 times
+14. Repeat above steps for 3 times.
 
-Test Case: one device + reset
-=============================
-Bind host PF port 0 to vfio_pci::
+Test Case 2: one device + reset
+-------------------------------
+1. Bind host PF port 0 to vfio_pci::
 
     modprobe vfio_pci
     ./usertools/dpdk-devbind.py -b vfio_pci 18:00.0
 
-Log in VM, passthrough PF and start qemu script same as above
+2. Log in VM, passthrough PF and start qemu script same as above.
 
-Bind passthrough port 0 to vfio-pci::
+3. Bind passthrough port 0 to vfio-pci::
 
     modprobe -r vfio_iommu_type1
     modprobe -r vfio
@@ -131,56 +134,55 @@ Bind passthrough port 0 to vfio-pci::
     modprobe vfio-pci
     ./usertools/dpdk-devbind.py -b vfio-pci 00:03.0
 
-Start testpmd with "--hot-plug" enable, set rxonly forward mode
-and enable verbose output::
+4. Start testpmd with "--hot-plug" enable, set rxonly forward mode
+   and enable verbose output::
 
     ./dpdk-testpmd -c f -n 4 -- -i --hot-plug
     testpmd> set fwd rxonly
     testpmd> set verbose 1
     testpmd> start
 
-Send packets from tester, check RX could work successfully
+5. Send packets from tester, check RX could work successfully.
 
-Set txonly forward mode, send packet from testpmd, check TX could
-work successfully::
+6. Set txonly forward mode, send packet from testpmd, check TX could
+   work successfully::
 
     testpmd> set fwd txonly
     testpmd> start
 
-Remove device from qemu interface::
+7. Remove device from qemu interface::
 
     (qemu) device_del dev1
 
-Quit testpmd
+8. Quit testpmd.
 
-Check device is removed, no system hange and core dump::
+9. Check device is removed, no system hang or core dump::
 
     ./usertools/dpdk-devbind.py -s
 
-Add device from qemu interface::
+10. 
Add device from qemu interface::
 
     (qemu) device_add vfio-pci,host=18:00.0,id=dev1
 
-Check driver adds the device, bind port to vfio-pci
-
-Restart testpmd
+11. Check driver adds the device, bind port to vfio-pci.
 
-Check testpmd adds the device successfully, no hange and core dump
+12. Restart testpmd.
 
-Check RX/TX could work successfully
+13. Check testpmd adds the device successfully, no hang or core dump.
 
-Repeat above steps for 3 times
+14. Check RX/TX could work successfully.
+
+15. Repeat above steps for 3 times.
 
-Test Case: two/multi devices
-============================
-Bind host PF port 0 and port 1 to vfio_pci::
+Test Case 3: two/multi devices
+------------------------------
+1. Bind host PF port 0 and port 1 to vfio_pci::
 
     modprobe vfio_pci
     ./usertools/dpdk-devbind.py -b vfio_pci 18:00.0 18:00.1
 
-Passthrough PFs and start qemu script as below, using “-monitor stdio”
-will send the monitor to the standard output::
+2. Passthrough PFs and start qemu script as below, using “-monitor stdio”
+   will send the monitor to the standard output::
 
     taskset -c 0-7 qemu-system-x86_64 -enable-kvm \
     -m 4096 -cpu host -smp 8 -name qemu-vm1 \
@@ -192,7 +194,7 @@ will send the monitor to the standard output::
     -device rtl8139,netdev=hostnet1,id=net0,mac=00:00:00:14:c4:31,bus=pci.0,addr=0x1f \
     -vnc :5
 
-Log in VM, bind passthrough port 0 and port 1 to vfio-pci::
+3. Log in VM, bind passthrough port 0 and port 1 to vfio-pci::
 
     modprobe -r vfio_iommu_type1
     modprobe -r vfio
@@ -200,60 +202,60 @@ Log in VM, bind passthrough port 0 and port 1 to vfio-pci::
     modprobe vfio-pci
     ./usertools/dpdk-devbind.py -b vfio-pci 00:03.0 00:04.0
 
-Start testpmd with "--hot-plug" enable, set rxonly forward mode
-and enable verbose output::
+4. 
Start testpmd with "--hot-plug" enable, set rxonly forward mode
+   and enable verbose output::
 
     ./dpdk-testpmd -c f -n 4 -- -i --hot-plug
     testpmd> set fwd rxonly
     testpmd> set verbose 1
     testpmd> start
 
-Send packets from tester, check RX could work successfully
-Set txonly forward mode, send packet from testpmd, check TX could
-work successfully::
+5. Send packets from tester, check RX could work successfully.
+
+6. Set txonly forward mode, send packet from testpmd, check TX could
+   work successfully::
 
     testpmd> set fwd txonly
     testpmd> start
 
-Remove device 1 and device 2 from qemu interface::
+7. Remove device 1 and device 2 from qemu interface::
 
     (qemu) device_del dev1
     (qemu) device_del dev2
 
-Check devices are removed, no system hange and core dump::
+8. Check devices are removed, no system hang or core dump::
 
     ./usertools/dpdk-devbind.py -s
 
-Add devices from qemu interface::
+9. Add devices from qemu interface::
 
     (qemu) device_add vfio-pci,host=18:00.0,id=dev1
     (qemu) device_add vfio-pci,host=18:00.1,id=dev2
 
-Check driver adds the devices, bind port to vfio-pci
+10. Check driver adds the devices, bind ports to vfio-pci.
 
-Attach the VFs from testpmd::
+11. Attach the VFs from testpmd::
 
     testpmd> port attach 00:03.0
     testpmd> port attach 00:04.0
     testpmd> port start all
 
-Check testpmd adds the devices successfully, no hange and core dump
-
-Check RX/TX could work successfully
+12. Check testpmd adds the devices successfully, no hang or core dump.
 
-Repeat above steps for 3 times
+13. Check RX/TX could work successfully.
+
+14. Repeat above steps for 3 times.
 
-Test Case: two/multi devices + reset
-====================================
-Bind host PF port 0 and port 1 to vfio_pci::
+Test Case 4: two/multi devices + reset
+--------------------------------------
+1. Bind host PF port 0 and port 1 to vfio_pci::
 
     modprobe vfio_pci
     ./usertools/dpdk-devbind.py -b vfio_pci 18:00.0 18:00.1
 
-Passthrough PFs and start qemu script same as above
+2. 
Passthrough PFs and start qemu script same as above.
 
-Log in VM, bind passthrough port 0 and port 1 to vfio-pci::
+3. Log in VM, bind passthrough port 0 and port 1 to vfio-pci::
 
     modprobe -r vfio_iommu_type1
     modprobe -r vfio
@@ -261,44 +263,44 @@ Log in VM, bind passthrough port 0 and port 1 to vfio-pci::
     modprobe vfio-pci
     ./usertools/dpdk-devbind.py -b vfio-pci 00:03.0 00:04.0
 
-Start testpmd with "--hot-plug" enable, set rxonly forward mode
-and enable verbose output::
+4. Start testpmd with "--hot-plug" enable, set rxonly forward mode
+   and enable verbose output::
 
     ./dpdk-testpmd -c f -n 4 -- -i --hot-plug
     testpmd> set fwd rxonly
     testpmd> set verbose 1
     testpmd> start
 
-Send packets from tester, check RX could work successfully
+5. Send packets from tester, check RX could work successfully.
 
-Set txonly forward mode, send packets from testpmd, check TX could
-work successfully::
+6. Set txonly forward mode, send packets from testpmd, check TX could
+   work successfully::
 
     testpmd> set fwd txonly
     testpmd> start
 
-Remove device 1 and device 2 from qemu interface::
+7. Remove device 1 and device 2 from qemu interface::
 
-   (qemu) device_del dev1
-   (qemu) device_del dev2
+    (qemu) device_del dev1
+    (qemu) device_del dev2
 
-Quit testpmd
+8. Quit testpmd.
 
-Check devices are removed, no system hange and core dump::
+9. Check devices are removed, no system hang or core dump::
 
-   ./usertools/dpdik-devbind.py -s
+    ./usertools/dpdk-devbind.py -s
 
-Add devices from qemu interface::
+10. Add devices from qemu interface::
 
     (qemu) device_add vfio-pci,host=18:00.0,id=dev1
     (qemu) device_add vfio-pci,host=18:00.1,id=dev2
 
-Check driver adds the devices, bind ports to vfio-pci
+11. Check driver adds the devices, bind ports to vfio-pci.
 
-Restart testpmd
+12. Restart testpmd.
 
-Check testpmd adds the devices successfully, no hange and core dump
+13. Check testpmd adds the devices successfully, no hang or core dump.
 
-Check RX/TX could work successfully
+14. Check RX/TX could work successfully.
-Repeat above steps for 3 times
+15. Repeat above steps for 3 times.
-- 
2.25.1