test suite reviews and discussions
* [dts] [PATCH v1] test_plans/vf_interrupt_pmd: add vf multi queues interrupt test cases with i40e driver
@ 2020-02-19 20:33 Yinan
  2020-02-21  2:03 ` Tu, Lijuan
  0 siblings, 1 reply; 2+ messages in thread
From: Yinan @ 2020-02-19 20:33 UTC (permalink / raw)
  To: dts; +Cc: Wang Yinan

From: Wang Yinan <yinan.wang@intel.com>

Signed-off-by: Wang Yinan <yinan.wang@intel.com>
---
 test_plans/vf_interrupt_pmd_test_plan.rst | 76 ++++++++++++++++++++++-
 1 file changed, 75 insertions(+), 1 deletion(-)

diff --git a/test_plans/vf_interrupt_pmd_test_plan.rst b/test_plans/vf_interrupt_pmd_test_plan.rst
index 8f91b14..a8ed3d8 100644
--- a/test_plans/vf_interrupt_pmd_test_plan.rst
+++ b/test_plans/vf_interrupt_pmd_test_plan.rst
@@ -190,4 +190,78 @@ Test Case4: VF interrupt pmd in VM with vfio-pci
 
 7. Check if threads on core 2 have returned to sleep mode::
 
-    L3FWD_POWER: lcore 2 sleeps until interrupt triggers
\ No newline at end of file
+    L3FWD_POWER: lcore 2 sleeps until interrupt triggers
+
+Test Case5: VF multi-queue interrupt with vfio-pci on i40e
+==========================================================
+
+1. Generate a VF from the NIC PF, then bind it to the vfio-pci driver::
+
+    echo 1 > /sys/bus/pci/devices/0000\:04\:00.0/sriov_numvfs
+    modprobe vfio-pci
+    usertools/dpdk-devbind.py --bind=vfio-pci 0000:04:10.0(vf_pci)
+
+  Notice: if the PF is bound to a kernel driver, make sure the PF link is up before you start the DPDK application on the VF.
+
+2. Start l3fwd-power with VF::
+
+    examples/l3fwd-power/build/l3fwd-power -c 3f -n 4 -m 2048 -- -P -p 0x1 --config="(0,0,1),(0,1,2),(0,2,3),(0,3,4)"
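The ``--config`` option maps each RX queue to a polling lcore as a list of (port, queue, lcore) tuples; here queues 0-3 of port 0 go to lcores 1-4. As an illustrative sketch (the tuple format is documented l3fwd-power behavior, but the helper below is ours, not part of DPDK or this test plan), the mapping can be decoded like this:

```python
# Sketch: decode an l3fwd-power --config string into (port, queue, lcore)
# tuples to see which lcore services which RX queue. The helper name is
# hypothetical; the "(p,q,l)" tuple format follows the l3fwd-power docs.
import re

def parse_l3fwd_config(config: str):
    """Return a list of (port, queue, lcore) tuples from a --config value."""
    return [tuple(int(n) for n in t)
            for t in re.findall(r"\((\d+),(\d+),(\d+)\)", config)]

mapping = parse_l3fwd_config("(0,0,1),(0,1,2),(0,2,3),(0,3,4)")
print(mapping)  # [(0, 0, 1), (0, 1, 2), (0, 2, 3), (0, 3, 4)]
```

This is why step 4 below expects wakeup messages from lcores 1-4: each lcore sleeps until its own queue's RX interrupt fires.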
+
+3. Send UDP packets with varying source IPs and the destination MAC set to the VF MAC address::
+
+    for x in range(0,10):
+        sendp(Ether(src="00:00:00:00:01:00",dst="vf_mac")/IP(src='2.1.1.' + str(x),dst='2.1.1.5')/UDP()/"Hello!0",iface="tester_intf")
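Varying the source IP is what matters here: the NIC's RSS hash places each flow on a queue by its header fields, so distinct source IPs spread the packets across all four RX queues and every queue's interrupt fires. A toy illustration of the idea (using a stand-in placement rule of last-octet-mod-queue-count, which is an assumption for clarity and NOT the real i40e Toeplitz RSS hash):

```python
# Toy model (assumption: NOT the real i40e RSS hash) showing why 10
# distinct source IPs are enough to hit all 4 RX queues: each distinct
# flow lands on some queue, and varying the source IP varies the queue.
NB_QUEUES = 4

def toy_queue_for(src_ip: str, nb_queues: int = NB_QUEUES) -> int:
    """Stand-in for the NIC RSS placement: last IP octet mod queue count."""
    return int(src_ip.split(".")[-1]) % nb_queues

src_ips = ["2.1.1.%d" % x for x in range(0, 10)]
hit_queues = {toy_queue_for(ip) for ip in src_ips}
print(sorted(hit_queues))  # [0, 1, 2, 3] -- every queue sees traffic
```

With the real hardware hash the exact queue per IP differs, but the coverage effect is the same, which is what the next step checks.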
+
+4. Check that the threads on all cores have been woken up::
+
+    L3FWD_POWER: lcore 1 is waked up from rx interrupt on port 0 queue 0
+    L3FWD_POWER: lcore 2 is waked up from rx interrupt on port 0 queue 1
+    L3FWD_POWER: lcore 3 is waked up from rx interrupt on port 0 queue 2
+    L3FWD_POWER: lcore 4 is waked up from rx interrupt on port 0 queue 3
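An automated check of this step can scan the l3fwd-power output for the wakeup lines above. A minimal sketch (the log format is taken from the expected output shown; the helper name is ours):

```python
# Sketch: parse l3fwd-power output and confirm that every expected lcore
# reported an RX-interrupt wakeup. The regex matches the log lines quoted
# in the test plan above.
import re

WAKE_RE = re.compile(
    r"lcore (\d+) is waked up from rx interrupt on port (\d+) queue (\d+)")

def woken_lcores(log: str) -> set:
    """Return the set of lcore ids that logged an RX-interrupt wakeup."""
    return {int(m.group(1)) for m in WAKE_RE.finditer(log)}

sample_log = """L3FWD_POWER: lcore 1 is waked up from rx interrupt on port 0 queue 0
L3FWD_POWER: lcore 2 is waked up from rx interrupt on port 0 queue 1
L3FWD_POWER: lcore 3 is waked up from rx interrupt on port 0 queue 2
L3FWD_POWER: lcore 4 is waked up from rx interrupt on port 0 queue 3"""

print(woken_lcores(sample_log))  # {1, 2, 3, 4}
```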
+
+Test Case6: VF multi-queue interrupt in VM with vfio-pci on i40e
+================================================================
+
+1. Generate a VF from the NIC PF, then bind it to the vfio-pci driver::
+
+    echo 1 > /sys/bus/pci/devices/0000\:88\:00.1/sriov_numvfs
+    modprobe vfio-pci
+    usertools/dpdk-devbind.py --bind=vfio-pci 0000:88:0a.0(vf_pci)
+
+  Notice: if the PF is bound to a kernel driver, make sure the PF link is up before you start the DPDK application on the VF.
+
+2. Passthrough VF 0 to VM0 and start VM0::
+
+    taskset -c 4,5,6,7,8 qemu-system-x86_64 \
+    -name vm0 -enable-kvm -pidfile /tmp/.vm0.pid -daemonize -monitor unix:/tmp/vm0_monitor.sock,server,nowait \
+    -device e1000,netdev=nttsip1  -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6000-:22 \
+    -device vfio-pci,host=0000:88:0a.0,id=pt_0 -cpu host -smp 5 -m 10240 \
+    -chardev socket,path=/tmp/vm0_qga0.sock,server,nowait,id=vm0_qga0 -device virtio-serial \
+    -device virtserialport,chardev=vm0_qga0,name=org.qemu.guest_agent.0 -vnc :11 \
+    -drive file=/home/osimg/noiommu-ubt16.img,format=qcow2,if=virtio,index=0,media=disk
+
+  Notice: the VM needs kernel version > 4.8.0. Most Linux distributions do not support vfio-noiommu mode by default, so running this case requires rebuilding the guest kernel with vfio-noiommu enabled.
+
+3. Bind VF 0 to the vfio-pci driver::
+
+    modprobe -r vfio_iommu_type1
+    modprobe -r vfio
+    modprobe vfio enable_unsafe_noiommu_mode=1
+    modprobe vfio-pci
+    usertools/dpdk-devbind.py -b vfio-pci 0000:00:04.0
+
+4. Start l3fwd-power in the VM::
+
+    ./build/l3fwd-power -l 0-3 -n 4 -m 2048 -- -P -p 0x1 --config="(0,0,0),(0,1,1),(0,2,2),(0,3,3)"
+
+5. Send UDP packets with varying source IPs and the destination MAC set to the VF MAC address::
+
+    for x in range(0,10):
+        sendp(Ether(src="00:00:00:00:01:00",dst="vf_mac")/IP(src='2.1.1.' + str(x),dst='2.1.1.5')/UDP()/"Hello!0",iface="tester_intf")
+
+6. Check that the threads on cores 0 to 3 have been woken up in the VM::
+
+    L3FWD_POWER: lcore 0 is waked up from rx interrupt on port 0 queue 0
+    L3FWD_POWER: lcore 1 is waked up from rx interrupt on port 0 queue 1
+    L3FWD_POWER: lcore 2 is waked up from rx interrupt on port 0 queue 2
+    L3FWD_POWER: lcore 3 is waked up from rx interrupt on port 0 queue 3
\ No newline at end of file
-- 
2.17.1



* Re: [dts] [PATCH v1] test_plans/vf_interrupt_pmd: add vf multi queues interrupt test cases with i40e driver
  2020-02-19 20:33 [dts] [PATCH v1] test_plans/vf_interrupt_pmd: add vf multi queues interrupt test cases with i40e driver Yinan
@ 2020-02-21  2:03 ` Tu, Lijuan
  0 siblings, 0 replies; 2+ messages in thread
From: Tu, Lijuan @ 2020-02-21  2:03 UTC (permalink / raw)
  To: Wang, Yinan, dts; +Cc: Wang, Yinan

Applied, thanks


