From mboxrd@z Thu Jan  1 00:00:00 1970
From: Yinan Wang <yinan.wang@intel.com>
To: dts@dpdk.org
Cc: Yinan Wang <yinan.wang@intel.com>
Date: Fri, 27 Nov 2020 12:27:08 -0500
Message-Id: <20201127172708.165646-1-yinan.wang@intel.com>
X-Mailer: git-send-email 2.25.1
Subject: [dts] [PATCH v1] test_plans/vswitch_sample_cbdma_test_plan.rst

Update CBDMA enable parameters due to an implementation change.

Signed-off-by: Yinan Wang <yinan.wang@intel.com>
---
 test_plans/vswitch_sample_cbdma_test_plan.rst | 32 ++++++++++++++++----------------
 1 file changed, 16 insertions(+), 16 deletions(-)

diff --git a/test_plans/vswitch_sample_cbdma_test_plan.rst b/test_plans/vswitch_sample_cbdma_test_plan.rst
index c57c0015..93887695 100644
--- a/test_plans/vswitch_sample_cbdma_test_plan.rst
+++ b/test_plans/vswitch_sample_cbdma_test_plan.rst
@@ -47,7 +47,7 @@ Test Case1: PVP performance check with CBDMA channel using vhost async driver
     if (async_vhost_driver) {
         f.async_inorder = 1;
     -   f.async_threshold = 256;
-    +   f.async_threshold = 0;
+    +   f.async_threshold = 1518;
         return rte_vhost_async_channel_register(vid, VIRTIO_RXQ,
             f.intval, &channel_ops);
     }
@@ -57,7 +57,7 @@ Test Case1: PVP performance check with CBDMA channel using vhost async driver
 3. On host, launch dpdk-vhost by below command::
 
     ./dpdk-vhost -c 0x1c000000 -n 4 -- \
-    -p 0x1 --mergeable 1 --vm2vm 1 --async_vhost_driver --stats 1 --socket-file /tmp/vhost-net --dmas [txd0@00:04.0]
+    -p 0x1 --mergeable 1 --vm2vm 1 --dma-type ioat --stats 1 --socket-file /tmp/vhost-net --dmas [txd0@00:04.0]
 
 4. Launch virtio-user with testpmd::
 
@@ -96,22 +96,22 @@ Test Case1: PVP performance check with CBDMA channel using vhost async driver
     (1)CPU copy vs. sync copy delta < 10% for 64B packet size
     (2)CBDMA copy vs sync copy delta > 5% for 1518B packet size
 
-Test Case2: PV test with multiple CBDMA channels using vhost async driver
+Test Case2: PVP test with multiple CBDMA channels using vhost async driver
 ==========================================================================
 
-1. Bind two physical ports to vfio-pci and two CBDMA channels to igb_uio.
+1. Bind one physical port to vfio-pci and two CBDMA channels to igb_uio.
 
 2. On host, launch dpdk-vhost by below command::
 
     ./dpdk-vhost -c 0x1c000000 -n 4 -- \
-    -p 0x1 --mergeable 1 --vm2vm 1 --async_vhost_driver --stats 1 --socket-file /tmp/vhost-net0 --socket-file /tmp/vhost-net1 --dmas [txd0@00:04.0,txd1@00:04.1]
+    -p 0x1 --mergeable 1 --vm2vm 1 --dma-type ioat --stats 1 --socket-file /tmp/vhost-net0 --socket-file /tmp/vhost-net1 --dmas [txd0@00:04.0,txd1@00:04.1]
 
 3. Launch two virtio-user ports::
 
-    ./dpdk-testpmd -l 29-30 -n 4 --no-pci --file-prefix=testpmd0 \
+    ./x86_64-native-linuxapp-gcc/app/testpmd -l 29-30 -n 4 --no-pci --file-prefix=testpmd0 \
     --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost-net0,queues=1 -- -i --rxq=1 --txq=1 --txd=1024 --rxd=1024 --nb-cores=1 --no-numa
 
-    ./dpdk-testpmd -l 31-32 -n 4 --no-pci --file-prefix=testpmd1 \
+    ./x86_64-native-linuxapp-gcc/app/testpmd -l 31-32 -n 4 --no-pci --file-prefix=testpmd1 \
     --vdev=net_virtio_user0,mac=00:11:22:33:44:11,path=/tmp/vhost-net1,queues=1 -- -i --rxq=1 --txq=1 --txd=1024 --rxd=1024 --nb-cores=1 --no-numa
 
 4. Start pkts from two virtio-user sides individually to let vswitch know the mac addr::
@@ -131,19 +131,19 @@ Test Case2: PV test with multiple CBDMA channels using vhost async driver
 Test Case3: VM2VM performance test with two CBDMA channels using vhost async driver
 ====================================================================================
 
-1. Bind two physical ports to vfio-pci and two CBDMA channels to igb_uio.
+1. Bind one physical port to vfio-pci and two CBDMA channels to igb_uio.
 
 2. On host, launch dpdk-vhost by below command::
 
-    ./dpdk-vhost -c 0x1c000000 -n 4 -- -p 0x1 --mergeable 1 --vm2vm 1 --async_vhost_driver \
+    ./dpdk-vhost -c 0x1c000000 -n 4 -- -p 0x1 --mergeable 1 --vm2vm 1 --dma-type ioat \
     --socket-file /tmp/vhost-net0 --socket-file /tmp/vhost-net1 --dmas [txd0@00:04.0,txd1@00:04.1]
 
 3. Launch virtio-user::
 
-    ./dpdk-testpmd -l 29-30 -n 4 --no-pci --file-prefix=testpmd0 \
+    ./x86_64-native-linuxapp-gcc/app/testpmd -l 29-30 -n 4 --no-pci --file-prefix=testpmd0 \
     --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/vhost-net0,queues=1 -- -i --rxq=1 --txq=1 --txd=1024 --rxd=1024 --nb-cores=1 --no-numa
 
-    ./dpdk-testpmd -l 31-32 -n 4 --no-pci --file-prefix=testpmd1 \
+    ./x86_64-native-linuxapp-gcc/app/testpmd -l 31-32 -n 4 --no-pci --file-prefix=testpmd1 \
     --vdev=net_virtio_user0,mac=00:11:22:33:44:11,path=/tmp/vhost-net1,queues=1 -- -i --rxq=1 --txq=1 --txd=1024 --rxd=1024 --nb-cores=1 --no-numa
 
 4. Start pkts from two virtio-user sides, record performance number with txpkts=256 and 2000 from testpmd1 separately::
@@ -173,16 +173,16 @@ Test Case3: VM2VM performance test with two CBDMA channels using vhost async driver
 Test Case4: VM2VM test with 2 vhost devices using vhost async driver
 =======================================================================
 
-1. Bind two physical ports to vfio-pci and two CBDMA channels to igb_uio.
+1. Bind one physical port to vfio-pci and two CBDMA channels to igb_uio.
 
 2. On host, launch dpdk-vhost by below command::
 
-    ./dpdk-vhost -c 0x1c000000 -n 4 -- -p 0x1 --mergeable 1 --vm2vm 1 --async_vhost_driver \
+    ./dpdk-vhost -c 0x1c000000 -n 4 -- -p 0x1 --mergeable 1 --vm2vm 1 --dma-type ioat \
     --socket-file /tmp/vhost-net0 --socket-file /tmp/vhost-net1 --dmas [txd0@00:04.0,txd1@00:04.1]
 
 3. Start VM0::
 
-    qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 4 -m 4096 \
+    /home/qemu-install/qemu-4.2.1/bin/qemu-system-x86_64 -name vm1 -enable-kvm -cpu host -smp 4 -m 4096 \
     -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
     -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04.img \
     -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
@@ -194,8 +194,8 @@ Test Case4: VM2VM test with 2 vhost devices using vhost async driver
     -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=true,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on -vnc :10
 
 4. Start VM1::
-
-    qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 4 -m 4096 \
+
+    /home/qemu-install/qemu-4.2.1/bin/qemu-system-x86_64 -name vm2 -enable-kvm -cpu host -smp 4 -m 4096 \
     -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
     -numa node,memdev=mem -mem-prealloc -drive file=/home/osimg/ubuntu20-04-2.img \
     -chardev socket,path=/tmp/vm2_qga0.sock,server,nowait,id=vm2_qga0 -device virtio-serial \
-- 
2.25.1
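
Note for reviewers: the first hunk above edits a snippet that the test plan asks testers to apply to examples/vhost/main.c (the dpdk-vhost sample) before building. Below is a minimal sketch of that registration path, assuming the DPDK 20.08/20.11-era vhost async API (rte_vhost_async_channel_register() and struct rte_vhost_async_features). VIRTIO_RXQ, channel_ops, and async_vhost_driver come from the sample; the wrapper name register_async_channel() is hypothetical and used only here; the bitfield layout of the features struct is not reproduced. Treat it as illustrative, not a verbatim copy of the sample::

    #include <stdbool.h>

    #include <rte_vhost.h>
    #include <rte_vhost_async.h>

    #define VIRTIO_RXQ 0 /* RX virtqueue index, as defined in the vhost sample */

    /* Supplied elsewhere in examples/vhost: the DMA completion callbacks
     * and the state set by the CBDMA-related command-line options. */
    extern struct rte_vhost_async_channel_ops channel_ops;
    extern bool async_vhost_driver;

    /* Hypothetical wrapper around the new_device() fragment the hunk edits. */
    static int
    register_async_channel(int vid)
    {
        struct rte_vhost_async_features f = { .intval = 0 };

        if (async_vhost_driver) {
            f.async_inorder = 1;
            /* The plan now tells testers to set this to 1518 instead of the
             * sample's 256: copy lengths below the threshold stay on the
             * plain CPU copy path, larger ones go to the CBDMA channel. */
            f.async_threshold = 1518;
            return rte_vhost_async_channel_register(vid, VIRTIO_RXQ,
                    f.intval, &channel_ops);
        }
        return 0; /* async path disabled: nothing to register */
    }

With 1518 (a full standard Ethernet frame) as the threshold, small packets are copied synchronously by the CPU while large ones are offloaded, which appears to be exactly what Test Case1's 64B and 1518B expectations exercise.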