From: Wei Ling <weix.ling@intel.com>
To: dts@dpdk.org
Cc: Wei Ling <weix.ling@intel.com>
Subject: [dts][PATCH V1 1/2] test_plans/dpdk_gro_lib_test_plan: delete CBDMA related testcase
Date: Thu, 19 May 2022 02:35:09 -0400
Message-Id: <20220519063509.2813311-1-weix.ling@intel.com>
X-Mailer: git-send-email 2.25.1
List-Id: test suite reviews and discussions

Delete CBDMA related testcase.

Signed-off-by: Wei Ling <weix.ling@intel.com>
---
 test_plans/dpdk_gro_lib_test_plan.rst | 58 ---------------------------
 1 file changed, 58 deletions(-)

diff --git a/test_plans/dpdk_gro_lib_test_plan.rst b/test_plans/dpdk_gro_lib_test_plan.rst
index 88ef971a..e90cf931 100644
--- a/test_plans/dpdk_gro_lib_test_plan.rst
+++ b/test_plans/dpdk_gro_lib_test_plan.rst
@@ -404,61 +404,3 @@ NIC2(In kernel) -> NIC1(DPDK) -> testpmd(csum fwd) -> Vhost -> Virtio-net
 
     Host side : taskset -c 35 ip netns exec ns1 iperf -c 1.1.1.2 -i 1 -t 60 -m -P 2
     VM side: iperf -s
-
-Test Case6: DPDK GRO test with two queues and two CBDMA channels using tcp/ipv4 traffic
-=======================================================================================
-
-Test flow
-=========
-
-NIC2(In kernel) -> NIC1(DPDK) -> testpmd(csum fwd) -> Vhost -> Virtio-net
-
-1. Connect two nic port directly, put nic2 into another namesapce and turn on the tso of this nic port by below cmds::
-
-    ip netns del ns1
-    ip netns add ns1
-    ip link set enp26s0f0 netns ns1    # [enp216s0f0] is the name of nic2
-    ip netns exec ns1 ifconfig enp26s0f0 1.1.1.8 up
-    ip netns exec ns1 ethtool -K enp26s0f0 tso on
-
-2. Bind cbdma port and nic1 to vfio-pci, launch vhost-user with testpmd and set flush interval to 1::
-
-    ./usertools/dpdk-devbind.py -b vfio-pci xx:xx.x
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 29-31 -n 4 \
-    --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=2,dmas=[txq0@80:04.0;txq1@80:04.1]' -- -i --txd=1024 --rxd=1024 --txq=2 --rxq=2 --nb-cores=2
-    set fwd csum
-    stop
-    port stop 0
-    port stop 1
-    csum set tcp hw 0
-    csum set ip hw 0
-    csum set tcp hw 1
-    csum set ip hw 1
-    set port 0 gro on
-    set gro flush 1
-    port start 0
-    port start 1
-    start
-
-3. Set up vm with virto device and using kernel virtio-net driver::
-
-   taskset -c 31 /home/qemu-install/qemu-4.2.1/bin/qemu-system-x86_64 -name us-vhost-vm1 \
-   -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on \
-   -numa node,memdev=mem \
-   -mem-prealloc -monitor unix:/tmp/vm2_monitor.sock,server,nowait -netdev user,id=yinan,hostfwd=tcp:127.0.0.1:6005-:22 -device e1000,netdev=yinan \
-   -smp cores=1,sockets=1 -drive file=/home/osimg/ubuntu2004.img \
-   -chardev socket,id=char0,path=./vhost-net \
-   -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce,queues=2 \
-   -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=on,csum=on,gso=on,host_tso4=on,guest_tso4=on,mq=on,vectors=15 \
-   -vnc :10 -daemonize
-
-4. In vm, config the virtio-net device with ip and turn the kernel gro off::
-
-    ifconfig ens4 1.1.1.2 up    # [ens3] is the name of virtio-net
-    ethtool -L ens4 combined 2
-    ethtool -K ens4 gro off
-
-5. Start iperf test, run iperf server at vm side and iperf client at host side, check throughput, should be larger than 10Gbits/sec::
-
-    Host side : taskset -c 35 ip netns exec ns1 iperf -c 1.1.1.2 -i 1 -t 60 -m -P 2
-    VM side: iperf -s
-- 
2.25.1