From mboxrd@z Thu Jan  1 00:00:00 1970
From: Wei Ling <weix.ling@intel.com>
To: dts@dpdk.org
Cc: Wei Ling <weix.ling@intel.com>
Subject: [dts][PATCH V2 1/2] test_plans/dpdk_gro_lib_test_plan: delete CBDMA related testcases
Date: Thu, 19 May 2022 04:51:26 -0400
Message-Id: <20220519085126.2817429-1-weix.ling@intel.com>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
List-Id: test suite reviews and discussions <dts.dpdk.org>
Delete CBDMA related testcases.

Signed-off-by: Wei Ling <weix.ling@intel.com>
---
 test_plans/dpdk_gro_lib_test_plan.rst | 58 ---------------------------
 1 file changed, 58 deletions(-)

diff --git a/test_plans/dpdk_gro_lib_test_plan.rst b/test_plans/dpdk_gro_lib_test_plan.rst
index 88ef971a..e90cf931 100644
--- a/test_plans/dpdk_gro_lib_test_plan.rst
+++ b/test_plans/dpdk_gro_lib_test_plan.rst
@@ -404,61 +404,3 @@ NIC2(In kernel) -> NIC1(DPDK) -> testpmd(csum fwd) -> Vhost -> Virtio-net
 
     Host side : taskset -c 35 ip netns exec ns1 iperf -c 1.1.1.2 -i 1 -t 60 -m -P 2
     VM side: iperf -s
-
-Test Case6: DPDK GRO test with two queues and two CBDMA channels using tcp/ipv4 traffic
-=======================================================================================
-
-Test flow
-=========
-
-NIC2(In kernel) -> NIC1(DPDK) -> testpmd(csum fwd) -> Vhost -> Virtio-net
-
-1. Connect two nic port directly, put nic2 into another namesapce and turn on the tso of this nic port by below cmds::
-
-    ip netns del ns1
-    ip netns add ns1
-    ip link set enp26s0f0 netns ns1       # [enp216s0f0] is the name of nic2
-    ip netns exec ns1 ifconfig enp26s0f0 1.1.1.8 up
-    ip netns exec ns1 ethtool -K enp26s0f0 tso on
-
-2. Bind cbdma port and nic1 to vfio-pci, launch vhost-user with testpmd and set flush interval to 1::
-
-    ./usertools/dpdk-devbind.py -b vfio-pci xx:xx.x
-    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 29-31 -n 4 \
-    --file-prefix=vhost --vdev 'net_vhost0,iface=vhost-net,queues=2,dmas=[txq0@80:04.0;txq1@80:04.1]' -- -i --txd=1024 --rxd=1024 --txq=2 --rxq=2 --nb-cores=2
-    set fwd csum
-    stop
-    port stop 0
-    port stop 1
-    csum set tcp hw 0
-    csum set ip hw 0
-    csum set tcp hw 1
-    csum set ip hw 1
-    set port 0 gro on
-    set gro flush 1
-    port start 0
-    port start 1
-    start
-
-3. Set up vm with virto device and using kernel virtio-net driver::
-
-    taskset -c 31 /home/qemu-install/qemu-4.2.1/bin/qemu-system-x86_64 -name us-vhost-vm1 \
-    -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on \
-    -numa node,memdev=mem \
-    -mem-prealloc -monitor unix:/tmp/vm2_monitor.sock,server,nowait -netdev user,id=yinan,hostfwd=tcp:127.0.0.1:6005-:22 -device e1000,netdev=yinan \
-    -smp cores=1,sockets=1 -drive file=/home/osimg/ubuntu2004.img \
-    -chardev socket,id=char0,path=./vhost-net \
-    -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce,queues=2 \
-    -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=on,csum=on,gso=on,host_tso4=on,guest_tso4=on,mq=on,vectors=15 \
-    -vnc :10 -daemonize
-
-4. In vm, config the virtio-net device with ip and turn the kernel gro off::
-
-    ifconfig ens4 1.1.1.2 up        # [ens3] is the name of virtio-net
-    ethtool -L ens4 combined 2
-    ethtool -K ens4 gro off
-
-5. Start iperf test, run iperf server at vm side and iperf client at host side, check throughput, should be larger than 10Gbits/sec::
-
-    Host side : taskset -c 35 ip netns exec ns1 iperf -c 1.1.1.2 -i 1 -t 60 -m -P 2
-    VM side: iperf -s
-- 
2.25.1