From: Ouyang Changchun <changchun.ouyang@intel.com>
To: dev@dpdk.org
Date: Wed, 12 Aug 2015 16:02:35 +0800
Message-Id: <1439366567-3402-1-git-send-email-changchun.ouyang@intel.com>
X-Mailer: git-send-email 1.7.12.2
In-Reply-To: <1434355006-30583-1-git-send-email-changchun.ouyang@intel.com>
References: <1434355006-30583-1-git-send-email-changchun.ouyang@intel.com>
Subject: [dpdk-dev] [PATCH v4 00/12] Support multiple queues in vhost

This patch set targets release 2.2; please ignore it for release 2.1. It is
sent out a bit early to collect more comments.

This patch set adds support for multiple queues per virtio device in vhost.
Currently the multiple-queues feature is supported only for vhost-user, not
yet for vhost-cuse.

The new QEMU patch version (v6) enabling vhost-user multiple queues has
already been sent to the QEMU community and is in its comment-collecting
stage. Running vhost with multiple queues requires applying the following
patches to QEMU and rebuilding it:
  http://patchwork.ozlabs.org/patch/506333/
  http://patchwork.ozlabs.org/patch/506334/

Note: the QEMU patch is based on top of 2 other patches; see the patch
description for more details.

Basically, the vhost sample leverages VMDq+RSS in hardware to receive
packets and distribute them into the different queues of a pool according
to their 5-tuples. On the other side, vhost obtains the queue-pair count
from the messages exchanged with QEMU. It is strongly recommended to set
the number of HW queues per pool identical to the queue count used to
start the QEMU guest, and identical to the queue count of the virtio port
in the guest. E.g. use '--rxq 4' to set the queue number to 4: there are
then 4 HW queues in each VMDq pool and 4 queues in each vhost device/port,
and every queue in a pool maps to one queue in the vhost device.
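To illustrate the VMDq+RSS setup described above, here is a minimal C
sketch (not code from this patch set; the helper name and the fixed 8-pool
count are assumptions for illustration) of how a port can be configured so
that VMDq selects the pool and the RSS hash of the 5-tuple selects the
queue inside it. Patch 1 of this series enables this mq_mode on ixgbe in
non-SRIOV environments.

    #include <string.h>
    #include <rte_ethdev.h>

    /* Hypothetical helper: configure 'port_id' for VMDq+RSS with 8 pools
     * and 'rxq' HW RX queues per pool. */
    static int
    configure_vmdq_rss(uint8_t port_id, uint16_t rxq)
    {
            struct rte_eth_conf conf;

            memset(&conf, 0, sizeof(conf));
            conf.rxmode.mq_mode = ETH_MQ_RX_VMDQ_RSS;      /* VMDq + RSS */
            conf.rx_adv_conf.vmdq_rx_conf.nb_queue_pools = ETH_8_POOLS;
            conf.rx_adv_conf.rss_conf.rss_hf = ETH_RSS_IP; /* hash IP fields */

            /* 8 pools * rxq HW RX queues in total; a single TX queue here */
            return rte_eth_dev_configure(port_id, 8 * rxq, 1, &conf);
    }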
    =========================================
    ==================|   |==================|
          vport0     |   |      vport1      |
    ---  ---  ---  ---|  |---  ---  ---  ---|
    q0 | q1 | q2 | q3 |  |q0 | q1 | q2 | q3 |
    /\= =/\= =/\= =/\=|  |/\= =/\= =/\= =/\=|
    ||   ||   ||   ||     ||   ||   ||   ||
    ||   ||   ||   ||     ||   ||   ||   ||
    ||=  =||= =||= =||=|  =||== ||== ||== ||=|
    q0 | q1 | q2 | q3 |  |q0 | q1 | q2 | q3 |
    ------------------|  |------------------|
       VMDq pool0     |  |   VMDq pool1     |
    ==================|  |==================|

On the RX side, the sample first polls each queue of the pool, gets the
packets from it, and enqueues them into the corresponding virtqueue of the
virtio device/port. On the TX side, it dequeues packets from each virtqueue
of the virtio device/port and sends them to either a physical port or
another virtio device according to the destination MAC address. (A hedged
code sketch of these two paths follows the test guidance below.)

Here is some test guidance.

1. On the host, first mount hugepages, insmod uio and igb_uio, and bind one
   NIC to igb_uio; then run the vhost sample. Key steps as follows:

   sudo mount -t hugetlbfs nodev /mnt/huge
   sudo modprobe uio
   sudo insmod $RTE_SDK/$RTE_TARGET/kmod/igb_uio.ko
   $RTE_SDK/tools/dpdk_nic_bind.py --bind igb_uio 0000:08:00.0

   sudo $RTE_SDK/examples/vhost/build/vhost-switch -c 0xf0 -n 4 \
        --huge-dir /mnt/huge --socket-mem 1024,0 -- -p 1 --vm2vm 0 \
        --dev-basename usvhost --rxq 2

   Use '--stats 1' to enable stats dumping on screen for vhost.

2. After step 1, on the host, modprobe kvm and kvm_intel, and use the qemu
   command line to start one guest:

   modprobe kvm
   modprobe kvm_intel
   sudo mount -t hugetlbfs nodev /dev/hugepages -o pagesize=1G

   $QEMU_PATH/qemu-system-x86_64 -enable-kvm -m 4096 \
    -object memory-backend-file,id=mem,size=4096M,mem-path=/dev/hugepages,share=on \
    -numa node,memdev=mem -mem-prealloc -smp 10 \
    -cpu core2duo,+sse3,+sse4.1,+sse4.2 -name -drive file=/vm.img \
    -chardev socket,id=char0,path=/usvhost \
    -netdev type=vhost-user,id=hostnet2,chardev=char0,vhostforce=on,queues=2 \
    -device virtio-net-pci,mq=on,vectors=6,netdev=hostnet2,id=net2,mac=52:54:00:12:34:56,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off \
    -chardev socket,id=char1,path=/usvhost \
    -netdev type=vhost-user,id=hostnet3,chardev=char1,vhostforce=on,queues=2 \
    -device virtio-net-pci,mq=on,vectors=6,netdev=hostnet3,id=net3,mac=52:54:00:12:34:57,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off

3. Log on to the guest and use testpmd (DPDK-based) to test, using multiple
   virtio queues to rx and tx packets:

   modprobe uio
   insmod $RTE_SDK/$RTE_TARGET/kmod/igb_uio.ko
   echo 1024 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
   ./tools/dpdk_nic_bind.py --bind igb_uio 00:03.0 00:04.0

   $RTE_SDK/$RTE_TARGET/app/testpmd -c 1f -n 4 -- --rxq=2 --txq=2 \
        --nb-cores=4 \
        --rx-queue-stats-mapping="(0,0,0),(0,1,1),(1,0,2),(1,1,3)" \
        --tx-queue-stats-mapping="(0,0,0),(0,1,1),(1,0,2),(1,1,3)" \
        -i --disable-hw-vlan --txqflags 0xf00

   set fwd mac
   start tx_first

4. Use a packet generator to send packets with destination MAC
   52:54:00:12:34:57 and VLAN tag 1001; select IPv4 as the protocol and use
   continuously incrementing IP addresses.

5. testpmd on the guest displays the packets received/transmitted on both
   queues of each virtio port.
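As promised above, here is a hedged C sketch of the RX and TX forwarding
paths. This is not the sample's actual code; the pool/virtqueue mapping,
the 'vdev' handle, and the burst size are illustrative assumptions based
on the queue-pair layout used by this series, where queue pair q maps to
virtqueue ids q*VIRTIO_QNUM + VIRTIO_RXQ/VIRTIO_TXQ.

    #include <rte_ethdev.h>
    #include <rte_mbuf.h>
    #include <rte_virtio_net.h>

    #define BURST_SIZE 32

    /* RX path: drain HW queue 'q' of the device's VMDq pool and enqueue
     * the burst into the matching RX virtqueue of the vhost device. */
    static void
    poll_pool_queue(uint8_t port, uint16_t pool_base, uint16_t q,
                    struct virtio_net *vdev)
    {
            struct rte_mbuf *pkts[BURST_SIZE];
            uint16_t rx, done;

            rx = rte_eth_rx_burst(port, pool_base + q, pkts, BURST_SIZE);
            if (rx == 0)
                    return;

            done = rte_vhost_enqueue_burst(vdev,
                            q * VIRTIO_QNUM + VIRTIO_RXQ, pkts, rx);
            while (done < rx)        /* free what the guest couldn't take */
                    rte_pktmbuf_free(pkts[done++]);
    }

    /* TX path: dequeue packets the guest placed on TX virtqueue 'q' and
     * hand them to the physical port (VM2VM MAC switching omitted). */
    static void
    drain_virtqueue(struct virtio_net *vdev, uint16_t q, uint8_t port,
                    uint16_t txq, struct rte_mempool *mbuf_pool)
    {
            struct rte_mbuf *pkts[BURST_SIZE];
            uint16_t n, sent;

            n = rte_vhost_dequeue_burst(vdev,
                            q * VIRTIO_QNUM + VIRTIO_TXQ,
                            mbuf_pool, pkts, BURST_SIZE);
            if (n == 0)
                    return;

            sent = rte_eth_tx_burst(port, txq, pkts, n);
            while (sent < n)
                    rte_pktmbuf_free(pkts[sent++]);
    }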
Changchun Ouyang (12):
  ixgbe: support VMDq RSS in non-SRIOV environment
  vhost: support multiple queues in virtio dev
  vhost: update version map file
  vhost: set memory layout for multiple queues mode
  vhost: check the virtqueue address's validity
  vhost: support protocol feature
  vhost: add new command line option: rxq
  vhost: support multiple queues
  virtio: resolve for control queue
  vhost: add per queue stats info
  vhost: alloc core to virtq
  doc: update doc for vhost multiple queues

 doc/guides/prog_guide/vhost_lib.rst           |  38 +++
 doc/guides/sample_app_ug/vhost.rst            | 113 +++++++
 drivers/net/ixgbe/ixgbe_rxtx.c                |  86 ++++-
 drivers/net/virtio/virtio_ethdev.c            |   9 +-
 examples/vhost/Makefile                       |   4 +-
 examples/vhost/main.c                         | 459 +++++++++++++++++---------
 examples/vhost/main.h                         |   3 +-
 lib/librte_ether/rte_ethdev.c                 |  31 ++
 lib/librte_vhost/rte_vhost_version.map        |   2 +-
 lib/librte_vhost/rte_virtio_net.h             |  47 ++-
 lib/librte_vhost/vhost-net.h                  |   4 +
 lib/librte_vhost/vhost_cuse/virtio-net-cdev.c |  57 ++--
 lib/librte_vhost/vhost_rxtx.c                 |  91 +++--
 lib/librte_vhost/vhost_user/vhost-net-user.c  |  29 +-
 lib/librte_vhost/vhost_user/vhost-net-user.h  |   4 +
 lib/librte_vhost/vhost_user/virtio-net-user.c | 164 ++++++---
 lib/librte_vhost/vhost_user/virtio-net-user.h |   4 +
 lib/librte_vhost/virtio-net.c                 | 283 ++++++++++++----
 lib/librte_vhost/virtio-net.h                 |   2 +
 19 files changed, 1087 insertions(+), 343 deletions(-)

-- 
1.8.4.2