From: Ouyang Changchun <changchun.ouyang@intel.com>
To: dev@dpdk.org
Date: Wed, 12 Aug 2015 16:02:47 +0800
Message-Id: <1439366567-3402-13-git-send-email-changchun.ouyang@intel.com>
X-Mailer: git-send-email 1.7.12.2
In-Reply-To: <1439366567-3402-1-git-send-email-changchun.ouyang@intel.com>
References: <1434355006-30583-1-git-send-email-changchun.ouyang@intel.com>
 <1439366567-3402-1-git-send-email-changchun.ouyang@intel.com>
Subject: [dpdk-dev] [PATCH v4 12/12] doc: update doc for vhost multiple queues

Update the sample guide doc for vhost multiple queues.
Update the prog guide doc for the vhost lib multiple queues feature.

Signed-off-by: Changchun Ouyang <changchun.ouyang@intel.com>
---
It is newly added since v3.

 doc/guides/prog_guide/vhost_lib.rst |  38 ++++++++++++
 doc/guides/sample_app_ug/vhost.rst  | 113 ++++++++++++++++++++++++++++++++++++
 2 files changed, 151 insertions(+)

diff --git a/doc/guides/prog_guide/vhost_lib.rst b/doc/guides/prog_guide/vhost_lib.rst
index 48e1fff..6f2315d 100644
--- a/doc/guides/prog_guide/vhost_lib.rst
+++ b/doc/guides/prog_guide/vhost_lib.rst
@@ -128,6 +128,44 @@ VHOST_GET_VRING_BASE is used as the signal to remove vhost device from data plan
 
 When the socket connection is closed, vhost will destroy the device.
 
+Vhost multiple queues feature
+-----------------------------
+This feature enables multiple queues for each virtio device in vhost.
+Currently the multiple queues feature is supported only for vhost-user, not for vhost-cuse.
+
+The new QEMU patch version (v6) enabling vhost-user multiple queues has already been sent to the
+QEMU community and is in its comment-collecting stage. The patch set must be applied to QEMU and
+QEMU rebuilt before running vhost multiple queues:
+    http://patchwork.ozlabs.org/patch/506333/
+    http://patchwork.ozlabs.org/patch/506334/
+
+Note: the QEMU patch is based on top of 2 other patches; see the patch description for more details.
+
+Vhost gets the queue pair number from the messages it exchanges with QEMU.
+
+It is strongly recommended to set the number of HW queues in each pool identical to the queue
+number used to start the QEMU guest, and identical to the queue number used for the virtio port
+on the guest.
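As an illustration of how an application built on the vhost library might drive those queue
pairs, here is a minimal sketch; the qp * VIRTIO_QNUM + VIRTIO_RXQ/TXQ index convention and all
names below are illustrative assumptions, not part of this patch. The diagram that follows shows
the corresponding queue layout.

.. code-block:: c

    #include <rte_mbuf.h>
    #include <rte_mempool.h>
    #include <rte_virtio_net.h>

    #define MAX_PKT_BURST 32

    /*
     * Drain every queue pair negotiated with QEMU for one vhost device.
     * qp_nb is the queue pair count the application learned from vhost.
     */
    static void
    poll_vhost_queue_pairs(struct virtio_net *dev, uint32_t qp_nb,
                           struct rte_mempool *mbuf_pool)
    {
        struct rte_mbuf *pkts[MAX_PKT_BURST];
        uint32_t qp;
        uint16_t n;

        for (qp = 0; qp < qp_nb; qp++) {
            /* Guest TX virtqueue: packets coming from the guest. */
            n = rte_vhost_dequeue_burst(dev, qp * VIRTIO_QNUM + VIRTIO_TXQ,
                                        mbuf_pool, pkts, MAX_PKT_BURST);
            if (n == 0)
                continue;
            /*
             * Guest RX virtqueue: packets going back to the guest. A real
             * switch would make a forwarding decision here; echoing the
             * packets back keeps the sketch short.
             */
            rte_vhost_enqueue_burst(dev, qp * VIRTIO_QNUM + VIRTIO_RXQ,
                                    pkts, n);
            /* Enqueue copies into the guest buffers; the caller frees. */
            while (n)
                rte_pktmbuf_free(pkts[--n]);
        }
    }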
+
+=========================================
+==================|   |==================|
+      vport0      |   |      vport1      |
+---  ---  ---  ---|   |---  ---  ---  ---|
+q0 | q1 | q2 | q3 |   |q0 | q1 | q2 | q3 |
+/\= =/\= =/\= =/\=|   |/\= =/\= =/\= =/\=|
+|| || || || || ||       || || || || || ||
+|| || || || || ||       || || || || || ||
+||= =||= =||= =||=|   =||== ||== ||== ||=|
+q0 | q1 | q2 | q3 |   |q0 | q1 | q2 | q3 |
+------------------|   |------------------|
+    VMDq pool0    |   |    VMDq pool1    |
+==================|   |==================|
+
+On the RX side, vhost first polls each queue of the pool, gets the packets from
+it, and enqueues them into the corresponding virtqueue of the virtio device/port.
+On the TX side, vhost dequeues packets from each virtqueue of the virtio device/port
+and sends them to either a physical port or another virtio device according to their
+destination MAC address.
+
 Vhost supported vSwitch reference
 ---------------------------------
diff --git a/doc/guides/sample_app_ug/vhost.rst b/doc/guides/sample_app_ug/vhost.rst
index 730b9da..e7dfe70 100644
--- a/doc/guides/sample_app_ug/vhost.rst
+++ b/doc/guides/sample_app_ug/vhost.rst
@@ -514,6 +514,13 @@ It is enabled by default.
 
     user@target:~$ ./build/app/vhost-switch -c f -n 4 --huge-dir /mnt/huge -- --vlan-strip [0, 1]
 
+**rxq.**
+The rxq option specifies the RX queue number per VMDq pool; it is 1 by default.
+
+.. code-block:: console
+
+    user@target:~$ ./build/app/vhost-switch -c f -n 4 --huge-dir /mnt/huge -- --rxq [1, 2, 4]
+
 Running the Virtual Machine (QEMU)
 ----------------------------------
 
@@ -833,3 +840,109 @@ For example:
 The above message indicates that device 0 has been registered with MAC address
 cc:bb:bb:bb:bb:bb and VLAN tag 1000. Any packets received on the NIC with these values
 is placed on the devices receive queue. When a virtio-net device transmits packets,
 the VLAN tag is added to the packet by the DPDK vhost sample code.
+
+Vhost multiple queues
+---------------------
+
+This feature enables multiple queues for each virtio device in vhost.
+Currently the multiple queues feature is supported only for vhost-user, not for vhost-cuse.
+
+The new QEMU patch version (v6) enabling vhost-user multiple queues has already been sent to the
+QEMU community and is in its comment-collecting stage. The patch set must be applied to QEMU and
+QEMU rebuilt before running vhost multiple queues:
+    http://patchwork.ozlabs.org/patch/506333/
+    http://patchwork.ozlabs.org/patch/506334/
+
+Note: the QEMU patch is based on top of 2 other patches; see the patch description for more details.
+
+Basically, the vhost sample leverages VMDq+RSS in HW to receive packets and distribute them
+into different queues in the pool according to their 5-tuple.
+
+On the other hand, vhost gets the queue pair number from the messages it exchanges with QEMU.
+
+It is strongly recommended to set the number of HW queues in each pool identical to the queue
+number used to start the QEMU guest, and identical to the queue number used for the virtio port
+on the guest.
+E.g. use '--rxq 4' to set the queue number to 4: there are then 4 HW queues in each VMDq pool
+and 4 queues in each vhost device/port, and every queue in the pool maps to one queue in the
+vhost device.
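A minimal sketch of this pool-queue-to-virtqueue mapping on the RX side follows; the base_queue
handling and all names are illustrative assumptions rather than the sample's actual code. The
diagram that follows shows the same mapping pictorially.

.. code-block:: c

    #include <rte_ethdev.h>
    #include <rte_mbuf.h>
    #include <rte_virtio_net.h>

    #define MAX_PKT_BURST 32

    /*
     * RX path: poll each HW queue of the VMDq pool that backs this vhost
     * device and enqueue the packets into the matching guest RX virtqueue.
     * base_queue is the first HW queue of the pool; with --rxq 4 each pool
     * owns 4 consecutive queues.
     */
    static void
    vmdq_pool_rx(uint8_t port, uint16_t base_queue, uint16_t rxq_nb,
                 struct virtio_net *dev)
    {
        struct rte_mbuf *pkts[MAX_PKT_BURST];
        uint16_t q;

        for (q = 0; q < rxq_nb; q++) {
            uint16_t n = rte_eth_rx_burst(port, base_queue + q,
                                          pkts, MAX_PKT_BURST);
            if (n == 0)
                continue;
            /* HW queue q of the pool maps to guest queue pair q. */
            rte_vhost_enqueue_burst(dev, q * VIRTIO_QNUM + VIRTIO_RXQ,
                                    pkts, n);
            /* Enqueue copies into the guest buffers; free the mbufs. */
            while (n)
                rte_pktmbuf_free(pkts[--n]);
        }
    }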
+
+=========================================
+==================|   |==================|
+      vport0      |   |      vport1      |
+---  ---  ---  ---|   |---  ---  ---  ---|
+q0 | q1 | q2 | q3 |   |q0 | q1 | q2 | q3 |
+/\= =/\= =/\= =/\=|   |/\= =/\= =/\= =/\=|
+|| || || || || ||       || || || || || ||
+|| || || || || ||       || || || || || ||
+||= =||= =||= =||=|   =||== ||== ||== ||=|
+q0 | q1 | q2 | q3 |   |q0 | q1 | q2 | q3 |
+------------------|   |------------------|
+    VMDq pool0    |   |    VMDq pool1    |
+==================|   |==================|
+
+On the RX side, vhost first polls each queue of the pool, gets the packets from
+it, and enqueues them into the corresponding virtqueue of the virtio device/port.
+On the TX side, vhost dequeues packets from each virtqueue of the virtio device/port
+and sends them to either a physical port or another virtio device according to their
+destination MAC address.
+
+
+Test guidance
+~~~~~~~~~~~~~
+
+#. On the host, first mount hugepages, insmod uio and igb_uio, and bind one NIC to igb_uio;
+   then run the vhost sample. Key steps are as follows:
+
+.. code-block:: console
+
+    sudo mount -t hugetlbfs nodev /mnt/huge
+    sudo modprobe uio
+    sudo insmod $RTE_SDK/$RTE_TARGET/kmod/igb_uio.ko
+
+    $RTE_SDK/tools/dpdk_nic_bind.py --bind igb_uio 0000:08:00.0
+    sudo $RTE_SDK/examples/vhost/build/vhost-switch -c 0xf0 -n 4 --huge-dir \
+    /mnt/huge --socket-mem 1024,0 -- -p 1 --vm2vm 0 --dev-basename usvhost --rxq 2
+
+.. note::
+
+    Use '--stats 1' to enable stats dumping on screen for vhost.
+
+#. After step 1, on the host, modprobe kvm and kvm_intel, and use the QEMU command line to
+   start one guest:
+
+.. code-block:: console
+
+    modprobe kvm
+    modprobe kvm_intel
+    sudo mount -t hugetlbfs nodev /dev/hugepages -o pagesize=1G
+
+    $QEMU_PATH/qemu-system-x86_64 -enable-kvm -m 4096 \
+    -object memory-backend-file,id=mem,size=4096M,mem-path=/dev/hugepages,share=on \
+    -numa node,memdev=mem -mem-prealloc -smp 10 -cpu core2duo,+sse3,+sse4.1,+sse4.2 \
+    -name <vm name> -drive file=/vm.img \
+    -chardev socket,id=char0,path=/usvhost \
+    -netdev type=vhost-user,id=hostnet2,chardev=char0,vhostforce=on,queues=2 \
+    -device virtio-net-pci,mq=on,vectors=6,netdev=hostnet2,id=net2,mac=52:54:00:12:34:56,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off \
+    -chardev socket,id=char1,path=/usvhost \
+    -netdev type=vhost-user,id=hostnet3,chardev=char1,vhostforce=on,queues=2 \
+    -device virtio-net-pci,mq=on,vectors=6,netdev=hostnet3,id=net3,mac=52:54:00:12:34:57,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off
+
+#. Log in to the guest and use testpmd (DPDK based) to test; use multiple virtio queues to
+   RX and TX packets.
+
+.. code-block:: console
+
+    modprobe uio
+    insmod $RTE_SDK/$RTE_TARGET/kmod/igb_uio.ko
+    echo 1024 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
+    ./tools/dpdk_nic_bind.py --bind igb_uio 00:03.0 00:04.0
+
+    $RTE_SDK/$RTE_TARGET/app/testpmd -c 1f -n 4 -- --rxq=2 --txq=2 --nb-cores=4 \
+    --rx-queue-stats-mapping="(0,0,0),(0,1,1),(1,0,2),(1,1,3)" \
+    --tx-queue-stats-mapping="(0,0,0),(0,1,1),(1,0,2),(1,1,3)" -i --disable-hw-vlan --txqflags 0xf00
+
+    set fwd mac
+    start tx_first
+
+#. Use a packet generator to send packets with destination MAC 52:54:00:12:34:57 and VLAN tag 1001;
+   select IPv4 as the protocol and continuously incremental IP addresses.
+
+#. Testpmd on the guest can display the packets received/transmitted on both queues of each
+   virtio port.
-- 
1.8.4.2
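For completeness, the TX direction described in the doc text above can be sketched the same way;
find_vhost_dev_by_mac() is a hypothetical stand-in for the sample's MAC learning table, and the
other names are likewise illustrative assumptions rather than the sample's actual code.

.. code-block:: c

    #include <rte_ethdev.h>
    #include <rte_ether.h>
    #include <rte_mbuf.h>
    #include <rte_mempool.h>
    #include <rte_virtio_net.h>

    #define MAX_PKT_BURST 32

    /* Hypothetical lookup into a learned destination-MAC table. */
    struct virtio_net *find_vhost_dev_by_mac(const struct ether_addr *mac);

    /*
     * TX path: drain each guest TX virtqueue and forward every packet either
     * to another vhost device (VM-to-VM) or out of the physical port.
     */
    static void
    vhost_dev_tx(struct virtio_net *dev, uint16_t txq_nb, uint8_t phys_port,
                 uint16_t phys_txq, struct rte_mempool *mbuf_pool)
    {
        struct rte_mbuf *pkts[MAX_PKT_BURST];
        uint16_t q, i;

        for (q = 0; q < txq_nb; q++) {
            uint16_t n = rte_vhost_dequeue_burst(dev,
                    q * VIRTIO_QNUM + VIRTIO_TXQ, mbuf_pool,
                    pkts, MAX_PKT_BURST);

            for (i = 0; i < n; i++) {
                struct ether_hdr *eth =
                    rte_pktmbuf_mtod(pkts[i], struct ether_hdr *);
                struct virtio_net *dst =
                    find_vhost_dev_by_mac(&eth->d_addr);

                if (dst != NULL) {
                    /* VM-to-VM: same queue pair index on the destination. */
                    rte_vhost_enqueue_burst(dst,
                            q * VIRTIO_QNUM + VIRTIO_RXQ, &pkts[i], 1);
                    rte_pktmbuf_free(pkts[i]);
                } else if (rte_eth_tx_burst(phys_port, phys_txq,
                                            &pkts[i], 1) == 0) {
                    /* NIC queue full: drop the packet. */
                    rte_pktmbuf_free(pkts[i]);
                }
            }
        }
    }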