From: Ouyang Changchun <changchun.ouyang@intel.com>
To: dev@dpdk.org
Subject: [dpdk-dev] [PATCH v3 9/9] doc: Update doc for vhost multiple queues
Date: Mon, 15 Jun 2015 15:56:46 +0800
Message-ID: <1434355006-30583-10-git-send-email-changchun.ouyang@intel.com>
In-Reply-To: <1434355006-30583-1-git-send-email-changchun.ouyang@intel.com>
Update the sample application guide for vhost multiple queues;
update the programmer's guide for the vhost library multiple queues feature.
This patch is added in v3.
Signed-off-by: Changchun Ouyang <changchun.ouyang@intel.com>
---
doc/guides/prog_guide/vhost_lib.rst | 35 ++++++++++++
doc/guides/sample_app_ug/vhost.rst | 110 ++++++++++++++++++++++++++++++++++++
2 files changed, 145 insertions(+)
diff --git a/doc/guides/prog_guide/vhost_lib.rst b/doc/guides/prog_guide/vhost_lib.rst
index 48e1fff..e444681 100644
--- a/doc/guides/prog_guide/vhost_lib.rst
+++ b/doc/guides/prog_guide/vhost_lib.rst
@@ -128,6 +128,41 @@ VHOST_GET_VRING_BASE is used as the signal to remove vhost device from data plan
When the socket connection is closed, vhost will destroy the device.
+Vhost multiple queues feature
+-----------------------------
+This feature supports multiple queues for each virtio device in vhost.
+Multiple queues are only available with vhost-user; vhost-cuse does not support this feature yet.
+
+The QEMU patch enabling vhost-user multiple queues has already been merged into an upstream
+sub-tree in the QEMU community and will be part of QEMU 2.4. When using QEMU 2.3, the same
+patch must be applied to QEMU 2.3 and QEMU rebuilt before running vhost multiple queues:
+http://patchwork.ozlabs.org/patch/477461/
+
+Vhost obtains the queue pair number from the messages exchanged with QEMU.
+
+It is strongly recommended to set the number of HW queues in each pool to the same value as the
+queue number used to start the QEMU guest and the queue number used for the virtio port in the guest.
+
+=========================================
+==================| |==================|
+ vport0 | | vport1 |
+--- --- --- ---| |--- --- --- ---|
+q0 | q1 | q2 | q3 | |q0 | q1 | q2 | q3 |
+/\= =/\= =/\= =/\=| |/\= =/\= =/\= =/\=|
+|| || || || || || || ||
+|| || || || || || || ||
+||= =||= =||= =||=| =||== ||== ||== ||=|
+q0 | q1 | q2 | q3 | |q0 | q1 | q2 | q3 |
+------------------| |------------------|
+ VMDq pool0 | | VMDq pool1 |
+==================| |==================|
+
+On the RX side, vhost first polls each queue of the pool, gets the packets from it and enqueues
+them into the corresponding virtqueue of the virtio device/port.
+On the TX side, it dequeues packets from each virtqueue of the virtio device/port and sends them
+to either a physical port or another virtio device according to their destination MAC address.
+
Vhost supported vSwitch reference
---------------------------------
diff --git a/doc/guides/sample_app_ug/vhost.rst b/doc/guides/sample_app_ug/vhost.rst
index 730b9da..9a57d19 100644
--- a/doc/guides/sample_app_ug/vhost.rst
+++ b/doc/guides/sample_app_ug/vhost.rst
@@ -514,6 +514,13 @@ It is enabled by default.
user@target:~$ ./build/app/vhost-switch -c f -n 4 --huge-dir /mnt/huge -- --vlan-strip [0, 1]
+**rxq.**
+The rxq option specifies the number of RX queues per VMDq pool; the default is 1.
+
+.. code-block:: console
+
+ user@target:~$ ./build/app/vhost-switch -c f -n 4 --huge-dir /mnt/huge -- --rxq [1, 2, 4]
+
Running the Virtual Machine (QEMU)
----------------------------------
@@ -833,3 +840,106 @@ For example:
The above message indicates that device 0 has been registered with MAC address cc:bb:bb:bb:bb:bb and VLAN tag 1000.
Any packets received on the NIC with these values is placed on the devices receive queue.
When a virtio-net device transmits packets, the VLAN tag is added to the packet by the DPDK vhost sample code.
+
+Vhost multiple queues
+---------------------
+
+This feature supports multiple queues for each virtio device in vhost.
+Multiple queues are only available with vhost-user; vhost-cuse does not support this feature yet.
+
+The QEMU patch enabling vhost-user multiple queues has already been merged into an upstream
+sub-tree in the QEMU community and will be part of QEMU 2.4. When using QEMU 2.3, the same
+patch must be applied to QEMU 2.3 and QEMU rebuilt before running vhost multiple queues:
+http://patchwork.ozlabs.org/patch/477461/
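+
+A possible sequence for applying the patch and rebuilding QEMU 2.3 is sketched below; the
+patch file name, source directory and build options are only illustrative:
+
+.. code-block:: console
+
+    # fetch the patch, e.g. via the patchwork mbox link, and apply it to the QEMU 2.3 sources
+    wget http://patchwork.ozlabs.org/patch/477461/mbox/ -O vhost-user-mq.patch
+    cd qemu-2.3.0
+    patch -p1 < ../vhost-user-mq.patch    # or 'git am' on a git checkout of v2.3.0
+
+    # rebuild QEMU
+    ./configure --target-list=x86_64-softmmu
+    make -j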
+
+The vhost sample leverages VMDq+RSS in HW to receive packets and distribute them
+into different queues in the pool according to their 5-tuple.
+
+On the other hand, vhost obtains the queue pair number from the messages exchanged with QEMU.
+
+It is strongly recommended to set the number of HW queues in each pool to the same value as the
+queue number used to start the QEMU guest and the queue number used for the virtio port in the guest.
+E.g. using '--rxq 4' sets the queue number to 4, which means there are 4 HW queues in each VMDq pool
+and 4 queues in each vhost device/port; every queue in the pool maps to one queue in the vhost device.
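+
+For instance, a host command line and the matching QEMU network options for 4 queues could
+look as follows. This is a sketch showing only the queue-related options; the device names
+are reused from the test guidance example later in this section, and the vectors value
+follows the "2 x queues + 2" convention used there:
+
+.. code-block:: console
+
+    # on the host: 4 RX queues per VMDq pool
+    ./build/app/vhost-switch -c f -n 4 --huge-dir /mnt/huge -- --rxq 4
+
+    # matching QEMU options for one virtio device (fragment of the full command line)
+    -netdev type=vhost-user,id=hostnet2,chardev=char0,vhostforce=on,queues=4 \
+    -device virtio-net-pci,mq=on,vectors=10,netdev=hostnet2,id=net2,mac=52:54:00:12:34:56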
+
+=========================================
+==================| |==================|
+ vport0 | | vport1 |
+--- --- --- ---| |--- --- --- ---|
+q0 | q1 | q2 | q3 | |q0 | q1 | q2 | q3 |
+/\= =/\= =/\= =/\=| |/\= =/\= =/\= =/\=|
+|| || || || || || || ||
+|| || || || || || || ||
+||= =||= =||= =||=| =||== ||== ||== ||=|
+q0 | q1 | q2 | q3 | |q0 | q1 | q2 | q3 |
+------------------| |------------------|
+ VMDq pool0 | | VMDq pool1 |
+==================| |==================|
+
+On the RX side, vhost first polls each queue of the pool, gets the packets from it and enqueues
+them into the corresponding virtqueue of the virtio device/port.
+On the TX side, it dequeues packets from each virtqueue of the virtio device/port and sends them
+to either a physical port or another virtio device according to their destination MAC address.
+
+
+Test guidance
+~~~~~~~~~~~~~
+
+#. On the host, first mount hugepages, load the uio and igb_uio modules and bind one NIC to igb_uio;
+   then run the vhost sample. The key steps are as follows:
+
+.. code-block:: console
+
+ sudo mount -t hugetlbfs nodev /mnt/huge
+ sudo modprobe uio
+ sudo insmod $RTE_SDK/$RTE_TARGET/kmod/igb_uio.ko
+
+ $RTE_SDK/tools/dpdk_nic_bind.py --bind igb_uio 0000:08:00.0
+ sudo $RTE_SDK/examples/vhost/build/vhost-switch -c 0xf0 -n 4 --huge-dir \
+ /mnt/huge --socket-mem 1024,0 -- -p 1 --vm2vm 0 --dev-basename usvhost --rxq 2
+
+.. note::
+
+    Use '--stats 1' to enable on-screen stats dumping for vhost.
+
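+For example, the vhost-switch command from the previous step with on-screen stats dumping
+enabled (all other options unchanged):
+
+.. code-block:: console
+
+    sudo $RTE_SDK/examples/vhost/build/vhost-switch -c 0xf0 -n 4 --huge-dir \
+    /mnt/huge --socket-mem 1024,0 -- -p 1 --vm2vm 0 --dev-basename usvhost --rxq 2 --stats 1
+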
+#. After step 1, on the host, load the kvm and kvm_intel modules and use the QEMU command line to start one guest:
+
+.. code-block:: console
+
+ modprobe kvm
+ modprobe kvm_intel
+ sudo mount -t hugetlbfs nodev /dev/hugepages -o pagesize=1G
+
+ $QEMU_PATH/qemu-system-x86_64 -enable-kvm -m 4096 \
+ -object memory-backend-file,id=mem,size=4096M,mem-path=/dev/hugepages,share=on \
+ -numa node,memdev=mem -mem-prealloc -smp 10 -cpu core2duo,+sse3,+sse4.1,+sse4.2 \
+ -name <vm-name> -drive file=<img-path>/vm.img \
+ -chardev socket,id=char0,path=<usvhost-path>/usvhost \
+ -netdev type=vhost-user,id=hostnet2,chardev=char0,vhostforce=on,queues=2 \
+ -device virtio-net-pci,mq=on,vectors=6,netdev=hostnet2,id=net2,mac=52:54:00:12:34:56,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off \
+ -chardev socket,id=char1,path=<usvhost-path>/usvhost \
+ -netdev type=vhost-user,id=hostnet3,chardev=char1,vhostforce=on,queues=2 \
+ -device virtio-net-pci,mq=on,vectors=6,netdev=hostnet3,id=net3,mac=52:54:00:12:34:57,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off
+
+#. Log in to the guest and use testpmd (DPDK based) to test; use multiple virtio queues to receive and transmit packets.
+
+.. code-block:: console
+
+ modprobe uio
+ insmod $RTE_SDK/$RTE_TARGET/kmod/igb_uio.ko
+ echo 1024 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
+ ./tools/dpdk_nic_bind.py --bind igb_uio 00:03.0 00:04.0
+
+ $RTE_SDK/$RTE_TARGET/app/testpmd -c 1f -n 4 -- --rxq=2 --txq=2 --nb-cores=4 \
+ --rx-queue-stats-mapping="(0,0,0),(0,1,1),(1,0,2),(1,1,3)" \
+ --tx-queue-stats-mapping="(0,0,0),(0,1,1),(1,0,2),(1,1,3)" -i --disable-hw-vlan --txqflags 0xf00
+
+ set fwd mac
+ start tx_first
+
+#. Use a packet generator to send packets with destination MAC 52:54:00:12:34:57 and VLAN tag 1001;
+   select IPv4 as the protocol and use continuously incrementing IP addresses.
+
+#. Testpmd on the guest can then display the packets received/transmitted on both queues of each virtio port, as shown below.
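+
+A simple way to check the per-queue counters is the standard testpmd statistics commands;
+with the queue stats mappings configured above, the per-queue numbers appear in the port
+statistics output:
+
+.. code-block:: console
+
+    testpmd> stop
+    testpmd> show port stats all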
--
1.8.4.2