From mboxrd@z Thu Jan  1 00:00:00 1970
From: Yuanhan Liu
To: dev@dpdk.org
Cc: marcel@redhat.com, "Michael S. Tsirkin"
Date: Fri, 9 Oct 2015 13:45:59 +0800
Message-Id: <1444369572-1157-1-git-send-email-yuanhan.liu@linux.intel.com>
Subject: [dpdk-dev] [PATCH v6 00/13] vhost-user multiple queues enabling

This patch set enables vhost-user multiple queues.

Overview
========

It depends on some QEMU patches that have already been merged upstream.
Those QEMU patches introduce new vhost-user messages for negotiating
vhost-user mq support. The main negotiation steps are (QEMU as master,
DPDK vhost-user as slave):

- The master queries the slave's features with VHOST_USER_GET_FEATURES.

- The master checks whether VHOST_USER_F_PROTOCOL_FEATURES is present.
  If not, mq is not supported. (See patch 1 for why
  VHOST_USER_F_PROTOCOL_FEATURES is introduced.)

- The master then sends another command, VHOST_USER_GET_QUEUE_NUM, to
  query how many queues the slave supports, and compares the result
  with the requested queue number. QEMU exits if the former is smaller.
- The master then initializes all queue pairs by sending a series of
  vhost-user commands, including VHOST_USER_SET_VRING_CALL, which
  triggers the slave to do the related vring setup, such as vring
  allocation.

At this point, all necessary initialization and negotiation are done.
The master can later send another message, VHOST_USER_SET_VRING_ENABLE,
to enable or disable a specific queue dynamically.

Patchset
========

Patches 1-6 are preparation work for enabling mq; they are all atomic
changes, made with "do not break anything" borne in mind.

Patch 7 actually enables the mq feature, by setting two key feature
flags.

Patch 8 handles the VHOST_USER_SET_VRING_ENABLE message, which enables
or disables a specific virtqueue pair; only one queue pair is enabled
by default.

Patches 9-12 demonstrate the mq feature.

Patch 13 updates the doc release note.

Testing
=======

Host side
---------

- # Start vhost-switch

  sudo mount -t hugetlbfs nodev /mnt/huge
  sudo modprobe uio
  sudo insmod $RTE_SDK/$RTE_TARGET/kmod/igb_uio.ko
  sudo $RTE_SDK/tools/dpdk_nic_bind.py --bind igb_uio 0000:08:00.0
  sudo $RTE_SDK/examples/vhost/build/vhost-switch -c 0xf0 -n 4 \
       --huge-dir /mnt/huge --socket-mem 2048,0 -- -p 1 --vm2vm 0 \
       --dev-basename usvhost --rxq 2

  # The above command generates a usvhost socket file at $PWD. You could
  # also specify the "--stats 1" option to enable stats dumping.
- # Start QEMU

  sudo mount -t hugetlbfs nodev $HOME/hugetlbfs
  $QEMU_DIR/x86_64-softmmu/qemu-system-x86_64 -machine accel=kvm -m 4G \
       -object memory-backend-file,id=mem,size=4G,mem-path=$HOME/hugetlbfs,share=on \
       -numa node,memdev=mem -chardev socket,id=chr0,path=/path/to/usvhost \
       -netdev vhost-user,id=net0,chardev=chr0,vhostforce,queues=2 \
       -device virtio-net-pci,netdev=net0,mq=on,vectors=6,mac=52:54:00:12:34:58,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off \
       -hda $HOME/iso/fc-22-x86_64.img -smp 10 -cpu core2duo,+sse3,+sse4.1,+sse4.2

Guest side
----------

  modprobe uio
  insmod $RTE_SDK/$RTE_TARGET/kmod/igb_uio.ko
  echo 1024 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
  ./tools/dpdk_nic_bind.py --bind igb_uio 00:03.0

  $RTE_SDK/$RTE_TARGET/app/testpmd -c 1f -n 4 -- --rxq=2 --txq=2 \
       --nb-cores=4 -i --disable-hw-vlan --txqflags 0xf00

  > set fwd mac
  > start tx_first

After this setup, you can then use a packet generator for packet tx/rx
testing.

Test with OVS
=============

Marcel also created a simple yet quite clear test guide with OVS at:

    http://wiki.qemu.org/Features/vhost-user-ovs-dpdk

BTW, Marcel, would you please complete the page on mq testing?
---
Changchun Ouyang (7):
  vhost: rxtx: prepare work for multiple queue support
  virtio: read virtio_net_config correctly
  vhost: add VHOST_USER_SET_VRING_ENABLE message
  vhost: add API bind a virtq to a specific core
  ixgbe: support VMDq RSS in non-SRIOV environment
  examples/vhost: demonstrate the usage of vhost mq feature
  examples/vhost: add per queue stats

Yuanhan Liu (6):
  vhost-user: add protocol features support
  vhost-user: add VHOST_USER_GET_QUEUE_NUM message
  vhost: vring queue setup for multiple queue support
  vhost-user: handle VHOST_USER_RESET_OWNER correctly
  vhost-user: enable vhost-user multiple queue
  doc: update release note for vhost-user mq support

 doc/guides/rel_notes/release_2_2.rst          |   5 +
 drivers/net/ixgbe/ixgbe_rxtx.c                |  86 +++++-
 drivers/net/virtio/virtio_ethdev.c            |  16 +-
 examples/vhost/main.c                         | 420 +++++++++++++++++---------
 examples/vhost/main.h                         |   3 +-
 lib/librte_ether/rte_ethdev.c                 |  11 +
 lib/librte_vhost/rte_vhost_version.map        |   7 +
 lib/librte_vhost/rte_virtio_net.h             |  38 ++-
 lib/librte_vhost/vhost_rxtx.c                 |  56 +++-
 lib/librte_vhost/vhost_user/vhost-net-user.c  |  27 +-
 lib/librte_vhost/vhost_user/vhost-net-user.h  |   4 +
 lib/librte_vhost/vhost_user/virtio-net-user.c |  83 +++--
 lib/librte_vhost/vhost_user/virtio-net-user.h |  10 +
 lib/librte_vhost/virtio-net.c                 | 181 +++++++----
 14 files changed, 692 insertions(+), 255 deletions(-)

--
1.9.0