From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Fri, 18 Sep 2015 23:07:05 +0800
From: Yuanhan Liu <yuanhan.liu@linux.intel.com>
To: dev@dpdk.org
Cc: "Michael S. Tsirkin"
Subject: Re: [dpdk-dev] [PATCH v5 00/12] vhost-user multiple queues enabling
Message-ID: <20150918150705.GM2339@yliu-dev.sh.intel.com>
In-Reply-To: <1442588473-13122-1-git-send-email-yuanhan.liu@linux.intel.com>
References: <1442588473-13122-1-git-send-email-yuanhan.liu@linux.intel.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
User-Agent: Mutt/1.5.21 (2010-09-15)
List-Id: patches and discussions about DPDK

Sorry that I typed the wrong email address for Changchun; I will resend
them. Sorry for the noise.

	--yliu

On Fri, Sep 18, 2015 at 11:01:01PM +0800, Yuanhan Liu wrote:
> This patch set enables vhost-user multiple queues.
> 
> Overview
> ========
> 
> It depends on some QEMU patches that, hopefully, will be merged soon.
> Those QEMU patches introduce some new vhost-user messages for the
> vhost-user mq enabling negotiation. Here are the main negotiation steps
> (QEMU as master, and DPDK vhost-user as slave):
> 
> - Master queries features from the slave with VHOST_USER_GET_FEATURES.
> 
> - Master checks whether VHOST_USER_F_PROTOCOL_FEATURES is present. If not,
>   mq is not supported. (See patch 1 for why VHOST_USER_F_PROTOCOL_FEATURES
>   is introduced.)
> 
> - Master then sends another command, VHOST_USER_GET_QUEUE_NUM, to query
>   how many queues the slave supports.
> 
>   Master compares the result with the requested queue number; QEMU exits
>   if the former is smaller.
> 
> - Master then initializes all queue pairs by sending a set of vhost-user
>   commands, including VHOST_USER_SET_VRING_CALL, which triggers the slave
>   to do the related vring setup, such as vring allocation.
> 
> 
> At this point, all necessary initialization and negotiation are done, and
> the master can later send another message, VHOST_USER_SET_VRING_ENABLE,
> to enable/disable a specific queue dynamically.
> 
> 
> Patchset
> ========
> 
> Patches 1-7 are all preparation work for enabling mq; they are all atomic
> changes, designed not to break anything.
> 
> Patch 8 actually enables the mq feature, by setting two key feature flags.
> 
> Patches 9-12 demonstrate the mq feature.
> 
> 
> Testing
> =======
> 
> Host side
> ----------
> 
> - # Start vhost-switch
> 
>   sudo mount -t hugetlbfs nodev /mnt/huge
>   sudo modprobe uio
>   sudo insmod $RTE_SDK/$RTE_TARGET/kmod/igb_uio.ko
> 
>   sudo $RTE_SDK/tools/dpdk_nic_bind.py --bind igb_uio 0000:08:00.0
> 
>   sudo $RTE_SDK/examples/vhost/build/vhost-switch -c 0xf0 -n 4 \
>       --huge-dir /mnt/huge --socket-mem 2048,0 -- -p 1 --vm2vm 0 \
>       --dev-basename usvhost --rxq 2
> 
>   # The above command generates a usvhost socket file in the current
>   # directory. You could also specify the "--stats 1" option to enable
>   # stats dumping.
> 
> 
> - # Start qemu
> 
>   sudo mount -t hugetlbfs nodev $HOME/hugetlbfs
>   $QEMU_DIR/x86_64-softmmu/qemu-system-x86_64 -machine accel=kvm -m 4G \
>       -object memory-backend-file,id=mem,size=4G,mem-path=$HOME/hugetlbfs,share=on \
>       -numa node,memdev=mem -chardev socket,id=chr0,path=/path/to/usvhost \
>       -netdev vhost-user,id=net0,chardev=chr0,vhostforce,queues=2 \
>       -device virtio-net-pci,netdev=net0,mq=on,vectors=6,mac=52:54:00:12:34:58,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off \
>       -hda $HOME/iso/fc-22-x86_64.img -smp 10 -cpu core2duo,+sse3,+sse4.1,+sse4.2
> 
> 
> Guest side
> ----------
> 
>   modprobe uio
>   insmod $RTE_SDK/$RTE_TARGET/kmod/igb_uio.ko
>   echo 1024 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
>   ./tools/dpdk_nic_bind.py --bind igb_uio 00:03.0
> 
>   $RTE_SDK/$RTE_TARGET/app/testpmd -c 1f -n 4 -- --rxq=2 --txq=2 \
>       --nb-cores=4 -i --disable-hw-vlan --txqflags 0xf00
> 
>   > set fwd mac
>   > start tx_first
> 
> 
> After those setups, you can then use a packet generator for packet
> tx/rx testing.
> 
> ---
> Changchun Ouyang (7):
>   vhost: rxtx: prepare work for multiple queue support
>   vhost: add VHOST_USER_SET_VRING_ENABLE message
>   virtio: resolve for control queue
>   vhost: add API bind a virtq to a specific core
>   ixgbe: support VMDq RSS in non-SRIOV environment
>   examples/vhost: demonstrate the usage of vhost mq feature
>   examples/vhost: add per queue stats
> 
> Yuanhan Liu (5):
>   vhost-user: add protocol features support
>   vhost-user: add VHOST_USER_GET_QUEUE_NUM message
>   vhost: vring queue setup for multiple queue support
>   vhost-user: handle VHOST_USER_RESET_OWNER correctly
>   vhost-user: enable vhost-user multiple queue
> 
>  drivers/net/ixgbe/ixgbe_rxtx.c                |  86 +++++-
>  drivers/net/virtio/virtio_ethdev.c            |  12 +-
>  examples/vhost/main.c                         | 420 +++++++++++++++++---------
>  examples/vhost/main.h                         |   3 +-
>  lib/librte_ether/rte_ethdev.c                 |  11 +
>  lib/librte_vhost/rte_vhost_version.map        |   7 +
>  lib/librte_vhost/rte_virtio_net.h             |  30 +-
>  lib/librte_vhost/vhost_rxtx.c                 |  56 +++-
>  lib/librte_vhost/vhost_user/vhost-net-user.c  |  27 +-
>  lib/librte_vhost/vhost_user/vhost-net-user.h  |   4 +
>  lib/librte_vhost/vhost_user/virtio-net-user.c |  79 +++--
>  lib/librte_vhost/vhost_user/virtio-net-user.h |  10 +
>  lib/librte_vhost/virtio-net.c                 | 158 +++++++---
>  13 files changed, 659 insertions(+), 244 deletions(-)
> 
> -- 
> 1.9.0
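
For illustration, here is a minimal, self-contained C sketch of the slave
side of the negotiation described in the quoted cover letter. The
feature-bit value VHOST_USER_F_PROTOCOL_FEATURES (30) follows the
vhost-user protocol; the struct layouts, handler names, and the
VHOST_MAX_QUEUE_PAIRS limit below are assumptions made for this sketch,
not the actual librte_vhost implementation.

/*
 * Sketch of the slave side of the vhost-user mq negotiation.
 * Only the feature-bit value is taken from the vhost-user protocol;
 * the rest is illustrative.
 */
#include <stdbool.h>
#include <stdint.h>

#define VHOST_USER_F_PROTOCOL_FEATURES	30	/* gates the protocol-feature messages */
#define VHOST_MAX_QUEUE_PAIRS		8	/* assumed slave limit */

struct vring_state {
	bool enabled;		/* toggled by VHOST_USER_SET_VRING_ENABLE */
	/* callfd, kickfd, vring addresses, ... */
};

struct vhost_dev {
	uint64_t features;	/* virtio feature bits offered to the master */
	uint32_t nr_vrings;
	struct vring_state vring[VHOST_MAX_QUEUE_PAIRS * 2];
};

/* VHOST_USER_GET_FEATURES: advertise PROTOCOL_FEATURES so the master
 * knows it may continue with the mq negotiation. */
static uint64_t handle_get_features(const struct vhost_dev *dev)
{
	return dev->features | (1ULL << VHOST_USER_F_PROTOCOL_FEATURES);
}

/* VHOST_USER_GET_QUEUE_NUM: report how many queue pairs we support. */
static uint64_t handle_get_queue_num(void)
{
	return VHOST_MAX_QUEUE_PAIRS;
}

/* VHOST_USER_SET_VRING_ENABLE: enable or disable one virtqueue at runtime. */
static void handle_set_vring_enable(struct vhost_dev *dev,
				    uint32_t vring_idx, bool enable)
{
	if (vring_idx < dev->nr_vrings)
		dev->vring[vring_idx].enabled = enable;
}

int main(void)
{
	struct vhost_dev dev = { .features = 0, .nr_vrings = 4 };
	uint32_t requested_qp = 2;	/* matches "queues=2" in the qemu command */

	/* Step 1: master queries features; without PROTOCOL_FEATURES, no mq. */
	if (!(handle_get_features(&dev) & (1ULL << VHOST_USER_F_PROTOCOL_FEATURES)))
		return 1;

	/* Step 2: master compares the requested queue number with ours;
	 * qemu exits if it asked for more than the slave supports. */
	if (requested_qp > handle_get_queue_num())
		return 1;

	/* Step 3: after per-vring setup (SET_VRING_CALL etc.), each vring
	 * can be enabled or disabled dynamically. */
	for (uint32_t i = 0; i < requested_qp * 2; i++)
		handle_set_vring_enable(&dev, i, true);

	return 0;
}

Gating mq behind VHOST_USER_F_PROTOCOL_FEATURES keeps old slaves working:
a slave that never advertises the bit is simply treated as single-queue,
and the master never sends the new messages to it.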