From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path:
Received: from mga14.intel.com (mga14.intel.com [192.55.52.115])
	by dpdk.org (Postfix) with ESMTP id 5561F8D9F
	for ; Tue, 22 Sep 2015 09:29:36 +0200 (CEST)
Received: from orsmga003.jf.intel.com ([10.7.209.27])
	by fmsmga103.fm.intel.com with ESMTP; 22 Sep 2015 00:29:35 -0700
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="5.17,571,1437462000"; d="scan'208";a="649836546"
Received: from yliu-dev.sh.intel.com (HELO yliu-dev) ([10.239.66.60])
	by orsmga003.jf.intel.com with ESMTP; 22 Sep 2015 00:29:29 -0700
Date: Tue, 22 Sep 2015 15:31:32 +0800
From: Yuanhan Liu
To: marcel@redhat.com
Message-ID: <20150922073132.GT2339@yliu-dev.sh.intel.com>
References: <1442589061-19225-1-git-send-email-yuanhan.liu@linux.intel.com>
 <1442589061-19225-4-git-send-email-yuanhan.liu@linux.intel.com>
 <55FEBB92.40403@gmail.com>
 <20150921020619.GN2339@yliu-dev.sh.intel.com>
 <560044CE.4040002@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <560044CE.4040002@gmail.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
Cc: dev@dpdk.org, "Michael S. Tsirkin"
Subject: Re: [dpdk-dev] [PATCH v5 resend 03/12] vhost: vring queue setup for
 multiple queue support
X-BeenThere: dev@dpdk.org
X-Mailman-Version: 2.1.15
Precedence: list
List-Id: patches and discussions about DPDK
X-List-Received-Date: Tue, 22 Sep 2015 07:29:37 -0000

On Mon, Sep 21, 2015 at 08:56:30PM +0300, Marcel Apfelbaum wrote:
> On 09/21/2015 05:06 AM, Yuanhan Liu wrote:
> >On Sun, Sep 20, 2015 at 04:58:42PM +0300, Marcel Apfelbaum wrote:
> >>On 09/18/2015 06:10 PM, Yuanhan Liu wrote:
> >>>All queue pairs, including the default (the first) queue pair,
> >>>are allocated dynamically, when a vring_call message is received
> >>>first time for a specific queue pair.
> >>>
> >>>This is a refactor work for enabling vhost-user multiple queue;
> >>>it should not break anything as it does no functional changes:
> >>>we don't support mq set, so there is only one mq at max.
> >>>
> >>>This patch is based on Changchun's patch.
> >>>
> >>>Signed-off-by: Yuanhan Liu
> >>>---
> >>> lib/librte_vhost/rte_virtio_net.h             |   3 +-
> >>> lib/librte_vhost/vhost_user/virtio-net-user.c |  44 +++++-----
> >>> lib/librte_vhost/virtio-net.c                 | 121 ++++++++++++++++----------
> >>> 3 files changed, 102 insertions(+), 66 deletions(-)
> >>>
> >>>diff --git a/lib/librte_vhost/rte_virtio_net.h b/lib/librte_vhost/rte_virtio_net.h
> >>>index e3a21e5..5dd6493 100644
> >>>--- a/lib/librte_vhost/rte_virtio_net.h
> >>>+++ b/lib/librte_vhost/rte_virtio_net.h
> >>>@@ -96,7 +96,7 @@ struct vhost_virtqueue {
> >>>  * Device structure contains all configuration information relating to the device.
> >>>  */
> >>> struct virtio_net {
> >>>-	struct vhost_virtqueue	*virtqueue[VIRTIO_QNUM];	/**< Contains all virtqueue information. */
> >>>+	struct vhost_virtqueue	*virtqueue[VIRTIO_NET_CTRL_MQ_VQ_PAIRS_MAX];	/**< Contains all virtqueue information. */
> >>> 	struct virtio_memory	*mem;		/**< QEMU memory and memory region information. */
> >>> 	uint64_t		features;	/**< Negotiated feature set. */
> >>> 	uint64_t		protocol_features;	/**< Negotiated protocol feature set. */
> >>>@@ -104,6 +104,7 @@ struct virtio_net {
> >>> 	uint32_t		flags;	/**< Device flags. Only used to check if device is running on data core. */
> >>>#define IF_NAME_SZ (PATH_MAX > IFNAMSIZ ? PATH_MAX : IFNAMSIZ)
> >>> 	char			ifname[IF_NAME_SZ];	/**< Name of the tap device or socket path. */
> >>>+	uint32_t		virt_qp_nb;	/**< number of queue pairs we have allocated */
> >>> 	void			*priv;		/**< private context */
> >>> } __rte_cache_aligned;
> >>>
> >>>diff --git a/lib/librte_vhost/vhost_user/virtio-net-user.c b/lib/librte_vhost/vhost_user/virtio-net-user.c
> >>>index 360254e..e83d279 100644
> >>>--- a/lib/librte_vhost/vhost_user/virtio-net-user.c
> >>>+++ b/lib/librte_vhost/vhost_user/virtio-net-user.c
> >>>@@ -206,25 +206,33 @@ err_mmap:
> >>> }
> >>>
> >>
> >>Hi,
> >>
> >>> static int
> >>>+vq_is_ready(struct vhost_virtqueue *vq)
> >>>+{
> >>>+	return vq && vq->desc &&
> >>>+	       vq->kickfd != -1 &&
> >>>+	       vq->callfd != -1;
> >>
> >> kickfd and callfd are unsigned
> >
> >Hi,
> >
> >I made 4 cleanup patches a few weeks ago, including the patch to
> >define kickfd and callfd as int type, and they have already got
> >the ACK from Huawei Xie and Changchun Ouyang. It's likely that
> >they will be merged, hence I made this patchset based on them.
> >
> >This will also answer the question from your other email: the
> >patches can't be applied.
> 
> Hi,
> Thank you for the response, it makes sense now.
> 
> I have another issue, maybe you can help.
> I have some problems making it work with the OVS/DPDK backend and the
> virtio-net driver in the guest.
> 
> I am using a simple setup:
>     http://wiki.qemu.org/Features/vhost-user-ovs-dpdk
> that connects 2 VMs using OVS's dpdkvhostuser ports (regular
> virtio-net driver in the guest, not the PMD driver).
> 
> The setup worked fine with the previous DPDK MQ implementation (v4);
> however, on this one the traffic stops once I set queues=n in the
> guest.

Hi,

Could you be more specific about that? It would also be helpful if you
could tell me the steps you did for testing, besides those setup steps
you mentioned in the QEMU wiki and this email.

I did a very rough test based on your test guide, and I indeed found an
issue: the IP address assigned by "ifconfig" disappears soon in the
first few times, and after about 2 or 3 resets, it never changes.
(Well, I saw that quite a few times before while trying different QEMU
net devices, so it might be a system configuration issue, or something
else?)

Besides that, it works; for instance, I can wget a big file from the
host.

	--yliu

> (virtio-net uses only one queue when the guest starts, even if QEMU
> has multiple queues.)
> 
> Two steps are required in order to enable multiple queues in OVS.
> 1. Apply the following patch:
>    - https://www.mail-archive.com/dev@openvswitch.org/msg49198.html
>    - It needs merging (I think)
> 2. Configure OVS for multiqueue:
>    - ovs-vsctl set Open_vSwitch . other_config:n-dpdk-rxqs=
>    - ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=
> 3. In order to set queues=n in the guest, use:
>    - ethtool -L eth0 combined 
> 
> Any pointers/ideas would be appreciated.
> 
> Thank you,
> Marcel
> 
> 
> >
> >Sorry for not pointing it out, as I assume Thomas (cc'ed) will apply
> >them soon. And thanks for the review, anyway.
> >
> >	--yliu
> >>
> >>>+}
> >>>+
> >>>+static int
> >>> virtio_is_ready(struct virtio_net *dev)
> >>> {
> >>> 	struct vhost_virtqueue *rvq, *tvq;
> >>>+	uint32_t i;
> >>>
> >>>-	/* mq support in future.*/
> >>>-	rvq = dev->virtqueue[VIRTIO_RXQ];
> >>>-	tvq = dev->virtqueue[VIRTIO_TXQ];
> >>>-	if (rvq && tvq && rvq->desc && tvq->desc &&
> >>>-	    (rvq->kickfd != -1) &&
> >>>-	    (rvq->callfd != -1) &&
> >>>-	    (tvq->kickfd != -1) &&
> >>>-	    (tvq->callfd != -1)) {
> >>>-		RTE_LOG(INFO, VHOST_CONFIG,
> >>>-			"virtio is now ready for processing.\n");
> >>>-		return 1;
> >>>+	for (i = 0; i < dev->virt_qp_nb; i++) {
> >>>+		rvq = dev->virtqueue[i * VIRTIO_QNUM + VIRTIO_RXQ];
> >>>+		tvq = dev->virtqueue[i * VIRTIO_QNUM + VIRTIO_TXQ];
> >>>+
> >>>+		if (!vq_is_ready(rvq) || !vq_is_ready(tvq)) {
> >>>+			RTE_LOG(INFO, VHOST_CONFIG,
> >>>+				"virtio is not ready for processing.\n");
> >>>+			return 0;
> >>>+		}
> >>> 	}
> >>>+
> >>> 	RTE_LOG(INFO, VHOST_CONFIG,
> >>>-		"virtio isn't ready for processing.\n");
> >>>-	return 0;
> >>>+		"virtio is now ready for processing.\n");
> >>>+	return 1;
> >>> }
> >>>
> >>> void
> >>>@@ -290,13 +298,9 @@ user_get_vring_base(struct vhost_device_ctx ctx,
> >>> 	 * sent and only sent in vhost_vring_stop.
> >>> 	 * TODO: cleanup the vring, it isn't usable since here.
> >>> 	 */
> >>>-	if ((dev->virtqueue[VIRTIO_RXQ]->kickfd) >= 0) {
> >>>-		close(dev->virtqueue[VIRTIO_RXQ]->kickfd);
> >>>-		dev->virtqueue[VIRTIO_RXQ]->kickfd = -1;
> >>>-	}
> >>>-	if ((dev->virtqueue[VIRTIO_TXQ]->kickfd) >= 0) {
> >>>-		close(dev->virtqueue[VIRTIO_TXQ]->kickfd);
> >>>-		dev->virtqueue[VIRTIO_TXQ]->kickfd = -1;
> >>>+	if ((dev->virtqueue[state->index]->kickfd) >= 0) {
> >>
> >>always >= 0
> >>
> >>>+		close(dev->virtqueue[state->index]->kickfd);
> >>>+		dev->virtqueue[state->index]->kickfd = -1;
> >>
> >>again unsigned
> >>
> >>> 	}
> >>>
> >>> 	return 0;
> >>>diff --git a/lib/librte_vhost/virtio-net.c b/lib/librte_vhost/virtio-net.c
> >>>index deac6b9..643a92e 100644
> >>>--- a/lib/librte_vhost/virtio-net.c
> >>>+++ b/lib/librte_vhost/virtio-net.c
> >>>@@ -36,6 +36,7 @@
> >>> #include 
> >>> #include 
> >>> #include 
> >>>+#include 
> >>> #include 
> >>> #include 
> >>> #ifdef RTE_LIBRTE_VHOST_NUMA
> >>>@@ -178,6 +179,15 @@ add_config_ll_entry(struct virtio_net_config_ll *new_ll_dev)
> >>>
> >>> }
> >>>
> >>>+static void
> >>>+cleanup_vq(struct vhost_virtqueue *vq)
> >>>+{
> >>>+	if (vq->callfd >= 0)
> >>>+		close(vq->callfd);
> >>>+	if (vq->kickfd >= 0)
> >>>+		close(vq->kickfd);
> >>
> >>both always >= 0
> >>
> >>>+}
> >>>+
> >>> /*
> >>>  * Unmap any memory, close any file descriptors and
> >>>  * free any memory owned by a device.
> >>>@@ -185,6 +195,8 @@ add_config_ll_entry(struct virtio_net_config_ll *new_ll_dev)
> >>> static void
> >>> cleanup_device(struct virtio_net *dev)
> >>> {
> >>>+	uint32_t i;
> >>>+
> >>> 	/* Unmap QEMU memory file if mapped. */
> >>> 	if (dev->mem) {
> >>> 		munmap((void *)(uintptr_t)dev->mem->mapped_address,
> >>>@@ -192,15 +204,10 @@ cleanup_device(struct virtio_net *dev)
> >>> 		free(dev->mem);
> >>> 	}
> >>>
> >>>-	/* Close any event notifiers opened by device. */
> >>>-	if (dev->virtqueue[VIRTIO_RXQ]->callfd >= 0)
> >>>-		close(dev->virtqueue[VIRTIO_RXQ]->callfd);
> >>>-	if (dev->virtqueue[VIRTIO_RXQ]->kickfd >= 0)
> >>>-		close(dev->virtqueue[VIRTIO_RXQ]->kickfd);
> >>>-	if (dev->virtqueue[VIRTIO_TXQ]->callfd >= 0)
> >>>-		close(dev->virtqueue[VIRTIO_TXQ]->callfd);
> >>>-	if (dev->virtqueue[VIRTIO_TXQ]->kickfd >= 0)
> >>>-		close(dev->virtqueue[VIRTIO_TXQ]->kickfd);
> >>>+	for (i = 0; i < dev->virt_qp_nb; i++) {
> >>>+		cleanup_vq(dev->virtqueue[i * VIRTIO_QNUM + VIRTIO_RXQ]);
> >>>+		cleanup_vq(dev->virtqueue[i * VIRTIO_QNUM + VIRTIO_TXQ]);
> >>>+	}
> >>> }
> >>>
> >>> /*
> >>>@@ -209,9 +216,11 @@ cleanup_device(struct virtio_net *dev)
> >>> static void
> >>> free_device(struct virtio_net_config_ll *ll_dev)
> >>> {
> >>>-	/* Free any malloc'd memory */
> >>>-	rte_free(ll_dev->dev.virtqueue[VIRTIO_RXQ]);
> >>>-	rte_free(ll_dev->dev.virtqueue[VIRTIO_TXQ]);
> >>>+	uint32_t i;
> >>>+
> >>>+	for (i = 0; i < ll_dev->dev.virt_qp_nb; i++)
> >>>+		rte_free(ll_dev->dev.virtqueue[i * VIRTIO_QNUM]);
> >>>+
> >>> 	rte_free(ll_dev);
> >>> }
> >>>
> >>>@@ -244,6 +253,50 @@ rm_config_ll_entry(struct virtio_net_config_ll *ll_dev,
> >>> 	}
> >>> }
> >>>
> >>>+static void
> >>>+init_vring_queue(struct vhost_virtqueue *vq)
> >>>+{
> >>>+	memset(vq, 0, sizeof(struct vhost_virtqueue));
> >>>+
> >>>+	vq->kickfd = -1;
> >>>+	vq->callfd = -1;
> >>
> >>same here
> >>
> >>>+
> >>>+	/* Backends are set to -1 indicating an inactive device. */
> >>>+	vq->backend = -1;
> >>>+}
> >>>+
> >>>+static void
> >>>+init_vring_queue_pair(struct virtio_net *dev, uint32_t qp_idx)
> >>>+{
> >>>+	init_vring_queue(dev->virtqueue[qp_idx * VIRTIO_QNUM + VIRTIO_RXQ]);
> >>>+	init_vring_queue(dev->virtqueue[qp_idx * VIRTIO_QNUM + VIRTIO_TXQ]);
> >>>+}
> >>>+
> >>>+static int
> >>>+alloc_vring_queue_pair(struct virtio_net *dev, uint32_t qp_idx)
> >>>+{
> >>>+	struct vhost_virtqueue *virtqueue = NULL;
> >>>+	uint32_t virt_rx_q_idx = qp_idx * VIRTIO_QNUM + VIRTIO_RXQ;
> >>>+	uint32_t virt_tx_q_idx = qp_idx * VIRTIO_QNUM + VIRTIO_TXQ;
> >>>+
> >>>+	virtqueue = rte_malloc(NULL,
> >>>+			       sizeof(struct vhost_virtqueue) * VIRTIO_QNUM, 0);
> >>>+	if (virtqueue == NULL) {
> >>>+		RTE_LOG(ERR, VHOST_CONFIG,
> >>>+			"Failed to allocate memory for virt qp:%d.\n", qp_idx);
> >>>+		return -1;
> >>>+	}
> >>>+
> >>>+	dev->virtqueue[virt_rx_q_idx] = virtqueue;
> >>>+	dev->virtqueue[virt_tx_q_idx] = virtqueue + VIRTIO_TXQ;
> >>>+
> >>>+	init_vring_queue_pair(dev, qp_idx);
> >>>+
> >>>+	dev->virt_qp_nb += 1;
> >>>+
> >>>+	return 0;
> >>>+}
> >>>+
> >>> /*
> >>>  * Initialise all variables in device structure.
> >>>  */
> >>>@@ -251,6 +304,7 @@ static void
> >>> init_device(struct virtio_net *dev)
> >>> {
> >>> 	uint64_t vq_offset;
> >>>+	uint32_t i;
> >>>
> >>> 	/*
> >>> 	 * Virtqueues have already been malloced so
> >>>@@ -261,17 +315,9 @@ init_device(struct virtio_net *dev)
> >>> 	/* Set everything to 0. */
> >>> 	memset((void *)(uintptr_t)((uint64_t)(uintptr_t)dev + vq_offset), 0,
> >>> 		(sizeof(struct virtio_net) - (size_t)vq_offset));
> >>>-	memset(dev->virtqueue[VIRTIO_RXQ], 0, sizeof(struct vhost_virtqueue));
> >>>-	memset(dev->virtqueue[VIRTIO_TXQ], 0, sizeof(struct vhost_virtqueue));
> >>>
> >>>-	dev->virtqueue[VIRTIO_RXQ]->kickfd = -1;
> >>>-	dev->virtqueue[VIRTIO_RXQ]->callfd = -1;
> >>>-	dev->virtqueue[VIRTIO_TXQ]->kickfd = -1;
> >>>-	dev->virtqueue[VIRTIO_TXQ]->callfd = -1;
> >>>-
> >>>-	/* Backends are set to -1 indicating an inactive device. */
> >>>-	dev->virtqueue[VIRTIO_RXQ]->backend = VIRTIO_DEV_STOPPED;
> >>>-	dev->virtqueue[VIRTIO_TXQ]->backend = VIRTIO_DEV_STOPPED;
> >>>+	for (i = 0; i < dev->virt_qp_nb; i++)
> >>>+		init_vring_queue_pair(dev, i);
> >>> }
> >>>
> >>> /*
> >>>@@ -283,7 +329,6 @@ static int
> >>> new_device(struct vhost_device_ctx ctx)
> >>> {
> >>> 	struct virtio_net_config_ll *new_ll_dev;
> >>>-	struct vhost_virtqueue *virtqueue_rx, *virtqueue_tx;
> >>>
> >>> 	/* Setup device and virtqueues. */
> >>> 	new_ll_dev = rte_malloc(NULL, sizeof(struct virtio_net_config_ll), 0);
> >>>@@ -294,28 +339,6 @@ new_device(struct vhost_device_ctx ctx)
> >>> 		return -1;
> >>> 	}
> >>>
> >>>-	virtqueue_rx = rte_malloc(NULL, sizeof(struct vhost_virtqueue), 0);
> >>>-	if (virtqueue_rx == NULL) {
> >>>-		rte_free(new_ll_dev);
> >>>-		RTE_LOG(ERR, VHOST_CONFIG,
> >>>-			"(%"PRIu64") Failed to allocate memory for rxq.\n",
> >>>-			ctx.fh);
> >>>-		return -1;
> >>>-	}
> >>>-
> >>>-	virtqueue_tx = rte_malloc(NULL, sizeof(struct vhost_virtqueue), 0);
> >>>-	if (virtqueue_tx == NULL) {
> >>>-		rte_free(virtqueue_rx);
> >>>-		rte_free(new_ll_dev);
> >>>-		RTE_LOG(ERR, VHOST_CONFIG,
> >>>-			"(%"PRIu64") Failed to allocate memory for txq.\n",
> >>>-			ctx.fh);
> >>>-		return -1;
> >>>-	}
> >>>-
> >>>-	new_ll_dev->dev.virtqueue[VIRTIO_RXQ] = virtqueue_rx;
> >>>-	new_ll_dev->dev.virtqueue[VIRTIO_TXQ] = virtqueue_tx;
> >>>-
> >>> 	/* Initialise device and virtqueues. */
> >>> 	init_device(&new_ll_dev->dev);
> >>>
> >>>@@ -680,13 +703,21 @@ set_vring_call(struct vhost_device_ctx ctx, struct vhost_vring_file *file)
> >>> {
> >>> 	struct virtio_net *dev;
> >>> 	struct vhost_virtqueue *vq;
> >>>+	uint32_t cur_qp_idx = file->index / VIRTIO_QNUM;
> >>>
> >>> 	dev = get_device(ctx);
> >>> 	if (dev == NULL)
> >>> 		return -1;
> >>>
> >>>+	/* alloc vring queue pair if it is a new queue pair */
> >>>+	if (cur_qp_idx + 1 > dev->virt_qp_nb) {
> >>>+		if (alloc_vring_queue_pair(dev, cur_qp_idx) < 0)
> >>>+			return -1;
> >>>+	}
> >>>+
> >>> 	/* file->index refers to the queue index. The txq is 1, rxq is 0. */
> >>> 	vq = dev->virtqueue[file->index];
> >>>+	assert(vq != NULL);
> >>>
> >>> 	if (vq->callfd >= 0)
> >>> 		close(vq->callfd);
> >>>
> >>
> >>
> >>I hope I helped,
> >>Thanks,
> >>Marcel
> >>