From mboxrd@z Thu Jan  1 00:00:00 1970
From: Junjie Chen
To: jianfeng.tan@intel.com, maxime.coquelin@redhat.com, mtetsuyah@gmail.com
Cc: dev@dpdk.org, Junjie Chen
Date: Thu, 29 Mar 2018 23:35:44 +0800
Message-Id: <20180329153544.270488-1-junjie.j.chen@intel.com>
X-Mailer: git-send-email 2.16.0
In-Reply-To: <1522166726-42025-1-git-send-email-junjie.j.chen@intel.com>
References: <1522166726-42025-1-git-send-email-junjie.j.chen@intel.com>
Subject: [dpdk-dev] [PATCH v2] net/vhost: fix segfault when creating vdev dynamically
List-Id: DPDK patches and discussions

When creating a vdev dynamically, the vhost PMD starts the driver directly
without checking whether the TX/RX queues are ready, which causes a
segmentation fault when the vhost library accesses the queues. This patch
adds a flag to record whether the queues have been set up, and adds a
driver start call to dev_start so the user can start the device after
setting up the queues.
Signed-off-by: Junjie Chen
---
Changes in v2:
- check queue status in new_device, create queue in dev_start if not setup yet

 drivers/net/vhost/rte_eth_vhost.c | 73 ++++++++++++++++++++++++++++-----------
 1 file changed, 53 insertions(+), 20 deletions(-)

diff --git a/drivers/net/vhost/rte_eth_vhost.c b/drivers/net/vhost/rte_eth_vhost.c
index 3aae01c39..41410fa5a 100644
--- a/drivers/net/vhost/rte_eth_vhost.c
+++ b/drivers/net/vhost/rte_eth_vhost.c
@@ -117,7 +117,9 @@ struct pmd_internal {
 	char *dev_name;
 	char *iface_name;
 	uint16_t max_queues;
+	uint16_t vid;

 	rte_atomic32_t started;
+	rte_atomic32_t once;
 };

 struct internal_list {
@@ -580,21 +582,27 @@ new_device(int vid)
 	eth_dev->data->numa_node = newnode;
 #endif

-	for (i = 0; i < eth_dev->data->nb_rx_queues; i++) {
-		vq = eth_dev->data->rx_queues[i];
-		if (vq == NULL)
-			continue;
-		vq->vid = vid;
-		vq->internal = internal;
-		vq->port = eth_dev->data->port_id;
-	}
-	for (i = 0; i < eth_dev->data->nb_tx_queues; i++) {
-		vq = eth_dev->data->tx_queues[i];
-		if (vq == NULL)
-			continue;
-		vq->vid = vid;
-		vq->internal = internal;
-		vq->port = eth_dev->data->port_id;
+	if (eth_dev->data->rx_queues && eth_dev->data->tx_queues) {
+		for (i = 0; i < eth_dev->data->nb_rx_queues; i++) {
+			vq = eth_dev->data->rx_queues[i];
+			if (!vq)
+				continue;
+			vq->vid = vid;
+			vq->internal = internal;
+			vq->port = eth_dev->data->port_id;
+		}
+		for (i = 0; i < eth_dev->data->nb_tx_queues; i++) {
+			vq = eth_dev->data->tx_queues[i];
+			if (!vq)
+				continue;
+			vq->vid = vid;
+			vq->internal = internal;
+			vq->port = eth_dev->data->port_id;
+		}
+	} else {
+		RTE_LOG(INFO, PMD, "RX/TX queues have not setup yet\n");
+		internal->vid = vid;
+		rte_atomic32_set(&internal->once, 0);
 	}

 	for (i = 0; i < rte_vhost_get_vring_num(vid); i++)
@@ -605,7 +613,8 @@ new_device(int vid)
 	eth_dev->data->dev_link.link_status = ETH_LINK_UP;

 	rte_atomic32_set(&internal->dev_attached, 1);
-	update_queuing_status(eth_dev);
+	if (likely(rte_atomic32_read(&internal->once) == 1))
+		update_queuing_status(eth_dev);

 	RTE_LOG(INFO, PMD, "Vhost device %d created\n", vid);

@@ -770,12 +779,34 @@ rte_eth_vhost_get_vid_from_port_id(uint16_t port_id)
 }

 static int
-eth_dev_start(struct rte_eth_dev *dev)
+eth_dev_start(struct rte_eth_dev *eth_dev)
 {
-	struct pmd_internal *internal = dev->data->dev_private;
+	struct pmd_internal *internal = eth_dev->data->dev_private;
+	struct vhost_queue *vq;
+	int i;
+
+	if (unlikely(rte_atomic32_read(&internal->once) == 0)) {
+		for (i = 0; i < eth_dev->data->nb_rx_queues; i++) {
+			vq = eth_dev->data->rx_queues[i];
+			if (!vq)
+				continue;
+			vq->vid = internal->vid;
+			vq->internal = internal;
+			vq->port = eth_dev->data->port_id;
+		}
+		for (i = 0; i < eth_dev->data->nb_tx_queues; i++) {
+			vq = eth_dev->data->tx_queues[i];
+			if (!vq)
+				continue;
+			vq->vid = internal->vid;
+			vq->internal = internal;
+			vq->port = eth_dev->data->port_id;
+		}
+		rte_atomic32_set(&internal->once, 1);
+	}

 	rte_atomic32_set(&internal->started, 1);
-	update_queuing_status(dev);
+	update_queuing_status(eth_dev);

 	return 0;
 }
@@ -786,7 +817,9 @@ eth_dev_stop(struct rte_eth_dev *dev)
 	struct pmd_internal *internal = dev->data->dev_private;

 	rte_atomic32_set(&internal->started, 0);
-	update_queuing_status(dev);
+
+	if (likely(rte_atomic32_read(&internal->once) == 1))
+		update_queuing_status(dev);
 }

 static void
--
2.16.0