To: Maxime Coquelin, Junjie Chen, mtetsuyah@gmail.com
Cc: dev@dpdk.org
From: "Tan, Jianfeng"
Date: Thu, 12 Apr 2018 15:36:00 +0800
Subject: Re: [dpdk-dev] [PATCH v3] net/vhost: fix vhost invalid state
In-Reply-To: <6288ba60-eebb-a743-aa1d-a61c32a07c58@redhat.com>
References: <1523443993-176139-1-git-send-email-junjie.j.chen@intel.com>
 <1523466152-181190-1-git-send-email-junjie.j.chen@intel.com>
 <75c33a22-9b4c-f293-07ee-daad99f3cad0@intel.com>
 <6288ba60-eebb-a743-aa1d-a61c32a07c58@redhat.com>

On 4/12/2018 3:29 PM, Maxime Coquelin wrote:
>
> On 04/12/2018 09:21 AM, Tan, Jianfeng wrote:
>>
>> On 4/12/2018 1:02 AM, Junjie Chen wrote:
>>> dev_start sets *dev_attached* after setting up the queues, which puts
>>> the device into an invalid state since no frontend is attached. Also,
>>> destroy_device sets *started* to zero, which keeps *allow_queuing* at
>>> zero until dev_start is called again. Actually, we should not determine
>>> queue existence from *dev_attached* but from the queue pointers or
>>> another separate variable.
>>>
>>> Fixes: 30a701a53737 ("net/vhost: fix crash when creating vdev dynamically")
>>>
>>> Signed-off-by: Junjie Chen
>>> Tested-by: Jens Freimann
>>
>> Overall, looks great to me except a nit below.
>>
>> Reviewed-by: Jianfeng Tan
>
> Thanks Jianfeng, I can handle the small change while applying.
>
> Can you confirm that it is implied that the queues are already allocated,
> else we wouldn't find the internal resource and would quit earlier (in
> case eth_dev_close is called twice, for example)?

I was referring to i40e_dev_free_queues() and ixgbe_dev_free_queues().
But on second thought, there is no harm in keeping the check.
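To summarize the flag handling this patch aims for, the condition effectively
becomes the sketch below. This is illustrative only: vhost_queues_ready() is a
made-up helper name, while the real logic lives in update_queuing_status() in
the patch quoted further down.

    #include <rte_atomic.h>
    #include <rte_ethdev.h>

    /*
     * Illustrative only: queue existence is judged from the rx/tx queue
     * pointers, while queuing is allowed only when both "started"
     * (eth_dev_start was called) and "dev_attached" (a frontend is
     * connected) are set.
     */
    static int
    vhost_queues_ready(struct rte_eth_dev *dev, rte_atomic32_t *started,
                       rte_atomic32_t *dev_attached)
    {
            if (dev->data->rx_queues == NULL || dev->data->tx_queues == NULL)
                    return 0; /* queues not set up yet */

            return rte_atomic32_read(started) == 1 &&
                   rte_atomic32_read(dev_attached) == 1;
    }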
>
> Thanks,
> Maxime
>
>>
>>> ---
>>> Changes in v3:
>>> - remove useless log in queue status showing
>>> Changes in v2:
>>> - use started to determine vhost queues readiness
>>> - revert setting started to zero in destroy_device
>>>  drivers/net/vhost/rte_eth_vhost.c | 59 +++++++++++++++++++--------------------
>>>  1 file changed, 29 insertions(+), 30 deletions(-)
>>>
>>> diff --git a/drivers/net/vhost/rte_eth_vhost.c b/drivers/net/vhost/rte_eth_vhost.c
>>> index 11b6076..e392d71 100644
>>> --- a/drivers/net/vhost/rte_eth_vhost.c
>>> +++ b/drivers/net/vhost/rte_eth_vhost.c
>>> @@ -528,10 +528,11 @@ update_queuing_status(struct rte_eth_dev *dev)
>>>  	unsigned int i;
>>>  	int allow_queuing = 1;
>>> -	if (rte_atomic32_read(&internal->dev_attached) == 0)
>>> +	if (!dev->data->rx_queues || !dev->data->tx_queues)
>>>  		return;
>>> -	if (rte_atomic32_read(&internal->started) == 0)
>>> +	if (rte_atomic32_read(&internal->started) == 0 ||
>>> +	    rte_atomic32_read(&internal->dev_attached) == 0)
>>>  		allow_queuing = 0;
>>>  	/* Wait until rx/tx_pkt_burst stops accessing vhost device */
>>> @@ -607,13 +608,10 @@ new_device(int vid)
>>>  #endif
>>>  	internal->vid = vid;
>>> -	if (eth_dev->data->rx_queues && eth_dev->data->tx_queues) {
>>> +	if (rte_atomic32_read(&internal->started) == 1)
>>>  		queue_setup(eth_dev, internal);
>>> -		rte_atomic32_set(&internal->dev_attached, 1);
>>> -	} else {
>>> -		RTE_LOG(INFO, PMD, "RX/TX queues have not setup yet\n");
>>> -		rte_atomic32_set(&internal->dev_attached, 0);
>>> -	}
>>> +	else
>>> +		RTE_LOG(INFO, PMD, "RX/TX queues not exist yet\n");
>>>  	for (i = 0; i < rte_vhost_get_vring_num(vid); i++)
>>>  		rte_vhost_enable_guest_notification(vid, i, 0);
>>> @@ -622,6 +620,7 @@ new_device(int vid)
>>>  	eth_dev->data->dev_link.link_status = ETH_LINK_UP;
>>> +	rte_atomic32_set(&internal->dev_attached, 1);
>>>  	update_queuing_status(eth_dev);
>>>  	RTE_LOG(INFO, PMD, "Vhost device %d created\n", vid);
>>> @@ -651,23 +650,24 @@ destroy_device(int vid)
>>>  	eth_dev = list->eth_dev;
>>>  	internal = eth_dev->data->dev_private;
>>> -	rte_atomic32_set(&internal->started, 0);
>>> -	update_queuing_status(eth_dev);
>>>  	rte_atomic32_set(&internal->dev_attached, 0);
>>> +	update_queuing_status(eth_dev);
>>>  	eth_dev->data->dev_link.link_status = ETH_LINK_DOWN;
>>> -	for (i = 0; i < eth_dev->data->nb_rx_queues; i++) {
>>> -		vq = eth_dev->data->rx_queues[i];
>>> -		if (vq == NULL)
>>> -			continue;
>>> -		vq->vid = -1;
>>> -	}
>>> -	for (i = 0; i < eth_dev->data->nb_tx_queues; i++) {
>>> -		vq = eth_dev->data->tx_queues[i];
>>> -		if (vq == NULL)
>>> -			continue;
>>> -		vq->vid = -1;
>>> +	if (eth_dev->data->rx_queues && eth_dev->data->tx_queues) {
>>> +		for (i = 0; i < eth_dev->data->nb_rx_queues; i++) {
>>> +			vq = eth_dev->data->rx_queues[i];
>>> +			if (!vq)
>>> +				continue;
>>> +			vq->vid = -1;
>>> +		}
>>> +		for (i = 0; i < eth_dev->data->nb_tx_queues; i++) {
>>> +			vq = eth_dev->data->tx_queues[i];
>>> +			if (!vq)
>>> +				continue;
>>> +			vq->vid = -1;
>>> +		}
>>>  	}
>>>  	state = vring_states[eth_dev->data->port_id];
>>> @@ -792,11 +792,7 @@ eth_dev_start(struct rte_eth_dev *eth_dev)
>>>  {
>>>  	struct pmd_internal *internal = eth_dev->data->dev_private;
>>> -	if (unlikely(rte_atomic32_read(&internal->dev_attached) == 0)) {
>>> -		queue_setup(eth_dev, internal);
>>> -		rte_atomic32_set(&internal->dev_attached, 1);
>>> -	}
>>> -
>>> +	queue_setup(eth_dev, internal);
>>>  	rte_atomic32_set(&internal->started, 1);
>>>  	update_queuing_status(eth_dev);
>>> @@ -836,10 +832,13 @@ eth_dev_close(struct rte_eth_dev *dev)
>>>  	pthread_mutex_unlock(&internal_list_lock);
>>>  	rte_free(list);
>>> -	for (i = 0; i < dev->data->nb_rx_queues; i++)
>>> -		rte_free(dev->data->rx_queues[i]);
>>> -	for (i = 0; i < dev->data->nb_tx_queues; i++)
>>> -		rte_free(dev->data->tx_queues[i]);
>>> +	if (dev->data->rx_queues)
>>
>> This implies that rx_queues has already been allocated, so I don't think
>> we need this check.
>>
>>> +		for (i = 0; i < dev->data->nb_rx_queues; i++)
>>> +			rte_free(dev->data->rx_queues[i]);
>>> +
>>> +	if (dev->data->tx_queues)
>>> +		for (i = 0; i < dev->data->nb_tx_queues; i++)
>>> +			rte_free(dev->data->tx_queues[i]);
>>>  	rte_free(dev->data->mac_addrs);
>>>  	free(internal->dev_name);
>>
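For reference, the close-path pattern being debated above boils down to the
sketch below. eth_vhost_free_queues() is a made-up name for illustration, and
the guards on the queue arrays are the checks that are "harmless to keep":

    #include <rte_ethdev.h>
    #include <rte_malloc.h>

    /* Illustrative helper (not part of the patch): release per-queue state
     * on close while tolerating the case where queues were never set up. */
    static void
    eth_vhost_free_queues(struct rte_eth_dev *dev)
    {
            uint16_t i;

            if (dev->data->rx_queues)
                    for (i = 0; i < dev->data->nb_rx_queues; i++)
                            rte_free(dev->data->rx_queues[i]);

            if (dev->data->tx_queues)
                    for (i = 0; i < dev->data->nb_tx_queues; i++)
                            rte_free(dev->data->tx_queues[i]);
    }

When the arrays were never allocated, nb_rx_queues/nb_tx_queues should also
be zero, so the loops would not run anyway; the guards are redundant but do
no harm.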