From mboxrd@z Thu Jan  1 00:00:00 1970
From: Ferruh Yigit
To: Yuanhan Liu
Cc: dev@dpdk.org, Tetsuya Mukawa
Subject: Re: [dpdk-dev] [PATCH] vhost: add support for dynamic vhost PMD creation
Date: Tue, 10 May 2016 18:11:44 +0100
Message-ID: <57321650.6090604@intel.com>
In-Reply-To: <20160509213124.GK5641@yliu-dev.sh.intel.com>
References: <1462471869-4378-1-git-send-email-ferruh.yigit@intel.com>
 <20160509213124.GK5641@yliu-dev.sh.intel.com>

On 5/9/2016 10:31 PM, Yuanhan Liu wrote:
> On Thu, May 05, 2016 at 07:11:09PM +0100, Ferruh Yigit wrote:
>> Add rte_eth_from_vhost() API to create vhost PMD dynamically from
>> applications.
>
> This sounds like a good idea to me. It would be better if you could
> name a concrete usage for it, though.
>
>>
>> Signed-off-by: Ferruh Yigit
>> ---
>>  drivers/net/vhost/rte_eth_vhost.c           | 117 ++++++++++++++++++++++++++++
>>  drivers/net/vhost/rte_eth_vhost.h           |  19 +++++
>>  drivers/net/vhost/rte_pmd_vhost_version.map |   7 ++
>>  3 files changed, 143 insertions(+)
>>
>> diff --git a/drivers/net/vhost/rte_eth_vhost.c b/drivers/net/vhost/rte_eth_vhost.c
>> index 310cbef..c860ab8 100644
>> --- a/drivers/net/vhost/rte_eth_vhost.c
>> +++ b/drivers/net/vhost/rte_eth_vhost.c
>> @@ -796,6 +796,123 @@ error:
>>  	return -1;
>>  }
>>
>> +static int
>> +rte_eth_from_vhost_create(const char *name, char *iface_name,
>
> It's not a public function, so don't name it with the "rte_" prefix.
>
>> +		const unsigned int numa_node, struct rte_mempool *mb_pool)
>> +{
>> +	struct rte_eth_dev_data *data = NULL;
>> +	struct rte_eth_dev *eth_dev = NULL;
>> +	struct pmd_internal *internal = NULL;
>> +	struct internal_list *list;
>> +	int nb_queues = 1;
>> +	uint16_t nb_rx_queues = nb_queues;
>> +	uint16_t nb_tx_queues = nb_queues;
>> +	struct vhost_queue *vq;
>> +	int i;
>> +
>> +	int port_id = eth_dev_vhost_create(name, iface_name, nb_queues,
>> +			numa_node);
>> +
>> +	if (port_id < 0)
>> +		return -1;
>> +
>> +	eth_dev = &rte_eth_devices[port_id];
>> +	data = eth_dev->data;
>> +	internal = data->dev_private;
>> +	list = find_internal_resource(internal->iface_name);
>> +
>> +	data->rx_queues = rte_zmalloc_socket(name,
>> +			sizeof(void *) * nb_rx_queues, 0, numa_node);
>> +	if (data->rx_queues == NULL)
>> +		goto error;
>> +
>> +	data->tx_queues = rte_zmalloc_socket(name,
>> +			sizeof(void *) * nb_tx_queues, 0, numa_node);
>> +	if (data->tx_queues == NULL)
>> +		goto error;
>> +
>> +	for (i = 0; i < nb_rx_queues; i++) {
>> +		vq = rte_zmalloc_socket(NULL, sizeof(struct vhost_queue),
>> +				RTE_CACHE_LINE_SIZE, numa_node);
>> +		if (vq == NULL) {
>> +			RTE_LOG(ERR, PMD,
>> +				"Failed to allocate memory for rx queue\n");
>> +			goto error;
>> +		}
>> +		vq->mb_pool = mb_pool;
>> +		vq->virtqueue_id = i * VIRTIO_QNUM + VIRTIO_TXQ;
>> +		data->rx_queues[i] = vq;
>> +	}
>
> I would invoke eth_rx_queue_setup() here, to remove the duplicated
> effort of queue allocation and initialization.
>
>> +
>> +	for (i = 0; i < nb_tx_queues; i++) {
>> +		vq = rte_zmalloc_socket(NULL, sizeof(struct vhost_queue),
>> +				RTE_CACHE_LINE_SIZE, numa_node);
>> +		if (vq == NULL) {
>> +			RTE_LOG(ERR, PMD,
>> +				"Failed to allocate memory for tx queue\n");
>> +			goto error;
>> +		}
>> +		vq->mb_pool = mb_pool;
>
> A Tx queue doesn't need an mbuf pool. And, ditto, call
> eth_tx_queue_setup() instead.
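>
> Something like below, say (an untested sketch; it assumes the
> eth_rx_queue_setup()/eth_tx_queue_setup() callbacks in this file keep
> their current prototypes, and that the rx_queues/tx_queues array
> allocation above stays as it is):
>
> 	for (i = 0; i < nb_rx_queues; i++) {
> 		/* allocates the vhost_queue, assigns mb_pool and the
> 		 * virtqueue id, and stores it in data->rx_queues[i];
> 		 * nb_rx_desc and rx_conf are unused by this PMD */
> 		if (eth_rx_queue_setup(eth_dev, i, 0, numa_node,
> 				NULL, mb_pool) < 0)
> 			goto error;
> 	}
>
> 	for (i = 0; i < nb_tx_queues; i++) {
> 		/* same as above, but no mempool is needed for Tx */
> 		if (eth_tx_queue_setup(eth_dev, i, 0, numa_node,
> 				NULL) < 0)
> 			goto error;
> 	}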
>
>> +int
>> +rte_eth_from_vhost(const char *name, char *iface_name,
>> +		const unsigned int numa_node, struct rte_mempool *mb_pool)
>
> That would make this API very limited. Say we want to extend the
> vhost PMD in the future; we could easily do that by adding a few more
> vdev options: you could reference my patch [0], which adds the client
> and reconnect options. But here you hardcode everything that is
> needed so far to create a vhost-pmd eth device; adding something new
> would imply an API breakage in the future.
>
> So, how about taking the vdev options as the argument of this API?
> That would be friendly to future extensions without breaking the API.
>
> [0]: http://dpdk.org/dev/patchwork/patch/12608/
>
>> +/**
>> + * API to create vhost PMD
>> + *
>> + * @param name
>> + *  Vhost device name
>> + * @param iface_name
>> + *  Vhost interface name
>> + * @param numa_node
>> + *  Socket id
>> + * @param mb_pool
>> + *  Memory pool
>> + *
>> + * @return
>> + *  - On success, port_id.
>> + *  - On failure, a negative value.
>> + */
>
> Hmmm, too simple.
>
> 	--yliu
>

Hi Yuanhan,

Thank you for the review, I will send a new version of the patch with
the above issues addressed.

Thanks,
ferruh
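
PS: for taking the vdev options as the argument, one possibility (a
rough, hypothetical prototype for the next version, not what this
patch implements) could be something like:

	/*
	 * 'params' takes the same option string as the --vdev
	 * argument, e.g. "iface=/tmp/sock0,queues=1", so future vdev
	 * options (client, reconnect, ...) can be added without
	 * changing the prototype. mb_pool stays an explicit parameter
	 * since Rx queue setup needs a mempool.
	 */
	int
	rte_eth_from_vhost(const char *name, struct rte_mempool *mb_pool,
			const char *params);

	/* usage: */
	port_id = rte_eth_from_vhost("eth_vhost0", mb_pool,
			"iface=/tmp/sock0,queues=1");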