From: Yuanhan Liu
To: Ferruh Yigit
Cc: dev@dpdk.org, Tetsuya Mukawa
Date: Mon, 9 May 2016 14:31:24 -0700
Subject: Re: [dpdk-dev] [PATCH] vhost: add support for dynamic vhost PMD creation
Message-ID: <20160509213124.GK5641@yliu-dev.sh.intel.com>
In-Reply-To: <1462471869-4378-1-git-send-email-ferruh.yigit@intel.com>
References: <1462471869-4378-1-git-send-email-ferruh.yigit@intel.com>

On Thu, May 05, 2016 at 07:11:09PM +0100, Ferruh Yigit wrote:
> Add rte_eth_from_vhost() API to create vhost PMD dynamically from
> applications.

This sounds like a good idea to me. It would be better if you could name a
good usage for it, though. (A rough sketch of how I would expect an
application to call it is at the end of this mail.)

>
> Signed-off-by: Ferruh Yigit
> ---
>  drivers/net/vhost/rte_eth_vhost.c           | 117 ++++++++++++++++++++++++++++
>  drivers/net/vhost/rte_eth_vhost.h           |  19 +++++
>  drivers/net/vhost/rte_pmd_vhost_version.map |   7 ++
>  3 files changed, 143 insertions(+)
>
> diff --git a/drivers/net/vhost/rte_eth_vhost.c b/drivers/net/vhost/rte_eth_vhost.c
> index 310cbef..c860ab8 100644
> --- a/drivers/net/vhost/rte_eth_vhost.c
> +++ b/drivers/net/vhost/rte_eth_vhost.c
> @@ -796,6 +796,123 @@ error:
>  	return -1;
>  }
>
> +static int
> +rte_eth_from_vhost_create(const char *name, char *iface_name,

It's not a public function, so don't name it with the "rte_" prefix.

> +	const unsigned int numa_node, struct rte_mempool *mb_pool)
> +{
> +	struct rte_eth_dev_data *data = NULL;
> +	struct rte_eth_dev *eth_dev = NULL;
> +	struct pmd_internal *internal = NULL;
> +	struct internal_list *list;
> +	int nb_queues = 1;
> +	uint16_t nb_rx_queues = nb_queues;
> +	uint16_t nb_tx_queues = nb_queues;
> +	struct vhost_queue *vq;
> +	int i;
> +
> +	int port_id = eth_dev_vhost_create(name, iface_name, nb_queues,
> +			numa_node);
> +
> +	if (port_id < 0)
> +		return -1;
> +
> +	eth_dev = &rte_eth_devices[port_id];
> +	data = eth_dev->data;
> +	internal = data->dev_private;
> +	list = find_internal_resource(internal->iface_name);
> +
> +	data->rx_queues = rte_zmalloc_socket(name,
> +			sizeof(void *) * nb_rx_queues, 0, numa_node);
> +	if (data->rx_queues == NULL)
> +		goto error;
> +
> +	data->tx_queues = rte_zmalloc_socket(name,
> +			sizeof(void *) * nb_tx_queues, 0, numa_node);
> +	if (data->tx_queues == NULL)
> +		goto error;
> +
> +	for (i = 0; i < nb_rx_queues; i++) {
> +		vq = rte_zmalloc_socket(NULL, sizeof(struct vhost_queue),
> +				RTE_CACHE_LINE_SIZE, numa_node);
> +		if (vq == NULL) {
> +			RTE_LOG(ERR, PMD,
> +				"Failed to allocate memory for rx queue\n");
> +			goto error;
> +		}
> +		vq->mb_pool = mb_pool;
> +		vq->virtqueue_id = i * VIRTIO_QNUM + VIRTIO_TXQ;
> +		data->rx_queues[i] = vq;
> +	}

I would invoke eth_rx_queue_setup() here, to remove the duplicated effort
of queue allocation and initialization; roughly like the sketch below.
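
Untested sketch of what I mean, assuming the existing eth_rx_queue_setup()
and eth_tx_queue_setup() callbacks of this PMD and the local variables of
your patch (IIRC the desc/conf arguments are ignored by this PMD, so
0/NULL should do):

	/* Reuse the PMD's own queue setup callbacks instead of open-coding
	 * the vhost_queue allocation for both Rx and Tx. */
	for (i = 0; i < nb_rx_queues; i++) {
		if (eth_rx_queue_setup(eth_dev, i, 0, numa_node,
				       NULL, mb_pool) != 0)
			goto error;
	}

	for (i = 0; i < nb_tx_queues; i++) {
		if (eth_tx_queue_setup(eth_dev, i, 0, numa_node, NULL) != 0)
			goto error;
	}
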
> +
> +	for (i = 0; i < nb_tx_queues; i++) {
> +		vq = rte_zmalloc_socket(NULL, sizeof(struct vhost_queue),
> +				RTE_CACHE_LINE_SIZE, numa_node);
> +		if (vq == NULL) {
> +			RTE_LOG(ERR, PMD,
> +				"Failed to allocate memory for tx queue\n");
> +			goto error;
> +		}
> +		vq->mb_pool = mb_pool;

A Tx queue doesn't need an mbuf pool. And, ditto, call eth_tx_queue_setup()
instead (see the sketch above).

> +int
> +rte_eth_from_vhost(const char *name, char *iface_name,
> +	const unsigned int numa_node, struct rte_mempool *mb_pool)

That would make this API very limited. Assume we want to extend the vhost
PMD in the future; we could easily do that by adding a few more vdev
options: you could reference my patch [0], which adds the client and
reconnect options. But here you hardcode all the stuff that is needed so
far to create a vhost PMD ethdev; adding something new would imply an API
breakage in the future.

So, how about taking the vdev options as the argument of this API? That
would be friendly to future extension without breaking the API.

[0]: http://dpdk.org/dev/patchwork/patch/12608/

> +/**
> + * API to create vhost PMD
> + *
> + * @param name
> + *  Vhost device name
> + * @param iface_name
> + *  Vhost interface name
> + * @param numa_node
> + *  Socket id
> + * @param mb_pool
> + *  Memory pool
> + *
> + * @return
> + *  - On success, port_id.
> + *  - On failure, a negative value.
> + */

Hmmm, too simple.

	--yliu
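
P.S. Regarding a concrete usage: a rough, untested sketch of how I would
expect an application to call this API (the device name, socket path and
mempool below are just examples, and whether the usual configure/start
sequence is still needed on top of this depends on how the final semantics
end up):

	#include <rte_ethdev.h>
	#include <rte_lcore.h>
	#include <rte_mempool.h>
	#include <rte_eth_vhost.h>

	static int
	create_vhost_port(struct rte_mempool *mb_pool)
	{
		/* Create a vhost-user port at runtime, backed by the given
		 * mempool, instead of passing --vdev on the command line. */
		int port_id = rte_eth_from_vhost("eth_vhost0",
				"/tmp/vhost0.sock", rte_socket_id(), mb_pool);

		if (port_id < 0)
			return port_id;

		/* From here on it should behave like any other ethdev port:
		 * start it and use rte_eth_rx_burst()/rte_eth_tx_burst(). */
		return rte_eth_dev_start(port_id);
	}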