DPDK patches and discussions
From: Yuanhan Liu <yuanhan.liu@linux.intel.com>
To: Ferruh Yigit <ferruh.yigit@intel.com>
Cc: dev@dpdk.org, Tetsuya Mukawa <mukawa@igel.co.jp>
Subject: Re: [dpdk-dev] [PATCH] vhost: add support for dynamic vhost PMD creation
Date: Mon, 9 May 2016 14:31:24 -0700
Message-ID: <20160509213124.GK5641@yliu-dev.sh.intel.com>
In-Reply-To: <1462471869-4378-1-git-send-email-ferruh.yigit@intel.com>

On Thu, May 05, 2016 at 07:11:09PM +0100, Ferruh Yigit wrote:
> Add rte_eth_from_vhost() API to create vhost PMD dynamically from
> applications.

This sounds like a good idea to me. It would be better if you could name a
concrete use case for it, though.
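
For example, something along these lines from the application side (just an
untested sketch of how I imagine it being used; the pool sizes and names are
made up, and rte_eth_from_vhost() is the API proposed in this patch):

	char iface[] = "/tmp/vhost-sock0";
	struct rte_mempool *mp;
	int port_id;

	/* mbuf pool for the vhost port's Rx path; sizes are arbitrary */
	mp = rte_pktmbuf_pool_create("vhost_mbuf_pool", 8192, 256, 0,
			RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());
	if (mp == NULL)
		rte_exit(EXIT_FAILURE, "failed to create mbuf pool\n");

	/* create the vhost port at run time, without a --vdev option */
	port_id = rte_eth_from_vhost("eth_vhost0", iface,
			rte_socket_id(), mp);
	if (port_id < 0)
		rte_exit(EXIT_FAILURE, "failed to create vhost port\n");

	/* from here on it can be started and used like any other port */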

> 
> Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
> ---
>  drivers/net/vhost/rte_eth_vhost.c           | 117 ++++++++++++++++++++++++++++
>  drivers/net/vhost/rte_eth_vhost.h           |  19 +++++
>  drivers/net/vhost/rte_pmd_vhost_version.map |   7 ++
>  3 files changed, 143 insertions(+)
> 
> diff --git a/drivers/net/vhost/rte_eth_vhost.c b/drivers/net/vhost/rte_eth_vhost.c
> index 310cbef..c860ab8 100644
> --- a/drivers/net/vhost/rte_eth_vhost.c
> +++ b/drivers/net/vhost/rte_eth_vhost.c
> @@ -796,6 +796,123 @@ error:
>  	return -1;
>  }
>  
> +static int
> +rte_eth_from_vhost_create(const char *name, char *iface_name,

It's not a public function, so don't name it with the "rte_" prefix.

> +		const unsigned int numa_node, struct rte_mempool *mb_pool)
> +{
> +	struct rte_eth_dev_data *data = NULL;
> +	struct rte_eth_dev *eth_dev = NULL;
> +	struct pmd_internal *internal = NULL;
> +	struct internal_list *list;
> +	int nb_queues = 1;
> +	uint16_t nb_rx_queues = nb_queues;
> +	uint16_t nb_tx_queues = nb_queues;
> +	struct vhost_queue *vq;
> +	int i;
> +
> +	int port_id = eth_dev_vhost_create(name, iface_name, nb_queues,
> +			numa_node);
> +
> +	if (port_id < 0)
> +		return -1;
> +
> +	eth_dev = &rte_eth_devices[port_id];
> +	data = eth_dev->data;
> +	internal = data->dev_private;
> +	list = find_internal_resource(internal->iface_name);
> +
> +	data->rx_queues = rte_zmalloc_socket(name,
> +			sizeof(void *) * nb_rx_queues, 0, numa_node);
> +	if (data->rx_queues == NULL)
> +		goto error;
> +
> +	data->tx_queues = rte_zmalloc_socket(name,
> +			sizeof(void *) * nb_tx_queues, 0, numa_node);
> +	if (data->tx_queues == NULL)
> +		goto error;
> +
> +	for (i = 0; i < nb_rx_queues; i++) {
> +		vq = rte_zmalloc_socket(NULL, sizeof(struct vhost_queue),
> +				RTE_CACHE_LINE_SIZE, numa_node);
> +		if (vq == NULL) {
> +			RTE_LOG(ERR, PMD,
> +				"Failed to allocate memory for rx queue\n");
> +			goto error;
> +		}
> +		vq->mb_pool = mb_pool;
> +		vq->virtqueue_id = i * VIRTIO_QNUM + VIRTIO_TXQ;
> +		data->rx_queues[i] = vq;
> +	}

I would invoke eth_rx_queue_setup() here, to avoid duplicating the queue
allocation and initialization logic.
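
Something like this, roughly (untested; it assumes eth_rx_queue_setup() in
this file keeps its current ops signature, where nb_rx_desc and rx_conf are
ignored, and that the rx_queues[] array allocation above stays as it is):

	for (i = 0; i < nb_rx_queues; i++) {
		/* let the existing callback allocate and init the queue */
		if (eth_rx_queue_setup(eth_dev, i, 0, numa_node,
				NULL, mb_pool) < 0)
			goto error;
	}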

> +
> +	for (i = 0; i < nb_tx_queues; i++) {
> +		vq = rte_zmalloc_socket(NULL, sizeof(struct vhost_queue),
> +				RTE_CACHE_LINE_SIZE, numa_node);
> +		if (vq == NULL) {
> +			RTE_LOG(ERR, PMD,
> +				"Failed to allocate memory for tx queue\n");
> +			goto error;
> +		}
> +		vq->mb_pool = mb_pool;

A Tx queue doesn't need an mbuf pool. And, ditto, call eth_tx_queue_setup()
here instead.
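
I.e., roughly (again untested, same caveats as on the Rx side):

	for (i = 0; i < nb_tx_queues; i++) {
		/* no mbuf pool needed for Tx */
		if (eth_tx_queue_setup(eth_dev, i, 0, numa_node, NULL) < 0)
			goto error;
	}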


> +int
> +rte_eth_from_vhost(const char *name, char *iface_name,
> +		const unsigned int numa_node, struct rte_mempool *mb_pool)

That would make this API very limited. Assume we want to extend the vhost
PMD in the future; we could easily do that by adding a few more vdev
options: you could reference my patch [0], which adds the client and
reconnect options. But here you hardcode everything that is needed so far
to create a vhost PMD ethdev, so adding something new would imply an API
breakage in the future.

So, how about taking the vdev options string as the argument of this API?
That would be friendlier to future extensions, without breaking the API.

[0]: http://dpdk.org/dev/patchwork/patch/12608/
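
For instance, something like this (hypothetical signature, just to
illustrate the idea; the option names are only examples, and the parsing
could reuse the kvargs handling the --vdev path already has):

	/* take the same key=value option string --vdev accepts, so that
	 * new options (client, reconnect, ...) need no API change later */
	int
	rte_eth_from_vhost(const char *name, const char *params,
			const unsigned int numa_node,
			struct rte_mempool *mb_pool);

	/* e.g. from the application: */
	port_id = rte_eth_from_vhost("eth_vhost0",
			"iface=/tmp/vhost-sock0,queues=1,client=1",
			rte_socket_id(), mp);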

> +/**
> + * API to create vhost PMD
> + *
> + * @param name
> + *  Vhost device name
> + * @param iface_name
> + *  Vhost interface name
> + * @param numa_node
> + *  Socket id
> + * @param mb_pool
> + *  Memory pool
> + *
> + * @return
> + *  - On success, port_id.
> + *  - On failure, a negative value.
> + */

Hmmm, this doxygen comment is too terse.
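
Something closer to this, for example (the wording is only a suggestion):

	/**
	 * Create a vhost PMD ethdev port at run time, without using the
	 * --vdev command line option.
	 *
	 * @param name
	 *  Name to be given to the new ethdev port
	 * @param iface_name
	 *  Path of the vhost-user unix socket this port will register
	 * @param numa_node
	 *  Socket id the port's resources are allocated on
	 * @param mb_pool
	 *  Mempool used to allocate mbufs for received packets
	 *
	 * @return
	 *  - On success, the port id of the newly created device.
	 *  - On failure, a negative value.
	 */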

	--yliu

Thread overview: 17+ messages
2016-05-05 18:11 Ferruh Yigit
2016-05-09 21:31 ` Yuanhan Liu [this message]
2016-05-10 17:11   ` Ferruh Yigit
2016-05-18 17:10   ` [dpdk-dev] [PATCH v2] " Ferruh Yigit
2016-05-19  8:33     ` Thomas Monjalon
2016-05-19 16:28       ` Ferruh Yigit
2016-05-19 16:44         ` Thomas Monjalon
2016-05-20  1:59           ` Yuanhan Liu
2016-05-20 10:37           ` Bruce Richardson
2016-05-20 12:03             ` Thomas Monjalon
2016-05-23 13:24             ` Yuanhan Liu
2016-05-23 17:06               ` Ferruh Yigit
2016-05-24  5:11                 ` Yuanhan Liu
2016-05-24  9:42                   ` Bruce Richardson
2016-05-25  4:41                     ` Yuanhan Liu
2016-05-25 11:54                       ` Thomas Monjalon
2016-05-26  7:58                         ` Yuanhan Liu
