DPDK patches and discussions
From: "Michael S. Tsirkin" <mst@redhat.com>
To: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Cc: Victor Kaplansky <vkaplans@redhat.com>,
	dev@dpdk.org, huawei xie <huawei.xie@intel.com>
Subject: Re: [dpdk-dev] [PATCH 2/6] vhost: add vhost-user client mode
Date: Mon, 9 May 2016 23:30:53 +0300	[thread overview]
Message-ID: <20160509232954-mutt-send-email-mst@redhat.com> (raw)
In-Reply-To: <20160509203308.GI5641@yliu-dev.sh.intel.com>

On Mon, May 09, 2016 at 01:33:08PM -0700, Yuanhan Liu wrote:
> On Mon, May 09, 2016 at 06:33:45AM -0400, Victor Kaplansky wrote:
> > Adding a flag for a future extension seems fine to me.
> > Could we manage without adding a flag?
> > For example, could we always try client mode for a while at first,
> > and, if unsuccessful, fall back to server mode?
> 
> It's hard to define "how long is for a while". And I don't think
> there is a way to switch it back to client mode from server mode
> if you do so.
> 
> So, assume you create a vhost-user port (as client) and start QEMU
> (as server) later: once the window you mentioned has passed and the
> port has auto-switched to server mode, it simply doesn't work.
> 
> 	--yliu

I think I agree - I don't know of other software doing
such an automatic switch. Correctly setting server/client mode
seems easy enough.


> > ----- Original Message -----
> > > From: "Yuanhan Liu" <yuanhan.liu@linux.intel.com>
> > > To: dev@dpdk.org
> > > Cc: "huawei xie" <huawei.xie@intel.com>, "Yuanhan Liu" <yuanhan.liu@linux.intel.com>
> > > Sent: Saturday, May 7, 2016 9:40:20 AM
> > > Subject: [dpdk-dev] [PATCH 2/6] vhost: add vhost-user client mode
> > > 
> > > Add a new parameter (flags) to rte_vhost_driver_register(). DPDK
> > > vhost-user acts as a client when the RTE_VHOST_USER_CLIENT flag
> > > is set.
> > > 
> > > The flags would also allow future extensions without breaking the
> > > API (again).
> > > 
> > > The rest is then straightforward: create a unix socket, and
> > > bind/listen for server, connect for client.
> > > 
> > > Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
> > > ---
> > >  drivers/net/vhost/rte_eth_vhost.c            |   2 +-
> > >  examples/vhost/main.c                        |   2 +-
> > >  lib/librte_vhost/rte_virtio_net.h            |  11 +-
> > >  lib/librte_vhost/vhost_user/vhost-net-user.c | 215 ++++++++++++++++-----------
> > >  4 files changed, 142 insertions(+), 88 deletions(-)
> > > 
> > > diff --git a/drivers/net/vhost/rte_eth_vhost.c b/drivers/net/vhost/rte_eth_vhost.c
> > > index a9dada5..36697cf 100644
> > > --- a/drivers/net/vhost/rte_eth_vhost.c
> > > +++ b/drivers/net/vhost/rte_eth_vhost.c
> > > @@ -456,7 +456,7 @@ eth_dev_start(struct rte_eth_dev *dev)
> > >  	int ret = 0;
> > >  
> > >  	if (rte_atomic16_cmpset(&internal->once, 0, 1)) {
> > > -		ret = rte_vhost_driver_register(internal->iface_name);
> > > +		ret = rte_vhost_driver_register(internal->iface_name, 0);
> > >  		if (ret)
> > >  			return ret;
> > >  	}
> > > diff --git a/examples/vhost/main.c b/examples/vhost/main.c
> > > index bbf0d28..6899189 100644
> > > --- a/examples/vhost/main.c
> > > +++ b/examples/vhost/main.c
> > > @@ -1499,7 +1499,7 @@ main(int argc, char *argv[])
> > >  		rte_vhost_feature_disable(1ULL << VIRTIO_NET_F_MRG_RXBUF);
> > >  
> > >  	/* Register vhost(cuse or user) driver to handle vhost messages. */
> > > -	ret = rte_vhost_driver_register((char *)&dev_basename);
> > > +	ret = rte_vhost_driver_register(dev_basename, 0);
> > >  	if (ret != 0)
> > >  		rte_exit(EXIT_FAILURE, "vhost driver register failure.\n");
> > >  
> > > diff --git a/lib/librte_vhost/rte_virtio_net.h b/lib/librte_vhost/rte_virtio_net.h
> > > index 4e50425..c84e7ab 100644
> > > --- a/lib/librte_vhost/rte_virtio_net.h
> > > +++ b/lib/librte_vhost/rte_virtio_net.h
> > > @@ -51,6 +51,8 @@
> > >  #include <rte_mempool.h>
> > >  #include <rte_ether.h>
> > >  
> > > +#define RTE_VHOST_USER_CLIENT		(1ULL << 0)
> > > +
> > >  struct rte_mbuf;
> > >  
> > >  #define VHOST_MEMORY_MAX_NREGIONS 8
> > > @@ -96,11 +98,14 @@ uint64_t rte_vhost_feature_get(void);
> > >  
> > >  int rte_vhost_enable_guest_notification(int vid, uint16_t queue_id, int enable);
> > >  
> > > -/* Register vhost driver. dev_name could be different for multiple instance support. */
> > > -int rte_vhost_driver_register(const char *dev_name);
> > > +/**
> > > + * Register vhost driver. path could be different for multiple
> > > + * instance support.
> > > + */
> > > +int rte_vhost_driver_register(const char *path, uint64_t flags);
> > >  
> > >  /* Unregister vhost driver. This is only meaningful to vhost user. */
> > > -int rte_vhost_driver_unregister(const char *dev_name);
> > > +int rte_vhost_driver_unregister(const char *path);
> > >  
> > >  /* Register callbacks. */
> > >  int rte_vhost_driver_callback_register(struct virtio_net_device_ops const * const);
> > > diff --git a/lib/librte_vhost/vhost_user/vhost-net-user.c b/lib/librte_vhost/vhost_user/vhost-net-user.c
> > > index f485a3b..aa98717 100644
> > > --- a/lib/librte_vhost/vhost_user/vhost-net-user.c
> > > +++ b/lib/librte_vhost/vhost_user/vhost-net-user.c
> > > @@ -58,6 +58,7 @@
> > >  struct vhost_user_socket {
> > >  	char *path;
> > >  	int listenfd;
> > > +	int is_server;
> > >  };
> > >  
> > >  struct vhost_user_connection {
> > > @@ -75,7 +76,7 @@ struct vhost_user {
> > >  
> > >  #define MAX_VIRTIO_BACKLOG 128
> > >  
> > > -static void vhost_user_new_connection(int fd, void *data, int *remove);
> > > +static void vhost_user_server_new_connection(int fd, void *data, int *remove);
> > >  static void vhost_user_msg_handler(int fd, void *dat, int *remove);
> > >  
> > >  static struct vhost_user vhost_user = {
> > > @@ -111,48 +112,6 @@ static const char *vhost_message_str[VHOST_USER_MAX] = {
> > >  	[VHOST_USER_SEND_RARP]  = "VHOST_USER_SEND_RARP",
> > >  };
> > >  
> > > -/**
> > > - * Create a unix domain socket, bind to path and listen for connection.
> > > - * @return
> > > - *  socket fd or -1 on failure
> > > - */
> > > -static int
> > > -uds_socket(const char *path)
> > > -{
> > > -	struct sockaddr_un un;
> > > -	int sockfd;
> > > -	int ret;
> > > -
> > > -	if (path == NULL)
> > > -		return -1;
> > > -
> > > -	sockfd = socket(AF_UNIX, SOCK_STREAM, 0);
> > > -	if (sockfd < 0)
> > > -		return -1;
> > > -	RTE_LOG(INFO, VHOST_CONFIG, "socket created, fd:%d\n", sockfd);
> > > -
> > > -	memset(&un, 0, sizeof(un));
> > > -	un.sun_family = AF_UNIX;
> > > -	snprintf(un.sun_path, sizeof(un.sun_path), "%s", path);
> > > -	ret = bind(sockfd, (struct sockaddr *)&un, sizeof(un));
> > > -	if (ret == -1) {
> > > -		RTE_LOG(ERR, VHOST_CONFIG, "fail to bind fd:%d, remove file:%s and try again.\n",
> > > -			sockfd, path);
> > > -		goto err;
> > > -	}
> > > -	RTE_LOG(INFO, VHOST_CONFIG, "bind to %s\n", path);
> > > -
> > > -	ret = listen(sockfd, MAX_VIRTIO_BACKLOG);
> > > -	if (ret == -1)
> > > -		goto err;
> > > -
> > > -	return sockfd;
> > > -
> > > -err:
> > > -	close(sockfd);
> > > -	return -1;
> > > -}
> > > -
> > >  /* return bytes# of read on success or negative val on failure. */
> > >  static int
> > >  read_fd_message(int sockfd, char *buf, int buflen, int *fds, int fd_num)
> > > @@ -287,32 +246,24 @@ send_vhost_message(int sockfd, struct VhostUserMsg *msg)
> > >  	return ret;
> > >  }
> > >  
> > > -/* call back when there is new vhost-user connection.  */
> > > +
> > >  static void
> > > -vhost_user_new_connection(int fd, void *dat, int *remove __rte_unused)
> > > +vhost_user_add_connection(int fd, struct vhost_user_socket *vsocket)
> > >  {
> > > -	struct vhost_user_socket *vsocket = dat;
> > > -	int conn_fd;
> > > -	struct vhost_user_connection *conn;
> > >  	int vid;
> > > -	unsigned int size;
> > > -
> > > -	conn_fd = accept(fd, NULL, NULL);
> > > -	RTE_LOG(INFO, VHOST_CONFIG,
> > > -		"new virtio connection is %d\n", conn_fd);
> > > -	if (conn_fd < 0)
> > > -		return;
> > > +	size_t size;
> > > +	struct vhost_user_connection *conn;
> > >  
> > > -	conn = calloc(1, sizeof(*conn));
> > > +	conn = malloc(sizeof(*conn));
> > >  	if (conn == NULL) {
> > > -		close(conn_fd);
> > > +		close(fd);
> > >  		return;
> > >  	}
> > >  
> > >  	vid = vhost_new_device();
> > >  	if (vid == -1) {
> > > +		close(fd);
> > >  		free(conn);
> > > -		close(conn_fd);
> > >  		return;
> > >  	}
> > >  
> > > @@ -323,8 +274,21 @@ vhost_user_new_connection(int fd, void *dat, int *remove __rte_unused)
> > >  
> > >  	conn->vsocket = vsocket;
> > >  	conn->vid = vid;
> > > -	fdset_add(&vhost_user.fdset,
> > > -		conn_fd, vhost_user_msg_handler, NULL, conn);
> > > +	fdset_add(&vhost_user.fdset, fd, vhost_user_msg_handler, NULL, conn);
> > > +}
> > > +
> > > +/* call back when there is new vhost-user connection from client  */
> > > +static void
> > > +vhost_user_server_new_connection(int fd, void *dat, int *remove __rte_unused)
> > > +{
> > > +	struct vhost_user_socket *vsocket = dat;
> > > +
> > > +	fd = accept(fd, NULL, NULL);
> > > +	if (fd < 0)
> > > +		return;
> > > +
> > > +	RTE_LOG(INFO, VHOST_CONFIG, "new vhost user connection is %d\n", fd);
> > > +	vhost_user_add_connection(fd, vsocket);
> > >  }
> > >  
> > >  /* callback when there is message on the connfd */
> > > @@ -452,50 +416,135 @@ vhost_user_msg_handler(int connfd, void *dat, int *remove)
> > >  	}
> > >  }
> > >  
> > > -/**
> > > - * Creates and initialise the vhost server.
> > > - */
> > > -int
> > > -rte_vhost_driver_register(const char *path)
> > > +static int
> > > +create_unix_socket(const char *path, struct sockaddr_un *un, int is_server)
> > >  {
> > > -	struct vhost_user_socket *vsocket;
> > > +	int fd;
> > >  
> > > -	pthread_mutex_lock(&vhost_user.mutex);
> > > +	fd = socket(AF_UNIX, SOCK_STREAM, 0);
> > > +	if (fd < 0)
> > > +		return -1;
> > > +	RTE_LOG(INFO, VHOST_CONFIG, "vhost-user %s: socket created, fd: %d\n",
> > > +		is_server ? "server" : "client", fd);
> > >  
> > > -	if (vhost_user.vsocket_cnt == MAX_VHOST_SOCKET) {
> > > -		RTE_LOG(ERR, VHOST_CONFIG,
> > > -			"error: the number of servers reaches maximum\n");
> > > -		pthread_mutex_unlock(&vhost_user.mutex);
> > > +	memset(un, 0, sizeof(*un));
> > > +	un->sun_family = AF_UNIX;
> > > +	strncpy(un->sun_path, path, sizeof(un->sun_path));
> > > +
> > > +	return fd;
> > > +}
> > > +
> > > +static int
> > > +vhost_user_create_server(struct vhost_user_socket *vsocket)
> > > +{
> > > +	int fd;
> > > +	int ret;
> > > +	struct sockaddr_un un;
> > > +	const char *path = vsocket->path;
> > > +
> > > +	fd = create_unix_socket(path, &un, vsocket->is_server);
> > > +	if (fd < 0)
> > >  		return -1;
> > > +
> > > +	ret = bind(fd, (struct sockaddr *)&un, sizeof(un));
> > > +	if (ret < 0) {
> > > +		RTE_LOG(ERR, VHOST_CONFIG,
> > > +			"failed to bind to %s: %s; remove it and try again\n",
> > > +			path, strerror(errno));
> > > +		goto err;
> > >  	}
> > > +	RTE_LOG(INFO, VHOST_CONFIG, "bind to %s\n", path);
> > >  
> > > -	vsocket = calloc(sizeof(struct vhost_user_socket), 1);
> > > -	if (vsocket == NULL) {
> > > -		pthread_mutex_unlock(&vhost_user.mutex);
> > > +	ret = listen(fd, MAX_VIRTIO_BACKLOG);
> > > +	if (ret < 0)
> > > +		goto err;
> > > +
> > > +	vsocket->listenfd = fd;
> > > +	fdset_add(&vhost_user.fdset, fd, vhost_user_server_new_connection,
> > > +		  NULL, vsocket);
> > > +
> > > +	return 0;
> > > +
> > > +err:
> > > +	close(fd);
> > > +	return -1;
> > > +}
> > > +
> > > +static int
> > > +vhost_user_create_client(struct vhost_user_socket *vsocket)
> > > +{
> > > +	int fd;
> > > +	int ret;
> > > +	struct sockaddr_un un;
> > > +	const char *path = vsocket->path;
> > > +
> > > +	fd = create_unix_socket(path, &un, vsocket->is_server);
> > > +	if (fd < 0)
> > > +		return -1;
> > > +
> > > +	ret = connect(fd, (struct sockaddr *)&un, sizeof(un));
> > > +	if (ret < 0) {
> > > +		RTE_LOG(ERR, VHOST_CONFIG, "failed to connect to %s: %s\n",
> > > +			path, strerror(errno));
> > > +		close(fd);
> > >  		return -1;
> > >  	}
> > >  
> > > -	vsocket->listenfd = uds_socket(path);
> > > -	if (vsocket->listenfd < 0) {
> > > -		free(vsocket);
> > > -		pthread_mutex_unlock(&vhost_user.mutex);
> > > +	vhost_user_add_connection(fd, vsocket);
> > > +
> > > +	return 0;
> > > +}
> > > +
> > > +/*
> > > + * Register a new vhost-user socket; here we could act as server
> > > + * (the default case), or as client (when the RTE_VHOST_USER_CLIENT
> > > + * flag is set).
> > > + */
> > > +int
> > > +rte_vhost_driver_register(const char *path, uint64_t flags)
> > > +{
> > > +	int ret = -1;
> > > +	struct vhost_user_socket *vsocket;
> > > +
> > > +	if (!path)
> > >  		return -1;
> > > +
> > > +	pthread_mutex_lock(&vhost_user.mutex);
> > > +
> > > +	if (vhost_user.vsocket_cnt == MAX_VHOST_SOCKET) {
> > > +		RTE_LOG(ERR, VHOST_CONFIG,
> > > +			"error: the number of vhost sockets reaches maximum\n");
> > > +		goto out;
> > >  	}
> > >  
> > > +	vsocket = malloc(sizeof(struct vhost_user_socket));
> > > +	if (!vsocket)
> > > +		goto out;
> > > +	memset(vsocket, 0, sizeof(struct vhost_user_socket));
> > >  	vsocket->path = strdup(path);
> > >  
> > > -	fdset_add(&vhost_user.fdset, vsocket->listenfd,
> > > -		vhost_user_new_connection, NULL, vsocket);
> > > +	if ((flags & RTE_VHOST_USER_CLIENT) != 0) {
> > > +		ret = vhost_user_create_client(vsocket);
> > > +	} else {
> > > +		vsocket->is_server = 1;
> > > +		ret = vhost_user_create_server(vsocket);
> > > +	}
> > > +	if (ret < 0) {
> > > +		free(vsocket->path);
> > > +		free(vsocket);
> > > +		goto out;
> > > +	}
> > >  
> > >  	vhost_user.vsockets[vhost_user.vsocket_cnt++] = vsocket;
> > > +
> > > +out:
> > >  	pthread_mutex_unlock(&vhost_user.mutex);
> > >  
> > > -	return 0;
> > > +	return ret;
> > >  }
> > >  
> > > -
> > >  /**
> > > - * Unregister the specified vhost server
> > > + * Unregister the specified vhost socket
> > >   */
> > >  int
> > >  rte_vhost_driver_unregister(const char *path)
> > > --
> > > 1.9.0
> > > 
> > > 


Thread overview: 50+ messages
2016-05-07  6:40 [dpdk-dev] [PATCH 0/6] vhost: add vhost-user client mode and reconnect ability Yuanhan Liu
2016-05-07  6:40 ` [dpdk-dev] [PATCH 1/6] vhost: rename structs for enabling client mode Yuanhan Liu
2016-05-07  6:40 ` [dpdk-dev] [PATCH 2/6] vhost: add vhost-user " Yuanhan Liu
2016-05-09 10:33   ` Victor Kaplansky
2016-05-09 20:33     ` Yuanhan Liu
2016-05-09 20:30       ` Michael S. Tsirkin [this message]
2016-05-07  6:40 ` [dpdk-dev] [PATCH 3/6] vhost: add reconnect ability Yuanhan Liu
2016-05-09 16:47   ` Xie, Huawei
2016-05-09 18:12     ` Yuanhan Liu
2016-05-10  7:24       ` Xie, Huawei
2016-05-10  7:54         ` Michael S. Tsirkin
2016-05-10  8:07           ` Xie, Huawei
2016-05-10  8:42             ` Michael S. Tsirkin
2016-05-10  9:00               ` Xie, Huawei
2016-05-10  9:17                 ` Michael S. Tsirkin
2016-05-10 17:17                   ` Loftus, Ciara
2016-05-11 21:46                     ` Michael S. Tsirkin
2016-05-07  6:40 ` [dpdk-dev] [PATCH 4/6] vhost: workaround stale vring base Yuanhan Liu
2016-05-09 10:45   ` Victor Kaplansky
2016-05-09 13:39     ` Xie, Huawei
2016-05-09 18:23       ` Yuanhan Liu
2016-05-09 12:19   ` Michael S. Tsirkin
2016-05-09 16:25   ` Xie, Huawei
2016-05-09 18:22     ` Yuanhan Liu
2016-06-13 20:47       ` Michael S. Tsirkin
2016-05-10  8:21   ` Xie, Huawei
2016-05-07  6:40 ` [dpdk-dev] [PATCH 5/6] examples/vhost: add client and reconnect option Yuanhan Liu
2016-05-09 10:47   ` Victor Kaplansky
2016-05-07  6:40 ` [dpdk-dev] [PATCH 6/6] vhost: add pmd " Yuanhan Liu
2016-05-09 10:54   ` Victor Kaplansky
2016-05-09 18:26     ` Yuanhan Liu
2016-05-10  3:23 ` [dpdk-dev] [PATCH 0/6] vhost: add vhost-user client mode and reconnect ability Xu, Qian Q
2016-05-10 17:41   ` Yuanhan Liu
2016-05-13  6:16 ` [dpdk-dev] [PATCH v2 " Yuanhan Liu
2016-05-13  6:16   ` [dpdk-dev] [PATCH v2 1/6] vhost: rename structs for enabling client mode Yuanhan Liu
2016-05-13  6:16   ` [dpdk-dev] [PATCH v2 2/6] vhost: add vhost-user " Yuanhan Liu
2016-05-13  6:16   ` [dpdk-dev] [PATCH v2 3/6] vhost: add reconnect ability Yuanhan Liu
2016-05-13  6:16   ` [dpdk-dev] [PATCH v2 4/6] vhost: workaround stale vring base Yuanhan Liu
2016-05-13  6:16   ` [dpdk-dev] [PATCH v2 5/6] examples/vhost: add client and reconnect option Yuanhan Liu
2016-05-13  6:16   ` [dpdk-dev] [PATCH v2 6/6] vhost: add pmd " Yuanhan Liu
2016-05-25 17:45     ` Rich Lane
2016-05-26  8:01       ` Yuanhan Liu
2016-06-07  4:05   ` [dpdk-dev] [PATCH v3 0/6] vhost: add vhost-user client mode and reconnect ability Yuanhan Liu
2016-06-07  4:05     ` [dpdk-dev] [PATCH v3 1/6] vhost: rename structs for enabling client mode Yuanhan Liu
2016-06-07  4:05     ` [dpdk-dev] [PATCH v3 2/6] vhost: add vhost-user " Yuanhan Liu
2016-06-07  4:05     ` [dpdk-dev] [PATCH v3 3/6] vhost: add reconnect ability Yuanhan Liu
2016-06-07  4:05     ` [dpdk-dev] [PATCH v3 4/6] vhost: workaround stale vring base Yuanhan Liu
2016-06-07  4:05     ` [dpdk-dev] [PATCH v3 5/6] examples/vhost: add client option Yuanhan Liu
2016-06-07  4:05     ` [dpdk-dev] [PATCH v3 6/6] vhost: add pmd " Yuanhan Liu
2016-06-14 12:00     ` [dpdk-dev] [PATCH v3 0/6] vhost: add vhost-user client mode and reconnect ability Yuanhan Liu
