Date: Tue, 13 Dec 2016 17:14:54 +0800
From: Yuanhan Liu
To: Jan Wickbom
Cc: dev@dpdk.org, patrik.r.andersson@ericsson.com
Message-ID: <20161213091454.GB18991@yliu-dev.sh.intel.com>
In-Reply-To: <1481561434-28675-1-git-send-email-jan.wickbom@ericsson.com>
Subject: Re: [dpdk-dev] [PATCH v3] vhost: allow for many vhost user ports

On Mon, Dec 12, 2016 at 05:50:34PM +0100, Jan Wickbom wrote:
> Currently select() is used to monitor file descriptors for vhostuser
> ports. This limits the number of ports possible to create, since the
> fd number is used as an index in the fd_set and we have seen fds > 1023.
> This patch changes select() to poll(). This way we can keep a
> packed (pollfd) array for the fds, e.g. as many fds as the size of
> the array.
>
> Also see:
> http://dpdk.org/ml/archives/dev/2016-April/037024.html
>
> Signed-off-by: Jan Wickbom
> Reported-by: Patrik Andersson
...
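The limitation the commit message describes comes from select() indexing fd_set by fd *value* (capped at FD_SETSIZE), while poll() indexes by array *position*. A minimal standalone sketch of the poll() side, not taken from the patch — the helper name `wait_readable()` and the pipe demo are illustrative only:

```c
#include <assert.h>
#include <poll.h>
#include <unistd.h>

/*
 * Wait for readability on a packed pollfd array and return the index
 * of the first ready entry, or -1 on timeout/error.  Unlike select(),
 * poll() is indexed by array position, not by fd value, so fds above
 * FD_SETSIZE (typically 1023) work and the array stays densely packed.
 */
int wait_readable(struct pollfd *pfds, int num, int timeout_ms)
{
	int i;

	if (poll(pfds, num, timeout_ms) <= 0)
		return -1;

	for (i = 0; i < num; i++)
		if (pfds[i].revents & POLLIN)
			return i;
	return -1;
}
```

A pipe makes a convenient self-test: poll on the read end, observe a timeout while it is empty, then a POLLIN hit once a byte is written.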
> +static struct pollfd rwfds[MAX_FDS];

Though it's unlikely, assume we have multiple instances of
fdset_event_dispatch(pfdset): a global rwfds will not work. On second
thought, it's better to put it into the fdset struct:

    struct fdset {
        struct fdentry fd[MAX_FDS];
        struct pollfd rwfds[MAX_FDS];
        ...

> /**
>  * This functions runs in infinite blocking loop until there is no fd in
>  * pfdset. It calls corresponding r/w handler if there is event on the fd.
> @@ -229,55 +217,71 @@
>  void
>  fdset_event_dispatch(struct fdset *pfdset)
>  {
> -	fd_set rfds, wfds;
> -	int i, maxfds;
> +	int i;
>  	struct fdentry *pfdentry;
> -	int num = MAX_FDS;
>  	fd_cb rcb, wcb;
>  	void *dat;
>  	int fd;
>  	int remove1, remove2;
> -	int ret;
>
>  	if (pfdset == NULL)
>  		return;
>
> -	while (1) {
> -		struct timeval tv;
> -		tv.tv_sec = 1;
> -		tv.tv_usec = 0;
> -		FD_ZERO(&rfds);
> -		FD_ZERO(&wfds);
> -		pthread_mutex_lock(&pfdset->fd_mutex);
> -
> -		maxfds = fdset_fill(&rfds, &wfds, pfdset);
> -
> -		pthread_mutex_unlock(&pfdset->fd_mutex);
> +	memset(rwfds, 0, sizeof(rwfds));
>
> +	while (1) {
>  		/*
> -		 * When select is blocked, other threads might unregister
> +		 * When poll is blocked, other threads might unregister
>  		 * listenfds from and register new listenfds into fdset.
> -		 * When select returns, the entries for listenfds in the fdset
> +		 * When poll returns, the entries for listenfds in the fdset
>  		 * might have been updated. It is ok if there is unwanted call
>  		 * for new listenfds.
>  		 */
> -		ret = select(maxfds + 1, &rfds, &wfds, NULL, &tv);
> -		if (ret <= 0)
> -			continue;
> +		poll(rwfds, pfdset->num, 1000 /* millisecs */);
>
> -		for (i = 0; i < num; i++) {
> -			remove1 = remove2 = 0;
> +		for (i = 0; i < pfdset->num; ) {
>  			pthread_mutex_lock(&pfdset->fd_mutex);
> +
>  			pfdentry = &pfdset->fd[i];
>  			fd = pfdentry->fd;
> +
> +			if (fd < 0) {
> +				/* Removed during poll */
> +
> +				fdset_move_last(pfdset, i);
> +				fdset_shrink(pfdset);
> +
> +				pthread_mutex_unlock(&pfdset->fd_mutex);
> +
> +				continue;
> +			}
> +
> +			if (!rwfds[i].revents) {
> +				/* No revents, maybe added during poll */
> +
> +				rwfds[i].fd = fd;
> +				rwfds[i].events = pfdentry->rcb ? POLLIN : 0;
> +				rwfds[i].events |= pfdentry->wcb ? POLLOUT : 0;
> +				pthread_mutex_unlock(&pfdset->fd_mutex);
> +
> +				i++;
> +				continue;

I think it's error-prone to manipulate the rwfds here. Besides, it
registers an fd repeatedly. What I have in mind is:

- set rwfds[i] at fdset_add(). This also makes sure that
  pfdset->rwfds[i] and pfdset->fd[i] are correctly bound.

      fdset_add(fdset, fd, ...)
      {
          lock();
          i = fdset_find_free_slot(..);

          pfdset->fd[i].fd = fd;
          pfdset->fd[i].rcb = rcb;
          pfdset->fd[i]....;

          pfdset->rwfds[i].fd = fd;
          pfdset->rwfds[i].events = ...;
          pfdset->rwfds[i].revents = 0;
      }

- set pfdset->fd[i].fd = -1 on fdset_del. Note we should not decrease
  'num' here, as we may be in the middle of a poll.

      fdset_del(fdset, fd)
      {
          lock();
          i = fdset_find_fd(pfdset, fd);
          pfdset->fd[i].fd = -1;
          ...
      }

- record pfdset->num before poll, because 'num' may increase during
  poll. I think this is optional: even if 'num' increases during poll,
  that just means a few more rwfds entries get accessed. Those entries
  are not tracked by the kernel, and revents is initialized to 0, so
  there is no issue.

- shrink the fdset and rwfds (together) for the removed entries,
  __outside__ the for loop after poll.

Works for you?

	--yliu
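For reference, the scheme sketched in the mail above can be written out as a compilable sketch. The struct layout mirrors the mail, but the function bodies, the slot-reuse strategy in fdset_shrink(), and the callback typedef are assumptions, not the actual DPDK code:

```c
#include <poll.h>
#include <pthread.h>
#include <string.h>

#define MAX_FDS 1024

typedef void (*fd_cb)(int fd, void *dat);

struct fdentry {
	int fd;
	fd_cb rcb;
	fd_cb wcb;
	void *dat;
};

struct fdset {
	struct fdentry fd[MAX_FDS];
	struct pollfd rwfds[MAX_FDS];	/* kept in lock-step with fd[] */
	pthread_mutex_t fd_mutex;
	int num;
};

/* Bind fd[i] and rwfds[i] once, at registration time. */
int fdset_add(struct fdset *s, int fd, fd_cb rcb, fd_cb wcb, void *dat)
{
	int i;

	pthread_mutex_lock(&s->fd_mutex);
	if (s->num >= MAX_FDS) {
		pthread_mutex_unlock(&s->fd_mutex);
		return -1;
	}
	i = s->num++;
	s->fd[i].fd  = fd;
	s->fd[i].rcb = rcb;
	s->fd[i].wcb = wcb;
	s->fd[i].dat = dat;
	s->rwfds[i].fd      = fd;
	s->rwfds[i].events  = (rcb ? POLLIN : 0) | (wcb ? POLLOUT : 0);
	s->rwfds[i].revents = 0;
	pthread_mutex_unlock(&s->fd_mutex);
	return 0;
}

/* Mark the entry dead; do NOT touch 'num' here, poll may be running.
 * poll() ignores entries with a negative fd. */
void fdset_del(struct fdset *s, int fd)
{
	int i;

	pthread_mutex_lock(&s->fd_mutex);
	for (i = 0; i < s->num; i++) {
		if (s->fd[i].fd == fd) {
			s->fd[i].fd    = -1;
			s->rwfds[i].fd = -1;
			break;
		}
	}
	pthread_mutex_unlock(&s->fd_mutex);
}

/* Compact both arrays together, outside the dispatch loop:
 * each dead slot is overwritten by the current last entry. */
void fdset_shrink(struct fdset *s)
{
	int i;

	pthread_mutex_lock(&s->fd_mutex);
	for (i = 0; i < s->num; ) {
		if (s->fd[i].fd >= 0) {
			i++;
			continue;
		}
		s->num--;
		s->fd[i]    = s->fd[s->num];
		s->rwfds[i] = s->rwfds[s->num];
	}
	pthread_mutex_unlock(&s->fd_mutex);
}
```

With this split, the dispatch loop only reads rwfds and never rebuilds entries in place, which avoids the repeated-registration problem pointed out above.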