From: Yuanhan Liu
To: "Tan, Jianfeng"
Cc: Ciara Loftus, dev@dpdk.org, mukawa@igel.co.jp
Date: Wed, 6 Apr 2016 13:44:06 +0800
Message-ID: <20160406054406.GV3080@yliu-dev.sh.intel.com>
In-Reply-To: <57049F5C.5080604@intel.com>
References: <1459872587-11655-1-git-send-email-ciara.loftus@intel.com>
 <57049F5C.5080604@intel.com>
Subject: Re: [dpdk-dev] [PATCH] vhost: Fix retrieval of numa information in PMD

On Wed, Apr 06, 2016 at 01:32:12PM +0800, Tan, Jianfeng wrote:
> Hi,
>
> Just out of interest: it seems that the message-handling thread which
> runs new_device() is created via pthread_create() by the thread that
> calls dev_start(), usually the master thread, right? But it is not
> necessarily the master thread that polls packets from this vhost
> port. So what is the significance of recording the NUMA node of the
> message-handling thread here? Shall we instead base the numa_realloc
> decision on the final PMD thread that is responsible for polling this
> vhost port?

It doesn't matter on which core we make the decision: the result would
be the same, since we are querying the NUMA node of the virtio_net dev
struct itself, not of the calling thread.
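For illustration, here is a minimal standalone sketch of that lookup
(not DPDK code; the file name and the helper numa_node_of_addr() are
made up for this example, and it assumes libnuma is installed):
get_mempolicy() from <numaif.h>, called with MPOL_F_NODE | MPOL_F_ADDR,
reports the NUMA node of the page backing a given address, so every
thread asking about the same address gets the same answer. The patch
quoted below applies exactly this lookup to the virtio_net dev struct,
moving it from the vring_state_changed callback to new_device.

/* numa_node_sketch.c -- hypothetical standalone example.
 * Build: gcc numa_node_sketch.c -o numa_node_sketch -lnuma
 */
#include <numaif.h>	/* get_mempolicy(), MPOL_F_NODE, MPOL_F_ADDR */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Return the NUMA node backing the page that contains 'addr',
 * or -1 if the kernel cannot tell us. */
static int
numa_node_of_addr(void *addr)
{
	int node;

	if (get_mempolicy(&node, NULL, 0, addr,
			MPOL_F_NODE | MPOL_F_ADDR) < 0)
		return -1;
	return node;
}

int
main(void)
{
	void *buf = malloc(4096);

	if (buf == NULL)
		return 1;

	/* Touch the buffer so the page is faulted in and bound to a
	 * node before we ask where it lives. */
	memset(buf, 0, 4096);
	printf("buffer is on NUMA node %d\n", numa_node_of_addr(buf));

	free(buf);
	return 0;
}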
	--yliu

>
> It's not related to this patch itself, and the patch seems good to me.
>
> Thanks,
> Jianfeng
>
>
> On 4/6/2016 12:09 AM, Ciara Loftus wrote:
> >After some testing, it was found that retrieving numa information
> >about a vhost device via a call to get_mempolicy is more accurate
> >when performed during the new_device callback than during the
> >vring_state_changed callback, in particular upon initial boot of
> >the VM. Performing this check during new_device is also potentially
> >more efficient, as that callback is triggered only once during
> >device initialisation, whereas vring_state_changed may be called
> >multiple times depending on the number of queues assigned to the
> >device.
> >
> >Reorganise the code to perform this check and assign the correct
> >socket_id to the device during the new_device callback.
> >
> >Signed-off-by: Ciara Loftus
> >---
> > drivers/net/vhost/rte_eth_vhost.c | 28 ++++++++++++++--------------
> > 1 file changed, 14 insertions(+), 14 deletions(-)
> >
> >diff --git a/drivers/net/vhost/rte_eth_vhost.c b/drivers/net/vhost/rte_eth_vhost.c
> >index 4cc6bec..b1eb082 100644
> >--- a/drivers/net/vhost/rte_eth_vhost.c
> >+++ b/drivers/net/vhost/rte_eth_vhost.c
> >@@ -229,6 +229,9 @@ new_device(struct virtio_net *dev)
> > 	struct pmd_internal *internal;
> > 	struct vhost_queue *vq;
> > 	unsigned i;
> >+#ifdef RTE_LIBRTE_VHOST_NUMA
> >+	int newnode, ret;
> >+#endif
> >
> > 	if (dev == NULL) {
> > 		RTE_LOG(INFO, PMD, "Invalid argument\n");
> >@@ -244,6 +247,17 @@
> > 	eth_dev = list->eth_dev;
> > 	internal = eth_dev->data->dev_private;
> >
> >+#ifdef RTE_LIBRTE_VHOST_NUMA
> >+	ret = get_mempolicy(&newnode, NULL, 0, dev,
> >+			MPOL_F_NODE | MPOL_F_ADDR);
> >+	if (ret < 0) {
> >+		RTE_LOG(ERR, PMD, "Unknown numa node\n");
> >+		return -1;
> >+	}
> >+
> >+	eth_dev->data->numa_node = newnode;
> >+#endif
> >+
> > 	for (i = 0; i < eth_dev->data->nb_rx_queues; i++) {
> > 		vq = eth_dev->data->rx_queues[i];
> > 		if (vq == NULL)
> >@@ -352,9 +366,6 @@ vring_state_changed(struct virtio_net *dev, uint16_t vring, int enable)
> > 	struct rte_vhost_vring_state *state;
> > 	struct rte_eth_dev *eth_dev;
> > 	struct internal_list *list;
> >-#ifdef RTE_LIBRTE_VHOST_NUMA
> >-	int newnode, ret;
> >-#endif
> >
> > 	if (dev == NULL) {
> > 		RTE_LOG(ERR, PMD, "Invalid argument\n");
> >@@ -370,17 +381,6 @@ vring_state_changed(struct virtio_net *dev, uint16_t vring, int enable)
> > 	eth_dev = list->eth_dev;
> > 	/* won't be NULL */
> > 	state = vring_states[eth_dev->data->port_id];
> >-
> >-#ifdef RTE_LIBRTE_VHOST_NUMA
> >-	ret = get_mempolicy(&newnode, NULL, 0, dev,
> >-			MPOL_F_NODE | MPOL_F_ADDR);
> >-	if (ret < 0) {
> >-		RTE_LOG(ERR, PMD, "Unknown numa node\n");
> >-		return -1;
> >-	}
> >-
> >-	eth_dev->data->numa_node = newnode;
> >-#endif
> > 	rte_spinlock_lock(&state->lock);
> > 	state->cur[vring] = enable;
> > 	state->max_vring = RTE_MAX(vring, state->max_vring);