From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: from mga11.intel.com (mga11.intel.com [192.55.52.93]) by dpdk.org (Postfix) with ESMTP id 965112C0C for ; Wed, 6 Apr 2016 18:08:05 +0200 (CEST)
Received: from fmsmga003.fm.intel.com ([10.253.24.29]) by fmsmga102.fm.intel.com with ESMTP; 06 Apr 2016 09:08:04 -0700
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="5.24,447,1455004800"; d="scan'208";a="681828476"
Received: from yliu-dev.sh.intel.com (HELO yliu-dev) ([10.239.67.191]) by FMSMGA003.fm.intel.com with ESMTP; 06 Apr 2016 09:08:04 -0700
Date: Thu, 7 Apr 2016 00:09:46 +0800
From: Yuanhan Liu
To: "Loftus, Ciara"
Cc: Tetsuya Mukawa , "dev@dpdk.org" , "Tan, Jianfeng"
Message-ID: <20160406160946.GZ3080@yliu-dev.sh.intel.com>
References: <1459872587-11655-1-git-send-email-ciara.loftus@intel.com> <5704B175.3040700@igel.co.jp> <20160406071754.GY3080@yliu-dev.sh.intel.com> <74F120C019F4A64C9B78E802F6AD4CC24F861D10@IRSMSX106.ger.corp.intel.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <74F120C019F4A64C9B78E802F6AD4CC24F861D10@IRSMSX106.ger.corp.intel.com>
User-Agent: Mutt/1.5.23 (2014-03-12)
Subject: Re: [dpdk-dev] [PATCH] vhost: Fix retrieval of numa information in PMD
X-BeenThere: dev@dpdk.org
X-Mailman-Version: 2.1.15
Precedence: list
List-Id: patches and discussions about DPDK
List-Unsubscribe: ,
List-Archive:
List-Post:
List-Help:
List-Subscribe: ,
X-List-Received-Date: Wed, 06 Apr 2016 16:08:06 -0000

On Wed, Apr 06, 2016 at 09:37:53AM +0000, Loftus, Ciara wrote:
> >
> > On Wed, Apr 06, 2016 at 03:49:25PM +0900, Tetsuya Mukawa wrote:
> > > On 2016/04/06 1:09, Ciara Loftus wrote:
> > > > After some testing, it was found that retrieving numa information
> > > > about a vhost device via a call to get_mempolicy is more
> > > > accurate when performed during the new_device callback versus
> > > > the vring_state_changed callback, in particular upon initial boot
> > > > of the VM. Performing this check during new_device is also
> > > > potentially more efficient as this callback is only triggered once
> > > > during device initialisation, compared with vring_state_changed
> > > > which may be called multiple times depending on the number of
> > > > queues assigned to the device.
> > > >
> > > > Reorganise the code to perform this check and assign the correct
> > > > socket_id to the device during the new_device callback.
> > > >
> > > > Signed-off-by: Ciara Loftus
> > > > ---
> > > >  drivers/net/vhost/rte_eth_vhost.c | 28 ++++++++++++++--------------
> > > >  1 file changed, 14 insertions(+), 14 deletions(-)
> > > >
> > > > diff --git a/drivers/net/vhost/rte_eth_vhost.c b/drivers/net/vhost/rte_eth_vhost.c
> > > > index 4cc6bec..b1eb082 100644
> > > > --- a/drivers/net/vhost/rte_eth_vhost.c
> > > > +++ b/drivers/net/vhost/rte_eth_vhost.c
> > > >
> > >
> > > Hi,
> > >
> > > I appreciate fixing it.
> > > Just one worry is that the state changed event may occur before the new
> > > device event.
> > > The users should not call rte_eth_dev_socket_id() until the new device event
> > > comes, even if they catch queue state events.
> > > Otherwise, they will get the wrong socket id to call
> > > rte_eth_rx/tx_queue_setup().
> >
> > There is no way to guarantee that the socket id stuff would work
> > perfectly in vhost, right? I mean, it's likely that the virtio device
> > would allocate memory from 2 or more sockets.
> >
> > So, it doesn't matter too much whether it's set perfectly right
> > or not. Instead, we should assign it a saner value instead
> > of an obviously wrong one when new_device() has not been invoked yet. So,
> > I'd suggest to make an assignment first based on the vhost_dev (or
> > whatever) struct, and then make it "right" at the new_device()
> > callback?
>
> Thanks for the feedback.
> At the moment with this patch, numa_node is initially set to rte_socket_id() during pmd init, and then updated to the correct value during new_device.
> Are you suggesting we set it again in between these two steps ("based on vhost_dev")? If so, where do you think would be a good place?

Oh, I was not aware of that. Then I think we are fine here.

	--yliu