DPDK patches and discussions
From: Yuanhan Liu <yuanhan.liu@linux.intel.com>
To: "Tan, Jianfeng" <jianfeng.tan@intel.com>
Cc: Ciara Loftus <ciara.loftus@intel.com>, dev@dpdk.org, mukawa@igel.co.jp
Subject: Re: [dpdk-dev] [PATCH] vhost: Fix retrieval of numa information in PMD
Date: Wed, 6 Apr 2016 13:44:06 +0800	[thread overview]
Message-ID: <20160406054406.GV3080@yliu-dev.sh.intel.com> (raw)
In-Reply-To: <57049F5C.5080604@intel.com>

On Wed, Apr 06, 2016 at 01:32:12PM +0800, Tan, Jianfeng wrote:
> Hi,
> 
> Just out of interest: it seems the message handling thread which runs
> new_device() is created via pthread_create() from the thread that calls
> dev_start(), usually the master thread, right? But the thread that polls
> packets from this vhost port is not necessarily the master thread, right?
> So what is the significance of recording the numa_node information of the
> message handling thread here? Should we instead base the numa_realloc
> decision on the final PMD thread that is responsible for polling this
> vhost port?

It doesn't matter on which core we make the decision: the result
would be the same, since we are querying the NUMA node of the
virtio_net dev struct itself.
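
For illustration only (not part of the patch): a minimal standalone sketch
of the get_mempolicy() query used here, assuming Linux with the libnuma
headers and built with -lnuma. With MPOL_F_NODE | MPOL_F_ADDR the kernel
reports the NUMA node backing the page that contains the given address, so
the result depends only on where that memory lives, not on which thread or
core issues the call. The node_of_addr() helper is a made-up name for the
example.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <numaif.h>

/* Hypothetical helper: report the NUMA node backing the page that holds
 * 'addr'.  Returns -1 on failure. */
static int
node_of_addr(void *addr)
{
	int node = -1;

	if (get_mempolicy(&node, NULL, 0, addr,
			MPOL_F_NODE | MPOL_F_ADDR) < 0)
		return -1;
	return node;
}

int
main(void)
{
	char *buf = malloc(4096);

	if (buf == NULL)
		return 1;
	memset(buf, 0, 4096);	/* touch the buffer so it is backed by memory */
	printf("buffer lives on NUMA node %d\n", node_of_addr(buf));
	free(buf);
	return 0;
}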

	--yliu
> 
> This isn't related to the patch itself; the patch looks good to me.
> 
> 
> Thanks,
> Jianfeng
> 
> 
> 
> On 4/6/2016 12:09 AM, Ciara Loftus wrote:
> >After some testing, it was found that retrieving numa information
> >about a vhost device via a call to get_mempolicy is more
> >accurate when performed during the new_device callback versus
> >the vring_state_changed callback, in particular upon initial boot
> >of the VM.  Performing this check during new_device is also
> >potentially more efficient as this callback is only triggered once
> >during device initialisation, compared with vring_state_changed
> >which may be called multiple times depending on the number of
> >queues assigned to the device.
> >
> >Reorganise the code to perform this check and assign the correct
> >socket_id to the device during the new_device callback.
> >
> >Signed-off-by: Ciara Loftus <ciara.loftus@intel.com>
> >---
> >  drivers/net/vhost/rte_eth_vhost.c | 28 ++++++++++++++--------------
> >  1 file changed, 14 insertions(+), 14 deletions(-)
> >
> >diff --git a/drivers/net/vhost/rte_eth_vhost.c b/drivers/net/vhost/rte_eth_vhost.c
> >index 4cc6bec..b1eb082 100644
> >--- a/drivers/net/vhost/rte_eth_vhost.c
> >+++ b/drivers/net/vhost/rte_eth_vhost.c
> >@@ -229,6 +229,9 @@ new_device(struct virtio_net *dev)
> >  	struct pmd_internal *internal;
> >  	struct vhost_queue *vq;
> >  	unsigned i;
> >+#ifdef RTE_LIBRTE_VHOST_NUMA
> >+	int newnode, ret;
> >+#endif
> >  	if (dev == NULL) {
> >  		RTE_LOG(INFO, PMD, "Invalid argument\n");
> >@@ -244,6 +247,17 @@ new_device(struct virtio_net *dev)
> >  	eth_dev = list->eth_dev;
> >  	internal = eth_dev->data->dev_private;
> >+#ifdef RTE_LIBRTE_VHOST_NUMA
> >+	ret  = get_mempolicy(&newnode, NULL, 0, dev,
> >+			MPOL_F_NODE | MPOL_F_ADDR);
> >+	if (ret < 0) {
> >+		RTE_LOG(ERR, PMD, "Unknown numa node\n");
> >+		return -1;
> >+	}
> >+
> >+	eth_dev->data->numa_node = newnode;
> >+#endif
> >+
> >  	for (i = 0; i < eth_dev->data->nb_rx_queues; i++) {
> >  		vq = eth_dev->data->rx_queues[i];
> >  		if (vq == NULL)
> >@@ -352,9 +366,6 @@ vring_state_changed(struct virtio_net *dev, uint16_t vring, int enable)
> >  	struct rte_vhost_vring_state *state;
> >  	struct rte_eth_dev *eth_dev;
> >  	struct internal_list *list;
> >-#ifdef RTE_LIBRTE_VHOST_NUMA
> >-	int newnode, ret;
> >-#endif
> >  	if (dev == NULL) {
> >  		RTE_LOG(ERR, PMD, "Invalid argument\n");
> >@@ -370,17 +381,6 @@ vring_state_changed(struct virtio_net *dev, uint16_t vring, int enable)
> >  	eth_dev = list->eth_dev;
> >  	/* won't be NULL */
> >  	state = vring_states[eth_dev->data->port_id];
> >-
> >-#ifdef RTE_LIBRTE_VHOST_NUMA
> >-	ret  = get_mempolicy(&newnode, NULL, 0, dev,
> >-			MPOL_F_NODE | MPOL_F_ADDR);
> >-	if (ret < 0) {
> >-		RTE_LOG(ERR, PMD, "Unknown numa node\n");
> >-		return -1;
> >-	}
> >-
> >-	eth_dev->data->numa_node = newnode;
> >-#endif
> >  	rte_spinlock_lock(&state->lock);
> >  	state->cur[vring] = enable;
> >  	state->max_vring = RTE_MAX(vring, state->max_vring);
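
For reference, and as the commit message above points out, new_device()
fires once per device while vring_state_changed() can fire on every vring
enable/disable, so per-device work such as the NUMA query belongs in
new_device(). Below is a minimal sketch of how a vhost-PMD-style driver
registers these callbacks, assuming the DPDK 16.04-era vhost API
(struct virtio_net_device_ops and rte_vhost_driver_callback_register()
from rte_virtio_net.h); the example_* names are illustrative only, not the
actual driver code.

#include <stdint.h>
#include <rte_common.h>
#include <rte_log.h>
#include <rte_virtio_net.h>	/* 16.04-era vhost API (assumed) */

/* Runs once per vhost device when the guest driver signals it is ready;
 * a single natural place to record per-device state such as the NUMA node. */
static int
example_new_device(struct virtio_net *dev __rte_unused)
{
	RTE_LOG(INFO, PMD, "new vhost device is ready\n");
	return 0;
}

static void
example_destroy_device(volatile struct virtio_net *dev __rte_unused)
{
	RTE_LOG(INFO, PMD, "vhost device removed\n");
}

/* Runs on every vring enable/disable, so potentially many times per
 * device; per-device work is therefore better placed in new_device(). */
static int
example_vring_state_changed(struct virtio_net *dev __rte_unused,
		uint16_t vring, int enable)
{
	RTE_LOG(INFO, PMD, "vring %u -> %s\n", vring,
		enable ? "enabled" : "disabled");
	return 0;
}

static const struct virtio_net_device_ops example_ops = {
	.new_device          = example_new_device,
	.destroy_device      = example_destroy_device,
	.vring_state_changed = example_vring_state_changed,
};

/* Registered once, before starting the vhost session loop. */
void
example_register_vhost_ops(void)
{
	rte_vhost_driver_callback_register(&example_ops);
}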
