From mboxrd@z Thu Jan 1 00:00:00 1970
From: Adrien Mazarguil
To: dev@dpdk.org
Date: Fri, 30 Oct 2015 19:55:08 +0100
Message-Id: <1446231319-8185-6-git-send-email-adrien.mazarguil@6wind.com>
X-Mailer: git-send-email 2.1.0
In-Reply-To: <1446231319-8185-1-git-send-email-adrien.mazarguil@6wind.com>
References: <1444067692-29645-1-git-send-email-adrien.mazarguil@6wind.com>
 <1446231319-8185-1-git-send-email-adrien.mazarguil@6wind.com>
Subject: [dpdk-dev] [PATCH v2 05/16] mlx5: adapt indirection table size depending on RX queues number
List-Id: patches and discussions about DPDK

From: Nelio Laranjeiro

Use the maximum size of the indirection table when the number of requested
RX queues is not a power of two; this helps to improve RSS balancing. A
message informs users that balancing is not optimal in such cases.
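For illustration only, here is a minimal standalone sketch of the rounding
rule described above. It is not part of the patch: log2above() is
reimplemented here as a stand-in for the driver's helper of the same name,
and the queue count and device limit are made-up example values.

/* Sketch: pick the indirection table width from the RX queue count. */
#include <stdio.h>

/* Return log2(v) rounded up, e.g. log2above(6) == 3, log2above(8) == 3. */
static unsigned int
log2above(unsigned int v)
{
	unsigned int l;
	unsigned int r;

	for (l = 0, r = 0; (v >> 1); ++l, v >>= 1)
		r |= (v & 1);
	return l + r;
}

int
main(void)
{
	unsigned int rxqs_n = 6; /* requested RX queues (example value) */
	unsigned int ind_table_max_size = 512; /* device limit (example value) */
	/* Not a power of two: spread over the widest table the device allows,
	 * rounded to a power of two, as the patch below does for wqs_n. */
	unsigned int wqs_n =
		(1 << log2above((rxqs_n & (rxqs_n - 1)) ?
				ind_table_max_size :
				rxqs_n));

	printf("%u RX queues -> %u indirection table entries\n", rxqs_n, wqs_n);
	return 0;
}

With 6 queues the previous code would have used an 8-entry table; taking the
device maximum instead spreads the remainder across far more entries.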
Signed-off-by: Nelio Laranjeiro
Signed-off-by: Adrien Mazarguil
---
 drivers/net/mlx5/mlx5.c      | 10 +++++++++-
 drivers/net/mlx5/mlx5.h      |  1 +
 drivers/net/mlx5/mlx5_defs.h |  3 +++
 drivers/net/mlx5/mlx5_rxq.c  | 21 ++++++++++++++-------
 4 files changed, 27 insertions(+), 8 deletions(-)

diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index e394d32..4413248 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -299,7 +299,9 @@ mlx5_pci_devinit(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev)
 		struct ether_addr mac;
 
 #ifdef HAVE_EXP_QUERY_DEVICE
-		exp_device_attr.comp_mask = IBV_EXP_DEVICE_ATTR_EXP_CAP_FLAGS;
+		exp_device_attr.comp_mask =
+			IBV_EXP_DEVICE_ATTR_EXP_CAP_FLAGS |
+			IBV_EXP_DEVICE_ATTR_RX_HASH;
 #endif /* HAVE_EXP_QUERY_DEVICE */
 
 		DEBUG("using port %u (%08" PRIx32 ")", port, test);
@@ -363,6 +365,12 @@ mlx5_pci_devinit(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev)
 		DEBUG("L2 tunnel checksum offloads are %ssupported",
 		      (priv->hw_csum_l2tun ? "" : "not "));
 
+		priv->ind_table_max_size = exp_device_attr.rx_hash_caps.max_rwq_indirection_table_size;
+		DEBUG("maximum RX indirection table size is %u",
+		      priv->ind_table_max_size);
+
+#else /* HAVE_EXP_QUERY_DEVICE */
+		priv->ind_table_max_size = RSS_INDIRECTION_TABLE_SIZE;
 #endif /* HAVE_EXP_QUERY_DEVICE */
 
 		priv->vf = vf;
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 29fc1da..5a41678 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -109,6 +109,7 @@ struct priv {
 	/* Indirection tables referencing all RX WQs. */
 	struct ibv_exp_rwq_ind_table *(*ind_tables)[];
 	unsigned int ind_tables_n; /* Number of indirection tables. */
+	unsigned int ind_table_max_size; /* Maximum indirection table size. */
 	/* Hash RX QPs feeding the indirection table. */
 	struct hash_rxq (*hash_rxqs)[];
 	unsigned int hash_rxqs_n; /* Hash RX QPs array size. */
diff --git a/drivers/net/mlx5/mlx5_defs.h b/drivers/net/mlx5/mlx5_defs.h
index 369f8b6..3952c71 100644
--- a/drivers/net/mlx5/mlx5_defs.h
+++ b/drivers/net/mlx5/mlx5_defs.h
@@ -46,6 +46,9 @@
 /* Request send completion once in every 64 sends, might be less. */
 #define MLX5_PMD_TX_PER_COMP_REQ 64
 
+/* RSS Indirection table size. */
+#define RSS_INDIRECTION_TABLE_SIZE 128
+
 /* Maximum number of Scatter/Gather Elements per Work Request. */
 #ifndef MLX5_PMD_SGE_WR_N
 #define MLX5_PMD_SGE_WR_N 4
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index 8ea1267..41f8811 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -224,7 +224,13 @@ priv_make_ind_table_init(struct priv *priv,
 int
 priv_create_hash_rxqs(struct priv *priv)
 {
-	unsigned int wqs_n = (1 << log2above(priv->rxqs_n));
+	/* If the requested number of WQs is not a power of two, use the
+	 * maximum indirection table size for better balancing.
+	 * The result is always rounded to the next power of two. */
+	unsigned int wqs_n =
+		(1 << log2above((priv->rxqs_n & (priv->rxqs_n - 1)) ?
+				priv->ind_table_max_size :
+				priv->rxqs_n));
 	struct ibv_exp_wq *wqs[wqs_n];
 	struct ind_table_init ind_table_init[IND_TABLE_INIT_N];
 	unsigned int ind_tables_n =
@@ -251,16 +257,17 @@ priv_create_hash_rxqs(struct priv *priv)
 		      " indirection table cannot be created");
 		return EINVAL;
 	}
-	if (wqs_n < priv->rxqs_n) {
+	if ((wqs_n < priv->rxqs_n) || (wqs_n > priv->ind_table_max_size)) {
 		ERROR("cannot handle this many RX queues (%u)", priv->rxqs_n);
 		err = ERANGE;
 		goto error;
 	}
-	if (wqs_n != priv->rxqs_n)
-		WARN("%u RX queues are configured, consider rounding this"
-		     " number to the next power of two (%u) for optimal"
-		     " performance",
-		     priv->rxqs_n, wqs_n);
+	if (wqs_n != priv->rxqs_n) {
+		INFO("%u RX queues are configured, consider rounding this"
+		     " number to the next power of two for better balancing",
+		     priv->rxqs_n);
+		DEBUG("indirection table extended to assume %u WQs", wqs_n);
+	}
 	/* When the number of RX queues is not a power of two, the remaining
 	 * table entries are padded with reused WQs and hashes are not spread
 	 * uniformly. */
-- 
2.1.0
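
To see why the wider table balances better, here is another small
standalone sketch, not part of the patch. It assumes table entries are
padded by cycling over the configured queues (i % rxqs_n), which is what
the comment at the end of the last hunk describes; all counts are example
values.

/* Sketch: how evenly table entries map onto queues for two table widths.
 * rxqs_n must not exceed 16 in this toy version. */
#include <stdio.h>

static void
show_balance(unsigned int rxqs_n, unsigned int table_size)
{
	unsigned int hits[16] = {0};
	unsigned int i;

	for (i = 0; i != table_size; ++i)
		hits[i % rxqs_n]++; /* queue owning table entry i */
	printf("%u queues, %3u entries:", rxqs_n, table_size);
	for (i = 0; i != rxqs_n; ++i)
		printf(" q%u=%u", i, hits[i]);
	printf("\n");
}

int
main(void)
{
	/* With 6 queues, an 8-entry table gives two queues a double share
	 * (2/2/1/1/1/1), while a 512-entry table gives 86/86/85/85/85/85,
	 * a far smaller relative imbalance. */
	show_balance(6, 8);
	show_balance(6, 512);
	return 0;
}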