patches for DPDK stable branches
From: "Yang, Qiming" <qiming.yang@intel.com>
To: "Wang, Haiyue" <haiyue.wang@intel.com>, "Xu, Rosen" <rosen.xu@intel.com>
Cc: "stable@dpdk.org" <stable@dpdk.org>
Subject: Re: [dpdk-stable] [PATCH Preview] net/i40e: workaround for Fortville performance
Date: Tue, 29 May 2018 08:34:51 +0000	[thread overview]
Message-ID: <F5DF4F0E3AFEF648ADC1C3C33AD4DBF16F918EF3@SHSMSX101.ccr.corp.intel.com> (raw)
In-Reply-To: <1527582265-5864-1-git-send-email-haiyue.wang@intel.com>

Hi, Haiyue
Did you forget to send this to dev@dpdk.org?

Qiming
> -----Original Message-----
> From: Wang, Haiyue
> Sent: Tuesday, May 29, 2018 4:24 PM
> To: Wang, Haiyue <haiyue.wang@intel.com>; Xu, Rosen <rosen.xu@intel.com>;
> Yang, Qiming <qiming.yang@intel.com>
> Cc: stable@dpdk.org
> Subject: [PATCH Preview] net/i40e: workaround for Fortville performance
> 
> The GL_SWR_PM_UP_THR value is not affected by the link speed; it is set
> according to the total number of ports for a better pipe-monitor
> configuration.
> 
> All the relevant device IDs below are considered (NICs, LOMs, Mezz and
> Backplane):
> 
> Device-ID  Value        Comments
> 0x1572     0x03030303   10G SFI
> 0x1581     0x03030303   10G Backplane
> 0x1586     0x03030303   10G BaseT
> 0x1589     0x03030303   10G BaseT (FortPond)
> 0x1580     0x06060606   40G Backplane
> 0x1583     0x06060606   2x40G QSFP
> 0x1584     0x06060606   1x40G QSFP
> 0x1587     0x06060606   20G Backplane (HP)
> 0x1588     0x06060606   20G KR2 (HP)
> 0x158A     0x06060606   25G Backplane
> 0x158B     0x06060606   25G SFP28
> 
> Fixes: c9223a2bf53c ("i40e: workaround for XL710 performance")
> Fixes: 75d133dd3296 ("net/i40e: enable 25G device")
> Cc: stable@dpdk.org
> 
> Signed-off-by: Haiyue Wang <haiyue.wang@intel.com>
> ---
>  drivers/net/i40e/i40e_ethdev.c | 71 +++++++++++++++++++++++++++++++++++++-----
>  1 file changed, 64 insertions(+), 7 deletions(-)
> 
> diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
> index 7d4f1c9..04a2056 100644
> --- a/drivers/net/i40e/i40e_ethdev.c
> +++ b/drivers/net/i40e/i40e_ethdev.c
> @@ -10003,6 +10003,60 @@ enum i40e_filter_pctype
>  #define I40E_GL_SWR_PM_UP_THR_SF_VALUE   0x06060606
>  #define I40E_GL_SWR_PM_UP_THR            0x269FBC
> 
> +/*
> + * GL_SWR_PM_UP_THR:
> + * The value is not affected by the link speed; it is set according
> + * to the total number of ports for a better pipe-monitor configuration.
> + */
> +static int
> +i40e_get_swr_pm_cfg(struct i40e_hw *hw, uint64_t *value)
> +{
> +#define I40E_GL_SWR_PM_EF_DEVICE(dev) \
> +		.device_id = (dev),   \
> +		.val = I40E_GL_SWR_PM_UP_THR_EF_VALUE
> +
> +#define I40E_GL_SWR_PM_SF_DEVICE(dev) \
> +		.device_id = (dev),   \
> +		.val = I40E_GL_SWR_PM_UP_THR_SF_VALUE
> +
> +	static const struct {
> +		uint16_t device_id;
> +		uint64_t val;
> +	} swr_pm_table[] = {
> +		{ I40E_GL_SWR_PM_EF_DEVICE(I40E_DEV_ID_SFP_XL710) },
> +		{ I40E_GL_SWR_PM_EF_DEVICE(I40E_DEV_ID_KX_C) },
> +		{ I40E_GL_SWR_PM_EF_DEVICE(I40E_DEV_ID_10G_BASE_T) },
> +		{ I40E_GL_SWR_PM_EF_DEVICE(I40E_DEV_ID_10G_BASE_T4) },
> +
> +		{ I40E_GL_SWR_PM_SF_DEVICE(I40E_DEV_ID_KX_B) },
> +		{ I40E_GL_SWR_PM_SF_DEVICE(I40E_DEV_ID_QSFP_A) },
> +		{ I40E_GL_SWR_PM_SF_DEVICE(I40E_DEV_ID_QSFP_B) },
> +		{ I40E_GL_SWR_PM_SF_DEVICE(I40E_DEV_ID_20G_KR2) },
> +		{ I40E_GL_SWR_PM_SF_DEVICE(I40E_DEV_ID_20G_KR2_A) },
> +		{ I40E_GL_SWR_PM_SF_DEVICE(I40E_DEV_ID_25G_B) },
> +		{ I40E_GL_SWR_PM_SF_DEVICE(I40E_DEV_ID_25G_SFP28) },
> +	};
> +	uint32_t i;
> +
> +	if (value == NULL) {
> +		PMD_DRV_LOG(ERR, "value is NULL");
> +		return 0;
> +	}
> +
> +	for (i = 0; i < RTE_DIM(swr_pm_table); i++) {
> +		if (hw->device_id == swr_pm_table[i].device_id) {
> +			*value = swr_pm_table[i].val;
> +
> +			PMD_DRV_LOG(DEBUG, "Device 0x%" PRIx16 " with "
> +				    "GL_SWR_PM_UP_THR setting to 0x%" PRIx64,
> +				    hw->device_id, *value);
> +			return 1;
> +		}
> +	}
> +
> +	return 0;
> +}
> +
>  static int
>  i40e_dev_sync_phy_type(struct i40e_hw *hw)
>  {
> @@ -10067,13 +10121,16 @@ enum i40e_filter_pctype
>  		}
> 
>  		if (reg_table[i].addr == I40E_GL_SWR_PM_UP_THR) {
> -			if (I40E_PHY_TYPE_SUPPORT_40G(hw->phy.phy_types) || /* For XL710 */
> -			    I40E_PHY_TYPE_SUPPORT_25G(hw->phy.phy_types)) /* For XXV710 */
> -				reg_table[i].val =
> -					I40E_GL_SWR_PM_UP_THR_SF_VALUE;
> -			else /* For X710 */
> -				reg_table[i].val =
> -					I40E_GL_SWR_PM_UP_THR_EF_VALUE;
> +			uint64_t cfg_val;
> +
> +			if (!i40e_get_swr_pm_cfg(hw, &cfg_val)) {
> +				PMD_DRV_LOG(DEBUG, "Device 0x%" PRIx16 " skips "
> +					    "GL_SWR_PM_UP_THR setting",
> +					    hw->device_id);
> +				continue;
> +			}
> +
> +			reg_table[i].val = cfg_val;
>  		}
> 
>  		ret = i40e_aq_debug_read_register(hw, reg_table[i].addr,
> --
> 1.8.3.1
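
For readers who want the lookup pattern in isolation, below is a minimal,
self-contained sketch of the device-ID keyed table that the patch above
introduces. The device IDs and the two register values are taken from the
table in the commit message; everything else (the swr_pm_entry struct, the
swr_pm_lookup() helper, and the main() driver) is hypothetical and for
illustration only, not DPDK driver code.

/*
 * Standalone illustration of the device-ID keyed GL_SWR_PM_UP_THR lookup.
 * Device IDs and values come from the commit message table above; the
 * helper and struct names here are made up for this example.
 */
#include <stddef.h>
#include <inttypes.h>
#include <stdio.h>

#define GL_SWR_PM_UP_THR_EF_VALUE 0x03030303u /* 10G-class devices */
#define GL_SWR_PM_UP_THR_SF_VALUE 0x06060606u /* 20G/25G/40G-class devices */

struct swr_pm_entry {
	uint16_t device_id;
	uint32_t val;
};

static const struct swr_pm_entry swr_pm_table[] = {
	{ 0x1572, GL_SWR_PM_UP_THR_EF_VALUE }, /* 10G SFI */
	{ 0x1581, GL_SWR_PM_UP_THR_EF_VALUE }, /* 10G Backplane */
	{ 0x1586, GL_SWR_PM_UP_THR_EF_VALUE }, /* 10G BaseT */
	{ 0x1589, GL_SWR_PM_UP_THR_EF_VALUE }, /* 10G BaseT (FortPond) */
	{ 0x1580, GL_SWR_PM_UP_THR_SF_VALUE }, /* 40G Backplane */
	{ 0x1583, GL_SWR_PM_UP_THR_SF_VALUE }, /* 2x40G QSFP */
	{ 0x1584, GL_SWR_PM_UP_THR_SF_VALUE }, /* 1x40G QSFP */
	{ 0x1587, GL_SWR_PM_UP_THR_SF_VALUE }, /* 20G Backplane (HP) */
	{ 0x1588, GL_SWR_PM_UP_THR_SF_VALUE }, /* 20G KR2 (HP) */
	{ 0x158A, GL_SWR_PM_UP_THR_SF_VALUE }, /* 25G Backplane */
	{ 0x158B, GL_SWR_PM_UP_THR_SF_VALUE }, /* 25G SFP28 */
};

/* Return 1 and fill *value when the device ID is known, otherwise 0. */
static int
swr_pm_lookup(uint16_t device_id, uint32_t *value)
{
	size_t i;

	for (i = 0; i < sizeof(swr_pm_table) / sizeof(swr_pm_table[0]); i++) {
		if (swr_pm_table[i].device_id == device_id) {
			*value = swr_pm_table[i].val;
			return 1;
		}
	}
	return 0; /* unknown device: caller leaves the register untouched */
}

int
main(void)
{
	uint32_t val;

	if (swr_pm_lookup(0x158B, &val)) /* 25G SFP28 */
		printf("GL_SWR_PM_UP_THR = 0x%08" PRIx32 "\n", val);
	if (!swr_pm_lookup(0x0001, &val)) /* made-up unknown device ID */
		printf("unknown device, register left at its default\n");
	return 0;
}

The design point mirrors the patch: an unknown device ID is not forced to
either value; the caller simply skips the GL_SWR_PM_UP_THR write, which the
earlier phy-type based check could not express.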
