DPDK patches and discussions
From: "Wu, Jingjing" <jingjing.wu@intel.com>
To: "Liu, Mingxia" <mingxia.liu@intel.com>, "dev@dpdk.org" <dev@dpdk.org>
Cc: "Xing, Beilei" <beilei.xing@intel.com>
Subject: RE: [PATCH v3 2/6] common/idpf: add RSS set/get ops
Date: Thu, 2 Feb 2023 03:28:13 +0000	[thread overview]
Message-ID: <MW3PR11MB4587E429B7FCF307319EFBD9E3D69@MW3PR11MB4587.namprd11.prod.outlook.com> (raw)
In-Reply-To: <20230118071440.902155-3-mingxia.liu@intel.com>

> +static int idpf_config_rss_hf(struct idpf_vport *vport, uint64_t rss_hf)
> +{
> +	uint64_t hena = 0, valid_rss_hf = 0;
According to the DPDK coding style, when several variables are declared on one line, only the last one may be initialized.

> +	int ret = 0;
> +	uint16_t i;
> +
> +	/**
> +	 * RTE_ETH_RSS_IPV4 and RTE_ETH_RSS_IPV6 can be considered as 2
> +	 * generalizations of all other IPv4 and IPv6 RSS types.
> +	 */
> +	if (rss_hf & RTE_ETH_RSS_IPV4)
> +		rss_hf |= idpf_ipv4_rss;
> +
> +	if (rss_hf & RTE_ETH_RSS_IPV6)
> +		rss_hf |= idpf_ipv6_rss;
> +
> +	for (i = 0; i < RTE_DIM(idpf_map_hena_rss); i++) {
> +		uint64_t bit = BIT_ULL(i);
> +
> +		if (idpf_map_hena_rss[i] & rss_hf) {
> +			valid_rss_hf |= idpf_map_hena_rss[i];
> +			hena |= bit;
> +		}
> +	}
> +
> +	vport->rss_hf = hena;
> +
> +	ret = idpf_vc_set_rss_hash(vport);
> +	if (ret != 0) {
> +		PMD_DRV_LOG(WARNING,
> +			    "fail to set RSS offload types, ret: %d", ret);
> +		return ret;
> +	}
> +
> +	if (valid_rss_hf & idpf_ipv4_rss)
> +		valid_rss_hf |= rss_hf & RTE_ETH_RSS_IPV4;
> +
> +	if (valid_rss_hf & idpf_ipv6_rss)
> +		valid_rss_hf |= rss_hf & RTE_ETH_RSS_IPV6;
> +
> +	if (rss_hf & ~valid_rss_hf)
> +		PMD_DRV_LOG(WARNING, "Unsupported rss_hf 0x%" PRIx64,
> +			    rss_hf & ~valid_rss_hf);
This is a bit confusing: according to the assignments above, valid_rss_hf looks like a subset of rss_hf, so can this branch ever be taken?
And if it can, why not compute valid_rss_hf before sending the virtchnl command?

> +	vport->last_general_rss_hf = valid_rss_hf;
> +
> +	return ret;
> +}
> +
>  static int
>  idpf_init_rss(struct idpf_vport *vport)
>  {
> @@ -256,6 +357,204 @@ idpf_init_rss(struct idpf_vport *vport)
>  	return ret;
>  }
> 
> +static int
> +idpf_rss_reta_update(struct rte_eth_dev *dev,
> +		     struct rte_eth_rss_reta_entry64 *reta_conf,
> +		     uint16_t reta_size)
> +{
> +	struct idpf_vport *vport = dev->data->dev_private;
> +	struct idpf_adapter *adapter = vport->adapter;
> +	uint16_t idx, shift;
> +	uint32_t *lut;
> +	int ret = 0;
> +	uint16_t i;
> +
> +	if (adapter->caps.rss_caps == 0 || dev->data->nb_rx_queues == 0) {
> +		PMD_DRV_LOG(DEBUG, "RSS is not supported");
> +		return -ENOTSUP;
> +	}
> +
> +	if (reta_size != vport->rss_lut_size) {
> +		PMD_DRV_LOG(ERR, "The size of hash lookup table configured "
> +				 "(%d) doesn't match the number of hardware can "
> +				 "support (%d)",
> +			    reta_size, vport->rss_lut_size);
> +		return -EINVAL;
> +	}
> +
> +	/* It MUST use the current LUT size to get the RSS lookup table,
> +	 * otherwise if will fail with -100 error code.
> +	 */
> +	lut = rte_zmalloc(NULL, reta_size * sizeof(uint32_t), 0);
> +	if (!lut) {
> +		PMD_DRV_LOG(ERR, "No memory can be allocated");
> +		return -ENOMEM;
> +	}
> +	/* store the old lut table temporarily */
> +	rte_memcpy(lut, vport->rss_lut, reta_size * sizeof(uint32_t));
vport->rss_lut is copied into lut here, but lut is then overwritten in the loop below — what is the purpose of storing the old table?

> +
> +	for (i = 0; i < reta_size; i++) {
> +		idx = i / RTE_ETH_RETA_GROUP_SIZE;
> +		shift = i % RTE_ETH_RETA_GROUP_SIZE;
> +		if (reta_conf[idx].mask & (1ULL << shift))
> +			lut[i] = reta_conf[idx].reta[shift];
> +	}
> +
> +	rte_memcpy(vport->rss_lut, lut, reta_size * sizeof(uint32_t));
> +	/* send virtchnl ops to configure RSS */
> +	ret = idpf_vc_set_rss_lut(vport);
> +	if (ret) {
> +		PMD_INIT_LOG(ERR, "Failed to configure RSS lut");
> +		goto out;
> +	}
> +out:
> +	rte_free(lut);
> +
> +	return ret;
> +}


Thread overview: 63+ messages
2022-12-16  9:36 [PATCH 0/7] add idpf pmd enhancement features Mingxia Liu
2022-12-16  9:37 ` [PATCH 1/7] common/idpf: add hw statistics Mingxia Liu
2022-12-16  9:37 ` [PATCH 2/7] common/idpf: add RSS set/get ops Mingxia Liu
2022-12-16  9:37 ` [PATCH 3/7] common/idpf: support single q scatter RX datapath Mingxia Liu
2022-12-16  9:37 ` [PATCH 4/7] common/idpf: add rss_offload hash in singleq rx Mingxia Liu
2022-12-16  9:37 ` [PATCH 5/7] common/idpf: add alarm to support handle vchnl message Mingxia Liu
2022-12-16  9:37 ` [PATCH 6/7] common/idpf: add xstats ops Mingxia Liu
2022-12-16  9:37 ` [PATCH 7/7] common/idpf: update mbuf_alloc_failed multi-thread process Mingxia Liu
2023-01-11  7:15 ` [PATCH 0/6] add idpf pmd enhancement features Mingxia Liu
2023-01-11  7:15   ` [PATCH v2 1/6] common/idpf: add hw statistics Mingxia Liu
2023-01-11  7:15   ` [PATCH v2 2/6] common/idpf: add RSS set/get ops Mingxia Liu
2023-01-11  7:15   ` [PATCH v2 3/6] common/idpf: support single q scatter RX datapath Mingxia Liu
2023-01-11  7:15   ` [PATCH v2 4/6] common/idpf: add rss_offload hash in singleq rx Mingxia Liu
2023-01-11  7:15   ` [PATCH v2 5/6] common/idpf: add alarm to support handle vchnl message Mingxia Liu
2023-01-11  7:15   ` [PATCH v2 6/6] common/idpf: add xstats ops Mingxia Liu
2023-01-18  7:14   ` [PATCH v3 0/6] add idpf pmd enhancement features Mingxia Liu
2023-01-18  7:14     ` [PATCH v3 1/6] common/idpf: add hw statistics Mingxia Liu
2023-02-01  8:48       ` Wu, Jingjing
2023-02-01 12:34         ` Liu, Mingxia
2023-01-18  7:14     ` [PATCH v3 2/6] common/idpf: add RSS set/get ops Mingxia Liu
2023-02-02  3:28       ` Wu, Jingjing [this message]
2023-02-07  3:10         ` Liu, Mingxia
2023-01-18  7:14     ` [PATCH v3 3/6] common/idpf: support single q scatter RX datapath Mingxia Liu
2023-02-02  3:45       ` Wu, Jingjing
2023-02-02  7:19         ` Liu, Mingxia
2023-01-18  7:14     ` [PATCH v3 4/6] common/idpf: add rss_offload hash in singleq rx Mingxia Liu
2023-01-18  7:14     ` [PATCH v3 5/6] common/idpf: add alarm to support handle vchnl message Mingxia Liu
2023-02-02  4:23       ` Wu, Jingjing
2023-02-02  7:39         ` Liu, Mingxia
2023-02-02  8:46           ` Wu, Jingjing
2023-01-18  7:14     ` [PATCH v3 6/6] common/idpf: add xstats ops Mingxia Liu
2023-02-07  9:56     ` [PATCH v4 0/6] add idpf pmd enhancement features Mingxia Liu
2023-02-07  9:56       ` [PATCH v4 1/6] common/idpf: add hw statistics Mingxia Liu
2023-02-07  9:56       ` [PATCH v4 2/6] common/idpf: add RSS set/get ops Mingxia Liu
2023-02-07  9:56       ` [PATCH v4 3/6] common/idpf: support single q scatter RX datapath Mingxia Liu
2023-02-07  9:56       ` [PATCH v4 4/6] common/idpf: add rss_offload hash in singleq rx Mingxia Liu
2023-02-07  9:57       ` [PATCH v4 5/6] common/idpf: add alarm to support handle vchnl message Mingxia Liu
2023-02-07  9:57       ` [PATCH v4 6/6] common/idpf: add xstats ops Mingxia Liu
2023-02-07 10:08       ` [PATCH v4 0/6] add idpf pmd enhancement features Mingxia Liu
2023-02-07 10:08         ` [PATCH v5 1/6] common/idpf: add hw statistics Mingxia Liu
2023-02-07 10:16           ` [PATCH v6 0/6] add idpf pmd enhancement features Mingxia Liu
2023-02-07 10:16             ` [PATCH v6 1/6] common/idpf: add hw statistics Mingxia Liu
2023-02-08  2:00               ` Zhang, Qi Z
2023-02-08  8:28                 ` Liu, Mingxia
2023-02-07 10:16             ` [PATCH v6 2/6] common/idpf: add RSS set/get ops Mingxia Liu
2023-02-07 10:16             ` [PATCH v6 3/6] common/idpf: support single q scatter RX datapath Mingxia Liu
2023-02-07 10:16             ` [PATCH v6 4/6] common/idpf: add rss_offload hash in singleq rx Mingxia Liu
2023-02-07 10:16             ` [PATCH v6 5/6] common/idpf: add alarm to support handle vchnl message Mingxia Liu
2023-02-07 10:16             ` [PATCH v6 6/6] common/idpf: add xstats ops Mingxia Liu
2023-02-08  0:28             ` [PATCH v6 0/6] add idpf pmd enhancement features Wu, Jingjing
2023-02-08  7:33             ` [PATCH v7 " Mingxia Liu
2023-02-08  7:33               ` [PATCH v7 1/6] net/idpf: add hw statistics Mingxia Liu
2023-02-08  7:33               ` [PATCH v7 2/6] net/idpf: add RSS set/get ops Mingxia Liu
2023-02-08  7:33               ` [PATCH v7 3/6] net/idpf: support single q scatter RX datapath Mingxia Liu
2023-02-08  7:33               ` [PATCH v7 4/6] net/idpf: add rss_offload hash in singleq rx Mingxia Liu
2023-02-08  7:34               ` [PATCH v7 5/6] net/idpf: add alarm to support handle vchnl message Mingxia Liu
2023-02-08  7:34               ` [PATCH v7 6/6] net/idpf: add xstats ops Mingxia Liu
2023-02-08  9:32               ` [PATCH v7 0/6] add idpf pmd enhancement features Zhang, Qi Z
2023-02-07 10:08         ` [PATCH v5 2/6] common/idpf: add RSS set/get ops Mingxia Liu
2023-02-07 10:08         ` [PATCH v5 3/6] common/idpf: support single q scatter RX datapath Mingxia Liu
2023-02-07 10:08         ` [PATCH v5 4/6] common/idpf: add rss_offload hash in singleq rx Mingxia Liu
2023-02-07 10:08         ` [PATCH v5 5/6] common/idpf: add alarm to support handle vchnl message Mingxia Liu
2023-02-07 10:08         ` [PATCH v5 6/6] common/idpf: add xstats ops Mingxia Liu
