DPDK patches and discussions
 help / color / mirror / Atom feed
* Re: [dpdk-dev] [PATCH v1 12/12] doc: enable DCF datapath configuration
  2020-06-05 20:17 ` [dpdk-dev] [PATCH v1 12/12] doc: enable DCF datapath configuration Ting Xu
@ 2020-06-05 14:41   ` Ye Xiaolong
  2020-06-09  7:50     ` Xu, Ting
  0 siblings, 1 reply; 65+ messages in thread
From: Ye Xiaolong @ 2020-06-05 14:41 UTC (permalink / raw)
  To: Ting Xu; +Cc: dev, qi.z.zhang, qiming.yang, john.mcnamara, marko.kovacevic

Hi, Ting

On 06/05, Ting Xu wrote:
>Add doc for DCF datapath configuration in DPDK 20.08 release note.
>

It'd be better to also add some document update in ice.rst.

Thanks,
Xiaolong

>Signed-off-by: Ting Xu <ting.xu@intel.com>
>---
> doc/guides/rel_notes/release_20_08.rst | 5 +++++
> 1 file changed, 5 insertions(+)
>
>diff --git a/doc/guides/rel_notes/release_20_08.rst b/doc/guides/rel_notes/release_20_08.rst
>index 39064afbe..3cda6111c 100644
>--- a/doc/guides/rel_notes/release_20_08.rst
>+++ b/doc/guides/rel_notes/release_20_08.rst
>@@ -56,6 +56,11 @@ New Features
>      Also, make sure to start the actual text at the margin.
>      =========================================================
> 
>+* **Updated the Intel ice driver.**
>+
>+  Updated the Intel ice driver with new features and improvements, including:
>+
>+  * Added support for DCF datapath configuration.
> 
> Removed Items
> -------------
>-- 
>2.17.1
>

^ permalink raw reply	[flat|nested] 65+ messages in thread

* Re: [dpdk-dev] [PATCH v1 03/12] net/ice: complete dev configure in DCF
  2020-06-05 20:17 ` [dpdk-dev] [PATCH v1 03/12] net/ice: complete dev configure " Ting Xu
@ 2020-06-05 14:56   ` Ye Xiaolong
  0 siblings, 0 replies; 65+ messages in thread
From: Ye Xiaolong @ 2020-06-05 14:56 UTC (permalink / raw)
  To: Ting Xu; +Cc: dev, qi.z.zhang, qiming.yang, john.mcnamara, marko.kovacevic

On 06/05, Ting Xu wrote:
>From: Qi Zhang <qi.z.zhang@intel.com>
>
>Enable device configuration function in DCF.
>
>Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
>---
> drivers/net/ice/ice_dcf_ethdev.c | 9 +++++++++
> 1 file changed, 9 insertions(+)
>
>diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
>index 7f24ef81a..e8bed1362 100644
>--- a/drivers/net/ice/ice_dcf_ethdev.c
>+++ b/drivers/net/ice/ice_dcf_ethdev.c
>@@ -59,6 +59,15 @@ ice_dcf_dev_stop(struct rte_eth_dev *dev)
> static int
> ice_dcf_dev_configure(__rte_unused struct rte_eth_dev *dev)

__rte_unused tag should be removed.

> {
>+	struct ice_dcf_adapter *dcf_ad = dev->data->dev_private;
>+	struct ice_adapter *ad = &dcf_ad->parent;
>+
>+	ad->rx_bulk_alloc_allowed = true;
>+	ad->tx_simple_allowed = true;
>+
>+	if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
>+		dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
>+
> 	return 0;
> }
> 
>-- 
>2.17.1
>

^ permalink raw reply	[flat|nested] 65+ messages in thread

* Re: [dpdk-dev] [PATCH v1 07/12] net/ice: init RSS during DCF start
  2020-06-05 20:17 ` [dpdk-dev] [PATCH v1 07/12] net/ice: init RSS during DCF start Ting Xu
@ 2020-06-05 15:26   ` Ye Xiaolong
  0 siblings, 0 replies; 65+ messages in thread
From: Ye Xiaolong @ 2020-06-05 15:26 UTC (permalink / raw)
  To: Ting Xu; +Cc: dev, qi.z.zhang, qiming.yang, john.mcnamara, marko.kovacevic

On 06/05, Ting Xu wrote:
>From: Qi Zhang <qi.z.zhang@intel.com>
>
>Enable RSS initialization during DCF start. Add RSS LUT and
>RSS key configuration functions.
>
>Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
>---
> drivers/net/ice/ice_dcf.c        | 123 +++++++++++++++++++++++++++++++
> drivers/net/ice/ice_dcf.h        |   1 +
> drivers/net/ice/ice_dcf_ethdev.c |  14 +++-
> 3 files changed, 135 insertions(+), 3 deletions(-)
>
>diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c
>index 93fabd5f7..8d078163e 100644
>--- a/drivers/net/ice/ice_dcf.c
>+++ b/drivers/net/ice/ice_dcf.c
>@@ -708,3 +708,126 @@ ice_dcf_uninit_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw)
> 	rte_free(hw->rss_lut);
> 	rte_free(hw->rss_key);
> }
>+
>+static int
>+ice_dcf_configure_rss_key(struct ice_dcf_hw *hw)
>+{
>+	struct virtchnl_rss_key *rss_key;
>+	struct dcf_virtchnl_cmd args;
>+	int len, err;
>+
>+	len = sizeof(*rss_key) + hw->vf_res->rss_key_size - 1;
>+	rss_key = rte_zmalloc("rss_key", len, 0);
>+	if (!rss_key)
>+		return -ENOMEM;
>+
>+	rss_key->vsi_id = hw->vsi_res->vsi_id;
>+	rss_key->key_len = hw->vf_res->rss_key_size;
>+	rte_memcpy(rss_key->key, hw->rss_key, hw->vf_res->rss_key_size);
>+
>+	args.v_op = VIRTCHNL_OP_CONFIG_RSS_KEY;
>+	args.req_msglen = len;
>+	args.req_msg = (uint8_t *)rss_key;
>+	args.rsp_msglen = 0;
>+	args.rsp_buflen = 0;
>+	args.rsp_msgbuf = NULL;
>+	args.pending = 0;
>+
>+	err = ice_dcf_execute_virtchnl_cmd(hw, &args);
>+	if (err) {
>+		PMD_INIT_LOG(ERR, "Failed to execute OP_CONFIG_RSS_KEY");

Need to free rss_key in error handling as well.

>+		return err;
>+	}
>+
>+	rte_free(rss_key);
>+	return 0;
>+}
>+
>+static int
>+ice_dcf_configure_rss_lut(struct ice_dcf_hw *hw)
>+{
>+	struct virtchnl_rss_lut *rss_lut;
>+	struct dcf_virtchnl_cmd args;
>+	int len, err;
>+
>+	len = sizeof(*rss_lut) + hw->vf_res->rss_lut_size - 1;
>+	rss_lut = rte_zmalloc("rss_lut", len, 0);
>+	if (!rss_lut)
>+		return -ENOMEM;
>+
>+	rss_lut->vsi_id = hw->vsi_res->vsi_id;
>+	rss_lut->lut_entries = hw->vf_res->rss_lut_size;
>+	rte_memcpy(rss_lut->lut, hw->rss_lut, hw->vf_res->rss_lut_size);
>+
>+	args.v_op = VIRTCHNL_OP_CONFIG_RSS_LUT;
>+	args.req_msglen = len;
>+	args.req_msg = (uint8_t *)rss_lut;
>+	args.rsp_msglen = 0;
>+	args.rsp_buflen = 0;
>+	args.rsp_msgbuf = NULL;
>+	args.pending = 0;
>+
>+	err = ice_dcf_execute_virtchnl_cmd(hw, &args);
>+	if (err) {
>+		PMD_INIT_LOG(ERR, "Failed to execute OP_CONFIG_RSS_LUT");

Need to free rss_lut here.

>+		return err;
>+	}
>+
>+	rte_free(rss_lut);
>+	return 0;
>+}
>+
>+int
>+ice_dcf_init_rss(struct ice_dcf_hw *hw)
>+{
>+	struct rte_eth_dev *dev = hw->eth_dev;
>+	struct rte_eth_rss_conf *rss_conf;
>+	uint8_t i, j, nb_q;
>+	int ret;
>+
>+	rss_conf = &dev->data->dev_conf.rx_adv_conf.rss_conf;
>+	nb_q = dev->data->nb_rx_queues;
>+
>+	if (!(hw->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_RSS_PF)) {
>+		PMD_DRV_LOG(DEBUG, "RSS is not supported");
>+		return -ENOTSUP;
>+	}
>+	if (dev->data->dev_conf.rxmode.mq_mode != ETH_MQ_RX_RSS) {
>+		PMD_DRV_LOG(WARNING, "RSS is enabled by PF by default");
>+		/* set all lut items to default queue */
>+		for (i = 0; i < hw->vf_res->rss_lut_size; i++)
>+			hw->rss_lut[i] = 0;

How about	memset(hw->rss_lut, 0, hw->vf_res->rss_lut_size);

>+		ret = ice_dcf_configure_rss_lut(hw);
>+		return ret;

return ice_dcf_configure_rss_lut(hw);

>+	}
>+
>+	/* In IAVF, RSS enablement is set by PF driver. It is not supported
>+	 * to set based on rss_conf->rss_hf.
>+	 */
>+
>+	/* configure RSS key */
>+	if (!rss_conf->rss_key)
>+		/* Calculate the default hash key */
>+		for (i = 0; i <= hw->vf_res->rss_key_size; i++)
>+			hw->rss_key[i] = (uint8_t)rte_rand();

Why use <=, will it cause out-of-bounds access?

>+	else
>+		rte_memcpy(hw->rss_key, rss_conf->rss_key,
>+			   RTE_MIN(rss_conf->rss_key_len,
>+				   hw->vf_res->rss_key_size));
>+
>+	/* init RSS LUT table */
>+	for (i = 0, j = 0; i < hw->vf_res->rss_lut_size; i++, j++) {
>+		if (j >= nb_q)
>+			j = 0;
>+		hw->rss_lut[i] = j;
>+	}
>+	/* send virtchnnl ops to configure rss*/
>+	ret = ice_dcf_configure_rss_lut(hw);
>+	if (ret)
>+		return ret;
>+	ret = ice_dcf_configure_rss_key(hw);
>+	if (ret)
>+		return ret;
>+
>+	return 0;
>+}
>diff --git a/drivers/net/ice/ice_dcf.h b/drivers/net/ice/ice_dcf.h
>index dcb2a0283..eea4b286b 100644
>--- a/drivers/net/ice/ice_dcf.h
>+++ b/drivers/net/ice/ice_dcf.h
>@@ -63,5 +63,6 @@ int ice_dcf_send_aq_cmd(void *dcf_hw, struct ice_aq_desc *desc,
> int ice_dcf_handle_vsi_update_event(struct ice_dcf_hw *hw);
> int ice_dcf_init_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw);
> void ice_dcf_uninit_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw);
>+int ice_dcf_init_rss(struct ice_dcf_hw *hw);
> 
> #endif /* _ICE_DCF_H_ */
>diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
>index 1f7474dc3..5fbf70803 100644
>--- a/drivers/net/ice/ice_dcf_ethdev.c
>+++ b/drivers/net/ice/ice_dcf_ethdev.c
>@@ -51,9 +51,9 @@ ice_dcf_init_rxq(struct rte_eth_dev *dev, struct ice_rx_queue *rxq)
> 	uint16_t buf_size, max_pkt_len, len;
> 
> 	buf_size = rte_pktmbuf_data_room_size(rxq->mp) - RTE_PKTMBUF_HEADROOM;
>-
>-	/* Calculate the maximum packet length allowed */
>-	len = rxq->rx_buf_len * IAVF_MAX_CHAINED_RX_BUFFERS;
>+	rxq->rx_hdr_len = 0;
>+	rxq->rx_buf_len = RTE_ALIGN(buf_size, (1 << ICE_RLAN_CTX_DBUF_S));
>+	len = ICE_SUPPORT_CHAIN_NUM * rxq->rx_buf_len;

Above change seems unrelated to this patch, what about squashing it to patch 6?

Thanks,
Xiaolong

> 	max_pkt_len = RTE_MIN(len, dev->data->dev_conf.rxmode.max_rx_pkt_len);
> 
> 	/* Check if the jumbo frame and maximum packet length are set
>@@ -133,6 +133,14 @@ ice_dcf_dev_start(struct rte_eth_dev *dev)
> 		return ret;
> 	}
> 
>+	if (hw->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_RSS_PF) {
>+		ret = ice_dcf_init_rss(hw);
>+		if (ret) {
>+			PMD_DRV_LOG(ERR, "Failed to configure RSS");
>+			return ret;
>+		}
>+	}
>+
> 	dev->data->dev_link.link_status = ETH_LINK_UP;
> 
> 	return 0;
>-- 
>2.17.1
>

^ permalink raw reply	[flat|nested] 65+ messages in thread

* [dpdk-dev] [PATCH v1 00/12] enable DCF datapath configuration
@ 2020-06-05 20:17 Ting Xu
  2020-06-05 20:17 ` [dpdk-dev] [PATCH v1 01/12] net/ice: init RSS and supported RXDID in DCF Ting Xu
                   ` (14 more replies)
  0 siblings, 15 replies; 65+ messages in thread
From: Ting Xu @ 2020-06-05 20:17 UTC (permalink / raw)
  To: dev; +Cc: qi.z.zhang, qiming.yang, john.mcnamara, marko.kovacevic

This patchset adds support to configure DCF datapath, including
Rx/Tx queues setup, start and stop, device configuration, RSS
and flexible descriptor RXDID initialization and MAC filter setup.

Qi Zhang (11):
  net/ice: init RSS and supported RXDID in DCF
  net/ice: complete device info get in DCF
  net/ice: complete dev configure in DCF
  net/ice: complete queue setup in DCF
  net/ice: add stop flag for device start / stop
  net/ice: add Rx queue init in DCF
  net/ice: init RSS during DCF start
  net/ice: add queue config in DCF
  net/ice: add queue start and stop for DCF
  net/ice: enable stats for DCF
  net/ice: set MAC filter during dev start for DCF

Ting Xu (1):
  doc: enable DCF datapath configuration

 doc/guides/rel_notes/release_20_08.rst |   5 +
 drivers/net/ice/ice_dcf.c              | 412 +++++++++++++-
 drivers/net/ice/ice_dcf.h              |  17 +
 drivers/net/ice/ice_dcf_ethdev.c       | 759 +++++++++++++++++++++++--
 drivers/net/ice/ice_dcf_ethdev.h       |   3 -
 drivers/net/ice/ice_dcf_parent.c       |   8 +
 6 files changed, 1151 insertions(+), 53 deletions(-)

-- 
2.17.1


^ permalink raw reply	[flat|nested] 65+ messages in thread

* [dpdk-dev] [PATCH v1 01/12] net/ice: init RSS and supported RXDID in DCF
  2020-06-05 20:17 [dpdk-dev] [PATCH v1 00/12] enable DCF datapath configuration Ting Xu
@ 2020-06-05 20:17 ` Ting Xu
  2020-06-05 20:17 ` [dpdk-dev] [PATCH v1 02/12] net/ice: complete device info get " Ting Xu
                   ` (13 subsequent siblings)
  14 siblings, 0 replies; 65+ messages in thread
From: Ting Xu @ 2020-06-05 20:17 UTC (permalink / raw)
  To: dev; +Cc: qi.z.zhang, qiming.yang, john.mcnamara, marko.kovacevic

From: Qi Zhang <qi.z.zhang@intel.com>

Enable RSS parameters initialization and get the supported
flexible descriptor RXDIDs bitmap from PF during DCF init.

Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
---
 drivers/net/ice/ice_dcf.c | 54 ++++++++++++++++++++++++++++++++++++++-
 drivers/net/ice/ice_dcf.h |  3 +++
 2 files changed, 56 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c
index 0cd5d1bf6..93fabd5f7 100644
--- a/drivers/net/ice/ice_dcf.c
+++ b/drivers/net/ice/ice_dcf.c
@@ -233,7 +233,7 @@ ice_dcf_get_vf_resource(struct ice_dcf_hw *hw)
 
 	caps = VIRTCHNL_VF_OFFLOAD_WB_ON_ITR | VIRTCHNL_VF_OFFLOAD_RX_POLLING |
 	       VIRTCHNL_VF_CAP_ADV_LINK_SPEED | VIRTCHNL_VF_CAP_DCF |
-	       VF_BASE_MODE_OFFLOADS;
+	       VF_BASE_MODE_OFFLOADS | VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC;
 
 	err = ice_dcf_send_cmd_req_no_irq(hw, VIRTCHNL_OP_GET_VF_RESOURCES,
 					  (uint8_t *)&caps, sizeof(caps));
@@ -547,6 +547,30 @@ ice_dcf_handle_vsi_update_event(struct ice_dcf_hw *hw)
 	return err;
 }
 
+static int
+ice_dcf_get_supported_rxdid(struct ice_dcf_hw *hw)
+{
+	int err;
+
+	err = ice_dcf_send_cmd_req_no_irq(hw,
+					  VIRTCHNL_OP_GET_SUPPORTED_RXDIDS,
+					  NULL, 0);
+	if (err) {
+		PMD_INIT_LOG(ERR, "Failed to send OP_GET_SUPPORTED_RXDIDS");
+		return -1;
+	}
+
+	err = ice_dcf_recv_cmd_rsp_no_irq(hw, VIRTCHNL_OP_GET_SUPPORTED_RXDIDS,
+					  (uint8_t *)&hw->supported_rxdid,
+					  sizeof(uint64_t), NULL);
+	if (err) {
+		PMD_INIT_LOG(ERR, "Failed to get response of OP_GET_SUPPORTED_RXDIDS");
+		return -1;
+	}
+
+	return 0;
+}
+
 int
 ice_dcf_init_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw)
 {
@@ -620,6 +644,29 @@ ice_dcf_init_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw)
 		goto err_alloc;
 	}
 
+	/* Allocate memory for RSS info */
+	if (hw->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_RSS_PF) {
+		hw->rss_key = rte_zmalloc(NULL,
+					  hw->vf_res->rss_key_size, 0);
+		if (!hw->rss_key) {
+			PMD_INIT_LOG(ERR, "unable to allocate rss_key memory");
+			goto err_alloc;
+		}
+		hw->rss_lut = rte_zmalloc("rss_lut",
+					  hw->vf_res->rss_lut_size, 0);
+		if (!hw->rss_lut) {
+			PMD_INIT_LOG(ERR, "unable to allocate rss_lut memory");
+			goto err_rss;
+		}
+	}
+
+	if (hw->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC) {
+		if (ice_dcf_get_supported_rxdid(hw) != 0) {
+			PMD_INIT_LOG(ERR, "failed to do get supported rxdid");
+			goto err_rss;
+		}
+	}
+
 	hw->eth_dev = eth_dev;
 	rte_intr_callback_register(&pci_dev->intr_handle,
 				   ice_dcf_dev_interrupt_handler, hw);
@@ -628,6 +675,9 @@ ice_dcf_init_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw)
 
 	return 0;
 
+err_rss:
+	rte_free(hw->rss_key);
+	rte_free(hw->rss_lut);
 err_alloc:
 	rte_free(hw->vf_res);
 err_api:
@@ -655,4 +705,6 @@ ice_dcf_uninit_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw)
 	rte_free(hw->arq_buf);
 	rte_free(hw->vf_vsi_map);
 	rte_free(hw->vf_res);
+	rte_free(hw->rss_lut);
+	rte_free(hw->rss_key);
 }
diff --git a/drivers/net/ice/ice_dcf.h b/drivers/net/ice/ice_dcf.h
index d2e447b48..152266e3c 100644
--- a/drivers/net/ice/ice_dcf.h
+++ b/drivers/net/ice/ice_dcf.h
@@ -50,6 +50,9 @@ struct ice_dcf_hw {
 	uint16_t vsi_id;
 
 	struct rte_eth_dev *eth_dev;
+	uint8_t *rss_lut;
+	uint8_t *rss_key;
+	uint64_t supported_rxdid;
 };
 
 int ice_dcf_execute_virtchnl_cmd(struct ice_dcf_hw *hw,
-- 
2.17.1


^ permalink raw reply	[flat|nested] 65+ messages in thread

* [dpdk-dev] [PATCH v1 02/12] net/ice: complete device info get in DCF
  2020-06-05 20:17 [dpdk-dev] [PATCH v1 00/12] enable DCF datapath configuration Ting Xu
  2020-06-05 20:17 ` [dpdk-dev] [PATCH v1 01/12] net/ice: init RSS and supported RXDID in DCF Ting Xu
@ 2020-06-05 20:17 ` Ting Xu
  2020-06-05 20:17 ` [dpdk-dev] [PATCH v1 03/12] net/ice: complete dev configure " Ting Xu
                   ` (12 subsequent siblings)
  14 siblings, 0 replies; 65+ messages in thread
From: Ting Xu @ 2020-06-05 20:17 UTC (permalink / raw)
  To: dev; +Cc: qi.z.zhang, qiming.yang, john.mcnamara, marko.kovacevic

From: Qi Zhang <qi.z.zhang@intel.com>

Add support to get complete device information for DCF, including
Rx/Tx offload capabilities and default configuration.

Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
---
 drivers/net/ice/ice_dcf_ethdev.c | 72 ++++++++++++++++++++++++++++++--
 1 file changed, 69 insertions(+), 3 deletions(-)

diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index e5ba1a61f..7f24ef81a 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -24,6 +24,7 @@
 
 #include "ice_generic_flow.h"
 #include "ice_dcf_ethdev.h"
+#include "ice_rxtx.h"
 
 static uint16_t
 ice_dcf_recv_pkts(__rte_unused void *rx_queue,
@@ -66,11 +67,76 @@ ice_dcf_dev_info_get(struct rte_eth_dev *dev,
 		     struct rte_eth_dev_info *dev_info)
 {
 	struct ice_dcf_adapter *adapter = dev->data->dev_private;
+	struct ice_dcf_hw *hw = &adapter->real_hw;
 
 	dev_info->max_mac_addrs = 1;
-	dev_info->max_rx_pktlen = (uint32_t)-1;
-	dev_info->max_rx_queues = RTE_DIM(adapter->rxqs);
-	dev_info->max_tx_queues = RTE_DIM(adapter->txqs);
+	dev_info->max_rx_queues = hw->vsi_res->num_queue_pairs;
+	dev_info->max_tx_queues = hw->vsi_res->num_queue_pairs;
+	dev_info->min_rx_bufsize = ICE_BUF_SIZE_MIN;
+	dev_info->max_rx_pktlen = ICE_FRAME_SIZE_MAX;
+	dev_info->hash_key_size = hw->vf_res->rss_key_size;
+	dev_info->reta_size = hw->vf_res->rss_lut_size;
+	dev_info->flow_type_rss_offloads = ICE_RSS_OFFLOAD_ALL;
+
+	dev_info->rx_offload_capa =
+		DEV_RX_OFFLOAD_VLAN_STRIP |
+		DEV_RX_OFFLOAD_QINQ_STRIP |
+		DEV_RX_OFFLOAD_IPV4_CKSUM |
+		DEV_RX_OFFLOAD_UDP_CKSUM |
+		DEV_RX_OFFLOAD_TCP_CKSUM |
+		DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+		DEV_RX_OFFLOAD_SCATTER |
+		DEV_RX_OFFLOAD_JUMBO_FRAME |
+		DEV_RX_OFFLOAD_VLAN_FILTER |
+		DEV_RX_OFFLOAD_RSS_HASH;
+	dev_info->tx_offload_capa =
+		DEV_TX_OFFLOAD_VLAN_INSERT |
+		DEV_TX_OFFLOAD_QINQ_INSERT |
+		DEV_TX_OFFLOAD_IPV4_CKSUM |
+		DEV_TX_OFFLOAD_UDP_CKSUM |
+		DEV_TX_OFFLOAD_TCP_CKSUM |
+		DEV_TX_OFFLOAD_SCTP_CKSUM |
+		DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+		DEV_TX_OFFLOAD_TCP_TSO |
+		DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
+		DEV_TX_OFFLOAD_GRE_TNL_TSO |
+		DEV_TX_OFFLOAD_IPIP_TNL_TSO |
+		DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
+		DEV_TX_OFFLOAD_MULTI_SEGS;
+
+	dev_info->default_rxconf = (struct rte_eth_rxconf) {
+		.rx_thresh = {
+			.pthresh = ICE_DEFAULT_RX_PTHRESH,
+			.hthresh = ICE_DEFAULT_RX_HTHRESH,
+			.wthresh = ICE_DEFAULT_RX_WTHRESH,
+		},
+		.rx_free_thresh = ICE_DEFAULT_RX_FREE_THRESH,
+		.rx_drop_en = 0,
+		.offloads = 0,
+	};
+
+	dev_info->default_txconf = (struct rte_eth_txconf) {
+		.tx_thresh = {
+			.pthresh = ICE_DEFAULT_TX_PTHRESH,
+			.hthresh = ICE_DEFAULT_TX_HTHRESH,
+			.wthresh = ICE_DEFAULT_TX_WTHRESH,
+		},
+		.tx_free_thresh = ICE_DEFAULT_TX_FREE_THRESH,
+		.tx_rs_thresh = ICE_DEFAULT_TX_RSBIT_THRESH,
+		.offloads = 0,
+	};
+
+	dev_info->rx_desc_lim = (struct rte_eth_desc_lim) {
+		.nb_max = ICE_MAX_RING_DESC,
+		.nb_min = ICE_MIN_RING_DESC,
+		.nb_align = ICE_ALIGN_RING_DESC,
+	};
+
+	dev_info->tx_desc_lim = (struct rte_eth_desc_lim) {
+		.nb_max = ICE_MAX_RING_DESC,
+		.nb_min = ICE_MIN_RING_DESC,
+		.nb_align = ICE_ALIGN_RING_DESC,
+	};
 
 	return 0;
 }
-- 
2.17.1


^ permalink raw reply	[flat|nested] 65+ messages in thread

* [dpdk-dev] [PATCH v1 03/12] net/ice: complete dev configure in DCF
  2020-06-05 20:17 [dpdk-dev] [PATCH v1 00/12] enable DCF datapath configuration Ting Xu
  2020-06-05 20:17 ` [dpdk-dev] [PATCH v1 01/12] net/ice: init RSS and supported RXDID in DCF Ting Xu
  2020-06-05 20:17 ` [dpdk-dev] [PATCH v1 02/12] net/ice: complete device info get " Ting Xu
@ 2020-06-05 20:17 ` Ting Xu
  2020-06-05 14:56   ` Ye Xiaolong
  2020-06-05 20:17 ` [dpdk-dev] [PATCH v1 04/12] net/ice: complete queue setup " Ting Xu
                   ` (11 subsequent siblings)
  14 siblings, 1 reply; 65+ messages in thread
From: Ting Xu @ 2020-06-05 20:17 UTC (permalink / raw)
  To: dev; +Cc: qi.z.zhang, qiming.yang, john.mcnamara, marko.kovacevic

From: Qi Zhang <qi.z.zhang@intel.com>

Enable device configuration function in DCF.

Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
---
 drivers/net/ice/ice_dcf_ethdev.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index 7f24ef81a..e8bed1362 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -59,6 +59,15 @@ ice_dcf_dev_stop(struct rte_eth_dev *dev)
 static int
 ice_dcf_dev_configure(__rte_unused struct rte_eth_dev *dev)
 {
+	struct ice_dcf_adapter *dcf_ad = dev->data->dev_private;
+	struct ice_adapter *ad = &dcf_ad->parent;
+
+	ad->rx_bulk_alloc_allowed = true;
+	ad->tx_simple_allowed = true;
+
+	if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
+		dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+
 	return 0;
 }
 
-- 
2.17.1


^ permalink raw reply	[flat|nested] 65+ messages in thread

* [dpdk-dev] [PATCH v1 04/12] net/ice: complete queue setup in DCF
  2020-06-05 20:17 [dpdk-dev] [PATCH v1 00/12] enable DCF datapath configuration Ting Xu
                   ` (2 preceding siblings ...)
  2020-06-05 20:17 ` [dpdk-dev] [PATCH v1 03/12] net/ice: complete dev configure " Ting Xu
@ 2020-06-05 20:17 ` Ting Xu
  2020-06-05 20:17 ` [dpdk-dev] [PATCH v1 05/12] net/ice: add stop flag for device start / stop Ting Xu
                   ` (10 subsequent siblings)
  14 siblings, 0 replies; 65+ messages in thread
From: Ting Xu @ 2020-06-05 20:17 UTC (permalink / raw)
  To: dev; +Cc: qi.z.zhang, qiming.yang, john.mcnamara, marko.kovacevic

From: Qi Zhang <qi.z.zhang@intel.com>

Delete original DCF queue setup functions and use ice
queue setup and release functions instead.

Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
---
 drivers/net/ice/ice_dcf_ethdev.c | 42 +++-----------------------------
 drivers/net/ice/ice_dcf_ethdev.h |  3 ---
 drivers/net/ice/ice_dcf_parent.c |  7 ++++++
 3 files changed, 11 insertions(+), 41 deletions(-)

diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index e8bed1362..df906cd54 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -231,11 +231,6 @@ ice_dcf_dev_close(struct rte_eth_dev *dev)
 	ice_dcf_uninit_hw(dev, &adapter->real_hw);
 }
 
-static void
-ice_dcf_queue_release(__rte_unused void *q)
-{
-}
-
 static int
 ice_dcf_link_update(__rte_unused struct rte_eth_dev *dev,
 		    __rte_unused int wait_to_complete)
@@ -243,45 +238,16 @@ ice_dcf_link_update(__rte_unused struct rte_eth_dev *dev,
 	return 0;
 }
 
-static int
-ice_dcf_rx_queue_setup(struct rte_eth_dev *dev,
-		       uint16_t rx_queue_id,
-		       __rte_unused uint16_t nb_rx_desc,
-		       __rte_unused unsigned int socket_id,
-		       __rte_unused const struct rte_eth_rxconf *rx_conf,
-		       __rte_unused struct rte_mempool *mb_pool)
-{
-	struct ice_dcf_adapter *adapter = dev->data->dev_private;
-
-	dev->data->rx_queues[rx_queue_id] = &adapter->rxqs[rx_queue_id];
-
-	return 0;
-}
-
-static int
-ice_dcf_tx_queue_setup(struct rte_eth_dev *dev,
-		       uint16_t tx_queue_id,
-		       __rte_unused uint16_t nb_tx_desc,
-		       __rte_unused unsigned int socket_id,
-		       __rte_unused const struct rte_eth_txconf *tx_conf)
-{
-	struct ice_dcf_adapter *adapter = dev->data->dev_private;
-
-	dev->data->tx_queues[tx_queue_id] = &adapter->txqs[tx_queue_id];
-
-	return 0;
-}
-
 static const struct eth_dev_ops ice_dcf_eth_dev_ops = {
 	.dev_start               = ice_dcf_dev_start,
 	.dev_stop                = ice_dcf_dev_stop,
 	.dev_close               = ice_dcf_dev_close,
 	.dev_configure           = ice_dcf_dev_configure,
 	.dev_infos_get           = ice_dcf_dev_info_get,
-	.rx_queue_setup          = ice_dcf_rx_queue_setup,
-	.tx_queue_setup          = ice_dcf_tx_queue_setup,
-	.rx_queue_release        = ice_dcf_queue_release,
-	.tx_queue_release        = ice_dcf_queue_release,
+	.rx_queue_setup          = ice_rx_queue_setup,
+	.tx_queue_setup          = ice_tx_queue_setup,
+	.rx_queue_release        = ice_rx_queue_release,
+	.tx_queue_release        = ice_tx_queue_release,
 	.link_update             = ice_dcf_link_update,
 	.stats_get               = ice_dcf_stats_get,
 	.stats_reset             = ice_dcf_stats_reset,
diff --git a/drivers/net/ice/ice_dcf_ethdev.h b/drivers/net/ice/ice_dcf_ethdev.h
index e60e808d8..b54528bea 100644
--- a/drivers/net/ice/ice_dcf_ethdev.h
+++ b/drivers/net/ice/ice_dcf_ethdev.h
@@ -19,10 +19,7 @@ struct ice_dcf_queue {
 
 struct ice_dcf_adapter {
 	struct ice_adapter parent; /* Must be first */
-
 	struct ice_dcf_hw real_hw;
-	struct ice_dcf_queue rxqs[ICE_DCF_MAX_RINGS];
-	struct ice_dcf_queue txqs[ICE_DCF_MAX_RINGS];
 };
 
 void ice_dcf_handle_pf_event_msg(struct ice_dcf_hw *dcf_hw,
diff --git a/drivers/net/ice/ice_dcf_parent.c b/drivers/net/ice/ice_dcf_parent.c
index bdfc7d430..f9c7d9737 100644
--- a/drivers/net/ice/ice_dcf_parent.c
+++ b/drivers/net/ice/ice_dcf_parent.c
@@ -335,6 +335,13 @@ ice_dcf_init_parent_adapter(struct rte_eth_dev *eth_dev)
 	parent_adapter->eth_dev = eth_dev;
 	parent_adapter->pf.adapter = parent_adapter;
 	parent_adapter->pf.dev_data = eth_dev->data;
+	/* create a dummy main_vsi */
+	parent_adapter->pf.main_vsi =
+		rte_zmalloc(NULL, sizeof(struct ice_vsi), 0);
+	if (!parent_adapter->pf.main_vsi)
+		return -ENOMEM;
+	parent_adapter->pf.main_vsi->adapter = parent_adapter;
+
 	parent_hw->back = parent_adapter;
 	parent_hw->mac_type = ICE_MAC_GENERIC;
 	parent_hw->vendor_id = ICE_INTEL_VENDOR_ID;
-- 
2.17.1


^ permalink raw reply	[flat|nested] 65+ messages in thread

* [dpdk-dev] [PATCH v1 05/12] net/ice: add stop flag for device start / stop
  2020-06-05 20:17 [dpdk-dev] [PATCH v1 00/12] enable DCF datapath configuration Ting Xu
                   ` (3 preceding siblings ...)
  2020-06-05 20:17 ` [dpdk-dev] [PATCH v1 04/12] net/ice: complete queue setup " Ting Xu
@ 2020-06-05 20:17 ` Ting Xu
  2020-06-05 20:17 ` [dpdk-dev] [PATCH v1 06/12] net/ice: add Rx queue init in DCF Ting Xu
                   ` (9 subsequent siblings)
  14 siblings, 0 replies; 65+ messages in thread
From: Ting Xu @ 2020-06-05 20:17 UTC (permalink / raw)
  To: dev; +Cc: qi.z.zhang, qiming.yang, john.mcnamara, marko.kovacevic

From: Qi Zhang <qi.z.zhang@intel.com>

Add stop flag for DCF device start and stop.

Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
---
 drivers/net/ice/ice_dcf_ethdev.c | 12 ++++++++++++
 drivers/net/ice/ice_dcf_parent.c |  1 +
 2 files changed, 13 insertions(+)

diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index df906cd54..62ef71ddb 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -45,6 +45,11 @@ ice_dcf_xmit_pkts(__rte_unused void *tx_queue,
 static int
 ice_dcf_dev_start(struct rte_eth_dev *dev)
 {
+	struct ice_dcf_adapter *dcf_ad = dev->data->dev_private;
+	struct ice_adapter *ad = &dcf_ad->parent;
+
+	ad->pf.adapter_stopped = 0;
+
 	dev->data->dev_link.link_status = ETH_LINK_UP;
 
 	return 0;
@@ -53,7 +58,14 @@ ice_dcf_dev_start(struct rte_eth_dev *dev)
 static void
 ice_dcf_dev_stop(struct rte_eth_dev *dev)
 {
+	struct ice_dcf_adapter *dcf_ad = dev->data->dev_private;
+	struct ice_adapter *ad = &dcf_ad->parent;
+
+	if (ad->pf.adapter_stopped == 1)
+		return;
+
 	dev->data->dev_link.link_status = ETH_LINK_DOWN;
+	ad->pf.adapter_stopped = 1;
 }
 
 static int
diff --git a/drivers/net/ice/ice_dcf_parent.c b/drivers/net/ice/ice_dcf_parent.c
index f9c7d9737..8ad8bea1a 100644
--- a/drivers/net/ice/ice_dcf_parent.c
+++ b/drivers/net/ice/ice_dcf_parent.c
@@ -341,6 +341,7 @@ ice_dcf_init_parent_adapter(struct rte_eth_dev *eth_dev)
 	if (!parent_adapter->pf.main_vsi)
 		return -ENOMEM;
 	parent_adapter->pf.main_vsi->adapter = parent_adapter;
+	parent_adapter->pf.adapter_stopped = 1;
 
 	parent_hw->back = parent_adapter;
 	parent_hw->mac_type = ICE_MAC_GENERIC;
-- 
2.17.1


^ permalink raw reply	[flat|nested] 65+ messages in thread

* [dpdk-dev] [PATCH v1 06/12] net/ice: add Rx queue init in DCF
  2020-06-05 20:17 [dpdk-dev] [PATCH v1 00/12] enable DCF datapath configuration Ting Xu
                   ` (4 preceding siblings ...)
  2020-06-05 20:17 ` [dpdk-dev] [PATCH v1 05/12] net/ice: add stop flag for device start / stop Ting Xu
@ 2020-06-05 20:17 ` Ting Xu
  2020-06-05 20:17 ` [dpdk-dev] [PATCH v1 07/12] net/ice: init RSS during DCF start Ting Xu
                   ` (8 subsequent siblings)
  14 siblings, 0 replies; 65+ messages in thread
From: Ting Xu @ 2020-06-05 20:17 UTC (permalink / raw)
  To: dev; +Cc: qi.z.zhang, qiming.yang, john.mcnamara, marko.kovacevic

From: Qi Zhang <qi.z.zhang@intel.com>

Enable Rx queues initialization during device start in DCF.

Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
---
 drivers/net/ice/ice_dcf.h        |  1 +
 drivers/net/ice/ice_dcf_ethdev.c | 83 ++++++++++++++++++++++++++++++++
 2 files changed, 84 insertions(+)

diff --git a/drivers/net/ice/ice_dcf.h b/drivers/net/ice/ice_dcf.h
index 152266e3c..dcb2a0283 100644
--- a/drivers/net/ice/ice_dcf.h
+++ b/drivers/net/ice/ice_dcf.h
@@ -53,6 +53,7 @@ struct ice_dcf_hw {
 	uint8_t *rss_lut;
 	uint8_t *rss_key;
 	uint64_t supported_rxdid;
+	uint16_t num_queue_pairs;
 };
 
 int ice_dcf_execute_virtchnl_cmd(struct ice_dcf_hw *hw,
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index 62ef71ddb..1f7474dc3 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -42,14 +42,97 @@ ice_dcf_xmit_pkts(__rte_unused void *tx_queue,
 	return 0;
 }
 
+static int
+ice_dcf_init_rxq(struct rte_eth_dev *dev, struct ice_rx_queue *rxq)
+{
+	struct ice_dcf_adapter *dcf_ad = dev->data->dev_private;
+	struct rte_eth_dev_data *dev_data = dev->data;
+	struct iavf_hw *hw = &dcf_ad->real_hw.avf;
+	uint16_t buf_size, max_pkt_len, len;
+
+	buf_size = rte_pktmbuf_data_room_size(rxq->mp) - RTE_PKTMBUF_HEADROOM;
+
+	/* Calculate the maximum packet length allowed */
+	len = rxq->rx_buf_len * IAVF_MAX_CHAINED_RX_BUFFERS;
+	max_pkt_len = RTE_MIN(len, dev->data->dev_conf.rxmode.max_rx_pkt_len);
+
+	/* Check if the jumbo frame and maximum packet length are set
+	 * correctly.
+	 */
+	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+		if (max_pkt_len <= RTE_ETHER_MAX_LEN ||
+		    max_pkt_len > ICE_FRAME_SIZE_MAX) {
+			PMD_DRV_LOG(ERR, "maximum packet length must be "
+				    "larger than %u and smaller than %u, "
+				    "as jumbo frame is enabled",
+				    (uint32_t)RTE_ETHER_MAX_LEN,
+				    (uint32_t)ICE_FRAME_SIZE_MAX);
+			return -EINVAL;
+		}
+	} else {
+		if (max_pkt_len < RTE_ETHER_MIN_LEN ||
+		    max_pkt_len > RTE_ETHER_MAX_LEN) {
+			PMD_DRV_LOG(ERR, "maximum packet length must be "
+				    "larger than %u and smaller than %u, "
+				    "as jumbo frame is disabled",
+				    (uint32_t)RTE_ETHER_MIN_LEN,
+				    (uint32_t)RTE_ETHER_MAX_LEN);
+			return -EINVAL;
+		}
+	}
+
+	rxq->max_pkt_len = max_pkt_len;
+	if ((dev_data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_SCATTER) ||
+	    (rxq->max_pkt_len + 2 * ICE_VLAN_TAG_SIZE) > buf_size) {
+		dev_data->scattered_rx = 1;
+	}
+	rxq->qrx_tail = hw->hw_addr + IAVF_QRX_TAIL1(rxq->queue_id);
+	IAVF_PCI_REG_WRITE(rxq->qrx_tail, rxq->nb_rx_desc - 1);
+	IAVF_WRITE_FLUSH(hw);
+
+	return 0;
+}
+
+static int
+ice_dcf_init_rx_queues(struct rte_eth_dev *dev)
+{
+	struct ice_rx_queue **rxq =
+		(struct ice_rx_queue **)dev->data->rx_queues;
+	int i, ret;
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		if (!rxq[i] || !rxq[i]->q_set)
+			continue;
+		ret = ice_dcf_init_rxq(dev, rxq[i]);
+		if (ret)
+			return ret;
+	}
+
+	ice_set_rx_function(dev);
+	ice_set_tx_function(dev);
+
+	return 0;
+}
+
 static int
 ice_dcf_dev_start(struct rte_eth_dev *dev)
 {
 	struct ice_dcf_adapter *dcf_ad = dev->data->dev_private;
 	struct ice_adapter *ad = &dcf_ad->parent;
+	struct ice_dcf_hw *hw = &dcf_ad->real_hw;
+	int ret;
 
 	ad->pf.adapter_stopped = 0;
 
+	hw->num_queue_pairs = RTE_MAX(dev->data->nb_rx_queues,
+				      dev->data->nb_tx_queues);
+
+	ret = ice_dcf_init_rx_queues(dev);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "Fail to init queues");
+		return ret;
+	}
+
 	dev->data->dev_link.link_status = ETH_LINK_UP;
 
 	return 0;
-- 
2.17.1


^ permalink raw reply	[flat|nested] 65+ messages in thread

* [dpdk-dev] [PATCH v1 07/12] net/ice: init RSS during DCF start
  2020-06-05 20:17 [dpdk-dev] [PATCH v1 00/12] enable DCF datapath configuration Ting Xu
                   ` (5 preceding siblings ...)
  2020-06-05 20:17 ` [dpdk-dev] [PATCH v1 06/12] net/ice: add Rx queue init in DCF Ting Xu
@ 2020-06-05 20:17 ` Ting Xu
  2020-06-05 15:26   ` Ye Xiaolong
  2020-06-05 20:17 ` [dpdk-dev] [PATCH v1 08/12] net/ice: add queue config in DCF Ting Xu
                   ` (7 subsequent siblings)
  14 siblings, 1 reply; 65+ messages in thread
From: Ting Xu @ 2020-06-05 20:17 UTC (permalink / raw)
  To: dev; +Cc: qi.z.zhang, qiming.yang, john.mcnamara, marko.kovacevic

From: Qi Zhang <qi.z.zhang@intel.com>

Enable RSS initialization during DCF start. Add RSS LUT and
RSS key configuration functions.

Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
---
 drivers/net/ice/ice_dcf.c        | 123 +++++++++++++++++++++++++++++++
 drivers/net/ice/ice_dcf.h        |   1 +
 drivers/net/ice/ice_dcf_ethdev.c |  14 +++-
 3 files changed, 135 insertions(+), 3 deletions(-)

diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c
index 93fabd5f7..8d078163e 100644
--- a/drivers/net/ice/ice_dcf.c
+++ b/drivers/net/ice/ice_dcf.c
@@ -708,3 +708,126 @@ ice_dcf_uninit_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw)
 	rte_free(hw->rss_lut);
 	rte_free(hw->rss_key);
 }
+
+static int
+ice_dcf_configure_rss_key(struct ice_dcf_hw *hw)
+{
+	struct virtchnl_rss_key *rss_key;
+	struct dcf_virtchnl_cmd args;
+	int len, err;
+
+	len = sizeof(*rss_key) + hw->vf_res->rss_key_size - 1;
+	rss_key = rte_zmalloc("rss_key", len, 0);
+	if (!rss_key)
+		return -ENOMEM;
+
+	rss_key->vsi_id = hw->vsi_res->vsi_id;
+	rss_key->key_len = hw->vf_res->rss_key_size;
+	rte_memcpy(rss_key->key, hw->rss_key, hw->vf_res->rss_key_size);
+
+	args.v_op = VIRTCHNL_OP_CONFIG_RSS_KEY;
+	args.req_msglen = len;
+	args.req_msg = (uint8_t *)rss_key;
+	args.rsp_msglen = 0;
+	args.rsp_buflen = 0;
+	args.rsp_msgbuf = NULL;
+	args.pending = 0;
+
+	err = ice_dcf_execute_virtchnl_cmd(hw, &args);
+	if (err) {
+		PMD_INIT_LOG(ERR, "Failed to execute OP_CONFIG_RSS_KEY");
+		return err;
+	}
+
+	rte_free(rss_key);
+	return 0;
+}
+
+static int
+ice_dcf_configure_rss_lut(struct ice_dcf_hw *hw)
+{
+	struct virtchnl_rss_lut *rss_lut;
+	struct dcf_virtchnl_cmd args;
+	int len, err;
+
+	len = sizeof(*rss_lut) + hw->vf_res->rss_lut_size - 1;
+	rss_lut = rte_zmalloc("rss_lut", len, 0);
+	if (!rss_lut)
+		return -ENOMEM;
+
+	rss_lut->vsi_id = hw->vsi_res->vsi_id;
+	rss_lut->lut_entries = hw->vf_res->rss_lut_size;
+	rte_memcpy(rss_lut->lut, hw->rss_lut, hw->vf_res->rss_lut_size);
+
+	args.v_op = VIRTCHNL_OP_CONFIG_RSS_LUT;
+	args.req_msglen = len;
+	args.req_msg = (uint8_t *)rss_lut;
+	args.rsp_msglen = 0;
+	args.rsp_buflen = 0;
+	args.rsp_msgbuf = NULL;
+	args.pending = 0;
+
+	err = ice_dcf_execute_virtchnl_cmd(hw, &args);
+	if (err) {
+		PMD_INIT_LOG(ERR, "Failed to execute OP_CONFIG_RSS_LUT");
+		return err;
+	}
+
+	rte_free(rss_lut);
+	return 0;
+}
+
+int
+ice_dcf_init_rss(struct ice_dcf_hw *hw)
+{
+	struct rte_eth_dev *dev = hw->eth_dev;
+	struct rte_eth_rss_conf *rss_conf;
+	uint8_t i, j, nb_q;
+	int ret;
+
+	rss_conf = &dev->data->dev_conf.rx_adv_conf.rss_conf;
+	nb_q = dev->data->nb_rx_queues;
+
+	if (!(hw->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_RSS_PF)) {
+		PMD_DRV_LOG(DEBUG, "RSS is not supported");
+		return -ENOTSUP;
+	}
+	if (dev->data->dev_conf.rxmode.mq_mode != ETH_MQ_RX_RSS) {
+		PMD_DRV_LOG(WARNING, "RSS is enabled by PF by default");
+		/* set all lut items to default queue */
+		for (i = 0; i < hw->vf_res->rss_lut_size; i++)
+			hw->rss_lut[i] = 0;
+		ret = ice_dcf_configure_rss_lut(hw);
+		return ret;
+	}
+
+	/* In IAVF, RSS enablement is set by PF driver. It is not supported
+	 * to set based on rss_conf->rss_hf.
+	 */
+
+	/* configure RSS key */
+	if (!rss_conf->rss_key)
+		/* Calculate the default hash key */
+		for (i = 0; i <= hw->vf_res->rss_key_size; i++)
+			hw->rss_key[i] = (uint8_t)rte_rand();
+	else
+		rte_memcpy(hw->rss_key, rss_conf->rss_key,
+			   RTE_MIN(rss_conf->rss_key_len,
+				   hw->vf_res->rss_key_size));
+
+	/* init RSS LUT table */
+	for (i = 0, j = 0; i < hw->vf_res->rss_lut_size; i++, j++) {
+		if (j >= nb_q)
+			j = 0;
+		hw->rss_lut[i] = j;
+	}
+	/* send virtchnnl ops to configure rss*/
+	ret = ice_dcf_configure_rss_lut(hw);
+	if (ret)
+		return ret;
+	ret = ice_dcf_configure_rss_key(hw);
+	if (ret)
+		return ret;
+
+	return 0;
+}
diff --git a/drivers/net/ice/ice_dcf.h b/drivers/net/ice/ice_dcf.h
index dcb2a0283..eea4b286b 100644
--- a/drivers/net/ice/ice_dcf.h
+++ b/drivers/net/ice/ice_dcf.h
@@ -63,5 +63,6 @@ int ice_dcf_send_aq_cmd(void *dcf_hw, struct ice_aq_desc *desc,
 int ice_dcf_handle_vsi_update_event(struct ice_dcf_hw *hw);
 int ice_dcf_init_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw);
 void ice_dcf_uninit_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw);
+int ice_dcf_init_rss(struct ice_dcf_hw *hw);
 
 #endif /* _ICE_DCF_H_ */
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index 1f7474dc3..5fbf70803 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -51,9 +51,9 @@ ice_dcf_init_rxq(struct rte_eth_dev *dev, struct ice_rx_queue *rxq)
 	uint16_t buf_size, max_pkt_len, len;
 
 	buf_size = rte_pktmbuf_data_room_size(rxq->mp) - RTE_PKTMBUF_HEADROOM;
-
-	/* Calculate the maximum packet length allowed */
-	len = rxq->rx_buf_len * IAVF_MAX_CHAINED_RX_BUFFERS;
+	rxq->rx_hdr_len = 0;
+	rxq->rx_buf_len = RTE_ALIGN(buf_size, (1 << ICE_RLAN_CTX_DBUF_S));
+	len = ICE_SUPPORT_CHAIN_NUM * rxq->rx_buf_len;
 	max_pkt_len = RTE_MIN(len, dev->data->dev_conf.rxmode.max_rx_pkt_len);
 
 	/* Check if the jumbo frame and maximum packet length are set
@@ -133,6 +133,14 @@ ice_dcf_dev_start(struct rte_eth_dev *dev)
 		return ret;
 	}
 
+	if (hw->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_RSS_PF) {
+		ret = ice_dcf_init_rss(hw);
+		if (ret) {
+			PMD_DRV_LOG(ERR, "Failed to configure RSS");
+			return ret;
+		}
+	}
+
 	dev->data->dev_link.link_status = ETH_LINK_UP;
 
 	return 0;
-- 
2.17.1


^ permalink raw reply	[flat|nested] 65+ messages in thread

* [dpdk-dev] [PATCH v1 08/12] net/ice: add queue config in DCF
  2020-06-05 20:17 [dpdk-dev] [PATCH v1 00/12] enable DCF datapath configuration Ting Xu
                   ` (6 preceding siblings ...)
  2020-06-05 20:17 ` [dpdk-dev] [PATCH v1 07/12] net/ice: init RSS during DCF start Ting Xu
@ 2020-06-05 20:17 ` Ting Xu
  2020-06-07 10:11   ` Ye Xiaolong
  2020-06-05 20:17 ` [dpdk-dev] [PATCH v1 09/12] net/ice: add queue start and stop for DCF Ting Xu
                   ` (6 subsequent siblings)
  14 siblings, 1 reply; 65+ messages in thread
From: Ting Xu @ 2020-06-05 20:17 UTC (permalink / raw)
  To: dev; +Cc: qi.z.zhang, qiming.yang, john.mcnamara, marko.kovacevic

From: Qi Zhang <qi.z.zhang@intel.com>

Add queues and Rx queue irqs configuration during device start
in DCF. The setup is sent to PF via virtchnl.

Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
---
 drivers/net/ice/ice_dcf.c        | 109 +++++++++++++++++++++++++++
 drivers/net/ice/ice_dcf.h        |   6 ++
 drivers/net/ice/ice_dcf_ethdev.c | 125 +++++++++++++++++++++++++++++++
 3 files changed, 240 insertions(+)

diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c
index 8d078163e..d864ae894 100644
--- a/drivers/net/ice/ice_dcf.c
+++ b/drivers/net/ice/ice_dcf.c
@@ -24,6 +24,7 @@
 #include <rte_dev.h>
 
 #include "ice_dcf.h"
+#include "ice_rxtx.h"
 
 #define ICE_DCF_AQ_LEN     32
 #define ICE_DCF_AQ_BUF_SZ  4096
@@ -831,3 +832,111 @@ ice_dcf_init_rss(struct ice_dcf_hw *hw)
 
 	return 0;
 }
+
+#define IAVF_RXDID_LEGACY_1 1
+#define IAVF_RXDID_COMMS_GENERIC 16
+
+int
+ice_dcf_configure_queues(struct ice_dcf_hw *hw)
+{
+	struct ice_rx_queue **rxq =
+		(struct ice_rx_queue **)hw->eth_dev->data->rx_queues;
+	struct ice_tx_queue **txq =
+		(struct ice_tx_queue **)hw->eth_dev->data->tx_queues;
+	struct virtchnl_vsi_queue_config_info *vc_config;
+	struct virtchnl_queue_pair_info *vc_qp;
+	struct dcf_virtchnl_cmd args;
+	uint16_t i, size;
+	int err;
+
+	size = sizeof(*vc_config) +
+	       sizeof(vc_config->qpair[0]) * hw->num_queue_pairs;
+	vc_config = rte_zmalloc("cfg_queue", size, 0);
+	if (!vc_config)
+		return -ENOMEM;
+
+	vc_config->vsi_id = hw->vsi_res->vsi_id;
+	vc_config->num_queue_pairs = hw->num_queue_pairs;
+
+	for (i = 0, vc_qp = vc_config->qpair;
+	     i < hw->num_queue_pairs;
+	     i++, vc_qp++) {
+		vc_qp->txq.vsi_id = hw->vsi_res->vsi_id;
+		vc_qp->txq.queue_id = i;
+		if (i < hw->eth_dev->data->nb_tx_queues) {
+			vc_qp->txq.ring_len = txq[i]->nb_tx_desc;
+			vc_qp->txq.dma_ring_addr = txq[i]->tx_ring_dma;
+		}
+		vc_qp->rxq.vsi_id = hw->vsi_res->vsi_id;
+		vc_qp->rxq.queue_id = i;
+		vc_qp->rxq.max_pkt_size = rxq[i]->max_pkt_len;
+		if (i < hw->eth_dev->data->nb_rx_queues) {
+			vc_qp->rxq.ring_len = rxq[i]->nb_rx_desc;
+			vc_qp->rxq.dma_ring_addr = rxq[i]->rx_ring_dma;
+			vc_qp->rxq.databuffer_size = rxq[i]->rx_buf_len;
+
+			if (hw->vf_res->vf_cap_flags &
+			    VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC &&
+			    hw->supported_rxdid &
+			    BIT(IAVF_RXDID_COMMS_GENERIC)) {
+				vc_qp->rxq.rxdid = IAVF_RXDID_COMMS_GENERIC;
+				PMD_DRV_LOG(NOTICE, "request RXDID == %d in "
+					    "Queue[%d]", vc_qp->rxq.rxdid, i);
+			} else {
+				PMD_DRV_LOG(ERR, "RXDID 16 is not supported");
+				return -EINVAL;
+			}
+		}
+	}
+
+	memset(&args, 0, sizeof(args));
+	args.v_op = VIRTCHNL_OP_CONFIG_VSI_QUEUES;
+	args.req_msg = (uint8_t *)vc_config;
+	args.req_msglen = size;
+
+	err = ice_dcf_execute_virtchnl_cmd(hw, &args);
+	if (err)
+		PMD_DRV_LOG(ERR, "Failed to execute command of"
+			    " VIRTCHNL_OP_CONFIG_VSI_QUEUES");
+
+	rte_free(vc_config);
+	return err;
+}
+
+int
+ice_dcf_config_irq_map(struct ice_dcf_hw *hw)
+{
+	struct virtchnl_irq_map_info *map_info;
+	struct virtchnl_vector_map *vecmap;
+	struct dcf_virtchnl_cmd args;
+	int len, i, err;
+
+	len = sizeof(struct virtchnl_irq_map_info) +
+	      sizeof(struct virtchnl_vector_map) * hw->nb_msix;
+
+	map_info = rte_zmalloc("map_info", len, 0);
+	if (!map_info)
+		return -ENOMEM;
+
+	map_info->num_vectors = hw->nb_msix;
+	for (i = 0; i < hw->nb_msix; i++) {
+		vecmap = &map_info->vecmap[i];
+		vecmap->vsi_id = hw->vsi_res->vsi_id;
+		vecmap->rxitr_idx = 0;
+		vecmap->vector_id = hw->msix_base + i;
+		vecmap->txq_map = 0;
+		vecmap->rxq_map = hw->rxq_map[hw->msix_base + i];
+	}
+
+	memset(&args, 0, sizeof(args));
+	args.v_op = VIRTCHNL_OP_CONFIG_IRQ_MAP;
+	args.req_msg = (u8 *)map_info;
+	args.req_msglen = len;
+
+	err = ice_dcf_execute_virtchnl_cmd(hw, &args);
+	if (err)
+		PMD_DRV_LOG(ERR, "fail to execute command OP_CONFIG_IRQ_MAP");
+
+	rte_free(map_info);
+	return err;
+}
diff --git a/drivers/net/ice/ice_dcf.h b/drivers/net/ice/ice_dcf.h
index eea4b286b..9470d1df7 100644
--- a/drivers/net/ice/ice_dcf.h
+++ b/drivers/net/ice/ice_dcf.h
@@ -54,6 +54,10 @@ struct ice_dcf_hw {
 	uint8_t *rss_key;
 	uint64_t supported_rxdid;
 	uint16_t num_queue_pairs;
+
+	uint16_t msix_base;
+	uint16_t nb_msix;
+	uint16_t rxq_map[16];
 };
 
 int ice_dcf_execute_virtchnl_cmd(struct ice_dcf_hw *hw,
@@ -64,5 +68,7 @@ int ice_dcf_handle_vsi_update_event(struct ice_dcf_hw *hw);
 int ice_dcf_init_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw);
 void ice_dcf_uninit_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw);
 int ice_dcf_init_rss(struct ice_dcf_hw *hw);
+int ice_dcf_configure_queues(struct ice_dcf_hw *hw);
+int ice_dcf_config_irq_map(struct ice_dcf_hw *hw);
 
 #endif /* _ICE_DCF_H_ */
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index 5fbf70803..9605fb8ed 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -114,10 +114,123 @@ ice_dcf_init_rx_queues(struct rte_eth_dev *dev)
 	return 0;
 }
 
+#define IAVF_MISC_VEC_ID                RTE_INTR_VEC_ZERO_OFFSET
+#define IAVF_RX_VEC_START               RTE_INTR_VEC_RXTX_OFFSET
+
+#define IAVF_ITR_INDEX_DEFAULT          0
+#define IAVF_QUEUE_ITR_INTERVAL_DEFAULT 32 /* 32 us */
+#define IAVF_QUEUE_ITR_INTERVAL_MAX     8160 /* 8160 us */
+
+static inline uint16_t
+iavf_calc_itr_interval(int16_t interval)
+{
+	if (interval < 0 || interval > IAVF_QUEUE_ITR_INTERVAL_MAX)
+		interval = IAVF_QUEUE_ITR_INTERVAL_DEFAULT;
+
+	/* Convert to hardware count, as writing each 1 represents 2 us */
+	return interval / 2;
+}
+
+static int ice_dcf_config_rx_queues_irqs(struct rte_eth_dev *dev,
+				     struct rte_intr_handle *intr_handle)
+{
+	struct ice_dcf_adapter *adapter = dev->data->dev_private;
+	struct ice_dcf_hw *hw = &adapter->real_hw;
+	uint16_t interval, i;
+	int vec;
+
+	if (rte_intr_cap_multiple(intr_handle) &&
+	    dev->data->dev_conf.intr_conf.rxq) {
+		if (rte_intr_efd_enable(intr_handle, dev->data->nb_rx_queues))
+			return -1;
+	}
+
+	if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
+		intr_handle->intr_vec =
+			rte_zmalloc("intr_vec",
+				    dev->data->nb_rx_queues * sizeof(int), 0);
+		if (!intr_handle->intr_vec) {
+			PMD_DRV_LOG(ERR, "Failed to allocate %d rx intr_vec",
+				    dev->data->nb_rx_queues);
+			return -1;
+		}
+	}
+
+	if (!dev->data->dev_conf.intr_conf.rxq ||
+	    !rte_intr_dp_is_en(intr_handle)) {
+		/* Rx interrupt disabled, Map interrupt only for writeback */
+		hw->nb_msix = 1;
+		if (hw->vf_res->vf_cap_flags &
+		    VIRTCHNL_VF_OFFLOAD_WB_ON_ITR) {
+			/* If WB_ON_ITR supports, enable it */
+			hw->msix_base = IAVF_RX_VEC_START;
+			IAVF_WRITE_REG(&hw->avf,
+				       IAVF_VFINT_DYN_CTLN1(hw->msix_base - 1),
+				       IAVF_VFINT_DYN_CTLN1_ITR_INDX_MASK |
+				       IAVF_VFINT_DYN_CTLN1_WB_ON_ITR_MASK);
+		} else {
+			/* If no WB_ON_ITR offload flags, need to set
+			 * interrupt for descriptor write back.
+			 */
+			hw->msix_base = IAVF_MISC_VEC_ID;
+
+			/* set ITR to max */
+			interval =
+			iavf_calc_itr_interval(IAVF_QUEUE_ITR_INTERVAL_MAX);
+			IAVF_WRITE_REG(&hw->avf, IAVF_VFINT_DYN_CTL01,
+				       IAVF_VFINT_DYN_CTL01_INTENA_MASK |
+				       (IAVF_ITR_INDEX_DEFAULT <<
+					IAVF_VFINT_DYN_CTL01_ITR_INDX_SHIFT) |
+				       (interval <<
+					IAVF_VFINT_DYN_CTL01_INTERVAL_SHIFT));
+		}
+		IAVF_WRITE_FLUSH(&hw->avf);
+		/* map all queues to the same interrupt */
+		for (i = 0; i < dev->data->nb_rx_queues; i++)
+			hw->rxq_map[hw->msix_base] |= 1 << i;
+	} else {
+		if (!rte_intr_allow_others(intr_handle)) {
+			hw->nb_msix = 1;
+			hw->msix_base = IAVF_MISC_VEC_ID;
+			for (i = 0; i < dev->data->nb_rx_queues; i++) {
+				hw->rxq_map[hw->msix_base] |= 1 << i;
+				intr_handle->intr_vec[i] = IAVF_MISC_VEC_ID;
+			}
+			PMD_DRV_LOG(DEBUG,
+				    "vector %u are mapping to all Rx queues",
+				    hw->msix_base);
+		} else {
+			/* If Rx interrupt is reuquired, and we can use
+			 * multi interrupts, then the vec is from 1
+			 */
+			hw->nb_msix = RTE_MIN(hw->vf_res->max_vectors,
+					      intr_handle->nb_efd);
+			hw->msix_base = IAVF_MISC_VEC_ID;
+			vec = IAVF_MISC_VEC_ID;
+			for (i = 0; i < dev->data->nb_rx_queues; i++) {
+				hw->rxq_map[vec] |= 1 << i;
+				intr_handle->intr_vec[i] = vec++;
+				if (vec >= hw->nb_msix)
+					vec = IAVF_RX_VEC_START;
+			}
+			PMD_DRV_LOG(DEBUG,
+				    "%u vectors are mapping to %u Rx queues",
+				    hw->nb_msix, dev->data->nb_rx_queues);
+		}
+	}
+
+	if (ice_dcf_config_irq_map(hw)) {
+		PMD_DRV_LOG(ERR, "config interrupt mapping failed");
+		return -1;
+	}
+	return 0;
+}
+
 static int
 ice_dcf_dev_start(struct rte_eth_dev *dev)
 {
 	struct ice_dcf_adapter *dcf_ad = dev->data->dev_private;
+	struct rte_intr_handle *intr_handle = dev->intr_handle;
 	struct ice_adapter *ad = &dcf_ad->parent;
 	struct ice_dcf_hw *hw = &dcf_ad->real_hw;
 	int ret;
@@ -141,6 +254,18 @@ ice_dcf_dev_start(struct rte_eth_dev *dev)
 		}
 	}
 
+	ret = ice_dcf_configure_queues(hw);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "Fail to config queues");
+		return ret;
+	}
+
+	ret = ice_dcf_config_rx_queues_irqs(dev, intr_handle);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "Fail to config rx queues' irqs");
+		return ret;
+	}
+
 	dev->data->dev_link.link_status = ETH_LINK_UP;
 
 	return 0;
-- 
2.17.1


^ permalink raw reply	[flat|nested] 65+ messages in thread

* [dpdk-dev] [PATCH v1 09/12] net/ice: add queue start and stop for DCF
  2020-06-05 20:17 [dpdk-dev] [PATCH v1 00/12] enable DCF datapath configuration Ting Xu
                   ` (7 preceding siblings ...)
  2020-06-05 20:17 ` [dpdk-dev] [PATCH v1 08/12] net/ice: add queue config in DCF Ting Xu
@ 2020-06-05 20:17 ` Ting Xu
  2020-06-07 12:28   ` Ye Xiaolong
  2020-06-08  7:35   ` Yang, Qiming
  2020-06-05 20:17 ` [dpdk-dev] [PATCH v1 10/12] net/ice: enable stats " Ting Xu
                   ` (5 subsequent siblings)
  14 siblings, 2 replies; 65+ messages in thread
From: Ting Xu @ 2020-06-05 20:17 UTC (permalink / raw)
  To: dev; +Cc: qi.z.zhang, qiming.yang, john.mcnamara, marko.kovacevic

From: Qi Zhang <qi.z.zhang@intel.com>

Add queue start and stop in DCF. Support queue enable and disable
through virtual channel. Add support for Rx queue mbufs allocation
and queue reset.

Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
---
 drivers/net/ice/ice_dcf.c        |  57 ++++++
 drivers/net/ice/ice_dcf.h        |   3 +-
 drivers/net/ice/ice_dcf_ethdev.c | 309 +++++++++++++++++++++++++++++++
 3 files changed, 368 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c
index d864ae894..56b8c0d25 100644
--- a/drivers/net/ice/ice_dcf.c
+++ b/drivers/net/ice/ice_dcf.c
@@ -940,3 +940,60 @@ ice_dcf_config_irq_map(struct ice_dcf_hw *hw)
 	rte_free(map_info);
 	return err;
 }
+
+int
+ice_dcf_switch_queue(struct ice_dcf_hw *hw, uint16_t qid, bool rx, bool on)
+{
+	struct virtchnl_queue_select queue_select;
+	struct dcf_virtchnl_cmd args;
+	int err;
+
+	memset(&queue_select, 0, sizeof(queue_select));
+	queue_select.vsi_id = hw->vsi_res->vsi_id;
+	if (rx)
+		queue_select.rx_queues |= 1 << qid;
+	else
+		queue_select.tx_queues |= 1 << qid;
+
+	memset(&args, 0, sizeof(args));
+	if (on)
+		args.v_op = VIRTCHNL_OP_ENABLE_QUEUES;
+	else
+		args.v_op = VIRTCHNL_OP_DISABLE_QUEUES;
+
+	args.req_msg = (u8 *)&queue_select;
+	args.req_msglen = sizeof(queue_select);
+
+	err = ice_dcf_execute_virtchnl_cmd(hw, &args);
+	if (err)
+		PMD_DRV_LOG(ERR, "Failed to execute command of %s",
+			    on ? "OP_ENABLE_QUEUES" : "OP_DISABLE_QUEUES");
+	return err;
+}
+
+int
+ice_dcf_disable_queues(struct ice_dcf_hw *hw)
+{
+	struct virtchnl_queue_select queue_select;
+	struct dcf_virtchnl_cmd args;
+	int err;
+
+	memset(&queue_select, 0, sizeof(queue_select));
+	queue_select.vsi_id = hw->vsi_res->vsi_id;
+
+	queue_select.rx_queues = BIT(hw->eth_dev->data->nb_rx_queues) - 1;
+	queue_select.tx_queues = BIT(hw->eth_dev->data->nb_tx_queues) - 1;
+
+	memset(&args, 0, sizeof(args));
+	args.v_op = VIRTCHNL_OP_DISABLE_QUEUES;
+	args.req_msg = (u8 *)&queue_select;
+	args.req_msglen = sizeof(queue_select);
+
+	err = ice_dcf_execute_virtchnl_cmd(hw, &args);
+	if (err) {
+		PMD_DRV_LOG(ERR,
+			    "Failed to execute command of OP_DISABLE_QUEUES");
+		return err;
+	}
+	return 0;
+}
diff --git a/drivers/net/ice/ice_dcf.h b/drivers/net/ice/ice_dcf.h
index 9470d1df7..68e1661c0 100644
--- a/drivers/net/ice/ice_dcf.h
+++ b/drivers/net/ice/ice_dcf.h
@@ -70,5 +70,6 @@ void ice_dcf_uninit_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw);
 int ice_dcf_init_rss(struct ice_dcf_hw *hw);
 int ice_dcf_configure_queues(struct ice_dcf_hw *hw);
 int ice_dcf_config_irq_map(struct ice_dcf_hw *hw);
-
+int ice_dcf_switch_queue(struct ice_dcf_hw *hw, uint16_t qid, bool rx, bool on);
+int ice_dcf_disable_queues(struct ice_dcf_hw *hw);
 #endif /* _ICE_DCF_H_ */
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index 9605fb8ed..59113fc4b 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -226,6 +226,259 @@ static int ice_dcf_config_rx_queues_irqs(struct rte_eth_dev *dev,
 	return 0;
 }
 
+static int
+alloc_rxq_mbufs(struct ice_rx_queue *rxq)
+{
+	volatile union ice_32b_rx_flex_desc *rxd;
+	struct rte_mbuf *mbuf = NULL;
+	uint64_t dma_addr;
+	uint16_t i;
+
+	for (i = 0; i < rxq->nb_rx_desc; i++) {
+		mbuf = rte_mbuf_raw_alloc(rxq->mp);
+		if (unlikely(!mbuf)) {
+			PMD_DRV_LOG(ERR, "Failed to allocate mbuf for RX");
+			return -ENOMEM;
+		}
+
+		rte_mbuf_refcnt_set(mbuf, 1);
+		mbuf->next = NULL;
+		mbuf->data_off = RTE_PKTMBUF_HEADROOM;
+		mbuf->nb_segs = 1;
+		mbuf->port = rxq->port_id;
+
+		dma_addr =
+			rte_cpu_to_le_64(rte_mbuf_data_iova_default(mbuf));
+
+		rxd = &rxq->rx_ring[i];
+		rxd->read.pkt_addr = dma_addr;
+		rxd->read.hdr_addr = 0;
+		rxd->read.rsvd1 = 0;
+		rxd->read.rsvd2 = 0;
+
+		rxq->sw_ring[i].mbuf = (void *)mbuf;
+	}
+
+	return 0;
+}
+
+static int
+ice_dcf_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+{
+	struct ice_dcf_adapter *ad = dev->data->dev_private;
+	struct iavf_hw *hw = &ad->real_hw.avf;
+	struct ice_rx_queue *rxq;
+	int err = 0;
+
+	if (rx_queue_id >= dev->data->nb_rx_queues)
+		return -EINVAL;
+
+	rxq = dev->data->rx_queues[rx_queue_id];
+
+	err = alloc_rxq_mbufs(rxq);
+	if (err) {
+		PMD_DRV_LOG(ERR, "Failed to allocate RX queue mbuf");
+		return err;
+	}
+
+	rte_wmb();
+
+	/* Init the RX tail register. */
+	IAVF_PCI_REG_WRITE(rxq->qrx_tail, rxq->nb_rx_desc - 1);
+	IAVF_WRITE_FLUSH(hw);
+
+	/* Ready to switch the queue on */
+	err = ice_dcf_switch_queue(&ad->real_hw, rx_queue_id, true, true);
+	if (err)
+		PMD_DRV_LOG(ERR, "Failed to switch RX queue %u on",
+			    rx_queue_id);
+	else
+		dev->data->rx_queue_state[rx_queue_id] =
+			RTE_ETH_QUEUE_STATE_STARTED;
+
+	return err;
+}
+
+static inline void
+reset_rx_queue(struct ice_rx_queue *rxq)
+{
+	uint16_t len;
+	uint32_t i;
+
+	if (!rxq)
+		return;
+
+	len = rxq->nb_rx_desc + ICE_RX_MAX_BURST;
+
+	for (i = 0; i < len * sizeof(union ice_rx_flex_desc); i++)
+		((volatile char *)rxq->rx_ring)[i] = 0;
+
+	memset(&rxq->fake_mbuf, 0x0, sizeof(rxq->fake_mbuf));
+
+	for (i = 0; i < ICE_RX_MAX_BURST; i++)
+		rxq->sw_ring[rxq->nb_rx_desc + i].mbuf = &rxq->fake_mbuf;
+
+	/* for rx bulk */
+	rxq->rx_nb_avail = 0;
+	rxq->rx_next_avail = 0;
+	rxq->rx_free_trigger = (uint16_t)(rxq->rx_free_thresh - 1);
+
+	rxq->rx_tail = 0;
+	rxq->nb_rx_hold = 0;
+	rxq->pkt_first_seg = NULL;
+	rxq->pkt_last_seg = NULL;
+}
+
+static inline void
+reset_tx_queue(struct ice_tx_queue *txq)
+{
+	struct ice_tx_entry *txe;
+	uint32_t i, size;
+	uint16_t prev;
+
+	if (!txq) {
+		PMD_DRV_LOG(DEBUG, "Pointer to txq is NULL");
+		return;
+	}
+
+	txe = txq->sw_ring;
+	size = sizeof(struct ice_tx_desc) * txq->nb_tx_desc;
+	for (i = 0; i < size; i++)
+		((volatile char *)txq->tx_ring)[i] = 0;
+
+	prev = (uint16_t)(txq->nb_tx_desc - 1);
+	for (i = 0; i < txq->nb_tx_desc; i++) {
+		txq->tx_ring[i].cmd_type_offset_bsz =
+			rte_cpu_to_le_64(IAVF_TX_DESC_DTYPE_DESC_DONE);
+		txe[i].mbuf =  NULL;
+		txe[i].last_id = i;
+		txe[prev].next_id = i;
+		prev = i;
+	}
+
+	txq->tx_tail = 0;
+	txq->nb_tx_used = 0;
+
+	txq->last_desc_cleaned = txq->nb_tx_desc - 1;
+	txq->nb_tx_free = txq->nb_tx_desc - 1;
+
+	txq->tx_next_dd = txq->tx_rs_thresh - 1;
+	txq->tx_next_rs = txq->tx_rs_thresh - 1;
+}
+
+static int
+ice_dcf_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+{
+	struct ice_dcf_adapter *ad = dev->data->dev_private;
+	struct ice_dcf_hw *hw = &ad->real_hw;
+	struct ice_rx_queue *rxq;
+	int err;
+
+	if (rx_queue_id >= dev->data->nb_rx_queues)
+		return -EINVAL;
+
+	err = ice_dcf_switch_queue(hw, rx_queue_id, true, false);
+	if (err) {
+		PMD_DRV_LOG(ERR, "Failed to switch RX queue %u off",
+			    rx_queue_id);
+		return err;
+	}
+
+	rxq = dev->data->rx_queues[rx_queue_id];
+	rxq->rx_rel_mbufs(rxq);
+	reset_rx_queue(rxq);
+	dev->data->rx_queue_state[rx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
+
+	return 0;
+}
+
+static int
+ice_dcf_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
+{
+	struct ice_dcf_adapter *ad = dev->data->dev_private;
+	struct iavf_hw *hw = &ad->real_hw.avf;
+	struct ice_tx_queue *txq;
+	int err = 0;
+
+	if (tx_queue_id >= dev->data->nb_tx_queues)
+		return -EINVAL;
+
+	txq = dev->data->tx_queues[tx_queue_id];
+
+	/* Init the RX tail register. */
+	txq->qtx_tail = hw->hw_addr + IAVF_QTX_TAIL1(tx_queue_id);
+	IAVF_PCI_REG_WRITE(txq->qtx_tail, 0);
+	IAVF_WRITE_FLUSH(hw);
+
+	/* Ready to switch the queue on */
+	err = ice_dcf_switch_queue(&ad->real_hw, tx_queue_id, false, true);
+
+	if (err)
+		PMD_DRV_LOG(ERR, "Failed to switch TX queue %u on",
+			    tx_queue_id);
+	else
+		dev->data->tx_queue_state[tx_queue_id] =
+			RTE_ETH_QUEUE_STATE_STARTED;
+
+	return err;
+}
+
+static int
+ice_dcf_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
+{
+	struct ice_dcf_adapter *ad = dev->data->dev_private;
+	struct ice_dcf_hw *hw = &ad->real_hw;
+	struct ice_tx_queue *txq;
+	int err;
+
+	if (tx_queue_id >= dev->data->nb_tx_queues)
+		return -EINVAL;
+
+	err = ice_dcf_switch_queue(hw, tx_queue_id, false, false);
+	if (err) {
+		PMD_DRV_LOG(ERR, "Failed to switch TX queue %u off",
+			    tx_queue_id);
+		return err;
+	}
+
+	txq = dev->data->tx_queues[tx_queue_id];
+	txq->tx_rel_mbufs(txq);
+	reset_tx_queue(txq);
+	dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
+
+	return 0;
+}
+
+static int
+ice_dcf_start_queues(struct rte_eth_dev *dev)
+{
+	struct ice_rx_queue *rxq;
+	struct ice_tx_queue *txq;
+	int i;
+
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		txq = dev->data->tx_queues[i];
+		if (txq->tx_deferred_start)
+			continue;
+		if (ice_dcf_tx_queue_start(dev, i) != 0) {
+			PMD_DRV_LOG(ERR, "Fail to start queue %u", i);
+			return -1;
+		}
+	}
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		rxq = dev->data->rx_queues[i];
+		if (rxq->rx_deferred_start)
+			continue;
+		if (ice_dcf_rx_queue_start(dev, i) != 0) {
+			PMD_DRV_LOG(ERR, "Fail to start queue %u", i);
+			return -1;
+		}
+	}
+
+	return 0;
+}
+
 static int
 ice_dcf_dev_start(struct rte_eth_dev *dev)
 {
@@ -266,20 +519,72 @@ ice_dcf_dev_start(struct rte_eth_dev *dev)
 		return ret;
 	}
 
+	if (dev->data->dev_conf.intr_conf.rxq != 0) {
+		rte_intr_disable(intr_handle);
+		rte_intr_enable(intr_handle);
+	}
+
+	ret = ice_dcf_start_queues(dev);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "Failed to enable queues");
+		return ret;
+	}
+
 	dev->data->dev_link.link_status = ETH_LINK_UP;
 
 	return 0;
 }
 
+static void
+ice_dcf_stop_queues(struct rte_eth_dev *dev)
+{
+	struct ice_dcf_adapter *ad = dev->data->dev_private;
+	struct ice_dcf_hw *hw = &ad->real_hw;
+	struct ice_rx_queue *rxq;
+	struct ice_tx_queue *txq;
+	int ret, i;
+
+	/* Stop All queues */
+	ret = ice_dcf_disable_queues(hw);
+	if (ret)
+		PMD_DRV_LOG(WARNING, "Fail to stop queues");
+
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		txq = dev->data->tx_queues[i];
+		if (!txq)
+			continue;
+		txq->tx_rel_mbufs(txq);
+		reset_tx_queue(txq);
+		dev->data->tx_queue_state[i] = RTE_ETH_QUEUE_STATE_STOPPED;
+	}
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		rxq = dev->data->rx_queues[i];
+		if (!rxq)
+			continue;
+		rxq->rx_rel_mbufs(rxq);
+		reset_rx_queue(rxq);
+		dev->data->rx_queue_state[i] = RTE_ETH_QUEUE_STATE_STOPPED;
+	}
+}
+
 static void
 ice_dcf_dev_stop(struct rte_eth_dev *dev)
 {
 	struct ice_dcf_adapter *dcf_ad = dev->data->dev_private;
+	struct rte_intr_handle *intr_handle = dev->intr_handle;
 	struct ice_adapter *ad = &dcf_ad->parent;
 
 	if (ad->pf.adapter_stopped == 1)
 		return;
 
+	ice_dcf_stop_queues(dev);
+
+	rte_intr_efd_disable(intr_handle);
+	if (intr_handle->intr_vec) {
+		rte_free(intr_handle->intr_vec);
+		intr_handle->intr_vec = NULL;
+	}
+
 	dev->data->dev_link.link_status = ETH_LINK_DOWN;
 	ad->pf.adapter_stopped = 1;
 }
@@ -476,6 +781,10 @@ static const struct eth_dev_ops ice_dcf_eth_dev_ops = {
 	.tx_queue_setup          = ice_tx_queue_setup,
 	.rx_queue_release        = ice_rx_queue_release,
 	.tx_queue_release        = ice_tx_queue_release,
+	.rx_queue_start          = ice_dcf_rx_queue_start,
+	.tx_queue_start          = ice_dcf_tx_queue_start,
+	.rx_queue_stop           = ice_dcf_rx_queue_stop,
+	.tx_queue_stop           = ice_dcf_tx_queue_stop,
 	.link_update             = ice_dcf_link_update,
 	.stats_get               = ice_dcf_stats_get,
 	.stats_reset             = ice_dcf_stats_reset,
-- 
2.17.1


^ permalink raw reply	[flat|nested] 65+ messages in thread

* [dpdk-dev] [PATCH v1 10/12] net/ice: enable stats for DCF
  2020-06-05 20:17 [dpdk-dev] [PATCH v1 00/12] enable DCF datapath configuration Ting Xu
                   ` (8 preceding siblings ...)
  2020-06-05 20:17 ` [dpdk-dev] [PATCH v1 09/12] net/ice: add queue start and stop for DCF Ting Xu
@ 2020-06-05 20:17 ` Ting Xu
  2020-06-07 10:19   ` Ye Xiaolong
  2020-06-05 20:17 ` [dpdk-dev] [PATCH v1 11/12] net/ice: set MAC filter during dev start " Ting Xu
                   ` (4 subsequent siblings)
  14 siblings, 1 reply; 65+ messages in thread
From: Ting Xu @ 2020-06-05 20:17 UTC (permalink / raw)
  To: dev; +Cc: qi.z.zhang, qiming.yang, john.mcnamara, marko.kovacevic

From: Qi Zhang <qi.z.zhang@intel.com>

Add support to get and reset Rx/Tx stats in DCF. Query stats
from PF.

Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
---
 drivers/net/ice/ice_dcf.c        |  27 ++++++++
 drivers/net/ice/ice_dcf.h        |   4 ++
 drivers/net/ice/ice_dcf_ethdev.c | 102 +++++++++++++++++++++++++++----
 3 files changed, 120 insertions(+), 13 deletions(-)

diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c
index 56b8c0d25..2338b46cf 100644
--- a/drivers/net/ice/ice_dcf.c
+++ b/drivers/net/ice/ice_dcf.c
@@ -997,3 +997,30 @@ ice_dcf_disable_queues(struct ice_dcf_hw *hw)
 	}
 	return 0;
 }
+
+int
+ice_dcf_query_stats(struct ice_dcf_hw *hw,
+		    struct virtchnl_eth_stats *pstats)
+{
+	struct virtchnl_queue_select q_stats;
+	struct dcf_virtchnl_cmd args;
+	int err;
+
+	memset(&q_stats, 0, sizeof(q_stats));
+	q_stats.vsi_id = hw->vsi_res->vsi_id;
+
+	args.v_op = VIRTCHNL_OP_GET_STATS;
+	args.req_msg = (uint8_t *)&q_stats;
+	args.req_msglen = sizeof(q_stats);
+	args.rsp_msglen = sizeof(*pstats);
+	args.rsp_msgbuf = (uint8_t *)pstats;
+	args.rsp_buflen = sizeof(*pstats);
+
+	err = ice_dcf_execute_virtchnl_cmd(hw, &args);
+	if (err) {
+		PMD_DRV_LOG(ERR, "fail to execute command OP_GET_STATS");
+		return err;
+	}
+
+	return 0;
+}
diff --git a/drivers/net/ice/ice_dcf.h b/drivers/net/ice/ice_dcf.h
index 68e1661c0..e82bc7748 100644
--- a/drivers/net/ice/ice_dcf.h
+++ b/drivers/net/ice/ice_dcf.h
@@ -58,6 +58,7 @@ struct ice_dcf_hw {
 	uint16_t msix_base;
 	uint16_t nb_msix;
 	uint16_t rxq_map[16];
+	struct virtchnl_eth_stats eth_stats_offset;
 };
 
 int ice_dcf_execute_virtchnl_cmd(struct ice_dcf_hw *hw,
@@ -72,4 +73,7 @@ int ice_dcf_configure_queues(struct ice_dcf_hw *hw);
 int ice_dcf_config_irq_map(struct ice_dcf_hw *hw);
 int ice_dcf_switch_queue(struct ice_dcf_hw *hw, uint16_t qid, bool rx, bool on);
 int ice_dcf_disable_queues(struct ice_dcf_hw *hw);
+int ice_dcf_query_stats(struct ice_dcf_hw *hw,
+			struct virtchnl_eth_stats *pstats);
+
 #endif /* _ICE_DCF_H_ */
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index 59113fc4b..869af0e45 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -683,19 +683,6 @@ ice_dcf_dev_info_get(struct rte_eth_dev *dev,
 	return 0;
 }
 
-static int
-ice_dcf_stats_get(__rte_unused struct rte_eth_dev *dev,
-		  __rte_unused struct rte_eth_stats *igb_stats)
-{
-	return 0;
-}
-
-static int
-ice_dcf_stats_reset(__rte_unused struct rte_eth_dev *dev)
-{
-	return 0;
-}
-
 static int
 ice_dcf_dev_promiscuous_enable(__rte_unused struct rte_eth_dev *dev)
 {
@@ -748,6 +735,95 @@ ice_dcf_dev_filter_ctrl(struct rte_eth_dev *dev,
 	return ret;
 }
 
+#define ICE_DCF_32_BIT_WIDTH (CHAR_BIT * 4)
+#define ICE_DCF_48_BIT_WIDTH (CHAR_BIT * 6)
+#define ICE_DCF_48_BIT_MASK  RTE_LEN2MASK(ICE_DCF_48_BIT_WIDTH, uint64_t)
+
+static void
+ice_dcf_stat_update_48(uint64_t *offset, uint64_t *stat)
+{
+	if (*stat >= *offset)
+		*stat = *stat - *offset;
+	else
+		*stat = (uint64_t)((*stat +
+			((uint64_t)1 << ICE_DCF_48_BIT_WIDTH)) - *offset);
+
+	*stat &= ICE_DCF_48_BIT_MASK;
+}
+
+static void
+ice_dcf_stat_update_32(uint64_t *offset, uint64_t *stat)
+{
+	if (*stat >= *offset)
+		*stat = (uint64_t)(*stat - *offset);
+	else
+		*stat = (uint64_t)((*stat +
+			((uint64_t)1 << ICE_DCF_32_BIT_WIDTH)) - *offset);
+}
+
+static void
+ice_dcf_update_stats(struct virtchnl_eth_stats *oes,
+		     struct virtchnl_eth_stats *nes)
+{
+	ice_dcf_stat_update_48(&oes->rx_bytes, &nes->rx_bytes);
+	ice_dcf_stat_update_48(&oes->rx_unicast, &nes->rx_unicast);
+	ice_dcf_stat_update_48(&oes->rx_multicast, &nes->rx_multicast);
+	ice_dcf_stat_update_48(&oes->rx_broadcast, &nes->rx_broadcast);
+	ice_dcf_stat_update_32(&oes->rx_discards, &nes->rx_discards);
+	ice_dcf_stat_update_48(&oes->tx_bytes, &nes->tx_bytes);
+	ice_dcf_stat_update_48(&oes->tx_unicast, &nes->tx_unicast);
+	ice_dcf_stat_update_48(&oes->tx_multicast, &nes->tx_multicast);
+	ice_dcf_stat_update_48(&oes->tx_broadcast, &nes->tx_broadcast);
+	ice_dcf_stat_update_32(&oes->tx_errors, &nes->tx_errors);
+	ice_dcf_stat_update_32(&oes->tx_discards, &nes->tx_discards);
+}
+
+
+static int
+ice_dcf_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
+{
+	struct ice_dcf_adapter *ad = dev->data->dev_private;
+	struct ice_dcf_hw *hw = &ad->real_hw;
+	struct virtchnl_eth_stats pstats;
+	int ret;
+
+	ret = ice_dcf_query_stats(hw, &pstats);
+	if (ret == 0) {
+		ice_dcf_update_stats(&hw->eth_stats_offset, &pstats);
+		stats->ipackets = pstats.rx_unicast + pstats.rx_multicast +
+				pstats.rx_broadcast - pstats.rx_discards;
+		stats->opackets = pstats.tx_broadcast + pstats.tx_multicast +
+						pstats.tx_unicast;
+		stats->imissed = pstats.rx_discards;
+		stats->oerrors = pstats.tx_errors + pstats.tx_discards;
+		stats->ibytes = pstats.rx_bytes;
+		stats->ibytes -= stats->ipackets * RTE_ETHER_CRC_LEN;
+		stats->obytes = pstats.tx_bytes;
+	} else {
+		PMD_DRV_LOG(ERR, "Get statistics failed");
+	}
+	return -EIO;
+}
+
+static int
+ice_dcf_stats_reset(struct rte_eth_dev *dev)
+{
+	struct ice_dcf_adapter *ad = dev->data->dev_private;
+	struct ice_dcf_hw *hw = &ad->real_hw;
+	struct virtchnl_eth_stats pstats;
+	int ret;
+
+	/* read stat values to clear hardware registers */
+	ret = ice_dcf_query_stats(hw, &pstats);
+	if (ret != 0)
+		return ret;
+
+	/* set stats offset base on current values */
+	hw->eth_stats_offset = pstats;
+
+	return 0;
+}
+
 static void
 ice_dcf_dev_close(struct rte_eth_dev *dev)
 {
-- 
2.17.1


^ permalink raw reply	[flat|nested] 65+ messages in thread

* [dpdk-dev] [PATCH v1 11/12] net/ice: set MAC filter during dev start for DCF
  2020-06-05 20:17 [dpdk-dev] [PATCH v1 00/12] enable DCF datapath configuration Ting Xu
                   ` (9 preceding siblings ...)
  2020-06-05 20:17 ` [dpdk-dev] [PATCH v1 10/12] net/ice: enable stats " Ting Xu
@ 2020-06-05 20:17 ` Ting Xu
  2020-06-05 20:17 ` [dpdk-dev] [PATCH v1 12/12] doc: enable DCF datapath configuration Ting Xu
                   ` (3 subsequent siblings)
  14 siblings, 0 replies; 65+ messages in thread
From: Ting Xu @ 2020-06-05 20:17 UTC (permalink / raw)
  To: dev; +Cc: qi.z.zhang, qiming.yang, john.mcnamara, marko.kovacevic

From: Qi Zhang <qi.z.zhang@intel.com>

Add support to add and delete  MAC address filter in DCF.

Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
---
 drivers/net/ice/ice_dcf.c        | 42 ++++++++++++++++++++++++++++++++
 drivers/net/ice/ice_dcf.h        |  1 +
 drivers/net/ice/ice_dcf_ethdev.c |  7 ++++++
 3 files changed, 50 insertions(+)

diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c
index 2338b46cf..7fd70a394 100644
--- a/drivers/net/ice/ice_dcf.c
+++ b/drivers/net/ice/ice_dcf.c
@@ -1024,3 +1024,45 @@ ice_dcf_query_stats(struct ice_dcf_hw *hw,
 
 	return 0;
 }
+
+int
+ice_dcf_add_del_all_mac_addr(struct ice_dcf_hw *hw, bool add)
+{
+	struct virtchnl_ether_addr_list *list;
+	struct rte_ether_addr *addr;
+	struct dcf_virtchnl_cmd args;
+	int len, err = 0;
+
+	len = sizeof(struct virtchnl_ether_addr_list);
+	addr = hw->eth_dev->data->mac_addrs;
+	len += sizeof(struct virtchnl_ether_addr);
+
+	list = rte_zmalloc(NULL, len, 0);
+	if (!list) {
+		PMD_DRV_LOG(ERR, "fail to allocate memory");
+		return -ENOMEM;
+	}
+
+	rte_memcpy(list->list[0].addr, addr->addr_bytes,
+			sizeof(addr->addr_bytes));
+	PMD_DRV_LOG(DEBUG, "add/rm mac:%x:%x:%x:%x:%x:%x",
+			    addr->addr_bytes[0], addr->addr_bytes[1],
+			    addr->addr_bytes[2], addr->addr_bytes[3],
+			    addr->addr_bytes[4], addr->addr_bytes[5]);
+
+	list->vsi_id = hw->vsi_res->vsi_id;
+	list->num_elements = 1;
+
+	memset(&args, 0, sizeof(args));
+	args.v_op = add ? VIRTCHNL_OP_ADD_ETH_ADDR :
+			VIRTCHNL_OP_DEL_ETH_ADDR;
+	args.req_msg = (uint8_t *)list;
+	args.req_msglen  = len;
+	err = ice_dcf_execute_virtchnl_cmd(hw, &args);
+	if (err)
+		PMD_DRV_LOG(ERR, "fail to execute command %s",
+			    add ? "OP_ADD_ETHER_ADDRESS" :
+			    "OP_DEL_ETHER_ADDRESS");
+	rte_free(list);
+	return err;
+}
diff --git a/drivers/net/ice/ice_dcf.h b/drivers/net/ice/ice_dcf.h
index e82bc7748..a44a01e90 100644
--- a/drivers/net/ice/ice_dcf.h
+++ b/drivers/net/ice/ice_dcf.h
@@ -75,5 +75,6 @@ int ice_dcf_switch_queue(struct ice_dcf_hw *hw, uint16_t qid, bool rx, bool on);
 int ice_dcf_disable_queues(struct ice_dcf_hw *hw);
 int ice_dcf_query_stats(struct ice_dcf_hw *hw,
 			struct virtchnl_eth_stats *pstats);
+int ice_dcf_add_del_all_mac_addr(struct ice_dcf_hw *hw, bool add);
 
 #endif /* _ICE_DCF_H_ */
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index 869af0e45..a1b1ffb56 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -530,6 +530,12 @@ ice_dcf_dev_start(struct rte_eth_dev *dev)
 		return ret;
 	}
 
+	ret = ice_dcf_add_del_all_mac_addr(hw, true);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "Failed to add mac addr");
+		return ret;
+	}
+
 	dev->data->dev_link.link_status = ETH_LINK_UP;
 
 	return 0;
@@ -585,6 +591,7 @@ ice_dcf_dev_stop(struct rte_eth_dev *dev)
 		intr_handle->intr_vec = NULL;
 	}
 
+	ice_dcf_add_del_all_mac_addr(&dcf_ad->real_hw, false);
 	dev->data->dev_link.link_status = ETH_LINK_DOWN;
 	ad->pf.adapter_stopped = 1;
 }
-- 
2.17.1


^ permalink raw reply	[flat|nested] 65+ messages in thread

* [dpdk-dev] [PATCH v1 12/12] doc: enable DCF datapath configuration
  2020-06-05 20:17 [dpdk-dev] [PATCH v1 00/12] enable DCF datapath configuration Ting Xu
                   ` (10 preceding siblings ...)
  2020-06-05 20:17 ` [dpdk-dev] [PATCH v1 11/12] net/ice: set MAC filter during dev start " Ting Xu
@ 2020-06-05 20:17 ` Ting Xu
  2020-06-05 14:41   ` Ye Xiaolong
  2020-06-11 17:08 ` [dpdk-dev] [PATCH v2 00/12] " Ting Xu
                   ` (2 subsequent siblings)
  14 siblings, 1 reply; 65+ messages in thread
From: Ting Xu @ 2020-06-05 20:17 UTC (permalink / raw)
  To: dev; +Cc: qi.z.zhang, qiming.yang, john.mcnamara, marko.kovacevic

Add doc for DCF datapath configuration in DPDK 20.08 release note.

Signed-off-by: Ting Xu <ting.xu@intel.com>
---
 doc/guides/rel_notes/release_20_08.rst | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/doc/guides/rel_notes/release_20_08.rst b/doc/guides/rel_notes/release_20_08.rst
index 39064afbe..3cda6111c 100644
--- a/doc/guides/rel_notes/release_20_08.rst
+++ b/doc/guides/rel_notes/release_20_08.rst
@@ -56,6 +56,11 @@ New Features
      Also, make sure to start the actual text at the margin.
      =========================================================
 
+* **Updated the Intel ice driver.**
+
+  Updated the Intel ice driver with new features and improvements, including:
+
+  * Added support for DCF datapath configuration.
 
 Removed Items
 -------------
-- 
2.17.1


^ permalink raw reply	[flat|nested] 65+ messages in thread

* Re: [dpdk-dev] [PATCH v1 08/12] net/ice: add queue config in DCF
  2020-06-05 20:17 ` [dpdk-dev] [PATCH v1 08/12] net/ice: add queue config in DCF Ting Xu
@ 2020-06-07 10:11   ` Ye Xiaolong
  0 siblings, 0 replies; 65+ messages in thread
From: Ye Xiaolong @ 2020-06-07 10:11 UTC (permalink / raw)
  To: Ting Xu; +Cc: dev, qi.z.zhang, qiming.yang, john.mcnamara, marko.kovacevic

On 06/05, Ting Xu wrote:
>From: Qi Zhang <qi.z.zhang@intel.com>
>
>Add queues and Rx queue irqs configuration during device start
>in DCF. The setup is sent to PF via virtchnl.
>
>Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
>---
> drivers/net/ice/ice_dcf.c        | 109 +++++++++++++++++++++++++++
> drivers/net/ice/ice_dcf.h        |   6 ++
> drivers/net/ice/ice_dcf_ethdev.c | 125 +++++++++++++++++++++++++++++++
> 3 files changed, 240 insertions(+)
>
>diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c
>index 8d078163e..d864ae894 100644
>--- a/drivers/net/ice/ice_dcf.c
>+++ b/drivers/net/ice/ice_dcf.c
>@@ -24,6 +24,7 @@
> #include <rte_dev.h>
> 
> #include "ice_dcf.h"
>+#include "ice_rxtx.h"
> 
> #define ICE_DCF_AQ_LEN     32
> #define ICE_DCF_AQ_BUF_SZ  4096
>@@ -831,3 +832,111 @@ ice_dcf_init_rss(struct ice_dcf_hw *hw)
> 
> 	return 0;
> }
>+
>+#define IAVF_RXDID_LEGACY_1 1
>+#define IAVF_RXDID_COMMS_GENERIC 16
>+
>+int
>+ice_dcf_configure_queues(struct ice_dcf_hw *hw)
>+{
>+	struct ice_rx_queue **rxq =
>+		(struct ice_rx_queue **)hw->eth_dev->data->rx_queues;
>+	struct ice_tx_queue **txq =
>+		(struct ice_tx_queue **)hw->eth_dev->data->tx_queues;
>+	struct virtchnl_vsi_queue_config_info *vc_config;
>+	struct virtchnl_queue_pair_info *vc_qp;
>+	struct dcf_virtchnl_cmd args;
>+	uint16_t i, size;
>+	int err;
>+
>+	size = sizeof(*vc_config) +
>+	       sizeof(vc_config->qpair[0]) * hw->num_queue_pairs;
>+	vc_config = rte_zmalloc("cfg_queue", size, 0);
>+	if (!vc_config)
>+		return -ENOMEM;
>+
>+	vc_config->vsi_id = hw->vsi_res->vsi_id;
>+	vc_config->num_queue_pairs = hw->num_queue_pairs;
>+
>+	for (i = 0, vc_qp = vc_config->qpair;
>+	     i < hw->num_queue_pairs;
>+	     i++, vc_qp++) {
>+		vc_qp->txq.vsi_id = hw->vsi_res->vsi_id;
>+		vc_qp->txq.queue_id = i;
>+		if (i < hw->eth_dev->data->nb_tx_queues) {
>+			vc_qp->txq.ring_len = txq[i]->nb_tx_desc;
>+			vc_qp->txq.dma_ring_addr = txq[i]->tx_ring_dma;
>+		}
>+		vc_qp->rxq.vsi_id = hw->vsi_res->vsi_id;
>+		vc_qp->rxq.queue_id = i;
>+		vc_qp->rxq.max_pkt_size = rxq[i]->max_pkt_len;
>+		if (i < hw->eth_dev->data->nb_rx_queues) {

What about changing as below to reduce the nested level of if blocks.

		if (i >= hw->eth_dev->data->nb_rx_queues)
			break;

		vc_qp->rxq.ring_len = rxq[i]->nb_rx_desc;
		vc_qp->rxq.dma_ring_addr = rxq[i]->rx_ring_dma;
		vc_qp->rxq.databuffer_size = rxq[i]->rx_buf_len;

		...
		}

>+			vc_qp->rxq.ring_len = rxq[i]->nb_rx_desc;
>+			vc_qp->rxq.dma_ring_addr = rxq[i]->rx_ring_dma;
>+			vc_qp->rxq.databuffer_size = rxq[i]->rx_buf_len;
>+
>+			if (hw->vf_res->vf_cap_flags &
>+			    VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC &&
>+			    hw->supported_rxdid &
>+			    BIT(IAVF_RXDID_COMMS_GENERIC)) {
>+				vc_qp->rxq.rxdid = IAVF_RXDID_COMMS_GENERIC;
>+				PMD_DRV_LOG(NOTICE, "request RXDID == %d in "

[snip]

>+static inline uint16_t
>+iavf_calc_itr_interval(int16_t interval)
>+{
>+	if (interval < 0 || interval > IAVF_QUEUE_ITR_INTERVAL_MAX)
>+		interval = IAVF_QUEUE_ITR_INTERVAL_DEFAULT;
>+
>+	/* Convert to hardware count, as writing each 1 represents 2 us */
>+	return interval / 2;
>+}
>+
>+static int ice_dcf_config_rx_queues_irqs(struct rte_eth_dev *dev,
>+				     struct rte_intr_handle *intr_handle)

put the return type in a separate line.

>+{
>+	struct ice_dcf_adapter *adapter = dev->data->dev_private;
>+	struct ice_dcf_hw *hw = &adapter->real_hw;
>+	uint16_t interval, i;
>+	int vec;
>+
>+	if (rte_intr_cap_multiple(intr_handle) &&
>+	    dev->data->dev_conf.intr_conf.rxq) {
>+		if (rte_intr_efd_enable(intr_handle, dev->data->nb_rx_queues))
>+			return -1;
>+	}
>+
>+	if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
>+		intr_handle->intr_vec =
>+			rte_zmalloc("intr_vec",
>+				    dev->data->nb_rx_queues * sizeof(int), 0);
>+		if (!intr_handle->intr_vec) {
>+			PMD_DRV_LOG(ERR, "Failed to allocate %d rx intr_vec",
>+				    dev->data->nb_rx_queues);
>+			return -1;
>+		}
>+	}
>+
>+	if (!dev->data->dev_conf.intr_conf.rxq ||
>+	    !rte_intr_dp_is_en(intr_handle)) {
>+		/* Rx interrupt disabled, Map interrupt only for writeback */
>+		hw->nb_msix = 1;
>+		if (hw->vf_res->vf_cap_flags &
>+		    VIRTCHNL_VF_OFFLOAD_WB_ON_ITR) {
>+			/* If WB_ON_ITR supports, enable it */
>+			hw->msix_base = IAVF_RX_VEC_START;
>+			IAVF_WRITE_REG(&hw->avf,
>+				       IAVF_VFINT_DYN_CTLN1(hw->msix_base - 1),
>+				       IAVF_VFINT_DYN_CTLN1_ITR_INDX_MASK |
>+				       IAVF_VFINT_DYN_CTLN1_WB_ON_ITR_MASK);
>+		} else {
>+			/* If no WB_ON_ITR offload flags, need to set
>+			 * interrupt for descriptor write back.
>+			 */
>+			hw->msix_base = IAVF_MISC_VEC_ID;
>+
>+			/* set ITR to max */
>+			interval =
>+			iavf_calc_itr_interval(IAVF_QUEUE_ITR_INTERVAL_MAX);
>+			IAVF_WRITE_REG(&hw->avf, IAVF_VFINT_DYN_CTL01,
>+				       IAVF_VFINT_DYN_CTL01_INTENA_MASK |
>+				       (IAVF_ITR_INDEX_DEFAULT <<
>+					IAVF_VFINT_DYN_CTL01_ITR_INDX_SHIFT) |
>+				       (interval <<
>+					IAVF_VFINT_DYN_CTL01_INTERVAL_SHIFT));
>+		}
>+		IAVF_WRITE_FLUSH(&hw->avf);
>+		/* map all queues to the same interrupt */
>+		for (i = 0; i < dev->data->nb_rx_queues; i++)
>+			hw->rxq_map[hw->msix_base] |= 1 << i;
>+	} else {
>+		if (!rte_intr_allow_others(intr_handle)) {
>+			hw->nb_msix = 1;
>+			hw->msix_base = IAVF_MISC_VEC_ID;
>+			for (i = 0; i < dev->data->nb_rx_queues; i++) {
>+				hw->rxq_map[hw->msix_base] |= 1 << i;
>+				intr_handle->intr_vec[i] = IAVF_MISC_VEC_ID;
>+			}
>+			PMD_DRV_LOG(DEBUG,
>+				    "vector %u are mapping to all Rx queues",
>+				    hw->msix_base);
>+		} else {
>+			/* If Rx interrupt is reuquired, and we can use
>+			 * multi interrupts, then the vec is from 1
>+			 */
>+			hw->nb_msix = RTE_MIN(hw->vf_res->max_vectors,
>+					      intr_handle->nb_efd);
>+			hw->msix_base = IAVF_MISC_VEC_ID;
>+			vec = IAVF_MISC_VEC_ID;
>+			for (i = 0; i < dev->data->nb_rx_queues; i++) {
>+				hw->rxq_map[vec] |= 1 << i;
>+				intr_handle->intr_vec[i] = vec++;
>+				if (vec >= hw->nb_msix)
>+					vec = IAVF_RX_VEC_START;
>+			}
>+			PMD_DRV_LOG(DEBUG,
>+				    "%u vectors are mapping to %u Rx queues",
>+				    hw->nb_msix, dev->data->nb_rx_queues);
>+		}
>+	}
>+
>+	if (ice_dcf_config_irq_map(hw)) {
>+		PMD_DRV_LOG(ERR, "config interrupt mapping failed");
>+		return -1;

Do we need to free intr_handle->intr_vec here?

>+	}
>+	return 0;
>+}
>+
> static int
> ice_dcf_dev_start(struct rte_eth_dev *dev)
> {
> 	struct ice_dcf_adapter *dcf_ad = dev->data->dev_private;
>+	struct rte_intr_handle *intr_handle = dev->intr_handle;
> 	struct ice_adapter *ad = &dcf_ad->parent;
> 	struct ice_dcf_hw *hw = &dcf_ad->real_hw;
> 	int ret;
>@@ -141,6 +254,18 @@ ice_dcf_dev_start(struct rte_eth_dev *dev)
> 		}
> 	}
> 
>+	ret = ice_dcf_configure_queues(hw);
>+	if (ret) {
>+		PMD_DRV_LOG(ERR, "Fail to config queues");
>+		return ret;
>+	}
>+
>+	ret = ice_dcf_config_rx_queues_irqs(dev, intr_handle);
>+	if (ret) {
>+		PMD_DRV_LOG(ERR, "Fail to config rx queues' irqs");
>+		return ret;
>+	}
>+
> 	dev->data->dev_link.link_status = ETH_LINK_UP;
> 
> 	return 0;
>-- 
>2.17.1
>

^ permalink raw reply	[flat|nested] 65+ messages in thread

* Re: [dpdk-dev] [PATCH v1 10/12] net/ice: enable stats for DCF
  2020-06-05 20:17 ` [dpdk-dev] [PATCH v1 10/12] net/ice: enable stats " Ting Xu
@ 2020-06-07 10:19   ` Ye Xiaolong
  0 siblings, 0 replies; 65+ messages in thread
From: Ye Xiaolong @ 2020-06-07 10:19 UTC (permalink / raw)
  To: Ting Xu; +Cc: dev, qi.z.zhang, qiming.yang, john.mcnamara, marko.kovacevic

On 06/05, Ting Xu wrote:
>From: Qi Zhang <qi.z.zhang@intel.com>
>
>Add support to get and reset Rx/Tx stats in DCF. Query stats
>from PF.
>
>Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
>---
> drivers/net/ice/ice_dcf.c        |  27 ++++++++
> drivers/net/ice/ice_dcf.h        |   4 ++
> drivers/net/ice/ice_dcf_ethdev.c | 102 +++++++++++++++++++++++++++----
> 3 files changed, 120 insertions(+), 13 deletions(-)
>
>diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c

[snip]

>+static int
>+ice_dcf_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
>+{
>+	struct ice_dcf_adapter *ad = dev->data->dev_private;
>+	struct ice_dcf_hw *hw = &ad->real_hw;
>+	struct virtchnl_eth_stats pstats;
>+	int ret;
>+
>+	ret = ice_dcf_query_stats(hw, &pstats);
>+	if (ret == 0) {
>+		ice_dcf_update_stats(&hw->eth_stats_offset, &pstats);
>+		stats->ipackets = pstats.rx_unicast + pstats.rx_multicast +
>+				pstats.rx_broadcast - pstats.rx_discards;
>+		stats->opackets = pstats.tx_broadcast + pstats.tx_multicast +
>+						pstats.tx_unicast;
>+		stats->imissed = pstats.rx_discards;
>+		stats->oerrors = pstats.tx_errors + pstats.tx_discards;
>+		stats->ibytes = pstats.rx_bytes;
>+		stats->ibytes -= stats->ipackets * RTE_ETHER_CRC_LEN;
>+		stats->obytes = pstats.tx_bytes;
>+	} else {
>+		PMD_DRV_LOG(ERR, "Get statistics failed");
>+	}
>+	return -EIO;

Return -EIO even on success seems not correct.

>+}
>+
>+static int
>+ice_dcf_stats_reset(struct rte_eth_dev *dev)
>+{
>+	struct ice_dcf_adapter *ad = dev->data->dev_private;
>+	struct ice_dcf_hw *hw = &ad->real_hw;
>+	struct virtchnl_eth_stats pstats;
>+	int ret;
>+
>+	/* read stat values to clear hardware registers */
>+	ret = ice_dcf_query_stats(hw, &pstats);
>+	if (ret != 0)
>+		return ret;
>+
>+	/* set stats offset base on current values */
>+	hw->eth_stats_offset = pstats;
>+
>+	return 0;
>+}
>+
> static void
> ice_dcf_dev_close(struct rte_eth_dev *dev)
> {
>-- 
>2.17.1
>

^ permalink raw reply	[flat|nested] 65+ messages in thread

* Re: [dpdk-dev] [PATCH v1 09/12] net/ice: add queue start and stop for DCF
  2020-06-05 20:17 ` [dpdk-dev] [PATCH v1 09/12] net/ice: add queue start and stop for DCF Ting Xu
@ 2020-06-07 12:28   ` Ye Xiaolong
  2020-06-08  7:35   ` Yang, Qiming
  1 sibling, 0 replies; 65+ messages in thread
From: Ye Xiaolong @ 2020-06-07 12:28 UTC (permalink / raw)
  To: Ting Xu, Qiming Yang
  Cc: dev, qi.z.zhang, qiming.yang, john.mcnamara, marko.kovacevic

On 06/05, Ting Xu wrote:
>From: Qi Zhang <qi.z.zhang@intel.com>
>
>Add queue start and stop in DCF. Support queue enable and disable
>through virtual channel. Add support for Rx queue mbufs allocation
>and queue reset.

There is one i40e patch [1] from qiming to correct the queue behavior as "when
one queue fails to start, all following queues start action should be skipped
and previous started queues should be cleaned, and for queue stop case, one
queue stop failure shouldn't skip following queues stop action", I think it
applies to this patch as well, add Qiming for more comments.


[1] https://patches.dpdk.org/patch/70362/

>
>Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
>---
> drivers/net/ice/ice_dcf.c        |  57 ++++++
> drivers/net/ice/ice_dcf.h        |   3 +-
> drivers/net/ice/ice_dcf_ethdev.c | 309 +++++++++++++++++++++++++++++++
> 3 files changed, 368 insertions(+), 1 deletion(-)
>
>diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c
>index d864ae894..56b8c0d25 100644
>--- a/drivers/net/ice/ice_dcf.c
>+++ b/drivers/net/ice/ice_dcf.c
>@@ -940,3 +940,60 @@ ice_dcf_config_irq_map(struct ice_dcf_hw *hw)
> 	rte_free(map_info);
> 	return err;
> }
>+
>+int
>+ice_dcf_switch_queue(struct ice_dcf_hw *hw, uint16_t qid, bool rx, bool on)
>+{
>+	struct virtchnl_queue_select queue_select;
>+	struct dcf_virtchnl_cmd args;
>+	int err;
>+
>+	memset(&queue_select, 0, sizeof(queue_select));
>+	queue_select.vsi_id = hw->vsi_res->vsi_id;
>+	if (rx)
>+		queue_select.rx_queues |= 1 << qid;
>+	else
>+		queue_select.tx_queues |= 1 << qid;
>+
>+	memset(&args, 0, sizeof(args));
>+	if (on)
>+		args.v_op = VIRTCHNL_OP_ENABLE_QUEUES;
>+	else
>+		args.v_op = VIRTCHNL_OP_DISABLE_QUEUES;
>+
>+	args.req_msg = (u8 *)&queue_select;
>+	args.req_msglen = sizeof(queue_select);
>+
>+	err = ice_dcf_execute_virtchnl_cmd(hw, &args);
>+	if (err)
>+		PMD_DRV_LOG(ERR, "Failed to execute command of %s",
>+			    on ? "OP_ENABLE_QUEUES" : "OP_DISABLE_QUEUES");
>+	return err;
>+}
>+
>+int
>+ice_dcf_disable_queues(struct ice_dcf_hw *hw)
>+{
>+	struct virtchnl_queue_select queue_select;
>+	struct dcf_virtchnl_cmd args;
>+	int err;
>+
>+	memset(&queue_select, 0, sizeof(queue_select));
>+	queue_select.vsi_id = hw->vsi_res->vsi_id;
>+
>+	queue_select.rx_queues = BIT(hw->eth_dev->data->nb_rx_queues) - 1;
>+	queue_select.tx_queues = BIT(hw->eth_dev->data->nb_tx_queues) - 1;
>+
>+	memset(&args, 0, sizeof(args));
>+	args.v_op = VIRTCHNL_OP_DISABLE_QUEUES;
>+	args.req_msg = (u8 *)&queue_select;
>+	args.req_msglen = sizeof(queue_select);
>+
>+	err = ice_dcf_execute_virtchnl_cmd(hw, &args);
>+	if (err) {
>+		PMD_DRV_LOG(ERR,
>+			    "Failed to execute command of OP_DISABLE_QUEUES");
>+		return err;
>+	}
>+	return 0;

Better to align with above ice_dcf_switch_queue, one 'return err' in the end
is enough.

Thanks,
Xiaolong


^ permalink raw reply	[flat|nested] 65+ messages in thread

* Re: [dpdk-dev] [PATCH v1 09/12] net/ice: add queue start and stop for DCF
  2020-06-05 20:17 ` [dpdk-dev] [PATCH v1 09/12] net/ice: add queue start and stop for DCF Ting Xu
  2020-06-07 12:28   ` Ye Xiaolong
@ 2020-06-08  7:35   ` Yang, Qiming
  2020-06-09  7:35     ` Xu, Ting
  1 sibling, 1 reply; 65+ messages in thread
From: Yang, Qiming @ 2020-06-08  7:35 UTC (permalink / raw)
  To: Xu, Ting, dev; +Cc: Zhang, Qi Z, Mcnamara, John, Kovacevic, Marko



> -----Original Message-----
> From: Xu, Ting <ting.xu@intel.com>
> Sent: Saturday, June 6, 2020 04:18
> To: dev@dpdk.org
> Cc: Zhang, Qi Z <qi.z.zhang@intel.com>; Yang, Qiming
> <qiming.yang@intel.com>; Mcnamara, John <john.mcnamara@intel.com>;
> Kovacevic, Marko <marko.kovacevic@intel.com>
> Subject: [PATCH v1 09/12] net/ice: add queue start and stop for DCF
> 
> From: Qi Zhang <qi.z.zhang@intel.com>
> 
> Add queue start and stop in DCF. Support queue enable and disable through
> virtual channel. Add support for Rx queue mbufs allocation and queue reset.
> 
> Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
> ---
>  drivers/net/ice/ice_dcf.c        |  57 ++++++
>  drivers/net/ice/ice_dcf.h        |   3 +-
>  drivers/net/ice/ice_dcf_ethdev.c | 309
> +++++++++++++++++++++++++++++++
>  3 files changed, 368 insertions(+), 1 deletion(-)
> 

Snip...

> +}
> diff --git a/drivers/net/ice/ice_dcf.h b/drivers/net/ice/ice_dcf.h index
> 9470d1df7..68e1661c0 100644
> --- a/drivers/net/ice/ice_dcf.h
> +++ b/drivers/net/ice/ice_dcf.h
> @@ -70,5 +70,6 @@ void ice_dcf_uninit_hw(struct rte_eth_dev *eth_dev,
> struct ice_dcf_hw *hw);  int ice_dcf_init_rss(struct ice_dcf_hw *hw);  int
> ice_dcf_configure_queues(struct ice_dcf_hw *hw);  int
> ice_dcf_config_irq_map(struct ice_dcf_hw *hw);
> -
> +int ice_dcf_switch_queue(struct ice_dcf_hw *hw, uint16_t qid, bool rx,
> +bool on); int ice_dcf_disable_queues(struct ice_dcf_hw *hw);
>  #endif /* _ICE_DCF_H_ */
> diff --git a/drivers/net/ice/ice_dcf_ethdev.c
> b/drivers/net/ice/ice_dcf_ethdev.c
> index 9605fb8ed..59113fc4b 100644
> --- a/drivers/net/ice/ice_dcf_ethdev.c
> +++ b/drivers/net/ice/ice_dcf_ethdev.c
> @@ -226,6 +226,259 @@ static int ice_dcf_config_rx_queues_irqs(struct
> rte_eth_dev *dev,
>  	return 0;
>  }
> 
.
> +static int
> +ice_dcf_start_queues(struct rte_eth_dev *dev) {
> +	struct ice_rx_queue *rxq;
> +	struct ice_tx_queue *txq;
> +	int i;
> +
> +	for (i = 0; i < dev->data->nb_tx_queues; i++) {
> +		txq = dev->data->tx_queues[i];
> +		if (txq->tx_deferred_start)
> +			continue;
> +		if (ice_dcf_tx_queue_start(dev, i) != 0) {
> +			PMD_DRV_LOG(ERR, "Fail to start queue %u", i);
> +			return -1;

If queue start fail, should stop the queue already started

> +		}
> +	}
> +
> +	for (i = 0; i < dev->data->nb_rx_queues; i++) {
> +		rxq = dev->data->rx_queues[i];
> +		if (rxq->rx_deferred_start)
> +			continue;
> +		if (ice_dcf_rx_queue_start(dev, i) != 0) {
> +			PMD_DRV_LOG(ERR, "Fail to start queue %u", i);
> +			return -1;
> +		}
> +	}
> +
> +	return 0;
> +}
> +
>  static int
>  ice_dcf_dev_start(struct rte_eth_dev *dev)  { @@ -266,20 +519,72 @@
> ice_dcf_dev_start(struct rte_eth_dev *dev)
>  		return ret;
>  	}
> 
> +	if (dev->data->dev_conf.intr_conf.rxq != 0) {
> +		rte_intr_disable(intr_handle);
> +		rte_intr_enable(intr_handle);
> +	}
> +
> +	ret = ice_dcf_start_queues(dev);
> +	if (ret) {
> +		PMD_DRV_LOG(ERR, "Failed to enable queues");
> +		return ret;
> +	}
> +
>  	dev->data->dev_link.link_status = ETH_LINK_UP;
> 
>  	return 0;
>  }
> 
> +static void
> +ice_dcf_stop_queues(struct rte_eth_dev *dev) {
> +	struct ice_dcf_adapter *ad = dev->data->dev_private;
> +	struct ice_dcf_hw *hw = &ad->real_hw;
> +	struct ice_rx_queue *rxq;
> +	struct ice_tx_queue *txq;
> +	int ret, i;
> +
> +	/* Stop All queues */
> +	ret = ice_dcf_disable_queues(hw);
> +	if (ret)
> +		PMD_DRV_LOG(WARNING, "Fail to stop queues");
> +
> +	for (i = 0; i < dev->data->nb_tx_queues; i++) {
> +		txq = dev->data->tx_queues[i];
> +		if (!txq)
> +			continue;
> +		txq->tx_rel_mbufs(txq);
> +		reset_tx_queue(txq);
> +		dev->data->tx_queue_state[i] =
> RTE_ETH_QUEUE_STATE_STOPPED;
> +	}
> +	for (i = 0; i < dev->data->nb_rx_queues; i++) {
> +		rxq = dev->data->rx_queues[i];
> +		if (!rxq)
> +			continue;
> +		rxq->rx_rel_mbufs(rxq);
> +		reset_rx_queue(rxq);
> +		dev->data->rx_queue_state[i] =
> RTE_ETH_QUEUE_STATE_STOPPED;
> +	}
> +}
> +
>  static void
>  ice_dcf_dev_stop(struct rte_eth_dev *dev)  {
>  	struct ice_dcf_adapter *dcf_ad = dev->data->dev_private;
> +	struct rte_intr_handle *intr_handle = dev->intr_handle;
>  	struct ice_adapter *ad = &dcf_ad->parent;
> 
>  	if (ad->pf.adapter_stopped == 1)
>  		return;
> 
> +	ice_dcf_stop_queues(dev);
> +
> +	rte_intr_efd_disable(intr_handle);
> +	if (intr_handle->intr_vec) {
> +		rte_free(intr_handle->intr_vec);
> +		intr_handle->intr_vec = NULL;
> +	}
> +
>  	dev->data->dev_link.link_status = ETH_LINK_DOWN;
>  	ad->pf.adapter_stopped = 1;
>  }
> @@ -476,6 +781,10 @@ static const struct eth_dev_ops
> ice_dcf_eth_dev_ops = {
>  	.tx_queue_setup          = ice_tx_queue_setup,
>  	.rx_queue_release        = ice_rx_queue_release,
>  	.tx_queue_release        = ice_tx_queue_release,
> +	.rx_queue_start          = ice_dcf_rx_queue_start,
> +	.tx_queue_start          = ice_dcf_tx_queue_start,
> +	.rx_queue_stop           = ice_dcf_rx_queue_stop,
> +	.tx_queue_stop           = ice_dcf_tx_queue_stop,
>  	.link_update             = ice_dcf_link_update,
>  	.stats_get               = ice_dcf_stats_get,
>  	.stats_reset             = ice_dcf_stats_reset,
> --
> 2.17.1


^ permalink raw reply	[flat|nested] 65+ messages in thread

* Re: [dpdk-dev] [PATCH v1 09/12] net/ice: add queue start and stop for DCF
  2020-06-08  7:35   ` Yang, Qiming
@ 2020-06-09  7:35     ` Xu, Ting
  2020-06-10  5:03       ` Yang, Qiming
  0 siblings, 1 reply; 65+ messages in thread
From: Xu, Ting @ 2020-06-09  7:35 UTC (permalink / raw)
  To: Yang, Qiming, dev
  Cc: Zhang, Qi Z, Mcnamara, John, Kovacevic, Marko, Ye, Xiaolong

Hi, Qiming,

> -----Original Message-----
> From: Yang, Qiming <qiming.yang@intel.com>
> Sent: Monday, June 8, 2020 3:36 PM
> To: Xu, Ting <ting.xu@intel.com>; dev@dpdk.org
> Cc: Zhang, Qi Z <qi.z.zhang@intel.com>; Mcnamara, John
> <john.mcnamara@intel.com>; Kovacevic, Marko <marko.kovacevic@intel.com>
> Subject: RE: [PATCH v1 09/12] net/ice: add queue start and stop for DCF
> 
> 
> 
> > -----Original Message-----
> > From: Xu, Ting <ting.xu@intel.com>
> > Sent: Saturday, June 6, 2020 04:18
> > To: dev@dpdk.org
> > Cc: Zhang, Qi Z <qi.z.zhang@intel.com>; Yang, Qiming
> > <qiming.yang@intel.com>; Mcnamara, John <john.mcnamara@intel.com>;
> > Kovacevic, Marko <marko.kovacevic@intel.com>
> > Subject: [PATCH v1 09/12] net/ice: add queue start and stop for DCF
> >
> > From: Qi Zhang <qi.z.zhang@intel.com>
> >
> > Add queue start and stop in DCF. Support queue enable and disable
> > through virtual channel. Add support for Rx queue mbufs allocation and
> queue reset.
> >
> > Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
> > ---
> >  drivers/net/ice/ice_dcf.c        |  57 ++++++
> >  drivers/net/ice/ice_dcf.h        |   3 +-
> >  drivers/net/ice/ice_dcf_ethdev.c | 309
> > +++++++++++++++++++++++++++++++
> >  3 files changed, 368 insertions(+), 1 deletion(-)
> >
> 
> Snip...
> 
> > +}
> > diff --git a/drivers/net/ice/ice_dcf.h b/drivers/net/ice/ice_dcf.h
> > index
> > 9470d1df7..68e1661c0 100644
> > --- a/drivers/net/ice/ice_dcf.h
> > +++ b/drivers/net/ice/ice_dcf.h
> > @@ -70,5 +70,6 @@ void ice_dcf_uninit_hw(struct rte_eth_dev *eth_dev,
> > struct ice_dcf_hw *hw);  int ice_dcf_init_rss(struct ice_dcf_hw *hw);
> > int ice_dcf_configure_queues(struct ice_dcf_hw *hw);  int
> > ice_dcf_config_irq_map(struct ice_dcf_hw *hw);
> > -
> > +int ice_dcf_switch_queue(struct ice_dcf_hw *hw, uint16_t qid, bool
> > +rx, bool on); int ice_dcf_disable_queues(struct ice_dcf_hw *hw);
> >  #endif /* _ICE_DCF_H_ */
> > diff --git a/drivers/net/ice/ice_dcf_ethdev.c
> > b/drivers/net/ice/ice_dcf_ethdev.c
> > index 9605fb8ed..59113fc4b 100644
> > --- a/drivers/net/ice/ice_dcf_ethdev.c
> > +++ b/drivers/net/ice/ice_dcf_ethdev.c
> > @@ -226,6 +226,259 @@ static int ice_dcf_config_rx_queues_irqs(struct
> > rte_eth_dev *dev,
> >  return 0;
> >  }
> >
> .
> > +static int
> > +ice_dcf_start_queues(struct rte_eth_dev *dev) { struct ice_rx_queue
> > +*rxq; struct ice_tx_queue *txq; int i;
> > +
> > +for (i = 0; i < dev->data->nb_tx_queues; i++) { txq =
> > +dev->data->tx_queues[i]; if (txq->tx_deferred_start) continue; if
> > +(ice_dcf_tx_queue_start(dev, i) != 0) { PMD_DRV_LOG(ERR, "Fail to
> > +start queue %u", i); return -1;
> 
> If queue start fail, should stop the queue already started
> 

This operation can only be seen in ice and i40e PF driver. In iavf or even earlier i40evf, they did not stop the already started queues when failed.
I am not sure if this operation is suitable for DCF? Or we should not follow the current iavf, since it actually needs this modification to stop started queues as well?

> > +}
> > +}
> > +
> > +for (i = 0; i < dev->data->nb_rx_queues; i++) { rxq =
> > +dev->data->rx_queues[i]; if (rxq->rx_deferred_start) continue; if
> > +(ice_dcf_rx_queue_start(dev, i) != 0) { PMD_DRV_LOG(ERR, "Fail to
> > +start queue %u", i); return -1; } }
> > +
> > +return 0;
> > +}
> > +
> >  static int
> >  ice_dcf_dev_start(struct rte_eth_dev *dev)  { @@ -266,20 +519,72 @@
> > ice_dcf_dev_start(struct rte_eth_dev *dev)  return ret;  }
> >
> > +if (dev->data->dev_conf.intr_conf.rxq != 0) {
> > +rte_intr_disable(intr_handle); rte_intr_enable(intr_handle); }
> > +
> > +ret = ice_dcf_start_queues(dev);
> > +if (ret) {
> > +PMD_DRV_LOG(ERR, "Failed to enable queues"); return ret; }
> > +
> >  dev->data->dev_link.link_status = ETH_LINK_UP;
> >
> >  return 0;
> >  }
> >
> > +static void
> > +ice_dcf_stop_queues(struct rte_eth_dev *dev) { struct ice_dcf_adapter
> > +*ad = dev->data->dev_private; struct ice_dcf_hw *hw = &ad->real_hw;
> > +struct ice_rx_queue *rxq; struct ice_tx_queue *txq; int ret, i;
> > +
> > +/* Stop All queues */
> > +ret = ice_dcf_disable_queues(hw);
> > +if (ret)
> > +PMD_DRV_LOG(WARNING, "Fail to stop queues");
> > +
> > +for (i = 0; i < dev->data->nb_tx_queues; i++) { txq =
> > +dev->data->tx_queues[i]; if (!txq) continue;
> > +txq->tx_rel_mbufs(txq);
> > +reset_tx_queue(txq);
> > +dev->data->tx_queue_state[i] =
> > RTE_ETH_QUEUE_STATE_STOPPED;
> > +}
> > +for (i = 0; i < dev->data->nb_rx_queues; i++) { rxq =
> > +dev->data->rx_queues[i]; if (!rxq) continue;
> > +rxq->rx_rel_mbufs(rxq);
> > +reset_rx_queue(rxq);
> > +dev->data->rx_queue_state[i] =
> > RTE_ETH_QUEUE_STATE_STOPPED;
> > +}
> > +}
> > +
> >  static void
> >  ice_dcf_dev_stop(struct rte_eth_dev *dev)  {  struct ice_dcf_adapter
> > *dcf_ad = dev->data->dev_private;
> > +struct rte_intr_handle *intr_handle = dev->intr_handle;
> >  struct ice_adapter *ad = &dcf_ad->parent;
> >
> >  if (ad->pf.adapter_stopped == 1)
> >  return;
> >
> > +ice_dcf_stop_queues(dev);
> > +
> > +rte_intr_efd_disable(intr_handle);
> > +if (intr_handle->intr_vec) {
> > +rte_free(intr_handle->intr_vec);
> > +intr_handle->intr_vec = NULL;
> > +}
> > +
> >  dev->data->dev_link.link_status = ETH_LINK_DOWN;
> > ad->pf.adapter_stopped = 1;  } @@ -476,6 +781,10 @@ static const
> > struct eth_dev_ops ice_dcf_eth_dev_ops = {
> >  .tx_queue_setup          = ice_tx_queue_setup,
> >  .rx_queue_release        = ice_rx_queue_release,
> >  .tx_queue_release        = ice_tx_queue_release,
> > +.rx_queue_start          = ice_dcf_rx_queue_start,
> > +.tx_queue_start          = ice_dcf_tx_queue_start,
> > +.rx_queue_stop           = ice_dcf_rx_queue_stop,
> > +.tx_queue_stop           = ice_dcf_tx_queue_stop,
> >  .link_update             = ice_dcf_link_update,
> >  .stats_get               = ice_dcf_stats_get,
> >  .stats_reset             = ice_dcf_stats_reset,
> > --
> > 2.17.1
> 


^ permalink raw reply	[flat|nested] 65+ messages in thread

* Re: [dpdk-dev] [PATCH v1 12/12] doc: enable DCF datapath configuration
  2020-06-05 14:41   ` Ye Xiaolong
@ 2020-06-09  7:50     ` Xu, Ting
  0 siblings, 0 replies; 65+ messages in thread
From: Xu, Ting @ 2020-06-09  7:50 UTC (permalink / raw)
  To: Ye, Xiaolong; +Cc: dev, Zhang, Qi Z, Mcnamara, John, Kovacevic, Marko

Hi, Xiaolong

> -----Original Message-----
> From: Ye, Xiaolong <xiaolong.ye@intel.com>
> Sent: Friday, June 5, 2020 10:42 PM
> To: Xu, Ting <ting.xu@intel.com>
> Cc: dev@dpdk.org; Zhang, Qi Z <qi.z.zhang@intel.com>; Yang, Qiming
> <qiming.yang@intel.com>; Mcnamara, John <john.mcnamara@intel.com>;
> Kovacevic, Marko <marko.kovacevic@intel.com>
> Subject: Re: [dpdk-dev] [PATCH v1 12/12] doc: enable DCF datapath
> configuration
> 
> Hi, Ting
> 
> On 06/05, Ting Xu wrote:
> >Add doc for DCF datapath configuration in DPDK 20.08 release note.
> >
> 
> It'd be better to also add some document update in ice.rst.
> 
> Thanks,
> Xiaolong
> 

I find that there is no additional information need to be added in ice.rst for datapath configure.
For multiple VF and buildin recipe in DCF, I update ice.rst in their RFC patchset.

Thanks!

> >Signed-off-by: Ting Xu <ting.xu@intel.com>
> >---
> > doc/guides/rel_notes/release_20_08.rst | 5 +++++
> > 1 file changed, 5 insertions(+)
> >
> >diff --git a/doc/guides/rel_notes/release_20_08.rst
> b/doc/guides/rel_notes/release_20_08.rst
> >index 39064afbe..3cda6111c 100644
> >--- a/doc/guides/rel_notes/release_20_08.rst
> >+++ b/doc/guides/rel_notes/release_20_08.rst
> >@@ -56,6 +56,11 @@ New Features
> >      Also, make sure to start the actual text at the margin.
> >      =========================================================
> >
> >+* **Updated the Intel ice driver.**
> >+
> >+  Updated the Intel ice driver with new features and improvements,
> including:
> >+
> >+  * Added support for DCF datapath configuration.
> >
> > Removed Items
> > -------------
> >--
> >2.17.1
> >

^ permalink raw reply	[flat|nested] 65+ messages in thread

* Re: [dpdk-dev] [PATCH v1 09/12] net/ice: add queue start and stop for DCF
  2020-06-09  7:35     ` Xu, Ting
@ 2020-06-10  5:03       ` Yang, Qiming
  0 siblings, 0 replies; 65+ messages in thread
From: Yang, Qiming @ 2020-06-10  5:03 UTC (permalink / raw)
  To: Xu, Ting, dev; +Cc: Zhang, Qi Z, Mcnamara, John, Kovacevic, Marko, Ye, Xiaolong



> -----Original Message-----
> From: Xu, Ting <ting.xu@intel.com>
> Sent: Tuesday, June 9, 2020 15:35
> To: Yang, Qiming <qiming.yang@intel.com>; dev@dpdk.org
> Cc: Zhang, Qi Z <qi.z.zhang@intel.com>; Mcnamara, John
> <john.mcnamara@intel.com>; Kovacevic, Marko
> <marko.kovacevic@intel.com>; Ye, Xiaolong <xiaolong.ye@intel.com>
> Subject: RE: [PATCH v1 09/12] net/ice: add queue start and stop for DCF
> 
> Hi, Qiming,
> 
> > -----Original Message-----
> > From: Yang, Qiming <qiming.yang@intel.com>
> > Sent: Monday, June 8, 2020 3:36 PM
> > To: Xu, Ting <ting.xu@intel.com>; dev@dpdk.org
> > Cc: Zhang, Qi Z <qi.z.zhang@intel.com>; Mcnamara, John
> > <john.mcnamara@intel.com>; Kovacevic, Marko
> > <marko.kovacevic@intel.com>
> > Subject: RE: [PATCH v1 09/12] net/ice: add queue start and stop for
> > DCF
> >
> >
> >
> > > -----Original Message-----
> > > From: Xu, Ting <ting.xu@intel.com>
> > > Sent: Saturday, June 6, 2020 04:18
> > > To: dev@dpdk.org
> > > Cc: Zhang, Qi Z <qi.z.zhang@intel.com>; Yang, Qiming
> > > <qiming.yang@intel.com>; Mcnamara, John
> <john.mcnamara@intel.com>;
> > > Kovacevic, Marko <marko.kovacevic@intel.com>
> > > Subject: [PATCH v1 09/12] net/ice: add queue start and stop for DCF
> > >
> > > From: Qi Zhang <qi.z.zhang@intel.com>
> > >
> > > Add queue start and stop in DCF. Support queue enable and disable
> > > through virtual channel. Add support for Rx queue mbufs allocation
> > > and
> > queue reset.
> > >
> > > Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
> > > ---
> > >  drivers/net/ice/ice_dcf.c        |  57 ++++++
> > >  drivers/net/ice/ice_dcf.h        |   3 +-
> > >  drivers/net/ice/ice_dcf_ethdev.c | 309
> > > +++++++++++++++++++++++++++++++
> > >  3 files changed, 368 insertions(+), 1 deletion(-)
> > >
> >
> > Snip...
> >
> > > +}
> > > diff --git a/drivers/net/ice/ice_dcf.h b/drivers/net/ice/ice_dcf.h
> > > index
> > > 9470d1df7..68e1661c0 100644
> > > --- a/drivers/net/ice/ice_dcf.h
> > > +++ b/drivers/net/ice/ice_dcf.h
> > > @@ -70,5 +70,6 @@ void ice_dcf_uninit_hw(struct rte_eth_dev
> > > *eth_dev, struct ice_dcf_hw *hw);  int ice_dcf_init_rss(struct
> > > ice_dcf_hw *hw); int ice_dcf_configure_queues(struct ice_dcf_hw
> > > *hw);  int ice_dcf_config_irq_map(struct ice_dcf_hw *hw);
> > > -
> > > +int ice_dcf_switch_queue(struct ice_dcf_hw *hw, uint16_t qid, bool
> > > +rx, bool on); int ice_dcf_disable_queues(struct ice_dcf_hw *hw);
> > >  #endif /* _ICE_DCF_H_ */
> > > diff --git a/drivers/net/ice/ice_dcf_ethdev.c
> > > b/drivers/net/ice/ice_dcf_ethdev.c
> > > index 9605fb8ed..59113fc4b 100644
> > > --- a/drivers/net/ice/ice_dcf_ethdev.c
> > > +++ b/drivers/net/ice/ice_dcf_ethdev.c
> > > @@ -226,6 +226,259 @@ static int
> > > ice_dcf_config_rx_queues_irqs(struct
> > > rte_eth_dev *dev,
> > >  return 0;
> > >  }
> > >
> > .
> > > +static int
> > > +ice_dcf_start_queues(struct rte_eth_dev *dev) { struct ice_rx_queue
> > > +*rxq; struct ice_tx_queue *txq; int i;
> > > +
> > > +for (i = 0; i < dev->data->nb_tx_queues; i++) { txq =
> > > +dev->data->tx_queues[i]; if (txq->tx_deferred_start) continue; if
> > > +(ice_dcf_tx_queue_start(dev, i) != 0) { PMD_DRV_LOG(ERR, "Fail to
> > > +start queue %u", i); return -1;
> >
> > If queue start fail, should stop the queue already started
> >
> 
> This operation can only be seen in ice and i40e PF driver. In iavf or even
> earlier i40evf, they did not stop the already started queues when failed.
> I am not sure if this operation is suitable for DCF? Or we should not follow the
> current iavf, since it actually needs this modification to stop started queues
> as well?
> 

I think that's the correct behavior. We'd better fix the gap if iavf and i40evf not act as that.

> > > +}
> > > +}
> > > +
> > > +for (i = 0; i < dev->data->nb_rx_queues; i++) { rxq =
> > > +dev->data->rx_queues[i]; if (rxq->rx_deferred_start) continue; if
> > > +(ice_dcf_rx_queue_start(dev, i) != 0) { PMD_DRV_LOG(ERR, "Fail to
> > > +start queue %u", i); return -1; } }
> > > +
> > > +return 0;
> > > +}
> > > +
> > >  static int
> > >  ice_dcf_dev_start(struct rte_eth_dev *dev)  { @@ -266,20 +519,72 @@
> > > ice_dcf_dev_start(struct rte_eth_dev *dev)  return ret;  }
> > >
> > > +if (dev->data->dev_conf.intr_conf.rxq != 0) {
> > > +rte_intr_disable(intr_handle); rte_intr_enable(intr_handle); }
> > > +
> > > +ret = ice_dcf_start_queues(dev);
> > > +if (ret) {
> > > +PMD_DRV_LOG(ERR, "Failed to enable queues"); return ret; }
> > > +
> > >  dev->data->dev_link.link_status = ETH_LINK_UP;
> > >
> > >  return 0;
> > >  }
> > >
> > > +static void
> > > +ice_dcf_stop_queues(struct rte_eth_dev *dev) { struct
> > > +ice_dcf_adapter *ad = dev->data->dev_private; struct ice_dcf_hw *hw
> > > += &ad->real_hw; struct ice_rx_queue *rxq; struct ice_tx_queue *txq;
> > > +int ret, i;
> > > +
> > > +/* Stop All queues */
> > > +ret = ice_dcf_disable_queues(hw);
> > > +if (ret)
> > > +PMD_DRV_LOG(WARNING, "Fail to stop queues");
> > > +
> > > +for (i = 0; i < dev->data->nb_tx_queues; i++) { txq =
> > > +dev->data->tx_queues[i]; if (!txq) continue;
> > > +txq->tx_rel_mbufs(txq);
> > > +reset_tx_queue(txq);
> > > +dev->data->tx_queue_state[i] =
> > > RTE_ETH_QUEUE_STATE_STOPPED;
> > > +}
> > > +for (i = 0; i < dev->data->nb_rx_queues; i++) { rxq =
> > > +dev->data->rx_queues[i]; if (!rxq) continue;
> > > +rxq->rx_rel_mbufs(rxq);
> > > +reset_rx_queue(rxq);
> > > +dev->data->rx_queue_state[i] =
> > > RTE_ETH_QUEUE_STATE_STOPPED;
> > > +}
> > > +}
> > > +
> > >  static void
> > >  ice_dcf_dev_stop(struct rte_eth_dev *dev)  {  struct
> > > ice_dcf_adapter *dcf_ad = dev->data->dev_private;
> > > +struct rte_intr_handle *intr_handle = dev->intr_handle;
> > >  struct ice_adapter *ad = &dcf_ad->parent;
> > >
> > >  if (ad->pf.adapter_stopped == 1)
> > >  return;
> > >
> > > +ice_dcf_stop_queues(dev);
> > > +
> > > +rte_intr_efd_disable(intr_handle);
> > > +if (intr_handle->intr_vec) {
> > > +rte_free(intr_handle->intr_vec);
> > > +intr_handle->intr_vec = NULL;
> > > +}
> > > +
> > >  dev->data->dev_link.link_status = ETH_LINK_DOWN;
> > > ad->pf.adapter_stopped = 1;  } @@ -476,6 +781,10 @@ static const
> > > struct eth_dev_ops ice_dcf_eth_dev_ops = {
> > >  .tx_queue_setup          = ice_tx_queue_setup,
> > >  .rx_queue_release        = ice_rx_queue_release,
> > >  .tx_queue_release        = ice_tx_queue_release,
> > > +.rx_queue_start          = ice_dcf_rx_queue_start,
> > > +.tx_queue_start          = ice_dcf_tx_queue_start,
> > > +.rx_queue_stop           = ice_dcf_rx_queue_stop,
> > > +.tx_queue_stop           = ice_dcf_tx_queue_stop,
> > >  .link_update             = ice_dcf_link_update,
> > >  .stats_get               = ice_dcf_stats_get,
> > >  .stats_reset             = ice_dcf_stats_reset,
> > > --
> > > 2.17.1
> >
> 


^ permalink raw reply	[flat|nested] 65+ messages in thread

* [dpdk-dev] [PATCH v2 00/12] enable DCF datapath configuration
  2020-06-05 20:17 [dpdk-dev] [PATCH v1 00/12] enable DCF datapath configuration Ting Xu
                   ` (11 preceding siblings ...)
  2020-06-05 20:17 ` [dpdk-dev] [PATCH v1 12/12] doc: enable DCF datapath configuration Ting Xu
@ 2020-06-11 17:08 ` Ting Xu
  2020-06-11 17:08   ` [dpdk-dev] [PATCH v2 01/12] net/ice: init RSS and supported RXDID in DCF Ting Xu
                     ` (11 more replies)
  2020-06-19  8:50 ` [dpdk-dev] [PATCH v4 00/12] " Ting Xu
  2020-06-23  2:38 ` [dpdk-dev] [PATCH v5 00/12] " Ting Xu
  14 siblings, 12 replies; 65+ messages in thread
From: Ting Xu @ 2020-06-11 17:08 UTC (permalink / raw)
  To: dev; +Cc: xiaolong.ye, qi.z.zhang, qiming.yang, john.mcnamara, marko.kovacevic

This patchset adds support to configure DCF datapath, including
Rx/Tx queues setup, start and stop, device configuration, RSS
and flexible descriptor RXDID initialization and MAC filter setup.

Qi Zhang (11):
  net/ice: init RSS and supported RXDID in DCF
  net/ice: complete device info get in DCF
  net/ice: complete dev configure in DCF
  net/ice: complete queue setup in DCF
  net/ice: add stop flag for device start / stop
  net/ice: add Rx queue init in DCF
  net/ice: init RSS during DCF start
  net/ice: add queue config in DCF
  net/ice: add queue start and stop for DCF
  net/ice: enable stats for DCF
  net/ice: set MAC filter during dev start for DCF

Ting Xu (1):
  doc: enable DCF datapath configuration

 doc/guides/rel_notes/release_20_08.rst |   6 +
 drivers/net/ice/ice_dcf.c              | 408 ++++++++++++-
 drivers/net/ice/ice_dcf.h              |  17 +
 drivers/net/ice/ice_dcf_ethdev.c       | 771 +++++++++++++++++++++++--
 drivers/net/ice/ice_dcf_ethdev.h       |   3 -
 drivers/net/ice/ice_dcf_parent.c       |   8 +
 6 files changed, 1160 insertions(+), 53 deletions(-)

---

v2->v1:
Optimize coding style
Correct some return values
Add support to stop started queues when queue start failed

-- 
2.17.1


^ permalink raw reply	[flat|nested] 65+ messages in thread

* [dpdk-dev] [PATCH v2 01/12] net/ice: init RSS and supported RXDID in DCF
  2020-06-11 17:08 ` [dpdk-dev] [PATCH v2 00/12] " Ting Xu
@ 2020-06-11 17:08   ` Ting Xu
  2020-06-11 17:08   ` [dpdk-dev] [PATCH v2 02/12] net/ice: complete device info get " Ting Xu
                     ` (10 subsequent siblings)
  11 siblings, 0 replies; 65+ messages in thread
From: Ting Xu @ 2020-06-11 17:08 UTC (permalink / raw)
  To: dev; +Cc: xiaolong.ye, qi.z.zhang, qiming.yang, john.mcnamara, marko.kovacevic

From: Qi Zhang <qi.z.zhang@intel.com>

Enable RSS parameters initialization and get the supported
flexible descriptor RXDIDs bitmap from PF during DCF init.

Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Signed-off-by: Ting Xu <ting.xu@intel.com>
---
 drivers/net/ice/ice_dcf.c | 54 ++++++++++++++++++++++++++++++++++++++-
 drivers/net/ice/ice_dcf.h |  3 +++
 2 files changed, 56 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c
index 0cd5d1bf6..93fabd5f7 100644
--- a/drivers/net/ice/ice_dcf.c
+++ b/drivers/net/ice/ice_dcf.c
@@ -233,7 +233,7 @@ ice_dcf_get_vf_resource(struct ice_dcf_hw *hw)
 
 	caps = VIRTCHNL_VF_OFFLOAD_WB_ON_ITR | VIRTCHNL_VF_OFFLOAD_RX_POLLING |
 	       VIRTCHNL_VF_CAP_ADV_LINK_SPEED | VIRTCHNL_VF_CAP_DCF |
-	       VF_BASE_MODE_OFFLOADS;
+	       VF_BASE_MODE_OFFLOADS | VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC;
 
 	err = ice_dcf_send_cmd_req_no_irq(hw, VIRTCHNL_OP_GET_VF_RESOURCES,
 					  (uint8_t *)&caps, sizeof(caps));
@@ -547,6 +547,30 @@ ice_dcf_handle_vsi_update_event(struct ice_dcf_hw *hw)
 	return err;
 }
 
+static int
+ice_dcf_get_supported_rxdid(struct ice_dcf_hw *hw)
+{
+	int err;
+
+	err = ice_dcf_send_cmd_req_no_irq(hw,
+					  VIRTCHNL_OP_GET_SUPPORTED_RXDIDS,
+					  NULL, 0);
+	if (err) {
+		PMD_INIT_LOG(ERR, "Failed to send OP_GET_SUPPORTED_RXDIDS");
+		return -1;
+	}
+
+	err = ice_dcf_recv_cmd_rsp_no_irq(hw, VIRTCHNL_OP_GET_SUPPORTED_RXDIDS,
+					  (uint8_t *)&hw->supported_rxdid,
+					  sizeof(uint64_t), NULL);
+	if (err) {
+		PMD_INIT_LOG(ERR, "Failed to get response of OP_GET_SUPPORTED_RXDIDS");
+		return -1;
+	}
+
+	return 0;
+}
+
 int
 ice_dcf_init_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw)
 {
@@ -620,6 +644,29 @@ ice_dcf_init_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw)
 		goto err_alloc;
 	}
 
+	/* Allocate memory for RSS info */
+	if (hw->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_RSS_PF) {
+		hw->rss_key = rte_zmalloc(NULL,
+					  hw->vf_res->rss_key_size, 0);
+		if (!hw->rss_key) {
+			PMD_INIT_LOG(ERR, "unable to allocate rss_key memory");
+			goto err_alloc;
+		}
+		hw->rss_lut = rte_zmalloc("rss_lut",
+					  hw->vf_res->rss_lut_size, 0);
+		if (!hw->rss_lut) {
+			PMD_INIT_LOG(ERR, "unable to allocate rss_lut memory");
+			goto err_rss;
+		}
+	}
+
+	if (hw->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC) {
+		if (ice_dcf_get_supported_rxdid(hw) != 0) {
+			PMD_INIT_LOG(ERR, "failed to do get supported rxdid");
+			goto err_rss;
+		}
+	}
+
 	hw->eth_dev = eth_dev;
 	rte_intr_callback_register(&pci_dev->intr_handle,
 				   ice_dcf_dev_interrupt_handler, hw);
@@ -628,6 +675,9 @@ ice_dcf_init_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw)
 
 	return 0;
 
+err_rss:
+	rte_free(hw->rss_key);
+	rte_free(hw->rss_lut);
 err_alloc:
 	rte_free(hw->vf_res);
 err_api:
@@ -655,4 +705,6 @@ ice_dcf_uninit_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw)
 	rte_free(hw->arq_buf);
 	rte_free(hw->vf_vsi_map);
 	rte_free(hw->vf_res);
+	rte_free(hw->rss_lut);
+	rte_free(hw->rss_key);
 }
diff --git a/drivers/net/ice/ice_dcf.h b/drivers/net/ice/ice_dcf.h
index d2e447b48..152266e3c 100644
--- a/drivers/net/ice/ice_dcf.h
+++ b/drivers/net/ice/ice_dcf.h
@@ -50,6 +50,9 @@ struct ice_dcf_hw {
 	uint16_t vsi_id;
 
 	struct rte_eth_dev *eth_dev;
+	uint8_t *rss_lut;
+	uint8_t *rss_key;
+	uint64_t supported_rxdid;
 };
 
 int ice_dcf_execute_virtchnl_cmd(struct ice_dcf_hw *hw,
-- 
2.17.1


^ permalink raw reply	[flat|nested] 65+ messages in thread

* [dpdk-dev] [PATCH v2 02/12] net/ice: complete device info get in DCF
  2020-06-11 17:08 ` [dpdk-dev] [PATCH v2 00/12] " Ting Xu
  2020-06-11 17:08   ` [dpdk-dev] [PATCH v2 01/12] net/ice: init RSS and supported RXDID in DCF Ting Xu
@ 2020-06-11 17:08   ` Ting Xu
  2020-06-11 17:08   ` [dpdk-dev] [PATCH v2 03/12] net/ice: complete dev configure " Ting Xu
                     ` (9 subsequent siblings)
  11 siblings, 0 replies; 65+ messages in thread
From: Ting Xu @ 2020-06-11 17:08 UTC (permalink / raw)
  To: dev; +Cc: xiaolong.ye, qi.z.zhang, qiming.yang, john.mcnamara, marko.kovacevic

From: Qi Zhang <qi.z.zhang@intel.com>

Add support to get complete device information for DCF, including
Rx/Tx offload capabilities and default configuration.

Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Signed-off-by: Ting Xu <ting.xu@intel.com>
---
 drivers/net/ice/ice_dcf_ethdev.c | 72 ++++++++++++++++++++++++++++++--
 1 file changed, 69 insertions(+), 3 deletions(-)

diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index e5ba1a61f..7f24ef81a 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -24,6 +24,7 @@
 
 #include "ice_generic_flow.h"
 #include "ice_dcf_ethdev.h"
+#include "ice_rxtx.h"
 
 static uint16_t
 ice_dcf_recv_pkts(__rte_unused void *rx_queue,
@@ -66,11 +67,76 @@ ice_dcf_dev_info_get(struct rte_eth_dev *dev,
 		     struct rte_eth_dev_info *dev_info)
 {
 	struct ice_dcf_adapter *adapter = dev->data->dev_private;
+	struct ice_dcf_hw *hw = &adapter->real_hw;
 
 	dev_info->max_mac_addrs = 1;
-	dev_info->max_rx_pktlen = (uint32_t)-1;
-	dev_info->max_rx_queues = RTE_DIM(adapter->rxqs);
-	dev_info->max_tx_queues = RTE_DIM(adapter->txqs);
+	dev_info->max_rx_queues = hw->vsi_res->num_queue_pairs;
+	dev_info->max_tx_queues = hw->vsi_res->num_queue_pairs;
+	dev_info->min_rx_bufsize = ICE_BUF_SIZE_MIN;
+	dev_info->max_rx_pktlen = ICE_FRAME_SIZE_MAX;
+	dev_info->hash_key_size = hw->vf_res->rss_key_size;
+	dev_info->reta_size = hw->vf_res->rss_lut_size;
+	dev_info->flow_type_rss_offloads = ICE_RSS_OFFLOAD_ALL;
+
+	dev_info->rx_offload_capa =
+		DEV_RX_OFFLOAD_VLAN_STRIP |
+		DEV_RX_OFFLOAD_QINQ_STRIP |
+		DEV_RX_OFFLOAD_IPV4_CKSUM |
+		DEV_RX_OFFLOAD_UDP_CKSUM |
+		DEV_RX_OFFLOAD_TCP_CKSUM |
+		DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+		DEV_RX_OFFLOAD_SCATTER |
+		DEV_RX_OFFLOAD_JUMBO_FRAME |
+		DEV_RX_OFFLOAD_VLAN_FILTER |
+		DEV_RX_OFFLOAD_RSS_HASH;
+	dev_info->tx_offload_capa =
+		DEV_TX_OFFLOAD_VLAN_INSERT |
+		DEV_TX_OFFLOAD_QINQ_INSERT |
+		DEV_TX_OFFLOAD_IPV4_CKSUM |
+		DEV_TX_OFFLOAD_UDP_CKSUM |
+		DEV_TX_OFFLOAD_TCP_CKSUM |
+		DEV_TX_OFFLOAD_SCTP_CKSUM |
+		DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+		DEV_TX_OFFLOAD_TCP_TSO |
+		DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
+		DEV_TX_OFFLOAD_GRE_TNL_TSO |
+		DEV_TX_OFFLOAD_IPIP_TNL_TSO |
+		DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
+		DEV_TX_OFFLOAD_MULTI_SEGS;
+
+	dev_info->default_rxconf = (struct rte_eth_rxconf) {
+		.rx_thresh = {
+			.pthresh = ICE_DEFAULT_RX_PTHRESH,
+			.hthresh = ICE_DEFAULT_RX_HTHRESH,
+			.wthresh = ICE_DEFAULT_RX_WTHRESH,
+		},
+		.rx_free_thresh = ICE_DEFAULT_RX_FREE_THRESH,
+		.rx_drop_en = 0,
+		.offloads = 0,
+	};
+
+	dev_info->default_txconf = (struct rte_eth_txconf) {
+		.tx_thresh = {
+			.pthresh = ICE_DEFAULT_TX_PTHRESH,
+			.hthresh = ICE_DEFAULT_TX_HTHRESH,
+			.wthresh = ICE_DEFAULT_TX_WTHRESH,
+		},
+		.tx_free_thresh = ICE_DEFAULT_TX_FREE_THRESH,
+		.tx_rs_thresh = ICE_DEFAULT_TX_RSBIT_THRESH,
+		.offloads = 0,
+	};
+
+	dev_info->rx_desc_lim = (struct rte_eth_desc_lim) {
+		.nb_max = ICE_MAX_RING_DESC,
+		.nb_min = ICE_MIN_RING_DESC,
+		.nb_align = ICE_ALIGN_RING_DESC,
+	};
+
+	dev_info->tx_desc_lim = (struct rte_eth_desc_lim) {
+		.nb_max = ICE_MAX_RING_DESC,
+		.nb_min = ICE_MIN_RING_DESC,
+		.nb_align = ICE_ALIGN_RING_DESC,
+	};
 
 	return 0;
 }
-- 
2.17.1


^ permalink raw reply	[flat|nested] 65+ messages in thread

* [dpdk-dev] [PATCH v2 03/12] net/ice: complete dev configure in DCF
  2020-06-11 17:08 ` [dpdk-dev] [PATCH v2 00/12] " Ting Xu
  2020-06-11 17:08   ` [dpdk-dev] [PATCH v2 01/12] net/ice: init RSS and supported RXDID in DCF Ting Xu
  2020-06-11 17:08   ` [dpdk-dev] [PATCH v2 02/12] net/ice: complete device info get " Ting Xu
@ 2020-06-11 17:08   ` Ting Xu
  2020-06-11 17:08   ` [dpdk-dev] [PATCH v2 04/12] net/ice: complete queue setup " Ting Xu
                     ` (8 subsequent siblings)
  11 siblings, 0 replies; 65+ messages in thread
From: Ting Xu @ 2020-06-11 17:08 UTC (permalink / raw)
  To: dev; +Cc: xiaolong.ye, qi.z.zhang, qiming.yang, john.mcnamara, marko.kovacevic

From: Qi Zhang <qi.z.zhang@intel.com>

Enable device configuration function in DCF.

Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Signed-off-by: Ting Xu <ting.xu@intel.com>
---
 drivers/net/ice/ice_dcf_ethdev.c | 11 ++++++++++-
 1 file changed, 10 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index 7f24ef81a..41d130cd9 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -57,8 +57,17 @@ ice_dcf_dev_stop(struct rte_eth_dev *dev)
 }
 
 static int
-ice_dcf_dev_configure(__rte_unused struct rte_eth_dev *dev)
+ice_dcf_dev_configure(struct rte_eth_dev *dev)
 {
+	struct ice_dcf_adapter *dcf_ad = dev->data->dev_private;
+	struct ice_adapter *ad = &dcf_ad->parent;
+
+	ad->rx_bulk_alloc_allowed = true;
+	ad->tx_simple_allowed = true;
+
+	if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
+		dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+
 	return 0;
 }
 
-- 
2.17.1


^ permalink raw reply	[flat|nested] 65+ messages in thread

* [dpdk-dev] [PATCH v2 04/12] net/ice: complete queue setup in DCF
  2020-06-11 17:08 ` [dpdk-dev] [PATCH v2 00/12] " Ting Xu
                     ` (2 preceding siblings ...)
  2020-06-11 17:08   ` [dpdk-dev] [PATCH v2 03/12] net/ice: complete dev configure " Ting Xu
@ 2020-06-11 17:08   ` Ting Xu
  2020-06-11 17:08   ` [dpdk-dev] [PATCH v2 05/12] net/ice: add stop flag for device start / stop Ting Xu
                     ` (7 subsequent siblings)
  11 siblings, 0 replies; 65+ messages in thread
From: Ting Xu @ 2020-06-11 17:08 UTC (permalink / raw)
  To: dev; +Cc: xiaolong.ye, qi.z.zhang, qiming.yang, john.mcnamara, marko.kovacevic

From: Qi Zhang <qi.z.zhang@intel.com>

Delete original DCF queue setup functions and use ice
queue setup and release functions instead.

Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Signed-off-by: Ting Xu <ting.xu@intel.com>
---
 drivers/net/ice/ice_dcf_ethdev.c | 42 +++-----------------------------
 drivers/net/ice/ice_dcf_ethdev.h |  3 ---
 drivers/net/ice/ice_dcf_parent.c |  7 ++++++
 3 files changed, 11 insertions(+), 41 deletions(-)

diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index 41d130cd9..0c3013228 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -231,11 +231,6 @@ ice_dcf_dev_close(struct rte_eth_dev *dev)
 	ice_dcf_uninit_hw(dev, &adapter->real_hw);
 }
 
-static void
-ice_dcf_queue_release(__rte_unused void *q)
-{
-}
-
 static int
 ice_dcf_link_update(__rte_unused struct rte_eth_dev *dev,
 		    __rte_unused int wait_to_complete)
@@ -243,45 +238,16 @@ ice_dcf_link_update(__rte_unused struct rte_eth_dev *dev,
 	return 0;
 }
 
-static int
-ice_dcf_rx_queue_setup(struct rte_eth_dev *dev,
-		       uint16_t rx_queue_id,
-		       __rte_unused uint16_t nb_rx_desc,
-		       __rte_unused unsigned int socket_id,
-		       __rte_unused const struct rte_eth_rxconf *rx_conf,
-		       __rte_unused struct rte_mempool *mb_pool)
-{
-	struct ice_dcf_adapter *adapter = dev->data->dev_private;
-
-	dev->data->rx_queues[rx_queue_id] = &adapter->rxqs[rx_queue_id];
-
-	return 0;
-}
-
-static int
-ice_dcf_tx_queue_setup(struct rte_eth_dev *dev,
-		       uint16_t tx_queue_id,
-		       __rte_unused uint16_t nb_tx_desc,
-		       __rte_unused unsigned int socket_id,
-		       __rte_unused const struct rte_eth_txconf *tx_conf)
-{
-	struct ice_dcf_adapter *adapter = dev->data->dev_private;
-
-	dev->data->tx_queues[tx_queue_id] = &adapter->txqs[tx_queue_id];
-
-	return 0;
-}
-
 static const struct eth_dev_ops ice_dcf_eth_dev_ops = {
 	.dev_start               = ice_dcf_dev_start,
 	.dev_stop                = ice_dcf_dev_stop,
 	.dev_close               = ice_dcf_dev_close,
 	.dev_configure           = ice_dcf_dev_configure,
 	.dev_infos_get           = ice_dcf_dev_info_get,
-	.rx_queue_setup          = ice_dcf_rx_queue_setup,
-	.tx_queue_setup          = ice_dcf_tx_queue_setup,
-	.rx_queue_release        = ice_dcf_queue_release,
-	.tx_queue_release        = ice_dcf_queue_release,
+	.rx_queue_setup          = ice_rx_queue_setup,
+	.tx_queue_setup          = ice_tx_queue_setup,
+	.rx_queue_release        = ice_rx_queue_release,
+	.tx_queue_release        = ice_tx_queue_release,
 	.link_update             = ice_dcf_link_update,
 	.stats_get               = ice_dcf_stats_get,
 	.stats_reset             = ice_dcf_stats_reset,
diff --git a/drivers/net/ice/ice_dcf_ethdev.h b/drivers/net/ice/ice_dcf_ethdev.h
index e60e808d8..b54528bea 100644
--- a/drivers/net/ice/ice_dcf_ethdev.h
+++ b/drivers/net/ice/ice_dcf_ethdev.h
@@ -19,10 +19,7 @@ struct ice_dcf_queue {
 
 struct ice_dcf_adapter {
 	struct ice_adapter parent; /* Must be first */
-
 	struct ice_dcf_hw real_hw;
-	struct ice_dcf_queue rxqs[ICE_DCF_MAX_RINGS];
-	struct ice_dcf_queue txqs[ICE_DCF_MAX_RINGS];
 };
 
 void ice_dcf_handle_pf_event_msg(struct ice_dcf_hw *dcf_hw,
diff --git a/drivers/net/ice/ice_dcf_parent.c b/drivers/net/ice/ice_dcf_parent.c
index d13e19d78..322a5273f 100644
--- a/drivers/net/ice/ice_dcf_parent.c
+++ b/drivers/net/ice/ice_dcf_parent.c
@@ -335,6 +335,13 @@ ice_dcf_init_parent_adapter(struct rte_eth_dev *eth_dev)
 	parent_adapter->eth_dev = eth_dev;
 	parent_adapter->pf.adapter = parent_adapter;
 	parent_adapter->pf.dev_data = eth_dev->data;
+	/* create a dummy main_vsi */
+	parent_adapter->pf.main_vsi =
+		rte_zmalloc(NULL, sizeof(struct ice_vsi), 0);
+	if (!parent_adapter->pf.main_vsi)
+		return -ENOMEM;
+	parent_adapter->pf.main_vsi->adapter = parent_adapter;
+
 	parent_hw->back = parent_adapter;
 	parent_hw->mac_type = ICE_MAC_GENERIC;
 	parent_hw->vendor_id = ICE_INTEL_VENDOR_ID;
-- 
2.17.1


^ permalink raw reply	[flat|nested] 65+ messages in thread

* [dpdk-dev] [PATCH v2 05/12] net/ice: add stop flag for device start / stop
  2020-06-11 17:08 ` [dpdk-dev] [PATCH v2 00/12] " Ting Xu
                     ` (3 preceding siblings ...)
  2020-06-11 17:08   ` [dpdk-dev] [PATCH v2 04/12] net/ice: complete queue setup " Ting Xu
@ 2020-06-11 17:08   ` Ting Xu
  2020-06-11 17:08   ` [dpdk-dev] [PATCH v2 06/12] net/ice: add Rx queue init in DCF Ting Xu
                     ` (6 subsequent siblings)
  11 siblings, 0 replies; 65+ messages in thread
From: Ting Xu @ 2020-06-11 17:08 UTC (permalink / raw)
  To: dev; +Cc: xiaolong.ye, qi.z.zhang, qiming.yang, john.mcnamara, marko.kovacevic

From: Qi Zhang <qi.z.zhang@intel.com>

Add stop flag for DCF device start and stop.

Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Signed-off-by: Ting Xu <ting.xu@intel.com>
---
 drivers/net/ice/ice_dcf_ethdev.c | 12 ++++++++++++
 drivers/net/ice/ice_dcf_parent.c |  1 +
 2 files changed, 13 insertions(+)

diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index 0c3013228..ff2cab054 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -45,6 +45,11 @@ ice_dcf_xmit_pkts(__rte_unused void *tx_queue,
 static int
 ice_dcf_dev_start(struct rte_eth_dev *dev)
 {
+	struct ice_dcf_adapter *dcf_ad = dev->data->dev_private;
+	struct ice_adapter *ad = &dcf_ad->parent;
+
+	ad->pf.adapter_stopped = 0;
+
 	dev->data->dev_link.link_status = ETH_LINK_UP;
 
 	return 0;
@@ -53,7 +58,14 @@ ice_dcf_dev_start(struct rte_eth_dev *dev)
 static void
 ice_dcf_dev_stop(struct rte_eth_dev *dev)
 {
+	struct ice_dcf_adapter *dcf_ad = dev->data->dev_private;
+	struct ice_adapter *ad = &dcf_ad->parent;
+
+	if (ad->pf.adapter_stopped == 1)
+		return;
+
 	dev->data->dev_link.link_status = ETH_LINK_DOWN;
+	ad->pf.adapter_stopped = 1;
 }
 
 static int
diff --git a/drivers/net/ice/ice_dcf_parent.c b/drivers/net/ice/ice_dcf_parent.c
index 322a5273f..c5dfdd36e 100644
--- a/drivers/net/ice/ice_dcf_parent.c
+++ b/drivers/net/ice/ice_dcf_parent.c
@@ -341,6 +341,7 @@ ice_dcf_init_parent_adapter(struct rte_eth_dev *eth_dev)
 	if (!parent_adapter->pf.main_vsi)
 		return -ENOMEM;
 	parent_adapter->pf.main_vsi->adapter = parent_adapter;
+	parent_adapter->pf.adapter_stopped = 1;
 
 	parent_hw->back = parent_adapter;
 	parent_hw->mac_type = ICE_MAC_GENERIC;
-- 
2.17.1


^ permalink raw reply	[flat|nested] 65+ messages in thread

* [dpdk-dev] [PATCH v2 06/12] net/ice: add Rx queue init in DCF
  2020-06-11 17:08 ` [dpdk-dev] [PATCH v2 00/12] " Ting Xu
                     ` (4 preceding siblings ...)
  2020-06-11 17:08   ` [dpdk-dev] [PATCH v2 05/12] net/ice: add stop flag for device start / stop Ting Xu
@ 2020-06-11 17:08   ` Ting Xu
  2020-06-11 17:08   ` [dpdk-dev] [PATCH v2 07/12] net/ice: init RSS during DCF start Ting Xu
                     ` (5 subsequent siblings)
  11 siblings, 0 replies; 65+ messages in thread
From: Ting Xu @ 2020-06-11 17:08 UTC (permalink / raw)
  To: dev; +Cc: xiaolong.ye, qi.z.zhang, qiming.yang, john.mcnamara, marko.kovacevic

From: Qi Zhang <qi.z.zhang@intel.com>

Enable Rx queues initialization during device start in DCF.

Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Signed-off-by: Ting Xu <ting.xu@intel.com>
---
 drivers/net/ice/ice_dcf.h        |  1 +
 drivers/net/ice/ice_dcf_ethdev.c | 83 ++++++++++++++++++++++++++++++++
 2 files changed, 84 insertions(+)

diff --git a/drivers/net/ice/ice_dcf.h b/drivers/net/ice/ice_dcf.h
index 152266e3c..dcb2a0283 100644
--- a/drivers/net/ice/ice_dcf.h
+++ b/drivers/net/ice/ice_dcf.h
@@ -53,6 +53,7 @@ struct ice_dcf_hw {
 	uint8_t *rss_lut;
 	uint8_t *rss_key;
 	uint64_t supported_rxdid;
+	uint16_t num_queue_pairs;
 };
 
 int ice_dcf_execute_virtchnl_cmd(struct ice_dcf_hw *hw,
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index ff2cab054..6d0f93ca7 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -42,14 +42,97 @@ ice_dcf_xmit_pkts(__rte_unused void *tx_queue,
 	return 0;
 }
 
+static int
+ice_dcf_init_rxq(struct rte_eth_dev *dev, struct ice_rx_queue *rxq)
+{
+	struct ice_dcf_adapter *dcf_ad = dev->data->dev_private;
+	struct rte_eth_dev_data *dev_data = dev->data;
+	struct iavf_hw *hw = &dcf_ad->real_hw.avf;
+	uint16_t buf_size, max_pkt_len, len;
+
+	buf_size = rte_pktmbuf_data_room_size(rxq->mp) - RTE_PKTMBUF_HEADROOM;
+	rxq->rx_hdr_len = 0;
+	rxq->rx_buf_len = RTE_ALIGN(buf_size, (1 << ICE_RLAN_CTX_DBUF_S));
+	len = ICE_SUPPORT_CHAIN_NUM * rxq->rx_buf_len;
+	max_pkt_len = RTE_MIN(len, dev->data->dev_conf.rxmode.max_rx_pkt_len);
+
+	/* Check if the jumbo frame and maximum packet length are set
+	 * correctly.
+	 */
+	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+		if (max_pkt_len <= RTE_ETHER_MAX_LEN ||
+		    max_pkt_len > ICE_FRAME_SIZE_MAX) {
+			PMD_DRV_LOG(ERR, "maximum packet length must be "
+				    "larger than %u and smaller than %u, "
+				    "as jumbo frame is enabled",
+				    (uint32_t)RTE_ETHER_MAX_LEN,
+				    (uint32_t)ICE_FRAME_SIZE_MAX);
+			return -EINVAL;
+		}
+	} else {
+		if (max_pkt_len < RTE_ETHER_MIN_LEN ||
+		    max_pkt_len > RTE_ETHER_MAX_LEN) {
+			PMD_DRV_LOG(ERR, "maximum packet length must be "
+				    "larger than %u and smaller than %u, "
+				    "as jumbo frame is disabled",
+				    (uint32_t)RTE_ETHER_MIN_LEN,
+				    (uint32_t)RTE_ETHER_MAX_LEN);
+			return -EINVAL;
+		}
+	}
+
+	rxq->max_pkt_len = max_pkt_len;
+	if ((dev_data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_SCATTER) ||
+	    (rxq->max_pkt_len + 2 * ICE_VLAN_TAG_SIZE) > buf_size) {
+		dev_data->scattered_rx = 1;
+	}
+	rxq->qrx_tail = hw->hw_addr + IAVF_QRX_TAIL1(rxq->queue_id);
+	IAVF_PCI_REG_WRITE(rxq->qrx_tail, rxq->nb_rx_desc - 1);
+	IAVF_WRITE_FLUSH(hw);
+
+	return 0;
+}
+
+static int
+ice_dcf_init_rx_queues(struct rte_eth_dev *dev)
+{
+	struct ice_rx_queue **rxq =
+		(struct ice_rx_queue **)dev->data->rx_queues;
+	int i, ret;
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		if (!rxq[i] || !rxq[i]->q_set)
+			continue;
+		ret = ice_dcf_init_rxq(dev, rxq[i]);
+		if (ret)
+			return ret;
+	}
+
+	ice_set_rx_function(dev);
+	ice_set_tx_function(dev);
+
+	return 0;
+}
+
 static int
 ice_dcf_dev_start(struct rte_eth_dev *dev)
 {
 	struct ice_dcf_adapter *dcf_ad = dev->data->dev_private;
 	struct ice_adapter *ad = &dcf_ad->parent;
+	struct ice_dcf_hw *hw = &dcf_ad->real_hw;
+	int ret;
 
 	ad->pf.adapter_stopped = 0;
 
+	hw->num_queue_pairs = RTE_MAX(dev->data->nb_rx_queues,
+				      dev->data->nb_tx_queues);
+
+	ret = ice_dcf_init_rx_queues(dev);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "Fail to init queues");
+		return ret;
+	}
+
 	dev->data->dev_link.link_status = ETH_LINK_UP;
 
 	return 0;
-- 
2.17.1


^ permalink raw reply	[flat|nested] 65+ messages in thread

* [dpdk-dev] [PATCH v2 07/12] net/ice: init RSS during DCF start
  2020-06-11 17:08 ` [dpdk-dev] [PATCH v2 00/12] " Ting Xu
                     ` (5 preceding siblings ...)
  2020-06-11 17:08   ` [dpdk-dev] [PATCH v2 06/12] net/ice: add Rx queue init in DCF Ting Xu
@ 2020-06-11 17:08   ` Ting Xu
  2020-06-11 17:08   ` [dpdk-dev] [PATCH v2 08/12] net/ice: add queue config in DCF Ting Xu
                     ` (4 subsequent siblings)
  11 siblings, 0 replies; 65+ messages in thread
From: Ting Xu @ 2020-06-11 17:08 UTC (permalink / raw)
  To: dev; +Cc: xiaolong.ye, qi.z.zhang, qiming.yang, john.mcnamara, marko.kovacevic

From: Qi Zhang <qi.z.zhang@intel.com>

Enable RSS initialization during DCF start. Add RSS LUT and
RSS key configuration functions.

Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Signed-off-by: Ting Xu <ting.xu@intel.com>
---
 drivers/net/ice/ice_dcf.c        | 117 +++++++++++++++++++++++++++++++
 drivers/net/ice/ice_dcf.h        |   1 +
 drivers/net/ice/ice_dcf_ethdev.c |   8 +++
 3 files changed, 126 insertions(+)

diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c
index 93fabd5f7..f285323dd 100644
--- a/drivers/net/ice/ice_dcf.c
+++ b/drivers/net/ice/ice_dcf.c
@@ -708,3 +708,120 @@ ice_dcf_uninit_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw)
 	rte_free(hw->rss_lut);
 	rte_free(hw->rss_key);
 }
+
+static int
+ice_dcf_configure_rss_key(struct ice_dcf_hw *hw)
+{
+	struct virtchnl_rss_key *rss_key;
+	struct dcf_virtchnl_cmd args;
+	int len, err;
+
+	len = sizeof(*rss_key) + hw->vf_res->rss_key_size - 1;
+	rss_key = rte_zmalloc("rss_key", len, 0);
+	if (!rss_key)
+		return -ENOMEM;
+
+	rss_key->vsi_id = hw->vsi_res->vsi_id;
+	rss_key->key_len = hw->vf_res->rss_key_size;
+	rte_memcpy(rss_key->key, hw->rss_key, hw->vf_res->rss_key_size);
+
+	args.v_op = VIRTCHNL_OP_CONFIG_RSS_KEY;
+	args.req_msglen = len;
+	args.req_msg = (uint8_t *)rss_key;
+	args.rsp_msglen = 0;
+	args.rsp_buflen = 0;
+	args.rsp_msgbuf = NULL;
+	args.pending = 0;
+
+	err = ice_dcf_execute_virtchnl_cmd(hw, &args);
+	if (err)
+		PMD_INIT_LOG(ERR, "Failed to execute OP_CONFIG_RSS_KEY");
+
+	rte_free(rss_key);
+	return err;
+}
+
+static int
+ice_dcf_configure_rss_lut(struct ice_dcf_hw *hw)
+{
+	struct virtchnl_rss_lut *rss_lut;
+	struct dcf_virtchnl_cmd args;
+	int len, err;
+
+	len = sizeof(*rss_lut) + hw->vf_res->rss_lut_size - 1;
+	rss_lut = rte_zmalloc("rss_lut", len, 0);
+	if (!rss_lut)
+		return -ENOMEM;
+
+	rss_lut->vsi_id = hw->vsi_res->vsi_id;
+	rss_lut->lut_entries = hw->vf_res->rss_lut_size;
+	rte_memcpy(rss_lut->lut, hw->rss_lut, hw->vf_res->rss_lut_size);
+
+	args.v_op = VIRTCHNL_OP_CONFIG_RSS_LUT;
+	args.req_msglen = len;
+	args.req_msg = (uint8_t *)rss_lut;
+	args.rsp_msglen = 0;
+	args.rsp_buflen = 0;
+	args.rsp_msgbuf = NULL;
+	args.pending = 0;
+
+	err = ice_dcf_execute_virtchnl_cmd(hw, &args);
+	if (err)
+		PMD_INIT_LOG(ERR, "Failed to execute OP_CONFIG_RSS_LUT");
+
+	rte_free(rss_lut);
+	return err;
+}
+
+int
+ice_dcf_init_rss(struct ice_dcf_hw *hw)
+{
+	struct rte_eth_dev *dev = hw->eth_dev;
+	struct rte_eth_rss_conf *rss_conf;
+	uint8_t i, j, nb_q;
+	int ret;
+
+	rss_conf = &dev->data->dev_conf.rx_adv_conf.rss_conf;
+	nb_q = dev->data->nb_rx_queues;
+
+	if (!(hw->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_RSS_PF)) {
+		PMD_DRV_LOG(DEBUG, "RSS is not supported");
+		return -ENOTSUP;
+	}
+	if (dev->data->dev_conf.rxmode.mq_mode != ETH_MQ_RX_RSS) {
+		PMD_DRV_LOG(WARNING, "RSS is enabled by PF by default");
+		/* set all lut items to default queue */
+		memset(hw->rss_lut, 0, hw->vf_res->rss_lut_size);
+		return ice_dcf_configure_rss_lut(hw);
+	}
+
+	/* In IAVF, RSS enablement is set by PF driver. It is not supported
+	 * to set based on rss_conf->rss_hf.
+	 */
+
+	/* configure RSS key */
+	if (!rss_conf->rss_key)
+		/* Calculate the default hash key */
+		for (i = 0; i < hw->vf_res->rss_key_size; i++)
+			hw->rss_key[i] = (uint8_t)rte_rand();
+	else
+		rte_memcpy(hw->rss_key, rss_conf->rss_key,
+			   RTE_MIN(rss_conf->rss_key_len,
+				   hw->vf_res->rss_key_size));
+
+	/* init RSS LUT table */
+	for (i = 0, j = 0; i < hw->vf_res->rss_lut_size; i++, j++) {
+		if (j >= nb_q)
+			j = 0;
+		hw->rss_lut[i] = j;
+	}
+	/* send virtchnnl ops to configure rss*/
+	ret = ice_dcf_configure_rss_lut(hw);
+	if (ret)
+		return ret;
+	ret = ice_dcf_configure_rss_key(hw);
+	if (ret)
+		return ret;
+
+	return 0;
+}
diff --git a/drivers/net/ice/ice_dcf.h b/drivers/net/ice/ice_dcf.h
index dcb2a0283..eea4b286b 100644
--- a/drivers/net/ice/ice_dcf.h
+++ b/drivers/net/ice/ice_dcf.h
@@ -63,5 +63,6 @@ int ice_dcf_send_aq_cmd(void *dcf_hw, struct ice_aq_desc *desc,
 int ice_dcf_handle_vsi_update_event(struct ice_dcf_hw *hw);
 int ice_dcf_init_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw);
 void ice_dcf_uninit_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw);
+int ice_dcf_init_rss(struct ice_dcf_hw *hw);
 
 #endif /* _ICE_DCF_H_ */
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index 6d0f93ca7..e021d779a 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -133,6 +133,14 @@ ice_dcf_dev_start(struct rte_eth_dev *dev)
 		return ret;
 	}
 
+	if (hw->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_RSS_PF) {
+		ret = ice_dcf_init_rss(hw);
+		if (ret) {
+			PMD_DRV_LOG(ERR, "Failed to configure RSS");
+			return ret;
+		}
+	}
+
 	dev->data->dev_link.link_status = ETH_LINK_UP;
 
 	return 0;
-- 
2.17.1


^ permalink raw reply	[flat|nested] 65+ messages in thread

* [dpdk-dev] [PATCH v2 08/12] net/ice: add queue config in DCF
  2020-06-11 17:08 ` [dpdk-dev] [PATCH v2 00/12] " Ting Xu
                     ` (6 preceding siblings ...)
  2020-06-11 17:08   ` [dpdk-dev] [PATCH v2 07/12] net/ice: init RSS during DCF start Ting Xu
@ 2020-06-11 17:08   ` Ting Xu
  2020-06-11 17:08   ` [dpdk-dev] [PATCH v2 09/12] net/ice: add queue start and stop for DCF Ting Xu
                     ` (3 subsequent siblings)
  11 siblings, 0 replies; 65+ messages in thread
From: Ting Xu @ 2020-06-11 17:08 UTC (permalink / raw)
  To: dev; +Cc: xiaolong.ye, qi.z.zhang, qiming.yang, john.mcnamara, marko.kovacevic

From: Qi Zhang <qi.z.zhang@intel.com>

Add queues and Rx queue irqs configuration during device start
in DCF. The setup is sent to PF via virtchnl.

Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Signed-off-by: Ting Xu <ting.xu@intel.com>
---
 drivers/net/ice/ice_dcf.c        | 111 +++++++++++++++++++++++++++
 drivers/net/ice/ice_dcf.h        |   6 ++
 drivers/net/ice/ice_dcf_ethdev.c | 126 +++++++++++++++++++++++++++++++
 3 files changed, 243 insertions(+)

diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c
index f285323dd..8869e0d1c 100644
--- a/drivers/net/ice/ice_dcf.c
+++ b/drivers/net/ice/ice_dcf.c
@@ -24,6 +24,7 @@
 #include <rte_dev.h>
 
 #include "ice_dcf.h"
+#include "ice_rxtx.h"
 
 #define ICE_DCF_AQ_LEN     32
 #define ICE_DCF_AQ_BUF_SZ  4096
@@ -825,3 +826,113 @@ ice_dcf_init_rss(struct ice_dcf_hw *hw)
 
 	return 0;
 }
+
+#define IAVF_RXDID_LEGACY_1 1
+#define IAVF_RXDID_COMMS_GENERIC 16
+
+int
+ice_dcf_configure_queues(struct ice_dcf_hw *hw)
+{
+	struct ice_rx_queue **rxq =
+		(struct ice_rx_queue **)hw->eth_dev->data->rx_queues;
+	struct ice_tx_queue **txq =
+		(struct ice_tx_queue **)hw->eth_dev->data->tx_queues;
+	struct virtchnl_vsi_queue_config_info *vc_config;
+	struct virtchnl_queue_pair_info *vc_qp;
+	struct dcf_virtchnl_cmd args;
+	uint16_t i, size;
+	int err;
+
+	size = sizeof(*vc_config) +
+	       sizeof(vc_config->qpair[0]) * hw->num_queue_pairs;
+	vc_config = rte_zmalloc("cfg_queue", size, 0);
+	if (!vc_config)
+		return -ENOMEM;
+
+	vc_config->vsi_id = hw->vsi_res->vsi_id;
+	vc_config->num_queue_pairs = hw->num_queue_pairs;
+
+	for (i = 0, vc_qp = vc_config->qpair;
+	     i < hw->num_queue_pairs;
+	     i++, vc_qp++) {
+		vc_qp->txq.vsi_id = hw->vsi_res->vsi_id;
+		vc_qp->txq.queue_id = i;
+		if (i < hw->eth_dev->data->nb_tx_queues) {
+			vc_qp->txq.ring_len = txq[i]->nb_tx_desc;
+			vc_qp->txq.dma_ring_addr = txq[i]->tx_ring_dma;
+		}
+		vc_qp->rxq.vsi_id = hw->vsi_res->vsi_id;
+		vc_qp->rxq.queue_id = i;
+		vc_qp->rxq.max_pkt_size = rxq[i]->max_pkt_len;
+
+		if (i >= hw->eth_dev->data->nb_rx_queues)
+			continue;
+
+		vc_qp->rxq.ring_len = rxq[i]->nb_rx_desc;
+		vc_qp->rxq.dma_ring_addr = rxq[i]->rx_ring_dma;
+		vc_qp->rxq.databuffer_size = rxq[i]->rx_buf_len;
+
+		if (hw->vf_res->vf_cap_flags &
+		    VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC &&
+		    hw->supported_rxdid &
+		    BIT(IAVF_RXDID_COMMS_GENERIC)) {
+			vc_qp->rxq.rxdid = IAVF_RXDID_COMMS_GENERIC;
+			PMD_DRV_LOG(NOTICE, "request RXDID == %d in "
+				    "Queue[%d]", vc_qp->rxq.rxdid, i);
+		} else {
+			PMD_DRV_LOG(ERR, "RXDID 16 is not supported");
+			return -EINVAL;
+		}
+	}
+
+	memset(&args, 0, sizeof(args));
+	args.v_op = VIRTCHNL_OP_CONFIG_VSI_QUEUES;
+	args.req_msg = (uint8_t *)vc_config;
+	args.req_msglen = size;
+
+	err = ice_dcf_execute_virtchnl_cmd(hw, &args);
+	if (err)
+		PMD_DRV_LOG(ERR, "Failed to execute command of"
+			    " VIRTCHNL_OP_CONFIG_VSI_QUEUES");
+
+	rte_free(vc_config);
+	return err;
+}
+
+int
+ice_dcf_config_irq_map(struct ice_dcf_hw *hw)
+{
+	struct virtchnl_irq_map_info *map_info;
+	struct virtchnl_vector_map *vecmap;
+	struct dcf_virtchnl_cmd args;
+	int len, i, err;
+
+	len = sizeof(struct virtchnl_irq_map_info) +
+	      sizeof(struct virtchnl_vector_map) * hw->nb_msix;
+
+	map_info = rte_zmalloc("map_info", len, 0);
+	if (!map_info)
+		return -ENOMEM;
+
+	map_info->num_vectors = hw->nb_msix;
+	for (i = 0; i < hw->nb_msix; i++) {
+		vecmap = &map_info->vecmap[i];
+		vecmap->vsi_id = hw->vsi_res->vsi_id;
+		vecmap->rxitr_idx = 0;
+		vecmap->vector_id = hw->msix_base + i;
+		vecmap->txq_map = 0;
+		vecmap->rxq_map = hw->rxq_map[hw->msix_base + i];
+	}
+
+	memset(&args, 0, sizeof(args));
+	args.v_op = VIRTCHNL_OP_CONFIG_IRQ_MAP;
+	args.req_msg = (u8 *)map_info;
+	args.req_msglen = len;
+
+	err = ice_dcf_execute_virtchnl_cmd(hw, &args);
+	if (err)
+		PMD_DRV_LOG(ERR, "fail to execute command OP_CONFIG_IRQ_MAP");
+
+	rte_free(map_info);
+	return err;
+}
diff --git a/drivers/net/ice/ice_dcf.h b/drivers/net/ice/ice_dcf.h
index eea4b286b..9470d1df7 100644
--- a/drivers/net/ice/ice_dcf.h
+++ b/drivers/net/ice/ice_dcf.h
@@ -54,6 +54,10 @@ struct ice_dcf_hw {
 	uint8_t *rss_key;
 	uint64_t supported_rxdid;
 	uint16_t num_queue_pairs;
+
+	uint16_t msix_base;
+	uint16_t nb_msix;
+	uint16_t rxq_map[16];
 };
 
 int ice_dcf_execute_virtchnl_cmd(struct ice_dcf_hw *hw,
@@ -64,5 +68,7 @@ int ice_dcf_handle_vsi_update_event(struct ice_dcf_hw *hw);
 int ice_dcf_init_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw);
 void ice_dcf_uninit_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw);
 int ice_dcf_init_rss(struct ice_dcf_hw *hw);
+int ice_dcf_configure_queues(struct ice_dcf_hw *hw);
+int ice_dcf_config_irq_map(struct ice_dcf_hw *hw);
 
 #endif /* _ICE_DCF_H_ */
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index e021d779a..333fee037 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -114,10 +114,124 @@ ice_dcf_init_rx_queues(struct rte_eth_dev *dev)
 	return 0;
 }
 
+#define IAVF_MISC_VEC_ID                RTE_INTR_VEC_ZERO_OFFSET
+#define IAVF_RX_VEC_START               RTE_INTR_VEC_RXTX_OFFSET
+
+#define IAVF_ITR_INDEX_DEFAULT          0
+#define IAVF_QUEUE_ITR_INTERVAL_DEFAULT 32 /* 32 us */
+#define IAVF_QUEUE_ITR_INTERVAL_MAX     8160 /* 8160 us */
+
+static inline uint16_t
+iavf_calc_itr_interval(int16_t interval)
+{
+	if (interval < 0 || interval > IAVF_QUEUE_ITR_INTERVAL_MAX)
+		interval = IAVF_QUEUE_ITR_INTERVAL_DEFAULT;
+
+	/* Convert to hardware count, as writing each 1 represents 2 us */
+	return interval / 2;
+}
+
+static int
+ice_dcf_config_rx_queues_irqs(struct rte_eth_dev *dev,
+				     struct rte_intr_handle *intr_handle)
+{
+	struct ice_dcf_adapter *adapter = dev->data->dev_private;
+	struct ice_dcf_hw *hw = &adapter->real_hw;
+	uint16_t interval, i;
+	int vec;
+
+	if (rte_intr_cap_multiple(intr_handle) &&
+	    dev->data->dev_conf.intr_conf.rxq) {
+		if (rte_intr_efd_enable(intr_handle, dev->data->nb_rx_queues))
+			return -1;
+	}
+
+	if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
+		intr_handle->intr_vec =
+			rte_zmalloc("intr_vec",
+				    dev->data->nb_rx_queues * sizeof(int), 0);
+		if (!intr_handle->intr_vec) {
+			PMD_DRV_LOG(ERR, "Failed to allocate %d rx intr_vec",
+				    dev->data->nb_rx_queues);
+			return -1;
+		}
+	}
+
+	if (!dev->data->dev_conf.intr_conf.rxq ||
+	    !rte_intr_dp_is_en(intr_handle)) {
+		/* Rx interrupt disabled, Map interrupt only for writeback */
+		hw->nb_msix = 1;
+		if (hw->vf_res->vf_cap_flags &
+		    VIRTCHNL_VF_OFFLOAD_WB_ON_ITR) {
+			/* If WB_ON_ITR supports, enable it */
+			hw->msix_base = IAVF_RX_VEC_START;
+			IAVF_WRITE_REG(&hw->avf,
+				       IAVF_VFINT_DYN_CTLN1(hw->msix_base - 1),
+				       IAVF_VFINT_DYN_CTLN1_ITR_INDX_MASK |
+				       IAVF_VFINT_DYN_CTLN1_WB_ON_ITR_MASK);
+		} else {
+			/* If no WB_ON_ITR offload flags, need to set
+			 * interrupt for descriptor write back.
+			 */
+			hw->msix_base = IAVF_MISC_VEC_ID;
+
+			/* set ITR to max */
+			interval =
+			iavf_calc_itr_interval(IAVF_QUEUE_ITR_INTERVAL_MAX);
+			IAVF_WRITE_REG(&hw->avf, IAVF_VFINT_DYN_CTL01,
+				       IAVF_VFINT_DYN_CTL01_INTENA_MASK |
+				       (IAVF_ITR_INDEX_DEFAULT <<
+					IAVF_VFINT_DYN_CTL01_ITR_INDX_SHIFT) |
+				       (interval <<
+					IAVF_VFINT_DYN_CTL01_INTERVAL_SHIFT));
+		}
+		IAVF_WRITE_FLUSH(&hw->avf);
+		/* map all queues to the same interrupt */
+		for (i = 0; i < dev->data->nb_rx_queues; i++)
+			hw->rxq_map[hw->msix_base] |= 1 << i;
+	} else {
+		if (!rte_intr_allow_others(intr_handle)) {
+			hw->nb_msix = 1;
+			hw->msix_base = IAVF_MISC_VEC_ID;
+			for (i = 0; i < dev->data->nb_rx_queues; i++) {
+				hw->rxq_map[hw->msix_base] |= 1 << i;
+				intr_handle->intr_vec[i] = IAVF_MISC_VEC_ID;
+			}
+			PMD_DRV_LOG(DEBUG,
+				    "vector %u are mapping to all Rx queues",
+				    hw->msix_base);
+		} else {
+			/* If Rx interrupt is reuquired, and we can use
+			 * multi interrupts, then the vec is from 1
+			 */
+			hw->nb_msix = RTE_MIN(hw->vf_res->max_vectors,
+					      intr_handle->nb_efd);
+			hw->msix_base = IAVF_MISC_VEC_ID;
+			vec = IAVF_MISC_VEC_ID;
+			for (i = 0; i < dev->data->nb_rx_queues; i++) {
+				hw->rxq_map[vec] |= 1 << i;
+				intr_handle->intr_vec[i] = vec++;
+				if (vec >= hw->nb_msix)
+					vec = IAVF_RX_VEC_START;
+			}
+			PMD_DRV_LOG(DEBUG,
+				    "%u vectors are mapping to %u Rx queues",
+				    hw->nb_msix, dev->data->nb_rx_queues);
+		}
+	}
+
+	if (ice_dcf_config_irq_map(hw)) {
+		PMD_DRV_LOG(ERR, "config interrupt mapping failed");
+		return -1;
+	}
+	return 0;
+}
+
 static int
 ice_dcf_dev_start(struct rte_eth_dev *dev)
 {
 	struct ice_dcf_adapter *dcf_ad = dev->data->dev_private;
+	struct rte_intr_handle *intr_handle = dev->intr_handle;
 	struct ice_adapter *ad = &dcf_ad->parent;
 	struct ice_dcf_hw *hw = &dcf_ad->real_hw;
 	int ret;
@@ -141,6 +255,18 @@ ice_dcf_dev_start(struct rte_eth_dev *dev)
 		}
 	}
 
+	ret = ice_dcf_configure_queues(hw);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "Fail to config queues");
+		return ret;
+	}
+
+	ret = ice_dcf_config_rx_queues_irqs(dev, intr_handle);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "Fail to config rx queues' irqs");
+		return ret;
+	}
+
 	dev->data->dev_link.link_status = ETH_LINK_UP;
 
 	return 0;
-- 
2.17.1


^ permalink raw reply	[flat|nested] 65+ messages in thread

* [dpdk-dev] [PATCH v2 09/12] net/ice: add queue start and stop for DCF
  2020-06-11 17:08 ` [dpdk-dev] [PATCH v2 00/12] " Ting Xu
                     ` (7 preceding siblings ...)
  2020-06-11 17:08   ` [dpdk-dev] [PATCH v2 08/12] net/ice: add queue config in DCF Ting Xu
@ 2020-06-11 17:08   ` Ting Xu
  2020-06-11 17:08   ` [dpdk-dev] [PATCH v2 10/12] net/ice: enable stats " Ting Xu
                     ` (2 subsequent siblings)
  11 siblings, 0 replies; 65+ messages in thread
From: Ting Xu @ 2020-06-11 17:08 UTC (permalink / raw)
  To: dev; +Cc: xiaolong.ye, qi.z.zhang, qiming.yang, john.mcnamara, marko.kovacevic

From: Qi Zhang <qi.z.zhang@intel.com>

Add queue start and stop in DCF. Support queue enable and disable
through virtual channel. Add support for Rx queue mbufs allocation
and queue reset.

Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Signed-off-by: Ting Xu <ting.xu@intel.com>
---
 drivers/net/ice/ice_dcf.c        |  57 ++++++
 drivers/net/ice/ice_dcf.h        |   3 +-
 drivers/net/ice/ice_dcf_ethdev.c | 320 +++++++++++++++++++++++++++++++
 3 files changed, 379 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c
index 8869e0d1c..f18c0f16a 100644
--- a/drivers/net/ice/ice_dcf.c
+++ b/drivers/net/ice/ice_dcf.c
@@ -936,3 +936,60 @@ ice_dcf_config_irq_map(struct ice_dcf_hw *hw)
 	rte_free(map_info);
 	return err;
 }
+
+int
+ice_dcf_switch_queue(struct ice_dcf_hw *hw, uint16_t qid, bool rx, bool on)
+{
+	struct virtchnl_queue_select queue_select;
+	struct dcf_virtchnl_cmd args;
+	int err;
+
+	memset(&queue_select, 0, sizeof(queue_select));
+	queue_select.vsi_id = hw->vsi_res->vsi_id;
+	if (rx)
+		queue_select.rx_queues |= 1 << qid;
+	else
+		queue_select.tx_queues |= 1 << qid;
+
+	memset(&args, 0, sizeof(args));
+	if (on)
+		args.v_op = VIRTCHNL_OP_ENABLE_QUEUES;
+	else
+		args.v_op = VIRTCHNL_OP_DISABLE_QUEUES;
+
+	args.req_msg = (u8 *)&queue_select;
+	args.req_msglen = sizeof(queue_select);
+
+	err = ice_dcf_execute_virtchnl_cmd(hw, &args);
+	if (err)
+		PMD_DRV_LOG(ERR, "Failed to execute command of %s",
+			    on ? "OP_ENABLE_QUEUES" : "OP_DISABLE_QUEUES");
+
+	return err;
+}
+
+int
+ice_dcf_disable_queues(struct ice_dcf_hw *hw)
+{
+	struct virtchnl_queue_select queue_select;
+	struct dcf_virtchnl_cmd args;
+	int err;
+
+	memset(&queue_select, 0, sizeof(queue_select));
+	queue_select.vsi_id = hw->vsi_res->vsi_id;
+
+	queue_select.rx_queues = BIT(hw->eth_dev->data->nb_rx_queues) - 1;
+	queue_select.tx_queues = BIT(hw->eth_dev->data->nb_tx_queues) - 1;
+
+	memset(&args, 0, sizeof(args));
+	args.v_op = VIRTCHNL_OP_DISABLE_QUEUES;
+	args.req_msg = (u8 *)&queue_select;
+	args.req_msglen = sizeof(queue_select);
+
+	err = ice_dcf_execute_virtchnl_cmd(hw, &args);
+	if (err)
+		PMD_DRV_LOG(ERR,
+			    "Failed to execute command of OP_DISABLE_QUEUES");
+
+	return err;
+}
diff --git a/drivers/net/ice/ice_dcf.h b/drivers/net/ice/ice_dcf.h
index 9470d1df7..68e1661c0 100644
--- a/drivers/net/ice/ice_dcf.h
+++ b/drivers/net/ice/ice_dcf.h
@@ -70,5 +70,6 @@ void ice_dcf_uninit_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw);
 int ice_dcf_init_rss(struct ice_dcf_hw *hw);
 int ice_dcf_configure_queues(struct ice_dcf_hw *hw);
 int ice_dcf_config_irq_map(struct ice_dcf_hw *hw);
-
+int ice_dcf_switch_queue(struct ice_dcf_hw *hw, uint16_t qid, bool rx, bool on);
+int ice_dcf_disable_queues(struct ice_dcf_hw *hw);
 #endif /* _ICE_DCF_H_ */
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index 333fee037..239426b09 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -227,6 +227,270 @@ ice_dcf_config_rx_queues_irqs(struct rte_eth_dev *dev,
 	return 0;
 }
 
+static int
+alloc_rxq_mbufs(struct ice_rx_queue *rxq)
+{
+	volatile union ice_32b_rx_flex_desc *rxd;
+	struct rte_mbuf *mbuf = NULL;
+	uint64_t dma_addr;
+	uint16_t i;
+
+	for (i = 0; i < rxq->nb_rx_desc; i++) {
+		mbuf = rte_mbuf_raw_alloc(rxq->mp);
+		if (unlikely(!mbuf)) {
+			PMD_DRV_LOG(ERR, "Failed to allocate mbuf for RX");
+			return -ENOMEM;
+		}
+
+		rte_mbuf_refcnt_set(mbuf, 1);
+		mbuf->next = NULL;
+		mbuf->data_off = RTE_PKTMBUF_HEADROOM;
+		mbuf->nb_segs = 1;
+		mbuf->port = rxq->port_id;
+
+		dma_addr =
+			rte_cpu_to_le_64(rte_mbuf_data_iova_default(mbuf));
+
+		rxd = &rxq->rx_ring[i];
+		rxd->read.pkt_addr = dma_addr;
+		rxd->read.hdr_addr = 0;
+		rxd->read.rsvd1 = 0;
+		rxd->read.rsvd2 = 0;
+
+		rxq->sw_ring[i].mbuf = (void *)mbuf;
+	}
+
+	return 0;
+}
+
+static int
+ice_dcf_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+{
+	struct ice_dcf_adapter *ad = dev->data->dev_private;
+	struct iavf_hw *hw = &ad->real_hw.avf;
+	struct ice_rx_queue *rxq;
+	int err = 0;
+
+	if (rx_queue_id >= dev->data->nb_rx_queues)
+		return -EINVAL;
+
+	rxq = dev->data->rx_queues[rx_queue_id];
+
+	err = alloc_rxq_mbufs(rxq);
+	if (err) {
+		PMD_DRV_LOG(ERR, "Failed to allocate RX queue mbuf");
+		return err;
+	}
+
+	rte_wmb();
+
+	/* Init the RX tail register. */
+	IAVF_PCI_REG_WRITE(rxq->qrx_tail, rxq->nb_rx_desc - 1);
+	IAVF_WRITE_FLUSH(hw);
+
+	/* Ready to switch the queue on */
+	err = ice_dcf_switch_queue(&ad->real_hw, rx_queue_id, true, true);
+	if (err)
+		PMD_DRV_LOG(ERR, "Failed to switch RX queue %u on",
+			    rx_queue_id);
+	else
+		dev->data->rx_queue_state[rx_queue_id] =
+			RTE_ETH_QUEUE_STATE_STARTED;
+
+	return err;
+}
+
+static inline void
+reset_rx_queue(struct ice_rx_queue *rxq)
+{
+	uint16_t len;
+	uint32_t i;
+
+	if (!rxq)
+		return;
+
+	len = rxq->nb_rx_desc + ICE_RX_MAX_BURST;
+
+	for (i = 0; i < len * sizeof(union ice_rx_flex_desc); i++)
+		((volatile char *)rxq->rx_ring)[i] = 0;
+
+	memset(&rxq->fake_mbuf, 0x0, sizeof(rxq->fake_mbuf));
+
+	for (i = 0; i < ICE_RX_MAX_BURST; i++)
+		rxq->sw_ring[rxq->nb_rx_desc + i].mbuf = &rxq->fake_mbuf;
+
+	/* for rx bulk */
+	rxq->rx_nb_avail = 0;
+	rxq->rx_next_avail = 0;
+	rxq->rx_free_trigger = (uint16_t)(rxq->rx_free_thresh - 1);
+
+	rxq->rx_tail = 0;
+	rxq->nb_rx_hold = 0;
+	rxq->pkt_first_seg = NULL;
+	rxq->pkt_last_seg = NULL;
+}
+
+static inline void
+reset_tx_queue(struct ice_tx_queue *txq)
+{
+	struct ice_tx_entry *txe;
+	uint32_t i, size;
+	uint16_t prev;
+
+	if (!txq) {
+		PMD_DRV_LOG(DEBUG, "Pointer to txq is NULL");
+		return;
+	}
+
+	txe = txq->sw_ring;
+	size = sizeof(struct ice_tx_desc) * txq->nb_tx_desc;
+	for (i = 0; i < size; i++)
+		((volatile char *)txq->tx_ring)[i] = 0;
+
+	prev = (uint16_t)(txq->nb_tx_desc - 1);
+	for (i = 0; i < txq->nb_tx_desc; i++) {
+		txq->tx_ring[i].cmd_type_offset_bsz =
+			rte_cpu_to_le_64(IAVF_TX_DESC_DTYPE_DESC_DONE);
+		txe[i].mbuf =  NULL;
+		txe[i].last_id = i;
+		txe[prev].next_id = i;
+		prev = i;
+	}
+
+	txq->tx_tail = 0;
+	txq->nb_tx_used = 0;
+
+	txq->last_desc_cleaned = txq->nb_tx_desc - 1;
+	txq->nb_tx_free = txq->nb_tx_desc - 1;
+
+	txq->tx_next_dd = txq->tx_rs_thresh - 1;
+	txq->tx_next_rs = txq->tx_rs_thresh - 1;
+}
+
+static int
+ice_dcf_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+{
+	struct ice_dcf_adapter *ad = dev->data->dev_private;
+	struct ice_dcf_hw *hw = &ad->real_hw;
+	struct ice_rx_queue *rxq;
+	int err;
+
+	if (rx_queue_id >= dev->data->nb_rx_queues)
+		return -EINVAL;
+
+	err = ice_dcf_switch_queue(hw, rx_queue_id, true, false);
+	if (err) {
+		PMD_DRV_LOG(ERR, "Failed to switch RX queue %u off",
+			    rx_queue_id);
+		return err;
+	}
+
+	rxq = dev->data->rx_queues[rx_queue_id];
+	rxq->rx_rel_mbufs(rxq);
+	reset_rx_queue(rxq);
+	dev->data->rx_queue_state[rx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
+
+	return 0;
+}
+
+static int
+ice_dcf_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
+{
+	struct ice_dcf_adapter *ad = dev->data->dev_private;
+	struct iavf_hw *hw = &ad->real_hw.avf;
+	struct ice_tx_queue *txq;
+	int err = 0;
+
+	if (tx_queue_id >= dev->data->nb_tx_queues)
+		return -EINVAL;
+
+	txq = dev->data->tx_queues[tx_queue_id];
+
+	/* Init the RX tail register. */
+	txq->qtx_tail = hw->hw_addr + IAVF_QTX_TAIL1(tx_queue_id);
+	IAVF_PCI_REG_WRITE(txq->qtx_tail, 0);
+	IAVF_WRITE_FLUSH(hw);
+
+	/* Ready to switch the queue on */
+	err = ice_dcf_switch_queue(&ad->real_hw, tx_queue_id, false, true);
+
+	if (err)
+		PMD_DRV_LOG(ERR, "Failed to switch TX queue %u on",
+			    tx_queue_id);
+	else
+		dev->data->tx_queue_state[tx_queue_id] =
+			RTE_ETH_QUEUE_STATE_STARTED;
+
+	return err;
+}
+
+static int
+ice_dcf_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
+{
+	struct ice_dcf_adapter *ad = dev->data->dev_private;
+	struct ice_dcf_hw *hw = &ad->real_hw;
+	struct ice_tx_queue *txq;
+	int err;
+
+	if (tx_queue_id >= dev->data->nb_tx_queues)
+		return -EINVAL;
+
+	err = ice_dcf_switch_queue(hw, tx_queue_id, false, false);
+	if (err) {
+		PMD_DRV_LOG(ERR, "Failed to switch TX queue %u off",
+			    tx_queue_id);
+		return err;
+	}
+
+	txq = dev->data->tx_queues[tx_queue_id];
+	txq->tx_rel_mbufs(txq);
+	reset_tx_queue(txq);
+	dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
+
+	return 0;
+}
+
+static int
+ice_dcf_start_queues(struct rte_eth_dev *dev)
+{
+	struct ice_rx_queue *rxq;
+	struct ice_tx_queue *txq;
+	int nb_rxq = 0;
+	int nb_txq, i;
+
+	for (nb_txq = 0; nb_txq < dev->data->nb_tx_queues; nb_txq++) {
+		txq = dev->data->tx_queues[nb_txq];
+		if (txq->tx_deferred_start)
+			continue;
+		if (ice_dcf_tx_queue_start(dev, nb_txq) != 0) {
+			PMD_DRV_LOG(ERR, "Fail to start queue %u", nb_txq);
+			goto tx_err;
+		}
+	}
+
+	for (nb_rxq = 0; nb_rxq < dev->data->nb_rx_queues; nb_rxq++) {
+		rxq = dev->data->rx_queues[nb_rxq];
+		if (rxq->rx_deferred_start)
+			continue;
+		if (ice_dcf_rx_queue_start(dev, nb_rxq) != 0) {
+			PMD_DRV_LOG(ERR, "Fail to start queue %u", nb_rxq);
+			goto rx_err;
+		}
+	}
+
+	return 0;
+
+	/* stop the started queues if failed to start all queues */
+rx_err:
+	for (i = 0; i < nb_rxq; i++)
+		ice_dcf_rx_queue_stop(dev, i);
+tx_err:
+	for (i = 0; i < nb_txq; i++)
+		ice_dcf_tx_queue_stop(dev, i);
+
+	return -1;
+}
+
 static int
 ice_dcf_dev_start(struct rte_eth_dev *dev)
 {
@@ -267,20 +531,72 @@ ice_dcf_dev_start(struct rte_eth_dev *dev)
 		return ret;
 	}
 
+	if (dev->data->dev_conf.intr_conf.rxq != 0) {
+		rte_intr_disable(intr_handle);
+		rte_intr_enable(intr_handle);
+	}
+
+	ret = ice_dcf_start_queues(dev);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "Failed to enable queues");
+		return ret;
+	}
+
 	dev->data->dev_link.link_status = ETH_LINK_UP;
 
 	return 0;
 }
 
+static void
+ice_dcf_stop_queues(struct rte_eth_dev *dev)
+{
+	struct ice_dcf_adapter *ad = dev->data->dev_private;
+	struct ice_dcf_hw *hw = &ad->real_hw;
+	struct ice_rx_queue *rxq;
+	struct ice_tx_queue *txq;
+	int ret, i;
+
+	/* Stop All queues */
+	ret = ice_dcf_disable_queues(hw);
+	if (ret)
+		PMD_DRV_LOG(WARNING, "Fail to stop queues");
+
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		txq = dev->data->tx_queues[i];
+		if (!txq)
+			continue;
+		txq->tx_rel_mbufs(txq);
+		reset_tx_queue(txq);
+		dev->data->tx_queue_state[i] = RTE_ETH_QUEUE_STATE_STOPPED;
+	}
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		rxq = dev->data->rx_queues[i];
+		if (!rxq)
+			continue;
+		rxq->rx_rel_mbufs(rxq);
+		reset_rx_queue(rxq);
+		dev->data->rx_queue_state[i] = RTE_ETH_QUEUE_STATE_STOPPED;
+	}
+}
+
 static void
 ice_dcf_dev_stop(struct rte_eth_dev *dev)
 {
 	struct ice_dcf_adapter *dcf_ad = dev->data->dev_private;
+	struct rte_intr_handle *intr_handle = dev->intr_handle;
 	struct ice_adapter *ad = &dcf_ad->parent;
 
 	if (ad->pf.adapter_stopped == 1)
 		return;
 
+	ice_dcf_stop_queues(dev);
+
+	rte_intr_efd_disable(intr_handle);
+	if (intr_handle->intr_vec) {
+		rte_free(intr_handle->intr_vec);
+		intr_handle->intr_vec = NULL;
+	}
+
 	dev->data->dev_link.link_status = ETH_LINK_DOWN;
 	ad->pf.adapter_stopped = 1;
 }
@@ -477,6 +793,10 @@ static const struct eth_dev_ops ice_dcf_eth_dev_ops = {
 	.tx_queue_setup          = ice_tx_queue_setup,
 	.rx_queue_release        = ice_rx_queue_release,
 	.tx_queue_release        = ice_tx_queue_release,
+	.rx_queue_start          = ice_dcf_rx_queue_start,
+	.tx_queue_start          = ice_dcf_tx_queue_start,
+	.rx_queue_stop           = ice_dcf_rx_queue_stop,
+	.tx_queue_stop           = ice_dcf_tx_queue_stop,
 	.link_update             = ice_dcf_link_update,
 	.stats_get               = ice_dcf_stats_get,
 	.stats_reset             = ice_dcf_stats_reset,
-- 
2.17.1


^ permalink raw reply	[flat|nested] 65+ messages in thread

* [dpdk-dev] [PATCH v2 10/12] net/ice: enable stats for DCF
  2020-06-11 17:08 ` [dpdk-dev] [PATCH v2 00/12] " Ting Xu
                     ` (8 preceding siblings ...)
  2020-06-11 17:08   ` [dpdk-dev] [PATCH v2 09/12] net/ice: add queue start and stop for DCF Ting Xu
@ 2020-06-11 17:08   ` Ting Xu
  2020-06-11 17:08   ` [dpdk-dev] [PATCH v2 11/12] net/ice: set MAC filter during dev start " Ting Xu
  2020-06-11 17:08   ` [dpdk-dev] [PATCH v2 12/12] doc: enable DCF datapath configuration Ting Xu
  11 siblings, 0 replies; 65+ messages in thread
From: Ting Xu @ 2020-06-11 17:08 UTC (permalink / raw)
  To: dev; +Cc: xiaolong.ye, qi.z.zhang, qiming.yang, john.mcnamara, marko.kovacevic

From: Qi Zhang <qi.z.zhang@intel.com>

Add support to get and reset Rx/Tx stats in DCF. Query stats
from PF.

Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Signed-off-by: Ting Xu <ting.xu@intel.com>
---
 drivers/net/ice/ice_dcf.c        |  27 ++++++++
 drivers/net/ice/ice_dcf.h        |   4 ++
 drivers/net/ice/ice_dcf_ethdev.c | 102 +++++++++++++++++++++++++++----
 3 files changed, 120 insertions(+), 13 deletions(-)

diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c
index f18c0f16a..bb848bed1 100644
--- a/drivers/net/ice/ice_dcf.c
+++ b/drivers/net/ice/ice_dcf.c
@@ -993,3 +993,30 @@ ice_dcf_disable_queues(struct ice_dcf_hw *hw)
 
 	return err;
 }
+
+int
+ice_dcf_query_stats(struct ice_dcf_hw *hw,
+				   struct virtchnl_eth_stats *pstats)
+{
+	struct virtchnl_queue_select q_stats;
+	struct dcf_virtchnl_cmd args;
+	int err;
+
+	memset(&q_stats, 0, sizeof(q_stats));
+	q_stats.vsi_id = hw->vsi_res->vsi_id;
+
+	args.v_op = VIRTCHNL_OP_GET_STATS;
+	args.req_msg = (uint8_t *)&q_stats;
+	args.req_msglen = sizeof(q_stats);
+	args.rsp_msglen = sizeof(*pstats);
+	args.rsp_msgbuf = (uint8_t *)pstats;
+	args.rsp_buflen = sizeof(*pstats);
+
+	err = ice_dcf_execute_virtchnl_cmd(hw, &args);
+	if (err) {
+		   PMD_DRV_LOG(ERR, "fail to execute command OP_GET_STATS");
+		   return err;
+	}
+
+	return 0;
+}
diff --git a/drivers/net/ice/ice_dcf.h b/drivers/net/ice/ice_dcf.h
index 68e1661c0..e82bc7748 100644
--- a/drivers/net/ice/ice_dcf.h
+++ b/drivers/net/ice/ice_dcf.h
@@ -58,6 +58,7 @@ struct ice_dcf_hw {
 	uint16_t msix_base;
 	uint16_t nb_msix;
 	uint16_t rxq_map[16];
+	struct virtchnl_eth_stats eth_stats_offset;
 };
 
 int ice_dcf_execute_virtchnl_cmd(struct ice_dcf_hw *hw,
@@ -72,4 +73,7 @@ int ice_dcf_configure_queues(struct ice_dcf_hw *hw);
 int ice_dcf_config_irq_map(struct ice_dcf_hw *hw);
 int ice_dcf_switch_queue(struct ice_dcf_hw *hw, uint16_t qid, bool rx, bool on);
 int ice_dcf_disable_queues(struct ice_dcf_hw *hw);
+int ice_dcf_query_stats(struct ice_dcf_hw *hw,
+			struct virtchnl_eth_stats *pstats);
+
 #endif /* _ICE_DCF_H_ */
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index 239426b09..1a675064a 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -695,19 +695,6 @@ ice_dcf_dev_info_get(struct rte_eth_dev *dev,
 	return 0;
 }
 
-static int
-ice_dcf_stats_get(__rte_unused struct rte_eth_dev *dev,
-		  __rte_unused struct rte_eth_stats *igb_stats)
-{
-	return 0;
-}
-
-static int
-ice_dcf_stats_reset(__rte_unused struct rte_eth_dev *dev)
-{
-	return 0;
-}
-
 static int
 ice_dcf_dev_promiscuous_enable(__rte_unused struct rte_eth_dev *dev)
 {
@@ -760,6 +747,95 @@ ice_dcf_dev_filter_ctrl(struct rte_eth_dev *dev,
 	return ret;
 }
 
+#define ICE_DCF_32_BIT_WIDTH (CHAR_BIT * 4)
+#define ICE_DCF_48_BIT_WIDTH (CHAR_BIT * 6)
+#define ICE_DCF_48_BIT_MASK  RTE_LEN2MASK(ICE_DCF_48_BIT_WIDTH, uint64_t)
+
+static void
+ice_dcf_stat_update_48(uint64_t *offset, uint64_t *stat)
+{
+	if (*stat >= *offset)
+		*stat = *stat - *offset;
+	else
+		*stat = (uint64_t)((*stat +
+			((uint64_t)1 << ICE_DCF_48_BIT_WIDTH)) - *offset);
+
+	*stat &= ICE_DCF_48_BIT_MASK;
+}
+
+static void
+ice_dcf_stat_update_32(uint64_t *offset, uint64_t *stat)
+{
+	if (*stat >= *offset)
+		*stat = (uint64_t)(*stat - *offset);
+	else
+		*stat = (uint64_t)((*stat +
+			((uint64_t)1 << ICE_DCF_32_BIT_WIDTH)) - *offset);
+}
+
+static void
+ice_dcf_update_stats(struct virtchnl_eth_stats *oes,
+		     struct virtchnl_eth_stats *nes)
+{
+	ice_dcf_stat_update_48(&oes->rx_bytes, &nes->rx_bytes);
+	ice_dcf_stat_update_48(&oes->rx_unicast, &nes->rx_unicast);
+	ice_dcf_stat_update_48(&oes->rx_multicast, &nes->rx_multicast);
+	ice_dcf_stat_update_48(&oes->rx_broadcast, &nes->rx_broadcast);
+	ice_dcf_stat_update_32(&oes->rx_discards, &nes->rx_discards);
+	ice_dcf_stat_update_48(&oes->tx_bytes, &nes->tx_bytes);
+	ice_dcf_stat_update_48(&oes->tx_unicast, &nes->tx_unicast);
+	ice_dcf_stat_update_48(&oes->tx_multicast, &nes->tx_multicast);
+	ice_dcf_stat_update_48(&oes->tx_broadcast, &nes->tx_broadcast);
+	ice_dcf_stat_update_32(&oes->tx_errors, &nes->tx_errors);
+	ice_dcf_stat_update_32(&oes->tx_discards, &nes->tx_discards);
+}
+
+
+static int
+ice_dcf_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
+{
+	struct ice_dcf_adapter *ad = dev->data->dev_private;
+	struct ice_dcf_hw *hw = &ad->real_hw;
+	struct virtchnl_eth_stats pstats;
+	int ret;
+
+	ret = ice_dcf_query_stats(hw, &pstats);
+	if (ret == 0) {
+		ice_dcf_update_stats(&hw->eth_stats_offset, &pstats);
+		stats->ipackets = pstats.rx_unicast + pstats.rx_multicast +
+				pstats.rx_broadcast - pstats.rx_discards;
+		stats->opackets = pstats.tx_broadcast + pstats.tx_multicast +
+						pstats.tx_unicast;
+		stats->imissed = pstats.rx_discards;
+		stats->oerrors = pstats.tx_errors + pstats.tx_discards;
+		stats->ibytes = pstats.rx_bytes;
+		stats->ibytes -= stats->ipackets * RTE_ETHER_CRC_LEN;
+		stats->obytes = pstats.tx_bytes;
+	} else {
+		PMD_DRV_LOG(ERR, "Get statistics failed");
+	}
+	return ret;
+}
+
+static int
+ice_dcf_stats_reset(struct rte_eth_dev *dev)
+{
+	struct ice_dcf_adapter *ad = dev->data->dev_private;
+	struct ice_dcf_hw *hw = &ad->real_hw;
+	struct virtchnl_eth_stats pstats;
+	int ret;
+
+	/* read stat values to clear hardware registers */
+	ret = ice_dcf_query_stats(hw, &pstats);
+	if (ret != 0)
+		return ret;
+
+	/* set stats offset base on current values */
+	hw->eth_stats_offset = pstats;
+
+	return 0;
+}
+
 static void
 ice_dcf_dev_close(struct rte_eth_dev *dev)
 {
-- 
2.17.1


^ permalink raw reply	[flat|nested] 65+ messages in thread

* [dpdk-dev] [PATCH v2 11/12] net/ice: set MAC filter during dev start for DCF
  2020-06-11 17:08 ` [dpdk-dev] [PATCH v2 00/12] " Ting Xu
                     ` (9 preceding siblings ...)
  2020-06-11 17:08   ` [dpdk-dev] [PATCH v2 10/12] net/ice: enable stats " Ting Xu
@ 2020-06-11 17:08   ` Ting Xu
  2020-06-11 17:08   ` [dpdk-dev] [PATCH v2 12/12] doc: enable DCF datapath configuration Ting Xu
  11 siblings, 0 replies; 65+ messages in thread
From: Ting Xu @ 2020-06-11 17:08 UTC (permalink / raw)
  To: dev; +Cc: xiaolong.ye, qi.z.zhang, qiming.yang, john.mcnamara, marko.kovacevic

From: Qi Zhang <qi.z.zhang@intel.com>

Add support to add and delete  MAC address filter in DCF.

Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Signed-off-by: Ting Xu <ting.xu@intel.com>
---
 drivers/net/ice/ice_dcf.c        | 42 ++++++++++++++++++++++++++++++++
 drivers/net/ice/ice_dcf.h        |  1 +
 drivers/net/ice/ice_dcf_ethdev.c |  7 ++++++
 3 files changed, 50 insertions(+)

diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c
index bb848bed1..0e430bd76 100644
--- a/drivers/net/ice/ice_dcf.c
+++ b/drivers/net/ice/ice_dcf.c
@@ -1020,3 +1020,45 @@ ice_dcf_query_stats(struct ice_dcf_hw *hw,
 
 	return 0;
 }
+
+int
+ice_dcf_add_del_all_mac_addr(struct ice_dcf_hw *hw, bool add)
+{
+	struct virtchnl_ether_addr_list *list;
+	struct rte_ether_addr *addr;
+	struct dcf_virtchnl_cmd args;
+	int len, err = 0;
+
+	len = sizeof(struct virtchnl_ether_addr_list);
+	addr = hw->eth_dev->data->mac_addrs;
+	len += sizeof(struct virtchnl_ether_addr);
+
+	list = rte_zmalloc(NULL, len, 0);
+	if (!list) {
+		PMD_DRV_LOG(ERR, "fail to allocate memory");
+		return -ENOMEM;
+	}
+
+	rte_memcpy(list->list[0].addr, addr->addr_bytes,
+			sizeof(addr->addr_bytes));
+	PMD_DRV_LOG(DEBUG, "add/rm mac:%x:%x:%x:%x:%x:%x",
+			    addr->addr_bytes[0], addr->addr_bytes[1],
+			    addr->addr_bytes[2], addr->addr_bytes[3],
+			    addr->addr_bytes[4], addr->addr_bytes[5]);
+
+	list->vsi_id = hw->vsi_res->vsi_id;
+	list->num_elements = 1;
+
+	memset(&args, 0, sizeof(args));
+	args.v_op = add ? VIRTCHNL_OP_ADD_ETH_ADDR :
+			VIRTCHNL_OP_DEL_ETH_ADDR;
+	args.req_msg = (uint8_t *)list;
+	args.req_msglen  = len;
+	err = ice_dcf_execute_virtchnl_cmd(hw, &args);
+	if (err)
+		PMD_DRV_LOG(ERR, "fail to execute command %s",
+			    add ? "OP_ADD_ETHER_ADDRESS" :
+			    "OP_DEL_ETHER_ADDRESS");
+	rte_free(list);
+	return err;
+}
diff --git a/drivers/net/ice/ice_dcf.h b/drivers/net/ice/ice_dcf.h
index e82bc7748..a44a01e90 100644
--- a/drivers/net/ice/ice_dcf.h
+++ b/drivers/net/ice/ice_dcf.h
@@ -75,5 +75,6 @@ int ice_dcf_switch_queue(struct ice_dcf_hw *hw, uint16_t qid, bool rx, bool on);
 int ice_dcf_disable_queues(struct ice_dcf_hw *hw);
 int ice_dcf_query_stats(struct ice_dcf_hw *hw,
 			struct virtchnl_eth_stats *pstats);
+int ice_dcf_add_del_all_mac_addr(struct ice_dcf_hw *hw, bool add);
 
 #endif /* _ICE_DCF_H_ */
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index 1a675064a..7912dc18a 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -542,6 +542,12 @@ ice_dcf_dev_start(struct rte_eth_dev *dev)
 		return ret;
 	}
 
+	ret = ice_dcf_add_del_all_mac_addr(hw, true);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "Failed to add mac addr");
+		return ret;
+	}
+
 	dev->data->dev_link.link_status = ETH_LINK_UP;
 
 	return 0;
@@ -597,6 +603,7 @@ ice_dcf_dev_stop(struct rte_eth_dev *dev)
 		intr_handle->intr_vec = NULL;
 	}
 
+	ice_dcf_add_del_all_mac_addr(&dcf_ad->real_hw, false);
 	dev->data->dev_link.link_status = ETH_LINK_DOWN;
 	ad->pf.adapter_stopped = 1;
 }
-- 
2.17.1


^ permalink raw reply	[flat|nested] 65+ messages in thread

* [dpdk-dev] [PATCH v2 12/12] doc: enable DCF datapath configuration
  2020-06-11 17:08 ` [dpdk-dev] [PATCH v2 00/12] " Ting Xu
                     ` (10 preceding siblings ...)
  2020-06-11 17:08   ` [dpdk-dev] [PATCH v2 11/12] net/ice: set MAC filter during dev start " Ting Xu
@ 2020-06-11 17:08   ` Ting Xu
  11 siblings, 0 replies; 65+ messages in thread
From: Ting Xu @ 2020-06-11 17:08 UTC (permalink / raw)
  To: dev; +Cc: xiaolong.ye, qi.z.zhang, qiming.yang, john.mcnamara, marko.kovacevic

Add doc for DCF datapath configuration in DPDK 20.08 release note.

Signed-off-by: Ting Xu <ting.xu@intel.com>
---
 doc/guides/rel_notes/release_20_08.rst | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/doc/guides/rel_notes/release_20_08.rst b/doc/guides/rel_notes/release_20_08.rst
index dee4ccbb5..1a3a4cdb2 100644
--- a/doc/guides/rel_notes/release_20_08.rst
+++ b/doc/guides/rel_notes/release_20_08.rst
@@ -56,6 +56,12 @@ New Features
      Also, make sure to start the actual text at the margin.
      =========================================================
 
+* **Updated the Intel ice driver.**
+
+  Updated the Intel ice driver with new features and improvements, including:
+
+  * Added support for DCF datapath configuration.
+
 * **Updated Mellanox mlx5 driver.**
 
   Updated Mellanox mlx5 driver with new features and improvements, including:
-- 
2.17.1


^ permalink raw reply	[flat|nested] 65+ messages in thread

* [dpdk-dev] [PATCH v4 00/12] enable DCF datapath configuration
  2020-06-05 20:17 [dpdk-dev] [PATCH v1 00/12] enable DCF datapath configuration Ting Xu
                   ` (12 preceding siblings ...)
  2020-06-11 17:08 ` [dpdk-dev] [PATCH v2 00/12] " Ting Xu
@ 2020-06-19  8:50 ` Ting Xu
  2020-06-19  8:50   ` [dpdk-dev] [PATCH v4 01/12] net/ice: init RSS and supported RXDID in DCF Ting Xu
                     ` (11 more replies)
  2020-06-23  2:38 ` [dpdk-dev] [PATCH v5 00/12] " Ting Xu
  14 siblings, 12 replies; 65+ messages in thread
From: Ting Xu @ 2020-06-19  8:50 UTC (permalink / raw)
  To: dev
  Cc: qi.z.zhang, qiming.yang, jingjing.wu, beilei.xing,
	marko.kovacevic, john.mcnamara, Xu Ting

From: Xu Ting <ting.xu@intel.com>

This patchset adds support to configure DCF datapath, including
Rx/Tx queues setup, start and stop, device configuration, RSS
and flexible descriptor RXDID initialization and MAC filter setup.

Qi Zhang (11):
  net/ice: init RSS and supported RXDID in DCF
  net/ice: complete device info get in DCF
  net/ice: complete dev configure in DCF
  net/ice: complete queue setup in DCF
  net/ice: add stop flag for device start / stop
  net/ice: add Rx queue init in DCF
  net/ice: init RSS during DCF start
  net/ice: add queue config in DCF
  net/ice: add queue start and stop for DCF
  net/ice: enable stats for DCF
  net/ice: set MAC filter during dev start for DCF

Ting Xu (1):
  doc: enable DCF datapath configuration

---
v3->v4:
Clean codes based on comments

v2->v3:
Correct coding style issue

v1->v2:
Optimize coding style
Correct some return values
Add support to stop started queues when queue start failed

 doc/guides/rel_notes/release_20_08.rst |   6 +
 drivers/net/ice/ice_dcf.c              | 408 ++++++++++++-
 drivers/net/ice/ice_dcf.h              |  17 +
 drivers/net/ice/ice_dcf_ethdev.c       | 773 +++++++++++++++++++++++--
 drivers/net/ice/ice_dcf_ethdev.h       |   3 -
 drivers/net/ice/ice_dcf_parent.c       |   8 +
 6 files changed, 1162 insertions(+), 53 deletions(-)

-- 
2.17.1


^ permalink raw reply	[flat|nested] 65+ messages in thread

* [dpdk-dev] [PATCH v4 01/12] net/ice: init RSS and supported RXDID in DCF
  2020-06-19  8:50 ` [dpdk-dev] [PATCH v4 00/12] " Ting Xu
@ 2020-06-19  8:50   ` Ting Xu
  2020-06-19  8:50   ` [dpdk-dev] [PATCH v4 02/12] net/ice: complete device info get " Ting Xu
                     ` (10 subsequent siblings)
  11 siblings, 0 replies; 65+ messages in thread
From: Ting Xu @ 2020-06-19  8:50 UTC (permalink / raw)
  To: dev
  Cc: qi.z.zhang, qiming.yang, jingjing.wu, beilei.xing,
	marko.kovacevic, john.mcnamara, Ting Xu

From: Qi Zhang <qi.z.zhang@intel.com>

Enable RSS parameters initialization and get the supported
flexible descriptor RXDIDs bitmap from PF during DCF init.

Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Signed-off-by: Ting Xu <ting.xu@intel.com>
---
 drivers/net/ice/ice_dcf.c | 54 ++++++++++++++++++++++++++++++++++++++-
 drivers/net/ice/ice_dcf.h |  3 +++
 2 files changed, 56 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c
index 0cd5d1bf6..93fabd5f7 100644
--- a/drivers/net/ice/ice_dcf.c
+++ b/drivers/net/ice/ice_dcf.c
@@ -233,7 +233,7 @@ ice_dcf_get_vf_resource(struct ice_dcf_hw *hw)
 
 	caps = VIRTCHNL_VF_OFFLOAD_WB_ON_ITR | VIRTCHNL_VF_OFFLOAD_RX_POLLING |
 	       VIRTCHNL_VF_CAP_ADV_LINK_SPEED | VIRTCHNL_VF_CAP_DCF |
-	       VF_BASE_MODE_OFFLOADS;
+	       VF_BASE_MODE_OFFLOADS | VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC;
 
 	err = ice_dcf_send_cmd_req_no_irq(hw, VIRTCHNL_OP_GET_VF_RESOURCES,
 					  (uint8_t *)&caps, sizeof(caps));
@@ -547,6 +547,30 @@ ice_dcf_handle_vsi_update_event(struct ice_dcf_hw *hw)
 	return err;
 }
 
+static int
+ice_dcf_get_supported_rxdid(struct ice_dcf_hw *hw)
+{
+	int err;
+
+	err = ice_dcf_send_cmd_req_no_irq(hw,
+					  VIRTCHNL_OP_GET_SUPPORTED_RXDIDS,
+					  NULL, 0);
+	if (err) {
+		PMD_INIT_LOG(ERR, "Failed to send OP_GET_SUPPORTED_RXDIDS");
+		return -1;
+	}
+
+	err = ice_dcf_recv_cmd_rsp_no_irq(hw, VIRTCHNL_OP_GET_SUPPORTED_RXDIDS,
+					  (uint8_t *)&hw->supported_rxdid,
+					  sizeof(uint64_t), NULL);
+	if (err) {
+		PMD_INIT_LOG(ERR, "Failed to get response of OP_GET_SUPPORTED_RXDIDS");
+		return -1;
+	}
+
+	return 0;
+}
+
 int
 ice_dcf_init_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw)
 {
@@ -620,6 +644,29 @@ ice_dcf_init_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw)
 		goto err_alloc;
 	}
 
+	/* Allocate memory for RSS info */
+	if (hw->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_RSS_PF) {
+		hw->rss_key = rte_zmalloc(NULL,
+					  hw->vf_res->rss_key_size, 0);
+		if (!hw->rss_key) {
+			PMD_INIT_LOG(ERR, "unable to allocate rss_key memory");
+			goto err_alloc;
+		}
+		hw->rss_lut = rte_zmalloc("rss_lut",
+					  hw->vf_res->rss_lut_size, 0);
+		if (!hw->rss_lut) {
+			PMD_INIT_LOG(ERR, "unable to allocate rss_lut memory");
+			goto err_rss;
+		}
+	}
+
+	if (hw->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC) {
+		if (ice_dcf_get_supported_rxdid(hw) != 0) {
+			PMD_INIT_LOG(ERR, "failed to do get supported rxdid");
+			goto err_rss;
+		}
+	}
+
 	hw->eth_dev = eth_dev;
 	rte_intr_callback_register(&pci_dev->intr_handle,
 				   ice_dcf_dev_interrupt_handler, hw);
@@ -628,6 +675,9 @@ ice_dcf_init_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw)
 
 	return 0;
 
+err_rss:
+	rte_free(hw->rss_key);
+	rte_free(hw->rss_lut);
 err_alloc:
 	rte_free(hw->vf_res);
 err_api:
@@ -655,4 +705,6 @@ ice_dcf_uninit_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw)
 	rte_free(hw->arq_buf);
 	rte_free(hw->vf_vsi_map);
 	rte_free(hw->vf_res);
+	rte_free(hw->rss_lut);
+	rte_free(hw->rss_key);
 }
diff --git a/drivers/net/ice/ice_dcf.h b/drivers/net/ice/ice_dcf.h
index d2e447b48..152266e3c 100644
--- a/drivers/net/ice/ice_dcf.h
+++ b/drivers/net/ice/ice_dcf.h
@@ -50,6 +50,9 @@ struct ice_dcf_hw {
 	uint16_t vsi_id;
 
 	struct rte_eth_dev *eth_dev;
+	uint8_t *rss_lut;
+	uint8_t *rss_key;
+	uint64_t supported_rxdid;
 };
 
 int ice_dcf_execute_virtchnl_cmd(struct ice_dcf_hw *hw,
-- 
2.17.1


^ permalink raw reply	[flat|nested] 65+ messages in thread

* [dpdk-dev] [PATCH v4 02/12] net/ice: complete device info get in DCF
  2020-06-19  8:50 ` [dpdk-dev] [PATCH v4 00/12] " Ting Xu
  2020-06-19  8:50   ` [dpdk-dev] [PATCH v4 01/12] net/ice: init RSS and supported RXDID in DCF Ting Xu
@ 2020-06-19  8:50   ` Ting Xu
  2020-06-19  8:50   ` [dpdk-dev] [PATCH v4 03/12] net/ice: complete dev configure " Ting Xu
                     ` (9 subsequent siblings)
  11 siblings, 0 replies; 65+ messages in thread
From: Ting Xu @ 2020-06-19  8:50 UTC (permalink / raw)
  To: dev
  Cc: qi.z.zhang, qiming.yang, jingjing.wu, beilei.xing,
	marko.kovacevic, john.mcnamara, Ting Xu

From: Qi Zhang <qi.z.zhang@intel.com>

Add support to get complete device information for DCF, including
Rx/Tx offload capabilities and default configuration.

Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Signed-off-by: Ting Xu <ting.xu@intel.com>
---
 drivers/net/ice/ice_dcf_ethdev.c | 70 ++++++++++++++++++++++++++++++--
 1 file changed, 67 insertions(+), 3 deletions(-)

diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index e5ba1a61f..eb3708191 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -24,6 +24,7 @@
 
 #include "ice_generic_flow.h"
 #include "ice_dcf_ethdev.h"
+#include "ice_rxtx.h"
 
 static uint16_t
 ice_dcf_recv_pkts(__rte_unused void *rx_queue,
@@ -66,11 +67,74 @@ ice_dcf_dev_info_get(struct rte_eth_dev *dev,
 		     struct rte_eth_dev_info *dev_info)
 {
 	struct ice_dcf_adapter *adapter = dev->data->dev_private;
+	struct ice_dcf_hw *hw = &adapter->real_hw;
 
 	dev_info->max_mac_addrs = 1;
-	dev_info->max_rx_pktlen = (uint32_t)-1;
-	dev_info->max_rx_queues = RTE_DIM(adapter->rxqs);
-	dev_info->max_tx_queues = RTE_DIM(adapter->txqs);
+	dev_info->max_rx_queues = hw->vsi_res->num_queue_pairs;
+	dev_info->max_tx_queues = hw->vsi_res->num_queue_pairs;
+	dev_info->min_rx_bufsize = ICE_BUF_SIZE_MIN;
+	dev_info->max_rx_pktlen = ICE_FRAME_SIZE_MAX;
+	dev_info->hash_key_size = hw->vf_res->rss_key_size;
+	dev_info->reta_size = hw->vf_res->rss_lut_size;
+	dev_info->flow_type_rss_offloads = ICE_RSS_OFFLOAD_ALL;
+
+	dev_info->rx_offload_capa =
+		DEV_RX_OFFLOAD_VLAN_STRIP |
+		DEV_RX_OFFLOAD_IPV4_CKSUM |
+		DEV_RX_OFFLOAD_UDP_CKSUM |
+		DEV_RX_OFFLOAD_TCP_CKSUM |
+		DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+		DEV_RX_OFFLOAD_SCATTER |
+		DEV_RX_OFFLOAD_JUMBO_FRAME |
+		DEV_RX_OFFLOAD_VLAN_FILTER |
+		DEV_RX_OFFLOAD_RSS_HASH;
+	dev_info->tx_offload_capa =
+		DEV_TX_OFFLOAD_VLAN_INSERT |
+		DEV_TX_OFFLOAD_IPV4_CKSUM |
+		DEV_TX_OFFLOAD_UDP_CKSUM |
+		DEV_TX_OFFLOAD_TCP_CKSUM |
+		DEV_TX_OFFLOAD_SCTP_CKSUM |
+		DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+		DEV_TX_OFFLOAD_TCP_TSO |
+		DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
+		DEV_TX_OFFLOAD_GRE_TNL_TSO |
+		DEV_TX_OFFLOAD_IPIP_TNL_TSO |
+		DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
+		DEV_TX_OFFLOAD_MULTI_SEGS;
+
+	dev_info->default_rxconf = (struct rte_eth_rxconf) {
+		.rx_thresh = {
+			.pthresh = ICE_DEFAULT_RX_PTHRESH,
+			.hthresh = ICE_DEFAULT_RX_HTHRESH,
+			.wthresh = ICE_DEFAULT_RX_WTHRESH,
+		},
+		.rx_free_thresh = ICE_DEFAULT_RX_FREE_THRESH,
+		.rx_drop_en = 0,
+		.offloads = 0,
+	};
+
+	dev_info->default_txconf = (struct rte_eth_txconf) {
+		.tx_thresh = {
+			.pthresh = ICE_DEFAULT_TX_PTHRESH,
+			.hthresh = ICE_DEFAULT_TX_HTHRESH,
+			.wthresh = ICE_DEFAULT_TX_WTHRESH,
+		},
+		.tx_free_thresh = ICE_DEFAULT_TX_FREE_THRESH,
+		.tx_rs_thresh = ICE_DEFAULT_TX_RSBIT_THRESH,
+		.offloads = 0,
+	};
+
+	dev_info->rx_desc_lim = (struct rte_eth_desc_lim) {
+		.nb_max = ICE_MAX_RING_DESC,
+		.nb_min = ICE_MIN_RING_DESC,
+		.nb_align = ICE_ALIGN_RING_DESC,
+	};
+
+	dev_info->tx_desc_lim = (struct rte_eth_desc_lim) {
+		.nb_max = ICE_MAX_RING_DESC,
+		.nb_min = ICE_MIN_RING_DESC,
+		.nb_align = ICE_ALIGN_RING_DESC,
+	};
 
 	return 0;
 }
-- 
2.17.1


^ permalink raw reply	[flat|nested] 65+ messages in thread

* [dpdk-dev] [PATCH v4 03/12] net/ice: complete dev configure in DCF
  2020-06-19  8:50 ` [dpdk-dev] [PATCH v4 00/12] " Ting Xu
  2020-06-19  8:50   ` [dpdk-dev] [PATCH v4 01/12] net/ice: init RSS and supported RXDID in DCF Ting Xu
  2020-06-19  8:50   ` [dpdk-dev] [PATCH v4 02/12] net/ice: complete device info get " Ting Xu
@ 2020-06-19  8:50   ` Ting Xu
  2020-06-19  8:50   ` [dpdk-dev] [PATCH v4 04/12] net/ice: complete queue setup " Ting Xu
                     ` (8 subsequent siblings)
  11 siblings, 0 replies; 65+ messages in thread
From: Ting Xu @ 2020-06-19  8:50 UTC (permalink / raw)
  To: dev
  Cc: qi.z.zhang, qiming.yang, jingjing.wu, beilei.xing,
	marko.kovacevic, john.mcnamara, Ting Xu

From: Qi Zhang <qi.z.zhang@intel.com>

Enable device configuration function in DCF.

Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Signed-off-by: Ting Xu <ting.xu@intel.com>
---
 drivers/net/ice/ice_dcf_ethdev.c | 11 ++++++++++-
 1 file changed, 10 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index eb3708191..01412ced0 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -57,8 +57,17 @@ ice_dcf_dev_stop(struct rte_eth_dev *dev)
 }
 
 static int
-ice_dcf_dev_configure(__rte_unused struct rte_eth_dev *dev)
+ice_dcf_dev_configure(struct rte_eth_dev *dev)
 {
+	struct ice_dcf_adapter *dcf_ad = dev->data->dev_private;
+	struct ice_adapter *ad = &dcf_ad->parent;
+
+	ad->rx_bulk_alloc_allowed = true;
+	ad->tx_simple_allowed = true;
+
+	if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
+		dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+
 	return 0;
 }
 
-- 
2.17.1


^ permalink raw reply	[flat|nested] 65+ messages in thread

* [dpdk-dev] [PATCH v4 04/12] net/ice: complete queue setup in DCF
  2020-06-19  8:50 ` [dpdk-dev] [PATCH v4 00/12] " Ting Xu
                     ` (2 preceding siblings ...)
  2020-06-19  8:50   ` [dpdk-dev] [PATCH v4 03/12] net/ice: complete dev configure " Ting Xu
@ 2020-06-19  8:50   ` Ting Xu
  2020-06-19  8:50   ` [dpdk-dev] [PATCH v4 05/12] net/ice: add stop flag for device start / stop Ting Xu
                     ` (7 subsequent siblings)
  11 siblings, 0 replies; 65+ messages in thread
From: Ting Xu @ 2020-06-19  8:50 UTC (permalink / raw)
  To: dev
  Cc: qi.z.zhang, qiming.yang, jingjing.wu, beilei.xing,
	marko.kovacevic, john.mcnamara, Ting Xu

From: Qi Zhang <qi.z.zhang@intel.com>

Delete original DCF queue setup functions and use ice
queue setup and release functions instead.

Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Signed-off-by: Ting Xu <ting.xu@intel.com>
---
 drivers/net/ice/ice_dcf_ethdev.c | 42 +++-----------------------------
 drivers/net/ice/ice_dcf_ethdev.h |  3 ---
 drivers/net/ice/ice_dcf_parent.c |  7 ++++++
 3 files changed, 11 insertions(+), 41 deletions(-)

diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index 01412ced0..b07850ece 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -229,11 +229,6 @@ ice_dcf_dev_close(struct rte_eth_dev *dev)
 	ice_dcf_uninit_hw(dev, &adapter->real_hw);
 }
 
-static void
-ice_dcf_queue_release(__rte_unused void *q)
-{
-}
-
 static int
 ice_dcf_link_update(__rte_unused struct rte_eth_dev *dev,
 		    __rte_unused int wait_to_complete)
@@ -241,45 +236,16 @@ ice_dcf_link_update(__rte_unused struct rte_eth_dev *dev,
 	return 0;
 }
 
-static int
-ice_dcf_rx_queue_setup(struct rte_eth_dev *dev,
-		       uint16_t rx_queue_id,
-		       __rte_unused uint16_t nb_rx_desc,
-		       __rte_unused unsigned int socket_id,
-		       __rte_unused const struct rte_eth_rxconf *rx_conf,
-		       __rte_unused struct rte_mempool *mb_pool)
-{
-	struct ice_dcf_adapter *adapter = dev->data->dev_private;
-
-	dev->data->rx_queues[rx_queue_id] = &adapter->rxqs[rx_queue_id];
-
-	return 0;
-}
-
-static int
-ice_dcf_tx_queue_setup(struct rte_eth_dev *dev,
-		       uint16_t tx_queue_id,
-		       __rte_unused uint16_t nb_tx_desc,
-		       __rte_unused unsigned int socket_id,
-		       __rte_unused const struct rte_eth_txconf *tx_conf)
-{
-	struct ice_dcf_adapter *adapter = dev->data->dev_private;
-
-	dev->data->tx_queues[tx_queue_id] = &adapter->txqs[tx_queue_id];
-
-	return 0;
-}
-
 static const struct eth_dev_ops ice_dcf_eth_dev_ops = {
 	.dev_start               = ice_dcf_dev_start,
 	.dev_stop                = ice_dcf_dev_stop,
 	.dev_close               = ice_dcf_dev_close,
 	.dev_configure           = ice_dcf_dev_configure,
 	.dev_infos_get           = ice_dcf_dev_info_get,
-	.rx_queue_setup          = ice_dcf_rx_queue_setup,
-	.tx_queue_setup          = ice_dcf_tx_queue_setup,
-	.rx_queue_release        = ice_dcf_queue_release,
-	.tx_queue_release        = ice_dcf_queue_release,
+	.rx_queue_setup          = ice_rx_queue_setup,
+	.tx_queue_setup          = ice_tx_queue_setup,
+	.rx_queue_release        = ice_rx_queue_release,
+	.tx_queue_release        = ice_tx_queue_release,
 	.link_update             = ice_dcf_link_update,
 	.stats_get               = ice_dcf_stats_get,
 	.stats_reset             = ice_dcf_stats_reset,
diff --git a/drivers/net/ice/ice_dcf_ethdev.h b/drivers/net/ice/ice_dcf_ethdev.h
index e60e808d8..b54528bea 100644
--- a/drivers/net/ice/ice_dcf_ethdev.h
+++ b/drivers/net/ice/ice_dcf_ethdev.h
@@ -19,10 +19,7 @@ struct ice_dcf_queue {
 
 struct ice_dcf_adapter {
 	struct ice_adapter parent; /* Must be first */
-
 	struct ice_dcf_hw real_hw;
-	struct ice_dcf_queue rxqs[ICE_DCF_MAX_RINGS];
-	struct ice_dcf_queue txqs[ICE_DCF_MAX_RINGS];
 };
 
 void ice_dcf_handle_pf_event_msg(struct ice_dcf_hw *dcf_hw,
diff --git a/drivers/net/ice/ice_dcf_parent.c b/drivers/net/ice/ice_dcf_parent.c
index d13e19d78..322a5273f 100644
--- a/drivers/net/ice/ice_dcf_parent.c
+++ b/drivers/net/ice/ice_dcf_parent.c
@@ -335,6 +335,13 @@ ice_dcf_init_parent_adapter(struct rte_eth_dev *eth_dev)
 	parent_adapter->eth_dev = eth_dev;
 	parent_adapter->pf.adapter = parent_adapter;
 	parent_adapter->pf.dev_data = eth_dev->data;
+	/* create a dummy main_vsi */
+	parent_adapter->pf.main_vsi =
+		rte_zmalloc(NULL, sizeof(struct ice_vsi), 0);
+	if (!parent_adapter->pf.main_vsi)
+		return -ENOMEM;
+	parent_adapter->pf.main_vsi->adapter = parent_adapter;
+
 	parent_hw->back = parent_adapter;
 	parent_hw->mac_type = ICE_MAC_GENERIC;
 	parent_hw->vendor_id = ICE_INTEL_VENDOR_ID;
-- 
2.17.1


^ permalink raw reply	[flat|nested] 65+ messages in thread

* [dpdk-dev] [PATCH v4 05/12] net/ice: add stop flag for device start / stop
  2020-06-19  8:50 ` [dpdk-dev] [PATCH v4 00/12] " Ting Xu
                     ` (3 preceding siblings ...)
  2020-06-19  8:50   ` [dpdk-dev] [PATCH v4 04/12] net/ice: complete queue setup " Ting Xu
@ 2020-06-19  8:50   ` Ting Xu
  2020-06-19  8:50   ` [dpdk-dev] [PATCH v4 06/12] net/ice: add Rx queue init in DCF Ting Xu
                     ` (6 subsequent siblings)
  11 siblings, 0 replies; 65+ messages in thread
From: Ting Xu @ 2020-06-19  8:50 UTC (permalink / raw)
  To: dev
  Cc: qi.z.zhang, qiming.yang, jingjing.wu, beilei.xing,
	marko.kovacevic, john.mcnamara, Ting Xu

From: Qi Zhang <qi.z.zhang@intel.com>

Add stop flag for DCF device start and stop.

Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Signed-off-by: Ting Xu <ting.xu@intel.com>
---
 drivers/net/ice/ice_dcf_ethdev.c | 14 ++++++++++++++
 drivers/net/ice/ice_dcf_parent.c |  1 +
 2 files changed, 15 insertions(+)

diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index b07850ece..676a504fd 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -45,6 +45,11 @@ ice_dcf_xmit_pkts(__rte_unused void *tx_queue,
 static int
 ice_dcf_dev_start(struct rte_eth_dev *dev)
 {
+	struct ice_dcf_adapter *dcf_ad = dev->data->dev_private;
+	struct ice_adapter *ad = &dcf_ad->parent;
+
+	ad->pf.adapter_stopped = 0;
+
 	dev->data->dev_link.link_status = ETH_LINK_UP;
 
 	return 0;
@@ -53,7 +58,16 @@ ice_dcf_dev_start(struct rte_eth_dev *dev)
 static void
 ice_dcf_dev_stop(struct rte_eth_dev *dev)
 {
+	struct ice_dcf_adapter *dcf_ad = dev->data->dev_private;
+	struct ice_adapter *ad = &dcf_ad->parent;
+
+	if (ad->pf.adapter_stopped == 1) {
+		PMD_DRV_LOG(DEBUG, "Port is already stopped");
+		return;
+	}
+
 	dev->data->dev_link.link_status = ETH_LINK_DOWN;
+	ad->pf.adapter_stopped = 1;
 }
 
 static int
diff --git a/drivers/net/ice/ice_dcf_parent.c b/drivers/net/ice/ice_dcf_parent.c
index 322a5273f..c5dfdd36e 100644
--- a/drivers/net/ice/ice_dcf_parent.c
+++ b/drivers/net/ice/ice_dcf_parent.c
@@ -341,6 +341,7 @@ ice_dcf_init_parent_adapter(struct rte_eth_dev *eth_dev)
 	if (!parent_adapter->pf.main_vsi)
 		return -ENOMEM;
 	parent_adapter->pf.main_vsi->adapter = parent_adapter;
+	parent_adapter->pf.adapter_stopped = 1;
 
 	parent_hw->back = parent_adapter;
 	parent_hw->mac_type = ICE_MAC_GENERIC;
-- 
2.17.1


^ permalink raw reply	[flat|nested] 65+ messages in thread

* [dpdk-dev] [PATCH v4 06/12] net/ice: add Rx queue init in DCF
  2020-06-19  8:50 ` [dpdk-dev] [PATCH v4 00/12] " Ting Xu
                     ` (4 preceding siblings ...)
  2020-06-19  8:50   ` [dpdk-dev] [PATCH v4 05/12] net/ice: add stop flag for device start / stop Ting Xu
@ 2020-06-19  8:50   ` Ting Xu
  2020-06-19  8:50   ` [dpdk-dev] [PATCH v4 07/12] net/ice: init RSS during DCF start Ting Xu
                     ` (5 subsequent siblings)
  11 siblings, 0 replies; 65+ messages in thread
From: Ting Xu @ 2020-06-19  8:50 UTC (permalink / raw)
  To: dev
  Cc: qi.z.zhang, qiming.yang, jingjing.wu, beilei.xing,
	marko.kovacevic, john.mcnamara, Ting Xu

From: Qi Zhang <qi.z.zhang@intel.com>

Enable Rx queues initialization during device start in DCF.

Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Signed-off-by: Ting Xu <ting.xu@intel.com>
---
 drivers/net/ice/ice_dcf.h        |  1 +
 drivers/net/ice/ice_dcf_ethdev.c | 83 ++++++++++++++++++++++++++++++++
 2 files changed, 84 insertions(+)

diff --git a/drivers/net/ice/ice_dcf.h b/drivers/net/ice/ice_dcf.h
index 152266e3c..dcb2a0283 100644
--- a/drivers/net/ice/ice_dcf.h
+++ b/drivers/net/ice/ice_dcf.h
@@ -53,6 +53,7 @@ struct ice_dcf_hw {
 	uint8_t *rss_lut;
 	uint8_t *rss_key;
 	uint64_t supported_rxdid;
+	uint16_t num_queue_pairs;
 };
 
 int ice_dcf_execute_virtchnl_cmd(struct ice_dcf_hw *hw,
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index 676a504fd..5afd07f96 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -42,14 +42,97 @@ ice_dcf_xmit_pkts(__rte_unused void *tx_queue,
 	return 0;
 }
 
+static int
+ice_dcf_init_rxq(struct rte_eth_dev *dev, struct ice_rx_queue *rxq)
+{
+	struct ice_dcf_adapter *dcf_ad = dev->data->dev_private;
+	struct rte_eth_dev_data *dev_data = dev->data;
+	struct iavf_hw *hw = &dcf_ad->real_hw.avf;
+	uint16_t buf_size, max_pkt_len, len;
+
+	buf_size = rte_pktmbuf_data_room_size(rxq->mp) - RTE_PKTMBUF_HEADROOM;
+	rxq->rx_hdr_len = 0;
+	rxq->rx_buf_len = RTE_ALIGN(buf_size, (1 << ICE_RLAN_CTX_DBUF_S));
+	len = ICE_SUPPORT_CHAIN_NUM * rxq->rx_buf_len;
+	max_pkt_len = RTE_MIN(len, dev->data->dev_conf.rxmode.max_rx_pkt_len);
+
+	/* Check if the jumbo frame and maximum packet length are set
+	 * correctly.
+	 */
+	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+		if (max_pkt_len <= RTE_ETHER_MAX_LEN ||
+		    max_pkt_len > ICE_FRAME_SIZE_MAX) {
+			PMD_DRV_LOG(ERR, "maximum packet length must be "
+				    "larger than %u and smaller than %u, "
+				    "as jumbo frame is enabled",
+				    (uint32_t)RTE_ETHER_MAX_LEN,
+				    (uint32_t)ICE_FRAME_SIZE_MAX);
+			return -EINVAL;
+		}
+	} else {
+		if (max_pkt_len < RTE_ETHER_MIN_LEN ||
+		    max_pkt_len > RTE_ETHER_MAX_LEN) {
+			PMD_DRV_LOG(ERR, "maximum packet length must be "
+				    "larger than %u and smaller than %u, "
+				    "as jumbo frame is disabled",
+				    (uint32_t)RTE_ETHER_MIN_LEN,
+				    (uint32_t)RTE_ETHER_MAX_LEN);
+			return -EINVAL;
+		}
+	}
+
+	rxq->max_pkt_len = max_pkt_len;
+	if ((dev_data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_SCATTER) ||
+	    (rxq->max_pkt_len + 2 * ICE_VLAN_TAG_SIZE) > buf_size) {
+		dev_data->scattered_rx = 1;
+	}
+	rxq->qrx_tail = hw->hw_addr + IAVF_QRX_TAIL1(rxq->queue_id);
+	IAVF_PCI_REG_WRITE(rxq->qrx_tail, rxq->nb_rx_desc - 1);
+	IAVF_WRITE_FLUSH(hw);
+
+	return 0;
+}
+
+static int
+ice_dcf_init_rx_queues(struct rte_eth_dev *dev)
+{
+	struct ice_rx_queue **rxq =
+		(struct ice_rx_queue **)dev->data->rx_queues;
+	int i, ret;
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		if (!rxq[i] || !rxq[i]->q_set)
+			continue;
+		ret = ice_dcf_init_rxq(dev, rxq[i]);
+		if (ret)
+			return ret;
+	}
+
+	ice_set_rx_function(dev);
+	ice_set_tx_function(dev);
+
+	return 0;
+}
+
 static int
 ice_dcf_dev_start(struct rte_eth_dev *dev)
 {
 	struct ice_dcf_adapter *dcf_ad = dev->data->dev_private;
 	struct ice_adapter *ad = &dcf_ad->parent;
+	struct ice_dcf_hw *hw = &dcf_ad->real_hw;
+	int ret;
 
 	ad->pf.adapter_stopped = 0;
 
+	hw->num_queue_pairs = RTE_MAX(dev->data->nb_rx_queues,
+				      dev->data->nb_tx_queues);
+
+	ret = ice_dcf_init_rx_queues(dev);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "Fail to init queues");
+		return ret;
+	}
+
 	dev->data->dev_link.link_status = ETH_LINK_UP;
 
 	return 0;
-- 
2.17.1


^ permalink raw reply	[flat|nested] 65+ messages in thread

* [dpdk-dev] [PATCH v4 07/12] net/ice: init RSS during DCF start
  2020-06-19  8:50 ` [dpdk-dev] [PATCH v4 00/12] " Ting Xu
                     ` (5 preceding siblings ...)
  2020-06-19  8:50   ` [dpdk-dev] [PATCH v4 06/12] net/ice: add Rx queue init in DCF Ting Xu
@ 2020-06-19  8:50   ` Ting Xu
  2020-06-19  8:50   ` [dpdk-dev] [PATCH v4 08/12] net/ice: add queue config in DCF Ting Xu
                     ` (4 subsequent siblings)
  11 siblings, 0 replies; 65+ messages in thread
From: Ting Xu @ 2020-06-19  8:50 UTC (permalink / raw)
  To: dev
  Cc: qi.z.zhang, qiming.yang, jingjing.wu, beilei.xing,
	marko.kovacevic, john.mcnamara, Ting Xu

From: Qi Zhang <qi.z.zhang@intel.com>

Enable RSS initialization during DCF start. Add RSS LUT and
RSS key configuration functions.

Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Signed-off-by: Ting Xu <ting.xu@intel.com>
---
 drivers/net/ice/ice_dcf.c        | 117 +++++++++++++++++++++++++++++++
 drivers/net/ice/ice_dcf.h        |   1 +
 drivers/net/ice/ice_dcf_ethdev.c |   8 +++
 3 files changed, 126 insertions(+)

diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c
index 93fabd5f7..f285323dd 100644
--- a/drivers/net/ice/ice_dcf.c
+++ b/drivers/net/ice/ice_dcf.c
@@ -708,3 +708,120 @@ ice_dcf_uninit_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw)
 	rte_free(hw->rss_lut);
 	rte_free(hw->rss_key);
 }
+
+static int
+ice_dcf_configure_rss_key(struct ice_dcf_hw *hw)
+{
+	struct virtchnl_rss_key *rss_key;
+	struct dcf_virtchnl_cmd args;
+	int len, err;
+
+	len = sizeof(*rss_key) + hw->vf_res->rss_key_size - 1;
+	rss_key = rte_zmalloc("rss_key", len, 0);
+	if (!rss_key)
+		return -ENOMEM;
+
+	rss_key->vsi_id = hw->vsi_res->vsi_id;
+	rss_key->key_len = hw->vf_res->rss_key_size;
+	rte_memcpy(rss_key->key, hw->rss_key, hw->vf_res->rss_key_size);
+
+	args.v_op = VIRTCHNL_OP_CONFIG_RSS_KEY;
+	args.req_msglen = len;
+	args.req_msg = (uint8_t *)rss_key;
+	args.rsp_msglen = 0;
+	args.rsp_buflen = 0;
+	args.rsp_msgbuf = NULL;
+	args.pending = 0;
+
+	err = ice_dcf_execute_virtchnl_cmd(hw, &args);
+	if (err)
+		PMD_INIT_LOG(ERR, "Failed to execute OP_CONFIG_RSS_KEY");
+
+	rte_free(rss_key);
+	return err;
+}
+
+static int
+ice_dcf_configure_rss_lut(struct ice_dcf_hw *hw)
+{
+	struct virtchnl_rss_lut *rss_lut;
+	struct dcf_virtchnl_cmd args;
+	int len, err;
+
+	len = sizeof(*rss_lut) + hw->vf_res->rss_lut_size - 1;
+	rss_lut = rte_zmalloc("rss_lut", len, 0);
+	if (!rss_lut)
+		return -ENOMEM;
+
+	rss_lut->vsi_id = hw->vsi_res->vsi_id;
+	rss_lut->lut_entries = hw->vf_res->rss_lut_size;
+	rte_memcpy(rss_lut->lut, hw->rss_lut, hw->vf_res->rss_lut_size);
+
+	args.v_op = VIRTCHNL_OP_CONFIG_RSS_LUT;
+	args.req_msglen = len;
+	args.req_msg = (uint8_t *)rss_lut;
+	args.rsp_msglen = 0;
+	args.rsp_buflen = 0;
+	args.rsp_msgbuf = NULL;
+	args.pending = 0;
+
+	err = ice_dcf_execute_virtchnl_cmd(hw, &args);
+	if (err)
+		PMD_INIT_LOG(ERR, "Failed to execute OP_CONFIG_RSS_LUT");
+
+	rte_free(rss_lut);
+	return err;
+}
+
+int
+ice_dcf_init_rss(struct ice_dcf_hw *hw)
+{
+	struct rte_eth_dev *dev = hw->eth_dev;
+	struct rte_eth_rss_conf *rss_conf;
+	uint8_t i, j, nb_q;
+	int ret;
+
+	rss_conf = &dev->data->dev_conf.rx_adv_conf.rss_conf;
+	nb_q = dev->data->nb_rx_queues;
+
+	if (!(hw->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_RSS_PF)) {
+		PMD_DRV_LOG(DEBUG, "RSS is not supported");
+		return -ENOTSUP;
+	}
+	if (dev->data->dev_conf.rxmode.mq_mode != ETH_MQ_RX_RSS) {
+		PMD_DRV_LOG(WARNING, "RSS is enabled by PF by default");
+		/* set all lut items to default queue */
+		memset(hw->rss_lut, 0, hw->vf_res->rss_lut_size);
+		return ice_dcf_configure_rss_lut(hw);
+	}
+
+	/* In IAVF, RSS enablement is set by PF driver. It is not supported
+	 * to set based on rss_conf->rss_hf.
+	 */
+
+	/* configure RSS key */
+	if (!rss_conf->rss_key)
+		/* Calculate the default hash key */
+		for (i = 0; i < hw->vf_res->rss_key_size; i++)
+			hw->rss_key[i] = (uint8_t)rte_rand();
+	else
+		rte_memcpy(hw->rss_key, rss_conf->rss_key,
+			   RTE_MIN(rss_conf->rss_key_len,
+				   hw->vf_res->rss_key_size));
+
+	/* init RSS LUT table */
+	for (i = 0, j = 0; i < hw->vf_res->rss_lut_size; i++, j++) {
+		if (j >= nb_q)
+			j = 0;
+		hw->rss_lut[i] = j;
+	}
+	/* send virtchnnl ops to configure rss*/
+	ret = ice_dcf_configure_rss_lut(hw);
+	if (ret)
+		return ret;
+	ret = ice_dcf_configure_rss_key(hw);
+	if (ret)
+		return ret;
+
+	return 0;
+}
diff --git a/drivers/net/ice/ice_dcf.h b/drivers/net/ice/ice_dcf.h
index dcb2a0283..eea4b286b 100644
--- a/drivers/net/ice/ice_dcf.h
+++ b/drivers/net/ice/ice_dcf.h
@@ -63,5 +63,6 @@ int ice_dcf_send_aq_cmd(void *dcf_hw, struct ice_aq_desc *desc,
 int ice_dcf_handle_vsi_update_event(struct ice_dcf_hw *hw);
 int ice_dcf_init_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw);
 void ice_dcf_uninit_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw);
+int ice_dcf_init_rss(struct ice_dcf_hw *hw);
 
 #endif /* _ICE_DCF_H_ */
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index 5afd07f96..e2ab7e637 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -133,6 +133,14 @@ ice_dcf_dev_start(struct rte_eth_dev *dev)
 		return ret;
 	}
 
+	if (hw->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_RSS_PF) {
+		ret = ice_dcf_init_rss(hw);
+		if (ret) {
+			PMD_DRV_LOG(ERR, "Failed to configure RSS");
+			return ret;
+		}
+	}
+
 	dev->data->dev_link.link_status = ETH_LINK_UP;
 
 	return 0;
-- 
2.17.1


^ permalink raw reply	[flat|nested] 65+ messages in thread

* [dpdk-dev] [PATCH v4 08/12] net/ice: add queue config in DCF
  2020-06-19  8:50 ` [dpdk-dev] [PATCH v4 00/12] " Ting Xu
                     ` (6 preceding siblings ...)
  2020-06-19  8:50   ` [dpdk-dev] [PATCH v4 07/12] net/ice: init RSS during DCF start Ting Xu
@ 2020-06-19  8:50   ` Ting Xu
  2020-06-19  8:50   ` [dpdk-dev] [PATCH v4 09/12] net/ice: add queue start and stop for DCF Ting Xu
                     ` (3 subsequent siblings)
  11 siblings, 0 replies; 65+ messages in thread
From: Ting Xu @ 2020-06-19  8:50 UTC (permalink / raw)
  To: dev
  Cc: qi.z.zhang, qiming.yang, jingjing.wu, beilei.xing,
	marko.kovacevic, john.mcnamara, Ting Xu

From: Qi Zhang <qi.z.zhang@intel.com>

Add queues and Rx queue irqs configuration during device start
in DCF. The setup is sent to PF via virtchnl.

Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Signed-off-by: Ting Xu <ting.xu@intel.com>
---
 drivers/net/ice/ice_dcf.c        | 111 +++++++++++++++++++++++++++
 drivers/net/ice/ice_dcf.h        |   6 ++
 drivers/net/ice/ice_dcf_ethdev.c | 126 +++++++++++++++++++++++++++++++
 3 files changed, 243 insertions(+)

diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c
index f285323dd..8869e0d1c 100644
--- a/drivers/net/ice/ice_dcf.c
+++ b/drivers/net/ice/ice_dcf.c
@@ -24,6 +24,7 @@
 #include <rte_dev.h>
 
 #include "ice_dcf.h"
+#include "ice_rxtx.h"
 
 #define ICE_DCF_AQ_LEN     32
 #define ICE_DCF_AQ_BUF_SZ  4096
@@ -825,3 +826,113 @@ ice_dcf_init_rss(struct ice_dcf_hw *hw)
 
 	return 0;
 }
+
+#define IAVF_RXDID_LEGACY_1 1
+#define IAVF_RXDID_COMMS_GENERIC 16
+
+int
+ice_dcf_configure_queues(struct ice_dcf_hw *hw)
+{
+	struct ice_rx_queue **rxq =
+		(struct ice_rx_queue **)hw->eth_dev->data->rx_queues;
+	struct ice_tx_queue **txq =
+		(struct ice_tx_queue **)hw->eth_dev->data->tx_queues;
+	struct virtchnl_vsi_queue_config_info *vc_config;
+	struct virtchnl_queue_pair_info *vc_qp;
+	struct dcf_virtchnl_cmd args;
+	uint16_t i, size;
+	int err;
+
+	size = sizeof(*vc_config) +
+	       sizeof(vc_config->qpair[0]) * hw->num_queue_pairs;
+	vc_config = rte_zmalloc("cfg_queue", size, 0);
+	if (!vc_config)
+		return -ENOMEM;
+
+	vc_config->vsi_id = hw->vsi_res->vsi_id;
+	vc_config->num_queue_pairs = hw->num_queue_pairs;
+
+	for (i = 0, vc_qp = vc_config->qpair;
+	     i < hw->num_queue_pairs;
+	     i++, vc_qp++) {
+		vc_qp->txq.vsi_id = hw->vsi_res->vsi_id;
+		vc_qp->txq.queue_id = i;
+		if (i < hw->eth_dev->data->nb_tx_queues) {
+			vc_qp->txq.ring_len = txq[i]->nb_tx_desc;
+			vc_qp->txq.dma_ring_addr = txq[i]->tx_ring_dma;
+		}
+		vc_qp->rxq.vsi_id = hw->vsi_res->vsi_id;
+		vc_qp->rxq.queue_id = i;
+		vc_qp->rxq.max_pkt_size = rxq[i]->max_pkt_len;
+
+		if (i >= hw->eth_dev->data->nb_rx_queues)
+			continue;
+
+		vc_qp->rxq.ring_len = rxq[i]->nb_rx_desc;
+		vc_qp->rxq.dma_ring_addr = rxq[i]->rx_ring_dma;
+		vc_qp->rxq.databuffer_size = rxq[i]->rx_buf_len;
+
+		if (hw->vf_res->vf_cap_flags &
+		    VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC &&
+		    hw->supported_rxdid &
+		    BIT(IAVF_RXDID_COMMS_GENERIC)) {
+			vc_qp->rxq.rxdid = IAVF_RXDID_COMMS_GENERIC;
+			PMD_DRV_LOG(NOTICE, "request RXDID == %d in "
+				    "Queue[%d]", vc_qp->rxq.rxdid, i);
+		} else {
+			PMD_DRV_LOG(ERR, "RXDID 16 is not supported");
+			return -EINVAL;
+		}
+	}
+
+	memset(&args, 0, sizeof(args));
+	args.v_op = VIRTCHNL_OP_CONFIG_VSI_QUEUES;
+	args.req_msg = (uint8_t *)vc_config;
+	args.req_msglen = size;
+
+	err = ice_dcf_execute_virtchnl_cmd(hw, &args);
+	if (err)
+		PMD_DRV_LOG(ERR, "Failed to execute command of"
+			    " VIRTCHNL_OP_CONFIG_VSI_QUEUES");
+
+	rte_free(vc_config);
+	return err;
+}
+
+int
+ice_dcf_config_irq_map(struct ice_dcf_hw *hw)
+{
+	struct virtchnl_irq_map_info *map_info;
+	struct virtchnl_vector_map *vecmap;
+	struct dcf_virtchnl_cmd args;
+	int len, i, err;
+
+	len = sizeof(struct virtchnl_irq_map_info) +
+	      sizeof(struct virtchnl_vector_map) * hw->nb_msix;
+
+	map_info = rte_zmalloc("map_info", len, 0);
+	if (!map_info)
+		return -ENOMEM;
+
+	map_info->num_vectors = hw->nb_msix;
+	for (i = 0; i < hw->nb_msix; i++) {
+		vecmap = &map_info->vecmap[i];
+		vecmap->vsi_id = hw->vsi_res->vsi_id;
+		vecmap->rxitr_idx = 0;
+		vecmap->vector_id = hw->msix_base + i;
+		vecmap->txq_map = 0;
+		vecmap->rxq_map = hw->rxq_map[hw->msix_base + i];
+	}
+
+	memset(&args, 0, sizeof(args));
+	args.v_op = VIRTCHNL_OP_CONFIG_IRQ_MAP;
+	args.req_msg = (u8 *)map_info;
+	args.req_msglen = len;
+
+	err = ice_dcf_execute_virtchnl_cmd(hw, &args);
+	if (err)
+		PMD_DRV_LOG(ERR, "fail to execute command OP_CONFIG_IRQ_MAP");
+
+	rte_free(map_info);
+	return err;
+}
diff --git a/drivers/net/ice/ice_dcf.h b/drivers/net/ice/ice_dcf.h
index eea4b286b..9470d1df7 100644
--- a/drivers/net/ice/ice_dcf.h
+++ b/drivers/net/ice/ice_dcf.h
@@ -54,6 +54,10 @@ struct ice_dcf_hw {
 	uint8_t *rss_key;
 	uint64_t supported_rxdid;
 	uint16_t num_queue_pairs;
+
+	uint16_t msix_base;
+	uint16_t nb_msix;
+	uint16_t rxq_map[16];
 };
 
 int ice_dcf_execute_virtchnl_cmd(struct ice_dcf_hw *hw,
@@ -64,5 +68,7 @@ int ice_dcf_handle_vsi_update_event(struct ice_dcf_hw *hw);
 int ice_dcf_init_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw);
 void ice_dcf_uninit_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw);
 int ice_dcf_init_rss(struct ice_dcf_hw *hw);
+int ice_dcf_configure_queues(struct ice_dcf_hw *hw);
+int ice_dcf_config_irq_map(struct ice_dcf_hw *hw);
 
 #endif /* _ICE_DCF_H_ */
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index e2ab7e637..a190ab7c1 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -114,10 +114,124 @@ ice_dcf_init_rx_queues(struct rte_eth_dev *dev)
 	return 0;
 }
 
+#define IAVF_MISC_VEC_ID                RTE_INTR_VEC_ZERO_OFFSET
+#define IAVF_RX_VEC_START               RTE_INTR_VEC_RXTX_OFFSET
+
+#define IAVF_ITR_INDEX_DEFAULT          0
+#define IAVF_QUEUE_ITR_INTERVAL_DEFAULT 32 /* 32 us */
+#define IAVF_QUEUE_ITR_INTERVAL_MAX     8160 /* 8160 us */
+
+static inline uint16_t
+iavf_calc_itr_interval(int16_t interval)
+{
+	if (interval < 0 || interval > IAVF_QUEUE_ITR_INTERVAL_MAX)
+		interval = IAVF_QUEUE_ITR_INTERVAL_DEFAULT;
+
+	/* Convert to hardware count, as writing each 1 represents 2 us */
+	return interval / 2;
+}
+
+static int
+ice_dcf_config_rx_queues_irqs(struct rte_eth_dev *dev,
+				     struct rte_intr_handle *intr_handle)
+{
+	struct ice_dcf_adapter *adapter = dev->data->dev_private;
+	struct ice_dcf_hw *hw = &adapter->real_hw;
+	uint16_t interval, i;
+	int vec;
+
+	if (rte_intr_cap_multiple(intr_handle) &&
+	    dev->data->dev_conf.intr_conf.rxq) {
+		if (rte_intr_efd_enable(intr_handle, dev->data->nb_rx_queues))
+			return -1;
+	}
+
+	if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
+		intr_handle->intr_vec =
+			rte_zmalloc("intr_vec",
+				    dev->data->nb_rx_queues * sizeof(int), 0);
+		if (!intr_handle->intr_vec) {
+			PMD_DRV_LOG(ERR, "Failed to allocate %d rx intr_vec",
+				    dev->data->nb_rx_queues);
+			return -1;
+		}
+	}
+
+	if (!dev->data->dev_conf.intr_conf.rxq ||
+	    !rte_intr_dp_is_en(intr_handle)) {
+		/* Rx interrupt disabled, Map interrupt only for writeback */
+		hw->nb_msix = 1;
+		if (hw->vf_res->vf_cap_flags &
+		    VIRTCHNL_VF_OFFLOAD_WB_ON_ITR) {
+			/* If WB_ON_ITR supports, enable it */
+			hw->msix_base = IAVF_RX_VEC_START;
+			IAVF_WRITE_REG(&hw->avf,
+				       IAVF_VFINT_DYN_CTLN1(hw->msix_base - 1),
+				       IAVF_VFINT_DYN_CTLN1_ITR_INDX_MASK |
+				       IAVF_VFINT_DYN_CTLN1_WB_ON_ITR_MASK);
+		} else {
+			/* If no WB_ON_ITR offload flags, need to set
+			 * interrupt for descriptor write back.
+			 */
+			hw->msix_base = IAVF_MISC_VEC_ID;
+
+			/* set ITR to max */
+			interval =
+			iavf_calc_itr_interval(IAVF_QUEUE_ITR_INTERVAL_MAX);
+			IAVF_WRITE_REG(&hw->avf, IAVF_VFINT_DYN_CTL01,
+				       IAVF_VFINT_DYN_CTL01_INTENA_MASK |
+				       (IAVF_ITR_INDEX_DEFAULT <<
+					IAVF_VFINT_DYN_CTL01_ITR_INDX_SHIFT) |
+				       (interval <<
+					IAVF_VFINT_DYN_CTL01_INTERVAL_SHIFT));
+		}
+		IAVF_WRITE_FLUSH(&hw->avf);
+		/* map all queues to the same interrupt */
+		for (i = 0; i < dev->data->nb_rx_queues; i++)
+			hw->rxq_map[hw->msix_base] |= 1 << i;
+	} else {
+		if (!rte_intr_allow_others(intr_handle)) {
+			hw->nb_msix = 1;
+			hw->msix_base = IAVF_MISC_VEC_ID;
+			for (i = 0; i < dev->data->nb_rx_queues; i++) {
+				hw->rxq_map[hw->msix_base] |= 1 << i;
+				intr_handle->intr_vec[i] = IAVF_MISC_VEC_ID;
+			}
+			PMD_DRV_LOG(DEBUG,
+				    "vector %u are mapping to all Rx queues",
+				    hw->msix_base);
+		} else {
+			/* If Rx interrupt is reuquired, and we can use
+			 * multi interrupts, then the vec is from 1
+			 */
+			hw->nb_msix = RTE_MIN(hw->vf_res->max_vectors,
+					      intr_handle->nb_efd);
+			hw->msix_base = IAVF_MISC_VEC_ID;
+			vec = IAVF_MISC_VEC_ID;
+			for (i = 0; i < dev->data->nb_rx_queues; i++) {
+				hw->rxq_map[vec] |= 1 << i;
+				intr_handle->intr_vec[i] = vec++;
+				if (vec >= hw->nb_msix)
+					vec = IAVF_RX_VEC_START;
+			}
+			PMD_DRV_LOG(DEBUG,
+				    "%u vectors are mapping to %u Rx queues",
+				    hw->nb_msix, dev->data->nb_rx_queues);
+		}
+	}
+
+	if (ice_dcf_config_irq_map(hw)) {
+		PMD_DRV_LOG(ERR, "config interrupt mapping failed");
+		return -1;
+	}
+	return 0;
+}
+
 static int
 ice_dcf_dev_start(struct rte_eth_dev *dev)
 {
 	struct ice_dcf_adapter *dcf_ad = dev->data->dev_private;
+	struct rte_intr_handle *intr_handle = dev->intr_handle;
 	struct ice_adapter *ad = &dcf_ad->parent;
 	struct ice_dcf_hw *hw = &dcf_ad->real_hw;
 	int ret;
@@ -141,6 +255,18 @@ ice_dcf_dev_start(struct rte_eth_dev *dev)
 		}
 	}
 
+	ret = ice_dcf_configure_queues(hw);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "Fail to config queues");
+		return ret;
+	}
+
+	ret = ice_dcf_config_rx_queues_irqs(dev, intr_handle);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "Fail to config rx queues' irqs");
+		return ret;
+	}
+
 	dev->data->dev_link.link_status = ETH_LINK_UP;
 
 	return 0;
-- 
2.17.1


^ permalink raw reply	[flat|nested] 65+ messages in thread

* [dpdk-dev] [PATCH v4 09/12] net/ice: add queue start and stop for DCF
  2020-06-19  8:50 ` [dpdk-dev] [PATCH v4 00/12] " Ting Xu
                     ` (7 preceding siblings ...)
  2020-06-19  8:50   ` [dpdk-dev] [PATCH v4 08/12] net/ice: add queue config in DCF Ting Xu
@ 2020-06-19  8:50   ` Ting Xu
  2020-06-19  8:50   ` [dpdk-dev] [PATCH v4 10/12] net/ice: enable stats " Ting Xu
                     ` (2 subsequent siblings)
  11 siblings, 0 replies; 65+ messages in thread
From: Ting Xu @ 2020-06-19  8:50 UTC (permalink / raw)
  To: dev
  Cc: qi.z.zhang, qiming.yang, jingjing.wu, beilei.xing,
	marko.kovacevic, john.mcnamara, Ting Xu

From: Qi Zhang <qi.z.zhang@intel.com>

Add queue start and stop in DCF. Support queue enable and disable
through virtual channel. Add support for Rx queue mbufs allocation
and queue reset.

Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Signed-off-by: Ting Xu <ting.xu@intel.com>
---
 drivers/net/ice/ice_dcf.c        |  57 ++++++
 drivers/net/ice/ice_dcf.h        |   3 +-
 drivers/net/ice/ice_dcf_ethdev.c | 322 +++++++++++++++++++++++++++++++
 3 files changed, 381 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c
index 8869e0d1c..f18c0f16a 100644
--- a/drivers/net/ice/ice_dcf.c
+++ b/drivers/net/ice/ice_dcf.c
@@ -936,3 +936,60 @@ ice_dcf_config_irq_map(struct ice_dcf_hw *hw)
 	rte_free(map_info);
 	return err;
 }
+
+int
+ice_dcf_switch_queue(struct ice_dcf_hw *hw, uint16_t qid, bool rx, bool on)
+{
+	struct virtchnl_queue_select queue_select;
+	struct dcf_virtchnl_cmd args;
+	int err;
+
+	memset(&queue_select, 0, sizeof(queue_select));
+	queue_select.vsi_id = hw->vsi_res->vsi_id;
+	if (rx)
+		queue_select.rx_queues |= 1 << qid;
+	else
+		queue_select.tx_queues |= 1 << qid;
+
+	memset(&args, 0, sizeof(args));
+	if (on)
+		args.v_op = VIRTCHNL_OP_ENABLE_QUEUES;
+	else
+		args.v_op = VIRTCHNL_OP_DISABLE_QUEUES;
+
+	args.req_msg = (u8 *)&queue_select;
+	args.req_msglen = sizeof(queue_select);
+
+	err = ice_dcf_execute_virtchnl_cmd(hw, &args);
+	if (err)
+		PMD_DRV_LOG(ERR, "Failed to execute command of %s",
+			    on ? "OP_ENABLE_QUEUES" : "OP_DISABLE_QUEUES");
+
+	return err;
+}
+
+int
+ice_dcf_disable_queues(struct ice_dcf_hw *hw)
+{
+	struct virtchnl_queue_select queue_select;
+	struct dcf_virtchnl_cmd args;
+	int err;
+
+	memset(&queue_select, 0, sizeof(queue_select));
+	queue_select.vsi_id = hw->vsi_res->vsi_id;
+
+	queue_select.rx_queues = BIT(hw->eth_dev->data->nb_rx_queues) - 1;
+	queue_select.tx_queues = BIT(hw->eth_dev->data->nb_tx_queues) - 1;
+
+	memset(&args, 0, sizeof(args));
+	args.v_op = VIRTCHNL_OP_DISABLE_QUEUES;
+	args.req_msg = (u8 *)&queue_select;
+	args.req_msglen = sizeof(queue_select);
+
+	err = ice_dcf_execute_virtchnl_cmd(hw, &args);
+	if (err)
+		PMD_DRV_LOG(ERR,
+			    "Failed to execute command of OP_DISABLE_QUEUES");
+
+	return err;
+}
diff --git a/drivers/net/ice/ice_dcf.h b/drivers/net/ice/ice_dcf.h
index 9470d1df7..68e1661c0 100644
--- a/drivers/net/ice/ice_dcf.h
+++ b/drivers/net/ice/ice_dcf.h
@@ -70,5 +70,6 @@ void ice_dcf_uninit_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw);
 int ice_dcf_init_rss(struct ice_dcf_hw *hw);
 int ice_dcf_configure_queues(struct ice_dcf_hw *hw);
 int ice_dcf_config_irq_map(struct ice_dcf_hw *hw);
-
+int ice_dcf_switch_queue(struct ice_dcf_hw *hw, uint16_t qid, bool rx, bool on);
+int ice_dcf_disable_queues(struct ice_dcf_hw *hw);
 #endif /* _ICE_DCF_H_ */
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index a190ab7c1..d0219a728 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -227,6 +227,272 @@ ice_dcf_config_rx_queues_irqs(struct rte_eth_dev *dev,
 	return 0;
 }
 
+static int
+alloc_rxq_mbufs(struct ice_rx_queue *rxq)
+{
+	volatile union ice_32b_rx_flex_desc *rxd;
+	struct rte_mbuf *mbuf = NULL;
+	uint64_t dma_addr;
+	uint16_t i;
+
+	for (i = 0; i < rxq->nb_rx_desc; i++) {
+		mbuf = rte_mbuf_raw_alloc(rxq->mp);
+		if (unlikely(!mbuf)) {
+			PMD_DRV_LOG(ERR, "Failed to allocate mbuf for RX");
+			return -ENOMEM;
+		}
+
+		rte_mbuf_refcnt_set(mbuf, 1);
+		mbuf->next = NULL;
+		mbuf->data_off = RTE_PKTMBUF_HEADROOM;
+		mbuf->nb_segs = 1;
+		mbuf->port = rxq->port_id;
+
+		dma_addr =
+			rte_cpu_to_le_64(rte_mbuf_data_iova_default(mbuf));
+
+		rxd = &rxq->rx_ring[i];
+		rxd->read.pkt_addr = dma_addr;
+		rxd->read.hdr_addr = 0;
+		rxd->read.rsvd1 = 0;
+		rxd->read.rsvd2 = 0;
+
+		rxq->sw_ring[i].mbuf = (void *)mbuf;
+	}
+
+	return 0;
+}
+
+static int
+ice_dcf_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+{
+	struct ice_dcf_adapter *ad = dev->data->dev_private;
+	struct iavf_hw *hw = &ad->real_hw.avf;
+	struct ice_rx_queue *rxq;
+	int err = 0;
+
+	if (rx_queue_id >= dev->data->nb_rx_queues)
+		return -EINVAL;
+
+	rxq = dev->data->rx_queues[rx_queue_id];
+
+	err = alloc_rxq_mbufs(rxq);
+	if (err) {
+		PMD_DRV_LOG(ERR, "Failed to allocate RX queue mbuf");
+		return err;
+	}
+
+	rte_wmb();
+
+	/* Init the RX tail register. */
+	IAVF_PCI_REG_WRITE(rxq->qrx_tail, rxq->nb_rx_desc - 1);
+	IAVF_WRITE_FLUSH(hw);
+
+	/* Ready to switch the queue on */
+	err = ice_dcf_switch_queue(&ad->real_hw, rx_queue_id, true, true);
+	if (err) {
+		PMD_DRV_LOG(ERR, "Failed to switch RX queue %u on",
+			    rx_queue_id);
+		return err;
+	}
+
+	dev->data->rx_queue_state[rx_queue_id] = RTE_ETH_QUEUE_STATE_STARTED;
+
+	return 0;
+}
+
+static inline void
+reset_rx_queue(struct ice_rx_queue *rxq)
+{
+	uint16_t len;
+	uint32_t i;
+
+	if (!rxq)
+		return;
+
+	len = rxq->nb_rx_desc + ICE_RX_MAX_BURST;
+
+	for (i = 0; i < len * sizeof(union ice_rx_flex_desc); i++)
+		((volatile char *)rxq->rx_ring)[i] = 0;
+
+	memset(&rxq->fake_mbuf, 0x0, sizeof(rxq->fake_mbuf));
+
+	for (i = 0; i < ICE_RX_MAX_BURST; i++)
+		rxq->sw_ring[rxq->nb_rx_desc + i].mbuf = &rxq->fake_mbuf;
+
+	/* for rx bulk */
+	rxq->rx_nb_avail = 0;
+	rxq->rx_next_avail = 0;
+	rxq->rx_free_trigger = (uint16_t)(rxq->rx_free_thresh - 1);
+
+	rxq->rx_tail = 0;
+	rxq->nb_rx_hold = 0;
+	rxq->pkt_first_seg = NULL;
+	rxq->pkt_last_seg = NULL;
+}
+
+static inline void
+reset_tx_queue(struct ice_tx_queue *txq)
+{
+	struct ice_tx_entry *txe;
+	uint32_t i, size;
+	uint16_t prev;
+
+	if (!txq) {
+		PMD_DRV_LOG(DEBUG, "Pointer to txq is NULL");
+		return;
+	}
+
+	txe = txq->sw_ring;
+	size = sizeof(struct ice_tx_desc) * txq->nb_tx_desc;
+	for (i = 0; i < size; i++)
+		((volatile char *)txq->tx_ring)[i] = 0;
+
+	prev = (uint16_t)(txq->nb_tx_desc - 1);
+	for (i = 0; i < txq->nb_tx_desc; i++) {
+		txq->tx_ring[i].cmd_type_offset_bsz =
+			rte_cpu_to_le_64(IAVF_TX_DESC_DTYPE_DESC_DONE);
+		txe[i].mbuf =  NULL;
+		txe[i].last_id = i;
+		txe[prev].next_id = i;
+		prev = i;
+	}
+
+	txq->tx_tail = 0;
+	txq->nb_tx_used = 0;
+
+	txq->last_desc_cleaned = txq->nb_tx_desc - 1;
+	txq->nb_tx_free = txq->nb_tx_desc - 1;
+
+	txq->tx_next_dd = txq->tx_rs_thresh - 1;
+	txq->tx_next_rs = txq->tx_rs_thresh - 1;
+}
+
+static int
+ice_dcf_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+{
+	struct ice_dcf_adapter *ad = dev->data->dev_private;
+	struct ice_dcf_hw *hw = &ad->real_hw;
+	struct ice_rx_queue *rxq;
+	int err;
+
+	if (rx_queue_id >= dev->data->nb_rx_queues)
+		return -EINVAL;
+
+	err = ice_dcf_switch_queue(hw, rx_queue_id, true, false);
+	if (err) {
+		PMD_DRV_LOG(ERR, "Failed to switch RX queue %u off",
+			    rx_queue_id);
+		return err;
+	}
+
+	rxq = dev->data->rx_queues[rx_queue_id];
+	rxq->rx_rel_mbufs(rxq);
+	reset_rx_queue(rxq);
+	dev->data->rx_queue_state[rx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
+
+	return 0;
+}
+
+static int
+ice_dcf_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
+{
+	struct ice_dcf_adapter *ad = dev->data->dev_private;
+	struct iavf_hw *hw = &ad->real_hw.avf;
+	struct ice_tx_queue *txq;
+	int err = 0;
+
+	if (tx_queue_id >= dev->data->nb_tx_queues)
+		return -EINVAL;
+
+	txq = dev->data->tx_queues[tx_queue_id];
+
+	/* Init the RX tail register. */
+	txq->qtx_tail = hw->hw_addr + IAVF_QTX_TAIL1(tx_queue_id);
+	IAVF_PCI_REG_WRITE(txq->qtx_tail, 0);
+	IAVF_WRITE_FLUSH(hw);
+
+	/* Ready to switch the queue on */
+	err = ice_dcf_switch_queue(&ad->real_hw, tx_queue_id, false, true);
+
+	if (err) {
+		PMD_DRV_LOG(ERR, "Failed to switch TX queue %u on",
+			    tx_queue_id);
+		return err;
+	}
+
+	dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STARTED;
+
+	return 0;
+}
+
+static int
+ice_dcf_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
+{
+	struct ice_dcf_adapter *ad = dev->data->dev_private;
+	struct ice_dcf_hw *hw = &ad->real_hw;
+	struct ice_tx_queue *txq;
+	int err;
+
+	if (tx_queue_id >= dev->data->nb_tx_queues)
+		return -EINVAL;
+
+	err = ice_dcf_switch_queue(hw, tx_queue_id, false, false);
+	if (err) {
+		PMD_DRV_LOG(ERR, "Failed to switch TX queue %u off",
+			    tx_queue_id);
+		return err;
+	}
+
+	txq = dev->data->tx_queues[tx_queue_id];
+	txq->tx_rel_mbufs(txq);
+	reset_tx_queue(txq);
+	dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
+
+	return 0;
+}
+
+static int
+ice_dcf_start_queues(struct rte_eth_dev *dev)
+{
+	struct ice_rx_queue *rxq;
+	struct ice_tx_queue *txq;
+	int nb_rxq = 0;
+	int nb_txq, i;
+
+	for (nb_txq = 0; nb_txq < dev->data->nb_tx_queues; nb_txq++) {
+		txq = dev->data->tx_queues[nb_txq];
+		if (txq->tx_deferred_start)
+			continue;
+		if (ice_dcf_tx_queue_start(dev, nb_txq) != 0) {
+			PMD_DRV_LOG(ERR, "Fail to start queue %u", nb_txq);
+			goto tx_err;
+		}
+	}
+
+	for (nb_rxq = 0; nb_rxq < dev->data->nb_rx_queues; nb_rxq++) {
+		rxq = dev->data->rx_queues[nb_rxq];
+		if (rxq->rx_deferred_start)
+			continue;
+		if (ice_dcf_rx_queue_start(dev, nb_rxq) != 0) {
+			PMD_DRV_LOG(ERR, "Fail to start queue %u", nb_rxq);
+			goto rx_err;
+		}
+	}
+
+	return 0;
+
+	/* stop the started queues if failed to start all queues */
+rx_err:
+	for (i = 0; i < nb_rxq; i++)
+		ice_dcf_rx_queue_stop(dev, i);
+tx_err:
+	for (i = 0; i < nb_txq; i++)
+		ice_dcf_tx_queue_stop(dev, i);
+
+	return -1;
+}
+
 static int
 ice_dcf_dev_start(struct rte_eth_dev *dev)
 {
@@ -267,15 +533,59 @@ ice_dcf_dev_start(struct rte_eth_dev *dev)
 		return ret;
 	}
 
+	if (dev->data->dev_conf.intr_conf.rxq != 0) {
+		rte_intr_disable(intr_handle);
+		rte_intr_enable(intr_handle);
+	}
+
+	ret = ice_dcf_start_queues(dev);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "Failed to enable queues");
+		return ret;
+	}
+
 	dev->data->dev_link.link_status = ETH_LINK_UP;
 
 	return 0;
 }
 
+static void
+ice_dcf_stop_queues(struct rte_eth_dev *dev)
+{
+	struct ice_dcf_adapter *ad = dev->data->dev_private;
+	struct ice_dcf_hw *hw = &ad->real_hw;
+	struct ice_rx_queue *rxq;
+	struct ice_tx_queue *txq;
+	int ret, i;
+
+	/* Stop All queues */
+	ret = ice_dcf_disable_queues(hw);
+	if (ret)
+		PMD_DRV_LOG(WARNING, "Fail to stop queues");
+
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		txq = dev->data->tx_queues[i];
+		if (!txq)
+			continue;
+		txq->tx_rel_mbufs(txq);
+		reset_tx_queue(txq);
+		dev->data->tx_queue_state[i] = RTE_ETH_QUEUE_STATE_STOPPED;
+	}
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		rxq = dev->data->rx_queues[i];
+		if (!rxq)
+			continue;
+		rxq->rx_rel_mbufs(rxq);
+		reset_rx_queue(rxq);
+		dev->data->rx_queue_state[i] = RTE_ETH_QUEUE_STATE_STOPPED;
+	}
+}
+
 static void
 ice_dcf_dev_stop(struct rte_eth_dev *dev)
 {
 	struct ice_dcf_adapter *dcf_ad = dev->data->dev_private;
+	struct rte_intr_handle *intr_handle = dev->intr_handle;
 	struct ice_adapter *ad = &dcf_ad->parent;
 
 	if (ad->pf.adapter_stopped == 1) {
@@ -283,6 +593,14 @@ ice_dcf_dev_stop(struct rte_eth_dev *dev)
 		return;
 	}
 
+	ice_dcf_stop_queues(dev);
+
+	rte_intr_efd_disable(intr_handle);
+	if (intr_handle->intr_vec) {
+		rte_free(intr_handle->intr_vec);
+		intr_handle->intr_vec = NULL;
+	}
+
 	dev->data->dev_link.link_status = ETH_LINK_DOWN;
 	ad->pf.adapter_stopped = 1;
 }
@@ -477,6 +795,10 @@ static const struct eth_dev_ops ice_dcf_eth_dev_ops = {
 	.tx_queue_setup          = ice_tx_queue_setup,
 	.rx_queue_release        = ice_rx_queue_release,
 	.tx_queue_release        = ice_tx_queue_release,
+	.rx_queue_start          = ice_dcf_rx_queue_start,
+	.tx_queue_start          = ice_dcf_tx_queue_start,
+	.rx_queue_stop           = ice_dcf_rx_queue_stop,
+	.tx_queue_stop           = ice_dcf_tx_queue_stop,
 	.link_update             = ice_dcf_link_update,
 	.stats_get               = ice_dcf_stats_get,
 	.stats_reset             = ice_dcf_stats_reset,
-- 
2.17.1


^ permalink raw reply	[flat|nested] 65+ messages in thread

* [dpdk-dev] [PATCH v4 10/12] net/ice: enable stats for DCF
  2020-06-19  8:50 ` [dpdk-dev] [PATCH v4 00/12] " Ting Xu
                     ` (8 preceding siblings ...)
  2020-06-19  8:50   ` [dpdk-dev] [PATCH v4 09/12] net/ice: add queue start and stop for DCF Ting Xu
@ 2020-06-19  8:50   ` Ting Xu
  2020-06-19  8:50   ` [dpdk-dev] [PATCH v4 11/12] net/ice: set MAC filter during dev start " Ting Xu
  2020-06-19  8:50   ` [dpdk-dev] [PATCH v4 12/12] doc: enable DCF datapath configuration Ting Xu
  11 siblings, 0 replies; 65+ messages in thread
From: Ting Xu @ 2020-06-19  8:50 UTC (permalink / raw)
  To: dev
  Cc: qi.z.zhang, qiming.yang, jingjing.wu, beilei.xing,
	marko.kovacevic, john.mcnamara, Ting Xu

From: Qi Zhang <qi.z.zhang@intel.com>

Add support to get and reset Rx/Tx stats in DCF. Query stats
from PF.

Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Signed-off-by: Ting Xu <ting.xu@intel.com>
---
 drivers/net/ice/ice_dcf.c        |  27 ++++++++
 drivers/net/ice/ice_dcf.h        |   4 ++
 drivers/net/ice/ice_dcf_ethdev.c | 102 +++++++++++++++++++++++++++----
 3 files changed, 120 insertions(+), 13 deletions(-)

diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c
index f18c0f16a..fbeb58ee1 100644
--- a/drivers/net/ice/ice_dcf.c
+++ b/drivers/net/ice/ice_dcf.c
@@ -993,3 +993,30 @@ ice_dcf_disable_queues(struct ice_dcf_hw *hw)
 
 	return err;
 }
+
+int
+ice_dcf_query_stats(struct ice_dcf_hw *hw,
+				   struct virtchnl_eth_stats *pstats)
+{
+	struct virtchnl_queue_select q_stats;
+	struct dcf_virtchnl_cmd args;
+	int err;
+
+	memset(&q_stats, 0, sizeof(q_stats));
+	q_stats.vsi_id = hw->vsi_res->vsi_id;
+
+	args.v_op = VIRTCHNL_OP_GET_STATS;
+	args.req_msg = (uint8_t *)&q_stats;
+	args.req_msglen = sizeof(q_stats);
+	args.rsp_msglen = sizeof(*pstats);
+	args.rsp_msgbuf = (uint8_t *)pstats;
+	args.rsp_buflen = sizeof(*pstats);
+
+	err = ice_dcf_execute_virtchnl_cmd(hw, &args);
+	if (err) {
+		PMD_DRV_LOG(ERR, "fail to execute command OP_GET_STATS");
+		return err;
+	}
+
+	return 0;
+}
diff --git a/drivers/net/ice/ice_dcf.h b/drivers/net/ice/ice_dcf.h
index 68e1661c0..e82bc7748 100644
--- a/drivers/net/ice/ice_dcf.h
+++ b/drivers/net/ice/ice_dcf.h
@@ -58,6 +58,7 @@ struct ice_dcf_hw {
 	uint16_t msix_base;
 	uint16_t nb_msix;
 	uint16_t rxq_map[16];
+	struct virtchnl_eth_stats eth_stats_offset;
 };
 
 int ice_dcf_execute_virtchnl_cmd(struct ice_dcf_hw *hw,
@@ -72,4 +73,7 @@ int ice_dcf_configure_queues(struct ice_dcf_hw *hw);
 int ice_dcf_config_irq_map(struct ice_dcf_hw *hw);
 int ice_dcf_switch_queue(struct ice_dcf_hw *hw, uint16_t qid, bool rx, bool on);
 int ice_dcf_disable_queues(struct ice_dcf_hw *hw);
+int ice_dcf_query_stats(struct ice_dcf_hw *hw,
+			struct virtchnl_eth_stats *pstats);
+
 #endif /* _ICE_DCF_H_ */
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index d0219a728..38e321f4b 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -697,19 +697,6 @@ ice_dcf_dev_info_get(struct rte_eth_dev *dev,
 	return 0;
 }
 
-static int
-ice_dcf_stats_get(__rte_unused struct rte_eth_dev *dev,
-		  __rte_unused struct rte_eth_stats *igb_stats)
-{
-	return 0;
-}
-
-static int
-ice_dcf_stats_reset(__rte_unused struct rte_eth_dev *dev)
-{
-	return 0;
-}
-
 static int
 ice_dcf_dev_promiscuous_enable(__rte_unused struct rte_eth_dev *dev)
 {
@@ -762,6 +749,95 @@ ice_dcf_dev_filter_ctrl(struct rte_eth_dev *dev,
 	return ret;
 }
 
+#define ICE_DCF_32_BIT_WIDTH (CHAR_BIT * 4)
+#define ICE_DCF_48_BIT_WIDTH (CHAR_BIT * 6)
+#define ICE_DCF_48_BIT_MASK  RTE_LEN2MASK(ICE_DCF_48_BIT_WIDTH, uint64_t)
+
+static void
+ice_dcf_stat_update_48(uint64_t *offset, uint64_t *stat)
+{
+	if (*stat >= *offset)
+		*stat = *stat - *offset;
+	else
+		*stat = (uint64_t)((*stat +
+			((uint64_t)1 << ICE_DCF_48_BIT_WIDTH)) - *offset);
+
+	*stat &= ICE_DCF_48_BIT_MASK;
+}
+
+static void
+ice_dcf_stat_update_32(uint64_t *offset, uint64_t *stat)
+{
+	if (*stat >= *offset)
+		*stat = (uint64_t)(*stat - *offset);
+	else
+		*stat = (uint64_t)((*stat +
+			((uint64_t)1 << ICE_DCF_32_BIT_WIDTH)) - *offset);
+}
+
+static void
+ice_dcf_update_stats(struct virtchnl_eth_stats *oes,
+		     struct virtchnl_eth_stats *nes)
+{
+	ice_dcf_stat_update_48(&oes->rx_bytes, &nes->rx_bytes);
+	ice_dcf_stat_update_48(&oes->rx_unicast, &nes->rx_unicast);
+	ice_dcf_stat_update_48(&oes->rx_multicast, &nes->rx_multicast);
+	ice_dcf_stat_update_48(&oes->rx_broadcast, &nes->rx_broadcast);
+	ice_dcf_stat_update_32(&oes->rx_discards, &nes->rx_discards);
+	ice_dcf_stat_update_48(&oes->tx_bytes, &nes->tx_bytes);
+	ice_dcf_stat_update_48(&oes->tx_unicast, &nes->tx_unicast);
+	ice_dcf_stat_update_48(&oes->tx_multicast, &nes->tx_multicast);
+	ice_dcf_stat_update_48(&oes->tx_broadcast, &nes->tx_broadcast);
+	ice_dcf_stat_update_32(&oes->tx_errors, &nes->tx_errors);
+	ice_dcf_stat_update_32(&oes->tx_discards, &nes->tx_discards);
+}
+
+
+static int
+ice_dcf_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
+{
+	struct ice_dcf_adapter *ad = dev->data->dev_private;
+	struct ice_dcf_hw *hw = &ad->real_hw;
+	struct virtchnl_eth_stats pstats;
+	int ret;
+
+	ret = ice_dcf_query_stats(hw, &pstats);
+	if (ret == 0) {
+		ice_dcf_update_stats(&hw->eth_stats_offset, &pstats);
+		stats->ipackets = pstats.rx_unicast + pstats.rx_multicast +
+				pstats.rx_broadcast - pstats.rx_discards;
+		stats->opackets = pstats.tx_broadcast + pstats.tx_multicast +
+						pstats.tx_unicast;
+		stats->imissed = pstats.rx_discards;
+		stats->oerrors = pstats.tx_errors + pstats.tx_discards;
+		stats->ibytes = pstats.rx_bytes;
+		stats->ibytes -= stats->ipackets * RTE_ETHER_CRC_LEN;
+		stats->obytes = pstats.tx_bytes;
+	} else {
+		PMD_DRV_LOG(ERR, "Get statistics failed");
+	}
+	return ret;
+}
+
+static int
+ice_dcf_stats_reset(struct rte_eth_dev *dev)
+{
+	struct ice_dcf_adapter *ad = dev->data->dev_private;
+	struct ice_dcf_hw *hw = &ad->real_hw;
+	struct virtchnl_eth_stats pstats;
+	int ret;
+
+	/* read stat values to clear hardware registers */
+	ret = ice_dcf_query_stats(hw, &pstats);
+	if (ret != 0)
+		return ret;
+
+	/* set stats offset base on current values */
+	hw->eth_stats_offset = pstats;
+
+	return 0;
+}
+
 static void
 ice_dcf_dev_close(struct rte_eth_dev *dev)
 {
-- 
2.17.1


^ permalink raw reply	[flat|nested] 65+ messages in thread

* [dpdk-dev] [PATCH v4 11/12] net/ice: set MAC filter during dev start for DCF
  2020-06-19  8:50 ` [dpdk-dev] [PATCH v4 00/12] " Ting Xu
                     ` (9 preceding siblings ...)
  2020-06-19  8:50   ` [dpdk-dev] [PATCH v4 10/12] net/ice: enable stats " Ting Xu
@ 2020-06-19  8:50   ` Ting Xu
  2020-06-19  8:50   ` [dpdk-dev] [PATCH v4 12/12] doc: enable DCF datapath configuration Ting Xu
  11 siblings, 0 replies; 65+ messages in thread
From: Ting Xu @ 2020-06-19  8:50 UTC (permalink / raw)
  To: dev
  Cc: qi.z.zhang, qiming.yang, jingjing.wu, beilei.xing,
	marko.kovacevic, john.mcnamara, Ting Xu

From: Qi Zhang <qi.z.zhang@intel.com>

Add support to add and delete  MAC address filter in DCF.

Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Signed-off-by: Ting Xu <ting.xu@intel.com>
---
 drivers/net/ice/ice_dcf.c        | 42 ++++++++++++++++++++++++++++++++
 drivers/net/ice/ice_dcf.h        |  1 +
 drivers/net/ice/ice_dcf_ethdev.c |  7 ++++++
 3 files changed, 50 insertions(+)

diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c
index fbeb58ee1..712f43825 100644
--- a/drivers/net/ice/ice_dcf.c
+++ b/drivers/net/ice/ice_dcf.c
@@ -1020,3 +1020,45 @@ ice_dcf_query_stats(struct ice_dcf_hw *hw,
 
 	return 0;
 }
+
+int
+ice_dcf_add_del_all_mac_addr(struct ice_dcf_hw *hw, bool add)
+{
+	struct virtchnl_ether_addr_list *list;
+	struct rte_ether_addr *addr;
+	struct dcf_virtchnl_cmd args;
+	int len, err = 0;
+
+	len = sizeof(struct virtchnl_ether_addr_list);
+	addr = hw->eth_dev->data->mac_addrs;
+	len += sizeof(struct virtchnl_ether_addr);
+
+	list = rte_zmalloc(NULL, len, 0);
+	if (!list) {
+		PMD_DRV_LOG(ERR, "fail to allocate memory");
+		return -ENOMEM;
+	}
+
+	rte_memcpy(list->list[0].addr, addr->addr_bytes,
+			sizeof(addr->addr_bytes));
+	PMD_DRV_LOG(DEBUG, "add/rm mac:%x:%x:%x:%x:%x:%x",
+			    addr->addr_bytes[0], addr->addr_bytes[1],
+			    addr->addr_bytes[2], addr->addr_bytes[3],
+			    addr->addr_bytes[4], addr->addr_bytes[5]);
+
+	list->vsi_id = hw->vsi_res->vsi_id;
+	list->num_elements = 1;
+
+	memset(&args, 0, sizeof(args));
+	args.v_op = add ? VIRTCHNL_OP_ADD_ETH_ADDR :
+			VIRTCHNL_OP_DEL_ETH_ADDR;
+	args.req_msg = (uint8_t *)list;
+	args.req_msglen  = len;
+	err = ice_dcf_execute_virtchnl_cmd(hw, &args);
+	if (err)
+		PMD_DRV_LOG(ERR, "fail to execute command %s",
+			    add ? "OP_ADD_ETHER_ADDRESS" :
+			    "OP_DEL_ETHER_ADDRESS");
+	rte_free(list);
+	return err;
+}
diff --git a/drivers/net/ice/ice_dcf.h b/drivers/net/ice/ice_dcf.h
index e82bc7748..a44a01e90 100644
--- a/drivers/net/ice/ice_dcf.h
+++ b/drivers/net/ice/ice_dcf.h
@@ -75,5 +75,6 @@ int ice_dcf_switch_queue(struct ice_dcf_hw *hw, uint16_t qid, bool rx, bool on);
 int ice_dcf_disable_queues(struct ice_dcf_hw *hw);
 int ice_dcf_query_stats(struct ice_dcf_hw *hw,
 			struct virtchnl_eth_stats *pstats);
+int ice_dcf_add_del_all_mac_addr(struct ice_dcf_hw *hw, bool add);
 
 #endif /* _ICE_DCF_H_ */
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index 38e321f4b..c39dfc1cc 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -544,6 +544,12 @@ ice_dcf_dev_start(struct rte_eth_dev *dev)
 		return ret;
 	}
 
+	ret = ice_dcf_add_del_all_mac_addr(hw, true);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "Failed to add mac addr");
+		return ret;
+	}
+
 	dev->data->dev_link.link_status = ETH_LINK_UP;
 
 	return 0;
@@ -601,6 +607,7 @@ ice_dcf_dev_stop(struct rte_eth_dev *dev)
 		intr_handle->intr_vec = NULL;
 	}
 
+	ice_dcf_add_del_all_mac_addr(&dcf_ad->real_hw, false);
 	dev->data->dev_link.link_status = ETH_LINK_DOWN;
 	ad->pf.adapter_stopped = 1;
 }
-- 
2.17.1


^ permalink raw reply	[flat|nested] 65+ messages in thread

* [dpdk-dev] [PATCH v4 12/12] doc: enable DCF datapath configuration
  2020-06-19  8:50 ` [dpdk-dev] [PATCH v4 00/12] " Ting Xu
                     ` (10 preceding siblings ...)
  2020-06-19  8:50   ` [dpdk-dev] [PATCH v4 11/12] net/ice: set MAC filter during dev start " Ting Xu
@ 2020-06-19  8:50   ` Ting Xu
  2020-06-22  4:48     ` Zhang, Qi Z
  11 siblings, 1 reply; 65+ messages in thread
From: Ting Xu @ 2020-06-19  8:50 UTC (permalink / raw)
  To: dev
  Cc: qi.z.zhang, qiming.yang, jingjing.wu, beilei.xing,
	marko.kovacevic, john.mcnamara, Ting Xu

Add doc for DCF datapath configuration in DPDK 20.08 release note.

Signed-off-by: Ting Xu <ting.xu@intel.com>
---
 doc/guides/rel_notes/release_20_08.rst | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/doc/guides/rel_notes/release_20_08.rst b/doc/guides/rel_notes/release_20_08.rst
index dee4ccbb5..1a3a4cdb2 100644
--- a/doc/guides/rel_notes/release_20_08.rst
+++ b/doc/guides/rel_notes/release_20_08.rst
@@ -56,6 +56,12 @@ New Features
      Also, make sure to start the actual text at the margin.
      =========================================================
 
+* **Updated the Intel ice driver.**
+
+  Updated the Intel ice driver with new features and improvements, including:
+
+  * Added support for DCF datapath configuration.
+
 * **Updated Mellanox mlx5 driver.**
 
   Updated Mellanox mlx5 driver with new features and improvements, including:
-- 
2.17.1


^ permalink raw reply	[flat|nested] 65+ messages in thread

* Re: [dpdk-dev] [PATCH v4 12/12] doc: enable DCF datapath configuration
  2020-06-19  8:50   ` [dpdk-dev] [PATCH v4 12/12] doc: enable DCF datapath configuration Ting Xu
@ 2020-06-22  4:48     ` Zhang, Qi Z
  0 siblings, 0 replies; 65+ messages in thread
From: Zhang, Qi Z @ 2020-06-22  4:48 UTC (permalink / raw)
  To: Xu, Ting, dev
  Cc: Yang, Qiming, Wu, Jingjing, Xing, Beilei, Kovacevic, Marko,
	Mcnamara, John



> -----Original Message-----
> From: Xu, Ting <ting.xu@intel.com>
> Sent: Friday, June 19, 2020 4:51 PM
> To: dev@dpdk.org
> Cc: Zhang, Qi Z <qi.z.zhang@intel.com>; Yang, Qiming
> <qiming.yang@intel.com>; Wu, Jingjing <jingjing.wu@intel.com>; Xing, Beilei
> <beilei.xing@intel.com>; Kovacevic, Marko <marko.kovacevic@intel.com>;
> Mcnamara, John <john.mcnamara@intel.com>; Xu, Ting <ting.xu@intel.com>
> Subject: [PATCH v4 12/12] doc: enable DCF datapath configuration
> 
> Add doc for DCF datapath configuration in DPDK 20.08 release note.
> 
> Signed-off-by: Ting Xu <ting.xu@intel.com>
> ---
>  doc/guides/rel_notes/release_20_08.rst | 6 ++++++
>  1 file changed, 6 insertions(+)
> 
> diff --git a/doc/guides/rel_notes/release_20_08.rst
> b/doc/guides/rel_notes/release_20_08.rst
> index dee4ccbb5..1a3a4cdb2 100644
> --- a/doc/guides/rel_notes/release_20_08.rst
> +++ b/doc/guides/rel_notes/release_20_08.rst
> @@ -56,6 +56,12 @@ New Features
>       Also, make sure to start the actual text at the margin.
>       =========================================================
> 
> +* **Updated the Intel ice driver.**
> +
> +  Updated the Intel ice driver with new features and improvements,
> including:
> +
> +  * Added support for DCF datapath configuration.
> +
>  * **Updated Mellanox mlx5 driver.**
> 
>    Updated Mellanox mlx5 driver with new features and improvements,
> including:
> --
> 2.17.1

We might also need to add doc/nic/features/ice_dcf.ini as a new type of ethdev has been added.



^ permalink raw reply	[flat|nested] 65+ messages in thread

* [dpdk-dev] [PATCH v5 00/12] enable DCF datapath configuration
  2020-06-05 20:17 [dpdk-dev] [PATCH v1 00/12] enable DCF datapath configuration Ting Xu
                   ` (13 preceding siblings ...)
  2020-06-19  8:50 ` [dpdk-dev] [PATCH v4 00/12] " Ting Xu
@ 2020-06-23  2:38 ` Ting Xu
  2020-06-23  2:38   ` [dpdk-dev] [PATCH v5 01/12] net/ice: init RSS and supported RXDID in DCF Ting Xu
                     ` (12 more replies)
  14 siblings, 13 replies; 65+ messages in thread
From: Ting Xu @ 2020-06-23  2:38 UTC (permalink / raw)
  To: dev
  Cc: qi.z.zhang, qiming.yang, jingjing.wu, beilei.xing,
	marko.kovacevic, john.mcnamara, Ting Xu

This patchset adds support to configure DCF datapath, including
Rx/Tx queues setup, start and stop, device configuration, RSS
and flexible descriptor RXDID initialization and MAC filter setup.

Qi Zhang (11):
  net/ice: init RSS and supported RXDID in DCF
  net/ice: complete device info get in DCF
  net/ice: complete dev configure in DCF
  net/ice: complete queue setup in DCF
  net/ice: add stop flag for device start / stop
  net/ice: add Rx queue init in DCF
  net/ice: init RSS during DCF start
  net/ice: add queue config in DCF
  net/ice: add queue start and stop for DCF
  net/ice: enable stats for DCF
  net/ice: set MAC filter during dev start for DCF

Ting Xu (1):
  doc: enable DCF datapath configuration

---
v4->v5:
Add driver's feature doc

v3->v4:
Clean codes based on comments

v2->v3:
Correct coding style issue

v1->v2:
Optimize coding style
Correct some return values
Add support to stop started queues when queue start failed

 doc/guides/nics/features/ice_dcf.ini   |  19 +
 doc/guides/rel_notes/release_20_08.rst |   6 +
 drivers/net/ice/ice_dcf.c              | 408 ++++++++++++-
 drivers/net/ice/ice_dcf.h              |  17 +
 drivers/net/ice/ice_dcf_ethdev.c       | 773 +++++++++++++++++++++++--
 drivers/net/ice/ice_dcf_ethdev.h       |   3 -
 drivers/net/ice/ice_dcf_parent.c       |   8 +
 7 files changed, 1181 insertions(+), 53 deletions(-)
 create mode 100644 doc/guides/nics/features/ice_dcf.ini

-- 
2.17.1


^ permalink raw reply	[flat|nested] 65+ messages in thread

* [dpdk-dev] [PATCH v5 01/12] net/ice: init RSS and supported RXDID in DCF
  2020-06-23  2:38 ` [dpdk-dev] [PATCH v5 00/12] " Ting Xu
@ 2020-06-23  2:38   ` Ting Xu
  2020-06-23  2:38   ` [dpdk-dev] [PATCH v5 02/12] net/ice: complete device info get " Ting Xu
                     ` (11 subsequent siblings)
  12 siblings, 0 replies; 65+ messages in thread
From: Ting Xu @ 2020-06-23  2:38 UTC (permalink / raw)
  To: dev
  Cc: qi.z.zhang, qiming.yang, jingjing.wu, beilei.xing,
	marko.kovacevic, john.mcnamara, Ting Xu

From: Qi Zhang <qi.z.zhang@intel.com>

Enable RSS parameters initialization and get the supported
flexible descriptor RXDIDs bitmap from PF during DCF init.

Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Signed-off-by: Ting Xu <ting.xu@intel.com>
---
 drivers/net/ice/ice_dcf.c | 54 ++++++++++++++++++++++++++++++++++++++-
 drivers/net/ice/ice_dcf.h |  3 +++
 2 files changed, 56 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c
index 0cd5d1bf6..93fabd5f7 100644
--- a/drivers/net/ice/ice_dcf.c
+++ b/drivers/net/ice/ice_dcf.c
@@ -233,7 +233,7 @@ ice_dcf_get_vf_resource(struct ice_dcf_hw *hw)
 
 	caps = VIRTCHNL_VF_OFFLOAD_WB_ON_ITR | VIRTCHNL_VF_OFFLOAD_RX_POLLING |
 	       VIRTCHNL_VF_CAP_ADV_LINK_SPEED | VIRTCHNL_VF_CAP_DCF |
-	       VF_BASE_MODE_OFFLOADS;
+	       VF_BASE_MODE_OFFLOADS | VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC;
 
 	err = ice_dcf_send_cmd_req_no_irq(hw, VIRTCHNL_OP_GET_VF_RESOURCES,
 					  (uint8_t *)&caps, sizeof(caps));
@@ -547,6 +547,30 @@ ice_dcf_handle_vsi_update_event(struct ice_dcf_hw *hw)
 	return err;
 }
 
+static int
+ice_dcf_get_supported_rxdid(struct ice_dcf_hw *hw)
+{
+	int err;
+
+	err = ice_dcf_send_cmd_req_no_irq(hw,
+					  VIRTCHNL_OP_GET_SUPPORTED_RXDIDS,
+					  NULL, 0);
+	if (err) {
+		PMD_INIT_LOG(ERR, "Failed to send OP_GET_SUPPORTED_RXDIDS");
+		return -1;
+	}
+
+	err = ice_dcf_recv_cmd_rsp_no_irq(hw, VIRTCHNL_OP_GET_SUPPORTED_RXDIDS,
+					  (uint8_t *)&hw->supported_rxdid,
+					  sizeof(uint64_t), NULL);
+	if (err) {
+		PMD_INIT_LOG(ERR, "Failed to get response of OP_GET_SUPPORTED_RXDIDS");
+		return -1;
+	}
+
+	return 0;
+}
+
 int
 ice_dcf_init_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw)
 {
@@ -620,6 +644,29 @@ ice_dcf_init_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw)
 		goto err_alloc;
 	}
 
+	/* Allocate memory for RSS info */
+	if (hw->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_RSS_PF) {
+		hw->rss_key = rte_zmalloc(NULL,
+					  hw->vf_res->rss_key_size, 0);
+		if (!hw->rss_key) {
+			PMD_INIT_LOG(ERR, "unable to allocate rss_key memory");
+			goto err_alloc;
+		}
+		hw->rss_lut = rte_zmalloc("rss_lut",
+					  hw->vf_res->rss_lut_size, 0);
+		if (!hw->rss_lut) {
+			PMD_INIT_LOG(ERR, "unable to allocate rss_lut memory");
+			goto err_rss;
+		}
+	}
+
+	if (hw->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC) {
+		if (ice_dcf_get_supported_rxdid(hw) != 0) {
+			PMD_INIT_LOG(ERR, "failed to do get supported rxdid");
+			goto err_rss;
+		}
+	}
+
 	hw->eth_dev = eth_dev;
 	rte_intr_callback_register(&pci_dev->intr_handle,
 				   ice_dcf_dev_interrupt_handler, hw);
@@ -628,6 +675,9 @@ ice_dcf_init_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw)
 
 	return 0;
 
+err_rss:
+	rte_free(hw->rss_key);
+	rte_free(hw->rss_lut);
 err_alloc:
 	rte_free(hw->vf_res);
 err_api:
@@ -655,4 +705,6 @@ ice_dcf_uninit_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw)
 	rte_free(hw->arq_buf);
 	rte_free(hw->vf_vsi_map);
 	rte_free(hw->vf_res);
+	rte_free(hw->rss_lut);
+	rte_free(hw->rss_key);
 }
diff --git a/drivers/net/ice/ice_dcf.h b/drivers/net/ice/ice_dcf.h
index d2e447b48..152266e3c 100644
--- a/drivers/net/ice/ice_dcf.h
+++ b/drivers/net/ice/ice_dcf.h
@@ -50,6 +50,9 @@ struct ice_dcf_hw {
 	uint16_t vsi_id;
 
 	struct rte_eth_dev *eth_dev;
+	uint8_t *rss_lut;
+	uint8_t *rss_key;
+	uint64_t supported_rxdid;
 };
 
 int ice_dcf_execute_virtchnl_cmd(struct ice_dcf_hw *hw,
-- 
2.17.1


^ permalink raw reply	[flat|nested] 65+ messages in thread

* [dpdk-dev] [PATCH v5 02/12] net/ice: complete device info get in DCF
  2020-06-23  2:38 ` [dpdk-dev] [PATCH v5 00/12] " Ting Xu
  2020-06-23  2:38   ` [dpdk-dev] [PATCH v5 01/12] net/ice: init RSS and supported RXDID in DCF Ting Xu
@ 2020-06-23  2:38   ` Ting Xu
  2020-06-23  2:38   ` [dpdk-dev] [PATCH v5 03/12] net/ice: complete dev configure " Ting Xu
                     ` (10 subsequent siblings)
  12 siblings, 0 replies; 65+ messages in thread
From: Ting Xu @ 2020-06-23  2:38 UTC (permalink / raw)
  To: dev
  Cc: qi.z.zhang, qiming.yang, jingjing.wu, beilei.xing,
	marko.kovacevic, john.mcnamara, Ting Xu

From: Qi Zhang <qi.z.zhang@intel.com>

Add support to get complete device information for DCF, including
Rx/Tx offload capabilities and default configuration.

Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Signed-off-by: Ting Xu <ting.xu@intel.com>
---
 drivers/net/ice/ice_dcf_ethdev.c | 70 ++++++++++++++++++++++++++++++--
 1 file changed, 67 insertions(+), 3 deletions(-)

diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index e5ba1a61f..eb3708191 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -24,6 +24,7 @@
 
 #include "ice_generic_flow.h"
 #include "ice_dcf_ethdev.h"
+#include "ice_rxtx.h"
 
 static uint16_t
 ice_dcf_recv_pkts(__rte_unused void *rx_queue,
@@ -66,11 +67,74 @@ ice_dcf_dev_info_get(struct rte_eth_dev *dev,
 		     struct rte_eth_dev_info *dev_info)
 {
 	struct ice_dcf_adapter *adapter = dev->data->dev_private;
+	struct ice_dcf_hw *hw = &adapter->real_hw;
 
 	dev_info->max_mac_addrs = 1;
-	dev_info->max_rx_pktlen = (uint32_t)-1;
-	dev_info->max_rx_queues = RTE_DIM(adapter->rxqs);
-	dev_info->max_tx_queues = RTE_DIM(adapter->txqs);
+	dev_info->max_rx_queues = hw->vsi_res->num_queue_pairs;
+	dev_info->max_tx_queues = hw->vsi_res->num_queue_pairs;
+	dev_info->min_rx_bufsize = ICE_BUF_SIZE_MIN;
+	dev_info->max_rx_pktlen = ICE_FRAME_SIZE_MAX;
+	dev_info->hash_key_size = hw->vf_res->rss_key_size;
+	dev_info->reta_size = hw->vf_res->rss_lut_size;
+	dev_info->flow_type_rss_offloads = ICE_RSS_OFFLOAD_ALL;
+
+	dev_info->rx_offload_capa =
+		DEV_RX_OFFLOAD_VLAN_STRIP |
+		DEV_RX_OFFLOAD_IPV4_CKSUM |
+		DEV_RX_OFFLOAD_UDP_CKSUM |
+		DEV_RX_OFFLOAD_TCP_CKSUM |
+		DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+		DEV_RX_OFFLOAD_SCATTER |
+		DEV_RX_OFFLOAD_JUMBO_FRAME |
+		DEV_RX_OFFLOAD_VLAN_FILTER |
+		DEV_RX_OFFLOAD_RSS_HASH;
+	dev_info->tx_offload_capa =
+		DEV_TX_OFFLOAD_VLAN_INSERT |
+		DEV_TX_OFFLOAD_IPV4_CKSUM |
+		DEV_TX_OFFLOAD_UDP_CKSUM |
+		DEV_TX_OFFLOAD_TCP_CKSUM |
+		DEV_TX_OFFLOAD_SCTP_CKSUM |
+		DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+		DEV_TX_OFFLOAD_TCP_TSO |
+		DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
+		DEV_TX_OFFLOAD_GRE_TNL_TSO |
+		DEV_TX_OFFLOAD_IPIP_TNL_TSO |
+		DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
+		DEV_TX_OFFLOAD_MULTI_SEGS;
+
+	dev_info->default_rxconf = (struct rte_eth_rxconf) {
+		.rx_thresh = {
+			.pthresh = ICE_DEFAULT_RX_PTHRESH,
+			.hthresh = ICE_DEFAULT_RX_HTHRESH,
+			.wthresh = ICE_DEFAULT_RX_WTHRESH,
+		},
+		.rx_free_thresh = ICE_DEFAULT_RX_FREE_THRESH,
+		.rx_drop_en = 0,
+		.offloads = 0,
+	};
+
+	dev_info->default_txconf = (struct rte_eth_txconf) {
+		.tx_thresh = {
+			.pthresh = ICE_DEFAULT_TX_PTHRESH,
+			.hthresh = ICE_DEFAULT_TX_HTHRESH,
+			.wthresh = ICE_DEFAULT_TX_WTHRESH,
+		},
+		.tx_free_thresh = ICE_DEFAULT_TX_FREE_THRESH,
+		.tx_rs_thresh = ICE_DEFAULT_TX_RSBIT_THRESH,
+		.offloads = 0,
+	};
+
+	dev_info->rx_desc_lim = (struct rte_eth_desc_lim) {
+		.nb_max = ICE_MAX_RING_DESC,
+		.nb_min = ICE_MIN_RING_DESC,
+		.nb_align = ICE_ALIGN_RING_DESC,
+	};
+
+	dev_info->tx_desc_lim = (struct rte_eth_desc_lim) {
+		.nb_max = ICE_MAX_RING_DESC,
+		.nb_min = ICE_MIN_RING_DESC,
+		.nb_align = ICE_ALIGN_RING_DESC,
+	};
 
 	return 0;
 }
-- 
2.17.1


^ permalink raw reply	[flat|nested] 65+ messages in thread

* [dpdk-dev] [PATCH v5 03/12] net/ice: complete dev configure in DCF
  2020-06-23  2:38 ` [dpdk-dev] [PATCH v5 00/12] " Ting Xu
  2020-06-23  2:38   ` [dpdk-dev] [PATCH v5 01/12] net/ice: init RSS and supported RXDID in DCF Ting Xu
  2020-06-23  2:38   ` [dpdk-dev] [PATCH v5 02/12] net/ice: complete device info get " Ting Xu
@ 2020-06-23  2:38   ` Ting Xu
  2020-06-23  2:38   ` [dpdk-dev] [PATCH v5 04/12] net/ice: complete queue setup " Ting Xu
                     ` (9 subsequent siblings)
  12 siblings, 0 replies; 65+ messages in thread
From: Ting Xu @ 2020-06-23  2:38 UTC (permalink / raw)
  To: dev
  Cc: qi.z.zhang, qiming.yang, jingjing.wu, beilei.xing,
	marko.kovacevic, john.mcnamara, Ting Xu

From: Qi Zhang <qi.z.zhang@intel.com>

Enable device configuration function in DCF.

Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Signed-off-by: Ting Xu <ting.xu@intel.com>
---
 drivers/net/ice/ice_dcf_ethdev.c | 11 ++++++++++-
 1 file changed, 10 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index eb3708191..01412ced0 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -57,8 +57,17 @@ ice_dcf_dev_stop(struct rte_eth_dev *dev)
 }
 
 static int
-ice_dcf_dev_configure(__rte_unused struct rte_eth_dev *dev)
+ice_dcf_dev_configure(struct rte_eth_dev *dev)
 {
+	struct ice_dcf_adapter *dcf_ad = dev->data->dev_private;
+	struct ice_adapter *ad = &dcf_ad->parent;
+
+	ad->rx_bulk_alloc_allowed = true;
+	ad->tx_simple_allowed = true;
+
+	if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
+		dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+
 	return 0;
 }
 
-- 
2.17.1


^ permalink raw reply	[flat|nested] 65+ messages in thread

* [dpdk-dev] [PATCH v5 04/12] net/ice: complete queue setup in DCF
  2020-06-23  2:38 ` [dpdk-dev] [PATCH v5 00/12] " Ting Xu
                     ` (2 preceding siblings ...)
  2020-06-23  2:38   ` [dpdk-dev] [PATCH v5 03/12] net/ice: complete dev configure " Ting Xu
@ 2020-06-23  2:38   ` Ting Xu
  2020-06-23  2:38   ` [dpdk-dev] [PATCH v5 05/12] net/ice: add stop flag for device start / stop Ting Xu
                     ` (8 subsequent siblings)
  12 siblings, 0 replies; 65+ messages in thread
From: Ting Xu @ 2020-06-23  2:38 UTC (permalink / raw)
  To: dev
  Cc: qi.z.zhang, qiming.yang, jingjing.wu, beilei.xing,
	marko.kovacevic, john.mcnamara, Ting Xu

From: Qi Zhang <qi.z.zhang@intel.com>

Delete original DCF queue setup functions and use ice
queue setup and release functions instead.

Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Signed-off-by: Ting Xu <ting.xu@intel.com>
---
 drivers/net/ice/ice_dcf_ethdev.c | 42 +++-----------------------------
 drivers/net/ice/ice_dcf_ethdev.h |  3 ---
 drivers/net/ice/ice_dcf_parent.c |  7 ++++++
 3 files changed, 11 insertions(+), 41 deletions(-)

diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index 01412ced0..b07850ece 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -229,11 +229,6 @@ ice_dcf_dev_close(struct rte_eth_dev *dev)
 	ice_dcf_uninit_hw(dev, &adapter->real_hw);
 }
 
-static void
-ice_dcf_queue_release(__rte_unused void *q)
-{
-}
-
 static int
 ice_dcf_link_update(__rte_unused struct rte_eth_dev *dev,
 		    __rte_unused int wait_to_complete)
@@ -241,45 +236,16 @@ ice_dcf_link_update(__rte_unused struct rte_eth_dev *dev,
 	return 0;
 }
 
-static int
-ice_dcf_rx_queue_setup(struct rte_eth_dev *dev,
-		       uint16_t rx_queue_id,
-		       __rte_unused uint16_t nb_rx_desc,
-		       __rte_unused unsigned int socket_id,
-		       __rte_unused const struct rte_eth_rxconf *rx_conf,
-		       __rte_unused struct rte_mempool *mb_pool)
-{
-	struct ice_dcf_adapter *adapter = dev->data->dev_private;
-
-	dev->data->rx_queues[rx_queue_id] = &adapter->rxqs[rx_queue_id];
-
-	return 0;
-}
-
-static int
-ice_dcf_tx_queue_setup(struct rte_eth_dev *dev,
-		       uint16_t tx_queue_id,
-		       __rte_unused uint16_t nb_tx_desc,
-		       __rte_unused unsigned int socket_id,
-		       __rte_unused const struct rte_eth_txconf *tx_conf)
-{
-	struct ice_dcf_adapter *adapter = dev->data->dev_private;
-
-	dev->data->tx_queues[tx_queue_id] = &adapter->txqs[tx_queue_id];
-
-	return 0;
-}
-
 static const struct eth_dev_ops ice_dcf_eth_dev_ops = {
 	.dev_start               = ice_dcf_dev_start,
 	.dev_stop                = ice_dcf_dev_stop,
 	.dev_close               = ice_dcf_dev_close,
 	.dev_configure           = ice_dcf_dev_configure,
 	.dev_infos_get           = ice_dcf_dev_info_get,
-	.rx_queue_setup          = ice_dcf_rx_queue_setup,
-	.tx_queue_setup          = ice_dcf_tx_queue_setup,
-	.rx_queue_release        = ice_dcf_queue_release,
-	.tx_queue_release        = ice_dcf_queue_release,
+	.rx_queue_setup          = ice_rx_queue_setup,
+	.tx_queue_setup          = ice_tx_queue_setup,
+	.rx_queue_release        = ice_rx_queue_release,
+	.tx_queue_release        = ice_tx_queue_release,
 	.link_update             = ice_dcf_link_update,
 	.stats_get               = ice_dcf_stats_get,
 	.stats_reset             = ice_dcf_stats_reset,
diff --git a/drivers/net/ice/ice_dcf_ethdev.h b/drivers/net/ice/ice_dcf_ethdev.h
index e60e808d8..b54528bea 100644
--- a/drivers/net/ice/ice_dcf_ethdev.h
+++ b/drivers/net/ice/ice_dcf_ethdev.h
@@ -19,10 +19,7 @@ struct ice_dcf_queue {
 
 struct ice_dcf_adapter {
 	struct ice_adapter parent; /* Must be first */
-
 	struct ice_dcf_hw real_hw;
-	struct ice_dcf_queue rxqs[ICE_DCF_MAX_RINGS];
-	struct ice_dcf_queue txqs[ICE_DCF_MAX_RINGS];
 };
 
 void ice_dcf_handle_pf_event_msg(struct ice_dcf_hw *dcf_hw,
diff --git a/drivers/net/ice/ice_dcf_parent.c b/drivers/net/ice/ice_dcf_parent.c
index d13e19d78..322a5273f 100644
--- a/drivers/net/ice/ice_dcf_parent.c
+++ b/drivers/net/ice/ice_dcf_parent.c
@@ -335,6 +335,13 @@ ice_dcf_init_parent_adapter(struct rte_eth_dev *eth_dev)
 	parent_adapter->eth_dev = eth_dev;
 	parent_adapter->pf.adapter = parent_adapter;
 	parent_adapter->pf.dev_data = eth_dev->data;
+	/* create a dummy main_vsi */
+	parent_adapter->pf.main_vsi =
+		rte_zmalloc(NULL, sizeof(struct ice_vsi), 0);
+	if (!parent_adapter->pf.main_vsi)
+		return -ENOMEM;
+	parent_adapter->pf.main_vsi->adapter = parent_adapter;
+
 	parent_hw->back = parent_adapter;
 	parent_hw->mac_type = ICE_MAC_GENERIC;
 	parent_hw->vendor_id = ICE_INTEL_VENDOR_ID;
-- 
2.17.1


^ permalink raw reply	[flat|nested] 65+ messages in thread

* [dpdk-dev] [PATCH v5 05/12] net/ice: add stop flag for device start / stop
  2020-06-23  2:38 ` [dpdk-dev] [PATCH v5 00/12] " Ting Xu
                     ` (3 preceding siblings ...)
  2020-06-23  2:38   ` [dpdk-dev] [PATCH v5 04/12] net/ice: complete queue setup " Ting Xu
@ 2020-06-23  2:38   ` Ting Xu
  2020-06-23  2:38   ` [dpdk-dev] [PATCH v5 06/12] net/ice: add Rx queue init in DCF Ting Xu
                     ` (7 subsequent siblings)
  12 siblings, 0 replies; 65+ messages in thread
From: Ting Xu @ 2020-06-23  2:38 UTC (permalink / raw)
  To: dev
  Cc: qi.z.zhang, qiming.yang, jingjing.wu, beilei.xing,
	marko.kovacevic, john.mcnamara, Ting Xu

From: Qi Zhang <qi.z.zhang@intel.com>

Add stop flag for DCF device start and stop.

Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Signed-off-by: Ting Xu <ting.xu@intel.com>
---
 drivers/net/ice/ice_dcf_ethdev.c | 14 ++++++++++++++
 drivers/net/ice/ice_dcf_parent.c |  1 +
 2 files changed, 15 insertions(+)

diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index b07850ece..676a504fd 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -45,6 +45,11 @@ ice_dcf_xmit_pkts(__rte_unused void *tx_queue,
 static int
 ice_dcf_dev_start(struct rte_eth_dev *dev)
 {
+	struct ice_dcf_adapter *dcf_ad = dev->data->dev_private;
+	struct ice_adapter *ad = &dcf_ad->parent;
+
+	ad->pf.adapter_stopped = 0;
+
 	dev->data->dev_link.link_status = ETH_LINK_UP;
 
 	return 0;
@@ -53,7 +58,16 @@ ice_dcf_dev_start(struct rte_eth_dev *dev)
 static void
 ice_dcf_dev_stop(struct rte_eth_dev *dev)
 {
+	struct ice_dcf_adapter *dcf_ad = dev->data->dev_private;
+	struct ice_adapter *ad = &dcf_ad->parent;
+
+	if (ad->pf.adapter_stopped == 1) {
+		PMD_DRV_LOG(DEBUG, "Port is already stopped");
+		return;
+	}
+
 	dev->data->dev_link.link_status = ETH_LINK_DOWN;
+	ad->pf.adapter_stopped = 1;
 }
 
 static int
diff --git a/drivers/net/ice/ice_dcf_parent.c b/drivers/net/ice/ice_dcf_parent.c
index 322a5273f..c5dfdd36e 100644
--- a/drivers/net/ice/ice_dcf_parent.c
+++ b/drivers/net/ice/ice_dcf_parent.c
@@ -341,6 +341,7 @@ ice_dcf_init_parent_adapter(struct rte_eth_dev *eth_dev)
 	if (!parent_adapter->pf.main_vsi)
 		return -ENOMEM;
 	parent_adapter->pf.main_vsi->adapter = parent_adapter;
+	parent_adapter->pf.adapter_stopped = 1;
 
 	parent_hw->back = parent_adapter;
 	parent_hw->mac_type = ICE_MAC_GENERIC;
-- 
2.17.1


^ permalink raw reply	[flat|nested] 65+ messages in thread

* [dpdk-dev] [PATCH v5 06/12] net/ice: add Rx queue init in DCF
  2020-06-23  2:38 ` [dpdk-dev] [PATCH v5 00/12] " Ting Xu
                     ` (4 preceding siblings ...)
  2020-06-23  2:38   ` [dpdk-dev] [PATCH v5 05/12] net/ice: add stop flag for device start / stop Ting Xu
@ 2020-06-23  2:38   ` Ting Xu
  2020-06-23  2:38   ` [dpdk-dev] [PATCH v5 07/12] net/ice: init RSS during DCF start Ting Xu
                     ` (6 subsequent siblings)
  12 siblings, 0 replies; 65+ messages in thread
From: Ting Xu @ 2020-06-23  2:38 UTC (permalink / raw)
  To: dev
  Cc: qi.z.zhang, qiming.yang, jingjing.wu, beilei.xing,
	marko.kovacevic, john.mcnamara, Ting Xu

From: Qi Zhang <qi.z.zhang@intel.com>

Enable Rx queues initialization during device start in DCF.

Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Signed-off-by: Ting Xu <ting.xu@intel.com>
---
 drivers/net/ice/ice_dcf.h        |  1 +
 drivers/net/ice/ice_dcf_ethdev.c | 83 ++++++++++++++++++++++++++++++++
 2 files changed, 84 insertions(+)

diff --git a/drivers/net/ice/ice_dcf.h b/drivers/net/ice/ice_dcf.h
index 152266e3c..dcb2a0283 100644
--- a/drivers/net/ice/ice_dcf.h
+++ b/drivers/net/ice/ice_dcf.h
@@ -53,6 +53,7 @@ struct ice_dcf_hw {
 	uint8_t *rss_lut;
 	uint8_t *rss_key;
 	uint64_t supported_rxdid;
+	uint16_t num_queue_pairs;
 };
 
 int ice_dcf_execute_virtchnl_cmd(struct ice_dcf_hw *hw,
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index 676a504fd..5afd07f96 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -42,14 +42,97 @@ ice_dcf_xmit_pkts(__rte_unused void *tx_queue,
 	return 0;
 }
 
+static int
+ice_dcf_init_rxq(struct rte_eth_dev *dev, struct ice_rx_queue *rxq)
+{
+	struct ice_dcf_adapter *dcf_ad = dev->data->dev_private;
+	struct rte_eth_dev_data *dev_data = dev->data;
+	struct iavf_hw *hw = &dcf_ad->real_hw.avf;
+	uint16_t buf_size, max_pkt_len, len;
+
+	buf_size = rte_pktmbuf_data_room_size(rxq->mp) - RTE_PKTMBUF_HEADROOM;
+	rxq->rx_hdr_len = 0;
+	rxq->rx_buf_len = RTE_ALIGN(buf_size, (1 << ICE_RLAN_CTX_DBUF_S));
+	len = ICE_SUPPORT_CHAIN_NUM * rxq->rx_buf_len;
+	max_pkt_len = RTE_MIN(len, dev->data->dev_conf.rxmode.max_rx_pkt_len);
+
+	/* Check if the jumbo frame and maximum packet length are set
+	 * correctly.
+	 */
+	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+		if (max_pkt_len <= RTE_ETHER_MAX_LEN ||
+		    max_pkt_len > ICE_FRAME_SIZE_MAX) {
+			PMD_DRV_LOG(ERR, "maximum packet length must be "
+				    "larger than %u and smaller than %u, "
+				    "as jumbo frame is enabled",
+				    (uint32_t)RTE_ETHER_MAX_LEN,
+				    (uint32_t)ICE_FRAME_SIZE_MAX);
+			return -EINVAL;
+		}
+	} else {
+		if (max_pkt_len < RTE_ETHER_MIN_LEN ||
+		    max_pkt_len > RTE_ETHER_MAX_LEN) {
+			PMD_DRV_LOG(ERR, "maximum packet length must be "
+				    "larger than %u and smaller than %u, "
+				    "as jumbo frame is disabled",
+				    (uint32_t)RTE_ETHER_MIN_LEN,
+				    (uint32_t)RTE_ETHER_MAX_LEN);
+			return -EINVAL;
+		}
+	}
+
+	rxq->max_pkt_len = max_pkt_len;
+	if ((dev_data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_SCATTER) ||
+	    (rxq->max_pkt_len + 2 * ICE_VLAN_TAG_SIZE) > buf_size) {
+		dev_data->scattered_rx = 1;
+	}
+	rxq->qrx_tail = hw->hw_addr + IAVF_QRX_TAIL1(rxq->queue_id);
+	IAVF_PCI_REG_WRITE(rxq->qrx_tail, rxq->nb_rx_desc - 1);
+	IAVF_WRITE_FLUSH(hw);
+
+	return 0;
+}
+
+static int
+ice_dcf_init_rx_queues(struct rte_eth_dev *dev)
+{
+	struct ice_rx_queue **rxq =
+		(struct ice_rx_queue **)dev->data->rx_queues;
+	int i, ret;
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		if (!rxq[i] || !rxq[i]->q_set)
+			continue;
+		ret = ice_dcf_init_rxq(dev, rxq[i]);
+		if (ret)
+			return ret;
+	}
+
+	ice_set_rx_function(dev);
+	ice_set_tx_function(dev);
+
+	return 0;
+}
+
 static int
 ice_dcf_dev_start(struct rte_eth_dev *dev)
 {
 	struct ice_dcf_adapter *dcf_ad = dev->data->dev_private;
 	struct ice_adapter *ad = &dcf_ad->parent;
+	struct ice_dcf_hw *hw = &dcf_ad->real_hw;
+	int ret;
 
 	ad->pf.adapter_stopped = 0;
 
+	hw->num_queue_pairs = RTE_MAX(dev->data->nb_rx_queues,
+				      dev->data->nb_tx_queues);
+
+	ret = ice_dcf_init_rx_queues(dev);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "Fail to init queues");
+		return ret;
+	}
+
 	dev->data->dev_link.link_status = ETH_LINK_UP;
 
 	return 0;
-- 
2.17.1


^ permalink raw reply	[flat|nested] 65+ messages in thread

* [dpdk-dev] [PATCH v5 07/12] net/ice: init RSS during DCF start
  2020-06-23  2:38 ` [dpdk-dev] [PATCH v5 00/12] " Ting Xu
                     ` (5 preceding siblings ...)
  2020-06-23  2:38   ` [dpdk-dev] [PATCH v5 06/12] net/ice: add Rx queue init in DCF Ting Xu
@ 2020-06-23  2:38   ` Ting Xu
  2020-06-23  2:38   ` [dpdk-dev] [PATCH v5 08/12] net/ice: add queue config in DCF Ting Xu
                     ` (5 subsequent siblings)
  12 siblings, 0 replies; 65+ messages in thread
From: Ting Xu @ 2020-06-23  2:38 UTC (permalink / raw)
  To: dev
  Cc: qi.z.zhang, qiming.yang, jingjing.wu, beilei.xing,
	marko.kovacevic, john.mcnamara, Ting Xu

From: Qi Zhang <qi.z.zhang@intel.com>

Enable RSS initialization during DCF start. Add RSS LUT and
RSS key configuration functions.

Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Signed-off-by: Ting Xu <ting.xu@intel.com>
---
 drivers/net/ice/ice_dcf.c        | 117 +++++++++++++++++++++++++++++++
 drivers/net/ice/ice_dcf.h        |   1 +
 drivers/net/ice/ice_dcf_ethdev.c |   8 +++
 3 files changed, 126 insertions(+)

diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c
index 93fabd5f7..f285323dd 100644
--- a/drivers/net/ice/ice_dcf.c
+++ b/drivers/net/ice/ice_dcf.c
@@ -708,3 +708,120 @@ ice_dcf_uninit_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw)
 	rte_free(hw->rss_lut);
 	rte_free(hw->rss_key);
 }
+
+static int
+ice_dcf_configure_rss_key(struct ice_dcf_hw *hw)
+{
+	struct virtchnl_rss_key *rss_key;
+	struct dcf_virtchnl_cmd args;
+	int len, err;
+
+	len = sizeof(*rss_key) + hw->vf_res->rss_key_size - 1;
+	rss_key = rte_zmalloc("rss_key", len, 0);
+	if (!rss_key)
+		return -ENOMEM;
+
+	rss_key->vsi_id = hw->vsi_res->vsi_id;
+	rss_key->key_len = hw->vf_res->rss_key_size;
+	rte_memcpy(rss_key->key, hw->rss_key, hw->vf_res->rss_key_size);
+
+	args.v_op = VIRTCHNL_OP_CONFIG_RSS_KEY;
+	args.req_msglen = len;
+	args.req_msg = (uint8_t *)rss_key;
+	args.rsp_msglen = 0;
+	args.rsp_buflen = 0;
+	args.rsp_msgbuf = NULL;
+	args.pending = 0;
+
+	err = ice_dcf_execute_virtchnl_cmd(hw, &args);
+	if (err)
+		PMD_INIT_LOG(ERR, "Failed to execute OP_CONFIG_RSS_KEY");
+
+	rte_free(rss_key);
+	return err;
+}
+
+static int
+ice_dcf_configure_rss_lut(struct ice_dcf_hw *hw)
+{
+	struct virtchnl_rss_lut *rss_lut;
+	struct dcf_virtchnl_cmd args;
+	int len, err;
+
+	len = sizeof(*rss_lut) + hw->vf_res->rss_lut_size - 1;
+	rss_lut = rte_zmalloc("rss_lut", len, 0);
+	if (!rss_lut)
+		return -ENOMEM;
+
+	rss_lut->vsi_id = hw->vsi_res->vsi_id;
+	rss_lut->lut_entries = hw->vf_res->rss_lut_size;
+	rte_memcpy(rss_lut->lut, hw->rss_lut, hw->vf_res->rss_lut_size);
+
+	args.v_op = VIRTCHNL_OP_CONFIG_RSS_LUT;
+	args.req_msglen = len;
+	args.req_msg = (uint8_t *)rss_lut;
+	args.rsp_msglen = 0;
+	args.rsp_buflen = 0;
+	args.rsp_msgbuf = NULL;
+	args.pending = 0;
+
+	err = ice_dcf_execute_virtchnl_cmd(hw, &args);
+	if (err)
+		PMD_INIT_LOG(ERR, "Failed to execute OP_CONFIG_RSS_LUT");
+
+	rte_free(rss_lut);
+	return err;
+}
+
+int
+ice_dcf_init_rss(struct ice_dcf_hw *hw)
+{
+	struct rte_eth_dev *dev = hw->eth_dev;
+	struct rte_eth_rss_conf *rss_conf;
+	uint8_t i, j, nb_q;
+	int ret;
+
+	rss_conf = &dev->data->dev_conf.rx_adv_conf.rss_conf;
+	nb_q = dev->data->nb_rx_queues;
+
+	if (!(hw->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_RSS_PF)) {
+		PMD_DRV_LOG(DEBUG, "RSS is not supported");
+		return -ENOTSUP;
+	}
+	if (dev->data->dev_conf.rxmode.mq_mode != ETH_MQ_RX_RSS) {
+		PMD_DRV_LOG(WARNING, "RSS is enabled by PF by default");
+		/* set all lut items to default queue */
+		memset(hw->rss_lut, 0, hw->vf_res->rss_lut_size);
+		return ice_dcf_configure_rss_lut(hw);
+	}
+
+	/* In IAVF, RSS enablement is set by PF driver. It is not supported
+	 * to set based on rss_conf->rss_hf.
+	 */
+
+	/* configure RSS key */
+	if (!rss_conf->rss_key)
+		/* Calculate the default hash key */
+		for (i = 0; i < hw->vf_res->rss_key_size; i++)
+			hw->rss_key[i] = (uint8_t)rte_rand();
+	else
+		rte_memcpy(hw->rss_key, rss_conf->rss_key,
+			   RTE_MIN(rss_conf->rss_key_len,
+				   hw->vf_res->rss_key_size));
+
+	/* init RSS LUT table */
+	for (i = 0, j = 0; i < hw->vf_res->rss_lut_size; i++, j++) {
+		if (j >= nb_q)
+			j = 0;
+		hw->rss_lut[i] = j;
+	}
+	/* send virtchnnl ops to configure rss*/
+	ret = ice_dcf_configure_rss_lut(hw);
+	if (ret)
+		return ret;
+	ret = ice_dcf_configure_rss_key(hw);
+	if (ret)
+		return ret;
+
+	return 0;
+}
diff --git a/drivers/net/ice/ice_dcf.h b/drivers/net/ice/ice_dcf.h
index dcb2a0283..eea4b286b 100644
--- a/drivers/net/ice/ice_dcf.h
+++ b/drivers/net/ice/ice_dcf.h
@@ -63,5 +63,6 @@ int ice_dcf_send_aq_cmd(void *dcf_hw, struct ice_aq_desc *desc,
 int ice_dcf_handle_vsi_update_event(struct ice_dcf_hw *hw);
 int ice_dcf_init_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw);
 void ice_dcf_uninit_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw);
+int ice_dcf_init_rss(struct ice_dcf_hw *hw);
 
 #endif /* _ICE_DCF_H_ */
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index 5afd07f96..e2ab7e637 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -133,6 +133,14 @@ ice_dcf_dev_start(struct rte_eth_dev *dev)
 		return ret;
 	}
 
+	if (hw->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_RSS_PF) {
+		ret = ice_dcf_init_rss(hw);
+		if (ret) {
+			PMD_DRV_LOG(ERR, "Failed to configure RSS");
+			return ret;
+		}
+	}
+
 	dev->data->dev_link.link_status = ETH_LINK_UP;
 
 	return 0;
-- 
2.17.1


^ permalink raw reply	[flat|nested] 65+ messages in thread

* [dpdk-dev] [PATCH v5 08/12] net/ice: add queue config in DCF
  2020-06-23  2:38 ` [dpdk-dev] [PATCH v5 00/12] " Ting Xu
                     ` (6 preceding siblings ...)
  2020-06-23  2:38   ` [dpdk-dev] [PATCH v5 07/12] net/ice: init RSS during DCF start Ting Xu
@ 2020-06-23  2:38   ` Ting Xu
  2020-06-23  2:38   ` [dpdk-dev] [PATCH v5 09/12] net/ice: add queue start and stop for DCF Ting Xu
                     ` (4 subsequent siblings)
  12 siblings, 0 replies; 65+ messages in thread
From: Ting Xu @ 2020-06-23  2:38 UTC (permalink / raw)
  To: dev
  Cc: qi.z.zhang, qiming.yang, jingjing.wu, beilei.xing,
	marko.kovacevic, john.mcnamara, Ting Xu

From: Qi Zhang <qi.z.zhang@intel.com>

Add queues and Rx queue irqs configuration during device start
in DCF. The setup is sent to PF via virtchnl.

Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Signed-off-by: Ting Xu <ting.xu@intel.com>
---
 drivers/net/ice/ice_dcf.c        | 111 +++++++++++++++++++++++++++
 drivers/net/ice/ice_dcf.h        |   6 ++
 drivers/net/ice/ice_dcf_ethdev.c | 126 +++++++++++++++++++++++++++++++
 3 files changed, 243 insertions(+)

diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c
index f285323dd..8869e0d1c 100644
--- a/drivers/net/ice/ice_dcf.c
+++ b/drivers/net/ice/ice_dcf.c
@@ -24,6 +24,7 @@
 #include <rte_dev.h>
 
 #include "ice_dcf.h"
+#include "ice_rxtx.h"
 
 #define ICE_DCF_AQ_LEN     32
 #define ICE_DCF_AQ_BUF_SZ  4096
@@ -825,3 +826,113 @@ ice_dcf_init_rss(struct ice_dcf_hw *hw)
 
 	return 0;
 }
+
+#define IAVF_RXDID_LEGACY_1 1
+#define IAVF_RXDID_COMMS_GENERIC 16
+
+int
+ice_dcf_configure_queues(struct ice_dcf_hw *hw)
+{
+	struct ice_rx_queue **rxq =
+		(struct ice_rx_queue **)hw->eth_dev->data->rx_queues;
+	struct ice_tx_queue **txq =
+		(struct ice_tx_queue **)hw->eth_dev->data->tx_queues;
+	struct virtchnl_vsi_queue_config_info *vc_config;
+	struct virtchnl_queue_pair_info *vc_qp;
+	struct dcf_virtchnl_cmd args;
+	uint16_t i, size;
+	int err;
+
+	size = sizeof(*vc_config) +
+	       sizeof(vc_config->qpair[0]) * hw->num_queue_pairs;
+	vc_config = rte_zmalloc("cfg_queue", size, 0);
+	if (!vc_config)
+		return -ENOMEM;
+
+	vc_config->vsi_id = hw->vsi_res->vsi_id;
+	vc_config->num_queue_pairs = hw->num_queue_pairs;
+
+	for (i = 0, vc_qp = vc_config->qpair;
+	     i < hw->num_queue_pairs;
+	     i++, vc_qp++) {
+		vc_qp->txq.vsi_id = hw->vsi_res->vsi_id;
+		vc_qp->txq.queue_id = i;
+		if (i < hw->eth_dev->data->nb_tx_queues) {
+			vc_qp->txq.ring_len = txq[i]->nb_tx_desc;
+			vc_qp->txq.dma_ring_addr = txq[i]->tx_ring_dma;
+		}
+		vc_qp->rxq.vsi_id = hw->vsi_res->vsi_id;
+		vc_qp->rxq.queue_id = i;
+		vc_qp->rxq.max_pkt_size = rxq[i]->max_pkt_len;
+
+		if (i >= hw->eth_dev->data->nb_rx_queues)
+			continue;
+
+		vc_qp->rxq.ring_len = rxq[i]->nb_rx_desc;
+		vc_qp->rxq.dma_ring_addr = rxq[i]->rx_ring_dma;
+		vc_qp->rxq.databuffer_size = rxq[i]->rx_buf_len;
+
+		if (hw->vf_res->vf_cap_flags &
+		    VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC &&
+		    hw->supported_rxdid &
+		    BIT(IAVF_RXDID_COMMS_GENERIC)) {
+			vc_qp->rxq.rxdid = IAVF_RXDID_COMMS_GENERIC;
+			PMD_DRV_LOG(NOTICE, "request RXDID == %d in "
+				    "Queue[%d]", vc_qp->rxq.rxdid, i);
+		} else {
+			PMD_DRV_LOG(ERR, "RXDID 16 is not supported");
+			return -EINVAL;
+		}
+	}
+
+	memset(&args, 0, sizeof(args));
+	args.v_op = VIRTCHNL_OP_CONFIG_VSI_QUEUES;
+	args.req_msg = (uint8_t *)vc_config;
+	args.req_msglen = size;
+
+	err = ice_dcf_execute_virtchnl_cmd(hw, &args);
+	if (err)
+		PMD_DRV_LOG(ERR, "Failed to execute command of"
+			    " VIRTCHNL_OP_CONFIG_VSI_QUEUES");
+
+	rte_free(vc_config);
+	return err;
+}
+
+int
+ice_dcf_config_irq_map(struct ice_dcf_hw *hw)
+{
+	struct virtchnl_irq_map_info *map_info;
+	struct virtchnl_vector_map *vecmap;
+	struct dcf_virtchnl_cmd args;
+	int len, i, err;
+
+	len = sizeof(struct virtchnl_irq_map_info) +
+	      sizeof(struct virtchnl_vector_map) * hw->nb_msix;
+
+	map_info = rte_zmalloc("map_info", len, 0);
+	if (!map_info)
+		return -ENOMEM;
+
+	map_info->num_vectors = hw->nb_msix;
+	for (i = 0; i < hw->nb_msix; i++) {
+		vecmap = &map_info->vecmap[i];
+		vecmap->vsi_id = hw->vsi_res->vsi_id;
+		vecmap->rxitr_idx = 0;
+		vecmap->vector_id = hw->msix_base + i;
+		vecmap->txq_map = 0;
+		vecmap->rxq_map = hw->rxq_map[hw->msix_base + i];
+	}
+
+	memset(&args, 0, sizeof(args));
+	args.v_op = VIRTCHNL_OP_CONFIG_IRQ_MAP;
+	args.req_msg = (u8 *)map_info;
+	args.req_msglen = len;
+
+	err = ice_dcf_execute_virtchnl_cmd(hw, &args);
+	if (err)
+		PMD_DRV_LOG(ERR, "fail to execute command OP_CONFIG_IRQ_MAP");
+
+	rte_free(map_info);
+	return err;
+}
diff --git a/drivers/net/ice/ice_dcf.h b/drivers/net/ice/ice_dcf.h
index eea4b286b..9470d1df7 100644
--- a/drivers/net/ice/ice_dcf.h
+++ b/drivers/net/ice/ice_dcf.h
@@ -54,6 +54,10 @@ struct ice_dcf_hw {
 	uint8_t *rss_key;
 	uint64_t supported_rxdid;
 	uint16_t num_queue_pairs;
+
+	uint16_t msix_base;
+	uint16_t nb_msix;
+	uint16_t rxq_map[16];
 };
 
 int ice_dcf_execute_virtchnl_cmd(struct ice_dcf_hw *hw,
@@ -64,5 +68,7 @@ int ice_dcf_handle_vsi_update_event(struct ice_dcf_hw *hw);
 int ice_dcf_init_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw);
 void ice_dcf_uninit_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw);
 int ice_dcf_init_rss(struct ice_dcf_hw *hw);
+int ice_dcf_configure_queues(struct ice_dcf_hw *hw);
+int ice_dcf_config_irq_map(struct ice_dcf_hw *hw);
 
 #endif /* _ICE_DCF_H_ */
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index e2ab7e637..a190ab7c1 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -114,10 +114,124 @@ ice_dcf_init_rx_queues(struct rte_eth_dev *dev)
 	return 0;
 }
 
+#define IAVF_MISC_VEC_ID                RTE_INTR_VEC_ZERO_OFFSET
+#define IAVF_RX_VEC_START               RTE_INTR_VEC_RXTX_OFFSET
+
+#define IAVF_ITR_INDEX_DEFAULT          0
+#define IAVF_QUEUE_ITR_INTERVAL_DEFAULT 32 /* 32 us */
+#define IAVF_QUEUE_ITR_INTERVAL_MAX     8160 /* 8160 us */
+
+static inline uint16_t
+iavf_calc_itr_interval(int16_t interval)
+{
+	if (interval < 0 || interval > IAVF_QUEUE_ITR_INTERVAL_MAX)
+		interval = IAVF_QUEUE_ITR_INTERVAL_DEFAULT;
+
+	/* Convert to hardware count, as writing each 1 represents 2 us */
+	return interval / 2;
+}
+
+static int
+ice_dcf_config_rx_queues_irqs(struct rte_eth_dev *dev,
+				     struct rte_intr_handle *intr_handle)
+{
+	struct ice_dcf_adapter *adapter = dev->data->dev_private;
+	struct ice_dcf_hw *hw = &adapter->real_hw;
+	uint16_t interval, i;
+	int vec;
+
+	if (rte_intr_cap_multiple(intr_handle) &&
+	    dev->data->dev_conf.intr_conf.rxq) {
+		if (rte_intr_efd_enable(intr_handle, dev->data->nb_rx_queues))
+			return -1;
+	}
+
+	if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
+		intr_handle->intr_vec =
+			rte_zmalloc("intr_vec",
+				    dev->data->nb_rx_queues * sizeof(int), 0);
+		if (!intr_handle->intr_vec) {
+			PMD_DRV_LOG(ERR, "Failed to allocate %d rx intr_vec",
+				    dev->data->nb_rx_queues);
+			return -1;
+		}
+	}
+
+	if (!dev->data->dev_conf.intr_conf.rxq ||
+	    !rte_intr_dp_is_en(intr_handle)) {
+		/* Rx interrupt disabled, Map interrupt only for writeback */
+		hw->nb_msix = 1;
+		if (hw->vf_res->vf_cap_flags &
+		    VIRTCHNL_VF_OFFLOAD_WB_ON_ITR) {
+			/* If WB_ON_ITR supports, enable it */
+			hw->msix_base = IAVF_RX_VEC_START;
+			IAVF_WRITE_REG(&hw->avf,
+				       IAVF_VFINT_DYN_CTLN1(hw->msix_base - 1),
+				       IAVF_VFINT_DYN_CTLN1_ITR_INDX_MASK |
+				       IAVF_VFINT_DYN_CTLN1_WB_ON_ITR_MASK);
+		} else {
+			/* If no WB_ON_ITR offload flags, need to set
+			 * interrupt for descriptor write back.
+			 */
+			hw->msix_base = IAVF_MISC_VEC_ID;
+
+			/* set ITR to max */
+			interval =
+			iavf_calc_itr_interval(IAVF_QUEUE_ITR_INTERVAL_MAX);
+			IAVF_WRITE_REG(&hw->avf, IAVF_VFINT_DYN_CTL01,
+				       IAVF_VFINT_DYN_CTL01_INTENA_MASK |
+				       (IAVF_ITR_INDEX_DEFAULT <<
+					IAVF_VFINT_DYN_CTL01_ITR_INDX_SHIFT) |
+				       (interval <<
+					IAVF_VFINT_DYN_CTL01_INTERVAL_SHIFT));
+		}
+		IAVF_WRITE_FLUSH(&hw->avf);
+		/* map all queues to the same interrupt */
+		for (i = 0; i < dev->data->nb_rx_queues; i++)
+			hw->rxq_map[hw->msix_base] |= 1 << i;
+	} else {
+		if (!rte_intr_allow_others(intr_handle)) {
+			hw->nb_msix = 1;
+			hw->msix_base = IAVF_MISC_VEC_ID;
+			for (i = 0; i < dev->data->nb_rx_queues; i++) {
+				hw->rxq_map[hw->msix_base] |= 1 << i;
+				intr_handle->intr_vec[i] = IAVF_MISC_VEC_ID;
+			}
+			PMD_DRV_LOG(DEBUG,
+				    "vector %u are mapping to all Rx queues",
+				    hw->msix_base);
+		} else {
+			/* If Rx interrupt is reuquired, and we can use
+			 * multi interrupts, then the vec is from 1
+			 */
+			hw->nb_msix = RTE_MIN(hw->vf_res->max_vectors,
+					      intr_handle->nb_efd);
+			hw->msix_base = IAVF_MISC_VEC_ID;
+			vec = IAVF_MISC_VEC_ID;
+			for (i = 0; i < dev->data->nb_rx_queues; i++) {
+				hw->rxq_map[vec] |= 1 << i;
+				intr_handle->intr_vec[i] = vec++;
+				if (vec >= hw->nb_msix)
+					vec = IAVF_RX_VEC_START;
+			}
+			PMD_DRV_LOG(DEBUG,
+				    "%u vectors are mapping to %u Rx queues",
+				    hw->nb_msix, dev->data->nb_rx_queues);
+		}
+	}
+
+	if (ice_dcf_config_irq_map(hw)) {
+		PMD_DRV_LOG(ERR, "config interrupt mapping failed");
+		return -1;
+	}
+	return 0;
+}
+
 static int
 ice_dcf_dev_start(struct rte_eth_dev *dev)
 {
 	struct ice_dcf_adapter *dcf_ad = dev->data->dev_private;
+	struct rte_intr_handle *intr_handle = dev->intr_handle;
 	struct ice_adapter *ad = &dcf_ad->parent;
 	struct ice_dcf_hw *hw = &dcf_ad->real_hw;
 	int ret;
@@ -141,6 +255,18 @@ ice_dcf_dev_start(struct rte_eth_dev *dev)
 		}
 	}
 
+	ret = ice_dcf_configure_queues(hw);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "Fail to config queues");
+		return ret;
+	}
+
+	ret = ice_dcf_config_rx_queues_irqs(dev, intr_handle);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "Fail to config rx queues' irqs");
+		return ret;
+	}
+
 	dev->data->dev_link.link_status = ETH_LINK_UP;
 
 	return 0;
-- 
2.17.1


^ permalink raw reply	[flat|nested] 65+ messages in thread

* [dpdk-dev] [PATCH v5 09/12] net/ice: add queue start and stop for DCF
  2020-06-23  2:38 ` [dpdk-dev] [PATCH v5 00/12] " Ting Xu
                     ` (7 preceding siblings ...)
  2020-06-23  2:38   ` [dpdk-dev] [PATCH v5 08/12] net/ice: add queue config in DCF Ting Xu
@ 2020-06-23  2:38   ` Ting Xu
  2020-06-23  2:38   ` [dpdk-dev] [PATCH v5 10/12] net/ice: enable stats " Ting Xu
                     ` (3 subsequent siblings)
  12 siblings, 0 replies; 65+ messages in thread
From: Ting Xu @ 2020-06-23  2:38 UTC (permalink / raw)
  To: dev
  Cc: qi.z.zhang, qiming.yang, jingjing.wu, beilei.xing,
	marko.kovacevic, john.mcnamara, Ting Xu

From: Qi Zhang <qi.z.zhang@intel.com>

Add queue start and stop in DCF. Support queue enable and disable
through virtual channel. Add support for Rx queue mbufs allocation
and queue reset.

Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Signed-off-by: Ting Xu <ting.xu@intel.com>
---
 drivers/net/ice/ice_dcf.c        |  57 ++++++
 drivers/net/ice/ice_dcf.h        |   3 +-
 drivers/net/ice/ice_dcf_ethdev.c | 322 +++++++++++++++++++++++++++++++
 3 files changed, 381 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c
index 8869e0d1c..f18c0f16a 100644
--- a/drivers/net/ice/ice_dcf.c
+++ b/drivers/net/ice/ice_dcf.c
@@ -936,3 +936,60 @@ ice_dcf_config_irq_map(struct ice_dcf_hw *hw)
 	rte_free(map_info);
 	return err;
 }
+
+int
+ice_dcf_switch_queue(struct ice_dcf_hw *hw, uint16_t qid, bool rx, bool on)
+{
+	struct virtchnl_queue_select queue_select;
+	struct dcf_virtchnl_cmd args;
+	int err;
+
+	memset(&queue_select, 0, sizeof(queue_select));
+	queue_select.vsi_id = hw->vsi_res->vsi_id;
+	if (rx)
+		queue_select.rx_queues |= 1 << qid;
+	else
+		queue_select.tx_queues |= 1 << qid;
+
+	memset(&args, 0, sizeof(args));
+	if (on)
+		args.v_op = VIRTCHNL_OP_ENABLE_QUEUES;
+	else
+		args.v_op = VIRTCHNL_OP_DISABLE_QUEUES;
+
+	args.req_msg = (u8 *)&queue_select;
+	args.req_msglen = sizeof(queue_select);
+
+	err = ice_dcf_execute_virtchnl_cmd(hw, &args);
+	if (err)
+		PMD_DRV_LOG(ERR, "Failed to execute command of %s",
+			    on ? "OP_ENABLE_QUEUES" : "OP_DISABLE_QUEUES");
+
+	return err;
+}
+
+int
+ice_dcf_disable_queues(struct ice_dcf_hw *hw)
+{
+	struct virtchnl_queue_select queue_select;
+	struct dcf_virtchnl_cmd args;
+	int err;
+
+	memset(&queue_select, 0, sizeof(queue_select));
+	queue_select.vsi_id = hw->vsi_res->vsi_id;
+
+	queue_select.rx_queues = BIT(hw->eth_dev->data->nb_rx_queues) - 1;
+	queue_select.tx_queues = BIT(hw->eth_dev->data->nb_tx_queues) - 1;
+
+	memset(&args, 0, sizeof(args));
+	args.v_op = VIRTCHNL_OP_DISABLE_QUEUES;
+	args.req_msg = (u8 *)&queue_select;
+	args.req_msglen = sizeof(queue_select);
+
+	err = ice_dcf_execute_virtchnl_cmd(hw, &args);
+	if (err)
+		PMD_DRV_LOG(ERR,
+			    "Failed to execute command of OP_DISABLE_QUEUES");
+
+	return err;
+}
diff --git a/drivers/net/ice/ice_dcf.h b/drivers/net/ice/ice_dcf.h
index 9470d1df7..68e1661c0 100644
--- a/drivers/net/ice/ice_dcf.h
+++ b/drivers/net/ice/ice_dcf.h
@@ -70,5 +70,6 @@ void ice_dcf_uninit_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw);
 int ice_dcf_init_rss(struct ice_dcf_hw *hw);
 int ice_dcf_configure_queues(struct ice_dcf_hw *hw);
 int ice_dcf_config_irq_map(struct ice_dcf_hw *hw);
-
+int ice_dcf_switch_queue(struct ice_dcf_hw *hw, uint16_t qid, bool rx, bool on);
+int ice_dcf_disable_queues(struct ice_dcf_hw *hw);
 #endif /* _ICE_DCF_H_ */
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index a190ab7c1..d0219a728 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -227,6 +227,272 @@ ice_dcf_config_rx_queues_irqs(struct rte_eth_dev *dev,
 	return 0;
 }
 
+static int
+alloc_rxq_mbufs(struct ice_rx_queue *rxq)
+{
+	volatile union ice_32b_rx_flex_desc *rxd;
+	struct rte_mbuf *mbuf = NULL;
+	uint64_t dma_addr;
+	uint16_t i;
+
+	for (i = 0; i < rxq->nb_rx_desc; i++) {
+		mbuf = rte_mbuf_raw_alloc(rxq->mp);
+		if (unlikely(!mbuf)) {
+			PMD_DRV_LOG(ERR, "Failed to allocate mbuf for RX");
+			return -ENOMEM;
+		}
+
+		rte_mbuf_refcnt_set(mbuf, 1);
+		mbuf->next = NULL;
+		mbuf->data_off = RTE_PKTMBUF_HEADROOM;
+		mbuf->nb_segs = 1;
+		mbuf->port = rxq->port_id;
+
+		dma_addr =
+			rte_cpu_to_le_64(rte_mbuf_data_iova_default(mbuf));
+
+		rxd = &rxq->rx_ring[i];
+		rxd->read.pkt_addr = dma_addr;
+		rxd->read.hdr_addr = 0;
+		rxd->read.rsvd1 = 0;
+		rxd->read.rsvd2 = 0;
+
+		rxq->sw_ring[i].mbuf = (void *)mbuf;
+	}
+
+	return 0;
+}
+
+static int
+ice_dcf_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+{
+	struct ice_dcf_adapter *ad = dev->data->dev_private;
+	struct iavf_hw *hw = &ad->real_hw.avf;
+	struct ice_rx_queue *rxq;
+	int err = 0;
+
+	if (rx_queue_id >= dev->data->nb_rx_queues)
+		return -EINVAL;
+
+	rxq = dev->data->rx_queues[rx_queue_id];
+
+	err = alloc_rxq_mbufs(rxq);
+	if (err) {
+		PMD_DRV_LOG(ERR, "Failed to allocate RX queue mbuf");
+		return err;
+	}
+
+	rte_wmb();
+
+	/* Init the RX tail register. */
+	IAVF_PCI_REG_WRITE(rxq->qrx_tail, rxq->nb_rx_desc - 1);
+	IAVF_WRITE_FLUSH(hw);
+
+	/* Ready to switch the queue on */
+	err = ice_dcf_switch_queue(&ad->real_hw, rx_queue_id, true, true);
+	if (err) {
+		PMD_DRV_LOG(ERR, "Failed to switch RX queue %u on",
+			    rx_queue_id);
+		return err;
+	}
+
+	dev->data->rx_queue_state[rx_queue_id] = RTE_ETH_QUEUE_STATE_STARTED;
+
+	return 0;
+}
+
+static inline void
+reset_rx_queue(struct ice_rx_queue *rxq)
+{
+	uint16_t len;
+	uint32_t i;
+
+	if (!rxq)
+		return;
+
+	len = rxq->nb_rx_desc + ICE_RX_MAX_BURST;
+
+	for (i = 0; i < len * sizeof(union ice_rx_flex_desc); i++)
+		((volatile char *)rxq->rx_ring)[i] = 0;
+
+	memset(&rxq->fake_mbuf, 0x0, sizeof(rxq->fake_mbuf));
+
+	for (i = 0; i < ICE_RX_MAX_BURST; i++)
+		rxq->sw_ring[rxq->nb_rx_desc + i].mbuf = &rxq->fake_mbuf;
+
+	/* for rx bulk */
+	rxq->rx_nb_avail = 0;
+	rxq->rx_next_avail = 0;
+	rxq->rx_free_trigger = (uint16_t)(rxq->rx_free_thresh - 1);
+
+	rxq->rx_tail = 0;
+	rxq->nb_rx_hold = 0;
+	rxq->pkt_first_seg = NULL;
+	rxq->pkt_last_seg = NULL;
+}
+
+static inline void
+reset_tx_queue(struct ice_tx_queue *txq)
+{
+	struct ice_tx_entry *txe;
+	uint32_t i, size;
+	uint16_t prev;
+
+	if (!txq) {
+		PMD_DRV_LOG(DEBUG, "Pointer to txq is NULL");
+		return;
+	}
+
+	txe = txq->sw_ring;
+	size = sizeof(struct ice_tx_desc) * txq->nb_tx_desc;
+	for (i = 0; i < size; i++)
+		((volatile char *)txq->tx_ring)[i] = 0;
+
+	prev = (uint16_t)(txq->nb_tx_desc - 1);
+	for (i = 0; i < txq->nb_tx_desc; i++) {
+		txq->tx_ring[i].cmd_type_offset_bsz =
+			rte_cpu_to_le_64(IAVF_TX_DESC_DTYPE_DESC_DONE);
+		txe[i].mbuf =  NULL;
+		txe[i].last_id = i;
+		txe[prev].next_id = i;
+		prev = i;
+	}
+
+	txq->tx_tail = 0;
+	txq->nb_tx_used = 0;
+
+	txq->last_desc_cleaned = txq->nb_tx_desc - 1;
+	txq->nb_tx_free = txq->nb_tx_desc - 1;
+
+	txq->tx_next_dd = txq->tx_rs_thresh - 1;
+	txq->tx_next_rs = txq->tx_rs_thresh - 1;
+}
+
+static int
+ice_dcf_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+{
+	struct ice_dcf_adapter *ad = dev->data->dev_private;
+	struct ice_dcf_hw *hw = &ad->real_hw;
+	struct ice_rx_queue *rxq;
+	int err;
+
+	if (rx_queue_id >= dev->data->nb_rx_queues)
+		return -EINVAL;
+
+	err = ice_dcf_switch_queue(hw, rx_queue_id, true, false);
+	if (err) {
+		PMD_DRV_LOG(ERR, "Failed to switch RX queue %u off",
+			    rx_queue_id);
+		return err;
+	}
+
+	rxq = dev->data->rx_queues[rx_queue_id];
+	rxq->rx_rel_mbufs(rxq);
+	reset_rx_queue(rxq);
+	dev->data->rx_queue_state[rx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
+
+	return 0;
+}
+
+static int
+ice_dcf_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
+{
+	struct ice_dcf_adapter *ad = dev->data->dev_private;
+	struct iavf_hw *hw = &ad->real_hw.avf;
+	struct ice_tx_queue *txq;
+	int err = 0;
+
+	if (tx_queue_id >= dev->data->nb_tx_queues)
+		return -EINVAL;
+
+	txq = dev->data->tx_queues[tx_queue_id];
+
+	/* Init the RX tail register. */
+	txq->qtx_tail = hw->hw_addr + IAVF_QTX_TAIL1(tx_queue_id);
+	IAVF_PCI_REG_WRITE(txq->qtx_tail, 0);
+	IAVF_WRITE_FLUSH(hw);
+
+	/* Ready to switch the queue on */
+	err = ice_dcf_switch_queue(&ad->real_hw, tx_queue_id, false, true);
+
+	if (err) {
+		PMD_DRV_LOG(ERR, "Failed to switch TX queue %u on",
+			    tx_queue_id);
+		return err;
+	}
+
+	dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STARTED;
+
+	return 0;
+}
+
+static int
+ice_dcf_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
+{
+	struct ice_dcf_adapter *ad = dev->data->dev_private;
+	struct ice_dcf_hw *hw = &ad->real_hw;
+	struct ice_tx_queue *txq;
+	int err;
+
+	if (tx_queue_id >= dev->data->nb_tx_queues)
+		return -EINVAL;
+
+	err = ice_dcf_switch_queue(hw, tx_queue_id, false, false);
+	if (err) {
+		PMD_DRV_LOG(ERR, "Failed to switch TX queue %u off",
+			    tx_queue_id);
+		return err;
+	}
+
+	txq = dev->data->tx_queues[tx_queue_id];
+	txq->tx_rel_mbufs(txq);
+	reset_tx_queue(txq);
+	dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
+
+	return 0;
+}
+
+static int
+ice_dcf_start_queues(struct rte_eth_dev *dev)
+{
+	struct ice_rx_queue *rxq;
+	struct ice_tx_queue *txq;
+	int nb_rxq = 0;
+	int nb_txq, i;
+
+	for (nb_txq = 0; nb_txq < dev->data->nb_tx_queues; nb_txq++) {
+		txq = dev->data->tx_queues[nb_txq];
+		if (txq->tx_deferred_start)
+			continue;
+		if (ice_dcf_tx_queue_start(dev, nb_txq) != 0) {
+			PMD_DRV_LOG(ERR, "Fail to start queue %u", nb_txq);
+			goto tx_err;
+		}
+	}
+
+	for (nb_rxq = 0; nb_rxq < dev->data->nb_rx_queues; nb_rxq++) {
+		rxq = dev->data->rx_queues[nb_rxq];
+		if (rxq->rx_deferred_start)
+			continue;
+		if (ice_dcf_rx_queue_start(dev, nb_rxq) != 0) {
+			PMD_DRV_LOG(ERR, "Fail to start queue %u", nb_rxq);
+			goto rx_err;
+		}
+	}
+
+	return 0;
+
+	/* stop the started queues if failed to start all queues */
+rx_err:
+	for (i = 0; i < nb_rxq; i++)
+		ice_dcf_rx_queue_stop(dev, i);
+tx_err:
+	for (i = 0; i < nb_txq; i++)
+		ice_dcf_tx_queue_stop(dev, i);
+
+	return -1;
+}
+
 static int
 ice_dcf_dev_start(struct rte_eth_dev *dev)
 {
@@ -267,15 +533,59 @@ ice_dcf_dev_start(struct rte_eth_dev *dev)
 		return ret;
 	}
 
+	if (dev->data->dev_conf.intr_conf.rxq != 0) {
+		rte_intr_disable(intr_handle);
+		rte_intr_enable(intr_handle);
+	}
+
+	ret = ice_dcf_start_queues(dev);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "Failed to enable queues");
+		return ret;
+	}
+
 	dev->data->dev_link.link_status = ETH_LINK_UP;
 
 	return 0;
 }
 
+static void
+ice_dcf_stop_queues(struct rte_eth_dev *dev)
+{
+	struct ice_dcf_adapter *ad = dev->data->dev_private;
+	struct ice_dcf_hw *hw = &ad->real_hw;
+	struct ice_rx_queue *rxq;
+	struct ice_tx_queue *txq;
+	int ret, i;
+
+	/* Stop All queues */
+	ret = ice_dcf_disable_queues(hw);
+	if (ret)
+		PMD_DRV_LOG(WARNING, "Fail to stop queues");
+
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		txq = dev->data->tx_queues[i];
+		if (!txq)
+			continue;
+		txq->tx_rel_mbufs(txq);
+		reset_tx_queue(txq);
+		dev->data->tx_queue_state[i] = RTE_ETH_QUEUE_STATE_STOPPED;
+	}
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		rxq = dev->data->rx_queues[i];
+		if (!rxq)
+			continue;
+		rxq->rx_rel_mbufs(rxq);
+		reset_rx_queue(rxq);
+		dev->data->rx_queue_state[i] = RTE_ETH_QUEUE_STATE_STOPPED;
+	}
+}
+
 static void
 ice_dcf_dev_stop(struct rte_eth_dev *dev)
 {
 	struct ice_dcf_adapter *dcf_ad = dev->data->dev_private;
+	struct rte_intr_handle *intr_handle = dev->intr_handle;
 	struct ice_adapter *ad = &dcf_ad->parent;
 
 	if (ad->pf.adapter_stopped == 1) {
@@ -283,6 +593,14 @@ ice_dcf_dev_stop(struct rte_eth_dev *dev)
 		return;
 	}
 
+	ice_dcf_stop_queues(dev);
+
+	rte_intr_efd_disable(intr_handle);
+	if (intr_handle->intr_vec) {
+		rte_free(intr_handle->intr_vec);
+		intr_handle->intr_vec = NULL;
+	}
+
 	dev->data->dev_link.link_status = ETH_LINK_DOWN;
 	ad->pf.adapter_stopped = 1;
 }
@@ -477,6 +795,10 @@ static const struct eth_dev_ops ice_dcf_eth_dev_ops = {
 	.tx_queue_setup          = ice_tx_queue_setup,
 	.rx_queue_release        = ice_rx_queue_release,
 	.tx_queue_release        = ice_tx_queue_release,
+	.rx_queue_start          = ice_dcf_rx_queue_start,
+	.tx_queue_start          = ice_dcf_tx_queue_start,
+	.rx_queue_stop           = ice_dcf_rx_queue_stop,
+	.tx_queue_stop           = ice_dcf_tx_queue_stop,
 	.link_update             = ice_dcf_link_update,
 	.stats_get               = ice_dcf_stats_get,
 	.stats_reset             = ice_dcf_stats_reset,
-- 
2.17.1


^ permalink raw reply	[flat|nested] 65+ messages in thread

* [dpdk-dev] [PATCH v5 10/12] net/ice: enable stats for DCF
  2020-06-23  2:38 ` [dpdk-dev] [PATCH v5 00/12] " Ting Xu
                     ` (8 preceding siblings ...)
  2020-06-23  2:38   ` [dpdk-dev] [PATCH v5 09/12] net/ice: add queue start and stop for DCF Ting Xu
@ 2020-06-23  2:38   ` Ting Xu
  2020-06-23  2:38   ` [dpdk-dev] [PATCH v5 11/12] net/ice: set MAC filter during dev start " Ting Xu
                     ` (2 subsequent siblings)
  12 siblings, 0 replies; 65+ messages in thread
From: Ting Xu @ 2020-06-23  2:38 UTC (permalink / raw)
  To: dev
  Cc: qi.z.zhang, qiming.yang, jingjing.wu, beilei.xing,
	marko.kovacevic, john.mcnamara, Ting Xu

From: Qi Zhang <qi.z.zhang@intel.com>

Add support to get and reset Rx/Tx stats in DCF. Query stats
from PF.

Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Signed-off-by: Ting Xu <ting.xu@intel.com>
---
 drivers/net/ice/ice_dcf.c        |  27 ++++++++
 drivers/net/ice/ice_dcf.h        |   4 ++
 drivers/net/ice/ice_dcf_ethdev.c | 102 +++++++++++++++++++++++++++----
 3 files changed, 120 insertions(+), 13 deletions(-)

diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c
index f18c0f16a..fbeb58ee1 100644
--- a/drivers/net/ice/ice_dcf.c
+++ b/drivers/net/ice/ice_dcf.c
@@ -993,3 +993,30 @@ ice_dcf_disable_queues(struct ice_dcf_hw *hw)
 
 	return err;
 }
+
+int
+ice_dcf_query_stats(struct ice_dcf_hw *hw,
+				   struct virtchnl_eth_stats *pstats)
+{
+	struct virtchnl_queue_select q_stats;
+	struct dcf_virtchnl_cmd args;
+	int err;
+
+	memset(&q_stats, 0, sizeof(q_stats));
+	q_stats.vsi_id = hw->vsi_res->vsi_id;
+
+	args.v_op = VIRTCHNL_OP_GET_STATS;
+	args.req_msg = (uint8_t *)&q_stats;
+	args.req_msglen = sizeof(q_stats);
+	args.rsp_msglen = sizeof(*pstats);
+	args.rsp_msgbuf = (uint8_t *)pstats;
+	args.rsp_buflen = sizeof(*pstats);
+
+	err = ice_dcf_execute_virtchnl_cmd(hw, &args);
+	if (err) {
+		PMD_DRV_LOG(ERR, "fail to execute command OP_GET_STATS");
+		return err;
+	}
+
+	return 0;
+}
diff --git a/drivers/net/ice/ice_dcf.h b/drivers/net/ice/ice_dcf.h
index 68e1661c0..e82bc7748 100644
--- a/drivers/net/ice/ice_dcf.h
+++ b/drivers/net/ice/ice_dcf.h
@@ -58,6 +58,7 @@ struct ice_dcf_hw {
 	uint16_t msix_base;
 	uint16_t nb_msix;
 	uint16_t rxq_map[16];
+	struct virtchnl_eth_stats eth_stats_offset;
 };
 
 int ice_dcf_execute_virtchnl_cmd(struct ice_dcf_hw *hw,
@@ -72,4 +73,7 @@ int ice_dcf_configure_queues(struct ice_dcf_hw *hw);
 int ice_dcf_config_irq_map(struct ice_dcf_hw *hw);
 int ice_dcf_switch_queue(struct ice_dcf_hw *hw, uint16_t qid, bool rx, bool on);
 int ice_dcf_disable_queues(struct ice_dcf_hw *hw);
+int ice_dcf_query_stats(struct ice_dcf_hw *hw,
+			struct virtchnl_eth_stats *pstats);
+
 #endif /* _ICE_DCF_H_ */
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index d0219a728..38e321f4b 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -697,19 +697,6 @@ ice_dcf_dev_info_get(struct rte_eth_dev *dev,
 	return 0;
 }
 
-static int
-ice_dcf_stats_get(__rte_unused struct rte_eth_dev *dev,
-		  __rte_unused struct rte_eth_stats *igb_stats)
-{
-	return 0;
-}
-
-static int
-ice_dcf_stats_reset(__rte_unused struct rte_eth_dev *dev)
-{
-	return 0;
-}
-
 static int
 ice_dcf_dev_promiscuous_enable(__rte_unused struct rte_eth_dev *dev)
 {
@@ -762,6 +749,95 @@ ice_dcf_dev_filter_ctrl(struct rte_eth_dev *dev,
 	return ret;
 }
 
+#define ICE_DCF_32_BIT_WIDTH (CHAR_BIT * 4)
+#define ICE_DCF_48_BIT_WIDTH (CHAR_BIT * 6)
+#define ICE_DCF_48_BIT_MASK  RTE_LEN2MASK(ICE_DCF_48_BIT_WIDTH, uint64_t)
+
+static void
+ice_dcf_stat_update_48(uint64_t *offset, uint64_t *stat)
+{
+	if (*stat >= *offset)
+		*stat = *stat - *offset;
+	else
+		*stat = (uint64_t)((*stat +
+			((uint64_t)1 << ICE_DCF_48_BIT_WIDTH)) - *offset);
+
+	*stat &= ICE_DCF_48_BIT_MASK;
+}
+
+static void
+ice_dcf_stat_update_32(uint64_t *offset, uint64_t *stat)
+{
+	if (*stat >= *offset)
+		*stat = (uint64_t)(*stat - *offset);
+	else
+		*stat = (uint64_t)((*stat +
+			((uint64_t)1 << ICE_DCF_32_BIT_WIDTH)) - *offset);
+}
+
+static void
+ice_dcf_update_stats(struct virtchnl_eth_stats *oes,
+		     struct virtchnl_eth_stats *nes)
+{
+	ice_dcf_stat_update_48(&oes->rx_bytes, &nes->rx_bytes);
+	ice_dcf_stat_update_48(&oes->rx_unicast, &nes->rx_unicast);
+	ice_dcf_stat_update_48(&oes->rx_multicast, &nes->rx_multicast);
+	ice_dcf_stat_update_48(&oes->rx_broadcast, &nes->rx_broadcast);
+	ice_dcf_stat_update_32(&oes->rx_discards, &nes->rx_discards);
+	ice_dcf_stat_update_48(&oes->tx_bytes, &nes->tx_bytes);
+	ice_dcf_stat_update_48(&oes->tx_unicast, &nes->tx_unicast);
+	ice_dcf_stat_update_48(&oes->tx_multicast, &nes->tx_multicast);
+	ice_dcf_stat_update_48(&oes->tx_broadcast, &nes->tx_broadcast);
+	ice_dcf_stat_update_32(&oes->tx_errors, &nes->tx_errors);
+	ice_dcf_stat_update_32(&oes->tx_discards, &nes->tx_discards);
+}
+
+
+static int
+ice_dcf_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
+{
+	struct ice_dcf_adapter *ad = dev->data->dev_private;
+	struct ice_dcf_hw *hw = &ad->real_hw;
+	struct virtchnl_eth_stats pstats;
+	int ret;
+
+	ret = ice_dcf_query_stats(hw, &pstats);
+	if (ret == 0) {
+		ice_dcf_update_stats(&hw->eth_stats_offset, &pstats);
+		stats->ipackets = pstats.rx_unicast + pstats.rx_multicast +
+				pstats.rx_broadcast - pstats.rx_discards;
+		stats->opackets = pstats.tx_broadcast + pstats.tx_multicast +
+						pstats.tx_unicast;
+		stats->imissed = pstats.rx_discards;
+		stats->oerrors = pstats.tx_errors + pstats.tx_discards;
+		stats->ibytes = pstats.rx_bytes;
+		stats->ibytes -= stats->ipackets * RTE_ETHER_CRC_LEN;
+		stats->obytes = pstats.tx_bytes;
+	} else {
+		PMD_DRV_LOG(ERR, "Get statistics failed");
+	}
+	return ret;
+}
+
+static int
+ice_dcf_stats_reset(struct rte_eth_dev *dev)
+{
+	struct ice_dcf_adapter *ad = dev->data->dev_private;
+	struct ice_dcf_hw *hw = &ad->real_hw;
+	struct virtchnl_eth_stats pstats;
+	int ret;
+
+	/* read stat values to clear hardware registers */
+	ret = ice_dcf_query_stats(hw, &pstats);
+	if (ret != 0)
+		return ret;
+
+	/* set stats offset base on current values */
+	hw->eth_stats_offset = pstats;
+
+	return 0;
+}
+
 static void
 ice_dcf_dev_close(struct rte_eth_dev *dev)
 {
-- 
2.17.1


^ permalink raw reply	[flat|nested] 65+ messages in thread

* [dpdk-dev] [PATCH v5 11/12] net/ice: set MAC filter during dev start for DCF
  2020-06-23  2:38 ` [dpdk-dev] [PATCH v5 00/12] " Ting Xu
                     ` (9 preceding siblings ...)
  2020-06-23  2:38   ` [dpdk-dev] [PATCH v5 10/12] net/ice: enable stats " Ting Xu
@ 2020-06-23  2:38   ` Ting Xu
  2020-06-23  2:38   ` [dpdk-dev] [PATCH v5 12/12] doc: enable DCF datapath configuration Ting Xu
  2020-06-29  2:43   ` [dpdk-dev] [PATCH v5 00/12] " Yang, Qiming
  12 siblings, 0 replies; 65+ messages in thread
From: Ting Xu @ 2020-06-23  2:38 UTC (permalink / raw)
  To: dev
  Cc: qi.z.zhang, qiming.yang, jingjing.wu, beilei.xing,
	marko.kovacevic, john.mcnamara, Ting Xu

From: Qi Zhang <qi.z.zhang@intel.com>

Add support to add and delete  MAC address filter in DCF.

Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Signed-off-by: Ting Xu <ting.xu@intel.com>
---
 drivers/net/ice/ice_dcf.c        | 42 ++++++++++++++++++++++++++++++++
 drivers/net/ice/ice_dcf.h        |  1 +
 drivers/net/ice/ice_dcf_ethdev.c |  7 ++++++
 3 files changed, 50 insertions(+)

diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c
index fbeb58ee1..712f43825 100644
--- a/drivers/net/ice/ice_dcf.c
+++ b/drivers/net/ice/ice_dcf.c
@@ -1020,3 +1020,45 @@ ice_dcf_query_stats(struct ice_dcf_hw *hw,
 
 	return 0;
 }
+
+int
+ice_dcf_add_del_all_mac_addr(struct ice_dcf_hw *hw, bool add)
+{
+	struct virtchnl_ether_addr_list *list;
+	struct rte_ether_addr *addr;
+	struct dcf_virtchnl_cmd args;
+	int len, err = 0;
+
+	len = sizeof(struct virtchnl_ether_addr_list);
+	addr = hw->eth_dev->data->mac_addrs;
+	len += sizeof(struct virtchnl_ether_addr);
+
+	list = rte_zmalloc(NULL, len, 0);
+	if (!list) {
+		PMD_DRV_LOG(ERR, "fail to allocate memory");
+		return -ENOMEM;
+	}
+
+	rte_memcpy(list->list[0].addr, addr->addr_bytes,
+			sizeof(addr->addr_bytes));
+	PMD_DRV_LOG(DEBUG, "add/rm mac:%x:%x:%x:%x:%x:%x",
+			    addr->addr_bytes[0], addr->addr_bytes[1],
+			    addr->addr_bytes[2], addr->addr_bytes[3],
+			    addr->addr_bytes[4], addr->addr_bytes[5]);
+
+	list->vsi_id = hw->vsi_res->vsi_id;
+	list->num_elements = 1;
+
+	memset(&args, 0, sizeof(args));
+	args.v_op = add ? VIRTCHNL_OP_ADD_ETH_ADDR :
+			VIRTCHNL_OP_DEL_ETH_ADDR;
+	args.req_msg = (uint8_t *)list;
+	args.req_msglen  = len;
+	err = ice_dcf_execute_virtchnl_cmd(hw, &args);
+	if (err)
+		PMD_DRV_LOG(ERR, "fail to execute command %s",
+			    add ? "OP_ADD_ETHER_ADDRESS" :
+			    "OP_DEL_ETHER_ADDRESS");
+	rte_free(list);
+	return err;
+}
diff --git a/drivers/net/ice/ice_dcf.h b/drivers/net/ice/ice_dcf.h
index e82bc7748..a44a01e90 100644
--- a/drivers/net/ice/ice_dcf.h
+++ b/drivers/net/ice/ice_dcf.h
@@ -75,5 +75,6 @@ int ice_dcf_switch_queue(struct ice_dcf_hw *hw, uint16_t qid, bool rx, bool on);
 int ice_dcf_disable_queues(struct ice_dcf_hw *hw);
 int ice_dcf_query_stats(struct ice_dcf_hw *hw,
 			struct virtchnl_eth_stats *pstats);
+int ice_dcf_add_del_all_mac_addr(struct ice_dcf_hw *hw, bool add);
 
 #endif /* _ICE_DCF_H_ */
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index 38e321f4b..c39dfc1cc 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -544,6 +544,12 @@ ice_dcf_dev_start(struct rte_eth_dev *dev)
 		return ret;
 	}
 
+	ret = ice_dcf_add_del_all_mac_addr(hw, true);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "Failed to add mac addr");
+		return ret;
+	}
+
 	dev->data->dev_link.link_status = ETH_LINK_UP;
 
 	return 0;
@@ -601,6 +607,7 @@ ice_dcf_dev_stop(struct rte_eth_dev *dev)
 		intr_handle->intr_vec = NULL;
 	}
 
+	ice_dcf_add_del_all_mac_addr(&dcf_ad->real_hw, false);
 	dev->data->dev_link.link_status = ETH_LINK_DOWN;
 	ad->pf.adapter_stopped = 1;
 }
-- 
2.17.1


^ permalink raw reply	[flat|nested] 65+ messages in thread

* [dpdk-dev] [PATCH v5 12/12] doc: enable DCF datapath configuration
  2020-06-23  2:38 ` [dpdk-dev] [PATCH v5 00/12] " Ting Xu
                     ` (10 preceding siblings ...)
  2020-06-23  2:38   ` [dpdk-dev] [PATCH v5 11/12] net/ice: set MAC filter during dev start " Ting Xu
@ 2020-06-23  2:38   ` Ting Xu
  2020-06-29  2:43   ` [dpdk-dev] [PATCH v5 00/12] " Yang, Qiming
  12 siblings, 0 replies; 65+ messages in thread
From: Ting Xu @ 2020-06-23  2:38 UTC (permalink / raw)
  To: dev
  Cc: qi.z.zhang, qiming.yang, jingjing.wu, beilei.xing,
	marko.kovacevic, john.mcnamara, Ting Xu

Add doc for DCF datapath configuration in DPDK 20.08 release note.
Add "ice_dcf" driver features.

Signed-off-by: Ting Xu <ting.xu@intel.com>
---
 doc/guides/nics/features/ice_dcf.ini   | 19 +++++++++++++++++++
 doc/guides/rel_notes/release_20_08.rst |  6 ++++++
 2 files changed, 25 insertions(+)
 create mode 100644 doc/guides/nics/features/ice_dcf.ini

diff --git a/doc/guides/nics/features/ice_dcf.ini b/doc/guides/nics/features/ice_dcf.ini
new file mode 100644
index 000000000..e2b565909
--- /dev/null
+++ b/doc/guides/nics/features/ice_dcf.ini
@@ -0,0 +1,19 @@
+;
+; Supported features of the 'ice_dcf' network poll mode driver.
+;
+; Refer to default.ini for the full list of available PMD features.
+;
+[Features]
+Queue start/stop     = Y
+Jumbo frame          = Y
+Scattered Rx         = Y
+RSS hash             = P
+Flow API             = Y
+CRC offload          = Y
+L3 checksum offload  = P
+L4 checksum offload  = P
+Basic stats          = Y
+Linux UIO            = Y
+Linux VFIO           = Y
+x86-32               = Y
+x86-64               = Y
diff --git a/doc/guides/rel_notes/release_20_08.rst b/doc/guides/rel_notes/release_20_08.rst
index dee4ccbb5..1a3a4cdb2 100644
--- a/doc/guides/rel_notes/release_20_08.rst
+++ b/doc/guides/rel_notes/release_20_08.rst
@@ -56,6 +56,12 @@ New Features
      Also, make sure to start the actual text at the margin.
      =========================================================
 
+* **Updated the Intel ice driver.**
+
+  Updated the Intel ice driver with new features and improvements, including:
+
+  * Added support for DCF datapath configuration.
+
 * **Updated Mellanox mlx5 driver.**
 
   Updated Mellanox mlx5 driver with new features and improvements, including:
-- 
2.17.1


^ permalink raw reply	[flat|nested] 65+ messages in thread

* Re: [dpdk-dev] [PATCH v5 00/12] enable DCF datapath configuration
  2020-06-23  2:38 ` [dpdk-dev] [PATCH v5 00/12] " Ting Xu
                     ` (11 preceding siblings ...)
  2020-06-23  2:38   ` [dpdk-dev] [PATCH v5 12/12] doc: enable DCF datapath configuration Ting Xu
@ 2020-06-29  2:43   ` Yang, Qiming
  2020-06-29  5:36     ` Zhang, Qi Z
  12 siblings, 1 reply; 65+ messages in thread
From: Yang, Qiming @ 2020-06-29  2:43 UTC (permalink / raw)
  To: Xu, Ting, dev
  Cc: Zhang, Qi Z, Wu, Jingjing, Xing, Beilei, Kovacevic, Marko,
	Mcnamara, John

Reviewed-by: Qiming Yang <qiming.yang@intel.com>

> -----Original Message-----
> From: Xu, Ting <ting.xu@intel.com>
> Sent: Tuesday, June 23, 2020 10:38
> To: dev@dpdk.org
> Cc: Zhang, Qi Z <qi.z.zhang@intel.com>; Yang, Qiming
> <qiming.yang@intel.com>; Wu, Jingjing <jingjing.wu@intel.com>; Xing, Beilei
> <beilei.xing@intel.com>; Kovacevic, Marko <marko.kovacevic@intel.com>;
> Mcnamara, John <john.mcnamara@intel.com>; Xu, Ting <ting.xu@intel.com>
> Subject: [PATCH v5 00/12] enable DCF datapath configuration
> 
> This patchset adds support to configure DCF datapath, including Rx/Tx
> queues setup, start and stop, device configuration, RSS and flexible
> descriptor RXDID initialization and MAC filter setup.
> 
> Qi Zhang (11):
>   net/ice: init RSS and supported RXDID in DCF
>   net/ice: complete device info get in DCF
>   net/ice: complete dev configure in DCF
>   net/ice: complete queue setup in DCF
>   net/ice: add stop flag for device start / stop
>   net/ice: add Rx queue init in DCF
>   net/ice: init RSS during DCF start
>   net/ice: add queue config in DCF
>   net/ice: add queue start and stop for DCF
>   net/ice: enable stats for DCF
>   net/ice: set MAC filter during dev start for DCF
> 
> Ting Xu (1):
>   doc: enable DCF datapath configuration
> 
> ---
> v4->v5:
> Add driver's feature doc
> 
> v3->v4:
> Clean codes based on comments
> 
> v2->v3:
> Correct coding style issue
> 
> v1->v2:
> Optimize coding style
> Correct some return values
> Add support to stop started queues when queue start failed
> 
>  doc/guides/nics/features/ice_dcf.ini   |  19 +
>  doc/guides/rel_notes/release_20_08.rst |   6 +
>  drivers/net/ice/ice_dcf.c              | 408 ++++++++++++-
>  drivers/net/ice/ice_dcf.h              |  17 +
>  drivers/net/ice/ice_dcf_ethdev.c       | 773 +++++++++++++++++++++++--
>  drivers/net/ice/ice_dcf_ethdev.h       |   3 -
>  drivers/net/ice/ice_dcf_parent.c       |   8 +
>  7 files changed, 1181 insertions(+), 53 deletions(-)  create mode 100644
> doc/guides/nics/features/ice_dcf.ini
> 
> --
> 2.17.1


^ permalink raw reply	[flat|nested] 65+ messages in thread

* Re: [dpdk-dev] [PATCH v5 00/12] enable DCF datapath configuration
  2020-06-29  2:43   ` [dpdk-dev] [PATCH v5 00/12] " Yang, Qiming
@ 2020-06-29  5:36     ` Zhang, Qi Z
  0 siblings, 0 replies; 65+ messages in thread
From: Zhang, Qi Z @ 2020-06-29  5:36 UTC (permalink / raw)
  To: Yang, Qiming, Xu, Ting, dev
  Cc: Wu, Jingjing, Xing, Beilei, Kovacevic, Marko, Mcnamara, John



> -----Original Message-----
> From: Yang, Qiming <qiming.yang@intel.com>
> Sent: Monday, June 29, 2020 10:44 AM
> To: Xu, Ting <ting.xu@intel.com>; dev@dpdk.org
> Cc: Zhang, Qi Z <qi.z.zhang@intel.com>; Wu, Jingjing <jingjing.wu@intel.com>;
> Xing, Beilei <beilei.xing@intel.com>; Kovacevic, Marko
> <marko.kovacevic@intel.com>; Mcnamara, John <john.mcnamara@intel.com>
> Subject: RE: [PATCH v5 00/12] enable DCF datapath configuration
> 
> Reviewed-by: Qiming Yang <qiming.yang@intel.com>

Applied to dpdk-next-net-intel.

Thanks
Qi
> 
> > -----Original Message-----
> > From: Xu, Ting <ting.xu@intel.com>
> > Sent: Tuesday, June 23, 2020 10:38
> > To: dev@dpdk.org
> > Cc: Zhang, Qi Z <qi.z.zhang@intel.com>; Yang, Qiming
> > <qiming.yang@intel.com>; Wu, Jingjing <jingjing.wu@intel.com>; Xing,
> > Beilei <beilei.xing@intel.com>; Kovacevic, Marko
> > <marko.kovacevic@intel.com>; Mcnamara, John
> <john.mcnamara@intel.com>;
> > Xu, Ting <ting.xu@intel.com>
> > Subject: [PATCH v5 00/12] enable DCF datapath configuration
> >
> > This patchset adds support to configure DCF datapath, including Rx/Tx
> > queues setup, start and stop, device configuration, RSS and flexible
> > descriptor RXDID initialization and MAC filter setup.
> >
> > Qi Zhang (11):
> >   net/ice: init RSS and supported RXDID in DCF
> >   net/ice: complete device info get in DCF
> >   net/ice: complete dev configure in DCF
> >   net/ice: complete queue setup in DCF
> >   net/ice: add stop flag for device start / stop
> >   net/ice: add Rx queue init in DCF
> >   net/ice: init RSS during DCF start
> >   net/ice: add queue config in DCF
> >   net/ice: add queue start and stop for DCF
> >   net/ice: enable stats for DCF
> >   net/ice: set MAC filter during dev start for DCF
> >
> > Ting Xu (1):
> >   doc: enable DCF datapath configuration
> >
> > ---
> > v4->v5:
> > Add driver's feature doc
> >
> > v3->v4:
> > Clean codes based on comments
> >
> > v2->v3:
> > Correct coding style issue
> >
> > v1->v2:
> > Optimize coding style
> > Correct some return values
> > Add support to stop started queues when queue start failed
> >
> >  doc/guides/nics/features/ice_dcf.ini   |  19 +
> >  doc/guides/rel_notes/release_20_08.rst |   6 +
> >  drivers/net/ice/ice_dcf.c              | 408 ++++++++++++-
> >  drivers/net/ice/ice_dcf.h              |  17 +
> >  drivers/net/ice/ice_dcf_ethdev.c       | 773 +++++++++++++++++++++++--
> >  drivers/net/ice/ice_dcf_ethdev.h       |   3 -
> >  drivers/net/ice/ice_dcf_parent.c       |   8 +
> >  7 files changed, 1181 insertions(+), 53 deletions(-)  create mode
> > 100644 doc/guides/nics/features/ice_dcf.ini
> >
> > --
> > 2.17.1
> 


^ permalink raw reply	[flat|nested] 65+ messages in thread

end of thread, other threads:[~2020-06-29  5:36 UTC | newest]

Thread overview: 65+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-06-05 20:17 [dpdk-dev] [PATCH v1 00/12] enable DCF datapath configuration Ting Xu
2020-06-05 20:17 ` [dpdk-dev] [PATCH v1 01/12] net/ice: init RSS and supported RXDID in DCF Ting Xu
2020-06-05 20:17 ` [dpdk-dev] [PATCH v1 02/12] net/ice: complete device info get " Ting Xu
2020-06-05 20:17 ` [dpdk-dev] [PATCH v1 03/12] net/ice: complete dev configure " Ting Xu
2020-06-05 14:56   ` Ye Xiaolong
2020-06-05 20:17 ` [dpdk-dev] [PATCH v1 04/12] net/ice: complete queue setup " Ting Xu
2020-06-05 20:17 ` [dpdk-dev] [PATCH v1 05/12] net/ice: add stop flag for device start / stop Ting Xu
2020-06-05 20:17 ` [dpdk-dev] [PATCH v1 06/12] net/ice: add Rx queue init in DCF Ting Xu
2020-06-05 20:17 ` [dpdk-dev] [PATCH v1 07/12] net/ice: init RSS during DCF start Ting Xu
2020-06-05 15:26   ` Ye Xiaolong
2020-06-05 20:17 ` [dpdk-dev] [PATCH v1 08/12] net/ice: add queue config in DCF Ting Xu
2020-06-07 10:11   ` Ye Xiaolong
2020-06-05 20:17 ` [dpdk-dev] [PATCH v1 09/12] net/ice: add queue start and stop for DCF Ting Xu
2020-06-07 12:28   ` Ye Xiaolong
2020-06-08  7:35   ` Yang, Qiming
2020-06-09  7:35     ` Xu, Ting
2020-06-10  5:03       ` Yang, Qiming
2020-06-05 20:17 ` [dpdk-dev] [PATCH v1 10/12] net/ice: enable stats " Ting Xu
2020-06-07 10:19   ` Ye Xiaolong
2020-06-05 20:17 ` [dpdk-dev] [PATCH v1 11/12] net/ice: set MAC filter during dev start " Ting Xu
2020-06-05 20:17 ` [dpdk-dev] [PATCH v1 12/12] doc: enable DCF datapath configuration Ting Xu
2020-06-05 14:41   ` Ye Xiaolong
2020-06-09  7:50     ` Xu, Ting
2020-06-11 17:08 ` [dpdk-dev] [PATCH v2 00/12] " Ting Xu
2020-06-11 17:08   ` [dpdk-dev] [PATCH v2 01/12] net/ice: init RSS and supported RXDID in DCF Ting Xu
2020-06-11 17:08   ` [dpdk-dev] [PATCH v2 02/12] net/ice: complete device info get " Ting Xu
2020-06-11 17:08   ` [dpdk-dev] [PATCH v2 03/12] net/ice: complete dev configure " Ting Xu
2020-06-11 17:08   ` [dpdk-dev] [PATCH v2 04/12] net/ice: complete queue setup " Ting Xu
2020-06-11 17:08   ` [dpdk-dev] [PATCH v2 05/12] net/ice: add stop flag for device start / stop Ting Xu
2020-06-11 17:08   ` [dpdk-dev] [PATCH v2 06/12] net/ice: add Rx queue init in DCF Ting Xu
2020-06-11 17:08   ` [dpdk-dev] [PATCH v2 07/12] net/ice: init RSS during DCF start Ting Xu
2020-06-11 17:08   ` [dpdk-dev] [PATCH v2 08/12] net/ice: add queue config in DCF Ting Xu
2020-06-11 17:08   ` [dpdk-dev] [PATCH v2 09/12] net/ice: add queue start and stop for DCF Ting Xu
2020-06-11 17:08   ` [dpdk-dev] [PATCH v2 10/12] net/ice: enable stats " Ting Xu
2020-06-11 17:08   ` [dpdk-dev] [PATCH v2 11/12] net/ice: set MAC filter during dev start " Ting Xu
2020-06-11 17:08   ` [dpdk-dev] [PATCH v2 12/12] doc: enable DCF datapath configuration Ting Xu
2020-06-19  8:50 ` [dpdk-dev] [PATCH v4 00/12] " Ting Xu
2020-06-19  8:50   ` [dpdk-dev] [PATCH v4 01/12] net/ice: init RSS and supported RXDID in DCF Ting Xu
2020-06-19  8:50   ` [dpdk-dev] [PATCH v4 02/12] net/ice: complete device info get " Ting Xu
2020-06-19  8:50   ` [dpdk-dev] [PATCH v4 03/12] net/ice: complete dev configure " Ting Xu
2020-06-19  8:50   ` [dpdk-dev] [PATCH v4 04/12] net/ice: complete queue setup " Ting Xu
2020-06-19  8:50   ` [dpdk-dev] [PATCH v4 05/12] net/ice: add stop flag for device start / stop Ting Xu
2020-06-19  8:50   ` [dpdk-dev] [PATCH v4 06/12] net/ice: add Rx queue init in DCF Ting Xu
2020-06-19  8:50   ` [dpdk-dev] [PATCH v4 07/12] net/ice: init RSS during DCF start Ting Xu
2020-06-19  8:50   ` [dpdk-dev] [PATCH v4 08/12] net/ice: add queue config in DCF Ting Xu
2020-06-19  8:50   ` [dpdk-dev] [PATCH v4 09/12] net/ice: add queue start and stop for DCF Ting Xu
2020-06-19  8:50   ` [dpdk-dev] [PATCH v4 10/12] net/ice: enable stats " Ting Xu
2020-06-19  8:50   ` [dpdk-dev] [PATCH v4 11/12] net/ice: set MAC filter during dev start " Ting Xu
2020-06-19  8:50   ` [dpdk-dev] [PATCH v4 12/12] doc: enable DCF datapath configuration Ting Xu
2020-06-22  4:48     ` Zhang, Qi Z
2020-06-23  2:38 ` [dpdk-dev] [PATCH v5 00/12] " Ting Xu
2020-06-23  2:38   ` [dpdk-dev] [PATCH v5 01/12] net/ice: init RSS and supported RXDID in DCF Ting Xu
2020-06-23  2:38   ` [dpdk-dev] [PATCH v5 02/12] net/ice: complete device info get " Ting Xu
2020-06-23  2:38   ` [dpdk-dev] [PATCH v5 03/12] net/ice: complete dev configure " Ting Xu
2020-06-23  2:38   ` [dpdk-dev] [PATCH v5 04/12] net/ice: complete queue setup " Ting Xu
2020-06-23  2:38   ` [dpdk-dev] [PATCH v5 05/12] net/ice: add stop flag for device start / stop Ting Xu
2020-06-23  2:38   ` [dpdk-dev] [PATCH v5 06/12] net/ice: add Rx queue init in DCF Ting Xu
2020-06-23  2:38   ` [dpdk-dev] [PATCH v5 07/12] net/ice: init RSS during DCF start Ting Xu
2020-06-23  2:38   ` [dpdk-dev] [PATCH v5 08/12] net/ice: add queue config in DCF Ting Xu
2020-06-23  2:38   ` [dpdk-dev] [PATCH v5 09/12] net/ice: add queue start and stop for DCF Ting Xu
2020-06-23  2:38   ` [dpdk-dev] [PATCH v5 10/12] net/ice: enable stats " Ting Xu
2020-06-23  2:38   ` [dpdk-dev] [PATCH v5 11/12] net/ice: set MAC filter during dev start " Ting Xu
2020-06-23  2:38   ` [dpdk-dev] [PATCH v5 12/12] doc: enable DCF datapath configuration Ting Xu
2020-06-29  2:43   ` [dpdk-dev] [PATCH v5 00/12] " Yang, Qiming
2020-06-29  5:36     ` Zhang, Qi Z

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).