* [dpdk-dev] [PATCH 00/22] bnxt patchset
@ 2019-07-18 3:35 Ajit Khaparde
2019-07-18 3:35 ` [dpdk-dev] [PATCH 01/22] net/bnxt: fix to handle error case during port start Ajit Khaparde
` (23 more replies)
0 siblings, 24 replies; 38+ messages in thread
From: Ajit Khaparde @ 2019-07-18 3:35 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit
This patchset, based on commit a164bb7c0a5ab3b100357cf56696c945fe28ab73,
contains bug fixes and an update to the HWRM API.
Please apply.
Ajit Khaparde (1):
net/bnxt: update HWRM API to version 1.10.0.91
Kalesh AP (11):
net/bnxt: fix to handle error case during port start
net/bnxt: fix return value check of address mapping
net/bnxt: fix failure to add a MAC address
net/bnxt: fix an unconditional wait in link update
net/bnxt: fix setting primary MAC address
net/bnxt: fix failure path in dev init
net/bnxt: reset filters before registering interrupts
net/bnxt: fix error checking of FW commands
net/bnxt: fix to return standard error codes
net/bnxt: fix lock release on getting NVM info
net/bnxt: fix to correctly check result of HWRM command
Lance Richardson (8):
net/bnxt: use correct vnic default completion ring
net/bnxt: use dedicated cpr for async events
net/bnxt: retry irq callback deregistration
net/bnxt: use correct RSS table sizes
net/bnxt: fully initialize hwrm msgs for thor RSS cfg
net/bnxt: use correct number of RSS contexts for thor
net/bnxt: pass correct RSS table address for thor
net/bnxt: avoid overrun in get statistics
Santoshkumar Karanappa Rastapur (2):
net/bnxt: fix RSS disable issue for thor-based adapters
net/bnxt: fix MAC/VLAN filter allocation failure
config/common_base | 1 +
config/defconfig_arm64-stingray-linuxapp-gcc | 3 +
drivers/net/bnxt/bnxt.h | 11 +-
drivers/net/bnxt/bnxt_ethdev.c | 346 +++--
drivers/net/bnxt/bnxt_hwrm.c | 160 +--
drivers/net/bnxt/bnxt_hwrm.h | 2 +
drivers/net/bnxt/bnxt_irq.c | 108 +-
drivers/net/bnxt/bnxt_irq.h | 2 +-
drivers/net/bnxt/bnxt_ring.c | 147 +-
drivers/net/bnxt/bnxt_ring.h | 3 +
drivers/net/bnxt/bnxt_rxr.c | 2 +-
drivers/net/bnxt/bnxt_rxtx_vec_sse.c | 2 +-
drivers/net/bnxt/bnxt_stats.c | 11 +-
drivers/net/bnxt/bnxt_vnic.c | 18 +-
drivers/net/bnxt/hsi_struct_def_dpdk.h | 1283 +++++++++++++++---
15 files changed, 1562 insertions(+), 537 deletions(-)
--
2.20.1 (Apple Git-117)
^ permalink raw reply [flat|nested] 38+ messages in thread
* [dpdk-dev] [PATCH 01/22] net/bnxt: fix to handle error case during port start
2019-07-18 3:35 [dpdk-dev] [PATCH 00/22] bnxt patchset Ajit Khaparde
@ 2019-07-18 3:35 ` Ajit Khaparde
2019-07-18 3:35 ` [dpdk-dev] [PATCH 02/22] net/bnxt: fix return value check of address mapping Ajit Khaparde
` (22 subsequent siblings)
23 siblings, 0 replies; 38+ messages in thread
From: Ajit Khaparde @ 2019-07-18 3:35 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, Kalesh AP, stable, Somnath Kotur
From: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
1. During port start, if bnxt_init_chip() returns an error,
bnxt_dev_start_op() invokes bnxt_shutdown_nic(), which in turn calls
bnxt_free_all_hwrm_resources() to free up resources. Hence remove
bnxt_free_all_hwrm_resources() from the bnxt_init_chip() failure path.
2. Check the return value of rte_intr_enable(), as this call
can fail.
3. Set bp->dev_stopped to 0 only when port start succeeds.
4. Handle failure cases in the bnxt_init_chip() routine to do proper
cleanup and return the correct error value.
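The layered unwind the patch introduces (each error label undoes only what was set up before it) can be sketched in isolation; the stub names below are hypothetical stand-ins for the driver's rte_intr_* calls, not the actual code:

```c
#include <assert.h>
#include <errno.h>

/* Knobs and trackers simulating the driver steps (illustrative only). */
static int efd_enable_ok, alloc_ok, intr_enable_ok;
static int freed_vec, disabled_efd;

static int efd_enable(void)  { return efd_enable_ok  ? 0 : -EIO; }
static int alloc_vec(void)   { return alloc_ok       ? 0 : -ENOMEM; }
static int intr_enable(void) { return intr_enable_ok ? 0 : -EIO; }

/* Unwind only what this function set up, in reverse order of setup. */
static int init_chip_sim(void)
{
	int rc;

	rc = efd_enable();
	if (rc)
		return rc;		/* nothing to undo yet */

	rc = alloc_vec();
	if (rc)
		goto err_disable;	/* undo efd_enable() only */

	rc = intr_enable();
	if (rc)
		goto err_free;		/* undo both prior steps */

	return 0;

err_free:
	freed_vec = 1;			/* rte_free(intr_handle->intr_vec) in the patch */
err_disable:
	disabled_efd = 1;		/* rte_intr_efd_disable() in the patch */
	return rc;
}
```

Note how a failure in alloc_vec() skips err_free: a label is entered only by the failures that occurred after its matching setup step succeeded.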
Fixes: b7778e8a1c00 ("net/bnxt: refactor to properly allocate resources for PF/VF")
Cc: stable@dpdk.org
Signed-off-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
---
drivers/net/bnxt/bnxt_ethdev.c | 24 +++++++++++++++---------
1 file changed, 15 insertions(+), 9 deletions(-)
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index cd87d0dbc..7290c4c33 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -394,8 +394,9 @@ static int bnxt_init_chip(struct bnxt *bp)
bp->rx_cp_nr_rings);
return -ENOTSUP;
}
- if (rte_intr_efd_enable(intr_handle, intr_vector))
- return -1;
+ rc = rte_intr_efd_enable(intr_handle, intr_vector);
+ if (rc)
+ return rc;
}
if (rte_intr_dp_is_en(intr_handle) && !intr_handle->intr_vec) {
@@ -406,7 +407,8 @@ static int bnxt_init_chip(struct bnxt *bp)
if (intr_handle->intr_vec == NULL) {
PMD_DRV_LOG(ERR, "Failed to allocate %d rx_queues"
" intr_vec", bp->eth_dev->data->nb_rx_queues);
- return -ENOMEM;
+ rc = -ENOMEM;
+ goto err_disable;
}
PMD_DRV_LOG(DEBUG, "intr_handle->intr_vec = %p "
"intr_handle->nb_efd = %d intr_handle->max_intr = %d\n",
@@ -421,12 +423,14 @@ static int bnxt_init_chip(struct bnxt *bp)
}
/* enable uio/vfio intr/eventfd mapping */
- rte_intr_enable(intr_handle);
+ rc = rte_intr_enable(intr_handle);
+ if (rc)
+ goto err_free;
rc = bnxt_get_hwrm_link_config(bp, &new);
if (rc) {
PMD_DRV_LOG(ERR, "HWRM Get link config failure rc: %x\n", rc);
- goto err_out;
+ goto err_free;
}
if (!bp->link_info.link_up) {
@@ -434,16 +438,18 @@ static int bnxt_init_chip(struct bnxt *bp)
if (rc) {
PMD_DRV_LOG(ERR,
"HWRM link config failure rc: %x\n", rc);
- goto err_out;
+ goto err_free;
}
}
bnxt_print_link_info(bp->eth_dev);
return 0;
+err_free:
+ rte_free(intr_handle->intr_vec);
+err_disable:
+ rte_intr_efd_disable(intr_handle);
err_out:
- bnxt_free_all_hwrm_resources(bp);
-
/* Some of the error status returned by FW may not be from errno.h */
if (rc > 0)
rc = -EIO;
@@ -759,7 +765,6 @@ static int bnxt_dev_start_op(struct rte_eth_dev *eth_dev)
"RxQ cnt %d > CONFIG_RTE_ETHDEV_QUEUE_STAT_CNTRS %d\n",
bp->rx_cp_nr_rings, RTE_ETHDEV_QUEUE_STAT_CNTRS);
}
- bp->dev_stopped = 0;
rc = bnxt_init_chip(bp);
if (rc)
@@ -781,6 +786,7 @@ static int bnxt_dev_start_op(struct rte_eth_dev *eth_dev)
eth_dev->tx_pkt_burst = bnxt_transmit_function(eth_dev);
bnxt_enable_int(bp);
bp->flags |= BNXT_FLAG_INIT_DONE;
+ bp->dev_stopped = 0;
return 0;
error:
--
2.20.1 (Apple Git-117)
* [dpdk-dev] [PATCH 02/22] net/bnxt: fix return value check of address mapping
2019-07-18 3:35 [dpdk-dev] [PATCH 00/22] bnxt patchset Ajit Khaparde
2019-07-18 3:35 ` [dpdk-dev] [PATCH 01/22] net/bnxt: fix to handle error case during port start Ajit Khaparde
@ 2019-07-18 3:35 ` Ajit Khaparde
2019-07-18 3:35 ` [dpdk-dev] [PATCH 03/22] net/bnxt: fix failure to add a MAC address Ajit Khaparde
` (21 subsequent siblings)
23 siblings, 0 replies; 38+ messages in thread
From: Ajit Khaparde @ 2019-07-18 3:35 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, Kalesh AP, stable, Somnath Kotur
From: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
The rte_mem_virt2iova() function returns RTE_BAD_IOVA on failure, not zero.
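The point of the fix is that the failure sentinel is all-ones, while zero is a perfectly legal IOVA. A minimal sketch (BAD_IOVA and map_virt2iova() are hypothetical stand-ins for RTE_BAD_IOVA and rte_mem_virt2iova()):

```c
#include <assert.h>
#include <errno.h>
#include <stdint.h>

/* Sentinel mirroring DPDK's RTE_BAD_IOVA: an all-ones value, not zero. */
#define BAD_IOVA ((uint64_t)-1)

/* Hypothetical mapper standing in for rte_mem_virt2iova(). */
static uint64_t map_virt2iova(int fail)
{
	return fail ? BAD_IOVA : 0x1000;
}

/* The fixed check: compare against the sentinel; zero is a valid IOVA. */
static int check_mapping(uint64_t iova)
{
	return iova == BAD_IOVA ? -ENOMEM : 0;
}
```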
Fixes: 62196f4e0941 ("mem: rename address mapping function to IOVA")
Cc: stable@dpdk.org
Signed-off-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
---
drivers/net/bnxt/bnxt_ethdev.c | 6 +++---
drivers/net/bnxt/bnxt_hwrm.c | 16 ++++++++--------
drivers/net/bnxt/bnxt_ring.c | 2 +-
drivers/net/bnxt/bnxt_vnic.c | 4 ++--
4 files changed, 14 insertions(+), 14 deletions(-)
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index 7290c4c33..a37ea93d9 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -3472,7 +3472,7 @@ static int bnxt_alloc_ctx_mem_blk(__rte_unused struct bnxt *bp,
PMD_DRV_LOG(WARNING,
"Using rte_mem_virt2iova()\n");
mz_phys_addr = rte_mem_virt2iova(mz->addr);
- if (mz_phys_addr == 0) {
+ if (mz_phys_addr == RTE_BAD_IOVA) {
PMD_DRV_LOG(ERR,
"unable to map addr to phys memory\n");
return -ENOMEM;
@@ -3698,7 +3698,7 @@ static int bnxt_alloc_stats_mem(struct bnxt *bp)
PMD_DRV_LOG(WARNING,
"Using rte_mem_virt2iova()\n");
mz_phys_addr = rte_mem_virt2iova(mz->addr);
- if (mz_phys_addr == 0) {
+ if (mz_phys_addr == RTE_BAD_IOVA) {
PMD_DRV_LOG(ERR,
"Can't map address to physical memory\n");
return -ENOMEM;
@@ -3736,7 +3736,7 @@ static int bnxt_alloc_stats_mem(struct bnxt *bp)
PMD_DRV_LOG(WARNING,
"Using rte_mem_virt2iova()\n");
mz_phys_addr = rte_mem_virt2iova(mz->addr);
- if (mz_phys_addr == 0) {
+ if (mz_phys_addr == RTE_BAD_IOVA) {
PMD_DRV_LOG(ERR,
"Can't map address to physical memory\n");
return -ENOMEM;
diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c
index 313459aaf..9c5e5ad77 100644
--- a/drivers/net/bnxt/bnxt_hwrm.c
+++ b/drivers/net/bnxt/bnxt_hwrm.c
@@ -888,7 +888,7 @@ int bnxt_hwrm_ver_get(struct bnxt *bp)
rte_mem_lock_page(bp->hwrm_cmd_resp_addr);
bp->hwrm_cmd_resp_dma_addr =
rte_mem_virt2iova(bp->hwrm_cmd_resp_addr);
- if (bp->hwrm_cmd_resp_dma_addr == 0) {
+ if (bp->hwrm_cmd_resp_dma_addr == RTE_BAD_IOVA) {
PMD_DRV_LOG(ERR,
"Unable to map response buffer to physical memory.\n");
rc = -ENOMEM;
@@ -925,7 +925,7 @@ int bnxt_hwrm_ver_get(struct bnxt *bp)
rte_mem_lock_page(bp->hwrm_short_cmd_req_addr);
bp->hwrm_short_cmd_req_dma_addr =
rte_mem_virt2iova(bp->hwrm_short_cmd_req_addr);
- if (bp->hwrm_short_cmd_req_dma_addr == 0) {
+ if (bp->hwrm_short_cmd_req_dma_addr == RTE_BAD_IOVA) {
rte_free(bp->hwrm_short_cmd_req_addr);
PMD_DRV_LOG(ERR,
"Unable to map buffer to physical memory.\n");
@@ -2229,7 +2229,7 @@ int bnxt_alloc_hwrm_resources(struct bnxt *bp)
return -ENOMEM;
bp->hwrm_cmd_resp_dma_addr =
rte_mem_virt2iova(bp->hwrm_cmd_resp_addr);
- if (bp->hwrm_cmd_resp_dma_addr == 0) {
+ if (bp->hwrm_cmd_resp_dma_addr == RTE_BAD_IOVA) {
PMD_DRV_LOG(ERR,
"unable to map response address to physical memory\n");
return -ENOMEM;
@@ -3179,7 +3179,7 @@ int bnxt_hwrm_func_buf_rgtr(struct bnxt *bp)
req.req_buf_len = rte_cpu_to_le_16(HWRM_MAX_REQ_LEN);
req.req_buf_page_addr0 =
rte_cpu_to_le_64(rte_mem_virt2iova(bp->pf.vf_req_buf));
- if (req.req_buf_page_addr0 == 0) {
+ if (req.req_buf_page_addr0 == RTE_BAD_IOVA) {
PMD_DRV_LOG(ERR,
"unable to map buffer address to physical memory\n");
return -ENOMEM;
@@ -3611,7 +3611,7 @@ int bnxt_get_nvram_directory(struct bnxt *bp, uint32_t len, uint8_t *data)
if (buf == NULL)
return -ENOMEM;
dma_handle = rte_mem_virt2iova(buf);
- if (dma_handle == 0) {
+ if (dma_handle == RTE_BAD_IOVA) {
PMD_DRV_LOG(ERR,
"unable to map response address to physical memory\n");
return -ENOMEM;
@@ -3646,7 +3646,7 @@ int bnxt_hwrm_get_nvram_item(struct bnxt *bp, uint32_t index,
return -ENOMEM;
dma_handle = rte_mem_virt2iova(buf);
- if (dma_handle == 0) {
+ if (dma_handle == RTE_BAD_IOVA) {
PMD_DRV_LOG(ERR,
"unable to map response address to physical memory\n");
return -ENOMEM;
@@ -3700,7 +3700,7 @@ int bnxt_hwrm_flash_nvram(struct bnxt *bp, uint16_t dir_type,
return -ENOMEM;
dma_handle = rte_mem_virt2iova(buf);
- if (dma_handle == 0) {
+ if (dma_handle == RTE_BAD_IOVA) {
PMD_DRV_LOG(ERR,
"unable to map response address to physical memory\n");
return -ENOMEM;
@@ -3764,7 +3764,7 @@ static int bnxt_hwrm_func_vf_vnic_query(struct bnxt *bp, uint16_t vf,
req.max_vnic_id_cnt = rte_cpu_to_le_32(bp->pf.total_vnics);
req.vnic_id_tbl_addr = rte_cpu_to_le_64(rte_mem_virt2iova(vnic_ids));
- if (req.vnic_id_tbl_addr == 0) {
+ if (req.vnic_id_tbl_addr == RTE_BAD_IOVA) {
HWRM_UNLOCK();
PMD_DRV_LOG(ERR,
"unable to map VNIC ID table address to physical memory\n");
diff --git a/drivers/net/bnxt/bnxt_ring.c b/drivers/net/bnxt/bnxt_ring.c
index ada748c05..a9952e02c 100644
--- a/drivers/net/bnxt/bnxt_ring.c
+++ b/drivers/net/bnxt/bnxt_ring.c
@@ -187,7 +187,7 @@ int bnxt_alloc_rings(struct bnxt *bp, uint16_t qidx,
rte_mem_lock_page(((char *)mz->addr) + sz);
mz_phys_addr_base = rte_mem_virt2iova(mz->addr);
mz_phys_addr = rte_mem_virt2iova(mz->addr);
- if (mz_phys_addr == 0) {
+ if (mz_phys_addr == RTE_BAD_IOVA) {
PMD_DRV_LOG(ERR,
"unable to map ring address to physical memory\n");
return -ENOMEM;
diff --git a/drivers/net/bnxt/bnxt_vnic.c b/drivers/net/bnxt/bnxt_vnic.c
index 20e5bf2d1..c652b8f03 100644
--- a/drivers/net/bnxt/bnxt_vnic.c
+++ b/drivers/net/bnxt/bnxt_vnic.c
@@ -153,9 +153,9 @@ int bnxt_alloc_vnic_attributes(struct bnxt *bp)
PMD_DRV_LOG(WARNING,
"Using rte_mem_virt2iova()\n");
mz_phys_addr = rte_mem_virt2iova(mz->addr);
- if (mz_phys_addr == 0) {
+ if (mz_phys_addr == RTE_BAD_IOVA) {
PMD_DRV_LOG(ERR,
- "unable to map vnic address to physical memory\n");
+ "unable to map to physical memory\n");
return -ENOMEM;
}
}
--
2.20.1 (Apple Git-117)
* [dpdk-dev] [PATCH 03/22] net/bnxt: fix failure to add a MAC address
2019-07-18 3:35 [dpdk-dev] [PATCH 00/22] bnxt patchset Ajit Khaparde
2019-07-18 3:35 ` [dpdk-dev] [PATCH 01/22] net/bnxt: fix to handle error case during port start Ajit Khaparde
2019-07-18 3:35 ` [dpdk-dev] [PATCH 02/22] net/bnxt: fix return value check of address mapping Ajit Khaparde
@ 2019-07-18 3:35 ` Ajit Khaparde
2019-07-18 3:35 ` [dpdk-dev] [PATCH 04/22] net/bnxt: fix an unconditional wait in link update Ajit Khaparde
` (20 subsequent siblings)
23 siblings, 0 replies; 38+ messages in thread
From: Ajit Khaparde @ 2019-07-18 3:35 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, Kalesh AP, stable, Somnath Kotur
From: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
The HWRM command to add a MAC address can fail. The driver should check
the return value of the HWRM command and do the housekeeping properly.
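The shape of the fix — commit the filter into the VNIC's list only after firmware accepts it, and free it otherwise — can be sketched with a plain linked list (hwrm_set_l2_filter() and mac_addr_add() below are hypothetical stand-ins, not the driver's code):

```c
#include <assert.h>
#include <errno.h>
#include <stdlib.h>
#include <string.h>

struct filter { unsigned char mac[6]; struct filter *next; };
static struct filter *vnic_filters;	/* stand-in for the vnic's STAILQ */

/* Hypothetical firmware call; 'accept' simulates HWRM success/failure. */
static int hwrm_set_l2_filter(int accept) { return accept ? 0 : -EIO; }

static int mac_addr_add(const unsigned char *mac, int fw_accepts)
{
	struct filter *f = calloc(1, sizeof(*f));
	int rc;

	if (f == NULL)
		return -ENOMEM;

	memcpy(f->mac, mac, 6);
	rc = hwrm_set_l2_filter(fw_accepts);
	if (rc) {
		free(f);		/* undo the alloc; list never saw it */
		return rc;
	}
	f->next = vnic_filters;		/* commit only on success */
	vnic_filters = f;
	return 0;
}
```

The original bug was the reverse order: inserting first and returning the HWRM status afterward, which left a dead filter queued on failure.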
Fixes: 778b759ba10e45208 ("net/bnxt: add MAC address")
Cc: stable@dpdk.org
Signed-off-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
drivers/net/bnxt/bnxt_ethdev.c | 15 +++++++++++++--
1 file changed, 13 insertions(+), 2 deletions(-)
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index a37ea93d9..0c77a8c08 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -915,6 +915,7 @@ static int bnxt_mac_addr_add_op(struct rte_eth_dev *eth_dev,
struct bnxt *bp = eth_dev->data->dev_private;
struct bnxt_vnic_info *vnic = &bp->vnic_info[pool];
struct bnxt_filter_info *filter;
+ int rc = 0;
if (BNXT_VF(bp) & !BNXT_VF_IS_TRUSTED(bp)) {
PMD_DRV_LOG(ERR, "Cannot add MAC address to a VF interface\n");
@@ -938,10 +939,20 @@ static int bnxt_mac_addr_add_op(struct rte_eth_dev *eth_dev,
PMD_DRV_LOG(ERR, "L2 filter alloc failed\n");
return -ENODEV;
}
- STAILQ_INSERT_TAIL(&vnic->filter, filter, next);
+
filter->mac_index = index;
memcpy(filter->l2_addr, mac_addr, RTE_ETHER_ADDR_LEN);
- return bnxt_hwrm_set_l2_filter(bp, vnic->fw_vnic_id, filter);
+
+ rc = bnxt_hwrm_set_l2_filter(bp, vnic->fw_vnic_id, filter);
+ if (!rc) {
+ STAILQ_INSERT_TAIL(&vnic->filter, filter, next);
+ } else {
+ filter->mac_index = INVALID_MAC_INDEX;
+ memset(&filter->l2_addr, 0, RTE_ETHER_ADDR_LEN);
+ bnxt_free_filter(bp, filter);
+ }
+
+ return rc;
}
int bnxt_link_update_op(struct rte_eth_dev *eth_dev, int wait_to_complete)
--
2.20.1 (Apple Git-117)
* [dpdk-dev] [PATCH 04/22] net/bnxt: fix an unconditional wait in link update
2019-07-18 3:35 [dpdk-dev] [PATCH 00/22] bnxt patchset Ajit Khaparde
` (2 preceding siblings ...)
2019-07-18 3:35 ` [dpdk-dev] [PATCH 03/22] net/bnxt: fix failure to add a MAC address Ajit Khaparde
@ 2019-07-18 3:35 ` Ajit Khaparde
2019-07-18 3:35 ` [dpdk-dev] [PATCH 05/22] net/bnxt: fix setting primary MAC address Ajit Khaparde
` (19 subsequent siblings)
23 siblings, 0 replies; 38+ messages in thread
From: Ajit Khaparde @ 2019-07-18 3:35 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, Kalesh AP, stable, Lance Richardson
From: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
There is an unconditional delay in the link update op.
Fix it to wait only when wait_to_complete is set and the link is not yet up.
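The reordered loop delays only between retries, never after the final poll. A sketch under hypothetical stubs (get_link() and link_update() are illustrative, not the driver's functions):

```c
#include <assert.h>

static int polls_done, delays_done;

/* Hypothetical link query: reports "up" from the 'up_after'-th poll on. */
static int get_link(int up_after) { return ++polls_done >= up_after; }

/* Delay only when another iteration will actually run. */
static int link_update(int wait_to_complete, int up_after, int max_tries)
{
	int link_up = 0, cnt = max_tries;

	do {
		link_up = get_link(up_after);
		if (!wait_to_complete || link_up)
			break;		/* no pointless trailing delay */
		delays_done++;		/* rte_delay_ms() in the patch */
	} while (cnt--);

	return link_up;
}
```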
Fixes: 7bc8e9a227ccbc64 ("net/bnxt: support async link notification")
Cc: stable@dpdk.org
Signed-off-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Reviewed-by: Lance Richardson <lance.richardson@broadcom.com>
---
drivers/net/bnxt/bnxt_ethdev.c | 7 ++++---
1 file changed, 4 insertions(+), 3 deletions(-)
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index 0c77a8c08..7e0fca31e 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -973,11 +973,12 @@ int bnxt_link_update_op(struct rte_eth_dev *eth_dev, int wait_to_complete)
"Failed to retrieve link rc = 0x%x!\n", rc);
goto out;
}
- rte_delay_ms(BNXT_LINK_WAIT_INTERVAL);
- if (!wait_to_complete)
+ if (!wait_to_complete || new.link_status)
break;
- } while (!new.link_status && cnt--);
+
+ rte_delay_ms(BNXT_LINK_WAIT_INTERVAL);
+ } while (cnt--);
out:
/* Timed out or success */
--
2.20.1 (Apple Git-117)
* [dpdk-dev] [PATCH 05/22] net/bnxt: fix setting primary MAC address
2019-07-18 3:35 [dpdk-dev] [PATCH 00/22] bnxt patchset Ajit Khaparde
` (3 preceding siblings ...)
2019-07-18 3:35 ` [dpdk-dev] [PATCH 04/22] net/bnxt: fix an unconditional wait in link update Ajit Khaparde
@ 2019-07-18 3:35 ` Ajit Khaparde
2019-07-18 3:36 ` [dpdk-dev] [PATCH 06/22] net/bnxt: fix failure path in dev init Ajit Khaparde
` (18 subsequent siblings)
23 siblings, 0 replies; 38+ messages in thread
From: Ajit Khaparde @ 2019-07-18 3:35 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, Kalesh AP, stable, Somnath Kotur
From: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
1. The default filter is tied to VNIC 0 at index 0. After finding the filter
with mac_index 0 and setting the new MAC address, looping through the
remaining filters is unnecessary.
2. Added a check for a NULL MAC address.
3. bnxt_hwrm_set_l2_filter() clears the existing filter configuration
first before applying new filter settings. Hence there is no need to
invoke bnxt_hwrm_clear_l2_filter() explicitly in
bnxt_set_default_mac_addr_op().
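Two of the points above — reject an all-zero address up front, and update the stored address only after the filter update succeeds — can be sketched as follows (hwrm_apply() and set_default_mac() are hypothetical stand-ins):

```c
#include <assert.h>
#include <errno.h>
#include <string.h>

static unsigned char dev_mac[6] = {0xaa, 0xbb, 0xcc, 0xdd, 0xee, 0xff};

/* Mirrors rte_is_zero_ether_addr(). */
static int is_zero_mac(const unsigned char *m)
{
	static const unsigned char z[6];
	return memcmp(m, z, 6) == 0;
}

/* Hypothetical firmware apply step. */
static int hwrm_apply(int accept) { return accept ? 0 : -EIO; }

/* Commit the new address only after the filter update succeeds. */
static int set_default_mac(const unsigned char *mac, int fw_accepts)
{
	int rc;

	if (is_zero_mac(mac))
		return -EINVAL;
	rc = hwrm_apply(fw_accepts);
	if (rc)
		return rc;		/* dev_mac untouched on failure */
	memcpy(dev_mac, mac, 6);
	return 0;
}
```

The pre-fix code copied the new address into bp->mac_addr before talking to firmware, so a failed HWRM call left the stored address out of sync with the hardware.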
Fixes: d69851df12b2d ("net/bnxt: support multicast filter and set MAC addr")
Cc: stable@dpdk.org
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
Signed-off-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
drivers/net/bnxt/bnxt_ethdev.c | 12 +++++++-----
1 file changed, 7 insertions(+), 5 deletions(-)
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index 7e0fca31e..455d8a3bf 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -1699,26 +1699,28 @@ bnxt_set_default_mac_addr_op(struct rte_eth_dev *dev,
if (BNXT_VF(bp) && !BNXT_VF_IS_TRUSTED(bp))
return -EPERM;
- memcpy(bp->mac_addr, addr, sizeof(bp->mac_addr));
+ if (rte_is_zero_ether_addr(addr))
+ return -EINVAL;
STAILQ_FOREACH(filter, &vnic->filter, next) {
/* Default Filter is at Index 0 */
if (filter->mac_index != 0)
continue;
- rc = bnxt_hwrm_clear_l2_filter(bp, filter);
- if (rc)
- return rc;
+
memcpy(filter->l2_addr, bp->mac_addr, RTE_ETHER_ADDR_LEN);
memset(filter->l2_addr_mask, 0xff, RTE_ETHER_ADDR_LEN);
filter->flags |= HWRM_CFA_L2_FILTER_ALLOC_INPUT_FLAGS_PATH_RX;
filter->enables |=
HWRM_CFA_L2_FILTER_ALLOC_INPUT_ENABLES_L2_ADDR |
HWRM_CFA_L2_FILTER_ALLOC_INPUT_ENABLES_L2_ADDR_MASK;
+
rc = bnxt_hwrm_set_l2_filter(bp, vnic->fw_vnic_id, filter);
if (rc)
return rc;
- filter->mac_index = 0;
+
+ memcpy(bp->mac_addr, addr, RTE_ETHER_ADDR_LEN);
PMD_DRV_LOG(DEBUG, "Set MAC addr\n");
+ return 0;
}
return 0;
--
2.20.1 (Apple Git-117)
* [dpdk-dev] [PATCH 06/22] net/bnxt: fix failure path in dev init
2019-07-18 3:35 [dpdk-dev] [PATCH 00/22] bnxt patchset Ajit Khaparde
` (4 preceding siblings ...)
2019-07-18 3:35 ` [dpdk-dev] [PATCH 05/22] net/bnxt: fix setting primary MAC address Ajit Khaparde
@ 2019-07-18 3:36 ` Ajit Khaparde
2019-07-18 3:36 ` [dpdk-dev] [PATCH 07/22] net/bnxt: reset filters before registering interrupts Ajit Khaparde
` (17 subsequent siblings)
23 siblings, 0 replies; 38+ messages in thread
From: Ajit Khaparde @ 2019-07-18 3:36 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, Kalesh AP, stable, Somnath Kotur
From: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
1. bnxt_dev_init() invokes bnxt_dev_uninit() on failure, so there is
no need to do individual function cleanups in the failure path.
2. Rearrange the primary-process check to remove an unwanted goto.
3. Invoke bnxt_hwrm_func_buf_unrgtr() in bnxt_dev_uninit() only when
it is needed.
Fixes: b7778e8a1c00a7 ("net/bnxt: refactor to properly allocate resources for PF/VF")
Cc: stable@dpdk.org
Signed-off-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
---
drivers/net/bnxt/bnxt_ethdev.c | 28 ++++++++++++++--------------
drivers/net/bnxt/bnxt_hwrm.c | 3 +++
2 files changed, 17 insertions(+), 14 deletions(-)
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index 455d8a3bf..814770ada 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -3853,8 +3853,16 @@ bnxt_dev_init(struct rte_eth_dev *eth_dev)
bp->dev_stopped = 1;
+ eth_dev->dev_ops = &bnxt_dev_ops;
+ eth_dev->rx_pkt_burst = &bnxt_recv_pkts;
+ eth_dev->tx_pkt_burst = &bnxt_xmit_pkts;
+
+ /*
+ * For secondary processes, we don't initialise any further
+ * as primary has already done this work.
+ */
if (rte_eal_process_type() != RTE_PROC_PRIMARY)
- goto skip_init;
+ return 0;
if (bnxt_vf_pciid(pci_dev->id.device_id))
bp->flags |= BNXT_FLAG_VF;
@@ -3871,12 +3879,6 @@ bnxt_dev_init(struct rte_eth_dev *eth_dev)
"Board initialization failed rc: %x\n", rc);
goto error;
}
-skip_init:
- eth_dev->dev_ops = &bnxt_dev_ops;
- eth_dev->rx_pkt_burst = &bnxt_recv_pkts;
- eth_dev->tx_pkt_burst = &bnxt_xmit_pkts;
- if (rte_eal_process_type() != RTE_PROC_PRIMARY)
- return 0;
rc = bnxt_alloc_hwrm_resources(bp);
if (rc) {
@@ -4021,21 +4023,16 @@ bnxt_dev_init(struct rte_eth_dev *eth_dev)
rc = bnxt_alloc_mem(bp);
if (rc)
- goto error_free_int;
+ goto error_free;
rc = bnxt_request_int(bp);
if (rc)
- goto error_free_int;
+ goto error_free;
bnxt_init_nic(bp);
return 0;
-error_free_int:
- bnxt_disable_int(bp);
- bnxt_hwrm_func_buf_unrgtr(bp);
- bnxt_free_int(bp);
- bnxt_free_mem(bp);
error_free:
bnxt_dev_uninit(eth_dev);
error:
@@ -4055,6 +4052,9 @@ bnxt_dev_uninit(struct rte_eth_dev *eth_dev)
bnxt_disable_int(bp);
bnxt_free_int(bp);
bnxt_free_mem(bp);
+
+ bnxt_hwrm_func_buf_unrgtr(bp);
+
if (bp->grp_info != NULL) {
rte_free(bp->grp_info);
bp->grp_info = NULL;
diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c
index 9c5e5ad77..27c4f2d88 100644
--- a/drivers/net/bnxt/bnxt_hwrm.c
+++ b/drivers/net/bnxt/bnxt_hwrm.c
@@ -3199,6 +3199,9 @@ int bnxt_hwrm_func_buf_unrgtr(struct bnxt *bp)
struct hwrm_func_buf_unrgtr_input req = {.req_type = 0 };
struct hwrm_func_buf_unrgtr_output *resp = bp->hwrm_cmd_resp_addr;
+ if (!(BNXT_PF(bp) && bp->pdev->max_vfs))
+ return 0;
+
HWRM_PREP(req, FUNC_BUF_UNRGTR, BNXT_USE_CHIMP_MB);
rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
--
2.20.1 (Apple Git-117)
* [dpdk-dev] [PATCH 07/22] net/bnxt: reset filters before registering interrupts
2019-07-18 3:35 [dpdk-dev] [PATCH 00/22] bnxt patchset Ajit Khaparde
` (5 preceding siblings ...)
2019-07-18 3:36 ` [dpdk-dev] [PATCH 06/22] net/bnxt: fix failure path in dev init Ajit Khaparde
@ 2019-07-18 3:36 ` Ajit Khaparde
2019-07-18 3:36 ` [dpdk-dev] [PATCH 08/22] net/bnxt: use correct vnic default completion ring Ajit Khaparde
` (16 subsequent siblings)
23 siblings, 0 replies; 38+ messages in thread
From: Ajit Khaparde @ 2019-07-18 3:36 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, Kalesh AP, stable, Somnath Kotur
From: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
If interrupt registration fails during device init, the driver invokes
uninit, which in turn causes error messages while trying to free
VNIC filters. Fix this by moving the filter initialization call before
interrupt registration.
Fixes: 1b533790f44e ("net/bnxt: avoid invalid vnic id in set L2 Rx mask")
Cc: stable@dpdk.org
Signed-off-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
drivers/net/bnxt/bnxt_ethdev.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index 814770ada..429ebe555 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -4025,12 +4025,12 @@ bnxt_dev_init(struct rte_eth_dev *eth_dev)
if (rc)
goto error_free;
+ bnxt_init_nic(bp);
+
rc = bnxt_request_int(bp);
if (rc)
goto error_free;
- bnxt_init_nic(bp);
-
return 0;
error_free:
--
2.20.1 (Apple Git-117)
* [dpdk-dev] [PATCH 08/22] net/bnxt: use correct vnic default completion ring
2019-07-18 3:35 [dpdk-dev] [PATCH 00/22] bnxt patchset Ajit Khaparde
` (6 preceding siblings ...)
2019-07-18 3:36 ` [dpdk-dev] [PATCH 07/22] net/bnxt: reset filters before registering interrupts Ajit Khaparde
@ 2019-07-18 3:36 ` Ajit Khaparde
2019-07-18 3:36 ` [dpdk-dev] [PATCH 09/22] net/bnxt: use dedicated cpr for async events Ajit Khaparde
` (15 subsequent siblings)
23 siblings, 0 replies; 38+ messages in thread
From: Ajit Khaparde @ 2019-07-18 3:36 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, Lance Richardson, Ajit Kumar Khaparde
From: Lance Richardson <lance.richardson@broadcom.com>
Use the completion ring associated with the default Rx ring
when configuring the default completion ring ID instead
of the async completion ring ID.
Fixes: f8168ca0e690 ("net/bnxt: support thor controller")
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
Signed-off-by: Lance Richardson <lance.richardson@broadcom.com>
---
drivers/net/bnxt/bnxt_hwrm.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c
index 27c4f2d88..bd4250a3a 100644
--- a/drivers/net/bnxt/bnxt_hwrm.c
+++ b/drivers/net/bnxt/bnxt_hwrm.c
@@ -1546,7 +1546,7 @@ int bnxt_hwrm_vnic_cfg(struct bnxt *bp, struct bnxt_vnic_info *vnic)
if (BNXT_CHIP_THOR(bp)) {
struct bnxt_rx_queue *rxq = bp->eth_dev->data->rx_queues[0];
struct bnxt_rx_ring_info *rxr = rxq->rx_ring;
- struct bnxt_cp_ring_info *cpr = bp->def_cp_ring;
+ struct bnxt_cp_ring_info *cpr = rxq->cp_ring;
req.default_rx_ring_id =
rte_cpu_to_le_16(rxr->rx_ring_struct->fw_ring_id);
--
2.20.1 (Apple Git-117)
* [dpdk-dev] [PATCH 09/22] net/bnxt: use dedicated cpr for async events
2019-07-18 3:35 [dpdk-dev] [PATCH 00/22] bnxt patchset Ajit Khaparde
` (7 preceding siblings ...)
2019-07-18 3:36 ` [dpdk-dev] [PATCH 08/22] net/bnxt: use correct vnic default completion ring Ajit Khaparde
@ 2019-07-18 3:36 ` Ajit Khaparde
2019-07-22 14:57 ` Ferruh Yigit
2019-07-24 16:14 ` [dpdk-dev] [PATCH] " Lance Richardson
2019-07-18 3:36 ` [dpdk-dev] [PATCH 10/22] net/bnxt: retry irq callback deregistration Ajit Khaparde
` (14 subsequent siblings)
23 siblings, 2 replies; 38+ messages in thread
From: Ajit Khaparde @ 2019-07-18 3:36 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, Lance Richardson, Somnath Kotur
From: Lance Richardson <lance.richardson@broadcom.com>
This commit enables the creation of a dedicated completion
ring for asynchronous event handling instead of handling these
events on a receive completion ring.
For the stingray platform and other platforms needing tighter
control of resource utilization, we retain the ability to
process async events on a receive completion ring. This behavior
is controlled by a compile-time configuration variable.
For Thor-based adapters, we use a dedicated NQ (notification
queue) ring for async events (async events can't currently
be received on a completion ring due to a firmware limitation).
Rename "def_cp_ring" to "async_cp_ring" to better reflect its
purpose (async event notifications) and to avoid confusion with
VNIC default receive completion rings.
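The compile-time switch works by folding the async ring into the completion-ring budget: with a shared ring it costs nothing extra, with a dedicated ring it consumes one slot. A sketch (SHARED_ASYNC_RING, NUM_ASYNC_CPR and rings_fit() are illustrative names mirroring CONFIG_RTE_LIBRTE_BNXT_SHARED_ASYNC_RING, BNXT_NUM_ASYNC_CPR and the bnxt_dev_configure_op() check):

```c
#include <assert.h>

/* Mirrors the patch's compile-time switch; define SHARED_ASYNC_RING to
 * handle async events on rx queue 0's completion ring instead. */
#ifdef SHARED_ASYNC_RING
#define NUM_ASYNC_CPR 0
#else
#define NUM_ASYNC_CPR 1		/* dedicated completion ring for async events */
#endif

/* Budget check as in bnxt_dev_configure_op(): the async ring now counts
 * against the device's completion-ring quota. */
static int rings_fit(int nb_rx, int nb_tx, int max_cp)
{
	return nb_rx + nb_tx + NUM_ASYNC_CPR <= max_cp;
}
```

The same constant also inflates num_stat_ctxs/num_cmpl_rings in the VF resource reservation, keeping the reservation and the configure-time check consistent.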
Signed-off-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
config/common_base | 1 +
config/defconfig_arm64-stingray-linuxapp-gcc | 3 +
drivers/net/bnxt/bnxt.h | 10 +-
drivers/net/bnxt/bnxt_ethdev.c | 13 +-
drivers/net/bnxt/bnxt_hwrm.c | 16 +-
drivers/net/bnxt/bnxt_hwrm.h | 2 +
drivers/net/bnxt/bnxt_irq.c | 47 +++---
drivers/net/bnxt/bnxt_ring.c | 145 ++++++++++++++++---
drivers/net/bnxt/bnxt_ring.h | 3 +
drivers/net/bnxt/bnxt_rxr.c | 2 +-
drivers/net/bnxt/bnxt_rxtx_vec_sse.c | 2 +-
11 files changed, 195 insertions(+), 49 deletions(-)
diff --git a/config/common_base b/config/common_base
index 8ef75c203..487a9b811 100644
--- a/config/common_base
+++ b/config/common_base
@@ -212,6 +212,7 @@ CONFIG_RTE_LIBRTE_BNX2X_DEBUG_PERIODIC=n
# Compile burst-oriented Broadcom BNXT PMD driver
#
CONFIG_RTE_LIBRTE_BNXT_PMD=y
+CONFIG_RTE_LIBRTE_BNXT_SHARED_ASYNC_RING=n
#
# Compile burst-oriented Chelsio Terminator (CXGBE) PMD
diff --git a/config/defconfig_arm64-stingray-linuxapp-gcc b/config/defconfig_arm64-stingray-linuxapp-gcc
index 7b33aa7af..acfb1c207 100644
--- a/config/defconfig_arm64-stingray-linuxapp-gcc
+++ b/config/defconfig_arm64-stingray-linuxapp-gcc
@@ -12,5 +12,8 @@ CONFIG_RTE_ARCH_ARM_TUNE="cortex-a72"
CONFIG_RTE_EAL_NUMA_AWARE_HUGEPAGES=n
CONFIG_RTE_LIBRTE_VHOST_NUMA=n
+# Conserve cpr resources by using rx cpr for async events.
+CONFIG_RTE_LIBRTE_BNXT_SHARED_ASYNC_RING=y
+
CONFIG_RTE_EAL_IGB_UIO=y
CONFIG_RTE_KNI_KMOD=n
diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h
index 3ccf784e5..8bd8f536c 100644
--- a/drivers/net/bnxt/bnxt.h
+++ b/drivers/net/bnxt/bnxt.h
@@ -33,6 +33,14 @@
#define BNXT_MAX_RX_RING_DESC 8192
#define BNXT_DB_SIZE 0x80
+#ifdef RTE_LIBRTE_BNXT_SHARED_ASYNC_RING
+/* Async events are handled on rx queue 0 completion ring. */
+#define BNXT_NUM_ASYNC_CPR 0
+#else
+/* Async events are handled on a dedicated completion ring. */
+#define BNXT_NUM_ASYNC_CPR 1
+#endif
+
/* Chimp Communication Channel */
#define GRCPF_REG_CHIMP_CHANNEL_OFFSET 0x0
#define GRCPF_REG_CHIMP_COMM_TRIGGER 0x100
@@ -387,7 +395,7 @@ struct bnxt {
uint16_t fw_tx_port_stats_ext_size;
/* Default completion ring */
- struct bnxt_cp_ring_info *def_cp_ring;
+ struct bnxt_cp_ring_info *async_cp_ring;
uint32_t max_ring_grps;
struct bnxt_ring_grp_info *grp_info;
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index 429ebe555..fe7837df2 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -198,12 +198,17 @@ static void bnxt_free_mem(struct bnxt *bp)
bnxt_free_stats(bp);
bnxt_free_tx_rings(bp);
bnxt_free_rx_rings(bp);
+ bnxt_free_async_cp_ring(bp);
}
static int bnxt_alloc_mem(struct bnxt *bp)
{
int rc;
+ rc = bnxt_alloc_async_ring_struct(bp);
+ if (rc)
+ goto alloc_mem_err;
+
rc = bnxt_alloc_vnic_mem(bp);
if (rc)
goto alloc_mem_err;
@@ -216,6 +221,10 @@ static int bnxt_alloc_mem(struct bnxt *bp)
if (rc)
goto alloc_mem_err;
+ rc = bnxt_alloc_async_cp_ring(bp);
+ if (rc)
+ goto alloc_mem_err;
+
return 0;
alloc_mem_err:
@@ -617,8 +626,8 @@ static int bnxt_dev_configure_op(struct rte_eth_dev *eth_dev)
/* Inherit new configurations */
if (eth_dev->data->nb_rx_queues > bp->max_rx_rings ||
eth_dev->data->nb_tx_queues > bp->max_tx_rings ||
- eth_dev->data->nb_rx_queues + eth_dev->data->nb_tx_queues >
- bp->max_cp_rings ||
+ eth_dev->data->nb_rx_queues + eth_dev->data->nb_tx_queues
+ + BNXT_NUM_ASYNC_CPR > bp->max_cp_rings ||
eth_dev->data->nb_rx_queues + eth_dev->data->nb_tx_queues >
bp->max_stat_ctx)
goto resource_error;
diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c
index bd4250a3a..52b2119a5 100644
--- a/drivers/net/bnxt/bnxt_hwrm.c
+++ b/drivers/net/bnxt/bnxt_hwrm.c
@@ -740,9 +740,12 @@ int bnxt_hwrm_func_reserve_vf_resc(struct bnxt *bp, bool test)
req.num_tx_rings = rte_cpu_to_le_16(bp->tx_nr_rings);
req.num_rx_rings = rte_cpu_to_le_16(bp->rx_nr_rings *
AGG_RING_MULTIPLIER);
- req.num_stat_ctxs = rte_cpu_to_le_16(bp->rx_nr_rings + bp->tx_nr_rings);
+ req.num_stat_ctxs = rte_cpu_to_le_16(bp->rx_nr_rings +
+ bp->tx_nr_rings +
+ BNXT_NUM_ASYNC_CPR);
req.num_cmpl_rings = rte_cpu_to_le_16(bp->rx_nr_rings +
- bp->tx_nr_rings);
+ bp->tx_nr_rings +
+ BNXT_NUM_ASYNC_CPR);
req.num_vnics = rte_cpu_to_le_16(bp->rx_nr_rings);
if (bp->vf_resv_strategy ==
HWRM_FUNC_RESOURCE_QCAPS_OUTPUT_VF_RESV_STRATEGY_MINIMAL_STATIC) {
@@ -2079,7 +2082,7 @@ int bnxt_free_all_hwrm_ring_grps(struct bnxt *bp)
return rc;
}
-static void bnxt_free_nq_ring(struct bnxt *bp, struct bnxt_cp_ring_info *cpr)
+void bnxt_free_nq_ring(struct bnxt *bp, struct bnxt_cp_ring_info *cpr)
{
struct bnxt_ring *cp_ring = cpr->cp_ring_struct;
@@ -2089,9 +2092,10 @@ static void bnxt_free_nq_ring(struct bnxt *bp, struct bnxt_cp_ring_info *cpr)
memset(cpr->cp_desc_ring, 0, cpr->cp_ring_struct->ring_size *
sizeof(*cpr->cp_desc_ring));
cpr->cp_raw_cons = 0;
+ cpr->valid = 0;
}
-static void bnxt_free_cp_ring(struct bnxt *bp, struct bnxt_cp_ring_info *cpr)
+void bnxt_free_cp_ring(struct bnxt *bp, struct bnxt_cp_ring_info *cpr)
{
struct bnxt_ring *cp_ring = cpr->cp_ring_struct;
@@ -3225,7 +3229,7 @@ int bnxt_hwrm_func_cfg_def_cp(struct bnxt *bp)
req.enables = rte_cpu_to_le_32(
HWRM_FUNC_CFG_INPUT_ENABLES_ASYNC_EVENT_CR);
req.async_event_cr = rte_cpu_to_le_16(
- bp->def_cp_ring->cp_ring_struct->fw_ring_id);
+ bp->async_cp_ring->cp_ring_struct->fw_ring_id);
rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
HWRM_CHECK_RESULT();
@@ -3245,7 +3249,7 @@ int bnxt_hwrm_vf_func_cfg_def_cp(struct bnxt *bp)
req.enables = rte_cpu_to_le_32(
HWRM_FUNC_VF_CFG_INPUT_ENABLES_ASYNC_EVENT_CR);
req.async_event_cr = rte_cpu_to_le_16(
- bp->def_cp_ring->cp_ring_struct->fw_ring_id);
+ bp->async_cp_ring->cp_ring_struct->fw_ring_id);
rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
HWRM_CHECK_RESULT();
diff --git a/drivers/net/bnxt/bnxt_hwrm.h b/drivers/net/bnxt/bnxt_hwrm.h
index 37aaa1a9e..c882fc2a1 100644
--- a/drivers/net/bnxt/bnxt_hwrm.h
+++ b/drivers/net/bnxt/bnxt_hwrm.h
@@ -119,6 +119,8 @@ int bnxt_free_all_hwrm_stat_ctxs(struct bnxt *bp);
int bnxt_free_all_hwrm_rings(struct bnxt *bp);
int bnxt_free_all_hwrm_ring_grps(struct bnxt *bp);
int bnxt_alloc_all_hwrm_ring_grps(struct bnxt *bp);
+void bnxt_free_cp_ring(struct bnxt *bp, struct bnxt_cp_ring_info *cpr);
+void bnxt_free_nq_ring(struct bnxt *bp, struct bnxt_cp_ring_info *cpr);
int bnxt_set_hwrm_vnic_filters(struct bnxt *bp, struct bnxt_vnic_info *vnic);
int bnxt_clear_hwrm_vnic_filters(struct bnxt *bp, struct bnxt_vnic_info *vnic);
void bnxt_free_all_hwrm_resources(struct bnxt *bp);
diff --git a/drivers/net/bnxt/bnxt_irq.c b/drivers/net/bnxt/bnxt_irq.c
index 6c4dce401..9ff16ddd8 100644
--- a/drivers/net/bnxt/bnxt_irq.c
+++ b/drivers/net/bnxt/bnxt_irq.c
@@ -21,7 +21,7 @@ static void bnxt_int_handler(void *param)
{
struct rte_eth_dev *eth_dev = (struct rte_eth_dev *)param;
struct bnxt *bp = eth_dev->data->dev_private;
- struct bnxt_cp_ring_info *cpr = bp->def_cp_ring;
+ struct bnxt_cp_ring_info *cpr = bp->async_cp_ring;
struct cmpl_base *cmp;
uint32_t raw_cons;
uint32_t cons;
@@ -42,19 +42,19 @@ static void bnxt_int_handler(void *param)
bnxt_event_hwrm_resp_handler(bp, cmp);
raw_cons = NEXT_RAW_CMP(raw_cons);
- };
+ }
cpr->cp_raw_cons = raw_cons;
- B_CP_DB_REARM(cpr, cpr->cp_raw_cons);
+ if (BNXT_HAS_NQ(bp))
+ bnxt_db_nq_arm(cpr);
+ else
+ B_CP_DB_REARM(cpr, cpr->cp_raw_cons);
}
void bnxt_free_int(struct bnxt *bp)
{
struct bnxt_irq *irq;
- if (bp->irq_tbl == NULL)
- return;
-
irq = bp->irq_tbl;
if (irq) {
if (irq->requested) {
@@ -70,19 +70,35 @@ void bnxt_free_int(struct bnxt *bp)
void bnxt_disable_int(struct bnxt *bp)
{
- struct bnxt_cp_ring_info *cpr = bp->def_cp_ring;
+ struct bnxt_cp_ring_info *cpr = bp->async_cp_ring;
+
+ if (BNXT_NUM_ASYNC_CPR == 0)
+ return;
+
+ if (!cpr || !cpr->cp_db.doorbell)
+ return;
/* Only the default completion ring */
- if (cpr != NULL && cpr->cp_db.doorbell != NULL)
+ if (BNXT_HAS_NQ(bp))
+ bnxt_db_nq(cpr);
+ else
B_CP_DB_DISARM(cpr);
}
void bnxt_enable_int(struct bnxt *bp)
{
- struct bnxt_cp_ring_info *cpr = bp->def_cp_ring;
+ struct bnxt_cp_ring_info *cpr = bp->async_cp_ring;
+
+ if (BNXT_NUM_ASYNC_CPR == 0)
+ return;
+
+ if (!cpr || !cpr->cp_db.doorbell)
+ return;
/* Only the default completion ring */
- if (cpr != NULL && cpr->cp_db.doorbell != NULL)
+ if (BNXT_HAS_NQ(bp))
+ bnxt_db_nq_arm(cpr);
+ else
B_CP_DB_ARM(cpr);
}
@@ -90,7 +106,7 @@ int bnxt_setup_int(struct bnxt *bp)
{
uint16_t total_vecs;
const int len = sizeof(bp->irq_tbl[0].name);
- int i, rc = 0;
+ int i;
/* DPDK host only supports 1 MSI-X vector */
total_vecs = 1;
@@ -104,14 +120,11 @@ int bnxt_setup_int(struct bnxt *bp)
bp->irq_tbl[i].handler = bnxt_int_handler;
}
} else {
- rc = -ENOMEM;
- goto setup_exit;
+ PMD_DRV_LOG(ERR, "bnxt_irq_tbl setup failed\n");
+ return -ENOMEM;
}
- return 0;
-setup_exit:
- PMD_DRV_LOG(ERR, "bnxt_irq_tbl setup failed\n");
- return rc;
+ return 0;
}
int bnxt_request_int(struct bnxt *bp)
diff --git a/drivers/net/bnxt/bnxt_ring.c b/drivers/net/bnxt/bnxt_ring.c
index a9952e02c..05a9a200c 100644
--- a/drivers/net/bnxt/bnxt_ring.c
+++ b/drivers/net/bnxt/bnxt_ring.c
@@ -5,6 +5,7 @@
#include <rte_bitmap.h>
#include <rte_memzone.h>
+#include <rte_malloc.h>
#include <unistd.h>
#include "bnxt.h"
@@ -369,6 +370,7 @@ static int bnxt_alloc_cmpl_ring(struct bnxt *bp, int queue_index,
{
struct bnxt_ring *cp_ring = cpr->cp_ring_struct;
uint32_t nq_ring_id = HWRM_NA_SIGNATURE;
+ int cp_ring_index = queue_index + BNXT_NUM_ASYNC_CPR;
uint8_t ring_type;
int rc = 0;
@@ -383,13 +385,13 @@ static int bnxt_alloc_cmpl_ring(struct bnxt *bp, int queue_index,
}
}
- rc = bnxt_hwrm_ring_alloc(bp, cp_ring, ring_type, queue_index,
+ rc = bnxt_hwrm_ring_alloc(bp, cp_ring, ring_type, cp_ring_index,
HWRM_NA_SIGNATURE, nq_ring_id);
if (rc)
return rc;
cpr->cp_cons = 0;
- bnxt_set_db(bp, &cpr->cp_db, ring_type, queue_index,
+ bnxt_set_db(bp, &cpr->cp_db, ring_type, cp_ring_index,
cp_ring->fw_ring_id);
bnxt_db_cq(cpr);
@@ -400,6 +402,7 @@ static int bnxt_alloc_nq_ring(struct bnxt *bp, int queue_index,
struct bnxt_cp_ring_info *nqr)
{
struct bnxt_ring *nq_ring = nqr->cp_ring_struct;
+ int nq_ring_index = queue_index + BNXT_NUM_ASYNC_CPR;
uint8_t ring_type;
int rc = 0;
@@ -408,12 +411,12 @@ static int bnxt_alloc_nq_ring(struct bnxt *bp, int queue_index,
ring_type = HWRM_RING_ALLOC_INPUT_RING_TYPE_NQ;
- rc = bnxt_hwrm_ring_alloc(bp, nq_ring, ring_type, queue_index,
+ rc = bnxt_hwrm_ring_alloc(bp, nq_ring, ring_type, nq_ring_index,
HWRM_NA_SIGNATURE, HWRM_NA_SIGNATURE);
if (rc)
return rc;
- bnxt_set_db(bp, &nqr->cp_db, ring_type, queue_index,
+ bnxt_set_db(bp, &nqr->cp_db, ring_type, nq_ring_index,
nq_ring->fw_ring_id);
bnxt_db_nq(nqr);
@@ -490,14 +493,16 @@ int bnxt_alloc_hwrm_rx_ring(struct bnxt *bp, int queue_index)
struct bnxt_ring *cp_ring = cpr->cp_ring_struct;
struct bnxt_cp_ring_info *nqr = rxq->nq_ring;
struct bnxt_rx_ring_info *rxr = rxq->rx_ring;
- int rc = 0;
+ int rc;
if (BNXT_HAS_NQ(bp)) {
- if (bnxt_alloc_nq_ring(bp, queue_index, nqr))
+ rc = bnxt_alloc_nq_ring(bp, queue_index, nqr);
+ if (rc)
goto err_out;
}
- if (bnxt_alloc_cmpl_ring(bp, queue_index, cpr, nqr))
+ rc = bnxt_alloc_cmpl_ring(bp, queue_index, cpr, nqr);
+ if (rc)
goto err_out;
if (BNXT_HAS_RING_GRPS(bp)) {
@@ -505,22 +510,24 @@ int bnxt_alloc_hwrm_rx_ring(struct bnxt *bp, int queue_index)
bp->grp_info[queue_index].cp_fw_ring_id = cp_ring->fw_ring_id;
}
- if (!queue_index) {
+ if (!BNXT_NUM_ASYNC_CPR && !queue_index) {
/*
- * In order to save completion resources, use the first
- * completion ring from PF or VF as the default completion ring
- * for async event and HWRM forward response handling.
+ * If a dedicated async event completion ring is not enabled,
+ * use the first completion ring from PF or VF as the default
+ * completion ring for async event handling.
*/
- bp->def_cp_ring = cpr;
+ bp->async_cp_ring = cpr;
rc = bnxt_hwrm_set_async_event_cr(bp);
if (rc)
goto err_out;
}
- if (bnxt_alloc_rx_ring(bp, queue_index))
+ rc = bnxt_alloc_rx_ring(bp, queue_index);
+ if (rc)
goto err_out;
- if (bnxt_alloc_rx_agg_ring(bp, queue_index))
+ rc = bnxt_alloc_rx_agg_ring(bp, queue_index);
+ if (rc)
goto err_out;
rxq->rx_buf_use_size = BNXT_MAX_MTU + RTE_ETHER_HDR_LEN +
@@ -545,6 +552,9 @@ int bnxt_alloc_hwrm_rx_ring(struct bnxt *bp, int queue_index)
bp->eth_dev->data->rx_queue_state[queue_index]);
err_out:
+ PMD_DRV_LOG(ERR,
+ "Failed to allocate receive queue %d, rc %d.\n",
+ queue_index, rc);
return rc;
}
@@ -583,15 +593,13 @@ int bnxt_alloc_hwrm_rings(struct bnxt *bp)
}
bnxt_hwrm_set_ring_coal(bp, &coal, cp_ring->fw_ring_id);
-
- if (!i) {
+ if (!BNXT_NUM_ASYNC_CPR && !i) {
/*
- * In order to save completion resource, use the first
- * completion ring from PF or VF as the default
- * completion ring for async event & HWRM
- * forward response handling.
+ * If a dedicated async event completion ring is not
+ * enabled, use the first completion ring as the default
+ * completion ring for async event handling.
*/
- bp->def_cp_ring = cpr;
+ bp->async_cp_ring = cpr;
rc = bnxt_hwrm_set_async_event_cr(bp);
if (rc)
goto err_out;
@@ -652,3 +660,98 @@ int bnxt_alloc_hwrm_rings(struct bnxt *bp)
err_out:
return rc;
}
+
+/* Allocate dedicated async completion ring. */
+int bnxt_alloc_async_cp_ring(struct bnxt *bp)
+{
+ struct bnxt_cp_ring_info *cpr = bp->async_cp_ring;
+ struct bnxt_ring *cp_ring = cpr->cp_ring_struct;
+ uint8_t ring_type;
+ int rc;
+
+ if (BNXT_NUM_ASYNC_CPR == 0)
+ return 0;
+
+ if (BNXT_HAS_NQ(bp))
+ ring_type = HWRM_RING_ALLOC_INPUT_RING_TYPE_NQ;
+ else
+ ring_type = HWRM_RING_ALLOC_INPUT_RING_TYPE_L2_CMPL;
+
+ rc = bnxt_hwrm_ring_alloc(bp, cp_ring, ring_type, 0,
+ HWRM_NA_SIGNATURE, HWRM_NA_SIGNATURE);
+
+ if (rc)
+ return rc;
+
+ cpr->cp_cons = 0;
+ cpr->valid = 0;
+ bnxt_set_db(bp, &cpr->cp_db, ring_type, 0,
+ cp_ring->fw_ring_id);
+
+ if (BNXT_HAS_NQ(bp))
+ bnxt_db_nq(cpr);
+ else
+ bnxt_db_cq(cpr);
+
+ return bnxt_hwrm_set_async_event_cr(bp);
+}
+
+/* Free dedicated async completion ring. */
+void bnxt_free_async_cp_ring(struct bnxt *bp)
+{
+ struct bnxt_cp_ring_info *cpr = bp->async_cp_ring;
+
+ if (BNXT_NUM_ASYNC_CPR == 0 || cpr == NULL)
+ return;
+
+ if (BNXT_HAS_NQ(bp))
+ bnxt_free_nq_ring(bp, cpr);
+ else
+ bnxt_free_cp_ring(bp, cpr);
+
+ bnxt_free_ring(cpr->cp_ring_struct);
+ rte_free(cpr->cp_ring_struct);
+ cpr->cp_ring_struct = NULL;
+ rte_free(cpr);
+ bp->async_cp_ring = NULL;
+}
+
+int bnxt_alloc_async_ring_struct(struct bnxt *bp)
+{
+ struct bnxt_cp_ring_info *cpr = NULL;
+ struct bnxt_ring *ring = NULL;
+ unsigned int socket_id;
+
+ if (BNXT_NUM_ASYNC_CPR == 0)
+ return 0;
+
+ socket_id = rte_lcore_to_socket_id(rte_get_master_lcore());
+
+ cpr = rte_zmalloc_socket("cpr",
+ sizeof(struct bnxt_cp_ring_info),
+ RTE_CACHE_LINE_SIZE, socket_id);
+ if (cpr == NULL)
+ return -ENOMEM;
+
+ ring = rte_zmalloc_socket("bnxt_cp_ring_struct",
+ sizeof(struct bnxt_ring),
+ RTE_CACHE_LINE_SIZE, socket_id);
+ if (ring == NULL) {
+ rte_free(cpr);
+ return -ENOMEM;
+ }
+
+ ring->bd = (void *)cpr->cp_desc_ring;
+ ring->bd_dma = cpr->cp_desc_mapping;
+ ring->ring_size = rte_align32pow2(DEFAULT_CP_RING_SIZE);
+ ring->ring_mask = ring->ring_size - 1;
+ ring->vmem_size = 0;
+ ring->vmem = NULL;
+
+ bp->async_cp_ring = cpr;
+ cpr->cp_ring_struct = ring;
+
+ return bnxt_alloc_rings(bp, 0, NULL, NULL,
+ bp->async_cp_ring, NULL,
+ "def_cp");
+}
diff --git a/drivers/net/bnxt/bnxt_ring.h b/drivers/net/bnxt/bnxt_ring.h
index e5cef3a1d..04c7b04b8 100644
--- a/drivers/net/bnxt/bnxt_ring.h
+++ b/drivers/net/bnxt/bnxt_ring.h
@@ -75,6 +75,9 @@ int bnxt_alloc_rings(struct bnxt *bp, uint16_t qidx,
const char *suffix);
int bnxt_alloc_hwrm_rx_ring(struct bnxt *bp, int queue_index);
int bnxt_alloc_hwrm_rings(struct bnxt *bp);
+int bnxt_alloc_async_cp_ring(struct bnxt *bp);
+void bnxt_free_async_cp_ring(struct bnxt *bp);
+int bnxt_alloc_async_ring_struct(struct bnxt *bp);
static inline void bnxt_db_write(struct bnxt_db_info *db, uint32_t idx)
{
diff --git a/drivers/net/bnxt/bnxt_rxr.c b/drivers/net/bnxt/bnxt_rxr.c
index 54a2cf5fd..1e068f817 100644
--- a/drivers/net/bnxt/bnxt_rxr.c
+++ b/drivers/net/bnxt/bnxt_rxr.c
@@ -564,7 +564,7 @@ uint16_t bnxt_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
nb_rx_pkts++;
if (rc == -EBUSY) /* partial completion */
break;
- } else {
+ } else if (!BNXT_NUM_ASYNC_CPR) {
evt =
bnxt_event_hwrm_resp_handler(rxq->bp,
(struct cmpl_base *)rxcmp);
diff --git a/drivers/net/bnxt/bnxt_rxtx_vec_sse.c b/drivers/net/bnxt/bnxt_rxtx_vec_sse.c
index c358506f8..3ef016073 100644
--- a/drivers/net/bnxt/bnxt_rxtx_vec_sse.c
+++ b/drivers/net/bnxt/bnxt_rxtx_vec_sse.c
@@ -257,7 +257,7 @@ bnxt_recv_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
mbuf->packet_type = bnxt_parse_pkt_type(rxcmp, rxcmp1);
rx_pkts[nb_rx_pkts++] = mbuf;
- } else {
+ } else if (!BNXT_NUM_ASYNC_CPR) {
evt =
bnxt_event_hwrm_resp_handler(rxq->bp,
(struct cmpl_base *)rxcmp);
--
2.20.1 (Apple Git-117)
^ permalink raw reply [flat|nested] 38+ messages in thread
* [dpdk-dev] [PATCH 10/22] net/bnxt: retry irq callback deregistration
2019-07-18 3:35 [dpdk-dev] [PATCH 00/22] bnxt patchset Ajit Khaparde
` (8 preceding siblings ...)
2019-07-18 3:36 ` [dpdk-dev] [PATCH 09/22] net/bnxt: use dedicated cpr for async events Ajit Khaparde
@ 2019-07-18 3:36 ` Ajit Khaparde
2019-07-18 3:36 ` [dpdk-dev] [PATCH 11/22] net/bnxt: fix error checking of FW commands Ajit Khaparde
` (13 subsequent siblings)
23 siblings, 0 replies; 38+ messages in thread
From: Ajit Khaparde @ 2019-07-18 3:36 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, Lance Richardson, stable
From: Lance Richardson <lance.richardson@broadcom.com>
rte_intr_callback_unregister() can fail if the handler happens to
be active at the time of the call. Add logic to retry a reasonable
number of times to help ensure that the callback is unregistered
on uninit.
Fixes: 7bc8e9a227cc ("net/bnxt: support async link notification")
Cc: stable@dpdk.org
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Signed-off-by: Lance Richardson <lance.richardson@broadcom.com>
---
drivers/net/bnxt/bnxt_irq.c | 63 +++++++++++++++++++++++++++----------
drivers/net/bnxt/bnxt_irq.h | 2 +-
2 files changed, 48 insertions(+), 17 deletions(-)
diff --git a/drivers/net/bnxt/bnxt_irq.c b/drivers/net/bnxt/bnxt_irq.c
index 9ff16ddd8..42a2ff2a3 100644
--- a/drivers/net/bnxt/bnxt_irq.c
+++ b/drivers/net/bnxt/bnxt_irq.c
@@ -51,21 +51,45 @@ static void bnxt_int_handler(void *param)
B_CP_DB_REARM(cpr, cpr->cp_raw_cons);
}
-void bnxt_free_int(struct bnxt *bp)
+int bnxt_free_int(struct bnxt *bp)
{
- struct bnxt_irq *irq;
-
- irq = bp->irq_tbl;
- if (irq) {
- if (irq->requested) {
- rte_intr_callback_unregister(&bp->pdev->intr_handle,
- irq->handler,
- (void *)bp->eth_dev);
- irq->requested = 0;
+ struct rte_intr_handle *intr_handle = &bp->pdev->intr_handle;
+ struct bnxt_irq *irq = bp->irq_tbl;
+ int rc = 0;
+
+ if (!irq)
+ return 0;
+
+ if (irq->requested) {
+ int count = 0;
+
+ /*
+ * Callback deregistration will fail with rc -EAGAIN if the
+ * callback is currently active. Retry every 50 ms until
+ * successful or 500 ms has elapsed.
+ */
+ do {
+ rc = rte_intr_callback_unregister(intr_handle,
+ irq->handler,
+ bp->eth_dev);
+ if (rc >= 0) {
+ irq->requested = 0;
+ break;
+ }
+ rte_delay_ms(50);
+ } while (count++ < 10);
+
+ if (rc < 0) {
+ PMD_DRV_LOG(ERR, "irq cb unregister failed rc: %d\n",
+ rc);
+ return rc;
}
- rte_free((void *)bp->irq_tbl);
- bp->irq_tbl = NULL;
}
+
+ rte_free(bp->irq_tbl);
+ bp->irq_tbl = NULL;
+
+ return 0;
}
void bnxt_disable_int(struct bnxt *bp)
@@ -129,13 +153,20 @@ int bnxt_setup_int(struct bnxt *bp)
int bnxt_request_int(struct bnxt *bp)
{
+ struct rte_intr_handle *intr_handle = &bp->pdev->intr_handle;
+ struct bnxt_irq *irq = bp->irq_tbl;
int rc = 0;
- struct bnxt_irq *irq = bp->irq_tbl;
+ if (!irq)
+ return 0;
- rte_intr_callback_register(&bp->pdev->intr_handle, irq->handler,
- (void *)bp->eth_dev);
+ if (!irq->requested) {
+ rc = rte_intr_callback_register(intr_handle,
+ irq->handler,
+ bp->eth_dev);
+ if (!rc)
+ irq->requested = 1;
+ }
- irq->requested = 1;
return rc;
}
diff --git a/drivers/net/bnxt/bnxt_irq.h b/drivers/net/bnxt/bnxt_irq.h
index 75ba2135b..460a97a09 100644
--- a/drivers/net/bnxt/bnxt_irq.h
+++ b/drivers/net/bnxt/bnxt_irq.h
@@ -17,7 +17,7 @@ struct bnxt_irq {
};
struct bnxt;
-void bnxt_free_int(struct bnxt *bp);
+int bnxt_free_int(struct bnxt *bp);
void bnxt_disable_int(struct bnxt *bp);
void bnxt_enable_int(struct bnxt *bp);
int bnxt_setup_int(struct bnxt *bp);
--
2.20.1 (Apple Git-117)
^ permalink raw reply [flat|nested] 38+ messages in thread
* [dpdk-dev] [PATCH 11/22] net/bnxt: fix error checking of FW commands
2019-07-18 3:35 [dpdk-dev] [PATCH 00/22] bnxt patchset Ajit Khaparde
` (9 preceding siblings ...)
2019-07-18 3:36 ` [dpdk-dev] [PATCH 10/22] net/bnxt: retry irq callback deregistration Ajit Khaparde
@ 2019-07-18 3:36 ` Ajit Khaparde
2019-07-18 3:36 ` [dpdk-dev] [PATCH 12/22] net/bnxt: fix to return standard error codes Ajit Khaparde
` (12 subsequent siblings)
23 siblings, 0 replies; 38+ messages in thread
From: Ajit Khaparde @ 2019-07-18 3:36 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, Kalesh AP, stable, Lance Richardson
From: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
HWRM_CHECK_RESULT() checks the return value of the HWRM command and returns
if the command fails. There is no need for a return value check after
HWRM_CHECK_RESULT().
Fixes: 49947a13ba9e ("net/bnxt: support Tx loopback, set VF MAC and queues drop")
Cc: stable@dpdk.org
Signed-off-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Reviewed-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
drivers/net/bnxt/bnxt_hwrm.c | 23 +++--------------------
1 file changed, 3 insertions(+), 20 deletions(-)
diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c
index 52b2119a5..9bd2fcb9f 100644
--- a/drivers/net/bnxt/bnxt_hwrm.c
+++ b/drivers/net/bnxt/bnxt_hwrm.c
@@ -1766,8 +1766,6 @@ bnxt_hwrm_vnic_rss_cfg_thor(struct bnxt *bp, struct bnxt_vnic_info *vnic)
BNXT_USE_CHIMP_MB);
HWRM_CHECK_RESULT();
- if (rc)
- break;
}
HWRM_UNLOCK();
@@ -3778,16 +3776,7 @@ static int bnxt_hwrm_func_vf_vnic_query(struct bnxt *bp, uint16_t vf,
return -ENOMEM;
}
rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
- if (rc) {
- HWRM_UNLOCK();
- PMD_DRV_LOG(ERR, "hwrm_func_vf_vnic_query failed rc:%d\n", rc);
- return -1;
- } else if (resp->error_code) {
- rc = rte_le_to_cpu_16(resp->error_code);
- HWRM_UNLOCK();
- PMD_DRV_LOG(ERR, "hwrm_func_vf_vnic_query error %d\n", rc);
- return -1;
- }
+ HWRM_CHECK_RESULT();
rc = rte_le_to_cpu_32(resp->vnic_id_cnt);
HWRM_UNLOCK();
@@ -4196,8 +4185,6 @@ bnxt_vnic_rss_configure_thor(struct bnxt *bp, struct bnxt_vnic_info *vnic)
BNXT_USE_CHIMP_MB);
HWRM_CHECK_RESULT();
- if (rc)
- break;
}
HWRM_UNLOCK();
@@ -4275,8 +4262,7 @@ static int bnxt_hwrm_set_coal_params_thor(struct bnxt *bp,
HWRM_PREP(req, RING_AGGINT_QCAPS, BNXT_USE_CHIMP_MB);
rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
- if (rc)
- goto out;
+ HWRM_CHECK_RESULT();
agg_req->num_cmpl_dma_aggr = resp->num_cmpl_dma_aggr_max;
agg_req->cmpl_aggr_dma_tmr = resp->cmpl_aggr_dma_tmr_min;
@@ -4289,8 +4275,6 @@ static int bnxt_hwrm_set_coal_params_thor(struct bnxt *bp,
HWRM_RING_CMPL_RING_CFG_AGGINT_PARAMS_INPUT_ENABLES_NUM_CMPL_DMA_AGGR;
agg_req->enables = rte_cpu_to_le_32(enables);
-out:
- HWRM_CHECK_RESULT();
HWRM_UNLOCK();
return rc;
}
@@ -4502,8 +4486,7 @@ int bnxt_hwrm_func_backing_store_cfg(struct bnxt *bp, uint32_t enables)
rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
HWRM_CHECK_RESULT();
HWRM_UNLOCK();
- if (rc)
- rc = -EIO;
+
return rc;
}
--
2.20.1 (Apple Git-117)
^ permalink raw reply [flat|nested] 38+ messages in thread
* [dpdk-dev] [PATCH 12/22] net/bnxt: fix to return standard error codes
2019-07-18 3:35 [dpdk-dev] [PATCH 00/22] bnxt patchset Ajit Khaparde
` (10 preceding siblings ...)
2019-07-18 3:36 ` [dpdk-dev] [PATCH 11/22] net/bnxt: fix error checking of FW commands Ajit Khaparde
@ 2019-07-18 3:36 ` Ajit Khaparde
2019-07-18 3:36 ` [dpdk-dev] [PATCH 13/22] net/bnxt: fix lock release on getting NVM info Ajit Khaparde
` (11 subsequent siblings)
23 siblings, 0 replies; 38+ messages in thread
From: Ajit Khaparde @ 2019-07-18 3:36 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, Kalesh AP, stable, Somnath Kotur
From: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Fixed the return values of a few routines to return standard error codes.
Also fixed a few error logs to be more meaningful.
Fixes: 804e746c7b7338 ("net/bnxt: add hardware resource manager init code")
Fixes: e3d8f1e6a665f9 ("net/bnxt: cache address of doorbell to subsequent access")
Fixes: 19e6af01bb36d7 ("net/bnxt: support get/set EEPROM")
Fixes: b7435d660a8cde ("net/bnxt: add ntuple filtering support")
Cc: stable@dpdk.org
Signed-off-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
---
drivers/net/bnxt/bnxt_ethdev.c | 46 +++++++---------------------------
drivers/net/bnxt/bnxt_hwrm.c | 30 +++++++++-------------
2 files changed, 21 insertions(+), 55 deletions(-)
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index fe7837df2..dd127cd6f 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -2161,7 +2161,7 @@ bnxt_ethertype_filter(struct rte_eth_dev *dev,
filter1 = bnxt_get_l2_filter(bp, bfilter, vnic0);
if (filter1 == NULL) {
- ret = -1;
+ ret = -EINVAL;
goto cleanup;
}
bfilter->enables |=
@@ -2355,7 +2355,7 @@ bnxt_cfg_ntuple_filter(struct bnxt *bp,
vnic0 = &bp->vnic_info[0];
filter1 = STAILQ_FIRST(&vnic0->filter);
if (filter1 == NULL) {
- ret = -1;
+ ret = -EINVAL;
goto free_filter;
}
@@ -3297,7 +3297,6 @@ bnxt_set_eeprom_op(struct rte_eth_dev *dev,
return bnxt_hwrm_flash_nvram(bp, type, ordinal, ext, attr,
in_eeprom->data, in_eeprom->length);
- return 0;
}
/*
@@ -3400,48 +3399,21 @@ bool bnxt_stratus_device(struct bnxt *bp)
static int bnxt_init_board(struct rte_eth_dev *eth_dev)
{
- struct bnxt *bp = eth_dev->data->dev_private;
struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- int rc;
+ struct bnxt *bp = eth_dev->data->dev_private;
/* enable device (incl. PCI PM wakeup), and bus-mastering */
- if (!pci_dev->mem_resource[0].addr) {
- PMD_DRV_LOG(ERR,
- "Cannot find PCI device base address, aborting\n");
- rc = -ENODEV;
- goto init_err_disable;
+ bp->bar0 = (void *)pci_dev->mem_resource[0].addr;
+ bp->doorbell_base = (void *)pci_dev->mem_resource[2].addr;
+ if (!bp->bar0 || !bp->doorbell_base) {
+ PMD_DRV_LOG(ERR, "Unable to access Hardware\n");
+ return -ENODEV;
}
bp->eth_dev = eth_dev;
bp->pdev = pci_dev;
- bp->bar0 = (void *)pci_dev->mem_resource[0].addr;
- if (!bp->bar0) {
- PMD_DRV_LOG(ERR, "Cannot map device registers, aborting\n");
- rc = -ENOMEM;
- goto init_err_release;
- }
-
- if (!pci_dev->mem_resource[2].addr) {
- PMD_DRV_LOG(ERR,
- "Cannot find PCI device BAR 2 address, aborting\n");
- rc = -ENODEV;
- goto init_err_release;
- } else {
- bp->doorbell_base = (void *)pci_dev->mem_resource[2].addr;
- }
-
return 0;
-
-init_err_release:
- if (bp->bar0)
- bp->bar0 = NULL;
- if (bp->doorbell_base)
- bp->doorbell_base = NULL;
-
-init_err_disable:
-
- return rc;
}
static int bnxt_alloc_ctx_mem_blk(__rte_unused struct bnxt *bp,
@@ -3681,7 +3653,7 @@ int bnxt_alloc_ctx_mem(struct bnxt *bp)
else
ctx->flags |= BNXT_CTX_FLAG_INITED;
- return 0;
+ return rc;
}
static int bnxt_alloc_stats_mem(struct bnxt *bp)
diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c
index 9bd2fcb9f..fda5c7c1b 100644
--- a/drivers/net/bnxt/bnxt_hwrm.c
+++ b/drivers/net/bnxt/bnxt_hwrm.c
@@ -152,14 +152,11 @@ static int bnxt_hwrm_send_message(struct bnxt *bp, void *msg,
}
if (i >= HWRM_CMD_TIMEOUT) {
- PMD_DRV_LOG(ERR, "Error sending msg 0x%04x\n",
- req->req_type);
- goto err_ret;
+ PMD_DRV_LOG(ERR, "Error(timeout) sending msg 0x%04x\n",
+ req->req_type);
+ return -ETIMEDOUT;
}
return 0;
-
-err_ret:
- return -1;
}
/*
@@ -1220,7 +1217,7 @@ int bnxt_hwrm_ring_alloc(struct bnxt *bp,
PMD_DRV_LOG(ERR, "hwrm alloc invalid ring type %d\n",
ring_type);
HWRM_UNLOCK();
- return -1;
+ return -EINVAL;
}
req.enables = rte_cpu_to_le_32(enables);
@@ -2934,7 +2931,7 @@ int bnxt_hwrm_allocate_pf_only(struct bnxt *bp)
if (!BNXT_PF(bp)) {
PMD_DRV_LOG(ERR, "Attempt to allcoate VFs on a VF!\n");
- return -1;
+ return -EINVAL;
}
rc = bnxt_hwrm_func_qcaps(bp);
@@ -2962,7 +2959,7 @@ int bnxt_hwrm_allocate_vfs(struct bnxt *bp, int num_vfs)
if (!BNXT_PF(bp)) {
PMD_DRV_LOG(ERR, "Attempt to allcoate VFs on a VF!\n");
- return -1;
+ return -EINVAL;
}
rc = bnxt_hwrm_func_qcaps(bp);
@@ -3804,10 +3801,9 @@ int bnxt_hwrm_func_vf_vnic_query_and_config(struct bnxt *bp, uint16_t vf,
vnic_id_sz = bp->pf.total_vnics * sizeof(*vnic_ids);
vnic_ids = rte_malloc("bnxt_hwrm_vf_vnic_ids_query", vnic_id_sz,
RTE_CACHE_LINE_SIZE);
- if (vnic_ids == NULL) {
- rc = -ENOMEM;
- return rc;
- }
+ if (vnic_ids == NULL)
+ return -ENOMEM;
+
for (sz = 0; sz < vnic_id_sz; sz += getpagesize())
rte_mem_lock_page(((char *)vnic_ids) + sz);
@@ -3874,10 +3870,8 @@ int bnxt_hwrm_func_qcfg_vf_dflt_vnic_id(struct bnxt *bp, int vf)
vnic_id_sz = bp->pf.total_vnics * sizeof(*vnic_ids);
vnic_ids = rte_malloc("bnxt_hwrm_vf_vnic_ids_query", vnic_id_sz,
RTE_CACHE_LINE_SIZE);
- if (vnic_ids == NULL) {
- rc = -ENOMEM;
- return rc;
- }
+ if (vnic_ids == NULL)
+ return -ENOMEM;
for (sz = 0; sz < vnic_id_sz; sz += getpagesize())
rte_mem_lock_page(((char *)vnic_ids) + sz);
@@ -3908,7 +3902,7 @@ int bnxt_hwrm_func_qcfg_vf_dflt_vnic_id(struct bnxt *bp, int vf)
PMD_DRV_LOG(ERR, "No default VNIC\n");
exit:
rte_free(vnic_ids);
- return -1;
+ return rc;
}
int bnxt_hwrm_set_em_filter(struct bnxt *bp,
--
2.20.1 (Apple Git-117)
^ permalink raw reply [flat|nested] 38+ messages in thread
* [dpdk-dev] [PATCH 13/22] net/bnxt: fix lock release on getting NVM info
2019-07-18 3:35 [dpdk-dev] [PATCH 00/22] bnxt patchset Ajit Khaparde
` (11 preceding siblings ...)
2019-07-18 3:36 ` [dpdk-dev] [PATCH 12/22] net/bnxt: fix to return standard error codes Ajit Khaparde
@ 2019-07-18 3:36 ` Ajit Khaparde
2019-07-18 3:36 ` [dpdk-dev] [PATCH 14/22] net/bnxt: fix RSS disable issue for thor-based adapters Ajit Khaparde
` (10 subsequent siblings)
23 siblings, 0 replies; 38+ messages in thread
From: Ajit Khaparde @ 2019-07-18 3:36 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, Kalesh AP, stable, Somnath Kotur
From: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
The HWRM response was parsed after releasing the spinlock.
Fixes: 19e6af01bb36 ("net/bnxt: support get/set EEPROM")
Cc: stable@dpdk.org
Signed-off-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
---
drivers/net/bnxt/bnxt_hwrm.c | 9 ++++-----
1 file changed, 4 insertions(+), 5 deletions(-)
diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c
index fda5c7c1b..672e9882a 100644
--- a/drivers/net/bnxt/bnxt_hwrm.c
+++ b/drivers/net/bnxt/bnxt_hwrm.c
@@ -3578,12 +3578,11 @@ int bnxt_hwrm_nvm_get_dir_info(struct bnxt *bp, uint32_t *entries,
rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
HWRM_CHECK_RESULT();
- HWRM_UNLOCK();
- if (!rc) {
- *entries = rte_le_to_cpu_32(resp->entries);
- *length = rte_le_to_cpu_32(resp->entry_length);
- }
+ *entries = rte_le_to_cpu_32(resp->entries);
+ *length = rte_le_to_cpu_32(resp->entry_length);
+
+ HWRM_UNLOCK();
return rc;
}
--
2.20.1 (Apple Git-117)
^ permalink raw reply [flat|nested] 38+ messages in thread
* [dpdk-dev] [PATCH 14/22] net/bnxt: fix RSS disable issue for thor-based adapters
2019-07-18 3:35 [dpdk-dev] [PATCH 00/22] bnxt patchset Ajit Khaparde
` (12 preceding siblings ...)
2019-07-18 3:36 ` [dpdk-dev] [PATCH 13/22] net/bnxt: fix lock release on getting NVM info Ajit Khaparde
@ 2019-07-18 3:36 ` Ajit Khaparde
2019-07-18 3:36 ` [dpdk-dev] [PATCH 15/22] net/bnxt: use correct RSS table sizes Ajit Khaparde
` (9 subsequent siblings)
23 siblings, 0 replies; 38+ messages in thread
From: Ajit Khaparde @ 2019-07-18 3:36 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, Santoshkumar Karanappa Rastapur, Lance Richardson
From: Santoshkumar Karanappa Rastapur <santosh.rastapur@broadcom.com>
In bnxt_hwrm_vnic_rss_cfg_thor, we were exiting if hash_type is 0.
This was preventing RSS from being disabled. Fix it by removing the
check for hash_type while configuring RSS.
Fixes: 38412304b50a ("net/bnxt: enable RSS for thor-based controllers")
Reviewed-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Signed-off-by: Santoshkumar Karanappa Rastapur <santosh.rastapur@broadcom.com>
---
drivers/net/bnxt/bnxt_hwrm.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c
index 672e9882a..6ace2c11d 100644
--- a/drivers/net/bnxt/bnxt_hwrm.c
+++ b/drivers/net/bnxt/bnxt_hwrm.c
@@ -1740,9 +1740,6 @@ bnxt_hwrm_vnic_rss_cfg_thor(struct bnxt *bp, struct bnxt_vnic_info *vnic)
struct hwrm_vnic_rss_cfg_input req = {.req_type = 0 };
struct hwrm_vnic_rss_cfg_output *resp = bp->hwrm_cmd_resp_addr;
- if (!(vnic->rss_table && vnic->hash_type))
- return 0;
-
HWRM_PREP(req, VNIC_RSS_CFG, BNXT_USE_CHIMP_MB);
req.vnic_id = rte_cpu_to_le_16(vnic->fw_vnic_id);
@@ -1777,6 +1774,9 @@ int bnxt_hwrm_vnic_rss_cfg(struct bnxt *bp,
struct hwrm_vnic_rss_cfg_input req = {.req_type = 0 };
struct hwrm_vnic_rss_cfg_output *resp = bp->hwrm_cmd_resp_addr;
+ if (!vnic->rss_table)
+ return 0;
+
if (BNXT_CHIP_THOR(bp))
return bnxt_hwrm_vnic_rss_cfg_thor(bp, vnic);
--
2.20.1 (Apple Git-117)
^ permalink raw reply [flat|nested] 38+ messages in thread
* [dpdk-dev] [PATCH 15/22] net/bnxt: use correct RSS table sizes
2019-07-18 3:35 [dpdk-dev] [PATCH 00/22] bnxt patchset Ajit Khaparde
` (13 preceding siblings ...)
2019-07-18 3:36 ` [dpdk-dev] [PATCH 14/22] net/bnxt: fix RSS disable issue for thor-based adapters Ajit Khaparde
@ 2019-07-18 3:36 ` Ajit Khaparde
2019-07-18 3:36 ` [dpdk-dev] [PATCH 16/22] net/bnxt: fully initialize hwrm msgs for thor RSS cfg Ajit Khaparde
` (8 subsequent siblings)
23 siblings, 0 replies; 38+ messages in thread
From: Ajit Khaparde @ 2019-07-18 3:36 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, Lance Richardson, Somnath Kotur
From: Lance Richardson <lance.richardson@broadcom.com>
RSS table size is variable with BCM57500-based adapters. Use correct
size when allocating memory for RSS state.
Fixes: 05375e6f58df ("net/bnxt: enable rss for thor-based adapters")
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
Signed-off-by: Lance Richardson <lance.richardson@broadcom.com>
---
drivers/net/bnxt/bnxt_vnic.c | 14 ++++++++------
1 file changed, 8 insertions(+), 6 deletions(-)
diff --git a/drivers/net/bnxt/bnxt_vnic.c b/drivers/net/bnxt/bnxt_vnic.c
index c652b8f03..98415633e 100644
--- a/drivers/net/bnxt/bnxt_vnic.c
+++ b/drivers/net/bnxt/bnxt_vnic.c
@@ -117,6 +117,7 @@ int bnxt_alloc_vnic_attributes(struct bnxt *bp)
const struct rte_memzone *mz;
char mz_name[RTE_MEMZONE_NAMESIZE];
uint32_t entry_length;
+ size_t rss_table_size;
uint16_t max_vnics;
int i;
rte_iova_t mz_phys_addr;
@@ -125,11 +126,12 @@ int bnxt_alloc_vnic_attributes(struct bnxt *bp)
BNXT_MAX_MC_ADDRS * RTE_ETHER_ADDR_LEN;
if (BNXT_CHIP_THOR(bp))
- entry_length += BNXT_RSS_TBL_SIZE_THOR *
- 2 * sizeof(*vnic->rss_table);
+ rss_table_size = BNXT_RSS_TBL_SIZE_THOR *
+ 2 * sizeof(*vnic->rss_table);
else
- entry_length += HW_HASH_INDEX_SIZE * sizeof(*vnic->rss_table);
- entry_length = RTE_CACHE_LINE_ROUNDUP(entry_length);
+ rss_table_size = HW_HASH_INDEX_SIZE * sizeof(*vnic->rss_table);
+
+ entry_length = RTE_CACHE_LINE_ROUNDUP(entry_length + rss_table_size);
max_vnics = bp->max_vnics;
snprintf(mz_name, RTE_MEMZONE_NAMESIZE,
@@ -170,10 +172,10 @@ int bnxt_alloc_vnic_attributes(struct bnxt *bp)
vnic->rss_table_dma_addr = mz_phys_addr + (entry_length * i);
vnic->rss_hash_key = (void *)((char *)vnic->rss_table +
- HW_HASH_INDEX_SIZE * sizeof(*vnic->rss_table));
+ rss_table_size);
vnic->rss_hash_key_dma_addr = vnic->rss_table_dma_addr +
- HW_HASH_INDEX_SIZE * sizeof(*vnic->rss_table);
+ rss_table_size;
vnic->mc_list = (void *)((char *)vnic->rss_hash_key +
HW_HASH_KEY_SIZE);
vnic->mc_list_dma_addr = vnic->rss_hash_key_dma_addr +
--
2.20.1 (Apple Git-117)
^ permalink raw reply [flat|nested] 38+ messages in thread
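The sizing change in this patch amounts to computing the chip-dependent RSS table size first and rounding the combined entry length up once. A minimal sketch of that arithmetic follows; the constant values are illustrative stand-ins for `BNXT_RSS_TBL_SIZE_THOR` and `HW_HASH_INDEX_SIZE`, not the driver's real values:

```c
#include <assert.h>
#include <stddef.h>

#define CACHE_LINE_SIZE    64
#define RSS_TBL_SIZE_THOR  512  /* illustrative stand-in */
#define HW_HASH_INDEX_SIZE 128  /* illustrative stand-in */

/* Round up to the next cache-line multiple, like RTE_CACHE_LINE_ROUNDUP(). */
static size_t cache_line_roundup(size_t len)
{
	return (len + CACHE_LINE_SIZE - 1) & ~(size_t)(CACHE_LINE_SIZE - 1);
}

/* Per-VNIC allocation size: fixed part plus a chip-dependent RSS table,
 * rounded up as a single unit so consecutive entries stay aligned. */
static size_t vnic_entry_length(int is_thor, size_t fixed_len,
				size_t rss_entry_size)
{
	size_t rss_table_size;

	if (is_thor)
		rss_table_size = RSS_TBL_SIZE_THOR * 2 * rss_entry_size;
	else
		rss_table_size = HW_HASH_INDEX_SIZE * rss_entry_size;

	return cache_line_roundup(fixed_len + rss_table_size);
}
```

Computing the table size into its own variable, as the patch does, also lets the hash-key and multicast-list offsets reuse the same value instead of repeating the non-Thor expression.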
* [dpdk-dev] [PATCH 16/22] net/bnxt: fully initialize hwrm msgs for thor RSS cfg
2019-07-18 3:35 [dpdk-dev] [PATCH 00/22] bnxt patchset Ajit Khaparde
` (14 preceding siblings ...)
2019-07-18 3:36 ` [dpdk-dev] [PATCH 15/22] net/bnxt: use correct RSS table sizes Ajit Khaparde
@ 2019-07-18 3:36 ` Ajit Khaparde
2019-07-18 3:36 ` [dpdk-dev] [PATCH 17/22] net/bnxt: use correct number of RSS contexts for thor Ajit Khaparde
` (7 subsequent siblings)
23 siblings, 0 replies; 38+ messages in thread
From: Ajit Khaparde @ 2019-07-18 3:36 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, Lance Richardson, Kalesh Anakkur Purayil
From: Lance Richardson <lance.richardson@broadcom.com>
Fully initialize HWRM messages for Thor RSS configuration
to avoid duplicate HWRM sequence numbers.
Fixes: 38412304b50a ("net/bnxt: enable RSS for thor-based controllers")
Reviewed-by: Kalesh Anakkur Purayil <kalesh-anakkur.purayil@broadcom.com>
Signed-off-by: Lance Richardson <lance.richardson@broadcom.com>
---
drivers/net/bnxt/bnxt_hwrm.c | 42 +++++++++++++++++-------------------
1 file changed, 20 insertions(+), 22 deletions(-)
diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c
index 6ace2c11d..a1fa37f16 100644
--- a/drivers/net/bnxt/bnxt_hwrm.c
+++ b/drivers/net/bnxt/bnxt_hwrm.c
@@ -1740,16 +1740,16 @@ bnxt_hwrm_vnic_rss_cfg_thor(struct bnxt *bp, struct bnxt_vnic_info *vnic)
struct hwrm_vnic_rss_cfg_input req = {.req_type = 0 };
struct hwrm_vnic_rss_cfg_output *resp = bp->hwrm_cmd_resp_addr;
- HWRM_PREP(req, VNIC_RSS_CFG, BNXT_USE_CHIMP_MB);
+ for (i = 0; i < nr_ctxs; i++) {
+ HWRM_PREP(req, VNIC_RSS_CFG, BNXT_USE_CHIMP_MB);
- req.vnic_id = rte_cpu_to_le_16(vnic->fw_vnic_id);
- req.hash_type = rte_cpu_to_le_32(vnic->hash_type);
- req.hash_mode_flags = vnic->hash_mode;
+ req.vnic_id = rte_cpu_to_le_16(vnic->fw_vnic_id);
+ req.hash_type = rte_cpu_to_le_32(vnic->hash_type);
+ req.hash_mode_flags = vnic->hash_mode;
- req.hash_key_tbl_addr =
- rte_cpu_to_le_64(vnic->rss_hash_key_dma_addr);
+ req.hash_key_tbl_addr =
+ rte_cpu_to_le_64(vnic->rss_hash_key_dma_addr);
- for (i = 0; i < nr_ctxs; i++) {
req.ring_grp_tbl_addr =
rte_cpu_to_le_64(vnic->rss_table_dma_addr +
i * HW_HASH_INDEX_SIZE);
@@ -1760,10 +1760,9 @@ bnxt_hwrm_vnic_rss_cfg_thor(struct bnxt *bp, struct bnxt_vnic_info *vnic)
BNXT_USE_CHIMP_MB);
HWRM_CHECK_RESULT();
+ HWRM_UNLOCK();
}
- HWRM_UNLOCK();
-
return rc;
}
@@ -4128,21 +4127,21 @@ bnxt_vnic_rss_configure_thor(struct bnxt *bp, struct bnxt_vnic_info *vnic)
int i, j, k, cnt;
int rc = 0;
- HWRM_PREP(req, VNIC_RSS_CFG, BNXT_USE_CHIMP_MB);
-
- req.vnic_id = rte_cpu_to_le_16(vnic->fw_vnic_id);
- req.hash_type = rte_cpu_to_le_32(vnic->hash_type);
- req.hash_mode_flags = vnic->hash_mode;
-
- req.ring_grp_tbl_addr =
- rte_cpu_to_le_64(vnic->rss_table_dma_addr);
- req.hash_key_tbl_addr =
- rte_cpu_to_le_64(vnic->rss_hash_key_dma_addr);
-
for (i = 0, k = 0; i < nr_ctxs; i++) {
struct bnxt_rx_ring_info *rxr;
struct bnxt_cp_ring_info *cpr;
+ HWRM_PREP(req, VNIC_RSS_CFG, BNXT_USE_CHIMP_MB);
+
+ req.vnic_id = rte_cpu_to_le_16(vnic->fw_vnic_id);
+ req.hash_type = rte_cpu_to_le_32(vnic->hash_type);
+ req.hash_mode_flags = vnic->hash_mode;
+
+ req.ring_grp_tbl_addr =
+ rte_cpu_to_le_64(vnic->rss_table_dma_addr);
+ req.hash_key_tbl_addr =
+ rte_cpu_to_le_64(vnic->rss_hash_key_dma_addr);
+
req.ring_table_pair_index = i;
req.rss_ctx_idx = rte_cpu_to_le_16(vnic->fw_grp_ids[i]);
@@ -4178,10 +4177,9 @@ bnxt_vnic_rss_configure_thor(struct bnxt *bp, struct bnxt_vnic_info *vnic)
BNXT_USE_CHIMP_MB);
HWRM_CHECK_RESULT();
+ HWRM_UNLOCK();
}
- HWRM_UNLOCK();
-
return rc;
}
--
2.20.1 (Apple Git-117)
^ permalink raw reply [flat|nested] 38+ messages in thread
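The reason `HWRM_PREP()` moves inside the loop is that each prepared message carries a fresh sequence number; a header prepared once before the loop would send the same sequence number for every command in the batch. A toy model of that invariant, using hypothetical simplified types rather than the actual bnxt macros:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical, simplified request header. */
struct hwrm_req {
	uint16_t req_type;
	uint16_t seq_id;
};

static uint16_t next_seq;

/* Like HWRM_PREP(): zero the message and stamp a fresh sequence number. */
static void hwrm_prep(struct hwrm_req *req, uint16_t type)
{
	memset(req, 0, sizeof(*req));
	req->req_type = type;
	req->seq_id = next_seq++;
}

/* Prepare one command and report the sequence number it was given. */
static uint16_t prep_and_get_seq(void)
{
	struct hwrm_req req;

	hwrm_prep(&req, 0x46 /* hypothetical opcode */);
	return req.seq_id;
}
```

With preparation inside the loop, two consecutive commands can never share a sequence number.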
* [dpdk-dev] [PATCH 17/22] net/bnxt: use correct number of RSS contexts for thor
2019-07-18 3:35 [dpdk-dev] [PATCH 00/22] bnxt patchset Ajit Khaparde
` (15 preceding siblings ...)
2019-07-18 3:36 ` [dpdk-dev] [PATCH 16/22] net/bnxt: fully initialize hwrm msgs for thor RSS cfg Ajit Khaparde
@ 2019-07-18 3:36 ` Ajit Khaparde
2019-07-18 3:36 ` [dpdk-dev] [PATCH 18/22] net/bnxt: pass correct RSS table address " Ajit Khaparde
` (6 subsequent siblings)
23 siblings, 0 replies; 38+ messages in thread
From: Ajit Khaparde @ 2019-07-18 3:36 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, Lance Richardson
From: Lance Richardson <lance.richardson@broadcom.com>
BCM57500-based adapters use a variable number of RSS contexts
depending upon the number of receive rings in use. The current
implementation erroneously uses the maximum possible number of
RSS contexts instead of the number actually allocated when
setting up RSS tables in the adapter. Fix this by using the
actual number of allocated contexts.
Fixes: 38412304b50a ("net/bnxt: enable RSS for thor-based controllers")
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Signed-off-by: Lance Richardson <lance.richardson@broadcom.com>
---
drivers/net/bnxt/bnxt_hwrm.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c
index a1fa37f16..301649d3b 100644
--- a/drivers/net/bnxt/bnxt_hwrm.c
+++ b/drivers/net/bnxt/bnxt_hwrm.c
@@ -4120,9 +4120,9 @@ bnxt_vnic_rss_configure_thor(struct bnxt *bp, struct bnxt_vnic_info *vnic)
struct hwrm_vnic_rss_cfg_output *resp = bp->hwrm_cmd_resp_addr;
uint8_t *rx_queue_state = bp->eth_dev->data->rx_queue_state;
struct hwrm_vnic_rss_cfg_input req = {.req_type = 0 };
- int nr_ctxs = bp->max_ring_grps;
struct bnxt_rx_queue **rxqs = bp->rx_queues;
uint16_t *ring_tbl = vnic->rss_table;
+ int nr_ctxs = vnic->num_lb_ctxts;
int max_rings = bp->rx_nr_rings;
int i, j, k, cnt;
int rc = 0;
--
2.20.1 (Apple Git-117)
^ permalink raw reply [flat|nested] 38+ messages in thread
* [dpdk-dev] [PATCH 18/22] net/bnxt: pass correct RSS table address for thor
2019-07-18 3:35 [dpdk-dev] [PATCH 00/22] bnxt patchset Ajit Khaparde
` (16 preceding siblings ...)
2019-07-18 3:36 ` [dpdk-dev] [PATCH 17/22] net/bnxt: use correct number of RSS contexts for thor Ajit Khaparde
@ 2019-07-18 3:36 ` Ajit Khaparde
2019-07-18 3:36 ` [dpdk-dev] [PATCH 19/22] net/bnxt: avoid overrun in get statistics Ajit Khaparde
` (5 subsequent siblings)
23 siblings, 0 replies; 38+ messages in thread
From: Ajit Khaparde @ 2019-07-18 3:36 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, Lance Richardson, Somnath Kotur
From: Lance Richardson <lance.richardson@broadcom.com>
The current implementation erroneously passes the address of the
beginning of the RSS table for each 64-entry context instead of the
address of the appropriate subtable for that context. As a result,
only the first 64 receive queues are used. Fix this by passing the
correct address for each context.
Fixes: 38412304b50a ("net/bnxt: enable RSS for thor-based controllers")
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
Signed-off-by: Lance Richardson <lance.richardson@broadcom.com>
---
drivers/net/bnxt/bnxt_hwrm.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c
index 301649d3b..3266c1bec 100644
--- a/drivers/net/bnxt/bnxt_hwrm.c
+++ b/drivers/net/bnxt/bnxt_hwrm.c
@@ -4138,7 +4138,9 @@ bnxt_vnic_rss_configure_thor(struct bnxt *bp, struct bnxt_vnic_info *vnic)
req.hash_mode_flags = vnic->hash_mode;
req.ring_grp_tbl_addr =
- rte_cpu_to_le_64(vnic->rss_table_dma_addr);
+ rte_cpu_to_le_64(vnic->rss_table_dma_addr +
+ i * BNXT_RSS_ENTRIES_PER_CTX_THOR *
+ 2 * sizeof(*ring_tbl));
req.hash_key_tbl_addr =
rte_cpu_to_le_64(vnic->rss_hash_key_dma_addr);
--
2.20.1 (Apple Git-117)
^ permalink raw reply [flat|nested] 38+ messages in thread
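The address fix is plain pointer arithmetic: context `i` gets the i-th 64-entry slice of the table rather than the table base. A sketch, where `ENTRIES_PER_CTX` stands in for `BNXT_RSS_ENTRIES_PER_CTX_THOR` and the factor of two mirrors the patch's `2 * sizeof(*ring_tbl)` per-slot layout:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define ENTRIES_PER_CTX 64  /* stand-in for BNXT_RSS_ENTRIES_PER_CTX_THOR */

typedef uint64_t iova_t;

/* DMA address of the RSS sub-table belonging to context ctx_idx.
 * Each context slot holds a pair of table entries, hence the 2x. */
static iova_t rss_ctx_table_addr(iova_t base, unsigned int ctx_idx,
				 size_t entry_size)
{
	return base + (iova_t)ctx_idx * ENTRIES_PER_CTX * 2 * entry_size;
}
```

With 16-bit table entries, consecutive contexts land 256 bytes apart instead of all pointing at `base`.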
* [dpdk-dev] [PATCH 19/22] net/bnxt: avoid overrun in get statistics
2019-07-18 3:35 [dpdk-dev] [PATCH 00/22] bnxt patchset Ajit Khaparde
` (17 preceding siblings ...)
2019-07-18 3:36 ` [dpdk-dev] [PATCH 18/22] net/bnxt: pass correct RSS table address " Ajit Khaparde
@ 2019-07-18 3:36 ` Ajit Khaparde
2019-07-18 3:36 ` [dpdk-dev] [PATCH 20/22] net/bnxt: fix MAC/VLAN filter allocation failure Ajit Khaparde
` (4 subsequent siblings)
23 siblings, 0 replies; 38+ messages in thread
From: Ajit Khaparde @ 2019-07-18 3:36 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, Lance Richardson, stable, Kalesh Anakkur Purayil
From: Lance Richardson <lance.richardson@broadcom.com>
Avoid overrunning the rte_eth_stats struct when the number of Tx/Rx
rings in use is greater than RTE_ETHDEV_QUEUE_STAT_CNTRS.
Fixes: 57d5e5bc86e4 ("net/bnxt: add statistics")
Cc: stable@dpdk.org
Reviewed-by: Kalesh Anakkur Purayil <kalesh-anakkur.purayil@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Signed-off-by: Lance Richardson <lance.richardson@broadcom.com>
---
drivers/net/bnxt/bnxt_stats.c | 11 +++++++++--
1 file changed, 9 insertions(+), 2 deletions(-)
diff --git a/drivers/net/bnxt/bnxt_stats.c b/drivers/net/bnxt/bnxt_stats.c
index 3cd5144ec..4e74f8a27 100644
--- a/drivers/net/bnxt/bnxt_stats.c
+++ b/drivers/net/bnxt/bnxt_stats.c
@@ -351,6 +351,7 @@ int bnxt_stats_get_op(struct rte_eth_dev *eth_dev,
int rc = 0;
unsigned int i;
struct bnxt *bp = eth_dev->data->dev_private;
+ unsigned int num_q_stats;
memset(bnxt_stats, 0, sizeof(*bnxt_stats));
if (!(bp->flags & BNXT_FLAG_INIT_DONE)) {
@@ -358,7 +359,10 @@ int bnxt_stats_get_op(struct rte_eth_dev *eth_dev,
return -1;
}
- for (i = 0; i < bp->rx_cp_nr_rings; i++) {
+ num_q_stats = RTE_MIN(bp->rx_cp_nr_rings,
+ (unsigned int)RTE_ETHDEV_QUEUE_STAT_CNTRS);
+
+ for (i = 0; i < num_q_stats; i++) {
struct bnxt_rx_queue *rxq = bp->rx_queues[i];
struct bnxt_cp_ring_info *cpr = rxq->cp_ring;
@@ -370,7 +374,10 @@ int bnxt_stats_get_op(struct rte_eth_dev *eth_dev,
rte_atomic64_read(&rxq->rx_mbuf_alloc_fail);
}
- for (i = 0; i < bp->tx_cp_nr_rings; i++) {
+ num_q_stats = RTE_MIN(bp->tx_cp_nr_rings,
+ (unsigned int)RTE_ETHDEV_QUEUE_STAT_CNTRS);
+
+ for (i = 0; i < num_q_stats; i++) {
struct bnxt_tx_queue *txq = bp->tx_queues[i];
struct bnxt_cp_ring_info *cpr = txq->cp_ring;
--
2.20.1 (Apple Git-117)
^ permalink raw reply [flat|nested] 38+ messages in thread
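The overrun fix boils down to clamping each loop bound with `RTE_MIN()` so the per-queue counters never index past the fixed-size arrays in `rte_eth_stats`. A sketch of the clamp, with an illustrative stand-in for `RTE_ETHDEV_QUEUE_STAT_CNTRS`:

```c
#include <assert.h>

#define QUEUE_STAT_CNTRS 16  /* stand-in for RTE_ETHDEV_QUEUE_STAT_CNTRS */

/* Number of per-queue stats slots we may safely fill: the smaller of
 * the ring count and the size of the per-queue stats arrays. */
static unsigned int clamp_num_q_stats(unsigned int nr_rings)
{
	return nr_rings < QUEUE_STAT_CNTRS ? nr_rings : QUEUE_STAT_CNTRS;
}
```

Rings beyond the clamp still carry traffic; they simply are not reported in the per-queue counters.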
* [dpdk-dev] [PATCH 20/22] net/bnxt: fix MAC/VLAN filter allocation failure
2019-07-18 3:35 [dpdk-dev] [PATCH 00/22] bnxt patchset Ajit Khaparde
` (18 preceding siblings ...)
2019-07-18 3:36 ` [dpdk-dev] [PATCH 19/22] net/bnxt: avoid overrun in get statistics Ajit Khaparde
@ 2019-07-18 3:36 ` Ajit Khaparde
2019-07-18 3:36 ` [dpdk-dev] [PATCH 21/22] net/bnxt: fix to correctly check result of HWRM command Ajit Khaparde
` (3 subsequent siblings)
23 siblings, 0 replies; 38+ messages in thread
From: Ajit Khaparde @ 2019-07-18 3:36 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, Santoshkumar Karanappa Rastapur, stable
From: Santoshkumar Karanappa Rastapur <santosh.rastapur@broadcom.com>
We were adding the VLAN filters to all the VNICs of the function, and
also adding these VLANs to all the existing MAC-only filters. This
resulted in fewer VLANs getting added. By default we should allocate
a MAC+VLAN filter only to the default VNIC of the function, using
the default MAC address.
Similar logic was followed in the VLAN deletion code; this patch fixes
that as well. Use the inner VLAN fields instead of the outer VLAN during
filter deletion to stay in sync with the VLAN addition code.
Fixes: 246c5cc5f0 ("net/bnxt: use correct flags during VLAN configuration")
Cc: stable@dpdk.org
Signed-off-by: Santoshkumar Karanappa Rastapur <santosh.rastapur@broadcom.com>
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
drivers/net/bnxt/bnxt.h | 1 +
drivers/net/bnxt/bnxt_ethdev.c | 191 +++++++++++++--------------------
2 files changed, 75 insertions(+), 117 deletions(-)
diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h
index 8bd8f536c..33edb1fcd 100644
--- a/drivers/net/bnxt/bnxt.h
+++ b/drivers/net/bnxt/bnxt.h
@@ -401,6 +401,7 @@ struct bnxt {
unsigned int nr_vnics;
+#define BNXT_GET_DEFAULT_VNIC(bp) (&(bp)->vnic_info[0])
struct bnxt_vnic_info *vnic_info;
STAILQ_HEAD(, bnxt_vnic_info) free_vnic_list;
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index dd127cd6f..87f069caa 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -1508,141 +1508,98 @@ bnxt_udp_tunnel_port_del_op(struct rte_eth_dev *eth_dev,
static int bnxt_del_vlan_filter(struct bnxt *bp, uint16_t vlan_id)
{
- struct bnxt_filter_info *filter, *temp_filter, *new_filter;
+ struct bnxt_filter_info *filter;
struct bnxt_vnic_info *vnic;
- unsigned int i;
int rc = 0;
- uint32_t chk = HWRM_CFA_L2_FILTER_ALLOC_INPUT_ENABLES_L2_OVLAN;
-
- /* Cycle through all VNICs */
- for (i = 0; i < bp->nr_vnics; i++) {
- /*
- * For each VNIC and each associated filter(s)
- * if VLAN exists && VLAN matches vlan_id
- * remove the MAC+VLAN filter
- * add a new MAC only filter
- * else
- * VLAN filter doesn't exist, just skip and continue
- */
- vnic = &bp->vnic_info[i];
- filter = STAILQ_FIRST(&vnic->filter);
- while (filter) {
- temp_filter = STAILQ_NEXT(filter, next);
+ uint32_t chk = HWRM_CFA_L2_FILTER_ALLOC_INPUT_ENABLES_L2_IVLAN;
- if (filter->enables & chk &&
- filter->l2_ovlan == vlan_id) {
- /* Must delete the filter */
- STAILQ_REMOVE(&vnic->filter, filter,
- bnxt_filter_info, next);
- bnxt_hwrm_clear_l2_filter(bp, filter);
- STAILQ_INSERT_TAIL(&bp->free_filter_list,
- filter, next);
+ /* if VLAN exists && VLAN matches vlan_id
+ * remove the MAC+VLAN filter
+ * add a new MAC only filter
+ * else
+ * VLAN filter doesn't exist, just skip and continue
+ */
+ vnic = BNXT_GET_DEFAULT_VNIC(bp);
+ filter = STAILQ_FIRST(&vnic->filter);
+ while (filter) {
+ /* Search for this matching MAC+VLAN filter */
+ if (filter->enables & chk && filter->l2_ivlan == vlan_id &&
+ !memcmp(filter->l2_addr,
+ bp->mac_addr,
+ RTE_ETHER_ADDR_LEN)) {
+ /* Delete the filter */
+ rc = bnxt_hwrm_clear_l2_filter(bp, filter);
+ if (rc)
+ return rc;
+ STAILQ_REMOVE(&vnic->filter, filter,
+ bnxt_filter_info, next);
+ STAILQ_INSERT_TAIL(&bp->free_filter_list, filter, next);
- /*
- * Need to examine to see if the MAC
- * filter already existed or not before
- * allocating a new one
- */
-
- new_filter = bnxt_alloc_filter(bp);
- if (!new_filter) {
- PMD_DRV_LOG(ERR,
- "MAC/VLAN filter alloc failed\n");
- rc = -ENOMEM;
- goto exit;
- }
- STAILQ_INSERT_TAIL(&vnic->filter,
- new_filter, next);
- /* Inherit MAC from previous filter */
- new_filter->mac_index =
- filter->mac_index;
- memcpy(new_filter->l2_addr, filter->l2_addr,
- RTE_ETHER_ADDR_LEN);
- /* MAC only filter */
- rc = bnxt_hwrm_set_l2_filter(bp,
- vnic->fw_vnic_id,
- new_filter);
- if (rc)
- goto exit;
- PMD_DRV_LOG(INFO,
- "Del Vlan filter for %d\n",
- vlan_id);
- }
- filter = temp_filter;
+ PMD_DRV_LOG(INFO,
+ "Del Vlan filter for %d\n",
+ vlan_id);
+ return rc;
}
+ filter = STAILQ_NEXT(filter, next);
}
-exit:
- return rc;
+ return -ENOENT;
}
static int bnxt_add_vlan_filter(struct bnxt *bp, uint16_t vlan_id)
{
- struct bnxt_filter_info *filter, *temp_filter, *new_filter;
+ struct bnxt_filter_info *filter;
struct bnxt_vnic_info *vnic;
- unsigned int i;
int rc = 0;
uint32_t en = HWRM_CFA_L2_FILTER_ALLOC_INPUT_ENABLES_L2_IVLAN |
HWRM_CFA_L2_FILTER_ALLOC_INPUT_ENABLES_L2_IVLAN_MASK;
uint32_t chk = HWRM_CFA_L2_FILTER_ALLOC_INPUT_ENABLES_L2_IVLAN;
- /* Cycle through all VNICs */
- for (i = 0; i < bp->nr_vnics; i++) {
- /*
- * For each VNIC and each associated filter(s)
- * if VLAN exists:
- * if VLAN matches vlan_id
- * VLAN filter already exists, just skip and continue
- * else
- * add a new MAC+VLAN filter
- * else
- * Remove the old MAC only filter
- * Add a new MAC+VLAN filter
- */
- vnic = &bp->vnic_info[i];
- filter = STAILQ_FIRST(&vnic->filter);
- while (filter) {
- temp_filter = STAILQ_NEXT(filter, next);
+ /* Implementation notes on the use of VNIC in this command:
+ *
+ * By default, these filters belong to default vnic for the function.
+ * Once these filters are set up, only destination VNIC can be modified.
+ * If the destination VNIC is not specified in this command,
+ * then the HWRM shall only create an l2 context id.
+ */
- if (filter->enables & chk) {
- if (filter->l2_ivlan == vlan_id)
- goto cont;
- } else {
- /* Must delete the MAC filter */
- STAILQ_REMOVE(&vnic->filter, filter,
- bnxt_filter_info, next);
- bnxt_hwrm_clear_l2_filter(bp, filter);
- filter->l2_ovlan = 0;
- STAILQ_INSERT_TAIL(&bp->free_filter_list,
- filter, next);
- }
- new_filter = bnxt_alloc_filter(bp);
- if (!new_filter) {
- PMD_DRV_LOG(ERR,
- "MAC/VLAN filter alloc failed\n");
- rc = -ENOMEM;
- goto exit;
- }
- STAILQ_INSERT_TAIL(&vnic->filter, new_filter, next);
- /* Inherit MAC from the previous filter */
- new_filter->mac_index = filter->mac_index;
- memcpy(new_filter->l2_addr, filter->l2_addr,
- RTE_ETHER_ADDR_LEN);
- /* MAC + VLAN ID filter */
- new_filter->l2_ivlan = vlan_id;
- new_filter->l2_ivlan_mask = 0xF000;
- new_filter->enables |= en;
- rc = bnxt_hwrm_set_l2_filter(bp,
- vnic->fw_vnic_id,
- new_filter);
- if (rc)
- goto exit;
- PMD_DRV_LOG(INFO,
- "Added Vlan filter for %d\n", vlan_id);
-cont:
- filter = temp_filter;
- }
+ vnic = BNXT_GET_DEFAULT_VNIC(bp);
+ filter = STAILQ_FIRST(&vnic->filter);
+ /* Check if the VLAN has already been added */
+ while (filter) {
+ if (filter->enables & chk && filter->l2_ivlan == vlan_id &&
+ !memcmp(filter->l2_addr, bp->mac_addr, RTE_ETHER_ADDR_LEN))
+ return -EEXIST;
+
+ filter = STAILQ_NEXT(filter, next);
}
-exit:
+
+ /* No match found. Alloc a fresh filter and issue the L2_FILTER_ALLOC
+ * command to create MAC+VLAN filter with the right flags, enables set.
+ */
+ filter = bnxt_alloc_filter(bp);
+ if (!filter) {
+ PMD_DRV_LOG(ERR,
+ "MAC/VLAN filter alloc failed\n");
+ return -ENOMEM;
+ }
+ /* MAC + VLAN ID filter */
+ filter->l2_ivlan = vlan_id;
+ filter->l2_ivlan_mask = 0x0FFF;
+ filter->enables |= en;
+ rc = bnxt_hwrm_set_l2_filter(bp, vnic->fw_vnic_id, filter);
+ if (rc) {
+ /* Free the newly allocated filter as we were
+ * not able to create the filter in hardware.
+ */
+ filter->fw_l2_filter_id = UINT64_MAX;
+ STAILQ_INSERT_TAIL(&bp->free_filter_list, filter, next);
+ return rc;
+ }
+
+ /* Add this new filter to the list */
+ STAILQ_INSERT_TAIL(&vnic->filter, filter, next);
+ PMD_DRV_LOG(INFO,
+ "Added Vlan filter for %d\n", vlan_id);
return rc;
}
--
2.20.1 (Apple Git-117)
^ permalink raw reply [flat|nested] 38+ messages in thread
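Both the new add and delete paths begin with the same walk over the default VNIC's filter list, matching on VLAN id plus the port's own MAC address. A standalone sketch of that lookup, with hypothetical simplified types (the driver uses STAILQ lists and `struct bnxt_filter_info`):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define ETHER_ADDR_LEN 6

/* Hypothetical, simplified filter node. */
struct filter {
	uint16_t vlan_id;
	uint8_t mac[ETHER_ADDR_LEN];
	struct filter *next;
};

/* Find a MAC+VLAN filter; NULL means not found. The add path would
 * treat a hit as -EEXIST, the delete path a miss as -ENOENT. */
static struct filter *find_mac_vlan_filter(struct filter *head,
					   const uint8_t *mac,
					   uint16_t vlan_id)
{
	struct filter *f;

	for (f = head; f != NULL; f = f->next)
		if (f->vlan_id == vlan_id &&
		    memcmp(f->mac, mac, ETHER_ADDR_LEN) == 0)
			return f;
	return NULL;
}

/* Tiny demo list: VLANs 100 and 200 on one MAC. */
static int demo_has_vlan(uint16_t vlan_id)
{
	static const uint8_t mac[ETHER_ADDR_LEN] = { 1, 2, 3, 4, 5, 6 };
	static struct filter f2 = { 200, { 1, 2, 3, 4, 5, 6 }, NULL };
	static struct filter f1 = { 100, { 1, 2, 3, 4, 5, 6 }, &f2 };

	return find_mac_vlan_filter(&f1, mac, vlan_id) != NULL;
}
```

Searching only the default VNIC's list keeps one filter per VLAN, instead of duplicating it across every VNIC as the old loop did.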
* [dpdk-dev] [PATCH 21/22] net/bnxt: fix to correctly check result of HWRM command
2019-07-18 3:35 [dpdk-dev] [PATCH 00/22] bnxt patchset Ajit Khaparde
` (19 preceding siblings ...)
2019-07-18 3:36 ` [dpdk-dev] [PATCH 20/22] net/bnxt: fix MAC/VLAN filter allocation failure Ajit Khaparde
@ 2019-07-18 3:36 ` Ajit Khaparde
2019-07-18 3:36 ` [dpdk-dev] [PATCH 22/22] net/bnxt: update HWRM API to version 1.10.0.91 Ajit Khaparde
` (2 subsequent siblings)
23 siblings, 0 replies; 38+ messages in thread
From: Ajit Khaparde @ 2019-07-18 3:36 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, Kalesh AP, stable, Somnath Kotur
From: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Use the HWRM_CHECK_RESULT macro to check the status of HWRM commands
instead of open-coding the error checks.
Fixes: 18c2854b96dd149 ("net/bnxt: configure a default VF VLAN")
Cc: stable@dpdk.org
Signed-off-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
---
drivers/net/bnxt/bnxt_hwrm.c | 9 +--------
1 file changed, 1 insertion(+), 8 deletions(-)
diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c
index 3266c1bec..67b5ff661 100644
--- a/drivers/net/bnxt/bnxt_hwrm.c
+++ b/drivers/net/bnxt/bnxt_hwrm.c
@@ -2888,14 +2888,7 @@ int bnxt_hwrm_func_qcfg_current_vf_vlan(struct bnxt *bp, int vf)
HWRM_PREP(req, FUNC_QCFG, BNXT_USE_CHIMP_MB);
req.fid = rte_cpu_to_le_16(bp->pf.vf_info[vf].fid);
rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
- if (rc) {
- PMD_DRV_LOG(ERR, "hwrm_func_qcfg failed rc:%d\n", rc);
- return -1;
- } else if (resp->error_code) {
- rc = rte_le_to_cpu_16(resp->error_code);
- PMD_DRV_LOG(ERR, "hwrm_func_qcfg error %d\n", rc);
- return -1;
- }
+ HWRM_CHECK_RESULT();
rc = rte_le_to_cpu_16(resp->vlan);
HWRM_UNLOCK();
--
2.20.1 (Apple Git-117)
^ permalink raw reply [flat|nested] 38+ messages in thread
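The consolidation above replaces open-coded status handling with a single macro. A sketch of what such a check macro does, with hypothetical simplified types (the real `HWRM_CHECK_RESULT` also handles locking, logging, and HWRM-specific error codes):

```c
#include <assert.h>

/* Hypothetical, simplified response layout. */
struct resp {
	int error_code;
};

/* Bail out on either a transport failure (rc) or a firmware-reported
 * error in the response. */
#define CHECK_RESULT(rc, resp) do {             \
	if (rc)                                 \
		return rc;                      \
	if ((resp)->error_code)                 \
		return -(resp)->error_code;     \
} while (0)

/* Model of a command path: send, then check both failure sources. */
static int run_cmd(int send_rc, int fw_err)
{
	struct resp r = { fw_err };

	CHECK_RESULT(send_rc, &r);
	return 0;
}
```

Centralizing the check also makes every caller return consistent error values, which the earlier "fix to return standard error codes" patch in this series relies on.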
* [dpdk-dev] [PATCH 22/22] net/bnxt: update HWRM API to version 1.10.0.91
2019-07-18 3:35 [dpdk-dev] [PATCH 00/22] bnxt patchset Ajit Khaparde
` (20 preceding siblings ...)
2019-07-18 3:36 ` [dpdk-dev] [PATCH 21/22] net/bnxt: fix to correctly check result of HWRM command Ajit Khaparde
@ 2019-07-18 3:36 ` Ajit Khaparde
2019-07-19 12:30 ` [dpdk-dev] [PATCH 00/22] bnxt patchset Ferruh Yigit
2019-07-19 21:01 ` Ferruh Yigit
23 siblings, 0 replies; 38+ messages in thread
From: Ajit Khaparde @ 2019-07-18 3:36 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit
Update HWRM API to version 1.10.0.91
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
drivers/net/bnxt/hsi_struct_def_dpdk.h | 1283 ++++++++++++++++++++----
1 file changed, 1100 insertions(+), 183 deletions(-)
diff --git a/drivers/net/bnxt/hsi_struct_def_dpdk.h b/drivers/net/bnxt/hsi_struct_def_dpdk.h
index c64cd8609..79c1f3d7c 100644
--- a/drivers/net/bnxt/hsi_struct_def_dpdk.h
+++ b/drivers/net/bnxt/hsi_struct_def_dpdk.h
@@ -72,10 +72,8 @@ struct hwrm_resp_hdr {
#define TLV_TYPE_QUERY_ROCE_CC_GEN1 UINT32_C(0x4)
/* RoCE slow path command to modify CC Gen1 support. */
#define TLV_TYPE_MODIFY_ROCE_CC_GEN1 UINT32_C(0x5)
-/* Engine CKV - The device's serial number. */
-#define TLV_TYPE_ENGINE_CKV_DEVICE_SERIAL_NUMBER UINT32_C(0x8001)
-/* Engine CKV - Per-function random nonce data. */
-#define TLV_TYPE_ENGINE_CKV_NONCE UINT32_C(0x8002)
+/* Engine CKV - The Alias key EC curve and ECC public key information. */
+#define TLV_TYPE_ENGINE_CKV_ALIAS_ECC_PUBLIC_KEY UINT32_C(0x8001)
/* Engine CKV - Initialization vector. */
#define TLV_TYPE_ENGINE_CKV_IV UINT32_C(0x8003)
/* Engine CKV - Authentication tag. */
@@ -84,12 +82,14 @@ struct hwrm_resp_hdr {
#define TLV_TYPE_ENGINE_CKV_CIPHERTEXT UINT32_C(0x8005)
/* Engine CKV - Supported algorithms. */
#define TLV_TYPE_ENGINE_CKV_ALGORITHMS UINT32_C(0x8006)
-/* Engine CKV - The EC curve name and ECC public key information. */
-#define TLV_TYPE_ENGINE_CKV_ECC_PUBLIC_KEY UINT32_C(0x8007)
+/* Engine CKV - The Host EC curve name and ECC public key information. */
+#define TLV_TYPE_ENGINE_CKV_HOST_ECC_PUBLIC_KEY UINT32_C(0x8007)
/* Engine CKV - The ECDSA signature. */
#define TLV_TYPE_ENGINE_CKV_ECDSA_SIGNATURE UINT32_C(0x8008)
+/* Engine CKV - The SRT EC curve name and ECC public key information. */
+#define TLV_TYPE_ENGINE_CKV_SRT_ECC_PUBLIC_KEY UINT32_C(0x8009)
#define TLV_TYPE_LAST \
- TLV_TYPE_ENGINE_CKV_ECDSA_SIGNATURE
+ TLV_TYPE_ENGINE_CKV_SRT_ECC_PUBLIC_KEY
/* tlv (size:64b/8B) */
@@ -386,6 +386,10 @@ struct cmd_nums {
#define HWRM_FW_QSTATUS UINT32_C(0xc1)
#define HWRM_FW_HEALTH_CHECK UINT32_C(0xc2)
#define HWRM_FW_SYNC UINT32_C(0xc3)
+ #define HWRM_FW_STATE_BUFFER_QCAPS UINT32_C(0xc4)
+ #define HWRM_FW_STATE_QUIESCE UINT32_C(0xc5)
+ #define HWRM_FW_STATE_BACKUP UINT32_C(0xc6)
+ #define HWRM_FW_STATE_RESTORE UINT32_C(0xc7)
/* Experimental */
#define HWRM_FW_SET_TIME UINT32_C(0xc8)
/* Experimental */
@@ -497,8 +501,6 @@ struct cmd_nums {
#define HWRM_CFA_ADV_FLOW_MGNT_QCAPS UINT32_C(0x124)
/* Experimental */
#define HWRM_CFA_TFLIB UINT32_C(0x125)
- /* Engine CKV - Ping the device and SRT firmware to get the public key. */
- #define HWRM_ENGINE_CKV_HELLO UINT32_C(0x12d)
/* Engine CKV - Get the current allocation status of keys provisioned in the key vault. */
#define HWRM_ENGINE_CKV_STATUS UINT32_C(0x12e)
/* Engine CKV - Add a new CKEK used to encrypt keys. */
@@ -589,6 +591,8 @@ struct cmd_nums {
#define HWRM_FUNC_VF_BW_CFG UINT32_C(0x195)
/* Queries the BW of any VF */
#define HWRM_FUNC_VF_BW_QCFG UINT32_C(0x196)
+ /* Queries pf ids belong to specified host(s) */
+ #define HWRM_FUNC_HOST_PF_IDS_QUERY UINT32_C(0x197)
/* Experimental */
#define HWRM_SELFTEST_QLIST UINT32_C(0x200)
/* Experimental */
@@ -835,8 +839,8 @@ struct hwrm_err_output {
#define HWRM_VERSION_MINOR 10
#define HWRM_VERSION_UPDATE 0
/* non-zero means beta version */
-#define HWRM_VERSION_RSVD 74
-#define HWRM_VERSION_STR "1.10.0.74"
+#define HWRM_VERSION_RSVD 91
+#define HWRM_VERSION_STR "1.10.0.91"
/****************
* hwrm_ver_get *
@@ -1211,7 +1215,13 @@ struct hwrm_ver_get_output {
*/
uint8_t flags;
/*
- * If set to 1, device is not ready.
+ * If set to 1, it will indicate to host drivers that firmware is
+ * not ready to start full blown HWRM commands. Host drivers should
+ * re-try HWRM_VER_GET with some timeout period. The timeout period
+ * can be selected up to 5 seconds.
+ * For Example, PCIe hot-plug:
+ * Hot plug timing is system dependent. It generally takes up to
+ * 600 miliseconds for firmware to clear DEV_NOT_RDY flag.
* If set to 0, device is ready to accept all HWRM commands.
*/
#define HWRM_VER_GET_OUTPUT_FLAGS_DEV_NOT_RDY UINT32_C(0x1)
@@ -2909,6 +2919,10 @@ struct rx_pkt_cmpl_hi {
#define RX_PKT_CMPL_REORDER_SFT 0
} __attribute__((packed));
+/*
+ * This TPA completion structure is used on devices where the
+ * `hwrm_vnic_qcaps.max_aggs_supported` value is 0.
+ */
/* rx_tpa_start_cmpl (size:128b/16B) */
struct rx_tpa_start_cmpl {
uint16_t flags_type;
@@ -3070,7 +3084,12 @@ struct rx_tpa_start_cmpl {
uint32_t rss_hash;
} __attribute__((packed));
-/* Last 16 bytes of rx_tpq_start_cmpl. */
+/*
+ * Last 16 bytes of rx_tpa_start_cmpl.
+ *
+ * This TPA completion structure is used on devices where the
+ * `hwrm_vnic_qcaps.max_aggs_supported` value is 0.
+ */
/* rx_tpa_start_cmpl_hi (size:128b/16B) */
struct rx_tpa_start_cmpl_hi {
uint32_t flags2;
@@ -3079,34 +3098,29 @@ struct rx_tpa_start_cmpl_hi {
* inner packet and that the sum passed for all segments
* included in the aggregation.
*/
- #define RX_TPA_START_CMPL_FLAGS2_IP_CS_CALC \
- UINT32_C(0x1)
+ #define RX_TPA_START_CMPL_FLAGS2_IP_CS_CALC UINT32_C(0x1)
/*
* This indicates that the TCP, UDP or ICMP checksum was
* calculated for the inner packet and that the sum passed
* for all segments included in the aggregation.
*/
- #define RX_TPA_START_CMPL_FLAGS2_L4_CS_CALC \
- UINT32_C(0x2)
+ #define RX_TPA_START_CMPL_FLAGS2_L4_CS_CALC UINT32_C(0x2)
/*
* This indicates that the ip checksum was calculated for the
* tunnel header and that the sum passed for all segments
* included in the aggregation.
*/
- #define RX_TPA_START_CMPL_FLAGS2_T_IP_CS_CALC \
- UINT32_C(0x4)
+ #define RX_TPA_START_CMPL_FLAGS2_T_IP_CS_CALC UINT32_C(0x4)
/*
* This indicates that the UDP checksum was
* calculated for the tunnel packet and that the sum passed for
* all segments included in the aggregation.
*/
- #define RX_TPA_START_CMPL_FLAGS2_T_L4_CS_CALC \
- UINT32_C(0x8)
+ #define RX_TPA_START_CMPL_FLAGS2_T_L4_CS_CALC UINT32_C(0x8)
/* This value indicates what format the metadata field is. */
- #define RX_TPA_START_CMPL_FLAGS2_META_FORMAT_MASK \
- UINT32_C(0xf0)
- #define RX_TPA_START_CMPL_FLAGS2_META_FORMAT_SFT 4
- /* No metadata informtaion. Value is zero. */
+ #define RX_TPA_START_CMPL_FLAGS2_META_FORMAT_MASK UINT32_C(0xf0)
+ #define RX_TPA_START_CMPL_FLAGS2_META_FORMAT_SFT 4
+ /* No metadata information. Value is zero. */
#define RX_TPA_START_CMPL_FLAGS2_META_FORMAT_NONE \
(UINT32_C(0x0) << 4)
/*
@@ -3118,71 +3132,13 @@ struct rx_tpa_start_cmpl_hi {
*/
#define RX_TPA_START_CMPL_FLAGS2_META_FORMAT_VLAN \
(UINT32_C(0x1) << 4)
- /*
- * If ext_meta_format is equal to 1, the metadata field
- * contains the lower 16b of the tunnel ID value, justified
- * to LSB
- * - VXLAN = VNI[23:0] -> VXLAN Network ID
- * - Geneve (NGE) = VNI[23:0] a-> Virtual Network Identifier.
- * - NVGRE = TNI[23:0] -> Tenant Network ID
- * - GRE = KEY[31:0 -> key fieled with bit mask. zero if K = 0
- * - IPV4 = 0 (not populated)
- * - IPV6 = Flow Label[19:0]
- * - PPPoE = sessionID[15:0]
- * - MPLs = Outer label[19:0]
- * - UPAR = Selected[31:0] with bit mask
- */
- #define RX_TPA_START_CMPL_FLAGS2_META_FORMAT_TUNNEL_ID \
- (UINT32_C(0x2) << 4)
- /*
- * if ext_meta_format is equal to 1, metadata field contains
- * 16b metadata from the prepended header (chdr_data).
- */
- #define RX_TPA_START_CMPL_FLAGS2_META_FORMAT_CHDR_DATA \
- (UINT32_C(0x3) << 4)
- /*
- * If ext_meta_format is equal to 1, the metadata field contains
- * the outer_l3_offset, inner_l2_offset, inner_l3_offset and
- * inner_l4_size.
- * - metadata[8:0] contains the outer_l3_offset.
- * - metadata[17:9] contains the inner_l2_offset.
- * - metadata[26:18] contains the inner_l3_offset.
- * - metadata[31:27] contains the inner_l4_size.
- */
- #define RX_TPA_START_CMPL_FLAGS2_META_FORMAT_HDR_OFFSET \
- (UINT32_C(0x4) << 4)
#define RX_TPA_START_CMPL_FLAGS2_META_FORMAT_LAST \
- RX_TPA_START_CMPL_FLAGS2_META_FORMAT_HDR_OFFSET
+ RX_TPA_START_CMPL_FLAGS2_META_FORMAT_VLAN
/*
* This field indicates the IP type for the inner-most IP header.
* A value of '0' indicates IPv4. A value of '1' indicates IPv6.
*/
- #define RX_TPA_START_CMPL_FLAGS2_IP_TYPE \
- UINT32_C(0x100)
- /*
- * This indicates that the complete 1's complement checksum was
- * calculated for the packet.
- */
- #define RX_TPA_START_CMPL_FLAGS2_COMPLETE_CHECKSUM_CALC \
- UINT32_C(0x200)
- /*
- * The combination of this value and meta_format indicated what
- * format the metadata field is.
- */
- #define RX_TPA_START_CMPL_FLAGS2_EXT_META_FORMAT_MASK \
- UINT32_C(0xc00)
- #define RX_TPA_START_CMPL_FLAGS2_EXT_META_FORMAT_SFT 10
- /*
- * This value is the complete 1's complement checksum calculated from
- * the start of the outer L3 header to the end of the packet (not
- * including the ethernet crc). It is valid when the
- * 'complete_checksum_calc' flag is set. For TPA Start completions,
- * the complete checksum is calculated for the first packet in the
- * aggregation only.
- */
- #define RX_TPA_START_CMPL_FLAGS2_COMPLETE_CHECKSUM_MASK \
- UINT32_C(0xffff0000)
- #define RX_TPA_START_CMPL_FLAGS2_COMPLETE_CHECKSUM_SFT 16
+ #define RX_TPA_START_CMPL_FLAGS2_IP_TYPE UINT32_C(0x100)
/*
* This is data from the CFA block as indicated by the meta_format
* field.
@@ -3199,41 +3155,13 @@ struct rx_tpa_start_cmpl_hi {
/* When meta_format=1, this value is the VLAN TPID. */
#define RX_TPA_START_CMPL_METADATA_TPID_MASK UINT32_C(0xffff0000)
#define RX_TPA_START_CMPL_METADATA_TPID_SFT 16
- uint16_t errors_v2;
+ uint16_t v2;
/*
* This value is written by the NIC such that it will be different
* for each pass through the completion queue. The even passes
* will write 1. The odd passes will write 0.
*/
- #define RX_TPA_START_CMPL_V2 UINT32_C(0x1)
- #define RX_TPA_START_CMPL_ERRORS_MASK \
- UINT32_C(0xfffe)
- #define RX_TPA_START_CMPL_ERRORS_SFT 1
- /*
- * This error indicates that there was some sort of problem with
- * the BDs for the packet that was found after part of the
- * packet was already placed. The packet should be treated as
- * invalid.
- */
- #define RX_TPA_START_CMPL_ERRORS_BUFFER_ERROR_MASK UINT32_C(0xe)
- #define RX_TPA_START_CMPL_ERRORS_BUFFER_ERROR_SFT 1
- /* No buffer error */
- #define RX_TPA_START_CMPL_ERRORS_BUFFER_ERROR_NO_BUFFER \
- (UINT32_C(0x0) << 1)
- /*
- * Bad Format:
- * BDs were not formatted correctly.
- */
- #define RX_TPA_START_CMPL_ERRORS_BUFFER_ERROR_BAD_FORMAT \
- (UINT32_C(0x3) << 1)
- /*
- * Flush:
- * There was a bad_format error on the previous operation
- */
- #define RX_TPA_START_CMPL_ERRORS_BUFFER_ERROR_FLUSH \
- (UINT32_C(0x5) << 1)
- #define RX_TPA_START_CMPL_ERRORS_BUFFER_ERROR_LAST \
- RX_TPA_START_CMPL_ERRORS_BUFFER_ERROR_FLUSH
+ #define RX_TPA_START_CMPL_V2 UINT32_C(0x1)
/*
* This field identifies the CFA action rule that was used for this
* packet.
@@ -3273,6 +3201,10 @@ struct rx_tpa_start_cmpl_hi {
#define RX_TPA_START_CMPL_INNER_L4_SIZE_SFT 27
} __attribute__((packed));
+/*
+ * This TPA completion structure is used on devices where the
+ * `hwrm_vnic_qcaps.max_aggs_supported` value is 0.
+ */
/* rx_tpa_end_cmpl (size:128b/16B) */
struct rx_tpa_end_cmpl {
uint16_t flags_type;
@@ -3426,35 +3358,21 @@ struct rx_tpa_end_cmpl {
uint32_t tsdelta;
} __attribute__((packed));
-/* Last 16 bytes of rx_tpa_end_cmpl. */
+/*
+ * Last 16 bytes of rx_tpa_end_cmpl.
+ *
+ * This TPA completion structure is used on devices where the
+ * `hwrm_vnic_qcaps.max_aggs_supported` value is 0.
+ */
/* rx_tpa_end_cmpl_hi (size:128b/16B) */
struct rx_tpa_end_cmpl_hi {
- /*
- * This value is the number of duplicate ACKs that have been
- * received as part of the TPA operation.
- */
- uint16_t tpa_dup_acks;
+ uint32_t tpa_dup_acks;
/*
* This value is the number of duplicate ACKs that have been
* received as part of the TPA operation.
*/
#define RX_TPA_END_CMPL_TPA_DUP_ACKS_MASK UINT32_C(0xf)
#define RX_TPA_END_CMPL_TPA_DUP_ACKS_SFT 0
- /*
- * This value indicated the offset in bytes from the beginning of
- * the packet where the inner payload starts. This value is valid
- * for TCP, UDP, FCoE and RoCE packets
- */
- uint8_t payload_offset;
- /*
- * The value is the total number of aggregation buffers that were
- * used in the TPA operation. All TPA aggregation buffer completions
- * precede the TPA End completion. If the value is zero, then the
- * aggregation is completely contained in the buffer space provided
- * in the aggregation start completion.
- * Note that the field is simply provided as a cross check.
- */
- uint8_t tpa_agg_bufs;
/*
* This value is the valid when TPA completion is active. It
* indicates the length of the longest segment of the TPA operation
@@ -3474,19 +3392,632 @@ struct rx_tpa_end_cmpl_hi {
* for each pass through the completion queue. The even passes
* will write 1. The odd passes will write 0.
*/
- #define RX_TPA_END_CMPL_V2 UINT32_C(0x1)
- #define RX_TPA_END_CMPL_ERRORS_MASK UINT32_C(0xfffe)
- #define RX_TPA_END_CMPL_ERRORS_SFT 1
+ #define RX_TPA_END_CMPL_V2 UINT32_C(0x1)
+ #define RX_TPA_END_CMPL_ERRORS_MASK UINT32_C(0xfffe)
+ #define RX_TPA_END_CMPL_ERRORS_SFT 1
+ /*
+ * This error indicates that there was some sort of problem with
+ * the BDs for the packet that was found after part of the
+ * packet was already placed. The packet should be treated as
+ * invalid.
+ */
+ #define RX_TPA_END_CMPL_ERRORS_BUFFER_ERROR_MASK UINT32_C(0xe)
+ #define RX_TPA_END_CMPL_ERRORS_BUFFER_ERROR_SFT 1
+ /*
+ * This error occurs when there is a fatal HW problem in
+ * the chip only. It indicates that there were no
+ * BDs on chip but that there was adequate reservation
+ * provided by the TPA block.
+ */
+ #define RX_TPA_END_CMPL_ERRORS_BUFFER_ERROR_NOT_ON_CHIP \
+ (UINT32_C(0x2) << 1)
+ /*
+ * This error occurs when TPA block was not configured to
+ * reserve adequate BDs for TPA operations on this RX
+ * ring. All data for the TPA operation was not placed.
+ *
+ * This error can also be generated when the number of
+ * segments is not programmed correctly in TPA and the
+ * 33 total aggregation buffers allowed for the TPA
+ * operation has been exceeded.
+ */
+ #define RX_TPA_END_CMPL_ERRORS_BUFFER_ERROR_RSV_ERROR \
+ (UINT32_C(0x4) << 1)
+ #define RX_TPA_END_CMPL_ERRORS_BUFFER_ERROR_LAST \
+ RX_TPA_END_CMPL_ERRORS_BUFFER_ERROR_RSV_ERROR
+ /* unused5 is 16 b */
+ uint16_t unused_4;
+ /*
+ * This is the opaque value that was completed for the TPA start
+ * completion that corresponds to this TPA end completion.
+ */
+ uint32_t start_opaque;
+} __attribute__((packed));
+
+/*
+ * This TPA completion structure is used on devices where the
+ * `hwrm_vnic_qcaps.max_aggs_supported` value is greater than 0.
+ */
+/* rx_tpa_v2_start_cmpl (size:128b/16B) */
+struct rx_tpa_v2_start_cmpl {
+ uint16_t flags_type;
+ /*
+ * This field indicates the exact type of the completion.
+ * By convention, the LSB identifies the length of the
+ * record in 16B units. Even values indicate 16B
+ * records. Odd values indicate 32B
+ * records.
+ */
+ #define RX_TPA_V2_START_CMPL_TYPE_MASK \
+ UINT32_C(0x3f)
+ #define RX_TPA_V2_START_CMPL_TYPE_SFT 0
+ /*
+ * RX L2 TPA Start Completion:
+ * Completion at the beginning of a TPA operation.
+ * Length = 32B
+ */
+ #define RX_TPA_V2_START_CMPL_TYPE_RX_TPA_START \
+ UINT32_C(0x13)
+ #define RX_TPA_V2_START_CMPL_TYPE_LAST \
+ RX_TPA_V2_START_CMPL_TYPE_RX_TPA_START
+ #define RX_TPA_V2_START_CMPL_FLAGS_MASK \
+ UINT32_C(0xffc0)
+ #define RX_TPA_V2_START_CMPL_FLAGS_SFT 6
+ /* This bit will always be '0' for TPA start completions. */
+ #define RX_TPA_V2_START_CMPL_FLAGS_ERROR \
+ UINT32_C(0x40)
+ /* This field indicates how the packet was placed in the buffer. */
+ #define RX_TPA_V2_START_CMPL_FLAGS_PLACEMENT_MASK \
+ UINT32_C(0x380)
+ #define RX_TPA_V2_START_CMPL_FLAGS_PLACEMENT_SFT 7
+ /*
+ * Jumbo:
+ * TPA Packet was placed using jumbo algorithm. This means
+ * that the first buffer will be filled with data before
+ * moving to aggregation buffers. Each aggregation buffer
+ * will be filled before moving to the next aggregation
+ * buffer.
+ */
+ #define RX_TPA_V2_START_CMPL_FLAGS_PLACEMENT_JUMBO \
+ (UINT32_C(0x1) << 7)
+ /*
+ * Header/Data Separation:
+ * Packet was placed using Header/Data separation algorithm.
+ * The separation location is indicated by the itype field.
+ */
+ #define RX_TPA_V2_START_CMPL_FLAGS_PLACEMENT_HDS \
+ (UINT32_C(0x2) << 7)
+ /*
+ * GRO/Jumbo:
+ * Packet will be placed using GRO/Jumbo where the first
+ * packet is filled with data. Subsequent packets will be
+ * placed such that any one packet does not span two
+ * aggregation buffers unless it starts at the beginning of
+ * an aggregation buffer.
+ */
+ #define RX_TPA_V2_START_CMPL_FLAGS_PLACEMENT_GRO_JUMBO \
+ (UINT32_C(0x5) << 7)
+ /*
+ * GRO/Header-Data Separation:
+ * Packet will be placed using GRO/HDS where the header
+ * is in the first packet.
+ * Payload of each packet will be
+ * placed such that any one packet does not span two
+ * aggregation buffers unless it starts at the beginning of
+ * an aggregation buffer.
+ */
+ #define RX_TPA_V2_START_CMPL_FLAGS_PLACEMENT_GRO_HDS \
+ (UINT32_C(0x6) << 7)
+ #define RX_TPA_V2_START_CMPL_FLAGS_PLACEMENT_LAST \
+ RX_TPA_V2_START_CMPL_FLAGS_PLACEMENT_GRO_HDS
+ /* This bit is '1' if the RSS field in this completion is valid. */
+ #define RX_TPA_V2_START_CMPL_FLAGS_RSS_VALID \
+ UINT32_C(0x400)
+ /*
+ * For devices that support timestamps, when this bit is cleared the
+ * `inner_l4_size_inner_l3_offset_inner_l2_offset_outer_l3_offset`
+ * field contains the 32b timestamp for
+ * the packet from the MAC. When this bit is set, the
+ * `inner_l4_size_inner_l3_offset_inner_l2_offset_outer_l3_offset`
+ * field contains the outer_l3_offset, inner_l2_offset,
+ * inner_l3_offset, and inner_l4_size.
+ */
+ #define RX_TPA_V2_START_CMPL_FLAGS_TIMESTAMP_FLD_FORMAT \
+ UINT32_C(0x800)
+ /*
+ * This value indicates what the inner packet determined for the
+ * packet was.
+ */
+ #define RX_TPA_V2_START_CMPL_FLAGS_ITYPE_MASK \
+ UINT32_C(0xf000)
+ #define RX_TPA_V2_START_CMPL_FLAGS_ITYPE_SFT 12
+ /*
+ * TCP Packet:
+ * Indicates that the packet was IP and TCP.
+ */
+ #define RX_TPA_V2_START_CMPL_FLAGS_ITYPE_TCP \
+ (UINT32_C(0x2) << 12)
+ #define RX_TPA_V2_START_CMPL_FLAGS_ITYPE_LAST \
+ RX_TPA_V2_START_CMPL_FLAGS_ITYPE_TCP
+ /*
+ * This value indicates the amount of packet data written to the
+ * buffer the opaque field in this completion corresponds to.
+ */
+ uint16_t len;
+ /*
+ * This is a copy of the opaque field from the RX BD this completion
+ * corresponds to.
+ */
+ uint32_t opaque;
+ /*
+ * This value is written by the NIC such that it will be different
+ * for each pass through the completion queue. The even passes
+ * will write 1. The odd passes will write 0.
+ */
+ uint8_t v1;
+ /*
+ * This value is written by the NIC such that it will be different
+ * for each pass through the completion queue. The even passes
+ * will write 1. The odd passes will write 0.
+ */
+ #define RX_TPA_V2_START_CMPL_V1 UINT32_C(0x1)
+ #define RX_TPA_V2_START_CMPL_LAST RX_TPA_V2_START_CMPL_V1
+ /*
+ * This is the RSS hash type for the packet. The value is packed
+ * {tuple_extract_op[1:0],rss_profile_id[4:0],tuple_extract_op[2]}.
+ *
+ * The value of tuple_extract_op provides the information about
+ * what fields the hash was computed on.
+ * * 0: The RSS hash was computed over source IP address,
+ * destination IP address, source port, and destination port of inner
+ * IP and TCP or UDP headers. Note: For non-tunneled packets,
+ * the packet headers are considered inner packet headers for the RSS
+ * hash computation purpose.
+ * * 1: The RSS hash was computed over source IP address and destination
+ * IP address of inner IP header. Note: For non-tunneled packets,
+ * the packet headers are considered inner packet headers for the RSS
+ * hash computation purpose.
+ * * 2: The RSS hash was computed over source IP address,
+ * destination IP address, source port, and destination port of
+ * IP and TCP or UDP headers of outer tunnel headers.
+ * Note: For non-tunneled packets, this value is not applicable.
+ * * 3: The RSS hash was computed over source IP address and
+ * destination IP address of IP header of outer tunnel headers.
+ * Note: For non-tunneled packets, this value is not applicable.
+ *
+ * Note that 4-tuples values listed above are applicable
+ * for layer 4 protocols supported and enabled for RSS in the hardware,
+ * HWRM firmware, and drivers. For example, if RSS hash is supported and
+ * enabled for TCP traffic only, then the values of tuple_extract_op
+ * corresponding to 4-tuples are only valid for TCP traffic.
+ */
+ uint8_t rss_hash_type;
+ /*
+ * This is the aggregation ID that the completion is associated
+ * with. Use this number to correlate the TPA start completion
+ * with the TPA end completion.
+ */
+ uint16_t agg_id;
+ /*
+ * This value is the RSS hash value calculated for the packet
+ * based on the mode bits and key value in the VNIC.
+ */
+ uint32_t rss_hash;
+} __attribute__((packed));
+
+/*
+ * Last 16 bytes of rx_tpa_v2_start_cmpl.
+ *
+ * This TPA completion structure is used on devices where the
+ * `hwrm_vnic_qcaps.max_aggs_supported` value is greater than 0.
+ */
+/* rx_tpa_v2_start_cmpl_hi (size:128b/16B) */
+struct rx_tpa_v2_start_cmpl_hi {
+ uint32_t flags2;
+ /*
+ * This indicates that the ip checksum was calculated for the
+ * inner packet and that the sum passed for all segments
+ * included in the aggregation.
+ */
+ #define RX_TPA_V2_START_CMPL_FLAGS2_IP_CS_CALC \
+ UINT32_C(0x1)
+ /*
+ * This indicates that the TCP, UDP or ICMP checksum was
+ * calculated for the inner packet and that the sum passed
+ * for all segments included in the aggregation.
+ */
+ #define RX_TPA_V2_START_CMPL_FLAGS2_L4_CS_CALC \
+ UINT32_C(0x2)
+ /*
+ * This indicates that the ip checksum was calculated for the
+ * tunnel header and that the sum passed for all segments
+ * included in the aggregation.
+ */
+ #define RX_TPA_V2_START_CMPL_FLAGS2_T_IP_CS_CALC \
+ UINT32_C(0x4)
+ /*
+ * This indicates that the UDP checksum was
+ * calculated for the tunnel packet and that the sum passed for
+ * all segments included in the aggregation.
+ */
+ #define RX_TPA_V2_START_CMPL_FLAGS2_T_L4_CS_CALC \
+ UINT32_C(0x8)
+ /* This value indicates what format the metadata field is. */
+ #define RX_TPA_V2_START_CMPL_FLAGS2_META_FORMAT_MASK \
+ UINT32_C(0xf0)
+ #define RX_TPA_V2_START_CMPL_FLAGS2_META_FORMAT_SFT 4
+ /* No metadata information. Value is zero. */
+ #define RX_TPA_V2_START_CMPL_FLAGS2_META_FORMAT_NONE \
+ (UINT32_C(0x0) << 4)
+ /*
+ * The metadata field contains the VLAN tag and TPID value.
+ * - metadata[11:0] contains the vlan VID value.
+ * - metadata[12] contains the vlan DE value.
+ * - metadata[15:13] contains the vlan PRI value.
+ * - metadata[31:16] contains the vlan TPID value.
+ */
+ #define RX_TPA_V2_START_CMPL_FLAGS2_META_FORMAT_VLAN \
+ (UINT32_C(0x1) << 4)
+ /*
+ * If ext_meta_format is equal to 1, the metadata field
+ * contains the lower 16b of the tunnel ID value, justified
+ * to LSB
+ * - VXLAN = VNI[23:0] -> VXLAN Network ID
+ * - Geneve (NGE) = VNI[23:0] -> Virtual Network Identifier.
+ * - NVGRE = TNI[23:0] -> Tenant Network ID
+ * - GRE = KEY[31:0] -> key field with bit mask. zero if K = 0
+ * - IPV4 = 0 (not populated)
+ * - IPV6 = Flow Label[19:0]
+ * - PPPoE = sessionID[15:0]
+ * - MPLS = Outer label[19:0]
+ * - UPAR = Selected[31:0] with bit mask
+ */
+ #define RX_TPA_V2_START_CMPL_FLAGS2_META_FORMAT_TUNNEL_ID \
+ (UINT32_C(0x2) << 4)
+ /*
+ * if ext_meta_format is equal to 1, metadata field contains
+ * 16b metadata from the prepended header (chdr_data).
+ */
+ #define RX_TPA_V2_START_CMPL_FLAGS2_META_FORMAT_CHDR_DATA \
+ (UINT32_C(0x3) << 4)
+ /*
+ * If ext_meta_format is equal to 1, the metadata field contains
+ * the outer_l3_offset, inner_l2_offset, inner_l3_offset and
+ * inner_l4_size.
+ * - metadata[8:0] contains the outer_l3_offset.
+ * - metadata[17:9] contains the inner_l2_offset.
+ * - metadata[26:18] contains the inner_l3_offset.
+ * - metadata[31:27] contains the inner_l4_size.
+ */
+ #define RX_TPA_V2_START_CMPL_FLAGS2_META_FORMAT_HDR_OFFSET \
+ (UINT32_C(0x4) << 4)
+ #define RX_TPA_V2_START_CMPL_FLAGS2_META_FORMAT_LAST \
+ RX_TPA_V2_START_CMPL_FLAGS2_META_FORMAT_HDR_OFFSET
+ /*
+ * This field indicates the IP type for the inner-most IP header.
+ * A value of '0' indicates IPv4. A value of '1' indicates IPv6.
+ */
+ #define RX_TPA_V2_START_CMPL_FLAGS2_IP_TYPE \
+ UINT32_C(0x100)
+ /*
+ * This indicates that the complete 1's complement checksum was
+ * calculated for the packet.
+ */
+ #define RX_TPA_V2_START_CMPL_FLAGS2_COMPLETE_CHECKSUM_CALC \
+ UINT32_C(0x200)
+ /*
+ * The combination of this value and meta_format indicates what
+ * format the metadata field is.
+ */
+ #define RX_TPA_V2_START_CMPL_FLAGS2_EXT_META_FORMAT_MASK \
+ UINT32_C(0xc00)
+ #define RX_TPA_V2_START_CMPL_FLAGS2_EXT_META_FORMAT_SFT 10
+ /*
+ * This value is the complete 1's complement checksum calculated from
+ * the start of the outer L3 header to the end of the packet (not
+ * including the ethernet crc). It is valid when the
+ * 'complete_checksum_calc' flag is set. For TPA Start completions,
+ * the complete checksum is calculated for the first packet in the
+ * aggregation only.
+ */
+ #define RX_TPA_V2_START_CMPL_FLAGS2_COMPLETE_CHECKSUM_MASK \
+ UINT32_C(0xffff0000)
+ #define RX_TPA_V2_START_CMPL_FLAGS2_COMPLETE_CHECKSUM_SFT 16
+ /*
+ * This is data from the CFA block as indicated by the meta_format
+ * field.
+ */
+ uint32_t metadata;
+ /* When {ext_meta_format,meta_format}=1, this value is the VLAN VID. */
+ #define RX_TPA_V2_START_CMPL_METADATA_VID_MASK UINT32_C(0xfff)
+ #define RX_TPA_V2_START_CMPL_METADATA_VID_SFT 0
+ /* When {ext_meta_format,meta_format}=1, this value is the VLAN DE. */
+ #define RX_TPA_V2_START_CMPL_METADATA_DE UINT32_C(0x1000)
+ /* When {ext_meta_format,meta_format}=1, this value is the VLAN PRI. */
+ #define RX_TPA_V2_START_CMPL_METADATA_PRI_MASK UINT32_C(0xe000)
+ #define RX_TPA_V2_START_CMPL_METADATA_PRI_SFT 13
+ /* When {ext_meta_format,meta_format}=1, this value is the VLAN TPID. */
+ #define RX_TPA_V2_START_CMPL_METADATA_TPID_MASK UINT32_C(0xffff0000)
+ #define RX_TPA_V2_START_CMPL_METADATA_TPID_SFT 16
+ uint16_t errors_v2;
+ /*
+ * This value is written by the NIC such that it will be different
+ * for each pass through the completion queue. The even passes
+ * will write 1. The odd passes will write 0.
+ */
+ #define RX_TPA_V2_START_CMPL_V2 \
+ UINT32_C(0x1)
+ #define RX_TPA_V2_START_CMPL_ERRORS_MASK \
+ UINT32_C(0xfffe)
+ #define RX_TPA_V2_START_CMPL_ERRORS_SFT 1
+ /*
+ * This error indicates that there was some sort of problem with
+ * the BDs for the packet that was found after part of the
+ * packet was already placed. The packet should be treated as
+ * invalid.
+ */
+ #define RX_TPA_V2_START_CMPL_ERRORS_BUFFER_ERROR_MASK \
+ UINT32_C(0xe)
+ #define RX_TPA_V2_START_CMPL_ERRORS_BUFFER_ERROR_SFT 1
+ /* No buffer error */
+ #define RX_TPA_V2_START_CMPL_ERRORS_BUFFER_ERROR_NO_BUFFER \
+ (UINT32_C(0x0) << 1)
+ /*
+ * Bad Format:
+ * BDs were not formatted correctly.
+ */
+ #define RX_TPA_V2_START_CMPL_ERRORS_BUFFER_ERROR_BAD_FORMAT \
+ (UINT32_C(0x3) << 1)
+ /*
+ * Flush:
+ * There was a bad_format error on the previous operation
+ */
+ #define RX_TPA_V2_START_CMPL_ERRORS_BUFFER_ERROR_FLUSH \
+ (UINT32_C(0x5) << 1)
+ #define RX_TPA_V2_START_CMPL_ERRORS_BUFFER_ERROR_LAST \
+ RX_TPA_V2_START_CMPL_ERRORS_BUFFER_ERROR_FLUSH
+ /*
+ * This field identifies the CFA action rule that was used for this
+ * packet.
+ */
+ uint16_t cfa_code;
+ /*
+ * For devices that support timestamps this field is overridden
+ * with the timestamp value. When `flags.timestamp_fld_format` is
+ * cleared, this field contains the 32b timestamp for the packet from the
+ * MAC.
+ *
+ * When `flags.timestamp_fld_format` is set, this field contains the
+ * outer_l3_offset, inner_l2_offset, inner_l3_offset, and inner_l4_size
+ * as defined below.
+ */
+ uint32_t inner_l4_size_inner_l3_offset_inner_l2_offset_outer_l3_offset;
+ /*
+ * This is the offset from the beginning of the packet in bytes for
+ * the outer L3 header. If there is no outer L3 header, then this
+ * value is zero.
+ */
+ #define RX_TPA_V2_START_CMPL_OUTER_L3_OFFSET_MASK UINT32_C(0x1ff)
+ #define RX_TPA_V2_START_CMPL_OUTER_L3_OFFSET_SFT 0
+ /*
+ * This is the offset from the beginning of the packet in bytes for
+ * the inner most L2 header.
+ */
+ #define RX_TPA_V2_START_CMPL_INNER_L2_OFFSET_MASK UINT32_C(0x3fe00)
+ #define RX_TPA_V2_START_CMPL_INNER_L2_OFFSET_SFT 9
+ /*
+ * This is the offset from the beginning of the packet in bytes for
+ * the inner most L3 header.
+ */
+ #define RX_TPA_V2_START_CMPL_INNER_L3_OFFSET_MASK UINT32_C(0x7fc0000)
+ #define RX_TPA_V2_START_CMPL_INNER_L3_OFFSET_SFT 18
+ /*
+ * This is the size in bytes of the inner most L4 header.
+ * This can be subtracted from the payload_offset to determine
+ * the start of the inner most L4 header.
+ */
+ #define RX_TPA_V2_START_CMPL_INNER_L4_SIZE_MASK UINT32_C(0xf8000000)
+ #define RX_TPA_V2_START_CMPL_INNER_L4_SIZE_SFT 27
+} __attribute__((packed));
+
+/*
+ * This TPA completion structure is used on devices where the
+ * `hwrm_vnic_qcaps.max_aggs_supported` value is greater than 0.
+ */
+/* rx_tpa_v2_end_cmpl (size:128b/16B) */
+struct rx_tpa_v2_end_cmpl {
+ uint16_t flags_type;
+ /*
+ * This field indicates the exact type of the completion.
+ * By convention, the LSB identifies the length of the
+ * record in 16B units. Even values indicate 16B
+ * records. Odd values indicate 32B
+ * records.
+ */
+ #define RX_TPA_V2_END_CMPL_TYPE_MASK UINT32_C(0x3f)
+ #define RX_TPA_V2_END_CMPL_TYPE_SFT 0
+ /*
+ * RX L2 TPA End Completion:
+ * Completion at the end of a TPA operation.
+ * Length = 32B
+ */
+ #define RX_TPA_V2_END_CMPL_TYPE_RX_TPA_END UINT32_C(0x15)
+ #define RX_TPA_V2_END_CMPL_TYPE_LAST \
+ RX_TPA_V2_END_CMPL_TYPE_RX_TPA_END
+ #define RX_TPA_V2_END_CMPL_FLAGS_MASK UINT32_C(0xffc0)
+ #define RX_TPA_V2_END_CMPL_FLAGS_SFT 6
+ /*
+ * When this bit is '1', it indicates a packet that has an
+ * error of some type. Type of error is indicated in
+ * error_flags.
+ */
+ #define RX_TPA_V2_END_CMPL_FLAGS_ERROR UINT32_C(0x40)
+ /* This field indicates how the packet was placed in the buffer. */
+ #define RX_TPA_V2_END_CMPL_FLAGS_PLACEMENT_MASK UINT32_C(0x380)
+ #define RX_TPA_V2_END_CMPL_FLAGS_PLACEMENT_SFT 7
+ /*
+ * Jumbo:
+ * TPA Packet was placed using jumbo algorithm. This means
+ * that the first buffer will be filled with data before
+ * moving to aggregation buffers. Each aggregation buffer
+ * will be filled before moving to the next aggregation
+ * buffer.
+ */
+ #define RX_TPA_V2_END_CMPL_FLAGS_PLACEMENT_JUMBO \
+ (UINT32_C(0x1) << 7)
+ /*
+ * Header/Data Separation:
+ * Packet was placed using Header/Data separation algorithm.
+ * The separation location is indicated by the itype field.
+ */
+ #define RX_TPA_V2_END_CMPL_FLAGS_PLACEMENT_HDS \
+ (UINT32_C(0x2) << 7)
+ /*
+ * GRO/Jumbo:
+ * Packet will be placed using GRO/Jumbo where the first
+ * packet is filled with data. Subsequent packets will be
+ * placed such that any one packet does not span two
+ * aggregation buffers unless it starts at the beginning of
+ * an aggregation buffer.
+ */
+ #define RX_TPA_V2_END_CMPL_FLAGS_PLACEMENT_GRO_JUMBO \
+ (UINT32_C(0x5) << 7)
+ /*
+ * GRO/Header-Data Separation:
+ * Packet will be placed using GRO/HDS where the header
+ * is in the first packet.
+ * Payload of each packet will be
+ * placed such that any one packet does not span two
+ * aggregation buffers unless it starts at the beginning of
+ * an aggregation buffer.
+ */
+ #define RX_TPA_V2_END_CMPL_FLAGS_PLACEMENT_GRO_HDS \
+ (UINT32_C(0x6) << 7)
+ #define RX_TPA_V2_END_CMPL_FLAGS_PLACEMENT_LAST \
+ RX_TPA_V2_END_CMPL_FLAGS_PLACEMENT_GRO_HDS
+ /* unused is 2 b */
+ #define RX_TPA_V2_END_CMPL_FLAGS_UNUSED_MASK UINT32_C(0xc00)
+ #define RX_TPA_V2_END_CMPL_FLAGS_UNUSED_SFT 10
+ /*
+ * This value indicates what the inner packet determined for the
+ * packet was.
+ * - 2 TCP Packet
+ * Indicates that the packet was IP and TCP. This indicates
+ * that the ip_cs field is valid and that the tcp_udp_cs
+ * field is valid and contains the TCP checksum.
+ * This also indicates that the payload_offset field is valid.
+ */
+ #define RX_TPA_V2_END_CMPL_FLAGS_ITYPE_MASK UINT32_C(0xf000)
+ #define RX_TPA_V2_END_CMPL_FLAGS_ITYPE_SFT 12
+ /*
+ * This value is zero for TPA End completions.
+ * There is no data in the buffer that corresponds to the opaque
+ * value in this completion.
+ */
+ uint16_t len;
+ /*
+ * This is a copy of the opaque field from the RX BD this completion
+ * corresponds to.
+ */
+ uint32_t opaque;
+ uint8_t v1;
+ /*
+ * This value is written by the NIC such that it will be different
+ * for each pass through the completion queue. The even passes
+ * will write 1. The odd passes will write 0.
+ */
+ #define RX_TPA_V2_END_CMPL_V1 UINT32_C(0x1)
+ /* This value is the number of segments in the TPA operation. */
+ uint8_t tpa_segs;
+ /*
+ * This is the aggregation ID that the completion is associated
+ * with. Use this number to correlate the TPA start completion
+ * with the TPA end completion.
+ */
+ uint16_t agg_id;
+ /*
+ * For non-GRO packets, this value is the
+ * timestamp delta between earliest and latest timestamp values for
+ * TPA packet. If packets were not time stamped, then delta will be
+ * zero.
+ *
+ * For GRO packets, this field is zero except for the following
+ * sub-fields.
+ * - tsdelta[31]
+ * Timestamp present indication. When '0', no Timestamp
+ * option is in the packet. When '1', then a Timestamp
+ * option is present in the packet.
+ */
+ uint32_t tsdelta;
+} __attribute__((packed));
+
+/*
+ * Last 16 bytes of rx_tpa_v2_end_cmpl.
+ *
+ * This TPA completion structure is used on devices where the
+ * `hwrm_vnic_qcaps.max_aggs_supported` value is greater than 0.
+ */
+/* rx_tpa_v2_end_cmpl_hi (size:128b/16B) */
+struct rx_tpa_v2_end_cmpl_hi {
+ /*
+ * This value is the number of duplicate ACKs that have been
+ * received as part of the TPA operation.
+ */
+ uint16_t tpa_dup_acks;
+ /*
+ * This value is the number of duplicate ACKs that have been
+ * received as part of the TPA operation.
+ */
+ #define RX_TPA_V2_END_CMPL_TPA_DUP_ACKS_MASK UINT32_C(0xf)
+ #define RX_TPA_V2_END_CMPL_TPA_DUP_ACKS_SFT 0
+ /*
+ * This value indicates the offset in bytes from the beginning of
+ * the packet where the inner payload starts. This value is valid
+ * for TCP, UDP, FCoE and RoCE packets.
+ */
+ uint8_t payload_offset;
+ /*
+ * The value is the total number of aggregation buffers that were
+ * used in the TPA operation. All TPA aggregation buffer completions
+ * precede the TPA End completion. If the value is zero, then the
+ * aggregation is completely contained in the buffer space provided
+ * in the aggregation start completion.
+ * Note that the field is simply provided as a cross check.
+ */
+ uint8_t tpa_agg_bufs;
+ /*
+ * This value is valid when the TPA completion is active. It
+ * indicates the length of the longest segment of the TPA operation
+ * for LRO mode and the length of the first segment in GRO mode.
+ *
+ * This value may be used by GRO software to re-construct the original
+ * packet stream from the TPA packet. This is the length of all
+ * but the last segment for GRO. In LRO mode this value may be used
+ * to indicate MSS size to the stack.
+ */
+ uint16_t tpa_seg_len;
+ uint16_t unused_1;
+ uint16_t errors_v2;
+ /*
+ * This value is written by the NIC such that it will be different
+ * for each pass through the completion queue. The even passes
+ * will write 1. The odd passes will write 0.
+ */
+ #define RX_TPA_V2_END_CMPL_V2 UINT32_C(0x1)
+ #define RX_TPA_V2_END_CMPL_ERRORS_MASK \
+ UINT32_C(0xfffe)
+ #define RX_TPA_V2_END_CMPL_ERRORS_SFT 1
/*
* This error indicates that there was some sort of problem with
* the BDs for the packet that was found after part of the
* packet was already placed. The packet should be treated as
* invalid.
*/
- #define RX_TPA_END_CMPL_ERRORS_BUFFER_ERROR_MASK UINT32_C(0xe)
- #define RX_TPA_END_CMPL_ERRORS_BUFFER_ERROR_SFT 1
+ #define RX_TPA_V2_END_CMPL_ERRORS_BUFFER_ERROR_MASK \
+ UINT32_C(0xe)
+ #define RX_TPA_V2_END_CMPL_ERRORS_BUFFER_ERROR_SFT 1
/* No buffer error */
- #define RX_TPA_END_CMPL_ERRORS_BUFFER_ERROR_NO_BUFFER \
+ #define RX_TPA_V2_END_CMPL_ERRORS_BUFFER_ERROR_NO_BUFFER \
(UINT32_C(0x0) << 1)
/*
* This error occurs when there is a fatal HW problem in
@@ -3494,13 +4025,13 @@ struct rx_tpa_end_cmpl_hi {
* BDs on chip but that there was adequate reservation.
* provided by the TPA block.
*/
- #define RX_TPA_END_CMPL_ERRORS_BUFFER_ERROR_NOT_ON_CHIP \
+ #define RX_TPA_V2_END_CMPL_ERRORS_BUFFER_ERROR_NOT_ON_CHIP \
(UINT32_C(0x2) << 1)
/*
* Bad Format:
* BDs were not formatted correctly.
*/
- #define RX_TPA_END_CMPL_ERRORS_BUFFER_ERROR_BAD_FORMAT \
+ #define RX_TPA_V2_END_CMPL_ERRORS_BUFFER_ERROR_BAD_FORMAT \
(UINT32_C(0x3) << 1)
/*
* This error occurs when TPA block was not configured to
@@ -3512,18 +4043,17 @@ struct rx_tpa_end_cmpl_hi {
* 33 total aggregation buffers allowed for the TPA
* operation has been exceeded.
*/
- #define RX_TPA_END_CMPL_ERRORS_BUFFER_ERROR_RSV_ERROR \
+ #define RX_TPA_V2_END_CMPL_ERRORS_BUFFER_ERROR_RSV_ERROR \
(UINT32_C(0x4) << 1)
/*
* Flush:
* There was a bad_format error on the previous operation
*/
- #define RX_TPA_END_CMPL_ERRORS_BUFFER_ERROR_FLUSH \
+ #define RX_TPA_V2_END_CMPL_ERRORS_BUFFER_ERROR_FLUSH \
(UINT32_C(0x5) << 1)
- #define RX_TPA_END_CMPL_ERRORS_BUFFER_ERROR_LAST \
- RX_TPA_END_CMPL_ERRORS_BUFFER_ERROR_FLUSH
- /* unused5 is 16 b */
- uint16_t unused_4;
+ #define RX_TPA_V2_END_CMPL_ERRORS_BUFFER_ERROR_LAST \
+ RX_TPA_V2_END_CMPL_ERRORS_BUFFER_ERROR_FLUSH
+ uint16_t unused_2;
/*
* This is the opaque value that was completed for the TPA start
* completion that corresponds to this TPA end completion.
@@ -3531,6 +4061,60 @@ struct rx_tpa_end_cmpl_hi {
uint32_t start_opaque;
} __attribute__((packed));
+/*
+ * This TPA completion structure is used on devices where the
+ * `hwrm_vnic_qcaps.max_aggs_supported` value is greater than 0.
+ */
+/* rx_tpa_v2_abuf_cmpl (size:128b/16B) */
+struct rx_tpa_v2_abuf_cmpl {
+ uint16_t type;
+ /*
+ * This field indicates the exact type of the completion.
+ * By convention, the LSB identifies the length of the
+ * record in 16B units. Even values indicate 16B
+ * records. Odd values indicate 32B
+ * records.
+ */
+ #define RX_TPA_V2_ABUF_CMPL_TYPE_MASK UINT32_C(0x3f)
+ #define RX_TPA_V2_ABUF_CMPL_TYPE_SFT 0
+ /*
+ * RX TPA Aggregation Buffer completion :
+ * Completion of an L2 aggregation buffer in support of
+ * TPA packet completion. Length = 16B
+ */
+ #define RX_TPA_V2_ABUF_CMPL_TYPE_RX_TPA_AGG UINT32_C(0x16)
+ #define RX_TPA_V2_ABUF_CMPL_TYPE_LAST \
+ RX_TPA_V2_ABUF_CMPL_TYPE_RX_TPA_AGG
+ /*
+ * This is the length of the data for the packet stored in this
+ * aggregation buffer identified by the opaque value. This does not
+ * include the length of any
+ * data placed in other aggregation BDs or in the packet or buffer
+ * BDs. This length does not include any space added due to
+ * hdr_offset register during HDS placement mode.
+ */
+ uint16_t len;
+ /*
+ * This is a copy of the opaque field from the RX BD this aggregation
+ * buffer corresponds to.
+ */
+ uint32_t opaque;
+ uint16_t v;
+ /*
+ * This value is written by the NIC such that it will be different
+ * for each pass through the completion queue. The even passes
+ * will write 1. The odd passes will write 0.
+ */
+ #define RX_TPA_V2_ABUF_CMPL_V UINT32_C(0x1)
+ /*
+ * This is the aggregation ID that the completion is associated with. Use
+ * this number to correlate the TPA agg completion with the TPA start
+ * completion and the TPA end completion.
+ */
+ uint16_t agg_id;
+ uint32_t unused_1;
+} __attribute__((packed));
+
/* rx_abuf_cmpl (size:128b/16B) */
struct rx_abuf_cmpl {
uint16_t type;
@@ -3873,13 +4457,13 @@ struct hwrm_async_event_cmpl {
#define HWRM_ASYNC_EVENT_CMPL_EVENT_ID_DEBUG_NOTIFICATION \
UINT32_C(0x37)
/*
- * A EEM flow cached memory flush request event being posted to the PF
- * driver.
+ * An EEM flow cached memory flush for all flows request event being
+ * posted to the PF driver.
*/
#define HWRM_ASYNC_EVENT_CMPL_EVENT_ID_EEM_CACHE_FLUSH_REQ \
UINT32_C(0x38)
/*
- * A EEM flow cache memory flush completion event being posted to the
+ * An EEM flow cache memory flush completion event being posted to the
* firmware by the PF driver. This is indication that host EEM flush
* has completed by the PF.
*/
@@ -3894,7 +4478,7 @@ struct hwrm_async_event_cmpl {
#define HWRM_ASYNC_EVENT_CMPL_EVENT_ID_TCP_FLAG_ACTION_CHANGE \
UINT32_C(0x3a)
/*
- * A eem flow active event being posted to the PF or trusted VF driver
+ * An EEM flow active event being posted to the PF or trusted VF driver
* by the firmware. The PF or trusted VF driver should update the
* flow's aging timer after receiving this async event.
*/
@@ -4016,7 +4600,7 @@ struct hwrm_async_event_cmpl_link_status_change {
UINT32_C(0xffff0)
#define HWRM_ASYNC_EVENT_CMPL_LINK_STATUS_CHANGE_EVENT_DATA1_PORT_ID_SFT \
4
- /* Indicates the physical function this event occurred on. */
+ /* Indicates the physical function this event occurred on. */
#define HWRM_ASYNC_EVENT_CMPL_LINK_STATUS_CHANGE_EVENT_DATA1_PF_ID_MASK \
UINT32_C(0xff00000)
#define HWRM_ASYNC_EVENT_CMPL_LINK_STATUS_CHANGE_EVENT_DATA1_PF_ID_SFT \
@@ -4997,7 +5581,7 @@ struct hwrm_async_event_cmpl_vf_flr {
#define HWRM_ASYNC_EVENT_CMPL_VF_FLR_EVENT_DATA1_VF_ID_MASK \
UINT32_C(0xffff)
#define HWRM_ASYNC_EVENT_CMPL_VF_FLR_EVENT_DATA1_VF_ID_SFT 0
- /* Indicates the physical function this event occurred on. */
+ /* Indicates the physical function this event occurred on. */
#define HWRM_ASYNC_EVENT_CMPL_VF_FLR_EVENT_DATA1_PF_ID_MASK \
UINT32_C(0xff0000)
#define HWRM_ASYNC_EVENT_CMPL_VF_FLR_EVENT_DATA1_PF_ID_SFT 16
@@ -5345,12 +5929,12 @@ struct hwrm_async_event_cmpl_default_vnic_change {
UINT32_C(0x2)
#define HWRM_ASYNC_EVENT_CMPL_DEFAULT_VNIC_CHANGE_EVENT_DATA1_DEF_VNIC_STATE_LAST \
HWRM_ASYNC_EVENT_CMPL_DEFAULT_VNIC_CHANGE_EVENT_DATA1_DEF_VNIC_STATE_DEF_VNIC_FREE
- /* Indicates the physical function this event occurred on. */
+ /* Indicates the physical function this event occurred on. */
#define HWRM_ASYNC_EVENT_CMPL_DEFAULT_VNIC_CHANGE_EVENT_DATA1_PF_ID_MASK \
UINT32_C(0x3fc)
#define HWRM_ASYNC_EVENT_CMPL_DEFAULT_VNIC_CHANGE_EVENT_DATA1_PF_ID_SFT \
2
- /* Indicates the virtual function this event occurred on */
+ /* Indicates the virtual function this event occurred on. */
#define HWRM_ASYNC_EVENT_CMPL_DEFAULT_VNIC_CHANGE_EVENT_DATA1_VF_ID_MASK \
UINT32_C(0x3fffc00)
#define HWRM_ASYNC_EVENT_CMPL_DEFAULT_VNIC_CHANGE_EVENT_DATA1_VF_ID_SFT \
@@ -5400,12 +5984,12 @@ struct hwrm_async_event_cmpl_hw_flow_aged {
uint16_t timestamp_hi;
/* Event specific data */
uint32_t event_data1;
- /* Indicates flow ID this event occurred on. */
+ /* Indicates flow ID this event occured on. */
#define HWRM_ASYNC_EVENT_CMPL_HW_FLOW_AGED_EVENT_DATA1_FLOW_ID_MASK \
UINT32_C(0x7fffffff)
#define HWRM_ASYNC_EVENT_CMPL_HW_FLOW_AGED_EVENT_DATA1_FLOW_ID_SFT \
0
- /* Indicates flow direction this event occurred on. */
+ /* Indicates flow direction this event occured on. */
#define HWRM_ASYNC_EVENT_CMPL_HW_FLOW_AGED_EVENT_DATA1_FLOW_DIRECTION \
UINT32_C(0x80000000)
/*
@@ -5521,7 +6105,7 @@ struct hwrm_async_event_cmpl_eem_cache_flush_done {
uint16_t timestamp_hi;
/* Event specific data */
uint32_t event_data1;
- /* Indicates function ID that this event occurred on. */
+ /* Indicates function ID that this event occured on. */
#define HWRM_ASYNC_EVENT_CMPL_EEM_CACHE_FLUSH_DONE_EVENT_DATA1_FID_MASK \
UINT32_C(0xffff)
#define HWRM_ASYNC_EVENT_CMPL_EEM_CACHE_FLUSH_DONE_EVENT_DATA1_FID_SFT \
@@ -5603,7 +6187,7 @@ struct hwrm_async_event_cmpl_eem_flow_active {
HWRM_ASYNC_EVENT_CMPL_EEM_FLOW_ACTIVE_EVENT_ID_EEM_FLOW_ACTIVE
/* Event specific data */
uint32_t event_data2;
- /* Indicates the 2nd global id this event occurred on. */
+ /* Indicates the 2nd global id this event occured on. */
#define HWRM_ASYNC_EVENT_CMPL_EEM_FLOW_ACTIVE_EVENT_DATA2_GLOBAL_ID_2_MASK \
UINT32_C(0x3fffffff)
#define HWRM_ASYNC_EVENT_CMPL_EEM_FLOW_ACTIVE_EVENT_DATA2_GLOBAL_ID_2_SFT \
@@ -5639,7 +6223,7 @@ struct hwrm_async_event_cmpl_eem_flow_active {
uint16_t timestamp_hi;
/* Event specific data */
uint32_t event_data1;
- /* Indicates the 1st global id this event occurred on. */
+ /* Indicates the 1st global id this event occured on. */
#define HWRM_ASYNC_EVENT_CMPL_EEM_FLOW_ACTIVE_EVENT_DATA1_GLOBAL_ID_1_MASK \
UINT32_C(0x3fffffff)
#define HWRM_ASYNC_EVENT_CMPL_EEM_FLOW_ACTIVE_EVENT_DATA1_GLOBAL_ID_1_SFT \
@@ -5659,7 +6243,7 @@ struct hwrm_async_event_cmpl_eem_flow_active {
#define HWRM_ASYNC_EVENT_CMPL_EEM_FLOW_ACTIVE_EVENT_DATA1_FLOW_DIRECTION_LAST \
HWRM_ASYNC_EVENT_CMPL_EEM_FLOW_ACTIVE_EVENT_DATA1_FLOW_DIRECTION_TX
/*
- * Indicates EEM flow aging mode this event occurred on. If
+ * Indicates EEM flow aging mode this event occured on. If
* this bit is set to 0, the event_data1 is the EEM global
* ID. If this bit is set to 1, the event_data1 is the number
* of global ID in the context memory.
@@ -7002,6 +7586,16 @@ struct hwrm_func_qcfg_output {
*/
#define HWRM_FUNC_QCFG_OUTPUT_FLAGS_SECURE_MODE_ENABLED \
UINT32_C(0x80)
+ /*
+ * If set to 1, then this PF is enabled with a preboot driver that
+ * requires access to the legacy L2 ring model and legacy 32b
+ * doorbells. If set to 0, then this PF is not allowed to use
+ * the legacy L2 rings. This feature is not allowed on VFs and
+ * is only relevant for devices that require a context backing
+ * store.
+ */
+ #define HWRM_FUNC_QCFG_OUTPUT_FLAGS_PREBOOT_LEGACY_L2_RINGS \
+ UINT32_C(0x100)
/*
* This value is current MAC address configured for this
* function. A value of 00-00-00-00-00-00 indicates no
@@ -7281,7 +7875,16 @@ struct hwrm_func_qcfg_output {
* the unregister request on PF in the HOT Reset Process.
*/
uint16_t registered_vfs;
- uint8_t unused_1[3];
+ /*
+ * The size of the doorbell BAR in KBytes reserved for L2 including
+ * any area that is shared between L2 and RoCE. The L2 driver
+ * should only map the L2 portion of the doorbell BAR. Any rounding
+ * of the BAR size to the native CPU page size should be performed
+ * by the driver. If the value is zero, no special partitioning
+ * of the doorbell BAR between L2 and RoCE is required.
+ */
+ uint16_t l2_doorbell_bar_size_kb;
+ uint8_t unused_1;
/*
* For backward compatibility this field must be set to 1.
* Older drivers might look for this field to be 1 before
@@ -7522,6 +8125,14 @@ struct hwrm_func_cfg_input {
*/
#define HWRM_FUNC_CFG_INPUT_FLAGS_TRUSTED_VF_DISABLE \
UINT32_C(0x1000000)
+ /*
+ * This bit is used by preboot drivers on a PF that require access
+ * to the legacy L2 ring model and legacy 32b doorbells. This
+ * feature is not allowed on VFs and is only relevant for devices
+ * that require a context backing store.
+ */
+ #define HWRM_FUNC_CFG_INPUT_FLAGS_PREBOOT_LEGACY_L2_RINGS \
+ UINT32_C(0x2000000)
uint32_t enables;
/*
* This bit must be '1' for the mtu field to be
@@ -12063,6 +12674,212 @@ struct hwrm_func_drv_if_change_output {
uint8_t valid;
} __attribute__((packed));
+/*******************************
+ * hwrm_func_host_pf_ids_query *
+ *******************************/
+
+
+/* hwrm_func_host_pf_ids_query_input (size:192b/24B) */
+struct hwrm_func_host_pf_ids_query_input {
+ /* The HWRM command request type. */
+ uint16_t req_type;
+ /*
+ * The completion ring to send the completion event on. This should
+ * be the NQ ID returned from the `nq_alloc` HWRM command.
+ */
+ uint16_t cmpl_ring;
+ /*
+ * The sequence ID is used by the driver for tracking multiple
+ * commands. This ID is treated as opaque data by the firmware and
+ * the value is returned in the `hwrm_resp_hdr` upon completion.
+ */
+ uint16_t seq_id;
+ /*
+ * The target ID of the command:
+ * * 0x0-0xFFF8 - The function ID
+ * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+ * * 0xFFFD - Reserved for user-space HWRM interface
+ * * 0xFFFF - HWRM
+ */
+ uint16_t target_id;
+ /*
+ * A physical address pointer pointing to a host buffer that the
+ * command's response data will be written. This can be either a host
+ * physical address (HPA) or a guest physical address (GPA) and must
+ * point to a physically contiguous block of memory.
+ */
+ uint64_t resp_addr;
+ uint8_t host;
+ /*
+ * # If this bit is set to '1', the query will contain PF(s)
+ * belongs to SOC host.
+ */
+ #define HWRM_FUNC_HOST_PF_IDS_QUERY_INPUT_HOST_SOC UINT32_C(0x1)
+ /*
+ * # If this bit is set to '1', the query will contain PF(s)
+ * belongs to EP0 host.
+ */
+ #define HWRM_FUNC_HOST_PF_IDS_QUERY_INPUT_HOST_EP_0 UINT32_C(0x2)
+ /*
+ * # If this bit is set to '1', the query will contain PF(s)
+ * belongs to EP1 host.
+ */
+ #define HWRM_FUNC_HOST_PF_IDS_QUERY_INPUT_HOST_EP_1 UINT32_C(0x4)
+ /*
+ * # If this bit is set to '1', the query will contain PF(s)
+ * belongs to EP2 host.
+ */
+ #define HWRM_FUNC_HOST_PF_IDS_QUERY_INPUT_HOST_EP_2 UINT32_C(0x8)
+ /*
+ * # If this bit is set to '1', the query will contain PF(s)
+ * belongs to EP3 host.
+ */
+ #define HWRM_FUNC_HOST_PF_IDS_QUERY_INPUT_HOST_EP_3 UINT32_C(0x10)
+ /*
+ * This provides a filter of what PF(s) will be returned in the
+ * query..
+ */
+ uint8_t filter;
+ /*
+ * all available PF(s) belong to the host(s) (defined in the
+ * host field). This includes the hidden PFs.
+ */
+ #define HWRM_FUNC_HOST_PF_IDS_QUERY_INPUT_FILTER_ALL UINT32_C(0x0)
+ /*
+ * all available PF(s) belong to the host(s) (defined in the
+ * host field) that is available for L2 traffic.
+ */
+ #define HWRM_FUNC_HOST_PF_IDS_QUERY_INPUT_FILTER_L2 UINT32_C(0x1)
+ /*
+ * all available PF(s) belong to the host(s) (defined in the
+ * host field) that is available for ROCE traffic.
+ */
+ #define HWRM_FUNC_HOST_PF_IDS_QUERY_INPUT_FILTER_ROCE UINT32_C(0x2)
+ #define HWRM_FUNC_HOST_PF_IDS_QUERY_INPUT_FILTER_LAST \
+ HWRM_FUNC_HOST_PF_IDS_QUERY_INPUT_FILTER_ROCE
+ uint8_t unused_1[6];
+} __attribute__((packed));
+
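For illustration only (not part of the patch): a caller would populate the `host` and `filter` fields of this request by OR-ing the bit values defined above. The struct and function below are hypothetical stand-ins, not the driver's actual API:

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical stand-in for hwrm_func_host_pf_ids_query_input; only the
 * two fields being illustrated are modeled. */
struct pf_ids_query {
    uint8_t host;   /* bitmap of hosts to query */
    uint8_t filter; /* ALL / L2 / ROCE */
};

/* Build a query for the L2-capable PFs on the SoC and EP0 hosts,
 * using the bit values defined in the hunk above. */
static struct pf_ids_query build_l2_query(void)
{
    struct pf_ids_query q;

    memset(&q, 0, sizeof(q));
    q.host = UINT32_C(0x1) | UINT32_C(0x2); /* HOST_SOC | HOST_EP_0 */
    q.filter = UINT32_C(0x1);               /* FILTER_L2 */
    return q;
}
```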
+/* hwrm_func_host_pf_ids_query_output (size:128b/16B) */
+struct hwrm_func_host_pf_ids_query_output {
+ /* The specific error status for the command. */
+ uint16_t error_code;
+ /* The HWRM command request type. */
+ uint16_t req_type;
+ /* The sequence ID from the original command. */
+ uint16_t seq_id;
+ /* The length of the response data in number of bytes. */
+ uint16_t resp_len;
+ /* This provides the first PF ID of the device. */
+ uint16_t first_pf_id;
+ uint16_t pf_ordinal_mask;
+ /*
+ * When this bit is '1', it indicates first PF belongs to one of
+ * the hosts defined in the input request.
+ */
+ #define HWRM_FUNC_HOST_PF_IDS_QUERY_OUTPUT_PF_ORDINAL_MASK_FUNC_0 \
+ UINT32_C(0x1)
+ /*
+ * When this bit is '1', it indicates 2nd PF belongs to one of the
+ * hosts defined in the input request.
+ */
+ #define HWRM_FUNC_HOST_PF_IDS_QUERY_OUTPUT_PF_ORDINAL_MASK_FUNC_1 \
+ UINT32_C(0x2)
+ /*
+ * When this bit is '1', it indicates 3rd PF belongs to one of the
+ * hosts defined in the input request.
+ */
+ #define HWRM_FUNC_HOST_PF_IDS_QUERY_OUTPUT_PF_ORDINAL_MASK_FUNC_2 \
+ UINT32_C(0x4)
+ /*
+ * When this bit is '1', it indicates 4th PF belongs to one of the
+ * hosts defined in the input request.
+ */
+ #define HWRM_FUNC_HOST_PF_IDS_QUERY_OUTPUT_PF_ORDINAL_MASK_FUNC_3 \
+ UINT32_C(0x8)
+ /*
+ * When this bit is '1', it indicates 5th PF belongs to one of the
+ * hosts defined in the input request.
+ */
+ #define HWRM_FUNC_HOST_PF_IDS_QUERY_OUTPUT_PF_ORDINAL_MASK_FUNC_4 \
+ UINT32_C(0x10)
+ /*
+ * When this bit is '1', it indicates 6th PF belongs to one of the
+ * hosts defined in the input request.
+ */
+ #define HWRM_FUNC_HOST_PF_IDS_QUERY_OUTPUT_PF_ORDINAL_MASK_FUNC_5 \
+ UINT32_C(0x20)
+ /*
+ * When this bit is '1', it indicates 7th PF belongs to one of the
+ * hosts defined in the input request.
+ */
+ #define HWRM_FUNC_HOST_PF_IDS_QUERY_OUTPUT_PF_ORDINAL_MASK_FUNC_6 \
+ UINT32_C(0x40)
+ /*
+ * When this bit is '1', it indicates 8th PF belongs to one of the
+ * hosts defined in the input request.
+ */
+ #define HWRM_FUNC_HOST_PF_IDS_QUERY_OUTPUT_PF_ORDINAL_MASK_FUNC_7 \
+ UINT32_C(0x80)
+ /*
+ * When this bit is '1', it indicates 9th PF belongs to one of the
+ * hosts defined in the input request.
+ */
+ #define HWRM_FUNC_HOST_PF_IDS_QUERY_OUTPUT_PF_ORDINAL_MASK_FUNC_8 \
+ UINT32_C(0x100)
+ /*
+ * When this bit is '1', it indicates 10th PF belongs to one of the
+ * hosts defined in the input request.
+ */
+ #define HWRM_FUNC_HOST_PF_IDS_QUERY_OUTPUT_PF_ORDINAL_MASK_FUNC_9 \
+ UINT32_C(0x200)
+ /*
+ * When this bit is '1', it indicates 11th PF belongs to one of the
+ * hosts defined in the input request.
+ */
+ #define HWRM_FUNC_HOST_PF_IDS_QUERY_OUTPUT_PF_ORDINAL_MASK_FUNC_10 \
+ UINT32_C(0x400)
+ /*
+ * When this bit is '1', it indicates 12th PF belongs to one of the
+ * hosts defined in the input request.
+ */
+ #define HWRM_FUNC_HOST_PF_IDS_QUERY_OUTPUT_PF_ORDINAL_MASK_FUNC_11 \
+ UINT32_C(0x800)
+ /*
+ * When this bit is '1', it indicates 13th PF belongs to one of the
+ * hosts defined in the input request.
+ */
+ #define HWRM_FUNC_HOST_PF_IDS_QUERY_OUTPUT_PF_ORDINAL_MASK_FUNC_12 \
+ UINT32_C(0x1000)
+ /*
+ * When this bit is '1', it indicates 14th PF belongs to one of the
+ * hosts defined in the input request.
+ */
+ #define HWRM_FUNC_HOST_PF_IDS_QUERY_OUTPUT_PF_ORDINAL_MASK_FUNC_13 \
+ UINT32_C(0x2000)
+ /*
+ * When this bit is '1', it indicates 15th PF belongs to one of the
+ * hosts defined in the input request.
+ */
+ #define HWRM_FUNC_HOST_PF_IDS_QUERY_OUTPUT_PF_ORDINAL_MASK_FUNC_14 \
+ UINT32_C(0x4000)
+ /*
+ * When this bit is '1', it indicates 16th PF belongs to one of the
+ * hosts defined in the input request.
+ */
+ #define HWRM_FUNC_HOST_PF_IDS_QUERY_OUTPUT_PF_ORDINAL_MASK_FUNC_15 \
+ UINT32_C(0x8000)
+ uint8_t unused_1[3];
+ /*
+ * This field is used in Output records to indicate that the output
+ * is completely written to RAM. This field should be read as '1'
+ * to indicate that the output has been completely written.
+ * When writing a command completion or response to an internal processor,
+ * the order of writes has to be such that this field is written last.
+ */
+ uint8_t valid;
+} __attribute__((packed));
+
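To illustrate how the response above is meant to be consumed (again, this sketch is not part of the patch): each set bit in `pf_ordinal_mask` names one PF, and, assuming PF IDs are consecutive starting at `first_pf_id` (an assumption, not stated by the spec), the IDs can be recovered as below. The helper name and standalone parameters are hypothetical:

```c
#include <stdint.h>

/* Hypothetical helper: expand a hwrm_func_host_pf_ids_query response
 * into individual PF IDs. Bit N of pf_ordinal_mask marks the (N+1)th
 * PF; IDs are assumed consecutive from first_pf_id. Returns the number
 * of IDs written (at most 16). */
static int expand_pf_ids(uint16_t first_pf_id, uint16_t pf_ordinal_mask,
                         uint16_t pf_ids[16])
{
    int ordinal, count = 0;

    for (ordinal = 0; ordinal < 16; ordinal++)
        if (pf_ordinal_mask & (UINT32_C(1) << ordinal))
            pf_ids[count++] = (uint16_t)(first_pf_id + ordinal);
    return count;
}
```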
/*********************
* hwrm_port_phy_cfg *
*********************/
@@ -28144,6 +28961,12 @@ struct hwrm_cfa_flow_flush_input {
*/
#define HWRM_CFA_FLOW_FLUSH_INPUT_FLAGS_FLOW_RESET_ALL \
UINT32_C(0x2)
+ /*
+ * Set to 1 to indicate flow flush operation to cleanup all the flows by the caller.
+ * This flag is set to 0 by older driver. For older firmware, setting this flag has no effect.
+ */
+ #define HWRM_CFA_FLOW_FLUSH_INPUT_FLAGS_FLOW_RESET_PORT \
+ UINT32_C(0x4)
/* Set to 1 to indicate the flow counter IDs are included in the flow table. */
#define HWRM_CFA_FLOW_FLUSH_INPUT_FLAGS_FLOW_HANDLE_INCL_FC \
UINT32_C(0x8000000)
@@ -29316,7 +30139,15 @@ struct hwrm_cfa_ctx_mem_qcaps_output {
uint16_t resp_len;
/* Indicates the maximum number of context memory which can be registered. */
uint16_t max_entries;
- uint8_t unused_0[6];
+ uint8_t unused_0[5];
+ /*
+ * This field is used in Output records to indicate that the output
+ * is completely written to RAM. This field should be read as '1'
+ * to indicate that the output has been completely written.
+ * When writing a command completion or response to an internal processor,
+ * the order of writes has to be such that this field is written last.
+ */
+ uint8_t valid;
} __attribute__((packed));
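The `valid` byte added by this hunk follows the usual HWRM response convention described in its comment: firmware writes it last, so the host should poll it, and order subsequent loads after it, before trusting the rest of the DMA'd response. A minimal sketch of that pattern, not taken from the driver (the bounded spin and bare acquire fence are simplifications):

```c
#include <stdbool.h>
#include <stdint.h>

/* Poll the trailing 'valid' byte of a DMA'd HWRM response. Firmware
 * writes this byte last, so once it reads 1 (and an acquire fence has
 * ordered later loads behind it), the rest of the response may be
 * read safely. */
static bool hwrm_resp_ready(const volatile uint8_t *valid, int max_tries)
{
    while (max_tries-- > 0) {
        if (*valid == 1) {
            __atomic_thread_fence(__ATOMIC_ACQUIRE);
            return true;
        }
    }
    return false;
}
```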
/**********************
@@ -29375,7 +30206,7 @@ struct hwrm_cfa_eem_qcaps_input {
uint32_t unused_0;
} __attribute__((packed));
-/* hwrm_cfa_eem_qcaps_output (size:256b/32B) */
+/* hwrm_cfa_eem_qcaps_output (size:320b/40B) */
struct hwrm_cfa_eem_qcaps_output {
/* The specific error status for the command. */
uint16_t error_code;
@@ -29420,29 +30251,35 @@ struct hwrm_cfa_eem_qcaps_output {
uint32_t supported;
/*
* If set to 1, then EEM KEY0 table is supported using crc32 hash.
- * If set to 0 EEM KEY0 table is not supported.
+ * If set to 0, EEM KEY0 table is not supported.
*/
#define HWRM_CFA_EEM_QCAPS_OUTPUT_SUPPORTED_KEY0_TABLE \
UINT32_C(0x1)
/*
* If set to 1, then EEM KEY1 table is supported using lookup3 hash.
- * If set to 0 EEM KEY1 table is not supported.
+ * If set to 0, EEM KEY1 table is not supported.
*/
#define HWRM_CFA_EEM_QCAPS_OUTPUT_SUPPORTED_KEY1_TABLE \
UINT32_C(0x2)
/*
* If set to 1, then EEM External Record table is supported.
- * If set to 0 EEM External Record table is not supported.
+ * If set to 0, EEM External Record table is not supported.
* (This table includes action record, EFC pointers, encap pointers)
*/
#define HWRM_CFA_EEM_QCAPS_OUTPUT_SUPPORTED_EXTERNAL_RECORD_TABLE \
UINT32_C(0x4)
/*
* If set to 1, then EEM External Flow Counters table is supported.
- * If set to 0 EEM External Flow Counters table is not supported.
+ * If set to 0, EEM External Flow Counters table is not supported.
*/
#define HWRM_CFA_EEM_QCAPS_OUTPUT_SUPPORTED_EXTERNAL_FLOW_COUNTERS_TABLE \
UINT32_C(0x8)
+ /*
+ * If set to 1, then FID table used for implicit flow flush is supported.
+ * If set to 0, then FID table used for implicit flow flush is not supported.
+ */
+ #define HWRM_CFA_EEM_QCAPS_OUTPUT_SUPPORTED_FID_TABLE \
+ UINT32_C(0x10)
/*
* The maximum number of entries supported by EEM. When configuring the host memory
* the number of numbers of entries that can supported are -
@@ -29451,13 +30288,15 @@ struct hwrm_cfa_eem_qcaps_output {
* number of entries.
*/
uint32_t max_entries_supported;
- /* The entry size in bytes of each entry in the KEY0/KEY1 EEM tables. */
+ /* The entry size in bytes of each entry in the EEM KEY0/KEY1 tables. */
uint16_t key_entry_size;
- /* The entry size in bytes of each entry in the RECORD EEM tables. */
+ /* The entry size in bytes of each entry in the EEM RECORD tables. */
uint16_t record_entry_size;
- /* The entry size in bytes of each entry in the EFC EEM tables. */
+ /* The entry size in bytes of each entry in the EEM EFC tables. */
uint16_t efc_entry_size;
- uint8_t unused_1;
+ /* The FID size in bytes of each entry in the EEM FID tables. */
+ uint16_t fid_entry_size;
+ uint8_t unused_1[7];
/*
* This field is used in Output records to indicate that the output
* is completely written to RAM. This field should be read as '1'
@@ -29473,7 +30312,7 @@ struct hwrm_cfa_eem_qcaps_output {
********************/
-/* hwrm_cfa_eem_cfg_input (size:320b/40B) */
+/* hwrm_cfa_eem_cfg_input (size:384b/48B) */
struct hwrm_cfa_eem_cfg_input {
/* The HWRM command request type. */
uint16_t req_type;
@@ -29545,6 +30384,10 @@ struct hwrm_cfa_eem_cfg_input {
uint16_t record_ctx_id;
/* Configured EEM with the given context if for EFC table. */
uint16_t efc_ctx_id;
+ /* Configured EEM with the given context if for EFC table. */
+ uint16_t fid_ctx_id;
+ uint16_t unused_2;
+ uint32_t unused_3;
} __attribute__((packed));
/* hwrm_cfa_eem_cfg_output (size:128b/16B) */
@@ -29611,7 +30454,7 @@ struct hwrm_cfa_eem_qcfg_input {
uint32_t unused_0;
} __attribute__((packed));
-/* hwrm_cfa_eem_qcfg_output (size:192b/24B) */
+/* hwrm_cfa_eem_qcfg_output (size:256b/32B) */
struct hwrm_cfa_eem_qcfg_output {
/* The specific error status for the command. */
uint16_t error_code;
@@ -29633,7 +30476,17 @@ struct hwrm_cfa_eem_qcfg_output {
UINT32_C(0x4)
/* The number of entries the FW has configured for EEM. */
uint32_t num_entries;
- uint8_t unused_0[7];
+ /* Configured EEM with the given context if for KEY0 table. */
+ uint16_t key0_ctx_id;
+ /* Configured EEM with the given context if for KEY1 table. */
+ uint16_t key1_ctx_id;
+ /* Configured EEM with the given context if for RECORD table. */
+ uint16_t record_ctx_id;
+ /* Configured EEM with the given context if for EFC table. */
+ uint16_t efc_ctx_id;
+ /* Configured EEM with the given context if for EFC table. */
+ uint16_t fid_ctx_id;
+ uint8_t unused_2[5];
/*
* This field is used in Output records to indicate that the output
* is completely written to RAM. This field should be read as '1'
@@ -30287,6 +31140,53 @@ struct ctx_hw_stats {
uint64_t tpa_aborts;
} __attribute__((packed));
+/* Periodic statistics context DMA to host. */
+/* ctx_hw_stats_ext (size:1344b/168B) */
+struct ctx_hw_stats_ext {
+ /* Number of received unicast packets */
+ uint64_t rx_ucast_pkts;
+ /* Number of received multicast packets */
+ uint64_t rx_mcast_pkts;
+ /* Number of received broadcast packets */
+ uint64_t rx_bcast_pkts;
+ /* Number of discarded packets on received path */
+ uint64_t rx_discard_pkts;
+ /* Number of dropped packets on received path */
+ uint64_t rx_drop_pkts;
+ /* Number of received bytes for unicast traffic */
+ uint64_t rx_ucast_bytes;
+ /* Number of received bytes for multicast traffic */
+ uint64_t rx_mcast_bytes;
+ /* Number of received bytes for broadcast traffic */
+ uint64_t rx_bcast_bytes;
+ /* Number of transmitted unicast packets */
+ uint64_t tx_ucast_pkts;
+ /* Number of transmitted multicast packets */
+ uint64_t tx_mcast_pkts;
+ /* Number of transmitted broadcast packets */
+ uint64_t tx_bcast_pkts;
+ /* Number of discarded packets on transmit path */
+ uint64_t tx_discard_pkts;
+ /* Number of dropped packets on transmit path */
+ uint64_t tx_drop_pkts;
+ /* Number of transmitted bytes for unicast traffic */
+ uint64_t tx_ucast_bytes;
+ /* Number of transmitted bytes for multicast traffic */
+ uint64_t tx_mcast_bytes;
+ /* Number of transmitted bytes for broadcast traffic */
+ uint64_t tx_bcast_bytes;
+ /* Number of TPA eligible packets */
+ uint64_t rx_tpa_eligible_pkt;
+ /* Number of TPA eligible bytes */
+ uint64_t rx_tpa_eligible_bytes;
+ /* Number of TPA packets */
+ uint64_t rx_tpa_pkt;
+ /* Number of TPA bytes */
+ uint64_t rx_tpa_bytes;
+ /* Number of TPA errors */
+ uint64_t rx_tpa_errors;
+} __attribute__((packed));
+
/* Periodic Engine statistics context DMA to host. */
/* ctx_eng_stats (size:512b/64B) */
struct ctx_eng_stats {
@@ -30301,13 +31201,13 @@ struct ctx_eng_stats {
uint64_t eng_bytes_out;
/*
* Count, in 4-byte (dword) units, of bytes
- * that are input as auxiliary data.
+ * that are input as auxillary data.
* This includes the aux_cmd data.
*/
uint64_t aux_bytes_in;
/*
* Count, in 4-byte (dword) units, of bytes
- * that are output as auxiliary data.
+ * that are output as auxillary data.
* This count is the buffer space for aux_data
* output provided in the RQE, not the actual
* aux_data written
@@ -30381,6 +31281,11 @@ struct hwrm_stat_ctx_alloc_input {
* shall be never done and the DMA address shall not be used.
* In this case, the stat block can only be read by
* hwrm_stat_ctx_query command.
+ * On Ethernet/L2 based devices:
+ * if tpa v2 supported (hwrm_vnic_qcaps[max_aggs_supported]>0),
+ * ctx_hw_stats_ext is used for DMA,
+ * else
+ * ctx_hw_stats is used for DMA.
*/
uint32_t update_period_ms;
/*
@@ -30397,7 +31302,13 @@ struct hwrm_stat_ctx_alloc_input {
* used for network traffic or engine traffic.
*/
#define HWRM_STAT_CTX_ALLOC_INPUT_STAT_CTX_FLAGS_ROCE UINT32_C(0x1)
- uint8_t unused_0[3];
+ uint8_t unused_0;
+ /*
+ * This is the size of the structure (ctx_hw_stats or
+ * ctx_hw_stats_ext) that the driver has allocated to be used
+ * for the periodic DMA updates.
+ */
+ uint16_t stats_dma_length;
} __attribute__((packed));
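As the `stats_dma_length` comment explains, the driver reports the size of the stats block it allocated: `ctx_hw_stats_ext` (168 bytes per the `size:1344b/168B` annotation above) when the device supports TPA v2, else `ctx_hw_stats`. A hedged sketch of that selection; `bnxt_stats_dma_length` is a hypothetical helper, and the base size is passed in because `sizeof(struct ctx_hw_stats)` is not shown in this hunk:

```c
#include <stdint.h>

#define CTX_HW_STATS_EXT_SIZE 168 /* from the (size:1344b/168B) note above */

/* Hypothetical helper: pick the DMA length to report in
 * hwrm_stat_ctx_alloc. max_aggs_supported > 0 (from hwrm_vnic_qcaps)
 * indicates TPA v2, which DMAs the extended stats block; base_size is
 * sizeof(struct ctx_hw_stats) on the target, passed in rather than
 * assumed here. */
static uint16_t bnxt_stats_dma_length(uint16_t max_aggs_supported,
                                      uint16_t base_size)
{
    return max_aggs_supported > 0 ? CTX_HW_STATS_EXT_SIZE : base_size;
}
```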
/* hwrm_stat_ctx_alloc_output (size:128b/16B) */
@@ -30648,13 +31559,13 @@ struct hwrm_stat_ctx_eng_query_output {
uint64_t eng_bytes_out;
/*
* Count, in 4-byte (dword) units, of bytes
- * that are input as auxiliary data.
+ * that are input as auxillary data.
* This includes the aux_cmd data.
*/
uint64_t aux_bytes_in;
/*
* Count, in 4-byte (dword) units, of bytes
- * that are output as auxiliary data.
+ * that are output as auxillary data.
* This count is the buffer space for aux_data
* output provided in the RQE, not the actual
* aux_data written
@@ -32549,6 +33460,12 @@ struct hwrm_nvm_set_variable_input {
(UINT32_C(0x3) << 1)
#define HWRM_NVM_SET_VARIABLE_INPUT_FLAGS_ENCRYPT_MODE_LAST \
HWRM_NVM_SET_VARIABLE_INPUT_FLAGS_ENCRYPT_MODE_HMAC_SHA1_AUTH
+ #define HWRM_NVM_SET_VARIABLE_INPUT_FLAGS_FLAGS_UNUSED_0_MASK \
+ UINT32_C(0x70)
+ #define HWRM_NVM_SET_VARIABLE_INPUT_FLAGS_FLAGS_UNUSED_0_SFT 4
+ /* When this bit is 1, update the factory default region */
+ #define HWRM_NVM_SET_VARIABLE_INPUT_FLAGS_FACTORY_DEFAULT \
+ UINT32_C(0x80)
uint8_t unused_0;
} __attribute__((packed));
--
2.20.1 (Apple Git-117)
^ permalink raw reply [flat|nested] 38+ messages in thread
* Re: [dpdk-dev] [PATCH 00/22] bnxt patchset
2019-07-18 3:35 [dpdk-dev] [PATCH 00/22] bnxt patchset Ajit Khaparde
` (21 preceding siblings ...)
2019-07-18 3:36 ` [dpdk-dev] [PATCH 22/22] net/bnxt: update HWRM API to version 1.10.0.91 Ajit Khaparde
@ 2019-07-19 12:30 ` Ferruh Yigit
2019-07-19 13:22 ` Ajit Kumar Khaparde
2019-07-19 21:01 ` Ferruh Yigit
23 siblings, 1 reply; 38+ messages in thread
From: Ferruh Yigit @ 2019-07-19 12:30 UTC (permalink / raw)
To: Ajit Khaparde, dev; +Cc: Thomas Monjalon
On 7/18/2019 4:35 AM, Ajit Khaparde wrote:
> This patchset based on commit a164bb7c0a5ab3b100357cf56696c945fe28ab73
> contains bug fixes and an update to the HWRM API.
> Please apply.
>
> Ajit Khaparde (1):
> net/bnxt: update HWRM API to version 1.10.0.91
>
> Kalesh AP (11):
> net/bnxt: fix to handle error case during port start
> net/bnxt: fix return value check of address mapping
> net/bnxt: fix failure to add a MAC address
> net/bnxt: fix an unconditional wait in link update
> net/bnxt: fix setting primary MAC address
> net/bnxt: fix failure path in dev init
> net/bnxt: reset filters before registering interrupts
> net/bnxt: fix error checking of FW commands
> net/bnxt: fix to return standard error codes
> net/bnxt: fix lock release on getting NVM info
> net/bnxt: fix to correctly check result of HWRM command
>
> Lance Richardson (8):
> net/bnxt: use correct vnic default completion ring
> net/bnxt: use dedicated cpr for async events
> net/bnxt: retry irq callback deregistration
> net/bnxt: use correct RSS table sizes
> net/bnxt: fully initialize hwrm msgs for thor RSS cfg
> net/bnxt: use correct number of RSS contexts for thor
> net/bnxt: pass correct RSS table address for thor
> net/bnxt: avoid overrun in get statistics
>
> Santoshkumar Karanappa Rastapur (2):
> net/bnxt: fix RSS disable issue for thor-based adapters
> net/bnxt: fix MAC/VLAN filter allocation failure
>
Hi Ajit,
All bnxt patches have been sent after rc1, and this one has been sent a day
before rc2. I believe you are aware that the proposal deadline was "June 3, 2019".
I will still try to get them, but most probably the patchset won't be able to make
this release, fyi.
Regards,
ferruh
* Re: [dpdk-dev] [PATCH 00/22] bnxt patchset
2019-07-19 12:30 ` [dpdk-dev] [PATCH 00/22] bnxt patchset Ferruh Yigit
@ 2019-07-19 13:22 ` Ajit Kumar Khaparde
2019-07-19 16:59 ` Ferruh Yigit
0 siblings, 1 reply; 38+ messages in thread
From: Ajit Kumar Khaparde @ 2019-07-19 13:22 UTC (permalink / raw)
To: Ferruh Yigit; +Cc: dev, Thomas Monjalon
> On Jul 19, 2019, at 18:00, Ferruh Yigit <ferruh.yigit@intel.com> wrote:
>
>> On 7/18/2019 4:35 AM, Ajit Khaparde wrote:
>> This patchset based on commit a164bb7c0a5ab3b100357cf56696c945fe28ab73
>> contains bug fixes and an update to the HWRM API.
>> Please apply.
>>
>> Ajit Khaparde (1):
>> net/bnxt: update HWRM API to version 1.10.0.91
>>
>> Kalesh AP (11):
>> net/bnxt: fix to handle error case during port start
>> net/bnxt: fix return value check of address mapping
>> net/bnxt: fix failure to add a MAC address
>> net/bnxt: fix an unconditional wait in link update
>> net/bnxt: fix setting primary MAC address
>> net/bnxt: fix failure path in dev init
>> net/bnxt: reset filters before registering interrupts
>> net/bnxt: fix error checking of FW commands
>> net/bnxt: fix to return standard error codes
>> net/bnxt: fix lock release on getting NVM info
>> net/bnxt: fix to correctly check result of HWRM command
>>
>> Lance Richardson (8):
>> net/bnxt: use correct vnic default completion ring
>> net/bnxt: use dedicated cpr for async events
>> net/bnxt: retry irq callback deregistration
>> net/bnxt: use correct RSS table sizes
>> net/bnxt: fully initialize hwrm msgs for thor RSS cfg
>> net/bnxt: use correct number of RSS contexts for thor
>> net/bnxt: pass correct RSS table address for thor
>> net/bnxt: avoid overrun in get statistics
>>
>> Santoshkumar Karanappa Rastapur (2):
>> net/bnxt: fix RSS disable issue for thor-based adapters
>> net/bnxt: fix MAC/VLAN filter allocation failure
>>
>
>
> Hi Ajit,
>
> All bnxt patches have been sent after rc1, and this one has been sent a day
> before rc2. I believe you are aware that the proposal deadline was "June 3, 2019".
I understand. But most of these, including the next patch set, are bug fixes.
It's just that some stayed in our staging area longer. Most of them
were detected and fixed in the last week when our QA ramped up testing on
rc1.
In fact, as I type this mail I see two more bug fixes ready for submission.
>
> I will still try to get them, but most probably the patchset won't be able to make
> this release, fyi.
Thanks
Ajit
>
> Regards,
> ferruh
* Re: [dpdk-dev] [PATCH 00/22] bnxt patchset
2019-07-19 13:22 ` Ajit Kumar Khaparde
@ 2019-07-19 16:59 ` Ferruh Yigit
0 siblings, 0 replies; 38+ messages in thread
From: Ferruh Yigit @ 2019-07-19 16:59 UTC (permalink / raw)
To: Ajit Kumar Khaparde; +Cc: dev, Thomas Monjalon
On 7/19/2019 2:22 PM, Ajit Kumar Khaparde wrote:
>> On Jul 19, 2019, at 18:00, Ferruh Yigit <ferruh.yigit@intel.com> wrote:
>>
>>> On 7/18/2019 4:35 AM, Ajit Khaparde wrote:
>>> This patchset based on commit a164bb7c0a5ab3b100357cf56696c945fe28ab73
>>> contains bug fixes and an update to the HWRM API.
>>> Please apply.
>>>
>>> Ajit Khaparde (1):
>>> net/bnxt: update HWRM API to version 1.10.0.91
>>>
>>> Kalesh AP (11):
>>> net/bnxt: fix to handle error case during port start
>>> net/bnxt: fix return value check of address mapping
>>> net/bnxt: fix failure to add a MAC address
>>> net/bnxt: fix an unconditional wait in link update
>>> net/bnxt: fix setting primary MAC address
>>> net/bnxt: fix failure path in dev init
>>> net/bnxt: reset filters before registering interrupts
>>> net/bnxt: fix error checking of FW commands
>>> net/bnxt: fix to return standard error codes
>>> net/bnxt: fix lock release on getting NVM info
>>> net/bnxt: fix to correctly check result of HWRM command
>>>
>>> Lance Richardson (8):
>>> net/bnxt: use correct vnic default completion ring
>>> net/bnxt: use dedicated cpr for async events
>>> net/bnxt: retry irq callback deregistration
>>> net/bnxt: use correct RSS table sizes
>>> net/bnxt: fully initialize hwrm msgs for thor RSS cfg
>>> net/bnxt: use correct number of RSS contexts for thor
>>> net/bnxt: pass correct RSS table address for thor
>>> net/bnxt: avoid overrun in get statistics
>>>
>>> Santoshkumar Karanappa Rastapur (2):
>>> net/bnxt: fix RSS disable issue for thor-based adapters
>>> net/bnxt: fix MAC/VLAN filter allocation failure
>>>
>>
>>
>> Hi Ajit,
>>
>> All bnxt patches have been sent after rc1, and this one has been sent a day
>> before rc2. I believe you are aware that the proposal deadline was "June 3, 2019".
> I understand. But most of these, including the next patch set, are bug fixes.
> It's just that some stayed in our staging area longer. Most of them
> were detected and fixed in the last week when our QA ramped up testing on
> rc1.
You are right, I will do my best to get them for rc2.
>
> In fact, as I type this mail I see two more bug fixes ready for submission.
>
>>
>> I will still try to get them, but most probably the patchset won't be able to make
>> this release, fyi.
>
> Thanks
> Ajit
>
>>
>> Regards,
>> ferruh
* Re: [dpdk-dev] [PATCH 00/22] bnxt patchset
2019-07-18 3:35 [dpdk-dev] [PATCH 00/22] bnxt patchset Ajit Khaparde
` (22 preceding siblings ...)
2019-07-19 12:30 ` [dpdk-dev] [PATCH 00/22] bnxt patchset Ferruh Yigit
@ 2019-07-19 21:01 ` Ferruh Yigit
23 siblings, 0 replies; 38+ messages in thread
From: Ferruh Yigit @ 2019-07-19 21:01 UTC (permalink / raw)
To: Ajit Khaparde, dev
Cc: Kalesh AP, Lance Richardson, Santoshkumar Karanappa Rastapur
On 7/18/2019 4:35 AM, Ajit Khaparde wrote:
> This patchset based on commit a164bb7c0a5ab3b100357cf56696c945fe28ab73
> contains bug fixes and an update to the HWRM API.
> Please apply.
>
> Ajit Khaparde (1):
> net/bnxt: update HWRM API to version 1.10.0.91
>
> Kalesh AP (11):
> net/bnxt: fix to handle error case during port start
> net/bnxt: fix return value check of address mapping
> net/bnxt: fix failure to add a MAC address
> net/bnxt: fix an unconditional wait in link update
> net/bnxt: fix setting primary MAC address
> net/bnxt: fix failure path in dev init
> net/bnxt: reset filters before registering interrupts
> net/bnxt: fix error checking of FW commands
> net/bnxt: fix to return standard error codes
> net/bnxt: fix lock release on getting NVM info
> net/bnxt: fix to correctly check result of HWRM command
>
> Lance Richardson (8):
> net/bnxt: use correct vnic default completion ring
> net/bnxt: use dedicated cpr for async events
> net/bnxt: retry irq callback deregistration
> net/bnxt: use correct RSS table sizes
> net/bnxt: fully initialize hwrm msgs for thor RSS cfg
> net/bnxt: use correct number of RSS contexts for thor
> net/bnxt: pass correct RSS table address for thor
> net/bnxt: avoid overrun in get statistics
>
> Santoshkumar Karanappa Rastapur (2):
> net/bnxt: fix RSS disable issue for thor-based adapters
> net/bnxt: fix MAC/VLAN filter allocation failure
Series applied to dpdk-next-net/master, thanks.
- fixed build error [1] for some targets [2], because of missing header
- fixed checkpatch warnings [3]
[1]
.../drivers/net/bnxt/bnxt_irq.c: In function ‘bnxt_free_int’:
.../drivers/net/bnxt/bnxt_irq.c:79:4: error: implicit declaration of function
‘rte_delay_ms’; did you mean ‘rte_read64’? [-Werror=implicit-function-declaration]
rte_delay_ms(50);
^~~~~~~~~~~~
rte_read64
[2]
arm64-armv8a-linuxapp-gcc
ppc_64-power8-linuxapp-gcc
arm64-thunderx-linuxapp-gcc
[3]
fixed following checkpatch warnings (not including multiple instance for same
typo) while merging, please fix them before sending next time.
WARNING:TYPO_SPELLING: 'preceed' may be misspelled - perhaps 'precede'?
#933: FILE: drivers/net/bnxt/hsi_struct_def_dpdk.h:3982:
+ * preceed the TPA End completion. If the value is zero, then the
WARNING:TYPO_SPELLING: 'occured' may be misspelled - perhaps 'occurred'?
#1110: FILE: drivers/net/bnxt/hsi_struct_def_dpdk.h:4603:
+ /* Indicates the physical function this event occured on. */
WARNING:TYPO_SPELLING: 'auxillary' may be misspelled - perhaps 'auxiliary'?
#1658: FILE: drivers/net/bnxt/hsi_struct_def_dpdk.h:31204:
+ * that are input as auxillary data.
^ permalink raw reply [flat|nested] 38+ messages in thread
* Re: [dpdk-dev] [PATCH 09/22] net/bnxt: use dedicated cpr for async events
2019-07-18 3:36 ` [dpdk-dev] [PATCH 09/22] net/bnxt: use dedicated cpr for async events Ajit Khaparde
@ 2019-07-22 14:57 ` Ferruh Yigit
2019-07-22 15:06 ` Thomas Monjalon
2019-07-24 16:14 ` [dpdk-dev] [PATCH] " Lance Richardson
1 sibling, 1 reply; 38+ messages in thread
From: Ferruh Yigit @ 2019-07-22 14:57 UTC (permalink / raw)
To: Ajit Khaparde, dev; +Cc: Lance Richardson, Somnath Kotur, Thomas Monjalon
On 7/18/2019 4:36 AM, Ajit Khaparde wrote:
> From: Lance Richardson <lance.richardson@broadcom.com>
>
> This commit enables the creation of a dedicated completion
> ring for asynchronous event handling instead of handling these
> events on a receive completion ring.
>
> For the stingray platform and other platforms needing tighter
> control of resource utilization, we retain the ability to
> process async events on a receive completion ring. This behavior
> is controlled by a compile-time configuration variable.
>
> For Thor-based adapters, we use a dedicated NQ (notification
> queue) ring for async events (async events can't currently
> be received on a completion ring due to a firmware limitation).
>
> Rename "def_cp_ring" to "async_cp_ring" to better reflect its
> purpose (async event notifications) and to avoid confusion with
> VNIC default receive completion rings.
>
> Signed-off-by: Lance Richardson <lance.richardson@broadcom.com>
> Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
> Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
> ---
> config/common_base | 1 +
> config/defconfig_arm64-stingray-linuxapp-gcc | 3 +
> drivers/net/bnxt/bnxt.h | 10 +-
> drivers/net/bnxt/bnxt_ethdev.c | 13 +-
> drivers/net/bnxt/bnxt_hwrm.c | 16 +-
> drivers/net/bnxt/bnxt_hwrm.h | 2 +
> drivers/net/bnxt/bnxt_irq.c | 47 +++---
> drivers/net/bnxt/bnxt_ring.c | 145 ++++++++++++++++---
> drivers/net/bnxt/bnxt_ring.h | 3 +
> drivers/net/bnxt/bnxt_rxr.c | 2 +-
> drivers/net/bnxt/bnxt_rxtx_vec_sse.c | 2 +-
> 11 files changed, 195 insertions(+), 49 deletions(-)
>
> diff --git a/config/common_base b/config/common_base
> index 8ef75c203..487a9b811 100644
> --- a/config/common_base
> +++ b/config/common_base
> @@ -212,6 +212,7 @@ CONFIG_RTE_LIBRTE_BNX2X_DEBUG_PERIODIC=n
> # Compile burst-oriented Broadcom BNXT PMD driver
> #
> CONFIG_RTE_LIBRTE_BNXT_PMD=y
> +CONFIG_RTE_LIBRTE_BNXT_SHARED_ASYNC_RING=n
Normally we want to avoid introducing new config-time options as much as possible.
This is what happens when patches slip in at the last minute; Ajit, please try to
send patches before the proposal deadline next time.
And back to the config option: does it have to be a compile-time flag
(if so, why?), or can it be replaced by a devarg?
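For illustration, a devarg of the kind suggested could be parsed at probe time. This is a self-contained sketch, not the driver's code: the key name `shared_async_ring` is hypothetical (the thread never settles on one), and a real PMD would use the rte_kvargs API rather than hand-rolled string parsing:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Parse a hypothetical "shared_async_ring=<0|1>" key from a devargs
 * string such as "0000:01:00.0,shared_async_ring=1". Returns 1 when
 * the shared-ring behavior is requested, 0 otherwise (the default,
 * i.e. a dedicated async completion ring). */
static int parse_shared_async_ring(const char *devargs)
{
	const char *key = "shared_async_ring=";
	const char *p = devargs ? strstr(devargs, key) : NULL;

	if (p == NULL)
		return 0;
	return atoi(p + strlen(key)) != 0;
}
```

The point of the devargs approach is that the decision travels with the command line rather than with the binary, so a single distro build can work on all platforms.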
* Re: [dpdk-dev] [PATCH 09/22] net/bnxt: use dedicated cpr for async events
2019-07-22 14:57 ` Ferruh Yigit
@ 2019-07-22 15:06 ` Thomas Monjalon
2019-07-22 17:57 ` Lance Richardson
0 siblings, 1 reply; 38+ messages in thread
From: Thomas Monjalon @ 2019-07-22 15:06 UTC (permalink / raw)
To: Ajit Khaparde, Lance Richardson; +Cc: Ferruh Yigit, dev, Somnath Kotur
22/07/2019 16:57, Ferruh Yigit:
> On 7/18/2019 4:36 AM, Ajit Khaparde wrote:
> > From: Lance Richardson <lance.richardson@broadcom.com>
> > --- a/config/common_base
> > +++ b/config/common_base
> > @@ -212,6 +212,7 @@ CONFIG_RTE_LIBRTE_BNX2X_DEBUG_PERIODIC=n
> > # Compile burst-oriented Broadcom BNXT PMD driver
> > #
> > CONFIG_RTE_LIBRTE_BNXT_PMD=y
> > +CONFIG_RTE_LIBRTE_BNXT_SHARED_ASYNC_RING=n
>
> Normally we don't want to introduce new config time options as much as possible.
>
> This is what happens when patches slip in the last minute, please Ajit try to
> send patches before proposal next time.
>
> And back to the config option, is it something have to be a compile time flag
> (if so why?), can it be replaced by a devarg?
I confirm it is a nack.
I don't see any reason not to replace it with a runtime devargs.
* Re: [dpdk-dev] [PATCH 09/22] net/bnxt: use dedicated cpr for async events
2019-07-22 15:06 ` Thomas Monjalon
@ 2019-07-22 17:57 ` Lance Richardson
2019-07-22 18:34 ` Ferruh Yigit
0 siblings, 1 reply; 38+ messages in thread
From: Lance Richardson @ 2019-07-22 17:57 UTC (permalink / raw)
To: Thomas Monjalon; +Cc: Ajit Khaparde, Ferruh Yigit, dev, Somnath Kotur
On Mon, Jul 22, 2019 at 11:06 AM Thomas Monjalon <thomas@monjalon.net> wrote:
>
> 22/07/2019 16:57, Ferruh Yigit:
> > On 7/18/2019 4:36 AM, Ajit Khaparde wrote:
> > > From: Lance Richardson <lance.richardson@broadcom.com>
> > > --- a/config/common_base
> > > +++ b/config/common_base
> > > @@ -212,6 +212,7 @@ CONFIG_RTE_LIBRTE_BNX2X_DEBUG_PERIODIC=n
> > > # Compile burst-oriented Broadcom BNXT PMD driver
> > > #
> > > CONFIG_RTE_LIBRTE_BNXT_PMD=y
> > > +CONFIG_RTE_LIBRTE_BNXT_SHARED_ASYNC_RING=n
> >
> > Normally we don't want to introduce new config time options as much as possible.
> >
> > This is what happens when patches slip in the last minute, please Ajit try to
> > send patches before proposal next time.
> >
> > And back to the config option, is it something have to be a compile time flag
> > (if so why?), can it be replaced by a devarg?
>
> I confirm it is nack.
> I don't see any reason not to replace it with a runtime devargs.
>
>
This build-time configuration option was introduced because the
"shared async completion ring" configuration is needed for one specific
platform, arm64-stingray-linux-gcc. This configuration has a number of
drawbacks and should be avoided otherwise. Making it a run-time option
will add complexity and introduce the possibility that existing stingray
users will not know that they need to specify this option as of 19.08,
so it would be good if some other way could be found to handle this.
Other than perhaps using RTE_ARCH_ARM64 to select the shared completion
ring configuration (which would obviously affect all ARM64 users), are
there any other options that might be acceptable?
It may be worthwhile to note that the last several DPDK releases have
used the "shared completion ring" configuration.
Thanks and regards,
Lance
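For reference, the compile-time selection under discussion appears in the patch as a macro; a minimal sketch of how it resolves (the flag bit is taken from the diff later in this thread; the macro is adapted here to take a flags word instead of a `struct bnxt *` so the sketch is self-contained):

```c
#include <assert.h>
#include <stdint.h>

/* Stingray adapters are marked with a flag bit (from the patch). */
#define BNXT_FLAG_STINGRAY	(1 << 14)
#define BNXT_STINGRAY(flags)	((flags) & BNXT_FLAG_STINGRAY)

/* On arm64 builds, Stingray shares a receive completion ring for async
 * events (0 dedicated async CPRs); every other build/platform gets 1. */
#ifdef RTE_ARCH_ARM64
#define BNXT_NUM_ASYNC_CPR(flags)	(BNXT_STINGRAY(flags) ? 0 : 1)
#else
#define BNXT_NUM_ASYNC_CPR(flags)	1
#endif
```

Because `RTE_ARCH_ARM64` is fixed at build time, a single generic arm64 binary cannot adapt to Stingray at run time, which is the portability question this subthread turns on.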
* Re: [dpdk-dev] [PATCH 09/22] net/bnxt: use dedicated cpr for async events
2019-07-22 17:57 ` Lance Richardson
@ 2019-07-22 18:34 ` Ferruh Yigit
2019-07-23 8:04 ` Thomas Monjalon
2019-07-23 21:27 ` Lance Richardson
0 siblings, 2 replies; 38+ messages in thread
From: Ferruh Yigit @ 2019-07-22 18:34 UTC (permalink / raw)
To: Lance Richardson, Thomas Monjalon; +Cc: Ajit Khaparde, dev, Somnath Kotur
On 7/22/2019 6:57 PM, Lance Richardson wrote:
> On Mon, Jul 22, 2019 at 11:06 AM Thomas Monjalon <thomas@monjalon.net> wrote:
>>
>> 22/07/2019 16:57, Ferruh Yigit:
>>> On 7/18/2019 4:36 AM, Ajit Khaparde wrote:
>>>> From: Lance Richardson <lance.richardson@broadcom.com>
>>>> --- a/config/common_base
>>>> +++ b/config/common_base
>>>> @@ -212,6 +212,7 @@ CONFIG_RTE_LIBRTE_BNX2X_DEBUG_PERIODIC=n
>>>> # Compile burst-oriented Broadcom BNXT PMD driver
>>>> #
>>>> CONFIG_RTE_LIBRTE_BNXT_PMD=y
>>>> +CONFIG_RTE_LIBRTE_BNXT_SHARED_ASYNC_RING=n
>>>
>>> Normally we don't want to introduce new config time options as much as possible.
>>>
>>> This is what happens when patches slip in the last minute, please Ajit try to
>>> send patches before proposal next time.
>>>
>>> And back to the config option, is it something have to be a compile time flag
>>> (if so why?), can it be replaced by a devarg?
>>
>> I confirm it is nack.
>> I don't see any reason not to replace it with a runtime devargs.
>>
>>
>
> This build-time configuration option was introduced because the
> "shared async completion
> ring" configuration is needed for one specific platform,
> arm64-stingray-linux-gcc. This
> configuration has a number of drawbacks and should be avoided
> otherwise. Making it
> a run-time option will add complexity and introduce the possibility
> that existing stingray
> users will not know that they need to specify this option as of 19.08,
> so it would be good
> if some other way could be found to handle this.
I see this provides a configuration transparent to the user, but won't this kill
binary portability? If a distro packages an 'armv8a' target and distributes it,
the driver won't be usable on your 'stingray' platform.
But if this is added as a devargs, it can be usable even with distributed
binaries, by providing the proper devargs to the binary for a specific platform.
And documenting this devargs in the NIC guide can help users figure it out.
>
> Other than perhaps using RTE_ARCH_ARM64 to select the shared completion ring
> configuration (which would obviously affect all ARM64 users), are
> there any other options
> that might be acceptable?
>
> It's maybe worthwhile to note that the last several DPDK releases use
> the "shared completion
> ring" configuration.
>
I dropped this patch from next-net since the discussion is ongoing. I did my best
to resolve the conflicts, but please confirm the final result in next-net.
* Re: [dpdk-dev] [PATCH 09/22] net/bnxt: use dedicated cpr for async events
2019-07-22 18:34 ` Ferruh Yigit
@ 2019-07-23 8:04 ` Thomas Monjalon
2019-07-23 10:53 ` Lance Richardson
2019-07-23 21:27 ` Lance Richardson
1 sibling, 1 reply; 38+ messages in thread
From: Thomas Monjalon @ 2019-07-23 8:04 UTC (permalink / raw)
To: Lance Richardson; +Cc: Ferruh Yigit, Ajit Khaparde, dev, Somnath Kotur
22/07/2019 20:34, Ferruh Yigit:
> On 7/22/2019 6:57 PM, Lance Richardson wrote:
> > On Mon, Jul 22, 2019 at 11:06 AM Thomas Monjalon <thomas@monjalon.net> wrote:
> >> 22/07/2019 16:57, Ferruh Yigit:
> >>> On 7/18/2019 4:36 AM, Ajit Khaparde wrote:
> >>>> From: Lance Richardson <lance.richardson@broadcom.com>
> >>>> --- a/config/common_base
> >>>> +++ b/config/common_base
> >>>> @@ -212,6 +212,7 @@ CONFIG_RTE_LIBRTE_BNX2X_DEBUG_PERIODIC=n
> >>>> # Compile burst-oriented Broadcom BNXT PMD driver
> >>>> #
> >>>> CONFIG_RTE_LIBRTE_BNXT_PMD=y
> >>>> +CONFIG_RTE_LIBRTE_BNXT_SHARED_ASYNC_RING=n
> >>>
> >>> Normally we don't want to introduce new config time options as much as possible.
> >>>
> >>> This is what happens when patches slip in the last minute, please Ajit try to
> >>> send patches before proposal next time.
> >>>
> >>> And back to the config option, is it something have to be a compile time flag
> >>> (if so why?), can it be replaced by a devarg?
> >>
> >> I confirm it is nack.
> >> I don't see any reason not to replace it with a runtime devargs.
> >
> > This build-time configuration option was introduced because the
> > "shared async completion
> > ring" configuration is needed for one specific platform,
> > arm64-stingray-linux-gcc. This
> > configuration has a number of drawbacks and should be avoided
> > otherwise. Making it
> > a run-time option will add complexity and introduce the possibility
> > that existing stingray
> > users will not know that they need to specify this option as of 19.08,
> > so it would be good
> > if some other way could be found to handle this.
>
> I see this provides a configuration transparent to user, but won't this kill the
> binary portability? If a distro packages an 'armv8a' target and distributes it,
> this won't be usable in your 'stingray' platform for the driver.
>
> But if this is added as a devargs, it can be usable even with distributed
> binaries, by providing proper devargs to binary for a specific platform. And
> documenting this devargs in NIC guide can help users to figure it out.
>
> > Other than perhaps using RTE_ARCH_ARM64 to select the shared completion ring
> > configuration (which would obviously affect all ARM64 users), are
> > there any other options
> > that might be acceptable?
Can you detect the platform in the PMD?
* Re: [dpdk-dev] [PATCH 09/22] net/bnxt: use dedicated cpr for async events
2019-07-23 8:04 ` Thomas Monjalon
@ 2019-07-23 10:53 ` Lance Richardson
0 siblings, 0 replies; 38+ messages in thread
From: Lance Richardson @ 2019-07-23 10:53 UTC (permalink / raw)
To: Thomas Monjalon; +Cc: Ferruh Yigit, Ajit Khaparde, dev, Somnath Kotur
Yes, the platform can be detected at runtime from the PCI device ID.
This would be a better approach than adding a configuration parameter,
I think. I'll take a look.
Thanks,
Lance
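A minimal sketch of the runtime detection Lance describes: classify the adapter from its PCI device ID at probe time, then derive the async-ring policy from that instead of from an #ifdef. The helper names are illustrative, the policy is simplified, and the ID values in the usage below are placeholders — the real BROADCOM_DEV_ID_* constants are defined in the driver:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* True if the PCI device ID matches one of the known Stingray IDs. */
static bool bnxt_id_is_stingray(uint16_t dev_id,
				const uint16_t *stingray_ids, size_t n)
{
	for (size_t i = 0; i < n; i++)
		if (dev_id == stingray_ids[i])
			return true;
	return false;
}

/* With runtime detection, the async completion-ring count becomes a
 * plain function of the detected platform rather than a build flag:
 * Stingray shares a ring (0 dedicated), everything else gets 1. */
static int bnxt_num_async_cpr(bool stingray)
{
	return stingray ? 0 : 1;
}
```

This mirrors how the patch already sets BNXT_FLAG_STINGRAY from a list of device IDs in bnxt_dev_init(); the runtime approach just lets that flag drive the ring policy too.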
On Tue, Jul 23, 2019 at 4:04 AM Thomas Monjalon <thomas@monjalon.net> wrote:
>
> 22/07/2019 20:34, Ferruh Yigit:
> > On 7/22/2019 6:57 PM, Lance Richardson wrote:
> > > On Mon, Jul 22, 2019 at 11:06 AM Thomas Monjalon <thomas@monjalon.net> wrote:
> > >> 22/07/2019 16:57, Ferruh Yigit:
> > >>> On 7/18/2019 4:36 AM, Ajit Khaparde wrote:
> > >>>> From: Lance Richardson <lance.richardson@broadcom.com>
> > >>>> --- a/config/common_base
> > >>>> +++ b/config/common_base
> > >>>> @@ -212,6 +212,7 @@ CONFIG_RTE_LIBRTE_BNX2X_DEBUG_PERIODIC=n
> > >>>> # Compile burst-oriented Broadcom BNXT PMD driver
> > >>>> #
> > >>>> CONFIG_RTE_LIBRTE_BNXT_PMD=y
> > >>>> +CONFIG_RTE_LIBRTE_BNXT_SHARED_ASYNC_RING=n
> > >>>
> > >>> Normally we don't want to introduce new config time options as much as possible.
> > >>>
> > >>> This is what happens when patches slip in the last minute, please Ajit try to
> > >>> send patches before proposal next time.
> > >>>
> > >>> And back to the config option, is it something have to be a compile time flag
> > >>> (if so why?), can it be replaced by a devarg?
> > >>
> > >> I confirm it is nack.
> > >> I don't see any reason not to replace it with a runtime devargs.
> > >
> > > This build-time configuration option was introduced because the
> > > "shared async completion
> > > ring" configuration is needed for one specific platform,
> > > arm64-stingray-linux-gcc. This
> > > configuration has a number of drawbacks and should be avoided
> > > otherwise. Making it
> > > a run-time option will add complexity and introduce the possibility
> > > that existing stingray
> > > users will not know that they need to specify this option as of 19.08,
> > > so it would be good
> > > if some other way could be found to handle this.
> >
> > I see this provides a configuration transparent to user, but won't this kill the
> > binary portability? If a distro packages an 'armv8a' target and distributes it,
> > this won't be usable in your 'stingray' platform for the driver.
> >
> > But if this is added as a devargs, it can be usable even with distributed
> > binaries, by providing proper devargs to binary for a specific platform. And
> > documenting this devargs in NIC guide can help users to figure it out.
> >
> > > Other than perhaps using RTE_ARCH_ARM64 to select the shared completion ring
> > > configuration (which would obviously affect all ARM64 users), are
> > > there any other options
> > > that might be acceptable?
>
> Can you detect the platform in the PMD?
>
>
* Re: [dpdk-dev] [PATCH 09/22] net/bnxt: use dedicated cpr for async events
2019-07-22 18:34 ` Ferruh Yigit
2019-07-23 8:04 ` Thomas Monjalon
@ 2019-07-23 21:27 ` Lance Richardson
1 sibling, 0 replies; 38+ messages in thread
From: Lance Richardson @ 2019-07-23 21:27 UTC (permalink / raw)
To: Ferruh Yigit; +Cc: Thomas Monjalon, Ajit Khaparde, dev, Somnath Kotur
On Mon, Jul 22, 2019 at 2:34 PM Ferruh Yigit <ferruh.yigit@intel.com> wrote:
<snip>
> I dropped this patch from next-net since discussion is going on, I did my best
> to resolve the conflicts but please confirm the final result in next-net.
>
I did some basic testing of the current head of dpdk-next-net and diffed it
against the original version; everything looks OK.
I will follow up with another patch with the build-time configuration
removed.
Thanks,
Lance
* [dpdk-dev] [PATCH] net/bnxt: use dedicated cpr for async events
2019-07-18 3:36 ` [dpdk-dev] [PATCH 09/22] net/bnxt: use dedicated cpr for async events Ajit Khaparde
2019-07-22 14:57 ` Ferruh Yigit
@ 2019-07-24 16:14 ` Lance Richardson
2019-07-24 16:32 ` Lance Richardson
2019-07-24 16:49 ` [dpdk-dev] [[PATCH v2]] " Lance Richardson
1 sibling, 2 replies; 38+ messages in thread
From: Lance Richardson @ 2019-07-24 16:14 UTC (permalink / raw)
To: dev; +Cc: ajit.khaparde, somnath.kotur, ferruh.yigit, thomas, Lance Richardson
This commit enables the creation of a dedicated completion
ring for asynchronous event handling instead of handling these
events on a receive completion ring.
For the stingray platform and other platforms needing tighter
control of resource utilization, we retain the ability to
process async events on a receive completion ring. This behavior
is controlled by a compile-time configuration variable.
For Thor-based adapters, we use a dedicated NQ (notification
queue) ring for async events (async events can't currently
be received on a completion ring due to a firmware limitation).
Rename "def_cp_ring" to "async_cp_ring" to better reflect its
purpose (async event notifications) and to avoid confusion with
VNIC default receive completion rings.
Signed-off-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
drivers/net/bnxt/bnxt.h | 10 +-
drivers/net/bnxt/bnxt_ethdev.c | 19 +++-
drivers/net/bnxt/bnxt_hwrm.c | 16 +--
drivers/net/bnxt/bnxt_hwrm.h | 2 +
drivers/net/bnxt/bnxt_irq.c | 44 +++++---
drivers/net/bnxt/bnxt_ring.c | 145 +++++++++++++++++++++++----
drivers/net/bnxt/bnxt_ring.h | 3 +
drivers/net/bnxt/bnxt_rxr.c | 2 +-
drivers/net/bnxt/bnxt_rxtx_vec_sse.c | 2 +-
9 files changed, 197 insertions(+), 46 deletions(-)
diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h
index 93194bb52..0c9f994ea 100644
--- a/drivers/net/bnxt/bnxt.h
+++ b/drivers/net/bnxt/bnxt.h
@@ -33,6 +33,12 @@
#define BNXT_MAX_RX_RING_DESC 8192
#define BNXT_DB_SIZE 0x80
+#ifdef RTE_ARCH_ARM64
+#define BNXT_NUM_ASYNC_CPR(bp) (BNXT_STINGRAY(bp) ? 0 : 1)
+#else
+#define BNXT_NUM_ASYNC_CPR(bp) 1
+#endif
+
/* Chimp Communication Channel */
#define GRCPF_REG_CHIMP_CHANNEL_OFFSET 0x0
#define GRCPF_REG_CHIMP_COMM_TRIGGER 0x100
@@ -351,6 +357,7 @@ struct bnxt {
#define BNXT_FLAG_TRUSTED_VF_EN (1 << 11)
#define BNXT_FLAG_DFLT_VNIC_SET (1 << 12)
#define BNXT_FLAG_THOR_CHIP (1 << 13)
+#define BNXT_FLAG_STINGRAY (1 << 14)
#define BNXT_FLAG_EXT_STATS_SUPPORTED (1 << 29)
#define BNXT_FLAG_NEW_RM (1 << 30)
#define BNXT_FLAG_INIT_DONE (1U << 31)
@@ -363,6 +370,7 @@ struct bnxt {
#define BNXT_USE_KONG(bp) ((bp)->flags & BNXT_FLAG_KONG_MB_EN)
#define BNXT_VF_IS_TRUSTED(bp) ((bp)->flags & BNXT_FLAG_TRUSTED_VF_EN)
#define BNXT_CHIP_THOR(bp) ((bp)->flags & BNXT_FLAG_THOR_CHIP)
+#define BNXT_STINGRAY(bp) ((bp)->flags & BNXT_FLAG_STINGRAY)
#define BNXT_HAS_NQ(bp) BNXT_CHIP_THOR(bp)
#define BNXT_HAS_RING_GRPS(bp) (!BNXT_CHIP_THOR(bp))
@@ -387,7 +395,7 @@ struct bnxt {
uint16_t fw_tx_port_stats_ext_size;
/* Default completion ring */
- struct bnxt_cp_ring_info *def_cp_ring;
+ struct bnxt_cp_ring_info *async_cp_ring;
uint32_t max_ring_grps;
struct bnxt_ring_grp_info *grp_info;
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index ded970644..2a8b50296 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -200,12 +200,17 @@ static void bnxt_free_mem(struct bnxt *bp)
bnxt_free_stats(bp);
bnxt_free_tx_rings(bp);
bnxt_free_rx_rings(bp);
+ bnxt_free_async_cp_ring(bp);
}
static int bnxt_alloc_mem(struct bnxt *bp)
{
int rc;
+ rc = bnxt_alloc_async_ring_struct(bp);
+ if (rc)
+ goto alloc_mem_err;
+
rc = bnxt_alloc_vnic_mem(bp);
if (rc)
goto alloc_mem_err;
@@ -218,6 +223,10 @@ static int bnxt_alloc_mem(struct bnxt *bp)
if (rc)
goto alloc_mem_err;
+ rc = bnxt_alloc_async_cp_ring(bp);
+ if (rc)
+ goto alloc_mem_err;
+
return 0;
alloc_mem_err:
@@ -617,8 +626,8 @@ static int bnxt_dev_configure_op(struct rte_eth_dev *eth_dev)
/* Inherit new configurations */
if (eth_dev->data->nb_rx_queues > bp->max_rx_rings ||
eth_dev->data->nb_tx_queues > bp->max_tx_rings ||
- eth_dev->data->nb_rx_queues + eth_dev->data->nb_tx_queues >
- bp->max_cp_rings ||
+ eth_dev->data->nb_rx_queues + eth_dev->data->nb_tx_queues
+ + BNXT_NUM_ASYNC_CPR(bp) > bp->max_cp_rings ||
eth_dev->data->nb_rx_queues + eth_dev->data->nb_tx_queues >
bp->max_stat_ctx)
goto resource_error;
@@ -3802,6 +3811,12 @@ bnxt_dev_init(struct rte_eth_dev *eth_dev)
pci_dev->id.device_id == BROADCOM_DEV_ID_57500_VF2)
bp->flags |= BNXT_FLAG_THOR_CHIP;
+ if (pci_dev->id.device_id == BROADCOM_DEV_ID_58802 ||
+ pci_dev->id.device_id == BROADCOM_DEV_ID_58804 ||
+ pci_dev->id.device_id == BROADCOM_DEV_ID_58808 ||
+ pci_dev->id.device_id == BROADCOM_DEV_ID_58802_VF)
+ bp->flags |= BNXT_FLAG_STINGRAY;
+
rc = bnxt_init_board(eth_dev);
if (rc) {
PMD_DRV_LOG(ERR,
diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c
index 045ce4a9c..64377473a 100644
--- a/drivers/net/bnxt/bnxt_hwrm.c
+++ b/drivers/net/bnxt/bnxt_hwrm.c
@@ -737,9 +737,12 @@ int bnxt_hwrm_func_reserve_vf_resc(struct bnxt *bp, bool test)
req.num_tx_rings = rte_cpu_to_le_16(bp->tx_nr_rings);
req.num_rx_rings = rte_cpu_to_le_16(bp->rx_nr_rings *
AGG_RING_MULTIPLIER);
- req.num_stat_ctxs = rte_cpu_to_le_16(bp->rx_nr_rings + bp->tx_nr_rings);
+ req.num_stat_ctxs = rte_cpu_to_le_16(bp->rx_nr_rings +
+ bp->tx_nr_rings +
+ BNXT_NUM_ASYNC_CPR(bp));
req.num_cmpl_rings = rte_cpu_to_le_16(bp->rx_nr_rings +
- bp->tx_nr_rings);
+ bp->tx_nr_rings +
+ BNXT_NUM_ASYNC_CPR(bp));
req.num_vnics = rte_cpu_to_le_16(bp->rx_nr_rings);
if (bp->vf_resv_strategy ==
HWRM_FUNC_RESOURCE_QCAPS_OUTPUT_VF_RESV_STRATEGY_MINIMAL_STATIC) {
@@ -2073,7 +2076,7 @@ int bnxt_free_all_hwrm_ring_grps(struct bnxt *bp)
return rc;
}
-static void bnxt_free_nq_ring(struct bnxt *bp, struct bnxt_cp_ring_info *cpr)
+void bnxt_free_nq_ring(struct bnxt *bp, struct bnxt_cp_ring_info *cpr)
{
struct bnxt_ring *cp_ring = cpr->cp_ring_struct;
@@ -2083,9 +2086,10 @@ static void bnxt_free_nq_ring(struct bnxt *bp, struct bnxt_cp_ring_info *cpr)
memset(cpr->cp_desc_ring, 0, cpr->cp_ring_struct->ring_size *
sizeof(*cpr->cp_desc_ring));
cpr->cp_raw_cons = 0;
+ cpr->valid = 0;
}
-static void bnxt_free_cp_ring(struct bnxt *bp, struct bnxt_cp_ring_info *cpr)
+void bnxt_free_cp_ring(struct bnxt *bp, struct bnxt_cp_ring_info *cpr)
{
struct bnxt_ring *cp_ring = cpr->cp_ring_struct;
@@ -3212,7 +3216,7 @@ int bnxt_hwrm_func_cfg_def_cp(struct bnxt *bp)
req.enables = rte_cpu_to_le_32(
HWRM_FUNC_CFG_INPUT_ENABLES_ASYNC_EVENT_CR);
req.async_event_cr = rte_cpu_to_le_16(
- bp->def_cp_ring->cp_ring_struct->fw_ring_id);
+ bp->async_cp_ring->cp_ring_struct->fw_ring_id);
rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
HWRM_CHECK_RESULT();
@@ -3232,7 +3236,7 @@ int bnxt_hwrm_vf_func_cfg_def_cp(struct bnxt *bp)
req.enables = rte_cpu_to_le_32(
HWRM_FUNC_VF_CFG_INPUT_ENABLES_ASYNC_EVENT_CR);
req.async_event_cr = rte_cpu_to_le_16(
- bp->def_cp_ring->cp_ring_struct->fw_ring_id);
+ bp->async_cp_ring->cp_ring_struct->fw_ring_id);
rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
HWRM_CHECK_RESULT();
diff --git a/drivers/net/bnxt/bnxt_hwrm.h b/drivers/net/bnxt/bnxt_hwrm.h
index 37aaa1a9e..c882fc2a1 100644
--- a/drivers/net/bnxt/bnxt_hwrm.h
+++ b/drivers/net/bnxt/bnxt_hwrm.h
@@ -119,6 +119,8 @@ int bnxt_free_all_hwrm_stat_ctxs(struct bnxt *bp);
int bnxt_free_all_hwrm_rings(struct bnxt *bp);
int bnxt_free_all_hwrm_ring_grps(struct bnxt *bp);
int bnxt_alloc_all_hwrm_ring_grps(struct bnxt *bp);
+void bnxt_free_cp_ring(struct bnxt *bp, struct bnxt_cp_ring_info *cpr);
+void bnxt_free_nq_ring(struct bnxt *bp, struct bnxt_cp_ring_info *cpr);
int bnxt_set_hwrm_vnic_filters(struct bnxt *bp, struct bnxt_vnic_info *vnic);
int bnxt_clear_hwrm_vnic_filters(struct bnxt *bp, struct bnxt_vnic_info *vnic);
void bnxt_free_all_hwrm_resources(struct bnxt *bp);
diff --git a/drivers/net/bnxt/bnxt_irq.c b/drivers/net/bnxt/bnxt_irq.c
index 9016871a2..a22700a0d 100644
--- a/drivers/net/bnxt/bnxt_irq.c
+++ b/drivers/net/bnxt/bnxt_irq.c
@@ -22,7 +22,7 @@ static void bnxt_int_handler(void *param)
{
struct rte_eth_dev *eth_dev = (struct rte_eth_dev *)param;
struct bnxt *bp = eth_dev->data->dev_private;
- struct bnxt_cp_ring_info *cpr = bp->def_cp_ring;
+ struct bnxt_cp_ring_info *cpr = bp->async_cp_ring;
struct cmpl_base *cmp;
uint32_t raw_cons;
uint32_t cons;
@@ -43,10 +43,13 @@ static void bnxt_int_handler(void *param)
bnxt_event_hwrm_resp_handler(bp, cmp);
raw_cons = NEXT_RAW_CMP(raw_cons);
- };
+ }
cpr->cp_raw_cons = raw_cons;
- B_CP_DB_REARM(cpr, cpr->cp_raw_cons);
+ if (BNXT_HAS_NQ(bp))
+ bnxt_db_nq_arm(cpr);
+ else
+ B_CP_DB_REARM(cpr, cpr->cp_raw_cons);
}
int bnxt_free_int(struct bnxt *bp)
@@ -92,19 +95,35 @@ int bnxt_free_int(struct bnxt *bp)
void bnxt_disable_int(struct bnxt *bp)
{
- struct bnxt_cp_ring_info *cpr = bp->def_cp_ring;
+ struct bnxt_cp_ring_info *cpr = bp->async_cp_ring;
+
+ if (BNXT_NUM_ASYNC_CPR(bp) == 0)
+ return;
+
+ if (!cpr || !cpr->cp_db.doorbell)
+ return;
/* Only the default completion ring */
- if (cpr != NULL && cpr->cp_db.doorbell != NULL)
+ if (BNXT_HAS_NQ(bp))
+ bnxt_db_nq(cpr);
+ else
B_CP_DB_DISARM(cpr);
}
void bnxt_enable_int(struct bnxt *bp)
{
- struct bnxt_cp_ring_info *cpr = bp->def_cp_ring;
+ struct bnxt_cp_ring_info *cpr = bp->async_cp_ring;
+
+ if (BNXT_NUM_ASYNC_CPR(bp) == 0)
+ return;
+
+ if (!cpr || !cpr->cp_db.doorbell)
+ return;
/* Only the default completion ring */
- if (cpr != NULL && cpr->cp_db.doorbell != NULL)
+ if (BNXT_HAS_NQ(bp))
+ bnxt_db_nq_arm(cpr);
+ else
B_CP_DB_ARM(cpr);
}
@@ -112,7 +131,7 @@ int bnxt_setup_int(struct bnxt *bp)
{
uint16_t total_vecs;
const int len = sizeof(bp->irq_tbl[0].name);
- int i, rc = 0;
+ int i;
/* DPDK host only supports 1 MSI-X vector */
total_vecs = 1;
@@ -126,14 +145,11 @@ int bnxt_setup_int(struct bnxt *bp)
bp->irq_tbl[i].handler = bnxt_int_handler;
}
} else {
- rc = -ENOMEM;
- goto setup_exit;
+ PMD_DRV_LOG(ERR, "bnxt_irq_tbl setup failed\n");
+ return -ENOMEM;
}
- return 0;
-setup_exit:
- PMD_DRV_LOG(ERR, "bnxt_irq_tbl setup failed\n");
- return rc;
+ return 0;
}
int bnxt_request_int(struct bnxt *bp)
diff --git a/drivers/net/bnxt/bnxt_ring.c b/drivers/net/bnxt/bnxt_ring.c
index a9952e02c..a5447c04c 100644
--- a/drivers/net/bnxt/bnxt_ring.c
+++ b/drivers/net/bnxt/bnxt_ring.c
@@ -5,6 +5,7 @@
#include <rte_bitmap.h>
#include <rte_memzone.h>
+#include <rte_malloc.h>
#include <unistd.h>
#include "bnxt.h"
@@ -369,6 +370,7 @@ static int bnxt_alloc_cmpl_ring(struct bnxt *bp, int queue_index,
{
struct bnxt_ring *cp_ring = cpr->cp_ring_struct;
uint32_t nq_ring_id = HWRM_NA_SIGNATURE;
+ int cp_ring_index = queue_index + BNXT_NUM_ASYNC_CPR(bp);
uint8_t ring_type;
int rc = 0;
@@ -383,13 +385,13 @@ static int bnxt_alloc_cmpl_ring(struct bnxt *bp, int queue_index,
}
}
- rc = bnxt_hwrm_ring_alloc(bp, cp_ring, ring_type, queue_index,
+ rc = bnxt_hwrm_ring_alloc(bp, cp_ring, ring_type, cp_ring_index,
HWRM_NA_SIGNATURE, nq_ring_id);
if (rc)
return rc;
cpr->cp_cons = 0;
- bnxt_set_db(bp, &cpr->cp_db, ring_type, queue_index,
+ bnxt_set_db(bp, &cpr->cp_db, ring_type, cp_ring_index,
cp_ring->fw_ring_id);
bnxt_db_cq(cpr);
@@ -400,6 +402,7 @@ static int bnxt_alloc_nq_ring(struct bnxt *bp, int queue_index,
struct bnxt_cp_ring_info *nqr)
{
struct bnxt_ring *nq_ring = nqr->cp_ring_struct;
+ int nq_ring_index = queue_index + BNXT_NUM_ASYNC_CPR(bp);
uint8_t ring_type;
int rc = 0;
@@ -408,12 +411,12 @@ static int bnxt_alloc_nq_ring(struct bnxt *bp, int queue_index,
ring_type = HWRM_RING_ALLOC_INPUT_RING_TYPE_NQ;
- rc = bnxt_hwrm_ring_alloc(bp, nq_ring, ring_type, queue_index,
+ rc = bnxt_hwrm_ring_alloc(bp, nq_ring, ring_type, nq_ring_index,
HWRM_NA_SIGNATURE, HWRM_NA_SIGNATURE);
if (rc)
return rc;
- bnxt_set_db(bp, &nqr->cp_db, ring_type, queue_index,
+ bnxt_set_db(bp, &nqr->cp_db, ring_type, nq_ring_index,
nq_ring->fw_ring_id);
bnxt_db_nq(nqr);
@@ -490,14 +493,16 @@ int bnxt_alloc_hwrm_rx_ring(struct bnxt *bp, int queue_index)
struct bnxt_ring *cp_ring = cpr->cp_ring_struct;
struct bnxt_cp_ring_info *nqr = rxq->nq_ring;
struct bnxt_rx_ring_info *rxr = rxq->rx_ring;
- int rc = 0;
+ int rc;
if (BNXT_HAS_NQ(bp)) {
- if (bnxt_alloc_nq_ring(bp, queue_index, nqr))
+ rc = bnxt_alloc_nq_ring(bp, queue_index, nqr);
+ if (rc)
goto err_out;
}
- if (bnxt_alloc_cmpl_ring(bp, queue_index, cpr, nqr))
+ rc = bnxt_alloc_cmpl_ring(bp, queue_index, cpr, nqr);
+ if (rc)
goto err_out;
if (BNXT_HAS_RING_GRPS(bp)) {
@@ -505,22 +510,24 @@ int bnxt_alloc_hwrm_rx_ring(struct bnxt *bp, int queue_index)
bp->grp_info[queue_index].cp_fw_ring_id = cp_ring->fw_ring_id;
}
- if (!queue_index) {
+ if (!BNXT_NUM_ASYNC_CPR(bp) && !queue_index) {
/*
- * In order to save completion resources, use the first
- * completion ring from PF or VF as the default completion ring
- * for async event and HWRM forward response handling.
+ * If a dedicated async event completion ring is not enabled,
+ * use the first completion ring from PF or VF as the default
+ * completion ring for async event handling.
*/
- bp->def_cp_ring = cpr;
+ bp->async_cp_ring = cpr;
rc = bnxt_hwrm_set_async_event_cr(bp);
if (rc)
goto err_out;
}
- if (bnxt_alloc_rx_ring(bp, queue_index))
+ rc = bnxt_alloc_rx_ring(bp, queue_index);
+ if (rc)
goto err_out;
- if (bnxt_alloc_rx_agg_ring(bp, queue_index))
+ rc = bnxt_alloc_rx_agg_ring(bp, queue_index);
+ if (rc)
goto err_out;
rxq->rx_buf_use_size = BNXT_MAX_MTU + RTE_ETHER_HDR_LEN +
@@ -545,6 +552,9 @@ int bnxt_alloc_hwrm_rx_ring(struct bnxt *bp, int queue_index)
bp->eth_dev->data->rx_queue_state[queue_index]);
err_out:
+ PMD_DRV_LOG(ERR,
+ "Failed to allocate receive queue %d, rc %d.\n",
+ queue_index, rc);
return rc;
}
@@ -583,15 +593,13 @@ int bnxt_alloc_hwrm_rings(struct bnxt *bp)
}
bnxt_hwrm_set_ring_coal(bp, &coal, cp_ring->fw_ring_id);
-
- if (!i) {
+ if (!BNXT_NUM_ASYNC_CPR(bp) && !i) {
/*
- * In order to save completion resource, use the first
- * completion ring from PF or VF as the default
- * completion ring for async event & HWRM
- * forward response handling.
+ * If a dedicated async event completion ring is not
+ * enabled, use the first completion ring as the default
+ * completion ring for async event handling.
*/
- bp->def_cp_ring = cpr;
+ bp->async_cp_ring = cpr;
rc = bnxt_hwrm_set_async_event_cr(bp);
if (rc)
goto err_out;
@@ -652,3 +660,98 @@ int bnxt_alloc_hwrm_rings(struct bnxt *bp)
err_out:
return rc;
}
+
+/* Allocate dedicated async completion ring. */
+int bnxt_alloc_async_cp_ring(struct bnxt *bp)
+{
+ struct bnxt_cp_ring_info *cpr = bp->async_cp_ring;
+ struct bnxt_ring *cp_ring = cpr->cp_ring_struct;
+ uint8_t ring_type;
+ int rc;
+
+ if (BNXT_NUM_ASYNC_CPR(bp) == 0)
+ return 0;
+
+ if (BNXT_HAS_NQ(bp))
+ ring_type = HWRM_RING_ALLOC_INPUT_RING_TYPE_NQ;
+ else
+ ring_type = HWRM_RING_ALLOC_INPUT_RING_TYPE_L2_CMPL;
+
+ rc = bnxt_hwrm_ring_alloc(bp, cp_ring, ring_type, 0,
+ HWRM_NA_SIGNATURE, HWRM_NA_SIGNATURE);
+
+ if (rc)
+ return rc;
+
+ cpr->cp_cons = 0;
+ cpr->valid = 0;
+ bnxt_set_db(bp, &cpr->cp_db, ring_type, 0,
+ cp_ring->fw_ring_id);
+
+ if (BNXT_HAS_NQ(bp))
+ bnxt_db_nq(cpr);
+ else
+ bnxt_db_cq(cpr);
+
+ return bnxt_hwrm_set_async_event_cr(bp);
+}
+
+/* Free dedicated async completion ring. */
+void bnxt_free_async_cp_ring(struct bnxt *bp)
+{
+ struct bnxt_cp_ring_info *cpr = bp->async_cp_ring;
+
+ if (BNXT_NUM_ASYNC_CPR(bp) == 0 || cpr == NULL)
+ return;
+
+ if (BNXT_HAS_NQ(bp))
+ bnxt_free_nq_ring(bp, cpr);
+ else
+ bnxt_free_cp_ring(bp, cpr);
+
+ bnxt_free_ring(cpr->cp_ring_struct);
+ rte_free(cpr->cp_ring_struct);
+ cpr->cp_ring_struct = NULL;
+ rte_free(cpr);
+ bp->async_cp_ring = NULL;
+}
+
+int bnxt_alloc_async_ring_struct(struct bnxt *bp)
+{
+ struct bnxt_cp_ring_info *cpr = NULL;
+ struct bnxt_ring *ring = NULL;
+ unsigned int socket_id;
+
+ if (BNXT_NUM_ASYNC_CPR(bp) == 0)
+ return 0;
+
+ socket_id = rte_lcore_to_socket_id(rte_get_master_lcore());
+
+ cpr = rte_zmalloc_socket("cpr",
+ sizeof(struct bnxt_cp_ring_info),
+ RTE_CACHE_LINE_SIZE, socket_id);
+ if (cpr == NULL)
+ return -ENOMEM;
+
+ ring = rte_zmalloc_socket("bnxt_cp_ring_struct",
+ sizeof(struct bnxt_ring),
+ RTE_CACHE_LINE_SIZE, socket_id);
+ if (ring == NULL) {
+ rte_free(cpr);
+ return -ENOMEM;
+ }
+
+ ring->bd = (void *)cpr->cp_desc_ring;
+ ring->bd_dma = cpr->cp_desc_mapping;
+ ring->ring_size = rte_align32pow2(DEFAULT_CP_RING_SIZE);
+ ring->ring_mask = ring->ring_size - 1;
+ ring->vmem_size = 0;
+ ring->vmem = NULL;
+
+ bp->async_cp_ring = cpr;
+ cpr->cp_ring_struct = ring;
+
+ return bnxt_alloc_rings(bp, 0, NULL, NULL,
+ bp->async_cp_ring, NULL,
+ "def_cp");
+}
diff --git a/drivers/net/bnxt/bnxt_ring.h b/drivers/net/bnxt/bnxt_ring.h
index e5cef3a1d..04c7b04b8 100644
--- a/drivers/net/bnxt/bnxt_ring.h
+++ b/drivers/net/bnxt/bnxt_ring.h
@@ -75,6 +75,9 @@ int bnxt_alloc_rings(struct bnxt *bp, uint16_t qidx,
const char *suffix);
int bnxt_alloc_hwrm_rx_ring(struct bnxt *bp, int queue_index);
int bnxt_alloc_hwrm_rings(struct bnxt *bp);
+int bnxt_alloc_async_cp_ring(struct bnxt *bp);
+void bnxt_free_async_cp_ring(struct bnxt *bp);
+int bnxt_alloc_async_ring_struct(struct bnxt *bp);
static inline void bnxt_db_write(struct bnxt_db_info *db, uint32_t idx)
{
diff --git a/drivers/net/bnxt/bnxt_rxr.c b/drivers/net/bnxt/bnxt_rxr.c
index 54a2cf5fd..185a0e376 100644
--- a/drivers/net/bnxt/bnxt_rxr.c
+++ b/drivers/net/bnxt/bnxt_rxr.c
@@ -564,7 +564,7 @@ uint16_t bnxt_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
nb_rx_pkts++;
if (rc == -EBUSY) /* partial completion */
break;
- } else {
+ } else if (!BNXT_NUM_ASYNC_CPR(rxq->bp)) {
evt =
bnxt_event_hwrm_resp_handler(rxq->bp,
(struct cmpl_base *)rxcmp);
diff --git a/drivers/net/bnxt/bnxt_rxtx_vec_sse.c b/drivers/net/bnxt/bnxt_rxtx_vec_sse.c
index c358506f8..adc5020ec 100644
--- a/drivers/net/bnxt/bnxt_rxtx_vec_sse.c
+++ b/drivers/net/bnxt/bnxt_rxtx_vec_sse.c
@@ -257,7 +257,7 @@ bnxt_recv_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
mbuf->packet_type = bnxt_parse_pkt_type(rxcmp, rxcmp1);
rx_pkts[nb_rx_pkts++] = mbuf;
- } else {
+ } else if (!BNXT_NUM_ASYNC_CPR(rxq->bp)) {
evt =
bnxt_event_hwrm_resp_handler(rxq->bp,
(struct cmpl_base *)rxcmp);
--
2.17.1
^ permalink raw reply [flat|nested] 38+ messages in thread
* Re: [dpdk-dev] [PATCH] net/bnxt: use dedicated cpr for async events
2019-07-24 16:14 ` [dpdk-dev] [PATCH] " Lance Richardson
@ 2019-07-24 16:32 ` Lance Richardson
2019-07-24 16:49 ` [dpdk-dev] [[PATCH v2]] " Lance Richardson
1 sibling, 0 replies; 38+ messages in thread
From: Lance Richardson @ 2019-07-24 16:32 UTC (permalink / raw)
To: dev; +Cc: Ajit Khaparde, Somnath Kotur, Ferruh Yigit, Thomas Monjalon
On Wed, Jul 24, 2019 at 12:14 PM Lance Richardson
<lance.richardson@broadcom.com> wrote:
> process async events on a receive completion ring. This behavior
> is controlled by a compile-time configuration variable.
I will follow up with a v2 to correct the above statement in the commit log and
to squash with these follow-up patches:
http://patchwork.dpdk.org/patch/56799/
http://patchwork.dpdk.org/patch/56800/
* [dpdk-dev] [[PATCH v2]] net/bnxt: use dedicated cpr for async events
2019-07-24 16:14 ` [dpdk-dev] [PATCH] " Lance Richardson
2019-07-24 16:32 ` Lance Richardson
@ 2019-07-24 16:49 ` Lance Richardson
2019-07-25 9:54 ` Ferruh Yigit
1 sibling, 1 reply; 38+ messages in thread
From: Lance Richardson @ 2019-07-24 16:49 UTC (permalink / raw)
To: dev
Cc: ajit.khaparde, somnath.kotur, ferruh.yigit, thomas,
Lance Richardson, Kalesh AP
This commit enables the creation of a dedicated completion
ring for asynchronous event handling instead of handling these
events on a receive completion ring.
For the stingray platform and other platforms needing tighter
control of resource utilization, we retain the ability to
process async events on a receive completion ring.
For Thor-based adapters, we use a dedicated NQ (notification
queue) ring for async events (async events can't currently
be received on a completion ring due to a firmware limitation).
Rename "def_cp_ring" to "async_cp_ring" to better reflect its
purpose (async event notifications) and to avoid confusion with
VNIC default receive completion rings.
Allow rxq 0 to be stopped when not being used for async events.
Signed-off-by: Lance Richardson <lance.richardson@broadcom.com>
Signed-off-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
v2:
- Removed incorrect statement from commit log.
- Squashed with two follow-up patches referring to this patch.
drivers/net/bnxt/bnxt.h | 10 +-
drivers/net/bnxt/bnxt_ethdev.c | 19 +++-
drivers/net/bnxt/bnxt_hwrm.c | 16 +--
drivers/net/bnxt/bnxt_hwrm.h | 2 +
drivers/net/bnxt/bnxt_irq.c | 44 +++++---
drivers/net/bnxt/bnxt_ring.c | 151 ++++++++++++++++++++++-----
drivers/net/bnxt/bnxt_ring.h | 3 +
drivers/net/bnxt/bnxt_rxq.c | 24 +++--
drivers/net/bnxt/bnxt_rxr.c | 2 +-
drivers/net/bnxt/bnxt_rxtx_vec_sse.c | 2 +-
10 files changed, 217 insertions(+), 56 deletions(-)
diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h
index 93194bb52..0c9f994ea 100644
--- a/drivers/net/bnxt/bnxt.h
+++ b/drivers/net/bnxt/bnxt.h
@@ -33,6 +33,12 @@
#define BNXT_MAX_RX_RING_DESC 8192
#define BNXT_DB_SIZE 0x80
+#ifdef RTE_ARCH_ARM64
+#define BNXT_NUM_ASYNC_CPR(bp) (BNXT_STINGRAY(bp) ? 0 : 1)
+#else
+#define BNXT_NUM_ASYNC_CPR(bp) 1
+#endif
+
/* Chimp Communication Channel */
#define GRCPF_REG_CHIMP_CHANNEL_OFFSET 0x0
#define GRCPF_REG_CHIMP_COMM_TRIGGER 0x100
@@ -351,6 +357,7 @@ struct bnxt {
#define BNXT_FLAG_TRUSTED_VF_EN (1 << 11)
#define BNXT_FLAG_DFLT_VNIC_SET (1 << 12)
#define BNXT_FLAG_THOR_CHIP (1 << 13)
+#define BNXT_FLAG_STINGRAY (1 << 14)
#define BNXT_FLAG_EXT_STATS_SUPPORTED (1 << 29)
#define BNXT_FLAG_NEW_RM (1 << 30)
#define BNXT_FLAG_INIT_DONE (1U << 31)
@@ -363,6 +370,7 @@ struct bnxt {
#define BNXT_USE_KONG(bp) ((bp)->flags & BNXT_FLAG_KONG_MB_EN)
#define BNXT_VF_IS_TRUSTED(bp) ((bp)->flags & BNXT_FLAG_TRUSTED_VF_EN)
#define BNXT_CHIP_THOR(bp) ((bp)->flags & BNXT_FLAG_THOR_CHIP)
+#define BNXT_STINGRAY(bp) ((bp)->flags & BNXT_FLAG_STINGRAY)
#define BNXT_HAS_NQ(bp) BNXT_CHIP_THOR(bp)
#define BNXT_HAS_RING_GRPS(bp) (!BNXT_CHIP_THOR(bp))
@@ -387,7 +395,7 @@ struct bnxt {
uint16_t fw_tx_port_stats_ext_size;
/* Default completion ring */
- struct bnxt_cp_ring_info *def_cp_ring;
+ struct bnxt_cp_ring_info *async_cp_ring;
uint32_t max_ring_grps;
struct bnxt_ring_grp_info *grp_info;
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index ded970644..2a8b50296 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -200,12 +200,17 @@ static void bnxt_free_mem(struct bnxt *bp)
bnxt_free_stats(bp);
bnxt_free_tx_rings(bp);
bnxt_free_rx_rings(bp);
+ bnxt_free_async_cp_ring(bp);
}
static int bnxt_alloc_mem(struct bnxt *bp)
{
int rc;
+ rc = bnxt_alloc_async_ring_struct(bp);
+ if (rc)
+ goto alloc_mem_err;
+
rc = bnxt_alloc_vnic_mem(bp);
if (rc)
goto alloc_mem_err;
@@ -218,6 +223,10 @@ static int bnxt_alloc_mem(struct bnxt *bp)
if (rc)
goto alloc_mem_err;
+ rc = bnxt_alloc_async_cp_ring(bp);
+ if (rc)
+ goto alloc_mem_err;
+
return 0;
alloc_mem_err:
@@ -617,8 +626,8 @@ static int bnxt_dev_configure_op(struct rte_eth_dev *eth_dev)
/* Inherit new configurations */
if (eth_dev->data->nb_rx_queues > bp->max_rx_rings ||
eth_dev->data->nb_tx_queues > bp->max_tx_rings ||
- eth_dev->data->nb_rx_queues + eth_dev->data->nb_tx_queues >
- bp->max_cp_rings ||
+ eth_dev->data->nb_rx_queues + eth_dev->data->nb_tx_queues
+ + BNXT_NUM_ASYNC_CPR(bp) > bp->max_cp_rings ||
eth_dev->data->nb_rx_queues + eth_dev->data->nb_tx_queues >
bp->max_stat_ctx)
goto resource_error;
@@ -3802,6 +3811,12 @@ bnxt_dev_init(struct rte_eth_dev *eth_dev)
pci_dev->id.device_id == BROADCOM_DEV_ID_57500_VF2)
bp->flags |= BNXT_FLAG_THOR_CHIP;
+ if (pci_dev->id.device_id == BROADCOM_DEV_ID_58802 ||
+ pci_dev->id.device_id == BROADCOM_DEV_ID_58804 ||
+ pci_dev->id.device_id == BROADCOM_DEV_ID_58808 ||
+ pci_dev->id.device_id == BROADCOM_DEV_ID_58802_VF)
+ bp->flags |= BNXT_FLAG_STINGRAY;
+
rc = bnxt_init_board(eth_dev);
if (rc) {
PMD_DRV_LOG(ERR,
diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c
index 045ce4a9c..64377473a 100644
--- a/drivers/net/bnxt/bnxt_hwrm.c
+++ b/drivers/net/bnxt/bnxt_hwrm.c
@@ -737,9 +737,12 @@ int bnxt_hwrm_func_reserve_vf_resc(struct bnxt *bp, bool test)
req.num_tx_rings = rte_cpu_to_le_16(bp->tx_nr_rings);
req.num_rx_rings = rte_cpu_to_le_16(bp->rx_nr_rings *
AGG_RING_MULTIPLIER);
- req.num_stat_ctxs = rte_cpu_to_le_16(bp->rx_nr_rings + bp->tx_nr_rings);
+ req.num_stat_ctxs = rte_cpu_to_le_16(bp->rx_nr_rings +
+ bp->tx_nr_rings +
+ BNXT_NUM_ASYNC_CPR(bp));
req.num_cmpl_rings = rte_cpu_to_le_16(bp->rx_nr_rings +
- bp->tx_nr_rings);
+ bp->tx_nr_rings +
+ BNXT_NUM_ASYNC_CPR(bp));
req.num_vnics = rte_cpu_to_le_16(bp->rx_nr_rings);
if (bp->vf_resv_strategy ==
HWRM_FUNC_RESOURCE_QCAPS_OUTPUT_VF_RESV_STRATEGY_MINIMAL_STATIC) {
@@ -2073,7 +2076,7 @@ int bnxt_free_all_hwrm_ring_grps(struct bnxt *bp)
return rc;
}
-static void bnxt_free_nq_ring(struct bnxt *bp, struct bnxt_cp_ring_info *cpr)
+void bnxt_free_nq_ring(struct bnxt *bp, struct bnxt_cp_ring_info *cpr)
{
struct bnxt_ring *cp_ring = cpr->cp_ring_struct;
@@ -2083,9 +2086,10 @@ static void bnxt_free_nq_ring(struct bnxt *bp, struct bnxt_cp_ring_info *cpr)
memset(cpr->cp_desc_ring, 0, cpr->cp_ring_struct->ring_size *
sizeof(*cpr->cp_desc_ring));
cpr->cp_raw_cons = 0;
+ cpr->valid = 0;
}
-static void bnxt_free_cp_ring(struct bnxt *bp, struct bnxt_cp_ring_info *cpr)
+void bnxt_free_cp_ring(struct bnxt *bp, struct bnxt_cp_ring_info *cpr)
{
struct bnxt_ring *cp_ring = cpr->cp_ring_struct;
@@ -3212,7 +3216,7 @@ int bnxt_hwrm_func_cfg_def_cp(struct bnxt *bp)
req.enables = rte_cpu_to_le_32(
HWRM_FUNC_CFG_INPUT_ENABLES_ASYNC_EVENT_CR);
req.async_event_cr = rte_cpu_to_le_16(
- bp->def_cp_ring->cp_ring_struct->fw_ring_id);
+ bp->async_cp_ring->cp_ring_struct->fw_ring_id);
rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
HWRM_CHECK_RESULT();
@@ -3232,7 +3236,7 @@ int bnxt_hwrm_vf_func_cfg_def_cp(struct bnxt *bp)
req.enables = rte_cpu_to_le_32(
HWRM_FUNC_VF_CFG_INPUT_ENABLES_ASYNC_EVENT_CR);
req.async_event_cr = rte_cpu_to_le_16(
- bp->def_cp_ring->cp_ring_struct->fw_ring_id);
+ bp->async_cp_ring->cp_ring_struct->fw_ring_id);
rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
HWRM_CHECK_RESULT();
diff --git a/drivers/net/bnxt/bnxt_hwrm.h b/drivers/net/bnxt/bnxt_hwrm.h
index 37aaa1a9e..c882fc2a1 100644
--- a/drivers/net/bnxt/bnxt_hwrm.h
+++ b/drivers/net/bnxt/bnxt_hwrm.h
@@ -119,6 +119,8 @@ int bnxt_free_all_hwrm_stat_ctxs(struct bnxt *bp);
int bnxt_free_all_hwrm_rings(struct bnxt *bp);
int bnxt_free_all_hwrm_ring_grps(struct bnxt *bp);
int bnxt_alloc_all_hwrm_ring_grps(struct bnxt *bp);
+void bnxt_free_cp_ring(struct bnxt *bp, struct bnxt_cp_ring_info *cpr);
+void bnxt_free_nq_ring(struct bnxt *bp, struct bnxt_cp_ring_info *cpr);
int bnxt_set_hwrm_vnic_filters(struct bnxt *bp, struct bnxt_vnic_info *vnic);
int bnxt_clear_hwrm_vnic_filters(struct bnxt *bp, struct bnxt_vnic_info *vnic);
void bnxt_free_all_hwrm_resources(struct bnxt *bp);
diff --git a/drivers/net/bnxt/bnxt_irq.c b/drivers/net/bnxt/bnxt_irq.c
index 9016871a2..a22700a0d 100644
--- a/drivers/net/bnxt/bnxt_irq.c
+++ b/drivers/net/bnxt/bnxt_irq.c
@@ -22,7 +22,7 @@ static void bnxt_int_handler(void *param)
{
struct rte_eth_dev *eth_dev = (struct rte_eth_dev *)param;
struct bnxt *bp = eth_dev->data->dev_private;
- struct bnxt_cp_ring_info *cpr = bp->def_cp_ring;
+ struct bnxt_cp_ring_info *cpr = bp->async_cp_ring;
struct cmpl_base *cmp;
uint32_t raw_cons;
uint32_t cons;
@@ -43,10 +43,13 @@ static void bnxt_int_handler(void *param)
bnxt_event_hwrm_resp_handler(bp, cmp);
raw_cons = NEXT_RAW_CMP(raw_cons);
- };
+ }
cpr->cp_raw_cons = raw_cons;
- B_CP_DB_REARM(cpr, cpr->cp_raw_cons);
+ if (BNXT_HAS_NQ(bp))
+ bnxt_db_nq_arm(cpr);
+ else
+ B_CP_DB_REARM(cpr, cpr->cp_raw_cons);
}
int bnxt_free_int(struct bnxt *bp)
@@ -92,19 +95,35 @@ int bnxt_free_int(struct bnxt *bp)
void bnxt_disable_int(struct bnxt *bp)
{
- struct bnxt_cp_ring_info *cpr = bp->def_cp_ring;
+ struct bnxt_cp_ring_info *cpr = bp->async_cp_ring;
+
+ if (BNXT_NUM_ASYNC_CPR(bp) == 0)
+ return;
+
+ if (!cpr || !cpr->cp_db.doorbell)
+ return;
/* Only the default completion ring */
- if (cpr != NULL && cpr->cp_db.doorbell != NULL)
+ if (BNXT_HAS_NQ(bp))
+ bnxt_db_nq(cpr);
+ else
B_CP_DB_DISARM(cpr);
}
void bnxt_enable_int(struct bnxt *bp)
{
- struct bnxt_cp_ring_info *cpr = bp->def_cp_ring;
+ struct bnxt_cp_ring_info *cpr = bp->async_cp_ring;
+
+ if (BNXT_NUM_ASYNC_CPR(bp) == 0)
+ return;
+
+ if (!cpr || !cpr->cp_db.doorbell)
+ return;
/* Only the default completion ring */
- if (cpr != NULL && cpr->cp_db.doorbell != NULL)
+ if (BNXT_HAS_NQ(bp))
+ bnxt_db_nq_arm(cpr);
+ else
B_CP_DB_ARM(cpr);
}
@@ -112,7 +131,7 @@ int bnxt_setup_int(struct bnxt *bp)
{
uint16_t total_vecs;
const int len = sizeof(bp->irq_tbl[0].name);
- int i, rc = 0;
+ int i;
/* DPDK host only supports 1 MSI-X vector */
total_vecs = 1;
@@ -126,14 +145,11 @@ int bnxt_setup_int(struct bnxt *bp)
bp->irq_tbl[i].handler = bnxt_int_handler;
}
} else {
- rc = -ENOMEM;
- goto setup_exit;
+ PMD_DRV_LOG(ERR, "bnxt_irq_tbl setup failed\n");
+ return -ENOMEM;
}
- return 0;
-setup_exit:
- PMD_DRV_LOG(ERR, "bnxt_irq_tbl setup failed\n");
- return rc;
+ return 0;
}
int bnxt_request_int(struct bnxt *bp)
diff --git a/drivers/net/bnxt/bnxt_ring.c b/drivers/net/bnxt/bnxt_ring.c
index a9952e02c..be15b4bd1 100644
--- a/drivers/net/bnxt/bnxt_ring.c
+++ b/drivers/net/bnxt/bnxt_ring.c
@@ -5,6 +5,7 @@
#include <rte_bitmap.h>
#include <rte_memzone.h>
+#include <rte_malloc.h>
#include <unistd.h>
#include "bnxt.h"
@@ -369,6 +370,7 @@ static int bnxt_alloc_cmpl_ring(struct bnxt *bp, int queue_index,
{
struct bnxt_ring *cp_ring = cpr->cp_ring_struct;
uint32_t nq_ring_id = HWRM_NA_SIGNATURE;
+ int cp_ring_index = queue_index + BNXT_NUM_ASYNC_CPR(bp);
uint8_t ring_type;
int rc = 0;
@@ -383,13 +385,13 @@ static int bnxt_alloc_cmpl_ring(struct bnxt *bp, int queue_index,
}
}
- rc = bnxt_hwrm_ring_alloc(bp, cp_ring, ring_type, queue_index,
+ rc = bnxt_hwrm_ring_alloc(bp, cp_ring, ring_type, cp_ring_index,
HWRM_NA_SIGNATURE, nq_ring_id);
if (rc)
return rc;
cpr->cp_cons = 0;
- bnxt_set_db(bp, &cpr->cp_db, ring_type, queue_index,
+ bnxt_set_db(bp, &cpr->cp_db, ring_type, cp_ring_index,
cp_ring->fw_ring_id);
bnxt_db_cq(cpr);
@@ -400,6 +402,7 @@ static int bnxt_alloc_nq_ring(struct bnxt *bp, int queue_index,
struct bnxt_cp_ring_info *nqr)
{
struct bnxt_ring *nq_ring = nqr->cp_ring_struct;
+ int nq_ring_index = queue_index + BNXT_NUM_ASYNC_CPR(bp);
uint8_t ring_type;
int rc = 0;
@@ -408,12 +411,12 @@ static int bnxt_alloc_nq_ring(struct bnxt *bp, int queue_index,
ring_type = HWRM_RING_ALLOC_INPUT_RING_TYPE_NQ;
- rc = bnxt_hwrm_ring_alloc(bp, nq_ring, ring_type, queue_index,
+ rc = bnxt_hwrm_ring_alloc(bp, nq_ring, ring_type, nq_ring_index,
HWRM_NA_SIGNATURE, HWRM_NA_SIGNATURE);
if (rc)
return rc;
- bnxt_set_db(bp, &nqr->cp_db, ring_type, queue_index,
+ bnxt_set_db(bp, &nqr->cp_db, ring_type, nq_ring_index,
nq_ring->fw_ring_id);
bnxt_db_nq(nqr);
@@ -490,14 +493,16 @@ int bnxt_alloc_hwrm_rx_ring(struct bnxt *bp, int queue_index)
struct bnxt_ring *cp_ring = cpr->cp_ring_struct;
struct bnxt_cp_ring_info *nqr = rxq->nq_ring;
struct bnxt_rx_ring_info *rxr = rxq->rx_ring;
- int rc = 0;
+ int rc;
if (BNXT_HAS_NQ(bp)) {
- if (bnxt_alloc_nq_ring(bp, queue_index, nqr))
+ rc = bnxt_alloc_nq_ring(bp, queue_index, nqr);
+ if (rc)
goto err_out;
}
- if (bnxt_alloc_cmpl_ring(bp, queue_index, cpr, nqr))
+ rc = bnxt_alloc_cmpl_ring(bp, queue_index, cpr, nqr);
+ if (rc)
goto err_out;
if (BNXT_HAS_RING_GRPS(bp)) {
@@ -505,22 +510,24 @@ int bnxt_alloc_hwrm_rx_ring(struct bnxt *bp, int queue_index)
bp->grp_info[queue_index].cp_fw_ring_id = cp_ring->fw_ring_id;
}
- if (!queue_index) {
+ if (!BNXT_NUM_ASYNC_CPR(bp) && !queue_index) {
/*
- * In order to save completion resources, use the first
- * completion ring from PF or VF as the default completion ring
- * for async event and HWRM forward response handling.
+ * If a dedicated async event completion ring is not enabled,
+ * use the first completion ring from PF or VF as the default
+ * completion ring for async event handling.
*/
- bp->def_cp_ring = cpr;
+ bp->async_cp_ring = cpr;
rc = bnxt_hwrm_set_async_event_cr(bp);
if (rc)
goto err_out;
}
- if (bnxt_alloc_rx_ring(bp, queue_index))
+ rc = bnxt_alloc_rx_ring(bp, queue_index);
+ if (rc)
goto err_out;
- if (bnxt_alloc_rx_agg_ring(bp, queue_index))
+ rc = bnxt_alloc_rx_agg_ring(bp, queue_index);
+ if (rc)
goto err_out;
rxq->rx_buf_use_size = BNXT_MAX_MTU + RTE_ETHER_HDR_LEN +
@@ -539,12 +546,13 @@ int bnxt_alloc_hwrm_rx_ring(struct bnxt *bp, int queue_index)
bnxt_db_write(&rxr->ag_db, rxr->ag_prod);
}
rxq->index = queue_index;
- PMD_DRV_LOG(INFO,
- "queue %d, rx_deferred_start %d, state %d!\n",
- queue_index, rxq->rx_deferred_start,
- bp->eth_dev->data->rx_queue_state[queue_index]);
+
+ return 0;
err_out:
+ PMD_DRV_LOG(ERR,
+ "Failed to allocate receive queue %d, rc %d.\n",
+ queue_index, rc);
return rc;
}
@@ -583,15 +591,13 @@ int bnxt_alloc_hwrm_rings(struct bnxt *bp)
}
bnxt_hwrm_set_ring_coal(bp, &coal, cp_ring->fw_ring_id);
-
- if (!i) {
+ if (!BNXT_NUM_ASYNC_CPR(bp) && !i) {
/*
- * In order to save completion resource, use the first
- * completion ring from PF or VF as the default
- * completion ring for async event & HWRM
- * forward response handling.
+ * If a dedicated async event completion ring is not
+ * enabled, use the first completion ring as the default
+ * completion ring for async event handling.
*/
- bp->def_cp_ring = cpr;
+ bp->async_cp_ring = cpr;
rc = bnxt_hwrm_set_async_event_cr(bp);
if (rc)
goto err_out;
@@ -652,3 +658,98 @@ int bnxt_alloc_hwrm_rings(struct bnxt *bp)
err_out:
return rc;
}
+
+/* Allocate dedicated async completion ring. */
+int bnxt_alloc_async_cp_ring(struct bnxt *bp)
+{
+ struct bnxt_cp_ring_info *cpr = bp->async_cp_ring;
+ struct bnxt_ring *cp_ring = cpr->cp_ring_struct;
+ uint8_t ring_type;
+ int rc;
+
+ if (BNXT_NUM_ASYNC_CPR(bp) == 0)
+ return 0;
+
+ if (BNXT_HAS_NQ(bp))
+ ring_type = HWRM_RING_ALLOC_INPUT_RING_TYPE_NQ;
+ else
+ ring_type = HWRM_RING_ALLOC_INPUT_RING_TYPE_L2_CMPL;
+
+ rc = bnxt_hwrm_ring_alloc(bp, cp_ring, ring_type, 0,
+ HWRM_NA_SIGNATURE, HWRM_NA_SIGNATURE);
+
+ if (rc)
+ return rc;
+
+ cpr->cp_cons = 0;
+ cpr->valid = 0;
+ bnxt_set_db(bp, &cpr->cp_db, ring_type, 0,
+ cp_ring->fw_ring_id);
+
+ if (BNXT_HAS_NQ(bp))
+ bnxt_db_nq(cpr);
+ else
+ bnxt_db_cq(cpr);
+
+ return bnxt_hwrm_set_async_event_cr(bp);
+}
+
+/* Free dedicated async completion ring. */
+void bnxt_free_async_cp_ring(struct bnxt *bp)
+{
+ struct bnxt_cp_ring_info *cpr = bp->async_cp_ring;
+
+ if (BNXT_NUM_ASYNC_CPR(bp) == 0 || cpr == NULL)
+ return;
+
+ if (BNXT_HAS_NQ(bp))
+ bnxt_free_nq_ring(bp, cpr);
+ else
+ bnxt_free_cp_ring(bp, cpr);
+
+ bnxt_free_ring(cpr->cp_ring_struct);
+ rte_free(cpr->cp_ring_struct);
+ cpr->cp_ring_struct = NULL;
+ rte_free(cpr);
+ bp->async_cp_ring = NULL;
+}
+
+int bnxt_alloc_async_ring_struct(struct bnxt *bp)
+{
+ struct bnxt_cp_ring_info *cpr = NULL;
+ struct bnxt_ring *ring = NULL;
+ unsigned int socket_id;
+
+ if (BNXT_NUM_ASYNC_CPR(bp) == 0)
+ return 0;
+
+ socket_id = rte_lcore_to_socket_id(rte_get_master_lcore());
+
+ cpr = rte_zmalloc_socket("cpr",
+ sizeof(struct bnxt_cp_ring_info),
+ RTE_CACHE_LINE_SIZE, socket_id);
+ if (cpr == NULL)
+ return -ENOMEM;
+
+ ring = rte_zmalloc_socket("bnxt_cp_ring_struct",
+ sizeof(struct bnxt_ring),
+ RTE_CACHE_LINE_SIZE, socket_id);
+ if (ring == NULL) {
+ rte_free(cpr);
+ return -ENOMEM;
+ }
+
+ ring->bd = (void *)cpr->cp_desc_ring;
+ ring->bd_dma = cpr->cp_desc_mapping;
+ ring->ring_size = rte_align32pow2(DEFAULT_CP_RING_SIZE);
+ ring->ring_mask = ring->ring_size - 1;
+ ring->vmem_size = 0;
+ ring->vmem = NULL;
+
+ bp->async_cp_ring = cpr;
+ cpr->cp_ring_struct = ring;
+
+ return bnxt_alloc_rings(bp, 0, NULL, NULL,
+ bp->async_cp_ring, NULL,
+ "def_cp");
+}
diff --git a/drivers/net/bnxt/bnxt_ring.h b/drivers/net/bnxt/bnxt_ring.h
index e5cef3a1d..04c7b04b8 100644
--- a/drivers/net/bnxt/bnxt_ring.h
+++ b/drivers/net/bnxt/bnxt_ring.h
@@ -75,6 +75,9 @@ int bnxt_alloc_rings(struct bnxt *bp, uint16_t qidx,
const char *suffix);
int bnxt_alloc_hwrm_rx_ring(struct bnxt *bp, int queue_index);
int bnxt_alloc_hwrm_rings(struct bnxt *bp);
+int bnxt_alloc_async_cp_ring(struct bnxt *bp);
+void bnxt_free_async_cp_ring(struct bnxt *bp);
+int bnxt_alloc_async_ring_struct(struct bnxt *bp);
static inline void bnxt_db_write(struct bnxt_db_info *db, uint32_t idx)
{
diff --git a/drivers/net/bnxt/bnxt_rxq.c b/drivers/net/bnxt/bnxt_rxq.c
index e0eb890f8..1d95f1139 100644
--- a/drivers/net/bnxt/bnxt_rxq.c
+++ b/drivers/net/bnxt/bnxt_rxq.c
@@ -411,10 +411,11 @@ int bnxt_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id)
return -EINVAL;
}
- dev->data->rx_queue_state[rx_queue_id] = RTE_ETH_QUEUE_STATE_STARTED;
-
bnxt_free_hwrm_rx_ring(bp, rx_queue_id);
- bnxt_alloc_hwrm_rx_ring(bp, rx_queue_id);
+ rc = bnxt_alloc_hwrm_rx_ring(bp, rx_queue_id);
+ if (rc)
+ return rc;
+
PMD_DRV_LOG(INFO, "Rx queue started %d\n", rx_queue_id);
if (dev_conf->rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG) {
@@ -435,8 +436,16 @@ int bnxt_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id)
rc = bnxt_vnic_rss_configure(bp, vnic);
}
- if (rc == 0)
+ if (rc == 0) {
+ dev->data->rx_queue_state[rx_queue_id] =
+ RTE_ETH_QUEUE_STATE_STARTED;
rxq->rx_deferred_start = false;
+ }
+
+ PMD_DRV_LOG(INFO,
+ "queue %d, rx_deferred_start %d, state %d!\n",
+ rx_queue_id, rxq->rx_deferred_start,
+ bp->eth_dev->data->rx_queue_state[rx_queue_id]);
return rc;
}
@@ -449,8 +458,11 @@ int bnxt_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id)
struct bnxt_rx_queue *rxq = NULL;
int rc = 0;
- /* Rx CQ 0 also works as Default CQ for async notifications */
- if (!rx_queue_id) {
+ /* For the stingray platform and other platforms needing tighter
+ * control of resource utilization, Rx CQ 0 also works as
+ * Default CQ for async notifications
+ */
+ if (!BNXT_NUM_ASYNC_CPR(bp) && !rx_queue_id) {
PMD_DRV_LOG(ERR, "Cannot stop Rx queue id %d\n", rx_queue_id);
return -EINVAL;
}
diff --git a/drivers/net/bnxt/bnxt_rxr.c b/drivers/net/bnxt/bnxt_rxr.c
index 54a2cf5fd..185a0e376 100644
--- a/drivers/net/bnxt/bnxt_rxr.c
+++ b/drivers/net/bnxt/bnxt_rxr.c
@@ -564,7 +564,7 @@ uint16_t bnxt_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
nb_rx_pkts++;
if (rc == -EBUSY) /* partial completion */
break;
- } else {
+ } else if (!BNXT_NUM_ASYNC_CPR(rxq->bp)) {
evt =
bnxt_event_hwrm_resp_handler(rxq->bp,
(struct cmpl_base *)rxcmp);
diff --git a/drivers/net/bnxt/bnxt_rxtx_vec_sse.c b/drivers/net/bnxt/bnxt_rxtx_vec_sse.c
index c358506f8..adc5020ec 100644
--- a/drivers/net/bnxt/bnxt_rxtx_vec_sse.c
+++ b/drivers/net/bnxt/bnxt_rxtx_vec_sse.c
@@ -257,7 +257,7 @@ bnxt_recv_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
mbuf->packet_type = bnxt_parse_pkt_type(rxcmp, rxcmp1);
rx_pkts[nb_rx_pkts++] = mbuf;
- } else {
+ } else if (!BNXT_NUM_ASYNC_CPR(rxq->bp)) {
evt =
bnxt_event_hwrm_resp_handler(rxq->bp,
(struct cmpl_base *)rxcmp);
--
2.17.1
* Re: [dpdk-dev] [[PATCH v2]] net/bnxt: use dedicated cpr for async events
2019-07-24 16:49 ` [dpdk-dev] [[PATCH v2]] " Lance Richardson
@ 2019-07-25 9:54 ` Ferruh Yigit
0 siblings, 0 replies; 38+ messages in thread
From: Ferruh Yigit @ 2019-07-25 9:54 UTC (permalink / raw)
To: Lance Richardson, dev; +Cc: ajit.khaparde, somnath.kotur, thomas, Kalesh AP
On 7/24/2019 5:49 PM, Lance Richardson wrote:
> This commit enables the creation of a dedicated completion
> ring for asynchronous event handling instead of handling these
> events on a receive completion ring.
>
> For the stingray platform and other platforms needing tighter
> control of resource utilization, we retain the ability to
> process async events on a receive completion ring.
>
> For Thor-based adapters, we use a dedicated NQ (notification
> queue) ring for async events (async events can't currently
> be received on a completion ring due to a firmware limitation).
>
> Rename "def_cp_ring" to "async_cp_ring" to better reflect its
> purpose (async event notifications) and to avoid confusion with
> VNIC default receive completion rings.
>
> Allow rxq 0 to be stopped when not being used for async events.
>
> Signed-off-by: Lance Richardson <lance.richardson@broadcom.com>
> Signed-off-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
> Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
> Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Applied to dpdk-next-net/master, thanks.
Thread overview: 38+ messages
2019-07-18 3:35 [dpdk-dev] [PATCH 00/22] bnxt patchset Ajit Khaparde
2019-07-18 3:35 ` [dpdk-dev] [PATCH 01/22] net/bnxt: fix to handle error case during port start Ajit Khaparde
2019-07-18 3:35 ` [dpdk-dev] [PATCH 02/22] net/bnxt: fix return value check of address mapping Ajit Khaparde
2019-07-18 3:35 ` [dpdk-dev] [PATCH 03/22] net/bnxt: fix failure to add a MAC address Ajit Khaparde
2019-07-18 3:35 ` [dpdk-dev] [PATCH 04/22] net/bnxt: fix an unconditional wait in link update Ajit Khaparde
2019-07-18 3:35 ` [dpdk-dev] [PATCH 05/22] net/bnxt: fix setting primary MAC address Ajit Khaparde
2019-07-18 3:36 ` [dpdk-dev] [PATCH 06/22] net/bnxt: fix failure path in dev init Ajit Khaparde
2019-07-18 3:36 ` [dpdk-dev] [PATCH 07/22] net/bnxt: reset filters before registering interrupts Ajit Khaparde
2019-07-18 3:36 ` [dpdk-dev] [PATCH 08/22] net/bnxt: use correct vnic default completion ring Ajit Khaparde
2019-07-18 3:36 ` [dpdk-dev] [PATCH 09/22] net/bnxt: use dedicated cpr for async events Ajit Khaparde
2019-07-22 14:57 ` Ferruh Yigit
2019-07-22 15:06 ` Thomas Monjalon
2019-07-22 17:57 ` Lance Richardson
2019-07-22 18:34 ` Ferruh Yigit
2019-07-23 8:04 ` Thomas Monjalon
2019-07-23 10:53 ` Lance Richardson
2019-07-23 21:27 ` Lance Richardson
2019-07-24 16:14 ` [dpdk-dev] [PATCH] " Lance Richardson
2019-07-24 16:32 ` Lance Richardson
2019-07-24 16:49 ` [dpdk-dev] [[PATCH v2]] " Lance Richardson
2019-07-25 9:54 ` Ferruh Yigit
2019-07-18 3:36 ` [dpdk-dev] [PATCH 10/22] net/bnxt: retry irq callback deregistration Ajit Khaparde
2019-07-18 3:36 ` [dpdk-dev] [PATCH 11/22] net/bnxt: fix error checking of FW commands Ajit Khaparde
2019-07-18 3:36 ` [dpdk-dev] [PATCH 12/22] net/bnxt: fix to return standard error codes Ajit Khaparde
2019-07-18 3:36 ` [dpdk-dev] [PATCH 13/22] net/bnxt: fix lock release on getting NVM info Ajit Khaparde
2019-07-18 3:36 ` [dpdk-dev] [PATCH 14/22] net/bnxt: fix RSS disable issue for thor-based adapters Ajit Khaparde
2019-07-18 3:36 ` [dpdk-dev] [PATCH 15/22] net/bnxt: use correct RSS table sizes Ajit Khaparde
2019-07-18 3:36 ` [dpdk-dev] [PATCH 16/22] net/bnxt: fully initialize hwrm msgs for thor RSS cfg Ajit Khaparde
2019-07-18 3:36 ` [dpdk-dev] [PATCH 17/22] net/bnxt: use correct number of RSS contexts for thor Ajit Khaparde
2019-07-18 3:36 ` [dpdk-dev] [PATCH 18/22] net/bnxt: pass correct RSS table address " Ajit Khaparde
2019-07-18 3:36 ` [dpdk-dev] [PATCH 19/22] net/bnxt: avoid overrun in get statistics Ajit Khaparde
2019-07-18 3:36 ` [dpdk-dev] [PATCH 20/22] net/bnxt: fix MAC/VLAN filter allocation failure Ajit Khaparde
2019-07-18 3:36 ` [dpdk-dev] [PATCH 21/22] net/bnxt: fix to correctly check result of HWRM command Ajit Khaparde
2019-07-18 3:36 ` [dpdk-dev] [PATCH 22/22] net/bnxt: update HWRM API to version 1.10.0.91 Ajit Khaparde
2019-07-19 12:30 ` [dpdk-dev] [PATCH 00/22] bnxt patchset Ferruh Yigit
2019-07-19 13:22 ` Ajit Kumar Khaparde
2019-07-19 16:59 ` Ferruh Yigit
2019-07-19 21:01 ` Ferruh Yigit