* [dpdk-dev] [PATCH 0/4] bnxt fixes
From: Kalesh A P @ 2022-01-20 9:12 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, ajit.khaparde
From: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Please apply.
Ajit Khaparde (1):
net/bnxt: fix VF resource allocation strategy
Kalesh AP (3):
net/bnxt: fix check for autoneg enablement
net/bnxt: handle ring cleanup in case of error
net/bnxt: fix to alloc the memzone per VNIC
drivers/net/bnxt/bnxt_hwrm.c | 35 ++++++++++++-----------
drivers/net/bnxt/bnxt_hwrm.h | 2 ++
drivers/net/bnxt/bnxt_ring.c | 1 +
drivers/net/bnxt/bnxt_vnic.c | 68 +++++++++++++++++++-------------------------
drivers/net/bnxt/bnxt_vnic.h | 1 +
5 files changed, 52 insertions(+), 55 deletions(-)
--
2.10.1
* [dpdk-dev] [PATCH 1/4] net/bnxt: fix check for autoneg enablement
From: Kalesh A P @ 2022-01-20 9:12 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, ajit.khaparde
From: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
The HWRM_PORT_PHY_QCFG_OUTPUT response indicates the autoneg speed mask
supported by the FW. When enabling autoneg, the driver should also check
the PAM4 speeds the FW advertises as supported in auto mode, which are
reported in the same HWRM_PORT_PHY_QCFG_OUTPUT response. Otherwise,
autoneg is wrongly skipped on links whose FW reports only PAM4 auto
speeds.
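For illustration, a hedged sketch of the case the old check missed
(mask values hypothetical; field names as in the driver's link_info):

	/* 200G PAM4-only link: the legacy NRZ auto mask is empty. */
	bp->link_info->support_auto_speeds      = 0x0; /* no NRZ auto speeds */
	bp->link_info->support_pam4_auto_speeds = 0x4; /* e.g. 200G PAM4 */

	/* Old gate: (autoneg == 1 && support_auto_speeds) evaluates false,
	 * so RESTART_AUTONEG is never requested even though the FW allows
	 * autoneg. The new gate also accepts the PAM4 auto mask. */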
Fixes: c23f9ded0391 ("net/bnxt: support 200G PAM4 link")
Cc: stable@dpdk.org
Signed-off-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
---
drivers/net/bnxt/bnxt_hwrm.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c
index 5850e7e..5418fa1 100644
--- a/drivers/net/bnxt/bnxt_hwrm.c
+++ b/drivers/net/bnxt/bnxt_hwrm.c
@@ -3253,7 +3253,8 @@ int bnxt_set_hwrm_link_config(struct bnxt *bp, bool link_up)
bp->link_info->link_signal_mode);
link_req.phy_flags = HWRM_PORT_PHY_CFG_INPUT_FLAGS_RESET_PHY;
/* Autoneg can be done only when the FW allows. */
- if (autoneg == 1 && bp->link_info->support_auto_speeds) {
+ if (autoneg == 1 &&
+ (bp->link_info->support_auto_speeds || bp->link_info->support_pam4_auto_speeds)) {
link_req.phy_flags |=
HWRM_PORT_PHY_CFG_INPUT_FLAGS_RESTART_AUTONEG;
link_req.auto_link_speed_mask =
--
2.10.1
* [dpdk-dev] [PATCH 2/4] net/bnxt: handle ring cleanup in case of error
From: Kalesh A P @ 2022-01-20 9:12 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, ajit.khaparde
From: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
In bnxt_alloc_mem(), if any function called after
bnxt_alloc_async_ring_struct() fails, the cleanup path logs an error:

bnxt_hwrm_ring_free(): hwrm_ring_free nq failed. rc:1

Fix this by initializing ring->fw_ring_id to INVALID_HW_RING_ID
in bnxt_alloc_async_ring_struct().
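A hedged sketch of why the stale id matters (free-path wiring
simplified; the INVALID_HW_RING_ID guard is the pattern the driver's
ring-free routines rely on):

	/* The async ring struct is zero-initialized, so without the explicit
	 * init fw_ring_id == 0 -- a plausible FW ring id. On the error path
	 * the driver then asks the FW to free an NQ ring it never allocated,
	 * producing the hwrm_ring_free failure above. Presetting
	 * INVALID_HW_RING_ID lets the guard below skip the bogus free. */
	if (ring->fw_ring_id != INVALID_HW_RING_ID) {
		/* issue HWRM_RING_FREE for this ring */
		ring->fw_ring_id = INVALID_HW_RING_ID;
	}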
Fixes: bd0a14c99f65 ("net/bnxt: use dedicated CPR for async events")
Cc: stable@dpdk.org
Signed-off-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
---
drivers/net/bnxt/bnxt_ring.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/drivers/net/bnxt/bnxt_ring.c b/drivers/net/bnxt/bnxt_ring.c
index dc437f3..5c6c27f 100644
--- a/drivers/net/bnxt/bnxt_ring.c
+++ b/drivers/net/bnxt/bnxt_ring.c
@@ -851,6 +851,7 @@ int bnxt_alloc_async_ring_struct(struct bnxt *bp)
ring->ring_mask = ring->ring_size - 1;
ring->vmem_size = 0;
ring->vmem = NULL;
+ ring->fw_ring_id = INVALID_HW_RING_ID;
bp->async_cp_ring = cpr;
cpr->cp_ring_struct = ring;
--
2.10.1
* [dpdk-dev] [PATCH 3/4] net/bnxt: fix to alloc the memzone per VNIC
From: Kalesh A P @ 2022-01-20 9:12 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, ajit.khaparde
From: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
On Thor, the RSS table is large. Reserving one memzone for all VNICs
at once can therefore fail when the supported VNIC count is high.
Instead of allocating the memzone for all VNICs in one shot, allocate
one memzone per VNIC. Also free the memzone in the uninit path.
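For a sense of scale (numbers hypothetical): if entry_length -- the
cache-line-rounded sum of the RSS table size and hash key size -- is
64 KiB and max_vnics is 256, the one-shot scheme needs a single
IOVA-contiguous memzone of 256 * 64 KiB = 16 MiB, whereas the per-VNIC
scheme needs 256 independent 64 KiB reservations, which are far easier
for the allocator to satisfy.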
Fixes: 9738793f28ec ("net/bnxt: add VNIC functions and structs")
Cc: stable@dpdk.org
Signed-off-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
drivers/net/bnxt/bnxt_vnic.c | 68 +++++++++++++++++++-------------------------
drivers/net/bnxt/bnxt_vnic.h | 1 +
2 files changed, 30 insertions(+), 39 deletions(-)
diff --git a/drivers/net/bnxt/bnxt_vnic.c b/drivers/net/bnxt/bnxt_vnic.c
index 09d67ef..b3c03a2 100644
--- a/drivers/net/bnxt/bnxt_vnic.c
+++ b/drivers/net/bnxt/bnxt_vnic.c
@@ -98,18 +98,11 @@ void bnxt_free_vnic_attributes(struct bnxt *bp)
for (i = 0; i < bp->max_vnics; i++) {
vnic = &bp->vnic_info[i];
- if (vnic->rss_table) {
- /* 'Unreserve' the rss_table */
- /* N/A */
-
- vnic->rss_table = NULL;
- }
-
- if (vnic->rss_hash_key) {
- /* 'Unreserve' the rss_hash_key */
- /* N/A */
-
+ if (vnic->rss_mz != NULL) {
+ rte_memzone_free(vnic->rss_mz);
+ vnic->rss_mz = NULL;
vnic->rss_hash_key = NULL;
+ vnic->rss_table = NULL;
}
}
}
@@ -122,7 +115,6 @@ int bnxt_alloc_vnic_attributes(struct bnxt *bp, bool reconfig)
char mz_name[RTE_MEMZONE_NAMESIZE];
uint32_t entry_length;
size_t rss_table_size;
- uint16_t max_vnics;
int i;
rte_iova_t mz_phys_addr;
@@ -136,38 +128,36 @@ int bnxt_alloc_vnic_attributes(struct bnxt *bp, bool reconfig)
entry_length = RTE_CACHE_LINE_ROUNDUP(entry_length + rss_table_size);
- max_vnics = bp->max_vnics;
- snprintf(mz_name, RTE_MEMZONE_NAMESIZE,
- "bnxt_" PCI_PRI_FMT "_vnicattr", pdev->addr.domain,
- pdev->addr.bus, pdev->addr.devid, pdev->addr.function);
- mz_name[RTE_MEMZONE_NAMESIZE - 1] = 0;
- mz = rte_memzone_lookup(mz_name);
- if (!mz) {
- mz = rte_memzone_reserve(mz_name,
- entry_length * max_vnics,
- bp->eth_dev->device->numa_node,
- RTE_MEMZONE_2MB |
- RTE_MEMZONE_SIZE_HINT_ONLY |
- RTE_MEMZONE_IOVA_CONTIG);
- if (!mz)
- return -ENOMEM;
- }
- mz_phys_addr = mz->iova;
-
- for (i = 0; i < max_vnics; i++) {
+ for (i = 0; i < bp->max_vnics; i++) {
vnic = &bp->vnic_info[i];
+ snprintf(mz_name, RTE_MEMZONE_NAMESIZE,
+ "bnxt_" PCI_PRI_FMT "_vnicattr_%d", pdev->addr.domain,
+ pdev->addr.bus, pdev->addr.devid, pdev->addr.function, i);
+ mz_name[RTE_MEMZONE_NAMESIZE - 1] = 0;
+ mz = rte_memzone_lookup(mz_name);
+ if (mz == NULL) {
+ mz = rte_memzone_reserve(mz_name,
+ entry_length,
+ bp->eth_dev->device->numa_node,
+ RTE_MEMZONE_2MB |
+ RTE_MEMZONE_SIZE_HINT_ONLY |
+ RTE_MEMZONE_IOVA_CONTIG);
+ if (mz == NULL) {
+ PMD_DRV_LOG(ERR, "Cannot allocate bnxt vnic_attributes memory\n");
+ return -ENOMEM;
+ }
+ }
+ vnic->rss_mz = mz;
+ mz_phys_addr = mz->iova;
+
/* Allocate rss table and hash key */
- vnic->rss_table =
- (void *)((char *)mz->addr + (entry_length * i));
+ vnic->rss_table = (void *)((char *)mz->addr);
+ vnic->rss_table_dma_addr = mz_phys_addr;
memset(vnic->rss_table, -1, entry_length);
- vnic->rss_table_dma_addr = mz_phys_addr + (entry_length * i);
- vnic->rss_hash_key = (void *)((char *)vnic->rss_table +
- rss_table_size);
-
- vnic->rss_hash_key_dma_addr = vnic->rss_table_dma_addr +
- rss_table_size;
+ vnic->rss_hash_key = (void *)((char *)vnic->rss_table + rss_table_size);
+ vnic->rss_hash_key_dma_addr = vnic->rss_table_dma_addr + rss_table_size;
if (!reconfig) {
bnxt_prandom_bytes(vnic->rss_hash_key, HW_HASH_KEY_SIZE);
memcpy(bp->rss_conf.rss_key, vnic->rss_hash_key, HW_HASH_KEY_SIZE);
diff --git a/drivers/net/bnxt/bnxt_vnic.h b/drivers/net/bnxt/bnxt_vnic.h
index 25481fc..9055b93 100644
--- a/drivers/net/bnxt/bnxt_vnic.h
+++ b/drivers/net/bnxt/bnxt_vnic.h
@@ -28,6 +28,7 @@ struct bnxt_vnic_info {
uint16_t mru;
uint16_t hash_type;
uint8_t hash_mode;
+ const struct rte_memzone *rss_mz;
rte_iova_t rss_table_dma_addr;
uint16_t *rss_table;
rte_iova_t rss_hash_key_dma_addr;
--
2.10.1
* [dpdk-dev] [PATCH 4/4] net/bnxt: fix VF resource allocation strategy
From: Kalesh A P @ 2022-01-20 9:12 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, ajit.khaparde
From: Ajit Khaparde <ajit.khaparde@broadcom.com>
1. VFs need a notification queue (NQ) to handle async messages, but
the current logic does not reserve one, leading to initialization
failure in some cases.
2. With the current logic, the DPDK PF driver reserves only one VNIC
for each VF, leading to initialization failure with more than one RXQ.
Added logic to distribute the NQs and VNICs from the pool across the
VFs and the PF.
While reserving resources for the VFs, the strategy was to keep the
min and max values the same. This can fail when there are not enough
resources to satisfy the request. Hence, instruct the FW not to
reserve all of the minimum resources requested for the VF. The VF
driver can query the FW for the allocated resources during probe.
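A worked example of the split (counts hypothetical): with
bp->max_vnics = 65 and num_vfs = 7, each function's share is
65 / (7 + 1) = 8, so every VF is offered min_vnics = max_vnics = 8
while the PF takes the share plus the remainder, 8 + (65 % 8) = 9.
The pool is fully distributed: 9 + 7 * 8 = 65. NQs are split the
same way via max_nq_rings.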
Fixes: b7778e8a1c00 ("net/bnxt: refactor to properly allocate resources for PF/VF")
Cc: stable@dpdk.org
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Signed-off-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
---
drivers/net/bnxt/bnxt_hwrm.c | 32 +++++++++++++++++---------------
drivers/net/bnxt/bnxt_hwrm.h | 2 ++
2 files changed, 19 insertions(+), 15 deletions(-)
diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c
index 5418fa1..b4aeec5 100644
--- a/drivers/net/bnxt/bnxt_hwrm.c
+++ b/drivers/net/bnxt/bnxt_hwrm.c
@@ -902,15 +902,7 @@ static int __bnxt_hwrm_func_qcaps(struct bnxt *bp)
bp->max_l2_ctx = rte_le_to_cpu_16(resp->max_l2_ctxs);
if (!BNXT_CHIP_P5(bp) && !bp->pdev->max_vfs)
bp->max_l2_ctx += bp->max_rx_em_flows;
- /* TODO: For now, do not support VMDq/RFS on VFs. */
- if (BNXT_PF(bp)) {
- if (bp->pf->max_vfs)
- bp->max_vnics = 1;
- else
- bp->max_vnics = rte_le_to_cpu_16(resp->max_vnics);
- } else {
- bp->max_vnics = 1;
- }
+ bp->max_vnics = rte_le_to_cpu_16(resp->max_vnics);
PMD_DRV_LOG(DEBUG, "Max l2_cntxts is %d vnics is %d\n",
bp->max_l2_ctx, bp->max_vnics);
bp->max_stat_ctx = rte_le_to_cpu_16(resp->max_stat_ctx);
@@ -3495,7 +3487,7 @@ static int bnxt_hwrm_pf_func_cfg(struct bnxt *bp,
rte_cpu_to_le_16(pf_resc->num_hw_ring_grps);
} else if (BNXT_HAS_NQ(bp)) {
enables |= HWRM_FUNC_CFG_INPUT_ENABLES_NUM_MSIX;
- req.num_msix = rte_cpu_to_le_16(bp->max_nq_rings);
+ req.num_msix = rte_cpu_to_le_16(pf_resc->num_nq_rings);
}
req.flags = rte_cpu_to_le_32(bp->pf->func_cfg_flags);
@@ -3508,7 +3500,7 @@ static int bnxt_hwrm_pf_func_cfg(struct bnxt *bp,
req.num_tx_rings = rte_cpu_to_le_16(pf_resc->num_tx_rings);
req.num_rx_rings = rte_cpu_to_le_16(pf_resc->num_rx_rings);
req.num_l2_ctxs = rte_cpu_to_le_16(pf_resc->num_l2_ctxs);
- req.num_vnics = rte_cpu_to_le_16(bp->max_vnics);
+ req.num_vnics = rte_cpu_to_le_16(pf_resc->num_vnics);
req.fid = rte_cpu_to_le_16(0xffff);
req.enables = rte_cpu_to_le_32(enables);
@@ -3545,14 +3537,12 @@ bnxt_fill_vf_func_cfg_req_new(struct bnxt *bp,
req->min_rx_rings = req->max_rx_rings;
req->max_l2_ctxs = rte_cpu_to_le_16(bp->max_l2_ctx / (num_vfs + 1));
req->min_l2_ctxs = req->max_l2_ctxs;
- /* TODO: For now, do not support VMDq/RFS on VFs. */
- req->max_vnics = rte_cpu_to_le_16(1);
+ req->max_vnics = rte_cpu_to_le_16(bp->max_vnics / (num_vfs + 1));
req->min_vnics = req->max_vnics;
req->max_hw_ring_grps = rte_cpu_to_le_16(bp->max_ring_grps /
(num_vfs + 1));
req->min_hw_ring_grps = req->max_hw_ring_grps;
- req->flags =
- rte_cpu_to_le_16(HWRM_FUNC_VF_RESOURCE_CFG_INPUT_FLAGS_MIN_GUARANTEED);
+ req->max_msix = rte_cpu_to_le_16(bp->max_nq_rings / (num_vfs + 1));
}
static void
@@ -3612,6 +3602,8 @@ static int bnxt_update_max_resources(struct bnxt *bp,
bp->max_rx_rings -= rte_le_to_cpu_16(resp->alloc_rx_rings);
bp->max_l2_ctx -= rte_le_to_cpu_16(resp->alloc_l2_ctx);
bp->max_ring_grps -= rte_le_to_cpu_16(resp->alloc_hw_ring_grps);
+ bp->max_nq_rings -= rte_le_to_cpu_16(resp->alloc_msix);
+ bp->max_vnics -= rte_le_to_cpu_16(resp->alloc_vnics);
HWRM_UNLOCK();
@@ -3685,6 +3677,8 @@ static int bnxt_query_pf_resources(struct bnxt *bp,
pf_resc->num_rx_rings = rte_le_to_cpu_16(resp->alloc_rx_rings);
pf_resc->num_l2_ctxs = rte_le_to_cpu_16(resp->alloc_l2_ctx);
pf_resc->num_hw_ring_grps = rte_le_to_cpu_32(resp->alloc_hw_ring_grps);
+ pf_resc->num_nq_rings = rte_le_to_cpu_32(resp->alloc_msix);
+ pf_resc->num_vnics = rte_le_to_cpu_16(resp->alloc_vnics);
bp->pf->evb_mode = resp->evb_mode;
HWRM_UNLOCK();
@@ -3705,6 +3699,8 @@ bnxt_calculate_pf_resources(struct bnxt *bp,
pf_resc->num_rx_rings = bp->max_rx_rings;
pf_resc->num_l2_ctxs = bp->max_l2_ctx;
pf_resc->num_hw_ring_grps = bp->max_ring_grps;
+ pf_resc->num_nq_rings = bp->max_nq_rings;
+ pf_resc->num_vnics = bp->max_vnics;
return;
}
@@ -3723,6 +3719,10 @@ bnxt_calculate_pf_resources(struct bnxt *bp,
bp->max_l2_ctx % (num_vfs + 1);
pf_resc->num_hw_ring_grps = bp->max_ring_grps / (num_vfs + 1) +
bp->max_ring_grps % (num_vfs + 1);
+ pf_resc->num_nq_rings = bp->max_nq_rings / (num_vfs + 1) +
+ bp->max_nq_rings % (num_vfs + 1);
+ pf_resc->num_vnics = bp->max_vnics / (num_vfs + 1) +
+ bp->max_vnics % (num_vfs + 1);
}
int bnxt_hwrm_allocate_pf_only(struct bnxt *bp)
@@ -3898,6 +3898,8 @@ bnxt_update_pf_resources(struct bnxt *bp,
bp->max_tx_rings = pf_resc->num_tx_rings;
bp->max_rx_rings = pf_resc->num_rx_rings;
bp->max_ring_grps = pf_resc->num_hw_ring_grps;
+ bp->max_nq_rings = pf_resc->num_nq_rings;
+ bp->max_vnics = pf_resc->num_vnics;
}
static int32_t
diff --git a/drivers/net/bnxt/bnxt_hwrm.h b/drivers/net/bnxt/bnxt_hwrm.h
index 21e1b7a..63f8d8c 100644
--- a/drivers/net/bnxt/bnxt_hwrm.h
+++ b/drivers/net/bnxt/bnxt_hwrm.h
@@ -114,6 +114,8 @@ struct bnxt_pf_resource_info {
uint16_t num_rx_rings;
uint16_t num_cp_rings;
uint16_t num_l2_ctxs;
+ uint16_t num_nq_rings;
+ uint16_t num_vnics;
uint32_t num_hw_ring_grps;
};
--
2.10.1
* Re: [dpdk-dev] [PATCH 0/4] bnxt fixes
From: Ajit Khaparde @ 2022-01-25 5:00 UTC (permalink / raw)
To: Kalesh A P; +Cc: dpdk-dev, Ferruh Yigit
On Thu, Jan 20, 2022 at 12:53 AM Kalesh A P
<kalesh-anakkur.purayil@broadcom.com> wrote:
>
> From: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
>
> Please apply.
>
> Ajit Khaparde (1):
> net/bnxt: fix VF resource allocation strategy
>
> Kalesh AP (3):
> net/bnxt: fix check for autoneg enablement
> net/bnxt: handle ring cleanup in case of error
> net/bnxt: fix to alloc the memzone per VNIC
Patchset applied to dpdk-next-net-brcm. Thanks
* Re: [dpdk-dev] [PATCH 0/4] bnxt fixes
From: Ajit Khaparde @ 2021-06-16 22:38 UTC (permalink / raw)
To: Kalesh A P; +Cc: dpdk-dev, Ferruh Yigit
On Tue, Jun 8, 2021 at 7:52 PM Kalesh A P
<kalesh-anakkur.purayil@broadcom.com> wrote:
>
> From: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
>
> This set contains a few bnxt fixes and code cleanup changes.
Patchset applied to dpdk-next-net-brcm for-next-net branch.
* [dpdk-dev] [PATCH 0/4] bnxt fixes
From: Kalesh A P @ 2021-06-09 3:13 UTC (permalink / raw)
To: dev; +Cc: ferruh.yigit, ajit.khaparde
From: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
This set contains a few bnxt fixes and code cleanup changes.
Kalesh AP (4):
net/bnxt: cleanup code
net/bnxt: fix typo in log message
net/bnxt: fix enabling autoneg on Whitney+
net/bnxt: invoke device removal event on recovery failure
drivers/net/bnxt/bnxt_ethdev.c | 11 ++++++++---
drivers/net/bnxt/bnxt_hwrm.c | 22 ++++++++++------------
2 files changed, 18 insertions(+), 15 deletions(-)
--
2.10.1