DPDK patches and discussions
* [dpdk-dev] [PATCH 00/31] bnxt patchset
@ 2018-06-19 21:30 Ajit Khaparde
  2018-06-19 21:30 ` [dpdk-dev] [PATCH 01/31] net/bnxt: fix clear port stats Ajit Khaparde
                   ` (31 more replies)
  0 siblings, 32 replies; 73+ messages in thread
From: Ajit Khaparde @ 2018-06-19 21:30 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit

This patchset against dpdk-next-net contains bug fixes,
some code refactoring, and style cleanup.

Please apply.

Ajit Khaparde (15):
  net/bnxt: fix clear port stats
  net/bnxt: add Tx batching support
  net/bnxt: Rx processing optimization
  net/bnxt: set min and max descriptor count for Tx and Rx rings
  net/bnxt: fix dev close operation
  net/bnxt: set ring coalesce parameters for Stratus NIC
  net/bnxt: fix HW Tx checksum offload check
  net/bnxt: add support for VF id 0xd800
  net/bnxt: fix rx/tx queue start/stop operations
  net/bnxt: code cleanup style of bnxt vnic
  net/bnxt: filter/flow refactoring
  net/bnxt: check filter type before clearing it
  net/bnxt: fix set MTU
  net/bnxt: fix incorrect IO address handling in Tx
  net/bnxt: allocate RSS context only if RSS mode is enabled.

Jay Ding (1):
  net/bnxt: check for invalid vnic id

Rob Miller (1):
  net/bnxt: update HWRM API to v1.9.2.9

Scott Branden (11):
  net/bnxt: code cleanup style of bnxt cpr
  net/bnxt: code cleanup style of bnxt rxr
  net/bnxt: code cleanup style of rte pmd bnxt file
  net/bnxt: code cleanup style of bnxt stats
  net/bnxt: code cleanup style of bnxt vnic
  net/bnxt: code cleanup style of bnxt txq
  net/bnxt: code cleanup style of bnxt rxq
  net/bnxt: code cleanup style of bnxt txr
  net/bnxt: code cleanup style of bnxt ring
  net/bnxt: code cleanup style of bnxt ethdev
  net/bnxt: move function check zero bytes to bnxt util.h

Somnath Kotur (2):
  net/bnxt: Revert reset of L2 filter id in clear_ntuple_filter
  net/bnxt: fix to move a flow to a different queue

Xiaoxin Peng (1):
  net/bnxt: fix Tx with multiple mbuf

 drivers/net/bnxt/Makefile              |    2 +
 drivers/net/bnxt/bnxt.h                |   27 +
 drivers/net/bnxt/bnxt_cpr.c            |   22 +-
 drivers/net/bnxt/bnxt_cpr.h            |   12 +
 drivers/net/bnxt/bnxt_ethdev.c         |  284 +++++---
 drivers/net/bnxt/bnxt_filter.c         | 1090 +----------------------------
 drivers/net/bnxt/bnxt_filter.h         |    1 -
 drivers/net/bnxt/bnxt_flow.c           | 1171 ++++++++++++++++++++++++++++++++
 drivers/net/bnxt/bnxt_hwrm.c           |  156 +++--
 drivers/net/bnxt/bnxt_hwrm.h           |    3 +
 drivers/net/bnxt/bnxt_ring.c           |  194 +++++-
 drivers/net/bnxt/bnxt_ring.h           |   41 +-
 drivers/net/bnxt/bnxt_rxq.c            |   76 ++-
 drivers/net/bnxt/bnxt_rxq.h            |   16 +-
 drivers/net/bnxt/bnxt_rxr.c            |   82 ++-
 drivers/net/bnxt/bnxt_rxr.h            |    8 +-
 drivers/net/bnxt/bnxt_stats.c          |   84 ++-
 drivers/net/bnxt/bnxt_stats.h          |   27 +-
 drivers/net/bnxt/bnxt_txq.c            |   24 +-
 drivers/net/bnxt/bnxt_txq.h            |   10 +-
 drivers/net/bnxt/bnxt_txr.c            |  161 +++--
 drivers/net/bnxt/bnxt_txr.h            |   19 +-
 drivers/net/bnxt/bnxt_util.c           |   18 +
 drivers/net/bnxt/bnxt_util.h           |   11 +
 drivers/net/bnxt/bnxt_vnic.c           |   28 +-
 drivers/net/bnxt/bnxt_vnic.h           |    8 +-
 drivers/net/bnxt/hsi_struct_def_dpdk.h |  113 ++-
 drivers/net/bnxt/rte_pmd_bnxt.c        |   97 ++-
 drivers/net/bnxt/rte_pmd_bnxt.h        |   69 +-
 29 files changed, 2314 insertions(+), 1540 deletions(-)
 create mode 100644 drivers/net/bnxt/bnxt_flow.c
 create mode 100644 drivers/net/bnxt/bnxt_util.c
 create mode 100644 drivers/net/bnxt/bnxt_util.h

-- 
2.15.1 (Apple Git-101)

* [dpdk-dev] [PATCH 01/31] net/bnxt: fix clear port stats
  2018-06-19 21:30 [dpdk-dev] [PATCH 00/31] bnxt patchset Ajit Khaparde
@ 2018-06-19 21:30 ` Ajit Khaparde
  2018-06-19 21:30 ` [dpdk-dev] [PATCH 02/31] net/bnxt: add Tx batching support Ajit Khaparde
                   ` (30 subsequent siblings)
  31 siblings, 0 replies; 73+ messages in thread
From: Ajit Khaparde @ 2018-06-19 21:30 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, stable

PORT_CLR_STATS is not allowed for VFs, NPAR, MultiHost functions
or when SR-IOV is enabled.
Don't send the HWRM command in such cases.

Fixes: bfb9c2260be2 ("net/bnxt: support xstats get/reset")
Cc: stable@dpdk.org

Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/bnxt.h      | 4 ++++
 drivers/net/bnxt/bnxt_hwrm.c | 5 ++++-
 2 files changed, 8 insertions(+), 1 deletion(-)

diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h
index afaaf8c41..35c3073dd 100644
--- a/drivers/net/bnxt/bnxt.h
+++ b/drivers/net/bnxt/bnxt.h
@@ -98,6 +98,7 @@ struct bnxt_child_vf_info {
 struct bnxt_pf_info {
 #define BNXT_FIRST_PF_FID	1
 #define BNXT_MAX_VFS(bp)	(bp->pf.max_vfs)
+#define BNXT_TOTAL_VFS(bp)	(bp->pf.total_vfs)
 #define BNXT_FIRST_VF_FID	128
 #define BNXT_PF_RINGS_USED(bp)	bnxt_get_num_queues(bp)
 #define BNXT_PF_RINGS_AVAIL(bp)	(bp->pf.max_cp_rings - BNXT_PF_RINGS_USED(bp))
@@ -105,6 +106,9 @@ struct bnxt_pf_info {
 	uint16_t		first_vf_id;
 	uint16_t		active_vfs;
 	uint16_t		max_vfs;
+	uint16_t		total_vfs; /* Total VFs possible.
+					    * Not necessarily enabled.
+					    */
 	uint32_t		func_cfg_flags;
 	void			*vf_req_buf;
 	rte_iova_t		vf_req_buf_dma_addr;
diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c
index d6fdc1b88..f441d4610 100644
--- a/drivers/net/bnxt/bnxt_hwrm.c
+++ b/drivers/net/bnxt/bnxt_hwrm.c
@@ -506,6 +506,7 @@ static int __bnxt_hwrm_func_qcaps(struct bnxt *bp)
 	if (BNXT_PF(bp)) {
 		bp->pf.port_id = resp->port_id;
 		bp->pf.first_vf_id = rte_le_to_cpu_16(resp->first_vf_id);
+		bp->pf.total_vfs = rte_le_to_cpu_16(resp->max_vfs);
 		new_max_vfs = bp->pdev->max_vfs;
 		if (new_max_vfs != bp->pf.max_vfs) {
 			if (bp->pf.vf_info)
@@ -3151,7 +3152,9 @@ int bnxt_hwrm_port_clr_stats(struct bnxt *bp)
 	struct bnxt_pf_info *pf = &bp->pf;
 	int rc;
 
-	if (!(bp->flags & BNXT_FLAG_PORT_STATS))
+	/* Not allowed on NS2 device, NPAR, MultiHost, VF */
+	if (!(bp->flags & BNXT_FLAG_PORT_STATS) || BNXT_VF(bp) ||
+	    BNXT_NPAR(bp) || BNXT_MH(bp) || BNXT_TOTAL_VFS(bp))
 		return 0;
 
 	HWRM_PREP(req, PORT_CLR_STATS);
-- 
2.15.1 (Apple Git-101)

* [dpdk-dev] [PATCH 02/31] net/bnxt: add Tx batching support
  2018-06-19 21:30 [dpdk-dev] [PATCH 00/31] bnxt patchset Ajit Khaparde
  2018-06-19 21:30 ` [dpdk-dev] [PATCH 01/31] net/bnxt: fix clear port stats Ajit Khaparde
@ 2018-06-19 21:30 ` Ajit Khaparde
  2018-06-19 21:30 ` [dpdk-dev] [PATCH 03/31] net/bnxt: Rx processing optimization Ajit Khaparde
                   ` (29 subsequent siblings)
  31 siblings, 0 replies; 73+ messages in thread
From: Ajit Khaparde @ 2018-06-19 21:30 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit

Batch more than one Tx request such that only one completion
is generated by the HW. We request a Tx completion for the first
and last Tx request in the batch.
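
For reviewers, a minimal self-contained sketch of the batching idea (the
names below are illustrative stand-ins, not the driver code; the real
implementation threads coal_pkts/cmpl_next through bnxt_start_xmit()):

  #include <stdbool.h>
  #include <stdint.h>

  /* Illustrative descriptor: "opaque" reports how many packets the next
   * completion covers; "no_cmpl" asks HW to skip the completion for this BD.
   */
  struct demo_txbd {
          uint32_t opaque;
          bool no_cmpl;
  };

  static void demo_xmit_burst(struct demo_txbd *bd, uint16_t nb_pkts)
  {
          uint16_t coal_pkts = 0;
          bool cmpl_next = false;
          uint16_t i;

          for (i = 0; i < nb_pkts; i++) {
                  /* Request a completion on the first and the last packet. */
                  cmpl_next |= (i == 0) || (i + 1 == nb_pkts);
                  coal_pkts++;

                  bd[i].opaque = coal_pkts;       /* packets this cmpl covers */
                  if (!cmpl_next) {
                          bd[i].no_cmpl = true;   /* suppress this completion */
                  } else {
                          bd[i].no_cmpl = false;  /* one completion for batch */
                          coal_pkts = 0;
                          cmpl_next = false;
                  }
          }
  }

On the completion path the driver can then credit "opaque" packets per Tx
completion instead of one, which is what the reworked bnxt_handle_tx_cp()
loop below does.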

Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/bnxt_cpr.h | 12 ++++++
 drivers/net/bnxt/bnxt_txq.h |  1 +
 drivers/net/bnxt/bnxt_txr.c | 97 +++++++++++++++++++++++++++++----------------
 3 files changed, 75 insertions(+), 35 deletions(-)

diff --git a/drivers/net/bnxt/bnxt_cpr.h b/drivers/net/bnxt/bnxt_cpr.h
index 6c1e6d2b0..5b36bf7d7 100644
--- a/drivers/net/bnxt/bnxt_cpr.h
+++ b/drivers/net/bnxt/bnxt_cpr.h
@@ -22,12 +22,20 @@
 #define ADV_RAW_CMP(idx, n)	((idx) + (n))
 #define NEXT_RAW_CMP(idx)	ADV_RAW_CMP(idx, 1)
 #define RING_CMP(ring, idx)	((idx) & (ring)->ring_mask)
+#define RING_CMPL(ring_mask, idx)	((idx) & (ring_mask))
 #define NEXT_CMP(idx)		RING_CMP(ADV_RAW_CMP(idx, 1))
 #define FLIP_VALID(cons, mask, val)	((cons) >= (mask) ? !(val) : (val))
 
 #define DB_CP_REARM_FLAGS	(DB_KEY_CP | DB_IDX_VALID)
 #define DB_CP_FLAGS		(DB_KEY_CP | DB_IDX_VALID | DB_IRQ_DIS)
 
+#define NEXT_CMPL(cpr, idx, v, inc)	do { \
+	(idx) += (inc); \
+	if (unlikely((idx) == (cpr)->cp_ring_struct->ring_size)) { \
+		(v) = !(v); \
+		idx = 0; \
+	} \
+} while (0)
 #define B_CP_DB_REARM(cpr, raw_cons)					\
 	rte_write32((DB_CP_REARM_FLAGS |				\
 		    RING_CMP(((cpr)->cp_ring_struct), raw_cons)),	\
@@ -50,6 +58,10 @@
 	rte_write32((DB_CP_FLAGS |					\
 		    RING_CMP(((cpr)->cp_ring_struct), raw_cons)),	\
 		    ((cpr)->cp_doorbell))
+#define B_CP_DB(cpr, raw_cons, ring_mask)				\
+	rte_write32((DB_CP_FLAGS |					\
+		    RING_CMPL((ring_mask), raw_cons)),	\
+		    ((cpr)->cp_doorbell))
 
 struct bnxt_ring;
 struct bnxt_cp_ring_info {
diff --git a/drivers/net/bnxt/bnxt_txq.h b/drivers/net/bnxt/bnxt_txq.h
index 720ca90cf..f2c712a75 100644
--- a/drivers/net/bnxt/bnxt_txq.h
+++ b/drivers/net/bnxt/bnxt_txq.h
@@ -24,6 +24,7 @@ struct bnxt_tx_queue {
 	uint8_t			wthresh; /* Write-back threshold reg */
 	uint32_t		ctx_curr; /* Hardware context states */
 	uint8_t			tx_deferred_start; /* not in global dev start */
+	uint8_t			cmpl_next; /* Next BD to trigger a compl */
 
 	struct bnxt		*bp;
 	int			index;
diff --git a/drivers/net/bnxt/bnxt_txr.c b/drivers/net/bnxt/bnxt_txr.c
index 470fddd56..0fdf0fd08 100644
--- a/drivers/net/bnxt/bnxt_txr.c
+++ b/drivers/net/bnxt/bnxt_txr.c
@@ -114,7 +114,9 @@ static inline uint32_t bnxt_tx_avail(struct bnxt_tx_ring_info *txr)
 }
 
 static uint16_t bnxt_start_xmit(struct rte_mbuf *tx_pkt,
-				struct bnxt_tx_queue *txq)
+				struct bnxt_tx_queue *txq,
+				uint16_t *coal_pkts,
+				uint16_t *cmpl_next)
 {
 	struct bnxt_tx_ring_info *txr = txq->tx_ring;
 	struct tx_bd_long *txbd;
@@ -146,8 +148,15 @@ static uint16_t bnxt_start_xmit(struct rte_mbuf *tx_pkt,
 		return -ENOMEM;
 
 	txbd = &txr->tx_desc_ring[txr->tx_prod];
-	txbd->opaque = txr->tx_prod;
+	txbd->opaque = *coal_pkts;
 	txbd->flags_type = tx_buf->nr_bds << TX_BD_LONG_FLAGS_BD_CNT_SFT;
+	txbd->flags_type |= TX_BD_SHORT_FLAGS_COAL_NOW;
+	if (!*cmpl_next) {
+		txbd->flags_type |= TX_BD_LONG_FLAGS_NO_CMPL;
+	} else {
+		*coal_pkts = 0;
+		*cmpl_next = false;
+	}
 	txbd->len = tx_pkt->data_len;
 	if (txbd->len >= 2014)
 		txbd->flags_type |= TX_BD_LONG_FLAGS_LHINT_GTE2K;
@@ -235,7 +244,7 @@ static uint16_t bnxt_start_xmit(struct rte_mbuf *tx_pkt,
 
 		txbd = &txr->tx_desc_ring[txr->tx_prod];
 		txbd->address = rte_cpu_to_le_32(rte_mbuf_data_iova(m_seg));
-		txbd->flags_type = TX_BD_SHORT_TYPE_TX_BD_SHORT;
+		txbd->flags_type |= TX_BD_SHORT_TYPE_TX_BD_SHORT;
 		txbd->len = m_seg->data_len;
 
 		m_seg = m_seg->next;
@@ -278,35 +287,44 @@ static int bnxt_handle_tx_cp(struct bnxt_tx_queue *txq)
 	struct bnxt_cp_ring_info *cpr = txq->cp_ring;
 	uint32_t raw_cons = cpr->cp_raw_cons;
 	uint32_t cons;
-	int nb_tx_pkts = 0;
+	uint32_t nb_tx_pkts = 0;
 	struct tx_cmpl *txcmp;
+	struct cmpl_base *cp_desc_ring = cpr->cp_desc_ring;
+	struct bnxt_ring *cp_ring_struct = cpr->cp_ring_struct;
+	uint32_t ring_mask = cp_ring_struct->ring_mask;
+	uint32_t opaque = 0;
 
-	if ((txq->tx_ring->tx_ring_struct->ring_size -
-			(bnxt_tx_avail(txq->tx_ring))) >
-			txq->tx_free_thresh) {
-		while (1) {
-			cons = RING_CMP(cpr->cp_ring_struct, raw_cons);
-			txcmp = (struct tx_cmpl *)&cpr->cp_desc_ring[cons];
-
-			if (!CMP_VALID(txcmp, raw_cons, cpr->cp_ring_struct))
-				break;
-			cpr->valid = FLIP_VALID(cons,
-						cpr->cp_ring_struct->ring_mask,
-						cpr->valid);
-
-			if (CMP_TYPE(txcmp) == TX_CMPL_TYPE_TX_L2)
-				nb_tx_pkts++;
-			else
-				RTE_LOG_DP(DEBUG, PMD,
-						"Unhandled CMP type %02x\n",
-						CMP_TYPE(txcmp));
-			raw_cons = NEXT_RAW_CMP(raw_cons);
-		}
-		if (nb_tx_pkts)
-			bnxt_tx_cmp(txq, nb_tx_pkts);
+	if (((txq->tx_ring->tx_prod - txq->tx_ring->tx_cons) &
+		txq->tx_ring->tx_ring_struct->ring_mask) < txq->tx_free_thresh)
+		return 0;
+
+	do {
+		cons = RING_CMPL(ring_mask, raw_cons);
+		txcmp = (struct tx_cmpl *)&cpr->cp_desc_ring[cons];
+		rte_prefetch_non_temporal(&cp_desc_ring[(cons + 2) &
+							ring_mask]);
+
+		if (!CMPL_VALID(txcmp, cpr->valid))
+			break;
+		opaque = rte_cpu_to_le_32(txcmp->opaque);
+		NEXT_CMPL(cpr, cons, cpr->valid, 1);
+		rte_prefetch0(&cp_desc_ring[cons]);
+
+		if (CMP_TYPE(txcmp) == TX_CMPL_TYPE_TX_L2)
+			nb_tx_pkts += opaque;
+		else
+			RTE_LOG_DP(ERR, PMD,
+					"Unhandled CMP type %02x\n",
+					CMP_TYPE(txcmp));
+		raw_cons = cons;
+	} while (nb_tx_pkts < ring_mask);
+
+	if (nb_tx_pkts) {
+		bnxt_tx_cmp(txq, nb_tx_pkts);
 		cpr->cp_raw_cons = raw_cons;
-		B_CP_DIS_DB(cpr, cpr->cp_raw_cons);
+		B_CP_DB(cpr, cpr->cp_raw_cons, ring_mask);
 	}
+
 	return nb_tx_pkts;
 }
 
@@ -315,8 +333,8 @@ uint16_t bnxt_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 {
 	struct bnxt_tx_queue *txq = tx_queue;
 	uint16_t nb_tx_pkts = 0;
-	uint16_t db_mask = txq->tx_ring->tx_ring_struct->ring_size >> 2;
-	uint16_t last_db_mask = 0;
+	uint16_t coal_pkts = 0;
+	uint16_t cmpl_next = txq->cmpl_next;
 
 	/* Handle TX completions */
 	bnxt_handle_tx_cp(txq);
@@ -326,16 +344,25 @@ uint16_t bnxt_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 		PMD_DRV_LOG(DEBUG, "Tx q stopped;return\n");
 		return 0;
 	}
+
+	txq->cmpl_next = 0;
 	/* Handle TX burst request */
 	for (nb_tx_pkts = 0; nb_tx_pkts < nb_pkts; nb_tx_pkts++) {
-		if (bnxt_start_xmit(tx_pkts[nb_tx_pkts], txq)) {
+		int rc;
+
+		/* Request a completion on first and last packet */
+		cmpl_next |= (nb_pkts == nb_tx_pkts + 1);
+		coal_pkts++;
+		rc = bnxt_start_xmit(tx_pkts[nb_tx_pkts], txq,
+				&coal_pkts, &cmpl_next);
+
+		if (unlikely(rc)) {
+			/* Request a completion in next cycle */
+			txq->cmpl_next = 1;
 			break;
-		} else if ((nb_tx_pkts & db_mask) != last_db_mask) {
-			B_TX_DB(txq->tx_ring->tx_doorbell,
-					txq->tx_ring->tx_prod);
-			last_db_mask = nb_tx_pkts & db_mask;
 		}
 	}
+
 	if (nb_tx_pkts)
 		B_TX_DB(txq->tx_ring->tx_doorbell, txq->tx_ring->tx_prod);
 
-- 
2.15.1 (Apple Git-101)

* [dpdk-dev] [PATCH 03/31] net/bnxt: Rx processing optimization
  2018-06-19 21:30 [dpdk-dev] [PATCH 00/31] bnxt patchset Ajit Khaparde
  2018-06-19 21:30 ` [dpdk-dev] [PATCH 01/31] net/bnxt: fix clear port stats Ajit Khaparde
  2018-06-19 21:30 ` [dpdk-dev] [PATCH 02/31] net/bnxt: add Tx batching support Ajit Khaparde
@ 2018-06-19 21:30 ` Ajit Khaparde
  2018-06-19 21:30 ` [dpdk-dev] [PATCH 04/31] net/bnxt: set min and max descriptor count for Tx and Rx rings Ajit Khaparde
                   ` (28 subsequent siblings)
  31 siblings, 0 replies; 73+ messages in thread
From: Ajit Khaparde @ 2018-06-19 21:30 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit

1) Use nb_rx_pkts instead of checking producer indices of Rx and
aggregator rings to decide if any Rx completions were processed.
2) Post Rx buffers early in Rx processing instead of waiting for
the budgeted burst quota.
3) Ring Rx CQ DB after Rx buffers are posted.
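
A compiling toy sketch of the resulting order of operations (points 1-3
above); the helpers here are stand-ins, not driver functions:

  #include <stdint.h>
  #include <stdio.h>

  #define DEMO_RX_POST_THRESH 32  /* mirrors BNXT_RX_POST_THRESH */

  static void ring_rx_db(void) { puts("ring Rx doorbell"); }
  static void ring_ag_db(void) { puts("ring aggregation doorbell"); }
  static void ring_cq_db(void) { puts("ring completion-queue doorbell"); }

  static uint16_t demo_recv_burst(uint16_t nb_pkts, uint16_t hw_ready)
  {
          uint16_t nb_rx = 0;

          while (nb_rx < nb_pkts && nb_rx < hw_ready) {
                  nb_rx++;                              /* consume one completion */
                  if (nb_rx == DEMO_RX_POST_THRESH)     /* (2) post buffers early */
                          ring_rx_db();
          }

          if (!nb_rx)     /* (1) decide on nb_rx, not producer-index deltas */
                  return 0;

          ring_rx_db();   /* replenish Rx ring */
          ring_ag_db();   /* replenish aggregation ring */
          ring_cq_db();   /* (3) CQ doorbell rung last */
          return nb_rx;
  }

  int main(void)
  {
          return demo_recv_burst(64, 48) ? 0 : 1;
  }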

Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/bnxt_rxr.c | 12 ++++++++----
 drivers/net/bnxt/bnxt_rxr.h |  2 ++
 2 files changed, 10 insertions(+), 4 deletions(-)

diff --git a/drivers/net/bnxt/bnxt_rxr.c b/drivers/net/bnxt/bnxt_rxr.c
index 9d8842926..b6b72c553 100644
--- a/drivers/net/bnxt/bnxt_rxr.c
+++ b/drivers/net/bnxt/bnxt_rxr.c
@@ -540,8 +540,8 @@ uint16_t bnxt_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 	int rc = 0;
 	bool evt = false;
 
-	/* If Rx Q was stopped return */
-	if (rxq->rx_deferred_start)
+	/* If Rx Q was stopped return. RxQ0 cannot be stopped. */
+	if (rxq->rx_deferred_start && rxq->queue_id)
 		return 0;
 
 	/* Handle RX burst request */
@@ -572,10 +572,13 @@ uint16_t bnxt_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 		raw_cons = NEXT_RAW_CMP(raw_cons);
 		if (nb_rx_pkts == nb_pkts || evt)
 			break;
+		/* Post some Rx buf early in case of larger burst processing */
+		if (nb_rx_pkts == BNXT_RX_POST_THRESH)
+			B_RX_DB(rxr->rx_doorbell, rxr->rx_prod);
 	}
 
 	cpr->cp_raw_cons = raw_cons;
-	if ((prod == rxr->rx_prod && ag_prod == rxr->ag_prod) && !evt) {
+	if (!nb_rx_pkts && !evt) {
 		/*
 		 * For PMD, there is no need to keep on pushing to REARM
 		 * the doorbell if there are no new completions
@@ -583,7 +586,6 @@ uint16_t bnxt_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 		return nb_rx_pkts;
 	}
 
-	B_CP_DIS_DB(cpr, cpr->cp_raw_cons);
 	if (prod != rxr->rx_prod)
 		B_RX_DB(rxr->rx_doorbell, rxr->rx_prod);
 
@@ -591,6 +593,8 @@ uint16_t bnxt_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 	if (ag_prod != rxr->ag_prod)
 		B_RX_DB(rxr->ag_doorbell, rxr->ag_prod);
 
+	B_CP_DIS_DB(cpr, cpr->cp_raw_cons);
+
 	/* Attempt to alloc Rx buf in case of a previous allocation failure. */
 	if (rc == -ENOMEM) {
 		int i;
diff --git a/drivers/net/bnxt/bnxt_rxr.h b/drivers/net/bnxt/bnxt_rxr.h
index 5b28f0321..3815a2199 100644
--- a/drivers/net/bnxt/bnxt_rxr.h
+++ b/drivers/net/bnxt/bnxt_rxr.h
@@ -54,6 +54,8 @@
 #define RX_CMP_IP_CS_UNKNOWN(rxcmp1)					\
 		!((rxcmp1)->flags2 & RX_CMP_IP_CS_BITS)
 
+#define BNXT_RX_POST_THRESH	32
+
 enum pkt_hash_types {
 	PKT_HASH_TYPE_NONE,	/* Undefined type */
 	PKT_HASH_TYPE_L2,	/* Input: src_MAC, dest_MAC */
-- 
2.15.1 (Apple Git-101)

* [dpdk-dev] [PATCH 04/31] net/bnxt: set min and max descriptor count for Tx and Rx rings
  2018-06-19 21:30 [dpdk-dev] [PATCH 00/31] bnxt patchset Ajit Khaparde
                   ` (2 preceding siblings ...)
  2018-06-19 21:30 ` [dpdk-dev] [PATCH 03/31] net/bnxt: Rx processing optimization Ajit Khaparde
@ 2018-06-19 21:30 ` Ajit Khaparde
  2018-06-19 21:30 ` [dpdk-dev] [PATCH 05/31] net/bnxt: fix dev close operation Ajit Khaparde
                   ` (27 subsequent siblings)
  31 siblings, 0 replies; 73+ messages in thread
From: Ajit Khaparde @ 2018-06-19 21:30 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit

Set min and max descriptor count for Tx and Rx rings.
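
For reference, applications see these limits through rte_eth_dev_info_get();
a hypothetical helper (not part of this patch) that clamps a requested ring
size to the advertised range could look like:

  #include <rte_ethdev.h>

  /* Hypothetical helper: clamp a requested Rx ring size to the range the
   * PMD advertises, e.g. 16..8192 for bnxt Rx rings after this patch.
   */
  static uint16_t clamp_rx_ring_size(uint16_t port_id, uint16_t requested)
  {
          struct rte_eth_dev_info dev_info;

          rte_eth_dev_info_get(port_id, &dev_info);

          if (requested < dev_info.rx_desc_lim.nb_min)
                  return dev_info.rx_desc_lim.nb_min;
          if (requested > dev_info.rx_desc_lim.nb_max)
                  return dev_info.rx_desc_lim.nb_max;
          return requested;
  }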

Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/bnxt.h        | 3 +++
 drivers/net/bnxt/bnxt_ethdev.c | 4 ++++
 2 files changed, 7 insertions(+)

diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h
index 35c3073dd..d25bf78af 100644
--- a/drivers/net/bnxt/bnxt.h
+++ b/drivers/net/bnxt/bnxt.h
@@ -24,6 +24,9 @@
 #define VLAN_TAG_SIZE		4
 #define BNXT_MAX_LED		4
 #define BNXT_NUM_VLANS		2
+#define BNXT_MIN_RING_DESC	16
+#define BNXT_MAX_TX_RING_DESC	4096
+#define BNXT_MAX_RX_RING_DESC	8192
 
 struct bnxt_led_info {
 	uint8_t      led_id;
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index 6e56bfd36..33560db0d 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -449,6 +449,10 @@ static void bnxt_dev_info_get_op(struct rte_eth_dev *eth_dev,
 	eth_dev->data->dev_conf.intr_conf.lsc = 1;
 
 	eth_dev->data->dev_conf.intr_conf.rxq = 1;
+	dev_info->rx_desc_lim.nb_min = BNXT_MIN_RING_DESC;
+	dev_info->rx_desc_lim.nb_max = BNXT_MAX_RX_RING_DESC;
+	dev_info->tx_desc_lim.nb_min = BNXT_MIN_RING_DESC;
+	dev_info->tx_desc_lim.nb_max = BNXT_MAX_TX_RING_DESC;
 
 	/* *INDENT-ON* */
 
-- 
2.15.1 (Apple Git-101)

* [dpdk-dev] [PATCH 05/31] net/bnxt: fix dev close operation
  2018-06-19 21:30 [dpdk-dev] [PATCH 00/31] bnxt patchset Ajit Khaparde
                   ` (3 preceding siblings ...)
  2018-06-19 21:30 ` [dpdk-dev] [PATCH 04/31] net/bnxt: set min and max descriptor count for Tx and Rx rings Ajit Khaparde
@ 2018-06-19 21:30 ` Ajit Khaparde
  2018-06-26 15:28   ` Ferruh Yigit
  2018-06-19 21:30 ` [dpdk-dev] [PATCH 06/31] net/bnxt: set ring coalesce parameters for Stratus NIC Ajit Khaparde
                   ` (26 subsequent siblings)
  31 siblings, 1 reply; 73+ messages in thread
From: Ajit Khaparde @ 2018-06-19 21:30 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, stable

We are not cleaning up all the memory, nor unregistering the driver,
during the device close operation. This patch fixes the issue.

Fixes: 893074951314 (net/bnxt: free memory in close operation)
Cc: stable@dpdk.org

Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/bnxt_ethdev.c | 23 ++++++++++++++++++-----
 1 file changed, 18 insertions(+), 5 deletions(-)

diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index 33560db0d..b3826360c 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -152,6 +152,7 @@ static const struct rte_pci_id bnxt_pci_id_map[] = {
 static int bnxt_vlan_offload_set_op(struct rte_eth_dev *dev, int mask);
 static void bnxt_print_link_info(struct rte_eth_dev *eth_dev);
 static int bnxt_mtu_set_op(struct rte_eth_dev *eth_dev, uint16_t new_mtu);
+static int bnxt_dev_uninit(struct rte_eth_dev *eth_dev);
 
 /***********************/
 
@@ -668,6 +669,8 @@ static void bnxt_dev_close_op(struct rte_eth_dev *eth_dev)
 		rte_free(bp->grp_info);
 		bp->grp_info = NULL;
 	}
+
+	bnxt_dev_uninit(eth_dev);
 }
 
 static void bnxt_mac_addr_remove_op(struct rte_eth_dev *eth_dev,
@@ -3116,7 +3119,6 @@ static int bnxt_init_board(struct rte_eth_dev *eth_dev)
 	return rc;
 }
 
-static int bnxt_dev_uninit(struct rte_eth_dev *eth_dev);
 
 #define ALLOW_FUNC(x)	\
 	{ \
@@ -3408,13 +3410,15 @@ bnxt_dev_init(struct rte_eth_dev *eth_dev)
 }
 
 static int
-bnxt_dev_uninit(struct rte_eth_dev *eth_dev) {
+bnxt_dev_uninit(struct rte_eth_dev *eth_dev)
+{
 	struct bnxt *bp = eth_dev->data->dev_private;
 	int rc;
 
 	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
 		return -EPERM;
 
+	PMD_DRV_LOG(INFO, "Calling Device uninit\n");
 	bnxt_disable_int(bp);
 	bnxt_free_int(bp);
 	bnxt_free_mem(bp);
@@ -3428,8 +3432,17 @@ bnxt_dev_uninit(struct rte_eth_dev *eth_dev) {
 	}
 	rc = bnxt_hwrm_func_driver_unregister(bp, 0);
 	bnxt_free_hwrm_resources(bp);
-	rte_memzone_free((const struct rte_memzone *)bp->tx_mem_zone);
-	rte_memzone_free((const struct rte_memzone *)bp->rx_mem_zone);
+
+	if (bp->tx_mem_zone) {
+		rte_memzone_free((const struct rte_memzone *)bp->tx_mem_zone);
+		bp->tx_mem_zone = NULL;
+	}
+
+	if (bp->rx_mem_zone) {
+		rte_memzone_free((const struct rte_memzone *)bp->rx_mem_zone);
+		bp->rx_mem_zone = NULL;
+	}
+
 	if (bp->dev_stopped == 0)
 		bnxt_dev_close_op(eth_dev);
 	if (bp->pf.vf_info)
@@ -3456,7 +3469,7 @@ static int bnxt_pci_remove(struct rte_pci_device *pci_dev)
 static struct rte_pci_driver bnxt_rte_pmd = {
 	.id_table = bnxt_pci_id_map,
 	.drv_flags = RTE_PCI_DRV_NEED_MAPPING |
-		RTE_PCI_DRV_INTR_LSC,
+		RTE_PCI_DRV_INTR_LSC | RTE_PCI_DRV_INTR_RMV,
 	.probe = bnxt_pci_probe,
 	.remove = bnxt_pci_remove,
 };
-- 
2.15.1 (Apple Git-101)

* [dpdk-dev] [PATCH 06/31] net/bnxt: set ring coalesce parameters for Stratus NIC
  2018-06-19 21:30 [dpdk-dev] [PATCH 00/31] bnxt patchset Ajit Khaparde
                   ` (4 preceding siblings ...)
  2018-06-19 21:30 ` [dpdk-dev] [PATCH 05/31] net/bnxt: fix dev close operation Ajit Khaparde
@ 2018-06-19 21:30 ` Ajit Khaparde
  2018-06-19 21:30 ` [dpdk-dev] [PATCH 07/31] net/bnxt: fix HW Tx checksum offload check Ajit Khaparde
                   ` (25 subsequent siblings)
  31 siblings, 0 replies; 73+ messages in thread
From: Ajit Khaparde @ 2018-06-19 21:30 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit

Set ring coalesce parameters for the Stratus NIC.
Other SKUs don't necessarily need this.

Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/bnxt.h        | 19 ++++++++++++++++
 drivers/net/bnxt/bnxt_ethdev.c | 11 +++++++++
 drivers/net/bnxt/bnxt_hwrm.c   | 51 ++++++++++++++++++++++++++++++++++++++++++
 drivers/net/bnxt/bnxt_hwrm.h   |  2 ++
 drivers/net/bnxt/bnxt_ring.c   | 23 +++++++++++++++++++
 5 files changed, 106 insertions(+)

diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h
index d25bf78af..bd8d031de 100644
--- a/drivers/net/bnxt/bnxt.h
+++ b/drivers/net/bnxt/bnxt.h
@@ -28,6 +28,14 @@
 #define BNXT_MAX_TX_RING_DESC	4096
 #define BNXT_MAX_RX_RING_DESC	8192
 
+#define BNXT_INT_LAT_TMR_MIN			75
+#define BNXT_INT_LAT_TMR_MAX			150
+#define BNXT_NUM_CMPL_AGGR_INT			36
+#define BNXT_CMPL_AGGR_DMA_TMR			37
+#define BNXT_NUM_CMPL_DMA_AGGR			36
+#define BNXT_CMPL_AGGR_DMA_TMR_DURING_INT	50
+#define BNXT_NUM_CMPL_DMA_AGGR_DURING_INT	12
+
 struct bnxt_led_info {
 	uint8_t      led_id;
 	uint8_t      led_type;
@@ -209,6 +217,16 @@ struct bnxt_ptp_cfg {
 	uint32_t			tx_mapped_regs[BNXT_PTP_TX_REGS];
 };
 
+struct bnxt_coal {
+	uint16_t			num_cmpl_aggr_int;
+	uint16_t			num_cmpl_dma_aggr;
+	uint16_t			num_cmpl_dma_aggr_during_int;
+	uint16_t			int_lat_tmr_max;
+	uint16_t			int_lat_tmr_min;
+	uint16_t			cmpl_aggr_dma_tmr;
+	uint16_t			cmpl_aggr_dma_tmr_during_int;
+};
+
 #define BNXT_HWRM_SHORT_REQ_LEN		sizeof(struct hwrm_short_input)
 struct bnxt {
 	void				*bar0;
@@ -315,6 +333,7 @@ int bnxt_link_update_op(struct rte_eth_dev *eth_dev, int wait_to_complete);
 int bnxt_rcv_msg_from_vf(struct bnxt *bp, uint16_t vf_id, void *msg);
 
 bool is_bnxt_supported(struct rte_eth_dev *dev);
+bool bnxt_stratus_device(struct bnxt *bp);
 extern const struct rte_flow_ops bnxt_flow_ops;
 
 extern int bnxt_logtype_driver;
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index b3826360c..1b52425e6 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -3073,6 +3073,17 @@ static bool bnxt_vf_pciid(uint16_t id)
 	return false;
 }
 
+bool bnxt_stratus_device(struct bnxt *bp)
+{
+	uint16_t id = bp->pdev->id.device_id;
+
+	if (id == BROADCOM_DEV_ID_STRATUS_NIC ||
+	    id == BROADCOM_DEV_ID_STRATUS_NIC_VF1 ||
+	    id == BROADCOM_DEV_ID_STRATUS_NIC_VF2)
+		return true;
+	return false;
+}
+
 static int bnxt_init_board(struct rte_eth_dev *eth_dev)
 {
 	struct bnxt *bp = eth_dev->data->dev_private;
diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c
index f441d4610..707ee62e0 100644
--- a/drivers/net/bnxt/bnxt_hwrm.c
+++ b/drivers/net/bnxt/bnxt_hwrm.c
@@ -3835,3 +3835,54 @@ int bnxt_vnic_rss_configure(struct bnxt *bp, struct bnxt_vnic_info *vnic)
 	}
 	return 0;
 }
+
+static void bnxt_hwrm_set_coal_params(struct bnxt_coal *hw_coal,
+	struct hwrm_ring_cmpl_ring_cfg_aggint_params_input *req)
+{
+	uint16_t flags;
+
+	req->num_cmpl_aggr_int = rte_cpu_to_le_16(hw_coal->num_cmpl_aggr_int);
+
+	/* This is a 6-bit value and must not be 0, or we'll get non stop IRQ */
+	req->num_cmpl_dma_aggr = rte_cpu_to_le_16(hw_coal->num_cmpl_dma_aggr);
+
+	/* This is a 6-bit value and must not be 0, or we'll get non stop IRQ */
+	req->num_cmpl_dma_aggr_during_int =
+		rte_cpu_to_le_16(hw_coal->num_cmpl_dma_aggr_during_int);
+
+	req->int_lat_tmr_max = rte_cpu_to_le_16(hw_coal->int_lat_tmr_max);
+
+	/* min timer set to 1/2 of interrupt timer */
+	req->int_lat_tmr_min = rte_cpu_to_le_16(hw_coal->int_lat_tmr_min);
+
+	/* buf timer set to 1/4 of interrupt timer */
+	req->cmpl_aggr_dma_tmr = rte_cpu_to_le_16(hw_coal->cmpl_aggr_dma_tmr);
+
+	req->cmpl_aggr_dma_tmr_during_int =
+		rte_cpu_to_le_16(hw_coal->cmpl_aggr_dma_tmr_during_int);
+
+	flags = HWRM_RING_CMPL_RING_CFG_AGGINT_PARAMS_INPUT_FLAGS_TIMER_RESET |
+		HWRM_RING_CMPL_RING_CFG_AGGINT_PARAMS_INPUT_FLAGS_RING_IDLE;
+	req->flags = rte_cpu_to_le_16(flags);
+}
+
+int bnxt_hwrm_set_ring_coal(struct bnxt *bp,
+			struct bnxt_coal *coal, uint16_t ring_id)
+{
+	struct hwrm_ring_cmpl_ring_cfg_aggint_params_input req = {0};
+	struct hwrm_ring_cmpl_ring_cfg_aggint_params_output *resp =
+						bp->hwrm_cmd_resp_addr;
+	int rc;
+
+	/* Set ring coalesce parameters only for Stratus 100G NIC */
+	if (!bnxt_stratus_device(bp))
+		return 0;
+
+	HWRM_PREP(req, RING_CMPL_RING_CFG_AGGINT_PARAMS);
+	bnxt_hwrm_set_coal_params(coal, &req);
+	req.ring_id = rte_cpu_to_le_16(ring_id);
+	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req));
+	HWRM_CHECK_RESULT();
+	HWRM_UNLOCK();
+	return 0;
+}
diff --git a/drivers/net/bnxt/bnxt_hwrm.h b/drivers/net/bnxt/bnxt_hwrm.h
index 60a4ab16a..b83aab306 100644
--- a/drivers/net/bnxt/bnxt_hwrm.h
+++ b/drivers/net/bnxt/bnxt_hwrm.h
@@ -167,4 +167,6 @@ int bnxt_hwrm_flash_nvram(struct bnxt *bp, uint16_t dir_type,
 int bnxt_hwrm_ptp_cfg(struct bnxt *bp);
 int bnxt_vnic_rss_configure(struct bnxt *bp,
 			    struct bnxt_vnic_info *vnic);
+int bnxt_hwrm_set_ring_coal(struct bnxt *bp,
+			struct bnxt_coal *coal, uint16_t ring_id);
 #endif
diff --git a/drivers/net/bnxt/bnxt_ring.c b/drivers/net/bnxt/bnxt_ring.c
index bb9f6d1c0..81eb89d74 100644
--- a/drivers/net/bnxt/bnxt_ring.c
+++ b/drivers/net/bnxt/bnxt_ring.c
@@ -258,6 +258,24 @@ int bnxt_alloc_rings(struct bnxt *bp, uint16_t qidx,
 	return 0;
 }
 
+static void bnxt_init_dflt_coal(struct bnxt_coal *coal)
+{
+	/* Tick values in micro seconds.
+	 * 1 coal_buf x bufs_per_record = 1 completion record.
+	 */
+	coal->num_cmpl_aggr_int = BNXT_NUM_CMPL_AGGR_INT;
+	/* This is a 6-bit value and must not be 0, or we'll get non stop IRQ */
+	coal->num_cmpl_dma_aggr = BNXT_NUM_CMPL_DMA_AGGR;
+	/* This is a 6-bit value and must not be 0, or we'll get non stop IRQ */
+	coal->num_cmpl_dma_aggr_during_int = BNXT_NUM_CMPL_DMA_AGGR_DURING_INT;
+	coal->int_lat_tmr_max = BNXT_INT_LAT_TMR_MAX;
+	/* min timer set to 1/2 of interrupt timer */
+	coal->int_lat_tmr_min = BNXT_INT_LAT_TMR_MIN;
+	/* buf timer set to 1/4 of interrupt timer */
+	coal->cmpl_aggr_dma_tmr = BNXT_CMPL_AGGR_DMA_TMR;
+	coal->cmpl_aggr_dma_tmr_during_int = BNXT_CMPL_AGGR_DMA_TMR_DURING_INT;
+}
+
 /* ring_grp usage:
  * [0] = default completion ring
  * [1 -> +rx_cp_nr_rings] = rx_cp, rx rings
@@ -265,9 +283,12 @@ int bnxt_alloc_rings(struct bnxt *bp, uint16_t qidx,
  */
 int bnxt_alloc_hwrm_rings(struct bnxt *bp)
 {
+	struct bnxt_coal coal;
 	unsigned int i;
 	int rc = 0;
 
+	bnxt_init_dflt_coal(&coal);
+
 	for (i = 0; i < bp->rx_cp_nr_rings; i++) {
 		struct bnxt_rx_queue *rxq = bp->rx_queues[i];
 		struct bnxt_cp_ring_info *cpr = rxq->cp_ring;
@@ -291,6 +312,7 @@ int bnxt_alloc_hwrm_rings(struct bnxt *bp)
 		cpr->cp_doorbell = (char *)bp->doorbell_base + i * 0x80;
 		bp->grp_info[i].cp_fw_ring_id = cp_ring->fw_ring_id;
 		B_CP_DIS_DB(cpr, cpr->cp_raw_cons);
+		bnxt_hwrm_set_ring_coal(bp, &coal, cp_ring->fw_ring_id);
 
 		if (!i) {
 			/*
@@ -379,6 +401,7 @@ int bnxt_alloc_hwrm_rings(struct bnxt *bp)
 
 		txr->tx_doorbell = (char *)bp->doorbell_base + idx * 0x80;
 		txq->index = idx;
+		bnxt_hwrm_set_ring_coal(bp, &coal, cp_ring->fw_ring_id);
 	}
 
 err_out:
-- 
2.15.1 (Apple Git-101)

* [dpdk-dev] [PATCH 07/31] net/bnxt: fix HW Tx checksum offload check
  2018-06-19 21:30 [dpdk-dev] [PATCH 00/31] bnxt patchset Ajit Khaparde
                   ` (5 preceding siblings ...)
  2018-06-19 21:30 ` [dpdk-dev] [PATCH 06/31] net/bnxt: set ring coalesce parameters for Stratus NIC Ajit Khaparde
@ 2018-06-19 21:30 ` Ajit Khaparde
  2018-06-19 21:30 ` [dpdk-dev] [PATCH 08/31] net/bnxt: add support for VF id 0xd800 Ajit Khaparde
                   ` (24 subsequent siblings)
  31 siblings, 0 replies; 73+ messages in thread
From: Ajit Khaparde @ 2018-06-19 21:30 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, stable, Xiaoxin Peng

Add more checks for checksum calculation offload.
Also check for tunnel frames and select the proper
buffer descriptor size.
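
For context, the flag combinations handled below are produced by
applications roughly as in this hedged example (header lengths are
illustrative, not taken from this patch):

  #include <rte_mbuf.h>

  /* Illustrative: request outer-IP + inner-IP + inner-TCP checksum offload
   * for a VXLAN-encapsulated TCP packet. */
  static void demo_request_tunnel_cksum(struct rte_mbuf *m)
  {
          m->ol_flags |= PKT_TX_OUTER_IP_CKSUM |  /* outer IPv4 checksum */
                         PKT_TX_IP_CKSUM |        /* inner IPv4 checksum */
                         PKT_TX_TCP_CKSUM |       /* inner TCP checksum  */
                         PKT_TX_TUNNEL_VXLAN;     /* tunnel type hint    */

          m->outer_l2_len = 14;           /* outer Ethernet */
          m->outer_l3_len = 20;           /* outer IPv4     */
          m->l2_len = 8 + 8 + 14;         /* outer UDP + VXLAN + inner Ethernet */
          m->l3_len = 20;                 /* inner IPv4     */
          m->l4_len = 20;                 /* inner TCP      */
  }

The PMD has to match each such combination (e.g. PKT_TX_OIP_IIP_TCP_CKSUM)
exactly, which is why the checks below compare the masked ol_flags against
the full flag set instead of testing a single bit.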

Fixes: 6eb3cc2294fd ("net/bnxt: add initial Tx code")
Cc: stable@dpdk.org

Signed-off-by: Xiaoxin Peng <xiaoxin.peng@broadcom.com>
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Reviewed-by: Jason He <jason.he@broadcom.com>
Reviewed-by: Qingmin Liu <qingmin.liu@broadcom.com>
---
 drivers/net/bnxt/bnxt_txr.c | 51 ++++++++++++++++++++++++++++++++++++++++++---
 drivers/net/bnxt/bnxt_txr.h | 10 +++++++++
 2 files changed, 58 insertions(+), 3 deletions(-)

diff --git a/drivers/net/bnxt/bnxt_txr.c b/drivers/net/bnxt/bnxt_txr.c
index 0fdf0fd08..68645b2f7 100644
--- a/drivers/net/bnxt/bnxt_txr.c
+++ b/drivers/net/bnxt/bnxt_txr.c
@@ -135,7 +135,9 @@ static uint16_t bnxt_start_xmit(struct rte_mbuf *tx_pkt,
 
 	if (tx_pkt->ol_flags & (PKT_TX_TCP_SEG | PKT_TX_TCP_CKSUM |
 				PKT_TX_UDP_CKSUM | PKT_TX_IP_CKSUM |
-				PKT_TX_VLAN_PKT | PKT_TX_OUTER_IP_CKSUM))
+				PKT_TX_VLAN_PKT | PKT_TX_OUTER_IP_CKSUM |
+				PKT_TX_TUNNEL_GRE | PKT_TX_TUNNEL_VXLAN |
+				PKT_TX_TUNNEL_GENEVE))
 		long_bd = true;
 
 	tx_buf = &txr->tx_buf_ring[txr->tx_prod];
@@ -203,16 +205,46 @@ static uint16_t bnxt_start_xmit(struct rte_mbuf *tx_pkt,
 			/* Outer IP, Inner IP, Inner TCP/UDP CSO */
 			txbd1->lflags |= TX_BD_FLG_TIP_IP_TCP_UDP_CHKSUM;
 			txbd1->mss = 0;
+		} else if ((tx_pkt->ol_flags & PKT_TX_OIP_IIP_TCP_CKSUM) ==
+			   PKT_TX_OIP_IIP_TCP_CKSUM) {
+			/* Outer IP, Inner IP, Inner TCP/UDP CSO */
+			txbd1->lflags |= TX_BD_FLG_TIP_IP_TCP_UDP_CHKSUM;
+			txbd1->mss = 0;
+		} else if ((tx_pkt->ol_flags & PKT_TX_OIP_IIP_UDP_CKSUM) ==
+			   PKT_TX_OIP_IIP_UDP_CKSUM) {
+			/* Outer IP, Inner IP, Inner TCP/UDP CSO */
+			txbd1->lflags |= TX_BD_FLG_TIP_IP_TCP_UDP_CHKSUM;
+			txbd1->mss = 0;
 		} else if ((tx_pkt->ol_flags & PKT_TX_IIP_TCP_UDP_CKSUM) ==
 			   PKT_TX_IIP_TCP_UDP_CKSUM) {
 			/* (Inner) IP, (Inner) TCP/UDP CSO */
 			txbd1->lflags |= TX_BD_FLG_IP_TCP_UDP_CHKSUM;
 			txbd1->mss = 0;
+		} else if ((tx_pkt->ol_flags & PKT_TX_IIP_UDP_CKSUM) ==
+			   PKT_TX_IIP_UDP_CKSUM) {
+			/* (Inner) IP, (Inner) TCP/UDP CSO */
+			txbd1->lflags |= TX_BD_FLG_IP_TCP_UDP_CHKSUM;
+			txbd1->mss = 0;
+		} else if ((tx_pkt->ol_flags & PKT_TX_IIP_TCP_CKSUM) ==
+			   PKT_TX_IIP_TCP_CKSUM) {
+			/* (Inner) IP, (Inner) TCP/UDP CSO */
+			txbd1->lflags |= TX_BD_FLG_IP_TCP_UDP_CHKSUM;
+			txbd1->mss = 0;
 		} else if ((tx_pkt->ol_flags & PKT_TX_OIP_TCP_UDP_CKSUM) ==
 			   PKT_TX_OIP_TCP_UDP_CKSUM) {
 			/* Outer IP, (Inner) TCP/UDP CSO */
 			txbd1->lflags |= TX_BD_FLG_TIP_TCP_UDP_CHKSUM;
 			txbd1->mss = 0;
+		} else if ((tx_pkt->ol_flags & PKT_TX_OIP_UDP_CKSUM) ==
+			   PKT_TX_OIP_UDP_CKSUM) {
+			/* Outer IP, (Inner) TCP/UDP CSO */
+			txbd1->lflags |= TX_BD_FLG_TIP_TCP_UDP_CHKSUM;
+			txbd1->mss = 0;
+		} else if ((tx_pkt->ol_flags & PKT_TX_OIP_TCP_CKSUM) ==
+			   PKT_TX_OIP_TCP_CKSUM) {
+			/* Outer IP, (Inner) TCP/UDP CSO */
+			txbd1->lflags |= TX_BD_FLG_TIP_TCP_UDP_CHKSUM;
+			txbd1->mss = 0;
 		} else if ((tx_pkt->ol_flags & PKT_TX_OIP_IIP_CKSUM) ==
 			   PKT_TX_OIP_IIP_CKSUM) {
 			/* Outer IP, Inner IP CSO */
@@ -223,11 +255,23 @@ static uint16_t bnxt_start_xmit(struct rte_mbuf *tx_pkt,
 			/* TCP/UDP CSO */
 			txbd1->lflags |= TX_BD_LONG_LFLAGS_TCP_UDP_CHKSUM;
 			txbd1->mss = 0;
-		} else if (tx_pkt->ol_flags & PKT_TX_IP_CKSUM) {
+		} else if ((tx_pkt->ol_flags & PKT_TX_TCP_CKSUM) ==
+			   PKT_TX_TCP_CKSUM) {
+			/* TCP/UDP CSO */
+			txbd1->lflags |= TX_BD_LONG_LFLAGS_TCP_UDP_CHKSUM;
+			txbd1->mss = 0;
+		} else if ((tx_pkt->ol_flags & PKT_TX_UDP_CKSUM) ==
+			   PKT_TX_UDP_CKSUM) {
+			/* TCP/UDP CSO */
+			txbd1->lflags |= TX_BD_LONG_LFLAGS_TCP_UDP_CHKSUM;
+			txbd1->mss = 0;
+		} else if ((tx_pkt->ol_flags & PKT_TX_IP_CKSUM) ==
+			   PKT_TX_IP_CKSUM) {
 			/* IP CSO */
 			txbd1->lflags |= TX_BD_LONG_LFLAGS_IP_CHKSUM;
 			txbd1->mss = 0;
-		} else if (tx_pkt->ol_flags & PKT_TX_OUTER_IP_CKSUM) {
+		} else if ((tx_pkt->ol_flags & PKT_TX_OUTER_IP_CKSUM) ==
+			   PKT_TX_OUTER_IP_CKSUM) {
 			/* IP CSO */
 			txbd1->lflags |= TX_BD_LONG_LFLAGS_T_IP_CHKSUM;
 			txbd1->mss = 0;
@@ -251,6 +295,7 @@ static uint16_t bnxt_start_xmit(struct rte_mbuf *tx_pkt,
 	}
 
 	txbd->flags_type |= TX_BD_LONG_FLAGS_PACKET_END;
+	txbd1->lflags = rte_cpu_to_le_32(txbd1->lflags);
 
 	txr->tx_prod = RING_NEXT(txr->tx_ring_struct, txr->tx_prod);
 
diff --git a/drivers/net/bnxt/bnxt_txr.h b/drivers/net/bnxt/bnxt_txr.h
index 15c7e5a09..7f3c7cdb0 100644
--- a/drivers/net/bnxt/bnxt_txr.h
+++ b/drivers/net/bnxt/bnxt_txr.h
@@ -45,10 +45,20 @@ int bnxt_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id);
 
 #define PKT_TX_OIP_IIP_TCP_UDP_CKSUM	(PKT_TX_TCP_CKSUM | PKT_TX_UDP_CKSUM | \
 					PKT_TX_IP_CKSUM | PKT_TX_OUTER_IP_CKSUM)
+#define PKT_TX_OIP_IIP_UDP_CKSUM	(PKT_TX_UDP_CKSUM | \
+					PKT_TX_IP_CKSUM | PKT_TX_OUTER_IP_CKSUM)
+#define PKT_TX_OIP_IIP_TCP_CKSUM	(PKT_TX_TCP_CKSUM | \
+					PKT_TX_IP_CKSUM | PKT_TX_OUTER_IP_CKSUM)
 #define PKT_TX_IIP_TCP_UDP_CKSUM	(PKT_TX_TCP_CKSUM | PKT_TX_UDP_CKSUM | \
 					PKT_TX_IP_CKSUM)
+#define PKT_TX_IIP_TCP_CKSUM		(PKT_TX_TCP_CKSUM | PKT_TX_IP_CKSUM)
+#define PKT_TX_IIP_UDP_CKSUM		(PKT_TX_UDP_CKSUM | PKT_TX_IP_CKSUM)
 #define PKT_TX_OIP_TCP_UDP_CKSUM	(PKT_TX_TCP_CKSUM | PKT_TX_UDP_CKSUM | \
 					PKT_TX_OUTER_IP_CKSUM)
+#define PKT_TX_OIP_UDP_CKSUM		(PKT_TX_UDP_CKSUM | \
+					PKT_TX_OUTER_IP_CKSUM)
+#define PKT_TX_OIP_TCP_CKSUM		(PKT_TX_TCP_CKSUM | \
+					PKT_TX_OUTER_IP_CKSUM)
 #define PKT_TX_OIP_IIP_CKSUM		(PKT_TX_IP_CKSUM |	\
 					 PKT_TX_OUTER_IP_CKSUM)
 #define PKT_TX_TCP_UDP_CKSUM		(PKT_TX_TCP_CKSUM | PKT_TX_UDP_CKSUM)
-- 
2.15.1 (Apple Git-101)

* [dpdk-dev] [PATCH 08/31] net/bnxt: add support for VF id 0xd800
  2018-06-19 21:30 [dpdk-dev] [PATCH 00/31] bnxt patchset Ajit Khaparde
                   ` (6 preceding siblings ...)
  2018-06-19 21:30 ` [dpdk-dev] [PATCH 07/31] net/bnxt: fix HW Tx checksum offload check Ajit Khaparde
@ 2018-06-19 21:30 ` Ajit Khaparde
  2018-06-26 15:28   ` Ferruh Yigit
  2018-06-19 21:30 ` [dpdk-dev] [PATCH 09/31] net/bnxt: fix rx/tx queue start/stop operations Ajit Khaparde
                   ` (23 subsequent siblings)
  31 siblings, 1 reply; 73+ messages in thread
From: Ajit Khaparde @ 2018-06-19 21:30 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit

Add support for StingRay VF device 0xd800

Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/bnxt_ethdev.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index 1b52425e6..5d7f29cf4 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -73,6 +73,7 @@ int bnxt_logtype_driver;
 #define BROADCOM_DEV_ID_58802 0xd802
 #define BROADCOM_DEV_ID_58804 0xd804
 #define BROADCOM_DEV_ID_58808 0x16f0
+#define BROADCOM_DEV_ID_58802_VF 0xd800
 
 static const struct rte_pci_id bnxt_pci_id_map[] = {
 	{ RTE_PCI_DEVICE(PCI_VENDOR_ID_BROADCOM,
@@ -116,6 +117,7 @@ static const struct rte_pci_id bnxt_pci_id_map[] = {
 	{ RTE_PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, BROADCOM_DEV_ID_58802) },
 	{ RTE_PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, BROADCOM_DEV_ID_58804) },
 	{ RTE_PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, BROADCOM_DEV_ID_58808) },
+	{ RTE_PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, BROADCOM_DEV_ID_58802_VF) },
 	{ .vendor_id = 0, /* sentinel */ },
 };
 
@@ -3068,7 +3070,8 @@ static bool bnxt_vf_pciid(uint16_t id)
 	    id == BROADCOM_DEV_ID_5741X_VF ||
 	    id == BROADCOM_DEV_ID_57414_VF ||
 	    id == BROADCOM_DEV_ID_STRATUS_NIC_VF1 ||
-	    id == BROADCOM_DEV_ID_STRATUS_NIC_VF2)
+	    id == BROADCOM_DEV_ID_STRATUS_NIC_VF2 ||
+	    id == BROADCOM_DEV_ID_58802_VF)
 		return true;
 	return false;
 }
-- 
2.15.1 (Apple Git-101)

* [dpdk-dev] [PATCH 09/31] net/bnxt: fix rx/tx queue start/stop operations
  2018-06-19 21:30 [dpdk-dev] [PATCH 00/31] bnxt patchset Ajit Khaparde
                   ` (7 preceding siblings ...)
  2018-06-19 21:30 ` [dpdk-dev] [PATCH 08/31] net/bnxt: add support for VF id 0xd800 Ajit Khaparde
@ 2018-06-19 21:30 ` Ajit Khaparde
  2018-06-19 21:30 ` [dpdk-dev] [PATCH 10/31] net/bnxt: code cleanup style of bnxt cpr Ajit Khaparde
                   ` (22 subsequent siblings)
  31 siblings, 0 replies; 73+ messages in thread
From: Ajit Khaparde @ 2018-06-19 21:30 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, Somnath Kotur

Packets destined for the to-be-stopped queue should not be dropped
(neither in HW nor in the driver), so re-program the RSS Table without
this queue on stop and add it back to the table on start unless it
is a Representor VF.

Since 0th entry is used for default ring, use fw_grp_id + 1 to change
the RSS table population logic by programming valid IDs instead of the
default zeroth entry in case of an invalid fw_grp_id.

Destroy and recreate the trio of Rx rings (compl, Rx, AG) every time in
start so that HW is in sync with software.
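
From the application's point of view this is exercised through the standard
queue start/stop API; a short hedged example (error handling trimmed):

  #include <rte_ethdev.h>

  /* Illustrative: stop and later restart one Rx queue at runtime. Queue 0
   * cannot be stopped on bnxt because its completion ring also serves as
   * the default ring for async events and HWRM forwarded responses. */
  static int demo_toggle_rx_queue(uint16_t port_id, uint16_t queue_id)
  {
          int rc;

          rc = rte_eth_dev_rx_queue_stop(port_id, queue_id);
          if (rc)
                  return rc;      /* e.g. -EINVAL for queue 0 */

          /* While stopped, RSS steers traffic away from this queue. */

          return rte_eth_dev_rx_queue_start(port_id, queue_id);
  }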

Fixes: 9b63c6fd70e3 ("net/bnxt: support Rx/Tx queue start/stop")

Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Ray Jui <ray.jui@broadcom.com>
Reviewed-by: Scott Branden <scott.branden@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/bnxt.h        |  1 +
 drivers/net/bnxt/bnxt_ethdev.c | 10 ++++-
 drivers/net/bnxt/bnxt_hwrm.c   | 94 +++++++++++++++++++-----------------------
 drivers/net/bnxt/bnxt_hwrm.h   |  1 +
 drivers/net/bnxt/bnxt_ring.c   | 92 +++++++++++++++++++++++++++++++++++++++++
 drivers/net/bnxt/bnxt_ring.h   |  1 +
 drivers/net/bnxt/bnxt_rxq.c    | 54 +++++++++++++++++++-----
 drivers/net/bnxt/bnxt_rxq.h    |  4 ++
 drivers/net/bnxt/bnxt_rxr.c    | 14 +++++--
 9 files changed, 204 insertions(+), 67 deletions(-)

diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h
index bd8d031de..f92e98d83 100644
--- a/drivers/net/bnxt/bnxt.h
+++ b/drivers/net/bnxt/bnxt.h
@@ -27,6 +27,7 @@
 #define BNXT_MIN_RING_DESC	16
 #define BNXT_MAX_TX_RING_DESC	4096
 #define BNXT_MAX_RX_RING_DESC	8192
+#define BNXT_DB_SIZE		0x80
 
 #define BNXT_INT_LAT_TMR_MIN			75
 #define BNXT_INT_LAT_TMR_MAX			150
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index 5d7f29cf4..d66a29758 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -198,13 +198,14 @@ static int bnxt_alloc_mem(struct bnxt *bp)
 
 static int bnxt_init_chip(struct bnxt *bp)
 {
-	unsigned int i;
+	struct bnxt_rx_queue *rxq;
 	struct rte_eth_link new;
 	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(bp->eth_dev);
 	struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
 	uint32_t intr_vector = 0;
 	uint32_t queue_id, base = BNXT_MISC_VEC_ID;
 	uint32_t vec = BNXT_MISC_VEC_ID;
+	unsigned int i, j;
 	int rc;
 
 	/* disable uio/vfio intr/eventfd mapping */
@@ -278,6 +279,13 @@ static int bnxt_init_chip(struct bnxt *bp)
 			goto err_out;
 		}
 
+		for (j = 0; j < bp->rx_nr_rings; j++) {
+			rxq = bp->eth_dev->data->rx_queues[j];
+
+			if (rxq->rx_deferred_start)
+				rxq->vnic->fw_grp_ids[j] = INVALID_HW_RING_ID;
+		}
+
 		rc = bnxt_vnic_rss_configure(bp, vnic);
 		if (rc) {
 			PMD_DRV_LOG(ERR,
diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c
index 707ee62e0..64687a69b 100644
--- a/drivers/net/bnxt/bnxt_hwrm.c
+++ b/drivers/net/bnxt/bnxt_hwrm.c
@@ -1817,8 +1817,7 @@ int bnxt_free_all_hwrm_ring_grps(struct bnxt *bp)
 	return rc;
 }
 
-static void bnxt_free_cp_ring(struct bnxt *bp, struct bnxt_cp_ring_info *cpr,
-				unsigned int idx __rte_unused)
+static void bnxt_free_cp_ring(struct bnxt *bp, struct bnxt_cp_ring_info *cpr)
 {
 	struct bnxt_ring *cp_ring = cpr->cp_ring_struct;
 
@@ -1830,17 +1829,52 @@ static void bnxt_free_cp_ring(struct bnxt *bp, struct bnxt_cp_ring_info *cpr,
 	cpr->cp_raw_cons = 0;
 }
 
+void bnxt_free_hwrm_rx_ring(struct bnxt *bp, int queue_index)
+{
+	struct bnxt_rx_queue *rxq = bp->rx_queues[queue_index];
+	struct bnxt_rx_ring_info *rxr = rxq->rx_ring;
+	struct bnxt_ring *ring = rxr->rx_ring_struct;
+	struct bnxt_cp_ring_info *cpr = rxq->cp_ring;
+
+	if (ring->fw_ring_id != INVALID_HW_RING_ID) {
+		bnxt_hwrm_ring_free(bp, ring,
+				    HWRM_RING_FREE_INPUT_RING_TYPE_RX);
+		ring->fw_ring_id = INVALID_HW_RING_ID;
+		bp->grp_info[queue_index].rx_fw_ring_id = INVALID_HW_RING_ID;
+		memset(rxr->rx_desc_ring, 0,
+		       rxr->rx_ring_struct->ring_size *
+		       sizeof(*rxr->rx_desc_ring));
+		memset(rxr->rx_buf_ring, 0,
+		       rxr->rx_ring_struct->ring_size *
+		       sizeof(*rxr->rx_buf_ring));
+		rxr->rx_prod = 0;
+	}
+	ring = rxr->ag_ring_struct;
+	if (ring->fw_ring_id != INVALID_HW_RING_ID) {
+		bnxt_hwrm_ring_free(bp, ring,
+				    HWRM_RING_FREE_INPUT_RING_TYPE_RX);
+		ring->fw_ring_id = INVALID_HW_RING_ID;
+		memset(rxr->ag_buf_ring, 0,
+		       rxr->ag_ring_struct->ring_size *
+		       sizeof(*rxr->ag_buf_ring));
+		rxr->ag_prod = 0;
+		bp->grp_info[queue_index].ag_fw_ring_id = INVALID_HW_RING_ID;
+	}
+	if (cpr->cp_ring_struct->fw_ring_id != INVALID_HW_RING_ID)
+		bnxt_free_cp_ring(bp, cpr);
+
+	bp->grp_info[queue_index].cp_fw_ring_id = INVALID_HW_RING_ID;
+}
+
 int bnxt_free_all_hwrm_rings(struct bnxt *bp)
 {
 	unsigned int i;
-	int rc = 0;
 
 	for (i = 0; i < bp->tx_cp_nr_rings; i++) {
 		struct bnxt_tx_queue *txq = bp->tx_queues[i];
 		struct bnxt_tx_ring_info *txr = txq->tx_ring;
 		struct bnxt_ring *ring = txr->tx_ring_struct;
 		struct bnxt_cp_ring_info *cpr = txq->cp_ring;
-		unsigned int idx = bp->rx_cp_nr_rings + i;
 
 		if (ring->fw_ring_id != INVALID_HW_RING_ID) {
 			bnxt_hwrm_ring_free(bp, ring,
@@ -1856,59 +1890,15 @@ int bnxt_free_all_hwrm_rings(struct bnxt *bp)
 			txr->tx_cons = 0;
 		}
 		if (cpr->cp_ring_struct->fw_ring_id != INVALID_HW_RING_ID) {
-			bnxt_free_cp_ring(bp, cpr, idx);
+			bnxt_free_cp_ring(bp, cpr);
 			cpr->cp_ring_struct->fw_ring_id = INVALID_HW_RING_ID;
 		}
 	}
 
-	for (i = 0; i < bp->rx_cp_nr_rings; i++) {
-		struct bnxt_rx_queue *rxq = bp->rx_queues[i];
-		struct bnxt_rx_ring_info *rxr = rxq->rx_ring;
-		struct bnxt_ring *ring = rxr->rx_ring_struct;
-		struct bnxt_cp_ring_info *cpr = rxq->cp_ring;
+	for (i = 0; i < bp->rx_cp_nr_rings; i++)
+		bnxt_free_hwrm_rx_ring(bp, i);
 
-		if (ring->fw_ring_id != INVALID_HW_RING_ID) {
-			bnxt_hwrm_ring_free(bp, ring,
-					HWRM_RING_FREE_INPUT_RING_TYPE_RX);
-			ring->fw_ring_id = INVALID_HW_RING_ID;
-			bp->grp_info[i].rx_fw_ring_id = INVALID_HW_RING_ID;
-			memset(rxr->rx_desc_ring, 0,
-					rxr->rx_ring_struct->ring_size *
-					sizeof(*rxr->rx_desc_ring));
-			memset(rxr->rx_buf_ring, 0,
-					rxr->rx_ring_struct->ring_size *
-					sizeof(*rxr->rx_buf_ring));
-			rxr->rx_prod = 0;
-		}
-		ring = rxr->ag_ring_struct;
-		if (ring->fw_ring_id != INVALID_HW_RING_ID) {
-			bnxt_hwrm_ring_free(bp, ring,
-					    HWRM_RING_FREE_INPUT_RING_TYPE_RX);
-			ring->fw_ring_id = INVALID_HW_RING_ID;
-			memset(rxr->ag_buf_ring, 0,
-			       rxr->ag_ring_struct->ring_size *
-			       sizeof(*rxr->ag_buf_ring));
-			rxr->ag_prod = 0;
-			bp->grp_info[i].ag_fw_ring_id = INVALID_HW_RING_ID;
-		}
-		if (cpr->cp_ring_struct->fw_ring_id != INVALID_HW_RING_ID) {
-			bnxt_free_cp_ring(bp, cpr, i);
-			bp->grp_info[i].cp_fw_ring_id = INVALID_HW_RING_ID;
-			cpr->cp_ring_struct->fw_ring_id = INVALID_HW_RING_ID;
-		}
-	}
-
-	/* Default completion ring */
-	{
-		struct bnxt_cp_ring_info *cpr = bp->def_cp_ring;
-
-		if (cpr->cp_ring_struct->fw_ring_id != INVALID_HW_RING_ID) {
-			bnxt_free_cp_ring(bp, cpr, 0);
-			cpr->cp_ring_struct->fw_ring_id = INVALID_HW_RING_ID;
-		}
-	}
-
-	return rc;
+	return 0;
 }
 
 int bnxt_alloc_all_hwrm_ring_grps(struct bnxt *bp)
diff --git a/drivers/net/bnxt/bnxt_hwrm.h b/drivers/net/bnxt/bnxt_hwrm.h
index b83aab306..4a237c4b4 100644
--- a/drivers/net/bnxt/bnxt_hwrm.h
+++ b/drivers/net/bnxt/bnxt_hwrm.h
@@ -107,6 +107,7 @@ int bnxt_set_hwrm_vnic_filters(struct bnxt *bp, struct bnxt_vnic_info *vnic);
 int bnxt_clear_hwrm_vnic_filters(struct bnxt *bp, struct bnxt_vnic_info *vnic);
 void bnxt_free_all_hwrm_resources(struct bnxt *bp);
 void bnxt_free_hwrm_resources(struct bnxt *bp);
+void bnxt_free_hwrm_rx_ring(struct bnxt *bp, int queue_index);
 int bnxt_alloc_hwrm_resources(struct bnxt *bp);
 int bnxt_get_hwrm_link_config(struct bnxt *bp, struct rte_eth_link *link);
 int bnxt_set_hwrm_link_config(struct bnxt *bp, bool link_up);
diff --git a/drivers/net/bnxt/bnxt_ring.c b/drivers/net/bnxt/bnxt_ring.c
index 81eb89d74..fcbd6bc6e 100644
--- a/drivers/net/bnxt/bnxt_ring.c
+++ b/drivers/net/bnxt/bnxt_ring.c
@@ -276,6 +276,98 @@ static void bnxt_init_dflt_coal(struct bnxt_coal *coal)
 	coal->cmpl_aggr_dma_tmr_during_int = BNXT_CMPL_AGGR_DMA_TMR_DURING_INT;
 }
 
+int bnxt_alloc_hwrm_rx_ring(struct bnxt *bp, int queue_index)
+{
+	struct rte_pci_device *pci_dev = bp->pdev;
+	struct bnxt_rx_queue *rxq = bp->rx_queues[queue_index];
+	struct bnxt_cp_ring_info *cpr = rxq->cp_ring;
+	struct bnxt_ring *cp_ring = cpr->cp_ring_struct;
+	struct bnxt_rx_ring_info *rxr = rxq->rx_ring;
+	struct bnxt_ring *ring = rxr->rx_ring_struct;
+	unsigned int map_idx = queue_index + bp->rx_cp_nr_rings;
+	int rc = 0;
+
+	bp->grp_info[queue_index].fw_stats_ctx = cpr->hw_stats_ctx_id;
+
+	/* Rx cmpl */
+	rc = bnxt_hwrm_ring_alloc(bp, cp_ring,
+				  HWRM_RING_ALLOC_INPUT_RING_TYPE_L2_CMPL,
+				  queue_index, HWRM_NA_SIGNATURE,
+				  HWRM_NA_SIGNATURE);
+	if (rc)
+		goto err_out;
+
+	cpr->cp_doorbell = (char *)pci_dev->mem_resource[2].addr +
+		queue_index * BNXT_DB_SIZE;
+	bp->grp_info[queue_index].cp_fw_ring_id = cp_ring->fw_ring_id;
+	B_CP_DIS_DB(cpr, cpr->cp_raw_cons);
+
+	if (!queue_index) {
+		/*
+		 * In order to save completion resources, use the first
+		 * completion ring from PF or VF as the default completion ring
+		 * for async event and HWRM forward response handling.
+		 */
+		bp->def_cp_ring = cpr;
+		rc = bnxt_hwrm_set_async_event_cr(bp);
+		if (rc)
+			goto err_out;
+	}
+	/* Rx ring */
+	rc = bnxt_hwrm_ring_alloc(bp, ring, HWRM_RING_ALLOC_INPUT_RING_TYPE_RX,
+				  queue_index, cpr->hw_stats_ctx_id,
+				  cp_ring->fw_ring_id);
+	if (rc)
+		goto err_out;
+
+	rxr->rx_prod = 0;
+	rxr->rx_doorbell = (char *)pci_dev->mem_resource[2].addr +
+		queue_index * BNXT_DB_SIZE;
+	bp->grp_info[queue_index].rx_fw_ring_id = ring->fw_ring_id;
+	B_RX_DB(rxr->rx_doorbell, rxr->rx_prod);
+
+	ring = rxr->ag_ring_struct;
+	/* Agg ring */
+	if (!ring)
+		PMD_DRV_LOG(ERR, "Alloc AGG Ring is NULL!\n");
+
+	rc = bnxt_hwrm_ring_alloc(bp, ring, HWRM_RING_ALLOC_INPUT_RING_TYPE_RX,
+				  map_idx, HWRM_NA_SIGNATURE,
+				  cp_ring->fw_ring_id);
+	if (rc)
+		goto err_out;
+
+	PMD_DRV_LOG(DEBUG, "Alloc AGG Done!\n");
+	rxr->ag_prod = 0;
+	rxr->ag_doorbell = (char *)pci_dev->mem_resource[2].addr +
+		map_idx * BNXT_DB_SIZE;
+	bp->grp_info[queue_index].ag_fw_ring_id = ring->fw_ring_id;
+	B_RX_DB(rxr->ag_doorbell, rxr->ag_prod);
+
+	rxq->rx_buf_use_size = BNXT_MAX_MTU + ETHER_HDR_LEN +
+		ETHER_CRC_LEN + (2 * VLAN_TAG_SIZE);
+
+	if (bp->eth_dev->data->rx_queue_state[queue_index] ==
+	    RTE_ETH_QUEUE_STATE_STARTED) {
+		if (bnxt_init_one_rx_ring(rxq)) {
+			RTE_LOG(ERR, PMD,
+				"bnxt_init_one_rx_ring failed!\n");
+			bnxt_rx_queue_release_op(rxq);
+			rc = -ENOMEM;
+			goto err_out;
+		}
+		B_RX_DB(rxr->rx_doorbell, rxr->rx_prod);
+		B_RX_DB(rxr->ag_doorbell, rxr->ag_prod);
+	}
+	rxq->index = queue_index;
+	PMD_DRV_LOG(INFO,
+		    "queue %d, rx_deferred_start %d, state %d!\n",
+		    queue_index, rxq->rx_deferred_start,
+		    bp->eth_dev->data->rx_queue_state[queue_index]);
+
+err_out:
+	return rc;
+}
 /* ring_grp usage:
  * [0] = default completion ring
  * [1 -> +rx_cp_nr_rings] = rx_cp, rx rings
diff --git a/drivers/net/bnxt/bnxt_ring.h b/drivers/net/bnxt/bnxt_ring.h
index 65bf3e2f5..1446d784f 100644
--- a/drivers/net/bnxt/bnxt_ring.h
+++ b/drivers/net/bnxt/bnxt_ring.h
@@ -70,6 +70,7 @@ int bnxt_alloc_rings(struct bnxt *bp, uint16_t qidx,
 			    struct bnxt_rx_queue *rxq,
 			    struct bnxt_cp_ring_info *cp_ring_info,
 			    const char *suffix);
+int bnxt_alloc_hwrm_rx_ring(struct bnxt *bp, int queue_index);
 int bnxt_alloc_hwrm_rings(struct bnxt *bp);
 
 #endif
diff --git a/drivers/net/bnxt/bnxt_rxq.c b/drivers/net/bnxt/bnxt_rxq.c
index c55ddec4b..f405e2575 100644
--- a/drivers/net/bnxt/bnxt_rxq.c
+++ b/drivers/net/bnxt/bnxt_rxq.c
@@ -199,12 +199,14 @@ int bnxt_mq_rx_configure(struct bnxt *bp)
 	return rc;
 }
 
-static void bnxt_rx_queue_release_mbufs(struct bnxt_rx_queue *rxq)
+void bnxt_rx_queue_release_mbufs(struct bnxt_rx_queue *rxq)
 {
 	struct bnxt_sw_rx_bd *sw_ring;
 	struct bnxt_tpa_info *tpa_info;
 	uint16_t i;
 
+	rte_spinlock_lock(&rxq->lock);
+
 	if (rxq) {
 		sw_ring = rxq->rx_ring->rx_buf_ring;
 		if (sw_ring) {
@@ -239,6 +241,8 @@ static void bnxt_rx_queue_release_mbufs(struct bnxt_rx_queue *rxq)
 			}
 		}
 	}
+
+	rte_spinlock_unlock(&rxq->lock);
 }
 
 void bnxt_free_rx_mbufs(struct bnxt *bp)
@@ -286,6 +290,7 @@ int bnxt_rx_queue_setup_op(struct rte_eth_dev *eth_dev,
 	uint64_t rx_offloads = eth_dev->data->dev_conf.rxmode.offloads;
 	struct bnxt_rx_queue *rxq;
 	int rc = 0;
+	uint8_t queue_state;
 
 	if (queue_idx >= bp->max_rx_rings) {
 		PMD_DRV_LOG(ERR,
@@ -341,6 +346,11 @@ int bnxt_rx_queue_setup_op(struct rte_eth_dev *eth_dev,
 	}
 	rte_atomic64_init(&rxq->rx_mbuf_alloc_fail);
 
+	rxq->rx_deferred_start = rx_conf->rx_deferred_start;
+	queue_state = rxq->rx_deferred_start ? RTE_ETH_QUEUE_STATE_STOPPED :
+						RTE_ETH_QUEUE_STATE_STARTED;
+	eth_dev->data->rx_queue_state[queue_idx] = queue_state;
+	rte_spinlock_init(&rxq->lock);
 out:
 	return rc;
 }
@@ -389,6 +399,7 @@ int bnxt_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 	struct rte_eth_conf *dev_conf = &bp->eth_dev->data->dev_conf;
 	struct bnxt_rx_queue *rxq = bp->rx_queues[rx_queue_id];
 	struct bnxt_vnic_info *vnic = NULL;
+	int rc = 0;
 
 	if (rxq == NULL) {
 		PMD_DRV_LOG(ERR, "Invalid Rx queue %d\n", rx_queue_id);
@@ -396,28 +407,47 @@ int bnxt_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 	}
 
 	dev->data->rx_queue_state[rx_queue_id] = RTE_ETH_QUEUE_STATE_STARTED;
-	rxq->rx_deferred_start = false;
+
+	bnxt_free_hwrm_rx_ring(bp, rx_queue_id);
+	bnxt_alloc_hwrm_rx_ring(bp, rx_queue_id);
 	PMD_DRV_LOG(INFO, "Rx queue started %d\n", rx_queue_id);
+
 	if (dev_conf->rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG) {
 		vnic = rxq->vnic;
+
 		if (vnic->fw_grp_ids[rx_queue_id] != INVALID_HW_RING_ID)
 			return 0;
-		PMD_DRV_LOG(DEBUG, "vnic = %p fw_grp_id = %d\n",
-			vnic, bp->grp_info[rx_queue_id + 1].fw_grp_id);
+
+		PMD_DRV_LOG(DEBUG,
+			    "vnic = %p fw_grp_id = %d\n",
+			    vnic, bp->grp_info[rx_queue_id].fw_grp_id);
+
 		vnic->fw_grp_ids[rx_queue_id] =
-					bp->grp_info[rx_queue_id + 1].fw_grp_id;
-		return bnxt_vnic_rss_configure(bp, vnic);
+					bp->grp_info[rx_queue_id].fw_grp_id;
+		rc = bnxt_vnic_rss_configure(bp, vnic);
 	}
 
-	return 0;
+	if (rc == 0)
+		rxq->rx_deferred_start = false;
+
+	return rc;
 }
 
 int bnxt_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 {
 	struct bnxt *bp = (struct bnxt *)dev->data->dev_private;
 	struct rte_eth_conf *dev_conf = &bp->eth_dev->data->dev_conf;
-	struct bnxt_rx_queue *rxq = bp->rx_queues[rx_queue_id];
 	struct bnxt_vnic_info *vnic = NULL;
+	struct bnxt_rx_queue *rxq = NULL;
+	int rc = 0;
+
+	/* Rx CQ 0 also works as Default CQ for async notifications */
+	if (!rx_queue_id) {
+		PMD_DRV_LOG(ERR, "Cannot stop Rx queue id %d\n", rx_queue_id);
+		return -EINVAL;
+	}
+
+	rxq = bp->rx_queues[rx_queue_id];
 
 	if (rxq == NULL) {
 		PMD_DRV_LOG(ERR, "Invalid Rx queue %d\n", rx_queue_id);
@@ -431,7 +461,11 @@ int bnxt_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 	if (dev_conf->rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG) {
 		vnic = rxq->vnic;
 		vnic->fw_grp_ids[rx_queue_id] = INVALID_HW_RING_ID;
-		return bnxt_vnic_rss_configure(bp, vnic);
+		rc = bnxt_vnic_rss_configure(bp, vnic);
 	}
-	return 0;
+
+	if (rc == 0)
+		bnxt_rx_queue_release_mbufs(rxq);
+
+	return rc;
 }
diff --git a/drivers/net/bnxt/bnxt_rxq.h b/drivers/net/bnxt/bnxt_rxq.h
index 8307f603c..e5d6001d3 100644
--- a/drivers/net/bnxt/bnxt_rxq.h
+++ b/drivers/net/bnxt/bnxt_rxq.h
@@ -10,6 +10,9 @@ struct bnxt;
 struct bnxt_rx_ring_info;
 struct bnxt_cp_ring_info;
 struct bnxt_rx_queue {
+	rte_spinlock_t		lock;	/* Synchronize between rx_queue_stop
+					 * and fast path
+					 */
 	struct rte_mempool	*mb_pool; /* mbuf pool for RX ring */
 	struct rte_mbuf		*pkt_first_seg; /* 1st seg of pkt */
 	struct rte_mbuf		*pkt_last_seg; /* Last seg of pkt */
@@ -54,4 +57,5 @@ int bnxt_rx_queue_start(struct rte_eth_dev *dev,
 			uint16_t rx_queue_id);
 int bnxt_rx_queue_stop(struct rte_eth_dev *dev,
 		       uint16_t rx_queue_id);
+void bnxt_rx_queue_release_mbufs(struct bnxt_rx_queue *rxq);
 #endif
diff --git a/drivers/net/bnxt/bnxt_rxr.c b/drivers/net/bnxt/bnxt_rxr.c
index b6b72c553..e4d473f4b 100644
--- a/drivers/net/bnxt/bnxt_rxr.c
+++ b/drivers/net/bnxt/bnxt_rxr.c
@@ -541,7 +541,7 @@ uint16_t bnxt_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 	bool evt = false;
 
 	/* If Rx Q was stopped return. RxQ0 cannot be stopped. */
-	if (rxq->rx_deferred_start && rxq->queue_id)
+	if (unlikely(((rxq->rx_deferred_start || !rte_spinlock_trylock(&rxq->lock)) && rxq->queue_id)))
 		return 0;
 
 	/* Handle RX burst request */
@@ -583,7 +583,7 @@ uint16_t bnxt_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 		 * For PMD, there is no need to keep on pushing to REARM
 		 * the doorbell if there are no new completions
 		 */
-		return nb_rx_pkts;
+		goto done;
 	}
 
 	if (prod != rxr->rx_prod)
@@ -618,16 +618,22 @@ uint16_t bnxt_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 		}
 	}
 
+done:
+	rte_spinlock_unlock(&rxq->lock);
+
 	return nb_rx_pkts;
 }
 
 void bnxt_free_rx_rings(struct bnxt *bp)
 {
 	int i;
+	struct bnxt_rx_queue *rxq;
 
-	for (i = 0; i < (int)bp->rx_nr_rings; i++) {
-		struct bnxt_rx_queue *rxq = bp->rx_queues[i];
+	if (!bp->rx_queues)
+		return;
 
+	for (i = 0; i < (int)bp->rx_nr_rings; i++) {
+		rxq = bp->rx_queues[i];
 		if (!rxq)
 			continue;
 
-- 
2.15.1 (Apple Git-101)

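The queue start/stop rework above hinges on two guards: Rx queue 0 doubles as the default completion queue for async notifications, so the stop path refuses to touch it, and the burst receive path takes a per-queue trylock so that a concurrent bnxt_rx_queue_stop() cannot release mbufs underneath an in-flight poll. The snippet below is a minimal sketch of that locking pattern, not the driver code itself; the context struct and the helpers process_completions() and release_queue_mbufs() are hypothetical stand-ins.

#include <errno.h>
#include <stdint.h>

#include <rte_mbuf.h>
#include <rte_spinlock.h>

/* Illustrative queue context; not the driver's struct bnxt_rx_queue. */
struct rx_queue_ctx {
	rte_spinlock_t lock;          /* shared by poll and stop paths */
	volatile int deferred_start;  /* set while the queue is stopped */
	uint16_t queue_id;
};

/* Hypothetical helpers standing in for the driver internals. */
uint16_t process_completions(struct rx_queue_ctx *q,
			     struct rte_mbuf **pkts, uint16_t n);
void release_queue_mbufs(struct rx_queue_ctx *q);

/* Fast path: never block; skip the poll if the queue is being stopped. */
static uint16_t
poll_queue(struct rx_queue_ctx *q, struct rte_mbuf **pkts, uint16_t n)
{
	uint16_t nb;

	/* Queue 0 is never stopped, so it can skip the lock entirely. */
	if (q->queue_id &&
	    (q->deferred_start || !rte_spinlock_trylock(&q->lock)))
		return 0;

	nb = process_completions(q, pkts, n);

	if (q->queue_id)
		rte_spinlock_unlock(&q->lock);
	return nb;
}

/* Control path: wait for any in-flight poll, then reclaim the mbufs. */
static int
stop_queue(struct rx_queue_ctx *q)
{
	if (q->queue_id == 0)
		return -EINVAL;  /* default CQ cannot be stopped */

	rte_spinlock_lock(&q->lock);
	q->deferred_start = 1;
	release_queue_mbufs(q);
	rte_spinlock_unlock(&q->lock);
	return 0;
}

Using a trylock keeps the fast path non-blocking: when the control path holds the lock, the poll simply reports zero packets and picks up again on the next call.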

* [dpdk-dev] [PATCH 10/31] net/bnxt: code cleanup style of bnxt cpr
  2018-06-19 21:30 [dpdk-dev] [PATCH 00/31] bnxt patchset Ajit Khaparde
                   ` (8 preceding siblings ...)
  2018-06-19 21:30 ` [dpdk-dev] [PATCH 09/31] net/bnxt: fix rx/tx queue start/stop operations Ajit Khaparde
@ 2018-06-19 21:30 ` Ajit Khaparde
  2018-06-19 21:30 ` [dpdk-dev] [PATCH 11/31] net/bnxt: code cleanup style of bnxt rxr Ajit Khaparde
                   ` (21 subsequent siblings)
  31 siblings, 0 replies; 73+ messages in thread
From: Ajit Khaparde @ 2018-06-19 21:30 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, Scott Branden

From: Scott Branden <scott.branden@broadcom.com>

Clean up alignment, brackets, and debug string style of bnxt_cpr

Signed-off-by: Scott Branden <scott.branden@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Reviewed-by: Qingmin Liu <qingmin.liu@broadcom.com>
Reviewed-by: Ray Jui <ray.jui@broadcom.com>
---
 drivers/net/bnxt/bnxt_cpr.c | 22 ++++++++++------------
 1 file changed, 10 insertions(+), 12 deletions(-)

diff --git a/drivers/net/bnxt/bnxt_cpr.c b/drivers/net/bnxt/bnxt_cpr.c
index ff20b6fdf..7257bbedc 100644
--- a/drivers/net/bnxt/bnxt_cpr.c
+++ b/drivers/net/bnxt/bnxt_cpr.c
@@ -74,12 +74,12 @@ void bnxt_handle_fwd_req(struct bnxt *bp, struct cmpl_base *cmpl)
 	fwd_cmd = (struct input *)bp->pf.vf_info[vf_id].req_buf;
 
 	if (fw_vf_id < bp->pf.first_vf_id ||
-	    fw_vf_id >= (bp->pf.first_vf_id) + bp->pf.active_vfs) {
+	    fw_vf_id >= bp->pf.first_vf_id + bp->pf.active_vfs) {
 		PMD_DRV_LOG(ERR,
-		"FWD req's source_id 0x%x out of range 0x%x - 0x%x (%d %d)\n",
-			fw_vf_id, bp->pf.first_vf_id,
-			(bp->pf.first_vf_id) + bp->pf.active_vfs - 1,
-			bp->pf.first_vf_id, bp->pf.active_vfs);
+			    "FWD req 0x%x out of range 0x%x - 0x%x (%d %d)\n",
+			    fw_vf_id, bp->pf.first_vf_id,
+			    bp->pf.first_vf_id + bp->pf.active_vfs - 1,
+			    bp->pf.first_vf_id, bp->pf.active_vfs);
 		goto reject;
 	}
 
@@ -95,7 +95,7 @@ void bnxt_handle_fwd_req(struct bnxt *bp, struct cmpl_base *cmpl)
 			if (vfc->enables &
 			    HWRM_FUNC_VF_CFG_INPUT_ENABLES_DFLT_MAC_ADDR) {
 				bnxt_hwrm_func_vf_mac(bp, vf_id,
-				(const uint8_t *)"\x00\x00\x00\x00\x00");
+				     (const uint8_t *)"\x00\x00\x00\x00\x00");
 			}
 		}
 		if (fwd_cmd->req_type == HWRM_CFA_L2_SET_RX_MASK) {
@@ -104,10 +104,10 @@ void bnxt_handle_fwd_req(struct bnxt *bp, struct cmpl_base *cmpl)
 
 			srm->vlan_tag_tbl_addr = rte_cpu_to_le_64(0);
 			srm->num_vlan_tags = rte_cpu_to_le_32(0);
-			srm->mask &= ~rte_cpu_to_le_32(
-				HWRM_CFA_L2_SET_RX_MASK_INPUT_MASK_VLANONLY |
-			    HWRM_CFA_L2_SET_RX_MASK_INPUT_MASK_VLAN_NONVLAN |
-			    HWRM_CFA_L2_SET_RX_MASK_INPUT_MASK_ANYVLAN_NONVLAN);
+			srm->mask &= ~rte_cpu_to_le_32
+			  (HWRM_CFA_L2_SET_RX_MASK_INPUT_MASK_VLANONLY |
+			   HWRM_CFA_L2_SET_RX_MASK_INPUT_MASK_VLAN_NONVLAN |
+			   HWRM_CFA_L2_SET_RX_MASK_INPUT_MASK_ANYVLAN_NONVLAN);
 		}
 		/* Forward */
 		rc = bnxt_hwrm_exec_fwd_resp(bp, fw_vf_id, fwd_cmd, req_len);
@@ -128,8 +128,6 @@ void bnxt_handle_fwd_req(struct bnxt *bp, struct cmpl_base *cmpl)
 			fw_vf_id - bp->pf.first_vf_id,
 			rte_le_to_cpu_16(fwd_cmd->req_type));
 	}
-
-	return;
 }
 
 int bnxt_event_hwrm_resp_handler(struct bnxt *bp, struct cmpl_base *cmp)
-- 
2.15.1 (Apple Git-101)


* [dpdk-dev] [PATCH 11/31] net/bnxt: code cleanup style of bnxt rxr
  2018-06-19 21:30 [dpdk-dev] [PATCH 00/31] bnxt patchset Ajit Khaparde
                   ` (9 preceding siblings ...)
  2018-06-19 21:30 ` [dpdk-dev] [PATCH 10/31] net/bnxt: code cleanup style of bnxt cpr Ajit Khaparde
@ 2018-06-19 21:30 ` Ajit Khaparde
  2018-06-26 15:29   ` Ferruh Yigit
  2018-06-19 21:30 ` [dpdk-dev] [PATCH 12/31] net/bnxt: code cleanup style of rte pmd bnxt file Ajit Khaparde
                   ` (20 subsequent siblings)
  31 siblings, 1 reply; 73+ messages in thread
From: Ajit Khaparde @ 2018-06-19 21:30 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, Scott Branden

From: Scott Branden <scott.branden@broadcom.com>

Clean up alignment, brackets, and debug string style of bnxt_rxr

Signed-off-by: Scott Branden <scott.branden@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/bnxt_rxr.c | 58 ++++++++++++++++++++++++---------------------
 drivers/net/bnxt/bnxt_rxr.h |  6 +++--
 2 files changed, 35 insertions(+), 29 deletions(-)

diff --git a/drivers/net/bnxt/bnxt_rxr.c b/drivers/net/bnxt/bnxt_rxr.c
index e4d473f4b..13928c388 100644
--- a/drivers/net/bnxt/bnxt_rxr.c
+++ b/drivers/net/bnxt/bnxt_rxr.c
@@ -72,7 +72,6 @@ static inline int bnxt_alloc_ag_data(struct bnxt_rx_queue *rxq,
 	if (rx_buf == NULL)
 		PMD_DRV_LOG(ERR, "Jumbo Frame. rx_buf is NULL\n");
 
-
 	rx_buf->mbuf = mbuf;
 	mbuf->data_off = RTE_PKTMBUF_HEADROOM;
 
@@ -82,7 +81,7 @@ static inline int bnxt_alloc_ag_data(struct bnxt_rx_queue *rxq,
 }
 
 static inline void bnxt_reuse_rx_mbuf(struct bnxt_rx_ring_info *rxr,
-			       struct rte_mbuf *mbuf)
+				      struct rte_mbuf *mbuf)
 {
 	uint16_t prod = RING_NEXT(rxr->rx_ring_struct, rxr->rx_prod);
 	struct bnxt_sw_rx_bd *prod_rx_buf;
@@ -185,7 +184,8 @@ static void bnxt_tpa_start(struct bnxt_rx_queue *rxq,
 }
 
 static int bnxt_agg_bufs_valid(struct bnxt_cp_ring_info *cpr,
-		uint8_t agg_bufs, uint32_t raw_cp_cons)
+			       uint8_t agg_bufs,
+			       uint32_t raw_cp_cons)
 {
 	uint16_t last_cp_cons;
 	struct rx_pkt_cmpl *agg_cmpl;
@@ -236,8 +236,7 @@ static int bnxt_rx_pages(struct bnxt_rx_queue *rxq,
 		struct rte_mbuf *ag_mbuf;
 		*tmp_raw_cons = NEXT_RAW_CMP(*tmp_raw_cons);
 		cp_cons = RING_CMP(cpr->cp_ring_struct, *tmp_raw_cons);
-		rxcmp = (struct rx_pkt_cmpl *)
-					&cpr->cp_desc_ring[cp_cons];
+		rxcmp = (struct rx_pkt_cmpl *)&cpr->cp_desc_ring[cp_cons];
 
 #ifdef BNXT_DEBUG
 		bnxt_dump_cmpl(cp_cons, rxcmp);
@@ -270,11 +269,11 @@ static int bnxt_rx_pages(struct bnxt_rx_queue *rxq,
 	return 0;
 }
 
-static inline struct rte_mbuf *bnxt_tpa_end(
-		struct bnxt_rx_queue *rxq,
-		uint32_t *raw_cp_cons,
-		struct rx_tpa_end_cmpl *tpa_end,
-		struct rx_tpa_end_cmpl_hi *tpa_end1 __rte_unused)
+static inline
+struct rte_mbuf *bnxt_tpa_end(struct bnxt_rx_queue *rxq,
+			      uint32_t *raw_cp_cons,
+			      struct rx_tpa_end_cmpl *tpa_end,
+			      struct rx_tpa_end_cmpl_hi *tpa_end1 __rte_unused)
 {
 	struct bnxt_cp_ring_info *cpr = rxq->cp_ring;
 	struct bnxt_rx_ring_info *rxr = rxq->rx_ring;
@@ -299,6 +298,7 @@ static inline struct rte_mbuf *bnxt_tpa_end(
 	mbuf->l4_len = tpa_end->payload_offset;
 
 	struct rte_mbuf *new_data = __bnxt_alloc_rx_data(rxq->mb_pool);
+
 	RTE_ASSERT(new_data != NULL);
 	if (!new_data) {
 		rte_atomic64_inc(&rxq->rx_mbuf_alloc_fail);
@@ -368,7 +368,8 @@ bnxt_parse_pkt_type(struct rx_pkt_cmpl *rxcmp, struct rx_pkt_cmpl_hi *rxcmp1)
 }
 
 static int bnxt_rx_pkt(struct rte_mbuf **rx_pkt,
-			    struct bnxt_rx_queue *rxq, uint32_t *raw_cons)
+		       struct bnxt_rx_queue *rxq,
+		       uint32_t *raw_cons)
 {
 	struct bnxt_cp_ring_info *cpr = rxq->cp_ring;
 	struct bnxt_rx_ring_info *rxr = rxq->rx_ring;
@@ -401,14 +402,16 @@ static int bnxt_rx_pkt(struct rte_mbuf **rx_pkt,
 
 	cmp_type = CMP_TYPE(rxcmp);
 	if (cmp_type == RX_TPA_START_CMPL_TYPE_RX_TPA_START) {
-		bnxt_tpa_start(rxq, (struct rx_tpa_start_cmpl *)rxcmp,
+		bnxt_tpa_start(rxq,
+			       (struct rx_tpa_start_cmpl *)rxcmp,
 			       (struct rx_tpa_start_cmpl_hi *)rxcmp1);
 		rc = -EINVAL; /* Continue w/o new mbuf */
 		goto next_rx;
 	} else if (cmp_type == RX_TPA_END_CMPL_TYPE_RX_TPA_END) {
-		mbuf = bnxt_tpa_end(rxq, &tmp_raw_cons,
-				   (struct rx_tpa_end_cmpl *)rxcmp,
-				   (struct rx_tpa_end_cmpl_hi *)rxcmp1);
+		mbuf = bnxt_tpa_end(rxq,
+				    &tmp_raw_cons,
+				    (struct rx_tpa_end_cmpl *)rxcmp,
+				    (struct rx_tpa_end_cmpl_hi *)rxcmp1);
 		if (unlikely(!mbuf))
 			return -EBUSY;
 		*rx_pkt = mbuf;
@@ -525,8 +528,9 @@ static int bnxt_rx_pkt(struct rte_mbuf **rx_pkt,
 	return rc;
 }
 
-uint16_t bnxt_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
-			       uint16_t nb_pkts)
+uint16_t bnxt_recv_pkts(void *rx_queue,
+			struct rte_mbuf **rx_pkts,
+			uint16_t nb_pkts)
 {
 	struct bnxt_rx_queue *rxq = rx_queue;
 	struct bnxt_cp_ring_info *cpr = rxq->cp_ring;
@@ -674,8 +678,8 @@ int bnxt_init_rx_ring_struct(struct bnxt_rx_queue *rxq, unsigned int socket_id)
 	rxq->rx_ring = rxr;
 
 	ring = rte_zmalloc_socket("bnxt_rx_ring_struct",
-				   sizeof(struct bnxt_ring),
-				   RTE_CACHE_LINE_SIZE, socket_id);
+				  sizeof(struct bnxt_ring),
+				  RTE_CACHE_LINE_SIZE, socket_id);
 	if (ring == NULL)
 		return -ENOMEM;
 	rxr->rx_ring_struct = ring;
@@ -694,8 +698,8 @@ int bnxt_init_rx_ring_struct(struct bnxt_rx_queue *rxq, unsigned int socket_id)
 	rxq->cp_ring = cpr;
 
 	ring = rte_zmalloc_socket("bnxt_rx_ring_struct",
-				   sizeof(struct bnxt_ring),
-				   RTE_CACHE_LINE_SIZE, socket_id);
+				  sizeof(struct bnxt_ring),
+				  RTE_CACHE_LINE_SIZE, socket_id);
 	if (ring == NULL)
 		return -ENOMEM;
 	cpr->cp_ring_struct = ring;
@@ -709,8 +713,8 @@ int bnxt_init_rx_ring_struct(struct bnxt_rx_queue *rxq, unsigned int socket_id)
 
 	/* Allocate Aggregator rings */
 	ring = rte_zmalloc_socket("bnxt_rx_ring_struct",
-				   sizeof(struct bnxt_ring),
-				   RTE_CACHE_LINE_SIZE, socket_id);
+				  sizeof(struct bnxt_ring),
+				  RTE_CACHE_LINE_SIZE, socket_id);
 	if (ring == NULL)
 		return -ENOMEM;
 	rxr->ag_ring_struct = ring;
@@ -762,8 +766,8 @@ int bnxt_init_one_rx_ring(struct bnxt_rx_queue *rxq)
 	for (i = 0; i < ring->ring_size; i++) {
 		if (bnxt_alloc_rx_data(rxq, rxr, prod) != 0) {
 			PMD_DRV_LOG(WARNING,
-				"init'ed rx ring %d with %d/%d mbufs only\n",
-				rxq->queue_id, i, ring->ring_size);
+				    "rx ring %d only has %d/%d mbufs\n",
+				    rxq->queue_id, i, ring->ring_size);
 			break;
 		}
 		rxr->rx_prod = prod;
@@ -778,8 +782,8 @@ int bnxt_init_one_rx_ring(struct bnxt_rx_queue *rxq)
 	for (i = 0; i < ring->ring_size; i++) {
 		if (bnxt_alloc_ag_data(rxq, rxr, prod) != 0) {
 			PMD_DRV_LOG(WARNING,
-			"init'ed AG ring %d with %d/%d mbufs only\n",
-			rxq->queue_id, i, ring->ring_size);
+				    "AG ring %d only has %d/%d mbufs\n",
+				    rxq->queue_id, i, ring->ring_size);
 			break;
 		}
 		rxr->ag_prod = prod;
diff --git a/drivers/net/bnxt/bnxt_rxr.h b/drivers/net/bnxt/bnxt_rxr.h
index 3815a2199..c8ba22ee1 100644
--- a/drivers/net/bnxt/bnxt_rxr.h
+++ b/drivers/net/bnxt/bnxt_rxr.h
@@ -103,8 +103,10 @@ struct bnxt_rx_ring_info {
 	struct bnxt_tpa_info *tpa_info;
 };
 
-uint16_t bnxt_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
-			       uint16_t nb_pkts);
+uint16_t bnxt_recv_pkts(void *rx_queue,
+			struct rte_mbuf **rx_pkts,
+			uint16_t nb_pkts);
+
 void bnxt_free_rx_rings(struct bnxt *bp);
 int bnxt_init_rx_ring_struct(struct bnxt_rx_queue *rxq, unsigned int socket_id);
 int bnxt_init_one_rx_ring(struct bnxt_rx_queue *rxq);
-- 
2.15.1 (Apple Git-101)


* [dpdk-dev] [PATCH 12/31] net/bnxt: code cleanup style of rte pmd bnxt file
  2018-06-19 21:30 [dpdk-dev] [PATCH 00/31] bnxt patchset Ajit Khaparde
                   ` (10 preceding siblings ...)
  2018-06-19 21:30 ` [dpdk-dev] [PATCH 11/31] net/bnxt: code cleanup style of bnxt rxr Ajit Khaparde
@ 2018-06-19 21:30 ` Ajit Khaparde
  2018-06-19 21:30 ` [dpdk-dev] [PATCH 13/31] net/bnxt: code cleanup style of bnxt stats Ajit Khaparde
                   ` (19 subsequent siblings)
  31 siblings, 0 replies; 73+ messages in thread
From: Ajit Khaparde @ 2018-06-19 21:30 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, Scott Branden

From: Scott Branden <scott.branden@broadcom.com>

Clean up alignment, brackets, and debug string style of rte_pmd_bnxt

Signed-off-by: Scott Branden <scott.branden@broadcom.com>
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/rte_pmd_bnxt.c | 97 +++++++++++++++++++++++++----------------
 drivers/net/bnxt/rte_pmd_bnxt.h | 69 +++++++++++++++++++----------
 2 files changed, 105 insertions(+), 61 deletions(-)

diff --git a/drivers/net/bnxt/rte_pmd_bnxt.c b/drivers/net/bnxt/rte_pmd_bnxt.c
index c298de83c..e49dba465 100644
--- a/drivers/net/bnxt/rte_pmd_bnxt.c
+++ b/drivers/net/bnxt/rte_pmd_bnxt.c
@@ -77,6 +77,7 @@ static void
 rte_pmd_bnxt_set_all_queues_drop_en_cb(struct bnxt_vnic_info *vnic, void *onptr)
 {
 	uint8_t *on = onptr;
+
 	vnic->bd_stall = !(*on);
 }
 
@@ -119,9 +120,12 @@ int rte_pmd_bnxt_set_all_queues_drop_en(uint16_t port, uint8_t on)
 
 	/* Stall all active VFs */
 	for (i = 0; i < bp->pf.active_vfs; i++) {
-		rc = bnxt_hwrm_func_vf_vnic_query_and_config(bp, i,
-				rte_pmd_bnxt_set_all_queues_drop_en_cb, &on,
-				bnxt_hwrm_vnic_cfg);
+		rc = bnxt_hwrm_func_vf_vnic_query_and_config
+				(bp,
+				 i,
+				 rte_pmd_bnxt_set_all_queues_drop_en_cb,
+				 &on,
+				 bnxt_hwrm_vnic_cfg);
 		if (rc) {
 			PMD_DRV_LOG(ERR, "Failed to update VF VNIC %d.\n", i);
 			break;
@@ -131,8 +135,9 @@ int rte_pmd_bnxt_set_all_queues_drop_en(uint16_t port, uint8_t on)
 	return rc;
 }
 
-int rte_pmd_bnxt_set_vf_mac_addr(uint16_t port, uint16_t vf,
-				struct ether_addr *mac_addr)
+int rte_pmd_bnxt_set_vf_mac_addr(uint16_t port,
+				 uint16_t vf,
+				 struct ether_addr *mac_addr)
 {
 	struct rte_eth_dev *dev;
 	struct rte_eth_dev_info dev_info;
@@ -163,8 +168,10 @@ int rte_pmd_bnxt_set_vf_mac_addr(uint16_t port, uint16_t vf,
 	return rc;
 }
 
-int rte_pmd_bnxt_set_vf_rate_limit(uint16_t port, uint16_t vf,
-				uint16_t tx_rate, uint64_t q_msk)
+int rte_pmd_bnxt_set_vf_rate_limit(uint16_t port,
+				   uint16_t vf,
+				   uint16_t tx_rate,
+				   uint64_t q_msk)
 {
 	struct rte_eth_dev *eth_dev;
 	struct rte_eth_dev_info dev_info;
@@ -205,7 +212,7 @@ int rte_pmd_bnxt_set_vf_rate_limit(uint16_t port, uint16_t vf,
 		return 0;
 
 	rc = bnxt_hwrm_func_bw_cfg(bp, vf, tot_rate,
-				HWRM_FUNC_CFG_INPUT_ENABLES_MAX_BW);
+				   HWRM_FUNC_CFG_INPUT_ENABLES_MAX_BW);
 
 	if (!rc)
 		bp->pf.vf_info[vf].max_tx_rate = tot_rate;
@@ -247,8 +254,9 @@ int rte_pmd_bnxt_set_vf_mac_anti_spoof(uint16_t port, uint16_t vf, uint8_t on)
 		return 0;
 
 	func_flags = bp->pf.vf_info[vf].func_cfg_flags;
-	func_flags &= ~(HWRM_FUNC_CFG_INPUT_FLAGS_SRC_MAC_ADDR_CHECK_ENABLE |
-	    HWRM_FUNC_CFG_INPUT_FLAGS_SRC_MAC_ADDR_CHECK_DISABLE);
+	func_flags &=
+	  ~(HWRM_FUNC_CFG_INPUT_FLAGS_SRC_MAC_ADDR_CHECK_ENABLE |
+	   HWRM_FUNC_CFG_INPUT_FLAGS_SRC_MAC_ADDR_CHECK_DISABLE);
 
 	if (on)
 		func_flags |=
@@ -298,10 +306,11 @@ int rte_pmd_bnxt_set_vf_vlan_anti_spoof(uint16_t port, uint16_t vf, uint8_t on)
 	if (!rc) {
 		bp->pf.vf_info[vf].vlan_spoof_en = on;
 		if (on) {
-			if (bnxt_hwrm_cfa_vlan_antispoof_cfg(bp,
-				bp->pf.first_vf_id + vf,
-				bp->pf.vf_info[vf].vlan_count,
-				bp->pf.vf_info[vf].vlan_as_table))
+			if (bnxt_hwrm_cfa_vlan_antispoof_cfg
+					(bp,
+					 bp->pf.first_vf_id + vf,
+					 bp->pf.vf_info[vf].vlan_count,
+					 bp->pf.vf_info[vf].vlan_as_table))
 				rc = -1;
 		}
 	} else {
@@ -315,6 +324,7 @@ static void
 rte_pmd_bnxt_set_vf_vlan_stripq_cb(struct bnxt_vnic_info *vnic, void *onptr)
 {
 	uint8_t *on = onptr;
+
 	vnic->vlan_strip = *on;
 }
 
@@ -345,17 +355,22 @@ rte_pmd_bnxt_set_vf_vlan_stripq(uint16_t port, uint16_t vf, uint8_t on)
 		return -ENOTSUP;
 	}
 
-	rc = bnxt_hwrm_func_vf_vnic_query_and_config(bp, vf,
-				rte_pmd_bnxt_set_vf_vlan_stripq_cb, &on,
-				bnxt_hwrm_vnic_cfg);
+	rc = bnxt_hwrm_func_vf_vnic_query_and_config
+					(bp,
+					 vf,
+					 rte_pmd_bnxt_set_vf_vlan_stripq_cb,
+					 &on,
+					 bnxt_hwrm_vnic_cfg);
 	if (rc)
 		PMD_DRV_LOG(ERR, "Failed to update VF VNIC %d.\n", vf);
 
 	return rc;
 }
 
-int rte_pmd_bnxt_set_vf_rxmode(uint16_t port, uint16_t vf,
-				uint16_t rx_mask, uint8_t on)
+int rte_pmd_bnxt_set_vf_rxmode(uint16_t port,
+			       uint16_t vf,
+			       uint16_t rx_mask,
+			       uint8_t on)
 {
 	struct rte_eth_dev *dev;
 	struct rte_eth_dev_info dev_info;
@@ -397,10 +412,12 @@ int rte_pmd_bnxt_set_vf_rxmode(uint16_t port, uint16_t vf,
 	else
 		bp->pf.vf_info[vf].l2_rx_mask &= ~flag;
 
-	rc = bnxt_hwrm_func_vf_vnic_query_and_config(bp, vf,
-					vf_vnic_set_rxmask_cb,
-					&bp->pf.vf_info[vf].l2_rx_mask,
-					bnxt_set_rx_mask_no_vlan);
+	rc = bnxt_hwrm_func_vf_vnic_query_and_config
+					(bp,
+					 vf,
+					 vf_vnic_set_rxmask_cb,
+					 &bp->pf.vf_info[vf].l2_rx_mask,
+					 bnxt_set_rx_mask_no_vlan);
 	if (rc)
 		PMD_DRV_LOG(ERR, "bnxt_hwrm_func_vf_vnic_set_rxmask failed\n");
 
@@ -433,9 +450,11 @@ static int bnxt_set_vf_table(struct bnxt *bp, uint16_t vf)
 		vnic.fw_vnic_id = dflt_vnic;
 		if (bnxt_hwrm_vnic_qcfg(bp, &vnic,
 					bp->pf.first_vf_id + vf) == 0) {
-			if (bnxt_hwrm_cfa_l2_set_rx_mask(bp, &vnic,
-						bp->pf.vf_info[vf].vlan_count,
-						bp->pf.vf_info[vf].vlan_table))
+			if (bnxt_hwrm_cfa_l2_set_rx_mask
+						(bp,
+						 &vnic,
+						 bp->pf.vf_info[vf].vlan_count,
+						 bp->pf.vf_info[vf].vlan_table))
 				rc = -1;
 		} else {
 			rc = -1;
@@ -445,8 +464,10 @@ static int bnxt_set_vf_table(struct bnxt *bp, uint16_t vf)
 	return rc;
 }
 
-int rte_pmd_bnxt_set_vf_vlan_filter(uint16_t port, uint16_t vlan,
-				    uint64_t vf_mask, uint8_t vlan_on)
+int rte_pmd_bnxt_set_vf_vlan_filter(uint16_t port,
+				    uint16_t vlan,
+				    uint64_t vf_mask,
+				    uint8_t vlan_on)
 {
 	struct bnxt_vlan_table_entry *ve;
 	struct bnxt_vlan_antispoof_table_entry *vase;
@@ -482,8 +503,7 @@ int rte_pmd_bnxt_set_vf_vlan_filter(uint16_t port, uint16_t vlan,
 		if (vlan_on) {
 			/* First, search for a duplicate... */
 			for (j = 0; j < cnt; j++) {
-				if (rte_be_to_cpu_16(
-				   bp->pf.vf_info[i].vlan_table[j].vid) == vlan)
+				if (rte_be_to_cpu_16(bp->pf.vf_info[i].vlan_table[j].vid) == vlan)
 					break;
 			}
 			if (j == cnt) {
@@ -491,7 +511,7 @@ int rte_pmd_bnxt_set_vf_vlan_filter(uint16_t port, uint16_t vlan,
 				if (cnt == getpagesize() / sizeof(struct
 				    bnxt_vlan_antispoof_table_entry)) {
 					PMD_DRV_LOG(ERR,
-					     "VLAN anti-spoof table is full\n");
+						    "VLAN anti-spoof table is full\n");
 					PMD_DRV_LOG(ERR,
 						"VF %d cannot add VLAN %u\n",
 						i, vlan);
@@ -517,13 +537,14 @@ int rte_pmd_bnxt_set_vf_vlan_filter(uint16_t port, uint16_t vlan,
 			}
 		} else {
 			for (j = 0; j < cnt; j++) {
-				if (rte_be_to_cpu_16(
-				   bp->pf.vf_info[i].vlan_table[j].vid) != vlan)
+				if (rte_be_to_cpu_16(bp->pf.vf_info[i].vlan_table[j].vid) != vlan)
 					continue;
+
 				memmove(&bp->pf.vf_info[i].vlan_table[j],
 					&bp->pf.vf_info[i].vlan_table[j + 1],
 					getpagesize() - ((j + 1) *
 					sizeof(struct bnxt_vlan_table_entry)));
+
 				memmove(&bp->pf.vf_info[i].vlan_as_table[j],
 					&bp->pf.vf_info[i].vlan_as_table[j + 1],
 					getpagesize() - ((j + 1) * sizeof(struct
@@ -647,8 +668,9 @@ int rte_pmd_bnxt_get_vf_tx_drop_count(uint16_t port, uint16_t vf_id,
 					     count);
 }
 
-int rte_pmd_bnxt_mac_addr_add(uint16_t port, struct ether_addr *addr,
-				uint32_t vf_id)
+int rte_pmd_bnxt_mac_addr_add(uint16_t port,
+			      struct ether_addr *addr,
+			      uint32_t vf_id)
 {
 	struct rte_eth_dev *dev;
 	struct rte_eth_dev_info dev_info;
@@ -724,8 +746,9 @@ int rte_pmd_bnxt_mac_addr_add(uint16_t port, struct ether_addr *addr,
 }
 
 int
-rte_pmd_bnxt_set_vf_vlan_insert(uint16_t port, uint16_t vf,
-		uint16_t vlan_id)
+rte_pmd_bnxt_set_vf_vlan_insert(uint16_t port,
+				uint16_t vf,
+				uint16_t vlan_id)
 {
 	struct rte_eth_dev *dev;
 	struct rte_eth_dev_info dev_info;
diff --git a/drivers/net/bnxt/rte_pmd_bnxt.h b/drivers/net/bnxt/rte_pmd_bnxt.h
index 68fbe34d6..f66c44c19 100644
--- a/drivers/net/bnxt/rte_pmd_bnxt.h
+++ b/drivers/net/bnxt/rte_pmd_bnxt.h
@@ -19,11 +19,11 @@ enum rte_pmd_bnxt_mb_event_rsp {
 };
 
 /* mailbox message types */
-#define BNXT_VF_RESET			0x01 /* VF requests reset */
+#define BNXT_VF_RESET		0x01 /* VF requests reset */
 #define BNXT_VF_SET_MAC_ADDR	0x02 /* VF requests PF to set MAC addr */
-#define BNXT_VF_SET_VLAN		0x03 /* VF requests PF to set VLAN */
-#define BNXT_VF_SET_MTU			0x04 /* VF requests PF to set MTU */
-#define BNXT_VF_SET_MRU			0x05 /* VF requests PF to set MRU */
+#define BNXT_VF_SET_VLAN	0x03 /* VF requests PF to set VLAN */
+#define BNXT_VF_SET_MTU		0x04 /* VF requests PF to set MTU */
+#define BNXT_VF_SET_MRU		0x05 /* VF requests PF to set MRU */
 
 /*
  * Data sent to the caller when the callback is executed.
@@ -50,7 +50,9 @@ struct rte_pmd_bnxt_mb_event_param {
  *   - (-ENODEV) if *port* invalid.
  *   - (-EINVAL) if bad parameter.
  */
-int rte_pmd_bnxt_set_vf_mac_anti_spoof(uint16_t port, uint16_t vf, uint8_t on);
+int rte_pmd_bnxt_set_vf_mac_anti_spoof(uint16_t port,
+				       uint16_t vf,
+				       uint8_t on);
 
 /**
  * Set the VF MAC address.
@@ -66,8 +68,9 @@ int rte_pmd_bnxt_set_vf_mac_anti_spoof(uint16_t port, uint16_t vf, uint8_t on);
  *   - (-ENODEV) if *port* invalid.
  *   - (-EINVAL) if *vf* or *mac_addr* is invalid.
  */
-int rte_pmd_bnxt_set_vf_mac_addr(uint16_t port, uint16_t vf,
-		struct ether_addr *mac_addr);
+int rte_pmd_bnxt_set_vf_mac_addr(uint16_t port,
+				 uint16_t vf,
+				 struct ether_addr *mac_addr);
 
 /**
  * Enable/Disable vf vlan strip for all queues in a pool
@@ -87,7 +90,9 @@ int rte_pmd_bnxt_set_vf_mac_addr(uint16_t port, uint16_t vf,
  *   - (-EINVAL) if bad parameter.
  */
 int
-rte_pmd_bnxt_set_vf_vlan_stripq(uint16_t port, uint16_t vf, uint8_t on);
+rte_pmd_bnxt_set_vf_vlan_stripq(uint16_t port,
+				uint16_t vf,
+				uint8_t on);
 
 /**
  * Enable/Disable vf vlan insert
@@ -106,8 +111,9 @@ rte_pmd_bnxt_set_vf_vlan_stripq(uint16_t port, uint16_t vf, uint8_t on);
  *   - (-EINVAL) if bad parameter.
  */
 int
-rte_pmd_bnxt_set_vf_vlan_insert(uint16_t port, uint16_t vf,
-		uint16_t vlan_id);
+rte_pmd_bnxt_set_vf_vlan_insert(uint16_t port,
+				uint16_t vf,
+				uint16_t vlan_id);
 
 /**
  * Enable/Disable hardware VF VLAN filtering by an Ethernet device of
@@ -128,8 +134,10 @@ rte_pmd_bnxt_set_vf_vlan_insert(uint16_t port, uint16_t vf,
  *   - (-ENODEV) if *port_id* invalid.
  *   - (-EINVAL) if bad parameter.
  */
-int rte_pmd_bnxt_set_vf_vlan_filter(uint16_t port, uint16_t vlan,
-				    uint64_t vf_mask, uint8_t vlan_on);
+int rte_pmd_bnxt_set_vf_vlan_filter(uint16_t port,
+				    uint16_t vlan,
+				    uint64_t vf_mask,
+				    uint8_t vlan_on);
 
 /**
  * Enable/Disable tx loopback
@@ -145,7 +153,8 @@ int rte_pmd_bnxt_set_vf_vlan_filter(uint16_t port, uint16_t vlan,
  *   - (-ENODEV) if *port* invalid.
  *   - (-EINVAL) if bad parameter.
  */
-int rte_pmd_bnxt_set_tx_loopback(uint16_t port, uint8_t on);
+int rte_pmd_bnxt_set_tx_loopback(uint16_t port,
+				 uint8_t on);
 
 /**
  * set all queues drop enable bit
@@ -161,7 +170,8 @@ int rte_pmd_bnxt_set_tx_loopback(uint16_t port, uint8_t on);
  *   - (-ENODEV) if *port* invalid.
  *   - (-EINVAL) if bad parameter.
  */
-int rte_pmd_bnxt_set_all_queues_drop_en(uint16_t port, uint8_t on);
+int rte_pmd_bnxt_set_all_queues_drop_en(uint16_t port,
+					uint8_t on);
 
 /**
  * Set the VF rate limit.
@@ -179,8 +189,10 @@ int rte_pmd_bnxt_set_all_queues_drop_en(uint16_t port, uint8_t on);
  *   - (-ENODEV) if *port* invalid.
  *   - (-EINVAL) if *vf* or *mac_addr* is invalid.
  */
-int rte_pmd_bnxt_set_vf_rate_limit(uint16_t port, uint16_t vf,
-				uint16_t tx_rate, uint64_t q_msk);
+int rte_pmd_bnxt_set_vf_rate_limit(uint16_t port,
+				   uint16_t vf,
+				   uint16_t tx_rate,
+				   uint64_t q_msk);
 
 /**
  * Get VF's statistics
@@ -233,7 +245,9 @@ int rte_pmd_bnxt_reset_vf_stats(uint16_t port,
  *   - (-ENODEV) if *port* invalid.
  *   - (-EINVAL) if bad parameter.
  */
-int rte_pmd_bnxt_set_vf_vlan_anti_spoof(uint16_t port, uint16_t vf, uint8_t on);
+int rte_pmd_bnxt_set_vf_vlan_anti_spoof(uint16_t port,
+					uint16_t vf,
+					uint8_t on);
 
 /**
  * Set RX L2 Filtering mode of a VF of an Ethernet device.
@@ -252,8 +266,10 @@ int rte_pmd_bnxt_set_vf_vlan_anti_spoof(uint16_t port, uint16_t vf, uint8_t on);
  *   - (-ENODEV) if *port_id* invalid.
  *   - (-EINVAL) if bad parameter.
  */
-int rte_pmd_bnxt_set_vf_rxmode(uint16_t port, uint16_t vf,
-				uint16_t rx_mask, uint8_t on);
+int rte_pmd_bnxt_set_vf_rxmode(uint16_t port,
+			       uint16_t vf,
+			       uint16_t rx_mask,
+			       uint8_t on);
 
 /**
  * Returns the number of default RX queues on a VF
@@ -269,7 +285,8 @@ int rte_pmd_bnxt_set_vf_rxmode(uint16_t port, uint16_t vf,
  *   - (-ENOMEM) on an allocation failure
  *   - (-1) firmware interface error
  */
-int rte_pmd_bnxt_get_vf_rx_status(uint16_t port, uint16_t vf_id);
+int rte_pmd_bnxt_get_vf_rx_status(uint16_t port,
+				  uint16_t vf_id);
 
 /**
  * Queries the TX drop counter for the function
@@ -285,7 +302,8 @@ int rte_pmd_bnxt_get_vf_rx_status(uint16_t port, uint16_t vf_id);
  *   - (-EINVAL) invalid vf_id specified.
  *   - (-ENOTSUP) Ethernet device is not a PF
  */
-int rte_pmd_bnxt_get_vf_tx_drop_count(uint16_t port, uint16_t vf_id,
+int rte_pmd_bnxt_get_vf_tx_drop_count(uint16_t port,
+				      uint16_t vf_id,
 				      uint64_t *count);
 
 /**
@@ -303,8 +321,9 @@ int rte_pmd_bnxt_get_vf_tx_drop_count(uint16_t port, uint16_t vf_id,
  *   - (-ENOTSUP) Ethernet device is not a PF
  *   - (-ENOMEM) on an allocation failure
  */
-int rte_pmd_bnxt_mac_addr_add(uint16_t port, struct ether_addr *mac_addr,
-				uint32_t vf_id);
+int rte_pmd_bnxt_mac_addr_add(uint16_t port,
+			      struct ether_addr *mac_addr,
+			      uint32_t vf_id);
 
 /**
  * Enable/Disable VF statistics retention
@@ -322,5 +341,7 @@ int rte_pmd_bnxt_mac_addr_add(uint16_t port, struct ether_addr *mac_addr,
  *   - (-ENODEV) if *port* invalid.
  *   - (-EINVAL) if bad parameter.
  */
-int rte_pmd_bnxt_set_vf_persist_stats(uint16_t port, uint16_t vf, uint8_t on);
+int rte_pmd_bnxt_set_vf_persist_stats(uint16_t port,
+				      uint16_t vf,
+				      uint8_t on);
 #endif /* _PMD_BNXT_H_ */
-- 
2.15.1 (Apple Git-101)

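The prototypes above form the PF-side VF control interface of the bnxt PMD. As a hedged usage sketch, an application might chain a few of these calls as shown below; the port and VF identifiers are made-up examples, the rate value is arbitrary, and the exact tx_rate units should be taken from the PMD rather than from this sketch.

#include <stdint.h>

#include <rte_ether.h>
#include <rte_pmd_bnxt.h>

/* Example identifiers only; real applications discover these at runtime. */
#define EX_PORT_ID	0
#define EX_VF_ID	1

static int
configure_example_vf(void)
{
	struct ether_addr mac = {
		.addr_bytes = { 0x00, 0x11, 0x22, 0x33, 0x44, 0x55 }
	};
	int rc;

	/* Pin a MAC address on the VF and reject spoofed source MACs. */
	rc = rte_pmd_bnxt_set_vf_mac_addr(EX_PORT_ID, EX_VF_ID, &mac);
	if (rc)
		return rc;

	rc = rte_pmd_bnxt_set_vf_mac_anti_spoof(EX_PORT_ID, EX_VF_ID, 1);
	if (rc)
		return rc;

	/* Rate-limit the VF across all of its queues; 1000 is only an
	 * example value, in whatever units the PMD defines for tx_rate.
	 */
	rc = rte_pmd_bnxt_set_vf_rate_limit(EX_PORT_ID, EX_VF_ID,
					    1000, UINT64_MAX);
	if (rc)
		return rc;

	/* Set the all-queues drop enable bit described in the header. */
	return rte_pmd_bnxt_set_all_queues_drop_en(EX_PORT_ID, 1);
}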

* [dpdk-dev] [PATCH 13/31] net/bnxt: code cleanup style of bnxt stats
  2018-06-19 21:30 [dpdk-dev] [PATCH 00/31] bnxt patchset Ajit Khaparde
                   ` (11 preceding siblings ...)
  2018-06-19 21:30 ` [dpdk-dev] [PATCH 12/31] net/bnxt: code cleanup style of rte pmd bnxt file Ajit Khaparde
@ 2018-06-19 21:30 ` Ajit Khaparde
  2018-06-19 21:30 ` [dpdk-dev] [PATCH 14/31] net/bnxt: code cleanup style of bnxt vnic Ajit Khaparde
                   ` (18 subsequent siblings)
  31 siblings, 0 replies; 73+ messages in thread
From: Ajit Khaparde @ 2018-06-19 21:30 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, Scott Branden

From: Scott Branden <scott.branden@broadcom.com>

Clean up alignment, brackets, and debug string style of bnxt_stats

Signed-off-by: Scott Branden <scott.branden@broadcom.com>
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/bnxt_stats.c | 84 ++++++++++++++++++++++++++-----------------
 drivers/net/bnxt/bnxt_stats.h | 27 +++++++++-----
 2 files changed, 70 insertions(+), 41 deletions(-)

diff --git a/drivers/net/bnxt/bnxt_stats.c b/drivers/net/bnxt/bnxt_stats.c
index a5d3c8660..d930aa00e 100644
--- a/drivers/net/bnxt/bnxt_stats.c
+++ b/drivers/net/bnxt/bnxt_stats.c
@@ -201,7 +201,7 @@ void bnxt_free_stats(struct bnxt *bp)
 }
 
 int bnxt_stats_get_op(struct rte_eth_dev *eth_dev,
-			   struct rte_eth_stats *bnxt_stats)
+		      struct rte_eth_stats *bnxt_stats)
 {
 	int rc = 0;
 	unsigned int i;
@@ -217,8 +217,11 @@ int bnxt_stats_get_op(struct rte_eth_dev *eth_dev,
 		struct bnxt_rx_queue *rxq = bp->rx_queues[i];
 		struct bnxt_cp_ring_info *cpr = rxq->cp_ring;
 
-		rc = bnxt_hwrm_ctx_qstats(bp, cpr->hw_stats_ctx_id, i,
-				     bnxt_stats, 1);
+		rc = bnxt_hwrm_ctx_qstats(bp,
+					  cpr->hw_stats_ctx_id,
+					  i,
+					  bnxt_stats,
+					  1);
 		if (unlikely(rc))
 			return rc;
 		bnxt_stats->rx_nombuf +=
@@ -229,8 +232,12 @@ int bnxt_stats_get_op(struct rte_eth_dev *eth_dev,
 		struct bnxt_tx_queue *txq = bp->tx_queues[i];
 		struct bnxt_cp_ring_info *cpr = txq->cp_ring;
 
-		rc = bnxt_hwrm_ctx_qstats(bp, cpr->hw_stats_ctx_id, i,
-				     bnxt_stats, 0);
+		rc = bnxt_hwrm_ctx_qstats(bp,
+					  cpr->hw_stats_ctx_id,
+					  i,
+					  bnxt_stats,
+					  0);
+
 		if (unlikely(rc))
 			return rc;
 	}
@@ -259,7 +266,8 @@ void bnxt_stats_reset_op(struct rte_eth_dev *eth_dev)
 }
 
 int bnxt_dev_xstats_get_op(struct rte_eth_dev *eth_dev,
-			   struct rte_eth_xstat *xstats, unsigned int n)
+			   struct rte_eth_xstat *xstats,
+			   unsigned int n)
 {
 	struct bnxt *bp = (struct bnxt *)eth_dev->data->dev_private;
 
@@ -279,18 +287,20 @@ int bnxt_dev_xstats_get_op(struct rte_eth_dev *eth_dev,
 	for (i = 0; i < RTE_DIM(bnxt_rx_stats_strings); i++) {
 		uint64_t *rx_stats = (uint64_t *)bp->hw_rx_port_stats;
 		xstats[count].id = count;
-		xstats[count].value = rte_le_to_cpu_64(
-				*(uint64_t *)((char *)rx_stats +
-				bnxt_rx_stats_strings[i].offset));
+		xstats[count].value = rte_le_to_cpu_64
+					(*(uint64_t *)((char *)rx_stats +
+					 bnxt_rx_stats_strings[i].offset));
+
 		count++;
 	}
 
 	for (i = 0; i < RTE_DIM(bnxt_tx_stats_strings); i++) {
 		uint64_t *tx_stats = (uint64_t *)bp->hw_tx_port_stats;
 		xstats[count].id = count;
-		xstats[count].value = rte_le_to_cpu_64(
-				 *(uint64_t *)((char *)tx_stats +
-				bnxt_tx_stats_strings[i].offset));
+		xstats[count].value = rte_le_to_cpu_64
+					(*(uint64_t *)((char *)tx_stats +
+					 bnxt_tx_stats_strings[i].offset));
+
 		count++;
 	}
 
@@ -303,8 +313,8 @@ int bnxt_dev_xstats_get_op(struct rte_eth_dev *eth_dev,
 }
 
 int bnxt_dev_xstats_get_names_op(__rte_unused struct rte_eth_dev *eth_dev,
-	struct rte_eth_xstat_name *xstats_names,
-	__rte_unused unsigned int limit)
+				 struct rte_eth_xstat_name *xstats_names,
+				 __rte_unused unsigned int limit)
 {
 	/* Account for the Tx drop pkts aka the Anti spoof counter */
 	const unsigned int stat_cnt = RTE_DIM(bnxt_rx_stats_strings) +
@@ -316,24 +326,27 @@ int bnxt_dev_xstats_get_names_op(__rte_unused struct rte_eth_dev *eth_dev,
 
 		for (i = 0; i < RTE_DIM(bnxt_rx_stats_strings); i++) {
 			snprintf(xstats_names[count].name,
-				sizeof(xstats_names[count].name),
-				"%s",
-				bnxt_rx_stats_strings[i].name);
+				 sizeof(xstats_names[count].name),
+				 "%s",
+				 bnxt_rx_stats_strings[i].name);
+
 			count++;
 		}
 
 		for (i = 0; i < RTE_DIM(bnxt_tx_stats_strings); i++) {
 			snprintf(xstats_names[count].name,
-				sizeof(xstats_names[count].name),
-				"%s",
-				bnxt_tx_stats_strings[i].name);
+				 sizeof(xstats_names[count].name),
+				 "%s",
+				 bnxt_tx_stats_strings[i].name);
+
 			count++;
 		}
 
 		snprintf(xstats_names[count].name,
-				sizeof(xstats_names[count].name),
-				"%s",
-				bnxt_func_stats_strings[4].name);
+			 sizeof(xstats_names[count].name),
+			 "%s",
+			 bnxt_func_stats_strings[4].name);
+
 		count++;
 	}
 	return stat_cnt;
@@ -354,8 +367,10 @@ void bnxt_dev_xstats_reset_op(struct rte_eth_dev *eth_dev)
 		PMD_DRV_LOG(ERR, "Operation not supported\n");
 }
 
-int bnxt_dev_xstats_get_by_id_op(struct rte_eth_dev *dev, const uint64_t *ids,
-		uint64_t *values, unsigned int limit)
+int bnxt_dev_xstats_get_by_id_op(struct rte_eth_dev *dev,
+				 const uint64_t *ids,
+				 uint64_t *values,
+				 unsigned int limit)
 {
 	/* Account for the Tx drop pkts aka the Anti spoof counter */
 	const unsigned int stat_cnt = RTE_DIM(bnxt_rx_stats_strings) +
@@ -370,7 +385,7 @@ int bnxt_dev_xstats_get_by_id_op(struct rte_eth_dev *dev, const uint64_t *ids,
 	bnxt_dev_xstats_get_by_id_op(dev, NULL, values_copy, stat_cnt);
 	for (i = 0; i < limit; i++) {
 		if (ids[i] >= stat_cnt) {
-			PMD_DRV_LOG(ERR, "id value isn't valid");
+			PMD_DRV_LOG(ERR, "id value isn't valid\n");
 			return -1;
 		}
 		values[i] = values_copy[ids[i]];
@@ -379,8 +394,9 @@ int bnxt_dev_xstats_get_by_id_op(struct rte_eth_dev *dev, const uint64_t *ids,
 }
 
 int bnxt_dev_xstats_get_names_by_id_op(struct rte_eth_dev *dev,
-				struct rte_eth_xstat_name *xstats_names,
-				const uint64_t *ids, unsigned int limit)
+				       struct rte_eth_xstat_name *xstats_names,
+				       const uint64_t *ids,
+				       unsigned int limit)
 {
 	/* Account for the Tx drop pkts aka the Anti spoof counter */
 	const unsigned int stat_cnt = RTE_DIM(bnxt_rx_stats_strings) +
@@ -391,16 +407,18 @@ int bnxt_dev_xstats_get_names_by_id_op(struct rte_eth_dev *dev,
 	if (!ids)
 		return bnxt_dev_xstats_get_names_op(dev, xstats_names,
 						    stat_cnt);
-	bnxt_dev_xstats_get_names_by_id_op(dev, xstats_names_copy, NULL,
-			stat_cnt);
+
+	bnxt_dev_xstats_get_names_by_id_op(dev,
+					   xstats_names_copy,
+					   NULL,
+					   stat_cnt);
 
 	for (i = 0; i < limit; i++) {
 		if (ids[i] >= stat_cnt) {
-			PMD_DRV_LOG(ERR, "id value isn't valid");
+			PMD_DRV_LOG(ERR, "id value isn't valid\n");
 			return -1;
 		}
-		strcpy(xstats_names[i].name,
-				xstats_names_copy[ids[i]].name);
+		strcpy(xstats_names[i].name, xstats_names_copy[ids[i]].name);
 	}
 	return stat_cnt;
 }
diff --git a/drivers/net/bnxt/bnxt_stats.h b/drivers/net/bnxt/bnxt_stats.h
index b0f135a5a..08570238d 100644
--- a/drivers/net/bnxt/bnxt_stats.h
+++ b/drivers/net/bnxt/bnxt_stats.h
@@ -9,20 +9,31 @@
 #include <rte_ethdev_driver.h>
 
 void bnxt_free_stats(struct bnxt *bp);
+
 int bnxt_stats_get_op(struct rte_eth_dev *eth_dev,
-			   struct rte_eth_stats *bnxt_stats);
+		      struct rte_eth_stats *bnxt_stats);
+
 void bnxt_stats_reset_op(struct rte_eth_dev *eth_dev);
+
 int bnxt_dev_xstats_get_names_op(__rte_unused struct rte_eth_dev *eth_dev,
-	struct rte_eth_xstat_name *xstats_names,
-	__rte_unused unsigned int limit);
+				 struct rte_eth_xstat_name *xstats_names,
+				 __rte_unused unsigned int limit);
+
 int bnxt_dev_xstats_get_op(struct rte_eth_dev *eth_dev,
-			   struct rte_eth_xstat *xstats, unsigned int n);
+			   struct rte_eth_xstat *xstats,
+			   unsigned int n);
+
 void bnxt_dev_xstats_reset_op(struct rte_eth_dev *eth_dev);
-int bnxt_dev_xstats_get_by_id_op(struct rte_eth_dev *dev, const uint64_t *ids,
-				uint64_t *values, unsigned int limit);
+
+int bnxt_dev_xstats_get_by_id_op(struct rte_eth_dev *dev,
+				 const uint64_t *ids,
+				 uint64_t *values,
+				 unsigned int limit);
+
 int bnxt_dev_xstats_get_names_by_id_op(struct rte_eth_dev *dev,
-				struct rte_eth_xstat_name *xstats_names,
-				const uint64_t *ids, unsigned int limit);
+				       struct rte_eth_xstat_name *xstats_names,
+				       const uint64_t *ids,
+				       unsigned int limit);
 
 struct bnxt_xstats_name_off {
 	char name[RTE_ETH_XSTATS_NAME_SIZE];
-- 
2.15.1 (Apple Git-101)

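The callbacks above implement the generic ethdev xstats hooks, so the counters they expose are read through the standard API rather than through anything bnxt-specific. A small application-side sketch, with minimal error handling, is below.

#include <errno.h>
#include <inttypes.h>
#include <stdio.h>
#include <stdlib.h>

#include <rte_ethdev.h>

/* Generic ethdev API; nothing here is bnxt-specific. */
static int
dump_xstats(uint16_t port_id)
{
	struct rte_eth_xstat_name *names;
	struct rte_eth_xstat *stats;
	int n, i;

	/* A NULL array with size 0 returns the number of counters. */
	n = rte_eth_xstats_get(port_id, NULL, 0);
	if (n <= 0)
		return n;

	names = calloc(n, sizeof(*names));
	stats = calloc(n, sizeof(*stats));
	if (names == NULL || stats == NULL) {
		free(names);
		free(stats);
		return -ENOMEM;
	}

	rte_eth_xstats_get_names(port_id, names, n);
	rte_eth_xstats_get(port_id, stats, n);

	for (i = 0; i < n; i++)
		printf("%s: %" PRIu64 "\n",
		       names[stats[i].id].name, stats[i].value);

	free(names);
	free(stats);
	return 0;
}

The initial rte_eth_xstats_get() call with a NULL array is the documented way to size the buffers before fetching the names and values.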

* [dpdk-dev] [PATCH 14/31] net/bnxt: code cleanup style of bnxt vnic
  2018-06-19 21:30 [dpdk-dev] [PATCH 00/31] bnxt patchset Ajit Khaparde
                   ` (12 preceding siblings ...)
  2018-06-19 21:30 ` [dpdk-dev] [PATCH 13/31] net/bnxt: code cleanup style of bnxt stats Ajit Khaparde
@ 2018-06-19 21:30 ` Ajit Khaparde
  2018-06-19 21:30 ` [dpdk-dev] [PATCH 15/31] net/bnxt: code cleanup style of bnxt txq Ajit Khaparde
                   ` (17 subsequent siblings)
  31 siblings, 0 replies; 73+ messages in thread
From: Ajit Khaparde @ 2018-06-19 21:30 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, Scott Branden

From: Scott Branden <scott.branden@broadcom.com>

Clean up alignment, brackets, and debug string style of bnxt_vnic

Signed-off-by: Scott Branden <scott.branden@broadcom.com>
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/bnxt_vnic.c | 26 +++++++++++++-------------
 drivers/net/bnxt/bnxt_vnic.h |  8 ++++++--
 2 files changed, 19 insertions(+), 15 deletions(-)

diff --git a/drivers/net/bnxt/bnxt_vnic.c b/drivers/net/bnxt/bnxt_vnic.c
index 19d06af55..5d9d369a3 100644
--- a/drivers/net/bnxt/bnxt_vnic.c
+++ b/drivers/net/bnxt/bnxt_vnic.c
@@ -64,8 +64,9 @@ void bnxt_init_vnics(struct bnxt *bp)
 		STAILQ_INIT(&bp->ff_pool[i]);
 }
 
-int bnxt_free_vnic(struct bnxt *bp, struct bnxt_vnic_info *vnic,
-			  int pool)
+int bnxt_free_vnic(struct bnxt *bp,
+		   struct bnxt_vnic_info *vnic,
+		   int pool)
 {
 	struct bnxt_vnic_info *temp;
 
@@ -143,14 +144,16 @@ int bnxt_alloc_vnic_attributes(struct bnxt *bp)
 	struct rte_pci_device *pdev = bp->pdev;
 	const struct rte_memzone *mz;
 	char mz_name[RTE_MEMZONE_NAMESIZE];
-	uint32_t entry_length = RTE_CACHE_LINE_ROUNDUP(
-				HW_HASH_INDEX_SIZE * sizeof(*vnic->rss_table) +
-				HW_HASH_KEY_SIZE +
-				BNXT_MAX_MC_ADDRS * ETHER_ADDR_LEN);
+	uint32_t entry_length;
 	uint16_t max_vnics;
 	int i;
 	rte_iova_t mz_phys_addr;
 
+	entry_length = RTE_CACHE_LINE_ROUNDUP
+			(HW_HASH_INDEX_SIZE * sizeof(*vnic->rss_table) +
+			 HW_HASH_KEY_SIZE +
+			 BNXT_MAX_MC_ADDRS * ETHER_ADDR_LEN);
+
 	max_vnics = bp->max_vnics;
 	snprintf(mz_name, RTE_MEMZONE_NAMESIZE,
 		 "bnxt_%04x:%02x:%02x:%02x_vnicattr", pdev->addr.domain,
@@ -168,14 +171,11 @@ int bnxt_alloc_vnic_attributes(struct bnxt *bp)
 	}
 	mz_phys_addr = mz->iova;
 	if ((unsigned long)mz->addr == mz_phys_addr) {
-		PMD_DRV_LOG(WARNING,
-			"Memzone physical address same as virtual.\n");
-		PMD_DRV_LOG(WARNING,
-			"Using rte_mem_virt2iova()\n");
+		PMD_DRV_LOG(WARNING, "Memzone phys addr == virtual\n");
+		PMD_DRV_LOG(WARNING, "Using rte_mem_virt2iova()\n");
 		mz_phys_addr = rte_mem_virt2iova(mz->addr);
 		if (mz_phys_addr == 0) {
-			PMD_DRV_LOG(ERR,
-			"unable to map vnic address to physical memory\n");
+			PMD_DRV_LOG(ERR, "unable to map vnic addr\n");
 			return -ENOMEM;
 		}
 	}
@@ -234,7 +234,7 @@ int bnxt_alloc_vnic_mem(struct bnxt *bp)
 	vnic_mem = rte_zmalloc("bnxt_vnic_info",
 			       max_vnics * sizeof(struct bnxt_vnic_info), 0);
 	if (vnic_mem == NULL) {
-		PMD_DRV_LOG(ERR, "Failed to alloc memory for %d VNICs",
+		PMD_DRV_LOG(ERR, "Failed to alloc memory for %d VNICs\n",
 			max_vnics);
 		return -ENOMEM;
 	}
diff --git a/drivers/net/bnxt/bnxt_vnic.h b/drivers/net/bnxt/bnxt_vnic.h
index c521d7e5a..3401ae098 100644
--- a/drivers/net/bnxt/bnxt_vnic.h
+++ b/drivers/net/bnxt/bnxt_vnic.h
@@ -58,12 +58,16 @@ struct bnxt_vnic_info {
 
 struct bnxt;
 void bnxt_init_vnics(struct bnxt *bp);
-int bnxt_free_vnic(struct bnxt *bp, struct bnxt_vnic_info *vnic,
-			  int pool);
+
+int bnxt_free_vnic(struct bnxt *bp,
+		   struct bnxt_vnic_info *vnic,
+		   int pool);
+
 struct bnxt_vnic_info *bnxt_alloc_vnic(struct bnxt *bp);
 void bnxt_free_all_vnics(struct bnxt *bp);
 void bnxt_free_vnic_attributes(struct bnxt *bp);
 int bnxt_alloc_vnic_attributes(struct bnxt *bp);
 void bnxt_free_vnic_mem(struct bnxt *bp);
 int bnxt_alloc_vnic_mem(struct bnxt *bp);
+
 #endif
-- 
2.15.1 (Apple Git-101)


* [dpdk-dev] [PATCH 15/31] net/bnxt: code cleanup style of bnxt txq
  2018-06-19 21:30 [dpdk-dev] [PATCH 00/31] bnxt patchset Ajit Khaparde
                   ` (13 preceding siblings ...)
  2018-06-19 21:30 ` [dpdk-dev] [PATCH 14/31] net/bnxt: code cleanup style of bnxt vnic Ajit Khaparde
@ 2018-06-19 21:30 ` Ajit Khaparde
  2018-06-19 21:30 ` [dpdk-dev] [PATCH 16/31] net/bnxt: code cleanup style of bnxt rxq Ajit Khaparde
                   ` (16 subsequent siblings)
  31 siblings, 0 replies; 73+ messages in thread
From: Ajit Khaparde @ 2018-06-19 21:30 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, Scott Branden

From: Scott Branden <scott.branden@broadcom.com>

Clean up alignment, brackets, and debug string style of bnxt_txq

Signed-off-by: Scott Branden <scott.branden@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/bnxt_txq.c | 24 ++++++++++++++----------
 drivers/net/bnxt/bnxt_txq.h |  9 +++++----
 2 files changed, 19 insertions(+), 14 deletions(-)

diff --git a/drivers/net/bnxt/bnxt_txq.c b/drivers/net/bnxt/bnxt_txq.c
index b9b975e4c..677bb9692 100644
--- a/drivers/net/bnxt/bnxt_txq.c
+++ b/drivers/net/bnxt/bnxt_txq.c
@@ -74,10 +74,10 @@ void bnxt_tx_queue_release_op(void *tx_queue)
 }
 
 int bnxt_tx_queue_setup_op(struct rte_eth_dev *eth_dev,
-			       uint16_t queue_idx,
-			       uint16_t nb_desc,
-			       unsigned int socket_id,
-			       const struct rte_eth_txconf *tx_conf)
+			   uint16_t queue_idx,
+			   uint16_t nb_desc,
+			   unsigned int socket_id,
+			   const struct rte_eth_txconf *tx_conf)
 {
 	struct bnxt *bp = (struct bnxt *)eth_dev->data->dev_private;
 	struct bnxt_tx_queue *txq;
@@ -91,7 +91,7 @@ int bnxt_tx_queue_setup_op(struct rte_eth_dev *eth_dev,
 	}
 
 	if (!nb_desc || nb_desc > MAX_TX_DESC_CNT) {
-		PMD_DRV_LOG(ERR, "nb_desc %d is invalid", nb_desc);
+		PMD_DRV_LOG(ERR, "nb_desc %d is invalid\n", nb_desc);
 		rc = -EINVAL;
 		goto out;
 	}
@@ -106,7 +106,7 @@ int bnxt_tx_queue_setup_op(struct rte_eth_dev *eth_dev,
 	txq = rte_zmalloc_socket("bnxt_tx_queue", sizeof(struct bnxt_tx_queue),
 				 RTE_CACHE_LINE_SIZE, socket_id);
 	if (!txq) {
-		PMD_DRV_LOG(ERR, "bnxt_tx_queue allocation failed!");
+		PMD_DRV_LOG(ERR, "bnxt_tx_queue allocation failed!\n");
 		rc = -ENOMEM;
 		goto out;
 	}
@@ -122,16 +122,20 @@ int bnxt_tx_queue_setup_op(struct rte_eth_dev *eth_dev,
 	txq->port_id = eth_dev->data->port_id;
 
 	/* Allocate TX ring hardware descriptors */
-	if (bnxt_alloc_rings(bp, queue_idx, txq, NULL, txq->cp_ring,
-			"txr")) {
-		PMD_DRV_LOG(ERR, "ring_dma_zone_reserve for tx_ring failed!");
+	if (bnxt_alloc_rings(bp,
+			     queue_idx,
+			     txq,
+			     NULL,
+			     txq->cp_ring,
+			     "txr")) {
+		PMD_DRV_LOG(ERR, "ring_dma_zone_reserve for tx_ring failed!\n");
 		bnxt_tx_queue_release_op(txq);
 		rc = -ENOMEM;
 		goto out;
 	}
 
 	if (bnxt_init_one_tx_ring(txq)) {
-		PMD_DRV_LOG(ERR, "bnxt_init_one_tx_ring failed!");
+		PMD_DRV_LOG(ERR, "bnxt_init_one_tx_ring failed!\n");
 		bnxt_tx_queue_release_op(txq);
 		rc = -ENOMEM;
 		goto out;
diff --git a/drivers/net/bnxt/bnxt_txq.h b/drivers/net/bnxt/bnxt_txq.h
index f2c712a75..9da8f39d8 100644
--- a/drivers/net/bnxt/bnxt_txq.h
+++ b/drivers/net/bnxt/bnxt_txq.h
@@ -40,8 +40,9 @@ void bnxt_free_txq_stats(struct bnxt_tx_queue *txq);
 void bnxt_free_tx_mbufs(struct bnxt *bp);
 void bnxt_tx_queue_release_op(void *tx_queue);
 int bnxt_tx_queue_setup_op(struct rte_eth_dev *eth_dev,
-			       uint16_t queue_idx,
-			       uint16_t nb_desc,
-			       unsigned int socket_id,
-			       const struct rte_eth_txconf *tx_conf);
+			   uint16_t queue_idx,
+			   uint16_t nb_desc,
+			   unsigned int socket_id,
+			   const struct rte_eth_txconf *tx_conf);
+
 #endif
-- 
2.15.1 (Apple Git-101)


* [dpdk-dev] [PATCH 16/31] net/bnxt: code cleanup style of bnxt rxq
  2018-06-19 21:30 [dpdk-dev] [PATCH 00/31] bnxt patchset Ajit Khaparde
                   ` (14 preceding siblings ...)
  2018-06-19 21:30 ` [dpdk-dev] [PATCH 15/31] net/bnxt: code cleanup style of bnxt txq Ajit Khaparde
@ 2018-06-19 21:30 ` Ajit Khaparde
  2018-06-19 21:30 ` [dpdk-dev] [PATCH 17/31] net/bnxt: code cleanup style of bnxt vnic Ajit Khaparde
                   ` (15 subsequent siblings)
  31 siblings, 0 replies; 73+ messages in thread
From: Ajit Khaparde @ 2018-06-19 21:30 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, Scott Branden

From: Scott Branden <scott.branden@broadcom.com>

Clean up alignment, brackets, and debug string style of bnxt_rxq

Signed-off-by: Scott Branden <scott.branden@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/bnxt_rxq.c | 22 +++++++++++++---------
 drivers/net/bnxt/bnxt_rxq.h | 12 +++++++-----
 2 files changed, 20 insertions(+), 14 deletions(-)

diff --git a/drivers/net/bnxt/bnxt_rxq.c b/drivers/net/bnxt/bnxt_rxq.c
index f405e2575..d622ad4ef 100644
--- a/drivers/net/bnxt/bnxt_rxq.c
+++ b/drivers/net/bnxt/bnxt_rxq.c
@@ -83,8 +83,8 @@ int bnxt_mq_rx_configure(struct bnxt *bp)
 			/* For each pool, allocate MACVLAN CFA rule & VNIC */
 			max_pools = RTE_MIN(bp->max_vnics,
 					    RTE_MIN(bp->max_l2_ctx,
-					    RTE_MIN(bp->max_rsscos_ctx,
-						    ETH_64_POOLS)));
+						    RTE_MIN(bp->max_rsscos_ctx,
+							    ETH_64_POOLS)));
 			if (pools > max_pools)
 				pools = max_pools;
 			break;
@@ -280,11 +280,11 @@ void bnxt_rx_queue_release_op(void *rx_queue)
 }
 
 int bnxt_rx_queue_setup_op(struct rte_eth_dev *eth_dev,
-			       uint16_t queue_idx,
-			       uint16_t nb_desc,
-			       unsigned int socket_id,
-			       const struct rte_eth_rxconf *rx_conf,
-			       struct rte_mempool *mp)
+			   uint16_t queue_idx,
+			   uint16_t nb_desc,
+			   unsigned int socket_id,
+			   const struct rte_eth_rxconf *rx_conf,
+			   struct rte_mempool *mp)
 {
 	struct bnxt *bp = (struct bnxt *)eth_dev->data->dev_private;
 	uint64_t rx_offloads = eth_dev->data->dev_conf.rxmode.offloads;
@@ -336,8 +336,12 @@ int bnxt_rx_queue_setup_op(struct rte_eth_dev *eth_dev,
 
 	eth_dev->data->rx_queues[queue_idx] = rxq;
 	/* Allocate RX ring hardware descriptors */
-	if (bnxt_alloc_rings(bp, queue_idx, NULL, rxq, rxq->cp_ring,
-			"rxr")) {
+	if (bnxt_alloc_rings(bp,
+			     queue_idx,
+			     NULL,
+			     rxq,
+			     rxq->cp_ring,
+			     "rxr")) {
 		PMD_DRV_LOG(ERR,
 			"ring_dma_zone_reserve for rx_ring failed!\n");
 		bnxt_rx_queue_release_op(rxq);
diff --git a/drivers/net/bnxt/bnxt_rxq.h b/drivers/net/bnxt/bnxt_rxq.h
index e5d6001d3..6e6c04010 100644
--- a/drivers/net/bnxt/bnxt_rxq.h
+++ b/drivers/net/bnxt/bnxt_rxq.h
@@ -42,12 +42,14 @@ struct bnxt_rx_queue {
 void bnxt_free_rxq_stats(struct bnxt_rx_queue *rxq);
 int bnxt_mq_rx_configure(struct bnxt *bp);
 void bnxt_rx_queue_release_op(void *rx_queue);
+
 int bnxt_rx_queue_setup_op(struct rte_eth_dev *eth_dev,
-			       uint16_t queue_idx,
-			       uint16_t nb_desc,
-			       unsigned int socket_id,
-			       const struct rte_eth_rxconf *rx_conf,
-			       struct rte_mempool *mp);
+			   uint16_t queue_idx,
+			   uint16_t nb_desc,
+			   unsigned int socket_id,
+			   const struct rte_eth_rxconf *rx_conf,
+			   struct rte_mempool *mp);
+
 void bnxt_free_rx_mbufs(struct bnxt *bp);
 int bnxt_rx_queue_intr_enable_op(struct rte_eth_dev *eth_dev,
 				 uint16_t queue_id);
-- 
2.15.1 (Apple Git-101)


* [dpdk-dev] [PATCH 17/31] net/bnxt: code cleanup style of bnxt vnic
  2018-06-19 21:30 [dpdk-dev] [PATCH 00/31] bnxt patchset Ajit Khaparde
                   ` (15 preceding siblings ...)
  2018-06-19 21:30 ` [dpdk-dev] [PATCH 16/31] net/bnxt: code cleanup style of bnxt rxq Ajit Khaparde
@ 2018-06-19 21:30 ` Ajit Khaparde
  2018-06-19 21:30 ` [dpdk-dev] [PATCH 18/31] net/bnxt: code cleanup style of bnxt txr Ajit Khaparde
                   ` (14 subsequent siblings)
  31 siblings, 0 replies; 73+ messages in thread
From: Ajit Khaparde @ 2018-06-19 21:30 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit

Clean up the code style of bnxt_vnic.

Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/bnxt_vnic.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/net/bnxt/bnxt_vnic.c b/drivers/net/bnxt/bnxt_vnic.c
index 5d9d369a3..d5d81fd36 100644
--- a/drivers/net/bnxt/bnxt_vnic.c
+++ b/drivers/net/bnxt/bnxt_vnic.c
@@ -235,7 +235,7 @@ int bnxt_alloc_vnic_mem(struct bnxt *bp)
 			       max_vnics * sizeof(struct bnxt_vnic_info), 0);
 	if (vnic_mem == NULL) {
 		PMD_DRV_LOG(ERR, "Failed to alloc memory for %d VNICs\n",
-			max_vnics);
+			    max_vnics);
 		return -ENOMEM;
 	}
 	bp->vnic_info = vnic_mem;
-- 
2.15.1 (Apple Git-101)


* [dpdk-dev] [PATCH 18/31] net/bnxt: code cleanup style of bnxt txr
  2018-06-19 21:30 [dpdk-dev] [PATCH 00/31] bnxt patchset Ajit Khaparde
                   ` (16 preceding siblings ...)
  2018-06-19 21:30 ` [dpdk-dev] [PATCH 17/31] net/bnxt: code cleanup style of bnxt vnic Ajit Khaparde
@ 2018-06-19 21:30 ` Ajit Khaparde
  2018-06-19 21:30 ` [dpdk-dev] [PATCH 19/31] net/bnxt: code cleanup style of bnxt ring Ajit Khaparde
                   ` (13 subsequent siblings)
  31 siblings, 0 replies; 73+ messages in thread
From: Ajit Khaparde @ 2018-06-19 21:30 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, Scott Branden

From: Scott Branden <scott.branden@broadcom.com>

Clean up alignment, brackets, and debug string style of bnxt_txr

Signed-off-by: Scott Branden <scott.branden@broadcom.com>
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/bnxt_txr.c | 5 +++--
 drivers/net/bnxt/bnxt_txr.h | 9 +++++----
 2 files changed, 8 insertions(+), 6 deletions(-)

diff --git a/drivers/net/bnxt/bnxt_txr.c b/drivers/net/bnxt/bnxt_txr.c
index 68645b2f7..f8fd22156 100644
--- a/drivers/net/bnxt/bnxt_txr.c
+++ b/drivers/net/bnxt/bnxt_txr.c
@@ -373,8 +373,9 @@ static int bnxt_handle_tx_cp(struct bnxt_tx_queue *txq)
 	return nb_tx_pkts;
 }
 
-uint16_t bnxt_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
-			       uint16_t nb_pkts)
+uint16_t bnxt_xmit_pkts(void *tx_queue,
+			struct rte_mbuf **tx_pkts,
+			uint16_t nb_pkts)
 {
 	struct bnxt_tx_queue *txq = tx_queue;
 	uint16_t nb_tx_pkts = 0;
diff --git a/drivers/net/bnxt/bnxt_txr.h b/drivers/net/bnxt/bnxt_txr.h
index 7f3c7cdb0..33cdea5f6 100644
--- a/drivers/net/bnxt/bnxt_txr.h
+++ b/drivers/net/bnxt/bnxt_txr.h
@@ -30,7 +30,7 @@ struct bnxt_tx_ring_info {
 };
 
 struct bnxt_sw_tx_bd {
-	struct rte_mbuf		*mbuf; /* mbuf associated with TX descriptor */
+	struct rte_mbuf		*mbuf;
 	uint8_t			is_gso;
 	unsigned short		nr_bds;
 };
@@ -38,8 +38,10 @@ struct bnxt_sw_tx_bd {
 void bnxt_free_tx_rings(struct bnxt *bp);
 int bnxt_init_one_tx_ring(struct bnxt_tx_queue *txq);
 int bnxt_init_tx_ring_struct(struct bnxt_tx_queue *txq, unsigned int socket_id);
-uint16_t bnxt_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
-			       uint16_t nb_pkts);
+
+uint16_t bnxt_xmit_pkts(void *tx_queue,
+			struct rte_mbuf **tx_pkts,
+			uint16_t nb_pkts);
 int bnxt_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id);
 int bnxt_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id);
 
@@ -63,7 +65,6 @@ int bnxt_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id);
 					 PKT_TX_OUTER_IP_CKSUM)
 #define PKT_TX_TCP_UDP_CKSUM		(PKT_TX_TCP_CKSUM | PKT_TX_UDP_CKSUM)
 
-
 #define TX_BD_FLG_TIP_IP_TCP_UDP_CHKSUM	(TX_BD_LONG_LFLAGS_TCP_UDP_CHKSUM | \
 					TX_BD_LONG_LFLAGS_T_IP_CHKSUM | \
 					TX_BD_LONG_LFLAGS_IP_CHKSUM)
-- 
2.15.1 (Apple Git-101)


* [dpdk-dev] [PATCH 19/31] net/bnxt: code cleanup style of bnxt ring
  2018-06-19 21:30 [dpdk-dev] [PATCH 00/31] bnxt patchset Ajit Khaparde
                   ` (17 preceding siblings ...)
  2018-06-19 21:30 ` [dpdk-dev] [PATCH 18/31] net/bnxt: code cleanup style of bnxt txr Ajit Khaparde
@ 2018-06-19 21:30 ` Ajit Khaparde
  2018-06-19 21:30 ` [dpdk-dev] [PATCH 20/31] net/bnxt: code cleanup style of bnxt ethdev Ajit Khaparde
                   ` (12 subsequent siblings)
  31 siblings, 0 replies; 73+ messages in thread
From: Ajit Khaparde @ 2018-06-19 21:30 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, Scott Branden

From: Scott Branden <scott.branden@broadcom.com>

Clean up alignment, brackets, and debug string style of bnxt_ring

Signed-off-by: Scott Branden <scott.branden@broadcom.com>
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/bnxt_ring.c | 79 ++++++++++++++++++++++++++------------------
 drivers/net/bnxt/bnxt_ring.h | 40 +++++++++++-----------
 2 files changed, 68 insertions(+), 51 deletions(-)

diff --git a/drivers/net/bnxt/bnxt_ring.c b/drivers/net/bnxt/bnxt_ring.c
index fcbd6bc6e..03a5381a3 100644
--- a/drivers/net/bnxt/bnxt_ring.c
+++ b/drivers/net/bnxt/bnxt_ring.c
@@ -64,10 +64,10 @@ int bnxt_init_ring_grps(struct bnxt *bp)
  * rx bd ring - Only non-zero length if rx_ring_info is not NULL
  */
 int bnxt_alloc_rings(struct bnxt *bp, uint16_t qidx,
-			    struct bnxt_tx_queue *txq,
-			    struct bnxt_rx_queue *rxq,
-			    struct bnxt_cp_ring_info *cp_ring_info,
-			    const char *suffix)
+		     struct bnxt_tx_queue *txq,
+		     struct bnxt_rx_queue *rxq,
+		     struct bnxt_cp_ring_info *cp_ring_info,
+		     const char *suffix)
 {
 	struct bnxt_ring *cp_ring = cp_ring_info->cp_ring_struct;
 	struct bnxt_rx_ring_info *rx_ring_info = rxq ? rxq->rx_ring : NULL;
@@ -90,20 +90,24 @@ int bnxt_alloc_rings(struct bnxt *bp, uint16_t qidx,
 
 	int tx_vmem_start = cp_vmem_start + cp_vmem_len;
 	int tx_vmem_len =
-	    tx_ring_info ? RTE_CACHE_LINE_ROUNDUP(tx_ring_info->
-						tx_ring_struct->vmem_size) : 0;
+	    tx_ring_info ?
+		RTE_CACHE_LINE_ROUNDUP(tx_ring_info->tx_ring_struct->vmem_size)
+		: 0;
 
 	int rx_vmem_start = tx_vmem_start + tx_vmem_len;
 	int rx_vmem_len = rx_ring_info ?
-		RTE_CACHE_LINE_ROUNDUP(rx_ring_info->
-						rx_ring_struct->vmem_size) : 0;
+		RTE_CACHE_LINE_ROUNDUP(rx_ring_info->rx_ring_struct->vmem_size)
+		: 0;
+
 	int ag_vmem_start = 0;
 	int ag_vmem_len = 0;
 	int cp_ring_start =  0;
 
 	ag_vmem_start = rx_vmem_start + rx_vmem_len;
-	ag_vmem_len = rx_ring_info ? RTE_CACHE_LINE_ROUNDUP(
-				rx_ring_info->ag_ring_struct->vmem_size) : 0;
+	ag_vmem_len = rx_ring_info ?
+		RTE_CACHE_LINE_ROUNDUP(rx_ring_info->ag_ring_struct->vmem_size)
+		: 0;
+
 	cp_ring_start = ag_vmem_start + ag_vmem_len;
 
 	int cp_ring_len = RTE_CACHE_LINE_ROUNDUP(cp_ring->ring_size *
@@ -124,9 +128,11 @@ int bnxt_alloc_rings(struct bnxt *bp, uint16_t qidx,
 
 	int ag_bitmap_start = ag_ring_start + ag_ring_len;
 	int ag_bitmap_len =  rx_ring_info ?
-		RTE_CACHE_LINE_ROUNDUP(rte_bitmap_get_memory_footprint(
-			rx_ring_info->rx_ring_struct->ring_size *
-			AGG_RING_SIZE_FACTOR)) : 0;
+		RTE_CACHE_LINE_ROUNDUP
+		  (rte_bitmap_get_memory_footprint
+		    (rx_ring_info->rx_ring_struct->ring_size *
+		     AGG_RING_SIZE_FACTOR))
+		: 0;
 
 	int tpa_info_start = ag_bitmap_start + ag_bitmap_len;
 	int tpa_info_len = rx_ring_info ?
@@ -134,6 +140,7 @@ int bnxt_alloc_rings(struct bnxt *bp, uint16_t qidx,
 				       sizeof(struct bnxt_tpa_info)) : 0;
 
 	int total_alloc_len = tpa_info_start;
+
 	if (rx_offloads & DEV_RX_OFFLOAD_TCP_LRO)
 		total_alloc_len += tpa_info_len;
 
@@ -144,12 +151,13 @@ int bnxt_alloc_rings(struct bnxt *bp, uint16_t qidx,
 	mz_name[RTE_MEMZONE_NAMESIZE - 1] = 0;
 	mz = rte_memzone_lookup(mz_name);
 	if (!mz) {
-		mz = rte_memzone_reserve_aligned(mz_name, total_alloc_len,
-				SOCKET_ID_ANY,
-				RTE_MEMZONE_2MB |
-				RTE_MEMZONE_SIZE_HINT_ONLY |
-				RTE_MEMZONE_IOVA_CONTIG,
-				getpagesize());
+		mz = rte_memzone_reserve_aligned(mz_name,
+						 total_alloc_len,
+						 SOCKET_ID_ANY,
+						 RTE_MEMZONE_2MB |
+						 RTE_MEMZONE_SIZE_HINT_ONLY |
+						 RTE_MEMZONE_IOVA_CONTIG,
+						 getpagesize());
 		if (mz == NULL)
 			return -ENOMEM;
 	}
@@ -165,7 +173,7 @@ int bnxt_alloc_rings(struct bnxt *bp, uint16_t qidx,
 		mz_phys_addr = rte_mem_virt2iova(mz->addr);
 		if (mz_phys_addr == 0) {
 			PMD_DRV_LOG(ERR,
-			"unable to map ring address to physical memory\n");
+				    "unable to map ring addr to phys memory\n");
 			return -ENOMEM;
 		}
 	}
@@ -440,10 +448,12 @@ int bnxt_alloc_hwrm_rings(struct bnxt *bp)
 			goto err_out;
 		}
 
-		rc = bnxt_hwrm_ring_alloc(bp, ring,
-				HWRM_RING_ALLOC_INPUT_RING_TYPE_RX,
-				map_idx, HWRM_NA_SIGNATURE,
-				cp_ring->fw_ring_id);
+		rc = bnxt_hwrm_ring_alloc(bp,
+					  ring,
+					  HWRM_RING_ALLOC_INPUT_RING_TYPE_RX,
+					  map_idx,
+					  HWRM_NA_SIGNATURE,
+					  cp_ring->fw_ring_id);
 		if (rc)
 			goto err_out;
 		PMD_DRV_LOG(DEBUG, "Alloc AGG Done!\n");
@@ -473,10 +483,13 @@ int bnxt_alloc_hwrm_rings(struct bnxt *bp)
 		unsigned int idx = i + bp->rx_cp_nr_rings;
 
 		/* Tx cmpl */
-		rc = bnxt_hwrm_ring_alloc(bp, cp_ring,
-					HWRM_RING_ALLOC_INPUT_RING_TYPE_L2_CMPL,
-					idx, HWRM_NA_SIGNATURE,
-					HWRM_NA_SIGNATURE);
+		rc = bnxt_hwrm_ring_alloc
+			(bp,
+			 cp_ring,
+			 HWRM_RING_ALLOC_INPUT_RING_TYPE_L2_CMPL,
+			 idx,
+			 HWRM_NA_SIGNATURE,
+			 HWRM_NA_SIGNATURE);
 		if (rc)
 			goto err_out;
 
@@ -484,10 +497,12 @@ int bnxt_alloc_hwrm_rings(struct bnxt *bp)
 		B_CP_DIS_DB(cpr, cpr->cp_raw_cons);
 
 		/* Tx ring */
-		rc = bnxt_hwrm_ring_alloc(bp, ring,
-					HWRM_RING_ALLOC_INPUT_RING_TYPE_TX,
-					idx, cpr->hw_stats_ctx_id,
-					cp_ring->fw_ring_id);
+		rc = bnxt_hwrm_ring_alloc(bp,
+					  ring,
+					  HWRM_RING_ALLOC_INPUT_RING_TYPE_TX,
+					  idx,
+					  cpr->hw_stats_ctx_id,
+					  cp_ring->fw_ring_id);
 		if (rc)
 			goto err_out;
 
diff --git a/drivers/net/bnxt/bnxt_ring.h b/drivers/net/bnxt/bnxt_ring.h
index 1446d784f..9348bf2b2 100644
--- a/drivers/net/bnxt/bnxt_ring.h
+++ b/drivers/net/bnxt/bnxt_ring.h
@@ -10,17 +10,17 @@
 
 #include <rte_memory.h>
 
-#define RING_NEXT(ring, idx)		(((idx) + 1) & (ring)->ring_mask)
-
-#define DB_IDX_MASK						0xffffff
-#define DB_IDX_VALID						(0x1 << 26)
-#define DB_IRQ_DIS						(0x1 << 27)
-#define DB_KEY_TX						(0x0 << 28)
-#define DB_KEY_RX						(0x1 << 28)
-#define DB_KEY_CP						(0x2 << 28)
-#define DB_KEY_ST						(0x3 << 28)
-#define DB_KEY_TX_PUSH						(0x4 << 28)
-#define DB_LONG_TX_PUSH						(0x2 << 24)
+#define RING_NEXT(ring, idx)	(((idx) + 1) & (ring)->ring_mask)
+
+#define DB_IDX_MASK		0xffffff
+#define DB_IDX_VALID		(0x1 << 26)
+#define DB_IRQ_DIS		(0x1 << 27)
+#define DB_KEY_TX		(0x0 << 28)
+#define DB_KEY_RX		(0x1 << 28)
+#define DB_KEY_CP		(0x2 << 28)
+#define DB_KEY_ST		(0x3 << 28)
+#define DB_KEY_TX_PUSH		(0x4 << 28)
+#define DB_LONG_TX_PUSH		(0x2 << 24)
 
 #define DEFAULT_CP_RING_SIZE	256
 #define DEFAULT_RX_RING_SIZE	256
@@ -31,12 +31,13 @@
 #define AGG_RING_MULTIPLIER	2
 
 /* These assume 4k pages */
-#define MAX_RX_DESC_CNT (8 * 1024)
-#define MAX_TX_DESC_CNT (4 * 1024)
-#define MAX_CP_DESC_CNT (16 * 1024)
+#define MAX_RX_DESC_CNT		(8 * 1024)
+#define MAX_TX_DESC_CNT		(4 * 1024)
+#define MAX_CP_DESC_CNT		(16 * 1024)
 
 #define INVALID_HW_RING_ID      ((uint16_t)-1)
-#define INVALID_STATS_CTX_ID		((uint16_t)-1)
+#define INVALID_STATS_CTX_ID	((uint16_t)-1)
+#define INVALID_RING_GRP_ID     ((uint16_t)-1)
 
 struct bnxt_ring {
 	void			*bd;
@@ -65,11 +66,12 @@ struct bnxt_rx_ring_info;
 struct bnxt_cp_ring_info;
 void bnxt_free_ring(struct bnxt_ring *ring);
 int bnxt_init_ring_grps(struct bnxt *bp);
+
 int bnxt_alloc_rings(struct bnxt *bp, uint16_t qidx,
-			    struct bnxt_tx_queue *txq,
-			    struct bnxt_rx_queue *rxq,
-			    struct bnxt_cp_ring_info *cp_ring_info,
-			    const char *suffix);
+		     struct bnxt_tx_queue *txq,
+		     struct bnxt_rx_queue *rxq,
+		     struct bnxt_cp_ring_info *cp_ring_info,
+		     const char *suffix);
 int bnxt_alloc_hwrm_rx_ring(struct bnxt *bp, int queue_index);
 int bnxt_alloc_hwrm_rings(struct bnxt *bp);
 
-- 
2.15.1 (Apple Git-101)

^ permalink raw reply	[flat|nested] 73+ messages in thread

* [dpdk-dev] [PATCH 20/31] net/bnxt: code cleanup style of bnxt ethdev
  2018-06-19 21:30 [dpdk-dev] [PATCH 00/31] bnxt patchset Ajit Khaparde
                   ` (18 preceding siblings ...)
  2018-06-19 21:30 ` [dpdk-dev] [PATCH 19/31] net/bnxt: code cleanup style of bnxt ring Ajit Khaparde
@ 2018-06-19 21:30 ` Ajit Khaparde
  2018-06-19 21:30 ` [dpdk-dev] [PATCH 21/31] net/bnxt: move function check zero bytes to bnxt util.h Ajit Khaparde
                   ` (11 subsequent siblings)
  31 siblings, 0 replies; 73+ messages in thread
From: Ajit Khaparde @ 2018-06-19 21:30 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, Scott Branden

From: Scott Branden <scott.branden@broadcom.com>

Clean up alignment, brackets, and debug string style in bnxt_ethdev

Signed-off-by: Scott Branden <scott.branden@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/bnxt_ethdev.c | 204 ++++++++++++++++++++++-------------------
 1 file changed, 112 insertions(+), 92 deletions(-)

diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index d66a29758..6516aeedd 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -315,8 +315,9 @@ static int bnxt_init_chip(struct bnxt *bp)
 		intr_vector = bp->eth_dev->data->nb_rx_queues;
 		PMD_DRV_LOG(DEBUG, "intr_vector = %d\n", intr_vector);
 		if (intr_vector > bp->rx_cp_nr_rings) {
-			PMD_DRV_LOG(ERR, "At most %d intr queues supported",
-					bp->rx_cp_nr_rings);
+			PMD_DRV_LOG(ERR,
+				    "At most %d intr queues supported\n",
+				    bp->rx_cp_nr_rings);
 			return -ENOTSUP;
 		}
 		if (rte_intr_efd_enable(intr_handle, intr_vector))
@@ -329,14 +330,15 @@ static int bnxt_init_chip(struct bnxt *bp)
 				    bp->eth_dev->data->nb_rx_queues *
 				    sizeof(int), 0);
 		if (intr_handle->intr_vec == NULL) {
-			PMD_DRV_LOG(ERR, "Failed to allocate %d rx_queues"
-				" intr_vec", bp->eth_dev->data->nb_rx_queues);
+			PMD_DRV_LOG(ERR,
+				    "Failed to allocate %d rx_queues intr_vec\n",
+				    bp->eth_dev->data->nb_rx_queues);
 			return -ENOMEM;
 		}
-		PMD_DRV_LOG(DEBUG, "intr_handle->intr_vec = %p "
-			"intr_handle->nb_efd = %d intr_handle->max_intr = %d\n",
-			 intr_handle->intr_vec, intr_handle->nb_efd,
-			intr_handle->max_intr);
+		PMD_DRV_LOG(DEBUG,
+			    "intr_handle->intr_vec = %p intr_handle->nb_efd = %d intr_handle->max_intr = %d\n",
+			    intr_handle->intr_vec, intr_handle->nb_efd,
+			    intr_handle->max_intr);
 	}
 
 	for (queue_id = 0; queue_id < bp->eth_dev->data->nb_rx_queues;
@@ -404,7 +406,7 @@ static int bnxt_init_nic(struct bnxt *bp)
  */
 
 static void bnxt_dev_info_get_op(struct rte_eth_dev *eth_dev,
-				  struct rte_eth_dev_info *dev_info)
+				 struct rte_eth_dev_info *dev_info)
 {
 	struct bnxt *bp = (struct bnxt *)eth_dev->data->dev_private;
 	uint16_t max_vnics, i, j, vpool, vrxq;
@@ -706,15 +708,22 @@ static void bnxt_mac_addr_remove_op(struct rte_eth_dev *eth_dev,
 			while (filter) {
 				temp_filter = STAILQ_NEXT(filter, next);
 				if (filter->mac_index == index) {
-					STAILQ_REMOVE(&vnic->filter, filter,
-						      bnxt_filter_info, next);
+					STAILQ_REMOVE(&vnic->filter,
+						      filter,
+						      bnxt_filter_info,
+						      next);
+
 					bnxt_hwrm_clear_l2_filter(bp, filter);
 					filter->mac_index = INVALID_MAC_INDEX;
-					memset(&filter->l2_addr, 0,
+
+					memset(&filter->l2_addr,
+					       0,
 					       ETHER_ADDR_LEN);
-					STAILQ_INSERT_TAIL(
-							&bp->free_filter_list,
-							filter, next);
+
+					STAILQ_INSERT_TAIL
+						(&bp->free_filter_list,
+						 filter,
+						 next);
 				}
 				filter = temp_filter;
 			}
@@ -785,9 +794,10 @@ int bnxt_link_update_op(struct rte_eth_dev *eth_dev, int wait_to_complete)
 out:
 	/* Timed out or success */
 	if (new.link_status != eth_dev->data->dev_link.link_status ||
-	new.link_speed != eth_dev->data->dev_link.link_speed) {
-		memcpy(&eth_dev->data->dev_link, &new,
-			sizeof(struct rte_eth_link));
+	    new.link_speed != eth_dev->data->dev_link.link_speed) {
+		memcpy(&eth_dev->data->dev_link,
+		       &new,
+		       sizeof(struct rte_eth_link));
 
 		_rte_eth_dev_callback_process(eth_dev,
 					      RTE_ETH_EVENT_INTR_LSC,
@@ -856,8 +866,8 @@ static void bnxt_allmulticast_disable_op(struct rte_eth_dev *eth_dev)
 }
 
 static int bnxt_reta_update_op(struct rte_eth_dev *eth_dev,
-			    struct rte_eth_rss_reta_entry64 *reta_conf,
-			    uint16_t reta_size)
+			       struct rte_eth_rss_reta_entry64 *reta_conf,
+			       uint16_t reta_size)
 {
 	struct bnxt *bp = (struct bnxt *)eth_dev->data->dev_private;
 	struct rte_eth_conf *dev_conf = &bp->eth_dev->data->dev_conf;
@@ -868,9 +878,9 @@ static int bnxt_reta_update_op(struct rte_eth_dev *eth_dev,
 		return -EINVAL;
 
 	if (reta_size != HW_HASH_INDEX_SIZE) {
-		PMD_DRV_LOG(ERR, "The configured hash table lookup size "
-			"(%d) must equal the size supported by the hardware "
-			"(%d)\n", reta_size, HW_HASH_INDEX_SIZE);
+		PMD_DRV_LOG(ERR,
+			    "Configured hash table lookup size (%d) != (%d)\n",
+			    reta_size, HW_HASH_INDEX_SIZE);
 		return -EINVAL;
 	}
 	/* Update the RSS VNIC(s) */
@@ -900,9 +910,9 @@ static int bnxt_reta_query_op(struct rte_eth_dev *eth_dev,
 		return -EINVAL;
 
 	if (reta_size != HW_HASH_INDEX_SIZE) {
-		PMD_DRV_LOG(ERR, "The configured hash table lookup size "
-			"(%d) must equal the size supported by the hardware "
-			"(%d)\n", reta_size, HW_HASH_INDEX_SIZE);
+		PMD_DRV_LOG(ERR,
+			    "Configured hash table lookup size (%d) != (%d)\n",
+			    reta_size, HW_HASH_INDEX_SIZE);
 		return -EINVAL;
 	}
 	/* EW - need to revisit here copying from uint64_t to uint16_t */
@@ -1021,8 +1031,8 @@ static int bnxt_rss_hash_conf_get_op(struct rte_eth_dev *eth_dev,
 		}
 		if (hash_types) {
 			PMD_DRV_LOG(ERR,
-				"Unknwon RSS config from firmware (%08x), RSS disabled",
-				vnic->hash_type);
+				    "Unknown RSS config (%08x), RSS disabled\n",
+				    vnic->hash_type);
 			return -ENOTSUP;
 		}
 	} else {
@@ -1032,7 +1042,7 @@ static int bnxt_rss_hash_conf_get_op(struct rte_eth_dev *eth_dev,
 }
 
 static int bnxt_flow_ctrl_get_op(struct rte_eth_dev *dev,
-			       struct rte_eth_fc_conf *fc_conf)
+				 struct rte_eth_fc_conf *fc_conf)
 {
 	struct bnxt *bp = (struct bnxt *)dev->data->dev_private;
 	struct rte_eth_link link_info;
@@ -1064,7 +1074,7 @@ static int bnxt_flow_ctrl_get_op(struct rte_eth_dev *dev,
 }
 
 static int bnxt_flow_ctrl_set_op(struct rte_eth_dev *dev,
-			       struct rte_eth_fc_conf *fc_conf)
+				 struct rte_eth_fc_conf *fc_conf)
 {
 	struct bnxt *bp = (struct bnxt *)dev->data->dev_private;
 
@@ -1120,7 +1130,7 @@ static int bnxt_flow_ctrl_set_op(struct rte_eth_dev *dev,
 /* Add UDP tunneling port */
 static int
 bnxt_udp_tunnel_port_add_op(struct rte_eth_dev *eth_dev,
-			 struct rte_eth_udp_tunnel *udp_tunnel)
+			    struct rte_eth_udp_tunnel *udp_tunnel)
 {
 	struct bnxt *bp = (struct bnxt *)eth_dev->data->dev_private;
 	uint16_t tunnel_type = 0;
@@ -1168,7 +1178,7 @@ bnxt_udp_tunnel_port_add_op(struct rte_eth_dev *eth_dev,
 
 static int
 bnxt_udp_tunnel_port_del_op(struct rte_eth_dev *eth_dev,
-			 struct rte_eth_udp_tunnel *udp_tunnel)
+			    struct rte_eth_udp_tunnel *udp_tunnel)
 {
 	struct bnxt *bp = (struct bnxt *)eth_dev->data->dev_private;
 	uint16_t tunnel_type = 0;
@@ -1256,9 +1266,10 @@ static int bnxt_del_vlan_filter(struct bnxt *bp, uint16_t vlan_id)
 					STAILQ_REMOVE(&vnic->filter, filter,
 						      bnxt_filter_info, next);
 					bnxt_hwrm_clear_l2_filter(bp, filter);
-					STAILQ_INSERT_TAIL(
-							&bp->free_filter_list,
-							filter, next);
+					STAILQ_INSERT_TAIL
+							(&bp->free_filter_list,
+							 filter,
+							 next);
 
 					/*
 					 * Need to examine to see if the MAC
@@ -1281,9 +1292,10 @@ static int bnxt_del_vlan_filter(struct bnxt *bp, uint16_t vlan_id)
 					memcpy(new_filter->l2_addr,
 					       filter->l2_addr, ETHER_ADDR_LEN);
 					/* MAC only filter */
-					rc = bnxt_hwrm_set_l2_filter(bp,
-							vnic->fw_vnic_id,
-							new_filter);
+					rc = bnxt_hwrm_set_l2_filter
+							(bp,
+							 vnic->fw_vnic_id,
+							 new_filter);
 					if (rc)
 						goto exit;
 					PMD_DRV_LOG(INFO,
@@ -1335,9 +1347,10 @@ static int bnxt_add_vlan_filter(struct bnxt *bp, uint16_t vlan_id)
 						      bnxt_filter_info, next);
 					bnxt_hwrm_clear_l2_filter(bp, filter);
 					filter->l2_ovlan = 0;
-					STAILQ_INSERT_TAIL(
-							&bp->free_filter_list,
-							filter, next);
+					STAILQ_INSERT_TAIL
+						(&bp->free_filter_list,
+						 filter,
+						 next);
 				}
 				new_filter = bnxt_alloc_filter(bp);
 				if (!new_filter) {
@@ -1405,6 +1418,7 @@ bnxt_vlan_offload_set_op(struct rte_eth_dev *dev, int mask)
 		/* Enable or disable VLAN stripping */
 		for (i = 0; i < bp->nr_vnics; i++) {
 			struct bnxt_vnic_info *vnic = &bp->vnic_info[i];
+
 			if (rx_offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
 				vnic->vlan_strip = true;
 			else
@@ -1460,8 +1474,8 @@ bnxt_set_default_mac_addr_op(struct rte_eth_dev *dev, struct ether_addr *addr)
 
 static int
 bnxt_dev_set_mc_addr_list_op(struct rte_eth_dev *eth_dev,
-			  struct ether_addr *mc_addr_set,
-			  uint32_t nb_mc_addr)
+			     struct ether_addr *mc_addr_set,
+			     uint32_t nb_mc_addr)
 {
 	struct bnxt *bp = (struct bnxt *)eth_dev->data->dev_private;
 	char *mc_addr_list = (char *)mc_addr_set;
@@ -1497,8 +1511,9 @@ bnxt_fw_version_get(struct rte_eth_dev *dev, char *fw_version, size_t fw_size)
 	uint8_t fw_updt = (bp->fw_ver >> 8) & 0xff;
 	int ret;
 
-	ret = snprintf(fw_version, fw_size, "%d.%d.%d",
-			fw_major, fw_minor, fw_updt);
+	ret = snprintf(fw_version, fw_size,
+		       "%d.%d.%d",
+		       fw_major, fw_minor, fw_updt);
 
 	ret += 1; /* add the size of '\0' */
 	if (fw_size < (uint32_t)ret)
@@ -1508,8 +1523,9 @@ bnxt_fw_version_get(struct rte_eth_dev *dev, char *fw_version, size_t fw_size)
 }
 
 static void
-bnxt_rxq_info_get_op(struct rte_eth_dev *dev, uint16_t queue_id,
-	struct rte_eth_rxq_info *qinfo)
+bnxt_rxq_info_get_op(struct rte_eth_dev *dev,
+		     uint16_t queue_id,
+		     struct rte_eth_rxq_info *qinfo)
 {
 	struct bnxt_rx_queue *rxq;
 
@@ -1525,8 +1541,9 @@ bnxt_rxq_info_get_op(struct rte_eth_dev *dev, uint16_t queue_id,
 }
 
 static void
-bnxt_txq_info_get_op(struct rte_eth_dev *dev, uint16_t queue_id,
-	struct rte_eth_txq_info *qinfo)
+bnxt_txq_info_get_op(struct rte_eth_dev *dev,
+		     uint16_t queue_id,
+		     struct rte_eth_txq_info *qinfo)
 {
 	struct bnxt_tx_queue *txq;
 
@@ -1561,7 +1578,6 @@ static int bnxt_mtu_set_op(struct rte_eth_dev *eth_dev, uint16_t new_mtu)
 		return -EINVAL;
 	}
 
-
 	if (new_mtu > ETHER_MTU) {
 		bp->flags |= BNXT_FLAG_JUMBO;
 		bp->eth_dev->data->dev_conf.rxmode.offloads |=
@@ -1655,17 +1671,16 @@ bnxt_rx_queue_count_op(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 		valid = FLIP_VALID(cons, cpr->cp_ring_struct->ring_mask, valid);
 		cmp_type = CMP_TYPE(rxcmp);
 		if (cmp_type == RX_TPA_END_CMPL_TYPE_RX_TPA_END) {
-			cmp = (rte_le_to_cpu_32(
-					((struct rx_tpa_end_cmpl *)
-					 (rxcmp))->agg_bufs_v1) &
-			       RX_TPA_END_CMPL_AGG_BUFS_MASK) >>
-				RX_TPA_END_CMPL_AGG_BUFS_SFT;
+			cmp = (rte_le_to_cpu_32
+				(((struct rx_tpa_end_cmpl *)
+				  (rxcmp))->agg_bufs_v1)
+				& RX_TPA_END_CMPL_AGG_BUFS_MASK)
+				>> RX_TPA_END_CMPL_AGG_BUFS_SFT;
 			desc++;
 		} else if (cmp_type == 0x11) {
 			desc++;
-			cmp = (rxcmp->agg_bufs_v1 &
-				   RX_PKT_CMPL_AGG_BUFS_MASK) >>
-				RX_PKT_CMPL_AGG_BUFS_SFT;
+			cmp = (rxcmp->agg_bufs_v1 & RX_PKT_CMPL_AGG_BUFS_MASK)
+				>> RX_PKT_CMPL_AGG_BUFS_SFT;
 		} else {
 			cmp = 1;
 		}
@@ -1710,7 +1725,6 @@ bnxt_rx_descriptor_status_op(void *rx_queue, uint16_t offset)
 	if (rx_buf->mbuf == NULL)
 		return RTE_ETH_RX_DESC_UNAVAIL;
 
-
 	return RTE_ETH_RX_DESC_AVAIL;
 }
 
@@ -2882,16 +2896,20 @@ bnxt_get_eeprom_length_op(struct rte_eth_dev *dev)
 
 static int
 bnxt_get_eeprom_op(struct rte_eth_dev *dev,
-		struct rte_dev_eeprom_info *in_eeprom)
+		   struct rte_dev_eeprom_info *in_eeprom)
 {
 	struct bnxt *bp = (struct bnxt *)dev->data->dev_private;
 	uint32_t index;
 	uint32_t offset;
 
-	PMD_DRV_LOG(INFO, "%04x:%02x:%02x:%02x in_eeprom->offset = %d "
-		"len = %d\n", bp->pdev->addr.domain,
-		bp->pdev->addr.bus, bp->pdev->addr.devid,
-		bp->pdev->addr.function, in_eeprom->offset, in_eeprom->length);
+	PMD_DRV_LOG(INFO,
+		    "%04x:%02x:%02x:%02x in_eeprom->offset = %d len = %d\n",
+		    bp->pdev->addr.domain,
+		    bp->pdev->addr.bus,
+		    bp->pdev->addr.devid,
+		    bp->pdev->addr.function,
+		    in_eeprom->offset,
+		    in_eeprom->length);
 
 	if (in_eeprom->offset == 0) /* special offset value to get directory */
 		return bnxt_get_nvram_directory(bp, in_eeprom->length,
@@ -2953,16 +2971,17 @@ static bool bnxt_dir_type_is_executable(uint16_t dir_type)
 
 static int
 bnxt_set_eeprom_op(struct rte_eth_dev *dev,
-		struct rte_dev_eeprom_info *in_eeprom)
+		   struct rte_dev_eeprom_info *in_eeprom)
 {
 	struct bnxt *bp = (struct bnxt *)dev->data->dev_private;
 	uint8_t index, dir_op;
 	uint16_t type, ext, ordinal, attr;
 
-	PMD_DRV_LOG(INFO, "%04x:%02x:%02x:%02x in_eeprom->offset = %d "
-		"len = %d\n", bp->pdev->addr.domain,
-		bp->pdev->addr.bus, bp->pdev->addr.devid,
-		bp->pdev->addr.function, in_eeprom->offset, in_eeprom->length);
+	PMD_DRV_LOG(INFO,
+		    "%04x:%02x:%02x:%02x in_eeprom->offset = %d len = %d\n",
+		    bp->pdev->addr.domain, bp->pdev->addr.bus,
+		    bp->pdev->addr.devid, bp->pdev->addr.function,
+		    in_eeprom->offset, in_eeprom->length);
 
 	if (!BNXT_PF(bp)) {
 		PMD_DRV_LOG(ERR, "NVM write not supported from a VF\n");
@@ -3195,14 +3214,14 @@ bnxt_dev_init(struct rte_eth_dev *eth_dev)
 			 pci_dev->addr.function, "rx_port_stats");
 		mz_name[RTE_MEMZONE_NAMESIZE - 1] = 0;
 		mz = rte_memzone_lookup(mz_name);
-		total_alloc_len = RTE_CACHE_LINE_ROUNDUP(
-				sizeof(struct rx_port_stats) + 512);
+		total_alloc_len = RTE_CACHE_LINE_ROUNDUP
+					(sizeof(struct rx_port_stats) + 512);
 		if (!mz) {
 			mz = rte_memzone_reserve(mz_name, total_alloc_len,
-					SOCKET_ID_ANY,
-					RTE_MEMZONE_2MB |
-					RTE_MEMZONE_SIZE_HINT_ONLY |
-					RTE_MEMZONE_IOVA_CONTIG);
+						 SOCKET_ID_ANY,
+						 RTE_MEMZONE_2MB |
+						 RTE_MEMZONE_SIZE_HINT_ONLY |
+						 RTE_MEMZONE_IOVA_CONTIG);
 			if (mz == NULL)
 				return -ENOMEM;
 		}
@@ -3216,7 +3235,7 @@ bnxt_dev_init(struct rte_eth_dev *eth_dev)
 			mz_phys_addr = rte_mem_virt2iova(mz->addr);
 			if (mz_phys_addr == 0) {
 				PMD_DRV_LOG(ERR,
-				"unable to map address to physical memory\n");
+					"unable to map addr to phys memory\n");
 				return -ENOMEM;
 			}
 		}
@@ -3231,15 +3250,15 @@ bnxt_dev_init(struct rte_eth_dev *eth_dev)
 			 pci_dev->addr.function, "tx_port_stats");
 		mz_name[RTE_MEMZONE_NAMESIZE - 1] = 0;
 		mz = rte_memzone_lookup(mz_name);
-		total_alloc_len = RTE_CACHE_LINE_ROUNDUP(
-				sizeof(struct tx_port_stats) + 512);
+		total_alloc_len = RTE_CACHE_LINE_ROUNDUP
+					(sizeof(struct tx_port_stats) + 512);
 		if (!mz) {
 			mz = rte_memzone_reserve(mz_name,
-					total_alloc_len,
-					SOCKET_ID_ANY,
-					RTE_MEMZONE_2MB |
-					RTE_MEMZONE_SIZE_HINT_ONLY |
-					RTE_MEMZONE_IOVA_CONTIG);
+						 total_alloc_len,
+						 SOCKET_ID_ANY,
+						 RTE_MEMZONE_2MB |
+						 RTE_MEMZONE_SIZE_HINT_ONLY |
+						 RTE_MEMZONE_IOVA_CONTIG);
 			if (mz == NULL)
 				return -ENOMEM;
 		}
@@ -3253,7 +3272,7 @@ bnxt_dev_init(struct rte_eth_dev *eth_dev)
 			mz_phys_addr = rte_mem_virt2iova(mz->addr);
 			if (mz_phys_addr == 0) {
 				PMD_DRV_LOG(ERR,
-				"unable to map address to physical memory\n");
+					    "unable to map to phys memory\n");
 				return -ENOMEM;
 			}
 		}
@@ -3298,10 +3317,11 @@ bnxt_dev_init(struct rte_eth_dev *eth_dev)
 		goto error_free;
 	}
 	eth_dev->data->mac_addrs = rte_zmalloc("bnxt_mac_addr_tbl",
-					ETHER_ADDR_LEN * bp->max_l2_ctx, 0);
+					       ETHER_ADDR_LEN * bp->max_l2_ctx,
+					       0);
 	if (eth_dev->data->mac_addrs == NULL) {
 		PMD_DRV_LOG(ERR,
-			"Failed to alloc %u bytes needed to store MAC addr tbl",
+			"Failed to alloc %u bytes to store MAC addr tbl\n",
 			ETHER_ADDR_LEN * bp->max_l2_ctx);
 		rc = -ENOMEM;
 		goto error_free;
@@ -3328,7 +3348,8 @@ bnxt_dev_init(struct rte_eth_dev *eth_dev)
 	}
 
 	bp->grp_info = rte_zmalloc("bnxt_grp_info",
-				sizeof(*bp->grp_info) * bp->max_ring_grps, 0);
+				   sizeof(*bp->grp_info) * bp->max_ring_grps,
+				   0);
 	if (!bp->grp_info) {
 		PMD_DRV_LOG(ERR,
 			"Failed to alloc %zu bytes to store group info table\n",
@@ -3339,8 +3360,8 @@ bnxt_dev_init(struct rte_eth_dev *eth_dev)
 
 	/* Forward all requests if firmware is new enough */
 	if (((bp->fw_ver >= ((20 << 24) | (6 << 16) | (100 << 8))) &&
-	    (bp->fw_ver < ((20 << 24) | (7 << 16)))) ||
-	    ((bp->fw_ver >= ((20 << 24) | (8 << 16))))) {
+	     (bp->fw_ver < ((20 << 24) | (7 << 16)))) ||
+	    (bp->fw_ver >= ((20 << 24) | (8 << 16)))) {
 		memset(bp->pf.vf_req_fwd, 0xff, sizeof(bp->pf.vf_req_fwd));
 	} else {
 		PMD_DRV_LOG(WARNING,
@@ -3363,8 +3384,7 @@ bnxt_dev_init(struct rte_eth_dev *eth_dev)
 	ALLOW_FUNC(HWRM_VNIC_TPA_CFG);
 	rc = bnxt_hwrm_func_driver_register(bp);
 	if (rc) {
-		PMD_DRV_LOG(ERR,
-			"Failed to register driver");
+		PMD_DRV_LOG(ERR, "Failed to register driver\n");
 		rc = -EBUSY;
 		goto error_free;
 	}
@@ -3477,7 +3497,7 @@ bnxt_dev_uninit(struct rte_eth_dev *eth_dev)
 }
 
 static int bnxt_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
-	struct rte_pci_device *pci_dev)
+			  struct rte_pci_device *pci_dev)
 {
 	return rte_eth_dev_pci_generic_probe(pci_dev, sizeof(struct bnxt),
 		bnxt_dev_init);
-- 
2.15.1 (Apple Git-101)

^ permalink raw reply	[flat|nested] 73+ messages in thread

* [dpdk-dev] [PATCH 21/31] net/bnxt: move function check zero bytes to bnxt util.h
  2018-06-19 21:30 [dpdk-dev] [PATCH 00/31] bnxt patchset Ajit Khaparde
                   ` (19 preceding siblings ...)
  2018-06-19 21:30 ` [dpdk-dev] [PATCH 20/31] net/bnxt: code cleanup style of bnxt ethdev Ajit Khaparde
@ 2018-06-19 21:30 ` Ajit Khaparde
  2018-06-19 21:30 ` [dpdk-dev] [PATCH 22/31] net/bnxt: filter/flow refactoring Ajit Khaparde
                   ` (10 subsequent siblings)
  31 siblings, 0 replies; 73+ messages in thread
From: Ajit Khaparde @ 2018-06-19 21:30 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, Scott Branden

From: Scott Branden <scott.branden@broadcom.com>

Move check_zero_bytes into the new bnxt_util.c and bnxt_util.h files.
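
As a quick illustration (not part of the patch), a minimal standalone
sketch of the helper's semantics, assuming bnxt_util.c is compiled and
linked in; the test values and main() here are hypothetical:

  #include <stdint.h>
  #include <stdio.h>

  #include "bnxt_util.h"

  int main(void)
  {
          uint8_t zero_mask[16] = { 0 };          /* untouched mask     */
          uint8_t src_mask[16]  = { 0xff, 0xff }; /* partially set mask */

          /* bnxt_check_zero_bytes() returns 1 only when every byte is 0,
           * so flow-parsing callers negate it to ask "is this mask set?".
           */
          printf("zero_mask all zero: %d\n",
                 bnxt_check_zero_bytes(zero_mask, 16));
          printf("src_mask  all zero: %d\n",
                 bnxt_check_zero_bytes(src_mask, 16));
          return 0;
  }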

Signed-off-by: Scott Branden <scott.branden@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/Makefile      |  1 +
 drivers/net/bnxt/bnxt_ethdev.c |  1 +
 drivers/net/bnxt/bnxt_filter.c |  9 ---------
 drivers/net/bnxt/bnxt_filter.h |  1 -
 drivers/net/bnxt/bnxt_util.c   | 18 ++++++++++++++++++
 drivers/net/bnxt/bnxt_util.h   | 11 +++++++++++
 6 files changed, 31 insertions(+), 10 deletions(-)
 create mode 100644 drivers/net/bnxt/bnxt_util.c
 create mode 100644 drivers/net/bnxt/bnxt_util.h

diff --git a/drivers/net/bnxt/Makefile b/drivers/net/bnxt/Makefile
index fd0cb5235..80db03ea8 100644
--- a/drivers/net/bnxt/Makefile
+++ b/drivers/net/bnxt/Makefile
@@ -38,6 +38,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += bnxt_txq.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += bnxt_txr.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += bnxt_vnic.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += bnxt_irq.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += bnxt_util.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += rte_pmd_bnxt.c
 
 #
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index 6516aeedd..9cfa43778 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -26,6 +26,7 @@
 #include "bnxt_vnic.h"
 #include "hsi_struct_def_dpdk.h"
 #include "bnxt_nvm_defs.h"
+#include "bnxt_util.h"
 
 #define DRV_MODULE_NAME		"bnxt"
 static const char bnxt_version[] =
diff --git a/drivers/net/bnxt/bnxt_filter.c b/drivers/net/bnxt/bnxt_filter.c
index e36da9977..72989ab67 100644
--- a/drivers/net/bnxt/bnxt_filter.c
+++ b/drivers/net/bnxt/bnxt_filter.c
@@ -231,15 +231,6 @@ nxt_non_void_action(const struct rte_flow_action *cur)
 	}
 }
 
-int bnxt_check_zero_bytes(const uint8_t *bytes, int len)
-{
-	int i;
-	for (i = 0; i < len; i++)
-		if (bytes[i] != 0x00)
-			return 0;
-	return 1;
-}
-
 static int
 bnxt_filter_type_check(const struct rte_flow_item pattern[],
 		       struct rte_flow_error *error __rte_unused)
diff --git a/drivers/net/bnxt/bnxt_filter.h b/drivers/net/bnxt/bnxt_filter.h
index d27be7032..a1ecfb19d 100644
--- a/drivers/net/bnxt/bnxt_filter.h
+++ b/drivers/net/bnxt/bnxt_filter.h
@@ -69,7 +69,6 @@ struct bnxt_filter_info *bnxt_get_unused_filter(struct bnxt *bp);
 void bnxt_free_filter(struct bnxt *bp, struct bnxt_filter_info *filter);
 struct bnxt_filter_info *bnxt_get_l2_filter(struct bnxt *bp,
 		struct bnxt_filter_info *nf, struct bnxt_vnic_info *vnic);
-int bnxt_check_zero_bytes(const uint8_t *bytes, int len);
 
 #define NTUPLE_FLTR_ALLOC_INPUT_EN_SRC_MACADDR	\
 	HWRM_CFA_NTUPLE_FILTER_ALLOC_INPUT_ENABLES_SRC_MACADDR
diff --git a/drivers/net/bnxt/bnxt_util.c b/drivers/net/bnxt/bnxt_util.c
new file mode 100644
index 000000000..7d3342719
--- /dev/null
+++ b/drivers/net/bnxt/bnxt_util.c
@@ -0,0 +1,18 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2014-2018 Broadcom
+ * All rights reserved.
+ */
+
+#include <inttypes.h>
+
+#include "bnxt_util.h"
+
+int bnxt_check_zero_bytes(const uint8_t *bytes, int len)
+{
+	int i;
+
+	for (i = 0; i < len; i++)
+		if (bytes[i] != 0x00)
+			return 0;
+	return 1;
+}
diff --git a/drivers/net/bnxt/bnxt_util.h b/drivers/net/bnxt/bnxt_util.h
new file mode 100644
index 000000000..2378833cc
--- /dev/null
+++ b/drivers/net/bnxt/bnxt_util.h
@@ -0,0 +1,11 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2014-2018 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _BNXT_UTIL_H_
+#define _BNXT_UTIL_H_
+
+int bnxt_check_zero_bytes(const uint8_t *bytes, int len);
+
+#endif /* _BNXT_UTIL_H_ */
-- 
2.15.1 (Apple Git-101)

^ permalink raw reply	[flat|nested] 73+ messages in thread

* [dpdk-dev] [PATCH 22/31] net/bnxt: filter/flow refactoring
  2018-06-19 21:30 [dpdk-dev] [PATCH 00/31] bnxt patchset Ajit Khaparde
                   ` (20 preceding siblings ...)
  2018-06-19 21:30 ` [dpdk-dev] [PATCH 21/31] net/bnxt: move function check zero bytes to bnxt util.h Ajit Khaparde
@ 2018-06-19 21:30 ` Ajit Khaparde
  2018-06-26 15:29   ` Ferruh Yigit
  2018-06-19 21:30 ` [dpdk-dev] [PATCH 23/31] net/bnxt: check for invalid vnic id Ajit Khaparde
                   ` (9 subsequent siblings)
  31 siblings, 1 reply; 73+ messages in thread
From: Ajit Khaparde @ 2018-06-19 21:30 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, Michael Wildt, Scott Branden

In preparation for more rte_flow support, separate the filter and flow
code into their own files. The code is functionally the same.
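
For context (not part of the patch), a minimal sketch of the kind of
rte_flow request the relocated code parses: an ingress IPv4-over-Ethernet
flow redirected to Rx queue 1. It assumes the port is configured without
RSS (the parser rejects flows on RSS queues); the queue index and port_id
are illustrative, and error handling is trimmed:

  #include <rte_byteorder.h>
  #include <rte_flow.h>

  static struct rte_flow *
  example_queue_flow(uint16_t port_id, struct rte_flow_error *error)
  {
          struct rte_flow_attr attr = { .ingress = 1 };
          /* Match on ethertype only; bnxt requires both spec and mask. */
          struct rte_flow_item_eth eth_spec = { .type = RTE_BE16(0x0800) };
          struct rte_flow_item_eth eth_mask = { .type = RTE_BE16(0xffff) };
          struct rte_flow_item pattern[] = {
                  { .type = RTE_FLOW_ITEM_TYPE_ETH,
                    .spec = &eth_spec, .mask = &eth_mask },
                  { .type = RTE_FLOW_ITEM_TYPE_END },
          };
          struct rte_flow_action_queue queue = { .index = 1 };
          struct rte_flow_action actions[] = {
                  { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
                  { .type = RTE_FLOW_ACTION_TYPE_END },
          };

          return rte_flow_create(port_id, &attr, pattern, actions, error);
  }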

Signed-off-by: Michael Wildt <michael.wildt@broadcom.com>
Signed-off-by: Scott Branden <scott.branden@broadcom.com>
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/Makefile      |    1 +
 drivers/net/bnxt/bnxt_filter.c | 1060 ------------------------------------
 drivers/net/bnxt/bnxt_flow.c   | 1167 ++++++++++++++++++++++++++++++++++++++++
 3 files changed, 1168 insertions(+), 1060 deletions(-)
 create mode 100644 drivers/net/bnxt/bnxt_flow.c

diff --git a/drivers/net/bnxt/Makefile b/drivers/net/bnxt/Makefile
index 80db03ea8..8be3cb0e4 100644
--- a/drivers/net/bnxt/Makefile
+++ b/drivers/net/bnxt/Makefile
@@ -29,6 +29,7 @@ EXPORT_MAP := rte_pmd_bnxt_version.map
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += bnxt_cpr.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += bnxt_ethdev.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += bnxt_filter.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += bnxt_flow.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += bnxt_hwrm.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += bnxt_ring.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += bnxt_rxq.c
diff --git a/drivers/net/bnxt/bnxt_filter.c b/drivers/net/bnxt/bnxt_filter.c
index 72989ab67..31757d32c 100644
--- a/drivers/net/bnxt/bnxt_filter.c
+++ b/drivers/net/bnxt/bnxt_filter.c
@@ -180,1063 +180,3 @@ void bnxt_free_filter(struct bnxt *bp, struct bnxt_filter_info *filter)
 {
 	STAILQ_INSERT_TAIL(&bp->free_filter_list, filter, next);
 }
-
-static int
-bnxt_flow_agrs_validate(const struct rte_flow_attr *attr,
-			const struct rte_flow_item pattern[],
-			const struct rte_flow_action actions[],
-			struct rte_flow_error *error)
-{
-	if (!pattern) {
-		rte_flow_error_set(error, EINVAL,
-			RTE_FLOW_ERROR_TYPE_ITEM_NUM,
-			NULL, "NULL pattern.");
-		return -rte_errno;
-	}
-
-	if (!actions) {
-		rte_flow_error_set(error, EINVAL,
-				   RTE_FLOW_ERROR_TYPE_ACTION_NUM,
-				   NULL, "NULL action.");
-		return -rte_errno;
-	}
-
-	if (!attr) {
-		rte_flow_error_set(error, EINVAL,
-				   RTE_FLOW_ERROR_TYPE_ATTR,
-				   NULL, "NULL attribute.");
-		return -rte_errno;
-	}
-
-	return 0;
-}
-
-static const struct rte_flow_item *
-nxt_non_void_pattern(const struct rte_flow_item *cur)
-{
-	while (1) {
-		if (cur->type != RTE_FLOW_ITEM_TYPE_VOID)
-			return cur;
-		cur++;
-	}
-}
-
-static const struct rte_flow_action *
-nxt_non_void_action(const struct rte_flow_action *cur)
-{
-	while (1) {
-		if (cur->type != RTE_FLOW_ACTION_TYPE_VOID)
-			return cur;
-		cur++;
-	}
-}
-
-static int
-bnxt_filter_type_check(const struct rte_flow_item pattern[],
-		       struct rte_flow_error *error __rte_unused)
-{
-	const struct rte_flow_item *item = nxt_non_void_pattern(pattern);
-	int use_ntuple = 1;
-
-	while (item->type != RTE_FLOW_ITEM_TYPE_END) {
-		switch (item->type) {
-		case RTE_FLOW_ITEM_TYPE_ETH:
-			use_ntuple = 1;
-			break;
-		case RTE_FLOW_ITEM_TYPE_VLAN:
-			use_ntuple = 0;
-			break;
-		case RTE_FLOW_ITEM_TYPE_IPV4:
-		case RTE_FLOW_ITEM_TYPE_IPV6:
-		case RTE_FLOW_ITEM_TYPE_TCP:
-		case RTE_FLOW_ITEM_TYPE_UDP:
-			/* FALLTHROUGH */
-			/* need ntuple match, reset exact match */
-			if (!use_ntuple) {
-				PMD_DRV_LOG(ERR,
-					"VLAN flow cannot use NTUPLE filter\n");
-				rte_flow_error_set(error, EINVAL,
-						   RTE_FLOW_ERROR_TYPE_ITEM,
-						   item,
-						   "Cannot use VLAN with NTUPLE");
-				return -rte_errno;
-			}
-			use_ntuple |= 1;
-			break;
-		default:
-			PMD_DRV_LOG(ERR, "Unknown Flow type");
-			use_ntuple |= 1;
-		}
-		item++;
-	}
-	return use_ntuple;
-}
-
-static int
-bnxt_validate_and_parse_flow_type(struct bnxt *bp,
-				  const struct rte_flow_attr *attr,
-				  const struct rte_flow_item pattern[],
-				  struct rte_flow_error *error,
-				  struct bnxt_filter_info *filter)
-{
-	const struct rte_flow_item *item = nxt_non_void_pattern(pattern);
-	const struct rte_flow_item_vlan *vlan_spec, *vlan_mask;
-	const struct rte_flow_item_ipv4 *ipv4_spec, *ipv4_mask;
-	const struct rte_flow_item_ipv6 *ipv6_spec, *ipv6_mask;
-	const struct rte_flow_item_tcp *tcp_spec, *tcp_mask;
-	const struct rte_flow_item_udp *udp_spec, *udp_mask;
-	const struct rte_flow_item_eth *eth_spec, *eth_mask;
-	const struct rte_flow_item_nvgre *nvgre_spec;
-	const struct rte_flow_item_nvgre *nvgre_mask;
-	const struct rte_flow_item_vxlan *vxlan_spec;
-	const struct rte_flow_item_vxlan *vxlan_mask;
-	uint8_t vni_mask[] = {0xFF, 0xFF, 0xFF};
-	uint8_t tni_mask[] = {0xFF, 0xFF, 0xFF};
-	const struct rte_flow_item_vf *vf_spec;
-	uint32_t tenant_id_be = 0;
-	bool vni_masked = 0;
-	bool tni_masked = 0;
-	uint32_t vf = 0;
-	int use_ntuple;
-	uint32_t en = 0;
-	uint32_t en_ethertype;
-	int dflt_vnic;
-
-	use_ntuple = bnxt_filter_type_check(pattern, error);
-	PMD_DRV_LOG(DEBUG, "Use NTUPLE %d\n", use_ntuple);
-	if (use_ntuple < 0)
-		return use_ntuple;
-
-	filter->filter_type = use_ntuple ?
-		HWRM_CFA_NTUPLE_FILTER : HWRM_CFA_EM_FILTER;
-	en_ethertype = use_ntuple ?
-		NTUPLE_FLTR_ALLOC_INPUT_EN_ETHERTYPE :
-		EM_FLOW_ALLOC_INPUT_EN_ETHERTYPE;
-
-	while (item->type != RTE_FLOW_ITEM_TYPE_END) {
-		if (item->last) {
-			/* last or range is NOT supported as match criteria */
-			rte_flow_error_set(error, EINVAL,
-					   RTE_FLOW_ERROR_TYPE_ITEM,
-					   item,
-					   "No support for range");
-			return -rte_errno;
-		}
-		if (!item->spec || !item->mask) {
-			rte_flow_error_set(error, EINVAL,
-					   RTE_FLOW_ERROR_TYPE_ITEM,
-					   item,
-					   "spec/mask is NULL");
-			return -rte_errno;
-		}
-		switch (item->type) {
-		case RTE_FLOW_ITEM_TYPE_ETH:
-			eth_spec = item->spec;
-			eth_mask = item->mask;
-
-			/* Source MAC address mask cannot be partially set.
-			 * Should be All 0's or all 1's.
-			 * Destination MAC address mask must not be partially
-			 * set. Should be all 1's or all 0's.
-			 */
-			if ((!is_zero_ether_addr(&eth_mask->src) &&
-			     !is_broadcast_ether_addr(&eth_mask->src)) ||
-			    (!is_zero_ether_addr(&eth_mask->dst) &&
-			     !is_broadcast_ether_addr(&eth_mask->dst))) {
-				rte_flow_error_set(error, EINVAL,
-						   RTE_FLOW_ERROR_TYPE_ITEM,
-						   item,
-						   "MAC_addr mask not valid");
-				return -rte_errno;
-			}
-
-			/* Mask is not allowed. Only exact matches are */
-			if (eth_mask->type &&
-			    eth_mask->type != RTE_BE16(0xffff)) {
-				rte_flow_error_set(error, EINVAL,
-						   RTE_FLOW_ERROR_TYPE_ITEM,
-						   item,
-						   "ethertype mask not valid");
-				return -rte_errno;
-			}
-
-			if (is_broadcast_ether_addr(&eth_mask->dst)) {
-				rte_memcpy(filter->dst_macaddr,
-					   &eth_spec->dst, 6);
-				en |= use_ntuple ?
-					NTUPLE_FLTR_ALLOC_INPUT_EN_DST_MACADDR :
-					EM_FLOW_ALLOC_INPUT_EN_DST_MACADDR;
-			}
-			if (is_broadcast_ether_addr(&eth_mask->src)) {
-				rte_memcpy(filter->src_macaddr,
-					   &eth_spec->src, 6);
-				en |= use_ntuple ?
-					NTUPLE_FLTR_ALLOC_INPUT_EN_SRC_MACADDR :
-					EM_FLOW_ALLOC_INPUT_EN_SRC_MACADDR;
-			} /*
-			   * else {
-			   *  RTE_LOG(ERR, PMD, "Handle this condition\n");
-			   * }
-			   */
-			if (eth_mask->type) {
-				filter->ethertype =
-					rte_be_to_cpu_16(eth_spec->type);
-				en |= en_ethertype;
-			}
-
-			break;
-		case RTE_FLOW_ITEM_TYPE_VLAN:
-			vlan_spec = item->spec;
-			vlan_mask = item->mask;
-			if (en & en_ethertype) {
-				rte_flow_error_set(error, EINVAL,
-						   RTE_FLOW_ERROR_TYPE_ITEM,
-						   item,
-						   "VLAN TPID matching is not"
-						   " supported");
-				return -rte_errno;
-			}
-			if (vlan_mask->tci &&
-			    vlan_mask->tci == RTE_BE16(0x0fff)) {
-				/* Only the VLAN ID can be matched. */
-				filter->l2_ovlan =
-					rte_be_to_cpu_16(vlan_spec->tci &
-							 RTE_BE16(0x0fff));
-				en |= EM_FLOW_ALLOC_INPUT_EN_OVLAN_VID;
-			} else if (vlan_mask->tci) {
-				rte_flow_error_set(error, EINVAL,
-						   RTE_FLOW_ERROR_TYPE_ITEM,
-						   item,
-						   "VLAN mask is invalid");
-				return -rte_errno;
-			}
-			if (vlan_mask->inner_type &&
-			    vlan_mask->inner_type != RTE_BE16(0xffff)) {
-				rte_flow_error_set(error, EINVAL,
-						   RTE_FLOW_ERROR_TYPE_ITEM,
-						   item,
-						   "inner ethertype mask not"
-						   " valid");
-				return -rte_errno;
-			}
-			if (vlan_mask->inner_type) {
-				filter->ethertype =
-					rte_be_to_cpu_16(vlan_spec->inner_type);
-				en |= en_ethertype;
-			}
-
-			break;
-		case RTE_FLOW_ITEM_TYPE_IPV4:
-			/* If mask is not involved, we could use EM filters. */
-			ipv4_spec = item->spec;
-			ipv4_mask = item->mask;
-			/* Only IP DST and SRC fields are maskable. */
-			if (ipv4_mask->hdr.version_ihl ||
-			    ipv4_mask->hdr.type_of_service ||
-			    ipv4_mask->hdr.total_length ||
-			    ipv4_mask->hdr.packet_id ||
-			    ipv4_mask->hdr.fragment_offset ||
-			    ipv4_mask->hdr.time_to_live ||
-			    ipv4_mask->hdr.next_proto_id ||
-			    ipv4_mask->hdr.hdr_checksum) {
-				rte_flow_error_set(error, EINVAL,
-					   RTE_FLOW_ERROR_TYPE_ITEM,
-					   item,
-					   "Invalid IPv4 mask.");
-				return -rte_errno;
-			}
-			filter->dst_ipaddr[0] = ipv4_spec->hdr.dst_addr;
-			filter->src_ipaddr[0] = ipv4_spec->hdr.src_addr;
-			if (use_ntuple)
-				en |= NTUPLE_FLTR_ALLOC_INPUT_EN_SRC_IPADDR |
-					NTUPLE_FLTR_ALLOC_INPUT_EN_DST_IPADDR;
-			else
-				en |= EM_FLOW_ALLOC_INPUT_EN_SRC_IPADDR |
-					EM_FLOW_ALLOC_INPUT_EN_DST_IPADDR;
-			if (ipv4_mask->hdr.src_addr) {
-				filter->src_ipaddr_mask[0] =
-					ipv4_mask->hdr.src_addr;
-				en |= !use_ntuple ? 0 :
-				     NTUPLE_FLTR_ALLOC_INPUT_EN_SRC_IPADDR_MASK;
-			}
-			if (ipv4_mask->hdr.dst_addr) {
-				filter->dst_ipaddr_mask[0] =
-					ipv4_mask->hdr.dst_addr;
-				en |= !use_ntuple ? 0 :
-				     NTUPLE_FLTR_ALLOC_INPUT_EN_DST_IPADDR_MASK;
-			}
-			filter->ip_addr_type = use_ntuple ?
-			 HWRM_CFA_NTUPLE_FILTER_ALLOC_INPUT_IP_ADDR_TYPE_IPV4 :
-			 HWRM_CFA_EM_FLOW_ALLOC_INPUT_IP_ADDR_TYPE_IPV4;
-			if (ipv4_spec->hdr.next_proto_id) {
-				filter->ip_protocol =
-					ipv4_spec->hdr.next_proto_id;
-				if (use_ntuple)
-					en |= NTUPLE_FLTR_ALLOC_IN_EN_IP_PROTO;
-				else
-					en |= EM_FLOW_ALLOC_INPUT_EN_IP_PROTO;
-			}
-			break;
-		case RTE_FLOW_ITEM_TYPE_IPV6:
-			ipv6_spec = item->spec;
-			ipv6_mask = item->mask;
-
-			/* Only IP DST and SRC fields are maskable. */
-			if (ipv6_mask->hdr.vtc_flow ||
-			    ipv6_mask->hdr.payload_len ||
-			    ipv6_mask->hdr.proto ||
-			    ipv6_mask->hdr.hop_limits) {
-				rte_flow_error_set(error, EINVAL,
-					   RTE_FLOW_ERROR_TYPE_ITEM,
-					   item,
-					   "Invalid IPv6 mask.");
-				return -rte_errno;
-			}
-
-			if (use_ntuple)
-				en |= NTUPLE_FLTR_ALLOC_INPUT_EN_SRC_IPADDR |
-					NTUPLE_FLTR_ALLOC_INPUT_EN_DST_IPADDR;
-			else
-				en |= EM_FLOW_ALLOC_INPUT_EN_SRC_IPADDR |
-					EM_FLOW_ALLOC_INPUT_EN_DST_IPADDR;
-			rte_memcpy(filter->src_ipaddr,
-				   ipv6_spec->hdr.src_addr, 16);
-			rte_memcpy(filter->dst_ipaddr,
-				   ipv6_spec->hdr.dst_addr, 16);
-			if (!bnxt_check_zero_bytes(ipv6_mask->hdr.src_addr,
-						   16)) {
-				rte_memcpy(filter->src_ipaddr_mask,
-					   ipv6_mask->hdr.src_addr, 16);
-				en |= !use_ntuple ? 0 :
-				    NTUPLE_FLTR_ALLOC_INPUT_EN_SRC_IPADDR_MASK;
-			}
-			if (!bnxt_check_zero_bytes(ipv6_mask->hdr.dst_addr,
-						   16)) {
-				rte_memcpy(filter->dst_ipaddr_mask,
-					   ipv6_mask->hdr.dst_addr, 16);
-				en |= !use_ntuple ? 0 :
-				     NTUPLE_FLTR_ALLOC_INPUT_EN_DST_IPADDR_MASK;
-			}
-			filter->ip_addr_type = use_ntuple ?
-				NTUPLE_FLTR_ALLOC_INPUT_IP_ADDR_TYPE_IPV6 :
-				EM_FLOW_ALLOC_INPUT_IP_ADDR_TYPE_IPV6;
-			break;
-		case RTE_FLOW_ITEM_TYPE_TCP:
-			tcp_spec = item->spec;
-			tcp_mask = item->mask;
-
-			/* Check TCP mask. Only DST & SRC ports are maskable */
-			if (tcp_mask->hdr.sent_seq ||
-			    tcp_mask->hdr.recv_ack ||
-			    tcp_mask->hdr.data_off ||
-			    tcp_mask->hdr.tcp_flags ||
-			    tcp_mask->hdr.rx_win ||
-			    tcp_mask->hdr.cksum ||
-			    tcp_mask->hdr.tcp_urp) {
-				rte_flow_error_set(error, EINVAL,
-					   RTE_FLOW_ERROR_TYPE_ITEM,
-					   item,
-					   "Invalid TCP mask");
-				return -rte_errno;
-			}
-			filter->src_port = tcp_spec->hdr.src_port;
-			filter->dst_port = tcp_spec->hdr.dst_port;
-			if (use_ntuple)
-				en |= NTUPLE_FLTR_ALLOC_INPUT_EN_SRC_PORT |
-					NTUPLE_FLTR_ALLOC_INPUT_EN_DST_PORT;
-			else
-				en |= EM_FLOW_ALLOC_INPUT_EN_SRC_PORT |
-					EM_FLOW_ALLOC_INPUT_EN_DST_PORT;
-			if (tcp_mask->hdr.dst_port) {
-				filter->dst_port_mask = tcp_mask->hdr.dst_port;
-				en |= !use_ntuple ? 0 :
-				  NTUPLE_FLTR_ALLOC_INPUT_EN_DST_PORT_MASK;
-			}
-			if (tcp_mask->hdr.src_port) {
-				filter->src_port_mask = tcp_mask->hdr.src_port;
-				en |= !use_ntuple ? 0 :
-				  NTUPLE_FLTR_ALLOC_INPUT_EN_SRC_PORT_MASK;
-			}
-			break;
-		case RTE_FLOW_ITEM_TYPE_UDP:
-			udp_spec = item->spec;
-			udp_mask = item->mask;
-
-			if (udp_mask->hdr.dgram_len ||
-			    udp_mask->hdr.dgram_cksum) {
-				rte_flow_error_set(error, EINVAL,
-					   RTE_FLOW_ERROR_TYPE_ITEM,
-					   item,
-					   "Invalid UDP mask");
-				return -rte_errno;
-			}
-
-			filter->src_port = udp_spec->hdr.src_port;
-			filter->dst_port = udp_spec->hdr.dst_port;
-			if (use_ntuple)
-				en |= NTUPLE_FLTR_ALLOC_INPUT_EN_SRC_PORT |
-					NTUPLE_FLTR_ALLOC_INPUT_EN_DST_PORT;
-			else
-				en |= EM_FLOW_ALLOC_INPUT_EN_SRC_PORT |
-					EM_FLOW_ALLOC_INPUT_EN_DST_PORT;
-
-			if (udp_mask->hdr.dst_port) {
-				filter->dst_port_mask = udp_mask->hdr.dst_port;
-				en |= !use_ntuple ? 0 :
-				  NTUPLE_FLTR_ALLOC_INPUT_EN_DST_PORT_MASK;
-			}
-			if (udp_mask->hdr.src_port) {
-				filter->src_port_mask = udp_mask->hdr.src_port;
-				en |= !use_ntuple ? 0 :
-				  NTUPLE_FLTR_ALLOC_INPUT_EN_SRC_PORT_MASK;
-			}
-			break;
-		case RTE_FLOW_ITEM_TYPE_VXLAN:
-			vxlan_spec = item->spec;
-			vxlan_mask = item->mask;
-			/* Check if VXLAN item is used to describe protocol.
-			 * If yes, both spec and mask should be NULL.
-			 * If no, both spec and mask shouldn't be NULL.
-			 */
-			if ((!vxlan_spec && vxlan_mask) ||
-			    (vxlan_spec && !vxlan_mask)) {
-				rte_flow_error_set(error, EINVAL,
-					   RTE_FLOW_ERROR_TYPE_ITEM,
-					   item,
-					   "Invalid VXLAN item");
-				return -rte_errno;
-			}
-
-			if (vxlan_spec->rsvd1 || vxlan_spec->rsvd0[0] ||
-			    vxlan_spec->rsvd0[1] || vxlan_spec->rsvd0[2] ||
-			    vxlan_spec->flags != 0x8) {
-				rte_flow_error_set(error, EINVAL,
-					   RTE_FLOW_ERROR_TYPE_ITEM,
-					   item,
-					   "Invalid VXLAN item");
-				return -rte_errno;
-			}
-
-			/* Check if VNI is masked. */
-			if (vxlan_spec && vxlan_mask) {
-				vni_masked =
-					!!memcmp(vxlan_mask->vni, vni_mask,
-						 RTE_DIM(vni_mask));
-				if (vni_masked) {
-					rte_flow_error_set(error, EINVAL,
-						   RTE_FLOW_ERROR_TYPE_ITEM,
-						   item,
-						   "Invalid VNI mask");
-					return -rte_errno;
-				}
-
-				rte_memcpy(((uint8_t *)&tenant_id_be + 1),
-					   vxlan_spec->vni, 3);
-				filter->vni =
-					rte_be_to_cpu_32(tenant_id_be);
-				filter->tunnel_type =
-				 CFA_NTUPLE_FILTER_ALLOC_REQ_TUNNEL_TYPE_VXLAN;
-			}
-			break;
-		case RTE_FLOW_ITEM_TYPE_NVGRE:
-			nvgre_spec = item->spec;
-			nvgre_mask = item->mask;
-			/* Check if NVGRE item is used to describe protocol.
-			 * If yes, both spec and mask should be NULL.
-			 * If no, both spec and mask shouldn't be NULL.
-			 */
-			if ((!nvgre_spec && nvgre_mask) ||
-			    (nvgre_spec && !nvgre_mask)) {
-				rte_flow_error_set(error, EINVAL,
-					   RTE_FLOW_ERROR_TYPE_ITEM,
-					   item,
-					   "Invalid NVGRE item");
-				return -rte_errno;
-			}
-
-			if (nvgre_spec->c_k_s_rsvd0_ver != 0x2000 ||
-			    nvgre_spec->protocol != 0x6558) {
-				rte_flow_error_set(error, EINVAL,
-					   RTE_FLOW_ERROR_TYPE_ITEM,
-					   item,
-					   "Invalid NVGRE item");
-				return -rte_errno;
-			}
-
-			if (nvgre_spec && nvgre_mask) {
-				tni_masked =
-					!!memcmp(nvgre_mask->tni, tni_mask,
-						 RTE_DIM(tni_mask));
-				if (tni_masked) {
-					rte_flow_error_set(error, EINVAL,
-						       RTE_FLOW_ERROR_TYPE_ITEM,
-						       item,
-						       "Invalid TNI mask");
-					return -rte_errno;
-				}
-				rte_memcpy(((uint8_t *)&tenant_id_be + 1),
-					   nvgre_spec->tni, 3);
-				filter->vni =
-					rte_be_to_cpu_32(tenant_id_be);
-				filter->tunnel_type =
-				 CFA_NTUPLE_FILTER_ALLOC_REQ_TUNNEL_TYPE_NVGRE;
-			}
-			break;
-		case RTE_FLOW_ITEM_TYPE_VF:
-			vf_spec = item->spec;
-			vf = vf_spec->id;
-			if (!BNXT_PF(bp)) {
-				rte_flow_error_set(error, EINVAL,
-					   RTE_FLOW_ERROR_TYPE_ITEM,
-					   item,
-					   "Configuring on a VF!");
-				return -rte_errno;
-			}
-
-			if (vf >= bp->pdev->max_vfs) {
-				rte_flow_error_set(error, EINVAL,
-					   RTE_FLOW_ERROR_TYPE_ITEM,
-					   item,
-					   "Incorrect VF id!");
-				return -rte_errno;
-			}
-
-			if (!attr->transfer) {
-				rte_flow_error_set(error, ENOTSUP,
-					   RTE_FLOW_ERROR_TYPE_ITEM,
-					   item,
-					   "Matching VF traffic without"
-					   " affecting it (transfer attribute)"
-					   " is unsupported");
-				return -rte_errno;
-			}
-
-			filter->mirror_vnic_id =
-			dflt_vnic = bnxt_hwrm_func_qcfg_vf_dflt_vnic_id(bp, vf);
-			if (dflt_vnic < 0) {
-				/* This simply indicates there's no driver
-				 * loaded. This is not an error.
-				 */
-				rte_flow_error_set(error, EINVAL,
-					   RTE_FLOW_ERROR_TYPE_ITEM,
-					   item,
-					   "Unable to get default VNIC for VF");
-				return -rte_errno;
-			}
-			filter->mirror_vnic_id = dflt_vnic;
-			en |= NTUPLE_FLTR_ALLOC_INPUT_EN_MIRROR_VNIC_ID;
-			break;
-		default:
-			break;
-		}
-		item++;
-	}
-	filter->enables = en;
-
-	return 0;
-}
-
-/* Parse attributes */
-static int
-bnxt_flow_parse_attr(const struct rte_flow_attr *attr,
-		     struct rte_flow_error *error)
-{
-	/* Must be input direction */
-	if (!attr->ingress) {
-		rte_flow_error_set(error, EINVAL,
-				   RTE_FLOW_ERROR_TYPE_ATTR_INGRESS,
-				   attr, "Only support ingress.");
-		return -rte_errno;
-	}
-
-	/* Not supported */
-	if (attr->egress) {
-		rte_flow_error_set(error, EINVAL,
-				   RTE_FLOW_ERROR_TYPE_ATTR_EGRESS,
-				   attr, "No support for egress.");
-		return -rte_errno;
-	}
-
-	/* Not supported */
-	if (attr->priority) {
-		rte_flow_error_set(error, EINVAL,
-				   RTE_FLOW_ERROR_TYPE_ATTR_PRIORITY,
-				   attr, "No support for priority.");
-		return -rte_errno;
-	}
-
-	/* Not supported */
-	if (attr->group) {
-		rte_flow_error_set(error, EINVAL,
-				   RTE_FLOW_ERROR_TYPE_ATTR_GROUP,
-				   attr, "No support for group.");
-		return -rte_errno;
-	}
-
-	return 0;
-}
-
-struct bnxt_filter_info *
-bnxt_get_l2_filter(struct bnxt *bp, struct bnxt_filter_info *nf,
-		   struct bnxt_vnic_info *vnic)
-{
-	struct bnxt_filter_info *filter1, *f0;
-	struct bnxt_vnic_info *vnic0;
-	int rc;
-
-	vnic0 = STAILQ_FIRST(&bp->ff_pool[0]);
-	f0 = STAILQ_FIRST(&vnic0->filter);
-
-	//This flow has same DST MAC as the port/l2 filter.
-	if (memcmp(f0->l2_addr, nf->dst_macaddr, ETHER_ADDR_LEN) == 0)
-		return f0;
-
-	//This flow needs DST MAC which is not same as port/l2
-	PMD_DRV_LOG(DEBUG, "Create L2 filter for DST MAC\n");
-	filter1 = bnxt_get_unused_filter(bp);
-	if (filter1 == NULL)
-		return NULL;
-	filter1->flags = HWRM_CFA_L2_FILTER_ALLOC_INPUT_FLAGS_PATH_RX;
-	filter1->enables = HWRM_CFA_L2_FILTER_ALLOC_INPUT_ENABLES_L2_ADDR |
-			L2_FILTER_ALLOC_INPUT_EN_L2_ADDR_MASK;
-	memcpy(filter1->l2_addr, nf->dst_macaddr, ETHER_ADDR_LEN);
-	memset(filter1->l2_addr_mask, 0xff, ETHER_ADDR_LEN);
-	rc = bnxt_hwrm_set_l2_filter(bp, vnic->fw_vnic_id,
-				     filter1);
-	if (rc) {
-		bnxt_free_filter(bp, filter1);
-		return NULL;
-	}
-	return filter1;
-}
-
-static int
-bnxt_validate_and_parse_flow(struct rte_eth_dev *dev,
-			     const struct rte_flow_item pattern[],
-			     const struct rte_flow_action actions[],
-			     const struct rte_flow_attr *attr,
-			     struct rte_flow_error *error,
-			     struct bnxt_filter_info *filter)
-{
-	const struct rte_flow_action *act = nxt_non_void_action(actions);
-	struct bnxt *bp = (struct bnxt *)dev->data->dev_private;
-	const struct rte_flow_action_queue *act_q;
-	const struct rte_flow_action_vf *act_vf;
-	struct bnxt_vnic_info *vnic, *vnic0;
-	struct bnxt_filter_info *filter1;
-	uint32_t vf = 0;
-	int dflt_vnic;
-	int rc;
-
-	if (bp->eth_dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS) {
-		PMD_DRV_LOG(ERR, "Cannot create flow on RSS queues\n");
-		rte_flow_error_set(error, EINVAL,
-				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
-				   "Cannot create flow on RSS queues");
-		rc = -rte_errno;
-		goto ret;
-	}
-
-	rc = bnxt_validate_and_parse_flow_type(bp, attr, pattern, error,
-					       filter);
-	if (rc != 0)
-		goto ret;
-
-	rc = bnxt_flow_parse_attr(attr, error);
-	if (rc != 0)
-		goto ret;
-	//Since we support ingress attribute only - right now.
-	if (filter->filter_type == HWRM_CFA_EM_FILTER)
-		filter->flags = HWRM_CFA_EM_FLOW_ALLOC_INPUT_FLAGS_PATH_RX;
-
-	switch (act->type) {
-	case RTE_FLOW_ACTION_TYPE_QUEUE:
-		/* Allow this flow. Redirect to a VNIC. */
-		act_q = (const struct rte_flow_action_queue *)act->conf;
-		if (act_q->index >= bp->rx_nr_rings) {
-			rte_flow_error_set(error, EINVAL,
-					   RTE_FLOW_ERROR_TYPE_ACTION, act,
-					   "Invalid queue ID.");
-			rc = -rte_errno;
-			goto ret;
-		}
-		PMD_DRV_LOG(DEBUG, "Queue index %d\n", act_q->index);
-
-		vnic0 = STAILQ_FIRST(&bp->ff_pool[0]);
-		vnic = STAILQ_FIRST(&bp->ff_pool[act_q->index]);
-		if (vnic == NULL) {
-			rte_flow_error_set(error, EINVAL,
-					   RTE_FLOW_ERROR_TYPE_ACTION, act,
-					   "No matching VNIC for queue ID.");
-			rc = -rte_errno;
-			goto ret;
-		}
-		filter->dst_id = vnic->fw_vnic_id;
-		filter1 = bnxt_get_l2_filter(bp, filter, vnic);
-		if (filter1 == NULL) {
-			rc = -ENOSPC;
-			goto ret;
-		}
-		filter->fw_l2_filter_id = filter1->fw_l2_filter_id;
-		PMD_DRV_LOG(DEBUG, "VNIC found\n");
-		break;
-	case RTE_FLOW_ACTION_TYPE_DROP:
-		vnic0 = STAILQ_FIRST(&bp->ff_pool[0]);
-		filter1 = bnxt_get_l2_filter(bp, filter, vnic0);
-		if (filter1 == NULL) {
-			rc = -ENOSPC;
-			goto ret;
-		}
-		filter->fw_l2_filter_id = filter1->fw_l2_filter_id;
-		if (filter->filter_type == HWRM_CFA_EM_FILTER)
-			filter->flags =
-				HWRM_CFA_EM_FLOW_ALLOC_INPUT_FLAGS_DROP;
-		else
-			filter->flags =
-				HWRM_CFA_NTUPLE_FILTER_ALLOC_INPUT_FLAGS_DROP;
-		break;
-	case RTE_FLOW_ACTION_TYPE_COUNT:
-		vnic0 = STAILQ_FIRST(&bp->ff_pool[0]);
-		filter1 = bnxt_get_l2_filter(bp, filter, vnic0);
-		if (filter1 == NULL) {
-			rc = -ENOSPC;
-			goto ret;
-		}
-		filter->fw_l2_filter_id = filter1->fw_l2_filter_id;
-		filter->flags = HWRM_CFA_NTUPLE_FILTER_ALLOC_INPUT_FLAGS_METER;
-		break;
-	case RTE_FLOW_ACTION_TYPE_VF:
-		act_vf = (const struct rte_flow_action_vf *)act->conf;
-		vf = act_vf->id;
-		if (!BNXT_PF(bp)) {
-			rte_flow_error_set(error, EINVAL,
-				   RTE_FLOW_ERROR_TYPE_ACTION,
-				   act,
-				   "Configuring on a VF!");
-			rc = -rte_errno;
-			goto ret;
-		}
-
-		if (vf >= bp->pdev->max_vfs) {
-			rte_flow_error_set(error, EINVAL,
-				   RTE_FLOW_ERROR_TYPE_ACTION,
-				   act,
-				   "Incorrect VF id!");
-			rc = -rte_errno;
-			goto ret;
-		}
-
-		filter->mirror_vnic_id =
-		dflt_vnic = bnxt_hwrm_func_qcfg_vf_dflt_vnic_id(bp, vf);
-		if (dflt_vnic < 0) {
-			/* This simply indicates there's no driver loaded.
-			 * This is not an error.
-			 */
-			rte_flow_error_set(error, EINVAL,
-				   RTE_FLOW_ERROR_TYPE_ACTION,
-				   act,
-				   "Unable to get default VNIC for VF");
-			rc = -rte_errno;
-			goto ret;
-		}
-		filter->mirror_vnic_id = dflt_vnic;
-		filter->enables |= NTUPLE_FLTR_ALLOC_INPUT_EN_MIRROR_VNIC_ID;
-
-		vnic0 = STAILQ_FIRST(&bp->ff_pool[0]);
-		filter1 = bnxt_get_l2_filter(bp, filter, vnic0);
-		if (filter1 == NULL) {
-			rc = -ENOSPC;
-			goto ret;
-		}
-		filter->fw_l2_filter_id = filter1->fw_l2_filter_id;
-		break;
-
-	default:
-		rte_flow_error_set(error, EINVAL,
-				   RTE_FLOW_ERROR_TYPE_ACTION, act,
-				   "Invalid action.");
-		rc = -rte_errno;
-		goto ret;
-	}
-
-	act = nxt_non_void_action(++act);
-	if (act->type != RTE_FLOW_ACTION_TYPE_END) {
-		rte_flow_error_set(error, EINVAL,
-				   RTE_FLOW_ERROR_TYPE_ACTION,
-				   act, "Invalid action.");
-		rc = -rte_errno;
-		goto ret;
-	}
-ret:
-	return rc;
-}
-
-static int
-bnxt_flow_validate(struct rte_eth_dev *dev,
-		const struct rte_flow_attr *attr,
-		const struct rte_flow_item pattern[],
-		const struct rte_flow_action actions[],
-		struct rte_flow_error *error)
-{
-	struct bnxt *bp = (struct bnxt *)dev->data->dev_private;
-	struct bnxt_filter_info *filter;
-	int ret = 0;
-
-	ret = bnxt_flow_agrs_validate(attr, pattern, actions, error);
-	if (ret != 0)
-		return ret;
-
-	filter = bnxt_get_unused_filter(bp);
-	if (filter == NULL) {
-		PMD_DRV_LOG(ERR, "Not enough resources for a new flow.\n");
-		return -ENOMEM;
-	}
-
-	ret = bnxt_validate_and_parse_flow(dev, pattern, actions, attr,
-					   error, filter);
-	/* No need to hold on to this filter if we are just validating flow */
-	filter->fw_l2_filter_id = UINT64_MAX;
-	bnxt_free_filter(bp, filter);
-
-	return ret;
-}
-
-static int
-bnxt_match_filter(struct bnxt *bp, struct bnxt_filter_info *nf)
-{
-	struct bnxt_filter_info *mf;
-	struct rte_flow *flow;
-	int i;
-
-	for (i = bp->nr_vnics - 1; i >= 0; i--) {
-		struct bnxt_vnic_info *vnic = &bp->vnic_info[i];
-
-		STAILQ_FOREACH(flow, &vnic->flow_list, next) {
-			mf = flow->filter;
-
-			if (mf->filter_type == nf->filter_type &&
-			    mf->flags == nf->flags &&
-			    mf->src_port == nf->src_port &&
-			    mf->src_port_mask == nf->src_port_mask &&
-			    mf->dst_port == nf->dst_port &&
-			    mf->dst_port_mask == nf->dst_port_mask &&
-			    mf->ip_protocol == nf->ip_protocol &&
-			    mf->ip_addr_type == nf->ip_addr_type &&
-			    mf->ethertype == nf->ethertype &&
-			    mf->vni == nf->vni &&
-			    mf->tunnel_type == nf->tunnel_type &&
-			    mf->l2_ovlan == nf->l2_ovlan &&
-			    mf->l2_ovlan_mask == nf->l2_ovlan_mask &&
-			    mf->l2_ivlan == nf->l2_ivlan &&
-			    mf->l2_ivlan_mask == nf->l2_ivlan_mask &&
-			    !memcmp(mf->l2_addr, nf->l2_addr, ETHER_ADDR_LEN) &&
-			    !memcmp(mf->l2_addr_mask, nf->l2_addr_mask,
-				    ETHER_ADDR_LEN) &&
-			    !memcmp(mf->src_macaddr, nf->src_macaddr,
-				    ETHER_ADDR_LEN) &&
-			    !memcmp(mf->dst_macaddr, nf->dst_macaddr,
-				    ETHER_ADDR_LEN) &&
-			    !memcmp(mf->src_ipaddr, nf->src_ipaddr,
-				    sizeof(nf->src_ipaddr)) &&
-			    !memcmp(mf->src_ipaddr_mask, nf->src_ipaddr_mask,
-				    sizeof(nf->src_ipaddr_mask)) &&
-			    !memcmp(mf->dst_ipaddr, nf->dst_ipaddr,
-				    sizeof(nf->dst_ipaddr)) &&
-			    !memcmp(mf->dst_ipaddr_mask, nf->dst_ipaddr_mask,
-				    sizeof(nf->dst_ipaddr_mask))) {
-				if (mf->dst_id == nf->dst_id)
-					return -EEXIST;
-				/* Same Flow, Different queue
-				 * Clear the old ntuple filter
-				 */
-				if (nf->filter_type == HWRM_CFA_EM_FILTER)
-					bnxt_hwrm_clear_em_filter(bp, mf);
-				if (nf->filter_type == HWRM_CFA_NTUPLE_FILTER)
-					bnxt_hwrm_clear_ntuple_filter(bp, mf);
-				/* Free the old filter, update flow
-				 * with new filter
-				 */
-				bnxt_free_filter(bp, mf);
-				flow->filter = nf;
-				return -EXDEV;
-			}
-		}
-	}
-	return 0;
-}
-
-static struct rte_flow *
-bnxt_flow_create(struct rte_eth_dev *dev,
-		  const struct rte_flow_attr *attr,
-		  const struct rte_flow_item pattern[],
-		  const struct rte_flow_action actions[],
-		  struct rte_flow_error *error)
-{
-	struct bnxt *bp = (struct bnxt *)dev->data->dev_private;
-	struct bnxt_filter_info *filter;
-	struct bnxt_vnic_info *vnic = NULL;
-	bool update_flow = false;
-	struct rte_flow *flow;
-	unsigned int i;
-	int ret = 0;
-
-	flow = rte_zmalloc("bnxt_flow", sizeof(struct rte_flow), 0);
-	if (!flow) {
-		rte_flow_error_set(error, ENOMEM,
-				   RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
-				   "Failed to allocate memory");
-		return flow;
-	}
-
-	ret = bnxt_flow_agrs_validate(attr, pattern, actions, error);
-	if (ret != 0) {
-		PMD_DRV_LOG(ERR, "Not a validate flow.\n");
-		goto free_flow;
-	}
-
-	filter = bnxt_get_unused_filter(bp);
-	if (filter == NULL) {
-		PMD_DRV_LOG(ERR, "Not enough resources for a new flow.\n");
-		goto free_flow;
-	}
-
-	ret = bnxt_validate_and_parse_flow(dev, pattern, actions, attr,
-					   error, filter);
-	if (ret != 0)
-		goto free_filter;
-
-	ret = bnxt_match_filter(bp, filter);
-	if (ret == -EEXIST) {
-		PMD_DRV_LOG(DEBUG, "Flow already exists.\n");
-		/* Clear the filter that was created as part of
-		 * validate_and_parse_flow() above
-		 */
-		bnxt_hwrm_clear_l2_filter(bp, filter);
-		goto free_filter;
-	} else if (ret == -EXDEV) {
-		PMD_DRV_LOG(DEBUG, "Flow with same pattern exists");
-		PMD_DRV_LOG(DEBUG, "Updating with different destination\n");
-		update_flow = true;
-	}
-
-	if (filter->filter_type == HWRM_CFA_EM_FILTER) {
-		filter->enables |=
-			HWRM_CFA_EM_FLOW_ALLOC_INPUT_ENABLES_L2_FILTER_ID;
-		ret = bnxt_hwrm_set_em_filter(bp, filter->dst_id, filter);
-	}
-	if (filter->filter_type == HWRM_CFA_NTUPLE_FILTER) {
-		filter->enables |=
-			HWRM_CFA_NTUPLE_FILTER_ALLOC_INPUT_ENABLES_L2_FILTER_ID;
-		ret = bnxt_hwrm_set_ntuple_filter(bp, filter->dst_id, filter);
-	}
-
-	for (i = 0; i < bp->nr_vnics; i++) {
-		vnic = &bp->vnic_info[i];
-		if (filter->dst_id == vnic->fw_vnic_id)
-			break;
-	}
-
-	if (!ret) {
-		flow->filter = filter;
-		flow->vnic = vnic;
-		if (update_flow) {
-			ret = -EXDEV;
-			goto free_flow;
-		}
-		PMD_DRV_LOG(ERR, "Successfully created flow.\n");
-		STAILQ_INSERT_TAIL(&vnic->flow_list, flow, next);
-		return flow;
-	}
-free_filter:
-	bnxt_free_filter(bp, filter);
-free_flow:
-	if (ret == -EEXIST)
-		rte_flow_error_set(error, ret,
-				   RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
-				   "Matching Flow exists.");
-	else if (ret == -EXDEV)
-		rte_flow_error_set(error, ret,
-				   RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
-				   "Flow with pattern exists, updating destination queue");
-	else
-		rte_flow_error_set(error, -ret,
-				   RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
-				   "Failed to create flow.");
-	rte_free(flow);
-	flow = NULL;
-	return flow;
-}
-
-static int
-bnxt_flow_destroy(struct rte_eth_dev *dev,
-		  struct rte_flow *flow,
-		  struct rte_flow_error *error)
-{
-	struct bnxt *bp = (struct bnxt *)dev->data->dev_private;
-	struct bnxt_filter_info *filter = flow->filter;
-	struct bnxt_vnic_info *vnic = flow->vnic;
-	int ret = 0;
-
-	ret = bnxt_match_filter(bp, filter);
-	if (ret == 0)
-		PMD_DRV_LOG(ERR, "Could not find matching flow\n");
-	if (filter->filter_type == HWRM_CFA_EM_FILTER)
-		ret = bnxt_hwrm_clear_em_filter(bp, filter);
-	if (filter->filter_type == HWRM_CFA_NTUPLE_FILTER)
-		ret = bnxt_hwrm_clear_ntuple_filter(bp, filter);
-	else
-		ret = bnxt_hwrm_clear_l2_filter(bp, filter);
-	if (!ret) {
-		STAILQ_REMOVE(&vnic->flow_list, flow, rte_flow, next);
-		rte_free(flow);
-	} else {
-		rte_flow_error_set(error, -ret,
-				   RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
-				   "Failed to destroy flow.");
-	}
-
-	return ret;
-}
-
-static int
-bnxt_flow_flush(struct rte_eth_dev *dev, struct rte_flow_error *error)
-{
-	struct bnxt *bp = (struct bnxt *)dev->data->dev_private;
-	struct bnxt_vnic_info *vnic;
-	struct rte_flow *flow;
-	unsigned int i;
-	int ret = 0;
-
-	for (i = 0; i < bp->nr_vnics; i++) {
-		vnic = &bp->vnic_info[i];
-		STAILQ_FOREACH(flow, &vnic->flow_list, next) {
-			struct bnxt_filter_info *filter = flow->filter;
-
-			if (filter->filter_type == HWRM_CFA_EM_FILTER)
-				ret = bnxt_hwrm_clear_em_filter(bp, filter);
-			if (filter->filter_type == HWRM_CFA_NTUPLE_FILTER)
-				ret = bnxt_hwrm_clear_ntuple_filter(bp, filter);
-
-			if (ret) {
-				rte_flow_error_set(error, -ret,
-						   RTE_FLOW_ERROR_TYPE_HANDLE,
-						   NULL,
-						   "Failed to flush flow in HW.");
-				return -rte_errno;
-			}
-
-			STAILQ_REMOVE(&vnic->flow_list, flow,
-				      rte_flow, next);
-			rte_free(flow);
-		}
-	}
-
-	return ret;
-}
-
-const struct rte_flow_ops bnxt_flow_ops = {
-	.validate = bnxt_flow_validate,
-	.create = bnxt_flow_create,
-	.destroy = bnxt_flow_destroy,
-	.flush = bnxt_flow_flush,
-};
diff --git a/drivers/net/bnxt/bnxt_flow.c b/drivers/net/bnxt/bnxt_flow.c
new file mode 100644
index 000000000..a491e9dbf
--- /dev/null
+++ b/drivers/net/bnxt/bnxt_flow.c
@@ -0,0 +1,1167 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2014-2018 Broadcom
+ * All rights reserved.
+ */
+
+#include <sys/queue.h>
+
+#include <rte_log.h>
+#include <rte_malloc.h>
+#include <rte_flow.h>
+#include <rte_flow_driver.h>
+#include <rte_tailq.h>
+
+#include "bnxt.h"
+#include "bnxt_filter.h"
+#include "bnxt_hwrm.h"
+#include "bnxt_vnic.h"
+#include "bnxt_util.h"
+#include "hsi_struct_def_dpdk.h"
+
+static int
+bnxt_flow_args_validate(const struct rte_flow_attr *attr,
+			const struct rte_flow_item pattern[],
+			const struct rte_flow_action actions[],
+			struct rte_flow_error *error)
+{
+	if (!pattern) {
+		rte_flow_error_set(error,
+				   EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ITEM_NUM,
+				   NULL,
+				   "NULL pattern.");
+		return -rte_errno;
+	}
+
+	if (!actions) {
+		rte_flow_error_set(error,
+				   EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ACTION_NUM,
+				   NULL,
+				   "NULL action.");
+		return -rte_errno;
+	}
+
+	if (!attr) {
+		rte_flow_error_set(error,
+				   EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ATTR,
+				   NULL,
+				   "NULL attribute.");
+		return -rte_errno;
+	}
+
+	return 0;
+}
+
+static const struct rte_flow_item *
+bnxt_flow_non_void_item(const struct rte_flow_item *cur)
+{
+	while (1) {
+		if (cur->type != RTE_FLOW_ITEM_TYPE_VOID)
+			return cur;
+		cur++;
+	}
+}
+
+static const struct rte_flow_action *
+bnxt_flow_non_void_action(const struct rte_flow_action *cur)
+{
+	while (1) {
+		if (cur->type != RTE_FLOW_ACTION_TYPE_VOID)
+			return cur;
+		cur++;
+	}
+}
+
+static int
+bnxt_filter_type_check(const struct rte_flow_item pattern[],
+		       struct rte_flow_error *error __rte_unused)
+{
+	const struct rte_flow_item *item =
+		bnxt_flow_non_void_item(pattern);
+	int use_ntuple = 1;
+
+	while (item->type != RTE_FLOW_ITEM_TYPE_END) {
+		switch (item->type) {
+		case RTE_FLOW_ITEM_TYPE_ETH:
+			use_ntuple = 1;
+			break;
+		case RTE_FLOW_ITEM_TYPE_VLAN:
+			use_ntuple = 0;
+			break;
+		case RTE_FLOW_ITEM_TYPE_IPV4:
+		case RTE_FLOW_ITEM_TYPE_IPV6:
+		case RTE_FLOW_ITEM_TYPE_TCP:
+		case RTE_FLOW_ITEM_TYPE_UDP:
+			/* FALLTHROUGH */
+			/* need ntuple match, reset exact match */
+			if (!use_ntuple) {
+				PMD_DRV_LOG(ERR,
+					"VLAN flow cannot use NTUPLE filter\n");
+				rte_flow_error_set
+					(error,
+					 EINVAL,
+					 RTE_FLOW_ERROR_TYPE_ITEM,
+					 item,
+					 "Cannot use VLAN with NTUPLE");
+				return -rte_errno;
+			}
+			use_ntuple |= 1;
+			break;
+		default:
+			PMD_DRV_LOG(ERR, "Unknown Flow type\n");
+			use_ntuple |= 1;
+		}
+		item++;
+	}
+	return use_ntuple;
+}
+
+static int
+bnxt_validate_and_parse_flow_type(struct bnxt *bp,
+				  const struct rte_flow_attr *attr,
+				  const struct rte_flow_item pattern[],
+				  struct rte_flow_error *error,
+				  struct bnxt_filter_info *filter)
+{
+	const struct rte_flow_item *item = bnxt_flow_non_void_item(pattern);
+	const struct rte_flow_item_vlan *vlan_spec, *vlan_mask;
+	const struct rte_flow_item_ipv4 *ipv4_spec, *ipv4_mask;
+	const struct rte_flow_item_ipv6 *ipv6_spec, *ipv6_mask;
+	const struct rte_flow_item_tcp *tcp_spec, *tcp_mask;
+	const struct rte_flow_item_udp *udp_spec, *udp_mask;
+	const struct rte_flow_item_eth *eth_spec, *eth_mask;
+	const struct rte_flow_item_nvgre *nvgre_spec;
+	const struct rte_flow_item_nvgre *nvgre_mask;
+	const struct rte_flow_item_vxlan *vxlan_spec;
+	const struct rte_flow_item_vxlan *vxlan_mask;
+	uint8_t vni_mask[] = {0xFF, 0xFF, 0xFF};
+	uint8_t tni_mask[] = {0xFF, 0xFF, 0xFF};
+	const struct rte_flow_item_vf *vf_spec;
+	uint32_t tenant_id_be = 0;
+	bool vni_masked = 0;
+	bool tni_masked = 0;
+	uint32_t vf = 0;
+	int use_ntuple;
+	uint32_t en = 0;
+	uint32_t en_ethertype;
+	int dflt_vnic;
+
+	use_ntuple = bnxt_filter_type_check(pattern, error);
+	PMD_DRV_LOG(DEBUG, "Use NTUPLE %d\n", use_ntuple);
+	if (use_ntuple < 0)
+		return use_ntuple;
+
+	filter->filter_type = use_ntuple ?
+		HWRM_CFA_NTUPLE_FILTER : HWRM_CFA_EM_FILTER;
+	en_ethertype = use_ntuple ?
+		NTUPLE_FLTR_ALLOC_INPUT_EN_ETHERTYPE :
+		EM_FLOW_ALLOC_INPUT_EN_ETHERTYPE;
+
+	while (item->type != RTE_FLOW_ITEM_TYPE_END) {
+		if (item->last) {
+			/* last or range is NOT supported as match criteria */
+			rte_flow_error_set(error, EINVAL,
+					   RTE_FLOW_ERROR_TYPE_ITEM,
+					   item,
+					   "No support for range");
+			return -rte_errno;
+		}
+
+		if (!item->spec || !item->mask) {
+			rte_flow_error_set(error, EINVAL,
+					   RTE_FLOW_ERROR_TYPE_ITEM,
+					   item,
+					   "spec/mask is NULL");
+			return -rte_errno;
+		}
+
+		switch (item->type) {
+		case RTE_FLOW_ITEM_TYPE_ETH:
+			eth_spec = item->spec;
+			eth_mask = item->mask;
+
+			/* Source MAC address mask cannot be partially set.
+			 * Should be All 0's or all 1's.
+			 * Destination MAC address mask must not be partially
+			 * set. Should be all 1's or all 0's.
+			 */
+			if ((!is_zero_ether_addr(&eth_mask->src) &&
+			     !is_broadcast_ether_addr(&eth_mask->src)) ||
+			    (!is_zero_ether_addr(&eth_mask->dst) &&
+			     !is_broadcast_ether_addr(&eth_mask->dst))) {
+				rte_flow_error_set(error,
+						   EINVAL,
+						   RTE_FLOW_ERROR_TYPE_ITEM,
+						   item,
+						   "MAC_addr mask not valid");
+				return -rte_errno;
+			}
+
+			/* Mask is not allowed. Only exact matches are */
+			if (eth_mask->type &&
+			    eth_mask->type != RTE_BE16(0xffff)) {
+				rte_flow_error_set(error, EINVAL,
+						   RTE_FLOW_ERROR_TYPE_ITEM,
+						   item,
+						   "ethertype mask not valid");
+				return -rte_errno;
+			}
+
+			if (is_broadcast_ether_addr(&eth_mask->dst)) {
+				rte_memcpy(filter->dst_macaddr,
+					   &eth_spec->dst, 6);
+				en |= use_ntuple ?
+					NTUPLE_FLTR_ALLOC_INPUT_EN_DST_MACADDR :
+					EM_FLOW_ALLOC_INPUT_EN_DST_MACADDR;
+			}
+
+			if (is_broadcast_ether_addr(&eth_mask->src)) {
+				rte_memcpy(filter->src_macaddr,
+					   &eth_spec->src, 6);
+				en |= use_ntuple ?
+					NTUPLE_FLTR_ALLOC_INPUT_EN_SRC_MACADDR :
+					EM_FLOW_ALLOC_INPUT_EN_SRC_MACADDR;
+			} /*
+			   * else {
+			   *  PMD_DRV_LOG(ERR, "Handle this condition\n");
+			   * }
+			   */
+			if (eth_mask->type) {
+				filter->ethertype =
+					rte_be_to_cpu_16(eth_spec->type);
+				en |= en_ethertype;
+			}
+
+			break;
+		case RTE_FLOW_ITEM_TYPE_VLAN:
+			vlan_spec = item->spec;
+			vlan_mask = item->mask;
+			if (en & en_ethertype) {
+				rte_flow_error_set(error, EINVAL,
+						   RTE_FLOW_ERROR_TYPE_ITEM,
+						   item,
+						   "VLAN TPID matching is not"
+						   " supported");
+				return -rte_errno;
+			}
+			if (vlan_mask->tci &&
+			    vlan_mask->tci == RTE_BE16(0x0fff)) {
+				/* Only the VLAN ID can be matched. */
+				filter->l2_ovlan =
+					rte_be_to_cpu_16(vlan_spec->tci &
+							 RTE_BE16(0x0fff));
+				en |= EM_FLOW_ALLOC_INPUT_EN_OVLAN_VID;
+			} else {
+				rte_flow_error_set(error,
+						   EINVAL,
+						   RTE_FLOW_ERROR_TYPE_ITEM,
+						   item,
+						   "VLAN mask is invalid");
+				return -rte_errno;
+			}
+			if (vlan_mask->inner_type &&
+			    vlan_mask->inner_type != RTE_BE16(0xffff)) {
+				rte_flow_error_set(error, EINVAL,
+						   RTE_FLOW_ERROR_TYPE_ITEM,
+						   item,
+						   "inner ethertype mask not"
+						   " valid");
+				return -rte_errno;
+			}
+			if (vlan_mask->inner_type) {
+				filter->ethertype =
+					rte_be_to_cpu_16(vlan_spec->inner_type);
+				en |= en_ethertype;
+			}
+
+			break;
+		case RTE_FLOW_ITEM_TYPE_IPV4:
+			/* If mask is not involved, we could use EM filters. */
+			ipv4_spec = item->spec;
+			ipv4_mask = item->mask;
+			/* Only IP DST and SRC fields are maskable. */
+			if (ipv4_mask->hdr.version_ihl ||
+			    ipv4_mask->hdr.type_of_service ||
+			    ipv4_mask->hdr.total_length ||
+			    ipv4_mask->hdr.packet_id ||
+			    ipv4_mask->hdr.fragment_offset ||
+			    ipv4_mask->hdr.time_to_live ||
+			    ipv4_mask->hdr.next_proto_id ||
+			    ipv4_mask->hdr.hdr_checksum) {
+				rte_flow_error_set(error,
+						   EINVAL,
+						   RTE_FLOW_ERROR_TYPE_ITEM,
+						   item,
+						   "Invalid IPv4 mask.");
+				return -rte_errno;
+			}
+
+			filter->dst_ipaddr[0] = ipv4_spec->hdr.dst_addr;
+			filter->src_ipaddr[0] = ipv4_spec->hdr.src_addr;
+
+			if (use_ntuple)
+				en |= NTUPLE_FLTR_ALLOC_INPUT_EN_SRC_IPADDR |
+					NTUPLE_FLTR_ALLOC_INPUT_EN_DST_IPADDR;
+			else
+				en |= EM_FLOW_ALLOC_INPUT_EN_SRC_IPADDR |
+					EM_FLOW_ALLOC_INPUT_EN_DST_IPADDR;
+
+			if (ipv4_mask->hdr.src_addr) {
+				filter->src_ipaddr_mask[0] =
+					ipv4_mask->hdr.src_addr;
+				en |= !use_ntuple ? 0 :
+				     NTUPLE_FLTR_ALLOC_INPUT_EN_SRC_IPADDR_MASK;
+			}
+
+			if (ipv4_mask->hdr.dst_addr) {
+				filter->dst_ipaddr_mask[0] =
+					ipv4_mask->hdr.dst_addr;
+				en |= !use_ntuple ? 0 :
+				     NTUPLE_FLTR_ALLOC_INPUT_EN_DST_IPADDR_MASK;
+			}
+
+			filter->ip_addr_type = use_ntuple ?
+			 HWRM_CFA_NTUPLE_FILTER_ALLOC_INPUT_IP_ADDR_TYPE_IPV4 :
+			 HWRM_CFA_EM_FLOW_ALLOC_INPUT_IP_ADDR_TYPE_IPV4;
+
+			if (ipv4_spec->hdr.next_proto_id) {
+				filter->ip_protocol =
+					ipv4_spec->hdr.next_proto_id;
+				if (use_ntuple)
+					en |= NTUPLE_FLTR_ALLOC_IN_EN_IP_PROTO;
+				else
+					en |= EM_FLOW_ALLOC_INPUT_EN_IP_PROTO;
+			}
+			break;
+		case RTE_FLOW_ITEM_TYPE_IPV6:
+			ipv6_spec = item->spec;
+			ipv6_mask = item->mask;
+
+			/* Only IP DST and SRC fields are maskable. */
+			if (ipv6_mask->hdr.vtc_flow ||
+			    ipv6_mask->hdr.payload_len ||
+			    ipv6_mask->hdr.proto ||
+			    ipv6_mask->hdr.hop_limits) {
+				rte_flow_error_set(error,
+						   EINVAL,
+						   RTE_FLOW_ERROR_TYPE_ITEM,
+						   item,
+						   "Invalid IPv6 mask.");
+				return -rte_errno;
+			}
+
+			if (use_ntuple)
+				en |= NTUPLE_FLTR_ALLOC_INPUT_EN_SRC_IPADDR |
+					NTUPLE_FLTR_ALLOC_INPUT_EN_DST_IPADDR;
+			else
+				en |= EM_FLOW_ALLOC_INPUT_EN_SRC_IPADDR |
+					EM_FLOW_ALLOC_INPUT_EN_DST_IPADDR;
+
+			rte_memcpy(filter->src_ipaddr,
+				   ipv6_spec->hdr.src_addr, 16);
+			rte_memcpy(filter->dst_ipaddr,
+				   ipv6_spec->hdr.dst_addr, 16);
+
+			if (!bnxt_check_zero_bytes(ipv6_mask->hdr.src_addr,
+						   16)) {
+				rte_memcpy(filter->src_ipaddr_mask,
+					   ipv6_mask->hdr.src_addr, 16);
+				en |= !use_ntuple ? 0 :
+				    NTUPLE_FLTR_ALLOC_INPUT_EN_SRC_IPADDR_MASK;
+			}
+
+			if (!bnxt_check_zero_bytes(ipv6_mask->hdr.dst_addr,
+						   16)) {
+				rte_memcpy(filter->dst_ipaddr_mask,
+					   ipv6_mask->hdr.dst_addr, 16);
+				en |= !use_ntuple ? 0 :
+				     NTUPLE_FLTR_ALLOC_INPUT_EN_DST_IPADDR_MASK;
+			}
+
+			filter->ip_addr_type = use_ntuple ?
+				NTUPLE_FLTR_ALLOC_INPUT_IP_ADDR_TYPE_IPV6 :
+				EM_FLOW_ALLOC_INPUT_IP_ADDR_TYPE_IPV6;
+			break;
+		case RTE_FLOW_ITEM_TYPE_TCP:
+			tcp_spec = item->spec;
+			tcp_mask = item->mask;
+
+			/* Check TCP mask. Only DST & SRC ports are maskable */
+			if (tcp_mask->hdr.sent_seq ||
+			    tcp_mask->hdr.recv_ack ||
+			    tcp_mask->hdr.data_off ||
+			    tcp_mask->hdr.tcp_flags ||
+			    tcp_mask->hdr.rx_win ||
+			    tcp_mask->hdr.cksum ||
+			    tcp_mask->hdr.tcp_urp) {
+				rte_flow_error_set(error,
+						   EINVAL,
+						   RTE_FLOW_ERROR_TYPE_ITEM,
+						   item,
+						   "Invalid TCP mask");
+				return -rte_errno;
+			}
+
+			filter->src_port = tcp_spec->hdr.src_port;
+			filter->dst_port = tcp_spec->hdr.dst_port;
+
+			if (use_ntuple)
+				en |= NTUPLE_FLTR_ALLOC_INPUT_EN_SRC_PORT |
+					NTUPLE_FLTR_ALLOC_INPUT_EN_DST_PORT;
+			else
+				en |= EM_FLOW_ALLOC_INPUT_EN_SRC_PORT |
+					EM_FLOW_ALLOC_INPUT_EN_DST_PORT;
+
+			if (tcp_mask->hdr.dst_port) {
+				filter->dst_port_mask = tcp_mask->hdr.dst_port;
+				en |= !use_ntuple ? 0 :
+				  NTUPLE_FLTR_ALLOC_INPUT_EN_DST_PORT_MASK;
+			}
+
+			if (tcp_mask->hdr.src_port) {
+				filter->src_port_mask = tcp_mask->hdr.src_port;
+				en |= !use_ntuple ? 0 :
+				  NTUPLE_FLTR_ALLOC_INPUT_EN_SRC_PORT_MASK;
+			}
+			break;
+		case RTE_FLOW_ITEM_TYPE_UDP:
+			udp_spec = item->spec;
+			udp_mask = item->mask;
+
+			if (udp_mask->hdr.dgram_len ||
+			    udp_mask->hdr.dgram_cksum) {
+				rte_flow_error_set(error,
+						   EINVAL,
+						   RTE_FLOW_ERROR_TYPE_ITEM,
+						   item,
+						   "Invalid UDP mask");
+				return -rte_errno;
+			}
+
+			filter->src_port = udp_spec->hdr.src_port;
+			filter->dst_port = udp_spec->hdr.dst_port;
+
+			if (use_ntuple)
+				en |= NTUPLE_FLTR_ALLOC_INPUT_EN_SRC_PORT |
+					NTUPLE_FLTR_ALLOC_INPUT_EN_DST_PORT;
+			else
+				en |= EM_FLOW_ALLOC_INPUT_EN_SRC_PORT |
+					EM_FLOW_ALLOC_INPUT_EN_DST_PORT;
+
+			if (udp_mask->hdr.dst_port) {
+				filter->dst_port_mask = udp_mask->hdr.dst_port;
+				en |= !use_ntuple ? 0 :
+				  NTUPLE_FLTR_ALLOC_INPUT_EN_DST_PORT_MASK;
+			}
+
+			if (udp_mask->hdr.src_port) {
+				filter->src_port_mask = udp_mask->hdr.src_port;
+				en |= !use_ntuple ? 0 :
+				  NTUPLE_FLTR_ALLOC_INPUT_EN_SRC_PORT_MASK;
+			}
+			break;
+		case RTE_FLOW_ITEM_TYPE_VXLAN:
+			vxlan_spec = item->spec;
+			vxlan_mask = item->mask;
+			/* Check if VXLAN item is used to describe protocol.
+			 * If yes, both spec and mask should be NULL.
+			 * If no, both spec and mask shouldn't be NULL.
+			 */
+			if ((!vxlan_spec && vxlan_mask) ||
+			    (vxlan_spec && !vxlan_mask)) {
+				rte_flow_error_set(error,
+						   EINVAL,
+						   RTE_FLOW_ERROR_TYPE_ITEM,
+						   item,
+						   "Invalid VXLAN item");
+				return -rte_errno;
+			}
+
+			if (vxlan_spec->rsvd1 || vxlan_spec->rsvd0[0] ||
+			    vxlan_spec->rsvd0[1] || vxlan_spec->rsvd0[2] ||
+			    vxlan_spec->flags != 0x8) {
+				rte_flow_error_set(error,
+						   EINVAL,
+						   RTE_FLOW_ERROR_TYPE_ITEM,
+						   item,
+						   "Invalid VXLAN item");
+				return -rte_errno;
+			}
+
+			/* Check if VNI is masked. */
+			if (vxlan_spec && vxlan_mask) {
+				vni_masked =
+					!!memcmp(vxlan_mask->vni, vni_mask,
+						 RTE_DIM(vni_mask));
+				if (vni_masked) {
+					rte_flow_error_set
+						(error,
+						 EINVAL,
+						 RTE_FLOW_ERROR_TYPE_ITEM,
+						 item,
+						 "Invalid VNI mask");
+					return -rte_errno;
+				}
+
+				rte_memcpy(((uint8_t *)&tenant_id_be + 1),
+					   vxlan_spec->vni, 3);
+				filter->vni =
+					rte_be_to_cpu_32(tenant_id_be);
+				filter->tunnel_type =
+				 CFA_NTUPLE_FILTER_ALLOC_REQ_TUNNEL_TYPE_VXLAN;
+			}
+			break;
+		case RTE_FLOW_ITEM_TYPE_NVGRE:
+			nvgre_spec = item->spec;
+			nvgre_mask = item->mask;
+			/* Check if NVGRE item is used to describe protocol.
+			 * If yes, both spec and mask should be NULL.
+			 * If no, both spec and mask shouldn't be NULL.
+			 */
+			if ((!nvgre_spec && nvgre_mask) ||
+			    (nvgre_spec && !nvgre_mask)) {
+				rte_flow_error_set(error,
+						   EINVAL,
+						   RTE_FLOW_ERROR_TYPE_ITEM,
+						   item,
+						   "Invalid NVGRE item");
+				return -rte_errno;
+			}
+
+			if (nvgre_spec->c_k_s_rsvd0_ver != 0x2000 ||
+			    nvgre_spec->protocol != 0x6558) {
+				rte_flow_error_set(error,
+						   EINVAL,
+						   RTE_FLOW_ERROR_TYPE_ITEM,
+						   item,
+						   "Invalid NVGRE item");
+				return -rte_errno;
+			}
+
+			if (nvgre_spec && nvgre_mask) {
+				tni_masked =
+					!!memcmp(nvgre_mask->tni, tni_mask,
+						 RTE_DIM(tni_mask));
+				if (tni_masked) {
+					rte_flow_error_set
+						(error,
+						 EINVAL,
+						 RTE_FLOW_ERROR_TYPE_ITEM,
+						 item,
+						 "Invalid TNI mask");
+					return -rte_errno;
+				}
+				rte_memcpy(((uint8_t *)&tenant_id_be + 1),
+					   nvgre_spec->tni, 3);
+				filter->vni =
+					rte_be_to_cpu_32(tenant_id_be);
+				filter->tunnel_type =
+				 CFA_NTUPLE_FILTER_ALLOC_REQ_TUNNEL_TYPE_NVGRE;
+			}
+			break;
+		case RTE_FLOW_ITEM_TYPE_VF:
+			vf_spec = item->spec;
+			vf = vf_spec->id;
+
+			if (!BNXT_PF(bp)) {
+				rte_flow_error_set(error,
+						   EINVAL,
+						   RTE_FLOW_ERROR_TYPE_ITEM,
+						   item,
+						   "Configuring on a VF!");
+				return -rte_errno;
+			}
+
+			if (vf >= bp->pdev->max_vfs) {
+				rte_flow_error_set(error,
+						   EINVAL,
+						   RTE_FLOW_ERROR_TYPE_ITEM,
+						   item,
+						   "Incorrect VF id!");
+				return -rte_errno;
+			}
+
+			if (!attr->transfer) {
+				rte_flow_error_set(error,
+						   ENOTSUP,
+						   RTE_FLOW_ERROR_TYPE_ITEM,
+						   item,
+						   "Matching VF traffic without"
+						   " affecting it (transfer attribute)"
+						   " is unsupported");
+				return -rte_errno;
+			}
+
+			filter->mirror_vnic_id =
+			dflt_vnic = bnxt_hwrm_func_qcfg_vf_dflt_vnic_id(bp, vf);
+			if (dflt_vnic < 0) {
+				/* This simply indicates there's no driver
+				 * loaded. This is not an error.
+				 */
+				rte_flow_error_set
+					(error,
+					 EINVAL,
+					 RTE_FLOW_ERROR_TYPE_ITEM,
+					 item,
+					 "Unable to get default VNIC for VF");
+				return -rte_errno;
+			}
+
+			filter->mirror_vnic_id = dflt_vnic;
+			en |= NTUPLE_FLTR_ALLOC_INPUT_EN_MIRROR_VNIC_ID;
+			break;
+		default:
+			break;
+		}
+		item++;
+	}
+	filter->enables = en;
+
+	return 0;
+}
+
+/* Parse attributes */
+static int
+bnxt_flow_parse_attr(const struct rte_flow_attr *attr,
+		     struct rte_flow_error *error)
+{
+	/* Must be input direction */
+	if (!attr->ingress) {
+		rte_flow_error_set(error,
+				   EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ATTR_INGRESS,
+				   attr,
+				   "Only support ingress.");
+		return -rte_errno;
+	}
+
+	/* Not supported */
+	if (attr->egress) {
+		rte_flow_error_set(error,
+				   EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ATTR_EGRESS,
+				   attr,
+				   "No support for egress.");
+		return -rte_errno;
+	}
+
+	/* Not supported */
+	if (attr->priority) {
+		rte_flow_error_set(error,
+				   EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ATTR_PRIORITY,
+				   attr,
+				   "No support for priority.");
+		return -rte_errno;
+	}
+
+	/* Not supported */
+	if (attr->group) {
+		rte_flow_error_set(error,
+				   EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ATTR_GROUP,
+				   attr,
+				   "No support for group.");
+		return -rte_errno;
+	}
+
+	return 0;
+}
+
+struct bnxt_filter_info *
+bnxt_get_l2_filter(struct bnxt *bp, struct bnxt_filter_info *nf,
+		   struct bnxt_vnic_info *vnic)
+{
+	struct bnxt_filter_info *filter1, *f0;
+	struct bnxt_vnic_info *vnic0;
+	int rc;
+
+	vnic0 = STAILQ_FIRST(&bp->ff_pool[0]);
+	f0 = STAILQ_FIRST(&vnic0->filter);
+
+	/* This flow has same DST MAC as the port/l2 filter. */
+	if (memcmp(f0->l2_addr, nf->dst_macaddr, ETHER_ADDR_LEN) == 0)
+		return f0;
+
+	/* This flow needs DST MAC which is not same as port/l2 */
+	PMD_DRV_LOG(DEBUG, "Create L2 filter for DST MAC\n");
+	filter1 = bnxt_get_unused_filter(bp);
+	if (filter1 == NULL)
+		return NULL;
+
+	filter1->flags = HWRM_CFA_L2_FILTER_ALLOC_INPUT_FLAGS_PATH_RX;
+	filter1->enables = HWRM_CFA_L2_FILTER_ALLOC_INPUT_ENABLES_L2_ADDR |
+			L2_FILTER_ALLOC_INPUT_EN_L2_ADDR_MASK;
+	memcpy(filter1->l2_addr, nf->dst_macaddr, ETHER_ADDR_LEN);
+	memset(filter1->l2_addr_mask, 0xff, ETHER_ADDR_LEN);
+	rc = bnxt_hwrm_set_l2_filter(bp, vnic->fw_vnic_id,
+				     filter1);
+	if (rc) {
+		bnxt_free_filter(bp, filter1);
+		return NULL;
+	}
+	return filter1;
+}
+
+static int
+bnxt_validate_and_parse_flow(struct rte_eth_dev *dev,
+			     const struct rte_flow_item pattern[],
+			     const struct rte_flow_action actions[],
+			     const struct rte_flow_attr *attr,
+			     struct rte_flow_error *error,
+			     struct bnxt_filter_info *filter)
+{
+	const struct rte_flow_action *act =
+		bnxt_flow_non_void_action(actions);
+	struct bnxt *bp = (struct bnxt *)dev->data->dev_private;
+	const struct rte_flow_action_queue *act_q;
+	const struct rte_flow_action_vf *act_vf;
+	struct bnxt_vnic_info *vnic, *vnic0;
+	struct bnxt_filter_info *filter1;
+	uint32_t vf = 0;
+	int dflt_vnic;
+	int rc;
+
+	if (bp->eth_dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS) {
+		PMD_DRV_LOG(ERR, "Cannot create flow on RSS queues\n");
+		rte_flow_error_set(error,
+				   EINVAL,
+				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				   NULL,
+				   "Cannot create flow on RSS queues");
+		rc = -rte_errno;
+		goto ret;
+	}
+
+	rc =
+	bnxt_validate_and_parse_flow_type(bp, attr, pattern, error, filter);
+	if (rc != 0)
+		goto ret;
+
+	rc = bnxt_flow_parse_attr(attr, error);
+	if (rc != 0)
+		goto ret;
+
+	/* Since we support ingress attribute only - right now. */
+	if (filter->filter_type == HWRM_CFA_EM_FILTER)
+		filter->flags = HWRM_CFA_EM_FLOW_ALLOC_INPUT_FLAGS_PATH_RX;
+
+	switch (act->type) {
+	case RTE_FLOW_ACTION_TYPE_QUEUE:
+		/* Allow this flow. Redirect to a VNIC. */
+		act_q = (const struct rte_flow_action_queue *)act->conf;
+		if (act_q->index >= bp->rx_nr_rings) {
+			rte_flow_error_set(error,
+					   EINVAL,
+					   RTE_FLOW_ERROR_TYPE_ACTION,
+					   act,
+					   "Invalid queue ID.");
+			rc = -rte_errno;
+			goto ret;
+		}
+		PMD_DRV_LOG(DEBUG, "Queue index %d\n", act_q->index);
+
+		vnic0 = STAILQ_FIRST(&bp->ff_pool[0]);
+		vnic = STAILQ_FIRST(&bp->ff_pool[act_q->index]);
+		if (vnic == NULL) {
+			rte_flow_error_set(error,
+					   EINVAL,
+					   RTE_FLOW_ERROR_TYPE_ACTION,
+					   act,
+					   "No matching VNIC for queue ID.");
+			rc = -rte_errno;
+			goto ret;
+		}
+
+		filter->dst_id = vnic->fw_vnic_id;
+		filter1 = bnxt_get_l2_filter(bp, filter, vnic);
+		if (filter1 == NULL) {
+			rc = -ENOSPC;
+			goto ret;
+		}
+
+		filter->fw_l2_filter_id = filter1->fw_l2_filter_id;
+		PMD_DRV_LOG(DEBUG, "VNIC found\n");
+		break;
+	case RTE_FLOW_ACTION_TYPE_DROP:
+		vnic0 = STAILQ_FIRST(&bp->ff_pool[0]);
+		filter1 = bnxt_get_l2_filter(bp, filter, vnic0);
+		if (filter1 == NULL) {
+			rc = -ENOSPC;
+			goto ret;
+		}
+
+		filter->fw_l2_filter_id = filter1->fw_l2_filter_id;
+		if (filter->filter_type == HWRM_CFA_EM_FILTER)
+			filter->flags =
+				HWRM_CFA_EM_FLOW_ALLOC_INPUT_FLAGS_DROP;
+		else
+			filter->flags =
+				HWRM_CFA_NTUPLE_FILTER_ALLOC_INPUT_FLAGS_DROP;
+		break;
+	case RTE_FLOW_ACTION_TYPE_COUNT:
+		vnic0 = STAILQ_FIRST(&bp->ff_pool[0]);
+		filter1 = bnxt_get_l2_filter(bp, filter, vnic0);
+		if (filter1 == NULL) {
+			rc = -ENOSPC;
+			goto ret;
+		}
+
+		filter->fw_l2_filter_id = filter1->fw_l2_filter_id;
+		filter->flags = HWRM_CFA_NTUPLE_FILTER_ALLOC_INPUT_FLAGS_METER;
+		break;
+	case RTE_FLOW_ACTION_TYPE_VF:
+		act_vf = (const struct rte_flow_action_vf *)act->conf;
+		vf = act_vf->id;
+
+		if (!BNXT_PF(bp)) {
+			rte_flow_error_set(error,
+					   EINVAL,
+					   RTE_FLOW_ERROR_TYPE_ACTION,
+					   act,
+					   "Configuring on a VF!");
+			rc = -rte_errno;
+			goto ret;
+		}
+
+		if (vf >= bp->pdev->max_vfs) {
+			rte_flow_error_set(error,
+					   EINVAL,
+					   RTE_FLOW_ERROR_TYPE_ACTION,
+					   act,
+					   "Incorrect VF id!");
+			rc = -rte_errno;
+			goto ret;
+		}
+
+		filter->mirror_vnic_id =
+		dflt_vnic = bnxt_hwrm_func_qcfg_vf_dflt_vnic_id(bp, vf);
+		if (dflt_vnic < 0) {
+			/* This simply indicates there's no driver loaded.
+			 * This is not an error.
+			 */
+			rte_flow_error_set(error,
+					   EINVAL,
+					   RTE_FLOW_ERROR_TYPE_ACTION,
+					   act,
+					   "Unable to get default VNIC for VF");
+			rc = -rte_errno;
+			goto ret;
+		}
+
+		filter->mirror_vnic_id = dflt_vnic;
+		filter->enables |= NTUPLE_FLTR_ALLOC_INPUT_EN_MIRROR_VNIC_ID;
+
+		vnic0 = STAILQ_FIRST(&bp->ff_pool[0]);
+		filter1 = bnxt_get_l2_filter(bp, filter, vnic0);
+		if (filter1 == NULL) {
+			rc = -ENOSPC;
+			goto ret;
+		}
+
+		filter->fw_l2_filter_id = filter1->fw_l2_filter_id;
+		break;
+
+	default:
+		rte_flow_error_set(error,
+				   EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ACTION,
+				   act,
+				   "Invalid action.");
+		rc = -rte_errno;
+		goto ret;
+	}
+
+	if (filter1) {
+		bnxt_free_filter(bp, filter1);
+		filter1->fw_l2_filter_id = -1;
+	}
+
+	act = bnxt_flow_non_void_action(++act);
+	if (act->type != RTE_FLOW_ACTION_TYPE_END) {
+		rte_flow_error_set(error,
+				   EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ACTION,
+				   act,
+				   "Invalid action.");
+		rc = -rte_errno;
+		goto ret;
+	}
+ret:
+	return rc;
+}
+
+static int
+bnxt_flow_validate(struct rte_eth_dev *dev,
+		   const struct rte_flow_attr *attr,
+		   const struct rte_flow_item pattern[],
+		   const struct rte_flow_action actions[],
+		   struct rte_flow_error *error)
+{
+	struct bnxt *bp = (struct bnxt *)dev->data->dev_private;
+	struct bnxt_filter_info *filter;
+	int ret = 0;
+
+	ret = bnxt_flow_args_validate(attr, pattern, actions, error);
+	if (ret != 0)
+		return ret;
+
+	filter = bnxt_get_unused_filter(bp);
+	if (filter == NULL) {
+		PMD_DRV_LOG(ERR, "Not enough resources for a new flow.\n");
+		return -ENOMEM;
+	}
+
+	ret = bnxt_validate_and_parse_flow(dev, pattern, actions, attr,
+					   error, filter);
+	/* No need to hold on to this filter if we are just validating flow */
+	filter->fw_l2_filter_id = UINT64_MAX;
+	bnxt_free_filter(bp, filter);
+
+	return ret;
+}
+
+static int
+bnxt_match_filter(struct bnxt *bp, struct bnxt_filter_info *nf)
+{
+	struct bnxt_filter_info *mf;
+	struct rte_flow *flow;
+	int i;
+
+	for (i = bp->nr_vnics - 1; i >= 0; i--) {
+		struct bnxt_vnic_info *vnic = &bp->vnic_info[i];
+
+		STAILQ_FOREACH(flow, &vnic->flow_list, next) {
+			mf = flow->filter;
+
+			if (mf->filter_type == nf->filter_type &&
+			    mf->flags == nf->flags &&
+			    mf->src_port == nf->src_port &&
+			    mf->src_port_mask == nf->src_port_mask &&
+			    mf->dst_port == nf->dst_port &&
+			    mf->dst_port_mask == nf->dst_port_mask &&
+			    mf->ip_protocol == nf->ip_protocol &&
+			    mf->ip_addr_type == nf->ip_addr_type &&
+			    mf->ethertype == nf->ethertype &&
+			    mf->vni == nf->vni &&
+			    mf->tunnel_type == nf->tunnel_type &&
+			    mf->l2_ovlan == nf->l2_ovlan &&
+			    mf->l2_ovlan_mask == nf->l2_ovlan_mask &&
+			    mf->l2_ivlan == nf->l2_ivlan &&
+			    mf->l2_ivlan_mask == nf->l2_ivlan_mask &&
+			    !memcmp(mf->l2_addr, nf->l2_addr, ETHER_ADDR_LEN) &&
+			    !memcmp(mf->l2_addr_mask, nf->l2_addr_mask,
+				    ETHER_ADDR_LEN) &&
+			    !memcmp(mf->src_macaddr, nf->src_macaddr,
+				    ETHER_ADDR_LEN) &&
+			    !memcmp(mf->dst_macaddr, nf->dst_macaddr,
+				    ETHER_ADDR_LEN) &&
+			    !memcmp(mf->src_ipaddr, nf->src_ipaddr,
+				    sizeof(nf->src_ipaddr)) &&
+			    !memcmp(mf->src_ipaddr_mask, nf->src_ipaddr_mask,
+				    sizeof(nf->src_ipaddr_mask)) &&
+			    !memcmp(mf->dst_ipaddr, nf->dst_ipaddr,
+				    sizeof(nf->dst_ipaddr)) &&
+			    !memcmp(mf->dst_ipaddr_mask, nf->dst_ipaddr_mask,
+				    sizeof(nf->dst_ipaddr_mask))) {
+				if (mf->dst_id == nf->dst_id)
+					return -EEXIST;
+				/* Same Flow, Different queue
+				 * Clear the old ntuple filter
+				 */
+				if (nf->filter_type == HWRM_CFA_EM_FILTER)
+					bnxt_hwrm_clear_em_filter(bp, mf);
+				if (nf->filter_type == HWRM_CFA_NTUPLE_FILTER)
+					bnxt_hwrm_clear_ntuple_filter(bp, mf);
+				/* Free the old filter, update flow
+				 * with new filter
+				 */
+				bnxt_free_filter(bp, mf);
+				flow->filter = nf;
+				return -EXDEV;
+			}
+		}
+	}
+	return 0;
+}
+
+static struct rte_flow *
+bnxt_flow_create(struct rte_eth_dev *dev,
+		 const struct rte_flow_attr *attr,
+		 const struct rte_flow_item pattern[],
+		 const struct rte_flow_action actions[],
+		 struct rte_flow_error *error)
+{
+	struct bnxt *bp = (struct bnxt *)dev->data->dev_private;
+	struct bnxt_filter_info *filter;
+	struct bnxt_vnic_info *vnic = NULL;
+	bool update_flow = false;
+	struct rte_flow *flow;
+	unsigned int i;
+	int ret = 0;
+
+	flow = rte_zmalloc("bnxt_flow", sizeof(struct rte_flow), 0);
+	if (!flow) {
+		rte_flow_error_set(error, ENOMEM,
+				   RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
+				   "Failed to allocate memory");
+		return flow;
+	}
+
+	ret = bnxt_flow_args_validate(attr, pattern, actions, error);
+	if (ret != 0) {
+		PMD_DRV_LOG(ERR, "Not a valid flow.\n");
+		goto free_flow;
+	}
+
+	filter = bnxt_get_unused_filter(bp);
+	if (filter == NULL) {
+		PMD_DRV_LOG(ERR, "Not enough resources for a new flow.\n");
+		goto free_flow;
+	}
+
+	ret = bnxt_validate_and_parse_flow(dev, pattern, actions, attr,
+					   error, filter);
+	if (ret != 0)
+		goto free_filter;
+
+	ret = bnxt_match_filter(bp, filter);
+	if (ret == -EEXIST) {
+		PMD_DRV_LOG(DEBUG, "Flow already exists.\n");
+		/* Clear the filter that was created as part of
+		 * validate_and_parse_flow() above
+		 */
+		bnxt_hwrm_clear_l2_filter(bp, filter);
+		goto free_filter;
+	} else if (ret == -EXDEV) {
+		PMD_DRV_LOG(DEBUG, "Flow with same pattern exists\n");
+		PMD_DRV_LOG(DEBUG, "Updating with different destination\n");
+		update_flow = true;
+	}
+
+	if (filter->filter_type == HWRM_CFA_EM_FILTER) {
+		filter->enables |=
+			HWRM_CFA_EM_FLOW_ALLOC_INPUT_ENABLES_L2_FILTER_ID;
+		ret = bnxt_hwrm_set_em_filter(bp, filter->dst_id, filter);
+	}
+
+	if (filter->filter_type == HWRM_CFA_NTUPLE_FILTER) {
+		filter->enables |=
+			HWRM_CFA_NTUPLE_FILTER_ALLOC_INPUT_ENABLES_L2_FILTER_ID;
+		ret = bnxt_hwrm_set_ntuple_filter(bp, filter->dst_id, filter);
+	}
+
+	for (i = 0; i < bp->nr_vnics; i++) {
+		vnic = &bp->vnic_info[i];
+		if (filter->dst_id == vnic->fw_vnic_id)
+			break;
+	}
+
+	if (!ret) {
+		flow->filter = filter;
+		flow->vnic = vnic;
+		if (update_flow) {
+			ret = -EXDEV;
+			goto free_flow;
+		}
+		PMD_DRV_LOG(ERR, "Successfully created flow.\n");
+		STAILQ_INSERT_TAIL(&vnic->flow_list, flow, next);
+		return flow;
+	}
+free_filter:
+	bnxt_free_filter(bp, filter);
+free_flow:
+	if (ret == -EEXIST)
+		rte_flow_error_set(error, ret,
+				   RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
+				   "Matching Flow exists.");
+	else if (ret == -EXDEV)
+		rte_flow_error_set(error, ret,
+				   RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
+				   "Flow with pattern exists, updating destination queue");
+	else
+		rte_flow_error_set(error, -ret,
+				   RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
+				   "Failed to create flow.");
+	rte_free(flow);
+	flow = NULL;
+	return flow;
+}
+
+static int
+bnxt_flow_destroy(struct rte_eth_dev *dev,
+		  struct rte_flow *flow,
+		  struct rte_flow_error *error)
+{
+	struct bnxt *bp = (struct bnxt *)dev->data->dev_private;
+	struct bnxt_filter_info *filter = flow->filter;
+	struct bnxt_vnic_info *vnic = flow->vnic;
+	int ret = 0;
+
+	ret = bnxt_match_filter(bp, filter);
+	if (ret == 0)
+		PMD_DRV_LOG(ERR, "Could not find matching flow\n");
+	if (filter->filter_type == HWRM_CFA_EM_FILTER)
+		ret = bnxt_hwrm_clear_em_filter(bp, filter);
+	if (filter->filter_type == HWRM_CFA_NTUPLE_FILTER)
+		ret = bnxt_hwrm_clear_ntuple_filter(bp, filter);
+	else
+		ret = bnxt_hwrm_clear_l2_filter(bp, filter);
+	if (!ret) {
+		STAILQ_REMOVE(&vnic->flow_list, flow, rte_flow, next);
+		rte_free(flow);
+	} else {
+		rte_flow_error_set(error, -ret,
+				   RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
+				   "Failed to destroy flow.");
+	}
+
+	return ret;
+}
+
+static int
+bnxt_flow_flush(struct rte_eth_dev *dev, struct rte_flow_error *error)
+{
+	struct bnxt *bp = (struct bnxt *)dev->data->dev_private;
+	struct bnxt_vnic_info *vnic;
+	struct rte_flow *flow;
+	unsigned int i;
+	int ret = 0;
+
+	for (i = 0; i < bp->nr_vnics; i++) {
+		vnic = &bp->vnic_info[i];
+		STAILQ_FOREACH(flow, &vnic->flow_list, next) {
+			struct bnxt_filter_info *filter = flow->filter;
+
+			if (filter->filter_type == HWRM_CFA_EM_FILTER)
+				ret = bnxt_hwrm_clear_em_filter(bp, filter);
+			if (filter->filter_type == HWRM_CFA_NTUPLE_FILTER)
+				ret = bnxt_hwrm_clear_ntuple_filter(bp, filter);
+
+			if (ret) {
+				rte_flow_error_set
+					(error,
+					 -ret,
+					 RTE_FLOW_ERROR_TYPE_HANDLE,
+					 NULL,
+					 "Failed to flush flow in HW.");
+				return -rte_errno;
+			}
+
+			STAILQ_REMOVE(&vnic->flow_list, flow,
+				      rte_flow, next);
+			rte_free(flow);
+		}
+	}
+
+	return ret;
+}
+
+const struct rte_flow_ops bnxt_flow_ops = {
+	.validate = bnxt_flow_validate,
+	.create = bnxt_flow_create,
+	.destroy = bnxt_flow_destroy,
+	.flush = bnxt_flow_flush,
+};
-- 
2.15.1 (Apple Git-101)
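
For context, a minimal application-side sketch of how the rte_flow_ops
registered above are reached through the generic rte_flow API (illustrative
only, not part of the patch: the helper name and the TCP port 80 match are
made up, the port is assumed to be configured without ETH_MQ_RX_RSS since the
parser above rejects RSS mode, and zeroed spec/mask structures are passed for
the wildcard items because bnxt_validate_and_parse_flow_type() requires a
non-NULL spec and mask for every item):

#include <netinet/in.h>

#include <rte_byteorder.h>
#include <rte_flow.h>

/* Steer ingress TCP traffic with destination port 80 to Rx queue 'queue_id'. */
static struct rte_flow *
steer_http_to_queue(uint16_t port_id, uint16_t queue_id,
		    struct rte_flow_error *error)
{
	struct rte_flow_attr attr = { .ingress = 1 };
	struct rte_flow_item_eth eth_spec = { .type = 0 };
	struct rte_flow_item_eth eth_mask = { .type = 0 };
	struct rte_flow_item_ipv4 ip_spec = { .hdr.next_proto_id = IPPROTO_TCP };
	struct rte_flow_item_ipv4 ip_mask = { .hdr.next_proto_id = 0 };
	struct rte_flow_item_tcp tcp_spec = { .hdr.dst_port = RTE_BE16(80) };
	struct rte_flow_item_tcp tcp_mask = { .hdr.dst_port = RTE_BE16(0xffff) };
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH,
		  .spec = &eth_spec, .mask = &eth_mask },
		{ .type = RTE_FLOW_ITEM_TYPE_IPV4,
		  .spec = &ip_spec, .mask = &ip_mask },
		{ .type = RTE_FLOW_ITEM_TYPE_TCP,
		  .spec = &tcp_spec, .mask = &tcp_mask },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action_queue queue = { .index = queue_id };
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};

	/* On success the flow is linked onto the matching vnic's flow_list
	 * by bnxt_flow_create(); rte_flow_destroy() removes it again.
	 */
	return rte_flow_create(port_id, &attr, pattern, actions, error);
}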

^ permalink raw reply	[flat|nested] 73+ messages in thread

* [dpdk-dev] [PATCH 23/31] net/bnxt: check for invalid vnic id
  2018-06-19 21:30 [dpdk-dev] [PATCH 00/31] bnxt patchset Ajit Khaparde
                   ` (21 preceding siblings ...)
  2018-06-19 21:30 ` [dpdk-dev] [PATCH 22/31] net/bnxt: filter/flow refactoring Ajit Khaparde
@ 2018-06-19 21:30 ` Ajit Khaparde
  2018-06-26 15:30   ` Ferruh Yigit
  2018-06-19 21:30 ` [dpdk-dev] [PATCH 24/31] net/bnxt: update HWRM API to v1.9.2.9 Ajit Khaparde
                   ` (8 subsequent siblings)
  31 siblings, 1 reply; 73+ messages in thread
From: Ajit Khaparde @ 2018-06-19 21:30 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, Jay Ding

From: Jay Ding <jay.ding@broadcom.com>

Add a check for an invalid VNIC id before sending the message to the
firmware in bnxt_hwrm_vnic_plcmode_cfg().

Signed-off-by: Jay Ding <jay.ding@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/bnxt_hwrm.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c
index 64687a69b..910129f12 100644
--- a/drivers/net/bnxt/bnxt_hwrm.c
+++ b/drivers/net/bnxt/bnxt_hwrm.c
@@ -1560,6 +1560,11 @@ int bnxt_hwrm_vnic_plcmode_cfg(struct bnxt *bp,
 	struct hwrm_vnic_plcmodes_cfg_output *resp = bp->hwrm_cmd_resp_addr;
 	uint16_t size;
 
+	if (vnic->fw_vnic_id == INVALID_HW_RING_ID) {
+		PMD_DRV_LOG(DEBUG, "VNIC ID %x\n", vnic->fw_vnic_id);
+		return rc;
+	}
+
 	HWRM_PREP(req, VNIC_PLCMODES_CFG);
 
 	req.flags = rte_cpu_to_le_32(
-- 
2.15.1 (Apple Git-101)

^ permalink raw reply	[flat|nested] 73+ messages in thread

* [dpdk-dev] [PATCH 24/31] net/bnxt: update HWRM API to v1.9.2.9
  2018-06-19 21:30 [dpdk-dev] [PATCH 00/31] bnxt patchset Ajit Khaparde
                   ` (22 preceding siblings ...)
  2018-06-19 21:30 ` [dpdk-dev] [PATCH 23/31] net/bnxt: check for invalid vnic id Ajit Khaparde
@ 2018-06-19 21:30 ` Ajit Khaparde
  2018-06-26 15:30   ` Ferruh Yigit
  2018-06-19 21:30 ` [dpdk-dev] [PATCH 25/31] net/bnxt: fix Tx with multiple mbuf Ajit Khaparde
                   ` (7 subsequent siblings)
  31 siblings, 1 reply; 73+ messages in thread
From: Ajit Khaparde @ 2018-06-19 21:30 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, Rob Miller, Rob Miller

From: Rob Miller <rmiller@broadcom.com>

Update the HWRM API to v1.9.2.9.

Signed-off-by: Rob Miller <rob.miller@broadcom.com>
Reviewed-by: Scott Branden <scott.branden@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Signed-off-by: Rob Miller <rmiller@broadcom.com>
---
 drivers/net/bnxt/hsi_struct_def_dpdk.h | 113 ++++++++++++++++++++++++++++++++-
 1 file changed, 111 insertions(+), 2 deletions(-)

diff --git a/drivers/net/bnxt/hsi_struct_def_dpdk.h b/drivers/net/bnxt/hsi_struct_def_dpdk.h
index fd6d8807e..f5c7b4228 100644
--- a/drivers/net/bnxt/hsi_struct_def_dpdk.h
+++ b/drivers/net/bnxt/hsi_struct_def_dpdk.h
@@ -686,8 +686,8 @@ struct hwrm_err_output {
 #define HWRM_VERSION_MINOR 9
 #define HWRM_VERSION_UPDATE 2
 /* non-zero means beta version */
-#define HWRM_VERSION_RSVD 6
-#define HWRM_VERSION_STR "1.9.2.6"
+#define HWRM_VERSION_RSVD 9
+#define HWRM_VERSION_STR "1.9.2.9"
 
 /****************
  * hwrm_ver_get *
@@ -3183,6 +3183,9 @@ struct hwrm_async_event_cmpl {
 	/* LLFC/PFC Configuration Change */
 	#define HWRM_ASYNC_EVENT_CMPL_EVENT_ID_LLFC_PFC_CHANGE \
 		UINT32_C(0x34)
+	/* Default VNIC Configuration Change */
+	#define HWRM_ASYNC_EVENT_CMPL_EVENT_ID_DEFAULT_VNIC_CHANGE \
+		UINT32_C(0x35)
 	/* HWRM Error */
 	#define HWRM_ASYNC_EVENT_CMPL_EVENT_ID_HWRM_ERROR \
 		UINT32_C(0xff)
@@ -3280,6 +3283,11 @@ struct hwrm_async_event_cmpl_link_status_change {
 		UINT32_C(0xffff0)
 	#define HWRM_ASYNC_EVENT_CMPL_LINK_STATUS_CHANGE_EVENT_DATA1_PORT_ID_SFT \
 		4
+	/* Indicates the physical function this event occurred on. */
+	#define HWRM_ASYNC_EVENT_CMPL_LINK_STATUS_CHANGE_EVENT_DATA1_PF_ID_MASK \
+		UINT32_C(0xff00000)
+	#define HWRM_ASYNC_EVENT_CMPL_LINK_STATUS_CHANGE_EVENT_DATA1_PF_ID_SFT \
+		20
 } __attribute__((packed));
 
 /* hwrm_async_event_cmpl_link_mtu_change (size:128b/16B) */
@@ -4087,6 +4095,10 @@ struct hwrm_async_event_cmpl_vf_flr {
 	#define HWRM_ASYNC_EVENT_CMPL_VF_FLR_EVENT_DATA1_VF_ID_MASK \
 		UINT32_C(0xffff)
 	#define HWRM_ASYNC_EVENT_CMPL_VF_FLR_EVENT_DATA1_VF_ID_SFT 0
+	/* Indicates the physical function this event occurred on. */
+	#define HWRM_ASYNC_EVENT_CMPL_VF_FLR_EVENT_DATA1_PF_ID_MASK \
+		UINT32_C(0xff0000)
+	#define HWRM_ASYNC_EVENT_CMPL_VF_FLR_EVENT_DATA1_PF_ID_SFT 16
 } __attribute__((packed));
 
 /* hwrm_async_event_cmpl_vf_mac_addr_change (size:128b/16B) */
@@ -4354,6 +4366,88 @@ struct hwrm_async_event_cmpl_llfc_pfc_change {
 		5
 } __attribute__((packed));
 
+/* hwrm_async_event_cmpl_default_vnic_change (size:128b/16B) */
+struct hwrm_async_event_cmpl_default_vnic_change {
+	uint16_t	type;
+	/*
+	 * This field indicates the exact type of the completion.
+	 * By convention, the LSB identifies the length of the
+	 * record in 16B units.  Even values indicate 16B
+	 * records.  Odd values indicate 32B
+	 * records.
+	 */
+	#define HWRM_ASYNC_EVENT_CMPL_DEFAULT_VNIC_CHANGE_TYPE_MASK \
+		UINT32_C(0x3f)
+	#define HWRM_ASYNC_EVENT_CMPL_DEFAULT_VNIC_CHANGE_TYPE_SFT \
+		0
+	/* HWRM Asynchronous Event Information */
+	#define HWRM_ASYNC_EVENT_CMPL_DEFAULT_VNIC_CHANGE_TYPE_HWRM_ASYNC_EVENT \
+		UINT32_C(0x2e)
+	#define HWRM_ASYNC_EVENT_CMPL_DEFAULT_VNIC_CHANGE_TYPE_LAST \
+		HWRM_ASYNC_EVENT_CMPL_DEFAULT_VNIC_CHANGE_TYPE_HWRM_ASYNC_EVENT
+	/* unused1 is 10 b */
+	#define HWRM_ASYNC_EVENT_CMPL_DEFAULT_VNIC_CHANGE_UNUSED1_MASK \
+		UINT32_C(0xffc0)
+	#define HWRM_ASYNC_EVENT_CMPL_DEFAULT_VNIC_CHANGE_UNUSED1_SFT \
+		6
+	/* Identifiers of events. */
+	uint16_t	event_id;
+	/* Notification of a default vnic allocation or free */
+	#define HWRM_ASYNC_EVENT_CMPL_DEFAULT_VNIC_CHANGE_EVENT_ID_ALLOC_FREE_NOTIFICATION \
+		UINT32_C(0x35)
+	#define HWRM_ASYNC_EVENT_CMPL_DEFAULT_VNIC_CHANGE_EVENT_ID_LAST \
+		HWRM_ASYNC_EVENT_CMPL_DEFAULT_VNIC_CHANGE_EVENT_ID_ALLOC_FREE_NOTIFICATION
+	/* Event specific data */
+	uint32_t	event_data2;
+	uint8_t	opaque_v;
+	/*
+	 * This value is written by the NIC such that it will be different
+	 * for each pass through the completion queue.   The even passes
+	 * will write 1.  The odd passes will write 0.
+	 */
+	#define HWRM_ASYNC_EVENT_CMPL_DEFAULT_VNIC_CHANGE_V \
+		UINT32_C(0x1)
+	/* opaque is 7 b */
+	#define HWRM_ASYNC_EVENT_CMPL_DEFAULT_VNIC_CHANGE_OPAQUE_MASK \
+		UINT32_C(0xfe)
+	#define HWRM_ASYNC_EVENT_CMPL_DEFAULT_VNIC_CHANGE_OPAQUE_SFT 1
+	/* 8-lsb timestamp from POR (100-msec resolution) */
+	uint8_t	timestamp_lo;
+	/* 16-lsb timestamp from POR (100-msec resolution) */
+	uint16_t	timestamp_hi;
+	/* Event specific data */
+	uint32_t	event_data1;
+	/* Indicates default vnic configuration change */
+	#define HWRM_ASYNC_EVENT_CMPL_DEFAULT_VNIC_CHANGE_EVENT_DATA1_DEF_VNIC_STATE_MASK \
+		UINT32_C(0x3)
+	#define HWRM_ASYNC_EVENT_CMPL_DEFAULT_VNIC_CHANGE_EVENT_DATA1_DEF_VNIC_STATE_SFT \
+		0
+	/*
+	 * If this field is set to 1, then it indicates that
+	 * a default VNIC has been allocated.
+	 */
+	#define HWRM_ASYNC_EVENT_CMPL_DEFAULT_VNIC_CHANGE_EVENT_DATA1_DEF_VNIC_STATE_DEF_VNIC_ALLOC \
+		UINT32_C(0x1)
+	/*
+	 * If this field is set to 2, then it indicates that
+	 * a default VNIC has been freed.
+	 */
+	#define HWRM_ASYNC_EVENT_CMPL_DEFAULT_VNIC_CHANGE_EVENT_DATA1_DEF_VNIC_STATE_DEF_VNIC_FREE \
+		UINT32_C(0x2)
+	#define HWRM_ASYNC_EVENT_CMPL_DEFAULT_VNIC_CHANGE_EVENT_DATA1_DEF_VNIC_STATE_LAST \
+		HWRM_ASYNC_EVENT_CMPL_DEFAULT_VNIC_CHANGE_EVENT_DATA1_DEF_VNIC_STATE_DEF_VNIC_FREE
+	/* Indicates the physical function this event occurred on. */
+	#define HWRM_ASYNC_EVENT_CMPL_DEFAULT_VNIC_CHANGE_EVENT_DATA1_PF_ID_MASK \
+		UINT32_C(0x3fc)
+	#define HWRM_ASYNC_EVENT_CMPL_DEFAULT_VNIC_CHANGE_EVENT_DATA1_PF_ID_SFT \
+		2
+	/* Indicates the virtual function this event occurred on */
+	#define HWRM_ASYNC_EVENT_CMPL_DEFAULT_VNIC_CHANGE_EVENT_DATA1_VF_ID_MASK \
+		UINT32_C(0x3fffc00)
+	#define HWRM_ASYNC_EVENT_CMPL_DEFAULT_VNIC_CHANGE_EVENT_DATA1_VF_ID_SFT \
+		10
+} __attribute__((packed));
+
 /* hwrm_async_event_cmpl_hwrm_error (size:128b/16B) */
 struct hwrm_async_event_cmpl_hwrm_error {
 	uint16_t	type;
@@ -5196,6 +5290,21 @@ struct hwrm_func_qcaps_output {
 	 */
 	#define HWRM_FUNC_QCAPS_OUTPUT_FLAGS_PCIE_STATS_SUPPORTED \
 		UINT32_C(0x10000)
+	/*
+	 * If the query is for a VF, then this flag shall be ignored.
+	 * If this query is for a PF and this flag is set to 1,
+	 * then the PF has the capability to adopt the VFs belonging
+	 * to another PF.
+	 */
+	#define HWRM_FUNC_QCAPS_OUTPUT_FLAGS_ADOPTED_PF_SUPPORTED \
+		UINT32_C(0x20000)
+	/*
+	 * If the query is for a VF, then this flag shall be ignored.
+	 * If this query is for a PF and this flag is set to 1,
+	 * then the PF has the capability to administer another PF.
+	 */
+	#define HWRM_FUNC_QCAPS_OUTPUT_FLAGS_ADMIN_PF_SUPPORTED \
+		UINT32_C(0x40000)
 	/*
 	 * This value is current MAC address configured for this
 	 * function. A value of 00-00-00-00-00-00 indicates no
-- 
2.15.1 (Apple Git-101)

^ permalink raw reply	[flat|nested] 73+ messages in thread

* [dpdk-dev] [PATCH 25/31] net/bnxt: fix Tx with multiple mbuf
  2018-06-19 21:30 [dpdk-dev] [PATCH 00/31] bnxt patchset Ajit Khaparde
                   ` (23 preceding siblings ...)
  2018-06-19 21:30 ` [dpdk-dev] [PATCH 24/31] net/bnxt: update HWRM API to v1.9.2.9 Ajit Khaparde
@ 2018-06-19 21:30 ` Ajit Khaparde
  2018-06-19 21:30 ` [dpdk-dev] [PATCH 26/31] net/bnxt: Revert reset of L2 filter id in clear_ntuple_filter Ajit Khaparde
                   ` (6 subsequent siblings)
  31 siblings, 0 replies; 73+ messages in thread
From: Ajit Khaparde @ 2018-06-19 21:30 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, Xiaoxin Peng, stable

From: Xiaoxin Peng <xiaoxin.peng@broadcom.com>

When using multiple mbuf segments to transmit large packets, the total
packet length (the sum of all segments) must be used to set
txbd->flags_type. Packets are not sent when tx_pkt->data_len (the
length of the first segment only) is used.
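
As an illustration of the reasoning (a standalone sketch, not part of the
patch), the total length of a chained mbuf is the sum of all segment
lengths and is already available as pkt_len:

#include <rte_mbuf.h>

/* For a 3-segment 2500-byte packet the first segment's data_len may be
 * only 1000 bytes, while pkt_len is 2500, so only pkt_len selects the
 * correct >= 2K length hint for the Tx descriptor.
 */
static inline uint32_t
total_tx_len(const struct rte_mbuf *tx_pkt)
{
	const struct rte_mbuf *m;
	uint32_t len = 0;

	for (m = tx_pkt; m != NULL; m = m->next)
		len += m->data_len;	/* equals tx_pkt->pkt_len */

	return len;
}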

Fixes: 6eb3cc2294fd ("net/bnxt: add initial Tx code")
Cc: stable@dpdk.org

Signed-off-by: Xiaoxin Peng <xiaoxin.peng@broadcom.com>
Reviewed-by: Herry Chen <herry.chen@broadcom.com>
Reviewed-by: Jason He <jason.he@broadcom.com>
Reviewed-by: Scott Branden <scott.branden@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/bnxt_txr.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/net/bnxt/bnxt_txr.c b/drivers/net/bnxt/bnxt_txr.c
index f8fd22156..23c8e6660 100644
--- a/drivers/net/bnxt/bnxt_txr.c
+++ b/drivers/net/bnxt/bnxt_txr.c
@@ -160,10 +160,10 @@ static uint16_t bnxt_start_xmit(struct rte_mbuf *tx_pkt,
 		*cmpl_next = false;
 	}
 	txbd->len = tx_pkt->data_len;
-	if (txbd->len >= 2014)
+	if (tx_pkt->pkt_len >= 2014)
 		txbd->flags_type |= TX_BD_LONG_FLAGS_LHINT_GTE2K;
 	else
-		txbd->flags_type |= lhint_arr[txbd->len >> 9];
+		txbd->flags_type |= lhint_arr[tx_pkt->pkt_len >> 9];
 	txbd->address = rte_cpu_to_le_32(rte_mbuf_data_iova(tx_buf->mbuf));
 
 	if (long_bd) {
-- 
2.15.1 (Apple Git-101)

^ permalink raw reply	[flat|nested] 73+ messages in thread

* [dpdk-dev] [PATCH 26/31] net/bnxt: Revert reset of L2 filter id in clear_ntuple_filter
  2018-06-19 21:30 [dpdk-dev] [PATCH 00/31] bnxt patchset Ajit Khaparde
                   ` (24 preceding siblings ...)
  2018-06-19 21:30 ` [dpdk-dev] [PATCH 25/31] net/bnxt: fix Tx with multiple mbuf Ajit Khaparde
@ 2018-06-19 21:30 ` Ajit Khaparde
  2018-06-19 21:30 ` [dpdk-dev] [PATCH 27/31] net/bnxt: check filter type before clearing it Ajit Khaparde
                   ` (5 subsequent siblings)
  31 siblings, 0 replies; 73+ messages in thread
From: Ajit Khaparde @ 2018-06-19 21:30 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, Somnath Kotur, ajit.khaparde

From: Somnath Kotur <somnath.kotur@broadcom.com>

The L2 filter id is needed in many scenarios, particularly when
the same ntuple filter is repurposed with a different destination
queue.

Fixes: 1383434c9089("net/bnxt: reset L2 filter id once filter is freed")
Cc: ajit.khaparde@broadcom.com
Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
---
 drivers/net/bnxt/bnxt_hwrm.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c
index 910129f12..ba8e44a9b 100644
--- a/drivers/net/bnxt/bnxt_hwrm.c
+++ b/drivers/net/bnxt/bnxt_hwrm.c
@@ -3798,7 +3798,6 @@ int bnxt_hwrm_clear_ntuple_filter(struct bnxt *bp,
 	HWRM_UNLOCK();
 
 	filter->fw_ntuple_filter_id = UINT64_MAX;
-	filter->fw_l2_filter_id = UINT64_MAX;
 
 	return 0;
 }
-- 
2.15.1 (Apple Git-101)

^ permalink raw reply	[flat|nested] 73+ messages in thread

* [dpdk-dev] [PATCH 27/31] net/bnxt: check filter type before clearing it
  2018-06-19 21:30 [dpdk-dev] [PATCH 00/31] bnxt patchset Ajit Khaparde
                   ` (25 preceding siblings ...)
  2018-06-19 21:30 ` [dpdk-dev] [PATCH 26/31] net/bnxt: Revert reset of L2 filter id in clear_ntuple_filter Ajit Khaparde
@ 2018-06-19 21:30 ` Ajit Khaparde
  2018-06-26 15:30   ` Ferruh Yigit
  2018-06-19 21:30 ` [dpdk-dev] [PATCH 28/31] net/bnxt: fix set MTU Ajit Khaparde
                   ` (4 subsequent siblings)
  31 siblings, 1 reply; 73+ messages in thread
From: Ajit Khaparde @ 2018-06-19 21:30 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit

In bnxt_free_filter_mem(), check the filter type and call the
appropriate HWRM command to clear the filter from HW.

Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/bnxt_filter.c | 21 +++++++++++++++++----
 1 file changed, 17 insertions(+), 4 deletions(-)

diff --git a/drivers/net/bnxt/bnxt_filter.c b/drivers/net/bnxt/bnxt_filter.c
index 31757d32c..1038941e8 100644
--- a/drivers/net/bnxt/bnxt_filter.c
+++ b/drivers/net/bnxt/bnxt_filter.c
@@ -117,16 +117,29 @@ void bnxt_free_filter_mem(struct bnxt *bp)
 	max_filters = bp->max_l2_ctx;
 	for (i = 0; i < max_filters; i++) {
 		filter = &bp->filter_info[i];
-		if (filter->fw_l2_filter_id != ((uint64_t)-1)) {
-			PMD_DRV_LOG(ERR, "HWRM filter is not freed??\n");
+		if (filter->fw_l2_filter_id != ((uint64_t)-1) &&
+		    filter->filter_type == HWRM_CFA_L2_FILTER) {
+			PMD_DRV_LOG(ERR, "L2 filter is not free\n");
 			/* Call HWRM to try to free filter again */
 			rc = bnxt_hwrm_clear_l2_filter(bp, filter);
 			if (rc)
 				PMD_DRV_LOG(ERR,
-				       "HWRM filter cannot be freed rc = %d\n",
-					rc);
+					    "Cannot free L2 filter: %d\n",
+					    rc);
 		}
 		filter->fw_l2_filter_id = UINT64_MAX;
+
+		if (filter->fw_ntuple_filter_id != ((uint64_t)-1) &&
+		    filter->filter_type == HWRM_CFA_NTUPLE_FILTER) {
+			PMD_DRV_LOG(ERR, "NTUPLE filter is not free\n");
+			/* Call HWRM to try to free filter again */
+			rc = bnxt_hwrm_clear_ntuple_filter(bp, filter);
+			if (rc)
+				PMD_DRV_LOG(ERR,
+					    "Cannot free NTUPLE filter: %d\n",
+					    rc);
+		}
+		filter->fw_ntuple_filter_id = UINT64_MAX;
 	}
 	STAILQ_INIT(&bp->free_filter_list);
 
-- 
2.15.1 (Apple Git-101)

^ permalink raw reply	[flat|nested] 73+ messages in thread

* [dpdk-dev] [PATCH 28/31] net/bnxt: fix set MTU
  2018-06-19 21:30 [dpdk-dev] [PATCH 00/31] bnxt patchset Ajit Khaparde
                   ` (26 preceding siblings ...)
  2018-06-19 21:30 ` [dpdk-dev] [PATCH 27/31] net/bnxt: check filter type before clearing it Ajit Khaparde
@ 2018-06-19 21:30 ` Ajit Khaparde
  2018-06-26 15:30   ` Ferruh Yigit
  2018-06-19 21:30 ` [dpdk-dev] [PATCH 29/31] net/bnxt: fix incorrect IO address handling in Tx Ajit Khaparde
                   ` (3 subsequent siblings)
  31 siblings, 1 reply; 73+ messages in thread
From: Ajit Khaparde @ 2018-06-19 21:30 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, stable

There is no need to update the hardware configuration if the new MTU
is not greater than the maximum amount of data a single mbuf can
accommodate.
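
A rough sketch of the new check (the helper name here is illustrative,
not from the driver):

#include <rte_mbuf.h>

/* The largest frame a single mbuf from 'mb_pool' can hold is the pool's
 * data room minus the headroom; only an MTU larger than that requires
 * the VNIC placement mode to be reprogrammed through HWRM.
 */
static inline int
mtu_needs_plcmode_cfg(struct rte_mempool *mb_pool, uint16_t new_mtu)
{
	uint16_t max_data = rte_pktmbuf_data_room_size(mb_pool) -
			    RTE_PKTMBUF_HEADROOM;

	return new_mtu > max_data;
}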

Fixes: daef48efe5e5 ("net/bnxt: support set MTU")
Cc: stable@dpdk.org

Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/bnxt_ethdev.c | 12 +++++++++---
 1 file changed, 9 insertions(+), 3 deletions(-)

diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index 9cfa43778..1145bc195 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -1597,6 +1597,7 @@ static int bnxt_mtu_set_op(struct rte_eth_dev *eth_dev, uint16_t new_mtu)
 
 	for (i = 0; i < bp->nr_vnics; i++) {
 		struct bnxt_vnic_info *vnic = &bp->vnic_info[i];
+		uint16_t size = 0;
 
 		vnic->mru = bp->eth_dev->data->mtu + ETHER_HDR_LEN +
 					ETHER_CRC_LEN + VLAN_TAG_SIZE * 2;
@@ -1604,9 +1605,14 @@ static int bnxt_mtu_set_op(struct rte_eth_dev *eth_dev, uint16_t new_mtu)
 		if (rc)
 			break;
 
-		rc = bnxt_hwrm_vnic_plcmode_cfg(bp, vnic);
-		if (rc)
-			return rc;
+		size = rte_pktmbuf_data_room_size(bp->rx_queues[0]->mb_pool);
+		size -= RTE_PKTMBUF_HEADROOM;
+
+		if (size < new_mtu) {
+			rc = bnxt_hwrm_vnic_plcmode_cfg(bp, vnic);
+			if (rc)
+				return rc;
+		}
 	}
 
 	return rc;
-- 
2.15.1 (Apple Git-101)

^ permalink raw reply	[flat|nested] 73+ messages in thread

* [dpdk-dev] [PATCH 29/31] net/bnxt: fix incorrect IO address handling in Tx
  2018-06-19 21:30 [dpdk-dev] [PATCH 00/31] bnxt patchset Ajit Khaparde
                   ` (27 preceding siblings ...)
  2018-06-19 21:30 ` [dpdk-dev] [PATCH 28/31] net/bnxt: fix set MTU Ajit Khaparde
@ 2018-06-19 21:30 ` Ajit Khaparde
  2018-06-19 21:30 ` [dpdk-dev] [PATCH 30/31] net/bnxt: allocate RSS context only if RSS mode is enabled Ajit Khaparde
                   ` (2 subsequent siblings)
  31 siblings, 0 replies; 73+ messages in thread
From: Ajit Khaparde @ 2018-06-19 21:30 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, stable

rte_mbuf_data_iova() returns a 64-bit address, but only the lower
32 bits of it were being used. Use rte_cpu_to_le_64() instead of
rte_cpu_to_le_32().
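
A small sketch of the truncation being fixed (bd_addr stands in for the
64-bit txbd->address field; the helper name is illustrative):

#include <rte_byteorder.h>
#include <rte_mbuf.h>

/* rte_mbuf_data_iova() yields a 64-bit bus address.  rte_cpu_to_le_32()
 * narrows the value to 32 bits first, so a buffer mapped above 4 GB
 * ends up with a corrupted descriptor address.
 */
static inline void
set_txbd_address(volatile uint64_t *bd_addr, struct rte_mbuf *m)
{
	/* wrong: *bd_addr = rte_cpu_to_le_32(rte_mbuf_data_iova(m)); */
	*bd_addr = rte_cpu_to_le_64(rte_mbuf_data_iova(m));
}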

Fixes: 6eb3cc2294fd ("net/bnxt: add initial Tx code")
Cc: stable@dpdk.org

Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/bnxt_txr.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/net/bnxt/bnxt_txr.c b/drivers/net/bnxt/bnxt_txr.c
index 23c8e6660..4e684f208 100644
--- a/drivers/net/bnxt/bnxt_txr.c
+++ b/drivers/net/bnxt/bnxt_txr.c
@@ -164,7 +164,7 @@ static uint16_t bnxt_start_xmit(struct rte_mbuf *tx_pkt,
 		txbd->flags_type |= TX_BD_LONG_FLAGS_LHINT_GTE2K;
 	else
 		txbd->flags_type |= lhint_arr[tx_pkt->pkt_len >> 9];
-	txbd->address = rte_cpu_to_le_32(rte_mbuf_data_iova(tx_buf->mbuf));
+	txbd->address = rte_cpu_to_le_64(rte_mbuf_data_iova(tx_buf->mbuf));
 
 	if (long_bd) {
 		txbd->flags_type |= TX_BD_LONG_TYPE_TX_BD_LONG;
@@ -287,7 +287,7 @@ static uint16_t bnxt_start_xmit(struct rte_mbuf *tx_pkt,
 		tx_buf = &txr->tx_buf_ring[txr->tx_prod];
 
 		txbd = &txr->tx_desc_ring[txr->tx_prod];
-		txbd->address = rte_cpu_to_le_32(rte_mbuf_data_iova(m_seg));
+		txbd->address = rte_cpu_to_le_64(rte_mbuf_data_iova(m_seg));
 		txbd->flags_type |= TX_BD_SHORT_TYPE_TX_BD_SHORT;
 		txbd->len = m_seg->data_len;
 
-- 
2.15.1 (Apple Git-101)

^ permalink raw reply	[flat|nested] 73+ messages in thread

* [dpdk-dev] [PATCH 30/31] net/bnxt: allocate RSS context only if RSS mode is enabled.
  2018-06-19 21:30 [dpdk-dev] [PATCH 00/31] bnxt patchset Ajit Khaparde
                   ` (28 preceding siblings ...)
  2018-06-19 21:30 ` [dpdk-dev] [PATCH 29/31] net/bnxt: fix incorrect IO address handling in Tx Ajit Khaparde
@ 2018-06-19 21:30 ` Ajit Khaparde
  2018-06-19 21:30 ` [dpdk-dev] [PATCH 31/31] net/bnxt: fix to move a flow to a different queue Ajit Khaparde
  2018-06-26 15:27 ` [dpdk-dev] [PATCH 00/31] bnxt patchset Ferruh Yigit
  31 siblings, 0 replies; 73+ messages in thread
From: Ajit Khaparde @ 2018-06-19 21:30 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit

Allocate the RSS context only if RSS mode is enabled.

Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/bnxt_ethdev.c | 16 ++++++++++------
 1 file changed, 10 insertions(+), 6 deletions(-)

diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index 1145bc195..dfae6e2d2 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -248,6 +248,7 @@ static int bnxt_init_chip(struct bnxt *bp)
 
 	/* VNIC configuration */
 	for (i = 0; i < bp->nr_vnics; i++) {
+		struct rte_eth_conf *dev_conf = &bp->eth_dev->data->dev_conf;
 		struct bnxt_vnic_info *vnic = &bp->vnic_info[i];
 
 		rc = bnxt_hwrm_vnic_alloc(bp, vnic);
@@ -257,12 +258,15 @@ static int bnxt_init_chip(struct bnxt *bp)
 			goto err_out;
 		}
 
-		rc = bnxt_hwrm_vnic_ctx_alloc(bp, vnic);
-		if (rc) {
-			PMD_DRV_LOG(ERR,
-				"HWRM vnic %d ctx alloc failure rc: %x\n",
-				i, rc);
-			goto err_out;
+		/* Alloc RSS context only if RSS mode is enabled */
+		if (dev_conf->rxmode.mq_mode & ETH_MQ_RX_RSS) {
+			rc = bnxt_hwrm_vnic_ctx_alloc(bp, vnic);
+			if (rc) {
+				PMD_DRV_LOG(ERR,
+					"HWRM vnic %d ctx alloc failure rc: %x\n",
+					i, rc);
+				goto err_out;
+			}
 		}
 
 		rc = bnxt_hwrm_vnic_cfg(bp, vnic);
-- 
2.15.1 (Apple Git-101)

^ permalink raw reply	[flat|nested] 73+ messages in thread

* [dpdk-dev] [PATCH 31/31] net/bnxt: fix to move a flow to a different queue
  2018-06-19 21:30 [dpdk-dev] [PATCH 00/31] bnxt patchset Ajit Khaparde
                   ` (29 preceding siblings ...)
  2018-06-19 21:30 ` [dpdk-dev] [PATCH 30/31] net/bnxt: allocate RSS context only if RSS mode is enabled Ajit Khaparde
@ 2018-06-19 21:30 ` Ajit Khaparde
  2018-06-26 15:27 ` [dpdk-dev] [PATCH 00/31] bnxt patchset Ferruh Yigit
  31 siblings, 0 replies; 73+ messages in thread
From: Ajit Khaparde @ 2018-06-19 21:30 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, Somnath Kotur, stable

From: Somnath Kotur <somnath.kotur@broadcom.com>

While moving a flow to a different destination queue,
the l2_filter_id being passed to the FW command was incorrect.
Fix it by reusing the matching filter's l2_filter_id, since
it is expected to be the same in this case.

Fixes: 5ef3b79fdfe6 ("net/bnxt: support flow filter ops")
Cc: stable@dpdk.org

Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/bnxt_flow.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/drivers/net/bnxt/bnxt_flow.c b/drivers/net/bnxt/bnxt_flow.c
index a491e9dbf..ac7656741 100644
--- a/drivers/net/bnxt/bnxt_flow.c
+++ b/drivers/net/bnxt/bnxt_flow.c
@@ -968,9 +968,13 @@ bnxt_match_filter(struct bnxt *bp, struct bnxt_filter_info *nf)
 				    sizeof(nf->dst_ipaddr_mask))) {
 				if (mf->dst_id == nf->dst_id)
 					return -EEXIST;
-				/* Same Flow, Different queue
+				/*
+				 * Same Flow, Different queue
 				 * Clear the old ntuple filter
+				 * Reuse the matching L2 filter
+				 * ID for the new filter
 				 */
+				nf->fw_l2_filter_id = mf->fw_l2_filter_id;
 				if (nf->filter_type == HWRM_CFA_EM_FILTER)
 					bnxt_hwrm_clear_em_filter(bp, mf);
 				if (nf->filter_type == HWRM_CFA_NTUPLE_FILTER)
-- 
2.15.1 (Apple Git-101)

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [dpdk-dev] [PATCH 00/31] bnxt patchset
  2018-06-19 21:30 [dpdk-dev] [PATCH 00/31] bnxt patchset Ajit Khaparde
                   ` (30 preceding siblings ...)
  2018-06-19 21:30 ` [dpdk-dev] [PATCH 31/31] net/bnxt: fix to move a flow to a different queue Ajit Khaparde
@ 2018-06-26 15:27 ` Ferruh Yigit
  2018-06-28 20:15   ` Ajit Khaparde
  2018-06-28 20:15   ` [dpdk-dev] [PATCH v2 00/23] " Ajit Khaparde
  31 siblings, 2 replies; 73+ messages in thread
From: Ferruh Yigit @ 2018-06-26 15:27 UTC (permalink / raw)
  To: Ajit Khaparde, dev

On 6/19/2018 10:30 PM, Ajit Khaparde wrote:
> Patchset against dpdk-next-net contains bug fixes,
> some code refactoring and style cleanup.
> 
> Please apply.
> 
> Ajit Khaparde (15):
>   net/bnxt: fix clear port stats
>   net/bnxt: add Tx batching support
>   net/bnxt: Rx processing optimization
>   net/bnxt: set min and max descriptor count for Tx and Rx rings
>   net/bnxt: fix dev close operation
>   net/bnxt: set ring coalesce parameters for Stratus NIC
>   net/bnxt: fix HW Tx checksum offload check
>   net/bnxt: add support for VF id 0xd800
>   net/bnxt: fix rx/tx queue start/stop operations
>   net/bnxt: code cleanup style of bnxt vnic
>   net/bnxt: filter/flow refactoring
>   net/bnxt: check filter type before clearing it
>   net/bnxt: fix set MTU
>   net/bnxt: fix incorrect IO address handling in Tx
>   net/bnxt: allocate RSS context only if RSS mode is enabled.
> 
> Jay Ding (1):
>   net/bnxt: check for invalid vnic id
> 
> Rob Miller (1):
>   net/bnxt: update HWRM API to v1.9.2.9
> 
> Scott Branden (11):
>   net/bnxt: code cleanup style of bnxt cpr
>   net/bnxt: code cleanup style of bnxt rxr
>   net/bnxt: code cleanup style of rte pmd bnxt file
>   net/bnxt: code cleanup style of bnxt stats
>   net/bnxt: code cleanup style of bnxt vnic
>   net/bnxt: code cleanup style of bnxt txq
>   net/bnxt: code cleanup style of bnxt rxq
>   net/bnxt: code cleanup style of bnxt txr
>   net/bnxt: code cleanup style of bnxt ring
>   net/bnxt: code cleanup style of bnxt ethdev
>   net/bnxt: move function check zero bytes to bnxt util.h
> 
> Somnath Kotur (2):
>   net/bnxt: Revert reset of L2 filter id in clear_ntuple_filter
>   net/bnxt: fix to move a flow to a different queue
> 
> Xiaoxin Peng (1):
>   net/bnxt: fix Tx with multiple mbuf


Hi Ajit,

./devtools/check-git-log.sh is giving some errors [1], can you please check them?


[1]
Wrong headline format:
        net/bnxt: Revert reset of L2 filter id in clear_ntuple_filter
        net/bnxt: allocate RSS context only if RSS mode is enabled.
Wrong headline uppercase:
        net/bnxt: Rx processing optimization
        net/bnxt: Revert reset of L2 filter id in clear_ntuple_filter
Wrong headline lowercase:
        net/bnxt: fix rx/tx queue start/stop operations
        net/bnxt: code cleanup style of rte pmd bnxt file
Headline too long:
        net/bnxt: set min and max descriptor count for Tx and Rx rings
        net/bnxt: Revert reset of L2 filter id in clear_ntuple_filter
Line too long:
        we are repurposing the same ntuple filter with different destination queues.
Wrong tag:
        Fixes: 893074951314 (net/bnxt: free memory in close operation)
        Fixes: 1383434c9089("net/bnxt: reset L2 filter id once filter is freed")
Wrong 'Fixes' reference:
        Fixes: 893074951314 (net/bnxt: free memory in close operation)
        Fixes: 1383434c9089("net/bnxt: reset L2 filter id once filter is freed")
Is it candidate for Cc: stable@dpdk.org backport?
        net/bnxt: fix rx/tx queue start/stop operations
        net/bnxt: Revert reset of L2 filter id in clear_ntuple_filter
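
For reference, the expected "Fixes" tag format (used correctly elsewhere in
this series) is the abbreviated commit hash, a space, and the original commit
subject in parentheses and double quotes, for example:

	Fixes: 893074951314 ("net/bnxt: free memory in close operation")
	Fixes: 1383434c9089 ("net/bnxt: reset L2 filter id once filter is freed")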

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [dpdk-dev] [PATCH 05/31] net/bnxt: fix dev close operation
  2018-06-19 21:30 ` [dpdk-dev] [PATCH 05/31] net/bnxt: fix dev close operation Ajit Khaparde
@ 2018-06-26 15:28   ` Ferruh Yigit
  2018-06-28 20:16     ` Ajit Khaparde
  0 siblings, 1 reply; 73+ messages in thread
From: Ferruh Yigit @ 2018-06-26 15:28 UTC (permalink / raw)
  To: Ajit Khaparde, dev; +Cc: stable

On 6/19/2018 10:30 PM, Ajit Khaparde wrote:
> We are not cleaning up all the memory and also not unregistering
> the driver during device close operation. This patch fixes the issue.
> 
> Fixes: 893074951314 (net/bnxt: free memory in close operation)
> Cc: stable@dpdk.org
> 
> Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>

<...>

> @@ -3408,13 +3410,15 @@ bnxt_dev_init(struct rte_eth_dev *eth_dev)
>  }
>  
>  static int
> -bnxt_dev_uninit(struct rte_eth_dev *eth_dev) {
> +bnxt_dev_uninit(struct rte_eth_dev *eth_dev)
> +{
>  	struct bnxt *bp = eth_dev->data->dev_private;
>  	int rc;
>  
>  	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
>  		return -EPERM;
>  
> +	PMD_DRV_LOG(INFO, "Calling Device uninit\n");

This looks like it can be a debug message, what do you think?

<...>

> @@ -3456,7 +3469,7 @@ static int bnxt_pci_remove(struct rte_pci_device *pci_dev)
>  static struct rte_pci_driver bnxt_rte_pmd = {
>  	.id_table = bnxt_pci_id_map,
>  	.drv_flags = RTE_PCI_DRV_NEED_MAPPING |
> -		RTE_PCI_DRV_INTR_LSC,
> +		RTE_PCI_DRV_INTR_LSC | RTE_PCI_DRV_INTR_RMV,

Is the Remove interrupt really supported? I can't find the related code in the driver.

You need to call _rte_eth_dev_callback_process() for RTE_ETH_EVENT_INTR_RMV
where you handle the interrupt.

And announce the feature "Removal event" in bnxt.ini
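
To illustrate the expected plumbing, here is a minimal, hypothetical sketch
(the handler name and the removal-detection condition are placeholders, not
the bnxt implementation) of a PMD reporting the removal event from its
interrupt handler:

	#include <stdbool.h>
	#include <rte_ethdev_driver.h>

	/* Hypothetical interrupt handler: how the driver decides the device
	 * has been removed is device specific and only sketched here. */
	static void
	example_dev_interrupt_handler(void *param)
	{
		struct rte_eth_dev *eth_dev = param;
		bool device_removed = false; /* derived from HW/interrupt status */

		/* ... read and decode the interrupt cause here ... */

		if (device_removed)
			_rte_eth_dev_callback_process(eth_dev,
						      RTE_ETH_EVENT_INTR_RMV,
						      NULL);
	}

Only with such a notification path in place does advertising
RTE_PCI_DRV_INTR_RMV (and the "Removal event" feature) make sense.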

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [dpdk-dev] [PATCH 08/31] net/bnxt: add support for VF id 0xd800
  2018-06-19 21:30 ` [dpdk-dev] [PATCH 08/31] net/bnxt: add support for VF id 0xd800 Ajit Khaparde
@ 2018-06-26 15:28   ` Ferruh Yigit
  2018-06-28 20:14     ` Ajit Khaparde
  0 siblings, 1 reply; 73+ messages in thread
From: Ferruh Yigit @ 2018-06-26 15:28 UTC (permalink / raw)
  To: Ajit Khaparde, dev

On 6/19/2018 10:30 PM, Ajit Khaparde wrote:
> Add support for StingRay VF device 0xd800

Can you please document the newly supported device in doc/guides/nics/bnxt.rst?

> Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
> ---
>  drivers/net/bnxt/bnxt_ethdev.c | 5 ++++-
>  1 file changed, 4 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
> index 1b52425e6..5d7f29cf4 100644
> --- a/drivers/net/bnxt/bnxt_ethdev.c
> +++ b/drivers/net/bnxt/bnxt_ethdev.c
> @@ -73,6 +73,7 @@ int bnxt_logtype_driver;
>  #define BROADCOM_DEV_ID_58802 0xd802
>  #define BROADCOM_DEV_ID_58804 0xd804
>  #define BROADCOM_DEV_ID_58808 0x16f0
> +#define BROADCOM_DEV_ID_58802_VF 0xd800
>  
>  static const struct rte_pci_id bnxt_pci_id_map[] = {
>  	{ RTE_PCI_DEVICE(PCI_VENDOR_ID_BROADCOM,
> @@ -116,6 +117,7 @@ static const struct rte_pci_id bnxt_pci_id_map[] = {
>  	{ RTE_PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, BROADCOM_DEV_ID_58802) },
>  	{ RTE_PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, BROADCOM_DEV_ID_58804) },
>  	{ RTE_PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, BROADCOM_DEV_ID_58808) },
> +	{ RTE_PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, BROADCOM_DEV_ID_58802_VF) },
>  	{ .vendor_id = 0, /* sentinel */ },
>  };
>  
> @@ -3068,7 +3070,8 @@ static bool bnxt_vf_pciid(uint16_t id)
>  	    id == BROADCOM_DEV_ID_5741X_VF ||
>  	    id == BROADCOM_DEV_ID_57414_VF ||
>  	    id == BROADCOM_DEV_ID_STRATUS_NIC_VF1 ||
> -	    id == BROADCOM_DEV_ID_STRATUS_NIC_VF2)
> +	    id == BROADCOM_DEV_ID_STRATUS_NIC_VF2 ||
> +	    id == BROADCOM_DEV_ID_58802_VF)
>  		return true;
>  	return false;
>  }
> 

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [dpdk-dev] [PATCH 11/31] net/bnxt: code cleanup style of bnxt rxr
  2018-06-19 21:30 ` [dpdk-dev] [PATCH 11/31] net/bnxt: code cleanup style of bnxt rxr Ajit Khaparde
@ 2018-06-26 15:29   ` Ferruh Yigit
  0 siblings, 0 replies; 73+ messages in thread
From: Ferruh Yigit @ 2018-06-26 15:29 UTC (permalink / raw)
  To: dev, Scott Branden; +Cc: Ajit Khaparde

On 6/19/2018 10:30 PM, Ajit Khaparde wrote:
> From: Scott Branden <scott.branden@broadcom.com>
> 
> Cleanup alignment, brackets, debug string style of bnxt_rxr
> 
> Signed-off-by: Scott Branden <scott.branden@broadcom.com>
> Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
> Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
> ---
>  drivers/net/bnxt/bnxt_rxr.c | 58 ++++++++++++++++++++++++---------------------
>  drivers/net/bnxt/bnxt_rxr.h |  6 +++--
>  2 files changed, 35 insertions(+), 29 deletions(-)
> 
> diff --git a/drivers/net/bnxt/bnxt_rxr.c b/drivers/net/bnxt/bnxt_rxr.c
> index e4d473f4b..13928c388 100644
> --- a/drivers/net/bnxt/bnxt_rxr.c
> +++ b/drivers/net/bnxt/bnxt_rxr.c
> @@ -72,7 +72,6 @@ static inline int bnxt_alloc_ag_data(struct bnxt_rx_queue *rxq,
>  	if (rx_buf == NULL)
>  		PMD_DRV_LOG(ERR, "Jumbo Frame. rx_buf is NULL\n");
>  
> -
>  	rx_buf->mbuf = mbuf;
>  	mbuf->data_off = RTE_PKTMBUF_HEADROOM;
>  
> @@ -82,7 +81,7 @@ static inline int bnxt_alloc_ag_data(struct bnxt_rx_queue *rxq,
>  }
>  
>  static inline void bnxt_reuse_rx_mbuf(struct bnxt_rx_ring_info *rxr,
> -			       struct rte_mbuf *mbuf)
> +				      struct rte_mbuf *mbuf)

Hi Scott,

Since this patch is only for syntax updates, should we expect this to follow
DPDK coding convention [1]?

It seems you have aligned the new line to the parenthesis, but according to the
DPDK coding style the new line should have a tab:
http://doc.dpdk.org/guides/contributing/coding_style.html#definitions
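
As a minimal illustration of the difference (the function below is a stand-in,
not actual bnxt code), the review above asks for the continuation line to be
indented with a tab rather than aligned under the opening parenthesis:

	#include <rte_mbuf.h>

	struct example_rx_ring_info;	/* opaque stand-in type, illustration only */

	/* Continuation line indented with a tab, instead of being aligned
	 * to the opening parenthesis of the first line. */
	static inline void
	example_reuse_rx_mbuf(struct example_rx_ring_info *rxr,
		struct rte_mbuf *mbuf)
	{
		(void)rxr;	/* body intentionally empty in this sketch */
		(void)mbuf;
	}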

This is the same for the other syntax fix patches.

Note: the coding style discussion never ends and I don't have any intention
to start one. I believe what matters is consistency, so please check the coding
style documentation before sending syntax fix patches.

Also, I believe syntax fix patches are not the best idea unless they fix a real
readability issue. They clutter the git history and make it harder to backport
fixes. Especially when a patch does not contribute to consistency either, I
suggest dropping it.

[1]
http://doc.dpdk.org/guides/contributing/coding_style.html

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [dpdk-dev] [PATCH 22/31] net/bnxt: filter/flow refactoring
  2018-06-19 21:30 ` [dpdk-dev] [PATCH 22/31] net/bnxt: filter/flow refactoring Ajit Khaparde
@ 2018-06-26 15:29   ` Ferruh Yigit
  0 siblings, 0 replies; 73+ messages in thread
From: Ferruh Yigit @ 2018-06-26 15:29 UTC (permalink / raw)
  To: Ajit Khaparde, dev; +Cc: Michael Wildt, Scott Branden

On 6/19/2018 10:30 PM, Ajit Khaparde wrote:
> In preparation of more rte_flow support it has been decided to
> separate out filter and flow into their own files. Functionally the
> same.
> 
> Signed-off-by: Michael Wildt <michael.wildt@broadcom.com>
> Signed-off-by: Scott Branden <scott.branden@broadcom.com>
> Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>

The preferred patch title format is to start with a verb, for example:
net/bnxt: refactor filter/flow

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [dpdk-dev] [PATCH 23/31] net/bnxt: check for invalid vnic id
  2018-06-19 21:30 ` [dpdk-dev] [PATCH 23/31] net/bnxt: check for invalid vnic id Ajit Khaparde
@ 2018-06-26 15:30   ` Ferruh Yigit
  0 siblings, 0 replies; 73+ messages in thread
From: Ferruh Yigit @ 2018-06-26 15:30 UTC (permalink / raw)
  To: Ajit Khaparde, dev; +Cc: Jay Ding

On 6/19/2018 10:30 PM, Ajit Khaparde wrote:
> From: Jay Ding <jay.ding@broadcom.com>
> 
> Add checking for VNIC id before sending message to firmware in
> bnxt_hwrm_vnic_plcmode_cfg().

Can you please add more information: what does it mean to have
fw_vnic_id == INVALID_HW_RING_ID, and what is the expected result
without this check?

If this is fixing an issue, please update the commit log accordingly with proper
Fixes/stable tags so that the patch can be backported to stable trees.

> 
> Signed-off-by: Jay Ding <jay.ding@broadcom.com>
> Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
> Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
> ---
>  drivers/net/bnxt/bnxt_hwrm.c | 5 +++++
>  1 file changed, 5 insertions(+)
> 
> diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c
> index 64687a69b..910129f12 100644
> --- a/drivers/net/bnxt/bnxt_hwrm.c
> +++ b/drivers/net/bnxt/bnxt_hwrm.c
> @@ -1560,6 +1560,11 @@ int bnxt_hwrm_vnic_plcmode_cfg(struct bnxt *bp,
>  	struct hwrm_vnic_plcmodes_cfg_output *resp = bp->hwrm_cmd_resp_addr;
>  	uint16_t size;
>  
> +	if (vnic->fw_vnic_id == INVALID_HW_RING_ID) {
> +		PMD_DRV_LOG(DEBUG, "VNIC ID %x\n", vnic->fw_vnic_id);
> +		return rc;
> +	}
> +
>  	HWRM_PREP(req, VNIC_PLCMODES_CFG);
>  
>  	req.flags = rte_cpu_to_le_32(
> 

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [dpdk-dev] [PATCH 24/31] net/bnxt: update HWRM API to v1.9.2.9
  2018-06-19 21:30 ` [dpdk-dev] [PATCH 24/31] net/bnxt: update HWRM API to v1.9.2.9 Ajit Khaparde
@ 2018-06-26 15:30   ` Ferruh Yigit
  2018-06-28 20:14     ` Ajit Khaparde
  0 siblings, 1 reply; 73+ messages in thread
From: Ferruh Yigit @ 2018-06-26 15:30 UTC (permalink / raw)
  To: Ajit Khaparde, dev; +Cc: Rob Miller

On 6/19/2018 10:30 PM, Ajit Khaparde wrote:
> From: Rob Miller <rmiller@broadcom.com>
> 
> update HWRM API to v1.9.2.9

Does it make sense to update release notes to document this update?

> 
> Signed-off-by: Rob Miller <rob.miller@broadcom.com>
> Reviewed-by: Scott Branden <scott.branden@broadcom.com>
> Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
> Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
> Signed-off-by: Rob Miller <rmiller@broadcom.com>

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [dpdk-dev] [PATCH 27/31] net/bnxt: check filter type before clearing it
  2018-06-19 21:30 ` [dpdk-dev] [PATCH 27/31] net/bnxt: check filter type before clearing it Ajit Khaparde
@ 2018-06-26 15:30   ` Ferruh Yigit
  0 siblings, 0 replies; 73+ messages in thread
From: Ferruh Yigit @ 2018-06-26 15:30 UTC (permalink / raw)
  To: Ajit Khaparde, dev

On 6/19/2018 10:30 PM, Ajit Khaparde wrote:
> In bnxt_free_filter_mem(), check the filter type and call the
> appropriate HWRM command to clear the filter from HW.

Just to double check, is this check to fix an issue? If so do you want to
backport the fix into stable trees?

> 
> Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>

<...>

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [dpdk-dev] [PATCH 28/31] net/bnxt: fix set MTU
  2018-06-19 21:30 ` [dpdk-dev] [PATCH 28/31] net/bnxt: fix set MTU Ajit Khaparde
@ 2018-06-26 15:30   ` Ferruh Yigit
  2018-06-28 20:13     ` Ajit Khaparde
  0 siblings, 1 reply; 73+ messages in thread
From: Ferruh Yigit @ 2018-06-26 15:30 UTC (permalink / raw)
  To: Ajit Khaparde, dev; +Cc: stable

On 6/19/2018 10:30 PM, Ajit Khaparde wrote:
> There is no need to update hardware configuration if new MTU is
> not greater than the max data the mbuf can accommodate.

If the app sets a smaller MTU, won't it expect the HW to drop received packets
bigger than the provided size? Will this logic work if the HW is not updated?

> 
> Fixes: daef48efe5e5 ("net/bnxt: support set MTU")
> Cc: stable@dpdk.org
> 
> Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
> ---
>  drivers/net/bnxt/bnxt_ethdev.c | 12 +++++++++---
>  1 file changed, 9 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
> index 9cfa43778..1145bc195 100644
> --- a/drivers/net/bnxt/bnxt_ethdev.c
> +++ b/drivers/net/bnxt/bnxt_ethdev.c
> @@ -1597,6 +1597,7 @@ static int bnxt_mtu_set_op(struct rte_eth_dev *eth_dev, uint16_t new_mtu)
>  
>  	for (i = 0; i < bp->nr_vnics; i++) {
>  		struct bnxt_vnic_info *vnic = &bp->vnic_info[i];
> +		uint16_t size = 0;
>  
>  		vnic->mru = bp->eth_dev->data->mtu + ETHER_HDR_LEN +
>  					ETHER_CRC_LEN + VLAN_TAG_SIZE * 2;
> @@ -1604,9 +1605,14 @@ static int bnxt_mtu_set_op(struct rte_eth_dev *eth_dev, uint16_t new_mtu)
>  		if (rc)
>  			break;
>  
> -		rc = bnxt_hwrm_vnic_plcmode_cfg(bp, vnic);
> -		if (rc)
> -			return rc;
> +		size = rte_pktmbuf_data_room_size(bp->rx_queues[0]->mb_pool);
> +		size -= RTE_PKTMBUF_HEADROOM;
> +
> +		if (size < new_mtu) {
> +			rc = bnxt_hwrm_vnic_plcmode_cfg(bp, vnic);
> +			if (rc)
> +				return rc;
> +		}
>  	}
>  
>  	return rc;
> 

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [dpdk-dev] [PATCH 28/31] net/bnxt: fix set MTU
  2018-06-26 15:30   ` Ferruh Yigit
@ 2018-06-28 20:13     ` Ajit Khaparde
  0 siblings, 0 replies; 73+ messages in thread
From: Ajit Khaparde @ 2018-06-28 20:13 UTC (permalink / raw)
  To: Ferruh Yigit; +Cc: dev, dpdk stable

On Tue, Jun 26, 2018 at 8:30 AM, Ferruh Yigit <ferruh.yigit@intel.com>
wrote:

> On 6/19/2018 10:30 PM, Ajit Khaparde wrote:
> > There is no need to update hardware configuration if new MTU is
> > not greater than the max data the mbuf can accommodate.
>
> If the app sets a smaller MTU, won't it expect the HW to drop received
> packets bigger than the provided size? Will this logic work if the HW is
> not updated?
>
Actually, the commit message needs to be rephrased.
The behavior you mentioned will not be impacted.
The hardware will honor the configured MTU.

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [dpdk-dev] [PATCH 24/31] net/bnxt: update HWRM API to v1.9.2.9
  2018-06-26 15:30   ` Ferruh Yigit
@ 2018-06-28 20:14     ` Ajit Khaparde
  0 siblings, 0 replies; 73+ messages in thread
From: Ajit Khaparde @ 2018-06-28 20:14 UTC (permalink / raw)
  To: Ferruh Yigit; +Cc: dev, Rob Miller

On Tue, Jun 26, 2018 at 8:30 AM, Ferruh Yigit <ferruh.yigit@intel.com>
wrote:

> On 6/19/2018 10:30 PM, Ajit Khaparde wrote:
> > From: Rob Miller <rmiller@broadcom.com>
> >
> > update HWRM API to v1.9.2.9
>
> Does it make sense to update release notes to document this update?
>
Since there are a few more weeks until the release, there is a good chance
the HWRM version will be updated again. I will update the release notes as
we get close to the release, depending on what version we end up with.
I hope that is ok?

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [dpdk-dev] [PATCH 08/31] net/bnxt: add support for VF id 0xd800
  2018-06-26 15:28   ` Ferruh Yigit
@ 2018-06-28 20:14     ` Ajit Khaparde
  0 siblings, 0 replies; 73+ messages in thread
From: Ajit Khaparde @ 2018-06-28 20:14 UTC (permalink / raw)
  To: Ferruh Yigit; +Cc: dev

On Tue, Jun 26, 2018 at 8:28 AM, Ferruh Yigit <ferruh.yigit@intel.com>
wrote:

> On 6/19/2018 10:30 PM, Ajit Khaparde wrote:
> > Add support for StingRay VF device 0xd800
>
> Can you please document the newly supported device in doc/guides/nics/bnxt.rst?
>
I think we don't need it. We are just adding another VF device id.
The actual device family is already listed in the document.

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [dpdk-dev] [PATCH 00/31] bnxt patchset
  2018-06-26 15:27 ` [dpdk-dev] [PATCH 00/31] bnxt patchset Ferruh Yigit
@ 2018-06-28 20:15   ` Ajit Khaparde
  2018-06-28 20:15   ` [dpdk-dev] [PATCH v2 00/23] " Ajit Khaparde
  1 sibling, 0 replies; 73+ messages in thread
From: Ajit Khaparde @ 2018-06-28 20:15 UTC (permalink / raw)
  To: Ferruh Yigit; +Cc: dev

On Tue, Jun 26, 2018 at 8:27 AM, Ferruh Yigit <ferruh.yigit@intel.com>
wrote:

> On 6/19/2018 10:30 PM, Ajit Khaparde wrote:
> > Patchset against dpdk-next-net contains bug fixes,
> > some code refactoring and style cleanup.
> >
> > Please apply.
> >
> > Ajit Khaparde (15):
> >   net/bnxt: fix clear port stats
> >   net/bnxt: add Tx batching support
> >   net/bnxt: Rx processing optimization
> >   net/bnxt: set min and max descriptor count for Tx and Rx rings
> >   net/bnxt: fix dev close operation
> >   net/bnxt: set ring coalesce parameters for Stratus NIC
> >   net/bnxt: fix HW Tx checksum offload check
> >   net/bnxt: add support for VF id 0xd800
> >   net/bnxt: fix rx/tx queue start/stop operations
> >   net/bnxt: code cleanup style of bnxt vnic
> >   net/bnxt: filter/flow refactoring
> >   net/bnxt: check filter type before clearing it
> >   net/bnxt: fix set MTU
> >   net/bnxt: fix incorrect IO address handling in Tx
> >   net/bnxt: allocate RSS context only if RSS mode is enabled.
> >
> > Jay Ding (1):
> >   net/bnxt: check for invalid vnic id
> >
> > Rob Miller (1):
> >   net/bnxt: update HWRM API to v1.9.2.9
> >
> > Scott Branden (11):
> >   net/bnxt: code cleanup style of bnxt cpr
> >   net/bnxt: code cleanup style of bnxt rxr
> >   net/bnxt: code cleanup style of rte pmd bnxt file
> >   net/bnxt: code cleanup style of bnxt stats
> >   net/bnxt: code cleanup style of bnxt vnic
> >   net/bnxt: code cleanup style of bnxt txq
> >   net/bnxt: code cleanup style of bnxt rxq
> >   net/bnxt: code cleanup style of bnxt txr
> >   net/bnxt: code cleanup style of bnxt ring
> >   net/bnxt: code cleanup style of bnxt ethdev
> >   net/bnxt: move function check zero bytes to bnxt util.h
> >
> > Somnath Kotur (2):
> >   net/bnxt: Revert reset of L2 filter id in clear_ntuple_filter
> >   net/bnxt: fix to move a flow to a different queue
> >
> > Xiaoxin Peng (1):
> >   net/bnxt: fix Tx with multiple mbuf
>
>
> Hi Ajit,
>
>
> ./devtools/check-git-log.sh is giving some errors [1], can you please
> check them?
>
Took care of this and other comments as well.

Thanks

^ permalink raw reply	[flat|nested] 73+ messages in thread

* [dpdk-dev] [PATCH v2 00/23] bnxt patchset
  2018-06-26 15:27 ` [dpdk-dev] [PATCH 00/31] bnxt patchset Ferruh Yigit
  2018-06-28 20:15   ` Ajit Khaparde
@ 2018-06-28 20:15   ` Ajit Khaparde
  2018-06-28 20:15     ` [dpdk-dev] [PATCH v2 01/23] net/bnxt: fix clear port stats Ajit Khaparde
                       ` (23 more replies)
  1 sibling, 24 replies; 73+ messages in thread
From: Ajit Khaparde @ 2018-06-28 20:15 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit

Patchset against dpdk-next-net. Please apply.

v1->v2:
Takes care of the various comments made in the previous version.
I am dropping the style changes for now. I will send them later
after addressing the coding convention issues.


Ajit Khaparde (16):
  net/bnxt: fix clear port stats
  net/bnxt: add Tx batching support
  net/bnxt: optimize receive processing code
  net/bnxt: set MIN/MAX descriptor count for Tx and Rx Rings
  net/bnxt: fix dev close operation
  net/bnxt: set ring coalesce parameters for Stratus NIC
  net/bnxt: fix HW Tx checksum offload check
  net/bnxt: add support for VF id 0xd800
  net/bnxt: fix Rx/Tx queue start/stop operations
  net/bnxt: refactor filter/flow
  net/bnxt: check filter type before clearing it
  net/bnxt: fix set MTU
  net/bnxt: fix incorrect IO address handling in Tx
  net/bnxt: allocate RSS context only if RSS mode is enabled
  net/bnxt: check VF resources if resource manager is enabled
  net/bnxt: fix Rx ring count limitation

Jay Ding (1):
  net/bnxt: check for invalid vnic id

Rob Miller (1):
  net/bnxt: update HWRM API to v1.9.2.9

Scott Branden (1):
  net/bnxt: move function check zero bytes to bnxt util.h

Somnath Kotur (3):
  net/bnxt: revert reset of L2 filter id
  net/bnxt: fix to move a flow to a different queue
  net/bnxt: use correct flags during VLAN configuration

Xiaoxin Peng (1):
  net/bnxt: fix Tx with multiple mbuf

 drivers/net/bnxt/Makefile              |    2 +
 drivers/net/bnxt/bnxt.h                |   32 +
 drivers/net/bnxt/bnxt_cpr.h            |   12 +
 drivers/net/bnxt/bnxt_ethdev.c         |  120 +++-
 drivers/net/bnxt/bnxt_filter.c         | 1090 +----------------------------
 drivers/net/bnxt/bnxt_filter.h         |    1 -
 drivers/net/bnxt/bnxt_flow.c           | 1171 ++++++++++++++++++++++++++++++++
 drivers/net/bnxt/bnxt_hwrm.c           |  224 ++++--
 drivers/net/bnxt/bnxt_hwrm.h           |    9 +-
 drivers/net/bnxt/bnxt_ring.c           |  115 ++++
 drivers/net/bnxt/bnxt_ring.h           |    1 +
 drivers/net/bnxt/bnxt_rxq.c            |   54 +-
 drivers/net/bnxt/bnxt_rxq.h            |    4 +
 drivers/net/bnxt/bnxt_rxr.c            |   26 +-
 drivers/net/bnxt/bnxt_rxr.h            |    2 +
 drivers/net/bnxt/bnxt_txq.h            |    1 +
 drivers/net/bnxt/bnxt_txr.c            |  156 +++--
 drivers/net/bnxt/bnxt_txr.h            |   10 +
 drivers/net/bnxt/bnxt_util.c           |   18 +
 drivers/net/bnxt/bnxt_util.h           |   11 +
 drivers/net/bnxt/bnxt_vnic.c           |    5 +-
 drivers/net/bnxt/bnxt_vnic.h           |    6 +-
 drivers/net/bnxt/hsi_struct_def_dpdk.h |  113 ++-
 23 files changed, 1952 insertions(+), 1231 deletions(-)
 create mode 100644 drivers/net/bnxt/bnxt_flow.c
 create mode 100644 drivers/net/bnxt/bnxt_util.c
 create mode 100644 drivers/net/bnxt/bnxt_util.h

-- 
2.15.2 (Apple Git-101.1)

^ permalink raw reply	[flat|nested] 73+ messages in thread

* [dpdk-dev] [PATCH v2 01/23] net/bnxt: fix clear port stats
  2018-06-28 20:15   ` [dpdk-dev] [PATCH v2 00/23] " Ajit Khaparde
@ 2018-06-28 20:15     ` Ajit Khaparde
  2018-06-28 20:15     ` [dpdk-dev] [PATCH v2 02/23] net/bnxt: add Tx batching support Ajit Khaparde
                       ` (22 subsequent siblings)
  23 siblings, 0 replies; 73+ messages in thread
From: Ajit Khaparde @ 2018-06-28 20:15 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, stable

PORT_CLR_STATS is not allowed for VFs, NPAR, MultiHost functions
or when SR-IOV is enabled.
Don't send the HWRM command in such cases.

Fixes: bfb9c2260be2 ("net/bnxt: support xstats get/reset")
Cc: stable@dpdk.org

Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
--
v1->v2: Fix a checkpatch warning.
---
 drivers/net/bnxt/bnxt.h      | 4 ++++
 drivers/net/bnxt/bnxt_hwrm.c | 5 ++++-
 2 files changed, 8 insertions(+), 1 deletion(-)

diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h
index afaaf8c41..d19aea569 100644
--- a/drivers/net/bnxt/bnxt.h
+++ b/drivers/net/bnxt/bnxt.h
@@ -98,6 +98,7 @@ struct bnxt_child_vf_info {
 struct bnxt_pf_info {
 #define BNXT_FIRST_PF_FID	1
 #define BNXT_MAX_VFS(bp)	(bp->pf.max_vfs)
+#define BNXT_TOTAL_VFS(bp)	((bp)->pf.total_vfs)
 #define BNXT_FIRST_VF_FID	128
 #define BNXT_PF_RINGS_USED(bp)	bnxt_get_num_queues(bp)
 #define BNXT_PF_RINGS_AVAIL(bp)	(bp->pf.max_cp_rings - BNXT_PF_RINGS_USED(bp))
@@ -105,6 +106,9 @@ struct bnxt_pf_info {
 	uint16_t		first_vf_id;
 	uint16_t		active_vfs;
 	uint16_t		max_vfs;
+	uint16_t		total_vfs; /* Total VFs possible.
+					    * Not necessarily enabled.
+					    */
 	uint32_t		func_cfg_flags;
 	void			*vf_req_buf;
 	rte_iova_t		vf_req_buf_dma_addr;
diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c
index d6fdc1b88..f441d4610 100644
--- a/drivers/net/bnxt/bnxt_hwrm.c
+++ b/drivers/net/bnxt/bnxt_hwrm.c
@@ -506,6 +506,7 @@ static int __bnxt_hwrm_func_qcaps(struct bnxt *bp)
 	if (BNXT_PF(bp)) {
 		bp->pf.port_id = resp->port_id;
 		bp->pf.first_vf_id = rte_le_to_cpu_16(resp->first_vf_id);
+		bp->pf.total_vfs = rte_le_to_cpu_16(resp->max_vfs);
 		new_max_vfs = bp->pdev->max_vfs;
 		if (new_max_vfs != bp->pf.max_vfs) {
 			if (bp->pf.vf_info)
@@ -3151,7 +3152,9 @@ int bnxt_hwrm_port_clr_stats(struct bnxt *bp)
 	struct bnxt_pf_info *pf = &bp->pf;
 	int rc;
 
-	if (!(bp->flags & BNXT_FLAG_PORT_STATS))
+	/* Not allowed on NS2 device, NPAR, MultiHost, VF */
+	if (!(bp->flags & BNXT_FLAG_PORT_STATS) || BNXT_VF(bp) ||
+	    BNXT_NPAR(bp) || BNXT_MH(bp) || BNXT_TOTAL_VFS(bp))
 		return 0;
 
 	HWRM_PREP(req, PORT_CLR_STATS);
-- 
2.15.2 (Apple Git-101.1)

^ permalink raw reply	[flat|nested] 73+ messages in thread

* [dpdk-dev] [PATCH v2 02/23] net/bnxt: add Tx batching support
  2018-06-28 20:15   ` [dpdk-dev] [PATCH v2 00/23] " Ajit Khaparde
  2018-06-28 20:15     ` [dpdk-dev] [PATCH v2 01/23] net/bnxt: fix clear port stats Ajit Khaparde
@ 2018-06-28 20:15     ` Ajit Khaparde
  2018-06-28 20:15     ` [dpdk-dev] [PATCH v2 03/23] net/bnxt: optimize receive processing code Ajit Khaparde
                       ` (21 subsequent siblings)
  23 siblings, 0 replies; 73+ messages in thread
From: Ajit Khaparde @ 2018-06-28 20:15 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit

Batch more than one Tx request such that only one completion
is generated by the HW. We request a Tx completion for the first
and last Tx request in the batch.

Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/bnxt_cpr.h | 12 ++++++
 drivers/net/bnxt/bnxt_txq.h |  1 +
 drivers/net/bnxt/bnxt_txr.c | 97 +++++++++++++++++++++++++++++----------------
 3 files changed, 75 insertions(+), 35 deletions(-)

diff --git a/drivers/net/bnxt/bnxt_cpr.h b/drivers/net/bnxt/bnxt_cpr.h
index 6c1e6d2b0..c7af56983 100644
--- a/drivers/net/bnxt/bnxt_cpr.h
+++ b/drivers/net/bnxt/bnxt_cpr.h
@@ -22,12 +22,20 @@
 #define ADV_RAW_CMP(idx, n)	((idx) + (n))
 #define NEXT_RAW_CMP(idx)	ADV_RAW_CMP(idx, 1)
 #define RING_CMP(ring, idx)	((idx) & (ring)->ring_mask)
+#define RING_CMPL(ring_mask, idx)	((idx) & (ring_mask))
 #define NEXT_CMP(idx)		RING_CMP(ADV_RAW_CMP(idx, 1))
 #define FLIP_VALID(cons, mask, val)	((cons) >= (mask) ? !(val) : (val))
 
 #define DB_CP_REARM_FLAGS	(DB_KEY_CP | DB_IDX_VALID)
 #define DB_CP_FLAGS		(DB_KEY_CP | DB_IDX_VALID | DB_IRQ_DIS)
 
+#define NEXT_CMPL(cpr, idx, v, inc)	do { \
+	(idx) += (inc); \
+	if (unlikely((idx) == (cpr)->cp_ring_struct->ring_size)) { \
+		(v) = !(v); \
+		(idx) = 0; \
+	} \
+} while (0)
 #define B_CP_DB_REARM(cpr, raw_cons)					\
 	rte_write32((DB_CP_REARM_FLAGS |				\
 		    RING_CMP(((cpr)->cp_ring_struct), raw_cons)),	\
@@ -50,6 +58,10 @@
 	rte_write32((DB_CP_FLAGS |					\
 		    RING_CMP(((cpr)->cp_ring_struct), raw_cons)),	\
 		    ((cpr)->cp_doorbell))
+#define B_CP_DB(cpr, raw_cons, ring_mask)				\
+	rte_write32((DB_CP_FLAGS |					\
+		    RING_CMPL((ring_mask), raw_cons)),	\
+		    ((cpr)->cp_doorbell))
 
 struct bnxt_ring;
 struct bnxt_cp_ring_info {
diff --git a/drivers/net/bnxt/bnxt_txq.h b/drivers/net/bnxt/bnxt_txq.h
index 720ca90cf..f2c712a75 100644
--- a/drivers/net/bnxt/bnxt_txq.h
+++ b/drivers/net/bnxt/bnxt_txq.h
@@ -24,6 +24,7 @@ struct bnxt_tx_queue {
 	uint8_t			wthresh; /* Write-back threshold reg */
 	uint32_t		ctx_curr; /* Hardware context states */
 	uint8_t			tx_deferred_start; /* not in global dev start */
+	uint8_t			cmpl_next; /* Next BD to trigger a compl */
 
 	struct bnxt		*bp;
 	int			index;
diff --git a/drivers/net/bnxt/bnxt_txr.c b/drivers/net/bnxt/bnxt_txr.c
index 470fddd56..0fdf0fd08 100644
--- a/drivers/net/bnxt/bnxt_txr.c
+++ b/drivers/net/bnxt/bnxt_txr.c
@@ -114,7 +114,9 @@ static inline uint32_t bnxt_tx_avail(struct bnxt_tx_ring_info *txr)
 }
 
 static uint16_t bnxt_start_xmit(struct rte_mbuf *tx_pkt,
-				struct bnxt_tx_queue *txq)
+				struct bnxt_tx_queue *txq,
+				uint16_t *coal_pkts,
+				uint16_t *cmpl_next)
 {
 	struct bnxt_tx_ring_info *txr = txq->tx_ring;
 	struct tx_bd_long *txbd;
@@ -146,8 +148,15 @@ static uint16_t bnxt_start_xmit(struct rte_mbuf *tx_pkt,
 		return -ENOMEM;
 
 	txbd = &txr->tx_desc_ring[txr->tx_prod];
-	txbd->opaque = txr->tx_prod;
+	txbd->opaque = *coal_pkts;
 	txbd->flags_type = tx_buf->nr_bds << TX_BD_LONG_FLAGS_BD_CNT_SFT;
+	txbd->flags_type |= TX_BD_SHORT_FLAGS_COAL_NOW;
+	if (!*cmpl_next) {
+		txbd->flags_type |= TX_BD_LONG_FLAGS_NO_CMPL;
+	} else {
+		*coal_pkts = 0;
+		*cmpl_next = false;
+	}
 	txbd->len = tx_pkt->data_len;
 	if (txbd->len >= 2014)
 		txbd->flags_type |= TX_BD_LONG_FLAGS_LHINT_GTE2K;
@@ -235,7 +244,7 @@ static uint16_t bnxt_start_xmit(struct rte_mbuf *tx_pkt,
 
 		txbd = &txr->tx_desc_ring[txr->tx_prod];
 		txbd->address = rte_cpu_to_le_32(rte_mbuf_data_iova(m_seg));
-		txbd->flags_type = TX_BD_SHORT_TYPE_TX_BD_SHORT;
+		txbd->flags_type |= TX_BD_SHORT_TYPE_TX_BD_SHORT;
 		txbd->len = m_seg->data_len;
 
 		m_seg = m_seg->next;
@@ -278,35 +287,44 @@ static int bnxt_handle_tx_cp(struct bnxt_tx_queue *txq)
 	struct bnxt_cp_ring_info *cpr = txq->cp_ring;
 	uint32_t raw_cons = cpr->cp_raw_cons;
 	uint32_t cons;
-	int nb_tx_pkts = 0;
+	uint32_t nb_tx_pkts = 0;
 	struct tx_cmpl *txcmp;
+	struct cmpl_base *cp_desc_ring = cpr->cp_desc_ring;
+	struct bnxt_ring *cp_ring_struct = cpr->cp_ring_struct;
+	uint32_t ring_mask = cp_ring_struct->ring_mask;
+	uint32_t opaque = 0;
 
-	if ((txq->tx_ring->tx_ring_struct->ring_size -
-			(bnxt_tx_avail(txq->tx_ring))) >
-			txq->tx_free_thresh) {
-		while (1) {
-			cons = RING_CMP(cpr->cp_ring_struct, raw_cons);
-			txcmp = (struct tx_cmpl *)&cpr->cp_desc_ring[cons];
-
-			if (!CMP_VALID(txcmp, raw_cons, cpr->cp_ring_struct))
-				break;
-			cpr->valid = FLIP_VALID(cons,
-						cpr->cp_ring_struct->ring_mask,
-						cpr->valid);
-
-			if (CMP_TYPE(txcmp) == TX_CMPL_TYPE_TX_L2)
-				nb_tx_pkts++;
-			else
-				RTE_LOG_DP(DEBUG, PMD,
-						"Unhandled CMP type %02x\n",
-						CMP_TYPE(txcmp));
-			raw_cons = NEXT_RAW_CMP(raw_cons);
-		}
-		if (nb_tx_pkts)
-			bnxt_tx_cmp(txq, nb_tx_pkts);
+	if (((txq->tx_ring->tx_prod - txq->tx_ring->tx_cons) &
+		txq->tx_ring->tx_ring_struct->ring_mask) < txq->tx_free_thresh)
+		return 0;
+
+	do {
+		cons = RING_CMPL(ring_mask, raw_cons);
+		txcmp = (struct tx_cmpl *)&cpr->cp_desc_ring[cons];
+		rte_prefetch_non_temporal(&cp_desc_ring[(cons + 2) &
+							ring_mask]);
+
+		if (!CMPL_VALID(txcmp, cpr->valid))
+			break;
+		opaque = rte_cpu_to_le_32(txcmp->opaque);
+		NEXT_CMPL(cpr, cons, cpr->valid, 1);
+		rte_prefetch0(&cp_desc_ring[cons]);
+
+		if (CMP_TYPE(txcmp) == TX_CMPL_TYPE_TX_L2)
+			nb_tx_pkts += opaque;
+		else
+			RTE_LOG_DP(ERR, PMD,
+					"Unhandled CMP type %02x\n",
+					CMP_TYPE(txcmp));
+		raw_cons = cons;
+	} while (nb_tx_pkts < ring_mask);
+
+	if (nb_tx_pkts) {
+		bnxt_tx_cmp(txq, nb_tx_pkts);
 		cpr->cp_raw_cons = raw_cons;
-		B_CP_DIS_DB(cpr, cpr->cp_raw_cons);
+		B_CP_DB(cpr, cpr->cp_raw_cons, ring_mask);
 	}
+
 	return nb_tx_pkts;
 }
 
@@ -315,8 +333,8 @@ uint16_t bnxt_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 {
 	struct bnxt_tx_queue *txq = tx_queue;
 	uint16_t nb_tx_pkts = 0;
-	uint16_t db_mask = txq->tx_ring->tx_ring_struct->ring_size >> 2;
-	uint16_t last_db_mask = 0;
+	uint16_t coal_pkts = 0;
+	uint16_t cmpl_next = txq->cmpl_next;
 
 	/* Handle TX completions */
 	bnxt_handle_tx_cp(txq);
@@ -326,16 +344,25 @@ uint16_t bnxt_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 		PMD_DRV_LOG(DEBUG, "Tx q stopped;return\n");
 		return 0;
 	}
+
+	txq->cmpl_next = 0;
 	/* Handle TX burst request */
 	for (nb_tx_pkts = 0; nb_tx_pkts < nb_pkts; nb_tx_pkts++) {
-		if (bnxt_start_xmit(tx_pkts[nb_tx_pkts], txq)) {
+		int rc;
+
+		/* Request a completion on first and last packet */
+		cmpl_next |= (nb_pkts == nb_tx_pkts + 1);
+		coal_pkts++;
+		rc = bnxt_start_xmit(tx_pkts[nb_tx_pkts], txq,
+				&coal_pkts, &cmpl_next);
+
+		if (unlikely(rc)) {
+			/* Request a completion in next cycle */
+			txq->cmpl_next = 1;
 			break;
-		} else if ((nb_tx_pkts & db_mask) != last_db_mask) {
-			B_TX_DB(txq->tx_ring->tx_doorbell,
-					txq->tx_ring->tx_prod);
-			last_db_mask = nb_tx_pkts & db_mask;
 		}
 	}
+
 	if (nb_tx_pkts)
 		B_TX_DB(txq->tx_ring->tx_doorbell, txq->tx_ring->tx_prod);
 
-- 
2.15.2 (Apple Git-101.1)

^ permalink raw reply	[flat|nested] 73+ messages in thread

* [dpdk-dev] [PATCH v2 03/23] net/bnxt: optimize receive processing code
  2018-06-28 20:15   ` [dpdk-dev] [PATCH v2 00/23] " Ajit Khaparde
  2018-06-28 20:15     ` [dpdk-dev] [PATCH v2 01/23] net/bnxt: fix clear port stats Ajit Khaparde
  2018-06-28 20:15     ` [dpdk-dev] [PATCH v2 02/23] net/bnxt: add Tx batching support Ajit Khaparde
@ 2018-06-28 20:15     ` Ajit Khaparde
  2018-06-28 20:15     ` [dpdk-dev] [PATCH v2 04/23] net/bnxt: set MIN/MAX descriptor count for Tx and Rx Rings Ajit Khaparde
                       ` (20 subsequent siblings)
  23 siblings, 0 replies; 73+ messages in thread
From: Ajit Khaparde @ 2018-06-28 20:15 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit

1) Use nb_rx_pkts instead of checking producer indices of Rx and
aggregator rings to decide if any Rx completions were processed.
2) Post Rx buffers early in Rx processing instead of waiting for
the budgeted burst quota.
3) Ring Rx CQ DB after Rx buffers are posted.

Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
--
v1->v2: fix commit log
---
 drivers/net/bnxt/bnxt_rxr.c | 12 ++++++++----
 drivers/net/bnxt/bnxt_rxr.h |  2 ++
 2 files changed, 10 insertions(+), 4 deletions(-)

diff --git a/drivers/net/bnxt/bnxt_rxr.c b/drivers/net/bnxt/bnxt_rxr.c
index 9d8842926..b6b72c553 100644
--- a/drivers/net/bnxt/bnxt_rxr.c
+++ b/drivers/net/bnxt/bnxt_rxr.c
@@ -540,8 +540,8 @@ uint16_t bnxt_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 	int rc = 0;
 	bool evt = false;
 
-	/* If Rx Q was stopped return */
-	if (rxq->rx_deferred_start)
+	/* If Rx Q was stopped return. RxQ0 cannot be stopped. */
+	if (rxq->rx_deferred_start && rxq->queue_id)
 		return 0;
 
 	/* Handle RX burst request */
@@ -572,10 +572,13 @@ uint16_t bnxt_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 		raw_cons = NEXT_RAW_CMP(raw_cons);
 		if (nb_rx_pkts == nb_pkts || evt)
 			break;
+		/* Post some Rx buf early in case of larger burst processing */
+		if (nb_rx_pkts == BNXT_RX_POST_THRESH)
+			B_RX_DB(rxr->rx_doorbell, rxr->rx_prod);
 	}
 
 	cpr->cp_raw_cons = raw_cons;
-	if ((prod == rxr->rx_prod && ag_prod == rxr->ag_prod) && !evt) {
+	if (!nb_rx_pkts && !evt) {
 		/*
 		 * For PMD, there is no need to keep on pushing to REARM
 		 * the doorbell if there are no new completions
@@ -583,7 +586,6 @@ uint16_t bnxt_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 		return nb_rx_pkts;
 	}
 
-	B_CP_DIS_DB(cpr, cpr->cp_raw_cons);
 	if (prod != rxr->rx_prod)
 		B_RX_DB(rxr->rx_doorbell, rxr->rx_prod);
 
@@ -591,6 +593,8 @@ uint16_t bnxt_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 	if (ag_prod != rxr->ag_prod)
 		B_RX_DB(rxr->ag_doorbell, rxr->ag_prod);
 
+	B_CP_DIS_DB(cpr, cpr->cp_raw_cons);
+
 	/* Attempt to alloc Rx buf in case of a previous allocation failure. */
 	if (rc == -ENOMEM) {
 		int i;
diff --git a/drivers/net/bnxt/bnxt_rxr.h b/drivers/net/bnxt/bnxt_rxr.h
index 5b28f0321..3815a2199 100644
--- a/drivers/net/bnxt/bnxt_rxr.h
+++ b/drivers/net/bnxt/bnxt_rxr.h
@@ -54,6 +54,8 @@
 #define RX_CMP_IP_CS_UNKNOWN(rxcmp1)					\
 		!((rxcmp1)->flags2 & RX_CMP_IP_CS_BITS)
 
+#define BNXT_RX_POST_THRESH	32
+
 enum pkt_hash_types {
 	PKT_HASH_TYPE_NONE,	/* Undefined type */
 	PKT_HASH_TYPE_L2,	/* Input: src_MAC, dest_MAC */
-- 
2.15.2 (Apple Git-101.1)

^ permalink raw reply	[flat|nested] 73+ messages in thread

* [dpdk-dev] [PATCH v2 04/23] net/bnxt: set MIN/MAX descriptor count for Tx and Rx Rings
  2018-06-28 20:15   ` [dpdk-dev] [PATCH v2 00/23] " Ajit Khaparde
                       ` (2 preceding siblings ...)
  2018-06-28 20:15     ` [dpdk-dev] [PATCH v2 03/23] net/bnxt: optimize receive processing code Ajit Khaparde
@ 2018-06-28 20:15     ` Ajit Khaparde
  2018-06-28 20:15     ` [dpdk-dev] [PATCH v2 05/23] net/bnxt: fix dev close operation Ajit Khaparde
                       ` (19 subsequent siblings)
  23 siblings, 0 replies; 73+ messages in thread
From: Ajit Khaparde @ 2018-06-28 20:15 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit

Set MIN and MAX descriptor count for TX and RX rings.

Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
--
v1->v2: fix commit log
---
 drivers/net/bnxt/bnxt.h        | 3 +++
 drivers/net/bnxt/bnxt_ethdev.c | 4 ++++
 2 files changed, 7 insertions(+)

diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h
index d19aea569..9a70617fc 100644
--- a/drivers/net/bnxt/bnxt.h
+++ b/drivers/net/bnxt/bnxt.h
@@ -24,6 +24,9 @@
 #define VLAN_TAG_SIZE		4
 #define BNXT_MAX_LED		4
 #define BNXT_NUM_VLANS		2
+#define BNXT_MIN_RING_DESC	16
+#define BNXT_MAX_TX_RING_DESC	4096
+#define BNXT_MAX_RX_RING_DESC	8192
 
 struct bnxt_led_info {
 	uint8_t      led_id;
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index 6e56bfd36..33560db0d 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -449,6 +449,10 @@ static void bnxt_dev_info_get_op(struct rte_eth_dev *eth_dev,
 	eth_dev->data->dev_conf.intr_conf.lsc = 1;
 
 	eth_dev->data->dev_conf.intr_conf.rxq = 1;
+	dev_info->rx_desc_lim.nb_min = BNXT_MIN_RING_DESC;
+	dev_info->rx_desc_lim.nb_max = BNXT_MAX_RX_RING_DESC;
+	dev_info->tx_desc_lim.nb_min = BNXT_MIN_RING_DESC;
+	dev_info->tx_desc_lim.nb_max = BNXT_MAX_TX_RING_DESC;
 
 	/* *INDENT-ON* */
 
-- 
2.15.2 (Apple Git-101.1)

^ permalink raw reply	[flat|nested] 73+ messages in thread

* [dpdk-dev] [PATCH v2 05/23] net/bnxt: fix dev close operation
  2018-06-28 20:15   ` [dpdk-dev] [PATCH v2 00/23] " Ajit Khaparde
                       ` (3 preceding siblings ...)
  2018-06-28 20:15     ` [dpdk-dev] [PATCH v2 04/23] net/bnxt: set MIN/MAX descriptor count for Tx and Rx Rings Ajit Khaparde
@ 2018-06-28 20:15     ` Ajit Khaparde
  2018-06-28 20:15     ` [dpdk-dev] [PATCH v2 06/23] net/bnxt: set ring coalesce parameters for Stratus NIC Ajit Khaparde
                       ` (18 subsequent siblings)
  23 siblings, 0 replies; 73+ messages in thread
From: Ajit Khaparde @ 2018-06-28 20:15 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, stable

We are not cleaning up all the memory and also not unregistering
the driver during device close operation. This patch fixes the issue.

Fixes: 893074951314 ("net/bnxt: free memory in close operation")
Cc: stable@dpdk.org

Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
--
v1->v2: Remove incorrectly added RTE_PCI_DRV_INTR_RMV.
---
 drivers/net/bnxt/bnxt_ethdev.c | 21 +++++++++++++++++----
 1 file changed, 17 insertions(+), 4 deletions(-)

diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index 33560db0d..233a7c312 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -152,6 +152,7 @@ static const struct rte_pci_id bnxt_pci_id_map[] = {
 static int bnxt_vlan_offload_set_op(struct rte_eth_dev *dev, int mask);
 static void bnxt_print_link_info(struct rte_eth_dev *eth_dev);
 static int bnxt_mtu_set_op(struct rte_eth_dev *eth_dev, uint16_t new_mtu);
+static int bnxt_dev_uninit(struct rte_eth_dev *eth_dev);
 
 /***********************/
 
@@ -668,6 +669,8 @@ static void bnxt_dev_close_op(struct rte_eth_dev *eth_dev)
 		rte_free(bp->grp_info);
 		bp->grp_info = NULL;
 	}
+
+	bnxt_dev_uninit(eth_dev);
 }
 
 static void bnxt_mac_addr_remove_op(struct rte_eth_dev *eth_dev,
@@ -3116,7 +3119,6 @@ static int bnxt_init_board(struct rte_eth_dev *eth_dev)
 	return rc;
 }
 
-static int bnxt_dev_uninit(struct rte_eth_dev *eth_dev);
 
 #define ALLOW_FUNC(x)	\
 	{ \
@@ -3408,13 +3410,15 @@ bnxt_dev_init(struct rte_eth_dev *eth_dev)
 }
 
 static int
-bnxt_dev_uninit(struct rte_eth_dev *eth_dev) {
+bnxt_dev_uninit(struct rte_eth_dev *eth_dev)
+{
 	struct bnxt *bp = eth_dev->data->dev_private;
 	int rc;
 
 	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
 		return -EPERM;
 
+	PMD_DRV_LOG(DEBUG, "Calling Device uninit\n");
 	bnxt_disable_int(bp);
 	bnxt_free_int(bp);
 	bnxt_free_mem(bp);
@@ -3428,8 +3432,17 @@ bnxt_dev_uninit(struct rte_eth_dev *eth_dev) {
 	}
 	rc = bnxt_hwrm_func_driver_unregister(bp, 0);
 	bnxt_free_hwrm_resources(bp);
-	rte_memzone_free((const struct rte_memzone *)bp->tx_mem_zone);
-	rte_memzone_free((const struct rte_memzone *)bp->rx_mem_zone);
+
+	if (bp->tx_mem_zone) {
+		rte_memzone_free((const struct rte_memzone *)bp->tx_mem_zone);
+		bp->tx_mem_zone = NULL;
+	}
+
+	if (bp->rx_mem_zone) {
+		rte_memzone_free((const struct rte_memzone *)bp->rx_mem_zone);
+		bp->rx_mem_zone = NULL;
+	}
+
 	if (bp->dev_stopped == 0)
 		bnxt_dev_close_op(eth_dev);
 	if (bp->pf.vf_info)
-- 
2.15.2 (Apple Git-101.1)

^ permalink raw reply	[flat|nested] 73+ messages in thread

* [dpdk-dev] [PATCH v2 06/23] net/bnxt: set ring coalesce parameters for Stratus NIC
  2018-06-28 20:15   ` [dpdk-dev] [PATCH v2 00/23] " Ajit Khaparde
                       ` (4 preceding siblings ...)
  2018-06-28 20:15     ` [dpdk-dev] [PATCH v2 05/23] net/bnxt: fix dev close operation Ajit Khaparde
@ 2018-06-28 20:15     ` Ajit Khaparde
  2018-06-28 20:15     ` [dpdk-dev] [PATCH v2 07/23] net/bnxt: fix HW Tx checksum offload check Ajit Khaparde
                       ` (17 subsequent siblings)
  23 siblings, 0 replies; 73+ messages in thread
From: Ajit Khaparde @ 2018-06-28 20:15 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit

Set ring coalesce parameters for Stratus NIC.
Other SKUs don't necessarily need this.

Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/bnxt.h        | 19 ++++++++++++++++
 drivers/net/bnxt/bnxt_ethdev.c | 11 +++++++++
 drivers/net/bnxt/bnxt_hwrm.c   | 51 ++++++++++++++++++++++++++++++++++++++++++
 drivers/net/bnxt/bnxt_hwrm.h   |  2 ++
 drivers/net/bnxt/bnxt_ring.c   | 23 +++++++++++++++++++
 5 files changed, 106 insertions(+)

diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h
index 9a70617fc..1a746097b 100644
--- a/drivers/net/bnxt/bnxt.h
+++ b/drivers/net/bnxt/bnxt.h
@@ -28,6 +28,14 @@
 #define BNXT_MAX_TX_RING_DESC	4096
 #define BNXT_MAX_RX_RING_DESC	8192
 
+#define BNXT_INT_LAT_TMR_MIN			75
+#define BNXT_INT_LAT_TMR_MAX			150
+#define BNXT_NUM_CMPL_AGGR_INT			36
+#define BNXT_CMPL_AGGR_DMA_TMR			37
+#define BNXT_NUM_CMPL_DMA_AGGR			36
+#define BNXT_CMPL_AGGR_DMA_TMR_DURING_INT	50
+#define BNXT_NUM_CMPL_DMA_AGGR_DURING_INT	12
+
 struct bnxt_led_info {
 	uint8_t      led_id;
 	uint8_t      led_type;
@@ -209,6 +217,16 @@ struct bnxt_ptp_cfg {
 	uint32_t			tx_mapped_regs[BNXT_PTP_TX_REGS];
 };
 
+struct bnxt_coal {
+	uint16_t			num_cmpl_aggr_int;
+	uint16_t			num_cmpl_dma_aggr;
+	uint16_t			num_cmpl_dma_aggr_during_int;
+	uint16_t			int_lat_tmr_max;
+	uint16_t			int_lat_tmr_min;
+	uint16_t			cmpl_aggr_dma_tmr;
+	uint16_t			cmpl_aggr_dma_tmr_during_int;
+};
+
 #define BNXT_HWRM_SHORT_REQ_LEN		sizeof(struct hwrm_short_input)
 struct bnxt {
 	void				*bar0;
@@ -315,6 +333,7 @@ int bnxt_link_update_op(struct rte_eth_dev *eth_dev, int wait_to_complete);
 int bnxt_rcv_msg_from_vf(struct bnxt *bp, uint16_t vf_id, void *msg);
 
 bool is_bnxt_supported(struct rte_eth_dev *dev);
+bool bnxt_stratus_device(struct bnxt *bp);
 extern const struct rte_flow_ops bnxt_flow_ops;
 
 extern int bnxt_logtype_driver;
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index 233a7c312..15dab10bb 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -3073,6 +3073,17 @@ static bool bnxt_vf_pciid(uint16_t id)
 	return false;
 }
 
+bool bnxt_stratus_device(struct bnxt *bp)
+{
+	uint16_t id = bp->pdev->id.device_id;
+
+	if (id == BROADCOM_DEV_ID_STRATUS_NIC ||
+	    id == BROADCOM_DEV_ID_STRATUS_NIC_VF1 ||
+	    id == BROADCOM_DEV_ID_STRATUS_NIC_VF2)
+		return true;
+	return false;
+}
+
 static int bnxt_init_board(struct rte_eth_dev *eth_dev)
 {
 	struct bnxt *bp = eth_dev->data->dev_private;
diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c
index f441d4610..707ee62e0 100644
--- a/drivers/net/bnxt/bnxt_hwrm.c
+++ b/drivers/net/bnxt/bnxt_hwrm.c
@@ -3835,3 +3835,54 @@ int bnxt_vnic_rss_configure(struct bnxt *bp, struct bnxt_vnic_info *vnic)
 	}
 	return 0;
 }
+
+static void bnxt_hwrm_set_coal_params(struct bnxt_coal *hw_coal,
+	struct hwrm_ring_cmpl_ring_cfg_aggint_params_input *req)
+{
+	uint16_t flags;
+
+	req->num_cmpl_aggr_int = rte_cpu_to_le_16(hw_coal->num_cmpl_aggr_int);
+
+	/* This is a 6-bit value and must not be 0, or we'll get non stop IRQ */
+	req->num_cmpl_dma_aggr = rte_cpu_to_le_16(hw_coal->num_cmpl_dma_aggr);
+
+	/* This is a 6-bit value and must not be 0, or we'll get non stop IRQ */
+	req->num_cmpl_dma_aggr_during_int =
+		rte_cpu_to_le_16(hw_coal->num_cmpl_dma_aggr_during_int);
+
+	req->int_lat_tmr_max = rte_cpu_to_le_16(hw_coal->int_lat_tmr_max);
+
+	/* min timer set to 1/2 of interrupt timer */
+	req->int_lat_tmr_min = rte_cpu_to_le_16(hw_coal->int_lat_tmr_min);
+
+	/* buf timer set to 1/4 of interrupt timer */
+	req->cmpl_aggr_dma_tmr = rte_cpu_to_le_16(hw_coal->cmpl_aggr_dma_tmr);
+
+	req->cmpl_aggr_dma_tmr_during_int =
+		rte_cpu_to_le_16(hw_coal->cmpl_aggr_dma_tmr_during_int);
+
+	flags = HWRM_RING_CMPL_RING_CFG_AGGINT_PARAMS_INPUT_FLAGS_TIMER_RESET |
+		HWRM_RING_CMPL_RING_CFG_AGGINT_PARAMS_INPUT_FLAGS_RING_IDLE;
+	req->flags = rte_cpu_to_le_16(flags);
+}
+
+int bnxt_hwrm_set_ring_coal(struct bnxt *bp,
+			struct bnxt_coal *coal, uint16_t ring_id)
+{
+	struct hwrm_ring_cmpl_ring_cfg_aggint_params_input req = {0};
+	struct hwrm_ring_cmpl_ring_cfg_aggint_params_output *resp =
+						bp->hwrm_cmd_resp_addr;
+	int rc;
+
+	/* Set ring coalesce parameters only for Stratus 100G NIC */
+	if (!bnxt_stratus_device(bp))
+		return 0;
+
+	HWRM_PREP(req, RING_CMPL_RING_CFG_AGGINT_PARAMS);
+	bnxt_hwrm_set_coal_params(coal, &req);
+	req.ring_id = rte_cpu_to_le_16(ring_id);
+	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req));
+	HWRM_CHECK_RESULT();
+	HWRM_UNLOCK();
+	return 0;
+}
diff --git a/drivers/net/bnxt/bnxt_hwrm.h b/drivers/net/bnxt/bnxt_hwrm.h
index 60a4ab16a..b83aab306 100644
--- a/drivers/net/bnxt/bnxt_hwrm.h
+++ b/drivers/net/bnxt/bnxt_hwrm.h
@@ -167,4 +167,6 @@ int bnxt_hwrm_flash_nvram(struct bnxt *bp, uint16_t dir_type,
 int bnxt_hwrm_ptp_cfg(struct bnxt *bp);
 int bnxt_vnic_rss_configure(struct bnxt *bp,
 			    struct bnxt_vnic_info *vnic);
+int bnxt_hwrm_set_ring_coal(struct bnxt *bp,
+			struct bnxt_coal *coal, uint16_t ring_id);
 #endif
diff --git a/drivers/net/bnxt/bnxt_ring.c b/drivers/net/bnxt/bnxt_ring.c
index bb9f6d1c0..81eb89d74 100644
--- a/drivers/net/bnxt/bnxt_ring.c
+++ b/drivers/net/bnxt/bnxt_ring.c
@@ -258,6 +258,24 @@ int bnxt_alloc_rings(struct bnxt *bp, uint16_t qidx,
 	return 0;
 }
 
+static void bnxt_init_dflt_coal(struct bnxt_coal *coal)
+{
+	/* Tick values in micro seconds.
+	 * 1 coal_buf x bufs_per_record = 1 completion record.
+	 */
+	coal->num_cmpl_aggr_int = BNXT_NUM_CMPL_AGGR_INT;
+	/* This is a 6-bit value and must not be 0, or we'll get non stop IRQ */
+	coal->num_cmpl_dma_aggr = BNXT_NUM_CMPL_DMA_AGGR;
+	/* This is a 6-bit value and must not be 0, or we'll get non stop IRQ */
+	coal->num_cmpl_dma_aggr_during_int = BNXT_NUM_CMPL_DMA_AGGR_DURING_INT;
+	coal->int_lat_tmr_max = BNXT_INT_LAT_TMR_MAX;
+	/* min timer set to 1/2 of interrupt timer */
+	coal->int_lat_tmr_min = BNXT_INT_LAT_TMR_MIN;
+	/* buf timer set to 1/4 of interrupt timer */
+	coal->cmpl_aggr_dma_tmr = BNXT_CMPL_AGGR_DMA_TMR;
+	coal->cmpl_aggr_dma_tmr_during_int = BNXT_CMPL_AGGR_DMA_TMR_DURING_INT;
+}
+
 /* ring_grp usage:
  * [0] = default completion ring
  * [1 -> +rx_cp_nr_rings] = rx_cp, rx rings
@@ -265,9 +283,12 @@ int bnxt_alloc_rings(struct bnxt *bp, uint16_t qidx,
  */
 int bnxt_alloc_hwrm_rings(struct bnxt *bp)
 {
+	struct bnxt_coal coal;
 	unsigned int i;
 	int rc = 0;
 
+	bnxt_init_dflt_coal(&coal);
+
 	for (i = 0; i < bp->rx_cp_nr_rings; i++) {
 		struct bnxt_rx_queue *rxq = bp->rx_queues[i];
 		struct bnxt_cp_ring_info *cpr = rxq->cp_ring;
@@ -291,6 +312,7 @@ int bnxt_alloc_hwrm_rings(struct bnxt *bp)
 		cpr->cp_doorbell = (char *)bp->doorbell_base + i * 0x80;
 		bp->grp_info[i].cp_fw_ring_id = cp_ring->fw_ring_id;
 		B_CP_DIS_DB(cpr, cpr->cp_raw_cons);
+		bnxt_hwrm_set_ring_coal(bp, &coal, cp_ring->fw_ring_id);
 
 		if (!i) {
 			/*
@@ -379,6 +401,7 @@ int bnxt_alloc_hwrm_rings(struct bnxt *bp)
 
 		txr->tx_doorbell = (char *)bp->doorbell_base + idx * 0x80;
 		txq->index = idx;
+		bnxt_hwrm_set_ring_coal(bp, &coal, cp_ring->fw_ring_id);
 	}
 
 err_out:
-- 
2.15.2 (Apple Git-101.1)

^ permalink raw reply	[flat|nested] 73+ messages in thread

* [dpdk-dev] [PATCH v2 07/23] net/bnxt: fix HW Tx checksum offload check
  2018-06-28 20:15   ` [dpdk-dev] [PATCH v2 00/23] " Ajit Khaparde
                       ` (5 preceding siblings ...)
  2018-06-28 20:15     ` [dpdk-dev] [PATCH v2 06/23] net/bnxt: set ring coalesce parameters for Stratus NIC Ajit Khaparde
@ 2018-06-28 20:15     ` Ajit Khaparde
  2018-06-28 20:15     ` [dpdk-dev] [PATCH v2 08/23] net/bnxt: add support for VF id 0xd800 Ajit Khaparde
                       ` (16 subsequent siblings)
  23 siblings, 0 replies; 73+ messages in thread
From: Ajit Khaparde @ 2018-06-28 20:15 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, stable, Xiaoxin Peng

Add more checks for checksum calculation offload.
Also check for tunnel frames and select the proper
buffer descriptor size.

Fixes: 6eb3cc2294fd ("net/bnxt: add initial Tx code")
Cc: stable@dpdk.org

Signed-off-by: Xiaoxin Peng <xiaoxin.peng@broadcom.com>
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Reviewed-by: Jason He <jason.he@broadcom.com>
Reviewed-by: Qingmin Liu <qingmin.liu@broadcom.com>
---
 drivers/net/bnxt/bnxt_txr.c | 51 ++++++++++++++++++++++++++++++++++++++++++---
 drivers/net/bnxt/bnxt_txr.h | 10 +++++++++
 2 files changed, 58 insertions(+), 3 deletions(-)

diff --git a/drivers/net/bnxt/bnxt_txr.c b/drivers/net/bnxt/bnxt_txr.c
index 0fdf0fd08..68645b2f7 100644
--- a/drivers/net/bnxt/bnxt_txr.c
+++ b/drivers/net/bnxt/bnxt_txr.c
@@ -135,7 +135,9 @@ static uint16_t bnxt_start_xmit(struct rte_mbuf *tx_pkt,
 
 	if (tx_pkt->ol_flags & (PKT_TX_TCP_SEG | PKT_TX_TCP_CKSUM |
 				PKT_TX_UDP_CKSUM | PKT_TX_IP_CKSUM |
-				PKT_TX_VLAN_PKT | PKT_TX_OUTER_IP_CKSUM))
+				PKT_TX_VLAN_PKT | PKT_TX_OUTER_IP_CKSUM |
+				PKT_TX_TUNNEL_GRE | PKT_TX_TUNNEL_VXLAN |
+				PKT_TX_TUNNEL_GENEVE))
 		long_bd = true;
 
 	tx_buf = &txr->tx_buf_ring[txr->tx_prod];
@@ -203,16 +205,46 @@ static uint16_t bnxt_start_xmit(struct rte_mbuf *tx_pkt,
 			/* Outer IP, Inner IP, Inner TCP/UDP CSO */
 			txbd1->lflags |= TX_BD_FLG_TIP_IP_TCP_UDP_CHKSUM;
 			txbd1->mss = 0;
+		} else if ((tx_pkt->ol_flags & PKT_TX_OIP_IIP_TCP_CKSUM) ==
+			   PKT_TX_OIP_IIP_TCP_CKSUM) {
+			/* Outer IP, Inner IP, Inner TCP/UDP CSO */
+			txbd1->lflags |= TX_BD_FLG_TIP_IP_TCP_UDP_CHKSUM;
+			txbd1->mss = 0;
+		} else if ((tx_pkt->ol_flags & PKT_TX_OIP_IIP_UDP_CKSUM) ==
+			   PKT_TX_OIP_IIP_UDP_CKSUM) {
+			/* Outer IP, Inner IP, Inner TCP/UDP CSO */
+			txbd1->lflags |= TX_BD_FLG_TIP_IP_TCP_UDP_CHKSUM;
+			txbd1->mss = 0;
 		} else if ((tx_pkt->ol_flags & PKT_TX_IIP_TCP_UDP_CKSUM) ==
 			   PKT_TX_IIP_TCP_UDP_CKSUM) {
 			/* (Inner) IP, (Inner) TCP/UDP CSO */
 			txbd1->lflags |= TX_BD_FLG_IP_TCP_UDP_CHKSUM;
 			txbd1->mss = 0;
+		} else if ((tx_pkt->ol_flags & PKT_TX_IIP_UDP_CKSUM) ==
+			   PKT_TX_IIP_UDP_CKSUM) {
+			/* (Inner) IP, (Inner) TCP/UDP CSO */
+			txbd1->lflags |= TX_BD_FLG_IP_TCP_UDP_CHKSUM;
+			txbd1->mss = 0;
+		} else if ((tx_pkt->ol_flags & PKT_TX_IIP_TCP_CKSUM) ==
+			   PKT_TX_IIP_TCP_CKSUM) {
+			/* (Inner) IP, (Inner) TCP/UDP CSO */
+			txbd1->lflags |= TX_BD_FLG_IP_TCP_UDP_CHKSUM;
+			txbd1->mss = 0;
 		} else if ((tx_pkt->ol_flags & PKT_TX_OIP_TCP_UDP_CKSUM) ==
 			   PKT_TX_OIP_TCP_UDP_CKSUM) {
 			/* Outer IP, (Inner) TCP/UDP CSO */
 			txbd1->lflags |= TX_BD_FLG_TIP_TCP_UDP_CHKSUM;
 			txbd1->mss = 0;
+		} else if ((tx_pkt->ol_flags & PKT_TX_OIP_UDP_CKSUM) ==
+			   PKT_TX_OIP_UDP_CKSUM) {
+			/* Outer IP, (Inner) TCP/UDP CSO */
+			txbd1->lflags |= TX_BD_FLG_TIP_TCP_UDP_CHKSUM;
+			txbd1->mss = 0;
+		} else if ((tx_pkt->ol_flags & PKT_TX_OIP_TCP_CKSUM) ==
+			   PKT_TX_OIP_TCP_CKSUM) {
+			/* Outer IP, (Inner) TCP/UDP CSO */
+			txbd1->lflags |= TX_BD_FLG_TIP_TCP_UDP_CHKSUM;
+			txbd1->mss = 0;
 		} else if ((tx_pkt->ol_flags & PKT_TX_OIP_IIP_CKSUM) ==
 			   PKT_TX_OIP_IIP_CKSUM) {
 			/* Outer IP, Inner IP CSO */
@@ -223,11 +255,23 @@ static uint16_t bnxt_start_xmit(struct rte_mbuf *tx_pkt,
 			/* TCP/UDP CSO */
 			txbd1->lflags |= TX_BD_LONG_LFLAGS_TCP_UDP_CHKSUM;
 			txbd1->mss = 0;
-		} else if (tx_pkt->ol_flags & PKT_TX_IP_CKSUM) {
+		} else if ((tx_pkt->ol_flags & PKT_TX_TCP_CKSUM) ==
+			   PKT_TX_TCP_CKSUM) {
+			/* TCP/UDP CSO */
+			txbd1->lflags |= TX_BD_LONG_LFLAGS_TCP_UDP_CHKSUM;
+			txbd1->mss = 0;
+		} else if ((tx_pkt->ol_flags & PKT_TX_UDP_CKSUM) ==
+			   PKT_TX_UDP_CKSUM) {
+			/* TCP/UDP CSO */
+			txbd1->lflags |= TX_BD_LONG_LFLAGS_TCP_UDP_CHKSUM;
+			txbd1->mss = 0;
+		} else if ((tx_pkt->ol_flags & PKT_TX_IP_CKSUM) ==
+			   PKT_TX_IP_CKSUM) {
 			/* IP CSO */
 			txbd1->lflags |= TX_BD_LONG_LFLAGS_IP_CHKSUM;
 			txbd1->mss = 0;
-		} else if (tx_pkt->ol_flags & PKT_TX_OUTER_IP_CKSUM) {
+		} else if ((tx_pkt->ol_flags & PKT_TX_OUTER_IP_CKSUM) ==
+			   PKT_TX_OUTER_IP_CKSUM) {
 			/* IP CSO */
 			txbd1->lflags |= TX_BD_LONG_LFLAGS_T_IP_CHKSUM;
 			txbd1->mss = 0;
@@ -251,6 +295,7 @@ static uint16_t bnxt_start_xmit(struct rte_mbuf *tx_pkt,
 	}
 
 	txbd->flags_type |= TX_BD_LONG_FLAGS_PACKET_END;
+	txbd1->lflags = rte_cpu_to_le_32(txbd1->lflags);
 
 	txr->tx_prod = RING_NEXT(txr->tx_ring_struct, txr->tx_prod);
 
diff --git a/drivers/net/bnxt/bnxt_txr.h b/drivers/net/bnxt/bnxt_txr.h
index 15c7e5a09..7f3c7cdb0 100644
--- a/drivers/net/bnxt/bnxt_txr.h
+++ b/drivers/net/bnxt/bnxt_txr.h
@@ -45,10 +45,20 @@ int bnxt_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id);
 
 #define PKT_TX_OIP_IIP_TCP_UDP_CKSUM	(PKT_TX_TCP_CKSUM | PKT_TX_UDP_CKSUM | \
 					PKT_TX_IP_CKSUM | PKT_TX_OUTER_IP_CKSUM)
+#define PKT_TX_OIP_IIP_UDP_CKSUM	(PKT_TX_UDP_CKSUM | \
+					PKT_TX_IP_CKSUM | PKT_TX_OUTER_IP_CKSUM)
+#define PKT_TX_OIP_IIP_TCP_CKSUM	(PKT_TX_TCP_CKSUM | \
+					PKT_TX_IP_CKSUM | PKT_TX_OUTER_IP_CKSUM)
 #define PKT_TX_IIP_TCP_UDP_CKSUM	(PKT_TX_TCP_CKSUM | PKT_TX_UDP_CKSUM | \
 					PKT_TX_IP_CKSUM)
+#define PKT_TX_IIP_TCP_CKSUM		(PKT_TX_TCP_CKSUM | PKT_TX_IP_CKSUM)
+#define PKT_TX_IIP_UDP_CKSUM		(PKT_TX_UDP_CKSUM | PKT_TX_IP_CKSUM)
 #define PKT_TX_OIP_TCP_UDP_CKSUM	(PKT_TX_TCP_CKSUM | PKT_TX_UDP_CKSUM | \
 					PKT_TX_OUTER_IP_CKSUM)
+#define PKT_TX_OIP_UDP_CKSUM		(PKT_TX_UDP_CKSUM | \
+					PKT_TX_OUTER_IP_CKSUM)
+#define PKT_TX_OIP_TCP_CKSUM		(PKT_TX_TCP_CKSUM | \
+					PKT_TX_OUTER_IP_CKSUM)
 #define PKT_TX_OIP_IIP_CKSUM		(PKT_TX_IP_CKSUM |	\
 					 PKT_TX_OUTER_IP_CKSUM)
 #define PKT_TX_TCP_UDP_CKSUM		(PKT_TX_TCP_CKSUM | PKT_TX_UDP_CKSUM)
-- 
2.15.2 (Apple Git-101.1)

^ permalink raw reply	[flat|nested] 73+ messages in thread

* [dpdk-dev] [PATCH v2 08/23] net/bnxt: add support for VF id 0xd800
  2018-06-28 20:15   ` [dpdk-dev] [PATCH v2 00/23] " Ajit Khaparde
                       ` (6 preceding siblings ...)
  2018-06-28 20:15     ` [dpdk-dev] [PATCH v2 07/23] net/bnxt: fix HW Tx checksum offload check Ajit Khaparde
@ 2018-06-28 20:15     ` Ajit Khaparde
  2018-06-28 20:15     ` [dpdk-dev] [PATCH v2 09/23] net/bnxt: fix Rx/Tx queue start/stop operations Ajit Khaparde
                       ` (15 subsequent siblings)
  23 siblings, 0 replies; 73+ messages in thread
From: Ajit Khaparde @ 2018-06-28 20:15 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit

Add support for the StingRay VF device ID 0xd800.

Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/bnxt_ethdev.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index 15dab10bb..0bb3f29d9 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -73,6 +73,7 @@ int bnxt_logtype_driver;
 #define BROADCOM_DEV_ID_58802 0xd802
 #define BROADCOM_DEV_ID_58804 0xd804
 #define BROADCOM_DEV_ID_58808 0x16f0
+#define BROADCOM_DEV_ID_58802_VF 0xd800
 
 static const struct rte_pci_id bnxt_pci_id_map[] = {
 	{ RTE_PCI_DEVICE(PCI_VENDOR_ID_BROADCOM,
@@ -116,6 +117,7 @@ static const struct rte_pci_id bnxt_pci_id_map[] = {
 	{ RTE_PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, BROADCOM_DEV_ID_58802) },
 	{ RTE_PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, BROADCOM_DEV_ID_58804) },
 	{ RTE_PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, BROADCOM_DEV_ID_58808) },
+	{ RTE_PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, BROADCOM_DEV_ID_58802_VF) },
 	{ .vendor_id = 0, /* sentinel */ },
 };
 
@@ -3068,7 +3070,8 @@ static bool bnxt_vf_pciid(uint16_t id)
 	    id == BROADCOM_DEV_ID_5741X_VF ||
 	    id == BROADCOM_DEV_ID_57414_VF ||
 	    id == BROADCOM_DEV_ID_STRATUS_NIC_VF1 ||
-	    id == BROADCOM_DEV_ID_STRATUS_NIC_VF2)
+	    id == BROADCOM_DEV_ID_STRATUS_NIC_VF2 ||
+	    id == BROADCOM_DEV_ID_58802_VF)
 		return true;
 	return false;
 }
-- 
2.15.2 (Apple Git-101.1)

^ permalink raw reply	[flat|nested] 73+ messages in thread

* [dpdk-dev] [PATCH v2 09/23] net/bnxt: fix Rx/Tx queue start/stop operations
  2018-06-28 20:15   ` [dpdk-dev] [PATCH v2 00/23] " Ajit Khaparde
                       ` (7 preceding siblings ...)
  2018-06-28 20:15     ` [dpdk-dev] [PATCH v2 08/23] net/bnxt: add support for VF id 0xd800 Ajit Khaparde
@ 2018-06-28 20:15     ` Ajit Khaparde
  2018-06-28 20:15     ` [dpdk-dev] [PATCH v2 10/23] net/bnxt: move function check zero bytes to bnxt util.h Ajit Khaparde
                       ` (14 subsequent siblings)
  23 siblings, 0 replies; 73+ messages in thread
From: Ajit Khaparde @ 2018-06-28 20:15 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, stable, Somnath Kotur

Packets destined for the to-be-stopped queue should not be dropped
(either in HW or in the driver), so re-program the RSS table without
this queue on stop and add it back to the table on start, unless it
is a Representor VF.

Since the 0th entry is used for the default ring, use fw_grp_id + 1
when populating the RSS table, so that valid IDs are programmed
instead of the default zeroth entry whenever a fw_grp_id is invalid.

Destroy and recreate the trio of Rx rings (compl, Rx, AG) every time
in start so that HW stays in sync with software.
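
A rough sketch of the RSS-table idea described above, assuming a
per-VNIC fw_grp_ids[] array and the driver's INVALID_HW_RING_ID marker;
this is illustrative only, not the exact bnxt_vnic_rss_configure() code:

#include <stdio.h>
#include <stdint.h>

#define INVALID_HW_RING_ID	((uint16_t)-1)	/* assumed marker value */

/* Fill an RSS indirection table using only the started queues: entries
 * whose fw_grp_id is INVALID_HW_RING_ID (a stopped or deferred-start
 * queue) are skipped instead of falling back to the zeroth ring group. */
static void fill_rss_table(uint16_t *rss_tbl, int tbl_size,
			   const uint16_t *fw_grp_ids, int nr_rings)
{
	int i, j = 0, valid = 0;

	for (i = 0; i < nr_rings; i++)
		if (fw_grp_ids[i] != INVALID_HW_RING_ID)
			valid++;
	if (!valid)
		return;	/* nothing started yet; leave the table alone */

	for (i = 0; i < tbl_size; i++) {
		while (fw_grp_ids[j % nr_rings] == INVALID_HW_RING_ID)
			j++;	/* skip stopped queues */
		rss_tbl[i] = fw_grp_ids[j % nr_rings];
		j++;
	}
}

int main(void)
{
	/* Queue 1 is stopped: its group ID must not appear in the table. */
	uint16_t grp_ids[3] = { 10, INVALID_HW_RING_ID, 12 };
	uint16_t tbl[8];
	int i;

	fill_rss_table(tbl, 8, grp_ids, 3);
	for (i = 0; i < 8; i++)
		printf("%d ", tbl[i]);	/* prints: 10 12 10 12 ... */
	printf("\n");
	return 0;
}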

Fixes: 9b63c6fd70e3 ("net/bnxt: support Rx/Tx queue start/stop")
Cc: stable@dpdk.org

Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Ray Jui <ray.jui@broadcom.com>
Reviewed-by: Scott Branden <scott.branden@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>

--
v1->v2: Fix checkpatch warning.
---
 drivers/net/bnxt/bnxt.h        |  1 +
 drivers/net/bnxt/bnxt_ethdev.c | 10 ++++-
 drivers/net/bnxt/bnxt_hwrm.c   | 94 +++++++++++++++++++-----------------------
 drivers/net/bnxt/bnxt_hwrm.h   |  1 +
 drivers/net/bnxt/bnxt_ring.c   | 92 +++++++++++++++++++++++++++++++++++++++++
 drivers/net/bnxt/bnxt_ring.h   |  1 +
 drivers/net/bnxt/bnxt_rxq.c    | 54 +++++++++++++++++++-----
 drivers/net/bnxt/bnxt_rxq.h    |  4 ++
 drivers/net/bnxt/bnxt_rxr.c    | 16 +++++--
 9 files changed, 206 insertions(+), 67 deletions(-)

diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h
index 1a746097b..246b8d4d8 100644
--- a/drivers/net/bnxt/bnxt.h
+++ b/drivers/net/bnxt/bnxt.h
@@ -27,6 +27,7 @@
 #define BNXT_MIN_RING_DESC	16
 #define BNXT_MAX_TX_RING_DESC	4096
 #define BNXT_MAX_RX_RING_DESC	8192
+#define BNXT_DB_SIZE		0x80
 
 #define BNXT_INT_LAT_TMR_MIN			75
 #define BNXT_INT_LAT_TMR_MAX			150
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index 0bb3f29d9..22cf8fb93 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -198,13 +198,14 @@ static int bnxt_alloc_mem(struct bnxt *bp)
 
 static int bnxt_init_chip(struct bnxt *bp)
 {
-	unsigned int i;
+	struct bnxt_rx_queue *rxq;
 	struct rte_eth_link new;
 	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(bp->eth_dev);
 	struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
 	uint32_t intr_vector = 0;
 	uint32_t queue_id, base = BNXT_MISC_VEC_ID;
 	uint32_t vec = BNXT_MISC_VEC_ID;
+	unsigned int i, j;
 	int rc;
 
 	/* disable uio/vfio intr/eventfd mapping */
@@ -278,6 +279,13 @@ static int bnxt_init_chip(struct bnxt *bp)
 			goto err_out;
 		}
 
+		for (j = 0; j < bp->rx_nr_rings; j++) {
+			rxq = bp->eth_dev->data->rx_queues[j];
+
+			if (rxq->rx_deferred_start)
+				rxq->vnic->fw_grp_ids[j] = INVALID_HW_RING_ID;
+		}
+
 		rc = bnxt_vnic_rss_configure(bp, vnic);
 		if (rc) {
 			PMD_DRV_LOG(ERR,
diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c
index 707ee62e0..64687a69b 100644
--- a/drivers/net/bnxt/bnxt_hwrm.c
+++ b/drivers/net/bnxt/bnxt_hwrm.c
@@ -1817,8 +1817,7 @@ int bnxt_free_all_hwrm_ring_grps(struct bnxt *bp)
 	return rc;
 }
 
-static void bnxt_free_cp_ring(struct bnxt *bp, struct bnxt_cp_ring_info *cpr,
-				unsigned int idx __rte_unused)
+static void bnxt_free_cp_ring(struct bnxt *bp, struct bnxt_cp_ring_info *cpr)
 {
 	struct bnxt_ring *cp_ring = cpr->cp_ring_struct;
 
@@ -1830,17 +1829,52 @@ static void bnxt_free_cp_ring(struct bnxt *bp, struct bnxt_cp_ring_info *cpr,
 	cpr->cp_raw_cons = 0;
 }
 
+void bnxt_free_hwrm_rx_ring(struct bnxt *bp, int queue_index)
+{
+	struct bnxt_rx_queue *rxq = bp->rx_queues[queue_index];
+	struct bnxt_rx_ring_info *rxr = rxq->rx_ring;
+	struct bnxt_ring *ring = rxr->rx_ring_struct;
+	struct bnxt_cp_ring_info *cpr = rxq->cp_ring;
+
+	if (ring->fw_ring_id != INVALID_HW_RING_ID) {
+		bnxt_hwrm_ring_free(bp, ring,
+				    HWRM_RING_FREE_INPUT_RING_TYPE_RX);
+		ring->fw_ring_id = INVALID_HW_RING_ID;
+		bp->grp_info[queue_index].rx_fw_ring_id = INVALID_HW_RING_ID;
+		memset(rxr->rx_desc_ring, 0,
+		       rxr->rx_ring_struct->ring_size *
+		       sizeof(*rxr->rx_desc_ring));
+		memset(rxr->rx_buf_ring, 0,
+		       rxr->rx_ring_struct->ring_size *
+		       sizeof(*rxr->rx_buf_ring));
+		rxr->rx_prod = 0;
+	}
+	ring = rxr->ag_ring_struct;
+	if (ring->fw_ring_id != INVALID_HW_RING_ID) {
+		bnxt_hwrm_ring_free(bp, ring,
+				    HWRM_RING_FREE_INPUT_RING_TYPE_RX);
+		ring->fw_ring_id = INVALID_HW_RING_ID;
+		memset(rxr->ag_buf_ring, 0,
+		       rxr->ag_ring_struct->ring_size *
+		       sizeof(*rxr->ag_buf_ring));
+		rxr->ag_prod = 0;
+		bp->grp_info[queue_index].ag_fw_ring_id = INVALID_HW_RING_ID;
+	}
+	if (cpr->cp_ring_struct->fw_ring_id != INVALID_HW_RING_ID)
+		bnxt_free_cp_ring(bp, cpr);
+
+	bp->grp_info[queue_index].cp_fw_ring_id = INVALID_HW_RING_ID;
+}
+
 int bnxt_free_all_hwrm_rings(struct bnxt *bp)
 {
 	unsigned int i;
-	int rc = 0;
 
 	for (i = 0; i < bp->tx_cp_nr_rings; i++) {
 		struct bnxt_tx_queue *txq = bp->tx_queues[i];
 		struct bnxt_tx_ring_info *txr = txq->tx_ring;
 		struct bnxt_ring *ring = txr->tx_ring_struct;
 		struct bnxt_cp_ring_info *cpr = txq->cp_ring;
-		unsigned int idx = bp->rx_cp_nr_rings + i;
 
 		if (ring->fw_ring_id != INVALID_HW_RING_ID) {
 			bnxt_hwrm_ring_free(bp, ring,
@@ -1856,59 +1890,15 @@ int bnxt_free_all_hwrm_rings(struct bnxt *bp)
 			txr->tx_cons = 0;
 		}
 		if (cpr->cp_ring_struct->fw_ring_id != INVALID_HW_RING_ID) {
-			bnxt_free_cp_ring(bp, cpr, idx);
+			bnxt_free_cp_ring(bp, cpr);
 			cpr->cp_ring_struct->fw_ring_id = INVALID_HW_RING_ID;
 		}
 	}
 
-	for (i = 0; i < bp->rx_cp_nr_rings; i++) {
-		struct bnxt_rx_queue *rxq = bp->rx_queues[i];
-		struct bnxt_rx_ring_info *rxr = rxq->rx_ring;
-		struct bnxt_ring *ring = rxr->rx_ring_struct;
-		struct bnxt_cp_ring_info *cpr = rxq->cp_ring;
+	for (i = 0; i < bp->rx_cp_nr_rings; i++)
+		bnxt_free_hwrm_rx_ring(bp, i);
 
-		if (ring->fw_ring_id != INVALID_HW_RING_ID) {
-			bnxt_hwrm_ring_free(bp, ring,
-					HWRM_RING_FREE_INPUT_RING_TYPE_RX);
-			ring->fw_ring_id = INVALID_HW_RING_ID;
-			bp->grp_info[i].rx_fw_ring_id = INVALID_HW_RING_ID;
-			memset(rxr->rx_desc_ring, 0,
-					rxr->rx_ring_struct->ring_size *
-					sizeof(*rxr->rx_desc_ring));
-			memset(rxr->rx_buf_ring, 0,
-					rxr->rx_ring_struct->ring_size *
-					sizeof(*rxr->rx_buf_ring));
-			rxr->rx_prod = 0;
-		}
-		ring = rxr->ag_ring_struct;
-		if (ring->fw_ring_id != INVALID_HW_RING_ID) {
-			bnxt_hwrm_ring_free(bp, ring,
-					    HWRM_RING_FREE_INPUT_RING_TYPE_RX);
-			ring->fw_ring_id = INVALID_HW_RING_ID;
-			memset(rxr->ag_buf_ring, 0,
-			       rxr->ag_ring_struct->ring_size *
-			       sizeof(*rxr->ag_buf_ring));
-			rxr->ag_prod = 0;
-			bp->grp_info[i].ag_fw_ring_id = INVALID_HW_RING_ID;
-		}
-		if (cpr->cp_ring_struct->fw_ring_id != INVALID_HW_RING_ID) {
-			bnxt_free_cp_ring(bp, cpr, i);
-			bp->grp_info[i].cp_fw_ring_id = INVALID_HW_RING_ID;
-			cpr->cp_ring_struct->fw_ring_id = INVALID_HW_RING_ID;
-		}
-	}
-
-	/* Default completion ring */
-	{
-		struct bnxt_cp_ring_info *cpr = bp->def_cp_ring;
-
-		if (cpr->cp_ring_struct->fw_ring_id != INVALID_HW_RING_ID) {
-			bnxt_free_cp_ring(bp, cpr, 0);
-			cpr->cp_ring_struct->fw_ring_id = INVALID_HW_RING_ID;
-		}
-	}
-
-	return rc;
+	return 0;
 }
 
 int bnxt_alloc_all_hwrm_ring_grps(struct bnxt *bp)
diff --git a/drivers/net/bnxt/bnxt_hwrm.h b/drivers/net/bnxt/bnxt_hwrm.h
index b83aab306..4a237c4b4 100644
--- a/drivers/net/bnxt/bnxt_hwrm.h
+++ b/drivers/net/bnxt/bnxt_hwrm.h
@@ -107,6 +107,7 @@ int bnxt_set_hwrm_vnic_filters(struct bnxt *bp, struct bnxt_vnic_info *vnic);
 int bnxt_clear_hwrm_vnic_filters(struct bnxt *bp, struct bnxt_vnic_info *vnic);
 void bnxt_free_all_hwrm_resources(struct bnxt *bp);
 void bnxt_free_hwrm_resources(struct bnxt *bp);
+void bnxt_free_hwrm_rx_ring(struct bnxt *bp, int queue_index);
 int bnxt_alloc_hwrm_resources(struct bnxt *bp);
 int bnxt_get_hwrm_link_config(struct bnxt *bp, struct rte_eth_link *link);
 int bnxt_set_hwrm_link_config(struct bnxt *bp, bool link_up);
diff --git a/drivers/net/bnxt/bnxt_ring.c b/drivers/net/bnxt/bnxt_ring.c
index 81eb89d74..fcbd6bc6e 100644
--- a/drivers/net/bnxt/bnxt_ring.c
+++ b/drivers/net/bnxt/bnxt_ring.c
@@ -276,6 +276,98 @@ static void bnxt_init_dflt_coal(struct bnxt_coal *coal)
 	coal->cmpl_aggr_dma_tmr_during_int = BNXT_CMPL_AGGR_DMA_TMR_DURING_INT;
 }
 
+int bnxt_alloc_hwrm_rx_ring(struct bnxt *bp, int queue_index)
+{
+	struct rte_pci_device *pci_dev = bp->pdev;
+	struct bnxt_rx_queue *rxq = bp->rx_queues[queue_index];
+	struct bnxt_cp_ring_info *cpr = rxq->cp_ring;
+	struct bnxt_ring *cp_ring = cpr->cp_ring_struct;
+	struct bnxt_rx_ring_info *rxr = rxq->rx_ring;
+	struct bnxt_ring *ring = rxr->rx_ring_struct;
+	unsigned int map_idx = queue_index + bp->rx_cp_nr_rings;
+	int rc = 0;
+
+	bp->grp_info[queue_index].fw_stats_ctx = cpr->hw_stats_ctx_id;
+
+	/* Rx cmpl */
+	rc = bnxt_hwrm_ring_alloc(bp, cp_ring,
+				  HWRM_RING_ALLOC_INPUT_RING_TYPE_L2_CMPL,
+				  queue_index, HWRM_NA_SIGNATURE,
+				  HWRM_NA_SIGNATURE);
+	if (rc)
+		goto err_out;
+
+	cpr->cp_doorbell = (char *)pci_dev->mem_resource[2].addr +
+		queue_index * BNXT_DB_SIZE;
+	bp->grp_info[queue_index].cp_fw_ring_id = cp_ring->fw_ring_id;
+	B_CP_DIS_DB(cpr, cpr->cp_raw_cons);
+
+	if (!queue_index) {
+		/*
+		 * In order to save completion resources, use the first
+		 * completion ring from PF or VF as the default completion ring
+		 * for async event and HWRM forward response handling.
+		 */
+		bp->def_cp_ring = cpr;
+		rc = bnxt_hwrm_set_async_event_cr(bp);
+		if (rc)
+			goto err_out;
+	}
+	/* Rx ring */
+	rc = bnxt_hwrm_ring_alloc(bp, ring, HWRM_RING_ALLOC_INPUT_RING_TYPE_RX,
+				  queue_index, cpr->hw_stats_ctx_id,
+				  cp_ring->fw_ring_id);
+	if (rc)
+		goto err_out;
+
+	rxr->rx_prod = 0;
+	rxr->rx_doorbell = (char *)pci_dev->mem_resource[2].addr +
+		queue_index * BNXT_DB_SIZE;
+	bp->grp_info[queue_index].rx_fw_ring_id = ring->fw_ring_id;
+	B_RX_DB(rxr->rx_doorbell, rxr->rx_prod);
+
+	ring = rxr->ag_ring_struct;
+	/* Agg ring */
+	if (!ring)
+		PMD_DRV_LOG(ERR, "Alloc AGG Ring is NULL!\n");
+
+	rc = bnxt_hwrm_ring_alloc(bp, ring, HWRM_RING_ALLOC_INPUT_RING_TYPE_RX,
+				  map_idx, HWRM_NA_SIGNATURE,
+				  cp_ring->fw_ring_id);
+	if (rc)
+		goto err_out;
+
+	PMD_DRV_LOG(DEBUG, "Alloc AGG Done!\n");
+	rxr->ag_prod = 0;
+	rxr->ag_doorbell = (char *)pci_dev->mem_resource[2].addr +
+		map_idx * BNXT_DB_SIZE;
+	bp->grp_info[queue_index].ag_fw_ring_id = ring->fw_ring_id;
+	B_RX_DB(rxr->ag_doorbell, rxr->ag_prod);
+
+	rxq->rx_buf_use_size = BNXT_MAX_MTU + ETHER_HDR_LEN +
+		ETHER_CRC_LEN + (2 * VLAN_TAG_SIZE);
+
+	if (bp->eth_dev->data->rx_queue_state[queue_index] ==
+	    RTE_ETH_QUEUE_STATE_STARTED) {
+		if (bnxt_init_one_rx_ring(rxq)) {
+			RTE_LOG(ERR, PMD,
+				"bnxt_init_one_rx_ring failed!\n");
+			bnxt_rx_queue_release_op(rxq);
+			rc = -ENOMEM;
+			goto err_out;
+		}
+		B_RX_DB(rxr->rx_doorbell, rxr->rx_prod);
+		B_RX_DB(rxr->ag_doorbell, rxr->ag_prod);
+	}
+	rxq->index = queue_index;
+	PMD_DRV_LOG(INFO,
+		    "queue %d, rx_deferred_start %d, state %d!\n",
+		    queue_index, rxq->rx_deferred_start,
+		    bp->eth_dev->data->rx_queue_state[queue_index]);
+
+err_out:
+	return rc;
+}
 /* ring_grp usage:
  * [0] = default completion ring
  * [1 -> +rx_cp_nr_rings] = rx_cp, rx rings
diff --git a/drivers/net/bnxt/bnxt_ring.h b/drivers/net/bnxt/bnxt_ring.h
index 65bf3e2f5..1446d784f 100644
--- a/drivers/net/bnxt/bnxt_ring.h
+++ b/drivers/net/bnxt/bnxt_ring.h
@@ -70,6 +70,7 @@ int bnxt_alloc_rings(struct bnxt *bp, uint16_t qidx,
 			    struct bnxt_rx_queue *rxq,
 			    struct bnxt_cp_ring_info *cp_ring_info,
 			    const char *suffix);
+int bnxt_alloc_hwrm_rx_ring(struct bnxt *bp, int queue_index);
 int bnxt_alloc_hwrm_rings(struct bnxt *bp);
 
 #endif
diff --git a/drivers/net/bnxt/bnxt_rxq.c b/drivers/net/bnxt/bnxt_rxq.c
index c55ddec4b..f405e2575 100644
--- a/drivers/net/bnxt/bnxt_rxq.c
+++ b/drivers/net/bnxt/bnxt_rxq.c
@@ -199,12 +199,14 @@ int bnxt_mq_rx_configure(struct bnxt *bp)
 	return rc;
 }
 
-static void bnxt_rx_queue_release_mbufs(struct bnxt_rx_queue *rxq)
+void bnxt_rx_queue_release_mbufs(struct bnxt_rx_queue *rxq)
 {
 	struct bnxt_sw_rx_bd *sw_ring;
 	struct bnxt_tpa_info *tpa_info;
 	uint16_t i;
 
+	rte_spinlock_lock(&rxq->lock);
+
 	if (rxq) {
 		sw_ring = rxq->rx_ring->rx_buf_ring;
 		if (sw_ring) {
@@ -239,6 +241,8 @@ static void bnxt_rx_queue_release_mbufs(struct bnxt_rx_queue *rxq)
 			}
 		}
 	}
+
+	rte_spinlock_unlock(&rxq->lock);
 }
 
 void bnxt_free_rx_mbufs(struct bnxt *bp)
@@ -286,6 +290,7 @@ int bnxt_rx_queue_setup_op(struct rte_eth_dev *eth_dev,
 	uint64_t rx_offloads = eth_dev->data->dev_conf.rxmode.offloads;
 	struct bnxt_rx_queue *rxq;
 	int rc = 0;
+	uint8_t queue_state;
 
 	if (queue_idx >= bp->max_rx_rings) {
 		PMD_DRV_LOG(ERR,
@@ -341,6 +346,11 @@ int bnxt_rx_queue_setup_op(struct rte_eth_dev *eth_dev,
 	}
 	rte_atomic64_init(&rxq->rx_mbuf_alloc_fail);
 
+	rxq->rx_deferred_start = rx_conf->rx_deferred_start;
+	queue_state = rxq->rx_deferred_start ? RTE_ETH_QUEUE_STATE_STOPPED :
+						RTE_ETH_QUEUE_STATE_STARTED;
+	eth_dev->data->rx_queue_state[queue_idx] = queue_state;
+	rte_spinlock_init(&rxq->lock);
 out:
 	return rc;
 }
@@ -389,6 +399,7 @@ int bnxt_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 	struct rte_eth_conf *dev_conf = &bp->eth_dev->data->dev_conf;
 	struct bnxt_rx_queue *rxq = bp->rx_queues[rx_queue_id];
 	struct bnxt_vnic_info *vnic = NULL;
+	int rc = 0;
 
 	if (rxq == NULL) {
 		PMD_DRV_LOG(ERR, "Invalid Rx queue %d\n", rx_queue_id);
@@ -396,28 +407,47 @@ int bnxt_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 	}
 
 	dev->data->rx_queue_state[rx_queue_id] = RTE_ETH_QUEUE_STATE_STARTED;
-	rxq->rx_deferred_start = false;
+
+	bnxt_free_hwrm_rx_ring(bp, rx_queue_id);
+	bnxt_alloc_hwrm_rx_ring(bp, rx_queue_id);
 	PMD_DRV_LOG(INFO, "Rx queue started %d\n", rx_queue_id);
+
 	if (dev_conf->rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG) {
 		vnic = rxq->vnic;
+
 		if (vnic->fw_grp_ids[rx_queue_id] != INVALID_HW_RING_ID)
 			return 0;
-		PMD_DRV_LOG(DEBUG, "vnic = %p fw_grp_id = %d\n",
-			vnic, bp->grp_info[rx_queue_id + 1].fw_grp_id);
+
+		PMD_DRV_LOG(DEBUG,
+			    "vnic = %p fw_grp_id = %d\n",
+			    vnic, bp->grp_info[rx_queue_id].fw_grp_id);
+
 		vnic->fw_grp_ids[rx_queue_id] =
-					bp->grp_info[rx_queue_id + 1].fw_grp_id;
-		return bnxt_vnic_rss_configure(bp, vnic);
+					bp->grp_info[rx_queue_id].fw_grp_id;
+		rc = bnxt_vnic_rss_configure(bp, vnic);
 	}
 
-	return 0;
+	if (rc == 0)
+		rxq->rx_deferred_start = false;
+
+	return rc;
 }
 
 int bnxt_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 {
 	struct bnxt *bp = (struct bnxt *)dev->data->dev_private;
 	struct rte_eth_conf *dev_conf = &bp->eth_dev->data->dev_conf;
-	struct bnxt_rx_queue *rxq = bp->rx_queues[rx_queue_id];
 	struct bnxt_vnic_info *vnic = NULL;
+	struct bnxt_rx_queue *rxq = NULL;
+	int rc = 0;
+
+	/* Rx CQ 0 also works as Default CQ for async notifications */
+	if (!rx_queue_id) {
+		PMD_DRV_LOG(ERR, "Cannot stop Rx queue id %d\n", rx_queue_id);
+		return -EINVAL;
+	}
+
+	rxq = bp->rx_queues[rx_queue_id];
 
 	if (rxq == NULL) {
 		PMD_DRV_LOG(ERR, "Invalid Rx queue %d\n", rx_queue_id);
@@ -431,7 +461,11 @@ int bnxt_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 	if (dev_conf->rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG) {
 		vnic = rxq->vnic;
 		vnic->fw_grp_ids[rx_queue_id] = INVALID_HW_RING_ID;
-		return bnxt_vnic_rss_configure(bp, vnic);
+		rc = bnxt_vnic_rss_configure(bp, vnic);
 	}
-	return 0;
+
+	if (rc == 0)
+		bnxt_rx_queue_release_mbufs(rxq);
+
+	return rc;
 }
diff --git a/drivers/net/bnxt/bnxt_rxq.h b/drivers/net/bnxt/bnxt_rxq.h
index 8307f603c..e5d6001d3 100644
--- a/drivers/net/bnxt/bnxt_rxq.h
+++ b/drivers/net/bnxt/bnxt_rxq.h
@@ -10,6 +10,9 @@ struct bnxt;
 struct bnxt_rx_ring_info;
 struct bnxt_cp_ring_info;
 struct bnxt_rx_queue {
+	rte_spinlock_t		lock;	/* Synchronize between rx_queue_stop
+					 * and fast path
+					 */
 	struct rte_mempool	*mb_pool; /* mbuf pool for RX ring */
 	struct rte_mbuf		*pkt_first_seg; /* 1st seg of pkt */
 	struct rte_mbuf		*pkt_last_seg; /* Last seg of pkt */
@@ -54,4 +57,5 @@ int bnxt_rx_queue_start(struct rte_eth_dev *dev,
 			uint16_t rx_queue_id);
 int bnxt_rx_queue_stop(struct rte_eth_dev *dev,
 		       uint16_t rx_queue_id);
+void bnxt_rx_queue_release_mbufs(struct bnxt_rx_queue *rxq);
 #endif
diff --git a/drivers/net/bnxt/bnxt_rxr.c b/drivers/net/bnxt/bnxt_rxr.c
index b6b72c553..c7bc88481 100644
--- a/drivers/net/bnxt/bnxt_rxr.c
+++ b/drivers/net/bnxt/bnxt_rxr.c
@@ -541,7 +541,9 @@ uint16_t bnxt_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 	bool evt = false;
 
 	/* If Rx Q was stopped return. RxQ0 cannot be stopped. */
-	if (rxq->rx_deferred_start && rxq->queue_id)
+	if (unlikely(((rxq->rx_deferred_start ||
+		       !rte_spinlock_trylock(&rxq->lock)) &&
+		      rxq->queue_id)))
 		return 0;
 
 	/* Handle RX burst request */
@@ -583,7 +585,7 @@ uint16_t bnxt_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 		 * For PMD, there is no need to keep on pushing to REARM
 		 * the doorbell if there are no new completions
 		 */
-		return nb_rx_pkts;
+		goto done;
 	}
 
 	if (prod != rxr->rx_prod)
@@ -618,16 +620,22 @@ uint16_t bnxt_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 		}
 	}
 
+done:
+	rte_spinlock_unlock(&rxq->lock);
+
 	return nb_rx_pkts;
 }
 
 void bnxt_free_rx_rings(struct bnxt *bp)
 {
 	int i;
+	struct bnxt_rx_queue *rxq;
 
-	for (i = 0; i < (int)bp->rx_nr_rings; i++) {
-		struct bnxt_rx_queue *rxq = bp->rx_queues[i];
+	if (!bp->rx_queues)
+		return;
 
+	for (i = 0; i < (int)bp->rx_nr_rings; i++) {
+		rxq = bp->rx_queues[i];
 		if (!rxq)
 			continue;
 
-- 
2.15.2 (Apple Git-101.1)
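
One detail the log above does not spell out: bnxt_recv_pkts() now
try-locks rxq->lock and reports zero packets when the lock is
unavailable, while the mbuf release on queue stop takes the same lock
unconditionally, so buffers are never freed under an in-flight Rx burst
(queue 0, which cannot be stopped, is exempt). A rough stand-alone
sketch of that pattern, assuming only the rte_spinlock API; the
structure and function names below are made up:

#include <stdio.h>
#include <rte_spinlock.h>

struct demo_rxq {
	rte_spinlock_t lock;
	int deferred_start;
};

/* Fast path: never block. If the queue is stopped or being torn down,
 * report zero packets received. */
static int demo_recv_burst(struct demo_rxq *q)
{
	if (q->deferred_start || !rte_spinlock_trylock(&q->lock))
		return 0;
	/* ... process completions and refill buffers here ... */
	rte_spinlock_unlock(&q->lock);
	return 1;
}

/* Stop path: wait for any in-flight burst to finish, then release mbufs. */
static void demo_release_mbufs(struct demo_rxq *q)
{
	rte_spinlock_lock(&q->lock);
	/* ... free the software ring's mbufs here ... */
	rte_spinlock_unlock(&q->lock);
}

int main(void)
{
	struct demo_rxq q = { .deferred_start = 0 };

	rte_spinlock_init(&q.lock);
	printf("burst handled: %d\n", demo_recv_burst(&q));
	demo_release_mbufs(&q);
	return 0;
}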

^ permalink raw reply	[flat|nested] 73+ messages in thread

* [dpdk-dev] [PATCH v2 10/23] net/bnxt: move function check zero bytes to bnxt util.h
  2018-06-28 20:15   ` [dpdk-dev] [PATCH v2 00/23] " Ajit Khaparde
                       ` (8 preceding siblings ...)
  2018-06-28 20:15     ` [dpdk-dev] [PATCH v2 09/23] net/bnxt: fix Rx/Tx queue start/stop operations Ajit Khaparde
@ 2018-06-28 20:15     ` Ajit Khaparde
  2018-07-02 12:20       ` Ferruh Yigit
  2018-06-28 20:15     ` [dpdk-dev] [PATCH v2 11/23] net/bnxt: refactor filter/flow Ajit Khaparde
                       ` (13 subsequent siblings)
  23 siblings, 1 reply; 73+ messages in thread
From: Ajit Khaparde @ 2018-06-28 20:15 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, Scott Branden

From: Scott Branden <scott.branden@broadcom.com>

Move bnxt_check_zero_bytes() into the new bnxt_util.c file, with its
declaration in bnxt_util.h.

Signed-off-by: Scott Branden <scott.branden@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/Makefile      |  1 +
 drivers/net/bnxt/bnxt_ethdev.c |  1 +
 drivers/net/bnxt/bnxt_filter.c |  9 ---------
 drivers/net/bnxt/bnxt_filter.h |  1 -
 drivers/net/bnxt/bnxt_util.c   | 18 ++++++++++++++++++
 drivers/net/bnxt/bnxt_util.h   | 11 +++++++++++
 6 files changed, 31 insertions(+), 10 deletions(-)
 create mode 100644 drivers/net/bnxt/bnxt_util.c
 create mode 100644 drivers/net/bnxt/bnxt_util.h

diff --git a/drivers/net/bnxt/Makefile b/drivers/net/bnxt/Makefile
index fd0cb5235..80db03ea8 100644
--- a/drivers/net/bnxt/Makefile
+++ b/drivers/net/bnxt/Makefile
@@ -38,6 +38,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += bnxt_txq.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += bnxt_txr.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += bnxt_vnic.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += bnxt_irq.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += bnxt_util.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += rte_pmd_bnxt.c
 
 #
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index 22cf8fb93..ab3f5c8e7 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -26,6 +26,7 @@
 #include "bnxt_vnic.h"
 #include "hsi_struct_def_dpdk.h"
 #include "bnxt_nvm_defs.h"
+#include "bnxt_util.h"
 
 #define DRV_MODULE_NAME		"bnxt"
 static const char bnxt_version[] =
diff --git a/drivers/net/bnxt/bnxt_filter.c b/drivers/net/bnxt/bnxt_filter.c
index e36da9977..72989ab67 100644
--- a/drivers/net/bnxt/bnxt_filter.c
+++ b/drivers/net/bnxt/bnxt_filter.c
@@ -231,15 +231,6 @@ nxt_non_void_action(const struct rte_flow_action *cur)
 	}
 }
 
-int bnxt_check_zero_bytes(const uint8_t *bytes, int len)
-{
-	int i;
-	for (i = 0; i < len; i++)
-		if (bytes[i] != 0x00)
-			return 0;
-	return 1;
-}
-
 static int
 bnxt_filter_type_check(const struct rte_flow_item pattern[],
 		       struct rte_flow_error *error __rte_unused)
diff --git a/drivers/net/bnxt/bnxt_filter.h b/drivers/net/bnxt/bnxt_filter.h
index d27be7032..a1ecfb19d 100644
--- a/drivers/net/bnxt/bnxt_filter.h
+++ b/drivers/net/bnxt/bnxt_filter.h
@@ -69,7 +69,6 @@ struct bnxt_filter_info *bnxt_get_unused_filter(struct bnxt *bp);
 void bnxt_free_filter(struct bnxt *bp, struct bnxt_filter_info *filter);
 struct bnxt_filter_info *bnxt_get_l2_filter(struct bnxt *bp,
 		struct bnxt_filter_info *nf, struct bnxt_vnic_info *vnic);
-int bnxt_check_zero_bytes(const uint8_t *bytes, int len);
 
 #define NTUPLE_FLTR_ALLOC_INPUT_EN_SRC_MACADDR	\
 	HWRM_CFA_NTUPLE_FILTER_ALLOC_INPUT_ENABLES_SRC_MACADDR
diff --git a/drivers/net/bnxt/bnxt_util.c b/drivers/net/bnxt/bnxt_util.c
new file mode 100644
index 000000000..7d3342719
--- /dev/null
+++ b/drivers/net/bnxt/bnxt_util.c
@@ -0,0 +1,18 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2014-2018 Broadcom
+ * All rights reserved.
+ */
+
+#include <inttypes.h>
+
+#include "bnxt_util.h"
+
+int bnxt_check_zero_bytes(const uint8_t *bytes, int len)
+{
+	int i;
+
+	for (i = 0; i < len; i++)
+		if (bytes[i] != 0x00)
+			return 0;
+	return 1;
+}
diff --git a/drivers/net/bnxt/bnxt_util.h b/drivers/net/bnxt/bnxt_util.h
new file mode 100644
index 000000000..2378833cc
--- /dev/null
+++ b/drivers/net/bnxt/bnxt_util.h
@@ -0,0 +1,11 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2014-2018 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _BNXT_UTIL_H_
+#define _BNXT_UTIL_H_
+
+int bnxt_check_zero_bytes(const uint8_t *bytes, int len);
+
+#endif /* _BNXT_UTIL_H_ */
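
In the flow code this helper is used to tell whether a mask (for
example an IPv6 address mask) was supplied at all. A small stand-alone
usage sketch; the function body mirrors the hunk above and the mask
values are just examples:

#include <stdio.h>
#include <stdint.h>

/* Same body as bnxt_check_zero_bytes() above, repeated here so the
 * example compiles on its own: returns 1 if every byte is zero. */
static int check_zero_bytes(const uint8_t *bytes, int len)
{
	int i;

	for (i = 0; i < len; i++)
		if (bytes[i] != 0x00)
			return 0;
	return 1;
}

int main(void)
{
	uint8_t unset_mask[16] = { 0 };		/* e.g. an untouched mask */
	uint8_t partial_mask[16] = { 0 };

	partial_mask[15] = 0xff;		/* one byte set */

	/* Prints "1 0": only the all-zero mask passes the check. */
	printf("%d %d\n", check_zero_bytes(unset_mask, 16),
	       check_zero_bytes(partial_mask, 16));
	return 0;
}
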
-- 
2.15.2 (Apple Git-101.1)

^ permalink raw reply	[flat|nested] 73+ messages in thread

* [dpdk-dev] [PATCH v2 11/23] net/bnxt: refactor filter/flow
  2018-06-28 20:15   ` [dpdk-dev] [PATCH v2 00/23] " Ajit Khaparde
                       ` (9 preceding siblings ...)
  2018-06-28 20:15     ` [dpdk-dev] [PATCH v2 10/23] net/bnxt: move function check zero bytes to bnxt util.h Ajit Khaparde
@ 2018-06-28 20:15     ` Ajit Khaparde
  2018-06-28 20:15     ` [dpdk-dev] [PATCH v2 12/23] net/bnxt: check for invalid vnic id Ajit Khaparde
                       ` (12 subsequent siblings)
  23 siblings, 0 replies; 73+ messages in thread
From: Ajit Khaparde @ 2018-06-28 20:15 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, Michael Wildt, Scott Branden

In preparation for more rte_flow support, separate the filter and flow
code into their own files. The code is functionally the same.

Signed-off-by: Michael Wildt <michael.wildt@broadcom.com>
Signed-off-by: Scott Branden <scott.branden@broadcom.com>
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
--
v1->v2: Fix commit log.
---
 drivers/net/bnxt/Makefile      |    1 +
 drivers/net/bnxt/bnxt_filter.c | 1060 ------------------------------------
 drivers/net/bnxt/bnxt_flow.c   | 1167 ++++++++++++++++++++++++++++++++++++++++
 3 files changed, 1168 insertions(+), 1060 deletions(-)
 create mode 100644 drivers/net/bnxt/bnxt_flow.c

diff --git a/drivers/net/bnxt/Makefile b/drivers/net/bnxt/Makefile
index 80db03ea8..8be3cb0e4 100644
--- a/drivers/net/bnxt/Makefile
+++ b/drivers/net/bnxt/Makefile
@@ -29,6 +29,7 @@ EXPORT_MAP := rte_pmd_bnxt_version.map
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += bnxt_cpr.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += bnxt_ethdev.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += bnxt_filter.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += bnxt_flow.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += bnxt_hwrm.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += bnxt_ring.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += bnxt_rxq.c
diff --git a/drivers/net/bnxt/bnxt_filter.c b/drivers/net/bnxt/bnxt_filter.c
index 72989ab67..31757d32c 100644
--- a/drivers/net/bnxt/bnxt_filter.c
+++ b/drivers/net/bnxt/bnxt_filter.c
@@ -180,1063 +180,3 @@ void bnxt_free_filter(struct bnxt *bp, struct bnxt_filter_info *filter)
 {
 	STAILQ_INSERT_TAIL(&bp->free_filter_list, filter, next);
 }
-
-static int
-bnxt_flow_agrs_validate(const struct rte_flow_attr *attr,
-			const struct rte_flow_item pattern[],
-			const struct rte_flow_action actions[],
-			struct rte_flow_error *error)
-{
-	if (!pattern) {
-		rte_flow_error_set(error, EINVAL,
-			RTE_FLOW_ERROR_TYPE_ITEM_NUM,
-			NULL, "NULL pattern.");
-		return -rte_errno;
-	}
-
-	if (!actions) {
-		rte_flow_error_set(error, EINVAL,
-				   RTE_FLOW_ERROR_TYPE_ACTION_NUM,
-				   NULL, "NULL action.");
-		return -rte_errno;
-	}
-
-	if (!attr) {
-		rte_flow_error_set(error, EINVAL,
-				   RTE_FLOW_ERROR_TYPE_ATTR,
-				   NULL, "NULL attribute.");
-		return -rte_errno;
-	}
-
-	return 0;
-}
-
-static const struct rte_flow_item *
-nxt_non_void_pattern(const struct rte_flow_item *cur)
-{
-	while (1) {
-		if (cur->type != RTE_FLOW_ITEM_TYPE_VOID)
-			return cur;
-		cur++;
-	}
-}
-
-static const struct rte_flow_action *
-nxt_non_void_action(const struct rte_flow_action *cur)
-{
-	while (1) {
-		if (cur->type != RTE_FLOW_ACTION_TYPE_VOID)
-			return cur;
-		cur++;
-	}
-}
-
-static int
-bnxt_filter_type_check(const struct rte_flow_item pattern[],
-		       struct rte_flow_error *error __rte_unused)
-{
-	const struct rte_flow_item *item = nxt_non_void_pattern(pattern);
-	int use_ntuple = 1;
-
-	while (item->type != RTE_FLOW_ITEM_TYPE_END) {
-		switch (item->type) {
-		case RTE_FLOW_ITEM_TYPE_ETH:
-			use_ntuple = 1;
-			break;
-		case RTE_FLOW_ITEM_TYPE_VLAN:
-			use_ntuple = 0;
-			break;
-		case RTE_FLOW_ITEM_TYPE_IPV4:
-		case RTE_FLOW_ITEM_TYPE_IPV6:
-		case RTE_FLOW_ITEM_TYPE_TCP:
-		case RTE_FLOW_ITEM_TYPE_UDP:
-			/* FALLTHROUGH */
-			/* need ntuple match, reset exact match */
-			if (!use_ntuple) {
-				PMD_DRV_LOG(ERR,
-					"VLAN flow cannot use NTUPLE filter\n");
-				rte_flow_error_set(error, EINVAL,
-						   RTE_FLOW_ERROR_TYPE_ITEM,
-						   item,
-						   "Cannot use VLAN with NTUPLE");
-				return -rte_errno;
-			}
-			use_ntuple |= 1;
-			break;
-		default:
-			PMD_DRV_LOG(ERR, "Unknown Flow type");
-			use_ntuple |= 1;
-		}
-		item++;
-	}
-	return use_ntuple;
-}
-
-static int
-bnxt_validate_and_parse_flow_type(struct bnxt *bp,
-				  const struct rte_flow_attr *attr,
-				  const struct rte_flow_item pattern[],
-				  struct rte_flow_error *error,
-				  struct bnxt_filter_info *filter)
-{
-	const struct rte_flow_item *item = nxt_non_void_pattern(pattern);
-	const struct rte_flow_item_vlan *vlan_spec, *vlan_mask;
-	const struct rte_flow_item_ipv4 *ipv4_spec, *ipv4_mask;
-	const struct rte_flow_item_ipv6 *ipv6_spec, *ipv6_mask;
-	const struct rte_flow_item_tcp *tcp_spec, *tcp_mask;
-	const struct rte_flow_item_udp *udp_spec, *udp_mask;
-	const struct rte_flow_item_eth *eth_spec, *eth_mask;
-	const struct rte_flow_item_nvgre *nvgre_spec;
-	const struct rte_flow_item_nvgre *nvgre_mask;
-	const struct rte_flow_item_vxlan *vxlan_spec;
-	const struct rte_flow_item_vxlan *vxlan_mask;
-	uint8_t vni_mask[] = {0xFF, 0xFF, 0xFF};
-	uint8_t tni_mask[] = {0xFF, 0xFF, 0xFF};
-	const struct rte_flow_item_vf *vf_spec;
-	uint32_t tenant_id_be = 0;
-	bool vni_masked = 0;
-	bool tni_masked = 0;
-	uint32_t vf = 0;
-	int use_ntuple;
-	uint32_t en = 0;
-	uint32_t en_ethertype;
-	int dflt_vnic;
-
-	use_ntuple = bnxt_filter_type_check(pattern, error);
-	PMD_DRV_LOG(DEBUG, "Use NTUPLE %d\n", use_ntuple);
-	if (use_ntuple < 0)
-		return use_ntuple;
-
-	filter->filter_type = use_ntuple ?
-		HWRM_CFA_NTUPLE_FILTER : HWRM_CFA_EM_FILTER;
-	en_ethertype = use_ntuple ?
-		NTUPLE_FLTR_ALLOC_INPUT_EN_ETHERTYPE :
-		EM_FLOW_ALLOC_INPUT_EN_ETHERTYPE;
-
-	while (item->type != RTE_FLOW_ITEM_TYPE_END) {
-		if (item->last) {
-			/* last or range is NOT supported as match criteria */
-			rte_flow_error_set(error, EINVAL,
-					   RTE_FLOW_ERROR_TYPE_ITEM,
-					   item,
-					   "No support for range");
-			return -rte_errno;
-		}
-		if (!item->spec || !item->mask) {
-			rte_flow_error_set(error, EINVAL,
-					   RTE_FLOW_ERROR_TYPE_ITEM,
-					   item,
-					   "spec/mask is NULL");
-			return -rte_errno;
-		}
-		switch (item->type) {
-		case RTE_FLOW_ITEM_TYPE_ETH:
-			eth_spec = item->spec;
-			eth_mask = item->mask;
-
-			/* Source MAC address mask cannot be partially set.
-			 * Should be All 0's or all 1's.
-			 * Destination MAC address mask must not be partially
-			 * set. Should be all 1's or all 0's.
-			 */
-			if ((!is_zero_ether_addr(&eth_mask->src) &&
-			     !is_broadcast_ether_addr(&eth_mask->src)) ||
-			    (!is_zero_ether_addr(&eth_mask->dst) &&
-			     !is_broadcast_ether_addr(&eth_mask->dst))) {
-				rte_flow_error_set(error, EINVAL,
-						   RTE_FLOW_ERROR_TYPE_ITEM,
-						   item,
-						   "MAC_addr mask not valid");
-				return -rte_errno;
-			}
-
-			/* Mask is not allowed. Only exact matches are */
-			if (eth_mask->type &&
-			    eth_mask->type != RTE_BE16(0xffff)) {
-				rte_flow_error_set(error, EINVAL,
-						   RTE_FLOW_ERROR_TYPE_ITEM,
-						   item,
-						   "ethertype mask not valid");
-				return -rte_errno;
-			}
-
-			if (is_broadcast_ether_addr(&eth_mask->dst)) {
-				rte_memcpy(filter->dst_macaddr,
-					   &eth_spec->dst, 6);
-				en |= use_ntuple ?
-					NTUPLE_FLTR_ALLOC_INPUT_EN_DST_MACADDR :
-					EM_FLOW_ALLOC_INPUT_EN_DST_MACADDR;
-			}
-			if (is_broadcast_ether_addr(&eth_mask->src)) {
-				rte_memcpy(filter->src_macaddr,
-					   &eth_spec->src, 6);
-				en |= use_ntuple ?
-					NTUPLE_FLTR_ALLOC_INPUT_EN_SRC_MACADDR :
-					EM_FLOW_ALLOC_INPUT_EN_SRC_MACADDR;
-			} /*
-			   * else {
-			   *  RTE_LOG(ERR, PMD, "Handle this condition\n");
-			   * }
-			   */
-			if (eth_mask->type) {
-				filter->ethertype =
-					rte_be_to_cpu_16(eth_spec->type);
-				en |= en_ethertype;
-			}
-
-			break;
-		case RTE_FLOW_ITEM_TYPE_VLAN:
-			vlan_spec = item->spec;
-			vlan_mask = item->mask;
-			if (en & en_ethertype) {
-				rte_flow_error_set(error, EINVAL,
-						   RTE_FLOW_ERROR_TYPE_ITEM,
-						   item,
-						   "VLAN TPID matching is not"
-						   " supported");
-				return -rte_errno;
-			}
-			if (vlan_mask->tci &&
-			    vlan_mask->tci == RTE_BE16(0x0fff)) {
-				/* Only the VLAN ID can be matched. */
-				filter->l2_ovlan =
-					rte_be_to_cpu_16(vlan_spec->tci &
-							 RTE_BE16(0x0fff));
-				en |= EM_FLOW_ALLOC_INPUT_EN_OVLAN_VID;
-			} else if (vlan_mask->tci) {
-				rte_flow_error_set(error, EINVAL,
-						   RTE_FLOW_ERROR_TYPE_ITEM,
-						   item,
-						   "VLAN mask is invalid");
-				return -rte_errno;
-			}
-			if (vlan_mask->inner_type &&
-			    vlan_mask->inner_type != RTE_BE16(0xffff)) {
-				rte_flow_error_set(error, EINVAL,
-						   RTE_FLOW_ERROR_TYPE_ITEM,
-						   item,
-						   "inner ethertype mask not"
-						   " valid");
-				return -rte_errno;
-			}
-			if (vlan_mask->inner_type) {
-				filter->ethertype =
-					rte_be_to_cpu_16(vlan_spec->inner_type);
-				en |= en_ethertype;
-			}
-
-			break;
-		case RTE_FLOW_ITEM_TYPE_IPV4:
-			/* If mask is not involved, we could use EM filters. */
-			ipv4_spec = item->spec;
-			ipv4_mask = item->mask;
-			/* Only IP DST and SRC fields are maskable. */
-			if (ipv4_mask->hdr.version_ihl ||
-			    ipv4_mask->hdr.type_of_service ||
-			    ipv4_mask->hdr.total_length ||
-			    ipv4_mask->hdr.packet_id ||
-			    ipv4_mask->hdr.fragment_offset ||
-			    ipv4_mask->hdr.time_to_live ||
-			    ipv4_mask->hdr.next_proto_id ||
-			    ipv4_mask->hdr.hdr_checksum) {
-				rte_flow_error_set(error, EINVAL,
-					   RTE_FLOW_ERROR_TYPE_ITEM,
-					   item,
-					   "Invalid IPv4 mask.");
-				return -rte_errno;
-			}
-			filter->dst_ipaddr[0] = ipv4_spec->hdr.dst_addr;
-			filter->src_ipaddr[0] = ipv4_spec->hdr.src_addr;
-			if (use_ntuple)
-				en |= NTUPLE_FLTR_ALLOC_INPUT_EN_SRC_IPADDR |
-					NTUPLE_FLTR_ALLOC_INPUT_EN_DST_IPADDR;
-			else
-				en |= EM_FLOW_ALLOC_INPUT_EN_SRC_IPADDR |
-					EM_FLOW_ALLOC_INPUT_EN_DST_IPADDR;
-			if (ipv4_mask->hdr.src_addr) {
-				filter->src_ipaddr_mask[0] =
-					ipv4_mask->hdr.src_addr;
-				en |= !use_ntuple ? 0 :
-				     NTUPLE_FLTR_ALLOC_INPUT_EN_SRC_IPADDR_MASK;
-			}
-			if (ipv4_mask->hdr.dst_addr) {
-				filter->dst_ipaddr_mask[0] =
-					ipv4_mask->hdr.dst_addr;
-				en |= !use_ntuple ? 0 :
-				     NTUPLE_FLTR_ALLOC_INPUT_EN_DST_IPADDR_MASK;
-			}
-			filter->ip_addr_type = use_ntuple ?
-			 HWRM_CFA_NTUPLE_FILTER_ALLOC_INPUT_IP_ADDR_TYPE_IPV4 :
-			 HWRM_CFA_EM_FLOW_ALLOC_INPUT_IP_ADDR_TYPE_IPV4;
-			if (ipv4_spec->hdr.next_proto_id) {
-				filter->ip_protocol =
-					ipv4_spec->hdr.next_proto_id;
-				if (use_ntuple)
-					en |= NTUPLE_FLTR_ALLOC_IN_EN_IP_PROTO;
-				else
-					en |= EM_FLOW_ALLOC_INPUT_EN_IP_PROTO;
-			}
-			break;
-		case RTE_FLOW_ITEM_TYPE_IPV6:
-			ipv6_spec = item->spec;
-			ipv6_mask = item->mask;
-
-			/* Only IP DST and SRC fields are maskable. */
-			if (ipv6_mask->hdr.vtc_flow ||
-			    ipv6_mask->hdr.payload_len ||
-			    ipv6_mask->hdr.proto ||
-			    ipv6_mask->hdr.hop_limits) {
-				rte_flow_error_set(error, EINVAL,
-					   RTE_FLOW_ERROR_TYPE_ITEM,
-					   item,
-					   "Invalid IPv6 mask.");
-				return -rte_errno;
-			}
-
-			if (use_ntuple)
-				en |= NTUPLE_FLTR_ALLOC_INPUT_EN_SRC_IPADDR |
-					NTUPLE_FLTR_ALLOC_INPUT_EN_DST_IPADDR;
-			else
-				en |= EM_FLOW_ALLOC_INPUT_EN_SRC_IPADDR |
-					EM_FLOW_ALLOC_INPUT_EN_DST_IPADDR;
-			rte_memcpy(filter->src_ipaddr,
-				   ipv6_spec->hdr.src_addr, 16);
-			rte_memcpy(filter->dst_ipaddr,
-				   ipv6_spec->hdr.dst_addr, 16);
-			if (!bnxt_check_zero_bytes(ipv6_mask->hdr.src_addr,
-						   16)) {
-				rte_memcpy(filter->src_ipaddr_mask,
-					   ipv6_mask->hdr.src_addr, 16);
-				en |= !use_ntuple ? 0 :
-				    NTUPLE_FLTR_ALLOC_INPUT_EN_SRC_IPADDR_MASK;
-			}
-			if (!bnxt_check_zero_bytes(ipv6_mask->hdr.dst_addr,
-						   16)) {
-				rte_memcpy(filter->dst_ipaddr_mask,
-					   ipv6_mask->hdr.dst_addr, 16);
-				en |= !use_ntuple ? 0 :
-				     NTUPLE_FLTR_ALLOC_INPUT_EN_DST_IPADDR_MASK;
-			}
-			filter->ip_addr_type = use_ntuple ?
-				NTUPLE_FLTR_ALLOC_INPUT_IP_ADDR_TYPE_IPV6 :
-				EM_FLOW_ALLOC_INPUT_IP_ADDR_TYPE_IPV6;
-			break;
-		case RTE_FLOW_ITEM_TYPE_TCP:
-			tcp_spec = item->spec;
-			tcp_mask = item->mask;
-
-			/* Check TCP mask. Only DST & SRC ports are maskable */
-			if (tcp_mask->hdr.sent_seq ||
-			    tcp_mask->hdr.recv_ack ||
-			    tcp_mask->hdr.data_off ||
-			    tcp_mask->hdr.tcp_flags ||
-			    tcp_mask->hdr.rx_win ||
-			    tcp_mask->hdr.cksum ||
-			    tcp_mask->hdr.tcp_urp) {
-				rte_flow_error_set(error, EINVAL,
-					   RTE_FLOW_ERROR_TYPE_ITEM,
-					   item,
-					   "Invalid TCP mask");
-				return -rte_errno;
-			}
-			filter->src_port = tcp_spec->hdr.src_port;
-			filter->dst_port = tcp_spec->hdr.dst_port;
-			if (use_ntuple)
-				en |= NTUPLE_FLTR_ALLOC_INPUT_EN_SRC_PORT |
-					NTUPLE_FLTR_ALLOC_INPUT_EN_DST_PORT;
-			else
-				en |= EM_FLOW_ALLOC_INPUT_EN_SRC_PORT |
-					EM_FLOW_ALLOC_INPUT_EN_DST_PORT;
-			if (tcp_mask->hdr.dst_port) {
-				filter->dst_port_mask = tcp_mask->hdr.dst_port;
-				en |= !use_ntuple ? 0 :
-				  NTUPLE_FLTR_ALLOC_INPUT_EN_DST_PORT_MASK;
-			}
-			if (tcp_mask->hdr.src_port) {
-				filter->src_port_mask = tcp_mask->hdr.src_port;
-				en |= !use_ntuple ? 0 :
-				  NTUPLE_FLTR_ALLOC_INPUT_EN_SRC_PORT_MASK;
-			}
-			break;
-		case RTE_FLOW_ITEM_TYPE_UDP:
-			udp_spec = item->spec;
-			udp_mask = item->mask;
-
-			if (udp_mask->hdr.dgram_len ||
-			    udp_mask->hdr.dgram_cksum) {
-				rte_flow_error_set(error, EINVAL,
-					   RTE_FLOW_ERROR_TYPE_ITEM,
-					   item,
-					   "Invalid UDP mask");
-				return -rte_errno;
-			}
-
-			filter->src_port = udp_spec->hdr.src_port;
-			filter->dst_port = udp_spec->hdr.dst_port;
-			if (use_ntuple)
-				en |= NTUPLE_FLTR_ALLOC_INPUT_EN_SRC_PORT |
-					NTUPLE_FLTR_ALLOC_INPUT_EN_DST_PORT;
-			else
-				en |= EM_FLOW_ALLOC_INPUT_EN_SRC_PORT |
-					EM_FLOW_ALLOC_INPUT_EN_DST_PORT;
-
-			if (udp_mask->hdr.dst_port) {
-				filter->dst_port_mask = udp_mask->hdr.dst_port;
-				en |= !use_ntuple ? 0 :
-				  NTUPLE_FLTR_ALLOC_INPUT_EN_DST_PORT_MASK;
-			}
-			if (udp_mask->hdr.src_port) {
-				filter->src_port_mask = udp_mask->hdr.src_port;
-				en |= !use_ntuple ? 0 :
-				  NTUPLE_FLTR_ALLOC_INPUT_EN_SRC_PORT_MASK;
-			}
-			break;
-		case RTE_FLOW_ITEM_TYPE_VXLAN:
-			vxlan_spec = item->spec;
-			vxlan_mask = item->mask;
-			/* Check if VXLAN item is used to describe protocol.
-			 * If yes, both spec and mask should be NULL.
-			 * If no, both spec and mask shouldn't be NULL.
-			 */
-			if ((!vxlan_spec && vxlan_mask) ||
-			    (vxlan_spec && !vxlan_mask)) {
-				rte_flow_error_set(error, EINVAL,
-					   RTE_FLOW_ERROR_TYPE_ITEM,
-					   item,
-					   "Invalid VXLAN item");
-				return -rte_errno;
-			}
-
-			if (vxlan_spec->rsvd1 || vxlan_spec->rsvd0[0] ||
-			    vxlan_spec->rsvd0[1] || vxlan_spec->rsvd0[2] ||
-			    vxlan_spec->flags != 0x8) {
-				rte_flow_error_set(error, EINVAL,
-					   RTE_FLOW_ERROR_TYPE_ITEM,
-					   item,
-					   "Invalid VXLAN item");
-				return -rte_errno;
-			}
-
-			/* Check if VNI is masked. */
-			if (vxlan_spec && vxlan_mask) {
-				vni_masked =
-					!!memcmp(vxlan_mask->vni, vni_mask,
-						 RTE_DIM(vni_mask));
-				if (vni_masked) {
-					rte_flow_error_set(error, EINVAL,
-						   RTE_FLOW_ERROR_TYPE_ITEM,
-						   item,
-						   "Invalid VNI mask");
-					return -rte_errno;
-				}
-
-				rte_memcpy(((uint8_t *)&tenant_id_be + 1),
-					   vxlan_spec->vni, 3);
-				filter->vni =
-					rte_be_to_cpu_32(tenant_id_be);
-				filter->tunnel_type =
-				 CFA_NTUPLE_FILTER_ALLOC_REQ_TUNNEL_TYPE_VXLAN;
-			}
-			break;
-		case RTE_FLOW_ITEM_TYPE_NVGRE:
-			nvgre_spec = item->spec;
-			nvgre_mask = item->mask;
-			/* Check if NVGRE item is used to describe protocol.
-			 * If yes, both spec and mask should be NULL.
-			 * If no, both spec and mask shouldn't be NULL.
-			 */
-			if ((!nvgre_spec && nvgre_mask) ||
-			    (nvgre_spec && !nvgre_mask)) {
-				rte_flow_error_set(error, EINVAL,
-					   RTE_FLOW_ERROR_TYPE_ITEM,
-					   item,
-					   "Invalid NVGRE item");
-				return -rte_errno;
-			}
-
-			if (nvgre_spec->c_k_s_rsvd0_ver != 0x2000 ||
-			    nvgre_spec->protocol != 0x6558) {
-				rte_flow_error_set(error, EINVAL,
-					   RTE_FLOW_ERROR_TYPE_ITEM,
-					   item,
-					   "Invalid NVGRE item");
-				return -rte_errno;
-			}
-
-			if (nvgre_spec && nvgre_mask) {
-				tni_masked =
-					!!memcmp(nvgre_mask->tni, tni_mask,
-						 RTE_DIM(tni_mask));
-				if (tni_masked) {
-					rte_flow_error_set(error, EINVAL,
-						       RTE_FLOW_ERROR_TYPE_ITEM,
-						       item,
-						       "Invalid TNI mask");
-					return -rte_errno;
-				}
-				rte_memcpy(((uint8_t *)&tenant_id_be + 1),
-					   nvgre_spec->tni, 3);
-				filter->vni =
-					rte_be_to_cpu_32(tenant_id_be);
-				filter->tunnel_type =
-				 CFA_NTUPLE_FILTER_ALLOC_REQ_TUNNEL_TYPE_NVGRE;
-			}
-			break;
-		case RTE_FLOW_ITEM_TYPE_VF:
-			vf_spec = item->spec;
-			vf = vf_spec->id;
-			if (!BNXT_PF(bp)) {
-				rte_flow_error_set(error, EINVAL,
-					   RTE_FLOW_ERROR_TYPE_ITEM,
-					   item,
-					   "Configuring on a VF!");
-				return -rte_errno;
-			}
-
-			if (vf >= bp->pdev->max_vfs) {
-				rte_flow_error_set(error, EINVAL,
-					   RTE_FLOW_ERROR_TYPE_ITEM,
-					   item,
-					   "Incorrect VF id!");
-				return -rte_errno;
-			}
-
-			if (!attr->transfer) {
-				rte_flow_error_set(error, ENOTSUP,
-					   RTE_FLOW_ERROR_TYPE_ITEM,
-					   item,
-					   "Matching VF traffic without"
-					   " affecting it (transfer attribute)"
-					   " is unsupported");
-				return -rte_errno;
-			}
-
-			filter->mirror_vnic_id =
-			dflt_vnic = bnxt_hwrm_func_qcfg_vf_dflt_vnic_id(bp, vf);
-			if (dflt_vnic < 0) {
-				/* This simply indicates there's no driver
-				 * loaded. This is not an error.
-				 */
-				rte_flow_error_set(error, EINVAL,
-					   RTE_FLOW_ERROR_TYPE_ITEM,
-					   item,
-					   "Unable to get default VNIC for VF");
-				return -rte_errno;
-			}
-			filter->mirror_vnic_id = dflt_vnic;
-			en |= NTUPLE_FLTR_ALLOC_INPUT_EN_MIRROR_VNIC_ID;
-			break;
-		default:
-			break;
-		}
-		item++;
-	}
-	filter->enables = en;
-
-	return 0;
-}
-
-/* Parse attributes */
-static int
-bnxt_flow_parse_attr(const struct rte_flow_attr *attr,
-		     struct rte_flow_error *error)
-{
-	/* Must be input direction */
-	if (!attr->ingress) {
-		rte_flow_error_set(error, EINVAL,
-				   RTE_FLOW_ERROR_TYPE_ATTR_INGRESS,
-				   attr, "Only support ingress.");
-		return -rte_errno;
-	}
-
-	/* Not supported */
-	if (attr->egress) {
-		rte_flow_error_set(error, EINVAL,
-				   RTE_FLOW_ERROR_TYPE_ATTR_EGRESS,
-				   attr, "No support for egress.");
-		return -rte_errno;
-	}
-
-	/* Not supported */
-	if (attr->priority) {
-		rte_flow_error_set(error, EINVAL,
-				   RTE_FLOW_ERROR_TYPE_ATTR_PRIORITY,
-				   attr, "No support for priority.");
-		return -rte_errno;
-	}
-
-	/* Not supported */
-	if (attr->group) {
-		rte_flow_error_set(error, EINVAL,
-				   RTE_FLOW_ERROR_TYPE_ATTR_GROUP,
-				   attr, "No support for group.");
-		return -rte_errno;
-	}
-
-	return 0;
-}
-
-struct bnxt_filter_info *
-bnxt_get_l2_filter(struct bnxt *bp, struct bnxt_filter_info *nf,
-		   struct bnxt_vnic_info *vnic)
-{
-	struct bnxt_filter_info *filter1, *f0;
-	struct bnxt_vnic_info *vnic0;
-	int rc;
-
-	vnic0 = STAILQ_FIRST(&bp->ff_pool[0]);
-	f0 = STAILQ_FIRST(&vnic0->filter);
-
-	//This flow has same DST MAC as the port/l2 filter.
-	if (memcmp(f0->l2_addr, nf->dst_macaddr, ETHER_ADDR_LEN) == 0)
-		return f0;
-
-	//This flow needs DST MAC which is not same as port/l2
-	PMD_DRV_LOG(DEBUG, "Create L2 filter for DST MAC\n");
-	filter1 = bnxt_get_unused_filter(bp);
-	if (filter1 == NULL)
-		return NULL;
-	filter1->flags = HWRM_CFA_L2_FILTER_ALLOC_INPUT_FLAGS_PATH_RX;
-	filter1->enables = HWRM_CFA_L2_FILTER_ALLOC_INPUT_ENABLES_L2_ADDR |
-			L2_FILTER_ALLOC_INPUT_EN_L2_ADDR_MASK;
-	memcpy(filter1->l2_addr, nf->dst_macaddr, ETHER_ADDR_LEN);
-	memset(filter1->l2_addr_mask, 0xff, ETHER_ADDR_LEN);
-	rc = bnxt_hwrm_set_l2_filter(bp, vnic->fw_vnic_id,
-				     filter1);
-	if (rc) {
-		bnxt_free_filter(bp, filter1);
-		return NULL;
-	}
-	return filter1;
-}
-
-static int
-bnxt_validate_and_parse_flow(struct rte_eth_dev *dev,
-			     const struct rte_flow_item pattern[],
-			     const struct rte_flow_action actions[],
-			     const struct rte_flow_attr *attr,
-			     struct rte_flow_error *error,
-			     struct bnxt_filter_info *filter)
-{
-	const struct rte_flow_action *act = nxt_non_void_action(actions);
-	struct bnxt *bp = (struct bnxt *)dev->data->dev_private;
-	const struct rte_flow_action_queue *act_q;
-	const struct rte_flow_action_vf *act_vf;
-	struct bnxt_vnic_info *vnic, *vnic0;
-	struct bnxt_filter_info *filter1;
-	uint32_t vf = 0;
-	int dflt_vnic;
-	int rc;
-
-	if (bp->eth_dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS) {
-		PMD_DRV_LOG(ERR, "Cannot create flow on RSS queues\n");
-		rte_flow_error_set(error, EINVAL,
-				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
-				   "Cannot create flow on RSS queues");
-		rc = -rte_errno;
-		goto ret;
-	}
-
-	rc = bnxt_validate_and_parse_flow_type(bp, attr, pattern, error,
-					       filter);
-	if (rc != 0)
-		goto ret;
-
-	rc = bnxt_flow_parse_attr(attr, error);
-	if (rc != 0)
-		goto ret;
-	//Since we support ingress attribute only - right now.
-	if (filter->filter_type == HWRM_CFA_EM_FILTER)
-		filter->flags = HWRM_CFA_EM_FLOW_ALLOC_INPUT_FLAGS_PATH_RX;
-
-	switch (act->type) {
-	case RTE_FLOW_ACTION_TYPE_QUEUE:
-		/* Allow this flow. Redirect to a VNIC. */
-		act_q = (const struct rte_flow_action_queue *)act->conf;
-		if (act_q->index >= bp->rx_nr_rings) {
-			rte_flow_error_set(error, EINVAL,
-					   RTE_FLOW_ERROR_TYPE_ACTION, act,
-					   "Invalid queue ID.");
-			rc = -rte_errno;
-			goto ret;
-		}
-		PMD_DRV_LOG(DEBUG, "Queue index %d\n", act_q->index);
-
-		vnic0 = STAILQ_FIRST(&bp->ff_pool[0]);
-		vnic = STAILQ_FIRST(&bp->ff_pool[act_q->index]);
-		if (vnic == NULL) {
-			rte_flow_error_set(error, EINVAL,
-					   RTE_FLOW_ERROR_TYPE_ACTION, act,
-					   "No matching VNIC for queue ID.");
-			rc = -rte_errno;
-			goto ret;
-		}
-		filter->dst_id = vnic->fw_vnic_id;
-		filter1 = bnxt_get_l2_filter(bp, filter, vnic);
-		if (filter1 == NULL) {
-			rc = -ENOSPC;
-			goto ret;
-		}
-		filter->fw_l2_filter_id = filter1->fw_l2_filter_id;
-		PMD_DRV_LOG(DEBUG, "VNIC found\n");
-		break;
-	case RTE_FLOW_ACTION_TYPE_DROP:
-		vnic0 = STAILQ_FIRST(&bp->ff_pool[0]);
-		filter1 = bnxt_get_l2_filter(bp, filter, vnic0);
-		if (filter1 == NULL) {
-			rc = -ENOSPC;
-			goto ret;
-		}
-		filter->fw_l2_filter_id = filter1->fw_l2_filter_id;
-		if (filter->filter_type == HWRM_CFA_EM_FILTER)
-			filter->flags =
-				HWRM_CFA_EM_FLOW_ALLOC_INPUT_FLAGS_DROP;
-		else
-			filter->flags =
-				HWRM_CFA_NTUPLE_FILTER_ALLOC_INPUT_FLAGS_DROP;
-		break;
-	case RTE_FLOW_ACTION_TYPE_COUNT:
-		vnic0 = STAILQ_FIRST(&bp->ff_pool[0]);
-		filter1 = bnxt_get_l2_filter(bp, filter, vnic0);
-		if (filter1 == NULL) {
-			rc = -ENOSPC;
-			goto ret;
-		}
-		filter->fw_l2_filter_id = filter1->fw_l2_filter_id;
-		filter->flags = HWRM_CFA_NTUPLE_FILTER_ALLOC_INPUT_FLAGS_METER;
-		break;
-	case RTE_FLOW_ACTION_TYPE_VF:
-		act_vf = (const struct rte_flow_action_vf *)act->conf;
-		vf = act_vf->id;
-		if (!BNXT_PF(bp)) {
-			rte_flow_error_set(error, EINVAL,
-				   RTE_FLOW_ERROR_TYPE_ACTION,
-				   act,
-				   "Configuring on a VF!");
-			rc = -rte_errno;
-			goto ret;
-		}
-
-		if (vf >= bp->pdev->max_vfs) {
-			rte_flow_error_set(error, EINVAL,
-				   RTE_FLOW_ERROR_TYPE_ACTION,
-				   act,
-				   "Incorrect VF id!");
-			rc = -rte_errno;
-			goto ret;
-		}
-
-		filter->mirror_vnic_id =
-		dflt_vnic = bnxt_hwrm_func_qcfg_vf_dflt_vnic_id(bp, vf);
-		if (dflt_vnic < 0) {
-			/* This simply indicates there's no driver loaded.
-			 * This is not an error.
-			 */
-			rte_flow_error_set(error, EINVAL,
-				   RTE_FLOW_ERROR_TYPE_ACTION,
-				   act,
-				   "Unable to get default VNIC for VF");
-			rc = -rte_errno;
-			goto ret;
-		}
-		filter->mirror_vnic_id = dflt_vnic;
-		filter->enables |= NTUPLE_FLTR_ALLOC_INPUT_EN_MIRROR_VNIC_ID;
-
-		vnic0 = STAILQ_FIRST(&bp->ff_pool[0]);
-		filter1 = bnxt_get_l2_filter(bp, filter, vnic0);
-		if (filter1 == NULL) {
-			rc = -ENOSPC;
-			goto ret;
-		}
-		filter->fw_l2_filter_id = filter1->fw_l2_filter_id;
-		break;
-
-	default:
-		rte_flow_error_set(error, EINVAL,
-				   RTE_FLOW_ERROR_TYPE_ACTION, act,
-				   "Invalid action.");
-		rc = -rte_errno;
-		goto ret;
-	}
-
-	act = nxt_non_void_action(++act);
-	if (act->type != RTE_FLOW_ACTION_TYPE_END) {
-		rte_flow_error_set(error, EINVAL,
-				   RTE_FLOW_ERROR_TYPE_ACTION,
-				   act, "Invalid action.");
-		rc = -rte_errno;
-		goto ret;
-	}
-ret:
-	return rc;
-}
-
-static int
-bnxt_flow_validate(struct rte_eth_dev *dev,
-		const struct rte_flow_attr *attr,
-		const struct rte_flow_item pattern[],
-		const struct rte_flow_action actions[],
-		struct rte_flow_error *error)
-{
-	struct bnxt *bp = (struct bnxt *)dev->data->dev_private;
-	struct bnxt_filter_info *filter;
-	int ret = 0;
-
-	ret = bnxt_flow_agrs_validate(attr, pattern, actions, error);
-	if (ret != 0)
-		return ret;
-
-	filter = bnxt_get_unused_filter(bp);
-	if (filter == NULL) {
-		PMD_DRV_LOG(ERR, "Not enough resources for a new flow.\n");
-		return -ENOMEM;
-	}
-
-	ret = bnxt_validate_and_parse_flow(dev, pattern, actions, attr,
-					   error, filter);
-	/* No need to hold on to this filter if we are just validating flow */
-	filter->fw_l2_filter_id = UINT64_MAX;
-	bnxt_free_filter(bp, filter);
-
-	return ret;
-}
-
-static int
-bnxt_match_filter(struct bnxt *bp, struct bnxt_filter_info *nf)
-{
-	struct bnxt_filter_info *mf;
-	struct rte_flow *flow;
-	int i;
-
-	for (i = bp->nr_vnics - 1; i >= 0; i--) {
-		struct bnxt_vnic_info *vnic = &bp->vnic_info[i];
-
-		STAILQ_FOREACH(flow, &vnic->flow_list, next) {
-			mf = flow->filter;
-
-			if (mf->filter_type == nf->filter_type &&
-			    mf->flags == nf->flags &&
-			    mf->src_port == nf->src_port &&
-			    mf->src_port_mask == nf->src_port_mask &&
-			    mf->dst_port == nf->dst_port &&
-			    mf->dst_port_mask == nf->dst_port_mask &&
-			    mf->ip_protocol == nf->ip_protocol &&
-			    mf->ip_addr_type == nf->ip_addr_type &&
-			    mf->ethertype == nf->ethertype &&
-			    mf->vni == nf->vni &&
-			    mf->tunnel_type == nf->tunnel_type &&
-			    mf->l2_ovlan == nf->l2_ovlan &&
-			    mf->l2_ovlan_mask == nf->l2_ovlan_mask &&
-			    mf->l2_ivlan == nf->l2_ivlan &&
-			    mf->l2_ivlan_mask == nf->l2_ivlan_mask &&
-			    !memcmp(mf->l2_addr, nf->l2_addr, ETHER_ADDR_LEN) &&
-			    !memcmp(mf->l2_addr_mask, nf->l2_addr_mask,
-				    ETHER_ADDR_LEN) &&
-			    !memcmp(mf->src_macaddr, nf->src_macaddr,
-				    ETHER_ADDR_LEN) &&
-			    !memcmp(mf->dst_macaddr, nf->dst_macaddr,
-				    ETHER_ADDR_LEN) &&
-			    !memcmp(mf->src_ipaddr, nf->src_ipaddr,
-				    sizeof(nf->src_ipaddr)) &&
-			    !memcmp(mf->src_ipaddr_mask, nf->src_ipaddr_mask,
-				    sizeof(nf->src_ipaddr_mask)) &&
-			    !memcmp(mf->dst_ipaddr, nf->dst_ipaddr,
-				    sizeof(nf->dst_ipaddr)) &&
-			    !memcmp(mf->dst_ipaddr_mask, nf->dst_ipaddr_mask,
-				    sizeof(nf->dst_ipaddr_mask))) {
-				if (mf->dst_id == nf->dst_id)
-					return -EEXIST;
-				/* Same Flow, Different queue
-				 * Clear the old ntuple filter
-				 */
-				if (nf->filter_type == HWRM_CFA_EM_FILTER)
-					bnxt_hwrm_clear_em_filter(bp, mf);
-				if (nf->filter_type == HWRM_CFA_NTUPLE_FILTER)
-					bnxt_hwrm_clear_ntuple_filter(bp, mf);
-				/* Free the old filter, update flow
-				 * with new filter
-				 */
-				bnxt_free_filter(bp, mf);
-				flow->filter = nf;
-				return -EXDEV;
-			}
-		}
-	}
-	return 0;
-}
-
-static struct rte_flow *
-bnxt_flow_create(struct rte_eth_dev *dev,
-		  const struct rte_flow_attr *attr,
-		  const struct rte_flow_item pattern[],
-		  const struct rte_flow_action actions[],
-		  struct rte_flow_error *error)
-{
-	struct bnxt *bp = (struct bnxt *)dev->data->dev_private;
-	struct bnxt_filter_info *filter;
-	struct bnxt_vnic_info *vnic = NULL;
-	bool update_flow = false;
-	struct rte_flow *flow;
-	unsigned int i;
-	int ret = 0;
-
-	flow = rte_zmalloc("bnxt_flow", sizeof(struct rte_flow), 0);
-	if (!flow) {
-		rte_flow_error_set(error, ENOMEM,
-				   RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
-				   "Failed to allocate memory");
-		return flow;
-	}
-
-	ret = bnxt_flow_agrs_validate(attr, pattern, actions, error);
-	if (ret != 0) {
-		PMD_DRV_LOG(ERR, "Not a validate flow.\n");
-		goto free_flow;
-	}
-
-	filter = bnxt_get_unused_filter(bp);
-	if (filter == NULL) {
-		PMD_DRV_LOG(ERR, "Not enough resources for a new flow.\n");
-		goto free_flow;
-	}
-
-	ret = bnxt_validate_and_parse_flow(dev, pattern, actions, attr,
-					   error, filter);
-	if (ret != 0)
-		goto free_filter;
-
-	ret = bnxt_match_filter(bp, filter);
-	if (ret == -EEXIST) {
-		PMD_DRV_LOG(DEBUG, "Flow already exists.\n");
-		/* Clear the filter that was created as part of
-		 * validate_and_parse_flow() above
-		 */
-		bnxt_hwrm_clear_l2_filter(bp, filter);
-		goto free_filter;
-	} else if (ret == -EXDEV) {
-		PMD_DRV_LOG(DEBUG, "Flow with same pattern exists");
-		PMD_DRV_LOG(DEBUG, "Updating with different destination\n");
-		update_flow = true;
-	}
-
-	if (filter->filter_type == HWRM_CFA_EM_FILTER) {
-		filter->enables |=
-			HWRM_CFA_EM_FLOW_ALLOC_INPUT_ENABLES_L2_FILTER_ID;
-		ret = bnxt_hwrm_set_em_filter(bp, filter->dst_id, filter);
-	}
-	if (filter->filter_type == HWRM_CFA_NTUPLE_FILTER) {
-		filter->enables |=
-			HWRM_CFA_NTUPLE_FILTER_ALLOC_INPUT_ENABLES_L2_FILTER_ID;
-		ret = bnxt_hwrm_set_ntuple_filter(bp, filter->dst_id, filter);
-	}
-
-	for (i = 0; i < bp->nr_vnics; i++) {
-		vnic = &bp->vnic_info[i];
-		if (filter->dst_id == vnic->fw_vnic_id)
-			break;
-	}
-
-	if (!ret) {
-		flow->filter = filter;
-		flow->vnic = vnic;
-		if (update_flow) {
-			ret = -EXDEV;
-			goto free_flow;
-		}
-		PMD_DRV_LOG(ERR, "Successfully created flow.\n");
-		STAILQ_INSERT_TAIL(&vnic->flow_list, flow, next);
-		return flow;
-	}
-free_filter:
-	bnxt_free_filter(bp, filter);
-free_flow:
-	if (ret == -EEXIST)
-		rte_flow_error_set(error, ret,
-				   RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
-				   "Matching Flow exists.");
-	else if (ret == -EXDEV)
-		rte_flow_error_set(error, ret,
-				   RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
-				   "Flow with pattern exists, updating destination queue");
-	else
-		rte_flow_error_set(error, -ret,
-				   RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
-				   "Failed to create flow.");
-	rte_free(flow);
-	flow = NULL;
-	return flow;
-}
-
-static int
-bnxt_flow_destroy(struct rte_eth_dev *dev,
-		  struct rte_flow *flow,
-		  struct rte_flow_error *error)
-{
-	struct bnxt *bp = (struct bnxt *)dev->data->dev_private;
-	struct bnxt_filter_info *filter = flow->filter;
-	struct bnxt_vnic_info *vnic = flow->vnic;
-	int ret = 0;
-
-	ret = bnxt_match_filter(bp, filter);
-	if (ret == 0)
-		PMD_DRV_LOG(ERR, "Could not find matching flow\n");
-	if (filter->filter_type == HWRM_CFA_EM_FILTER)
-		ret = bnxt_hwrm_clear_em_filter(bp, filter);
-	if (filter->filter_type == HWRM_CFA_NTUPLE_FILTER)
-		ret = bnxt_hwrm_clear_ntuple_filter(bp, filter);
-	else
-		ret = bnxt_hwrm_clear_l2_filter(bp, filter);
-	if (!ret) {
-		STAILQ_REMOVE(&vnic->flow_list, flow, rte_flow, next);
-		rte_free(flow);
-	} else {
-		rte_flow_error_set(error, -ret,
-				   RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
-				   "Failed to destroy flow.");
-	}
-
-	return ret;
-}
-
-static int
-bnxt_flow_flush(struct rte_eth_dev *dev, struct rte_flow_error *error)
-{
-	struct bnxt *bp = (struct bnxt *)dev->data->dev_private;
-	struct bnxt_vnic_info *vnic;
-	struct rte_flow *flow;
-	unsigned int i;
-	int ret = 0;
-
-	for (i = 0; i < bp->nr_vnics; i++) {
-		vnic = &bp->vnic_info[i];
-		STAILQ_FOREACH(flow, &vnic->flow_list, next) {
-			struct bnxt_filter_info *filter = flow->filter;
-
-			if (filter->filter_type == HWRM_CFA_EM_FILTER)
-				ret = bnxt_hwrm_clear_em_filter(bp, filter);
-			if (filter->filter_type == HWRM_CFA_NTUPLE_FILTER)
-				ret = bnxt_hwrm_clear_ntuple_filter(bp, filter);
-
-			if (ret) {
-				rte_flow_error_set(error, -ret,
-						   RTE_FLOW_ERROR_TYPE_HANDLE,
-						   NULL,
-						   "Failed to flush flow in HW.");
-				return -rte_errno;
-			}
-
-			STAILQ_REMOVE(&vnic->flow_list, flow,
-				      rte_flow, next);
-			rte_free(flow);
-		}
-	}
-
-	return ret;
-}
-
-const struct rte_flow_ops bnxt_flow_ops = {
-	.validate = bnxt_flow_validate,
-	.create = bnxt_flow_create,
-	.destroy = bnxt_flow_destroy,
-	.flush = bnxt_flow_flush,
-};
diff --git a/drivers/net/bnxt/bnxt_flow.c b/drivers/net/bnxt/bnxt_flow.c
new file mode 100644
index 000000000..a491e9dbf
--- /dev/null
+++ b/drivers/net/bnxt/bnxt_flow.c
@@ -0,0 +1,1167 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2014-2018 Broadcom
+ * All rights reserved.
+ */
+
+#include <sys/queue.h>
+
+#include <rte_log.h>
+#include <rte_malloc.h>
+#include <rte_flow.h>
+#include <rte_flow_driver.h>
+#include <rte_tailq.h>
+
+#include "bnxt.h"
+#include "bnxt_filter.h"
+#include "bnxt_hwrm.h"
+#include "bnxt_vnic.h"
+#include "bnxt_util.h"
+#include "hsi_struct_def_dpdk.h"
+
+static int
+bnxt_flow_args_validate(const struct rte_flow_attr *attr,
+			const struct rte_flow_item pattern[],
+			const struct rte_flow_action actions[],
+			struct rte_flow_error *error)
+{
+	if (!pattern) {
+		rte_flow_error_set(error,
+				   EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ITEM_NUM,
+				   NULL,
+				   "NULL pattern.");
+		return -rte_errno;
+	}
+
+	if (!actions) {
+		rte_flow_error_set(error,
+				   EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ACTION_NUM,
+				   NULL,
+				   "NULL action.");
+		return -rte_errno;
+	}
+
+	if (!attr) {
+		rte_flow_error_set(error,
+				   EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ATTR,
+				   NULL,
+				   "NULL attribute.");
+		return -rte_errno;
+	}
+
+	return 0;
+}
+
+static const struct rte_flow_item *
+bnxt_flow_non_void_item(const struct rte_flow_item *cur)
+{
+	while (1) {
+		if (cur->type != RTE_FLOW_ITEM_TYPE_VOID)
+			return cur;
+		cur++;
+	}
+}
+
+static const struct rte_flow_action *
+bnxt_flow_non_void_action(const struct rte_flow_action *cur)
+{
+	while (1) {
+		if (cur->type != RTE_FLOW_ACTION_TYPE_VOID)
+			return cur;
+		cur++;
+	}
+}
+
+static int
+bnxt_filter_type_check(const struct rte_flow_item pattern[],
+		       struct rte_flow_error *error __rte_unused)
+{
+	const struct rte_flow_item *item =
+		bnxt_flow_non_void_item(pattern);
+	int use_ntuple = 1;
+
+	while (item->type != RTE_FLOW_ITEM_TYPE_END) {
+		switch (item->type) {
+		case RTE_FLOW_ITEM_TYPE_ETH:
+			use_ntuple = 1;
+			break;
+		case RTE_FLOW_ITEM_TYPE_VLAN:
+			use_ntuple = 0;
+			break;
+		case RTE_FLOW_ITEM_TYPE_IPV4:
+		case RTE_FLOW_ITEM_TYPE_IPV6:
+		case RTE_FLOW_ITEM_TYPE_TCP:
+		case RTE_FLOW_ITEM_TYPE_UDP:
+			/* FALLTHROUGH */
+			/* need ntuple match, reset exact match */
+			if (!use_ntuple) {
+				PMD_DRV_LOG(ERR,
+					"VLAN flow cannot use NTUPLE filter\n");
+				rte_flow_error_set
+					(error,
+					 EINVAL,
+					 RTE_FLOW_ERROR_TYPE_ITEM,
+					 item,
+					 "Cannot use VLAN with NTUPLE");
+				return -rte_errno;
+			}
+			use_ntuple |= 1;
+			break;
+		default:
+			PMD_DRV_LOG(ERR, "Unknown Flow type\n");
+			use_ntuple |= 1;
+		}
+		item++;
+	}
+	return use_ntuple;
+}
+
+static int
+bnxt_validate_and_parse_flow_type(struct bnxt *bp,
+				  const struct rte_flow_attr *attr,
+				  const struct rte_flow_item pattern[],
+				  struct rte_flow_error *error,
+				  struct bnxt_filter_info *filter)
+{
+	const struct rte_flow_item *item = bnxt_flow_non_void_item(pattern);
+	const struct rte_flow_item_vlan *vlan_spec, *vlan_mask;
+	const struct rte_flow_item_ipv4 *ipv4_spec, *ipv4_mask;
+	const struct rte_flow_item_ipv6 *ipv6_spec, *ipv6_mask;
+	const struct rte_flow_item_tcp *tcp_spec, *tcp_mask;
+	const struct rte_flow_item_udp *udp_spec, *udp_mask;
+	const struct rte_flow_item_eth *eth_spec, *eth_mask;
+	const struct rte_flow_item_nvgre *nvgre_spec;
+	const struct rte_flow_item_nvgre *nvgre_mask;
+	const struct rte_flow_item_vxlan *vxlan_spec;
+	const struct rte_flow_item_vxlan *vxlan_mask;
+	uint8_t vni_mask[] = {0xFF, 0xFF, 0xFF};
+	uint8_t tni_mask[] = {0xFF, 0xFF, 0xFF};
+	const struct rte_flow_item_vf *vf_spec;
+	uint32_t tenant_id_be = 0;
+	bool vni_masked = 0;
+	bool tni_masked = 0;
+	uint32_t vf = 0;
+	int use_ntuple;
+	uint32_t en = 0;
+	uint32_t en_ethertype;
+	int dflt_vnic;
+
+	use_ntuple = bnxt_filter_type_check(pattern, error);
+	PMD_DRV_LOG(DEBUG, "Use NTUPLE %d\n", use_ntuple);
+	if (use_ntuple < 0)
+		return use_ntuple;
+
+	filter->filter_type = use_ntuple ?
+		HWRM_CFA_NTUPLE_FILTER : HWRM_CFA_EM_FILTER;
+	en_ethertype = use_ntuple ?
+		NTUPLE_FLTR_ALLOC_INPUT_EN_ETHERTYPE :
+		EM_FLOW_ALLOC_INPUT_EN_ETHERTYPE;
+
+	while (item->type != RTE_FLOW_ITEM_TYPE_END) {
+		if (item->last) {
+			/* last or range is NOT supported as match criteria */
+			rte_flow_error_set(error, EINVAL,
+					   RTE_FLOW_ERROR_TYPE_ITEM,
+					   item,
+					   "No support for range");
+			return -rte_errno;
+		}
+
+		if (!item->spec || !item->mask) {
+			rte_flow_error_set(error, EINVAL,
+					   RTE_FLOW_ERROR_TYPE_ITEM,
+					   item,
+					   "spec/mask is NULL");
+			return -rte_errno;
+		}
+
+		switch (item->type) {
+		case RTE_FLOW_ITEM_TYPE_ETH:
+			eth_spec = item->spec;
+			eth_mask = item->mask;
+
+			/* Source MAC address mask cannot be partially set.
+			 * Should be all 0's or all 1's.
+			 * Destination MAC address mask must not be partially
+			 * set. Should be all 1's or all 0's.
+			 */
+			if ((!is_zero_ether_addr(&eth_mask->src) &&
+			     !is_broadcast_ether_addr(&eth_mask->src)) ||
+			    (!is_zero_ether_addr(&eth_mask->dst) &&
+			     !is_broadcast_ether_addr(&eth_mask->dst))) {
+				rte_flow_error_set(error,
+						   EINVAL,
+						   RTE_FLOW_ERROR_TYPE_ITEM,
+						   item,
+						   "MAC_addr mask not valid");
+				return -rte_errno;
+			}
+
+			/* Mask is not allowed. Only exact matches are */
+			if (eth_mask->type &&
+			    eth_mask->type != RTE_BE16(0xffff)) {
+				rte_flow_error_set(error, EINVAL,
+						   RTE_FLOW_ERROR_TYPE_ITEM,
+						   item,
+						   "ethertype mask not valid");
+				return -rte_errno;
+			}
+
+			if (is_broadcast_ether_addr(&eth_mask->dst)) {
+				rte_memcpy(filter->dst_macaddr,
+					   &eth_spec->dst, 6);
+				en |= use_ntuple ?
+					NTUPLE_FLTR_ALLOC_INPUT_EN_DST_MACADDR :
+					EM_FLOW_ALLOC_INPUT_EN_DST_MACADDR;
+			}
+
+			if (is_broadcast_ether_addr(&eth_mask->src)) {
+				rte_memcpy(filter->src_macaddr,
+					   &eth_spec->src, 6);
+				en |= use_ntuple ?
+					NTUPLE_FLTR_ALLOC_INPUT_EN_SRC_MACADDR :
+					EM_FLOW_ALLOC_INPUT_EN_SRC_MACADDR;
+			} /*
+			   * else {
+			   *  PMD_DRV_LOG(ERR, "Handle this condition\n");
+			   * }
+			   */
+			if (eth_mask->type) {
+				filter->ethertype =
+					rte_be_to_cpu_16(eth_spec->type);
+				en |= en_ethertype;
+			}
+
+			break;
+		case RTE_FLOW_ITEM_TYPE_VLAN:
+			vlan_spec = item->spec;
+			vlan_mask = item->mask;
+			if (en & en_ethertype) {
+				rte_flow_error_set(error, EINVAL,
+						   RTE_FLOW_ERROR_TYPE_ITEM,
+						   item,
+						   "VLAN TPID matching is not"
+						   " supported");
+				return -rte_errno;
+			}
+			if (vlan_mask->tci &&
+			    vlan_mask->tci == RTE_BE16(0x0fff)) {
+				/* Only the VLAN ID can be matched. */
+				filter->l2_ovlan =
+					rte_be_to_cpu_16(vlan_spec->tci &
+							 RTE_BE16(0x0fff));
+				en |= EM_FLOW_ALLOC_INPUT_EN_OVLAN_VID;
+			} else {
+				rte_flow_error_set(error,
+						   EINVAL,
+						   RTE_FLOW_ERROR_TYPE_ITEM,
+						   item,
+						   "VLAN mask is invalid");
+				return -rte_errno;
+			}
+			if (vlan_mask->inner_type &&
+			    vlan_mask->inner_type != RTE_BE16(0xffff)) {
+				rte_flow_error_set(error, EINVAL,
+						   RTE_FLOW_ERROR_TYPE_ITEM,
+						   item,
+						   "inner ethertype mask not"
+						   " valid");
+				return -rte_errno;
+			}
+			if (vlan_mask->inner_type) {
+				filter->ethertype =
+					rte_be_to_cpu_16(vlan_spec->inner_type);
+				en |= en_ethertype;
+			}
+
+			break;
+		case RTE_FLOW_ITEM_TYPE_IPV4:
+			/* If mask is not involved, we could use EM filters. */
+			ipv4_spec = item->spec;
+			ipv4_mask = item->mask;
+			/* Only IP DST and SRC fields are maskable. */
+			if (ipv4_mask->hdr.version_ihl ||
+			    ipv4_mask->hdr.type_of_service ||
+			    ipv4_mask->hdr.total_length ||
+			    ipv4_mask->hdr.packet_id ||
+			    ipv4_mask->hdr.fragment_offset ||
+			    ipv4_mask->hdr.time_to_live ||
+			    ipv4_mask->hdr.next_proto_id ||
+			    ipv4_mask->hdr.hdr_checksum) {
+				rte_flow_error_set(error,
+						   EINVAL,
+						   RTE_FLOW_ERROR_TYPE_ITEM,
+						   item,
+						   "Invalid IPv4 mask.");
+				return -rte_errno;
+			}
+
+			filter->dst_ipaddr[0] = ipv4_spec->hdr.dst_addr;
+			filter->src_ipaddr[0] = ipv4_spec->hdr.src_addr;
+
+			if (use_ntuple)
+				en |= NTUPLE_FLTR_ALLOC_INPUT_EN_SRC_IPADDR |
+					NTUPLE_FLTR_ALLOC_INPUT_EN_DST_IPADDR;
+			else
+				en |= EM_FLOW_ALLOC_INPUT_EN_SRC_IPADDR |
+					EM_FLOW_ALLOC_INPUT_EN_DST_IPADDR;
+
+			if (ipv4_mask->hdr.src_addr) {
+				filter->src_ipaddr_mask[0] =
+					ipv4_mask->hdr.src_addr;
+				en |= !use_ntuple ? 0 :
+				     NTUPLE_FLTR_ALLOC_INPUT_EN_SRC_IPADDR_MASK;
+			}
+
+			if (ipv4_mask->hdr.dst_addr) {
+				filter->dst_ipaddr_mask[0] =
+					ipv4_mask->hdr.dst_addr;
+				en |= !use_ntuple ? 0 :
+				     NTUPLE_FLTR_ALLOC_INPUT_EN_DST_IPADDR_MASK;
+			}
+
+			filter->ip_addr_type = use_ntuple ?
+			 HWRM_CFA_NTUPLE_FILTER_ALLOC_INPUT_IP_ADDR_TYPE_IPV4 :
+			 HWRM_CFA_EM_FLOW_ALLOC_INPUT_IP_ADDR_TYPE_IPV4;
+
+			if (ipv4_spec->hdr.next_proto_id) {
+				filter->ip_protocol =
+					ipv4_spec->hdr.next_proto_id;
+				if (use_ntuple)
+					en |= NTUPLE_FLTR_ALLOC_IN_EN_IP_PROTO;
+				else
+					en |= EM_FLOW_ALLOC_INPUT_EN_IP_PROTO;
+			}
+			break;
+		case RTE_FLOW_ITEM_TYPE_IPV6:
+			ipv6_spec = item->spec;
+			ipv6_mask = item->mask;
+
+			/* Only IP DST and SRC fields are maskable. */
+			if (ipv6_mask->hdr.vtc_flow ||
+			    ipv6_mask->hdr.payload_len ||
+			    ipv6_mask->hdr.proto ||
+			    ipv6_mask->hdr.hop_limits) {
+				rte_flow_error_set(error,
+						   EINVAL,
+						   RTE_FLOW_ERROR_TYPE_ITEM,
+						   item,
+						   "Invalid IPv6 mask.");
+				return -rte_errno;
+			}
+
+			if (use_ntuple)
+				en |= NTUPLE_FLTR_ALLOC_INPUT_EN_SRC_IPADDR |
+					NTUPLE_FLTR_ALLOC_INPUT_EN_DST_IPADDR;
+			else
+				en |= EM_FLOW_ALLOC_INPUT_EN_SRC_IPADDR |
+					EM_FLOW_ALLOC_INPUT_EN_DST_IPADDR;
+
+			rte_memcpy(filter->src_ipaddr,
+				   ipv6_spec->hdr.src_addr, 16);
+			rte_memcpy(filter->dst_ipaddr,
+				   ipv6_spec->hdr.dst_addr, 16);
+
+			if (!bnxt_check_zero_bytes(ipv6_mask->hdr.src_addr,
+						   16)) {
+				rte_memcpy(filter->src_ipaddr_mask,
+					   ipv6_mask->hdr.src_addr, 16);
+				en |= !use_ntuple ? 0 :
+				    NTUPLE_FLTR_ALLOC_INPUT_EN_SRC_IPADDR_MASK;
+			}
+
+			if (!bnxt_check_zero_bytes(ipv6_mask->hdr.dst_addr,
+						   16)) {
+				rte_memcpy(filter->dst_ipaddr_mask,
+					   ipv6_mask->hdr.dst_addr, 16);
+				en |= !use_ntuple ? 0 :
+				     NTUPLE_FLTR_ALLOC_INPUT_EN_DST_IPADDR_MASK;
+			}
+
+			filter->ip_addr_type = use_ntuple ?
+				NTUPLE_FLTR_ALLOC_INPUT_IP_ADDR_TYPE_IPV6 :
+				EM_FLOW_ALLOC_INPUT_IP_ADDR_TYPE_IPV6;
+			break;
+		case RTE_FLOW_ITEM_TYPE_TCP:
+			tcp_spec = item->spec;
+			tcp_mask = item->mask;
+
+			/* Check TCP mask. Only DST & SRC ports are maskable */
+			if (tcp_mask->hdr.sent_seq ||
+			    tcp_mask->hdr.recv_ack ||
+			    tcp_mask->hdr.data_off ||
+			    tcp_mask->hdr.tcp_flags ||
+			    tcp_mask->hdr.rx_win ||
+			    tcp_mask->hdr.cksum ||
+			    tcp_mask->hdr.tcp_urp) {
+				rte_flow_error_set(error,
+						   EINVAL,
+						   RTE_FLOW_ERROR_TYPE_ITEM,
+						   item,
+						   "Invalid TCP mask");
+				return -rte_errno;
+			}
+
+			filter->src_port = tcp_spec->hdr.src_port;
+			filter->dst_port = tcp_spec->hdr.dst_port;
+
+			if (use_ntuple)
+				en |= NTUPLE_FLTR_ALLOC_INPUT_EN_SRC_PORT |
+					NTUPLE_FLTR_ALLOC_INPUT_EN_DST_PORT;
+			else
+				en |= EM_FLOW_ALLOC_INPUT_EN_SRC_PORT |
+					EM_FLOW_ALLOC_INPUT_EN_DST_PORT;
+
+			if (tcp_mask->hdr.dst_port) {
+				filter->dst_port_mask = tcp_mask->hdr.dst_port;
+				en |= !use_ntuple ? 0 :
+				  NTUPLE_FLTR_ALLOC_INPUT_EN_DST_PORT_MASK;
+			}
+
+			if (tcp_mask->hdr.src_port) {
+				filter->src_port_mask = tcp_mask->hdr.src_port;
+				en |= !use_ntuple ? 0 :
+				  NTUPLE_FLTR_ALLOC_INPUT_EN_SRC_PORT_MASK;
+			}
+			break;
+		case RTE_FLOW_ITEM_TYPE_UDP:
+			udp_spec = item->spec;
+			udp_mask = item->mask;
+
+			if (udp_mask->hdr.dgram_len ||
+			    udp_mask->hdr.dgram_cksum) {
+				rte_flow_error_set(error,
+						   EINVAL,
+						   RTE_FLOW_ERROR_TYPE_ITEM,
+						   item,
+						   "Invalid UDP mask");
+				return -rte_errno;
+			}
+
+			filter->src_port = udp_spec->hdr.src_port;
+			filter->dst_port = udp_spec->hdr.dst_port;
+
+			if (use_ntuple)
+				en |= NTUPLE_FLTR_ALLOC_INPUT_EN_SRC_PORT |
+					NTUPLE_FLTR_ALLOC_INPUT_EN_DST_PORT;
+			else
+				en |= EM_FLOW_ALLOC_INPUT_EN_SRC_PORT |
+					EM_FLOW_ALLOC_INPUT_EN_DST_PORT;
+
+			if (udp_mask->hdr.dst_port) {
+				filter->dst_port_mask = udp_mask->hdr.dst_port;
+				en |= !use_ntuple ? 0 :
+				  NTUPLE_FLTR_ALLOC_INPUT_EN_DST_PORT_MASK;
+			}
+
+			if (udp_mask->hdr.src_port) {
+				filter->src_port_mask = udp_mask->hdr.src_port;
+				en |= !use_ntuple ? 0 :
+				  NTUPLE_FLTR_ALLOC_INPUT_EN_SRC_PORT_MASK;
+			}
+			break;
+		case RTE_FLOW_ITEM_TYPE_VXLAN:
+			vxlan_spec = item->spec;
+			vxlan_mask = item->mask;
+			/* Check if VXLAN item is used to describe protocol.
+			 * If yes, both spec and mask should be NULL.
+			 * If no, both spec and mask shouldn't be NULL.
+			 */
+			if ((!vxlan_spec && vxlan_mask) ||
+			    (vxlan_spec && !vxlan_mask)) {
+				rte_flow_error_set(error,
+						   EINVAL,
+						   RTE_FLOW_ERROR_TYPE_ITEM,
+						   item,
+						   "Invalid VXLAN item");
+				return -rte_errno;
+			}
+
+			if (vxlan_spec->rsvd1 || vxlan_spec->rsvd0[0] ||
+			    vxlan_spec->rsvd0[1] || vxlan_spec->rsvd0[2] ||
+			    vxlan_spec->flags != 0x8) {
+				rte_flow_error_set(error,
+						   EINVAL,
+						   RTE_FLOW_ERROR_TYPE_ITEM,
+						   item,
+						   "Invalid VXLAN item");
+				return -rte_errno;
+			}
+
+			/* Check if VNI is masked. */
+			if (vxlan_spec && vxlan_mask) {
+				vni_masked =
+					!!memcmp(vxlan_mask->vni, vni_mask,
+						 RTE_DIM(vni_mask));
+				if (vni_masked) {
+					rte_flow_error_set
+						(error,
+						 EINVAL,
+						 RTE_FLOW_ERROR_TYPE_ITEM,
+						 item,
+						 "Invalid VNI mask");
+					return -rte_errno;
+				}
+
+				rte_memcpy(((uint8_t *)&tenant_id_be + 1),
+					   vxlan_spec->vni, 3);
+				filter->vni =
+					rte_be_to_cpu_32(tenant_id_be);
+				filter->tunnel_type =
+				 CFA_NTUPLE_FILTER_ALLOC_REQ_TUNNEL_TYPE_VXLAN;
+			}
+			break;
+		case RTE_FLOW_ITEM_TYPE_NVGRE:
+			nvgre_spec = item->spec;
+			nvgre_mask = item->mask;
+			/* Check if NVGRE item is used to describe protocol.
+			 * If yes, both spec and mask should be NULL.
+			 * If no, both spec and mask shouldn't be NULL.
+			 */
+			if ((!nvgre_spec && nvgre_mask) ||
+			    (nvgre_spec && !nvgre_mask)) {
+				rte_flow_error_set(error,
+						   EINVAL,
+						   RTE_FLOW_ERROR_TYPE_ITEM,
+						   item,
+						   "Invalid NVGRE item");
+				return -rte_errno;
+			}
+
+			if (nvgre_spec->c_k_s_rsvd0_ver != 0x2000 ||
+			    nvgre_spec->protocol != 0x6558) {
+				rte_flow_error_set(error,
+						   EINVAL,
+						   RTE_FLOW_ERROR_TYPE_ITEM,
+						   item,
+						   "Invalid NVGRE item");
+				return -rte_errno;
+			}
+
+			if (nvgre_spec && nvgre_mask) {
+				tni_masked =
+					!!memcmp(nvgre_mask->tni, tni_mask,
+						 RTE_DIM(tni_mask));
+				if (tni_masked) {
+					rte_flow_error_set
+						(error,
+						 EINVAL,
+						 RTE_FLOW_ERROR_TYPE_ITEM,
+						 item,
+						 "Invalid TNI mask");
+					return -rte_errno;
+				}
+				rte_memcpy(((uint8_t *)&tenant_id_be + 1),
+					   nvgre_spec->tni, 3);
+				filter->vni =
+					rte_be_to_cpu_32(tenant_id_be);
+				filter->tunnel_type =
+				 CFA_NTUPLE_FILTER_ALLOC_REQ_TUNNEL_TYPE_NVGRE;
+			}
+			break;
+		case RTE_FLOW_ITEM_TYPE_VF:
+			vf_spec = item->spec;
+			vf = vf_spec->id;
+
+			if (!BNXT_PF(bp)) {
+				rte_flow_error_set(error,
+						   EINVAL,
+						   RTE_FLOW_ERROR_TYPE_ITEM,
+						   item,
+						   "Configuring on a VF!");
+				return -rte_errno;
+			}
+
+			if (vf >= bp->pdev->max_vfs) {
+				rte_flow_error_set(error,
+						   EINVAL,
+						   RTE_FLOW_ERROR_TYPE_ITEM,
+						   item,
+						   "Incorrect VF id!");
+				return -rte_errno;
+			}
+
+			if (!attr->transfer) {
+				rte_flow_error_set(error,
+						   ENOTSUP,
+						   RTE_FLOW_ERROR_TYPE_ITEM,
+						   item,
+						   "Matching VF traffic without"
+						   " affecting it (transfer attribute)"
+						   " is unsupported");
+				return -rte_errno;
+			}
+
+			filter->mirror_vnic_id =
+			dflt_vnic = bnxt_hwrm_func_qcfg_vf_dflt_vnic_id(bp, vf);
+			if (dflt_vnic < 0) {
+				/* This simply indicates there's no driver
+				 * loaded. This is not an error.
+				 */
+				rte_flow_error_set
+					(error,
+					 EINVAL,
+					 RTE_FLOW_ERROR_TYPE_ITEM,
+					 item,
+					 "Unable to get default VNIC for VF");
+				return -rte_errno;
+			}
+
+			filter->mirror_vnic_id = dflt_vnic;
+			en |= NTUPLE_FLTR_ALLOC_INPUT_EN_MIRROR_VNIC_ID;
+			break;
+		default:
+			break;
+		}
+		item++;
+	}
+	filter->enables = en;
+
+	return 0;
+}
+
+/* Parse attributes */
+static int
+bnxt_flow_parse_attr(const struct rte_flow_attr *attr,
+		     struct rte_flow_error *error)
+{
+	/* Must be input direction */
+	if (!attr->ingress) {
+		rte_flow_error_set(error,
+				   EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ATTR_INGRESS,
+				   attr,
+				   "Only support ingress.");
+		return -rte_errno;
+	}
+
+	/* Not supported */
+	if (attr->egress) {
+		rte_flow_error_set(error,
+				   EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ATTR_EGRESS,
+				   attr,
+				   "No support for egress.");
+		return -rte_errno;
+	}
+
+	/* Not supported */
+	if (attr->priority) {
+		rte_flow_error_set(error,
+				   EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ATTR_PRIORITY,
+				   attr,
+				   "No support for priority.");
+		return -rte_errno;
+	}
+
+	/* Not supported */
+	if (attr->group) {
+		rte_flow_error_set(error,
+				   EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ATTR_GROUP,
+				   attr,
+				   "No support for group.");
+		return -rte_errno;
+	}
+
+	return 0;
+}
+
+struct bnxt_filter_info *
+bnxt_get_l2_filter(struct bnxt *bp, struct bnxt_filter_info *nf,
+		   struct bnxt_vnic_info *vnic)
+{
+	struct bnxt_filter_info *filter1, *f0;
+	struct bnxt_vnic_info *vnic0;
+	int rc;
+
+	vnic0 = STAILQ_FIRST(&bp->ff_pool[0]);
+	f0 = STAILQ_FIRST(&vnic0->filter);
+
+	/* This flow has same DST MAC as the port/l2 filter. */
+	if (memcmp(f0->l2_addr, nf->dst_macaddr, ETHER_ADDR_LEN) == 0)
+		return f0;
+
+	/* This flow needs DST MAC which is not same as port/l2 */
+	PMD_DRV_LOG(DEBUG, "Create L2 filter for DST MAC\n");
+	filter1 = bnxt_get_unused_filter(bp);
+	if (filter1 == NULL)
+		return NULL;
+
+	filter1->flags = HWRM_CFA_L2_FILTER_ALLOC_INPUT_FLAGS_PATH_RX;
+	filter1->enables = HWRM_CFA_L2_FILTER_ALLOC_INPUT_ENABLES_L2_ADDR |
+			L2_FILTER_ALLOC_INPUT_EN_L2_ADDR_MASK;
+	memcpy(filter1->l2_addr, nf->dst_macaddr, ETHER_ADDR_LEN);
+	memset(filter1->l2_addr_mask, 0xff, ETHER_ADDR_LEN);
+	rc = bnxt_hwrm_set_l2_filter(bp, vnic->fw_vnic_id,
+				     filter1);
+	if (rc) {
+		bnxt_free_filter(bp, filter1);
+		return NULL;
+	}
+	return filter1;
+}
+
+static int
+bnxt_validate_and_parse_flow(struct rte_eth_dev *dev,
+			     const struct rte_flow_item pattern[],
+			     const struct rte_flow_action actions[],
+			     const struct rte_flow_attr *attr,
+			     struct rte_flow_error *error,
+			     struct bnxt_filter_info *filter)
+{
+	const struct rte_flow_action *act =
+		bnxt_flow_non_void_action(actions);
+	struct bnxt *bp = (struct bnxt *)dev->data->dev_private;
+	const struct rte_flow_action_queue *act_q;
+	const struct rte_flow_action_vf *act_vf;
+	struct bnxt_vnic_info *vnic, *vnic0;
+	struct bnxt_filter_info *filter1;
+	uint32_t vf = 0;
+	int dflt_vnic;
+	int rc;
+
+	if (bp->eth_dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS) {
+		PMD_DRV_LOG(ERR, "Cannot create flow on RSS queues\n");
+		rte_flow_error_set(error,
+				   EINVAL,
+				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				   NULL,
+				   "Cannot create flow on RSS queues");
+		rc = -rte_errno;
+		goto ret;
+	}
+
+	rc =
+	bnxt_validate_and_parse_flow_type(bp, attr, pattern, error, filter);
+	if (rc != 0)
+		goto ret;
+
+	rc = bnxt_flow_parse_attr(attr, error);
+	if (rc != 0)
+		goto ret;
+
+	/* Since we support ingress attribute only - right now. */
+	if (filter->filter_type == HWRM_CFA_EM_FILTER)
+		filter->flags = HWRM_CFA_EM_FLOW_ALLOC_INPUT_FLAGS_PATH_RX;
+
+	switch (act->type) {
+	case RTE_FLOW_ACTION_TYPE_QUEUE:
+		/* Allow this flow. Redirect to a VNIC. */
+		act_q = (const struct rte_flow_action_queue *)act->conf;
+		if (act_q->index >= bp->rx_nr_rings) {
+			rte_flow_error_set(error,
+					   EINVAL,
+					   RTE_FLOW_ERROR_TYPE_ACTION,
+					   act,
+					   "Invalid queue ID.");
+			rc = -rte_errno;
+			goto ret;
+		}
+		PMD_DRV_LOG(DEBUG, "Queue index %d\n", act_q->index);
+
+		vnic0 = STAILQ_FIRST(&bp->ff_pool[0]);
+		vnic = STAILQ_FIRST(&bp->ff_pool[act_q->index]);
+		if (vnic == NULL) {
+			rte_flow_error_set(error,
+					   EINVAL,
+					   RTE_FLOW_ERROR_TYPE_ACTION,
+					   act,
+					   "No matching VNIC for queue ID.");
+			rc = -rte_errno;
+			goto ret;
+		}
+
+		filter->dst_id = vnic->fw_vnic_id;
+		filter1 = bnxt_get_l2_filter(bp, filter, vnic);
+		if (filter1 == NULL) {
+			rc = -ENOSPC;
+			goto ret;
+		}
+
+		filter->fw_l2_filter_id = filter1->fw_l2_filter_id;
+		PMD_DRV_LOG(DEBUG, "VNIC found\n");
+		break;
+	case RTE_FLOW_ACTION_TYPE_DROP:
+		vnic0 = STAILQ_FIRST(&bp->ff_pool[0]);
+		filter1 = bnxt_get_l2_filter(bp, filter, vnic0);
+		if (filter1 == NULL) {
+			rc = -ENOSPC;
+			goto ret;
+		}
+
+		filter->fw_l2_filter_id = filter1->fw_l2_filter_id;
+		if (filter->filter_type == HWRM_CFA_EM_FILTER)
+			filter->flags =
+				HWRM_CFA_EM_FLOW_ALLOC_INPUT_FLAGS_DROP;
+		else
+			filter->flags =
+				HWRM_CFA_NTUPLE_FILTER_ALLOC_INPUT_FLAGS_DROP;
+		break;
+	case RTE_FLOW_ACTION_TYPE_COUNT:
+		vnic0 = STAILQ_FIRST(&bp->ff_pool[0]);
+		filter1 = bnxt_get_l2_filter(bp, filter, vnic0);
+		if (filter1 == NULL) {
+			rc = -ENOSPC;
+			goto ret;
+		}
+
+		filter->fw_l2_filter_id = filter1->fw_l2_filter_id;
+		filter->flags = HWRM_CFA_NTUPLE_FILTER_ALLOC_INPUT_FLAGS_METER;
+		break;
+	case RTE_FLOW_ACTION_TYPE_VF:
+		act_vf = (const struct rte_flow_action_vf *)act->conf;
+		vf = act_vf->id;
+
+		if (!BNXT_PF(bp)) {
+			rte_flow_error_set(error,
+					   EINVAL,
+					   RTE_FLOW_ERROR_TYPE_ACTION,
+					   act,
+					   "Configuring on a VF!");
+			rc = -rte_errno;
+			goto ret;
+		}
+
+		if (vf >= bp->pdev->max_vfs) {
+			rte_flow_error_set(error,
+					   EINVAL,
+					   RTE_FLOW_ERROR_TYPE_ACTION,
+					   act,
+					   "Incorrect VF id!");
+			rc = -rte_errno;
+			goto ret;
+		}
+
+		filter->mirror_vnic_id =
+		dflt_vnic = bnxt_hwrm_func_qcfg_vf_dflt_vnic_id(bp, vf);
+		if (dflt_vnic < 0) {
+			/* This simply indicates there's no driver loaded.
+			 * This is not an error.
+			 */
+			rte_flow_error_set(error,
+					   EINVAL,
+					   RTE_FLOW_ERROR_TYPE_ACTION,
+					   act,
+					   "Unable to get default VNIC for VF");
+			rc = -rte_errno;
+			goto ret;
+		}
+
+		filter->mirror_vnic_id = dflt_vnic;
+		filter->enables |= NTUPLE_FLTR_ALLOC_INPUT_EN_MIRROR_VNIC_ID;
+
+		vnic0 = STAILQ_FIRST(&bp->ff_pool[0]);
+		filter1 = bnxt_get_l2_filter(bp, filter, vnic0);
+		if (filter1 == NULL) {
+			rc = -ENOSPC;
+			goto ret;
+		}
+
+		filter->fw_l2_filter_id = filter1->fw_l2_filter_id;
+		break;
+
+	default:
+		rte_flow_error_set(error,
+				   EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ACTION,
+				   act,
+				   "Invalid action.");
+		rc = -rte_errno;
+		goto ret;
+	}
+
+	if (filter1) {
+		bnxt_free_filter(bp, filter1);
+		filter1->fw_l2_filter_id = -1;
+	}
+
+	act = bnxt_flow_non_void_action(++act);
+	if (act->type != RTE_FLOW_ACTION_TYPE_END) {
+		rte_flow_error_set(error,
+				   EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ACTION,
+				   act,
+				   "Invalid action.");
+		rc = -rte_errno;
+		goto ret;
+	}
+ret:
+	return rc;
+}
+
+static int
+bnxt_flow_validate(struct rte_eth_dev *dev,
+		   const struct rte_flow_attr *attr,
+		   const struct rte_flow_item pattern[],
+		   const struct rte_flow_action actions[],
+		   struct rte_flow_error *error)
+{
+	struct bnxt *bp = (struct bnxt *)dev->data->dev_private;
+	struct bnxt_filter_info *filter;
+	int ret = 0;
+
+	ret = bnxt_flow_args_validate(attr, pattern, actions, error);
+	if (ret != 0)
+		return ret;
+
+	filter = bnxt_get_unused_filter(bp);
+	if (filter == NULL) {
+		PMD_DRV_LOG(ERR, "Not enough resources for a new flow.\n");
+		return -ENOMEM;
+	}
+
+	ret = bnxt_validate_and_parse_flow(dev, pattern, actions, attr,
+					   error, filter);
+	/* No need to hold on to this filter if we are just validating flow */
+	filter->fw_l2_filter_id = UINT64_MAX;
+	bnxt_free_filter(bp, filter);
+
+	return ret;
+}
+
+static int
+bnxt_match_filter(struct bnxt *bp, struct bnxt_filter_info *nf)
+{
+	struct bnxt_filter_info *mf;
+	struct rte_flow *flow;
+	int i;
+
+	for (i = bp->nr_vnics - 1; i >= 0; i--) {
+		struct bnxt_vnic_info *vnic = &bp->vnic_info[i];
+
+		STAILQ_FOREACH(flow, &vnic->flow_list, next) {
+			mf = flow->filter;
+
+			if (mf->filter_type == nf->filter_type &&
+			    mf->flags == nf->flags &&
+			    mf->src_port == nf->src_port &&
+			    mf->src_port_mask == nf->src_port_mask &&
+			    mf->dst_port == nf->dst_port &&
+			    mf->dst_port_mask == nf->dst_port_mask &&
+			    mf->ip_protocol == nf->ip_protocol &&
+			    mf->ip_addr_type == nf->ip_addr_type &&
+			    mf->ethertype == nf->ethertype &&
+			    mf->vni == nf->vni &&
+			    mf->tunnel_type == nf->tunnel_type &&
+			    mf->l2_ovlan == nf->l2_ovlan &&
+			    mf->l2_ovlan_mask == nf->l2_ovlan_mask &&
+			    mf->l2_ivlan == nf->l2_ivlan &&
+			    mf->l2_ivlan_mask == nf->l2_ivlan_mask &&
+			    !memcmp(mf->l2_addr, nf->l2_addr, ETHER_ADDR_LEN) &&
+			    !memcmp(mf->l2_addr_mask, nf->l2_addr_mask,
+				    ETHER_ADDR_LEN) &&
+			    !memcmp(mf->src_macaddr, nf->src_macaddr,
+				    ETHER_ADDR_LEN) &&
+			    !memcmp(mf->dst_macaddr, nf->dst_macaddr,
+				    ETHER_ADDR_LEN) &&
+			    !memcmp(mf->src_ipaddr, nf->src_ipaddr,
+				    sizeof(nf->src_ipaddr)) &&
+			    !memcmp(mf->src_ipaddr_mask, nf->src_ipaddr_mask,
+				    sizeof(nf->src_ipaddr_mask)) &&
+			    !memcmp(mf->dst_ipaddr, nf->dst_ipaddr,
+				    sizeof(nf->dst_ipaddr)) &&
+			    !memcmp(mf->dst_ipaddr_mask, nf->dst_ipaddr_mask,
+				    sizeof(nf->dst_ipaddr_mask))) {
+				if (mf->dst_id == nf->dst_id)
+					return -EEXIST;
+				/* Same Flow, Different queue
+				 * Clear the old ntuple filter
+				 */
+				if (nf->filter_type == HWRM_CFA_EM_FILTER)
+					bnxt_hwrm_clear_em_filter(bp, mf);
+				if (nf->filter_type == HWRM_CFA_NTUPLE_FILTER)
+					bnxt_hwrm_clear_ntuple_filter(bp, mf);
+				/* Free the old filter, update flow
+				 * with new filter
+				 */
+				bnxt_free_filter(bp, mf);
+				flow->filter = nf;
+				return -EXDEV;
+			}
+		}
+	}
+	return 0;
+}
+
+static struct rte_flow *
+bnxt_flow_create(struct rte_eth_dev *dev,
+		 const struct rte_flow_attr *attr,
+		 const struct rte_flow_item pattern[],
+		 const struct rte_flow_action actions[],
+		 struct rte_flow_error *error)
+{
+	struct bnxt *bp = (struct bnxt *)dev->data->dev_private;
+	struct bnxt_filter_info *filter;
+	struct bnxt_vnic_info *vnic = NULL;
+	bool update_flow = false;
+	struct rte_flow *flow;
+	unsigned int i;
+	int ret = 0;
+
+	flow = rte_zmalloc("bnxt_flow", sizeof(struct rte_flow), 0);
+	if (!flow) {
+		rte_flow_error_set(error, ENOMEM,
+				   RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
+				   "Failed to allocate memory");
+		return flow;
+	}
+
+	ret = bnxt_flow_args_validate(attr, pattern, actions, error);
+	if (ret != 0) {
+		PMD_DRV_LOG(ERR, "Not a valid flow.\n");
+		goto free_flow;
+	}
+
+	filter = bnxt_get_unused_filter(bp);
+	if (filter == NULL) {
+		PMD_DRV_LOG(ERR, "Not enough resources for a new flow.\n");
+		goto free_flow;
+	}
+
+	ret = bnxt_validate_and_parse_flow(dev, pattern, actions, attr,
+					   error, filter);
+	if (ret != 0)
+		goto free_filter;
+
+	ret = bnxt_match_filter(bp, filter);
+	if (ret == -EEXIST) {
+		PMD_DRV_LOG(DEBUG, "Flow already exists.\n");
+		/* Clear the filter that was created as part of
+		 * validate_and_parse_flow() above
+		 */
+		bnxt_hwrm_clear_l2_filter(bp, filter);
+		goto free_filter;
+	} else if (ret == -EXDEV) {
+		PMD_DRV_LOG(DEBUG, "Flow with same pattern exists\n");
+		PMD_DRV_LOG(DEBUG, "Updating with different destination\n");
+		update_flow = true;
+	}
+
+	if (filter->filter_type == HWRM_CFA_EM_FILTER) {
+		filter->enables |=
+			HWRM_CFA_EM_FLOW_ALLOC_INPUT_ENABLES_L2_FILTER_ID;
+		ret = bnxt_hwrm_set_em_filter(bp, filter->dst_id, filter);
+	}
+
+	if (filter->filter_type == HWRM_CFA_NTUPLE_FILTER) {
+		filter->enables |=
+			HWRM_CFA_NTUPLE_FILTER_ALLOC_INPUT_ENABLES_L2_FILTER_ID;
+		ret = bnxt_hwrm_set_ntuple_filter(bp, filter->dst_id, filter);
+	}
+
+	for (i = 0; i < bp->nr_vnics; i++) {
+		vnic = &bp->vnic_info[i];
+		if (filter->dst_id == vnic->fw_vnic_id)
+			break;
+	}
+
+	if (!ret) {
+		flow->filter = filter;
+		flow->vnic = vnic;
+		if (update_flow) {
+			ret = -EXDEV;
+			goto free_flow;
+		}
+		PMD_DRV_LOG(ERR, "Successfully created flow.\n");
+		STAILQ_INSERT_TAIL(&vnic->flow_list, flow, next);
+		return flow;
+	}
+free_filter:
+	bnxt_free_filter(bp, filter);
+free_flow:
+	if (ret == -EEXIST)
+		rte_flow_error_set(error, ret,
+				   RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
+				   "Matching Flow exists.");
+	else if (ret == -EXDEV)
+		rte_flow_error_set(error, ret,
+				   RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
+				   "Flow with pattern exists, updating destination queue");
+	else
+		rte_flow_error_set(error, -ret,
+				   RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
+				   "Failed to create flow.");
+	rte_free(flow);
+	flow = NULL;
+	return flow;
+}
+
+static int
+bnxt_flow_destroy(struct rte_eth_dev *dev,
+		  struct rte_flow *flow,
+		  struct rte_flow_error *error)
+{
+	struct bnxt *bp = (struct bnxt *)dev->data->dev_private;
+	struct bnxt_filter_info *filter = flow->filter;
+	struct bnxt_vnic_info *vnic = flow->vnic;
+	int ret = 0;
+
+	ret = bnxt_match_filter(bp, filter);
+	if (ret == 0)
+		PMD_DRV_LOG(ERR, "Could not find matching flow\n");
+	if (filter->filter_type == HWRM_CFA_EM_FILTER)
+		ret = bnxt_hwrm_clear_em_filter(bp, filter);
+	if (filter->filter_type == HWRM_CFA_NTUPLE_FILTER)
+		ret = bnxt_hwrm_clear_ntuple_filter(bp, filter);
+	else
+		ret = bnxt_hwrm_clear_l2_filter(bp, filter);
+	if (!ret) {
+		STAILQ_REMOVE(&vnic->flow_list, flow, rte_flow, next);
+		rte_free(flow);
+	} else {
+		rte_flow_error_set(error, -ret,
+				   RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
+				   "Failed to destroy flow.");
+	}
+
+	return ret;
+}
+
+static int
+bnxt_flow_flush(struct rte_eth_dev *dev, struct rte_flow_error *error)
+{
+	struct bnxt *bp = (struct bnxt *)dev->data->dev_private;
+	struct bnxt_vnic_info *vnic;
+	struct rte_flow *flow;
+	unsigned int i;
+	int ret = 0;
+
+	for (i = 0; i < bp->nr_vnics; i++) {
+		vnic = &bp->vnic_info[i];
+		STAILQ_FOREACH(flow, &vnic->flow_list, next) {
+			struct bnxt_filter_info *filter = flow->filter;
+
+			if (filter->filter_type == HWRM_CFA_EM_FILTER)
+				ret = bnxt_hwrm_clear_em_filter(bp, filter);
+			if (filter->filter_type == HWRM_CFA_NTUPLE_FILTER)
+				ret = bnxt_hwrm_clear_ntuple_filter(bp, filter);
+
+			if (ret) {
+				rte_flow_error_set
+					(error,
+					 -ret,
+					 RTE_FLOW_ERROR_TYPE_HANDLE,
+					 NULL,
+					 "Failed to flush flow in HW.");
+				return -rte_errno;
+			}
+
+			STAILQ_REMOVE(&vnic->flow_list, flow,
+				      rte_flow, next);
+			rte_free(flow);
+		}
+	}
+
+	return ret;
+}
+
+const struct rte_flow_ops bnxt_flow_ops = {
+	.validate = bnxt_flow_validate,
+	.create = bnxt_flow_create,
+	.destroy = bnxt_flow_destroy,
+	.flush = bnxt_flow_flush,
+};
-- 
2.15.2 (Apple Git-101.1)
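
For context on how the ops added above get exercised, the following is a minimal application-side sketch using the generic rte_flow API; the port id, queue id, TCP port and the helper name steer_http_to_queue are illustrative and not part of the patch, and the function assumes EAL init and port setup have already happened. Note the driver above accepts ingress-only attributes and refuses to create flows while RSS mq_mode is enabled.

#include <string.h>
#include <rte_byteorder.h>
#include <rte_ethdev.h>
#include <rte_flow.h>

/* Sketch: steer TCP dst-port 80 traffic to Rx queue 1 of a started port. */
static struct rte_flow *
steer_http_to_queue(uint16_t port_id, uint16_t queue_id,
		    struct rte_flow_error *err)
{
	struct rte_flow_attr attr = { .ingress = 1 };
	struct rte_flow_action_queue queue = { .index = queue_id };
	struct rte_flow_item_eth eth_spec, eth_mask;
	struct rte_flow_item_ipv4 ip_spec, ip_mask;
	struct rte_flow_item_tcp tcp_spec, tcp_mask;

	memset(&eth_spec, 0, sizeof(eth_spec));
	memset(&eth_mask, 0, sizeof(eth_mask));
	memset(&ip_spec, 0, sizeof(ip_spec));
	memset(&ip_mask, 0, sizeof(ip_mask));
	memset(&tcp_spec, 0, sizeof(tcp_spec));
	memset(&tcp_mask, 0, sizeof(tcp_mask));
	tcp_spec.hdr.dst_port = rte_cpu_to_be_16(80);
	tcp_mask.hdr.dst_port = rte_cpu_to_be_16(0xffff);

	/* the driver requires spec and mask for every non-END item */
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH,
		  .spec = &eth_spec, .mask = &eth_mask },
		{ .type = RTE_FLOW_ITEM_TYPE_IPV4,
		  .spec = &ip_spec, .mask = &ip_mask },
		{ .type = RTE_FLOW_ITEM_TYPE_TCP,
		  .spec = &tcp_spec, .mask = &tcp_mask },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};

	/* .validate exercises the same parse path without touching HW */
	if (rte_flow_validate(port_id, &attr, pattern, actions, err))
		return NULL;

	/* .create programs an ntuple/EM filter pointing at the queue's VNIC */
	return rte_flow_create(port_id, &attr, pattern, actions, err);
}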

^ permalink raw reply	[flat|nested] 73+ messages in thread

* [dpdk-dev] [PATCH v2 12/23] net/bnxt: check for invalid vnic id
  2018-06-28 20:15   ` [dpdk-dev] [PATCH v2 00/23] " Ajit Khaparde
                       ` (10 preceding siblings ...)
  2018-06-28 20:15     ` [dpdk-dev] [PATCH v2 11/23] net/bnxt: refactor filter/flow Ajit Khaparde
@ 2018-06-28 20:15     ` Ajit Khaparde
  2018-06-28 20:15     ` [dpdk-dev] [PATCH v2 13/23] net/bnxt: update HWRM API to v1.9.2.9 Ajit Khaparde
                       ` (11 subsequent siblings)
  23 siblings, 0 replies; 73+ messages in thread
From: Ajit Khaparde @ 2018-06-28 20:15 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, Jay Ding, stable

From: Jay Ding <jay.ding@broadcom.com>

Passing an invalid fw_vnic_id to the firmware will cause the
bnxt_hwrm_vnic_plcmode_cfg command to fail.
Add a check for the VNIC id before sending the message to the firmware.

Fixes: daef48efe5e5 ("net/bnxt: support set MTU")
Cc: stable@dpdk.org

Signed-off-by: Jay Ding <jay.ding@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
--
v1->v2: Fix commit message.
---
 drivers/net/bnxt/bnxt_hwrm.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c
index 64687a69b..910129f12 100644
--- a/drivers/net/bnxt/bnxt_hwrm.c
+++ b/drivers/net/bnxt/bnxt_hwrm.c
@@ -1560,6 +1560,11 @@ int bnxt_hwrm_vnic_plcmode_cfg(struct bnxt *bp,
 	struct hwrm_vnic_plcmodes_cfg_output *resp = bp->hwrm_cmd_resp_addr;
 	uint16_t size;
 
+	if (vnic->fw_vnic_id == INVALID_HW_RING_ID) {
+		PMD_DRV_LOG(DEBUG, "VNIC ID %x\n", vnic->fw_vnic_id);
+		return rc;
+	}
+
 	HWRM_PREP(req, VNIC_PLCMODES_CFG);
 
 	req.flags = rte_cpu_to_le_32(
-- 
2.15.2 (Apple Git-101.1)
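
The added early return follows a common guard pattern in this driver. The stand-alone sketch below restates the idea with simplified, made-up structure names (it is not the driver's code): skip the firmware call entirely while the VNIC id still carries the unallocated sentinel.

#include <stdint.h>
#include <stdio.h>

#define INVALID_HW_RING_ID  ((uint16_t)-1)	/* unallocated-id sentinel */

struct vnic { uint16_t fw_vnic_id; };

/* Sketch: configure placement mode only for a VNIC that was actually
 * allocated, instead of sending an invalid id and failing in firmware.
 */
static int plcmode_cfg(struct vnic *vnic)
{
	if (vnic->fw_vnic_id == INVALID_HW_RING_ID) {
		printf("VNIC ID %x not allocated, skipping cfg\n",
		       vnic->fw_vnic_id);
		return 0;	/* nothing to configure yet; not an error */
	}
	/* ... build and send the placement-mode request here ... */
	return 0;
}

int main(void)
{
	struct vnic v = { .fw_vnic_id = INVALID_HW_RING_ID };
	return plcmode_cfg(&v);
}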

^ permalink raw reply	[flat|nested] 73+ messages in thread

* [dpdk-dev] [PATCH v2 13/23] net/bnxt: update HWRM API to v1.9.2.9
  2018-06-28 20:15   ` [dpdk-dev] [PATCH v2 00/23] " Ajit Khaparde
                       ` (11 preceding siblings ...)
  2018-06-28 20:15     ` [dpdk-dev] [PATCH v2 12/23] net/bnxt: check for invalid vnic id Ajit Khaparde
@ 2018-06-28 20:15     ` Ajit Khaparde
  2018-06-28 20:15     ` [dpdk-dev] [PATCH v2 14/23] net/bnxt: fix Tx with multiple mbuf Ajit Khaparde
                       ` (10 subsequent siblings)
  23 siblings, 0 replies; 73+ messages in thread
From: Ajit Khaparde @ 2018-06-28 20:15 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, Rob Miller, Rob Miller

From: Rob Miller <rmiller@broadcom.com>

update HWRM API to v1.9.2.9

Signed-off-by: Rob Miller <rob.miller@broadcom.com>
Reviewed-by: Scott Branden <scott.branden@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Signed-off-by: Rob Miller <rmiller@broadcom.com>
---
 drivers/net/bnxt/hsi_struct_def_dpdk.h | 113 ++++++++++++++++++++++++++++++++-
 1 file changed, 111 insertions(+), 2 deletions(-)

diff --git a/drivers/net/bnxt/hsi_struct_def_dpdk.h b/drivers/net/bnxt/hsi_struct_def_dpdk.h
index fd6d8807e..f5c7b4228 100644
--- a/drivers/net/bnxt/hsi_struct_def_dpdk.h
+++ b/drivers/net/bnxt/hsi_struct_def_dpdk.h
@@ -686,8 +686,8 @@ struct hwrm_err_output {
 #define HWRM_VERSION_MINOR 9
 #define HWRM_VERSION_UPDATE 2
 /* non-zero means beta version */
-#define HWRM_VERSION_RSVD 6
-#define HWRM_VERSION_STR "1.9.2.6"
+#define HWRM_VERSION_RSVD 9
+#define HWRM_VERSION_STR "1.9.2.9"
 
 /****************
  * hwrm_ver_get *
@@ -3183,6 +3183,9 @@ struct hwrm_async_event_cmpl {
 	/* LLFC/PFC Configuration Change */
 	#define HWRM_ASYNC_EVENT_CMPL_EVENT_ID_LLFC_PFC_CHANGE \
 		UINT32_C(0x34)
+	/* Default VNIC Configuration Change */
+	#define HWRM_ASYNC_EVENT_CMPL_EVENT_ID_DEFAULT_VNIC_CHANGE \
+		UINT32_C(0x35)
 	/* HWRM Error */
 	#define HWRM_ASYNC_EVENT_CMPL_EVENT_ID_HWRM_ERROR \
 		UINT32_C(0xff)
@@ -3280,6 +3283,11 @@ struct hwrm_async_event_cmpl_link_status_change {
 		UINT32_C(0xffff0)
 	#define HWRM_ASYNC_EVENT_CMPL_LINK_STATUS_CHANGE_EVENT_DATA1_PORT_ID_SFT \
 		4
+	/* Indicates the physical function this event occurred on. */
+	#define HWRM_ASYNC_EVENT_CMPL_LINK_STATUS_CHANGE_EVENT_DATA1_PF_ID_MASK \
+		UINT32_C(0xff00000)
+	#define HWRM_ASYNC_EVENT_CMPL_LINK_STATUS_CHANGE_EVENT_DATA1_PF_ID_SFT \
+		20
 } __attribute__((packed));
 
 /* hwrm_async_event_cmpl_link_mtu_change (size:128b/16B) */
@@ -4087,6 +4095,10 @@ struct hwrm_async_event_cmpl_vf_flr {
 	#define HWRM_ASYNC_EVENT_CMPL_VF_FLR_EVENT_DATA1_VF_ID_MASK \
 		UINT32_C(0xffff)
 	#define HWRM_ASYNC_EVENT_CMPL_VF_FLR_EVENT_DATA1_VF_ID_SFT 0
+	/* Indicates the physical function this event occurred on. */
+	#define HWRM_ASYNC_EVENT_CMPL_VF_FLR_EVENT_DATA1_PF_ID_MASK \
+		UINT32_C(0xff0000)
+	#define HWRM_ASYNC_EVENT_CMPL_VF_FLR_EVENT_DATA1_PF_ID_SFT 16
 } __attribute__((packed));
 
 /* hwrm_async_event_cmpl_vf_mac_addr_change (size:128b/16B) */
@@ -4354,6 +4366,88 @@ struct hwrm_async_event_cmpl_llfc_pfc_change {
 		5
 } __attribute__((packed));
 
+/* hwrm_async_event_cmpl_default_vnic_change (size:128b/16B) */
+struct hwrm_async_event_cmpl_default_vnic_change {
+	uint16_t	type;
+	/*
+	 * This field indicates the exact type of the completion.
+	 * By convention, the LSB identifies the length of the
+	 * record in 16B units.  Even values indicate 16B
+	 * records.  Odd values indicate 32B
+	 * records.
+	 */
+	#define HWRM_ASYNC_EVENT_CMPL_DEFAULT_VNIC_CHANGE_TYPE_MASK \
+		UINT32_C(0x3f)
+	#define HWRM_ASYNC_EVENT_CMPL_DEFAULT_VNIC_CHANGE_TYPE_SFT \
+		0
+	/* HWRM Asynchronous Event Information */
+	#define HWRM_ASYNC_EVENT_CMPL_DEFAULT_VNIC_CHANGE_TYPE_HWRM_ASYNC_EVENT \
+		UINT32_C(0x2e)
+	#define HWRM_ASYNC_EVENT_CMPL_DEFAULT_VNIC_CHANGE_TYPE_LAST \
+		HWRM_ASYNC_EVENT_CMPL_DEFAULT_VNIC_CHANGE_TYPE_HWRM_ASYNC_EVENT
+	/* unused1 is 10 b */
+	#define HWRM_ASYNC_EVENT_CMPL_DEFAULT_VNIC_CHANGE_UNUSED1_MASK \
+		UINT32_C(0xffc0)
+	#define HWRM_ASYNC_EVENT_CMPL_DEFAULT_VNIC_CHANGE_UNUSED1_SFT \
+		6
+	/* Identifiers of events. */
+	uint16_t	event_id;
+	/* Notification of a default vnic allocation or free */
+	#define HWRM_ASYNC_EVENT_CMPL_DEFAULT_VNIC_CHANGE_EVENT_ID_ALLOC_FREE_NOTIFICATION \
+		UINT32_C(0x35)
+	#define HWRM_ASYNC_EVENT_CMPL_DEFAULT_VNIC_CHANGE_EVENT_ID_LAST \
+		HWRM_ASYNC_EVENT_CMPL_DEFAULT_VNIC_CHANGE_EVENT_ID_ALLOC_FREE_NOTIFICATION
+	/* Event specific data */
+	uint32_t	event_data2;
+	uint8_t	opaque_v;
+	/*
+	 * This value is written by the NIC such that it will be different
+	 * for each pass through the completion queue.   The even passes
+	 * will write 1.  The odd passes will write 0.
+	 */
+	#define HWRM_ASYNC_EVENT_CMPL_DEFAULT_VNIC_CHANGE_V \
+		UINT32_C(0x1)
+	/* opaque is 7 b */
+	#define HWRM_ASYNC_EVENT_CMPL_DEFAULT_VNIC_CHANGE_OPAQUE_MASK \
+		UINT32_C(0xfe)
+	#define HWRM_ASYNC_EVENT_CMPL_DEFAULT_VNIC_CHANGE_OPAQUE_SFT 1
+	/* 8-lsb timestamp from POR (100-msec resolution) */
+	uint8_t	timestamp_lo;
+	/* 16-lsb timestamp from POR (100-msec resolution) */
+	uint16_t	timestamp_hi;
+	/* Event specific data */
+	uint32_t	event_data1;
+	/* Indicates default vnic configuration change */
+	#define HWRM_ASYNC_EVENT_CMPL_DEFAULT_VNIC_CHANGE_EVENT_DATA1_DEF_VNIC_STATE_MASK \
+		UINT32_C(0x3)
+	#define HWRM_ASYNC_EVENT_CMPL_DEFAULT_VNIC_CHANGE_EVENT_DATA1_DEF_VNIC_STATE_SFT \
+		0
+	/*
+	 * If this field is set to 1, then it indicates that
+	 * a default VNIC has been allocated.
+	 */
+	#define HWRM_ASYNC_EVENT_CMPL_DEFAULT_VNIC_CHANGE_EVENT_DATA1_DEF_VNIC_STATE_DEF_VNIC_ALLOC \
+		UINT32_C(0x1)
+	/*
+	 * If this field is set to 2, then it indicates that
+	 * a default VNIC has been freed.
+	 */
+	#define HWRM_ASYNC_EVENT_CMPL_DEFAULT_VNIC_CHANGE_EVENT_DATA1_DEF_VNIC_STATE_DEF_VNIC_FREE \
+		UINT32_C(0x2)
+	#define HWRM_ASYNC_EVENT_CMPL_DEFAULT_VNIC_CHANGE_EVENT_DATA1_DEF_VNIC_STATE_LAST \
+		HWRM_ASYNC_EVENT_CMPL_DEFAULT_VNIC_CHANGE_EVENT_DATA1_DEF_VNIC_STATE_DEF_VNIC_FREE
+	/* Indicates the physical function this event occurred on. */
+	#define HWRM_ASYNC_EVENT_CMPL_DEFAULT_VNIC_CHANGE_EVENT_DATA1_PF_ID_MASK \
+		UINT32_C(0x3fc)
+	#define HWRM_ASYNC_EVENT_CMPL_DEFAULT_VNIC_CHANGE_EVENT_DATA1_PF_ID_SFT \
+		2
+	/* Indicates the virtual function this event occurred on */
+	#define HWRM_ASYNC_EVENT_CMPL_DEFAULT_VNIC_CHANGE_EVENT_DATA1_VF_ID_MASK \
+		UINT32_C(0x3fffc00)
+	#define HWRM_ASYNC_EVENT_CMPL_DEFAULT_VNIC_CHANGE_EVENT_DATA1_VF_ID_SFT \
+		10
+} __attribute__((packed));
+
 /* hwrm_async_event_cmpl_hwrm_error (size:128b/16B) */
 struct hwrm_async_event_cmpl_hwrm_error {
 	uint16_t	type;
@@ -5196,6 +5290,21 @@ struct hwrm_func_qcaps_output {
 	 */
 	#define HWRM_FUNC_QCAPS_OUTPUT_FLAGS_PCIE_STATS_SUPPORTED \
 		UINT32_C(0x10000)
+	/*
+	 * If the query is for a VF, then this flag shall be ignored.
+	 * If this query is for a PF and this flag is set to 1,
+	 * then the PF has the capability to adopt the VFs belonging
+	 * to another PF.
+	 */
+	#define HWRM_FUNC_QCAPS_OUTPUT_FLAGS_ADOPTED_PF_SUPPORTED \
+		UINT32_C(0x20000)
+	/*
+	 * If the query is for a VF, then this flag shall be ignored.
+	 * If this query is for a PF and this flag is set to 1,
+	 * then the PF has the capability to administer another PF.
+	 */
+	#define HWRM_FUNC_QCAPS_OUTPUT_FLAGS_ADMIN_PF_SUPPORTED \
+		UINT32_C(0x40000)
 	/*
 	 * This value is current MAC address configured for this
 	 * function. A value of 00-00-00-00-00-00 indicates no
-- 
2.15.2 (Apple Git-101.1)
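
The new async completion packs the default-VNIC state, PF id and VF id into event_data1. Below is a small self-contained sketch of how a handler might decode those fields, using the same mask and shift values defined above under shortened names; the sample event word is made up.

#include <stdint.h>
#include <stdio.h>

/* Field layout of event_data1, mirroring the definitions added above */
#define DEF_VNIC_STATE_MASK  0x3u        /* ..._DEF_VNIC_STATE_MASK */
#define DEF_VNIC_STATE_SFT   0
#define PF_ID_MASK           0x3fcu      /* ..._PF_ID_MASK */
#define PF_ID_SFT            2
#define VF_ID_MASK           0x3fffc00u  /* ..._VF_ID_MASK */
#define VF_ID_SFT            10

int main(void)
{
	uint32_t event_data1 = 0x00000c05;	/* made-up sample value */
	uint32_t state = (event_data1 & DEF_VNIC_STATE_MASK) >> DEF_VNIC_STATE_SFT;
	uint32_t pf_id = (event_data1 & PF_ID_MASK) >> PF_ID_SFT;
	uint32_t vf_id = (event_data1 & VF_ID_MASK) >> VF_ID_SFT;

	/* state 1 = default VNIC allocated, 2 = default VNIC freed */
	printf("default VNIC %s on PF %u / VF %u\n",
	       state == 1 ? "allocated" : state == 2 ? "freed" : "unknown",
	       pf_id, vf_id);
	return 0;
}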

^ permalink raw reply	[flat|nested] 73+ messages in thread

* [dpdk-dev] [PATCH v2 14/23] net/bnxt: fix Tx with multiple mbuf
  2018-06-28 20:15   ` [dpdk-dev] [PATCH v2 00/23] " Ajit Khaparde
                       ` (12 preceding siblings ...)
  2018-06-28 20:15     ` [dpdk-dev] [PATCH v2 13/23] net/bnxt: update HWRM API to v1.9.2.9 Ajit Khaparde
@ 2018-06-28 20:15     ` Ajit Khaparde
  2018-06-28 20:15     ` [dpdk-dev] [PATCH v2 15/23] net/bnxt: revert reset of L2 filter id Ajit Khaparde
                       ` (9 subsequent siblings)
  23 siblings, 0 replies; 73+ messages in thread
From: Ajit Khaparde @ 2018-06-28 20:15 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, Xiaoxin Peng, stable

From: Xiaoxin Peng <xiaoxin.peng@broadcom.com>

When using multiple mbufs (a segmented packet) to transmit a large
packet, the total packet length (the sum of all segments) must be used
to set txbd->flags_type. Packets are not sent correctly when
tx_pkt->data_len (the length of the first segment only) is used instead.

Fixes: 6eb3cc2294fd ("net/bnxt: add initial Tx code")
Cc: stable@dpdk.org

Signed-off-by: Xiaoxin Peng <xiaoxin.peng@broadcom.com>
Reviewed-by: Herry Chen <herry.chen@broadcom.com>
Reviewed-by: Jason He <jason.he@broadcom.com>
Reviewed-by: Scott Branden <scott.branden@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/bnxt_txr.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/net/bnxt/bnxt_txr.c b/drivers/net/bnxt/bnxt_txr.c
index 68645b2f7..e85511f9a 100644
--- a/drivers/net/bnxt/bnxt_txr.c
+++ b/drivers/net/bnxt/bnxt_txr.c
@@ -160,10 +160,10 @@ static uint16_t bnxt_start_xmit(struct rte_mbuf *tx_pkt,
 		*cmpl_next = false;
 	}
 	txbd->len = tx_pkt->data_len;
-	if (txbd->len >= 2014)
+	if (tx_pkt->pkt_len >= 2014)
 		txbd->flags_type |= TX_BD_LONG_FLAGS_LHINT_GTE2K;
 	else
-		txbd->flags_type |= lhint_arr[txbd->len >> 9];
+		txbd->flags_type |= lhint_arr[tx_pkt->pkt_len >> 9];
 	txbd->address = rte_cpu_to_le_32(rte_mbuf_data_iova(tx_buf->mbuf));
 
 	if (long_bd) {
-- 
2.15.2 (Apple Git-101.1)
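
A stand-alone sketch of the distinction this fix relies on, using a toy segment structure rather than rte_mbuf: data_len describes one segment, while the total across the chain is what pkt_len carries, and only the total can select the >= 2014-byte length hint used above.

#include <stdint.h>
#include <stdio.h>

/* Toy model of the two mbuf length fields: data_len covers one segment,
 * the chain total is what pkt_len holds.  Simplified stand-in, not rte_mbuf.
 */
struct seg { uint16_t data_len; struct seg *next; };

static uint32_t pkt_len(const struct seg *s)
{
	uint32_t total = 0;
	for (; s != NULL; s = s->next)
		total += s->data_len;
	return total;
}

int main(void)
{
	/* a 3000-byte packet split across two segments */
	struct seg s2 = { .data_len = 1500, .next = NULL };
	struct seg s1 = { .data_len = 1500, .next = &s2 };

	uint32_t total = pkt_len(&s1);

	/* The length hint must come from the 3000-byte total: using
	 * data_len (1500) would miss the >=2K hint, which is the bug
	 * the patch above fixes.
	 */
	printf("data_len=%u pkt_len=%u lhint=%s\n",
	       s1.data_len, total, total >= 2014 ? "GTE2K" : "LT2K");
	return 0;
}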

^ permalink raw reply	[flat|nested] 73+ messages in thread

* [dpdk-dev] [PATCH v2 15/23] net/bnxt: revert reset of L2 filter id
  2018-06-28 20:15   ` [dpdk-dev] [PATCH v2 00/23] " Ajit Khaparde
                       ` (13 preceding siblings ...)
  2018-06-28 20:15     ` [dpdk-dev] [PATCH v2 14/23] net/bnxt: fix Tx with multiple mbuf Ajit Khaparde
@ 2018-06-28 20:15     ` Ajit Khaparde
  2018-06-28 20:15     ` [dpdk-dev] [PATCH v2 16/23] net/bnxt: check filter type before clearing it Ajit Khaparde
                       ` (8 subsequent siblings)
  23 siblings, 0 replies; 73+ messages in thread
From: Ajit Khaparde @ 2018-06-28 20:15 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, Somnath Kotur, stable, ajit.khaparde

From: Somnath Kotur <somnath.kotur@broadcom.com>

The L2 filter id is needed in many scenarios, particularly when
we are repurposing the same ntuple filter with different destination
queues. This patch reverts a commit in which the L2 filter id was being
reset in clear_ntuple_filter().

Fixes: 1383434c9089 ("net/bnxt: reset L2 filter id once filter is freed")
Cc: stable@dpdk.org

Cc: ajit.khaparde@broadcom.com
Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
--
v1->v2: update commit message.
---
 drivers/net/bnxt/bnxt_hwrm.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c
index 910129f12..ba8e44a9b 100644
--- a/drivers/net/bnxt/bnxt_hwrm.c
+++ b/drivers/net/bnxt/bnxt_hwrm.c
@@ -3798,7 +3798,6 @@ int bnxt_hwrm_clear_ntuple_filter(struct bnxt *bp,
 	HWRM_UNLOCK();
 
 	filter->fw_ntuple_filter_id = UINT64_MAX;
-	filter->fw_l2_filter_id = UINT64_MAX;
 
 	return 0;
 }
-- 
2.15.2 (Apple Git-101.1)

^ permalink raw reply	[flat|nested] 73+ messages in thread

* [dpdk-dev] [PATCH v2 16/23] net/bnxt: check filter type before clearing it
  2018-06-28 20:15   ` [dpdk-dev] [PATCH v2 00/23] " Ajit Khaparde
                       ` (14 preceding siblings ...)
  2018-06-28 20:15     ` [dpdk-dev] [PATCH v2 15/23] net/bnxt: revert reset of L2 filter id Ajit Khaparde
@ 2018-06-28 20:15     ` Ajit Khaparde
  2018-06-28 20:15     ` [dpdk-dev] [PATCH v2 17/23] net/bnxt: fix set MTU Ajit Khaparde
                       ` (7 subsequent siblings)
  23 siblings, 0 replies; 73+ messages in thread
From: Ajit Khaparde @ 2018-06-28 20:15 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, stable

In bnxt_free_filter_mem(), check the filter type and call the
appropriate HWRM command to clear the filter from HW.

Fixes: 5ef3b79fdfe6 ("net/bnxt: support flow filter ops")
Cc: stable@dpdk.org

Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
--
v1->v2: add stable@dpdk.org in Cc.
---
 drivers/net/bnxt/bnxt_filter.c | 21 +++++++++++++++++----
 1 file changed, 17 insertions(+), 4 deletions(-)

diff --git a/drivers/net/bnxt/bnxt_filter.c b/drivers/net/bnxt/bnxt_filter.c
index 31757d32c..1038941e8 100644
--- a/drivers/net/bnxt/bnxt_filter.c
+++ b/drivers/net/bnxt/bnxt_filter.c
@@ -117,16 +117,29 @@ void bnxt_free_filter_mem(struct bnxt *bp)
 	max_filters = bp->max_l2_ctx;
 	for (i = 0; i < max_filters; i++) {
 		filter = &bp->filter_info[i];
-		if (filter->fw_l2_filter_id != ((uint64_t)-1)) {
-			PMD_DRV_LOG(ERR, "HWRM filter is not freed??\n");
+		if (filter->fw_l2_filter_id != ((uint64_t)-1) &&
+		    filter->filter_type == HWRM_CFA_L2_FILTER) {
+			PMD_DRV_LOG(ERR, "L2 filter is not free\n");
 			/* Call HWRM to try to free filter again */
 			rc = bnxt_hwrm_clear_l2_filter(bp, filter);
 			if (rc)
 				PMD_DRV_LOG(ERR,
-				       "HWRM filter cannot be freed rc = %d\n",
-					rc);
+					    "Cannot free L2 filter: %d\n",
+					    rc);
 		}
 		filter->fw_l2_filter_id = UINT64_MAX;
+
+		if (filter->fw_ntuple_filter_id != ((uint64_t)-1) &&
+		    filter->filter_type == HWRM_CFA_NTUPLE_FILTER) {
+			PMD_DRV_LOG(ERR, "NTUPLE filter is not free\n");
+			/* Call HWRM to try to free filter again */
+			rc = bnxt_hwrm_clear_ntuple_filter(bp, filter);
+			if (rc)
+				PMD_DRV_LOG(ERR,
+					    "Cannot free NTUPLE filter: %d\n",
+					    rc);
+		}
+		filter->fw_ntuple_filter_id = UINT64_MAX;
 	}
 	STAILQ_INIT(&bp->free_filter_list);
 
-- 
2.15.2 (Apple Git-101.1)
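
A simplified model of the cleanup dispatch introduced above (structure and enum names are illustrative, not the driver's): teardown checks which HWRM object backs the filter and issues the matching clear command.

#include <stdint.h>
#include <stdio.h>

enum filt_type { L2_FILTER, NTUPLE_FILTER };

struct filt {
	enum filt_type type;
	uint64_t fw_l2_filter_id;
	uint64_t fw_ntuple_filter_id;
};

/* Sketch: clear only the HW object that matches the filter type,
 * then mark both ids as invalid.
 */
static void free_filter(struct filt *f)
{
	if (f->type == L2_FILTER && f->fw_l2_filter_id != UINT64_MAX)
		printf("clear L2 filter 0x%llx\n",
		       (unsigned long long)f->fw_l2_filter_id);

	if (f->type == NTUPLE_FILTER && f->fw_ntuple_filter_id != UINT64_MAX)
		printf("clear NTUPLE filter 0x%llx\n",
		       (unsigned long long)f->fw_ntuple_filter_id);

	f->fw_l2_filter_id = UINT64_MAX;
	f->fw_ntuple_filter_id = UINT64_MAX;
}

int main(void)
{
	struct filt f = { NTUPLE_FILTER, UINT64_MAX, 0x42 };
	free_filter(&f);
	return 0;
}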

^ permalink raw reply	[flat|nested] 73+ messages in thread

* [dpdk-dev] [PATCH v2 17/23] net/bnxt: fix set MTU
  2018-06-28 20:15   ` [dpdk-dev] [PATCH v2 00/23] " Ajit Khaparde
                       ` (15 preceding siblings ...)
  2018-06-28 20:15     ` [dpdk-dev] [PATCH v2 16/23] net/bnxt: check filter type before clearing it Ajit Khaparde
@ 2018-06-28 20:15     ` Ajit Khaparde
  2018-06-28 20:15     ` [dpdk-dev] [PATCH v2 18/23] net/bnxt: fix incorrect IO address handling in Tx Ajit Khaparde
                       ` (6 subsequent siblings)
  23 siblings, 0 replies; 73+ messages in thread
From: Ajit Khaparde @ 2018-06-28 20:15 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, stable

There is no need to call bnxt_hwrm_vnic_plcmode_cfg if the new MTU is
not greater than the maximum amount of data the mbuf can accommodate.

Fixes: daef48efe5e5 ("net/bnxt: support set MTU")
Cc: stable@dpdk.org

Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
--
v1->v2: update commit log
---
 drivers/net/bnxt/bnxt_ethdev.c | 12 +++++++++---
 1 file changed, 9 insertions(+), 3 deletions(-)

diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index ab3f5c8e7..fe95e01ca 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -1581,6 +1581,7 @@ static int bnxt_mtu_set_op(struct rte_eth_dev *eth_dev, uint16_t new_mtu)
 
 	for (i = 0; i < bp->nr_vnics; i++) {
 		struct bnxt_vnic_info *vnic = &bp->vnic_info[i];
+		uint16_t size = 0;
 
 		vnic->mru = bp->eth_dev->data->mtu + ETHER_HDR_LEN +
 					ETHER_CRC_LEN + VLAN_TAG_SIZE * 2;
@@ -1588,9 +1589,14 @@ static int bnxt_mtu_set_op(struct rte_eth_dev *eth_dev, uint16_t new_mtu)
 		if (rc)
 			break;
 
-		rc = bnxt_hwrm_vnic_plcmode_cfg(bp, vnic);
-		if (rc)
-			return rc;
+		size = rte_pktmbuf_data_room_size(bp->rx_queues[0]->mb_pool);
+		size -= RTE_PKTMBUF_HEADROOM;
+
+		if (size < new_mtu) {
+			rc = bnxt_hwrm_vnic_plcmode_cfg(bp, vnic);
+			if (rc)
+				return rc;
+		}
 	}
 
 	return rc;
-- 
2.15.2 (Apple Git-101.1)
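
A small sketch of the arithmetic behind the new condition, with an illustrative 2048-byte mbuf data room standing in for rte_pktmbuf_data_room_size(): the placement mode only needs reprogramming when a frame of the new MTU cannot fit in the usable part of a single mbuf.

#include <stdint.h>
#include <stdio.h>

#define RTE_PKTMBUF_HEADROOM 128	/* DPDK default headroom */

int main(void)
{
	uint16_t data_room = 2048;	/* illustrative mbuf buffer size */
	uint16_t size = data_room - RTE_PKTMBUF_HEADROOM;
	uint16_t new_mtu = 9000;

	if (size < new_mtu)
		printf("MTU %u > %u usable bytes: reconfigure placement mode\n",
		       new_mtu, size);
	else
		printf("MTU %u fits in one mbuf: skip plcmode cfg\n", new_mtu);
	return 0;
}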

^ permalink raw reply	[flat|nested] 73+ messages in thread

* [dpdk-dev] [PATCH v2 18/23] net/bnxt: fix incorrect IO address handling in Tx
  2018-06-28 20:15   ` [dpdk-dev] [PATCH v2 00/23] " Ajit Khaparde
                       ` (16 preceding siblings ...)
  2018-06-28 20:15     ` [dpdk-dev] [PATCH v2 17/23] net/bnxt: fix set MTU Ajit Khaparde
@ 2018-06-28 20:15     ` Ajit Khaparde
  2018-06-28 20:15     ` [dpdk-dev] [PATCH v2 19/23] net/bnxt: allocate RSS context only if RSS mode is enabled Ajit Khaparde
                       ` (5 subsequent siblings)
  23 siblings, 0 replies; 73+ messages in thread
From: Ajit Khaparde @ 2018-06-28 20:15 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, stable

rte_mbuf_data_iova returns a 64-bit address, but only the lower 32 bits
of it were being used. Use rte_cpu_to_le_64 instead of rte_cpu_to_le_32.

Fixes: 6eb3cc2294fd ("net/bnxt: add initial Tx code")
Cc: stable@dpdk.org

Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/bnxt_txr.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/net/bnxt/bnxt_txr.c b/drivers/net/bnxt/bnxt_txr.c
index e85511f9a..67bb35e06 100644
--- a/drivers/net/bnxt/bnxt_txr.c
+++ b/drivers/net/bnxt/bnxt_txr.c
@@ -164,7 +164,7 @@ static uint16_t bnxt_start_xmit(struct rte_mbuf *tx_pkt,
 		txbd->flags_type |= TX_BD_LONG_FLAGS_LHINT_GTE2K;
 	else
 		txbd->flags_type |= lhint_arr[tx_pkt->pkt_len >> 9];
-	txbd->address = rte_cpu_to_le_32(rte_mbuf_data_iova(tx_buf->mbuf));
+	txbd->address = rte_cpu_to_le_64(rte_mbuf_data_iova(tx_buf->mbuf));
 
 	if (long_bd) {
 		txbd->flags_type |= TX_BD_LONG_TYPE_TX_BD_LONG;
@@ -287,7 +287,7 @@ static uint16_t bnxt_start_xmit(struct rte_mbuf *tx_pkt,
 		tx_buf = &txr->tx_buf_ring[txr->tx_prod];
 
 		txbd = &txr->tx_desc_ring[txr->tx_prod];
-		txbd->address = rte_cpu_to_le_32(rte_mbuf_data_iova(m_seg));
+		txbd->address = rte_cpu_to_le_64(rte_mbuf_data_iova(m_seg));
 		txbd->flags_type |= TX_BD_SHORT_TYPE_TX_BD_SHORT;
 		txbd->len = m_seg->data_len;
 
-- 
2.15.2 (Apple Git-101.1)
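
A stand-alone illustration of the truncation being fixed (the sample bus address is made up): storing a 64-bit IOVA through a 32-bit conversion silently drops the upper half, which corrupts the descriptor address whenever buffers sit above 4 GB. On little-endian hosts the rte_cpu_to_le_* helpers are no-ops, so the width of the conversion is what matters.

#include <stdint.h>
#include <inttypes.h>
#include <stdio.h>

int main(void)
{
	uint64_t iova = 0x0000000123456789ULL;	/* bus address above 4 GB */
	uint32_t as_32 = (uint32_t)iova;	/* what the old code stored */
	uint64_t as_64 = iova;			/* what the fixed code stores */

	printf("descriptor address: 32-bit 0x%08" PRIx32
	       " vs 64-bit 0x%016" PRIx64 "\n", as_32, as_64);
	return 0;
}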

^ permalink raw reply	[flat|nested] 73+ messages in thread

* [dpdk-dev] [PATCH v2 19/23] net/bnxt: allocate RSS context only if RSS mode is enabled
  2018-06-28 20:15   ` [dpdk-dev] [PATCH v2 00/23] " Ajit Khaparde
                       ` (17 preceding siblings ...)
  2018-06-28 20:15     ` [dpdk-dev] [PATCH v2 18/23] net/bnxt: fix incorrect IO address handling in Tx Ajit Khaparde
@ 2018-06-28 20:15     ` Ajit Khaparde
  2018-06-28 20:15     ` [dpdk-dev] [PATCH v2 20/23] net/bnxt: fix to move a flow to a different queue Ajit Khaparde
                       ` (4 subsequent siblings)
  23 siblings, 0 replies; 73+ messages in thread
From: Ajit Khaparde @ 2018-06-28 20:15 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit

Allocate the RSS context only if RSS mode is enabled.

Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
--
v1->v2: fix commit log
---
 drivers/net/bnxt/bnxt_ethdev.c | 16 ++++++++++------
 1 file changed, 10 insertions(+), 6 deletions(-)

diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index fe95e01ca..44c6cfa0a 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -248,6 +248,7 @@ static int bnxt_init_chip(struct bnxt *bp)
 
 	/* VNIC configuration */
 	for (i = 0; i < bp->nr_vnics; i++) {
+		struct rte_eth_conf *dev_conf = &bp->eth_dev->data->dev_conf;
 		struct bnxt_vnic_info *vnic = &bp->vnic_info[i];
 
 		rc = bnxt_hwrm_vnic_alloc(bp, vnic);
@@ -257,12 +258,15 @@ static int bnxt_init_chip(struct bnxt *bp)
 			goto err_out;
 		}
 
-		rc = bnxt_hwrm_vnic_ctx_alloc(bp, vnic);
-		if (rc) {
-			PMD_DRV_LOG(ERR,
-				"HWRM vnic %d ctx alloc failure rc: %x\n",
-				i, rc);
-			goto err_out;
+		/* Alloc RSS context only if RSS mode is enabled */
+		if (dev_conf->rxmode.mq_mode & ETH_MQ_RX_RSS) {
+			rc = bnxt_hwrm_vnic_ctx_alloc(bp, vnic);
+			if (rc) {
+				PMD_DRV_LOG(ERR,
+					"HWRM vnic %d ctx alloc failure rc: %x\n",
+					i, rc);
+				goto err_out;
+			}
 		}
 
 		rc = bnxt_hwrm_vnic_cfg(bp, vnic);
-- 
2.15.2 (Apple Git-101.1)

^ permalink raw reply	[flat|nested] 73+ messages in thread

* [dpdk-dev] [PATCH v2 20/23] net/bnxt: fix to move a flow to a different queue
  2018-06-28 20:15   ` [dpdk-dev] [PATCH v2 00/23] " Ajit Khaparde
                       ` (18 preceding siblings ...)
  2018-06-28 20:15     ` [dpdk-dev] [PATCH v2 19/23] net/bnxt: allocate RSS context only if RSS mode is enabled Ajit Khaparde
@ 2018-06-28 20:15     ` Ajit Khaparde
  2018-06-28 20:15     ` [dpdk-dev] [PATCH v2 21/23] net/bnxt: check VF resources if resource manager is enabled Ajit Khaparde
                       ` (3 subsequent siblings)
  23 siblings, 0 replies; 73+ messages in thread
From: Ajit Khaparde @ 2018-06-28 20:15 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, Somnath Kotur, stable

From: Somnath Kotur <somnath.kotur@broadcom.com>

While moving a flow to a different destination queue,
the l2_filter_id being passed to the FW command was incorrect.
Fix it by re-using the matching filter's l2_filter_id since
that is supposed to be the same in this case.

Fixes: 5ef3b79fdfe6 ("net/bnxt: support flow filter ops")
Cc: stable@dpdk.org

Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/bnxt_flow.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/drivers/net/bnxt/bnxt_flow.c b/drivers/net/bnxt/bnxt_flow.c
index a491e9dbf..ac7656741 100644
--- a/drivers/net/bnxt/bnxt_flow.c
+++ b/drivers/net/bnxt/bnxt_flow.c
@@ -968,9 +968,13 @@ bnxt_match_filter(struct bnxt *bp, struct bnxt_filter_info *nf)
 				    sizeof(nf->dst_ipaddr_mask))) {
 				if (mf->dst_id == nf->dst_id)
 					return -EEXIST;
-				/* Same Flow, Different queue
+				/*
+				 * Same Flow, Different queue
 				 * Clear the old ntuple filter
+				 * Reuse the matching L2 filter
+				 * ID for the new filter
 				 */
+				nf->fw_l2_filter_id = mf->fw_l2_filter_id;
 				if (nf->filter_type == HWRM_CFA_EM_FILTER)
 					bnxt_hwrm_clear_em_filter(bp, mf);
 				if (nf->filter_type == HWRM_CFA_NTUPLE_FILTER)
-- 
2.15.2 (Apple Git-101.1)
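
A simplified model of the "same flow, different queue" path after this fix (structure names are made up, not the driver's): the replacement filter inherits the existing fw_l2_filter_id instead of passing a stale or reset id to the firmware, while the old ntuple filter is cleared and the destination is updated.

#include <stdint.h>
#include <stdio.h>

struct filt {
	uint64_t fw_l2_filter_id;
	uint64_t fw_ntuple_filter_id;
	uint16_t dst_id;
};

/* Sketch: repurpose a matching flow for a new destination queue. */
static void move_flow_queue(struct filt *old_f, struct filt *new_f,
			    uint16_t new_queue)
{
	new_f->fw_l2_filter_id = old_f->fw_l2_filter_id; /* reuse, don't reset */
	old_f->fw_ntuple_filter_id = UINT64_MAX;	  /* stands in for HW clear */
	new_f->dst_id = new_queue;			  /* new destination VNIC */
}

int main(void)
{
	struct filt old_f = { 0x1234, 0x99, 0 };
	struct filt new_f = { 0 };

	move_flow_queue(&old_f, &new_f, 3);
	printf("new filter: l2_id=0x%llx dst=%u\n",
	       (unsigned long long)new_f.fw_l2_filter_id, new_f.dst_id);
	return 0;
}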

^ permalink raw reply	[flat|nested] 73+ messages in thread

* [dpdk-dev] [PATCH v2 21/23] net/bnxt: check VF resources if resource manager is enabled
  2018-06-28 20:15   ` [dpdk-dev] [PATCH v2 00/23] " Ajit Khaparde
                       ` (19 preceding siblings ...)
  2018-06-28 20:15     ` [dpdk-dev] [PATCH v2 20/23] net/bnxt: fix to move a flow to a different queue Ajit Khaparde
@ 2018-06-28 20:15     ` Ajit Khaparde
  2018-06-28 20:15     ` [dpdk-dev] [PATCH v2 22/23] net/bnxt: fix Rx ring count limitation Ajit Khaparde
                       ` (2 subsequent siblings)
  23 siblings, 0 replies; 73+ messages in thread
From: Ajit Khaparde @ 2018-06-28 20:15 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit

If the HWRM resource manager is enabled, check VF resources before
proceeding. Make sure enough resources are allocated and return an error
if the available resources are insufficient.
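
The change follows a probe-then-commit pattern: the VF_CFG request is first
sent with the *_ASSETS_TEST flags so firmware only validates that the
requested rings fit, and the real reservation is issued only if that probe
succeeds. A self-contained sketch of that control flow; the stub functions
below are placeholders standing in for the bnxt_hwrm_* calls, not their real
implementations:

#include <stdbool.h>
#include <stdio.h>

/* Placeholder for bnxt_hwrm_func_reserve_vf_resc(bp, test). */
static int hwrm_reserve_vf_resc(bool test)
{
        /* When 'test' is set the request would carry the *_ASSETS_TEST
         * flags, so firmware only checks availability and reserves nothing. */
        (void)test;
        return 0;               /* pretend firmware has enough resources */
}

/* Placeholder for bnxt_hwrm_check_vf_rings(bp). */
static int check_vf_rings(void)
{
        return hwrm_reserve_vf_resc(true);      /* probe only */
}

static int dev_configure(void)
{
        if (check_vf_rings()) {
                fprintf(stderr, "HWRM insufficient resources\n");
                return -1;
        }
        /* The requested rings fit: now reserve them for real. */
        return hwrm_reserve_vf_resc(false);
}

int main(void)
{
        return dev_configure() ? 1 : 0;
}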

Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/bnxt.h        |  5 ++++
 drivers/net/bnxt/bnxt_ethdev.c | 21 ++++++++++-----
 drivers/net/bnxt/bnxt_hwrm.c   | 59 +++++++++++++++++++++++++++++++++++++++---
 drivers/net/bnxt/bnxt_hwrm.h   |  6 ++++-
 4 files changed, 80 insertions(+), 11 deletions(-)

diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h
index 246b8d4d8..db5c4eb0d 100644
--- a/drivers/net/bnxt/bnxt.h
+++ b/drivers/net/bnxt/bnxt.h
@@ -22,6 +22,10 @@
 
 #define BNXT_MAX_MTU		9500
 #define VLAN_TAG_SIZE		4
+#define BNXT_VF_RSV_NUM_RSS_CTX	1
+#define BNXT_VF_RSV_NUM_L2_CTX	4
+/* TODO: For now, do not support VMDq/RFS on VFs. */
+#define BNXT_VF_RSV_NUM_VNIC	1
 #define BNXT_MAX_LED		4
 #define BNXT_NUM_VLANS		2
 #define BNXT_MIN_RING_DESC	16
@@ -328,6 +332,7 @@ struct bnxt {
 	struct bnxt_led_info	leds[BNXT_MAX_LED];
 	uint8_t			num_leds;
 	struct bnxt_ptp_cfg     *ptp_cfg;
+	uint16_t		vf_resv_strategy;
 };
 
 int bnxt_link_update_op(struct rte_eth_dev *eth_dev, int wait_to_complete);
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index 44c6cfa0a..34c3d6ba3 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -509,6 +509,7 @@ static int bnxt_dev_configure_op(struct rte_eth_dev *eth_dev)
 {
 	struct bnxt *bp = (struct bnxt *)eth_dev->data->dev_private;
 	uint64_t rx_offloads = eth_dev->data->dev_conf.rxmode.offloads;
+	int rc;
 
 	bp->rx_queues = (void *)eth_dev->data->rx_queues;
 	bp->tx_queues = (void *)eth_dev->data->tx_queues;
@@ -516,19 +517,23 @@ static int bnxt_dev_configure_op(struct rte_eth_dev *eth_dev)
 	bp->rx_nr_rings = eth_dev->data->nb_rx_queues;
 
 	if (BNXT_VF(bp) && (bp->flags & BNXT_FLAG_NEW_RM)) {
-		int rc;
+		rc = bnxt_hwrm_check_vf_rings(bp);
+		if (rc) {
+			PMD_DRV_LOG(ERR, "HWRM insufficient resources\n");
+			return -ENOSPC;
+		}
 
-		rc = bnxt_hwrm_func_reserve_vf_resc(bp);
+		rc = bnxt_hwrm_func_reserve_vf_resc(bp, false);
 		if (rc) {
 			PMD_DRV_LOG(ERR, "HWRM resource alloc fail:%x\n", rc);
 			return -ENOSPC;
 		}
-
+	} else {
 		/* legacy driver needs to get updated values */
 		rc = bnxt_hwrm_func_qcaps(bp);
 		if (rc) {
 			PMD_DRV_LOG(ERR, "hwrm func qcaps fail:%d\n", rc);
-			return -ENOSPC;
+			return rc;
 		}
 	}
 
@@ -539,7 +544,9 @@ static int bnxt_dev_configure_op(struct rte_eth_dev *eth_dev)
 	    bp->max_cp_rings ||
 	    eth_dev->data->nb_rx_queues + eth_dev->data->nb_tx_queues >
 	    bp->max_stat_ctx ||
-	    (uint32_t)(eth_dev->data->nb_rx_queues) > bp->max_ring_grps) {
+	    (uint32_t)(eth_dev->data->nb_rx_queues) > bp->max_ring_grps ||
+	    (!(eth_dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS) &&
+	     bp->max_vnics < eth_dev->data->nb_rx_queues)) {
 		PMD_DRV_LOG(ERR,
 			"Insufficient resources to support requested config\n");
 		PMD_DRV_LOG(ERR,
@@ -547,9 +554,9 @@ static int bnxt_dev_configure_op(struct rte_eth_dev *eth_dev)
 			eth_dev->data->nb_tx_queues,
 			eth_dev->data->nb_rx_queues);
 		PMD_DRV_LOG(ERR,
-			"Res available: TxQ %d, RxQ %d, CQ %d Stat %d, Grp %d\n",
+			"MAX: TxQ %d, RxQ %d, CQ %d Stat %d, Grp %d, Vnic %d\n",
 			bp->max_tx_rings, bp->max_rx_rings, bp->max_cp_rings,
-			bp->max_stat_ctx, bp->max_ring_grps);
+			bp->max_stat_ctx, bp->max_ring_grps, bp->max_vnics);
 		return -ENOSPC;
 	}
 
diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c
index ba8e44a9b..de04fe863 100644
--- a/drivers/net/bnxt/bnxt_hwrm.c
+++ b/drivers/net/bnxt/bnxt_hwrm.c
@@ -166,6 +166,18 @@ static int bnxt_hwrm_send_message(struct bnxt *bp, void *msg,
 	req.resp_addr = rte_cpu_to_le_64(bp->hwrm_cmd_resp_dma_addr); \
 } while (0)
 
+#define HWRM_CHECK_RESULT_SILENT() do {\
+	if (rc) { \
+		rte_spinlock_unlock(&bp->hwrm_lock); \
+		return rc; \
+	} \
+	if (resp->error_code) { \
+		rc = rte_le_to_cpu_16(resp->error_code); \
+		rte_spinlock_unlock(&bp->hwrm_lock); \
+		return rc; \
+	} \
+} while (0)
+
 #define HWRM_CHECK_RESULT() do {\
 	if (rc) { \
 		PMD_DRV_LOG(ERR, "failed rc:%d\n", rc); \
@@ -658,9 +670,19 @@ int bnxt_hwrm_func_driver_register(struct bnxt *bp)
 	return rc;
 }
 
-int bnxt_hwrm_func_reserve_vf_resc(struct bnxt *bp)
+int bnxt_hwrm_check_vf_rings(struct bnxt *bp)
+{
+	if (!(BNXT_VF(bp) && (bp->flags & BNXT_FLAG_NEW_RM)))
+		return 0;
+
+	return bnxt_hwrm_func_reserve_vf_resc(bp, true);
+}
+
+int bnxt_hwrm_func_reserve_vf_resc(struct bnxt *bp, bool test)
 {
 	int rc;
+	uint32_t flags = 0;
+	uint32_t enables;
 	struct hwrm_func_vf_cfg_output *resp = bp->hwrm_cmd_resp_addr;
 	struct hwrm_func_vf_cfg_input req = {0};
 
@@ -671,7 +693,8 @@ int bnxt_hwrm_func_reserve_vf_resc(struct bnxt *bp)
 			HWRM_FUNC_VF_CFG_INPUT_ENABLES_NUM_TX_RINGS   |
 			HWRM_FUNC_VF_CFG_INPUT_ENABLES_NUM_STAT_CTXS  |
 			HWRM_FUNC_VF_CFG_INPUT_ENABLES_NUM_CMPL_RINGS |
-			HWRM_FUNC_VF_CFG_INPUT_ENABLES_NUM_HW_RING_GRPS);
+			HWRM_FUNC_VF_CFG_INPUT_ENABLES_NUM_HW_RING_GRPS |
+			HWRM_FUNC_VF_CFG_INPUT_ENABLES_NUM_VNICS);
 
 	req.num_tx_rings = rte_cpu_to_le_16(bp->tx_nr_rings);
 	req.num_rx_rings = rte_cpu_to_le_16(bp->rx_nr_rings *
@@ -680,10 +703,35 @@ int bnxt_hwrm_func_reserve_vf_resc(struct bnxt *bp)
 	req.num_cmpl_rings = rte_cpu_to_le_16(bp->rx_nr_rings +
 					      bp->tx_nr_rings);
 	req.num_hw_ring_grps = rte_cpu_to_le_16(bp->rx_nr_rings);
+	req.num_vnics = rte_cpu_to_le_16(bp->rx_nr_rings);
+	if (bp->vf_resv_strategy ==
+	    HWRM_FUNC_RESOURCE_QCAPS_OUTPUT_VF_RESV_STRATEGY_MINIMAL_STATIC) {
+		enables = HWRM_FUNC_VF_CFG_INPUT_ENABLES_NUM_VNICS |
+				HWRM_FUNC_VF_CFG_INPUT_ENABLES_NUM_L2_CTXS |
+				HWRM_FUNC_VF_CFG_INPUT_ENABLES_NUM_RSSCOS_CTXS;
+		req.enables |= rte_cpu_to_le_32(enables);
+		req.num_rsscos_ctxs = rte_cpu_to_le_16(BNXT_VF_RSV_NUM_RSS_CTX);
+		req.num_l2_ctxs = rte_cpu_to_le_16(BNXT_VF_RSV_NUM_L2_CTX);
+		req.num_vnics = rte_cpu_to_le_16(BNXT_VF_RSV_NUM_VNIC);
+	}
+
+	if (test)
+		flags = HWRM_FUNC_VF_CFG_INPUT_FLAGS_TX_ASSETS_TEST |
+			HWRM_FUNC_VF_CFG_INPUT_FLAGS_RX_ASSETS_TEST |
+			HWRM_FUNC_VF_CFG_INPUT_FLAGS_CMPL_ASSETS_TEST |
+			HWRM_FUNC_VF_CFG_INPUT_FLAGS_RING_GRP_ASSETS_TEST |
+			HWRM_FUNC_VF_CFG_INPUT_FLAGS_STAT_CTX_ASSETS_TEST |
+			HWRM_FUNC_VF_CFG_INPUT_FLAGS_VNIC_ASSETS_TEST;
+
+	req.flags = rte_cpu_to_le_32(flags);
 
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req));
 
-	HWRM_CHECK_RESULT();
+	if (test)
+		HWRM_CHECK_RESULT_SILENT();
+	else
+		HWRM_CHECK_RESULT();
+
 	HWRM_UNLOCK();
 	return rc;
 }
@@ -711,6 +759,11 @@ int bnxt_hwrm_func_resc_qcaps(struct bnxt *bp)
 		bp->max_vnics = rte_le_to_cpu_16(resp->max_vnics);
 		bp->max_stat_ctx = rte_le_to_cpu_16(resp->max_stat_ctx);
 	}
+	bp->vf_resv_strategy = rte_le_to_cpu_16(resp->vf_reservation_strategy);
+	if (bp->vf_resv_strategy >
+	    HWRM_FUNC_RESOURCE_QCAPS_OUTPUT_VF_RESV_STRATEGY_MINIMAL_STATIC)
+		bp->vf_resv_strategy =
+		HWRM_FUNC_RESOURCE_QCAPS_OUTPUT_VF_RESERVATION_STRATEGY_MAXIMAL;
 
 	HWRM_UNLOCK();
 	return rc;
diff --git a/drivers/net/bnxt/bnxt_hwrm.h b/drivers/net/bnxt/bnxt_hwrm.h
index 4a237c4b4..379aac6e1 100644
--- a/drivers/net/bnxt/bnxt_hwrm.h
+++ b/drivers/net/bnxt/bnxt_hwrm.h
@@ -29,6 +29,9 @@ struct bnxt_cp_ring_info;
 #define HWRM_QUEUE_SERVICE_PROFILE_LOSSY \
 	HWRM_QUEUE_QPORTCFG_OUTPUT_QUEUE_ID0_SERVICE_PROFILE_LOSSY
 
+#define HWRM_FUNC_RESOURCE_QCAPS_OUTPUT_VF_RESV_STRATEGY_MINIMAL_STATIC \
+	HWRM_FUNC_RESOURCE_QCAPS_OUTPUT_VF_RESERVATION_STRATEGY_MINIMAL_STATIC
+
 int bnxt_hwrm_cfa_l2_clear_rx_mask(struct bnxt *bp,
 				   struct bnxt_vnic_info *vnic);
 int bnxt_hwrm_cfa_l2_set_rx_mask(struct bnxt *bp, struct bnxt_vnic_info *vnic,
@@ -113,7 +116,7 @@ int bnxt_get_hwrm_link_config(struct bnxt *bp, struct rte_eth_link *link);
 int bnxt_set_hwrm_link_config(struct bnxt *bp, bool link_up);
 int bnxt_hwrm_func_qcfg(struct bnxt *bp);
 int bnxt_hwrm_func_resc_qcaps(struct bnxt *bp);
-int bnxt_hwrm_func_reserve_vf_resc(struct bnxt *bp);
+int bnxt_hwrm_func_reserve_vf_resc(struct bnxt *bp, bool test);
 int bnxt_hwrm_allocate_pf_only(struct bnxt *bp);
 int bnxt_hwrm_allocate_vfs(struct bnxt *bp, int num_vfs);
 int bnxt_hwrm_func_vf_mac(struct bnxt *bp, uint16_t vf,
@@ -170,4 +173,5 @@ int bnxt_vnic_rss_configure(struct bnxt *bp,
 			    struct bnxt_vnic_info *vnic);
 int bnxt_hwrm_set_ring_coal(struct bnxt *bp,
 			struct bnxt_coal *coal, uint16_t ring_id);
+int bnxt_hwrm_check_vf_rings(struct bnxt *bp);
 #endif
-- 
2.15.2 (Apple Git-101.1)

^ permalink raw reply	[flat|nested] 73+ messages in thread

* [dpdk-dev] [PATCH v2 22/23] net/bnxt: fix Rx ring count limitation
  2018-06-28 20:15   ` [dpdk-dev] [PATCH v2 00/23] " Ajit Khaparde
                       ` (20 preceding siblings ...)
  2018-06-28 20:15     ` [dpdk-dev] [PATCH v2 21/23] net/bnxt: check VF resources if resource manager is enabled Ajit Khaparde
@ 2018-06-28 20:15     ` Ajit Khaparde
  2018-06-28 20:15     ` [dpdk-dev] [PATCH v2 23/23] net/bnxt: use correct flags during VLAN configuration Ajit Khaparde
  2018-07-02 15:48     ` [dpdk-dev] [PATCH v2 00/23] bnxt patchset Ferruh Yigit
  23 siblings, 0 replies; 73+ messages in thread
From: Ajit Khaparde @ 2018-06-28 20:15 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, stable

The fixed size of fw_grp_ids in the VNIC structure limits the number of
Rx rings that can be created. Allocate fw_grp_ids dynamically so this
artificial limit no longer applies.
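
The essence of the fix is sizing the per-VNIC ring-group table from
max_ring_grps at runtime rather than from a compile-time constant. A minimal
standalone sketch of that pattern, using plain malloc/free where the driver
uses rte_zmalloc/rte_free:

#include <stdint.h>
#include <stdlib.h>
#include <string.h>

struct vnic {                    /* simplified stand-in for bnxt_vnic_info */
        uint16_t *fw_grp_ids;    /* previously a fixed-size array */
};

static int vnic_alloc_grp_ids(struct vnic *vnic, unsigned int max_ring_grps)
{
        size_t size = sizeof(*vnic->fw_grp_ids) * max_ring_grps;

        vnic->fw_grp_ids = malloc(size);
        if (vnic->fw_grp_ids == NULL)
                return -1;
        /* Mirror the patch's memset(..., -1, size): every entry starts
         * as 0xffff, i.e. "no ring group assigned yet". */
        memset(vnic->fw_grp_ids, -1, size);
        return 0;
}

static void vnic_free_grp_ids(struct vnic *vnic)
{
        free(vnic->fw_grp_ids);
        vnic->fw_grp_ids = NULL;
}

int main(void)
{
        struct vnic v = { .fw_grp_ids = NULL };

        if (vnic_alloc_grp_ids(&v, 256))
                return 1;
        vnic_free_grp_ids(&v);
        return 0;
}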

Fixes: 9738793f28ec ("net/bnxt: add VNIC functions and structs")
Cc: stable@dpdk.org

Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/bnxt_ethdev.c | 11 +++++++++++
 drivers/net/bnxt/bnxt_hwrm.c   |  5 ++++-
 drivers/net/bnxt/bnxt_vnic.c   |  5 +----
 drivers/net/bnxt/bnxt_vnic.h   |  6 +-----
 4 files changed, 17 insertions(+), 10 deletions(-)

diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index 34c3d6ba3..a1f835ed9 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -250,6 +250,17 @@ static int bnxt_init_chip(struct bnxt *bp)
 	for (i = 0; i < bp->nr_vnics; i++) {
 		struct rte_eth_conf *dev_conf = &bp->eth_dev->data->dev_conf;
 		struct bnxt_vnic_info *vnic = &bp->vnic_info[i];
+		uint32_t size = sizeof(*vnic->fw_grp_ids) * bp->max_ring_grps;
+
+		vnic->fw_grp_ids = rte_zmalloc("vnic_fw_grp_ids", size, 0);
+		if (!vnic->fw_grp_ids) {
+			PMD_DRV_LOG(ERR,
+				    "Failed to alloc %d bytes for group ids\n",
+				    size);
+			rc = -ENOMEM;
+			goto err_out;
+		}
+		memset(vnic->fw_grp_ids, -1, size);
 
 		rc = bnxt_hwrm_vnic_alloc(bp, vnic);
 		if (rc) {
diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c
index de04fe863..37aefbdc9 100644
--- a/drivers/net/bnxt/bnxt_hwrm.c
+++ b/drivers/net/bnxt/bnxt_hwrm.c
@@ -1319,8 +1319,9 @@ int bnxt_hwrm_vnic_alloc(struct bnxt *bp, struct bnxt_vnic_info *vnic)
 	/* map ring groups to this vnic */
 	PMD_DRV_LOG(DEBUG, "Alloc VNIC. Start %x, End %x\n",
 		vnic->start_grp_id, vnic->end_grp_id);
-	for (i = vnic->start_grp_id, j = 0; i <= vnic->end_grp_id; i++, j++)
+	for (i = vnic->start_grp_id, j = 0; i < vnic->end_grp_id; i++, j++)
 		vnic->fw_grp_ids[j] = bp->grp_info[i].fw_grp_id;
+
 	vnic->dflt_ring_grp = bp->grp_info[vnic->start_grp_id].fw_grp_id;
 	vnic->rss_rule = (uint16_t)HWRM_NA_SIGNATURE;
 	vnic->cos_rule = (uint16_t)HWRM_NA_SIGNATURE;
@@ -2106,6 +2107,8 @@ void bnxt_free_all_hwrm_resources(struct bnxt *bp)
 		bnxt_hwrm_vnic_tpa_cfg(bp, vnic, false);
 
 		bnxt_hwrm_vnic_free(bp, vnic);
+
+		rte_free(vnic->fw_grp_ids);
 	}
 	/* Ring resources */
 	bnxt_free_all_hwrm_rings(bp);
diff --git a/drivers/net/bnxt/bnxt_vnic.c b/drivers/net/bnxt/bnxt_vnic.c
index 19d06af55..c0577cd76 100644
--- a/drivers/net/bnxt/bnxt_vnic.c
+++ b/drivers/net/bnxt/bnxt_vnic.c
@@ -39,7 +39,7 @@ void bnxt_init_vnics(struct bnxt *bp)
 {
 	struct bnxt_vnic_info *vnic;
 	uint16_t max_vnics;
-	int i, j;
+	int i;
 
 	max_vnics = bp->max_vnics;
 	STAILQ_INIT(&bp->free_vnic_list);
@@ -52,9 +52,6 @@ void bnxt_init_vnics(struct bnxt *bp)
 		vnic->hash_mode =
 			HWRM_VNIC_RSS_CFG_INPUT_HASH_MODE_FLAGS_DEFAULT;
 
-		for (j = 0; j < MAX_QUEUES_PER_VNIC; j++)
-			vnic->fw_grp_ids[j] = (uint16_t)HWRM_NA_SIGNATURE;
-
 		prandom_bytes(vnic->rss_hash_key, HW_HASH_KEY_SIZE);
 		STAILQ_INIT(&vnic->filter);
 		STAILQ_INIT(&vnic->flow_list);
diff --git a/drivers/net/bnxt/bnxt_vnic.h b/drivers/net/bnxt/bnxt_vnic.h
index c521d7e5a..9029f78c3 100644
--- a/drivers/net/bnxt/bnxt_vnic.h
+++ b/drivers/net/bnxt/bnxt_vnic.h
@@ -15,13 +15,9 @@ struct bnxt_vnic_info {
 
 	uint16_t	fw_vnic_id; /* returned by Chimp during alloc */
 	uint16_t	rss_rule;
-#define MAX_NUM_TRAFFIC_CLASSES		8
-#define MAX_NUM_RSS_QUEUES_PER_VNIC	16
-#define MAX_QUEUES_PER_VNIC	(MAX_NUM_RSS_QUEUES_PER_VNIC + \
-				 MAX_NUM_TRAFFIC_CLASSES)
 	uint16_t	start_grp_id;
 	uint16_t	end_grp_id;
-	uint16_t	fw_grp_ids[MAX_QUEUES_PER_VNIC];
+	uint16_t	*fw_grp_ids;
 	uint16_t	dflt_ring_grp;
 	uint16_t	mru;
 	uint16_t	hash_type;
-- 
2.15.2 (Apple Git-101.1)

^ permalink raw reply	[flat|nested] 73+ messages in thread

* [dpdk-dev] [PATCH v2 23/23] net/bnxt: use correct flags during VLAN configuration
  2018-06-28 20:15   ` [dpdk-dev] [PATCH v2 00/23] " Ajit Khaparde
                       ` (21 preceding siblings ...)
  2018-06-28 20:15     ` [dpdk-dev] [PATCH v2 22/23] net/bnxt: fix Rx ring count limitation Ajit Khaparde
@ 2018-06-28 20:15     ` Ajit Khaparde
  2018-07-02 15:48     ` [dpdk-dev] [PATCH v2 00/23] bnxt patchset Ferruh Yigit
  23 siblings, 0 replies; 73+ messages in thread
From: Ajit Khaparde @ 2018-06-28 20:15 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit, Somnath Kotur, stable

From: Somnath Kotur <somnath.kotur@broadcom.com>

The VLAN filter command was being set with an incorrect flag value.
Use the inner VLAN fields instead of the outer VLAN ones.
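
As a small illustration, a MAC + VLAN filter should populate the inner-VLAN
fields and set the matching enable bits. The constants below are simplified
stand-ins for the HWRM_CFA_L2_FILTER_ALLOC_INPUT_ENABLES_L2_IVLAN* values
used by the patch, with bit positions chosen only for this example:

#include <stdint.h>
#include <stdio.h>

/* Stand-ins for the real HWRM enable bits (illustrative values only). */
#define EN_L2_IVLAN       (1u << 0)
#define EN_L2_IVLAN_MASK  (1u << 1)

struct l2_filter {               /* simplified stand-in for bnxt_filter_info */
        uint16_t l2_ivlan;       /* inner VLAN id to match */
        uint16_t l2_ivlan_mask;
        uint32_t enables;
};

int main(void)
{
        struct l2_filter f = { 0 };

        /* MAC + VLAN filter: match on the inner VLAN id, not the outer. */
        f.l2_ivlan = 100;
        f.l2_ivlan_mask = 0xF000;       /* mask value as used in the patch */
        f.enables |= EN_L2_IVLAN | EN_L2_IVLAN_MASK;

        printf("filter: ivlan %u, enables 0x%x\n",
               (unsigned)f.l2_ivlan, (unsigned)f.enables);
        return 0;
}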

Fixes: 7fe5668d2ea3 ("net/bnxt: support VLAN filter and strip")
Cc: stable@dpdk.org

Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/bnxt_ethdev.c | 10 +++++-----
 drivers/net/bnxt/bnxt_hwrm.c   |  4 ++--
 2 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index a1f835ed9..418d3bede 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -1327,9 +1327,9 @@ static int bnxt_add_vlan_filter(struct bnxt *bp, uint16_t vlan_id)
 	struct bnxt_vnic_info *vnic;
 	unsigned int i;
 	int rc = 0;
-	uint32_t en = HWRM_CFA_L2_FILTER_ALLOC_INPUT_ENABLES_L2_OVLAN |
-		HWRM_CFA_L2_FILTER_ALLOC_INPUT_ENABLES_L2_OVLAN_MASK;
-	uint32_t chk = HWRM_CFA_L2_FILTER_ALLOC_INPUT_ENABLES_L2_OVLAN;
+	uint32_t en = HWRM_CFA_L2_FILTER_ALLOC_INPUT_ENABLES_L2_IVLAN |
+		HWRM_CFA_L2_FILTER_ALLOC_INPUT_ENABLES_L2_IVLAN_MASK;
+	uint32_t chk = HWRM_CFA_L2_FILTER_ALLOC_INPUT_ENABLES_L2_IVLAN;
 
 	/* Cycle through all VNICs */
 	for (i = 0; i < bp->nr_vnics; i++) {
@@ -1376,8 +1376,8 @@ static int bnxt_add_vlan_filter(struct bnxt *bp, uint16_t vlan_id)
 				memcpy(new_filter->l2_addr, filter->l2_addr,
 				       ETHER_ADDR_LEN);
 				/* MAC + VLAN ID filter */
-				new_filter->l2_ovlan = vlan_id;
-				new_filter->l2_ovlan_mask = 0xF000;
+				new_filter->l2_ivlan = vlan_id;
+				new_filter->l2_ivlan_mask = 0xF000;
 				new_filter->enables |= en;
 				rc = bnxt_hwrm_set_l2_filter(bp,
 							     vnic->fw_vnic_id,
diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c
index 37aefbdc9..02562f78c 100644
--- a/drivers/net/bnxt/bnxt_hwrm.c
+++ b/drivers/net/bnxt/bnxt_hwrm.c
@@ -388,13 +388,13 @@ int bnxt_hwrm_set_l2_filter(struct bnxt *bp,
 		req.l2_ovlan = filter->l2_ovlan;
 	if (enables &
 	    HWRM_CFA_L2_FILTER_ALLOC_INPUT_ENABLES_L2_IVLAN)
-		req.l2_ovlan = filter->l2_ivlan;
+		req.l2_ivlan = filter->l2_ivlan;
 	if (enables &
 	    HWRM_CFA_L2_FILTER_ALLOC_INPUT_ENABLES_L2_OVLAN_MASK)
 		req.l2_ovlan_mask = filter->l2_ovlan_mask;
 	if (enables &
 	    HWRM_CFA_L2_FILTER_ALLOC_INPUT_ENABLES_L2_IVLAN_MASK)
-		req.l2_ovlan_mask = filter->l2_ivlan_mask;
+		req.l2_ivlan_mask = filter->l2_ivlan_mask;
 	if (enables & HWRM_CFA_L2_FILTER_ALLOC_INPUT_ENABLES_SRC_ID)
 		req.src_id = rte_cpu_to_le_32(filter->src_id);
 	if (enables & HWRM_CFA_L2_FILTER_ALLOC_INPUT_ENABLES_SRC_TYPE)
-- 
2.15.2 (Apple Git-101.1)

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [dpdk-dev] [PATCH 05/31] net/bnxt: fix dev close operation
  2018-06-26 15:28   ` Ferruh Yigit
@ 2018-06-28 20:16     ` Ajit Khaparde
  0 siblings, 0 replies; 73+ messages in thread
From: Ajit Khaparde @ 2018-06-28 20:16 UTC (permalink / raw)
  To: Ferruh Yigit; +Cc: dev, dpdk stable

>
> > +     PMD_DRV_LOG(INFO, "Calling Device uninit\n");
>
> This looks like it can be a debug message, what do you think?
>
Yes. Changed it to debug.


>
> <...>
>
> > @@ -3456,7 +3469,7 @@ static int bnxt_pci_remove(struct rte_pci_device
> *pci_dev)
> >  static struct rte_pci_driver bnxt_rte_pmd = {
> >       .id_table = bnxt_pci_id_map,
> >       .drv_flags = RTE_PCI_DRV_NEED_MAPPING |
> > -             RTE_PCI_DRV_INTR_LSC,
> > +             RTE_PCI_DRV_INTR_LSC | RTE_PCI_DRV_INTR_RMV,
>
> Is the remove interrupt really supported? I can't find the related code in
> the driver.
>
That was some left-over code. I cleaned it up.

Thanks


>
> You need to call _rte_eth_dev_callback_process() for
> RTE_ETH_EVENT_INTR_RMV
> where you handle the interrupt.
>
> And announce the feature "Removal event" in bnxt.ini

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [dpdk-dev] [PATCH v2 10/23] net/bnxt: move function check zero bytes to bnxt util.h
  2018-06-28 20:15     ` [dpdk-dev] [PATCH v2 10/23] net/bnxt: move function check zero bytes to bnxt util.h Ajit Khaparde
@ 2018-07-02 12:20       ` Ferruh Yigit
  2018-07-02 12:55         ` Ferruh Yigit
  0 siblings, 1 reply; 73+ messages in thread
From: Ferruh Yigit @ 2018-07-02 12:20 UTC (permalink / raw)
  To: Ajit Khaparde, dev; +Cc: Scott Branden

On 6/28/2018 9:15 PM, Ajit Khaparde wrote:
> From: Scott Branden <scott.branden@broadcom.com>
> 
> Move check_zero_bytes into new bnxt_util.h file.
> 
> Signed-off-by: Scott Branden <scott.branden@broadcom.com>
> Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
> ---
>  drivers/net/bnxt/Makefile      |  1 +
>  drivers/net/bnxt/bnxt_ethdev.c |  1 +
>  drivers/net/bnxt/bnxt_filter.c |  9 ---------
>  drivers/net/bnxt/bnxt_filter.h |  1 -
>  drivers/net/bnxt/bnxt_util.c   | 18 ++++++++++++++++++
>  drivers/net/bnxt/bnxt_util.h   | 11 +++++++++++
>  6 files changed, 31 insertions(+), 10 deletions(-)
>  create mode 100644 drivers/net/bnxt/bnxt_util.c
>  create mode 100644 drivers/net/bnxt/bnxt_util.h
> 
> diff --git a/drivers/net/bnxt/Makefile b/drivers/net/bnxt/Makefile
> index fd0cb5235..80db03ea8 100644
> --- a/drivers/net/bnxt/Makefile
> +++ b/drivers/net/bnxt/Makefile
> @@ -38,6 +38,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += bnxt_txq.c
>  SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += bnxt_txr.c
>  SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += bnxt_vnic.c
>  SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += bnxt_irq.c
> +SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += bnxt_util.c

This breaks the meson build and a similar change is required for meson; the same
applies to bnxt_flow.c in the other patch. If there is no other issue I can fix this while merging.

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [dpdk-dev] [PATCH v2 10/23] net/bnxt: move function check zero bytes to bnxt util.h
  2018-07-02 12:20       ` Ferruh Yigit
@ 2018-07-02 12:55         ` Ferruh Yigit
  0 siblings, 0 replies; 73+ messages in thread
From: Ferruh Yigit @ 2018-07-02 12:55 UTC (permalink / raw)
  To: Ajit Khaparde, dev; +Cc: Scott Branden

On 7/2/2018 1:20 PM, Ferruh Yigit wrote:
> On 6/28/2018 9:15 PM, Ajit Khaparde wrote:
>> From: Scott Branden <scott.branden@broadcom.com>
>>
>> Move check_zero_bytes into new bnxt_util.h file.
>>
>> Signed-off-by: Scott Branden <scott.branden@broadcom.com>
>> Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
>> ---
>>  drivers/net/bnxt/Makefile      |  1 +
>>  drivers/net/bnxt/bnxt_ethdev.c |  1 +
>>  drivers/net/bnxt/bnxt_filter.c |  9 ---------
>>  drivers/net/bnxt/bnxt_filter.h |  1 -
>>  drivers/net/bnxt/bnxt_util.c   | 18 ++++++++++++++++++
>>  drivers/net/bnxt/bnxt_util.h   | 11 +++++++++++
>>  6 files changed, 31 insertions(+), 10 deletions(-)
>>  create mode 100644 drivers/net/bnxt/bnxt_util.c
>>  create mode 100644 drivers/net/bnxt/bnxt_util.h
>>
>> diff --git a/drivers/net/bnxt/Makefile b/drivers/net/bnxt/Makefile
>> index fd0cb5235..80db03ea8 100644
>> --- a/drivers/net/bnxt/Makefile
>> +++ b/drivers/net/bnxt/Makefile
>> @@ -38,6 +38,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += bnxt_txq.c
>>  SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += bnxt_txr.c
>>  SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += bnxt_vnic.c
>>  SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += bnxt_irq.c
>> +SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += bnxt_util.c
> 
> This breaks the meson build and a similar change is required for meson; the same
> applies to bnxt_flow.c in the other patch. If there is no other issue I can fix this while merging.

This patch also breaks the Makefile build; 'bnxt_filter.c' requires:
 #include "bnxt_util.h"

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [dpdk-dev] [PATCH v2 00/23] bnxt patchset
  2018-06-28 20:15   ` [dpdk-dev] [PATCH v2 00/23] " Ajit Khaparde
                       ` (22 preceding siblings ...)
  2018-06-28 20:15     ` [dpdk-dev] [PATCH v2 23/23] net/bnxt: use correct flags during VLAN configuration Ajit Khaparde
@ 2018-07-02 15:48     ` Ferruh Yigit
  23 siblings, 0 replies; 73+ messages in thread
From: Ferruh Yigit @ 2018-07-02 15:48 UTC (permalink / raw)
  To: Ajit Khaparde, dev

On 6/28/2018 9:15 PM, Ajit Khaparde wrote:
> Patchset against dpdk-next-net. Please apply.
> 
> v1->v2:
> Takes care of the various comments made in the previous version.
> I am dropping the style changes for now. I will send them later
> after addressing the coding convention issues.
> 
> 
> Ajit Khaparde (16):
>   net/bnxt: fix clear port stats
>   net/bnxt: add Tx batching support
>   net/bnxt: optimize receive processing code
>   net/bnxt: set MIN/MAX descriptor count fox Tx and Rx Rings
>   net/bnxt: fix dev close operation
>   net/bnxt: set ring coalesce parameters for Stratus NIC
>   net/bnxt: fix HW Tx checksum offload check
>   net/bnxt: add support for VF id 0xd800
>   net/bnxt: fix Rx/Tx queue start/stop operations
>   net/bnxt: refactor filter/flow
>   net/bnxt: check filter type before clearing it
>   net/bnxt: fix set MTU
>   net/bnxt: fix incorrect IO address handling in Tx
>   net/bnxt: allocate RSS context only if RSS mode is enabled
>   net/bnxt: check VF resources if resource manager is enabled
>   net/bnxt: fix Rx ring count limitation
> 
> Jay Ding (1):
>   net/bnxt: check for invalid vnic id
> 
> Rob Miller (1):
>   net/bnxt: update HWRM API to v1.9.2.9
> 
> Scott Branden (1):
>   net/bnxt: move function check zero bytes to bnxt util.h
> 
> Somnath Kotur (3):
>   net/bnxt: revert reset of L2 filter id
>   net/bnxt: fix to move a flow to a different queue
>   net/bnxt: use correct flags during VLAN configuration
> 
> Xiaoxin Peng (1):
>   net/bnxt: fix Tx with multiple mbuf

Series applied to dpdk-next-net/master, thanks.


The build errors commented on the patches were fixed while applying, please double check.

^ permalink raw reply	[flat|nested] 73+ messages in thread

end of thread, other threads:[~2018-07-02 15:48 UTC | newest]

Thread overview: 73+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-06-19 21:30 [dpdk-dev] [PATCH 00/31] bnxt patchset Ajit Khaparde
2018-06-19 21:30 ` [dpdk-dev] [PATCH 01/31] net/bnxt: fix clear port stats Ajit Khaparde
2018-06-19 21:30 ` [dpdk-dev] [PATCH 02/31] net/bnxt: add Tx batching support Ajit Khaparde
2018-06-19 21:30 ` [dpdk-dev] [PATCH 03/31] net/bnxt: Rx processing optimization Ajit Khaparde
2018-06-19 21:30 ` [dpdk-dev] [PATCH 04/31] net/bnxt: set min and max descriptor count for Tx and Rx rings Ajit Khaparde
2018-06-19 21:30 ` [dpdk-dev] [PATCH 05/31] net/bnxt: fix dev close operation Ajit Khaparde
2018-06-26 15:28   ` Ferruh Yigit
2018-06-28 20:16     ` Ajit Khaparde
2018-06-19 21:30 ` [dpdk-dev] [PATCH 06/31] net/bnxt: set ring coalesce parameters for Stratus NIC Ajit Khaparde
2018-06-19 21:30 ` [dpdk-dev] [PATCH 07/31] net/bnxt: fix HW Tx checksum offload check Ajit Khaparde
2018-06-19 21:30 ` [dpdk-dev] [PATCH 08/31] net/bnxt: add support for VF id 0xd800 Ajit Khaparde
2018-06-26 15:28   ` Ferruh Yigit
2018-06-28 20:14     ` Ajit Khaparde
2018-06-19 21:30 ` [dpdk-dev] [PATCH 09/31] net/bnxt: fix rx/tx queue start/stop operations Ajit Khaparde
2018-06-19 21:30 ` [dpdk-dev] [PATCH 10/31] net/bnxt: code cleanup style of bnxt cpr Ajit Khaparde
2018-06-19 21:30 ` [dpdk-dev] [PATCH 11/31] net/bnxt: code cleanup style of bnxt rxr Ajit Khaparde
2018-06-26 15:29   ` Ferruh Yigit
2018-06-19 21:30 ` [dpdk-dev] [PATCH 12/31] net/bnxt: code cleanup style of rte pmd bnxt file Ajit Khaparde
2018-06-19 21:30 ` [dpdk-dev] [PATCH 13/31] net/bnxt: code cleanup style of bnxt stats Ajit Khaparde
2018-06-19 21:30 ` [dpdk-dev] [PATCH 14/31] net/bnxt: code cleanup style of bnxt vnic Ajit Khaparde
2018-06-19 21:30 ` [dpdk-dev] [PATCH 15/31] net/bnxt: code cleanup style of bnxt txq Ajit Khaparde
2018-06-19 21:30 ` [dpdk-dev] [PATCH 16/31] net/bnxt: code cleanup style of bnxt rxq Ajit Khaparde
2018-06-19 21:30 ` [dpdk-dev] [PATCH 17/31] net/bnxt: code cleanup style of bnxt vnic Ajit Khaparde
2018-06-19 21:30 ` [dpdk-dev] [PATCH 18/31] net/bnxt: code cleanup style of bnxt txr Ajit Khaparde
2018-06-19 21:30 ` [dpdk-dev] [PATCH 19/31] net/bnxt: code cleanup style of bnxt ring Ajit Khaparde
2018-06-19 21:30 ` [dpdk-dev] [PATCH 20/31] net/bnxt: code cleanup style of bnxt ethdev Ajit Khaparde
2018-06-19 21:30 ` [dpdk-dev] [PATCH 21/31] net/bnxt: move function check zero bytes to bnxt util.h Ajit Khaparde
2018-06-19 21:30 ` [dpdk-dev] [PATCH 22/31] net/bnxt: filter/flow refactoring Ajit Khaparde
2018-06-26 15:29   ` Ferruh Yigit
2018-06-19 21:30 ` [dpdk-dev] [PATCH 23/31] net/bnxt: check for invalid vnic id Ajit Khaparde
2018-06-26 15:30   ` Ferruh Yigit
2018-06-19 21:30 ` [dpdk-dev] [PATCH 24/31] net/bnxt: update HWRM API to v1.9.2.9 Ajit Khaparde
2018-06-26 15:30   ` Ferruh Yigit
2018-06-28 20:14     ` Ajit Khaparde
2018-06-19 21:30 ` [dpdk-dev] [PATCH 25/31] net/bnxt: fix Tx with multiple mbuf Ajit Khaparde
2018-06-19 21:30 ` [dpdk-dev] [PATCH 26/31] net/bnxt: Revert reset of L2 filter id in clear_ntuple_filter Ajit Khaparde
2018-06-19 21:30 ` [dpdk-dev] [PATCH 27/31] net/bnxt: check filter type before clearing it Ajit Khaparde
2018-06-26 15:30   ` Ferruh Yigit
2018-06-19 21:30 ` [dpdk-dev] [PATCH 28/31] net/bnxt: fix set MTU Ajit Khaparde
2018-06-26 15:30   ` Ferruh Yigit
2018-06-28 20:13     ` Ajit Khaparde
2018-06-19 21:30 ` [dpdk-dev] [PATCH 29/31] net/bnxt: fix incorrect IO address handling in Tx Ajit Khaparde
2018-06-19 21:30 ` [dpdk-dev] [PATCH 30/31] net/bnxt: allocate RSS context only if RSS mode is enabled Ajit Khaparde
2018-06-19 21:30 ` [dpdk-dev] [PATCH 31/31] net/bnxt: fix to move a flow to a different queue Ajit Khaparde
2018-06-26 15:27 ` [dpdk-dev] [PATCH 00/31] bnxt patchset Ferruh Yigit
2018-06-28 20:15   ` Ajit Khaparde
2018-06-28 20:15   ` [dpdk-dev] [PATCH v2 00/23] " Ajit Khaparde
2018-06-28 20:15     ` [dpdk-dev] [PATCH v2 01/23] net/bnxt: fix clear port stats Ajit Khaparde
2018-06-28 20:15     ` [dpdk-dev] [PATCH v2 02/23] net/bnxt: add Tx batching support Ajit Khaparde
2018-06-28 20:15     ` [dpdk-dev] [PATCH v2 03/23] net/bnxt: optimize receive processing code Ajit Khaparde
2018-06-28 20:15     ` [dpdk-dev] [PATCH v2 04/23] net/bnxt: set MIN/MAX descriptor count fox Tx and Rx Rings Ajit Khaparde
2018-06-28 20:15     ` [dpdk-dev] [PATCH v2 05/23] net/bnxt: fix dev close operation Ajit Khaparde
2018-06-28 20:15     ` [dpdk-dev] [PATCH v2 06/23] net/bnxt: set ring coalesce parameters for Stratus NIC Ajit Khaparde
2018-06-28 20:15     ` [dpdk-dev] [PATCH v2 07/23] net/bnxt: fix HW Tx checksum offload check Ajit Khaparde
2018-06-28 20:15     ` [dpdk-dev] [PATCH v2 08/23] net/bnxt: add support for VF id 0xd800 Ajit Khaparde
2018-06-28 20:15     ` [dpdk-dev] [PATCH v2 09/23] net/bnxt: fix Rx/Tx queue start/stop operations Ajit Khaparde
2018-06-28 20:15     ` [dpdk-dev] [PATCH v2 10/23] net/bnxt: move function check zero bytes to bnxt util.h Ajit Khaparde
2018-07-02 12:20       ` Ferruh Yigit
2018-07-02 12:55         ` Ferruh Yigit
2018-06-28 20:15     ` [dpdk-dev] [PATCH v2 11/23] net/bnxt: refactor filter/flow Ajit Khaparde
2018-06-28 20:15     ` [dpdk-dev] [PATCH v2 12/23] net/bnxt: check for invalid vnic id Ajit Khaparde
2018-06-28 20:15     ` [dpdk-dev] [PATCH v2 13/23] net/bnxt: update HWRM API to v1.9.2.9 Ajit Khaparde
2018-06-28 20:15     ` [dpdk-dev] [PATCH v2 14/23] net/bnxt: fix Tx with multiple mbuf Ajit Khaparde
2018-06-28 20:15     ` [dpdk-dev] [PATCH v2 15/23] net/bnxt: revert reset of L2 filter id Ajit Khaparde
2018-06-28 20:15     ` [dpdk-dev] [PATCH v2 16/23] net/bnxt: check filter type before clearing it Ajit Khaparde
2018-06-28 20:15     ` [dpdk-dev] [PATCH v2 17/23] net/bnxt: fix set MTU Ajit Khaparde
2018-06-28 20:15     ` [dpdk-dev] [PATCH v2 18/23] net/bnxt: fix incorrect IO address handling in Tx Ajit Khaparde
2018-06-28 20:15     ` [dpdk-dev] [PATCH v2 19/23] net/bnxt: allocate RSS context only if RSS mode is enabled Ajit Khaparde
2018-06-28 20:15     ` [dpdk-dev] [PATCH v2 20/23] net/bnxt: fix to move a flow to a different queue Ajit Khaparde
2018-06-28 20:15     ` [dpdk-dev] [PATCH v2 21/23] net/bnxt: check VF resources if resource manager is enabled Ajit Khaparde
2018-06-28 20:15     ` [dpdk-dev] [PATCH v2 22/23] net/bnxt: fix Rx ring count limitation Ajit Khaparde
2018-06-28 20:15     ` [dpdk-dev] [PATCH v2 23/23] net/bnxt: use correct flags during VLAN configuration Ajit Khaparde
2018-07-02 15:48     ` [dpdk-dev] [PATCH v2 00/23] bnxt patchset Ferruh Yigit
