DPDK patches and discussions
* [dpdk-dev] [PATCH 00/12] net/cxgbe: bug fixes and updates for CXGBE/CXGBEVF PMD
@ 2019-09-06 21:52 Rahul Lakkireddy
  2019-09-06 21:52 ` [dpdk-dev] [PATCH 01/12] net/cxgbe: add cxgbe_ prefix to global functions Rahul Lakkireddy
                   ` (13 more replies)
  0 siblings, 14 replies; 30+ messages in thread
From: Rahul Lakkireddy @ 2019-09-06 21:52 UTC (permalink / raw)
  To: dev; +Cc: nirranjan

This series of patches contains bug fixes and feature updates for
the CXGBE and CXGBEVF PMDs. Patches 1 to 6 contain bug fixes. Patches
7 to 12 contain updates and new features for the CXGBE/CXGBEVF PMDs.

Patch 1 adds the cxgbe_ prefix to some global functions to avoid name
collisions.

Patch 2 fixes a NULL dereference when allocating a CLIP entry for
IPv6 rte_flow offloads.

Patch 3 fixes the slot allocation logic for IPv6 rte_flow offloads
on T6 NICs.

Patch 4 fixes issues with parsing VLAN rte_flow offload actions.

Patch 5 adds packet prefetch for non-coalesced Tx packets.

Patch 6 fixes a NULL dereference caused by accessing the firmware
event queue for link updates before the queue is created.

Patch 7 reworks compile-time debug logs to use dynamic logging.

Patch 8 reworks devargs parsing to separate out CXGBE VF-only arguments.

Patch 9 removes the compile-time flag that controls the Tx coalescing
throughput vs. latency behavior and uses a devarg instead.

Patch 10 uses a new firmware API to fetch the maximum number of
packets that can be coalesced in the Tx path.

Patch 11 adds support for VLAN pattern match item via rte_flow offload.

Patch 12 adds support for the set VLAN PCP action via rte_flow
offload.

Thanks,
Rahul


Rahul Lakkireddy (12):
  net/cxgbe: add cxgbe_ prefix to global functions
  net/cxgbe: fix NULL access when allocating CLIP entry
  net/cxgbe: fix slot allocation for IPv6 flows
  net/cxgbe: fix parsing VLAN ID rewrite action
  net/cxgbe: fix prefetch for non-coalesced Tx packets
  net/cxgbe: avoid polling link status before device start
  net/cxgbe: use dynamic logging for debug prints
  net/cxgbe: separate VF only devargs
  net/cxgbe: add devarg to control Tx coalescing
  net/cxgbe: fetch max Tx coalesce limit from firmware
  net/cxgbe: add rte_flow support for matching VLAN
  net/cxgbe: add rte_flow support for setting VLAN PCP

 config/common_base                      |   6 -
 doc/guides/nics/cxgbe.rst               |  57 +++---
 drivers/net/cxgbe/base/adapter.h        |   8 +
 drivers/net/cxgbe/base/common.h         |   1 +
 drivers/net/cxgbe/base/t4_regs_values.h |   9 +
 drivers/net/cxgbe/base/t4fw_interface.h |   3 +-
 drivers/net/cxgbe/cxgbe.h               |  10 +-
 drivers/net/cxgbe/cxgbe_compat.h        |  58 ++----
 drivers/net/cxgbe/cxgbe_ethdev.c        |  40 ++++-
 drivers/net/cxgbe/cxgbe_filter.c        | 230 ++++++++++--------------
 drivers/net/cxgbe/cxgbe_filter.h        |  22 +--
 drivers/net/cxgbe/cxgbe_flow.c          | 204 +++++++++++++++++++--
 drivers/net/cxgbe/cxgbe_main.c          | 137 +++++++-------
 drivers/net/cxgbe/cxgbe_pfvf.h          |  10 ++
 drivers/net/cxgbe/cxgbevf_ethdev.c      |   7 +
 drivers/net/cxgbe/cxgbevf_main.c        |  12 +-
 drivers/net/cxgbe/l2t.c                 |   3 +-
 drivers/net/cxgbe/l2t.h                 |   3 +-
 drivers/net/cxgbe/sge.c                 |  21 +--
 19 files changed, 520 insertions(+), 321 deletions(-)

-- 
2.18.0



* [dpdk-dev] [PATCH 01/12] net/cxgbe: add cxgbe_ prefix to global functions
  2019-09-06 21:52 [dpdk-dev] [PATCH 00/12] net/cxgbe: bug fixes and updates for CXGBE/CXGBEVF PMD Rahul Lakkireddy
@ 2019-09-06 21:52 ` Rahul Lakkireddy
  2019-09-06 21:52 ` [dpdk-dev] [PATCH 02/12] net/cxgbe: fix NULL access when allocating CLIP entry Rahul Lakkireddy
                   ` (12 subsequent siblings)
  13 siblings, 0 replies; 30+ messages in thread
From: Rahul Lakkireddy @ 2019-09-06 21:52 UTC (permalink / raw)
  To: dev; +Cc: nirranjan, stable

To avoid name collisions, add cxgbe_ prefix to some global functions.
Also, make some local functions static in cxgbe_filter.c.

Cc: stable@dpdk.org
Fixes: ee61f5113b17 ("net/cxgbe: parse and validate flows")
Fixes: 9eb2c9a48072 ("net/cxgbe: implement flow create operation")
Fixes: 3a381a4116ed ("net/cxgbe: query firmware for HASH filter resources")
Fixes: af44a577988b ("net/cxgbe: support to offload flows to HASH region")
Fixes: 41dc98b0827a ("net/cxgbe: support to delete flows in HASH region")
Fixes: 23af667f1507 ("net/cxgbe: add API to program hardware layer 2 table")

Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
---
 drivers/net/cxgbe/cxgbe_filter.c | 30 ++++++++++++++++--------------
 drivers/net/cxgbe/cxgbe_filter.h | 19 +++++++++----------
 drivers/net/cxgbe/cxgbe_flow.c   |  6 +++---
 drivers/net/cxgbe/cxgbe_main.c   | 10 +++++-----
 drivers/net/cxgbe/l2t.c          |  3 ++-
 drivers/net/cxgbe/l2t.h          |  3 ++-
 6 files changed, 37 insertions(+), 34 deletions(-)

diff --git a/drivers/net/cxgbe/cxgbe_filter.c b/drivers/net/cxgbe/cxgbe_filter.c
index 7fcee5c0a..cc8774c1d 100644
--- a/drivers/net/cxgbe/cxgbe_filter.c
+++ b/drivers/net/cxgbe/cxgbe_filter.c
@@ -14,7 +14,7 @@
 /**
  * Initialize Hash Filters
  */
-int init_hash_filter(struct adapter *adap)
+int cxgbe_init_hash_filter(struct adapter *adap)
 {
 	unsigned int n_user_filters;
 	unsigned int user_filter_perc;
@@ -53,7 +53,8 @@ int init_hash_filter(struct adapter *adap)
  * Validate if the requested filter specification can be set by checking
  * if the requested features have been enabled
  */
-int validate_filter(struct adapter *adapter, struct ch_filter_specification *fs)
+int cxgbe_validate_filter(struct adapter *adapter,
+			  struct ch_filter_specification *fs)
 {
 	u32 fconf;
 
@@ -133,7 +134,7 @@ static unsigned int get_filter_steerq(struct rte_eth_dev *dev,
 }
 
 /* Return an error number if the indicated filter isn't writable ... */
-int writable_filter(struct filter_entry *f)
+static int writable_filter(struct filter_entry *f)
 {
 	if (f->locked)
 		return -EPERM;
@@ -214,7 +215,7 @@ static inline void mk_set_tcb_field_ulp(struct filter_entry *f,
 /**
  * Check if entry already filled.
  */
-bool is_filter_set(struct tid_info *t, int fidx, int family)
+bool cxgbe_is_filter_set(struct tid_info *t, int fidx, int family)
 {
 	bool result = FALSE;
 	int i, max;
@@ -527,7 +528,7 @@ static int cxgbe_set_hash_filter(struct rte_eth_dev *dev,
 	int atid, size;
 	int ret = 0;
 
-	ret = validate_filter(adapter, fs);
+	ret = cxgbe_validate_filter(adapter, fs);
 	if (ret)
 		return ret;
 
@@ -618,7 +619,7 @@ static int cxgbe_set_hash_filter(struct rte_eth_dev *dev,
  * Clear a filter and release any of its resources that we own.  This also
  * clears the filter's "pending" status.
  */
-void clear_filter(struct filter_entry *f)
+static void clear_filter(struct filter_entry *f)
 {
 	if (f->clipt)
 		cxgbe_clip_release(f->dev, f->clipt);
@@ -690,7 +691,7 @@ static int del_filter_wr(struct rte_eth_dev *dev, unsigned int fidx)
 	return 0;
 }
 
-int set_filter_wr(struct rte_eth_dev *dev, unsigned int fidx)
+static int set_filter_wr(struct rte_eth_dev *dev, unsigned int fidx)
 {
 	struct adapter *adapter = ethdev2adap(dev);
 	struct filter_entry *f = &adapter->tids.ftid_tab[fidx];
@@ -868,7 +869,7 @@ int cxgbe_del_filter(struct rte_eth_dev *dev, unsigned int filter_id,
 
 	chip_ver = CHELSIO_CHIP_VERSION(adapter->params.chip);
 
-	ret = is_filter_set(&adapter->tids, filter_id, fs->type);
+	ret = cxgbe_is_filter_set(&adapter->tids, filter_id, fs->type);
 	if (!ret) {
 		dev_warn(adap, "%s: could not find filter entry: %u\n",
 			 __func__, filter_id);
@@ -940,7 +941,7 @@ int cxgbe_set_filter(struct rte_eth_dev *dev, unsigned int filter_id,
 
 	chip_ver = CHELSIO_CHIP_VERSION(adapter->params.chip);
 
-	ret = validate_filter(adapter, fs);
+	ret = cxgbe_validate_filter(adapter, fs);
 	if (ret)
 		return ret;
 
@@ -951,7 +952,7 @@ int cxgbe_set_filter(struct rte_eth_dev *dev, unsigned int filter_id,
 	if (fs->type)
 		filter_id &= ~(0x3);
 
-	ret = is_filter_set(&adapter->tids, filter_id, fs->type);
+	ret = cxgbe_is_filter_set(&adapter->tids, filter_id, fs->type);
 	if (ret)
 		return -EBUSY;
 
@@ -1091,7 +1092,8 @@ int cxgbe_set_filter(struct rte_eth_dev *dev, unsigned int filter_id,
 /**
  * Handle a Hash filter write reply.
  */
-void hash_filter_rpl(struct adapter *adap, const struct cpl_act_open_rpl *rpl)
+void cxgbe_hash_filter_rpl(struct adapter *adap,
+			   const struct cpl_act_open_rpl *rpl)
 {
 	struct tid_info *t = &adap->tids;
 	struct filter_entry *f;
@@ -1159,7 +1161,7 @@ void hash_filter_rpl(struct adapter *adap, const struct cpl_act_open_rpl *rpl)
 /**
  * Handle a LE-TCAM filter write/deletion reply.
  */
-void filter_rpl(struct adapter *adap, const struct cpl_set_tcb_rpl *rpl)
+void cxgbe_filter_rpl(struct adapter *adap, const struct cpl_set_tcb_rpl *rpl)
 {
 	struct filter_entry *f = NULL;
 	unsigned int tid = GET_TID(rpl);
@@ -1357,8 +1359,8 @@ int cxgbe_clear_filter_count(struct adapter *adapter, unsigned int fidx,
 /**
  * Handle a Hash filter delete reply.
  */
-void hash_del_filter_rpl(struct adapter *adap,
-			 const struct cpl_abort_rpl_rss *rpl)
+void cxgbe_hash_del_filter_rpl(struct adapter *adap,
+			       const struct cpl_abort_rpl_rss *rpl)
 {
 	struct tid_info *t = &adap->tids;
 	struct filter_entry *f;
diff --git a/drivers/net/cxgbe/cxgbe_filter.h b/drivers/net/cxgbe/cxgbe_filter.h
index 0c67d2d15..1964730ba 100644
--- a/drivers/net/cxgbe/cxgbe_filter.h
+++ b/drivers/net/cxgbe/cxgbe_filter.h
@@ -248,11 +248,8 @@ cxgbe_bitmap_find_free_region(struct rte_bitmap *bmap, unsigned int size,
 	return idx;
 }
 
-bool is_filter_set(struct tid_info *, int fidx, int family);
-void filter_rpl(struct adapter *adap, const struct cpl_set_tcb_rpl *rpl);
-void clear_filter(struct filter_entry *f);
-int set_filter_wr(struct rte_eth_dev *dev, unsigned int fidx);
-int writable_filter(struct filter_entry *f);
+bool cxgbe_is_filter_set(struct tid_info *, int fidx, int family);
+void cxgbe_filter_rpl(struct adapter *adap, const struct cpl_set_tcb_rpl *rpl);
 int cxgbe_set_filter(struct rte_eth_dev *dev, unsigned int filter_id,
 		     struct ch_filter_specification *fs,
 		     struct filter_ctx *ctx);
@@ -260,11 +257,13 @@ int cxgbe_del_filter(struct rte_eth_dev *dev, unsigned int filter_id,
 		     struct ch_filter_specification *fs,
 		     struct filter_ctx *ctx);
 int cxgbe_alloc_ftid(struct adapter *adap, unsigned int family);
-int init_hash_filter(struct adapter *adap);
-void hash_filter_rpl(struct adapter *adap, const struct cpl_act_open_rpl *rpl);
-void hash_del_filter_rpl(struct adapter *adap,
-			 const struct cpl_abort_rpl_rss *rpl);
-int validate_filter(struct adapter *adap, struct ch_filter_specification *fs);
+int cxgbe_init_hash_filter(struct adapter *adap);
+void cxgbe_hash_filter_rpl(struct adapter *adap,
+			   const struct cpl_act_open_rpl *rpl);
+void cxgbe_hash_del_filter_rpl(struct adapter *adap,
+			       const struct cpl_abort_rpl_rss *rpl);
+int cxgbe_validate_filter(struct adapter *adap,
+			  struct ch_filter_specification *fs);
 int cxgbe_get_filter_count(struct adapter *adapter, unsigned int fidx,
 			   u64 *c, int hash, bool get_byte);
 int cxgbe_clear_filter_count(struct adapter *adapter, unsigned int fidx,
diff --git a/drivers/net/cxgbe/cxgbe_flow.c b/drivers/net/cxgbe/cxgbe_flow.c
index d3de689c3..848c61f02 100644
--- a/drivers/net/cxgbe/cxgbe_flow.c
+++ b/drivers/net/cxgbe/cxgbe_flow.c
@@ -309,7 +309,7 @@ static int cxgbe_validate_fidxondel(struct filter_entry *f, unsigned int fidx)
 		dev_err(adap, "invalid flow index %d.\n", fidx);
 		return -EINVAL;
 	}
-	if (!is_filter_set(&adap->tids, fidx, fs.type)) {
+	if (!cxgbe_is_filter_set(&adap->tids, fidx, fs.type)) {
 		dev_err(adap, "Already free fidx:%d f:%p\n", fidx, f);
 		return -EINVAL;
 	}
@@ -321,7 +321,7 @@ static int
 cxgbe_validate_fidxonadd(struct ch_filter_specification *fs,
 			 struct adapter *adap, unsigned int fidx)
 {
-	if (is_filter_set(&adap->tids, fidx, fs->type)) {
+	if (cxgbe_is_filter_set(&adap->tids, fidx, fs->type)) {
 		dev_err(adap, "filter index: %d is busy.\n", fidx);
 		return -EBUSY;
 	}
@@ -1019,7 +1019,7 @@ cxgbe_flow_validate(struct rte_eth_dev *dev,
 		return ret;
 	}
 
-	if (validate_filter(adap, &flow->fs)) {
+	if (cxgbe_validate_filter(adap, &flow->fs)) {
 		t4_os_free(flow);
 		return rte_flow_error_set(e, EINVAL, RTE_FLOW_ERROR_TYPE_HANDLE,
 				NULL,
diff --git a/drivers/net/cxgbe/cxgbe_main.c b/drivers/net/cxgbe/cxgbe_main.c
index 620f60b4d..c3e6b9557 100644
--- a/drivers/net/cxgbe/cxgbe_main.c
+++ b/drivers/net/cxgbe/cxgbe_main.c
@@ -92,19 +92,19 @@ static int fwevtq_handler(struct sge_rspq *q, const __be64 *rsp,
 	} else if (opcode == CPL_ABORT_RPL_RSS) {
 		const struct cpl_abort_rpl_rss *p = (const void *)rsp;
 
-		hash_del_filter_rpl(q->adapter, p);
+		cxgbe_hash_del_filter_rpl(q->adapter, p);
 	} else if (opcode == CPL_SET_TCB_RPL) {
 		const struct cpl_set_tcb_rpl *p = (const void *)rsp;
 
-		filter_rpl(q->adapter, p);
+		cxgbe_filter_rpl(q->adapter, p);
 	} else if (opcode == CPL_ACT_OPEN_RPL) {
 		const struct cpl_act_open_rpl *p = (const void *)rsp;
 
-		hash_filter_rpl(q->adapter, p);
+		cxgbe_hash_filter_rpl(q->adapter, p);
 	} else if (opcode == CPL_L2T_WRITE_RPL) {
 		const struct cpl_l2t_write_rpl *p = (const void *)rsp;
 
-		do_l2t_write_rpl(q->adapter, p);
+		cxgbe_do_l2t_write_rpl(q->adapter, p);
 	} else {
 		dev_err(adapter, "unexpected CPL %#x on FW event queue\n",
 			opcode);
@@ -1179,7 +1179,7 @@ static int adap_init0(struct adapter *adap)
 
 	if ((caps_cmd.niccaps & cpu_to_be16(FW_CAPS_CONFIG_NIC_HASHFILTER)) &&
 	    is_t6(adap->params.chip)) {
-		if (init_hash_filter(adap) < 0)
+		if (cxgbe_init_hash_filter(adap) < 0)
 			goto bye;
 	}
 
diff --git a/drivers/net/cxgbe/l2t.c b/drivers/net/cxgbe/l2t.c
index 6faf624f7..f9d651fe0 100644
--- a/drivers/net/cxgbe/l2t.c
+++ b/drivers/net/cxgbe/l2t.c
@@ -22,7 +22,8 @@ void cxgbe_l2t_release(struct l2t_entry *e)
  * Process a CPL_L2T_WRITE_RPL. Note that the TID in the reply is really
  * the L2T index it refers to.
  */
-void do_l2t_write_rpl(struct adapter *adap, const struct cpl_l2t_write_rpl *rpl)
+void cxgbe_do_l2t_write_rpl(struct adapter *adap,
+			    const struct cpl_l2t_write_rpl *rpl)
 {
 	struct l2t_data *d = adap->l2t;
 	unsigned int tid = GET_TID(rpl);
diff --git a/drivers/net/cxgbe/l2t.h b/drivers/net/cxgbe/l2t.h
index 326abfde4..2c489e4aa 100644
--- a/drivers/net/cxgbe/l2t.h
+++ b/drivers/net/cxgbe/l2t.h
@@ -53,5 +53,6 @@ void t4_cleanup_l2t(struct adapter *adap);
 struct l2t_entry *cxgbe_l2t_alloc_switching(struct rte_eth_dev *dev, u16 vlan,
 					    u8 port, u8 *dmac);
 void cxgbe_l2t_release(struct l2t_entry *e);
-void do_l2t_write_rpl(struct adapter *p, const struct cpl_l2t_write_rpl *rpl);
+void cxgbe_do_l2t_write_rpl(struct adapter *p,
+			    const struct cpl_l2t_write_rpl *rpl);
 #endif /* _CXGBE_L2T_H_ */
-- 
2.18.0



* [dpdk-dev] [PATCH 02/12] net/cxgbe: fix NULL access when allocating CLIP entry
  2019-09-06 21:52 [dpdk-dev] [PATCH 00/12] net/cxgbe: bug fixes and updates for CXGBE/CXGBEVF PMD Rahul Lakkireddy
  2019-09-06 21:52 ` [dpdk-dev] [PATCH 01/12] net/cxgbe: add cxgbe_ prefix to global functions Rahul Lakkireddy
@ 2019-09-06 21:52 ` Rahul Lakkireddy
  2019-09-06 21:52 ` [dpdk-dev] [PATCH 03/12] net/cxgbe: fix slot allocation for IPv6 flows Rahul Lakkireddy
                   ` (11 subsequent siblings)
  13 siblings, 0 replies; 30+ messages in thread
From: Rahul Lakkireddy @ 2019-09-06 21:52 UTC (permalink / raw)
  To: dev; +Cc: nirranjan, stable

Pass the correct arguments to the CLIP allocation code to avoid a
NULL pointer dereference.

Cc: stable@dpdk.org
Fixes: 3f2c1e209cfc ("net/cxgbe: add Compressed Local IP region")

Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
---
 drivers/net/cxgbe/cxgbe_filter.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/net/cxgbe/cxgbe_filter.c b/drivers/net/cxgbe/cxgbe_filter.c
index cc8774c1d..3b7966d04 100644
--- a/drivers/net/cxgbe/cxgbe_filter.c
+++ b/drivers/net/cxgbe/cxgbe_filter.c
@@ -1052,7 +1052,7 @@ int cxgbe_set_filter(struct rte_eth_dev *dev, unsigned int filter_id,
 	 */
 	if (chip_ver > CHELSIO_T5 && fs->type &&
 	    memcmp(fs->val.lip, bitoff, sizeof(bitoff))) {
-		f->clipt = cxgbe_clip_alloc(f->dev, (u32 *)&f->fs.val.lip);
+		f->clipt = cxgbe_clip_alloc(dev, (u32 *)&fs->val.lip);
 		if (!f->clipt)
 			goto free_tid;
 	}
-- 
2.18.0



* [dpdk-dev] [PATCH 03/12] net/cxgbe: fix slot allocation for IPv6 flows
  2019-09-06 21:52 [dpdk-dev] [PATCH 00/12] net/cxgbe: bug fixes and updates for CXGBE/CXGBEVF PMD Rahul Lakkireddy
  2019-09-06 21:52 ` [dpdk-dev] [PATCH 01/12] net/cxgbe: add cxgbe_ prefix to global functions Rahul Lakkireddy
  2019-09-06 21:52 ` [dpdk-dev] [PATCH 02/12] net/cxgbe: fix NULL access when allocating CLIP entry Rahul Lakkireddy
@ 2019-09-06 21:52 ` Rahul Lakkireddy
  2019-09-06 21:52 ` [dpdk-dev] [PATCH 04/12] net/cxgbe: fix parsing VLAN ID rewrite action Rahul Lakkireddy
                   ` (10 subsequent siblings)
  13 siblings, 0 replies; 30+ messages in thread
From: Rahul Lakkireddy @ 2019-09-06 21:52 UTC (permalink / raw)
  To: dev; +Cc: nirranjan, stable

IPv6 flows occupy only 2 slots on Chelsio T6 NICs. Fix the slot
calculation logic to return the correct number of slots.

Cc: stable@dpdk.org
Fixes: ee61f5113b17 ("net/cxgbe: parse and validate flows")
Fixes: 9eb2c9a48072 ("net/cxgbe: implement flow create operation")
Fixes: 3f2c1e209cfc ("net/cxgbe: add Compressed Local IP region")

Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
---
 drivers/net/cxgbe/cxgbe_filter.c | 193 +++++++++++--------------------
 drivers/net/cxgbe/cxgbe_filter.h |   5 +-
 drivers/net/cxgbe/cxgbe_flow.c   |  15 ++-
 3 files changed, 85 insertions(+), 128 deletions(-)

diff --git a/drivers/net/cxgbe/cxgbe_filter.c b/drivers/net/cxgbe/cxgbe_filter.c
index 3b7966d04..33b95a69a 100644
--- a/drivers/net/cxgbe/cxgbe_filter.c
+++ b/drivers/net/cxgbe/cxgbe_filter.c
@@ -213,20 +213,32 @@ static inline void mk_set_tcb_field_ulp(struct filter_entry *f,
 }
 
 /**
- * Check if entry already filled.
+ * IPv6 requires 2 slots on T6 and 4 slots for cards below T6.
+ * IPv4 requires only 1 slot on all cards.
  */
-bool cxgbe_is_filter_set(struct tid_info *t, int fidx, int family)
+u8 cxgbe_filter_slots(struct adapter *adap, u8 family)
 {
-	bool result = FALSE;
-	int i, max;
+	if (family == FILTER_TYPE_IPV6) {
+		if (CHELSIO_CHIP_VERSION(adap->params.chip) < CHELSIO_T6)
+			return 4;
 
-	/* IPv6 requires four slots and IPv4 requires only 1 slot.
-	 * Ensure, there's enough slots available.
-	 */
-	max = family == FILTER_TYPE_IPV6 ? fidx + 3 : fidx;
+		return 2;
+	}
+
+	return 1;
+}
+
+/**
+ * Check if entries are already filled.
+ */
+bool cxgbe_is_filter_set(struct tid_info *t, u32 fidx, u8 nentries)
+{
+	bool result = FALSE;
+	u32 i;
 
+	/* Ensure there's enough slots available. */
 	t4_os_lock(&t->ftid_lock);
-	for (i = fidx; i <= max; i++) {
+	for (i = fidx; i < fidx + nentries; i++) {
 		if (rte_bitmap_get(t->ftid_bmap, i)) {
 			result = TRUE;
 			break;
@@ -237,17 +249,18 @@ bool cxgbe_is_filter_set(struct tid_info *t, int fidx, int family)
 }
 
 /**
- * Allocate a available free entry
+ * Allocate available free entries.
  */
-int cxgbe_alloc_ftid(struct adapter *adap, unsigned int family)
+int cxgbe_alloc_ftid(struct adapter *adap, u8 nentries)
 {
 	struct tid_info *t = &adap->tids;
 	int pos;
 	int size = t->nftids;
 
 	t4_os_lock(&t->ftid_lock);
-	if (family == FILTER_TYPE_IPV6)
-		pos = cxgbe_bitmap_find_free_region(t->ftid_bmap, size, 4);
+	if (nentries > 1)
+		pos = cxgbe_bitmap_find_free_region(t->ftid_bmap, size,
+						    nentries);
 	else
 		pos = cxgbe_find_first_zero_bit(t->ftid_bmap, size);
 	t4_os_unlock(&t->ftid_lock);
@@ -565,7 +578,7 @@ static int cxgbe_set_hash_filter(struct rte_eth_dev *dev,
 	if (atid < 0)
 		goto out_err;
 
-	if (f->fs.type) {
+	if (f->fs.type == FILTER_TYPE_IPV6) {
 		/* IPv6 hash filter */
 		f->clipt = cxgbe_clip_alloc(f->dev, (u32 *)&f->fs.val.lip);
 		if (!f->clipt)
@@ -804,44 +817,34 @@ static int set_filter_wr(struct rte_eth_dev *dev, unsigned int fidx)
 }
 
 /**
- * Set the corresponding entry in the bitmap. 4 slots are
- * marked for IPv6, whereas only 1 slot is marked for IPv4.
+ * Set the corresponding entries in the bitmap.
  */
-static int cxgbe_set_ftid(struct tid_info *t, int fidx, int family)
+static int cxgbe_set_ftid(struct tid_info *t, u32 fidx, u8 nentries)
 {
+	u32 i;
+
 	t4_os_lock(&t->ftid_lock);
 	if (rte_bitmap_get(t->ftid_bmap, fidx)) {
 		t4_os_unlock(&t->ftid_lock);
 		return -EBUSY;
 	}
 
-	if (family == FILTER_TYPE_IPV4) {
-		rte_bitmap_set(t->ftid_bmap, fidx);
-	} else {
-		rte_bitmap_set(t->ftid_bmap, fidx);
-		rte_bitmap_set(t->ftid_bmap, fidx + 1);
-		rte_bitmap_set(t->ftid_bmap, fidx + 2);
-		rte_bitmap_set(t->ftid_bmap, fidx + 3);
-	}
+	for (i = fidx; i < fidx + nentries; i++)
+		rte_bitmap_set(t->ftid_bmap, i);
 	t4_os_unlock(&t->ftid_lock);
 	return 0;
 }
 
 /**
- * Clear the corresponding entry in the bitmap. 4 slots are
- * cleared for IPv6, whereas only 1 slot is cleared for IPv4.
+ * Clear the corresponding entries in the bitmap.
  */
-static void cxgbe_clear_ftid(struct tid_info *t, int fidx, int family)
+static void cxgbe_clear_ftid(struct tid_info *t, u32 fidx, u8 nentries)
 {
+	u32 i;
+
 	t4_os_lock(&t->ftid_lock);
-	if (family == FILTER_TYPE_IPV4) {
-		rte_bitmap_clear(t->ftid_bmap, fidx);
-	} else {
-		rte_bitmap_clear(t->ftid_bmap, fidx);
-		rte_bitmap_clear(t->ftid_bmap, fidx + 1);
-		rte_bitmap_clear(t->ftid_bmap, fidx + 2);
-		rte_bitmap_clear(t->ftid_bmap, fidx + 3);
-	}
+	for (i = fidx; i < fidx + nentries; i++)
+		rte_bitmap_clear(t->ftid_bmap, i);
 	t4_os_unlock(&t->ftid_lock);
 }
 
@@ -859,6 +862,7 @@ int cxgbe_del_filter(struct rte_eth_dev *dev, unsigned int filter_id,
 	struct adapter *adapter = pi->adapter;
 	struct filter_entry *f;
 	unsigned int chip_ver;
+	u8 nentries;
 	int ret;
 
 	if (is_hashfilter(adapter) && fs->cap)
@@ -869,24 +873,25 @@ int cxgbe_del_filter(struct rte_eth_dev *dev, unsigned int filter_id,
 
 	chip_ver = CHELSIO_CHIP_VERSION(adapter->params.chip);
 
-	ret = cxgbe_is_filter_set(&adapter->tids, filter_id, fs->type);
-	if (!ret) {
-		dev_warn(adap, "%s: could not find filter entry: %u\n",
-			 __func__, filter_id);
-		return -EINVAL;
-	}
-
 	/*
-	 * Ensure filter id is aligned on the 2 slot boundary for T6,
+	 * Ensure IPv6 filter id is aligned on the 2 slot boundary for T6,
 	 * and 4 slot boundary for cards below T6.
 	 */
-	if (fs->type) {
+	if (fs->type == FILTER_TYPE_IPV6) {
 		if (chip_ver < CHELSIO_T6)
 			filter_id &= ~(0x3);
 		else
 			filter_id &= ~(0x1);
 	}
 
+	nentries = cxgbe_filter_slots(adapter, fs->type);
+	ret = cxgbe_is_filter_set(&adapter->tids, filter_id, nentries);
+	if (!ret) {
+		dev_warn(adap, "%s: could not find filter entry: %u\n",
+			 __func__, filter_id);
+		return -EINVAL;
+	}
+
 	f = &adapter->tids.ftid_tab[filter_id];
 	ret = writable_filter(f);
 	if (ret)
@@ -896,8 +901,7 @@ int cxgbe_del_filter(struct rte_eth_dev *dev, unsigned int filter_id,
 		f->ctx = ctx;
 		cxgbe_clear_ftid(&adapter->tids,
 				 f->tid - adapter->tids.ftid_base,
-				 f->fs.type ? FILTER_TYPE_IPV6 :
-					      FILTER_TYPE_IPV4);
+				 nentries);
 		return del_filter_wr(dev, filter_id);
 	}
 
@@ -927,10 +931,10 @@ int cxgbe_set_filter(struct rte_eth_dev *dev, unsigned int filter_id,
 {
 	struct port_info *pi = ethdev2pinfo(dev);
 	struct adapter *adapter = pi->adapter;
-	unsigned int fidx, iq, fid_bit = 0;
+	unsigned int fidx, iq;
 	struct filter_entry *f;
 	unsigned int chip_ver;
-	uint8_t bitoff[16] = {0};
+	u8 nentries, bitoff[16] = {0};
 	int ret;
 
 	if (is_hashfilter(adapter) && fs->cap)
@@ -945,80 +949,31 @@ int cxgbe_set_filter(struct rte_eth_dev *dev, unsigned int filter_id,
 	if (ret)
 		return ret;
 
-	/*
-	 * Ensure filter id is aligned on the 4 slot boundary for IPv6
-	 * maskfull filters.
-	 */
-	if (fs->type)
-		filter_id &= ~(0x3);
-
-	ret = cxgbe_is_filter_set(&adapter->tids, filter_id, fs->type);
-	if (ret)
-		return -EBUSY;
-
-	iq = get_filter_steerq(dev, fs);
-
 	/*
 	 * IPv6 filters occupy four slots and must be aligned on four-slot
 	 * boundaries for T5. On T6, IPv6 filters occupy two-slots and
 	 * must be aligned on two-slot boundaries.
 	 *
 	 * IPv4 filters only occupy a single slot and have no alignment
-	 * requirements but writing a new IPv4 filter into the middle
-	 * of an existing IPv6 filter requires clearing the old IPv6
-	 * filter.
+	 * requirements.
 	 */
-	if (fs->type == FILTER_TYPE_IPV4) { /* IPv4 */
-		/*
-		 * For T6, If our IPv4 filter isn't being written to a
-		 * multiple of two filter index and there's an IPv6
-		 * filter at the multiple of 2 base slot, then we need
-		 * to delete that IPv6 filter ...
-		 * For adapters below T6, IPv6 filter occupies 4 entries.
-		 */
+	fidx = filter_id;
+	if (fs->type == FILTER_TYPE_IPV6) {
 		if (chip_ver < CHELSIO_T6)
-			fidx = filter_id & ~0x3;
+			fidx &= ~(0x3);
 		else
-			fidx = filter_id & ~0x1;
-
-		if (fidx != filter_id && adapter->tids.ftid_tab[fidx].fs.type) {
-			f = &adapter->tids.ftid_tab[fidx];
-			if (f->valid)
-				return -EBUSY;
-		}
-	} else { /* IPv6 */
-		unsigned int max_filter_id;
-
-		if (chip_ver < CHELSIO_T6) {
-			/*
-			 * Ensure that the IPv6 filter is aligned on a
-			 * multiple of 4 boundary.
-			 */
-			if (filter_id & 0x3)
-				return -EINVAL;
+			fidx &= ~(0x1);
+	}
 
-			max_filter_id = filter_id + 4;
-		} else {
-			/*
-			 * For T6, CLIP being enabled, IPv6 filter would occupy
-			 * 2 entries.
-			 */
-			if (filter_id & 0x1)
-				return -EINVAL;
+	if (fidx != filter_id)
+		return -EINVAL;
 
-			max_filter_id = filter_id + 2;
-		}
+	nentries = cxgbe_filter_slots(adapter, fs->type);
+	ret = cxgbe_is_filter_set(&adapter->tids, filter_id, nentries);
+	if (ret)
+		return -EBUSY;
 
-		/*
-		 * Check all except the base overlapping IPv4 filter
-		 * slots.
-		 */
-		for (fidx = filter_id + 1; fidx < max_filter_id; fidx++) {
-			f = &adapter->tids.ftid_tab[fidx];
-			if (f->valid)
-				return -EBUSY;
-		}
-	}
+	iq = get_filter_steerq(dev, fs);
 
 	/*
 	 * Check to make sure that provided filter index is not
@@ -1029,9 +984,7 @@ int cxgbe_set_filter(struct rte_eth_dev *dev, unsigned int filter_id,
 		return -EBUSY;
 
 	fidx = adapter->tids.ftid_base + filter_id;
-	fid_bit = filter_id;
-	ret = cxgbe_set_ftid(&adapter->tids, fid_bit,
-			     fs->type ? FILTER_TYPE_IPV6 : FILTER_TYPE_IPV4);
+	ret = cxgbe_set_ftid(&adapter->tids, filter_id, nentries);
 	if (ret)
 		return ret;
 
@@ -1041,9 +994,7 @@ int cxgbe_set_filter(struct rte_eth_dev *dev, unsigned int filter_id,
 	ret = writable_filter(f);
 	if (ret) {
 		/* Clear the bits we have set above */
-		cxgbe_clear_ftid(&adapter->tids, fid_bit,
-				 fs->type ? FILTER_TYPE_IPV6 :
-					    FILTER_TYPE_IPV4);
+		cxgbe_clear_ftid(&adapter->tids, filter_id, nentries);
 		return ret;
 	}
 
@@ -1074,17 +1025,13 @@ int cxgbe_set_filter(struct rte_eth_dev *dev, unsigned int filter_id,
 	f->ctx = ctx;
 	f->tid = fidx; /* Save the actual tid */
 	ret = set_filter_wr(dev, filter_id);
-	if (ret) {
-		fid_bit = f->tid - adapter->tids.ftid_base;
+	if (ret)
 		goto free_tid;
-	}
 
 	return ret;
 
 free_tid:
-	cxgbe_clear_ftid(&adapter->tids, fid_bit,
-			 fs->type ? FILTER_TYPE_IPV6 :
-				    FILTER_TYPE_IPV4);
+	cxgbe_clear_ftid(&adapter->tids, filter_id, nentries);
 	clear_filter(f);
 	return ret;
 }
diff --git a/drivers/net/cxgbe/cxgbe_filter.h b/drivers/net/cxgbe/cxgbe_filter.h
index 1964730ba..06021c854 100644
--- a/drivers/net/cxgbe/cxgbe_filter.h
+++ b/drivers/net/cxgbe/cxgbe_filter.h
@@ -248,7 +248,8 @@ cxgbe_bitmap_find_free_region(struct rte_bitmap *bmap, unsigned int size,
 	return idx;
 }
 
-bool cxgbe_is_filter_set(struct tid_info *, int fidx, int family);
+u8 cxgbe_filter_slots(struct adapter *adap, u8 family);
+bool cxgbe_is_filter_set(struct tid_info *t, u32 fidx, u8 nentries);
 void cxgbe_filter_rpl(struct adapter *adap, const struct cpl_set_tcb_rpl *rpl);
 int cxgbe_set_filter(struct rte_eth_dev *dev, unsigned int filter_id,
 		     struct ch_filter_specification *fs,
@@ -256,7 +257,7 @@ int cxgbe_set_filter(struct rte_eth_dev *dev, unsigned int filter_id,
 int cxgbe_del_filter(struct rte_eth_dev *dev, unsigned int filter_id,
 		     struct ch_filter_specification *fs,
 		     struct filter_ctx *ctx);
-int cxgbe_alloc_ftid(struct adapter *adap, unsigned int family);
+int cxgbe_alloc_ftid(struct adapter *adap, u8 nentries);
 int cxgbe_init_hash_filter(struct adapter *adap);
 void cxgbe_hash_filter_rpl(struct adapter *adap,
 			   const struct cpl_act_open_rpl *rpl);
diff --git a/drivers/net/cxgbe/cxgbe_flow.c b/drivers/net/cxgbe/cxgbe_flow.c
index 848c61f02..8a5d06ff3 100644
--- a/drivers/net/cxgbe/cxgbe_flow.c
+++ b/drivers/net/cxgbe/cxgbe_flow.c
@@ -304,12 +304,15 @@ static int cxgbe_validate_fidxondel(struct filter_entry *f, unsigned int fidx)
 {
 	struct adapter *adap = ethdev2adap(f->dev);
 	struct ch_filter_specification fs = f->fs;
+	u8 nentries;
 
 	if (fidx >= adap->tids.nftids) {
 		dev_err(adap, "invalid flow index %d.\n", fidx);
 		return -EINVAL;
 	}
-	if (!cxgbe_is_filter_set(&adap->tids, fidx, fs.type)) {
+
+	nentries = cxgbe_filter_slots(adap, fs.type);
+	if (!cxgbe_is_filter_set(&adap->tids, fidx, nentries)) {
 		dev_err(adap, "Already free fidx:%d f:%p\n", fidx, f);
 		return -EINVAL;
 	}
@@ -321,10 +324,14 @@ static int
 cxgbe_validate_fidxonadd(struct ch_filter_specification *fs,
 			 struct adapter *adap, unsigned int fidx)
 {
-	if (cxgbe_is_filter_set(&adap->tids, fidx, fs->type)) {
+	u8 nentries;
+
+	nentries = cxgbe_filter_slots(adap, fs->type);
+	if (cxgbe_is_filter_set(&adap->tids, fidx, nentries)) {
 		dev_err(adap, "filter index: %d is busy.\n", fidx);
 		return -EBUSY;
 	}
+
 	if (fidx >= adap->tids.nftids) {
 		dev_err(adap, "filter index (%u) >= max(%u)\n",
 			fidx, adap->tids.nftids);
@@ -351,9 +358,11 @@ static int cxgbe_get_fidx(struct rte_flow *flow, unsigned int *fidx)
 
 	/* For tcam get the next available slot, if default value specified */
 	if (flow->fidx == FILTER_ID_MAX) {
+		u8 nentries;
 		int idx;
 
-		idx = cxgbe_alloc_ftid(adap, fs->type);
+		nentries = cxgbe_filter_slots(adap, fs->type);
+		idx = cxgbe_alloc_ftid(adap, nentries);
 		if (idx < 0) {
 			dev_err(adap, "unable to get a filter index in tcam\n");
 			return -ENOMEM;
-- 
2.18.0



* [dpdk-dev] [PATCH 04/12] net/cxgbe: fix parsing VLAN ID rewrite action
  2019-09-06 21:52 [dpdk-dev] [PATCH 00/12] net/cxgbe: bug fixes and updates for CXGBE/CXGBEVF PMD Rahul Lakkireddy
                   ` (2 preceding siblings ...)
  2019-09-06 21:52 ` [dpdk-dev] [PATCH 03/12] net/cxgbe: fix slot allocation for IPv6 flows Rahul Lakkireddy
@ 2019-09-06 21:52 ` Rahul Lakkireddy
  2019-09-06 21:52 ` [dpdk-dev] [PATCH 05/12] net/cxgbe: fix prefetch for non-coalesced Tx packets Rahul Lakkireddy
                   ` (9 subsequent siblings)
  13 siblings, 0 replies; 30+ messages in thread
From: Rahul Lakkireddy @ 2019-09-06 21:52 UTC (permalink / raw)
  To: dev; +Cc: nirranjan, stable

Set the VLAN action mode to VLAN_REWRITE only if VLAN_INSERT has not
already been set. Otherwise, VLAN packets hitting the rule get their
existing VLAN header rewritten, instead of having a new outer VLAN
header pushed.

Also fix the VLAN ID extraction logic and endianness issues.
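
As an illustration only (not part of this patch), the two cases map to
rte_flow action lists roughly like the sketch below; the VID value and
the variable names are made up for the example:

  #include <rte_byteorder.h>
  #include <rte_ether.h>
  #include <rte_flow.h>

  /* Rewrite the VID of the packet's existing outer VLAN header. */
  static const struct rte_flow_action_of_set_vlan_vid set_vid = {
          .vlan_vid = RTE_BE16(100),
  };
  static const struct rte_flow_action rewrite_actions[] = {
          { .type = RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_VID, .conf = &set_vid },
          { .type = RTE_FLOW_ACTION_TYPE_END },
  };

  /* Push a new outer VLAN header carrying that VID. With this fix,
   * the driver keeps VLAN_INSERT mode for this case instead of
   * switching to VLAN_REWRITE.
   */
  static const struct rte_flow_action_of_push_vlan push_vlan = {
          .ethertype = RTE_BE16(RTE_ETHER_TYPE_VLAN),
  };
  static const struct rte_flow_action insert_actions[] = {
          { .type = RTE_FLOW_ACTION_TYPE_OF_PUSH_VLAN, .conf = &push_vlan },
          { .type = RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_VID, .conf = &set_vid },
          { .type = RTE_FLOW_ACTION_TYPE_END },
  };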

Cc: stable@dpdk.org
Fixes: 1decc62b1cbe ("net/cxgbe: add flow operations to offload VLAN actions")

Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
---
 drivers/net/cxgbe/cxgbe_flow.c | 15 ++++++++++++---
 1 file changed, 12 insertions(+), 3 deletions(-)

diff --git a/drivers/net/cxgbe/cxgbe_flow.c b/drivers/net/cxgbe/cxgbe_flow.c
index 8a5d06ff3..4c8553039 100644
--- a/drivers/net/cxgbe/cxgbe_flow.c
+++ b/drivers/net/cxgbe/cxgbe_flow.c
@@ -446,18 +446,27 @@ ch_rte_parse_atype_switch(const struct rte_flow_action *a,
 	const struct rte_flow_action_set_tp *tp_port;
 	const struct rte_flow_action_phy_port *port;
 	int item_index;
+	u16 tmp_vlan;
 
 	switch (a->type) {
 	case RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_VID:
 		vlanid = (const struct rte_flow_action_of_set_vlan_vid *)
 			  a->conf;
-		fs->newvlan = VLAN_REWRITE;
-		fs->vlan = vlanid->vlan_vid;
+		/* If explicitly asked to push a new VLAN header,
+		 * then don't set rewrite mode. Otherwise, the
+		 * incoming VLAN packets will get their VLAN fields
+		 * rewritten, instead of adding an additional outer
+		 * VLAN header.
+		 */
+		if (fs->newvlan != VLAN_INSERT)
+			fs->newvlan = VLAN_REWRITE;
+		tmp_vlan = fs->vlan & 0xe000;
+		fs->vlan = (be16_to_cpu(vlanid->vlan_vid) & 0xfff) | tmp_vlan;
 		break;
 	case RTE_FLOW_ACTION_TYPE_OF_PUSH_VLAN:
 		pushvlan = (const struct rte_flow_action_of_push_vlan *)
 			    a->conf;
-		if (pushvlan->ethertype != RTE_ETHER_TYPE_VLAN)
+		if (be16_to_cpu(pushvlan->ethertype) != RTE_ETHER_TYPE_VLAN)
 			return rte_flow_error_set(e, EINVAL,
 						  RTE_FLOW_ERROR_TYPE_ACTION, a,
 						  "only ethertype 0x8100 "
-- 
2.18.0



* [dpdk-dev] [PATCH 05/12] net/cxgbe: fix prefetch for non-coalesced Tx packets
  2019-09-06 21:52 [dpdk-dev] [PATCH 00/12] net/cxgbe: bug fixes and updates for CXGBE/CXGBEVF PMD Rahul Lakkireddy
                   ` (3 preceding siblings ...)
  2019-09-06 21:52 ` [dpdk-dev] [PATCH 04/12] net/cxgbe: fix parsing VLAN ID rewrite action Rahul Lakkireddy
@ 2019-09-06 21:52 ` Rahul Lakkireddy
  2019-09-06 21:52 ` [dpdk-dev] [PATCH 06/12] net/cxgbe: avoid polling link status before device start Rahul Lakkireddy
                   ` (8 subsequent siblings)
  13 siblings, 0 replies; 30+ messages in thread
From: Rahul Lakkireddy @ 2019-09-06 21:52 UTC (permalink / raw)
  To: dev; +Cc: nirranjan, stable

Move the prefetch code out of the Tx coalesce path so that
non-coalesced Tx packets are prefetched as well.

Cc: stable@dpdk.org
Fixes: bf89cbedd2d9 ("cxgbe: optimize forwarding performance for 40G")

Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
---
 drivers/net/cxgbe/cxgbe_ethdev.c | 9 +++++++--
 drivers/net/cxgbe/sge.c          | 1 -
 2 files changed, 7 insertions(+), 3 deletions(-)

diff --git a/drivers/net/cxgbe/cxgbe_ethdev.c b/drivers/net/cxgbe/cxgbe_ethdev.c
index efb458d47..fb174f8d4 100644
--- a/drivers/net/cxgbe/cxgbe_ethdev.c
+++ b/drivers/net/cxgbe/cxgbe_ethdev.c
@@ -67,6 +67,7 @@ uint16_t cxgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 	struct sge_eth_txq *txq = (struct sge_eth_txq *)tx_queue;
 	uint16_t pkts_sent, pkts_remain;
 	uint16_t total_sent = 0;
+	uint16_t idx = 0;
 	int ret = 0;
 
 	CXGBE_DEBUG_TX(adapter, "%s: txq = %p; tx_pkts = %p; nb_pkts = %d\n",
@@ -75,12 +76,16 @@ uint16_t cxgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 	t4_os_lock(&txq->txq_lock);
 	/* free up desc from already completed tx */
 	reclaim_completed_tx(&txq->q);
+	rte_prefetch0(rte_pktmbuf_mtod(tx_pkts[0], volatile void *));
 	while (total_sent < nb_pkts) {
 		pkts_remain = nb_pkts - total_sent;
 
 		for (pkts_sent = 0; pkts_sent < pkts_remain; pkts_sent++) {
-			ret = t4_eth_xmit(txq, tx_pkts[total_sent + pkts_sent],
-					  nb_pkts);
+			idx = total_sent + pkts_sent;
+			if ((idx + 1) < nb_pkts)
+				rte_prefetch0(rte_pktmbuf_mtod(tx_pkts[idx + 1],
+							volatile void *));
+			ret = t4_eth_xmit(txq, tx_pkts[idx], nb_pkts);
 			if (ret < 0)
 				break;
 		}
diff --git a/drivers/net/cxgbe/sge.c b/drivers/net/cxgbe/sge.c
index 641be9657..bf3190211 100644
--- a/drivers/net/cxgbe/sge.c
+++ b/drivers/net/cxgbe/sge.c
@@ -1154,7 +1154,6 @@ int t4_eth_xmit(struct sge_eth_txq *txq, struct rte_mbuf *mbuf,
 				txq->stats.mapping_err++;
 				goto out_free;
 			}
-			rte_prefetch0((volatile void *)addr);
 			return tx_do_packet_coalesce(txq, mbuf, cflits, adap,
 						     pi, addr, nb_pkts);
 		} else {
-- 
2.18.0



* [dpdk-dev] [PATCH 06/12] net/cxgbe: avoid polling link status before device start
  2019-09-06 21:52 [dpdk-dev] [PATCH 00/12] net/cxgbe: bug fixes and updates for CXGBE/CXGBEVF PMD Rahul Lakkireddy
                   ` (4 preceding siblings ...)
  2019-09-06 21:52 ` [dpdk-dev] [PATCH 05/12] net/cxgbe: fix prefetch for non-coalesced Tx packets Rahul Lakkireddy
@ 2019-09-06 21:52 ` Rahul Lakkireddy
  2019-09-06 21:52 ` [dpdk-dev] [PATCH 07/12] net/cxgbe: use dynamic logging for debug prints Rahul Lakkireddy
                   ` (7 subsequent siblings)
  13 siblings, 0 replies; 30+ messages in thread
From: Rahul Lakkireddy @ 2019-09-06 21:52 UTC (permalink / raw)
  To: dev; +Cc: nirranjan, stable

Link updates arrive on the firmware event queue, which is only created
when the device starts. So, don't poll for link status if the firmware
event queue has not been created yet.

This fixes a NULL dereference when accessing the non-existent firmware
event queue.

Cc: stable@dpdk.org
Fixes: 265af08e75ba ("net/cxgbe: add link up and down ops")

Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
---
 drivers/net/cxgbe/cxgbe_ethdev.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/drivers/net/cxgbe/cxgbe_ethdev.c b/drivers/net/cxgbe/cxgbe_ethdev.c
index fb174f8d4..381dd273d 100644
--- a/drivers/net/cxgbe/cxgbe_ethdev.c
+++ b/drivers/net/cxgbe/cxgbe_ethdev.c
@@ -204,6 +204,9 @@ int cxgbe_dev_link_update(struct rte_eth_dev *eth_dev,
 	u8 old_link = pi->link_cfg.link_ok;
 
 	for (i = 0; i < CXGBE_LINK_STATUS_POLL_CNT; i++) {
+		if (!s->fw_evtq.desc)
+			break;
+
 		cxgbe_poll(&s->fw_evtq, NULL, budget, &work_done);
 
 		/* Exit if link status changed or always forced up */
@@ -237,6 +240,9 @@ int cxgbe_dev_set_link_up(struct rte_eth_dev *dev)
 	struct sge *s = &adapter->sge;
 	int ret;
 
+	if (!s->fw_evtq.desc)
+		return -ENOMEM;
+
 	/* Flush all link events */
 	cxgbe_poll(&s->fw_evtq, NULL, budget, &work_done);
 
@@ -263,6 +269,9 @@ int cxgbe_dev_set_link_down(struct rte_eth_dev *dev)
 	struct sge *s = &adapter->sge;
 	int ret;
 
+	if (!s->fw_evtq.desc)
+		return -ENOMEM;
+
 	/* Flush all link events */
 	cxgbe_poll(&s->fw_evtq, NULL, budget, &work_done);
 
-- 
2.18.0



* [dpdk-dev] [PATCH 07/12] net/cxgbe: use dynamic logging for debug prints
  2019-09-06 21:52 [dpdk-dev] [PATCH 00/12] net/cxgbe: bug fixes and updates for CXGBE/CXGBEVF PMD Rahul Lakkireddy
                   ` (5 preceding siblings ...)
  2019-09-06 21:52 ` [dpdk-dev] [PATCH 06/12] net/cxgbe: avoid polling link status before device start Rahul Lakkireddy
@ 2019-09-06 21:52 ` Rahul Lakkireddy
  2019-09-27 14:37   ` Ferruh Yigit
  2019-09-06 21:52 ` [dpdk-dev] [PATCH 08/12] net/cxgbe: separate VF only devargs Rahul Lakkireddy
                   ` (6 subsequent siblings)
  13 siblings, 1 reply; 30+ messages in thread
From: Rahul Lakkireddy @ 2019-09-06 21:52 UTC (permalink / raw)
  To: dev; +Cc: nirranjan

Remove compile-time flags and use dynamic logging for debug prints.
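
For reference, a minimal sketch (not part of this patch) of enabling
the newly registered log types from an application at run time; the
function name is made up for the example:

  #include <rte_log.h>

  static void enable_cxgbe_debug_logs(void)
  {
          /* Turn on all CXGBE debug paths (reg, mbox, tx, rx) ... */
          rte_log_set_level_pattern("pmd.net.cxgbe.*", RTE_LOG_DEBUG);
          /* ... or just the Tx data path. */
          rte_log_set_level_pattern("pmd.net.cxgbe.tx", RTE_LOG_DEBUG);
  }

The same should be achievable from the EAL command line with the
--log-level option, e.g. --log-level=pmd.net.cxgbe.tx:8.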

Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
---
 config/common_base               |  5 ---
 doc/guides/nics/cxgbe.rst        | 20 -----------
 drivers/net/cxgbe/cxgbe_compat.h | 58 +++++++++++---------------------
 drivers/net/cxgbe/cxgbe_ethdev.c | 16 +++++++++
 4 files changed, 35 insertions(+), 64 deletions(-)

diff --git a/config/common_base b/config/common_base
index 8ef75c203..43964de6d 100644
--- a/config/common_base
+++ b/config/common_base
@@ -217,11 +217,6 @@ CONFIG_RTE_LIBRTE_BNXT_PMD=y
 # Compile burst-oriented Chelsio Terminator (CXGBE) PMD
 #
 CONFIG_RTE_LIBRTE_CXGBE_PMD=y
-CONFIG_RTE_LIBRTE_CXGBE_DEBUG=n
-CONFIG_RTE_LIBRTE_CXGBE_DEBUG_REG=n
-CONFIG_RTE_LIBRTE_CXGBE_DEBUG_MBOX=n
-CONFIG_RTE_LIBRTE_CXGBE_DEBUG_TX=n
-CONFIG_RTE_LIBRTE_CXGBE_DEBUG_RX=n
 CONFIG_RTE_LIBRTE_CXGBE_TPUT=y
 
 # NXP DPAA Bus
diff --git a/doc/guides/nics/cxgbe.rst b/doc/guides/nics/cxgbe.rst
index 7a893cc1d..fc74b571c 100644
--- a/doc/guides/nics/cxgbe.rst
+++ b/doc/guides/nics/cxgbe.rst
@@ -104,26 +104,6 @@ enabling debugging options may affect system performance.
 
      This controls compilation of both CXGBE and CXGBEVF PMD.
 
-- ``CONFIG_RTE_LIBRTE_CXGBE_DEBUG`` (default **n**)
-
-  Toggle display of generic debugging messages.
-
-- ``CONFIG_RTE_LIBRTE_CXGBE_DEBUG_REG`` (default **n**)
-
-  Toggle display of registers related run-time check messages.
-
-- ``CONFIG_RTE_LIBRTE_CXGBE_DEBUG_MBOX`` (default **n**)
-
-  Toggle display of firmware mailbox related run-time check messages.
-
-- ``CONFIG_RTE_LIBRTE_CXGBE_DEBUG_TX`` (default **n**)
-
-  Toggle display of transmission data path run-time check messages.
-
-- ``CONFIG_RTE_LIBRTE_CXGBE_DEBUG_RX`` (default **n**)
-
-  Toggle display of receiving data path run-time check messages.
-
 - ``CONFIG_RTE_LIBRTE_CXGBE_TPUT`` (default **y**)
 
   Toggle behavior to prefer Throughput or Latency.
diff --git a/drivers/net/cxgbe/cxgbe_compat.h b/drivers/net/cxgbe/cxgbe_compat.h
index 93df0a775..407cfd869 100644
--- a/drivers/net/cxgbe/cxgbe_compat.h
+++ b/drivers/net/cxgbe/cxgbe_compat.h
@@ -21,55 +21,35 @@
 #include <rte_net.h>
 
 extern int cxgbe_logtype;
+extern int cxgbe_reg_logtype;
+extern int cxgbe_mbox_logtype;
+extern int cxgbe_tx_logtype;
+extern int cxgbe_rx_logtype;
 
-#define dev_printf(level, fmt, ...) \
-	rte_log(RTE_LOG_ ## level, cxgbe_logtype, \
+#define dev_printf(level, logtype, fmt, ...) \
+	rte_log(RTE_LOG_ ## level, logtype, \
 		"rte_cxgbe_pmd: " fmt, ##__VA_ARGS__)
 
-#define dev_err(x, fmt, ...) dev_printf(ERR, fmt, ##__VA_ARGS__)
-#define dev_info(x, fmt, ...) dev_printf(INFO, fmt, ##__VA_ARGS__)
-#define dev_warn(x, fmt, ...) dev_printf(WARNING, fmt, ##__VA_ARGS__)
+#define dev_err(x, fmt, ...) \
+	dev_printf(ERR, cxgbe_logtype, fmt, ##__VA_ARGS__)
+#define dev_info(x, fmt, ...) \
+	dev_printf(INFO, cxgbe_logtype, fmt, ##__VA_ARGS__)
+#define dev_warn(x, fmt, ...) \
+	dev_printf(WARNING, cxgbe_logtype, fmt, ##__VA_ARGS__)
+#define dev_debug(x, fmt, ...) \
+	dev_printf(DEBUG, cxgbe_logtype, fmt, ##__VA_ARGS__)
 
-#ifdef RTE_LIBRTE_CXGBE_DEBUG
-#define dev_debug(x, fmt, ...) dev_printf(INFO, fmt, ##__VA_ARGS__)
-#else
-#define dev_debug(x, fmt, ...) do { } while (0)
-#endif
-
-#ifdef RTE_LIBRTE_CXGBE_DEBUG_REG
 #define CXGBE_DEBUG_REG(x, fmt, ...) \
-	dev_printf(INFO, "REG:" fmt, ##__VA_ARGS__)
-#else
-#define CXGBE_DEBUG_REG(x, fmt, ...) do { } while (0)
-#endif
-
-#ifdef RTE_LIBRTE_CXGBE_DEBUG_MBOX
+	dev_printf(DEBUG, cxgbe_reg_logtype, "REG:" fmt, ##__VA_ARGS__)
 #define CXGBE_DEBUG_MBOX(x, fmt, ...) \
-	dev_printf(INFO, "MBOX:" fmt, ##__VA_ARGS__)
-#else
-#define CXGBE_DEBUG_MBOX(x, fmt, ...) do { } while (0)
-#endif
-
-#ifdef RTE_LIBRTE_CXGBE_DEBUG_TX
+	dev_printf(DEBUG, cxgbe_mbox_logtype, "MBOX:" fmt, ##__VA_ARGS__)
 #define CXGBE_DEBUG_TX(x, fmt, ...) \
-	dev_printf(INFO, "TX:" fmt, ##__VA_ARGS__)
-#else
-#define CXGBE_DEBUG_TX(x, fmt, ...) do { } while (0)
-#endif
-
-#ifdef RTE_LIBRTE_CXGBE_DEBUG_RX
+	dev_printf(DEBUG, cxgbe_tx_logtype, "TX:" fmt, ##__VA_ARGS__)
 #define CXGBE_DEBUG_RX(x, fmt, ...) \
-	dev_printf(INFO, "RX:" fmt, ##__VA_ARGS__)
-#else
-#define CXGBE_DEBUG_RX(x, fmt, ...) do { } while (0)
-#endif
+	dev_printf(DEBUG, cxgbe_rx_logtype, "RX:" fmt, ##__VA_ARGS__)
 
-#ifdef RTE_LIBRTE_CXGBE_DEBUG
 #define CXGBE_FUNC_TRACE() \
-	dev_printf(DEBUG, "CXGBE trace: %s\n", __func__)
-#else
-#define CXGBE_FUNC_TRACE() do { } while (0)
-#endif
+	dev_printf(DEBUG, cxgbe_logtype, "CXGBE trace: %s\n", __func__)
 
 #define pr_err(fmt, ...) dev_err(0, fmt, ##__VA_ARGS__)
 #define pr_warn(fmt, ...) dev_warn(0, fmt, ##__VA_ARGS__)
diff --git a/drivers/net/cxgbe/cxgbe_ethdev.c b/drivers/net/cxgbe/cxgbe_ethdev.c
index 381dd273d..b78f190f9 100644
--- a/drivers/net/cxgbe/cxgbe_ethdev.c
+++ b/drivers/net/cxgbe/cxgbe_ethdev.c
@@ -39,6 +39,10 @@
 #include "cxgbe_flow.h"
 
 int cxgbe_logtype;
+int cxgbe_mbox_logtype;
+int cxgbe_reg_logtype;
+int cxgbe_tx_logtype;
+int cxgbe_rx_logtype;
 
 /*
  * Macros needed to support the PCI Device ID Table ...
@@ -1240,4 +1244,16 @@ RTE_INIT(cxgbe_init_log)
 	cxgbe_logtype = rte_log_register("pmd.net.cxgbe");
 	if (cxgbe_logtype >= 0)
 		rte_log_set_level(cxgbe_logtype, RTE_LOG_NOTICE);
+	cxgbe_reg_logtype = rte_log_register("pmd.net.cxgbe.reg");
+	if (cxgbe_reg_logtype >= 0)
+		rte_log_set_level(cxgbe_reg_logtype, RTE_LOG_NOTICE);
+	cxgbe_mbox_logtype = rte_log_register("pmd.net.cxgbe.mbox");
+	if (cxgbe_mbox_logtype >= 0)
+		rte_log_set_level(cxgbe_mbox_logtype, RTE_LOG_NOTICE);
+	cxgbe_tx_logtype = rte_log_register("pmd.net.cxgbe.tx");
+	if (cxgbe_tx_logtype >= 0)
+		rte_log_set_level(cxgbe_tx_logtype, RTE_LOG_NOTICE);
+	cxgbe_rx_logtype = rte_log_register("pmd.net.cxgbe.rx");
+	if (cxgbe_rx_logtype >= 0)
+		rte_log_set_level(cxgbe_rx_logtype, RTE_LOG_NOTICE);
 }
-- 
2.18.0



* [dpdk-dev] [PATCH 08/12] net/cxgbe: separate VF only devargs
  2019-09-06 21:52 [dpdk-dev] [PATCH 00/12] net/cxgbe: bug fixes and updates for CXGBE/CXGBEVF PMD Rahul Lakkireddy
                   ` (6 preceding siblings ...)
  2019-09-06 21:52 ` [dpdk-dev] [PATCH 07/12] net/cxgbe: use dynamic logging for debug prints Rahul Lakkireddy
@ 2019-09-06 21:52 ` Rahul Lakkireddy
  2019-09-06 21:52 ` [dpdk-dev] [PATCH 09/12] net/cxgbe: add devarg to control Tx coalescing Rahul Lakkireddy
                   ` (5 subsequent siblings)
  13 siblings, 0 replies; 30+ messages in thread
From: Rahul Lakkireddy @ 2019-09-06 21:52 UTC (permalink / raw)
  To: dev; +Cc: nirranjan

Rework the devargs parsing logic to separate out VF-only args.

Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
---
 doc/guides/nics/cxgbe.rst          |  6 +++
 drivers/net/cxgbe/base/adapter.h   |  7 +++
 drivers/net/cxgbe/cxgbe.h          |  9 ++--
 drivers/net/cxgbe/cxgbe_ethdev.c   |  5 +-
 drivers/net/cxgbe/cxgbe_main.c     | 75 +++++++++++++++++++-----------
 drivers/net/cxgbe/cxgbevf_ethdev.c |  6 +++
 6 files changed, 76 insertions(+), 32 deletions(-)

diff --git a/doc/guides/nics/cxgbe.rst b/doc/guides/nics/cxgbe.rst
index fc74b571c..6e6767194 100644
--- a/doc/guides/nics/cxgbe.rst
+++ b/doc/guides/nics/cxgbe.rst
@@ -118,12 +118,18 @@ be passed as part of EAL arguments. For example,
 
    testpmd -w 02:00.4,keep_ovlan=1 -- -i
 
+Common Runtime Options
+----------------------
+
 - ``keep_ovlan`` (default **0**)
 
   Toggle behavior to keep/strip outer VLAN in Q-in-Q packets. If
   enabled, the outer VLAN tag is preserved in Q-in-Q packets. Otherwise,
   the outer VLAN tag is stripped in Q-in-Q packets.
 
+CXGBE VF Only Runtime Options
+-----------------------------
+
 - ``force_link_up`` (default **0**)
 
   When set to 1, CXGBEVF PMD always forces link as up for all VFs on
diff --git a/drivers/net/cxgbe/base/adapter.h b/drivers/net/cxgbe/base/adapter.h
index e548f9f63..68bd5a9cc 100644
--- a/drivers/net/cxgbe/base/adapter.h
+++ b/drivers/net/cxgbe/base/adapter.h
@@ -299,6 +299,11 @@ struct mbox_entry {
 
 TAILQ_HEAD(mbox_list, mbox_entry);
 
+struct adapter_devargs {
+	bool keep_ovlan;
+	bool force_link_up;
+};
+
 struct adapter {
 	struct rte_pci_device *pdev;       /* associated rte pci device */
 	struct rte_eth_dev *eth_dev;       /* first port's rte eth device */
@@ -331,6 +336,8 @@ struct adapter {
 	struct mpstcam_table *mpstcam;
 
 	struct tid_info tids;     /* Info used to access TID related tables */
+
+	struct adapter_devargs devargs;
 };
 
 /**
diff --git a/drivers/net/cxgbe/cxgbe.h b/drivers/net/cxgbe/cxgbe.h
index 3f97fa58b..3a50502b7 100644
--- a/drivers/net/cxgbe/cxgbe.h
+++ b/drivers/net/cxgbe/cxgbe.h
@@ -50,8 +50,11 @@
 			   DEV_RX_OFFLOAD_SCATTER)
 
 
-#define CXGBE_DEVARG_KEEP_OVLAN "keep_ovlan"
-#define CXGBE_DEVARG_FORCE_LINK_UP "force_link_up"
+/* Common PF and VF devargs */
+#define CXGBE_DEVARG_CMN_KEEP_OVLAN "keep_ovlan"
+
+/* VF only devargs */
+#define CXGBE_DEVARG_VF_FORCE_LINK_UP "force_link_up"
 
 bool cxgbe_force_linkup(struct adapter *adap);
 int cxgbe_probe(struct adapter *adapter);
@@ -76,7 +79,7 @@ int cxgbe_setup_rss(struct port_info *pi);
 void cxgbe_enable_rx_queues(struct port_info *pi);
 void cxgbe_print_port_info(struct adapter *adap);
 void cxgbe_print_adapter_info(struct adapter *adap);
-int cxgbe_get_devargs(struct rte_devargs *devargs, const char *key);
+void cxgbe_process_devargs(struct adapter *adap);
 void cxgbe_configure_max_ethqsets(struct adapter *adapter);
 
 #endif /* _CXGBE_H_ */
diff --git a/drivers/net/cxgbe/cxgbe_ethdev.c b/drivers/net/cxgbe/cxgbe_ethdev.c
index b78f190f9..8a2c2ca11 100644
--- a/drivers/net/cxgbe/cxgbe_ethdev.c
+++ b/drivers/net/cxgbe/cxgbe_ethdev.c
@@ -1189,6 +1189,8 @@ static int eth_cxgbe_dev_init(struct rte_eth_dev *eth_dev)
 	adapter->eth_dev = eth_dev;
 	pi->adapter = adapter;
 
+	cxgbe_process_devargs(adapter);
+
 	err = cxgbe_probe(adapter);
 	if (err) {
 		dev_err(adapter, "%s: cxgbe probe failed with err %d\n",
@@ -1236,8 +1238,7 @@ RTE_PMD_REGISTER_PCI(net_cxgbe, rte_cxgbe_pmd);
 RTE_PMD_REGISTER_PCI_TABLE(net_cxgbe, cxgb4_pci_tbl);
 RTE_PMD_REGISTER_KMOD_DEP(net_cxgbe, "* igb_uio | uio_pci_generic | vfio-pci");
 RTE_PMD_REGISTER_PARAM_STRING(net_cxgbe,
-			      CXGBE_DEVARG_KEEP_OVLAN "=<0|1> "
-			      CXGBE_DEVARG_FORCE_LINK_UP "=<0|1> ");
+			      CXGBE_DEVARG_CMN_KEEP_OVLAN "=<0|1> ");
 
 RTE_INIT(cxgbe_init_log)
 {
diff --git a/drivers/net/cxgbe/cxgbe_main.c b/drivers/net/cxgbe/cxgbe_main.c
index c3e6b9557..6a6137f06 100644
--- a/drivers/net/cxgbe/cxgbe_main.c
+++ b/drivers/net/cxgbe/cxgbe_main.c
@@ -669,19 +669,25 @@ void cxgbe_print_port_info(struct adapter *adap)
 	}
 }
 
-static int
-check_devargs_handler(__rte_unused const char *key, const char *value,
-		      __rte_unused void *opaque)
+static int check_devargs_handler(const char *key, const char *value, void *p)
 {
-	if (strcmp(value, "1"))
-		return -1;
+	if (!strncmp(key, CXGBE_DEVARG_CMN_KEEP_OVLAN, strlen(key)) ||
+	    !strncmp(key, CXGBE_DEVARG_VF_FORCE_LINK_UP, strlen(key))) {
+		if (!strncmp(value, "1", 1)) {
+			bool *dst_val = (bool *)p;
+
+			*dst_val = true;
+		}
+	}
 
 	return 0;
 }
 
-int cxgbe_get_devargs(struct rte_devargs *devargs, const char *key)
+static int cxgbe_get_devargs(struct rte_devargs *devargs, const char *key,
+			     void *p)
 {
 	struct rte_kvargs *kvlist;
+	int ret = 0;
 
 	if (!devargs)
 		return 0;
@@ -690,24 +696,44 @@ int cxgbe_get_devargs(struct rte_devargs *devargs, const char *key)
 	if (!kvlist)
 		return 0;
 
-	if (!rte_kvargs_count(kvlist, key)) {
-		rte_kvargs_free(kvlist);
-		return 0;
-	}
+	if (!rte_kvargs_count(kvlist, key))
+		goto out;
 
-	if (rte_kvargs_process(kvlist, key,
-			       check_devargs_handler, NULL) < 0) {
-		rte_kvargs_free(kvlist);
-		return 0;
-	}
+	ret = rte_kvargs_process(kvlist, key, check_devargs_handler, p);
+
+out:
 	rte_kvargs_free(kvlist);
 
-	return 1;
+	return ret;
+}
+
+static void cxgbe_get_devargs_int(struct adapter *adap, int *dst,
+				  const char *key, int default_value)
+{
+	struct rte_pci_device *pdev = adap->pdev;
+	int ret, devarg_value = default_value;
+
+	*dst = default_value;
+	if (!pdev)
+		return;
+
+	ret = cxgbe_get_devargs(pdev->device.devargs, key, &devarg_value);
+	if (ret)
+		return;
+
+	*dst = devarg_value;
+}
+
+void cxgbe_process_devargs(struct adapter *adap)
+{
+	cxgbe_get_devargs_int(adap, &adap->devargs.keep_ovlan,
+			      CXGBE_DEVARG_CMN_KEEP_OVLAN, 0);
+	cxgbe_get_devargs_int(adap, &adap->devargs.force_link_up,
+			      CXGBE_DEVARG_VF_FORCE_LINK_UP, 0);
 }
 
 static void configure_vlan_types(struct adapter *adapter)
 {
-	struct rte_pci_device *pdev = adapter->pdev;
 	int i;
 
 	for_each_port(adapter, i) {
@@ -742,9 +768,8 @@ static void configure_vlan_types(struct adapter *adapter)
 				 F_OVLAN_EN2 | F_IVLAN_EN);
 	}
 
-	if (cxgbe_get_devargs(pdev->device.devargs, CXGBE_DEVARG_KEEP_OVLAN))
-		t4_tp_wr_bits_indirect(adapter, A_TP_INGRESS_CONFIG,
-				       V_RM_OVLAN(1), V_RM_OVLAN(0));
+	t4_tp_wr_bits_indirect(adapter, A_TP_INGRESS_CONFIG, V_RM_OVLAN(1),
+			       V_RM_OVLAN(!adapter->devargs.keep_ovlan));
 }
 
 static void configure_pcie_ext_tag(struct adapter *adapter)
@@ -1323,14 +1348,10 @@ void t4_os_portmod_changed(const struct adapter *adap, int port_id)
 
 bool cxgbe_force_linkup(struct adapter *adap)
 {
-	struct rte_pci_device *pdev = adap->pdev;
-
 	if (is_pf4(adap))
-		return false;	/* force_linkup not required for pf driver*/
-	if (!cxgbe_get_devargs(pdev->device.devargs,
-			       CXGBE_DEVARG_FORCE_LINK_UP))
-		return false;
-	return true;
+		return false;	/* force_linkup not required for pf driver */
+
+	return adap->devargs.force_link_up;
 }
 
 /**
diff --git a/drivers/net/cxgbe/cxgbevf_ethdev.c b/drivers/net/cxgbe/cxgbevf_ethdev.c
index 60e96aa4e..cc0938b43 100644
--- a/drivers/net/cxgbe/cxgbevf_ethdev.c
+++ b/drivers/net/cxgbe/cxgbevf_ethdev.c
@@ -162,6 +162,9 @@ static int eth_cxgbevf_dev_init(struct rte_eth_dev *eth_dev)
 	adapter->pdev = pci_dev;
 	adapter->eth_dev = eth_dev;
 	pi->adapter = adapter;
+
+	cxgbe_process_devargs(adapter);
+
 	err = cxgbevf_probe(adapter);
 	if (err) {
 		dev_err(adapter, "%s: cxgbevf probe failed with err %d\n",
@@ -208,3 +211,6 @@ static struct rte_pci_driver rte_cxgbevf_pmd = {
 RTE_PMD_REGISTER_PCI(net_cxgbevf, rte_cxgbevf_pmd);
 RTE_PMD_REGISTER_PCI_TABLE(net_cxgbevf, cxgb4vf_pci_tbl);
 RTE_PMD_REGISTER_KMOD_DEP(net_cxgbevf, "* igb_uio | vfio-pci");
+RTE_PMD_REGISTER_PARAM_STRING(net_cxgbevf,
+			      CXGBE_DEVARG_CMN_KEEP_OVLAN "=<0|1> "
+			      CXGBE_DEVARG_VF_FORCE_LINK_UP "=<0|1> ");
-- 
2.18.0



* [dpdk-dev] [PATCH 09/12] net/cxgbe: add devarg to control Tx coalescing
  2019-09-06 21:52 [dpdk-dev] [PATCH 00/12] net/cxgbe: bug fixes and updates for CXGBE/CXGBEVF PMD Rahul Lakkireddy
                   ` (7 preceding siblings ...)
  2019-09-06 21:52 ` [dpdk-dev] [PATCH 08/12] net/cxgbe: separate VF only devargs Rahul Lakkireddy
@ 2019-09-06 21:52 ` Rahul Lakkireddy
  2019-09-06 21:52 ` [dpdk-dev] [PATCH 10/12] net/cxgbe: fetch max Tx coalesce limit from firmware Rahul Lakkireddy
                   ` (4 subsequent siblings)
  13 siblings, 0 replies; 30+ messages in thread
From: Rahul Lakkireddy @ 2019-09-06 21:52 UTC (permalink / raw)
  To: dev; +Cc: nirranjan

Remove the compile-time option that controls the Tx coalescing
latency vs. throughput behavior. Add a tx_mode_latency devarg
instead, to control the Tx coalescing behavior dynamically.
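
As a usage sketch only (not part of this patch), an application could
attach a port with the new devarg via hotplug; the PCI address here is
just an example:

  #include <rte_dev.h>

  static int attach_cxgbe_low_latency(void)
  {
          /* Prefer lower Tx latency over peak coalescing throughput. */
          return rte_eal_hotplug_add("pci", "0000:02:00.4",
                                     "tx_mode_latency=1");
  }

The same devarg can also be passed on the testpmd command line, e.g.
testpmd -w 02:00.4,tx_mode_latency=1 -- -i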

Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
---
 config/common_base                 |  1 -
 doc/guides/nics/cxgbe.rst          | 13 +++++++++----
 drivers/net/cxgbe/base/adapter.h   |  1 +
 drivers/net/cxgbe/cxgbe.h          |  1 +
 drivers/net/cxgbe/cxgbe_ethdev.c   |  3 ++-
 drivers/net/cxgbe/cxgbe_main.c     |  3 +++
 drivers/net/cxgbe/cxgbevf_ethdev.c |  1 +
 drivers/net/cxgbe/sge.c            | 18 ++++++++----------
 8 files changed, 25 insertions(+), 16 deletions(-)

diff --git a/config/common_base b/config/common_base
index 43964de6d..48b134521 100644
--- a/config/common_base
+++ b/config/common_base
@@ -217,7 +217,6 @@ CONFIG_RTE_LIBRTE_BNXT_PMD=y
 # Compile burst-oriented Chelsio Terminator (CXGBE) PMD
 #
 CONFIG_RTE_LIBRTE_CXGBE_PMD=y
-CONFIG_RTE_LIBRTE_CXGBE_TPUT=y
 
 # NXP DPAA Bus
 CONFIG_RTE_LIBRTE_DPAA_BUS=n
diff --git a/doc/guides/nics/cxgbe.rst b/doc/guides/nics/cxgbe.rst
index 6e6767194..f15e0b90a 100644
--- a/doc/guides/nics/cxgbe.rst
+++ b/doc/guides/nics/cxgbe.rst
@@ -104,10 +104,6 @@ enabling debugging options may affect system performance.
 
      This controls compilation of both CXGBE and CXGBEVF PMD.
 
-- ``CONFIG_RTE_LIBRTE_CXGBE_TPUT`` (default **y**)
-
-  Toggle behavior to prefer Throughput or Latency.
-
 Runtime Options
 ~~~~~~~~~~~~~~~
 
@@ -127,6 +123,15 @@ Common Runtime Options
   enabled, the outer VLAN tag is preserved in Q-in-Q packets. Otherwise,
   the outer VLAN tag is stripped in Q-in-Q packets.
 
+- ``tx_mode_latency`` (default **0**)
+
+  When set to 1, Tx doesn't wait for max number of packets to get
+  coalesced and sends the packets immediately at the end of the
+  current Tx burst. When set to 0, Tx waits across multiple Tx bursts
+  until the max number of packets have been coalesced. In this case,
+  Tx only sends the coalesced packets to hardware once the max
+  coalesce limit has been reached.
+
 CXGBE VF Only Runtime Options
 -----------------------------
 
diff --git a/drivers/net/cxgbe/base/adapter.h b/drivers/net/cxgbe/base/adapter.h
index 68bd5a9cc..6a931ce97 100644
--- a/drivers/net/cxgbe/base/adapter.h
+++ b/drivers/net/cxgbe/base/adapter.h
@@ -302,6 +302,7 @@ TAILQ_HEAD(mbox_list, mbox_entry);
 struct adapter_devargs {
 	bool keep_ovlan;
 	bool force_link_up;
+	bool tx_mode_latency;
 };
 
 struct adapter {
diff --git a/drivers/net/cxgbe/cxgbe.h b/drivers/net/cxgbe/cxgbe.h
index 3a50502b7..ed1be3559 100644
--- a/drivers/net/cxgbe/cxgbe.h
+++ b/drivers/net/cxgbe/cxgbe.h
@@ -52,6 +52,7 @@
 
 /* Common PF and VF devargs */
 #define CXGBE_DEVARG_CMN_KEEP_OVLAN "keep_ovlan"
+#define CXGBE_DEVARG_CMN_TX_MODE_LATENCY "tx_mode_latency"
 
 /* VF only devargs */
 #define CXGBE_DEVARG_VF_FORCE_LINK_UP "force_link_up"
diff --git a/drivers/net/cxgbe/cxgbe_ethdev.c b/drivers/net/cxgbe/cxgbe_ethdev.c
index 8a2c2ca11..e99e5d057 100644
--- a/drivers/net/cxgbe/cxgbe_ethdev.c
+++ b/drivers/net/cxgbe/cxgbe_ethdev.c
@@ -1238,7 +1238,8 @@ RTE_PMD_REGISTER_PCI(net_cxgbe, rte_cxgbe_pmd);
 RTE_PMD_REGISTER_PCI_TABLE(net_cxgbe, cxgb4_pci_tbl);
 RTE_PMD_REGISTER_KMOD_DEP(net_cxgbe, "* igb_uio | uio_pci_generic | vfio-pci");
 RTE_PMD_REGISTER_PARAM_STRING(net_cxgbe,
-			      CXGBE_DEVARG_CMN_KEEP_OVLAN "=<0|1> ");
+			      CXGBE_DEVARG_CMN_KEEP_OVLAN "=<0|1> "
+			      CXGBE_DEVARG_CMN_TX_MODE_LATENCY "=<0|1> ");
 
 RTE_INIT(cxgbe_init_log)
 {
diff --git a/drivers/net/cxgbe/cxgbe_main.c b/drivers/net/cxgbe/cxgbe_main.c
index 6a6137f06..23b74c754 100644
--- a/drivers/net/cxgbe/cxgbe_main.c
+++ b/drivers/net/cxgbe/cxgbe_main.c
@@ -672,6 +672,7 @@ void cxgbe_print_port_info(struct adapter *adap)
 static int check_devargs_handler(const char *key, const char *value, void *p)
 {
 	if (!strncmp(key, CXGBE_DEVARG_CMN_KEEP_OVLAN, strlen(key)) ||
+	    !strncmp(key, CXGBE_DEVARG_CMN_TX_MODE_LATENCY, strlen(key)) ||
 	    !strncmp(key, CXGBE_DEVARG_VF_FORCE_LINK_UP, strlen(key))) {
 		if (!strncmp(value, "1", 1)) {
 			bool *dst_val = (bool *)p;
@@ -728,6 +729,8 @@ void cxgbe_process_devargs(struct adapter *adap)
 {
 	cxgbe_get_devargs_int(adap, &adap->devargs.keep_ovlan,
 			      CXGBE_DEVARG_CMN_KEEP_OVLAN, 0);
+	cxgbe_get_devargs_int(adap, &adap->devargs.tx_mode_latency,
+			      CXGBE_DEVARG_CMN_TX_MODE_LATENCY, 0);
 	cxgbe_get_devargs_int(adap, &adap->devargs.force_link_up,
 			      CXGBE_DEVARG_VF_FORCE_LINK_UP, 0);
 }
diff --git a/drivers/net/cxgbe/cxgbevf_ethdev.c b/drivers/net/cxgbe/cxgbevf_ethdev.c
index cc0938b43..4165ba986 100644
--- a/drivers/net/cxgbe/cxgbevf_ethdev.c
+++ b/drivers/net/cxgbe/cxgbevf_ethdev.c
@@ -213,4 +213,5 @@ RTE_PMD_REGISTER_PCI_TABLE(net_cxgbevf, cxgb4vf_pci_tbl);
 RTE_PMD_REGISTER_KMOD_DEP(net_cxgbevf, "* igb_uio | vfio-pci");
 RTE_PMD_REGISTER_PARAM_STRING(net_cxgbevf,
 			      CXGBE_DEVARG_CMN_KEEP_OVLAN "=<0|1> "
+			      CXGBE_DEVARG_CMN_TX_MODE_LATENCY "=<0|1> "
 			      CXGBE_DEVARG_VF_FORCE_LINK_UP "=<0|1> ");
diff --git a/drivers/net/cxgbe/sge.c b/drivers/net/cxgbe/sge.c
index bf3190211..0df870a41 100644
--- a/drivers/net/cxgbe/sge.c
+++ b/drivers/net/cxgbe/sge.c
@@ -1007,10 +1007,6 @@ static inline int tx_do_packet_coalesce(struct sge_eth_txq *txq,
 	unsigned int max_coal_pkt_num = is_pf4(adap) ? ETH_COALESCE_PKT_NUM :
 						       ETH_COALESCE_VF_PKT_NUM;
 
-#ifdef RTE_LIBRTE_CXGBE_TPUT
-	RTE_SET_USED(nb_pkts);
-#endif
-
 	if (q->coalesce.type == 0) {
 		mc = (struct ulp_txpkt *)q->coalesce.ptr;
 		mc->cmd_dest = htonl(V_ULPTX_CMD(4) | V_ULP_TXPKT_DEST(0) |
@@ -1082,13 +1078,15 @@ static inline int tx_do_packet_coalesce(struct sge_eth_txq *txq,
 	sd->coalesce.sgl[idx & 1] = (struct ulptx_sgl *)(cpl + 1);
 	sd->coalesce.idx = (idx & 1) + 1;
 
-	/* send the coaelsced work request if max reached */
-	if (++q->coalesce.idx == max_coal_pkt_num
-#ifndef RTE_LIBRTE_CXGBE_TPUT
-	    || q->coalesce.idx >= nb_pkts
-#endif
-	    )
+	/* Send the coalesced work request, only if max reached. However,
+	 * if lower latency is preferred over throughput, then don't wait
+	 * for coalescing the next Tx burst and send the packets now.
+	 */
+	q->coalesce.idx++;
+	if (q->coalesce.idx == max_coal_pkt_num ||
+	    (adap->devargs.tx_mode_latency && q->coalesce.idx >= nb_pkts))
 		ship_tx_pkt_coalesce_wr(adap, txq);
+
 	return 0;
 }
 
-- 
2.18.0


^ permalink raw reply	[flat|nested] 30+ messages in thread

* [dpdk-dev] [PATCH 10/12] net/cxgbe: fetch max Tx coalesce limit from firmware
  2019-09-06 21:52 [dpdk-dev] [PATCH 00/12] net/cxgbe: bug fixes and updates for CXGBE/CXGBEVF PMD Rahul Lakkireddy
                   ` (8 preceding siblings ...)
  2019-09-06 21:52 ` [dpdk-dev] [PATCH 09/12] net/cxgbe: add devarg to control Tx coalescing Rahul Lakkireddy
@ 2019-09-06 21:52 ` Rahul Lakkireddy
  2019-09-06 21:52 ` [dpdk-dev] [PATCH 11/12] net/cxgbe: add rte_flow support for matching VLAN Rahul Lakkireddy
                   ` (3 subsequent siblings)
  13 siblings, 0 replies; 30+ messages in thread
From: Rahul Lakkireddy @ 2019-09-06 21:52 UTC (permalink / raw)
  To: dev; +Cc: nirranjan

Query the firmware for the maximum number of packets that can be
coalesced by Tx.

Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
---
 doc/guides/nics/cxgbe.rst               | 18 ++++++------
 drivers/net/cxgbe/base/common.h         |  1 +
 drivers/net/cxgbe/base/t4fw_interface.h |  3 +-
 drivers/net/cxgbe/cxgbe_main.c          | 39 ++++++++++++-------------
 drivers/net/cxgbe/cxgbe_pfvf.h          | 10 +++++++
 drivers/net/cxgbe/cxgbevf_main.c        | 12 ++++++--
 drivers/net/cxgbe/sge.c                 |  4 +--
 7 files changed, 52 insertions(+), 35 deletions(-)

diff --git a/doc/guides/nics/cxgbe.rst b/doc/guides/nics/cxgbe.rst
index f15e0b90a..a7b333dca 100644
--- a/doc/guides/nics/cxgbe.rst
+++ b/doc/guides/nics/cxgbe.rst
@@ -70,7 +70,7 @@ in :ref:`t5-nics` and :ref:`t6-nics`.
 Prerequisites
 -------------
 
-- Requires firmware version **1.17.14.0** and higher. Visit
+- Requires firmware version **1.23.4.0** and higher. Visit
   `Chelsio Download Center <http://service.chelsio.com>`_ to get latest firmware
   bundled with the latest Chelsio Unified Wire package.
 
@@ -215,7 +215,7 @@ Unified Wire package for Linux operating system are as follows:
 
    .. code-block:: console
 
-      firmware-version: 1.17.14.0, TP 0.1.4.9
+      firmware-version: 1.23.4.0, TP 0.1.23.2
 
 Running testpmd
 ~~~~~~~~~~~~~~~
@@ -273,7 +273,7 @@ devices managed by librte_pmd_cxgbe in Linux operating system.
       EAL:   PCI memory mapped at 0x7fd7c0200000
       EAL:   PCI memory mapped at 0x7fd77cdfd000
       EAL:   PCI memory mapped at 0x7fd7c10b7000
-      PMD: rte_cxgbe_pmd: fw: 1.17.14.0, TP: 0.1.4.9
+      PMD: rte_cxgbe_pmd: fw: 1.23.4.0, TP: 0.1.23.2
       PMD: rte_cxgbe_pmd: Coming up as MASTER: Initializing adapter
       Interactive-mode selected
       Configuring Port 0 (socket 0)
@@ -379,16 +379,16 @@ virtual functions.
       [...]
       EAL: PCI device 0000:02:01.0 on NUMA socket 0
       EAL:   probe driver: 1425:5803 net_cxgbevf
-      PMD: rte_cxgbe_pmd: Firmware version: 1.17.14.0
-      PMD: rte_cxgbe_pmd: TP Microcode version: 0.1.4.9
+      PMD: rte_cxgbe_pmd: Firmware version: 1.23.4.0
+      PMD: rte_cxgbe_pmd: TP Microcode version: 0.1.23.2
       PMD: rte_cxgbe_pmd: Chelsio rev 0
       PMD: rte_cxgbe_pmd: No bootstrap loaded
       PMD: rte_cxgbe_pmd: No Expansion ROM loaded
       PMD: rte_cxgbe_pmd:  0000:02:01.0 Chelsio rev 0 1G/10GBASE-SFP
       EAL: PCI device 0000:02:01.1 on NUMA socket 0
       EAL:   probe driver: 1425:5803 net_cxgbevf
-      PMD: rte_cxgbe_pmd: Firmware version: 1.17.14.0
-      PMD: rte_cxgbe_pmd: TP Microcode version: 0.1.4.9
+      PMD: rte_cxgbe_pmd: Firmware version: 1.23.4.0
+      PMD: rte_cxgbe_pmd: TP Microcode version: 0.1.23.2
       PMD: rte_cxgbe_pmd: Chelsio rev 0
       PMD: rte_cxgbe_pmd: No bootstrap loaded
       PMD: rte_cxgbe_pmd: No Expansion ROM loaded
@@ -465,7 +465,7 @@ Unified Wire package for FreeBSD operating system are as follows:
 
    .. code-block:: console
 
-      dev.t5nex.0.firmware_version: 1.17.14.0
+      dev.t5nex.0.firmware_version: 1.23.4.0
 
 Running testpmd
 ~~~~~~~~~~~~~~~
@@ -583,7 +583,7 @@ devices managed by librte_pmd_cxgbe in FreeBSD operating system.
       EAL:   PCI memory mapped at 0x8007ec000
       EAL:   PCI memory mapped at 0x842800000
       EAL:   PCI memory mapped at 0x80086c000
-      PMD: rte_cxgbe_pmd: fw: 1.17.14.0, TP: 0.1.4.9
+      PMD: rte_cxgbe_pmd: fw: 1.23.4.0, TP: 0.1.23.2
       PMD: rte_cxgbe_pmd: Coming up as MASTER: Initializing adapter
       Interactive-mode selected
       Configuring Port 0 (socket 0)
diff --git a/drivers/net/cxgbe/base/common.h b/drivers/net/cxgbe/base/common.h
index 973d4d7dd..6047642c5 100644
--- a/drivers/net/cxgbe/base/common.h
+++ b/drivers/net/cxgbe/base/common.h
@@ -272,6 +272,7 @@ struct adapter_params {
 	bool ulptx_memwrite_dsgl;          /* use of T5 DSGL allowed */
 	u8 fw_caps_support;		  /* 32-bit Port Capabilities */
 	u8 filter2_wr_support;            /* FW support for FILTER2_WR */
+	u32 max_tx_coalesce_num; /* Max # of Tx packets that can be coalesced */
 };
 
 /* Firmware Port Capabilities types.
diff --git a/drivers/net/cxgbe/base/t4fw_interface.h b/drivers/net/cxgbe/base/t4fw_interface.h
index 06d3ef3a6..e992d196d 100644
--- a/drivers/net/cxgbe/base/t4fw_interface.h
+++ b/drivers/net/cxgbe/base/t4fw_interface.h
@@ -692,7 +692,8 @@ enum fw_params_param_pfvf {
 	FW_PARAMS_PARAM_PFVF_L2T_START = 0x13,
 	FW_PARAMS_PARAM_PFVF_L2T_END = 0x14,
 	FW_PARAMS_PARAM_PFVF_CPLFW4MSG_ENCAP = 0x31,
-	FW_PARAMS_PARAM_PFVF_PORT_CAPS32 = 0x3A
+	FW_PARAMS_PARAM_PFVF_PORT_CAPS32 = 0x3A,
+	FW_PARAMS_PARAM_PFVF_MAX_PKTS_PER_ETH_TX_PKTS_WR = 0x3D,
 };
 
 /*
diff --git a/drivers/net/cxgbe/cxgbe_main.c b/drivers/net/cxgbe/cxgbe_main.c
index 23b74c754..4701518a6 100644
--- a/drivers/net/cxgbe/cxgbe_main.c
+++ b/drivers/net/cxgbe/cxgbe_main.c
@@ -37,6 +37,7 @@
 #include "base/t4_regs.h"
 #include "base/t4_msg.h"
 #include "cxgbe.h"
+#include "cxgbe_pfvf.h"
 #include "clip_tbl.h"
 #include "l2t.h"
 #include "mps_tcam.h"
@@ -1162,20 +1163,10 @@ static int adap_init0(struct adapter *adap)
 	/*
 	 * Grab some of our basic fundamental operating parameters.
 	 */
-#define FW_PARAM_DEV(param) \
-	(V_FW_PARAMS_MNEM(FW_PARAMS_MNEM_DEV) | \
-	 V_FW_PARAMS_PARAM_X(FW_PARAMS_PARAM_DEV_##param))
-
-#define FW_PARAM_PFVF(param) \
-	(V_FW_PARAMS_MNEM(FW_PARAMS_MNEM_PFVF) | \
-	 V_FW_PARAMS_PARAM_X(FW_PARAMS_PARAM_PFVF_##param) |  \
-	 V_FW_PARAMS_PARAM_Y(0) | \
-	 V_FW_PARAMS_PARAM_Z(0))
-
-	params[0] = FW_PARAM_PFVF(L2T_START);
-	params[1] = FW_PARAM_PFVF(L2T_END);
-	params[2] = FW_PARAM_PFVF(FILTER_START);
-	params[3] = FW_PARAM_PFVF(FILTER_END);
+	params[0] = CXGBE_FW_PARAM_PFVF(L2T_START);
+	params[1] = CXGBE_FW_PARAM_PFVF(L2T_END);
+	params[2] = CXGBE_FW_PARAM_PFVF(FILTER_START);
+	params[3] = CXGBE_FW_PARAM_PFVF(FILTER_END);
 	ret = t4_query_params(adap, adap->mbox, adap->pf, 0, 4, params, val);
 	if (ret < 0)
 		goto bye;
@@ -1184,8 +1175,8 @@ static int adap_init0(struct adapter *adap)
 	adap->tids.ftid_base = val[2];
 	adap->tids.nftids = val[3] - val[2] + 1;
 
-	params[0] = FW_PARAM_PFVF(CLIP_START);
-	params[1] = FW_PARAM_PFVF(CLIP_END);
+	params[0] = CXGBE_FW_PARAM_PFVF(CLIP_START);
+	params[1] = CXGBE_FW_PARAM_PFVF(CLIP_END);
 	ret = t4_query_params(adap, adap->mbox, adap->pf, 0, 2, params, val);
 	if (ret < 0)
 		goto bye;
@@ -1215,14 +1206,14 @@ static int adap_init0(struct adapter *adap)
 	if (is_t4(adap->params.chip)) {
 		adap->params.filter2_wr_support = 0;
 	} else {
-		params[0] = FW_PARAM_DEV(FILTER2_WR);
+		params[0] = CXGBE_FW_PARAM_DEV(FILTER2_WR);
 		ret = t4_query_params(adap, adap->mbox, adap->pf, 0,
 				      1, params, val);
 		adap->params.filter2_wr_support = (ret == 0 && val[0] != 0);
 	}
 
 	/* query tid-related parameters */
-	params[0] = FW_PARAM_DEV(NTID);
+	params[0] = CXGBE_FW_PARAM_DEV(NTID);
 	ret = t4_query_params(adap, adap->mbox, adap->pf, 0, 1,
 			      params, val);
 	if (ret < 0)
@@ -1235,7 +1226,7 @@ static int adap_init0(struct adapter *adap)
 	 * firmware won't understand this and we'll just get
 	 * unencapsulated messages ...
 	 */
-	params[0] = FW_PARAM_PFVF(CPLFW4MSG_ENCAP);
+	params[0] = CXGBE_FW_PARAM_PFVF(CPLFW4MSG_ENCAP);
 	val[0] = 1;
 	(void)t4_set_params(adap, adap->mbox, adap->pf, 0, 1, params, val);
 
@@ -1248,12 +1239,20 @@ static int adap_init0(struct adapter *adap)
 	if (is_t4(adap->params.chip)) {
 		adap->params.ulptx_memwrite_dsgl = false;
 	} else {
-		params[0] = FW_PARAM_DEV(ULPTX_MEMWRITE_DSGL);
+		params[0] = CXGBE_FW_PARAM_DEV(ULPTX_MEMWRITE_DSGL);
 		ret = t4_query_params(adap, adap->mbox, adap->pf, 0,
 				      1, params, val);
 		adap->params.ulptx_memwrite_dsgl = (ret == 0 && val[0] != 0);
 	}
 
+	/* Query for max number of packets that can be coalesced for Tx */
+	params[0] = CXGBE_FW_PARAM_PFVF(MAX_PKTS_PER_ETH_TX_PKTS_WR);
+	ret = t4_query_params(adap, adap->mbox, adap->pf, 0, 1, params, val);
+	if (!ret && val[0] > 0)
+		adap->params.max_tx_coalesce_num = val[0];
+	else
+		adap->params.max_tx_coalesce_num = ETH_COALESCE_PKT_NUM;
+
 	/*
 	 * The MTU/MSS Table is initialized by now, so load their values.  If
 	 * we're initializing the adapter, then we'll make any modifications
diff --git a/drivers/net/cxgbe/cxgbe_pfvf.h b/drivers/net/cxgbe/cxgbe_pfvf.h
index 03145cea6..098dbe8f6 100644
--- a/drivers/net/cxgbe/cxgbe_pfvf.h
+++ b/drivers/net/cxgbe/cxgbe_pfvf.h
@@ -6,6 +6,16 @@
 #ifndef _CXGBE_PFVF_H_
 #define _CXGBE_PFVF_H_
 
+#define CXGBE_FW_PARAM_DEV(param) \
+	(V_FW_PARAMS_MNEM(FW_PARAMS_MNEM_DEV) | \
+	 V_FW_PARAMS_PARAM_X(FW_PARAMS_PARAM_DEV_##param))
+
+#define CXGBE_FW_PARAM_PFVF(param) \
+	(V_FW_PARAMS_MNEM(FW_PARAMS_MNEM_PFVF) | \
+	 V_FW_PARAMS_PARAM_X(FW_PARAMS_PARAM_PFVF_##param) |  \
+	 V_FW_PARAMS_PARAM_Y(0) | \
+	 V_FW_PARAMS_PARAM_Z(0))
+
 void cxgbe_dev_rx_queue_release(void *q);
 void cxgbe_dev_tx_queue_release(void *q);
 void cxgbe_dev_stop(struct rte_eth_dev *eth_dev);
diff --git a/drivers/net/cxgbe/cxgbevf_main.c b/drivers/net/cxgbe/cxgbevf_main.c
index 82f40f358..66fb92375 100644
--- a/drivers/net/cxgbe/cxgbevf_main.c
+++ b/drivers/net/cxgbe/cxgbevf_main.c
@@ -11,6 +11,7 @@
 #include "base/t4_regs.h"
 #include "base/t4_msg.h"
 #include "cxgbe.h"
+#include "cxgbe_pfvf.h"
 #include "mps_tcam.h"
 
 /*
@@ -122,11 +123,18 @@ static int adap_init0vf(struct adapter *adapter)
 	 * firmware won't understand this and we'll just get
 	 * unencapsulated messages ...
 	 */
-	param = V_FW_PARAMS_MNEM(FW_PARAMS_MNEM_PFVF) |
-		V_FW_PARAMS_PARAM_X(FW_PARAMS_PARAM_PFVF_CPLFW4MSG_ENCAP);
+	param = CXGBE_FW_PARAM_PFVF(CPLFW4MSG_ENCAP);
 	val = 1;
 	t4vf_set_params(adapter, 1, &param, &val);
 
+	/* Query for max number of packets that can be coalesced for Tx */
+	param = CXGBE_FW_PARAM_PFVF(MAX_PKTS_PER_ETH_TX_PKTS_WR);
+	err = t4vf_query_params(adapter, 1, &param, &val);
+	if (!err && val > 0)
+		adapter->params.max_tx_coalesce_num = val;
+	else
+		adapter->params.max_tx_coalesce_num = ETH_COALESCE_VF_PKT_NUM;
+
 	/*
 	 * Grab our Virtual Interface resource allocation, extract the
 	 * features that we're interested in and do a bit of sanity testing on
diff --git a/drivers/net/cxgbe/sge.c b/drivers/net/cxgbe/sge.c
index 0df870a41..aba85a209 100644
--- a/drivers/net/cxgbe/sge.c
+++ b/drivers/net/cxgbe/sge.c
@@ -1004,8 +1004,6 @@ static inline int tx_do_packet_coalesce(struct sge_eth_txq *txq,
 	struct cpl_tx_pkt_core *cpl;
 	struct tx_sw_desc *sd;
 	unsigned int idx = q->coalesce.idx, len = mbuf->pkt_len;
-	unsigned int max_coal_pkt_num = is_pf4(adap) ? ETH_COALESCE_PKT_NUM :
-						       ETH_COALESCE_VF_PKT_NUM;
 
 	if (q->coalesce.type == 0) {
 		mc = (struct ulp_txpkt *)q->coalesce.ptr;
@@ -1083,7 +1081,7 @@ static inline int tx_do_packet_coalesce(struct sge_eth_txq *txq,
 	 * for coalescing the next Tx burst and send the packets now.
 	 */
 	q->coalesce.idx++;
-	if (q->coalesce.idx == max_coal_pkt_num ||
+	if (q->coalesce.idx == adap->params.max_tx_coalesce_num ||
 	    (adap->devargs.tx_mode_latency && q->coalesce.idx >= nb_pkts))
 		ship_tx_pkt_coalesce_wr(adap, txq);
 
-- 
2.18.0


^ permalink raw reply	[flat|nested] 30+ messages in thread

* [dpdk-dev] [PATCH 11/12] net/cxgbe: add rte_flow support for matching VLAN
  2019-09-06 21:52 [dpdk-dev] [PATCH 00/12] net/cxgbe: bug fixes and updates for CXGBE/CXGBEVF PMD Rahul Lakkireddy
                   ` (9 preceding siblings ...)
  2019-09-06 21:52 ` [dpdk-dev] [PATCH 10/12] net/cxgbe: fetch max Tx coalesce limit from firmware Rahul Lakkireddy
@ 2019-09-06 21:52 ` Rahul Lakkireddy
  2019-09-06 21:52 ` [dpdk-dev] [PATCH 12/12] net/cxgbe: add rte_flow support for setting VLAN PCP Rahul Lakkireddy
                   ` (2 subsequent siblings)
  13 siblings, 0 replies; 30+ messages in thread
From: Rahul Lakkireddy @ 2019-09-06 21:52 UTC (permalink / raw)
  To: dev; +Cc: nirranjan

Add support for matching VLAN fields via rte_flow API.

When matching a VLAN pattern, the ethertype field in the hardware
filter specification must contain the VLAN header's ethertype, and
not the Ethernet header's ethertype. The hardware automatically
searches for ethertype 0x8100 in the Ethernet header when parsing
an incoming packet against a VLAN pattern.
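As an illustration only (not part of this patch), such a VLAN match
could be exercised from testpmd with a rule along these lines, assuming
port 0 and queue 0:

    flow create 0 ingress pattern eth / vlan vid is 100 / ipv4 / end actions queue index 0 / end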

Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
---
 drivers/net/cxgbe/base/t4_regs_values.h |   9 ++
 drivers/net/cxgbe/cxgbe_filter.c        |  11 +-
 drivers/net/cxgbe/cxgbe_flow.c          | 145 +++++++++++++++++++++++-
 drivers/net/cxgbe/cxgbe_main.c          |  10 +-
 4 files changed, 162 insertions(+), 13 deletions(-)

diff --git a/drivers/net/cxgbe/base/t4_regs_values.h b/drivers/net/cxgbe/base/t4_regs_values.h
index a9414d202..e3f549e51 100644
--- a/drivers/net/cxgbe/base/t4_regs_values.h
+++ b/drivers/net/cxgbe/base/t4_regs_values.h
@@ -143,4 +143,13 @@
 #define W_FT_MPSHITTYPE			3
 #define W_FT_FRAGMENTATION		1
 
+/*
+ * Some of the Compressed Filter Tuple fields have internal structure.  These
+ * bit shifts/masks describe those structures.  All shifts are relative to the
+ * base position of the fields within the Compressed Filter Tuple
+ */
+#define S_FT_VLAN_VLD			16
+#define V_FT_VLAN_VLD(x)		((x) << S_FT_VLAN_VLD)
+#define F_FT_VLAN_VLD			V_FT_VLAN_VLD(1U)
+
 #endif /* __T4_REGS_VALUES_H__ */
diff --git a/drivers/net/cxgbe/cxgbe_filter.c b/drivers/net/cxgbe/cxgbe_filter.c
index 33b95a69a..b9d9d5d39 100644
--- a/drivers/net/cxgbe/cxgbe_filter.c
+++ b/drivers/net/cxgbe/cxgbe_filter.c
@@ -69,7 +69,8 @@ int cxgbe_validate_filter(struct adapter *adapter,
 	(!(fconf & (_mask)) && S(_field))
 
 	if (U(F_PORT, iport) || U(F_ETHERTYPE, ethtype) ||
-	    U(F_PROTOCOL, proto) || U(F_MACMATCH, macidx))
+	    U(F_PROTOCOL, proto) || U(F_MACMATCH, macidx) ||
+	    U(F_VLAN, ivlan_vld))
 		return -EOPNOTSUPP;
 
 #undef S
@@ -292,6 +293,9 @@ static u64 hash_filter_ntuple(const struct filter_entry *f)
 		ntuple |= (u64)(f->fs.val.ethtype) << tp->ethertype_shift;
 	if (tp->macmatch_shift >= 0 && f->fs.mask.macidx)
 		ntuple |= (u64)(f->fs.val.macidx) << tp->macmatch_shift;
+	if (tp->vlan_shift >= 0 && f->fs.mask.ivlan)
+		ntuple |= (u64)(F_FT_VLAN_VLD | f->fs.val.ivlan) <<
+			  tp->vlan_shift;
 
 	return ntuple;
 }
@@ -769,6 +773,9 @@ static int set_filter_wr(struct rte_eth_dev *dev, unsigned int fidx)
 			    V_FW_FILTER_WR_L2TIX(f->l2t ? f->l2t->idx : 0));
 	fwr->ethtype = cpu_to_be16(f->fs.val.ethtype);
 	fwr->ethtypem = cpu_to_be16(f->fs.mask.ethtype);
+	fwr->frag_to_ovlan_vldm =
+		(V_FW_FILTER_WR_IVLAN_VLD(f->fs.val.ivlan_vld) |
+		 V_FW_FILTER_WR_IVLAN_VLDM(f->fs.mask.ivlan_vld));
 	fwr->smac_sel = 0;
 	fwr->rx_chan_rx_rpl_iq =
 		cpu_to_be16(V_FW_FILTER_WR_RX_CHAN(0) |
@@ -781,6 +788,8 @@ static int set_filter_wr(struct rte_eth_dev *dev, unsigned int fidx)
 			    V_FW_FILTER_WR_PORTM(f->fs.mask.iport));
 	fwr->ptcl = f->fs.val.proto;
 	fwr->ptclm = f->fs.mask.proto;
+	fwr->ivlan = cpu_to_be16(f->fs.val.ivlan);
+	fwr->ivlanm = cpu_to_be16(f->fs.mask.ivlan);
 	rte_memcpy(fwr->lip, f->fs.val.lip, sizeof(fwr->lip));
 	rte_memcpy(fwr->lipm, f->fs.mask.lip, sizeof(fwr->lipm));
 	rte_memcpy(fwr->fip, f->fs.val.fip, sizeof(fwr->fip));
diff --git a/drivers/net/cxgbe/cxgbe_flow.c b/drivers/net/cxgbe/cxgbe_flow.c
index 4c8553039..4b72e6422 100644
--- a/drivers/net/cxgbe/cxgbe_flow.c
+++ b/drivers/net/cxgbe/cxgbe_flow.c
@@ -46,6 +46,53 @@ cxgbe_validate_item(const struct rte_flow_item *i, struct rte_flow_error *e)
 	return 0;
 }
 
+/**
+ * Apart from the 4-tuple IPv4/IPv6 - TCP/UDP information,
+ * there's only 40-bits available to store match fields.
+ * So, to save space, optimize filter spec for some common
+ * known fields that hardware can parse against incoming
+ * packets automatically.
+ */
+static void
+cxgbe_tweak_filter_spec(struct adapter *adap,
+			struct ch_filter_specification *fs)
+{
+	/* Save 16-bit ethertype field space, by setting corresponding
+	 * 1-bit flags in the filter spec for common known ethertypes.
+	 * When hardware sees these flags, it automatically infers and
+	 * matches incoming packets against the corresponding ethertype.
+	 */
+	if (fs->mask.ethtype == 0xffff) {
+		switch (fs->val.ethtype) {
+		case RTE_ETHER_TYPE_IPV4:
+			if (adap->params.tp.ethertype_shift < 0) {
+				fs->type = FILTER_TYPE_IPV4;
+				fs->val.ethtype = 0;
+				fs->mask.ethtype = 0;
+			}
+			break;
+		case RTE_ETHER_TYPE_IPV6:
+			if (adap->params.tp.ethertype_shift < 0) {
+				fs->type = FILTER_TYPE_IPV6;
+				fs->val.ethtype = 0;
+				fs->mask.ethtype = 0;
+			}
+			break;
+		case RTE_ETHER_TYPE_VLAN:
+			if (adap->params.tp.ethertype_shift < 0 &&
+			    adap->params.tp.vlan_shift >= 0) {
+				fs->val.ivlan_vld = 1;
+				fs->mask.ivlan_vld = 1;
+				fs->val.ethtype = 0;
+				fs->mask.ethtype = 0;
+			}
+			break;
+		default:
+			break;
+		}
+	}
+}
+
 static void
 cxgbe_fill_filter_region(struct adapter *adap,
 			 struct ch_filter_specification *fs)
@@ -95,6 +142,9 @@ cxgbe_fill_filter_region(struct adapter *adap,
 		ntuple_mask |= (u64)fs->mask.iport << tp->port_shift;
 	if (tp->macmatch_shift >= 0)
 		ntuple_mask |= (u64)fs->mask.macidx << tp->macmatch_shift;
+	if (tp->vlan_shift >= 0 && fs->mask.ivlan_vld)
+		ntuple_mask |= (u64)(F_FT_VLAN_VLD | fs->mask.ivlan) <<
+			       tp->vlan_shift;
 
 	if (ntuple_mask != hash_filter_mask)
 		return;
@@ -114,6 +164,25 @@ ch_rte_parsetype_eth(const void *dmask, const struct rte_flow_item *item,
 	/* If user has not given any mask, then use chelsio supported mask. */
 	mask = umask ? umask : (const struct rte_flow_item_eth *)dmask;
 
+	if (!spec)
+		return 0;
+
+	/* Chelsio hardware supports matching on only one ethertype
+	 * (i.e. either the outer or inner ethertype, but not both). If
+	 * we already encountered VLAN item, then ensure that the outer
+	 * ethertype is VLAN (0x8100) and don't overwrite the inner
+	 * ethertype stored during VLAN item parsing. Note that if
+	 * 'ivlan_vld' bit is set in Chelsio filter spec, then the
+	 * hardware automatically only matches packets with outer
+	 * ethertype having VLAN (0x8100).
+	 */
+	if (fs->mask.ivlan_vld &&
+	    be16_to_cpu(spec->type) != RTE_ETHER_TYPE_VLAN)
+		return rte_flow_error_set(e, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM,
+					  item,
+					  "Already encountered VLAN item,"
+					  " but outer ethertype is not 0x8100");
+
 	/* we don't support SRC_MAC filtering*/
 	if (!rte_is_zero_ether_addr(&mask->src))
 		return rte_flow_error_set(e, ENOTSUP, RTE_FLOW_ERROR_TYPE_ITEM,
@@ -137,8 +206,13 @@ ch_rte_parsetype_eth(const void *dmask, const struct rte_flow_item *item,
 		CXGBE_FILL_FS(idx, 0x1ff, macidx);
 	}
 
-	CXGBE_FILL_FS(be16_to_cpu(spec->type),
-		      be16_to_cpu(mask->type), ethtype);
+	/* Only set outer ethertype, if we didn't encounter VLAN item yet.
+	 * Otherwise, the inner ethertype set by VLAN item will get
+	 * overwritten.
+	 */
+	if (!fs->mask.ivlan_vld)
+		CXGBE_FILL_FS(be16_to_cpu(spec->type),
+			      be16_to_cpu(mask->type), ethtype);
 	return 0;
 }
 
@@ -163,6 +237,50 @@ ch_rte_parsetype_port(const void *dmask, const struct rte_flow_item *item,
 	return 0;
 }
 
+static int
+ch_rte_parsetype_vlan(const void *dmask, const struct rte_flow_item *item,
+		      struct ch_filter_specification *fs,
+		      struct rte_flow_error *e)
+{
+	const struct rte_flow_item_vlan *spec = item->spec;
+	const struct rte_flow_item_vlan *umask = item->mask;
+	const struct rte_flow_item_vlan *mask;
+
+	/* If user has not given any mask, then use chelsio supported mask. */
+	mask = umask ? umask : (const struct rte_flow_item_vlan *)dmask;
+
+	CXGBE_FILL_FS(1, 1, ivlan_vld);
+	if (!spec)
+		return 0; /* Wildcard, match all VLAN */
+
+	/* Chelsio hardware supports matching on only one ethertype
+	 * (i.e. either the outer or inner ethertype, but not both).
+	 * If outer ethertype is already set and is not VLAN (0x8100),
+	 * then don't proceed further. Otherwise, reset the outer
+	 * ethertype, so that it can be replaced by inner ethertype.
+	 * Note that the hardware will automatically match on outer
+	 * ethertype 0x8100, if 'ivlan_vld' bit is set in Chelsio
+	 * filter spec.
+	 */
+	if (fs->mask.ethtype) {
+		if (fs->val.ethtype != RTE_ETHER_TYPE_VLAN)
+			return rte_flow_error_set(e, EINVAL,
+						  RTE_FLOW_ERROR_TYPE_ITEM,
+						  item,
+						  "Outer ethertype not 0x8100");
+
+		fs->val.ethtype = 0;
+		fs->mask.ethtype = 0;
+	}
+
+	CXGBE_FILL_FS(be16_to_cpu(spec->tci), be16_to_cpu(mask->tci), ivlan);
+	if (spec->inner_type)
+		CXGBE_FILL_FS(be16_to_cpu(spec->inner_type),
+			      be16_to_cpu(mask->inner_type), ethtype);
+
+	return 0;
+}
+
 static int
 ch_rte_parsetype_udp(const void *dmask, const struct rte_flow_item *item,
 		     struct ch_filter_specification *fs,
@@ -232,8 +350,13 @@ ch_rte_parsetype_ipv4(const void *dmask, const struct rte_flow_item *item,
 		return rte_flow_error_set(e, ENOTSUP, RTE_FLOW_ERROR_TYPE_ITEM,
 					  item, "ttl/tos are not supported");
 
+	if (fs->mask.ethtype &&
+	    (fs->val.ethtype != RTE_ETHER_TYPE_VLAN &&
+	     fs->val.ethtype != RTE_ETHER_TYPE_IPV4))
+		return rte_flow_error_set(e, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM,
+					  item,
+					  "Couldn't find IPv4 ethertype");
 	fs->type = FILTER_TYPE_IPV4;
-	CXGBE_FILL_FS(RTE_ETHER_TYPE_IPV4, 0xffff, ethtype);
 	if (!val)
 		return 0; /* ipv4 wild card */
 
@@ -261,8 +384,13 @@ ch_rte_parsetype_ipv6(const void *dmask, const struct rte_flow_item *item,
 					  item,
 					  "tc/flow/hop are not supported");
 
+	if (fs->mask.ethtype &&
+	    (fs->val.ethtype != RTE_ETHER_TYPE_VLAN &&
+	     fs->val.ethtype != RTE_ETHER_TYPE_IPV6))
+		return rte_flow_error_set(e, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM,
+					  item,
+					  "Couldn't find IPv6 ethertype");
 	fs->type = FILTER_TYPE_IPV6;
-	CXGBE_FILL_FS(RTE_ETHER_TYPE_IPV6, 0xffff, ethtype);
 	if (!val)
 		return 0; /* ipv6 wild card */
 
@@ -700,6 +828,14 @@ static struct chrte_fparse parseitem[] = {
 		}
 	},
 
+	[RTE_FLOW_ITEM_TYPE_VLAN] = {
+		.fptr = ch_rte_parsetype_vlan,
+		.dmask = &(const struct rte_flow_item_vlan){
+			.tci = 0xffff,
+			.inner_type = 0xffff,
+		}
+	},
+
 	[RTE_FLOW_ITEM_TYPE_IPV4] = {
 		.fptr  = ch_rte_parsetype_ipv4,
 		.dmask = &rte_flow_item_ipv4_mask,
@@ -773,6 +909,7 @@ cxgbe_rtef_parse_items(struct rte_flow *flow,
 	}
 
 	cxgbe_fill_filter_region(adap, &flow->fs);
+	cxgbe_tweak_filter_spec(adap, &flow->fs);
 
 	return 0;
 }
diff --git a/drivers/net/cxgbe/cxgbe_main.c b/drivers/net/cxgbe/cxgbe_main.c
index 4701518a6..f6967a3e4 100644
--- a/drivers/net/cxgbe/cxgbe_main.c
+++ b/drivers/net/cxgbe/cxgbe_main.c
@@ -753,12 +753,6 @@ static void configure_vlan_types(struct adapter *adapter)
 				 V_OVLAN_ETYPE(M_OVLAN_ETYPE),
 				 V_OVLAN_MASK(M_OVLAN_MASK) |
 				 V_OVLAN_ETYPE(0x9100));
-		/* OVLAN Type 0x8100 */
-		t4_set_reg_field(adapter, MPS_PORT_RX_OVLAN_REG(i, A_RX_OVLAN2),
-				 V_OVLAN_MASK(M_OVLAN_MASK) |
-				 V_OVLAN_ETYPE(M_OVLAN_ETYPE),
-				 V_OVLAN_MASK(M_OVLAN_MASK) |
-				 V_OVLAN_ETYPE(0x8100));
 
 		/* IVLAN 0X8100 */
 		t4_set_reg_field(adapter, MPS_PORT_RX_IVLAN(i),
@@ -767,9 +761,9 @@ static void configure_vlan_types(struct adapter *adapter)
 
 		t4_set_reg_field(adapter, MPS_PORT_RX_CTL(i),
 				 F_OVLAN_EN0 | F_OVLAN_EN1 |
-				 F_OVLAN_EN2 | F_IVLAN_EN,
+				 F_IVLAN_EN,
 				 F_OVLAN_EN0 | F_OVLAN_EN1 |
-				 F_OVLAN_EN2 | F_IVLAN_EN);
+				 F_IVLAN_EN);
 	}
 
 	t4_tp_wr_bits_indirect(adapter, A_TP_INGRESS_CONFIG, V_RM_OVLAN(1),
-- 
2.18.0


^ permalink raw reply	[flat|nested] 30+ messages in thread

* [dpdk-dev] [PATCH 12/12] net/cxgbe: add rte_flow support for setting VLAN PCP
  2019-09-06 21:52 [dpdk-dev] [PATCH 00/12] net/cxgbe: bug fixes and updates for CXGBE/CXGBEVF PMD Rahul Lakkireddy
                   ` (10 preceding siblings ...)
  2019-09-06 21:52 ` [dpdk-dev] [PATCH 11/12] net/cxgbe: add rte_flow support for matching VLAN Rahul Lakkireddy
@ 2019-09-06 21:52 ` Rahul Lakkireddy
  2019-09-27 14:41 ` [dpdk-dev] [PATCH 00/12] net/cxgbe: bug fixes and updates for CXGBE/CXGBEVF PMD Ferruh Yigit
  2019-09-27 20:30 ` [dpdk-dev] [PATCH v2 " Rahul Lakkireddy
  13 siblings, 0 replies; 30+ messages in thread
From: Rahul Lakkireddy @ 2019-09-06 21:52 UTC (permalink / raw)
  To: dev; +Cc: nirranjan

Add support for setting the VLAN PCP field via the rte_flow API.
The hardware overwrites the entire 16-bit VLAN TCI field, so both
the VLAN VID and PCP actions must be specified.
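As an illustration only (not part of this patch), a testpmd rule that
rewrites both fields could look roughly like this (port 0 assumed;
whether additional actions such as an egress port are required depends
on the configured filter mode):

    flow create 0 ingress pattern eth / vlan / ipv4 / end actions of_set_vlan_vid vlan_vid 10 / of_set_vlan_pcp vlan_pcp 3 / queue index 0 / end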

Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
---
 drivers/net/cxgbe/cxgbe_flow.c | 27 +++++++++++++++++++++++++++
 1 file changed, 27 insertions(+)

diff --git a/drivers/net/cxgbe/cxgbe_flow.c b/drivers/net/cxgbe/cxgbe_flow.c
index 4b72e6422..9ee8353ae 100644
--- a/drivers/net/cxgbe/cxgbe_flow.c
+++ b/drivers/net/cxgbe/cxgbe_flow.c
@@ -568,6 +568,7 @@ ch_rte_parse_atype_switch(const struct rte_flow_action *a,
 			  struct rte_flow_error *e)
 {
 	const struct rte_flow_action_of_set_vlan_vid *vlanid;
+	const struct rte_flow_action_of_set_vlan_pcp *vlanpcp;
 	const struct rte_flow_action_of_push_vlan *pushvlan;
 	const struct rte_flow_action_set_ipv4 *ipv4;
 	const struct rte_flow_action_set_ipv6 *ipv6;
@@ -591,6 +592,20 @@ ch_rte_parse_atype_switch(const struct rte_flow_action *a,
 		tmp_vlan = fs->vlan & 0xe000;
 		fs->vlan = (be16_to_cpu(vlanid->vlan_vid) & 0xfff) | tmp_vlan;
 		break;
+	case RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_PCP:
+		vlanpcp = (const struct rte_flow_action_of_set_vlan_pcp *)
+			  a->conf;
+		/* If explicitly asked to push a new VLAN header,
+		 * then don't set rewrite mode. Otherwise, the
+		 * incoming VLAN packets will get their VLAN fields
+		 * rewritten, instead of adding an additional outer
+		 * VLAN header.
+		 */
+		if (fs->newvlan != VLAN_INSERT)
+			fs->newvlan = VLAN_REWRITE;
+		tmp_vlan = fs->vlan & 0xfff;
+		fs->vlan = (vlanpcp->vlan_pcp << 13) | tmp_vlan;
+		break;
 	case RTE_FLOW_ACTION_TYPE_OF_PUSH_VLAN:
 		pushvlan = (const struct rte_flow_action_of_push_vlan *)
 			    a->conf;
@@ -724,6 +739,7 @@ cxgbe_rtef_parse_actions(struct rte_flow *flow,
 {
 	struct ch_filter_specification *fs = &flow->fs;
 	uint8_t nmode = 0, nat_ipv4 = 0, nat_ipv6 = 0;
+	uint8_t vlan_set_vid = 0, vlan_set_pcp = 0;
 	const struct rte_flow_action_queue *q;
 	const struct rte_flow_action *a;
 	char abit = 0;
@@ -762,6 +778,11 @@ cxgbe_rtef_parse_actions(struct rte_flow *flow,
 			fs->hitcnts = 1;
 			break;
 		case RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_VID:
+			vlan_set_vid++;
+			goto action_switch;
+		case RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_PCP:
+			vlan_set_pcp++;
+			goto action_switch;
 		case RTE_FLOW_ACTION_TYPE_OF_PUSH_VLAN:
 		case RTE_FLOW_ACTION_TYPE_OF_POP_VLAN:
 		case RTE_FLOW_ACTION_TYPE_PHY_PORT:
@@ -804,6 +825,12 @@ cxgbe_rtef_parse_actions(struct rte_flow *flow,
 		}
 	}
 
+	if (fs->newvlan == VLAN_REWRITE && (!vlan_set_vid || !vlan_set_pcp))
+		return rte_flow_error_set(e, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_ACTION, a,
+					  "Both OF_SET_VLAN_VID and "
+					  "OF_SET_VLAN_PCP must be specified");
+
 	if (ch_rte_parse_nat(nmode, fs))
 		return rte_flow_error_set(e, EINVAL,
 					  RTE_FLOW_ERROR_TYPE_ACTION, a,
-- 
2.18.0


^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [dpdk-dev] [PATCH 07/12] net/cxgbe: use dynamic logging for debug prints
  2019-09-06 21:52 ` [dpdk-dev] [PATCH 07/12] net/cxgbe: use dynamic logging for debug prints Rahul Lakkireddy
@ 2019-09-27 14:37   ` Ferruh Yigit
  2019-09-27 19:55     ` Rahul Lakkireddy
  0 siblings, 1 reply; 30+ messages in thread
From: Ferruh Yigit @ 2019-09-27 14:37 UTC (permalink / raw)
  To: Rahul Lakkireddy, dev; +Cc: nirranjan

On 9/6/2019 10:52 PM, Rahul Lakkireddy wrote:
> Remove compile time flags and use dynamic logging for debug prints.
> 
> Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
> ---
>  config/common_base               |  5 ---
>  doc/guides/nics/cxgbe.rst        | 20 -----------
>  drivers/net/cxgbe/cxgbe_compat.h | 58 +++++++++++---------------------
>  drivers/net/cxgbe/cxgbe_ethdev.c | 16 +++++++++
>  4 files changed, 35 insertions(+), 64 deletions(-)
> 
> diff --git a/config/common_base b/config/common_base
> index 8ef75c203..43964de6d 100644
> --- a/config/common_base
> +++ b/config/common_base
> @@ -217,11 +217,6 @@ CONFIG_RTE_LIBRTE_BNXT_PMD=y
>  # Compile burst-oriented Chelsio Terminator (CXGBE) PMD
>  #
>  CONFIG_RTE_LIBRTE_CXGBE_PMD=y
> -CONFIG_RTE_LIBRTE_CXGBE_DEBUG=n

+1, thanks.

> -CONFIG_RTE_LIBRTE_CXGBE_DEBUG_REG=n
> -CONFIG_RTE_LIBRTE_CXGBE_DEBUG_MBOX=n

Are above two used on datapath?

> -CONFIG_RTE_LIBRTE_CXGBE_DEBUG_TX=n
> -CONFIG_RTE_LIBRTE_CXGBE_DEBUG_RX=n

Are you sure about these?
If these logs are enabled in datapath, switching to the dynamic log will add
additional checks for logging, most probably per packet.

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [dpdk-dev] [PATCH 00/12] net/cxgbe: bug fixes and updates for CXGBE/CXGBEVF PMD
  2019-09-06 21:52 [dpdk-dev] [PATCH 00/12] net/cxgbe: bug fixes and updates for CXGBE/CXGBEVF PMD Rahul Lakkireddy
                   ` (11 preceding siblings ...)
  2019-09-06 21:52 ` [dpdk-dev] [PATCH 12/12] net/cxgbe: add rte_flow support for setting VLAN PCP Rahul Lakkireddy
@ 2019-09-27 14:41 ` Ferruh Yigit
  2019-09-27 20:30 ` [dpdk-dev] [PATCH v2 " Rahul Lakkireddy
  13 siblings, 0 replies; 30+ messages in thread
From: Ferruh Yigit @ 2019-09-27 14:41 UTC (permalink / raw)
  To: Rahul Lakkireddy, dev; +Cc: nirranjan

On 9/6/2019 10:52 PM, Rahul Lakkireddy wrote:
> This series of patches contain bug fixes and feature updates for
> CXGBE and CXGBEVF PMD. Patches 1 to 6 contain bug fixes. Patches
> 7 to 12 contain updates and new features for CXGBE/CXGBEVF PMD.
> 
> Patch 1 adds cxgbe_ prefix to some global functions to avoid name
> collision.
> 
> Patch 2 fixes NULL dereference when allocating CLIP for IPv6 rte_flow
> offloads.
> 
> Patch 3 fixes slot allocation logic for IPv6 rte_flow offloads
> for T6 NICs.
> 
> Patch 4 fixes issues with parsing VLAN rte_flow offload actions.
> 
> Patch 5 prefetches packets for non-coalesced Tx packets.
> 
> Patch 6 fixes NULL dereference when accessing firmware event queue
> for link updates before it is created.
> 
> Patch 7 reworks compilation dependent logs to use dynamic logging.
> 
> Patch 8 reworks devargs parsing to separate CXGBE VF only arguments.
> 
> Patch 9 removes compilation dependent flag that controls Tx coalescing
> throughput vs latency behavior and uses devargs instead.
> 
> Patch 10 uses new firmware API to fetch the maximum number of
> packets that can be coalesced in Tx path.
> 
> Patch 11 adds support for VLAN pattern match item via rte_flow offload.
> 
> Patch 12 adds support for setting VLAN PCP action item via rte_flow
> offload.
> 
> Thanks,
> Rahul
> 
> 
> Rahul Lakkireddy (12):
>   net/cxgbe: add cxgbe_ prefix to global functions
>   net/cxgbe: fix NULL access when allocating CLIP entry
>   net/cxgbe: fix slot allocation for IPv6 flows
>   net/cxgbe: fix parsing VLAN ID rewrite action
>   net/cxgbe: fix prefetch for non-coalesced Tx packets
>   net/cxgbe: avoid polling link status before device start
>   net/cxgbe: use dynamic logging for debug prints
>   net/cxgbe: separate VF only devargs
>   net/cxgbe: add devarg to control Tx coalescing
>   net/cxgbe: fetch max Tx coalesce limit from firmware
>   net/cxgbe: add rte_flow support for matching VLAN
>   net/cxgbe: add rte_flow support for setting VLAN PCP

Hi Rahul,

Overall patchset lgtm, I will wait for your response to 7/12 to proceed, thanks.

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [dpdk-dev] [PATCH 07/12] net/cxgbe: use dynamic logging for debug prints
  2019-09-27 14:37   ` Ferruh Yigit
@ 2019-09-27 19:55     ` Rahul Lakkireddy
  0 siblings, 0 replies; 30+ messages in thread
From: Rahul Lakkireddy @ 2019-09-27 19:55 UTC (permalink / raw)
  To: Ferruh Yigit; +Cc: dev, Nirranjan Kirubaharan

On Friday, September 09/27/19, 2019 at 20:07:20 +0530, Ferruh Yigit wrote:
> On 9/6/2019 10:52 PM, Rahul Lakkireddy wrote:
> > Remove compile time flags and use dynamic logging for debug prints.
> > 
> > Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
> > ---
> >  config/common_base               |  5 ---
> >  doc/guides/nics/cxgbe.rst        | 20 -----------
> >  drivers/net/cxgbe/cxgbe_compat.h | 58 +++++++++++---------------------
> >  drivers/net/cxgbe/cxgbe_ethdev.c | 16 +++++++++
> >  4 files changed, 35 insertions(+), 64 deletions(-)
> > 
> > diff --git a/config/common_base b/config/common_base
> > index 8ef75c203..43964de6d 100644
> > --- a/config/common_base
> > +++ b/config/common_base
> > @@ -217,11 +217,6 @@ CONFIG_RTE_LIBRTE_BNXT_PMD=y
> >  # Compile burst-oriented Chelsio Terminator (CXGBE) PMD
> >  #
> >  CONFIG_RTE_LIBRTE_CXGBE_PMD=y
> > -CONFIG_RTE_LIBRTE_CXGBE_DEBUG=n
> 
> +1, thanks.
> 
> > -CONFIG_RTE_LIBRTE_CXGBE_DEBUG_REG=n
> > -CONFIG_RTE_LIBRTE_CXGBE_DEBUG_MBOX=n
> 
> Are above two used on datapath?
> 

MBOX is only used in control path. But, REG is used in both control
and datapath.

> > -CONFIG_RTE_LIBRTE_CXGBE_DEBUG_TX=n
> > -CONFIG_RTE_LIBRTE_CXGBE_DEBUG_RX=n
> 
> Are you sure about these?
> If these logs are enabled in datapath, switching to the dynamic log will add
> additional checks for logging, most probably per packet.

(Sigh)... You're correct! I was too excited about the nifty dynamic
log feature and somehow missed the above obvious point... :(

On second thought, the REG, TX, and RX prints are rarely enabled and
hence I'm going to remove them completely. OTOH, MBOX helped in
debugging several control path issues in the past, so it will be kept
as dynamic log.
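For reference, once it is dynamic, the MBOX log would be enabled at
runtime via the EAL log-level option, along these lines (assuming the
logtype ends up registered as pmd.net.cxgbe.mbox):

    testpmd --log-level=pmd.net.cxgbe.mbox:debug -- -i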

Will send v2.

Thanks,
Rahul

^ permalink raw reply	[flat|nested] 30+ messages in thread

* [dpdk-dev] [PATCH v2 00/12] net/cxgbe: bug fixes and updates for CXGBE/CXGBEVF PMD
  2019-09-06 21:52 [dpdk-dev] [PATCH 00/12] net/cxgbe: bug fixes and updates for CXGBE/CXGBEVF PMD Rahul Lakkireddy
                   ` (12 preceding siblings ...)
  2019-09-27 14:41 ` [dpdk-dev] [PATCH 00/12] net/cxgbe: bug fixes and updates for CXGBE/CXGBEVF PMD Ferruh Yigit
@ 2019-09-27 20:30 ` Rahul Lakkireddy
  2019-09-27 20:30   ` [dpdk-dev] [PATCH v2 01/12] net/cxgbe: add cxgbe_ prefix to global functions Rahul Lakkireddy
                     ` (12 more replies)
  13 siblings, 13 replies; 30+ messages in thread
From: Rahul Lakkireddy @ 2019-09-27 20:30 UTC (permalink / raw)
  To: dev; +Cc: nirranjan

This series of patches contain bug fixes and feature updates for
CXGBE and CXGBEVF PMD. Patches 1 to 6 contain bug fixes. Patches
7 to 12 contain updates and new features for CXGBE/CXGBEVF PMD.

Patch 1 adds cxgbe_ prefix to some global functions to avoid name
collision.

Patch 2 fixes NULL dereference when allocating CLIP for IPv6 rte_flow
offloads.

Patch 3 fixes slot allocation logic for IPv6 rte_flow offloads
for T6 NICs.

Patch 4 fixes issues with parsing VLAN rte_flow offload actions.

Patch 5 prefetches packets for non-coalesced Tx packets.

Patch 6 fixes NULL dereference when accessing firmware event queue
for link updates before it is created.

Patch 7 reworks compilation dependent logs to use dynamic logging.

Patch 8 reworks devargs parsing to separate CXGBE VF only arguments.

Patch 9 removes compilation dependent flag that controls Tx coalescing
throughput vs latency behavior and uses devargs instead.

Patch 10 uses new firmware API to fetch the maximum number of
packets that can be coalesced in Tx path.

Patch 11 adds support for VLAN pattern match item via rte_flow offload.

Patch 12 adds support for setting VLAN PCP action item via rte_flow
offload.

Thanks,
Rahul

---
v2:
- Remove rarely used compile-time-enabled debug logs from the
  datapath in patch 7.
- In the cxgbe.rst doc, use ^ (instead of -) to mark the common and
  VF-only devargs as subsections of Runtime Options in patch 8.


Rahul Lakkireddy (12):
  net/cxgbe: add cxgbe_ prefix to global functions
  net/cxgbe: fix NULL access when allocating CLIP entry
  net/cxgbe: fix slot allocation for IPv6 flows
  net/cxgbe: fix parsing VLAN ID rewrite action
  net/cxgbe: fix prefetch for non-coalesced Tx packets
  net/cxgbe: avoid polling link status before device start
  net/cxgbe: use dynamic logging for debug prints
  net/cxgbe: separate VF only devargs
  net/cxgbe: add devarg to control Tx coalescing
  net/cxgbe: fetch max Tx coalesce limit from firmware
  net/cxgbe: add rte_flow support for matching VLAN
  net/cxgbe: add rte_flow support for setting VLAN PCP

 config/common_base                      |   6 -
 doc/guides/nics/cxgbe.rst               |  57 +++---
 drivers/net/cxgbe/base/adapter.h        |  27 ++-
 drivers/net/cxgbe/base/common.h         |   1 +
 drivers/net/cxgbe/base/t4_regs_values.h |   9 +
 drivers/net/cxgbe/base/t4fw_interface.h |   3 +-
 drivers/net/cxgbe/cxgbe.h               |  10 +-
 drivers/net/cxgbe/cxgbe_compat.h        |  55 ++----
 drivers/net/cxgbe/cxgbe_ethdev.c        |  35 ++--
 drivers/net/cxgbe/cxgbe_filter.c        | 230 ++++++++++--------------
 drivers/net/cxgbe/cxgbe_filter.h        |  22 +--
 drivers/net/cxgbe/cxgbe_flow.c          | 204 +++++++++++++++++++--
 drivers/net/cxgbe/cxgbe_main.c          | 137 +++++++-------
 drivers/net/cxgbe/cxgbe_pfvf.h          |  10 ++
 drivers/net/cxgbe/cxgbevf_ethdev.c      |   7 +
 drivers/net/cxgbe/cxgbevf_main.c        |  12 +-
 drivers/net/cxgbe/l2t.c                 |   3 +-
 drivers/net/cxgbe/l2t.h                 |   3 +-
 drivers/net/cxgbe/sge.c                 |  21 +--
 19 files changed, 504 insertions(+), 348 deletions(-)

-- 
2.18.0


^ permalink raw reply	[flat|nested] 30+ messages in thread

* [dpdk-dev] [PATCH v2 01/12] net/cxgbe: add cxgbe_ prefix to global functions
  2019-09-27 20:30 ` [dpdk-dev] [PATCH v2 " Rahul Lakkireddy
@ 2019-09-27 20:30   ` Rahul Lakkireddy
  2019-09-27 20:30   ` [dpdk-dev] [PATCH v2 02/12] net/cxgbe: fix NULL access when allocating CLIP entry Rahul Lakkireddy
                     ` (11 subsequent siblings)
  12 siblings, 0 replies; 30+ messages in thread
From: Rahul Lakkireddy @ 2019-09-27 20:30 UTC (permalink / raw)
  To: dev; +Cc: nirranjan, stable

To avoid name collisions, add cxgbe_ prefix to some global functions.
Also, make some local functions static in cxgbe_filter.c.

Cc: stable@dpdk.org
Fixes: ee61f5113b17 ("net/cxgbe: parse and validate flows")
Fixes: 9eb2c9a48072 ("net/cxgbe: implement flow create operation")
Fixes: 3a381a4116ed ("net/cxgbe: query firmware for HASH filter resources")
Fixes: af44a577988b ("net/cxgbe: support to offload flows to HASH region")
Fixes: 41dc98b0827a ("net/cxgbe: support to delete flows in HASH region")
Fixes: 23af667f1507 ("net/cxgbe: add API to program hardware layer 2 table")

Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
---
v2:
- No changes.

 drivers/net/cxgbe/cxgbe_filter.c | 30 ++++++++++++++++--------------
 drivers/net/cxgbe/cxgbe_filter.h | 19 +++++++++----------
 drivers/net/cxgbe/cxgbe_flow.c   |  6 +++---
 drivers/net/cxgbe/cxgbe_main.c   | 10 +++++-----
 drivers/net/cxgbe/l2t.c          |  3 ++-
 drivers/net/cxgbe/l2t.h          |  3 ++-
 6 files changed, 37 insertions(+), 34 deletions(-)

diff --git a/drivers/net/cxgbe/cxgbe_filter.c b/drivers/net/cxgbe/cxgbe_filter.c
index 7fcee5c0a..cc8774c1d 100644
--- a/drivers/net/cxgbe/cxgbe_filter.c
+++ b/drivers/net/cxgbe/cxgbe_filter.c
@@ -14,7 +14,7 @@
 /**
  * Initialize Hash Filters
  */
-int init_hash_filter(struct adapter *adap)
+int cxgbe_init_hash_filter(struct adapter *adap)
 {
 	unsigned int n_user_filters;
 	unsigned int user_filter_perc;
@@ -53,7 +53,8 @@ int init_hash_filter(struct adapter *adap)
  * Validate if the requested filter specification can be set by checking
  * if the requested features have been enabled
  */
-int validate_filter(struct adapter *adapter, struct ch_filter_specification *fs)
+int cxgbe_validate_filter(struct adapter *adapter,
+			  struct ch_filter_specification *fs)
 {
 	u32 fconf;
 
@@ -133,7 +134,7 @@ static unsigned int get_filter_steerq(struct rte_eth_dev *dev,
 }
 
 /* Return an error number if the indicated filter isn't writable ... */
-int writable_filter(struct filter_entry *f)
+static int writable_filter(struct filter_entry *f)
 {
 	if (f->locked)
 		return -EPERM;
@@ -214,7 +215,7 @@ static inline void mk_set_tcb_field_ulp(struct filter_entry *f,
 /**
  * Check if entry already filled.
  */
-bool is_filter_set(struct tid_info *t, int fidx, int family)
+bool cxgbe_is_filter_set(struct tid_info *t, int fidx, int family)
 {
 	bool result = FALSE;
 	int i, max;
@@ -527,7 +528,7 @@ static int cxgbe_set_hash_filter(struct rte_eth_dev *dev,
 	int atid, size;
 	int ret = 0;
 
-	ret = validate_filter(adapter, fs);
+	ret = cxgbe_validate_filter(adapter, fs);
 	if (ret)
 		return ret;
 
@@ -618,7 +619,7 @@ static int cxgbe_set_hash_filter(struct rte_eth_dev *dev,
  * Clear a filter and release any of its resources that we own.  This also
  * clears the filter's "pending" status.
  */
-void clear_filter(struct filter_entry *f)
+static void clear_filter(struct filter_entry *f)
 {
 	if (f->clipt)
 		cxgbe_clip_release(f->dev, f->clipt);
@@ -690,7 +691,7 @@ static int del_filter_wr(struct rte_eth_dev *dev, unsigned int fidx)
 	return 0;
 }
 
-int set_filter_wr(struct rte_eth_dev *dev, unsigned int fidx)
+static int set_filter_wr(struct rte_eth_dev *dev, unsigned int fidx)
 {
 	struct adapter *adapter = ethdev2adap(dev);
 	struct filter_entry *f = &adapter->tids.ftid_tab[fidx];
@@ -868,7 +869,7 @@ int cxgbe_del_filter(struct rte_eth_dev *dev, unsigned int filter_id,
 
 	chip_ver = CHELSIO_CHIP_VERSION(adapter->params.chip);
 
-	ret = is_filter_set(&adapter->tids, filter_id, fs->type);
+	ret = cxgbe_is_filter_set(&adapter->tids, filter_id, fs->type);
 	if (!ret) {
 		dev_warn(adap, "%s: could not find filter entry: %u\n",
 			 __func__, filter_id);
@@ -940,7 +941,7 @@ int cxgbe_set_filter(struct rte_eth_dev *dev, unsigned int filter_id,
 
 	chip_ver = CHELSIO_CHIP_VERSION(adapter->params.chip);
 
-	ret = validate_filter(adapter, fs);
+	ret = cxgbe_validate_filter(adapter, fs);
 	if (ret)
 		return ret;
 
@@ -951,7 +952,7 @@ int cxgbe_set_filter(struct rte_eth_dev *dev, unsigned int filter_id,
 	if (fs->type)
 		filter_id &= ~(0x3);
 
-	ret = is_filter_set(&adapter->tids, filter_id, fs->type);
+	ret = cxgbe_is_filter_set(&adapter->tids, filter_id, fs->type);
 	if (ret)
 		return -EBUSY;
 
@@ -1091,7 +1092,8 @@ int cxgbe_set_filter(struct rte_eth_dev *dev, unsigned int filter_id,
 /**
  * Handle a Hash filter write reply.
  */
-void hash_filter_rpl(struct adapter *adap, const struct cpl_act_open_rpl *rpl)
+void cxgbe_hash_filter_rpl(struct adapter *adap,
+			   const struct cpl_act_open_rpl *rpl)
 {
 	struct tid_info *t = &adap->tids;
 	struct filter_entry *f;
@@ -1159,7 +1161,7 @@ void hash_filter_rpl(struct adapter *adap, const struct cpl_act_open_rpl *rpl)
 /**
  * Handle a LE-TCAM filter write/deletion reply.
  */
-void filter_rpl(struct adapter *adap, const struct cpl_set_tcb_rpl *rpl)
+void cxgbe_filter_rpl(struct adapter *adap, const struct cpl_set_tcb_rpl *rpl)
 {
 	struct filter_entry *f = NULL;
 	unsigned int tid = GET_TID(rpl);
@@ -1357,8 +1359,8 @@ int cxgbe_clear_filter_count(struct adapter *adapter, unsigned int fidx,
 /**
  * Handle a Hash filter delete reply.
  */
-void hash_del_filter_rpl(struct adapter *adap,
-			 const struct cpl_abort_rpl_rss *rpl)
+void cxgbe_hash_del_filter_rpl(struct adapter *adap,
+			       const struct cpl_abort_rpl_rss *rpl)
 {
 	struct tid_info *t = &adap->tids;
 	struct filter_entry *f;
diff --git a/drivers/net/cxgbe/cxgbe_filter.h b/drivers/net/cxgbe/cxgbe_filter.h
index 0c67d2d15..1964730ba 100644
--- a/drivers/net/cxgbe/cxgbe_filter.h
+++ b/drivers/net/cxgbe/cxgbe_filter.h
@@ -248,11 +248,8 @@ cxgbe_bitmap_find_free_region(struct rte_bitmap *bmap, unsigned int size,
 	return idx;
 }
 
-bool is_filter_set(struct tid_info *, int fidx, int family);
-void filter_rpl(struct adapter *adap, const struct cpl_set_tcb_rpl *rpl);
-void clear_filter(struct filter_entry *f);
-int set_filter_wr(struct rte_eth_dev *dev, unsigned int fidx);
-int writable_filter(struct filter_entry *f);
+bool cxgbe_is_filter_set(struct tid_info *, int fidx, int family);
+void cxgbe_filter_rpl(struct adapter *adap, const struct cpl_set_tcb_rpl *rpl);
 int cxgbe_set_filter(struct rte_eth_dev *dev, unsigned int filter_id,
 		     struct ch_filter_specification *fs,
 		     struct filter_ctx *ctx);
@@ -260,11 +257,13 @@ int cxgbe_del_filter(struct rte_eth_dev *dev, unsigned int filter_id,
 		     struct ch_filter_specification *fs,
 		     struct filter_ctx *ctx);
 int cxgbe_alloc_ftid(struct adapter *adap, unsigned int family);
-int init_hash_filter(struct adapter *adap);
-void hash_filter_rpl(struct adapter *adap, const struct cpl_act_open_rpl *rpl);
-void hash_del_filter_rpl(struct adapter *adap,
-			 const struct cpl_abort_rpl_rss *rpl);
-int validate_filter(struct adapter *adap, struct ch_filter_specification *fs);
+int cxgbe_init_hash_filter(struct adapter *adap);
+void cxgbe_hash_filter_rpl(struct adapter *adap,
+			   const struct cpl_act_open_rpl *rpl);
+void cxgbe_hash_del_filter_rpl(struct adapter *adap,
+			       const struct cpl_abort_rpl_rss *rpl);
+int cxgbe_validate_filter(struct adapter *adap,
+			  struct ch_filter_specification *fs);
 int cxgbe_get_filter_count(struct adapter *adapter, unsigned int fidx,
 			   u64 *c, int hash, bool get_byte);
 int cxgbe_clear_filter_count(struct adapter *adapter, unsigned int fidx,
diff --git a/drivers/net/cxgbe/cxgbe_flow.c b/drivers/net/cxgbe/cxgbe_flow.c
index d3de689c3..848c61f02 100644
--- a/drivers/net/cxgbe/cxgbe_flow.c
+++ b/drivers/net/cxgbe/cxgbe_flow.c
@@ -309,7 +309,7 @@ static int cxgbe_validate_fidxondel(struct filter_entry *f, unsigned int fidx)
 		dev_err(adap, "invalid flow index %d.\n", fidx);
 		return -EINVAL;
 	}
-	if (!is_filter_set(&adap->tids, fidx, fs.type)) {
+	if (!cxgbe_is_filter_set(&adap->tids, fidx, fs.type)) {
 		dev_err(adap, "Already free fidx:%d f:%p\n", fidx, f);
 		return -EINVAL;
 	}
@@ -321,7 +321,7 @@ static int
 cxgbe_validate_fidxonadd(struct ch_filter_specification *fs,
 			 struct adapter *adap, unsigned int fidx)
 {
-	if (is_filter_set(&adap->tids, fidx, fs->type)) {
+	if (cxgbe_is_filter_set(&adap->tids, fidx, fs->type)) {
 		dev_err(adap, "filter index: %d is busy.\n", fidx);
 		return -EBUSY;
 	}
@@ -1019,7 +1019,7 @@ cxgbe_flow_validate(struct rte_eth_dev *dev,
 		return ret;
 	}
 
-	if (validate_filter(adap, &flow->fs)) {
+	if (cxgbe_validate_filter(adap, &flow->fs)) {
 		t4_os_free(flow);
 		return rte_flow_error_set(e, EINVAL, RTE_FLOW_ERROR_TYPE_HANDLE,
 				NULL,
diff --git a/drivers/net/cxgbe/cxgbe_main.c b/drivers/net/cxgbe/cxgbe_main.c
index 620f60b4d..c3e6b9557 100644
--- a/drivers/net/cxgbe/cxgbe_main.c
+++ b/drivers/net/cxgbe/cxgbe_main.c
@@ -92,19 +92,19 @@ static int fwevtq_handler(struct sge_rspq *q, const __be64 *rsp,
 	} else if (opcode == CPL_ABORT_RPL_RSS) {
 		const struct cpl_abort_rpl_rss *p = (const void *)rsp;
 
-		hash_del_filter_rpl(q->adapter, p);
+		cxgbe_hash_del_filter_rpl(q->adapter, p);
 	} else if (opcode == CPL_SET_TCB_RPL) {
 		const struct cpl_set_tcb_rpl *p = (const void *)rsp;
 
-		filter_rpl(q->adapter, p);
+		cxgbe_filter_rpl(q->adapter, p);
 	} else if (opcode == CPL_ACT_OPEN_RPL) {
 		const struct cpl_act_open_rpl *p = (const void *)rsp;
 
-		hash_filter_rpl(q->adapter, p);
+		cxgbe_hash_filter_rpl(q->adapter, p);
 	} else if (opcode == CPL_L2T_WRITE_RPL) {
 		const struct cpl_l2t_write_rpl *p = (const void *)rsp;
 
-		do_l2t_write_rpl(q->adapter, p);
+		cxgbe_do_l2t_write_rpl(q->adapter, p);
 	} else {
 		dev_err(adapter, "unexpected CPL %#x on FW event queue\n",
 			opcode);
@@ -1179,7 +1179,7 @@ static int adap_init0(struct adapter *adap)
 
 	if ((caps_cmd.niccaps & cpu_to_be16(FW_CAPS_CONFIG_NIC_HASHFILTER)) &&
 	    is_t6(adap->params.chip)) {
-		if (init_hash_filter(adap) < 0)
+		if (cxgbe_init_hash_filter(adap) < 0)
 			goto bye;
 	}
 
diff --git a/drivers/net/cxgbe/l2t.c b/drivers/net/cxgbe/l2t.c
index 6faf624f7..f9d651fe0 100644
--- a/drivers/net/cxgbe/l2t.c
+++ b/drivers/net/cxgbe/l2t.c
@@ -22,7 +22,8 @@ void cxgbe_l2t_release(struct l2t_entry *e)
  * Process a CPL_L2T_WRITE_RPL. Note that the TID in the reply is really
  * the L2T index it refers to.
  */
-void do_l2t_write_rpl(struct adapter *adap, const struct cpl_l2t_write_rpl *rpl)
+void cxgbe_do_l2t_write_rpl(struct adapter *adap,
+			    const struct cpl_l2t_write_rpl *rpl)
 {
 	struct l2t_data *d = adap->l2t;
 	unsigned int tid = GET_TID(rpl);
diff --git a/drivers/net/cxgbe/l2t.h b/drivers/net/cxgbe/l2t.h
index 326abfde4..2c489e4aa 100644
--- a/drivers/net/cxgbe/l2t.h
+++ b/drivers/net/cxgbe/l2t.h
@@ -53,5 +53,6 @@ void t4_cleanup_l2t(struct adapter *adap);
 struct l2t_entry *cxgbe_l2t_alloc_switching(struct rte_eth_dev *dev, u16 vlan,
 					    u8 port, u8 *dmac);
 void cxgbe_l2t_release(struct l2t_entry *e);
-void do_l2t_write_rpl(struct adapter *p, const struct cpl_l2t_write_rpl *rpl);
+void cxgbe_do_l2t_write_rpl(struct adapter *p,
+			    const struct cpl_l2t_write_rpl *rpl);
 #endif /* _CXGBE_L2T_H_ */
-- 
2.18.0


^ permalink raw reply	[flat|nested] 30+ messages in thread

* [dpdk-dev] [PATCH v2 02/12] net/cxgbe: fix NULL access when allocating CLIP entry
  2019-09-27 20:30 ` [dpdk-dev] [PATCH v2 " Rahul Lakkireddy
  2019-09-27 20:30   ` [dpdk-dev] [PATCH v2 01/12] net/cxgbe: add cxgbe_ prefix to global functions Rahul Lakkireddy
@ 2019-09-27 20:30   ` Rahul Lakkireddy
  2019-09-27 20:30   ` [dpdk-dev] [PATCH v2 03/12] net/cxgbe: fix slot allocation for IPv6 flows Rahul Lakkireddy
                     ` (10 subsequent siblings)
  12 siblings, 0 replies; 30+ messages in thread
From: Rahul Lakkireddy @ 2019-09-27 20:30 UTC (permalink / raw)
  To: dev; +Cc: nirranjan, stable

Pass the correct arguments to the CLIP allocation code to avoid a
NULL pointer dereference.

Cc: stable@dpdk.org
Fixes: 3f2c1e209cfc ("net/cxgbe: add Compressed Local IP region")

Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
---
v2:
- No changes.

 drivers/net/cxgbe/cxgbe_filter.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/net/cxgbe/cxgbe_filter.c b/drivers/net/cxgbe/cxgbe_filter.c
index cc8774c1d..3b7966d04 100644
--- a/drivers/net/cxgbe/cxgbe_filter.c
+++ b/drivers/net/cxgbe/cxgbe_filter.c
@@ -1052,7 +1052,7 @@ int cxgbe_set_filter(struct rte_eth_dev *dev, unsigned int filter_id,
 	 */
 	if (chip_ver > CHELSIO_T5 && fs->type &&
 	    memcmp(fs->val.lip, bitoff, sizeof(bitoff))) {
-		f->clipt = cxgbe_clip_alloc(f->dev, (u32 *)&f->fs.val.lip);
+		f->clipt = cxgbe_clip_alloc(dev, (u32 *)&fs->val.lip);
 		if (!f->clipt)
 			goto free_tid;
 	}
-- 
2.18.0


^ permalink raw reply	[flat|nested] 30+ messages in thread

* [dpdk-dev] [PATCH v2 03/12] net/cxgbe: fix slot allocation for IPv6 flows
  2019-09-27 20:30 ` [dpdk-dev] [PATCH v2 " Rahul Lakkireddy
  2019-09-27 20:30   ` [dpdk-dev] [PATCH v2 01/12] net/cxgbe: add cxgbe_ prefix to global functions Rahul Lakkireddy
  2019-09-27 20:30   ` [dpdk-dev] [PATCH v2 02/12] net/cxgbe: fix NULL access when allocating CLIP entry Rahul Lakkireddy
@ 2019-09-27 20:30   ` Rahul Lakkireddy
  2019-09-27 20:30   ` [dpdk-dev] [PATCH v2 04/12] net/cxgbe: fix parsing VLAN ID rewrite action Rahul Lakkireddy
                     ` (9 subsequent siblings)
  12 siblings, 0 replies; 30+ messages in thread
From: Rahul Lakkireddy @ 2019-09-27 20:30 UTC (permalink / raw)
  To: dev; +Cc: nirranjan, stable

IPv6 flows occupy only 2 slots on Chelsio T6 NICs. Fix the slot
calculation logic to return the correct number of slots.

Cc: stable@dpdk.org
Fixes: ee61f5113b17 ("net/cxgbe: parse and validate flows")
Fixes: 9eb2c9a48072 ("net/cxgbe: implement flow create operation")
Fixes: 3f2c1e209cfc ("net/cxgbe: add Compressed Local IP region")

Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
---
v2:
- No changes.

 drivers/net/cxgbe/cxgbe_filter.c | 193 +++++++++++--------------------
 drivers/net/cxgbe/cxgbe_filter.h |   5 +-
 drivers/net/cxgbe/cxgbe_flow.c   |  15 ++-
 3 files changed, 85 insertions(+), 128 deletions(-)

diff --git a/drivers/net/cxgbe/cxgbe_filter.c b/drivers/net/cxgbe/cxgbe_filter.c
index 3b7966d04..33b95a69a 100644
--- a/drivers/net/cxgbe/cxgbe_filter.c
+++ b/drivers/net/cxgbe/cxgbe_filter.c
@@ -213,20 +213,32 @@ static inline void mk_set_tcb_field_ulp(struct filter_entry *f,
 }
 
 /**
- * Check if entry already filled.
+ * IPv6 requires 2 slots on T6 and 4 slots for cards below T6.
+ * IPv4 requires only 1 slot on all cards.
  */
-bool cxgbe_is_filter_set(struct tid_info *t, int fidx, int family)
+u8 cxgbe_filter_slots(struct adapter *adap, u8 family)
 {
-	bool result = FALSE;
-	int i, max;
+	if (family == FILTER_TYPE_IPV6) {
+		if (CHELSIO_CHIP_VERSION(adap->params.chip) < CHELSIO_T6)
+			return 4;
 
-	/* IPv6 requires four slots and IPv4 requires only 1 slot.
-	 * Ensure, there's enough slots available.
-	 */
-	max = family == FILTER_TYPE_IPV6 ? fidx + 3 : fidx;
+		return 2;
+	}
+
+	return 1;
+}
+
+/**
+ * Check if entries are already filled.
+ */
+bool cxgbe_is_filter_set(struct tid_info *t, u32 fidx, u8 nentries)
+{
+	bool result = FALSE;
+	u32 i;
 
+	/* Ensure there's enough slots available. */
 	t4_os_lock(&t->ftid_lock);
-	for (i = fidx; i <= max; i++) {
+	for (i = fidx; i < fidx + nentries; i++) {
 		if (rte_bitmap_get(t->ftid_bmap, i)) {
 			result = TRUE;
 			break;
@@ -237,17 +249,18 @@ bool cxgbe_is_filter_set(struct tid_info *t, int fidx, int family)
 }
 
 /**
- * Allocate a available free entry
+ * Allocate available free entries.
  */
-int cxgbe_alloc_ftid(struct adapter *adap, unsigned int family)
+int cxgbe_alloc_ftid(struct adapter *adap, u8 nentries)
 {
 	struct tid_info *t = &adap->tids;
 	int pos;
 	int size = t->nftids;
 
 	t4_os_lock(&t->ftid_lock);
-	if (family == FILTER_TYPE_IPV6)
-		pos = cxgbe_bitmap_find_free_region(t->ftid_bmap, size, 4);
+	if (nentries > 1)
+		pos = cxgbe_bitmap_find_free_region(t->ftid_bmap, size,
+						    nentries);
 	else
 		pos = cxgbe_find_first_zero_bit(t->ftid_bmap, size);
 	t4_os_unlock(&t->ftid_lock);
@@ -565,7 +578,7 @@ static int cxgbe_set_hash_filter(struct rte_eth_dev *dev,
 	if (atid < 0)
 		goto out_err;
 
-	if (f->fs.type) {
+	if (f->fs.type == FILTER_TYPE_IPV6) {
 		/* IPv6 hash filter */
 		f->clipt = cxgbe_clip_alloc(f->dev, (u32 *)&f->fs.val.lip);
 		if (!f->clipt)
@@ -804,44 +817,34 @@ static int set_filter_wr(struct rte_eth_dev *dev, unsigned int fidx)
 }
 
 /**
- * Set the corresponding entry in the bitmap. 4 slots are
- * marked for IPv6, whereas only 1 slot is marked for IPv4.
+ * Set the corresponding entries in the bitmap.
  */
-static int cxgbe_set_ftid(struct tid_info *t, int fidx, int family)
+static int cxgbe_set_ftid(struct tid_info *t, u32 fidx, u8 nentries)
 {
+	u32 i;
+
 	t4_os_lock(&t->ftid_lock);
 	if (rte_bitmap_get(t->ftid_bmap, fidx)) {
 		t4_os_unlock(&t->ftid_lock);
 		return -EBUSY;
 	}
 
-	if (family == FILTER_TYPE_IPV4) {
-		rte_bitmap_set(t->ftid_bmap, fidx);
-	} else {
-		rte_bitmap_set(t->ftid_bmap, fidx);
-		rte_bitmap_set(t->ftid_bmap, fidx + 1);
-		rte_bitmap_set(t->ftid_bmap, fidx + 2);
-		rte_bitmap_set(t->ftid_bmap, fidx + 3);
-	}
+	for (i = fidx; i < fidx + nentries; i++)
+		rte_bitmap_set(t->ftid_bmap, i);
 	t4_os_unlock(&t->ftid_lock);
 	return 0;
 }
 
 /**
- * Clear the corresponding entry in the bitmap. 4 slots are
- * cleared for IPv6, whereas only 1 slot is cleared for IPv4.
+ * Clear the corresponding entries in the bitmap.
  */
-static void cxgbe_clear_ftid(struct tid_info *t, int fidx, int family)
+static void cxgbe_clear_ftid(struct tid_info *t, u32 fidx, u8 nentries)
 {
+	u32 i;
+
 	t4_os_lock(&t->ftid_lock);
-	if (family == FILTER_TYPE_IPV4) {
-		rte_bitmap_clear(t->ftid_bmap, fidx);
-	} else {
-		rte_bitmap_clear(t->ftid_bmap, fidx);
-		rte_bitmap_clear(t->ftid_bmap, fidx + 1);
-		rte_bitmap_clear(t->ftid_bmap, fidx + 2);
-		rte_bitmap_clear(t->ftid_bmap, fidx + 3);
-	}
+	for (i = fidx; i < fidx + nentries; i++)
+		rte_bitmap_clear(t->ftid_bmap, i);
 	t4_os_unlock(&t->ftid_lock);
 }
 
@@ -859,6 +862,7 @@ int cxgbe_del_filter(struct rte_eth_dev *dev, unsigned int filter_id,
 	struct adapter *adapter = pi->adapter;
 	struct filter_entry *f;
 	unsigned int chip_ver;
+	u8 nentries;
 	int ret;
 
 	if (is_hashfilter(adapter) && fs->cap)
@@ -869,24 +873,25 @@ int cxgbe_del_filter(struct rte_eth_dev *dev, unsigned int filter_id,
 
 	chip_ver = CHELSIO_CHIP_VERSION(adapter->params.chip);
 
-	ret = cxgbe_is_filter_set(&adapter->tids, filter_id, fs->type);
-	if (!ret) {
-		dev_warn(adap, "%s: could not find filter entry: %u\n",
-			 __func__, filter_id);
-		return -EINVAL;
-	}
-
 	/*
-	 * Ensure filter id is aligned on the 2 slot boundary for T6,
+	 * Ensure IPv6 filter id is aligned on the 2 slot boundary for T6,
 	 * and 4 slot boundary for cards below T6.
 	 */
-	if (fs->type) {
+	if (fs->type == FILTER_TYPE_IPV6) {
 		if (chip_ver < CHELSIO_T6)
 			filter_id &= ~(0x3);
 		else
 			filter_id &= ~(0x1);
 	}
 
+	nentries = cxgbe_filter_slots(adapter, fs->type);
+	ret = cxgbe_is_filter_set(&adapter->tids, filter_id, nentries);
+	if (!ret) {
+		dev_warn(adap, "%s: could not find filter entry: %u\n",
+			 __func__, filter_id);
+		return -EINVAL;
+	}
+
 	f = &adapter->tids.ftid_tab[filter_id];
 	ret = writable_filter(f);
 	if (ret)
@@ -896,8 +901,7 @@ int cxgbe_del_filter(struct rte_eth_dev *dev, unsigned int filter_id,
 		f->ctx = ctx;
 		cxgbe_clear_ftid(&adapter->tids,
 				 f->tid - adapter->tids.ftid_base,
-				 f->fs.type ? FILTER_TYPE_IPV6 :
-					      FILTER_TYPE_IPV4);
+				 nentries);
 		return del_filter_wr(dev, filter_id);
 	}
 
@@ -927,10 +931,10 @@ int cxgbe_set_filter(struct rte_eth_dev *dev, unsigned int filter_id,
 {
 	struct port_info *pi = ethdev2pinfo(dev);
 	struct adapter *adapter = pi->adapter;
-	unsigned int fidx, iq, fid_bit = 0;
+	unsigned int fidx, iq;
 	struct filter_entry *f;
 	unsigned int chip_ver;
-	uint8_t bitoff[16] = {0};
+	u8 nentries, bitoff[16] = {0};
 	int ret;
 
 	if (is_hashfilter(adapter) && fs->cap)
@@ -945,80 +949,31 @@ int cxgbe_set_filter(struct rte_eth_dev *dev, unsigned int filter_id,
 	if (ret)
 		return ret;
 
-	/*
-	 * Ensure filter id is aligned on the 4 slot boundary for IPv6
-	 * maskfull filters.
-	 */
-	if (fs->type)
-		filter_id &= ~(0x3);
-
-	ret = cxgbe_is_filter_set(&adapter->tids, filter_id, fs->type);
-	if (ret)
-		return -EBUSY;
-
-	iq = get_filter_steerq(dev, fs);
-
 	/*
 	 * IPv6 filters occupy four slots and must be aligned on four-slot
 	 * boundaries for T5. On T6, IPv6 filters occupy two-slots and
 	 * must be aligned on two-slot boundaries.
 	 *
 	 * IPv4 filters only occupy a single slot and have no alignment
-	 * requirements but writing a new IPv4 filter into the middle
-	 * of an existing IPv6 filter requires clearing the old IPv6
-	 * filter.
+	 * requirements.
 	 */
-	if (fs->type == FILTER_TYPE_IPV4) { /* IPv4 */
-		/*
-		 * For T6, If our IPv4 filter isn't being written to a
-		 * multiple of two filter index and there's an IPv6
-		 * filter at the multiple of 2 base slot, then we need
-		 * to delete that IPv6 filter ...
-		 * For adapters below T6, IPv6 filter occupies 4 entries.
-		 */
+	fidx = filter_id;
+	if (fs->type == FILTER_TYPE_IPV6) {
 		if (chip_ver < CHELSIO_T6)
-			fidx = filter_id & ~0x3;
+			fidx &= ~(0x3);
 		else
-			fidx = filter_id & ~0x1;
-
-		if (fidx != filter_id && adapter->tids.ftid_tab[fidx].fs.type) {
-			f = &adapter->tids.ftid_tab[fidx];
-			if (f->valid)
-				return -EBUSY;
-		}
-	} else { /* IPv6 */
-		unsigned int max_filter_id;
-
-		if (chip_ver < CHELSIO_T6) {
-			/*
-			 * Ensure that the IPv6 filter is aligned on a
-			 * multiple of 4 boundary.
-			 */
-			if (filter_id & 0x3)
-				return -EINVAL;
+			fidx &= ~(0x1);
+	}
 
-			max_filter_id = filter_id + 4;
-		} else {
-			/*
-			 * For T6, CLIP being enabled, IPv6 filter would occupy
-			 * 2 entries.
-			 */
-			if (filter_id & 0x1)
-				return -EINVAL;
+	if (fidx != filter_id)
+		return -EINVAL;
 
-			max_filter_id = filter_id + 2;
-		}
+	nentries = cxgbe_filter_slots(adapter, fs->type);
+	ret = cxgbe_is_filter_set(&adapter->tids, filter_id, nentries);
+	if (ret)
+		return -EBUSY;
 
-		/*
-		 * Check all except the base overlapping IPv4 filter
-		 * slots.
-		 */
-		for (fidx = filter_id + 1; fidx < max_filter_id; fidx++) {
-			f = &adapter->tids.ftid_tab[fidx];
-			if (f->valid)
-				return -EBUSY;
-		}
-	}
+	iq = get_filter_steerq(dev, fs);
 
 	/*
 	 * Check to make sure that provided filter index is not
@@ -1029,9 +984,7 @@ int cxgbe_set_filter(struct rte_eth_dev *dev, unsigned int filter_id,
 		return -EBUSY;
 
 	fidx = adapter->tids.ftid_base + filter_id;
-	fid_bit = filter_id;
-	ret = cxgbe_set_ftid(&adapter->tids, fid_bit,
-			     fs->type ? FILTER_TYPE_IPV6 : FILTER_TYPE_IPV4);
+	ret = cxgbe_set_ftid(&adapter->tids, filter_id, nentries);
 	if (ret)
 		return ret;
 
@@ -1041,9 +994,7 @@ int cxgbe_set_filter(struct rte_eth_dev *dev, unsigned int filter_id,
 	ret = writable_filter(f);
 	if (ret) {
 		/* Clear the bits we have set above */
-		cxgbe_clear_ftid(&adapter->tids, fid_bit,
-				 fs->type ? FILTER_TYPE_IPV6 :
-					    FILTER_TYPE_IPV4);
+		cxgbe_clear_ftid(&adapter->tids, filter_id, nentries);
 		return ret;
 	}
 
@@ -1074,17 +1025,13 @@ int cxgbe_set_filter(struct rte_eth_dev *dev, unsigned int filter_id,
 	f->ctx = ctx;
 	f->tid = fidx; /* Save the actual tid */
 	ret = set_filter_wr(dev, filter_id);
-	if (ret) {
-		fid_bit = f->tid - adapter->tids.ftid_base;
+	if (ret)
 		goto free_tid;
-	}
 
 	return ret;
 
 free_tid:
-	cxgbe_clear_ftid(&adapter->tids, fid_bit,
-			 fs->type ? FILTER_TYPE_IPV6 :
-				    FILTER_TYPE_IPV4);
+	cxgbe_clear_ftid(&adapter->tids, filter_id, nentries);
 	clear_filter(f);
 	return ret;
 }
diff --git a/drivers/net/cxgbe/cxgbe_filter.h b/drivers/net/cxgbe/cxgbe_filter.h
index 1964730ba..06021c854 100644
--- a/drivers/net/cxgbe/cxgbe_filter.h
+++ b/drivers/net/cxgbe/cxgbe_filter.h
@@ -248,7 +248,8 @@ cxgbe_bitmap_find_free_region(struct rte_bitmap *bmap, unsigned int size,
 	return idx;
 }
 
-bool cxgbe_is_filter_set(struct tid_info *, int fidx, int family);
+u8 cxgbe_filter_slots(struct adapter *adap, u8 family);
+bool cxgbe_is_filter_set(struct tid_info *t, u32 fidx, u8 nentries);
 void cxgbe_filter_rpl(struct adapter *adap, const struct cpl_set_tcb_rpl *rpl);
 int cxgbe_set_filter(struct rte_eth_dev *dev, unsigned int filter_id,
 		     struct ch_filter_specification *fs,
@@ -256,7 +257,7 @@ int cxgbe_set_filter(struct rte_eth_dev *dev, unsigned int filter_id,
 int cxgbe_del_filter(struct rte_eth_dev *dev, unsigned int filter_id,
 		     struct ch_filter_specification *fs,
 		     struct filter_ctx *ctx);
-int cxgbe_alloc_ftid(struct adapter *adap, unsigned int family);
+int cxgbe_alloc_ftid(struct adapter *adap, u8 nentries);
 int cxgbe_init_hash_filter(struct adapter *adap);
 void cxgbe_hash_filter_rpl(struct adapter *adap,
 			   const struct cpl_act_open_rpl *rpl);
diff --git a/drivers/net/cxgbe/cxgbe_flow.c b/drivers/net/cxgbe/cxgbe_flow.c
index 848c61f02..8a5d06ff3 100644
--- a/drivers/net/cxgbe/cxgbe_flow.c
+++ b/drivers/net/cxgbe/cxgbe_flow.c
@@ -304,12 +304,15 @@ static int cxgbe_validate_fidxondel(struct filter_entry *f, unsigned int fidx)
 {
 	struct adapter *adap = ethdev2adap(f->dev);
 	struct ch_filter_specification fs = f->fs;
+	u8 nentries;
 
 	if (fidx >= adap->tids.nftids) {
 		dev_err(adap, "invalid flow index %d.\n", fidx);
 		return -EINVAL;
 	}
-	if (!cxgbe_is_filter_set(&adap->tids, fidx, fs.type)) {
+
+	nentries = cxgbe_filter_slots(adap, fs.type);
+	if (!cxgbe_is_filter_set(&adap->tids, fidx, nentries)) {
 		dev_err(adap, "Already free fidx:%d f:%p\n", fidx, f);
 		return -EINVAL;
 	}
@@ -321,10 +324,14 @@ static int
 cxgbe_validate_fidxonadd(struct ch_filter_specification *fs,
 			 struct adapter *adap, unsigned int fidx)
 {
-	if (cxgbe_is_filter_set(&adap->tids, fidx, fs->type)) {
+	u8 nentries;
+
+	nentries = cxgbe_filter_slots(adap, fs->type);
+	if (cxgbe_is_filter_set(&adap->tids, fidx, nentries)) {
 		dev_err(adap, "filter index: %d is busy.\n", fidx);
 		return -EBUSY;
 	}
+
 	if (fidx >= adap->tids.nftids) {
 		dev_err(adap, "filter index (%u) >= max(%u)\n",
 			fidx, adap->tids.nftids);
@@ -351,9 +358,11 @@ static int cxgbe_get_fidx(struct rte_flow *flow, unsigned int *fidx)
 
 	/* For tcam get the next available slot, if default value specified */
 	if (flow->fidx == FILTER_ID_MAX) {
+		u8 nentries;
 		int idx;
 
-		idx = cxgbe_alloc_ftid(adap, fs->type);
+		nentries = cxgbe_filter_slots(adap, fs->type);
+		idx = cxgbe_alloc_ftid(adap, nentries);
 		if (idx < 0) {
 			dev_err(adap, "unable to get a filter index in tcam\n");
 			return -ENOMEM;
-- 
2.18.0
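
As a quick illustration of the new helpers, the allocation pattern used
in cxgbe_set_filter() above boils down to the following sketch (error
handling trimmed; adapter, fs and filter_id come from the surrounding
context):

    /* Sketch only: reserve the slots needed by one filter rule. */
    u8 nentries = cxgbe_filter_slots(adapter, fs->type); /* 1, 2 or 4 */

    if (cxgbe_is_filter_set(&adapter->tids, filter_id, nentries))
        return -EBUSY;  /* requested range is already in use */

    ret = cxgbe_set_ftid(&adapter->tids, filter_id, nentries);
    if (ret)
        return ret;     /* could not mark the range as allocated */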



* [dpdk-dev] [PATCH v2 04/12] net/cxgbe: fix parsing VLAN ID rewrite action
  2019-09-27 20:30 ` [dpdk-dev] [PATCH v2 " Rahul Lakkireddy
                     ` (2 preceding siblings ...)
  2019-09-27 20:30   ` [dpdk-dev] [PATCH v2 03/12] net/cxgbe: fix slot allocation for IPv6 flows Rahul Lakkireddy
@ 2019-09-27 20:30   ` Rahul Lakkireddy
  2019-09-27 20:30   ` [dpdk-dev] [PATCH v2 05/12] net/cxgbe: fix prefetch for non-coalesced Tx packets Rahul Lakkireddy
                     ` (8 subsequent siblings)
  12 siblings, 0 replies; 30+ messages in thread
From: Rahul Lakkireddy @ 2019-09-27 20:30 UTC (permalink / raw)
  To: dev; +Cc: nirranjan, stable

Set the VLAN action mode to VLAN_REWRITE only if VLAN_INSERT has not
been set yet. Otherwise, incoming VLAN packets would get their VLAN
header rewritten, instead of having a new outer VLAN header pushed.

Also fix the VLAN ID extraction logic and endianness issues.

Cc: stable@dpdk.org
Fixes: 1decc62b1cbe ("net/cxgbe: add flow operations to offload VLAN actions")

Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
---
v2:
- No changes.

 drivers/net/cxgbe/cxgbe_flow.c | 15 ++++++++++++---
 1 file changed, 12 insertions(+), 3 deletions(-)

diff --git a/drivers/net/cxgbe/cxgbe_flow.c b/drivers/net/cxgbe/cxgbe_flow.c
index 8a5d06ff3..4c8553039 100644
--- a/drivers/net/cxgbe/cxgbe_flow.c
+++ b/drivers/net/cxgbe/cxgbe_flow.c
@@ -446,18 +446,27 @@ ch_rte_parse_atype_switch(const struct rte_flow_action *a,
 	const struct rte_flow_action_set_tp *tp_port;
 	const struct rte_flow_action_phy_port *port;
 	int item_index;
+	u16 tmp_vlan;
 
 	switch (a->type) {
 	case RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_VID:
 		vlanid = (const struct rte_flow_action_of_set_vlan_vid *)
 			  a->conf;
-		fs->newvlan = VLAN_REWRITE;
-		fs->vlan = vlanid->vlan_vid;
+		/* If explicitly asked to push a new VLAN header,
+		 * then don't set rewrite mode. Otherwise, the
+		 * incoming VLAN packets will get their VLAN fields
+		 * rewritten, instead of adding an additional outer
+		 * VLAN header.
+		 */
+		if (fs->newvlan != VLAN_INSERT)
+			fs->newvlan = VLAN_REWRITE;
+		tmp_vlan = fs->vlan & 0xe000;
+		fs->vlan = (be16_to_cpu(vlanid->vlan_vid) & 0xfff) | tmp_vlan;
 		break;
 	case RTE_FLOW_ACTION_TYPE_OF_PUSH_VLAN:
 		pushvlan = (const struct rte_flow_action_of_push_vlan *)
 			    a->conf;
-		if (pushvlan->ethertype != RTE_ETHER_TYPE_VLAN)
+		if (be16_to_cpu(pushvlan->ethertype) != RTE_ETHER_TYPE_VLAN)
 			return rte_flow_error_set(e, EINVAL,
 						  RTE_FLOW_ERROR_TYPE_ACTION, a,
 						  "only ethertype 0x8100 "
-- 
2.18.0
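
For reference, the application-side action list that exercises this
path looks roughly like the sketch below (VLAN ID 100 is just an
example value; needs <rte_flow.h> and <rte_byteorder.h>). rte_flow
carries vlan_vid in network byte order, which is why the driver now
converts it with be16_to_cpu() before merging it into fs->vlan:

    /* Example only: rewrite the VLAN ID of matched packets to 100. */
    struct rte_flow_action_of_set_vlan_vid vid_conf = {
        .vlan_vid = RTE_BE16(100),
    };
    struct rte_flow_action actions[] = {
        {
            .type = RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_VID,
            .conf = &vid_conf,
        },
        { .type = RTE_FLOW_ACTION_TYPE_END },
    };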



* [dpdk-dev] [PATCH v2 05/12] net/cxgbe: fix prefetch for non-coalesced Tx packets
  2019-09-27 20:30 ` [dpdk-dev] [PATCH v2 " Rahul Lakkireddy
                     ` (3 preceding siblings ...)
  2019-09-27 20:30   ` [dpdk-dev] [PATCH v2 04/12] net/cxgbe: fix parsing VLAN ID rewrite action Rahul Lakkireddy
@ 2019-09-27 20:30   ` Rahul Lakkireddy
  2019-09-27 20:30   ` [dpdk-dev] [PATCH v2 06/12] net/cxgbe: avoid polling link status before device start Rahul Lakkireddy
                     ` (7 subsequent siblings)
  12 siblings, 0 replies; 30+ messages in thread
From: Rahul Lakkireddy @ 2019-09-27 20:30 UTC (permalink / raw)
  To: dev; +Cc: nirranjan, stable

Move the prefetch code out of the Tx coalesce path, so that
non-coalesced Tx packets are prefetched as well.

Cc: stable@dpdk.org
Fixes: bf89cbedd2d9 ("cxgbe: optimize forwarding performance for 40G")

Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
---
v2:
- No changes.

 drivers/net/cxgbe/cxgbe_ethdev.c | 9 +++++++--
 drivers/net/cxgbe/sge.c          | 1 -
 2 files changed, 7 insertions(+), 3 deletions(-)

diff --git a/drivers/net/cxgbe/cxgbe_ethdev.c b/drivers/net/cxgbe/cxgbe_ethdev.c
index 7d7be69ed..5d74f8ba3 100644
--- a/drivers/net/cxgbe/cxgbe_ethdev.c
+++ b/drivers/net/cxgbe/cxgbe_ethdev.c
@@ -67,6 +67,7 @@ uint16_t cxgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 	struct sge_eth_txq *txq = (struct sge_eth_txq *)tx_queue;
 	uint16_t pkts_sent, pkts_remain;
 	uint16_t total_sent = 0;
+	uint16_t idx = 0;
 	int ret = 0;
 
 	CXGBE_DEBUG_TX(adapter, "%s: txq = %p; tx_pkts = %p; nb_pkts = %d\n",
@@ -75,12 +76,16 @@ uint16_t cxgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 	t4_os_lock(&txq->txq_lock);
 	/* free up desc from already completed tx */
 	reclaim_completed_tx(&txq->q);
+	rte_prefetch0(rte_pktmbuf_mtod(tx_pkts[0], volatile void *));
 	while (total_sent < nb_pkts) {
 		pkts_remain = nb_pkts - total_sent;
 
 		for (pkts_sent = 0; pkts_sent < pkts_remain; pkts_sent++) {
-			ret = t4_eth_xmit(txq, tx_pkts[total_sent + pkts_sent],
-					  nb_pkts);
+			idx = total_sent + pkts_sent;
+			if ((idx + 1) < nb_pkts)
+				rte_prefetch0(rte_pktmbuf_mtod(tx_pkts[idx + 1],
+							volatile void *));
+			ret = t4_eth_xmit(txq, tx_pkts[idx], nb_pkts);
 			if (ret < 0)
 				break;
 		}
diff --git a/drivers/net/cxgbe/sge.c b/drivers/net/cxgbe/sge.c
index 641be9657..bf3190211 100644
--- a/drivers/net/cxgbe/sge.c
+++ b/drivers/net/cxgbe/sge.c
@@ -1154,7 +1154,6 @@ int t4_eth_xmit(struct sge_eth_txq *txq, struct rte_mbuf *mbuf,
 				txq->stats.mapping_err++;
 				goto out_free;
 			}
-			rte_prefetch0((volatile void *)addr);
 			return tx_do_packet_coalesce(txq, mbuf, cflits, adap,
 						     pi, addr, nb_pkts);
 		} else {
-- 
2.18.0
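
The resulting burst loop follows the usual prefetch-one-ahead pattern;
a stripped-down sketch (locking and descriptor reclaim omitted; txq,
tx_pkts and nb_pkts as in cxgbe_xmit_pkts() above):

    /* Prefetch the next mbuf's data while sending the current one. */
    rte_prefetch0(rte_pktmbuf_mtod(tx_pkts[0], volatile void *));
    for (idx = 0; idx < nb_pkts; idx++) {
        if (idx + 1 < nb_pkts)
            rte_prefetch0(rte_pktmbuf_mtod(tx_pkts[idx + 1],
                                           volatile void *));
        if (t4_eth_xmit(txq, tx_pkts[idx], nb_pkts) < 0)
            break;
    }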



* [dpdk-dev] [PATCH v2 06/12] net/cxgbe: avoid polling link status before device start
  2019-09-27 20:30 ` [dpdk-dev] [PATCH v2 " Rahul Lakkireddy
                     ` (4 preceding siblings ...)
  2019-09-27 20:30   ` [dpdk-dev] [PATCH v2 05/12] net/cxgbe: fix prefetch for non-coalesced Tx packets Rahul Lakkireddy
@ 2019-09-27 20:30   ` Rahul Lakkireddy
  2019-09-27 20:30   ` [dpdk-dev] [PATCH v2 07/12] net/cxgbe: use dynamic logging for debug prints Rahul Lakkireddy
                     ` (6 subsequent siblings)
  12 siblings, 0 replies; 30+ messages in thread
From: Rahul Lakkireddy @ 2019-09-27 20:30 UTC (permalink / raw)
  To: dev; +Cc: nirranjan, stable

Link updates arrive on the firmware event queue, which is only created
when the device starts. So, don't poll for link status if the firmware
event queue has not been created yet.

This fixes a NULL dereference when accessing the non-existent firmware
event queue.

Cc: stable@dpdk.org
Fixes: 265af08e75ba ("net/cxgbe: add link up and down ops")

Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
---
v2:
- No changes.

 drivers/net/cxgbe/cxgbe_ethdev.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/drivers/net/cxgbe/cxgbe_ethdev.c b/drivers/net/cxgbe/cxgbe_ethdev.c
index 5d74f8ba3..5df8d746c 100644
--- a/drivers/net/cxgbe/cxgbe_ethdev.c
+++ b/drivers/net/cxgbe/cxgbe_ethdev.c
@@ -206,6 +206,9 @@ int cxgbe_dev_link_update(struct rte_eth_dev *eth_dev,
 	u8 old_link = pi->link_cfg.link_ok;
 
 	for (i = 0; i < CXGBE_LINK_STATUS_POLL_CNT; i++) {
+		if (!s->fw_evtq.desc)
+			break;
+
 		cxgbe_poll(&s->fw_evtq, NULL, budget, &work_done);
 
 		/* Exit if link status changed or always forced up */
@@ -239,6 +242,9 @@ int cxgbe_dev_set_link_up(struct rte_eth_dev *dev)
 	struct sge *s = &adapter->sge;
 	int ret;
 
+	if (!s->fw_evtq.desc)
+		return -ENOMEM;
+
 	/* Flush all link events */
 	cxgbe_poll(&s->fw_evtq, NULL, budget, &work_done);
 
@@ -265,6 +271,9 @@ int cxgbe_dev_set_link_down(struct rte_eth_dev *dev)
 	struct sge *s = &adapter->sge;
 	int ret;
 
+	if (!s->fw_evtq.desc)
+		return -ENOMEM;
+
 	/* Flush all link events */
 	cxgbe_poll(&s->fw_evtq, NULL, budget, &work_done);
 
-- 
2.18.0



* [dpdk-dev] [PATCH v2 07/12] net/cxgbe: use dynamic logging for debug prints
  2019-09-27 20:30 ` [dpdk-dev] [PATCH v2 " Rahul Lakkireddy
                     ` (5 preceding siblings ...)
  2019-09-27 20:30   ` [dpdk-dev] [PATCH v2 06/12] net/cxgbe: avoid polling link status before device start Rahul Lakkireddy
@ 2019-09-27 20:30   ` Rahul Lakkireddy
  2019-09-27 20:30   ` [dpdk-dev] [PATCH v2 08/12] net/cxgbe: separate VF only devargs Rahul Lakkireddy
                     ` (5 subsequent siblings)
  12 siblings, 0 replies; 30+ messages in thread
From: Rahul Lakkireddy @ 2019-09-27 20:30 UTC (permalink / raw)
  To: dev; +Cc: nirranjan

Remove compile-time flags and use dynamic logging for debug prints.
Also remove the rarely used debug logs in the register access and
datapath code.

Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
---
v2:
- Remove rarely used CXGBE_DEBUG_REG, CXGBE_DEBUG_TX, and
  CXGBE_DEBUG_RX debug logs.

 config/common_base               |  5 ---
 doc/guides/nics/cxgbe.rst        | 20 ------------
 drivers/net/cxgbe/base/adapter.h | 19 ++---------
 drivers/net/cxgbe/cxgbe_compat.h | 55 ++++++++------------------------
 drivers/net/cxgbe/cxgbe_ethdev.c | 11 +++----
 5 files changed, 19 insertions(+), 91 deletions(-)

diff --git a/config/common_base b/config/common_base
index 71a2c33d6..eb4d86065 100644
--- a/config/common_base
+++ b/config/common_base
@@ -217,11 +217,6 @@ CONFIG_RTE_LIBRTE_BNXT_PMD=y
 # Compile burst-oriented Chelsio Terminator (CXGBE) PMD
 #
 CONFIG_RTE_LIBRTE_CXGBE_PMD=y
-CONFIG_RTE_LIBRTE_CXGBE_DEBUG=n
-CONFIG_RTE_LIBRTE_CXGBE_DEBUG_REG=n
-CONFIG_RTE_LIBRTE_CXGBE_DEBUG_MBOX=n
-CONFIG_RTE_LIBRTE_CXGBE_DEBUG_TX=n
-CONFIG_RTE_LIBRTE_CXGBE_DEBUG_RX=n
 CONFIG_RTE_LIBRTE_CXGBE_TPUT=y
 
 # NXP DPAA Bus
diff --git a/doc/guides/nics/cxgbe.rst b/doc/guides/nics/cxgbe.rst
index 7a893cc1d..fc74b571c 100644
--- a/doc/guides/nics/cxgbe.rst
+++ b/doc/guides/nics/cxgbe.rst
@@ -104,26 +104,6 @@ enabling debugging options may affect system performance.
 
      This controls compilation of both CXGBE and CXGBEVF PMD.
 
-- ``CONFIG_RTE_LIBRTE_CXGBE_DEBUG`` (default **n**)
-
-  Toggle display of generic debugging messages.
-
-- ``CONFIG_RTE_LIBRTE_CXGBE_DEBUG_REG`` (default **n**)
-
-  Toggle display of registers related run-time check messages.
-
-- ``CONFIG_RTE_LIBRTE_CXGBE_DEBUG_MBOX`` (default **n**)
-
-  Toggle display of firmware mailbox related run-time check messages.
-
-- ``CONFIG_RTE_LIBRTE_CXGBE_DEBUG_TX`` (default **n**)
-
-  Toggle display of transmission data path run-time check messages.
-
-- ``CONFIG_RTE_LIBRTE_CXGBE_DEBUG_RX`` (default **n**)
-
-  Toggle display of receiving data path run-time check messages.
-
 - ``CONFIG_RTE_LIBRTE_CXGBE_TPUT`` (default **y**)
 
   Toggle behavior to prefer Throughput or Latency.
diff --git a/drivers/net/cxgbe/base/adapter.h b/drivers/net/cxgbe/base/adapter.h
index e548f9f63..2dfdb2df8 100644
--- a/drivers/net/cxgbe/base/adapter.h
+++ b/drivers/net/cxgbe/base/adapter.h
@@ -450,11 +450,7 @@ static inline uint64_t cxgbe_write_addr64(volatile void *addr, uint64_t val)
  */
 static inline u32 t4_read_reg(struct adapter *adapter, u32 reg_addr)
 {
-	u32 val = CXGBE_READ_REG(adapter, reg_addr);
-
-	CXGBE_DEBUG_REG(adapter, "read register 0x%x value 0x%x\n", reg_addr,
-			val);
-	return val;
+	return CXGBE_READ_REG(adapter, reg_addr);
 }
 
 /**
@@ -467,8 +463,6 @@ static inline u32 t4_read_reg(struct adapter *adapter, u32 reg_addr)
  */
 static inline void t4_write_reg(struct adapter *adapter, u32 reg_addr, u32 val)
 {
-	CXGBE_DEBUG_REG(adapter, "setting register 0x%x to 0x%x\n", reg_addr,
-			val);
 	CXGBE_WRITE_REG(adapter, reg_addr, val);
 }
 
@@ -483,8 +477,6 @@ static inline void t4_write_reg(struct adapter *adapter, u32 reg_addr, u32 val)
 static inline void t4_write_reg_relaxed(struct adapter *adapter, u32 reg_addr,
 					u32 val)
 {
-	CXGBE_DEBUG_REG(adapter, "setting register 0x%x to 0x%x\n", reg_addr,
-			val);
 	CXGBE_WRITE_REG_RELAXED(adapter, reg_addr, val);
 }
 
@@ -497,11 +489,7 @@ static inline void t4_write_reg_relaxed(struct adapter *adapter, u32 reg_addr,
  */
 static inline u64 t4_read_reg64(struct adapter *adapter, u32 reg_addr)
 {
-	u64 val = CXGBE_READ_REG64(adapter, reg_addr);
-
-	CXGBE_DEBUG_REG(adapter, "64-bit read register %#x value %#llx\n",
-			reg_addr, (unsigned long long)val);
-	return val;
+	return CXGBE_READ_REG64(adapter, reg_addr);
 }
 
 /**
@@ -515,9 +503,6 @@ static inline u64 t4_read_reg64(struct adapter *adapter, u32 reg_addr)
 static inline void t4_write_reg64(struct adapter *adapter, u32 reg_addr,
 				  u64 val)
 {
-	CXGBE_DEBUG_REG(adapter, "setting register %#x to %#llx\n", reg_addr,
-			(unsigned long long)val);
-
 	CXGBE_WRITE_REG64(adapter, reg_addr, val);
 }
 
diff --git a/drivers/net/cxgbe/cxgbe_compat.h b/drivers/net/cxgbe/cxgbe_compat.h
index 93df0a775..20e4f8af2 100644
--- a/drivers/net/cxgbe/cxgbe_compat.h
+++ b/drivers/net/cxgbe/cxgbe_compat.h
@@ -21,55 +21,26 @@
 #include <rte_net.h>
 
 extern int cxgbe_logtype;
+extern int cxgbe_mbox_logtype;
 
-#define dev_printf(level, fmt, ...) \
-	rte_log(RTE_LOG_ ## level, cxgbe_logtype, \
+#define dev_printf(level, logtype, fmt, ...) \
+	rte_log(RTE_LOG_ ## level, logtype, \
 		"rte_cxgbe_pmd: " fmt, ##__VA_ARGS__)
 
-#define dev_err(x, fmt, ...) dev_printf(ERR, fmt, ##__VA_ARGS__)
-#define dev_info(x, fmt, ...) dev_printf(INFO, fmt, ##__VA_ARGS__)
-#define dev_warn(x, fmt, ...) dev_printf(WARNING, fmt, ##__VA_ARGS__)
+#define dev_err(x, fmt, ...) \
+	dev_printf(ERR, cxgbe_logtype, fmt, ##__VA_ARGS__)
+#define dev_info(x, fmt, ...) \
+	dev_printf(INFO, cxgbe_logtype, fmt, ##__VA_ARGS__)
+#define dev_warn(x, fmt, ...) \
+	dev_printf(WARNING, cxgbe_logtype, fmt, ##__VA_ARGS__)
+#define dev_debug(x, fmt, ...) \
+	dev_printf(DEBUG, cxgbe_logtype, fmt, ##__VA_ARGS__)
 
-#ifdef RTE_LIBRTE_CXGBE_DEBUG
-#define dev_debug(x, fmt, ...) dev_printf(INFO, fmt, ##__VA_ARGS__)
-#else
-#define dev_debug(x, fmt, ...) do { } while (0)
-#endif
-
-#ifdef RTE_LIBRTE_CXGBE_DEBUG_REG
-#define CXGBE_DEBUG_REG(x, fmt, ...) \
-	dev_printf(INFO, "REG:" fmt, ##__VA_ARGS__)
-#else
-#define CXGBE_DEBUG_REG(x, fmt, ...) do { } while (0)
-#endif
-
-#ifdef RTE_LIBRTE_CXGBE_DEBUG_MBOX
 #define CXGBE_DEBUG_MBOX(x, fmt, ...) \
-	dev_printf(INFO, "MBOX:" fmt, ##__VA_ARGS__)
-#else
-#define CXGBE_DEBUG_MBOX(x, fmt, ...) do { } while (0)
-#endif
+	dev_printf(DEBUG, cxgbe_mbox_logtype, "MBOX:" fmt, ##__VA_ARGS__)
 
-#ifdef RTE_LIBRTE_CXGBE_DEBUG_TX
-#define CXGBE_DEBUG_TX(x, fmt, ...) \
-	dev_printf(INFO, "TX:" fmt, ##__VA_ARGS__)
-#else
-#define CXGBE_DEBUG_TX(x, fmt, ...) do { } while (0)
-#endif
-
-#ifdef RTE_LIBRTE_CXGBE_DEBUG_RX
-#define CXGBE_DEBUG_RX(x, fmt, ...) \
-	dev_printf(INFO, "RX:" fmt, ##__VA_ARGS__)
-#else
-#define CXGBE_DEBUG_RX(x, fmt, ...) do { } while (0)
-#endif
-
-#ifdef RTE_LIBRTE_CXGBE_DEBUG
 #define CXGBE_FUNC_TRACE() \
-	dev_printf(DEBUG, "CXGBE trace: %s\n", __func__)
-#else
-#define CXGBE_FUNC_TRACE() do { } while (0)
-#endif
+	dev_printf(DEBUG, cxgbe_logtype, "CXGBE trace: %s\n", __func__)
 
 #define pr_err(fmt, ...) dev_err(0, fmt, ##__VA_ARGS__)
 #define pr_warn(fmt, ...) dev_warn(0, fmt, ##__VA_ARGS__)
diff --git a/drivers/net/cxgbe/cxgbe_ethdev.c b/drivers/net/cxgbe/cxgbe_ethdev.c
index 5df8d746c..13df45b2d 100644
--- a/drivers/net/cxgbe/cxgbe_ethdev.c
+++ b/drivers/net/cxgbe/cxgbe_ethdev.c
@@ -39,6 +39,7 @@
 #include "cxgbe_flow.h"
 
 int cxgbe_logtype;
+int cxgbe_mbox_logtype;
 
 /*
  * Macros needed to support the PCI Device ID Table ...
@@ -70,9 +71,6 @@ uint16_t cxgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 	uint16_t idx = 0;
 	int ret = 0;
 
-	CXGBE_DEBUG_TX(adapter, "%s: txq = %p; tx_pkts = %p; nb_pkts = %d\n",
-		       __func__, txq, tx_pkts, nb_pkts);
-
 	t4_os_lock(&txq->txq_lock);
 	/* free up desc from already completed tx */
 	reclaim_completed_tx(&txq->q);
@@ -106,13 +104,9 @@ uint16_t cxgbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 	struct sge_eth_rxq *rxq = (struct sge_eth_rxq *)rx_queue;
 	unsigned int work_done;
 
-	CXGBE_DEBUG_RX(adapter, "%s: rxq->rspq.cntxt_id = %u; nb_pkts = %d\n",
-		       __func__, rxq->rspq.cntxt_id, nb_pkts);
-
 	if (cxgbe_poll(&rxq->rspq, rx_pkts, (unsigned int)nb_pkts, &work_done))
 		dev_err(adapter, "error in cxgbe poll\n");
 
-	CXGBE_DEBUG_RX(adapter, "%s: work_done = %u\n", __func__, work_done);
 	return work_done;
 }
 
@@ -1251,4 +1245,7 @@ RTE_INIT(cxgbe_init_log)
 	cxgbe_logtype = rte_log_register("pmd.net.cxgbe");
 	if (cxgbe_logtype >= 0)
 		rte_log_set_level(cxgbe_logtype, RTE_LOG_NOTICE);
+	cxgbe_mbox_logtype = rte_log_register("pmd.net.cxgbe.mbox");
+	if (cxgbe_mbox_logtype >= 0)
+		rte_log_set_level(cxgbe_mbox_logtype, RTE_LOG_NOTICE);
 }
-- 
2.18.0
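
With the compile-time flags gone, the new log types can be raised at
run time through the standard EAL --log-level option; for example,
something like the following (illustrative invocation, reusing the
testpmd style from the CXGBE doc) should turn on the mailbox debug
messages:

    testpmd -w 02:00.4 --log-level=pmd.net.cxgbe.mbox:debug -- -i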



* [dpdk-dev] [PATCH v2 08/12] net/cxgbe: separate VF only devargs
  2019-09-27 20:30 ` [dpdk-dev] [PATCH v2 " Rahul Lakkireddy
                     ` (6 preceding siblings ...)
  2019-09-27 20:30   ` [dpdk-dev] [PATCH v2 07/12] net/cxgbe: use dynamic logging for debug prints Rahul Lakkireddy
@ 2019-09-27 20:30   ` Rahul Lakkireddy
  2019-09-27 20:30   ` [dpdk-dev] [PATCH v2 09/12] net/cxgbe: add devarg to control Tx coalescing Rahul Lakkireddy
                     ` (4 subsequent siblings)
  12 siblings, 0 replies; 30+ messages in thread
From: Rahul Lakkireddy @ 2019-09-27 20:30 UTC (permalink / raw)
  To: dev; +Cc: nirranjan

Rework the devargs parsing logic to separate out the VF-only devargs.

Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
---
v2:
- In the cxgbe.rst doc, use ^ (instead of -) to mark the common and
  VF-only devargs as subsections of Runtime Options.

 doc/guides/nics/cxgbe.rst          |  6 +++
 drivers/net/cxgbe/base/adapter.h   |  7 +++
 drivers/net/cxgbe/cxgbe.h          |  9 ++--
 drivers/net/cxgbe/cxgbe_ethdev.c   |  5 +-
 drivers/net/cxgbe/cxgbe_main.c     | 75 +++++++++++++++++++-----------
 drivers/net/cxgbe/cxgbevf_ethdev.c |  6 +++
 6 files changed, 76 insertions(+), 32 deletions(-)

diff --git a/doc/guides/nics/cxgbe.rst b/doc/guides/nics/cxgbe.rst
index fc74b571c..f94b8371e 100644
--- a/doc/guides/nics/cxgbe.rst
+++ b/doc/guides/nics/cxgbe.rst
@@ -118,12 +118,18 @@ be passed as part of EAL arguments. For example,
 
    testpmd -w 02:00.4,keep_ovlan=1 -- -i
 
+Common Runtime Options
+^^^^^^^^^^^^^^^^^^^^^^
+
 - ``keep_ovlan`` (default **0**)
 
   Toggle behavior to keep/strip outer VLAN in Q-in-Q packets. If
   enabled, the outer VLAN tag is preserved in Q-in-Q packets. Otherwise,
   the outer VLAN tag is stripped in Q-in-Q packets.
 
+CXGBE VF Only Runtime Options
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
 - ``force_link_up`` (default **0**)
 
   When set to 1, CXGBEVF PMD always forces link as up for all VFs on
diff --git a/drivers/net/cxgbe/base/adapter.h b/drivers/net/cxgbe/base/adapter.h
index 2dfdb2df8..fcc84e4e9 100644
--- a/drivers/net/cxgbe/base/adapter.h
+++ b/drivers/net/cxgbe/base/adapter.h
@@ -299,6 +299,11 @@ struct mbox_entry {
 
 TAILQ_HEAD(mbox_list, mbox_entry);
 
+struct adapter_devargs {
+	bool keep_ovlan;
+	bool force_link_up;
+};
+
 struct adapter {
 	struct rte_pci_device *pdev;       /* associated rte pci device */
 	struct rte_eth_dev *eth_dev;       /* first port's rte eth device */
@@ -331,6 +336,8 @@ struct adapter {
 	struct mpstcam_table *mpstcam;
 
 	struct tid_info tids;     /* Info used to access TID related tables */
+
+	struct adapter_devargs devargs;
 };
 
 /**
diff --git a/drivers/net/cxgbe/cxgbe.h b/drivers/net/cxgbe/cxgbe.h
index 3f97fa58b..3a50502b7 100644
--- a/drivers/net/cxgbe/cxgbe.h
+++ b/drivers/net/cxgbe/cxgbe.h
@@ -50,8 +50,11 @@
 			   DEV_RX_OFFLOAD_SCATTER)
 
 
-#define CXGBE_DEVARG_KEEP_OVLAN "keep_ovlan"
-#define CXGBE_DEVARG_FORCE_LINK_UP "force_link_up"
+/* Common PF and VF devargs */
+#define CXGBE_DEVARG_CMN_KEEP_OVLAN "keep_ovlan"
+
+/* VF only devargs */
+#define CXGBE_DEVARG_VF_FORCE_LINK_UP "force_link_up"
 
 bool cxgbe_force_linkup(struct adapter *adap);
 int cxgbe_probe(struct adapter *adapter);
@@ -76,7 +79,7 @@ int cxgbe_setup_rss(struct port_info *pi);
 void cxgbe_enable_rx_queues(struct port_info *pi);
 void cxgbe_print_port_info(struct adapter *adap);
 void cxgbe_print_adapter_info(struct adapter *adap);
-int cxgbe_get_devargs(struct rte_devargs *devargs, const char *key);
+void cxgbe_process_devargs(struct adapter *adap);
 void cxgbe_configure_max_ethqsets(struct adapter *adapter);
 
 #endif /* _CXGBE_H_ */
diff --git a/drivers/net/cxgbe/cxgbe_ethdev.c b/drivers/net/cxgbe/cxgbe_ethdev.c
index 13df45b2d..2a2875f89 100644
--- a/drivers/net/cxgbe/cxgbe_ethdev.c
+++ b/drivers/net/cxgbe/cxgbe_ethdev.c
@@ -1190,6 +1190,8 @@ static int eth_cxgbe_dev_init(struct rte_eth_dev *eth_dev)
 	adapter->eth_dev = eth_dev;
 	pi->adapter = adapter;
 
+	cxgbe_process_devargs(adapter);
+
 	err = cxgbe_probe(adapter);
 	if (err) {
 		dev_err(adapter, "%s: cxgbe probe failed with err %d\n",
@@ -1237,8 +1239,7 @@ RTE_PMD_REGISTER_PCI(net_cxgbe, rte_cxgbe_pmd);
 RTE_PMD_REGISTER_PCI_TABLE(net_cxgbe, cxgb4_pci_tbl);
 RTE_PMD_REGISTER_KMOD_DEP(net_cxgbe, "* igb_uio | uio_pci_generic | vfio-pci");
 RTE_PMD_REGISTER_PARAM_STRING(net_cxgbe,
-			      CXGBE_DEVARG_KEEP_OVLAN "=<0|1> "
-			      CXGBE_DEVARG_FORCE_LINK_UP "=<0|1> ");
+			      CXGBE_DEVARG_CMN_KEEP_OVLAN "=<0|1> ");
 
 RTE_INIT(cxgbe_init_log)
 {
diff --git a/drivers/net/cxgbe/cxgbe_main.c b/drivers/net/cxgbe/cxgbe_main.c
index c3e6b9557..6a6137f06 100644
--- a/drivers/net/cxgbe/cxgbe_main.c
+++ b/drivers/net/cxgbe/cxgbe_main.c
@@ -669,19 +669,25 @@ void cxgbe_print_port_info(struct adapter *adap)
 	}
 }
 
-static int
-check_devargs_handler(__rte_unused const char *key, const char *value,
-		      __rte_unused void *opaque)
+static int check_devargs_handler(const char *key, const char *value, void *p)
 {
-	if (strcmp(value, "1"))
-		return -1;
+	if (!strncmp(key, CXGBE_DEVARG_CMN_KEEP_OVLAN, strlen(key)) ||
+	    !strncmp(key, CXGBE_DEVARG_VF_FORCE_LINK_UP, strlen(key))) {
+		if (!strncmp(value, "1", 1)) {
+			bool *dst_val = (bool *)p;
+
+			*dst_val = true;
+		}
+	}
 
 	return 0;
 }
 
-int cxgbe_get_devargs(struct rte_devargs *devargs, const char *key)
+static int cxgbe_get_devargs(struct rte_devargs *devargs, const char *key,
+			     void *p)
 {
 	struct rte_kvargs *kvlist;
+	int ret = 0;
 
 	if (!devargs)
 		return 0;
@@ -690,24 +696,44 @@ int cxgbe_get_devargs(struct rte_devargs *devargs, const char *key)
 	if (!kvlist)
 		return 0;
 
-	if (!rte_kvargs_count(kvlist, key)) {
-		rte_kvargs_free(kvlist);
-		return 0;
-	}
+	if (!rte_kvargs_count(kvlist, key))
+		goto out;
 
-	if (rte_kvargs_process(kvlist, key,
-			       check_devargs_handler, NULL) < 0) {
-		rte_kvargs_free(kvlist);
-		return 0;
-	}
+	ret = rte_kvargs_process(kvlist, key, check_devargs_handler, p);
+
+out:
 	rte_kvargs_free(kvlist);
 
-	return 1;
+	return ret;
+}
+
+static void cxgbe_get_devargs_int(struct adapter *adap, int *dst,
+				  const char *key, int default_value)
+{
+	struct rte_pci_device *pdev = adap->pdev;
+	int ret, devarg_value = default_value;
+
+	*dst = default_value;
+	if (!pdev)
+		return;
+
+	ret = cxgbe_get_devargs(pdev->device.devargs, key, &devarg_value);
+	if (ret)
+		return;
+
+	*dst = devarg_value;
+}
+
+void cxgbe_process_devargs(struct adapter *adap)
+{
+	cxgbe_get_devargs_int(adap, &adap->devargs.keep_ovlan,
+			      CXGBE_DEVARG_CMN_KEEP_OVLAN, 0);
+	cxgbe_get_devargs_int(adap, &adap->devargs.force_link_up,
+			      CXGBE_DEVARG_VF_FORCE_LINK_UP, 0);
 }
 
 static void configure_vlan_types(struct adapter *adapter)
 {
-	struct rte_pci_device *pdev = adapter->pdev;
 	int i;
 
 	for_each_port(adapter, i) {
@@ -742,9 +768,8 @@ static void configure_vlan_types(struct adapter *adapter)
 				 F_OVLAN_EN2 | F_IVLAN_EN);
 	}
 
-	if (cxgbe_get_devargs(pdev->device.devargs, CXGBE_DEVARG_KEEP_OVLAN))
-		t4_tp_wr_bits_indirect(adapter, A_TP_INGRESS_CONFIG,
-				       V_RM_OVLAN(1), V_RM_OVLAN(0));
+	t4_tp_wr_bits_indirect(adapter, A_TP_INGRESS_CONFIG, V_RM_OVLAN(1),
+			       V_RM_OVLAN(!adapter->devargs.keep_ovlan));
 }
 
 static void configure_pcie_ext_tag(struct adapter *adapter)
@@ -1323,14 +1348,10 @@ void t4_os_portmod_changed(const struct adapter *adap, int port_id)
 
 bool cxgbe_force_linkup(struct adapter *adap)
 {
-	struct rte_pci_device *pdev = adap->pdev;
-
 	if (is_pf4(adap))
-		return false;	/* force_linkup not required for pf driver*/
-	if (!cxgbe_get_devargs(pdev->device.devargs,
-			       CXGBE_DEVARG_FORCE_LINK_UP))
-		return false;
-	return true;
+		return false;	/* force_linkup not required for pf driver */
+
+	return adap->devargs.force_link_up;
 }
 
 /**
diff --git a/drivers/net/cxgbe/cxgbevf_ethdev.c b/drivers/net/cxgbe/cxgbevf_ethdev.c
index 60e96aa4e..cc0938b43 100644
--- a/drivers/net/cxgbe/cxgbevf_ethdev.c
+++ b/drivers/net/cxgbe/cxgbevf_ethdev.c
@@ -162,6 +162,9 @@ static int eth_cxgbevf_dev_init(struct rte_eth_dev *eth_dev)
 	adapter->pdev = pci_dev;
 	adapter->eth_dev = eth_dev;
 	pi->adapter = adapter;
+
+	cxgbe_process_devargs(adapter);
+
 	err = cxgbevf_probe(adapter);
 	if (err) {
 		dev_err(adapter, "%s: cxgbevf probe failed with err %d\n",
@@ -208,3 +211,6 @@ static struct rte_pci_driver rte_cxgbevf_pmd = {
 RTE_PMD_REGISTER_PCI(net_cxgbevf, rte_cxgbevf_pmd);
 RTE_PMD_REGISTER_PCI_TABLE(net_cxgbevf, cxgb4vf_pci_tbl);
 RTE_PMD_REGISTER_KMOD_DEP(net_cxgbevf, "* igb_uio | vfio-pci");
+RTE_PMD_REGISTER_PARAM_STRING(net_cxgbevf,
+			      CXGBE_DEVARG_CMN_KEEP_OVLAN "=<0|1> "
+			      CXGBE_DEVARG_VF_FORCE_LINK_UP "=<0|1> ");
-- 
2.18.0
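
Usage is unchanged; a VF can still be launched with both common and
VF-only devargs, e.g. (illustrative command with a hypothetical VF PCI
address):

    testpmd -w 02:01.0,keep_ovlan=1,force_link_up=1 -- -i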



* [dpdk-dev] [PATCH v2 09/12] net/cxgbe: add devarg to control Tx coalescing
  2019-09-27 20:30 ` [dpdk-dev] [PATCH v2 " Rahul Lakkireddy
                     ` (7 preceding siblings ...)
  2019-09-27 20:30   ` [dpdk-dev] [PATCH v2 08/12] net/cxgbe: separate VF only devargs Rahul Lakkireddy
@ 2019-09-27 20:30   ` Rahul Lakkireddy
  2019-09-27 20:30   ` [dpdk-dev] [PATCH v2 10/12] net/cxgbe: fetch max Tx coalesce limit from firmware Rahul Lakkireddy
                     ` (3 subsequent siblings)
  12 siblings, 0 replies; 30+ messages in thread
From: Rahul Lakkireddy @ 2019-09-27 20:30 UTC (permalink / raw)
  To: dev; +Cc: nirranjan

Remove the compile-time option that selects between the Tx coalescing
latency and throughput behavior. Add a tx_mode_latency devarg instead,
to control the Tx coalescing behavior dynamically.

Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
---
v2:
- No changes.

 config/common_base                 |  1 -
 doc/guides/nics/cxgbe.rst          | 13 +++++++++----
 drivers/net/cxgbe/base/adapter.h   |  1 +
 drivers/net/cxgbe/cxgbe.h          |  1 +
 drivers/net/cxgbe/cxgbe_ethdev.c   |  3 ++-
 drivers/net/cxgbe/cxgbe_main.c     |  3 +++
 drivers/net/cxgbe/cxgbevf_ethdev.c |  1 +
 drivers/net/cxgbe/sge.c            | 18 ++++++++----------
 8 files changed, 25 insertions(+), 16 deletions(-)

diff --git a/config/common_base b/config/common_base
index eb4d86065..6a7e00d75 100644
--- a/config/common_base
+++ b/config/common_base
@@ -217,7 +217,6 @@ CONFIG_RTE_LIBRTE_BNXT_PMD=y
 # Compile burst-oriented Chelsio Terminator (CXGBE) PMD
 #
 CONFIG_RTE_LIBRTE_CXGBE_PMD=y
-CONFIG_RTE_LIBRTE_CXGBE_TPUT=y
 
 # NXP DPAA Bus
 CONFIG_RTE_LIBRTE_DPAA_BUS=n
diff --git a/doc/guides/nics/cxgbe.rst b/doc/guides/nics/cxgbe.rst
index f94b8371e..76b1a2ac7 100644
--- a/doc/guides/nics/cxgbe.rst
+++ b/doc/guides/nics/cxgbe.rst
@@ -104,10 +104,6 @@ enabling debugging options may affect system performance.
 
      This controls compilation of both CXGBE and CXGBEVF PMD.
 
-- ``CONFIG_RTE_LIBRTE_CXGBE_TPUT`` (default **y**)
-
-  Toggle behavior to prefer Throughput or Latency.
-
 Runtime Options
 ~~~~~~~~~~~~~~~
 
@@ -127,6 +123,15 @@ Common Runtime Options
   enabled, the outer VLAN tag is preserved in Q-in-Q packets. Otherwise,
   the outer VLAN tag is stripped in Q-in-Q packets.
 
+- ``tx_mode_latency`` (default **0**)
+
+  When set to 1, Tx doesn't wait for max number of packets to get
+  coalesced and sends the packets immediately at the end of the
+  current Tx burst. When set to 0, Tx waits across multiple Tx bursts
+  until the max number of packets have been coalesced. In this case,
+  Tx only sends the coalesced packets to hardware once the max
+  coalesce limit has been reached.
+
 CXGBE VF Only Runtime Options
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
diff --git a/drivers/net/cxgbe/base/adapter.h b/drivers/net/cxgbe/base/adapter.h
index fcc84e4e9..6758364c7 100644
--- a/drivers/net/cxgbe/base/adapter.h
+++ b/drivers/net/cxgbe/base/adapter.h
@@ -302,6 +302,7 @@ TAILQ_HEAD(mbox_list, mbox_entry);
 struct adapter_devargs {
 	bool keep_ovlan;
 	bool force_link_up;
+	bool tx_mode_latency;
 };
 
 struct adapter {
diff --git a/drivers/net/cxgbe/cxgbe.h b/drivers/net/cxgbe/cxgbe.h
index 3a50502b7..ed1be3559 100644
--- a/drivers/net/cxgbe/cxgbe.h
+++ b/drivers/net/cxgbe/cxgbe.h
@@ -52,6 +52,7 @@
 
 /* Common PF and VF devargs */
 #define CXGBE_DEVARG_CMN_KEEP_OVLAN "keep_ovlan"
+#define CXGBE_DEVARG_CMN_TX_MODE_LATENCY "tx_mode_latency"
 
 /* VF only devargs */
 #define CXGBE_DEVARG_VF_FORCE_LINK_UP "force_link_up"
diff --git a/drivers/net/cxgbe/cxgbe_ethdev.c b/drivers/net/cxgbe/cxgbe_ethdev.c
index 2a2875f89..615dda607 100644
--- a/drivers/net/cxgbe/cxgbe_ethdev.c
+++ b/drivers/net/cxgbe/cxgbe_ethdev.c
@@ -1239,7 +1239,8 @@ RTE_PMD_REGISTER_PCI(net_cxgbe, rte_cxgbe_pmd);
 RTE_PMD_REGISTER_PCI_TABLE(net_cxgbe, cxgb4_pci_tbl);
 RTE_PMD_REGISTER_KMOD_DEP(net_cxgbe, "* igb_uio | uio_pci_generic | vfio-pci");
 RTE_PMD_REGISTER_PARAM_STRING(net_cxgbe,
-			      CXGBE_DEVARG_CMN_KEEP_OVLAN "=<0|1> ");
+			      CXGBE_DEVARG_CMN_KEEP_OVLAN "=<0|1> "
+			      CXGBE_DEVARG_CMN_TX_MODE_LATENCY "=<0|1> ");
 
 RTE_INIT(cxgbe_init_log)
 {
diff --git a/drivers/net/cxgbe/cxgbe_main.c b/drivers/net/cxgbe/cxgbe_main.c
index 6a6137f06..23b74c754 100644
--- a/drivers/net/cxgbe/cxgbe_main.c
+++ b/drivers/net/cxgbe/cxgbe_main.c
@@ -672,6 +672,7 @@ void cxgbe_print_port_info(struct adapter *adap)
 static int check_devargs_handler(const char *key, const char *value, void *p)
 {
 	if (!strncmp(key, CXGBE_DEVARG_CMN_KEEP_OVLAN, strlen(key)) ||
+	    !strncmp(key, CXGBE_DEVARG_CMN_TX_MODE_LATENCY, strlen(key)) ||
 	    !strncmp(key, CXGBE_DEVARG_VF_FORCE_LINK_UP, strlen(key))) {
 		if (!strncmp(value, "1", 1)) {
 			bool *dst_val = (bool *)p;
@@ -728,6 +729,8 @@ void cxgbe_process_devargs(struct adapter *adap)
 {
 	cxgbe_get_devargs_int(adap, &adap->devargs.keep_ovlan,
 			      CXGBE_DEVARG_CMN_KEEP_OVLAN, 0);
+	cxgbe_get_devargs_int(adap, &adap->devargs.tx_mode_latency,
+			      CXGBE_DEVARG_CMN_TX_MODE_LATENCY, 0);
 	cxgbe_get_devargs_int(adap, &adap->devargs.force_link_up,
 			      CXGBE_DEVARG_VF_FORCE_LINK_UP, 0);
 }
diff --git a/drivers/net/cxgbe/cxgbevf_ethdev.c b/drivers/net/cxgbe/cxgbevf_ethdev.c
index cc0938b43..4165ba986 100644
--- a/drivers/net/cxgbe/cxgbevf_ethdev.c
+++ b/drivers/net/cxgbe/cxgbevf_ethdev.c
@@ -213,4 +213,5 @@ RTE_PMD_REGISTER_PCI_TABLE(net_cxgbevf, cxgb4vf_pci_tbl);
 RTE_PMD_REGISTER_KMOD_DEP(net_cxgbevf, "* igb_uio | vfio-pci");
 RTE_PMD_REGISTER_PARAM_STRING(net_cxgbevf,
 			      CXGBE_DEVARG_CMN_KEEP_OVLAN "=<0|1> "
+			      CXGBE_DEVARG_CMN_TX_MODE_LATENCY "=<0|1> "
 			      CXGBE_DEVARG_VF_FORCE_LINK_UP "=<0|1> ");
diff --git a/drivers/net/cxgbe/sge.c b/drivers/net/cxgbe/sge.c
index bf3190211..0df870a41 100644
--- a/drivers/net/cxgbe/sge.c
+++ b/drivers/net/cxgbe/sge.c
@@ -1007,10 +1007,6 @@ static inline int tx_do_packet_coalesce(struct sge_eth_txq *txq,
 	unsigned int max_coal_pkt_num = is_pf4(adap) ? ETH_COALESCE_PKT_NUM :
 						       ETH_COALESCE_VF_PKT_NUM;
 
-#ifdef RTE_LIBRTE_CXGBE_TPUT
-	RTE_SET_USED(nb_pkts);
-#endif
-
 	if (q->coalesce.type == 0) {
 		mc = (struct ulp_txpkt *)q->coalesce.ptr;
 		mc->cmd_dest = htonl(V_ULPTX_CMD(4) | V_ULP_TXPKT_DEST(0) |
@@ -1082,13 +1078,15 @@ static inline int tx_do_packet_coalesce(struct sge_eth_txq *txq,
 	sd->coalesce.sgl[idx & 1] = (struct ulptx_sgl *)(cpl + 1);
 	sd->coalesce.idx = (idx & 1) + 1;
 
-	/* send the coaelsced work request if max reached */
-	if (++q->coalesce.idx == max_coal_pkt_num
-#ifndef RTE_LIBRTE_CXGBE_TPUT
-	    || q->coalesce.idx >= nb_pkts
-#endif
-	    )
+	/* Send the coalesced work request, only if max reached. However,
+	 * if lower latency is preferred over throughput, then don't wait
+	 * for coalescing the next Tx burst and send the packets now.
+	 */
+	q->coalesce.idx++;
+	if (q->coalesce.idx == max_coal_pkt_num ||
+	    (adap->devargs.tx_mode_latency && q->coalesce.idx >= nb_pkts))
 		ship_tx_pkt_coalesce_wr(adap, txq);
+
 	return 0;
 }
 
-- 
2.18.0
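
As a usage example (illustrative device address, mirroring the testpmd
examples in the CXGBE doc), latency-oriented Tx coalescing can now be
requested at run time with:

    testpmd -w 02:00.4,tx_mode_latency=1 -- -i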



* [dpdk-dev] [PATCH v2 10/12] net/cxgbe: fetch max Tx coalesce limit from firmware
  2019-09-27 20:30 ` [dpdk-dev] [PATCH v2 " Rahul Lakkireddy
                     ` (8 preceding siblings ...)
  2019-09-27 20:30   ` [dpdk-dev] [PATCH v2 09/12] net/cxgbe: add devarg to control Tx coalescing Rahul Lakkireddy
@ 2019-09-27 20:30   ` Rahul Lakkireddy
  2019-09-27 20:30   ` [dpdk-dev] [PATCH v2 11/12] net/cxgbe: add rte_flow support for matching VLAN Rahul Lakkireddy
                     ` (2 subsequent siblings)
  12 siblings, 0 replies; 30+ messages in thread
From: Rahul Lakkireddy @ 2019-09-27 20:30 UTC (permalink / raw)
  To: dev; +Cc: nirranjan

Query the firmware for the maximum number of packets that can be
coalesced in the Tx path.

Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
---
v2:
- No changes.

 doc/guides/nics/cxgbe.rst               | 18 ++++++------
 drivers/net/cxgbe/base/common.h         |  1 +
 drivers/net/cxgbe/base/t4fw_interface.h |  3 +-
 drivers/net/cxgbe/cxgbe_main.c          | 39 ++++++++++++-------------
 drivers/net/cxgbe/cxgbe_pfvf.h          | 10 +++++++
 drivers/net/cxgbe/cxgbevf_main.c        | 12 ++++++--
 drivers/net/cxgbe/sge.c                 |  4 +--
 7 files changed, 52 insertions(+), 35 deletions(-)

diff --git a/doc/guides/nics/cxgbe.rst b/doc/guides/nics/cxgbe.rst
index 76b1a2ac7..cae78a34c 100644
--- a/doc/guides/nics/cxgbe.rst
+++ b/doc/guides/nics/cxgbe.rst
@@ -70,7 +70,7 @@ in :ref:`t5-nics` and :ref:`t6-nics`.
 Prerequisites
 -------------
 
-- Requires firmware version **1.17.14.0** and higher. Visit
+- Requires firmware version **1.23.4.0** and higher. Visit
   `Chelsio Download Center <http://service.chelsio.com>`_ to get latest firmware
   bundled with the latest Chelsio Unified Wire package.
 
@@ -215,7 +215,7 @@ Unified Wire package for Linux operating system are as follows:
 
    .. code-block:: console
 
-      firmware-version: 1.17.14.0, TP 0.1.4.9
+      firmware-version: 1.23.4.0, TP 0.1.23.2
 
 Running testpmd
 ~~~~~~~~~~~~~~~
@@ -273,7 +273,7 @@ devices managed by librte_pmd_cxgbe in Linux operating system.
       EAL:   PCI memory mapped at 0x7fd7c0200000
       EAL:   PCI memory mapped at 0x7fd77cdfd000
       EAL:   PCI memory mapped at 0x7fd7c10b7000
-      PMD: rte_cxgbe_pmd: fw: 1.17.14.0, TP: 0.1.4.9
+      PMD: rte_cxgbe_pmd: fw: 1.23.4.0, TP: 0.1.23.2
       PMD: rte_cxgbe_pmd: Coming up as MASTER: Initializing adapter
       Interactive-mode selected
       Configuring Port 0 (socket 0)
@@ -379,16 +379,16 @@ virtual functions.
       [...]
       EAL: PCI device 0000:02:01.0 on NUMA socket 0
       EAL:   probe driver: 1425:5803 net_cxgbevf
-      PMD: rte_cxgbe_pmd: Firmware version: 1.17.14.0
-      PMD: rte_cxgbe_pmd: TP Microcode version: 0.1.4.9
+      PMD: rte_cxgbe_pmd: Firmware version: 1.23.4.0
+      PMD: rte_cxgbe_pmd: TP Microcode version: 0.1.23.2
       PMD: rte_cxgbe_pmd: Chelsio rev 0
       PMD: rte_cxgbe_pmd: No bootstrap loaded
       PMD: rte_cxgbe_pmd: No Expansion ROM loaded
       PMD: rte_cxgbe_pmd:  0000:02:01.0 Chelsio rev 0 1G/10GBASE-SFP
       EAL: PCI device 0000:02:01.1 on NUMA socket 0
       EAL:   probe driver: 1425:5803 net_cxgbevf
-      PMD: rte_cxgbe_pmd: Firmware version: 1.17.14.0
-      PMD: rte_cxgbe_pmd: TP Microcode version: 0.1.4.9
+      PMD: rte_cxgbe_pmd: Firmware version: 1.23.4.0
+      PMD: rte_cxgbe_pmd: TP Microcode version: 0.1.23.2
       PMD: rte_cxgbe_pmd: Chelsio rev 0
       PMD: rte_cxgbe_pmd: No bootstrap loaded
       PMD: rte_cxgbe_pmd: No Expansion ROM loaded
@@ -465,7 +465,7 @@ Unified Wire package for FreeBSD operating system are as follows:
 
    .. code-block:: console
 
-      dev.t5nex.0.firmware_version: 1.17.14.0
+      dev.t5nex.0.firmware_version: 1.23.4.0
 
 Running testpmd
 ~~~~~~~~~~~~~~~
@@ -583,7 +583,7 @@ devices managed by librte_pmd_cxgbe in FreeBSD operating system.
       EAL:   PCI memory mapped at 0x8007ec000
       EAL:   PCI memory mapped at 0x842800000
       EAL:   PCI memory mapped at 0x80086c000
-      PMD: rte_cxgbe_pmd: fw: 1.17.14.0, TP: 0.1.4.9
+      PMD: rte_cxgbe_pmd: fw: 1.23.4.0, TP: 0.1.23.2
       PMD: rte_cxgbe_pmd: Coming up as MASTER: Initializing adapter
       Interactive-mode selected
       Configuring Port 0 (socket 0)
diff --git a/drivers/net/cxgbe/base/common.h b/drivers/net/cxgbe/base/common.h
index 973d4d7dd..6047642c5 100644
--- a/drivers/net/cxgbe/base/common.h
+++ b/drivers/net/cxgbe/base/common.h
@@ -272,6 +272,7 @@ struct adapter_params {
 	bool ulptx_memwrite_dsgl;          /* use of T5 DSGL allowed */
 	u8 fw_caps_support;		  /* 32-bit Port Capabilities */
 	u8 filter2_wr_support;            /* FW support for FILTER2_WR */
+	u32 max_tx_coalesce_num; /* Max # of Tx packets that can be coalesced */
 };
 
 /* Firmware Port Capabilities types.
diff --git a/drivers/net/cxgbe/base/t4fw_interface.h b/drivers/net/cxgbe/base/t4fw_interface.h
index 06d3ef3a6..e992d196d 100644
--- a/drivers/net/cxgbe/base/t4fw_interface.h
+++ b/drivers/net/cxgbe/base/t4fw_interface.h
@@ -692,7 +692,8 @@ enum fw_params_param_pfvf {
 	FW_PARAMS_PARAM_PFVF_L2T_START = 0x13,
 	FW_PARAMS_PARAM_PFVF_L2T_END = 0x14,
 	FW_PARAMS_PARAM_PFVF_CPLFW4MSG_ENCAP = 0x31,
-	FW_PARAMS_PARAM_PFVF_PORT_CAPS32 = 0x3A
+	FW_PARAMS_PARAM_PFVF_PORT_CAPS32 = 0x3A,
+	FW_PARAMS_PARAM_PFVF_MAX_PKTS_PER_ETH_TX_PKTS_WR = 0x3D,
 };
 
 /*
diff --git a/drivers/net/cxgbe/cxgbe_main.c b/drivers/net/cxgbe/cxgbe_main.c
index 23b74c754..4701518a6 100644
--- a/drivers/net/cxgbe/cxgbe_main.c
+++ b/drivers/net/cxgbe/cxgbe_main.c
@@ -37,6 +37,7 @@
 #include "base/t4_regs.h"
 #include "base/t4_msg.h"
 #include "cxgbe.h"
+#include "cxgbe_pfvf.h"
 #include "clip_tbl.h"
 #include "l2t.h"
 #include "mps_tcam.h"
@@ -1162,20 +1163,10 @@ static int adap_init0(struct adapter *adap)
 	/*
 	 * Grab some of our basic fundamental operating parameters.
 	 */
-#define FW_PARAM_DEV(param) \
-	(V_FW_PARAMS_MNEM(FW_PARAMS_MNEM_DEV) | \
-	 V_FW_PARAMS_PARAM_X(FW_PARAMS_PARAM_DEV_##param))
-
-#define FW_PARAM_PFVF(param) \
-	(V_FW_PARAMS_MNEM(FW_PARAMS_MNEM_PFVF) | \
-	 V_FW_PARAMS_PARAM_X(FW_PARAMS_PARAM_PFVF_##param) |  \
-	 V_FW_PARAMS_PARAM_Y(0) | \
-	 V_FW_PARAMS_PARAM_Z(0))
-
-	params[0] = FW_PARAM_PFVF(L2T_START);
-	params[1] = FW_PARAM_PFVF(L2T_END);
-	params[2] = FW_PARAM_PFVF(FILTER_START);
-	params[3] = FW_PARAM_PFVF(FILTER_END);
+	params[0] = CXGBE_FW_PARAM_PFVF(L2T_START);
+	params[1] = CXGBE_FW_PARAM_PFVF(L2T_END);
+	params[2] = CXGBE_FW_PARAM_PFVF(FILTER_START);
+	params[3] = CXGBE_FW_PARAM_PFVF(FILTER_END);
 	ret = t4_query_params(adap, adap->mbox, adap->pf, 0, 4, params, val);
 	if (ret < 0)
 		goto bye;
@@ -1184,8 +1175,8 @@ static int adap_init0(struct adapter *adap)
 	adap->tids.ftid_base = val[2];
 	adap->tids.nftids = val[3] - val[2] + 1;
 
-	params[0] = FW_PARAM_PFVF(CLIP_START);
-	params[1] = FW_PARAM_PFVF(CLIP_END);
+	params[0] = CXGBE_FW_PARAM_PFVF(CLIP_START);
+	params[1] = CXGBE_FW_PARAM_PFVF(CLIP_END);
 	ret = t4_query_params(adap, adap->mbox, adap->pf, 0, 2, params, val);
 	if (ret < 0)
 		goto bye;
@@ -1215,14 +1206,14 @@ static int adap_init0(struct adapter *adap)
 	if (is_t4(adap->params.chip)) {
 		adap->params.filter2_wr_support = 0;
 	} else {
-		params[0] = FW_PARAM_DEV(FILTER2_WR);
+		params[0] = CXGBE_FW_PARAM_DEV(FILTER2_WR);
 		ret = t4_query_params(adap, adap->mbox, adap->pf, 0,
 				      1, params, val);
 		adap->params.filter2_wr_support = (ret == 0 && val[0] != 0);
 	}
 
 	/* query tid-related parameters */
-	params[0] = FW_PARAM_DEV(NTID);
+	params[0] = CXGBE_FW_PARAM_DEV(NTID);
 	ret = t4_query_params(adap, adap->mbox, adap->pf, 0, 1,
 			      params, val);
 	if (ret < 0)
@@ -1235,7 +1226,7 @@ static int adap_init0(struct adapter *adap)
 	 * firmware won't understand this and we'll just get
 	 * unencapsulated messages ...
 	 */
-	params[0] = FW_PARAM_PFVF(CPLFW4MSG_ENCAP);
+	params[0] = CXGBE_FW_PARAM_PFVF(CPLFW4MSG_ENCAP);
 	val[0] = 1;
 	(void)t4_set_params(adap, adap->mbox, adap->pf, 0, 1, params, val);
 
@@ -1248,12 +1239,20 @@ static int adap_init0(struct adapter *adap)
 	if (is_t4(adap->params.chip)) {
 		adap->params.ulptx_memwrite_dsgl = false;
 	} else {
-		params[0] = FW_PARAM_DEV(ULPTX_MEMWRITE_DSGL);
+		params[0] = CXGBE_FW_PARAM_DEV(ULPTX_MEMWRITE_DSGL);
 		ret = t4_query_params(adap, adap->mbox, adap->pf, 0,
 				      1, params, val);
 		adap->params.ulptx_memwrite_dsgl = (ret == 0 && val[0] != 0);
 	}
 
+	/* Query for max number of packets that can be coalesced for Tx */
+	params[0] = CXGBE_FW_PARAM_PFVF(MAX_PKTS_PER_ETH_TX_PKTS_WR);
+	ret = t4_query_params(adap, adap->mbox, adap->pf, 0, 1, params, val);
+	if (!ret && val[0] > 0)
+		adap->params.max_tx_coalesce_num = val[0];
+	else
+		adap->params.max_tx_coalesce_num = ETH_COALESCE_PKT_NUM;
+
 	/*
 	 * The MTU/MSS Table is initialized by now, so load their values.  If
 	 * we're initializing the adapter, then we'll make any modifications
diff --git a/drivers/net/cxgbe/cxgbe_pfvf.h b/drivers/net/cxgbe/cxgbe_pfvf.h
index 3a6ab416f..0b7c52aec 100644
--- a/drivers/net/cxgbe/cxgbe_pfvf.h
+++ b/drivers/net/cxgbe/cxgbe_pfvf.h
@@ -6,6 +6,16 @@
 #ifndef _CXGBE_PFVF_H_
 #define _CXGBE_PFVF_H_
 
+#define CXGBE_FW_PARAM_DEV(param) \
+	(V_FW_PARAMS_MNEM(FW_PARAMS_MNEM_DEV) | \
+	 V_FW_PARAMS_PARAM_X(FW_PARAMS_PARAM_DEV_##param))
+
+#define CXGBE_FW_PARAM_PFVF(param) \
+	(V_FW_PARAMS_MNEM(FW_PARAMS_MNEM_PFVF) | \
+	 V_FW_PARAMS_PARAM_X(FW_PARAMS_PARAM_PFVF_##param) |  \
+	 V_FW_PARAMS_PARAM_Y(0) | \
+	 V_FW_PARAMS_PARAM_Z(0))
+
 void cxgbe_dev_rx_queue_release(void *q);
 void cxgbe_dev_tx_queue_release(void *q);
 void cxgbe_dev_stop(struct rte_eth_dev *eth_dev);
diff --git a/drivers/net/cxgbe/cxgbevf_main.c b/drivers/net/cxgbe/cxgbevf_main.c
index 82f40f358..66fb92375 100644
--- a/drivers/net/cxgbe/cxgbevf_main.c
+++ b/drivers/net/cxgbe/cxgbevf_main.c
@@ -11,6 +11,7 @@
 #include "base/t4_regs.h"
 #include "base/t4_msg.h"
 #include "cxgbe.h"
+#include "cxgbe_pfvf.h"
 #include "mps_tcam.h"
 
 /*
@@ -122,11 +123,18 @@ static int adap_init0vf(struct adapter *adapter)
 	 * firmware won't understand this and we'll just get
 	 * unencapsulated messages ...
 	 */
-	param = V_FW_PARAMS_MNEM(FW_PARAMS_MNEM_PFVF) |
-		V_FW_PARAMS_PARAM_X(FW_PARAMS_PARAM_PFVF_CPLFW4MSG_ENCAP);
+	param = CXGBE_FW_PARAM_PFVF(CPLFW4MSG_ENCAP);
 	val = 1;
 	t4vf_set_params(adapter, 1, &param, &val);
 
+	/* Query for max number of packets that can be coalesced for Tx */
+	param = CXGBE_FW_PARAM_PFVF(MAX_PKTS_PER_ETH_TX_PKTS_WR);
+	err = t4vf_query_params(adapter, 1, &param, &val);
+	if (!err && val > 0)
+		adapter->params.max_tx_coalesce_num = val;
+	else
+		adapter->params.max_tx_coalesce_num = ETH_COALESCE_VF_PKT_NUM;
+
 	/*
 	 * Grab our Virtual Interface resource allocation, extract the
 	 * features that we're interested in and do a bit of sanity testing on
diff --git a/drivers/net/cxgbe/sge.c b/drivers/net/cxgbe/sge.c
index 0df870a41..aba85a209 100644
--- a/drivers/net/cxgbe/sge.c
+++ b/drivers/net/cxgbe/sge.c
@@ -1004,8 +1004,6 @@ static inline int tx_do_packet_coalesce(struct sge_eth_txq *txq,
 	struct cpl_tx_pkt_core *cpl;
 	struct tx_sw_desc *sd;
 	unsigned int idx = q->coalesce.idx, len = mbuf->pkt_len;
-	unsigned int max_coal_pkt_num = is_pf4(adap) ? ETH_COALESCE_PKT_NUM :
-						       ETH_COALESCE_VF_PKT_NUM;
 
 	if (q->coalesce.type == 0) {
 		mc = (struct ulp_txpkt *)q->coalesce.ptr;
@@ -1083,7 +1081,7 @@ static inline int tx_do_packet_coalesce(struct sge_eth_txq *txq,
 	 * for coalescing the next Tx burst and send the packets now.
 	 */
 	q->coalesce.idx++;
-	if (q->coalesce.idx == max_coal_pkt_num ||
+	if (q->coalesce.idx == adap->params.max_tx_coalesce_num ||
 	    (adap->devargs.tx_mode_latency && q->coalesce.idx >= nb_pkts))
 		ship_tx_pkt_coalesce_wr(adap, txq);
 
-- 
2.18.0



* [dpdk-dev] [PATCH v2 11/12] net/cxgbe: add rte_flow support for matching VLAN
  2019-09-27 20:30 ` [dpdk-dev] [PATCH v2 " Rahul Lakkireddy
                     ` (9 preceding siblings ...)
  2019-09-27 20:30   ` [dpdk-dev] [PATCH v2 10/12] net/cxgbe: fetch max Tx coalesce limit from firmware Rahul Lakkireddy
@ 2019-09-27 20:30   ` Rahul Lakkireddy
  2019-09-27 20:30   ` [dpdk-dev] [PATCH v2 12/12] net/cxgbe: add rte_flow support for setting VLAN PCP Rahul Lakkireddy
  2019-09-30 12:34   ` [dpdk-dev] [PATCH v2 00/12] net/cxgbe: bug fixes and updates for CXGBE/CXGBEVF PMD Ferruh Yigit
  12 siblings, 0 replies; 30+ messages in thread
From: Rahul Lakkireddy @ 2019-09-27 20:30 UTC (permalink / raw)
  To: dev; +Cc: nirranjan

Add support for matching VLAN fields via rte_flow API.

When matching a VLAN pattern, the ethertype field in the hardware
filter specification must contain the VLAN header's ethertype, not
the Ethernet header's ethertype. The hardware automatically searches
for ethertype 0x8100 in the Ethernet header when parsing an incoming
packet against a VLAN pattern.
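
For illustration only (not part of this patch), a minimal sketch of
how an application could request such a match through the generic
rte_flow API; the port id, VLAN ID and queue index are arbitrary
illustration values:

#include <rte_flow.h>
#include <rte_byteorder.h>

/* Steer packets carrying VLAN ID 100 to Rx queue 0 on this port. */
static struct rte_flow *
create_vlan_match_flow(uint16_t port_id, struct rte_flow_error *err)
{
        struct rte_flow_attr attr = { .ingress = 1 };
        /* Match VID 100; the mask covers the 12 VID bits only. */
        struct rte_flow_item_vlan vlan_spec = { .tci = RTE_BE16(100) };
        struct rte_flow_item_vlan vlan_mask = { .tci = RTE_BE16(0x0fff) };
        struct rte_flow_item pattern[] = {
                { .type = RTE_FLOW_ITEM_TYPE_ETH },
                { .type = RTE_FLOW_ITEM_TYPE_VLAN,
                  .spec = &vlan_spec, .mask = &vlan_mask },
                { .type = RTE_FLOW_ITEM_TYPE_END },
        };
        struct rte_flow_action_queue queue = { .index = 0 };
        struct rte_flow_action actions[] = {
                { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
                { .type = RTE_FLOW_ACTION_TYPE_END },
        };

        if (rte_flow_validate(port_id, &attr, pattern, actions, err))
                return NULL;
        return rte_flow_create(port_id, &attr, pattern, actions, err);
}

No ethertype needs to be supplied in the ETH item here: once the
driver sets the 'ivlan_vld' bit for the VLAN item, the hardware
implicitly matches only packets with outer ethertype 0x8100.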

Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
---
v2:
- No changes.

 drivers/net/cxgbe/base/t4_regs_values.h |   9 ++
 drivers/net/cxgbe/cxgbe_filter.c        |  11 +-
 drivers/net/cxgbe/cxgbe_flow.c          | 145 +++++++++++++++++++++++-
 drivers/net/cxgbe/cxgbe_main.c          |  10 +-
 4 files changed, 162 insertions(+), 13 deletions(-)

diff --git a/drivers/net/cxgbe/base/t4_regs_values.h b/drivers/net/cxgbe/base/t4_regs_values.h
index a9414d202..e3f549e51 100644
--- a/drivers/net/cxgbe/base/t4_regs_values.h
+++ b/drivers/net/cxgbe/base/t4_regs_values.h
@@ -143,4 +143,13 @@
 #define W_FT_MPSHITTYPE			3
 #define W_FT_FRAGMENTATION		1
 
+/*
+ * Some of the Compressed Filter Tuple fields have internal structure.  These
+ * bit shifts/masks describe those structures.  All shifts are relative to the
+ * base position of the fields within the Compressed Filter Tuple
+ */
+#define S_FT_VLAN_VLD			16
+#define V_FT_VLAN_VLD(x)		((x) << S_FT_VLAN_VLD)
+#define F_FT_VLAN_VLD			V_FT_VLAN_VLD(1U)
+
 #endif /* __T4_REGS_VALUES_H__ */
diff --git a/drivers/net/cxgbe/cxgbe_filter.c b/drivers/net/cxgbe/cxgbe_filter.c
index 33b95a69a..b9d9d5d39 100644
--- a/drivers/net/cxgbe/cxgbe_filter.c
+++ b/drivers/net/cxgbe/cxgbe_filter.c
@@ -69,7 +69,8 @@ int cxgbe_validate_filter(struct adapter *adapter,
 	(!(fconf & (_mask)) && S(_field))
 
 	if (U(F_PORT, iport) || U(F_ETHERTYPE, ethtype) ||
-	    U(F_PROTOCOL, proto) || U(F_MACMATCH, macidx))
+	    U(F_PROTOCOL, proto) || U(F_MACMATCH, macidx) ||
+	    U(F_VLAN, ivlan_vld))
 		return -EOPNOTSUPP;
 
 #undef S
@@ -292,6 +293,9 @@ static u64 hash_filter_ntuple(const struct filter_entry *f)
 		ntuple |= (u64)(f->fs.val.ethtype) << tp->ethertype_shift;
 	if (tp->macmatch_shift >= 0 && f->fs.mask.macidx)
 		ntuple |= (u64)(f->fs.val.macidx) << tp->macmatch_shift;
+	if (tp->vlan_shift >= 0 && f->fs.mask.ivlan)
+		ntuple |= (u64)(F_FT_VLAN_VLD | f->fs.val.ivlan) <<
+			  tp->vlan_shift;
 
 	return ntuple;
 }
@@ -769,6 +773,9 @@ static int set_filter_wr(struct rte_eth_dev *dev, unsigned int fidx)
 			    V_FW_FILTER_WR_L2TIX(f->l2t ? f->l2t->idx : 0));
 	fwr->ethtype = cpu_to_be16(f->fs.val.ethtype);
 	fwr->ethtypem = cpu_to_be16(f->fs.mask.ethtype);
+	fwr->frag_to_ovlan_vldm =
+		(V_FW_FILTER_WR_IVLAN_VLD(f->fs.val.ivlan_vld) |
+		 V_FW_FILTER_WR_IVLAN_VLDM(f->fs.mask.ivlan_vld));
 	fwr->smac_sel = 0;
 	fwr->rx_chan_rx_rpl_iq =
 		cpu_to_be16(V_FW_FILTER_WR_RX_CHAN(0) |
@@ -781,6 +788,8 @@ static int set_filter_wr(struct rte_eth_dev *dev, unsigned int fidx)
 			    V_FW_FILTER_WR_PORTM(f->fs.mask.iport));
 	fwr->ptcl = f->fs.val.proto;
 	fwr->ptclm = f->fs.mask.proto;
+	fwr->ivlan = cpu_to_be16(f->fs.val.ivlan);
+	fwr->ivlanm = cpu_to_be16(f->fs.mask.ivlan);
 	rte_memcpy(fwr->lip, f->fs.val.lip, sizeof(fwr->lip));
 	rte_memcpy(fwr->lipm, f->fs.mask.lip, sizeof(fwr->lipm));
 	rte_memcpy(fwr->fip, f->fs.val.fip, sizeof(fwr->fip));
diff --git a/drivers/net/cxgbe/cxgbe_flow.c b/drivers/net/cxgbe/cxgbe_flow.c
index 4c8553039..4b72e6422 100644
--- a/drivers/net/cxgbe/cxgbe_flow.c
+++ b/drivers/net/cxgbe/cxgbe_flow.c
@@ -46,6 +46,53 @@ cxgbe_validate_item(const struct rte_flow_item *i, struct rte_flow_error *e)
 	return 0;
 }
 
+/**
+ * Apart from the 4-tuple IPv4/IPv6 - TCP/UDP information,
+ * there's only 40-bits available to store match fields.
+ * So, to save space, optimize filter spec for some common
+ * known fields that hardware can parse against incoming
+ * packets automatically.
+ */
+static void
+cxgbe_tweak_filter_spec(struct adapter *adap,
+			struct ch_filter_specification *fs)
+{
+	/* Save 16-bit ethertype field space, by setting corresponding
+	 * 1-bit flags in the filter spec for common known ethertypes.
+	 * When hardware sees these flags, it automatically infers and
+	 * matches incoming packets against the corresponding ethertype.
+	 */
+	if (fs->mask.ethtype == 0xffff) {
+		switch (fs->val.ethtype) {
+		case RTE_ETHER_TYPE_IPV4:
+			if (adap->params.tp.ethertype_shift < 0) {
+				fs->type = FILTER_TYPE_IPV4;
+				fs->val.ethtype = 0;
+				fs->mask.ethtype = 0;
+			}
+			break;
+		case RTE_ETHER_TYPE_IPV6:
+			if (adap->params.tp.ethertype_shift < 0) {
+				fs->type = FILTER_TYPE_IPV6;
+				fs->val.ethtype = 0;
+				fs->mask.ethtype = 0;
+			}
+			break;
+		case RTE_ETHER_TYPE_VLAN:
+			if (adap->params.tp.ethertype_shift < 0 &&
+			    adap->params.tp.vlan_shift >= 0) {
+				fs->val.ivlan_vld = 1;
+				fs->mask.ivlan_vld = 1;
+				fs->val.ethtype = 0;
+				fs->mask.ethtype = 0;
+			}
+			break;
+		default:
+			break;
+		}
+	}
+}
+
 static void
 cxgbe_fill_filter_region(struct adapter *adap,
 			 struct ch_filter_specification *fs)
@@ -95,6 +142,9 @@ cxgbe_fill_filter_region(struct adapter *adap,
 		ntuple_mask |= (u64)fs->mask.iport << tp->port_shift;
 	if (tp->macmatch_shift >= 0)
 		ntuple_mask |= (u64)fs->mask.macidx << tp->macmatch_shift;
+	if (tp->vlan_shift >= 0 && fs->mask.ivlan_vld)
+		ntuple_mask |= (u64)(F_FT_VLAN_VLD | fs->mask.ivlan) <<
+			       tp->vlan_shift;
 
 	if (ntuple_mask != hash_filter_mask)
 		return;
@@ -114,6 +164,25 @@ ch_rte_parsetype_eth(const void *dmask, const struct rte_flow_item *item,
 	/* If user has not given any mask, then use chelsio supported mask. */
 	mask = umask ? umask : (const struct rte_flow_item_eth *)dmask;
 
+	if (!spec)
+		return 0;
+
+	/* Chelsio hardware supports matching on only one ethertype
+	 * (i.e. either the outer or inner ethertype, but not both). If
+	 * we already encountered VLAN item, then ensure that the outer
+	 * ethertype is VLAN (0x8100) and don't overwrite the inner
+	 * ethertype stored during VLAN item parsing. Note that if
+	 * 'ivlan_vld' bit is set in Chelsio filter spec, then the
+	 * hardware automatically only matches packets with outer
+	 * ethertype having VLAN (0x8100).
+	 */
+	if (fs->mask.ivlan_vld &&
+	    be16_to_cpu(spec->type) != RTE_ETHER_TYPE_VLAN)
+		return rte_flow_error_set(e, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM,
+					  item,
+					  "Already encountered VLAN item,"
+					  " but outer ethertype is not 0x8100");
+
 	/* we don't support SRC_MAC filtering*/
 	if (!rte_is_zero_ether_addr(&mask->src))
 		return rte_flow_error_set(e, ENOTSUP, RTE_FLOW_ERROR_TYPE_ITEM,
@@ -137,8 +206,13 @@ ch_rte_parsetype_eth(const void *dmask, const struct rte_flow_item *item,
 		CXGBE_FILL_FS(idx, 0x1ff, macidx);
 	}
 
-	CXGBE_FILL_FS(be16_to_cpu(spec->type),
-		      be16_to_cpu(mask->type), ethtype);
+	/* Only set outer ethertype, if we didn't encounter VLAN item yet.
+	 * Otherwise, the inner ethertype set by VLAN item will get
+	 * overwritten.
+	 */
+	if (!fs->mask.ivlan_vld)
+		CXGBE_FILL_FS(be16_to_cpu(spec->type),
+			      be16_to_cpu(mask->type), ethtype);
 	return 0;
 }
 
@@ -163,6 +237,50 @@ ch_rte_parsetype_port(const void *dmask, const struct rte_flow_item *item,
 	return 0;
 }
 
+static int
+ch_rte_parsetype_vlan(const void *dmask, const struct rte_flow_item *item,
+		      struct ch_filter_specification *fs,
+		      struct rte_flow_error *e)
+{
+	const struct rte_flow_item_vlan *spec = item->spec;
+	const struct rte_flow_item_vlan *umask = item->mask;
+	const struct rte_flow_item_vlan *mask;
+
+	/* If user has not given any mask, then use chelsio supported mask. */
+	mask = umask ? umask : (const struct rte_flow_item_vlan *)dmask;
+
+	CXGBE_FILL_FS(1, 1, ivlan_vld);
+	if (!spec)
+		return 0; /* Wildcard, match all VLAN */
+
+	/* Chelsio hardware supports matching on only one ethertype
+	 * (i.e. either the outer or inner ethertype, but not both).
+	 * If outer ethertype is already set and is not VLAN (0x8100),
+	 * then don't proceed further. Otherwise, reset the outer
+	 * ethertype, so that it can be replaced by inner ethertype.
+	 * Note that the hardware will automatically match on outer
+	 * ethertype 0x8100, if 'ivlan_vld' bit is set in Chelsio
+	 * filter spec.
+	 */
+	if (fs->mask.ethtype) {
+		if (fs->val.ethtype != RTE_ETHER_TYPE_VLAN)
+			return rte_flow_error_set(e, EINVAL,
+						  RTE_FLOW_ERROR_TYPE_ITEM,
+						  item,
+						  "Outer ethertype not 0x8100");
+
+		fs->val.ethtype = 0;
+		fs->mask.ethtype = 0;
+	}
+
+	CXGBE_FILL_FS(be16_to_cpu(spec->tci), be16_to_cpu(mask->tci), ivlan);
+	if (spec->inner_type)
+		CXGBE_FILL_FS(be16_to_cpu(spec->inner_type),
+			      be16_to_cpu(mask->inner_type), ethtype);
+
+	return 0;
+}
+
 static int
 ch_rte_parsetype_udp(const void *dmask, const struct rte_flow_item *item,
 		     struct ch_filter_specification *fs,
@@ -232,8 +350,13 @@ ch_rte_parsetype_ipv4(const void *dmask, const struct rte_flow_item *item,
 		return rte_flow_error_set(e, ENOTSUP, RTE_FLOW_ERROR_TYPE_ITEM,
 					  item, "ttl/tos are not supported");
 
+	if (fs->mask.ethtype &&
+	    (fs->val.ethtype != RTE_ETHER_TYPE_VLAN &&
+	     fs->val.ethtype != RTE_ETHER_TYPE_IPV4))
+		return rte_flow_error_set(e, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM,
+					  item,
+					  "Couldn't find IPv4 ethertype");
 	fs->type = FILTER_TYPE_IPV4;
-	CXGBE_FILL_FS(RTE_ETHER_TYPE_IPV4, 0xffff, ethtype);
 	if (!val)
 		return 0; /* ipv4 wild card */
 
@@ -261,8 +384,13 @@ ch_rte_parsetype_ipv6(const void *dmask, const struct rte_flow_item *item,
 					  item,
 					  "tc/flow/hop are not supported");
 
+	if (fs->mask.ethtype &&
+	    (fs->val.ethtype != RTE_ETHER_TYPE_VLAN &&
+	     fs->val.ethtype != RTE_ETHER_TYPE_IPV6))
+		return rte_flow_error_set(e, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM,
+					  item,
+					  "Couldn't find IPv6 ethertype");
 	fs->type = FILTER_TYPE_IPV6;
-	CXGBE_FILL_FS(RTE_ETHER_TYPE_IPV6, 0xffff, ethtype);
 	if (!val)
 		return 0; /* ipv6 wild card */
 
@@ -700,6 +828,14 @@ static struct chrte_fparse parseitem[] = {
 		}
 	},
 
+	[RTE_FLOW_ITEM_TYPE_VLAN] = {
+		.fptr = ch_rte_parsetype_vlan,
+		.dmask = &(const struct rte_flow_item_vlan){
+			.tci = 0xffff,
+			.inner_type = 0xffff,
+		}
+	},
+
 	[RTE_FLOW_ITEM_TYPE_IPV4] = {
 		.fptr  = ch_rte_parsetype_ipv4,
 		.dmask = &rte_flow_item_ipv4_mask,
@@ -773,6 +909,7 @@ cxgbe_rtef_parse_items(struct rte_flow *flow,
 	}
 
 	cxgbe_fill_filter_region(adap, &flow->fs);
+	cxgbe_tweak_filter_spec(adap, &flow->fs);
 
 	return 0;
 }
diff --git a/drivers/net/cxgbe/cxgbe_main.c b/drivers/net/cxgbe/cxgbe_main.c
index 4701518a6..f6967a3e4 100644
--- a/drivers/net/cxgbe/cxgbe_main.c
+++ b/drivers/net/cxgbe/cxgbe_main.c
@@ -753,12 +753,6 @@ static void configure_vlan_types(struct adapter *adapter)
 				 V_OVLAN_ETYPE(M_OVLAN_ETYPE),
 				 V_OVLAN_MASK(M_OVLAN_MASK) |
 				 V_OVLAN_ETYPE(0x9100));
-		/* OVLAN Type 0x8100 */
-		t4_set_reg_field(adapter, MPS_PORT_RX_OVLAN_REG(i, A_RX_OVLAN2),
-				 V_OVLAN_MASK(M_OVLAN_MASK) |
-				 V_OVLAN_ETYPE(M_OVLAN_ETYPE),
-				 V_OVLAN_MASK(M_OVLAN_MASK) |
-				 V_OVLAN_ETYPE(0x8100));
 
 		/* IVLAN 0X8100 */
 		t4_set_reg_field(adapter, MPS_PORT_RX_IVLAN(i),
@@ -767,9 +761,9 @@ static void configure_vlan_types(struct adapter *adapter)
 
 		t4_set_reg_field(adapter, MPS_PORT_RX_CTL(i),
 				 F_OVLAN_EN0 | F_OVLAN_EN1 |
-				 F_OVLAN_EN2 | F_IVLAN_EN,
+				 F_IVLAN_EN,
 				 F_OVLAN_EN0 | F_OVLAN_EN1 |
-				 F_OVLAN_EN2 | F_IVLAN_EN);
+				 F_IVLAN_EN);
 	}
 
 	t4_tp_wr_bits_indirect(adapter, A_TP_INGRESS_CONFIG, V_RM_OVLAN(1),
-- 
2.18.0



* [dpdk-dev] [PATCH v2 12/12] net/cxgbe: add rte_flow support for setting VLAN PCP
  2019-09-27 20:30 ` [dpdk-dev] [PATCH v2 " Rahul Lakkireddy
                     ` (10 preceding siblings ...)
  2019-09-27 20:30   ` [dpdk-dev] [PATCH v2 11/12] net/cxgbe: add rte_flow support for matching VLAN Rahul Lakkireddy
@ 2019-09-27 20:30   ` Rahul Lakkireddy
  2019-09-30 12:34   ` [dpdk-dev] [PATCH v2 00/12] net/cxgbe: bug fixes and updates for CXGBE/CXGBEVF PMD Ferruh Yigit
  12 siblings, 0 replies; 30+ messages in thread
From: Rahul Lakkireddy @ 2019-09-27 20:30 UTC (permalink / raw)
  To: dev; +Cc: nirranjan

Add support for setting the VLAN PCP field via the rte_flow API. The
hardware overwrites the entire 16-bit VLAN TCI field, so both the
VLAN VID and VLAN PCP set actions must be specified.
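
For illustration only (not part of this patch), a minimal sketch of
the corresponding rte_flow action list. Both set actions are given
together, since the hardware rewrites the whole TCI. The PHY_PORT
fate action and all values below are arbitrary assumptions; which
action combinations a given adapter accepts is driver specific:

#include <rte_flow.h>
#include <rte_byteorder.h>

/* Rewrite the VLAN tag of matched packets to VID 200, PCP 3 and
 * forward them to physical port 1 (illustrative values only).
 */
static const struct rte_flow_action_of_set_vlan_vid set_vid = {
        .vlan_vid = RTE_BE16(200),
};
static const struct rte_flow_action_of_set_vlan_pcp set_pcp = {
        .vlan_pcp = 3,
};
static const struct rte_flow_action_phy_port out_port = {
        .index = 1,
};
static const struct rte_flow_action vlan_rewrite_actions[] = {
        { .type = RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_VID, .conf = &set_vid },
        { .type = RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_PCP, .conf = &set_pcp },
        { .type = RTE_FLOW_ACTION_TYPE_PHY_PORT, .conf = &out_port },
        { .type = RTE_FLOW_ACTION_TYPE_END },
};

Without an explicit OF_PUSH_VLAN action, the driver programs these
as a rewrite of the existing tag rather than the insertion of a new
outer VLAN header.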

Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
---
v2:
- No changes.

 drivers/net/cxgbe/cxgbe_flow.c | 27 +++++++++++++++++++++++++++
 1 file changed, 27 insertions(+)

diff --git a/drivers/net/cxgbe/cxgbe_flow.c b/drivers/net/cxgbe/cxgbe_flow.c
index 4b72e6422..9ee8353ae 100644
--- a/drivers/net/cxgbe/cxgbe_flow.c
+++ b/drivers/net/cxgbe/cxgbe_flow.c
@@ -568,6 +568,7 @@ ch_rte_parse_atype_switch(const struct rte_flow_action *a,
 			  struct rte_flow_error *e)
 {
 	const struct rte_flow_action_of_set_vlan_vid *vlanid;
+	const struct rte_flow_action_of_set_vlan_pcp *vlanpcp;
 	const struct rte_flow_action_of_push_vlan *pushvlan;
 	const struct rte_flow_action_set_ipv4 *ipv4;
 	const struct rte_flow_action_set_ipv6 *ipv6;
@@ -591,6 +592,20 @@ ch_rte_parse_atype_switch(const struct rte_flow_action *a,
 		tmp_vlan = fs->vlan & 0xe000;
 		fs->vlan = (be16_to_cpu(vlanid->vlan_vid) & 0xfff) | tmp_vlan;
 		break;
+	case RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_PCP:
+		vlanpcp = (const struct rte_flow_action_of_set_vlan_pcp *)
+			  a->conf;
+		/* If explicitly asked to push a new VLAN header,
+		 * then don't set rewrite mode. Otherwise, the
+		 * incoming VLAN packets will get their VLAN fields
+		 * rewritten, instead of adding an additional outer
+		 * VLAN header.
+		 */
+		if (fs->newvlan != VLAN_INSERT)
+			fs->newvlan = VLAN_REWRITE;
+		tmp_vlan = fs->vlan & 0xfff;
+		fs->vlan = (vlanpcp->vlan_pcp << 13) | tmp_vlan;
+		break;
 	case RTE_FLOW_ACTION_TYPE_OF_PUSH_VLAN:
 		pushvlan = (const struct rte_flow_action_of_push_vlan *)
 			    a->conf;
@@ -724,6 +739,7 @@ cxgbe_rtef_parse_actions(struct rte_flow *flow,
 {
 	struct ch_filter_specification *fs = &flow->fs;
 	uint8_t nmode = 0, nat_ipv4 = 0, nat_ipv6 = 0;
+	uint8_t vlan_set_vid = 0, vlan_set_pcp = 0;
 	const struct rte_flow_action_queue *q;
 	const struct rte_flow_action *a;
 	char abit = 0;
@@ -762,6 +778,11 @@ cxgbe_rtef_parse_actions(struct rte_flow *flow,
 			fs->hitcnts = 1;
 			break;
 		case RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_VID:
+			vlan_set_vid++;
+			goto action_switch;
+		case RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_PCP:
+			vlan_set_pcp++;
+			goto action_switch;
 		case RTE_FLOW_ACTION_TYPE_OF_PUSH_VLAN:
 		case RTE_FLOW_ACTION_TYPE_OF_POP_VLAN:
 		case RTE_FLOW_ACTION_TYPE_PHY_PORT:
@@ -804,6 +825,12 @@ cxgbe_rtef_parse_actions(struct rte_flow *flow,
 		}
 	}
 
+	if (fs->newvlan == VLAN_REWRITE && (!vlan_set_vid || !vlan_set_pcp))
+		return rte_flow_error_set(e, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_ACTION, a,
+					  "Both OF_SET_VLAN_VID and "
+					  "OF_SET_VLAN_PCP must be specified");
+
 	if (ch_rte_parse_nat(nmode, fs))
 		return rte_flow_error_set(e, EINVAL,
 					  RTE_FLOW_ERROR_TYPE_ACTION, a,
-- 
2.18.0



* Re: [dpdk-dev] [PATCH v2 00/12] net/cxgbe: bug fixes and updates for CXGBE/CXGBEVF PMD
  2019-09-27 20:30 ` [dpdk-dev] [PATCH v2 " Rahul Lakkireddy
                     ` (11 preceding siblings ...)
  2019-09-27 20:30   ` [dpdk-dev] [PATCH v2 12/12] net/cxgbe: add rte_flow support for setting VLAN PCP Rahul Lakkireddy
@ 2019-09-30 12:34   ` Ferruh Yigit
  12 siblings, 0 replies; 30+ messages in thread
From: Ferruh Yigit @ 2019-09-30 12:34 UTC (permalink / raw)
  To: Rahul Lakkireddy, dev; +Cc: nirranjan

On 9/27/2019 9:30 PM, Rahul Lakkireddy wrote:
> This series of patches contain bug fixes and feature updates for
> CXGBE and CXGBEVF PMD. Patches 1 to 6 contain bug fixes. Patches
> 7 to 12 contain updates and new features for CXGBE/CXGBEVF PMD.
> 
> Patch 1 adds cxgbe_ prefix to some global functions to avoid name
> collision.
> 
> Patch 2 fixes NULL dereference when allocating CLIP for IPv6 rte_flow
> offloads.
> 
> Patch 3 fixes slot allocation logic for IPv6 rte_flow offloads
> for T6 NICs.
> 
> Patch 4 fixes issues with parsing VLAN rte_flow offload actions.
> 
> Patch 5 prefetches packets for non-coalesced Tx packets.
> 
> Patch 6 fixes NULL dereference when accessing firmware event queue
> for link updates before it is created.
> 
> Patch 7 reworks compilation dependent logs to use dynamic logging.
> 
> Patch 8 reworks devargs parsing to separate CXGBE VF only arguments.
> 
> Patch 9 removes compilation dependent flag that controls Tx coalescing
> throughput vs latency behavior and uses devargs instead.
> 
> Patch 10 uses new firmware API to fetch the maximum number of
> packets that can be coalesced in Tx path.
> 
> Patch 11 adds support for VLAN pattern match item via rte_flow offload.
> 
> Patch 12 adds support for setting VLAN PCP action item via rte_flow
> offload.
> 
> Thanks,
> Rahul
> 
> ---
> v2:
> - Remove rarely used compile time enabled debug logs from datapath
>   in patch 7.
> - In cxgbe.rst doc, use ^ (instead of -) to represent common and
>   vf-only devargs as subsection of Runtime Options in patch 8.
> 
> 
> Rahul Lakkireddy (12):
>   net/cxgbe: add cxgbe_ prefix to global functions
>   net/cxgbe: fix NULL access when allocating CLIP entry
>   net/cxgbe: fix slot allocation for IPv6 flows
>   net/cxgbe: fix parsing VLAN ID rewrite action
>   net/cxgbe: fix prefetch for non-coalesced Tx packets
>   net/cxgbe: avoid polling link status before device start
>   net/cxgbe: use dynamic logging for debug prints
>   net/cxgbe: separate VF only devargs
>   net/cxgbe: add devarg to control Tx coalescing
>   net/cxgbe: fetch max Tx coalesce limit from firmware
>   net/cxgbe: add rte_flow support for matching VLAN
>   net/cxgbe: add rte_flow support for setting VLAN PCP

Series applied to dpdk-next-net/master, thanks.


Thread overview: 30+ messages
2019-09-06 21:52 [dpdk-dev] [PATCH 00/12] net/cxgbe: bug fixes and updates for CXGBE/CXGBEVF PMD Rahul Lakkireddy
2019-09-06 21:52 ` [dpdk-dev] [PATCH 01/12] net/cxgbe: add cxgbe_ prefix to global functions Rahul Lakkireddy
2019-09-06 21:52 ` [dpdk-dev] [PATCH 02/12] net/cxgbe: fix NULL access when allocating CLIP entry Rahul Lakkireddy
2019-09-06 21:52 ` [dpdk-dev] [PATCH 03/12] net/cxgbe: fix slot allocation for IPv6 flows Rahul Lakkireddy
2019-09-06 21:52 ` [dpdk-dev] [PATCH 04/12] net/cxgbe: fix parsing VLAN ID rewrite action Rahul Lakkireddy
2019-09-06 21:52 ` [dpdk-dev] [PATCH 05/12] net/cxgbe: fix prefetch for non-coalesced Tx packets Rahul Lakkireddy
2019-09-06 21:52 ` [dpdk-dev] [PATCH 06/12] net/cxgbe: avoid polling link status before device start Rahul Lakkireddy
2019-09-06 21:52 ` [dpdk-dev] [PATCH 07/12] net/cxgbe: use dynamic logging for debug prints Rahul Lakkireddy
2019-09-27 14:37   ` Ferruh Yigit
2019-09-27 19:55     ` Rahul Lakkireddy
2019-09-06 21:52 ` [dpdk-dev] [PATCH 08/12] net/cxgbe: separate VF only devargs Rahul Lakkireddy
2019-09-06 21:52 ` [dpdk-dev] [PATCH 09/12] net/cxgbe: add devarg to control Tx coalescing Rahul Lakkireddy
2019-09-06 21:52 ` [dpdk-dev] [PATCH 10/12] net/cxgbe: fetch max Tx coalesce limit from firmware Rahul Lakkireddy
2019-09-06 21:52 ` [dpdk-dev] [PATCH 11/12] net/cxgbe: add rte_flow support for matching VLAN Rahul Lakkireddy
2019-09-06 21:52 ` [dpdk-dev] [PATCH 12/12] net/cxgbe: add rte_flow support for setting VLAN PCP Rahul Lakkireddy
2019-09-27 14:41 ` [dpdk-dev] [PATCH 00/12] net/cxgbe: bug fixes and updates for CXGBE/CXGBEVF PMD Ferruh Yigit
2019-09-27 20:30 ` [dpdk-dev] [PATCH v2 " Rahul Lakkireddy
2019-09-27 20:30   ` [dpdk-dev] [PATCH v2 01/12] net/cxgbe: add cxgbe_ prefix to global functions Rahul Lakkireddy
2019-09-27 20:30   ` [dpdk-dev] [PATCH v2 02/12] net/cxgbe: fix NULL access when allocating CLIP entry Rahul Lakkireddy
2019-09-27 20:30   ` [dpdk-dev] [PATCH v2 03/12] net/cxgbe: fix slot allocation for IPv6 flows Rahul Lakkireddy
2019-09-27 20:30   ` [dpdk-dev] [PATCH v2 04/12] net/cxgbe: fix parsing VLAN ID rewrite action Rahul Lakkireddy
2019-09-27 20:30   ` [dpdk-dev] [PATCH v2 05/12] net/cxgbe: fix prefetch for non-coalesced Tx packets Rahul Lakkireddy
2019-09-27 20:30   ` [dpdk-dev] [PATCH v2 06/12] net/cxgbe: avoid polling link status before device start Rahul Lakkireddy
2019-09-27 20:30   ` [dpdk-dev] [PATCH v2 07/12] net/cxgbe: use dynamic logging for debug prints Rahul Lakkireddy
2019-09-27 20:30   ` [dpdk-dev] [PATCH v2 08/12] net/cxgbe: separate VF only devargs Rahul Lakkireddy
2019-09-27 20:30   ` [dpdk-dev] [PATCH v2 09/12] net/cxgbe: add devarg to control Tx coalescing Rahul Lakkireddy
2019-09-27 20:30   ` [dpdk-dev] [PATCH v2 10/12] net/cxgbe: fetch max Tx coalesce limit from firmware Rahul Lakkireddy
2019-09-27 20:30   ` [dpdk-dev] [PATCH v2 11/12] net/cxgbe: add rte_flow support for matching VLAN Rahul Lakkireddy
2019-09-27 20:30   ` [dpdk-dev] [PATCH v2 12/12] net/cxgbe: add rte_flow support for setting VLAN PCP Rahul Lakkireddy
2019-09-30 12:34   ` [dpdk-dev] [PATCH v2 00/12] net/cxgbe: bug fixes and updates for CXGBE/CXGBEVF PMD Ferruh Yigit
