DPDK patches and discussions
* [dpdk-dev] [PATCH 0/9] net/cxgbe: add support for offloading flows to HASH region
@ 2018-06-29 18:12 Rahul Lakkireddy
  2018-06-29 18:12 ` [dpdk-dev] [PATCH 1/9] net/cxgbe: query firmware for HASH filter resources Rahul Lakkireddy
                   ` (9 more replies)
  0 siblings, 10 replies; 13+ messages in thread
From: Rahul Lakkireddy @ 2018-06-29 18:12 UTC (permalink / raw)
  To: dev; +Cc: shaguna, indranil, nirranjan

This series of patches adds support for offloading flows to the HASH region
available on Chelsio T6 NICs. The HASH region can only offload exact-match
(maskless) flows, so the masks must be fully set for all match items.
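
For illustration only (this snippet is not part of the series), the sketch
below shows what such an exact-match rule looks like from the application
side through the generic rte_flow API. The port id, addresses, L4 ports and
queue index are made-up values, and the exact set of accepted patterns is
PMD-specific; the point is that every matched field carries an all-ones
mask, which makes the rule a candidate for the HASH region rather than the
maskful LE-TCAM region.

    /* Illustrative only; values and helper name are examples. */
    #include <stdio.h>
    #include <rte_byteorder.h>
    #include <rte_flow.h>

    static struct rte_flow *create_exact_match_flow(uint16_t port_id)
    {
            struct rte_flow_attr attr = { .ingress = 1 };

            /* Fully specified IPv4/TCP 4-tuple: all-ones masks throughout. */
            struct rte_flow_item_ipv4 ip_spec = {
                    .hdr.src_addr = rte_cpu_to_be_32(0xc0a8010a), /* 192.168.1.10 */
                    .hdr.dst_addr = rte_cpu_to_be_32(0xc0a80114), /* 192.168.1.20 */
            };
            struct rte_flow_item_ipv4 ip_mask = {
                    .hdr.src_addr = rte_cpu_to_be_32(0xffffffff),
                    .hdr.dst_addr = rte_cpu_to_be_32(0xffffffff),
            };
            struct rte_flow_item_tcp tcp_spec = {
                    .hdr.src_port = rte_cpu_to_be_16(1000),
                    .hdr.dst_port = rte_cpu_to_be_16(2000),
            };
            struct rte_flow_item_tcp tcp_mask = {
                    .hdr.src_port = rte_cpu_to_be_16(0xffff),
                    .hdr.dst_port = rte_cpu_to_be_16(0xffff),
            };
            struct rte_flow_item pattern[] = {
                    { .type = RTE_FLOW_ITEM_TYPE_IPV4,
                      .spec = &ip_spec, .mask = &ip_mask },
                    { .type = RTE_FLOW_ITEM_TYPE_TCP,
                      .spec = &tcp_spec, .mask = &tcp_mask },
                    { .type = RTE_FLOW_ITEM_TYPE_END },
            };
            struct rte_flow_action_queue queue = { .index = 0 };
            struct rte_flow_action actions[] = {
                    { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
                    { .type = RTE_FLOW_ACTION_TYPE_END },
            };
            struct rte_flow_error err;
            struct rte_flow *flow;

            /* With all-ones masks this is an exact-match candidate; partial
             * masks would instead steer the rule to the LE-TCAM region.
             */
            flow = rte_flow_create(port_id, &attr, pattern, actions, &err);
            if (!flow)
                    printf("flow create failed: %s\n",
                           err.message ? err.message : "(no message)");
            return flow;
    }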

Patch 1 queries firmware for HASH filter support.

Patch 2 updates cxgbe_flow to decide whether to place flows in LE-TCAM
or HASH region based on supported hardware configuration and masks of
match items.

Patch 3 adds Compressed Local IP (CLIP) region support for offloading
IPv6 flows in HASH region. Also updates LE-TCAM region to use CLIP for
offloading IPv6 flows on Chelsio T6 NICs.

Patch 4 adds support for offloading flows to HASH region.

Patch 5 adds support for deleting flows in HASH region.

Patch 6 adds support to query hit and byte counters for offloaded flows
in HASH region.

Patch 7 adds support to flush filters in HASH region.

Patch 8 adds support to match flows based on physical ingress port.

Patch 9 adds support to redirect packets matching flows to specified
physical egress port without sending them to host.
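
As a rough, application-side sketch of the patch 8 and 9 use case (again not
part of the series; it assumes the generic rte_flow PHY_PORT item and action,
and the port indices are examples), a rule matching traffic arriving on
physical port 0 and switching it out of physical port 1 without involving the
host could look like the snippet below. In practice the PHY_PORT item would
be combined with the exact-match tuple shown earlier, since a bare PHY_PORT
pattern may not be accepted by the PMD.

    /* Illustrative only; port indices and helper name are examples. */
    #include <rte_flow.h>

    static struct rte_flow *switch_port0_to_port1(uint16_t port_id,
                                                  struct rte_flow_error *err)
    {
            struct rte_flow_attr attr = { .ingress = 1 };
            /* Match packets received on physical port 0. */
            struct rte_flow_item_phy_port in_port = { .index = 0 };
            struct rte_flow_item_phy_port in_port_mask = { .index = 0xffffffff };
            struct rte_flow_item pattern[] = {
                    { .type = RTE_FLOW_ITEM_TYPE_PHY_PORT,
                      .spec = &in_port, .mask = &in_port_mask },
                    { .type = RTE_FLOW_ITEM_TYPE_END },
            };
            /* Redirect matches straight out of physical port 1. */
            struct rte_flow_action_phy_port out_port = { .index = 1 };
            struct rte_flow_action actions[] = {
                    { .type = RTE_FLOW_ACTION_TYPE_PHY_PORT, .conf = &out_port },
                    { .type = RTE_FLOW_ACTION_TYPE_END },
            };

            return rte_flow_create(port_id, &attr, pattern, actions, err);
    }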

Thanks,
Rahul

Shagun Agrawal (9):
  net/cxgbe: query firmware for HASH filter resources
  net/cxgbe: validate flows offloaded to HASH region
  net/cxgbe: add Compressed Local IP region
  net/cxgbe: add support to offload flows to HASH region
  net/cxgbe: add support to delete flows in HASH region
  net/cxgbe: add support to query hit counters for flows in HASH region
  net/cxgbe: add support to flush flows in HASH region
  net/cxgbe: add support to match on ingress physical port
  net/cxgbe: add support to redirect packets to egress physical port

 drivers/net/cxgbe/Makefile              |   1 +
 drivers/net/cxgbe/base/adapter.h        |  43 ++
 drivers/net/cxgbe/base/common.h         |  10 +
 drivers/net/cxgbe/base/t4_hw.c          |   7 +
 drivers/net/cxgbe/base/t4_msg.h         | 188 +++++++++
 drivers/net/cxgbe/base/t4_regs.h        |  12 +
 drivers/net/cxgbe/base/t4_tcb.h         |  26 ++
 drivers/net/cxgbe/base/t4fw_interface.h |  31 ++
 drivers/net/cxgbe/clip_tbl.c            | 195 +++++++++
 drivers/net/cxgbe/clip_tbl.h            |  31 ++
 drivers/net/cxgbe/cxgbe_compat.h        |  12 +
 drivers/net/cxgbe/cxgbe_filter.c        | 697 ++++++++++++++++++++++++++++++--
 drivers/net/cxgbe/cxgbe_filter.h        |  13 +-
 drivers/net/cxgbe/cxgbe_flow.c          | 151 ++++++-
 drivers/net/cxgbe/cxgbe_main.c          | 170 +++++++-
 drivers/net/cxgbe/cxgbe_ofld.h          |  66 ++-
 16 files changed, 1614 insertions(+), 39 deletions(-)
 create mode 100644 drivers/net/cxgbe/base/t4_tcb.h
 create mode 100644 drivers/net/cxgbe/clip_tbl.c
 create mode 100644 drivers/net/cxgbe/clip_tbl.h

-- 
2.14.1

* [dpdk-dev] [PATCH 1/9] net/cxgbe: query firmware for HASH filter resources
  2018-06-29 18:12 [dpdk-dev] [PATCH 0/9] net/cxgbe: add support for offloading flows to HASH region Rahul Lakkireddy
@ 2018-06-29 18:12 ` Rahul Lakkireddy
  2018-06-29 18:12 ` [dpdk-dev] [PATCH 2/9] net/cxgbe: validate flows offloaded to HASH region Rahul Lakkireddy
                   ` (8 subsequent siblings)
  9 siblings, 0 replies; 13+ messages in thread
From: Rahul Lakkireddy @ 2018-06-29 18:12 UTC (permalink / raw)
  To: dev; +Cc: shaguna, indranil, nirranjan

From: Shagun Agrawal <shaguna@chelsio.com>

Fetch the available HASH filter resources and allocate a table for managing
them. This is currently supported only on the Chelsio T6 family of NICs.

Signed-off-by: Shagun Agrawal <shaguna@chelsio.com>
Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
---
 drivers/net/cxgbe/base/common.h  |  7 ++++++
 drivers/net/cxgbe/base/t4_regs.h |  9 ++++++++
 drivers/net/cxgbe/cxgbe_compat.h | 12 ++++++++++
 drivers/net/cxgbe/cxgbe_filter.c | 38 ++++++++++++++++++++++++++++++++
 drivers/net/cxgbe/cxgbe_filter.h |  1 +
 drivers/net/cxgbe/cxgbe_main.c   | 47 ++++++++++++++++++++++++++++++++++++----
 drivers/net/cxgbe/cxgbe_ofld.h   | 26 ++++++++++++++++++++--
 7 files changed, 134 insertions(+), 6 deletions(-)

diff --git a/drivers/net/cxgbe/base/common.h b/drivers/net/cxgbe/base/common.h
index e524f7931..a276a1ef5 100644
--- a/drivers/net/cxgbe/base/common.h
+++ b/drivers/net/cxgbe/base/common.h
@@ -251,6 +251,8 @@ struct adapter_params {
 	unsigned char nports;             /* # of ethernet ports */
 	unsigned char portvec;
 
+	unsigned char hash_filter;
+
 	enum chip_type chip;              /* chip code */
 	struct arch_specific_params arch; /* chip specific params */
 
@@ -314,6 +316,11 @@ static inline int is_pf4(struct adapter *adap)
 #define for_each_port(adapter, iter) \
 	for (iter = 0; iter < (adapter)->params.nports; ++iter)
 
+static inline int is_hashfilter(const struct adapter *adap)
+{
+	return adap->params.hash_filter;
+}
+
 void t4_read_mtu_tbl(struct adapter *adap, u16 *mtus, u8 *mtu_log);
 void t4_tp_wr_bits_indirect(struct adapter *adap, unsigned int addr,
 			    unsigned int mask, unsigned int val);
diff --git a/drivers/net/cxgbe/base/t4_regs.h b/drivers/net/cxgbe/base/t4_regs.h
index fd8f9cf27..a1f6208ea 100644
--- a/drivers/net/cxgbe/base/t4_regs.h
+++ b/drivers/net/cxgbe/base/t4_regs.h
@@ -937,3 +937,12 @@
 #define M_REV    0xfU
 #define V_REV(x) ((x) << S_REV)
 #define G_REV(x) (((x) >> S_REV) & M_REV)
+
+/* registers for module LE */
+#define A_LE_DB_CONFIG 0x19c04
+
+#define S_HASHEN    20
+#define V_HASHEN(x) ((x) << S_HASHEN)
+#define F_HASHEN    V_HASHEN(1U)
+
+#define A_LE_DB_TID_HASHBASE 0x19df8
diff --git a/drivers/net/cxgbe/cxgbe_compat.h b/drivers/net/cxgbe/cxgbe_compat.h
index 779bcf165..609156499 100644
--- a/drivers/net/cxgbe/cxgbe_compat.h
+++ b/drivers/net/cxgbe/cxgbe_compat.h
@@ -250,4 +250,16 @@ static inline void writel_relaxed(unsigned int val, volatile void __iomem *addr)
 	rte_write32_relaxed(val, addr);
 }
 
+/*
+ * Multiplies an integer by a fraction, while avoiding unnecessary
+ * overflow or loss of precision.
+ */
+#define mult_frac(x, numer, denom)(                     \
+{                                                       \
+	typeof(x) quot = (x) / (denom);                 \
+	typeof(x) rem  = (x) % (denom);                 \
+	(quot * (numer)) + ((rem * (numer)) / (denom)); \
+}                                                       \
+)
+
 #endif /* _CXGBE_COMPAT_H_ */
diff --git a/drivers/net/cxgbe/cxgbe_filter.c b/drivers/net/cxgbe/cxgbe_filter.c
index d098b9308..a5d20d164 100644
--- a/drivers/net/cxgbe/cxgbe_filter.c
+++ b/drivers/net/cxgbe/cxgbe_filter.c
@@ -7,6 +7,44 @@
 #include "t4_regs.h"
 #include "cxgbe_filter.h"
 
+/**
+ * Initialize Hash Filters
+ */
+int init_hash_filter(struct adapter *adap)
+{
+	unsigned int n_user_filters;
+	unsigned int user_filter_perc;
+	int ret;
+	u32 params[7], val[7];
+
+#define FW_PARAM_DEV(param) \
+	(V_FW_PARAMS_MNEM(FW_PARAMS_MNEM_DEV) | \
+	V_FW_PARAMS_PARAM_X(FW_PARAMS_PARAM_DEV_##param))
+
+#define FW_PARAM_PFVF(param) \
+	(V_FW_PARAMS_MNEM(FW_PARAMS_MNEM_PFVF) | \
+	V_FW_PARAMS_PARAM_X(FW_PARAMS_PARAM_PFVF_##param) |  \
+	V_FW_PARAMS_PARAM_Y(0) | \
+	V_FW_PARAMS_PARAM_Z(0))
+
+	params[0] = FW_PARAM_DEV(NTID);
+	ret = t4_query_params(adap, adap->mbox, adap->pf, 0, 1,
+			      params, val);
+	if (ret < 0)
+		return ret;
+	adap->tids.ntids = val[0];
+	adap->tids.natids = min(adap->tids.ntids / 2, MAX_ATIDS);
+
+	user_filter_perc = 100;
+	n_user_filters = mult_frac(adap->tids.nftids,
+				   user_filter_perc,
+				   100);
+
+	adap->tids.nftids = n_user_filters;
+	adap->params.hash_filter = 1;
+	return 0;
+}
+
 /**
  * Validate if the requested filter specification can be set by checking
  * if the requested features have been enabled
diff --git a/drivers/net/cxgbe/cxgbe_filter.h b/drivers/net/cxgbe/cxgbe_filter.h
index 4df37b9cd..6758a1879 100644
--- a/drivers/net/cxgbe/cxgbe_filter.h
+++ b/drivers/net/cxgbe/cxgbe_filter.h
@@ -220,6 +220,7 @@ int cxgbe_del_filter(struct rte_eth_dev *dev, unsigned int filter_id,
 		     struct ch_filter_specification *fs,
 		     struct filter_ctx *ctx);
 int cxgbe_alloc_ftid(struct adapter *adap, unsigned int family);
+int init_hash_filter(struct adapter *adap);
 int validate_filter(struct adapter *adap, struct ch_filter_specification *fs);
 int cxgbe_get_filter_count(struct adapter *adapter, unsigned int fidx,
 			   u64 *c, bool get_byte);
diff --git a/drivers/net/cxgbe/cxgbe_main.c b/drivers/net/cxgbe/cxgbe_main.c
index 21ad380ae..c692939db 100644
--- a/drivers/net/cxgbe/cxgbe_main.c
+++ b/drivers/net/cxgbe/cxgbe_main.c
@@ -287,24 +287,43 @@ static int tid_init(struct tid_info *t)
 {
 	size_t size;
 	unsigned int ftid_bmap_size;
+	unsigned int natids = t->natids;
 	unsigned int max_ftids = t->nftids;
 
 	ftid_bmap_size = rte_bitmap_get_memory_footprint(t->nftids);
 	size = t->ntids * sizeof(*t->tid_tab) +
-		max_ftids * sizeof(*t->ftid_tab);
+		max_ftids * sizeof(*t->ftid_tab) +
+		natids * sizeof(*t->atid_tab);
 
 	t->tid_tab = t4_os_alloc(size);
 	if (!t->tid_tab)
 		return -ENOMEM;
 
-	t->ftid_tab = (struct filter_entry *)&t->tid_tab[t->ntids];
+	t->atid_tab = (union aopen_entry *)&t->tid_tab[t->ntids];
+	t->ftid_tab = (struct filter_entry *)&t->tid_tab[t->natids];
 	t->ftid_bmap_array = t4_os_alloc(ftid_bmap_size);
 	if (!t->ftid_bmap_array) {
 		tid_free(t);
 		return -ENOMEM;
 	}
 
+	t4_os_lock_init(&t->atid_lock);
 	t4_os_lock_init(&t->ftid_lock);
+
+	t->afree = NULL;
+	t->atids_in_use = 0;
+	rte_atomic32_init(&t->tids_in_use);
+	rte_atomic32_set(&t->tids_in_use, 0);
+	rte_atomic32_init(&t->conns_in_use);
+	rte_atomic32_set(&t->conns_in_use, 0);
+
+	/* Setup the free list for atid_tab and clear the stid bitmap. */
+	if (natids) {
+		while (--natids)
+			t->atid_tab[natids - 1].next = &t->atid_tab[natids];
+		t->afree = t->atid_tab;
+	}
+
 	t->ftid_bmap = rte_bitmap_init(t->nftids, t->ftid_bmap_array,
 				       ftid_bmap_size);
 	if (!t->ftid_bmap) {
@@ -784,8 +803,7 @@ static int adap_init0_config(struct adapter *adapter, int reset)
 	 * This will allow the firmware to optimize aspects of the hardware
 	 * configuration which will result in improved performance.
 	 */
-	caps_cmd.niccaps &= cpu_to_be16(~(FW_CAPS_CONFIG_NIC_HASHFILTER |
-					  FW_CAPS_CONFIG_NIC_ETHOFLD));
+	caps_cmd.niccaps &= cpu_to_be16(~FW_CAPS_CONFIG_NIC_ETHOFLD);
 	caps_cmd.toecaps = 0;
 	caps_cmd.iscsicaps = 0;
 	caps_cmd.rdmacaps = 0;
@@ -990,6 +1008,12 @@ static int adap_init0(struct adapter *adap)
 	if (ret < 0)
 		goto bye;
 
+	if ((caps_cmd.niccaps & cpu_to_be16(FW_CAPS_CONFIG_NIC_HASHFILTER)) &&
+	    is_t6(adap->params.chip)) {
+		if (init_hash_filter(adap) < 0)
+			goto bye;
+	}
+
 	/* query tid-related parameters */
 	params[0] = FW_PARAM_DEV(NTID);
 	ret = t4_query_params(adap, adap->mbox, adap->pf, 0, 1,
@@ -997,6 +1021,7 @@ static int adap_init0(struct adapter *adap)
 	if (ret < 0)
 		goto bye;
 	adap->tids.ntids = val[0];
+	adap->tids.natids = min(adap->tids.ntids / 2, MAX_ATIDS);
 
 	/* If we're running on newer firmware, let it know that we're
 	 * prepared to deal with encapsulated CPL messages.  Older
@@ -1653,6 +1678,20 @@ int cxgbe_probe(struct adapter *adapter)
 			 "filter support disabled. Continuing\n");
 	}
 
+	if (is_hashfilter(adapter)) {
+		if (t4_read_reg(adapter, A_LE_DB_CONFIG) & F_HASHEN) {
+			u32 hash_base, hash_reg;
+
+			hash_reg = A_LE_DB_TID_HASHBASE;
+			hash_base = t4_read_reg(adapter, hash_reg);
+			adapter->tids.hash_base = hash_base / 4;
+		}
+	} else {
+		/* Disable hash filtering support */
+		dev_warn(adapter,
+			 "Maskless filter support disabled. Continuing\n");
+	}
+
 	err = init_rss(adapter);
 	if (err)
 		goto out_free;
diff --git a/drivers/net/cxgbe/cxgbe_ofld.h b/drivers/net/cxgbe/cxgbe_ofld.h
index 9f382f659..e97c42469 100644
--- a/drivers/net/cxgbe/cxgbe_ofld.h
+++ b/drivers/net/cxgbe/cxgbe_ofld.h
@@ -10,6 +10,16 @@
 
 #include "cxgbe_filter.h"
 
+/*
+ * Max # of ATIDs.  The absolute HW max is 16K but we keep it lower.
+ */
+#define MAX_ATIDS 8192U
+
+union aopen_entry {
+	void *data;
+	union aopen_entry *next;
+};
+
 /*
  * Holds the size, base address, free list start, etc of filter TID.
  * The tables themselves are allocated dynamically.
@@ -18,10 +28,22 @@ struct tid_info {
 	void **tid_tab;
 	unsigned int ntids;
 	struct filter_entry *ftid_tab;	/* Normal filters */
+	union aopen_entry *atid_tab;
 	struct rte_bitmap *ftid_bmap;
 	uint8_t *ftid_bmap_array;
-	unsigned int nftids;
-	unsigned int ftid_base;
+	unsigned int nftids, natids;
+	unsigned int ftid_base, hash_base;
+
+	union aopen_entry *afree;
+	unsigned int atids_in_use;
+
+	/* TIDs in the TCAM */
+	rte_atomic32_t tids_in_use;
+	/* TIDs in the HASH */
+	rte_atomic32_t hash_tids_in_use;
+	rte_atomic32_t conns_in_use;
+
+	rte_spinlock_t atid_lock __rte_cache_aligned;
 	rte_spinlock_t ftid_lock;
 };
 #endif /* _CXGBE_OFLD_H_ */
-- 
2.14.1

* [dpdk-dev] [PATCH 2/9] net/cxgbe: validate flows offloaded to HASH region
  2018-06-29 18:12 [dpdk-dev] [PATCH 0/9] net/cxgbe: add support for offloading flows to HASH region Rahul Lakkireddy
  2018-06-29 18:12 ` [dpdk-dev] [PATCH 1/9] net/cxgbe: query firmware for HASH filter resources Rahul Lakkireddy
@ 2018-06-29 18:12 ` Rahul Lakkireddy
  2018-06-29 18:12 ` [dpdk-dev] [PATCH 3/9] net/cxgbe: add Compressed Local IP region Rahul Lakkireddy
                   ` (7 subsequent siblings)
  9 siblings, 0 replies; 13+ messages in thread
From: Rahul Lakkireddy @ 2018-06-29 18:12 UTC (permalink / raw)
  To: dev; +Cc: shaguna, indranil, nirranjan

From: Shagun Agrawal <shaguna@chelsio.com>

Fetch the match items supported by the HASH region. Ensure the mask is
fully set for all the supported match items before offloading a flow to
the HASH region. Otherwise, offload it to the LE-TCAM region.

Signed-off-by: Shagun Agrawal <shaguna@chelsio.com>
Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
---
 drivers/net/cxgbe/base/common.h  |  3 +++
 drivers/net/cxgbe/base/t4_hw.c   |  7 +++++
 drivers/net/cxgbe/base/t4_regs.h |  3 +++
 drivers/net/cxgbe/cxgbe_filter.h |  1 +
 drivers/net/cxgbe/cxgbe_flow.c   | 57 ++++++++++++++++++++++++++++++++++++++++
 5 files changed, 71 insertions(+)

diff --git a/drivers/net/cxgbe/base/common.h b/drivers/net/cxgbe/base/common.h
index a276a1ef5..ea3e112b9 100644
--- a/drivers/net/cxgbe/base/common.h
+++ b/drivers/net/cxgbe/base/common.h
@@ -156,6 +156,9 @@ struct tp_params {
 	int vnic_shift;
 	int port_shift;
 	int protocol_shift;
+	int ethertype_shift;
+
+	u64 hash_filter_mask;
 };
 
 struct vpd_params {
diff --git a/drivers/net/cxgbe/base/t4_hw.c b/drivers/net/cxgbe/base/t4_hw.c
index 66d080476..0893b7ba0 100644
--- a/drivers/net/cxgbe/base/t4_hw.c
+++ b/drivers/net/cxgbe/base/t4_hw.c
@@ -5032,6 +5032,8 @@ int t4_init_tp_params(struct adapter *adap)
 	adap->params.tp.port_shift = t4_filter_field_shift(adap, F_PORT);
 	adap->params.tp.protocol_shift = t4_filter_field_shift(adap,
 							       F_PROTOCOL);
+	adap->params.tp.ethertype_shift = t4_filter_field_shift(adap,
+								F_ETHERTYPE);
 
 	/*
 	 * If TP_INGRESS_CONFIG.VNID == 0, then TP_VLAN_PRI_MAP.VNIC_ID
@@ -5040,6 +5042,11 @@ int t4_init_tp_params(struct adapter *adap)
 	if ((adap->params.tp.ingress_config & F_VNIC) == 0)
 		adap->params.tp.vnic_shift = -1;
 
+	v = t4_read_reg(adap, LE_3_DB_HASH_MASK_GEN_IPV4_T6_A);
+	adap->params.tp.hash_filter_mask = v;
+	v = t4_read_reg(adap, LE_4_DB_HASH_MASK_GEN_IPV4_T6_A);
+	adap->params.tp.hash_filter_mask |= ((u64)v << 32);
+
 	return 0;
 }
 
diff --git a/drivers/net/cxgbe/base/t4_regs.h b/drivers/net/cxgbe/base/t4_regs.h
index a1f6208ea..cbaf415fa 100644
--- a/drivers/net/cxgbe/base/t4_regs.h
+++ b/drivers/net/cxgbe/base/t4_regs.h
@@ -946,3 +946,6 @@
 #define F_HASHEN    V_HASHEN(1U)
 
 #define A_LE_DB_TID_HASHBASE 0x19df8
+
+#define LE_3_DB_HASH_MASK_GEN_IPV4_T6_A 0x19eac
+#define LE_4_DB_HASH_MASK_GEN_IPV4_T6_A 0x19eb0
diff --git a/drivers/net/cxgbe/cxgbe_filter.h b/drivers/net/cxgbe/cxgbe_filter.h
index 6758a1879..27421a475 100644
--- a/drivers/net/cxgbe/cxgbe_filter.h
+++ b/drivers/net/cxgbe/cxgbe_filter.h
@@ -86,6 +86,7 @@ struct ch_filter_specification {
 	 * matching that doesn't exist as a (value, mask) tuple.
 	 */
 	uint32_t type:1;	/* 0 => IPv4, 1 => IPv6 */
+	uint32_t cap:1;		/* 0 => LE-TCAM, 1 => Hash */
 
 	/*
 	 * Packet dispatch information.  Ingress packets which match the
diff --git a/drivers/net/cxgbe/cxgbe_flow.c b/drivers/net/cxgbe/cxgbe_flow.c
index 23b7d00b3..dfb5fac2b 100644
--- a/drivers/net/cxgbe/cxgbe_flow.c
+++ b/drivers/net/cxgbe/cxgbe_flow.c
@@ -48,6 +48,58 @@ cxgbe_validate_item(const struct rte_flow_item *i, struct rte_flow_error *e)
 	return 0;
 }
 
+static void
+cxgbe_fill_filter_region(struct adapter *adap,
+			 struct ch_filter_specification *fs)
+{
+	struct tp_params *tp = &adap->params.tp;
+	u64 hash_filter_mask = tp->hash_filter_mask;
+	u64 ntuple_mask = 0;
+
+	fs->cap = 0;
+
+	if (!is_hashfilter(adap))
+		return;
+
+	if (fs->type) {
+		uint8_t biton[16] = {0xff, 0xff, 0xff, 0xff,
+				     0xff, 0xff, 0xff, 0xff,
+				     0xff, 0xff, 0xff, 0xff,
+				     0xff, 0xff, 0xff, 0xff};
+		uint8_t bitoff[16] = {0};
+
+		if (!memcmp(fs->val.lip, bitoff, sizeof(bitoff)) ||
+		    !memcmp(fs->val.fip, bitoff, sizeof(bitoff)) ||
+		    memcmp(fs->mask.lip, biton, sizeof(biton)) ||
+		    memcmp(fs->mask.fip, biton, sizeof(biton)))
+			return;
+	} else {
+		uint32_t biton  = 0xffffffff;
+		uint32_t bitoff = 0x0U;
+
+		if (!memcmp(fs->val.lip, &bitoff, sizeof(bitoff)) ||
+		    !memcmp(fs->val.fip, &bitoff, sizeof(bitoff)) ||
+		    memcmp(fs->mask.lip, &biton, sizeof(biton)) ||
+		    memcmp(fs->mask.fip, &biton, sizeof(biton)))
+			return;
+	}
+
+	if (!fs->val.lport || fs->mask.lport != 0xffff)
+		return;
+	if (!fs->val.fport || fs->mask.fport != 0xffff)
+		return;
+
+	if (tp->protocol_shift >= 0)
+		ntuple_mask |= (u64)fs->mask.proto << tp->protocol_shift;
+	if (tp->ethertype_shift >= 0)
+		ntuple_mask |= (u64)fs->mask.ethtype << tp->ethertype_shift;
+
+	if (ntuple_mask != hash_filter_mask)
+		return;
+
+	fs->cap = 1;	/* use hash region */
+}
+
 static int
 ch_rte_parsetype_udp(const void *dmask, const struct rte_flow_item *item,
 		     struct ch_filter_specification *fs,
@@ -222,6 +274,8 @@ cxgbe_validate_fidxonadd(struct ch_filter_specification *fs,
 static int
 cxgbe_verify_fidx(struct rte_flow *flow, unsigned int fidx, uint8_t del)
 {
+	if (flow->fs.cap)
+		return 0; /* Hash filters */
 	return del ? cxgbe_validate_fidxondel(flow->f, fidx) :
 		cxgbe_validate_fidxonadd(&flow->fs,
 					 ethdev2adap(flow->dev), fidx);
@@ -329,6 +383,7 @@ cxgbe_rtef_parse_items(struct rte_flow *flow,
 		       const struct rte_flow_item items[],
 		       struct rte_flow_error *e)
 {
+	struct adapter *adap = ethdev2adap(flow->dev);
 	const struct rte_flow_item *i;
 	char repeat[ARRAY_SIZE(parseitem)] = {0};
 
@@ -369,6 +424,8 @@ cxgbe_rtef_parse_items(struct rte_flow *flow,
 		}
 	}
 
+	cxgbe_fill_filter_region(adap, &flow->fs);
+
 	return 0;
 }
 
-- 
2.14.1

* [dpdk-dev] [PATCH 3/9] net/cxgbe: add Compressed Local IP region
  2018-06-29 18:12 [dpdk-dev] [PATCH 0/9] net/cxgbe: add support for offloading flows to HASH region Rahul Lakkireddy
  2018-06-29 18:12 ` [dpdk-dev] [PATCH 1/9] net/cxgbe: query firmware for HASH filter resources Rahul Lakkireddy
  2018-06-29 18:12 ` [dpdk-dev] [PATCH 2/9] net/cxgbe: validate flows offloaded to HASH region Rahul Lakkireddy
@ 2018-06-29 18:12 ` Rahul Lakkireddy
  2018-07-04 19:22   ` Ferruh Yigit
  2018-06-29 18:12 ` [dpdk-dev] [PATCH 4/9] net/cxgbe: add support to offload flows to HASH region Rahul Lakkireddy
                   ` (6 subsequent siblings)
  9 siblings, 1 reply; 13+ messages in thread
From: Rahul Lakkireddy @ 2018-06-29 18:12 UTC (permalink / raw)
  To: dev; +Cc: shaguna, indranil, nirranjan

From: Shagun Agrawal <shaguna@chelsio.com>

The CLIP region holds the destination IPv6 addresses to be matched for the
corresponding flows. Query firmware for CLIP resources and allocate a table
to manage them. Also update the LE-TCAM region to use CLIP, which reduces
the number of slots needed to offload IPv6 flows.

Signed-off-by: Shagun Agrawal <shaguna@chelsio.com>
Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
---
 drivers/net/cxgbe/Makefile              |   1 +
 drivers/net/cxgbe/base/adapter.h        |  32 ++++++
 drivers/net/cxgbe/base/t4fw_interface.h |  19 ++++
 drivers/net/cxgbe/clip_tbl.c            | 195 ++++++++++++++++++++++++++++++++
 drivers/net/cxgbe/clip_tbl.h            |  31 +++++
 drivers/net/cxgbe/cxgbe_filter.c        |  99 ++++++++++++----
 drivers/net/cxgbe/cxgbe_filter.h        |   1 +
 drivers/net/cxgbe/cxgbe_main.c          |  19 ++++
 8 files changed, 377 insertions(+), 20 deletions(-)
 create mode 100644 drivers/net/cxgbe/clip_tbl.c
 create mode 100644 drivers/net/cxgbe/clip_tbl.h

diff --git a/drivers/net/cxgbe/Makefile b/drivers/net/cxgbe/Makefile
index edc5d8188..5d66c4b3a 100644
--- a/drivers/net/cxgbe/Makefile
+++ b/drivers/net/cxgbe/Makefile
@@ -52,6 +52,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_CXGBE_PMD) += sge.c
 SRCS-$(CONFIG_RTE_LIBRTE_CXGBE_PMD) += cxgbe_filter.c
 SRCS-$(CONFIG_RTE_LIBRTE_CXGBE_PMD) += cxgbe_flow.c
 SRCS-$(CONFIG_RTE_LIBRTE_CXGBE_PMD) += t4_hw.c
+SRCS-$(CONFIG_RTE_LIBRTE_CXGBE_PMD) += clip_tbl.c
 SRCS-$(CONFIG_RTE_LIBRTE_CXGBE_PMD) += t4vf_hw.c
 
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/net/cxgbe/base/adapter.h b/drivers/net/cxgbe/base/adapter.h
index de46ecfe3..3ed3252e8 100644
--- a/drivers/net/cxgbe/base/adapter.h
+++ b/drivers/net/cxgbe/base/adapter.h
@@ -11,6 +11,7 @@
 #include <rte_bus_pci.h>
 #include <rte_mbuf.h>
 #include <rte_io.h>
+#include <rte_rwlock.h>
 #include <rte_ethdev.h>
 
 #include "cxgbe_compat.h"
@@ -321,9 +322,40 @@ struct adapter {
 	int use_unpacked_mode; /* unpacked rx mode state */
 	rte_spinlock_t win0_lock;
 
+	unsigned int clipt_start; /* CLIP table start */
+	unsigned int clipt_end;   /* CLIP table end */
+	struct clip_tbl *clipt;   /* CLIP table */
+
 	struct tid_info tids;     /* Info used to access TID related tables */
 };
 
+/**
+ * t4_os_rwlock_init - initialize rwlock
+ * @lock: the rwlock
+ */
+static inline void t4_os_rwlock_init(rte_rwlock_t *lock)
+{
+	rte_rwlock_init(lock);
+}
+
+/**
+ * t4_os_write_lock - get a write lock
+ * @lock: the rwlock
+ */
+static inline void t4_os_write_lock(rte_rwlock_t *lock)
+{
+	rte_rwlock_write_lock(lock);
+}
+
+/**
+ * t4_os_write_unlock - unlock a write lock
+ * @lock: the rwlock
+ */
+static inline void t4_os_write_unlock(rte_rwlock_t *lock)
+{
+	rte_rwlock_write_unlock(lock);
+}
+
 /**
  * ethdev2pinfo - return the port_info structure associated with a rte_eth_dev
  * @dev: the rte_eth_dev
diff --git a/drivers/net/cxgbe/base/t4fw_interface.h b/drivers/net/cxgbe/base/t4fw_interface.h
index 842aa1263..2433bf20c 100644
--- a/drivers/net/cxgbe/base/t4fw_interface.h
+++ b/drivers/net/cxgbe/base/t4fw_interface.h
@@ -333,6 +333,7 @@ enum fw_cmd_opcodes {
 	FW_RSS_IND_TBL_CMD             = 0x20,
 	FW_RSS_GLB_CONFIG_CMD	       = 0x22,
 	FW_RSS_VI_CONFIG_CMD           = 0x23,
+	FW_CLIP_CMD                    = 0x28,
 	FW_DEBUG_CMD                   = 0x81,
 };
 
@@ -648,6 +649,8 @@ enum fw_params_param_dev {
  * physical and virtual function parameters
  */
 enum fw_params_param_pfvf {
+	FW_PARAMS_PARAM_PFVF_CLIP_START = 0x03,
+	FW_PARAMS_PARAM_PFVF_CLIP_END = 0x04,
 	FW_PARAMS_PARAM_PFVF_FILTER_START = 0x05,
 	FW_PARAMS_PARAM_PFVF_FILTER_END = 0x06,
 	FW_PARAMS_PARAM_PFVF_CPLFW4MSG_ENCAP = 0x31,
@@ -2167,6 +2170,22 @@ struct fw_rss_vi_config_cmd {
 	(((x) >> S_FW_RSS_VI_CONFIG_CMD_UDPEN) & M_FW_RSS_VI_CONFIG_CMD_UDPEN)
 #define F_FW_RSS_VI_CONFIG_CMD_UDPEN	V_FW_RSS_VI_CONFIG_CMD_UDPEN(1U)
 
+struct fw_clip_cmd {
+	__be32 op_to_write;
+	__be32 alloc_to_len16;
+	__be64 ip_hi;
+	__be64 ip_lo;
+	__be32 r4[2];
+};
+
+#define S_FW_CLIP_CMD_ALLOC		31
+#define V_FW_CLIP_CMD_ALLOC(x)		((x) << S_FW_CLIP_CMD_ALLOC)
+#define F_FW_CLIP_CMD_ALLOC		V_FW_CLIP_CMD_ALLOC(1U)
+
+#define S_FW_CLIP_CMD_FREE		30
+#define V_FW_CLIP_CMD_FREE(x)		((x) << S_FW_CLIP_CMD_FREE)
+#define F_FW_CLIP_CMD_FREE		V_FW_CLIP_CMD_FREE(1U)
+
 /******************************************************************************
  *   D E B U G   C O M M A N D s
  ******************************************************/
diff --git a/drivers/net/cxgbe/clip_tbl.c b/drivers/net/cxgbe/clip_tbl.c
new file mode 100644
index 000000000..fa5281cd4
--- /dev/null
+++ b/drivers/net/cxgbe/clip_tbl.c
@@ -0,0 +1,195 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Chelsio Communications.
+ * All rights reserved.
+ */
+
+#include "common.h"
+#include "clip_tbl.h"
+
+/**
+ * Allocate clip entry in HW with associated IPV4/IPv6 address
+ */
+static int clip6_get_mbox(const struct rte_eth_dev *dev, const u32 *lip)
+{
+	struct adapter *adap = ethdev2adap(dev);
+	struct fw_clip_cmd c;
+	u64 hi = ((u64)lip[1]) << 32 | lip[0];
+	u64 lo = ((u64)lip[3]) << 32 | lip[2];
+
+	memset(&c, 0, sizeof(c));
+	c.op_to_write = cpu_to_be32(V_FW_CMD_OP(FW_CLIP_CMD) |
+				    F_FW_CMD_REQUEST | F_FW_CMD_WRITE);
+	c.alloc_to_len16 = cpu_to_be32(F_FW_CLIP_CMD_ALLOC | FW_LEN16(c));
+	c.ip_hi = hi;
+	c.ip_lo = lo;
+	return t4_wr_mbox_meat(adap, adap->mbox, &c, sizeof(c), &c, false);
+}
+
+/**
+ * Delete clip entry in HW having the associated IPV4/IPV6 address
+ */
+static int clip6_release_mbox(const struct rte_eth_dev *dev, const u32 *lip)
+{
+	struct adapter *adap = ethdev2adap(dev);
+	struct fw_clip_cmd c;
+	u64 hi = ((u64)lip[1]) << 32 | lip[0];
+	u64 lo = ((u64)lip[3]) << 32 | lip[2];
+
+	memset(&c, 0, sizeof(c));
+	c.op_to_write = cpu_to_be32(V_FW_CMD_OP(FW_CLIP_CMD) |
+				    F_FW_CMD_REQUEST | F_FW_CMD_READ);
+	c.alloc_to_len16 = cpu_to_be32(F_FW_CLIP_CMD_FREE | FW_LEN16(c));
+	c.ip_hi = hi;
+	c.ip_lo = lo;
+	return t4_wr_mbox_meat(adap, adap->mbox, &c, sizeof(c), &c, false);
+}
+
+/**
+ * cxgbe_clip_release - Release associated CLIP entry
+ * @ce: clip entry to release
+ *
+ * Releases ref count and frees up a clip entry from CLIP table
+ */
+void cxgbe_clip_release(struct rte_eth_dev *dev, struct clip_entry *ce)
+{
+	int ret;
+
+	t4_os_lock(&ce->lock);
+	if (rte_atomic32_dec_and_test(&ce->refcnt)) {
+		ret = clip6_release_mbox(dev, ce->addr);
+		if (ret)
+			dev_debug(adap, "CLIP FW DEL CMD failed: %d", ret);
+	}
+	t4_os_unlock(&ce->lock);
+}
+
+/**
+ * find_or_alloc_clipe - Find/Allocate a free CLIP entry
+ * @c: CLIP table
+ * @lip: IPV4/IPV6 address to compare/add
+ * Returns pointer to the IPV4/IPV6 entry found/created
+ *
+ * Finds/Allocates an CLIP entry to be used for a filter rule.
+ */
+static struct clip_entry *find_or_alloc_clipe(struct clip_tbl *c,
+					      const u32 *lip)
+{
+	struct clip_entry *end, *e;
+	struct clip_entry *first_free = NULL;
+	unsigned int clipt_size = c->clipt_size;
+
+	for (e = &c->cl_list[0], end = &c->cl_list[clipt_size]; e != end; ++e) {
+		if (rte_atomic32_read(&e->refcnt) == 0) {
+			if (!first_free)
+				first_free = e;
+		} else {
+			if (memcmp(lip, e->addr, sizeof(e->addr)) == 0)
+				goto exists;
+		}
+	}
+
+	if (first_free) {
+		e = first_free;
+		goto exists;
+	}
+
+	return NULL;
+
+exists:
+	return e;
+}
+
+static struct clip_entry *t4_clip_alloc(struct rte_eth_dev *dev,
+					u32 *lip, u8 v6)
+{
+	struct adapter *adap = ethdev2adap(dev);
+	struct clip_tbl *ctbl = adap->clipt;
+	struct clip_entry *ce;
+	int ret;
+
+	if (!ctbl)
+		return NULL;
+
+	t4_os_write_lock(&ctbl->lock);
+	ce = find_or_alloc_clipe(ctbl, lip);
+	if (ce) {
+		t4_os_lock(&ce->lock);
+		if (!rte_atomic32_read(&ce->refcnt)) {
+			rte_memcpy(ce->addr, lip, sizeof(ce->addr));
+			if (v6) {
+				ce->type = FILTER_TYPE_IPV6;
+				rte_atomic32_set(&ce->refcnt, 1);
+				ret = clip6_get_mbox(dev, lip);
+				if (ret) {
+					dev_debug(adap,
+						  "CLIP FW ADD CMD failed: %d",
+						  ret);
+					ce = NULL;
+				}
+			} else {
+				ce->type = FILTER_TYPE_IPV4;
+			}
+		} else {
+			rte_atomic32_inc(&ce->refcnt);
+		}
+		t4_os_unlock(&ce->lock);
+	}
+	t4_os_write_unlock(&ctbl->lock);
+
+	return ce;
+}
+
+/**
+ * cxgbe_clip_alloc - Allocate a IPV6 CLIP entry
+ * @dev: rte_eth_dev pointer
+ * @lip: IPV6 address to add
+ * Returns pointer to the CLIP entry created
+ *
+ * Allocates a IPV6 CLIP entry to be used for a filter rule.
+ */
+struct clip_entry *cxgbe_clip_alloc(struct rte_eth_dev *dev, u32 *lip)
+{
+	return t4_clip_alloc(dev, lip, FILTER_TYPE_IPV6);
+}
+
+/**
+ * Initialize CLIP Table
+ */
+struct clip_tbl *t4_init_clip_tbl(unsigned int clipt_start,
+				  unsigned int clipt_end)
+{
+	unsigned int clipt_size;
+	struct clip_tbl *ctbl;
+	unsigned int i;
+
+	if (clipt_start >= clipt_end)
+		return NULL;
+
+	clipt_size = clipt_end - clipt_start + 1;
+
+	ctbl = t4_os_alloc(sizeof(*ctbl) +
+			   clipt_size * sizeof(struct clip_entry));
+	if (!ctbl)
+		return NULL;
+
+	ctbl->clipt_start = clipt_start;
+	ctbl->clipt_size = clipt_size;
+
+	t4_os_rwlock_init(&ctbl->lock);
+
+	for (i = 0; i < ctbl->clipt_size; i++) {
+		t4_os_lock_init(&ctbl->cl_list[i].lock);
+		rte_atomic32_set(&ctbl->cl_list[i].refcnt, 0);
+	}
+
+	return ctbl;
+}
+
+/**
+ * Cleanup CLIP Table
+ */
+void t4_cleanup_clip_tbl(struct adapter *adap)
+{
+	if (adap->clipt)
+		t4_os_free(adap->clipt);
+}
diff --git a/drivers/net/cxgbe/clip_tbl.h b/drivers/net/cxgbe/clip_tbl.h
new file mode 100644
index 000000000..737ccc691
--- /dev/null
+++ b/drivers/net/cxgbe/clip_tbl.h
@@ -0,0 +1,31 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Chelsio Communications.
+ * All rights reserved.
+ */
+
+#ifndef _CXGBE_CLIP_H_
+#define _CXGBE_CLIP_H_
+
+/*
+ * State for the corresponding entry of the HW CLIP table.
+ */
+struct clip_entry {
+	enum filter_type type;       /* entry type */
+	u32 addr[4];                 /* IPV4 or IPV6 address */
+	rte_spinlock_t lock;         /* entry lock */
+	rte_atomic32_t refcnt;       /* entry reference count */
+};
+
+struct clip_tbl {
+	unsigned int clipt_start;     /* start index of CLIP table */
+	unsigned int clipt_size;      /* size of CLIP table */
+	rte_rwlock_t lock;            /* table rw lock */
+	struct clip_entry cl_list[0]; /* MUST BE LAST */
+};
+
+struct clip_tbl *t4_init_clip_tbl(unsigned int clipt_start,
+				  unsigned int clipt_end);
+void t4_cleanup_clip_tbl(struct adapter *adap);
+struct clip_entry *cxgbe_clip_alloc(struct rte_eth_dev *dev, u32 *lip);
+void cxgbe_clip_release(struct rte_eth_dev *dev, struct clip_entry *ce);
+#endif /* _CXGBE_CLIP_H_ */
diff --git a/drivers/net/cxgbe/cxgbe_filter.c b/drivers/net/cxgbe/cxgbe_filter.c
index a5d20d164..bb2ebaa62 100644
--- a/drivers/net/cxgbe/cxgbe_filter.c
+++ b/drivers/net/cxgbe/cxgbe_filter.c
@@ -6,6 +6,7 @@
 #include "common.h"
 #include "t4_regs.h"
 #include "cxgbe_filter.h"
+#include "clip_tbl.h"
 
 /**
  * Initialize Hash Filters
@@ -164,6 +165,9 @@ int cxgbe_alloc_ftid(struct adapter *adap, unsigned int family)
  */
 void clear_filter(struct filter_entry *f)
 {
+	if (f->clipt)
+		cxgbe_clip_release(f->dev, f->clipt);
+
 	/*
 	 * The zeroing of the filter rule below clears the filter valid,
 	 * pending, locked flags etc. so it's all we need for
@@ -349,11 +353,14 @@ int cxgbe_del_filter(struct rte_eth_dev *dev, unsigned int filter_id,
 	struct port_info *pi = (struct port_info *)(dev->data->dev_private);
 	struct adapter *adapter = pi->adapter;
 	struct filter_entry *f;
+	unsigned int chip_ver;
 	int ret;
 
 	if (filter_id >= adapter->tids.nftids)
 		return -ERANGE;
 
+	chip_ver = CHELSIO_CHIP_VERSION(adapter->params.chip);
+
 	ret = is_filter_set(&adapter->tids, filter_id, fs->type);
 	if (!ret) {
 		dev_warn(adap, "%s: could not find filter entry: %u\n",
@@ -361,6 +368,17 @@ int cxgbe_del_filter(struct rte_eth_dev *dev, unsigned int filter_id,
 		return -EINVAL;
 	}
 
+	/*
+	 * Ensure filter id is aligned on the 2 slot boundary for T6,
+	 * and 4 slot boundary for cards below T6.
+	 */
+	if (fs->type) {
+		if (chip_ver < CHELSIO_T6)
+			filter_id &= ~(0x3);
+		else
+			filter_id &= ~(0x1);
+	}
+
 	f = &adapter->tids.ftid_tab[filter_id];
 	ret = writable_filter(f);
 	if (ret)
@@ -403,11 +421,15 @@ int cxgbe_set_filter(struct rte_eth_dev *dev, unsigned int filter_id,
 	struct adapter *adapter = pi->adapter;
 	unsigned int fidx, iq, fid_bit = 0;
 	struct filter_entry *f;
+	unsigned int chip_ver;
+	uint8_t bitoff[16] = {0};
 	int ret;
 
 	if (filter_id >= adapter->tids.nftids)
 		return -ERANGE;
 
+	chip_ver = CHELSIO_CHIP_VERSION(adapter->params.chip);
+
 	ret = validate_filter(adapter, fs);
 	if (ret)
 		return ret;
@@ -426,38 +448,61 @@ int cxgbe_set_filter(struct rte_eth_dev *dev, unsigned int filter_id,
 	iq = get_filter_steerq(dev, fs);
 
 	/*
-	 * IPv6 filters occupy four slots and must be aligned on
-	 * four-slot boundaries.  IPv4 filters only occupy a single
-	 * slot and have no alignment requirements but writing a new
-	 * IPv4 filter into the middle of an existing IPv6 filter
-	 * requires clearing the old IPv6 filter.
+	 * IPv6 filters occupy four slots and must be aligned on four-slot
+	 * boundaries for T5. On T6, IPv6 filters occupy two-slots and
+	 * must be aligned on two-slot boundaries.
+	 *
+	 * IPv4 filters only occupy a single slot and have no alignment
+	 * requirements but writing a new IPv4 filter into the middle
+	 * of an existing IPv6 filter requires clearing the old IPv6
+	 * filter.
 	 */
 	if (fs->type == FILTER_TYPE_IPV4) { /* IPv4 */
 		/*
-		 * If our IPv4 filter isn't being written to a
-		 * multiple of four filter index and there's an IPv6
-		 * filter at the multiple of 4 base slot, then we need
+		 * For T6, If our IPv4 filter isn't being written to a
+		 * multiple of two filter index and there's an IPv6
+		 * filter at the multiple of 2 base slot, then we need
 		 * to delete that IPv6 filter ...
+		 * For adapters below T6, IPv6 filter occupies 4 entries.
 		 */
-		fidx = filter_id & ~0x3;
+		if (chip_ver < CHELSIO_T6)
+			fidx = filter_id & ~0x3;
+		else
+			fidx = filter_id & ~0x1;
+
 		if (fidx != filter_id && adapter->tids.ftid_tab[fidx].fs.type) {
 			f = &adapter->tids.ftid_tab[fidx];
 			if (f->valid)
 				return -EBUSY;
 		}
 	} else { /* IPv6 */
-		/*
-		 * Ensure that the IPv6 filter is aligned on a
-		 * multiple of 4 boundary.
-		 */
-		if (filter_id & 0x3)
-			return -EINVAL;
+		unsigned int max_filter_id;
+
+		if (chip_ver < CHELSIO_T6) {
+			/*
+			 * Ensure that the IPv6 filter is aligned on a
+			 * multiple of 4 boundary.
+			 */
+			if (filter_id & 0x3)
+				return -EINVAL;
+
+			max_filter_id = filter_id + 4;
+		} else {
+			/*
+			 * For T6, CLIP being enabled, IPv6 filter would occupy
+			 * 2 entries.
+			 */
+			if (filter_id & 0x1)
+				return -EINVAL;
+
+			max_filter_id = filter_id + 2;
+		}
 
 		/*
 		 * Check all except the base overlapping IPv4 filter
 		 * slots.
 		 */
-		for (fidx = filter_id + 1; fidx < filter_id + 4; fidx++) {
+		for (fidx = filter_id + 1; fidx < max_filter_id; fidx++) {
 			f = &adapter->tids.ftid_tab[fidx];
 			if (f->valid)
 				return -EBUSY;
@@ -491,6 +536,16 @@ int cxgbe_set_filter(struct rte_eth_dev *dev, unsigned int filter_id,
 		return ret;
 	}
 
+	/*
+	 * Allocate a clip table entry only if we have non-zero IPv6 address
+	 */
+	if (chip_ver > CHELSIO_T5 && fs->type &&
+	    memcmp(fs->val.lip, bitoff, sizeof(bitoff))) {
+		f->clipt = cxgbe_clip_alloc(f->dev, (u32 *)&f->fs.val.lip);
+		if (!f->clipt)
+			goto free_tid;
+	}
+
 	/*
 	 * Convert the filter specification into our internal format.
 	 * We copy the PF/VF specification into the Outer VLAN field
@@ -510,13 +565,17 @@ int cxgbe_set_filter(struct rte_eth_dev *dev, unsigned int filter_id,
 	ret = set_filter_wr(dev, filter_id);
 	if (ret) {
 		fid_bit = f->tid - adapter->tids.ftid_base;
-		cxgbe_clear_ftid(&adapter->tids, fid_bit,
-				 fs->type ? FILTER_TYPE_IPV6 :
-					    FILTER_TYPE_IPV4);
-		clear_filter(f);
+		goto free_tid;
 	}
 
 	return ret;
+
+free_tid:
+	cxgbe_clear_ftid(&adapter->tids, fid_bit,
+			 fs->type ? FILTER_TYPE_IPV6 :
+				    FILTER_TYPE_IPV4);
+	clear_filter(f);
+	return ret;
 }
 
 /**
diff --git a/drivers/net/cxgbe/cxgbe_filter.h b/drivers/net/cxgbe/cxgbe_filter.h
index 27421a475..ce115f69f 100644
--- a/drivers/net/cxgbe/cxgbe_filter.h
+++ b/drivers/net/cxgbe/cxgbe_filter.h
@@ -141,6 +141,7 @@ struct filter_entry {
 	u32 locked:1;               /* filter is administratively locked */
 	u32 pending:1;              /* filter action is pending FW reply */
 	struct filter_ctx *ctx;     /* caller's completion hook */
+	struct clip_entry *clipt;   /* CLIP Table entry for IPv6 */
 	struct rte_eth_dev *dev;    /* Port's rte eth device */
 	void *private;              /* For use by apps using filter_entry */
 
diff --git a/drivers/net/cxgbe/cxgbe_main.c b/drivers/net/cxgbe/cxgbe_main.c
index c692939db..2050fe4db 100644
--- a/drivers/net/cxgbe/cxgbe_main.c
+++ b/drivers/net/cxgbe/cxgbe_main.c
@@ -37,6 +37,7 @@
 #include "t4_regs.h"
 #include "t4_msg.h"
 #include "cxgbe.h"
+#include "clip_tbl.h"
 
 /**
  * Allocate a chunk of memory. The allocated memory is cleared.
@@ -995,6 +996,14 @@ static int adap_init0(struct adapter *adap)
 	adap->tids.ftid_base = val[0];
 	adap->tids.nftids = val[1] - val[0] + 1;
 
+	params[0] = FW_PARAM_PFVF(CLIP_START);
+	params[1] = FW_PARAM_PFVF(CLIP_END);
+	ret = t4_query_params(adap, adap->mbox, adap->pf, 0, 2, params, val);
+	if (ret < 0)
+		goto bye;
+	adap->clipt_start = val[0];
+	adap->clipt_end = val[1];
+
 	/*
 	 * Get device capabilities so we can determine what resources we need
 	 * to manage.
@@ -1509,6 +1518,7 @@ void cxgbe_close(struct adapter *adapter)
 		if (is_pf4(adapter))
 			t4_intr_disable(adapter);
 		tid_free(&adapter->tids);
+		t4_cleanup_clip_tbl(adapter);
 		t4_sge_tx_monitor_stop(adapter);
 		t4_free_sge_resources(adapter);
 		for_each_port(adapter, i) {
@@ -1672,6 +1682,15 @@ int cxgbe_probe(struct adapter *adapter)
 	print_adapter_info(adapter);
 	print_port_info(adapter);
 
+	adapter->clipt = t4_init_clip_tbl(adapter->clipt_start,
+					  adapter->clipt_end);
+	if (!adapter->clipt) {
+		/* We tolerate a lack of clip_table, giving up some
+		 * functionality
+		 */
+		dev_warn(adapter, "could not allocate CLIP. Continuing\n");
+	}
+
 	if (tid_init(&adapter->tids) < 0) {
 		/* Disable filtering support */
 		dev_warn(adapter, "could not allocate TID table, "
-- 
2.14.1

* [dpdk-dev] [PATCH 4/9] net/cxgbe: add support to offload flows to HASH region
  2018-06-29 18:12 [dpdk-dev] [PATCH 0/9] net/cxgbe: add support for offloading flows to HASH region Rahul Lakkireddy
                   ` (2 preceding siblings ...)
  2018-06-29 18:12 ` [dpdk-dev] [PATCH 3/9] net/cxgbe: add Compressed Local IP region Rahul Lakkireddy
@ 2018-06-29 18:12 ` Rahul Lakkireddy
  2018-06-29 18:12 ` [dpdk-dev] [PATCH 5/9] net/cxgbe: add support to delete flows in " Rahul Lakkireddy
                   ` (5 subsequent siblings)
  9 siblings, 0 replies; 13+ messages in thread
From: Rahul Lakkireddy @ 2018-06-29 18:12 UTC (permalink / raw)
  To: dev; +Cc: shaguna, indranil, nirranjan

From: Shagun Agrawal <shaguna@chelsio.com>

Add an interface to offload flows to the HASH region. Translate the internal
filter specification into requests for offloading flows to the HASH region.
Save the returned hash index of each offloaded flow for deletion later.

Signed-off-by: Shagun Agrawal <shaguna@chelsio.com>
Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
---
 drivers/net/cxgbe/base/adapter.h        |  11 ++
 drivers/net/cxgbe/base/t4_msg.h         | 110 +++++++++++++
 drivers/net/cxgbe/base/t4fw_interface.h |   6 +
 drivers/net/cxgbe/cxgbe_filter.c        | 265 +++++++++++++++++++++++++++++++-
 drivers/net/cxgbe/cxgbe_filter.h        |   1 +
 drivers/net/cxgbe/cxgbe_flow.c          |  10 +-
 drivers/net/cxgbe/cxgbe_main.c          |  56 +++++++
 drivers/net/cxgbe/cxgbe_ofld.h          |  25 +++
 8 files changed, 481 insertions(+), 3 deletions(-)

diff --git a/drivers/net/cxgbe/base/adapter.h b/drivers/net/cxgbe/base/adapter.h
index 3ed3252e8..e98dd2182 100644
--- a/drivers/net/cxgbe/base/adapter.h
+++ b/drivers/net/cxgbe/base/adapter.h
@@ -771,6 +771,17 @@ static inline void t4_complete(struct t4_completion *c)
 	t4_os_unlock(&c->lock);
 }
 
+/**
+ * cxgbe_port_viid - get the VI id of a port
+ * @dev: the device for the port
+ *
+ * Return the VI id of the given port.
+ */
+static inline unsigned int cxgbe_port_viid(const struct rte_eth_dev *dev)
+{
+	return ethdev2pinfo(dev)->viid;
+}
+
 void *t4_alloc_mem(size_t size);
 void t4_free_mem(void *addr);
 #define t4_os_alloc(_size)     t4_alloc_mem((_size))
diff --git a/drivers/net/cxgbe/base/t4_msg.h b/drivers/net/cxgbe/base/t4_msg.h
index 43d1cb66f..4112ff212 100644
--- a/drivers/net/cxgbe/base/t4_msg.h
+++ b/drivers/net/cxgbe/base/t4_msg.h
@@ -7,7 +7,10 @@
 #define T4_MSG_H
 
 enum {
+	CPL_ACT_OPEN_REQ      = 0x3,
+	CPL_ACT_OPEN_RPL      = 0x25,
 	CPL_SET_TCB_RPL       = 0x3A,
+	CPL_ACT_OPEN_REQ6     = 0x83,
 	CPL_SGE_EGR_UPDATE    = 0xA5,
 	CPL_FW4_MSG           = 0xC0,
 	CPL_FW6_MSG           = 0xE0,
@@ -15,6 +18,15 @@ enum {
 	CPL_TX_PKT_XT         = 0xEE,
 };
 
+enum CPL_error {
+	CPL_ERR_NONE               = 0,
+	CPL_ERR_TCAM_FULL          = 3,
+};
+
+enum {
+	ULP_MODE_NONE          = 0,
+};
+
 enum {                     /* TX_PKT_XT checksum types */
 	TX_CSUM_TCPIP  = 8,
 	TX_CSUM_UDPIP  = 9,
@@ -26,13 +38,24 @@ union opcode_tid {
 	__u8 opcode;
 };
 
+#define S_CPL_OPCODE    24
+#define V_CPL_OPCODE(x) ((x) << S_CPL_OPCODE)
+
 #define G_TID(x)    ((x) & 0xFFFFFF)
 
+/* tid is assumed to be 24-bits */
+#define MK_OPCODE_TID(opcode, tid) (V_CPL_OPCODE(opcode) | (tid))
+
 #define OPCODE_TID(cmd) ((cmd)->ot.opcode_tid)
 
 /* extract the TID from a CPL command */
 #define GET_TID(cmd) (G_TID(be32_to_cpu(OPCODE_TID(cmd))))
 
+/* partitioning of TID fields that also carry a queue id */
+#define S_TID_TID    0
+#define M_TID_TID    0x3fff
+#define G_TID_TID(x) (((x) >> S_TID_TID) & M_TID_TID)
+
 struct rss_header {
 	__u8 opcode;
 #if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
@@ -79,6 +102,93 @@ struct work_request_hdr {
 #define V_COOKIE(x) ((x) << S_COOKIE)
 #define G_COOKIE(x) (((x) >> S_COOKIE) & M_COOKIE)
 
+/* option 0 fields */
+#define S_DELACK    5
+#define V_DELACK(x) ((x) << S_DELACK)
+
+#define S_NON_OFFLOAD    7
+#define V_NON_OFFLOAD(x) ((x) << S_NON_OFFLOAD)
+#define F_NON_OFFLOAD    V_NON_OFFLOAD(1U)
+
+#define S_ULP_MODE    8
+#define V_ULP_MODE(x) ((x) << S_ULP_MODE)
+
+#define S_SMAC_SEL    28
+#define V_SMAC_SEL(x) ((__u64)(x) << S_SMAC_SEL)
+
+#define S_TCAM_BYPASS    48
+#define V_TCAM_BYPASS(x) ((__u64)(x) << S_TCAM_BYPASS)
+#define F_TCAM_BYPASS    V_TCAM_BYPASS(1ULL)
+
+/* option 2 fields */
+#define S_RSS_QUEUE    0
+#define V_RSS_QUEUE(x) ((x) << S_RSS_QUEUE)
+
+#define S_RSS_QUEUE_VALID    10
+#define V_RSS_QUEUE_VALID(x) ((x) << S_RSS_QUEUE_VALID)
+#define F_RSS_QUEUE_VALID    V_RSS_QUEUE_VALID(1U)
+
+#define S_CONG_CNTRL    14
+#define V_CONG_CNTRL(x) ((x) << S_CONG_CNTRL)
+
+#define S_RX_CHANNEL    26
+#define V_RX_CHANNEL(x) ((x) << S_RX_CHANNEL)
+#define F_RX_CHANNEL    V_RX_CHANNEL(1U)
+
+#define S_T5_OPT_2_VALID    31
+#define V_T5_OPT_2_VALID(x) ((x) << S_T5_OPT_2_VALID)
+#define F_T5_OPT_2_VALID    V_T5_OPT_2_VALID(1U)
+
+struct cpl_t6_act_open_req {
+	WR_HDR;
+	union opcode_tid ot;
+	__be16 local_port;
+	__be16 peer_port;
+	__be32 local_ip;
+	__be32 peer_ip;
+	__be64 opt0;
+	__be32 rsvd;
+	__be32 opt2;
+	__be64 params;
+	__be32 rsvd2;
+	__be32 opt3;
+};
+
+struct cpl_t6_act_open_req6 {
+	WR_HDR;
+	union opcode_tid ot;
+	__be16 local_port;
+	__be16 peer_port;
+	__be64 local_ip_hi;
+	__be64 local_ip_lo;
+	__be64 peer_ip_hi;
+	__be64 peer_ip_lo;
+	__be64 opt0;
+	__be32 rsvd;
+	__be32 opt2;
+	__be64 params;
+	__be32 rsvd2;
+	__be32 opt3;
+};
+
+#define S_FILTER_TUPLE	24
+#define V_FILTER_TUPLE(x) ((x) << S_FILTER_TUPLE)
+
+struct cpl_act_open_rpl {
+	RSS_HDR
+	union opcode_tid ot;
+	__be32 atid_status;
+};
+
+/* cpl_act_open_rpl.atid_status fields */
+#define S_AOPEN_STATUS    0
+#define M_AOPEN_STATUS    0xFF
+#define G_AOPEN_STATUS(x) (((x) >> S_AOPEN_STATUS) & M_AOPEN_STATUS)
+
+#define S_AOPEN_ATID    8
+#define M_AOPEN_ATID    0xFFFFFF
+#define G_AOPEN_ATID(x) (((x) >> S_AOPEN_ATID) & M_AOPEN_ATID)
+
 struct cpl_set_tcb_rpl {
 	RSS_HDR
 	union opcode_tid ot;
diff --git a/drivers/net/cxgbe/base/t4fw_interface.h b/drivers/net/cxgbe/base/t4fw_interface.h
index 2433bf20c..19bcfc124 100644
--- a/drivers/net/cxgbe/base/t4fw_interface.h
+++ b/drivers/net/cxgbe/base/t4fw_interface.h
@@ -55,6 +55,7 @@ enum fw_memtype {
 
 enum fw_wr_opcodes {
 	FW_FILTER_WR		= 0x02,
+	FW_TP_WR		= 0x05,
 	FW_ETH_TX_PKT_WR	= 0x08,
 	FW_ETH_TX_PKTS_WR	= 0x09,
 	FW_ETH_TX_PKT_VM_WR	= 0x11,
@@ -93,6 +94,11 @@ struct fw_wr_hdr {
 #define G_FW_WR_EQUEQ(x)	(((x) >> S_FW_WR_EQUEQ) & M_FW_WR_EQUEQ)
 #define F_FW_WR_EQUEQ		V_FW_WR_EQUEQ(1U)
 
+/* flow context identifier (lo)
+ */
+#define S_FW_WR_FLOWID		8
+#define V_FW_WR_FLOWID(x)	((x) << S_FW_WR_FLOWID)
+
 /* length in units of 16-bytes (lo)
  */
 #define S_FW_WR_LEN16		0
diff --git a/drivers/net/cxgbe/cxgbe_filter.c b/drivers/net/cxgbe/cxgbe_filter.c
index bb2ebaa62..bac7aa291 100644
--- a/drivers/net/cxgbe/cxgbe_filter.c
+++ b/drivers/net/cxgbe/cxgbe_filter.c
@@ -2,7 +2,7 @@
  * Copyright(c) 2018 Chelsio Communications.
  * All rights reserved.
  */
-
+#include <rte_net.h>
 #include "common.h"
 #include "t4_regs.h"
 #include "cxgbe_filter.h"
@@ -159,6 +159,210 @@ int cxgbe_alloc_ftid(struct adapter *adap, unsigned int family)
 	return pos < size ? pos : -1;
 }
 
+/**
+ * Construct hash filter ntuple.
+ */
+static u64 hash_filter_ntuple(const struct filter_entry *f)
+{
+	struct adapter *adap = ethdev2adap(f->dev);
+	struct tp_params *tp = &adap->params.tp;
+	u64 ntuple = 0;
+	u16 tcp_proto = IPPROTO_TCP; /* TCP Protocol Number */
+
+	if (tp->protocol_shift >= 0) {
+		if (!f->fs.val.proto)
+			ntuple |= (u64)tcp_proto << tp->protocol_shift;
+		else
+			ntuple |= (u64)f->fs.val.proto << tp->protocol_shift;
+	}
+
+	if (tp->ethertype_shift >= 0 && f->fs.mask.ethtype)
+		ntuple |= (u64)(f->fs.val.ethtype) << tp->ethertype_shift;
+
+	if (ntuple != tp->hash_filter_mask)
+		return 0;
+
+	return ntuple;
+}
+
+/**
+ * Build a ACT_OPEN_REQ6 message for setting IPv6 hash filter.
+ */
+static void mk_act_open_req6(struct filter_entry *f, struct rte_mbuf *mbuf,
+			     unsigned int qid_filterid, struct adapter *adap)
+{
+	struct cpl_t6_act_open_req6 *req = NULL;
+	u64 local_lo, local_hi, peer_lo, peer_hi;
+	u32 *lip = (u32 *)f->fs.val.lip;
+	u32 *fip = (u32 *)f->fs.val.fip;
+
+	switch (CHELSIO_CHIP_VERSION(adap->params.chip)) {
+	case CHELSIO_T6:
+		req = rte_pktmbuf_mtod(mbuf, struct cpl_t6_act_open_req6 *);
+
+		INIT_TP_WR(req, 0);
+		break;
+	default:
+		dev_err(adap, "%s: unsupported chip type!\n", __func__);
+		return;
+	}
+
+	local_hi = ((u64)lip[1]) << 32 | lip[0];
+	local_lo = ((u64)lip[3]) << 32 | lip[2];
+	peer_hi = ((u64)fip[1]) << 32 | fip[0];
+	peer_lo = ((u64)fip[3]) << 32 | fip[2];
+
+	OPCODE_TID(req) = cpu_to_be32(MK_OPCODE_TID(CPL_ACT_OPEN_REQ6,
+						    qid_filterid));
+	req->local_port = cpu_to_be16(f->fs.val.lport);
+	req->peer_port = cpu_to_be16(f->fs.val.fport);
+	req->local_ip_hi = local_hi;
+	req->local_ip_lo = local_lo;
+	req->peer_ip_hi = peer_hi;
+	req->peer_ip_lo = peer_lo;
+	req->opt0 = cpu_to_be64(V_DELACK(f->fs.hitcnts) |
+				V_SMAC_SEL((cxgbe_port_viid(f->dev) & 0x7F)
+					   << 1) |
+				V_ULP_MODE(ULP_MODE_NONE) |
+				F_TCAM_BYPASS | F_NON_OFFLOAD);
+	req->params = cpu_to_be64(V_FILTER_TUPLE(hash_filter_ntuple(f)));
+	req->opt2 = cpu_to_be32(F_RSS_QUEUE_VALID |
+			    V_RSS_QUEUE(f->fs.iq) |
+			    F_T5_OPT_2_VALID |
+			    F_RX_CHANNEL |
+			    V_CONG_CNTRL((f->fs.action == FILTER_DROP) |
+					 (f->fs.dirsteer << 1)));
+}
+
+/**
+ * Build a ACT_OPEN_REQ message for setting IPv4 hash filter.
+ */
+static void mk_act_open_req(struct filter_entry *f, struct rte_mbuf *mbuf,
+			    unsigned int qid_filterid, struct adapter *adap)
+{
+	struct cpl_t6_act_open_req *req = NULL;
+
+	switch (CHELSIO_CHIP_VERSION(adap->params.chip)) {
+	case CHELSIO_T6:
+		req = rte_pktmbuf_mtod(mbuf, struct cpl_t6_act_open_req *);
+
+		INIT_TP_WR(req, 0);
+		break;
+	default:
+		dev_err(adap, "%s: unsupported chip type!\n", __func__);
+		return;
+	}
+
+	OPCODE_TID(req) = cpu_to_be32(MK_OPCODE_TID(CPL_ACT_OPEN_REQ,
+						    qid_filterid));
+	req->local_port = cpu_to_be16(f->fs.val.lport);
+	req->peer_port = cpu_to_be16(f->fs.val.fport);
+	req->local_ip = f->fs.val.lip[0] | f->fs.val.lip[1] << 8 |
+			f->fs.val.lip[2] << 16 | f->fs.val.lip[3] << 24;
+	req->peer_ip = f->fs.val.fip[0] | f->fs.val.fip[1] << 8 |
+			f->fs.val.fip[2] << 16 | f->fs.val.fip[3] << 24;
+	req->opt0 = cpu_to_be64(V_DELACK(f->fs.hitcnts) |
+				V_SMAC_SEL((cxgbe_port_viid(f->dev) & 0x7F)
+					   << 1) |
+				V_ULP_MODE(ULP_MODE_NONE) |
+				F_TCAM_BYPASS | F_NON_OFFLOAD);
+	req->params = cpu_to_be64(V_FILTER_TUPLE(hash_filter_ntuple(f)));
+	req->opt2 = cpu_to_be32(F_RSS_QUEUE_VALID |
+			    V_RSS_QUEUE(f->fs.iq) |
+			    F_T5_OPT_2_VALID |
+			    F_RX_CHANNEL |
+			    V_CONG_CNTRL((f->fs.action == FILTER_DROP) |
+					 (f->fs.dirsteer << 1)));
+}
+
+/**
+ * Set the specified hash filter.
+ */
+static int cxgbe_set_hash_filter(struct rte_eth_dev *dev,
+				 struct ch_filter_specification *fs,
+				 struct filter_ctx *ctx)
+{
+	struct port_info *pi = ethdev2pinfo(dev);
+	struct adapter *adapter = pi->adapter;
+	struct tid_info *t = &adapter->tids;
+	struct filter_entry *f;
+	struct rte_mbuf *mbuf;
+	struct sge_ctrl_txq *ctrlq;
+	unsigned int iq;
+	int atid, size;
+	int ret = 0;
+
+	ret = validate_filter(adapter, fs);
+	if (ret)
+		return ret;
+
+	iq = get_filter_steerq(dev, fs);
+
+	ctrlq = &adapter->sge.ctrlq[pi->port_id];
+
+	f = t4_os_alloc(sizeof(*f));
+	if (!f)
+		goto out_err;
+
+	f->fs = *fs;
+	f->ctx = ctx;
+	f->dev = dev;
+	f->fs.iq = iq;
+
+	atid = cxgbe_alloc_atid(t, f);
+	if (atid < 0)
+		goto out_err;
+
+	if (f->fs.type) {
+		/* IPv6 hash filter */
+		f->clipt = cxgbe_clip_alloc(f->dev, (u32 *)&f->fs.val.lip);
+		if (!f->clipt)
+			goto free_atid;
+
+		size = sizeof(struct cpl_t6_act_open_req6);
+		mbuf = rte_pktmbuf_alloc(ctrlq->mb_pool);
+		if (!mbuf) {
+			ret = -ENOMEM;
+			goto free_clip;
+		}
+
+		mbuf->data_len = size;
+		mbuf->pkt_len = mbuf->data_len;
+
+		mk_act_open_req6(f, mbuf,
+				 ((adapter->sge.fw_evtq.abs_id << 14) | atid),
+				 adapter);
+	} else {
+		/* IPv4 hash filter */
+		size = sizeof(struct cpl_t6_act_open_req);
+		mbuf = rte_pktmbuf_alloc(ctrlq->mb_pool);
+		if (!mbuf) {
+			ret = -ENOMEM;
+			goto free_atid;
+		}
+
+		mbuf->data_len = size;
+		mbuf->pkt_len = mbuf->data_len;
+
+		mk_act_open_req(f, mbuf,
+				((adapter->sge.fw_evtq.abs_id << 14) | atid),
+				adapter);
+	}
+
+	f->pending = 1;
+	t4_mgmt_tx(ctrlq, mbuf);
+	return 0;
+
+free_clip:
+	cxgbe_clip_release(f->dev, f->clipt);
+free_atid:
+	cxgbe_free_atid(t, atid);
+
+out_err:
+	t4_os_free(f);
+	return ret;
+}
+
 /**
  * Clear a filter and release any of its resources that we own.  This also
  * clears the filter's "pending" status.
@@ -425,6 +629,9 @@ int cxgbe_set_filter(struct rte_eth_dev *dev, unsigned int filter_id,
 	uint8_t bitoff[16] = {0};
 	int ret;
 
+	if (is_hashfilter(adapter) && fs->cap)
+		return cxgbe_set_hash_filter(dev, fs, ctx);
+
 	if (filter_id >= adapter->tids.nftids)
 		return -ERANGE;
 
@@ -578,6 +785,62 @@ int cxgbe_set_filter(struct rte_eth_dev *dev, unsigned int filter_id,
 	return ret;
 }
 
+/**
+ * Handle a Hash filter write reply.
+ */
+void hash_filter_rpl(struct adapter *adap, const struct cpl_act_open_rpl *rpl)
+{
+	struct tid_info *t = &adap->tids;
+	struct filter_entry *f;
+	struct filter_ctx *ctx = NULL;
+	unsigned int tid = GET_TID(rpl);
+	unsigned int ftid = G_TID_TID(G_AOPEN_ATID
+				      (be32_to_cpu(rpl->atid_status)));
+	unsigned int status  = G_AOPEN_STATUS(be32_to_cpu(rpl->atid_status));
+
+	f = lookup_atid(t, ftid);
+	if (!f) {
+		dev_warn(adap, "%s: could not find filter entry: %d\n",
+			 __func__, ftid);
+		return;
+	}
+
+	ctx = f->ctx;
+	f->ctx = NULL;
+
+	switch (status) {
+	case CPL_ERR_NONE: {
+		f->tid = tid;
+		f->pending = 0;  /* asynchronous setup completed */
+		f->valid = 1;
+
+		cxgbe_insert_tid(t, f, f->tid, 0);
+		cxgbe_free_atid(t, ftid);
+		if (ctx) {
+			ctx->tid = f->tid;
+			ctx->result = 0;
+		}
+		break;
+	}
+	default:
+		dev_warn(adap, "%s: filter creation failed with status = %u\n",
+			 __func__, status);
+
+		if (ctx) {
+			if (status == CPL_ERR_TCAM_FULL)
+				ctx->result = -EAGAIN;
+			else
+				ctx->result = -EINVAL;
+		}
+
+		cxgbe_free_atid(t, ftid);
+		t4_os_free(f);
+	}
+
+	if (ctx)
+		t4_complete(&ctx->completion);
+}
+
 /**
  * Handle a LE-TCAM filter write/deletion reply.
  */
diff --git a/drivers/net/cxgbe/cxgbe_filter.h b/drivers/net/cxgbe/cxgbe_filter.h
index ce115f69f..7c469c895 100644
--- a/drivers/net/cxgbe/cxgbe_filter.h
+++ b/drivers/net/cxgbe/cxgbe_filter.h
@@ -223,6 +223,7 @@ int cxgbe_del_filter(struct rte_eth_dev *dev, unsigned int filter_id,
 		     struct filter_ctx *ctx);
 int cxgbe_alloc_ftid(struct adapter *adap, unsigned int family);
 int init_hash_filter(struct adapter *adap);
+void hash_filter_rpl(struct adapter *adap, const struct cpl_act_open_rpl *rpl);
 int validate_filter(struct adapter *adap, struct ch_filter_specification *fs);
 int cxgbe_get_filter_count(struct adapter *adapter, unsigned int fidx,
 			   u64 *c, bool get_byte);
diff --git a/drivers/net/cxgbe/cxgbe_flow.c b/drivers/net/cxgbe/cxgbe_flow.c
index dfb5fac2b..4950cb41c 100644
--- a/drivers/net/cxgbe/cxgbe_flow.c
+++ b/drivers/net/cxgbe/cxgbe_flow.c
@@ -452,6 +452,7 @@ static int __cxgbe_flow_create(struct rte_eth_dev *dev, struct rte_flow *flow)
 {
 	struct ch_filter_specification *fs = &flow->fs;
 	struct adapter *adap = ethdev2adap(dev);
+	struct tid_info *t = &adap->tids;
 	struct filter_ctx ctx;
 	unsigned int fidx;
 	int err;
@@ -484,8 +485,13 @@ static int __cxgbe_flow_create(struct rte_eth_dev *dev, struct rte_flow *flow)
 		return ctx.result;
 	}
 
-	flow->fidx = fidx;
-	flow->f = &adap->tids.ftid_tab[fidx];
+	if (fs->cap) { /* to destroy the filter */
+		flow->fidx = ctx.tid;
+		flow->f = lookup_tid(t, ctx.tid);
+	} else {
+		flow->fidx = fidx;
+		flow->f = &adap->tids.ftid_tab[fidx];
+	}
 
 	return 0;
 }
diff --git a/drivers/net/cxgbe/cxgbe_main.c b/drivers/net/cxgbe/cxgbe_main.c
index 2050fe4db..c550dd5be 100644
--- a/drivers/net/cxgbe/cxgbe_main.c
+++ b/drivers/net/cxgbe/cxgbe_main.c
@@ -91,6 +91,10 @@ static int fwevtq_handler(struct sge_rspq *q, const __be64 *rsp,
 		const struct cpl_set_tcb_rpl *p = (const void *)rsp;
 
 		filter_rpl(q->adapter, p);
+	} else if (opcode == CPL_ACT_OPEN_RPL) {
+		const struct cpl_act_open_rpl *p = (const void *)rsp;
+
+		hash_filter_rpl(q->adapter, p);
 	} else {
 		dev_err(adapter, "unexpected CPL %#x on FW event queue\n",
 			opcode);
@@ -263,6 +267,58 @@ int cxgb4_set_rspq_intr_params(struct sge_rspq *q, unsigned int us,
 	return 0;
 }
 
+/**
+ * Allocate an active-open TID and set it to the supplied value.
+ */
+int cxgbe_alloc_atid(struct tid_info *t, void *data)
+{
+	int atid = -1;
+
+	t4_os_lock(&t->atid_lock);
+	if (t->afree) {
+		union aopen_entry *p = t->afree;
+
+		atid = p - t->atid_tab;
+		t->afree = p->next;
+		p->data = data;
+		t->atids_in_use++;
+	}
+	t4_os_unlock(&t->atid_lock);
+	return atid;
+}
+
+/**
+ * Release an active-open TID.
+ */
+void cxgbe_free_atid(struct tid_info *t, unsigned int atid)
+{
+	union aopen_entry *p = &t->atid_tab[atid];
+
+	t4_os_lock(&t->atid_lock);
+	p->next = t->afree;
+	t->afree = p;
+	t->atids_in_use--;
+	t4_os_unlock(&t->atid_lock);
+}
+
+/**
+ * Insert a TID.
+ */
+void cxgbe_insert_tid(struct tid_info *t, void *data, unsigned int tid,
+		      unsigned short family)
+{
+	t->tid_tab[tid] = data;
+	if (t->hash_base && tid >= t->hash_base) {
+		if (family == FILTER_TYPE_IPV4)
+			rte_atomic32_inc(&t->hash_tids_in_use);
+	} else {
+		if (family == FILTER_TYPE_IPV4)
+			rte_atomic32_inc(&t->tids_in_use);
+	}
+
+	rte_atomic32_inc(&t->conns_in_use);
+}
+
 /**
  * Free TID tables.
  */
diff --git a/drivers/net/cxgbe/cxgbe_ofld.h b/drivers/net/cxgbe/cxgbe_ofld.h
index e97c42469..798e39828 100644
--- a/drivers/net/cxgbe/cxgbe_ofld.h
+++ b/drivers/net/cxgbe/cxgbe_ofld.h
@@ -10,6 +10,15 @@
 
 #include "cxgbe_filter.h"
 
+#define INIT_TP_WR(w, tid) do { \
+	(w)->wr.wr_hi = cpu_to_be32(V_FW_WR_OP(FW_TP_WR) | \
+				V_FW_WR_IMMDLEN(sizeof(*w) - sizeof(w->wr))); \
+	(w)->wr.wr_mid = cpu_to_be32( \
+				V_FW_WR_LEN16(DIV_ROUND_UP(sizeof(*w), 16)) | \
+				V_FW_WR_FLOWID(tid)); \
+	(w)->wr.wr_lo = cpu_to_be64(0); \
+} while (0)
+
 /*
  * Max # of ATIDs.  The absolute HW max is 16K but we keep it lower.
  */
@@ -46,4 +55,20 @@ struct tid_info {
 	rte_spinlock_t atid_lock __rte_cache_aligned;
 	rte_spinlock_t ftid_lock;
 };
+
+static inline void *lookup_tid(const struct tid_info *t, unsigned int tid)
+{
+	return tid < t->ntids ? t->tid_tab[tid] : NULL;
+}
+
+static inline void *lookup_atid(const struct tid_info *t, unsigned int atid)
+{
+	return atid < t->natids ? t->atid_tab[atid].data : NULL;
+}
+
+int cxgbe_alloc_atid(struct tid_info *t, void *data);
+void cxgbe_free_atid(struct tid_info *t, unsigned int atid);
+void cxgbe_insert_tid(struct tid_info *t, void *data, unsigned int tid,
+		      unsigned short family);
+
 #endif /* _CXGBE_OFLD_H_ */
-- 
2.14.1

^ permalink raw reply	[flat|nested] 13+ messages in thread

* [dpdk-dev] [PATCH 5/9] net/cxgbe: add support to delete flows in HASH region
  2018-06-29 18:12 [dpdk-dev] [PATCH 0/9] net/cxgbe: add support for offloading flows to HASH region Rahul Lakkireddy
                   ` (3 preceding siblings ...)
  2018-06-29 18:12 ` [dpdk-dev] [PATCH 4/9] net/cxgbe: add support to offload flows to HASH region Rahul Lakkireddy
@ 2018-06-29 18:12 ` Rahul Lakkireddy
  2018-06-29 18:12 ` [dpdk-dev] [PATCH 6/9] net/cxgbe: add support to query hit counters for " Rahul Lakkireddy
                   ` (4 subsequent siblings)
  9 siblings, 0 replies; 13+ messages in thread
From: Rahul Lakkireddy @ 2018-06-29 18:12 UTC (permalink / raw)
  To: dev; +Cc: shaguna, indranil, nirranjan

From: Shagun Agrawal <shaguna@chelsio.com>

Add interface to delete offloaded flows in HASH region. Use the
hash index saved during insertion to delete the corresponding flow.
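
As a usage reference, a minimal sketch of how an application reaches this
delete path through the generic rte_flow API; the helper name is hypothetical
and the flow handle is assumed to come from an earlier rte_flow_create() whose
pattern was placed in the HASH region:

#include <rte_flow.h>

/* Minimal sketch: destroying a flow offloaded to the HASH region ends up
 * in cxgbe_del_hash_filter(), which builds the CPL_SET_TCB_FIELD plus
 * CPL_ABORT_REQ/CPL_ABORT_RPL work request shown in the diff below.
 * remove_offloaded_flow() is a hypothetical helper name.
 */
static int remove_offloaded_flow(uint16_t port_id, struct rte_flow *flow)
{
	struct rte_flow_error err;

	return rte_flow_destroy(port_id, flow, &err);
}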

Signed-off-by: Shagun Agrawal <shaguna@chelsio.com>
Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
---
 drivers/net/cxgbe/base/t4_msg.h         |  72 ++++++++++++
 drivers/net/cxgbe/base/t4_tcb.h         |  15 +++
 drivers/net/cxgbe/base/t4fw_interface.h |   6 +
 drivers/net/cxgbe/cxgbe_filter.c        | 193 ++++++++++++++++++++++++++++++++
 drivers/net/cxgbe/cxgbe_filter.h        |   2 +
 drivers/net/cxgbe/cxgbe_main.c          |  48 ++++++++
 drivers/net/cxgbe/cxgbe_ofld.h          |  15 +++
 7 files changed, 351 insertions(+)
 create mode 100644 drivers/net/cxgbe/base/t4_tcb.h

diff --git a/drivers/net/cxgbe/base/t4_msg.h b/drivers/net/cxgbe/base/t4_msg.h
index 4112ff212..7f4c98fb6 100644
--- a/drivers/net/cxgbe/base/t4_msg.h
+++ b/drivers/net/cxgbe/base/t4_msg.h
@@ -8,7 +8,12 @@
 
 enum {
 	CPL_ACT_OPEN_REQ      = 0x3,
+	CPL_SET_TCB_FIELD     = 0x5,
+	CPL_ABORT_REQ         = 0xA,
+	CPL_ABORT_RPL         = 0xB,
+	CPL_TID_RELEASE       = 0x1A,
 	CPL_ACT_OPEN_RPL      = 0x25,
+	CPL_ABORT_RPL_RSS     = 0x2D,
 	CPL_SET_TCB_RPL       = 0x3A,
 	CPL_ACT_OPEN_REQ6     = 0x83,
 	CPL_SGE_EGR_UPDATE    = 0xA5,
@@ -27,6 +32,11 @@ enum {
 	ULP_MODE_NONE          = 0,
 };
 
+enum {
+	CPL_ABORT_SEND_RST = 0,
+	CPL_ABORT_NO_RST,
+};
+
 enum {                     /* TX_PKT_XT checksum types */
 	TX_CSUM_TCPIP  = 8,
 	TX_CSUM_UDPIP  = 9,
@@ -189,6 +199,29 @@ struct cpl_act_open_rpl {
 #define M_AOPEN_ATID    0xFFFFFF
 #define G_AOPEN_ATID(x) (((x) >> S_AOPEN_ATID) & M_AOPEN_ATID)
 
+struct cpl_set_tcb_field {
+	WR_HDR;
+	union opcode_tid ot;
+	__be16 reply_ctrl;
+	__be16 word_cookie;
+	__be64 mask;
+	__be64 val;
+};
+
+/* cpl_set_tcb_field.word_cookie fields */
+#define S_WORD    0
+#define V_WORD(x) ((x) << S_WORD)
+
+/* cpl_get_tcb.reply_ctrl fields */
+#define S_QUEUENO    0
+#define V_QUEUENO(x) ((x) << S_QUEUENO)
+
+#define S_REPLY_CHAN    14
+#define V_REPLY_CHAN(x) ((x) << S_REPLY_CHAN)
+
+#define S_NO_REPLY    15
+#define V_NO_REPLY(x) ((x) << S_NO_REPLY)
+
 struct cpl_set_tcb_rpl {
 	RSS_HDR
 	union opcode_tid ot;
@@ -198,6 +231,39 @@ struct cpl_set_tcb_rpl {
 	__be64 oldval;
 };
 
+/* cpl_abort_req status command code
+ */
+struct cpl_abort_req {
+	WR_HDR;
+	union opcode_tid ot;
+	__be32 rsvd0;
+	__u8  rsvd1;
+	__u8  cmd;
+	__u8  rsvd2[6];
+};
+
+struct cpl_abort_rpl_rss {
+	RSS_HDR
+	union opcode_tid ot;
+	__u8  rsvd[3];
+	__u8  status;
+};
+
+struct cpl_abort_rpl {
+	WR_HDR;
+	union opcode_tid ot;
+	__be32 rsvd0;
+	__u8  rsvd1;
+	__u8  cmd;
+	__u8  rsvd2[6];
+};
+
+struct cpl_tid_release {
+	WR_HDR;
+	union opcode_tid ot;
+	__be32 rsvd;
+};
+
 struct cpl_tx_data {
 	union opcode_tid ot;
 	__be32 len;
@@ -403,7 +469,13 @@ struct cpl_fw6_msg {
 	__be64 data[4];
 };
 
+/* ULP_TX opcodes */
+enum {
+	ULP_TX_PKT = 4
+};
+
 enum {
+	ULP_TX_SC_NOOP = 0x80,
 	ULP_TX_SC_IMM  = 0x81,
 	ULP_TX_SC_DSGL = 0x82,
 	ULP_TX_SC_ISGL = 0x83
diff --git a/drivers/net/cxgbe/base/t4_tcb.h b/drivers/net/cxgbe/base/t4_tcb.h
new file mode 100644
index 000000000..6d7f5e8c1
--- /dev/null
+++ b/drivers/net/cxgbe/base/t4_tcb.h
@@ -0,0 +1,15 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Chelsio Communications.
+ * All rights reserved.
+ */
+
+#ifndef _T4_TCB_DEFS_H
+#define _T4_TCB_DEFS_H
+
+/* 105:96 */
+#define W_TCB_RSS_INFO    3
+#define S_TCB_RSS_INFO    0
+#define M_TCB_RSS_INFO    0x3ffULL
+#define V_TCB_RSS_INFO(x) ((x) << S_TCB_RSS_INFO)
+
+#endif /* _T4_TCB_DEFS_H */
diff --git a/drivers/net/cxgbe/base/t4fw_interface.h b/drivers/net/cxgbe/base/t4fw_interface.h
index 19bcfc124..7b2f2d37f 100644
--- a/drivers/net/cxgbe/base/t4fw_interface.h
+++ b/drivers/net/cxgbe/base/t4fw_interface.h
@@ -55,6 +55,7 @@ enum fw_memtype {
 
 enum fw_wr_opcodes {
 	FW_FILTER_WR		= 0x02,
+	FW_ULPTX_WR		= 0x04,
 	FW_TP_WR		= 0x05,
 	FW_ETH_TX_PKT_WR	= 0x08,
 	FW_ETH_TX_PKTS_WR	= 0x09,
@@ -78,6 +79,11 @@ struct fw_wr_hdr {
 #define V_FW_WR_OP(x)		((x) << S_FW_WR_OP)
 #define G_FW_WR_OP(x)		(((x) >> S_FW_WR_OP) & M_FW_WR_OP)
 
+/* atomic flag (hi) - firmware encapsulates CPLs in CPL_BARRIER
+ */
+#define S_FW_WR_ATOMIC		23
+#define V_FW_WR_ATOMIC(x)	((x) << S_FW_WR_ATOMIC)
+
 /* work request immediate data length (hi)
  */
 #define S_FW_WR_IMMDLEN	0
diff --git a/drivers/net/cxgbe/cxgbe_filter.c b/drivers/net/cxgbe/cxgbe_filter.c
index bac7aa291..7759b8acf 100644
--- a/drivers/net/cxgbe/cxgbe_filter.c
+++ b/drivers/net/cxgbe/cxgbe_filter.c
@@ -4,6 +4,7 @@
  */
 #include <rte_net.h>
 #include "common.h"
+#include "t4_tcb.h"
 #include "t4_regs.h"
 #include "cxgbe_filter.h"
 #include "clip_tbl.h"
@@ -116,6 +117,34 @@ int writable_filter(struct filter_entry *f)
 	return 0;
 }
 
+/**
+ * Build a CPL_SET_TCB_FIELD message as payload of a ULP_TX_PKT command.
+ */
+static inline void mk_set_tcb_field_ulp(struct filter_entry *f,
+					struct cpl_set_tcb_field *req,
+					unsigned int word,
+					u64 mask, u64 val, u8 cookie,
+					int no_reply)
+{
+	struct ulp_txpkt *txpkt = (struct ulp_txpkt *)req;
+	struct ulptx_idata *sc = (struct ulptx_idata *)(txpkt + 1);
+
+	txpkt->cmd_dest = cpu_to_be32(V_ULPTX_CMD(ULP_TX_PKT) |
+				      V_ULP_TXPKT_DEST(0));
+	txpkt->len = cpu_to_be32(DIV_ROUND_UP(sizeof(*req), 16));
+	sc->cmd_more = cpu_to_be32(V_ULPTX_CMD(ULP_TX_SC_IMM));
+	sc->len = cpu_to_be32(sizeof(*req) - sizeof(struct work_request_hdr));
+	OPCODE_TID(req) = cpu_to_be32(MK_OPCODE_TID(CPL_SET_TCB_FIELD, f->tid));
+	req->reply_ctrl = cpu_to_be16(V_NO_REPLY(no_reply) | V_REPLY_CHAN(0) |
+				      V_QUEUENO(0));
+	req->word_cookie = cpu_to_be16(V_WORD(word) | V_COOKIE(cookie));
+	req->mask = cpu_to_be64(mask);
+	req->val = cpu_to_be64(val);
+	sc = (struct ulptx_idata *)(req + 1);
+	sc->cmd_more = cpu_to_be32(V_ULPTX_CMD(ULP_TX_SC_NOOP));
+	sc->len = cpu_to_be32(0);
+}
+
 /**
  * Check if entry already filled.
  */
@@ -185,6 +214,132 @@ static u64 hash_filter_ntuple(const struct filter_entry *f)
 	return ntuple;
 }
 
+/**
+ * Build a CPL_ABORT_REQ message as payload of a ULP_TX_PKT command.
+ */
+static void mk_abort_req_ulp(struct cpl_abort_req *abort_req,
+			     unsigned int tid)
+{
+	struct ulp_txpkt *txpkt = (struct ulp_txpkt *)abort_req;
+	struct ulptx_idata *sc = (struct ulptx_idata *)(txpkt + 1);
+
+	txpkt->cmd_dest = cpu_to_be32(V_ULPTX_CMD(ULP_TX_PKT) |
+				      V_ULP_TXPKT_DEST(0));
+	txpkt->len = cpu_to_be32(DIV_ROUND_UP(sizeof(*abort_req), 16));
+	sc->cmd_more = cpu_to_be32(V_ULPTX_CMD(ULP_TX_SC_IMM));
+	sc->len = cpu_to_be32(sizeof(*abort_req) -
+			      sizeof(struct work_request_hdr));
+	OPCODE_TID(abort_req) = cpu_to_be32(MK_OPCODE_TID(CPL_ABORT_REQ, tid));
+	abort_req->rsvd0 = cpu_to_be32(0);
+	abort_req->rsvd1 = 0;
+	abort_req->cmd = CPL_ABORT_NO_RST;
+	sc = (struct ulptx_idata *)(abort_req + 1);
+	sc->cmd_more = cpu_to_be32(V_ULPTX_CMD(ULP_TX_SC_NOOP));
+	sc->len = cpu_to_be32(0);
+}
+
+/**
+ * Build a CPL_ABORT_RPL message as payload of a ULP_TX_PKT command.
+ */
+static void mk_abort_rpl_ulp(struct cpl_abort_rpl *abort_rpl,
+			     unsigned int tid)
+{
+	struct ulp_txpkt *txpkt = (struct ulp_txpkt *)abort_rpl;
+	struct ulptx_idata *sc = (struct ulptx_idata *)(txpkt + 1);
+
+	txpkt->cmd_dest = cpu_to_be32(V_ULPTX_CMD(ULP_TX_PKT) |
+				      V_ULP_TXPKT_DEST(0));
+	txpkt->len = cpu_to_be32(DIV_ROUND_UP(sizeof(*abort_rpl), 16));
+	sc->cmd_more = cpu_to_be32(V_ULPTX_CMD(ULP_TX_SC_IMM));
+	sc->len = cpu_to_be32(sizeof(*abort_rpl) -
+			      sizeof(struct work_request_hdr));
+	OPCODE_TID(abort_rpl) = cpu_to_be32(MK_OPCODE_TID(CPL_ABORT_RPL, tid));
+	abort_rpl->rsvd0 = cpu_to_be32(0);
+	abort_rpl->rsvd1 = 0;
+	abort_rpl->cmd = CPL_ABORT_NO_RST;
+	sc = (struct ulptx_idata *)(abort_rpl + 1);
+	sc->cmd_more = cpu_to_be32(V_ULPTX_CMD(ULP_TX_SC_NOOP));
+	sc->len = cpu_to_be32(0);
+}
+
+/**
+ * Delete the specified hash filter.
+ */
+static int cxgbe_del_hash_filter(struct rte_eth_dev *dev,
+				 unsigned int filter_id,
+				 struct filter_ctx *ctx)
+{
+	struct adapter *adapter = ethdev2adap(dev);
+	struct tid_info *t = &adapter->tids;
+	struct filter_entry *f;
+	struct sge_ctrl_txq *ctrlq;
+	unsigned int port_id = ethdev2pinfo(dev)->port_id;
+	int ret;
+
+	if (filter_id > adapter->tids.ntids)
+		return -E2BIG;
+
+	f = lookup_tid(t, filter_id);
+	if (!f) {
+		dev_err(adapter, "%s: no filter entry for filter_id = %d\n",
+			__func__, filter_id);
+		return -EINVAL;
+	}
+
+	ret = writable_filter(f);
+	if (ret)
+		return ret;
+
+	if (f->valid) {
+		unsigned int wrlen;
+		struct rte_mbuf *mbuf;
+		struct work_request_hdr *wr;
+		struct ulptx_idata *aligner;
+		struct cpl_set_tcb_field *req;
+		struct cpl_abort_req *abort_req;
+		struct cpl_abort_rpl *abort_rpl;
+
+		f->ctx = ctx;
+		f->pending = 1;
+
+		wrlen = cxgbe_roundup(sizeof(*wr) +
+				      (sizeof(*req) + sizeof(*aligner)) +
+				      sizeof(*abort_req) + sizeof(*abort_rpl),
+				      16);
+
+		ctrlq = &adapter->sge.ctrlq[port_id];
+		mbuf = rte_pktmbuf_alloc(ctrlq->mb_pool);
+		if (!mbuf) {
+			dev_err(adapter, "%s: could not allocate skb ..\n",
+				__func__);
+			goto out_err;
+		}
+
+		mbuf->data_len = wrlen;
+		mbuf->pkt_len = mbuf->data_len;
+
+		req = rte_pktmbuf_mtod(mbuf, struct cpl_set_tcb_field *);
+		INIT_ULPTX_WR(req, wrlen, 0, 0);
+		wr = (struct work_request_hdr *)req;
+		wr++;
+		req = (struct cpl_set_tcb_field *)wr;
+		mk_set_tcb_field_ulp(f, req, W_TCB_RSS_INFO,
+				V_TCB_RSS_INFO(M_TCB_RSS_INFO),
+				V_TCB_RSS_INFO(adapter->sge.fw_evtq.abs_id),
+				0, 1);
+		aligner = (struct ulptx_idata *)(req + 1);
+		abort_req = (struct cpl_abort_req *)(aligner + 1);
+		mk_abort_req_ulp(abort_req, f->tid);
+		abort_rpl = (struct cpl_abort_rpl *)(abort_req + 1);
+		mk_abort_rpl_ulp(abort_rpl, f->tid);
+		t4_mgmt_tx(ctrlq, mbuf);
+	}
+	return 0;
+
+out_err:
+	return -ENOMEM;
+}
+
 /**
  * Build a ACT_OPEN_REQ6 message for setting IPv6 hash filter.
  */
@@ -560,6 +715,9 @@ int cxgbe_del_filter(struct rte_eth_dev *dev, unsigned int filter_id,
 	unsigned int chip_ver;
 	int ret;
 
+	if (is_hashfilter(adapter) && fs->cap)
+		return cxgbe_del_hash_filter(dev, filter_id, ctx);
+
 	if (filter_id >= adapter->tids.nftids)
 		return -ERANGE;
 
@@ -967,3 +1125,38 @@ int cxgbe_get_filter_count(struct adapter *adapter, unsigned int fidx,
 	}
 	return 0;
 }
+
+/**
+ * Handle a Hash filter delete reply.
+ */
+void hash_del_filter_rpl(struct adapter *adap,
+			 const struct cpl_abort_rpl_rss *rpl)
+{
+	struct tid_info *t = &adap->tids;
+	struct filter_entry *f;
+	struct filter_ctx *ctx = NULL;
+	unsigned int tid = GET_TID(rpl);
+
+	f = lookup_tid(t, tid);
+	if (!f) {
+		dev_warn(adap, "%s: could not find filter entry: %u\n",
+			 __func__, tid);
+		return;
+	}
+
+	ctx = f->ctx;
+	f->ctx = NULL;
+
+	f->valid = 0;
+
+	if (f->clipt)
+		cxgbe_clip_release(f->dev, f->clipt);
+
+	cxgbe_remove_tid(t, 0, tid, 0);
+	t4_os_free(f);
+
+	if (ctx) {
+		ctx->result = 0;
+		t4_complete(&ctx->completion);
+	}
+}
diff --git a/drivers/net/cxgbe/cxgbe_filter.h b/drivers/net/cxgbe/cxgbe_filter.h
index 7c469c895..c51efea7d 100644
--- a/drivers/net/cxgbe/cxgbe_filter.h
+++ b/drivers/net/cxgbe/cxgbe_filter.h
@@ -224,6 +224,8 @@ int cxgbe_del_filter(struct rte_eth_dev *dev, unsigned int filter_id,
 int cxgbe_alloc_ftid(struct adapter *adap, unsigned int family);
 int init_hash_filter(struct adapter *adap);
 void hash_filter_rpl(struct adapter *adap, const struct cpl_act_open_rpl *rpl);
+void hash_del_filter_rpl(struct adapter *adap,
+			 const struct cpl_abort_rpl_rss *rpl);
 int validate_filter(struct adapter *adap, struct ch_filter_specification *fs);
 int cxgbe_get_filter_count(struct adapter *adapter, unsigned int fidx,
 			   u64 *c, bool get_byte);
diff --git a/drivers/net/cxgbe/cxgbe_main.c b/drivers/net/cxgbe/cxgbe_main.c
index c550dd5be..08b2a42d1 100644
--- a/drivers/net/cxgbe/cxgbe_main.c
+++ b/drivers/net/cxgbe/cxgbe_main.c
@@ -87,6 +87,10 @@ static int fwevtq_handler(struct sge_rspq *q, const __be64 *rsp,
 		const struct cpl_fw6_msg *msg = (const void *)rsp;
 
 		t4_handle_fw_rpl(q->adapter, msg->data);
+	} else if (opcode == CPL_ABORT_RPL_RSS) {
+		const struct cpl_abort_rpl_rss *p = (const void *)rsp;
+
+		hash_del_filter_rpl(q->adapter, p);
 	} else if (opcode == CPL_SET_TCB_RPL) {
 		const struct cpl_set_tcb_rpl *p = (const void *)rsp;
 
@@ -301,6 +305,50 @@ void cxgbe_free_atid(struct tid_info *t, unsigned int atid)
 	t4_os_unlock(&t->atid_lock);
 }
 
+/**
+ * Populate a TID_RELEASE WR.  Caller must properly size the skb.
+ */
+static void mk_tid_release(struct rte_mbuf *mbuf, unsigned int tid)
+{
+	struct cpl_tid_release *req;
+
+	req = rte_pktmbuf_mtod(mbuf, struct cpl_tid_release *);
+	INIT_TP_WR_MIT_CPL(req, CPL_TID_RELEASE, tid);
+}
+
+/**
+ * Release a TID and inform HW.  If we are unable to allocate the release
+ * message we defer to a work queue.
+ */
+void cxgbe_remove_tid(struct tid_info *t, unsigned int chan, unsigned int tid,
+		      unsigned short family)
+{
+	struct rte_mbuf *mbuf;
+	struct adapter *adap = container_of(t, struct adapter, tids);
+
+	WARN_ON(tid >= t->ntids);
+
+	if (t->tid_tab[tid]) {
+		t->tid_tab[tid] = NULL;
+		rte_atomic32_dec(&t->conns_in_use);
+		if (t->hash_base && tid >= t->hash_base) {
+			if (family == FILTER_TYPE_IPV4)
+				rte_atomic32_dec(&t->hash_tids_in_use);
+		} else {
+			if (family == FILTER_TYPE_IPV4)
+				rte_atomic32_dec(&t->tids_in_use);
+		}
+	}
+
+	mbuf = rte_pktmbuf_alloc((&adap->sge.ctrlq[chan])->mb_pool);
+	if (mbuf) {
+		mbuf->data_len = sizeof(struct cpl_tid_release);
+		mbuf->pkt_len = mbuf->data_len;
+		mk_tid_release(mbuf, tid);
+		t4_mgmt_tx(&adap->sge.ctrlq[chan], mbuf);
+	}
+}
+
 /**
  * Insert a TID.
  */
diff --git a/drivers/net/cxgbe/cxgbe_ofld.h b/drivers/net/cxgbe/cxgbe_ofld.h
index 798e39828..50931ed04 100644
--- a/drivers/net/cxgbe/cxgbe_ofld.h
+++ b/drivers/net/cxgbe/cxgbe_ofld.h
@@ -19,6 +19,19 @@
 	(w)->wr.wr_lo = cpu_to_be64(0); \
 } while (0)
 
+#define INIT_TP_WR_MIT_CPL(w, cpl, tid) do { \
+	INIT_TP_WR(w, tid); \
+	OPCODE_TID(w) = cpu_to_be32(MK_OPCODE_TID(cpl, tid)); \
+} while (0)
+
+#define INIT_ULPTX_WR(w, wrlen, atomic, tid) do { \
+	(w)->wr.wr_hi = cpu_to_be32(V_FW_WR_OP(FW_ULPTX_WR) | \
+				    V_FW_WR_ATOMIC(atomic)); \
+	(w)->wr.wr_mid = cpu_to_be32(V_FW_WR_LEN16(DIV_ROUND_UP(wrlen, 16)) | \
+				     V_FW_WR_FLOWID(tid)); \
+	(w)->wr.wr_lo = cpu_to_be64(0); \
+} while (0)
+
 /*
  * Max # of ATIDs.  The absolute HW max is 16K but we keep it lower.
  */
@@ -68,6 +81,8 @@ static inline void *lookup_atid(const struct tid_info *t, unsigned int atid)
 
 int cxgbe_alloc_atid(struct tid_info *t, void *data);
 void cxgbe_free_atid(struct tid_info *t, unsigned int atid);
+void cxgbe_remove_tid(struct tid_info *t, unsigned int qid, unsigned int tid,
+		      unsigned short family);
 void cxgbe_insert_tid(struct tid_info *t, void *data, unsigned int tid,
 		      unsigned short family);
 
-- 
2.14.1

^ permalink raw reply	[flat|nested] 13+ messages in thread

* [dpdk-dev] [PATCH 6/9] net/cxgbe: add support to query hit counters for flows in HASH region
  2018-06-29 18:12 [dpdk-dev] [PATCH 0/9] net/cxgbe: add support for offloading flows to HASH region Rahul Lakkireddy
                   ` (4 preceding siblings ...)
  2018-06-29 18:12 ` [dpdk-dev] [PATCH 5/9] net/cxgbe: add support to delete flows in " Rahul Lakkireddy
@ 2018-06-29 18:12 ` Rahul Lakkireddy
  2018-06-29 18:12 ` [dpdk-dev] [PATCH 7/9] net/cxgbe: add support to flush " Rahul Lakkireddy
                   ` (3 subsequent siblings)
  9 siblings, 0 replies; 13+ messages in thread
From: Rahul Lakkireddy @ 2018-06-29 18:12 UTC (permalink / raw)
  To: dev; +Cc: shaguna, indranil, nirranjan

From: Shagun Agrawal <shaguna@chelsio.com>

Add interface to enable and query hit and byte counters for flows
offloaded in HASH region.
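
A minimal sketch of the application-side query, assuming the flow was
created with a COUNT action so that hit counting was enabled when the
filter was installed; query_flow_counters() is a hypothetical helper name:

#include <rte_flow.h>

/* Minimal sketch: read hit and byte counters for an offloaded flow via
 * the generic COUNT query.  Assumes hit counting was requested at flow
 * creation time (fs.hitcnts set).
 */
static int query_flow_counters(uint16_t port_id, struct rte_flow *flow,
			       uint64_t *hits, uint64_t *bytes)
{
	struct rte_flow_action count_action = {
		.type = RTE_FLOW_ACTION_TYPE_COUNT,
	};
	struct rte_flow_query_count cnt = { .reset = 0 };
	struct rte_flow_error err;
	int ret;

	ret = rte_flow_query(port_id, flow, &count_action, &cnt, &err);
	if (ret)
		return ret;

	*hits = cnt.hits;
	*bytes = cnt.bytes;
	return 0;
}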

Signed-off-by: Shagun Agrawal <shaguna@chelsio.com>
Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
---
 drivers/net/cxgbe/base/t4_tcb.h  | 11 +++++++
 drivers/net/cxgbe/cxgbe_filter.c | 71 +++++++++++++++++++++++++++++++++++++---
 drivers/net/cxgbe/cxgbe_filter.h |  2 +-
 drivers/net/cxgbe/cxgbe_flow.c   |  5 +--
 4 files changed, 81 insertions(+), 8 deletions(-)

diff --git a/drivers/net/cxgbe/base/t4_tcb.h b/drivers/net/cxgbe/base/t4_tcb.h
index 6d7f5e8c1..25435f9f4 100644
--- a/drivers/net/cxgbe/base/t4_tcb.h
+++ b/drivers/net/cxgbe/base/t4_tcb.h
@@ -12,4 +12,15 @@
 #define M_TCB_RSS_INFO    0x3ffULL
 #define V_TCB_RSS_INFO(x) ((x) << S_TCB_RSS_INFO)
 
+/* 191:160 */
+#define W_TCB_TIMESTAMP    5
+#define S_TCB_TIMESTAMP    0
+#define M_TCB_TIMESTAMP    0xffffffffULL
+#define V_TCB_TIMESTAMP(x) ((x) << S_TCB_TIMESTAMP)
+
+/* 223:192 */
+#define S_TCB_T_RTT_TS_RECENT_AGE    0
+#define M_TCB_T_RTT_TS_RECENT_AGE    0xffffffffULL
+#define V_TCB_T_RTT_TS_RECENT_AGE(x) ((x) << S_TCB_T_RTT_TS_RECENT_AGE)
+
 #endif /* _T4_TCB_DEFS_H */
diff --git a/drivers/net/cxgbe/cxgbe_filter.c b/drivers/net/cxgbe/cxgbe_filter.c
index 7759b8acf..ff43488b5 100644
--- a/drivers/net/cxgbe/cxgbe_filter.c
+++ b/drivers/net/cxgbe/cxgbe_filter.c
@@ -117,6 +117,36 @@ int writable_filter(struct filter_entry *f)
 	return 0;
 }
 
+/**
+ * Send CPL_SET_TCB_FIELD message
+ */
+static void set_tcb_field(struct adapter *adapter, unsigned int ftid,
+			  u16 word, u64 mask, u64 val, int no_reply)
+{
+	struct rte_mbuf *mbuf;
+	struct cpl_set_tcb_field *req;
+	struct sge_ctrl_txq *ctrlq;
+
+	ctrlq = &adapter->sge.ctrlq[0];
+	mbuf = rte_pktmbuf_alloc(ctrlq->mb_pool);
+	WARN_ON(!mbuf);
+
+	mbuf->data_len = sizeof(*req);
+	mbuf->pkt_len = mbuf->data_len;
+
+	req = rte_pktmbuf_mtod(mbuf, struct cpl_set_tcb_field *);
+	memset(req, 0, sizeof(*req));
+	INIT_TP_WR_MIT_CPL(req, CPL_SET_TCB_FIELD, ftid);
+	req->reply_ctrl = cpu_to_be16(V_REPLY_CHAN(0) |
+				      V_QUEUENO(adapter->sge.fw_evtq.abs_id) |
+				      V_NO_REPLY(no_reply));
+	req->word_cookie = cpu_to_be16(V_WORD(word) | V_COOKIE(ftid));
+	req->mask = cpu_to_be64(mask);
+	req->val = cpu_to_be64(val);
+
+	t4_mgmt_tx(ctrlq, mbuf);
+}
+
 /**
  * Build a CPL_SET_TCB_FIELD message as payload of a ULP_TX_PKT command.
  */
@@ -978,6 +1008,15 @@ void hash_filter_rpl(struct adapter *adap, const struct cpl_act_open_rpl *rpl)
 			ctx->tid = f->tid;
 			ctx->result = 0;
 		}
+		if (f->fs.hitcnts)
+			set_tcb_field(adap, tid,
+				      W_TCB_TIMESTAMP,
+				      V_TCB_TIMESTAMP(M_TCB_TIMESTAMP) |
+				      V_TCB_T_RTT_TS_RECENT_AGE
+					      (M_TCB_T_RTT_TS_RECENT_AGE),
+				      V_TCB_TIMESTAMP(0ULL) |
+				      V_TCB_T_RTT_TS_RECENT_AGE(0ULL),
+				      1);
 		break;
 	}
 	default:
@@ -1068,22 +1107,44 @@ void filter_rpl(struct adapter *adap, const struct cpl_set_tcb_rpl *rpl)
  * Retrieve the packet count for the specified filter.
  */
 int cxgbe_get_filter_count(struct adapter *adapter, unsigned int fidx,
-			   u64 *c, bool get_byte)
+			   u64 *c, int hash, bool get_byte)
 {
 	struct filter_entry *f;
 	unsigned int tcb_base, tcbaddr;
 	int ret;
 
 	tcb_base = t4_read_reg(adapter, A_TP_CMM_TCB_BASE);
-	if (fidx >= adapter->tids.nftids)
-		return -ERANGE;
+	if (is_hashfilter(adapter) && hash) {
+		if (fidx < adapter->tids.ntids) {
+			f = adapter->tids.tid_tab[fidx];
+			if (!f)
+				return -EINVAL;
+
+			if (is_t5(adapter->params.chip)) {
+				*c = 0;
+				return 0;
+			}
+			tcbaddr = tcb_base + (fidx * TCB_SIZE);
+			goto get_count;
+		} else {
+			return -ERANGE;
+		}
+	} else {
+		if (fidx >= adapter->tids.nftids)
+			return -ERANGE;
+
+		f = &adapter->tids.ftid_tab[fidx];
+		if (!f->valid)
+			return -EINVAL;
+
+		tcbaddr = tcb_base + f->tid * TCB_SIZE;
+	}
 
 	f = &adapter->tids.ftid_tab[fidx];
 	if (!f->valid)
 		return -EINVAL;
 
-	tcbaddr = tcb_base + f->tid * TCB_SIZE;
-
+get_count:
 	if (is_t5(adapter->params.chip) || is_t6(adapter->params.chip)) {
 		/*
 		 * For T5, the Filter Packet Hit Count is maintained as a
diff --git a/drivers/net/cxgbe/cxgbe_filter.h b/drivers/net/cxgbe/cxgbe_filter.h
index c51efea7d..fac1f75f9 100644
--- a/drivers/net/cxgbe/cxgbe_filter.h
+++ b/drivers/net/cxgbe/cxgbe_filter.h
@@ -228,5 +228,5 @@ void hash_del_filter_rpl(struct adapter *adap,
 			 const struct cpl_abort_rpl_rss *rpl);
 int validate_filter(struct adapter *adap, struct ch_filter_specification *fs);
 int cxgbe_get_filter_count(struct adapter *adapter, unsigned int fidx,
-			   u64 *c, bool get_byte);
+			   u64 *c, int hash, bool get_byte);
 #endif /* _CXGBE_FILTER_H_ */
diff --git a/drivers/net/cxgbe/cxgbe_flow.c b/drivers/net/cxgbe/cxgbe_flow.c
index 4950cb41c..48df62aff 100644
--- a/drivers/net/cxgbe/cxgbe_flow.c
+++ b/drivers/net/cxgbe/cxgbe_flow.c
@@ -591,13 +591,14 @@ static int __cxgbe_flow_query(struct rte_flow *flow, u64 *count,
 			      u64 *byte_count)
 {
 	struct adapter *adap = ethdev2adap(flow->dev);
+	struct ch_filter_specification fs = flow->f->fs;
 	unsigned int fidx = flow->fidx;
 	int ret = 0;
 
-	ret = cxgbe_get_filter_count(adap, fidx, count, 0);
+	ret = cxgbe_get_filter_count(adap, fidx, count, fs.cap, 0);
 	if (ret)
 		return ret;
-	return cxgbe_get_filter_count(adap, fidx, byte_count, 1);
+	return cxgbe_get_filter_count(adap, fidx, byte_count, fs.cap, 1);
 }
 
 static int
-- 
2.14.1

^ permalink raw reply	[flat|nested] 13+ messages in thread

* [dpdk-dev] [PATCH 7/9] net/cxgbe: add support to flush flows in HASH region
  2018-06-29 18:12 [dpdk-dev] [PATCH 0/9] net/cxgbe: add support for offloading flows to HASH region Rahul Lakkireddy
                   ` (5 preceding siblings ...)
  2018-06-29 18:12 ` [dpdk-dev] [PATCH 6/9] net/cxgbe: add support to query hit counters for " Rahul Lakkireddy
@ 2018-06-29 18:12 ` Rahul Lakkireddy
  2018-06-29 18:12 ` [dpdk-dev] [PATCH 8/9] net/cxgbe: add support to match on ingress physical port Rahul Lakkireddy
                   ` (2 subsequent siblings)
  9 siblings, 0 replies; 13+ messages in thread
From: Rahul Lakkireddy @ 2018-06-29 18:12 UTC (permalink / raw)
  To: dev; +Cc: shaguna, indranil, nirranjan

From: Shagun Agrawal <shaguna@chelsio.com>
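
Add support to flush flows offloaded in the HASH region, in addition to
those in the LE-TCAM region, when a flow flush is requested on a port.

A minimal usage sketch; flush_all_flows() is a hypothetical helper name:

#include <rte_flow.h>

/* Minimal sketch: rte_flow_flush() now walks both the LE-TCAM ftid table
 * and the HASH region tid table, destroying every offloaded flow.
 */
static int flush_all_flows(uint16_t port_id)
{
	struct rte_flow_error err;

	return rte_flow_flush(port_id, &err);
}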

Signed-off-by: Shagun Agrawal <shaguna@chelsio.com>
Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
---
 drivers/net/cxgbe/cxgbe_flow.c | 13 +++++++++++++
 1 file changed, 13 insertions(+)

diff --git a/drivers/net/cxgbe/cxgbe_flow.c b/drivers/net/cxgbe/cxgbe_flow.c
index 48df62aff..4f00ac4c6 100644
--- a/drivers/net/cxgbe/cxgbe_flow.c
+++ b/drivers/net/cxgbe/cxgbe_flow.c
@@ -730,6 +730,19 @@ static int cxgbe_flow_flush(struct rte_eth_dev *dev, struct rte_flow_error *e)
 				goto out;
 		}
 	}
+
+	if (is_hashfilter(adap) && adap->tids.tid_tab) {
+		struct filter_entry *f;
+
+		for (i = adap->tids.hash_base; i <= adap->tids.ntids; i++) {
+			f = (struct filter_entry *)adap->tids.tid_tab[i];
+
+			ret = cxgbe_check_n_destroy(f, dev, e);
+			if (ret < 0)
+				goto out;
+		}
+	}
+
 out:
 	return ret >= 0 ? 0 : ret;
 }
-- 
2.14.1

^ permalink raw reply	[flat|nested] 13+ messages in thread

* [dpdk-dev] [PATCH 8/9] net/cxgbe: add support to match on ingress physical port
  2018-06-29 18:12 [dpdk-dev] [PATCH 0/9] net/cxgbe: add support for offloading flows to HASH region Rahul Lakkireddy
                   ` (6 preceding siblings ...)
  2018-06-29 18:12 ` [dpdk-dev] [PATCH 7/9] net/cxgbe: add support to flush " Rahul Lakkireddy
@ 2018-06-29 18:12 ` Rahul Lakkireddy
  2018-06-29 18:12 ` [dpdk-dev] [PATCH 9/9] net/cxgbe: add support to redirect packets to egress " Rahul Lakkireddy
  2018-07-04 19:16 ` [dpdk-dev] [PATCH 0/9] net/cxgbe: add support for offloading flows to HASH region Ferruh Yigit
  9 siblings, 0 replies; 13+ messages in thread
From: Rahul Lakkireddy @ 2018-06-29 18:12 UTC (permalink / raw)
  To: dev; +Cc: shaguna, indranil, nirranjan

From: Shagun Agrawal <shaguna@chelsio.com>

Add support to match packets based on ingress physical port.
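
A minimal sketch of a flow that matches on the ingress physical port,
assuming example values for the physical port index (1) and the target Rx
queue (0); create_iport_flow() is a hypothetical helper name:

#include <rte_flow.h>

/* Minimal sketch: match TCP/IPv4 packets arriving on physical port 1 and
 * steer them to Rx queue 0.  Port and queue indices are example values.
 */
static struct rte_flow *create_iport_flow(uint16_t port_id,
					  struct rte_flow_error *err)
{
	struct rte_flow_attr attr = { .ingress = 1 };
	struct rte_flow_item_phy_port phy_spec = { .index = 1 };
	struct rte_flow_item_phy_port phy_mask = { .index = 0x7 };
	struct rte_flow_action_queue queue = { .index = 0 };
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_PHY_PORT,
		  .spec = &phy_spec, .mask = &phy_mask },
		{ .type = RTE_FLOW_ITEM_TYPE_IPV4 },
		{ .type = RTE_FLOW_ITEM_TYPE_TCP },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};

	return rte_flow_create(port_id, &attr, pattern, actions, err);
}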

Signed-off-by: Shagun Agrawal <shaguna@chelsio.com>
Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
---
 drivers/net/cxgbe/cxgbe_filter.c | 16 +++++++++++++++-
 drivers/net/cxgbe/cxgbe_flow.c   | 30 ++++++++++++++++++++++++++++++
 2 files changed, 45 insertions(+), 1 deletion(-)

diff --git a/drivers/net/cxgbe/cxgbe_filter.c b/drivers/net/cxgbe/cxgbe_filter.c
index ff43488b5..8c5890ea8 100644
--- a/drivers/net/cxgbe/cxgbe_filter.c
+++ b/drivers/net/cxgbe/cxgbe_filter.c
@@ -65,11 +65,19 @@ int validate_filter(struct adapter *adapter, struct ch_filter_specification *fs)
 #define U(_mask, _field) \
 	(!(fconf & (_mask)) && S(_field))
 
-	if (U(F_ETHERTYPE, ethtype) || U(F_PROTOCOL, proto))
+	if (U(F_PORT, iport) || U(F_ETHERTYPE, ethtype) || U(F_PROTOCOL, proto))
 		return -EOPNOTSUPP;
 
 #undef S
 #undef U
+
+	/*
+	 * Don't allow various trivially obvious bogus out-of-range
+	 * values ...
+	 */
+	if (fs->val.iport >= adapter->params.nports)
+		return -ERANGE;
+
 	return 0;
 }
 
@@ -228,6 +236,9 @@ static u64 hash_filter_ntuple(const struct filter_entry *f)
 	u64 ntuple = 0;
 	u16 tcp_proto = IPPROTO_TCP; /* TCP Protocol Number */
 
+	if (tp->port_shift >= 0)
+		ntuple |= (u64)f->fs.mask.iport << tp->port_shift;
+
 	if (tp->protocol_shift >= 0) {
 		if (!f->fs.val.proto)
 			ntuple |= (u64)tcp_proto << tp->protocol_shift;
@@ -664,6 +675,9 @@ int set_filter_wr(struct rte_eth_dev *dev, unsigned int fidx)
 		cpu_to_be16(V_FW_FILTER_WR_RX_CHAN(0) |
 			    V_FW_FILTER_WR_RX_RPL_IQ(adapter->sge.fw_evtq.abs_id
 						     ));
+	fwr->maci_to_matchtypem =
+		cpu_to_be32(V_FW_FILTER_WR_PORT(f->fs.val.iport) |
+			    V_FW_FILTER_WR_PORTM(f->fs.mask.iport));
 	fwr->ptcl = f->fs.val.proto;
 	fwr->ptclm = f->fs.mask.proto;
 	rte_memcpy(fwr->lip, f->fs.val.lip, sizeof(fwr->lip));
diff --git a/drivers/net/cxgbe/cxgbe_flow.c b/drivers/net/cxgbe/cxgbe_flow.c
index 4f00ac4c6..823bc720c 100644
--- a/drivers/net/cxgbe/cxgbe_flow.c
+++ b/drivers/net/cxgbe/cxgbe_flow.c
@@ -93,6 +93,8 @@ cxgbe_fill_filter_region(struct adapter *adap,
 		ntuple_mask |= (u64)fs->mask.proto << tp->protocol_shift;
 	if (tp->ethertype_shift >= 0)
 		ntuple_mask |= (u64)fs->mask.ethtype << tp->ethertype_shift;
+	if (tp->port_shift >= 0)
+		ntuple_mask |= (u64)fs->mask.iport << tp->port_shift;
 
 	if (ntuple_mask != hash_filter_mask)
 		return;
@@ -100,6 +102,27 @@ cxgbe_fill_filter_region(struct adapter *adap,
 	fs->cap = 1;	/* use hash region */
 }
 
+static int
+ch_rte_parsetype_port(const void *dmask, const struct rte_flow_item *item,
+		      struct ch_filter_specification *fs,
+		      struct rte_flow_error *e)
+{
+	const struct rte_flow_item_phy_port *val = item->spec;
+	const struct rte_flow_item_phy_port *umask = item->mask;
+	const struct rte_flow_item_phy_port *mask;
+
+	mask = umask ? umask : (const struct rte_flow_item_phy_port *)dmask;
+
+	if (val->index > 0x7)
+		return rte_flow_error_set(e, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM,
+					  item,
+					  "port index upto 0x7 is supported");
+
+	CXGBE_FILL_FS(val->index, mask->index, iport);
+
+	return 0;
+}
+
 static int
 ch_rte_parsetype_udp(const void *dmask, const struct rte_flow_item *item,
 		     struct ch_filter_specification *fs,
@@ -357,6 +380,13 @@ cxgbe_rtef_parse_actions(struct rte_flow *flow,
 }
 
 struct chrte_fparse parseitem[] = {
+		[RTE_FLOW_ITEM_TYPE_PHY_PORT] = {
+		.fptr = ch_rte_parsetype_port,
+		.dmask = &(const struct rte_flow_item_phy_port){
+			.index = 0x7,
+		}
+	},
+
 	[RTE_FLOW_ITEM_TYPE_IPV4] = {
 		.fptr  = ch_rte_parsetype_ipv4,
 		.dmask = &rte_flow_item_ipv4_mask,
-- 
2.14.1

^ permalink raw reply	[flat|nested] 13+ messages in thread

* [dpdk-dev] [PATCH 9/9] net/cxgbe: add support to redirect packets to egress physical port
  2018-06-29 18:12 [dpdk-dev] [PATCH 0/9] net/cxgbe: add support for offloading flows to HASH region Rahul Lakkireddy
                   ` (7 preceding siblings ...)
  2018-06-29 18:12 ` [dpdk-dev] [PATCH 8/9] net/cxgbe: add support to match on ingress physical port Rahul Lakkireddy
@ 2018-06-29 18:12 ` Rahul Lakkireddy
  2018-07-04 19:16 ` [dpdk-dev] [PATCH 0/9] net/cxgbe: add support for offloading flows to HASH region Ferruh Yigit
  9 siblings, 0 replies; 13+ messages in thread
From: Rahul Lakkireddy @ 2018-06-29 18:12 UTC (permalink / raw)
  To: dev; +Cc: shaguna, indranil, nirranjan

From: Shagun Agrawal <shaguna@chelsio.com>

Add action to redirect matched packets to the specified egress physical
port without sending them to the host.
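
A minimal sketch of a flow that switches traffic between physical ports
entirely in hardware, assuming example port indices (ingress 0, egress 1);
create_switch_flow() is a hypothetical helper name:

#include <rte_flow.h>

/* Minimal sketch: match TCP/IPv4 packets received on physical port 0 and
 * loop them back out of physical port 1 without delivering them to the
 * host.  Port indices are example values.
 */
static struct rte_flow *create_switch_flow(uint16_t port_id,
					   struct rte_flow_error *err)
{
	struct rte_flow_attr attr = { .ingress = 1 };
	struct rte_flow_item_phy_port phy_spec = { .index = 0 };
	struct rte_flow_item_phy_port phy_mask = { .index = 0x7 };
	struct rte_flow_action_phy_port out_port = { .index = 1 };
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_PHY_PORT,
		  .spec = &phy_spec, .mask = &phy_mask },
		{ .type = RTE_FLOW_ITEM_TYPE_IPV4 },
		{ .type = RTE_FLOW_ITEM_TYPE_TCP },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_PHY_PORT, .conf = &out_port },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};

	return rte_flow_create(port_id, &attr, pattern, actions, err);
}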

Signed-off-by: Shagun Agrawal <shaguna@chelsio.com>
Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
---
 drivers/net/cxgbe/base/t4_msg.h  |  6 ++++++
 drivers/net/cxgbe/cxgbe_filter.c | 19 +++++++++++++++++--
 drivers/net/cxgbe/cxgbe_filter.h |  5 ++++-
 drivers/net/cxgbe/cxgbe_flow.c   | 36 ++++++++++++++++++++++++++++++++++++
 4 files changed, 63 insertions(+), 3 deletions(-)

diff --git a/drivers/net/cxgbe/base/t4_msg.h b/drivers/net/cxgbe/base/t4_msg.h
index 7f4c98fb6..5d433c91c 100644
--- a/drivers/net/cxgbe/base/t4_msg.h
+++ b/drivers/net/cxgbe/base/t4_msg.h
@@ -113,6 +113,9 @@ struct work_request_hdr {
 #define G_COOKIE(x) (((x) >> S_COOKIE) & M_COOKIE)
 
 /* option 0 fields */
+#define S_TX_CHAN    2
+#define V_TX_CHAN(x) ((x) << S_TX_CHAN)
+
 #define S_DELACK    5
 #define V_DELACK(x) ((x) << S_DELACK)
 
@@ -145,6 +148,9 @@ struct work_request_hdr {
 #define V_RX_CHANNEL(x) ((x) << S_RX_CHANNEL)
 #define F_RX_CHANNEL    V_RX_CHANNEL(1U)
 
+#define S_CCTRL_ECN    27
+#define V_CCTRL_ECN(x) ((x) << S_CCTRL_ECN)
+
 #define S_T5_OPT_2_VALID    31
 #define V_T5_OPT_2_VALID(x) ((x) << S_T5_OPT_2_VALID)
 #define F_T5_OPT_2_VALID    V_T5_OPT_2_VALID(1U)
diff --git a/drivers/net/cxgbe/cxgbe_filter.c b/drivers/net/cxgbe/cxgbe_filter.c
index 8c5890ea8..7f0d38001 100644
--- a/drivers/net/cxgbe/cxgbe_filter.c
+++ b/drivers/net/cxgbe/cxgbe_filter.c
@@ -71,6 +71,15 @@ int validate_filter(struct adapter *adapter, struct ch_filter_specification *fs)
 #undef S
 #undef U
 
+	/*
+	 * If the user is requesting that the filter action loop
+	 * matching packets back out one of our ports, make sure that
+	 * the egress port is in range.
+	 */
+	if (fs->action == FILTER_SWITCH &&
+	    fs->eport >= adapter->params.nports)
+		return -ERANGE;
+
 	/*
 	 * Don't allow various trivially obvious bogus out-of-range
 	 * values ...
@@ -419,6 +428,7 @@ static void mk_act_open_req6(struct filter_entry *f, struct rte_mbuf *mbuf,
 	req->opt0 = cpu_to_be64(V_DELACK(f->fs.hitcnts) |
 				V_SMAC_SEL((cxgbe_port_viid(f->dev) & 0x7F)
 					   << 1) |
+				V_TX_CHAN(f->fs.eport) |
 				V_ULP_MODE(ULP_MODE_NONE) |
 				F_TCAM_BYPASS | F_NON_OFFLOAD);
 	req->params = cpu_to_be64(V_FILTER_TUPLE(hash_filter_ntuple(f)));
@@ -427,7 +437,8 @@ static void mk_act_open_req6(struct filter_entry *f, struct rte_mbuf *mbuf,
 			    F_T5_OPT_2_VALID |
 			    F_RX_CHANNEL |
 			    V_CONG_CNTRL((f->fs.action == FILTER_DROP) |
-					 (f->fs.dirsteer << 1)));
+					 (f->fs.dirsteer << 1)) |
+			    V_CCTRL_ECN(f->fs.action == FILTER_SWITCH));
 }
 
 /**
@@ -460,6 +471,7 @@ static void mk_act_open_req(struct filter_entry *f, struct rte_mbuf *mbuf,
 	req->opt0 = cpu_to_be64(V_DELACK(f->fs.hitcnts) |
 				V_SMAC_SEL((cxgbe_port_viid(f->dev) & 0x7F)
 					   << 1) |
+				V_TX_CHAN(f->fs.eport) |
 				V_ULP_MODE(ULP_MODE_NONE) |
 				F_TCAM_BYPASS | F_NON_OFFLOAD);
 	req->params = cpu_to_be64(V_FILTER_TUPLE(hash_filter_ntuple(f)));
@@ -468,7 +480,8 @@ static void mk_act_open_req(struct filter_entry *f, struct rte_mbuf *mbuf,
 			    F_T5_OPT_2_VALID |
 			    F_RX_CHANNEL |
 			    V_CONG_CNTRL((f->fs.action == FILTER_DROP) |
-					 (f->fs.dirsteer << 1)));
+					 (f->fs.dirsteer << 1)) |
+			    V_CCTRL_ECN(f->fs.action == FILTER_SWITCH));
 }
 
 /**
@@ -666,7 +679,9 @@ int set_filter_wr(struct rte_eth_dev *dev, unsigned int fidx)
 	fwr->del_filter_to_l2tix =
 		cpu_to_be32(V_FW_FILTER_WR_DROP(f->fs.action == FILTER_DROP) |
 			    V_FW_FILTER_WR_DIRSTEER(f->fs.dirsteer) |
+			    V_FW_FILTER_WR_LPBK(f->fs.action == FILTER_SWITCH) |
 			    V_FW_FILTER_WR_HITCNTS(f->fs.hitcnts) |
+			    V_FW_FILTER_WR_TXCHAN(f->fs.eport) |
 			    V_FW_FILTER_WR_PRIO(f->fs.prio));
 	fwr->ethtype = cpu_to_be16(f->fs.val.ethtype);
 	fwr->ethtypem = cpu_to_be16(f->fs.mask.ethtype);
diff --git a/drivers/net/cxgbe/cxgbe_filter.h b/drivers/net/cxgbe/cxgbe_filter.h
index fac1f75f9..af8fa7529 100644
--- a/drivers/net/cxgbe/cxgbe_filter.h
+++ b/drivers/net/cxgbe/cxgbe_filter.h
@@ -98,6 +98,8 @@ struct ch_filter_specification {
 	uint32_t dirsteer:1;	/* 0 => RSS, 1 => steer to iq */
 	uint32_t iq:10;		/* ingress queue */
 
+	uint32_t eport:2;	/* egress port to switch packet out */
+
 	/* Filter rule value/mask pairs. */
 	struct ch_filter_tuple val;
 	struct ch_filter_tuple mask;
@@ -105,7 +107,8 @@ struct ch_filter_specification {
 
 enum {
 	FILTER_PASS = 0,	/* default */
-	FILTER_DROP
+	FILTER_DROP,
+	FILTER_SWITCH
 };
 
 enum filter_type {
diff --git a/drivers/net/cxgbe/cxgbe_flow.c b/drivers/net/cxgbe/cxgbe_flow.c
index 823bc720c..01c945f1b 100644
--- a/drivers/net/cxgbe/cxgbe_flow.c
+++ b/drivers/net/cxgbe/cxgbe_flow.c
@@ -326,6 +326,28 @@ static int cxgbe_get_fidx(struct rte_flow *flow, unsigned int *fidx)
 	return 0;
 }
 
+static int
+ch_rte_parse_atype_switch(const struct rte_flow_action *a,
+			  struct ch_filter_specification *fs,
+			  struct rte_flow_error *e)
+{
+	const struct rte_flow_action_phy_port *port;
+
+	switch (a->type) {
+	case RTE_FLOW_ACTION_TYPE_PHY_PORT:
+		port = (const struct rte_flow_action_phy_port *)a->conf;
+		fs->eport = port->index;
+		break;
+	default:
+		/* We are not supposed to come here */
+		return rte_flow_error_set(e, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_ACTION, a,
+					  "Action not supported");
+	}
+
+	return 0;
+}
+
 static int
 cxgbe_rtef_parse_actions(struct rte_flow *flow,
 			 const struct rte_flow_action action[],
@@ -335,6 +357,7 @@ cxgbe_rtef_parse_actions(struct rte_flow *flow,
 	const struct rte_flow_action_queue *q;
 	const struct rte_flow_action *a;
 	char abit = 0;
+	int ret;
 
 	for (a = action; a->type != RTE_FLOW_ACTION_TYPE_END; a++) {
 		switch (a->type) {
@@ -368,6 +391,19 @@ cxgbe_rtef_parse_actions(struct rte_flow *flow,
 		case RTE_FLOW_ACTION_TYPE_COUNT:
 			fs->hitcnts = 1;
 			break;
+		case RTE_FLOW_ACTION_TYPE_PHY_PORT:
+			/* We allow multiple switch actions, but switch is
+			 * not compatible with either queue or drop
+			 */
+			if (abit++ && fs->action != FILTER_SWITCH)
+				return rte_flow_error_set(e, EINVAL,
+						RTE_FLOW_ERROR_TYPE_ACTION, a,
+						"overlapping action specified");
+			ret = ch_rte_parse_atype_switch(a, fs, e);
+			if (ret)
+				return ret;
+			fs->action = FILTER_SWITCH;
+			break;
 		default:
 			/* Not supported action : return error */
 			return rte_flow_error_set(e, ENOTSUP,
-- 
2.14.1

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [dpdk-dev] [PATCH 0/9] net/cxgbe: add support for offloading flows to HASH region
  2018-06-29 18:12 [dpdk-dev] [PATCH 0/9] net/cxgbe: add support for offloading flows to HASH region Rahul Lakkireddy
                   ` (8 preceding siblings ...)
  2018-06-29 18:12 ` [dpdk-dev] [PATCH 9/9] net/cxgbe: add support to redirect packets to egress " Rahul Lakkireddy
@ 2018-07-04 19:16 ` Ferruh Yigit
  2018-07-04 19:51   ` Ferruh Yigit
  9 siblings, 1 reply; 13+ messages in thread
From: Ferruh Yigit @ 2018-07-04 19:16 UTC (permalink / raw)
  To: Rahul Lakkireddy, dev; +Cc: shaguna, indranil, nirranjan

On 6/29/2018 7:12 PM, Rahul Lakkireddy wrote:
> This series of patches add support to offload flows to HASH region
> available on Chelsio T6 NICs. HASH region can only offload exact match
> (maskless) flows and hence the masks must be all set for all match
> items.

Hi Shagun, Rahul,

Can you please update the driver documentation [1] and release notes [2] for the
newly added features: HASH(?) region offload support, CLIP region support, etc.?

[1]: doc/guides/nics/cxgbe.rst
[2]: doc/guides/rel_notes/release_18_08.rst

Thanks,
ferruh

> 
> Patch 1 queries firmware for HASH filter support.
> 
> Patch 2 updates cxgbe_flow to decide whether to place flows in LE-TCAM
> or HASH region based on supported hardware configuration and masks of
> match items.
> 
> Patch 3 adds Compressed Local IP (CLIP) region support for offloading
> IPv6 flows in HASH region. Also updates LE-TCAM region to use CLIP for
> offloading IPv6 flows on Chelsio T6 NICs.
> 
> Patch 4 adds support for offloading flows to HASH region.
> 
> Patch 5 adds support for deleting flows in HASH region.
> 
> Patch 6 adds support to query hit and byte counters for offloaded flows
> in HASH region.
> 
> Patch 7 adds support to flush filters in HASH region.
> 
> Patch 8 adds support to match flows based on physical ingress port.
> 
> Patch 9 adds support to redirect packets matching flows to specified
> physical egress port without sending them to host.
> 
> Thanks,
> Rahul
> 
> Shagun Agrawal (9):
>   net/cxgbe: query firmware for HASH filter resources
>   net/cxgbe: validate flows offloaded to HASH region
>   net/cxgbe: add Compressed Local IP region
>   net/cxgbe: add support to offload flows to HASH region
>   net/cxgbe: add support to delete flows in HASH region
>   net/cxgbe: add support to query hit counters for flows in HASH region
>   net/cxgbe: add support to flush flows in HASH region
>   net/cxgbe: add support to match on ingress physical port
>   net/cxgbe: add support to redirect packets to egress physical port
> 
>  drivers/net/cxgbe/Makefile              |   1 +
>  drivers/net/cxgbe/base/adapter.h        |  43 ++
>  drivers/net/cxgbe/base/common.h         |  10 +
>  drivers/net/cxgbe/base/t4_hw.c          |   7 +
>  drivers/net/cxgbe/base/t4_msg.h         | 188 +++++++++
>  drivers/net/cxgbe/base/t4_regs.h        |  12 +
>  drivers/net/cxgbe/base/t4_tcb.h         |  26 ++
>  drivers/net/cxgbe/base/t4fw_interface.h |  31 ++
>  drivers/net/cxgbe/clip_tbl.c            | 195 +++++++++
>  drivers/net/cxgbe/clip_tbl.h            |  31 ++
>  drivers/net/cxgbe/cxgbe_compat.h        |  12 +
>  drivers/net/cxgbe/cxgbe_filter.c        | 697 ++++++++++++++++++++++++++++++--
>  drivers/net/cxgbe/cxgbe_filter.h        |  13 +-
>  drivers/net/cxgbe/cxgbe_flow.c          | 151 ++++++-
>  drivers/net/cxgbe/cxgbe_main.c          | 170 +++++++-
>  drivers/net/cxgbe/cxgbe_ofld.h          |  66 ++-
>  16 files changed, 1614 insertions(+), 39 deletions(-)
>  create mode 100644 drivers/net/cxgbe/base/t4_tcb.h
>  create mode 100644 drivers/net/cxgbe/clip_tbl.c
>  create mode 100644 drivers/net/cxgbe/clip_tbl.h
> 

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [dpdk-dev] [PATCH 3/9] net/cxgbe: add Compressed Local IP region
  2018-06-29 18:12 ` [dpdk-dev] [PATCH 3/9] net/cxgbe: add Compressed Local IP region Rahul Lakkireddy
@ 2018-07-04 19:22   ` Ferruh Yigit
  0 siblings, 0 replies; 13+ messages in thread
From: Ferruh Yigit @ 2018-07-04 19:22 UTC (permalink / raw)
  To: Rahul Lakkireddy, dev; +Cc: shaguna, indranil, nirranjan

On 6/29/2018 7:12 PM, Rahul Lakkireddy wrote:
> From: Shagun Agrawal <shaguna@chelsio.com>
> 
> CLIP region holds destination IPv6 addresses to be matched for
> corresponding flows. Query firmware for CLIP resources and allocate
> table to manage them. Also update LE-TCAM to use CLIP to reduce
> number of slots needed to offload IPv6 flows.
> 
> Signed-off-by: Shagun Agrawal <shaguna@chelsio.com>
> Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
> ---
>  drivers/net/cxgbe/Makefile              |   1 +
>  drivers/net/cxgbe/base/adapter.h        |  32 ++++++
>  drivers/net/cxgbe/base/t4fw_interface.h |  19 ++++
>  drivers/net/cxgbe/clip_tbl.c            | 195 ++++++++++++++++++++++++++++++++

This breaks the meson build; the newly added file 'clip_tbl.c' needs to be added
to meson.build. If this is the only issue, I can fix it while applying.

>  drivers/net/cxgbe/clip_tbl.h            |  31 +++++
>  drivers/net/cxgbe/cxgbe_filter.c        |  99 ++++++++++++----
>  drivers/net/cxgbe/cxgbe_filter.h        |   1 +
>  drivers/net/cxgbe/cxgbe_main.c          |  19 ++++
>  8 files changed, 377 insertions(+), 20 deletions(-)
>  create mode 100644 drivers/net/cxgbe/clip_tbl.c
>  create mode 100644 drivers/net/cxgbe/clip_tbl.h

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [dpdk-dev] [PATCH 0/9] net/cxgbe: add support for offloading flows to HASH region
  2018-07-04 19:16 ` [dpdk-dev] [PATCH 0/9] net/cxgbe: add support for offloading flows to HASH region Ferruh Yigit
@ 2018-07-04 19:51   ` Ferruh Yigit
  0 siblings, 0 replies; 13+ messages in thread
From: Ferruh Yigit @ 2018-07-04 19:51 UTC (permalink / raw)
  To: Rahul Lakkireddy, dev; +Cc: shaguna, indranil, nirranjan

On 7/4/2018 8:16 PM, Ferruh Yigit wrote:
> On 6/29/2018 7:12 PM, Rahul Lakkireddy wrote:
>> This series of patches add support to offload flows to HASH region
>> available on Chelsio T6 NICs. HASH region can only offload exact match
>> (maskless) flows and hence the masks must be all set for all match
>> items.
> 
> Hi Shagun, Rahul,
> 
> Can you please update the driver documentation [1] and release notes [2] for the
> newly added features: HASH(?) region offload support, CLIP region support, etc.?
> 
> [1]: doc/guides/nics/cxgbe.rst
> [2]: doc/guides/rel_notes/release_18_08.rst
> 
> Thanks,
> ferruh
> 
>>
>> Patch 1 queries firmware for HASH filter support.
>>
>> Patch 2 updates cxgbe_flow to decide whether to place flows in LE-TCAM
>> or HASH region based on supported hardware configuration and masks of
>> match items.
>>
>> Patch 3 adds Compressed Local IP (CLIP) region support for offloading
>> IPv6 flows in HASH region. Also updates LE-TCAM region to use CLIP for
>> offloading IPv6 flows on Chelsio T6 NICs.
>>
>> Patch 4 adds support for offloading flows to HASH region.
>>
>> Patch 5 adds support for deleting flows in HASH region.
>>
>> Patch 6 adds support to query hit and byte counters for offloaded flows
>> in HASH region.
>>
>> Patch 7 adds support to flush filters in HASH region.
>>
>> Patch 8 adds support to match flows based on physical ingress port.
>>
>> Patch 9 adds support to redirect packets matching flows to specified
>> physical egress port without sending them to host.
>>
>> Thanks,
>> Rahul
>>
>> Shagun Agrawal (9):
>>   net/cxgbe: query firmware for HASH filter resources
>>   net/cxgbe: validate flows offloaded to HASH region
>>   net/cxgbe: add Compressed Local IP region
>>   net/cxgbe: add support to offload flows to HASH region
>>   net/cxgbe: add support to delete flows in HASH region
>>   net/cxgbe: add support to query hit counters for flows in HASH region
>>   net/cxgbe: add support to flush flows in HASH region
>>   net/cxgbe: add support to match on ingress physical port
>>   net/cxgbe: add support to redirect packets to egress physical port

Series applied to dpdk-next-net/master, thanks.

(meson build fixed while applying. Please also remember to address the
documentation update request.)

^ permalink raw reply	[flat|nested] 13+ messages in thread

end of thread, other threads:[~2018-07-04 19:51 UTC | newest]

Thread overview: 13+ messages
2018-06-29 18:12 [dpdk-dev] [PATCH 0/9] net/cxgbe: add support for offloading flows to HASH region Rahul Lakkireddy
2018-06-29 18:12 ` [dpdk-dev] [PATCH 1/9] net/cxgbe: query firmware for HASH filter resources Rahul Lakkireddy
2018-06-29 18:12 ` [dpdk-dev] [PATCH 2/9] net/cxgbe: validate flows offloaded to HASH region Rahul Lakkireddy
2018-06-29 18:12 ` [dpdk-dev] [PATCH 3/9] net/cxgbe: add Compressed Local IP region Rahul Lakkireddy
2018-07-04 19:22   ` Ferruh Yigit
2018-06-29 18:12 ` [dpdk-dev] [PATCH 4/9] net/cxgbe: add support to offload flows to HASH region Rahul Lakkireddy
2018-06-29 18:12 ` [dpdk-dev] [PATCH 5/9] net/cxgbe: add support to delete flows in " Rahul Lakkireddy
2018-06-29 18:12 ` [dpdk-dev] [PATCH 6/9] net/cxgbe: add support to query hit counters for " Rahul Lakkireddy
2018-06-29 18:12 ` [dpdk-dev] [PATCH 7/9] net/cxgbe: add support to flush " Rahul Lakkireddy
2018-06-29 18:12 ` [dpdk-dev] [PATCH 8/9] net/cxgbe: add support to match on ingress physical port Rahul Lakkireddy
2018-06-29 18:12 ` [dpdk-dev] [PATCH 9/9] net/cxgbe: add support to redirect packets to egress " Rahul Lakkireddy
2018-07-04 19:16 ` [dpdk-dev] [PATCH 0/9] net/cxgbe: add support for offloading flows to HASH region Ferruh Yigit
2018-07-04 19:51   ` Ferruh Yigit
