* [dpdk-dev] [PATCH 0/9] net/cxgbe: updates for rte_flow support
@ 2020-03-11 9:05 Rahul Lakkireddy
2020-03-11 9:05 ` [dpdk-dev] [PATCH 1/9] net/cxgbe: add rte_flow support for matching Q-in-Q VLAN Rahul Lakkireddy
` (10 more replies)
0 siblings, 11 replies; 15+ messages in thread
From: Rahul Lakkireddy @ 2020-03-11 9:05 UTC (permalink / raw)
To: dev; +Cc: nirranjan, kaara.satwik
From: Karra Satwik <kaara.satwik@chelsio.com>
This series of patches adds rte_flow support for matching
Q-in-Q VLAN, IP TOS, PF, and VF fields. It also adds Destination
MAC rewrite and Source MAC rewrite actions.
Apart from the 4-tuple (IP src/dst addresses and TCP/UDP src/dst
ports), there are only 40 bits available to match other fields in
packet headers. Currently, the combination of packet header fields
to match is configured via filterMode for LE-TCAM filters and
filterMask for HASH filters in the firmware config files
(t5/t6-config.txt). The adapter needs to be reflashed with a new
firmware config file every time the combination needs to be changed.
To avoid this, a new firmware API is available to dynamically change
the combination before completing full adapter initialization. So,
two new devargs, filtermode and filtermask, are added to dynamically
select the combination at runtime.
Patch 1 adds rte_flow support for matching Q-in-Q VLAN.
Patch 2 adds rte_flow support for matching IP TOS.
Patch 3 adds rte_flow support for matching all packets on PF.
Patch 4 adds rte_flow support for matching all packets on VF.
Patch 5 adds rte_flow support for overwriting destination MAC.
Patch 6 adds Source MAC Table (SMT) support.
Patch 7 adds rte_flow support for Source MAC Rewrite.
Patch 8 adds new firmware API for validating filter spec.
Patch 9 adds devargs to control filtermode and filtermask
combinations.
Thanks,
Satwik
Karra Satwik (9):
net/cxgbe: add rte_flow support for matching Q-in-Q VLAN
net/cxgbe: add rte_flow support for matching IP TOS
net/cxgbe: add rte_flow support for matching all packets on PF
net/cxgbe: add rte_flow support for matching all packets on VF
net/cxgbe: add rte_flow support for overwriting destination MAC
net/cxgbe: add Source MAC Table (SMT) support
net/cxgbe: add rte_flow support for Source MAC Rewrite
net/cxgbe: use firmware API for validating filter spec
net/cxgbe: add devargs to control filtermode and filtermask values
doc/guides/nics/cxgbe.rst | 219 +++++++++++++++++-
drivers/net/cxgbe/Makefile | 1 +
drivers/net/cxgbe/base/adapter.h | 9 +
drivers/net/cxgbe/base/common.h | 8 +-
drivers/net/cxgbe/base/t4_hw.c | 81 +++++--
drivers/net/cxgbe/base/t4_msg.h | 40 ++++
drivers/net/cxgbe/base/t4_regs.h | 4 +
drivers/net/cxgbe/base/t4_tcb.h | 10 +
drivers/net/cxgbe/base/t4fw_interface.h | 55 ++++-
drivers/net/cxgbe/cxgbe.h | 23 ++
drivers/net/cxgbe/cxgbe_ethdev.c | 4 +-
drivers/net/cxgbe/cxgbe_filter.c | 103 ++++++++-
drivers/net/cxgbe/cxgbe_filter.h | 8 +-
drivers/net/cxgbe/cxgbe_flow.c | 241 +++++++++++++++-----
drivers/net/cxgbe/cxgbe_main.c | 291 +++++++++++++++++++++++-
drivers/net/cxgbe/meson.build | 1 +
drivers/net/cxgbe/smt.c | 230 +++++++++++++++++++
drivers/net/cxgbe/smt.h | 44 ++++
18 files changed, 1275 insertions(+), 97 deletions(-)
create mode 100644 drivers/net/cxgbe/smt.c
create mode 100644 drivers/net/cxgbe/smt.h
--
2.25.0
* [dpdk-dev] [PATCH 1/9] net/cxgbe: add rte_flow support for matching Q-in-Q VLAN
2020-03-11 9:05 [dpdk-dev] [PATCH 0/9] net/cxgbe: updates for rte_flow support Rahul Lakkireddy
@ 2020-03-11 9:05 ` Rahul Lakkireddy
2020-03-11 9:05 ` [dpdk-dev] [PATCH 2/9] net/cxgbe: add rte_flow support for matching IP TOS Rahul Lakkireddy
` (9 subsequent siblings)
10 siblings, 0 replies; 15+ messages in thread
From: Rahul Lakkireddy @ 2020-03-11 9:05 UTC (permalink / raw)
To: dev; +Cc: nirranjan, kaara.satwik
From: Karra Satwik <kaara.satwik@chelsio.com>
Add support to match fields in 802.1ad Q-in-Q VLAN packets.
Relax the repeated pattern item check for the RTE_FLOW_ITEM_TYPE_VLAN
item, since the same item is used to represent both QinQ and VLAN
packets.
When QinQ match is enabled, the ethertype field in the hardware
spec must contain the innermost VLAN header's ethertype, and not
the Ethernet header's ethertype. The hardware automatically
searches for ethertype 0x88A8/0x8100 in the Ethernet header when
parsing an incoming packet against a QinQ/VLAN pattern,
respectively.
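For illustration, an application could request a QinQ match through
the generic rte_flow API roughly as below. This is an untested
sketch: the function name, VLAN ID, and queue index are placeholders,
and whether the PMD accepts this exact rule depends on the filter
combination configured in hardware.

#include <rte_byteorder.h>
#include <rte_ethdev.h>
#include <rte_ether.h>
#include <rte_flow.h>

/* Steer 802.1ad (QinQ) packets with outer VLAN ID 100 to Rx queue 0.
 * All values below are illustrative only.
 */
static struct rte_flow *
create_qinq_flow(uint16_t port_id, struct rte_flow_error *err)
{
    struct rte_flow_attr attr = { .ingress = 1 };
    struct rte_flow_item_eth eth_spec = {
        .type = RTE_BE16(RTE_ETHER_TYPE_QINQ),
    };
    struct rte_flow_item_eth eth_mask = {
        .type = RTE_BE16(0xffff),
    };
    struct rte_flow_item_vlan ovlan_spec = { .tci = RTE_BE16(100) };
    struct rte_flow_item_vlan ovlan_mask = { .tci = RTE_BE16(0x0fff) };
    struct rte_flow_item pattern[] = {
        { .type = RTE_FLOW_ITEM_TYPE_ETH,
          .spec = &eth_spec, .mask = &eth_mask },
        { .type = RTE_FLOW_ITEM_TYPE_VLAN,
          .spec = &ovlan_spec, .mask = &ovlan_mask },
        { .type = RTE_FLOW_ITEM_TYPE_END },
    };
    struct rte_flow_action_queue queue = { .index = 0 };
    struct rte_flow_action actions[] = {
        { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
        { .type = RTE_FLOW_ACTION_TYPE_END },
    };

    return rte_flow_create(port_id, &attr, pattern, actions, err);
}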
Signed-off-by: Karra Satwik <kaara.satwik@chelsio.com>
Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
---
drivers/net/cxgbe/base/t4_hw.c | 7 ---
drivers/net/cxgbe/base/t4_regs.h | 4 ++
drivers/net/cxgbe/cxgbe_filter.c | 26 +++++++-
drivers/net/cxgbe/cxgbe_flow.c | 102 +++++++++++++++++--------------
4 files changed, 82 insertions(+), 57 deletions(-)
diff --git a/drivers/net/cxgbe/base/t4_hw.c b/drivers/net/cxgbe/base/t4_hw.c
index 71ad1cb0f..f6bf57c75 100644
--- a/drivers/net/cxgbe/base/t4_hw.c
+++ b/drivers/net/cxgbe/base/t4_hw.c
@@ -5254,13 +5254,6 @@ int t4_init_tp_params(struct adapter *adap)
adap->params.tp.macmatch_shift = t4_filter_field_shift(adap,
F_MACMATCH);
- /*
- * If TP_INGRESS_CONFIG.VNID == 0, then TP_VLAN_PRI_MAP.VNIC_ID
- * represents the presense of an Outer VLAN instead of a VNIC ID.
- */
- if ((adap->params.tp.ingress_config & F_VNIC) == 0)
- adap->params.tp.vnic_shift = -1;
-
v = t4_read_reg(adap, LE_3_DB_HASH_MASK_GEN_IPV4_T6_A);
adap->params.tp.hash_filter_mask = v;
v = t4_read_reg(adap, LE_4_DB_HASH_MASK_GEN_IPV4_T6_A);
diff --git a/drivers/net/cxgbe/base/t4_regs.h b/drivers/net/cxgbe/base/t4_regs.h
index af8c741e2..97cf49a48 100644
--- a/drivers/net/cxgbe/base/t4_regs.h
+++ b/drivers/net/cxgbe/base/t4_regs.h
@@ -572,6 +572,10 @@
#define A_TP_INGRESS_CONFIG 0x141
+#define S_USE_ENC_IDX 13
+#define V_USE_ENC_IDX(x) ((x) << S_USE_ENC_IDX)
+#define F_USE_ENC_IDX V_USE_ENC_IDX(1U)
+
#define S_VNIC 11
#define V_VNIC(x) ((x) << S_VNIC)
#define F_VNIC V_VNIC(1U)
diff --git a/drivers/net/cxgbe/cxgbe_filter.c b/drivers/net/cxgbe/cxgbe_filter.c
index b9d9d5d39..d26be3cd7 100644
--- a/drivers/net/cxgbe/cxgbe_filter.c
+++ b/drivers/net/cxgbe/cxgbe_filter.c
@@ -56,13 +56,15 @@ int cxgbe_init_hash_filter(struct adapter *adap)
int cxgbe_validate_filter(struct adapter *adapter,
struct ch_filter_specification *fs)
{
- u32 fconf;
+ u32 fconf, iconf;
/*
* Check for unconfigured fields being used.
*/
fconf = adapter->params.tp.vlan_pri_map;
+ iconf = adapter->params.tp.ingress_config;
+
#define S(_field) \
(fs->val._field || fs->mask._field)
#define U(_mask, _field) \
@@ -70,7 +72,15 @@ int cxgbe_validate_filter(struct adapter *adapter,
if (U(F_PORT, iport) || U(F_ETHERTYPE, ethtype) ||
U(F_PROTOCOL, proto) || U(F_MACMATCH, macidx) ||
- U(F_VLAN, ivlan_vld))
+ U(F_VLAN, ivlan_vld) || U(F_VNIC_ID, ovlan_vld))
+ return -EOPNOTSUPP;
+
+ /* Ensure OVLAN match is enabled in hardware */
+ if (S(ovlan_vld) && (iconf & F_VNIC))
+ return -EOPNOTSUPP;
+
+ /* To use OVLAN, L4 encapsulation match must not be enabled */
+ if (S(ovlan_vld) && (iconf & F_USE_ENC_IDX))
return -EOPNOTSUPP;
#undef S
@@ -296,6 +306,12 @@ static u64 hash_filter_ntuple(const struct filter_entry *f)
if (tp->vlan_shift >= 0 && f->fs.mask.ivlan)
ntuple |= (u64)(F_FT_VLAN_VLD | f->fs.val.ivlan) <<
tp->vlan_shift;
+ if (tp->vnic_shift >= 0) {
+ if (!(adap->params.tp.ingress_config & F_VNIC) &&
+ f->fs.mask.ovlan_vld)
+ ntuple |= (u64)(f->fs.val.ovlan_vld << 16 |
+ f->fs.val.ovlan) << tp->vnic_shift;
+ }
return ntuple;
}
@@ -775,7 +791,9 @@ static int set_filter_wr(struct rte_eth_dev *dev, unsigned int fidx)
fwr->ethtypem = cpu_to_be16(f->fs.mask.ethtype);
fwr->frag_to_ovlan_vldm =
(V_FW_FILTER_WR_IVLAN_VLD(f->fs.val.ivlan_vld) |
- V_FW_FILTER_WR_IVLAN_VLDM(f->fs.mask.ivlan_vld));
+ V_FW_FILTER_WR_IVLAN_VLDM(f->fs.mask.ivlan_vld) |
+ V_FW_FILTER_WR_OVLAN_VLD(f->fs.val.ovlan_vld) |
+ V_FW_FILTER_WR_OVLAN_VLDM(f->fs.mask.ovlan_vld));
fwr->smac_sel = 0;
fwr->rx_chan_rx_rpl_iq =
cpu_to_be16(V_FW_FILTER_WR_RX_CHAN(0) |
@@ -790,6 +808,8 @@ static int set_filter_wr(struct rte_eth_dev *dev, unsigned int fidx)
fwr->ptclm = f->fs.mask.proto;
fwr->ivlan = cpu_to_be16(f->fs.val.ivlan);
fwr->ivlanm = cpu_to_be16(f->fs.mask.ivlan);
+ fwr->ovlan = cpu_to_be16(f->fs.val.ovlan);
+ fwr->ovlanm = cpu_to_be16(f->fs.mask.ovlan);
rte_memcpy(fwr->lip, f->fs.val.lip, sizeof(fwr->lip));
rte_memcpy(fwr->lipm, f->fs.mask.lip, sizeof(fwr->lipm));
rte_memcpy(fwr->fip, f->fs.val.fip, sizeof(fwr->fip));
diff --git a/drivers/net/cxgbe/cxgbe_flow.c b/drivers/net/cxgbe/cxgbe_flow.c
index 9070f4960..cd833d095 100644
--- a/drivers/net/cxgbe/cxgbe_flow.c
+++ b/drivers/net/cxgbe/cxgbe_flow.c
@@ -87,6 +87,15 @@ cxgbe_tweak_filter_spec(struct adapter *adap,
fs->mask.ethtype = 0;
}
break;
+ case RTE_ETHER_TYPE_QINQ:
+ if (adap->params.tp.ethertype_shift < 0 &&
+ adap->params.tp.vnic_shift >= 0) {
+ fs->val.ovlan_vld = 1;
+ fs->mask.ovlan_vld = 1;
+ fs->val.ethtype = 0;
+ fs->mask.ethtype = 0;
+ }
+ break;
default:
break;
}
@@ -145,6 +154,9 @@ cxgbe_fill_filter_region(struct adapter *adap,
if (tp->vlan_shift >= 0 && fs->mask.ivlan_vld)
ntuple_mask |= (u64)(F_FT_VLAN_VLD | fs->mask.ivlan) <<
tp->vlan_shift;
+ if (tp->vnic_shift >= 0 && fs->mask.ovlan_vld)
+ ntuple_mask |= (u64)(F_FT_VLAN_VLD | fs->mask.ovlan) <<
+ tp->vnic_shift;
if (ntuple_mask != hash_filter_mask)
return;
@@ -167,22 +179,6 @@ ch_rte_parsetype_eth(const void *dmask, const struct rte_flow_item *item,
if (!spec)
return 0;
- /* Chelsio hardware supports matching on only one ethertype
- * (i.e. either the outer or inner ethertype, but not both). If
- * we already encountered VLAN item, then ensure that the outer
- * ethertype is VLAN (0x8100) and don't overwrite the inner
- * ethertype stored during VLAN item parsing. Note that if
- * 'ivlan_vld' bit is set in Chelsio filter spec, then the
- * hardware automatically only matches packets with outer
- * ethertype having VLAN (0x8100).
- */
- if (fs->mask.ivlan_vld &&
- be16_to_cpu(spec->type) != RTE_ETHER_TYPE_VLAN)
- return rte_flow_error_set(e, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM,
- item,
- "Already encountered VLAN item,"
- " but outer ethertype is not 0x8100");
-
/* we don't support SRC_MAC filtering*/
if (!rte_is_zero_ether_addr(&mask->src))
return rte_flow_error_set(e, ENOTSUP, RTE_FLOW_ERROR_TYPE_ITEM,
@@ -206,13 +202,9 @@ ch_rte_parsetype_eth(const void *dmask, const struct rte_flow_item *item,
CXGBE_FILL_FS(idx, 0x1ff, macidx);
}
- /* Only set outer ethertype, if we didn't encounter VLAN item yet.
- * Otherwise, the inner ethertype set by VLAN item will get
- * overwritten.
- */
- if (!fs->mask.ivlan_vld)
- CXGBE_FILL_FS(be16_to_cpu(spec->type),
- be16_to_cpu(mask->type), ethtype);
+ CXGBE_FILL_FS(be16_to_cpu(spec->type),
+ be16_to_cpu(mask->type), ethtype);
+
return 0;
}
@@ -249,32 +241,48 @@ ch_rte_parsetype_vlan(const void *dmask, const struct rte_flow_item *item,
/* If user has not given any mask, then use chelsio supported mask. */
mask = umask ? umask : (const struct rte_flow_item_vlan *)dmask;
- CXGBE_FILL_FS(1, 1, ivlan_vld);
- if (!spec)
- return 0; /* Wildcard, match all VLAN */
-
- /* Chelsio hardware supports matching on only one ethertype
- * (i.e. either the outer or inner ethertype, but not both).
- * If outer ethertype is already set and is not VLAN (0x8100),
- * then don't proceed further. Otherwise, reset the outer
- * ethertype, so that it can be replaced by inner ethertype.
- * Note that the hardware will automatically match on outer
- * ethertype 0x8100, if 'ivlan_vld' bit is set in Chelsio
- * filter spec.
+ if (!fs->mask.ethtype)
+ return rte_flow_error_set(e, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM,
+ item,
+ "Can't parse VLAN item without knowing ethertype");
+
+ /* If ethertype is already set and is not VLAN (0x8100) or
+ * QINQ(0x88A8), then don't proceed further. Otherwise,
+ * reset the outer ethertype, so that it can be replaced by
+ * innermost ethertype. Note that hardware will automatically
+ * match against VLAN or QINQ packets, based on 'ivlan_vld' or
+ * 'ovlan_vld' bit set in Chelsio filter spec, respectively.
*/
if (fs->mask.ethtype) {
- if (fs->val.ethtype != RTE_ETHER_TYPE_VLAN)
+ if (fs->val.ethtype != RTE_ETHER_TYPE_VLAN &&
+ fs->val.ethtype != RTE_ETHER_TYPE_QINQ)
return rte_flow_error_set(e, EINVAL,
RTE_FLOW_ERROR_TYPE_ITEM,
item,
- "Outer ethertype not 0x8100");
+ "Ethertype must be 0x8100 or 0x88a8");
+ }
- fs->val.ethtype = 0;
- fs->mask.ethtype = 0;
+ if (fs->val.ethtype == RTE_ETHER_TYPE_QINQ) {
+ CXGBE_FILL_FS(1, 1, ovlan_vld);
+ if (spec) {
+ CXGBE_FILL_FS(be16_to_cpu(spec->tci),
+ be16_to_cpu(mask->tci), ovlan);
+
+ fs->mask.ethtype = 0;
+ fs->val.ethtype = 0;
+ }
+ } else if (fs->val.ethtype == RTE_ETHER_TYPE_VLAN) {
+ CXGBE_FILL_FS(1, 1, ivlan_vld);
+ if (spec) {
+ CXGBE_FILL_FS(be16_to_cpu(spec->tci),
+ be16_to_cpu(mask->tci), ivlan);
+
+ fs->mask.ethtype = 0;
+ fs->val.ethtype = 0;
+ }
}
- CXGBE_FILL_FS(be16_to_cpu(spec->tci), be16_to_cpu(mask->tci), ivlan);
- if (spec->inner_type)
+ if (spec)
CXGBE_FILL_FS(be16_to_cpu(spec->inner_type),
be16_to_cpu(mask->inner_type), ethtype);
@@ -351,8 +359,7 @@ ch_rte_parsetype_ipv4(const void *dmask, const struct rte_flow_item *item,
item, "ttl/tos are not supported");
if (fs->mask.ethtype &&
- (fs->val.ethtype != RTE_ETHER_TYPE_VLAN &&
- fs->val.ethtype != RTE_ETHER_TYPE_IPV4))
+ (fs->val.ethtype != RTE_ETHER_TYPE_IPV4))
return rte_flow_error_set(e, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM,
item,
"Couldn't find IPv4 ethertype");
@@ -385,8 +392,7 @@ ch_rte_parsetype_ipv6(const void *dmask, const struct rte_flow_item *item,
"tc/flow/hop are not supported");
if (fs->mask.ethtype &&
- (fs->val.ethtype != RTE_ETHER_TYPE_VLAN &&
- fs->val.ethtype != RTE_ETHER_TYPE_IPV6))
+ (fs->val.ethtype != RTE_ETHER_TYPE_IPV6))
return rte_flow_error_set(e, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM,
item,
"Couldn't find IPv6 ethertype");
@@ -907,10 +913,12 @@ cxgbe_rtef_parse_items(struct rte_flow *flow,
continue;
default:
/* check if item is repeated */
- if (repeat[i->type])
+ if (repeat[i->type] &&
+ i->type != RTE_FLOW_ITEM_TYPE_VLAN)
return rte_flow_error_set(e, ENOTSUP,
RTE_FLOW_ERROR_TYPE_ITEM, i,
- "parse items cannot be repeated (except void)");
+ "parse items cannot be repeated(except void/vlan)");
+
repeat[i->type] = 1;
/* No spec found for this pattern item. Skip it */
--
2.25.0
* [dpdk-dev] [PATCH 2/9] net/cxgbe: add rte_flow support for matching IP TOS
2020-03-11 9:05 [dpdk-dev] [PATCH 0/9] net/cxgbe: updates for rte_flow support Rahul Lakkireddy
2020-03-11 9:05 ` [dpdk-dev] [PATCH 1/9] net/cxgbe: add rte_flow support for matching Q-in-Q VLAN Rahul Lakkireddy
@ 2020-03-11 9:05 ` Rahul Lakkireddy
2020-03-11 9:05 ` [dpdk-dev] [PATCH 3/9] net/cxgbe: add rte_flow support for matching all packets on PF Rahul Lakkireddy
` (8 subsequent siblings)
10 siblings, 0 replies; 15+ messages in thread
From: Rahul Lakkireddy @ 2020-03-11 9:05 UTC (permalink / raw)
To: dev; +Cc: nirranjan, kaara.satwik
From: Karra Satwik <kaara.satwik@chelsio.com>
Add support to match the Type of Service (TOS) field in the
IPv4/IPv6 header.
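As an illustration, an IPv4 TOS match could be requested through the
generic rte_flow API roughly as below. This is an untested sketch:
the function name, TOS value, and queue index are placeholders, and
acceptance of the rule depends on the filter combination configured
in hardware.

#include <rte_ethdev.h>
#include <rte_flow.h>

/* Steer IPv4 packets carrying DSCP EF (TOS byte 0xb8) to Rx queue 1.
 * All values below are illustrative only.
 */
static struct rte_flow *
create_tos_flow(uint16_t port_id, struct rte_flow_error *err)
{
    struct rte_flow_attr attr = { .ingress = 1 };
    struct rte_flow_item_ipv4 ip_spec = {
        .hdr.type_of_service = 0xb8,
    };
    struct rte_flow_item_ipv4 ip_mask = {
        .hdr.type_of_service = 0xff,
    };
    struct rte_flow_item pattern[] = {
        { .type = RTE_FLOW_ITEM_TYPE_ETH },
        { .type = RTE_FLOW_ITEM_TYPE_IPV4,
          .spec = &ip_spec, .mask = &ip_mask },
        { .type = RTE_FLOW_ITEM_TYPE_END },
    };
    struct rte_flow_action_queue queue = { .index = 1 };
    struct rte_flow_action actions[] = {
        { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
        { .type = RTE_FLOW_ACTION_TYPE_END },
    };

    return rte_flow_create(port_id, &attr, pattern, actions, err);
}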
Signed-off-by: Karra Satwik <kaara.satwik@chelsio.com>
Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
---
drivers/net/cxgbe/base/common.h | 1 +
drivers/net/cxgbe/base/t4_hw.c | 1 +
drivers/net/cxgbe/cxgbe_filter.c | 7 +++++-
drivers/net/cxgbe/cxgbe_flow.c | 42 +++++++++++++++++++++++++++-----
4 files changed, 44 insertions(+), 7 deletions(-)
diff --git a/drivers/net/cxgbe/base/common.h b/drivers/net/cxgbe/base/common.h
index 6047642c5..793cad11d 100644
--- a/drivers/net/cxgbe/base/common.h
+++ b/drivers/net/cxgbe/base/common.h
@@ -158,6 +158,7 @@ struct tp_params {
int protocol_shift;
int ethertype_shift;
int macmatch_shift;
+ int tos_shift;
u64 hash_filter_mask;
};
diff --git a/drivers/net/cxgbe/base/t4_hw.c b/drivers/net/cxgbe/base/t4_hw.c
index f6bf57c75..cd4da0b9f 100644
--- a/drivers/net/cxgbe/base/t4_hw.c
+++ b/drivers/net/cxgbe/base/t4_hw.c
@@ -5253,6 +5253,7 @@ int t4_init_tp_params(struct adapter *adap)
F_ETHERTYPE);
adap->params.tp.macmatch_shift = t4_filter_field_shift(adap,
F_MACMATCH);
+ adap->params.tp.tos_shift = t4_filter_field_shift(adap, F_TOS);
v = t4_read_reg(adap, LE_3_DB_HASH_MASK_GEN_IPV4_T6_A);
adap->params.tp.hash_filter_mask = v;
diff --git a/drivers/net/cxgbe/cxgbe_filter.c b/drivers/net/cxgbe/cxgbe_filter.c
index d26be3cd7..193738f93 100644
--- a/drivers/net/cxgbe/cxgbe_filter.c
+++ b/drivers/net/cxgbe/cxgbe_filter.c
@@ -72,7 +72,8 @@ int cxgbe_validate_filter(struct adapter *adapter,
if (U(F_PORT, iport) || U(F_ETHERTYPE, ethtype) ||
U(F_PROTOCOL, proto) || U(F_MACMATCH, macidx) ||
- U(F_VLAN, ivlan_vld) || U(F_VNIC_ID, ovlan_vld))
+ U(F_VLAN, ivlan_vld) || U(F_VNIC_ID, ovlan_vld) ||
+ U(F_TOS, tos))
return -EOPNOTSUPP;
/* Ensure OVLAN match is enabled in hardware */
@@ -312,6 +313,8 @@ static u64 hash_filter_ntuple(const struct filter_entry *f)
ntuple |= (u64)(f->fs.val.ovlan_vld << 16 |
f->fs.val.ovlan) << tp->vnic_shift;
}
+ if (tp->tos_shift >= 0 && f->fs.mask.tos)
+ ntuple |= (u64)f->fs.val.tos << tp->tos_shift;
return ntuple;
}
@@ -806,6 +809,8 @@ static int set_filter_wr(struct rte_eth_dev *dev, unsigned int fidx)
V_FW_FILTER_WR_PORTM(f->fs.mask.iport));
fwr->ptcl = f->fs.val.proto;
fwr->ptclm = f->fs.mask.proto;
+ fwr->ttyp = f->fs.val.tos;
+ fwr->ttypm = f->fs.mask.tos;
fwr->ivlan = cpu_to_be16(f->fs.val.ivlan);
fwr->ivlanm = cpu_to_be16(f->fs.mask.ivlan);
fwr->ovlan = cpu_to_be16(f->fs.val.ovlan);
diff --git a/drivers/net/cxgbe/cxgbe_flow.c b/drivers/net/cxgbe/cxgbe_flow.c
index cd833d095..c860b7886 100644
--- a/drivers/net/cxgbe/cxgbe_flow.c
+++ b/drivers/net/cxgbe/cxgbe_flow.c
@@ -157,6 +157,8 @@ cxgbe_fill_filter_region(struct adapter *adap,
if (tp->vnic_shift >= 0 && fs->mask.ovlan_vld)
ntuple_mask |= (u64)(F_FT_VLAN_VLD | fs->mask.ovlan) <<
tp->vnic_shift;
+ if (tp->tos_shift >= 0)
+ ntuple_mask |= (u64)fs->mask.tos << tp->tos_shift;
if (ntuple_mask != hash_filter_mask)
return;
@@ -354,9 +356,9 @@ ch_rte_parsetype_ipv4(const void *dmask, const struct rte_flow_item *item,
mask = umask ? umask : (const struct rte_flow_item_ipv4 *)dmask;
- if (mask->hdr.time_to_live || mask->hdr.type_of_service)
+ if (mask->hdr.time_to_live)
return rte_flow_error_set(e, ENOTSUP, RTE_FLOW_ERROR_TYPE_ITEM,
- item, "ttl/tos are not supported");
+ item, "ttl is not supported");
if (fs->mask.ethtype &&
(fs->val.ethtype != RTE_ETHER_TYPE_IPV4))
@@ -370,6 +372,7 @@ ch_rte_parsetype_ipv4(const void *dmask, const struct rte_flow_item *item,
CXGBE_FILL_FS(val->hdr.next_proto_id, mask->hdr.next_proto_id, proto);
CXGBE_FILL_FS_MEMCPY(val->hdr.dst_addr, mask->hdr.dst_addr, lip);
CXGBE_FILL_FS_MEMCPY(val->hdr.src_addr, mask->hdr.src_addr, fip);
+ CXGBE_FILL_FS(val->hdr.type_of_service, mask->hdr.type_of_service, tos);
return 0;
}
@@ -382,14 +385,17 @@ ch_rte_parsetype_ipv6(const void *dmask, const struct rte_flow_item *item,
const struct rte_flow_item_ipv6 *val = item->spec;
const struct rte_flow_item_ipv6 *umask = item->mask;
const struct rte_flow_item_ipv6 *mask;
+ u32 vtc_flow, vtc_flow_mask;
mask = umask ? umask : (const struct rte_flow_item_ipv6 *)dmask;
- if (mask->hdr.vtc_flow ||
+ vtc_flow_mask = be32_to_cpu(mask->hdr.vtc_flow);
+
+ if (vtc_flow_mask & RTE_IPV6_HDR_FL_MASK ||
mask->hdr.payload_len || mask->hdr.hop_limits)
return rte_flow_error_set(e, ENOTSUP, RTE_FLOW_ERROR_TYPE_ITEM,
item,
- "tc/flow/hop are not supported");
+ "flow/hop are not supported");
if (fs->mask.ethtype &&
(fs->val.ethtype != RTE_ETHER_TYPE_IPV6))
@@ -401,6 +407,14 @@ ch_rte_parsetype_ipv6(const void *dmask, const struct rte_flow_item *item,
return 0; /* ipv6 wild card */
CXGBE_FILL_FS(val->hdr.proto, mask->hdr.proto, proto);
+
+ vtc_flow = be32_to_cpu(val->hdr.vtc_flow);
+ CXGBE_FILL_FS((vtc_flow & RTE_IPV6_HDR_TC_MASK) >>
+ RTE_IPV6_HDR_TC_SHIFT,
+ (vtc_flow_mask & RTE_IPV6_HDR_TC_MASK) >>
+ RTE_IPV6_HDR_TC_SHIFT,
+ tos);
+
CXGBE_FILL_FS_MEMCPY(val->hdr.dst_addr, mask->hdr.dst_addr, lip);
CXGBE_FILL_FS_MEMCPY(val->hdr.src_addr, mask->hdr.src_addr, fip);
@@ -871,12 +885,28 @@ static struct chrte_fparse parseitem[] = {
[RTE_FLOW_ITEM_TYPE_IPV4] = {
.fptr = ch_rte_parsetype_ipv4,
- .dmask = &rte_flow_item_ipv4_mask,
+ .dmask = &(const struct rte_flow_item_ipv4) {
+ .hdr = {
+ .src_addr = RTE_BE32(0xffffffff),
+ .dst_addr = RTE_BE32(0xffffffff),
+ .type_of_service = 0xff,
+ },
+ },
},
[RTE_FLOW_ITEM_TYPE_IPV6] = {
.fptr = ch_rte_parsetype_ipv6,
- .dmask = &rte_flow_item_ipv6_mask,
+ .dmask = &(const struct rte_flow_item_ipv6) {
+ .hdr = {
+ .src_addr =
+ "\xff\xff\xff\xff\xff\xff\xff\xff"
+ "\xff\xff\xff\xff\xff\xff\xff\xff",
+ .dst_addr =
+ "\xff\xff\xff\xff\xff\xff\xff\xff"
+ "\xff\xff\xff\xff\xff\xff\xff\xff",
+ .vtc_flow = RTE_BE32(0xff000000),
+ },
+ },
},
[RTE_FLOW_ITEM_TYPE_UDP] = {
--
2.25.0
* [dpdk-dev] [PATCH 3/9] net/cxgbe: add rte_flow support for matching all packets on PF
2020-03-11 9:05 [dpdk-dev] [PATCH 0/9] net/cxgbe: updates for rte_flow support Rahul Lakkireddy
2020-03-11 9:05 ` [dpdk-dev] [PATCH 1/9] net/cxgbe: add rte_flow support for matching Q-in-Q VLAN Rahul Lakkireddy
2020-03-11 9:05 ` [dpdk-dev] [PATCH 2/9] net/cxgbe: add rte_flow support for matching IP TOS Rahul Lakkireddy
@ 2020-03-11 9:05 ` Rahul Lakkireddy
2020-03-11 9:05 ` [dpdk-dev] [PATCH 4/9] net/cxgbe: add rte_flow support for matching all packets on VF Rahul Lakkireddy
` (7 subsequent siblings)
10 siblings, 0 replies; 15+ messages in thread
From: Rahul Lakkireddy @ 2020-03-11 9:05 UTC (permalink / raw)
To: dev; +Cc: nirranjan, kaara.satwik
From: Karra Satwik <kaara.satwik@chelsio.com>
Add support to match all packets received on the underlying PF.
Note that the same 17-bit hardware tuple is shared between the QinQ
and PF matches. Hence, only one of QinQ or PF match can be enabled
at a time; both can't be enabled simultaneously.
Also, remove the check that rejects rules without a spec, because
RTE_FLOW_ITEM_TYPE_PF doesn't require one. With this check removed,
the RTE_FLOW_ITEM_TYPE_PHY_PORT item must be updated to handle a
NULL spec.
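As an illustration, a rule matching all traffic to the PF could be
built through the generic rte_flow API roughly as below. This is an
untested sketch: the function name and queue index are placeholders,
and whether such a minimal pattern is accepted depends on the filter
combination configured in hardware.

#include <rte_ethdev.h>
#include <rte_flow.h>

/* Steer every ingress packet destined to the PF associated with this
 * port to Rx queue 0. The PF item takes no spec/mask.
 */
static struct rte_flow *
create_pf_flow(uint16_t port_id, struct rte_flow_error *err)
{
    struct rte_flow_attr attr = { .ingress = 1 };
    struct rte_flow_item pattern[] = {
        { .type = RTE_FLOW_ITEM_TYPE_PF },
        { .type = RTE_FLOW_ITEM_TYPE_END },
    };
    struct rte_flow_action_queue queue = { .index = 0 };
    struct rte_flow_action actions[] = {
        { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
        { .type = RTE_FLOW_ACTION_TYPE_END },
    };

    return rte_flow_create(port_id, &attr, pattern, actions, err);
}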
Signed-off-by: Karra Satwik <kaara.satwik@chelsio.com>
Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
---
drivers/net/cxgbe/cxgbe_filter.c | 39 ++++++++++++++++++++++++-------
drivers/net/cxgbe/cxgbe_filter.h | 2 +-
drivers/net/cxgbe/cxgbe_flow.c | 40 ++++++++++++++++++++++++++------
3 files changed, 64 insertions(+), 17 deletions(-)
diff --git a/drivers/net/cxgbe/cxgbe_filter.c b/drivers/net/cxgbe/cxgbe_filter.c
index 193738f93..4c50932af 100644
--- a/drivers/net/cxgbe/cxgbe_filter.c
+++ b/drivers/net/cxgbe/cxgbe_filter.c
@@ -73,15 +73,17 @@ int cxgbe_validate_filter(struct adapter *adapter,
if (U(F_PORT, iport) || U(F_ETHERTYPE, ethtype) ||
U(F_PROTOCOL, proto) || U(F_MACMATCH, macidx) ||
U(F_VLAN, ivlan_vld) || U(F_VNIC_ID, ovlan_vld) ||
- U(F_TOS, tos))
+ U(F_TOS, tos) || U(F_VNIC_ID, pfvf_vld))
return -EOPNOTSUPP;
- /* Ensure OVLAN match is enabled in hardware */
- if (S(ovlan_vld) && (iconf & F_VNIC))
+ /* Either OVLAN or PFVF match is enabled in hardware, but not both */
+ if ((S(pfvf_vld) && !(iconf & F_VNIC)) ||
+ (S(ovlan_vld) && (iconf & F_VNIC)))
return -EOPNOTSUPP;
- /* To use OVLAN, L4 encapsulation match must not be enabled */
- if (S(ovlan_vld) && (iconf & F_USE_ENC_IDX))
+ /* To use OVLAN or PFVF, L4 encapsulation match must not be enabled */
+ if ((S(ovlan_vld) && (iconf & F_USE_ENC_IDX)) ||
+ (S(pfvf_vld) && (iconf & F_USE_ENC_IDX)))
return -EOPNOTSUPP;
#undef S
@@ -308,8 +310,12 @@ static u64 hash_filter_ntuple(const struct filter_entry *f)
ntuple |= (u64)(F_FT_VLAN_VLD | f->fs.val.ivlan) <<
tp->vlan_shift;
if (tp->vnic_shift >= 0) {
- if (!(adap->params.tp.ingress_config & F_VNIC) &&
- f->fs.mask.ovlan_vld)
+ if ((adap->params.tp.ingress_config & F_VNIC) &&
+ f->fs.mask.pfvf_vld)
+ ntuple |= (u64)((f->fs.val.pfvf_vld << 16) |
+ (f->fs.val.pf << 13)) << tp->vnic_shift;
+ else if (!(adap->params.tp.ingress_config & F_VNIC) &&
+ f->fs.mask.ovlan_vld)
ntuple |= (u64)(f->fs.val.ovlan_vld << 16 |
f->fs.val.ovlan) << tp->vnic_shift;
}
@@ -965,10 +971,11 @@ int cxgbe_set_filter(struct rte_eth_dev *dev, unsigned int filter_id,
{
struct port_info *pi = ethdev2pinfo(dev);
struct adapter *adapter = pi->adapter;
- unsigned int fidx, iq;
+ u8 nentries, bitoff[16] = {0};
struct filter_entry *f;
unsigned int chip_ver;
- u8 nentries, bitoff[16] = {0};
+ unsigned int fidx, iq;
+ u32 iconf;
int ret;
if (is_hashfilter(adapter) && fs->cap)
@@ -1052,6 +1059,20 @@ int cxgbe_set_filter(struct rte_eth_dev *dev, unsigned int filter_id,
f->fs.iq = iq;
f->dev = dev;
+ iconf = adapter->params.tp.ingress_config;
+
+ /* Either PFVF or OVLAN can be active, but not both
+ * So, if PFVF is enabled, then overwrite the OVLAN
+ * fields with PFVF fields before writing the spec
+ * to hardware.
+ */
+ if (iconf & F_VNIC) {
+ f->fs.val.ovlan = fs->val.pf << 13;
+ f->fs.mask.ovlan = fs->mask.pf << 13;
+ f->fs.val.ovlan_vld = fs->val.pfvf_vld;
+ f->fs.mask.ovlan_vld = fs->mask.pfvf_vld;
+ }
+
/*
* Attempt to set the filter. If we don't succeed, we clear
* it and return the failure.
diff --git a/drivers/net/cxgbe/cxgbe_filter.h b/drivers/net/cxgbe/cxgbe_filter.h
index 06021c854..2ac210045 100644
--- a/drivers/net/cxgbe/cxgbe_filter.h
+++ b/drivers/net/cxgbe/cxgbe_filter.h
@@ -18,7 +18,7 @@
#define MATCHTYPE_BITWIDTH 3
#define PROTO_BITWIDTH 8
#define TOS_BITWIDTH 8
-#define PF_BITWIDTH 8
+#define PF_BITWIDTH 3
#define VF_BITWIDTH 8
#define IVLAN_BITWIDTH 16
#define OVLAN_BITWIDTH 16
diff --git a/drivers/net/cxgbe/cxgbe_flow.c b/drivers/net/cxgbe/cxgbe_flow.c
index c860b7886..c1f5ef045 100644
--- a/drivers/net/cxgbe/cxgbe_flow.c
+++ b/drivers/net/cxgbe/cxgbe_flow.c
@@ -154,9 +154,15 @@ cxgbe_fill_filter_region(struct adapter *adap,
if (tp->vlan_shift >= 0 && fs->mask.ivlan_vld)
ntuple_mask |= (u64)(F_FT_VLAN_VLD | fs->mask.ivlan) <<
tp->vlan_shift;
- if (tp->vnic_shift >= 0 && fs->mask.ovlan_vld)
- ntuple_mask |= (u64)(F_FT_VLAN_VLD | fs->mask.ovlan) <<
- tp->vnic_shift;
+ if (tp->vnic_shift >= 0) {
+ if (fs->mask.ovlan_vld)
+ ntuple_mask |= (u64)(fs->val.ovlan_vld << 16 |
+ fs->mask.ovlan) << tp->vnic_shift;
+ else if (fs->mask.pfvf_vld)
+ ntuple_mask |= (u64)((fs->mask.pfvf_vld << 16) |
+ (fs->mask.pf << 13)) <<
+ tp->vnic_shift;
+ }
if (tp->tos_shift >= 0)
ntuple_mask |= (u64)fs->mask.tos << tp->tos_shift;
@@ -221,6 +227,9 @@ ch_rte_parsetype_port(const void *dmask, const struct rte_flow_item *item,
mask = umask ? umask : (const struct rte_flow_item_phy_port *)dmask;
+ if (!val)
+ return 0; /* Wildcard, match all physical ports */
+
if (val->index > 0x7)
return rte_flow_error_set(e, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM,
item,
@@ -291,6 +300,22 @@ ch_rte_parsetype_vlan(const void *dmask, const struct rte_flow_item *item,
return 0;
}
+static int
+ch_rte_parsetype_pf(const void *dmask __rte_unused,
+ const struct rte_flow_item *item __rte_unused,
+ struct ch_filter_specification *fs,
+ struct rte_flow_error *e __rte_unused)
+{
+ struct rte_flow *flow = (struct rte_flow *)fs->private;
+ struct rte_eth_dev *dev = flow->dev;
+ struct adapter *adap = ethdev2adap(dev);
+
+ CXGBE_FILL_FS(1, 1, pfvf_vld);
+
+ CXGBE_FILL_FS(adap->pf, ~0, pf);
+ return 0;
+}
+
static int
ch_rte_parsetype_udp(const void *dmask, const struct rte_flow_item *item,
struct ch_filter_specification *fs,
@@ -918,6 +943,11 @@ static struct chrte_fparse parseitem[] = {
.fptr = ch_rte_parsetype_tcp,
.dmask = &rte_flow_item_tcp_mask,
},
+
+ [RTE_FLOW_ITEM_TYPE_PF] = {
+ .fptr = ch_rte_parsetype_pf,
+ .dmask = NULL,
+ },
};
static int
@@ -951,10 +981,6 @@ cxgbe_rtef_parse_items(struct rte_flow *flow,
repeat[i->type] = 1;
- /* No spec found for this pattern item. Skip it */
- if (!i->spec)
- break;
-
/* validate the item */
ret = cxgbe_validate_item(i, e);
if (ret)
--
2.25.0
* [dpdk-dev] [PATCH 4/9] net/cxgbe: add rte_flow support for matching all packets on VF
2020-03-11 9:05 [dpdk-dev] [PATCH 0/9] net/cxgbe: updates for rte_flow support Rahul Lakkireddy
` (2 preceding siblings ...)
2020-03-11 9:05 ` [dpdk-dev] [PATCH 3/9] net/cxgbe: add rte_flow support for matching all packets on PF Rahul Lakkireddy
@ 2020-03-11 9:05 ` Rahul Lakkireddy
2020-03-11 9:05 ` [dpdk-dev] [PATCH 5/9] net/cxgbe: add rte_flow support for overwriting destination MAC Rahul Lakkireddy
` (6 subsequent siblings)
10 siblings, 0 replies; 15+ messages in thread
From: Rahul Lakkireddy @ 2020-03-11 9:05 UTC (permalink / raw)
To: dev; +Cc: nirranjan, kaara.satwik
From: Karra Satwik <kaara.satwik@chelsio.com>
Add support to match all packets received on the underlying VF.
Use the new firmware API to fetch the Virtual Interface Number (VIN)
allocated to each VF by the firmware. The VIN is required to write
filter rules that match all packets on VFs whose identifier exceeds
the maximum 7-bit value (i.e. 127) encoded in the VIID.
If the firmware doesn't support fetching the VIN information, then
fall back to manually extracting the VIN from the 7-bit field in the
VIID, which only covers the range 0..127. In this case, packets
belonging to VFs whose identifier is beyond 127 can't be matched.
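As an illustration, a rule matching all traffic to a given VF could
be built through the generic rte_flow API roughly as below. This is
an untested sketch: the function name, VF ID, and queue index are
placeholders, and acceptance of the rule depends on the filter
combination configured in hardware.

#include <rte_ethdev.h>
#include <rte_flow.h>

/* Steer every ingress packet belonging to VF 2 to Rx queue 0.
 * The VF ID and queue index are illustrative only.
 */
static struct rte_flow *
create_vf_flow(uint16_t port_id, struct rte_flow_error *err)
{
    struct rte_flow_attr attr = { .ingress = 1 };
    struct rte_flow_item_vf vf_spec = { .id = 2 };
    struct rte_flow_item_vf vf_mask = { .id = 0xffffffff };
    struct rte_flow_item pattern[] = {
        { .type = RTE_FLOW_ITEM_TYPE_VF,
          .spec = &vf_spec, .mask = &vf_mask },
        { .type = RTE_FLOW_ITEM_TYPE_END },
    };
    struct rte_flow_action_queue queue = { .index = 0 };
    struct rte_flow_action actions[] = {
        { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
        { .type = RTE_FLOW_ACTION_TYPE_END },
    };

    return rte_flow_create(port_id, &attr, pattern, actions, err);
}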
Signed-off-by: Karra Satwik <kaara.satwik@chelsio.com>
Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
---
drivers/net/cxgbe/base/adapter.h | 6 ++++
drivers/net/cxgbe/base/common.h | 6 ++--
drivers/net/cxgbe/base/t4_hw.c | 27 +++++++++++++---
drivers/net/cxgbe/base/t4fw_interface.h | 23 ++++++++++++++
drivers/net/cxgbe/cxgbe_filter.c | 9 +++---
drivers/net/cxgbe/cxgbe_filter.h | 2 +-
drivers/net/cxgbe/cxgbe_flow.c | 41 +++++++++++++++++++++++--
drivers/net/cxgbe/cxgbe_main.c | 9 ++++++
8 files changed, 109 insertions(+), 14 deletions(-)
diff --git a/drivers/net/cxgbe/base/adapter.h b/drivers/net/cxgbe/base/adapter.h
index db654ad9c..c6b8036fd 100644
--- a/drivers/net/cxgbe/base/adapter.h
+++ b/drivers/net/cxgbe/base/adapter.h
@@ -55,6 +55,12 @@ struct port_info {
u8 rss_mode; /* rss mode */
u16 rss_size; /* size of VI's RSS table slice */
u64 rss_hf; /* RSS Hash Function */
+
+ /* viid fields either returned by fw
+ * or decoded by parsing viid by driver.
+ */
+ u8 vin;
+ u8 vivld;
};
/* Enable or disable autonegotiation. If this is set to enable,
diff --git a/drivers/net/cxgbe/base/common.h b/drivers/net/cxgbe/base/common.h
index 793cad11d..892aab64b 100644
--- a/drivers/net/cxgbe/base/common.h
+++ b/drivers/net/cxgbe/base/common.h
@@ -273,6 +273,7 @@ struct adapter_params {
bool ulptx_memwrite_dsgl; /* use of T5 DSGL allowed */
u8 fw_caps_support; /* 32-bit Port Capabilities */
u8 filter2_wr_support; /* FW support for FILTER2_WR */
+ u32 viid_smt_extn_support:1; /* FW returns vin and smt index */
u32 max_tx_coalesce_num; /* Max # of Tx packets that can be coalesced */
};
@@ -382,10 +383,11 @@ int t4_set_params(struct adapter *adap, unsigned int mbox, unsigned int pf,
int t4_alloc_vi_func(struct adapter *adap, unsigned int mbox,
unsigned int port, unsigned int pf, unsigned int vf,
unsigned int nmac, u8 *mac, unsigned int *rss_size,
- unsigned int portfunc, unsigned int idstype);
+ unsigned int portfunc, unsigned int idstype,
+ u8 *vivld, u8 *vin);
int t4_alloc_vi(struct adapter *adap, unsigned int mbox, unsigned int port,
unsigned int pf, unsigned int vf, unsigned int nmac, u8 *mac,
- unsigned int *rss_size);
+ unsigned int *rss_size, u8 *vivild, u8 *vin);
int t4_free_vi(struct adapter *adap, unsigned int mbox,
unsigned int pf, unsigned int vf,
unsigned int viid);
diff --git a/drivers/net/cxgbe/base/t4_hw.c b/drivers/net/cxgbe/base/t4_hw.c
index cd4da0b9f..48b6d77b1 100644
--- a/drivers/net/cxgbe/base/t4_hw.c
+++ b/drivers/net/cxgbe/base/t4_hw.c
@@ -4017,7 +4017,8 @@ int t4_set_params(struct adapter *adap, unsigned int mbox, unsigned int pf,
int t4_alloc_vi_func(struct adapter *adap, unsigned int mbox,
unsigned int port, unsigned int pf, unsigned int vf,
unsigned int nmac, u8 *mac, unsigned int *rss_size,
- unsigned int portfunc, unsigned int idstype)
+ unsigned int portfunc, unsigned int idstype,
+ u8 *vivld, u8 *vin)
{
int ret;
struct fw_vi_cmd c;
@@ -4055,6 +4056,10 @@ int t4_alloc_vi_func(struct adapter *adap, unsigned int mbox,
}
if (rss_size)
*rss_size = G_FW_VI_CMD_RSSSIZE(be16_to_cpu(c.norss_rsssize));
+ if (vivld)
+ *vivld = G_FW_VI_CMD_VFVLD(be32_to_cpu(c.alloc_to_len16));
+ if (vin)
+ *vin = G_FW_VI_CMD_VIN(be32_to_cpu(c.alloc_to_len16));
return G_FW_VI_CMD_VIID(cpu_to_be16(c.type_to_viid));
}
@@ -4075,10 +4080,10 @@ int t4_alloc_vi_func(struct adapter *adap, unsigned int mbox,
*/
int t4_alloc_vi(struct adapter *adap, unsigned int mbox, unsigned int port,
unsigned int pf, unsigned int vf, unsigned int nmac, u8 *mac,
- unsigned int *rss_size)
+ unsigned int *rss_size, u8 *vivld, u8 *vin)
{
return t4_alloc_vi_func(adap, mbox, port, pf, vf, nmac, mac, rss_size,
- FW_VI_FUNC_ETH, 0);
+ FW_VI_FUNC_ETH, 0, vivld, vin);
}
/**
@@ -5346,6 +5351,7 @@ int t4_port_init(struct adapter *adap, int mbox, int pf, int vf)
fw_port_cap32_t pcaps, acaps;
enum fw_port_type port_type;
struct fw_port_cmd cmd;
+ u8 vivld = 0, vin = 0;
int ret, i, j = 0;
int mdio_addr;
u32 action;
@@ -5417,7 +5423,8 @@ int t4_port_init(struct adapter *adap, int mbox, int pf, int vf)
acaps = be32_to_cpu(cmd.u.info32.acaps32);
}
- ret = t4_alloc_vi(adap, mbox, j, pf, vf, 1, addr, &rss_size);
+ ret = t4_alloc_vi(adap, mbox, j, pf, vf, 1, addr, &rss_size,
+ &vivld, &vin);
if (ret < 0)
return ret;
@@ -5426,6 +5433,18 @@ int t4_port_init(struct adapter *adap, int mbox, int pf, int vf)
pi->rss_size = rss_size;
t4_os_set_hw_addr(adap, i, addr);
+ /* If fw supports returning the VIN as part of FW_VI_CMD,
+ * save the returned values.
+ */
+ if (adap->params.viid_smt_extn_support) {
+ pi->vivld = vivld;
+ pi->vin = vin;
+ } else {
+ /* Retrieve the values from VIID */
+ pi->vivld = G_FW_VIID_VIVLD(pi->viid);
+ pi->vin = G_FW_VIID_VIN(pi->viid);
+ }
+
pi->port_type = port_type;
pi->mdio_addr = mdio_addr;
pi->mod_type = FW_PORT_MOD_TYPE_NA;
diff --git a/drivers/net/cxgbe/base/t4fw_interface.h b/drivers/net/cxgbe/base/t4fw_interface.h
index e992d196d..39e02077f 100644
--- a/drivers/net/cxgbe/base/t4fw_interface.h
+++ b/drivers/net/cxgbe/base/t4fw_interface.h
@@ -679,6 +679,7 @@ enum fw_params_param_dev {
FW_PARAMS_PARAM_DEV_TPREV = 0x0C, /* tp version */
FW_PARAMS_PARAM_DEV_ULPTX_MEMWRITE_DSGL = 0x17,
FW_PARAMS_PARAM_DEV_FILTER2_WR = 0x1D,
+ FW_PARAMS_PARAM_DEV_OPAQUE_VIID_SMT_EXTN = 0x27,
};
/*
@@ -1235,6 +1236,18 @@ enum fw_vi_func {
FW_VI_FUNC_ETH,
};
+/* Macros for VIID parsing:
+ * VIID - [10:8] PFN, [7] VI Valid, [6:0] VI number
+ */
+
+#define S_FW_VIID_VIVLD 7
+#define M_FW_VIID_VIVLD 0x1
+#define G_FW_VIID_VIVLD(x) (((x) >> S_FW_VIID_VIVLD) & M_FW_VIID_VIVLD)
+
+#define S_FW_VIID_VIN 0
+#define M_FW_VIID_VIN 0x7F
+#define G_FW_VIID_VIN(x) (((x) >> S_FW_VIID_VIN) & M_FW_VIID_VIN)
+
struct fw_vi_cmd {
__be32 op_to_vfn;
__be32 alloc_to_len16;
@@ -1276,6 +1289,16 @@ struct fw_vi_cmd {
#define G_FW_VI_CMD_FREE(x) (((x) >> S_FW_VI_CMD_FREE) & M_FW_VI_CMD_FREE)
#define F_FW_VI_CMD_FREE V_FW_VI_CMD_FREE(1U)
+#define S_FW_VI_CMD_VFVLD 24
+#define M_FW_VI_CMD_VFVLD 0x1
+#define G_FW_VI_CMD_VFVLD(x) \
+ (((x) >> S_FW_VI_CMD_VFVLD) & M_FW_VI_CMD_VFVLD)
+
+#define S_FW_VI_CMD_VIN 16
+#define M_FW_VI_CMD_VIN 0xff
+#define G_FW_VI_CMD_VIN(x) \
+ (((x) >> S_FW_VI_CMD_VIN) & M_FW_VI_CMD_VIN)
+
#define S_FW_VI_CMD_TYPE 15
#define M_FW_VI_CMD_TYPE 0x1
#define V_FW_VI_CMD_TYPE(x) ((x) << S_FW_VI_CMD_TYPE)
diff --git a/drivers/net/cxgbe/cxgbe_filter.c b/drivers/net/cxgbe/cxgbe_filter.c
index 4c50932af..9c10520b2 100644
--- a/drivers/net/cxgbe/cxgbe_filter.c
+++ b/drivers/net/cxgbe/cxgbe_filter.c
@@ -312,8 +312,9 @@ static u64 hash_filter_ntuple(const struct filter_entry *f)
if (tp->vnic_shift >= 0) {
if ((adap->params.tp.ingress_config & F_VNIC) &&
f->fs.mask.pfvf_vld)
- ntuple |= (u64)((f->fs.val.pfvf_vld << 16) |
- (f->fs.val.pf << 13)) << tp->vnic_shift;
+ ntuple |= (u64)(f->fs.val.pfvf_vld << 16 |
+ f->fs.val.pf << 13 | f->fs.val.vf) <<
+ tp->vnic_shift;
else if (!(adap->params.tp.ingress_config & F_VNIC) &&
f->fs.mask.ovlan_vld)
ntuple |= (u64)(f->fs.val.ovlan_vld << 16 |
@@ -1067,8 +1068,8 @@ int cxgbe_set_filter(struct rte_eth_dev *dev, unsigned int filter_id,
* to hardware.
*/
if (iconf & F_VNIC) {
- f->fs.val.ovlan = fs->val.pf << 13;
- f->fs.mask.ovlan = fs->mask.pf << 13;
+ f->fs.val.ovlan = fs->val.pf << 13 | fs->val.vf;
+ f->fs.mask.ovlan = fs->mask.pf << 13 | fs->mask.vf;
f->fs.val.ovlan_vld = fs->val.pfvf_vld;
f->fs.mask.ovlan_vld = fs->mask.pfvf_vld;
}
diff --git a/drivers/net/cxgbe/cxgbe_filter.h b/drivers/net/cxgbe/cxgbe_filter.h
index 2ac210045..6b1bf25e2 100644
--- a/drivers/net/cxgbe/cxgbe_filter.h
+++ b/drivers/net/cxgbe/cxgbe_filter.h
@@ -19,7 +19,7 @@
#define PROTO_BITWIDTH 8
#define TOS_BITWIDTH 8
#define PF_BITWIDTH 3
-#define VF_BITWIDTH 8
+#define VF_BITWIDTH 13
#define IVLAN_BITWIDTH 16
#define OVLAN_BITWIDTH 16
diff --git a/drivers/net/cxgbe/cxgbe_flow.c b/drivers/net/cxgbe/cxgbe_flow.c
index c1f5ef045..3e27a3f68 100644
--- a/drivers/net/cxgbe/cxgbe_flow.c
+++ b/drivers/net/cxgbe/cxgbe_flow.c
@@ -159,9 +159,9 @@ cxgbe_fill_filter_region(struct adapter *adap,
ntuple_mask |= (u64)(fs->val.ovlan_vld << 16 |
fs->mask.ovlan) << tp->vnic_shift;
else if (fs->mask.pfvf_vld)
- ntuple_mask |= (u64)((fs->mask.pfvf_vld << 16) |
- (fs->mask.pf << 13)) <<
- tp->vnic_shift;
+ ntuple_mask |= (u64)(fs->mask.pfvf_vld << 16 |
+ fs->mask.pf << 13 |
+ fs->mask.vf) << tp->vnic_shift;
}
if (tp->tos_shift >= 0)
ntuple_mask |= (u64)fs->mask.tos << tp->tos_shift;
@@ -316,6 +316,34 @@ ch_rte_parsetype_pf(const void *dmask __rte_unused,
return 0;
}
+static int
+ch_rte_parsetype_vf(const void *dmask, const struct rte_flow_item *item,
+ struct ch_filter_specification *fs,
+ struct rte_flow_error *e)
+{
+ const struct rte_flow_item_vf *umask = item->mask;
+ const struct rte_flow_item_vf *val = item->spec;
+ const struct rte_flow_item_vf *mask;
+
+ /* If user has not given any mask, then use chelsio supported mask. */
+ mask = umask ? umask : (const struct rte_flow_item_vf *)dmask;
+
+ CXGBE_FILL_FS(1, 1, pfvf_vld);
+
+ if (!val)
+ return 0; /* Wildcard, match all Vf */
+
+ if (val->id > UCHAR_MAX)
+ return rte_flow_error_set(e, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM,
+ item,
+ "VF ID > MAX(255)");
+
+ CXGBE_FILL_FS(val->id, mask->id, vf);
+
+ return 0;
+}
+
static int
ch_rte_parsetype_udp(const void *dmask, const struct rte_flow_item *item,
struct ch_filter_specification *fs,
@@ -948,6 +976,13 @@ static struct chrte_fparse parseitem[] = {
.fptr = ch_rte_parsetype_pf,
.dmask = NULL,
},
+
+ [RTE_FLOW_ITEM_TYPE_VF] = {
+ .fptr = ch_rte_parsetype_vf,
+ .dmask = &(const struct rte_flow_item_vf){
+ .id = 0xffffffff,
+ }
+ },
};
static int
diff --git a/drivers/net/cxgbe/cxgbe_main.c b/drivers/net/cxgbe/cxgbe_main.c
index 0d0827c0e..a286d8557 100644
--- a/drivers/net/cxgbe/cxgbe_main.c
+++ b/drivers/net/cxgbe/cxgbe_main.c
@@ -1207,6 +1207,15 @@ static int adap_init0(struct adapter *adap)
adap->params.filter2_wr_support = (ret == 0 && val[0] != 0);
}
+ /* Check if FW supports returning vin.
+ * If this is not supported, driver will interpret
+ * these values from viid.
+ */
+ params[0] = CXGBE_FW_PARAM_DEV(OPAQUE_VIID_SMT_EXTN);
+ ret = t4_query_params(adap, adap->mbox, adap->pf, 0,
+ 1, params, val);
+ adap->params.viid_smt_extn_support = (ret == 0 && val[0] != 0);
+
/* query tid-related parameters */
params[0] = CXGBE_FW_PARAM_DEV(NTID);
ret = t4_query_params(adap, adap->mbox, adap->pf, 0, 1,
--
2.25.0
* [dpdk-dev] [PATCH 5/9] net/cxgbe: add rte_flow support for overwriting destination MAC
2020-03-11 9:05 [dpdk-dev] [PATCH 0/9] net/cxgbe: updates for rte_flow support Rahul Lakkireddy
` (3 preceding siblings ...)
2020-03-11 9:05 ` [dpdk-dev] [PATCH 4/9] net/cxgbe: add rte_flow support for matching all packets on VF Rahul Lakkireddy
@ 2020-03-11 9:05 ` Rahul Lakkireddy
2020-03-11 9:05 ` [dpdk-dev] [PATCH 6/9] net/cxgbe: add Source MAC Table (SMT) support Rahul Lakkireddy
` (5 subsequent siblings)
10 siblings, 0 replies; 15+ messages in thread
From: Rahul Lakkireddy @ 2020-03-11 9:05 UTC (permalink / raw)
To: dev; +Cc: nirranjan, kaara.satwik
From: Karra Satwik <kaara.satwik@chelsio.com>
Add support for overwriting destination MAC addresses.
The new MAC address is written into a free entry in the
Layer 2 Table (L2T), and the corresponding L2T index is
used by the hardware to overwrite the destination MAC
address of packets hitting the flow.
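As an illustration, a rule that rewrites the destination MAC could be
built through the generic rte_flow API roughly as below. This is an
untested sketch: the function name, MAC address, and egress port
index are placeholders, and the exact pattern/action combinations
accepted are PMD-specific.

#include <rte_ethdev.h>
#include <rte_flow.h>

/* Rewrite the destination MAC of matched TCP/IPv4 packets and switch
 * them out on physical port 1. The MAC address and port index are
 * illustrative only.
 */
static struct rte_flow *
create_dmac_rewrite_flow(uint16_t port_id, struct rte_flow_error *err)
{
    struct rte_flow_attr attr = { .ingress = 1 };
    struct rte_flow_item pattern[] = {
        { .type = RTE_FLOW_ITEM_TYPE_ETH },
        { .type = RTE_FLOW_ITEM_TYPE_IPV4 },
        { .type = RTE_FLOW_ITEM_TYPE_TCP },
        { .type = RTE_FLOW_ITEM_TYPE_END },
    };
    struct rte_flow_action_set_mac set_dmac = {
        .mac_addr = { 0x02, 0x00, 0x00, 0x00, 0x00, 0x01 },
    };
    struct rte_flow_action_phy_port out_port = { .index = 1 };
    struct rte_flow_action actions[] = {
        { .type = RTE_FLOW_ACTION_TYPE_SET_MAC_DST, .conf = &set_dmac },
        { .type = RTE_FLOW_ACTION_TYPE_PHY_PORT, .conf = &out_port },
        { .type = RTE_FLOW_ACTION_TYPE_END },
    };

    return rte_flow_create(port_id, &attr, pattern, actions, err);
}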
Signed-off-by: Karra Satwik <kaara.satwik@chelsio.com>
Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
---
drivers/net/cxgbe/base/t4_tcb.h | 2 ++
drivers/net/cxgbe/cxgbe_filter.c | 8 ++++++--
drivers/net/cxgbe/cxgbe_filter.h | 1 +
drivers/net/cxgbe/cxgbe_flow.c | 14 ++++++++++++++
4 files changed, 23 insertions(+), 2 deletions(-)
diff --git a/drivers/net/cxgbe/base/t4_tcb.h b/drivers/net/cxgbe/base/t4_tcb.h
index 3c590e053..834169ab4 100644
--- a/drivers/net/cxgbe/base/t4_tcb.h
+++ b/drivers/net/cxgbe/base/t4_tcb.h
@@ -32,6 +32,8 @@
#define M_TCB_T_RTSEQ_RECENT 0xffffffffULL
#define V_TCB_T_RTSEQ_RECENT(x) ((x) << S_TCB_T_RTSEQ_RECENT)
+#define S_TF_CCTRL_ECE 60
+
#define S_TF_CCTRL_RFR 62
#endif /* _T4_TCB_DEFS_H */
diff --git a/drivers/net/cxgbe/cxgbe_filter.c b/drivers/net/cxgbe/cxgbe_filter.c
index 9c10520b2..b009217f8 100644
--- a/drivers/net/cxgbe/cxgbe_filter.c
+++ b/drivers/net/cxgbe/cxgbe_filter.c
@@ -593,7 +593,7 @@ static int cxgbe_set_hash_filter(struct rte_eth_dev *dev,
* rewriting then we need to allocate a Layer 2 Table (L2T) entry for
* the filter.
*/
- if (f->fs.newvlan == VLAN_INSERT ||
+ if (f->fs.newdmac || f->fs.newvlan == VLAN_INSERT ||
f->fs.newvlan == VLAN_REWRITE) {
/* allocate L2T entry for new filter */
f->l2t = cxgbe_l2t_alloc_switching(dev, f->fs.vlan,
@@ -749,10 +749,11 @@ static int set_filter_wr(struct rte_eth_dev *dev, unsigned int fidx)
* rewriting then we need to allocate a Layer 2 Table (L2T) entry for
* the filter.
*/
- if (f->fs.newvlan) {
+ if (f->fs.newvlan || f->fs.newdmac) {
/* allocate L2T entry for new filter */
f->l2t = cxgbe_l2t_alloc_switching(f->dev, f->fs.vlan,
f->fs.eport, f->fs.dmac);
+
if (!f->l2t)
return -ENOMEM;
}
@@ -787,6 +788,7 @@ static int set_filter_wr(struct rte_eth_dev *dev, unsigned int fidx)
cpu_to_be32(V_FW_FILTER_WR_DROP(f->fs.action == FILTER_DROP) |
V_FW_FILTER_WR_DIRSTEER(f->fs.dirsteer) |
V_FW_FILTER_WR_LPBK(f->fs.action == FILTER_SWITCH) |
+ V_FW_FILTER_WR_DMAC(f->fs.newdmac) |
V_FW_FILTER_WR_INSVLAN
(f->fs.newvlan == VLAN_INSERT ||
f->fs.newvlan == VLAN_REWRITE) |
@@ -1137,6 +1139,8 @@ void cxgbe_hash_filter_rpl(struct adapter *adap,
V_TCB_TIMESTAMP(0ULL) |
V_TCB_T_RTT_TS_RECENT_AGE(0ULL),
1);
+ if (f->fs.newdmac)
+ set_tcb_tflag(adap, tid, S_TF_CCTRL_ECE, 1, 1);
if (f->fs.newvlan == VLAN_INSERT ||
f->fs.newvlan == VLAN_REWRITE)
set_tcb_tflag(adap, tid, S_TF_CCTRL_RFR, 1, 1);
diff --git a/drivers/net/cxgbe/cxgbe_filter.h b/drivers/net/cxgbe/cxgbe_filter.h
index 6b1bf25e2..7a1e72ded 100644
--- a/drivers/net/cxgbe/cxgbe_filter.h
+++ b/drivers/net/cxgbe/cxgbe_filter.h
@@ -100,6 +100,7 @@ struct ch_filter_specification {
uint32_t iq:10; /* ingress queue */
uint32_t eport:2; /* egress port to switch packet out */
+ uint32_t newdmac:1; /* rewrite destination MAC address */
uint32_t swapmac:1; /* swap SMAC/DMAC for loopback packet */
uint32_t newvlan:2; /* rewrite VLAN Tag */
uint8_t dmac[RTE_ETHER_ADDR_LEN]; /* new destination MAC address */
diff --git a/drivers/net/cxgbe/cxgbe_flow.c b/drivers/net/cxgbe/cxgbe_flow.c
index 3e27a3f68..b009005c5 100644
--- a/drivers/net/cxgbe/cxgbe_flow.c
+++ b/drivers/net/cxgbe/cxgbe_flow.c
@@ -647,6 +647,7 @@ ch_rte_parse_atype_switch(const struct rte_flow_action *a,
const struct rte_flow_action_set_ipv6 *ipv6;
const struct rte_flow_action_set_tp *tp_port;
const struct rte_flow_action_phy_port *port;
+ const struct rte_flow_action_set_mac *mac;
int item_index;
u16 tmp_vlan;
@@ -794,6 +795,18 @@ ch_rte_parse_atype_switch(const struct rte_flow_action *a,
"found");
fs->swapmac = 1;
break;
+ case RTE_FLOW_ACTION_TYPE_SET_MAC_DST:
+ item_index = cxgbe_get_flow_item_index(items,
+ RTE_FLOW_ITEM_TYPE_ETH);
+ if (item_index < 0)
+ return rte_flow_error_set(e, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION, a,
+ "No RTE_FLOW_ITEM_TYPE_ETH found");
+ mac = (const struct rte_flow_action_set_mac *)a->conf;
+
+ fs->newdmac = 1;
+ memcpy(fs->dmac, mac->mac_addr, sizeof(fs->dmac));
+ break;
default:
/* We are not supposed to come here */
return rte_flow_error_set(e, EINVAL,
@@ -870,6 +883,7 @@ cxgbe_rtef_parse_actions(struct rte_flow *flow,
goto action_switch;
case RTE_FLOW_ACTION_TYPE_SET_TP_SRC:
case RTE_FLOW_ACTION_TYPE_SET_TP_DST:
+ case RTE_FLOW_ACTION_TYPE_SET_MAC_DST:
action_switch:
/* We allow multiple switch actions, but switch is
* not compatible with either queue or drop
--
2.25.0
* [dpdk-dev] [PATCH 6/9] net/cxgbe: add Source MAC Table (SMT) support
2020-03-11 9:05 [dpdk-dev] [PATCH 0/9] net/cxgbe: updates for rte_flow support Rahul Lakkireddy
` (4 preceding siblings ...)
2020-03-11 9:05 ` [dpdk-dev] [PATCH 5/9] net/cxgbe: add rte_flow support for overwriting destination MAC Rahul Lakkireddy
@ 2020-03-11 9:05 ` Rahul Lakkireddy
2020-03-11 9:05 ` [dpdk-dev] [PATCH 7/9] net/cxgbe: add rte_flow support for Source MAC Rewrite Rahul Lakkireddy
` (4 subsequent siblings)
10 siblings, 0 replies; 15+ messages in thread
From: Rahul Lakkireddy @ 2020-03-11 9:05 UTC (permalink / raw)
To: dev; +Cc: nirranjan, kaara.satwik
From: Karra Satwik <kaara.satwik@chelsio.com>
The Source MAC Table (SMT) stores the Source MAC
addresses to be written into packets transmitted on the
wire. Hence, the SMT can be used to overwrite the Source
MAC address of packets hitting corresponding filter rules
inserted via the rte_flow API.
Query the firmware for the SMT start index and size available
to the underlying PF. Allocate and maintain the driver's copy
of the hardware SMT, with an appropriate refcount mechanism.
If the SMT information is not available, then use the entire
hardware SMT.
Signed-off-by: Karra Satwik <kaara.satwik@chelsio.com>
Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
---
drivers/net/cxgbe/Makefile | 1 +
drivers/net/cxgbe/base/adapter.h | 1 +
drivers/net/cxgbe/base/t4fw_interface.h | 2 ++
drivers/net/cxgbe/cxgbe_main.c | 41 ++++++++++++++++++++++-
drivers/net/cxgbe/meson.build | 1 +
drivers/net/cxgbe/smt.c | 43 +++++++++++++++++++++++++
drivers/net/cxgbe/smt.h | 39 ++++++++++++++++++++++
7 files changed, 127 insertions(+), 1 deletion(-)
create mode 100644 drivers/net/cxgbe/smt.c
create mode 100644 drivers/net/cxgbe/smt.h
diff --git a/drivers/net/cxgbe/Makefile b/drivers/net/cxgbe/Makefile
index 79c6e1d1f..53b2bb56d 100644
--- a/drivers/net/cxgbe/Makefile
+++ b/drivers/net/cxgbe/Makefile
@@ -51,6 +51,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_CXGBE_PMD) += t4_hw.c
SRCS-$(CONFIG_RTE_LIBRTE_CXGBE_PMD) += clip_tbl.c
SRCS-$(CONFIG_RTE_LIBRTE_CXGBE_PMD) += mps_tcam.c
SRCS-$(CONFIG_RTE_LIBRTE_CXGBE_PMD) += l2t.c
+SRCS-$(CONFIG_RTE_LIBRTE_CXGBE_PMD) += smt.c
SRCS-$(CONFIG_RTE_LIBRTE_CXGBE_PMD) += t4vf_hw.c
include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/net/cxgbe/base/adapter.h b/drivers/net/cxgbe/base/adapter.h
index c6b8036fd..ae318ccf5 100644
--- a/drivers/net/cxgbe/base/adapter.h
+++ b/drivers/net/cxgbe/base/adapter.h
@@ -342,6 +342,7 @@ struct adapter {
unsigned int l2t_end; /* Layer 2 table end */
struct clip_tbl *clipt; /* CLIP table */
struct l2t_data *l2t; /* Layer 2 table */
+ struct smt_data *smt; /* Source mac table */
struct mpstcam_table *mpstcam;
struct tid_info tids; /* Info used to access TID related tables */
diff --git a/drivers/net/cxgbe/base/t4fw_interface.h b/drivers/net/cxgbe/base/t4fw_interface.h
index 39e02077f..3684c8006 100644
--- a/drivers/net/cxgbe/base/t4fw_interface.h
+++ b/drivers/net/cxgbe/base/t4fw_interface.h
@@ -695,6 +695,8 @@ enum fw_params_param_pfvf {
FW_PARAMS_PARAM_PFVF_CPLFW4MSG_ENCAP = 0x31,
FW_PARAMS_PARAM_PFVF_PORT_CAPS32 = 0x3A,
FW_PARAMS_PARAM_PFVF_MAX_PKTS_PER_ETH_TX_PKTS_WR = 0x3D,
+ FW_PARAMS_PARAM_PFVF_GET_SMT_START = 0x3E,
+ FW_PARAMS_PARAM_PFVF_GET_SMT_SIZE = 0x3F,
};
/*
diff --git a/drivers/net/cxgbe/cxgbe_main.c b/drivers/net/cxgbe/cxgbe_main.c
index a286d8557..1ab6f8fba 100644
--- a/drivers/net/cxgbe/cxgbe_main.c
+++ b/drivers/net/cxgbe/cxgbe_main.c
@@ -40,6 +40,7 @@
#include "cxgbe_pfvf.h"
#include "clip_tbl.h"
#include "l2t.h"
+#include "smt.h"
#include "mps_tcam.h"
/**
@@ -1735,6 +1736,7 @@ void cxgbe_close(struct adapter *adapter)
t4_cleanup_mpstcam(adapter);
t4_cleanup_clip_tbl(adapter);
t4_cleanup_l2t(adapter);
+ t4_cleanup_smt(adapter);
if (is_pf4(adapter))
t4_intr_disable(adapter);
t4_sge_tx_monitor_stop(adapter);
@@ -1753,13 +1755,45 @@ void cxgbe_close(struct adapter *adapter)
t4_fw_bye(adapter, adapter->mbox);
}
+static void adap_smt_index(struct adapter *adapter, u32 *smt_start_idx,
+ u32 *smt_size)
+{
+ u32 params[2], smt_val[2];
+ int ret;
+
+ params[0] = CXGBE_FW_PARAM_PFVF(GET_SMT_START);
+ params[1] = CXGBE_FW_PARAM_PFVF(GET_SMT_SIZE);
+
+ ret = t4_query_params(adapter, adapter->mbox, adapter->pf, 0,
+ 2, params, smt_val);
+
+ /* if FW doesn't recognize this command then set it to default setting
+ * which is start index as 0 and size as 256.
+ */
+ if (ret < 0) {
+ *smt_start_idx = 0;
+ *smt_size = SMT_SIZE;
+ } else {
+ *smt_start_idx = smt_val[0];
+ /* smt size can be zero, if nsmt is not yet configured in
+ * the config file or set as zero, then configure all the
+ * remaining entries to this PF itself.
+ */
+ if (!smt_val[1])
+ *smt_size = SMT_SIZE - *smt_start_idx;
+ else
+ *smt_size = smt_val[1];
+ }
+}
+
int cxgbe_probe(struct adapter *adapter)
{
+ u32 smt_start_idx, smt_size;
struct port_info *pi;
- int chip;
int func, i;
int err = 0;
u32 whoami;
+ int chip;
whoami = t4_read_reg(adapter, A_PL_WHOAMI);
chip = t4_get_chip_type(adapter,
@@ -1904,6 +1938,11 @@ int cxgbe_probe(struct adapter *adapter)
dev_warn(adapter, "could not allocate CLIP. Continuing\n");
}
+ adap_smt_index(adapter, &smt_start_idx, &smt_size);
+ adapter->smt = t4_init_smt(smt_start_idx, smt_size);
+ if (!adapter->smt)
+ dev_warn(adapter, "could not allocate SMT, continuing\n");
+
adapter->l2t = t4_init_l2t(adapter->l2t_start, adapter->l2t_end);
if (!adapter->l2t) {
/* We tolerate a lack of L2T, giving up some functionality */
diff --git a/drivers/net/cxgbe/meson.build b/drivers/net/cxgbe/meson.build
index c51af26e9..3992aba44 100644
--- a/drivers/net/cxgbe/meson.build
+++ b/drivers/net/cxgbe/meson.build
@@ -11,6 +11,7 @@ sources = files('cxgbe_ethdev.c',
'clip_tbl.c',
'mps_tcam.c',
'l2t.c',
+ 'smt.c',
'base/t4_hw.c',
'base/t4vf_hw.c')
includes += include_directories('base')
diff --git a/drivers/net/cxgbe/smt.c b/drivers/net/cxgbe/smt.c
new file mode 100644
index 000000000..cf40c8a8a
--- /dev/null
+++ b/drivers/net/cxgbe/smt.c
@@ -0,0 +1,43 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Chelsio Communications.
+ * All rights reserved.
+ */
+
+#include "base/common.h"
+#include "smt.h"
+
+/**
+ * Initialize Source MAC Table
+ */
+struct smt_data *t4_init_smt(u32 smt_start_idx, u32 smt_size)
+{
+ struct smt_data *s;
+ u32 i;
+
+ s = t4_alloc_mem(sizeof(*s) + smt_size * sizeof(struct smt_entry));
+ if (!s)
+ return NULL;
+
+ s->smt_start = smt_start_idx;
+ s->smt_size = smt_size;
+ t4_os_rwlock_init(&s->lock);
+
+ for (i = 0; i < s->smt_size; ++i) {
+ s->smtab[i].idx = i;
+ s->smtab[i].hw_idx = smt_start_idx + i;
+ s->smtab[i].state = SMT_STATE_UNUSED;
+ memset(&s->smtab[i].src_mac, 0, RTE_ETHER_ADDR_LEN);
+ t4_os_lock_init(&s->smtab[i].lock);
+ rte_atomic32_set(&s->smtab[i].refcnt, 0);
+ }
+ return s;
+}
+
+/**
+ * Cleanup Source MAC Table
+ */
+void t4_cleanup_smt(struct adapter *adap)
+{
+ if (adap->smt)
+ t4_os_free(adap->smt);
+}
diff --git a/drivers/net/cxgbe/smt.h b/drivers/net/cxgbe/smt.h
new file mode 100644
index 000000000..aa4afcce2
--- /dev/null
+++ b/drivers/net/cxgbe/smt.h
@@ -0,0 +1,39 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Chelsio Communications.
+ * All rights reserved.
+ */
+#ifndef __CXGBE_SMT_H_
+#define __CXGBE_SMT_H_
+
+enum {
+ SMT_STATE_SWITCHING,
+ SMT_STATE_UNUSED,
+ SMT_STATE_ERROR
+};
+
+enum {
+ SMT_SIZE = 256
+};
+
+struct smt_entry {
+ u16 state;
+ u16 idx;
+ u16 pfvf;
+ u16 hw_idx;
+ u8 src_mac[RTE_ETHER_ADDR_LEN];
+ rte_atomic32_t refcnt;
+ rte_spinlock_t lock;
+};
+
+struct smt_data {
+ unsigned int smt_size;
+ unsigned int smt_start;
+ rte_rwlock_t lock;
+ struct smt_entry smtab[0];
+};
+
+struct smt_data *t4_init_smt(u32 smt_start_idx, u32 smt_size);
+void t4_cleanup_smt(struct adapter *adap);
+
+#endif /* __CXGBE_SMT_H_ */
+
--
2.25.0
* [dpdk-dev] [PATCH 7/9] net/cxgbe: add rte_flow support for Source MAC Rewrite
2020-03-11 9:05 [dpdk-dev] [PATCH 0/9] net/cxgbe: updates for rte_flow support Rahul Lakkireddy
` (5 preceding siblings ...)
2020-03-11 9:05 ` [dpdk-dev] [PATCH 6/9] net/cxgbe: add Source MAC Table (SMT) support Rahul Lakkireddy
@ 2020-03-11 9:05 ` Rahul Lakkireddy
2020-03-11 9:05 ` [dpdk-dev] [PATCH 8/9] net/cxgbe: use firmware API for validating filter spec Rahul Lakkireddy
` (3 subsequent siblings)
10 siblings, 0 replies; 15+ messages in thread
From: Rahul Lakkireddy @ 2020-03-11 9:05 UTC (permalink / raw)
To: dev; +Cc: nirranjan, kaara.satwik
From: Karra Satwik <kaara.satwik@chelsio.com>
Add support to rewrite Source MAC addresses. The new Source
MAC address is written into a free entry in the SMT, and the
corresponding SMT index is used by the hardware to rewrite
the Source MAC address of packets hitting the flow.
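As an illustration, a source-MAC rewrite rule could be built through
the generic rte_flow API roughly as below, mirroring the
destination-MAC example in patch 5 but with the SET_MAC_SRC action.
This is an untested sketch: the function name, MAC address, and
egress port index are placeholders, and the exact pattern/action
combinations accepted are PMD-specific.

#include <rte_ethdev.h>
#include <rte_flow.h>

/* Rewrite the source MAC of matched TCP/IPv4 packets and switch them
 * out on physical port 1. The MAC address and port index are
 * illustrative only.
 */
static struct rte_flow *
create_smac_rewrite_flow(uint16_t port_id, struct rte_flow_error *err)
{
    struct rte_flow_attr attr = { .ingress = 1 };
    struct rte_flow_item pattern[] = {
        { .type = RTE_FLOW_ITEM_TYPE_ETH },
        { .type = RTE_FLOW_ITEM_TYPE_IPV4 },
        { .type = RTE_FLOW_ITEM_TYPE_TCP },
        { .type = RTE_FLOW_ITEM_TYPE_END },
    };
    struct rte_flow_action_set_mac set_smac = {
        .mac_addr = { 0x02, 0x00, 0x00, 0x00, 0x00, 0x02 },
    };
    struct rte_flow_action_phy_port out_port = { .index = 1 };
    struct rte_flow_action actions[] = {
        { .type = RTE_FLOW_ACTION_TYPE_SET_MAC_SRC, .conf = &set_smac },
        { .type = RTE_FLOW_ACTION_TYPE_PHY_PORT, .conf = &out_port },
        { .type = RTE_FLOW_ACTION_TYPE_END },
    };

    return rte_flow_create(port_id, &attr, pattern, actions, err);
}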
Signed-off-by: Karra Satwik <kaara.satwik@chelsio.com>
Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
---
drivers/net/cxgbe/base/t4_msg.h | 40 +++++
drivers/net/cxgbe/base/t4_tcb.h | 8 +
drivers/net/cxgbe/base/t4fw_interface.h | 7 +-
drivers/net/cxgbe/cxgbe_filter.c | 35 ++++-
drivers/net/cxgbe/cxgbe_filter.h | 3 +
drivers/net/cxgbe/cxgbe_flow.c | 14 ++
drivers/net/cxgbe/cxgbe_main.c | 4 +
drivers/net/cxgbe/smt.c | 187 ++++++++++++++++++++++++
drivers/net/cxgbe/smt.h | 5 +
9 files changed, 300 insertions(+), 3 deletions(-)
diff --git a/drivers/net/cxgbe/base/t4_msg.h b/drivers/net/cxgbe/base/t4_msg.h
index 9e052b0f0..a6ddaa7b0 100644
--- a/drivers/net/cxgbe/base/t4_msg.h
+++ b/drivers/net/cxgbe/base/t4_msg.h
@@ -12,10 +12,12 @@ enum {
CPL_ABORT_REQ = 0xA,
CPL_ABORT_RPL = 0xB,
CPL_L2T_WRITE_REQ = 0x12,
+ CPL_SMT_WRITE_REQ = 0x14,
CPL_TID_RELEASE = 0x1A,
CPL_L2T_WRITE_RPL = 0x23,
CPL_ACT_OPEN_RPL = 0x25,
CPL_ABORT_RPL_RSS = 0x2D,
+ CPL_SMT_WRITE_RPL = 0x2E,
CPL_SET_TCB_RPL = 0x3A,
CPL_ACT_OPEN_REQ6 = 0x83,
CPL_SGE_EGR_UPDATE = 0xA5,
@@ -465,6 +467,44 @@ struct cpl_l2t_write_rpl {
__u8 rsvd[3];
};
+struct cpl_smt_write_req {
+ WR_HDR;
+ union opcode_tid ot;
+ __be32 params;
+ __be16 pfvf1;
+ __u8 src_mac1[6];
+ __be16 pfvf0;
+ __u8 src_mac0[6];
+};
+
+struct cpl_t6_smt_write_req {
+ WR_HDR;
+ union opcode_tid ot;
+ __be32 params;
+ __be64 tag;
+ __be16 pfvf0;
+ __u8 src_mac0[6];
+ __be32 local_ip;
+ __be32 rsvd;
+};
+
+struct cpl_smt_write_rpl {
+ RSS_HDR
+ union opcode_tid ot;
+ u8 status;
+ u8 rsvd[3];
+};
+
+/* cpl_smt_{read,write}_req.params fields */
+#define S_SMTW_OVLAN_IDX 16
+#define V_SMTW_OVLAN_IDX(x) ((x) << S_SMTW_OVLAN_IDX)
+
+#define S_SMTW_IDX 20
+#define V_SMTW_IDX(x) ((x) << S_SMTW_IDX)
+
+#define S_SMTW_NORPL 31
+#define V_SMTW_NORPL(x) ((x) << S_SMTW_NORPL)
+
/* rx_pkt.l2info fields */
#define S_RXF_UDP 22
#define V_RXF_UDP(x) ((x) << S_RXF_UDP)
diff --git a/drivers/net/cxgbe/base/t4_tcb.h b/drivers/net/cxgbe/base/t4_tcb.h
index 834169ab4..afd03b735 100644
--- a/drivers/net/cxgbe/base/t4_tcb.h
+++ b/drivers/net/cxgbe/base/t4_tcb.h
@@ -6,6 +6,12 @@
#ifndef _T4_TCB_DEFS_H
#define _T4_TCB_DEFS_H
+/* 31:24 */
+#define W_TCB_SMAC_SEL 0
+#define S_TCB_SMAC_SEL 24
+#define M_TCB_SMAC_SEL 0xffULL
+#define V_TCB_SMAC_SEL(x) ((x) << S_TCB_SMAC_SEL)
+
/* 95:32 */
#define W_TCB_T_FLAGS 1
@@ -34,6 +40,8 @@
#define S_TF_CCTRL_ECE 60
+#define S_TF_CCTRL_CWR 61
+
#define S_TF_CCTRL_RFR 62
#endif /* _T4_TCB_DEFS_H */
diff --git a/drivers/net/cxgbe/base/t4fw_interface.h b/drivers/net/cxgbe/base/t4fw_interface.h
index 3684c8006..51ebe4f7a 100644
--- a/drivers/net/cxgbe/base/t4fw_interface.h
+++ b/drivers/net/cxgbe/base/t4fw_interface.h
@@ -248,6 +248,9 @@ struct fw_filter2_wr {
#define S_FW_FILTER_WR_DMAC 19
#define V_FW_FILTER_WR_DMAC(x) ((x) << S_FW_FILTER_WR_DMAC)
+#define S_FW_FILTER_WR_SMAC 18
+#define V_FW_FILTER_WR_SMAC(x) ((x) << S_FW_FILTER_WR_SMAC)
+
#define S_FW_FILTER_WR_INSVLAN 17
#define V_FW_FILTER_WR_INSVLAN(x) ((x) << S_FW_FILTER_WR_INSVLAN)
@@ -1335,8 +1338,8 @@ struct fw_vi_cmd {
#define FW_VI_MAC_ID_BASED_FREE 0x3FC
enum fw_vi_mac_smac {
- FW_VI_MAC_MPS_TCAM_ENTRY,
- FW_VI_MAC_SMT_AND_MPSTCAM
+ FW_VI_MAC_MPS_TCAM_ENTRY = 0x0,
+ FW_VI_MAC_SMT_AND_MPSTCAM = 0x3
};
enum fw_vi_mac_entry_types {
diff --git a/drivers/net/cxgbe/cxgbe_filter.c b/drivers/net/cxgbe/cxgbe_filter.c
index b009217f8..c5f5e41e3 100644
--- a/drivers/net/cxgbe/cxgbe_filter.c
+++ b/drivers/net/cxgbe/cxgbe_filter.c
@@ -10,6 +10,7 @@
#include "cxgbe_filter.h"
#include "clip_tbl.h"
#include "l2t.h"
+#include "smt.h"
/**
* Initialize Hash Filters
@@ -604,6 +605,17 @@ static int cxgbe_set_hash_filter(struct rte_eth_dev *dev,
}
}
+ /* If the new filter requires Source MAC rewriting then we need to
+ * allocate a SMT entry for the filter
+ */
+ if (f->fs.newsmac) {
+ f->smt = cxgbe_smt_alloc_switching(f->dev, f->fs.smac);
+ if (!f->smt) {
+ ret = -EAGAIN;
+ goto out_err;
+ }
+ }
+
atid = cxgbe_alloc_atid(t, f);
if (atid < 0)
goto out_err;
@@ -758,6 +770,20 @@ static int set_filter_wr(struct rte_eth_dev *dev, unsigned int fidx)
return -ENOMEM;
}
+ /* If the new filter requires Source MAC rewriting then we need to
+ * allocate a SMT entry for the filter
+ */
+ if (f->fs.newsmac) {
+ f->smt = cxgbe_smt_alloc_switching(f->dev, f->fs.smac);
+ if (!f->smt) {
+ if (f->l2t) {
+ cxgbe_l2t_release(f->l2t);
+ f->l2t = NULL;
+ }
+ return -ENOMEM;
+ }
+ }
+
ctrlq = &adapter->sge.ctrlq[port_id];
mbuf = rte_pktmbuf_alloc(ctrlq->mb_pool);
if (!mbuf) {
@@ -788,6 +814,7 @@ static int set_filter_wr(struct rte_eth_dev *dev, unsigned int fidx)
cpu_to_be32(V_FW_FILTER_WR_DROP(f->fs.action == FILTER_DROP) |
V_FW_FILTER_WR_DIRSTEER(f->fs.dirsteer) |
V_FW_FILTER_WR_LPBK(f->fs.action == FILTER_SWITCH) |
+ V_FW_FILTER_WR_SMAC(f->fs.newsmac) |
V_FW_FILTER_WR_DMAC(f->fs.newdmac) |
V_FW_FILTER_WR_INSVLAN
(f->fs.newvlan == VLAN_INSERT ||
@@ -806,7 +833,7 @@ static int set_filter_wr(struct rte_eth_dev *dev, unsigned int fidx)
V_FW_FILTER_WR_IVLAN_VLDM(f->fs.mask.ivlan_vld) |
V_FW_FILTER_WR_OVLAN_VLD(f->fs.val.ovlan_vld) |
V_FW_FILTER_WR_OVLAN_VLDM(f->fs.mask.ovlan_vld));
- fwr->smac_sel = 0;
+ fwr->smac_sel = f->smt ? f->smt->hw_idx : 0;
fwr->rx_chan_rx_rpl_iq =
cpu_to_be16(V_FW_FILTER_WR_RX_CHAN(0) |
V_FW_FILTER_WR_RX_RPL_IQ(adapter->sge.fw_evtq.abs_id
@@ -1144,6 +1171,12 @@ void cxgbe_hash_filter_rpl(struct adapter *adap,
if (f->fs.newvlan == VLAN_INSERT ||
f->fs.newvlan == VLAN_REWRITE)
set_tcb_tflag(adap, tid, S_TF_CCTRL_RFR, 1, 1);
+ if (f->fs.newsmac) {
+ set_tcb_tflag(adap, tid, S_TF_CCTRL_CWR, 1, 1);
+ set_tcb_field(adap, tid, W_TCB_SMAC_SEL,
+ V_TCB_SMAC_SEL(M_TCB_SMAC_SEL),
+ V_TCB_SMAC_SEL(f->smt->hw_idx), 1);
+ }
break;
}
default:
diff --git a/drivers/net/cxgbe/cxgbe_filter.h b/drivers/net/cxgbe/cxgbe_filter.h
index 7a1e72ded..e79c052de 100644
--- a/drivers/net/cxgbe/cxgbe_filter.h
+++ b/drivers/net/cxgbe/cxgbe_filter.h
@@ -100,9 +100,11 @@ struct ch_filter_specification {
uint32_t iq:10; /* ingress queue */
uint32_t eport:2; /* egress port to switch packet out */
+ uint32_t newsmac:1; /* rewrite source MAC address */
uint32_t newdmac:1; /* rewrite destination MAC address */
uint32_t swapmac:1; /* swap SMAC/DMAC for loopback packet */
uint32_t newvlan:2; /* rewrite VLAN Tag */
+ uint8_t smac[RTE_ETHER_ADDR_LEN]; /* new source MAC address */
uint8_t dmac[RTE_ETHER_ADDR_LEN]; /* new destination MAC address */
uint16_t vlan; /* VLAN Tag to insert */
@@ -181,6 +183,7 @@ struct filter_entry {
struct filter_ctx *ctx; /* caller's completion hook */
struct clip_entry *clipt; /* CLIP Table entry for IPv6 */
struct l2t_entry *l2t; /* Layer Two Table entry for dmac */
+ struct smt_entry *smt; /* Source Mac Table entry for smac */
struct rte_eth_dev *dev; /* Port's rte eth device */
void *private; /* For use by apps using filter_entry */
diff --git a/drivers/net/cxgbe/cxgbe_flow.c b/drivers/net/cxgbe/cxgbe_flow.c
index b009005c5..13fd78aaf 100644
--- a/drivers/net/cxgbe/cxgbe_flow.c
+++ b/drivers/net/cxgbe/cxgbe_flow.c
@@ -795,6 +795,19 @@ ch_rte_parse_atype_switch(const struct rte_flow_action *a,
"found");
fs->swapmac = 1;
break;
+ case RTE_FLOW_ACTION_TYPE_SET_MAC_SRC:
+ item_index = cxgbe_get_flow_item_index(items,
+ RTE_FLOW_ITEM_TYPE_ETH);
+ if (item_index < 0)
+ return rte_flow_error_set(e, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION, a,
+ "No RTE_FLOW_ITEM_TYPE_ETH "
+ "found");
+ mac = (const struct rte_flow_action_set_mac *)a->conf;
+
+ fs->newsmac = 1;
+ memcpy(fs->smac, mac->mac_addr, sizeof(fs->smac));
+ break;
case RTE_FLOW_ACTION_TYPE_SET_MAC_DST:
item_index = cxgbe_get_flow_item_index(items,
RTE_FLOW_ITEM_TYPE_ETH);
@@ -883,6 +896,7 @@ cxgbe_rtef_parse_actions(struct rte_flow *flow,
goto action_switch;
case RTE_FLOW_ACTION_TYPE_SET_TP_SRC:
case RTE_FLOW_ACTION_TYPE_SET_TP_DST:
+ case RTE_FLOW_ACTION_TYPE_SET_MAC_SRC:
case RTE_FLOW_ACTION_TYPE_SET_MAC_DST:
action_switch:
/* We allow multiple switch actions, but switch is
diff --git a/drivers/net/cxgbe/cxgbe_main.c b/drivers/net/cxgbe/cxgbe_main.c
index 1ab6f8fba..df54e54f5 100644
--- a/drivers/net/cxgbe/cxgbe_main.c
+++ b/drivers/net/cxgbe/cxgbe_main.c
@@ -107,6 +107,10 @@ static int fwevtq_handler(struct sge_rspq *q, const __be64 *rsp,
const struct cpl_l2t_write_rpl *p = (const void *)rsp;
cxgbe_do_l2t_write_rpl(q->adapter, p);
+ } else if (opcode == CPL_SMT_WRITE_RPL) {
+ const struct cpl_smt_write_rpl *p = (const void *)rsp;
+
+ cxgbe_do_smt_write_rpl(q->adapter, p);
} else {
dev_err(adapter, "unexpected CPL %#x on FW event queue\n",
opcode);
diff --git a/drivers/net/cxgbe/smt.c b/drivers/net/cxgbe/smt.c
index cf40c8a8a..e8f38676e 100644
--- a/drivers/net/cxgbe/smt.c
+++ b/drivers/net/cxgbe/smt.c
@@ -6,6 +6,193 @@
#include "base/common.h"
#include "smt.h"
+void cxgbe_do_smt_write_rpl(struct adapter *adap,
+ const struct cpl_smt_write_rpl *rpl)
+{
+ unsigned int smtidx = G_TID_TID(GET_TID(rpl));
+ struct smt_data *s = adap->smt;
+
+ if (unlikely(rpl->status != CPL_ERR_NONE)) {
+ struct smt_entry *e = &s->smtab[smtidx];
+
+ dev_err(adap,
+ "Unexpected SMT_WRITE_RPL status %u for entry %u\n",
+ rpl->status, smtidx);
+ t4_os_lock(&e->lock);
+ e->state = SMT_STATE_ERROR;
+ t4_os_unlock(&e->lock);
+ }
+}
+
+static int write_smt_entry(struct rte_eth_dev *dev, struct smt_entry *e)
+{
+ unsigned int port_id = ethdev2pinfo(dev)->port_id;
+ struct adapter *adap = ethdev2adap(dev);
+ struct cpl_t6_smt_write_req *t6req;
+ struct smt_data *s = adap->smt;
+ struct cpl_smt_write_req *req;
+ struct sge_ctrl_txq *ctrlq;
+ struct rte_mbuf *mbuf;
+ u8 row;
+
+ ctrlq = &adap->sge.ctrlq[port_id];
+ mbuf = rte_pktmbuf_alloc(ctrlq->mb_pool);
+ if (!mbuf)
+ return -ENOMEM;
+
+ if (CHELSIO_CHIP_VERSION(adap->params.chip) <= CHELSIO_T5) {
+ mbuf->data_len = sizeof(*req);
+ mbuf->pkt_len = mbuf->data_len;
+
+ /* Source MAC Table (SMT) contains 256 SMAC entries
+ * organized in 128 rows of 2 entries each.
+ */
+ req = rte_pktmbuf_mtod(mbuf, struct cpl_smt_write_req *);
+ INIT_TP_WR(req, 0);
+
+ /* Each row contains an SMAC pair.
+ * LSB selects the SMAC entry within a row
+ */
+ if (e->idx & 1) {
+ req->pfvf1 = 0x0;
+ rte_memcpy(req->src_mac1, e->src_mac,
+ RTE_ETHER_ADDR_LEN);
+
+ /* fill pfvf0/src_mac0 with entry
+ * at prev index from smt-tab.
+ */
+ req->pfvf0 = 0x0;
+ rte_memcpy(req->src_mac0, s->smtab[e->idx - 1].src_mac,
+ RTE_ETHER_ADDR_LEN);
+ } else {
+ req->pfvf0 = 0x0;
+ rte_memcpy(req->src_mac0, e->src_mac,
+ RTE_ETHER_ADDR_LEN);
+
+ /* fill pfvf1/src_mac1 with entry
+ * at next index from smt-tab
+ */
+ req->pfvf1 = 0x0;
+ rte_memcpy(req->src_mac1, s->smtab[e->idx + 1].src_mac,
+ RTE_ETHER_ADDR_LEN);
+ }
+ row = (e->hw_idx >> 1);
+ } else {
+ mbuf->data_len = sizeof(*t6req);
+ mbuf->pkt_len = mbuf->data_len;
+
+ /* Source MAC Table (SMT) contains 256 SMAC entries */
+ t6req = rte_pktmbuf_mtod(mbuf, struct cpl_t6_smt_write_req *);
+ INIT_TP_WR(t6req, 0);
+
+ /* fill pfvf0/src_mac0 from smt-tab */
+ t6req->pfvf0 = 0x0;
+ rte_memcpy(t6req->src_mac0, s->smtab[e->idx].src_mac,
+ RTE_ETHER_ADDR_LEN);
+ row = e->hw_idx;
+ req = (struct cpl_smt_write_req *)t6req;
+ }
+
+ OPCODE_TID(req) =
+ cpu_to_be32(MK_OPCODE_TID(CPL_SMT_WRITE_REQ,
+ e->hw_idx |
+ V_TID_QID(adap->sge.fw_evtq.abs_id)));
+
+ req->params = cpu_to_be32(V_SMTW_NORPL(0) |
+ V_SMTW_IDX(row) |
+ V_SMTW_OVLAN_IDX(0));
+ t4_mgmt_tx(ctrlq, mbuf);
+
+ return 0;
+}
+
+/**
+ * find_or_alloc_smte - Find/Allocate a free SMT entry
+ * @s: SMT table
+ * @smac: Source MAC address to compare/add
+ * Returns pointer to the SMT entry found/created
+ *
+ * Finds/Allocates an SMT entry to be used by switching rule of a filter.
+ */
+static struct smt_entry *find_or_alloc_smte(struct smt_data *s, u8 *smac)
+{
+ struct smt_entry *e, *end, *first_free = NULL;
+
+ for (e = &s->smtab[0], end = &s->smtab[s->smt_size]; e != end; ++e) {
+ if (!rte_atomic32_read(&e->refcnt)) {
+ if (!first_free)
+ first_free = e;
+ } else {
+ if (e->state == SMT_STATE_SWITCHING) {
+ /* This entry is actually in use. See if we can
+ * re-use it ?
+ */
+ if (!memcmp(e->src_mac, smac,
+ RTE_ETHER_ADDR_LEN))
+ goto found;
+ }
+ }
+ }
+
+ if (!first_free)
+ return NULL;
+
+ e = first_free;
+ e->state = SMT_STATE_UNUSED;
+
+found:
+ return e;
+}
+
+static struct smt_entry *t4_smt_alloc_switching(struct rte_eth_dev *dev,
+ u16 pfvf, u8 *smac)
+{
+ struct adapter *adap = ethdev2adap(dev);
+ struct smt_data *s = adap->smt;
+ struct smt_entry *e;
+ int ret;
+
+ t4_os_write_lock(&s->lock);
+ e = find_or_alloc_smte(s, smac);
+ if (e) {
+ t4_os_lock(&e->lock);
+ if (!rte_atomic32_read(&e->refcnt)) {
+ e->pfvf = pfvf;
+ rte_memcpy(e->src_mac, smac, RTE_ETHER_ADDR_LEN);
+ ret = write_smt_entry(dev, e);
+ if (ret) {
+ e->pfvf = 0;
+ memset(e->src_mac, 0, RTE_ETHER_ADDR_LEN);
+ t4_os_unlock(&e->lock);
+ e = NULL;
+ goto out_write_unlock;
+ }
+ e->state = SMT_STATE_SWITCHING;
+ rte_atomic32_set(&e->refcnt, 1);
+ } else {
+ rte_atomic32_inc(&e->refcnt);
+ }
+ t4_os_unlock(&e->lock);
+ }
+
+out_write_unlock:
+ t4_os_write_unlock(&s->lock);
+ return e;
+}
+
+/**
+ * cxgbe_smt_alloc_switching - Allocate an SMT entry for switching rule
+ * @dev: rte_eth_dev pointer
+ * @smac: MAC address to add to SMT
+ * Returns pointer to the SMT entry created
+ *
+ * Allocates an SMT entry to be used by switching rule of a filter.
+ */
+struct smt_entry *cxgbe_smt_alloc_switching(struct rte_eth_dev *dev, u8 *smac)
+{
+ return t4_smt_alloc_switching(dev, 0x0, smac);
+}
+
/**
* Initialize Source MAC Table
*/
diff --git a/drivers/net/cxgbe/smt.h b/drivers/net/cxgbe/smt.h
index aa4afcce2..be1fab8ba 100644
--- a/drivers/net/cxgbe/smt.h
+++ b/drivers/net/cxgbe/smt.h
@@ -5,6 +5,8 @@
#ifndef __CXGBE_SMT_H_
#define __CXGBE_SMT_H_
+#include "base/t4_msg.h"
+
enum {
SMT_STATE_SWITCHING,
SMT_STATE_UNUSED,
@@ -34,6 +36,9 @@ struct smt_data {
struct smt_data *t4_init_smt(u32 smt_start_idx, u32 smt_size);
void t4_cleanup_smt(struct adapter *adap);
+void cxgbe_do_smt_write_rpl(struct adapter *adap,
+ const struct cpl_smt_write_rpl *rpl);
+struct smt_entry *cxgbe_smt_alloc_switching(struct rte_eth_dev *dev, u8 *smac);
#endif /* __CXGBE_SMT_H_ */
--
2.25.0
^ permalink raw reply [flat|nested] 15+ messages in thread
* [dpdk-dev] [PATCH 8/9] net/cxgbe: use firmware API for validating filter spec
2020-03-11 9:05 [dpdk-dev] [PATCH 0/9] net/cxgbe: updates for rte_flow support Rahul Lakkireddy
` (6 preceding siblings ...)
2020-03-11 9:05 ` [dpdk-dev] [PATCH 7/9] net/cxgbe: add rte_flow support for Source MAC Rewrite Rahul Lakkireddy
@ 2020-03-11 9:05 ` Rahul Lakkireddy
2020-03-11 9:05 ` [dpdk-dev] [PATCH 9/9] net/cxgbe: add devargs to control filtermode and filtermask values Rahul Lakkireddy
` (2 subsequent siblings)
10 siblings, 0 replies; 15+ messages in thread
From: Rahul Lakkireddy @ 2020-03-11 9:05 UTC (permalink / raw)
To: dev; +Cc: nirranjan, kaara.satwik
From: Karra Satwik <kaara.satwik@chelsio.com>
Add a new firmware API FW_PARAM_DEV_FILTER_MODE_MASK to fetch
the filtermode and filtermask values configured in hardware,
which are used to validate the match combinations in the filter
spec before offloading the filter rules to hardware. For older
firmware that doesn't support the new API, fall back to the
older way of reading the values directly from indirect registers.
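As a rough sketch of the validation this enables (illustrative only; the names
below are placeholders, and the real checks operate on the filter spec fields):

#include <errno.h>
#include <stdint.h>

/* "requested_fields" stands for the match-field bits derived from a filter
 * spec; "fconf" is filter_mask for HASH filters or the filter mode
 * (vlan_pri_map) for LETCAM filters.
 */
static int example_validate_fields(uint32_t requested_fields, uint32_t fconf)
{
	/* Reject rules that try to match a field the hardware was not
	 * configured to look at.
	 */
	if (requested_fields & ~fconf)
		return -EOPNOTSUPP;

	return 0;
}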
Signed-off-by: Karra Satwik <kaara.satwik@chelsio.com>
Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
---
drivers/net/cxgbe/base/common.h | 1 +
drivers/net/cxgbe/base/t4_hw.c | 46 ++++++++++++++++++++++---
drivers/net/cxgbe/base/t4fw_interface.h | 18 ++++++++++
drivers/net/cxgbe/cxgbe_filter.c | 3 +-
4 files changed, 62 insertions(+), 6 deletions(-)
diff --git a/drivers/net/cxgbe/base/common.h b/drivers/net/cxgbe/base/common.h
index 892aab64b..79c8fcb76 100644
--- a/drivers/net/cxgbe/base/common.h
+++ b/drivers/net/cxgbe/base/common.h
@@ -133,6 +133,7 @@ struct tp_params {
unsigned short tx_modq[NCHAN]; /* channel to modulation queue map */
u32 vlan_pri_map; /* cached TP_VLAN_PRI_MAP */
+ u32 filter_mask;
u32 ingress_config; /* cached TP_INGRESS_CONFIG */
/* cached TP_OUT_CONFIG compressed error vector
diff --git a/drivers/net/cxgbe/base/t4_hw.c b/drivers/net/cxgbe/base/t4_hw.c
index 48b6d77b1..1e7be3ec3 100644
--- a/drivers/net/cxgbe/base/t4_hw.c
+++ b/drivers/net/cxgbe/base/t4_hw.c
@@ -5215,8 +5215,8 @@ int t4_init_sge_params(struct adapter *adapter)
*/
int t4_init_tp_params(struct adapter *adap)
{
- int chan;
- u32 v;
+ int chan, ret;
+ u32 param, v;
v = t4_read_reg(adap, A_TP_TIMER_RESOLUTION);
adap->params.tp.tre = G_TIMERRESOLUTION(v);
@@ -5227,11 +5227,47 @@ int t4_init_tp_params(struct adapter *adap)
adap->params.tp.tx_modq[chan] = chan;
/*
- * Cache the adapter's Compressed Filter Mode and global Incress
+ * Cache the adapter's Compressed Filter Mode/Mask and global Ingress
* Configuration.
*/
- t4_read_indirect(adap, A_TP_PIO_ADDR, A_TP_PIO_DATA,
- &adap->params.tp.vlan_pri_map, 1, A_TP_VLAN_PRI_MAP);
+ param = (V_FW_PARAMS_MNEM(FW_PARAMS_MNEM_DEV) |
+ V_FW_PARAMS_PARAM_X(FW_PARAMS_PARAM_DEV_FILTER) |
+ V_FW_PARAMS_PARAM_Y(FW_PARAM_DEV_FILTER_MODE_MASK));
+
+ /* Read current value */
+ ret = t4_query_params(adap, adap->mbox, adap->pf, 0,
1, &param, &v);
+ if (!ret) {
+ dev_info(adap, "Current filter mode/mask 0x%x:0x%x\n",
+ G_FW_PARAMS_PARAM_FILTER_MODE(v),
+ G_FW_PARAMS_PARAM_FILTER_MASK(v));
+ adap->params.tp.vlan_pri_map =
+ G_FW_PARAMS_PARAM_FILTER_MODE(v);
+ adap->params.tp.filter_mask =
+ G_FW_PARAMS_PARAM_FILTER_MASK(v);
+ } else {
+ dev_info(adap,
+ "Failed to read filter mode/mask via fw api, using indirect-reg-read\n");
+
+ /* In case of an older firmware (which doesn't expose the
+ * FW_PARAM_DEV_FILTER_MODE_MASK API) and a newer driver (which
+ * uses the firmware API), fall back to the older method of
+ * reading the filter mode from the indirect register.
+ */
+ t4_read_indirect(adap, A_TP_PIO_ADDR, A_TP_PIO_DATA,
+ &adap->params.tp.vlan_pri_map, 1,
+ A_TP_VLAN_PRI_MAP);
+
+ /* With the older-firmware and newer-driver combination, we might
+ * run into an issue when the user wants to use the hash filter
+ * region but the filter_mask is zero; in that case, filter_mask
+ * validation is tough. To avoid that, set the filter_mask to the
+ * same value as the filter mode, which behaves exactly like the
+ * older way of ignoring the filter mask validation.
+ */
+ adap->params.tp.filter_mask = adap->params.tp.vlan_pri_map;
+ }
+
t4_read_indirect(adap, A_TP_PIO_ADDR, A_TP_PIO_DATA,
&adap->params.tp.ingress_config, 1,
A_TP_INGRESS_CONFIG);
diff --git a/drivers/net/cxgbe/base/t4fw_interface.h b/drivers/net/cxgbe/base/t4fw_interface.h
index 51ebe4f7a..46d087a09 100644
--- a/drivers/net/cxgbe/base/t4fw_interface.h
+++ b/drivers/net/cxgbe/base/t4fw_interface.h
@@ -671,6 +671,19 @@ enum fw_params_mnem {
/*
* device parameters
*/
+
+#define S_FW_PARAMS_PARAM_FILTER_MODE 16
+#define M_FW_PARAMS_PARAM_FILTER_MODE 0xffff
+#define G_FW_PARAMS_PARAM_FILTER_MODE(x) \
+ (((x) >> S_FW_PARAMS_PARAM_FILTER_MODE) & \
+ M_FW_PARAMS_PARAM_FILTER_MODE)
+
+#define S_FW_PARAMS_PARAM_FILTER_MASK 0
+#define M_FW_PARAMS_PARAM_FILTER_MASK 0xffff
+#define G_FW_PARAMS_PARAM_FILTER_MASK(x) \
+ (((x) >> S_FW_PARAMS_PARAM_FILTER_MASK) & \
+ M_FW_PARAMS_PARAM_FILTER_MASK)
+
enum fw_params_param_dev {
FW_PARAMS_PARAM_DEV_CCLK = 0x00, /* chip core clock in khz */
FW_PARAMS_PARAM_DEV_PORTVEC = 0x01, /* the port vector */
@@ -683,6 +696,7 @@ enum fw_params_param_dev {
FW_PARAMS_PARAM_DEV_ULPTX_MEMWRITE_DSGL = 0x17,
FW_PARAMS_PARAM_DEV_FILTER2_WR = 0x1D,
FW_PARAMS_PARAM_DEV_OPAQUE_VIID_SMT_EXTN = 0x27,
+ FW_PARAMS_PARAM_DEV_FILTER = 0x2E,
};
/*
@@ -710,6 +724,10 @@ enum fw_params_param_dmaq {
FW_PARAMS_PARAM_DMAQ_CONM_CTXT = 0x20,
};
+enum fw_params_param_dev_filter {
+ FW_PARAM_DEV_FILTER_MODE_MASK = 0x01,
+};
+
#define S_FW_PARAMS_MNEM 24
#define M_FW_PARAMS_MNEM 0xff
#define V_FW_PARAMS_MNEM(x) ((x) << S_FW_PARAMS_MNEM)
diff --git a/drivers/net/cxgbe/cxgbe_filter.c b/drivers/net/cxgbe/cxgbe_filter.c
index c5f5e41e3..27e96c73e 100644
--- a/drivers/net/cxgbe/cxgbe_filter.c
+++ b/drivers/net/cxgbe/cxgbe_filter.c
@@ -62,7 +62,8 @@ int cxgbe_validate_filter(struct adapter *adapter,
/*
* Check for unconfigured fields being used.
*/
- fconf = adapter->params.tp.vlan_pri_map;
+ fconf = fs->cap ? adapter->params.tp.filter_mask :
+ adapter->params.tp.vlan_pri_map;
iconf = adapter->params.tp.ingress_config;
--
2.25.0
^ permalink raw reply [flat|nested] 15+ messages in thread
* [dpdk-dev] [PATCH 9/9] net/cxgbe: add devargs to control filtermode and filtermask values
2020-03-11 9:05 [dpdk-dev] [PATCH 0/9] net/cxgbe: updates for rte_flow support Rahul Lakkireddy
` (7 preceding siblings ...)
2020-03-11 9:05 ` [dpdk-dev] [PATCH 8/9] net/cxgbe: use firmware API for validating filter spec Rahul Lakkireddy
@ 2020-03-11 9:05 ` Rahul Lakkireddy
2020-03-11 13:11 ` [dpdk-dev] [PATCH 0/9] net/cxgbe: updates for rte_flow support Ferruh Yigit
2020-03-18 12:09 ` Thomas Monjalon
10 siblings, 0 replies; 15+ messages in thread
From: Rahul Lakkireddy @ 2020-03-11 9:05 UTC (permalink / raw)
To: dev; +Cc: nirranjan, kaara.satwik
From: Karra Satwik <kaara.satwik@chelsio.com>
Apart from the 4-tuple (IP src/dst addresses and TCP/UDP src/dst
port addresses), there are only 40-bits available to match other
fields in packet headers. Not all combinations of packet header
fields can fit in the 40-bit tuple.
Currently, the combination of packet header fields to match are
configured via filterMode for LETCAM filters and filterMask for
HASH filters in firmware config files (t5/t6-config.txt). So, add
devargs to allow the user to dynamically select the filterMode and
filterMask combination at runtime, without having to modify the
firmware config files and reflash them onto the adapter. A table
of supported combinations is maintained by the driver to internally
translate the user-specified devargs combination to the hardware's
internal format before writing the requested combination to hardware.
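For illustration, selecting Ethertype and IP Protocol matching works out as
follows (the macros are placeholders that mirror the flag values documented
in doc/guides/nics/cxgbe.rst by this patch):

#define EXAMPLE_FILTER_FLAG_ETHERTYPE   0x08U /* match Ethertype */
#define EXAMPLE_FILTER_FLAG_IP_PROTOCOL 0x80U /* match IP protocol */

/* 0x08 | 0x80 = 0x88, passed as: -w 02:00.4,filtermode=0x88,filtermask=0x80 */
static const unsigned int example_filtermode =
	EXAMPLE_FILTER_FLAG_ETHERTYPE | EXAMPLE_FILTER_FLAG_IP_PROTOCOL;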
Signed-off-by: Karra Satwik <kaara.satwik@chelsio.com>
Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
---
doc/guides/nics/cxgbe.rst | 219 +++++++++++++++++++++-
drivers/net/cxgbe/base/adapter.h | 2 +
drivers/net/cxgbe/base/t4fw_interface.h | 5 +
drivers/net/cxgbe/cxgbe.h | 23 +++
drivers/net/cxgbe/cxgbe_ethdev.c | 4 +-
drivers/net/cxgbe/cxgbe_main.c | 237 ++++++++++++++++++++++++
6 files changed, 482 insertions(+), 8 deletions(-)
diff --git a/doc/guides/nics/cxgbe.rst b/doc/guides/nics/cxgbe.rst
index cae78a34c..54a4c1389 100644
--- a/doc/guides/nics/cxgbe.rst
+++ b/doc/guides/nics/cxgbe.rst
@@ -70,7 +70,7 @@ in :ref:`t5-nics` and :ref:`t6-nics`.
Prerequisites
-------------
-- Requires firmware version **1.23.4.0** and higher. Visit
+- Requires firmware version **1.24.11.0** and higher. Visit
`Chelsio Download Center <http://service.chelsio.com>`_ to get latest firmware
bundled with the latest Chelsio Unified Wire package.
@@ -141,6 +141,211 @@ CXGBE VF Only Runtime Options
underlying Chelsio NICs. This enables multiple VFs on the same NIC
to send traffic to each other even when the physical link is down.
+CXGBE PF Only Runtime Options
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+- ``filtermode`` (default **0**)
+
+ Apart from the 4-tuple (IP src/dst addresses and TCP/UDP src/dst port
+ addresses), there are only 40-bits available to match other fields in
+ packet headers. So, the ``filtermode`` devarg allows the user to dynamically
+ select a 40-bit supported match field combination for LETCAM (wildcard)
+ filters.
+
+ The default value of **0** makes the driver pick the combination configured in
+ the firmware configuration file on the adapter.
+
+ The supported flags and their corresponding values are shown in table below.
+ These flags can be OR'd to create 1 of the multiple supported combinations
+ for LETCAM filters.
+
+ ================== ======
+ FLAG VALUE
+ ================== ======
+ Physical Port 0x1
+ PFVF 0x2
+ Destination MAC 0x4
+ Ethertype 0x8
+ Inner VLAN 0x10
+ Outer VLAN 0x20
+ IP TOS 0x40
+ IP Protocol 0x80
+ ================== ======
+
+ The supported ``filtermode`` combinations and their corresponding OR'd
+ values are shown in table below.
+
+ +-----------------------------------+-----------+
+ | FILTERMODE COMBINATIONS | VALUE |
+ +===================================+===========+
+ | Protocol, TOS, Outer VLAN, Port | 0xE1 |
+ +-----------------------------------+-----------+
+ | Protocol, TOS, Outer VLAN | 0xE0 |
+ +-----------------------------------+-----------+
+ | Protocol, TOS, Inner VLAN, Port | 0xD1 |
+ +-----------------------------------+-----------+
+ | Protocol, TOS, Inner VLAN | 0xD0 |
+ +-----------------------------------+-----------+
+ | Protocol, TOS, PFVF, Port | 0xC3 |
+ +-----------------------------------+-----------+
+ | Protocol, TOS, PFVF | 0xC2 |
+ +-----------------------------------+-----------+
+ | Protocol, TOS, Port | 0xC1 |
+ +-----------------------------------+-----------+
+ | Protocol, TOS | 0xC0 |
+ +-----------------------------------+-----------+
+ | Protocol, Outer VLAN, Port | 0xA1 |
+ +-----------------------------------+-----------+
+ | Protocol, Outer VLAN | 0xA0 |
+ +-----------------------------------+-----------+
+ | Protocol, Inner VLAN, Port | 0x91 |
+ +-----------------------------------+-----------+
+ | Protocol, Inner VLAN | 0x90 |
+ +-----------------------------------+-----------+
+ | Protocol, Ethertype, DstMAC, Port | 0x8D |
+ +-----------------------------------+-----------+
+ | Protocol, Ethertype, DstMAC | 0x8C |
+ +-----------------------------------+-----------+
+ | Protocol, Ethertype, Port | 0x89 |
+ +-----------------------------------+-----------+
+ | Protocol, Ethertype | 0x88 |
+ +-----------------------------------+-----------+
+ | Protocol, DstMAC, PFVF, Port | 0x87 |
+ +-----------------------------------+-----------+
+ | Protocol, DstMAC, PFVF | 0x86 |
+ +-----------------------------------+-----------+
+ | Protocol, DstMAC, Port | 0x85 |
+ +-----------------------------------+-----------+
+ | Protocol, DstMAC | 0x84 |
+ +-----------------------------------+-----------+
+ | Protocol, PFVF, Port | 0x83 |
+ +-----------------------------------+-----------+
+ | Protocol, PFVF | 0x82 |
+ +-----------------------------------+-----------+
+ | Protocol, Port | 0x81 |
+ +-----------------------------------+-----------+
+ | Protocol | 0x80 |
+ +-----------------------------------+-----------+
+ | TOS, Outer VLAN, Port | 0x61 |
+ +-----------------------------------+-----------+
+ | TOS, Outer VLAN | 0x60 |
+ +-----------------------------------+-----------+
+ | TOS, Inner VLAN, Port | 0x51 |
+ +-----------------------------------+-----------+
+ | TOS, Inner VLAN | 0x50 |
+ +-----------------------------------+-----------+
+ | TOS, Ethertype, DstMAC, Port | 0x4D |
+ +-----------------------------------+-----------+
+ | TOS, Ethertype, DstMAC | 0x4C |
+ +-----------------------------------+-----------+
+ | TOS, Ethertype, Port | 0x49 |
+ +-----------------------------------+-----------+
+ | TOS, Ethertype | 0x48 |
+ +-----------------------------------+-----------+
+ | TOS, DstMAC, PFVF, Port | 0x47 |
+ +-----------------------------------+-----------+
+ | TOS, DstMAC, PFVF | 0x46 |
+ +-----------------------------------+-----------+
+ | TOS, DstMAC, Port | 0x45 |
+ +-----------------------------------+-----------+
+ | TOS, DstMAC | 0x44 |
+ +-----------------------------------+-----------+
+ | TOS, PFVF, Port | 0x43 |
+ +-----------------------------------+-----------+
+ | TOS, PFVF | 0x42 |
+ +-----------------------------------+-----------+
+ | TOS, Port | 0x41 |
+ +-----------------------------------+-----------+
+ | TOS | 0x40 |
+ +-----------------------------------+-----------+
+ | Outer VLAN, Inner VLAN, Port | 0x31 |
+ +-----------------------------------+-----------+
+ | Outer VLAN, Ethertype, Port | 0x29 |
+ +-----------------------------------+-----------+
+ | Outer VLAN, Ethertype | 0x28 |
+ +-----------------------------------+-----------+
+ | Outer VLAN, DstMAC, Port | 0x25 |
+ +-----------------------------------+-----------+
+ | Outer VLAN, DstMAC | 0x24 |
+ +-----------------------------------+-----------+
+ | Outer VLAN, Port | 0x21 |
+ +-----------------------------------+-----------+
+ | Outer VLAN | 0x20 |
+ +-----------------------------------+-----------+
+ | Inner VLAN, Ethertype, Port | 0x19 |
+ +-----------------------------------+-----------+
+ | Inner VLAN, Ethertype | 0x18 |
+ +-----------------------------------+-----------+
+ | Inner VLAN, DstMAC, Port | 0x15 |
+ +-----------------------------------+-----------+
+ | Inner VLAN, DstMAC | 0x14 |
+ +-----------------------------------+-----------+
+ | Inner VLAN, Port | 0x11 |
+ +-----------------------------------+-----------+
+ | Inner VLAN | 0x10 |
+ +-----------------------------------+-----------+
+ | Ethertype, DstMAC, Port | 0xD |
+ +-----------------------------------+-----------+
+ | Ethertype, DstMAC | 0xC |
+ +-----------------------------------+-----------+
+ | Ethertype, PFVF, Port | 0xB |
+ +-----------------------------------+-----------+
+ | Ethertype, PFVF | 0xA |
+ +-----------------------------------+-----------+
+ | Ethertype, Port | 0x9 |
+ +-----------------------------------+-----------+
+ | Ethertype | 0x8 |
+ +-----------------------------------+-----------+
+ | DstMAC, PFVF, Port | 0x7 |
+ +-----------------------------------+-----------+
+ | DstMAC, PFVF | 0x6 |
+ +-----------------------------------+-----------+
+ | DstMAC, Port | 0x5 |
+ +-----------------------------------+-----------+
+ | Destination MAC | 0x4 |
+ +-----------------------------------+-----------+
+ | PFVF, Port | 0x3 |
+ +-----------------------------------+-----------+
+ | PFVF | 0x2 |
+ +-----------------------------------+-----------+
+ | Physical Port | 0x1 |
+ +-----------------------------------+-----------+
+
+ For example, to enable matching ``ethertype`` field in Ethernet
+ header, and ``protocol`` field in IPv4 header, the ``filtermode``
+ combination must be given as:
+
+ .. code-block:: console
+
+ testpmd -w 02:00.4,filtermode=0x88 -- -i
+
+- ``filtermask`` (default **0**)
+
+ The ``filtermask`` devarg works similarly to ``filtermode``, but is used
+ to configure a filter mode combination for HASH (exact-match) filters.
+
+ .. note::
+
+ The combination chosen for ``filtermask`` devarg **must be a subset** of
+ the combination chosen for ``filtermode`` devarg.
+
+ The default value of **0** makes the driver pick the combination configured in
+ the firmware configuration file on the adapter.
+
+ Note that a filter rule will only be inserted in the HASH region if the
+ rule contains **all** the fields specified in the ``filtermask`` combination.
+ Otherwise, the filter rule will be inserted in the LETCAM region.
+
+ The same combination list explained in the tables in ``filtermode`` devarg
+ section earlier applies for ``filtermask`` devarg, as well.
+
+ For example, to enable matching only protocol field in IPv4 header, the
+ ``filtermask`` combination must be given as:
+
+ .. code-block:: console
+
+ testpmd -w 02:00.4,filtermode=0x88,filtermask=0x80 -- -i
+
.. _driver-compilation:
Driver compilation and testing
@@ -215,7 +420,7 @@ Unified Wire package for Linux operating system are as follows:
.. code-block:: console
- firmware-version: 1.23.4.0, TP 0.1.23.2
+ firmware-version: 1.24.11.0, TP 0.1.23.2
Running testpmd
~~~~~~~~~~~~~~~
@@ -273,7 +478,7 @@ devices managed by librte_pmd_cxgbe in Linux operating system.
EAL: PCI memory mapped at 0x7fd7c0200000
EAL: PCI memory mapped at 0x7fd77cdfd000
EAL: PCI memory mapped at 0x7fd7c10b7000
- PMD: rte_cxgbe_pmd: fw: 1.23.4.0, TP: 0.1.23.2
+ PMD: rte_cxgbe_pmd: fw: 1.24.11.0, TP: 0.1.23.2
PMD: rte_cxgbe_pmd: Coming up as MASTER: Initializing adapter
Interactive-mode selected
Configuring Port 0 (socket 0)
@@ -379,7 +584,7 @@ virtual functions.
[...]
EAL: PCI device 0000:02:01.0 on NUMA socket 0
EAL: probe driver: 1425:5803 net_cxgbevf
- PMD: rte_cxgbe_pmd: Firmware version: 1.23.4.0
+ PMD: rte_cxgbe_pmd: Firmware version: 1.24.11.0
PMD: rte_cxgbe_pmd: TP Microcode version: 0.1.23.2
PMD: rte_cxgbe_pmd: Chelsio rev 0
PMD: rte_cxgbe_pmd: No bootstrap loaded
@@ -387,7 +592,7 @@ virtual functions.
PMD: rte_cxgbe_pmd: 0000:02:01.0 Chelsio rev 0 1G/10GBASE-SFP
EAL: PCI device 0000:02:01.1 on NUMA socket 0
EAL: probe driver: 1425:5803 net_cxgbevf
- PMD: rte_cxgbe_pmd: Firmware version: 1.23.4.0
+ PMD: rte_cxgbe_pmd: Firmware version: 1.24.11.0
PMD: rte_cxgbe_pmd: TP Microcode version: 0.1.23.2
PMD: rte_cxgbe_pmd: Chelsio rev 0
PMD: rte_cxgbe_pmd: No bootstrap loaded
@@ -465,7 +670,7 @@ Unified Wire package for FreeBSD operating system are as follows:
.. code-block:: console
- dev.t5nex.0.firmware_version: 1.23.4.0
+ dev.t5nex.0.firmware_version: 1.24.11.0
Running testpmd
~~~~~~~~~~~~~~~
@@ -583,7 +788,7 @@ devices managed by librte_pmd_cxgbe in FreeBSD operating system.
EAL: PCI memory mapped at 0x8007ec000
EAL: PCI memory mapped at 0x842800000
EAL: PCI memory mapped at 0x80086c000
- PMD: rte_cxgbe_pmd: fw: 1.23.4.0, TP: 0.1.23.2
+ PMD: rte_cxgbe_pmd: fw: 1.24.11.0, TP: 0.1.23.2
PMD: rte_cxgbe_pmd: Coming up as MASTER: Initializing adapter
Interactive-mode selected
Configuring Port 0 (socket 0)
diff --git a/drivers/net/cxgbe/base/adapter.h b/drivers/net/cxgbe/base/adapter.h
index ae318ccf5..62de35c7c 100644
--- a/drivers/net/cxgbe/base/adapter.h
+++ b/drivers/net/cxgbe/base/adapter.h
@@ -309,6 +309,8 @@ struct adapter_devargs {
bool keep_ovlan;
bool force_link_up;
bool tx_mode_latency;
+ u32 filtermode;
+ u32 filtermask;
};
struct adapter {
diff --git a/drivers/net/cxgbe/base/t4fw_interface.h b/drivers/net/cxgbe/base/t4fw_interface.h
index 46d087a09..0032178d0 100644
--- a/drivers/net/cxgbe/base/t4fw_interface.h
+++ b/drivers/net/cxgbe/base/t4fw_interface.h
@@ -674,12 +674,16 @@ enum fw_params_mnem {
#define S_FW_PARAMS_PARAM_FILTER_MODE 16
#define M_FW_PARAMS_PARAM_FILTER_MODE 0xffff
+#define V_FW_PARAMS_PARAM_FILTER_MODE(x) \
+ ((x) << S_FW_PARAMS_PARAM_FILTER_MODE)
#define G_FW_PARAMS_PARAM_FILTER_MODE(x) \
(((x) >> S_FW_PARAMS_PARAM_FILTER_MODE) & \
M_FW_PARAMS_PARAM_FILTER_MODE)
#define S_FW_PARAMS_PARAM_FILTER_MASK 0
#define M_FW_PARAMS_PARAM_FILTER_MASK 0xffff
+#define V_FW_PARAMS_PARAM_FILTER_MASK(x) \
+ ((x) << S_FW_PARAMS_PARAM_FILTER_MASK)
#define G_FW_PARAMS_PARAM_FILTER_MASK(x) \
(((x) >> S_FW_PARAMS_PARAM_FILTER_MASK) & \
M_FW_PARAMS_PARAM_FILTER_MASK)
@@ -725,6 +729,7 @@ enum fw_params_param_dmaq {
};
enum fw_params_param_dev_filter {
+ FW_PARAM_DEV_FILTER_VNIC_MODE = 0x00,
FW_PARAM_DEV_FILTER_MODE_MASK = 0x01,
};
diff --git a/drivers/net/cxgbe/cxgbe.h b/drivers/net/cxgbe/cxgbe.h
index 75a2e9931..0bf6061c0 100644
--- a/drivers/net/cxgbe/cxgbe.h
+++ b/drivers/net/cxgbe/cxgbe.h
@@ -51,6 +51,25 @@
DEV_RX_OFFLOAD_SCATTER | \
DEV_RX_OFFLOAD_RSS_HASH)
+/* Devargs filtermode and filtermask representation */
+enum cxgbe_devargs_filter_mode_flags {
+ CXGBE_DEVARGS_FILTER_MODE_PHYSICAL_PORT = (1 << 0),
+ CXGBE_DEVARGS_FILTER_MODE_PF_VF = (1 << 1),
+
+ CXGBE_DEVARGS_FILTER_MODE_ETHERNET_DSTMAC = (1 << 2),
+ CXGBE_DEVARGS_FILTER_MODE_ETHERNET_ETHTYPE = (1 << 3),
+ CXGBE_DEVARGS_FILTER_MODE_VLAN_INNER = (1 << 4),
+ CXGBE_DEVARGS_FILTER_MODE_VLAN_OUTER = (1 << 5),
+ CXGBE_DEVARGS_FILTER_MODE_IP_TOS = (1 << 6),
+ CXGBE_DEVARGS_FILTER_MODE_IP_PROTOCOL = (1 << 7),
+ CXGBE_DEVARGS_FILTER_MODE_MAX = (1 << 8),
+};
+
+enum cxgbe_filter_vnic_mode {
+ CXGBE_FILTER_VNIC_MODE_NONE,
+ CXGBE_FILTER_VNIC_MODE_PFVF,
+ CXGBE_FILTER_VNIC_MODE_OVLAN,
+};
/* Common PF and VF devargs */
#define CXGBE_DEVARG_CMN_KEEP_OVLAN "keep_ovlan"
@@ -59,6 +78,10 @@
/* VF only devargs */
#define CXGBE_DEVARG_VF_FORCE_LINK_UP "force_link_up"
+/* Filter Mode/Mask devargs */
+#define CXGBE_DEVARG_PF_FILTER_MODE "filtermode"
+#define CXGBE_DEVARG_PF_FILTER_MASK "filtermask"
+
bool cxgbe_force_linkup(struct adapter *adap);
int cxgbe_probe(struct adapter *adapter);
int cxgbevf_probe(struct adapter *adapter);
diff --git a/drivers/net/cxgbe/cxgbe_ethdev.c b/drivers/net/cxgbe/cxgbe_ethdev.c
index 51b63ef57..1deee2f5c 100644
--- a/drivers/net/cxgbe/cxgbe_ethdev.c
+++ b/drivers/net/cxgbe/cxgbe_ethdev.c
@@ -1244,7 +1244,9 @@ RTE_PMD_REGISTER_PCI_TABLE(net_cxgbe, cxgb4_pci_tbl);
RTE_PMD_REGISTER_KMOD_DEP(net_cxgbe, "* igb_uio | uio_pci_generic | vfio-pci");
RTE_PMD_REGISTER_PARAM_STRING(net_cxgbe,
CXGBE_DEVARG_CMN_KEEP_OVLAN "=<0|1> "
- CXGBE_DEVARG_CMN_TX_MODE_LATENCY "=<0|1> ");
+ CXGBE_DEVARG_CMN_TX_MODE_LATENCY "=<0|1> "
+ CXGBE_DEVARG_PF_FILTER_MODE "=<uint32> "
+ CXGBE_DEVARG_PF_FILTER_MASK "=<uint32> ");
RTE_INIT(cxgbe_init_log)
{
diff --git a/drivers/net/cxgbe/cxgbe_main.c b/drivers/net/cxgbe/cxgbe_main.c
index df54e54f5..a541d95cc 100644
--- a/drivers/net/cxgbe/cxgbe_main.c
+++ b/drivers/net/cxgbe/cxgbe_main.c
@@ -43,6 +43,77 @@
#include "smt.h"
#include "mps_tcam.h"
+static const u16 cxgbe_filter_mode_features[] = {
+ (F_FRAGMENTATION | F_MPSHITTYPE | F_MACMATCH | F_ETHERTYPE |
+ F_PROTOCOL | F_PORT),
+ (F_FRAGMENTATION | F_MPSHITTYPE | F_MACMATCH | F_ETHERTYPE |
+ F_PROTOCOL | F_FCOE),
+ (F_FRAGMENTATION | F_MPSHITTYPE | F_MACMATCH | F_ETHERTYPE | F_TOS |
+ F_PORT),
+ (F_FRAGMENTATION | F_MPSHITTYPE | F_MACMATCH | F_ETHERTYPE | F_TOS |
+ F_FCOE),
+ (F_FRAGMENTATION | F_MPSHITTYPE | F_MACMATCH | F_ETHERTYPE | F_PORT |
+ F_FCOE),
+ (F_FRAGMENTATION | F_MPSHITTYPE | F_MACMATCH | F_PROTOCOL | F_TOS |
+ F_PORT | F_FCOE),
+ (F_FRAGMENTATION | F_MPSHITTYPE | F_MACMATCH | F_PROTOCOL | F_VLAN |
+ F_FCOE),
+ (F_FRAGMENTATION | F_MPSHITTYPE | F_MACMATCH | F_PROTOCOL | F_VNIC_ID |
+ F_FCOE),
+ (F_FRAGMENTATION | F_MPSHITTYPE | F_MACMATCH | F_TOS | F_VLAN |
+ F_FCOE),
+ (F_FRAGMENTATION | F_MPSHITTYPE | F_MACMATCH | F_TOS | F_VNIC_ID |
+ F_FCOE),
+ (F_FRAGMENTATION | F_MPSHITTYPE | F_MACMATCH | F_VLAN | F_PORT |
+ F_FCOE),
+ (F_FRAGMENTATION | F_MPSHITTYPE | F_MACMATCH | F_VNIC_ID | F_PORT |
+ F_FCOE),
+ (F_FRAGMENTATION | F_MPSHITTYPE | F_ETHERTYPE | F_PROTOCOL | F_TOS |
+ F_PORT | F_FCOE),
+ (F_FRAGMENTATION | F_MPSHITTYPE | F_ETHERTYPE | F_VLAN | F_PORT),
+ (F_FRAGMENTATION | F_MPSHITTYPE | F_ETHERTYPE | F_VLAN | F_FCOE),
+ (F_FRAGMENTATION | F_MPSHITTYPE | F_ETHERTYPE | F_VNIC_ID | F_PORT),
+ (F_FRAGMENTATION | F_MPSHITTYPE | F_ETHERTYPE | F_VNIC_ID | F_FCOE),
+ (F_FRAGMENTATION | F_MPSHITTYPE | F_PROTOCOL | F_TOS | F_VLAN | F_PORT),
+ (F_FRAGMENTATION | F_MPSHITTYPE | F_PROTOCOL | F_TOS | F_VLAN | F_FCOE),
+ (F_FRAGMENTATION | F_MPSHITTYPE | F_PROTOCOL | F_TOS | F_VNIC_ID |
+ F_PORT),
+ (F_FRAGMENTATION | F_MPSHITTYPE | F_PROTOCOL | F_TOS | F_VNIC_ID |
+ F_FCOE),
+ (F_FRAGMENTATION | F_MPSHITTYPE | F_PROTOCOL | F_VLAN | F_PORT |
+ F_FCOE),
+ (F_FRAGMENTATION | F_MPSHITTYPE | F_PROTOCOL | F_VNIC_ID | F_PORT |
+ F_FCOE),
+ (F_FRAGMENTATION | F_MPSHITTYPE | F_TOS | F_VLAN | F_PORT | F_FCOE),
+ (F_FRAGMENTATION | F_MPSHITTYPE | F_TOS | F_VNIC_ID | F_PORT | F_FCOE),
+ (F_FRAGMENTATION | F_MPSHITTYPE | F_VLAN | F_VNIC_ID | F_FCOE),
+ (F_FRAGMENTATION | F_MACMATCH | F_ETHERTYPE | F_PROTOCOL | F_PORT |
+ F_FCOE),
+ (F_FRAGMENTATION | F_MACMATCH | F_ETHERTYPE | F_TOS | F_PORT | F_FCOE),
+ (F_FRAGMENTATION | F_MACMATCH | F_PROTOCOL | F_VLAN | F_PORT | F_FCOE),
+ (F_FRAGMENTATION | F_MACMATCH | F_PROTOCOL | F_VNIC_ID | F_PORT |
+ F_FCOE),
+ (F_FRAGMENTATION | F_MACMATCH | F_TOS | F_VLAN | F_PORT | F_FCOE),
+ (F_FRAGMENTATION | F_MACMATCH | F_TOS | F_VNIC_ID | F_PORT | F_FCOE),
+ (F_FRAGMENTATION | F_ETHERTYPE | F_VLAN | F_PORT | F_FCOE),
+ (F_FRAGMENTATION | F_ETHERTYPE | F_VNIC_ID | F_PORT | F_FCOE),
+ (F_FRAGMENTATION | F_PROTOCOL | F_TOS | F_VLAN | F_FCOE),
+ (F_FRAGMENTATION | F_PROTOCOL | F_TOS | F_VNIC_ID | F_FCOE),
+ (F_FRAGMENTATION | F_VLAN | F_VNIC_ID | F_PORT | F_FCOE),
+ (F_MPSHITTYPE | F_MACMATCH | F_ETHERTYPE | F_PROTOCOL | F_PORT |
+ F_FCOE),
+ (F_MPSHITTYPE | F_MACMATCH | F_ETHERTYPE | F_TOS | F_PORT | F_FCOE),
+ (F_MPSHITTYPE | F_MACMATCH | F_PROTOCOL | F_VLAN | F_PORT),
+ (F_MPSHITTYPE | F_MACMATCH | F_PROTOCOL | F_VNIC_ID | F_PORT),
+ (F_MPSHITTYPE | F_MACMATCH | F_TOS | F_VLAN | F_PORT),
+ (F_MPSHITTYPE | F_MACMATCH | F_TOS | F_VNIC_ID | F_PORT),
+ (F_MPSHITTYPE | F_ETHERTYPE | F_VLAN | F_PORT | F_FCOE),
+ (F_MPSHITTYPE | F_ETHERTYPE | F_VNIC_ID | F_PORT | F_FCOE),
+ (F_MPSHITTYPE | F_PROTOCOL | F_TOS | F_VLAN | F_PORT | F_FCOE),
+ (F_MPSHITTYPE | F_PROTOCOL | F_TOS | F_VNIC_ID | F_PORT | F_FCOE),
+ (F_MPSHITTYPE | F_VLAN | F_VNIC_ID | F_PORT),
+};
+
/**
* Allocate a chunk of memory. The allocated memory is cleared.
*/
@@ -687,6 +758,19 @@ static int check_devargs_handler(const char *key, const char *value, void *p)
}
}
+ if (!strncmp(key, CXGBE_DEVARG_PF_FILTER_MODE, strlen(key)) ||
+ !strncmp(key, CXGBE_DEVARG_PF_FILTER_MASK, strlen(key))) {
+ u32 *dst_val = (u32 *)p;
+ char *endptr = NULL;
+ u32 arg_val;
+
+ arg_val = strtoul(value, &endptr, 16);
+ if (errno || endptr == value)
+ return -EINVAL;
+
+ *dst_val = arg_val;
+ }
+
return 0;
}
@@ -732,6 +816,24 @@ static void cxgbe_get_devargs_int(struct adapter *adap, bool *dst,
*dst = devarg_value;
}
+static void cxgbe_get_devargs_u32(struct adapter *adap, u32 *dst,
+ const char *key, u32 default_value)
+{
+ struct rte_pci_device *pdev = adap->pdev;
+ u32 devarg_value = default_value;
+ int ret;
+
+ *dst = default_value;
+ if (!pdev)
+ return;
+
+ ret = cxgbe_get_devargs(pdev->device.devargs, key, &devarg_value);
+ if (ret)
+ return;
+
+ *dst = devarg_value;
+}
+
void cxgbe_process_devargs(struct adapter *adap)
{
cxgbe_get_devargs_int(adap, &adap->devargs.keep_ovlan,
@@ -740,6 +842,10 @@ void cxgbe_process_devargs(struct adapter *adap)
CXGBE_DEVARG_CMN_TX_MODE_LATENCY, false);
cxgbe_get_devargs_int(adap, &adap->devargs.force_link_up,
CXGBE_DEVARG_VF_FORCE_LINK_UP, false);
+ cxgbe_get_devargs_u32(adap, &adap->devargs.filtermode,
+ CXGBE_DEVARG_PF_FILTER_MODE, 0);
+ cxgbe_get_devargs_u32(adap, &adap->devargs.filtermask,
+ CXGBE_DEVARG_PF_FILTER_MASK, 0);
}
static void configure_vlan_types(struct adapter *adapter)
@@ -776,6 +882,134 @@ static void configure_vlan_types(struct adapter *adapter)
V_RM_OVLAN(!adapter->devargs.keep_ovlan));
}
+static int cxgbe_get_filter_vnic_mode_from_devargs(u32 val)
+{
+ u32 vnic_mode;
+
+ vnic_mode = val & (CXGBE_DEVARGS_FILTER_MODE_PF_VF |
+ CXGBE_DEVARGS_FILTER_MODE_VLAN_OUTER);
+ if (vnic_mode) {
+ switch (vnic_mode) {
+ case CXGBE_DEVARGS_FILTER_MODE_VLAN_OUTER:
+ return CXGBE_FILTER_VNIC_MODE_OVLAN;
+ case CXGBE_DEVARGS_FILTER_MODE_PF_VF:
+ return CXGBE_FILTER_VNIC_MODE_PFVF;
+ default:
+ return -EINVAL;
+ }
+ }
+
+ return CXGBE_FILTER_VNIC_MODE_NONE;
+}
+
+static int cxgbe_get_filter_mode_from_devargs(u32 val, bool closest_match)
+{
+ int vnic_mode, fmode = 0;
+ bool found = false;
+ u8 i;
+
+ if (val >= CXGBE_DEVARGS_FILTER_MODE_MAX) {
+ pr_err("Unsupported flags set in filter mode. Must be < 0x%x\n",
+ CXGBE_DEVARGS_FILTER_MODE_MAX);
+ return -ERANGE;
+ }
+
+ vnic_mode = cxgbe_get_filter_vnic_mode_from_devargs(val);
+ if (vnic_mode < 0) {
+ pr_err("Unsupported Vnic-mode, more than 1 Vnic-mode selected\n");
+ return vnic_mode;
+ }
+
+ if (vnic_mode)
+ fmode |= F_VNIC_ID;
+ if (val & CXGBE_DEVARGS_FILTER_MODE_PHYSICAL_PORT)
+ fmode |= F_PORT;
+ if (val & CXGBE_DEVARGS_FILTER_MODE_ETHERNET_DSTMAC)
+ fmode |= F_MACMATCH;
+ if (val & CXGBE_DEVARGS_FILTER_MODE_ETHERNET_ETHTYPE)
+ fmode |= F_ETHERTYPE;
+ if (val & CXGBE_DEVARGS_FILTER_MODE_VLAN_INNER)
+ fmode |= F_VLAN;
+ if (val & CXGBE_DEVARGS_FILTER_MODE_IP_TOS)
+ fmode |= F_TOS;
+ if (val & CXGBE_DEVARGS_FILTER_MODE_IP_PROTOCOL)
+ fmode |= F_PROTOCOL;
+
+ for (i = 0; i < ARRAY_SIZE(cxgbe_filter_mode_features); i++) {
+ if ((cxgbe_filter_mode_features[i] & fmode) == fmode) {
+ found = true;
+ break;
+ }
+ }
+
+ if (!found)
+ return -EINVAL;
+
+ return closest_match ? cxgbe_filter_mode_features[i] : fmode;
+}
+
+static int configure_filter_mode_mask(struct adapter *adap)
+{
+ u32 params[2], val[2], nparams = 0;
+ int ret;
+
+ if (!adap->devargs.filtermode && !adap->devargs.filtermask)
+ return 0;
+
+ if (!adap->devargs.filtermode || !adap->devargs.filtermask) {
+ pr_err("Unsupported, Provide both filtermode and filtermask devargs\n");
+ return -EINVAL;
+ }
+
+ if (adap->devargs.filtermask & ~adap->devargs.filtermode) {
+ pr_err("Unsupported, filtermask (0x%x) must be subset of filtermode (0x%x)\n",
+ adap->devargs.filtermask, adap->devargs.filtermode);
+
+ return -EINVAL;
+ }
+
+ params[0] = CXGBE_FW_PARAM_DEV(FILTER) |
+ V_FW_PARAMS_PARAM_Y(FW_PARAM_DEV_FILTER_MODE_MASK);
+
+ ret = cxgbe_get_filter_mode_from_devargs(adap->devargs.filtermode,
+ true);
+ if (ret < 0) {
+ pr_err("Unsupported filtermode devargs combination:0x%x\n",
+ adap->devargs.filtermode);
+ return ret;
+ }
+
+ val[0] = V_FW_PARAMS_PARAM_FILTER_MODE(ret);
+
+ ret = cxgbe_get_filter_mode_from_devargs(adap->devargs.filtermask,
+ false);
+ if (ret < 0) {
+ pr_err("Unsupported filtermask devargs combination:0x%x\n",
+ adap->devargs.filtermask);
+ return ret;
+ }
+
+ val[0] |= V_FW_PARAMS_PARAM_FILTER_MASK(ret);
+
+ nparams++;
+
+ ret = cxgbe_get_filter_vnic_mode_from_devargs(adap->devargs.filtermode);
+ if (ret < 0)
+ return ret;
+
+ if (ret) {
+ params[1] = CXGBE_FW_PARAM_DEV(FILTER) |
+ V_FW_PARAMS_PARAM_Y(FW_PARAM_DEV_FILTER_VNIC_MODE);
+
+ val[1] = ret - 1;
+
+ nparams++;
+ }
+
+ return t4_set_params(adap, adap->mbox, adap->pf, 0, nparams,
+ params, val);
+}
+
static void configure_pcie_ext_tag(struct adapter *adapter)
{
u16 v;
@@ -1300,6 +1534,9 @@ static int adap_init0(struct adapter *adap)
adap->params.b_wnd);
}
t4_init_sge_params(adap);
+ ret = configure_filter_mode_mask(adap);
+ if (ret < 0)
+ goto bye;
t4_init_tp_params(adap);
configure_pcie_ext_tag(adap);
configure_vlan_types(adap);
--
2.25.0
^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [dpdk-dev] [PATCH 0/9] net/cxgbe: updates for rte_flow support
2020-03-11 9:05 [dpdk-dev] [PATCH 0/9] net/cxgbe: updates for rte_flow support Rahul Lakkireddy
` (8 preceding siblings ...)
2020-03-11 9:05 ` [dpdk-dev] [PATCH 9/9] net/cxgbe: add devargs to control filtermode and filtermask values Rahul Lakkireddy
@ 2020-03-11 13:11 ` Ferruh Yigit
2020-03-18 12:09 ` Thomas Monjalon
10 siblings, 0 replies; 15+ messages in thread
From: Ferruh Yigit @ 2020-03-11 13:11 UTC (permalink / raw)
To: Rahul Lakkireddy, dev; +Cc: nirranjan, kaara.satwik
On 3/11/2020 9:05 AM, Rahul Lakkireddy wrote:
> From: Karra Satwik <kaara.satwik@chelsio.com>
>
> This series of patches contain rte_flow support for matching
> Q-in-Q VLAN, IP TOS, PF, and VF fields. Also, adds Destination
> MAC rewrite and Source MAC rewrite actions.
>
> Apart from the 4-tuple (IP src/dst addresses and TCP/UDP src/dst
> port addresses), there are only 40-bits available to match other
> fields in packet headers. Currently, the combination of packet
> header fields to match are configured via filterMode for LETCAM
> filters and filterMask for HASH filters in firmware config files
> (t5/t6-config.txt). Adapter needs to be reflashed with new firmware
> config file everytime the combinations need to be changed. To avoid
> this, a new firmware API is available to dynamically change the
> combination before completing full adapter initialization. So, 2
> new devargs filtermode and filtermask are added to dynamically
> select the combinations during runtime.
>
> Patch 1 adds rte_flow support for matching Q-in-Q VLAN.
>
> Patch 2 adds rte_flow support for matching IP TOS.
>
> Patch 3 adds rte_flow support for matching all packets on PF.
>
> Patch 4 adds rte_flow support for matching all packets on VF.
>
> Patch 5 adds rte_flow support for overwriting destination MAC.
>
> Patch 6 adds Source MAC Table (SMT) support.
>
> Patch 7 adds rte_flow support for Source MAC Rewrite.
>
> Patch 8 adds new firmware API for validating filter spec.
>
> Patch 9 adds devargs to control filtermode and filtermask
> combinations.
>
> Thanks,
> Satwik
>
> Karra Satwik (9):
> net/cxgbe: add rte_flow support for matching Q-in-Q VLAN
> net/cxgbe: add rte_flow support for matching IP TOS
> net/cxgbe: add rte_flow support for matching all packets on PF
> net/cxgbe: add rte_flow support for matching all packets on VF
> net/cxgbe: add rte_flow support for overwriting destination MAC
> net/cxgbe: add Source MAC Table (SMT) support
> net/cxgbe: add rte_flow support for Source MAC Rewrite
> net/cxgbe: use firmware API for validating filter spec
> net/cxgbe: add devargs to control filtermode and filtermask values
Series applied to dpdk-next-net/master, thanks.
^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [dpdk-dev] [PATCH 0/9] net/cxgbe: updates for rte_flow support
2020-03-11 9:05 [dpdk-dev] [PATCH 0/9] net/cxgbe: updates for rte_flow support Rahul Lakkireddy
` (9 preceding siblings ...)
2020-03-11 13:11 ` [dpdk-dev] [PATCH 0/9] net/cxgbe: updates for rte_flow support Ferruh Yigit
@ 2020-03-18 12:09 ` Thomas Monjalon
2020-03-18 13:06 ` Rahul Lakkireddy
10 siblings, 1 reply; 15+ messages in thread
From: Thomas Monjalon @ 2020-03-18 12:09 UTC (permalink / raw)
To: kaara.satwik, Rahul Lakkireddy; +Cc: dev, nirranjan, ferruh.yigit, orika
11/03/2020 10:05, Rahul Lakkireddy:
> From: Karra Satwik <kaara.satwik@chelsio.com>
>
> This series of patches contain rte_flow support for matching
> Q-in-Q VLAN, IP TOS, PF, and VF fields. Also, adds Destination
> MAC rewrite and Source MAC rewrite actions.
>
> Apart from the 4-tuple (IP src/dst addresses and TCP/UDP src/dst
> port addresses), there are only 40-bits available to match other
> fields in packet headers. Currently, the combination of packet
> header fields to match are configured via filterMode for LETCAM
> filters and filterMask for HASH filters in firmware config files
> (t5/t6-config.txt). Adapter needs to be reflashed with new firmware
> config file everytime the combinations need to be changed. To avoid
> this, a new firmware API is available to dynamically change the
> combination before completing full adapter initialization. So, 2
> new devargs filtermode and filtermask are added to dynamically
> select the combinations during runtime.
Please, could you explain why you are using devargs for flow matching,
instead of using the common and generic rte_flow API?
^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [dpdk-dev] [PATCH 0/9] net/cxgbe: updates for rte_flow support
2020-03-18 12:09 ` Thomas Monjalon
@ 2020-03-18 13:06 ` Rahul Lakkireddy
2020-03-18 15:07 ` Thomas Monjalon
0 siblings, 1 reply; 15+ messages in thread
From: Rahul Lakkireddy @ 2020-03-18 13:06 UTC (permalink / raw)
To: Thomas Monjalon; +Cc: kaara.satwik, dev, nirranjan, ferruh.yigit, orika
Hi Thomas,
On Wednesday, March 03/18/20, 2020 at 13:09:47 +0100, Thomas Monjalon wrote:
> 11/03/2020 10:05, Rahul Lakkireddy:
> > From: Karra Satwik <kaara.satwik@chelsio.com>
> >
> > This series of patches contain rte_flow support for matching
> > Q-in-Q VLAN, IP TOS, PF, and VF fields. Also, adds Destination
> > MAC rewrite and Source MAC rewrite actions.
> >
> > Apart from the 4-tuple (IP src/dst addresses and TCP/UDP src/dst
> > port addresses), there are only 40-bits available to match other
> > fields in packet headers. Currently, the combination of packet
> > header fields to match are configured via filterMode for LETCAM
> > filters and filterMask for HASH filters in firmware config files
> > (t5/t6-config.txt). Adapter needs to be reflashed with new firmware
> > config file everytime the combinations need to be changed. To avoid
> > this, a new firmware API is available to dynamically change the
> > combination before completing full adapter initialization. So, 2
> > new devargs filtermode and filtermask are added to dynamically
> > select the combinations during runtime.
>
> Please, could you explain why you are using devargs for flow matching,
> instead of using the common and generic rte_flow API?
>
The devargs are being used to configure the TCAM in hardware on
what header fields need to be matched in packets by the TCAM. The
actual filter rules are still being inserted using rte_flow API.
Apart from the 4-tuple (src/dst IP, src/dst port addresses), there
are only 40-bits available for each filter rule to match other
header fields. Hardware supports matching ethertype (16-bit),
DST MAC (9-bit MPS index), Inner VLAN (16-bit), Outer VLAN (16-bit),
IP Protocol (8-bit), IP TOS (8-bit), Ingress Physical Port (3-bit),
and PFVF (17-bit) for now. It's not possible to write a filter rule
which wants to match all the above fields, which is far beyond 40-bits
available. So, the devargs are being used to control which of the
above fields that user wants to configure the TCAM to match. Note
that once the TCAM is configured, "all" the rules in the TCAM can
only match the selected fields in the combination. They can't match
any other header fields.
For example, let's say user wants to match ethertype (16-bit),
DST MAC (9-bit MPS index), and IP protocol (8-bit) in all filter
rules. Then, they would configure the TCAM with the {ethertype,
DST MAC, and IP protocol} combination. All rules that the user
wants to insert into TCAM can only have the above header fields,
alongside the default 4-tuple (src/dst IP, src/dst port addresses).
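A minimal sketch of a rule that stays within that example combination
(illustrative only; the MAC address, protocol, and queue index are made up):

#include <stdint.h>
#include <netinet/in.h>
#include <rte_byteorder.h>
#include <rte_ether.h>
#include <rte_flow.h>

static int example_validate_rule(uint16_t port_id, struct rte_flow_error *err)
{
	struct rte_flow_attr attr = { .ingress = 1 };
	struct rte_flow_item_eth eth_spec = {
		.dst.addr_bytes = { 0x00, 0x07, 0x43, 0x11, 0x22, 0x33 },
		.type = RTE_BE16(RTE_ETHER_TYPE_IPV4),
	};
	struct rte_flow_item_eth eth_mask = {
		.dst.addr_bytes = { 0xff, 0xff, 0xff, 0xff, 0xff, 0xff },
		.type = RTE_BE16(0xffff),
	};
	struct rte_flow_item_ipv4 ip_spec = { .hdr.next_proto_id = IPPROTO_UDP };
	struct rte_flow_item_ipv4 ip_mask = { .hdr.next_proto_id = 0xff };
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH,
		  .spec = &eth_spec, .mask = &eth_mask },
		{ .type = RTE_FLOW_ITEM_TYPE_IPV4,
		  .spec = &ip_spec, .mask = &ip_mask },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action_queue queue = { .index = 0 };
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};

	/* Matches only DST MAC, ethertype, and IP protocol, so it fits a
	 * TCAM configured for that combination.
	 */
	return rte_flow_validate(port_id, &attr, pattern, actions, err);
}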
There are 2 regions in hardware. One for matching wild-card filter
rules and the other for matching exact-match rules. The "filtermode"
devarg controls the 40-bit combination for wild-card filter rules and
the "filtermask" devarg controls the combination for exact-match rules.
Thanks,
Rahul
^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [dpdk-dev] [PATCH 0/9] net/cxgbe: updates for rte_flow support
2020-03-18 13:06 ` Rahul Lakkireddy
@ 2020-03-18 15:07 ` Thomas Monjalon
2020-03-19 7:58 ` Rahul Lakkireddy
0 siblings, 1 reply; 15+ messages in thread
From: Thomas Monjalon @ 2020-03-18 15:07 UTC (permalink / raw)
To: Rahul Lakkireddy; +Cc: kaara.satwik, dev, nirranjan, ferruh.yigit, orika
18/03/2020 14:06, Rahul Lakkireddy:
> Hi Thomas,
>
> On Wednesday, March 03/18/20, 2020 at 13:09:47 +0100, Thomas Monjalon wrote:
> > 11/03/2020 10:05, Rahul Lakkireddy:
> > > From: Karra Satwik <kaara.satwik@chelsio.com>
> > >
> > > This series of patches contain rte_flow support for matching
> > > Q-in-Q VLAN, IP TOS, PF, and VF fields. Also, adds Destination
> > > MAC rewrite and Source MAC rewrite actions.
> > >
> > > Apart from the 4-tuple (IP src/dst addresses and TCP/UDP src/dst
> > > port addresses), there are only 40-bits available to match other
> > > fields in packet headers. Currently, the combination of packet
> > > header fields to match are configured via filterMode for LETCAM
> > > filters and filterMask for HASH filters in firmware config files
> > > (t5/t6-config.txt). Adapter needs to be reflashed with new firmware
> > > config file everytime the combinations need to be changed. To avoid
> > > this, a new firmware API is available to dynamically change the
> > > combination before completing full adapter initialization. So, 2
> > > new devargs filtermode and filtermask are added to dynamically
> > > select the combinations during runtime.
> >
> > Please, could you explain why you are using devargs for flow matching,
> > instead of using the common and generic rte_flow API?
>
> The devargs are being used to configure the TCAM in hardware on
> what header fields need to be matched in packets by the TCAM. The
> actual filter rules are still being inserted using rte_flow API.
>
> Apart from the 4-tuple (src/dst IP, src/dst port addresses), there
> are only 40-bits available for each filter rule to match other
> header fields. Hardware supports matching ethertype (16-bit),
> DST MAC (9-bit MPS index), Inner VLAN (16-bit), Outer VLAN (16-bit),
> IP Protocol (8-bit), IP TOS (8-bit), Ingress Physical Port (3-bit),
> and PFVF (17-bit) for now. It's not possible to write a filter rule
> which wants to match all the above fields, which is far beyond 40-bits
> available. So, the devargs are being used to control which of the
> above fields that user wants to configure the TCAM to match. Note
> that once the TCAM is configured, "all" the rules in the TCAM can
> only match the selected fields in the combination. They can't match
> any other header fields.
In case a rule is not possible, are you rejecting it at the validation stage?
> For example, let's say user wants to match ethertype (16-bit),
> DST MAC (9-bit MPS index), and IP protocol (8-bit) in all filter
> rules. Then, they would configure the TCAM with the {ethertype,
> DST MAC, and IP protocol} combination. All rules that the user
> wants to insert into TCAM can only have the above header fields,
> alongside the default 4-tuple (src/dst IP, src/dst port addresses).
>
> There are 2 regions in hardware. One for matching wild-card filter
> rules and the other for matching exact-match rules. The "filtermode"
> devarg controls the 40-bit combination for wild-card filter rules and
> the "filtermask" devarg controls the combination for exact-match rules.
I see an issue with this approach. I will explain below.
An application is written to use some flow rules.
The application requirements are expressed by the app developer
through the API (rte_flow).
In your case, the user must be aware of what the application expects
and fill the right devargs, according to what the dev wrote.
Why bother the user with this constraint?
I understand the hardware must be prepared in advance.
I think this configuration must be done through API.
One workaround is to manage this HW limitation in a PMD-specific API.
A good solution would be to express this requirement with rte_flow.
One idea: can we use rte_flow_validate() to fill the requirements?
The PMD requirements are empty at the beginning and they are filled
with the first calls to rte_flow_validate().
Maybe we also need to express the capabilities/limitations.
Example: is there a maximum number of rules?
maximum number of protocols to match?
maximum number of bits to match?
I suppose it is not easy to implement. Comments?
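For illustration only, a hypothetical strawman (not an existing DPDK
structure) of what such capabilities could look like:

#include <stdint.h>

/* Hypothetical sketch only -- not an existing DPDK API. */
struct example_flow_match_caps {
	uint32_t max_rules;        /* maximum number of offloadable rules */
	uint32_t max_match_fields; /* maximum header fields matched per rule */
	uint32_t max_match_bits;   /* match width available beyond the 4-tuple */
};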
^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [dpdk-dev] [PATCH 0/9] net/cxgbe: updates for rte_flow support
2020-03-18 15:07 ` Thomas Monjalon
@ 2020-03-19 7:58 ` Rahul Lakkireddy
0 siblings, 0 replies; 15+ messages in thread
From: Rahul Lakkireddy @ 2020-03-19 7:58 UTC (permalink / raw)
To: Thomas Monjalon; +Cc: kaara.satwik, dev, nirranjan, ferruh.yigit, orika
On Wednesday, March 03/18/20, 2020 at 16:07:07 +0100, Thomas Monjalon wrote:
> 18/03/2020 14:06, Rahul Lakkireddy:
> > Hi Thomas,
> >
> > On Wednesday, March 03/18/20, 2020 at 13:09:47 +0100, Thomas Monjalon wrote:
> > > 11/03/2020 10:05, Rahul Lakkireddy:
> > > > From: Karra Satwik <kaara.satwik@chelsio.com>
> > > >
> > > > This series of patches contain rte_flow support for matching
> > > > Q-in-Q VLAN, IP TOS, PF, and VF fields. Also, adds Destination
> > > > MAC rewrite and Source MAC rewrite actions.
> > > >
> > > > Apart from the 4-tuple (IP src/dst addresses and TCP/UDP src/dst
> > > > port addresses), there are only 40-bits available to match other
> > > > fields in packet headers. Currently, the combination of packet
> > > > header fields to match are configured via filterMode for LETCAM
> > > > filters and filterMask for HASH filters in firmware config files
> > > > (t5/t6-config.txt). Adapter needs to be reflashed with new firmware
> > > > config file everytime the combinations need to be changed. To avoid
> > > > this, a new firmware API is available to dynamically change the
> > > > combination before completing full adapter initialization. So, 2
> > > > new devargs filtermode and filtermask are added to dynamically
> > > > select the combinations during runtime.
> > >
> > > Please, could you explain why you are using devargs for flow matching,
> > > instead of using the common and generic rte_flow API?
> >
> > The devargs are being used to configure the TCAM in hardware on
> > what header fields need to be matched in packets by the TCAM. The
> > actual filter rules are still being inserted using rte_flow API.
> >
> > Apart from the 4-tuple (src/dst IP, src/dst port addresses), there
> > are only 40-bits available for each filter rule to match other
> > header fields. Hardware supports matching ethertype (16-bit),
> > DST MAC (9-bit MPS index), Inner VLAN (16-bit), Outer VLAN (16-bit),
> > IP Protocol (8-bit), IP TOS (8-bit), Ingress Physical Port (3-bit),
> > and PFVF (17-bit) for now. It's not possible to write a filter rule
> > that matches all of the above fields at once, since that is far
> > beyond the 40 bits available. So, the devargs are used to control
> > which of the above fields the user wants the TCAM to match. Note
> > that once the TCAM is configured, "all" the rules in the TCAM can
> > only match the selected fields in the combination. They can't match
> > any other header fields.
>
> In case a rule is not possible, are you rejecting it at the validation stage?
>
Yes, the rule is rejected if it can't be offloaded. Any rule that
contains unsupported fields, or fields the TCAM hasn't been configured
to match, is rejected. This validation is done in the driver's
rte_flow_validate() callback.
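For reference, a minimal application-side sketch of that flow
(illustrative only; the ethertype match and the queue action are
arbitrary choices, not cxgbe-specific requirements):

  #include <rte_byteorder.h>
  #include <rte_flow.h>

  /* Illustrative only -- not cxgbe code.  Validate a rule before
   * creating it, so a rule whose fields are not part of the combination
   * the TCAM was configured for is caught at validation time. */
  static struct rte_flow *
  create_rule_if_supported(uint16_t port_id, uint16_t rx_queue)
  {
          struct rte_flow_attr attr = { .ingress = 1 };
          struct rte_flow_item_eth eth_spec = { .type = RTE_BE16(0x0800) };
          struct rte_flow_item_eth eth_mask = { .type = RTE_BE16(0xffff) };
          struct rte_flow_item pattern[] = {
                  { .type = RTE_FLOW_ITEM_TYPE_ETH,
                    .spec = &eth_spec, .mask = &eth_mask },
                  { .type = RTE_FLOW_ITEM_TYPE_END },
          };
          struct rte_flow_action_queue queue = { .index = rx_queue };
          struct rte_flow_action actions[] = {
                  { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
                  { .type = RTE_FLOW_ACTION_TYPE_END },
          };
          struct rte_flow_error err;

          /* Rejected here if ethertype is not in the configured combination. */
          if (rte_flow_validate(port_id, &attr, pattern, actions, &err) != 0)
                  return NULL;

          return rte_flow_create(port_id, &attr, pattern, actions, &err);
  }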
> > For example, let's say user wants to match ethertype (16-bit),
> > DST MAC (9-bit MPS index), and IP protocol (8-bit) in all filter
> > rules. Then, they would configure the TCAM with the {ethertype,
> > DST MAC, and IP protocol} combination. All rules that the user
> > wants to insert into TCAM can only have the above header fields,
> > alongside the default 4-tuple (src/dst IP, src/dst port addresses).
> >
> > There are 2 regions in hardware. One for matching wild-card filter
> > rules and the other for matching exact-match rules. The "filtermode"
> > devarg controls the 40-bit combination for wild-card filter rules and
> > the "filtermask" devarg controls the combination for exact-match rules.
>
> I see an issue with this approach. I will explain below.
>
> An application is written to use some flow rules.
> The application requirements are expressed by the app developer
> through the API (rte_flow).
> In your case, the user must be aware of what the application expects
> and fill the right devargs, according to what the dev wrote.
> Why bother the user with this constraint?
>
> I understand the hardware must be prepared in advance.
> I think this configuration must be done through API.
> One workaround is to manage this HW limitation in a PMD-specific API.
> A good solution would be to express this requirement with rte_flow.
>
The TCAM must be configured with the fields it is expected to match
before hardware initialization completes, i.e. during PCIe probe
itself. Devargs are the only mechanism I know of that can provide
this kind of configuration information at such an early stage. The
hardware resources are then allocated according to the fields the
TCAM has been configured to match.
Note that this configuration is not on a per-rule basis. All rules
may only contain the fields that the TCAM has been configured to
match; any rule that doesn't adhere to this is rejected.
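Since every rule shares the configured combination, the combination
itself must fit the 40-bit budget described earlier in this thread.
To make that concrete, a rough sketch of the check it implies (field
widths as listed above; the names and the code are illustrative only,
neither the PMD's actual logic nor the encoding of the filtermode/
filtermask devarg values):

  /* Illustrative only: field widths as listed earlier in this thread. */
  enum match_field {
          MATCH_ETHERTYPE, /* 16 bits             */
          MATCH_DSTMAC,    /*  9 bits (MPS index) */
          MATCH_IVLAN,     /* 16 bits             */
          MATCH_OVLAN,     /* 16 bits             */
          MATCH_IPPROTO,   /*  8 bits             */
          MATCH_IPTOS,     /*  8 bits             */
          MATCH_IPORT,     /*  3 bits             */
          MATCH_PFVF,      /* 17 bits             */
          MATCH_MAX
  };

  static const unsigned int field_width[MATCH_MAX] = {
          16, 9, 16, 16, 8, 8, 3, 17
  };

  /* Returns 1 if the selected combination fits the 40-bit budget.
   * e.g. {ethertype, dstmac, ipproto} = 16 + 9 + 8 = 33 bits -> fits;
   * adding ovlan (16 more bits) would exceed 40 bits -> rejected. */
  static int combination_fits(const enum match_field *sel, unsigned int n)
  {
          unsigned int i, total = 0;

          for (i = 0; i < n; i++)
                  total += field_width[sel[i]];
          return total <= 40;
  }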
> One idea: can we use rte_flow_validate() to fill the requirements?
> The PMD requirements are empty at the beginning and they are filled
> with the first calls to rte_flow_validate().
>
> Maybe we also need to express the capabilities/limitations.
> Example: is there a maximum number of rules?
> maximum number of protocols to match?
> maximum number of bits to match?
>
> I suppose it is not easy to implement. Comments?
>
Perhaps the hardware's rte_flow offload capabilities could be exposed
similarly to how the Tx/Rx offloads are exposed today, via the
rte_eth_dev_info structure. Alternatively, we could add a separate
rte_flow_dev_info structure and let PMDs fill in their rte_flow-related
capabilities. Maybe even add an rte_flow_dev_infos_get() API for this
purpose?
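Purely as a strawman of that second option (nothing like this exists
in rte_flow today), such a structure might look roughly like:

  #include <stdint.h>

  /* Strawman only -- no such structure or API exists in rte_flow today. */
  struct rte_flow_dev_info {
          uint32_t max_rules;              /* 0 = unknown/unlimited     */
          uint32_t max_pattern_items;      /* longest supported pattern */
          uint32_t max_match_bits;         /* e.g. 40 in the case above */
          uint64_t supported_item_types;   /* bitmask over item types   */
          uint64_t supported_action_types; /* bitmask over action types */
  };

  int rte_flow_dev_infos_get(uint16_t port_id, struct rte_flow_dev_info *info);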
Thanks,
Rahul
^ permalink raw reply [flat|nested] 15+ messages in thread
end of thread, other threads:[~2020-03-19 8:09 UTC | newest]
Thread overview: 15+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-03-11 9:05 [dpdk-dev] [PATCH 0/9] net/cxgbe: updates for rte_flow support Rahul Lakkireddy
2020-03-11 9:05 ` [dpdk-dev] [PATCH 1/9] net/cxgbe: add rte_flow support for matching Q-in-Q VLAN Rahul Lakkireddy
2020-03-11 9:05 ` [dpdk-dev] [PATCH 2/9] net/cxgbe: add rte_flow support for matching IP TOS Rahul Lakkireddy
2020-03-11 9:05 ` [dpdk-dev] [PATCH 3/9] net/cxgbe: add rte_flow support for matching all packets on PF Rahul Lakkireddy
2020-03-11 9:05 ` [dpdk-dev] [PATCH 4/9] net/cxgbe: add rte_flow support for matching all packets on VF Rahul Lakkireddy
2020-03-11 9:05 ` [dpdk-dev] [PATCH 5/9] net/cxgbe: add rte_flow support for overwriting destination MAC Rahul Lakkireddy
2020-03-11 9:05 ` [dpdk-dev] [PATCH 6/9] net/cxgbe: add Source MAC Table (SMT) support Rahul Lakkireddy
2020-03-11 9:05 ` [dpdk-dev] [PATCH 7/9] net/cxgbe: add rte_flow support for Source MAC Rewrite Rahul Lakkireddy
2020-03-11 9:05 ` [dpdk-dev] [PATCH 8/9] net/cxgbe: use firmware API for validating filter spec Rahul Lakkireddy
2020-03-11 9:05 ` [dpdk-dev] [PATCH 9/9] net/cxgbe: add devargs to control filtermode and filtermask values Rahul Lakkireddy
2020-03-11 13:11 ` [dpdk-dev] [PATCH 0/9] net/cxgbe: updates for rte_flow support Ferruh Yigit
2020-03-18 12:09 ` Thomas Monjalon
2020-03-18 13:06 ` Rahul Lakkireddy
2020-03-18 15:07 ` Thomas Monjalon
2020-03-19 7:58 ` Rahul Lakkireddy